--- abstract: | Following the work of Burger, Iozzi and Wienhard for representations, in this paper we introduce the notion of maximal measurable cocycles of a surface group. More precisely, let $\mathbf{G}$ be a semisimple algebraic $\operatorname{\mathbb{R}}$-group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is of Hermitian type. If $\Gamma \leq L$ is a torsion-free lattice of a finite connected covering of $\operatorname{\textup{PU}}(1,1)$, given a standard Borel probability $\Gamma$-space $(\Omega,\mu_\Omega)$, we introduce the notion of Toledo invariant for a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with an essentially unique boundary map. The Toledo invariant is a multiplicative constant, hence it remains unchanged along $G$-cohomology classes and its absolute value is bounded by the rank of $G$. This allows us to define maximal measurable cocycles. We show that the algebraic hull $\mathbf{H}$ of a maximal cocycle $\sigma$ is reductive, the centralizer of $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$ is compact, $H$ is of tube type and $\sigma$ is cohomologous to a cocycle stabilizing a unique maximal tube-type subdomain. This result is analogous to the one obtained for representations. We conclude with some remarks about boundary maps of maximal Zariski dense cocycles. address: 'Department of Mathematics, University of Bologna, Piazza di Porta San Donato 5, 40126 Bologna, Italy' author: - 'A. Savini' bibliography: - 'biblionote.bib' date: '. ©[ A. Savini 2020]{}' title: Algebraic hull of maximal measurable cocycles of surface groups into Hermitian Lie groups --- [^1] Introduction ============ Given a torsion-free lattice $\Gamma \leq G$ in a semisimple Lie group $G$, any representation $\rho:\Gamma \rightarrow H$ into a locally compact group $H$ induces a well-defined map at the level of continuous bounded cohomology groups. 
Hence, having fixed a preferred bounded class in the cohomology of $H$, one can pull it back and compare the resulting class with the fundamental class determined by $\Gamma$ via the Kronecker pairing. This is a standard way to obtain *numerical invariants* for representations, whose importance has become evident in the study of rigidity and superrigidity properties. Indeed, a numerical invariant has bounded absolute value and the maximum is attained if and only if the representation can be extended to a representation $G \rightarrow H$ of the ambient group. Several examples of these phenomena are given by the work of Bucher, Burger and Iozzi [@iozzi02:articolo; @bucher2:articolo; @BBIborel] in the case of representations of real hyperbolic lattices, by Burger and Iozzi [@BIcartan] and by Duchesne and Pozzetti [@Pozzetti; @duchesne:pozzetti] for complex hyperbolic lattices, and by the work of Burger, Iozzi and Wienhard [@BIW07; @BIW09; @BIW1] when the target group is of Hermitian type. In the latter case, of remarkable interest is the analysis of the representation space $\textup{Hom}(\Gamma,G)$ when $G$ is a group of Hermitian type and $\Gamma$ is a lattice in a finite connected covering of $\operatorname{\textup{PU}}(1,1)$, that is a hyperbolic surface group. Burger, Iozzi and Wienhard [@BIW1] exploited the existence of a natural Kähler structure on the Hermitian symmetric space associated to $G$ in order to define the notion of *Toledo invariant* of a representation $\rho:\Gamma \rightarrow G$. This invariant has bounded absolute value and its maximality has important consequences on the Zariski closure $\mathbf{H}=\overline{\rho(\Gamma)}^Z$ of the image of the representation. 
Indeed the authors show that, in the case of maximality, $\mathbf{H}$ is reductive, $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$ has compact centralizer and is of tube type, and the representation $\rho$ is injective with discrete image and preserves a unique maximal tube-type subdomain [@BIW1 Theorem 5]. A domain is of *tube-type* if it can be written in the form $V+i\Omega$, where $V$ is a real vector space and $\Omega \subset V$ is an open convex cone. Maximal tube-type subdomains in a Hermitian symmetric space $\operatorname{\mathcal{X}}$ generalize the notion of complex geodesic in $\operatorname{\mathbb{H}}^n_{\operatorname{\mathbb{C}}}$ and they are all $G$-conjugate. Partial results in the direction of [@BIW1 Theorem 5] were obtained by several authors. For instance, when $G=\operatorname{\textup{PU}}(n,1)$ with $n \geq 2$, Toledo [@Toledo89] proved that maximal representations must preserve a complex geodesic. It is worth mentioning also the papers by Hernández [@Her91], by Koziarz and Maubon [@koziarz:maubon] and by Bradlow, García-Prada and Gothen [@garcia:geom; @garcia:dedicata]. In the latter case those results were obtained using different techniques based on the notion of Higgs bundle. It is worth noticing that in the particular case of split real groups and surfaces without boundary, the set of maximal representations contains the Hitchin component [@hitchin]. The Hitchin component has been systematically studied by several mathematicians. For instance Labourie [@labourie] focused his attention on the Anosov property, whereas Fock and Goncharov [@Fock:adv; @fock:hautes] related the Hitchin component with the notion of Lusztig's positivity. A crucial point in the proof of [@BIW1 Theorem 5] is that maximal representations are *tight*, that is the seminorm of the pullback class is equal to the norm of the bounded Kähler class. 
The tightness property has an analytic counterpart in terms of maps between symmetric spaces, and Burger, Iozzi and Wienhard [@BIW09] give a complete characterization of tight subgroups of a Lie group of Hermitian type. Recently the author [@savini3:articolo], together with Moraschini [@moraschini:savini; @moraschini:savini:2] and Sarti [@savini:sarti], has applied bounded cohomology techniques to the study of measurable cocycles with an essentially unique boundary map. The existence of a boundary map allows one to define a pullback in bounded cohomology as in [@burger:articolo] and hence to develop a theory of numerical invariants, called *multiplicative constants*, also in the context of measurable cocycles. The main goal of this paper is the study of measurable cocycles of surface groups. Let $\Gamma \leq L$ be a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$. Consider a standard Borel probability $\Gamma$-space $(\Omega,\mu_\Omega)$ and let $\mathbf{G}$ be a semisimple real algebraic group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is of Hermitian type. If a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ admits an essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ with values in the Shilov boundary, then we can apply the theoretical background developed in [@moraschini:savini; @moraschini:savini:2] to define the *Toledo invariant of $\sigma$*. In analogy with what happens for representations, the Toledo invariant is constant along $G$-cohomology classes and has absolute value bounded by $\operatorname{rk}(\operatorname{\mathcal{X}})$, the rank of the symmetric space $\operatorname{\mathcal{X}}$ associated to $G$. Thus it makes sense to speak about *maximal measurable cocycles*. These will be particular examples of *tight cocycles* (see Definition \[def:tight:cocycle\]). 
Maximality allows us to give a characterization of the *algebraic hull* of a measurable cocycle, as stated in the following \[teor:maximal:alghull\] Let $\Gamma \leq L$ be a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$ and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Let $\mathbf{G}$ be a semisimple algebraic $\operatorname{\mathbb{R}}$-group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is a Lie group of Hermitian type. Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map. Denote by $\mathbf{H}$ the algebraic hull of $\sigma$ in $\mathbf{G}$ and set $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$. If $\sigma$ is maximal, then 1. the algebraic hull $\mathbf{H}$ is reductive; 2. the centralizer $Z_G(H)$ is compact; 3. the symmetric space $\operatorname{\mathcal{Y}}$ associated to $H$ is Hermitian of tube-type; 4. it holds $\mathbf{H}(\operatorname{\mathbb{R}}) \subset \textup{Isom}(\operatorname{\mathcal{T}})$ for some maximal tube-type subdomain $\operatorname{\mathcal{T}}$ of $\operatorname{\mathcal{X}}$. Equivalently, there exists a cocycle cohomologous to $\sigma$ which preserves $\operatorname{\mathcal{T}}$. The above theorem should be interpreted as a suitable adaptation of [@BIW1 Theorem 5] to the context of maximal measurable cocycles. The first two properties are immediate consequences of the tightness of maximal cocycles, as shown in Theorem \[teor:alg:hull:tight\]. The tube-type condition is more involved and is proved in Theorem \[teor:symmetric:tube\]. We conclude with some remarks about boundary maps of maximal Zariski dense cocycles. 
For representations, the relation between maximality and boundary maps preserving positivity of triples was studied by Guichard [@Guichard], Labourie [@labourie] and Fock and Goncharov [@fock:hautes]. Here we attempt to extend [@BIW1 Theorem 5.2] to the context of measurable cocycles. Given a maximal Zariski dense cocycle, we can construct a boundary map which has left-continuous (respectively right-continuous) slices. Moreover each slice preserves *transversality* and is *monotone*, as proved in Theorem \[teor:boundary:map\]. Unfortunately, to get the statement, we need to make an additional assumption on the measurable map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$. More precisely, we need to assume that the essential image of almost every slice intersects nicely all closed algebraic subsets of $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ (Assumption \[ass:zariski:zero:measure\]). This assumption is clearly verified by cocycles cohomologous to maximal Zariski dense representations, but we do not know under which more general conditions on either $\sigma$ or $\phi$ it holds, and it would be interesting to understand this. The proof of Theorem \[teor:boundary:map\] follows the line of [@BIWL Section 8] and of [@BIW1 Theorem 5.2]. Plan of the paper {#plan-of-the-paper .unnumbered} ----------------- In Section \[sec:preliminary\] we recall the preliminary definitions and results that we need in the paper. In Section \[sec:measurable:cocycles\] we recall the notion of measurable cocycle and of the cohomology class determined by a cocycle. Of particular importance for our purposes will be the definition of algebraic hull. Then we conclude the section with some elements of boundary theory. Section \[sec:burger:monod\] is devoted to continuous and continuous bounded cohomology. 
We recall the functorial approach of Burger and Monod and the definition of the pullback induced by a boundary map. The last part is devoted to Hermitian symmetric spaces (Section \[sec:hermitian:groups\]). The main theorem of the paper is proved in Section \[sec:maximal:cocycles\]. We first introduce the notion of Toledo invariant of a measurable cocycle in Section \[sec:toledo:invariant\]. In Section \[sec:maximal:cocycle:thm\] we give the definition of maximal cocycle. Maximal cocycles are tight by Proposition \[prop:maximal:tight\] and this result, together with Theorem \[teor:symmetric:tube\], allows us to prove Theorem \[teor:maximal:alghull\]. We conclude with Section \[sec:boundary:map\], where we prove Theorem \[teor:boundary:map\]. Preliminary definitions and results {#sec:preliminary} =================================== Measurable cocycles {#sec:measurable:cocycles} ------------------- This section is devoted to a quick review of the theory of measurable cocycles. We are going to recall the definitions of both measurable cocycle and cohomology class. Then we will introduce the notion of algebraic hull and we will conclude the section with some elements of boundary theory. For a more detailed discussion of those topics we refer the reader to the work of both Furstenberg [@furst:articolo73; @furst:articolo] and Zimmer [@zimmer:preprint; @zimmer:annals; @zimmer:libro]. Consider two locally compact second countable groups $G,H$ endowed with their Haar measurable structure. Given a standard Borel measure space $(\Omega,\mu_\Omega)$, we say that it is a *$G$-space* if $G$ acts on $\Omega$ by measure-preserving transformations. Additionally, if $\mu_\Omega$ is a probability measure, we are going to call $(\Omega,\mu_\Omega)$ a *standard Borel probability $G$-space*. Given two measure spaces $X$ and $Y$, we are going to denote by $\textup{Meas}(X,Y)$ the space of measurable functions from $X$ to $Y$, endowed with the topology of convergence in measure. 
\[def:measurable:cocycle\] Let $G,H$ be two locally compact second countable groups and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $G$-space. A measurable function $\sigma:G \times \Omega \rightarrow H$ is a *measurable cocycle* if it holds $$\label{eq:measurable:cocycle} \sigma(g_1 g_2,s)=\sigma(g_1,g_2 s)\sigma(g_2,s) \ ,$$ for almost every $g_1,g_2 \in G$ and almost every $s \in \Omega$. Measurable cocycles are quite ubiquitous in mathematics and Equation \[eq:measurable:cocycle\] can be suitably interpreted as a naive generalization to the measurable context of the chain rule for differentiation of smooth functions. By writing a measurable cocycle $\sigma$ as an element $\sigma \in \textup{Meas}(G,\textup{Meas}(\Omega,H))$, Equation \[eq:measurable:cocycle\] boils down to a cocycle condition: indeed $\sigma$ may be interpreted as a Borel $1$-cocycle in the sense of Eilenberg-MacLane (see [@feldman:moore; @zimmer:preprint] for more details about this interpretation). Following this line, one could naturally ask when two different cocycles are cohomologous. \[def:cohomologous:cocycles\] Let $\sigma:G \times \Omega \rightarrow H$ be a measurable cocycle and let $f:\Omega \rightarrow H$ be a measurable function. The *$f$-twisted cocycle of $\sigma$* is defined as $$\sigma^f:G \times \Omega \rightarrow H, \ \ \sigma^f(g,s):=f(gs)^{-1}\sigma(g,s)f(s) \ .$$ We say that two cocycles $\sigma_1,\sigma_2:G \times \Omega \rightarrow H$ are *cohomologous* if there exists a measurable function $f:\Omega \rightarrow H$ such that $$\sigma_2^f=\sigma_1 \ .$$ A typical way to construct cocycles is to start from representations. Indeed, given a continuous representation $\rho:G \rightarrow H$, one can verify that the measurable function $$\sigma_\rho:G \times \Omega \rightarrow H \ , \ \ \sigma_\rho(g,s):=\rho(g) \ ,$$ is a measurable cocycle as a consequence of the morphism condition. 
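Both the cocycle identity for $\sigma_\rho$ and the fact that twisting preserves it follow from a direct computation, which we spell out since it is used implicitly throughout. For $\sigma_\rho$ one has $$\sigma_\rho(g_1g_2,s)=\rho(g_1g_2)=\rho(g_1)\rho(g_2)=\sigma_\rho(g_1,g_2s)\sigma_\rho(g_2,s) \ ,$$ and for the $f$-twisted cocycle $$\sigma^f(g_1g_2,s)=f(g_1g_2s)^{-1}\sigma(g_1g_2,s)f(s)=\left(f(g_1(g_2s))^{-1}\sigma(g_1,g_2s)f(g_2s)\right)\left(f(g_2s)^{-1}\sigma(g_2,s)f(s)\right)=\sigma^f(g_1,g_2s)\sigma^f(g_2,s) \ ,$$ so $\sigma^f$ is again a measurable cocycle and the relation of being cohomologous is well posed. 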
This allows us to embed representation theory into the wider world of measurable cocycles. Additionally, it offers the possibility of interpreting the notion of cohomologous cocycles as a generalization of that of conjugate representations. Given a representation $\rho:G \rightarrow H$, if the image is not closed, it is quite natural to consider its closure, which is still a subgroup of $H$. Unfortunately the image of a cocycle has no structure a priori. Nevertheless, if $H$ corresponds to the real points of a real algebraic group, then there is a notion which is in some sense similar to taking the closure of the image of a representation. Suppose that $\mathbf{H}$ is a real algebraic group. Let $\sigma:G \times \Omega \rightarrow \mathbf{H}(\operatorname{\mathbb{R}})$ be a measurable cocycle. The *algebraic hull associated to $\sigma$* is (the conjugacy class of) the smallest algebraic subgroup $\mathbf{L}$ of $\mathbf{H}$ such that $\mathbf{L}(\operatorname{\mathbb{R}})$ contains the image of a cocycle cohomologous to $\sigma$. As proved in [@zimmer:libro Proposition 9.2], this notion is well-defined by the descending chain condition on algebraic subgroups and it depends only on the cohomology class of the cocycle. We conclude this brief discussion about measurable cocycles by introducing some elements of boundary theory. In order to do this, we are going to assume that $G$ is a semisimple Lie group of non-compact type. Let $Q$ be any parabolic subgroup of $G$ and let $Y$ be a measurable $H$-space. 
It is easy to check that, if $\phi:G/Q \times \Omega \rightarrow Y$ is a boundary map for $\sigma$, then $\phi^f:G/Q \times \Omega \rightarrow Y, \ \phi^f(\xi,s):=f(s)^{-1}\phi(\xi,s)$ is a boundary map for $\sigma^f$ for any measurable function $f:\Omega \rightarrow H$. The existence and the uniqueness of a boundary map associated to a cocycle $\sigma$ rely on the dynamical properties of $\sigma$. For a more detailed discussion we refer the reader to [@furst:articolo]. Boundary maps for measurable cocycles will be crucial to define a pullback map in bounded cohomology, imitating the work done by Burger and Iozzi [@burger:articolo] in the case of representations. Continuous bounded cohomology and functorial approach {#sec:burger:monod} ----------------------------------------------------- Given a locally compact group $G$, we are going to recall the notion of continuous and continuous bounded cohomology groups of $G$. A remarkable aspect of continuous bounded cohomology is that it can be computed using any strong resolution by relatively injective modules. We are going to give a few details about this functorial approach and we will conclude the section by introducing the notion of pullback along a boundary map associated to a measurable cocycle. For more details about continuous bounded cohomology and its functorial approach we refer to the work of Burger and Monod [@burger2:articolo; @monod:libro], whereas we refer to the papers of the author together with Moraschini [@moraschini:savini; @moraschini:savini:2] for more details about the pullback along boundary maps. Consider a *Banach $G$-module* $E$, that is a Banach space $E$ with an isometric action $\pi:G \rightarrow \textup{Isom}(E)$. We are going to assume that $E$ is the dual of some separable Banach space, so that it makes sense to speak about the weak-${}^\ast$ topology on $E$. 
We consider the set $$\begin{aligned} \operatorname{\textup{C}}^\bullet_{cb}(G;E):=\{ f : G^{\bullet+1} \rightarrow E \ | & \ f \ \textup{is continuous and} \\ &\lVert f \rVert_\infty:=\sup_{g_0,\ldots,g_{\bullet}}\lVert f(g_0,\ldots,g_{\bullet}) \rVert_E < \infty \} \ , \end{aligned}$$ where $\lVert \ \cdot \ \rVert_E$ denotes the norm on the space $E$. Each $\operatorname{\textup{C}}^\bullet_{cb}(G;E)$ is a normed space via the supremum norm and it can be endowed with an isometric action of $G$ defined by $$(gf)(g_0,\ldots,g_\bullet):=\pi(g)f(g^{-1}g_0,\ldots,g^{-1}g_\bullet) \ ,$$ where $f \in \operatorname{\textup{C}}^\bullet_{cb}(G;E)$ and $g,g_0,\ldots,g_\bullet \in G$. Defining the *standard homogeneous coboundary operator* by $$\delta^\bullet:\operatorname{\textup{C}}^\bullet_{cb}(G;E) \rightarrow \operatorname{\textup{C}}^{\bullet+1}_{cb}(G;E) \ ,$$ $$\delta^\bullet(f)(g_0,\ldots,g_{\bullet+1}):=\sum_{i=0}^{\bullet+1}(-1)^i f(g_0,\ldots,\hat g_i,\ldots,g_{\bullet+1}) \ ,$$ we get a cochain complex $(\operatorname{\textup{C}}^\bullet_{cb}(G;E),\delta^\bullet)$. \[def:bounded:cohomology\] Let $G$ be a locally compact group and let $E$ be a Banach $G$-module. The *$k$-th continuous bounded cohomology group* of $G$ with coefficients in $E$ is the $k$-th cohomology group of the $G$-invariant subcomplex $(\operatorname{\textup{C}}^\bullet_{cb}(G;E)^G,\delta^\bullet)$, that is $$\operatorname{\textup{H}}^k_{cb}(G;E):=\operatorname{\textup{H}}^k(\operatorname{\textup{C}}^\bullet_{cb}(G;E)^G) \ ,$$ for every $k \geq 0$. It is worth noticing that each cohomology group $\operatorname{\textup{H}}^\bullet_{cb}(G;E)$ has a natural seminormed structure inherited from the normed structure on the continuous bounded cochains. 
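For instance, in degree one the coboundary of $f \in \operatorname{\textup{C}}^1_{cb}(G;E)$ reads $$\delta^1(f)(g_0,g_1,g_2)=f(g_1,g_2)-f(g_0,g_2)+f(g_0,g_1) \ ,$$ and the usual cancellation of terms with opposite signs shows that $\delta^{\bullet+1} \circ \delta^\bullet=0$, so the cohomology groups above are indeed well-defined. 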
By dropping the assumption of boundedness one can define similarly the complex of continuous cochains $(\operatorname{\textup{C}}^\bullet_c(G;E),\delta^\bullet)$, and the standard inclusion $i:\operatorname{\textup{C}}^\bullet_{cb}(G;E) \rightarrow \operatorname{\textup{C}}^\bullet_c(G;E)$ induces a map at the cohomological level $$\textup{comp}^\bullet:\operatorname{\textup{H}}^\bullet_{cb}(G;E) \rightarrow \operatorname{\textup{H}}^\bullet_c(G;E) \ ,$$ called the *comparison map*. Computing the continuous bounded cohomology of a locally compact group $G$ using only the definition given above may prove quite difficult. For this reason Burger and Monod [@burger2:articolo; @monod:libro] introduced a way to compute continuous bounded cohomology groups based on the notion of resolutions. More precisely, the authors showed [@burger2:articolo Corollary 1.5.3] that given any Banach $G$-module $E$ and any strong resolution $(E^\bullet,d^\bullet)$ of $E$ by relatively injective Banach $G$-modules, it holds $$\operatorname{\textup{H}}^k_{cb}(G;E) \cong \operatorname{\textup{H}}^k((E^\bullet)^G) \ ,$$ for every $k \geq 0$. Since we will not need the notions of strong resolution and of relatively injective Banach $G$-module, we omit them and refer to the book of Monod [@monod:libro] for more details. Unfortunately the isomorphism given above is not isometric a priori, that is, it may not preserve the seminormed structure. Nevertheless there are specific resolutions for which the isomorphism is actually isometric. This is the case, for instance, when we consider the resolution of essentially bounded weak-$^\ast$ measurable functions $(\operatorname{\textup{L}}^\infty_{\textup{w}^\ast}((G/Q)^{\bullet+1};E),\delta^\bullet)$ on the quotient $G/Q$ [@burger2:articolo Theorem 1], where $G$ is a semisimple Lie group of non-compact type and $Q$ is any parabolic subgroup. We are going to exploit this result for the Shilov boundary of a Hermitian symmetric space. 
We conclude this brief section by recalling the pullback determined by a boundary map associated to a measurable cocycle. Suppose that $G$ is a semisimple Lie group of non-compact type and consider a parabolic subgroup $Q \leq G$. Let $(\Omega,\mu_\Omega)$ be a standard Borel probability $G$-space and let $Y$ be any measurable $H$-space, where $H$ is another locally compact group. Denote by $(\operatorname{\mathcal{B}}^\infty_{\textup{w}^\ast}(Y^{\bullet+1};E),\delta^\bullet)$ the complex of weak-$^\ast$ measurable bounded functions on $Y$ (with the injection of coefficients, the latter is a strong resolution of $E$ by [@burger:articolo Proposition 2.1]). Given a boundary map $\phi:G/Q \times \Omega \rightarrow Y$ associated to a measurable cocycle $\sigma:G \times \Omega \rightarrow H$, there exists a natural map defined at the level of cochains as $$\operatorname{\textup{C}}^\bullet(\Phi^\Omega):\operatorname{\mathcal{B}}^\infty(Y^{\bullet+1};E)^H \rightarrow \operatorname{\textup{L}}^\infty((G/Q)^{\bullet+1};E)^G \ ,$$ $$\operatorname{\textup{C}}^\bullet(\Phi^\Omega)(\psi)(\xi_0,\ldots,\xi_\bullet):=\int_{\Omega} \psi(\phi(\xi_0,s),\ldots,\phi(\xi_\bullet,s))d\mu_\Omega(s) \ .$$ As shown by the author and Moraschini [@moraschini:savini; @moraschini:savini:2], the above map is a chain map which does not increase the norm and it induces a well-defined map in cohomology $$\operatorname{\textup{H}}^\bullet(\Phi^\Omega):\operatorname{\textup{H}}^\bullet(\operatorname{\mathcal{B}}^\infty(Y^{\bullet+1};E)^H) \rightarrow \operatorname{\textup{H}}^\bullet_{cb}(G;E) \ , \ \operatorname{\textup{H}}^\bullet(\Phi^\Omega)([\psi]):=[\operatorname{\textup{C}}^\bullet(\Phi^\Omega)(\psi)] \ .$$ The map $\operatorname{\textup{H}}^\bullet(\Phi^\Omega)$ is the *pullback induced by the boundary map $\phi$*. We are going to use the pullback map in order to define properly the Toledo invariant of a measurable cocycle of a surface group. 
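The fact that $\operatorname{\textup{C}}^\bullet(\Phi^\Omega)$ does not increase the norm is an immediate consequence of $\mu_\Omega$ being a probability measure: for every $\psi$ and almost every $\xi_0,\ldots,\xi_\bullet$ one has $$\Big\lVert \int_{\Omega} \psi(\phi(\xi_0,s),\ldots,\phi(\xi_\bullet,s))d\mu_\Omega(s) \Big\rVert_E \leq \int_{\Omega} \lVert \psi \rVert_\infty d\mu_\Omega(s)=\lVert \psi \rVert_\infty \ ,$$ whence $\lVert \operatorname{\textup{C}}^\bullet(\Phi^\Omega)(\psi) \rVert_\infty \leq \lVert \psi \rVert_\infty$. 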
Lie groups of Hermitian type {#sec:hermitian:groups} ---------------------------- In this section we are going to recall the main definitions and results about Lie groups of Hermitian type. We are going to recall the notion of Shilov boundary of a Hermitian symmetric space and we are going to define a suitable cocycle on it, called the Bergmann cocycle, which will enable us to define the notion of maximality for measurable cocycles of surface groups. For a more detailed discussion of these notions, we refer mainly to the work of Burger, Iozzi and Wienhard [@BIW07; @BIW09; @BIW1]. \[def:hermitian:symmetric:space\] Let $\operatorname{\mathcal{X}}$ be a Riemannian symmetric space and denote by $G=\textup{Isom}(\operatorname{\mathcal{X}})^\circ$ the connected component of the identity of the isometry group associated to $\operatorname{\mathcal{X}}$. We say that $\operatorname{\mathcal{X}}$ is *Hermitian* if there exists a $G$-invariant complex structure $\operatorname{\mathcal{J}}$ on $\operatorname{\mathcal{X}}$. Given a semisimple real algebraic group $\mathbf{G}$, we say that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is *of Hermitian type* if its symmetric space $\operatorname{\mathcal{X}}$ is Hermitian. A family of examples of particular interest in this paper will be that of Hermitian symmetric spaces of tube-type. We say that a Hermitian symmetric space $\operatorname{\mathcal{X}}$ is *of tube-type* if it is biholomorphic to a domain of the form $V+i\Omega$, where $V$ is a real vector space and $\Omega \subset V$ is a proper convex cone. A typical example is given by the hyperbolic space $\operatorname{\mathbb{H}}^2$ associated to the group $\operatorname{\textup{PU}}(1,1)$ or, more generally, by the symmetric space associated to $\operatorname{\textup{PU}}(p,p)$ for $p \geq 2$. 
A Hermitian symmetric space $\operatorname{\mathcal{X}}$ can be biholomorphically realized as a bounded convex domain $\operatorname{\mathcal{D}}_{\operatorname{\mathcal{X}}}$ in $\operatorname{\mathbb{C}}^n$. For such a realization, the group $G=\textup{Isom}(\operatorname{\mathcal{X}})^\circ$ acts via biholomorphisms and its action can be extended in a continuous way to the boundary $\partial \operatorname{\mathcal{D}}_{\operatorname{\mathcal{X}}}$. Unfortunately the latter is not a homogeneous $G$-space, but it admits a unique closed $G$-orbit, which will be identified with the Shilov boundary. More precisely, we first give the following \[def:shilov:boundary\] Let $\operatorname{\mathcal{D}}\subset \operatorname{\mathbb{C}}^n$ be a bounded domain. The *Shilov boundary* $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{D}}}$ of $\operatorname{\mathcal{D}}$ is the smallest closed subset of $\partial \operatorname{\mathcal{D}}$ such that, for every function $f$ continuous on $\overline{\operatorname{\mathcal{D}}}$ and holomorphic on $\operatorname{\mathcal{D}}$, it holds $$\max_{\overline{\operatorname{\mathcal{D}}}}|f|=\max_{\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{D}}}} |f| \ .$$ Given a Hermitian symmetric space $\operatorname{\mathcal{X}}$, we denote by $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ the Shilov boundary associated to the bounded realization of $\operatorname{\mathcal{X}}$ and we call it *the Shilov boundary of $\operatorname{\mathcal{X}}$*. As already anticipated, the Shilov boundary associated to a Hermitian symmetric space $\operatorname{\mathcal{X}}$ is a homogeneous $G$-space. 
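As a concrete illustration: for the unit disc $\operatorname{\mathcal{D}}\subset \operatorname{\mathbb{C}}$ the maximum principle gives $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{D}}}=\operatorname{\mathbb{S}}^1=\partial\operatorname{\mathcal{D}}$, whereas for the bidisc $\operatorname{\mathcal{D}}^2 \subset \operatorname{\mathbb{C}}^2$ an application of the maximum principle separately in each variable shows that $$\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{D}}^2}=\operatorname{\mathbb{S}}^1 \times \operatorname{\mathbb{S}}^1 \subsetneq \partial(\operatorname{\mathcal{D}}^2) \ ,$$ so in higher rank the Shilov boundary is strictly smaller than the topological boundary. 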
Indeed, if we denote by $\mathbf{G}$ the algebraic group associated to the complexified Lie algebra of $G=\textup{Isom}(\operatorname{\mathcal{X}})^\circ$, then there exists a maximal parabolic subgroup $\mathbf{Q} \subset \mathbf{G}$ such that $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ can be identified with $\mathbf{G}/\mathbf{Q}(\operatorname{\mathbb{R}})$. In particular $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is an amenable $G$-space in the sense of Section \[sec:burger:monod\] and hence the resolution of essentially bounded functions on it computes isometrically the continuous bounded cohomology of $G$. In order to describe more accurately the second bounded cohomology group of $G$, recall that if $\operatorname{\mathcal{X}}$ is Hermitian, then there exists a $G$-invariant complex structure $\operatorname{\mathcal{J}}$ on it. If we denote by $g$ the $G$-invariant Riemannian metric on $\operatorname{\mathcal{X}}$, we can define the *Kähler form* $$(\omega_{\operatorname{\mathcal{X}}})_a(X,Y):=g_a(X,\operatorname{\mathcal{J}}_a Y) \ ,$$ for any $X,Y \in T_a \operatorname{\mathcal{X}}$. Being $G$-invariant, the form $\omega_{\operatorname{\mathcal{X}}}$ is automatically closed by Cartan's Lemma. Define now $$\label{eq:cocycle:symmetric:space} \beta_{\operatorname{\mathcal{X}}}: (\operatorname{\mathcal{X}})^{(3)} \rightarrow \operatorname{\mathbb{R}}, \ \ \beta_{\operatorname{\mathcal{X}}}(a_1,a_2,a_3):=\int_{\Delta(a_1,a_2,a_3)} \omega_{\operatorname{\mathcal{X}}} \ ,$$ where $\Delta(a_1,a_2,a_3)$ is any triangle with geodesic sides determined by $a_1,a_2,a_3 \in \operatorname{\mathcal{X}}$. Since $\omega_{\operatorname{\mathcal{X}}}$ is closed, by Stokes' Theorem the function $\beta_{\operatorname{\mathcal{X}}}$ is well-defined and is an alternating $G$-invariant cocycle on $\operatorname{\mathcal{X}}$. 
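In the model case $\operatorname{\mathcal{X}}=\operatorname{\mathbb{H}}^2$, normalizing the metric to have constant curvature $-1$, the Kähler form is the hyperbolic area form and $\beta_{\operatorname{\mathbb{H}}^2}(a_1,a_2,a_3)$ is the signed area of the geodesic triangle with vertices $a_1,a_2,a_3$; by the Gauss-Bonnet formula $$|\beta_{\operatorname{\mathbb{H}}^2}(a_1,a_2,a_3)|=\pi-\alpha_1-\alpha_2-\alpha_3 < \pi \ ,$$ where $\alpha_1,\alpha_2,\alpha_3$ are the interior angles of the triangle, and the bound $\pi$ is approached in the limit by ideal triangles. 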
Remarkably the cocycle $\beta_{\operatorname{\mathcal{X}}}$ can be extended to a strict measurable $G$-invariant cocycle on the Shilov boundary $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ [@BIW07 Corollary 3.8] and its absolute value is bounded by $\operatorname{rk}(\operatorname{\mathcal{X}})$. We are going to denote such an extension by $\beta_{\operatorname{\mathcal{X}}}$, with an abuse of notation. As previously said in Section \[sec:burger:monod\], the cocycle $\beta_{\operatorname{\mathcal{X}}} \in \operatorname{\textup{L}}^\infty((\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)};\operatorname{\mathbb{R}})^G$ determines canonically a class in $\operatorname{\textup{H}}^2_{cb}(G;\operatorname{\mathbb{R}})$. We call *Bergmann cocycle* the measurable extension $\beta_{\operatorname{\mathcal{X}}}: (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)} \rightarrow \operatorname{\mathbb{R}}$ to the Shilov boundary of the cocycle defined by Equation \[eq:cocycle:symmetric:space\]. We denote by $\kappa^b_G \in \operatorname{\textup{H}}^2_{cb}(G;\operatorname{\mathbb{R}})$ the class determined by $\beta_{\operatorname{\mathcal{X}}}$ and we call it the *bounded Kähler class*. Recall that two points $\xi,\eta \in \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ are *transverse* if the pair $(\xi,\eta)$ lies in the unique open $G$-orbit in $(\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^2$. We conclude the section by recalling some properties of the Bergmann cocycle when $\operatorname{\mathcal{X}}$ is a Hermitian symmetric space of tube-type. As stated in [@BIW1 Lemma 5.5], if $\operatorname{\mathcal{X}}$ is of tube-type then 1. the cocycle $\beta_{\operatorname{\mathcal{X}}}$ takes values in the discrete set $\{-\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}, - \frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}+1 , \ldots , \frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}-1, \frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \}$; 2. 
if it holds $|\beta_{\operatorname{\mathcal{X}}}(\xi,\eta,\omega)|=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}$, then $\xi,\eta,\omega$ are pairwise transverse; 3. we can decompose $$(\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}= \sqcup_{i=0}^{\operatorname{rk}(\operatorname{\mathcal{X}})} \operatorname{\mathcal{O}}_{-\operatorname{rk}(\operatorname{\mathcal{X}})+2i} \ ,$$ where $\operatorname{\mathcal{O}}_{-\operatorname{rk}(\operatorname{\mathcal{X}})+2i}$ is the open subset of $(\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}$ where $\beta_{\operatorname{\mathcal{X}}}$ is identically equal to $-\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}+i$; 4. given $\xi, (\xi_n)_{n \in \operatorname{\mathbb{N}}}, (\xi'_n)_{n \in \operatorname{\mathbb{N}}}$ where $\xi,\xi_n,\xi'_n \in \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$, if $\lim_{n \to \infty} \xi_n =\xi$ and $\beta_{\operatorname{\mathcal{X}}}(\xi,\xi_n,\xi'_n)=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}$ then $\lim_{n \to \infty} \xi'_n=\xi$. Maximal measurable cocycles of surface groups {#sec:maximal:cocycles} ============================================= Let $L$ be a finite connected covering of the group $\operatorname{\textup{PU}}(1,1)$ and consider a torsion-free lattice $\Gamma \leq L$. Let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Given an irreducible Hermitian symmetric space $\operatorname{\mathcal{X}}$, we are going to denote by $G=\text{Isom}^\circ(\operatorname{\mathcal{X}})$ the connected component of the identity of the isometry group of $\operatorname{\mathcal{X}}$. In this section we are going to introduce the notion of *Toledo invariant* associated to a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with an essentially unique boundary map. 
Since this numerical invariant will have absolute value bounded from above by the rank of $\operatorname{\mathcal{X}}$, it will make sense to talk about *maximal cocycles*. Maximal cocycles will be particular examples of *tight cocycles*. We are going to introduce the notion of *tightness*, which will have important consequences on the algebraic hull. We are going to show that if a maximal cocycle is Zariski dense, then the Hermitian symmetric space $\operatorname{\mathcal{X}}$ must be of tube-type. Hence there are no maximal Zariski dense cocycles into Hermitian Lie groups which are not of tube-type. Moreover, maximality also affects the regularity properties of the slices of boundary maps. The Toledo invariant of a measurable cocycle {#sec:toledo:invariant} -------------------------------------------- Let $L$ be a finite connected covering of the group $\operatorname{\textup{PU}}(1,1)$ and consider a torsion-free lattice $\Gamma \leq L$. Let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Denote by $G=\text{Isom}^\circ(\operatorname{\mathcal{X}})$ the connected component of the identity of the isometry group of an irreducible Hermitian symmetric space $\operatorname{\mathcal{X}}$. Let $\mathbf{G}$ be the connected algebraic group associated to the complexified Lie algebra of $G$, so that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$. Let $\sigma:\Gamma \times \Omega \rightarrow G$ be a measurable cocycle with essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$. Here $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is the Shilov boundary associated to the symmetric space $\operatorname{\mathcal{X}}$.
Recall from Section \[sec:burger:monod\] that the existence of the boundary map $\phi$ induces a pullback map in cohomology $$\operatorname{\textup{H}}^\bullet(\Phi^\Omega): \operatorname{\textup{H}}^\bullet(\operatorname{\mathcal{B}}^\infty((\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{\bullet+1};\operatorname{\mathbb{R}})^G) \rightarrow \operatorname{\textup{H}}^\bullet_b(\Gamma;\operatorname{\mathbb{R}}) \ .$$ In particular we are allowed to consider the pullback of the Bergmann cocycle $\beta_{\operatorname{\mathcal{X}}}$. Since $\Gamma$ is a lattice in $L$, we have a well-defined *transfer map*, which is given at the level of cochains by $$\hat{\operatorname{\textup{T}}}^\bullet_b:\operatorname{\textup{L}}^\infty((\operatorname{\mathbb{S}}^1)^{\bullet+1};\operatorname{\mathbb{R}})^\Gamma \rightarrow \operatorname{\textup{L}}^\infty((\operatorname{\mathbb{S}}^1)^{\bullet+1};\operatorname{\mathbb{R}})^L \ ,$$ $$\hat{\operatorname{\textup{T}}}^\bullet_b(\psi)(\xi_0,\ldots,\xi_\bullet):=\int_{\Gamma \backslash L} \psi(\overline{g}\xi_0,\ldots,\overline{g}\xi_\bullet)d\mu_{\Gamma \backslash L}(\overline{g}) \ ,$$ where $\overline{g}$ denotes the equivalence class of $g$ in $\Gamma \backslash L$ and $\mu_{\Gamma \backslash L}$ is the normalized $L$-invariant measure on the quotient.
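Both the transfer map above and the Toledo invariant defined below are computed by averaging cochains on the circle, the basic one being the orientation cocycle $\beta_{\operatorname{\mathbb{S}}^1}$, which takes the value $\pm 1/2$ on positively, respectively negatively, oriented triples. The following numerical sketch (our own illustration; the function name is ours and not from the literature) evaluates it and checks that it is an alternating cocycle:

```python
import math
import random

def orientation_cocycle(a, b, c):
    """Orientation (Bergmann) cocycle on S^1 = R / 2*pi*Z:
    +1/2 on positively oriented triples, -1/2 on negatively
    oriented ones, 0 when two of the three points coincide."""
    # Twice the signed area of the Euclidean triangle with vertices
    # (cos t, sin t) for t = a, b, c; its sign is the orientation.
    s = math.sin(b - a) + math.sin(c - b) - math.sin(c - a)
    return 0.0 if abs(s) < 1e-12 else math.copysign(0.5, s)

random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.uniform(0, 2 * math.pi) for _ in range(4))
    # alternating: swapping two arguments flips the sign
    assert orientation_cocycle(a, b, c) == -orientation_cocycle(b, a, c)
    # cocycle identity: the alternating sum over four points vanishes
    coboundary = (orientation_cocycle(b, c, d) - orientation_cocycle(a, c, d)
                  + orientation_cocycle(a, b, d) - orientation_cocycle(a, b, c))
    assert abs(coboundary) < 1e-9
```

In particular the bound $|\beta_{\operatorname{\mathbb{S}}^1}| \leq 1/2$ is visible directly, which is the rank-one case of the bound on the Bergmann cocycle.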
Being a chain map, $\hat \operatorname{\textup{T}}^\bullet_b$ induces a well-defined map in cohomology called *transfer map* $$\operatorname{\textup{T}}^\bullet_b:\operatorname{\textup{H}}^\bullet_b(\Gamma;\operatorname{\mathbb{R}}) \rightarrow \operatorname{\textup{H}}^\bullet_{cb}(L;\operatorname{\mathbb{R}}), \hspace{5pt} \operatorname{\textup{T}}^\bullet_b([\psi]):=[\hat \operatorname{\textup{T}}^\bullet_b(\psi)] \ .$$ It is worth recalling that the bounded Kähler class $\kappa^b_L$ is a generator of the group $\operatorname{\textup{H}}^2_{cb}(L;\operatorname{\mathbb{R}})$ and it is represented by the Bergmann cocycle on the circle $\beta_{\operatorname{\mathbb{S}}^1}$ (which is nothing but the orientation cocycle). In this particular setting, we are allowed to give the following
The *Toledo invariant* $\textup{t}_b(\sigma)$ associated to $\sigma$ is defined as $$\label{eq:toledo} \operatorname{\textup{T}}^2_b \circ \operatorname{\textup{H}}^2(\Phi^\Omega)([ \beta_{\operatorname{\mathcal{X}}}])=\textup{t}_b(\sigma)[\beta_{\operatorname{\mathbb{S}}^1}]=\textup{t}_b(\sigma)\kappa^b_L \ .$$ Since the $\Gamma$-action on the circle is doubly ergodic and the cocycles are alternating, Equation actually holds at the level of cochains, that is $$\begin{aligned} \label{eq:formula} \int_{\Gamma \backslash L} \int_\Omega & \beta_{\operatorname{\mathcal{X}}}(\phi(\overline{g}\xi,s),\phi(\overline{g}\eta,s),\phi(\overline{g}\omega,s))d\mu_\Omega(s)d\mu_{\Gamma \backslash L}(\overline{g}) =\\ =& \text{t}_b(\sigma)\beta_{\operatorname{\mathbb{S}}^1}(\xi,\eta,\omega) \nonumber \ ,\end{aligned}$$ and the equation holds for *every* $\xi,\eta,\omega \in \operatorname{\mathbb{S}}^1$, as a consequence of either Burger and Iozzi [@BIcartan] or Pozzetti [@Pozzetti], for instance. Notice that Equation is simply a suitable adaptation of [@BIW1 Corollary 4.4] to the context of measurable cocycles. It is immediate to verify that the Toledo invariant is a *multiplicative constant* in the sense of [@moraschini:savini:2 Definition 3.16]. Indeed, following the notation of that paper, the setting required by [@moraschini:savini:2 Definition 3.16] is satisfied and one has $$\textup{t}_b(\sigma)=\lambda_{\beta_{\operatorname{\mathcal{X}}},\beta_{\operatorname{\mathbb{S}}^1}}(\sigma) \ .$$ Thanks to this analogy, one immediately deduces that $\textup{t}_b(\sigma)$ is invariant along the $G$-cohomology class of $\sigma$ and that its absolute value is bounded from above as follows $$|\text{t}_b(\sigma)| \leq \operatorname{rk}(\operatorname{\mathcal{X}}) \ .$$ \[oss:alternative:definition\] We could have defined the Toledo invariant in a different way. Let $\Gamma \leq L$ be a torsion-free lattice and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space.
Denote by $\Sigma$ the finite-area surface obtained as the quotient of $\operatorname{\mathbb{H}}^2_{\operatorname{\mathbb{R}}}$ by $\Gamma$, that is $\Sigma=\Gamma \backslash \operatorname{\mathbb{H}}^2_{\operatorname{\mathbb{R}}}$. If $\Gamma$ is *uniform* we know that $\Sigma$ is closed, whereas when $\Gamma$ is *non-uniform* the surface $\Sigma$ has finitely many cusps. In the latter case we are going to denote by $S$ a *compact core* of $\Sigma$, otherwise we set $S=\Sigma$. Following [@moraschini:savini Section 3.4] we can define the following composition of functions $$\label{eq:j:composition} \operatorname{\textup{J}}^\bullet_{S, \partial S}: \operatorname{\textup{H}}^\bullet_b(\Gamma;\operatorname{\mathbb{R}}) \rightarrow \operatorname{\textup{H}}^\bullet_b(\Sigma;\operatorname{\mathbb{R}}) \rightarrow \operatorname{\textup{H}}^\bullet_b(\Sigma,\Sigma \setminus S) \rightarrow \operatorname{\textup{H}}^\bullet_b(S,\partial S) \ ,$$ where the first map is the isomorphism given by Gromov's Mapping Theorem [@Grom82; @Ivanov; @FM:grom], the second map is obtained by the long exact sequence in bounded cohomology [@BBFIPP] and the last map is induced by the homotopy equivalence $(\Sigma,\Sigma \setminus S) \simeq (S, \partial S)$.
Given a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$, we could have defined the *Toledo number* of the cocycle $\sigma$ as $$\operatorname{\textup{T}}_b(\sigma):= \langle \textup{comp}^2_{S, \partial S} \circ \operatorname{\textup{J}}^2_{S, \partial S} \circ \operatorname{\textup{H}}^2(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}]),[S,\partial S] \rangle \ .$$ To compare the two different definitions of the Toledo invariant, one can follow the same strategy as in the proofs of either [@moraschini:savini Proposition 1.2, Proposition 1.6] or [@moraschini:savini:2 Proposition 5.5]. In this way it is possible to show that $$\label{eq:alternative:toledo} \textup{t}_b(\sigma)=\frac{\textup{T}_b(\sigma)}{|\chi(\Sigma)|} \ ,$$ where $\chi(\Sigma)$ is the Euler characteristic of the surface $\Sigma$. Notice that Equation is analogous to the one obtained by Burger, Iozzi and Wienhard [@BIW1 Theorem 3.3]. In particular $\textup{T}_b(\sigma)$ is an invariant of the $G$-cohomology class of $\sigma$ and the following estimate holds $$|\textup{T}_b(\sigma)| \leq \operatorname{rk}(\operatorname{\mathcal{X}}) |\chi(\Sigma)| \ .$$ Maximal measurable cocycles of surface groups {#sec:maximal:cocycle:thm} --------------------------------------------- In this section we are going to introduce the notion of maximality. Maximal measurable cocycles represent the first example of tight cocycles and this has important consequences on their algebraic hull. Additionally, if they are Zariski dense then the target must be a Hermitian Lie group of tube-type. We start by giving the definition of maximality. \[def:maximal:cocycle\] Let $\Gamma \leq L$ be a torsion-free lattice and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space.
Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map. We say that $\sigma$ is *maximal* if $\text{t}_b(\sigma)=\operatorname{rk}(\operatorname{\mathcal{X}})$. In order to show that maximal cocycles are tight, we first need to introduce the notion of tightness for measurable cocycles of surface groups. Inspired by the notion for representations studied by Burger, Iozzi and Wienhard [@BIW09], we can give the following \[def:tight:cocycle\] Let $\Gamma \leq L$ be a torsion-free lattice and $(\Omega,\mu_\Omega)$ a standard Borel probability $\Gamma$-space. Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$. We say that $\sigma$ is *tight* if $$\lVert \operatorname{\textup{H}}^2(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}]) \rVert_\infty=\frac{\operatorname{rk}\operatorname{\mathcal{X}}}{2} \ .$$ Clearly the definition above mimics the one given for representations. Indeed it is immediate to check that if the cocycle is cohomologous to one induced by a representation, Definition \[def:tight:cocycle\] boils down to the standard one. Another important aspect is that the tightness property is invariant along the $G$-cohomology class of a given cocycle [@moraschini:savini:2 Proposition 3.12]. Notice that we could have introduced the notion of tightness in a much more general setting, but this would not be so useful for our purposes. The deep study of tight representations carried out by Burger, Iozzi and Wienhard [@BIW09] enables us to state the following theorem, which characterizes the algebraic hull of a tight cocycle and which is a direct consequence of [@BIW09 Theorem 3], where a full characterization of tight subgroups is given.
\[teor:alg:hull:tight\] Let $\Gamma$ be a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$ and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Consider a semisimple algebraic $\operatorname{\mathbb{R}}$-group $\mathbf{G}$ such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is a Lie group of Hermitian type. Given a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$, assume that there exists an essentially unique boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$. Denote by $\mathbf{H}$ the algebraic hull of $\sigma$ in $\mathbf{G}$ and set $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$. If $\sigma$ is tight then 1. $\mathbf{H}$ is a reductive group; 2. the centralizer $Z_G(H)$ is compact; 3. if $\operatorname{\mathcal{Y}}$ is the symmetric space associated to $H$, there exists a unique $H$-invariant complex structure on $\operatorname{\mathcal{Y}}$ such that the inclusion $H \rightarrow G$ is tight and positive. Since the cocycle is tight and this condition is invariant along the $G$-cohomology class of $\sigma$, the inclusion $i:H \rightarrow G$ is tight. The conclusion follows by direct application of [@BIW09 Theorem 7.1], which characterizes tight subgroups of $G$. The next step is to prove that maximal cocycles are tight in the sense of Definition \[def:tight:cocycle\], similarly to what happens in the case of representations [@BIW1 Lemma 6.1]. This result will have important consequences for the algebraic hull of a maximal cocycle as a direct application of Theorem \[teor:alg:hull:tight\]. \[prop:maximal:tight\] Let $\Gamma \leq L$ be a torsion-free lattice and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map. If $\sigma$ is maximal then it is tight.
Suppose that $\sigma:\Gamma \times \Omega \rightarrow G$ is maximal. Then it holds $\textup{t}_b(\sigma)=\operatorname{rk}\operatorname{\mathcal{X}}$. By definition we have that $$\textup{T}^2_b \circ \textup{H}^2_b(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}])=\operatorname{rk}(\operatorname{\mathcal{X}}) \kappa^b_L \ ,$$ and hence it follows $$\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} = \lVert \operatorname{rk}(\operatorname{\mathcal{X}}) \kappa^b_L \rVert_\infty=\lVert \textup{T}^2_b \circ \textup{H}^2_b(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}]) \rVert_\infty \leq \lVert \textup{H}^2_b(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}]) \rVert_\infty \ ,$$ where we used that $\lVert \kappa^b_L \rVert_\infty=\frac{1}{2}$ and that the transfer map is norm non-increasing. Since the pullback is norm non-increasing as well, we also have $\lVert \textup{H}^2_b(\Phi^\Omega)([\beta_{\operatorname{\mathcal{X}}}]) \rVert_\infty \leq \lVert \beta_{\operatorname{\mathcal{X}}} \rVert_\infty = \frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}$, whence we must have equality and the cocycle $\sigma$ is tight. Before moving on with our discussion, we need to briefly recall some notation regarding the triple products studied by Burger, Iozzi and Wienhard [@BIW07]. If we denote by $(\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}$ the set of triples of distinct points in $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$, the *Hermitian triple product* is defined as $$\langle \langle \cdot , \cdot , \cdot \rangle \rangle: (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)} \rightarrow \operatorname{\mathbb{R}}^\times \backslash \operatorname{\mathbb{C}}^\times \ ,$$ $$\langle\langle \xi,\eta,\omega \rangle\rangle=e^{i \pi p_{\operatorname{\mathcal{X}}} \beta_{\operatorname{\mathcal{X}}}(\xi,\eta,\omega)} \ \ \text{mod} \operatorname{\mathbb{R}}^\times \ ,$$ for every $(\xi,\eta,\omega) \in (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}$. The number $p_{\operatorname{\mathcal{X}}}$ is an integer defined in terms of the root system associated to $G$.
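To fix ideas, we record the simplest instance of the formula above (a worked example of ours, in which the precise value of the integer $p_{\operatorname{\mathbb{D}}}$ is immaterial). For the Poincaré disk $\operatorname{\mathbb{D}}$ one has $\operatorname{rk}(\operatorname{\mathbb{D}})=1$, the Shilov boundary is $\operatorname{\mathbb{S}}^1$ and the Bergmann cocycle is the orientation cocycle with values $\pm\frac{1}{2}$ on distinct triples, so that

```latex
\langle\langle \xi,\eta,\omega \rangle\rangle
  = e^{\pm i \pi p_{\mathbb{D}}/2} \mod \mathbb{R}^\times ,
\qquad\text{whence}\qquad
\langle\langle \xi,\eta,\omega \rangle\rangle^{2}
  = e^{\pm i \pi p_{\mathbb{D}}} = 1 \mod \mathbb{R}^\times .
```

This is consistent with the tube-type criterion recalled below: the disk is a tube-type domain, and indeed the square of its Hermitian triple product is constant on triples of distinct points.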
Recall that $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is a homogeneous $G$-space, which can be realized as the quotient $G/Q$, where $Q=\mathbf{Q}(\operatorname{\mathbb{R}})$ and $\mathbf{Q}$ is a maximal parabolic subgroup of $\mathbf{G}$. Burger, Iozzi and Wienhard were able to extend the Hermitian triple product to a *complex Hermitian triple product* $\langle\langle \cdot, \cdot, \cdot \rangle\rangle_{\operatorname{\mathbb{C}}}$ defined on $(\mathbf{G}/\mathbf{Q})^3$ with values in $\Delta^\times \backslash A^\times$. Here $A^\times$ is the group $\operatorname{\mathbb{C}}^\times \times \operatorname{\mathbb{C}}^\times$ endowed with the real structure $(\lambda,\mu) \mapsto (\overline{\mu},\overline{\lambda})$ and $\Delta^\times$ is the image of the diagonal embedding of $\operatorname{\mathbb{C}}^\times$. More precisely, the authors [@BIW07 Section 2.4] showed that the following diagram commutes $$\xymatrix{ (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)} \ar[rr]^{\langle \langle \cdot, \cdot, \cdot \rangle\rangle} \ar[d]^{(\imath)^3} && \operatorname{\mathbb{R}}^\times \backslash \operatorname{\mathbb{C}}^\times \ar[d]^\Delta\\ (\mathbf{G}/\mathbf{Q})^3 \ar[rr]^{\langle \langle \cdot, \cdot, \cdot \rangle\rangle_{\operatorname{\mathbb{C}}}} && \Delta^\times \backslash A^\times \ . }$$ where $\imath:\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}} \rightarrow \mathbf{G}/\mathbf{Q}$ is the map given by the $G$-equivariant identification of $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ with $(\mathbf{G}/\mathbf{Q})(\operatorname{\mathbb{R}})$ and $\Delta$ is the diagonal embedding.
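For the reader's convenience, here is the elementary verification (ours, not spelled out in [@BIW07]) that the real points of $A^\times$ and of $\Delta^\times$ recover the upper-right corner of the diagram:

```latex
% fixed points of the real structure (\lambda,\mu) \mapsto (\overline{\mu},\overline{\lambda})
(\lambda,\mu)=(\overline{\mu},\overline{\lambda}) \iff \mu=\overline{\lambda} ,
\qquad\text{so}\qquad
A^\times(\mathbb{R})=\{(\lambda,\overline{\lambda}) \mid \lambda \in \mathbb{C}^\times\} \cong \mathbb{C}^\times ,
\qquad
\Delta^\times \cap A^\times(\mathbb{R})=\{(t,t) \mid t \in \mathbb{R}^\times\} \cong \mathbb{R}^\times .
```

In particular the vertical map $\Delta$ from $\operatorname{\mathbb{R}}^\times \backslash \operatorname{\mathbb{C}}^\times$ to $\Delta^\times \backslash A^\times$ is the one induced by $\lambda \mapsto (\lambda,\overline{\lambda})$.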
Given any pair of distinct points $(\xi,\eta) \in (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(2)}$, following [@BIW07 Section 5.1], we denote by $\operatorname{\mathcal{O}}_{\xi,\eta}$ the Zariski open subset of $\mathbf{G}/\mathbf{Q}$ on which the map $$p_{\xi,\eta}:\operatorname{\mathcal{O}}_{\xi,\eta} \rightarrow \Delta^\times \backslash A^\times, \hspace{5pt} p_{\xi,\eta}(\omega):=\langle \langle \xi, \eta, \omega \rangle \rangle_{\operatorname{\mathbb{C}}} \ ,$$ is well-defined. Burger, Iozzi and Wienhard [@BIW07 Lemma 5.1] proved that if there exists an integer $m \in \operatorname{\mathbb{Z}}\setminus \{ 0 \}$ such that $\omega \mapsto p_{\xi,\eta}(\omega)^m$ is constant, then $\operatorname{\mathcal{X}}$ must be of tube-type. Now we can proceed by proving the following theorem, which should be thought of as a generalization of [@BIW1 Theorem 4.1(1)]. \[teor:symmetric:tube\] Let $L$ be a finite connected covering of $\operatorname{\textup{PU}}(1,1)$ and let $\Gamma \leq L$ be a torsion-free lattice. Let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space and let $\sigma:\Gamma \times \Omega \rightarrow G$ be a measurable cocycle with essentially unique boundary map. If $\sigma$ is maximal and Zariski dense, then $\operatorname{\mathcal{X}}$ must be of tube-type. Consider a positively oriented triple of distinct points $\xi,\eta,\omega \in \operatorname{\mathbb{S}}^1$.
By the maximality assumption we have that $\textup{t}_b(\sigma)=\operatorname{rk}(\operatorname{\mathcal{X}})$ and by substituting this value in Equation we obtain $$\label{eq:formula:maximal} \int_{\Gamma \backslash L}\int_\Omega \beta_{\operatorname{\mathcal{X}}}(\phi(\overline{g}\xi,s),\phi(\overline{g}\eta,s),\phi(\overline{g}\omega,s))d\mu_\Omega(s)d\mu_{\Gamma \backslash L}(\overline{g})=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ .$$ Since the integrand is bounded from above by $\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}$, for almost every $\overline{g} \in \Gamma \backslash L$ and almost every $s \in \Omega$ it must hold $$\beta_{\operatorname{\mathcal{X}}}(\phi(\overline{g}\xi,s),\phi(\overline{g}\eta,s),\phi(\overline{g}\omega,s))=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ ,$$ and by the equivariance of the map $\phi$ it follows $$\label{eq:almost:every:maximal} \beta_{\operatorname{\mathcal{X}}}(\phi(g\xi,s),\phi(g\eta,s),\phi(g\omega,s))=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ ,$$ for almost every $g \in L$ and almost every $s \in \Omega$. For almost every $s \in \Omega$, we know that the $s$-slice $\phi_s:\operatorname{\mathbb{S}}^1 \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is measurable and, by Equation \[eq:almost:every:maximal\], it satisfies $$\label{eq:maximal:slice} \beta_{\operatorname{\mathcal{X}}}(\phi_s(g\xi),\phi_s(g\eta),\phi_s(g\omega))=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ ,$$ for almost every $g \in L$. Since the same reasoning applies to a negatively oriented triple, we must have $$\label{eq:maximal:slice:triples} \beta_{\operatorname{\mathcal{X}}}(\phi_s(\xi),\phi_s(\eta),\phi_s(\omega))=\pm \frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ ,$$ for almost every triple $\xi,\eta,\omega$ such that $\beta_{\operatorname{\mathbb{S}}^1}(\xi,\eta,\omega)=\pm 1/2$.
The equation above implies that $$\label{eq:hermitian:product} \langle \langle \phi_s(\xi),\phi_s(\eta),\phi_s(\omega) \rangle\rangle^2 = 1 \ \ \textup{mod} \operatorname{\mathbb{R}}^\times \ ,$$ for almost every distinct $\xi,\eta,\omega \in \operatorname{\mathbb{S}}^1$. Fix now a pair $(\xi,\eta) \in (\operatorname{\mathbb{S}}^1)^{2}$ of distinct points such that Equation holds for almost every $\omega \in \operatorname{\mathbb{S}}^1$. Since $G$ acts transitively on the set of maximal triples on $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ by [@BIW09 Theorem 3.8(3)], for a fixed pair of distinct points $\xi_0,\eta_0 \in \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$, there exists a measurable function $f:\Omega \rightarrow G$ such that $$\phi_s(\xi)=f(s)\xi_0 \ , \hspace{10pt} \phi_s(\eta)=f(s)\eta_0 \ , \ (\xi_0,\eta_0,f(s)^{-1}\phi_s(\omega)) \ \text{is maximal}$$ for almost every $\omega \in \operatorname{\mathbb{S}}^1, s \in \Omega$. For such a measurable function $f$, we consider $\sigma^f$ and the map $\phi^f$ as the ones defined in Section \[sec:measurable:cocycles\]. For ease of notation we are going to write $\alpha=\sigma^f$ and $\psi=\phi^f$. By the choice of the map $f$, Equation can be rewritten as $$\langle \langle \xi_0,\eta_0,\psi_s(\omega) \rangle\rangle^2 = 1 \ \ \textup{mod} \operatorname{\mathbb{R}}^\times \ ,$$ for almost every $\omega \in \operatorname{\mathbb{S}}^1$. The previous equation implies that $\psi_s(\omega) \in \operatorname{\mathcal{O}}_{\xi_0,\eta_0}$ for almost every $\omega \in \operatorname{\mathbb{S}}^1$ and almost every $s \in \Omega$. We denote by $E$ the subset of full measure in $\operatorname{\mathbb{S}}^1 \times \Omega$ such that $\psi_s(\omega) \in \operatorname{\mathcal{O}}_{\xi_0,\eta_0}$ for all $(\omega,s) \in E$.
Define $$E^\Gamma:=\bigcap_{\gamma \in \Gamma} \gamma E \ ,$$ which has full measure being a countable intersection of full measure sets (notice that $\Gamma$ preserves the measure class on $\operatorname{\mathbb{S}}^1 \times \Omega$). Since $\sigma$ is Zariski dense, the cocycle $\alpha$ is Zariski dense too. Since the Zariski closure of $\psi(E^\Gamma)$ is preserved by the algebraic hull of $\alpha$ which coincides with $\mathbf{G}$, the set $\psi(E^\Gamma)$ is Zariski dense in $\mathbf{G}/\mathbf{Q}$, whence $\psi(E^\Gamma)$ is Zariski dense in $\operatorname{\mathcal{O}}_{\xi_0,\eta_0}$. Thus the map $\omega \mapsto p_{\xi_0,\eta_0}(\omega)^2$ is constant on $\operatorname{\mathcal{O}}_{\xi_0,\eta_0}$ and $\operatorname{\mathcal{X}}$ is of tube-type, as claimed. An important consequence of the previous theorem is the following Let $L$ be a finite connected covering of $\operatorname{\textup{PU}}(1,1)$ and let $\Gamma \leq L$ be a torsion-free lattice. Let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. There is no maximal Zariski dense cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ when $G$ is not of tube-type. As a consequence of Theorem \[teor:symmetric:tube\], if $\mathbf{H}$ is the algebraic hull of a maximal cocycle $\sigma$ and $H=\mathbf{H}(\operatorname{\mathbb{R}})$, then $H^\circ$ must be a Hermitian group of tube-type. The following theorem collects all the properties we discovered about the algebraic hull of a maximal cocycle and it should be thought of as a statement equivalent to [@BIW1 Theorem 5] in the context of measurable cocycles. [teor:maximal:alghull]{} Let $\Gamma \leq L$ be a torsion-free lattice and let $(\Omega,\mu_\Omega)$ be a standard Borel probability $\Gamma$-space. Let $\mathbf{G}$ be a semisimple algebraic $\operatorname{\mathbb{R}}$-group such that $G=\mathbf{G}(\operatorname{\mathbb{R}})^\circ$ is a Lie group of Hermitian type.
Consider a measurable cocycle $\sigma:\Gamma \times \Omega \rightarrow G$ with essentially unique boundary map. Denote by $\mathbf{H}$ the algebraic hull of $\sigma$ in $\mathbf{G}$ and set $H=\mathbf{H}(\operatorname{\mathbb{R}})^\circ$. If $\sigma$ is maximal, then 1. the algebraic hull $\mathbf{H}$ is reductive; 2. the centralizer $Z_G(H)$ is compact; 3. the symmetric space $\operatorname{\mathcal{Y}}$ associated to $H$ is Hermitian of tube-type; 4. it holds $\mathbf{H}(\operatorname{\mathbb{R}}) \subset \textup{Isom}(\operatorname{\mathcal{T}})$ for some maximal tube-type subdomain $\operatorname{\mathcal{T}}$ of $\operatorname{\mathcal{X}}$. Equivalently there exists a cocycle cohomologous to $\sigma$ which preserves $\operatorname{\mathcal{T}}$. Being maximal, the cocycle $\sigma$ is tight by Proposition \[prop:maximal:tight\]. Hence we can apply Theorem \[teor:alg:hull:tight\] to get properties $1)$ and $2)$. Additionally by Theorem \[teor:symmetric:tube\] the symmetric space $\operatorname{\mathcal{Y}}$ must be of tube-type, whence point $3)$. The inclusion $i: H \rightarrow G$ is tight because the cocycle $\sigma$ is tight. Since the symmetric space $\operatorname{\mathcal{Y}}$ associated to $H$ is of tube-type and the inclusion is tight, by [@BIW09 Theorem 9(1)] there exists a unique maximal tube-type subdomain $\operatorname{\mathcal{T}}$ of $\operatorname{\mathcal{X}}$ preserved by $H$. By uniqueness, $\operatorname{\mathcal{T}}$ must be preserved by the whole $\mathbf{H}(\operatorname{\mathbb{R}})$ and we are done. Regularity properties of boundary maps {#sec:boundary:map} -------------------------------------- Imitating what happens in the context of representations, we are going to study the regularity properties of boundary maps associated to maximal measurable cocycles.
Given a maximal Zariski dense measurable cocycle, under suitable hypotheses on the push-forward measures with respect to the slices of the boundary map, we are going to show that there exists an essentially unique equivariant measurable map with left-continuous (respectively right-continuous) slices which preserve transversality and maximality. We are going to follow the line of [@BIW1 Section 5]. Before introducing the setup of the section, we say that a measurable map $\phi:\operatorname{\mathbb{S}}^1 \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is *maximal* if it satisfies Equation . Notice that almost every slice of a boundary map associated to a maximal cocycle is maximal. \[setup:boundary:map\] From now until the end of the section we are going to assume the following - $\Gamma \leq L$ is a torsion-free lattice of a finite connected covering $L$ of $\operatorname{\textup{PU}}(1,1)$; - $(\Omega,\mu_\Omega)$ is a standard Borel probability $\Gamma$-space; - $\sigma:\Gamma \times \Omega \rightarrow G$ is a maximal Zariski dense cocycle with boundary map $\phi:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$; - denote by $\{E_s\}_{s \in \Omega}$ the family of essential graphs $E_s=\textup{EssGr}(\phi_s)$ associated to the slices, that is the support of the push-forward of the Lebesgue measure on $\operatorname{\mathbb{S}}^1$ under the map $\xi \mapsto (\xi,\phi_s(\xi)) \in \operatorname{\mathbb{S}}^1 \times \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$. Having introduced the setup we needed, we can now move on to prove the following \[lemma:maximality:triples\] In the situation of Setup \[setup:boundary:map\], suppose that $E_s$ is maximal. Let $(\xi_i,\eta_i) \in E_s$ for $i=1,2,3$ be points such that $\xi_1,\xi_2,\xi_3$ are pairwise distinct and $\eta_1,\eta_2,\eta_3$ are pairwise transverse.
Then it holds $$\beta_{\operatorname{\mathcal{X}}}(\eta_1,\eta_2,\eta_3)=\operatorname{rk}(\operatorname{\mathcal{X}}) \beta_{\operatorname{\mathbb{S}}^1}(\xi_1,\xi_2,\xi_3) \ .$$ Denote by $I_i$ for $i=1,2,3$ pairwise disjoint open intervals such that $\xi_i \in I_i$ and for any $\omega_i \in I_i$ it holds $$\beta_{\operatorname{\mathbb{S}}^1}(\omega_1,\omega_2,\omega_3)=\beta_{\operatorname{\mathbb{S}}^1}(\xi_1,\xi_2,\xi_3) \ .$$ Consider an open neighborhood $U_i$ of $\eta_i$, for $i=1,2,3$, such that $U_1 \times U_2 \times U_3 \subset (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}$. Then the measurable set $$A_i=\{ \omega \in I_i \ | \ \phi_s(\omega) \in U_i \} \ ,$$ has positive measure, since $\eta_1,\eta_2,\eta_3$ are in the essential image of $\phi_s$. Since we assumed that the slice $E_s$ is maximal, for almost every $(\omega_1,\omega_2,\omega_3) \in A_1 \times A_2 \times A_3$ we have that $$\beta_{\operatorname{\mathcal{X}}}(\phi_s(\omega_1),\phi_s(\omega_2),\phi_s(\omega_3))=\operatorname{rk}(\operatorname{\mathcal{X}})\beta_{\operatorname{\mathbb{S}}^1}(\omega_1,\omega_2,\omega_3)=\operatorname{rk}(\operatorname{\mathcal{X}})\beta_{\operatorname{\mathbb{S}}^1}(\xi_1,\xi_2,\xi_3) \ .$$ By setting $\varepsilon=2\beta_{\operatorname{\mathbb{S}}^1}(\xi_1,\xi_2,\xi_3)$, we have that $|\varepsilon|=1$ and for almost every $(\omega_1,\omega_2,\omega_3) \in A_1 \times A_2 \times A_3$ we have that $$(\phi_s(\omega_1),\phi_s(\omega_2),\phi_s(\omega_3)) \in U_1 \times U_2 \times U_3 \cap \operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}{\operatorname{\mathcal{X}}}} \ ,$$ where $\operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}{\operatorname{\mathcal{X}}}}$ is the open set in $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}^3$ on which $\beta_{\operatorname{\mathcal{X}}}$ is identically equal to $\varepsilon \operatorname{rk}(\operatorname{\mathcal{X}})/2$.
By the arbitrariness of the neighborhoods $U_i$, we must have $(\eta_1,\eta_2,\eta_3) \in \overline{\operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}\operatorname{\mathcal{X}}}}$. Since we have that $$\overline{\operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}\operatorname{\mathcal{X}}}} \cap (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}=\overline{\operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}\operatorname{\mathcal{X}}}} \cap ( \sqcup_{i=0}^{\operatorname{rk}(\operatorname{\mathcal{X}})} \operatorname{\mathcal{O}}_{-\operatorname{rk}\operatorname{\mathcal{X}}+ 2 i}) = \operatorname{\mathcal{O}}_{\varepsilon \operatorname{rk}\operatorname{\mathcal{X}}}$$ and $(\eta_1,\eta_2,\eta_3) \in (\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}})^{(3)}$, the triple is maximal and the claim follows. In order to proceed we now have to discuss a condition to impose on the slices of the boundary map. Recall that $\check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ can be identified with $G/Q$, where $Q$ is a maximal parabolic subgroup. We denote by $\textbf{V}_\xi \subset \mathbf{G}/\mathbf{Q}$ the Zariski closed set of points which are not transverse to $\xi$ and set $V_\xi:=\textbf{V}_\xi(\operatorname{\mathbb{R}})$, the set of points not transverse to $\xi$ in the Shilov boundary. Burger, Iozzi and Wienhard [@BIW1 Proposition 5.2] proved that the boundary map associated to a Zariski dense representation has very strong properties, since its essential image intersects any proper Zariski closed set of the Shilov boundary in a set of measure zero. The author wonders under which hypotheses the same property should hold for almost every slice of a boundary map associated to a cocycle. Here we are going to assume it.
More precisely \[ass:zariski:zero:measure\] In the situation of Setup \[setup:boundary:map\], we suppose that for every proper Zariski closed set $\mathbf{V} \subset \mathbf{G}/\mathbf{Q}$ it holds $$\nu(\phi_s^{-1}(\mathbf{V}(\operatorname{\mathbb{R}})))=0 \ ,$$ for almost every $s \in \Omega$. Here $\nu$ is the round measure on $\operatorname{\mathbb{S}}^1$. Assumption \[ass:zariski:zero:measure\] is clearly satisfied by cocycles which are cohomologous to a Zariski dense representation $\rho:\Gamma \rightarrow G$. We do not know whether this property can be extended to a wider class of cocycles. \[lemma:transverse\] Let $E_s$ be a maximal slice satisfying Assumption \[ass:zariski:zero:measure\] and let $(\xi_1,\eta_1), (\xi_2,\eta_2) \in E_s$ with $\xi_1 \neq \xi_2$. Then $\eta_1$ and $\eta_2$ are transverse. For any distinct $\xi,\omega \in \operatorname{\mathbb{S}}^1$ we denote by $$((\xi,\omega)):=\{ \eta \in \operatorname{\mathbb{S}}^1 \ | \ \beta_{\operatorname{\mathbb{S}}^1}(\xi,\eta,\omega)=\frac{1}{2} \} \ .$$ Thanks to Assumption \[ass:zariski:zero:measure\], we can suppose that the essential image of the slice $\phi_s$ meets any proper Zariski closed set in a measure zero set. Hence we can find $\alpha_1 \in ((\xi_1,\xi_2))$ such that $\phi_s(\alpha_1)$ is transverse to both $\eta_1$ and $\eta_2$. In the same way there exists a point $\alpha_2 \in ((\xi_2,\xi_1))$ such that $\phi_s(\alpha_2)$ is transverse to $\eta_1$ and $\eta_2$.
Using now jointly Lemma \[lemma:maximality:triples\] and the cocycle condition on $\beta_{\operatorname{\mathcal{X}}}$ we get $$\begin{aligned} 0&=\beta_{\operatorname{\mathcal{X}}}(\phi_s(\alpha_1),\eta_1,\phi_s(\alpha_2))-\beta_{\operatorname{\mathcal{X}}}(\eta_1,\eta_2,\phi_s(\alpha_2))+\\ &+\beta_{\operatorname{\mathcal{X}}}(\eta_1,\phi_s(\alpha_1),\phi_s(\alpha_2))-\beta_{\operatorname{\mathcal{X}}}(\eta_1,\phi_s(\alpha_1),\eta_2)=\\ &=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}-\beta_{\operatorname{\mathcal{X}}}(\eta_1,\eta_2,\phi_s(\alpha_2))+\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}-\beta_{\operatorname{\mathcal{X}}}(\eta_1,\phi_s(\alpha_1),\eta_2) \ . \\\end{aligned}$$ The previous line implies that $\beta_{\operatorname{\mathcal{X}}}(\eta_1,\eta_2,\phi_s(\alpha_2))=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2}$ and hence $\eta_1$ and $\eta_2$ are transverse. Given now any subset $A \subset \operatorname{\mathbb{S}}^1$ we put $$F_A^s:=\{ \eta \in \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}} \ | \ \exists \ \xi \in A \ : \ (\xi,\eta) \in E_s \} \ .$$ We also define $$((\xi,\omega]]:=((\xi,\omega)) \cup \{ \omega \} \ .$$ \[lemma:one:point\] Let $s \in \Omega$ be a point such that $E_s$ is a maximal slice satisfying Assumption \[ass:zariski:zero:measure\]. Let $\xi \neq \omega$ be two points in $\operatorname{\mathbb{S}}^1$. Then $\overline{F^s_{((\xi,\omega]]}} \cap F^s_\xi$ and $\overline{F^s_{[[\omega,\xi))}} \cap F^s_{\xi}$ each consist of exactly one point. We prove that $\overline{F^s_{((\xi,\omega]]}} \cap F^s_\xi$ consists of exactly one point. The same strategy can be applied to $\overline{F^s_{[[\omega,\xi))}} \cap F^s_{\xi}$ to prove the analogous statement.
Let $\eta,\eta' \in \overline{F^s_{((\xi,\omega]]}} \cap F^s_\xi$ and consider a sequence $(\xi_n,\eta_n) \in E_s$ such that $$\xi_n \in ((\xi,\omega]], \ \ \ \lim_{n \to \infty} \xi_n=\xi, \ \ \ \lim_{n \to \infty} \eta_n=\eta \ .$$ Given any $\zeta \in ((\xi,\omega))$, we can apply the same reasoning of [@BIW1 Lemma 5.8] to say that $$\overline{F^s_{((\xi,\omega]]}} \cap F^s_\xi = \overline{F^s_{((\xi,\zeta]]}} \cap F^s_\xi \ .$$ Thanks to the previous equation, consider a sequence $(\omega_n,\eta'_n) \in E_s$ so that $$\omega_n \in ((\xi,\xi_n)), \ \ \ \lim_{n \to \infty} \omega_n=\xi, \ \ \ \lim_{n \to \infty} \eta'_n=\eta' \ .$$ Applying Lemma \[lemma:transverse\] we have that $\eta,\eta'_n,\eta_n$ are pairwise transverse. Hence we can apply Lemma \[lemma:maximality:triples\] to the triples $(\xi,\omega_n,\xi_n)$ and $(\eta,\eta_n',\eta_n)$ to get $$\beta_{\operatorname{\mathcal{X}}}(\eta,\eta_n',\eta_n)=\operatorname{rk}(\operatorname{\mathcal{X}})\beta_{\operatorname{\mathbb{S}}^1}(\xi,\omega_n,\xi_n)=\frac{\operatorname{rk}(\operatorname{\mathcal{X}})}{2} \ .$$ Since $\lim_{n \to \infty} \eta_n=\eta$, Property $4)$ of Section \[sec:hermitian:groups\] of the Bergmann cocycle $\beta_{\operatorname{\mathcal{X}}}$ forces $\lim_{n \to \infty} \eta'_n=\eta$ and hence $\eta=\eta'$. In this way we immediately get the following \[cor:two:points\] Let $s \in \Omega$ be a point such that $E_s$ is a maximal slice satisfying Assumption \[ass:zariski:zero:measure\]. For every $\xi \in \operatorname{\mathbb{S}}^1$ the set $F^s_\xi$ contains either one or two points. Consider a positively oriented triple $\omega_-, \xi, \omega_+ \in \operatorname{\mathbb{S}}^1$ and let $\eta \in F^s_\xi$. Since it holds $$F^s_\xi=\left( \overline{F^s_{[[\omega_-,\xi))}} \cap F^s_{\xi} \right) \cup \left( \overline{F^s_{((\xi,\omega_+]]}} \cap F^s_\xi \right)\ ,$$ the claim follows by Lemma \[lemma:one:point\].
We are now ready to prove the main theorem of the section, which extends in some sense [@BIW1 Theorem 5.1] to the context of measurable cocycles. \[teor:boundary:map\] In the situation of Assumption \[ass:zariski:zero:measure\], there exist two measurable maps $$\phi^\pm:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$$ such that 1. the slice $\phi^+_s:\operatorname{\mathbb{S}}^1 \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is right continuous for almost every $s \in \Omega$; 2. the slice $\phi^-_s:\operatorname{\mathbb{S}}^1 \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}$ is left continuous for almost every $s \in \Omega$; 3. the maps $\phi^\pm$ are measurable and $\sigma$-equivariant; 4. for every $\xi \neq \omega $ in $\operatorname{\mathbb{S}}^1$ and almost every $s \in \Omega$, $\phi^\varepsilon_s(\xi)$ is transverse to $\phi^\delta_s(\omega)$, where $\varepsilon, \delta \in \{ \pm \}$; 5. almost every slice is monotone, that is for every $\xi,\omega,\zeta \in \operatorname{\mathbb{S}}^1$ and almost every $s \in \Omega$ it holds $$\beta_{\operatorname{\mathcal{X}}}(\phi_s^\varepsilon(\xi),\phi_s^\delta(\omega),\phi_s^\theta(\zeta))=\operatorname{rk}(\operatorname{\mathcal{X}}) \beta_{\operatorname{\mathbb{S}}^1}(\xi,\omega,\zeta) \ ,$$ where $\varepsilon,\delta,\theta \in \{ \pm \}$. Additionally $\phi^\pm$ are essentially unique. By assumption we know that for almost every $s \in \Omega$, the slice $\phi_s$ is maximal and it satisfies Assumption \[ass:zariski:zero:measure\]. For any such $s$, we define for every $\xi \in \operatorname{\mathbb{S}}^1$ the following maps $$\phi_s^+(\xi)=\overline{F^s_{[[\omega_-,\xi))}} \cap F^s_{\xi}\ , \phi_s^-(\xi)=\overline{F^s_{((\xi,\omega_+]]}} \cap F^s_\xi \ ,$$ where $\omega_- ,\xi,\omega_+$ is a positively oriented triple in $\operatorname{\mathbb{S}}^1$ and $\omega_\pm$ are arbitrary.
The right continuity of $\phi^+_s$ and the left continuity of $\phi^-_s$ are clear from their definitions. We can define $$\phi^\pm:\operatorname{\mathbb{S}}^1 \times \Omega \rightarrow \check{\operatorname{\mathcal{S}}}_{\operatorname{\mathcal{X}}}, \ \phi^\pm(\xi,s):=\phi_s^\pm(\xi) \ .$$ The measurability of the maps $\phi^\pm$ comes from the fact that the slices $\phi_s^\pm$ are measurable and vary measurably with respect to $s$, by the measurability of $\phi$. The $\sigma$-equivariance of the latter implies that $\phi^\pm$ are $\sigma$-equivariant. Finally property $4)$ follows by Lemma \[lemma:transverse\] and property $5)$ follows by Lemma \[lemma:maximality:triples\]. The essential uniqueness is a consequence of the assumption on the essential uniqueness of the boundary map. [^1]:
--- abstract: 'Cosmic strings are one-dimensional topological defects which could have been formed in the early stages of our Universe. They triggered a lot of interest, mainly for their cosmological implications: they could offer an alternative to inflation for the generation of density perturbations. It was shown however that cosmic strings lead to inconsistencies with the measurements of the cosmic microwave background temperature anisotropies. The picture has changed recently. It was shown that, on the one hand, cosmic strings can be generically formed in the framework of supersymmetric grand unified theories and that, on the other hand, cosmic superstrings could play the rôle of cosmic strings. There is also some possible observational support. All this led to a revival of cosmic strings research, and this is the topic of my lecture.' address: 'Department of Physics, King’s College London, Strand, London WC2R 2LS, U.K. ' author: - 'Mairi Sakellariadou[^1]' title: The Revival of Cosmic Strings --- Introduction ============ Cosmic strings attracted a lot of interest around the eighties and nineties. They offered an alternative mechanism to cosmological inflation for the generation of the primordial density perturbations leading to the large-scale structure formation one observes. However, towards the turn of the century cosmic strings lost their appeal, since it was shown that they lead to inconsistencies with the Cosmic Microwave Background (CMB) measurements. Nevertheless, the story of cosmic strings does not end here. In the last few years there has been a remarkable revival of the theoretical and observational activity. In this lecture, I will discuss the present view on the cosmological rôle of cosmic strings. In Section 2, I will discuss aspects of cosmic strings in the framework of Grand Unified Theories (GUTs). I will first analyse the formation and classification of topological as well as embedded defects.
I will then briefly discuss the CMB temperature anisotropies and I will compare the predictions of topological defects models with current measurements. I will then conclude that topological defects in general, and cosmic strings in particular, are ruled out as the unique source of density perturbations leading to the observed structure formation. At this point I do not conclude that cosmic strings are ruled out, but I ask instead what the implications are for the models of high energy physics which we employed to construct our cosmological scenario. The first question is whether cosmic strings are expected to be generically formed. I will address this question in the framework of Supersymmetric Grand Unified Theories (SUSY GUTs). I will show that cosmic strings are indeed expected to be generically formed within a large class of models within SUSY GUTs and therefore one has to use mixed models, consisting of inflation with cosmic strings as a sub-dominant partner. I will then examine whether such mixed models are indeed compatible with the CMB data. I will present two well-studied inflationary models within supersymmetric theories, namely F/D-term hybrid inflation. I will impose constraints on the free parameters of the models (masses and couplings) so that there is agreement between theory and measurements. In Section 3, I will address the issue of cosmic superstrings as cosmic strings candidates, in the context of braneworld cosmologies. In Section 4, I will discuss a candidate for a gravitational lensing event by a cosmic string. I will conclude in Section 5. Topological Defects =================== Topological Defects in GUTs --------------------------- The Universe has steadily cooled down since the Planck time, leading to a series of Spontaneously Broken Symmetries (SSB).
SSB may lead to the creation of topological defects [@td1; @td2], which are false vacuum remnants, such as domain walls, cosmic strings, monopoles, or textures, via the Kibble mechanism [@kibble]. The formation or not of topological defects during phase transitions, followed by SSB, and the determination of the type of the defects, depend on the topology of the vacuum manifold ${\cal M}_n$. The properties of ${\cal M}_n$ are usually described by the $k^{\rm th}$ homotopy group $\pi_k({\cal M}_n)$, which classifies distinct mappings from the $k$-dimensional sphere $S^k$ into the manifold ${\cal M}_n$. To illustrate that, let me consider the symmetry breaking of a group G down to a subgroup H of G. If ${\cal M}_n={\rm G}/{\rm H}$ has disconnected components, or equivalently if the order $k$ of the nontrivial homotopy group is $k=0$, then two-dimensional defects, called [*domain walls*]{}, form. The spacetime dimension $d$ of the defects is given in terms of the order of the nontrivial homotopy group by $d=4-1-k$. If ${\cal M}_n$ is not simply connected, in other words if ${\cal M}_n$ contains loops which cannot be continuously shrunk into a point, then [*cosmic strings*]{} form. A necessary, but not sufficient, condition for the existence of stable strings is that the first homotopy group (the fundamental group) $\pi_1({\cal M}_n)$ of ${\cal M}_n$, is nontrivial, or multiply connected. Cosmic strings are line-like defects, $d=2$. If ${\cal M}_n$ contains unshrinkable surfaces, then [*monopoles*]{} form, for which $k=2, ~d=1$. If ${\cal M}_n$ contains noncontractible three-spheres, then event-like defects, [*textures*]{}, form for which $k=3, ~d=0$. Depending on whether the symmetry is local (gauged) or global (rigid), topological defects are called local or global. The energy of local defects is strongly confined, while the gradient energy of global defects is spread out over the causal horizon at defect formation.
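The classification just described is purely combinatorial, so it can be summarised in a few lines of code. The sketch below is my own illustration (not part of the lecture): it maps the order $k$ of the lowest nontrivial homotopy group $\pi_k({\cal M}_n)$ to the defect type and its spacetime dimension $d=4-1-k$.

```python
# Illustrative lookup (my own): defect type from the order k of the lowest
# nontrivial homotopy group pi_k(M_n) of the vacuum manifold M_n.
DEFECT_TYPES = {0: "domain wall", 1: "cosmic string", 2: "monopole", 3: "texture"}

def defect_from_homotopy(k):
    """Return (defect type, spacetime dimension d = 4 - 1 - k)."""
    if k not in DEFECT_TYPES:
        raise ValueError("k must be 0, 1, 2 or 3")
    return DEFECT_TYPES[k], 4 - 1 - k
```

For instance, a nontrivial fundamental group ($k=1$) gives `("cosmic string", 2)`, i.e. a line-like defect.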
Patterns of symmetry breaking which lead to the formation of local monopoles or local domain walls are ruled out, since they would soon dominate the energy density of the Universe and close it, unless an inflationary era took place after their formation. Local textures are insignificant in cosmology since their relative contribution to the energy density of the Universe decreases rapidly with time [@textures]. Even if the nontrivial topology required for the existence of a defect is absent in a field theory, it may still be possible to have defect-like solutions. Defects may be [*embedded*]{} in such topologically trivial field theories [@embedded]. While stability of topological defects is guaranteed by topology, embedded defects are in general unstable under small perturbations. Cosmic Microwave Background Temperature Anisotropies ---------------------------------------------------- The CMB temperature anisotropies offer a powerful test for theoretical models aiming at describing the early Universe. The characteristics of the CMB multipole moments can be used to discriminate among theoretical models and to constrain the parameter space. The spherical harmonic expansion of the CMB temperature anisotropies, as a function of angular position, is given by $$\label{dTT} \frac{\delta T}{T}({\bf n})=\sum _{\ell m}a_{\ell m} {\cal W}_\ell Y_{\ell m}({\bf n})~\, \ \ \ \mbox {with}\ \ \ a_{\ell m}=\int {\rm d}\Omega _{{\bf n}}\frac{\delta T}{T}({\bf n})Y_{\ell m}^*({\bf n})~;$$ ${\cal W}_\ell $ stands for the $\ell$-dependent window function of the particular experiment. The angular power spectrum of CMB temperature anisotropies is expressed in terms of the dimensionless coefficients $C_\ell$, which appear in the expansion of the angular correlation function in terms of the Legendre polynomials $P_\ell$: $$\biggl \langle 0\biggl |\frac{\delta T}{T}({\bf n})\frac{\delta T}{ T}({\bf n}') \biggr |0\biggr\rangle \left|_{{~}_{\!\!({\bf n\cdot n}'=\cos\vartheta)}}\right.
= \frac{1}{4\pi}\sum_\ell(2\ell+1)C_\ell P_\ell(\cos\vartheta) {\cal W}_\ell^2 ~. \label{dtovertvs}$$ It compares points in the sky separated by an angle $\vartheta$. Here, the brackets denote spatial average, or expectation values if perturbations are quantised. Equation (\[dtovertvs\]) holds only if the initial state for cosmological perturbations of quantum-mechanical origin is the vacuum [@jm1; @jm2]. The value of $C_\ell$ is determined by fluctuations on angular scales of the order of $\pi/\ell$. The angular power spectrum of anisotropies observed today is usually given by the power per logarithmic interval in $\ell$, plotting $\ell(\ell+1)C_\ell$ versus $\ell$. The predictions of the defect models regarding the characteristics of the CMB spectrum are: - Global ${\cal O}(4)$ textures lead to a position of the first acoustic peak at $\ell\simeq 350$ with an amplitude $\sim 1.5$ times higher than the Sachs-Wolfe plateau [@rm]. - Global ${\cal O}(N)$ textures in the large $N$ limit lead to a quite flat spectrum, with a slow decay after $\ell \sim 100$ [@dkm]. Similar are the predictions of other global ${\cal O}(N)$ defects [@clstrings; @num]. - Local cosmic strings predictions are not very well established and range from an almost flat spectrum [@acdkss] to a single wide bump at $\ell \sim 500$ [@mark] with an extremely rapidly decaying tail. The position and amplitude of the acoustic peaks, as found by the CMB measurements [@maxi; @boom; @dasi; @wmap], are in disagreement with the predictions of topological defect models. Thus, CMB measurements rule out pure topological defect models as the origin of the initial density perturbations leading to the observed structure formation. At this point one has to ask what the implications are for the high energy physics models upon which our cosmological model was built. I will thus first ask whether cosmic strings formation is indeed generic. I will address this question in the framework of SUSY GUTs.
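As a numerical aside (my own sketch, not from the lecture), the angular correlation function of Eq. (\[dtovertvs\]) can be reconstructed from a given set of $C_\ell$ by folding the $(2\ell+1)$ factors and the window function into the coefficients of a Legendre series:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def angular_correlation(C_ell, cos_theta, W_ell=None):
    """Evaluate (1/4pi) * sum_l (2l+1) C_l W_l^2 P_l(cos theta).

    C_ell : sequence of power-spectrum coefficients C_0, C_1, ...
    W_ell : optional window function of the experiment (default: 1).
    """
    C_ell = np.asarray(C_ell, dtype=float)
    ell = np.arange(len(C_ell))
    W2 = np.ones_like(C_ell) if W_ell is None else np.asarray(W_ell) ** 2
    # legval sums c_l * P_l(x), so fold all l-dependent factors into c_l.
    coeffs = (2 * ell + 1) * C_ell * W2 / (4 * np.pi)
    return legval(cos_theta, coeffs)

# A pure monopole term (only C_0 nonzero) gives a constant correlation.
corr = angular_correlation([1.0], 0.3)
```

With only $C_0$ nonzero the correlation is constant and equal to $C_0/4\pi$, as expected from $P_0=1$.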
I am only interested in cosmic strings, since I consider gauge theories, for which domain walls and monopoles are dangerous, while textures are cosmologically uninteresting [@textures]. Genericity of Cosmic Strings Formation within SUSY GUTs ------------------------------------------------------- I will address the question of whether cosmic strings formation is generic, in the context of SUSY GUTs. Even though the Standard Model (SM) has been tested to a very high precision, it is incapable of explaining neutrino masses [@SK; @SNO; @kamland]. An extension of the SM gauge group can be realised within Supersymmetry (SUSY). SUSY offers a solution to the gauge hierarchy problem, while in the supersymmetric standard model the gauge coupling constants of the strong, weak and electromagnetic interactions meet at a single point $M_{\rm GUT} \simeq (2-3) \times 10^{16}$ GeV. In addition, SUSY GUTs can provide the scalar field which could drive inflation, explain the matter-antimatter asymmetry of the Universe, and propose a candidate, the lightest superparticle, for cold dark matter. Within SUSY GUTs there is a large number of SSB patterns leading from a large gauge group G to the SM gauge group G$_{\rm SM}\equiv$ SU(3)$_{\rm C}\times$ SU(2)$_{\rm L}\times$ U(1)$_{\rm Y}$. The study of the homotopy group of the false vacuum for each SSB scheme determines whether there is defect formation and identifies the type of the formed defect. Clearly, if there is formation of domain walls or monopoles, one will have to invoke an era of supersymmetric hybrid inflation to dilute them. To consider an SSB scheme as a successful one, it should be able to explain the matter/anti-matter asymmetry of the Universe and to account for the proton lifetime measurements [@SK]. In what follows, I consider a mechanism of baryogenesis via leptogenesis, which can be thermal or nonthermal.
In the case of nonthermal leptogenesis, U(1)$_{\rm B-L}$ (B and L are the baryon and lepton numbers, respectively) is a sub-group of the GUT gauge group, G$_{\rm GUT}$, and B-L is broken at the end or after inflation. In the case of thermal leptogenesis, B-L is broken independently of inflation. If leptogenesis is thermal and B-L is broken before the inflationary era, then one should check whether the temperature at which B-L is broken, which will define the mass of the right-handed neutrinos, is smaller than the reheating temperature, which should in turn be lower than the limit imposed by the gravitino. To ensure the stability of the proton, the discrete symmetry Z$_2$, which is contained in U(1)$_{\rm B-L}$, must be kept unbroken down to low energies. This implies that the successful SSB schemes should end at G$_{\rm SM}\times$ Z$_2$. I will then examine how often cosmic strings have survived the inflationary era, within all acceptable SSB patterns. To accomplish this task one has to choose the large gauge group G$_{\rm GUT}$. In Ref. [@jrs] this study has been done explicitly for a large number of simple Lie groups. Since I consider GUTs based on simple gauge groups, the type of supersymmetric hybrid inflation will be of the F-type. The minimum rank of G$_{\rm GUT}$ has to be at least equal to 4, to contain G$_{\rm SM}$ as a subgroup. Then one has to study the possible embeddings of G$_{\rm SM}$ in G$_{\rm GUT}$ to be in agreement with the SM phenomenology and especially with the hypercharges of the known particles. Moreover, the group must include a complex representation, needed to describe the SM fermions, and it must be anomaly free. Since, in principle, ${\rm SU}(n)$ may not be anomaly free, I assume that the ${\rm SU}(n)$ groups which I use have indeed a fermionic representation that certifies that the model is anomaly free. I set as the upper bound on the rank $r$ of the group, $r\leq 8$. Clearly, the choice of the maximum rank is in principle arbitrary.
This choice could, in a sense, be motivated by the Horava-Witten [@hw] model, based on ${\rm E}_8\times {\rm E}_8$. Thus, the large gauge group G$_{\rm GUT}$ could be one of the following: SO(10), E$_6$, SO(14), SU(8), SU(9); flipped SU(5) and \[SU(3)\]$^3$ are included within this list as subgroups of SO(10) and E$_6$, respectively. A detailed study of all the SSB schemes which bring us from G$_{\rm GUT}$ down to the SM gauge group G$_{\rm SM}$, by one or more intermediate steps, shows that cosmic strings are generically formed at the end of hybrid inflation. If the large gauge group G$_{\rm GUT}$ is SO(10), then cosmic strings formation is unavoidable [@jrs]. For ${\rm E}_6$ it depends on whether one considers thermal or nonthermal leptogenesis. More precisely, under the assumption of nonthermal leptogenesis, cosmic strings formation is unavoidable. If I consider thermal leptogenesis, then cosmic strings formation at the end of hybrid inflation arises in $98\%$ of the acceptable SSB schemes [@jm]. If the requirement of having Z$_2$ unbroken down to low energies is relaxed and thermal leptogenesis is considered as being the mechanism for baryogenesis, then cosmic strings formation accompanies hybrid inflation in $80\%$ of the SSB schemes [@jm]. The SSB schemes of SU(6) and SU(7) down to G$_{\rm SM}$ which could accommodate an inflationary era with no defect (of any kind) at later times are inconsistent with proton lifetime measurements, and minimal SU(6) and SU(7) do not predict neutrino masses [@jrs], implying that these models are incompatible with high energy physics phenomenology. Higher rank groups, namely SO(14), SU(8) and SU(9), should in general lead to cosmic strings formation at the end of hybrid inflation. In all these schemes, cosmic strings formation is sometimes accompanied by the formation of embedded strings. The strings which form at the end of hybrid inflation have a mass which is proportional to the inflationary scale.
Mixed Models ------------ Since cosmic strings are expected to be generically formed in the context of SUSY GUTs, one should consider [*mixed perturbation models*]{} where the dominant rôle is played by the inflaton field but cosmic strings also have a contribution, small but not negligible. Restricting ourselves to the angular power spectrum, we can remain in the linear regime. In this case, $$C_\ell = \alpha C^{\scriptscriptstyle{\rm I}}_\ell + (1-\alpha) C^{\scriptscriptstyle{\rm S}}_\ell~, \label{cl}$$ where $C^{\scriptscriptstyle{\rm I}}_\ell$ and $C^{\scriptscriptstyle {\rm S}}_\ell$ denote the (COBE normalized) Legendre coefficients due to adiabatic inflaton fluctuations and those stemming from the cosmic strings network, respectively. The coefficient $\alpha$ in Eq. (\[cl\]) is a free parameter giving the relative amplitude for the two contributions. Comparing the $C_\ell$, given by Eq. (\[cl\]), with data obtained from the most recent CMB measurements, one gets that a cosmic strings contribution to the primordial fluctuations higher than $14\%$ is excluded at the $95\%$ confidence level [@bprs; @pogosian; @wyman]. In what follows, I will be on the conservative side and I will not allow cosmic strings to contribute more than $10\%$ to the CMB temperature anisotropies. Supersymmetric Hybrid Inflation ------------------------------- Inflation remains the most appealing scenario for describing the early Universe. Inflation essentially consists of a phase of accelerated expansion which took place at a very high energy scale. However, despite its success, it faces a number of questions, as for example how generic is the onset of inflation [@ecms] and how one can guarantee a natural and successful inflationary model. Unfortunately, inflation is still a paradigm in search of a model. I will discuss two well-studied inflationary models in the framework of supersymmetry, namely F/D-term inflation.
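Going back to the mixed model of Eq. (\[cl\]), the linear combination and the conservative $10\%$ bound on the strings contribution are trivial to encode. The following sketch is my own (the spectra are made up for illustration):

```python
import numpy as np

def mixed_spectrum(C_inflation, C_strings, alpha):
    """Mixed perturbation model: C_l = alpha*C_l^I + (1 - alpha)*C_l^S."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (alpha * np.asarray(C_inflation, dtype=float)
            + (1.0 - alpha) * np.asarray(C_strings, dtype=float))

# alpha = 0.9 corresponds to the conservative 10% strings contribution.
C_mix = mixed_spectrum([1.0, 0.8], [0.5, 0.6], alpha=0.9)
```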
### F-term Inflation F-term inflation can be naturally accommodated in the framework of GUTs when a GUT gauge group G$_{\rm GUT}$ is broken down to G$_{\rm SM}$ at an energy $M_{\rm GUT}$ according to the scheme $${\rm G}_{\rm GUT} \xrightarrow{M_{\rm GUT}} {\rm H}_1 \xrightarrow[\Phi_+\Phi_-]{M_{\rm infl}} {\rm H}_2 \longrightarrow {\rm G}_{\rm SM}~;$$ $\Phi_+, \Phi_-$ is a pair of GUT Higgs superfields in nontrivial complex conjugate representations, which lower the rank of the group by one unit when acquiring nonzero vacuum expectation value. The inflationary phase takes place at the beginning of the symmetry breaking ${\rm H}_1\stackrel{M_{\rm infl}}{\longrightarrow} {\rm H}_2$. F-term inflation is based on the globally supersymmetric renormalisable superpotential $$\label{superpot} W_{\rm infl}^{\rm F}=\kappa S(\Phi_+\Phi_- - M^2)~,$$ where $S$ is a GUT gauge singlet left handed superfield, $\Phi_+$ and $\Phi_-$ are defined above; $\kappa$ and $M$ are two constants ($M$ has dimensions of mass) which can be taken positive by field redefinition. The chiral superfields $S, \Phi_+, \Phi_-$ are taken to have canonical kinetic terms. This superpotential is the most general one consistent with an R-symmetry under which $W \rightarrow e^{i \beta} W~, \Phi_- \rightarrow e^{-i \beta} \Phi_-~, \Phi_+ \rightarrow e^{i \beta} \Phi_+$, and $ S \rightarrow e^{i \beta} S$. An R-symmetry can ensure that the rest of the renormalisable terms are either absent or irrelevant. The scalar potential reads $$\label{scalpot1} V(\phi_+,\phi_-, S)= |F_{\Phi_+}|^2+|F_{\Phi_-}|^2+|F_ S|^2 +\frac{1}{2}\sum_a g_a^2 D_a^2~.$$ The F-term is such that $F_{\Phi_i} \equiv |\partial W/\partial \Phi_i|_{\theta=0}$, where we take the scalar component of the superfields once we differentiate with respect to $\Phi_i=\Phi_+, \Phi_-, S$.
The D-terms are $$D_a=\bar{\phi}_i\,{(T_a)^i}_j\,\phi^j +\xi_a~,$$ with $a$ the label of the gauge group generators $T_a$, $g_a$ the gauge coupling, and $\xi_a$ the Fayet-Iliopoulos term. By definition, in F-term inflation the real constant $\xi_a$ is zero; it can only be nonzero if $T_a$ generates an extra U(1) group. In the context of F-term hybrid inflation, the F-terms give rise to the inflationary potential energy density, while the D-terms are flat along the inflationary trajectory, thus one may neglect them during inflation. The potential has one valley of local minima, $V=\kappa^2 M^4$, for $S> M $ with $\phi_+ = \phi_-=0$, and one global supersymmetric minimum, $V=0$, at $S=0$ and $\phi_+ = \phi_- = M$. Imposing initially $ S \gg M$, the fields quickly settle down in the valley of local minima. Since in the slow roll inflationary valley the ground state of the scalar potential is nonzero, SUSY is broken. At tree level, along the inflationary valley the potential is constant, therefore perfectly flat. A slope along the potential can be generated by including the one-loop radiative corrections. Thus, the scalar potential gets a little tilt which helps the inflaton field $S$ to slowly roll down the valley of minima. The one-loop radiative corrections to the scalar potential along the inflationary valley lead to the effective potential [@DvaShaScha; @Lazarides; @SenoSha; @rs1] $$\label{VexactF} V_{\rm eff}^{\rm F}(|S|)=\kappa^2M^4\big\{1+\frac{\kappa^2 \cal{N}}{32\pi^2}\big[2\ln\frac{|S|^2\kappa^2}{\Lambda^2} +\big(\frac{|S|^2}{ M^2}+1\big)^2\ln\big(1+\frac{M^2}{|S|^2}\big) +\big(\frac{|S|^2}{M^2}-1\big)^2\ln\big(1-\frac{M^2}{|S|^2}\big)\big]\big\} ~;$$ $\Lambda$ is a renormalisation scale and $\cal{N}$ stands for the dimensionality of the representation to which the complex scalar components $\phi_+, \phi_-$ of the chiral superfields $\Phi_+, \Phi_-$ belong.
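Equation (\[VexactF\]) is easy to evaluate numerically for $|S|>M$. The sketch below is my own (arbitrary units, made-up parameter values); it exhibits the loop-induced tilt, namely that the potential grows monotonically away from $|S|=M$, which is what makes $S$ slowly roll down toward $M$:

```python
import numpy as np

def V_eff_F(S, kappa, M, N, Lam):
    """One-loop F-term potential V_eff^F(|S|) of Eq. (VexactF), valid for |S| > M."""
    x2 = (S / M) ** 2  # |S|^2 / M^2; must exceed 1 for the last log to be defined
    rad = (2.0 * np.log(S**2 * kappa**2 / Lam**2)
           + (x2 + 1.0) ** 2 * np.log(1.0 + 1.0 / x2)
           + (x2 - 1.0) ** 2 * np.log(1.0 - 1.0 / x2))
    return kappa**2 * M**4 * (1.0 + kappa**2 * N / (32.0 * np.pi**2) * rad)

# The loop-induced tilt: V increases monotonically away from S = M.
V2 = V_eff_F(2.0, 0.1, 1.0, 1, 1.0)
V3 = V_eff_F(3.0, 0.1, 1.0, 1, 1.0)
```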
Considering only large angular scales, one can compute the contributions to the CMB temperature anisotropies analytically. In Ref. [@rs1], the Sachs-Wolfe effect has been explicitly calculated. The quadrupole anisotropy has one contribution coming from the inflaton field, calculated using Eq. (\[VexactF\]), and one contribution coming from the cosmic strings network, given by numerical simulations [@ls]. Fixing the number of e-foldings to 60, for a given gauge group G$_{\rm GUT}$ the inflaton and cosmic strings contributions to the CMB depend on the superpotential coupling $\kappa$, or equivalently on the symmetry breaking scale $M$ associated with the inflaton mass scale, which coincides with the string mass scale. The total quadrupole anisotropy has to be normalised to the COBE data. In Ref. [@rs1] we have found that the cosmic strings contribution is consistent with the CMB measurements, provided $$M\lsim 2\times 10^{15} {\rm GeV} ~~\Leftrightarrow ~~\kappa \lsim 7\times10^{-7}~.$$ This constraint on $\kappa$ is in agreement with the one found in Ref. [@lk]. Strictly speaking, the above condition was found in the context of the SO(10) gauge group, but the conditions imposed in the context of other gauge groups are of the same order of magnitude, since $M$ is a slowly varying function of the dimensionality ${\cal N}$ of the representations to which the scalar components of the chiral Higgs superfields belong. The superpotential coupling $\kappa$ is also subject to the gravitino constraint, which imposes an upper limit on the reheating temperature, to avoid gravitino overproduction. Within the framework of SUSY GUTs and assuming a see-saw mechanism to give rise to massive neutrinos, the inflaton field decays during reheating into pairs of right-handed neutrinos. This constraint on the reheating temperature can be converted into a constraint on the parameter $\kappa$.
The gravitino constraint on $\kappa$ reads [@rs1] $\kappa \lsim 8\times 10^{-3}$, which is a weaker constraint. Concluding, F-term inflation leads generically to cosmic strings formation at the end of the inflationary era. The cosmic strings formed are of the GUT scale. This class of models can be compatible with CMB measurements, provided the superpotential coupling is smaller than $10^{-6}$. This tuning of the free parameter $\kappa$ can be softened if one allows for the curvaton mechanism. According to the curvaton mechanism [@lw2002; @mt2001], another scalar field, called the curvaton, could generate the initial density perturbations, whereas the inflaton field is only responsible for the dynamics of the Universe. The curvaton is a scalar field that is sub-dominant during the inflationary era as well as at the beginning of the radiation dominated era which follows the inflationary phase. There is no correlation between the primordial fluctuations of the inflaton and curvaton fields. Clearly, within supersymmetric theories such scalar fields are expected to exist. In addition, if embedded strings accompany the formation of cosmic strings, they may offer a natural curvaton candidate, provided the decay product of embedded strings gives rise to a scalar field before the onset of inflation. Considering the curvaton scenario, the coupling $\kappa$ is only constrained by the gravitino limit. More precisely, assuming the existence of a curvaton field, there is an additional contribution to the temperature anisotropies. The WMAP CMB measurements impose [@rs1] the following limit on the initial value of the curvaton field $$\psi_{\rm init} \lsim 5\times 10^{13}\,\left( \frac{\kappa}{10^{-2}}\right){\rm GeV}~,$$ provided the parameter $\kappa$ is in the range $[10^{-6},~1]$.
### D-term Inflation D-term inflation received a lot of interest, mainly because it is not plagued by the [*Hubble-induced mass*]{} problem, and in addition it can be easily implemented in string theory. D-term inflation is derived from the superpotential $$\label{superpotD} W^{\rm D}_{\rm infl}=\lambda S \Phi_+\Phi_-~;$$ $S, \Phi_-, \Phi_+$ are three chiral superfields and $\lambda$ is the superpotential coupling. D-term inflation requires the existence of a nonzero Fayet-Iliopoulos term $\xi$, which can be added to the Lagrangian only in the presence of an extra U(1) gauge symmetry, under which the three chiral superfields have charges $Q_S=0$, $Q_{\Phi_+}=+1$ and $Q_{\Phi_-}=-1$, respectively. This extra U(1) gauge symmetry can be of a different origin; hereafter we consider a nonanomalous U(1) gauge symmetry. Thus, D-term inflation requires a scheme like $${\rm G}_{\rm GUT}\times {\rm U}(1) \xrightarrow{M_{\rm GUT}} {\rm H} \times {\rm U}(1) \xrightarrow[\Phi_+\Phi_-]{M_{\rm infl}} {\rm H} \rightarrow {\rm G}_{\rm SM}~.$$ The symmetry breaking at the end of the inflationary phase implies that cosmic strings are always formed at the end of D-term hybrid inflation. To avoid cosmic strings, several mechanisms have been proposed which either consider more complicated models or require additional ingredients. For example, one can add a nonrenormalisable term in the potential [@shifted], or add an additional discrete symmetry [@smooth], or consider GUT models based on nonsimple groups [@mcg], or introduce a new pair of charged superfields [@jaa], so that cosmic strings formation is avoided within D-term inflation. In what follows, I will show that standard D-term inflation leading to cosmic strings production is still compatible with CMB data, since the cosmic strings contribution to the CMB data is neither constant nor dominant. This implies that one does not have to invoke some new physics.
The reader can find a detailed study in Refs. [@rs1; @rs2]. In the global supersymmetric limit, Eqs. (\[scalpot1\]), (\[superpotD\]) lead to the following expression for the scalar potential $$\label{VtotD} V^{\rm D}(\phi_+,\phi_-,S) = \lambda^2 \left[\,|S|^2(|\phi_+|^2+|\phi_-|^2) + |\phi_+\phi_-|^2 \right] +\frac{g^2}{2}(|\phi_+|^2-|\phi_-|^2+\xi)^2~,$$ where $g$ is the gauge coupling of the U(1) symmetry and $\xi$ is a Fayet-Iliopoulos term, chosen to be positive. In D-term inflation, as opposed to F-term inflation, the inflaton mass acquires values of the order of the Planck mass, and therefore the correct analysis must be done in the framework of SUGRA. The SSB of SUSY in the inflationary valley introduces a splitting in the masses of the components of the chiral superfields $\Phi_\pm$. As a result, we obtain [@rs2] two scalars with squared masses $m^2_{\pm}=\lambda^2|S|^2 \exp\left(|S|^2/M^2_{\rm Pl}\right)\pm g^2 \xi$ and a Dirac fermion with squared mass $m_{\rm f}^2=\lambda^2|S|^2 \exp\left(|S|^2/M^2_{\rm Pl}\right)$. Thus, calculating the radiative corrections, the effective scalar potential for minimal supergravity reads [@rs1; @rs2] $$\begin{aligned} \label{vDsugra} V_{\rm eff} &=&\frac{g^2\xi^2}{2}\big\{1+\frac{g^2}{16 \pi^2} \times\big[2\ln\frac{|S|^2\lambda^2}{\Lambda^2}e^{\frac{|S|^2}{ M^2_{\rm Pl}}} +\big( \frac{\lambda^2 |S|^2}{ g^2\xi}e^{\frac{|S|^2}{M_{\rm Pl}^2}} +1\big)^2 \ln\big(1+\frac{ g^2\xi}{\lambda^2 |S|^2 }e^{-\frac{|S|^2}{ M_{\rm Pl}^2}} \big)\nonumber\\ && +\big( \frac{\lambda^2 |S|^2}{ g^2\xi}e^{\frac{|S|^2}{ M_{\rm Pl}^2}} -1\big)^2 \ln\big(1-\frac{ g^2\xi}{\lambda^2 |S|^2 }e^{-\frac{|S|^2}{ M_{\rm Pl}^2}} \big)\big]\big\}~.\end{aligned}$$ In Refs. [@rs1; @rs2], we have properly addressed the question of the cosmic strings contribution to the CMB data and found that standard D-term inflation can be compatible with measurements; the cosmic strings contribution to the CMB is actually model-dependent.
Our most important finding was that the cosmic strings contribution is neither constant nor always dominant. More precisely, we obtained [@rs1; @rs2] that $g\gsim 2\times 10^{-2}$ is incompatible with the allowed cosmic strings contribution to the WMAP measurements. For $g\lsim 2\times 10^{-2}$, the constraint on the superpotential coupling $\lambda$ reads $\lambda \lsim 3\times 10^{-5}$. SUGRA corrections impose in addition a lower limit on $\lambda$. The constraints induced on the couplings by the CMB measurements can be expressed [@rs1; @rs2] as a single constraint on the Fayet-Iliopoulos term $\xi$, namely $\sqrt\xi \lsim 2\times 10^{15}~{\rm GeV}$. Concluding, standard D-term inflation always leads to cosmic strings formation at the end of the inflationary era. The cosmic strings formed are of the GUT scale. This class of models is still compatible with CMB measurements, provided the couplings are small enough. As in the case of F-term inflation, the fine tuning of the couplings can be softened provided one considers the curvaton mechanism. In this case, the imposed CMB constraint on the initial value of the curvaton field reads [@rs1; @rs2] $$\psi_{\rm init}\lsim 3\times 10^{14}\left(\frac{g}{ 10^{-2}}\right) ~{\rm GeV},$$ for $\lambda\in [10^{-1}, 10^{-4}]$. Our conclusions remain valid in the revised version of D-term inflation, in the framework of SUGRA with constant Fayet-Iliopoulos terms. In the context of N=1, 3+1 SUGRA, the presence of constant Fayet-Iliopoulos terms shows up in the covariant derivatives of all fermions. In addition, since the relevant local U(1) symmetry is a gauged R-symmetry [@toine2], the constant Fayet-Iliopoulos terms also show up in the supersymmetry transformation laws. In Ref. [@toine1], all corrections of order $g\xi/M_{\rm Pl}^2$ to the classical SUGRA action required by local supersymmetry were presented.
Under U(1) gauge transformations in the directions in which there are constant Fayet-Iliopoulos terms $\xi$, the superpotential must transform as [@toine2] $$\delta W=-i\frac{g\xi}{ M_{\rm Pl}^2}W~,$$ otherwise the constant Fayet-Iliopoulos term $\xi$ vanishes. This requirement is consistent with the fact that in the gauge theory at $M_{\rm Pl}\rightarrow \infty$ the potential is U(1) invariant. To promote the simple SUSY D-term inflation model, Eq. (\[superpotD\]), to SUGRA with constant Fayet-Iliopoulos terms, one has to change the charge assignments of the chiral superfields, so that the superpotential transforms under the local R-symmetry [@toine1]. In SUSY, the D-term potential is neutral under the U(1) symmetry, while in SUGRA the total charge of the $\Phi_\pm$ fields does not vanish but is equal to $-\xi/M_{\rm Pl}^2$. More precisely, the D-term contribution to the scalar potential $V$ \[see Eq. (\[VtotD\])\] should be replaced by $(g^2/2)(q_+|\phi_+|^2+q_-|\phi_-|^2+\xi)^2$, where $$q_\pm=\pm 1-\rho_\pm\frac{\xi}{ M_{\rm Pl}^2}\ \ \ \ \mbox {with} \ \ \ \ \rho_++\rho_-=1~.$$ In addition, the squared masses of the scalar components $\phi_\pm$ become $$m^2_{\pm}=\lambda^2|S|^2 \exp\left(|S|^2/M^2_{\rm Pl}\right)\pm g^2 \xi q_\pm~;$$ the Dirac fermion mass remains unchanged. However, since for the limits we imposed on the Fayet-Iliopoulos term $\xi$ the correction $\xi/M_{\rm Pl}^2$ is $\sim 10^{-6}$, I conclude that our results remain valid in the revised version of D-term inflation within SUGRA.

Superstrings as Cosmic Strings Candidates
=========================================

In the context of perturbative string theory, superstrings of cosmic size were excluded, mainly because they would have too large a tension. More precisely, perturbative strings have a tension close to the Planck scale, producing CMB inhomogeneities far larger than observed.
Moreover, since the scale of their tension also exceeds the upper bound on the energy scale of the inflationary vacuum, such strings could only have been produced before inflation, and would therefore have been diluted. In addition, there are instabilities that would prevent such long strings from surviving on cosmic time scales [@witten]. Thus, for years there was a clear distinction between fundamental strings and cosmic strings. Recently, this whole picture has changed. In addition to the fundamental F-strings, there are also D-strings as a particular case of higher-dimensional D$p$-branes (D stands for Dirichlet and $p$ denotes the dimensionality of the brane), partially wrapped on compact cycles, resulting in only one noncompact dimension. In the braneworld approach, our Universe represents a D3-brane on which open strings can end [@Pol], embedded in a higher-dimensional space, called the bulk. Brane interactions can unwind and evaporate higher-dimensional D$p$-branes so that we are left with D3-branes embedded in a higher-dimensional bulk; one of these D3-branes plays the rôle of our Universe [@dks]. Since gauge charges are attached to the ends of strings, gauge particles and fermions can propagate only along the D3-branes, while gravitons (and dilatons, ...), which are closed string modes, can move in the bulk. Since gravity has been probed only down to scales of about $0.1$ mm, the dimensions of the bulk can be much larger than the string scale. In the braneworld context, the extra dimensions can even be infinite, if the geometry is nontrivial [@RSII]. Large extra dimensions can be employed to address the hierarchy problem [@Ark], a result which led to an increasing interest in braneworld scenarios. Apart from the D$p$-branes, there are also antibranes, $\bar Dp$-branes, which differ from the D$p$-branes by having an equal and opposite conserved Ramond-Ramond charge, which implies an attractive force between them. Braneworld cosmology can also offer a natural inflationary scenario.
Assuming the early Universe contained an extra brane and antibrane, an inflationary era could be driven by the potential between the two branes, with the separation between the branes playing the rôle of the inflaton. The inflaton potential is rather flat when the branes are separated and steepens as they approach, until at some point a field becomes tachyonic, which indicates an instability leading to a rapid brane-antibrane annihilation. D-brane-antibrane inflation leads to the abundant production of lower-dimensional D-branes that are one-dimensional in the noncompact directions [@dstr]. Luckily, zero-dimensional defects (monopoles) and two-dimensional ones (domain walls), which are cosmologically undesirable, are not produced. In these models, the large compact dimensions and the large warp factors allow cosmic superstring tensions to be in the range $10^{-11}< G\mu < 10^{-6}$, depending on the model. Cosmic superstrings share a number of properties with cosmic strings, but there are also differences which may lead to distinctive observational signatures. String intersections lead to intercommutation and loop production. For cosmic strings the probability of intercommutation ${\cal P}$ is equal to 1, whereas this is not the case for F- and D-strings. Clearly, D-strings can miss each other in the compact dimension, leading to a smaller ${\cal P}$, while for F-strings the scattering has to be calculated quantum mechanically, since these are quantum mechanical objects. The collisions between all possible pairs of superstrings have been studied in string perturbation theory [@jjp]. For F-strings, the reconnection probability is of the order of $g_{\rm s}^2$, where $g_{\rm s}$ stands for the string coupling. For F-F string collisions, it was found [@jjp] that the reconnection probability $\cal P$ lies in the range $10^{-3}\lsim {\cal P}\lsim 1$. For D-D string collisions, one has $10^{-1}\lsim{\cal P}\lsim 1$.
Finally, for F-D string collisions, the reconnection probability can take any value between 0 and 1. These results have been confirmed [@hh1] by a quantum calculation of the reconnection probability for colliding D-strings. Similarly, the string self-intersection probability is reduced. Moreover, when D- and F-strings meet they can form a three-string junction, with a composite DF-string. It is also possible in type IIB string theory to have bound $(p,q)$ states of $p$ F-strings and $q$ D-strings, where $p$ and $q$ are coprime. This leads to the question of whether there are frozen networks dominating the matter content of the Universe, or whether scaling solutions can be achieved. To study the evolution of cosmic superstrings, I have performed numerical simulations [@ms] of independent stochastic networks of D- and F-strings. I found that the characteristic length scale $\xi$, which gives the typical distance between the nearest string segments and the typical curvature of strings, grows linearly with time $$\xi(t)\propto \zeta t ~,$$ where the slope $\zeta$ depends on the reconnection probability ${\cal P}$ and on the energy of the smallest allowed loops (i.e., the energy cutoff). For reconnection (or intercommuting) probability in the range $10^{-3}\lsim {\cal P} \lsim 0.3$, I found [@ms] $$\zeta \propto \sqrt{\cal P} \Rightarrow \xi(t)\propto \sqrt{\cal P} t~, \label{law}$$ in agreement with my old results [@sv]. I thus disagree with the statement that $\xi(t)\propto {\cal P} t$. In Ref. [@jst] it is claimed that the energy density of long strings $\rho_{\rm l}$ evolves as $\dot\rho_{\rm l}=2(\dot a/a)\rho_{\rm l}-{\cal P}(\rho_{\rm l}/\xi)$, where $H=\dot a/a$ is the Hubble parameter. Then, substituting the ansatz $\xi(t)=\gamma(t)t$, the authors of Ref. [@jst] obtain $\dot\gamma=-[1/(2t)](\gamma-{\cal P})$ during the radiation dominated era. Since this equation has a stable fixed point at $\gamma(t)={\cal P}$, the authors state [@jst] that $\xi\simeq {\cal P} t$.
My disagreement with Ref. [@jst] is based on the fact that intersections between two long strings are not the most efficient mechanism for energy loss of the string network. The possible string intersections can be divided into three cases: (i) two long strings collide in one point and exchange partners with intercommuting probability ${\cal P}_1$; (ii) two strings collide in two points and exchange partners, chopping off a small loop, with intercommuting probability ${\cal P}_1^2$; and (iii) one long string self-intersects in one point and chops off a loop with intercommuting probability ${\cal P}_2$, which in general is different from ${\cal P}_1$. Clearly, only cases (ii) and (iii) lead to closed loop formation and therefore remove energy from the long string network. Between cases (ii) and (iii), only case (iii) is an efficient way of forming loops and therefore dissipating energy. I have checked numerically [@ms] that case (iii) appears more often than case (ii), and besides, case (ii) has in general a smaller probability, since one expects that ${\cal P}_1\sim {\cal P}_2$. However, the heuristic argument employed in Ref. [@jst] does not refer to self-string intersections (i.e., case (iii)); it only applies to intersections between two long strings. This is clear since intersections between two long strings depend on the string velocity, whereas self-string intersections should not depend on how fast the string moves. In other words, a string can intersect itself even if it does not move but just oscillates locally. Studying the time evolution of the slope $\zeta$, I found [@ms] that it reaches a constant value at roughly the same time $t$ for various values of ${\cal P}$, which implies that the long strings reach scaling. This result has been confirmed by studying numerically the behavior of a network of interacting Dirichlet-fundamental strings $(p,q)$ in Ref. [@ep].
To model $(p,q)$ strings arising from compactifications of type IIB string theory, the authors of Ref. [@ep] studied the evolution of nonabelian string networks. The positive element of such nonabelian networks is that they contain multiple vertices where many different types of string join together. Such networks have the potential of leading to a string-dominated Universe due to tangled networks of interacting $(p,q)$ strings that freeze. It was shown [@ep] that such freezing does not take place and the network reaches a scaling limit. In this field theory approach, however, strings are not allowed to have different tensions, which is a characteristic property of cosmic superstrings. Recently, this has been addressed in the context of modelling $(p,q)$ cosmic superstrings [@tww]. It was found that such networks rapidly approach a stable scaling solution, where, once scaling is reached, only a small number of the lowest-tension states is populated substantially. An interesting question is to find out whether the field theory approach of Ref. [@ep] mimics the results of the modelling approach of Ref. [@tww]. The cosmic superstring network is characterised [@ms] by two components: there are a few long strings with a scale-invariant evolution, whose characteristic curvature radius, as well as the typical separation between two long strings, are both comparable to the horizon size, $\xi(t)\simeq {\sqrt {\cal P}} t$; and there is a large number of small closed loops having sizes $\ll t$. Assuming there are string interactions, the long string network will reach an asymptotic energy density $$\rho_{\rm l}=\frac{\mu}{{\cal P} t^2}~.$$ Thus, for fixed linear mass density, the cosmic superstring energy density may be higher than in the field theory case, but at most only by one order of magnitude.
More precisely, the fraction of the total density in the form of strings in the radiation-dominated era reads $$\frac{\rho_{\rm str}}{\rho_{\rm total}}=\frac{32\pi}{ 3} \frac{G\mu}{{\cal P}}~.$$ Oscillating string loops lose energy by emitting graviton, dilaton and Ramond-Ramond (RR) fields. Accelerated cosmic strings are sources of gravitational radiation, in particular from the vicinity of the cusps, where the string velocity approaches the speed of light. Similarly, cosmic superstrings emit gravity waves, but since the intercommutation probability is less than unity, their network is denser, with more cusps, resulting in an enhancement of the emitted gravitational radiation. As was pointed out in [@dv], the gravitational wave bursts emitted from cusps of oscillating string or superstring loops could be detectable with the gravitational-wave interferometers LIGO/VIRGO and LISA. One can place constraints on the energy scale of cosmic strings from the observational bounds on dilaton decays [@tdv]. Considering that the dilaton lifetime is in the range $10^7{\rm s}\lsim \tau\lsim t_{\rm dec}$, I obtained an upper bound $\eta\lsim {\cal P}^{-1/3}\, 10^{11}\,{\rm GeV}$ for the energy scale of cosmic superstrings, which determines the critical temperature for the transition leading to string formation. A lower reconnection probability allows a higher energy scale of strings, at most by one order of magnitude.

Cosmic Strings in the Sky
=========================

As a theoretician, I believe that it is of great importance to get observational support for the existence of cosmic strings. Unfortunately, until recently the attempts to find cosmic strings in the sky were unsuccessful. A Russian-Italian collaboration claims to have found the first signature of a cosmic string in the sky. More precisely, the authors of Refs.
[@CSL1a; @CSL1b; @CSL1c] point out that the peculiar properties of the gravitational lens CSL-1 (Capodimonte - Sternberg Lens Candidate no. 1) could only be explained as the first case of lensing by a cosmic string. CSL-1, found in the OACDF (Osservatorio Astronomico di Capodimonte - Deep Field), consists of two identical images, separated by $1.9''$. The two sources have very similar morphology, namely they consist of a bright nucleus surrounded by a faint halo with undistorted and quite circular isophotes. The most relevant feature of these images is indeed that their isophotes appear to be undistorted. The photometric and spectroscopic analysis performed [@CSL1a; @CSL1c] revealed that both components of CSL-1 are giant elliptical galaxies at redshift $z=0.46$. The possibility that CSL-1 could be interpreted as the projection of two giant elliptical galaxies, identical at a $99\%$ confidence level, has been disregarded [@CSL1a] as unlikely. Moreover, the peculiar properties of CSL-1 cannot be explained in terms of lensing by a compact lens model, since a usual gravitational lens created by a bound clump of matter leads to inhomogeneous gravitational fields which always distort extended background images. Thus, the favoured explanation of CSL-1 in the framework of gravitational lensing is, according to the authors of Refs. [@CSL1a; @CSL1b], that of lensing by a cosmic string. Assuming that CSL-1 is indeed the first lensing event by a cosmic string, the observed separation of the two images corresponds to a particular value of the deficit angle, which implies that $G\mu>4\times 10^{-7}$; for kinky strings $G\mu$ could be less. As was recently pointed out [@bsmw], high string velocities enhance lensing effects by a factor $1/\sqrt{1-{\bf v}^2}$, where ${\bf v}$ stands for the string velocity. This decreases the lower bound on $G\mu$ placed by CSL-1. In Ref. [@CSL1b], the authors study the statistics of lens candidates in the vicinity of CSL-1.
They claim that they expect 7-9 lens candidates, which is a relatively high number with respect to the one expected from normal gravitational lens statistics. This excess of gravitational lens candidates in the neighborhood of CSL-1 is claimed [@CSL1b] to be compatible with the cosmic string scenario proposed in Ref. [@CSL1a]. As the authors themselves state [@CSL1b], however, only once spectroscopic studies of these candidates are available will we be able to extract robust conclusions. It is crucial to confirm or refute this finding by further and independent studies.

Conclusions
===========

In this lecture I presented the story of cosmic strings, as we know it at present. Cosmic strings are expected to be generically formed in a large class of models based on SUSY GUTs. If the predictions of cosmic strings were inconsistent with the various measurements, then either the theories which predict the formation of cosmic strings would be altogether wrong, or the models would have to be made more complicated to avoid strings formation. Luckily, neither is needed. The free parameters of the models can be constrained so that there is agreement between predictions and measurements. Cosmological inflation is an attractive model with, however, too many possible choices. It is crucial to find out which are the natural inflationary models and to constrain their free parameters. Therefore, the constraints imposed by the cosmological implications of cosmic strings are indeed important. The recent proposal that cosmic superstrings can be considered as cosmic strings candidates opens new perspectives from the theoretical point of view. Therefore, even though cosmic strings cannot play a dominant rôle in structure formation, one has to consider them as a sub-dominant partner of inflation. The possible observational support which was announced recently is of course a major issue, and cosmic strings have just entered a new flourishing era.
It is a pleasure to thank the organizers, and in particular Mariusz Dabrowski, for the interesting and stimulating meeting [*Pomeranian Workshop in Fundamental Cosmology*]{}. I would like also to thank my colleagues with whom I have collaborated through the years on the various aspects covered in this lecture.

[10]{} A. Vilenkin and E. P. S. Shellard, [*Cosmic Strings and Other Topological Defects*]{} (Cambridge University Press, Cambridge, England, 2000). M. B. Hindmarsh and T. W. B. Kibble, Rep. Prog. Phys. **58**, 477 (1995). T. W. B. Kibble, J. Phys. A **9**, 387 (1976). N. Turok, Phys. Rev. Lett. **63**, 2625 (1989). T. Vachaspati and M. Barriola, Phys. Rev. Lett. **69**, 1867 (1992). J. Martin, A. Riazuelo and M. Sakellariadou, Phys. Rev. D **61**, 083518 (2000). A. Gangui, J. Martin and M. Sakellariadou, Phys. Rev. D **66**, 083502 (2002). R. Durrer, A. Gangui and M. Sakellariadou, Phys. Rev. Lett. **76**, 579 (1996). R. Durrer, M. Kunz and A. Melchiorri, Phys. Rev. D **59**, 123005 (1999). N. Turok, U.-L. Pen and U. Seljak, Phys. Rev. D **58**, 023506 (1998). U.-L. Pen, U. Seljak and N. Turok, Phys. Rev. Lett. **79**, 1611 (1997). B. Allen et al., Phys. Rev. Lett. **79**, 2624 (1997). C. Contaldi, M. Hindmarsh and J. Magueijo, Phys. Rev. Lett. **82**, 679 (1999). A. T. Lee et al., Astrophys. J. **561**, L1 (2001); R. Stompor et al., Astrophys. J. **561**, L7 (2001). C. B. Netterfield et al., Astrophys. J. **571**, 604 (2002); P. De Bernardis et al., Astrophys. J. **564**, 559 (2002). N. W. Halverson et al., Astrophys. J. **568**, 38 (2002); C. Pryke et al., Astrophys. J. **568**, 46 (2002). C. L. Bennett et al., Astrophys. J. Suppl. **148**, 1 (2003). Y. Fukuda et al. \[Super-Kamiokande Collaboration\], Phys. Rev. Lett. **81**, 1562 (1998). Q. R. Ahmad et al. \[SNO Collaboration\], Phys. Rev. Lett. **87**, 071301 (2001). K. Eguchi et al. \[KamLAND Collaboration\], Phys. Rev. Lett. **90**, 021802 (2003). R. Jeannerot, J.
Rocher and M. Sakellariadou, Phys. Rev. D **68**, 103514 (2003). P. Horava and E. Witten, Nucl. Phys. B **460**, 506 (1996). J. Rocher and M. Sakellariadou, unpublished. F. R. Bouchet, P. Peter, A. Riazuelo and M. Sakellariadou, Phys. Rev. D **65**, 021301 (2002). L. Pogosian, M. Wyman and I. Wasserman, J. Cosmol. Astropart. Phys. **09**, 008 (2004). M. Wyman, L. Pogosian and I. Wasserman, Phys. Rev. D **72**, 023513 (2005). E. Calzetta and M. Sakellariadou, Phys. Rev. D **45**, 2802 (1992); **47**, 3184 (1993). G. Dvali, Q. Shafi and R. Schaefer, Phys. Rev. Lett. **73**, 1886 (1994). G. Lazarides, *Inflationary cosmology*, \[arXiv:hep-ph/0111328\]. V. N. Senoguz and Q. Shafi, Phys. Lett. B **567**, 79 (2003). J. Rocher and M. Sakellariadou, JCAP **0503**, 004 (2005). M. Landriau and E. P. S. Shellard, Phys. Rev. D **69**, 023003 (2004). R. Kallosh and A. Linde, JCAP **0310**, 008 (2003). D. H. Lyth and D. Wands, Phys. Lett. B **524**, 5 (2002). T. Moroi and T. Takahashi, Phys. Lett. B **522**, 215 (2001); Erratum-ibid. B **539**, 303 (2002). R. Jeannerot, S. Khalil, G. Lazarides and Q. Shafi, JHEP **0010**, 012 (2000). G. Lazarides and C. Panagiotakopoulos, Phys. Rev. D **52**, 559 (1995). T. Watari and T. Yanagida, Phys. Lett. B **589**, 71 (2004). J. Urrestilla, A. Achúcarro and A. C. Davis, Phys. Rev. Lett. **92**, 251302 (2004). J. Rocher and M. Sakellariadou, Phys. Rev. Lett. **94**, 011303 (2005). A. Van Proeyen, Fortsch. Phys. **53**, 997 (2005). P. Binetruy, G. Dvali, R. Kallosh and A. Van Proeyen, Class. Quant. Grav. **21**, 3137 (2004). E. Witten, Phys. Lett. B **153**, 243 (1985). J. Polchinski, *String theory. Vol. II: Superstring theory and beyond*, Cambridge University Press (1998). R. Durrer, M. Kunz and M. Sakellariadou, Phys. Lett. B **614**, 125 (2005). L. Randall and R. Sundrum, Phys. Rev. Lett. **83**, 4690 (1999). N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B **429**, 263 (1998). S. Sarangi and S.-H. H. Tye, Phys. Lett.
B **536**, 185 (2002). M. G. Jackson, N. T. Jones and J. Polchinski, [*Collisions of Cosmic F- and D-strings*]{} \[arXiv:hep-th/0405229\]. A. Hanany and K. Hashimoto, JHEP **0506**, 021 (2005). M. Sakellariadou, JCAP **0504**, 003 (2005). M. Sakellariadou and A. Vilenkin, Phys. Rev. D **42**, 349 (1990). N. T. Jones, H. Stoica and S.-H. H. Tye, Phys. Lett. B **563**, 6 (2003). E. Copeland and P. Saffin, [*On the evolution of cosmic-superstring networks*]{} \[arXiv:hep-th/0505110\]. S.-H. H. Tye, I. Wasserman and M. Wyman, Phys. Rev. D **71**, 103508 (2005); Erratum-ibid. D **71**, 129906 (2005). T. Damour and A. Vilenkin, Phys. Rev. D **71**, 063510 (2005). T. Damour and A. Vilenkin, Phys. Rev. Lett. **78**, 2288 (1997). M. V. Sazhin et al., MNRAS **343**, 353 (2003). M. V. Sazhin et al., [*Lens candidates in the Capodimonte Deep Field in the vicinity of the CSL1 object*]{} \[arXiv:astro-ph/0406516\]. M. V. Sazhin et al., [*Further spectroscopic observations of the CSL-1 object*]{} \[arXiv:astro-ph/0506400\]. B. Shlaer and M. Wyman, [*Cosmic superstring gravitational lensing phenomena: predictions for networks of $(p,q)$ strings*]{} \[arXiv:hep-ph/0509177\].

[^1]: Corresponding author: e-mail: [mairi.sakellariadou@kcl.ac.uk]{}, Phone: +44(0)2078481535, Fax: +44(0)2078482420
---
abstract: |
    Regions of nested loops are a common feature of High Performance Computing (HPC) codes. In shared memory programming models, such as OpenMP, these structures are the most common source of parallelism. Parallelising these structures requires the programmer to make a static decision on how parallelism should be applied. However, depending on the parameters of the problem and the nature of the code, static decisions on which loop to parallelise may not be optimal, especially as they do not enable the exploitation of any runtime characteristics of the execution. Changes to the iterations of the loop which is chosen to be parallelised might limit the number of processors that can be utilised. We have developed a system that allows a code to make a dynamic choice, at runtime, of what parallelism is applied to nested loops. The system works using a source-to-source compiler, which we have created, to perform transformations to the user's code automatically, through a directive-based approach (similar to OpenMP). This approach requires the programmer to specify how the loops of the region can be parallelised and our runtime library is then responsible for making the decisions dynamically during the execution of the code. Our method for providing dynamic decisions on which loop to parallelise significantly outperforms the standard methods for achieving this through OpenMP (using if clauses), and further optimisations were possible with our system when addressing simulations where the number of iterations of the loops changes during the runtime of the program or where loops are not perfectly nested.
author:
- 
bibliography:
- 'IEEEabrv.bib'
- 'dynamicloop.bib'
title: Dynamic Loop Parallelisation
---

Introduction
============

High Performance Computing (HPC) codes, and in particular scientific codes, require parallel execution in order to achieve significant performance gains.
Depending on the underlying parallel platform used, programmers use different programming models in order to achieve parallel execution. In distributed memory systems, the message passing programming model is the most commonly used approach for applying parallelism in codes. In shared memory systems, however, an attractive choice for parallel programming is OpenMP [@openmp]. The parallelisation of codes with OpenMP is often achieved with loop parallelisation. As long as the iterations of a loop are independent, they can be distributed to the available processors of the system in order to execute them in parallel. A programmer is required to specify a loop that can be parallelised by placing compiler directives before the loop, resolving any dependency issues between the iterations beforehand. HPC codes often consist of regions with nested loops of multiple levels. In order to parallelise these regions, a choice must be made on how parallelism should be applied to the loops. Even though OpenMP supports a variety of strategies for parallelising nested loops, only a single one can be used to parallelise the code. A static choice, however, cannot exploit any runtime characteristics during the execution of the program. Changes in the input parameters of the executable which affect the iterations of the loops may render the parallelisation decision suboptimal. In addition to this, the iterations of a loop can change at runtime due to the nature of the code. A common feature of HPC codes is to organise the data into hierarchies, for example blocks of multi-dimensional arrays. Depending on the problem, the blocks can have different shapes and sizes. These parameters affect the loops that are responsible for accessing this data. In some situations, a static decision has the potential to impose a limitation on the number of processors that can be used for the parallel execution of the loops.
With the current trend of chip manufacturers increasing the number of cores in their processors with each generation, leading to larger and larger shared memory systems being readily available to computational scientists on the desktop and beyond, a more dynamic approach must be considered for taking such decisions. This report outlines our investigations into various strategies that can be applied at runtime in order to make a dynamic decision on how to parallelise a region with nested loops. Our approach is to automatically perform modifications to the user's code before compilation in order to enable the code to make these decisions dynamically at runtime. Specifically, we investigated the possibility of having multiple versions of a loop within a region of nested loops in order to make a dynamic choice on whether a loop should be executed sequentially or in parallel.

OpenMP
======

OpenMP [@openmp] is, arguably, the dominant parallel programming model currently used for writing parallel programs for shared memory parallel systems. Now at version 3.1, and supported by C, C++ and Fortran, OpenMP operates using compiler directives. The programmer annotates their code specifying how it should be parallelised. The compiler then transforms the original code into a parallel version when the code is compiled. By providing this higher level of abstraction, OpenMP codes tend to be easier to develop, debug and maintain. Moreover, with OpenMP it is very easy to develop the parallel version of a serial code without any major modifications. Whilst there are a number of different mechanisms that OpenMP provides for adding parallel functionality to programs, the one that is generally used most often is loop parallelisation. This involves taking independent iterations of loops and distributing them to a group of threads that perform these sets of independent operations in parallel.
Since each of the threads can access shared data, it is generally straightforward to parallelise any loop with no structural changes to the program.

Nested Loops
============

HPC codes, and particularly scientific codes, deal with numerical computations based on mathematical formulas. These formulas are often expressed in the form of nested loops, where a set of computations is applied to a large amount of data (generally stored in arrays), and parallelisation can be applied to each loop individually. The arrays often consist of multiple dimensions, and the data is accessed through nested loops. Furthermore, it is not uncommon for the data to be arranged in multiple hierarchies, most commonly in blocks of multi-dimensional arrays, where additional loops are required in order to traverse all the data. When such code is presented, a choice must be made on which loop level to parallelise (where the parallelisation should occur) [@Duran04runtimeadjustment]. A summary of the available strategies is presented in Table \[tab:nestedloops\].

  [**Name**]{}      [**Description**]{}
  ----------------- ----------------------------------------------------------------
  Outermost Loop    Parallelisation of the outermost loop
  Inner Loop        Parallelisation of one of the inner loops
  Nested            Parallelisation of multiple loops with nested parallel regions
  Loop Collapsing   Collapsing the loops into a single big loop
  Loop Selection    Runtime loop selection using if clauses

  : Strategies for parallelising nested loop regions[]{data-label="tab:nestedloops"}

Outermost loop
--------------

The most commonly used approach is to parallelise the outermost loop of a nested loop region, as shown in Listing \[alg:outerloop\]. Using this strategy, the iterations of the loop are distributed to the members of the thread team. The threads operate in parallel, each executing the portion of the iterations assigned to it.
The nested loops of the parallel region are executed in a sequential manner.

    #pragma omp parallel for private (j)
    for(i = 0; i < I; i++){
        for(j = 0; j < J; j++){
            work();
        }
    }

Parallelising the outermost loop is often a good choice, as it minimises the parallel overheads of the OpenMP implementation (such as the initialisation of the parallel region, the scheduling of loop iterations to threads and the synchronisation which takes place at the end of the parallel loops). More extensive work on the overheads of various OpenMP directives can be found in [@Chen:1990:ISG:325164.325150]. Despite the advantages of the Outermost Loop parallelisation strategy in this context, there are drawbacks to this choice. The maximum amount of available parallelism is limited by the number of iterations of the outermost loop. Considering the example code in Listing \[alg:outerloop\], it is only possible to have $I$ tasks being executed in parallel. This restricts the number of threads the code can utilise upon execution, and therefore the number of processors or cores that can be exploited.

Inner loop
----------

This is a variant of the outermost loop strategy, with the difference that one of the inner loops of the region is chosen to be parallelised. This approach will only be required or beneficial if the outer loop does not have enough iterations to parallelise efficiently, as this variant introduces additional parallelisation overheads by requiring the parallelisation to be performed for each iteration of the outer loop rather than once for all the loops (as shown in Listing \[alg:innerloop\]). Further nesting of the parallelisation (at deeper loop levels) will further increase the performance problems: the parallel overheads are incurred many more times, whereas the amount of work in each iteration becomes finer.
    for(i = 0; i < I; i++){
        #pragma omp parallel for shared (i)
        for(j = 0; j < J; j++){
            work();
        }
    }

Another issue with this strategy arises when the loops are not perfectly nested. In this situation, where there are computations in-between the loops, as shown in Listing \[alg:poorlynestedloop\], parallelising a loop at a deeper level will result in sequential execution of that in-between work. Depending on how much of the execution time is thereby serialised, this approach has the potential to increase the execution time of the code.

    for(i = 0; i < I; i++){
        somework();
        for(j = 0; j < J; j++){
            otherwork();
        }
    }

Nested
------

The Nested parallelisation strategy exploits the fact that more than one loop can be executed in parallel. By opening multiple nested parallel regions at different loop levels, as presented in Listing \[alg:nestedloop\], more threads can be utilised during the parallel execution of the code. Unlike the Outermost Loop and Inner Loop approaches, which can utilise at most as many threads as the iterations of the loop with the largest iteration count, this strategy can exploit further parallelisation opportunities. Other studies have shown that nested parallelism can give good results on systems with a large number of processors [@Tanaka00performanceevaluation; @Ayguade:2006:ENO:1143496.1143504].

    #pragma omp parallel for private (j)
    for(i = 0; i < I; i++){
        #pragma omp parallel for shared (i)
        for(j = 0; j < J; j++){
            work();
        }
    }

Loop Collapsing
---------------

The loop collapsing strategy takes a different approach to exposing additional parallelism within nested loop regions. By performing code transformations, multiple nested loops are combined, or collapsed, into a single loop. The newly created loop has a larger number of iterations, which can be distributed to the threads.
As of version 3.0, OpenMP supports loop collapsing through the COLLAPSE clause of the Loop Construct, requiring the programmer to provide the number of loop levels to collapse. To be able to use the COLLAPSE clause the loops have to be perfectly nested (i.e. no code between the loops) and the combined iteration count (the product of the individual loop counts) must divide regularly. Loop collapsing can produce better results than both the inner loop and nested loop strategies, since the parallel overheads are minimal; however, it is not always available, either because not all compilers support OpenMP version 3.0, or because the conditions outlined above cannot be met.

    #pragma omp parallel for collapse (3)
    for(i = 0; i < I; i++){
        for(j = 0; j < J; j++){
            for(k = 0; k < K; k++){
                work();
            }
        }
    }

Loop Selection
--------------

OpenMP already provides a way of forcing a parallel region to execute sequentially with the use of the [if]{} clause on OpenMP directives. The [if]{} clause, of the form $if(scalar-expression)$, is used to determine at runtime whether the code enclosed in the parallel region should execute sequentially or in parallel. When the scalar expression of the clause evaluates to 0, the region is executed sequentially; any other value results in parallel execution. However, a new parallel region is always created in either case: the presence of the [if]{} clause only affects the number of threads that get assigned to the parallel region. When sequential execution is triggered, the code is executed only by the master thread; for parallel execution, all threads execute the code. Furthermore, with the [if]{} clause, programmers are still required to manually write the code which makes the decision, construct sensible scalar expressions to be evaluated, and manually parallelise each loop that is a potential target for parallelisation.
Dynamic loop parallelisation {#sec:dynamicloop}
============================

One of the motivators for this work was a parallelisation that was undertaken of a finite-volume cell-centred structured Navier-Stokes code for Computational Fluid Dynamics (CFD) simulation. It is a structured mesh, multigrid code which works with multiblock grids, and includes a range of CFD solvers including steady state, time-domain dual time-stepping, frequency-domain harmonic balance, and time-domain Runge-Kutta. The general pattern for the computations within the code is shown in Listing \[alg:exampleloops\]. Whilst this type of computational pattern is not uncommon for scientific codes, one of the challenges for parallelisation is that, as the code can use a range of different methods, the ranges of these loops can vary. For instance, when performing a time-domain simulation the [*harmonic*]{} loop has a single iteration, whereas when performing a harmonic balance simulation it can have a range of values, generally between 2 and 16. Furthermore, it is not uncommon to run large simulations with a single block, or a small number of blocks, meaning that the [*block*]{} loop has a very small number of iterations. Finally, each block in the simulation can have different values for its dimensions. In theory, the loop collapsing strategy would be ideal for this type of simulation code, as it would enable parallelisation without having to deal with the varying sizes of the nested loops. However, it cannot be guaranteed that for all input datasets the loop iterations can be regularly divided, and there are also particular areas of the code where the loops are not perfectly nested.
    for(iter = 0; iter < n_iters; iter++){
        for(block = 0; block < n_blocks; block++){
            for(harmonics = 0; harmonics < n_harmonics; harmonics++){
                for(j_cell = 0; j_cell < n_cells_j; j_cell++){
                    for(i_cell = 0; i_cell < n_cells_i; i_cell++){
                        perform computations;
                    }
                }
            }
        }
    }

Given the different techniques that can be used to parallelise nested loops, the occurrence of nested loops in many scientific simulation codes, and the fact that the loop iterations of nested loops can change for different input datasets or when performing different functions within a code, we wanted a system that enables a selection of different parallelisation choices to be available to the code at runtime, when the specific ranges of the nested loops are known. Our strategy for providing this functionality is to create code, based on the provided user code, that can perform a parallelisation of any of the nested loops, and to add decision making algorithms that dynamically choose, at runtime, which parallelisation is used. Specifically, we have created tools that create multiple versions of a loop within a region of nested loops in order to make a dynamic choice on whether a loop should execute sequentially or in parallel.

In general, code duplication is considered bad programming practice as it can, amongst other issues, lead to update anomalies (where not all instances of the functionality are modified when changes occur) and thus damage the maintainability of the code. However, if the duplicate code (in our case the serial and parallel versions of each loop in the nested loop structure) can be generated automatically from standard user code, then it will not adversely affect the maintainability of the user program. We created a source-to-source compiler that recognises compiler directives within the user's source code and uses them to pre-process the source code and generate a program that has the alternative parallelisation strategies encapsulated within it.
By exposing a simple interface to programmers through compiler directives, similar to the already familiar OpenMP compiler directives, we can automatically provide the dynamic parallelisation functionality without requiring significant changes to the original source code. Furthermore, this approach gives users the choice of enabling or disabling our functionality with minimum effort. To complement the code duplication we have also implemented functionality (in a small runtime library) that is responsible for automatically deciding which parallelisation to perform. The decision functionality considers the number of iterations of a loop in order to choose a parallelisation strategy that makes best use of the processors or cores available. Our implementation is currently limited to parallelising a single loop of a nested loop region, taking advantage of only the Outermost and Inner loop strategies. Other authors [@Duran04runtimeadjustment] have already taken a similar approach by modifying the OpenMP runtime library in order to make these decisions dynamically. However, applying this logic in the OpenMP runtime library limits the implementation to a specific compiler. With our source-to-source compiler approach we aim to transfer the logic into user code in order to maintain the portability of our solution. In addition to simple heuristics, we also explored the idea of a profile-based approach, detecting the best possible parallelisation strategy at runtime through time measurements. A heuristics-based approach alone cannot capture any information about the amount of actual computation when making a decision on parallelising a loop. Whilst this is generally irrelevant for perfectly nested loops (as all the work is in the innermost loop), it may have more of an impact where there is work between the different loops as well.
There may also be situations where an inner loop has slightly more iterations than an outer loop, and so would be chosen by a simple heuristic as the place where the parallelisation occurs, but the overheads associated with parallelising that inner loop actually make this a suboptimal choice. Providing a profiling-based decision mechanism may help with both these scenarios, and enable us to identify situations where, for instance, using fewer threads to parallelise an outer loop might provide a better execution time. The idea of an auto-tuning code has already been proposed by other compiler researchers [@Hall_looptransformation; @Pluto_auto] for producing optimised code; we apply similar logic.

Source-to-source compiler {#sec:s2scompiler}
=========================

Our source-to-source compiler acts as a preprocessor for C code which can contain OpenMP directives, as well as our own directives. The compiler parses the code and creates an internal representation of it in the form of an Abstract Syntax Tree (AST). The regions of the input code that contain our directives are translated into the semantics of the C programming language and OpenMP directives during the parse phase, and appropriate nodes for these regions are placed in the AST. The created AST is then translated back into C code with OpenMP directives. This generated code is then compiled using a standard, OpenMP-enabled C compiler to produce a parallel executable (this process is illustrated in Figure \[fig:compileseq\]).

![Compilation process using the source-to-source compiler[]{data-label="fig:compileseq"}](newcompileseq){width="2.5in"}

Our compiler, implemented using the Lua [@Lua] programming language along with the Lpeg [@Lpeg] parsing library, recognises a number of our own bespoke compiler directives of the form $\#pragma\ preomp$. A loop that is preceded by a $\#pragma\ preomp\ for$ directive is considered by our compiler as a suitable candidate for applying parallelisation.
When such a loop is found, our compiler performs the necessary code transformations so that a decision can be made at runtime whether the loop should run sequentially or in parallel (and ensures that both the sequential and parallel versions of the loop are available in the executable at runtime). In addition, a simple analysis of the loop is performed in order to facilitate the computation of the loop's iteration count when the decision is made. An example of such a code is presented in Listing \[alg:preompfor\].

    #pragma preomp parallel for private(j)
    for(i=0; i<I; i++){
        #pragma preomp parallel for shared(i)
        for(j=0; j<J; j++){
            work();
        }
    }

Furthermore, we also extend the grammar to support an additional clause, the $parallel\_threshold(expression)$ clause. This clause is optional; when it is not present the compiler assumes a default value of 1.0. It is used to allow control over when a loop is parallelised, and will be discussed further in Section \[sec:decfuns\].

Code Duplication
----------------

The main function of the source-to-source compiler is to take the original user code and duplicate the loops to be parallelised, so that both serial and parallel versions of those loops can be selected at runtime. As previously mentioned, our system only allows one loop to be parallelised at any given time (although which loop is parallelised can change over the runtime of a program as the parameters of the loops change), but both the serial and parallel versions of all the loops to be parallelised must appear in the executable to enable a selection at runtime to take place. When a loop is preceded by a $\#pragma\ preomp\ for$ directive, the loop is duplicated and wrapped in a normal $if-else$ statement which evaluates a decision function from our runtime library and selects the $if$ or $else$ branch based on the outcome of the evaluation.
OpenMP if
---------

As a comparison to our code duplication approach, we also implemented the same functionality using the existing [if]{} clause of the OpenMP Parallel Construct. Our custom directive is translated into an OpenMP Parallel For directive with an attached [if]{} clause which decides whether to execute the loop in parallel or not (rather than generating a serial and a parallel version of the loop). The expression of the [if]{} clause consists of a call to a decision function of our runtime library, which takes the evaluated expressions of the loop's information in order to make a decision. This functionality was included to allow a comparison of our approach with the standard method that developers could currently use to provide dynamic selection of parallelism with OpenMP.

![An example of using the if clause to parallelise (a) the outer and (b) the inner loop of two nested loops with two threads[]{data-label="fig:openmpif"}](openmpif){width="2.5in"}

However, a major drawback of this approach (and the reason we do not use it for our functionality) is that a parallel region will be created regardless of whether a loop is parallelised or not. Considering the example in Figure \[fig:openmpif\], parallelising the outer loop of two nested loops with two threads will result in three parallel regions: each thread of the outer region will create a new parallel region and become its master. In the case of the inner loop being parallelised, two parallel regions are created. For nested regions with a larger number of loops this method has the potential to produce excessive parallel overheads.

Decision functions and the runtime library {#sec:decfuns}
==========================================

The runtime library implements the logic for deciding which version of a loop is chosen during execution. Once a code has been processed by the source-to-source compiler it must then be linked with our runtime library to enable this functionality.
Decision Based On Heuristics
----------------------------

Here we use heuristics, based on information collected at runtime, to decide whether a loop should execute sequentially or in parallel. The idea of this approach is to look for the first loop that has enough iterations to utilise all of the available threads, based on the assumption that parallelising outer loops is more efficient than parallelising inner loops, as the amount of parallel overhead should be lower (the OpenMP parallel regions are encountered less frequently). Before the execution of a loop, the decider checks whether a loop of an outer level is already running in parallel. If this condition is met, the loop is serialised. If no outer loop is running in parallel, the number of iterations of the loop is calculated and divided by the available number of threads. If this results in a value greater than or equal to a specified threshold, the parallel version of the loop is chosen; otherwise the loop is serialised. As discussed in Section \[sec:s2scompiler\], the default value of the threshold is 1 (there must be no idle threads), although this can be controlled by the user. The calculation of the iterations is based on the parameters of the loop, which are extracted by the source-to-source compiler and provided as arguments to the decision function. If the original code of the loop uses variables for its bounds, any change in their values will also be captured by the decision function during the calculation. This design allows constant monitoring of any changes in the iterations of the loops, which in turn allows dynamic adaptation of the parallelisation strategy during the execution of the program. The algorithm is very simple and has minimal overheads; moreover, there is no need to maintain any state for the loops. However, the logic used by the function is optimistic.
It only considers the amount of parallelism exposed by the loop, regardless of whether the amount of work in the loop is big enough to justify the overheads of parallelisation, or whether there is any work between loops.

Decision Based On Heuristics With Profiling
-------------------------------------------

To address the potential issue with the basic heuristics-based decision discussed above, we also implemented a more complex decision function based on both the size of the loops and an evaluation of the work in the loops. In the same manner as the heuristics decider, it uses the information extracted by the source-to-source compiler to determine whether the loop should be parallelised or not. However, if a loop does not meet the conditions, the function reverts to a profiling mode in order to decide, based on timings, which version of the loop, serial or parallel, to choose.

![An example of the Heuristics With Profiling Decider on three loops[]{data-label="fig:profilingheur"}](profilingheur){width="2.5in"}

The first time a loop is executed, the heuristics decider determines if the loop should be parallelised. If the conditions are not met, the sequential version of the loop is chosen and profiling is enabled for this loop. At the next execution of the loop, the evaluation of the heuristics is still performed. If the conditions are still not met (for example, there were no changes in the iterations of the loop), the loop is now parallelised, since at this point we only have timing information for the serial version. Consecutive executions of the loop will first check the heuristics conditions, falling back to profiling mode if the condition is not satisfied.
Once the function detects that timings for both versions are available, it utilises the information gathered from profiling to decide which loop to parallelise (provided the number of iterations of the loop has not changed), with the fastest version chosen as the final decision. In contrast, if the amount of work is not the same (i.e. the number of loop iterations has changed), the timings are invalidated and profiling is re-initiated. Implementing this functionality requires additional code compared to the basic heuristic decision function. This imposes an extra overhead on the produced program, although if the loop iterations are static throughout the run of a program the profiling overhead is only incurred in the first few iterations. Figure \[fig:profilingheur\] outlines this with an example of three nested loops.

Performance Evaluation
======================

To evaluate the performance of our new functionality we benchmarked it against standard, static, OpenMP parallelisations with a range of different configurations. In particular, we focussed on varying the number of loop iterations, the amount of work between and within loops, and the number of changes that occur to loop bounds during execution, to evaluate whether and when our approach is beneficial compared to a static parallelisation. To undertake these benchmarks we used two different codes. The first is a synthetic, configurable benchmark C code, shown in Listing \[alg:synthbench\], which we constructed for this evaluation. The number of iterations of each loop can be configured, as can the amount of work that is simulated (by calling the $delay$ function) between the second and third loops, and within the third loop.
    for(i=0; i<num_iters; i++){
        for(j=0; j<outer_iters; j++){
            delay(outer_delayreps);
            for(k=0; k<inner_iters; k++){
                delay(inner_delayreps);
            }
        }
    }

The second benchmark code was an extract from the CFD code outlined in Listing \[alg:exampleloops\]. This code is more complex than the synthetic benchmark and more representative of realistic scientific simulation codes. It is used to explore the performance of our solution when the loop iterations vary and when the bounds of loops are dynamic during the course of the execution of the benchmark (i.e. one or more loops change their loop bounds as the outer loops progress).

Benchmark Environment
---------------------

The platform used to evaluate the dynamic loop parallelisation functionality was Ness [@ness], at EPCC. The system is composed of two parts: a front-end for development and job submission, and a back-end for job execution. The management of the two parts is handled by the Sun Grid Engine, which allows submission of jobs from the front-end to be executed on the back-end nodes in isolation. The back-end of the system is composed of two SUN X4600 shared memory nodes. Each node contains an AMD Opteron system with 16 2.6 GHz processing cores and 32 GB of main memory. Each core has 64 KB of L1 cache for data and 64 KB of L1 cache for instructions; in addition there is 1 MB of L2 cache available to each core (combined for data and instructions). We used the Portland Group (PGI) C compiler for the majority of the benchmarks, with the following compiler flags: [**-O4**]{}, [**-c99**]{}, [**-mp**]{}.
For the benchmarking involving the OpenMP $if$ functionality we used the GNU C compiler instead, as the version of the PGI compiler we used does not allow the thread team of a nested parallel region to have more than one thread when an outer region is serialised with the [if]{} clause (this seems contrary to the OpenMP specification, where the $if$ clause only affects the number of threads assigned to the particular parallel region, not the thread teams of its nested regions). When using the GNU C compiler we used the following compiler flags: [**-O3**]{}, [**-std=c99**]{}, [**-fopenmp**]{}. Timing information was collected using the $omp\_get\_wtime()$ function, with each benchmark executed three times and the worst time taken (since this is the limiting factor for the execution time).

Synthetic benchmark results
---------------------------

If we consider the example code in Listing \[alg:synthbench\], the execution time of the two internal nested loops when only the outer loop is parallelised with a certain number of threads ($outer\_threads$) can be calculated as shown in Equation \[equ:tpouter\], where $T_{p_{Outer}}$ is the execution time when parallelising the outer loop, $T_{outer\_work}$ is the time needed for the work in-between the loops and $T_{inner\_work}$ is the time needed for the work within the innermost loop.

$$T_{p_{Outer}} = \frac{outer\_iters}{outer\_threads} \left( T_{outer\_work} + inner\_iters \cdot T_{inner\_work} \right) \label{equ:tpouter}$$

In a similar fashion, when parallelising the inner loop using $inner\_threads$ threads, the execution time of the loops is shown in Equation \[equ:tpinner\].

$$T_{p_{Inner}} = outer\_iters \cdot \left( T_{outer\_work} + \frac{inner\_iters}{inner\_threads} \cdot T_{inner\_work} \right) \label{equ:tpinner}$$

If we want a reduction in the overall execution time by parallelising the inner loop, the constraint $T_{p_{Inner}} < T_{p_{Outer}}$ must be satisfied. Solving this constraint in terms of $T_{outer\_work}$ gives the maximum allowed execution time for the work of the outer loop, as shown in Equation \[equ:maxwork\].
It is worth mentioning that this is an idealised performance model, in which the work is evenly distributed to the threads. In reality, the time $T_{outer\_work}$ might also be affected by the presence of parallel overheads.

$$T_{outer\_work} < \frac{inner\_iters \cdot T_{inner\_work} \left( \frac{1}{outer\_threads} - \frac{1}{inner\_threads} \right)}{1 - \frac{1}{outer\_threads}} \label{equ:maxwork}$$

In order to test our hypothesis, we measured the amount of time required by the delay function for various values, with the results shown in Figure \[fig:syntheticbench\]. The graphs in Figure \[fig:syntheticbench\] show the performance of four different parallelisation strategies. $OpenMP\ Outer(1)$ and $OpenMP\ Inner(2)$ are the results from manual, static parallelisations of the individual loops in the benchmark. $Heuristics$ are the results from our basic decision function using a threshold value of one (i.e. only parallelise the loop if there are at least as many iterations as available threads), and $Heuristic\ Profiler$ are the results from our system using the profiling functionality where appropriate.

From the results it is evident that when the loops are perfectly nested and regular (i.e. the loop bounds are not changing), there is no benefit from using the profiling functionality. The basic heuristics choose the optimal loop to parallelise, except when 6 threads are used. The variation in outcomes for 6 threads is a consequence of the number of loop iterations chosen for the benchmark (8 iterations of the outer loop and 16 iterations of the inner loop). Distributing 8 iterations to 6 threads results in every thread being assigned 1 iteration of the outer loop, with 2 of the threads getting an extra iteration. The total execution time in this case is limited by the slowest threads, which is the time of 32 iterations: 2 iterations of the outer loop multiplied by 16 iterations of the inner one. Parallelising the inner loop with 6 threads, however, 2 of the threads get 2 iterations each, whereas the rest of the threads get 3 iterations each.
In this case, the total execution time of the parallel loops is the time required for 24 iterations: 3 iterations of the inner loop multiplied by 8 iterations of the outer loop. Since both decision functions only utilise the heuristics decision (when the number of threads is less than the number of iterations), they cannot exploit this opportunity, as no profiling is actually performed in this case. This could be altered by setting the decision threshold to a value other than 1 (e.g. setting it to 1.5). From the graphs we can observe that our threshold value calculations hold. For the parameters we used for this benchmark, the calculated threshold value is approximately $T_{outer\_work} < 0.0468$ seconds. When the work of the outer loop is less than the calculated threshold (Figures \[fig:syntheticbench0\] and \[fig:syntheticbench0022\]), parallelising the inner loop with 16 threads is still faster than parallelising the outer loop with 8 threads. As the amount of work increases, the impact on the execution time when parallelising the inner loop increases, since more work is being serialised. In these cases the heuristics decider makes the wrong choice (Figures \[fig:syntheticbench0079\] and \[fig:syntheticbench0150\]), since its decision concerns only the number of iterations of the loops and the available threads. In contrast, when profiling is used in the decision function, it correctly detects that the fastest execution time is achieved by not parallelising the inner loop. When the amount of work of the outer loop exceeds the calculated threshold, parallelising the inner loop, even with 16 threads, increases the total execution time: the benefit from using 16 threads to parallelise the inner loop is not enough to justify the work that is serialised.
CFD benchmarking results
------------------------

The first benchmark that we performed using the extract from the CFD code was to compare the OpenMP [if]{} clause with our basic heuristic functionality. We used, as a reference, the timings of manually parallelising the $n\_blocks$, $n\_harmonics$ and $n\_cell\_j$ loops, and compared the execution time of the heuristics decision function for the two code generation modes of our compiler. In order to avoid cases of the iterations not being evenly distributed to the threads, we only consider cases of 2, 4, 8, 12 and 16 threads. The parameters used for the loop iterations are shown in Table \[tab:cfdparams\], with varying amounts of work in the inner loop.

  [**Parameter**]{}   [**Value**]{}
  ------------------- ---------------
  $iters$             500
  $n\_cell\_j$        2496 or 8
  $n\_cell\_i$        8 or 2496

  : Loop parameters used for the CFD code benchmarking[]{data-label="tab:cfdparams"}

We also consider cases where blocks do not have the same shape, by altering the values of the $n\_cell\_j$ and $n\_cell\_i$ loops. No alterations indicates that all of the blocks have a grid shape of 2496x8 ($j\_cell$ x $i\_cell$). An alteration of 2 means that the first and third blocks have a grid shape of 8x2496, whereas the second and fourth blocks have a shape of 2496x8.

The performance results shown in Figure \[fig:cfdifresults\] highlight the fact that there is a significant difference between our implemented functionality and that provided by OpenMP (the [if]{} clause). Not only is the [if]{} clause slower than the basic OpenMP parallelisation, it also increases the overall execution time of the code. In Figure \[fig:cfdbench440small\], where 2 and 4 threads are available, only the loop of the outer level is parallelised in both code generation modes; however, the [if]{} clause mode produces a slower execution time than the code duplication mode. When more than 4 threads are used, the parallelisation is applied to the $n\_cell\_j$ loop.
In contrast to the code duplication mode, which produces an execution time similar to the case of statically parallelising the loop, the [if]{} clause mode is still slower. A similar performance pattern is seen at 16 threads. Moreover, in the presence of alterations in the shape of the blocks, as shown in Figures \[fig:cfdbench442small\] and \[fig:cfdbench442large\], the [if]{} clause mode produces an even slower execution time. On the other hand, the code duplication mode can exploit this opportunity in order to utilise all of the available threads by applying parallelism on the $n\_cell\_i$ loop. Increasing the amount of work in the core calculation has a positive effect on the [if]{} clause code generation mode. We can observe from Figure \[fig:cfdbench440large\] that, compared to Figure \[fig:cfdbench440small\], the difference between using the [if]{} clause and the static parallelisation is not as large for small numbers of threads. This is likely because the performance cost of executing the [if]{} clause is proportionally smaller compared to the overall execution time. However, the same performance degradation is still observed when increasing the number of threads. The execution times of the code using the OpenMP [if]{} clause raised some concerns over whether the code was operating correctly. After extensive testing and verification we ascertained that both versions of the code (the [if]{} clause and code duplication) were correct and producing the same behaviour. Therefore, we investigated the parallel overheads of the OpenMP runtime library of the GCC compiler. Other authors [@Dimakopoulos:2008:MSO:1789826.1789828] have already studied the overheads of nested parallelism on various compilers, including a more recent version of the GCC compiler than the one used in this work. Their findings suggest that the implementation of nested parallel regions in the GCC compiler has significant overheads.
What is not presented in their work is whether or not the use of the [if]{} clause on nested parallel regions produces the same overheads. In order to ensure that the behaviour we observed in our results is caused by nested parallel regions and not by the presence of the [if]{} clause, we constructed a simple micro benchmark.

Nested parallel micro benchmark
-------------------------------

We created four versions of a benchmark code with three nested loops and the delay function of the EPCC Micro-benchmark Suite in the body of the innermost loop. The first version of the benchmark creates a parallel region on the loop of the second level. The second version performs the same operation on the innermost loop. The third version uses the [if]{} clause on both loops, serialising the outer loop with a value of 0 and parallelising the inner loop with a value of 1. Finally, the last version creates a parallel region on both of these loops; however, we force the number of threads in the thread team of the outer loop to 1 using the $num\_threads$ clause. Through this we reproduce the same behaviour as in the [if]{} clause case when the inner loop is parallelised.

  [**Parallel loop**]{}                 [**Execution time (seconds)**]{}
  ------------------------------------- ----------------------------------
  Outer                                 38.845619
  Inner                                 153.06809
  Nested (with $if$ clause)             163.05681
  Nested (with $num\_threads$ clause)   162.85479

  : Micro benchmark results of GNU’s C compiler’s implementation of nested parallelism[]{data-label="tab:microresults"}

The number of iterations of the parallel loops is the same as the number of available threads. Table \[tab:microresults\] presents the execution times of each case. We can see that parallelising the inner loop with nested parallel regions takes 10 seconds longer than parallelising the inner loop manually, even for this small and simple benchmark.
Moreover, the two versions that contain nested parallel regions achieve very similar execution times. From this test we can conclude that it is likely that the behaviour we observed from the [if]{} clause code generation mode is affected by the overheads of the GCC compiler’s implementation of nested parallel regions.

Decision function benchmarking
------------------------------

Finally, we investigated the performance of our profiling decision functionality for the CFD extract code. This code is perfectly nested, so the basic heuristic decision function should be optimal here, as it should choose the best loop to parallelise with very little overhead, whereas the profiling function has extra functionality and therefore imposes extra overheads on the performance of the code. The results from our experiments are shown in Figure \[fig:cfddeciresults\].

We can observe from Figure \[fig:cfddecis480large\] that both decision functions make the correct choice of parallelisation strategy up to 12 threads. However, the overheads of the profiling functionality have a negative impact on the overall execution time. Even when profiling is not actually being performed, the functions which are inserted before and after the execution of each loop to count the amount of work performed at each loop level increase the overall time. Moreover, we can observe that at 16 threads the profiler actually chooses to parallelise the harmonics loop, whereas the heuristics decider produces the correct behaviour of parallelising the $n\_cell$ loop. The timings which are performed for each loop version during the profiling mode are sensitive to the presence of any overheads which ultimately affect the decision of the function (such as the overhead of taking the timings).
When alterations are present in the shape of the loops, as shown in Figures \[fig:cfddecis482large\] and \[fig:cfddecis844large\], the heuristics decider manages to adapt its behaviour, parallelising the innermost loop in order to utilise more threads, and can significantly outperform the static parallelisation. In all of the test cases, the decision function which is based on profiling provides slower execution times than the decision function which is based on heuristics. Moreover, the additional logic which is included in the decision function with profiling caused a suboptimal decision to be made in some situations.

Improved profiling decisions
============================

The results from the previous benchmarks led us to consider the reasons behind the poor performance of the decision function which performs profiling. Comparing the functionality of this function with the simple case of the heuristics decision function, there are two sources of additional overheads. The first is the logic of profiling each version of a loop. In order to make a choice between the two versions of a loop, the slow version must also be executed. However, if an actual simulation code runs for a significant amount of time, this overhead should be negligible (providing the loop bounds do not alter and trigger the profiling functionality too many times), as it should only be incurred infrequently. The second source of overheads is the inclusion of additional function calls before and after each loop in order to measure the execution time and count the amount of work performed. The elimination of the functionality for taking the slow path is not possible, since this is the essence of profiling: both versions of a loop must be executed in order to compare their execution times. However, we can relax the conditions on the validity of the timings.
If we only consider the number of iterations of the specific loop which is being profiled, then we can eliminate all the logic that performs the counting of the work for the internal loops. When the decision function decides that a version of a loop should be profiled (after the failure of the heuristics conditions), the number of iterations of the version of the loop that is going to be executed is saved in the state of the loop at that point. This way, the code of the function calls which are placed before and after each loop remains simple, only adjusting the loop level counter of each thread and marking the starting and ending times of the execution of a loop which is being profiled, rather than counting the iterations of internal loops as the initial profiling functionality does. In order to test our theory we created a new version of the runtime library which includes the above modifications, called the relaxed profiler.

From the graphs in Figure \[fig:relaxedresults\] we can see that the removal of the additional logic which performs the counting benefits the decision function with profiling. When no profiling is performed (2 and 4 threads), the relaxed version of the decision function is faster than the accurate version, and the same performance pattern holds when the profiling is performed (8 threads and more for Figure \[fig:relax442small\] and 12 threads and more for Figure \[fig:relax844large\]). Comparing the execution time of the new version of the decision function with profiling to the execution time of the heuristics decision function, the latter still produces a faster execution time; however, the difference is not large. This behaviour is expected, since the presence of profiling introduces additional computations within the code itself from the functions which are placed before and after each loop.
Moreover, in the cases where the parallelisation is applied on a nested loop, the decision function must execute both versions of a loop, one of them being the slow version, in order to make a decision. Finally, we can see that the relaxed decision function rectifies the problem of the original profiling decision function choosing the wrong option in some cases. For Figure \[fig:relax442small\] we can see that at 12 and 16 threads the relaxed profiler makes the correct choice, and the same holds for Figure \[fig:relax844large\] at 16 threads (where the performance of the relaxed profiler decision function is comparable to the heuristics decision function).

Conclusion
==========

The main focus of this work was to investigate the possibility of dynamically choosing at runtime the loop of a nested loop region which best utilises the available threads. We have successfully created a source-to-source compiler and a runtime library in order to automatically allow a dynamic choice to be made at runtime. As our solution uses a directives-based approach, similar to OpenMP, it requires minimal effort and code change from the user’s point of view. We have discovered that the current mechanism users can exploit to perform this, the OpenMP [if]{} clause, does not perform efficiently (at least for the implementation we tested). Despite the fact that this behaviour is the result of the inefficient implementation of the GCC compiler which was used in this work, the same compiler with the code duplication mode was able to provide additional speedup in the execution time of the code. From this we conclude that by relying on the OpenMP runtime library to perform loop nesting, the execution time is limited by the compiler’s implementation of nested parallel regions. Although code duplication is considered to be a bad programming practice, when it is done automatically it can eliminate unnecessary parallel overheads.
We have also shown that some level of auto-tuning (using profiling to select which loop to parallelise) can provide performance benefits in certain circumstances, for instance when loops are not perfectly nested. OpenMP is currently used mainly for small-scale parallelisation of code, primarily because there are very few large-scale shared-memory HPC resources. However, the current trend in multi-core processors suggests that in the near future large-scale shared-memory resources (of the order of 100-1000s of cores) are likely to be commonly available. Therefore, shared-memory parallelisations are likely to become more widely used and of greater interest for large-scale scientific simulations.
---
abstract: 'Flagella are hair-like appendages attached to microorganisms that allow the organisms to traverse their fluid environment. The algae *Volvox* are spherical swimmers with thousands of individual flagella on their surface and their coordination is not fully understood. In this work, a previously developed minimal model of flagella synchronization is extended to the outer surface of a sphere submerged in a fluid. Each beating flagellum tip is modelled as a small sphere, elastically bound to a circular orbit just above the spherical surface, and a regularized image system for Stokes flow outside of a sphere is used to enforce the no-slip condition. Biologically relevant distributions of rotors result in a rapidly developing and robust symplectic metachronal wave traveling from the anterior to the posterior of the spherical *Volvox* body.'
author:
- 'Forest O. Mannan'
- Miika Jarvela
- Karin Leiderman
title: A Minimal Model of the Hydrodynamical Coupling of Flagella on a Spherical Body with application to Volvox
---

Cilia and flagella are ubiquitous among eukaryotic cells. These small, hair-like appendages extend from cell membranes and play important roles in locomotion and fluid transport by undergoing a periodic motion. Examples include the transport of foreign particles out of the lungs [@TWSC14], the creation of left-right asymmetry in embryonic development [@essner2002left], and filter feeding [@mayne2017particle]. These biologically relevant flows are generally created through the coordinated collective motion of many cilia or flagella. The origin and means of this large-scale coordination has been a long-standing area of research [@taylor1951analysis]. In some scenarios, hydrodynamic coupling alone has successfully explained such coordination [@brumley2014flagellar; @brumley2015metachronal; @vilfan2006hydrodynamic; @goldstein2016elastohydrodynamic; @dillon2000integrative; @yang2008integrative; @guo2018bistability].
Experimental approaches range from investigating colloidal oscillators with optical tweezers [@Kotar7669] to observing synchronization between lone flagellum pairs, emanating from two separate cells and tethered at fixed distances via micro-pipettes [@brumley2014flagellar]. Examples of theoretical approaches include the study of filaments with internal driving forces immersed in a fluid [@mannan2018; @dillon2000integrative; @goldstein2016elastohydrodynamic; @yang2008integrative; @guo2018bistability] and so-called minimal models where the cilia or flagella are represented as oscillating ‘rotors’ immersed in a viscous fluid [@niedermayer2008synchronization; @brumley2015metachronal; @brumley2016long]. This latter approach is what we build on in the current work. Ensembles of large numbers of cilia often exhibit regular variations in the beating phase of adjacent cilia, which are characterized as metachronal waves (MWs) [@elgeti2013emergence; @brumley2015metachronal; @mitran2007metachronal]. The colonial alga *Volvox carteri* (*Volvox*) has become a model organism for studying the emergence of MWs [@brumley2015metachronal; @matt2016volvox]; an informative review of these studies can be found elsewhere [@goldstein2015green]. *Volvox* is a multicellular green alga whose surface consists of fairly regularly-spaced biflagellated somatic cells, embedded in the extracellular matrix [@matt2016volvox; @kirk2005volvox]. *Volvox* swimming is mainly due to the coordinated beating of their flagella, which exhibit clear MWs traveling from the anterior to posterior of the spherical *Volvox* body [@brumley2015metachronal; @brumley2012hydrodynamic]. Further, *Volvox* flagella beat towards the posterior of the colony with a small 10-20 degree tilt out of the meridional plane [@hoops1983ultrastructure; @hoops1997motility]. The tilt has long been thought to allow *Volvox* to ‘swirl’, rotating as they progress forward while swimming [@mast1926reactions; @hoops1997motility].
Minimal models of coupled rotors [@niedermayer2008synchronization; @brumley2014flagellar] are particularly amenable to the theoretical study of MW formation on *Volvox* due to the number and spacing of flagella on the *Volvox* surface; one flagellum is close enough to another flagellum to influence its periodic beating (via hydrodynamics) but typically not close enough to make physical contact. To represent a single flagellum with a rotor, the tip of the flagellum is modeled as a small, rigid sphere with a preferred circular orbit. The shape of the orbit is controlled with a system of springs and the motion is due to a prescribed driving force. The fluid flow induced by one rotor on another rotor can then be well approximated by a single Stokeslet [@brumley2014flagellar]. Additionally, the leading-order far-field flow induced by a rigid sphere is precisely given by a Stokeslet [@nasouri2016hydrodynamic]. Thus, a model rotor (oscillator) captures both the phase of the beating flagellum and well approximates its corresponding, induced far-field flow. Previous studies of *Volvox* flagella with minimal models of coupled oscillators were able to reproduce semi-quantitative characteristics of the average metachronal dynamics and the emergence of MWs [@brumley2015metachronal; @brumley2012hydrodynamic], while also using simplifying assumptions about *Volvox* geometry, e.g., the surface of *Volvox* was treated as a no-slip plane. One study considered flagellum beating on a spherical body, but was limited by using a single chain of rotors that all beat in the same direction [@nasouri2016hydrodynamic]; flagella on *Volvox* cover the entire surface and beat from the anterior to the posterior. In this study, we extend these minimal models of coupled oscillators to investigate biologically-relevant distributions of beating flagella on the surface of a sphere. 
Following the studies by Brumley *et al.* [@brumley2015metachronal; @brumley2012hydrodynamic], each rotor is a rigid sphere of radius $a$, elastically bound in a circular trajectory of radius $r_0$, about a prescribed center point located a distance $d$ above the spherical *Volvox* body, as depicted in Figure \[fig:RotorSchematics\](a). The preferred plane of orbit is defined by the center of rotation and a vector normal to the plane of rotation, ${{\bf n}}$. The orbit of the rotor is driven by a constant tangential driving force $f^{dr}$ in the ${{\bf e}}_\phi$ direction and the preferred trajectory is elastically enforced through a radial spring and a transverse spring normal to the plane. To evolve the positions of the rotors in time, the velocity of each rotor is determined from a system of coupled force-balance equations, one for each rotor. The forces acting on each rotor are the elastic spring forces that resist stretching, the net hydrodynamic drag force, and the prescribed constant driving force. In the case of one single rotor, the hydrodynamic drag force is assumed to be equal and opposite to the driving force and spring forces, yielding the force balance: $$\label{eq:SingleRotorForceBalance} {{\bf \gamma}}({{\bf x}}){{\bf v}}= -\lambda(r-r_0){{\bf e}}_r - \eta \zeta {{\bf e}}_\zeta + f^{dr}{{\bf e}}_\phi,$$ where $\lambda$ and $\eta$ prescribe the stiffness of the radial and transverse springs and ${{\bf \gamma}}$ is the friction tensor. For simplicity, as in previous studies, we let ${{\bf \gamma}}= \gamma_0 {\bf I}$ where $\gamma_0 = 6\pi\mu a$, the drag on a sphere in free space, and $\mu$ is the dynamic viscosity of the fluid [@nasouri2016hydrodynamic]. With the parameters used in this study we compared this free-space drag to the case where the rotor is above the actual spherical body, and estimated a relative difference of about $2.7\%$; see the Supplementary Material for details [@SupplementaryMaterial].
![image](ChainOfRotorsPlaneVsSphere.eps){width="\textwidth"} When considering a single lone rotor, there is no imposed external fluid flow and thus the hydrodynamic drag on the rotor depends only on the rotor’s own velocity. To evolve $N$ coupled rotors in time, a net drag force on each individual rotor must be considered that includes the effects of the external flow induced by all the other rotors. The external fluid flow imposed on a single rotor by all other rotors is calculated using a far-field approximation with regularized Stokeslets [@cortez2001method; @cortez2005method]. Letting $G$ be the regularized Green’s function in the presence of a no-slip sphere [@wrobel2016regularized], $\{{{\bf x}}_i \}_{i=1}^N$ be the rotor locations, and ${{\bf F}}^\text{ext}_j$ be the external forces acting on the $j^\text{th}$ rotor then the net hydrodynamic force on the $i^\text{th}$ rotor is ${\bf F_i} = -{{\bf \gamma}}({{\bf x}}_i)[{{\bf v}}_i-\sum_{j\ne i} G({{\bf x}}_i,{{\bf x}}_j){{\bf F}}_j^\text{ext}]$. As such, the force balance on the $i^\text{th}$ rotor is given by $$\begin{aligned} \label{eq:CoupledForceBalance} -{\bf F_i} = &-\lambda(r_i-r_0){{\bf e}}_r- \eta \zeta_i {{\bf e}}_\zeta + f^{dr}{{\bf e}}_\phi, \end{aligned}$$ where $r_i = \|{{\bf p}}_i-{{\bf c}}_i \|$ and ${{\bf p}}_i$ is the projection of the $i^\text{th}$ rotor’s location onto its respective preferred plane of orbit and $\zeta_i = \|{{\bf p}}_i - {{\bf x}}_i\|$ is the distance from the $i^\text{th}$ rotor to its preferred plane of orbit. This gives rise to a $3N\times 3N$ system of linear equations for the rotor velocities. We note that the free-space drag assumption results in ${{\bf \gamma}}$ having a strictly diagonal form, which allows for efficient calculation of the unknown fluid velocities in Eq. \[eq:CoupledForceBalance\]. 
For the regularized Green’s function, $G$, the regularization parameter $\epsilon$ is chosen to be equal to $a/d$ and the blob function $$\psi_\epsilon(r) = \frac{15\epsilon}{8\pi(r^2 + \epsilon^2)^{7/2}},$$ is used. The sensitivity of the results to this choice of regularization parameter was investigated by considering its effect on the phase differences between two rotors above a plane. As discussed in the Supplementary Material [@SupplementaryMaterial], variation in the regularization parameter led to negligible effects on the dynamics. To study MW formation outside a sphere with this model, we chose one set of parameters close to those from previous studies and that are reflective of *Volvox* flagella [@niedermayer2008synchronization; @brumley2012hydrodynamic; @brumley2015metachronal]. We set $d=$ 10 $\mu$m, $r_0 =$ 5 $\mu$m, and $a=$ 1 $\mu$m, which approximates a flagellum that is about 1 $\mu$m thick and is about 15 $\mu$m long when fully extended, refer again to Figure \[fig:RotorSchematics\](a). The driving force is $f^\text{dr} = 2\pi r_0 \gamma_0 / T$ where $T = 1/33$ s, to give an approximate beat frequency of 33 Hz [@brumley2012hydrodynamic]. We also set the dimensionless spring stiffness ratios to be $\Lambda = \lambda d/ f^\text{dr} = 0.1$ and $\eta d / f^\text{dr}=0.1$ [@niedermayer2008synchronization; @brumley2012hydrodynamic; @brumley2015metachronal]. A *Volvox* radius of $200 \mu m$ is considered. These parameters are used for all the simulations presented in this work. The phase of a given rotor, $\Phi(\phi)$, is computed by post processing the dynamic rotor positions such that $\dot\Phi$ is constant during a given period. In turn, a period of a given rotor is defined as the time that elapses as $\phi$ ranges from 0 to $2\pi$ where $\phi$ is calculated from the projection of the rotor onto its preferred plane of orbit. 
It should be noted that using the regularized Green’s function in the presence of a no-slip sphere results in each rotor pushing a larger net volume of fluid at the apex of each orbit than at the nadir. This naturally mimics the power and recovery strokes, respectively, of flagella. As such, the $\phi$ ranges $[3\pi/2,2\pi)\cup[0,\pi/2)$ and $[\pi/2,3\pi/2)$ can be thought of as corresponding to power and recovery strokes, respectively. For each simulation in this study, the desired spatial distribution of rotor centers was chosen and each was assigned a random initial phase. Next, for each rotor, the following steps were repeated until a final desired time: the rotor velocity, ${\bf v}$, was determined from the full system of coupled force-balance equations, then the rotor position, ${\bf x}$, was updated by numerically integrating $d{\bf x}/dt = {\bf v}$ with a second-order Runge-Kutta method. We first studied the evolution of a single chain of 30 rotors, equally spaced along half of a sphere of radius 200 $\mu$m, with randomly chosen initial phases. The arclength between the centers of adjacent rotors’ trajectories was set to $2d$, to mimic the chains of rotors studied above a plane in previous work [@brumley2012hydrodynamic; @brumley2015metachronal], except with curvature from the spherical surface. No matter the initial phases of the rotors, a single steady state was always achieved. The resulting phases from each simulation decreased from one rotor to its neighbor in the anterior-posterior direction, and thus the steady states observed are symplectic metachronal waves, as previously observed in *Volvox* [@brumley2012hydrodynamic; @brumley2015metachronal]. These results are in line with the parameter choices, as they were stated to reside in the symplectic MW regime [@brumley2012hydrodynamic].
To directly compare our results to previous ones, the simulation was repeated with a chain of 30 rotors above a plane, whose centers are a distance $2d$ apart, and using the image system for regularized Stokeslets above a plane [@ainley2008method]. Figure \[fig:SingleChainDynamics\] compares these two cases with the exact same random initial configurations, relative to the rotor positions in the chain. The phase profile for the case above a plane compares well to that of previous studies (compare our Figure \[fig:SingleChainDynamics\](b) to Figure 3 for $\Lambda=0.1$ in [@brumley2012hydrodynamic]). The phases in both the plane and sphere case evolve similarly, though the final waveform in the sphere case exhibits a greater total variation in phase difference, i.e., on the surface of a sphere, there is a slightly greater phase difference between adjacent rotors.

![image](CombinedSteadyStatesFinal.png){width="\textwidth"}

In this study, we will assume that the somatic cells on the surface of *Volvox* are roughly equally spaced [@kirk2005volvox; @matt2016volvox]. To represent this in our simulations, we used Spherical Centroidal Voronoi Tessellation with the package STRIPACK [@du2003constrained; @renka1997algorithm] to distribute 1257 rotors on the surface of a sphere with a distance of approximately $2d$ between adjacent rotors, where the $200$ $\mu m$ *Volvox* radius was fixed. We numerically evolved the rotor positions in time until steady states were reached. We ran a total of 30 simulations, each with random initial phases, and observed only one type of steady state, a symplectic MW. Figure \[fig:UniformDynamics\](a) shows snapshots of the evolution of the MW, using a type IV Eckert projection [@kennedy2000understanding] to visualize the rotor phases all around the sphere in one 2D image.
At $t=0$ s, the random phases are initialized; by $t=0.12$ s, patterns in the phases are beginning to form; by $t=0.3$ s, thick, solid-colored horizontal regions have formed, indicating symplectic MWs travelling from the anterior to posterior of the colony. The wave is symplectic because adjacent rotors in the direction of the posterior (the direction of the power stroke and hence the downstream direction) lag in phase behind rotors in the direction of the anterior (the direction of the recovery stroke and hence the upstream direction). In each simulation, the general shape of the symplectic MW emerged rapidly, coherent within approximately 10 beats. Previous minimal rotor models of flagellar coordination used regularly spaced rotors, whether along a chain or a two-dimensional array [@brumley2012hydrodynamic; @brumley2015metachronal; @vilfan2006hydrodynamic; @nasouri2016hydrodynamic], which allowed for straightforward computations and comparisons of phase differences among neighboring rotors. With approximately uniform distributions of rotors on the surface of a sphere, exact linear chains of rotors do not exist. To look at the trends in rotor phases in the equatorial and meridional direction, we created chains of idealized rotors in each direction. The idealized rotors in these chains were placed at equidistant points ($\approx 2d$ apart) along the true equator and meridians (see Supplementary Material [@SupplementaryMaterial] for a schematic). The phase for each of the idealized rotors was computed by sampling and averaging the phases of actual rotors within a small neighborhood (radius $= 2.5d$). Let the $j^{th}$ idealized rotor in a chain have $N_j$ rotors within its local neighborhood, denoted as $\Phi_1, \Phi_2, \ldots, \Phi_{N_j}$. 
The phase for the $j^{th}$ idealized rotor, $\overline{\Phi}_j$, is then computed as $$Ae^{i\overline{\Phi}_j}=\frac{1}{N_j} \sum_{n=1}^{N_j}e^{i\Phi_n}.$$ Phases for the idealized rotor chains along meridians were computed with the same formula. Representative dynamics of idealized rotors along an equator and a single meridian are shown in Figure \[fig:UniformDynamics\](b,c), respectively. These dynamics are tracked for the length of approximately two beats after the system has reached a steady state. To better quantify the collective dynamics, we followed Wollin and Stark [@wollin2011metachronal] and computed the complex order parameter for the idealized rotors in the meridional and equatorial directions. Letting the neighboring phase differences of the idealized rotors be given by $\Delta \overline{\Phi}_k=\overline{\Phi}_{k+1}-\overline{\Phi}_k$ the complex order parameter is computed as $$Ae^{i\psi}=\frac{1}{N-1} \sum_{n=1}^{N-1}e^{i\Delta \overline{\Phi}_n}.$$ A complex order parameter near 0 (A$\approx$0) means the phases are random and near 1 (A$\approx$1) means there is stable metachronism, where pairs of neighboring oscillators are phase locked with the same phase difference [@acebron2005kuramoto; @wollin2011metachronal]. Along the equator we found that $A=0.99$ and $\psi=0.00$. In the meridional direction, the results were found to be dependent on the meridian chosen. To report a robust measurement, the complex order parameters were computed for 200 randomly chosen meridians. Averaging $A$ and $\psi$ across the 200 meridians considered yielded $\bar{A}=0.9969$ with a standard deviation of $0.0012$ and $\bar{\psi}=-0.1488$ with a standard deviation of $0.0093$. This strongly indicates a stable metachronal wave in the meridional direction [@wollin2011metachronal]. 
Overall, the steady state results share similar characteristics with previous studies of arrays of rotors above a wall [@brumley2015metachronal], where an average neighboring phase difference of $-0.19$ and $0$ was found along the streamwise and lateral directions, respectively, with periodic boundary conditions in the latter. It is well known that the flagellar beat in *Volvox* has some tilt out of the meridional plane, estimated at 10 to 20 degrees [@hoops1997motility; @pedley2016squirmers]. *Volvox* are observed to swim with a consistent rotational spin and it is thought that this tilt in the flagellar beat causes the rotation [@mast1926reactions; @hoops1997motility; @pedley2016squirmers]. This tilt can be prescribed within the context of the present minimal model by altering the normal vector ${{\bf n}}$, which determines the plane in which the preferred orbits are situated. We proceeded by selecting ${{\bf n}}$ for each rotor to have a 15-degree tilt from the respective meridional plane and quantifying the dynamics of the evolving rotors.

Unlike the case of no tilt, in which only a single final steady state was exhibited, simulations run with a 15-degree tilt exhibited two possible steady states. Of the 100 simulations run with a 15-degree tilt and different random initial conditions, 94 reached a steady state with a horizontal symplectic metachronal wave traveling from the anterior to the posterior of the *Volvox* body. This steady state is qualitatively identical to that exhibited by simulations run with no tilt, see Figure \[fig:UniformDynamics\](a). This will be referred to as the horizontal steady state. The remaining 6 simulations reached a steady state exhibiting a symplectic metachronal wave traveling diagonally to the anterior-posterior axis, see Figure \[fig:UniformDynamics\](f). This steady state will be referred to as the diagonal steady state.
The complex order parameters in the meridional and equatorial directions were again calculated for both steady states exhibited by rotors with a $15^\circ$ tilt. For the horizontal steady state, the complex order parameters for a chain of idealized rotors around the equator yielded $A=0.99$ and $\psi=0.00$. In the meridional direction, the complex order parameters were again averaged over 200 randomly chosen meridians, yielding $\bar{A}=0.9973$ with a standard deviation of $0.019$ and $\bar{\psi}=-0.1492$ with a standard deviation of $0.0304$. This is nearly identical to the steady state exhibited when there is no tilt. For the diagonal steady state, the complex order parameters for a chain of idealized rotors around the equator yielded $A=0.9994$ and $\psi=-0.1026$. In the meridional direction, the complex order parameters were again averaged over 200 randomly chosen meridians, yielding $\bar{A}=0.9991$ with a standard deviation of $3.7\times 10^{-4}$ and $\bar{\psi}=-0.1335$ with a standard deviation of $0.0037$. The idealized-rotor phases for this steady state around the equator and along a meridian are shown in Figure \[fig:UniformDynamics\](d)-(e). Prescribing a tilt out of the meridional plane to the rotors’ preferred orbits was motivated by the question of whether this is indeed the origin of the swirling motion exhibited by *Volvox*. While the current model does not incorporate movement of the *Volvox* body, we can examine the velocity induced in the surrounding fluid. Just as the influence one rotor has on another can be estimated by using the far-field approximation of a Stokeslet, the fluid flow away from the *Volvox* body can be estimated by summing the far-field approximations of the fluid flow induced by each rotor. Figure \[fig:FluidVels\] shows the far-field fluid flow induced by the rotors with a $15^\circ$ tilt when the horizontal steady state is reached. 
The time-averaged velocity magnitude (color) and direction (black vector) are shown in the meridional plane in Figure \[fig:FluidVels\](a) and in the plane coinciding with the equator in Figure \[fig:FluidVels\](b). The far-field approximation is demarcated by the white line at a distance of $20 \mu$m from the *Volvox* body, and velocities within this line are set to 0 since the far-field approximations used are not applicable within this range. As seen in Figure \[fig:FluidVels\](b), a clear rotational velocity is observed in the surrounding fluid when a tilt is induced in the rotor orbit. The spatial distribution of the time-averaged velocities in a meridional plane also compares well with previous laboratory measurements of in-vivo *Volvox* [@brumley2015metachronal]; however, it should be noted that the magnitudes are roughly three times smaller. In previous studies, magnitudes of Stokeslets were fit to experimental data from a single *Volvox* somatic cell and its flagella [@brumley2014flagellar]; the fit values were approximately three times the forces exerted in the present model. Since velocity scales linearly with force in Stokes flow, if the present forces were scaled by a factor of 3, the fluid velocity magnitudes would match the previous experimental data very well. Previous biological studies of *Volvox* have established that there is clear large-scale flagellar coordination across the algae body [@brumley2012hydrodynamic; @brumley2015metachronal]. Minimal models of the flagella coupling have thus far only assessed coordination above a planar surface [@brumley2012hydrodynamic; @brumley2015metachronal; @vilfan2006hydrodynamic], or a linear chain of rotors outside a spherical surface all beating in one direction [@nasouri2016hydrodynamic]. The present model considers a biologically-relevant distribution of rotors exterior to a sphere and reproduces the experimentally-observed flagellar coordination of *Volvox*. 
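The far-field superposition described above can be sketched as follows. This is a minimal illustration using the standard free-space Stokeslet (Oseen tensor); the function names, the unit viscosity, and the omission of the near-body cutoff and image systems are our simplifying assumptions, not the paper's full implementation.

```python
import numpy as np

def stokeslet_velocity(x, x0, F, mu=1.0):
    """Velocity at point x induced by a point force F applied at x0 in
    Stokes flow: u = (1/(8*pi*mu)) * (I/r + r r^T / r^3) F."""
    r = np.asarray(x, float) - np.asarray(x0, float)
    rn = np.linalg.norm(r)
    G = np.eye(3) / rn + np.outer(r, r) / rn**3   # Oseen tensor
    return (G @ np.asarray(F, float)) / (8.0 * np.pi * mu)

def far_field(x, positions, forces, mu=1.0):
    """Estimate the flow away from the body by summing the Stokeslet
    far fields of all rotors, as described in the text."""
    return sum(stokeslet_velocity(x, x0, F, mu) for x0, F in zip(positions, forces))
```

Because the Stokeslet is linear in the force, scaling all rotor forces by 3 scales the resulting velocities by exactly 3, which is the scaling argument used in the text to reconcile the model with the experimental magnitudes.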
We note that the qualitative coordination obtained with this model does not differ significantly from studies above a planar surface, suggesting that an array of rotors above a plane well-approximates the coupling of a patch of rotors above a sphere. However, considering a distribution of rotors around a sphere allows the generation of flows more pertinent to the actual organism. For example, prescribing a tilt to the preferred rotor orbit to mimic the tilt of *Volvox* flagella generates a swirl in the flow around the spherical body that cannot be captured with a planar geometry. To our knowledge, there have been no published experimental results showing the different horizontal and diagonal steady states that our model revealed; we hope that our study will inspire more experiments to investigate such behavior. The present study has not addressed the sensitivity of the steady state synchronization to such parameters as the spacing between rotors, the stiffness of the springs determining the rotors’ preferred paths, and the rotor tilt. This will be explored in subsequent studies. Additionally, the organized flagellar beat exhibited across the surface of *Volvox* allows the organism to undergo phototaxis. An improvement of the present model would be to incorporate the swimming of the organism. Our study also lays the groundwork for future investigations of spatially-varying densities of somatic cells, varying flagellar beat forms due to light, and phototaxis [@ueki20105000].
--- abstract: 'The year of 2005 marks the 75th anniversary since Trumpler (1930) provided the first definitive proof of interstellar grains by demonstrating the existence of general absorption and reddening of starlight in the galactic plane. This article reviews our progressive understanding of the nature of interstellar dust.' address: 'Department of Physics and Astronomy, University of Missouri, Columbia, MO 65211, USA' author: - Aigen Li title: 'Interstellar Grains – The 75$^{\rm TH}$ Anniversary' --- Introduction: A Brief History for the Studies of Interstellar Dust ================================================================== In 1930 – exactly 75 years ago, the existence of solid dust particles in interstellar space was for the first time firmly established, based on the discovery of color excesses (Trumpler 1930). But the history of interstellar dust-related studies is a much longer and more complex subject, and can be dated back to the late 18th century when Herschel (1785) described the dark markings and patches in the sky as “holes in the heavens”. Below is a summary of the highlights of this history. For a more detailed record of the historical development of dust astronomy, I refer the interested readers to Aiello & Cecchi-Pestellini (2000), Dorschner (2003), Li & Greenberg (2003), and Verschuur (2003). \ - As early as 1785, Sir William Herschel noticed that the sky looks patchy, with stars unevenly distributed and some regions particularly “devoid” of stars. He described these dark regions (“star voids”) as “[**holes in the heavens**]{}”. - At the beginning of the 20th century, astronomers started to recognize that the “starless holes” were real physical structures in front of the stars, containing [**dark obscuring masses of matter**]{} able to absorb starlight (Clerke 1903; Barnard 1919), largely thanks to the new technology of photography, which made the photographic survey of the dark markings possible. 
Sir Harold Spencer Jones (1914) also attributed the dark lanes seen in photographs of edge-on spiral galaxies to obscuring matter. Whether the dark lanes in the Milky Way were caused by obscuring material was one of the points of contention in the Curtis-Shapley debate (Shapley & Curtis 1921). - Wilhelm Struve (1847) noticed that the apparent number of stars per unit volume of space declines in all directions receding from the Sun. He attributed this effect to [**interstellar absorption**]{}.[^1] From his analysis of star counts he deduced a visual extinction of ${{\sim\,}}1{\,{\rm mag\, kpc}^{-1}}$. Many years later, Jacobus Kapteyn (1904) estimated the interstellar absorption to be ${{\sim\,}}1.6{\,{\rm mag\, kpc}^{-1}}$, in order for the observed distribution of stars in space to be consistent with his assumption of a constant stellar density. This value was amazingly close to the current estimate of ${{\sim\,}}1.8{\,{\rm mag\, kpc}^{-1}}$. Max Wolf (1904) demonstrated the existence of discrete clouds of interstellar matter by comparing the star counts for regions containing obscuring matter with those for neighbouring undimmed regions. - In 1912, Vesto Slipher discovered [**reflection nebulae**]{} from an analysis of the spectrum of the nebulosity in the Pleiades cluster, which he found was identical to that of the illuminating stars. It was later recognized that the nebulosity was created by the scattering of light from an intrinsically luminous star by the dust particles in the surrounding interstellar medium (ISM). - Henry N. Russell (1922) argued that [**dark clouds accounted for the obscuration and this obscuring matter had to be in the form of millimeter-sized fine dust**]{}. Anton Pannekoek (1920) recognized that the obscuration could not be caused by the Rayleigh scattering of gas, as otherwise one would require unrealistically high masses for the dark nebulae. 
He also noticed that, as suggested by Willem de Sitter, the cloud mass problem would vanish if the extinction is due to dust grains with a size comparable to the wavelength of visible light. - In 1922, Mary L. Heger observed two broad absorption features at 5780${\,{\rm \AA}}$ and 5797${\,{\rm \AA}}$, conspicuously broader than atomic interstellar absorption lines. The interstellar nature of these absorption features was established 12 years later by Paul W. Merrill (1934). These mysterious lines – known as the [**diffuse interstellar bands (DIBs)**]{}, still remain unidentified. - In 1930, a real breakthrough was made by [**Robert J. Trumpler, who provided the first unambiguous evidence for interstellar absorption and reddening which led to the general establishment of the existence of interstellar dust**]{}. Trumpler (1930) based this on a comparison between the photometric distances and geometrical distances of 100 open clusters.[^2] If there were no interstellar absorption, the two distances should be in agreement. However, Trumpler (1930) found that the photometric distances are systematically larger than the geometrical distances, indicating that the premise of a transparent ISM was incorrect.[^3] Using this direct and compelling method he was able to find both absorption and selective absorption, or color excess, increasing with distance.[^4] Trumpler (1930) also concluded that [**the observed color excess could only be accounted for by “fine cosmic dust”**]{}. - In 1932, Jan H. Oort demonstrated that the space between the stars must contain a considerable amount of matter. He derived an [**upper limit (“Oort limit”) on the total mass of the matter (including both stars and interstellar matter)**]{} in the solar neighbourhood from an analysis of the motions of K giants perpendicular to the plane of the Galaxy (the $z$-direction). 
An upper limit of ${{\sim\,}}$$1.0\times 10^{-23}{\,{\rm g}}{\,{\rm cm}}^{-3}$ on the total mass density was obtained from measuring the gravitational acceleration in the $z$-direction. The Oort limit has important implications: (1) [**there has to be more material in the galactic plane than could be seen in stars**]{}, since the mass density of known stars is only ${{\sim\,}}$$4.0\times 10^{-24}{\,{\rm g}}{\,{\rm cm}}^{-3}$; and (2) [**the upper limit of ${{\sim\,}}$$6.0\times 10^{-24}{\,{\rm g}}{\,{\rm cm}}^{-3}$ on the mass density of the interstellar matter in the solar neighbourhood places severe restrictions on the source of the obscuration**]{}: what kind of material, distributed with this density and with what mass absorption coefficient, could give rise to the observed visual extinction of about $1{\,{\rm mag\, kpc}^{-1}}$? Apparently, only with small dust grains could so much extinction by so little mass (and the $\lambda^{-1}$ wavelength dependence; see below) be explained. - In 1936, Rudnick for the first time measured the wavelength dependence of extinction in the wavelength range 4000–6300${\,{\rm \AA}}$, based on differential spectrophotometric observations of reddened and unreddened stars of the same spectral type. Rudnick (1936) found that the measured [**extinction curve was inconsistent with Rayleigh scattering**]{} (which has a $\lambda^{-4}$ wavelength dependence). This so-called “[**pair-match**]{}” method remains the most common way of deriving an interstellar extinction curve. - By the end of the 1930s, a [**$\lambda^{-1}$ extinction law in the wavelength range 1–3${\,{\rm \mu m^{-1}}}$**]{} had been well established (Hall 1937; Greenstein 1938; Stebbins, Huffer, & Whitford 1939), thanks to the advent of photoelectric photometry, excluding free electrons, atoms, molecules, and solid grains much larger or much smaller than the wavelength of visible light, and leaving solids with a size comparable to the wavelength as the sole possibility. 
- In 1936, Struve & Elvey demonstrated the scattering of general starlight by interstellar clouds based on a series of observations of the dark cloud Barnard 15, the core of which is appreciably darker than the rim, although the latter is about as opaque as the former. They attributed the increased brightness of the outer region to [**interstellar scattering**]{}. - In 1941, Henyey & Greenstein confirmed the existence of [**diffuse interstellar radiation**]{} (which was originally detected by van Rhijn \[1921\]) in the photographic wavelength region. They interpreted the observed intensity of diffuse light as stellar radiation scattered by [**interstellar grains which are strongly forward scattering and have a high albedo (higher than ${{\sim\,}}$0.3)**]{}. - In 1943, with the advent of six-colour photometry (at 3530${\,{\rm \AA}}$$<$$\lambda$$<$10300${\,{\rm \AA}}$), Stebbins & Whitford found that the extinction curve exhibits curvature in the near-infrared (IR; $\lambda \approx 1.03\,\mu$m) and ultraviolet (UV; $\lambda \approx 0.35\,\mu$m) regions, [**deviating from the simple $\lambda^{-1}$ law**]{}. - In 1953, Morgan, Harris, & Johnson estimated the ratio of total visual extinction to color excess to be $A_V/E(B-V)\approx 3.0\pm 0.2$. This was supported by a more detailed study carried out by Whitford (1958), who argued that there appeared to be a “very close approach to uniformity of the reddening law” in most directions. [**A uniform extinction curve with a constant $A_V/E(B-V)$**]{} was welcomed by the astronomical community – at that early stage, interstellar dust was mainly regarded as an annoying mere extinguisher of starlight which prevented an accurate measurement of distances to stars. The proposal of “a uniform extinction curve with a constant $A_V/E(B-V)$” made it easier to correct photometric distances for the effects of absorption (also because the determination of the color excess $E(B-V)$ for early-type stars was relatively straightforward). 
- In 1955, based on the UBV photometry of early O stars in a region in Cygnus, Johnson & Morgan noted that there may exist [**regional variations**]{} in the interstellar extinction curve. The [**nonuniform nature of the interstellar extinction curve**]{} was later confirmed in Cygnus, Orion, Perseus, Cepheus and NGC2244 by Johnson & Borgman (1963), Nandy (1964), and Johnson (1965). Those authors also found a wide variety of $A_V/E(B-V)$ values (ranging from ${{\sim\,}}$3.0 to ${{\sim\,}}$7.4) in different regions. Wampler (1961) found a systematic variation with galactic longitude of $E(U-B)/E(B-V)$, the ratio of slopes in the blue to those in the visible region. - In the 1960s and early 1970s, the extension of the extinction curve toward the middle and far UV ($\lambda^{-1}$${\geq}$$3{\,{\rm \mu m}}^{-1}$) was made possible by rocket and satellite observations, including the rocket-based photoelectric photometry at $\lambda=2600{\,{\rm \AA}}$ and 2200${\,{\rm \AA}}$ (Boggess & Borgman 1964); the [*Aerobee*]{} rocket spectrophotometry at 1200${\,{\rm \AA}}$${\leq}$$\lambda$${\leq}$3000${\,{\rm \AA}}$ (Stecher 1965); the [*Orbiting Astronomical Satellite*]{} (OAO-2) spectrophotometry at 1100${\,{\rm \AA}}$${\leq}$$\lambda$${\leq}$$3600{\,{\rm \AA}}$ (Bless & Savage 1972); and the [*Copernicus*]{} satellite spectrophotometry at 1000${\,{\rm \AA}}$${\leq}$$\lambda$${\leq}$$1200{\,{\rm \AA}}$ (York et al. 1973). [**By 1973, the interstellar extinction curve had been determined over the whole wavelength range from 0.2${\,{\rm \mu m}}^{-1}$ to 10${\,{\rm \mu m}}^{-1}$**]{}. - In 1965, the [**2175${\,{\rm \AA}}$ extinction bump was detected**]{} by Stecher (1965). Shortly after its detection, it was attributed to graphite (Stecher & Donn 1965).[^5] It was later found that the strength and width of this bump vary with environment while its peak position is quite invariant. 
- Cardelli, Clayton, & Mathis (1989) found that the optical/UV extinction curve in the wavelength range of 0.125$\le$$\lambda$$\le$$3.5{\,{\rm \mu m}}$, which shows considerable regional variations, can be approximated by an analytical formula involving only one free parameter: the total-to-selective extinction ratio $R_V$$\equiv$$A_V/E(B-V)$, whereas the near-IR extinction curve (0.9${\,{\rm \mu m}}$$\le$$\lambda$$\le$$3.5{\,{\rm \mu m}}$) can be fitted reasonably well by a power law $A(\lambda)$$\sim$$\lambda^{-1.7}$, showing little environmental variation.[^6] \ - In the 1930s, small [**metallic particles**]{} were proposed to be responsible for the interstellar extinction, partly because meteoritic particles (predominantly metallic) and interstellar grains were then thought to have the same origin. Reasonably good fits to the $\lambda^{-1}$ extinction law were obtained in terms of small metallic grains with a dominant size of ${{\sim\,}}$0.05${\,{\rm \mu m}}$ (Schalén 1936) or a power-law size distribution $dn(a)/da$${{\sim\,}}$$a^{-3.6}$ in the size range $80{\,{\rm \AA}}$${\leq}$$a$${\leq}$$1{\,{\rm cm}}$ (Greenstein 1938). - In 1935, based on the correlation between gas concentration and extinction, Bertil Lindblad suggested that [**interstellar grains were formed by condensation from the interstellar gas through random accretion of gas atoms**]{}, following the speculation of Sir Arthur Eddington (1926) that it was so cold in space that virtually all gaseous atoms and ions which hit a solid particle would freeze down upon it.[^7] However, it was later found that in typical interstellar conditions, the Lindblad condensation theory would result in a complete disappearance of all condensable gases, and the grains would grow to sizes (${{\sim\,}}$10${\,{\rm \mu m}}$) well beyond those which could account for the interstellar extinction. 
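The near-IR power law quoted above is simple enough to evaluate numerically. This is a small illustrative sketch only: the normalization at $1{\,{\rm \mu m}}$ and the function name are our choices, and the full one-parameter $R_V$ formula of Cardelli, Clayton, & Mathis (1989) is not reproduced here.

```python
def near_ir_extinction(lam_um, beta=1.7):
    """Extinction A(lambda) relative to A(1 um) under the power law
    A(lambda) ~ lambda^{-beta} for 0.9 um <= lambda <= 3.5 um, with
    beta = 1.7 as quoted in the text.  The normalization at 1 um is an
    illustrative choice, not part of the original parametrization."""
    if not (0.9 <= lam_um <= 3.5):
        raise ValueError("power law quoted only for 0.9-3.5 micron")
    return lam_um ** (-beta)
```

For instance, the extinction at the long-wavelength end of the range ($3.5{\,{\rm \mu m}}$) is far smaller than at the short end ($0.9{\,{\rm \mu m}}$), reflecting the steep decline of dust extinction with wavelength in the near-IR.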
- In 1946, by introducing a grain destruction process caused by grain-grain collisions as a consequence of interstellar cloud encounters, Jan H. Oort and Hendrik C. van de Hulst further developed the interstellar condensation theory, leading to the [**“dirty ice” model consisting of saturated molecules**]{} such as H$_2$O, CH$_4$, and NH$_3$, with an equilibrium size distribution which could be roughly approximated by a functional form $dn(a)/da\,$${{\sim\,}}$$\exp\left[-5 \left(a/0.5\,\mu {\rm m}\right)^3\right]$ and an average size of ${{\sim\,}}$0.15${\,{\rm \mu m}}$. What might be the condensation nuclei was unclear at that time. - In 1946, van de Hulst for the first time made [**realistic estimates of 10–20${\,{\rm K}}$ for grain temperatures**]{}. Before that, it was long thought that grains had a black-body temperature of ${{\sim\,}}$3.2${\,{\rm K}}$ (Eddington 1926). Van de Hulst (1946) noted that interstellar grains are much warmer than a 3.2${\,{\rm K}}$ black-body because they do not radiate effectively at long wavelengths. - In 1949, Hall and Hiltner independently discovered the general [**interstellar linear polarization**]{} by accident – their original objective was to look for intrinsic stellar polarization from eclipsing binaries. The interstellar origin of this polarization was indicated by the correlation of the degree of polarization with reddening and the fact that the direction of polarization is generally parallel to the galactic plane. The interstellar polarization was attributed to the [**differential extinction of starlight by nonspherical grains aligned to a small degree with respect to the galactic plane**]{}. - In 1951, Davis & Greenstein suggested that interstellar grains could be aligned with respect to the interstellar magnetic field by the paramagnetic relaxation mechanism. - The variation of interstellar polarization with wavelength was first revealed by Behr (1959) and Gehrels (1960). 
It was later shown that the [**wavelength dependence of polarization**]{} is well approximated by an empirical formula, often known as the [**Serkowski law**]{} (Serkowski 1973; Coyne, Gehrels, & Serkowski 1974; Wilking et al. 1980).[^8] But the near-IR (1.64${\,{\rm \mu m}}$$<$$\lambda$$<$$5{\,{\rm \mu m}}$) polarization is better approximated by a power law $P(\lambda)$$\propto$$\lambda^{-\beta}$, with $\beta$$\simeq$$1.8\pm0.2$, independent of $\lambda_{\rm max}$ (Martin & Whittet 1990, Martin et al. 1992). - In 1972, the interstellar [**circular polarization**]{}, which arises from interstellar birefringence (Martin 1972) as originally predicted by van de Hulst (1957), was first detected along the lines of sight to the Crab Nebula by Martin, Illing, & Angel (1972) and to six early-type stars by Kemp & Wolstencroft (1972). - In the early 1950s – soon after the discovery of interstellar polarization, the validity of the ice model seemed doubtful since [**ice grains are an inefficient polarizer**]{}, and therefore it would be difficult for them to explain the observed rather high degree of polarization relative to extinction (van de Hulst 1950; Spitzer & Tukey 1951; Cayrel & Schatzman 1954). - In 1954, Cayrel & Schatzman suggested that [**graphite grains**]{}, comprising a small component of the total mass of interstellar dust, could account for the observed polarization-to-extinction ratio because of their strong optical anisotropy. - In 1962, Hoyle & Wickramasinghe proposed that [**graphite grains of sizes a few times 0.01${\,{\rm \mu m}}$ could condense in the atmospheres of cool N-type carbon stars**]{}, and that these grains would subsequently be driven out of the stellar atmospheres and [**injected into interstellar space**]{} by the stellar radiation pressure. Hoyle & Wickramasinghe (1962) argued that ${{\sim\,}}$10$^4$ N-type stars in the Galaxy may be sufficient to produce the required grain density to account for the observed interstellar extinction. 
They also showed that the extinction predicted for small graphite grains is in remarkable agreement with the observed reddening law (which was then limited to $\lambda^{-1}$$<$3${\,{\rm \mu m}}^{-1}$). It is interesting to note that the condensation of graphite grains in cool carbon stars had been suggested many years earlier by O’Keefe in 1939, while as early as 1933 Wildt had already found that solid grains of carbon, Al$_2$O$_3$, CaO, carbides (SiC, TiC, ZrC), and nitrides (TiN, ZrN) might form in N-type stars. - In 1966, in view of the fact that the albedo of pure graphite grains appeared to be too low to be consistent with the observations, Wickramasinghe, Dharmawardhana, & Wyld proposed that interstellar dust consists of graphite cores and ice mantles. Wickramasinghe (1965) argued that graphite grains ejected from stars tend to grow ice mantles in interstellar clouds. Wickramasinghe et al. (1966) showed that graphite grains of radii ${{\sim\,}}$0.05–0.07${\,{\rm \mu m}}$ coated by an ice mantle up to twice their radii could satisfy the observed interstellar extinction and albedo. - In 1968, Wickramasinghe & Nandy argued that solid molecular hydrogen mantles may be accreted by interstellar grains in dense interstellar clouds. Wickramasinghe & Krishna Swamy (1969) showed that graphite core-solid H$_2$ mantle grains with core radii ${{\sim\,}}$0.04–0.06${\,{\rm \mu m}}$ and mantle radii ${{\sim\,}}$0.15–0.25${\,{\rm \mu m}}$ are consistent with the observed interstellar extinction in the wavelength range of $0.11{\,{\rm \mu m}}$${\leq}$$\lambda$${\leq}$2${\,{\rm \mu m}}$ and with the albedo and phase function derived from the diffuse Galactic light. - In 1963, Kamijo suggested that nanometer-sized [**SiO$_2$ grains could condense in the atmospheres of cool M-type stars**]{}. After being blown out of the stellar atmospheres and [**injected into interstellar space**]{}, they could serve as condensation nuclei for the formation of “dirty ices”. 
- In 1968, Wickramasinghe & Krishna Swamy considered quartz grains covered with dirty ice mantles and found that their match to the observed extinction curve was unsatisfactory. - In 1969, Gilman found that [**grains around oxygen-rich cool giants are mainly silicates**]{} such as Al$_2$SiO$_5$ and Mg$_2$SiO$_4$. Silicates were first detected in emission in M stars (Woolf & Ney 1969; Knacke et al. 1969a). - In 1965, Cernuschi, Marsicano, & Kimel argued that [**iron grains**]{} could condense out of the expanding [**supernova explosion ejecta**]{}. Schalén (1965) explicitly modeled the interstellar extinction curve in the wavelength range of $0.5{\,{\rm \mu m}}^{-1} < \lambda^{-1} < 4.5{\,{\rm \mu m}}^{-1}$ using iron grains of radii ${{\sim\,}}$0.01${\,{\rm \mu m}}$. Hoyle & Wickramasinghe (1970) also argued that a significant fraction of the mass of the heavy elements produced in a supernova explosion could condense into solid particles during the expansion phase following the explosion. They further suggested that supernovae may constitute a major source of silicate, iron, and graphite grains in the ISM.[^9] - In 1969, Friedemann showed that [**silicon carbide grains**]{} could condense in the atmospheres of carbon stars and then leave the star and become an interstellar dust component, although they comprise only a minor fraction of the total interstellar dust mass.[^10] - In 1969, Saslaw & Gaustad suggested that carbon may condense in cool stellar atmospheres in the form of [**diamond grains**]{} and be subsequently injected into interstellar space.[^11] Presolar nanodiamonds were first detected in primitive carbonaceous meteorites based on their isotopic anomalies (Lewis et al. 1987; see §5.4 of Li & Draine 2004a and Jones & d’Hendecourt 2004 for more information regarding interstellar nanodiamonds). 
- The extension of the wavelength base for interstellar extinction observations into the far-UV and IR provided a strong stimulus for the development of dust models. The fact that the extinction continues to increase in the far UV (e.g., see York et al. 1973) implies that [**no single grain type with either a single size or a continuous size distribution could account for the observed optical to far-UV interstellar extinction**]{} (Greenberg 1973). This led to the abandonment of any one-component grain model and stimulated the emergence of various kinds of models consisting of multiple dust constituents, including silicate, SiC, iron, iron oxide, graphite, dirty ice, solid H$_2$, etc.[^12] [**By the early 1970s**]{}, the two highly refractory components – [**silicates and graphite – had been considered in most dust models**]{}, supported by the detection of the conspicuous bump in the interstellar extinction curve at 2175${\,{\rm \AA}}$ and the prominent emission feature at 10${\,{\rm \mu m}}$ of oxygen-rich stars, and by the belief that graphite and silicate grains can be produced in stellar atmospheres and expelled into the ISM. - In 1969, Hoyle & Wickramasinghe modeled the interstellar extinction in terms of a mixture of [**silicate**]{} grains of radii ${{\sim\,}}$0.07${\,{\rm \mu m}}$ and [**graphite**]{} grains of radii ${{\sim\,}}$0.065${\,{\rm \mu m}}$. - Wickramasinghe (1970a) found that the interstellar extinction curve in the wavelength range 0.3${\leq}$$\lambda^{-1}$${\leq}$9${\,{\rm \mu m}}^{-1}$ could be reproduced by a mixture of [**graphite**]{} grains with a size distribution of $dn(a)/da\,$${{\sim\,}}$$\exp\left[-0.5 \left\{\left(a-0.06\right)/0.02\right\}^2\right]$ for 0.03${\,{\rm \mu m}}$${\leq}$$a$${\leq}$0.13${\,{\rm \mu m}}$ and [**silicate**]{} grains of radii ${{\sim\,}}$0.07${\,{\rm \mu m}}$. 
He also found that [**silicate**]{} grains of radii ${{\sim\,}}$0.03${\,{\rm \mu m}}$ with an [**ice mantle**]{} of radii ${{\sim\,}}$0.14${\,{\rm \mu m}}$, together with the same graphite population, could fit the observed extinction curve equally well.[^13] By modeling the albedos and phase functions derived from the diffuse Galactic light, Wickramasinghe (1970b) concluded that the graphite-silicate mixture was preferred over the graphite-(ice-coated) silicate mixture. - Wickramasinghe & Nandy (1970) found that a mixture of [**silicate, graphite, and iron grains**]{} also achieved a fair fit to the interstellar extinction curve at $\lambda^{-1} {\leq}8{\,{\rm \mu m}}^{-1}$. - Huffman & Stapp (1971) found that [**enstatite**]{} grains plus 12% small (${{\sim\,}}$100${\,{\rm \AA}}$) [**iron oxide**]{} grains also provided a fairly good fit to the extinction curve up to $\lambda^{-1} {\leq}8{\,{\rm \mu m}}^{-1}$. - Gilra (1971) performed extinction calculations for a mixture of [**graphite, silicate, and SiC**]{} and provided close fits to the observed extinction curves. But his model relied heavily on SiC: the required mass of the SiC component was ${{\sim\,}}$4 times that of graphite. - Greenberg & Stoeckly (1970) found that [**ice-coated cylindrical silicate grains**]{} together with a population of [**small bare silicate grains**]{} could reproduce the extinction curve from the IR to the UV and the wavelength dependence of polarization. - In 1974, Greenberg & Hong suggested that interstellar grains consist of [**submicron-sized silicate cores surrounded by mantles of a heterogeneous molecular and free-radical mixture of O, C, N and H**]{} (“modified dirty ices”), and a minor component of [**very small bare grains**]{} of sizes $<$100${\,{\rm \AA}}$ whose precise composition was uncertain. 
- In a study of the scattering properties of interstellar dust (albedo and phase function) determined from the OAO-2 observations at 1500${\,{\rm \AA}}$${\leq}$$\lambda$${\leq}$$4250{\,{\rm \AA}}$ of the diffuse Galactic light (Witt & Lillie 1973), Witt (1973) first explicitly suggested a bi-modal size distribution for interstellar grains: large grains with radii ${\geq}$$2500{\,{\rm \AA}}$ would provide extinction in the visible region, including scattering which is strongly forward directed, and small particles with radii ${\leq}$$250{\,{\rm \AA}}$ would dominate the UV region and contribute nearly isotropic scattering. - In the 1960s, [**the first attempts to search for the 3.1${\,{\rm \mu m}}$ feature of H$_2$O ice in the diffuse ISM were unsuccessful**]{} (Danielson, Woolf, & Gaustad 1965; Knacke, Cudaback, & Gaustad 1969b), although it had long been considered to be a possible constituent of interstellar grains. This was the strongest objection against the dirty-ice model of Oort & van de Hulst (1946). - In 1973, the 3.1${\,{\rm \mu m}}$ H$_2$O ice feature was finally detected (Gillett & Forrest 1973). But it was recognized that [**water ice is present only in dense regions**]{} (usually with $A_V$$>$3${\,{\rm mag}}$). - By the early 1970s, [**silicates had been detected in the ISM**]{}, first in emission in the Trapezium region of the Orion Nebula (Stein & Gillett 1969), then in absorption toward the Galactic Center (Hackwell, Gehrz, & Woolf 1970), and toward the Becklin-Neugebauer object and Kleinmann-Low Nebula (Gillett & Forrest 1973). - In 1973, Gillett, Forrest, & Merrill (1973) detected prominent emission features at 8.6, 11.3, and 12.7${\,{\rm \mu m}}$ in the planetary nebulae NGC 7027 and BD+30$^{\rm o}$3639. 
These features, together with the 3.3, 6.2, and 7.7${\,{\rm \mu m}}$ features, were collectively known as the [**“[*unidentified infrared*]{}” (UIR) bands**]{}, which are now often attributed to polycyclic aromatic hydrocarbon (PAH) molecules (Duley & Williams 1981; Léger & Puget 1984; Allamandola, Tielens, & Barker 1985; Allamandola, Hudgins, & Sandford 1999).[^14] - Willner et al. (1979) detected a strong absorption band at 3.4${\,{\rm \mu m}}$ in the Galactic Center toward Sgr AW. Wickramasinghe & Allen (1980) detected this feature in the Galactic Center source IRS7. Although it is generally accepted that this feature is due to the C–H stretching mode in saturated [**aliphatic hydrocarbons**]{}, the exact nature of this hydrocarbon material remains uncertain (see Pendleton & Allamandola 2002 and Pendleton 2004 for recent reviews). This feature has also been detected in the carbon-rich protoplanetary nebula CRL618 (Lequeux & Jourdain de Muizon 1990; Chiar et al. 1998) with close resemblance to the interstellar feature. - In 1973, Morton et al. found that [**the gas-phase abundances of some heavy elements**]{} (relative to hydrogen) measured by the [*Copernicus*]{} UV satellite for interstellar clouds are [**significantly lower than in the Sun**]{}. - In 1974, Field noted that [**the depletions of certain elements observed by Morton et al. (1973) correlate with the temperatures for dust condensation**]{} in stellar atmospheres or nebulae. He suggested that these elements had condensed into dust grains near stars and that other elements have accreted onto such grains in interstellar space after they enter the ISM, forming a mantle composed of H, C, N and O compounds. - In 1974, Greenberg found that the observed depletion of C, N, and O is significantly greater than could be accommodated by the dust under any reasonable model, using the gas-phase abundances measured by [*Copernicus*]{} for the $\zeta$ Ophiuchi sightline (Morton et al. 
1973) and the solar abundances as the reference abundances. - Twenty years later, Sofia, Cardelli, & Savage (1994) found that the interstellar depletions are lowered for C, N, and O if B stars are used as the reference standard. They argued that the solar system may have enhanced abundances of many elements, and therefore the solar abundances are not representative of the interstellar abundances. - Snow & Witt (1996) analyzed the surface abundances of B stars and field F and G stars and found that not only C, N, and O but also Si, Mg, and Fe and many other elements are underabundant in these stars. This led them to suggest that the [**interstellar abundances are appreciably subsolar**]{} (${{\sim\,}}$60%–70% of the solar values).[^15] - In 1980, Schmidt, Cohen, & Margon detected in the Red Rectangle a far-red continuum emission in excess of what would be expected from simple scattering of starlight by interstellar dust. This continuum emission, known as the “[**extended red emission**]{}” (ERE), consists of a broad, featureless emission band between $\sim$5400${\,{\rm \AA}}$ and 9500${\,{\rm \AA}}$, peaking at $6100{\leq}\lambda_{\rm p} {\leq}8200{\,{\rm \AA}}$, and with a width $600{\,{\rm \AA}}{\leq}{\rm FWHM}{\leq}1000{\,{\rm \AA}}$.[^16] The ERE has been seen in a wide variety of dusty environments: the diffuse ISM of our Galaxy, reflection nebulae, planetary nebulae, HII regions, and other galaxies (see Witt & Vijh 2004 for a recent review). - The ERE is generally attributed to photoluminescence (PL) by some component of interstellar dust, powered by UV/visible photons. The photon conversion efficiency of the diffuse ISM has been determined to be near unity (Gordon, Witt, & Friedmann 1998).
$\longrightarrow$ [**The ERE carriers are very likely in the nanometer size range**]{} because nanoparticles are expected to luminesce efficiently through the recombination of the electron-hole pair created upon absorption of an energetic photon, since in such small systems the excited electron is spatially confined and the radiationless transitions that are facilitated by Auger and defect related recombination are reduced (see Li 2004a). - [**The ERE carrier remains unidentified**]{}. Various candidate materials have been proposed, but most of them appear unable to match the observed ERE spectra and satisfy the high-PL efficiency requirement (Li & Draine 2002a; Li 2004a; Witt & Vijh 2004). Promising candidates include PAHs (d’Hendecourt et al. 1986) and silicon nanoparticles (Ledoux et al. 1998, Witt, Gordon, & Furton 1998, Smith & Witt 2002), but both have their own problems (see Li & Draine 2002a). - In 1956, [**John R. Platt first suggested that very small grains or large molecules of less than 10${\,{\rm \AA}}$ in radius**]{} grown by random accretion from the interstellar gas could be responsible for the observed interstellar extinction and polarization. Platt (1956) postulated these “Platt” particles as quantum-mechanical particles containing many ions and free radicals with unfilled electronic energy bands. - In 1968, Donn further proposed that [**PAH-like “Platt particles” may be responsible for the UV interstellar extinction.**]{} - In 1968, Greenberg first pointed out that very small grains with a heat content smaller than or comparable to the energy of a single stellar photon, cannot be characterized by a steady-state temperature but rather are subject to [**substantial temporal fluctuations in temperature**]{}. 
- Andriesse (1978) for the first time presented observational evidence for the existence of [**“Platt” particles in a dust cloud near M17**]{}, as indicated by its near-invariant 8–20${\,{\rm \mu m}}$ spectral shape over a distance of ${{\sim\,}}$2$^{\prime}$ through the source and by its broad spectral energy distribution characterized by a combination of widely different color temperatures. He found that the observed IR spectrum of M17 could be explained by a population of large grains and a population of “Platt” particles of ${{\sim\,}}$10${\,{\rm \AA}}$ in size which exhibit temperature fluctuations. - Sellgren, Werner, & Dinerstein (1983) found that the color temperatures of the 2–5${\,{\rm \mu m}}$ near-IR continuum (${{\sim\,}}$1000K) and the spectral shapes of the 3.3${\,{\rm \mu m}}$ emission features of three visual reflection nebulae NGC 7023, 2023, and 2068 show very little variation from source to source and within a given source with distance from the central star. They attributed the near-IR continuum emission to ultrasmall grains of radii ${{\sim\,}}$10${\,{\rm \AA}}$ undergoing large excursions in temperature due to stochastic heating by single stellar photons. - The presence of a population of ultrasmall grains in the diffuse ISM was explicitly indicated by the 12${\,{\rm \mu m}}$ and 25${\,{\rm \mu m}}$ “cirrus” emission detected by the [*Infrared Astronomical Satellite*]{} (IRAS) (Boulanger & Pérault 1988), which is far in excess (by several orders of magnitude) of what would be expected from large grains of 15–25${\,{\rm K}}$ in thermal equilibrium with the general interstellar radiation field. Subsequent measurements by the [*Diffuse Infrared Background Experiment*]{} (DIRBE) instrument on the [*Cosmic Background Explorer*]{} (COBE) satellite confirmed this and detected additional broadband emission at 3.5${\,{\rm \mu m}}$ and 4.9${\,{\rm \mu m}}$ (Arendt et al. 1998).
More recently, spectrometers aboard the [*Infrared Telescope in Space*]{} (IRTS) (Onaka et al. 1996; Tanaka et al. 1996) and the [*Infrared Space Observatory*]{} (ISO) (Mattila et al. 1996) have shown that the diffuse ISM radiates strongly in emission features at 3.3, 6.2, 7.7, 8.6, and 11.3${\,{\rm \mu m}}$. - The modern era of interstellar grain models probably began in 1977 with the paper by Mathis, Rumpl, & Nordsieck (1977). By fitting the interstellar extinction over the wavelength range of 0.11${\,{\rm \mu m}}$${\leq}$$\lambda$${\leq}$1${\,{\rm \mu m}}$, Mathis et al. derived a power-law size distribution of $dn/da \sim a^{-3.5}$ for a mixture of [**bare silicate and graphite grains**]{}.[^17] With the substantial improvements made by Draine & Lee (1984), this model became one of the standard interstellar grain models with well-characterized chemical composition, size distribution, optical and thermal properties. Modifications to this model were later made by Draine & Anderson (1985), Weiland et al. (1986), Sorrell (1990), Siebenmorgen & Krügel (1992), Rowan-Robinson (1992), Kim, Martin, & Hendry (1994), Dwek et al. (1997), Clayton et al. (2003), and Zubko, Dwek, & Arendt (2003) by including new dust components (e.g., amorphous carbon, carbonaceous organic refractory, and PAHs) and adjusting dust sizes (e.g., deriving dust size distributions using the “Maximum Entropy Method” or the “Method of Regularization” rather than presuming a certain functional form).\ Recent developments were made by Draine and his coworkers (Li & Draine 2001b, 2002b,c; Weingartner & Draine 2001a) who have extended the silicate-graphite grain model to explicitly include a PAH component as the small-size end of the carbonaceous grain population. 
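As an aside on the MRN power law quoted above: a size distribution $dn/da \sim a^{-3.5}$ puts most of the grain mass in the largest grains but most of the geometric surface area (and hence most of the UV extinction) in the smallest, since the mass integrand scales as $a^{-0.5}$ while the area integrand scales as $a^{-1.5}$. A minimal numerical sketch (the size limits of 50${\,{\rm \AA}}$ and 0.25${\,{\rm \mu m}}$, and the 0.1${\,{\rm \mu m}}$ dividing size, are illustrative choices, not values from this text):

```python
import math

# MRN-type size distribution: dn/da ∝ a^(-3.5) between a_min and a_max.
# Closed-form integrals:  mass  ∝ ∫ a^3 a^-3.5 da = 2 (sqrt(a2) - sqrt(a1))
#                         area  ∝ ∫ a^2 a^-3.5 da = 2 (a1^-0.5 - a2^-0.5)
# Size limits below are illustrative (roughly the MRN fitting range).
a_min, a_mid, a_max = 50e-8, 0.1e-4, 0.25e-4   # cm (50 A, 0.1 um, 0.25 um)

def mass(a1, a2):
    """Relative grain mass held in radii between a1 and a2."""
    return 2.0 * (math.sqrt(a2) - math.sqrt(a1))

def area(a1, a2):
    """Relative geometric cross section held in radii between a1 and a2."""
    return 2.0 * (a1**-0.5 - a2**-0.5)

mass_big = mass(a_mid, a_max) / mass(a_min, a_max)
area_big = area(a_mid, a_max) / area(a_min, a_max)
print(f"mass fraction in a > 0.1 um grains: {mass_big:.2f}")  # ~0.43
print(f"area fraction in a > 0.1 um grains: {area_big:.2f}")  # ~0.10
```

With these illustrative limits, roughly 43% of the mass but only ${{\sim\,}}$10% of the cross section sits in grains larger than 0.1${\,{\rm \mu m}}$, which is one way to see why a single-size grain population cannot account for both the visible and the far-UV extinction.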
It has been shown that the IR emission spectrum calculated from this model closely matches that observed for the Milky Way (Li & Draine 2001b), the Small Magellanic Cloud (SMC; Li & Draine 2002c), and more recently the ringed Sb galaxy NGC7331 (Regan et al. 2004; Smith et al. 2004), including the “UIR” emission bands at 3.3, 6.2, 7.7, 8.6, and 11.3${\,{\rm \mu m}}$. - In contrast to the bare silicate-graphite model, Greenberg (1978) proposed that interstellar grains could be [**coated by a layer of organic refractory material**]{} derived from the photoprocessing of ice mantles acquired in molecular clouds and repeatedly cycled into and out of diffuse and molecular clouds. The organic refractory mantles would provide a shield against destruction of the silicate cores. Since the rate of production of silicate dust in stars is about 10 times slower than the rate of destruction in the ISM (mostly caused by sputtering and grain-grain collisions in interstellar shock waves; Draine & Salpeter 1979a,b, Jones et al. 1994), the silicates would be underabundant if they were not protected and thus it would be hard to explain the observed large depletions of Si, Fe and Mg and the strength of the observed 9.7${\,{\rm \mu m}}$ silicate absorption feature, unless most of the silicate mass was condensed in the ISM as suggested by Draine (1990). The most recent development of this model was that of Li & Greenberg (1997), who modeled the core-mantle grains as finite cylinders (to account for the interstellar polarization). 
In addition, a PAH component and a population of small graphitic grains are added respectively to account for the far-UV extinction rise plus the “UIR” emission bands and the 2175${\,{\rm \AA}}$ extinction bump.\
Modifications to this model were also made by considering different coating materials (e.g., amorphous carbon, hydrogenated amorphous carbon \[HAC\]), including new dust types (e.g., iron, small bare silicates), and varying dust size distributions (Chlewicki & Laureijs 1988; Duley, Jones, & Williams 1989; Désert, Boulanger, & Puget 1990; Li & Greenberg 1998; Zubko 1999). In particular, Duley et al. (1989) speculated that the silicate cores are coated with a mantle of HAC material arising from direct accretion of gas-phase elemental carbon on the silicate cores in the diffuse ISM. - Recognizing that grain shattering due to grain-grain collisions and subsequent reassembly through agglomeration of grain fragments may be important in the ISM, Mathis & Whiffen (1989) proposed that interstellar grains may consist of a [**loosely coagulated structure built up from small individual particles of silicates and carbon of various kinds**]{} (amorphous carbon, HAC, and organic refractories). Further developments of this composite model were made by Mathis (1996), Iatì et al. (2001, 2004), Saija et al. (2001, 2003), and Zubko, Dwek, & Arendt (2003) (see §2 of Li 2004a for more details).

Interstellar Grains, What Do We Know?
=====================================

Our knowledge of interstellar dust regarding its size, shape and composition is mainly derived from its interaction with electromagnetic radiation: attenuation (absorption and scattering) and polarization of starlight, and emission of IR and far-IR radiation. Presolar grains identified in meteorites and interplanetary dust particles (IDPs) of cometary origin also contain useful information regarding the nature of interstellar grains.
The principal observational keys, both direct and indirect, used to constrain the properties of dust were summarized in the recent reviews of Draine (2003) and Li (2004b). - [**(1) Sizes.**]{}  From the wavelength-dependent interstellar extinction and polarization curves as well as the near, mid and far IR emission, we know that there must exist a distribution of grain sizes, ranging from a few angstroms to a few micrometers. - The interstellar extinction curve contains important information regarding the grain sizes since, generally speaking, a grain absorbs and scatters light most effectively at wavelengths comparable to its size $\lambda$$\approx$$2\pi a$. The extinction curve rises from the near-IR to the near-UV, with a broad absorption feature at about $\lambda^{-1}$$\approx$4.6${\,{\rm \mu m}}^{-1}$ ($\lambda$$\approx$2175${\,{\rm \AA}}$), followed by a steep rise into the far-UV $\lambda^{-1}$$\approx$10${\,{\rm \mu m}}^{-1}$. $\longrightarrow$ [**There must exist in the ISM a population of large grains with $a$${\geq}$$\lambda/2\pi$$\approx$$0.1{\,{\rm \mu m}}$ to account for the extinction at visible wavelengths, and a population of ultrasmall grains with $a$${\leq}$$\lambda/2\pi$$\approx$$0.016{\,{\rm \mu m}}$ to account for the far-UV extinction at $\lambda$=$0.1{\,{\rm \mu m}}$**]{} (see Li 2004a for details). - The interstellar polarization curve rises from the IR, has a maximum somewhere in the optical and then decreases toward the UV. $\longrightarrow$ [**There must exist a population of aligned, nonspherical grains with typical sizes of $a$$\approx$$\lambda/2\pi$$\approx$0.1${\,{\rm \mu m}}$ responsible for the peak polarization at $\lambda$$\approx$0.55${\,{\rm \mu m}}$.**]{} - Interstellar grains absorb starlight in the UV/visible and re-radiate in the IR.
The IR emission spectrum of the Milky Way diffuse ISM, estimated using the IRAS 12, 25, 60 and 100${\,{\rm \mu m}}$ broadband photometry, the DIRBE-COBE 2.2, 3.5, 4.9, 12, 25, 60, 100, 140 and 240${\,{\rm \mu m}}$ broadband photometry, and the FIRAS-COBE 110${\,{\rm \mu m}}$$<$$\lambda$$<$3000${\,{\rm \mu m}}$ spectrophotometry, is characterized by a modified black-body of $\lambda^{-1.7}B_\lambda$(T=19.5${\,{\rm K}}$) peaking at ${{\sim\,}}$130${\,{\rm \mu m}}$ in the wavelength range of 80${\,{\rm \mu m}}$${\leq}$$\lambda$${\leq}$1000${\,{\rm \mu m}}$, and a substantial amount of emission at $\lambda$${\leq}$60${\,{\rm \mu m}}$ which far exceeds what would be expected from dust at $T$$\approx$20${\,{\rm K}}$. In addition, spectrometers aboard the IRTS (Onaka et al. 1996; Tanaka et al. 1996) and ISO (Mattila et al. 1996) have shown that the diffuse ISM radiates strongly in emission features at 3.3, 6.2, 7.7, 8.6, and 11.3${\,{\rm \mu m}}$. $\longrightarrow$ [**There must exist a population of [**“cold dust”**]{} in the size range of $a$$>$250${\,{\rm \AA}}$, heated by starlight to equilibrium temperatures of 15${\,{\rm K}}$${\leq}$$T$${\leq}$25${\,{\rm K}}$**]{} and cooled by far-IR emission to produce the emission at $\lambda$${\geq}$60${\,{\rm \mu m}}$ which accounts for ${{\sim\,}}$65% of the total emitted power (see Li & Draine 2001b); [**there must also exist a population of [**“warm dust”**]{} in the size range of $a$$<$250${\,{\rm \AA}}$, stochastically heated by single starlight photons to temperatures $T$$\gg$20${\,{\rm K}}$**]{} and cooled by near- and mid-IR emission to produce the emission at $\lambda$${\leq}$60${\,{\rm \mu m}}$ which accounts for ${{\sim\,}}$35% of the total emitted power (see Li & Draine 2001b; Li 2004a). - The scattering properties of dust grains (albedo and phase function) provide a means of constraining the optical properties of the grains and are therefore indicators of their size and composition. 
The albedo in the near-IR and optical is quite high (${{\sim\,}}$0.6), with a clear dip to ${{\sim\,}}$0.4 around the 2175${\,{\rm \AA}}$ hump, a rise to ${{\sim\,}}$0.8 around $\lambda^{-1}$$\approx$6.6${\,{\rm \mu m}}^{-1}$, and a drop to ${{\sim\,}}$0.3 by $\lambda^{-1}$$\approx$10${\,{\rm \mu m}}^{-1}$; the scattering asymmetry factor almost monotonically rises from ${{\sim\,}}$0.6 to ${{\sim\,}}$0.8 from $\lambda^{-1}$$\approx$1${\,{\rm \mu m}}^{-1}$ to $\lambda^{-1}$$\approx$10${\,{\rm \mu m}}^{-1}$ (see Gordon 2004). $\longrightarrow$ [**An appreciable fraction of the extinction in the near-IR and optical must arise from scattering; the 2175${\,{\rm \AA}}$ hump is an absorption feature with no scattered component; and ultrasmall grains are predominantly absorptive.**]{} - The “anomalous” Galactic foreground microwave emission in the 10–100${\,{\rm GHz}}$ region (Draine & Lazarian 1998a,b), the photoelectric heating of the diffuse ISM (Bakes & Tielens 1994, Weingartner & Draine 2001b), and (probably) the ERE (Witt & Vijh 2004) also provide direct or indirect proof for the existence of [**nanometer-sized grains**]{} in the ISM (see §2 in Li 2004a for details). - Both [**micrometer-sized presolar grains**]{} (such as graphite, SiC, corundum Al$_2$O$_3$, and silicon nitride Si$_3$N$_4$) and [**nanometer-sized presolar grains**]{} (such as nanodiamonds and titanium carbide nanocrystals)[^18] of interstellar origin as indicated by their anomalous isotopic composition have been identified in primitive meteorites (see Clayton & Nittler 2004 for a recent review). Presolar silicate grains have recently been identified in IDPs (Messenger et al. 2003). Submicron-sized GEMS (Glass with Embedded Metals and Sulfides) of presolar origin have also been identified in IDPs and their 8–13${\,{\rm \mu m}}$ absorption spectra are similar to those observed in interstellar molecular clouds and young stellar objects (see Bradley 2003 for a recent review).
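The characteristic grain sizes quoted throughout this section follow from the $a$$\approx$$\lambda/2\pi$ rule of thumb introduced above; a minimal check of the arithmetic:

```python
import math

# Rule of thumb: a grain interacts most strongly with light of
# wavelength comparable to its circumference, lambda ≈ 2*pi*a.
def size_for_wavelength(lam_um):
    """Characteristic grain radius (in um) probed at wavelength lam_um (um)."""
    return lam_um / (2.0 * math.pi)

a_visible = size_for_wavelength(0.55)  # V band
a_far_uv = size_for_wavelength(0.10)   # far-UV
print(f"visible (0.55 um): a ~ {a_visible:.3f} um")  # ~0.09 um
print(f"far-UV  (0.10 um): a ~ {a_far_uv:.3f} um")   # ~0.016 um
```

These are the $a$$\approx$0.1${\,{\rm \mu m}}$ and $a$$\approx$0.016${\,{\rm \mu m}}$ thresholds invoked for the visible and far-UV extinction, respectively.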
- Very large interstellar grains (with radii $a$$>$1${\,{\rm \mu m}}$) entering the solar system have been detected by the interplanetary spacecraft [*Ulysses*]{} and [*Galileo*]{} (Grün et al. 1993, 1994). Huge grains with radii of $a$${{\sim\,}}$10${\,{\rm \mu m}}$ whose interstellar origin was indicated by their hyperbolic velocities have been detected by radar methods (Taylor et al. 1996). But Frisch et al. (1999) and Weingartner & Draine (2001a) argued that the amount of very large grains inferred from these detections was difficult to reconcile with the interstellar extinction and interstellar elemental abundances. - [**(2) Shapes.**]{}  The detection of interstellar polarization clearly indicates that [**some fraction of the interstellar grains must be nonspherical and aligned.**]{} The fact that the wavelength dependence of the interstellar polarization exhibits a steep decrease toward the UV suggests that [**the ultrasmall grain component responsible for the far-UV extinction rise is either spherical or not aligned.**]{} - The 9.7 and 18${\,{\rm \mu m}}$ silicate absorption features are polarized in some interstellar regions, most of which are featureless.[^19] Polarization has also been detected in the 3.1${\,{\rm \mu m}}$ H$_2$O, 4.67${\,{\rm \mu m}}$ CO and 4.62${\,{\rm \mu m}}$ OCN$^{-}$ absorption features (e.g. see Chrysostomou et al. 1996). Hough et al. (1996) reported the detection of a weak 3.47${\,{\rm \mu m}}$ polarization feature in the Becklin-Neugebauer object in the OMC-1 Orion dense molecular cloud, attributed to carbonaceous materials with diamond-like structure. $\longrightarrow$ The detection of polarization in both silicate and ice absorption features is consistent with the assumption of a core-mantle grain morphology (e.g. see Lee & Draine 1985). - So far only two lines of sight toward HD147933 and HD197770 have a weak 2175${\,{\rm \AA}}$ polarization feature detected (Clayton et al. 1992; Anderson et al. 1996; Wolff et al.
1997; Martin, Clayton, & Wolff 1999). Even for these sightlines, the degree of alignment and/or polarizing ability of the carrier should be very small (see §2.1.2.1 in Li & Greenberg 2003 for details). $\longrightarrow$ The 2175${\,{\rm \AA}}$ bump carrier is a very inefficient polarizer (i.e. it is either nearly spherical or poorly aligned). - So far, no polarization has been detected for the DIBs (see Somerville 1996 for a review), the 3.4${\,{\rm \mu m}}$ absorption feature (Adamson et al. 1999),[^20] and the “UIR” emission bands (Sellgren, Rouan, & Léger 1988). $\longrightarrow$ Their carriers do not align or lack optical anisotropy. - [**(3) Composition.**]{}  It is now generally accepted that interstellar grains consist of amorphous silicates and some form of carbonaceous materials; the former is inferred from the 9.7${\,{\rm \mu m}}$ Si–O stretching mode and 18${\,{\rm \mu m}}$ O–Si–O bending mode absorption features in interstellar regions as well as the fact that the cosmically abundant heavy elements such as Si, Fe, Mg are highly depleted; the latter is mainly inferred from the 2175${\,{\rm \AA}}$ extinction hump (and the ubiquitous 3.4${\,{\rm \mu m}}$ C–H stretching vibrational band) and the fact that silicates alone are not able to provide enough extinction (see Footnote-14 of Li 2004b). - The 9.7${\,{\rm \mu m}}$ and 18${\,{\rm \mu m}}$ absorption features are ubiquitously seen in a wide range of astrophysical environments. These features are almost certainly due to silicate minerals: they are respectively ascribed to the Si–O stretching and O–Si–O bending modes in some form of silicate material (e.g. olivine Mg$_{2x}$Fe$_{2-2x}$SiO$_4$). In the ISM, these features are broad and relatively featureless. $\longrightarrow$ [**Interstellar silicates are largely amorphous rather than crystalline.**]{}[^21] - The strength of the 9.7${\,{\rm \mu m}}$ feature is approximately $\Delta \tau_{9.7{\,{\rm \mu m}}}/A_V$$\approx 1/18.5$ in the local diffuse ISM.
$\longrightarrow$ [**Almost all Si atoms have been locked up in silicate dust, assuming solar abundances for the ISM**]{} (see Footnote-9 of Li 2004b).[^22] - The 3.4${\,{\rm \mu m}}$ absorption feature is also ubiquitously seen in the diffuse ISM (but never in dense regions) of the Milky Way and other galaxies (e.g. Seyfert galaxies and ultraluminous infrared galaxies, see Pendleton 2004 for a recent review). This feature is generally attributed to the C–H stretching mode in [**aliphatic hydrocarbon dust**]{}, although its exact nature remains uncertain.[^23] - In principle, we could estimate the volume ratio of the silicate component to the aliphatic hydrocarbon component (1) if we know the band strength of the carrier of the 3.4${\,{\rm \mu m}}$ absorption feature (see Li 2004b), or (2) if we know the total abundances of interstellar elements (see Li 2005a). However, neither is precisely known. - [**(4) Abundance and Distribution.**]{}  Interstellar grains are unevenly distributed but primarily confined to the galactic plane with an effective thickness of ${{\sim\,}}$200${\,{\rm pc}}$. On average, the “rate of extinction” (the amount of visual extinction per unit distance) $\langle A_V/L\rangle$ is about ${{\sim\,}}$1.8${\,{\rm mag\, kpc}^{-1}}$ for the sightlines close to the galactic plane and for distances up to a few kiloparsecs from the Sun (Whittet 2003).
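The dust and gas densities that this rate of extinction implies can be reproduced with a back-of-the-envelope calculation. In the sketch below, the extinction efficiency $Q_{\rm ext}\approx 1.5$, the factor 1.086 converting optical depth to magnitudes, and the helium mass correction factor 1.4 are assumptions of this illustration; the single grain size of 0.1${\,{\rm \mu m}}$, the material density of 2.5${\,{\rm g}}{\,{\rm cm}}^{-3}$, and the Bohlin et al. (1978) $E(B-V)/N_{\rm H}$ ratio are the values used in this section:

```python
import math

# Back-of-the-envelope densities from the "rate of extinction"
# <A_V/L> ~ 1.8 mag/kpc.  Assumptions of this sketch: single-size
# grains (a = 0.1 um, material density 2.5 g/cm^3), Q_ext ~ 1.5.
KPC_CM = 3.086e21                       # cm per kpc
AV_PER_CM = 1.8 / KPC_CM                # mag cm^-1
a = 0.1e-4                              # grain radius, cm
rho_grain = 2.5                         # g cm^-3
Q_EXT = 1.5                             # assumed extinction efficiency

# A_V/L = 1.086 * n_dust * Q_ext * pi * a^2   (1.086 converts tau to mag)
n_dust = AV_PER_CM / (1.086 * Q_EXT * math.pi * a**2)         # cm^-3
rho_dust = n_dust * (4.0 / 3.0) * math.pi * a**3 * rho_grain  # g cm^-3

# Gas side, using E(B-V)/N_H ~ 1.7e-22 mag cm^2 and R_V = 3.1:
# n_H = <A_V/L> * (N_H / E(B-V)) / R_V   (note the division by R_V)
NH_PER_EBV = 1.0 / 1.7e-22              # cm^-2 mag^-1
R_V = 3.1
n_H = AV_PER_CM * NH_PER_EBV / R_V      # cm^-3
rho_gas = 1.4 * 1.67e-24 * n_H          # g cm^-3 (1.4 = He correction)

print(f"n_dust   ~ {n_dust:.1e} cm^-3")       # ~1.1e-12
print(f"rho_dust ~ {rho_dust:.1e} g cm^-3")   # ~1.2e-26
print(f"n_H      ~ {n_H:.1f} cm^-3")          # ~1.1
print(f"gas/dust ~ {rho_gas / rho_dust:.0f}") # ~220
```

The agreement with the ${{\sim\,}}$210 gas-to-dust mass ratio quoted in this section is only approximate by construction; changing $Q_{\rm ext}$ or the assumed mean grain size shifts the dust-side numbers proportionally.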
Assuming a mean grain size of ${{\sim\,}}$0.1${\,{\rm \mu m}}$ and a typical mass density of ${{\sim\,}}$2.5${\,{\rm g}}{\,{\rm cm}}^{-3}$ for the interstellar grain material, we can estimate the mean dust number density and mass density in the solar neighbourhood ISM respectively to be $n_{\rm dust} \approx 1.1\times 10^{-12}{\,{\rm cm}}^{-3}$ and $\rho_{\rm dust} \approx 1.2\times 10^{-26}{\,{\rm g}}{\,{\rm cm}}^{-3}$ from the “rate of extinction”.[^24]\
The association of interstellar dust and gas was demonstrated by Bohlin, Savage, & Drake (1978), who found that the color excess and the total hydrogen column density (determined from the observations of HI Lyman-$\alpha$ and H$_2$ absorption lines with the [*Copernicus*]{} satellite) were well correlated: $E(B-V)/\NH\approx 1.7\times 10^{-22}{\,{\rm mag}}{\,{\rm cm}}^2$ for the diffuse ISM in the solar neighbourhood. This correlation has recently been confirmed by the observations with the [*Far Ultraviolet Spectroscopic Explorer*]{} (FUSE) up to $E(B-V)\approx 1.0$ (Rachford et al. 2002),[^25] suggesting that [**the dust and gas are generally well mixed in the ISM**]{}. From this ratio of $E(B-V)$ to $\NH$ one can estimate the gas-to-dust mass ratio to be ${{\sim\,}}$210 in the diffuse ISM if we take $R_V\approx 3.1$ (see Footnote-2 in Li 2004b); together with the “rate of extinction” $\langle A_V/L\rangle \approx 1.8{\,{\rm mag\, kpc}^{-1}}$, one can estimate the hydrogen number density to be $n_{\rm H} = \langle A_V/L\rangle\,\NH/\left[R_V\,E(B-V)\right] \approx 1.1{\,{\rm cm}}^{-3}$ and the gas mass density to be $\rho_{\rm gas} \approx 2.6\times 10^{-24}{\,{\rm g}}{\,{\rm cm}}^{-3}$. I thank the organizers F. Borghese and R. Saija for inviting me to this very exciting and fruitful conference. I thank F. Borghese, C. Cecchi-Pestellini, A. Giusto, M.A. Iatì, M.I. Mishchenko, and R. Saija for helpful discussions.
References {#references .unnumbered} ========== [9]{} Adamson, A.J., Whittet, D.C.B., Chrysostomou, A., Hough, J.H., Aitken, D.K., Wright, G.S., & Roche, P.F. 1999, ApJ, 512, 224 Aguirre, A.N. 2000, ApJ, 533, 1 Aiello, S., & Cecchi-Pestellini, C. 2000, in Italian Phys. Soc. Conf. Proc. 67, Molecules in Space and in the Laboratory, ed. I. Porceddu & S. Aiello, 3 Aitken, D.K., Roche, P.F., Smith, C.H., James, S.D., & Hough, J.H. 1988, MNRAS, 230, 629 Allamandola, L.J., Hudgins, D.M., & Sandford, S.A. 1999, ApJ, 511, L115 Allamandola, L.J., Tielens, A.G.G.M., & Barker, J.R. 1985, ApJ, 290, L25 Allende Prieto, C., Lambert, D.L., & Asplund, M. 2002, ApJ, 573, L137 Anders, E., & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197 Andersen, A.C., Sotelo, J.A., Niklasson, G.A., & Pustovit, V.N. 2002, A&A, 386, 296 Anderson, C.M., et al. 1996, AJ, 112, 2726 Andriesse, C.D. 1978, A&A, 66, 169 Arendt, R.G., et al. 1998, ApJ, 508, 74 Asplund, M., Grevesse, N., Sauval, A.J., Allende Prieto, C., & Kiselman, D. 2004, A&A, 417, 751 Bakes, E.L.O., & Tielens, A.G.G.M. 1994, ApJ, 427, 822 Barnard, E.E. 1919, ApJ, 49, 1 Behr, A. 1959, Zeitschr. für Astrophys., 47, 54 Biermann, P., & Harwit, M. 1980, ApJ, 241, L105 Bless, R.C., & Savage, B.D. 1972, ApJ, 171, 293 Boggess, A., & Borgman, J. 1964, ApJ, 140, 1636 Bohlin, R.C., Savage, B.D., & Drake, J.F. 1978, ApJ, 224, 132 Boulanger, F., & Pérault, M. 1988, ApJ, 330, 964 Bradley, J.P. 2003, in Astromineralogy, LNP609, ed. Th. Henning (Berlin: Springer), 217 Cardelli, J.A., Clayton, G.C., & Mathis, J.S. 1989, ApJ, 345, 245 Cayrel, R., & Schatzman, E. 1954, Ann. d’Ap., 17, 555 Cernuschi, F., Marsicano, F.R., & Kimel, I. 1965, Ann. d’Ap., 28, 860 Chiar, J.E., Pendleton, Y.J., Geballe, T.R., & Tielens, A.G.G.M. 1998, ApJ, 507, 281 Chlewicki, G., & Laureijs, R.J. 1988, A&A, 207, L11 Chrysostomou, A., Hough, J.H., Whittet, D.C.B., Aitken, D.K., Roche, P. F., & Lazarian, A. 1996, ApJ, 465, L61 Clayton, D.D., & Nittler, L.R. 
2004, ARA&A, 42, 39 Clayton, G.C., et al. 1992, ApJ, 385, L53 Clayton, G.C., et al. 2003, ApJ, 588, 871 Clerke, A.M. 1903, Problems in Astrophysics (London: Black), 567 Coyne, G.V., Gehrels, T., & Serkowski, K. 1974, AJ, 79, 581 Danielson, R.E., Woolf, N.J., & Gaustad, J.E. 1965, ApJ, 141, 116 Davis, L., Jr., & Greenstein, J.L. 1951, ApJ, 114, 206 Désert, F.X., Boulanger, F., & Puget, J.L. 1990, A&A, 237, 215 Donn, B. 1968, ApJ, 152, L129 Dorschner, J. 1982, Ap&SS, 81, 323 Dorschner, J. 2003, in Astromineralogy, LNP609, ed. Th. Henning (Berlin: Springer), 1 Draine, B.T. 1990, in ASP Conf. Ser. 12, The Evolution of the Interstellar Medium, ed. L. Blitz (San Francisco: ASP), 193 Draine, B.T. 2003, ARA&A, 41, 241 Draine, B.T., & Anderson, N. 1985, ApJ, 292, 494 Draine, B.T., & Lazarian, A. 1998a, ApJ, 494, L19 Draine, B.T., & Lazarian, A. 1998b, ApJ, 508, 157 Draine, B.T., & Lee, H.M. 1984, ApJ, 285, 89 Draine, B.T., & Li, A. 2001, ApJ, 551, 807 Draine, B.T., & Salpeter, E.E. 1979a, ApJ, 231, 77 Draine, B.T., & Salpeter, E.E. 1979b, ApJ, 231, 438 Draine, B.T., & Weingartner, J.C. 1997, ApJ, 480, 633 Duley, W.W., & Williams, D.A. 1981, MNRAS, 196, 269 Duley, W.W., Jones, A.P., & Williams, D.A. 1989, MNRAS, 236, 709 Dwek, E. 2004, ApJ, 611, L109 Dwek, E., et al. 1997, ApJ, 475, 565 Dyck, H.M., & Londsale, C.J. 1981, in IAU Symp.96, Infrared Astronomy, ed. C.G. Wynne-Williams, & D.P. Cruickshank (Dordrecht: Reidel), 223 Dyck, H.M., Capps, R.W., Forrest, W.J., & Gillett, F.C. 1973, ApJ, 183, L99 Eddington, A.S. 1926, The Internal Constitution of the Stars (Cambridge: Cambridge Univ. Press) Field, G.B. 1974, ApJ, 187, 453 Friedemann, C. 1969, Astron. Nachr., 291, 177 Frisch, P.C., et al. 1999, ApJ, 525, 492 Gehrels, T. 1960, AJ, 65, 470 Gilra, D.P. 1971, Nature, 229, 237 Gillett, F.C., & Forrest, W.J. 1973, ApJ, 179, 483 Gillett, F.C., Forrest, W.J., & Merrill, K.M. 1973, ApJ, 184, L93 Gilman, R.C. 1969, ApJ, 155, L185 Gordon, K.D. 2004, in ASP Conf. Ser. 
309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 77 Gordon, K.D., Witt, A.N., & Friedmann, B.C. 1998, ApJ, 498, 522 Greenberg, J.M. 1968, in Stars and Stellar Systems, Vol. VII, ed. B.M. Middlehurst & L.H. Aller (Chicago: Univ. Chicago Press), 221 Greenberg, J.M. 1973, in IAU Symp.52, Interstellar Dust and Related Topics, ed. J.M. Greenberg, & H.C. van de Hulst (Dordrecht: Reidel), 3 Greenberg, J.M. 1974, ApJ, 189, L81 Greenberg, J.M. 1978, in Cosmic Dust, ed. J.A.M. McDonnell (New York: Wiley), 187 Greenberg, J.M., & Hong, S.S. 1974, in IAU Symp.60, Galactic and Radio Astronomy, ed. F. Kerr, & S.C. Simonson, (Dordrecht: Reidel), 155 Greenberg, J.M., & Li, A. 1996, A&A, 309, 258 Greenberg, J.M., & Li, A. 1999, Adv. Space Res., 24, 497 Greenberg, J.M., & Stoeckly, R. 1970, in IAU Symp.36, Ultraviolet Stellar Spectra and Related Ground-Based Observations, ed. L. Houziaux & H.E. Butler (Dordrecht: Reidel), 36 Greenberg, J.M., Li, A., Mendoza-Gómez, C.X., Schutte, W.A., Gerakines, P.A., & de Groot, M. 1995, ApJ, 455, L177 Greenstein, J.L. 1938, Harvard Obs. Circ., No.422 Grün, E., Gustafson, B.Å.S., Mann, I., Baguhl, M., Morfill, G.E., Staubach, P., Taylor, A., & Zook, H.A. 1994, A&A, 286, 915 Grün, E., et al. 1993, Nature, 362, 428 Guillois, O., Ledoux, G., & Reynaud, C. 1999, ApJ, 521, L133 Hackwell, J.A., Gehrz, R.D., & Woolf, N.J. 1970, Nature, 227, 822 Hall, J.S. 1937, ApJ, 85, 145 Hall, J.S. 1949, Science, 109, 166 Heger, M.L. 1922, Lick Obs. Bull., 337, 141 Hellyer, B. 1970, MNRAS, 148, 383 Henning, Th., Dorschner, J., & Gürtler, J. 1989, in Interstellar Dust (NASA CP-3036), 395 Henning, Th., & Salama, F. 1998, Science, 282, 2204 Henning, Th., Jäger, C., & Mutschke, H. 2004, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 603 Henyey, L.G., & Greenstein, J.L. 1941, ApJ, 93, 70 Herschel, W. 1785, Phil. Trans., 75, 213 Hiltner, W.A. 
1949, Science, 109, 165 Hough, J.H., Chrysostomou, A., Messinger, D.W., Whittet, D.C.B., Aitken, D.K., & Roche, P.F. 1996, ApJ, 461, 902 Hoyle, F., & Wickramasinghe, N.C. 1962, MNRAS, 124, 417 Hoyle, F., & Wickramasinghe, N.C. 1969, Nature, 223, 459 Hoyle, F., & Wickramasinghe, N.C. 1970, Nature, 226, 62 Hoyle, F., & Wickramasinghe, N.C. 1988, Ap&SS, 147, 245 Huffman, D.R., & Stapp, J.L. 1971, Nature, 229, 45 Iatì, M.A., Cecchi-Pestellini, C., Williams, D.A., Borghese, F., Denti, P., Saija, R., & Aiello, S. 2001, MNRAS, 322, 749 Iatì, M.A., Giusto, A., Saija, R., Borghese, F., Denti, P., Cecchi-Pestellini, C., & Aiello, S. 2004, ApJ, 615, 286 Indebetouw, R., et al. 2005, ApJ, 619, 931 Joblin, C., Léger, A., & Martin, P. 1992, ApJ, 393, L79 Johnson, H.L. 1965, ApJ, 141, 923 Johnson, H.L., & Borgman, J. 1963, Bull. Astron. Inst. Netherlands, 17, 115 Johnson, H.L., & Morgan, W.W. 1955, ApJ, 122, 142 Jones, A.P., & d’Hendecourt, L.B. 2004, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 589 Jones, A.P., Tielens, A.G.G.M., Hollenbach, D.J., & McKee, C.F. 1994, ApJ, 433, 797 Jura, M. 1980, 235, 63 Kamijo, F. 1963, PASJ, 15, 440 Kapteyn, J.C. 1904, ApJ, 24, 115 Kemp, J.C., & Wolstencroft, R.D. 1972, ApJ, 176, L115 Kemper, F., Vriend, W.J., & Tielens, A.G.G.M. 2004, ApJ, 609, 826 Kim, S.H., & Martin, P.G. 1996, ApJ, 462, 296 Kim, S.H., Martin, P.G., & Hendry, P.D. 1994, ApJ, 422, 164 Knacke, R.F., Gaustad, J.E., Gillett, F.C., & Stein, W.A. 1969a, ApJ, 155, L189 Knacke, R.F., Cudaback, D.D., & Gaustad, J.E. 1969b, ApJ, 158, 151 Kobayashi, Y., Kawara, K., Sato, S., & Okuda, H. 1980, PASJ, 32, 295 Kwok, S., Volk, K., & Hrivnak, B.J. 1989, ApJ, 345, L51 Lallement, R., Bertin, P., Ferlet, R., Vidal-Madjar, A., & Bertaux, J.L. 1994, A&A, 286, 898 Ledoux, G., et al. 1998, A&A, 333, L39 Lee, H.M., & Draine, B.T. 1985, ApJ, 290, 211 Léger, A., & Puget, J.L. 1984, A&A, 137, L5 Lequeux, J., & Jourdain de Muizon, M. 
1990, A&A, 240, L19 Lewis, R.S., Tang, M., Wacker, J.F., Anders, E., & Steel, E. 1987, Nature, 326, 16 Li, A. 2003a, ApJ, 584, 593 Li, A. 2003b, ApJ, 599, L45 Li, A. 2004a, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 417 Li, A. 2004b, in Penetrating Bars Through Masks of Cosmic Dust: the Hubble Tuning Fork Strikes a New Note, ed. D.L. Block, I. Puerari, K.C. Freeman, R. Groess, & E.K. Block (Dordrecht: Kluwer), 535 Li, A. 2005a, ApJ, 622, 000 Li, A. 2005b, to be submitted to ApJ Li, A., & Draine, B.T. 2001a, ApJ, 550, L213 Li, A., & Draine, B.T. 2001b, ApJ, 554, 778 Li, A., & Draine, B.T. 2002a, ApJ, 564, 803 Li, A., & Draine, B.T. 2002b, ApJ, 572, 232 Li, A., & Draine, B.T. 2002c, ApJ, 576, 762 Li, A., & Greenberg, J.M. 1997, A&A, 323, 566 Li, A., & Greenberg, J.M. 1998, A&A, 339, 591 Li, A., & Greenberg, J.M. 2002, ApJ, 577, 789 Li, A., & Greenberg, J.M. 2003, in Solid State Astrochemistry, ed. V. Pirronello, J. Krelowski, & G. Manicó (Dordrecht: Kluwer), 37 Lindblad, B. 1935, Nature, 135, 133 Lutz, D., et al. 1996, A&A, 315, L269 Maas, R.W., Ney, E.P., & Woolf, N.J. 1970, ApJ, 160, L101 Martin, P.G. 1972, MNRAS, 159, 179 Martin, P.G., & Whittet, D.C.B. 1990, ApJ, 357, 113 Martin, P.G., Clayton, G.C., & Wolff, M.J. 1999, ApJ, 510, 905 Martin, P.G., Illing, R., & Angel, J.R.P. 1972, MNRAS, 159, 191 Martin, P.G., et al. 1992, ApJ, 392, 691 Mathis, J.S. 1996, ApJ, 472, 643 Mathis, J.S., & Whiffen, G. 1989, ApJ, 341, 808 Mathis, J.S., Rumpl, W., & Nordsieck, K.H. 1977, ApJ, 217, 425 Mattila, K., et al. 1996, A&A, 315, L353 Mattioda, A.L., Allamandola, L.J., & Hudgins, D.M. 2005, ApJ, in press Mennella, V., Brucato, J.R., Colangeli, L., & Palumbo, P. 1999, ApJ, 524, L71 Merrill, P.W. 1934, PASP, 46, 206 Messenger, S., Keller, L.P., Stadermann, F.J., Walker, R.M., Zinner, E. 2003, Science, 300, 105 Morgan, W.W., Harris, D.L., & Johnson, H.L. 
1953, ApJ, 118, 92 Morton, D.C., Drake, J.F., Jenkins, E.B., Rogerson, J.B., Spitzer, L., & York, D.G. 1973, ApJ, 181, L103 Nandy, K. 1964, Publ. Roy. Obs. Edinburgh, 3, 142 Nayfeh, M.H., Habbal, S.R., & Rao, S. 2005, ApJ, 621, L121 O’Keefe, J.A. 1939, ApJ, 90, 294 Oort, J.H. 1932, Bull. Astron. Inst. Netherlands, 6, 249 Oort, J.H., & van de Hulst, H.C. 1946, Bull. Astron. Inst. Netherlands, 10, 187 Onaka, T., et al. 1996, PASJ, 48, L59 Pannekoek, A. 1920, Proc. Kon. Akad. Amsterdam, 23, No.5 Pendleton, Y.J. 2004, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 573 Pendleton, Y.J., & Allamandola, L.J. 2002, ApJS, 138, 75 Pendleton, Y.J., Sandford, S.A., Allamandola, L.J., Tielens, A.G.G.M., & Sellgren, K. 1994, ApJ, 437, 683 Pirronello, V., Manicó, G., Roser, J., & Vidali, G. 2004, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 529 Platt, J.R. 1956, ApJ, 123, 486 Posch, T., Mutschke, H., & Andersen, A. 2004, ApJ, 616, 1167 Purcell, E.M. 1969, ApJ, 158, 433 Rachford, B.L., et al. 2002, ApJ, 577, 221 Regan, M.W., et al. 2004, ApJS, 154, 204 Roche, P.F., & Aitken, D.K. 1985, MNRAS, 215, 425 Rowan-Robinson, M. 1992, MNRAS, 258, 787 Rudnick, J. 1936, ApJ, 83, 394 Russell, H.N. 1922, Dark Nebulae, Proc. Nat. Acad. Sci., 8, 115 Saija, R., Iatì, M.A., Borghese, F., Denti, P., Aiello, S., & Cecchi-Pestellini, C. 2001, ApJ, 559, 993 Saija, R., Iatì, M.A., Giusto, A., Borghese, F., Denti, P., Aiello, S., & Cecchi-Pestellini, C. 2003, MNRAS, 341, 1239 Sandford, S.A., Pendleton, Y.J., & Allamandola, L.J. 1995, ApJ, 440, 697 Saslaw, W.C., & Gaustad, J.E. 1969, Nature, 221, 160 Schalén, C. 1936, Medd. Uppsala Astron. Obs., No.64 Schalén, C. 1965, Medd. Lunds Obs. Ser. 1, No.210 Schmidt, G.D., Cohen, M., & Margon, B. 1980, ApJ, 239, L133 Schnaiter, M., Mutschke, H., Dorschner, J., Henning, Th., & Salama, F. 
1998, ApJ, 498, 486 Sellgren, K., Werner, M.W., & Dinerstein, H.L. 1983, ApJ, 271, L13 Sellgren, K., Rouan, D., & Léger, A. 1988, A&A, 196, 252 Serkowski, K. 1973, in IAU Symp.52, Interstellar Dust and Related Topics, ed. J.M. Greenberg, & H.C. van de Hulst (Dordrecht: Reidel), 145 Shapley, H., & Curtis, H.D. 1921, Bull. Nat. Res. Coun., 2, 217 Siebenmorgen, R., & Krügel, E. 1992, A&A, 259, 614 Slipher, V.M. 1912, Lowell Obs. Bull., 55, 26 Smith, T.L., & Witt, A.N. 2002, ApJ, 565, 304 Smith, T.L., Clayton, G.C., & Valencic, L. 2004, AJ, 128, 357 Smith, J.D., et al. 2004, ApJS, 154, 199 Snow, T.P., & Witt, A.N. 1996, ApJ, 468, L65 Sofia, U.J., Cardelli, J.A., & Savage, B.D. 1994, ApJ, 430, 650 Sofia, U.J., et al. 2005, ApJ, in press Somerville, W. 1996, in ASP Conf. Ser. 97, Polarimetry of the Interstellar Medium, ed. W.G. Roberge, & D.C.B. Whittet (San Francisco: ASP), 143 Sorrell, W.H. 1990, MNRAS, 243, 570 Speck, A.K, & Hofmeister, A.M. 2004, ApJ, 600, 986 Spencer Jones, H. 1914, MNRAS, 75, 4 Spitzer, L., & Tukey, J.W. 1950, ApJ, 114, 187 Stebbins, J., & Whitford, A.E. 1943, ApJ, 98, 20 Stebbins, J., Huffer, C.M., & Whitford, A.E. 1939, ApJ, 90, 209 Stein, W.A., & Gillett, F.C. 1969, ApJ, 155, L197 Stecher, T.P. 1965, ApJ, 142, 1683 Stecher, T.P., & Donn, B. 1965, ApJ, 142, 1681 Struve, F.G.W. 1847, Etudes d’Astronomie Stellaire Struve, O., & Elvey, C.T. 1936, ApJ, 83, 162 Tanaka, M., et al. 1996, PASJ, 48, L53 Taylor, A., Baggaley, W.J., & Steel, D.I. 1996, Nature, 380, 323 Trumpler, R.J. 1930, PASP, 42, 214 Uchida, K.I., Sellgren, K., & Werner, M.W. 1998, ApJ, 493, L109 van de Hulst, H.C. 1949, The Solid Particles in Interstellar Space, Rech. Astron. Obs. Utrecht, 11, part 2 van de Hulst, H.C. 1950, ApJ, 112, 1 van de Hulst, H.C. 1957, Light Scattering by Small Particles (New York: Wiley) van Kerckhoven, C., Tielens, A.G.G.M., & Waelkens, C. 2002, A&A, 384, 568 van Rhijn, 1921, Pub. Astron. Lab. Groningen, 31 Verschuur, G.L. 
1989, Interstellar Matters (Berlin: Springer-Verlag) Vijh, U.P., Witt, A.N., & Gordon, K.D. 2004, ApJ, 606, L65 von Helden, G., et al. 2000, Science, 288, 313 Vrba, F.J., Coyne, G.V., & Tapia, S. 1993, AJ, 105, 1010 Wampler, E.J. 1961, ApJ, 134, 861 Weiland, J.L., et al. 1986, ApJ, 306, L101 Weingartner, J.C., & Draine, B.T. 2001a, ApJ, 548, 296 Weingartner, J.C., & Draine, B.T. 2001b, ApJS, 134, 263 Whitford, A.E. 1948, ApJ, 107, 102 Whitford, A.E. 1958, AJ, 63, 201 Whittet, D.C.B. 2003, Dust in the Galactic Environment (2nd ed; Bristol: IoP) Whittet, D.C.B., Duley, W.W., & Martin, P.G. 1990, MNRAS, 244, 427 Wickramasinghe, D.T., & Allen, D.A. 1980, Nature, 287, 518 Wickramasinghe, N.C. 1965, MNRAS, 131, 177 Wickramasinghe, N.C. 1970a, in IAU Symp.36, Ultraviolet Stellar Spectra and Related Ground-Based Observations, ed. L. Houziaux & H.E. Butler (Dordrecht: Reidel), 42 Wickramasinghe, N.C. 1970b, PASJ, 22, 85 Wickramasinghe, N.C., & Nandy, K. 1968, Nature, 219, 1347 Wickramasinghe, N.C., & Nandy, K. 1970, Nature, 227, 51 Wickramasinghe, N.C., & Krishna Swamy, K.S. 1968, ApJ, 154, 397 Wickramasinghe, N.C., & Krishna Swamy, K.S. 1969, MNRAS, 144, 41 Wickramasinghe, N.C., Dharmawardhana, M.W.C., & Wyld, C. 1966, MNRAS, 134, 25 Wilking, B.A., Lebofsky, M.J., Martin, P.G., Rieke, G.H., & Kemp, J.C. 1980, ApJ, 235, 905 Wildt, R. 1933, Zeitschr. für Astrophys., 6, 345 Willner, S.P., Russell, R.W., Puetter, R.C., Soifer, B.T., & Harvey, P.N. 1979, ApJ, 229, L65 Witt, A.N. 1973, in IAU Symp.52, Interstellar Dust and Related Topics, ed. J.M. Greenberg, & H.C. van de Hulst (Dordrecht: Reidel), 53 Witt, A.N., & Lillie, C.F. 1973, A&A, 25, 397 Witt, A.N., & Vijh, U.P. 2004, in ASP Conf. Ser. 309, Astrophysics of Dust, ed. A.N. Witt, G.C. Clayton, & B.T. Draine (San Francisco: ASP), 115 Witt, A.N., Gordon, K.D., & Furton, D.G. 1998, ApJ, 501, L111 Wolf, M. 1904, MNRAS, 64, 838 Wolff, M.J., Clayton, G.C., Kim, S.H., Martin, P.G., & Anderson, C.M. 
1997, ApJ, 478, 395 Woolf, N.J., & Ney, E.P. 1969, ApJ, 155, L181 Wright, E.L. 1982, ApJ, 255, 401 York, D.G., et al. 1973, ApJ, 182, L1 Zubko, V.G. 1999, ApJ, 513, L29 Zubko, V.G., Dwek, E., & Arendt, R.G. 2004, ApJS, 152, 211 [^1]: To be precise, this should be called “extinction” which is a combined effect of absorption and scattering: a grain in the line of sight between a distant star and the observer reduces the starlight by a combination of scattering and absorption. [^2]: The photometric distances were obtained by comparing apparent and absolute magnitudes, with the latter determined from the spectral types of the stars in the clusters. The geometrical distances were determined from the angular diameters of the clusters, assuming that all their diameters were the same. [^3]: As mentioned earlier in this review, general star counts did suggest the existence of interstellar extinction which increases with distance. However, this evidence is not decisive because interpretation of the star-count data rests on assumptions (generally unproved at the time) as to the true spatial distribution of the stars. [^4]: Trumpler (1930) derived a color-excess of ${{\sim\,}}$0.3${\,{\rm mag\, kpc}^{-1}}$ between the photographic (with an effective wavelength $\lambda_B$$\approx$$4300{\,{\rm \AA}}$) and visual ($\lambda_V$$\approx$$5500{\,{\rm \AA}}$) bands, and a general (visual) absorption of ${{\sim\,}}$1.0${\,{\rm mag\, kpc}^{-1}}$. [^5]: The exact nature of the carrier of this bump remains unknown. It is generally believed to be caused by aromatic carbonaceous (graphitic) materials, very likely a cosmic mixture of polycyclic aromatic hydrocarbon (PAH) molecules (Joblin, Léger & Martin 1992; Li & Draine 2001b). [^6]: Very recently, on the basis of the [*FUSE*]{} observations of 9 Galactic sightlines at $1050{\,{\rm \AA}}< \lambda < 1200{\,{\rm \AA}}$, Sofia et al. 
(2005) found that the CCM prediction for short-wavelengths ($\lambda^{-1}>8{\,{\rm \mu m}}^{-1}$) is not valid for all sightlines. [^7]: Van de Hulst (1949) pointed out that this is not the case for H, He and Ne since they will evaporate rapidly at grain temperatures exceeding ${{\sim\,}}$5${\,{\rm K}}$. [^8]: The “Serkowski law” $P(\lambda)/{P_{\rm max}}$=$\exp\,[-K\ln^2(\lambda/{\lambda_{\rm max}})]$ is determined by only one parameter: ${\lambda_{\rm max}}$ – the wavelength where the maximum polarization ${P_{\rm max}}$ occurs; the width parameter $K$ is related to ${\lambda_{\rm max}}$ through $K$$\approx$$1.66 {\lambda_{\rm max}}$+0.01. The peak wavelength ${\lambda_{\rm max}}$ is indicative of grain size and correlated with $R_V$: $R_V \approx (5.6\pm 0.3){\lambda_{\rm max}}$ (${\lambda_{\rm max}}$ is in micron; see Whittet 2003). [^9]: Many years later, the idea of metallic iron grains as an interstellar dust component was reconsidered by Chlewicki & Laureijs (1988) who attributed the 60${\,{\rm \mu m}}$ emission measured by IRAS for the Galactic diffuse ISM to small iron particles with a typical size of $a$$\approx$70${\,{\rm \AA}}$ (which would obtain an equilibrium temperature of ${{\sim\,}}$53${\,{\rm K}}$ in the diffuse ISM). But their model required almost all cosmic iron to be contained in metallic grains: ${{\sim\,}}$34.5ppm (parts per million) relative to H. Exceedingly elongated metallic needles with a length ($l$) over radius ($a$) ratio $l/a \approx 10^5$, presumably present in the intergalactic medium, have been suggested by Wright (1982), Hoyle & Wickramasinghe (1988), and Aguirre (2000) as a source of starlight opacity to thermalize starlight to generate the microwave background. Very recently, elongated needle-like metallic grains were suggested by Dwek (2004) as an explanation for the flat 3–8${\,{\rm \mu m}}$ extinction observed by Lutz et al. (1996) toward the Galactic Center and by Indebetouw et al. 
(2005) toward the $l$=42$^{\rm o}$ and 284$^{\rm o}$ lines of sight in the Galactic plane. But these results heavily rely on the optical properties of iron needles (see Li 2003a, 2005b). [^10]: Whittet, Duley, & Martin (1990) estimated from the 7.7–13.5${\,{\rm \mu m}}$ spectra (with a spectral resolution of ${{\sim\,}}$0.23${\,{\rm \mu m}}$) of 10 sightlines toward the Galactic Center the abundance of Si in SiC dust to be no more than ${{\sim\,}}$5% of that in silicates. Since about half of the dust in the ISM is injected by carbon stars in which an appreciable fraction of the stardust is SiC, it is unclear how SiC is converted to gas-phase and recondense to form silicates in the ISM. [^11]: Nanodiamonds were identified in the dust disks or envelopes surrounding two Herbig Ae/Be stars HD 97048 and Elias 1 and one post-asymptotic giant branch (AGB) star HR 4049, based on the 3.43${\,{\rm \mu m}}$ and 3.53${\,{\rm \mu m}}$ C–H stretching emission features expected for surface-hydrogenated nanodiamonds (Guillois, Ledoux, & Reynaud 1999; van Kerckhoven, Tielens, & Waelkens 2002). [^12]: The reason why so many different materials with such a wide range of optical properties could be used to explain the observed interstellar extinction was that the number of free parameters defining the size distribution was sufficiently large. [^13]: The reason why Wickramasinghe (1970a) considered ice-coated silicate grains was that he thought that graphite grains of a typical size ${{\sim\,}}$0.06${\,{\rm \mu m}}$ would attain an equilibrium temperature of ${{\sim\,}}$40${\,{\rm K}}$ in the ISM and would be too warm to possess an ice mantle, while silicate grains would tend to take up lower temperatures because of their lower optical and UV absorptivity and therefore the condensation of ice mantles could occur on their surfaces. 
[^14]: Since the “UIR” emission bands were initially found to be associated with UV-rich objects, it had been thought that they were pumped primarily by UV photons. Li & Draine (2002b) demonstrated that the excitation of PAHs does not require UV photons: since the PAH electronic absorption edge shifts to longer wavelengths upon ionization and/or as the PAH size increases (see Mattioda, Allamandola, & Hudgins 2005 for their recent measurements of the near-IR absorption spectra of PAH ions), long-wavelength (red and far-red) photons are also able to heat PAHs to high temperatures so that they emit efficiently at the “UIR” bands (also see Smith, Clayton, & Valencic 2004). Li & Draine (2002b) have modeled the excitation of PAH molecules in UV-poor regions. It was shown that the astronomical PAH model provides a satisfactory fit to the UIR spectrum of vdB133, a reflection nebula with the lowest ratio of UV to total radiation among reflection nebulae with detected UIR band emission (Uchida, Sellgren, & Werner 1998). [^15]: The most recent estimates of the solar C ($\csun \approx 245{\,{\rm ppm}}$; Allende Prieto, Lambert, & Asplund 2002) and O abundances ($\osun \approx 457{\,{\rm ppm}}$; Asplund et al. 2004) are also “subsolar”, just ${{\sim\,}}$50%–70% of the commonly-adopted solar values (e.g. those of Anders & Grevesse 1989) and close to the “subsolar” interstellar abundances originally recommended by Snow & Witt (1996). [**If the interstellar abundances are indeed “subsolar”, there might be a lack of raw material to form the dust to account for the interstellar extinction. Mathis (1996) argued that this problem could be solved if interstellar grains have a fluffy, porous structure**]{} since fluffy grains are more effective in absorbing and scattering optical and UV starlight than compact grains (on a per unit mass basis). 
However, [**using the Kramers-Kronig relation, Li (2005a) demonstrated that fluffy dust is not able to overcome the abundance shortage problem.**]{} The abundances of refractory elements in [*stellar photospheres*]{} may under-represent the composition of the interstellar material from which stars are formed, resulting either from the possible underestimation of the degree of heavy-element settling in stellar atmospheres, or from the incomplete incorporation of heavy elements in stars during the star formation process. [^16]: Very recently, Vijh, Witt, & Gordon (2004) reported the discovery of blue luminescence at $\lambda$$<$5000${\,{\rm \AA}}$ in the Red Rectangle and identified it as fluorescence by small three- to four-ringed PAH molecules. Nayfeh, Habbal, & Rao (2005) argued that this blue luminescence could be due to hydrogen-terminated crystalline silicon nanoparticles. [^17]: Such a power-law size distribution is a natural product of shattering following grain-grain collisions (e.g. see Hellyer 1970, Biermann & Harwit 1980, Dorschner 1982, Henning, Dorschner, & Gürtler 1989). [^18]: von Helden et al. (2000) proposed that TiC nanocrystals could be responsible for the prominent 21${\,{\rm \mu m}}$ emission feature detected in over a dozen carbon-rich post-AGB stars which remains unidentified since its first detection (Kwok, Volk, & Hrivnak 1989). Based on the Kramers-Kronig relations (Purcell 1969), Li (2003b) found that the TiC proposal is not feasible because it requires at least 50 times more Ti than available. [^19]: The only exception is AFGL 2591, a molecular cloud surrounding a young stellar object, which displays a narrow feature at 11.2${\,{\rm \mu m}}$ superimposed on the broad 9.7${\,{\rm \mu m}}$ polarization band, generally attributed to annealed silicates (Aitken et al. 1988). However, its 3.1${\,{\rm \mu m}}$ ice absorption feature is not polarized (Dyck & Lonsdale 1980, Kobayashi et al. 1980). 
[^20]: So far spectropolarimetric measurement of this feature has been performed only for one sightline – the Galactic Center source IRS7 (Adamson et al. 1999). Unfortunately, no such measurements have been carried out for the 9.7${\,{\rm \mu m}}$ silicate absorption feature of this sightline. Spectropolarimetric measurements for both these two bands of the same sightline would allow a direct test of the silicate core-hydrocarbon mantle interstellar dust model (Li & Greenberg 1997), since this model predicts that the 3.4${\,{\rm \mu m}}$ feature would be polarized if the 9.7${\,{\rm \mu m}}$ feature (for the same sightline) is polarized (Li & Greenberg 2002). [^21]: Li & Draine (2001a) estimated that the [amount]{} of $a$$<$1${\,{\rm \mu m}}$ crystalline silicate grains in the diffuse ISM is $<$5% of the solar Si abundance. Kemper, Vriend & Tielens (2004) placed a much tighter upper limit of ${{\sim\,}}$0.2% on the crystalline fraction of the interstellar silicates along the sightline toward the Galactic Center. [^22]: The silicate absorption feature (relative to the visual extinction) along the path to the Galactic Center is about twice that of the local ISM: $\Delta \tau_{9.7{\,{\rm \mu m}}}/A_V$$\approx 1/9$ (Roche & Aitken 1985). It was originally thought that there were very few carbon stars in the central regions of the Galaxy so that one would expect a much larger fraction of the dust to be silicates than is the case further out in the Galactic disk (Roche & Aitken 1985). However, this explanation was challenged by the fact that the 3.4${\,{\rm \mu m}}$ aliphatic hydrocarbon dust absorption feature for the Galactic Center sources (relative to the visual extinction: $\Delta\tau_{3.4{\,{\rm \mu m}}}/A_V$$\approx$1/150) is also about twice that of the local ISM ($\Delta\tau_{3.4{\,{\rm \mu m}}}/A_V$$\approx$1/250; Pendleton et al. 1994; Sandford, Allamandola, & Pendleton 1995). 
[^23]: Over 20 different candidates have been proposed (see Pendleton & Allamandola 2002 for a summary). So far, the experimental spectra of hydrogenated amorphous carbon (HAC; Schnaiter, Henning & Mutschke 1999, Mennella et al. 1999) and the organic refractory residue, synthesized from UV photoprocessing of interstellar ice mixtures (Greenberg et al. 1995), provide the best fit to both the overall feature and the positions and relative strengths of the 3.42${\,{\rm \mu m}}$, 3.48${\,{\rm \mu m}}$, and 3.51${\,{\rm \mu m}}$ subfeatures corresponding to symmetric and asymmetric stretches of C–H bonds in CH$_2$ and CH$_3$ groups. Pendleton & Allamandola (2002) attributed this feature to hydrocarbons with a mixed aromatic and aliphatic character. [^24]: Let interstellar grains be approximated by a single size of $a$ (spherical radius) with a number density of $n_{\rm dust}$. The visual extinction caused by these grains with a pathlength of $L$ is $A_V = 1.086\,\pi a^2 Q_{\rm ext}(V)\,n_{\rm dust} L$, where $Q_{\rm ext}(V)$ is the dust extinction efficiency at $V$-band ($\lambda=5500$Å). The dust number density can be derived from $$n_{\rm dust} \approx \frac{\langle A_V/L\rangle} {1.086\,\pi a^2 Q_{\rm ext}(V)} \approx 1.1\times 10^{-12} \,\left(\frac{\langle A_V/L\rangle}{1.8{\,{\rm mag\, kpc}^{-1}}}\right) \left(\frac{1.5}{Q_{\rm ext}[V]}\right) \left(\frac{0.1{\,{\rm \mu m}}}{a}\right)^2~~.$$ The dust mass density is approximately $\rho_{\rm dust} = n_{\rm dust} \left(4/3\right)\,\pi a^3\rho_{\rm d} \approx 1.2\times 10^{-26}{\,{\rm g}}{\,{\rm cm}}^{-3}$ if we take $a\approx 0.1{\,{\rm \mu m}}$, $Q_{\rm ext}(V)=1.5$, and $\rho_{\rm d}=2.5{\,{\rm g}}{\,{\rm cm}}^{-3}$. [^25]: Dark clouds (e.g. the $\rho$ Oph cloud) seem to have lower $E(B-V)/\NH$ values, suggesting grain growth through coagulation (Jura 1980; Vrba, Coyne, & Tapia 1993; Kim & Martin 1996).
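As a cross-check, the arithmetic in footnote [^24] can be reproduced in a few lines of Python; the input values below are the representative ones quoted in the footnote (a single grain size of 0.1 micron, $Q_{\rm ext}(V)=1.5$, $\rho_{\rm d}=2.5$ g cm$^{-3}$, and $\langle A_V/L\rangle = 1.8$ mag kpc$^{-1}$):

```python
import math

KPC_CM = 3.086e21          # cm per kiloparsec
a = 0.1e-4                 # grain radius in cm (0.1 micron)
Q_ext = 1.5                # extinction efficiency at V band
rho_d = 2.5                # grain material density, g/cm^3
AV_per_L = 1.8 / KPC_CM    # mean visual extinction per unit path, mag/cm

# Number density from A_V = 1.086 * pi * a^2 * Q_ext * n_dust * L
n_dust = AV_per_L / (1.086 * math.pi * a**2 * Q_ext)      # cm^-3

# Mass density of the dust population
rho_dust = n_dust * (4.0 / 3.0) * math.pi * a**3 * rho_d  # g/cm^3

print(f"n_dust   ~ {n_dust:.2e} cm^-3")     # ~ 1.1e-12 cm^-3
print(f"rho_dust ~ {rho_dust:.2e} g/cm^3")  # ~ 1.2e-26 g/cm^3
```

Both numbers match the footnote's scaling relation for these fiducial inputs.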
UUITP-20/08\ [**Chain inflation and the imprint of fundamental physics in the CMBR\ **]{} [Diego Chialva$^1$ and Ulf H. Danielsson$^2$]{}\ Institutionen för fysik och astronomi\ Uppsala Universitet, Box 803, SE-751 08 Uppsala, Sweden [diego.chialva@fysast.uu.se ulf.danielsson@fysast.uu.se\ ]{} [**Abstract**]{} In this work we investigate characteristic modifications of the spectrum of cosmological perturbations and the spectral index due to chain inflation. We find two types of effects. First, modifications of the spectral index depending on interactions between radiation and the vacuum, and on features of the effective vacuum potential of the underlying fundamental theory. Second, a modulation of the spectrum signaling new physics due to bubble nucleation. This effect is similar to those of transplanckian physics. Measurements of such signatures could provide a wealth of information on the fundamental physics at the basis of inflation. September 2008 Introduction ============ The current most popular models for inflation are based on chaotic inflation. In these models a scalar field rolls slowly subject to Hubble friction in a shallow potential. In [@Chialva:2008zw] we proposed an alternative scenario[^1] that shares many of the features of slow roll chaotic inflation, but also differs in several important aspects. Our model is based on chain inflation and makes use of a series of several first order phase transitions. More precisely, we imagine a potential with a large number of small barriers that separate local, metastable minima. The barriers prevent the field from rolling, and without quantum mechanical tunneling the inflaton is stuck in a local minimum. By appropriately choosing the heights and widths of the barriers, one can obtain tunneling probabilities such that the field effectively rolls slowly down the potential through repeated tunneling events. 
In this way we can achieve a slow roll in the sense of having a slow change in $H^{2}\sim\rho^{V}$ ($\rho^{V}$ being the vacuum energy density), even if the potential for the fields is steep. The details of this process were worked out in [@Chialva:2008zw], and it was also shown that suitable potentials might be found in flux compactified string theory. The main features of the model introduced in [@Chialva:2008zw] are as follows. We assume that the bubbles, after their formation through tunneling, rapidly percolate and collide. The energy difference between two subsequent minima is temporarily stored in the bubble walls, and we assume that this energy is rapidly converted into radiation as the bubbles collide. In this way we obtain a coarse grained picture where the main effect of the barriers and the tunneling is to introduce a source term for radiation in the Friedmann equations. A scalar field can be understood as a fluid consisting of two components: a component corresponding to the kinetic energy, $T$, and a component corresponding to the potential energy, $V$. In the case of slow roll we have $T\sim\varepsilon V\ll V,$ where $\varepsilon$ is the slow roll parameter, and as a consequence the dynamics is dominated by the potential energy, leading to accelerated expansion and inflation. In our version of chain inflation the kinetic component is further suppressed relative to the potential energy. On the other hand, radiation is produced through the tunneling, leading to $\rho_{rad}\sim\varepsilon V$. As a result we effectively have, to first order in $\varepsilon$, a model consisting of a decaying cosmological constant and a coupled component of radiation. For the case of chaotic inflation, it is important to understand that it is the sub-dominant kinetic energy that determines the spectrum of the fluctuations. The kinetic energy corresponds to a hydrodynamical fluid with an effective speed of sound that is equal to the speed of light. 
In contrast, the potential energy does not correspond to a hydrodynamical fluid and lacks a well defined speed of sound. The amplitude of the primordial fluctuations is set by the speed of sound. The general result is $$P\sim\frac{H^{2}}{c_{s}\varepsilon},$$ where $c_{s}$ is the speed of sound of the hydrodynamical component. For chaotic inflation $c_{s}=1$. In our model for chain inflation, the role of the kinetic energy is taken over by the radiation, where $c_{s}=\frac{1}{\sqrt{3}}$. As a result, the primordial spectrum is corrected to $$P\sim\sqrt{3}\frac{H^{2}}{\varepsilon}.$$ The result differs from chaotic inflation through a simple factor of $\sqrt{3}$. While this simple argument captures the main physics of the model, there are many important points of the derivation that are carefully discussed in [@Chialva:2008zw]. In the present paper we discuss the possibility of further effects on the primordial spectrum from various sources.[^2] In the first part of the paper we will discuss the modifications to the spectrum of cosmological perturbations due to the presence of the non-negligible interaction between radiation and vacuum energy. We will discuss how they arise and why they appear to be specific to our model of chain inflation. In the second part of the paper we will instead consider how bubble nucleation affects the spectrum of perturbations, and in particular we will study the imprint of the size of the bubbles on the CMB. Effects due to interactions =========================== In [@Chialva:2008zw] we derived a system of equations that determine the evolution of the comoving curvature perturbations during a period of chain inflation. The approach was based on the traditional analysis of scalar perturbations in field (slow-roll/chaotic) models, as presented in [@kinflGarrMuk]. We start with a brief review of the approach used in [@Chialva:2008zw], and show that the method of [@kinflGarrMuk] is not the most convenient one in the case of chain inflation. 
We will therefore propose another way of analyzing and re-writing the system of equations that is better suited for our model. We start from equation (97) in [@Chialva:2008zw], $$\left\{ \begin{aligned} \dot\xi & = \frac{a(\rho+p)}{H^2}\mathcal{R} \\ \dot{\mathcal{R}} & = \frac{1}{3} \frac{H^2}{a^3 (\rho+p)}\Big(-k^2 -a^2 4\pi G \frac{Q_V}{3H}\Big)\xi -\frac{4}{3} H S_{V, r} \end{aligned}\right. , \label{systpert}$$ where - $a$ is the cosmological scale factor, $H$ is the Hubble rate, $\phi$ is the gravitational potential, and $G$ is Newton’s constant - $Q_{V}$ is the energy-momentum transfer vector[^3] - $a\phi= 4\pi GH\xi$, - $\mathcal{R}$ is the comoving curvature perturbation - $\rho,\,p$ are the total energy and pressure densities - $\varepsilon$ is the slow-roll parameter - $k$ is the comoving wavenumber of the perturbation - $S_{V,r}$ is the relative entropy perturbation between vacuum ($V$) and radiation ($r$). In the following we will neglect the term proportional to $S_{V,r}$. As a result our conclusions apply only to models with negligible contributions from entropy (isocurvature) perturbations, or, alternatively, just to the adiabatic component of the whole spectrum of perturbations. Comparing equations (\[systpert\]) with the analogous equations in [@kinflGarrMuk] (in flat space), we see the importance of interactions in our multicomponent system, as represented by the term $-a^{2}4\pi G\frac{Q_{V}}{3H}\xi$. Note also that this can be conveniently re-written as $-a^{2}4\pi G\frac{Q_{V}}{3H}=a^{2}H^{2}\varepsilon$. Following the literature (see for example [@Mukhanov:1990me]), it is customary to define a standard quantization variable $\varsigma=z\mathcal{R}$ where $$z\equiv\frac{a(\rho+p)^{1/2}}{\frac{1}{3}H}\left( \frac{\hat{O}}{-k^{2}+a^{2}H^{2}\varepsilon}\right) ^{1/2}.$$ The singularity at $k^{2}=a^{2}H^{2}\varepsilon$ is just an artifact of the choice of variables, as is evident from (\[systpert\])[^4]. 
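The freezing of $\mathcal{R}$ on superhorizon scales, which is used repeatedly below, can be seen directly from the system above. The following toy integration (with $H=1$, constant illustrative values for $\varepsilon$ and $\rho+p$, and the entropy source term dropped; none of these values are taken from [@Chialva:2008zw]) shows $\mathcal{R}$ staying constant up to a slow $O(\varepsilon)$ drift once $a^{2}H^{2}\varepsilon \gg k^{2}$:

```python
import math

# Toy superhorizon integration of the xi--R system in units H = 1.
# Illustrative, constant parameters (an assumption, not from the paper):
eps = 0.01          # slow-roll parameter, so -a^2 4piG Q_V/3H = a^2 eps
rp = 0.02           # rho + p, held fixed for the sketch
k = 1.0             # comoving wavenumber

t, dt = math.log(50.0), 1e-4
t_end = t + 3.0     # integrate for three e-folds
a = 50.0            # start well outside the horizon: a^2 eps = 25 >> k^2
R = 1.0
xi = rp * a * R     # attractor initial condition, xi ~ a (rho+p) R / H^3

while t < t_end:
    dxi = a * rp * R                                        # first equation
    dR = (-k**2 + a**2 * eps) * xi / (3.0 * a**3 * rp)      # second equation
    xi += dxi * dt
    R += dR * dt
    a += a * dt     # de Sitter background, da/dt = aH = a
    t += dt

# R drifts only at the O(eps) level over several e-folds
print(f"R after 3 e-folds: {R:.4f}")
```

The residual drift is of order $\varepsilon/3$ per e-fold, which is exactly the kind of $O(\varepsilon)$ correction that later shows up in the spectral index.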
However, in order to better understand the implications for the spectrum of perturbations due to the new term, we choose to follow an alternative route using another change of variables. The (first order) action inferred from the equations of motion is given by[^5] $$S=\int\big[\dot{\xi}\hat{O}\mathcal{R}+\frac{1}{2}\frac{H^{2}c_{s}^{2}}{a^{3}(\rho+p)}\xi(\Delta+a^{2}H^{2}\varepsilon)\hat{O}\xi-\frac{1}{2}\frac{a^{2}(\rho+p)}{H^{2}}\mathcal{R}\hat{O}\mathcal{R}\big]dt\,d^{3}x,$$ where $\hat{O}$ is a time-independent factor which, by comparison with known cases, was found to be[^6] $\hat{O}=k^{2}$. By eliminating $\mathcal{R}$ in the action using the first equation in (\[systpert\]), and defining $$\begin{aligned} w^{2} & \equiv\frac{H^{2}}{(\rho+p)a^{2}}\hat{O}\\ u & \equiv w\,\xi\,,\end{aligned}$$ we obtain the action (up to a total derivative term) $$S=\frac{1}{2}\int\big[u^{\prime\,2}+\frac{1}{3}u(\Delta+\mathcal{H}^{2}\varepsilon)u+\frac{w^{\prime\prime}}{w}u^{2}\big]d\eta\,d^{3}x,$$ and the equation of motion is $$u^{\prime\prime}+\left( \frac{1}{3}k^{2}-\frac{1}{3}\mathcal{H}^{2}\varepsilon-\frac{w^{\prime\prime}}{w}\right) u=0.$$ Here we have used the conformal time defined as $\eta=\int a^{-1}dt$ and $\mathcal{H}=\frac{a^{\prime}}{a}$, where a prime denotes a derivative with respect to conformal time. An advantage of this way of writing the equations is the absence of any artificial singularities. The equation of motion ---------------------- We will now expand $\frac{w^{\prime\prime}}{w}$ in slow-roll parameters such as $\varepsilon$ and the decay rates, making sure to include contributions from derivatives of the slow-roll parameters up to order $O(\varepsilon)$. 
We will study the general case of a vacuum energy $\epsilon_{n}$ in the form of a power law, $$\epsilon_{n}\sim\frac{m_{f}^{4}}{c!}n^{c}, \label{powerlaw}$$ where the integer $n$ labels the vacuum, and $m_{f}$ is an energy scale (depending on couplings, extra dimensions and similar features of the microscopic theoretical model). We will need the following formula (see the appendix for its derivation) $$\dot{\varepsilon}\sim\left( 1+\frac{2}{c}\right) H\varepsilon^{2}-D\widetilde{\Gamma}, \label{derivslowrollfinal}$$ where $D\widetilde{\Gamma}$ depends on the decay rates per unit time $\widetilde{\Gamma}$ and is defined in the appendix. Note that we do not restrict to equal rates at every step. We find $$\frac{w^{\prime\prime}}{w}\sim\mathcal{H}^{2}\varepsilon+\frac{1}{2}\mathcal{H}^{2}\left( \Big(1+\frac{2}{c}\Big)\varepsilon-\frac{D\widetilde{\Gamma}}{H}\right) ,$$ having neglected terms of order $\varepsilon^{2},\varepsilon D\widetilde{\Gamma},D\widetilde{\Gamma}^{2}$ and higher. Our equation of motion now becomes $$u^{\prime\prime}+\left( \frac{1}{3}k^{2}-\frac{1}{3}\mathcal{H}^{2}\varepsilon-\mathcal{H}^{2}\varepsilon-\frac{1}{2}\left( \left( 1+\frac{2}{c}\right) \mathcal{H}^{2}\varepsilon-a\mathcal{H}D\widetilde{\Gamma}\right) \right) u=0.$$ In a quasi-de Sitter space, where $$a\sim-\frac{1}{H\eta(1-\varepsilon)}\qquad(\eta<0),$$ we have $$\frac{1}{\eta^{2}}\left( \nu^{2}-\frac{1}{4}\right) =\mathcal{H}^{2}\left( \frac{\varepsilon}{3}+\varepsilon+\frac{1}{2}\left( \left( 1+\frac{2}{c}\right) \varepsilon-\frac{D\widetilde{\Gamma}}{H}\right) \right) ,$$ which allows us to read off $$\nu^{2}\sim\frac{1}{4}+\frac{\varepsilon}{3}+\varepsilon+\frac{1}{2}\left( \left( 1+\frac{2}{c}\right) \varepsilon-\frac{D\widetilde{\Gamma}}{H}\right) .$$ The general solution of the equation $$u^{\prime\prime}+\left( \frac{1}{3}k^{2}-\frac{1}{\eta^{2}}\left( \nu^{2}-\frac{1}{4}\right) \right) u=0$$ reads $$u=\sqrt{-\eta}\left[ c_{1}(k)H_{\nu}^{(1)}(-k\eta)+c_{2}(k)H_{\nu}^{(2)}(-k\eta)\right] ,$$ where $H_{\nu}^{(1)}(x)$ and 
$H_{\nu}^{(2)}(x)$ are Hankel functions of the first and second kind, respectively. In the limit $-k\eta\gg1$ we have that $$H_{\nu}^{(1)}\sim\sqrt{\frac{2}{-\pi k\eta}}e^{i(-k\eta-\frac{\pi}{2}\nu-\frac{\pi}{4})}\qquad H_{\nu}^{(2)}\sim\sqrt{\frac{2}{-\pi k\eta}}e^{-i(-k\eta-\frac{\pi}{2}\nu-\frac{\pi}{4})}\,\,.$$ Following the standard procedure we match the solution with the Bunch-Davies vacuum[^7] $\frac{e^{-ik\eta}}{\sqrt{2kc_{s}}}$, finding $$u=\frac{1}{2}\sqrt{\frac{\pi}{c_{s}}}e^{i(\nu+\frac{1}{2})\frac{\pi}{2}}\sqrt{-\eta}H_{\nu}^{(1)}(-k\eta).$$ For superhorizon scales ($-k\eta\ll1$) $$H_{\nu}^{(1)}\sim\sqrt{\frac{2}{\pi}}e^{-i\frac{\pi}{2}}2^{\nu-\frac{3}{2}}\frac{\Gamma(\nu)}{\Gamma(\frac{3}{2})}(-k\eta)^{-\nu},$$ so that, finally, $$u\sim e^{i\frac{\pi}{2}(\nu-\frac{1}{2})}2^{\nu-\frac{3}{2}}\frac{\Gamma(\nu)}{\Gamma(\frac{3}{2})}\frac{(-k\eta)^{\frac{1}{2}-\nu}}{\sqrt{2kc_{s}}}\,.$$ This is what we need in order to obtain the spectrum of perturbations. Spectrum of perturbations ------------------------- The spectrum of perturbations is conveniently expressed through the comoving curvature perturbation, which is constant on superhorizon scales during inflation. 
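The matching above combines several numerical prefactors; as a sanity check, inserting the small-argument Hankel expansion into the Bunch-Davies-normalized mode reproduces the quoted superhorizon amplitude exactly, with the phase working out to $e^{i\frac{\pi}{2}(\nu-\frac{1}{2})}$. A short stdlib verification (the value of $\nu$ is illustrative; only the coefficient of $\sqrt{-\eta}\,(-k\eta)^{-\nu}$ is compared):

```python
import cmath
import math

c_s = 1.0 / math.sqrt(3.0)   # sound speed of the radiation component
nu = 0.52                    # illustrative value close to 1/2 (assumed)

# Prefactor from inserting the small-argument Hankel expansion into the
# Bunch-Davies-matched mode u = (1/2) sqrt(pi/c_s) e^{i(nu+1/2)pi/2} sqrt(-eta) H_nu^(1)
lhs = (0.5 * math.sqrt(math.pi / c_s)
       * cmath.exp(1j * (nu + 0.5) * math.pi / 2)
       * math.sqrt(2.0 / math.pi) * cmath.exp(-1j * math.pi / 2)
       * 2 ** (nu - 1.5) * math.gamma(nu) / math.gamma(1.5))

# Prefactor read off from the superhorizon expression for u,
# using (-k eta)^{1/2-nu} / sqrt(2 k c_s) = sqrt(-eta) (-k eta)^{-nu} / sqrt(2 c_s)
rhs = (cmath.exp(1j * math.pi / 2 * (nu - 0.5))
       * 2 ** (nu - 1.5) * math.gamma(nu) / math.gamma(1.5)
       / math.sqrt(2.0 * c_s))

print(abs(lhs - rhs))  # agrees to machine precision
```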
We can obtain it using the first equation in (\[systpert\]), which we repeat here for convenience: $$\mathcal{R}=\frac{H^{2}}{a(\rho+p)}\dot{\xi}.$$ As a result we obtain $$\mathcal{R}=\frac{H}{\sqrt{2\varepsilon}\,M_{\rm Planck}}\frac{1}{\sqrt{2k^{3}c_{s}}}\left( \frac{k}{aH}\right) ^{\frac{1}{2}-\nu}(1+O(\varepsilon)),$$ and the spectrum becomes $$P_{k}^{\mathcal{R}}=\frac{H^{2}}{8\pi^{2}M_{\rm Planck}^{2}\frac{1}{\sqrt{3}}\varepsilon}\left( \frac{k}{aH}\right) ^{1-2\nu}(1+O(\varepsilon))\,.$$ From this we read off the spectral index $$\label{spectindexeps}n_{s}-1=1-2\nu\sim-\frac{2}{3}\varepsilon-2\varepsilon-\left( 1+\frac{2}{c}\right) \varepsilon+\frac{D\widetilde{\Gamma}}{H},$$ which alternatively can be written as $$\label{spectindexQV}n_{s}-1\sim\frac{c_{s}^{2}}{3H^{3}M_{\rm Planck}^{2}}Q_{V}-2\varepsilon-\left( 1+\frac{2}{c}\right) \varepsilon+\frac{D\widetilde{\Gamma}}{H},$$ which is our final result. It is evident from this formula that the corrections to the spectral index due to the interactions between vacuum and radiation imply an extra tilt to the spectrum (blue or red depending on the value of $D\widetilde{\Gamma}$). (Note that the result for $D\widetilde{\Gamma}=Q_{V}=0$ is precisely the same as the usual chaotic/slow-roll result in the case of $c=2$, as observed in [@Chialva:2008zw].) It appears to us that this feature of the spectrum is strongly characteristic of a first order transition, since in deriving (\[systpert\]) in [@Chialva:2008zw] we made use of specific aspects of first order transitions (such as the fact that the momentum perturbation for the vacuum was zero). Effects of a new intermediate scale =================================== The choice of vacuum -------------------- In deriving our results for the spectrum of cosmological perturbations and the spectral index in the previous section, we have followed the standard procedure of matching our solution to the Bunch-Davies vacuum. 
Essentially this means that we resolve the issue of the non-uniqueness of the vacuum in a cosmological space-time by tracking the modes to infinitely short scales, where the effect of cosmological scales such as the horizon can be ignored. At such scales there is a unique vacuum, just as in Minkowski space. This is the Bunch-Davies vacuum. As is well known, there is a potential problem with this procedure, since one cannot reliably track the modes to scales shorter than the Planck (or string) scale without taking into account effects of string theory and quantum gravity[^8]. Hence there are likely corrections to the choice of vacuum of order $\frac{H}{\Lambda}$, where $\Lambda$ is the scale of new physics. This is known as the transplanckian problem. Actually, it represents more of an opportunity than a problem, since it could be an observational window to new physics. In our case we have yet another scale that enters. We have assumed a coarse graining over the nucleating bubbles that is valid only for scales substantially larger than the size of the bubbles, $r_{b}$. Hence, we have full control over the evolution of the perturbations only while their wavelength is larger than the size of the bubbles. If we follow the evolution of a specific mode backwards in time, it will eventually reach a scale as short as the size of the bubbles, and our picture breaks down. What is the effective quantum state that should be used as an initial condition at this point? It is in general very difficult to give a precise answer to this question, both because of the usual difficulties due to quantization in curved spaces, and also because of the great generality of our model of chain inflation (no field theory model is specified). Without a detailed model, we have no other option than to impose an effective initial condition for the perturbations, and to postulate their creation out of the vacuum. This is formally very similar to the case of the transplanckian problem.
In several works ([@Martin:2000xs; @Niemeyer:2000eh; @Danielsson:2002kx; @Danielsson:2002qh; @Easther:2002xe]) it has been argued that initial conditions must be imposed at the Planck scale (or string scale) due to our ignorance of physics at higher energies. In our model the scale for new physics will instead be the size of the bubbles, but the analysis, which we now review, will be more or less the same. We begin by noting that the physical momentum $p$ and the comoving momentum $k$ are related through $$k=ap=-\frac{p}{\eta H},$$ where $\eta$ is conformal time and $a$ is the scale factor. We impose the initial conditions when $p=\Lambda$, where $\Lambda$ is the energy scale important for the new physics, given by $\Lambda\sim1/r_{b}$. In our case $r_{b}$ is just the size of the bubbles. We find the conformal time when the initial condition is imposed to be $$\eta_{0}=-\frac{\Lambda}{Hk}.$$ As we see, different modes will be created at different times, with a smaller linear size of the mode (larger $k$) implying a later time. In our case we would in principle be able to calculate the form of the perturbations at the scale $r_{b}$ by tracing the evolution backwards in time, through the nucleating bubbles, to even smaller scales. Presumably the result would depend on the fine details of the physics of bubble nucleation, which is beyond the scope of the present paper. Instead we will take the same attitude as in [@Danielsson:2002kx] and encode the unknown new physics into the choice of the vacuum. The claim of [@Danielsson:2002kx] is that the primordial spectrum is corrected through a modulating factor. These results can be directly taken over to our case, with the result that $$\label{spectrmod}P\left( k\right) \sim\frac{H^{2}}{\varepsilon c_{s}}\left( 1-\frac{H}{\Lambda}\sin\left( \frac{2\Lambda}{H}\right) \right) ,$$ where $c_{s}=\frac{1}{\sqrt{3}}$ is the speed of sound.
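The spacing of these oscillations in $k$ can be checked numerically. A short sketch (with illustrative numbers of ours) compares the exact peak spacing of the sine in (\[spectrmod\]), for $H\sim k^{-\varepsilon}$ with $\Lambda$ held fixed, against the leading-order estimate $\frac{\Delta k}{k}\sim\frac{\pi H}{\varepsilon\Lambda}$ quoted later in the text:

```python
import math

# Illustrative numbers (ours): slow-roll parameter and H/Lambda at the pivot scale
eps, H_over_L = 1e-2, 1e-3

# Phase of the modulating sine in the corrected spectrum: 2*Lambda/H with H ~ k^(-eps)
phase = lambda k: (2.0 / H_over_L) * k ** eps

# Exact spacing between successive oscillations: solve phase(k2) = phase(k1) + 2*pi
k1 = 1.0
k2 = ((phase(k1) + 2.0 * math.pi) * H_over_L / 2.0) ** (1.0 / eps)

# Leading-order estimate: Delta k / k ~ pi * H / (eps * Lambda)
estimate = math.pi * H_over_L / eps
assert abs(((k2 - k1) / k1) / estimate - 1.0) < 0.25   # agrees to leading order
```

The residual mismatch is the expected higher-order correction to the linearized phase estimate.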
In the transplanckian case, $\Lambda$ is typically constant and equal to the Planck scale or string scale. The modulation of the spectrum comes from the rolling inflaton that leads to a changing $H$. In our case, $\Lambda$ will also be changing, but the amplitude $\frac{H}{\Lambda}$ is nevertheless expected to be small, and not to change very much during inflation. The argument of the sine, on the other hand, is a large number and can easily change by several times $2\pi$ during the relevant time period for the generation of the primordial perturbations. Let us now investigate in more detail what the effect will be in the case of chain inflation.

The size of the bubbles
-----------------------

To proceed we need to know more about the process of nucleation of bubbles. The rate of bubble formation per unit time and physical volume, in the analysis of [@Coleman:1977py], [@Coleman:1980aw], is given by the exponential of the euclidean instanton action, $S_{E}$, responsible for the tunneling (the so-called “bounce”), $$\Gamma\sim e^{-S_{E}}.$$ The action evaluated on the bounce is given by[^9]: $$S_{E}=-\frac{\pi^{2}}{2}r^{4}\Delta\epsilon+2\pi^{2}r^{3}S,$$ where $r$ is the radius of the bubble, $\Delta\epsilon$ the change in energy density due to the nucleation of the bubble, and $S$ is the bubble’s wall tension.[^10] In principle, there is also a third term present due to the effect of gravity, but we assume the size of the bubbles to be much smaller than the Hubble scale, so that we can ignore it. The critical radius that allows for the nucleation of a bubble that will successfully expand, and therefore enables tunneling, is obtained by extremizing the above Euclidean action. The result is $$\begin{aligned} r_{b}\equiv r_{\text{critical}} & =\frac{3S}{\Delta\epsilon},\\ S_{E}\big|_{r=r_{\text{critical}}} & =\frac{27\pi^{2}}{2}\frac{S^{4}}{\left( \Delta\epsilon\right) ^{3}}.\end{aligned}$$ The setup outlined in these formulas is a static one.
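As a quick numerical cross-check (ours), extremizing the thin-wall bounce action with the wall term $2\pi^{2}r^{3}S$ (the $O(4)$-symmetric form, whose extremum reproduces the quoted $r_{\text{critical}}$ and on-shell action) gives exactly these expressions; illustrative values of $\Delta\epsilon$ and $S$ are assumed:

```python
import math

# Thin-wall O(4) bounce action; illustrative energy gap and wall tension (ours)
d_eps, S = 2.0, 3.0
S_E = lambda r: -0.5 * math.pi ** 2 * r ** 4 * d_eps + 2.0 * math.pi ** 2 * r ** 3 * S

# Crude grid search for the extremum (a maximum in r) of the bounce action
r_c = max((i * 1e-3 for i in range(1, 10000)), key=S_E)

assert abs(r_c - 3.0 * S / d_eps) < 1e-3                                  # r_b = 3S / d_eps
assert abs(S_E(r_c) - 13.5 * math.pi ** 2 * S ** 4 / d_eps ** 3) < 1e-2   # 27*pi^2/2 * S^4/d_eps^3
```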
The tunneling occurs between two vacua of the theory, and any possible time evolution of the background is not taken into account. In our scenario, on the other hand, we have a chain of tunneling events occurring through time. The length scale signaling new physics (corresponding to the radius of the nucleated bubbles) depends on the time when the particular mode of interest is produced. For simplicity, however, we will assume that the change in the radius is slow enough that we can use the above analysis. Accounting for the time evolution of the background, when computing the critical radius of the bubbles at a given time, is most easily achieved by expressing the variation of the energy density due to the nucleation as $$\Delta\epsilon\sim-\frac{d\rho^{V}}{dt}\langle\tau\rangle=6H^{3}M_{\text{Planck}}^{2}\langle\tau\rangle\,\varepsilon,$$ where we have used the Friedmann equation, and defined $\langle\tau\rangle\equiv\langle{\widetilde{\Gamma}}\rangle^{-1}$ to be the average tunneling time (we recall that $\widetilde{\Gamma}$ is the decay rate per unit time)[^11]. Also, the surface tension $S$ needs to have a time dependence. It is more convenient, though, to express $S$ through the extremized action, as $$S=\left(\frac{2S_{E}}{27\pi^{2}}\right)^{\frac{1}{4}}\Delta\epsilon^{\frac{3}{4}}.$$ Eventually we find $$\begin{aligned} r_{b}H & =\left(\frac{2S_{E}}{c\pi^{2}}\right)^{\frac{1}{4}}\langle\tau\rangle^{-\frac{1}{4}}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-\frac{1}{4}}\left(\frac{H}{M_{\text{Planck}}}\right)^{\frac{1}{2}}\nonumber\\ & =\left(\frac{2S_{E}}{c\pi^{2}}\right)^{\frac{1}{4}}\langle\tau\rangle^{-\frac{1}{4}}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-\frac{1}{4}}\left(8\pi^{2}\,\eta\,\frac{\varepsilon}{\sqrt{3}}\right)^{\frac{1}{4}},\end{aligned}$$ where $\eta\sim2.5\cdot10^{-9}$ from the normalization of the spectrum.
With $\varepsilon=10^{-2}$ we find $$r_{b}H\sim3.9\cdot10^{-3}\,S_{E}^{\frac{1}{4}}c^{-\frac{1}{4}}\langle\tau\rangle^{-\frac{1}{4}}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-\frac{1}{4}}.$$ In our calculation we have ignored time-dependent corrections to, e.g., ${S_{E}}$ that are suppressed by $1/n$. Let us now consider the possible observational implications of the above effects. Successful chain inflation requires that, while the vacuum undergoes the transitions, the phase distribution in the universe is peaked consecutively on the various phases. That is, the transitions occur consecutively, and in a short time (shorter than the Hubble time). Rapid tunneling implies that ${S_{E}}$ should be at most of order one, in order for $\langle\tau\rangle H\ll1$. If we then use $\varepsilon=\frac{c}{2H}\langle\frac{\widetilde{\Gamma}}{n}\rangle=\frac{c}{2\langle\tau\rangle H}\langle\tau\rangle\langle\frac{\widetilde{\Gamma}}{n}\rangle$, we find that $\langle\tau\rangle\langle\frac{\widetilde{\Gamma}}{n}\rangle$ needs to be small. With a peaked distribution we have, to a good approximation, $\langle\tau\rangle\langle\frac{\widetilde{\Gamma}}{n}\rangle\sim1/n$, and we see that $n$ needs to be large. Turning back to equation (\[spectrmod\]), and the corrections to the spectrum from the presence of a new scale, we know from the work on transplanckian physics that values of the order $\varepsilon\sim10^{-2}$ and $\frac{H}{\Lambda}\sim10^{-3}$ could possibly yield an observational effect. The restriction on $\varepsilon$ comes from the requirement that $H$ changes in an appreciable way in order for there to be a modulation. Using $H\sim k^{-\varepsilon}$ and (\[spectrmod\]) we have, following [@Bergstrom:2002yd], $$\frac{\Delta k}{k}\sim\frac{\pi H}{\varepsilon\Lambda},$$ where, in our case, $\Lambda\sim1/r_{b}$. We see that $\frac{\Delta k}{k}$ of order one, and a reasonable amplitude at the percent level, are easily obtainable within our model using values of $n\sim10^{4}$.
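The numerical coefficient above follows by plain arithmetic from the quantities quoted in the text; a one-line check (ours), with $S_{E}=c=\langle\tau\rangle\langle\widetilde{\Gamma}/n\rangle=1$ factored out:

```python
import math

# Reproduce the coefficient in r_b H ~ 3.9e-3 * S_E^(1/4) c^(-1/4) <tau>^(-1/4) <G/n>^(-1/4)
eta_norm = 2.5e-9     # spectrum normalization quoted in the text
eps = 1e-2            # slow-roll parameter used in the text

coeff = (2.0 / math.pi ** 2) ** 0.25 \
        * (8.0 * math.pi ** 2 * eta_norm * eps / math.sqrt(3.0)) ** 0.25
assert abs(coeff / 3.9e-3 - 1.0) < 0.02   # ~ 3.9e-3, as stated
```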
Discussion
==========

As we have argued, chain inflation will lead to several new effects on the spectrum of primordial perturbations. In particular, our calculations show that the spectral index is changed from the naive one due to the presence of interactions between radiation and the vacuum energy (the contribution proportional to $Q_{V}\propto\varepsilon$ in formulas (\[spectindexeps\],\[spectindexQV\])). The detailed predictions depend in a sensitive way on the distribution of the decay rates between the different minima. If it were possible to measure these decay rates in a precise way, they would provide us with a wealth of information on features of the (effective) potential of the underlying fundamental theory. One needs to keep in mind, though, that it is necessary to distinguish these effects from other, similar effects that could arise from non-standard potentials in other models of inflation. Another, possibly more characteristic, prediction is the existence of signatures similar to those that could be generated through transplanckian physics. That is, a modulation of the spectrum due to the presence of a fundamental scale. In the case of transplanckian physics it is the Planck scale (or string scale) that determines the effect, while in the case of chain inflation it is instead the size of the nucleating bubbles. It is interesting to note that the model quite naturally, without much fine-tuning, gives rise to effects of a reasonable magnitude that could possibly be detected. As in the case of transplanckian physics, we have only been able to make a very rough estimate of the size of the effect. In order to make better predictions of observational signatures, a precise model with an explicit potential and field content needs to be specified. In [@Chialva:2008zw], based on work in [@DanJohLar] and [@ChDaJoLaVo], we proposed that flux compactified type IIB string theory provides such models in a natural way.
In that work we focused on the stabilization of the complex structure moduli using fluxes.[^12] With the help of monodromy transformations, generated by going around singular points in the moduli space of Calabi-Yau compactifications, we were able to show the existence of long sequences of minima of the necessary form. A quadratic behaviour, with $c=2$, typically arises when the axiodilaton is stabilized independently of the complex structure moduli, while the linear behaviour with $c=1$ arises when the axiodilaton is stabilized together with the complex structure moduli. The detailed form of the potentials depends heavily on the choice of Calabi-Yau manifolds and fluxes, but the overall features seem to be rather generic. In particular, the barriers in between the minima are expected to be such that an effective slow-roll behaviour arises. It would be interesting to further explore the possibility of generating potentials for chain inflation through string theory. Given the difficulty in finding appropriate potentials for standard inflation, we believe this to be a worthwhile enterprise.

The slow-roll parameter and its first derivative in a power-law chain inflation model
=====================================================================================

During inflation we naturally expect $$H^{2}\sim\frac{8\pi G}{3}\rho^{V} \label{approxHslow}$$ and[^13] $$\rho^{V}\sim\sum_{m}\epsilon_{m}p_{m}.$$ Here $p_{m}(t)$ is the fraction of volume occupied by the vacuum $m$ at time $t$, and its time evolution is given by (see [@Chialva:2008zw]) $$\dot{p}_{m}=-\widetilde{\Gamma}_{m}p_{m}+\widetilde{\Gamma}_{m+1}p_{m+1}.
\label{systemprob}$$ From this, we find $$\dot{\rho}^{V}=-\sum_{m}\Delta\epsilon_{m}\widetilde{\Gamma}_{m}p_{m},$$ and from (\[powerlaw\]) we see $$\Delta\epsilon_{m}\sim c\,\frac{\epsilon_{m}}{m}. \label{approxDeltaeps}$$ Then, using this and (\[approxHslow\]), we find for the slow-roll parameter $$\varepsilon=-\frac{\dot{H}}{H^{2}}=\frac{c}{2H}\,\frac{\sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}p_{m}\,m^{-1}}{\sum_{n}\epsilon_{n}p_{n}}. \label{slowrolldiffratgenform}$$ If we now define an average $\langle\frac{\widetilde{\Gamma}}{n}\rangle$ as $$\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle=\frac{\sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}p_{m}\,m^{-1}}{\sum_{n}\epsilon_{n}p_{n}},$$ the slow-roll parameter is given by $$\varepsilon\sim\frac{c}{2H}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle. \label{slowrollparneqrate}$$ The time derivative of the slow-roll parameter is as follows. From (\[slowrolldiffratgenform\]), using (\[approxDeltaeps\]), $$\dot{\varepsilon}=\frac{\varepsilon}{2}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle+\frac{c}{2H}\left(\frac{\sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}\dot{p}_{m}\,m^{-1}}{\sum_{n}\epsilon_{n}p_{n}}+c\,\frac{(\sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}p_{m}\,m^{-1})^{2}}{(\sum_{n}\epsilon_{n}p_{n})^{2}}\right) \label{slowrollderiv}$$ Let us focus on the two terms in the brackets.
The second one is simply $$c\,\frac{(\sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}p_{m}\,m^{-1})^{2}}{(\sum_{n}\epsilon_{n}p_{n})^{2}}=c\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{2}\,\,.$$ It is then easy to see, using (\[systemprob\]), that the numerator in the first term reads $$\begin{aligned} \sum_{m}\epsilon_{m}\widetilde{\Gamma}_{m}\dot{p}_{m}\,m^{-1} & =\sum_{m}\widetilde{\Gamma}_{m+1}p_{m+1}\left(\frac{\epsilon_{m}}{m}\widetilde{\Gamma}_{m}-\frac{\epsilon_{m+1}}{m+1}\widetilde{\Gamma}_{m+1}\right)\nonumber\\ & =-\sum_{m}\widetilde{\Gamma}_{m+1}p_{m+1}\Big(\Delta\left(\frac{\epsilon_{m+1}}{m+1}\right)\widetilde{\Gamma}_{m+1}+\frac{\epsilon_{m}}{m}\Delta(\widetilde{\Gamma}_{m+1})\Big)\end{aligned}$$ where we have defined, for any quantity $f_{m}$, $$\Delta\left( f_{m}\right) \equiv f_{m}-f_{m-1}\,\,.$$ We find that $$\Delta\left(\frac{\epsilon_{m+1}}{m+1}\right)=(c-1)\frac{\epsilon_{m+1}}{(m+1)^{2}}.$$ If we now define $$\sigma_{\langle\frac{\widetilde{\Gamma}}{n}\rangle}^{2}=\left\langle\left(\frac{\widetilde{\Gamma}}{n}\right)^{2}\right\rangle-\left\langle\frac{\widetilde{\Gamma}}{n}\right\rangle^{2}\,,$$ then, from (\[slowrollderiv\]) and (\[slowrollparneqrate\]), we have $$\dot{\varepsilon}\sim\left(1+\frac{2}{c}\right)H\varepsilon^{2}-(c-1)\varepsilon\,\sigma_{\widetilde{\Gamma}/n}^{2}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}-\varepsilon\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\Delta\widetilde{\Gamma}\rangle\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}+(c-1)\varepsilon\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}{\textstyle\frac{\Delta\widetilde{\Gamma}}{n-1}}\rangle\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}, \label{derivslowrollfinalapp}$$ where all the averaging has been made using the distribution $\rho_{m}=\epsilon_{m}p_{m}$.
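The population system (\[systemprob\]) underlying these averages can also be integrated directly. A minimal sketch (our illustration: uniform decay rates, lowest vacuum taken stable, units with $H=1$) confirms that the total volume fraction is conserved and that the phase distribution peaks on successively lower vacua, as assumed in the main text:

```python
# Forward-Euler integration of p_m' = -G_m p_m + G_{m+1} p_{m+1}  (eq. [systemprob]).
# Illustrative toy chain, not the paper's model: uniform rates G, stable lowest vacuum.
N, G, dt, steps = 30, 5.0, 1e-3, 2000
rates = [0.0] + [G] * (N - 1)          # G_0 = 0: the last vacuum in the chain is stable
p = [0.0] * N
p[N - 1] = 1.0                         # all volume starts in the highest vacuum

for _ in range(steps):
    dp = [-rates[m] * p[m] + (rates[m + 1] * p[m + 1] if m + 1 < N else 0.0)
          for m in range(N)]
    p = [p[m] + dt * dp[m] for m in range(N)]

assert abs(sum(p) - 1.0) < 1e-9        # total volume fraction conserved
assert p.index(max(p)) < N - 5         # occupation peak has moved down the chain
```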
For ease of notation, we define $$D\widetilde{\Gamma}\equiv(c-1)\varepsilon\,\sigma_{\widetilde{\Gamma}/n}^{2}\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}+\varepsilon\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\Delta\widetilde{\Gamma}\rangle\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}-(c-1)\varepsilon\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}{\textstyle\frac{\Delta\widetilde{\Gamma}}{n-1}}\rangle\langle{\textstyle\frac{\widetilde{\Gamma}}{n}}\rangle^{-1}.$$

Acknowledgments {#acknowledgments .unnumbered}
===============

The work was supported by the Swedish Research Council (VR) and by the EU Marie Curie Training Site contract: MRTN-CT-2004-512194.

[99]{}

D. Chialva and U. H. Danielsson, “Chain inflation revisited,” arXiv:0804.2846 \[hep-th\].

K. Freese and D. Spolyar, “Chain inflation: ’Bubble bubble toil and trouble’,” JCAP **0507** (2005) 007 \[arXiv:hep-ph/0412145\].

K. Freese, J. T. Liu and D. Spolyar, “Chain inflation via rapid tunneling in the landscape,” arXiv:hep-th/0612056.

Q. G. Huang, “Simplified Chain Inflation,” JCAP **0705** (2007) 009 \[arXiv:0704.2835 \[hep-th\]\].

Q. G. Huang and S. H. Tye, “The Cosmological Constant Problem and Inflation in the String Landscape,” arXiv:0803.0663 \[hep-th\].

J. Garriga and V. F. Mukhanov, “Perturbations in k-inflation,” Phys. Lett. B **458** (1999) 219 \[arXiv:hep-th/9904176\].

V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, “Theory of cosmological perturbations,” Phys. Rept. **215** (1992) 203.

J. Martin and R. H. Brandenberger, “The trans-Planckian problem of inflationary cosmology,” Phys. Rev. D **63** (2001) 123501 \[arXiv:hep-th/0005209\].

J. C. Niemeyer, “Inflation with a high frequency cutoff,” Phys. Rev. D **63** (2001) 123502 \[arXiv:astro-ph/0005533\].

U. H. Danielsson, “A note on inflation and transplanckian physics,” Phys. Rev. D **66** (2002) 023511 \[arXiv:hep-th/0203198\].

U. H. Danielsson, “Inflation, holography and the choice of vacuum in de Sitter space,” JHEP **0207** (2002) 040 \[arXiv:hep-th/0205227\].
R. Easther, B. R. Greene, W. H. Kinney and G. Shiu, “A generic estimate of trans-Planckian modifications to the primordial power spectrum in inflation,” Phys. Rev. D **66** (2002) 023518 \[arXiv:hep-th/0204129\].

L. Bergstrom and U. H. Danielsson, “Can MAP and Planck map Planck physics?,” JHEP **0212** (2002) 038 \[arXiv:hep-th/0211006\].

S. R. Coleman, “The Fate Of The False Vacuum. 1. Semiclassical Theory,” Phys. Rev. D **15** (1977) 2929 \[Erratum-ibid. D **16** (1977) 1248\].

S. R. Coleman and F. De Luccia, “Gravitational Effects On And Of Vacuum Decay,” Phys. Rev. D **21** (1980) 3305.

U. H. Danielsson, N. Johansson and M. Larfors, “The world next door: Results in landscape topography,” JHEP **0703** (2007) 080 \[arXiv:hep-th/0612222\].

D. Chialva, U. H. Danielsson, N. Johansson, M. Larfors and M. Vonk, “Deforming, revolving and resolving - New paths in the string theory landscape,” JHEP **0802** (2008) 016 \[arXiv:0710.0620 \[hep-th\]\].

J. Hamann, S. Hannestad, M. S. Sloth and Y. Y. Y. Wong, “Observing trans-Planckian ripples in the primordial power spectrum with future large scale structure probes,” arXiv:0807.4528 \[astro-ph\].

[^1]: For earlier attempts at models featuring inflation through a chain of first order decays, see also [@Freese:2004vs; @Freese:2006fk; @Huang:2007ek; @Huang:2008jr].

[^2]: We will leave aside the interesting possibility of having sizable contributions from isocurvature perturbations and/or non-gaussianities.

[^3]: See [@Chialva:2008zw] for its exhaustive definition. For our purposes it is sufficient to define it through the equations $$\begin{aligned} \dot{\rho}^{V} & =Q_{V}\\ \dot{\rho}^{r} & =-4H\rho^{r}-Q_{V}\end{aligned}$$ where $\rho^{V/r}$ is the energy density for vacuum/radiation.

[^4]: This issue is absent in [@kinflGarrMuk] in the case of flat space.

[^5]: This is the same action as in [@kinflGarrMuk], with the derivative term integrated by parts.
[^6]: In our previous paper we used $\hat{O}=-k^{2}$, but here we prefer to use the other sign.

[^7]: In the following section we will discuss this choice thoroughly, investigating the possibility of new fundamental physics at a length scale larger than the Planck one.

[^8]: See for example [@Martin:2000xs; @Niemeyer:2000eh; @Danielsson:2002kx; @Danielsson:2002qh; @Easther:2002xe].

[^9]: We recognize in this the variation of the Gibbs energy for the nucleation of a bubble, with a different normalization for what concerns energy density and surface tension.

[^10]: If the tension is due to a scalar field, $\phi$, we have that $S=\int d\phi\sqrt{V}$, where $V$ is the potential.

[^11]: All averages are taken with the distribution of vacuum phases $\rho_{m}^{V}=\epsilon_{m}p_{m}(t)$; see the appendix and [@Chialva:2008zw].

[^12]: In our simplified model the Kähler moduli, i.e. the moduli determining the sizes of the extra dimensions, were assumed to be fixed by other physics.

[^13]: Here and in the following $\rho^{V}$ represents the energy density of the vacuum in the interior of the bubbles. We in general expect this formula for the following reason. First of all, during inflation the total energy density is dominated by the vacuum component. The latter is then given by the contributions, respectively, of the interior of the bubbles and of the walls. But the energy density of uncollided walls is proportional to the energy difference between two consecutive vacua, while that of the interior of the bubbles is proportional to the energy level, which is greater than the difference.
---
abstract: 'The growing amount of intermittent renewables in power generation creates challenges for real-time matching of supply and demand in the power grid. Emerging ancillary power markets provide new incentives to consumers (e.g., electrical vehicles, data centers, and others) to perform demand response to help stabilize the electricity grid. A promising class of potential demand response providers includes energy storage systems (ESSs). This paper evaluates the benefits of using various types of novel ESS technologies for a variety of emerging smart grid demand response programs, such as regulation services reserves (RSRs), contingency reserves, and peak shaving. We model, formulate and solve optimization problems to maximize the net profit of ESSs in providing each demand response. Our solution selects the optimal power and energy capacities of the ESS, determines the optimal reserve value to provide as well as the ESS real-time operational policy for program participation. Our results highlight that applying ultra-capacitors and flywheels in RSR has the potential to be up to 30 times more profitable than using common battery technologies such as LI and LA batteries for peak shaving.'
author:
- 
title: |
  Optimizing Energy Storage Participation\
  in Emerging Power Markets
---

Acknowledgment {#acknowledgment .unnumbered}
==============

This paper is supported by the NSF Grant 1464388.
---
abstract: 'Using the observational properties of Einstein’s gravitational field it is shown that a minimum of four non-coplanar mass probes is necessary for the Michelson and Morley interferometer to detect gravitational waves within the context of General Relativity. With fewer probes, some alternative theories of gravitation can also explain the observations. The conversion of the existing gravitational wave detectors to four probes is also suggested.'
author:
- |
  Ivan S. Ferreira\
  University of Brasilia, Institute of Physics, Brasilia, DF 70910-900, ivan@fis.unb.br,\
  C. Frajuca\
  National Institute for Space Research, Sao Jose dos Campos, 12227-010, frajuca@gmail.com,\
  Nadja S. Magalhaes\
  Physics Department, Sao Paulo Federal University, SP 09913-030, Brazil, nadjasm@gmail.com,\
  M. D. Maia\
  University of Brasilia, Institute of Physics, Brasilia, DF 70910-900, maia@unb.br,\
  Claudio M. G. Sousa\
  Federal University of Para, Santarem, PA 68040-070, claudiogomes@ufpa.br.
title: The Laser Gravitational Compass
---

The Observable Gravitational Wave
=================================

Einstein’s prediction of gravitational waves (gw) was originally derived from arbitrarily small perturbations of the Minkowski metric $g_{\mu\nu}= \eta_{\mu\nu} + h_{\mu\nu}$, such that Einstein’s equations reduce to a linear wave equation, written in a special (de Donder) coordinate gauge, as follows (except when explicitly stated, Greek indices run from 0 to 3 and lower-case Latin indices run from 1 to 3): $$\Box^2 \Psi_{\mu\nu} = 0, \;\;\;\; \Psi_{\mu\nu}= h_{\mu\nu}-\frac{1}{2}h \eta_{\mu\nu},\;\;\;\; h=\eta^{\mu\nu}h_{\mu\nu}.
\label{eq:waves}$$ The currently operating laser gw observatories are inspired by the Michelson & Morley (M&M) interferometers[@Giazotto; @Pitkin; @Eardley], where the data acquired by the Fabry-Pérot interferometer (an etalon) is used to generate a numerical simulation, thus producing a template from which the most probable source is estimated[@Abbot2016]. The purpose of this note is to show that detecting gw described by General Relativity (GR) with an M&M interferometer requires a minimum of four non-coplanar mass probes. The observables of Einstein’s gravitational field are given by the eigenvalues of the Riemann curvature[@Zakharov; @Pirani1956; @Pirani1957], defined by $R(U,V)W = [\nabla_U, \nabla_V]W - \nabla_{[U,V]}W$, whose components in any given basis $\{e_\mu \}$ are $R(e_\mu,e_\nu)e_\rho = R_{\mu\nu\rho\sigma}e^\sigma$. Then we find that there are at most six independent eigenvectors $X_{\mu\nu}$ and six eigenvalues $\lambda$, solutions of the eigenvalue equation $R_{\mu\nu\rho\sigma}X^{\rho\sigma}=\lambda X_{\mu\nu}$, including the zero eigenvalue, corresponding to the absence of gravitation[@Pirani1962; @Sachs1962a; @Pirani1967]. Thus, using the language of field theory, Einstein’s gravitation is said to have five non-trivial observables or degrees of freedom $(dof)$. The spin-statistics theorem relates the $dof$ to the helicity or the orbital spin of the field as $s=(dof-1)/2$, so that Einstein’s gravitation is also said to be a spin-2 field. Alternative theories of gravitation may have distinct definitions of observables (not necessarily related to curvature), and their gravitational waves, if they exist, may require different methods of observation. Well-known examples include: the spin-1 gauge theories of gravitation (there are several of them), characterized by $dof=3$; topological gravitation in three dimensions; projective theories of gravitation; $F(R)$ theories; among many others.
Therefore, in order to understand the observation of a gw it is essential to specify the observables of the theory on which the experiment is based. The most general massless spin-2 field $h_{\mu\nu}$ was defined in the Minkowski space-time by Fierz and Pauli[@FierzPauli] as a trace-free field $h=h_\mu{}^\mu=0$ satisfying the field equations $\Box^2 h_{\mu\nu}=0$. This is a linear field, not to be confused with Einstein’s gravitation. Since in the case of the present observations of gw the supporting theory is Einstein’s gravitation, the observational signature to be sought is that of a $dof=5$, or spin-2, field, characterized by the observable curvature. The use of the Fierz-Pauli field $h_{\mu\nu}$ as the perturbation of the Minkowski metric makes it possible to free Eq. (\[eq:waves\]) of coordinate gauges, so that its solutions can be written as a superposition of plane polarized gravitational waves, characterized by the Traceless-Transverse-Plane-Polarized (TTPP) gauge conditions[@Giazotto]: $$h=0,\;\; h_{i0}=0, \;\; \Box^2 h_{\mu\nu}=0,\;\; h_{\mu\nu;\rho}=0. \label{eq:TTPP}$$ These conditions are then used to simulate a template, from which the source of the gw observations is estimated.

The Equivalence Principle and the M&M gw Detector
=================================================

The use of an M&M detector for gw is based on the principle of equivalence of GR: given two masses A and B with attached mirrors, under the exclusive action of a known gravitational field, they propagate (or “free fall”) along time-like geodesics, with unit tangent vectors $T_A$ and $T_B$ respectively, satisfying the geodesic equations $\nabla_{T_{A}} T_A=0$ and $\nabla_{T_{B}} T_B=0$. Eventually, probe A sends a light signal with velocity $P$ to particle B along the light geodesic with equation $\nabla_P P=0$.
After a while, probe A receives back the reflected signal, so that these geodesics describe a closed parallelogram, with the closing condition $\nabla_T P=\nabla_P T$. The curvature tensor calculated in that parallelogram is $R(T,P)T = [\nabla_T ,\nabla_P ]T = \nabla_T (\nabla_T P) = \nabla_T a$, where $a = \nabla_T P$ is the acceleration of the signal along the fall. Defining a basis by $\{e_0 =T , e_i= P \}$, where $e_i$ denotes any space-like direction, we obtain the geodesic deviation equation in components $\frac{da_i}{c\,dt}=R_{0i0i}$, where $t$ denotes the time parameter of the time-like geodesic. As we can see, the motion of the probes generates a 2-dimensional world-sheet, whose curvature $R_{0i0i}$ is translated directly into the variation of the fall acceleration. In the currently operating and some future planned M&M gw detectors, three mass probes are used, defining a space-like plane in space-time, whose motion under the action of a gw generates a 3-dimensional world-volume, whose curvature is measured by a $2 \times 2$ acceleration array $a_{ij}$, obeying the geodesic deviation equation $$\frac{da_{ij}}{c\,dt}=R_{0i0j}, \;\; i,j =1,2. \label{eq:GDE2}$$ Therefore, such detectors are capable of measuring only 3 curvature components, $R_{0101},R_{0102},R_{0202}$, so that at most three degrees of freedom of Einstein’s gravitational field can be obtained. Since the observed gw are very weak, it has been assumed that the missing degrees of freedom in one detector may be complemented by the data collected on another detector located somewhere else. Such understanding is supported by the numerical simulation, from which an estimate of the wave source capable of reproducing the same observation in two separate detectors is obtained. Although it is possible to parallelly transport the curvature tensor from one detector to another, the left-hand side of Eq.
(\[eq:GDE2\]) represents a locally measured quantity which cannot be transported to another detector without breaching the principle of equivalence. The definitive solution to the missing-dof problem can be obtained by a direct measurement of the curvature tensor using the geodesic deviation equation Eq. (\[eq:GDE2\]), extended to $i,j =1,2,3$. One such detector, the “gravitational compass”, was conceived by P. Szekeres. It consists of four non-coplanar mass probes connected by six dynamometers, used to measure the six curvature eigenvectors. By comparing three of these eigenvectors with the three principal directions of curvature, the compass can point directly at the source[@Szekeres]. Szekeres’ gravitational compass was improved by F. Pirani[@Pirani1967]: instead of the dynamometers, the four mass probes have attached mirrors and light sources. Then, applying Eq. (\[eq:GDE2\]) again for $i,j =1,2,3$, all components of the curvature can be read locally, defining the 4-dimensional gravitational field at the center of mass of the four probes. The interesting fact is that four non-coplanar masses define a virtual spheroidal surface in space-time, so that the natural oscillation modes of the spheroid give a direct measure of the curvature eigenvalues. The Szekeres-Pirani gravitational compass can be implemented in the presently operating gw observatories with the addition of a fourth mass probe not belonging to the plane of the existing 2-dimensional M&M detectors, located either in a tower or a well, not necessarily having the same length as the existing arms, thus defining a spheroidal gw detector.\ Summarizing, the presently operating gw detectors are hindered by the fact that they cannot directly measure all the degrees of freedom of Einstein’s gravitational field at each point, thus leaving open the possibility of alternative explanations for the gw.
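The counting above can be made concrete. The tidal (“electric”) components $E_{ij}=R_{0i0j}$ form a symmetric $3\times3$ matrix (five independent entries once the vacuum trace-free condition is imposed), and a planar three-probe detector samples only its $2\times2$ sub-block. A toy sketch (our illustration, with arbitrary numbers) exhibits a nonzero tidal field that a planar detector reads as identically zero, while a four-probe compass does not:

```python
# E[i][j] = R_{0i0j}: symmetric, trace-free tidal matrix (illustrative numbers)
E = [[0.0, 0.0, 0.7],
     [0.0, 0.0, -0.3],
     [0.7, -0.3, 0.0]]

assert all(E[i][j] == E[j][i] for i in range(3) for j in range(3))  # symmetric
assert sum(E[i][i] for i in range(3)) == 0.0                        # trace-free (vacuum)

planar = [E[i][j] for i in range(2) for j in range(i, 2)]    # 3 components: R_0101, R_0102, R_0202
compass = [E[i][j] for i in range(3) for j in range(i, 3)]   # all 6 components

assert all(v == 0.0 for v in planar)    # invisible to a coplanar 3-probe detector
assert any(v != 0.0 for v in compass)   # detected by 4 non-coplanar probes
```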
For the specific case of gravitation in the sense of Einstein’s theory, we suggest the addition of a fourth mass to the existing detectors, so that all degrees of freedom of gravitation are directly measured from the five fundamental oscillation modes of a spheroid. **Acknowledgments**: NSM and CF acknowledge FAPESP for its support to their research through the thematic project 2013/26258-4. [20]{} A. Giazotto, *Phys. Reports* **182**, 365 (1989). D. M. Eardley, D. L. Lee and A. P. Lightman, *Phys. Rev. Lett.* **30**, 884 (1973). M. Pitkin, S. Reid, S. Rowan and J. Hough, *Living Reviews in Relativity* **14**, 5 (2011). B. P. Abbott et al., *Phys. Rev. Lett.* **116**, 061102 (2016). V. D. Zakharov, *Gravitational Waves in Einstein’s Theory* (John Wiley and Sons, New York, N.Y., 1972). F. A. E. Pirani, *Acta Physica Polonica* **15**, 389 (1956). F. A. E. Pirani, *Physical Review* **105**, 1089 (1957). F. A. E. Pirani, Gravitational radiation, in *Gravitation, An Introduction to Current Research*, ed. L. Witten (John Wiley and Sons, New York, 1962). R. K. Sachs, *Gravitational Waves in General Relativity*, Proc. Roy. Soc. A **270**, 103 (1962). F. A. E. Pirani, Introduction to gravitational radiation theory, in *Lectures in General Relativity, Brandeis Summer Institute in Theoretical Physics*, eds. S. Deser and K. W. Ford (Prentice-Hall, Englewood Cliffs, 1965). M. Fierz and W. Pauli, *On Relativistic Wave Equations for Particles of Arbitrary Spin in an Electromagnetic Field*, Proc. Roy. Soc. Lond. A **173**, 211 (1939). P. Szekeres, *The Gravitational Compass*, Jour. Math. Phys. **6**, 1387 (1965). N. S. Magalhaes, W. W. Johnson, C. Frajuca and O. D. Aguiar, *Determination of astrophysical parameters from the spherical gravitational wave detector data*, Mon. Not. R. Astron. Soc. **274**, 670 (1995). N. S. Magalhaes, W. W. Johnson, C. Frajuca and O. D. Aguiar, *A geometric method for location of gravitational wave sources*, Astrophys. Jour. **475**, 462 (1997).
--- bibliography: - 'references.bib' title: A Symbolic Decision Procedure for Symbolic Alternating Finite Automata ---
--- abstract: 'Spatial modulations in the distribution of observed luminosities (computed using redshifts) of $\sim 5\times 10^5$ galaxies from the SDSS Data Release $7$, probe the cosmic peculiar velocity field out to $z\sim 0.1$. Allowing for luminosity evolution, the $r$-band luminosity function, determined via a spline-based estimator, is well represented by a Schechter form with $ M^{\star}(z)-5{\rm log_{10}} h=-20.52-1.6(z-0.1)\pm 0.05$ and $\alpha^{\star}=-1.1\pm 0.03$. Bulk flows and higher velocity moments in two redshift bins, $0.02 < z < 0.07$ and $0.07 < z < 0.22$, agree with the predictions of the $\Lambda$CDM model, as obtained from mock galaxy catalogs designed to match the observations. Assuming a $\Lambda$CDM model, we estimate $\sigma_{8}\approx 1.1\pm 0.4$ for the amplitude of the linear matter power spectrum, where the low accuracy is due to the limited number of galaxies. While the low-$z$ bin is robust against coherent photometric uncertainties, the bias of results from the second bin is consistent with the $\sim1$% magnitude tilt reported by the SDSS collaboration. The systematics are expected to have a significantly lower impact in future datasets with larger sky coverage and better photometric calibration.' author: - Martin Feix - Adi Nusser - Enzo Branchini bibliography: - 'bulk\_ref.bib' title: Tracing the cosmic velocity field at $z\sim 0.1$ from galaxy luminosities in the SDSS DR7 --- Introduction {#sec:int} ============ In recent years, the amount of available extragalactic data has helped to establish a comprehensive picture of our Universe and its evolution [e.g. @Percival2010; @Riess2011; @Hinshaw2013; @Planck2013]. These data, by and large, have enforced the standard cosmological paradigm where initial perturbations in the mass density field grow via gravitational instability and eventually form the cosmic structure we observe today.
The clustering process is inevitably associated with peculiar motions of matter, namely deviations from a pure Hubble flow. On large scales, these motions exhibit a coherent pattern, with matter generally flowing from underdense to overdense regions. If galaxies indeed move much like test particles, they should appropriately reflect the underlying peculiar velocity field which contains valuable information and, in principle, could be used to constrain and discriminate between different cosmological models. Usually relying on galaxy peculiar velocities estimated from measured redshifts and distance indicators, most approaches in the literature have focused on extracting this information within local volumes of up to $100h^{-1}$ Mpc and larger centered on the Milky Way [e.g., @Riess1995; @Dekel1999; @Zaroubi2001; @Hudson2004; @Sarkar2007; @Lavaux2010; @feldwh10; @ND11; @Turnbull2012; @Feindt2013]. Common distance indicators are based on well-established relations between observable intrinsic properties of a given astronomical object, where one of them depends on the object’s distance. A typical example is the Tully-Fisher relation [@TF77] between rotational velocities of spiral galaxies and their absolute magnitudes. Due to observational challenges, the number of galaxies in distance catalogs is relatively small compared to that of redshift catalogs, limiting the possibility of exploring the cosmological peculiar velocity field to low redshifts $z\sim$ 0.02–0.03. Moreover, all known distance indicators are potentially plagued by systematic errors [@lyn88; @Strauss1995] which could give rise to unwanted biases in the inferred velocities and thus render their use for cosmological purposes less desirable. To probe the flow of galaxies at deeper redshifts, one needs to resort to non-traditional distance indicators. One method, for instance, exploits the kinetic Sunyaev-Zel’dovich effect to measure the cosmic bulk flow, i.e.
the volume average of the peculiar velocity field, out to depths of around 100–500$h^{-1}$ Mpc [e.g., @Haehnelt1996; @Osborne2011; @Lavaux2013; @planck_bf]. Another strategy is based on the apparent anisotropic clustering of galaxies in redshift space, commonly described as redshift-space distortions. This effect is a direct consequence of the additional displacement between distances and redshifts caused by the peculiar motions of galaxies, and it yields reliable constraints on the amplitude of large-scale coherent motions and the growth rate of density perturbations [e.g., @Hamilton1998; @Peacock2001; @Scoccimarro2004; @Guz08]. Galaxy peculiar motions also affect luminosity estimates based on measured redshifts, providing another way of tackling the problem. Since the luminosity of a galaxy is independent of its velocity, systematic biases in the estimated luminosities of galaxies can be used to explore the peculiar velocity field. The idea has a long history. It was first adopted to constrain the velocity of the Virgo cluster relative to the Local Group by correlating the magnitudes of nearby galaxies with their redshifts [@TYS]. Although in need of very large galaxy numbers to be effective, methods based on this idea use only measured galaxy luminosities and their redshifts to derive bounds on the large-scale peculiar velocity field. Therefore, these methods do not require the use of traditional distance indicators and they are also independent of galaxy bias. Using the nearly full-sky 2MASS Redshift Survey (2MRS) [@Huchra2012], for example, this approach has recently been adopted to constrain bulk flows in the local Universe within $z\sim 0.01$ [@Nusser2011; @Branchini2012]. Furthermore, it has been used to determine the current growth rate of density fluctuations by reconstructing the full linear velocity field from the clustering of galaxies [@Nusser1994; @Nusser2012].
Here we seek to apply this luminosity-based approach to obtain peculiar velocity information from galaxy redshifts and apparent magnitudes of the Sloan Digital Sky Survey (SDSS) [@York2000]. The goals of our analysis are: - A demonstration of the method’s applicability to datasets with large galaxy numbers. - An updated estimate of the $r$-band luminosity function of SDSS galaxies at $z\sim 0.1$, accounting for evolution in galaxy luminosities. - Novel bounds on bulk flows and higher-order moments of the peculiar velocity field at redshifts $z\sim 0.1$. - First constraints on the angular peculiar velocity power spectrum and cosmological parameters without additional input such as galaxy clustering information. The paper is organized as follows: we begin by introducing the luminosity method and its basic equations in section \[section2\]. In section \[section3\], we then describe the SDSS galaxy sample used in our analysis, together with a suite of mock catalogs which will allow us to assess uncertainties and known systematics inherent to the data. After a first test of the method, we attempt to constrain peculiar motions in section \[section4\], assuming a redshift-binned model of the velocity field. Because of the mixing between different velocity moments arising from the SDSS footprint, bulk flow measurements are interpreted with the help of galaxy mocks. Including higher-order velocity moments, we proceed with discussing constraints on the angular velocity power in different redshift bins and their implications. As an example of cosmological parameter estimation, we further infer the quantity $\sigma_{8}$, i.e. the amplitude of the linear matter power spectrum on a scale of $8h^{-1}$ Mpc, and compare the result to the findings from the corresponding mock analysis. Other potential issues and caveats related to our investigation are addressed at the section’s end.
In section \[section5\], we finally summarize our conclusions and the method’s prospects in the context of next-generation surveys. For clarity, some of the technical material is separately given in an appendix. Throughout the paper, we adopt the standard notation, and all redshifts are expressed in the rest frame of the cosmic microwave background (CMB) using the dipole from ref. [@Fixsen1996]. Methodology {#section2} =========== Variation of observed galaxy luminosities {#section2a} ----------------------------------------- In an inhomogeneous universe, the observed redshift $z$ of an object (a galaxy) is generally different from its cosmological redshift $z_{c}$ defined for the unperturbed background. To linear order in perturbation theory, one finds the well-known expression [@SW] $$\begin{split} \frac{z-z_{c}}{1+z} &= \frac{V(t,r)}{c} - \frac{\Phi(t,r)}{c^2}\\ &{ } - \frac{2}{c^2}\int_{t(r)}^{t_0}\dd t \frac{\partial\Phi\left\lbrack\hvr r(t),t\right\rbrack}{\partial t}\approx \frac{V(t,r)}{c}, \end{split} \label{eq:sw}$$ where $V$ is the physical radial peculiar velocity of the object, $\Phi$ denotes the gravitational potential and $\hvr$ is a unit vector along the line of sight to the object. The last step explicitly assumes low redshifts where the velocity $V$ makes the dominant contribution.[^1] Note that all fields are considered relative to their present-day values at a comoving radius of $r(t=t_{0})$ and that we have substituted $z$ for $z_{c}$ in the denominator on the left-hand side of eq. , which simplifies part of the analysis presented below and is consistent at the linear level. The observed absolute magnitude $M$, computed using the galaxy redshift $z$, rather than the (unknown) cosmological redshift $z_{c}$, differs from the true value $M^{(t)}$ because of the shift ${\rm DM}(z)-{\rm DM}(z_c)$ in the distance modulus ${\rm DM}=25+5\log_{\rm 10}\lbrack D_{L}/{\rm Mpc}\rbrack$, where $D_{L}$ is the luminosity distance.
Hence, $$\begin{split} M &= m - {\rm DM}(z) - K(z) + Q(z)\\ &= M^{(t)} + 5\log_{10}\dfrac{D_{L}(z_{c})}{D_{L}(z)}, \end{split} \label{eq:magvar}$$ where $m$ is the apparent magnitude, $K(z)$ is the $K$-correction [e.g., @Blanton2007], and the function $Q(z)$ accounts for luminosity evolution. Since the variation $M-M^{(t)}$ of magnitudes distributed over the sky is systematic, it can be used to gain information on the peculiar velocity field. In the following, we will discuss how this may be achieved with the help of maximum-likelihood techniques. Statistical description {#section2c} ------------------------ ### Inference of bulk flows and other velocity moments Before introducing our methodology, we need to specify a suitable model of the velocity field. Although a popular option is to characterize peculiar velocities in terms of bulk flows, one could aim at a more complete description of the peculiar velocity field. Given the current data, however, a full three-dimensional estimate of the velocity field would be entirely dominated by the noise. A more promising approach is the following: first, we subdivide the galaxy data into suitable redshift bins and consider the bin-averaged velocity $\tilde{V}$. Supposing for the moment that we are dealing with a single bin, we then proceed to decompose $\tilde{V}(\hvr )$ (evaluated at the galaxy position $\hvr$) into spherical harmonics, i.e. $$\begin{split} a_{lm} &= \int\dd\Omega\tilde{V}(\hvr )Y_{lm}(\hvr ),\\ \tilde{V}(\hvr ) &= \sum\limits_{l,m}a_{lm}Y_{lm}^{*}(\hvr ),\quad l>0, \end{split} \label{eq:2c1}$$ where the sum over $l$ is cut at some maximum value $l_{\rm max}$. A bulk flow of the entire volume, denoted as $\vv_{B}$, corresponds to the dipole term ($l=1$) in eq. . Building on the pioneering work of [@TYS], for example, the analysis presented in [@Nusser2011] has initially been restricted to a model with $l_{\rm max}=1$ when considering galaxies from the 2MRS [@Huchra2012]. 
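The size of the magnitude shift in eq. (\[eq:magvar\]) can be gauged with a short numerical sketch, assuming a flat $\Lambda$CDM background with illustrative values $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$, and the linearized relation $z_c \approx z - V(1+z)/c$ from eq. (\[eq:sw\]):

```python
import numpy as np

c = 299792.458            # speed of light in km/s
H0, Om = 70.0, 0.3        # assumed flat LCDM background (illustrative)

def D_L(z, n=4096):
    # luminosity distance in Mpc via a trapezoidal comoving-distance integral
    zs = np.linspace(0.0, z, n + 1)
    f = 1.0 / np.sqrt(Om * (1.0 + zs)**3 + 1.0 - Om)
    Dc = (z / n) * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * c / H0
    return (1.0 + z) * Dc

def delta_M(z_obs, V):
    # M - M^(t) = 5 log10[ D_L(z_c) / D_L(z) ], with the linearized
    # relation z_c = z - V(1+z)/c for the cosmological redshift
    z_c = z_obs - V * (1.0 + z_obs) / c
    return 5.0 * np.log10(D_L(z_c) / D_L(z_obs))
```

For instance, a galaxy at $z=0.1$ receding with $V=+300$ km s$^{-1}$ appears brighter than it truly is by roughly $0.02$ mag, which sets the scale of the systematic modulation the method exploits.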
Assuming that redshift errors can be neglected [@Nusser2011], we write the probability of observing a galaxy with magnitude $M$, given only its redshift and angular position $\hvr$ on the sky, as $$P\left (M\vert z,a_{lm}\right ) = P\left (M\vert z,\tilde{V}(\hvr )\right ) = \frac{\phi(M)}{\eta\left (M^{+},M^{-}\right )}, \label{eq:2c2}$$ where $\phi(M)$ is the galaxy luminosity function (LF) and $\eta\left (M^{+},M^{-}\right )$ is defined as $$\eta\left (M^{+},M^{-}\right ) = \int_{M^{+}}^{M^{-}}\phi(M)\dd M. \label{eq:2b2}$$ The corresponding limiting magnitudes $M^{\pm}$ are given by $$\begin{split} M^{+} &= \max\left\lbrack M_{\rm min}, m^{+} - {\rm DM}(z_{c}) - K(z) + Q(z)\right\rbrack,\\ M^{-} &= \min\left\lbrack M_{\rm max}, m^{-} - {\rm DM}(z_{c}) - K(z) + Q(z)\right\rbrack, \end{split} \label{eq:app1c}$$ where $m^{\pm}$ are the sample’s limiting apparent magnitudes and the cosmological redshift $z_{c}$ depends on the velocity $\tilde{V}$ and the observed redshift $z$ because of eq. . The velocity model enters the expression for the limiting magnitudes $M^{\pm}$ since it induces a shift in the distance modulus. The coefficients $a_{lm}$ of the flow modes can, therefore, be inferred by maximizing the total log-likelihood obtained from the sum over all galaxies in a sample, i.e. $\log P_{\rm tot} = \sum\log P_{i}$. The rationale for this is to find the set of $a_{lm}$ which minimizes the spread in the observed magnitudes [@Nusser2011]. The spherical harmonics provide an orthogonal basis only in the case of an all-sky survey, and the partial sky coverage of the SDSS implies that the inferred moments will not be statistically independent. For example, a quadrupole velocity mode ($l=2$) would contaminate the estimate of a bulk flow $\vv_{B}$, which must be taken into account when interpreting any results. The monopole term ($l=0$) is completely degenerate with an overall shift of the magnitudes, and hence it is not included.
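A minimal sketch of the per-galaxy probability of eq. (\[eq:2c2\]) follows, assuming a Schechter LF with illustrative parameter values (the same ones quoted later for the mocks, not a fit to the data):

```python
import numpy as np
from scipy.integrate import quad

# Schechter LF of eq. (2d2), unnormalized; parameters are illustrative
Mstar, alpha = -20.44, -1.05

def phi(M):
    x = 10.0**(0.4 * (Mstar - M))
    return 10.0**(0.4 * (1.0 + alpha) * (Mstar - M)) * np.exp(-x)

def log_P(M, M_plus, M_minus):
    # log P(M | z, V) = log phi(M) - log eta(M+, M-), eqs. (2c2)-(2b2);
    # the normalization eta restricts phi to the observable window
    eta, _ = quad(phi, M_plus, M_minus)
    return np.log(phi(M)) - np.log(eta)
```

Since the limiting magnitudes $M^{\pm}$ depend on the velocity model through the distance modulus, re-evaluating `log_P` with shifted limits is exactly how the velocity enters the likelihood.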
If the number of available galaxies is large enough, the central limit theorem implies that $P_{\rm tot}$ becomes approximately normal, and we have $$\log P_{\rm tot}\left (\vd\vert\vx\right ) = -\frac{1}{2}\left (\vx-\overline{\vx}\right )^{\rm T}\bm{\Sigma}^{-1} \left (\vx-\overline{\vx}\right ), \label{eq:2c3}$$ where $\vx$ is a vector of all model parameters, $\overline{\vx}$ is the corresponding mean, $\vd$ denotes the data (or Bayesian evidence), and $\bm{\Sigma}$ is the covariance matrix describing the expected error of our estimate. We have numerically verified that this approximation is extremely accurate for the SDSS which comprises several hundred thousand galaxies. The distribution’s mean in eq. simply corresponds to the maximum-likelihood estimate $\hat{\vx}^{\rm ML}$ of the vector $\vx$, and $\bm{\Sigma}$ can be estimated either by inverting the observed Fisher matrix $\mathbf{F}$, defined as $\mathbf{F}_{\alpha\beta}=-\partial^{2}\log P_{\rm tot}/(\partial x_{\alpha}\partial x_{\beta})$ evaluated at the maximum value $\hat{\vx}^{\rm ML}$, or from a realistic set of mock galaxy catalogs. The increasing number of parameters associated with the higher-order moments of $\tilde{V}$ typically renders a full numerical evaluation of $\log P_{\rm tot}$ unfeasible. A solution to this problem is based on approximating the total log-likelihood function to second order (see section \[sectionnum\] and appendix \[app1\] for details). In the realistic application to SDSS data, the model parameters $\vx$ include the coefficients $a_{lm}$ (for each redshift bin) as well as the LF parameters. They will be determined simultaneously by maximizing $\log P_{\rm tot}$.
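The observed Fisher matrix can also be estimated numerically; below is a generic central-finite-difference sketch (the paper itself uses the analytic derivatives of appendix \[app1\], so this is only an assumption-free stand-in for any smooth negative log-likelihood):

```python
import numpy as np

def fisher(neg_loglike, x_ml, eps=1e-4):
    # observed Fisher matrix F_ab = d^2(-log P_tot)/(dx_a dx_b) at the
    # maximum-likelihood point, via central finite differences
    n = len(x_ml)
    F = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            def f(da, db):
                y = np.array(x_ml, float)
                y[a] += da * eps
                y[b] += db * eps
                return neg_loglike(y)
            F[a, b] = (f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1)) / (4 * eps**2)
    return F
```

The parameter covariance estimate is then `np.linalg.inv(F)`.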
The relevant quantity here is the angular velocity power spectrum $C_{l}=\langle\lvert a_{lm}\rvert^{2}\rangle$. Under these preliminaries, the problem of inferring the $C_{l}$ becomes equivalent to the more familiar estimation of the CMB anisotropy power spectrum, and may thus be tackled with the same general techniques [@Tegmark1997; @Bond1998]. To estimate the power spectrum, one simply maximizes the probability of observing the data given the $C_{l}$, i.e. $$P(C_{l})\equiv P\left (\vd\vert C_{l}\right )\propto\int\dd a_{lm}P\left (\vd\vert a_{lm}\right )P\left (a_{lm}\vert C_{l}\right ), \label{eq:2c4}$$ which is obtained by constructing the posterior likelihood according to Bayes’ theorem and marginalizing over the $a_{lm}$. Here the individual $a_{lm}$ are uncorrelated and taken to be normally distributed, i.e. one has $$P\left (a_{lm}\vert C_{l}\right ) = \prod\limits_{l,m}\left (2\pi C_{l}\right )^{-1/2}\exp \left (-\frac{\lvert a_{lm}\rvert^{2}}{2C_{l}}\right ), \label{eq:2c5}$$ and $P\left (\vd\vert a_{lm}\right )$ is derived from marginalizing $P_{\rm tot}\left (\vd\vert\vx\right )$ over the remaining parameters in $\vx$. Within the Gaussian approximation, carrying out the integration in eq. is straightforward and the resulting expressions are presented in appendix \[app3\]. Considering a particular model like, for example, the standard $\Lambda$CDM cosmology, the $C_{l}$ are fully specified by a set of cosmological parameters $\zeta_{k}$. Therefore, accounting for this dependency in the prior probability, the above technique may further be used to constrain cosmological key quantities such as $\sigma_{8}$ from the observed peculiar velocity field alone. Given the characteristics of current galaxy redshift surveys, it is clear that these constraints will be much less tight than those obtained by other means such as CMB analysis, but still valuable as a complementary probe and consistency check.
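In the Gaussian approximation the marginalization in eq. (\[eq:2c4\]) reduces to a standard result: if $P(\vd\vert a_{lm})$ is Gaussian in the $a_{lm}$ with mean $\hat a_{lm}$ and covariance $\bm\Sigma$, the marginal likelihood is a zero-mean Gaussian with covariance $\bm\Sigma + \mathrm{diag}(C_l)$ evaluated at $\hat a_{lm}$. A sketch of this generic result (not the specific expressions of appendix \[app3\]):

```python
import numpy as np

def log_P_Cl(a_hat, Sigma, Cl_diag):
    # Gaussian marginalization over the a_lm: with P(d|a) = N(a_hat, Sigma)
    # and the prior of eq. (2c5), the marginal is N(0, Sigma + diag(C_l))
    # evaluated at the measured coefficients a_hat.
    M = Sigma + np.diag(Cl_diag)
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * M)
    return -0.5 * (logdet + a_hat @ np.linalg.solve(M, a_hat))
```

Maximizing this quantity over the $C_l$ (or over cosmological parameters $\zeta_k$ that predict them) is the power-spectrum estimation step described above.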
A successful application of the method requires a large number of galaxies to beat the statistical (Poissonian) errors. The method does not require accurate redshifts and can be used with photometric redshifts to recover signals on scales larger than the spread of the redshift error. Other related maximum-likelihood approaches based on reduced input (photometric redshifts or just magnitudes) [@Itoh2010; @Abate2012] consider integrated quantities such as number densities, resulting in less sensitive measurements of bulk flows and higher-order moments of the peculiar velocity field. Estimating the galaxy luminosity function {#section2d} ----------------------------------------- A reliable measurement of the galaxy LF represents a key step in our approach. A corresponding estimator should be flexible enough to capture real features in the luminosity distribution, but also physical in the sense of returning a smooth function over the range of interest. To meet these requirements, we shall adopt the spline-based estimator introduced in [@Branchini2012] for our analysis.[^2] In this case, the unknown LF is written as a piecewise-defined function, i.e. $$\phi(M) = \varphi_{i}(M),\qquad M_{i-1}\leq M<M_{i}, \label{eq:2d1}$$ where $\varphi_{i}$ is a third-order polynomial defined such that the second derivative of $\phi$ with respect to $M$ is continuous on the interval $[M_{0},M_{N-1}]$ and vanishes at the boundaries. The cubic spline in eq. may be regarded as a generalization of the stepwise estimator originally proposed in [@efs88], and the actual spline coefficients are determined employing the standard techniques summarized in [@Press2002]. Since only polynomial expressions occur, derivatives and integrals of $\phi(M)$ take a particularly simple form, allowing quite an efficient evaluation of the previously defined likelihood functions.
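The natural-spline construction described above can be sketched with SciPy, where `bc_type='natural'` enforces exactly the stated conditions: continuous second derivatives with vanishing values at the boundary knots. The knot values below are invented for illustration, not a fit to any data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knot positions M_0..M_{N-1} spanning the magnitude range of the sample;
# the knot values are illustrative placeholders only.
M_knots = np.linspace(-22.5, -17.0, 12)
log_phi_knots = -0.4 * (M_knots + 20.5)**2

# 'natural' boundary conditions: second derivative continuous everywhere
# and zero at both boundaries, as required of the estimator
log_phi = CubicSpline(M_knots, log_phi_knots, bc_type='natural')
```

Derivatives are available directly, e.g. `log_phi(M, 1)` for the first derivative, which is what makes the likelihood evaluation cheap.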
LFs which are obtained according to this procedure might exhibit spurious wiggles, especially at the corresponding bright and faint ends. As is already discussed in [@Branchini2012], however, these wiggles can be sufficiently suppressed by adding an appropriate penalty term to the total likelihood function or by enforcing (log-)linear behavior of $\phi(M)$ beyond suitable bright and faint magnitude thresholds. Alternatively, it is also possible to simply choose magnitude cuts and the total number of spline points in such a way that the number of galaxies in each magnitude interval is large enough to avoid this problem for all practical purposes. In the present analysis, we will follow the latter approach when maximizing the total log-likelihood. In addition to the spline-based estimator, which is most relevant when considering real observations, we shall also use a parametric estimator that assumes a widely used Schechter form of the LF [@Sandage1979; @schechter], i.e. $$\phi(M) \propto 10^{0.4(1+\alpha^{\star})(M^{\star}-M)}\exp{\left (-10^{0.4(M^{\star}-M)}\right )}, \label{eq:2d2}$$ where $M^{\star}$ and $\alpha^{\star}$ are the usual Schechter parameters. The normalization of $\phi(M)$ cancels in the likelihood function and does not concern us here. Although it does not provide a good fit to all datasets, the Schechter form and its corresponding estimator turn out to be very useful for the analysis of both mock catalogs and the real galaxy sample presented in section \[section4\]. Datasets {#section3} ======== NYU Value-Added Galaxy Catalog {#section3a} ------------------------------ We will use the SDSS galaxies from the latest publicly available NYU Value-Added Galaxy Catalog (NYU-VAGC) [@Blanton2005].[^3] This catalog is based on the SDSS Data Release $7$ (DR7) [@abaz], and contains galaxies with a median redshift of $z\approx 0.1$, observed in five different photometric bands with magnitudes corrected for Galactic extinction according to [@Schlegel1998].
Using Petrosian magnitudes, we decide to work with the $r$-band, mainly because it gives the largest spectroscopically complete galaxy sample [@Blanton2001; @Strauss2002], which is an important factor for the statistical method we have introduced in section \[section2\]. To minimize incompleteness and to exclude galaxies with questionable photometry and redshifts, we choose the subsample NYU-VAGC [safe]{} which contains only galaxies whose apparent $r$-band magnitudes satisfy $14.5 < m_{r} < 17.6$.[^4] The subsample accounts for fiber collisions following the correction scheme [nearest]{}, but this is expected to be of little relevance in our analysis which should be insensitive to galaxy clustering. Also, since we are interested in minimizing systematics due to uncertainties in $K$-corrections and luminosity evolution (see section \[section4\]), we shall adopt the $^{0.1}r$-bandpass when dealing with absolute magnitudes [@Blanton2003B], and further impose cuts on redshifts (expressed in the CMB frame) and observed absolute magnitudes $M_{r}$ such that only galaxies with $0.02 < z < 0.22$ and $-22.5 < M_{r} - 5\log_{10}h < -17.0$ are selected. The number of galaxies contained in our final working sample is approximately $5.4\times 10^{5}$ and may slightly vary, depending on the assumed background cosmology which enters the calculation of $M_{r}$ through the luminosity distance. For realistic flat cosmologies with a total matter density $\Omega_{m}\approx 0.3$, however, these variations are typically on the order of a few hundred galaxies and thus not very significant. Since we are concerned with relatively low redshifts $z\ltsim 0.2$, we assume a linear dependence of the luminosity evolution on redshift for simplicity, i.e. $$\label{eq:qz} Q(z)= Q_{0}(z-z_{0}),$$ where we set the pivotal redshift $z_{0}=0.1$. 
Furthermore, $K$-corrections for individual galaxies are taken from the NYU-VAGC and have been calculated with the software package [kcorrect]{} [v4[\_]{}1[\_]{}4]{} [@Blanton2007]. To calculate the limiting absolute magnitudes $M^{\pm}$ in the $^{0.1}r$-bandpass at a given redshift $z$, however, we resort to a mean $K$-correction of the form $$\overline{K}(z) = -2.5\log_{10}(1.1) + \sum_{i=1}^{3}\gamma_{i}(z - 0.1)^{i}, \label{eq:4b}$$ where $\gamma_{1}\approx 0.924$, $\gamma_{2}\approx 2.095$, and $\gamma_{3}\approx -0.184$ are determined by directly fitting the individual $K$-corrections listed in the NYU-VAGC. When calculating the total likelihood function introduced in section \[section2c\], all galaxies are weighted according to the angular (redshift) completeness. All remaining details relevant to the analysis of the NYU-VAGC galaxy redshift data will be separately discussed in section \[sectionnum\]. Mock galaxy catalogs {#section3b} -------------------- To test the performance of our approach, we resort to two different suites of galaxy mock catalogs. The first set of mocks is based on the LasDamas simulations [@McBride2009] while the second one is obtained from the real NYU-VAGC dataset that we analyse in this work. ### LasDamas mock catalogs These mock galaxy catalogs are obtained by populating the LasDamas simulations [@McBride2009] with artificial galaxies, using a halo occupation distribution model [e.g., @pesm; @Seljak2000; @Berlind2002] to match the observed clustering of SDSS galaxies in a wide luminosity range. The goal of these catalogs is to benchmark our method and validate its implementation, using a sample with overall characteristics (number density of objects, sky coverage, etc.) similar to that of the real catalog, ignoring all sources of systematic biases. 
Here we will consider a total of $60$ mocks from the public [gamma]{} release, modeled after a volume-limited subsample of SDSS DR7 cut at $M_{r}<-20$, which cover the full SDSS footprint (“North and South”) and a redshift range $0.02<z<0.106$ with a median of $z\approx 0.08$.[^5] The typical galaxy number in these mocks is around $1-1.5\times 10^{5}$, and we shall use them as a basic test of bulk flow measurements. To this end, an observed redshift is assigned to each galaxy according to $$cz = cz_{c} + V + c\epsilon_{z}, \label{eq:3b1}$$ where $z_{c}$ corresponds to the redshift entry in the mock catalog, the radial velocity $V=\hvr\cdot\vv_{B}$ is the line-of-sight component of the bulk flow $\vv_{B}$, and $\epsilon_{z}$ is a random measurement error drawn from a Gaussian distribution with $c\sigma_{z}=15$ km s$^{-1}$.[^6] Similarly, observed $r$-band magnitudes are assigned with the help of eq. , but without including the $K$-correction term. Assuming the linear luminosity evolution in with $Q_{0}=1.6$, the true galaxy magnitudes $M^{(t)}$ are randomly extracted from the Schechter distribution given by eq. with the parameters $M^{\star}=-20.44+5\log_{10}h$ and $\alpha^{\star}=-1.05$ [@Blanton2003]. Although it is irrelevant for the present purposes, this procedure ignores the masses of dark matter halos, meaning that very massive halos may host very faint galaxies and vice versa. We also add a Gaussian random error to $M_{r}$ with $\sigma_{M}=0.03$, and further trim the resulting mock catalogs by requiring $M_{r}<-20.25$ to prevent problems related to Malmquist bias. Finally, our choice of the bulk flow $\vv_{B}$ used in the benchmark runs will be described in section \[section4\]. As the LasDamas simulations assume a flat $\Lambda$CDM model with $\Omega_{m}=0.25$ and $h=0.7$, we adopt the same cosmology for the mocks. 
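The redshift assignment of eq. (\[eq:3b1\]) amounts to a few lines; the bulk-flow vector in the usage example is an arbitrary placeholder, not a value used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
c = 299792.458                 # speed of light in km/s
sigma_z = 15.0 / c             # c*sigma_z = 15 km/s, as in the text

def observed_redshift(z_cosmo, nhat, v_bulk):
    # eq. (3b1): cz = cz_c + V + c*eps_z, with V = nhat . v_B the
    # line-of-sight component of the bulk flow
    V = nhat @ v_bulk
    return z_cosmo + V / c + rng.normal(0.0, sigma_z)

# usage: one mock galaxy seen along x with an assumed 300 km/s bulk flow
z_obs = observed_redshift(0.08, np.array([1.0, 0.0, 0.0]),
                          np.array([300.0, 0.0, 0.0]))
```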
  Parameter set       $\Omega_{b}$   $\Omega_{m}$   $\Omega_{\Lambda}$   $h$     $n_{s}$   $\sigma_{8}$
  ------------------- -------------- -------------- -------------------- ------- --------- --------------
  [param\_mock]{}     0.0455         0.272          0.728                0.702   0.961     0.8
  —                   0.0442         0.2643         0.7357               0.714   0.969     0.814
  —                   0.049          0.3175         0.6825               0.671   0.962     0.834

\[table1\] ### NYU-VAGC mock catalogs Starting directly from the previously described NYU-VAGC dataset, we generate a second set of mock catalogs built from the angular positions and spectroscopic redshifts of the observed galaxies. The goal of these mocks is to investigate the impact of known observational biases, incompleteness, and cosmic variance while preserving the spatial distributions of the galaxies in the real SDSS DR7 catalog. Just as in the case of the LasDamas mock catalogs, we interpret the observed spectroscopic redshifts as the cosmological ones and obtain the corresponding measured redshifts from eq. , where $V$ is now determined from the full linear velocity field evaluated at redshift $z=0$. The velocity field is obtained from a random realization sampled on a cubic grid with $1024^{3}$ points and a comoving mesh size of $4h^{-1}$ Gpc, assuming the linear power spectrum $P_{v}(k)$ of a flat $\Lambda$CDM cosmology with total matter and baryonic density parameters $\Omega_{m}=0.272$ and $\Omega_{b}=0.0455$, respectively, scalar spectral index $n_{s}=0.961$, $h=0.702$, and $\sigma_{8}=0.8$ (corresponding to the parameter set [param[\_]{}mock]{} which is listed in table \[table1\]). To ensure a high level of (statistical) independence between the final mocks, we perform appropriate translations and rotations of the survey data reference frame relative to the sampling grid before each galaxy is assigned a velocity equal to that of the nearest grid point. Because of small-scale nonlinearities and the finite grid sampling, we further add uncertainties to the line-of-sight components of these velocities which are generated from a normal distribution with $\sigma_{V} = 250$ km s$^{-1}$.
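The nearest-grid-point velocity lookup with the extra $\sigma_V = 250$ km s$^{-1}$ scatter can be sketched as follows; the grid, box size, and positions are placeholders, not the $1024^3$, $4h^{-1}$ Gpc configuration actually used:

```python
import numpy as np

rng = np.random.default_rng(3)

def assign_velocity(pos, grid_v, box, sigma_V=250.0):
    # nearest-grid-point lookup of line-of-sight velocities (km/s), plus a
    # Gaussian scatter accounting for small-scale nonlinearities and the
    # finite grid sampling, as described in the text
    n = grid_v.shape[0]
    idx = np.floor(pos / box * n).astype(int) % n
    v = grid_v[idx[:, 0], idx[:, 1], idx[:, 2]]
    return v + rng.normal(0.0, sigma_V, v.shape[0])
```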
The luminosities are assigned exactly as for the LasDamas mocks using the appropriate cuts in apparent and absolute magnitudes specified in section \[section3a\]. In addition, we simulate two known systematic errors in the photometric calibration of SDSS data [@Pad2008] that have a potential impact on our analysis. The first one arises from various magnitude offsets between the individual SDSS stripes, and is modeled by considering another random error with $\sigma_{\rm stripe} = 0.01$. The second, more serious error results from unmodeled atmospheric variations during the time of observation, ultimately causing an overall zero-point photometric tilt of roughly $0.01$ in magnitudes over the survey region. To mimic this tilt, we include in each mock a magnitude offset in the form of a randomly oriented dipole normalized such that its associated root mean square (rms) over all galaxies is $\delta m_{\rm dipole}=0.01$ [@Pad2008]. With this procedure we obtain a total of $269$ galaxy mocks (both flux- and volume-limited), mimicking the characteristics of the real NYU-VAGC sample. These mocks will be used to explore the distribution of measured bulk flow vectors and to study constraints on the power spectrum $C_{l}$ or cosmological parameters for a realistic choice of the large-scale peculiar velocity field. Data analysis {#section4} ============= We now proceed to apply our method to the SDSS data. To achieve our goals outlined in section \[sec:int\], we will begin with a short description of some additional preliminaries and present the general line of action in section \[sectionnum\]. After this, we will estimate the $r$-band LF of SDSS galaxies at $z\sim 0.1$ in section \[sectionlfestimate\], which provides the basis for our investigations. The results of our velocity analysis of the NYU-VAGC galaxy sample are then presented and discussed in sections \[sec:bf\] and \[sectionvcosmo\]. 
General line of action {#sectionnum}
----------------------

A major obstacle in constraining the velocity field from the SDSS is the partial coverage (only about $20$%) of the sky. Since the $Y_{lm}$ no longer form an orthogonal basis on this limited mask, the maximum-likelihood approach yields a statistical mixing between the estimated velocity moments, effectively probing a combination of different multipoles. In the case of the bulk flow, for instance, this would correspond to a superposition of several terms up to even the hexadecapole of the peculiar velocity field [@Tegmark2004]. Of course, one may resort to an orthogonal basis set for a given $l_{\rm max}$ in pixel space. Because we are going through the full maximum-likelihood procedure, however, there is no gain in doing so, i.e. all the information is already contained in the measured $a_{lm}$ and their covariance matrix. Also, the results expressed in such orthogonal bases typically have a less obvious physical interpretation. Additional difficulties arise from an overly flexible LF model, i.e. oversampling issues related to the spline-based estimator, and from the linear evolution term $Q(z)$, which effectively mimics the formally ignored monopole contribution in eq. over the redshift range of interest. Both may contribute to the mode mixing and further complicate the interpretation of the corresponding results. Similarly, the presence of systematic errors in the SDSS photometry (see section \[section3b\]) can lead to spurious flows which contaminate the velocity measurements and bias possible estimates of velocity power and cosmological parameters. Despite these limitations, however, we show below that such measurements can still provide meaningful constraints if one interprets them with the help of suitable mock catalogs sharing the same angular mask (see section \[section3b\]).
For instance, estimates of different quantities can be directly compared to the corresponding distributions obtained from the mocks where systematic effects are under control. As for the data (and mock) analysis presented below, we shall thus employ the following basic strategy:

1.  Assume a set of parameters that describe the background cosmology and select the galaxy sample according to the absolute magnitudes and luminosity distances computed, respectively, from apparent magnitudes and redshifts (see section \[section3a\]).

2.  Assuming the linear luminosity evolution model specified in eq. , determine the $^{0.1}r$-band LF parameters including $Q_{0}$ for the case of a vanishing peculiar velocity field, i.e. $a_{lm} = 0$. The value of $Q_{0}$ is kept fixed in the following steps while the LF parameters are free to vary, except when using the fixed LF estimator explored in section \[sec:bf\].

3.  Compute the maximum-likelihood estimate $\hat{\vx}^{\rm ML}$ of the parameter vector $\vx$ introduced in section \[section2c\] for a suitable $l_{\rm max}$. These parameters specify both the velocity model and the LF. Approximating $P_{\rm tot}$ locally by a Gaussian distribution and taking the previously found $\phi$ with $a_{lm} = 0$ as an initial guess, this is achieved by iteratively solving for $\hat{\vx}^{\rm ML}$ until the (exact) likelihood peak is reached. The required derivatives of $\log{P_{\rm tot}}$ can be calculated analytically and are summarized in appendix \[app1\]. Convergence is reached after $3$–$5$ iterations for a relative accuracy of $10^{-6}$–$10^{-10}$. The CPU time depends on the value of $l_{\rm max}$, but is typically around a few tens of minutes for about half a million objects. The results are potentially prone to mask-induced degeneracies related to the spline point separation $\Delta M$ used when estimating the LF. We will describe below the various approaches used to investigate this issue.

4.  Estimate the random errors of $\hat{\vx}^{\rm ML}$ from the covariance matrix $\bm{\Sigma}$, which is computed by directly inverting the observed Fisher matrix $\mathbf{F}$. The Fisher matrix $\mathbf{F}_{\alpha\beta}=-\partial^{2}\log P_{\rm tot}/(\partial x_{\alpha}\partial x_{\beta})$ is evaluated at the maximum $\hat{\vx}^{\rm ML}$.[^7]

5.  Marginalize the resulting distribution $P_{\rm tot}(\vd\vert\vx )$ over all LF parameters that are unrelated to the velocity field and construct the posterior probability for the $a_{lm}$ and, subsequently, the probability $P(C_{l})$ according to the prescription given in section \[section2c\]. Then maximize the latter with respect to $C_{l}$ to estimate the angular power. Given the characteristics of the SDSS data, such estimates are expected to be quite uncertain, and thus we will limit ourselves to a proof of concept.

6.  Alternatively, consider a spatially flat $\Lambda$CDM model where the $C_{l}$ are not free and independent, but fully determined by the cosmological parameters $\zeta_{k}$, i.e. $C_{l}=C_{l}(\zeta_{k})$. Constraints are obtained by sampling the probability $P[C_{l}(\zeta_{k})]$ as a function of $\zeta_{k}$ on a discrete grid. Although other parameter choices are briefly discussed, we will focus on the quantity $\sigma_{8}$, which corresponds to the amplitude of the linear matter power spectrum on a scale of $8h^{-1}$ Mpc. To ensure that linear theory remains a valid description on the physical scales probed in the analysis, we further have to set $l_{\rm max}$ accordingly (see section \[sectionvcosmo\] for details).
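Steps 3 and 4 of this strategy amount to a Newton-type ascent on $\log P_{\rm tot}$ with analytic derivatives, followed by error estimation from the inverse of the observed Fisher information. A minimal one-dimensional illustration of that scheme (not the paper's multi-parameter implementation) is:

```python
def newton_maximize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Iteratively solve grad(logP) = 0 by Newton's method and return the
    ML estimate together with its 1-sigma error from the observed Fisher
    information F = -hess evaluated at the peak (1-D sketch only)."""
    x = x0
    for _ in range(max_iter):
        step = -grad(x) / hess(x)   # Newton update using analytic derivatives
        x += step
        if abs(step) <= tol * max(1.0, abs(x)):
            break
    fisher = -hess(x)               # observed Fisher information at the peak
    return x, fisher ** -0.5        # error = sqrt of inverse Fisher


# toy Gaussian log-likelihood with mean 2.0 and sigma 0.5
xhat, err = newton_maximize(lambda x: -(x - 2.0) / 0.25,
                            lambda x: -1.0 / 0.25, x0=0.0)
# xhat -> 2.0, err -> 0.5
```

For a quadratic log-likelihood the iteration converges in a single step; the handful of iterations quoted in step 3 reflects the mild non-Gaussianity of the real problem.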
Except in the case of examining the LasDamas mocks, which contain substantially fewer galaxies than the other samples (see section \[section3b\] above) and cover a smaller redshift range, we will consider the peculiar velocity field in two redshift bins with $0.02 < z_{1} < 0.07 < z_{2} < 0.22$, comprising about $N_{1}\sim 1.5\times 10^{5}$ and $N_{2}\sim 3.5\times 10^{5}$ galaxies, respectively. This specific choice is mainly driven by the accuracy of the bulk flow estimates presented in section \[sec:bf\]. For two redshift bins, the uncertainties are typically around $100$ km s$^{-1}$, which is also larger than the expected variation of the flow amplitude within the respective bins, yielding a good compromise between measurement accuracy and sensitivity to redshift evolution. Further, the actual bin widths are determined by requiring comparable signal-to-noise ratios, which we roughly estimate from the expected variance of the velocity field within the corresponding bin volumes and the Poisson noise due to the finite number of objects. As for the study of the observed data sample, we adopt the latest cosmological parameters based on the Wilkinson Microwave Anisotropy Probe (WMAP) combined with ground-based experiments [@Calabrese2013] and the recent measurements by the Planck satellite [@Planck2013] which are summarized in table \[table1\] and denoted by [param[\_]{}wmap]{} and [param[\_]{}planck]{}, respectively.

![The $^{0.1}r$-band LF as obtained from the NYU-VAGC sample: shown are the maximum-likelihood result adopting the spline-based estimator with $\Delta M=0.5$ (solid line), and two fits based on the Schechter form (dashed line) and its extension (dotted line; zoomed panel only) which is defined by eq.
.[]{data-label="fig2"}](fig1.eps){width="0.95\linewidth"}

The $^{0.1}r$-band luminosity function of NYU-VAGC galaxies {#sectionlfestimate}
--------------------------------------------------

As described in section \[sectionnum\], we begin with estimating $\phi(M)$ in the $^{0.1}r$-band from the NYU-VAGC data for a vanishing velocity field. Adopting the spline-based estimator with a separation of $\Delta M = 0.5$ between individual spline points, the resulting $\phi(M)$ is shown as a solid line in figure \[fig2\], where the normalization is chosen such that the integral of $\phi(M)$ over the considered absolute magnitude range becomes unity, i.e. $\eta(-22.5+5\log_{10}h,-17+5\log_{10}h)=1$, and error bars are computed from the “constrained” covariance matrix obtained by enforcing the LF normalization to guarantee a non-singular Fisher matrix. The shape of $\phi(M)$ and the recovered evolution parameter, $Q_{0} = 1.6\pm 0.11$, are in good agreement with previous studies based on earlier data releases [@Blanton2003; @Montero2009]. While the simple Schechter form with $M^{\star}-5\log_{10}h=-20.52\pm 0.04$ and $\alpha^{\star}=-1.10\pm 0.03$ (dashed line) describes the estimated $\phi$ reasonably well, it does not capture the visible feature at the faint end.[^8] Therefore, we consider an extension to eq. which, after using the relation $L/L^{\star}=10^{0.4(M^{\star}-M)}$, takes the form $$\phi \propto \left (\frac{L}{L^{\star}}\right )^{\beta^{\star}_{1}}\left\lbrack 1 + 10^{-2.5} \left (\frac{L}{L^{\star}}\right )^{\beta^{\star}_{2}}\right\rbrack\exp{\left (-\frac{L}{L^{\star}}\right )} \label{eq:4a}$$ and is equivalent to the sum of two Schechter functions with different choices of normalization and $\alpha^{\star}$. Fitting the above to the spline estimate yields the parameters $M^{\star} = -20.46\pm 0.03$, $\beta^{\star}_{1} = -1.01\pm 0.03$, and $\beta^{\star}_{2} = -1.64\pm 0.11$, giving a much better representation of the observed trend.
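The extended form of eq. (4a) is straightforward to evaluate; a minimal sketch using the quoted best-fit parameters follows (the overall normalization and the $dM$ versus $dL$ measure are left implicit, exactly as in the text):

```python
import math


def phi_ext(M, M_star=-20.46, beta1=-1.01, beta2=-1.64):
    """Extended Schechter form of eq. (4a): equivalent to the sum of two
    Schechter functions sharing M* but with different faint-end slopes.
    Normalization is arbitrary here."""
    x = 10.0 ** (0.4 * (M_star - M))   # L / L* from the magnitude relation
    return (x ** beta1) * (1.0 + 10.0 ** -2.5 * x ** beta2) * math.exp(-x)


def phi_schechter(M, M_star=-20.46, alpha=-1.01):
    """Single Schechter form with the same bright-end slope, for comparison."""
    x = 10.0 ** (0.4 * (M_star - M))
    return (x ** alpha) * math.exp(-x)
```

The bracketed term boosts the faint end: at $M=-17$ the extension exceeds the matching single Schechter form by roughly 60%, while at $M=-21$ the two agree to better than a percent, which is the behavior visible in the zoomed panel of figure 2.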
This is illustrated in the zoomed panel of figure \[fig2\], where the new result (dotted line) is compared to both the spline (solid line) and the previous Schechter fit (dashed line). To further assess our result, we also calculate the predicted redshift distribution $\dd N/\dd z$ of galaxies which is directly proportional to the radial selection function $S(z)$, i.e. the fraction of galaxies included in the sample at redshift $z$. The selection function is easily obtained as an integral of the LF over the magnitude range defined by the redshift-dependent limiting absolute magnitudes. ![Redshift distribution of SDSS galaxies from the NYU-VAGC sample: the histogram (solid line) represents the observed distribution normalized to unity over the considered redshift range for bins with $\Delta z = 2.5\times 10^{-3}$. The predicted distribution (dashed line) assumes the spline-based estimate of the LF. []{data-label="fig3"}](fig2.eps){width="0.95\linewidth"} Figure \[fig3\] shows that the predicted and observed redshift distributions match quite well, except for a slight disagreement on the order of a few percent near the high-$z$ cut. This small discrepancy is most likely caused by a combination of both the limited linear evolution model and the use of different $K$-corrections (individual and mean) when estimating $\phi(M)$ and the selection function (see section \[sectioncaveats\] for a discussion of how luminosity evolution and $K$-corrections impact our peculiar velocity results). Note that all of the above assumes the set [param[\_]{}wmap]{}. Repeating the analysis with the parameters from [param[\_]{}planck]{}, however, does not yield any significant changes. 
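The selection function just described can be sketched as an integral of the LF between redshift-dependent magnitude limits. The distance relation ($d_L \approx cz/H_0$ with $H_0=70$), the apparent-magnitude cuts (14.5 and 17.6), the neglect of $K$-corrections, and the Schechter parameters are all simplifying assumptions for illustration, not the paper's exact pipeline:

```python
import math

C_KMS, H0 = 299792.458, 70.0   # assumed background values (the paper uses h-units)


def schechter(M, M_star=-20.52, alpha=-1.10):
    # Schechter LF in magnitudes: phi(M) dM ∝ x^(alpha+1) exp(-x), x = L/L*
    x = 10.0 ** (0.4 * (M_star - M))
    return x ** (alpha + 1.0) * math.exp(-x)


def selection_function(z, m_bright=14.5, m_faint=17.6,
                       M_min=-22.5, M_max=-17.0, n=200):
    """Fraction of the LF visible at redshift z: integrate phi(M) between
    the absolute-magnitude limits implied by the apparent cuts, using a
    low-z linear distance and no K-correction (both assumed)."""
    mu = 25.0 + 5.0 * math.log10(C_KMS * z / H0)   # distance modulus
    lo = max(M_min, m_bright - mu)
    hi = min(M_max, m_faint - mu)
    if hi <= lo:
        return 0.0
    step = (hi - lo) / n
    part = sum(schechter(lo + (i + 0.5) * step) for i in range(n)) * step
    wide = (M_max - M_min) / n
    full = sum(schechter(M_min + (i + 0.5) * wide) for i in range(n)) * wide
    return part / full
```

As expected for a flux-limited sample, the visible fraction drops quickly with redshift, which is what drives the shape of $dN/dz$ in figure 3.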
![Histograms of bulk flow measurements obtained from the customized LasDamas mocks: shown are the recovered distributions for both a known (dashed line) and unknown (solid line) flow direction.[]{data-label="fig1"}](fig3.eps){width="0.95\linewidth"}

Constraining bulk flows {#sec:bf}
-----------------------

As a first application of our method, we address how it may be used to constrain the bulk flow, $\vv_{B}$, in the NYU-VAGC data. We begin with the LasDamas mocks to test the ability of the method to detect large, anomalous bulk flows in an SDSS-like catalog. Then we apply the method to the real NYU-VAGC sample and discuss whether our results are consistent with the standard $\Lambda$CDM cosmology.

### LasDamas benchmark {#benchmark}

We make use of the LasDamas mocks introduced in section \[section3b\] and assume a constant $\vv_{B}$ of $1000$ km s$^{-1}$ pointing toward the direction $(l, b) \approx (266^{\circ}, 33^{\circ})$ expressed in Galactic coordinates. Note that both the magnitude and the direction of $\vv_{B}$ are chosen in accordance with the recent controversial claim of a “dark flow” out to depths of around 300–600$h^{-1}$ Mpc [@Kash2008; @Keisler2009; @Kash2010; @Osborne2011]. Using the Schechter estimator for the LF and setting $l_{\rm max}=1$, we follow the procedure outlined in section \[sectionnum\] to recover the flow $\vv_{B}$ from the customized LasDamas mocks. The histogram in figure \[fig1\] shows the resulting component along the input direction for the cases that it is known (dashed line) and unknown (solid line), i.e. the direction is allowed to vary freely. Clearly, the magnitude of $\vv_{B}$ is successfully extracted in both cases, and the corresponding rms values of $111$ and $125$ km s$^{-1}$ are fully consistent with each other, as expected from Gaussian statistics.
Although not presented here, the distributions found along the other (perpendicular) directions for a freely varying $\vv_{B}$ are consistent with zero velocity and exhibit a similar scatter. Of course, the current setup neglects any contamination due to leakage from other multipoles or systematic errors in the data. If these effects remain subdominant in the sense that their combination leads to changes comparable to or less than the estimated random errors, our result suggests that the method is capable of constraining large coherent bulk flows using the available galaxy data from the NYU-VAGC. As we will show below, this condition seems reasonably satisfied, at least for the results in the low-redshift bin with $0.02<z<0.07$.

The estimated “bulk flow” components for the three LF models, where the first three velocity columns refer to the bin $0.02<z<0.07$ and the last three to $0.07<z<0.22$ (all values in km s$^{-1}$):

| $\phi(M)$ | $v_{x}$ | $v_{y}$ | $v_{z}$ | $v_{x}$ | $v_{y}$ | $v_{z}$ |
|:----------|:------------:|:------------:|:-----------:|:-----------:|:-----------:|:-----------:|
| Hybrid    | $-227\pm 128$ | $-326\pm 113$ | $-239\pm 73$ | $-367\pm 92$ | $-439\pm 85$ | $-25\pm 71$ |
| Fixed     | $-175\pm 126$ | $-278\pm 111$ | $-147\pm 58$ | $-340\pm 90$ | $-409\pm 81$ | $-45\pm 43$ |
| Schechter | $-151\pm 130$ | $-277\pm 116$ | $-102\pm 78$ | $-422\pm 93$ | $-492\pm 86$ | $-150\pm 74$ |

\[table2\]

### Constraints from the NYU-VAGC

In the next step, we seek constraints on the velocity field for the case $l_{\rm max}=1$, now using the real NYU-VAGC galaxy data. As we have argued above, the angular mask of our sample causes such “bulk flow” estimates to suffer from multipole mixing and their interpretation requires the use of mock catalogs. Another mask-related problem arises from additional degeneracies between the velocity multipoles and the LF, depending on the assumed spline point separation. For $\Delta M=0.5$, this already becomes an issue, and the straightforward remedy is to increase the separation to an adequate value.
To account for alternative solutions and to further judge our method’s robustness, however, we will consider the following three representative approaches in our analysis:

1.  Fix the LF to its estimate for a vanishing velocity field, i.e. use a predetermined shape of $\phi(M)$ for the analysis. The rationale behind this fixed estimator is to evaluate the impact of adding degrees of freedom in the LF model.

2.  Adopt a hybrid model by fitting a Schechter form to the spline-based LF estimate for a vanishing velocity field and expressing the LF as the sum of a Schechter function and the corresponding (fixed) residual.

3.  Work exclusively with the Schechter parameterization of the LF.

Featuring the highest flexibility among the above, we expect estimates based on the hybrid LF model to be the most reliable ones. The corresponding flow measurements will be expressed in a specific Cartesian coordinate system defined by its $x$-, $y$-, and $z$-axes pointing toward Galactic coordinates $(l,b)\approx (81^{\circ},-7^{\circ})$, $(172^{\circ},-1^{\circ})$, and $(90^{\circ},83^{\circ})$, respectively. In particular, the system’s $z$-axis is chosen such that it approximately penetrates the central patch of galaxies observed in the northern hemisphere, and thus it is expected to give the tightest constraints. Using these coordinates, for example, the anomalous bulk flow incorporated in the LasDamas mocks (see the first paragraph of section \[benchmark\]) can be written as $\vv_{B}^{\rm T}\approx (-894,-80,442)$ in units of km s$^{-1}$. To aid the following discussion of our results, let us further introduce $v_{K}$ as the component along the direction of this anomalous flow which points toward $(l, b)\approx (266^{\circ}, 33^{\circ})$.
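The change of coordinates just described is a simple projection; the sketch below reproduces the quoted components of the LasDamas flow from the (rounded) axis directions given in the text:

```python
import math


def galactic_to_cartesian(l_deg, b_deg, amplitude=1.0):
    """Vector of given amplitude toward Galactic coordinates (l, b)."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return [amplitude * math.cos(b) * math.cos(l),
            amplitude * math.cos(b) * math.sin(l),
            amplitude * math.sin(b)]


# Axes of the custom frame used in the text (approximate directions).
AXES = [galactic_to_cartesian(81.0, -7.0),
        galactic_to_cartesian(172.0, -1.0),
        galactic_to_cartesian(90.0, 83.0)]


def project(v, axes=AXES):
    """Components of v along the (nearly orthonormal) custom axes."""
    return [sum(v[i] * a[i] for i in range(3)) for a in axes]


# The anomalous LasDamas flow: 1000 km/s toward (l, b) = (266, 33).
v_B = project(galactic_to_cartesian(266.0, 33.0, 1000.0))
# v_B lands close to the quoted (-894, -80, 442) km/s; the small residuals
# come from the rounded axis directions given in the text.
```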
![Histograms of “bulk flow” measurements obtained from the simple NYU-VAGC mocks: shown are the recovered distributions (black lines) and corresponding Gaussian estimates (red lines) for the two redshift bins with $0.02<z<0.07$ (solid lines) and $0.07<z<0.22$ (dashed lines) along the direction of the anomalous flow assumed in the first paragraph of section \[benchmark\].[]{data-label="fig4"}](fig4.eps){width="0.95\linewidth"} Regarding the real NYU-VAGC galaxy sample, the inferred “bulk flows” for the cosmology defined by [param[\_]{}wmap]{} are summarized in table \[table2\]. A comparison between the estimated flow components shows that the results based on the various LF models are different, but consistent within their $1\sigma$-values, where the quoted errors are derived from the observed Fisher information. All measurements are in very good agreement with $v_{K}\approx 120\pm 115$ and $355\pm 80$ km s$^{-1}$ for the first and second redshift bin, respectively. To make sense of these numbers, we compare them to the distribution of $v_{K}$ found with the help of the simple NYU-VAGC mocks (see section \[section3b\]) which is presented in figure \[fig4\]. Note that the mock analysis leading to this distribution has been performed using a pure Schechter estimator of the LF. Employing the other LF models listed above, however, gives very similar results and will leave our conclusions unchanged.[^9] As can be seen from the figure, the observed distributions in both redshift bins are well described by Gaussian profiles with (nearly) zero mean and standard deviations of approximately $170$ and $200$ km s$^{-1}$, respectively. In contrast to the Fisher errors, the dispersion found from the mocks includes both the cosmic signal and contributions due to the magnitude dipole introduced in section \[section3b\]. Repeating the mock analysis after removing the latter leads to a decrease in the dispersions of around $9$% and $62$% for the first and second bin, respectively. 
Systematic errors induced by the magnitude dipole are expected to increase with the redshift since a bulk flow with amplitude $v_{B}$ is expected to induce a magnitude offset of around $\delta{m}=5\log_{10}(1-v_{B}/cz)$ [@Nusser2011]. On the contrary, cosmic variance is expected to decrease with the volume, and thus with the redshift. The increase of the dispersion with the redshift therefore indicates that the errors induced by the magnitude dipole overwhelm the contribution due to cosmic variance. We find our measurements of $v_{K}$ to be fully compatible with the distribution obtained from the mocks and consistent with zero at a $1\sigma$ (first redshift bin) and $2\sigma$ (second redshift bin) confidence level. Given that the estimated flow components are not necessarily uncorrelated, however, it is more appropriate to consider the joint distribution of the bulk flow components, which is adequately characterized by a multivariate Gaussian. From our set of mocks and the actual data, we find that correlations between the different components are relatively mild and the corresponding (linear) correlation coefficients typically take values around $0.1$–$0.3$. Choosing the hybrid model of the LF, for instance, the bulk flow measured in both bins is consistent with zero within $1.5\sigma$ if we adopt the usual confidence levels of multivariate normally distributed data with three degrees of freedom. Surprisingly, the bulk flow amplitudes associated with the estimated components in this case are $v_{B}\approx 490\pm 100$ and $580\pm 80$ expressed in units of km s$^{-1}$, where the errors are purely statistical and relatively small. Despite these rather large values, the recovered flows correspond to detections at only the $1.5\sigma$ level. The reason is that the distribution of amplitudes does not follow a Gaussian, but has a long tail and is closely related to the $\chi^{2}$-distribution.
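The translation between a $\chi^{2}$ value for three degrees of freedom and the "$n\sigma$" language used above can be made explicit; the closed-form CDF below is the standard result for three degrees of freedom, and the bisection is a generic stand-in for the confidence-level lookup:

```python
import math


def chi2_cdf_3dof(x):
    """Closed-form CDF of the chi-squared distribution with 3 dof:
    F(x) = erf(sqrt(x/2)) - sqrt(2/pi) * sqrt(x) * exp(-x/2)."""
    return (math.erf(math.sqrt(x / 2.0))
            - math.sqrt(2.0 / math.pi) * math.sqrt(x) * math.exp(-x / 2.0))


def sigma_equivalent(chi2):
    """'n-sigma' of a 1-D Gaussian enclosing the same probability as the
    given chi-squared value with 3 dof, found by bisection on erf."""
    p = chi2_cdf_3dof(chi2)
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid / math.sqrt(2.0)) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, a measured flow whose quadratic form $\vv^{\rm T}\bm{\Sigma}^{-1}\vv$ against zero flow is about $5.5$ corresponds to roughly $1.5\sigma$, consistent with the level of (non-)detection quoted in the text.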
The estimated flows are pointing toward $(l,b)\approx (310^{\circ},-25^{\circ})\pm (30^{\circ},10^{\circ})$ and $(310^{\circ},5^{\circ})\pm (10^{\circ},15^{\circ})$ in Galactic coordinates for the first and second redshift bin, respectively. Note that changing the cosmological parameters to [param[\_]{}planck]{} or using the other LF models yields basically the same results.

![image](fig5.eps){width="0.88\linewidth"}

As for the comparison of our flow measurements with the mock catalog results, we point out that the simple NYU-VAGC mocks are not only built and analyzed with a slightly different cosmological model, but also ignore any redshift dependence of the peculiar velocity field, which is assumed at $z=0$ (see section \[section3b\]). For typical choices of cosmological parameters and the redshift range of interest, this amounts to small differences $\lesssim 3$% and can, therefore, be ignored in our analysis. Another concern is that fixing the linear evolution as described in section \[sectionnum\] causes a bias in the flow components since the monopole-like term $Q(z)$ might leak in through the mask. To ensure that this is not the case, we plot the inferred components for both redshift bins against the estimate of the parameter $Q_{0}$ in figure \[fig5\]. A brief visual inspection of the scatter already indicates that there is no evidence for a correlation between these quantities. This is confirmed by calculating the linear correlation coefficients which turn out smaller than $0.1$ in all the cases. Together with the above findings, we thus conclude that the SDSS galaxy data exhibit no hint toward anomalously large flows. Accounting for the known magnitude tilt in the photometric calibration, our velocity measurements further appear fully consistent with the expectations of a $\Lambda$CDM cosmology.
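The relation between bulk flows and coherent magnitude offsets quoted in this section, $\delta m = 5\log_{10}(1-v_{B}/cz)$, can be evaluated and inverted directly (signs depend on direction conventions, so only the amplitudes are meaningful here):

```python
import math

C_KMS = 299792.458  # speed of light [km/s]


def magnitude_offset(v_bulk, z):
    """Magnitude offset induced by a bulk flow of amplitude v_bulk at
    redshift z, delta_m = 5 log10(1 - v_B / cz), as quoted in the text."""
    return 5.0 * math.log10(1.0 - v_bulk / (C_KMS * z))


def spurious_flow(delta_m, z):
    """Inverse relation: the flow amplitude mimicked by a coherent
    magnitude offset delta_m at redshift z."""
    return C_KMS * z * (1.0 - 10.0 ** (delta_m / 5.0))
```

For example, a coherent $0.01$ mag offset at $z=0.1$ mimics a flow of roughly $140$ km s$^{-1}$, growing toward lower redshift as $1/z$ in magnitude terms but linearly in $cz$, which is why the dipole systematic dominates the error budget of the high-redshift bin.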
Higher-order multipoles: constraining angular power and $\sigma_{8}$ {#sectionvcosmo}
--------------------------------------------------------

As we have outlined in section \[section2c\], the luminosity-based approach considered in this work is analogous to the analysis of CMB anisotropies and, in principle, it is straightforward to constrain the angular velocity power spectrum using basically the same techniques. Given the characteristics of the SDSS data and our previous findings from section \[sec:bf\], however, we already expect such constraints to be rather weak and potentially biased because of the systematic magnitude tilt described in section \[section3b\]. Nevertheless, we shall explore the potential of this approach and illustrate some examples involving simple velocity models.

![image](fig6.eps){width="0.87\linewidth"}

### Constraints with no cosmology priors

Let us assume a velocity model with $l_{\rm max}=2$ and assess the impact of a tilt in the zero-point photometry with the help of a mock galaxy catalog randomly chosen from the NYU-VAGC set. To facilitate a direct interpretation in terms of velocities, we additionally define the dimensional quantity $$\tilde{C}_{l}\equiv \sqrt{\frac{2l+1}{4\pi}C_{l}} \label{eq:4d}$$ which will be used in what follows below. Again, we assume the latest pre-Planck $\Lambda$CDM cosmology determined by [param[\_]{}wmap]{} and also work with the hybrid estimator of the LF (see section \[sec:bf\]). Figure \[fig6\] shows the joint $1\sigma$ and $2\sigma$ confidence regions of $\tilde{C}_{1}$ and $\tilde{C}_{2}$, estimated after maximizing the likelihood $P(C_{l})$ in eq. for the same mock, with (right panel) and without (left panel) mimicking the systematic magnitude dipole offset (see section \[section3b\]).
Here the posterior likelihood is constructed separately for each redshift bin after marginalizing $P\left (\vd\vert a_{lm}\right )$ over the $a_{lm}$ of the respective other one, and the resulting contour lines are derived using the quadratic estimator presented in ref. [@Bond1998]. The effect of a spurious magnitude dipole mostly affects the probability contour along the $\tilde{C}_{1}$-axis, i.e. the power in the dipole, and as expected, the amplitude of the effect increases with the redshift. To quantify the smearing introduced when one estimates velocities through luminosity variations, we further compare the contours with the values of $\tilde{C}_{1}$ and $\tilde{C}_{2}$ inferred directly from the galaxy peculiar velocities in the mocks (hexagons in the figure). The estimated constraints are consistent with these values within the (large) $1$–$2\sigma$ bounds.

![image](fig7.eps){width="0.45\linewidth"} ![image](fig8.eps){width="0.45\linewidth"}

Repeating the analysis for the real SDSS galaxies, we end up with the confidence regions depicted in the left panel of figure \[fig7\]. Although it is not very constraining in the present case, our analysis restricts the $\tilde{C}_{l}$ for $l_{\rm max}=2$ to several hundred km s$^{-1}$ and is consistent with zero power. This fully agrees with the predictions of the $\Lambda$CDM model and does not suggest any anomalous properties. We also note a striking resemblance in contour trends with the mock result in figure \[fig6\], from which it is tempting to deduce the existence of a formidable dipole contamination in the real data. Since, among other uncertainties, there is still leakage due to the survey geometry, however, such strong statements cannot be made. Including higher velocity multipoles with $l_{\rm max}\geq 3$, the constraints become even weaker as the level of degeneracy increases. To give a final example, we assume another model with fixed LF and set $l_{\rm max}=3$.
The corresponding confidence regions of the different $\tilde{C}_{l}$ are shown in the right panel of figure \[fig7\]. Note that the tighter bounds are only a consequence of reducing the available degrees of freedom.

![image](fig9.eps){width="0.9\linewidth"}

### Constraints with cosmology priors

Next, we shall consider constraints on cosmological parameters by imposing a $\Lambda$CDM prior on the angular power spectrum as detailed above in section \[section2c\]. In doing so, it is convenient to divide the parameters that define the cosmological models into two categories: those that characterize the background cosmology and are used to estimate absolute magnitudes and compute distances for sample selection, and those that characterize the density fluctuations. Here we focus on $\sigma_{8}$, which belongs to the latter category, and assume that all other parameters are fixed to their values in [param[\_]{}wmap]{}. At the end of this section, we shall briefly discuss other choices and comment on the possibility of constraining background parameters such as $\Omega_{m}$.

![image](fig10.eps){width="0.91\linewidth"}

As in the previous section, we assess the impact of a spurious tilt in the estimated magnitudes. To guarantee the validity of linear theory, we set $l_{\rm max}=5$, which corresponds to considering physical scales above $\sim 100h^{-1}$ Mpc. Concerning the calculation of the theoretical $C_{l}$, which is required for the prior probability and summarized in appendix \[app2\], we adopt the parameterized form of the matter power spectrum $P(k)$ given in ref. [@EH98]. Moreover, the galaxy redshift distribution $p(z)$ used to compute the bin-averaged velocity field given by eq. is taken to be of the form $$p(z) \propto z^{a}\exp{\left\lbrack -\left (z/\overline{z}\right )^{b}\right\rbrack }, \label{eq:4e}$$ where the parameters $a=1.31$, $b=1.94$, and $\overline{z}=0.1$ are found by directly fitting eq. to the observed distribution.
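The fitted distribution of eq. (4e) is easy to evaluate, and its peak has the closed form $\overline{z}\,(a/b)^{1/b}$, which follows from setting the derivative of $\log p(z)$ to zero:

```python
import math


def p_z(z, a=1.31, b=1.94, zbar=0.1):
    """Fitted redshift distribution of eq. (4e),
    p(z) ∝ z^a exp[-(z/zbar)^b], with the quoted best-fit parameters
    (unnormalized)."""
    return z ** a * math.exp(-((z / zbar) ** b))


# analytic location of the maximum: zbar * (a/b)**(1/b) ≈ 0.082
z_peak = 0.1 * (1.31 / 1.94) ** (1.0 / 1.94)
```

The peak near $z\approx 0.08$ is consistent with the observed distribution shown in figure 3.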
As is customary, $\sigma_{8}$ is inferred from discretely sampling the posterior probability and interpolating the corresponding result. In our calculations, we will choose a step size of $0.05$. Applying this procedure to the full suite of mock catalogs with and without the inclusion of a systematic magnitude dipole, we obtain the histograms shown in figure \[fig8\]. While the results for the combination of both redshift bins stem from maximizing $P(C_{l})$ in eq. using the full probability $P\left (\vd\vert a_{lm}\right )$, those for the low-$z$ bin are computed by constructing the posterior probability after marginalizing $P\left (\vd\vert a_{lm}\right )$ over all $a_{lm}$ in the high-$z$ bin. Note that the former approach is actually inconsistent as it incorrectly assumes that the $a_{lm}$ are uncorrelated between different bins. Accounting for these missing correlations, however, leads only to differences of a few percent in the estimated values of $\sigma_{8}$ and its error, suggesting that they may be safely neglected. As described in section \[section3b\], the NYU-VAGC mocks are based on the parameter set [param[\_]{}mock]{} and assume an input value of $\sigma_{8}=0.8$. In addition, we assume a Schechter LF for the mock analysis (just like in section \[sec:bf\]), but using the other LF models does not significantly change the results. The spikes in the histograms at $\sigma_8=0$ correspond to the cases in which we do not detect any power. Once we exclude those, the histograms are reasonably well represented by Gaussian distributions with standard deviations of $\sim 0.3$–$0.4$ (solid and dashed, red lines). As is readily seen from comparing the left and right panels, the presence of a systematic magnitude dipole (solid lines) causes a bias in the estimate of $\sigma_{8}$ which is rather severe for higher redshifts, i.e. $0.07<z<0.22$ (left panel). 
As expected, removing the dipole (dashed lines) also eliminates the bias, thus leading to the same mean value of $\sigma_{8}$ in both cases. Expressing the bias in numbers, the dipole contribution to galaxy magnitudes amounts to a systematic shift of $\Delta\sigma_{8}\approx 0.13$ and $\Delta\sigma_{8}\approx 0.52$ for the low-$z$ bin and the combination of both redshift bins, respectively. Considering now the real SDSS galaxy sample, we perform exactly the same analysis to obtain measurements of $\sigma_{8}$ for the different LF estimators introduced in section \[sec:bf\]. Our results are presented in figure \[fig9\] which shows the derived $\Delta\chi^{2}$ as a function of $\sigma_{8}$, obtained using the information from both redshift bins (left panel) and that of the first redshift bin only (right panel). Similar to what we have discovered in our investigation of “bulk flows”, the values based on different LF models agree very well within their corresponding $1\sigma$ errors, and we get $\sigma_{8}\sim 1.0$–$1.1$ in the low-$z$ bin and $\sigma_{8}\sim 1.5$–$1.6$ over the full $z$-range. Remarkably, the measured values and uncertainties closely match the inferred biased distributions of the previous mock analysis depicted in figure \[fig8\]. If the magnitude tilt in the SDSS data is the only relevant source of systematic errors and sufficiently characterized by a dipole-like modulation, we can use the bias estimated from the mocks to correct our measurements. Taking the result of the hybrid LF estimator (solid line in figure \[fig9\]), for example, we obtain corrected values of $\sigma_{8}=1.09\pm 0.38$ (both bins) and $\sigma_{8}=0.95\pm 0.53$ (low-$z$ bin) which are fully compatible with each other and also consistent with the expectation of the $\Lambda$CDM model. The quoted errors are the statistical errors inferred from the NYU-VAGC data. 
Note that changing the cosmology to [param[\_]{}planck]{} or choosing a different LF estimator has only a minor impact on the results.

![Scatter plots of $\sigma_{8}$ versus the evolution parameter $Q_{0}$ for the simple NYU-VAGC mocks: using the information from both redshift bins, the plot illustrates the resulting distributions obtained with (black squares) and without (red circles) a systematic dipole in the galaxy magnitudes.[]{data-label="fig10"}](fig11.eps){width="0.95\linewidth"}

Again, one may ask whether fixing the linear evolution as described in section \[sectionnum\] causes an additional bias in our measurements of $\sigma_{8}$. To answer this question, we plot the derived values of $\sigma_{8}$ for both redshift bins against the estimate of the evolution parameter $Q_{0}$ in figure \[fig10\], using the simple NYU-VAGC mocks with (black squares) and without (red circles) the magnitude dipole. As before (see section \[sec:bf\]), the linear correlation coefficients turn out to be $\lesssim 0.1$, and there is no indication of a correlation between these quantities. Of course, one is not restricted to $\sigma_{8}$, but also free to look at other cosmological parameters or various combinations thereof. Considering the two parameters $h$ and $\Omega_{b}$ which, together, determine the baryonic matter density, for instance, we find that the respective constraints turn out weaker than before, and are also highly degenerate with $\sigma_{8}$. Similar statements should hold for the parameter $\Omega_{m}$. However, we do not explicitly check this because changing $\Omega_{m}$ alters the background cosmology and implies a modification of the survey volume and the selection of the galaxy sample. Taking this into account would substantially increase the workload of the analysis.
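The grid sampling and interpolation used for the $\sigma_{8}$ estimates above can be sketched generically; the parabolic refinement through the three points around the grid maximum is a standard stand-in for whatever interpolation the pipeline actually uses:

```python
def grid_peak(logp, grid):
    """Sample a posterior on a uniform discrete grid (step 0.05 in the
    text) and refine the peak by fitting a parabola through the three
    points around the grid maximum. Assumes the maximum is interior to
    the grid and the curvature there is nonzero."""
    vals = [logp(x) for x in grid]
    i = max(range(1, len(grid) - 1), key=lambda k: vals[k])
    y0, y1, y2 = vals[i - 1], vals[i], vals[i + 1]
    h = grid[i] - grid[i - 1]
    # vertex of the interpolating parabola (uniform spacing assumed)
    return grid[i] + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```

For a locally quadratic log-posterior the refinement is exact, so a step of $0.05$ does not limit the quoted precision of the $\sigma_{8}$ values.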
Caveats {#sectioncaveats} ------- ### Coherent photometric errors and spurious signals Although our analysis of the SDSS galaxy data has tried to account for known systematics such as the zero-point photometric tilt (see the mock description in section \[section3b\]), there could exist additional errors in the photometric calibration. It is important to note that the impact of such errors increases with redshift and significantly affects the results obtained at $z\gtsim 0.1$ in our analysis. An example is the possibility of zero-point offsets in magnitudes between the whole northern and southern hemispheres due to observing the galaxies in disconnected regions. If this offset were around $0.01$–$0.02$ mag, it would contribute a spurious flow of $\sim 100$–$250$ km s$^{-1}$ to the actual bulk motion along the connecting axis, assuming a redshift of $z=0.1$ for all galaxies. Since the SDSS footprint gives much more weight to the northern sample, however, the effect is much less pronounced. As we have verified with the help of our galaxy mocks, it has just a mild impact on velocity measurements along the previously defined $z$-axis (see section \[sec:bf\]), thus leaving our conclusions unchanged. In fact, the missing evidence for any large-scale flow anomaly found in sections \[sec:bf\] and \[sectionvcosmo\] already indicates that there are no other relevant systematics which would otherwise require serious fine-tuning. As for the photometric tilt, which constitutes the main source of systematic errors in the present analysis, it is worth pointing out that, although difficult in practice, such a photometric bias could in principle be characterized and corrected for by using additional information from star or galaxy counts and clustering from the very same SDSS DR7 dataset. Another possibility is to recalibrate the SDSS photometry with the help of independent datasets. 
For example, this could be accomplished using observations of the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) which covers $3/4$ of the sky visible from Hawaii [@Kaiser2002; @Kaiser2010] or, indeed, any future wide-area galaxy survey with good photometric stability. In this respect, a major step forward in the control of the photometric calibration is expected from galaxy surveys carried out from space like, for instance, the planned Euclid mission [@euclid2011]. Imperfect corrections for Galactic extinction might also cause a systematic large-scale offset in the estimated magnitudes. In the NYU-VAGC, this correction is based on the maps given in ref. [@Schlegel1998]. The recent comparison between these and the reddening maps obtained from the Pan-STARRS1 stellar photometry [@Schlafly2014] does not hint at any large-scale coherent residual with an amplitude comparable to that of the known magnitude tilt. The new dust maps that will be obtained from the Planck data should settle the issue. ### Environmental dependence of the luminosity function There are several studies which strongly hint at a dependence of the LF on the large-scale environment of galaxies [e.g., @Balogh2001; @Mo2004; @Croton2005; @Park2007; @Merluzzi2010; @Faltenbacher2010]. However, investigations trying to shed light on the connection between luminosity and galaxy density are typically limited to scales of a few Mpc, thus not probing the large scales relevant to our work. As a matter of fact, an analysis addressing these environmental dependencies of galaxy luminosities on scales $\sim 100h^{-1}$ Mpc and above is still unavailable. Nonetheless, we may get an idea by extrapolating the observed dependence into the large-scale domain. 
Since the rms of density fluctuations averaged over scales $\gtsim 100h^{-1}$ Mpc is less than about $0.07$, the resulting effect should be particularly small if the overdensity of the large-scale environment turns out to be the most important factor. As we have already suggested in ref. [@Nusser2013], it should further be feasible to take such effects into account by performing measurements over independent volumes which are classified in terms of their average density. ### Luminosity evolution and $K$-corrections The analysis conducted in this paper assumes that the evolution of galaxy luminosity can be effectively described with a linear model of the form $Q(z)= Q_{0}(z-z_{0})$ where $z_{0}=0.1$ and $Q_{0}$ is determined according to section \[sectionnum\]. To assess the robustness of our results with respect to this specific model, we have carried out a few simple tests. In a first run, we have examined the influence of varying $Q_{0}$ within its estimated $3\sigma$ confidence interval. This typically leads to changes in the measured “bulk flow” components of several tens of km s$^{-1}$ and causes deviations of less than $10$% in the measured values of $\sigma_{8}$. Moreover, we have explored nonlinear evolution models with second- and third-order terms. In this case, the corresponding changes in our velocity measurements turn out to be even smaller and can be safely neglected. Similarly, we have studied the impact of different $K$-corrections using the mean correction given by eq. and two-dimensional polynomial fits as a function of redshift and $g-r$ color [@Chilin2010].[^10] Again, the resulting differences are marginal and do not affect any of our conclusions. Conclusions {#section5} =========== We have exploited the well-known fact that peculiar motion induces spatially-coherent variations in the observed luminosities of galaxies to probe the cosmic velocity field at $z\sim 0.1$ from the luminosity distribution of SDSS galaxies in the NYU-VAGC. 
The method adopted here extends the maximum-likelihood approach proposed in ref. [@Nusser2011] to constrain the peculiar velocity field beyond the bulk flow component. Considering the bin-averaged peculiar velocity field in two different redshift bins, $0.02<z<0.07$ and $0.07<z<0.22$, we have demonstrated how the method yields bounds on the corresponding angular velocity power spectrum and cosmological parameters. The main results of our analysis can be summarized as follows: - To assess the robustness of our analysis against potential systematic errors, we have used a suite of mock galaxy catalogs obtained both from numerical simulations and from the NYU-VAGC dataset itself to match the real data as closely as possible. We have identified three main obstacles which potentially hamper the analysis of the SDSS data: the survey geometry, which causes mixing between different moments, the possible degeneracies between the velocity multipoles and the estimator of the LF, and the presence of a coherent photometric tilt of about $0.01$ magnitude across the survey region. While the impact of mode-to-mode mixing can be readily quantified by modeling the sky coverage and the influence of the LF has been evaluated by applying different estimators to the mocks, the latter effect is less trivial to account for. Here we have modeled the photometric zero-point offset by adding a randomly oriented dipole normalized such that the corresponding rms over all galaxies is $\delta m_{\rm dipole}=0.01$ for each individual mock galaxy catalog. Our results suggest that the systematic tilt in the observed galaxy magnitudes is sufficiently described by this dipole contribution. - Accounting for the known systematics in the SDSS photometry, the estimated “bulk flows” are consistent with the predictions of the standard $\Lambda$CDM model at $\ltsim 1$–$2\sigma$ confidence in both redshift bins. 
The combined analysis of the corresponding three Cartesian components further corroborates this result. Being based on an independent estimator, this confirms the findings of the CMB studies in refs. [@Osborne2011; @planck_bf] which provide an upper bulk-flow limit of a few hundred km s$^{-1}$ at the $95$% confidence level on similar scales. - Our analysis yields direct constraints on the angular velocity power spectrum $C_{l}$ (considering terms up to the octupole) defined in section \[section2c\], independent of a prior on the cosmological model. All of the estimated $C_{l}$ are consistent with the corresponding theoretical power spectra of the $\Lambda$CDM cosmology. - Assuming a prior on the $C_{l}$ as dictated by the $\Lambda$CDM model with fixed density parameters and Hubble constant, we have used the method to infer the parameter $\sigma_{8}$ which determines the amplitude of the velocity field. After correcting for known systematics, we obtain $\sigma_{8}\approx 1.1\pm 0.4$ for the combination of both redshift bins and $\sigma_{8}\approx 1.0\pm 0.5$ for the low-redshift bin only. As anticipated, the constraints we find on velocity moments and $\sigma_{8}$ are not very tight. However, they demonstrate the validity of our approach in view of future analyses with different datasets. - As for the encountered data-inherent issues, current and next-generation spectroscopic surveys are designed to alleviate most of them, thanks to their large sky coverage (e.g., eBOSS[^11], DESI [@bigboss2011; @Levi2013]) and improved photometric calibration in ground-based surveys (e.g., Pan-STARRS [@Kaiser2002; @Kaiser2010]) and especially in space-borne experiments like Euclid [@euclid2011]. Note that since uncertainties in the measured redshifts play only a minor role in our error budget, the method is also suitable for application to wide photometric redshift surveys such as the 2MASS Photometric Redshift catalog (2MPZ) [@Bilicki2014] and, again, Euclid. 
- These excellent observational prospects give us confidence that the method considered here will become a full-fledged cosmological probe, independent and alternative to the more traditional ones based on galaxy clustering, gravitational lensing and redshift space distortions. We expect that combining all these approaches will result in superior control over potential systematic errors that might affect the estimate of cosmological quantities, chief among them the growth rate $f(\Omega)$ of density fluctuations [@Nusser2012]. - The main interest here is methodological: the novel approach to estimating the angular velocity power spectrum or cosmological parameters, developed in analogy to the statistical treatment of CMB anisotropies, is to be regarded as a proof of concept guiding future analyses. As a final remark, we point out that it is conceivable to reverse the ansatz taken in this work, allowing one to constrain luminosity evolution and to improve the photometric calibration of a galaxy sample in a given cosmological framework. This research was supported by the I-CORE Program of the Planning and Budgeting Committee, THE ISRAEL SCIENCE FOUNDATION (grants No. 1829/12 and No. 203/09), the German-Israeli Foundation for Research and Development, the Asher Space Research Institute, and in part by the Lady Davis Foundation. M.F. is supported by a fellowship from the Minerva Foundation. E.B. is supported by INFN-PD51 INDARK, MIUR PRIN 2011 “The dark Universe and the cosmic evolution of baryons: from current surveys to Euclid”, and the Agenzia Spaziale Italiana under the agreement ASI/INAF/I/023/12/0. 
Quadratic approximation of $\log P_{\rm tot}$ {#app1} =========================== As previously discussed in section \[section2c\], the total log-likelihood of observing galaxies with absolute magnitudes $M_{i}$ given their redshifts and radial peculiar velocities in a real (or simulated) dataset is $$\log P_{\rm tot} = \sum_{i}\log P_{i}\left (M_{i}\vert z_{i},V_{i}\right ) = \sum_{i}\log\frac{\phi(M_{i})}{\eta\left (M^{+}_{i},M^{-}_{i}\right )}, \label{eq:app1a}$$ where the function $\eta\left (M^{+}_{i},M^{-}_{i}\right )$ and the limiting magnitudes $M_{i}^{\pm}$ are defined through eqs. and . Assuming a set of LF and evolution parameters $q_{j}$ as well as a redshift-binned velocity model $\tilde{V}(\hvr ) = \sum a_{lm}Y_{lm}^{*}(\hvr )$, we now seek an expansion of the form $$\log P_{i} \approx \log P_{i}\vert_{\vx =\vx_{0}} + \sum\limits_{\alpha}\left.\frac{\partial\log P_{i}}{\partial x_{\alpha}}\right \vert_{\vx =\vx_{0}}\left (x_{\alpha}-x_{0,\alpha}\right ) + \frac{1}{2}\sum\limits_{\alpha ,\beta}\left.\frac{\partial^{2}\log P_{i}}{\partial x_{\alpha}\partial x_{\beta}} \right\vert_{\vx =\vx_{0}}\left (x_{\alpha}-x_{0,\alpha}\right )\left (x_{\beta}-x_{0,\beta}\right ). \label{eq:app1d}$$ Here we have introduced $\vx^{\rm T} = (q_{j},a_{lm})$ and $\vx_{0}$ corresponds to an arbitrary fixed parameter vector. Given a model of $\phi (M)$, it is then straightforward (but tedious) to derive the specific form of the above expansion by using eqs. (\[eq:2c2\]–\[eq:app1c\]) and the relations $$\begin{split} \frac{\partial{\rm DM}(z_{c})}{\partial a_{lm}} &= -\frac{1+z}{c}\left.\frac{\partial{\rm DM}(z)}{\partial z}\right\vert_{z=z_{c}}Y_{lm}^{*}\\ \frac{\partial^{2}{\rm DM}(z_{c})}{\partial a_{lm}\partial a_{l^{\prime}m^{\prime}}} &= \left (\frac{1+z}{c}\right )^{2}\left.\frac{\partial^{2}{\rm DM}(z)}{\partial z^{2}}\right\vert_{z=z_{c}}Y_{lm}^{*} Y_{l^{\prime}m^{\prime}}^{*}, \end{split}$$ where the $Y_{lm}$ are evaluated at the position of the galaxy in question. 
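To make the quadratic-approximation machinery concrete, here is a self-contained toy sketch (entirely ours, not the paper's pipeline; the function names and the test likelihood are invented for illustration). The gradient and Hessian of a log-likelihood are built by central finite differences, and the maximum of the resulting quadratic model is reached in a single Newton step; for an exactly quadratic log-likelihood the step is exact:

```python
import numpy as np

def quadratic_maximum(logp, x0, eps=1e-4):
    """Second-order expansion of logp around x0 (finite differences),
    then one Newton step to the maximum of the quadratic model."""
    n = len(x0)
    g = np.zeros(n)
    H = np.zeros((n, n))
    for a in range(n):
        ea = np.zeros(n); ea[a] = eps
        g[a] = (logp(x0 + ea) - logp(x0 - ea)) / (2 * eps)
        for b in range(n):
            eb = np.zeros(n); eb[b] = eps
            H[a, b] = (logp(x0 + ea + eb) - logp(x0 + ea - eb)
                       - logp(x0 - ea + eb) + logp(x0 - ea - eb)) / (4 * eps**2)
    return x0 - np.linalg.solve(H, g)  # maximum of the quadratic model

# toy Gaussian log-likelihood with known maximum at mu = (1, -2)
C = np.array([[2.0, 0.3], [0.3, 1.0]])
mu = np.array([1.0, -2.0])
logp = lambda x: -0.5 * (x - mu) @ np.linalg.solve(C, x - mu)

xmax = quadratic_maximum(logp, x0=np.zeros(2))
```

In the actual analysis the role of `logp` is played by the per-galaxy terms $\log P_i$ and the parameter vector collects the $q_j$ and $a_{lm}$; the sketch only illustrates the expand-and-maximize strategy.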
In the following, we shall give some details regarding the calculation for the two models of $\phi(M)$ adopted in our analysis. Schechter form {#app1a} -------------- Assuming a Schechter LF, we start from [@schechter] $$\phi (M) = 0.4\log{(10)}\phi^{\star}10^{0.4(1+\alpha^{\star})(M^{\star}-M)}\exp{\left (-10^{0.4(M^{\star}-M)}\right )},$$ with the usual Schechter parameters given as $M^{\star}$, $\alpha^{\star}$, and $\phi^{\star}$. The only non-trivial derivatives appearing in eq. are those of $\eta\left (M^{+},M^{-}\right )$ with respect to $\alpha^{\star}$. Since eq. can be written in terms of the incomplete gamma function, i.e. $$\eta\left (M^{+},M^{-}\right ) = \phi^{\star}\left\lbrack\Gamma\left (1+\alpha^{\star},\tilde{L}^{-}\right ) - \Gamma\left (1+ \alpha^{\star},\tilde{L}^{+}\right )\right\rbrack,$$ the corresponding expressions can be obtained with the help of $$\begin{split} \frac{\partial\Gamma\left (1+\alpha^{\star},\tilde{L}\right )}{\partial\alpha^{\star}} = &\left.\Gamma\left (1+\alpha^{\star},\tilde{L}\right )\log\tilde{L}\right. 
+ \mathcal{A}\left (\alpha^{\star},\tilde{L}\right ),\\ \frac{\partial^{2}\Gamma\left (1+\alpha^{\star},\tilde{L}\right )}{\partial\alpha^{\star 2}} = &\left\lbrack\Gamma\left (1+\alpha^{\star},\tilde{L}\right )\log\tilde{L} + 2\mathcal{A}\left (\alpha^{\star},\tilde{L}\right ) \right\rbrack\log\tilde{L} + \mathcal{B}\left (\alpha^{\star},\tilde{L}\right ), \end{split}$$ where $\tilde{L}=L/L^{\star}=10^{0.4(M^{\star}-M)}$, $$\begin{split} \mathcal{A} &= \frac{\psi^{(0)}(1+\alpha^{\star})-\log\tilde{L}}{\Gamma (- \alpha^{\star})\sin\left\lbrack\pi(1+\alpha^{\star})\right\rbrack}\pi + \tilde{L}^{1+\alpha^{\star}}\sum\limits_{n=0}^{\infty}\frac{(-1)^{n} \tilde{L}^{n}}{n!(1+\alpha^{\star}+n)^{2}},\\ \mathcal{B} &= \frac{\psi^{(1)}(1+\alpha^{\star})+\left (\psi^{(0)}(1+\alpha^{\star})-\log\tilde{L}\right )^{2}}{\Gamma (- \alpha^{\star})\sin\left\lbrack\pi(1+\alpha^{\star})\right\rbrack}\pi - 2\tilde{L}^{1+\alpha^{\star}}\sum\limits_{n=0}^{\infty}\frac{(-1)^{n} \tilde{L}^{n}}{n!(1+\alpha^{\star}+n)^{3}}, \end{split}$$ and $\psi^{(s)}$ denotes the polygamma function of degree $s$. Note that the above relations remain strictly valid only as long as $\tilde{L}<1$ and $1+\alpha^{\star}$ is neither zero nor a negative integer. 
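As an independent check of these incomplete-gamma expressions (a minimal numerical sketch; the parameter values below are our own and purely illustrative), $\eta\left (M^{+},M^{-}\right )$ can also be computed by direct quadrature of the Schechter form. For $\alpha^{\star}=0$ one has $\Gamma(1,\tilde{L})={\rm e}^{-\tilde{L}}$, so the window integral has a closed form to compare against:

```python
import math

def schechter(M, Mstar, alpha, phistar=1.0):
    # phi(M) = 0.4 ln(10) phi* 10^{0.4(1+alpha)(M*-M)} exp(-10^{0.4(M*-M)})
    L = 10.0 ** (0.4 * (Mstar - M))
    return 0.4 * math.log(10.0) * phistar * L ** (1.0 + alpha) * math.exp(-L)

def eta(Mplus, Mminus, Mstar, alpha, n=4000):
    """eta(M+, M-) as the integral of phi(M) over the observable
    magnitude window [M+, M-], via the composite Simpson rule."""
    h = (Mminus - Mplus) / n
    s = schechter(Mplus, Mstar, alpha) + schechter(Mminus, Mstar, alpha)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * schechter(Mplus + i * h, Mstar, alpha)
    return s * h / 3.0

# alpha* = 0: eta = phi* [Gamma(1, L~-) - Gamma(1, L~+)] = e^{-L~-} - e^{-L~+}
Mstar, Mp, Mm = -20.4, -22.0, -17.0      # illustrative values (ours)
Lp = 10.0 ** (0.4 * (Mstar - Mp))        # bright limit, large L~
Lm = 10.0 ** (0.4 * (Mstar - Mm))        # faint limit, small L~
analytic = math.exp(-Lm) - math.exp(-Lp)
numeric = eta(Mp, Mm, Mstar, alpha=0.0)
```

The same quadrature also provides a brute-force cross-check of the $\alpha^{\star}$-derivatives via finite differences when $1+\alpha^{\star}$ is not a non-positive integer.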
Cubic spline {#app1b} ------------ Choosing an equidistant set of sampling points $(M_{i},\phi_{i})$ with $0\leq i<N$, the LF may be modeled as a natural cubic spline with piecewise definition [@Press2002] $$\phi(M) = (1-t_{i})\phi_{i-1} + t_{i}\phi_{i} + t_{i}(1-t_{i})\left\lbrack a_{i}(1-t_{i}) + b_{i}t_{i}\right\rbrack,\qquad M_{i-1}\leq M<M_{i}, \label{eq:cs}$$ where $i$ now runs from $1$ to $N-1$, $t_{i}=(M-M_{i-1})/\Delta M$ and $$a_{i} = k_{i-1}\Delta M - (\phi_{i} - \phi_{i-1}),\qquad b_{i} = -k_{i}\Delta M + (\phi_{i} - \phi_{i-1}).$$ The $k_{i}$ can be written in terms of an appropriate tridiagonal matrix $A_{ij}$ whose inverse solves the spline problem, and introducing the Kronecker delta $\delta^{K}_{i,j}$, one has $$k_{i} = \frac{3}{\Delta M^{2}}\sum\limits_{j}A_{ij}^{-1}\left\lbrack\left (1-\delta^{K}_{0,j}\right )(\phi_{j}-\phi_{j-1}) + \left (1-\delta^{K}_{N-1,j}\right )(\phi_{j+1}-\phi_{j})\right\rbrack.$$ Since we are dealing with polynomials, evaluating the expansion in eq. only requires basic calculus. To obtain the derivatives of $a_{i}$ and $b_{i}$, for instance, one easily verifies that $$\frac{\partial k_{i}}{\partial\phi_{j}} = \frac{3}{\Delta M^{2}}\left\lbrack\left (1-\delta^{K}_{0,j}\right )A^{-1}_{i,j-1}+\left ( \delta^{K}_{N-1,j}-\delta^{K}_{0,j}\right )A^{-1}_{i,j}-\left (1-\delta^{K}_{N-1,j}\right )A^{-1}_{i,j+1}\right\rbrack.$$ Similarly, piecewise integration of eq. yields $$\begin{split} \frac{1}{\Delta M}\int\phi(M^{\prime})\dd M^{\prime} = &\left (t_{i}-\frac{1}{2}t_{i}^{2}\right )\phi_{i-1} + \frac{1}{2}t_{i}^{2}\phi_{i} + a_{i}\left (\frac{1}{2}-\frac{2}{3}t_{i}+\frac{1}{4}t_{i}^{2}\right )t_{i}^{2}\\ &\left. +\right. b_{i}\left (\frac{1}{3}-\frac{1}{4}t_{i}\right )t_{i}^{3},\qquad M_{i-1}\leq M^{\prime}<M_{i}, \end{split}$$ which may be used to compute certain derivatives of $\eta\left (M^{+},M^{-}\right )$. 
Note that because of the spline equation’s linearity, all second derivatives with respect to the spline parameters $\phi_{j}$ trivially vanish. Constructing the probability for Gaussian fields {#app3} ================================================ Starting from Bayes’ theorem, we write the posterior probability as $$P(a_{lm}\vert\vd )\propto P\left (\vd\vert a_{lm}\right )P\left (a_{lm}\vert C_{l}\right ). \label{eq:app3a}$$ Assuming that the probabilities $P(\vd\vert a_{lm})$ and $P(a_{lm}\vert C_{l})$ correspond to Gaussian distributions $\mathcal{N}(\va,\bm{\Sigma}_{M})$ and $\mathcal{N}(0,\mathbf{D})$, respectively, evaluating the integral in eq. yields $$\log{P(C_{l})} = \log{P\left (\vd\vert C_{l}\right )} = -\frac{1}{2}\left\lbrack\va^{\rm T}\mathbf{Q}^{-1}\va + \operatorname{Tr}{\left (\log\mathbf{Q}\right )}\right\rbrack + {\rm const}, \label{eq:app3b}$$ where $\bm{\Sigma}_{M}$ denotes the marginal covariance matrix constructed from the original one (i.e. the full matrix $\bm{\Sigma}$ which has been introduced in section \[section2c\]), $\mathbf{D}$ is a diagonal matrix which depends on the $C_{l}$, and $\mathbf{Q}=\bm{\Sigma}_{M}+\mathbf{D}$ is assumed to be non-singular. Analogous to the analysis of CMB anisotropies, the power spectrum is now estimated by maximizing the probability in eq. with respect to the $C_{l}$. To obtain an error for this estimate, we simply expand the logarithm of $P(C_{l})$ around the maximum (assumed to be well defined) to quadratic order and compute the inverse of the corresponding curvature matrix. 
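This marginal log-likelihood is easy to validate numerically. In the toy numpy sketch below (dimensions and variable names are ours), the expression $-\frac{1}{2}\left\lbrack\va^{\rm T}\mathbf{Q}^{-1}\va + \operatorname{Tr}{\left (\log\mathbf{Q}\right )}\right\rbrack$ is compared with the exact zero-mean Gaussian log-density with covariance $\mathbf{Q}$; the two agree up to an additive constant, which cancels when taking differences between two choices of $\mathbf{D}$:

```python
import numpy as np

def log_p(a, Sigma_M, Dvec):
    """-(1/2) [ a^T Q^{-1} a + Tr(log Q) ], with Q = Sigma_M + diag(Dvec).
    Tr(log Q) equals log det Q, evaluated stably with slogdet."""
    Q = Sigma_M + np.diag(Dvec)
    sign, logdet = np.linalg.slogdet(Q)
    assert sign > 0, "Q must be positive definite"
    return -0.5 * (a @ np.linalg.solve(Q, a) + logdet)

def gauss_logpdf(a, Q):
    # exact zero-mean Gaussian log-density, including the 2*pi constant
    sign, logdet = np.linalg.slogdet(Q)
    return -0.5 * (a @ np.linalg.solve(Q, a) + logdet + len(a) * np.log(2 * np.pi))

rng = np.random.default_rng(1)
n = 6                                    # toy number of a_lm coefficients
B = rng.normal(size=(n, n))
Sigma_M = B @ B.T + n * np.eye(n)        # well-conditioned marginal covariance
a = rng.normal(size=n)                   # toy coefficient vector
D1 = np.full(n, 0.5)                     # two trial diagonal prior matrices D
D2 = np.full(n, 2.0)
diff_ours = log_p(a, Sigma_M, D1) - log_p(a, Sigma_M, D2)
diff_exact = (gauss_logpdf(a, Sigma_M + np.diag(D1))
              - gauss_logpdf(a, Sigma_M + np.diag(D2)))
```

Maximizing `log_p` over the entries of $\mathbf{D}$ (i.e. over the $C_l$) then mirrors the power-spectrum estimation described above.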
One may easily verify that the required derivatives with respect to $C_{l}$ are explicitly given by $$\frac{\partial\log{P(C_{l})}}{\partial C_{l}} = \frac{1}{2}\left\lbrack\va^{\rm T}\mathbf{Q}^{-1} \frac{\partial\mathbf{D}}{\partial C_{l}}\mathbf{Q}^{-1}\va - \operatorname{Tr}{\left (\mathbf{Q}^{-1} \frac{\partial\mathbf{D}}{\partial C_{l}}\right )}\right\rbrack \label{eq:app3c}$$ and $$\frac{\partial^{2}\log{P}}{\partial C_{l}\partial C_{l^{\prime}}} = -\va^{\rm T}\mathbf{Q}^{-1} \frac{\partial\mathbf{D}}{\partial C_{l}}\mathbf{Q}^{-1}\frac{\partial\mathbf{D}}{\partial C_{l^{\prime}}}\mathbf{Q}^{-1} \va + \frac{1}{2}\operatorname{Tr}{\left (\mathbf{Q}^{-1}\frac{\partial\mathbf{D}}{\partial C_{l}}\mathbf{Q}^{-1} \frac{\partial\mathbf{D}}{\partial C_{l^{\prime}}}\right )}. \label{eq:app3d}$$ Using that $\langle\va\va^{\rm T}\rangle = \mathbf{Q}$, it further follows that $$\left\langle\frac{\partial^{2}\log{P}}{\partial C_{l}\partial C_{l^{\prime}}}\right\rangle = -\frac{1}{2}\operatorname{Tr}{\left (\mathbf{Q}^{-1}\frac{\partial\mathbf{D}}{\partial C_{l}}\mathbf{Q}^{-1} \frac{\partial\mathbf{D}}{\partial C_{l^{\prime}}}\right )}.$$ Theoretical $C_{l}$ for the $\Lambda$CDM cosmology {#app2} ================================= In what follows, we will present predictions for the angular power spectrum of the peculiar velocity field introduced in section \[section2\] for the standard $\Lambda$CDM cosmology. Since the observed galaxies are divided into redshift bins, we consider the averaged velocity field $$\tilde{V}(\hvr ) = \int_{r_{1}}^{r_{2}}V\left\lbrack\hvr r, t(r)\right\rbrack p(r)\dd r. \label{eq:app2a}$$ Here $r_{1}$ and $r_{2}$ are the comoving distances at the limiting redshifts of a given bin, and $p(r)dr$ is the probability of observing a galaxy within the interval $[r,r+dr]$. 
We thus begin with $$a_{lm} = \int\dd\Omega\tilde{V}(\hvr )Y_{lm}(\hvr ) = -\int\dd\Omega Y_{lm}(\hvr )\int_{r_{1}}^{r_{2}}W(r)\frac{\partial\Phi_{0}(\hvr r)}{\partial r}\dd r , \label{eq:app2b}$$ where we have used $\tilde{V}(\hvr )=\sum a_{lm}Y_{lm}^{*}(\hvr )$, the definition $W(r) = 2a\dot{D}p(r)/3\Omega_{0}H_{0}^{2}$, and the linear relation $$V(\vr ,t) = -\frac{2}{3}\frac{a\dot{D}(t)}{\Omega_{0}H_{0}^{2}}\frac{\partial\Phi_{0}}{\partial r}, \label{eq:app2c}$$ with $D(t)$ and $a(t)$ evaluated at $t=t(r)$. Expanding $\Phi_0(\vr )$ in Fourier space, i.e. $$\Phi_0(\vr) = \frac{1}{(2\pi)^3}\int\dd^{3}k \Phi_{\vk}{\rm e}^{{\rm i}\vk\cdot{\vr}}, \label{eq:app2d}$$ and exploiting the plane wave expansion $${\rm e}^{{\rm i}\vk \cdot{\vr}} = 4\pi\sum\limits_{l,m}{\rm i}^lj_l(kr) Y^{*}_{lm}(\hat{\vr})Y_{lm}(\hat{\vk}), \label{eq:app2e}$$ where $j_{l}$ are the usual spherical Bessel functions of the first kind, we get $$a_{lm} = -\frac{{\rm i}^l}{2\pi^{2}}\int_{r_{1}}^{r_{2}}\dd rW(r)\int\dd^{3}k\Phi_{\vk}\left (\frac{lj_{l}}{r}-kj_{l+1}\right )Y_{lm}(\hat{\vk}). \label{eq:app2f}$$ Therefore, using that $\langle\Phi_{\vk}\Phi_{\vk^{\prime}}\rangle = (2\pi)^{3}\delta_{D}(\vk-\vk^{\prime})P_{\Phi}(k)$, we finally arrive at $$C_{l} = \left\langle\lvert a _{lm}\rvert^{2}\right\rangle = \frac{2}{\pi}\int\dd kk^{2}P_{\Phi}(k)\left\lvert\int_{r_{1}}^{r_{2}}\dd rW(r)\left (\frac{lj_{l}}{r}-kj_{l+1}\right )\right\rvert^{2}. \label{eq:app2g}$$ [^1]: The first two terms on the right-hand side of eq. describe the Doppler effect and the gravitational redshift, respectively. The last one reflects the energy change of a photon passing through a time-dependent potential well and is equivalent to the late-time integrated Sachs-Wolfe effect. 
[^2]: Since the underlying principle is a reduction in the spread of observed magnitudes, even unrealistic models of the luminosity distribution should yield unbiased measurements of the velocity information [@Nusser2012], albeit with larger statistical errors. [^3]: <http://sdss.physics.nyu.edu/vagc/> [^4]: The SDSS photometry is known to exhibit small offsets from the AB magnitude system [@Oke1983]. For the SDSS $r$-band, this amounts to a shift of around $0.01$ [@Eisenstein2006] which we will take into account when calculating absolute magnitudes below. [^5]: <http://lss.phy.vanderbilt.edu/lasdamas/mocks.html> [^6]: Although the redshifts listed in the LasDamas gamma mocks include distortions from peculiar velocities, we interpret them as cosmological redshifts $z_{c}$ for simplicity. This has no adverse effect on testing our method’s performance. [^7]: An immediate worry is that $\mathbf{F}$ could turn out to be singular or ill-conditioned. Except for issues related to the normalization of the spline-based LF, which can be easily overcome with the help of standard techniques [@James2006], we have not encountered such problems in our study which considers only the first few multipoles, i.e. $l_{\rm max}\ltsim 5$. [^8]: We emphasize that the variation in the faint-end slope is robust with respect to different choices of $\Delta M$ and not an artifact caused by the spline estimator. [^9]: Despite having fewer degrees of freedom, this is also true in the case of the fixed LF estimator, where only the spread in the $v_{z}$-component is significantly reduced. [^10]: As advised by the SDSS collaboration, we use model magnitudes to estimate the $g-r$ colors of galaxies. [^11]: <http://www.sdss3.org/future/>
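To close the loop with appendix \[app2\], the double integral defining the theoretical $C_l$ can be evaluated by brute-force quadrature. The sketch below is entirely our own toy setup (top-hat radial window, power-law potential spectrum, ad hoc integration grids) and only illustrates the structure of the computation:

```python
import numpy as np
from scipy.special import spherical_jn

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def C_l(l, kgrid, P_Phi, rgrid, W):
    """C_l = (2/pi) Int dk k^2 P_Phi(k) | Int dr W(r) (l j_l(kr)/r - k j_{l+1}(kr)) |^2."""
    inner = np.array([
        trapz(W * (l * spherical_jn(l, k * rgrid) / rgrid
                   - k * spherical_jn(l + 1, k * rgrid)), rgrid)
        for k in kgrid
    ])
    return 2.0 / np.pi * trapz(kgrid ** 2 * P_Phi * inner ** 2, kgrid)

rgrid = np.linspace(100.0, 300.0, 200)                 # radial shell (ours)
W = np.full_like(rgrid, 1.0 / (rgrid[-1] - rgrid[0]))  # top-hat window, Int W dr = 1
kgrid = np.logspace(-3.0, -0.5, 300)
P_Phi = kgrid ** -3.0                                  # red power-law potential spectrum
spectrum = [C_l(l, kgrid, P_Phi, rgrid, W) for l in range(1, 4)]
```

For a realistic prediction, `W` would carry the growth and selection factors of the text and `P_Phi` would be the $\Lambda$CDM potential power spectrum; the quadrature structure is unchanged.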
--- abstract: 'Proton-proton correlations were observed for the two-proton decays of the ground states of $^{19}$Mg and $^{16}$Ne. The trajectories of the respective decay products, $^{17}$Ne+p+p and $^{14}$O+p+p, were measured by using a tracking technique with microstrip detectors. These data were used to reconstruct the angular correlations of fragments projected on planes transverse to the precursor momenta. The measured three-particle correlations reflect a genuine three-body decay mechanism and allowed us to obtain spectroscopic information on the precursors with valence protons in the $sd$ shell.' author: - 'I. Mukha' - 'L. Grigorenko' - 'K. Sümmerer' - 'L. Acosta' - 'M. A. G. Alvarez' - 'E. Casarejos' - 'A. Chatillon' - 'D. Cortina-Gil' - 'J. Espino' - 'A. Fomichev' - 'J. E. García-Ramos' - 'H. Geissel' - 'J. Gómez-Camacho' - 'J. Hofmann' - 'O. Kiselev' - 'A. Korsheninnikov' - 'N. Kurz' - 'Yu. Litvinov' - 'I. Martel' - 'C. Nociforo' - 'W. Ott' - 'M. Pfützner' - 'C. Rodríguez-Tajes' - 'E. Roeckl' - 'M. Stanoiu' - 'H. Weick' - 'P. J. Woods' title: 'Proton-proton correlations observed in two-proton decay of $^{19}$Mg and $^{16}$Ne' --- The recently discovered two-proton (2p) radioactivity is a specific type of genuine three-particle nuclear decay. It occurs when a resonance in any pair of fragments is located at higher energies than in the initial three-body (p+p+“core”) nucleus, and thus simultaneous emission of two protons is the only decay channel. Three-body systems have more degrees of freedom in comparison with two-body systems, hence additional observables appear. In the case of 2p emission, the energy spectra of single protons become continuous, and proton-proton (p–p) correlations are available, which makes them a promising probe of nuclear structure and/or the decay mechanism. 
For example, the first p–p correlations observed in the 2p radioactivity of $^{94m}$Ag have revealed strong proton yields either in the same or opposite directions which called for a theory of 2p emission from deformed nuclei [@mukh06]. Two-proton emission can also occur from short-lived nuclear resonances or excited states (see, e.g., [@boch89; @o12; @o14]). Though in this case the mechanism of 2p emission may depend on the reaction populating the parent state, such nuclei can be easily studied in-flight. E.g., the cases of $^{6}$Be [@boch89; @dan87] and $^{16}$Ne [@korsh_ne16] were studied by analyzing their p–p correlations in the framework of a three-body partial-wave analysis developed for three-particle decays of light nuclei. In particular, the study of $^6$Be revealed the existence of three-particle p+p+$\alpha$ correlations [@boch89] which matched the three-body components found theoretically in the $p$-shell structure of $^6$Be [@thomp00]. Very recently, p–p correlations were also observed in 2p radioactivity of $^{45}$Fe [@giov07; @mier07] where both the lifetime and p–p correlations were found to reflect the structure of $pf$-shell 2p precursors [@mier07]. Such a way of obtaining spectroscopic information is a novel feature compared to studies of two-particle decays. In the present paper, we study for the first time the p–p correlations in $sd$ shell nuclei via the examples of the 2p decays of $^{19}$Mg and $^{16}$Ne. These nuclei with very different half-lives ($T_{1/2} \! \approx \! 4 \! \cdot \! 10^{-9}$ s [@mukh_mg19] and $T_{1/2} \! \approx \! 4 \! \cdot \! 10^{-19}$ s [@kek_ne16], respectively) and presumably different spectroscopic properties may serve as reference cases illuminating the nuclear structure of other possible 2p emitters with $sd$-wave configuration. 
The decay properties of the $^{16}$Ne and $^{19}$Mg ground states and the related resonances in $^{15}$F and $^{18}$Na are shown in Fig. \[fig0\] which compiles the data from Refs. [@kek_ne16; @ne16; @ne16_sc; @f15_pet; @f15_gol; @18Na; @mukh_mg19] and this work. The ground states of both isotopes decay only by simultaneous 2p emission while their excited states are open for sequential 1p decays via intermediate unbound states in $^{15}$F and $^{18}$Na. ![\[fig0\] States observed in $^{16}$Ne, $^{19}$Mg and the corresponding intermediate systems $^{15}$F, $^{18}$Na. Decay energies (in keV) are given relative to the respective p and 2p thresholds. Most values have been taken from the literature (Refs. [@mukh_mg19; @kek_ne16; @ne16; @ne16_sc; @f15_pet; @f15_gol; @18Na]), those in bold print are from the present work.](fig1_grig1a.eps){width="48.00000%"} The quantum-mechanical theory of 2p radioactivity, which uses a three-body model [@grig00; @grig01; @grig03], predicts the p–p correlations to be strongly influenced by nuclear structure together with Coulomb and three-body centrifugal barriers. In particular, the newly discovered 2p radioactivity of $^{19}$Mg [@mukh_mg19] was predicted to be characterized by p–p correlations which reflect the $sd$ configurations of the valence protons [@grig03a]. A similar effect is found in $^{16}$Ne, where the $s$-wave configuration was predicted to dominate, contrary to its mirror $^{16}$C, thus breaking isospin symmetry [@grig_ne16]. A complementary approach in describing 2p decays is the mechanism of sequential emission of protons via an intermediate state (see e.g., [@lane]). It also includes the traditional quasi-classical di-proton model with emission of a $^2$He cluster, assuming extremely strong p–p correlations [@gold60; @baz72]. 
The predictions of these models differ dramatically with respect to the p–p correlations, suggesting them as a sensitive probe of the 2p-decay mechanism (see the detailed predictions below). Our experiment to investigate 2p emission from $^{19}$Mg and $^{16}$Ne was performed by using a 591*A* MeV beam of $^{24}$Mg accelerated by the SIS facility at GSI, Darmstadt. The radioactive beams of $^{20}$Mg and $^{17}$Ne were produced at the Projectile-Fragment Separator (FRS) [@frs] with average intensities of 400 and 800 ions s$^{-1}$ and energies of 450*A* and 410*A* MeV, respectively. The secondary one-neutron-removal reactions ($^{20}$Mg, $^{19}$Mg) and ($^{17}$Ne, $^{16}$Ne) occurred at the mid-plane of the FRS in a secondary 2 g/cm$^{2}$ $^{9}$Be target. Special magnetic-optics settings were applied, the first FRS half being tuned in an achromatic mode using a wedge-shaped degrader, while its second half was set for identification of the heavy ions (HI) with high acceptance in angle and momentum. A sketch of the experimental set-up at the FRS midplane has been shown in Fig. 1 of Ref. [@mukh_mg19] and explained in detail there. A microstrip detector array [@si_proposal] consisting of 4 large-area (7x4 cm$^{2}$) double-sided silicon detectors (with a pitch of 0.1 mm) was positioned downstream of the secondary target. This array was used to measure energy loss and position of coincident hits of two protons and a heavy fragment, thus allowing us to reconstruct all decay-product trajectories and derive the coordinates of the reaction vertex and the angular p–p and proton-HI correlations. 
The conditions to select the true HI+p+p events were: (i) a minimal distance between the proton and heavy-ion trajectories of less than 150 $\mu$m, and (ii) a difference between the two longitudinal coordinates of the vertices defined by the two p–HI pairs (taken from the same HI+p+p event) within the range defined by the experimental uncertainty of 0.3–1 mm, depending on the detection angle. The achieved angular resolution in tracking the fragments was $\sim$1 mrad. More details concerning the detector performance and tracking procedure are given in [@mukh_mg19; @mihai; @mukh_ssd; @readout]. Another position-sensitive silicon detector and a multi-wire chamber were used upstream of the target for tracking the $^{20}$Mg ($^{17}$Ne) projectiles. The heavy 2p-decay residuals ($^{17}$Ne and $^{14}$O) were unambiguously identified by their time of flight, magnetic rigidity, and energy loss measured with the position-sensitive scintillator detectors in the second half of the FRS. ![\[fig2a\] Left panel: Cartoon of momentum correlations $k_{p1-\!HI}-k_{p2-\!HI}$ expected for a direct three-body decay (the grey area labeled $k_{3R}$) and sequential 2p decay (grey boxes with black peaks labeled $k_{2R}$). Upper-right panel: A sketch of the kinematical enhancement of an angular p–HI correlation at the maximum possible angle for a given momentum between the decay products. Lower-right panel: the corresponding angular p–HI distribution.](kinem_scheme2a.eps){width="49.00000%"} The 2p decays of the ground states of $^{19}$Mg and $^{16}$Ne were identified by using angular correlations between the single protons and their respective cores $^{17}$Ne and $^{14}$O, which allowed measuring the 2p-decay energies. This is analogous to identifying a reaction via the scatter plot illustrated in Fig. \[fig2a\] (left panel). In general, 2p decay may proceed via either sequential or direct decay mechanisms. 
The first case can be described as two consecutive 1p decays, with the p–HI spectra reflecting the respective p–HI resonances [@lane]. Their (p$_1$–HI)–(p$_2$–HI) scatter plot should display the sequential and direct decay events in the respective kinematical areas marked in Fig. \[fig2a\] by the relative momenta $k_{2R}$. In the second case, the simultaneously emitted protons are likely to share the 2p-decay energy evenly, with both p–HI spectra being identical and peaked at $E/2$ [@baz72; @grig00]. In this case, the area marked $k_{3R}$ in Fig. \[fig2a\] should be populated along the arc where the root-mean-square proton momentum is constant. Sequential proton emission from a single 2p-parent state via narrow p–HI resonances should yield double peaks, while 2p de-excitation of continuum parent states with p–HI final-state interactions should reveal “slices” as shown in Fig. \[fig2a\]. Similar structures can be found in the angular $\theta_{p1-\!HI}-\theta_{p2-\!HI}$ correlations for the following reason. Because of the strong kinematic focusing at intermediate energies, a 1p decay leads to a characteristic angular p–HI correlation: a proton ejected isotropically in the 1p-precursor frame is emitted within a narrow cone around the HI, with the maximum intensity around the largest possible angle (Fig. \[fig2a\], right panel). The p–HI angles reflect the transverse proton momentum relative to the HI one, and are therefore correlated with the precursor’s decay energy. Thus sequential 2p decays from excited states in parent nuclei should mostly show up as peaks with tails along the respective slices in the angular $\theta_{p1-HI}-\theta_{p2-HI}$ correlations, in analogy to those sketched in Fig. \[fig2a\] (left panel). 
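The kinematical enhancement at the maximum possible p–HI angle can be illustrated with a toy non-relativistic simulation: a proton emitted isotropically with momentum $q$ in the frame of a precursor moving with longitudinal momentum $p_L \gg q$ piles up in the lab near $\theta_{\max}=\arcsin(q/p_L)$. All numbers below are arbitrary illustration values, not experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# proton emitted isotropically with momentum q in the precursor frame;
# precursor moves along z with momentum p_L per proton (arbitrary units)
q, p_L = 30.0, 1000.0
n = 200_000
cos_t = rng.uniform(-1.0, 1.0, n)
phi = rng.uniform(0.0, 2 * np.pi, n)
sin_t = np.sqrt(1.0 - cos_t**2)
px = q * sin_t * np.cos(phi)
py = q * sin_t * np.sin(phi)
pz = p_L + q * cos_t
theta_lab = np.arctan2(np.hypot(px, py), pz)     # lab p-HI angle (rad)

theta_max = np.arcsin(q / p_L)   # largest possible lab angle for q < p_L
hist, edges = np.histogram(theta_lab, bins=50, range=(0.0, 1.05 * theta_max))
# intensity peaks just below theta_max, the enhancement sketched in the figure
```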
In direct 2p decays, the single-proton energy spectrum always exhibits a relatively narrow peak centered close to half of the 2p-decay energy; such an energy distribution is a stable feature of this decay mechanism [@gold60; @grig03a]. Correspondingly, a bump should appear in the angular correlations in the same way as it should appear along the arc marked $k_{3R}$ in the scatter plot in Fig. \[fig2a\]. This correspondence between angular and momentum correlations has been used to derive the 2p-decay energy of $^{19}$Mg (see Figs. 2 and 4 in [@mukh_mg19] and the respective discussions). For the 2p decay of $^{16}$Ne, the angular correlations of each coincident proton with respect to the $^{14}$O momentum, $\theta_{p1-\!\rm{O}}-\theta_{p2-\!\rm{O}}$, derived from the measured $^{14}$O+p+p coincidence events, are shown in Fig. \[fig2\](a). The events with the smallest angles fall into a distinct cluster around $\theta_{p-\!\rm{O}}$=35 mrad, while most of the other events are located in the slices centered around 70 and 95 mrad. These two groups can be attributed to the direct 2p decay from the $^{16}$Ne ground state and to the sequential emission of protons from excited states in $^{16}$Ne via the $^{15}$F ground state, respectively. We shall refer to these events as the “ground state” and “excited state” events, respectively. The latter group also includes events resulting from the fragmentation reaction $^{17}$Ne$\rightarrow^{14}$O+p+p+n. To disentangle the “ground state” from the “excited state” events, we made a slice projection from the measured (p$_1$–$^{14}$O)–(p$_2$–$^{14}$O) correlations in Fig. \[fig2\](a) by selecting the angle of one of the protons within the range 0–45 mrad, where the 2p decay of the $^{16}$Ne ground state is expected to show up. Figure \[fig2\](b) displays the angular correlations $\theta_{p1-\!\rm{O}}$ corresponding to the “ground state” gate on the other pair $\theta_{p2-\!\rm{O}}$. 
The peak around 35 mrad (the suggested “ground state”) dominates the spectrum, whereas few correlations are seen between a proton from the “ground state” and another proton at larger angles. This means that the two protons from the “ground state” are correlated, i.e., this peak can be explained by an emission of protons from the ground state of $^{16}$Ne. For a more quantitative analysis, the data are compared to a Monte Carlo simulation of the response of our setup to the direct 2p decay $^{16}$Ne$\rightarrow^{14}$O+p+p with the known 2p-decay energy of 1.4(1) MeV [@ne16], performed with the GEANT program [@GEANT]. The simulations took into account the above-mentioned experimental accuracies in tracking the fragments, in reconstructing the vertices, in the trajectory angles, etc. The normalized simulation reproduces the data in the low-angle peak very well. The contribution from the “tail” of the higher states to the ground-state peak amounts to about $20 \%$. This contribution is assumed to have the same shape as the $\theta_{p1-\!\rm{O}}$ distribution selected within the $\theta_{p2-\!\rm{O}}$ range just outside the ground-state region, from 48 to 160 mrad (the dotted curve). Figure \[fig2\](c) displays an example of an angular p–$^{14}$O distribution for the “excited states” obtained from Fig. \[fig2\](a) by selecting the angular range of the other proton from 120 to 150 mrad, which corresponds to p–$^{14}$O final-state interactions due to the ground and excited states of $^{15}$F. The Monte Carlo simulation of the known one-proton decays $^{15}$F$\!\rightarrow^{14}$O+p of the ground and first excited states of $^{15}$F, with the 1p-decay energies of 1.56(13) and 2.85(4) MeV [@ne16_sc], respectively, reproduces the two lowest-angle peaks well. The two higher-lying peaks indicate 1p decays of unknown excited states in $^{15}$F with derived 1p-decay energies of 4.9(2) and 6.4(2) MeV. 
The excited states in $^{19}$Mg, $^{16}$Ne, $^{18}$Na and $^{15}$F will be addressed elsewhere. ![\[fig1a\] (a) Three-body correlations predicted by the three-body model [@grig03a] for the 2p decay of $^{19}$Mg, plotted as a function of the relative energy between the two protons, $E_{p-p}/E$, and the angle $\theta_k$ between the relative momenta of the fragments, as illustrated in the figure. Extreme cases of p–p correlations are sketched and related to the corresponding areas in the correlation plot. (b) The p–p energy spectra from the 2p decay of $^{19}$Mg calculated for different weights $W$ of $s$-$p$-$d$ shell configurations in $^{19}$Mg. (c) Typical intensity distributions plotted as a function of $\theta_k$ (see Fig. \[fig1a\](a)) in the rest frame of the 2p precursor (solid curve) and its analog in the lab system, $\theta'_k$, projected onto the transverse detector plane (dashed curve). ](mg19proj_a){width="48.00000%"} ![](mg19proj_b "fig:"){width="25.80000%"} ![](mg19proj_c "fig:"){width="21.80000%"} We turn now to the discussion of the angular p–p correlations following 2p decays. When the spin degrees of freedom are neglected and the total decay energy $E$ can be considered as fixed, the three-body correlations are completely described by two variables. A convenient choice is an energy-distribution parameter $E_{p-p}/E$ (where $E_{p-p}$ is the relative energy between the two protons) and the angle $\theta_k$ between the relative momenta of the fragments. Fig. \[fig1a\] shows such distributions predicted by the three-body model for the 2p decay of $^{19}$Mg [@grig03; @grig03a]. The three-body model predicts a distinctive correlation pattern which features an enhancement at small $E_{p-p}$ due to the final-state interaction and a suppression in the regions of strong Coulomb repulsion ($E_{p-p}/E \sim 0.5$, $\cos(\theta_k) \sim \pm 1$). The predicted energy distributions are sensitive to the structure of the precursor \[Fig. \[fig1a\](b)\]. Similar predictions are available for $^{16}$Ne [@grig_ne16]. 
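As an illustration of these two variables, here is a non-relativistic sketch computing $E_{p-p}/E$ and $\cos\theta_k$ from the three momenta in the precursor rest frame, using Jacobi "T"-type coordinates; the core mass and the exact Jacobi convention are illustrative assumptions, not taken from the cited three-body model:

```python
import numpy as np

M_P = 938.272            # proton mass (MeV/c^2)
M_CORE = 17 * 931.494    # illustrative core mass (A = 17), an assumption

def three_body_vars(p1, p2, pc):
    """Return (E_pp/E, cos(theta_k)) from the three momenta (MeV/c) in the
    2p-precursor rest frame, where p1 + p2 + pc = 0.  Non-relativistic
    Jacobi 'T' coordinates: k_pp = (p1 - p2)/2, while the core momentum pc
    plays the role of the (pp)-core relative momentum."""
    p1, p2, pc = (np.asarray(v, float) for v in (p1, p2, pc))
    k_pp = 0.5 * (p1 - p2)
    mu_pp = M_P / 2.0                              # p-p reduced mass
    mu_c = 2 * M_P * M_CORE / (2 * M_P + M_CORE)   # (pp)-core reduced mass
    e_pp = (k_pp @ k_pp) / (2 * mu_pp)             # p-p relative energy
    e_tot = e_pp + (pc @ pc) / (2 * mu_c)          # total 2p-decay energy E
    cos_tk = (k_pp @ pc) / (np.linalg.norm(k_pp) * np.linalg.norm(pc))
    return e_pp / e_tot, cos_tk
```

By construction, swapping the two protons flips the sign of $\cos\theta_k$ while leaving $E_{p-p}/E$ unchanged, and the two Jacobi energies sum to the total kinetic energy in the rest frame.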
In our experiment, we were able to measure the opening angle $\theta_{p-p}$ between the protons, whose distribution reflects the $E_{p-p}$ correlations. Fig. \[fig4\] shows the experimental angular p–p distributions obtained from triple $^{14}$O+p+p and $^{17}$Ne+p+p events gated by the condition that both protons originate from the “ground states” of $^{16}$Ne and $^{19}$Mg. These gates were inferred from the respective angular p–HI correlations as discussed above. The events representing the “ground state” 2p decay actually contain contributions from the ground state as well as background contributions from both excited states of the parent nucleus and fragmentation reactions. Therefore, using the angular-correlation data discussed above, we empirically evaluated the shapes of the background components by projecting triple events with the p–HI gates shifted away from the “ground state” region towards larger angles. The resulting p–p background contributions, shown by the dotted curves in Fig. \[fig4\], constitute about 20% of all p–p correlation data for $^{16}$Ne and 25% for $^{19}$Mg (see Fig. \[fig2\](b) and Fig. 4(c) in [@mukh_mg19]); they were subtracted from the original p–p correlations. As one can see in Fig. \[fig4\], the predictions following from the assumption of a diproton emission fail to describe both the $^{16}$Ne and $^{19}$Mg data, while the three-body model reproduces the shapes of both distributions. In the $^{19}$Mg case, the best description is obtained with the $d$-wave configuration dominating. The $^{16}$Ne data give evidence for nearly equal $s$- and $d$-wave components. In Fig. \[fig5\], the intensity distributions are displayed as a function of $\cos(\theta'_k)$. The angle $\theta'_k$ (see Fig. 
\[fig1a\](a)) was defined by the line connecting the two points where the two protons hit the same detector and by the vector joining the midpoint between the 2p hits and the point of the related heavy-ion hit, in analogy with the angle $\theta_k$ shown in Fig. \[fig1a\](a). The typical theoretical prediction for such a distribution is shown in Fig. \[fig1a\](c). The diproton model predicts flat angular distributions, in contrast to the experimental data in both cases. Only the three-body model can reproduce the characteristic shapes of the observed correlations, with the broad bumps around $\cos(\theta'_k)$=0 (the indicated spikes at $\cos(\theta'_k)\!\approx\!\pm$1 predicted by all calculations are less conclusive). Such a shape is a manifestation of the “Coulomb focusing” which efficiently repels the fragments from large regions of the momentum space (see Fig. \[fig1a\](a)). These distributions are weakly sensitive to the assumed structure of the parent states but are an exclusive feature of the three-body model. In summary, the measured three-particle correlations from the 2p decay of the ground states of $^{16}$Ne and $^{19}$Mg are described quantitatively by the predictions of the three-body model [@grig03a], in contrast to the quasi-classical “diproton” model, which fails to describe our observations. These correlations are sensitive to the structure of the decaying nucleus. Thus the comparison between experiment and theory allows one to obtain spectroscopic information about the parent states. In $^{16}$Ne, the data are consistent with strong $s/d$ mixing [@grig_ne16]. In $^{19}$Mg, the dominant $d$-shell configuration is the preferred description, which is also consistent with the lifetime information [@mukh_mg19]. 
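The geometric construction of $\theta'_k$ used above (a line through the two proton hits, and a vector from their midpoint to the heavy-ion hit) can be sketched as follows; the hit coordinates in the example are hypothetical:

```python
import numpy as np

def cos_theta_k_proj(hit_p1, hit_p2, hit_hi):
    """cos(theta'_k) from hit coordinates projected on the tracking plane:
    the angle between the line joining the two proton hits and the vector
    from their midpoint to the heavy-ion hit."""
    hit_p1, hit_p2, hit_hi = (np.asarray(h, float)
                              for h in (hit_p1, hit_p2, hit_hi))
    d_pp = hit_p1 - hit_p2                    # direction of the p-p line
    d_hi = hit_hi - 0.5 * (hit_p1 + hit_p2)   # midpoint-to-HI vector
    return (d_pp @ d_hi) / (np.linalg.norm(d_pp) * np.linalg.norm(d_hi))
```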
The method of measuring 2p decays in flight by precisely tracking all fragments with microstrip detectors provides new specific observables, thus yielding valuable spectroscopic information on such exotic isotopes. Information about two-body subsystems, e.g., $^{15}$F, is simultaneously obtained. Systematic studies of other 2p emitters predicted theoretically [@grig03; @grig01a] are foreseen with this novel technique. The authors are grateful to M. Pohl and his co-workers at the DPNC, Université de Genève, for developing the microstrip detectors. We thank in particular E. Cortina for the valuable contribution to this project. We appreciate the help of A. Bruenle, K.H. Behr, W. Hueller, A. Kelic, A. Kiseleva, R. Raabe and O. Tarasov during the preparation of the experiment. This work has been supported by contracts EURONS No. EC-I3 and FPA2003-05958, FPA2006-13807-C02-01 (MEC, Spain), the INTAS grant 03-54-6545, the Russian RFBR grants 05-02-16404 and 05-02-17535, and the Russian Ministry of Industry and Science grant NS-1885.2003.2. [70]{} I. Mukha *et al.*, Nature (London) **439**, 298 (2006). O.V. Bochkarev *et al.*, Nucl. Phys. **A505**, 215 (1989). R.A. Kryger *et al.*, Phys. Rev. Lett. **74**, 860 (1995). C.R. Bain *et al.*, Phys. Lett. B **373**, 35 (1996). B.V. Danilin *et al.*, Sov. J. Nucl. Phys. **46**, 225 (1987). A.A. Korsheninnikov, Sov. J. Nucl. Phys. **52**, 827 (1990). I.J. Thompson *et al.*, Phys. Rev. C **61**, 024318 (2000). J. Giovinazzo *et al.*, Phys. Rev. Lett. **99**, 102501 (2007). K. Miernik *et al.*, Phys. Rev. Lett. **99**, 192501 (2007). I. Mukha *et al.*, Phys. Rev. Lett. **99**, 182501 (2007). G.J. KeKelis *et al.*, Phys. Rev. C **17**, 1929 (1978). C.J. Woodward, R.E. Tribble and D.M. Tanner, Phys. Rev. C **27**, 27 (1983). A. Lepine-Szily *et al.*, Nucl. Phys. **A734**, 331 (2004). W.A. Peters *et al.*, Phys. Rev. C **68**, 034607 (2003). V.Z. 
Goldberg *et al.*, Phys. Rev. C **69**, 031302(R) (2004). T. Zerguerras *et al.*, Eur. Phys. J. A **20**, 389 (2004). L.V. Grigorenko, R.C. Johnson, I.G. Mukha, I.J. Thompson, and M.V. Zhukov, Phys. Rev. Lett. **85**, 22 (2000). L.V. Grigorenko, R.C. Johnson, I.G. Mukha, I.J. Thompson, and M.V. Zhukov, Phys. Rev. C **64**, 054002 (2001). L.V. Grigorenko and M.V. Zhukov, Phys. Rev. C **68**, 054005 (2003). L.V. Grigorenko, I.G. Mukha, and M.V. Zhukov, Nucl. Phys. **A713**, 372 (2003); **A740**, 401(E) (2004). L.V. Grigorenko, I.G. Mukha, I.J. Thompson, and M.V. Zhukov, Phys. Rev. Lett. **88**, 042502 (2002). A.M. Lane and R.G. Thomas, Rev. Mod. Phys. **30**, 257 (1958). V.I. Goldansky, Nucl. Phys. **19**, 482 (1960). A.I. Baz’, V.I. Goldansky, V.Z. Goldberg and Ya.B. Zeldovich, *Light and Intermediate Nuclei Near the Border of Nuclear Stability* (Nauka, Moscow, 1972). H. Geissel *et al.*, Nucl. Instrum. Methods Phys. Res. B **70**, 286 (1992). http://dpnc.unige.ch/ams/GSItracker/www/. M. Stanoiu *et al.*, GSI Scientific Report 2006, p. 23. I. Mukha *et al.*, GSI Scientific Report 2006, p. 112. J. Hoffmann, N. Kurz and W. Ott, GSI Scientific Report 2006, p. 216. “GEANT - detector simulation tool”, CERN software library, http://wwwasd.web.cern.ch/wwwasd/geant. L.V. Grigorenko, I.G. Mukha and M.V. Zhukov, Nucl. Phys. **A714**, 425 (2003).
--- abstract: 'Matrix completion and extrapolation (MCEX) are dealt with here over reproducing kernel Hilbert spaces (RKHSs) in order to account for prior information present in the available data. Aiming at a fast and low-complexity solver, the task is formulated as one of kernel ridge regression. The resultant MCEX algorithm can also afford online implementation, while the class of kernel functions also encompasses several existing approaches to MC with prior information. Numerical tests on synthetic and real datasets show that the novel approach is faster than widespread methods such as alternating least-squares (ALS) or stochastic gradient descent (SGD), and that the recovery error is reduced, especially when dealing with noisy data.' author: - | Pere Giménez-Febrer$^1$, Alba Pagès-Zamora$^1$, and Georgios B. Giannakis$^2$\ $^1$SPCOM Group, Universitat Politècnica de Catalunya-Barcelona Tech, Spain\ $^2$Dept. of ECE and Digital Technology Center, University of Minnesota, USA title: 'Matrix completion and extrapolation via kernel regression [^1]' --- Matrix completion, extrapolation, RKHS, kernel ridge regression, graphs, online learning Introduction ============ With only a subset of its entries available, matrix completion (MC) amounts to recovering the unavailable entries by leveraging just the low-rank attribute of the matrix itself [@candes]. The relevant task arises in applications as diverse as image restoration [@ji2010robust], sensor networks [@yi2015partial], and recommender systems [@koren2009matrix]. To save power for instance, only a fraction of sensors may collect and transmit measurements to a fusion center, where the available spatio-temporal data can be organized in a matrix format, and the unavailable ones can be eventually interpolated via MC [@yi2015partial]. 
Similarly, in collaborative filtering, the ratings given by users to a small number of items are stored in a sparse matrix, and the objective is to predict their ratings for the rest of the items [@koren2009matrix]. Existing MC approaches rely on some form of rank minimization or low-rank matrix factorization. Specifically, [@candes] proves that when MC is formulated as the minimization of the nuclear norm subject to the constraint that the observed entries remain unchanged, exact recovery is possible under mild assumptions; see also [@candes2010matrix], where reliable recovery from a few observations is established even in the presence of additive noise. Alternatively, [@koren2009matrix] replaces the nuclear norm by two low-rank factor matrices that are identified in order to recover the complete matrix. While the low-rank assumption can be sufficient for reliable recovery, prior information about the unknown matrix can also be accounted for to improve the completion outcome. Forms of prior information include sparsity [@yi2015partial], local smoothness [@cheng2013stcdg], and interdependencies encoded by graphs [@kalofolias2014matrix; @chen; @rao2015collaborative; @ma2011recommender]. These approaches exploit the available similarity information or prior knowledge of the bases spanning the column or row spaces of the unknown matrix. In this regard, reproducing kernel Hilbert spaces (RKHSs) constitute a powerful tool for leveraging available prior information thanks to kernel functions, which measure the similarity between pairs of points in an input space. Prompted by this, [@abernethy2006low; @bazerque2013; @zhou2012; @stock2018comparative] postulate that the columns of the factor matrices belong to a pair of RKHSs spanned by their respective kernels. In doing so, a given structure or similarity between rows or columns is effected on the recovered matrix. 
Upon choosing a suitable kernel function, [@yi2015partial] as well as [@cheng2013stcdg; @kalofolias2014matrix; @chen; @rao2015collaborative; @ma2011recommender] can be cast into the RKHS framework. In addition to improving MC performance, kernel-based approaches also enable extrapolation of rows and columns, even when all their entries are missing, a task impossible for the standard MC approaches in e.g. [@candes] and [@koren2009matrix]. One major hurdle in MC is the computational cost as the matrix size grows. In its formulation as a rank minimization task, MC can be solved via semidefinite programming [@candes], or via proximal gradient minimization [@ma2011fixed; @cai2010singular; @chen; @gimenez], which entails a singular value decomposition of the recovered matrix per iteration. Instead, algorithms with lower computational cost are available for the bi-convex formulation based on matrix factorization [@koren2009matrix]. These commonly rely on iterative minimization schemes such as alternating least-squares (ALS) [@hastie2015matrix; @jain2013low] or stochastic gradient descent (SGD) [@gemulla2011large; @zhou2012]. With regard to kernel-based MC, the corresponding algorithms rely on alternating convex minimization and semidefinite programming [@abernethy2006low], block coordinate descent [@bazerque2013], and SGD [@zhou2012]. However, algorithms based on alternating minimization only converge to the minimum in the limit of infinitely many iterations. In addition, existing kernel-based algorithms adopt a specific sampling pattern, or do not effectively make use of the Representer Theorem for RKHSs, which will turn out to be valuable in further reducing the complexity, especially when the number of observed entries is small. The present contribution offers an RKHS-based approach to MCEX that unifies and broadens the scope of MC approaches, while offering reduced-complexity algorithms that scale well with the data size. 
Specifically, we develop a novel MC solver via kernel ridge regression as a convex alternative to the nonconvex factorization-based formulation that offers a closed-form solution. Through an explicit sampling matrix, the proposed method accommodates an encompassing sampling pattern, which further enables the derivation of upper bounds on the mean-square error. Moreover, an approximate solution to our MCEX regression formulation is developed that also enables online implementation using SGD. Finally, means of incorporating prior information through kernels are discussed in the RKHS framework. The rest of the paper is organized as follows. Section II outlines the RKHS formulation and the kernel regression task. Section III unifies the existing methods for MC under the RKHS umbrella, while Section IV introduces our proposed Kronecker kernel MCEX (KKMCEX) approach. Section V develops our ridge regression MCEX (RRMCEX) algorithm, an accelerated version of KKMCEX, and its online variant. Section VI deals with the construction of kernel matrices. Finally, Section VII presents numerical tests, and Section VIII concludes the paper. **Notation.** Boldface lowercase fonts denote column vectors, and boldface uppercase fonts denote matrices. The $(i,j)$th entry of matrix $\A$ is $\A_{i,j}$, and the $i^{\text{th}}$ entry of vector $\bm a$ is $\bm a_i$. Superscripts $^T$ and $^\dagger$ denote transpose and pseudoinverse, respectively, while a hat $\,\hat{}\,$ is used for estimates. Matrix $\F\in\mathcal{H}$ means that its columns belong to a vector space $\mathcal{H}$. The symbols $\I$ and $\bm 1$ stand for the identity matrix and the all-ones vector of appropriate size, specified by the context. The trace operator is $\text{Tr}(\cdot)$, the function eig($\A$) returns the diagonal eigenvalue matrix of $\A$ ordered in ascending order, and $\lambda_k(\A)$ denotes the $k^{\text{th}}$ eigenvalue of $\A$ with $\lambda_k(\A)\leq\lambda_{k+1}(\A)$. 
Preliminaries {#sec:background} ============= Consider a set of $N$ input-measurement pairs $\{(x_i,m_i)\}^N_{i=1}$ in $\mathcal{X}\times\mathbb{R}$, where $\mathcal{X}:=\{x_1,\ldots,x_N\}$ is the input space, $\mathbb{R}$ denotes the set of real numbers, and measurements obey the model $$\label{eq:sigmodel} m_i = f(x_i) + e_i$$ where $f:\mathcal{X}\rightarrow\mathbb{R}$ is an unknown function and $e_i\in\mathbb{R}$ is noise. We assume this function belongs to an RKHS $$\label{eq:hilb} \mathcal{H}_x:=\{f:f(x_i) = \sum^N_{j=1} \alpha_j \kapx(x_i,x_j), \:\:\: \alpha_j \in \mathbb{R}\}$$ where $\kapx:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is the kernel function that spans $\mathcal{H}_x$, and $\{\alpha_i\}^N_{i=1}$ are weight coefficients. An RKHS is a complete linear space endowed with an inner product that satisfies the reproducing property [@shawe]. If $\langle f,f'\rangle_{\mathcal{H}_x}$ denotes the inner product in $\mathcal{H}_x$ between functions $f$ and $f'$, the reproducing property states that $f(x_i) = \langle f,\kapx(\cdot,x_i)\rangle_{\mathcal{H}_x}$; that is, $f$ in $\mathcal{H}_x$ can be evaluated at $x_i$ by taking the inner product between $f$ and $\kapx(\cdot,x_i)$. With $\{\alpha_i\}^N_{i=1}$ and $\{\alpha'_i\}^N_{i=1}$ denoting the coefficients of $f$ and $f'$ in (\[eq:hilb\]) respectively, we have $\langle f,f'\rangle_{\mathcal{H}_x} := \sum_{i=1}^N\sum_{j=1}^N \alpha_i\alpha_j' \kapx(x_i,x_j)$; that is, $$\begin{aligned} \langle f,f'\rangle_{\mathcal{H}_x} = \balp^T\K_x\balp'\label{eq:innerprod}\end{aligned}$$ where $\balp := [\alpha_1,\ldots,\alpha_N]^T$, $\balp' := [\alpha_1',\ldots,\alpha_N']^T$ and $(\K_x)_{i,j}:=\kapx(x_i,x_j)$. In order for $\langle \cdot,\cdot \rangle_{\mathcal{H}_x}$ in (\[eq:innerprod\]) to be an inner product, $\kapx$ must be symmetric and semipositive definite, meaning $\langle f,f\rangle_{\mathcal{H}_x} \geq 0\: \forall f\in\mathcal{H}_x$. 
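The reproducing property and the positive semidefiniteness of $\K_x$ can be verified numerically; a minimal sketch with a Gaussian kernel (all parameter values here are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 6)
eta = 0.1
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eta))  # Gaussian kernel matrix

alpha = rng.standard_normal(6)
f = K @ alpha                  # f(x_i) = sum_j alpha_j * kappa(x_i, x_j)

# reproducing property: f(x_i) = <f, kappa(., x_i)> = alpha^T K e_i
for i in range(6):
    assert np.isclose(f[i], alpha @ K[:, i])

# <f, f> = alpha^T K alpha >= 0, and K is positive semidefinite
assert alpha @ K @ alpha >= 0.0
assert np.linalg.eigvalsh(K).min() > -1e-10
```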
As a consequence, $\K_x$ will be symmetric positive semidefinite since $\balp^T\K_x\balp \geq 0 \:\: \forall\, \balp \in \mathbb{R}^N$. While $\kapx$ is usually interpreted as a measure of similarity between two elements in $\mathcal{X}$, it can also be seen as the inner product of the corresponding elements in a feature space $\mathcal{F}$ to which $\mathcal{X}$ can be mapped using a function $\phi_x:\mathcal{X}\rightarrow\mathcal{F}$. Formally, we write $$\label{eq:kerdot} \kapx(x_i,x_j)=\langle\phi_x(x_i),\phi_x(x_j)\rangle_{\mathcal{F}}.$$ The function $\phi_x$ is referred to as the feature map, and its choice depends on the application. For an input space of text files, for example, the files could be mapped to a feature vector that tracks the number of words, lines, and blank spaces in the file. Since $\phi_x$ can potentially have infinite dimension, evaluating the kernel using (\[eq:kerdot\]) might be prohibitively expensive. This motivates specifying the kernel through a similarity function in $\mathcal{X}$, which bypasses the explicit computation of the inner product in $\mathcal{F}$. Typical examples include the Gaussian kernel $\kapx(x_i,x_j) = \text{exp}\{-{\left|\left|x_i-x_j\right|\right|^2_2}/(2\eta)\}$, with $\eta$ being a free parameter, and the polynomial kernel [@friedman2001elements]. In certain cases, however, it is difficult to obtain the kernel similarity function on the input space. Such cases include metric input spaces with misses (as in MC), and non-metric spaces. The alternative to both is deriving the kernel from prior information. For instance, if we have a graph connecting the points in $\mathcal{X}$, a kernel can be obtained from the graph Laplacian [@romero]. 
Having introduced the basics of RKHS, we proceed with the kernel regression task, where given $\{m_i\}^N_{i=1}$ we seek to obtain $$\label{eq:rrf} \hat{f} = \operatorname*{\arg\,\min}_{f\in\mathcal{H}_x} {1\over N}\sum_{i=1}^N l(m_i,f(x_i)) + \mu' ||f||^2_{\mathcal{H}_x}$$ with $l(\cdot)$ denoting the loss, $\mu' \in \mathbb{R}^+$ the regularization parameter, and $||f||_{\mathcal{H}_x}:=\sqrt{\langle f,f\rangle_{\mathcal{H}_x}}$ the norm induced by the inner product in (\[eq:innerprod\]). We will henceforth focus on the square loss $l(m_i,f(x_i)) := (m_i - f(x_i))^2$. Using $\K_x$, consider without loss of generality expressing the vector $\f:=[f(x_1),\ldots,f(x_N)]^T$ as $\f=\K_x\balp$, where $\balp:=[\alpha_1,\ldots,\alpha_N]^T$. Using the latter in the square loss, (\[eq:rrf\]) boils down to a kernel ridge regression (KRR) problem that can be solved by estimating $\balp$ as $$\label{eq:rralp} \hat{\balp} = \operatorname*{\arg\,\min}_{\balp\in\mathbb{R}^N} {\left|\left|\m - \K_x \balp\right|\right|^2_2} +\mu \mspace{2mu} \balp^T \K_x \balp$$ where $\m := [m_1,\ldots,m_N]^T$ and $\mu=N\mu'$. The weights can be found in closed form as $$\label{eq:rralpest} \hat{\balp} = (\K_x+\mu\I)^{-1}\m$$ and the estimate of the sought function is obtained as $\hat{\f}=\K_x\hat{\balp}$. Kernel-based MCEX {#sec:mc} ================= Matrix completion considers $\F\in\mathbb{R}^{N\times L}$ of rank $r$ observed through an $N\times L$ matrix of noisy observations $$\label{eq:matm} \M = P_{\Omega}(\F+\bm \E)$$ where $\Omega\subseteq\{1,\ldots,N\}\times \{1,\ldots,L\}$ is the sampling set of cardinality $S=|\Omega|$ containing the indices of the observed entries; $P_{\Omega}(\cdot)$ is a projection operator that sets to zero the entries with index $(i,j)\notin \Omega$ and leaves the rest unchanged; and $\bm\E\in\mathbb{R}^{N\times L}$ is a noise matrix. 
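A minimal numerical sketch of the closed-form KRR estimator (\[eq:rralpest\]) with a Gaussian kernel (all parameter and data choices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta, mu = 40, 0.01, 1e-2
x = np.linspace(0.0, 1.0, N)
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eta))  # Gaussian kernel

f_true = np.sin(2 * np.pi * x)
m = f_true + 0.1 * rng.standard_normal(N)     # noisy measurements

alpha_hat = np.linalg.solve(K + mu * np.eye(N), m)  # (K + mu*I)^{-1} m
f_hat = K @ alpha_hat                               # fitted function values

# alpha_hat minimizes ||m - K a||^2 + mu a^T K a, so its objective
# cannot exceed the objective at a = 0, which equals ||m||^2
obj = np.sum((m - f_hat)**2) + mu * (alpha_hat @ K @ alpha_hat)
assert obj <= np.sum(m**2)
```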
According to [@candes2010matrix], one can recover $\bm F$ from $\M$ with an error proportional to the magnitude of ${\left|\left|\E\right| \right|_\text{F}^2}$ by solving the following convex optimization problem: $$\begin{aligned} \label{eq:nnormc} &\min_{\F\in{\mathbb{R}^{N\times L}}} & \mspace{-50mu}& \text{rank}(\F) \nonumber \\ & \text{subject to} & \mspace{-50mu}& {\left|\left|P_{\Omega}(\F - \M)\right| \right|_\text{F}^2} \leq \delta\end{aligned}$$ where ${\left|\left|\cdot\right| \right|_\text{F}}$ is the Frobenius norm, and we assume ${\left|\left|P_\Omega(\E)\right| \right|_\text{F}^2} \leq \delta$ for some $\delta > 0$. Since solving (\[eq:nnormc\]) is NP-hard, the nuclear norm ${\left|\left|\F\right|\right|_*} := \text{Tr}(\sqrt{\F^T\F})$ can be used to replace the rank to obtain the convex problem [@ma2011fixed; @chen] $$\label{eq:nuclmc} \min_{\F\in{\mathbb{R}^{N\times L}}} {\left|\left|P_\Omega(\M-\F)\right| \right|_\text{F}^2} + \mu{\left|\left|\F\right|\right|_*}.$$ Because $\F$ is low rank, it is always possible to factorize it as $\F=\W\H^T$, where $\W \in \mathbb{R}^{N\times p}$ and $\H \in \mathbb{R}^{L\times p}$ are the latent factor matrices with $p \geq r$. This factorization allows expressing the nuclear norm as [@srebro2005maximum] ${\left|\left|\F\right|\right|_*} = \min_{\F=\W\H^T} {1\over 2}\left({\left|\left|\W\right| \right|_\text{F}^2} + {\left|\left|\H\right| \right|_\text{F}^2}\right)$ which allows reformulating (\[eq:nuclmc\]) as $$\label{eq:mclag} \{\hat{\W}\!,\!\hat{\H}\} \!=\! \operatorname*{\arg\,\min}_{\substack{\W\in\mathbb{R}^{N\times p}\\\H\in\mathbb{R}^{L\times p}}} {\left|\left|P_\Omega(\!\M\!-\!\W\H^T)\right| \right|_\text{F}^2} \!+ \mu\!\left({\left|\left|\W\right| \right|_\text{F}^2} \!+\! {\left|\left|\H\right| \right|_\text{F}^2}\right)$$ and yields $\hat{\F}=\hat{\W}\hat{\H}^T$. 
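A textbook proximal-gradient (singular-value soft-thresholding) sketch for the nuclear-norm problem (\[eq:nuclmc\]); this is a generic illustration of the approach, not the solver proposed in this paper:

```python
import numpy as np

def svt(Z, tau):
    """Singular-value soft-thresholding: the prox operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def mc_prox_grad(M, mask, mu, n_iter=100, step=0.5):
    """Proximal gradient for min_F ||P_Omega(M - F)||_F^2 + mu*||F||_*.
    step = 0.5 matches the Lipschitz constant (2) of the data-fit gradient,
    which guarantees a monotonically non-increasing objective."""
    F = np.zeros_like(M)
    for _ in range(n_iter):
        grad = 2.0 * mask * (F - M)            # gradient of the quadratic term
        F = svt(F - step * grad, step * mu)    # prox step on the nuclear norm
    return F
```

With a fully observed matrix the iteration reaches the exact solution `svt(M, mu/2)` in a single step; with missing entries each iteration costs one SVD, which is the complexity drawback noted above.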
While the solutions to (\[eq:nuclmc\]) and (\[eq:mclag\]) are equivalent when the rank of the matrix minimizing (\[eq:nuclmc\]) is smaller than $p$ [@hastie2015matrix], solving (\[eq:nuclmc\]) can be costlier since it involves the computation of the singular values of the matrix. On the other hand, since (\[eq:mclag\]) is bi-convex, it can be solved by alternately optimizing $\W$ and $\H$, e.g. via ALS [@jain2013low] or SGD iterations [@gemulla2011large]. Moreover, leveraging the structure of (\[eq:mclag\]), it is also possible to optimize one row from each factor matrix at a time instead of updating the full factor matrices, which enables faster, online, and distributed implementations [@teflioudi2012]. Aiming at a kernel-based MCEX, we model the columns and rows of $\F$ as functions that belong to two different RKHSs. To this end, consider the input spaces $\mathcal{X}:=\{x_1,\ldots,x_N\}$ and $\mathcal{Y}:=\{y_1,\ldots,y_L\}$ for the column and row functions, respectively. In the user-movie ratings paradigm, $\mathcal{X}$ could be the set of users, and $\mathcal{Y}$ the set of movies. Then $\F:=[\f_1,\ldots,\f_L]$ is formed with columns $\f_l := [f_l(x_1),\ldots,f_l(x_N)]^T$ with $f_l:\mathcal{X}\rightarrow\mathbb{R}$. Likewise, we rewrite $\F := [\g_1,\ldots,\g_N]^T$, with rows $\g_n^T:= [g_n(y_1),\ldots,g_n(y_L)]$ and $g_n:\mathcal{Y}\rightarrow\mathbb{R}$. We further assume that $f_l\in\mathcal{H}_x\: \forall l=1,\ldots,L$ and $g_n\in\mathcal{H}_y \: \forall n=1,\ldots,N$, where $$\label{eq:hilbrow} \mathcal{H}_x:=\{f:f(x_i) = \sum^N_{j=1} \alpha_j \kapx(x_i,x_j), \:\:\: \alpha_j \in \mathbb{R}\}$$ $$\label{eq:hilbcol} \mathcal{H}_y:=\{g:g(y_i) = \sum^L_{j=1} \beta_j \kapy(y_i,y_j), \:\:\: \beta_j \in \mathbb{R}\}$$ and $\kapx:\mathcal{X}\times\mathcal{X}\rightarrow \mathbb{R}$ and $\kapy:\mathcal{Y}\times\mathcal{Y}\rightarrow \mathbb{R}$ are the kernels forming $\K_x\in\mathbb{R}^{N\times N}$ and $\K_y\in\mathbb{R}^{L\times L}$, respectively. 
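The ALS iterations mentioned above for the factorized problem (\[eq:mclag\]) can be sketched as follows; each row update is the exact ridge solution given the other factor (an illustrative sketch, not an optimized implementation):

```python
import numpy as np

def mc_als(M, mask, p=2, mu=0.1, n_iter=20, seed=0):
    """ALS for ||P_Omega(M - W H^T)||_F^2 + mu*(||W||_F^2 + ||H||_F^2).
    mask is the 0/1 indicator of observed entries; the objective is
    non-increasing over the sweeps."""
    rng = np.random.default_rng(seed)
    N, L = M.shape
    W = rng.standard_normal((N, p))
    H = rng.standard_normal((L, p))
    for _ in range(n_iter):
        for i in range(N):                        # update rows of W
            o = mask[i] > 0
            W[i] = np.linalg.solve(H[o].T @ H[o] + mu * np.eye(p),
                                   H[o].T @ M[i, o])
        for j in range(L):                        # update rows of H
            o = mask[:, j] > 0
            H[j] = np.linalg.solve(W[o].T @ W[o] + mu * np.eye(p),
                                   W[o].T @ M[o, j])
    return W, H
```

The per-row updates are what enables the online and distributed variants cited above, since each row touches only its own observed entries.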
Since $\W$ and $\H$ span the column and row spaces of $\F$, their columns belong to $\mathcal{H}_x$ and $ \mathcal{H}_y$ as well. Thus, the $m^\text{th}$ column of $\W$ is $$\label{eq:vecw} \w_m:=[w_m(x_1),\ldots,w_m(x_N)]^T$$ where $w_m:\mathcal{X}\rightarrow\mathbb{R}$ and $w_m\in\mathcal{H}_x\:\forall m=1,\ldots,p$, and the $m^\text{th}$ column of $\H$ is $$\label{eq:vech} \h_m:=[h_m(y_1),\ldots,h_m(y_L)]^T$$ where $h_m:\mathcal{Y}\rightarrow\mathbb{R}$ and $h_m\in\mathcal{H}_y\:\forall m=1,\ldots,p$. Hence, instead of simply promoting a small Frobenius norm for the factor matrices as in (\[eq:mclag\]), we can also promote smoothness on their respective RKHS. The kernel-based formulation in [@bazerque2013] estimates the factor matrices by solving $$\begin{aligned} \{\hat{\W},\hat{\H}\}=\operatorname*{\arg\,\min}_{\substack{\W\in\mathcal{H}_x\\\H\in\mathcal{H}_y}} \ &{\left|\left|P_\Omega(\M-\W\H^T)\right| \right|_\text{F}^2} \label{eq:factker}\\& \mspace{-50mu}+ \mu{\text{Tr}(\W^T\K_x^{-1}\W)} + \mu{\text{Tr}(\H^T\K_y^{-1}\H)}.\nonumber\end{aligned}$$ Note that (\[eq:factker\]) is equivalent to (\[eq:mclag\]) for $\K_x=\I$ and $\K_y=\I$. Since the constraints $\W\in\mathcal{H}_x$ and $\H\in\mathcal{H}_y$ can be challenging to account for when solving (\[eq:factker\]), we can instead find the coefficients that generate $\W$ and $\H$ in their respective RKHSs in order to satisfy such constraints. 
Thus, if we expand $\W=\K_x\B$ and $\H=\K_y\C$, where $\B\in\mathbb{R}^{N\times p}$ and $\C\in\mathbb{R}^{L\times p}$ are coefficient matrices, (\[eq:factker\]) becomes $$\begin{aligned} \label{eq:factkerw} \{\hat{\B},\hat{\C}\}=\operatorname*{\arg\,\min}_{\substack{\B\in\mathbb{R}^{N\times p}\\\C\in\mathbb{R}^{L\times p}}} & {\left|\left|P_\Omega(\M-\K_x\B\C^T\K_y)\right| \right|_\text{F}^2} \\ &+ \mu{\text{Tr}(\B^T\K_x\B)} + \mu{\text{Tr}(\C^T\K_y\C)}.\nonumber\end{aligned}$$ Nevertheless, with nonsingular kernel matrices, $\B$ and $\C$ can be found by solving  and substituting $\hat{\B}=\K_x^{-1}\hat{\W}$ and $\hat{\C}=\K_y^{-1}\hat{\H}$ [@bazerque2013]. Alternating minimization schemes that solve the bi-linear MC formulation tend to the solution of the convex problem only in the limit [@jain2013low]; thus, convergence to the global optimum is not guaranteed after finitely many iterations. Since algorithms for kernel-based MC [@bazerque2013] rely on such alternating minimization schemes, they lack convergence guarantees under finite iterations as well. In addition, their computational cost scales with the size of $\F$. On the other hand, online implementations have a lower cost [@zhou2012], but only guarantee convergence to a stationary point [@mardani2015]. In the ensuing section we develop a convex kernel-based reformulation of MCEX that enables a closed-form solver which purely exploits the extrapolation facilitated by the kernels. By casting aside the low-rank constraints, the computational complexity of our solver scales only with the number of observations while, according to our numerical tests, providing better performance. Moreover, we derive an online implementation that can be seamlessly extended to distributed operation.
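Since the discussion above repeatedly refers to alternating minimization of the bi-linear formulation, a minimal ALS sketch for (\[eq:mclag\]) may help fix ideas. Sizes, the sampling density, and the fixed iteration budget are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, p, mu = 20, 15, 3, 1e-3
F_true = rng.standard_normal((N, p)) @ rng.standard_normal((p, L))
mask = rng.random((N, L)) < 0.6              # observed index set Omega
M = np.where(mask, F_true, 0.0)

W = rng.standard_normal((N, p))
H = rng.standard_normal((L, p))
for _ in range(50):                          # fixed iteration budget
    for i in range(N):                       # ridge solve for each row of W
        Hi = H[mask[i]]                      # rows j with (i, j) observed
        W[i] = np.linalg.solve(Hi.T @ Hi + mu * np.eye(p), Hi.T @ M[i, mask[i]])
    for j in range(L):                       # ridge solve for each row of H
        Wj = W[mask[:, j]]
        H[j] = np.linalg.solve(Wj.T @ Wj + mu * np.eye(p), Wj.T @ M[mask[:, j], j])

# relative error on the observed entries
err = np.linalg.norm(mask * (W @ H.T - F_true)) / np.linalg.norm(mask * F_true)
```

Each inner update is a small $p\times p$ ridge solve, which is why per-row updates enable fast, online, and distributed variants.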
Kronecker kernel MCEX {#sec:kmr} ===================== In the previous section, we viewed the columns and rows of $\F$ as functions evaluated at the points of the input spaces $\mathcal{X}$ and $\mathcal{Y}$ in order to unify the state-of-the-art on MC using RKHSs. Instead, we now postulate the entries of $\F$ as the output of a function lying on an RKHS evaluated at a tuple $(x_i,y_j) \in \mathcal{X} \times \mathcal{Y}$. Given the spaces $\mathcal{X}$ and $\mathcal{Y}$, consider the space $\mathcal{Z}:=\mathcal{X}\times\mathcal{Y}$ with cardinality $|\mathcal{Z}|=NL$ along with the two-dimensional function $v:\mathcal{Z}\rightarrow\mathbb{R}$ as $v(x_i,y_j)=f_j(x_i)$, which belongs to the RKHS $$\label{eq:hilbz} \mathcal{H}_z\!:=\!\{v\!:\!v(x_i,\!y_j) \!=\! \sum^N_{n=1}\!\sum^L_{l=1} \!\gamma_{n,l} \kapz((x_i,\!y_j),\!(x_n,\!y_l)), \: \:\; \gamma_{n,l} \!\in \!\mathbb{R}\}$$ with $\kapz:\mathcal{Z}\times\mathcal{Z}\rightarrow \mathbb{R}$. While one may choose any kernel to span $\mathcal{H}_z$, we will construct one adhering to our bilinear factorization $\F=\W\H^T$ whose $(i,j)^\text{th}$ entry yields $$\label{eq:zhw} \F_{ij}=v(x_i,y_j) = \sum^p_{m=1}w_m(x_i)h_m(y_j)$$ with $w_m$ and $h_m$ the functions capturing the $m^{\text{th}}$ columns of $\W$ and $\H$ as in (\[eq:vecw\]) and (\[eq:vech\]). Since $w_m\in\mathcal{H}_x$ and $h_m\in\mathcal{H}_y$, we can write $w_m(x_i) = \sum^N_{n=1} b_{n,m} \kapx(x_i,x_n)$ and $h_m(y_j) = \sum^L_{l=1} c_{l,m} \kapy(y_j,y_l)$, where $b_{n,m}$ and $c_{l,m}$ are the entries at $(n,m)$ and $(l,m)$ of the factor matrices $\B$ and $\C$ from , respectively.
Therefore, (\[eq:zhw\]) can be rewritten as $$\begin{aligned} v(x_i,y_j) &= \sum^p_{m=1} \sum^N_{n=1} b_{n,m}\kapx(x_i,x_n) \sum^L_{l=1} c_{l,m}\kapy(y_j,y_l) \nonumber \\ &= \sum^N_{n=1}\sum^L_{l=1} \left(\sum^p_{m=1} b_{n,m} c_{l,m} \right) \kapx(x_i,x_n)\kapy(y_j,y_l) \nonumber \\ &= \sum^N_{n=1}\sum^L_{l=1} \gamma_{n,l} \kapz((x_i,y_j),(x_n,y_l))\label{eq:zfunc}\end{aligned}$$ where $\gamma_{n,l}=\sum^p_{m=1} b_{n,m} c_{l,m}$, and $\kapz((x_i,y_j),(x_n,y_l)) =\kapx(x_i,x_n)\kapy(y_j,y_l)$ since a product of kernels is itself a kernel [@friedman2001elements]. Using the latter,  can be written compactly as $$\label{eq:zkermat} v(x_i,y_j) = \k_{i,j}^T\gam$$ where $\gam:=[\gamma_{1,1},\gamma_{2,1},\ldots,\gamma_{N,1},\gamma_{1,2},\gamma_{2,2},\ldots,\gamma_{N,L}]^T$, and correspondingly, $$\begin{aligned} \label{eq:kykz} \k_{i,j}=&[\kapx(x_i,x_1)\kapy(y_j,y_1), \ldots, \kapx(x_i,x_N)\kapy(y_j,y_1),\nonumber \\ &\kapx(x_i,x_1)\kapy(y_j,y_2), \ldots, \kapx(x_i,x_N)\kapy(y_j,y_L)]^T \nonumber \\ =& (\K_y)_{:,j}\otimes(\K_x)_{:,i}\end{aligned}$$ where a subscript $(:,j)$ denotes the $j^{\text{th}}$ column of a matrix, and we have used that $\K_x$ and $\K_y$ are symmetric matrices. In accordance with , the kernel matrix of $\mathcal{H}_z$ in  is $$\label{eq:Kz} \K_z=\K_y\otimes\K_x.$$ Clearly, $\k_{i,j}$ in  can also be expressed as $\k_{i,j}=(\K_z)_{:,(j-1)N+i}$. This together with  implies that $$\begin{aligned} \v =[&v(x_1,y_1),v(x_2,y_1),\ldots,v(x_N,y_1),v(x_1,y_2),\\&v(x_2,y_2),\ldots,v(x_N,y_L)]^T \end{aligned}$$ can be expressed in matrix-vector form as $$\label{eq:zmod} \v = \K_z\gam$$ or, equivalently, $\v=\text{vec}(\F)$. Note that entries of the kernel matrix are $(\K_z)_{i',j'} = \kapx(x_i,x_n)\kapy(y_j,y_l)$, where $n=j'\:\text{mod}\:N, i=i'\:\text{mod}\:N, l=\lceil {j'\over N}\rceil, \text{ and } j=\lceil {i'\over N}\rceil$.
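The chain of equalities above amounts to the vectorization identity $\text{vec}(\K_x\B\C^T\K_y)=(\K_y\otimes\K_x)\text{vec}(\B\C^T)$ for symmetric kernel matrices. A quick numerical check, with arbitrary small dimensions, is:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, p = 5, 4, 2
Ax = rng.standard_normal((N, N)); Kx = Ax @ Ax.T   # symmetric PSD kernel matrices
Ay = rng.standard_normal((L, L)); Ky = Ay @ Ay.T
B = rng.standard_normal((N, p)); C = rng.standard_normal((L, p))

F = Kx @ B @ C.T @ Ky                    # matrix-form model
Kz = np.kron(Ky, Kx)                     # Kronecker kernel matrix
gamma = (B @ C.T).flatten(order='F')     # column-major vec of Gamma = B C^T
v = Kz @ gamma                           # v = Kz * gamma, i.e., vec(F)
assert np.allclose(v, F.flatten(order='F'))
```

Note the column-major (`order='F'`) stacking, which matches the ordering of $\v$ and $\gam$ above.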
Since the eigenvalues of $\K_z$ are the product of eigenvalues of $\K_y$ and $\K_x$, it follows that $\K_z$ is positive semidefinite and thus a valid kernel matrix. With the definition of the function $v$ and its vector form we have transformed the matrix of functions specifying $\F$ into a function that lies on the RKHS $\mathcal{H}_z$. Hence, we are ready to formulate MCEX as a kernel regression task for recovering $\v$ from the observed entries of $\m=\text{vec}(\M)$. Given $ \{((x_i,y_j),m_{i,j})\}_{(i,j)\in\Omega}$ in $\mathcal{Z}\times\mathbb{R}$, our goal is to recover the function $v$ as $$\label{eq:lossr1} \hat{v}=\operatorname*{\arg\,\min}_{v\in\mathcal{H}_z} \sum_{(i,j)\in\Omega} (m_{i,j}-v(x_i,y_j))^2 + \mu ||v||^2_{\mathcal{H}_\mathcal{Z}}$$ where $||v||^2_{\mathcal{H}_\mathcal{Z}}:=\gam^T\K_z\gam$. Define next $\e := \text{vec}(\E)$ and $\mb=\S\m$, where $\S$ is an $S\times NL$ binary sampling matrix also used to specify the sampled noise vector $\bar{\e}=\S\e$. With these definitions and (\[eq:zmod\]), the model in (\[eq:matm\]) becomes $$\label{eq:obsvec} \mb = \S\v + \S\e = \S\K_z\gam + \bar{\e}$$ which can be solved to obtain $$\label{eq:mcker} \hgam = \operatorname*{\arg\,\min}_{\gam\in\mathbb{R}^{NL}} {\left|\left|\mb - \S\K_z\gam\right|\right|^2_2} + \mu\gam^T\K_z\gam$$ in closed form $$\label{eq:krr} \hgam = (\S^T\S\K_z+\mu\I)^{-1}\S^T\mb.$$ Since the size of $\K_z$ is $NL\times NL$, the inversion in (\[eq:krr\]) can be very computationally intensive. To alleviate this, we will leverage the Representer Theorem (see [@scholkopf2001generalized] for a formal proof), which allows us to reduce the number of degrees of freedom of the regression problem. In our setup, this theorem is as follows. **Representer Theorem**. 
Given the set of input-observations pairs $\{((x_i,y_j),m_{i,j})\}_{(i,j)\in \Omega}$ in $\mathcal{Z}\times \mathbb{R}$ and the function $v$ as in (\[eq:zfunc\]), the solution to $$\label{eq:lossr} \operatorname*{\arg\,\min}_{v\in\mathcal{H}_z} \sum_{(i,j)\in\Omega} (m_{i,j}-v(x_i,y_j))^2 + \mu ||v||^2_{\mathcal{H}_\mathcal{Z}}$$ is an estimate $\hat{v}$ that satisfies $$\label{eq:zhat} \hat{v} = \sum_{(n,l)\in\Omega} {\tau}_{n,l} k_{z}((\cdot,\cdot),(x_n,y_l))$$ for some coefficients $\tau_{n,l}\in\mathbb{R},$ $\forall(n,l)\in\Omega$. Theorem 1 asserts that $\hat{\gam}$ in (\[eq:mcker\]) satisfies $\hat{\gamma}_{n,l}=0 \; \forall \; (n,l) \notin \Omega$. Therefore, we only need to optimize $\{\gamma_{n,l} : (n,l)\in\Omega\}$ which correspond to the observed pairs. In fact, for our vector-based formulation, the Representer Theorem boils down to applying the matrix inversion lemma (MIL) to , which asserts the following. \[lem:MIL\] **MIL** [@henderson1981]. Given matrices $\A,\U$ and $\V$ of conformal dimensions, with $\A$ invertible, it holds that $$\label{eq:mil} (\U\V+\A)^{-1}\U = \A^{-1}\U(\V\A^{-1}\U+\I)^{-1}.$$ With $\A=\mu\I, \U=\S^T$ and $\V=\S\K_z$, application of  to  yields $$\label{eq:gamest} \hat{\gam} = \S^T(\S\K_z\S^T+\mu\I)^{-1}\mb.$$ Subsequently, we reconstruct $\v$ as $$\label{eq:zKKMCEX} \hat{\v}_K = \K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\mb$$ and we will henceforth refer to as the *Kronecker kernel MCEX* (KKMCEX) estimate of $\v$. Regarding the computational cost incurred by , inversion costs $\mathcal{O}(S^3)$, since the size of the matrix to be inverted is reduced from $NL$ to $S$. Clearly, there is no need to compute $\K_z=\K_y\otimes\K_x$. As $\S$ has binary entries, $\S\K_z\S^T$ is just a selection of $S^2$ entries in $\K_z$; and, given that $\kapz((x_i,y_j),(x_n,y_l)) =\kapx(x_i,x_n)\kapy(y_j,y_l)$, it is obtained at cost $\mathcal{O}(S^2)$. Overall, the cost incurred by is $\mathcal{O}(S^3)$.
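The push-through step behind the MIL can be confirmed numerically. The sketch below (random data, illustrative sizes) checks that the full $NL\times NL$ solve in (\[eq:krr\]) and the reduced $S\times S$ solve in (\[eq:gamest\]) return identical coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, S, mu = 6, 5, 12, 0.5
NL = N * L
Ax = rng.standard_normal((N, N)); Kx = Ax @ Ax.T
Ay = rng.standard_normal((L, L)); Ky = Ay @ Ay.T
Kz = np.kron(Ky, Kx)

idx = rng.choice(NL, size=S, replace=False)
Smat = np.zeros((S, NL)); Smat[np.arange(S), idx] = 1.0   # binary sampling matrix
mbar = rng.standard_normal(S)                             # observed entries

# (S^T S Kz + mu I)^{-1} S^T  ==  S^T (S Kz S^T + mu I)^{-1}
g_full = np.linalg.solve(Smat.T @ Smat @ Kz + mu * np.eye(NL), Smat.T @ mbar)
g_mil = Smat.T @ np.linalg.solve(Smat @ Kz @ Smat.T + mu * np.eye(S), mbar)
assert np.allclose(g_full, g_mil)
v_hat = Kz @ g_mil                      # KKMCEX estimate of v
```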
Compared to the MC approach in , the KKMCEX method is easier to implement since it only involves a matrix inversion. Moreover, since it admits a closed-form solution, it facilitates deriving bounds on the estimation error of $\hat{\v}_K$. **Remark 1**. Matrices built via the Kronecker product have been used in regression for different purposes. Related to MC, [@rao2015collaborative] leverages Kronecker product structures to efficiently solve the Sylvester equations that arise in alternating minimization iterations to find $\{\hat{\W},\hat{\H}\}$ in . On the other hand, [@pahikkala2014two; @stock2018comparative] propose a Kronecker kernel ridge regression method that can be used to extrapolate missing entries in a matrix. However, the methods in [@pahikkala2014two; @stock2018comparative] assume a complete training set and Kronecker structure for the regression matrix; this implies that the observed entries in $\bm{M}$ can be permuted to form a full submatrix. In our formulation, we introduce $\S$ which encompasses any sampling pattern in $\Omega$. Thus, the properties of the Kronecker product used in [@rao2015collaborative; @pahikkala2014two; @stock2018comparative] cannot be applied to solve $\eqref{eq:zKKMCEX}$ since $\S\K_z\S^T$ is not necessarily the Kronecker product of two smaller matrices. **Remark 2**. The KKMCEX solution in , differs from that obtained as the solution of (\[eq:factkerw\]). 
On the one hand, the loss in  can be derived from the factorization-based one by using the Kronecker product kernel $\K_y\otimes\K_x$ and $\gam=\text{vec}(\B\C^T)$ to arrive at $$\begin{aligned} & {\left|\left|P_\Omega(\M - \K_x\B\C^T\K_y)\right| \right|_\text{F}^2} \nonumber\\& \:\: = {\left|\left|\mb - \S(\K_y\otimes\K_x)\text{vec}(\B\C^T)\right|\right|^2_2}.\end{aligned}$$ One difference between the two loss functions is that (\[eq:mcker\]) does not explicitly limit the rank of the recovered matrix $\hat{\F}=\text{unvec}({\hat{\v}_K})$ since it has $NL$ degrees of freedom through $\hat{\gam}$, while in (\[eq:factkerw\]) the rank of $\hat{\F}$ cannot exceed $p$ since $\B$ and $\C$ are of rank $p$ at most. In fact, the low-rank property is indirectly promoted in (\[eq:mcker\]) through the kernel matrices. Since $\text{rank}(\F)\leq\min(\text{rank}(\K_x),\text{rank}(\K_y))$, we can limit the rank of $\hat{\F}$ by selecting rank deficient kernels. On the other hand, the regularization terms in (\[eq:factkerw\]) and (\[eq:mcker\]) play a different role in each formulation. The regularization in (\[eq:factkerw\]) promotes smoothness on the columns of the estimated factor matrices $\{\hat{\W},\hat{\H}\}$; or, in other words, similarity between the rows of $\{\hat{\W},\hat{\H}\}$ as measured by $\kapx$ and $\kapy$. On the contrary, the regularization in (\[eq:mcker\]) promotes smoothness on $\hat{\v}$, which is tantamount to promoting similarity between the entries of $\hat{\F}$ in accordance with $\kapz$. KKMCEX error analysis --------------------- In order to assess the performance of KKMCEX we will rely on the mean-square error $$\label{eq:risk} MSE := {\mathbb{E}_{\bm e}\{ ||\v - \hat{\v}_K||^2_2\}}$$ where ${\mathbb{E}_{\bm e}\{ \cdot\}}$ denotes the expectation with respect to $\e$. Before we proceed, we will outline Nyström’s approximation. **Definition 1**.
Given a kernel matrix $\K$ and a binary sampling matrix $\S$ of appropriate dimensions, the Nyström approximation [@drineas2005nystrom] of $\K$ is $\T = \K\S^T(\S\K\S^T)^{\dagger}\S\K$, and the regularized Nyström approximation is $$\label{eq:nystreg} \tilde{\T} = \K\S^T(\S\K\S^T + \mu\I)^{-1}\S\K.$$ Nyström’s approximation is employed to reduce the complexity of standard kernel regression problems such as the one in (\[eq:rralp\]). Instead of $\K$, the low-rank approximation $\T$ is used to reduce the cost of inverting large-size matrices using the MIL [@alaoui2015fast]. While it is known that the best low-rank approximation to a matrix is obtained from its top eigenvectors, Nyström’s approximation is cheaper. Using Def. 1, the following lemma provides the bias and variance of the KKMCEX estimator in (\[eq:zKKMCEX\]): \[th:biasvar\] Given the kernel matrix $\K_z$ and its regularized Nyström approximation $\tilde{\T}_z$ with $\mu>0$, the MSE of the KKMCEX estimator is $$\begin{aligned} \label{eq:lem1} \text{MSE} &= {\left|\left|(\K_z-\tilde{\T}_z)\gam\right|\right|^2_2} + {\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z - \tilde{\T}_z)\S^T\bar{\e}\right|\right|^2_2}\}} \end{aligned}$$ where the first term accounts for the bias and the second term accounts for the variance. Lemma \[th:biasvar\] shows that the MSE of the KKMCEX can be expressed in terms of $\tilde{\T}_z$; see proof in the Appendix. 
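A compact illustration of Def. 1 follows, with a random PSD kernel and illustrative sizes: the regularized Nyström approximation has rank at most $S$ and never exceeds $\K$ in the positive semidefinite order.

```python
import numpy as np

rng = np.random.default_rng(4)
n, s, mu = 30, 10, 0.1
A = rng.standard_normal((n, n)); K = A @ A.T      # PSD kernel matrix
cols = rng.choice(n, size=s, replace=False)
Smat = np.zeros((s, n)); Smat[np.arange(s), cols] = 1.0

KS = K @ Smat.T
# Regularized Nystrom approximation of K from s sampled columns.
T_reg = KS @ np.linalg.solve(Smat @ KS + mu * np.eye(s), KS.T)

assert np.linalg.matrix_rank(T_reg) <= s            # low rank by construction
assert np.linalg.eigvalsh(K - T_reg).min() > -1e-6  # K - T_reg is PSD
```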
Knowing that the 2-norm satisfies ${\left|\left|\A\right|\right|^2_2}\leq{\left|\left|\A\right| \right|_\text{F}^2}$, we have $$\begin{aligned} &{\left|\left|(\K_z-\tilde{\T}_z)\gam\right|\right|^2_2}+ {\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z - \tilde{\T}_z)\S^T\bar{\e}\right|\right|^2_2}\}} \nonumber\\&\:\:\leq {\left|\left|(\K_z-\tilde{\T}_z)\right| \right|_\text{F}^2}\left({\left|\left|\gam\right|\right|^2_2} +{\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|\S^T\bar{\e}\right|\right|^2_2}\}}\right).\end{aligned}$$ Consequently, the upper bound on the MSE is proportional to the approximation error of $\tilde{\T}_z$ to $\K_z$. This suggests selecting $\{m_{i,j}\}_{(i,j)\in\Omega}$ so that this approximation error is minimized; see also [@alaoui2015fast] where $\Omega$ is chosen according to the so-called leverage scores of $\K_z$ in order to minimize the regression error. The next theorem uses Lemma 1 to upper bound the MSE in ; see the Appendix for its proof. \[th:mse\] Let $\sigma_{NL}$ be the maximum eigenvalue of a nonsingular $\K_z$, and $\tilde{\gam}:=\L^T\gam$, where $\L$ is the eigenvector matrix of $\K_z-\tilde{\T}_z$. If $\e$ is a zero-mean vector of iid Gaussian random variables with covariance matrix $\nu^2\I$, the MSE of the KKMCEX estimator is bounded as $$\label{eq:msebound} MSE \leq \frac{\mu^2\sigma_{NL}^2}{(\sigma_{NL}+\mu)^2}\sum _{i=1}^{S}\tilde{\gam_i}^2 + \sigma^2_{NL}\sum_{i=S+1}^{NL}\tilde{\gam}_i^2+\frac{S\nu^2 \sigma_{NL}^2}{\mu^2}.$$ Considering the right-hand side of (\[eq:msebound\]), the first two terms correspond to the bias, while the last term is related to the variance. We observe that when $\M$ is fully observed, that is, $S=NL$, the bias can be made arbitrarily small by having $\mu\rightarrow 0$. It is also of interest to assess how the MSE bound behaves as $S$ increases. 
Considering $\mu=S\mu'$ and fixed values in $(0,\infty)$ for $\mu'$, $||\tilde{\gam}_i||^2$ and $\sigma_{NL}$[^2], the bias term reduces to $$\begin{aligned} \label{eq:biasboundtext} \frac{S^2\mu'^2\sigma_{NL}^2}{(\sigma_{NL}+S\mu')^2}\sum _{i=1}^{S}\tilde{\gam_i}^2 + \sigma^2_{NL}\sum_{i=S+1}^{NL}\tilde{\gam}_i^2\;. \end{aligned}$$ We observe in that as $S$ increases, terms move from the second summation to the first. Therefore, whether the bias term grows or diminishes depends on the multiplication factors in front of the two summations. Since $\frac{S^2\mu'^2}{(\sigma_{NL}+S\mu')^2} \leq 1$ the bias term in decreases with $S$. On the other hand, the variance term becomes $\frac{\nu^2 \sigma_{NL}^2}{S\mu'^2}$ and decays with $S$ as well. As a result, the MSE bound in Theorem \[th:mse\] decays up until $S=NL$. Ridge regression MCEX {#sec:RRMCEX} ===================== Although the KKMCEX algorithm is fast when $S$ is small, the size of the matrix to be inverted in (\[eq:gamest\]) grows with $S$, hence increasing the computational cost. Available approaches to reducing the computational cost of kernel regression methods are centered around the idea of approximating the kernel matrix. For instance, [@alaoui2015fast] uses Nyström’s approximation, that our performance analysis in Section IV was based on, whereas [@yang2015randomized] relies on a sketch of $\K_z$ formed by a subset of its columns, hence reducing the number of regression coefficients; see also [@avron2017faster], where the kernel function is approximated by the inner product of random finite-dimensional feature maps, which also speeds up the matrix inversion. In this section, we reformulate the KKMCEX of Section \[sec:kmr\] to incorporate a low-rank approximation of $\K_z$ in order to obtain a reduced complexity estimate for $\bm \v$. Moreover, we also develop an online method based on this reformulation. Recall from Eq. 
(\[eq:kerdot\]) that a kernel can be viewed as the inner product of vectors mapped to a feature space $\mathcal{F}_z$, namely $\kapz((x_i,y_j),(x_n,y_l)) = \langle \phi_z(x_i,y_j),\phi_z(x_n,y_l)\rangle_{\mathcal{F}_z}$. Let $\tilde{\phi}_z:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^d$ be a feature map approximating $\kappa_z$ so that $$\label{eq:kzfeatapprox} \kapz((x_i,y_j),(x_n,y_l)) \simeq \langle \tilde{\phi}_z(x_i,y_j),\tilde{\phi}_z(x_n,y_l)\rangle.$$ Then, we define the $NL\times d$ feature matrix $\tbphi_z:=[\tilde{\phi}_z(x_1,y_1),\allowbreak\tilde{\phi}_z(x_2,y_1),\ldots,\tilde{\phi}_z(x_N,y_L)]^T$ and form $\tK_z=\tbphi_z\tbphi^T_z$. Note that $\tK_z$ is a rank-$d$ approximation of $\K_z$, and that the equality $\K_z=\tK_z$ is only feasible when $\text{rank}(\K_z)\leq d$. Consider $\tbphi_x=[\tilde{\phi}_x(x_1),\ldots,\tilde{\phi}_x(x_N)]^T$ and $\tbphi_y=[\tilde{\phi}_y(y_1),\ldots,\tilde{\phi}_y(y_L)]^T$, where $\tilde{\phi}_x:\mathcal{X}\rightarrow \mathbb{R}^{d_x}$ and $\tilde{\phi}_y:\mathcal{Y}\rightarrow \mathbb{R}^{d_y}$, as the feature matrices forming low-rank approximations to $\K_x$ and $\K_y$, respectively. Since $\K_z=\K_y\otimes\K_x$ in KKMCEX, a prudent choice is $\tbphi_z=\tbphi_y\otimes \tbphi_x$. In the next section we will present means of constructing the maps $\{\tbphi_x,\tbphi_y,\tbphi_z\}$. Since $\tK_z$ is a valid kernel matrix, upon replacing $\K_z$ in with $\tK_z$, the observation model reduces to $$\bar{\m} = \S\tbphi_z\tbphi_z^T\gam + \tilde{\e},$$ where $\tilde{\e} = \bar{\e} + \S(\K_z-\tK_z)\gam$.
With this model, the weights in (\[eq:mcker\]) are obtained as $$\label{eq:mcfeat} \hat{\gam} = \operatorname*{\arg\,\min}_{\gam\in \mathbb{R}^{NL}} {\left|\left|\bar{\m} - \S\tbphi_z\tbphi^T_z\gam\right|\right|^2_2} + \mu\gam^T\tbphi_z\tbphi^T_z\gam.$$ Letting $\bxi:=\tbphi_z^T\gam$ and substituting into (\[eq:mcfeat\]), we arrive at $$\label{eq:mcridge} \hxi=\operatorname*{\arg\,\min}_{\bxi\in \mathbb{R}^d} {\left|\left|\bar{\m} - \S\tbphi_z\bxi\right|\right|^2_2} + \mu{\left|\left|\bxi\right|\right|^2_2}$$ which admits the closed-form solution $$\label{eq:zeta} \hat{\bxi} = (\tbphi_z^T\S^T\S\tbphi_z+\mu\I)^{-1}\tbphi_z^T \S^T\bar{\m}.$$ Using $\hat{\bxi}$, we obtain $\hat{\v}_{R}=\tbphi_z\hat{\bxi}$ as the *ridge regression MCEX* (RRMCEX) estimate. Using the MIL  on , it follows that $$\begin{aligned} \label{eq:zreqzk} \hat{\bxi}=\tbphi_z^T \S^T(\S\tbphi_z\tbphi_z^T\S^T+\mu\I)^{-1}\bar{\m} \end{aligned}$$ and thus, $$\label{eq:zreqzk2} \hat{\v}_R=\tbphi_z\hat{\bxi}=\tK_z \S^T(\S\tK_z^T\S^T+\mu\I)^{-1}\bar{\m}.$$ Therefore, shows that $\hat{\v}_R$ is equivalent to the KKMCEX solution $\hat{\v}_K$ in after replacing $\K_z$ by its low-rank approximation $\tK_z$. For error-free approximation, $\K_z=\tbphi_z\tbphi_z^T$, while $\hat{\bxi}$ in can be viewed as the primal solution to the optimization problem in , and $\hat{\gam}$ in as its dual [@shawe]. Still, obtaining $\hat{\bxi}$ requires multiplying two $d\times S$ matrices and inverting a $d\times d$ matrix, which incurs computational cost $\mathcal{O}(d^2S)$ when $S\geq d$, and $\S\tbphi_z$ is obtained at cost $\mathcal{O}(dS)$. Thus, the cost of RRMCEX grows linearly with $S$ in contrast to KKMCEX that increases with $S^3$. By choosing an appropriate feature map so that $d\ll S$, it is possible to control the computational cost of calculating $\hat{\bxi}$.
However, the reduced computational cost attained by selecting a small $d$ might come at the price of an approximation error to $\K_z$, which correspondingly increases the estimation error of $\hat{\v}_R$. The selection of a feature matrix to minimize this error and further elaboration on the computational cost are given in Section \[sec:choosing\]. Online RRMCEX {#sec:oRRMCEX} ------------- Online methods learn a model by processing one datum at a time. An online algorithm often results when the objective can be separated into several subfunctions, each depending on one or multiple data. In the context of MC, an online implementation updates $\hat{\F}$ every time a new entry $\M_{i,j}$ becomes available. If we were to solve  each time a new observation became available, inverting an $S\times S$ matrix per iteration would result in an overall prohibitively high computational cost. Still, the cost of obtaining an updated solution per observation can stay manageable using online kernel regression solvers that fall into three categories [@van2014online]: dictionary learning, recursive regression and stochastic gradient descent based. Akin to [@lu2016large; @sheikholeslami2018], we will pursue here the SGD. Consider rewriting  entrywise as $$\label{eq:mcrr} \hat{\bxi}=\operatorname*{\arg\,\min}_{\bxi\in\mathbb{R}^d} \sum_{(i,j)\in \Omega}\left[m_{i,j} - \tilde{\phi}_z^T(x_i,y_j)\bxi\right]^2 + \mu{\left|\left|\bxi\right|\right|^2_2}.$$ With $n$ denoting each scalar observation, SGD iterations form a sequence of estimates $$\label{eq:sgd} \hat{\bxi}^n = \hat{\bxi}^{n-1} - t_n\left[\tilde{\phi}_z(x_i,y_j)(\tilde{\phi}^T_z(x_i,y_j)\hat{\bxi}^{n-1} - m_{i,j})+\mu\hat{\bxi}^{n-1}\right]$$ where $t_n$ is the step size, $n=1,\ldots,S$ and the tuple $(i,j)$ denotes the indices of the entry revealed at iteration $n$. By properly selecting $t_n$, the sequence $\hat{\bxi}^n$ will converge to  at a per-iteration cost of $\mathcal{O}(d)$ [@bottou2012stochastic].
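A bare-bones version of the SGD recursion reads as follows; synthetic features, noiseless observations, and a constant step size are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
d, S, mu, t = 8, 200, 0.01, 0.01
Phi = rng.standard_normal((S, d))        # feature vectors of the observed tuples
xi_true = rng.standard_normal(d)
m = Phi @ xi_true                        # noiseless observations

xi = np.zeros(d)
for epoch in range(30):
    for n in rng.permutation(S):         # one observation per update
        phi = Phi[n]
        grad = phi * (phi @ xi - m[n]) + mu * xi
        xi -= t * grad                   # SGD step with constant step size

rel_err = np.linalg.norm(Phi @ xi - m) / np.linalg.norm(m)
```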
Apart from updating all entries in the matrix,   can also afford a simple distributed implementation using e.g., the algorithms in [@schizas]. **Remark 3**. Online algorithms for MC can be designed to solve the factorization-based formulation from (\[eq:mclag\]) rewritten as $$\begin{aligned} \operatorname*{\arg\,\min}_{\substack{\W\in\mathbb{R}^{N\times p}\\\H\in\mathbb{R}^{L\times p}}} \!\sum_{(i,j)\in\Omega} \!\left(\!(m_{i,j} \!-\! \w_i^T\h_j)^2 \!+\! {\mu \over |\Omega^w_i|}{\left|\left|\w_i\right|\right|^2_2} \!+\! {\mu\over |\Omega^h_j|}{\left|\left|\h_j\right|\right|^2_2}\!\right)\end{aligned}$$ where $\w_i^T$ and $\h_j^T$ denote the $i^{\text{th}}$ and $j^{\text{th}}$ rows of $\W$ and $\H$, respectively, $\Omega^w_i=\{j\::\: (i,j) \in \Omega \}$, and $\Omega^h_j=\{i\::\: (i,j) \in \Omega \}$. When $m_{i,j}$ becomes available, algorithms such as SGD and online ALS update the rows $\{\w_i^T$, $\h_j^T\}$ of the factor matrices. This procedure can also be applied to the kernel MCEX formulation in , that solves for $\W$ and $\H$, although the rows $\{\w_i^T$, $\h_j^T\}$ cannot be updated independently due to the involvement of the kernel matrices [@zhou2012]. Then, all entries in the $i^\text{th}$ row and $j^\text{th}$ column of $\hat{\F}$ are also updated per iteration, as opposed to our method which updates the whole matrix. Choosing the kernel matrices {#sec:choosing} ============================ In this section, we provide pointers on how to build matrices $\K_z$ for KKMCEX and $\tbphi_z$ for RRMCEX when prior information about either the matrix $\F$, or the input spaces $\mathcal{X}$ and $\mathcal{Y}$, is available. Kernels based on the graph Laplacian {#sec:gsmc} ------------------------------------ Suppose that the columns and rows of $\F$ lie on a graph, that is, each entry of a column or row vector is associated with a node on a graph that encodes the interdependencies with entries in the same vector.
Specifically, we define an undirected weighted graph $\mathcal{G}_x = (\mathcal{X}, \mathcal{E}_x, \A_x)$ for the columns of $\F$, where $\mathcal{X}$ is the set of vertices with $|\mathcal{X}|=N$, $\mathcal{E}_x \subseteq \mathcal{X}\times\mathcal{X}$ is the set of edges connecting the vertices, and $\A_x \in \mathbb{R}^{N\times N}$ is a weighted adjacency matrix. Then, functions $\{f_l: \mathcal{X} \rightarrow \mathbb{R}\}^{L}_{l=1}$ are what is recently referred to as a graph signal [@shuman2013emerging], that is, a map from the set $\mathcal{X}$ of vertices into the set of real numbers. Likewise, we define a graph $\mathcal{G}_y = (\mathcal{Y}, \mathcal{E}_y, \A_y)$ for the rows of $\F$, i.e., $\{g_n: \mathcal{Y} \rightarrow \mathbb{R}\}^{N}_{n=1}$, which are also graph signals. In a matrix of user-movie ratings for instance, we would have two graphs: one for the users and one for the movies. The graphs associated with the columns and rows yield the underlying structure of $\F$ that can be used to generate a pair of kernels. Using $\A_x$ and $\A_y$, we can form the corresponding graph Laplacian as $\L_x:=\text{diag}(\A_x\bm 1)-\A_x$ and likewise for $\L_y$, that can serve as kernels. A family of graphical kernels results using a monotonic inverse function $r^\dagger(\cdot)$ on the Laplacian eigendecomposition as [@smola2003kernels] $$\label{eq:kerlap} \K = \Q r^{\dagger}(\bm\Lambda)\Q^T.$$ A possible choice of $r(\cdot)$ is the Gaussian radial basis function, which generates the diffusion kernel $r(\lambda_i)=e^{\eta\lambda_i}$, where $\lambda_i$ is the $i^\text{th}$ eigenvalue of $\L$, and $\eta$ a weight parameter. Alternatively, one can choose just the linear function $r(\lambda_i)=1 + \eta\lambda_i$, which results in the regularized Laplacian kernel. By applying different weighting functions to the eigenvalues of $\L_x$ and $\L_y$, we promote smoother or more rapidly changing functions for the columns and rows of $\hat{\F}$ [@romerospace]. 
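For concreteness, the following sketch builds diffusion and regularized Laplacian kernels from a random unweighted graph; the inverse weighting $r^\dagger(\lambda)=e^{-\eta\lambda}$, resp. $1/(1+\eta\lambda)$, is applied to the Laplacian eigenvalues, and the graph itself is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
N, eta = 12, 1.0
A = (rng.random((N, N)) < 0.3).astype(float)   # Erdos-Renyi style adjacency
A = np.triu(A, 1); A = A + A.T                 # symmetric, zero diagonal
Lap = np.diag(A.sum(axis=1)) - A               # graph Laplacian

lam, Q = np.linalg.eigh(Lap)
K_diff = Q @ np.diag(np.exp(-eta * lam)) @ Q.T       # diffusion kernel
K_reg = Q @ np.diag(1.0 / (1.0 + eta * lam)) @ Q.T   # regularized Laplacian kernel

# Both weightings yield positive definite, hence valid, kernel matrices.
assert np.linalg.eigvalsh(K_diff).min() > 1e-8
assert np.linalg.eigvalsh(K_reg).min() > 1e-8
```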
While $\K_x$ and $\K_y$ are chosen as Laplacian kernels, this would not be the case for $\K_z = \K_y\otimes\K_x$ used in our KKMCEX context since it does not result from applying $r^\dagger(\cdot)$ to a Laplacian matrix. However, since $\K_x=\Q_x\Sig_x\Q_x^T$ and $\K_y=\Q_y\Sig_y\Q_y^T$, the eigendecomposition of $\K_z$ is $\K_z=(\Q_y\otimes\Q_x)(\Sig_y\otimes\Sig_x)(\Q^T_y \otimes\Q^T_x)$, and the notions of frequency and smoothness carry over. In other words, we are still promoting similarity among entries that are connected on the row and columns graphs through $\K_z$. A key attribute in graph signal processing is that of “graph bandlimitedness", which arises when a signal can be generated as a linear combination of a few eigenvectors of the Laplacian matrix. Therefore, a bandlimited graph signal belongs to an RKHS that is spanned by a bandlimited kernel [@romero] that suppresses some of the frequencies of the graph. A bandlimited kernel is derived from the Laplacian matrix of a graph as in , using $$\label{eq:bandlimr} r(\lambda_i) = 0 \:~~ \forall i\notin\Psi,$$ where $\Psi\subseteq\mathbb{N}$ contains the indices of frequencies not to be suppressed. As mentioned earlier, we define a graph for the columns and a graph for the rows of $\F$. Therefore, graph signals contained in the columns and rows may be bandlimited with different bandwidths. In order to form $\K_z$ in our KKMCEX approach, we will need to apply different weighting functions akin to the one in  to the kernel matrices $\K_x$ and $\K_y$. Kernels from known basis or features {#sec:kerfeat} ------------------------------------ In some applications the basis that spans the columns or rows of the unobserved matrix is assumed known, although this basis matrix need not be a kernel. In order to be able to include such basis into the kernel framework, we need to generate kernel functions that span the same spaces as the columns and rows of $\F$.
Consider the input sets $\{\mathcal{X},\mathcal{Y}\}$ whose entries can be mapped into a Euclidean space through feature extraction functions $\theta_x:\mathcal{X}\rightarrow\mathbb{R}^{t_x}$ and $\theta_y:\mathcal{Y}\rightarrow\mathbb{R}^{t_y}$ such that $\theta_x(x_i):=\x_i$ and $\theta_y(y_j):=\y_j$. For instance, in a movie recommender system where the users are represented in $\mathcal{X}$ and the movies in $\mathcal{Y}$, each coordinate of $\y_j$ could denote the amount of action, drama and nudity in the movie, and $\x_i$ would contain weights according to the user’s preference for each attribute. We may then use the feature vectors to determine the similarities among entries in $\mathcal{X}$ and $\mathcal{Y}$ by means of kernel functions. Let $\X :=[\x_1,\ldots,\x_N]^T$ and $\Y :=[\y_1,\ldots,\y_L]^T$. If $\text{span}(\F)\subseteq\text{span}(\X)$ and $\text{span}(\F^T)\subseteq\text{span}(\Y)$, we may conveniently resort to the linear kernel. The linear kernel amounts to the dot product in Euclidean spaces, which we use to define the pair $\kappa_x(x_i,x_j) = \x_i^T\x_j$ and $\kappa_y(y_i,y_j) = \y_i^T\y_j$. This leads to a straightforward construction of the kernel matrices for KKMCEX as $\K_x=\X\X^T$ and $\K_y=\Y\Y^T$. Besides the linear kernel, it is often necessary to use a different kernel class for each $\kappa_x$ and $\kappa_y$ chosen to better fit the spaces spanned by the rows and columns of $\F$. For instance, the Gaussian kernel defined as $\kapx(x_i,x_j) = \text{exp}\{-{\left|\left|\x_i-\x_j\right|\right|^2_2}/(2\eta)\}$, is a widely used alternative in the regression of smooth functions. Feature maps for RRMCEX ----------------------- Aiming to construct $\tbphi_z$ that approximates $\K_z$ at reduced complexity, we choose $\tilde{\phi}_z$ with $d\ll S$.
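As an example of kernels induced by known features, assume feature matrices $\X$ and $\Y$ with $\text{span}(\F)\subseteq\text{span}(\X)$ and $\text{span}(\F^T)\subseteq\text{span}(\Y)$; then linear kernels suffice to represent $\F$ exactly in the model $\F=\K_x\B\C^T\K_y$. The dimensions and the particular coefficient choice below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
N, L, tx, ty = 10, 8, 4, 3
X = rng.standard_normal((N, tx))         # feature vectors theta_x(x_i) as rows
Y = rng.standard_normal((L, ty))         # feature vectors theta_y(y_j) as rows
Kx, Ky = X @ X.T, Y @ Y.T                # linear kernel matrices

G = rng.standard_normal((tx, ty))
F = X @ G @ Y.T                          # columns/rows lie in span(X)/span(Y)

# One valid coefficient matrix Gamma = B C^T reproducing F exactly.
Gam = np.linalg.pinv(Kx) @ X @ G @ Y.T @ np.linalg.pinv(Ky)
assert np.allclose(Kx @ Gam @ Ky, F)
```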
To approximate linear kernels, let $\tilde{\phi}_x(x_i)=\x_i$ and $\tilde{\phi}_y(y_j)=\y_j$ so that we can set $\tilde{\phi}_z(x_i,y_j)=\tilde{\phi}_y(y_j)\otimes\tilde{\phi}_x(x_i)$ and $\tbphi_z=\Y\otimes\X$. Note that in this case $\tbphi_z\tbphi_z^T$ yields a zero-error approximation to $\K_z=(\Y\otimes\X)(\Y\otimes\X)^T$, which renders the KKMCEX and RRMCEX solutions equivalent. On occasion, $\X$ and $\Y$ may have large column dimension, thus rendering $\Y\otimes\X$ undesirable as a feature matrix in RRMCEX. In order to overcome this hurdle, we build an approximation to the column space of $\Y\otimes\X$ from the SVD of $\X$ and $\Y$. Consider the SVDs of matrices $\X = \U_x\D_x\V_x^T$ and $\Y = \U_y\D_y\V_y^T$, to obtain $\Y\otimes\X = (\U_y\otimes\U_x)(\D_y\otimes\D_x)(\V_y^T\otimes\V_x^T)$. Let $\tbphi_z=\U_d\D_d$, where $\U_d$ and $\D_d$ respectively hold the top $d$ singular vectors and singular values of $\Y\otimes\X$. The SVD has cost $\mathcal{O}(Nt_x^2)$ for $\X$ and $\mathcal{O}(Lt_y^2)$ for $\Y$. Comparatively, the cost of building $\K_x$ and $\K_y$ for the linear kernel is $\mathcal{O}(N^2t_x)$ and $\mathcal{O}(L^2t_y)$, respectively. Therefore, choosing RRMCEX over KKMCEX in this case incurs no extra overhead. When a function other than the linear kernel is selected, obtaining an approximation is more complex. To approximate a Gaussian kernel on $\mathcal{X}\times\mathcal{X}$, the vectors $\{\tilde{\phi}_x(\x_i)\}^N_{i=1}$ can be obtained by means of Taylor series expansion [@cotter2011explicit] or random Fourier features [@avron2017faster], which can also approximate Laplacian, Cauchy and polynomial kernels [@avron2017faster; @rahimi2008random]. Therefore, the maps $\tilde{\phi}_x$ and $\tilde{\phi}_y$ must be designed according to the chosen kernels. In some instances, such as when dealing with Laplacian kernels, $\X$ and $\Y$ are not available and we are only given $\K_x$ and $\K_y$. 
We are then unable to derive approximations to the kernel matrices by means of maps $\tilde{\phi}_x$ and $\tilde{\phi}_y$. Nevertheless, we can still derive an adequate $\tbphi_z$ to approximate $\K_z$. Indeed, Mercer’s Theorem asserts that there are eigenfunctions $\{q_n\}^{NL}_{n=1}$ in $\mathcal{H}_z$ along with a sequence of nonnegative real numbers $\{\sigma_n\}^{NL}_{n=1}$, such that $$\label{eq:mercer} \kapz((x_i,y_j),(x_k,y_l)) = \sum^{NL}_{n=1}\sigma_nq_n(x_i,y_j)q_n(x_k,y_l).$$ These can be found from the eigendecomposition $\K_z=\Q_z\Sig_z\Q_z^T$, where $q_n$ is the $n\th$ eigenvector in $\Q_z$ and $\sigma_n$ the $n\th$ eigenvalue in $\Sig_z$. If $\K_z$ is low rank, we can construct $\tbphi_z=\Q_d\Sig^{{1\over 2}}_d$, where $\Q_d$ and $\Sig_d$ respectively hold the top $d$ eigenvectors and eigenvalues of $\K_z$. Note that, since $\K_z=(\Q_y\otimes\Q_x)(\Sig_y\otimes\Sig_x)(\Q^T_y \otimes\Q^T_x)$, we only need to eigendecompose the smaller matrices $\K_x$ and $\K_y$ at complexity $\mathcal{O}(N^3+L^3)$. In some cases, however, such as when using Laplacian kernels, the eigendecompositions are readily available, and $\tbphi_z$ can be built at a markedly reduced cost.

Numerical tests
===============

In this section, we test the performance of the KKMCEX, RRMCEX and online (o)RRMCEX algorithms developed in Sections \[sec:kmr\], \[sec:RRMCEX\] and \[sec:oRRMCEX\], respectively, and further compare them to the solutions obtained with ALS [@jain2013low] and SGD [@zhou2012]. We run the tests on synthetic and real datasets, with and without noise, and measure the signal-to-noise ratio (SNR) as $\frac{{\left|\left|\F\right| \right|_\text{F}^2}}{{\left|\left|\E\right| \right|_\text{F}^2}}$. The algorithms are run until convergence over $N_{r}=50$ realizations, with different percentages of observed entries, denoted by $P_{s}=100S/(NL)$, taken uniformly at random per realization.
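The RRMCEX runs below use $\tbphi_z=\Q_d\Sig^{{1\over 2}}_d$ built from the eigendecompositions of $\K_x$ and $\K_y$ as described above; a minimal sketch with toy positive definite kernels:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, d = 6, 5, 4                           # toy sizes
A = rng.standard_normal((N, N))
B = rng.standard_normal((L, L))
K_x, K_y = A @ A.T, B @ B.T                 # toy PSD kernel matrices

# Eigendecompose the small factors instead of K_z = kron(K_y, K_x)
w_x, Q_x = np.linalg.eigh(K_x)
w_y, Q_y = np.linalg.eigh(K_y)

# Eigenvalues of K_z are all products w_y[i] * w_x[j]; keep the top d
w = np.outer(w_y, w_x).ravel()
top = np.argsort(w)[::-1][:d]
iy, ix = np.unravel_index(top, (L, N))
Q_d = np.stack([np.kron(Q_y[:, i], Q_x[:, j]) for i, j in zip(iy, ix)], axis=1)
Phi_z = Q_d * np.sqrt(w[top])               # tilde Phi_z = Q_d Sigma_d^{1/2}

K_z = np.kron(K_y, K_x)                     # only formed here to verify
```

The last line exists only to check the selected eigenvalues against a direct eigendecomposition of $\K_z$; the point of the construction is precisely to avoid forming the Kronecker matrix.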
As figure of merit, we use $$\text{NMSE} = \frac{1}{N_{r}}\sum\limits_{i=1}^{N_{r}}\frac{{\left|\left|\hat{\F}_i - \F\right| \right|_\text{F}^2}}{{\left|\left|\F\right| \right|_\text{F}^2}}$$ where $\hat{\F}_i$ is the estimate at realization $i$. We show results for the optimal combination of regularization and kernel parameters, found via grid search. Finally, ALS and SGD are initialized with a product of two random factor matrices, and both are stopped once they converge.

Synthetic data {#sec:synth}
--------------

We first test the algorithms on synthetic data. The $250\times250$ data matrix is generated as $\F = \K_x\bm \Gamma\K_y$, where $\bm \Gamma$ is a $250\times250$ matrix of Gaussian random deviates. For $\K_x$ and $\K_y$ we use Laplacian diffusion kernels with $\eta=1$ based on Erdös-Rényi graphs with unweighted (binary) adjacency matrices, in which any two vertices are connected with probability 0.03. The resulting $\F$ is approximately low-rank, with the sum of the first 10 eigenvalues accounting for 96% of the total eigenvalue sum. Therefore, we set the rank bound $p$ to 10 for the ALS and SGD algorithms. Whether $\F$ is approximately or exactly low-rank did not affect our results, as they were similar for matrices with an exact rank of 10. For KKMCEX, $\K_z=\K_y\otimes \K_x$, and for RRMCEX $\tbphi_z=\Q_d\Sig_d^{1\over 2}$, where $\Q_d$ contains the top 250 eigenvectors of $\K_z$, and $\Sig_d$ the corresponding top 250 eigenvalues. Fig. \[fig:erd\] shows the simulated NMSE when $\M$ is noiseless (a) or noisy (b). We deduce from Fig. \[fig:erd\]a that all algorithms except SGD achieve a very small NMSE, below 0.003 at $P_s=1\%$ that falls to 0.0007 at $P_s=10\%$. Of the three, KKMCEX has the smallest error except at $P_s=1\%$, where RRMCEX performs best. Although the error drops below 0.005 for SGD at $P_s > 4\%$, it is outperformed by the other algorithms by an order of magnitude. Fig.
\[fig:erd\]b shows the same results when Gaussian noise is added to $\F$ at $\text{SNR}=1$. We observe that KKMCEX and RRMCEX are matched and attain the lowest error, whereas ALS and SGD have larger errors across $P_s$. This corroborates that the regularization term, which smooths over all the entries rather than row- or column-wise, reduces the effect of the noise. Interestingly, RRMCEX is able to reduce the noise effect despite the bias it suffers from using only the top 250 eigenvalues of $\K_z$ out of a total of $62{,}500$. This is mainly due to the additive noise being evenly distributed across the eigenspace of $\K_z$. Therefore, by keeping only the eigenvectors associated with the top 250 eigenvalues of $\K_z$, we discard the dimensions in which the SNR is lower. Fig. \[fig:erdtime\] depicts the time needed for the algorithms to perform the simulations reported in Fig. \[fig:erd\]. We observe in Fig. \[fig:erdtime\]a that RRMCEX has an almost constant computation time, whereas the time for KKMCEX grows with $P_s$, as expected, since the size of the matrix to be inverted increases with $S$. On the other hand, ALS and SGD require less time than KKMCEX for the larger values of $P_s$, but are always outperformed by RRMCEX. Moreover, the ALS time decreases as $P_s$ increases because fewer iterations are required to converge to the minimum. Fig. \[fig:erdtime\]b suggests that the noise only impacts ALS, whose computation time rises considerably across all $P_s$. Overall, Figures \[fig:erd\] and \[fig:erdtime\] illustrate that RRMCEX has the best performance for the synthetic matrix both in terms of NMSE and computational cost.

Temperature measurements {#sec:real}
------------------------

In this case, $\F$ has size $150\times365$, comprising temperature readings taken by 150 stations over 365 days in 2002 in the United States[^3].
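The graphs underlying $\K_x$ and $\K_y$ here are built from inter-station distances and day indices, as detailed next; a toy sketch of the distance-based station adjacency (hypothetical coordinates, 2 nearest neighbours instead of 8, and Euclidean distances standing in for geodesic ones):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6                                        # hypothetical number of stations
coords = rng.standard_normal((N, 2))         # hypothetical station positions
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# Directed k-nearest-neighbour graph with 0/1 adjacency matrix P
P = np.zeros((N, N))
for i in range(N):
    P[i, np.argsort(dist[i])[1:3]] = 1       # skip self, connect 2 closest

# Symmetrise: P' = sign(P^T + P)
P_sym = np.sign(P + P.T)

# Weighted entries (A_x)_{ij} = exp(-N^2 d_ij / sum d_ij) on connected pairs
A_x = np.exp(-(N ** 2) * dist / dist.sum()) * P_sym
```

Restricting the exponential weights to connected pairs (the multiplication by `P_sym`) is an assumption of this sketch; the result is a symmetric weighted adjacency with zero diagonal.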
The columns and rows of $\F$ are modeled as graph signals with $\A_x$ and $\A_y$ for the graphs formed by the stations and the days of the year, respectively. We use Laplacian diffusion kernels both for $\K_x$ and $\K_y$, while $\K_z$ and $\tbphi_z$ are obtained as in the tests on synthetic data, except that $\tbphi_z$ is constructed with the top 150 eigenvectors of $\K_z$. The matrix $\A_x$ is obtained as in [@chen], where a graph $\mathcal{G}$ with unweighted adjacency matrix $\P$ is generated for the stations, and each station is a vertex connected to the 8 geographically closest stations. Next, we obtain the undirected graph $\mathcal{G}'$ with symmetric adjacency matrix $\P' =\text{sign}(\P^T+\P)$. Finally, the entries of $\A_x$ are constructed as $(\A_x)_{i,j}= \text{exp}(-\frac{N^2 d_{i,j}}{\sum_{i,j} d_{i,j}})$, where $\{d_{i,j}\}$ are geodesic distances on $\mathcal{G}$. To form $\A_y$, we adopt a graph in which each day is a vertex connected to the 10 preceding and 10 following days. Fig. \[fig:time\] shows the simulated tests for (a) the matrix of temperature readings, and (b) the same matrix with additive Gaussian noise at $\text{SNR}=1$. Fig. \[fig:time\]a demonstrates that KKMCEX achieves the lowest error for the first three data points, while afterwards ALS has a slight edge over KKMCEX. The real data matrix $\F$ is approximately low rank, since the sum of the first 10 singular values accounts for 75% of the total sum. This explains why RRMCEX fares worse than KKMCEX: since $\tbphi_z$ only contains the top 150 eigenvectors of $\K_z$, which is full rank, the vectorized data $\m$ lies in part outside the space spanned by $\tbphi_z$. Indeed, increasing the number of eigenvectors in $\tbphi_z$ results in a lower error, although the computational cost increases accordingly. Fig. \[fig:time\]b further demonstrates that the addition of noise has the least impact on RRMCEX, which attains the lowest error, slightly below KKMCEX.
On the other hand, ALS has a marginally higher error, whereas the gap between SGD and the other three methods remains. Fig. \[fig:timetime\] depicts the computational time for the results in Fig. \[fig:time\]a, which are similar to those obtained for the synthetic dataset.

Mushroom dataset
----------------

The Mushroom dataset[^4] comprises 8,124 labels and as many feature vectors. Each label indicates whether a sample is edible or poisonous, and each vector has 22 entries describing the shape, color, etc. of the mushroom sample. After removing items with missing features, we are left with 5,643 labels and feature vectors. Here, we solve a clustering problem in which $\F$ is a $5,643\times 5,643$ adjacency matrix, where $\F_{i,j} = 1$ if the $i^{\text{th}}$ and the $j^{\text{th}}$ mushroom samples belong to the same class (poisonous or edible), and $\F_{i,j} = -1$ otherwise. We encode the matrix that stacks the feature vectors via one-hot encoding to produce a $5,643\times 98$ binary feature matrix analogous to $\X$ in Section \[sec:kerfeat\]. We build the kernel matrix $\K_x$ from the Pearson correlation coefficients of the rows of $\X$, and let $\K_y=\K_x$. The feature matrix $\tbphi_z$ for RRMCEX is built using the top 3,000 left singular vectors of $\X\otimes\X$. Fig. \[fig:mushroom\]a shows the test results on the mushroom adjacency matrix from $S=2{,}000$ $(P_s=0.006\%)$ to $S=20{,}000$ $(P_s=0.063\%)$ in steps of 1,000 observations. KKMCEX and RRMCEX achieve similar NMSE, while SGD has an error one order of magnitude higher, and ALS outperforms both by around one order of magnitude. This difference with ALS arises because regression-based methods restrict the solution to the space spanned by the basis matrix. On the other hand, when solving the factorization-based formulation, the constraints $\W\in\mathcal{H}_x$, $\H\in\mathcal{H}_y$ are not enforced [@bazerque2013; @zhou2012].
Therefore, when the prior information encoded in the kernel matrices is imperfect, ALS might be able to find a factorization that better fits the data, at the cost of having $\hat{\W}\notin\mathcal{H}_x$ and $\hat{\H}\notin\mathcal{H}_y$. However, in Fig. \[fig:mushroom\]b we see that the computational cost for ALS and SGD is much higher than for KKMCEX and RRMCEX at the smaller values of $P_s$. On the other hand, the time for ALS decreases with $S$ because fewer iterations are needed to converge, whereas for KKMCEX and RRMCEX it increases with $S$.

Online MC
---------

In the online scenario, we compare the (o)RRMCEX algorithm with online (o)ALS and SGD. One observation is revealed per iteration at random, and all three algorithms process a single observation per iteration in a circular fashion. We run tests on both the synthetic and temperature matrices with $P_s=10\%$, that is, $S=6{,}250$ and $S=5{,}475$ observations, respectively, for a single realization. Fig. \[fig:sg\]a depicts the tests for the noiseless synthetic matrix. Clearly, (o)RRMCEX converges much faster than SGD and (o)ALS. Indeed, as opposed to SGD and (o)ALS, which require several passes over the data, (o)RRMCEX approaches the minimum in around $6{,}000$ iterations. Moreover, it achieves the smallest NMSE of 0.0004, below the 0.0011 obtained by SGD. Fig. \[fig:sg\]b shows the results for the temperature matrix without noise. Again, we observe that (o)RRMCEX converges the fastest to the minimum, whereas SGD requires many passes through the data before it starts descending, while (o)ALS converges much faster than with the synthetic data. Regarding the NMSE, (o)RRMCEX and SGD achieve the same minimum value. The tests on the Mushroom dataset are run with $S=10{,}000$ $(P_s=0.033\%)$ and $S=20{,}000$ $(P_s=0.063\%)$ observations, following the same procedure as with the synthetic and temperature datasets. Fig.
\[fig:mushonline\] shows results for the Mushroom adjacency matrix, with the error for $S=20{,}000$ plotted in solid lines and for $S=10{,}000$ in dotted lines. We observe that for $S=20{,}000$, (o)RRMCEX crosses the minimum of (o)ALS and SGD in 7 seconds, whereas (o)ALS and SGD converge to this minimum in 12 and 200 seconds, respectively. Afterwards, the curve for (o)RRMCEX keeps descending until an error of $0.012$ is reached. For $S=10{,}000$ the convergence time of (o)RRMCEX and SGD remains almost unchanged, whereas for (o)ALS it increases to 26 seconds. Moreover, the error of both (o)ALS and SGD grows much larger, whereas (o)RRMCEX exhibits just a small increase.

Conclusions
===========

In this paper, we have taken a comprehensive look at MC under the framework of RKHS. We have viewed the columns and rows of the data matrix as functions from an RKHS, and leveraged kernel theory to account for the available prior information on the contents of the sought matrix. Moreover, we have developed two estimation algorithms that offer simplicity and speed as their main advantages. When the number of observations is small, KKMCEX obtains the full matrix estimate by inverting a reduced-size matrix thanks to the Representer Theorem. On the other hand, when the number of observations is too large for KKMCEX to handle, RRMCEX can be employed instead in order to lower the computational cost with no impact on the recovery error when noise is present. In addition, RRMCEX can easily be turned into an online method implemented via SGD iterations. Compared to mainstream methods designed for the factorization-based formulation, namely ALS and SGD, our RRMCEX exhibited improved performance on simulated and real datasets. Our future research agenda includes improving both KKMCEX and RRMCEX through parallel and accelerated regression methods, as well as designing robust sampling strategies for MCEX formulated as a kernel regression.
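As a toy illustration of the online variant mentioned above, the following sketches plain SGD on a generic ridge objective $||\m-\bphi_z\gam||^2_2+\mu||\gam||^2_2$, processing one observation per iteration in a circular fashion (hypothetical sizes and step size; not the exact (o)RRMCEX update rule):

```python
import numpy as np

rng = np.random.default_rng(5)
S, d, mu, tau = 50, 4, 0.1, 0.05             # hypothetical sizes and step size
Phi = rng.standard_normal((S, d))            # rows are feature vectors phi_s
gamma_true = rng.standard_normal(d)
m = Phi @ gamma_true                         # noiseless observations

gamma = np.zeros(d)
for it in range(1000):                       # cycle over single observations
    s = it % S
    resid = Phi[s] @ gamma - m[s]
    # stochastic gradient of ||m - Phi gamma||^2 + mu ||gamma||^2, split over S terms
    gamma -= tau * (resid * Phi[s] + (mu / S) * gamma)

# Batch ridge solution for comparison
batch = np.linalg.solve(Phi.T @ Phi + mu * np.eye(d), Phi.T @ m)
```

With a small enough step size, the iterates settle in a neighbourhood of the batch ridge solution, which is what makes the single-observation updates usable in the online setting.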
Proof of Lemma 1
----------------

For the KKMCEX estimator (\[eq:zKKMCEX\]), the MSE is given as $$\label{eq:risk} MSE := {\mathbb{E}_{\bm e}\{ {\left|\left|\v - \hz_K\right|\right|^2_2}\}} = {\mathbb{E}_{\bm e}\{ {\left|\left|\v - \K_z\hgam\right|\right|^2_2}\}}.$$ Plugging the estimator from (\[eq:gamest\]) into (\[eq:risk\]) yields $$\begin{aligned} MSE &= {\mathbb{E}_{\bm e}\{ {\left|\left|\v-\K_z \S^T (\S\K_z\S^T + \mu\I)^{-1}(\S\v+\bar{\e})\right|\right|^2_2}\}}\nonumber \\ &= {\left|\left|(\I - \K_z \S^T (\S\K_z\S^T + \mu\I )^{-1}\S)\v\right|\right|^2_2}\nonumber \\ &+ {\mathbb{E}_{\bm e}\{ {\left|\left|\K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\bar{\e}\right|\right|^2_2}\}} \label{eq:biasvar}\end{aligned}$$ where we have used that $\mathbb{E}\{\e\}=\bm 0$. Further, the first and second terms in (\[eq:biasvar\]) are the bias and variance of the KKMCEX estimator, respectively. If we substitute $\v=\K_z\gam$ into the first term of (\[eq:biasvar\]), we obtain $$\begin{aligned} bias &= {\left|\left|(\I - \K_z\S^T (\S\K_z\S^T + \mu\I)^{-1}\S )\K_z\gam\right|\right|^2_2}\nonumber \\ &= {\left|\left|(\K_z - \K_z \S^T (\S\K_z\S^T + \mu\I)^{-1}\S\K_z )\gam\right|\right|^2_2}\nonumber \\ & = {\left|\left|(\K_z-\tilde{\T}_z)\gam\right|\right|^2_2}\label{eq:bias}\end{aligned}$$ where $\tilde{\T}_z$ is the regularized Nyström approximation of $\K_z$ in (\[eq:nystreg\]). On the other hand, the variance term is $$\begin{aligned} var &= {\mathbb{E}_{\bm e}\{ {\left|\left|\K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\bar{\e}\right|\right|^2_2}\}}\nonumber \\ &\mspace{-25mu}= \mathbb{E}\{{1\over\mu^2}\left|\left|\K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\right.\right.\nonumber\\&\left.\left.
\:\:\:(\mu\I+\S\K_z\S^T-\S\K_z\S^T)\bar{\e}\right|\right|_2^2\}\nonumber \\ &\mspace{-25mu}= {\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z\S^T - \K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\S\K_z\S^T)\bar{\e}\right|\right|^2_2}\}}\nonumber\\ &\mspace{-25mu}= {\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z - \K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\S\K_z)\S^T\bar{\e}\right|\right|^2_2}\}}\nonumber\\ &\mspace{-25mu}= {\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z - \tilde{\T}_z)\S^T\bar{\e}\right|\right|^2_2}\}}.\label{eq:quad}\end{aligned}$$ Adding the two terms in  and , we obtain the MSE in .

Proof of Theorem 2
------------------

Since $\K_z - \tilde{\T}_z$ appears in the bias and variance terms in Lemma 1, we will first derive an upper bound on its eigenvalues that will eventually lead us to a bound on the MSE. To this end, we will need a couple of lemmas.

\[lem:sim\] Given a symmetric matrix $\A\in\mathbb{R}^{N\times N}$ and a symmetric positive definite matrix $\B\in\mathbb{R}^{N\times N}$, it holds that $\lambda_k(\A\B) = \lambda_k(\B^{1\over 2}\A\B^{1\over 2})$; and also $\lambda_k(\A\B) \leq \lambda_k(\A)\lambda_N(\B)$.

Since $\B$ is symmetric and positive definite, we can write $\A\B = \B^{-{1\over 2}}(\B^{1\over 2}\A\B^{1\over 2})\B^{1\over 2}$. Therefore, $\A\B$ is similar to $\B^{{1\over 2}}\A\B^{1\over 2}$, and they both share the same eigenvalues. Let $\mathcal{U}\subset\mathbb{R}^{N}\setminus\{\bm 0\}$.
From the min-max theorem [@lax], the $k^{th}$ eigenvalue of $\A$ satisfies $$\lambda_k(\A) = \min_{\mathcal{U}} \left\{ \max_{\x\in \mathcal{U}} \frac{\x^T\A\x}{\x^T\x} \:|\: \text{dim}(\mathcal{U})=k \right\}.$$ Therefore, we have $$\begin{aligned} \lambda_k(\A\B) &= \lambda_k(\B^{1\over 2}\A\B^{1\over 2}) \nonumber\\&\mspace{-50mu}= \min_{\mathcal{U}} \left\{ \max_{\x\in \mathcal{U}} \frac{\x^T\B^{1\over 2}\A\B^{1\over 2}\x}{\x^T\x} \:|\: \text{dim}(\mathcal{U})=k \right\} \nonumber\\ &\mspace{-50mu}= \min_{\mathcal{U}} \left\{ \max_{\x\in \mathcal{U}} \frac{\x^T\B^{1\over 2}\A\B^{1\over 2}\x}{\x^T\B^{1\over 2}\B^{1\over 2}\x} \frac{\x^T\B\x}{\x^T\x} \:|\: \text{dim}(\mathcal{U})=k \right\}\nonumber \\ &\mspace{-50mu}\leq \min_{\mathcal{U}} \left\{ \max_{\x\in \mathcal{U}} \frac{\x^T\A\x}{\x^T\x} \:|\: \text{dim}(\mathcal{U})=k \right\}\lambda_N(\B)\nonumber \\ &\mspace{-50mu}= \lambda_k(\A)\lambda_N(\B).\end{aligned}$$ The following lemma bounds the eigenvalues of $\K_z - \tilde{\T}_z$, with $\tilde{\T}_z$ the regularized Nyström approximation of $\K_z$ in (\[eq:nystreg\]).

\[lem:nyst\] With $\K_z$ as in  and $\tilde{\T}_z$ as in , the eigenvalues of $\K_z-\tilde{\T}_z$ are bounded as $$\K_z - \tilde{\T}_z \preceq \frac{\mu\sigma_{NL}}{\sigma_{NL}+\mu}\I_S' + \sigma_{NL}\I_S$$ where $\sigma_{NL}$ is the largest eigenvalue of $\K_z$, $\I_S := \text{diag}([0,\allowbreak 0,\ldots,1,1])$ has $S$ zeros on its diagonal, and $\I'_S := \I-\I_S$.
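Lemma \[lem:sim\] can be sanity-checked numerically; the sketch below additionally takes $\A$ positive semidefinite so that the product bound applies with the ascending eigenvalue ordering used here:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 6
M = rng.standard_normal((N, N))
A = M @ M.T                                  # symmetric PSD
Mb = rng.standard_normal((N, N))
B = Mb @ Mb.T + N * np.eye(N)                # symmetric positive definite

# Form B^{1/2} from the eigendecomposition of B
w_B, V_B = np.linalg.eigh(B)
B_half = V_B @ np.diag(np.sqrt(w_B)) @ V_B.T

lam_AB = np.sort(np.linalg.eigvals(A @ B).real)       # ascending order
lam_sym = np.sort(np.linalg.eigvalsh(B_half @ A @ B_half))
lam_A = np.sort(np.linalg.eigvalsh(A))
```

Since $\A\B$ is similar to the symmetric matrix $\B^{1\over 2}\A\B^{1\over 2}$, its eigenvalues are real; the sorted spectra coincide and each $\lambda_k(\A\B)$ stays below $\lambda_k(\A)\lambda_N(\B)$.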
Using the eigendecomposition $\K_z=\Q_z\Sig_z\Q_z^T$, we can write $$\begin{aligned} \K_z-\tilde{\T}_z &= \K_z - \K_z\S^T(\S\K_z\S^T+\mu\I)^{-1}\S\K_z \nonumber \\ &= \Q_z\Sig_z^{1\over 2}\left[\I-\Sig_z^{1\over 2} \Q_z^T\S^T(\S\Q_z\Sig_z^{1\over 2}\Sig_z^{1\over 2} \Q_z^T\S^T \right.\nonumber \\&\left.\:\:\:\:+\mu \I)^{-1}\S\Q_z\Sig_z^{1\over 2}\right] \Sig_z^{1\over 2}\Q_z^T.\label{eq:midmat}\end{aligned}$$ Applying the MIL to the matrix inside the square brackets of , we arrive at $$\begin{aligned} &\I-\Sig_z^{1\over 2} \Q_z^T\S^T(\S\Q_z\Sig_z^{1\over 2}\Sig_z^{1\over 2} \Q_z^T\S^T+\mu \I)^{-1}\S\Q_z\Sig_z^{1\over 2} \nonumber\\&\:\:= (\I + {1\over\mu}\Sig_z^{1\over 2}\Q_z^T\S^T\S\Q_z\Sig_z^{1\over 2})^{-1}.\end{aligned}$$ That in turn implies $$\begin{aligned} \label{eq:kzp} &\K_z-\tilde{\T}_z = \mu\Q_z\Sig_z^{1\over 2}(\mu\I + \Sig_z^{1\over 2}\Q_z^T\S^T\S\Q_z\Sig_z^{1\over 2})^{-1}\Sig_z^{1\over 2}\Q_z^T \nonumber \\ &= \mu\Q_z\Sig_z^{1\over 2}(\Sig_z + \mu\I - \Sig_z + \Sig_z^{1\over 2}\Q_z^T\S^T\S\Q_z\Sig_z^{1\over 2})^{-1}\Sig_z^{1\over 2}\Q_z^T \nonumber \\ &= \mu\Q_z\Sig_z^{1\over 2}\left[(\Sig_z+\mu\I)^{1\over2}\left(\I - (\Sig_z+\mu\I)^{-{1\over2}}\Sig_z(\Sig_z+\mu\I)^{-{1\over 2}}\right.\right.\nonumber\\& \left.\left. \:\:\:\:+ (\Sig_z+\mu\I)^{-{ 1\over 2}}\Sig_z^{1\over 2}\Q_z^T\S^T\S\Q_z\Sig_z^{1\over 2}(\Sig_z+\mu\I)^{-{1\over 2}}\right)\right.\nonumber \\&\left.
\:\:\:\:(\Sig_z+\mu\I)^{1\over2}\right]^{-1}\Sig_z^{1\over 2}\Q_z^T\nonumber \\ &= \mu\Q_z\Sig_z^{1\over 2}(\Sig_z+ \mu\I)^{-{1\over2}}(\I-\P)^{-1}(\Sig_z+ \mu\I)^{- {1\over2}}\Sig_z^{1\over 2}\Q_z^T \end{aligned}$$ where $$\begin{aligned} \label{eq:P} \P =&\Sig_z(\Sig_z+\mu\I)^{-1} \\&- (\Sig_z+\mu\I)^{-{ 1\over 2}}\Sig_z^{1\over 2}\Q_z^T\S^T\S\Q_z\Sig_z^{1\over 2}(\Sig_z+\mu\I)^{-{1\over 2}}.\nonumber \end{aligned}$$ Regarding the eigenvalues of $\K_z-\tilde{\T}_z$ in , we can bound them as $$\begin{aligned} &\lambda(\K_z-\tilde{\T}_z) \nonumber\\&= \mu\,\lambda(\Q_z\Sig_z^{1\over 2}(\Sig_z+ \mu\I)^{-{1\over2}}(\I-\P)^{-1}(\Sig_z+ \mu\I)^{- {1\over2}}\Sig_z^{1\over 2}\Q_z^T)\nonumber\\ &= \mu\,\lambda(\Sig_z^{1\over 2}(\Sig_z+ \mu\I)^{-{1\over2}}(\I-\P)^{-1}(\Sig_z+ \mu\I)^{- {1\over2}}\Sig_z^{1\over 2})\nonumber\\ &= \mu\,\lambda((\I-\P)^{-1}(\Sig_z+ \mu\I)^{-1}\Sig_z)\nonumber\\ &\leq\frac{\mu\sigma_{NL}}{\sigma_{NL}+\mu}\lambda((\I-\P)^{-1})\label{eq:kzbound1} \end{aligned}$$ where $\lambda(\cdot)$ denotes the eigenvalues of a matrix, and we have applied Lemma \[lem:sim\] in the third equality and the last inequality. Knowing that $\lambda(\I-\P)=1-\lambda(\P)$, we can now bound the eigenvalues of $\P$ as $$\begin{aligned} \label{eq:C2} \lambda(\P) &\!=\!\lambda(\Sig_z^{1\over 2}(\Sig_z\!+\!\mu\I)^{- {1\over 2}}\Q_z^T(\I \!-\! \S^T\S)\Q_z(\Sig_z\!+\!\mu\I)^{- {1\over 2}}\Sig_z^{1\over 2})\nonumber\\ &=\lambda(\Q_z^T(\I - \S^T\S)\Q_z(\Sig_z+\mu\I)^{- {1}}\Sig_z)\nonumber\\ &\leq \frac{\sigma_{NL}}{\sigma_{NL}+\mu} \lambda(\Q_z^T(\I - \S^T\S)\Q_z)\nonumber\\ &= \frac{\sigma_{NL}}{\sigma_{NL}+\mu}\lambda(\I_S)\end{aligned}$$ where we have applied Lemma \[lem:sim\] in the second equality and the inequality, and $\I_S:=\text{diag}{[0,0,\ldots,1,1]}$ has $S$ zeros on its diagonal. Next, we have that $\I-\P\succeq \I - \frac{\sigma_{NL}}{\sigma_{NL}+\mu}\I_S$, and thus $$\label{eq:iqbound} (\I-\P)^{-1}\preceq \I_S' + \frac{\sigma_{NL}+\mu}{\mu}\I_S$$ where $\I_S':=\I-\I_S$.
Finally, combining  with  yields $$\label{eq:kt} \K_z - \tilde{\T}_z \preceq \frac{\mu\sigma_{NL}}{\sigma_{NL}+\mu}\I_S' + \sigma_{NL}\I_S$$ which concludes the proof. Using Lemmas \[lem:sim\] and \[lem:nyst\], we can proceed to establish a bound on the bias and variance. Considering the eigendecomposition $\K_z-\tilde{\T}_z=\L\bm\Lambda\L^T$, we can write the bias in (\[eq:bias\]) as $$\begin{aligned} bias &= {\left|\left|\L\bm\Lambda\L^T\gam\right|\right|^2_2}.\end{aligned}$$ With $\tilde{\gam}:=\L^T\gam$, and using Lemma \[lem:nyst\], the bias is bounded as $$\begin{aligned} bias & = {\left|\left|\L\bm\Lambda\tilde{\gam}\right|\right|^2_2} = \tilde{\gam}^T\bm\Lambda^2\tilde{\gam}\nonumber\\ &\leq \frac{\mu^2\sigma_{NL}^2}{(\sigma_{NL}+\mu)^2}\tilde{\gam}^T\I_S'\tilde{\gam} + \sigma_{NL}^2\tilde{\gam}^T\I_S\tilde{\gam}\nonumber\\ &= \frac{\mu^2\sigma_{NL}^2}{(\sigma_{NL}+\mu)^2}\sum _{i=1}^{S}\tilde{\gam}_i^2 + \sigma^2_{NL}\sum_{i=S+1}^{NL}\tilde{\gam}_i^2.\label{eq:biasbound}\end{aligned}$$ To bound the variance in (\[eq:quad\]), recall that $\e$ is a Gaussian random vector with covariance matrix $\nu^2\I$, while $\bar{\e}$ has covariance matrix $\nu^2\S\S^T$. Then (\[eq:quad\]) is a quadratic form in $\e$, and the variance becomes $$\begin{aligned} \label{eq:var1} var &={\mathbb{E}_{\bm e}\{ {1\over\mu^2}{\left|\left|(\K_z - \tilde{\T}_z)\S^T\bar{\e}\right|\right|^2_2}\}} \nonumber\\&= {\nu^2\over \mu^2}{\text{Tr}(\S(\K_z - \tilde{\T}_z)^2\S^T)}\nonumber\\&= {\nu^2\over \mu^2}{\text{Tr}((\K_z - \tilde{\T}_z)^2\S^T\S)}.\end{aligned}$$ The matrix inside the trace in  has $NL-S$ zero entries on its diagonal. Lemma \[lem:nyst\], on the other hand, implies that the diagonal entries of $\K_z-\tilde{\T}_z$ are smaller than its largest eigenvalue; that is, $\left[\K_z-\tilde{\T}_z\right]_{i,i}\leq\sigma_{NL}$.
Coupling this with  yields $$\begin{aligned} var \leq \frac{S\nu^2 \sigma_{NL}^2}{\mu^2}.\label{eq:varbound}\end{aligned}$$ Finally, combining the bias bound in (\[eq:biasbound\]) with the variance bound in (\[eq:varbound\]) yields the bound for the MSE as $$MSE \leq \frac{\mu^2\sigma_{NL}^2}{(\sigma_{NL}+\mu)^2}\sum _{i=1}^{S}\tilde{\gam}_i^2 + \sigma^2_{NL}\sum_{i=S+1}^{NL}\tilde{\gam}_i^2+\frac{S\nu^2 \sigma_{NL}^2}{\mu^2}.$$

[^1]: This work was supported by the Ministerio de Economia y Competitividad of the Spanish Government and ERDF funds (TEC2016-75067-C4-2-R, TEC2015-69648-REDC), Catalan Government funds (2017 SGR 578 AGAUR), and NSF grants (1500713, 1514056, 1711471 and 1509040).

[^2]: Note that $||\tilde{\gam}_i||^2$ and $\sigma_{NL}$ depend on the selected kernel $\K_z$ and matrix $\F$, and do not depend on $\S$.

[^3]: http://earthpy.org/ulmo.html

[^4]: http://archive.ics.uci.edu/ml
---
abstract: 'Low-metallicity galaxies exhibit different properties of the interstellar medium (ISM) compared to nearby spiral galaxies. Obtaining a resolved inventory of the various gas and dust components of massive star forming regions and diffuse ISM is necessary to understand how those differences are driven. We present a study of the infrared/submillimeter (submm) emission of the massive star forming complex N158-N159-N160 located in the Large Magellanic Cloud. Combining observations from the [[*Spitzer*]{}]{} Space Telescope (3.6-70 [$\mu$m]{}), the [[*Herschel*]{}]{} Space Observatory (100-500 [$\mu$m]{}) and LABOCA (on APEX, 870 [$\mu$m]{}) allows us to work at the best angular resolution currently available for an extragalactic source (a few parsecs for the LMC). We observe a remarkably good correlation between the [[*Herschel*]{}]{} SPIRE and LABOCA emission and resolve the low surface brightness emission. We use the [[*Spitzer*]{}]{} and [[*Herschel*]{}]{} data to perform a resolved Spectral Energy Distribution (SED) modelling of the complex. Using modified blackbodies, we derive an average “effective” emissivity index of the cold dust component $\beta$$_c$ of 1.47 across the complex. If $\beta$$_c$ is fixed to 1.5, we find an average temperature of $\sim$27K (maximum of $\sim$32K in N160). We also apply the @Galliano2011 SED modelling technique (using amorphous carbon to model carbon dust) to derive maps of the star formation rate, the grain temperature, the mean starlight intensity, the fraction of Polycyclic Aromatic Hydrocarbons (PAH) and the dust mass surface density of the region. We observe that the PAH fraction strongly decreases in the H[ii]{} regions we study. This decrease coincides with peaks in the mean radiation field intensity map.
The dust surface densities follow the far-infrared distribution, with a total dust mass of 2.1 $\times$ 10$^4$ [$M_\odot$]{} (2.8 times less than if carbon dust were modelled by standard graphite grains) in the resolved elements we model. We also find a non-negligible amount of dust in the region called “N159 South”, a molecular cloud that does not show massive star formation. We also investigate the drivers of the [[*Herschel*]{}]{}/PACS and SPIRE submm colours and find that the submm ratios correlate equally strongly with the radiation field intensity and with the near- and mid-IR surface brightnesses. Comparing our dust map to H[i]{} and CO observations in N159, we then investigate variations in the gas-to-dust mass ratio (G/D) and the CO-to-H$_2$ conversion factor X$_{CO}$. A mean value of G/D$\sim$356 is derived when using X$_{CO}$ = 7$\times$10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ [@Fukui2009]. If a constant G/D across N159 is assumed, we derive an X$_{CO}$ conversion factor of 5.4$\times$10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. We finally model individual regions to analyse variations in the SED shape across the complex and the 870 [$\mu$m]{} emission in more detail. No measurable submm excess emission at 870 [$\mu$m]{} is detected in these regions.'
author:
- '\'
bibliography:
- '/Users/maudgalametz/Documents/Work/Papers/mybiblio.bib'
title: 'The thermal dust emission in N158-N159-N160 (LMC) star forming complex mapped by Spitzer, Herschel and LABOCA'
---

galaxies: ISM – galaxies: dwarf – galaxies: SED model – ISM: dust – submillimeter: galaxies

Introduction
============

As potential templates of primordial environments, low-metallicity galaxies are keystones to understanding how galaxies evolve through cosmic time. Studying the Interstellar Medium (ISM) of low-metallicity galaxies is a necessary step to get a handle on the interplay between star formation and the ISM under conditions characteristic of the early universe.
Low-metallicity galaxies also have quite different infrared (IR) Spectral Energy Distributions (SEDs) from those of solar-metallicity or metal-rich environments. For instance, their aromatic features are diminished compared to dustier galaxies [@Madden2006; @Engelbracht2008]. The paucity of aromatic features is usually attributed to the hardness of the radiation field in low-metallicity environments, to destructive processes such as supernova-driven shocks [@Galliano2003; @Galliano2005], or to the delayed injection of carbon dust by asymptotic giant branch (AGB) stars [@Dwek1998; @Galliano_Dwek_Chanial_2008] in dwarf galaxies. More relevant to the present study, the SEDs of low-metallicity galaxies often exhibit a flattening of their submillimeter (submm) slope or a submm excess [@Bottner2003; @Galliano2003; @Galliano2005; @Marleau2006; @Bendo2006; @Galametz2009; @Galametz2011], namely higher emission beyond 500 [$\mu$m]{} than that extrapolated from IR observations and standard dust properties (Milky Way dust for instance). The origin of this excess is still highly debated. These results highlight the importance of a complete coverage of the thermal dust emission of low-metallicity objects to get a handle on the overall dust population distribution and properties in these environments. Combining gas and dust tracers will also allow us to understand how matter cycles and star formation processes evolve with galaxy properties. The Large Magellanic Cloud (LMC) is our nearest low-metallicity neighbour galaxy [$\sim$ 50kpc; @Feast1999], well studied at all wavelengths. This proximity enables us to study in detail the physical processes at work in the ISM of the galaxy and to individually resolve its bright star-forming regions. These structures can be isolated without bias thanks to the almost face-on orientation [23-37$^{o}$; @Subramanian2012].
Furthermore, the low interstellar extinction along the line of sight facilitates the interpretation of the physical conditions of these star-forming regions compared to Galactic studies, which are strongly affected by extinction. Thus, the irregular morphology and low metallicity of the LMC make it a perfect laboratory to study the evolution of the physical properties of the ISM of galaxies and the influence of metal enrichment on their star-forming activity. Before the [[*Spitzer*]{}]{} [*Space Telescope*]{} ([[*Spitzer*]{}]{}) observations, infrared studies of the LMC suffered from a lack of wavelength coverage or spatial resolution to quantify crucial physical parameters of the ISM, such as the equilibrium temperature of the big grains, the spatial distribution of the dust grain populations or the interstellar radiation field. A study of the extended infrared emission in the ISM of the LMC was performed by @Bernard2008 as part of the [[*Spitzer*]{}]{} Legacy Program SAGE [Surveying the Agents of a Galaxy’s Evolution, @Meixner2006] project using [[*Spitzer*]{}]{} observations. They found disparities between the overall shape of the SED of the LMC and that of the Milky Way (MW), namely a different mid-infrared (MIR) shape. They also found departures from the linear correlation between the FIR optical depth and the gas column density. Using [[*Spitzer*]{}]{} FIR spectra ($\lambda$ = 52-93[$\mu$m]{}), the studies of @VanLoon2010b also allowed comparisons of compact sources in the LMC and its neighbour, the Small Magellanic Cloud (SMC), which has an even lower metallicity. Their results indicate that while the dust mass differs in proportion to metallicity, the oxygen mass seems to differ less. The photoelectric effect heats the gas with indistinguishable efficiency in both clouds. The SMC finally presents evidence of reduced shielding and reduced cooling.
At submm wavelengths, the LMC was observed by @Aguirre2003 with the TopHat instrument, a balloon-borne telescope [@Silverberg2003], from 470 [$\mu$m]{} to 1.2 mm. They constrained the FIR regime with DIRBE (Diffuse Infrared Background Experiment) observations at 100, 140 and 240 [$\mu$m]{} and estimated an average dust temperature for the LMC of T=25.0$\pm$1.8K. Using DIRBE and ISSA (IRAS Sky Survey Atlas) observations, @Sakon2006 found that the submm emission power-law index (often referred to as $\beta$) is smaller in the LMC than in the MW and that the 140 and 240 [$\mu$m]{} fluxes seem to deviate from the model predictions, in particular on the periphery of supergiant shells. This excess was modelled by a very cold dust component with temperatures $<$ 9K, even if their lack of submm constraints prevented them from concluding unambiguously that cold dust was the explanation for the excess. Using revised DIRBE, WMAP and COBE maps, @israel2010 constructed the global SED of the LMC and confirmed a pronounced excess emission at millimeter and submm wavelengths, i.e. more emission than expected from a submm slope with $\beta$=2. Different hypotheses for this global excess are tested in @Bot2010_2, @Bot2010, @Galliano2011 (both rule out for instance the “very cold dust” hypothesis) or @Planck_collabo_2011_MagellanicClouds (which tests the “spinning dust” hypothesis). We focus the present study on the N158-N159-N160 complex, a group of H[ii]{} regions ($\sim$ 400pc from North to South) located in the LMC, $\sim$ 500pc south of the 30 Doradus (30Dor) massive star-forming region [catalogued by @Henize1956]. The complex has been intensively observed in the IR, H[i]{} and CO. We have obtained observations of the complex with the LABOCA instrument at 870 [$\mu$m]{}, probing the coldest phases of dust at a resolution of 19$''$. We note that this analysis presents the first study of LABOCA observations of the LMC.
We refer to @Bot2010_2 for a detailed analysis of LABOCA observations of giant molecular clouds in the south-west region of the SMC. In addition to the LABOCA data, the [*Herschel Space Observatory*]{} ([[*Herschel*]{}]{}) now allows us to probe the thermal dust emission with a coverage of the SED from 70 [$\mu$m]{} up to 500 [$\mu$m]{}, and thus to sample the IR peak and the submm slope of nearby galaxies. The whole LMC has been mapped with [[*Herschel*]{}]{} as part of the HERITAGE project (HERschel Inventory of The Agents of Galaxy Evolution; @Meixner2010, Meixner et al., submitted to AJ). The good resolution of the instruments on board [[*Herschel*]{}]{} favours the exploration of the local properties of the ISM, the resolution of [[*Herschel*]{}]{}/SPIRE at 500 [$\mu$m]{} being similar to that of [[*Spitzer*]{}]{}/MIPS at 160 [$\mu$m]{} (FWHM of the PSF = 36″), leading, for the LMC, to ISM resolution elements of $\sim$9 pc at SPIRE 500 [$\mu$m]{}.

![image](N158_HST_673_656_502){width="9.5cm"} ![image](N160_HST_502_656_487){width="9.5cm"}\

This paper thus presents a study of the IR to submm thermal emission of star-forming regions in various evolutionary states at the best resolution currently available. This allows us to investigate how properties and physical conditions observed on large scales are physically linked to specific environments on a more resolved scale. In Section 2, we describe the N158-N159-N160 complex and the observations and data reduction of the [LABOCA]{}, [[*Spitzer*]{}]{} and [[*Herschel*]{}]{} data. We present the SED modelling techniques we apply to the [[*Spitzer*]{}]{} and [[*Herschel*]{}]{} data on resolved scales in Section 3. In Section 4, we analyse the results of our modelling, namely the distribution of the dust temperatures, the mean stellar radiation field intensities, the fraction of polycyclic aromatic hydrocarbons (PAHs) and the star formation rates.
We also study the dependence of submm colours on various parameters and compare our dust mass map with the available gas tracers for the N159 region. A SED modelling of individual regions across the complex, including the 870 [$\mu$m]{} data in the fitting procedure, is described in Section 5. We summarize the main conclusions of the analysis in Section 6.

![image](N159_images_combined.pdf){width="18.5cm" height="21cm"}\

A multi-wavelength dataset
==========================

The complex
-----------

The N158-N159-N160 complex ($\sim$400 pc from north to south) is located in the LMC at $\sim$500 pc south of the 30Dor complex, the most massive star-forming region of the Local Group [@Kennicutt_Hodge_1986]. Figure \[HST\_LABOCA\] shows three-colour compositions of the two regions N158 and N160 produced with images taken by the Wide Field Planetary Camera 2 (WFPC2) on board the Hubble Space Telescope (HST). Data are retrieved from the Hubble Legacy Archive (http://hla.stsci.edu/, Program IDs: 11807 and 8247 for N158 and N160 respectively). We overlay LABOCA 870 [$\mu$m]{} contours on both images for comparison between the spatial resolutions. The HST/WFPC2 observations of N158, N159 and N160 are discussed in detail in @Fleener2010, @Heydari1999 and @Heydari2002 respectively.\
The N158 H[ii]{} region (Fig. \[HST\_LABOCA\], left) possesses an elongated structure in the N-S direction. Two OB associations were first discovered in the region by @Lucke1970, who catalogued stellar associations across the whole LMC. One is associated with the southern part of N158, the other with the northern superbubble. Several studies have shown that this superbubble is dominated by a young stellar population of 2 to 6 Myr [@Testor1998; @Dunne2001].\
Observations of N159 in the K band by @Gatley1981 led to the discovery of the first extragalactic protostar.
The stellar content of sub-structures of N159 was analysed in more detail by [@Deharveng1992] at optical wavelengths and by @Meynadier2004 using JHK photometry. N159 presents characteristics of on-going star formation activity such as protostars, masers and young massive stars, as well as High-Excitation Blobs (HEBs), namely nebulae with turbulent media and strong stellar winds interacting with the ambient ionized gas. The Kuiper Airborne Observatory (KAO) was used to probe the complex at far-infrared (FIR) wavelengths for the first time [@Werner1978; @Jones1986]. Later, @Jones2005 used [[*Spitzer*]{}]{}/IRAC observations to follow the protostars in N159 and constrain the nature of the upper end of the initial mass function (IMF) in the LMC.\
As in N159, massive star formation is well evolved in N160 (Fig. \[HST\_LABOCA\], right), whose parent clouds have mostly been dissipated. The region is associated with H[ii]{} regions, young stellar clusters and water masers [@Lazendic2002; @Oliveira2006]. Using JHK photometry, @Nakajima2005 showed that of the 11 optical stellar clusters and associations detected in the N159/N160 complex (ages listed by @Bica1996), one is older than 10 Myr and belongs to N160, while the rest are younger than 10 Myr, with 3 clusters showing populations younger than 3 Myr. We refer to @Carlson2012 for a study of the physical properties and evolutionary stages of young stellar objects (YSOs) in N160 and to @VanLoon2010a for a detailed study of FIR fine-structure cooling lines of compact sources in the 3 H[ii]{} regions.\
The N158-N159-N160 complex belongs to the largest molecular cloud complex of the LMC, called “the molecular ridge", and accounts for $\sim$30$\%$ of the total molecular mass of the galaxy [@Mizuno2001].
Several analyses have suggested a dissipated sequential cluster formation within N158, N160 and N159, as well as a large-scale sequential cluster formation over the entire complex [@Bolatto2000; @Nakajima2005] and, more globally, from the 30Dor complex to the southern region of N159, where the large southward CO ridge of 30Dor begins [@Cohen1988; @Fukui1999]. Near-infrared (NIR) to FIR observations with [*2MASS*]{} and [[*Spitzer*]{}]{} highlighted the paucity of star formation across the ridge, with star-formation activity distributed in lower luminosity regions than in the 30Dor complex [@Indebetouw2008]. Several studies have suggested alternative scenarios to the standard self-propagating star formation to explain this trend, for instance that the sequential star formation could be induced in bow-shocks formed at the leading edge of the LMC [@deBoer1998]. The N159 H[ii]{} region harbours two giant molecular clouds (GMCs), known as “N159 East" (N159E) and “N159 West" (N159W), labelled in Fig. \[Spitzer\_Herschel\_LABOCA\_maps\]. @Chen2009 suggest that the current star formation in N159E is probably triggered by H[ii]{} regions expanding into the molecular cloud, while the massive stars of N159W are forming spontaneously. Another GMC was observed in the south of N159 (N159S). Performing a comparative study of \[CII\], CO and FIR observations, @Israel1996_2 find that the clouds N160 and N159S are two opposite extremes with respect to N159 (East and West), with molecular gas in N160 more photo-dissociated than in N159. N159S shows a much lower level of star formation than N159W and N159E [@Bolatto2000; @Rantakyro2005], with only a few very faint diffuse H[ii]{} regions and no OB associations [@Chen2010]. The detection of candidate Herbig Ae/Be stars suggests that cluster formation could have just begun in N159S [@Nakajima2005]. The atomic and molecular gas distributions across the complex are discussed in more detail in Section 4.5.
LABOCA observations and data reduction
--------------------------------------

The full region of N158, N159 and N160 was mapped with the [LABOCA]{} instrument operating at 870 [$\mu$m]{}, installed in the Cassegrain cabin of the Atacama Pathfinder EXperiment (APEX) telescope at the Chajnantor site in Chile’s Atacama desert. [LABOCA]{} is a bolometer array of 295 channels with a total field of view of 11.4′. The full width at half maximum (FWHM) of its point spread function (PSF) is 19.2″ $\pm$ 0.3″. The detectors of LABOCA are positioned with a double-beam channel separation (36″), so the field of view is under-sampled. The mapping was therefore carried out with a raster of spirals in order to obtain a homogeneous, fully sampled map. A total of 35.5 hrs of observations were obtained in August 2008 (Program ID: O-081.F-9329A-2008 - Principal Investigator: Sacha Hony). Calibration was performed through observations of Mars and of secondary calibrators: PMNJ0403-8100, PKS0537-441, B13134, V883-ORI, N2071IR and VY-CMa. The atmospheric attenuation was determined via skydips every hour[^1]. The data reduction was performed with the BoA reduction package (BOlometer Array Analysis Software)[^2]. The main steps of the reduction of the time-ordered data stream of each channel and scan are: flat fielding, calibration, opacity correction, removal of dead or noisy channels, removal of the correlated noise on the global array as well as of the correlated noise induced by the coupled electronics of the detectors (amplifier boxes or cables), flagging of stationary points or of data taken outside reasonable telescope scanning velocity and acceleration limits, $^3$He temperature drift correction, median baseline removal and despiking. Individual reduced scans are then co-added. The presence of very bright structures often leads to negative artefacts in their surroundings. These structures are “created" numerically during the correlated-noise removal and median baseline steps.
It is possible to reduce these artefacts using an iterative process during the data reduction. Once the data are calibrated, we use our reduced image to create a “source model" by isolating the pixels above a given signal-to-noise threshold (5 for the first iterations, decreasing progressively to 2). The model grows automatically to avoid isolated pixels in the masks. We subtract the source model from the data and rerun the reduction pipeline. We add the model back at the end of the new reduction to obtain a new map. This map is used to build a new model, the input for the following iteration. The process is repeated until the reduction converges. These iterations recover a significant amount of faint extended emission around the bright structures. Weight maps are derived during the final mapping step, from which we derive rms and signal-to-noise maps. The final 45′ $\times$ 35′ [LABOCA]{} image has a pixel size of 9.1″, with a final rms of 7.8 mJy beam$^{-1}$.

Herschel Data
-------------

We obtained [[*Herschel*]{}]{} data for the N158-N159-N160 complex from the HERITAGE project, a programme dedicated to the observations of the two Magellanic Clouds and a part of the Magellanic Bridge. We refer to @Meixner2010 and the HERITAGE overview of Meixner et al. (submitted) for a detailed description of the observing strategy of the HERITAGE project and the data reduction. We provide a short summary here. The LMC was mapped in two bands with PACS [Photodetector Array Camera and Spectrometer; @Poglitsch2010] at 100 and 160 [$\mu$m]{}, with respective FWHMs of the point spread functions of $\sim$7.7″ and $\sim$12″. Data are processed in HIPE 7.0 ([[*Herschel*]{}]{} Interactive Processing Environment) from Level 0 to Level 1 following the standard pipeline described in the PACS data reduction guide.
Several steps of baseline subtraction (to take 1/f noise and drifts with time into account) and deglitching (to correct for jumps caused by cosmic ray hits) are then applied to the data. PACS 100 and 160 [$\mu$m]{} timelines are finally mapped using the PhotProject HIPE procedure. The LMC was also observed with [[*Herschel*]{}]{}/SPIRE [Spectral and Photometric Imaging Receiver; @Griffin2010] at 250, 350 and 500 [$\mu$m]{}, with respective FWHMs of their PSFs of 18″, 25″ and 36″. Data were reduced in HIPE 7.0, including additional routines to subtract the background, adjust the astrometry, apply deglitching steps, remove residuals from temperature drifts or mask discrepant data. We expect residual foreground emission, as pointed out by @Bernard2008. Using the HI data cube of the LMC restricted to velocity ranges matching those of the Galaxy, @Galliano2011 quantified this contamination at $\sim$1$\%$ of the IR power, i.e. smaller than the [[*Herschel*]{}]{} flux uncertainties. We thus consider the foreground contamination to be negligible in this study.

Spitzer Data
------------

The [[*Spitzer*]{}]{} observations were performed as part of the SAGE project (Surveying the Agents of a Galaxy’s Evolution) using both the IRAC [InfraRed Array Camera; @Fazio2004] and MIPS [Multiband Imaging Photometer; @Rieke2004] instruments. The IRAC bands cover 3.6, 4.5, 5.8, and 8 [$\mu$m]{} with FWHMs of the PSFs $<$2″. The MIPS bands cover 24, 70 and 160 [$\mu$m]{} with FWHMs of the PSFs of 6″, 18″ and 40″, respectively. A description of the observing strategy and data reduction can be found in @Meixner2006. We also refer to @Jones2005 for a very detailed overview of the IRAC observations of N159 and its various components. Because of their lower resolution (40″) compared to the [[*Herschel*]{}]{}/PACS 160 [$\mu$m]{} maps (12″), we do not use the MIPS 160 [$\mu$m]{} maps in this study. Meixner et al.
(submitted) indicate a good agreement between the MIPS and PACS calibrations in the linear range of MIPS.

![Details on the IR/submm emission in the regions. LABOCA 870 [$\mu$m]{} contours are overlaid on the IRAC 8 [$\mu$m]{} observation with, from top row to bottom row, N158, N160, N159 and N159S. The IRAC 8 [$\mu$m]{} map is convolved to the LABOCA resolution. Contours are given for 0.018, 0.06, 0.09, 0.15 and 0.3 Jy/beam. North is up, east is left.[]{data-label="IRAC_LABOCA"}](N159_IRAC4_LABOCAcont_N158 "fig:"){width="8.3cm"}
![image](N159_IRAC4_LABOCAcont_N160 "fig:"){width="8.3cm"}
![image](N159_IRAC4_LABOCAcont_N159 "fig:"){width="8.8cm"}
![image](N159_IRAC4_LABOCAcont_N159S "fig:"){width="6.8cm"}

Variation of the thermal dust emission with wavelength
------------------------------------------------------

We present a multi-wavelength composition of the N158-N159-N160 region in Fig. \[Spitzer\_Herschel\_LABOCA\_maps\], with, from top to bottom, IRAC 8 [$\mu$m]{}, MIPS 24 and 70 [$\mu$m]{}, PACS 100 and 160 [$\mu$m]{}, the three SPIRE bands (250, 350 and 500 [$\mu$m]{}) and the [LABOCA]{} map at 870 [$\mu$m]{}. This figure shows how the thermal dust emission evolves with wavelength. The 8 [$\mu$m]{} observation mostly traces the aromatic feature emission from PAHs. The MIPS 24 [$\mu$m]{} emission is more compact than that of the other bands. Tracing the hot dust, the 24 [$\mu$m]{} emission is indeed confined to the compact star-forming sites. The cool dust (10-40 K) is traced by the [[*Herschel*]{}]{} PACS and SPIRE observations. The [LABOCA]{} observations finally probe, for the first time, the coldest phases of dust at a resolution of 19″ (4.6 pc for the LMC). We clearly resolve the emission of the bright star-forming regions of the complex with LABOCA at 870 [$\mu$m]{}, but also detect and resolve the diffuse emission at low surface brightness.
The distribution of the 870 [$\mu$m]{} emission is spatially coherent with that of the SPIRE maps, both in the H[ii]{} regions and in the diffuse medium. To allow a spatial comparison between the 870 [$\mu$m]{} map and the IRAC 8 [$\mu$m]{} map, predominantly tracing PAHs, we convolve the IRAC 8 [$\mu$m]{} map to the LABOCA resolution using the low-resolution kernel created by G. Aniano from the PSFs of IRAC and [LABOCA]{}[^3]. We observe a spatial coincidence between the 8 [$\mu$m]{} map and the 870 [$\mu$m]{} contours (Fig. \[IRAC\_LABOCA\]), as already observed by @Haas2002 for a wide range of galaxy types. On the one hand, this correlation suggests that the PAH excitation could be caused by a widespread distribution of stellar FUV-emitters and not only linked with star formation [@Whaley2009]. On the other hand, we expect the 870 [$\mu$m]{} emission, namely the cold dust emission, to predominantly originate from the largely shielded interiors of molecular clouds. Their surfaces are known to produce the PAH features [@Sauvage2005], which could explain their association with the 870 [$\mu$m]{} emission. We finally note the peculiar behaviour of the southern region of N159 (N159S in Fig. \[Spitzer\_Herschel\_LABOCA\_maps\]). As observed in Fig. \[Spitzer\_Herschel\_LABOCA\_maps\] and Fig. \[IRAC\_LABOCA\], the peak of emission moves from 8 to 870 [$\mu$m]{}, with an offset of 2.7′ toward the west for the 870 [$\mu$m]{} emission. We will see further in this paper that the peak of the 870 [$\mu$m]{} emission (which also dominates the emission at SPIRE wavelengths) spatially coincides with the molecular gas reservoir usually referred to as “N159 South" (N159S). This region was indeed observed in CO [@Bolatto2001; @Mizuno2010] and is the brightest \[C[i]{}\] source of N159 [@Bolatto2000]. We provide details on previous observations of gas tracers (H[i]{}, CO etc.) in the complex in Section 4.5.
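The kernel convolution used above to bring the IRAC 8 [$\mu$m]{} map to the LABOCA resolution can be sketched as follows. This is a minimal illustration, not the actual pipeline: it assumes the kernel image has already been resampled onto the map's pixel grid, and uses a plain FFT convolution rather than the dedicated kernel tools.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_to_resolution(image, kernel):
    """Convolve a map with a convolution kernel (e.g. an Aniano et al. 2011
    kernel resampled to the map's pixel grid). The kernel is normalised to
    unit sum so that the total flux of the map is conserved."""
    kernel = kernel / kernel.sum()
    return fftconvolve(image, kernel, mode="same")
```

Flux conservation holds as long as the emission lies well inside the map borders, since `mode="same"` truncates the convolution at the edges.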
Convolution and background subtraction
--------------------------------------

We convolve the IRAC, MIPS, PACS and SPIRE 250 and 350 [$\mu$m]{} maps to the lowest resolution available, namely that of SPIRE 500 [$\mu$m]{} (FWHM: 36″). We use the convolution kernels developed by @Aniano2011, which are rotationally symmetric and optimized to avoid high-frequency numerical noise (induced by the filtering steps when one works in the frequency domain). The LABOCA 870 [$\mu$m]{} map is convolved to the SPIRE 500 [$\mu$m]{} resolution using the low-resolution kernel created by G. Aniano from the PSFs of both SPIRE 500 [$\mu$m]{} and [LABOCA]{} (see Section 2.5). For each image, we estimate the background by masking the emission linked with the complex and fitting the distribution of the remaining pixels with a Gaussian. The peak value of this Gaussian is used as the background value per pixel and subtracted from the maps. In the following study, we restrict ourselves to resolved elements with a signal-to-noise ratio above 1$\sigma$ in the SPIRE bands.

A resolved SED modelling
========================

We use data from 3.6 to 500 [$\mu$m]{} to obtain maps of the dust properties within the star-forming complex. Because the data reduction of the LABOCA data might still remove some of the faintest diffuse emission, we do not use the 870 [$\mu$m]{} map as a constraint for the resolved modelling. We derive average cold temperatures on a resolved basis using a two-temperature (two modified blackbodies) fitting procedure. We then apply a more realistic model [@Galliano2011] in order to investigate the distribution of the star formation rate, the radiation field properties and the dust properties (mass, PAH fraction) across the star-forming complex. We describe the two resolved SED fitting procedures in this section and analyse the resulting maps of physical parameters in Section 4.
In Section 5, we perform a SED modelling of 24 individual regions across the complex (with the same model) to derive integrated properties for those regions and investigate the 870 [$\mu$m]{} thermal emission separately.

Modified blackbody fitting
--------------------------

If we assume that dust grains are in thermal equilibrium with the radiation field, we can obtain a simple estimate of the cold dust temperature across the complex by fitting a modified blackbody (MBB) to the dust thermal emission. It is important to keep in mind that in reality, each resolution element contains dust grains with varied properties along the line of sight. The temperatures derived using this method should thus be considered as luminosity-weighted average values. We choose to fit our 24 to 500 [$\mu$m]{} data with a two-temperature (warm+cold) model. This two-MBB fitting is a minimum requirement to account for the contribution of warm dust to the MIR emission (and to estimate the contribution of warm dust and single-photon heating to the 70 [$\mu$m]{} emission) and to avoid an overestimation of the cold dust temperatures. We fit our data with a model of the form: $$\begin{aligned} L_{\nu}(\lambda, T_w, T_c, \beta_c) = A_w~\lambda^{-2} B_{\nu}(\lambda,T_w) + A_c~\lambda^{- \beta_c} B_{\nu}(\lambda,T_c) \label{eq1}\end{aligned}$$ with [*T$_{w}$*]{} and [*T$_{c}$*]{} the temperatures of the warm and cold components, [*$\beta$$_{c}$*]{} the emissivity index of the cold component and [*B$_{\nu}$*]{} the Planck function. A$_w$ and A$_c$ are scaling coefficients that account for the dust masses of each component. The emissivity index of the warm dust component is fixed to 2, a standard approximation of the opacity in the @Li_Draine_2001 dust models. We convolve the model with the instrumental spectral responses of the different cameras to derive the expected photometry. The fit is performed using the IDL function MPCURVEFIT [@Markwardt2009 Levenberg-Marquardt least-squares fit].
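The fit above is done in IDL with MPCURVEFIT; an equivalent Levenberg-Marquardt fit of the two-MBB model can be sketched in Python with `scipy.optimize.curve_fit`. This is a minimal illustration under simplifying assumptions (SI units, log amplitudes, cold emissivity fixed to 1.5, and no convolution with the instrumental spectral responses):

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck_nu(lam_um, T):
    """Planck function B_nu at wavelength lam_um (micron), temperature T (K)."""
    nu = C / (lam_um * 1e-6)
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def two_mbb(lam_um, log_aw, log_ac, t_w, t_c, beta_c=1.5):
    """Warm (beta = 2) + cold (beta = beta_c) modified blackbodies.
    log_aw / log_ac are the log10 of the scaling coefficients A_w, A_c."""
    return (10**log_aw * lam_um**-2.0 * planck_nu(lam_um, t_w)
            + 10**log_ac * lam_um**-beta_c * planck_nu(lam_um, t_c))

def fit_two_mbb(lam_um, flux, flux_err, p0=(17.0, 19.0, 60.0, 25.0)):
    """Weighted Levenberg-Marquardt fit with beta_c fixed to 1.5;
    the `sigma` keyword gives the usual 1/error^2 weighting."""
    model = lambda lam, aw, ac, tw, tc: two_mbb(lam, aw, ac, tw, tc, 1.5)
    popt, pcov = curve_fit(model, lam_um, flux, p0=p0,
                           sigma=flux_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```

The hypothetical initial guesses `p0` only set the optimiser's starting point; the amplitudes are fitted in log space to keep the problem well scaled.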
We take the uncertainties on the flux measurements into account to weight the data during the fitting procedure (standard 1/error$^{2}$ weighting).\
[*Choice of emissivity -* ]{}In order to derive an estimate of the emissivity index of the complex, we first let the emissivity index of the cold dust component vary ($\beta$$_c$ free) and perform the two-MBB fitting for each resolved ISM element. We obtain an average emissivity index of 1.47 $\pm$ 0.17 across the complex, consistent with the results derived for the whole LMC by @Planck_collabo_2011_MagellanicClouds ($\sim$1.5) or for a strip in the LMC using [[*Herschel*]{}]{} Science Demonstration Phase data by @Gordon2010. We thus decide to fix the effective emissivity index to 1.5 and re-derive the temperature map. Fixing the emissivity index prevents biases linked with [*1)*]{} the temperature-emissivity anti-correlation due to measurement uncertainties and [*2)*]{} temperature mixing along the line of sight, which contributes to a bias in the derived “effective emissivity index" [see @Juvela2012; @Galametz2012 for discussions on these issues].\
[*Error estimates -* ]{}We perform Monte Carlo simulations in order to quantify the uncertainties on the dust temperatures driven by the errors on the resolved fluxes. Error maps are a combination in quadrature of the uncertainty maps derived during the data reduction (mapping, pointing) and of the uncertainties in the absolute calibration (15$\%$ for the SPIRE bands, which are the most crucial for dust mass estimates), the latter being the dominant source of uncertainty. We generate 20 sets of modified constraints, with fluxes randomly varying within their error bars following a normal distribution around the nominal value, and run the model for each set. Resolved uncertainties can thus be derived using the standard deviations of each distribution.
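A minimal sketch of this Monte Carlo perturbation (with a hypothetical band ordering): each band receives an independent statistical draw, while a single shared draw scales all SPIRE bands together, so their correlated absolute calibration moves them consistently.

```python
import numpy as np

def monte_carlo_fluxes(fluxes, stat_err, cal_err, spire_idx, n_sets=20, seed=0):
    """Generate `n_sets` perturbed flux vectors: independent Gaussian draws
    for the statistical error of each band, plus one shared Gaussian draw
    scaling all SPIRE bands together (correlated absolute calibration)."""
    rng = np.random.default_rng(seed)
    fluxes = np.asarray(fluxes, dtype=float)
    sets = np.empty((n_sets, fluxes.size))
    for i in range(n_sets):
        f = fluxes + rng.normal(0.0, stat_err)   # per-band statistical errors
        shared = rng.normal()                    # one draw for all SPIRE bands
        f[spire_idx] *= 1.0 + shared * cal_err   # e.g. cal_err = 0.15 for SPIRE
        sets[i] = f
    return sets
```

The model is then rerun on each perturbed set, and the per-pixel standard deviations of the resulting parameter distributions give the quoted uncertainties.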
Since the absolute calibration is correlated for the SPIRE bands, the three SPIRE measurements move consistently in each set.\

Realistic dust properties fitting
---------------------------------

We also apply the phenomenological SED fitting procedure of @Galliano2011 to our 3.6 to 500 [$\mu$m]{} data in order to derive the resolved total infrared (TIR) luminosities, dust properties and radiation field intensity estimates across the region. Their approach is similar to that of @Draine2007, except for the starlight intensity distribution. In both models, as suggested by @Dale2001, the distribution of starlight intensities per unit dust mass is approximated by a power law (of index $\alpha$). In the @Draine2007 models though, a significant fraction of the dust mass is assumed to be heated by starlight with a single intensity U$_{min}$ (the “diffuse ISM" component). This model leads to better resolved fits (lower $\chi$$^2$ values) for nearby spirals [see @Aniano2012] but does not seem to correctly reproduce the LMC observations [@Galliano2011]. We choose the approach of @Galliano2011 in order to allow flexibility of the model at submm wavelengths and account for possible variations of the effective emissivity or the presence of colder dust. We assume that the dust size distribution and composition are uniform across the complex and that the sources of IR emission are old stars and dust (PAHs, carbon grains and silicates). The stellar contribution to the MIR is modelled using a library of stellar spectra synthesised with the stellar evolution code PEGASE [@Fioc_Rocca_1997]. The stellar population is assumed to have undergone an instantaneous burst 5 Gyr ago (initial solar metallicity, Salpeter initial mass function). We do not have a MIR spectrum to properly constrain the PAH composition.
We thus choose to fix the ionised PAH-to-neutral PAH ratio (f$_{PAH+}$), an additional parameter of the @Galliano2011 model, to 0.5 in order to limit the number of free parameters of the model.\
The free parameters of our model are thus:

- the total mass of dust (M$_{dust}$),

- the PAH-to-dust mass ratio (f$_{PAH}$),

- the index describing the fraction of dust exposed to a given intensity ($\alpha$),

- the minimum heating intensity (U$_{min}$),

- the range of starlight intensities ($\Delta$U),

- the mass of old stars (a scaling of our stellar spectra).

We note that U=1 corresponds to the intensity of the solar neighbourhood (2.2$\times$10$^{-5}$ W m$^{-2}$). For the rest of the paper, U$_{min}$+$\Delta$U will be referred to as “U$_{max}$". The PAH fraction f$_{PAH}$ is normalised to the average Galactic value [4.6 $\%$; @Li_Draine_2001]. As pointed out in @Galliano2011, the mass of old stars is poorly constrained by the IRAC bands and is only used to subtract the stellar continuum from the MIR bands. No balance is made between this parameter and the starlight intensity. Errors on the physical parameters are derived using the same Monte Carlo technique described in Section 3.1.\
[*Amorphous carbon to model carbon dust -* ]{}Two different approaches are compared in @Galliano2011 for the modelling of carbon dust: the “standard model", which uses the @Zubko2004 Galactic grain composition and graphite grains, or the “AC model", which uses amorphous carbons [@Zubko1996] in lieu of graphite. Carbon constitutes a major fraction of the grains in galaxies, especially in the circumstellar dust of carbon stars. However, the exact form of carbon dust is still rather unclear. Many SED models use graphite to describe the interstellar carbon dust, as first suggested by @Mathis1977. Graphite, indeed, has well-known physical properties, which makes it easy to model, and is in good agreement with the observed (Galactic) extinction and polarization [@Draine_Lee_1984].
However, observational evidence challenges the graphite theory (at least the pure mono-crystalline graphite theory), among which are [*1)*]{} variations in the 2175 $\AA$ profile not explained by changes in graphite grain size [@Draine1993], [*2)*]{} a graphite broadband emission feature near 33 [$\mu$m]{} that is weaker and narrower than expected in global models [e.g. @Draine2007], and [*3)*]{} an erosion of carbon dust in shocks that is not reproducible in the case of graphite grains. Indeed, @Welty2002 or @Serra_Dias_Cano2008 show that graphite is not expected to survive as such in the ISM, due to erosion and irradiation in shock waves, and suggest that interstellar hydrogenated amorphous carbons could be the most probable form of carbon material, their erosion in shocks being more efficient than that of graphite. We will also see further in the paper that using the “standard model" leads to dust masses inconsistent with the dust-to-gas mass ratio expected for the complex. The use of AC to model carbon dust generally leads to a decrease of the mass of large grains compared to that required by the “standard model" to account for the same emission (see Section 4.5.1 for explanations). Considering the average emissivity index of our region ($\beta$$\sim$1.5), we favour the “AC" approach to model the resolved SEDs across the complex.

![image](N159_ModelsResults.pdf){width="18cm"}\
\[ModelResults\]

Results and analysis
====================

In this section, we describe the various maps obtained directly or indirectly from our resolved SED models and analyse how their distributions correlate. Figure \[ModelResults\] shows the maps of the parameters we derive (temperature and mean radiation field intensity, PAH fraction, star formation rate and dust mass surface density). The 8 [$\mu$m]{} image is shown in the upper left panel for comparison. In Section 4.4, we also investigate how the submm colours evolve with the ISM conditions (heating sources, radiation field intensities).
Finally, we analyse the dust surface density map and how it relates to the distribution of the different gas phases in the complex, and more particularly in the N159 region, in Section 4.5.

Temperatures and mean radiation field intensities
-------------------------------------------------

We show the final cold temperature (i.e. the temperature of the cold MBB) map obtained using our resolved two-MBB fitting technique in Fig. \[ModelResults\] (upper middle panel). The temperatures peak at the locations of the star-forming regions, where dust is expected to be warmer. N160 shows the highest cold dust temperature, with a maximum of $\sim$40 K, while the median temperature of the N158-N159-N160 complex is 26.9 $\pm$ 2.3 K (28.2 K if we restrict the analysis to ISM elements with a 3-$\sigma$ detection in the SPIRE bands). Based on observations made with the TopHat telescope combined with DIRBE data (100, 140 and 240 [$\mu$m]{}, 42′ resolution), @Aguirre2003 derived temperatures of 25 $\pm$ 1.8 K and 26.2 $\pm$ 2.3 K for the LMC and 30Dor, using $\beta$ = 1.33 $\pm$ 0.07 and 1.5 $\pm$ 0.08, respectively. From [[*Spitzer*]{}]{} data (thus no data above 160 [$\mu$m]{}) and the same emissivity values, @Bernard2008 obtained colder temperatures of 21.4 K for the LMC and 23 K for the 30Dor region. Our values are thus close to the values derived for the 30Dor region by @Aguirre2003. We note however that different methods were used to derive these temperatures, making a direct comparison difficult. The N159S region is finally the coldest region of the whole complex, with a temperature of $\sim$22 K. This supports the argument that massive stars may still be in their infancy and deeply embedded in the circumstellar material in this particular region.
In order to investigate more precisely the distribution of radiation field intensities across the region, we derive a map of the mass-weighted mean starlight heating intensity $<$U$>$ [@Draine_Li_2007], which can be written as: [ $$<U>=\left \{ \begin{array}{ll} \vspace{7pt} \frac{\Delta U}{\ln(1+\Delta U/U_{min})} & \textrm{if } \alpha = 1 \\ \vspace{7pt} U_{min}\,\frac{\ln(1+\Delta U/U_{min})}{\Delta U/(U_{min}+\Delta U)} & \textrm{if } \alpha = 2 \\ \vspace{7pt} \frac{1-\alpha}{2-\alpha}\,\frac{(U_{min}+\Delta U)^{2-\alpha}-U_{min}^{2-\alpha}}{(U_{min}+\Delta U)^{1-\alpha}-U_{min}^{1-\alpha}} & \textrm{if } \alpha \ne 1 \textrm{ and } \alpha \ne 2 \\ \end{array} \right .$$ ]{} with $\Delta$U the range of starlight intensities, U$_{min}$ the minimum heating intensity and $\alpha$ the index describing the fraction of dust exposed to a given intensity, all three obtained from the resolved @Galliano2011 modelling procedure. U=1 corresponds to the intensity of the solar neighbourhood (2.2$\times$10$^{-5}$ W m$^{-2}$).\ We show the mean starlight heating intensity map in Fig. \[ModelResults\] (upper right panel). Since dust grains are heated by the interstellar radiation field, we obtain a similar distribution between dust temperatures and radiation field intensities. Maxima in $<$U$>$ appear close to H[ii]{} regions where the starlight heating the dust is expected to be much more intense than the overall starlight illuminating the complex. The averaged intensity in our ISM elements is $<$U$>$=8.6. This average increases to 11.7 when we restrict the calculation to resolved ISM elements that have a 3-$\sigma$ detection in the SPIRE bands. We note that @Meixner2010 derived an average value $<$U$>$=7.6 for a strip across the LMC (so including more quiescent regions) using the same model as that applied in our study. Finally, N159S shows the lowest $<$U$>$ ($\sim$1.8) of the complex. This is consistent with the cold temperatures we observe in this region.
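The piecewise expression for $<$U$>$ can be evaluated directly; a sketch (the function name is ours), which can be checked against a numerical integration of dM/dU $\propto$ U$^{-\alpha}$:

```python
import numpy as np

def mean_U(U_min, dU, alpha):
    """Mass-weighted mean starlight intensity <U> for dM/dU ∝ U^-alpha
    between U_min and U_min + dU (Draine & Li 2007 power-law distribution)."""
    if np.isclose(alpha, 1.0):
        return dU / np.log(1.0 + dU / U_min)
    if np.isclose(alpha, 2.0):
        return U_min * (U_min + dU) * np.log(1.0 + dU / U_min) / dU
    U_max = U_min + dU
    return ((1.0 - alpha) / (2.0 - alpha)
            * (U_max**(2.0 - alpha) - U_min**(2.0 - alpha))
            / (U_max**(1.0 - alpha) - U_min**(1.0 - alpha)))
```

For instance, U$_{min}$=1, $\Delta$U=99 and $\alpha$=1.5 give $<$U$>$=10 exactly, and the three branches join continuously at $\alpha$=1 and $\alpha$=2.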
This region thus appears to be shielded from the strong starlight heating the N159 H[ii]{} region. PAH fraction ------------ Figure \[ModelResults\] (lower left panel) shows how the PAH-to-total dust mass ratio f$_{PAH}$ is distributed across the star-forming complex. We remind the reader that in the modelling, f$_{PAH}$ is normalised to the Galactic value f$_{PAH,MW}$=4.6$\%$. The average value of f$_{PAH}$ across the region is driven by the ISM outside the H[ii]{} regions themselves (median f$_{PAH}$=0.7$\pm$0.3). However, we observe variations in the PAH fraction. Peaks of f$_{PAH}$ (in red) are located in low-surface-brightness regions and are probably artefacts due to the very low dust mass of those regions. We observe a strong decrease of f$_{PAH}$ in the three star-forming regions as well as low PAH fractions in two compact knots located on the south-east side of N160. All these regions correspond to peaks in the radiation field intensities (Fig. \[ModelResults\], $<$U$>$ panel) as well as strong radio emission (as we will see later in Section 5). This depression of the PAH fraction by intense radiation fields was also studied in detail by @Sandstrom2010_2 in the SMC. These results are consistent with the fact that PAHs are known to be destroyed in strong H[ii]{} regions [@Madden2006; @LeBouteiller2007]. PAHs are also smaller on average in low-metallicity galaxies, which could also favour their destruction [@Sandstrom2012]. Low-metallicity environments often show harder interstellar radiation fields extending over larger size scales. This effect is attributed to the hardness of the radiation field and/or to a lower dust attenuation in the low-metallicity ISM, which consequently leads to a larger mean free path length of the ionising photons. Moreover, the fragmented structure and clumpiness of the ISM in low-metallicity galaxies allows UV light to penetrate deeper into metal-poor molecular clouds, favouring grain destruction.
We note that should the radiation field be harder than the interstellar radiation field of @Mathis1983 used in our model, the PAH fraction we estimate could be over-estimated. @Dwek2005 and @Galliano_Dwek_Chanial_2008 have suggested that the low fraction of PAHs in metal-poor environments could also be due to a delayed injection of carbon dust into the ISM by AGB stars in the final phase of their evolution. Using near-IR JHK photometry in the N159 nebula, @Meynadier2004 distinguished a young ($\sim$3 Myr) and an old ($\sim$1-10 Gyr) stellar population, even if they could not exclude that both populations might be spatially unrelated. Based on the absence of red supergiants in N159 (stars that should normally be observed in the cluster if the present stellar population was at least 10 Myr old), @Jones2005 concluded that most of the star formation must have taken place recently and that N159 might not be older than 1-2 Myr. A sequential cluster formation from N160 to N159S has been observed by @Nakajima2005, using observations of Herbig Ae/Be clusters. This supports a slightly longer star formation activity for the N160 region ($\sim$10-30 Myr). Both regions are nevertheless quite young compared to the time necessary for carbon dust to be injected into the ISM by the stars and thus participate in the formation of PAHs in the clusters. We finally note that recent studies suggest that the rate of AGB-produced PAHs compared to the rate of PAHs formed in the ISM may also change with metallicity [@Sandstrom2012].\ In conclusion, although the obvious anti-correlation between f$_{PAH}$ and the radiation field favours the destruction of PAHs as a straightforward explanation for the low PAH fraction we observe, the young age of the clusters could be an additional explanation for the absence of PAHs in our three H[ii]{} regions.
Star formation rate ------------------- Since the total bolometric IR luminosity traces the emission of stellar populations enshrouded in dust cocoons, this quantity is very often used to quantify the star formation obscured by dust [@Perez-Gonzalez_2006; @Kennicutt2009]. We first obtain the total IR luminosity map of the complex by integrating the resolved SEDs over frequency: $$L_{IR} = \int_{8~\mu m}^{1100~\mu m}L_{\nu}~d\nu \label{eq2}$$ In the SED fitting technique we use, we model the stellar contribution at short wavelengths as a separate component. This contribution is subtracted before performing the integration in order to only take the thermal emission of dust into account in the calculation. We use the IR-based calibration of @Kennicutt1998 to derive a star formation rate map of the region: $$\frac{SFR}{M_{\odot}~yr^{-1}}=\frac{L_{IR}}{5.8 \times 10^9 L_{\odot}} \label{eq3}$$ @Jones2005 estimated the integrated luminosity of the N159 complex (integration of the SED up to 100[$\mu$m]{} combining [[*Spitzer*]{}]{} and KAO data). They found that the integrated luminosity was consistent with the observed radio emission, assuming that the H[ii]{} region was a cluster with a normal Initial Mass Function (IMF). As mentioned in @Indebetouw2008, Eq. \[eq3\] was calibrated for large extragalactic sources. It may be valid for full star-forming regions such as N159, but the SFR calibration may not be accurate when applied to some of our resolved ISM elements, namely those not properly covering the stellar IMF.\ We show the SFR map of the region in units of [$M_\odot$]{} yr$^{-1}$ kpc$^{-2}$ (log scale) in Fig. \[ModelResults\] (lower middle panel). We find a median SFR of 0.06 [$M_\odot$]{} yr$^{-1}$ kpc$^{-2}$ across our modeled region. This average increases to 0.17 [$M_\odot$]{} yr$^{-1}$ kpc$^{-2}$ when we restrict the calculation to resolved ISM elements that have a 3-$\sigma$ detection in the SPIRE bands.
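Eqs. \[eq2\] and \[eq3\] amount to the short computation below. This is only a sketch: the wavelength grid is illustrative, and L$_{\nu}$ is assumed to be stellar-subtracted and expressed in L$_{\odot}$ Hz$^{-1}$.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def L_IR(wavelengths_um, L_nu):
    """Integrate L_nu over 8-1100 um in frequency (Eq. 2); the stellar
    continuum is assumed to have been subtracted beforehand."""
    wl = np.asarray(wavelengths_um, dtype=float)
    f = np.asarray(L_nu, dtype=float)
    mask = (wl >= 8.0) & (wl <= 1100.0)
    nu = (C / (wl[mask] * 1e-6))[::-1]   # Hz, flipped so frequency ascends
    f = f[mask][::-1]
    # trapezoidal rule over the (non-uniform) frequency grid
    return np.sum((f[1:] + f[:-1]) * np.diff(nu)) / 2.0

def sfr(L_IR_solar):
    """Kennicutt (1998) IR calibration (Eq. 3): SFR in Msun/yr."""
    return L_IR_solar / 5.8e9
```

Dividing the resulting SFR by the projected area of each ISM element then gives the surface density mapped in Fig. \[ModelResults\].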
This is consistent with the average SFR obtained by @Indebetouw2008 using the H$_2$ surface densities and the Schmidt-Kennicutt law (0.14 [$M_\odot$]{} yr$^{-1}$ kpc$^{-2}$ for N159/N160 and 0.11 [$M_\odot$]{} yr$^{-1}$ kpc$^{-2}$ for the southern molecular ridge to which the complex belongs). They find that their SFR estimate for N159/N160 corresponds, within uncertainties, to that derived using the @Calzetti2007 formula (i.e. a calibration from H$\alpha$ and 24 [$\mu$m]{} luminosities). We note that the SFR we derive varies significantly across the region. Peaks of SFR correspond to our three star-forming regions, with the N160 centre showing the highest star formation rate of the complex. We indicate the N159S region by a white cross in Fig. \[ModelResults\] (SFR panel). The region does not show massive on-going star formation. ------------------------------------------------ ![image](SPIRE_color_dependence){width="18cm"} ------------------------------------------------ --------------------------------------------------- ![image](SPIRE_color_color_diagram){width="13cm"} --------------------------------------------------- Submillimeter colours --------------------- @Bendo2011 studied [[*Herschel*]{}]{} colours (i.e. PACS or SPIRE surface brightness ratios) in three nearby galaxies (M81, M83 and NGC 2403) and found that the 70/160 colour was correlated with tracers of the star formation activity (thus that warm dust is primarily heated by the young stellar populations) while SPIRE ratios (250/350 or 350/500) were more strongly correlated with the galacto-centric radius of the galaxies. They concluded that cold dust could be primarily heated by the total stellar population (young stars + old stars). @Boquien2010 and @Galametz2010 studied similar correlations in M33 and NGC 6822 respectively and showed that these ratios were also strongly correlated with the star formation, traced by the 24 [$\mu$m]{} emission for instance.
In Section 4.4.1, we thus study the dependence of FIR colours on various parameters, including the mean radiation field intensities $<$U$>$ and the PAH fraction just derived from our modelling. Moreover, while FIR colours are often used as good proxies for dust temperatures, colour-colour relations can, on the other hand, provide additional information about the dust emissivity wavelength dependence [@Nagata2002; @Hibi2006; @Hirashita2009_2]. In Section 4.4.2, we analyse SPIRE colours as possible indicators of the cold dust emitting properties. ### Dependencies In order to investigate the drivers of submm colour variations in the N158-N159-N160 complex, we compare in Fig. \[SPIRE\_colour\_1\] the PACS and SPIRE surface brightness ratios 100/160, 160/250, 250/350 and 350/500 with: 1. the 3.6 [$\mu$m]{} surface brightness: dominated by emission from the old stellar populations, it is known as a good proxy of the stellar mass [see @Zhu2010; @Eskew2012 for recent studies], 2. the 24 [$\mu$m]{} hot dust emission, often used as a good calibrator for star formation [@Calzetti2007; @Rieke2009 among others], 3. the mean starlight heating intensity $<$U$>$ (values of U are normalised to that of the Milky Way), 4. the PAH fraction f$_{PAH}$. For this section as well as the following, we restrict ourselves to pixels with a 3-$\sigma$ detection in the three SPIRE bands. We note that part of the N159S region meets this signal-to-noise criterion. In Fig. \[SPIRE\_colour\_1\], we compute the linear regressions from each comparison using an ordinary least-squares regression (function [*linfit.pro*]{} of the IDL Astronomy Library).
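In outline (and only as a sketch), each panel's fit amounts to the following, with `np.polyfit` standing in for IDL's [*linfit.pro*]{}:

```python
import numpy as np

def fit_colour_relation(x, y):
    """Ordinary least-squares line y = a + b x, plus the root-mean-square
    error of the fit, RMSE = sqrt(sum((O_i - E_i)^2) / n)."""
    b, a = np.polyfit(x, y, 1)               # np.polyfit returns slope first
    expected = a + b * np.asarray(x)
    rmse = np.sqrt(np.mean((np.asarray(y) - expected) ** 2))
    return a, b, rmse
```

The slope `b` and the `rmse` are the two quantities quoted on each panel of Fig. \[SPIRE\_colour\_1\].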
On each plot, we provide the slope of the relation as well as the root-mean-square error defined as: $$RMSE=\sqrt{\frac{\Sigma~(O_i-E_i)^2}{n}}$$ with O$_i$ the observed ratios (or temperature colours), E$_i$ the ratios estimated from our fitting relations and n the number of ISM elements.\ We observe correlations in all cases, with a very tight relation between the 100-to-160 and 160-to-250 flux density ratios and the mean starlight intensity $<$U$>$ (lower RMSE for each line). We know that dust grains are heated by the interstellar radiation field and re-emit this energy in the infrared regime, so correlations between FIR colours and the radiation field intensity are expected. FIR colours are often used as good proxies of the average dust temperatures, even if FIR ratios using wavelengths shorter than 100 [$\mu$m]{} (the 70-to-100 ratio for instance) can potentially be biased by emission from stochastically heated dust grains [@Dale2001; @Bernard2008]. Contrary to ratios at shorter wavelengths that show stronger correlations with the 24 [$\mu$m]{} surface brightnesses, the SPIRE-only ratios follow the 3.6 and 24 [$\mu$m]{} surface brightnesses equally well. We finally note that the N159S region shows the lowest submm surface brightness ratios of the whole complex, with values of $\sim$0.75 for 100/160, $\sim$1.58 for 160/250, $\sim$2 for 250/350 and $\sim$2.35 for 350/500. ### SPIRE colour-colour diagrams Figure \[SPIRE\_colour\_2\] shows a SPIRE colour-colour diagram of the star-forming complex. We plot the 350-to-500 [$\mu$m]{} flux density ratios (hereafter 350/500 colour) of our resolved elements as a function of the 250-to-350 [$\mu$m]{} flux density ratios (hereafter 250/350 colour). In order to investigate variations in the submm colours with temperature, we separate the points into 4 panels (T$<$26, 26$<$T$<$28, 28$<$T$<$30 and T$>$30) and colour-code them with temperature, from cold (dark purple) to hot (red).
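The model lines of such colour-colour diagrams — colours of a single modified blackbody S$_{\nu} \propto \nu^{\beta}$B$_{\nu}$(T) — can be generated as follows (a sketch; constants in SI, wavelengths in $\mu$m):

```python
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

def mbb_colour(lam1_um, lam2_um, T, beta):
    """Flux density ratio S(lam1)/S(lam2) of a single modified blackbody."""
    nu1, nu2 = C / (lam1_um * 1e-6), C / (lam2_um * 1e-6)
    b = lambda nu: nu**3 / np.expm1(H * nu / (K * T))   # proportional to B_nu(T)
    return (nu1 / nu2) ** beta * b(nu1) / b(nu2)

# e.g. one point of a beta = 1.5 track:
colours = (mbb_colour(250., 350., 26., 1.5), mbb_colour(350., 500., 26., 1.5))
```

Sweeping T at fixed $\beta$ traces out one of the reference lines; in the Rayleigh-Jeans limit the colour tends to $(\lambda_2/\lambda_1)^{\beta+2}$.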
We use the resolved temperatures T$_c$ derived using our two-modified-blackbody model. The lines show modelled colours obtained from a single-temperature modified blackbody with an emissivity index $\beta$$_{singleBB}$ of 2, 1.5 or 1. We observe that SPIRE colours are well correlated with each other. Resolved elements with T$>$30 (fourth panel) have, on average, slightly lower 350/500 colours for a given 250/350 colour, so a “flatter" submm slope. This trend agrees with the anti-correlation observed between the emissivity index and the temperature [@Dupac2003; @Shetty2009] but could also result from temperature mixing effects along the line of sight in these elements. Given our error bars, we however consider this trend as marginal. We note that the ISM elements showing very low 250/350 colours (black squares) belong to N159S. As suggested by Fig. \[SPIRE\_colour\_2\], a single modified blackbody model with $\beta$$_{singleBB}$=2 cannot reproduce the colours we observe. In fact, many resolved elements reside around the $\beta$=1 model, thus at the very low end of the $\beta$ values quoted in the literature (1$<$$\beta$$<$2.5). This cautions the use of single-temperature fits to derive the intrinsic emissivity of the dust grains, since such models in reality provide an “effective" emissivity index resulting from the combination of various dust populations at different temperatures. Dust and gas masses across the complex -------------------------------------- ### Dust masses We show the dust surface density map $\Sigma$$_{dust}$ obtained with the “AC model" in Fig. \[ModelResults\] (lower right panel). The distribution of the dust surface density follows the distribution of the FIR/submm emission. We point out in particular the elongated structure of N159, correlated with the SPIRE and 870 [$\mu$m]{} emission.
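The sensitivity of the derived masses to the adopted carbon-dust opacity can be illustrated with the optically thin single-band estimate M = F$_{\nu}$d$^2$/($\kappa_{\nu}$B$_{\nu}$(T)). This is only a sketch — the @Galliano2011 procedure fits the full SED — and the opacity $\kappa_{\nu}$, precisely the quantity that differs between the “AC" and graphite grain mixes, is left as an input assumption:

```python
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants
M_SUN, D_LMC = 1.989e30, 50e3 * 3.086e16  # kg; 50 kpc in m

def planck(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def dust_mass(F_nu_Jy, lam_um, T, kappa_m2_kg):
    """Optically thin dust mass (Msun) from one submm flux density:
    M = F_nu d^2 / (kappa_nu B_nu(T)); kappa_m2_kg is the assumed opacity."""
    nu = C / (lam_um * 1e-6)
    return F_nu_Jy * 1e-26 * D_LMC**2 / (kappa_m2_kg * planck(nu, T)) / M_SUN
```

At fixed flux, a higher $\kappa$ or a higher T directly lowers the mass, which is the sense of the “AC" versus graphite comparison below.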
The total dust mass of the region we modeled is $\sim$2.1$\times$10$^{4}$ [$M_\odot$]{}, with 8.5$\times$10$^{3}$ [$M_\odot$]{} within resolved elements fulfilling the 3-$\sigma$ SPIRE-band detection[^4]. A significant amount of dust is present in N159S, a region whose emission starts to be detected at 160 [$\mu$m]{} with PACS (Fig. \[Spitzer\_Herschel\_LABOCA\_maps\]), thus a significant reservoir of cold dust. We refer to Section 5 for an individual SED modelling of N158, N159, N159S and N160 that allows us to derive individual dust masses for these H[ii]{} regions. The resolved dust masses we estimate are 2.8 times higher if we use the “standard model" of @Galliano2011, i.e. graphite grains to model carbon dust. This is consistent with the statistical distribution of the resolved dust mass ratio between the “AC model" and the “standard model" they find in a strip across the LMC (median of the distribution at $\sim$2.6). In fact, the opacity profile of AC is higher than graphite for $\lambda$$>$5[$\mu$m]{}. This implies that for a fixed starlight intensity, AC will have a lower temperature than graphite, thus lead to more mass. However, AC has a flatter submm slope (more emissivity in the submm regime[^5]), so we are able to fit the same observational constraints with slightly hotter grains but less mass. Indeed, the “AC model" invokes higher starlight intensities (median of $<$U$>$ across the complex: 8.6 in the “AC case", 4.1 in the “standard model"), and thus less dust than graphite to account for the same submm emission. We finally note that the errors on resolved dust masses are of the order of $\sim$50$\%$. This will introduce non-negligible uncertainties in our gas-to-dust mass ratio (G/D) estimates (c.f. Section 4.5.4). ### Comparison with the H[i]{} distribution High-resolution maps of the atomic gas reservoir have been built for the complete LMC. 
The H[i]{} data cube used in this section [from @Kim2003] combines data from ATCA, the Australian Telescope Compact Array [@Kim2003], and the Parkes single-dish telescope [@Staveley2003]. The individual channel maps have a velocity resolution of 1.649 km s$^{-1}$ and the H[i]{} map was obtained integrating the cube over the range 190 $<$ v$_{hel}$ $<$ 386 km s$^{-1}$, with a resolution of 1$'$. We refer to @Kim2003 and @Bernard2008 for further details on the LMC integrated map. Figure \[LABOCA\_HI\_CO\] (top) shows the H[i]{} map of the N158-N159-N160 complex, with 870 [$\mu$m]{} contours overlaid. The distribution of H[i]{} does not match that of the emission in the FIR (both [[*Spitzer*]{}]{} and [[*Herschel*]{}]{}) bands across the region. A clear offset is for instance observed between N159 and the closest H[i]{} peak located further north ($\sim$1.5$'$). A closer look at the regions indicates that the peaks of the 870 [$\mu$m]{} emission of the three regions N158, N160 and N159, as well as the PACS and SPIRE peaks, reside in fact in H[i]{} holes. This could be a signature of the H[i]{} to H$_2$ transition already observed in LMC or Galactic clouds [@Wong2009; @Lee2012 Tatton et al. in prep, among others], with H[i]{} being converted into H$_2$ either by thermal or gravitational instabilities or by shock compressions. Contrary to the other regions of the complex, the southern H[i]{} peak does spatially coincide with a peak in the 8 [$\mu$m]{} emission (see Fig. \[IRAC\_LABOCA\] bottom panel for a comparison between the 8 and the 870 [$\mu$m]{} emission in N159S). This peak is located $\sim$30 pc east of N159S.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![[*Top panel:*]{} The N158-N159-N160 complex observed in H[i]{} (in units of atom cm$^{-2}$). The H[i]{} map is a combination of ATCA [@Kim2003] and Parkes [@Staveley2003] observations. 870 [$\mu$m]{} contours are overlaid (0.02, 0.1, 0.25 and 0.4 Jy beam$^{-1}$). [*Bottom panel:*]{} ASTE CO(3-2) observation of N159 (in units of K km s$^{-1}$). Contours: same as top. For both images, North is up, east is left.[]{data-label="LABOCA_HI_CO"}](N159_HI_LABOCAover "fig:"){width="9.5cm"} ![[*Top panel:*]{} The N158-N159-N160 complex observed in H[i]{} (in units of atom cm$^{-2}$). The H[i]{} map is a combination of ATCA [@Kim2003] and Parkes [@Staveley2003] observations. 870 [$\mu$m]{} contours are overlaid (0.02, 0.1, 0.25 and 0.4 Jy beam$^{-1}$). [*Bottom panel:*]{} ASTE CO(3-2) observation of N159 (in units of K km s$^{-1}$). Contours: same as top.
For both images, North is up, east is left.[]{data-label="LABOCA_HI_CO"}](N159_CO_LABOCAover "fig:"){width="8.5cm"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ [m[7cm]{}m[9cm]{}]{} ![image](N159_GDRFukui_LABOCAover){height="7cm"} & ![image](Minimization){height="7.5cm"}\ ### CO in N159 and N159S In this section, we focus more particularly on the N159 region to study how the dust budget and the dust properties are related to the molecular gas. Several CO observations of the N159 nebula have been carried out so far. @Johansson1998 observed $^{12}$CO(J=1-0) with the SEST (Swedish-ESO Submillimeter Telescope). They obtained maps with a resolution of 45$''$, corresponding to $\sim$11 pc for N159. @Bolatto2000 also mapped the N158 / N159 / N160 complex in four transitions: $^{13}$CO(1-0), $^{12}$CO(2-1), $^{12}$CO(4-3), \[C[i]{}\]($^3$P$_1$ $\rightarrow$ $^3$P$_0$) also with SEST and the 1.7m AST/RO (Antarctic Submillimeter Telescope and Remote Observatory). A spatially resolved survey of the LMC was also performed with the NANTEN 4-m telescope in the 2.6 mm carbon monoxide emission with a 40 pc resolution by @Fukui1999 [@Fukui2008], permitting a study of the giant molecular clouds at global scales. Several observations of the CO(4-3) transition and of the CO(3-2) transition have been carried out in selected regions of the LMC.
These transitions have been observed in the N158 / N159 / N160 complex by @Mizuno2010 ($^{12}$CO(J=4-3) and $^{13}$CO(J=3-2)) and @Minamidani2008 with NANTEN2 (beam size at 490 GHz: 18$''$) and ASTE (Atacama Submillimeter Telescope Experiment, beam size at 350 GHz: 22-23$''$). Finally, the Magellanic Mopra Assessment (MAGMA) survey has recently completed a high angular resolution $^{12}$CO (1-0) mapping survey of giant molecular clouds across the LMC. The beam size of the instrument is 33$''$, which corresponds to a spatial resolution of $\sim$8 pc at the distance of the LMC. The two giant molecular clouds of N159 (N159E and N159W) as well as its southern region (N159S) are thus already mapped at high resolution in many transitions. Figure \[LABOCA\_HI\_CO\] shows the N159 region observed by ASTE in CO(3-2) with 870 [$\mu$m]{} contours overlaid. N159W is the strongest region and the eastern region breaks into 3 independent peaks. Compared to the other giant molecular clouds, N159S does not show any heating source and has little star formation activity. @Bolatto2000 also note the detection of diffuse low level emission in CO(2-1). N159E and N159W are associated with embedded star clusters with temperatures that were determined to be 70-80K while N159S shows a nearly uniform molecular temperature estimated to be $\sim$30K by @Mizuno2010. They also note that the molecular peak of the N159E region shows an elongated structure that is very well resolved at 870 [$\mu$m]{} with [LABOCA]{}. A slight shift appears between the peak of $^{12}$CO(4-3) in N159W and the 8 and 24 [$\mu$m]{} peaks. The bright peak of the [LABOCA]{} emission in N159W does not seem to show any shift with respect to the ASTE $^{12}$CO(3-2) map. They also compared the spatial distribution of CO with the [[*Spitzer*]{}]{} bands and found a fairly good correlation between the maps.
As mentioned before, a peak in the 8 [$\mu$m]{} emission, situated on the east side of N159S, corresponds to a peak in the H[i]{} distribution while a peak in the 870 [$\mu$m]{} emission (N159S) corresponds to a peak in the CO map. @Fukui2009 already noted the offset between the H[i]{} and CO peak for N159. They showed that the H[i]{} and CO distributions correlate well on a 40-100 pc scale and argued that the H[i]{} envelopes are gravitationally bound by giant molecular clouds. They also show that the correlation between the H[i]{} and CO distribution breaks down on a more resolved scale. They interpret it as an illustration of the conversion of warmer, low-opacity H[i]{} to colder high-opacity H[i]{} from which H$_2$ could form, as suggested in @Ott2008 and @Wong2009. ### Gas-to-dust mass ratios in N159 Assuming a close mixing of dust and gas and that the gas-to-dust mass ratio does not depend on whether we study a predominantly atomic or molecular phase, we can relate the dust mass surface density in each resolved element of N159 to the gas reservoir through: $$G/D=\frac{\Sigma_{HI}+X_{CO}I_{CO}}{\Sigma_{dust}}$$ with X$_{CO}$ the conversion factor in units of [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$ and $\Sigma$$_{HI}$ and $\Sigma$$_{dust}$ the H[i]{} and dust surface densities in [$M_\odot$]{} pc$^{-2}$.\ We first need to convert the ASTE CO(3-2) observation of the region N159 (shown in Fig. \[LABOCA\_HI\_CO\] bottom panel; private communication) to a CO(1-0) map. A mean CO(J=3-2)/CO(J=1-0) ratio (R$_{3-2/1-0}$) of 0.6 is usually derived in galaxy disks [@Warren2010] but R$_{3-2/1-0}$ can span a larger range. @Minamidani2008 investigate this ratio in 10 CO clumps of N159. This H[ii]{} region being a hot and low-metallicity environment, they find values ranging between 0.7 and 1.4 (so higher than the median 0.6 value), with R$_{3-2/1-0}$=0.7 in the core of N159S and 1.1 in the cores of N159W and N159E.
No clear North-South trend is observed however. We use the average R$_{3-2/1-0}$ value of the 10 clumps ($\sim$0.9) to convert the CO(3-2) map into a CO(1-0) map. From the H[i]{} map and the derived CO(1-0) map, we can then estimate the total gas surface density in each resolved element.\ We use a constant X$_{CO}$ factor to convert the CO intensity into a molecular hydrogen mass surface density. @Fukui2009 performed a survey of molecular clouds in the LMC and estimate X$_{CO}$ to be $\sim$7 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, so three times the Galactic value of $\sim$2.3 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. We use the dust mass map obtained with the “AC model" as our dust reference map and convolve the CO and dust surface density maps to the resolution of the H[i]{} map using Gaussian kernels. Figure \[DGR\] (left) shows the gas(H[i]{}+CO)-to-dust mass ratio map obtained with these hypotheses, with a mean gas-to-dust mass ratio (G/D) value of 384 (with a non-negligible uncertainty given the 50$\%$ errors on dust masses). As a reference, the Galactic value of G/D is $\sim$157 [@Zubko2004], so the value we derive is 2.4 times the Galactic value. The total H[i]{} mass contained in the colored ISM elements of Fig. \[DGR\] is 4.3 $\times$ 10$^{5}$ [$M_\odot$]{} while the molecular gas mass is 1.1 $\times$ 10$^{6}$ [$M_\odot$]{} (thus 2.6 times higher than the atomic mass). The total dust mass in these ISM elements is 4.3 $\times$ 10$^{3}$ [$M_\odot$]{}, leading to a globally derived average of 356, consistent with the value derived on a resolved basis in spite of the local variations. We note that given the higher contribution of the H$_2$ to the total gas budget, using a slightly different (but constant) X$_{CO}$ factor would simply translate into a crude scaling of Fig. \[DGR\] (left) to higher or lower gas-to-dust mass ratio ranges but would barely affect its qualitative distribution.
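The resolved G/D computation above reduces to a few lines. This is a sketch, with the mean R$_{3-2/1-0}$=0.9 and the @Fukui2009 X$_{CO}$ as defaults and no helium correction, consistently with the quoted conversion 8.6 [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$ = 5.4 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$:

```python
import numpy as np

# 2 m_H per H2 molecule, no helium: Msun pc^-2 per (H2 cm^-2)
MSUN_PC2_PER_CM2 = 2 * 1.674e-24 * (3.086e18) ** 2 / 1.989e33

def gas_to_dust(sigma_HI, I_CO32, sigma_dust, R_32_10=0.9, X_CO=7.0e20):
    """G/D = (Sigma_HI + Sigma_H2) / Sigma_dust, surface densities in
    Msun pc^-2; I_CO32 in K km/s, X_CO in H2 cm^-2 (K km/s)^-1."""
    sigma_H2 = X_CO * (I_CO32 / R_32_10) * MSUN_PC2_PER_CM2
    return (sigma_HI + sigma_H2) / sigma_dust
```

All arguments can equally be maps (NumPy arrays) rather than scalars, which gives the resolved G/D map of Fig. \[DGR\] (left) directly.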
Finally, dust masses are 2.8 times higher when graphite is used to model carbon dust (“graphite model"), leading to G/D values smaller by the same factor (G/D$\sim$127). Studies of @Franco_Cox_1986, @Lisenfeld_Ferrara_1998 or @James2002, among others, show that the G/D is inversely proportional to the metallicity of the galaxy. The G/D values obtained when graphite is used in the SED modelling are thus too low compared to those expected in metal-poor environments like the LMC.\ If we now assume that G/D is constant across the region, we can use the technique proposed by @Sandstrom2012_2 in order to derive the X$_{CO}$ factor that minimises the scatter of the logarithm of the G/D across the region. This technique minimises the effects of outliers on the measured scatter. As suggested in @Sandstrom2012_2, we tabulate X$_{CO}$ values from 0.1 to 100 [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$ (thus from 6.2 $\times$ 10$^{18}$ to 6.2 $\times$ 10$^{21}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$) and estimate the scatter of the logarithm of the G/D for each value of X$_{CO}$. The scatter for a given X$_{CO}$ value is estimated using the IDL [*biweight$\_$mean.pro*]{} function. We finally take the X$_{CO}$ value that minimises the scatter (Fig \[DGR\] - right). The minimization occurs for X$_{CO}$=8.6 [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$ (= 5.4 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$), thus 2.3 times the Galactic conversion factor and 1.3 times lower than that derived by @Fukui2009. This is slightly higher than the factor derived by @Pineda2009 using CO observations only (and the virial mass/density profile of CO clumps) for the molecular ridge or the dust-based value derived by @Leroy2011 ($\sim$4 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$) for the LMC using [[*Spitzer*]{}]{}/SAGE data only. We caution that @Sandstrom2012_2 apply the minimization method to unresolved molecular clouds.
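The scatter-minimisation procedure can be sketched as follows; the plain standard deviation stands in for the biweight scatter estimator of [*biweight$\_$mean.pro*]{}, and X$_{CO}$ is expressed in [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$:

```python
import numpy as np

def best_XCO(sigma_HI, I_CO10, sigma_dust,
             grid=np.logspace(-1.0, 2.0, 200)):
    """Tabulate X_CO from 0.1 to 100 Msun pc^-2 (K km/s)^-1 and return the
    value minimising the scatter of log10(G/D) across the map."""
    scatter = [np.std(np.log10((sigma_HI + x * I_CO10) / sigma_dust))
               for x in grid]
    return grid[int(np.argmin(scatter))]
```

With a robust (biweight) estimator in place of `np.std`, outlying ISM elements would weigh less in the measured scatter, which is the point of the original method.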
X$_{CO}$ is, in that case, a statistical quantity involving a cloud ensemble, namely a measure of the covering factor of CO clouds in more extended molecular structures. This method favours CO-bright gas and ignores CO-dark gas, which could naturally lead to X$_{CO}$ values close to those of the Solar Neighbourhood. This technique may not be applicable to the individual molecular clouds or cloud complexes we resolve in the LMC. Further studies on the X$_{CO}$ factor and potential CO-dark gas reservoirs using dust masses as a tracer of the gas masses will be performed in a future study. Thermal emission at 870 [$\mu$m]{} ================================== As mentioned previously, the LABOCA data reduction might remove faint diffuse emission in the outskirts of the complex. In this section, we select 24 regions across the complex where the 870 [$\mu$m]{} emission is bright enough not to be affected by the filtering steps and analyse the thermal dust emission at 870 [$\mu$m]{} in those regions. Selected regions and correction of radio contamination ------------------------------------------------------ We select individual regions across the complex that span a wide variety of environments, from intensely star-forming regions to a more quiescent ISM, to study the detailed dust properties with the empirical SED model of @Galliano2011. Figure \[SED\_main\] (top left) shows those regions. The circles indicate the photometric apertures we use. We detail the characteristics (centers and sizes) of those apertures in Table \[Regions\_Fluxes\] along with the [[*Herschel*]{}]{} and LABOCA 870 [$\mu$m]{} flux densities of the individual regions. [[*Herschel*]{}]{} errors are estimated using a combination in quadrature of calibration uncertainties and measurement uncertainties (background effects, choice of the aperture). [LABOCA]{} errors are quantified from the rms maps produced by the reduction pipeline.
The 870 [$\mu$m]{} flux densities we quote in Table \[Regions\_Fluxes\] contain non-dust contributions, especially free-free radio emission. 4.8 GHz (resolution 30$''$) and 8.6 GHz (resolution 15$''$) images of the entire LMC have been obtained by @Dickel2005 using ATCA, which allows us to quantify the radio contamination. These observations reveal strong radio emission in 30 Dor (the peak of the radio emission of the entire LMC) as well as in the 3 bright star-forming regions N158, N159 and N160 (see Fig. \[SED\_main\], top right). They also indicate radio emission in two compact knots in the south-east of N160, spatially correlated with peaks in the IR emission (as well as peaks in the SFR or $<$U$>$ maps) and corresponding to local decreases of the PAH fraction (see Fig. \[ModelResults\]). We convolve the ATCA maps to the SPIRE 500 [$\mu$m]{} resolution using a Gaussian kernel and estimate the 4.8 and 8.6 GHz fluxes (at 6.25 and 3.5 cm respectively) within our apertures for the 7 regions showing radio emission, namely regions 1, 1a, 1b, 1c, 2, 3 and 5.
With these two constraints, we perform a regression analysis (L$_{radio}$ $\propto$ $\nu$$^{-0.1}$) and estimate a free-free contamination of $\sim$9.6 $\%$, 9.0 $\%$ and 13 $\%$ of the 870 [$\mu$m]{} fluxes in regions 2, 3 and 5 respectively, and 10.4 $\%$ in region 1 (N159), with individual contaminations of 8.0 $\%$, 13 $\%$ and 5.6 $\%$ in the subregions 1a, 1b and 1c respectively.

![image](N159_LABOCA_regions){width="8cm"} ![image](N159_ATCA_4_8){width="8cm"}

![image](SED_main){width="14cm"}

![image](SED_secondaires){width="15cm"}

Results
-------

We first apply our two-MBB model ($\beta$$_c$=1.5) to derive the cold dust temperature of our selected regions, using the integrated flux densities tabulated in Table \[Regions\_Fluxes\] but correcting the 870 [$\mu$m]{} flux for radio contamination. We also model the selected regions using the “AC” model of @Galliano2011 described in Section 3. For the 7 regions where radio emission is detected, we keep the 870 [$\mu$m]{} flux densities of Table \[Regions\_Fluxes\] (i.e. flux densities including non-dust contamination) but include the radio component in the modelling procedure, to allow the visualization of this contribution in the SEDs of those regions. This component is constrained by our two ATCA data points. As mentioned in the previous section, the free-free contamination is below 10$\%$ in the H[ii]{} regions and negligible outside their cores. Figure \[SED\_main\] (bottom panel) shows the SEDs of the regions detected in radio (and modelled with a free-free component), while Figure \[SED\_second\] shows the SEDs obtained in the other regions. We observe variations in the SED shapes from region to region.
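A minimal sketch of a two-temperature modified-blackbody model of the kind applied above: $\beta_c$ = 1.5 follows the text, while the warm-component index $\beta_w$ = 2 and the example amplitudes and temperatures are placeholder assumptions.

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def planck(nu, t):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def two_mbb(lam_um, a_w, t_w, a_c, t_c, beta_w=2.0, beta_c=1.5):
    """Warm + cold modified blackbodies: S ~ A * nu^beta * B_nu(T)."""
    nu = C / (lam_um * 1e-6)
    return (a_w * nu**beta_w * planck(nu, t_w)
            + a_c * nu**beta_c * planck(nu, t_c))
```

In practice the two amplitudes and temperatures would be fit by least squares to the 24–870 [$\mu$m]{} fluxes of each aperture, with the cold component dominating the submm bands.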
The SEDs of the brightest H[ii]{} regions (regions 1, 2, 3, 5) show ‘flat’ profiles, indicating a warmer dust temperature range. The main differences in the SED shape occur in the mid-infrared regime, where the 24-to-70 slope varies strongly from one region to another, with higher 24/70 colours in H[ii]{} regions and lower ratios in the quiescent ISM (regions 12, 19 or 21 for instance). Region 6 (N159S) and region 21, associated with the same molecular cloud, have strongly peaked SEDs due to the absence of star formation in the region. Table \[SED\_model\_results\] summarises the most important parameters obtained from the two fitting procedures: the cold temperature T$_{dust}$, the dust mass M$_{dust}$, the PAH fraction f$_{PAH}$ and the three parameters characterising the radiation fields (the minimum and maximum intensities of the radiation field, U$_{min}$ and U$_{max}$, and $\alpha$, the index describing the fraction of dust exposed to a given intensity). We also add the dust masses obtained when a “graphite model” is applied to the data, for comparison. As mentioned previously, the diffuse ISM shows dust temperatures of $\sim$26K on average, while N158, N159 and N160 show warmer dust temperatures ($>$30K). There is thus a clear evolution of the dust temperature range from bright star-forming regions like N159 (region 1), through intermediate star-forming regions like regions 3, 17 and 16, to very quiescent regions like region 10 or 21. The dust masses are systematically lower when using amorphous carbon in lieu of graphite to model carbon dust, 2.8 times lower on average. Indeed, the “amorphous carbon” model requires less cold dust to account for the same submm emission. This has significant consequences for the study of the gas-to-dust mass ratio in the star-forming complex (see Section 6 for discussion). We finally note that the values derived using graphite are of the same order as those derived by @Rantakyro2005 using SIMBA data at 1.2mm.
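The sensitivity of the derived dust mass to the assumed grain emissivity can be illustrated with the standard optically thin estimate M$_{dust}$ = S$_\nu$ d$^2$ / ($\kappa_\nu$ B$_\nu$(T)); the opacity values below are placeholders chosen only to reproduce a ratio near 2.8, not the actual @Galliano2011 opacities.

```python
import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # SI constants
MSUN = 1.989e30                            # kg

def planck(nu, t):
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def dust_mass(s_jy, lam_um, t_dust, kappa_m2_kg, d_m=50 * 3.086e19):
    """Optically thin dust mass (Msun) from a flux density S_nu (Jy)
    at an assumed LMC distance of ~50 kpc."""
    nu = C / (lam_um * 1e-6)
    s_si = s_jy * 1e-26                    # Jy -> W m^-2 Hz^-1
    return s_si * d_m**2 / (kappa_m2_kg * planck(nu, t_dust)) / MSUN

# A more emissive (higher kappa) amorphous carbon needs less mass than
# graphite to reproduce the same 870 um flux (kappa values hypothetical):
m_ac = dust_mass(3.0, 870.0, 25.0, kappa_m2_kg=0.045)
m_gr = dust_mass(3.0, 870.0, 25.0, kappa_m2_kg=0.016)
```

Since the mass scales as 1/$\kappa_\nu$ at fixed temperature, the graphite-to-AC mass ratio is simply the inverse ratio of the assumed opacities.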
The $\alpha$ values for the diffuse regions very often reach the limit of 2.5 fixed in the model for this parameter. This is consistent with the assumption that in the diffuse medium the heating intensity falls as r$^{-2}$ away from the heating source, so dU/dr evolves as r$^{-3}$. In a uniform medium where dM$_{dust}$/dr evolves as r$^2$, dM$_{dust}$/dU is thus proportional to U$^{-2.5}$, so $\alpha$=2.5 [@Dale2001]. To perform a global SED modelling of nearby spiral galaxies, @Draine2007 and @Dale2012 advise fixing the maximum heating intensity to 10$^6$ in order to limit the number of free parameters [it is fixed to 10$^{7}$ in @Aniano2012]. Given the number of data points we have to perform the modelling, we decide not to fix the maximum heating intensity of the radiation field U$_{max}$ in our study, so as to investigate its possible variations from one region to another. Our U$_{max}$ values are not allowed to exceed 10$^{7}$ during the modelling. Only two regions reach this maximum limit. N158, N159 and N160 show maximum intensities of $\sim$10$^4$-10$^5$, while the diffuse ISM shows lower maximum intensities. As suggested by their peaked SED shapes, the two southern regions (N159S and region 23) have a very narrow (and low) range of radiation field intensities, with \[U$_{min}$, U$_{max}$\] ranges of \[3.8-4.8\] for region 6 and \[3.0-4.0\] for region 23. As observed previously, the PAH fraction is very low in the H[ii]{} regions compared to the diffuse ISM around those regions. Finally, we also estimate the H[i]{} mass in each aperture. Values are tabulated in Table \[SED\_model\_results\]. As mentioned previously, the value of the Galactic G/D is 157. The LMC having a lower metallicity, its G/D should be higher than this value. We observe a large spread in the H[i]{}/D values from one region to another, and low ratios in H[ii]{} regions, which is not surprising.
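The $\alpha$ = 2.5 scaling argument quoted above can be checked symbolically: with U $\propto$ r$^{-2}$ and dM $\propto$ r$^2$ dr, eliminating r gives dM/dU $\propto$ U$^{-5/2}$.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
U_of_r = r**-2                        # heating intensity falls as r^-2
dM_dr = r**2                          # uniform medium: dM_dust/dr ~ r^2
dU_dr = sp.diff(U_of_r, r)            # -2 r^-3
dM_dU = sp.simplify(dM_dr / dU_dr)    # -r^5 / 2

# Re-express in terms of U using r = U^(-1/2):
U = sp.symbols('U', positive=True)
dM_dU_of_U = dM_dU.subs(r, U**sp.Rational(-1, 2))
# |dM/dU| ~ U^(-5/2), i.e. alpha = 2.5
```

The sign only reflects that U decreases as r increases; the magnitude carries the U$^{-2.5}$ power law assumed in the model.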
Indeed, the whole region being a very large molecular complex, the molecular gas is expected to be a significant fraction of the total gas mass in those regions as well. As previously mentioned, further investigations including CO observations of the whole complex will be presented in a future paper.

SPIRE/LABOCA colour-colour diagram
----------------------------------

Figure \[SPIRE\_LABOCA\_colour\] shows S$_{350}$/S$_{870}$ versus S$_{250}$/S$_{350}$. We remove the radio contamination from the LABOCA flux densities used in this plot. The different lines are colours obtained from single-temperature modified blackbodies with emissivity indices of 2.5, 2, 1.5 and 1, generated with temperatures ranging from 5 K (left) to 70 K (right). The bright star-forming regions have higher S$_{250}$/S$_{350}$, consistent with regions of higher dust temperature. They also show lower S$_{350}$/S$_{870}$ ratios. N158 (region 3), N159 (region 1) and N160 (region 2) thus present “flatter” submm slopes than regions of the diffuse ISM (i.e. higher 870 [$\mu$m]{} flux density for a fixed 350 [$\mu$m]{} flux density). Region 5 does not appear in the diagram because of its very high 350-to-870 colour. Figure \[SED\_main\] (bottom panel) suggests that the 870 [$\mu$m]{} flux density we derive could be underestimated compared to what is expected from the SPIRE bands. In Fig. \[SPIRE\_LABOCA\_colour\], we indicate by a red cross where the region would fall in the plot if the modelled 870 [$\mu$m]{} value (derived from our fit) was used instead of the flux density derived directly from the 870 [$\mu$m]{} map. ![SPIRE/LABOCA colour-colour diagram. The 350-to-870 [$\mu$m]{} flux density ratio is plotted as a function of the 250-to-350 [$\mu$m]{} flux density ratio. The brightest star-forming regions of the complex (1 to 6) appear in red. The individual subregions of N159 (1a, 1b and 1c) are in orange. Regions 7 to 24 are in green.
The lines show the colours obtained with single-temperature modified blackbodies with emissivity indices of 2.5, 2, 1.5 and 1. Radio contamination has been removed from the 870 [$\mu$m]{} flux densities. The red cross indicates where region 5 would fall if the modelled 870 [$\mu$m]{} value was used instead of the flux density we estimate.[]{data-label="SPIRE_LABOCA_colour"}](Color_SPIRE_LABOCA){width="9.2cm"}

About submm excess
------------------

Ground-based data and now Herschel observations of low-metallicity environments at submm wavelengths have helped us to better probe the cold phases of dust and investigate variations in their properties. These observations have led to the detection of a “submm excess” [@Galliano2005; @Marleau2006; @Galametz2009], namely submm emission higher than that predicted by FIR data and standard dust models [@Draine_Li_2007 for instance]. Several interpretations have been proposed to explain this excess. It could be linked with a different dust component: very cold dust [e.g. @Galliano2003; @Galliano2005; @Galametz2010], emission from “spinning dust” [@Ysard2010] or magnetic dipole radiation from magnetic nano-particles [@Draine2012]. The excess could also be linked with a modification of the dust emissivity properties at colder temperatures [@Meny2007; @Paradis2010]. Several studies have been dedicated to the investigation of the submm and millimetre excess in the LMC. @Bot2010 found excess millimetre emission while modelling the integrated SED of the LMC using FIR and radio data. This excess was explained by possible emission from “spinning dust”. However, this hypothesis could require extreme excitation conditions for the small dust grains expected to be the drivers of such emission. Recent [*Planck*]{} observations of the LMC were analysed in @Planck_collabo_2011_MagellanicClouds and also led to the detection of a millimetre excess longward of 500 [$\mu$m]{}. They found that CMB fluctuations would be sufficient to explain the excess.
Using [[*Herschel*]{}]{} observations of a large strip across the LMC, @Gordon2010 showed, on a more resolved scale, that the [[*Herschel*]{}]{} 500 [$\mu$m]{} emission was $\sim$10$\%$ higher than expected from extrapolating models fitted at shorter wavelengths (using a modified blackbody with an emissivity index of $\beta$=1.5) and found a weak anti-correlation of the excess with the MIPS 24 [$\mu$m]{} emission or the total gas surface density. Using the resolved SED modelling technique we use in this paper (and AC to model carbon dust), @Galliano2011 confirmed the detection of a 500 [$\mu$m]{} extended emission excess in the same strip across the LMC, with an average relative amplitude of 15 $\%$ (and variations up to 40 $\%$). They found that this resolved submm excess anti-correlates well with the dust mass surface density and is probably not linked with very cold dust or CMB fluctuations. The SEDs presented in Fig. \[SED\_main\] and Fig. \[SED\_second\] do not show excess emission longward of 500 [$\mu$m]{}. Our model seems sufficient to properly fit the observations in the strongly star-forming regions as well as in the more diffuse regions we modelled, without requiring any extra component to explain the submm emission. We nevertheless note that, unlike @Gordon2010 or @Galliano2011, we do not study regions with very low surface densities in this analysis. If the submm excess is hiding in low surface brightness regions, we will not be able to detect it in our star-forming complex. Low surface brightness structures are also the regions where our reduction of the LABOCA data is the most likely to have removed signal, which could explain the non-detection.

Conclusions
===========

In this paper, we combine [[*Spitzer*]{}]{} Space Telescope, [[*Herschel*]{}]{} Space Observatory and LABOCA mid-infrared to submm data (from 3.6 to 870 [$\mu$m]{}) of the resolved massive star-forming region N158-N159-N160 located in the Large Magellanic Cloud.
- Using a resolved 2MBB fitting technique with $\beta$$_c$ fixed to 1.5, we find an average temperature of $\sim$27K across the region, with a maximum in N160. Using data from 3.6 to 500 [$\mu$m]{}, a physical SED modelling and the hypothesis of amorphous carbon to model carbon dust, we also derive maps of the dust temperature distribution, the average starlight intensity, the PAH fraction, the star formation rate and the dust mass surface densities. The PAH fraction strongly decreases in our H[ii]{} regions. This decrease coincides with peaks in the mean radiation field intensity map, which is consistent with the well-known destruction of PAHs by strong radiation fields. The absence of PAHs could also be linked with the young age of the clusters, namely that the regions could be too young to have formed any PAHs.

- We analyse how the submm colours vary across the star-forming complex and study their dependence on the surface brightnesses and the radiation field intensity $<$U$>$. We find that the submm ratios correlate strongly with $<$U$>$ and that, while the 100/160 and 160/250 colours correlate more strongly with the 24 [$\mu$m]{} brightness, the 250/350 and 350/500 colours correlate equally with the 3.6 and the 24 [$\mu$m]{} surface brightnesses. Possible variations of the emissivity with temperature have been investigated using SPIRE colour-colour diagrams. No clear trend is observed.

- The dust surface densities follow the FIR distribution, and a non-negligible amount of dust is detected in N159S. We find a total dust mass of 2.1 $\times$ 10$^4$ [$M_\odot$]{} in the resolved elements we model (and 2.8 times more dust if graphite is used to model carbon dust).

- Considering that gas and dust are closely mixed in the N159 region, we use our dust mass map to compare the distribution of dust with those of the tracers of the atomic gas in the complex, to investigate variations in the atomic+molecular gas-to-dust ratio in N159.
We use a CO(3-2) map of N159 to quantify the molecular reservoir in that region. A mean gas-to-dust ratio of $\sim$356 is derived for the N159 complex when using the X$_{CO}$ conversion factor of @Fukui2009 (=7 $\times$ 10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$). If we consider a constant G/D in the complex, we can apply the minimisation technique of @Sandstrom2012_2 to inversely derive the X$_{CO}$ conversion factor of the complex that minimises the D/G scatter in N159. The D/G scatter is minimised when the conversion factor X$_{CO}$ is equal to 8.6 [$M_\odot$]{} pc$^{-2}$ (K km s$^{-1}$)$^{-1}$ (=5.4$\times$10$^{20}$ H$_2$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$).

- We finally model the SEDs of 24 selected regions, now including the 870 [$\mu$m]{} data in the fitting, and describe the variations in the dust thermal emission (thus in the SED shape) across the complex. We show that our “AC” model is sufficient to explain the submm emission we observe at 870 [$\mu$m]{}.

Acknowledgments {#acknowledgments .unnumbered}
===============

First, we would like to thank the referee for his/her careful reading of the paper. M.R. wishes to acknowledge support from FONDECYT (Chile) grant No. 1080335. PACS has been developed by MPE (Germany); UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff Univ. (UK) and including: Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA).
This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA).

  ------------ ------------- -------------- ---------- ---------------- ------------------ ------------------ ------------------ ---------------- ---------------- ---------------- --------------
  Region  RA (J2000)  DEC (J2000)  Radius (arcsec)  MIPS 24 (Jy)  MIPS 70 (Jy)  PACS 100 (Jy)  PACS 160 (Jy)  SPIRE 250 (Jy)  SPIRE 350 (Jy)  SPIRE 500 (Jy)  LABOCA $^a$ (Jy)
  1 (N159)  05 39 46.65  -69 45 39.94  220  243.5$\pm$24.3  2363.5$\pm$236.3  3226.9$\pm$163.5  2427.2$\pm$121.4  995.4$\pm$70.2  424.2$\pm$29.9  162.5$\pm$11.5  22.9$\pm$1.6
  1a (N159E)  05 40 10.98  -69 44 35.84  64  64.3$\pm$6.4  477.9$\pm$47.8  579.5$\pm$29.1  402.0$\pm$20.1  164.1$\pm$11.6  70.7$\pm$5.0  27.5$\pm$2.0  5.7$\pm$0.1
  1b  05 39 52.10  -69 45 23.17  36  28.1$\pm$2.8  170.3$\pm$17.0  231.8$\pm$11.6  152.4$\pm$7.6  57.7$\pm$4.1  23.9$\pm$1.7  9.1$\pm$0.6  1.6$\pm$0.1
  1c (N159W)  05 39 32.51  -69 46 02.74  68  48.7$\pm$4.9  481.2$\pm$48.1  651.8$\pm$32.7  487.7$\pm$24.4  202.6$\pm$14.4  82.6$\pm$5.9  30.4$\pm$2.2  5.9$\pm$0.2
  2 (N160)  05 39 38.63  -69 39 06.79  110  144.7$\pm$14.5  1073.3$\pm$107.3  1362.3$\pm$68.5  912.9$\pm$45.6  359.2$\pm$25.5  153.6$\pm$10.9  58.9$\pm$4.2  10.2$\pm$0.4
  3 (N158)  05 39 11.22  -69 30 13.65  110  68.3$\pm$6.8  572.9$\pm$57.3  770.2$\pm$39.1  540.8$\pm$27.0  216.2$\pm$15.3  94.4$\pm$6.7  37.1$\pm$2.6  5.8$\pm$0.4
  4  05 40 49.41  -69 44 48.22  110  4.7$\pm$0.5  134.5$\pm$13.4  226.2$\pm$12.5  217.0$\pm$10.9  106.88$\pm$7.5  49.6$\pm$3.5  20.2$\pm$1.4  2.4$\pm$0.4
  5  05 40 22.25  -69 40 33.51  110  24.5$\pm$2.4  235.6$\pm$23.6  394.6$\pm$20.6  309.6$\pm$15.5  135.2$\pm$9.6  61.9$\pm$4.4  25.2$\pm$1.8  2.5$\pm$0.4
  6 (N159S)  05 40 03.75  -69 51 01.62  110  2.8$\pm$0.3  76.3$\pm$7.6  166.2$\pm$9.6  193.9$\pm$9.7  107.3$\pm$7.5  50.4$\pm$3.5  20.4$\pm$1.4  3.0$\pm$0.4
  7  05 39 30.65  -69 36 37.42  55  4.9$\pm$0.5  93.1$\pm$9.3  159.6$\pm$8.2  143.8$\pm$7.2  67.4$\pm$4.7  31.2$\pm$2.2  12.6$\pm$0.9  1.8$\pm$0.1
  8  05 40 04.63  -69 37 59.86  55  7.4$\pm$0.7  102.6$\pm$10.3  138.6$\pm$7.1  118.2$\pm$5.9  53.9$\pm$3.8  25.1$\pm$1.8  10.3$\pm$0.7  1.6$\pm$0.1
  9  05 38 41.55  -69 24 58.82  55  4.8$\pm$0.5  60.7$\pm$6.1  84.8$\pm$4.5  71.9$\pm$3.6  34.0$\pm$2.4  15.4$\pm$1.1  6.3$\pm$0.5  1.1$\pm$0.1
  10  05 39 49.03  -69 26 26.38  55  6.4$\pm$0.6  68.2$\pm$6.8  100.9$\pm$5.3  76.4$\pm$3.8  32.6$\pm$2.3  14.5$\pm$1.0  5.8$\pm$0.4  1.1$\pm$0.1
  11  05 38 45.73  -69 27 53.10  55  3.7$\pm$0.4  45.2$\pm$4.5  78.0$\pm$4.2  65.0$\pm$3.3  30.1$\pm$2.1  14.2$\pm$1.0  5.9$\pm$0.4  0.8$\pm$0.1
  12  05 38 13.31  -69 30 39.46  55  1.0$\pm$0.1  20.5$\pm$2.0  38.9$\pm$2.3  40.2$\pm$2.0  19.2$\pm$1.4  8.9$\pm$0.6  3.7$\pm$0.3  0.4$\pm$0.1
  13  05 40 00.70  -69 31 04.85  55  2.0$\pm$0.2  44.9$\pm$4.5  66.8$\pm$3.6  59.6$\pm$3.0  27.3$\pm$1.9  12.3$\pm$0.9  4.9$\pm$0.4  0.6$\pm$0.1
  14  05 40 09.44  -69 32 44.60  55  4.1$\pm$0.4  68.8$\pm$6.9  103.0$\pm$5.4  82.6$\pm$4.1  36.2$\pm$2.5  16.1$\pm$1.1  6.3$\pm$0.5  0.8$\pm$0.1
  15  05 40 51.61  -69 32 20.98  55  1.5$\pm$0.1  30.1$\pm$3.0  38.6$\pm$2.3  33.2$\pm$1.7  15.7$\pm$1.1  7.0$\pm$0.5  2.8$\pm$0.2  0.3$\pm$0.1
  16  05 38 17.88  -69 33 37.52  55  1.4$\pm$0.1  28.4$\pm$2.8  45.0$\pm$2.5  43.9$\pm$2.2  22.6$\pm$1.6  11.0$\pm$0.8  4.5$\pm$0.3  0.7$\pm$0.1
  17  05 38 55.03  -69 34 36.05  55  2.6$\pm$0.3  53.8$\pm$5.4  86.8$\pm$4.6  78.3$\pm$3.9  37.6$\pm$2.6  17.2$\pm$1.2  6.8$\pm$0.5  1.0$\pm$0.1
  18  05 40 54.56  -69 38 04.19  55  1.3$\pm$0.1  18.8$\pm$1.9  38.6$\pm$2.4  34.0$\pm$1.7  16.6$\pm$1.2  7.8$\pm$0.6  3.3$\pm$0.2  0.5$\pm$0.1
  19  05 40 58.54  -69 42 04.22  55  0.7$\pm$0.1  26.5$\pm$2.6  34.7$\pm$2.1  33.1$\pm$1.7  17.2$\pm$1.2  8.2$\pm$0.6  3.5$\pm$0.3  0.3$\pm$0.1
  20  05 41 03.54  -69 46 36.87  55  1.0$\pm$0.1  33.9$\pm$3.4  49.3$\pm$2.8  48.5$\pm$2.4  25.5$\pm$1.8  11.9$\pm$0.8  4.9$\pm$0.3  0.8$\pm$0.1
  21  05 41 40.44  -69 48 36.23  55  0.5$\pm$0.1  20.0$\pm$2.0  30.6$\pm$2.3  30.4$\pm$1.5  15.5$\pm$1.1  7.1$\pm$0.5  2.9$\pm$0.2  0.4$\pm$0.1
  22  05 38 00.17  -69 42 32.98  55  0.7$\pm$0.1  19.8$\pm$2.0  31.2$\pm$1.9  32.4$\pm$1.6  16.8$\pm$1.2  7.9$\pm$0.6  3.2$\pm$0.2  0.4$\pm$0.1
  23  05 40 13.59  -69 53 12.56  55  0.4$\pm$0.1  8.34$\pm$0.8  22.9$\pm$1.6  26.7$\pm$1.3  15.5$\pm$1.1  7.3$\pm$0.5  2.9$\pm$0.2  0.3$\pm$0.1
  24  05 41 54.74  -69 45 30.83  55  0.4$\pm$0.1  7.3$\pm$0.7  16.1$\pm$1.8  17.9$\pm$0.9  10.5$\pm$0.8  5.0$\pm$0.4  2.1$\pm$0.2  0.5$\pm$0.1
  ------------ ------------- -------------- ---------- ---------------- ------------------ ------------------ ------------------ ---------------- ---------------- ---------------- --------------

  : MIPS, PACS, SPIRE and LABOCA flux densities[]{data-label="Regions_Fluxes"}

$^a$ Radio contamination is not subtracted from those values.

  ------------ --------- ----------------------------------------------------- ----------- ---------- ----------- ----------------------- ----------
  Region  T$_{c}$ (K)  M$_{dust}$ ([$M_\odot$]{})  f$_{PAH}$  $\alpha$  U$_{min}$  U$_{max}$  H[i]{}/D
  1 (N159)  34.7  3.08 $\times$ 10$^{3}$ (8.67 $\times$ 10$^{3}$)$^a$  0.37  2.38  6.9  1.2 $\times$ 10$^{5}$  165
  1a (N159E)  27.7  8.01 $\times$ 10$^{2}$ (3.40 $\times$ 10$^{3}$)  0.27  1.94  2.2  3.5 $\times$ 10$^{3}$  55
  1b  34.7  1.71 $\times$ 10$^{2}$ (6.03 $\times$ 10$^{2}$)  0.23  2.26  8.2  1.0 $\times$ 10$^{5}$  79
  1c (N159W)  31.0  7.50 $\times$ 10$^{2}$ (2.92 $\times$ 10$^{3}$)  0.41  2.32  5.1  3.9 $\times$ 10$^{4}$  67
  2 (N160)  33.7  1.15 $\times$ 10$^{3}$ (4.32 $\times$ 10$^{3}$)  0.31  2.27  7.1  1.1 $\times$ 10$^{5}$  102
  3 (N158)  33.6  7.21 $\times$ 10$^{2}$ (2.17 $\times$ 10$^{3}$)  0.32  2.31  6.5  9.4 $\times$ 10$^{4}$  130
  4  26.8  5.29 $\times$ 10$^{2}$ (1.44 $\times$ 10$^{3}$)  0.63  2.50  2.8  2.2 $\times$ 10$^{2}$  199
  5  34.7  5.14 $\times$ 10$^{2}$ (1.36 $\times$ 10$^{3}$)  0.39  2.39  4.6  1.0 $\times$ 10$^{7}$  195
  6 (N159S)  24.8  6.44 $\times$ 10$^{2}$ (1.92 $\times$ 10$^{3}$)  0.83  2.42  3.8  4.8  212
  7  28.7  3.01 $\times$ 10$^{2}$ (8.65 $\times$ 10$^{2}$)  0.60  2.50  3.4  3.1 $\times$ 10$^{3}$  99
  8  27.2  2.65 $\times$ 10$^{2}$ (9.18 $\times$ 10$^{2}$)  0.54  2.24  2.7  3.9 $\times$ 10$^{3}$  131
  9  26.7  1.66 $\times$ 10$^{2}$ (4.91 $\times$ 10$^{2}$)  0.53  2.32  2.8  2.1 $\times$ 10$^{3}$  111
  10  29.7  1.34 $\times$ 10$^{2}$ (4.04 $\times$ 10$^{2}$)  0.44  2.36  4.4  1.3 $\times$ 10$^{5}$  126
  11  28.5  1.42 $\times$ 10$^{2}$ (3.87 $\times$ 10$^{2}$)  0.60  2.43  3.2  1.0 $\times$ 10$^{7}$  151
  12  26.8  1.08 $\times$ 10$^{2}$ (2.47 $\times$ 10$^{2}$)  0.86  2.50  2.3  5.5 $\times$ 10$^{2}$  164
  13  28.2  1.29 $\times$ 10$^{2}$ (3.10 $\times$ 10$^{2}$)  0.48  2.50  2.9  4.8 $\times$ 10$^{3}$  205
  14  30.3  1.45 $\times$ 10$^{2}$ (3.66 $\times$ 10$^{2}$)  0.47  2.50  4.3  4.2 $\times$ 10$^{4}$  196
  15  26.8  7.34 $\times$ 10$^{1}$ (2.16 $\times$ 10$^{2}$)  0.54  2.18  2.7  9.0 $\times$ 10$^{3}$  374
  16  24.9  1.34 $\times$ 10$^{2}$ (3.57 $\times$ 10$^{2}$)  0.67  2.50  2.0  2.0 $\times$ 10$^{4}$  138
  17  27.5  1.84 $\times$ 10$^{2}$ (4.60 $\times$ 10$^{2}$)  0.62  2.50  2.8  4.1 $\times$ 10$^{3}$  145
  18  27.2  9.20 $\times$ 10$^{1}$ (2.25 $\times$ 10$^{2}$)  0.66  2.50  2.4  8.2 $\times$ 10$^{4}$  219
  19  24.2  1.09 $\times$ 10$^{2}$ (3.30 $\times$ 10$^{2}$)  0.48  1.75  1.0  7.3 $\times$ 10$^{1}$  202
  20  24.5  1.64 $\times$ 10$^{2}$ (5.37 $\times$ 10$^{2}$)  0.78  1.78  1.1  5.5 $\times$ 10$^{1}$  176
  21  25.6  8.79 $\times$ 10$^{1}$ (3.23 $\times$ 10$^{2}$)  0.74  1.00  1.0  2.1 $\times$ 10$^{1}$  327
  22  24.8  9.97 $\times$ 10$^{1}$ (2.47 $\times$ 10$^{2}$)  0.67  2.50  1.9  3.5 $\times$ 10$^{2}$  275
  23  24.1  1.07 $\times$ 10$^{2}$ (2.60 $\times$ 10$^{2}$)  0.95  2.50  3.0  4.0  289
  24  23.9  7.76 $\times$ 10$^{1}$ (1.92 $\times$ 10$^{2}$)  0.99  2.50  1.3  4.6 $\times$ 10$^{2}$  182
  ------------ --------- ----------------------------------------------------- ----------- ---------- ----------- ----------------------- ----------

We indicate the dust masses obtained using the graphite model in parentheses for comparison.

[^1]: See http://www.apex-telescope.org/bolometer/laboca/calibration/ for details on the calibration and archive tables of sky opacities and calibration factors.
[^2]: BoA was developed at MPIfR (Max-Planck-Institut für Radioastronomie, Bonn, Germany), AIfA (Argelander-Institut f[ü]{}r Astronomie, Bonn, Germany), AIRUB (Astronomisches Institut der Ruhr-Universit[ä]{}t, Bochum, Germany), and IAS (Institut d’Astrophysique Spatiale, Orsay, France)

[^3]: http://www.astro.princeton.edu/$\sim$ganiano/Kernels.html for kernels and details on their construction.

[^4]: We note that estimating this total dust mass on a resolved scale (resolved masses added together) allows us to probe the various dust components in ISM elements with low dust mass surface densities as well as in dense regions. Contrary to dust mass estimates derived from a global SED modelling of the region, we can avoid potential biases (namely a dust mass underestimation) caused by a poorer resolution [c.f. discussion about biases caused by non-linearities in SED models in @Galliano2011].

[^5]: c.f. further discussions about opacities in @Galliano2011, Appendix A
---
abstract: 'It is shown that signal energy is the only available degree-of-freedom ([DOF]{}) for fiber-optic transmission as the input power tends to infinity. With $n$ signal [DOFs]{} at the input, $n-1$ [DOFs]{} are asymptotically lost to signal-noise interactions. The main observation is that nonlinearity introduces a multiplicative noise in the channel, similar to fading in wireless channels. The channel is viewed in the spherical coordinate system, where the signal vector ${\underaccent{\bar}{X}}\in{\mathbb{C}}^n$ is represented in terms of its norm ${\left|{\underaccent{\bar}{X}}\right|}$ and direction $\hat{{\underaccent{\bar}{X}}}$. The multiplicative noise causes the signal direction $\hat{{\underaccent{\bar}{X}}}$ to vary randomly on the surface of the unit $(2n-1)$-sphere in ${\mathbb{C}}^{n}$, in such a way that the effective area of the support of $\hat{{\underaccent{\bar}{X}}}$ does not vanish as ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$. On the other hand, the surface area of the sphere is finite, so that $\hat{{\underaccent{\bar}{X}}}$ carries finite information. This observation is used to show several results. Firstly, let ${{\mathcal{C}}}({{\mathcal{P}}})$ be the capacity of a discrete-time periodic model of the optical fiber with distributed noise and frequency-dependent loss, as a function of the average input power ${{\mathcal{P}}}$. It is shown that asymptotically as ${{\mathcal{P}}}\rightarrow\infty$, ${{\mathcal{C}}}=\frac{1}{n}\log\bigl(\log{{\mathcal{P}}}\bigr)+c$, where $n$ is the dimension of the input signal space and $c$ is a bounded number. In particular, $\lim_{{{\mathcal{P}}}\rightarrow\infty}{{\mathcal{C}}}({{\mathcal{P}}})=\infty$ in finite-dimensional periodic models. Secondly, it is shown that capacity saturates to a constant in infinite-dimensional models where $n=\infty$.
An expression is provided for the constant $c$, by showing that, as the input ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$, the action of the discrete periodic stochastic nonlinear Schrödinger equation tends to multiplication by a random matrix (with fixed distribution, independent of the input). Thus, perhaps counter-intuitively, noise simplifies the nonlinear channel at high powers to a *linear* multiple-input multiple-output fading channel. As ${{\mathcal{P}}}\rightarrow\infty$, signal-noise interactions gradually reduce the slope of ${{\mathcal{C}}}({{\mathcal{P}}})$, to a point where increasing the input power returns diminishing gains. Nonlinear frequency-division multiplexing can be applied to approach capacity in optical networks, where linear multiplexing achieves low rates at high powers.'
author:
- 'Mansoor I. Yousefi'
title: 'The Asymptotic Capacity of the Optical Fiber[^1] '
---

Introduction
============

Several decades since the introduction of the optical fiber, its channel capacity at high powers remains a vexing conundrum. Existing achievable rates saturate at high powers because of linear multiplexing and treating the resulting interference as noise in network environments [@yousefi2012nft1; @yousefi2012nft2; @yousefi2012nft3]. Furthermore, it is difficult to estimate the capacity via numerical simulations, because the channel has memory. The multi-user communication problem for (an ideal model of) the optical fiber can be reduced to a single-user problem using nonlinear frequency-division multiplexing (NFDM) [@yousefi2012nft1; @yousefi2012nft3]. This addresses deterministic distortions, such as inter-channel and inter-symbol interference (signal-signal interactions). The problem is then reduced to finding the capacity of the point-to-point optical fiber set by noise. There are two effects in the fiber that impact the Shannon capacity of point-to-point channels. (1) Phase noise. Nonlinearity transforms additive noise into phase noise in the channel.
As the amplitude of the input signal tends to infinity, the phase of the output signal tends to a uniform random variable in the zero-dispersion channel [@yousefi2011opc Section IV]. As a result, phase carries finite information in the non-dispersive fiber. (2) Multiplicative noise. Dispersion converts phase noise to amplitude noise, introducing an effect which at high powers is similar to fading in wireless channels. Importantly, the conditional entropy grows strongly with the input signal. In this paper, we study the asymptotic capacity of a discrete-time periodic model of the optical fiber as the input power tends to infinity. The role of the nonlinearity in point-to-point discrete channels pertains to signal-noise interactions, captured by the conditional entropy. The main result is the following theorem, describing the capacity-cost function in models with constant and non-constant loss; see Definition \[def:loss\].

Consider the discrete-time periodic model of the NLS channel described in Section \[sec:mssfm\], with non-zero dispersion. The capacity is asymptotically $$\begin{aligned}
{{\mathcal{C}}}({{\mathcal{P}}})=
\begin{cases}
\frac{1}{n}\log\bigl(\log{{\mathcal{P}}}\bigr)+c, & \textup{non-constant loss},\\
\frac{1}{n}\log{{\mathcal{P}}}+c, & \textup{constant loss},
\end{cases}\end{aligned}$$ where $n$ is the dimension of the input signal space, ${{\mathcal{P}}}\rightarrow\infty$ is the average input signal power and $c{\stackrel{\Delta}{=}}c(n,{{\mathcal{P}}})<\infty$. In particular, $\lim\limits_{{{\mathcal{P}}}\rightarrow\infty} {{\mathcal{C}}}({{\mathcal{P}}})=\infty$ in finite-dimensional models. Intensity modulation and direct detection (photon counting) is nearly capacity-achieving in the limit ${{\mathcal{P}}}\rightarrow\infty$, where the capacity is dominated by the first terms in the ${{\mathcal{C}}}({{\mathcal{P}}})$ expressions. \[thm:main\]

From Theorem \[thm:main\] and [@yousefi2011opc Theorem 1], the asymptotic capacity of the dispersive fiber is much smaller than the asymptotic capacity of (the discrete-time model of) the zero-dispersion fiber, which is $\frac{1}{2}\log{{\mathcal{P}}}+c$, $c<\infty$.
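A quick numerical illustration of how slowly a double-logarithmic rate grows: the sketch below tabulates $\frac{1}{n}\log(\log{{\mathcal{P}}})$ in nats, with the bounded constant set to zero and $n=4$ as arbitrary example values.

```python
import math

def rate_nonconstant_loss(p, n, c=0.0):
    """Asymptotic rate (1/n) * log(log(P)) + c, in nats."""
    return math.log(math.log(p)) / n + c

# Raising the power from 10^2 to 10^5 barely moves the rate,
# and each successive decade of power helps less than the last:
rates = [rate_nonconstant_loss(10.0 ** k, n=4) for k in (2, 3, 4, 5)]
```

The successive increments shrink, which is the "diminishing gains" behaviour described in the theorem for the non-constant-loss case.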
Dispersion reduces the capacity by increasing the conditional entropy. With $n$ [DOFs]{} at the input, $n-1$ [DOFs]{} are asymptotically lost to signal-noise interactions, leaving signal energy as the only useful [DOF]{} for transmission. There are a finite number of [DOFs]{} in all computer simulations and physical systems. However, as a mathematical problem, the following corollary holds true.

Capacity saturates to a constant $c<\infty$ in infinite-dimensional models, including the continuous-time model. \[cor:inf\]

The power level where signal-noise interactions begin to appreciably impact the slope of ${{\mathcal{C}}}({{\mathcal{P}}})$ is not determined in this paper. Numerical simulations indicate that the conditional entropy does not increase with the input in the nonlinear Fourier domain, for a range of powers larger than the optimal power in wavelength-division multiplexing [@yousefi2016nfdm Fig. 9 (a)]. In this regime, signal-noise interactions are weak and the capacity is dominated by the (large) number $c$ in Theorem \[thm:main\]. A numerical estimation of the capacity of the point-to-point fiber at input powers higher than those in Fig. \[fig:nfdm\] should reveal the impact of the signal-dependent noise on the asymptotic capacity. The contributions of the paper are presented as follows. The continuous-time model is discretized in Section \[sec:mssfm\]. The main ingredient is a modification of the split-step Fourier method (SSFM) that shows the influence of noise more directly than the standard SSFM. A *unit* is defined in the modified SSFM (MSSFM) model that plays an important role throughout the paper. The MSSFM and units simplify the information-theoretic analysis. Theorem \[thm:main\] and Corollary \[cor:inf\] are proved in Section \[sec:proof1\]. The main ingredient here is an appropriate partitioning of the [DOFs]{} in a suitable coordinate system, and the proof that the achievable rate of one group of [DOFs]{} is bounded in the input.
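For reference, one step of a *standard* SSFM discretization of the stochastic NLS equation (the scheme that Section \[sec:mssfm\] modifies) alternates a linear dispersive step in the Fourier domain, a nonlinear phase rotation, and injection of amplifier noise; the sign convention, parameter values and noise scaling below are illustrative assumptions.

```python
import numpy as np

def ssfm_step(q, dt, dz, beta2=-21.7e-27, gamma=1.3e-3, sigma=0.0, rng=None):
    """One split step of q_z = -i (beta2/2) q_tt + i gamma |q|^2 q + noise.
    Linear half in the Fourier domain, then the nonlinear phase rotation,
    then additive circular Gaussian noise of strength sigma."""
    n = q.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    q = np.fft.ifft(np.exp(0.5j * beta2 * omega**2 * dz) * np.fft.fft(q))
    q = q * np.exp(1j * gamma * np.abs(q) ** 2 * dz)
    if sigma > 0.0:
        rng = rng or np.random.default_rng(0)
        noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        q = q + sigma * np.sqrt(dz / 2.0) * noise
    return q
```

Both the dispersive phase factor and the nonlinear rotation are unitary, so a noiseless step conserves the signal energy; it is only the noise term, mixed by these rotations over many steps, that produces the multiplicative-noise behaviour studied in the paper.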
No assumption is made on the input power in this first proof. Theorem \[thm:main\] is proved again in Section \[sec:proof2\] by considering the limit ${{\mathcal{P}}}\rightarrow\infty$, which adds further intuition. Firstly, it is shown that, as the input ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$, the action of the discrete periodic stochastic nonlinear Schrödinger (NLS) equation tends to multiplication by a random matrix (with a fixed probability distribution function (PDF), independent of the input). As a result, perhaps counter-intuitively, as ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$ noise simplifies the nonlinear channel to a *linear* multiple-input multiple-output (non-coherent) fading channel. Secondly, the asymptotic capacity is computed without calculating the conditional PDF of the channel or entropies, and without solving the capacity optimization problem. Because of the multiplicative noise, the asymptotic rate depends only on whether the random channel operator has a deterministic component. The conditional PDF merely modifies the bounded number $c$ in Theorem \[thm:main\]. Note that we do not apply local analysis based on perturbation theories (valid in the low-power regime). The proof of Theorem \[thm:main\], [*e.g.*]{}, the asymptotic loss of [DOFs]{}, is based on a global analysis valid for any signal and noise; see Section \[sec:proof1\]. Notation and Preliminaries {#sec:notation} ========================== The notation in this paper is motivated by [@moser2004dbb]. Upper- and lower-case letters represent scalar random variables and their realizations, [*e.g.*]{}, $X$ and $x$. The same rule is applied to vectors, which are distinguished using an underline, [*e.g.*]{}, ${\underaccent{\bar}{X}}$ for a random vector and ${\underaccent{\bar}{x}}$ for a deterministic vector. Deterministic matrices are shown by upper-case letters with a special font, [*e.g.*]{}, ${\mathsf{R}}=(r_{ij})$. 
Random matrices are denoted by upper-case letters with another special font, [*e.g.*]{}, ${\mathbb{M}}=(M_{ij})$. Important scalars are distinguished with a calligraphic font, [*e.g.*]{}, ${{\mathcal{P}}}$ for power and ${{\mathcal{C}}}$ for capacity. The fields of real and complex numbers are, respectively, ${\mathbb{R}}$ and ${\mathbb{C}}$. A sequence of numbers $X_1,\cdots, X_n$ is sometimes abbreviated as $X^n$, with $X^0=\emptyset$. A zero-mean circularly-symmetric complex Gaussian random vector with covariance matrix ${\mathsf{K}}$ is indicated by ${\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathsf{K}}\right)}$. The uniform distribution on the interval $[a,b)$ is designated as $\mathcal U(a,b)$. Throughout the paper, the asymptotic equivalence ${{\mathcal{C}}}({{\mathcal{P}}}) \sim f({{\mathcal{P}}})$, often abbreviated by saying “asymptotically,” means that $\lim_{{{\mathcal{P}}}\rightarrow\infty} {{\mathcal{C}}}({{\mathcal{P}}})/f({{\mathcal{P}}})=1$. The letter $c{\stackrel{\Delta}{=}}c(n,{{\mathcal{P}}})$ is reserved to denote a real number bounded in $n$ and ${{\mathcal{P}}}$. A sequence of independent and identically distributed (i.i.d.) random variables $X_n$ drawn from the PDF $p_X(x)$ is presented as $X_n\sim{\text{i.i.d.}}\ p_X(x)$. The identity matrix of size $n$ is $I_n$. The Euclidean norm of a vector ${\underaccent{\bar}{x}}\in{\mathbb{C}}^n$ is $${\left|{\underaccent{\bar}{x}}\right|}=\left(|x_1|^2+\cdots+|x_n|^2\right)^{\frac{1}{2}}.$$ This gives rise to an induced norm ${\left|{\mathsf{M}}\right|}$ for a matrix ${\mathsf{M}}$. We use the spherical coordinate system in the paper. Here, a vector ${\underaccent{\bar}{x}}\in{\mathbb{C}}^n$ is represented by its norm ${\left|{\underaccent{\bar}{x}}\right|}$ and direction $\hat{{\underaccent{\bar}{x}}}={\underaccent{\bar}{x}}/{\left|{\underaccent{\bar}{x}}\right|}$ (with the convention $\hat{{\underaccent{\bar}{x}}}=0$ if ${\underaccent{\bar}{x}}=0$). The direction can be described by $m=2n-1$ angles. 
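The norm-direction decomposition above is straightforward to mirror in code; a small helper (with hypothetical names) implementing the convention $\hat{{\underaccent{\bar}{x}}}=0$ for ${\underaccent{\bar}{x}}=0$:

```python
import numpy as np

def to_spherical(x):
    """Return (norm, direction) of a complex vector, with the text's
    convention that the direction of the zero vector is 0."""
    r = np.linalg.norm(x)
    direction = x / r if r > 0 else np.zeros_like(x)
    return r, direction

r, d = to_spherical(np.array([3.0 + 4.0j, 0.0j]))
```

The direction lives on the unit sphere in ${\mathbb{C}}^n\cong{\mathbb{R}}^{2n}$ and is described by $m=2n-1$ real angles.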
When direction is random, its entropy can be measured with respect to the spherical measure $\sigma^{m}(A)$, $A\subseteq \mathcal S^m$, where $\mathcal S^m$ is the $m-$sphere [rCl]{} S\^m={\^[m+1]{}:  [||]{}=1 }. It is shown in the Appendix \[app:one\] that the differential entropy with respect to the Lebesgue and spherical measures, denoted respectively by $h(\hat{{\underaccent{\bar}{X}}})$ and $h_{\sigma}(\hat{{\underaccent{\bar}{X}}})$, are related as [rCl]{} h(X)=h([||]{})+h\_(| [||]{})+m|X|. \[eq:sph-leb\] The entropy power of a random direction $\hat{{\underaccent{\bar}{X}}}\in{\mathbb{C}}^n$ is [rCl]{} ()=(h\_()). It represents the effective area of the support of $\hat{{\underaccent{\bar}{X}}}$ on $\mathcal S^m$. The Modified Split-Step Fourier Method {#sec:mssfm} ====================================== Signal propagation in optical fiber is described by the stochastic nonlinear Schrödinger (NLS) equation [@yousefi2012nft1 Eq. 2] [rCl]{} =L\_L(Q)+L\_N(Q)+N(t,z), \[eq:nls\] where $Q(t,z)$ is the complex envelope of the signal as a function of time $t\in{\mathbb{R}}$ and space $z\in{\mathbb{R}}^+$ and $N(t,z)$ is zero-mean circularly-symmetric complex Gaussian noise with [rCl]{} (N(t,z)N\^\*(t’,z’))=\^2\_[W]{}(t-t’)(z-z’), where $\delta_{{{\mathcal{W}}}}(x){\stackrel{\Delta}{=}}2{{\mathcal{W}}}\operatorname{sinc}(2{{\mathcal{W}}} x)$, $\operatorname{sinc}(x){\stackrel{\Delta}{=}}\sin(\pi x)/(\pi x)$, and ${{\mathcal{W}}}$ is noise bandwidth. The operator $L_L$ represents linear effects [rCl]{} L\_L(Q)= \_[k=0]{}\^j\^[k+1]{} -\_[r]{}(t,z)Q(t,z), \[eq:L-L\] where $\beta_k$ are dispersion coefficients, $\convolution$ is convolution and $\alpha_{r}$ is the residual fiber loss. The operator $L_N(Q)=j\gamma |Q|^2Q$ represents Kerr nonlinearity, where $\gamma$ is the nonlinearity parameter. The average power of the transmit signal is [rCl]{} P= \_[T]{}\_[-T/2]{}\^[T/2]{}|Q(t,0)|\^2t. 
\[eq:power-cont\] The residual loss in accounts for uncompensated loss and non-flat gain of the Raman amplification in distance and is generally frequency dependent. The constant loss model refers to the case where $\alpha_{r}(t,z)$ is constant in the frequency $f$, [*i.e.*]{}, $\hat{\alpha}_{r}(f,z)=\mathcal F(\alpha_{r}(t,z)){\stackrel{\Delta}{=}}\alpha_{r}(z)$, where $\mathcal F$ is the Fourier transform with respect to $t$. In realistic systems, however, loss varies over frequency, polarization or spatial modes. This is the non-constant loss model. Channel filters act similarly to a non-constant loss function. \[def:loss\] We discretize in space and time. Divide a fiber of length ${{\mathcal{L}}}$ into a cascade of a large number $m\rightarrow\infty$ of discrete fiber segments of length $\epsilon={{\mathcal{L}}}/m$ [@yousefi2011opc Section III. A]. A small segment can be discretized in time and modeled in several ways. An appropriate approach is given by the split-step Fourier method (SSFM). The standard SSFM splits the *deterministic* NLS equation into linear and nonlinear parts. In applying the SSFM to the *stochastic* NLS equation, noise is typically added to the signal. We introduce a modified split-step Fourier method where, instead of noise addition, the nonlinear part of is solved in the presence of noise analytically. In the linear step, is solved with $L_N+N=0$. In the discrete-time model, the linear step in a segment of length $\epsilon$ consists of multiplying a vector ${\underaccent{\bar}{X}}\in{\mathbb{C}}^n$ by the dispersion-loss matrix ${\mathsf{R}}=(r_{kl})$. In the constant loss model, ${\mathsf{R}}=e^{-\frac{1}{2}\alpha_{r}\epsilon}{\mathsf{U}}$, where ${\mathsf{U}}$ is a unitary matrix. In the absence of loss, ${\mathsf{R}}$ is unitary. The values of $r_{kl}$ depend on the dispersion coefficients, $\epsilon$ and $n$. In general, all entries of ${\mathsf{R}}$ are non-zero, although in a small segment, the off-diagonal elements can be very small. 
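As an illustration of the linear step, one common construction, an assumption here rather than the paper's exact ${\mathsf{R}}$, applies the all-pass dispersion filter in the discrete Fourier domain, with constant loss as a scalar attenuation; in the lossless case the resulting matrix is unitary:

```python
import numpy as np

def dispersion_loss_matrix(n, beta2, eps, alpha_r=0.0):
    """Sketch of a single-segment dispersion-loss matrix R (hypothetical
    construction): the exact linear step exp(j*beta2/2 * w^2 * eps) applied
    in the Fourier domain, times a scalar loss factor exp(-alpha_r*eps/2)."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT
    w = 2.0 * np.pi * np.fft.fftfreq(n)             # normalized frequencies
    H = np.diag(np.exp(0.5j * beta2 * w ** 2 * eps))
    return np.exp(-0.5 * alpha_r * eps) * (F.conj().T @ H @ F)

R = dispersion_loss_matrix(8, beta2=-21.0, eps=0.1)
```

With loss, ${\mathsf{R}}$ is this unitary matrix scaled by $e^{-\frac{1}{2}\alpha_r\epsilon}$, matching the constant loss model above.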
Matrix ${\mathsf{R}}$ is fully dispersive, [*i.e.*]{}, $r_{kl}\neq 0$, for all $k,l$. \[ass:U\] In the nonlinear step, is solved with $L_L=0$ resulting in [@mecozzi1994llh Eq. 12], [@yousefi2011opc Eq. 30]: [rCl]{} Q(t, z)=(Q(t,0)+W(t,z))e\^[j(t,z)]{}, \[eq:zd\] in which [rCl]{} (t,z)= \_0\^z|Q(t,0)+W(t,l)|\^2l, where $W(t,z)=\int_0^z N(t,l){\mathrm{d}}l$ is Wiener process. The modified nonlinear step in the MSSFM is obtained by discretizing . Divide a small segment $0\leq z\leq\epsilon$ into $L$ sub-segments of length $\mu=\epsilon/L$. Define $\Phi:{\mathbb{C}}\times{\mathbb{C}}^n\mapsto [0,\infty)$ as [rCl]{} (X,N)&=&\_[[signal-noise interactions, unknown]{.nodecor}]{} +\ &&+\_[[conditionally known]{.nodecor}]{}, \[eq:phase\] where $N_k\sim {\text{i.i.d.}}\ {\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathcal{D}}/L\right)}$, ${\mathcal{D}}=\sigma^2{{\mathcal{W}}}\epsilon/n$. The nonlinear step in a segment of length $\epsilon$ maps vector ${\underaccent{\bar}{X}}\in{\mathbb{C}}^n$ to vector ${\underaccent{\bar}{Y}}\in{\mathbb{C}}^n$, according to [rCl]{} Y\_k=(X\_k+N\_[k1]{}++N\_[kL]{})e\^[j(X\_k, N\_k)]{}, \[eq:discrete-zd\] where ${\underaccent{\bar}{N}}_k=(N_{k1},\cdots, N_{kL})^T$, $N_{ki}\sim{\text{i.i.d.}}\ {\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathcal{D}}/L\right)}$. The nonlinear step is a deterministic phase change in the SSFM. In this form, nonlinearity is entropy-preserving and does not interact with noise immediately [@yousefi2015cwit2 Lemma 2–3] — unless several steps in the SSFM are considered, which complicates the analysis. In the MSSFM, noise is introduced in a distributed manner within each nonlinear step. This shows noise influence more directly. Note that, conditioned on ${\left|Y_k\right|}$, the last term in is known. Other terms in represent signal-noise interactions. They are conditionally unknown and are responsible for capacity limitation. 
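The modified nonlinear step can be sketched as follows; the phase functional below uses a plain Riemann-sum discretization of the zero-dispersion solution, an assumed stand-in for the exact weighting in $\Phi$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mssfm_nonlinear_step(x, gamma, eps, L, D):
    """Sketch of the modified nonlinear step over a segment of length eps,
    split into L sub-segments with distributed noise. The Riemann-sum phase
    is an assumed stand-in for the exact Phi of the text."""
    mu = eps / L
    noise = rng.normal(size=(x.size, L)) + 1j * rng.normal(size=(x.size, L))
    noise *= np.sqrt(D / (2.0 * L))
    partial = x[:, None] + np.cumsum(noise, axis=1)   # X_k + N_k1 + ... + N_ki
    phase = gamma * mu * np.sum(np.abs(partial) ** 2, axis=1)
    y = partial[:, -1] * np.exp(1j * phase)
    return y, partial[:, -1]

y, amp = mssfm_nonlinear_step(np.array([1.0 + 0.0j, 2.0 + 0.0j]), 1.3, 0.1, 64, 1e-3)
```

Since the step is a pure phase rotation of signal-plus-accumulated-noise, the output modulus equals the modulus of $X_k+N_{k1}+\cdots+N_{kL}$, consistent with the form of $Y_k$ above.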
The MSSFM model for a fiber of length ${{\mathcal{L}}}$ consists of the cascade of linear and modified nonlinear steps (without noise addition between them). A *unit* in the MSSFM model is defined as the cascade of three segments of length $\epsilon$: A modified nonlinear step ${\underaccent{\bar}{X}}\mapsto{\underaccent{\bar}{U}}$, followed by a linear step ${\underaccent{\bar}{U}}\mapsto{\underaccent{\bar}{V}}$, followed by another modified nonlinear step ${\underaccent{\bar}{V}}\mapsto{\underaccent{\bar}{Y}}$; see Fig. \[fig:mssfm\]. A unit of length $3\epsilon$ is the smallest piece of fiber whose capacity behaves qualitatively similar to the capacity of the full model with length ${{\mathcal{L}}}$. In the Appendix \[app:in-out-mssfm\] it is shown that the input output relation ${\underaccent{\bar}{X}}\mapsto {\underaccent{\bar}{Y}}$ in one unit is given by [rCl]{} Y=MX+Z,\[eq:one-seg\] where ${\mathbb{M}}{\stackrel{\Delta}{=}}{\mathbb{M}}({\underaccent{\bar}{X}},{\mathbb{N}}^1,{\mathbb{N}}^2)$ is a random matrix with entries [rCl]{} M\_[kl]{}= r\_[kl]{}e\^[j\_k+j\_l]{}, \[eq:Mkl\] in which [rCl]{} \_l=(X\_l,N\_l\^1),\_k=(V\_k, N\_k\^2). Here ${\mathbb{N}}^1=({\underaccent{\bar}{N}}_1^1,\cdots, {\underaccent{\bar}{N}}_n^1)^T$ and ${\mathbb{N}}^2=({\underaccent{\bar}{N}}_1^2,\cdots, {\underaccent{\bar}{N}}_n^2)^T$ are $n\times L$ Gaussian ensembles with  entries drawn from ${\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathcal{D}}/L\right)}$, independent of any other random variable. The additive noise ${\underaccent{\bar}{Z}}{\stackrel{\Delta}{=}}{\underaccent{\bar}{Z}}({\underaccent{\bar}{X}},{\mathbb{N}}^1,{\mathbb{N}}^2)$ is in general non-Gaussian but bounded in ${\left|{\underaccent{\bar}{X}}\right|}$; see . Finally, vector ${\underaccent{\bar}{V}}$ is the output of the linear step in Fig. \[fig:mssfm\]. 
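The entrywise form $M_{kl}=r_{kl}e^{j\Psi_k+j\Phi_l}$ says that ${\mathbb{M}}$ factors as a diagonal phase matrix, the dispersion-loss matrix, and another diagonal phase matrix; a quick numerical check of this factorization with arbitrary stand-in values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
R = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # stand-in for the dispersion-loss matrix
phi = rng.uniform(0.0, 2.0 * np.pi, n)  # nonlinear phases of the first step
psi = rng.uniform(0.0, 2.0 * np.pi, n)  # nonlinear phases of the second step

# Entrywise definition M_kl = r_kl * exp(j (psi_k + phi_l)) ...
M_entry = R * np.exp(1j * (psi[:, None] + phi[None, :]))
# ... equals the sandwich diag(exp(j psi)) R diag(exp(j phi)).
M_fact = np.diag(np.exp(1j * psi)) @ R @ np.diag(np.exp(1j * phi))
```

This sandwich structure is what makes the unit the smallest piece of fiber in which dispersion mixes the two layers of random phases.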
The input output relation ${\underaccent{\bar}{X}}\mapsto {\underaccent{\bar}{Y}}$ in a fiber of length ${{\mathcal{L}}}$ is obtained by composing $\bar m=m/2$ blocks ${\underaccent{\bar}{Y}}_k={\mathsf{R}}\bigl({\mathbb{M}}_k{\underaccent{\bar}{X}}_k+{\underaccent{\bar}{Z}}_k\bigl)$: [rCl]{} Y(k)=M(k)X(k)+Z(k), \[eq:m-seg\] where $k=1,2,\cdots,$ is the transmission index, $\{{\underaccent{\bar}{Z}}(k)\}_{k}$ is an  stochastic process, and [rCl]{} M(k)= \_[k=1]{}\^[|m]{} RM\_k,  Z(k)=RZ\_[|m]{}+\_[k=1]{}\^[|m-1]{} (\_[l=k+1]{}\^[|[m]{}]{} RM\_l)RZ\_k. \[eq:M-m-seg\] The power constraint is discretized to ${{\mathcal{P}}}=\frac{1}{n}{\mathsf{E}}{\left\lVert{\underaccent{\bar}{X}}\right\rVert}^2$ in the discrete-time model. Bandwidth, spectral broadening and spectral efficiency in the continuous-time model are discussed in Section \[sec:cor\]. Note that ${\mathbb{M}}({\underaccent{\bar}{X}},{\mathbb{N}}^1,{\mathbb{N}}^2)$ is a nonlinear random operator. Particularly, it depends on input. Dimension of the input space is $n$. To approximate the continuous-time model, $n\rightarrow\infty$. However, we let $n$ be arbitrary, [*e.g.*]{}, $n=5$. Dimension should not be confused with codeword length that tends to infinity. Proof of the Theorem \[thm:main\] {#sec:proof1} ================================= We first illustrate the main ideas of the proof via elementary examples. Consider the additive white Gaussian noise (AWGN) channel $Y=X+Z$, where $X\in{\mathbb{C}}$ is input, $Y\in{\mathbb{C}}$ is output and $Z\sim{\mathcal{N}_{{\mathbb{C}}}\!\left(0,1\right)}$ is noise. Applying chain rule to the mutual information [rCl]{} I(X;Y)=I(X;Y)+I(X;Y|Y), where $\angle$ denotes phase. The amplitude channel $X\mapsto |Y|$ is $${\left|Y\right|}\approx {\left|X\right|}+Z_r,$$ where $Z_r\sim{\mathcal{N}_{{\mathbb{C}}}\!\left(0,\frac{1}{2}\right)}$ and ${\left|X\right|}\gg 1$. It asymptotically contributes [rCl]{} I(X; |Y|)P+c to the capacity. 
Phase, on the other hand, is supported on the finite interval $[0,2\pi)$. The only way that the contribution of the phase to the capacity could tend to infinity is that, phase noise tends to zero on the circle as ${\left|X\right|}\rightarrow\infty$. Indeed, [rCl]{} Y&=&X+\^[-1]{}()\ &&X+, where $Z_r, Z_i\sim{\text{i.i.d.}}\ {\mathcal{N}_{{\mathbb{C}}}\!\left(0,\frac{1}{2}\right)}$. The output entropy is clearly bounded, $h(\angle Y|{\left|Y\right|})\leq \log 2\pi$. However, [rCl]{} h(Y|X, Y)&=&h(Z\_i)-|X|\ &&-P+c,[as]{.nodecor}P. \[eq:cond-ent-awgn\] Note that the differential entropy can be negative. The contribution of the phase to the mutual information is [rCl]{} I(X;Y|Y)P+c’. Condition implies $\operatorname{V}(\angle Y | X, {\left|Y\right|})\rightarrow 0$, [*i.e.*]{}, the effective phase noise on the unit circle asymptotically vanishes. Now consider the fading channel $Y=MX+Z$, where $X\in{\mathbb{C}}$ is input, $Y\in{\mathbb{C}}$ is output and $M, Z\sim{\text{i.i.d.}}\ {\mathcal{N}_{{\mathbb{C}}}\!\left(0,1\right)}$. To prepare for generalization to optical channel, we represent a complex scalar $X$ as ${\underaccent{\bar}{X}}=(\Re X, \Im X)^T$. Thus ${\underaccent{\bar}{Y}}={\mathbb{M}}{\underaccent{\bar}{X}}+{\underaccent{\bar}{Z}}$, where [rCl]{} = M\_r & -M\_i\ M\_i & M\_r , Z= Z\_r\ Z\_i , in which $M_{r,i},Z_{r,i}\sim{\text{i.i.d.}}\ {\mathcal{N}_{{\mathbb{C}}}\!\left(0,\frac{1}{2}\right)}$. As ${\left| {\underaccent{\bar}{X}}\right|}\rightarrow\infty$, ${\underaccent{\bar}{Y}}\approx {\mathbb{M}}{\underaccent{\bar}{X}}$, $\hat{{\underaccent{\bar}{Y}}}\approx {\mathbb{M}}\hat{{\underaccent{\bar}{X}}}/\bigl|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\bigr|$, and randomness in $\hat{{\underaccent{\bar}{Y}}}$ does not vanish with ${\left|X\right|}$. 
Formally, [rCl]{} h\_(| X, [||]{})&=& h\_(| M\^[-1]{}Y,[||]{})\ &=& h\_(| M\^[-1]{}, [||]{})\ &&gt;&-, \[eq:cond-entr-fad\] where follows because ${\underaccent{\bar}{a}}={\mathbb{M}}^{-1}\hat{{\underaccent{\bar}{Y}}}$ does not determine $\hat{{\underaccent{\bar}{Y}}}$ for random ${\mathbb{M}}$: There are four random variables $M_{r,i}$ and $\hat{{\underaccent{\bar}{Y}}}_{1,2}$ for three equations ${\mathbb{M}}^{-1}\hat{{\underaccent{\bar}{Y}}}={\underaccent{\bar}{a}}$ and $|\hat{{\underaccent{\bar}{Y}}}|=1$. As a result, $I({\underaccent{\bar}{X}};\hat{{\underaccent{\bar}{Y}}}|{\left|{\underaccent{\bar}{Y}}\right|})<\infty$, and ${\left|{\underaccent{\bar}{Y}}\right|}$ is the only useful [DOF]{} at high powers, in the sense that its contribution $I({\underaccent{\bar}{X}}; {\left|{\underaccent{\bar}{Y}}\right|})$ to the mutual information $I({\underaccent{\bar}{X}}; {\underaccent{\bar}{Y}})$ tends to infinity with ${\left|{\underaccent{\bar}{X}}\right|}$. The zero-dispersion optical fiber channel is similar to the fading channel at high powers. The trivial condition [rCl]{} h((.,z) | Q(.,0), |Q(.,z)|)&gt;-, Q(.,0), is sufficient to prove that the capacity of is asymptotically the capacity of the amplitude channel, namely $\frac{1}{2}\log{{\mathcal{P}}}+c$. The intuition from the AWGN, fading and zero-dispersion channels suggests to look at the dispersive optical channel in the spherical coordinate system. The mutual information can be decomposed using the chain rule [rCl]{} I(Q(0); Q(z))&=&I([|Q(0)|]{} ; Q(z))+I((0); Q(z)|[|Q(0)|]{})\ &=&I([|Q(0)|]{} ; [|Q(z)|]{})+I([|Q(0)|]{} ; (z)|[|Q(z)|]{})\ &&+I((0); Q(z)|[|Q(0)|]{}), \[eq:I3\] where we dropped time index in $Q(t,z)$. The first term in is the rate of a single-input single-output channel which can be computed in the asymptotic limit as follows. Let ${\underaccent{\bar}{X}}$ and ${\underaccent{\bar}{Y}}$ represent discretizations of the input $Q(0,.)$ and output $Q(z,.)$. Consider first the lossless model. 
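The contrast between the two examples, vanishing phase noise on the circle for the AWGN channel versus persistent direction noise for the fading channel, is easy to see in a Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000
z = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2.0)  # additive noise
m = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2.0)  # fading coefficient

def phase_spread(x, multiplicative):
    """Std of the output phase relative to the input phase."""
    y = (m * x if multiplicative else x) + z
    return np.std(np.angle(y / x))

awgn_lo, awgn_hi = phase_spread(10.0, False), phase_spread(1000.0, False)
fade_lo, fade_hi = phase_spread(10.0, True), phase_spread(1000.0, True)
```

The AWGN spread shrinks roughly like $1/|x|$, while the fading spread stays near the uniform-phase value $\pi/\sqrt{3}$ no matter how large the input is.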
In this case, ${\mathbb{M}}$ is unitary and from , and [rCl]{} [||]{}\^2&=&[|+|]{}\^2\ &=&[|+\^|]{}\^2\ &=&[|+|]{}\^2, \[eq:chi-squared\] where ${\mathbb{M}}^\dag$ is the adjoint (nonlinear) operator and follows because ${\underaccent{\bar}{Z}}$ and ${\mathbb{M}}^\dag{\underaccent{\bar}{Z}}$ are identically distributed when ${\underaccent{\bar}{Z}}\sim{\mathcal{N}_{{\mathbb{C}}}\!\left(0,m{\mathcal{D}}I_n\right)}$; see Appendix \[app:in-out-mssfm\]. Thus $|{\underaccent{\bar}{Y}}|^2/(m{\mathcal{D}})$ is a non-central chi-square random variable with $2n$ degrees-of-freedom and parameter ${\left|{\underaccent{\bar}{x}}\right|}^2/(m{\mathcal{D}})$. The non-central chi-square conditional PDF $p(|{\underaccent{\bar}{y}}|^2||{\underaccent{\bar}{x}}|^2)$ can be approximated at large ${\left|{\underaccent{\bar}{x}}\right|}^2$ using the Gaussian PDF, giving the asymptotic rate [rCl]{} I([||]{}; [||]{})P+c. \[eq:I(|X|;|Y|)\] The bounded number $c$ can be computed using the exact PDF. The case $\alpha_{r}(z)\neq 0$ is similar to the lossless case. Here ${\mathbb{M}}=e^{-\frac{1}{2}\alpha_{r}{{\mathcal{L}}}}{\mathbb{U}}$, where ${\mathbb{U}}$ is a random unitary operator. Thus, ${\mathbb{M}}^\dag=e^{-\frac{1}{2}\alpha_{r}{{\mathcal{L}}}}{\mathbb{U}}^\dag$; furthermore ${\left|{\mathbb{M}}\right|}=e^{-\frac{1}{2}\alpha_r{{\mathcal{L}}}}$ is deterministic. The loss simply influences the signal power, modifying constant $c$ in . In the non-constant loss model, loss interacts with nonlinearity, dispersion and noise. Here, ${\left|{\mathbb{M}}\right|}$ is a random variable, and [rCl]{} [||]{}=[||]{}[||]{}|+|, \[eq:multi-chan\] where $\hat{{\mathbb{M}}}={\mathbb{M}}/{\left|{\mathbb{M}}\right|}$. Taking logarithm [rCl]{} [||]{}=[||]{}+[||]{}+|+|. \[eq:log-fading\] Applying Lemma \[lemm:decomposition\], we can assume ${\left|X\right|}>x_{0}$ for a suitable $x_{0}>0$ without changing the asymptotic capacity. 
The last term in is a bounded real random variable because [rCl]{} \_[[||]{}=1, [||]{}=1]{} |+|\^2&lt;. Thus, the logarithm transforms the channel with multiplicative noise ${\left|{\mathbb{M}}\right|}$ to the channel with additive bounded noise. The asymptotic capacity, independent of the PDF of ${\left|{\mathbb{M}}\right|}$, is [rCl]{} I([||]{}; [||]{})&& (([|X|]{}))\^2+c\ &=&P+c’. The last two terms in are upper bounded in one unit of the MSSFM using the data processing inequality [rCl]{} I(Q(0); Q(z)|[|Q(0)|]{})&& I(Q(0); Q(3)|[|Q(0)|]{}), \[eq:dp1\]\ I([|Q(0)|]{}; (z)| [|Q(z)|]{})&& I([|Q(0)|]{}; Q(3)|[|Q(3)|]{}). \[eq:dp2\] We prove that the upper bounds in – do not scale with input ${\left|Q(0)\right|}$. Let ${\underaccent{\bar}{X}},{\underaccent{\bar}{Y}}\in{\mathbb{C}}^n$ denote discretization of $Q(0,t)$ and $Q(3\epsilon,t)$. In one unit of the MSSFM [rCl]{} \_[[||]{}]{}I(; Y|[||]{})&&lt;&, \[eq:hatX-Y\]\ \_[[||]{}]{}I([||]{}; |[||]{})&&lt;&. \[eq:absX-hatY\] ![The area on the surface of the unit sphere, representing $\operatorname{V}(\hat Y)$, does not vanish as ${\left|\underline{x}\right|}\rightarrow\infty$.[]{data-label="fig:spherical-sector"}](fig2){width="20.00000%"} Consider first the lossless model, where ${\mathbb{M}}$ is a unitary operator. From Lemma \[lemm:decomposition\], as ${\left|{\underaccent{\bar}{x}}\right|}\rightarrow\infty$, the additive noise in can be ignored. Thus ${\underaccent{\bar}{Y}}={\left|{\underaccent{\bar}{Y}}\right|}\hat{{\underaccent{\bar}{Y}}}\approx{\left|{\underaccent{\bar}{X}}\right|}\hat{{\underaccent{\bar}{Y}}}$. To prove , [rCl]{} I(; Y|[||]{})&=&I(;[||]{}|[||]{})\ && I(;|[||]{})\ &=&h\_(|[||]{})-h\_(|, [||]{}). Step $(a)$ follows from the identity [rCl]{} I(X; ZY|Z)=I(X; Y|Z),Z0. \[eq:I-indentity\] We measure the entropy of $\hat Y$ with respect to the spherical probability measure $\sigma^{m}$, $m=2n-1$, on the surface of the unit sphere $S^{m}$. 
From the maximum entropy theorem (MET) for distributions with compact support, [rCl]{} h\_(|[||]{})A\_n, where $A_n=2\pi^n/\Gamma(n)$ is the surface area of $S^m$, in which $\Gamma(n)$ is the gamma function. We next show that the conditional entropy $h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}|\hat{{\underaccent{\bar}{X}}}, {\left|{\underaccent{\bar}{x}}\right|})$ does not tend to $-\infty$ with ${\left|{\underaccent{\bar}{x}}\right|}$. The volume of the spherical sector in Fig. \[fig:spherical-sector\] vanishes if and only if the corresponding area on the surface of the sphere vanishes. This can be formalized using identity . Let ${\underaccent{\bar}{W}}=U\hat{{\underaccent{\bar}{Y}}}$, where $U\sim \mathcal U(0,1)$ independent of ${\underaccent{\bar}{X}}$ and ${\underaccent{\bar}{Y}}$. From [rCl]{} h\_(|, [||]{})=h( W|, [||]{})-h(U)-mU. \[eq:interm\] Applying chain rule to the differential entropy [rCl]{} h(W|.)&=&\_[k=1]{}\^[n]{} h(W\_k|W\^[k-1]{},.)\ &=&\_[k=1]{}\^[n]{} h(W\_k, | W\^[k-1]{},.) \[eq:chain-rule1\]\ &&+\_[k=1]{}\^[n]{} h(|W\_k| |W\^[k-1]{}, W\_[k]{},.), \[eq:chain-rule2\] where entropy is conditioned on ${\left|{\underaccent{\bar}{x}}\right|}$ and $\hat{{\underaccent{\bar}{X}}}$. For the phase entropies in , note that, from –, $\angle W_k=\angle Y_k$ contains random variable $\Phi_k$ with finite entropy, which does not appear in $W^{k-1}$. Formally, $$\angle W_k=\Phi_k+F(\Psi^n,\hat{{\underaccent{\bar}{x}}}),$$ for some function $F$, which can be determined from –. Thus [rCl]{} h(W\_k| W\^[k-1]{}, .)&=& h(\_k+F(\^n,)|W\^[k-1]{}, .)\ & &h(\_k+F(\^n,))|W\^[k-1]{},\^[k-1]{},\^[n]{}, U, .)\ &&h(\_k+F(\^n,))|\^[k-1]{},\^[n]{}, U, .)\ && h(\_k|\^[k-1]{},\^[n]{},.)\ &&gt;& -. \[eq:phase-entropies\] Step $(a)$ follows from the rule that conditioning reduces the entropy. Step $(b)$ holds because $W^{k-1}$ is a function of $\{\Phi^{k-1}, \Psi^n, U\}$. Step $(c)$ follows because $\{\Psi^n, .\}$ determines $F(\Psi^n,\hat{{\underaccent{\bar}{x}}})$. 
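The constant in the MET bound $h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}|{\left|{\underaccent{\bar}{Y}}\right|})\leq \log A_n$ is just the surface area of the unit sphere $S^{2n-1}\subset{\mathbb{C}}^n$; a two-line check of $A_n=2\pi^n/\Gamma(n)$ against the familiar low-dimensional values:

```python
import math

def sphere_area(n):
    """Surface area A_n = 2 pi^n / Gamma(n) of the unit sphere
    S^(2n-1) in C^n (identified with R^(2n))."""
    return 2.0 * math.pi ** n / math.gamma(n)

# n = 1: the circle S^1 has circumference 2*pi.
# n = 2: the 3-sphere S^3 has surface area 2*pi^2.
areas = [sphere_area(n) for n in (1, 2, 3)]
```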
For the amplitude entropies in , we explain the argument for $n=3$: [rCl]{} W\_k =Ue\^[j\_k]{}&&(r\_[k1]{}x\_1e\^[j\_1]{}+r\_[k2]{}x\_2e\^[j\_2]{}+r\_[k3]{}x\_3e\^[j\_3]{}), \[eq:Ys\] where $1\leq k\leq 3$. Noise addition in implies ${\textnormal{Pr}}(\hat X_k= 0)=0$, $\forall k$; we thus assume $\hat x_k\neq 0$ for all $k$. It is clear that $h({\left|W_1\right|})>-\infty$. There are 5 random variables $U$, $\Phi_1$, $\Psi_{1,2,3}$ for two amplitude and phase relations in the $W_1$ equation in . Given $W_1$ and $\angle W_2$, there are 6 random variables and three equations. One could, for instance, express $\Psi_{1,2,3}$ in terms of $U$ and $\Phi_{1,2}$. This leaves free at least $U$ in $|W_2|$, giving [rCl]{} h(|W\_2|| W\_1, W\_2,.)&&h(U)+c\ &&gt;&-. The last equation for $W_3$ adds one random variable $\Phi_3$ and one equation for $\angle W_3$. Together with the equation for ${\left|W_2\right|}$, the number of free random variables, defined as the number of all random variables minus the number of equations, is 2; thus [rCl]{} h(|W\_3|| W\_1, W\_2, W\_3,.)&gt;-. In a similar way, in general, there are $n+k+1$ random variables in $W^k$ and $2k-1$ equations in $(W^{k-1}, \angle W_k)$, resulting in $n-k+2\geq 2$ free random variables. Thus [rCl]{} h(|W\_k|| W\^[k-1]{}, W\_[k]{})&gt;-,1kn. \[eq:amp-entropies\] Substituting and into –, we obtain $h({\underaccent{\bar}{W}}|.)>-\infty$. Finally, from [rCl]{} h\_(|, [||]{})&gt;-. The proof for lossy models, and , is similar. Loss changes matrix ${\mathsf{R}}$, which has no influence on our approach to proving the boundedness of terms in –. The essence of the above proof is that, as ${\left|{\underaccent{\bar}{x}}\right|}\rightarrow\infty$, the additive noise in gets smaller relative to the signal, but phase noise (and thus randomness in ${\mathbb{M}}$) does not decrease with ${\left|{\underaccent{\bar}{x}}\right|}$. 
Furthermore, ${\mathbb{M}}$ has enough randomness, owing to the mixing effect of the dispersion, so that all $2n-1$ angles representing signal direction in the spherical coordinate system are random variables that do not vanish with ${\left|{\underaccent{\bar}{x}}\right|}$. For some special cases of the dispersion-loss matrix ${\mathsf{R}}$, it is possible to obtain deterministic components in $\hat{{\underaccent{\bar}{Y}}}$ as ${\left|{\underaccent{\bar}{x}}\right|}\rightarrow\infty$. These are cases where mixing does not fully occur, [*e.g.*]{}, ${\mathsf{R}}=I_n$. In the MSSFM, however, ${\mathsf{R}}$ is arbitrary, due to, [*e.g.*]{}, step size $\epsilon$. Proof of the Corollary \[cor:inf\] {#sec:cor} ---------------------------------- We fix the power constraint and let $n\rightarrow\infty$ in the definition of the capacity. The logarithmic terms depending on ${{\mathcal{P}}}$ in the Theorem \[thm:main\] approach zero, so that ${{\mathcal{C}}}<\infty$. Consider now the continuous-time model . We discretize the channel in the frequency domain, according to the approach in [@yousefi2015cwit2]. As the time duration ${{\mathcal{T}}}\rightarrow\infty$ in [@yousefi2015cwit2 Section II], we obtain a discrete-time model with infinite number of [DOFs]{} (Fourier modes) in any frequency interval at $z=0$. Therefore, ${{\mathcal{C}}}<\infty$ in the corresponding discrete-time periodic model. It is shown in [@yousefi2011opc Section VIII] that, because of the spectral broadening, the capacity of the continuous-time model ${{\mathcal{C}}}_c$ can be strictly lower than the capacity of the discrete-time model ${{\mathcal{C}}}_d$. Since ${{\mathcal{C}}}_c\leq {{\mathcal{C}}}_d$, and ${{\mathcal{C}}}_d<\infty$, we obtain ${{\mathcal{C}}}_c<\infty$. We do not quantify constant $c'$ in the continuous-time model, which can be much lower than the constant $c$ in the discrete-time model, due to spectral broadening (potentially, $c'(\infty,\infty)=0$). 
A crude estimate, based on the Carson bandwidth rule, is given in [@yousefi2011opc Section VIII] for the zero-dispersion channel. To summarize, SE is bounded in input power in the continuous-time model with $n=\infty$ (with or without filtering). The extent of the data rate loss due to the spectral broadening ($c'$ versus $c$) remains an open problem. Random Matrix Model and the Asymptotic Capacity {#sec:proof2} =============================================== In this section it is shown that, as ${\left|{\underaccent{\bar}{X}}\right|}\rightarrow\infty$, the action of the discrete-time periodic stochastic NLS equation tends to multiplication by a random matrix (with fixed PDF, independent of the input). Noise simplifies the NLS channel to a *linear* multiple-input multiple-output non-coherent fading channel. This section also proves Theorems \[thm:main\] in an alternative intuitive way. The approach is based on the following steps. *Step 1)* In Section \[sec:decomposition\], the input signal space is partitioned into a bounded region $\mathcal R^-$ and its complement $\mathcal R^+$. It is shown that the overall rate is the interpolation of rates achievable using signals in $\mathcal R^{\pm}$. Lemma \[lemm:I&lt;infty\] is proved, showing that the contribution of $\mathcal R^-$ to the mutual information is bounded. Suitable regions $\mathcal R^{\pm}$ are chosen for the subsequent use. *Step 2)* In Section \[sec:fading-model\], it is shown that for all $q(t,0)\in \mathcal R^+$, the nonlinear operator $L_N=j\gamma|Q|^2Q$ is multiplication by a uniform phase random variable, [*i.e.*]{}, [rCl]{} L\_N(Q)= j(t,z) Q,t, z, where[^2]$\Theta(t,z)\sim{\text{i.i.d.}}\ \mathcal U(0,2\pi)$. In other words, for input signals in $\mathcal R^+$ the stochastic NLS equation is a simple linear channel with additive and multiplicative noise [rCl]{} =L\_L(Q)+j (t,z)Q+N(t,z). 
\[eq:multiplicative\] Discretizing , we obtain that optical fiber is a fading channel when input is in $\mathcal R^+$: [rCl]{} Y&=&X+Z,\^2P, \[eq:Y=HX+N\] in which ${\mathbb{M}}$ is a random matrix of the form [rCl]{} M = \_[k=1]{}\^m RD\_k,D\_k=(e\^[j\_[ki]{}]{}), \[eq:M-expr\] where $\Theta_{kl}\sim{\text{i.i.d.}}\ \mathcal U(0,2\pi)$ and ${\underaccent{\bar}{Z}}$ is noise [rCl]{} Z=\_[k=1]{}\^[m]{}(\_[l=1]{}\^kR D\_l)Z\_k,Z\_k\~ [\_(0,I\_n)]{}. In general, ${\mathbb{M}}$ and ${\underaccent{\bar}{Z}}$ are non-Gaussian. However, in the constant loss model, ${\underaccent{\bar}{Z}}\sim{\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathsf{K}}\right)}$ where ${\mathsf{K}}=(\sigma^2{{\mathcal{W}}}{{\mathcal{L}}}_e/n)I_n$, ${{\mathcal{L}}}_e=(1-e^{-\alpha{{\mathcal{L}}}})/\alpha$. Note that ${\mathbb{M}}$ and ${\underaccent{\bar}{Z}}$ have fixed PDFs, independent of ${\underaccent{\bar}{X}}$. Summarizing, the channel law is [rCl]{} p(y|x)= [given by the NLS equation]{.nodecor}, & xR\^-,\ p(Mx+Z|x), & xR\^+. \[eq:law\] *Step 3)* In Section \[sec:asymptotic-capacity\], the capacity of the multiplicative-noise channel is studied. Lemma \[lem:cap-Y=HX+N\] and \[lem:h(Mx)\] are proved showing that, for any ${\mathbb{M}}$ that does not have a deterministic component and is finite (see ), the asymptotic capacity is given by the Theorem \[thm:main\]. Importantly, the asymptotic rate is nearly independent of the PDF of ${\mathbb{M}}$, which impacts only the bounded number $c$. Finally, Lemma \[lem:h(M-fiber)&gt;-infty\] is proved showing that the random matrix underlying the optical fiber at high powers meets the assumptions of the Lemma \[lem:cap-Y=HX+N\]. An expression is provided for $c$, which can be evaluated, depending on the PDF of ${\mathbb{M}}$. 
Step 1): Rate Interpolation {#sec:decomposition} --------------------------- We begin by proving the following lemma, which is similar to the proof approach in [@agrell2015conds], where the notion of satellite constellation is introduced. Let $p({\underaccent{\bar}{y}}|{\underaccent{\bar}{x}})$, ${\underaccent{\bar}{x}}, {\underaccent{\bar}{y}} \in{\mathbb{R}}^n$, be a conditional PDF. Define [rCl]{} X= X\_1, & [with probability]{.nodecor} ,\ X\_2, & [with probability]{.nodecor} 1-, where ${\underaccent{\bar}{X}}_{1}$ and ${\underaccent{\bar}{X}}_2$ are random variables in ${\mathbb{R}}^n$ and $0\leq\lambda\leq 1$. Then [rCl]{} R\_1+(1-)R\_2RR\_1+(1-)R\_2+H(), \[eq:R1-R2-H\] where $R_1$, $R_2$ and $R$ are, respectively, mutual information of $X_1$, $X_2$ and $X$, and $H(x)=-x\log x-(1-x)\log(1-x)$ is the binary entropy function, $0\leq x\leq 1$. \[lemm:decomposition\] The PDF of the time sharing random variable ${\underaccent{\bar}{X}}$ and its output ${\underaccent{\bar}{Y}}$ are [rCl]{} p\_[X]{}(x)&=&p\_[X\_1]{}(x)+(1-)p\_[X\_2]{}(x),\ p\_[Y]{}(y)&=&p\_[Y\_1]{}(y)+(1-)p\_[Y\_2]{}(y), \[eq:py=py1+py2\] where [rCl]{} p\_[Y\_1,Y\_2]{}(y)=p(y|x)p\_[X\_1,X\_2]{}(x)x. By elementary algebra [rCl]{} I(X; Y) = I(X\_1;Y\_1)+(1-) I(X\_2,Y\_2)+I, where [rCl]{} I &=& D( p\_[Y\_1]{}(y\_1)||p\_[Y]{}(y)) +(1-) D( p\_[Y\_2]{}(y\_2)||p\_[Y]{}(y)). From , $$p_{{\underaccent{\bar}{Y}}}({\underaccent{\bar}{y}})\geq \max\left\{\lambda p_{{\underaccent{\bar}{Y}}_1}({\underaccent{\bar}{y}}), (1-\lambda) p_{{\underaccent{\bar}{Y}}_2}({\underaccent{\bar}{y}})\right\},$$ which gives $\Delta I\leq H(\lambda)$. From $\log x\leq x-1$, $D(p({\underaccent{\bar}{y}}_{1,2})||p({\underaccent{\bar}{y}}))\geq 0$, giving $\Delta I\geq 0$. Thus [rCl]{} I (X;Y) && I (X\_1;Y\_1)+(1-) I(X\_2;Y\_2)+H(),\ I(X;Y) && I (X\_1;Y\_1)+(1-) I(X\_2;Y\_2). Define [rCl]{} |R=\_[n]{}I(X;Y). With definitions in the Lemma \[lemm:decomposition\], we have $\bar R=\lambda\bar R_1+(1-\lambda)\bar R_2$. 
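The sandwich in eq. \[eq:R1-R2-H\] is cheap to evaluate; a small helper (hypothetical names) for the interpolation bounds, in nats:

```python
import math

def binary_entropy(lam):
    """Binary entropy H(lam) in nats, with H(0) = H(1) = 0."""
    if lam in (0.0, 1.0):
        return 0.0
    return -lam * math.log(lam) - (1.0 - lam) * math.log(1.0 - lam)

def interpolation_bounds(R1, R2, lam):
    """Lower/upper bounds on the time-shared rate R from the lemma."""
    base = lam * R1 + (1.0 - lam) * R2
    return base, base + binary_entropy(lam)

lo, hi = interpolation_bounds(1.0, 3.0, 0.25)
```

Since the time-sharing overhead is at most $H(\lambda)\leq\log 2$ nats, restricting or removing the bounded region $\mathcal R^-$ cannot change the asymptotic capacity.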
\[cor:rate-interpolation\] For the rest of the paper, we choose $\mathcal R^-$ to be an $n$-hypercube in ${\mathbb{C}}^n$, $$\mathcal R^-_{\kappa}=\left\{{\underaccent{\bar}{x}}\in{\mathbb{C}}^n \;\middle|\; |x_k|< \kappa,\ 1\leq k\leq n \right\},$$ and $\mathcal R^+_{\kappa}={\mathbb{C}}^n\backslash\mathcal R^-_{\kappa}$. We drop the subscript $\kappa$ when we do not need it. The following lemma shows that, if $\kappa<\infty$, the contribution of the signals in $\mathcal R^-_\kappa$ to the mutual information in the NLS channel is bounded. Let ${\underaccent{\bar}{X}}\in{\mathbb{C}}^n$ be a random variable supported on $\mathcal R^-_{\kappa}$ and $\kappa<\infty$. For the NLS channel $$I({\underaccent{\bar}{X}};{\underaccent{\bar}{Y}})<\infty.$$ \[lemm:I&lt;infty\] From the MET, $h({\underaccent{\bar}{Y}})\leq \log\left|\mathcal R^-_\kappa\right|< \infty$. Let ${\underaccent{\bar}{X}}\rightarrow {\underaccent{\bar}{Z}}\rightarrow {\underaccent{\bar}{Y}}={\underaccent{\bar}{Z}}+{\underaccent{\bar}{N}}$ be a Markov chain, where ${\underaccent{\bar}{N}}$ is independent of ${\underaccent{\bar}{Z}}$ and $h({\underaccent{\bar}{N}})>-\infty$. Then $h({\underaccent{\bar}{Y}}|{\underaccent{\bar}{X}})\geq h({\underaccent{\bar}{Y}}|{\underaccent{\bar}{X}},{\underaccent{\bar}{Z}})=h({\underaccent{\bar}{Y}}|{\underaccent{\bar}{Z}})=h({\underaccent{\bar}{N}})>-\infty$. Applying this to the NLS channel with an independent noise addition in the last stage, we obtain $I({\underaccent{\bar}{X}}; {\underaccent{\bar}{Y}})<\infty$. Alternatively, from [@yousefi2015cwit2], $$\frac{1}{n}I({\underaccent{\bar}{X}}; {\underaccent{\bar}{Y}})\leq \log\Bigl(1+\frac{|\mathcal R^-_\kappa|^2}{nm{\mathcal{D}}}\Bigr)<\infty.$$ \[lem:R-\] Step 2): Channel Model in the High Power Regime {#sec:fading-model} ----------------------------------------------- We begin with the zero-dispersion channel. Let $Q(t,0)=X=R_x\exp(j\Phi_x)$ and $Q(t,z)=Y=R_y\exp(j\Phi_y)$ be, respectively, the channel input and output in . For a fixed $t$, $X$ and $Y$ are complex numbers. We have $$\lim_{{\left|x\right|}\rightarrow\infty}p(\phi_y|x)=\lim_{{\left|x\right|}\rightarrow\infty}{\mathsf{E}}\,p(\phi_y|x,r_y)=\frac{1}{2\pi}.$$
Thus, the law of the zero-dispersion channel tends to the law of the following channel $$Y=Xe^{j\Theta}+Z,$$ where $\Theta\sim\mathcal U(0,2\pi)$, $Z\sim {\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathcal{D}}\right)}$, and $(X,Z,\Theta)$ are independent. \[lemm:uniform\] The conditional PDF is [@yousefi2011opc Eq. 18] $$p(r_y,\phi_y|r_x,\phi_x)=p_0(r_y|r_x)+\sum_{m=1}^{\infty}\Re\Bigl(p_m(r_y|r_x)\,e^{jm(\phi_y-\phi_x-\gamma r^2_xz)}\Bigr).$$ Here $$p_m(r_y|r_x)=2r_xb_m\exp\bigl(-a_m(r_x^2+r_y^2)\bigr)I_m(2b_mr_xr_y),$$ where [rCl]{} a\_m=x\_m(x\_m),b\_m=, in which $x_m=\sqrt{jm\gamma\mathcal D}z=t_m(1+j)$, $t_m=\sqrt{\frac{1}{2}m\gamma\mathcal D}z$. Note that $p(r_y|r_x)=p_0(r_y|r_x)$. The conditional PDF of the phase is $$p(\phi_y|r_x,\phi_x, r_y)=\frac{p(r_y,\phi_y|r_x,\phi_x)}{p(r_y|r_x,\phi_x)}\overset{(a)}{=}\sum_{m=1}^{\infty}\Re\Bigl( D_m(r_x)\, e^{jm(\phi_y-\phi_x-\gamma r_x^2z)}\Bigr)+\frac{1}{2\pi},$$ where step $(a)$ follows from $p(r_y|r_x,\phi_x)=p(r_y|r_x)$ (see [@yousefi2011opc Fig. 6 (b)]) and [rCl]{} D\_m(r\_x)&=&\ &=&\ &&{-b\_0(x\_mx\_m-1)(r\_x\^2+r\_y\^2)}. \[eq:Dm\] The following three inequalities can be verified: [rCl]{} ||\^2&=&\ && 1,t>0. \[eq:inq1\] [rCl]{} ||&& ||\ &&1. \[eq:inq2\] [rCl]{} F(t)&=&(x\_mx\_m-1)\ &=&t-1\ &>&0, \[eq:inq3\] where $t{\stackrel{\Delta}{=}}t_m>0$. Using – in , we obtain $|D_m(r_x)|\leq E_m(r_x)$, where $$E_m(r_x)=\exp\bigl\{-F(t_m)(r_x^2+r_y^2)\bigr\}.$$ \[eq:D&lt;1\] We have $$\lim_{r_x\rightarrow\infty}\Bigl|\sum_{m=1}^{\infty}D_m(r_x)\, e^{jm(\phi_y-\phi_x-\gamma r_x^2z)}\Bigr| \leq \lim_{r_x\rightarrow\infty} \sum_{m=1}^{\infty}|D_m(r_x)| \leq \lim_{r_x\rightarrow\infty} \sum_{m=1}^{\infty}E_m(r_x) \overset{(a)}{=} \sum_{m=1}^{\infty}\lim_{r_x\rightarrow\infty} E_m(r_x) \overset{(b)}{=} 0.$$ Step $(a)$ follows because $E_m(r_x)\leq E_m(0)$ and $\sum E_m(0)$ is convergent; thus, by the dominated convergence theorem, $\sum E_m(r_x)$ is uniformly convergent. Step $(b)$ follows from . It follows that $$\lim_{r_x\rightarrow\infty}p(\phi_y|r_x, \phi_x, r_y)=\frac{1}{2\pi}.$$ Furthermore $$\lim_{r_x\rightarrow\infty}p(\phi_y|r_x,\phi_x)= \lim_{r_x\rightarrow\infty}\int p(\phi_y|r_x,\phi_x, r_{y'})\,p(r_{y'}|r_x,\phi_x)\,{\mathrm{d}}r_{y'}=\frac{1}{2\pi}.$$ Lemma \[lemm:uniform\] generalizes to the vectorial zero-dispersion channel .
Since noise is independent and identically distributed in space and time, so are the corresponding uniform phases. This is true even if $X_i$ in Fig. \[fig:mssfm\] are dependent, [*e.g.*]{}, ${\underaccent{\bar}{X}}=(x,\cdots, x)$. We now consider the dispersive model. To generalize Lemma \[lemm:uniform\] to the full model, we use the following notion [@moser2004dbb Section 2.6]. A family of PDFs $\{p_{{\underaccent{\bar}{X}}_{\theta}}({\underaccent{\bar}{x}})\}_{\theta}$, $0\leq\theta\leq\theta_0$, is said to *escape to infinity* with $\theta$ if $\lim\limits_{\theta\rightarrow\theta_0}{\textnormal{Pr}}(|{\underaccent{\bar}{X}}_{\theta}|<c)=0$ for any finite $c$. Let ${\underaccent{\bar}{X}}\in\mathcal R^+_{\kappa}$ and ${\underaccent{\bar}{Y}}$ be, respectively, the channel input and output in the dispersive model. The PDF of $Y_k$ escapes to infinity as $\kappa\rightarrow\infty$ for all $k$. \[lemm:scape\] The proof is based on induction over the MSSFM units. We make precise the intuition that, as $\kappa\rightarrow \infty$, the PDF of $|Y_k|$ spreads out, so that an ever-decreasing probability is assigned to any finite interval. Consider the vector ${\underaccent{\bar}{V}}$ in Fig. \[fig:mssfm\], at the end of the linear step in the first unit. Setting $W={\left|V_k\right|}$, we have $${\textnormal{Pr}}(W<c)=\int_{0}^{c} p_W(w)\,{\mathrm{d}}w=\frac{1}{\epsilon}\int_{0}^{\epsilon c} p_W\Bigl(\frac{t}{\epsilon}\Bigr)\,{\mathrm{d}}t\leq \epsilon c\,{\left\lVert p_{T_{\epsilon}}(t)\right\rVert}_{\infty},$$ \[eq:p(W&lt;c)\] where $T_{\epsilon}=\epsilon W$, $p_{T_{\epsilon}}(t)=\frac{1}{\epsilon}p_W(\frac{t}{\epsilon})$ and $\epsilon{\stackrel{\Delta}{=}}1/\kappa$. Below, we prove that ${\left\lVert p_{T_0}(t)\right\rVert}_{\infty}<\infty$. Fix $0<\delta<1$ and define the (non-empty) index set $$\mathcal I=\left\{i:\ |x_i|\geq \kappa^{1-\delta}\right\}.$$
The scaled random variable $T_{\epsilon}$ is $$T_{\epsilon}=\epsilon|V_k|=\Bigl|\sum_{l=1}^n e^{j\Psi_l(x_l,{\underaccent{\bar}{N}}_l^1)}r_{kl}\tilde x_l+\epsilon\tilde{Z}_k\Bigr|=\Bigl|\,\sum_{l\in\mathcal I}+\sum_{l\notin\mathcal I}+\epsilon\tilde{Z}_k\Bigr|,$$ where $\tilde{{\underaccent{\bar}{Z}}}$ is an additive noise and $\tilde{{\underaccent{\bar}{x}}}=\epsilon{\underaccent{\bar}{x}}$. As $\epsilon\rightarrow 0$, the second sum vanishes because, if $l\notin \mathcal I$, $|\tilde{x}_l|<\epsilon^\delta\rightarrow 0$. In the first sum, $|x_l|\rightarrow\infty$, thus $\Psi_l(x_l,{\underaccent{\bar}{N}}_l^1)\overset{\textnormal{a.s.}}{\rightarrow} U_l$, where $U_l\sim\mathcal U(0,2\pi)$. Therefore $T_{\epsilon}\overset{\textnormal{a.s.}}{\rightarrow} T_0$, in which $$T_{0}=\Bigl|\sum_{l\in \mathcal I}e^{jU_l}r_{kl}\tilde x_l\Bigr|,$$ \[eq:T-eps\] where $|\tilde x_l|>0$. Since the PDF of $e^{jU_l}$ is in $L^{\infty}(\mathbb T)$ on the circle $\mathbb T$, so is the conditional PDF $p_{T_0|\tilde{{\underaccent{\bar}{ X}}}}(t|\tilde{{\underaccent{\bar}{x}}})$, [*i.e.*]{}, $${\left\lVert p_{T_0}(t)\right\rVert}_{\infty}<\infty.$$ \[eq:p(t)&lt;infty\] Substituting into $$\lim_{\epsilon\rightarrow 0}{\textnormal{Pr}}(W<c)=0.$$ \[eq:p(W&lt;c)-2\] In a similar way, can be proved for ${\underaccent{\bar}{V}}$ at the output of the linear step in the second unit, by replacing $e^{jU_l}r_{kl}$ in with $M_{kl}$, and noting that, as $\epsilon\rightarrow 0$, $\{M_{kl}\}_{l\in\mathcal I}$ tend to random variables independent of the input, with a smooth PDF (without delta functions). From Lemma \[lemm:scape\], as $\kappa\rightarrow\infty$ the probability distribution at the input of every zero-dispersion segment in the link escapes to infinity, turning the operation of the nonlinearity in that segment into multiplication by a uniform phase and independent noise addition. We thus obtain an input region $\mathcal R^+_\kappa$ for which, if ${\underaccent{\bar}{x}}\in\mathcal R^+_\kappa$, the channel is multiplication by a random matrix, as described in . The channel converts any small noise into worst-case noise as the signal evolves.
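The high-power behavior established above is easy to check by Monte Carlo on a single sample of the limiting zero-dispersion channel $Y=Xe^{j\Theta}+Z$ of Lemma \[lemm:uniform\]: for a large input amplitude, the circular moments of the output phase vanish, i.e., the phase is approximately uniform and carries (almost) no information. The amplitude and noise values below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
x = 50.0   # large input amplitude (assumed value)
D = 1.0    # additive-noise variance (assumed value)

# Limiting zero-dispersion channel: Y = X e^{j Theta} + Z.
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
z = np.sqrt(D / 2.0) * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = x * np.exp(1j * theta) + z
phi_y = np.angle(y)

# For an exactly uniform phase, all circular moments E[e^{j m Phi}] are zero;
# the empirical moments should be O(1/sqrt(N)).
m1 = abs(np.mean(np.exp(1j * phi_y)))
m2 = abs(np.mean(np.exp(2j * phi_y)))
```

The same experiment at small $|X|$ gives non-vanishing circular moments, reflecting that the phase uniformization is specifically a high-power effect.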
Step 3): The Asymptotic Capacity {#sec:asymptotic-capacity} -------------------------------- In this section, we obtain the asymptotic capacity of the channel . Applying Lemma \[lemm:decomposition\] to , $$\bar R({{\mathcal{P}}})=\lambda\bar R_-({{\mathcal{P}}})+(1-\lambda)\bar R_+({{\mathcal{P}}}),$$ where $\bar R_{\pm}({{\mathcal{P}}})=\frac{1}{n}I({\underaccent{\bar}{X}};{\underaccent{\bar}{Y}})$, ${\underaccent{\bar}{X}}\in\mathcal R^{\pm}_\kappa$ and $\lambda$ is a parameter to be optimized. To shorten the analysis, we ignore the term $c=H(\lambda)/n$ in , as it does not depend on ${{\mathcal{P}}}$. We choose $\kappa$ sufficiently large, *independent of the average input power* ${{\mathcal{P}}}$. From Lemma \[lemm:I&lt;infty\], $\sup_{{{\mathcal{P}}}}\bar R_-({{\mathcal{P}}})<\infty$. The following lemma shows that $\bar{R}_+({{\mathcal{P}}})$ is given by the logarithmic terms in Theorem \[thm:main\] with $$c\leq \lambda\bar R_-+(1-\lambda)\sup_{\hat{{\underaccent{\bar}{X}}}} I(\hat{{\underaccent{\bar}{X}}};{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}),$$ where $\bar R_-$ is the achievable rate at low powers. If ${\mathbb{M}}$ is Haar distributed, $c=\lambda \bar R_-$. Define $h({\mathbb{M}}){\stackrel{\Delta}{=}}h(M_{11},\cdots, M_{nn})$. Assume that $$h({\mathbb{M}})>-\infty,\qquad {\mathsf{E}}\,|M_{ij}|^2<\infty,\quad 1\leq i,j\leq n.$$ \[eq:h(M)&gt;-infty\] Then, the asymptotic capacity of  is given by the expressions stated in Theorem \[thm:main\]. \[lem:cap-Y=HX+N\] The capacity of the multiple-input multiple-output non-coherent memoryless fading channel is studied in [@moser2004dbb; @lapidoth2003capacity]. Here, we present a short proof with a bit of approximation. Using the chain rule for mutual information, $$I({\underaccent{\bar}{X}};{\underaccent{\bar}{Y}})=I({\left|{\underaccent{\bar}{X}}\right|};{\underaccent{\bar}{Y}})+I(\hat{{\underaccent{\bar}{X}}};{\underaccent{\bar}{Y}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})=I({\left|{\underaccent{\bar}{X}}\right|};{\left|{\underaccent{\bar}{Y}}\right|})+I(\hat{{\underaccent{\bar}{X}}};{\underaccent{\bar}{Y}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})+I({\left|{\underaccent{\bar}{X}}\right|};\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{Y}}\right|}).$$ \[eq:I(X;Y)\] The first term in  gives the logarithmic terms in Theorem \[thm:main\], as calculated in Section \[sec:proof1\]. We prove that the other terms are bounded in ${\left|{\underaccent{\bar}{X}}\right|}$.
From Lemma \[lemm:decomposition\], the additive noise in can be ignored when ${\underaccent{\bar}{X}}\in\mathcal R^+_\kappa$, so that ${\underaccent{\bar}{Y}}\approx{\mathbb{M}}{\underaccent{\bar}{X}}$. The second term in is $$I(\hat{{\underaccent{\bar}{X}}};{\underaccent{\bar}{Y}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})=I(\hat{{\underaccent{\bar}{X}}};{\left|{\underaccent{\bar}{X}}\right|}{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})=I(\hat{{\underaccent{\bar}{X}}};{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|}),$$ where we used identity . Note that we cannot assume that ${\left|{\underaccent{\bar}{X}}\right|}$ and $\hat{{\underaccent{\bar}{X}}}$ are independent. For the output entropy $$h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})\leq h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}})\overset{(a)}{\leq}\sum_{k=1}^n h\Bigl(\sum_{l=1}^n M_{kl}\hat X_l\Bigr)\overset{(b)}{\leq}\sum_{k=1}^n \log\Bigl(\pi e\,{\mathsf{E}}\Bigl|\sum_{l=1}^n M_{kl}\hat X_l\Bigr|^2\Bigr)\overset{(c)}{\leq}\sum_{k=1}^n \log\Bigl({\mathsf{E}}\sum_{l=1}^n |M_{kl}|^2\Bigr)+n\log(\pi e)\overset{(d)}{\leq} n\log\bigl({\mathsf{E}}\,{\left|{\mathbb{M}}\right|}_F^2\bigr)+n\log(\pi e)<\infty,$$ where ${\left|{\mathbb{M}}\right|}_F=\Bigl(\sum\limits_{k,l=1}^n|M_{kl}|^2\Bigr)^{\frac{1}{2}}$ is the Frobenius norm. Step $(a)$ is obtained using the inequality $h({\underaccent{\bar}{W}})=\sum_k h(W_k|W^{k-1})\leq\sum_k h(W_k)$. Step $(b)$ is due to the MET. Cauchy-Schwarz and Jensen’s inequalities are, respectively, applied in steps $(c)$ and $(d)$. For the conditional entropy $$h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|}, \hat{{\underaccent{\bar}{X}}})={\mathsf{E}}_{\hat{{\underaccent{\bar}{x}}}}\, h({\mathbb{M}}\hat{{\underaccent{\bar}{x}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})\overset{(a)}{\geq}\inf_{\hat{{\underaccent{\bar}{x}}}}\, h({\mathbb{M}}\hat{{\underaccent{\bar}{x}}})>-\infty.$$ \[eq:inter-6-aa\] Step $(a)$ holds because, from Lemma \[lem:h(Mx)\], $h({\mathbb{M}}\hat{{\underaccent{\bar}{x}}})>-\infty$ for any $\hat{{\underaccent{\bar}{x}}}$. The third term in can be upper bounded using the second term by setting ${\underaccent{\bar}{X}}={\mathbb{M}}^{-1}{\underaccent{\bar}{Y}}$. We prove it alternatively. Since $\hat{{\underaccent{\bar}{Y}}}$ is compactly supported, $h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{Y}}\right|})\leq h_{\sigma}(\hat{{\underaccent{\bar}{Y}}})<\infty$. The conditional entropy is $$h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|},{\left|{\underaccent{\bar}{Y}}\right|})=h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|},{\left|{\underaccent{\bar}{X}}\right|}|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|)=h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|},|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|).$$ \[eq:cond-entropy-Y=HX+N\] Applying identity to ${\mathbb{M}}\hat{{\underaccent{\bar}{X}}}$ and conditioning on ${\left|{\underaccent{\bar}{X}}\right|}$, $$h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|}, |{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|)= h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})-h(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|\,|\,{\left|{\underaccent{\bar}{X}}\right|})-(2n-1)\,{\mathsf{E}}\bigl(\log(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|)\,\big|\,{\left|{\underaccent{\bar}{X}}\right|}\bigr).$$
\[eq:inter-6\] For the first term in $$h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})\geq h({\mathbb{M}}\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|},\hat{{\underaccent{\bar}{X}}})>-\infty,$$ \[eq:inter-6-a\] where we used . Since $|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|\leq{\left|{\mathbb{M}}\right|}\leq {\left|{\mathbb{M}}\right|}_F$, from the MET $$h(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|\,|\,{\left|{\underaccent{\bar}{X}}\right|})\leq h(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|)\leq \frac{1}{2}\log\bigl(2\pi e\,{\mathsf{E}}\,|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|^2\bigr)\leq \frac{1}{2}\log\bigl(2\pi e\,{\mathsf{E}}\,{\left|{\mathbb{M}}\right|}_F^2\bigr)<\infty.$$ \[eq:inter-6-b\] Furthermore, $${\mathsf{E}}\bigl(\log(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|^2)\,\big|\,{\left|{\underaccent{\bar}{X}}\right|}\bigr)\leq \log{\mathsf{E}}\bigl(|{\mathbb{M}}\hat{{\underaccent{\bar}{X}}}|^2\,\big|\,{\left|{\underaccent{\bar}{X}}\right|}\bigr)\leq \log {\mathsf{E}}\,{\left|{\mathbb{M}}\right|}_F^2<\infty.$$ \[eq:inter-6-c\] Substituting – into and , we obtain $$h_{\sigma}(\hat{{\underaccent{\bar}{Y}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|},{\left|{\underaccent{\bar}{Y}}\right|})>-\infty.$$ The main ingredient in the proof of Lemma \[lem:cap-Y=HX+N\], as well as Theorem \[thm:main\], is the following lemma. Let ${\mathbb{M}}$ be a random matrix and ${\underaccent{\bar}{x}}\in{\mathbb{C}}^n$ a non-zero deterministic vector. If ${\mathbb{M}}$ satisfies the assumptions in , then $$h({\mathbb{M}}{\underaccent{\bar}{x}})>-\infty.$$ \[lem:h(Mx)\] Since ${\underaccent{\bar}{x}}\neq 0$, at least one element of $ {\underaccent{\bar}{x}}$ is nonzero, say $ x_1\neq 0$. We switch the order of ${\mathbb{M}}$ and ${\underaccent{\bar}{x}}$ in the product ${\mathbb{M}}{\underaccent{\bar}{x}}$ as follows. Let ${\underaccent{\bar}{ M}}\in{\mathbb{C}}^{n^2}$ denote the vectorized version of ${\mathbb{M}}$, where rows are concatenated as a column vector. Define ${\underaccent{\bar}{V}}\in{\mathbb{C}}^{n^2}$ as follows: $$V_{k}=\begin{cases} M_{in}, & r=0,\\ Y_{i+r}=\sum_{l=1}^n M_{(i+r)l}x_l, & r=1,\\ M_{(i+1)r}, & r\geq 2,\end{cases}$$ \[eq:V-vec\] where $k=in+r$, $0\leq i\leq n$, $0\leq r \leq n-1$. Then ${\underaccent{\bar}{Y}}={\mathbb{M}} {\underaccent{\bar}{x}}$ is transformed to , which in matrix notation is $${\underaccent{\bar}{V}}={\mathsf{A}}{\underaccent{\bar}{M}},$$ \[eq:V=AM\] in which ${\mathsf{A}}_{n^2\times n^2}=\operatorname{diag}(\underbrace{{\mathsf{X}},\cdots,{\mathsf{X}}}_{n~\textnormal{times}})$, where the deterministic matrix ${\mathsf{X}}_{n\times n}$ is $${\mathsf{X}}=\begin{pmatrix} x_1 & {\underaccent{\bar}{x}}_2^n\\ 0 & I_{n-1}\end{pmatrix},\qquad {\underaccent{\bar}{x}}_2^n=(x_2,\cdots, x_n),$$ in which $0$ is the $(n-1)\times 1$ all-zero vector.
From , $$h({\underaccent{\bar}{V}})=h({\underaccent{\bar}{M}})+\log\left|\det{\mathsf{A}}\right|=h({\mathbb{M}})+n\log{\left|x_1\right|}.$$ On the other hand, from , $$h({\underaccent{\bar}{V}})\overset{(a)}{=}h({\underaccent{\bar}{Y}},\{M_{ij}\}_{j\geq 2})=h({\underaccent{\bar}{Y}})+h(\{M_{ij}\}_{j\geq 2}\,|\,{\underaccent{\bar}{Y}})=h({\mathbb{M}}{\underaccent{\bar}{x}})+h(\{M_{ij}\}_{j\geq 2}\,|\,{\underaccent{\bar}{Y}}).$$ Step $(a)$ holds because conditions $r=1$ and $r=0,1$ in include, respectively, ${\underaccent{\bar}{Y}}$ and $\{M_{ij}\}_{j\geq 2}$. Combining the last two relations, $$h({\mathbb{M}}{\underaccent{\bar}{x}})= h({\mathbb{M}})+ n\log{\left| x_1\right|}-h(\{M_{ij}\}_{j\geq 2}\,|\,{\underaccent{\bar}{Y}}).$$ If ${\mathsf{E}}|M_{ij}|^2 < \infty$, from the MET, the last term is bounded from above. Since $h({\mathbb{M}})> -\infty$ and $x_1\neq 0$, $h({\mathbb{M}}{\underaccent{\bar}{x}})>-\infty$. The random matrix ${\mathbb{M}}$ in , underlying the optical fiber at high powers, satisfies the assumptions of Lemma \[lem:cap-Y=HX+N\]. \[lem:h(M-fiber)&gt;-infty\] Applying the triangle inequality to , $|M_{ij}|\leq \bigl(|{\mathsf{R}}|^m\bigr)_{ij}$, where $|{\mathsf{R}}|$ is the matrix with entries $|r_{ij}|$ and $m$ is the number of stages. We check the entropy condition in . In what follows, let $\theta_{i}\sim{\text{i.i.d.}}\ \mathcal U(0,2\pi)$. For one linear and one nonlinear step ($m=1$): $$\begin{aligned} {\mathbb{M}}= \begin{pmatrix} e^{j\theta_1}r_{11} & e^{j\theta_2} r_{12}\\ e^{j\theta_1}r_{21} & e^{j\theta_2}r_{22} \end{pmatrix}.\end{aligned}$$ In this case, there are four amplitude dependencies $|M_{ij}|=|r_{ij}|$, $1\leq i,j\leq 2$, and two phase dependencies: $$\angle M_{11}=\angle M_{21}+k\pi,\qquad \angle M_{12}=\angle M_{22}+k\pi,\qquad k=0,1.$$ A dependency means that ${\mathbb{M}}$ contains a deterministic component, [*i.e.*]{}, $h({\mathbb{M}})=-\infty$. For $m=2$: $$\begin{aligned} M_{11}&=& e^{j(\theta_1+\theta_3)}r_{11}^2+ e^{j(\theta_1+\theta_4)}r_{12}r_{21}, \\ M_{12}&=& e^{j(\theta_2+\theta_3)}r_{12}\left(r_{11}+e^{j(\theta_4-\theta_3)}r_{22}\right),\\ M_{21} &=& e^{j(\theta_1+\theta_3)}r_{21}\left(r_{11}+e^{j(\theta_4-\theta_3)}r_{22}\right),\\ M_{22} &=& e^{j(\theta_2+\theta_3)}r_{21}r_{12}+ e^{j(\theta_2+\theta_4)}r_{22}^2. 
\end{aligned}$$ In this case too, there is a dependency $|r_{21}M_{12}|=|r_{12}M_{21}|$. For $m=3$: [rCl]{} M\_[11]{} &=& e\^[j(\_1+\_3+\_5)]{}r\_[11]{}\^3+ e\^[j(\_1+\_4+\_5)]{}r\_[11]{}r\_[12]{}r\_[21]{}\ &&+e\^[j(\_1+\_3+\_6)]{}r\_[11]{}r\_[12]{}r\_[21]{}+ e\^[j(\_1+\_4+\_6)]{}r\_[12]{}r\_[21]{}r\_[22]{},\ M\_[12]{}&=&e\^[j\_2]{}r\_[12]{}( e\^[j(\_3+\_5)]{}r\_[11]{}\^2+ + e\^[j(\_4+\_6)]{}r\_[22]{}\^2\ &&+ ),\ M\_[21]{} &=& e\^[j\_1]{}r\_[21]{}(e\^[j(\_3+\_5)]{}r\_[11]{}\^2+ e\^[j(\_4+\_6)]{}r\_[22]{}\^2\ &&+ ),\ M\_[22]{} &=& e\^[j(\_2+\_3+\_5)]{}r\_[11]{}r\_[12]{}r\_[21]{}+ e\^[j(\_2+\_4+\_5)]{}r\_[12]{}r\_[21]{}r\_[22]{}\ &&+e\^[j(\_2+\_3+\_6)]{}r\_[12]{}r\_[21]{}r\_[22]{}+ e\^[j(\_2+\_4+\_6)]{}r\_[22]{}\^3. Comparing the boxed terms, $|r_{21}M_{12}|\neq |r_{12}M_{21}|$. There are still 8 equations for 6 variables. In general, the number of entries of ${\mathbb{M}}$ is $n^2$. As $m> 2n$ steps are taken in distance, a sufficient number of random variables $\theta_i$ is introduced in a matrix with fixed dimension. Since $n$ is fixed and $m$ is free, we obtain an under-determined system of polynomial equations for $x_i=\exp(j\theta_i)$ whose solution space has positive dimension. Thus an entry of ${\mathbb{M}}$ cannot be determined from all other entries. The rate interpolation Lemma \[lemm:decomposition\] implies that replacing ${\mathbb{C}}^n$ by $\mathcal R^+$ changes the asymptotic capacity by a finite number $c$. From the upper bound ${{\mathcal{C}}}\leq \log(1+{\text{SNR}})$ in [@yousefi2015cwit2] and Theorem 2.5 in [@moser2004dbb], we think that the asymptotic capacity can be achieved by an input distribution that escapes to infinity. This implies that $\lambda=0$, so that $c$ is indeed zero. We do not investigate this rigorously. A multivariate Gaussian input distribution is a poor choice for channels with multiplicative noise. Indeed, it achieves a rate bounded in power in .
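Returning to the vectorization argument in the proof of Lemma \[lem:h(Mx)\]: the change of variables there relies on $\det{\mathsf{X}}=x_1$, and hence $|\det{\mathsf{A}}|=|x_1|^n$ for the block-diagonal ${\mathsf{A}}$, which produces the $n\log|x_1|$ term. A minimal numerical check with an arbitrary test vector:

```python
import numpy as np

n = 4
x = np.array([2.0 - 1.0j, 0.3, -1.5j, 0.7 + 0.2j])  # arbitrary test vector, x_1 != 0

# X = [[x_1, x_2 ... x_n], [0, I_{n-1}]]: identity with its first row replaced by x.
X = np.eye(n, dtype=complex)
X[0, :] = x

# A = diag(X, ..., X) with n blocks, acting on the vectorized matrix M.
A = np.kron(np.eye(n), X)

detX = np.linalg.det(X)   # equals x_1 (cofactor expansion along the first column)
detA = np.linalg.det(A)   # equals x_1^n
# log|det A| = n log|x_1| is exactly the Jacobian term in h(V) = h(M) + n log|x_1|.
```

Since ${\mathsf{A}}$ is invertible whenever $x_1\neq 0$, the map ${\underaccent{\bar}{M}}\mapsto{\underaccent{\bar}{V}}$ is a bijection, and the differential entropy shifts by exactly this log-Jacobian.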
A log-normal input PDF for the signal norm achieves the asymptotic capacity of the non-constant loss model. Review of the Information Theory of the Optical Fiber {#sec:review} ===================================================== An information-theoretic analysis of the full model of the optical fiber does not exist. Even in the special case of zero dispersion, the spectral efficiency is unknown. In the full model, we do not know anything about the capacity in the high power regime, let alone the spectral efficiency. The state of the art is still lower bounds that are good in the nearly-linear regime. This situation calls for basic research in order to make progress on these open problems. The present paper builds on earlier work. We acknowledge [@mecozzi1994llh Eq. 12] for the equation , [@mecozzi1994llh; @turitsyn2003ico; @yousefi2011opc] for the PDF of the zero-dispersion channel, [@yousefi2011opc] for the analysis of the zero-dispersion model, [@yousefi2015cwit2; @kramer2015upper] for noting that Shannon entropy is invariant under the flow of a broad class of deterministic partial differential equations and for highlighting the usefulness of operator splitting (from numerical analysis) in the analysis of the NLS equation. Furthermore, we acknowledge [@agrell2015conds] for helpful insight leading to the rate interpolation Lemma \[lemm:decomposition\], [@moser2004dbb; @lapidoth2003capacity] for the study of fading channels and Section II of [@yousefi2012nft3] for unfolding the origin of the capacity limitations in fiber — particularly the finding that signal-signal interactions are not fundamental limitations in the deterministic model if communication takes place in the right basis ([*i.e.*]{}, the nonlinear Fourier basis), which led us to the study of the remaining factor in this paper, namely the signal-noise interactions. We do not intend to survey the literature in this paper. There is a good review in [@ghozlan2015focusing Section I-A].
The achievable rates of single solitons and multi-solitons are studied, respectively, in [@yousefi2012nft3; @meron2012soliton; @shevchenko2015; @zhang2016isit] and [@kaza2012; @kaza2016soliton; @buelow2016]. There is also a myriad of lower bounds that are good in the low power regime; see, [*e.g.*]{}, [@mecozzi2012nsl; @secondini2013achievable; @dar2014new; @terekhov2016physrev; @secondini2016limitsv2; @turitsyn2016nature]. The achievable rates of nonlinear frequency-division multiplexing (NFDM) for multi-user communication are presented in [@yousefi2016nfdm] for the Hermitian channel. Fig. \[fig:nfdm\] compares the NFDM and WDM rates [@yousefi2016nfdm Fig. 6]. The gap between the WDM and NFDM curves reflects signal-signal interactions. The gap between the NFDM and AWGN curves reflects signal-noise interactions. We conjecture that the NFDM rate is close to the capacity. At the power levels shown in Fig. \[fig:nfdm\], ${{\mathcal{C}}}_{\textnormal{wdm}}({{\mathcal{P}}})=\log{{\mathcal{P}}}+c$ and ${{\mathcal{C}}}_{\textnormal{nfdm}}({{\mathcal{P}}})=\log{{\mathcal{P}}}+c'$, $c<c'$. Although more gains are expected at ${{\mathcal{P}}}>-2.4$ dB, the slope of the blue curve will gradually decrease, converging, in the limit ${{\mathcal{P}}}\rightarrow\infty$, to the asymptotic form in Theorem \[thm:main\]. It is interesting to compare the extent of the signal-noise interactions in the time domain [@serena2016signalnoise] and in the nonlinear Fourier domain [@tavakkolnia2015sig Section IV. A]. Conclusions =========== The asymptotic capacity of the discrete-time periodic model of the optical fiber is characterized as a function of the input power in Theorem \[thm:main\]. With $n$ signal [DOFs]{} at the input, $n-1$ [DOFs]{} are asymptotically lost, leaving signal energy as the only available [DOF]{} for transmission. The appropriate input distribution is a log-normal PDF for the signal norm. Signal-noise interactions limit the operation of optical communication systems to low-to-medium powers.
Acknowledgments {#acknowledgments .unnumbered} =============== The research was partially conducted when the author was at the Technische Universität München (TUM). The support of the TUM Institute for Advanced Study, funded by the German Excellence Initiative, and the support of the Alexander von Humboldt Foundation, funded by the German Federal Ministry of Education and Research, are gratefully acknowledged. The author thanks Luca Barletta for comments. Proof of the Identity {#app:one} ====================== Let ${\mathrm{d}}V({\underaccent{\bar}{x}})$ and ${\mathrm{d}}S({\underaccent{\bar}{x}})$ be the volume and surface elements at point ${\underaccent{\bar}{x}}\in{\mathbb{R}}^n$ in the spherical coordinate system. Then $${\mathrm{d}}V({\underaccent{\bar}{x}})={\left|{\underaccent{\bar}{x}}\right|}^{n-1}\,{\mathrm{d}}V(\hat{{\underaccent{\bar}{x}}})={\left|{\underaccent{\bar}{x}}\right|}^{n-1}\,{\mathrm{d}}S(\hat{{\underaccent{\bar}{x}}})\,{\mathrm{d}}{\left|{\underaccent{\bar}{x}}\right|}.$$ Thus the Jacobian of the transformation from the Cartesian system with coordinates ${\underaccent{\bar}{x}}$ to the spherical system with coordinates $({\left|{\underaccent{\bar}{x}}\right|}, \hat{{\underaccent{\bar}{x}}})$ is ${\left|{\underaccent{\bar}{x}}\right|}^{n-1}$. As a consequence $$h({\underaccent{\bar}{X}})=h_{\sigma}({\left|{\underaccent{\bar}{X}}\right|},\hat{{\underaccent{\bar}{X}}})+(n-1)\,{\mathsf{E}}\log{\left|{\underaccent{\bar}{X}}\right|}= h({\left|{\underaccent{\bar}{X}}\right|})+h_{\sigma}(\hat{{\underaccent{\bar}{X}}}\,|\,{\left|{\underaccent{\bar}{X}}\right|})+(n-1)\,{\mathsf{E}}\log{\left|{\underaccent{\bar}{X}}\right|}.$$ Input-Output Relation in a Unit {#app:in-out-mssfm} =============================== Define $${\mathbb{D}}_1=\operatorname{diag}\bigl(e^{j\Phi_k}\bigr),\qquad {\mathbb{D}}_2=\operatorname{diag}\bigl(e^{j\Psi_k}\bigr).$$ The nonlinear steps in Fig. \[fig:mssfm\] in matrix notation are $${\underaccent{\bar}{U}}={\mathbb{D}}_1({\underaccent{\bar}{X}}+{\mathbb{N}}^1{\underaccent{\bar}{e}}),\qquad {\underaccent{\bar}{Y}}={\mathbb{D}}_2({\underaccent{\bar}{V}}+{\mathbb{N}}^2{\underaccent{\bar}{e}}),$$ where ${\underaccent{\bar}{e}}\in{\mathbb{R}}^L$ is the all-one column vector. Combining the linear and nonlinear steps, we obtain with ${\mathbb{M}}={\mathbb{D}}_2{\mathsf{R}}{\mathbb{D}}_1$ and $${\underaccent{\bar}{Z}}={\mathbb{M}}{\mathbb{N}}^1{\underaccent{\bar}{e}}+{\mathbb{D}}_2{\mathbb{N}}^2{\underaccent{\bar}{e}}.$$ \[eq:additive-Z\] Clearly ${\mathbb{N}}^{1,2}{\underaccent{\bar}{e}}\sim{\mathcal{N}_{{\mathbb{C}}}\!\left(0,{\mathcal{D}}I_n\right)}$. However ${\mathbb{M}}{\mathbb{N}}^1{\underaccent{\bar}{e}}$ and ${\mathbb{D}}_2 {\mathbb{N}}^2{\underaccent{\bar}{e}}$ are generally non-Gaussian due to the signal and noise terms in $\Phi_k$ and $\Psi_l$.
But if 1) the loss is constant, and 2) $x_k\rightarrow\infty$ for all $k$, then [rCl]{} Z\~[\_(0,)]{},K=(1+e\^[-\_[r]{}]{})I\_n. \[eq:noise\] In summary, the ${\mathbb{N}}$ variables are Gaussian; the ${\underaccent{\bar}{Z}}$ variables are Gaussian in the asymptotic analysis of the constant loss model. M. I. Yousefi and F. R. Kschischang, “Information transmission using the nonlinear [F]{}ourier transform, [P]{}art [I]{}: [M]{}athematical tools,” *IEEE Trans. Inf. Theory*, vol. 60, no. 7, pp. 4312–4328, Jul. 2014, [A]{}lso published at arXiv, Feb. 2012. \[Online\]. Available: <http://arxiv.org/abs/1202.3653> ——, “Information transmission using the nonlinear [F]{}ourier transform, [P]{}art [II]{}: [N]{}umerical methods,” *IEEE Trans. Inf. Theory*, vol. 60, no. 7, pp. 4329–4345, Jul. 2014, [A]{}lso published at arXiv, Apr. 2012. \[Online\]. Available: <http://arxiv.org/abs/1204.0830> ——, “Information transmission using the nonlinear [F]{}ourier transform, [P]{}art [III]{}: [S]{}pectrum modulation,” *IEEE Trans. Inf. Theory*, vol. 60, no. 7, pp. 4346–4369, Jul. 2014, [A]{}lso published at arXiv, Feb. 2013. \[Online\]. Available: <http://arxiv.org/abs/1302.2875> ——, “On the per-sample capacity of nondispersive optical fibers,” *IEEE Trans. Inf. Theory*, vol. 57, no. 11, pp. 7522–7541, Nov. 2011. M. I. Yousefi and X. Yangzhang, “Linear and nonlinear frequency-division multiplexing,” *[ar[X]{}iv:1603.04389]{.nodecor}*, pp. 1–14, Mar. 2016. \[Online\]. Available: <http://arxiv.org/abs/1603.04389> S. M. Moser, “Duality-based bounds on channel capacity,” Ph.D. dissertation, ETH Zurich, Switzerland, Jan. 2005. A. Mecozzi, “Limits to long-haul coherent transmission set by the [K]{}err nonlinearity and noise of the in-line amplifiers,” *IEEE J. Lightw. Technol.*, vol. 12, no. 11, pp. 1993–2000, Nov. 1994. M. I. Yousefi, G. Kramer, and F. R. 
Kschischang, “Upper bound on the capacity of the nonlinear [S]{}chrödinger channel,” in *IEEE 14th Canadian Workshop on Inf. Theory*, St. John’s, Newfoundland, Canada, Jul. 2015, pp. 1–5. P. Serena, “Nonlinear signal–noise interaction in optical links with nonlinear equalization,” *IEEE J. Lightw. Technol.*, vol. 34, no. 6, pp. 1476–1483, Mar. 2016. I. Tavakkolnia and M. Safari, “Signalling over nonlinear fibre-optic channels by utilizing both solitonic and radiative spectra,” in *European Conf. Networks and Commun.*, Paris, France, Jul. 2015, pp. 103–107. E. Agrell, “Conditions for a monotonic channel capacity,” *IEEE Trans. Commun.*, vol. 63, no. 3, pp. 1–11, Sep. 2015. A. Lapidoth and S. Moser, “Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels,” *IEEE Trans. Inf. Theory*, vol. 49, no. 10, pp. 2426–2467, Oct. 2003. K. S. Turitsyn, S. A. Derevyanko, I. V. Yurkevich, and S. K. Turitsyn, “Information capacity of optical fiber channels with zero average dispersion,” *Phys. Rev. Lett.*, vol. 91, no. 20, p. 203901, Nov. 2003. G. Kramer, M. I. Yousefi, and F. Kschischang, “Upper bound on the capacity of a cascade of nonlinear and noisy channels,” in *IEEE Info. Theory Workshop*, Jerusalem, Israel, Apr. 2015, pp. 1–4. H. Ghozlan and G. Kramer, “Models and information rates for multiuser optical fiber channels with nonlinearity and dispersion,” *[ar[X]{}iv:1503.03124]{.nodecor}*, pp. 1–18, Mar. 2015. \[Online\]. Available: <https://arxiv.org/abs/1503.03124> E. Meron, M. Feder, and M. Shtaif, “On the achievable communication rates of generalized soliton transmission systems,” *[ar[X]{}iv:1207.0297]{.nodecor}*, pp. 1–13, Jul. 2012. \[Online\]. Available: <https://arxiv.org/abs/1207.0297> N. A. Shevchenko *et al.*, “A lower bound on the per soliton capacity of the nonlinear optical fibre channel,” in *IEEE Info. Theory Workshop*, Jeju Island, South Korea, Oct. 2015, pp. 1–5. Q. Zhang and T. H. 
Chan, “Achievable rates of soliton communication systems,” in *IEEE Int. Symp. Info. Theory*, Barcelona, Spain, Jul. 2016, pp. 605–609. P. Kazakopoulos and A. L. Moustakas, “Transmission of information via the non-linear [S]{}chrödinger equation: [T]{}he random [G]{}aussian input case,” *[arXiv:1210.7940]{.nodecor}*, pp. 1–9, Oct. 2012. \[Online\]. Available: <https://arxiv.org/abs/1210.7940> P. Kazakopoulos and A. L. Moustakas, “On the soliton spectral efficiency in non-linear optical fibers,” in *IEEE Int. Symp. Info. Theory*, Barcelona, Spain, Jul. 2016, pp. 610–614. H. Buelow, V. Aref, and W. Idler, “Transmission of waveforms determined by 7 eigenvalues with [PSK]{}-modulated spectral amplitudes,” in *European Conf. Opt. Commun.*, Sep. 2016, pp. 1–3. A. Mecozzi and R.-J. Essiambre, “Nonlinear [S]{}hannon limit in pseudolinear coherent systems,” *IEEE J. Lightw. Technol.*, vol. 30, no. 12, pp. 2011–2024, Jun. 2012. M. Secondini, E. Forestieri, and G. Prati, “Achievable information rate in nonlinear [WDM]{} fiber-optic systems with arbitrary modulation formats and dispersion maps,” *IEEE J. Lightw. Technol.*, vol. 31, no. 23, pp. 1–14, Dec. 2013. R. Dar, M. Shtaif, and M. Feder, “New bounds on the capacity of the nonlinear fiber-optic channel,” *Opt. Lett.*, vol. 39, no. 2, pp. 398–401, 2014. I. S. Terekhov, A. V. Reznichenko, and S. K. Turitsyn, “Calculation of mutual information for nonlinear communication channel at large [SNR]{},” *Phys. Rev. E*, vol. 94, no. 4, p. 042203, Oct. 2016. M. Secondini and E. Forestieri, “The limits of the nonlinear [S]{}hannon limit,” in *Opt. Fiber Commun. Conf. and Exposition*, Anaheim, California, United States, Mar. 2016, pp. 1–3. S. A. Derevyanko, J. E. Prilepsky, and S. K. Turitsyn, “Capacity estimates for optical transmission based on the nonlinear [F]{}ourier transform,” *Nature Commun.*, vol. 7, no. 12710, pp. 1–9, Sep. 2016. 
[^1]: The author is with the Communications and Electronics Department, Télécom ParisTech, Paris, France. Email: `yousefi@telecom-paristech.fr`. [^2]: Derivatives do not exist with  phase random variables. However, with finite bandwidth, there is non-zero correlation time.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study algebraically infinitely many infinitary extensions of predicate intuitionistic logic. We prove several representation theorems that reflect a (weak) Robinson’s joint consistency theorem for the extensions studied, with and without equality. In essence a Henkin-Gabbay construction, our proof uses neat embedding theorems and is purely algebraic. Neat embedding theorems are an algebraic version of Henkin constructions that apply to various infinitary extensions of first-order predicate logics; to the best of our knowledge, they were implemented in the realm of intuitionistic logic only in the article ’Amalgamation of polyadic Heyting algebras’, Studia Math Hungarica, in press. [^1]' author: - Tarek Sayed Ahmed title: 'Representability, and amalgamation for various reducts of Heyting polyadic algebras' --- Introduction ============ Background and History ---------------------- It often happens that a theory designed originally as a tool for the study of a problem, say in computer science, comes subsequently to have purely mathematical interest. When such a phenomenon occurs, the theory is usually generalized beyond the point needed for applications, the generalizations make contact with other theories (frequently in completely unexpected directions), and the subject becomes established as a new part of pure mathematics. The part of pure mathematics so created does not (and need not) pretend to solve the problem from which it arises; it must stand or fall on its own merits. A crucial addition to the collection of mathematical catalysts, initiated at the beginning of the 20th century, is formal logic and its study using mathematical machinery, better known as metamathematical investigations, or simply metamathematics. Traced back to the works of Frege, Hilbert, Russell, Tarski, Gödel and others, one of the branches of pure mathematics that metamathematics has precipitated is algebraic logic. 
Algebraic logic is an interdisciplinary field; it is the art of tackling problems in formal logic using universal algebraic machinery. It is similar in this respect to several branches of mathematics, like algebraic geometry, where algebraic machinery is used guided by geometric intuition. In algebraic logic, the intuition underlying its constructions is inspired by (mathematical) logic. The idea of solving problems in various branches of logic by first translating them to algebra, then using the powerful methodology of algebra for solving them, and then translating the solution back to logic, goes back to Leibniz and Pascal. Such a methodology was already fruitfully applied back in the 19th century with the work of Boole, De Morgan, Peirce, Schröder, and others on classical logic. Taking logical equivalence rather than truth as the primitive logical predicate and exploiting the similarity between logical equivalence and equality, those pioneers developed logical systems in which metalogical investigations take on a plainly algebraic character. The ingenious transfer of “logical equivalence” to “equations” turned out to be immensely useful and fruitful. In particular, Boole’s work evolved into the modern theory of Boolean algebras, and that of De Morgan, Peirce and Schröder into the well-developed theory of relation algebras, which is now widely used in diverse areas, ranging from formalizations of set theory to applications in computer science. From the beginning of the contemporary era of logic, there were two approaches to the subject, one centered on the notion of logical equivalence and the other, reinforced by Hilbert’s work on metamathematics, centered on the notions of assertion and inference. It was not until much later that logicians started to think about connections between these two ways of looking at logic. 
Tarski gave the precise connection between Boolean algebra and the classical propositional calculus, inspired by the impressive work of Stone on Boolean algebras. Tarski’s approach builds on Lindenbaum’s idea of viewing the set of formulas as an algebra with operations induced by the logical connectives. When the Lindenbaum-Tarski method is applied to the predicate calculus, it lends itself to cylindric and polyadic algebras rather than relation algebras. In the traditional mid-20th century approach, algebraic logic has focused on the algebraic investigation of particular classes of algebras like cylindric, polyadic and relation algebras. When such a connection could be established, there was interest in investigating the interconnections between various metalogical properties of the logical system in question and the algebraic properties of the corresponding class of algebras (obtaining what are sometimes called “bridge theorems”). This branch has now evolved into the relatively new field of universal algebraic logic, in analogy to the well established field of universal algebra. For example, it was discovered that there is a natural relation between the interpolation theorems of classical, intuitionistic, and intermediate propositional calculi, and the amalgamation properties of varieties of Heyting algebras, which constitute the main focus of this paper. The variety of Heyting algebras is the algebraic counterpart of propositional intuitionistic logic. We shall deal with Heyting algebras with extra (polyadic) operations reflecting quantifiers. Those algebras are appropriate to study (extensions of) predicate intuitionistic logic. Proving various interpolation theorems for such extensions, we thereby extend known amalgamation results for Heyting algebras to polyadic expansions. A historic comment on the development of intuitionistic logic is in order.
It was Brouwer who first initiated the programme of intuitionism, and intuitionistic logic is its rigorous formalization, developed originally by Arend Heyting. Brouwer rejected formalism per se but admitted the potential usefulness of formulating general logical principles expressing intuitionistically correct constructions, such as modus ponens. Heyting realized the importance of formalization, which was fashionable at his time given the rapid development of mathematics. Intuitionistic logic turned out to be useful for different forms of mathematical constructivism, since it has the existence property. Philosophically, intuitionism differs from logicism by treating logic as an independent branch of mathematics, rather than as the foundations of mathematics; from finitism by permitting intuitionistic reasoning about possibly infinite collections; and from platonism by viewing mathematical objects as mental constructs rather than entities with an independent objective existence. There are also analogies between formalism and intuitionism; in fact Hilbert’s formalist program, aiming to base the whole of classical mathematics on solid foundations by reducing it to a huge formal system whose consistency should be established by finitistic, concrete (hence constructive) means, was the most powerful contemporary rival to Brouwer’s and Heyting’s intuitionism. Subject Matter -------------- Connections between interpolation theorems in the predicate calculus and amalgamation results in varieties of cylindric and polyadic algebras were initiated mainly by Comer, Pigozzi, Daigneault and Jónsson.
As it happened, during the course of the development of algebraic logic, dating back to the work of Boole, up to its comeback in the contemporary era through the pioneering work of Halmos, Tarski, Henkin, Monk, Andréka, and Németi, it is now established that the two most famous and widely used algebraisations of first order logic are Tarski’s cylindric algebras [@HMT1], [@HMT2], and Halmos’ polyadic algebras [@Halmos]. Each has its advantages and disadvantages. For example, the class of representable cylindric algebras, though a variety, is not finitely axiomatizable, and this class exhibits an inevitable degree of complexity in any of its axiomatizations [@Andreka]. However, its equational theory is recursive. On the other hand, the variety of (representable) polyadic algebras is axiomatized by a finite schema of equations, but its equational theory is not recursively enumerable [@NS]. There have been investigations to find a class of algebras that enjoys the positive properties of both. The key idea behind such investigations is to look at (the continuum many) reducts of polyadic algebras [@AUamal], [@S], searching for the desirable finitely axiomatizable variety among them. Indeed, it is folklore in algebraic logic that cylindric algebras and polyadic algebras belong to different paradigms, frequently manifesting contradictory behaviour. The paper [@S] is a unification of the positive properties of those two paradigms in the Boolean case, and one of the results of this paper can be interpreted as a unification of those paradigms when the propositional reducts are Heyting algebras. A polyadic algebra is typically an instance of a transformation system.
A transformation system can be defined to be a quadruple of the form $(\A, I, G, {\sf S})$ where $\A$ is an algebra of any similarity type, $I$ is a non-empty set (we will only be concerned with infinite sets), $G$ is a subsemigroup of $(^II,\circ)$ (the operation $\circ$ denotes composition of maps) and ${\sf S}$ is a homomorphism from $G$ to the semigroup of endomorphisms of $\A$ $(End(\A))$. Elements of $G$ are called transformations. The set $I$ is called the dimension of the algebra; for a transformation $\tau$ on $I$, ${\sf S}({\tau})\in End(\A)$ is called a substitution operator, or simply a substitution. Polyadic algebras arise when $\A$ is a Boolean algebra endowed with quantifiers and $G={}^II$. There is an extensive literature on polyadic algebras dating back to the fifties and sixties of the last century, [@Halmos], [@J70], [@D], [@DM], [@AUamal], [@S]. Introduced by Halmos, the theory of polyadic algebras is now picking up again; indeed it’s regaining momentum with pleasing progress and a plethora of results, see the references [@MLQ], [@Fer1], [@Fer2], [@Fer3], [@Fer4], [@ANS], [@trans], to name just a few. In recent times, reducts of polyadic algebras of dimension $I$ were studied [@S], [@AUamal]; these reducts are obtained by restricting quantifiers to involve only quantification on finitely many variables and by studying (proper) subsemigroups of $^II$. The two extremes are the semigroup of finite transformations (a finite transformation is one that moves only finitely many points) and all of $^II$, but there are infinitely many semigroups in between. In this paper, we study reducts of polyadic algebras by allowing (proper) subsemigroups of $^II$, but we also weaken the Boolean structure to be a Heyting algebra. Thus we approach the realm of intuitionistic logic.
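Since $G$ and ${\sf S}$ are abstract here, a small brute-force check may help fix intuition. The following sketch is ours, not from the paper (and the paper takes $I$ infinite, while the toy dimension here is $3$): it realizes a transformation system concretely, with $\A$ the Boolean set algebra of subsets of ${}^IU$, substitutions ${\sf s}_{\tau}X=\{y: y\circ \tau\in X\}$, and cylindrifications in the role of quantifiers. The check confirms exhaustively that $\tau\mapsto {\sf s}_{\tau}$ is a homomorphism from $({}^II,\circ)$ into $End(\A)$.

```python
from itertools import product

n = 3                                               # toy dimension I = {0,1,2}
U = (0, 1)                                          # two-element base set
SEQS = [tuple(s) for s in product(U, repeat=n)]     # the set ^I U of sequences
G = [tuple(t) for t in product(range(n), repeat=n)] # all of ^I I (27 maps)

def compose(f, g):
    # the paper's convention (f o g)(x) = f(g(x)): g acts first
    return tuple(f[g[i]] for i in range(n))

def subst(tau, X):
    # set-theoretic substitution operator: s_tau X = { y : y o tau in X }
    return frozenset(y for y in SEQS
                     if tuple(y[tau[i]] for i in range(n)) in X)

def cyl(J, X):
    # existential cylindrification c_(J): forget the coordinates in J
    return frozenset(y for y in SEQS
                     if any(all(y[i] == x[i] for i in range(n) if i not in J)
                            for x in X))

some_sets = [frozenset(S) for S in (SEQS[:3], SEQS[2:6], SEQS)]

# s : tau -> s_tau is a homomorphism from (G, o) into End(A) ...
for sigma in G:
    for tau in G:
        for X in some_sets:
            assert subst(compose(sigma, tau), X) == subst(sigma, subst(tau, X))

# ... s_Id is the identity, and c_(J u J') = c_(J) c_(J'):
Id = tuple(range(n))
for X in some_sets:
    assert subst(Id, X) == X
    assert cyl({0, 1}, X) == cyl({0}, cyl({1}, X))
print("transformation-system identities verified")
```

All names here (`subst`, `cyl`, `compose`) are our own; the verification is exhaustive over all $27$ transformations of the $3$-element index set.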
We shall study the cases when $G$ consists of all finite transformations, when $G$ is a proper subsemigroup satisfying certain properties but essentially containing infinitary transformations, that is, transformations that move infinitely many points (this involves infinitely many cases), and when $G$ is the semigroup of all transformations. Our investigations will address the representation of such algebras in a concrete sense, where the operations are interpreted as set-theoretic operations on sets of sequences, and will also address the amalgamation property and variants thereof for the classes in question. In all the cases we study, the scope of quantifiers is finite, so in this respect our algebras also resemble cylindric algebras. The interaction between the theories of Boolean cylindric algebras and Boolean polyadic algebras is extensively studied in algebraic logic, see e.g. [@ANS], with differences and similarities illuminating both theories. In fact, the study of $G$ Boolean polyadic algebras ($G$ a semigroup) by Sain in her pioneering paper [@S], and its follow up [@AUamal], is an outcome, or rather a culmination, of such research; it’s a typical situation in which the positive properties of both theories amalgamate. Boolean polyadic algebras, when $G$ is the set of finite transformations of $I$ or $G={}^II$, are old [@Halmos], [@D], [@DM]. In the former case such algebras are known as quasipolyadic algebras, and those are substantially different from full polyadic algebras (in the infinite dimensional case); as is commonly accepted, quasipolyadic algebras belong to the cylindric paradigm; they share a lot of properties with cylindric algebras. While the substitution operators in full Heyting polyadic algebras are uncountable, even if both the algebra and its dimension are countable, the substitution operators for quasipolyadic equality algebras of countable dimension are countable.
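The countability contrast in the last sentence can be made explicit by a short cardinality computation (ours, added for illustration; $F$ below ranges over the finite subsets of $\omega$):

```latex
% substitutions of the full polyadic algebra of countable dimension:
\left|{}^{\omega}\omega\right| \;=\; \aleph_0^{\aleph_0} \;=\; 2^{\aleph_0}
  \qquad\text{(uncountable),}
% while a finite transformation is determined by its restriction to its
% (finite) support, so there are only countably many of them:
\left|\{\tau\in{}^{\omega}\omega:\ \tau\ \text{moves only finitely many points}\}\right|
  \;=\; \Bigl|\,\bigcup_{F}\,{}^{F}\omega\,\Bigr|
  \;\leq\; \aleph_0\cdot\aleph_0 \;=\; \aleph_0 .
```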
Unlike full polyadic algebras, quasipolyadic algebras can be formulated as what is known in the literature as a system of varieties definable by schemes, making them akin to universal algebraic investigations in the spirit of cylindric algebras. Though polyadic algebras can be viewed as a system of varieties, this system cannot be definable by schemes due to the presence of infinitary substitutions. Studying reducts of polyadic algebras by allowing only those substitutions coming from an arbitrary subsemigroup of $^II$ is relatively recent, starting at the turn of the last century [@S]. Such algebras (of which we study the Heyting reducts) also provide a possible solution to a central problem in algebraic logic, better known as the finitizability problem, which asks for a simple (hopefully finite) axiomatization for several classes of representable algebras that abound in algebraic logic. [^2] The finitizability problem is not easy, and has been discussed at length in the literature [@Bulletin]. Being rather a family of problems, the finitizability problem has several scattered reincarnations in the literature, and in some sense is still open. The finitizability problem also has philosophical implications, repercussions and connotations, concerning reasoning about reasoning, and can, in so many respects, be likened to Hilbert’s programme of proving mathematics consistent by concrete finitistic methods. In fact, our results show that any $G$ satisfying some conditions that are not particularly complicated provides us with an algebraisable extension of predicate first order intuitionistic logic, whose algebraic counterpart is a variety that is finitely axiomatizable. An algebraisable extension is an extension of ordinary predicate intuitionistic logic (allowing formulas of infinite length), whose algebraic counterpart, consisting of subdirect products of set algebras based on (Kripke) models, is a finitely based variety (equational class).
This gives a clean cut solution to the analogue of the finitizability problem for ordinary predicate intuitionistic logic. Formal systems for intuitionistic propositional and predicate logic and arithmetic were developed by Heyting [@H], [@Hy], Gentzen [@G] and Kleene [@K]. Gödel [@Godel] proved the equiconsistency of intuitionistic and classical theories. Kripke [@Kripke] provided a semantics with respect to which intuitionistic logic is sound and complete. We shall use a modified version of Kripke semantics below to prove our representability results. The algebraic counterparts of predicate intuitionistic logic, namely, Heyting polyadic algebras, were studied by Monk [@Monk], Georgescu [@G] and the present author [@Hung]. Algebraically, we shall prove that certain reducts of full polyadic Heyting algebras (studied in [@Hung]) consist solely of representable algebras (in a concrete sense) and have the superamalgamation property (a strong form of amalgamation). Such results are essentially proved in part 1, with the superamalgamation property deferred to part 3. We also present some negative results for other infinitary intuitionistic logics, based on non finite axiomatizability results proved in part 2, using bridge theorems. Indeed, in part 3, among other things, we show that the minimal algebraisable extension of predicate intuitionistic logic, in a sense to be made precise, is essentially incomplete, and fails to have the interpolation property. Roughly, minimal extension here means the (algebraisable) logic corresponding to the variety generated by the class of algebras arising from ordinary intuitionistic predicate logic. Such algebras are locally finite, reflecting the fact that formulas contain only finitely many variables. This correspondence is taken in the sense of Blok and Pigozzi, associating quasivarieties to algebraisable logics.
Algebraising here essentially means that we drop the condition of local finiteness (hence allowing formulas of infinite length); this property is not warranted from the algebraic point of view because it cannot be expressed by first order formulas, let alone equations or quasiequations. In fact, we show that all positive results in this paper extend to the classical case, reproving deep results in [@S], [@AUamal], and that many negative results that dominate the cylindric paradigm extend, in some exact sense, to certain infinitary extensions of predicate intuitionistic logic that arise naturally from the process of algebraising intuitionistic predicate logic (with and without equality). Such results are presented in the context of clarifying one facet of the finitizability problem for predicate intuitionistic logic, namely that of drawing a line between positive and negative results. The techniques used in this paper intersect those adopted in our recent paper on Heyting polyadic algebras; they use this part of algebraic logic developed essentially by Henkin, Monk, Tarski and Halmos, together with deep techniques of Gabbay, but there are major differences. We mention two. Whereas the results in [@Hung] address full Heyting polyadic algebras, where infinitary cylindrifications and infinitary substitutions are available, this paper, among many other things, shows that the proof survives when we restrict our attention to finitely generated semigroups still containing infinitary substitutions, and finite cylindrifiers. The algebras in [@Hung] have an axiomatization that is highly complex from the recursion theoretic point of view. The reducts studied here have recursive axiomatizations. We allow diagonal elements in our algebras (these elements reflect equality), so in fact, we are in the realm of infinitary extensions of intuitionistic predicate logic [*with*]{} equality.
The interaction between algebraic logic and intuitionistic logic was developed in the monumental work of the Polish logicians Rasiowa and Sikorski, and the Russian logician Maksimova, but apart from that work, to the best of our knowledge, the surface of predicate intuitionistic logic was barely scratched by algebraic machinery. While Maksimova’s work [@b] is more focused on propositional intuitionistic logic, Rasiowa and Sikorski did deal with expansions of Heyting algebras, to reflect quantifiers, but not with polyadic algebras per se. Besides, Rasiowa and Sikorski dealt only with classical predicate intuitionistic logic. In this paper, we continue the trend initiated in [@Hung], by studying strict reducts of full fledged infinitary logics, which are still infinitary, together with their expansions by the equality symbol, proving completeness theorems and interpolation properties, and we also delineate the borderline where such theorems cease to hold. Organization {#organization .unnumbered} ------------ In the following section we prepare for our algebraic proof, by formulating and proving the necessary algebraic preliminaries (be they concepts or theorems) addressing various reducts of Heyting polyadic algebras, possibly endowed with diagonal elements. Our algebraic proofs of the interpolation property for infinitary extensions of predicate intuitionistic logic (with and without equality) are given in section 3, which is the heart and soul of this part of the paper. This is accomplished using the well developed methodology of algebraic logic, particularly so-called neat embedding theorems, which are algebraic generalizations of Henkin constructions. On the notation {#on-the-notation .unnumbered} --------------- Throughout the paper, our notation is fairly standard, or self explanatory. However, we usually distinguish notationally between algebras and their domains, though there will be occasions when we do not distinguish between the two.
Algebras will be denoted by Gothic letters, and when we write $\A$ for an algebra, then it is to be tacitly assumed that the corresponding Roman letter $A$ denotes its domain. Unfamiliar notation will be introduced at its first occurrence in the text. We extensively use the axiom of choice (any set can be well ordered, so that in many places we deal with ordinals or order types, that is, we impose well orders on arbitrary sets). For a set $X$, $Id_X$, or simply $Id$ when $X$ is clear from context, denotes the identity function on $X$. The set of all functions from $X$ to $Y$ is denoted by $^XY$. If $f\in {}^XY$, then we write $f:X\to Y$. The domain $X$ of $f$ will be denoted by $Dof$, and the range of $f$ will be denoted by $Rgf$. Composition of functions $f\circ g$ is defined so that the function at the right acts first, that is $(f\circ g)(x)=f(g(x))$, for $x\in Dog$ such that $g(x)\in Dof$. Algebraic Preliminaries ======================= In this section, we define our algebras, and state and prove certain algebraic notions and properties that we shall need in our main (algebraic) proof implemented in the following section. Other results, formulated in lemmata \[dl\] and \[cylindrify\] in this section, are non-trivial modifications of existing theorems for both cylindric algebras and polyadic algebras; we give detailed proofs of such results, skipping those parts that can be found in the literature, referring instead to the necessary references. These lemmata address a very important and key concept in both cylindric and polyadic theories, namely, that of forming dilations and neat reducts (which are, in fact, dual operations). The algebras ------------ For an algebra $\A$, $End(\A)$ denotes the set of endomorphisms of $\A$ (homomorphisms of $\A$ into itself), which is a semigroup under the operation $\circ$ of composition of maps.
A transformation system is a quadruple $(\A, I, G, {\sf S})$ where $\A$ is an algebra, $I$ is a set, $G$ is a subsemigroup of $(^II,\circ)$ and ${\sf S}$ is a homomorphism from $G$ into $End(\A).$ Throughout the paper, $\A$ will always be a Heyting algebra. If we want to study predicate intuitionistic logic, then we are naturally led to expansions of Heyting algebras allowing quantification. But we do not have negation in the classical sense, so we have to deal with existential and universal quantifiers each separately. Let $\A=(A, \lor, \land,\rightarrow,0)$ be a Heyting algebra. An existential quantifier $\exists$ on $A$ is a mapping $\exists:\A\to \A$ such that the following hold for all $p,q\in A$: $\exists(0)=0,$ $p\leq \exists p,$ $\exists(p\land \exists q)=\exists p\land \exists q,$ $\exists(\exists p\rightarrow \exists q)=\exists p\rightarrow \exists q,$ $\exists(\exists p\lor \exists q)=\exists p\lor \exists q,$ $\exists\exists p=\exists p.$ Let $\A=(A, \lor, \land,\rightarrow,0)$ be a Heyting algebra. A universal quantifier $\forall$ on $A$ is a mapping $\forall:\A\to \A$ such that the following hold for all $p,q\in A$: $\forall 1=1,$ $\forall p\leq p,$ $\forall(p\rightarrow q)\leq \forall p\rightarrow \forall q,$ $\forall \forall p=\forall p.$ Now we define our algebras. Their similarity type depends on a semigroup fixed in advance. We write $X\subseteq_{\omega} Y$ to denote that $X$ is a finite subset of $Y$. Let $\alpha$ be an infinite set. Let $G\subseteq {}^{\alpha}\alpha$ be a semigroup under the operation of composition of maps.
An $\alpha$ dimensional polyadic Heyting $G$ algebra, a $GPHA_{\alpha}$ for short, is an algebra of the following form $$(A,\lor,\land,\rightarrow, 0, {\sf s}_{\tau}, {\sf c}_{(J)}, {\sf q}_{(J)})_{\tau\in G, J\subseteq_{\omega} \alpha}$$ where $(A,\lor,\land, \rightarrow, 0)$ is a Heyting algebra, ${\sf s}_{\tau}:\A\to \A$ is an endomorphism of Heyting algebras, ${\sf c}_{(J)}$ is an existential quantifier, ${\sf q}_{(J)}$ is a universal quantifier, such that the following hold for all $p\in A$, $\sigma, \tau\in G$ and $J,J'\subseteq_{\omega} \alpha:$ ${\sf s}_{Id}p=p.$ ${\sf s}_{\sigma\circ \tau}p={\sf s}_{\sigma}{\sf s}_{\tau}p$ (so that ${\sf s}:\tau\mapsto {\sf s}_{\tau}$ defines a homomorphism from $G$ to $End(\A)$; that is $(A, \lor, \land, \to, 0, G, {\sf s})$ is a transformation system). ${\sf c}_{(J\cup J')}p={\sf c}_{(J)}{\sf c}_{(J')}p , \ \ {\sf q}_{(J\cup J')}p={\sf q}_{(J)}{\sf q}_{(J')}p.$ ${\sf c}_{(J)}{\sf q}_{(J)}p={\sf q}_{(J)}p , \ \ {\sf q}_{(J)}{\sf c}_{(J)}p={\sf c}_{(J)}p.$ If $\sigma\upharpoonright \alpha\sim J=\tau\upharpoonright \alpha\sim J$, then ${\sf s}_{\sigma}{\sf c}_{(J)}p={\sf s}_{\tau}{\sf c}_{(J)}p$ and ${\sf s}_{\sigma}{\sf q}_{(J)}p={\sf s}_{\tau}{\sf q}_{(J)}p.$ If $\sigma\upharpoonright \sigma^{-1}(J)$ is injective, then ${\sf c}_{(J)}{\sf s}_{\sigma}p={\sf s}_{\sigma}{\sf c}_{\sigma^{-1}(J)}p$ and ${\sf q}_{(J)}{\sf s}_{\sigma}p={\sf s}_{\sigma}{\sf q}_{\sigma^{-1}(J)}p.$ Let $\alpha$ and $G$ be as in the previous definition.
By a $G$ polyadic equality algebra, a $GPHAE_{\alpha}$ for short, we understand an algebra of the form $$(A,\lor,\land,\rightarrow, 0, {\sf s}_{\tau}, {\sf c}_{(J)}, {\sf q}_{(J)}, {\sf d}_{ij})_{\tau\in G, J\subseteq_{\omega} \alpha, i,j\in \alpha}$$ where $(A,\lor,\land,\rightarrow, 0, {\sf s}_{\tau}, {\sf c}_{(J)}, {\sf q}_{(J)})_{\tau\in G\subseteq {}^{\alpha}\alpha, J\subseteq_{\omega} \alpha}$ is a $GPHA_{\alpha}$ and ${\sf d}_{ij}\in A$ for each $i,j\in \alpha,$ such that the following identities hold for all $k,l\in \alpha$ and all $\tau\in G:$ ${\sf d}_{kk}=1,$ ${\sf s}_{\tau}{\sf d}_{kl}={\sf d}_{\tau(k), \tau(l)},$ and $x\cdot {\sf d}_{kl}\leq {\sf s}_{[k|l]}x.$ Here $[k|l]$ is the replacement that sends $k$ to $l$ and otherwise is the identity. In our definition of algebras, we depart from [@HMT2] by defining polyadic algebras on sets rather than on ordinals. In this manner, we follow the tradition of Halmos. We refer to $\alpha$ as the dimension of $\A$ and we write $\alpha=dim\A$. Borrowing terminology from cylindric algebras, we refer to ${\sf c}_{(\{i\})}$ by ${\sf c}_i$ and to ${\sf q}_{(\{i\})}$ by ${\sf q}_i.$ However, we will have occasion to impose a well order on dimensions, thereby dealing with ordinals. When $G$ consists of all finite transformations, any algebra with a Boolean reduct satisfying the above identities relating cylindrifications, diagonal elements and substitutions will be a quasipolyadic equality algebra of infinite dimension. Besides dealing with the two extremes, when $G$ consists only of finite transformations (supplied with an additional condition) and when $G$ is $^{\alpha}\alpha$, we also consider cases when $G$ is a possibly proper subsemigroup of $^{\alpha}\alpha$ (under the operation of composition). We need some preparations to define such semigroups. [Notation.]{} For a set $X$, $|X|$ stands for the cardinality of $X$.
For functions $f$ and $g$ and a set $H$, $f[H|g]$ is the function that agrees with $g$ on $H$, and is otherwise equal to $f$. Recall that $Rgf$ denotes the range of $f$. For a transformation $\tau$ on $\alpha$, the support of $\tau$, or $supp(\tau)$ for short, is the set: $$supp(\tau)=\{i\in \alpha: \tau(i)\neq i\}.$$ Let $i,j\in \alpha$; then $\tau[i|j]$ is the transformation on $\alpha$ defined as follows: $$\tau[i|j](x)=\tau(x)\text { if } x\neq i \text { and }\tau[i|j](i)=j.$$ Recall that the map $[i|j]$ is the transformation that sends $i$ to $j$ and is equal to the identity elsewhere. On the other hand, the map denoted by $[i,j]$ is the transposition that interchanges $i$ and $j$. For a function $f$, $f^n$ denotes the composition $f\circ f\ldots \circ f$ $n$ times. We extend the known definition of (strongly) rich semigroups [@S], [@AUamal], allowing possibly uncountable sets and semigroups. This will be needed when $G={}^\alpha\alpha$, cf. lemma \[cylindrify\]. However, throughout, when we mention rich semigroups, we will be tacitly assuming that both the dimension of the algebra involved and the semigroup are countable, [*unless*]{} otherwise explicitly mentioned. \[rich\] Let $\alpha$ be any set. Let $T\subseteq \langle {}^{\alpha}\alpha, \circ \rangle$ be a semigroup. We say that $T$ is [*rich* ]{} if $T$ satisfies the following conditions: 1. $(\forall i,j\in \alpha)(\forall \tau\in T) \tau[i|j]\in T.$ 2. There exist $\sigma,\pi\in T$ such that $\pi\circ \sigma=Id$ and $Rg\sigma\neq \alpha$, satisfying $$(\forall \tau\in T)(\sigma\circ \tau\circ \pi)[(\alpha\sim Rg\sigma)|Id]\in T.$$ \[stronglyrich\] Let $T\subseteq \langle {}^{\alpha}\alpha, \circ\rangle$ be a rich semigroup. Let $\sigma$ and $\pi$ be as in the previous definition. If $\sigma$ and $\pi$ satisfy: 1. $(\forall n\in \omega) |supp(\sigma^n\circ \pi^n)|<\alpha, $ 2.
$(\forall n\in \omega)[supp(\sigma^n\circ \pi^n)\subseteq \alpha\smallsetminus Rng(\sigma^n)];$ then we say that $T$ is [*a strongly rich*]{} semigroup. Examples of rich semigroups on $\omega$ are $(^{\omega}\omega, \circ)$ and the semigroup generated by $\{[i|j], [i,j]: i, j\in \omega\}\cup \{suc, pred\}$. Here $suc$ abbreviates the successor function on $\omega$ and $pred$ acts as its left inverse, the predecessor function, defined by $pred(0)=0$ and for other $n\in \omega$, $pred(n)=n-1$. In fact, both semigroups are strongly rich; in the second case $suc$ plays the role of $\sigma$ while $pred$ plays the role of $\pi$. Rich semigroups were introduced in [@S] (to prove a representability result) and those that are strongly rich were introduced in [@AUamal] (to prove an amalgamation result). Next, we collect some properties of $G$ algebras that are handy to use in our subsequent work. In what follows, we will be writing $GPHA$ ($GPHAE$) for all algebras considered. \[axioms\] Let $\alpha$ be an infinite set and $\A\in GPHA_{\alpha}$. Then $\A$ satisfies the following identities for $\tau,\sigma\in G$ and all $i,j,k\in \alpha$. 1. $x\leq {\sf c}_ix={\sf c}_i{\sf c}_ix,\ {\sf c}_i(x\lor y)={\sf c}_ix\lor {\sf c}_iy,\ {\sf c}_i{\sf c}_jx={\sf c}_j{\sf c}_ix$. That is, ${\sf c}_i$ is an additive operator (a modality) and ${\sf c}_i,{\sf c}_j$ commute. 2. ${\sf s}_{\tau}$ is a Heyting algebra endomorphism. 3. ${\sf s}_{\tau}{\sf s}_{\sigma}x={\sf s}_{\tau\circ \sigma}x$ and ${\sf s}_{Id}x=x$. 4. ${\sf s}_{\tau}{\sf c}_ix={\sf s}_{\tau[i|j]}{\sf c}_ix$. Recall that $\tau[i|j]$ is the transformation that agrees with $\tau$ on $\alpha\smallsetminus\{i\}$ and $\tau[i|j](i)=j$. 5. ${\sf s}_{\tau}{\sf c}_ix={\sf c}_j{\sf s}_{\tau}x$ if $\tau^{-1}(j)=\{i\}$, ${\sf s}_{\tau}{\sf q}_ix={\sf q}_j{\sf s}_{\tau}x$ if $\tau^{-1}(j)=\{i\}$. 6. ${\sf c}_i{\sf s}_{[i|j]}x={\sf s}_{[i|j]}x$,  ${\sf q}_i{\sf s}_{[i|j]}x={\sf s}_{[i|j]}x$ 7.
${\sf s}_{[i|j]}{\sf c}_ix={\sf c}_ix$,   ${\sf s}_{[i|j]}{\sf q}_ix={\sf q}_ix$. 8. ${\sf s}_{[i|j]}{\sf c}_kx={\sf c}_k{\sf s}_{[i|j]}x$,  ${\sf s}_{[i|j]}{\sf q}_kx={\sf q}_k{\sf s}_{[i|j]}x$ whenever $k\notin \{i,j\}$. 9. ${\sf c}_i{\sf s}_{[j|i]}x={\sf c}_j{\sf s}_{[i|j]}x$,   ${\sf q}_i{\sf s}_{[j|i]}x={\sf q}_j{\sf s}_{[i|j]}x$. [Proof]{} The proof is tedious but fairly straightforward. Obviously the previous equations hold in $GPHAE_{\alpha}$. Following cylindric algebra tradition and terminology, we will often be writing ${\sf s}_j^i$ for ${\sf s}_{[i|j]}$. For $GPHA_{\alpha}$, when $G$ is rich or $G$ consists only of finite transformations, it is enough to restrict our attention to replacements. Other substitutions are definable from those. Neat reducts and dilations -------------------------- Now we recall the important notion of neat reducts, a central concept in cylindric algebra theory, strongly related to representation theorems. This concept also occurs in polyadic algebras, but unfortunately under a different name, that of compressions. Forming dilations of an algebra is basically an algebraic reflection of a Henkin construction; in fact, the dilation of an algebra is another algebra that has an infinite number of new dimensions (constants) that potentially eliminate cylindrifications (quantifiers). Forming neat reducts has to do with restricting or compressing dimensions (number of variables) rather than increasing them. (Here the duality has a precise categorical sense, which will be formulated in part 3 of this paper as an adjoint situation.) Let $ \alpha\subseteq \beta$ be infinite sets. Let $G_{\beta}$ be a semigroup of transformations on $\beta$, and let $G_{\alpha}$ be a semigroup of transformations on $\alpha$ such that for all $\tau\in G_{\alpha}$, one has $\bar{\tau}=\tau\cup Id_{\beta\sim \alpha}\in G_{\beta}$. Let $\B=(B, \lor, \land, \to, 0, {\sf c}_i, {\sf s}_{\tau})_{i\in \beta, \tau\in G_{\beta}}$ be a $G_{\beta}$ algebra.
We denote by $\Rd_{\alpha}\B$ the $G_{\alpha}$ algebra obtained by discarding the operations indexed by $\beta\sim \alpha$. That is, $\Rd_{\alpha}\B=(B, \lor, \land, \to, 0, {\sf c}_i, {\sf s}_{\bar{\tau}})_{i\in \alpha, \tau\in G_{\alpha}}$. Here ${\sf s}_{\bar{\tau}}$ is evaluated in $\B$. For $x\in B$, $\Delta x,$ the dimension set of $x$, is defined by $\Delta x=\{i\in \beta: {\sf c}_ix\neq x\}.$ Let $A=\{x\in B: \Delta x\subseteq \alpha\}$. If $A$ is a subuniverse of $\Rd_{\alpha}\B$, then $\A$ (the algebra with universe $A$) is a subreduct of $\B$; it is called the [*neat $\alpha$ reduct*]{} of $\B$ and is denoted by $\Nr_{\alpha}\B$. If $\A\subseteq \Nr_{\alpha}\B$, then $\B$ is called a [*dilation*]{} of $\A$, and we say that $\A$ [*neatly embeds*]{} in $\B$. If $A$ generates $\B$ (using all operations of $\B$), then $\B$ is called a [*minimal dilation*]{} of $\A$. The above definition applies equally well to $GPHAE_{\alpha}$. In certain contexts minimal dilations may not be unique (up to isomorphism), but what we show next is that in all the cases we study they are unique, so for a given algebra $\A$ we may safely say [*the*]{} minimal dilation of $\A$. For an algebra $\A$ and $X\subseteq \A$, $\Sg^{\A}X$, or simply $\Sg X$ when $\A$ is clear from context, denotes the subalgebra of $\A$ generated by $X.$ The next theorems apply equally well to $GPHAE_{\alpha}$, with easy modifications which we state as we go along. \[dl\] Let $\alpha\subseteq \beta$ be countably infinite sets. If $G$ is a strongly rich semigroup on $\alpha$ and $\A\in GPHA_{\alpha}$, then there exists a strongly rich semigroup $T$ on $\beta$ and $\B\in TPHA_{\beta},$ such that $\A\subseteq \Nr_{\alpha}\B$ and for all $X\subseteq A,$ one has $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X$. Let $G_{I}$ be the semigroup of finite transformations on $I$. Let $\A\in G_{\alpha}PHA_{\alpha}$ be such that $\alpha\sim \Delta x$ is infinite for every $x\in A$.
Then for any set $\beta$ such that $\alpha\subseteq \beta$, there exists $\B\in G_{\beta}PHA_{\beta},$ such that $\A\subseteq \Nr_{\alpha}\B$ and for all $X\subseteq A$, one has $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X.$ Let $G_I$ be the semigroup of all transformations on $I$. Let $\A\in G_{\alpha}PHA_{\alpha}$. Then for any set $\beta$ such that $\alpha\subseteq \beta$, there exists $\B\in G_{\beta}PHA_{\beta},$ such that $\A\subseteq \Nr_{\alpha}\B$ and for all $X\subseteq A,$ one has $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X.$ [Proof]{} cf. [@AUamal]. We assume that $\alpha$ is an ordinal; in fact, without loss of generality we can assume that it is the least infinite ordinal $\omega.$ We also assume a particular strongly rich semigroup, namely that generated by the finite transformations together with $suc$ and $pred$. The general case is the same, cf. [@AUamal], Remark 2.8, p. 327. We follow [@AUamal], pp. 323-336. For $n\leq \omega$, let $\alpha_n=\omega+n$ and $M_n=\alpha_n\sim \omega$. Note that when $n\in \omega$, then $M_n=\{\omega,\ldots,\omega+n-1\}$. Let $\tau\in G$. Then $\tau_n=\tau\cup Id_{M_n}$. $T_n$ denotes the subsemigroup of $\langle {}^{\alpha_n}\alpha_n,\circ \rangle$ generated by $\{\tau_n:\tau\in G\} \cup \bigcup_{i,j\in \alpha_n}\{[i|j],[i,j]\}$. For $n\in \omega$, we let $\rho_n:\alpha_n\to \omega$ be the bijection defined by $\rho_n\upharpoonright \omega=suc^n$ and $\rho_n(\omega+i)=i$ for all $i<n$. Let $n\in \omega$. For $v\in T_n,$ let $v'=\rho_n\circ v\circ \rho_n^{-1}$. Then $v'\in G$. For $\tau\in T_{\omega}$, let $D_{\tau}=\{m\in M_{\omega}:\tau^{-1}(m)=\{m\}=\{\tau(m)\}\}$. Then $|M_{\omega}\sim D_{\tau}|<\omega.$ Let $\A$ be a given countable $G$ algebra.
Let $\A_n$ be the algebra defined as follows: $\A_n=\langle A,\lor, \land, \to, 0, {\sf c}_i^{\A_n},{\sf s}_v^{\A_n}\rangle_{i\in \alpha_n,v\in T_n}$ where for each $i\in \alpha_n$ and $v\in T_n$, ${\sf c}_i^{\A_n}:= {\sf c}_{\rho_n(i)}^{\A} \text { and }{\sf s}_v^{\A_n}:= {\sf s}_{v'}^{\A}.$ Let $\Rd_{\omega}\A_n$ be the following reduct of $\A_n$ obtained by restricting the type of $\A_n$ to the first $\omega$ dimensions: $\Rd_{\omega}\A_n=\langle A,\lor,\land, \to,0, {\sf c}_i^{\A_n}, {\sf s}_{\tau_n}^{\A_n}\rangle_{i\in \omega,\tau\in G}.$ For $x\in A$, let $e_n(x)={\sf s}_{suc^n}^{\A}(x)$. Then $e_n:A\to A_n$ and $e_n$ is an isomorphism from $\A$ into $\Rd_{\omega}\A_n$ such that $e_n(\Sg^{\A}Y)=\Nr_{\omega}(\Sg^{\A_n}e_n(Y))$ for all $Y\subseteq A$, cf. [@AUamal] claim 2.7. While $\sigma$ and condition (2) in definition \[rich\] are needed to implement the neat embedding, the left inverse $\pi$ of $\sigma$ is needed to show that forming neat reducts commutes with forming subalgebras; in particular, $\A$ is the full $\omega$ neat reduct of $\A_n$. To extend the neat embedding part to infinite dimensions, we use a fairly straightforward construction involving an ultraproduct of expansions of the algebras $\A_n$, relative to a non-principal ultrafilter on $\omega$. For the sake of brevity, let $\alpha=\alpha_{\omega}=\omega+\omega$. Let $T_{\omega}$ be the semigroup generated by the set $\{\tau_{\omega}: \tau\in G\}\cup \bigcup_{i,j\in \alpha}\{[i|j],[i,j]\}.$ For $\sigma\in T_{\omega}$ and $n\in \omega$, let $[\sigma]_n=\sigma\upharpoonright (\omega+n)$.
For each $n\in \omega,$ let $\A_n^+=\langle A,\lor,\land, \to, 0, {\sf c}_i^{\A_n^+}, {\sf s}_{\sigma}^{\A_n^+}\rangle_{i\in \alpha, \sigma\in T_{\omega}}$ be an expansion of $\A_n$ such that their Heyting reducts coincide and for each $\sigma\in T_{\omega}$ and $i\in \alpha,$ ${\sf s}_{\sigma}^{\A_n^+}:={\sf s}_{[\sigma]_n}^{\A_n} \text { if } [\sigma]_n\in T_n,$ and ${\sf c}_i^{\A_n^+}:={\sf c}_i^{\A_n}\text { if }i<\omega+n.$ Let $F$ be any non-principal ultrafilter on $\omega$. Now forming the ultraproduct of the $\A_n^+$'s relative to $F$, let $\A^+=\prod_{n\in \omega}\A_n^+/F.$ For $x\in A$, let $e(x)=\langle e_n(x):n\in \omega\rangle/F.$ Let $\Rd_{\omega}\A^+=\langle A^+, \lor, \land, \to, 0, {\sf c}_i^{\A^+}, {\sf s}_{\tau_{\omega}}^{\A^+} \rangle_{i<\omega,\tau\in G}.$ Then $e$ is an isomorphism from $\A$ into $\Rd_{\omega}\A^+$ such that $e(\Sg^{\A}Y)=\Nr_{\omega}\Sg^{\A^+}e(Y)$ for all $Y\subseteq A.$ We have shown that $\A$ neatly embeds in algebras in finitely many extra dimensions and in $\omega$ extra dimensions. An iteration of this embedding yields the required result. In the presence of diagonals one has to check that the homomorphisms defined preserve diagonal elements. But this is completely straightforward, using the properties of substitutions when applied to diagonal elements. Let $\alpha\subseteq \beta$. We assume, without loss of generality, that $\alpha$ and $\beta$ are ordinals with $\alpha<\beta$. The proof is a direct adaptation of the proof of Theorem 2.6.49(i) in [@HMT1]. First we show that there exists $\B\in G_{\alpha+1}PHA_{\alpha+1}$ such that $\A$ embeds into $\Nr_{\alpha}\B,$ then we proceed inductively. Let $$R = Id\upharpoonright (\alpha\times A) \cup \{ ((k,x), (\lambda, y)) : k, \lambda < \alpha, x, y \in A, \lambda \notin \Delta x, y = {\mathsf s}_{[k|\lambda]} x \}.$$ It is easy to see that $R$ is an equivalence relation on $\alpha \times A$.
Define the following operations on $(\alpha\times A)/R$, with $\mu, i, k\in \alpha$ and $x,y\in A$: $$\label{l5} \begin{split} (\mu, x)/R \lor (\mu, y)/R = (\mu, x \lor y)/R, \end{split}$$ $$\label{l6} \begin{split} (\mu, x)/R\land (\mu, y)/R = (\mu, x\land y)/R, \end{split}$$ $$\label{l7} \begin{split} (\mu, x)/R\to (\mu, y)/R = (\mu, x\to y)/R, \end{split}$$ $$\label{l10} \begin{split} {\mathsf c}_i ((\mu, x)/R) = (\mu, {\mathsf c}_i x )/R, \quad \mu \in \alpha \smallsetminus \{i\}, \end{split}$$ $$\label{l11} \begin{split} {\mathsf s}_{[j|i]} ((\mu, x)/R) = (\mu, {\mathsf s}_{[j|i]} x )/R, \quad \mu \in \alpha \smallsetminus \{i, j\}. \end{split}$$ It can be checked that these operations are well defined. Let $$\C=((\alpha\times A)/R, \lor, \land, \to, 0, {\sf c}_i, {\sf s}_{[i|j]})_{i,j\in \alpha},$$ and let $$h=\{(x, (\mu,x)/R): x\in A, \mu\in \alpha\sim \Delta x\}.$$ Then $h$ is an isomorphism from $\A$ into $\C$. Now to show that $\A$ neatly embeds in $\alpha+1$ dimensions, we define the operations ${\sf c}_{\alpha}, {\sf s}_{[i|\alpha]}$ and ${\sf s}_{[\alpha|i]}$ on $\C$ as follows: $${\mathsf c}_\alpha = \{ ((\mu, x)/R, (\mu, {\mathsf c}_\mu x)/R) : \mu \in \alpha, x \in A \},$$ $${\mathsf s}_{[i|\alpha]} = \{ ((\mu, x)/R, (\mu, {\mathsf s}_{[i|\mu]} x)/R) : \mu \in \alpha \smallsetminus \{i\}, x \in A \},$$ $${\mathsf s}_{[\alpha|i]} = \{ ((\mu, x)/R, (\mu, {\mathsf s}_{[\mu|i]} x)/R) : \mu \in \alpha \smallsetminus \{i\}, x \in A \}.$$ Let $$\B=((\alpha\times A)/R, \lor,\land, \to, 0, {\sf c}_i, {\sf s}_{[i|j]})_{i,j\leq \alpha}.$$ Then $$\B\in G_{\alpha+1}PHA_{\alpha+1}\text{ and }h(\A)\subseteq \Nr_{\alpha}\B.$$ It is not hard to check that the defined operations are as desired. We have our result when $G$ consists only of replacements. But since $\alpha\sim \Delta x$ is infinite, one can show that the substitutions corresponding to all finite transformations are term definable.
For a finite transformation $\tau\in {}^{\alpha}\alpha$ we write $[u_0|v_0, u_1|v_1,\ldots, u_{k-1}|v_{k-1}]$ if $sup\tau=\{u_0,\ldots ,u_{k-1}\}$, $u_0<u_1 \ldots <u_{k-1}$ and $\tau(u_i)=v_i$ for $i<k$. Let $\A\in GPHA_{\alpha}$ be such that $\alpha\sim \Delta x$ is infinite for every $x\in A$. If $\tau=[u_0|v_0, u_1|v_1,\ldots, u_{k-1}|v_{k-1}]$ is a finite transformation, if $x\in A$, and if $\pi_0,\ldots ,\pi_{k-1}$ are, in this order, the first $k$ ordinals in $\alpha\sim (\Delta x\cup Rg(u)\cup Rg(v))$, then $${\mathsf s}_{\tau}x={\mathsf s}_{v_0}^{\pi_0}\ldots {\mathsf s}_{v_{k-1}}^{\pi_{k-1}}{\mathsf s}_{\pi_0}^{u_0}\ldots {\mathsf s}_{\pi_{k-1}}^{u_{k-1}}x.$$ The ${\sf s}_{\tau}$'s so defined satisfy the polyadic axioms, cf. [@HMT1] Theorem 1.11.11. Then one proceeds by a simple induction to show that for all $n\in \omega$ there exists $\B\in G_{\alpha+n}PHA_{\alpha+n}$ such that $\A\subseteq \Nr_{\alpha}\B.$ For the transfinite, one uses ultraproducts, cf. [@HMT1] theorem 2.6.34. For the second part, assume that $\A\subseteq \Nr_{\alpha}\B$ and that $A$ generates $\B$. Then $B$ consists of all elements ${\sf s}_{\sigma}^{\B}x$ such that $x\in A$ and $\sigma$ is a finite transformation on $\beta$ with $\sigma\upharpoonright \alpha$ one to one, cf. [@HMT1] lemma 2.6.66. Now suppose $x\in \Nr_{\alpha}\Sg^{\B}X$, so that $\Delta x\subseteq \alpha$; then there exist $y\in \Sg^{\A}X$ and a finite transformation $\sigma$ of $\beta$ such that $\sigma\upharpoonright \alpha$ is one to one and $x={\sf s}_{\sigma}^{\B}y.$ Let $\tau$ be a finite transformation of $\beta$ such that $\tau\upharpoonright \alpha=Id \text { and } (\tau\circ \sigma)(\alpha)\subseteq \alpha.$ Then $x={\sf s}_{\tau}^{\B}x={\sf s}_{\tau}^{\B}{\sf s}_{\sigma}^{\B}y= {\sf s}_{\tau\circ \sigma}^{\B}y={\sf s}_{(\tau\circ \sigma)\upharpoonright \alpha}^{\A}y.$ In the presence of diagonal elements, one defines them in the bigger algebra (the dilation) precisely as in [@HMT1], theorem 2.6.49(i).
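The displayed term-definition of ${\sf s}_{\tau}$ can be tested by brute force in a concrete set algebra. The following sketch is our own illustration (not part of the original text), with a two-element base and six coordinates standing in for the infinitely many available indices; ${\sf s}^i_j$ is interpreted as the substitution along the replacement $[i|j]$.

```python
from itertools import product

ALPHA, U = 6, (0, 1)
SEQS = list(product(U, repeat=ALPHA))

def subst(tau, X):
    # Set-algebraic substitution: s is in s_tau(X) iff s o tau is in X.
    return {s for s in SEQS if tuple(s[tau[i]] for i in range(ALPHA)) in X}

def repl(i, j):
    # The replacement [i|j]: i goes to j, identity elsewhere.
    t = list(range(ALPHA)); t[i] = j
    return t

# The finite transformation tau = [0|1, 1|0]: support u = (0, 1), values v = (1, 0).
tau = list(range(ALPHA)); tau[0], tau[1] = 1, 0
u, v = [0, 1], [tau[0], tau[1]]

# x depends only on coordinates 0 and 1, so Delta(x) is inside {0, 1};
# pi_0, pi_1 are the first indices outside Delta(x), Rg(u) and Rg(v).
X = {s for s in SEQS if s[0] == 1 or s[1] == 0}
pi = [2, 3]

rhs = X
for ui, pii in reversed(list(zip(u, pi))):   # innermost first: s^{u_i}_{pi_i}
    rhs = subst(repl(ui, pii), rhs)
for pii, vi in reversed(list(zip(pi, v))):   # then: s^{pi_i}_{v_i}
    rhs = subst(repl(pii, vi), rhs)

assert rhs == subst(tau, X)   # the term-defined substitution agrees with s_tau
```

The check exploits that the fresh indices $\pi_i$ park the values of the support coordinates before they are overwritten, exactly as in the displayed formula.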
Here we extensively use the techniques in [@DM], but we have to watch out, for we only have finite cylindrifications. Let $(\A, \alpha,S)$ be a transformation system. That is to say, $\A$ is a Heyting algebra and $S:{}^\alpha\alpha\to End(\A)$ is a homomorphism. For any set $X$, let $F(^{\alpha}X,\A)$ be the set of all functions from $^{\alpha}X$ to $\A$, endowed with the Heyting operations defined pointwise and, for $\tau\in {}^\alpha\alpha$ and $f\in F(^{\alpha}X, \A)$, with ${\sf s}_{\tau}f(x)=f(x\circ \tau)$. This turns $F(^{\alpha}X,\A)$ into a transformation system as well. The map $H:\A\to F(^{\alpha}\alpha, \A)$ defined by $H(p)(x)={\sf s}_xp$ is easily checked to be an isomorphism. Assume that $\beta\supseteq \alpha$. Then $K:F(^{\alpha}\alpha, \A)\to F(^{\beta}\alpha, \A)$ defined by $K(f)x=f(x\upharpoonright \alpha)$ is an isomorphism. These facts are straightforward to establish, cf. theorems 3.1 and 3.2 in [@DM]. $F(^{\beta}\alpha, \A)$ is called a minimal dilation of $F(^{\alpha}\alpha, \A)$. Elements of the big algebra, or the cylindrifier free dilation, are of the form ${\sf s}_{\sigma}p$, $p\in F(^{\beta}\alpha, \A)$, where $\sigma$ is one to one on $\alpha$, cf. [@DM] theorems 4.3-4.4. We say that $J\subseteq I$ supports an element $p\in A$ if, whenever $\sigma_1$ and $\sigma_2$ are transformations that agree on $J,$ then ${\sf s}_{\sigma_1}p={\sf s}_{\sigma_2}p$. $\Nr_J\A$, consisting of the elements that $J$ supports, is just the neat $J$ reduct of $\A$, with the operations defined in the obvious way as indicated above. If $\A$ is a $\B$ valued $I$ transformation system with domain $X$, then the $J$ compression of $\A$ is isomorphic to a $\B$ valued $J$ transformation system via $H: \Nr_J\A\to F(^JX, \A)$, by setting, for $f\in\Nr_J\A$ and $x\in {}^JX$, $H(f)x=f(y)$ where $y\in X^I$ and $y\upharpoonright J=x$, cf. [@DM] theorem 3.10. Now let $\alpha\subseteq \beta.$ If $|\alpha|=|\beta|$ then the required algebra is defined as follows.
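A minimal sketch (our own, not from [@DM]) of the two facts just quoted, with $\alpha$ and $X$ finite and $\A$ the two-element Heyting algebra: the map $S$ sending $\tau$ to ${\sf s}_{\tau}$ is a homomorphism into $End(\A)$, and each ${\sf s}_{\tau}$ preserves the pointwise operations.

```python
from itertools import product
import random

ALPHA, XDOM = 3, (0, 1)
POINTS = list(product(XDOM, repeat=ALPHA))   # the set ^alpha X

def s(tau, f):
    # (s_tau f)(x) = f(x o tau) in the transformation system F(^alpha X, A).
    return {x: f[tuple(x[tau[i]] for i in range(ALPHA))] for x in POINTS}

random.seed(0)
f = {x: random.randint(0, 1) for x in POINTS}
g = {x: random.randint(0, 1) for x in POINTS}
sigma, tau = [1, 2, 0], [0, 0, 2]

# S(sigma o tau) = S(sigma) o S(tau): S is a homomorphism ^alpha alpha -> End(A).
comp = [sigma[tau[i]] for i in range(ALPHA)]
assert s(comp, f) == s(sigma, s(tau, f))

# Each s_tau is an endomorphism for the pointwise operations (meet shown here).
meet = {x: min(f[x], g[x]) for x in POINTS}
assert s(tau, meet) == {x: min(s(tau, f)[x], s(tau, g)[x]) for x in POINTS}
```

The direction of composition matters: $(x\circ\sigma)\circ\tau = x\circ(\sigma\circ\tau)$, which is what the `comp` line encodes.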
Let $\mu$ be a bijection from $\beta$ onto $\alpha$. For $\tau\in {}^{\beta}\beta,$ let ${\sf s}_{\tau}={\sf s}_{\mu\tau\mu^{-1}}$ and for each $i\in \beta,$ let ${\sf c}_i={\sf c}_{\mu(i)}$. This defines $\B\in GPHA_{\beta}$, in which $\A$ neatly embeds via ${\sf s}_{\mu\upharpoonright\alpha},$ cf. [@DM] p. 168. Now assume that $|\alpha|<|\beta|$. Let $\A$ be a given polyadic algebra of dimension $\alpha$; discard its cylindrifications and then take its minimal dilation $\B$, which exists by the above. We need to define cylindrifications on the big algebra so that they agree with their values in $\A$ and so that $\A\cong \Nr_{\alpha}\B$. We let (\*): $${\sf c}_k{\sf s}_{\sigma}^{\B}p={\sf s}_{\rho^{-1}}^{\B} {\sf c}_{\rho(\{k\}\cap \sigma \alpha)}{\sf s}_{(\rho\sigma\upharpoonright \alpha)}^{\A}p.$$ Here $\rho$ is any permutation such that $(\rho\circ \sigma)(\alpha)\subseteq \alpha.$ Then we claim that the definition is sound, that is, independent of the choice of $\rho$, $\sigma$ and $p$. Towards this end, let $q={\sf s}_{\sigma}^{\B}p={\sf s}_{\sigma_1}^{\B}p_1$ and $(\rho_1\circ \sigma_1)(\alpha)\subseteq \alpha.$ We need to show that (\*\*) $${\sf s}_{\rho^{-1}}^{\B}{\sf c}_{\rho(\{k\}\cap \sigma(\alpha))}^{\A}{\sf s}_{(\rho\circ \sigma\upharpoonright \alpha)}^{\A}p= {\sf s}_{\rho_1^{-1}}^{\B}{\sf c}_{\rho_1(\{k\}\cap \sigma_1(\alpha))}^{\A}{\sf s}_{(\rho_1\circ \sigma_1\upharpoonright \alpha)}^{\A}p_1.$$ Let $\mu$ be a permutation of $\beta$ such that $\mu(\sigma(\alpha)\cup \sigma_1(\alpha))\subseteq \alpha$.
Now applying ${\sf s}_{\mu}$ to the left hand side of (\*\*), we get that $${\sf s}_{\mu}^{\B}{\sf s}_{\rho^{-1}}^{\B}{\sf c}_{\rho(\{k\}\cap \sigma(\alpha))}^{\A}{\sf s}_{(\rho\circ \sigma\upharpoonright\alpha)}^{\A}p ={\sf s}_{\mu\circ \rho^{-1}}^{\B}{\sf c}_{\rho(\{k\}\cap \sigma(\alpha))}^{\A}{\sf s}_{(\rho\circ \sigma\upharpoonright\alpha)}^{\A}p.$$ The latter is equal to ${\sf c}_{\mu(\{k\}\cap \sigma(\alpha))}{\sf s}_{\mu}^{\B}q.$ Now since $\mu(\sigma(\alpha)\cup \sigma_1(\alpha))\subseteq \alpha$, we have ${\sf s}_{\mu}^{\B}q={\sf s}_{(\mu\circ \sigma)\upharpoonright \alpha}^{\A}p={\sf s}_{(\mu\circ \sigma_1)\upharpoonright \alpha}^{\A}p_1\in A$. It thus follows that $${\sf s}_{\mu}^{\B}{\sf s}_{\rho^{-1}}^{\B}{\sf c}_{\rho(\{k\}\cap \sigma(\alpha))}^{\A}{\sf s}_{(\rho\circ \sigma\upharpoonright \alpha)}^{\A}p= {\sf c}_{\mu(\{k\})\cap \mu\circ \sigma(\alpha)\cap \mu\circ \sigma_1(\alpha)}{\sf s}_{\mu}^{\B}q.$$ By exactly the same method, it can be shown that $${\sf s}_{\mu}^{\B}{\sf s}_{\rho_1^{-1}}^{\B}{\sf c}_{\rho_1(\{k\}\cap \sigma_1(\alpha))}^{\A}{\sf s}_{(\rho_1\circ \sigma_1\upharpoonright \alpha)}^{\A}p_1 ={\sf c}_{\mu(\{k\})\cap \mu\circ \sigma(\alpha)\cap \mu\circ \sigma_1(\alpha)}{\sf s}_{\mu}^{\B}q.$$ Since ${\sf s}_{\mu}^{\B}$ is invertible, this proves (\*\*). Furthermore, (\*) defines the required algebra $\B$. Let us check this. Since our definition is slightly different from that in [@DM], by restricting cylindrifications to be only finite, we need to check the polyadic axioms, which is tedious but routine. The idea is that every axiom can be pulled back to its corresponding axiom holding in the small algebra $\A$. We check only the axiom $${\sf c}_k(q_1\land {\sf c}_kq_2)={\sf c}_kq_1\land {\sf c}_kq_2.$$ We follow closely [@DM] p. 166. Assume that $q_1={\sf s}_{\sigma_1}^{\B}p_1$ and $q_2={\sf s}_{\sigma_2}^{\B}p_2$.
Let $\rho$ be a permutation of $I$ such that $\rho(\sigma_1I\cup \sigma_2I)\subseteq I$ and let $$p={\sf s}_{\rho}^{\B}[q_1\land {\sf c}_kq_2].$$ Then $$p={\sf s}_{\rho}^{\B}q_1\land {\sf s}_{\rho}^{\B}{\sf c}_kq_2 ={\sf s}_{\rho}^{\B}{\sf s}_{\sigma_1}^{\B}p_1\land {\sf s}_{\rho}^{\B}{\sf c}_k {\sf s}_{\sigma_2}^{\B}p_2.$$ Now we calculate ${\sf c}_k{\sf s}_{\sigma_2}^{\B}p_2.$ We have by (\*) $${\sf c}_k{\sf s}_{\sigma_2}^{\B}p_2= {\sf s}^{\B}_{\rho^{-1}}{\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2.$$ Hence $$p={\sf s}_{\rho}^{\B}{\sf s}_{\sigma_1}^{\B}p_1\land {\sf s}_{\rho}^{\B}{\sf s}^{\B}_{\rho^{-1}}{\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2.$$ $$\begin{split} &={\sf s}^{\A}_{\rho\sigma_1\upharpoonright I}p_1\land {\sf s}_{\rho}^{\B}{\sf s}^{\B}_{\rho^{-1}}{\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2,\\ &={\sf s}^{\A}_{\rho\sigma_1\upharpoonright I}p_1\land {\sf s}_{\rho\circ\rho^{-1}}^{\B} {\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2,\\ &={\sf s}^{\A}_{\rho\sigma_1\upharpoonright I}p_1\land {\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2.\\ \end{split}$$ Now $${\sf c}_k{\sf s}_{\rho^{-1}}^{\B}p={\sf c}_k{\sf s}_{\rho^{-1}}^{\B}{\sf s}_{\rho}^{\B}(q_1\land {\sf c}_k q_2)={\sf c}_k(q_1\land {\sf c}_kq_2).$$ We next calculate ${\sf c}_k{\sf s}_{\rho^{-1}}p$. Let $\mu$ be a permutation of $I$ such that $\mu\rho^{-1}I\subseteq I$. Let $j=\mu(\{k\}\cap \rho^{-1}I)$.
Then applying (\*), we have: $$\begin{split} &{\sf c}_k{\sf s}_{\rho^{-1}}p={\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}{\sf s}_{(\mu\rho^{-1}|I)}^{\A}p,\\ &={\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}{\sf s}_{(\mu\rho^{-1}|I)}^{\A} [{\sf s}^{\A}_{\rho\sigma_1\upharpoonright I}p_1\land {\sf c}_{\rho(\{k\}\cap \sigma_2I)} {\sf s}^{\A}_{(\rho\sigma_2\upharpoonright I)}p_2],\\ &={\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}[{\sf s}_{\mu \sigma_1\upharpoonright I}p_1\land r],\\ \end{split}$$ where $$r={\sf s}_{\mu\rho^{-1}}^{\B}{\sf c}_j {\sf s}_{\rho \sigma_2\upharpoonright I}^{\A}p_2.$$ Now ${\sf c}_jr=r$. Hence, applying the axiom in the small algebra, we get: $${\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}[{\sf s}_{\mu \sigma_1\upharpoonright I}^{\A}p_1]\land {\sf c}_k q_2 ={\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}[{\sf s}_{\mu \sigma_1\upharpoonright I}^{\A}p_1\land r].$$ But $${\sf c}_{\mu(\{k\}\cap \rho^{-1}I)}{\sf s}_{(\mu\sigma_1|I)}^{\A}p_1= {\sf c}_{\mu(\{k\}\cap \sigma_1I)}{\sf s}_{(\mu\sigma_1|I)}^{\A}p_1.$$ So $${\sf s}^{\B}_{\mu^{-1}}{\sf c}_{j}[{\sf s}_{\mu \sigma_1\upharpoonright I}^{\A}p_1]={\sf c}_kq_1,$$ and we are done. To show that neat reducts commute with forming subalgebras, we proceed as in the previous proof, replacing finite transformations by arbitrary transformations. When we have diagonal elements, we first discard them, obtaining a $GPHA_{\alpha}$; then we form the diagonal free dilation of this algebra, and finally we define the diagonal elements in the dilation as in [@HMT2], theorem 5.4.17, p. 233. The next lemma, formulated only for $GPHA_{\alpha}$, will be used in proving our main (algebraic) result. The proof works without any modifications when we add diagonal elements. The lemma says, roughly, that if we have an $\alpha$ dimensional algebra $\A$ and a set $\beta$ containing $\alpha$, then we can find an extension $\B$ of $\A$ in $\beta$ dimensions, specified by a carefully chosen subsemigroup of $^{\beta}\beta$, such that $\A=\Nr_{\alpha}\B$ and for all $b\in B$, $|\Delta b\sim \alpha|<\omega$.
$\B$ is not necessarily the minimal dilation of $\A$, because the large subsemigroup chosen may be smaller than the semigroup used to form the unique dilation. It can happen that this extension is the minimal dilation, but in the case when we consider all transformations, the constructed algebra is only a proper subreduct of the dilation, obtained basically by discarding those elements $b$ in the original dilation for which $\Delta b\sim \alpha$ is infinite. \[cylindrify\] (1) For a set $I,$ let $G_I$ be the semigroup of all finite transformations on $I$. Let $\alpha\subseteq \beta$ be infinite sets. Let $\A\in G_{\alpha}PHA_{\alpha}$ and $\B\in G_{\beta}PHA_{\beta}.$ If $\A\subseteq \Nr_{\alpha}\B$ and $X\subseteq A$, then for any $b\in \Sg^{\B}X,$ one has $|\Delta b\sim \alpha|<\omega.$ In particular, the cylindrifier ${\sf c}_{(\Delta b\sim\alpha)}b$, for any such $b$, is meaningful. (2) Let $\alpha<\beta$ be countable ordinals and let $G_{\alpha}$ and $G_{\beta}$ be strongly rich semigroups on $\alpha$ and $\beta$, respectively. Let $\A\in G_{\alpha}PHA_{\alpha}$ and $\B\in G_{\beta}PHA_{\beta}.$ If $\A\subseteq \Nr_{\alpha}\B$ and $X\subseteq A$, then for any $b\in \Sg^{\B}X,$ we have $|\Delta b\sim \alpha|<\omega.$ (3) For a set $I$, let $G_I$ denote the set of all transformations on $I$. Let $\alpha\subseteq \beta$ be infinite sets such that $|\alpha|<|\beta|$. Let $\A\in G_{\alpha}PHA_{\alpha}$. Then there exist a subsemigroup $S$ of $G_{\beta}$ and $\B\in SPHA_{\beta},$ such that $\A=\Nr_{\alpha}\B$, $S$ contains elements $\pi$, $\sigma$ as in definition \[stronglyrich\], and for all $X\subseteq A$, one has $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X$. Furthermore, for all $b\in B$, $|\Delta b\sim \alpha|<\omega.$ In this case we say that $\B$ is a minimal extension of $\A$. (4) Let $\alpha\subseteq \beta$ be infinite sets, and assume that $|\alpha|=|\beta|$.
Let $S\subseteq {}^{\alpha}\alpha$ be a semigroup that contains all finite transformations and two infinitary ones $\pi$ and $\sigma$ as in definition \[stronglyrich\]. Let $\A\in SPHA_{\alpha}$. Then there exist a semigroup $T\subseteq {}^{\beta}\beta$ and $\B\in TPHA_{\beta}$ such that $\A=\Nr_{\alpha}\B,$ and for all $X\subseteq A$, one has $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X$. Furthermore, for all $b\in B$, $|\Delta b\sim \alpha|<\omega.$ [Proof]{} This trivially holds for elements of $\A$. The rest follows by an easy inductive argument, since a substitution can move only finitely many points. This part is delicate, because we have infinitary substitutions; so, in principle, it can happen that $|\Delta x\sim \alpha|<\omega$ while $|\Delta({\sf s}_{\tau}x)\sim \alpha|\geq \omega$, when $\tau$ moves infinitely many points. We show that in our particular case this cannot happen. Let $M=\beta\sim \alpha$. We can well assume that $\beta=\omega+\omega$ and $\alpha=\omega$. Then, since $M\cap \Delta x=\emptyset$ for all $x\in A$, it suffices to show inductively that for any $x\in B$ and any (unary) operation $f$ of $\B$, the following condition holds: $$\text {If }|M\cap \Delta x|<\omega\text { then }|M\cap \Delta (fx)|<\omega.$$ Of course, we should check that the above holds for the Heyting operations as well, but this is absolutely straightforward. Assume that $f$ is a substitution. So let $\tau\in G_{\beta}$ be such that $f={\sf s}_{\tau}$. Let $D_{\tau}=\{m\in M:\tau^{-1}(m)=\{m\}=\{\tau(m)\}\}.$ Then it is easy to check that $|M\sim D_{\tau}|<\omega$. For the sake of brevity, let $C_{\tau}$ denote the finite set $M\sim D_{\tau}.$ Since $|M\cap \Delta x|<\omega$, we have $|(M\cap \Delta x)\cup C_{\tau}|<\omega$. We will show that $M\cap \Delta ({\sf s}_{\tau}x)\subseteq (M\cap \Delta x)\cup C_{\tau}$, by which we will be done. So assume that $i\in M$ and that $i\notin (M\cap \Delta x)\cup C_{\tau}$.
Then $i\in D_{\tau}\sim \Delta x$, so $\{\tau(i)\}=\{i\}=\tau^{-1}(i)$. Thus we get that ${\sf c}_i{\sf s}_{\tau}x={\sf s}_{\tau}{\sf c}_ix$ by item (5) in theorem \[axioms\], proving that $i\notin M\cap \Delta {\sf s}_{\tau}x.$ Now assume that $f={\sf c}_j$ with $j\in \alpha$. If $i\in M$ and $i\notin \Delta x,$ then we have ${\sf c}_i{\sf c}_jx={\sf c}_j{\sf c}_ix={\sf c}_jx$, and we are done in this case, too. We note that condition (2) in the definition of richness suffices to implement the neat embedding, while strong richness is needed so that $\A$ exhausts the full neat reduct. Let $\A$ and $\beta$ be given. Choose $\pi$ and $\sigma$ in ${}^{\beta}\beta$ satisfying (3) and (4) in definition \[rich\]. Let $H_{\beta}=\{\rho\in {}^{\beta}\beta: |\rho(\alpha)\cap (\beta\sim \alpha)|<\omega\}\cup \{\sigma, \pi\}$. Let $S$ be the semigroup generated by $H_{\beta}.$ Let $\B'\in G_{\beta}PHA_{\beta}$ be an ordinary dilation of $\A$, where all transformations in $^{\beta}\beta$ are used; such a dilation exists by lemma \[dl\]. Then $\A=\Nr_{\alpha}\B'$. We take a suitable reduct of $\B'$. Let $\B$ be the subalgebra of $\B'$ generated from $A$ by all operations except for substitutions indexed by transformations not in $S$. Then, of course, $A\subseteq B$; in fact, $\A=\Nr_{\alpha}\B$, since for each $\tau\in {}^{\alpha}\alpha$, $\tau\cup Id\in S.$ We check that $\B$ is as required. It suffices to show inductively that for $b\in B$, if $|\Delta b\sim \alpha|<\omega$ and $\rho\in S$, then $|\rho(\Delta b)\sim \alpha|<\omega$. For $\rho \in H_{\beta}\sim \{\pi, \sigma\}$, this easily follows from how $\rho$ is defined; otherwise the proof is as in the previous item. We can obviously write $\beta$ as a sum of ordinals $\alpha+\omega$, so that $\beta$ itself is an ordinal, and iterate $\sigma$ as in theorem \[dl\] (1), noting that the proof there does not depend on the countability of $\A,$ but rather on that of $\beta\sim \alpha$.
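The combinatorial heart of the argument, the containment $M\cap \Delta({\sf s}_{\tau}x)\subseteq (M\cap \Delta x)\cup C_{\tau}$, can be checked in a finite truncation of a set algebra. The following sketch is our own illustration: the choices $\beta=8$, $\alpha=\{0,\ldots,3\}$, the base $\{0,1\}$ and the sample $\tau$ are hypothetical stand-ins for the infinite setting.

```python
from itertools import product

BETA, U = 8, (0, 1)
ALPHA, M = set(range(4)), set(range(4, 8))   # alpha, and M = beta ~ alpha
SEQS = list(product(U, repeat=BETA))

def cyl(i, X):
    # Cylindrification c_i: make X independent of coordinate i.
    return {s for s in SEQS if any(s[:i] + (a,) + s[i+1:] in X for a in U)}

def dim(X):
    # The dimension set: coordinates on which X genuinely depends.
    return {i for i in range(BETA) if cyl(i, X) != X}

def subst(tau, X):
    return {s for s in SEQS if tuple(s[tau[i]] for i in range(BETA)) in X}

tau = [5, 1, 2, 3, 4, 0, 6, 7]               # swaps 0 and 5, identity elsewhere
D = {m for m in M
     if tau[m] == m and all(tau[j] != m for j in range(BETA) if j != m)}
C = M - D                                     # the finite disturbed part of M
X = {s for s in SEQS if s[0] == s[4]}         # dimension set {0, 4}

assert dim(X) == {0, 4} and C == {5}
assert M & dim(subst(tau, X)) <= (M & dim(X)) | C
```

Here ${\sf s}_{\tau}X$ depends on coordinates $4$ and $5$, and the only new index in $M$, namely $5$, lies in $C_{\tau}$, as the lemma predicts.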
In more detail, for $n\leq \omega$, let $\alpha_n=\alpha+n$ and $M_n=\beta\sim \alpha_n$. For $\tau\in S$, let $\tau_n=\tau\cup Id_{M_n}$. Let $T_n$ be the subsemigroup of $\langle {}^{\alpha_n}\alpha_n,\circ \rangle$ generated by $\{\tau_n:\tau\in S\} \cup \bigcup_{i,j\in \alpha_n}\{[i|j],[i,j]\}$. For $n\in \omega$, we let $\rho_n:\alpha_n\to \alpha$ be the bijection defined by $\rho_n\upharpoonright \alpha=\sigma^n$ and $\rho_n(\alpha+i)=i$ for all $i<n$. (Here $\sigma$ is as in definition \[rich\].) For $n\in \omega$ and $v\in T_n$, let $v'=\rho_n\circ v\circ \rho_n^{-1}$. Then $v'\in S$. For $\tau\in T_{\omega}$, let $D_{\tau}=\{m\in M_{\omega}:\tau^{-1}(m)=\{m\}=\{\tau(m)\}\}$. Then $|M_{\omega}\sim D_{\tau}|<\omega.$ Let $\A$ be an $S$ algebra. Let $\A_n$ be the algebra defined as follows: $\A_n=\langle A,\lor, \land, \to, 0, {\sf c}_i^{\A_n},{\sf s}_v^{\A_n}\rangle_{i\in \alpha_n,v\in T_n}$ where for each $i\in \alpha_n$ and $v\in T_n$, ${\sf c}_i^{\A_n}:= {\sf c}_{\rho_n(i)}^{\A} \text { and }{\sf s}_v^{\A_n}:= {\sf s}_{v'}^{\A}.$ One then continues as in the proof of theorem \[dl\]: by taking the ultraproduct of the $\A_n$'s relative to a non-principal ultrafilter, one gets a dilation in $\beta$ dimensions in which $\A$ neatly embeds, satisfying the required properties. If $\A \in GPHA_{\alpha}$, where $G_{\alpha}$ is the semigroup of all transformations on $\alpha$ and $\alpha\subseteq \beta$, there are two kinds of extensions of $\A$ to $\beta$ dimensions.
The minimal dilation of $\A$, which uses all substitutions in $G_{\beta}$, and a minimal extension of $\A$, which can be a proper subreduct of the minimal dilation, using operations in a rich subsemigroup of $G_{\beta}.$

Algebraic Proofs of main theorems
=================================

Henceforth, when we write $GPHA_{\alpha}$ without further specification, we understand that we are simultaneously dealing with all possibilities for $G$, and that whatever we say applies equally well to all cases considered. We could also say that $\A$ is a $G$ algebra without further notice; the same is to be understood. Throughout the paper dimensions will be specified by [*infinite*]{} sets or ordinals. Our work in this section is closely related to that in [@Hung]. Our main theorem is a typical representability result, where we start with an abstract (free) algebra and find a non-trivial homomorphism from this algebra to a concrete algebra based on Kripke systems (an algebraic version of Kripke frames). The idea (at least for the equality-free case) is that we start with a theory (which is defined as a pair of sets of formulas, as is the case with classical and intuitionistic logic), extend it to a saturated one in enough spare dimensions, or an appropriate dilation (lemma \[t2\]), and then iterate this process countably many times, forming consecutive (countably many) dilations in enough spare dimensions, using pairs of pairs (theories), cf. lemma \[t3\]; finally we form an extension that will be used to construct the desired Kripke models (theorem \[main\]). The extensions constructed are essentially conservative extensions, and they will actually constitute the set of worlds of our desired Kripke model. The iteration is done by a subtle zig-zag process, a technique due to Gabbay [@b]. When we have diagonal elements (equality), constructing the desired Kripke model is substantially different, and much more intricate.
All definitions and results up to lemma \[main1\], though formulated only for the diagonal-free case, apply equally well to the case when there are diagonal elements, with absolutely no modifications. (The case when diagonal elements are present will be dealt with in part 2.) Let $\A\in GPHA_{\alpha}$. A theory in $\A$ is a pair $(\Gamma, \Delta)$ such that $\Gamma, \Delta\subseteq \A$. A theory $(\Gamma, \Delta)$ is consistent if there are no $a_1,\ldots, a_n\in \Gamma$ and $b_1,\ldots, b_m\in \Delta$ ($m,n\in \omega$) such that $$a_1\land\ldots\land a_n\leq b_1\lor\ldots\lor b_m.$$ Note that in this case we have $\Gamma\cap \Delta=\emptyset$. Also, if $F$ is a filter (in particular, $F$ has the finite intersection property), then it is always the case that $(F, \{0\})$ is consistent. A theory $(\Gamma, \Delta)$ is complete if for all $a\in A,$ either $a\in \Gamma$ or $a\in \Delta$. A theory $(\Gamma, \Delta)$ is saturated if for all $a\in A$ and $j\in \alpha$, if ${\sf c}_ja\in \Gamma$, then there exists $k\in \alpha\sim \Delta a$ such that ${\sf s}^j_ka\in \Gamma$. Note that saturation depends only on $\Gamma$. \[t1\] Let $\A\in GPHA_{\alpha}$ and let $(\Gamma,\Delta)$ be a consistent theory. For any $a\in A,$ either $(\Gamma\cup \{a\}, \Delta)$ or $(\Gamma, \Delta\cup\{a\})$ is consistent. $(\Gamma,\Delta)$ can be extended to a complete theory in $\A.$ [Proof]{} Cf. [@Hung]. Suppose for contradiction that both theories are inconsistent. Then we have $\mu_1\land a\leq \delta_1$ and $\mu_2\leq a\lor \delta_2$, where $\mu_1$ and $\mu_2$ are conjunctions of elements of $\Gamma$ and $\delta_1$, $\delta_2$ are disjunctions of elements of $\Delta$. But from $(\mu_1\land a\to \delta_1)\land (\mu_2\to a\lor \delta_2)\leq (\mu_1\land \mu_2\to \delta_1\lor \delta_2),$ we get $\mu_1\land \mu_2\leq \delta_1\lor \delta_2,$ which contradicts the consistency of $(\Gamma, \Delta)$. Cf. [@Hung]. Assume that $|A|=\kappa$. Enumerate the elements of $\A$ as $(a_i:i<\kappa)$.
Then we can extend $(\Gamma, \Delta)$ consecutively by adding each $a_i$ either to $\Gamma$ or to $\Delta$, while preserving consistency. In more detail, we define by transfinite induction a sequence of theories $(\Gamma_i,\Delta_i)$ for $i\in \kappa$ as follows. Set $\Gamma_0=\Gamma$ and $\Delta_0=\Delta$. If $(\Gamma_i,\Delta_i)$ is defined for all $i<\mu$, where $\mu$ is a limit ordinal, let $(\Gamma_{\mu},\Delta_{\mu})=(\bigcup_{i\in \mu} \Gamma_i, \bigcup_{i\in \mu} \Delta_i)$. Now for successor ordinals: assume that $(\Gamma_i, \Delta_i)$ is defined. Set $\Gamma_{i+1}=\Gamma_i\cup \{a_i\}, \Delta_{i+1}=\Delta_i$ in case this is consistent; else set $\Gamma_{i+1}=\Gamma_i$ and $\Delta_{i+1}=\Delta_i\cup \{a_i\}$. Let $T=\bigcup_{i\in \kappa}\Gamma_i$ and $F= \bigcup_{i\in \kappa} \Delta_i$; then $(T, F)$ is as desired. \[t2\] Let $\A\in GPHA_{\alpha}$ and let $(\Gamma,\Delta)$ be a consistent theory of $\A$. Let $I$ be a set such that $\alpha\subseteq I$ and $\beta=|I\sim \alpha|=\max(|A|, |\alpha|).$ Then there exists a minimal dilation $\B$ of $\A$ of dimension $I$, and a theory $(T,F)$ in $\B$ extending $(\Gamma,\Delta)$, such that $(T,F)$ is saturated and complete. [Proof]{} Let $I$ be as provided in the statement of the lemma. By lemma \[dl\], there exists $\B\in GPHA_I$ such that $\A\subseteq \Nr_{\alpha}\B$ and $\A$ generates $\B$. We also have, for all $X\subseteq A$, $\Sg^{\A}X=\Nr_{\alpha}\Sg^{\B}X$. Let $\{b_i:i<\kappa\}$ be an enumeration of the elements of $\B$; here $\kappa=|B|.$ Define by transfinite recursion a sequence $(T_i, F_i)$, $i<\kappa$, of theories as follows. Set $T_0=\Gamma$ and $F_0=\Delta$. We assume inductively that $$\left|I\sim \Bigl(\bigcup_{x\in T_i} \Delta x\cup \bigcup_{x\in F_i}\Delta x\Bigr)\right|\geq \omega.$$ This is clearly satisfied for $T_0$ and $F_0$. Now we need to worry only about successor ordinals. Assume that $T_i$ and $F_i$ are defined. We distinguish between two cases: 1. $(T_i, F_i\cup \{b_i\})$ is consistent.
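The completion just described can be run literally in a finite Heyting algebra, where the transfinite induction collapses to a finite loop. The powerset algebra below is our own toy example, not part of the original development; for finite $\Gamma$ and $\Delta$, consistency reduces to comparing the meet of $\Gamma$ with the join of $\Delta$.

```python
from itertools import combinations

# A finite Heyting algebra: the powerset of {0,1,2} under inclusion.
TOP = frozenset({0, 1, 2})
ELEMS = [frozenset(c) for r in range(4) for c in combinations(range(3), r)]

def consistent(G, D):
    # No finite meet of G lies below a finite join of D; for finite G, D this
    # reduces to: the meet of all of G is not below the join of all of D.
    meet, join = TOP, frozenset()
    for a in G: meet = meet & a
    for b in D: join = join | b
    return not (meet <= join)

G, D = {frozenset({0})}, {frozenset()}
for a in ELEMS:                       # the transfinite induction, made finite
    if consistent(G | {a}, D):
        G = G | {a}
    else:
        D = D | {a}
    assert consistent(G, D)           # lemma [t1]: one of the two sides works

assert all(a in G or a in D for a in ELEMS)   # the extension is complete
```

Each iteration is exactly the successor step of the proof: try the $\Gamma$ side, fall back to the $\Delta$ side, and lemma \[t1\] guarantees that one of the two preserves consistency.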
Then set $T_{i+1}=T_i$ and $F_{i+1}=F_i\cup \{b_i\}.$ 2. If not, that is, if $(T_i, F_i\cup \{b_i\})$ is inconsistent, we distinguish between two subcases: (a) $b_i$ is not of the form ${\sf c}_jp.$ Then set $T_{i+1}=T_i\cup \{b_i\}$ and $F_{i+1}=F_i$. (b) $b_i={\sf c}_jp$ for some $j\in I$. Then set $T_{i+1}=T_i\cup \{{\sf c}_jp, {\sf s}_u^jp\}$, where $u\notin \Delta p\cup \bigcup_{x\in T_i}\Delta x\cup \bigcup_{x\in F_i}\Delta x$, and $F_{i+1}=F_i$. Such a $u$ exists by the inductive assumption. Now we check by induction that each $(T_i, F_i)$ is consistent. The only part that needs checking, in view of the previous lemma, is subcase (b). So assume that $(T_i,F_i)$ is consistent and $b_i={\sf c}_jp.$ If $(T_{i+1}, F_{i+1})$ were inconsistent, then we would have, for some $a\in T_i$ and some $\delta\in F_i$, that $a\land {\sf c}_jp\land {\sf s}_u^jp\leq \delta.$ Since ${\sf s}_u^jp\leq {\sf c}_jp,$ this gives $a\land {\sf s}_u^jp\leq \delta$; applying ${\sf c}_u$, and using that $u$ occurs in the dimension set of none of $a$, $p$ and $\delta$, we get $a\land {\sf c}_jp\leq \delta.$ But this contradicts the consistency of $(T_i\cup \{{\sf c}_jp\}, F_i)$. Let $T=\bigcup_{i\in \kappa}T_i$ and $F=\bigcup_{i\in \kappa} F_i$; then $(T,F)$ is consistent. We show that it is saturated. If ${\sf c}_jp\in T$, then ${\sf c}_jp\in T_{i+1}$ for some $i$, hence ${\sf s}_u^jp\in T_{i+1}\subseteq T$ with $u\notin \Delta p$. Now, by lemma \[t1\], we can extend $(T,F)$ in $\B$ to a complete theory, and this will not affect saturation, since the process of completion does not take us out of $\B$. The next lemma constitutes the core of our construction; involving a zig-zag Gabbay construction, it will be used repeatedly to construct our desired representation via a set algebra based on a Kripke system, defined in \[Kripke\]. \[t3\] Let $\A\in GPHA_{\alpha}$ be generated by $X$ and let $X=X_1\cup X_2$. Let $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ be two consistent theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively, such that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$.
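The inequality ${\sf s}_u^jp\leq {\sf c}_jp$ invoked in subcase (b) is easily visualized in a concrete set algebra, where a substitution instance supplies a witness for the existential cylindrifier. The following sketch, with parameters of our choosing, checks it exhaustively over a small base.

```python
from itertools import product

ALPHA, U = 4, (0, 1)
SEQS = set(product(U, repeat=ALPHA))

def cyl(j, X):
    # c_j : the existential cylindrifier on coordinate j.
    return {s for s in SEQS if any(s[:j] + (a,) + s[j+1:] in X for a in U)}

def s_repl(j, u, X):
    # s^j_u : substitute the value at coordinate u for coordinate j.
    return {s for s in SEQS if s[:j] + (s[u],) + s[j+1:] in X}

# If the substitution instance holds at s, then a = s(u) witnesses c_j.
for X in [set(), {(0, 0, 0, 0)}, {s for s in SEQS if s[0] != s[1]}]:
    for j in range(ALPHA):
        for u in range(ALPHA):
            if u != j:
                assert s_repl(j, u, X) <= cyl(j, X)
```

This is exactly why adding the witness ${\sf s}_u^jp$ alongside ${\sf c}_jp$ cannot, by itself, strengthen the $\Gamma$ side past what ${\sf c}_jp$ already asserts.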
Assume further that $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$. Suppose that $I$ is a set such that $\alpha\subseteq I$ and $|I\sim \alpha|=\max(|A|,|\alpha|)$. Then there exist a dilation $\B\in GPHA_I$ of $\A$ and theories $T_1=(\Delta_{\omega}, \Gamma_{\omega})$, $T_2=(\Theta_{\omega}, \Gamma_{\omega}^*)$ extending $(\Delta_0, \Gamma_0)$ and $(\Theta_0, \Gamma_0^*)$, such that $T_1$ and $T_2$ are consistent and saturated in $\Sg^{\B}X_1$ and $\Sg^{\B}X_2,$ respectively, $(\Delta_{\omega}\cap \Theta_{\omega}, \Gamma_{\omega})$ is complete in $\Sg^{\B}X_1\cap \Sg^{\B}X_2,$ and $\Gamma_{\omega}\subseteq \Gamma_{\omega}^*$. [Proof]{} Like the corresponding proof in [@Hung], we will build the desired theories in a step-by-step zig-zag manner, in a large enough dilation whose dimension is specified by $I$. The spare dimensions play the role of added witnesses that will allow us, in a sense, to eliminate quantifiers. Let $\A=\A_0\in GPHA_{\alpha}$. The proof consists of an iteration of lemmata \[t1\] and \[t2\]. Let $\beta=\max(|A|, |\alpha|)$, and let $I$ be such that $|I\sim \alpha|=\beta$. We distinguish between two cases. Assume first that $G$ is strongly rich or that $G$ consists of all finite transformations. In this case we only deal with minimal dilations. We can write $I\sim \alpha$ as $\bigcup_{n=1}^{\infty}C_n$, where $C_i\cap C_j=\emptyset$ for distinct $i$ and $j$, and $|C_i|=\beta$ for all $i$. We then iterate the first two items in lemma \[dl\]. Let $\A_1=\A(C_1)\in G_{\alpha\cup C_1}PHA_{\alpha\cup C_1}$ be a minimal dilation of $\A$, so that $\A=\Nr_{\alpha}\A_1$. Let $\A_2=\A(C_1)(C_2)$ be a minimal dilation of $\A_1$, so that $\A_1=\Nr_{\alpha\cup C_1}\A_2$. Generally, we define inductively $\A_n=\A(C_1)(C_2)\ldots (C_n)$ to be a minimal dilation of $\A_{n-1}$, so that $\A_{n-1}=\Nr_{\alpha\cup C_1\cup \ldots\cup C_{n-1}}\A_n$. Notice that for $k<n$, $\A_n$ is a minimal dilation of $\A_k$.
So we have a sequence of algebras $\A_0\subseteq \A_1\subseteq \A_2\ldots,$ each of which is a minimal dilation of its predecessor. The second case is when $G$ contains all transformations. Here we have to use minimal extensions at the start, i.e., at the first step of the iteration. We iterate lemma \[dl\], using items (3) and (4) in lemma \[cylindrify\], by taking $|C_1|=\beta$ and $|C_i|=\omega$ for all $i\geq 2$; this yields the desired sequence of extensions. Now that we have a sequence of extensions $\A_0\subseteq \A_1\ldots$ in increasing dimensions, we form a limit of this sequence in $I$ dimensions. We could use ultraproducts, but instead we use products and quotient algebras. First form the Heyting algebra that is the product of the Heyting reducts of the constructed algebras, that is, take $\C=\prod_{n=0}^{\infty}\Rd \A_n$, where $\Rd \A_n$ denotes the Heyting reduct of $\A_n$ obtained by discarding substitutions and cylindrifiers. Let $$M=\{f\in C: (\exists n\in \omega)(\forall k\geq n) f_{k}=0\}.$$ Then $M$ is a Heyting ideal of $\C$. Now form the quotient Heyting algebra $\D=\C/M.$ We want to expand this Heyting algebra by cylindrifiers and substitutions, i.e., to an algebra in $GPHA_{I}$. Towards this aim, for $\tau\in {}G,$ define $\phi({\tau})\in {} ^CC$ as follows: $$(\phi(\tau)f)_n={\sf s}_{\tau\upharpoonright dim \A_n}^{\A_n}f_n$$ if $\tau(dim(\A_n))\subseteq dim (\A_n)$. Otherwise $$(\phi(\tau)f)_n=f_n.$$ For $j\in I$, define $${\sf c}_jf_n={\sf c}_{(dim \A_n\cap \{j\})}^{\A_n}f_n,$$ and $${\sf q}_jf _n={\sf q}_{(dim \A_n\cap \{j\})}^{\A_n}f_n.$$ Then for $\tau\in G$ and $j\in I$, set $${\sf s}_{\tau}(f/M)=\phi({\tau})f/M,$$ $${\sf c}_{j}(f/M)=({\sf c}_j f)/M,$$ and $${\sf q}_{j}(f/M)=({\sf q}_j f)/M.$$ Then it can easily be checked that $\A_{\infty}=(\D, {\sf s}_{\tau}, {\sf c}_{j}, {\sf q}_{j})$ is a $GPHA_I$ in which every $\A_n$ neatly embeds.
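As a side remark not spelled out in the text, the operations above must be checked to be well defined on the quotient $\D=\C/M$; the verification is routine, and the following sketch (our gloss, using the notation of the construction, and assuming that $f/M=g/M$ entails $f_n=g_n$ for all sufficiently large $n$) records it for ${\sf c}_j$.

```latex
% Well-definedness of c_j on D = C/M (a routine check; our gloss).
% Assume f/M = g/M, so that f_n = g_n for all n >= n_0, say.
% Since c_j acts coordinatewise,
\begin{align*}
({\sf c}_j f)_n
  = {\sf c}_{(dim \A_n \cap \{j\})}^{\A_n} f_n
  = {\sf c}_{(dim \A_n \cap \{j\})}^{\A_n} g_n
  = ({\sf c}_j g)_n \qquad (n \ge n_0),
\end{align*}
% so c_j f and c_j g agree from n_0 onwards, whence c_j f/M = c_j g/M.
% The same coordinatewise argument applies to q_j and to s_tau.
```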
We can and will assume that $\A_n=\Nr_{\alpha\cup C_1\ldots \cup C_n}\A_{\infty}$. Also $\A_{\infty}$ is a minimal dilation of $\A_n$ for all $n$. During our 'zig-zagging' we shall be using lemma \[cylindrify\] extensively. From now on, fix $\A$ to be as in the statement of lemma \[t3\] for some time to come. So $\A\in GPHA_{\alpha}$ is generated by $X$, and $X=X_1\cup X_2$. $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ are two consistent theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively, such that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$. Finally, $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2.$ Now we have: $$\Delta_0\subseteq \Sg^{\A}X_1\subseteq \Sg^{\A(C_1)}X_1\subseteq \Sg^{\A(C_1)(C_2)}X_1\subseteq \Sg^{\A(C_1)(C_2)(C_3)}X_1 \ldots\subseteq \Sg^{\A_{\infty}}X_1.$$ $$\Theta_0\subseteq \Sg^{\A}X_2\subseteq \Sg^{\A(C_1)}X_2\subseteq \Sg^{\A(C_1)(C_2)}X_2\subseteq \Sg^{\A(C_1)(C_2)(C_3)}X_2 \ldots\subseteq \Sg^{\A_{\infty}}X_2.$$ In view of lemmata \[t1\] and \[t2\], extend $(\Delta_0, \Gamma_0)$ to a complete and saturated theory $(\Delta_1, \Gamma_1')$ in $\Sg^{\A(C_1)}X_1$. Consider $(\Delta_1, \Gamma_0)$. Zig-zagging along, we extend our theories in a step-by-step manner. The proofs of the coming Claims 1, 2 and 3 are very similar to the proofs of the corresponding claims in [@Hung], which are in turn an algebraic version of lemmata 4.18-19-20 in [@b], with one major difference from the former: in our present situation, we can cylindrify on only finitely many indices, so we have to be careful when talking about dimension sets and when forming neat reducts (or compressions). Our proof therefore becomes substantially more involved. In the course of our proof we use extensively lemmata \[dl\] and \[cylindrify\], which are not formulated in [@Hung], because they were simply not needed there, where cylindrifications on possibly infinite sets were available.
[Claim 1]{} The theory $T_1=(\Theta_0\cup (\Delta_1\cap \Sg^{\A(C_1)}X_2), \Gamma_0^*)$ is consistent in $\Sg^{\A(C_1)}X_2.$ [Proof of Claim 1]{} Assume that $T_1$ is inconsistent. Then for some conjunction $\theta_0$ of elements in $\Theta_0$, some $E_1\in \Delta_1\cap \Sg^{\A(C_1)}X_2,$ and some disjunction $\mu_0^*$ in $\Gamma_0^*,$ we have $\theta_0\land E_1\leq \mu_0^*,$ and so $E_1\leq \theta_0\rightarrow \mu_0^*.$ Since $\theta_0\in \Theta_0\subseteq \Sg^{\A}X_2$ and $\mu_0^*\in \Gamma_0^*\subseteq \Sg^{\A}X_2\subseteq \Nr_{\alpha}\A(C_1)$, we have, for any finite set $D\subseteq C_1\sim \alpha$, that ${\sf c}_{(D)}\theta_0=\theta_0$ and ${\sf c}_{(D)}\mu_0^*=\mu_0^*$. Also, for any finite set $D\subseteq C_1\sim \alpha,$ we have ${\sf c}_{(D)}E_1\leq {\sf c}_{(D)}(\theta_0\to \mu_0^*)=\theta_0\to \mu_0^*.$ Now $E_1\in \Delta_1$, hence $E_1\in \Sg^{\A(C_1)}X_1$. By definition, we also have $E_1\in \Sg^{\A(C_1)}X_2.$ By lemma \[cylindrify\] there exist finite sets $D_1$ and $D_2$ contained in $C_1\sim \alpha,$ such that $${\sf c}_{(D_1)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_1$$ and $${\sf c}_{(D_2)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_2.$$ Let $D=D_1\cup D_2$. Then $D\subseteq C_1\sim \alpha$ and we have: $${\sf c}_{(D)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_1=\Sg^{\Nr_{\alpha}\A(C_1)}X_1=\Sg^{\A}X_1$$ and $${\sf c}_{(D)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_2=\Sg^{\Nr_{\alpha}\A(C_1)}X_2=\Sg^{\A}X_2,$$ that is to say, $${\sf c}_{(D)}E_1\in \Sg^{\A}X_1\cap \Sg^{\A}X_2.$$ Since $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2,$ we get that ${\sf c}_{(D)}E_1$ is either in $\Delta_0\cap \Theta_0$ or in $\Gamma_0$. We show that either way leads to a contradiction, by which we will be done. Suppose it is in $\Gamma_0$. Recall that we extended $(\Delta_0, \Gamma_0)$ to a complete saturated extension $(\Delta_1, \Gamma_1')$ in $\Sg^{\A(C_1)}X_1$.
Since $\Gamma_0\subseteq \Gamma_1',$ we get that ${\sf c}_{(D)}E_1\in \Gamma_1',$ hence ${\sf c}_{(D)}E_1\notin \Delta_1$, because $(\Delta_1,\Gamma_1')$ is saturated and consistent. But this contradicts that $E_1\in \Delta_1$, because $E_1\leq {\sf c}_{(D)}E_1.$ Thus, we can infer that ${\sf c}_{(D)}E_1\in \Delta_0\cap \Theta_0$. In particular, it is in $\Theta_0$; since ${\sf c}_{(D)}E_1\leq \theta_0\rightarrow \mu_0^*$, this again contradicts the consistency of $(\Theta_0, \Gamma_0^*)$. Now we extend $T_1$ to a complete and saturated theory $(\Theta_2, \Gamma_2^*)$ in $\Sg^{\A(C_1)(C_2)}X_2$. Let $\Gamma_2=\Gamma_2^*\cap \Sg^{\A(C_1)(C_2)}X_1$. [Claim 2]{} The theory $T_2=(\Delta_1\cup (\Theta_2\cap \Sg^{\A(C_1)(C_2)}X_1), \Gamma_2)$ is consistent in $\Sg^{\A(C_1)(C_2)}X_1$. [Proof of Claim 2]{} If the Claim fails to hold, then we would have some $\delta_1\in \Delta_1$, $E_2\in \Theta_2\cap \Sg^{\A(C_1)(C_2)}X_1,$ and a disjunction $\mu_2\in \Gamma_2$ such that $\delta_1\land E_2\leq \mu_2,$ and so $\delta_1\leq (E_2\rightarrow \mu_2)$. Since $\delta_1\in \Delta_1\subseteq \Sg^{\A(C_1)}X_1\subseteq \Nr_{\alpha\cup C_1}\Sg^{\A(C_1)(C_2)}X_1$, we have ${\sf q}_{(D)}\delta_1=\delta_1$ for any finite set $D\subseteq C_2\sim C_1.$ Hence the following holds for any finite set $D\subseteq C_2\sim C_1$: $$\delta_1\leq {\sf q}_{(D)}(E_2\rightarrow \mu_2).$$ Now, by lemma \[cylindrify\], there is a finite set $D\subseteq C_2\sim C_1,$ satisfying $$\begin{split} {\sf q}_{(D)}(E_2\rightarrow \mu_2) &\in \Nr_{\alpha\cup C_1}\Sg^{\A(C_1)(C_2)}X_2\\ &=\Sg^{\Nr_{\alpha\cup C_1}\A(C_1)(C_2)}X_2\\ &=\Sg^{\A(C_1)}X_2.\\ \end{split}$$ Since $\delta_1\in \Delta_1$ and $\delta_1\leq {\sf q}_{(D)}(E_2\to \mu_2)$, we get that ${\sf q}_{(D)}(E_2\rightarrow \mu_2)$ is in $\Delta_1\cap \Sg^{\A(C_1)}X_2$. We proceed as in the previous claim, replacing $\Theta_0$ by $\Theta_2$ and the existential quantifier by the universal one.
Let $E_1= {\sf q}_{(D)}(E_2\to \mu_2)$. Then $E_1\in \Sg^{\A(C_1)}X_1\cap \Sg^{\A(C_1)}X_2$. By lemma \[cylindrify\] there exist finite sets $D_1$ and $D_2$ contained in $C_1\sim \alpha$ such that $${\sf q}_{(D_1)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_1,$$ and $${\sf q}_{(D_2)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_2.$$ Let $J=D_1\cup D_2$. Then $J\subseteq C_1\sim \alpha,$ and we have: $${\sf q}_{(J)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_1=\Sg^{\Nr_{\alpha}\A(C_1)}X_1=\Sg^{\A}X_1$$ and $${\sf q}_{(J)}E_1\in \Nr_{\alpha}\Sg^{\A(C_1)}X_2=\Sg^{\Nr_{\alpha}\A(C_1)}X_2=\Sg^{\A}X_2.$$ That is to say, $${\sf q}_{(J)}E_1\in \Sg^{\A}X_1\cap \Sg^{\A}X_2.$$ Since $(\Delta_0\cap \Theta_2\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2,$ we get that ${\sf q}_{(J)}E_1$ is either in $\Delta_0\cap \Theta_2$ or in $\Gamma_0$. Suppose it is in $\Gamma_0$. Since $\Gamma_0\subseteq \Gamma_1'$, we get that ${\sf q}_{(J)}E_1\in \Gamma_1'$, hence ${\sf q}_{(J)}E_1\notin \Delta_1,$ because $(\Delta_1,\Gamma_1')$ is saturated and consistent. Here, recall that $(\Delta_1, \Gamma_1')$ is a saturated complete extension of $(\Delta_0, \Gamma_0)$. But this contradicts that $E_1\in \Delta_1$. Thus, we can infer that ${\sf q}_{(J)}E_1\in \Delta_0\cap \Theta_2$. In particular, it is in $\Theta_2$. Hence ${\sf q}_{(D\cup J)}(E_2\to \mu_2)\in \Theta_2$, and so $E_2\to \mu_2\in \Theta_2$, since ${\sf q}_{(D\cup J)}(E_2\to \mu_2)\leq E_2\to \mu_2$. But this is a contradiction, since $E_2\in \Theta_2$, $\mu_2\in \Gamma_2^*$ and $(\Theta_2,\Gamma_2^*)$ is consistent. Extend $T_2$ to a complete and saturated theory $(\Delta_3, \Gamma_3')$ in $\Sg^{\A(C_1)(C_2)(C_3)}X_1$ such that $\Gamma_2\subseteq \Gamma_3'$. Again we are interested only in $(\Delta_3, \Gamma_2)$. [Claim 3 ]{} The theory $T_3=(\Theta_2\cup (\Delta_3\cap \Sg^{\A(C_1)(C_2)(C_3)}X_2), \Gamma_2^*)$ is consistent in $\Sg^{\A(C_1)(C_2)(C_3)}X_2.$ [Proof of Claim 3]{} Seeking a contradiction, assume that the Claim does not hold.
Then we would get for some $\theta_2\in \Theta_2$, $E_3\in \Delta_3\cap \Sg^{\A(C_1)(C_2)(C_3)}X_2$ and some disjunction $\mu_2^*\in \Gamma_2^*,$ that $\theta_2\land E_3\leq \mu_2^*.$ Hence $E_3\leq \theta_2\rightarrow \mu_2^*.$ For any finite set $D\subseteq C_3\sim (C_1\cup C_2),$ we have ${\sf c}_{(D)}E_3\leq \theta_2\rightarrow \mu_2^*$. By lemma \[cylindrify\], there is a finite set $D_3\subseteq C_3\sim (C_1\cup C_2),$ satisfying $$\begin{split} {\sf c}_{(D_3)}E_3 &\in \Nr_{\alpha\cup C_1\cup C_2}\Sg^{\A(C_1)(C_2)(C_3)}X_1\\ &=\Sg^{\Nr_{\alpha\cup C_1\cup C_2}\A(C_1)(C_2)(C_3)}X_1\\ &= \Sg^{\A(C_1)(C_2)}X_1.\\ \end{split}$$ If ${\sf c}_{(D_3)}E_3\in \Gamma_2^*$, then it is in $\Gamma_2$, and since $\Gamma_2\subseteq \Gamma_3'$, it cannot be in $\Delta_3$. But this contradicts that $E_3\in \Delta_3$, because $E_3\leq {\sf c}_{(D_3)}E_3$. So ${\sf c}_{(D_3)}E_3\in \Theta_2$; since ${\sf c}_{(D_3)}E_3\leq \theta_2\rightarrow \mu_2^*$, this contradicts the consistency of $(\Theta_2, \Gamma_2^*).$ Likewise, now extend $T_3$ to a complete and saturated theory $(\Theta_4, \Gamma_4^*)$ in $\Sg^{\A(C_1)(C_2)(C_3)(C_4)}X_2$ such that $\Gamma_2^*\subseteq \Gamma_4^*,$ and let $\Gamma_4=\Gamma_4^*\cap \Sg^{\A(C_1)(C_2)(C_3)(C_4)}X_1$. As before, the theory $(\Delta_3\cup (\Theta_4\cap \Sg^{\A(C_1)(C_2)(C_3)(C_4)}X_1), \Gamma_4)$ is consistent in $\Sg^{\A(C_1)(C_2)(C_3)(C_4)}X_1$. Continue, inductively, to construct $(\Delta_5, \Gamma_5')$, $(\Delta_5, \Gamma_4)$ and so on.
We obtain, zigzagging along, the following sequences: $$(\Delta_0, \Gamma_0), (\Delta_1, \Gamma_0), (\Delta_3, \Gamma_2)\ldots$$ $$(\Theta_0, \Gamma_0^*), (\Theta_2, \Gamma_2^*), (\Theta_4, \Gamma_4^*)\ldots$$ such that $(\Theta_{2n}, \Gamma_{2n}^*)$ is complete and saturated in $\Sg^{\A(C_1)\ldots (C_{2n})}X_2,$ $(\Delta_{2n+1}, \Gamma_{2n})$ is a saturated theory in $\Sg^{\A(C_1)\ldots (C_{2n+1})}X_1,$ $\Theta_{2n}\subseteq \Theta_{2n+2}$, $\Gamma_{2n}^*\subseteq \Gamma_{2n+2}^*$, $\Gamma_{2n}=\Gamma_{2n}^*\cap \Sg^{\A(C_1)\ldots (C_{2n})}X_1,$ and $\Delta_0\subseteq \Delta_1\subseteq \Delta_3\subseteq \ldots .$ Now let $\Delta_{\omega}=\bigcup_{n}\Delta_n$, $\Gamma_{\omega}=\bigcup_{n}\Gamma_n$, $\Gamma_{\omega}^*=\bigcup_{n}\Gamma_n^*$ and $\Theta_{\omega}=\bigcup_n\Theta_n$. Then $T_1=(\Delta_{\omega}, \Gamma_{\omega})$ and $T_2=(\Theta_{\omega}, \Gamma_{\omega}^*)$ extend $(\Delta_0, \Gamma_0)$ and $(\Theta_0, \Gamma_0^*)$, respectively; $T_1$ and $T_2$ are consistent and saturated in $\Sg^{\B}X_1$ and $\Sg^{\B}X_2,$ respectively; $(\Delta_{\omega}\cap \Theta_{\omega}, \Gamma_{\omega})$ is complete in $\Sg^{\B}X_1\cap \Sg^{\B}X_2;$ and $\Gamma_{\omega}\subseteq \Gamma_{\omega}^*$. We check that $(\Delta_{\omega}\cap \Theta_{\omega},\Gamma_{\omega})$ is complete in $\Sg^{\B}X_1\cap \Sg^{\B}X_2$. Let $a\in \Sg^{\B}X_1\cap \Sg^{\B}X_2$. Then there exists $n$ such that $a\in \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2$. Now $(\Theta_{2n}, \Gamma_{2n}^*)$ is complete, and so either $a\in \Theta_{2n}$ or $a\in \Gamma_{2n}^*$. If $a\in \Theta_{2n}$, it will be in $\Delta_{2n+1}$, and if $a\in \Gamma_{2n}^*$, it will be in $\Gamma_{2n}$. In either case, $a\in \Delta_{\omega}\cap \Theta_{\omega}$ or $a\in \Gamma_{\omega}$. Let $\A$ be an algebra generated by $X$ and assume that $X=X_1\cup X_2$.
A pair $((\Delta,\Gamma), (T,F))$ of theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2$ is a matched pair of theories if $(\Delta\cap T\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma\cap F\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$. A theory $(T, F)$ extends a theory $(\Delta, \Gamma)$ if $\Delta\subseteq T$ and $\Gamma\subseteq F$. A pair $(T_1, T_2)$ of theories extends another pair $(\Delta_1, \Delta_2)$ if $T_1$ extends $\Delta_1$ and $T_2$ extends $\Delta_2.$ The following Corollary follows directly from the proof of lemma \[t3\]. \[main1\] Let $\A\in GPHA_{\alpha}$ be generated by $X$ and let $X=X_1\cup X_2$. Let $((\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*))$ be a matched pair in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively. Let $I$ be a set such that $\alpha \subseteq I$ and $|I\sim \alpha|=max(|A|, |\alpha|)$. Then there exists a dilation $\B\in GPHA_I$ of $\A$, and a matched pair $(T_1, T_2)$ extending $((\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*))$, such that $T_1$ and $T_2$ are saturated in $\Sg^{\B}X_1$ and $\Sg^{\B}X_2$, respectively. We next define set algebras based on Kripke systems. We stipulate that subdirect products (in the universal algebraic sense) of such algebras are the representable algebras, which the abstract axioms formulated in ? aspire to capture. Here Kripke systems (a direct generalization of Kripke frames) are defined differently than those in [@Hung], because we allow [*relativized*]{} semantics. In the classical case, such algebras reduce to products of set algebras. [^3] Let $\alpha$ be an infinite set. A Kripke system of dimension $\alpha$ is a quadruple $\mathfrak{K}=(K, \leq, \{X_k\}_{k\in K}, \{V_k\}_{k\in K})$ such that $(K,\leq)$ is a preordered set and, for any $k\in K$, $X_k$ is a non-empty set, $V_k\subseteq {}^{\alpha}X_k,$ and $$k\leq k'\implies X_k\subseteq X_{k'}\text { and } V_k\subseteq V_{k'}.$$ \[Kripke\] Let $\mathfrak{O}$ be the Boolean algebra $\{0,1\}$.
Now Kripke systems define concrete polyadic Heyting algebras as follows. Let $\alpha$ be an infinite set and $G$ be a semigroup of transformations on $\alpha$. Let $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K}, \{V_k\}_{k\in K})$ be a Kripke system. Consider the set $$\mathfrak{F}_{\mathfrak{K}}=\{(f_k:k\in K); f_k:V_k\to \mathfrak{O}, k\leq k'\implies f_k\leq f_{k'}\}.$$ If $x,y\in {}^{\alpha}X_k$ and $j\in \alpha$, we write $x\equiv_jy$ if $x(i)=y(i)$ for all $i\neq j$. We write $(f_k)$ instead of $(f_k:k\in K)$. In $\mathfrak{F}_{\mathfrak{K}}$ we introduce the following operations: $$(f_k)\lor (g_k)=(f_k\lor g_k),$$ $$(f_k)\land (g_k)=(f_k\land g_k).$$ For any $(f_k)$ and $(g_k)\in \mathfrak{F}_{\mathfrak{K}}$, define $$(f_k)\rightarrow (g_k)=(h_k),$$ where $(h_k)$ is given for $x\in V_k$ by $h_k(x)=1$ if and only if for any $k'\geq k$, if $f_{k'}(x)=1$ then $g_{k'}(x)=1$. For any $\tau\in G,$ define $${\sf s}_{\tau}:\mathfrak{F}_{\mathfrak{K}}\to \mathfrak{F}_{\mathfrak{K}}$$ by $${\sf s}_{\tau}(f_k)=(g_k),$$ where $$g_k(x)=f_k(x\circ \tau)\text { for any }k\in K\text { and }x\in V_k.$$ For any $j\in \alpha$ and $(f_k)\in \mathfrak{F}_{\mathfrak{K}},$ define $${\sf c}_{j}(f_k)=(g_k),$$ where for $x\in V_k$ $$g_k(x)=\bigvee\{f_k(y): y\in V_k,\ y\equiv_j x\}.$$ Finally, set $${\sf q}_{j}(f_k)=(g_k),$$ where for $x\in V_k,$ $$g_k(x)=\bigwedge\{f_l(y): k\leq l, \ y\in V_k, y\equiv_j x\}.$$ The diagonal element ${\sf d}_{ij}$ is defined to be the tuple $(f_k:k\in K)$ where for $x\in V_k$, $f_k(x)=1$ iff $x_i=x_j.$ The algebra $\mathfrak{F}_{\mathfrak{K}}$ is called the set algebra based on the Kripke system $\mathfrak{K}$.

Diagonal Free case
------------------

Our next theorem addresses the case of $GPHA_{\alpha}$ with $G$ a rich semigroup and everything countable, and the case when $G={}^{\alpha}\alpha$ with no restrictions on cardinality.
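To make the intuitionistic character of the implication just defined concrete, here is a toy instance (our illustration, not from the text): a Kripke system with two comparable worlds over a single assignment.

```latex
% Two worlds k_0 <= k_1, one assignment x in V_{k_0} = V_{k_1}.
% Take f_{k_0}(x) = 0, f_{k_1}(x) = 1 (monotone, so (f_k) lies in F_K),
% and g_{k_0}(x) = g_{k_1}(x) = 0.  For (h_k) = (f_k) -> (g_k):
\begin{align*}
h_{k_0}(x) &= 0, \quad\text{since } k_1 \geq k_0,\ f_{k_1}(x)=1
  \text{ but } g_{k_1}(x)=0,\\
h_{k_1}(x) &= 0, \quad\text{for the same reason, with } k'=k_1.
\end{align*}
% Classically, f_{k_0}(x) = 0 alone would force h_{k_0}(x) = 1;
% here the later world k_1 blocks it.
```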
It is an algebraic version of Robinson's joint consistency theorem: a pair of consistent theories that agree on their common part can be amalgamated by taking their union to form a consistent extension of both; however, we stipulate that the second component of the first theory is included in the second component of the second theory. We will provide examples showing that we cannot omit this condition. The case when $G$ consists of finite transformations will be dealt with separately. The theorem also shows that the results in [@Hung], proved for full polyadic Heyting algebras, remain valid when we restrict cylindrifications to be finite, possibly add diagonal elements, and consider semigroups that may be finitely generated, showing that the presence of all infinitary substitutions and infinitary cylindrifications is somewhat of an overkill. Indeed, the axiomatization of full polyadic Heyting algebras studied in [@Hung] is extremely complex from the recursion-theoretic point of view [@NS], while the axiomatizations studied here are far less complex; indeed they are recursive. This is definitely an asset from the algebraic point of view. \[main\] Let $\alpha$ be an infinite set. Let $G$ be a semigroup of transformations on $\alpha$ containing at least one infinitary transformation. Let $\A$ be the free $G$ algebra generated by $X$, and suppose that $X=X_1\cup X_2$. Let $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ be two consistent theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively. Assume that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$. Assume, further, that $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$.
Then there exist a Kripke system $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K}, \{V_k\}_{k\in K}),$ a homomorphism $\psi:\A\to \mathfrak{F}_{\mathfrak K},$ $k_0\in K$, and $x\in V_{k_0}$, such that for all $p\in \Delta_0\cup \Theta_0$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=1$, and for all $p\in \Gamma_0^*$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=0$. [Proof]{} We use lemma \[t3\] extensively. Assume that $\alpha$, $G$, $\A$, $X_1$, $X_2$ and everything else in the hypothesis are given. Let $I$ be a set containing $\alpha$ such that $\beta=|I\sim \alpha|=max(|A|, |\alpha|).$ If $G$ is strongly rich, let $(K_n:n\in \omega)$ be a family of pairwise disjoint sets such that $|K_n|=\beta.$ Define a sequence of algebras $\A=\A_0\subseteq \A_1\subseteq \A_2\subseteq \ldots \subseteq \A_n\ldots,$ such that $\A_{n+1}$ is a minimal dilation of $\A_n$ and $dim(\A_{n+1})=dim\A_n\cup K_n$. If $G={}^{\alpha}\alpha$, then let $(K_n:n\in \omega)$ be a family of pairwise disjoint sets such that $|K_1|=\beta$ and $|K_n|=\omega$ for $n\geq 2$, and define a sequence of algebras $\A=\A_0\subseteq \A_1\subseteq \A_2\subseteq \ldots \subseteq \A_n\ldots,$ such that $\A_1$ is a minimal extension of $\A$ and $\A_{n+1}$ is a minimal dilation of $\A_n$ for $n\geq 1$, with $dim(\A_{n+1})=dim\A_n\cup K_n$. We denote $dim(\A_n)$ by $I_n$ for $n\geq 1$. Recall that $dim(\A_0)=dim\A=\alpha$. We interrupt the main stream of the proof by two consecutive claims. So as not to digress, the reader may, at first reading, only memorize their statements, skip their proofs, go on with the main proof, and then return to them. The proofs of Claims 1 and 2 to follow are completely analogous to the corresponding claims in [@Hung]. The only difference is that we deal only with finite cylindrifiers, and in this respect they are closer to the proofs of lemmata 4.22-23 in [@b].
Those two claims are essential in showing that the maps, defined shortly into concrete set algebras based on appropriate Kripke systems (themselves defined via pairs of theories in increasing extensions, i.e. dimensions), are actually homomorphisms. In fact, they have to do with the preservation of the operations of implication and universal quantification. The two claims use lemma \[t3\]. [Claim 1]{} Let $n\in \omega$. If $((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, then the following holds: for any $a,b\in \Sg^{\A_n}X_1$, if $a\rightarrow b\notin \Delta$, then there is a matched pair $((\Delta',\Gamma'), (T', F'))$ of saturated theories in $\Sg^{\A_{n+1}}X_1$ and $\Sg^{\A_{n+1}}X_2,$ respectively, such that $\Delta\subseteq \Delta'$, $T\subseteq T',$ $a\in \Delta'$ and $b\notin \Delta'$. [Proof of Claim 1]{} Since $a\rightarrow b\notin \Delta,$ the theory $(\Delta\cup\{a\}, \{b\})$ is consistent in $\Sg^{\A_n}X_1$. Then, by lemma \[t1\], it can be extended to a complete theory $(\Delta', T')$ in $\Sg^{\A_n}X_1$. Take $$\Phi=\Delta'\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2,$$ and $$\Psi=T'\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2.$$ Then $(\Phi, \Psi)$ is complete in $\Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2.$ We shall now show that $(T\cup \Phi, \Psi)$ is consistent in $\Sg^{\A_n}X_2$. If not, then there are $\theta\in T$, $\phi\in \Phi$ and $\psi\in \Psi$ such that $\theta\land \phi\leq \psi$. So $\theta\leq \phi\rightarrow \psi$. Since $T$ is saturated, we get that $\phi\rightarrow \psi$ is in $T$. Now $\phi\rightarrow \psi\in \Delta\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2\subseteq \Delta'\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2=\Phi$. Since $\phi\in \Phi$ and $\phi\rightarrow \psi\in \Phi,$ we get that $\psi\in \Phi\cap \Psi$. But this means that $(\Phi, \Psi)$ is inconsistent, which is impossible. Thus $(T\cup \Phi, \Psi)$ is consistent. Now the pair $((\Delta', T'), (T\cup \Phi, \Psi))$ satisfies the conditions of lemma \[t3\].
Hence this pair can be extended to a matched pair of saturated theories in $\Sg^{\A_{n+1}}X_1$ and $\Sg^{\A_{n+1}}X_2$. This pair is as required by the conclusion of lemma \[t3\]. [Claim 2]{} Let $n\in \omega$. If $((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, then the following holds: for $x\in \Sg^{\A_n}X_1$ and $j\in I_n=dim\A_n$, if ${\sf q}_{j}x\notin \Delta$, then there are a matched pair $((\Delta',\Gamma'), (T', F'))$ of saturated theories in $\Sg^{\A_{n+2}}X_1$ and $\Sg^{\A_{n+2}}X_2$, respectively, and $u\in I_{n+2}$, such that $\Delta\subseteq \Delta'$, $T\subseteq T'$ and ${\sf s}_u^j x\notin \Delta'$. [Proof]{} Assume that $x\in \Sg^{\A_n}X_1$ and $j\in I_n$ are such that ${\sf q}_{j}x\notin \Delta$. Then there exists $u\in I_{n+1}\sim I_n$ such that $(\Delta, \{{\sf s}_u^j x\})$ is consistent in $\Sg^{\A_{n+1}}X_1$. So $(\Delta, \{{\sf s}_u^jx\})$ can be extended to a complete theory $(\Delta', T')$ in $\Sg^{\A_{n+1}}X_1$. Take $$\Phi=\Delta'\cap \Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2,$$ and $$\Psi=T'\cap \Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2.$$ Then $(\Phi,\Psi)$ is complete in $\Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2$. We shall show that $(T\cup \Phi,\Psi)$ is consistent in $\Sg^{\A_{n+1}}X_2$. If not, then there exist $\theta\in T,$ $\phi\in \Phi$ and $\psi\in \Psi,$ such that $\theta\land \phi\leq \psi$. Hence $\theta\leq \phi\rightarrow \psi$. Now $$\theta={\sf q}_j(\theta)\leq {\sf q}_{j}(\phi\rightarrow \psi).$$ Since $(T,F)$ is saturated in $\Sg^{\A_n}X_2,$ it thus follows that $${\sf q}_{j}(\phi\rightarrow \psi) \in T\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2=\Delta\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2.$$ So ${\sf q}_{j}(\phi\rightarrow \psi)\in \Delta'$, and consequently ${\sf q}_{j}(\phi\rightarrow \psi)\in \Phi$. Also, we have $\phi\in \Phi$. Since $(\Phi, \Psi)$ is complete, we get $\psi\in \Phi$, and this contradicts $\psi\in \Psi$.
Now the pair $((\Delta', \Gamma'), (T\cup \Phi, \Psi))$ satisfies the hypothesis of lemma \[t3\] applied to $\Sg^{\A_{n+1}}X_1$ and $\Sg^{\A_{n+1}}X_2$. The required now follows from the conclusion of lemma \[t3\]. Now that we have proved our claims, we go on with the proof. We prove the theorem when $G$ is a strongly rich semigroup, because in this case we deal with relativized semantics; during the proof we state the necessary modifications for the case when $G$ is the semigroup of all transformations. Let $$K=\{((\Delta, \Gamma), (T,F)): \exists n\in \omega \text { such that } (\Delta, \Gamma), (T,F)$$ $$\text { is a matched pair of saturated theories in } \Sg^{\A_n}X_1, \Sg^{\A_n}X_2\}.$$ Now $((\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*))$ is a matched pair, but the theories are not saturated. However, by lemma \[t3\] there are $T_1=(\Delta_{\omega}, \Gamma_{\omega})$, $T_2=(\Theta_{\omega}, \Gamma_{\omega}^*)$ extending $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$, such that $T_1$ and $T_2$ are saturated in $\Sg^{\A_1}X_1$ and $\Sg^{\A_1}X_2,$ respectively. Let $k_0=((\Delta_{\omega}, \Gamma_{\omega}), (\Theta_{\omega}, \Gamma_{\omega}^*)).$ Then $k_0\in K.$ If $i=((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, let $M_i=dim \A_n$, where $n$ is the least such number, so that $n$ is uniquely determined by $i$. Before going on, we introduce a piece of notation. For a set $M$ and a sequence $p\in {}^{\alpha}M$, $^{\alpha}M^{(p)}$ is the following set: $$\{s\in {}^{\alpha}M: |\{i\in \alpha: s_i\neq p_i\}|<\omega\}.$$ Let $${\mathfrak{K}}=(K, \leq, \{M_i\}_{i\in K}, \{V_i\}_{i\in K}),$$ where $V_i=\bigcup_{p\in G_n}{}^{\alpha}M_i^{(p)}$, and $G_n$ is the strongly rich semigroup determining the similarity type of $\A_n$, with $n$ the least number such that $i$ is a matched pair of saturated theories in $\A_n$.
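For orientation, the set ${}^{\alpha}M^{(p)}$ just introduced is simply the set of finite modifications of $p$; a small illustration (ours, not the author's):

```latex
% With alpha = omega, M = omega and p = Id the identity sequence:
\begin{align*}
(5,1,2,3,4,\ldots) &\in {}^{\omega}\omega^{(Id)}
  &&\text{(differs from $Id$ only at $0$),}\\
(0,0,0,0,0,\ldots) &\notin {}^{\omega}\omega^{(Id)}
  &&\text{(differs from $Id$ at every $i\geq 1$).}
\end{align*}
```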
The order $\leq$ is defined as follows: if $i_1=((\Delta_1, \Gamma_1), (T_1, F_1))$ and $i_2=((\Delta_2, \Gamma_2), (T_2,F_2))$ are in $K$, then define $$i_1\leq i_2\Longleftrightarrow M_{i_1}\subseteq M_{i_2}, \Delta_1\subseteq \Delta_2, T_1\subseteq T_2.$$ This is indeed, as easily checked, a preorder on $K$. We define two maps on $\Sg^{\A}X_1$ and $\Sg^{\A}X_2$, respectively; these will then be pasted, using the freeness of $\A$, to give the required single homomorphism, by noticing that they agree on the common part, that is, on $\Sg^{\A}(X_1\cap X_2).$ Set $\psi_1: \Sg^{\A}X_1\to \mathfrak{F}_{\mathfrak K}$ by $\psi_1(p)=(f_k)$ such that if $k=((\Delta, \Gamma), (T,F))\in K$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, and $M_k=dim \A_n$, then for $x\in V_k=\bigcup_{p\in G_n}{}^{\alpha}M_k^{(p)}$, $$f_k(x)=1\Longleftrightarrow {\sf s}_{x\cup Id_{M_k\sim \alpha}}^{\A_n}p\in \Delta\cup T.$$ To avoid tiresome notation, we shall denote the map $x\cup Id_{M_k\sim \alpha}$ simply by $\bar{x}$ when $M_k$ is clear from context. It is easily verified that $\bar{x}$ is in the semigroup determining the similarity type of $\A_n$, hence the map is well defined. More concisely, we write $$f_k(x)=1\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}p\in \Delta\cup T.$$ The map $\psi_2:\Sg^{\A}X_2\to \mathfrak{F}_{\mathfrak K}$ is defined in exactly the same way. Since the theories are matched pairs, $\psi_1$ and $\psi_2$ agree on the common part, i.e. on $\Sg^{\A}(X_1\cap X_2).$ Here we also make the tacit assumption that if $k\leq k'$ then $V_k\subseteq V_{k'}$ via the embedding $\tau\mapsto \tau\cup Id$. When $G$ is the semigroup of all transformations, with no restrictions on cardinalities, we need not relativize, since $\bar{\tau}$ is in the big semigroup.
In more detail, in this case we take, for $k=((\Delta,\Gamma), (T,F))$ a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, $M_k=dim\A_n$ and $V_k={}^{\alpha}M_k$, and for $x\in {}^{\alpha}M_k$ we set $$f_k(x)=1\Longleftrightarrow {\sf s}_{x\cup Id_{M_k\sim \alpha}}^{\A_n}p\in \Delta\cup T.$$ Before proving that $\psi$ is a homomorphism, we show that $$k_0=((\Delta_{\omega},\Gamma_{\omega}), (\Theta_{\omega}, \Gamma^*_{\omega}))$$ is as desired. Let $x\in V_{k_0}$ be the identity map. Let $p\in \Delta_0\cup \Theta_0$; then ${\sf s}_xp=p\in \Delta_{\omega}\cup \Theta_{\omega},$ and so if $\psi(p)=(f_k)$ then $f_{k_0}(x)=1$. On the other hand, if $p\in \Gamma_0^*$, then $p\notin \Delta_{\omega}\cup \Theta_{\omega}$, and so $f_{k_0}(x)=0$. Then the union $\psi$ of $\psi_1$ and $\psi_2$, $k_0$ and $Id$ are as required, modulo proving that $\psi$ is a homomorphism from $\A$ to the set algebra based on the above defined Kripke system, which we proceed to show. We start with $\psi_1$. Abusing notation, we denote $\psi_1$ by $\psi$, and we write a matched pair in $\A_n$ instead of a matched pair of saturated theories in $\Sg^{\A_n}X_1$, $\Sg^{\A_n}X_2$, since $X_1$ and $X_2$ are fixed. The proof that the postulated map is a homomorphism is similar to the proof in [@Hung], bearing in mind that it is far from identical, because cylindrifiers and their duals are only finitary here. We prove that $\psi$ preserves $\land$. Let $p,q\in A$. Assume that $\psi(p)=(f_k)$ and $\psi(q)=(g_k)$. Then $\psi(p)\land \psi(q)=(f_k\land g_k)$. We now compute $\psi(p\land q)=(h_k)$. Assume that $x\in V_k$, where $k=((\Delta,\Gamma), (T, F))$ is a matched pair in $\A_n$ and $M_k=dim\A_n$.
Then $$h_k(x)=1\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}(p\land q)\in \Delta\cup T$$ $$\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}p\land {\sf s}_{\bar{x}}^{\A_n}q\in \Delta\cup T$$ $$\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}p\in \Delta\cup T\text { and }{\sf s}_{\bar{x}}^{\A_n}q\in \Delta\cup T$$ $$\Longleftrightarrow f_k(x)=1 \text { and } g_k(x)=1$$ $$\Longleftrightarrow (f_k\land g_k)(x)=1,$$ i.e. $\psi(p\land q)=\psi(p)\land \psi(q)$. $\psi$ preserves $\rightarrow.$ (Here we use Claim 1.) Let $p,q\in A$. Let $\psi(p)=(f_k)$ and $\psi(q)=(g_k)$. Let $\psi(p\rightarrow q)=(h_k)$ and $\psi(p)\rightarrow \psi(q)=(h'_k)$. We shall prove that for any $k\in K$ and any $x\in V_k$, we have $$h_k(x)=1\Longleftrightarrow h'_k(x)=1.$$ Let $x\in V_k$, where $k=((\Delta,\Gamma),(T,F))$ is a matched pair in $\A_n$ and $M_k=dim\A_n$. Assume that $h_k(x)=1$. Then we have $${\sf s}_{\bar{x}}^{\A_n}(p\rightarrow q)\in \Delta\cup T,$$ from which we get that $$(*) \ \ \ {\sf s}_{\bar{x}}^{\A_n}p\rightarrow {\sf s}_{\bar{x}}^{\A_n}q\in \Delta\cup T.$$ Let $k'\in K$ be such that $k\leq k'$. Then $k'=((\Delta', \Gamma'), (T', F'))$ is a matched pair in $\A_m$ with $m\geq n$. Assume that $f_{k'}(x)=1$. Then, by definition, we have $$(**) \ \ \ {\sf s}_{\bar{x}}^{\A_m}p\in \Delta'\cup T'.$$ But $\A_m$ is a dilation of $\A_n$, and so $${\sf s}_{\bar{x}}^{\A_m}p={\sf s}_{\bar{x}}^{\A_n}p\text { and } {\sf s}_{\bar{x}}^{\A_m}q={\sf s}_{\bar{x}}^{\A_n}q.$$ From (\*) we get that $${\sf s}_{\bar{x}}^{\A_m}p\rightarrow {\sf s}_{\bar{x}}^{\A_m}q\in \Delta'\cup T'.$$ Combining this with (\*\*), we get ${\sf s}_{\bar{x}}^{\A_m}q\in \Delta'\cup T',$ that is, $g_{k'}(x)=1$. So $$f_{k'}(x)=1\Longrightarrow g_{k'}(x)=1,$$ and since $k'\geq k$ was arbitrary, $h'_{k}(x)=1$.
Conversely, assume that $h_k(x)\neq 1$. Then $${\sf s}_{\bar{x}}^{\A_n}p\rightarrow {\sf s}_{\bar{x}}^{\A_n}q\notin \Delta\cup T,$$ and consequently $${\sf s}_{\bar{x}}^{\A_n}p\rightarrow {\sf s}_{\bar{x}}^{\A_n}q\notin \Delta.$$ From Claim 1, we get that there exists a matched pair $k'=((\Delta',\Gamma'), (T',F'))$ in $\A_{n+1},$ such that $${\sf s}_{\bar{x}}^{\A_{n+1}}p\in \Delta'\text { and } {\sf s}_{\bar{x}}^{\A_{n+1}}q\notin \Delta'.$$ We claim that ${\sf s}_{\bar{x}}^{\A_{n+1}}q\notin T'$; for otherwise, if it were in $T'$, then we would get that $${\sf s}_{\bar{x}}^{\A_{n+1}}q\in \Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2.$$ But $$(\Delta'\cap T'\cap \Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2, \Gamma'\cap F'\cap\Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2)$$ is complete in $\Sg^{\A_{n+1}}X_1\cap \Sg^{\A_{n+1}}X_2,$ and ${\sf s}_{\bar{x}}^{\A_{n+1}}q\notin \Delta'\cap T'$, hence it must be the case that $${\sf s}_{\bar{x}}^{\A_{n+1}}q\in \Gamma'\cap F'.$$ In particular, we have $${\sf s}_{\bar{x}}^{\A_{n+1}}q\in F',$$ which contradicts the consistency of $(T', F'),$ since by assumption ${\sf s}_{\bar{x}}^{\A_{n+1}}q\in T'$. Now we have $${\sf s}_{\bar{x}}^{\A_{n+1}}q\notin \Delta'\cup T',$$ and $${\sf s}_{\bar{x}}^{\A_{n+1}}p\in \Delta'\cup T'.$$ Since $\Delta'\cup T'$ extends $\Delta\cup T$, we get that $h'_k(x)\neq 1$. $\psi$ preserves substitutions. Let $p\in \A$ and let $\sigma\in {}G$. Assume that $\psi(p)=(f_k)$ and $\psi({\sf s}_{\sigma}p)=(g_k).$ Assume that $M_k=dim\A_n$, where $k=((\Delta,\Gamma),(T,F))$ is a matched pair in $\A_n$.
Then, for $x\in V_k$, we have $$g_k(x)=1\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}{\sf s}_{\sigma}^{\A}p\in \Delta\cup T$$ $$\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}{\sf s}_{\bar{\sigma}}^{\A_n}p\in \Delta\cup T$$ $$\Longleftrightarrow {\sf s}_{\bar{x}\circ {\bar{\sigma}}}^{\A_n}p\in \Delta\cup T$$ $$\Longleftrightarrow {\sf s}_{\overline{x\circ \sigma}}^{\A_n}p\in \Delta\cup T$$ $$\Longleftrightarrow f_k(x\circ \sigma)=1.$$ $\psi$ preserves cylindrifications. Let $p\in A.$ Assume that $m\in I$, and assume that $\psi({\sf c}_{m}p)=(f_k)$ and ${\sf c}_m\psi(p)=(g_k)$. Assume that $k=((\Delta,\Gamma),(T,F))$ is a matched pair in $\A_n$ and that $M_k=\dim\A_n$. Let $x\in V_k$. Then $$f_k(x)=1\Longleftrightarrow {\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in \Delta\cup T.$$ We can assume that $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in \Delta.$$ For if not, that is, if $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\notin \Delta\text { and } {\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in T,$$ then $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2;$$ but $$(\Delta\cap T\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2, \Gamma\cap F\cap \Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2)$$ is complete in $\Sg^{\A_n}X_1\cap \Sg^{\A_n}X_2$, and since $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\notin \Delta\cap T,$$ it must be the case that $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in \Gamma\cap F.$$ In particular, $${\sf s}_{\bar{x}}^{\A_n}{\sf c}_{m}p\in F.$$ But this contradicts the consistency of $(T,F)$. Assuming that ${\sf s}_{\bar{x}}{\sf c}_mp\in \Delta,$ we proceed as follows. Let $$\lambda\in \{\eta\in I_n: x^{-1}\{\eta\}=\{\eta\}\}\sim \Delta p.$$ Let $$\tau=x\upharpoonright I_n\sim\{m, \lambda\}\cup \{(m,\lambda),(\lambda, m)\}.$$ Then, by item (5) in theorem \[axioms\], we have $${\sf c}_{\lambda}{\sf s}^{\A_n}_{\bar{\tau}}p={\sf s}_{\bar{\tau}}^{\A_n}{\sf c}_{m}p={\sf s}_{\bar{x}}^{\A_n}{\sf c}_mp\in \Delta.$$ We introduce a piece of helpful notation.
For a function $f$, let $f(m\to u)$ be the function that agrees with $f$ except at $m$, where its value is $u$. Since $\Delta$ is saturated, there exists $u\notin \Delta x$ such that ${\sf s}_u^{\lambda}{\sf s}_xp\in \Delta$, and so ${\sf s}_{(x(m\to u))} p\in \Delta$. This implies that $x$ lies in the cylindrification ${\sf c}_m$ of $\psi(p)$ at $k$, and so $g_k(x)=1$. Conversely, assume that $g_k(x)=1$, where $k=((\Delta,\Gamma),(T,F))$ is a matched pair in $\A_n$. Then there exists $y\in V_k$ such that $y\equiv_m x$ and the $k$-th component of $\psi(p)$ takes the value $1$ at $y$. Then ${\sf s}_{\bar{y}}p\in \Delta\cup T$. Hence ${\sf s}_{\bar{y}}{\sf c}_mp\in \Delta\cup T$, and so ${\sf s}_{\bar{x}}{\sf c}_mp\in \Delta\cup T$; thus $f_k(x)=1$ and we are done. $\psi$ preserves universal quantifiers. (Here we use Claim 2.) Let $p\in A$ and $m\in I$. Let $\psi(p)=(f_k)$, ${\sf q}_{m}\psi(p)=(g_k)$ and $\psi({\sf q}_{m}p)=(h_k).$ Let $k=((\Delta,\Gamma), (T,F))$ be a matched pair in $\A_n$ and let $x\in V_k$. Assume that $h_k(x)=1$. Then $${\sf s}_{\bar{x}}^{\A_n}{\sf q}_{m}p\in \Delta\cup T,$$ and so $${\sf s}_{\bar{y}}^{\A_n}{\sf q}_{m}p\in \Delta\cup T \text{ for all } y\in {}^IM_k, y\equiv_m x.$$ Let $k'\geq k$. Then $k'=((\Delta',\Gamma'), (T',F'))$ is a matched pair in $\A_l$, $l\geq n$, with $\Delta\subseteq \Delta'$ and $T\subseteq T'.$ Since $p\geq {\sf q}_{m}p$, it follows that $${\sf s}_{\bar{y}}^{\A_n}p\in \Delta'\cup T' \text{ for all } y\in {}^IM_k, y\equiv_mx.$$ Thus $g_k(x)=1$.
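The concrete cylindrification used in this clause has a simple set-theoretic reading: in a set algebra of relations, ${\sf c}_m X$ holds at $x$ iff some update $x(m\to u)$ lies in $X$. A minimal sketch in Python, on a toy two-element base set of my own choosing (not the paper's algebras):

```python
from itertools import product

def update(x, m, u):
    # x(m -> u): agree with x everywhere except coordinate m, whose value is u
    y = list(x)
    y[m] = u
    return tuple(y)

def cylindrify(X, m, U, n):
    # c_m X = {x in U^n : there is u in U with x(m -> u) in X}
    return {x for x in product(U, repeat=n)
            if any(update(x, m, u) in X for u in U)}

U = {0, 1}
X = {(0, 1)}                           # a single point in the 2-dimensional algebra
print(sorted(cylindrify(X, 0, U, 2)))  # [(0, 1), (1, 1)]: coordinate 0 is freed
```

The output shows that cylindrifying along coordinate $0$ erases the information carried by that coordinate while preserving the rest, which is exactly the behaviour the saturation argument above exploits.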
Now, conversely, assume that $h_k(x)=0$, where $k=((\Delta,\Gamma), (T,F))$ is a matched pair in $\A_n$. Then we have $${\sf s}_{\bar{x}}^{\A_n}{\sf q}_{m}p\notin \Delta\cup T,$$ and so $${\sf s}_{\bar{x}}^{\A_n}{\sf q}_{m}p\notin \Delta.$$ Let $$\lambda\in \{\eta\in I_n: x^{-1}\{\eta\}=\{\eta\}\}\sim \Delta p.$$ Let $$\tau=x\upharpoonright I_n\sim\{m, \lambda\}\cup \{(m,\lambda),(\lambda, m)\}.$$ Then, as in the existential case, using the polyadic axioms, we get $${\sf q}_{\lambda}{\sf s}_{\tau}p={\sf s}_{\tau}{\sf q}_{m}p={\sf s}_{x}{\sf q}_mp\notin \Delta.$$ Then there exists $u$ such that ${\sf s}_u^{\lambda}{\sf s}_xp\notin \Delta.$ Also ${\sf s}_u^{\lambda}{\sf s}_xp\notin T$; for if it were, then by the previous reasoning, since it is an element of $\Sg^{\A_{n+2}}X_1\cap \Sg^{\A_{n+2}}X_2$, the completeness of $(\Delta\cap T, \Gamma\cap F)$ would yield a contradiction. Then we get that ${\sf s}_{(x(m\to u))}p\notin \Delta\cup T$, which means that $g_k(x)=0,$ and we are done. We now deal with the case when $G$ is the semigroup of all finite transformations on $\alpha$. In this case, we stipulate that $\alpha\sim \Delta x$ is infinite for all elements $x$ of the algebras considered. To deal with this case, we need to define certain free algebras, called dimension restricted free algebras. Such algebras were introduced by Henkin, Monk and Tarski. The free algebras defined in the usual way have the dimension sets of their elements equal to the dimension of the algebra, which is not what we want here. For a class $K$, ${\bf S}K$ stands for the class of subalgebras of members of $K$, ${\bf P}K$ for the class of direct products of members of $K$, and ${\bf H}K$ for the class of homomorphic images of members of $K$. In particular, for a class $K$, ${\bf HSP}K$ is the variety generated by $K$. Our dimension restricted free algebras are an instance of certain independently generated algebras, obtained by an appropriate relativization of the universal algebraic concept of free algebras.
For an algebra $\A,$ we write $R\in Con\A$ if $R$ is a congruence relation on $\A.$ Assume that $K$ is a class of algebras of similarity type $t$ and $S$ is any set of ordered pairs of words of $\Fr_{\alpha}^t,$ the absolutely free algebra of type $t$ on $\alpha$ generators. Let $$Cr_{\alpha}^{(S)}K=\bigcap \{R\in Con \Fr_{\alpha}^t: \Fr_{\alpha}^t/R\in \mathbf{S}K, S\subseteq R\}$$ and let $$\Fr_{\alpha}^{(S)}K=\Fr_{\alpha}^t/Cr_{\alpha}^{(S)}K.$$ $\Fr_{\alpha}^{(S)}K$ is called the free algebra over $K$ with $\alpha$ generators subject to the defining relations $S$. As a special case, we obtain the dimension restricted free algebras, defined next. Let $\delta$ be a cardinal. Let $\alpha$ be an ordinal, and let $G$ be the semigroup of finite transformations on $\alpha$. Let ${}_{\alpha}\Fr_{\delta}$ be the absolutely free algebra on $\delta$ generators and of the type of $GPHA_{\alpha}$. Let $\rho\in {}^{\delta}\wp(\alpha)$. Let $L$ be a class having the same similarity type as $GPHA_{\alpha}.$ Let $$Cr_{\delta}^{(\rho)}L=\bigcap\{R: R\in Con_{\alpha}\Fr_{\delta}, {}_{\alpha}\Fr_{\delta}/R\in \mathbf{SP}L, {\mathsf c}_k^{_{\alpha}\Fr_{\delta}}{\eta}/R=\eta/R \text { for each }$$ $$\eta<\delta \text { and each }k\in \alpha\smallsetminus \rho(\eta)\}$$ and $$\Fr_{\delta}^{\rho}L={}_{\alpha}\Fr_{\delta}/Cr_{\delta}^{(\rho)}L.$$ The ordinal $\alpha$ does not figure in the notation $Cr_{\delta}^{(\rho)}L$ and $\Fr_{\delta}^{(\rho)}L$, though it is involved in their definition; however, $\alpha$ will always be clear from context, so that no confusion is likely to ensue. Assume that $\delta$ is a cardinal, $L\subseteq GPHA_{\alpha}$, $\A\in L$, $x=\langle x_{\eta}:\eta<\delta\rangle\in {}^{\delta}A$ and $\rho\in {}^{\delta}\wp(\alpha)$. We say that the sequence $x$ $L$-freely generates $\A$ under the dimension restricting function $\rho$, or simply that $x$ freely generates $\A$ under $\rho,$ if the following two conditions hold: $\A=\Sg^{\A}Rg(x)$ and $\Delta^{\A} x_{\eta}\subseteq \rho(\eta)$ for all $\eta<\delta$.
Whenever $\B\in L$, $y=\langle y_{\eta}: \eta<\delta\rangle\in {}^{\delta}B$ and $\Delta^{\B}y_{\eta}\subseteq \rho(\eta)$ for every $\eta<\delta$, then there is a unique homomorphism $h$ from $\A$ to $\B$ such that $h\circ x=y$. The second item says that dimension restricted free algebras have the universal property of free algebras with respect to algebras whose dimensions are restricted in the same way. The following theorem can be easily distilled from the literature on cylindric algebras. Assume that $\delta$ is a cardinal, $L\subseteq GPHA_{\alpha}$, $\A\in L$, $x=\langle x_{\eta}:\eta<\delta\rangle\in {}^{\delta}A$ and $\rho\in {}^{\delta}\wp(\alpha).$ Then the following hold: $\Fr_{\delta}^{\rho}L\in GPHA_{\alpha}$, and $x=\langle \eta/Cr_{\delta}^{\rho}L: \eta<\delta \rangle$ $\mathbf{SP}L$-freely generates $\Fr_{\delta}^{\rho}L$ under $\rho$. In order that $\A\cong \Fr_{\delta}^{\rho}L$, it is necessary and sufficient that there exists a sequence $x\in {}^{\delta}A$ which $L$-freely generates $\A$ under $\rho$. [Proof]{} [@HMT1] theorems 2.5.35, 2.5.36, 2.5.37. Note that when $\rho(i)=\alpha$ for all $i$, then $\rho$ does not restrict the dimension, and we recover the notion of ordinary free algebras. That is, for such a $\rho$, we have $\Fr_{\delta}^{\rho}GPHA_{\alpha}\cong \Fr_{\delta}GPHA_{\alpha}.$ Now we formulate the analogue of theorem \[main\] for dimension restricted algebras, which addresses infinitely many cases, because we have infinitely many dimension restricted free algebras having the same number of generators. \[main2\] Let $G$ be the semigroup of finite transformations on an infinite set $\alpha$ and let $\delta$ be a cardinal $>0$. Let $\rho\in {}^{\delta}\wp(\alpha)$ be such that $\alpha\sim \rho(i)$ is infinite for every $i\in \delta$. Let $\A$ be the free $G$ algebra generated by $X$ restricted by $\rho$; that is, $\A=\Fr_{\delta}^{\rho}GPHA_{\alpha},$ and suppose that $X=X_1\cup X_2$.
Let $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ be two consistent theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively. Assume that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$. Assume, further, that $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$. Then there exist a Kripke system $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K},\{V_k\}_{k\in K}),$ a homomorphism $\psi:\A\to \mathfrak{F}_K,$ $k_0\in K$, and $x\in V_{k_0}$, such that for all $p\in \Delta_0\cup \Theta_0$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=1$, and for all $p\in \Gamma_0^*$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=0$. [Proof]{} We state the modifications needed in the above proof of theorem \[main\]. Form the sequence of minimal dilations $(\A_n:n\in \omega)$ built on the sequence $(K_n:n\in \omega)$, with $|K_n|=\beta$, where $I$ is a superset of $\alpha$ and $\beta=|I\sim \alpha|=\max(|A|, |\alpha|)$. If $i=((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, let $M_i=\dim \A_n$, where $n$ is the least such number, so $n$ is unique to $i$. Define $K$ as in the proof of theorem \[main\]; that is, let $$K=\{((\Delta, \Gamma), (T,F)): \exists n\in \omega \text { such that } (\Delta, \Gamma), (T,F)$$ $$\text { is a matched pair of saturated theories in } \Sg^{\A_n}X_1, \Sg^{\A_n}X_2\}.$$ Let $${\mathfrak{K}}=(K, \leq, \{M_i\}, \{V_i\})_{i\in K},$$ where now $V_i={}^{\alpha}M_i^{(Id)}=\{s\in {}^{\alpha}M_i: |\{j\in \alpha: s_j\neq j\}|<\omega\},$ and the order $\leq$ is defined as follows: if $i_1=((\Delta_1, \Gamma_1), (T_1, F_1))$ and $i_2=((\Delta_2, \Gamma_2), (T_2,F_2))$ are in $\mathfrak{K}$, then $$i_1\leq i_2\Longleftrightarrow M_{i_1}\subseteq M_{i_2}, \Delta_1\subseteq \Delta_2, T_1\subseteq T_2.$$ This is a preorder on $K$.
Set $\psi_1: \Sg^{\A}X_1\to \mathfrak{F}_{\mathfrak K}$ by $\psi_1(p)=(f_k)$, where, if $k=((\Delta, \Gamma), (T,F))\in \mathfrak{K}$ is a matched pair of saturated theories in $\Sg^{\A_n}X_1$ and $\Sg^{\A_n}X_2$, and $M_k=\dim \A_n$, then for $x\in V_k={}^{\alpha}M_k^{(Id)}$, $$f_k(x)=1\Longleftrightarrow {\sf s}_{x\cup (Id_{M_k\sim \alpha})}^{\A_n}p\in \Delta\cup T.$$ Define $\psi_2$ analogously. The rest of the proof is identical to the previous one. It is known that the condition $\Gamma_0\subseteq \Gamma_0^*$ cannot be omitted. On the other hand, to prove our completeness theorem, we need the following weaker version of theorem \[main\], with a slight modification in the proof, which is still a step-by-step technique, though we do not ‘zig-zag’. \[rep\] Let $\A\in GPHA_{\alpha}$. Let $(\Delta_0, \Gamma_0)$ be consistent. Suppose that $I$ is a set such that $\alpha\subseteq I$ and $|I\sim \alpha|=\max (|A|,|\alpha|)$. Then there exists a dilation $\B\in GPHA_I$ of $\A$ and a theory $T=(\Delta_{\omega}, \Gamma_{\omega})$ extending $(\Delta_0, \Gamma_0)$, such that $T$ is consistent and saturated in $\B$. Furthermore, there exist a Kripke system $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K},\{V_k\}_{k\in K}),$ a homomorphism $\psi:\A\to \mathfrak{F}_K,$ $k_0\in K$, and $x\in V_{k_0}$, such that for all $p\in \Delta_0$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=1$, and for all $p\in \Gamma_0$, if $\psi(p)=(g_k)$, then $g_{k_0}(x)=0.$ [Proof]{} We deal only with the case when $G$ is strongly rich. The other cases can be dealt with in a similar manner, with the obvious modifications, as indicated above. As opposed to theorem \[main\], we use theories rather than pairs of theories, since we are not dealing with two subalgebras simultaneously. The first assertion follows from \[t2\]. Now we prove the second. The proof is a simpler version of the proof of \[main\].
Let $I$ be a set such that $\beta=|I\sim \alpha|=\max(|A|, |\alpha|).$ Let $(K_n:n\in \omega)$ be a family of pairwise disjoint sets such that $|K_n|=\beta.$ Define a sequence of algebras $\A=\A_0\subseteq \A_1\subseteq \A_2\subseteq \ldots \subseteq \A_n\subseteq\ldots$ such that $\A_{n+1}$ is a minimal dilation of $\A_n$ and $\dim(\A_{n+1})=\dim\A_n\cup K_n$. We denote $\dim(\A_n)$ by $I_n$ for $n\geq 1$. If $(\Delta, \Gamma)$ is saturated in $\A_n$, then the following analogues of Claims 1 and 2 in theorem \[main\] hold. For any $a,b\in \A_n$, if $a\rightarrow b\notin \Delta$, then there is a saturated theory $(\Delta',\Gamma')$ in $\A_{n+1}$ such that $\Delta\subseteq \Delta'$, $a\in \Delta'$ and $b\notin \Delta'$. If $(\Delta, \Gamma)$ is saturated in $\A_n$, then for all $x\in \A_n$ and $j\in I_n$, if ${\sf q}_{j}x\notin \Delta,$ then there exist a saturated theory $(\Delta',\Gamma')$ in $\A_{n+2}$ and $u\in I_{n+2}$ such that $\Delta\subseteq \Delta'$ and ${\sf s}_j^u x\notin \Delta'$. Now let $$K=\{(\Delta, \Gamma): \exists n\in \omega \text { such that } (\Delta,\Gamma) \text { is saturated in }\A_n\}.$$ If $i=(\Delta, \Gamma)$ is a saturated theory in $\A_n$, let $M_i=\dim \A_n$, where $n$ is the least such number, so $n$ is unique to $i$. If $i_1=(\Delta_1, \Gamma_1)$ and $i_2=(\Delta_2, \Gamma_2)$ are in $K$, then set $$i_1\leq i_2\Longleftrightarrow M_{i_1}\subseteq M_{i_2}, \Delta_1\subseteq \Delta_2.$$ This is a preorder on $K$; define the Kripke system ${\mathfrak K}$ based on the set of worlds $K$ as before.
Set $\psi: \A\to \mathfrak{F}_{\mathfrak K}$ by $\psi(p)=(f_k)$, where, if $k=(\Delta, \Gamma)\in \mathfrak{K}$ is saturated in $\A_n$ and $M_k=\dim \A_n$, then for $x\in V_k=\bigcup_{\tau\in G_n}{}^{\alpha}M_k^{(\tau)}$, $$f_k(x)=1\Longleftrightarrow {\sf s}_{x\cup (Id_{M_k\sim \alpha})}^{\A_n}p\in \Delta.$$ Let $k_0=(\Delta_{\omega}, \Gamma_{\omega})$ be a complete saturated extension of $(\Delta_0, \Gamma_0)$ in $\A_1$; then $\psi,$ $k_0$ and $Id$ are as desired. The analogues of Claims 1 and 2 in theorem \[main\] are used to show that $\psi$ so defined preserves implication and universal quantifiers. Presence of diagonal elements ============================= All results in Part 1, up to the previous theorem, are proved in the absence of diagonal elements. Now let us see how far we can go if we have diagonal elements. Considering diagonal elements, as we shall see, turns out to be problematic but not hopeless. Our representation theorem has to respect diagonal elements, and this seems to be an impossible task in the presence of infinitary substitutions, unless we make a compromise that is, from our point of view, acceptable. The interaction of substitutions based on infinitary transformations with the existence of diagonal elements tends to make matters ‘blow up’; indeed, this happens even in the classical case, when the class of (ordinary) set algebras ceases to be closed under ultraproducts [@S]. The natural thing to do is to avoid those infinitary substitutions at the start, while finding the interpolant possibly using such substitutions. We shall also show that in some cases the interpolant has to use infinitary substitutions, even if the original implication uses only finite transformations. So for an algebra $\A$, we let $\Rd\A$ denote its reduct obtained by discarding the infinitary substitutions. $\Rd\A$ satisfies the cylindric algebra axioms. \[main3\] Let $\alpha$ be an infinite set.
Let $G$ be a semigroup of transformations on $\alpha$ containing at least one infinitary transformation. Let $\A\in GPHAE_{\alpha}$ be the free $G$ algebra generated by $X$, and suppose that $X=X_1\cup X_2$. Let $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ be two consistent theories in $\Sg^{\Rd\A}X_1$ and $\Sg^{\Rd\A}X_2,$ respectively. Assume that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$. Assume, further, that $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\Rd\A}X_1\cap \Sg^{\Rd\A}X_2$. Then there exist a Kripke system $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K},\{V_k\}_{k\in K}),$ a homomorphism $\psi:\A\to \mathfrak{F}_K,$ $k_0\in K$, and $x\in V_{k_0}$, such that for all $p\in \Delta_0\cup \Theta_0$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=1$, and for all $p\in \Gamma_0^*$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=0$. [Proof]{} The first half of the proof is almost identical to that of theorem \[main\]. We highlight the main steps for the convenience of the reader, except that we only deal with the case when $G$ is strongly rich. Assume, as usual, that $\alpha$, $G$, $\A$, $X_1$, $X_2$, and everything else in the hypothesis are given. Let $I$ be a set such that $\beta=|I\sim \alpha|=\max(|A|, |\alpha|).$ Let $(K_n:n\in \omega)$ be a family of pairwise disjoint sets such that $|K_n|=\beta.$ Define a sequence of algebras $\A=\A_0\subseteq \A_1\subseteq \A_2\subseteq \ldots \subseteq \A_n\subseteq\ldots$ such that $\A_{n+1}$ is a minimal dilation of $\A_n$ and $\dim(\A_{n+1})=\dim\A_n\cup K_n$. We denote $\dim(\A_n)$ by $I_n$ for $n\geq 1$. The proofs of Claims 1 and 2 in the proof of \[main\] are the same. Now we prove the theorem when $G$ is a strongly rich semigroup.
Let $$K=\{((\Delta, \Gamma), (T,F)): \exists n\in \omega \text { such that } (\Delta, \Gamma), (T,F)$$ $$\text { is a matched pair of saturated theories in } \Sg^{\Rd\A_n}X_1, \Sg^{\Rd\A_n}X_2\}.$$ Now $((\Delta_0, \Gamma_0), (\Theta_0, \Gamma_0^*))$ is a matched pair, but the theories are not necessarily saturated. However, by lemma \[t3\] there are $T_1=(\Delta_{\omega}, \Gamma_{\omega})$ and $T_2=(\Theta_{\omega}, \Gamma_{\omega}^*)$ extending $(\Delta_0, \Gamma_0)$ and $(\Theta_0, \Gamma_0^*)$, such that $T_1$ and $T_2$ are saturated in $\Sg^{\Rd\A_1}X_1$ and $\Sg^{\Rd\A_1}X_2,$ respectively. Let $k_0=((\Delta_{\omega}, \Gamma_{\omega}), (\Theta_{\omega}, \Gamma_{\omega}^*)).$ Then $k_0\in K$, and $k_0$ will be the desired world; $x$ will be specified later (in fact, $x$ will be the identity map on some specified domain). If $i=((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\Rd\A_n}X_1$ and $\Sg^{\Rd\A_n}X_2$, let $M_i=\dim \A_n$, where $n$ is the least such number, so $n$ is unique to $i$. Let $${\bf K}=(K, \leq, \{M_i\}, \{V_i\})_{i\in K},$$ where $V_i=\bigcup_{p\in G_n, p\text { a finitary transformation }}{}^{\alpha}M_i^{(p)}$ (here we are considering only substitutions that move only finitely many points), $G_n$ is the strongly rich semigroup determining the similarity type of $\A_n$, and $n$ is the least number such that $i$ is a matched pair of saturated theories in $\A_n$; the order $\leq$ is defined as follows: if $i_1=((\Delta_1, \Gamma_1), (T_1, F_1))$ and $i_2=((\Delta_2, \Gamma_2), (T_2,F_2))$ are in $\bold K$, then set $$i_1\leq i_2\Longleftrightarrow M_{i_1}\subseteq M_{i_2}, \Delta_1\subseteq \Delta_2, T_1\subseteq T_2.$$ We are not yet there: to preserve diagonal elements, we have to factor out $\bold K$ by an infinite family of equivalence relations, each defined on the dimension of $\A_n$ for some $n$, which will actually turn out to be congruences in an exact sense.
As usual, using the freeness of $\A$, we will define two maps on $\A_1=\Sg^{\Rd\A}X_1$ and $\A_2=\Sg^{\Rd\A}X_2$, respectively; these will then be pasted together to give the required single homomorphism. Let $i=((\Delta, \Gamma), (T,F))$ be a matched pair of saturated theories in $\Sg^{\Rd\A_n}X_1$ and $\Sg^{\Rd\A_n}X_2$, and let $M_i=\dim \A_n$, where $n$ is the least such number, so $n$ is unique to $i$. For $k,l\in \dim\A_n=I_n$, set $k\sim_i l$ iff ${\sf d}_{kl}^{\A_n}\in \Delta\cup T$. This is well defined since $\Delta\cup T\subseteq \A_n$. We omit the superscript $\A_n$. These are infinitely many relations, one for each $i$, defined on $I_n$, with $n$ depending uniquely on $i$; we denote them uniformly by $\sim$ to avoid complicated, unnecessary notation. We hope that no confusion is likely to ensue. We claim that $\sim$ is an equivalence relation on $I_n.$ Indeed, $\sim$ is reflexive because ${\sf d}_{ii}=1$, and symmetric because ${\sf d}_{ij}={\sf d}_{ji}$; finally, $\sim$ is transitive because for $k,l,u\in I_n$ with $l\notin \{k,u\}$, we have $${\sf d}_{kl}\cdot {\sf d}_{lu}\leq {\sf c}_l({\sf d}_{kl}\cdot {\sf d}_{lu})={\sf d}_{ku},$$ and we can assume that $\Delta\cup T$ is closed upwards. For $\sigma,\tau \in V_k,$ define $\sigma\sim \tau$ iff $\sigma(i)\sim \tau(i)$ for all $i\in \alpha$. Then clearly $\sim$ is an equivalence relation on $V_k$. Let $W_k=V_k/\sim$, and $\mathfrak{K}=(K, \leq, M_k, W_k)_{k\in K}$, with $\leq$ defined on $K$ as above. We write $h=[x]$ for $x\in V_k$ if $x(i)/\sim =h(i)$ for all $i\in \alpha$; of course $x$ may not be unique, but this will not matter. Let $\F_{\mathfrak K}$ be the set algebra based on the new Kripke system ${\mathfrak K}$ obtained by factoring out $\bold K$.
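The passage from the diagonals in $\Delta\cup T$ to the equivalence $\sim$ on indices, and its pointwise lift to assignments, can be illustrated on toy data. A minimal sketch in Python (the pairs standing for diagonals in the theory are invented for illustration, not taken from the paper): generate $\sim$ as the reflexive-symmetric-transitive closure of the given pairs, then compare assignments coordinatewise.

```python
def closure(pairs, domain):
    # reflexive-symmetric-transitive closure, mirroring d_ii = 1,
    # d_ij = d_ji, and the transitivity argument via c_l above
    rel = {(i, i) for i in domain}
    rel |= set(pairs) | {(j, i) for (i, j) in pairs}
    changed = True
    while changed:
        new = {(i, l) for (i, j) in rel for (k, l) in rel if j == k}
        changed = not new <= rel
        rel |= new
    return rel

def equivalent(sigma, tau, rel):
    # the pointwise lift: sigma ~ tau iff sigma(i) ~ tau(i) for all i
    return all((sigma[i], tau[i]) in rel for i in range(len(sigma)))

diag_in_theory = [(0, 1), (2, 3)]        # say d_01 and d_23 lie in Delta u T
rel = closure(diag_in_theory, range(4))
print(equivalent((0, 2, 2), (1, 3, 3), rel))  # True: agree up to ~
print(equivalent((0, 2, 2), (2, 2, 2), rel))  # False: 0 and 2 are inequivalent
```

Quotienting the assignments by this lifted relation is precisely what produces the sets $W_k=V_k/\sim$ on which the diagonal-respecting homomorphism is defined.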
Set $\psi_1: \Sg^{\Rd\A}X_1\to \mathfrak{F}_{\mathfrak K}$ by $\psi_1(p)=(f_k)$, where, if $k=((\Delta, \Gamma), (T,F))\in K$ is a matched pair of saturated theories in $\Sg^{\Rd\A_n}X_1$ and $\Sg^{\Rd\A_n}X_2$, and $M_k=\dim \A_n$, with $n$ unique to $k$, then for $x\in V_k$, $$f_k([x])=1\Longleftrightarrow {\sf s}_{x\cup (Id_{M_k\sim \alpha})}^{\A_n}p\in \Delta\cup T,$$ where $[x]\in W_k$ is defined as above. To avoid cumbersome notation, we write ${\sf s}_{x}^{\A_n}p$, or even simply ${\sf s}_xp,$ for ${\sf s}_{x\cup (Id_{M_k\sim \alpha})}^{\A_n}p$. No ambiguity should arise, because the dimension $n$ will be clear from context. We need to check that $\psi_1$ is well defined. It suffices to show that if $\sigma, \tau\in V_k$, $\sigma \sim \tau$ and $p\in \A_n$, with $n$ unique to $k$, then $${\sf s}_{\tau}p\in \Delta\cup T\text { iff } {\sf s}_{\sigma}p\in \Delta\cup T.$$ This can be proved by induction on the cardinality of $J=\{i\in I_n: \sigma i\neq \tau i\}$, which is finite since we are only taking finite substitutions. If $J$ is empty, the result is obvious. Otherwise, assume that $k\in J$. We recall the following piece of notation. For $\eta\in V_k$ and $k,l<\alpha$, write $\eta(k\mapsto l)$ for the $\eta'\in V_k$ that is the same as $\eta$ except that $\eta'(k)=l.$ Now take any $$\lambda\in \{\eta\in I_n: \sigma^{-1}\{\eta\}= \tau^{-1}\{\eta\}=\{\eta\}\}\smallsetminus \Delta p.$$ This $\lambda$ exists because $\sigma$ and $\tau$ are finite transformations and $\A_n$ is a dilation with enough spare dimensions.
We have, by the cylindric axioms, (a) $${\sf s}_{\sigma}p={\sf s}_{\sigma k}^{\lambda}{\sf s}_{\sigma (k\mapsto \lambda)}p.$$ We also have (b) $${\sf s}_{\tau k}^{\lambda}({\sf d}_{\lambda, \sigma k}\land {\sf s}_{\sigma} p) ={\sf d}_{\tau k, \sigma k}\land {\sf s}_{\sigma} p,$$ and (c) $${\sf s}_{\tau k}^{\lambda}({\sf d}_{\lambda, \sigma k}\land {\sf s}_{\sigma(k\mapsto \lambda)}p)= {\sf d}_{\tau k, \sigma k}\land {\sf s}_{\sigma(k\mapsto \tau k)}p,$$ and (d) $${\sf d}_{\lambda, \sigma k}\land {\sf s}_{\sigma k}^{\lambda}{\sf s}_{{\sigma}(k\mapsto \lambda)}p= {\sf d}_{\lambda, \sigma k}\land {\sf s}_{{\sigma}(k\mapsto \lambda)}p.$$ Then by (b), (a), (d) and (c), we get $${\sf d}_{\tau k, \sigma k}\land {\sf s}_{\sigma} p= {\sf s}_{\tau k}^{\lambda}({\sf d}_{\lambda,\sigma k}\land {\sf s}_{\sigma}p)$$ $$={\sf s}_{\tau k}^{\lambda}({\sf d}_{\lambda, \sigma k}\land {\sf s}_{\sigma k}^{\lambda} {\sf s}_{{\sigma}(k\mapsto \lambda)}p)$$ $$={\sf s}_{\tau k}^{\lambda}({\sf d}_{\lambda, \sigma k}\land {\sf s}_{{\sigma}(k\mapsto \lambda)}p)$$ $$= {\sf d}_{\tau k, \sigma k}\land {\sf s}_{\sigma(k\mapsto \tau k)}p.$$ The conclusion now follows from the induction hypothesis. Now $\psi_1$ respects all quasipolyadic equality operations, that is, finite substitutions (with the proof as before; recall that we only have finite substitutions, since we are considering $\Sg^{\Rd\A}X_1$), except possibly for diagonal elements. We check those. Recall that for a concrete Kripke frame $\F_{\bold W}$ based on ${\bold W}=(W,\leq ,V_k, W_k),$ the concrete diagonal element ${\sf d}_{ij}$ is given by the tuple $(g_k: k\in K)$ such that for $y\in V_k$, $g_k(y)=1$ iff $y(i)=y(j)$.
Now for the abstract diagonal element in $\A$, we have $\psi_1({\sf d}_{ij})=(f_k:k\in K)$, such that if $k=((\Delta, \Gamma), (T,F))$ is a matched pair of saturated theories in $\Sg^{\Rd\A_n}X_1$, $\Sg^{\Rd\A_n}X_2$, with $n$ unique to $k$, we have $f_k([x])=1$ iff ${\sf s}_{x}{\sf d}_{ij}\in \Delta \cup T$ (this is well defined since $\Delta\cup T\subseteq \A_n$). But the latter is equivalent to ${\sf d}_{x(i), x(j)}\in \Delta\cup T$, which in turn is equivalent to $x(i)\sim x(j)$, that is, $[x](i)=[x](j),$ and so $(f_k)\in {\sf d}_{ij}^{\F_{\mathfrak K}}$. The reverse implication is the same. We can safely assume that $X_1\cup X_2=X$ generates $\A$. Let $\psi=(\psi_1\cup \psi_2)\upharpoonright X$, where $\psi_2$ is defined on $\Sg^{\Rd\A}X_2$ analogously to $\psi_1$. Then $\psi$ is a function, since, by definition, $\psi_1$ and $\psi_2$ agree on $X_1\cap X_2$. Now by freeness, $\psi$ extends to a homomorphism, which we also denote by $\psi$, from $\A$ into $\F_{\mathfrak K}$. And we are done, as usual, by $\psi$, $k_0$ and $Id\in V_{k_0}$. Theorem \[main2\] generalizes, as is, to the structures expanded by diagonal elements. That is to say, we have: \[main4\] Let $G$ be the semigroup of finite transformations on an infinite set $\alpha$ and let $\delta$ be a cardinal $>0$. Let $\rho\in {}^{\delta}\wp(\alpha)$ be such that $\alpha\sim \rho(i)$ is infinite for every $i\in \delta$. Let $\A$ be the free $G$ algebra with equality generated by $X$ restricted by $\rho$; that is, $\A=\Fr_{\delta}^{\rho}GPHAE_{\alpha},$ and suppose that $X=X_1\cup X_2$. Let $(\Delta_0, \Gamma_0)$, $(\Theta_0, \Gamma_0^*)$ be two consistent theories in $\Sg^{\A}X_1$ and $\Sg^{\A}X_2,$ respectively. Assume that $\Gamma_0\subseteq \Sg^{\A}(X_1\cap X_2)$ and $\Gamma_0\subseteq \Gamma_0^*$. Assume, further, that $(\Delta_0\cap \Theta_0\cap \Sg^{\A}X_1\cap \Sg^{\A}X_2, \Gamma_0)$ is complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$.
Then there exist a Kripke system $\mathfrak{K}=(K,\leq, \{X_k\}_{k\in K},\{V_k\}_{k\in K}),$ a homomorphism $\psi:\A\to \mathfrak{F}_K,$ $k_0\in K$, and $x\in V_{k_0}$, such that for all $p\in \Delta_0\cup \Theta_0$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=1$, and for all $p\in \Gamma_0^*$, if $\psi(p)=(f_k)$, then $f_{k_0}(x)=0$. [Proof]{} Since $G$ consists only of finite transformations, $\Rd\A$ is just $\A$, and the proof of theorem \[main3\] applies verbatim. Results in logical form ======================= We start by describing the syntactical and semantical notions needed in what follows. Informally, a language is a triple $(V, P, G)$, where $V$ is a set providing an infinite supply of variables, $P$ is another set, of predicates, disjoint from $V,$ and $G$ is a semigroup of transformations on $V$. There is no restriction on the arity (sometimes referred to as the rank) of $p\in P$; that is, the arity may be infinite. Formulas are defined recursively in the usual way. Atomic formulas are of the form $p\bar{v}$, where the length of $\bar{v}$ is equal to the arity of $p$. If $\phi, \psi$ are formulas and $v\in V,$ then $\phi\lor\psi$, $\phi\land \psi$, $\phi\to \psi$, $\exists v\phi$ and $\forall v\phi$ are formulas. For each $\tau\in G$, ${\sf S}({\tau})$ is a unary operation on formulas; that is, for any formula $\phi$, ${\sf S}{(\tau)}\phi$ is another formula, reflecting the metalogical operation of simultaneous substitution of variables for variables (determined by $\tau$), such that the substitution is free. Notice that although we allow infinitary predicates, quantification is only over finitely many variables; that is, the scope of quantifiers is finite. We will also deal with the case when we have equality; for this purpose we add a new logical symbol $=$ and we view it, as usual, as a binary relation. We recall some basic semantical notions for intuitionistic logic, adapted to the presence of atomic formulas possibly having infinite length.
An intuitionistic or Kripke frame is a triple $\bold W=(W, R, \{D_w\}_{w\in W})$, where $W$ is a non-empty set whose elements are called worlds, preordered by $R$, and, for each $w$, $D_w$ is a non-empty subset of a set $D$, called the domain of $w$, such that the monotonicity condition on domains is satisfied: $$(\forall w,w'\in W)[wRw'\implies D_w\subseteq D_{w'}].$$ On the other hand, an intuitionistic or Kripke model is a quadruple $\bold M=(W, R, \{D_{w}\}_{w\in W}, \models),$ where $(W, R, \{D_w\}_{w\in W})$ is an intuitionistic frame and $\models$ is a ternary relation between worlds, formulas, and assignments (maps from $V$ to $D$). We write $x\models \phi[s]$ if $(x, \phi ,s)\in \models$. This ternary relation $\models$ satisfies, for any predicate $p$, any $s\in {}^{V}D$, any formulas $\phi$, $\psi$ and any $x\in W$, the following: $$\text { It is not the case that }x\models \bot,$$ $$(\forall y\in W)(x\models p[s]\land xRy\implies y\models p[s]),$$ $$x\models (\phi\land \psi)[s]\Longleftrightarrow x\models \phi[s] \text { and } x\models \psi[s],$$ $$x\models (\phi\lor \psi)[s]\Longleftrightarrow x\models \phi[s]\text { or }x\models \psi[s],$$ $$x\models (\phi\to \psi)[s]\Longleftrightarrow \forall y(xRy\implies(y\models \phi[s]\implies y\models \psi[s])).$$ For a function $s$, let $s^k_a$ be the function defined by $s^k_a(i)=s(i)$ when $i\neq k$ and $s^k_a(k)=a$. Continuing the definition: $$x\models \forall v\phi[s]\Longleftrightarrow (\forall y)(xRy\implies (\forall a\in D_y)\ y\models \phi[s^v_a]),$$ $$x\models \exists v\phi[s]\Longleftrightarrow (\exists a\in D_x)(x\models \phi[s^v_a]),$$ $$x\models {\sf S}({\tau})\phi[s]\Longleftrightarrow x\models \phi[s\circ \tau].$$ Evidently, the model is completely determined by the frame $(W, R, \{D_{w}\}_{w\in W})$ and by $\models$ on atomic formulas. That is, each $p\in P$ and each world $x$ determine a possibly infinitary relation $p_x\subseteq {}^VD_x$, and we stipulate that $x\models p[s]$ iff $s\in p_x$.
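The forcing clauses above can be exercised on a tiny model. A minimal sketch of the propositional fragment (atoms, $\bot$, $\land$, $\lor$, $\to$) in Python; the tuple encoding of formulas is my own toy convention, not the paper's syntax:

```python
class Kripke:
    def __init__(self, worlds, order, val):
        # order: a preorder on worlds, given as a set of pairs (w, w')
        # val: atom -> set of worlds forcing it (assumed upward closed)
        self.worlds, self.order, self.val = worlds, order, val

    def up(self, w):
        # worlds accessible from w
        return [v for v in self.worlds if (w, v) in self.order]

    def forces(self, w, phi):
        op = phi[0]
        if op == 'atom':
            return w in self.val[phi[1]]
        if op == 'bot':
            return False
        if op == 'and':
            return self.forces(w, phi[1]) and self.forces(w, phi[2])
        if op == 'or':
            return self.forces(w, phi[1]) or self.forces(w, phi[2])
        if op == 'imp':
            # implication quantifies over all accessible worlds
            return all(not self.forces(v, phi[1]) or self.forces(v, phi[2])
                       for v in self.up(w))
        raise ValueError(op)

# Two-world model in which the atom p becomes true only at world 1.
M = Kripke({0, 1}, {(0, 0), (0, 1), (1, 1)}, {'p': {1}})
p = ('atom', 'p')
lem = ('or', p, ('imp', p, ('bot',)))  # p v ~p, with ~p := p -> bot
print(M.forces(0, lem))  # False: excluded middle fails at the root
print(M.forces(1, lem))  # True
```

The failure of $p\lor\neg p$ at the root world is the standard illustration of why the implication clause must look forward along $R$ rather than only at the current world.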
If we have equality, then for a world $x$ and $s\in {}^VD_x$, we add the clause: $x\models (v_1=v_2)[s]$ iff $s(v_1)=s(v_2)$. We now define a calculus (in a usual underlying set theory, $ZFC$ say) that we prove to be complete with respect to Kripke semantics; this will follow from the stronger result, proved below, that such logics enjoy the interpolation property. We first deal with the equality free case. In this case, our calculus is inspired by that of Keisler [@K]. Let $V$ and $P$ be disjoint sets of symbols such that $V$ is infinite, and let $\rho$ be a function with domain $P$ whose values are sets (or ordinals). [^4] Let $G\subseteq {}^VV$. We define a logic $\mathfrak{L}_G$ in the following way. The symbols of $\mathfrak{L}_G$ consist of: the falsity symbol $\bot$; the disjunction symbol $\lor$, the conjunction symbol $\land$, and the implication symbol $\to$; the universal quantification symbol $\forall$; the existential quantification symbol $\exists$; the individual variables $v\in V$ and the predicates $p\in P.$ We assume that $\bot, \lor, \land,\to, \forall, \exists$ are not members of $V$ or $P$. An atomic formula is an ordered pair $(p,x)$, where $p\in P$ and $x\in {}^{\rho(p)}V.$ We call $\rho(p)$ the rank of $p$. Formulas are defined by recursion in the usual way; in this respect we regard $(\phi\to \psi)$ as an ordered triple, and similarly for formulas involving the other connectives, including $\exists v\phi$ and $\forall v \phi.$ (In the former formula, the brackets are not syntactic brackets, because we do not have brackets in our language.) The set $V_f(\phi)$ of free variables and the set $V_b(\phi)$ of bound variables of a formula $\phi$ are defined recursively in the usual way. That is: If $\phi$ is an atomic formula $(p,x)$, then $V_f(\phi)$ is the range of $x$.
If $\phi=\bot$, then $V_f(\bot)=\emptyset.$ If $\phi$ is $(\psi\lor \theta)$ or $(\psi\land \theta)$ or $(\psi\to \theta)$, then $V_f(\phi)=V_f(\psi)\cup V_f(\theta).$ If $\phi=(\forall v \psi)$ or $(\exists v\psi)$, then $V_f(\phi)=V_f(\psi)\sim \{v\}$. Now for the bound variables $V_b(\phi)$: If $\phi$ is an atomic formula $(p,x)$, then $V_b(\phi)=\emptyset.$ If $\phi=\bot$, then $V_b(\bot)=\emptyset.$ If $\phi$ is $(\psi\lor \theta)$ or $(\psi\land \theta)$ or $(\psi\to \theta),$ then $V_b(\phi)=V_b(\psi)\cup V_b(\theta).$ If $\phi=(\forall v \psi)$, then $V_b(\phi)=V_b(\psi)\cup \{v\}.$ If $\phi=(\exists v \psi)$, then $V_b(\phi)=V_b(\psi)\cup \{v\}.$ Note that the set of variables occurring in a formula $\phi$, denoted by $V(\phi)$, is equal to $V_f(\phi)\cup V_b(\phi)$, which could well be infinite. For $\tau\in G$ and $\phi$ a formula, ${\sf S}(\tau)\phi$ (the result of substituting each variable $v$ in $\phi$ by $\tau(v)$) is defined recursively, and so is ${\sf S}_f(\tau)\phi$ (the result of substituting each free variable $v$ by $\tau(v)$). If $\phi$ is an atomic formula $(p,x),$ then ${\sf S}(\tau)\phi=(p,\tau\circ x).$ If $\phi=\bot,$ then ${\sf S}(\tau)\bot=\bot.$ If $\phi$ is $(\psi\lor \theta),$ then ${\sf S}(\tau)\phi=({\sf S}(\tau)\psi\lor {\sf S}(\tau)\theta);$ the same for the other propositional connectives. If $\phi=(\forall v\psi),$ then ${\sf S}(\tau)\phi=(\forall\tau(v)\,{\sf S}(\tau)\psi).$ If $\phi=(\exists v\psi),$ then ${\sf S}(\tau)\phi=(\exists\tau(v)\,{\sf S}(\tau)\psi).$ To deal with free substitutions, that is, when the substituted variables remain free, we introduce a piece of notation that proves helpful. For any function $f\in {}^XY$ and any set $Z$, we let $$f|Z=\{(x, f(x)): x\in X\cap Z\}\cup \{(z,z): z\in Z\sim X\}.$$ Then $f|Z$ always has domain $Z$, and $0|Z$ (where $0$ denotes the empty function) is the identity function on $Z$. 
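As an illustrative sketch (the tuple encoding of formulas is a hypothetical convention of ours, not the paper's), the two recursions for $V_f$ and $V_b$ transcribe directly:

```python
def free_vars(phi):
    """V_f(phi): formulas are 'bot', ('atom', p, x) with x a tuple of
    variables, ('or'|'and'|'imp', f, g), or ('all'|'ex', v, f)."""
    if phi == 'bot':
        return set()
    tag = phi[0]
    if tag == 'atom':
        return set(phi[2])                    # the range of x
    if tag in ('or', 'and', 'imp'):
        return free_vars(phi[1]) | free_vars(phi[2])
    return free_vars(phi[2]) - {phi[1]}       # V_f(phi) = V_f(psi) ~ {v}

def bound_vars(phi):
    """V_b(phi): the same recursion, with the quantifier clause adding v."""
    if phi == 'bot' or phi[0] == 'atom':
        return set()
    tag = phi[0]
    if tag in ('or', 'and', 'imp'):
        return bound_vars(phi[1]) | bound_vars(phi[2])
    return bound_vars(phi[2]) | {phi[1]}      # V_b(phi) = V_b(psi) with v added
```

$V(\phi)$ is then simply the union of the two results.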
If $\tau\in \bigcup\{^WV: W\subseteq V\}$ and $\phi$ is a formula, let ${\sf S}(\tau)\phi={\sf S}(\tau|V)\phi$ and ${\sf S}_f(\tau)\phi={\sf S}_f(\tau|V)\phi$. For free substitution the first three clauses are the same, but if $\phi=(\forall v \psi)$, then ${\sf S}_f({\tau})\phi=(\forall v {\sf S}_f(\sigma)\psi)$, and if $\phi=(\exists v \psi),$ then ${\sf S}_f({\tau})\phi=(\exists v {\sf S}_f(\sigma)\psi)$, where $\sigma=(\tau\upharpoonright (V\sim \{v\}))|V$. Now we specify the axioms and the rules of inference. The axioms are: Axioms for propositional intuitionistic logic (formulated in our syntax). $(\forall v(\phi\to \psi)\to (\phi\to \forall v \psi))$ where $v\in (V\sim V_{f}(\phi)).$ $(\forall v(\phi\to \psi)\to (\exists v\phi\to \psi))$ where $v\in (V\sim V_{f}(\psi)).$ $(\forall v\phi\to {\sf S}_f(\tau)\phi)$, when $\tau(v) \notin V_b(\phi).$ $({\sf S}_f(\tau)\phi\to \exists v \phi)$, when $\tau(v) \notin V_b(\phi).$ The rules of inference are: From $\phi$ and $(\phi\to \psi)$ infer $\psi$ (modus ponens). From $\phi$ infer $(\forall v\phi)$ (rule of generalization). From ${\sf S}_f(\tau)\phi$ infer $\phi$, whenever $\tau\in {}^{V_f(\phi)} (V\sim V_b(\phi))$ and $\tau$ is one to one (free substitution). From $\phi$ infer ${\sf S}(\tau)\phi$, whenever $\tau\in {}^{V(\phi)}V$ is one to one (substitution). Now, if we have $=$ as a primitive symbol, we add the following axioms (in this case no further rules of inference are needed): $v=v$; $v=w\to w=v$; if $\phi$ is a formula and $\tau, \sigma$ are substitutions that agree on the indices of the free variables occurring in $\phi$, then ${\sf S}_f(\tau)\phi={\sf S}_f(\sigma)\phi.$ We write ${\mathfrak L}_G$ for logics without equality, and we write ${\mathfrak L}_G^{=}$ for those with equality, when $G$ is specified in advance. Proofs are defined the usual way. For a set of formulas $\Gamma\cup \{\phi\}$, we write $\Gamma\vdash \phi$ if there is a proof of $\phi$ from $\Gamma$. 
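As a sketch (again using a hypothetical tuple encoding of formulas, not the paper's notation), the padding operation $f|Z$ and free substitution ${\sf S}_f(\tau)$ can be written as follows; note how the quantifier clause fixes the binder and restricts $\tau$ away from it:

```python
def pad(f, Z):
    """f|Z: agrees with f on dom(f) & Z and is the identity on Z ~ dom(f)."""
    return {z: f.get(z, z) for z in Z}

def subst_free(tau, phi):
    """S_f(tau)phi: substitute free occurrences only.  Formulas are 'bot',
    ('atom', p, x), ('or'|'and'|'imp', f, g), or ('all'|'ex', v, f)."""
    if phi == 'bot':
        return 'bot'
    tag = phi[0]
    if tag == 'atom':
        return ('atom', phi[1], tuple(tau.get(v, v) for v in phi[2]))
    if tag in ('or', 'and', 'imp'):
        return (tag, subst_free(tau, phi[1]), subst_free(tau, phi[2]))
    v = phi[1]                                 # quantifier: keep the binder v
    sigma = {u: w for u, w in tau.items() if u != v}
    return (tag, v, subst_free(sigma, phi[2]))
```

Full substitution ${\sf S}(\tau)$ differs only in that the quantifier clause also renames the binder to $\tau(v)$.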
To formulate the main results of this paper, we need some more basic definitions. Let $\bold M=(W,R, \{D_{w}\}_{w\in W}, \models)$ be a Kripke model over $D$ and let $s\in {}^VD,$ where $V$ is the set of all variables. A formula $\phi$ is satisfied at $w$ under $s$ if $w\models \phi[s]$. The formula $\phi$ is satisfiable in $\bold M$ if there are $w\in W$ and $s\in {}^VD$ such that $w\models \phi[s].$ For a set of formulas $\Gamma$, we write $w\models \Gamma[s]$ if $w\models \phi[s]$ for every $\phi\in \Gamma$. The set of formulas $\Gamma$ is satisfiable in $\bold M$ if there are $w\in W$ and $s\in {}^VD$ such that $w\models \Gamma[s]$. The formula $\phi$ is valid in $\bold M$ under $s$ if $w\models \phi[s]$ for all $w\in W$; $\phi$ is valid in $\bold M$ if it is valid under every $s\in {}^VD$. A formula $\phi$ is valid in a frame $(W, R, \{D_w\}_{w\in W})$ if it is valid in every model based on that frame, after specifying the semantical consequence relation $\models$. A set of formulas $\Gamma$ is consistent if no contradiction is derivable from $\Gamma$ relative to the proof system defined above, that is, if it is not the case that $\Gamma\vdash \bot.$ The custom in intuitionistic logic is to deal with pairs of theories, the first component being a set of formulas that are ‘true’ and the second a set of formulas that are ‘false’ in the intended interpretation. This is natural, since we do not have negation as a primitive. So our algebraic counterpart, proved in Section 3, is in fact more general than the completeness theorem stated below; the latter follows from the special case when the second component of the pairs is the theory $\{\bot\}$. The following theorems hold for logics without equality. In the presence of infinitary substitutions, we obtain a weaker result for logics with equality. 
The set $V$ denoting the set of variables in the next theorems is always infinite (which means that we will deal only with infinite dimensional algebras); however, $P$ (specifying the number of atomic formulas) could well be finite. \[com\] Let $V$ and $P$ be countable disjoint sets with $|V|\geq \omega$. When $G$ is a rich semigroup, then $\mathfrak{L}_G$ is strongly complete; that is, if $\Gamma$ is a consistent set of formulas, then it is satisfiable at a world of some model based on a Kripke frame. For arbitrary (disjoint) sets $V$ and $P$ with $|V|\geq \omega$, when $G$ is the semigroup of finite transformations and $\rho\in {}^{P}Ord$ is such that $V\sim \rho(p)$ is infinite for every $p\in P$, or $G={}^VV$ without any restrictions, then $\mathfrak{L}_G$ is strongly complete. [Proof]{} cf. Theorem \[complete\], item (1). We say that a logic $\mathfrak{L}$ has the Craig interpolation property if whenever $\models \phi\to \psi$ then there is a formula $\theta$, containing only symbols occurring in both $\phi$ and $\psi,$ such that $\models \phi\to \theta$ and $\models \theta\to \psi.$ (By the above completeness theorem, we can replace $\models$ by $\vdash$.) \[interpolation\] Let $\mathfrak{L}_G$ be as in the previous theorem, except that $G$ is assumed to be strongly rich. Then $\mathfrak{L}_G$ has the interpolation property. [Proof]{} cf. Theorem \[complete\], item (2). In the case of equality we can prove only a slightly weaker result when infinitary substitutions are present. We say that the substitution operation ${\sf S}(\tau)$ is finitary if $\tau$ moves only finitely many points; otherwise, it is called infinitary. Now we have: \[interpolationeq\] For arbitrary (disjoint) sets $V$ and $P$ with $|V|\geq \omega$, when $G$ is the semigroup of finite transformations and $\rho\in {}^{P}Ord$ is such that $V\sim \rho(p)$ is infinite for every $p\in P$, then $\mathfrak{L}_G^{=}$ is strongly complete and has the interpolation property. 
When $G$ is rich or $G={}^VV,$ then ${\mathfrak L}_G^{=}$ is weakly complete; that is, if a formula is valid in all Kripke models, then it is provable. When $G$ is strongly rich or $G={}^VV$, the logic $\mathfrak{L}_G^{=}$ has the following weak interpolation property. If $\phi$ and $\psi$ are formulas such that only finitary substitutions were involved in their build-up from atomic formulas, and $\models \phi\to \psi,$ then there is a formula $\theta$, containing only atomic formulas (and possibly equality) occurring in both $\phi$ and $\psi,$ such that $\models \phi\to \theta$ and $\models \theta\to \psi$; here $\theta$ may involve infinitary substitutions in its formation from atomic formulas. (By weak completeness, $\models$ can be replaced by $\vdash$.) [Proof]{} From theorems \[main3\], \[main4\]. \[negative\] For arbitrary (disjoint) sets $V$ and $P$ with $|V|\geq \omega$, when $G$ is the semigroup of finite transformations and $\rho\in {}^{P}Ord$ is such that $\rho(p)=V$ for every $p\in P$, then both $\mathfrak{L}_G$ and $\mathfrak{L}_G^{=}$ are essentially incomplete and fail to enjoy the interpolation property. [Proof]{} This is proved in \[Sayed\]. In ordinary intuitionistic predicate logic, interpolation theorems proved for the logic without equality remain true when we add equality. This is reflected by item (1) in theorem \[interpolationeq\]. Indeed, in this case our logics are very close to ordinary ones. The sole difference is that atomic formulas can have infinite arity; but, as in the ordinary case, infinitely many variables lie outside (atomic) formulas. The next item in theorem \[interpolationeq\] shows that the situation is not as smooth nor as evident as in the ordinary classical case. The presence of infinitary substitutions seems to make a drastic two-fold change. In the absence of diagonal elements, it turns negative results into positive ones, but in the presence of diagonal elements the positive results obtained are weaker. 
Indeed, we do not know whether strong completeness or the usual interpolation property holds for such logics, but it seems unlikely that they do. We know that there will always be cases when infinitary substitutions are needed in the interpolant. Our logics manifest themselves as essentially infinitary in at least two facets. One is that the atomic formulas can have infinite arity, and the other is that (infinitary) substitutions, when available, can move infinitely many points. But they also have a finitary flavour, since quantification is applied only to finitely many variables. The classical counterpart of such logics has been studied frequently in algebraic logic, and they occur in the literature under the name of finitary logics of infinitary relations, or typeless logics [@HMT2], though positive interpolation theorems for such logics are only rarely investigated \[IGPL\], for this area is dominated by negative results \[references\]. It is well known that first order intuitionistic predicate logic has the following two properties: (\*) Each proof involves finitely many formulas. (\*\*) A set of formulas is consistent if and only if it is satisfiable. In most cases, such as for logics which have infinitary propositional connectives, it is known to be impossible to define a notion of proof in such a way that both (\*) and (\*\*) are satisfied. We are thus confronted with the special situation that the logics ${\mathfrak L}_G$ behave like ordinary first order intuitionistic logic. In passing, we note that (infinitary) generalizations of the classical L\"{o}wenheim-Skolem Theorem and of the Compactness Theorem for ${\mathfrak L}_G$ without equality follow immediately from theorem \[com\]. Now we are ready to prove theorems \[com\] and \[interpolation\]. 
\[complete\] Let $G$ be a semigroup as in theorems \[com\] and \[interpolation\]. Then: $\mathfrak L_G$ is strongly complete; $\mathfrak L_G$ has the interpolation property. [Proof]{} We prove the theorem when $G$ is a strongly rich semigroup on $\alpha$, $\alpha$ a countable ordinal specifying the number of variables in ${\mathfrak L}_G$. Let $\{R_i:i\in \omega\}$ be the set of relation symbols available in our language, each of arity $\alpha$. We show that every consistent set of formulas $T$ is satisfiable at some world in a Kripke model. Assume that $T$ is consistent. Let $\A=\Fm/\equiv$ and let $\Gamma=\{\phi/\equiv: \phi\in T\}.$ Then $\Gamma$ generates a filter $F$. Then $\A\in GPHA_{\alpha}$ and $(F,\{0\})$ is consistent. By the above proof, it is satisfiable; that is, there exist a Kripke system $\bold K=(K, \leq, \{M_k\}_{k\in K}, \{V_k\}_{k\in K})$, a homomorphism $\psi:\A\to \mathfrak{F}_{\bold K}$, an element $k_0\in K$ and an $x\in V_{k_0}$ such that for every $p\in \Gamma$, if $\psi(p)=(f_k)$ then $f_{k_0}(x)=1$. Define, for $k\in K$, $R_i$ an atomic formula and $s\in {}^{\alpha}M_k$: $k\models R_i[s]$ iff $(\psi(R_i/\equiv))_k(s)=1.$ This defines the desired model. When $G$ is a strongly rich semigroup, or $G={}^{\alpha}\alpha$, we show that for any $\beta$, $\A=\Fr_{\beta}GPHA_{\alpha}$ has the interpolation property; that is, if $a\in \Sg^{\A}X_1$ and $b\in \Sg^{\A}X_2$ with $a\leq b,$ then there exists $c\in \Sg^{\A}(X_1\cap X_2)$ such that $a\leq c\leq b$. When $G$ is the semigroup of all finite transformations and $\rho\in {}^{\beta}\wp(\alpha)$ is dimension restricting, the algebra $\Fr_{\beta}^{\rho}(GPHA_{\alpha})$ can be shown to have the interpolation property in exactly the same manner. We use theorem \[main\] for the former case, while we use its analogue for dimension restricted free algebras, namely theorem \[main2\], for the latter. Assume that $\theta_1\in \Sg^{\A}X_1$ and $\theta_2\in \Sg^{\A}X_2$ are such that $\theta_1\leq \theta_2$. 
Let $\Delta_0=\{\theta\in \Sg^{\A}(X_1\cap X_2): \theta_1\leq \theta\}.$ If for some $\theta\in \Delta_0$ we have $\theta\leq \theta_2$, then we are done. Else $(\Delta_0, \{\theta_2\})$ is consistent. Extend this to a complete theory $(\Delta_2, \Gamma_2)$ in $\Sg^{\A}X_2$. Consider $(\Delta, \Gamma)=(\Delta_2\cap \Sg^{\A}(X_1\cap X_2), \Gamma_2\cap \Sg^{\A}(X_1\cap X_2))$. Then $(\Delta\cup \{\theta_1\}, \Gamma)$ is consistent. For otherwise, for some $F\in \Delta$ and $\mu\in \Gamma,$ we would have $\vdash (F\land \theta_1)\to \mu$, hence $\vdash \theta_1\to (F\to \mu)$, so $(F\to \mu)\in \Delta_0\subseteq \Delta_2$, which is impossible. Now $(\Delta\cup \{\theta_1\}, \Gamma)$ and $(\Delta_2,\Gamma_2)$ are consistent, with $\Gamma\subseteq \Gamma_2$ and $(\Delta,\Gamma)$ complete in $\Sg^{\A}X_1\cap \Sg^{\A}X_2$. So by theorem \[main\], $(\Delta_2\cup \{\theta_1\}, \Gamma_2)$ is satisfiable at some world in some set algebra based on a Kripke system, hence consistent. But this contradicts $\theta_2\in \Gamma_2,$ and we are done. The logic ${\mathfrak L}_G^{=}$ has the weak interpolation property. [Proof]{} Assume that $\theta_1\in \Sg^{\Rd\A}X_1$ and $\theta_2\in \Sg^{\Rd\A}X_2$ are such that $\theta_1\leq \theta_2$. Let $\Delta_0=\{\theta\in \Sg^{\A}(X_1\cap X_2): \theta_1\leq \theta\}.$ If for some $\theta\in \Delta_0$ we have $\theta\leq \theta_2$, then we are done. Else $(\Delta_0, \{\theta_2\})$ is consistent, hence $(\Delta_0\cap \Sg^{\Rd\A}X_2,\{\theta_2\})$ is consistent. Extend this to a complete theory $(\Delta_2, \Gamma_2)$ in $\Sg^{\Rd\A}X_2$; this is possible since $\theta_2\in \Sg^{\Rd\A}X_2$. Consider $(\Delta, \Gamma)=(\Delta_2\cap \Sg^{\A}(X_1\cap X_2), \Gamma_2\cap \Sg^{\A}(X_1\cap X_2))$. It is complete in the ‘common language’, that is, in $\Sg^{\A}(X_1\cap X_2)$. 
Then $(\Delta\cup \{\theta_1\}, \Gamma)$ is consistent in $\Sg^{\Rd\A}X_1$ and $(\Delta_2, \Gamma_2)$ is consistent in $\Sg^{\Rd\A}X_2$, with $\Gamma\subseteq \Gamma_2.$ Applying the previous theorem, we get that $(\Delta_2\cup \{\theta_1\}, \Gamma_2)$ is satisfiable. Let $\psi_1, \psi_2$, $\psi$ and $k_0$ be as in the previous proof. Then $\psi\upharpoonright \Sg^{\Rd\A}X_1=\psi_1$ and $\psi\upharpoonright \Sg^{\Rd\A}X_2=\psi_2$. Since $\theta_1\in \Sg^{\Rd\A}X_1$, we have $\psi_1(\theta_1)=\psi(\theta_1)$. Similarly, $\psi_2(\theta_2)=\psi(\theta_2).$ So it readily follows that $(\psi(\theta_1))_{k_0}(Id)=1$ and $(\psi(\theta_2))_{k_0}(Id)=0$. This contradicts $\psi(\theta_1)\leq \psi(\theta_2),$ and we are done. When $G$ consists only of finite transformations and $V\sim \rho(p)$ is infinite, then ${\mathfrak L}_G^{=}$ has the interpolation property. In the next example, we show that the condition $\Gamma_0\subseteq \Gamma_0^*$ cannot be omitted. The example is an algebraic version of theorem 4.31, p.121 in [@b], but modified appropriately to deal with infinitary languages. \[counter\] Let $G$ be a strongly rich semigroup on $\omega$. Let $\Lambda_{\omega}$ be a language with predicate symbols $p_0, p_1, p_2, p_3$, each of arity $\omega$; this is a typeless logic abstracting away from the rank of atomic formulas, so that we might as well forget about the variables, since we allow them only in their natural order. The real rank of such relation symbols will be recovered from the semantics. Let $\bold M=(\N, \leq, \{D_i\}_{i\in \omega})$ be the Kripke frame with $D_i=\N$ for every $i$, and let $\N=\bigcup_{n\in \omega} B_n$, where the $B_n$ are pairwise disjoint infinite sets. We define the relation $\models$ on atomic formulas. Let $m\in \N$. If $m=2n+1$ and $s\in {}^{\omega}\N,$ then $m\models p_0[s]$ if $s_0\in \bigcup_{i\leq 2n+1}B_i$, and $m\models p_1[s]$ if $s_0\in \bigcup_{i\leq 2n+1}B_i$ and $m\models p_3[s]$. 
If $m=2n$ and $s\in {}^{\omega}\N,$ then $m\models p_0[s]$ if $s_0\in \bigcup_{i\leq n}B_i$, and $m\models p_2[s]$ if $s_0\in \bigcup_{i\leq 2n+1}B_i$ and $m\models p_3[s]$. Let $\F_{\bold M}$ be the set algebra based on the Kripke model $\bold M$ defined above. Let $\A=\Fr_3GPHA_{\omega}$ and let $x_1, x_2, x_3$ be its generators. Let $f$ be the unique map from $\A$ to $\F_{\bold M}$ such that for $i\in \{1,2,3\}$, $f(x_i)=p_i^{\bold M}$. We have $\A\cong \Fm/\equiv$; we can assume that the isomorphism is the identity map. Let $\Delta'=\{a\in A: f(a)=1\}$ and $\Theta'=\{a\in A: f(a)=0\}$. Let $\Delta=\{\phi:\phi/\equiv\in \Delta'\}$ and $\Theta=\{\phi:\phi/\equiv\in \Theta'\}.$ Let $$\Delta_1=\Delta\cup \{{\sf q}_0(x_1\lor x_2), {\sf c}_0(x_2\land x_3)\},$$ $$\Theta_1=\Theta\cup \{{\sf c}_0(x_1\land x_3)\},$$ $$\Delta_2=\Delta\cup \{{\sf q}_0(x_1\lor x_3)\},$$ $$\Theta_2=\Theta\cup \{{\sf c}_0(x_1\land x_3), {\sf c}_0(x_2\land x_3)\}.$$ Then, by analogy to 4.30 in [@b], $(\Delta_1, \Theta_1)$ and $(\Delta_2, \Theta_2)$ are consistent, but their union is not. \[mak\] If $G$ is strongly rich or $G={}^{\alpha}\alpha$, then $Var(\mathfrak{L}_G)$ has $SUPAP.$ In particular, $GPHA_{\alpha}$ has $SUPAP$. [Proof]{} Cf. [@b] p.174. Suppose that $\A_0, \A_1, \A_2\in Var(\mathfrak{L}_G)$. Let $i_1:\A_0\to \A_1$ and $i_2: \A_0\to \A_2$ be embeddings. We need to find an amalgam. We assume that $A_0\subseteq A_1\cap A_2$. For any $a\in A_i$, let $x_a^i$ be a variable such that $x_a^0=x_a^1=x_a^2$ for all $a\in A_0$, and such that the rest of the variables are distinct. Let $V_i$ be the set of variables corresponding to $\A_i$; then $|V_i|=|A_i|$. Let $V$ be the set of all these variables, augmented by countably many fresh variables if the algebras are finite. 
Then $|V|=\beta\geq \omega.$ We assume that the set of variables $V$ of ${\mathfrak L}_G$ is the same as the set of variables of the equational theory of $Var(\mathfrak{L}_G).$ We fix an assignment $s_i$ for each $i\in \{0,1,2\}$ such that $s_i: V_i\to A_i$ and $s_i(x_a^i)=a$; thus $s_1\upharpoonright V_0=s_2\upharpoonright V_0=s_0$. In view of the correspondence established in \[terms\], we identify terms of the equational theory of $Var({\mathfrak L}_G)$ with formulas of $\mathfrak{L}_G$; which one we intend will be clear from context. Accordingly, we write $\A_i\models \psi=\phi$ if $\bar{s}_i(\psi)=\bar{s}_i(\phi)$, where $\bar{s_i}$ is the unique extension of $s_i$ to the set of all terms. Let $\Fm_i$ be the set of formulas of $\mathfrak{L}_{G}$ in the variables $x_a^i$, $a\in A_i$, and let $\Fm$ be the set of all formulas built up from the set of all variables. (Note that $\Fm_i$ can be viewed as the set of terms built up from the variables $x_a^i$, and $\Fm$ as the set of all terms built up from the set of all variables; defining operations corresponding to the connectives turns them into absolutely free algebras.) For $i=1,2$, let $T_i=\{\psi\in \Fm_i: \A_i\models \psi=1\}$, and let $T=\{\psi\in \Fm: T_1\cup T_2\vdash \psi\}$. We will first prove (\*): For $\{i,j\}=\{1,2\},$ $\psi\in \Fm_i$ and $\phi\in \Fm_j,$ we have $T\vdash \psi\to \phi$ iff $(\exists c\in \Fm_0)(\A_i\models \psi\leq c\land \A_j\models c\leq \phi).$ Only one direction is nontrivial. 
Assume that $T\vdash \psi\to \phi.$ Then there exist finite subsets $\Gamma_i\subseteq T_i$ and $\Gamma_j\subseteq T_j$ such that $\Gamma_i\cup \Gamma_j\vdash \psi\to \phi.$ Then, by the deduction theorem for propositional intuitionistic logics, we get $${\mathfrak L}_G\vdash \bigwedge \Gamma_i\to (\bigwedge \Gamma_j\to (\psi\to \phi)),$$ and so $${\mathfrak L}_G\vdash (\bigwedge \Gamma_i\land \psi)\to (\bigwedge \Gamma_j\to \phi).$$ Notice that only finitely many atomic formulas occur in the last deduction. So the interpolation theorem \[interpolation\], formulated for countable languages, applies: there is a formula $c\in \Fm_0$ such that $\vdash (\bigwedge \Gamma_i\land \psi)\to c$ and $\vdash c\to (\bigwedge \Gamma_j\to \phi).$ Thus $\A_i\models \psi\leq c$ and $\A_j\models c\leq \phi$. We have proved (\*). Putting $\psi=1,$ we get $T\vdash \phi$ iff $(\exists c\in \Fm_0)(\A_i\models 1\leq c\land \A_j\models c\leq \phi)$ iff $\A_j\models \phi=1.$ Define on $\Fm$ the relation $\psi\sim \phi$ iff $T\vdash \psi\leftrightarrow \phi$. Then $\sim$ is a congruence on $\Fm$. Also, for $i=1,2$ and $\psi,\phi\in \Fm_i$, we have $\psi\sim \phi$ iff $\A_i\models \psi=\phi$. Let $\A=\Fm/\sim$, and let $e_i:\A_i\to \A$ be defined by $e_i(a)=x_a^i/\sim$. Then clearly $e_i$ is one to one. If $a\in \A_0$, then $x_a^0=x_a^1=x_a^2$, hence $e_1(a)=e_2(a)$. Thus $\A$ is an amalgam via $e_1$ and $e_2.$ We now show that the superamalgamation property holds. Suppose $\{j,k\}=\{1,2\}$, $a\in \A_j$, $b\in \A_k$ and $e_j(a)\leq e_k(b)$. Then $e_j(a)\to e_k(b)=1$ in $\A$, so $(x_a^j\to x_b^k)/\sim=1$, that is, $T\vdash (x_a^j\to x_b^k)$. Hence there exists $c\in \Fm_0$ such that $\A_j\models x_a^j\leq c$ and $\A_k\models c\leq x_b^k$. 
Then $a\leq c$ and $c\leq b.$ By taking ${\mathfrak L}_G$ to be the logic based on $\alpha$ many variables, with countably many atomic formulas each containing the $\alpha$ many variables in their natural order, we get that $V=Var({\mathfrak L}_G)$; hence $V$ has $SUPAP$. [HMT85]{} Andréka, H. [*Complexity of equations valid in algebras of relations*]{}. Annals of Pure and Applied Logic, [**89**]{} (1997), p. 149-209. Andréka, H., Németi, I., Sayed Ahmed, T. [*A non representable quasi-polyadic algebra with a representable cylindric reduct*]{} Studia Math. Hungarica, in press. Sayed Ahmed, T. [*On Amalgamation of Reducts of Polyadic Algebras.*]{} Algebra Universalis [**51**]{} (2004), p.301-359. Sayed Ahmed, T. [*Algebraic Logic, where does it stand today?*]{} Bulletin of Symbolic Logic, [**11**]{}(4) (2005), p. 465-516. Sayed Ahmed, T. [*Some results about neat reducts*]{} Algebra Universalis, [**1**]{} (2010) p. 17-36. Sayed Ahmed, T. [*The class of polyadic algebras has the superamalgamation property*]{} Mathematical Logic Quarterly [**56**]{}(1) (2010) p.103-112. Sayed Ahmed, T. [*The amalgamation property, and a problem of Henkin, Monk and Tarski*]{} Journal of Algebra, Number Theory: Advances and Applications [**1**]{}(2) (2009) p. 127-141. Sayed Ahmed, T. [*On neat embeddings of cylindric algebras*]{} Mathematical Logic Quarterly [**55**]{}(6) (2009) p.666-668. Sayed Ahmed, T. [*Classes of algebras without the amalgamation property*]{} Logic Journal of IGPL. [**1**]{} (2011) p.87-104. Sayed Ahmed, T. [*Amalgamation of polyadic Heyting algebras*]{} Studia Math. Hungarica, in press. Daigneault, A., [*Freedom in polyadic algebras and two theorems of Beth and Craig*]{}. Michigan Math. J. [**11**]{} (1963), p. 129-135. Daigneault, A., and Monk, J.D., [*Representation Theory for Polyadic algebras*]{}. Fund. Math. [**52**]{} (1963) p.151-176. 
Ferenczi, M., [*On representation of neatly embeddable cylindric algebras*]{} Journal of Applied Non-classical Logics, [**10**]{}(3-4) (2000) p.34-56 Ferenczi, M., [*Finitary polyadic algebras from cylindric algebras.*]{} Studia Logica [**87**]{}(1)(2007) p.1-11 Ferenczi, M., [*On cylindric algebras satisfying the merry-go-round properties*]{} Logic Journal of IGPL, [**15**]{}(2) (2007), p. 183-199 Ferenczi, M., [*On representability of neatly embeddable cylindric algebras*]{} Journal of Appl. Non-classical Logic, 3-4, 10 (2000) 1-11 Ferenczi M, [*The polyadic representation* ]{} Transactions of Amer Math Society, to appear. Gabbay M.D., Maksimova L. [*Interpolation and Definability: Modal and Intuitionistic Logic*]{} Oxford Science Publications (2005) Gentzen, G., 1934-5, “Untersuchungen Über das logische Schliessen,” Math. Zeitschrift 39: 176-210, 405-431. Georgescu G. [*A representation theorem for polyadic Heyting algebras*]{} Algebra Universalis [**14**]{} (1982), 197-209. Gödel, K., 1933, “Zur intuitionistischen Arithmetik und Zahlentheorie,” Ergebnisse eines mathematischen Kolloquiums 4: 34-38. Halmos, P., [*Algebraic Logic.*]{} Chelsea Publishing Co., New York, (1962.) Henkin, L., [*An extension of the Craig-Lyndon interpolation theorem*]{} Journal of Symbolic Logic 28(3) (1963) p.201-216 Henkin, L., Monk, J.D., and Tarski, A., [*Cylindric Algebras Part I*]{}. North Holland, 1971. Henkin, L., Monk, J.D., and Tarski, A., [*Cylindric Algebras Part II*]{}. North Holland, 1985. Herrlich H, Strecker G. [*Category theory*]{} Allyn and Bacon, Inc, Boston (1973) Heyting, A., [*Die formalen Regeln der intuitionistischen Logik, in three parts*]{}, Sitzungsber. preuss. Akad. Wiss.: 42-71,(1930) 158-169. English translation of Part I in Mancosu 1998: 311-327. Heyting, A., 1956, Intuitionism: An Introduction, North-Holland Publishing, Amsterdam. Third Revised Edition (1971). Hodges, W. [*A shorter Model Theory*]{}. Cambridge. University Press. 1997. Johnson, J.S. 
[*Amalgamation of Polyadic Algebras*]{}. Transactions of the American Mathematical Society, [**149**]{} (1970) p.627-652. Pigozzi, D. [*Amalgamation, congruence extension, and interpolation properties in algebras.*]{} Algebra Universalis. [**1**]{} (1971), p.269-349. Keisler, H.J., [*A complete first order logic with infinitary predicates*]{} Fund. Math. [**52**]{} (1963) p.177-203. Kleene, S. C., 1952, Introduction to Metamathematics, Van Nostrand, Princeton. Kripke, S., “Semantical analysis of intuitionistic logic,” in J. Crossley and M. A. E. Dummett, eds., 1965: 92-130. Madárasz, J. and Sayed Ahmed, T., [*Amalgamation, interpolation and epimorphisms.*]{} Algebra Universalis [**56**]{} (2) (2007) p. 179-210. Madárasz, J. and Sayed Ahmed, T. [*Neat reducts and amalgamation in retrospect, a survey of results and some methods. Part 1: Results on neat reducts*]{} Logic Journal of IGPL [**17**]{}(4) (2009) p.429-483. Madárasz, J. and Sayed Ahmed, T., [*Neat reducts and amalgamation in retrospect, a survey of results and some methods. Part 2: Results on amalgamation*]{} Logic Journal of IGPL (2009) doi: 10.1093/jigpal/jzp013. Monk, J.D. [*Polyadic Heyting algebras*]{} Notices Amer. Math. Soc. (1966) 735. Maksimova, L. [*Amalgamation and interpolation in normal modal logics*]{}. Studia Logica [**50**]{} (1991) p.457-471. Németi, I., Sági, G. [*On the equational theory of representable polyadic algebras*]{}. Journal of Symbolic Logic [**65**]{}(3) (2000), p. 1143-1167. Sági, G., Ferenczi, M., [*On some developments in the representation theory of cylindric-like algebras*]{} Algebra Universalis, [**55**]{}(2-3) (2006) p.345-353. Sági, G., Shelah, S., [*Weak and strong interpolation for algebraic logics.*]{} Journal of Symbolic Logic [**71**]{} (2006) p.104-118. Sain, I. [*Searching for a finitizable algebraization of first order logic*]{}. Logic Journal of IGPL [**8**]{} (2000), no. 4, p.495-589. Oxford University Press. Tarski, A., *Grundzüge des Systemenkalküls. Erster Teil*. Fundamenta Mathematicae, Vol. 
**25**, (1935), p.503-526. English translation in \[A. Tarski, Logic, Semantics, Metamathematics. Papers from 1923 to 1938, edited by J. Corcoran, Hackett Pub. Co., Indianapolis, Indiana, second edition, (1983)\]: Foundations of the calculus of systems, p.342-383. [^1]: 2000 [*Mathematics Subject Classification.*]{} Primary 03G15. [*Key words*]{}: algebraic logic, neat reducts, cylindric algebras, amalgamation [^2]: The class of representable algebras is given by specifying the universes of the algebras in the class, as sets of certain sets endowed with set theoretic concrete operations; thus representable algebras are completely determined once one specifies their universes. [^3]: The idea of relativization, similar to Henkin’s semantics for second order logic, has proved a very fruitful idea in the theory of cylindric algebras. [^4]: Strictly speaking, in $ZFC$ we cannot talk about classes, but classes can be simulated rigorously by formulas; in our context we chose not to be pedantic about it. Alternatively, we could have replaced $Set$ ($Ord$) by a set of sets (ordinals), but the notation $Set$ ($Ord$) is more succinct and economical.
--- abstract: 'We use pseudodifferential calculus and heat kernel techniques to prove a conjecture by Chamseddine and Connes on rationality of the coefficients of the polynomials in the cosmic scale factor $a(t)$ and its higher derivatives, which describe the general terms $a_{2n}$ in the expansion of the spectral action for general Robertson-Walker metrics. We also compute the terms up to $a_{12}$ in the expansion of the spectral action by our method. As a byproduct, we verify that our computations agree with the terms up to $a_{10}$ that were previously computed by Chamseddine and Connes by a different method.' author: - | $ $\ Farzad Fathizadeh, Asghar Ghorbanpour, Masoud Khalkhali title: 'Rationality of Spectral Action for Robertson-Walker Metrics' --- Department of Mathematics, Western University\ London, Ontario, Canada, N6A 5B7 [^1]\ 0.1cm [**Mathematics Subject Classification (2010).**]{} 81T75, 58B34, 58J42. 0.1 cm [**Keywords.**]{} Robertson-Walker metrics, Dirac operator, Spectral action, Heat kernel, Local invariants, Pseudodifferential calculus. Introduction ============ Noncommutative geometry in the sense of Alain Connes [@ConBook] has provided a paradigm for geometry in the noncommutative setting based on spectral data. This generalizes Riemannian geometry [@ConReconstruct] and incorporates physical models of elementary particle physics [@ConGravity; @ConMixing; @ChaConMarGS; @ConMarBook; @ChaConConceptual; @ChaConWhy; @GraIocSch; @Sit; @Sui1; @Sui2]. An outstanding feature of the spectral action defined for noncommutative geometries is that it derives the Lagrangian of the physical models from simple noncommutative geometric data [@ConMixing; @ChaConSAP; @ChaConMarGS]. Thus various methods have been developed for computing the terms in the expansion in the energy scale $\Lambda$ of the spectral action [@ChaConUFNCG; @ChaConGravity; @ChaConUncanny; @ChaConRW; @IocLevVasGlobal; @IocLevVasTorsion]. 
Potential applications of noncommutative geometry in cosmology have recently been carried out in [@KolMar; @Mar; @MarPie; @MarPieTeh2012; @MarPieTeh; @NelOchSal; @NelSak1; @NelSak2; @EstMar]. Noncommutative geometric spaces are described by spectral triples $(\mathcal{A}, \mathcal{H}, D)$, where $\mathcal{A}$ is an involutive algebra represented by bounded operators on a Hilbert space $\mathcal{H}$, and $D$ is an unbounded self-adjoint operator acting in $\mathcal{H}$ [@ConBook]. The operator $D$, which plays the role of the Dirac operator, encodes the metric information and it is further assumed that it has bounded commutators with elements of $\mathcal{A}$. It has been shown that if $\mathcal{A}$ is commutative and the triple satisfies suitable regularity conditions then $\mathcal{A}$ is the algebra of smooth functions on a spin$^c$ manifold $M$ and $D$ is the Dirac operator acting in the Hilbert space of $L^2$-spinors [@ConReconstruct]. In this case, the Seeley-de Witt coefficients $a_{n}(D^2) = \int_M a_n (x, D^2) \,dv(x)$, which vanish for odd $n$, appear in a small time asymptotic expansion of the form $$\textnormal{Tr}(e^{-t D^2}) \sim t^{- \textnormal{dim} (M)/2} \sum_{n\geq 0} a_{2n} (D^2) t^n \qquad (t \to 0).$$ These coefficients determine the terms in the expansion of the spectral action. That is, there is an expansion of the form $$\textnormal{Tr} f(D^2/\Lambda^2) \sim \sum_{n \geq 0} f_{2n}\, a_{2n} (D^2/\Lambda^2),$$ where $f$ is a positive even function defined on the real line, and $f_{2n} $ are the moments of the function $f$ [@ChaConSAP; @ChaConUFNCG]. See Theorem 1.145 in [@ConMarBook] for details in a more general setup, namely for spectral triples with simple dimension spectrum. By devising a direct method based on the Euler-Maclaurin formula and the Feynman-Kac formula, Chamseddine and Connes have initiated in [@ChaConRW] a detailed study of the spectral action for the Robertson-Walker metric with a general cosmic scale factor $a(t)$. 
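As a numerical illustration of the leading term (this toy computation is ours, not from the paper): on the circle, $D^2$ has spectrum $\{n^2 : n\in\mathbb{Z}\}$, and Poisson summation gives $\textnormal{Tr}(e^{-tD^2})=\sum_n e^{-tn^2}\sim \sqrt{\pi/t}$ as $t\to 0$, matching the $t^{-\textnormal{dim}(M)/2}$ prefactor above with $\textnormal{dim}(M)=1$:

```python
import math

def heat_trace(t, N=2000):
    # truncated Tr exp(-t D^2) for spec(D^2) = {n^2 : n in Z} on the circle
    return sum(math.exp(-t * n * n) for n in range(-N, N + 1))

t = 0.01
# Poisson summation: sum_n exp(-t n^2) = sqrt(pi/t) * sum_k exp(-pi^2 k^2 / t);
# the k != 0 corrections are O(exp(-pi^2/t)) and invisible numerically.
assert abs(heat_trace(t) - math.sqrt(math.pi / t)) < 1e-6
```

The exponentially small corrections explain why only the power-law coefficients $a_{2n}$ survive in the asymptotic expansion.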
They calculated the terms up to $a_{10}$ in the expansion and checked the agreement of the terms up to $a_6$ against Gilkey’s universal formulas [@GilBook1; @GilBook2]. The present paper is intended to compute the term $a_{12}$ in the spectral action for general Robertson-Walker metrics, and to prove the conjecture of Chamseddine and Connes [@ChaConRW] on rationality of the coefficients of the polynomials in $a(t)$ and its derivatives that describe the general terms $a_{2n}$ in the expansion. In passing, we compare the outcome of our computations up to the term $a_{10}$ with the expressions obtained in [@ChaConRW], and confirm their agreement. Toward these aims, explicit formulas for the Dirac operator of the Robertson-Walker metric and its pseudodifferential symbol in Hopf coordinates are derived in §\[DiracinHopf\]. Following a brief review of the heat kernel method for computing local invariants of elliptic differential operators using pseudodifferential calculus [@GilBook1], we compute in §\[Termsupto10\] the terms up to $a_{10}$ in the expansion of the spectral action for Robertson-Walker metrics. The outcome of our calculations confirms the expressions obtained in [@ChaConRW]. This in particular forms a check on the validity of $a_8$ and $a_{10}$, which, as also suggested in [@ChaConRW], seems necessary due to the high complexity of the formulas. In §\[Term12\], we record the expression for the term $a_{12}$, obtained by a significantly heavier computation than for the previous terms. It is checked that the reduction of $a_{12}$ to the round case $a(t)=\sin t $ conforms to the full expansion obtained for the round metric in [@ChaConRW] by remarkable calculations based on the Euler-Maclaurin formula. In order to validate our expression for $a_{12}$, parallel but completely different computations are performed in spherical coordinates and the final results are confirmed to match our calculations in Hopf coordinates precisely.
In §\[ProofofConjecture\], we prove the conjecture made in [@ChaConRW] on rationality of the coefficients appearing in the expressions for the terms of the spectral action for Robertson-Walker metrics. That is, we show that the term $a_{2n}$ in the expansion is of the form $Q_{2n}\big(a(t),a'(t),\dots,a^{(2n)}(t)\big)/a(t)^{2n-3}$, where $Q_{2n}$ is a polynomial with rational coefficients. We also find a formula for the coefficient of the term with the highest derivative of $a(t)$ in $a_{2n}$. It is known that values of Feynman integrals for quantum gauge theories are closely related to multiple zeta values and periods in general and hence tend to be transcendental numbers [@MarBook]. In sharp distinction, the rationality result proved in this paper is valid for all scale factors $a(t)$ in Robertson-Walker metrics. Although it might be exceedingly difficult, it is certainly desirable to find all the terms $a_{2n}$ in the spectral action. The rationality result is a consequence of a certain symmetry in the heat kernel and it is plausible that this symmetry would eventually reveal the full structure of the coefficients $a_{2n}$. This is a task for future work. Our main conclusions are summarized in §\[Conclusions\]. The Dirac Operator for Robertson-Walker Metrics {#DiracinHopf} =============================================== According to the spectral action principle [@ConGravity; @ChaConSAP], the spectral action of any geometry depends on its Dirac operator since the terms in the expansion are determined by the high frequency behavior of the eigenvalues of this operator. For spin manifolds, the explicit computation of the Dirac operator in a coordinate system is most efficiently achieved by writing its formula after lifting the Levi-Civita connection on the cotangent bundle to the spin connection on the spin bundle. In this section, we summarize this formalism and compute the Dirac operator of the Robertson-Walker metric in Hopf coordinates.
Throughout this paper we use Einstein’s summation convention without any further notice. Levi-Civita connection. ----------------------- The spin connection of any spin manifold $M$ is the lift of the Levi-Civita connection for the cotangent bundle $T^*M$ to the spin bundle. Let us, therefore, recall the following recipe for computing the Levi-Civita connection and thereby the spin connection of $M$. Given an orthonormal frame $\{\theta_\alpha\}$ for the tangent bundle $TM$ and its dual coframe $\{\theta^\alpha\}$, the connection 1-forms $\omega^\alpha_\beta$ of any connection $\nabla$ on $T^*M$ are defined by $$\nonumber \nabla{\theta^\alpha}=\omega_\beta^\alpha \,\theta^\beta.$$ Since the Levi-Civita connection is the unique torsion free connection which is compatible with the metric, its 1-forms are uniquely determined by $$d\theta^\beta =\omega^\beta_\alpha \wedge \theta^\alpha.$$ This is justified by the fact that the compatibility with metric enforces the relations $$\nonumber \omega^\alpha_\beta=-\omega^\beta_\alpha,$$ while, taking advantage of the first Cartan structure equation, the torsion-freeness amounts to the vanishing of $$\nonumber T^\alpha = d\theta^\alpha - \omega^\alpha_\beta \wedge \theta^\beta.$$ The spin connection of Robertson-Walker metrics in Hopf coordinates. --------------------------------------------------------------------- The (Euclidean) Robertson-Walker metric with the cosmic scale factor $a(t)$ is given by $$\nonumber ds^{2}=dt^{2}+a^{2}\left( t\right) d\sigma^2,$$ where $d\sigma^2$ is the round metric on the 3-sphere $\mathbb{S}^3$. It is customary to write this metric in spherical coordinates, however, for our purposes which will be explained below, it is more convenient to use the Hopf coordinates, which parametrize the 3-sphere $S^3\subset \mathbb{C}^2$ by $$\nonumber z_1=e^{i\phi_1}\sin(\eta), \qquad z_2=e^{i\phi_2}\cos(\eta),$$ with $\eta$ ranging in $[0,\pi/2)$ and $\phi_1,\phi_2$ ranging in $ [0,2\pi)$. 
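As a quick symbolic check (an illustration added here, not part of the original text), one can verify that this parametrization of $\mathbb{S}^3 \subset \mathbb{C}^2$ induces exactly the round metric $d\sigma^2 = d\eta^2+\sin^2(\eta)\,d\phi_1^2+\cos^2(\eta)\,d\phi_2^2$, by expanding $|dz_1|^2+|dz_2|^2$ with the differentials treated as formal commuting symbols:

```python
from sympy import symbols, sin, cos, exp, I, conjugate, expand, simplify

eta, p1, p2 = symbols('eta phi1 phi2', real=True)
de, dp1, dp2 = symbols('deta dphi1 dphi2', real=True)  # formal differentials

z1 = exp(I * p1) * sin(eta)
z2 = exp(I * p2) * cos(eta)

def d(f):
    # total differential of f as a formal linear combination of deta, dphi1, dphi2
    return f.diff(eta) * de + f.diff(p1) * dp1 + f.diff(p2) * dp2

dsigma2 = expand(d(z1) * conjugate(d(z1)) + d(z2) * conjugate(d(z2)))
expected = de**2 + sin(eta)**2 * dp1**2 + cos(eta)**2 * dp2**2
print(simplify(dsigma2 - expected))  # 0
```

The cross terms in $d\eta\,d\phi_1$ and $d\eta\,d\phi_2$ cancel identically, which is why no off-diagonal terms appear in the metric below.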
The Robertson-Walker metric in the coordinate system $x=(t, \eta, \phi_1, \phi_2)$ is thus given by $$\nonumber ds^{2}=dt^{2}+a^{2}\left( t\right)\left(d\eta^2+\sin^2(\eta)d\phi_1^2+\cos^2(\eta)d\phi_2^2\right).$$ An orthonormal coframe for $ds^{2}$ is then provided by $$\begin{aligned} \theta^1= dt, \qquad \theta^2 =a(t)\, d \eta, \qquad \theta^3 = a(t)\, \sin \eta \,d \phi_1, \qquad \theta^4 = a(t)\, \cos \eta \, d \phi_2. \nonumber \end{aligned}$$ Applying the exterior derivative to these forms, one can easily show that they satisfy the following equations, which determine the connection 1-forms of the Levi-Civita connection: $$\begin{aligned} && d\theta^1= 0, \nonumber \\ && d\theta^2 =\frac{a'(t)}{a(t)}\, \theta^1\wedge \theta^2, \nonumber \\ && d\theta^3 =\frac{a'(t)}{a(t)}\, \theta^1\wedge \theta^3+ \frac{\cot\eta}{a(t)}\, \theta^2\wedge \theta^3, \nonumber \\ && d\theta^4 =\frac{a'(t)}{a(t)}\, \theta^1\wedge \theta^4- \frac{\tan\eta}{a(t)}\, \theta^2\wedge \theta^4. \nonumber \end{aligned}$$ We recast the above equations into the matrix of connection 1-forms $$\omega=\frac{1}{a(t)}\left( \begin{array}{cccc} 0 & - a'(t)\, \theta ^2 & - a'(t)\, \theta ^3 & - a'(t)\, \theta ^4 \\ a'(t)\, \theta ^2 & 0 & -\cot \eta \, \theta ^3 & \tan \eta \, \theta ^4 \\ a'(t)\, \theta ^3 & \cot \eta \, \theta ^3 & 0 & 0 \\ a'(t)\, \theta ^4 & - \tan\eta \, \theta ^4 & 0 & 0 \\ \end{array} \right) \in\mathfrak{so}(4),$$ which lifts to the spin bundle using the Lie algebra isomorphism $\mu:\mathfrak{so}(4)\to \mathfrak{spin}(4)$ given by (see [@LawMic]) $$\nonumber \mu(A)= \frac{1}{4}\sum_{\alpha,\beta}\langle A\theta^\alpha,\theta^\beta\rangle c(\theta^\alpha)c(\theta^\beta), \qquad A \in \mathfrak{so}(4).$$ Since $\langle \omega\theta^\alpha,\theta^\beta\rangle =\omega^\alpha_\beta$, the lifted connection $\tilde{\omega}$ is written as $$\tilde\omega=\frac{1}{4}\sum_{\alpha,\beta}\omega^\alpha_\beta c(\theta^\alpha)c(\theta^\beta).$$ In the case of the Robertson-Walker metric 
we find that $$\label{exprspinconn} \tilde\omega=\frac{1}{2a(t)} \left( a'(t)\theta ^2 \gamma ^{12}+ a'(t)\theta ^3 \gamma ^{13}+ a'(t) \theta ^4\gamma ^{14}+ \cot (\eta ) \theta ^3\gamma ^{23}- \tan (\eta ) \theta ^4\gamma ^{24}\right),$$ where we use the notation $\gamma^{i j} = \gamma^i \gamma^j$ for products of pairs of the gamma matrices $\gamma^1, \gamma^2, \gamma^3, \gamma^4$, which are respectively written as $$\left( \begin{array}{cccc} 0 & 0 & i & 0 \\ 0 & 0 & 0 & i \\ i & 0 & 0 & 0 \\ 0 & i & 0 & 0 \end{array} \right), \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccc} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ -i & 0 & 0 & 0 \end{array} \right), \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right).$$ The Dirac Operator of Robertson-Walker metrics in Hopf coordinates. ------------------------------------------------------------------- Using the expression obtained for the spin connection and considering the predual of the orthonormal coframe $\{ \theta^\alpha \}$, $$\begin{aligned} \theta_1= \frac{\partial}{\partial t}, \qquad \theta_2=\frac{1}{a(t)}\frac{\partial}{\partial \eta}, \qquad \theta_3 = \frac{1}{a(t)\, \sin \eta} \frac{\partial}{\partial \phi_1}, \qquad \theta_4 = \frac{1}{a(t)\, \cos \eta} \frac{\partial}{\partial \phi_2}, \nonumber \end{aligned}$$ we compute the Dirac operator for the Robertson-Walker metric explicitly: $$\begin{aligned} D&=c(\theta^\alpha)\nabla_{\theta_\alpha} \\ &=\gamma^\alpha\left(\theta_\alpha+\tilde \omega(\theta_\alpha)\right)\\ &=\gamma^1\left(\frac{\partial}{\partial t}\right)+ \gamma^2\left(\frac{1}{a}\frac{\partial}{\partial \eta}+\frac{a'}{2a}\gamma^{12}\right) +\gamma^3\left(\frac{1}{a\sin(\eta)}\frac{\partial}{\partial \phi_1}+\frac{a'}{2a}\gamma^{13}+\frac{\cot(\eta)}{2a}\gamma^{23}\right)\\ & \quad 
+\gamma^4\left(\frac{1}{a\cos(\eta)}\frac{\partial}{\partial \phi_2}+\frac{a'}{2a}\gamma^{14}-\frac{\tan(\eta)}{2a}\gamma^{24}\right)\\ &=\gamma^1 \frac{\partial}{\partial t}+\gamma^2 \frac{1}{a}\frac{\partial}{\partial \eta}+\gamma^3 \frac{1}{a\, \sin \eta} \frac{\partial}{\partial \phi_1}+\gamma^4 \frac{1}{a\, \cos \eta } \frac{\partial}{\partial \phi_2} +\frac{3a'}{2a}\gamma^1+\frac{\cot(2\eta)}{a}\gamma^2. \end{aligned}$$ Thus the pseudodifferential symbol of $D$ is given by $$\begin{aligned} \nonumber \sigma_D({ x,\xi}) = i\xi_1\gamma^1+ \frac{i\xi_2}{a}\gamma^2+ \frac{i\xi_3}{a\, \sin \eta}\gamma^3+ \frac{i\xi_4}{a\, \cos \eta } \gamma^4 +\frac{3a'}{2a}\gamma^1+\frac{\cot(2\eta)}{a}\gamma^2. \end{aligned}$$ For the purpose of employing pseudodifferential calculus in the sequel to compute the heat coefficients, we record in the following proposition the pseudodifferential symbol of $D^2$. This can be achieved by a straightforward computation to find an explicit expression for $D^2$, or alternatively, one can apply the composition rule for symbols, $\sigma_{P_1 P_2}({ x,\xi})=\sum_\alpha \frac{(-i)^{|\alpha|}}{\alpha !}\partial^\alpha_\xi\sigma_{P_1}\partial^\alpha_{ x}\sigma_{P_2}$, to the symbol of $D$. 
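The four gamma matrices listed above can be checked directly: they are anti-Hermitian and satisfy the Euclidean Clifford relation in the convention $\gamma^a\gamma^b+\gamma^b\gamma^a=-2\delta^{ab}I_4$, which is what makes the principal symbol of $D^2$ positive definite. A short numerical verification (added for illustration):

```python
import numpy as np

j = 1j
g = [
    np.array([[0, 0, j, 0], [0, 0, 0, j], [j, 0, 0, 0], [0, j, 0, 0]]),
    np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]),
    np.array([[0, 0, 0, -j], [0, 0, j, 0], [0, j, 0, 0], [-j, 0, 0, 0]]),
    np.array([[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]),
]

for a in range(4):
    assert np.allclose(g[a].conj().T, -g[a])  # each gamma^a is anti-Hermitian
    for b in range(4):
        anti = g[a] @ g[b] + g[b] @ g[a]
        target = -2 * np.eye(4) if a == b else np.zeros((4, 4))
        assert np.allclose(anti, target)  # {gamma^a, gamma^b} = -2 delta^{ab} I_4
print("Clifford relations verified")
```

With this convention, $(i\xi_a\gamma^a)^2 = -\tfrac{1}{2}\xi_a\xi_b\{\gamma^a,\gamma^b\} = \|\xi\|^2$, consistent with the plus signs in the principal symbol $p_2$ recorded in the proposition below.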
The pseudodifferential symbol of $D^2$, where $D$ is the Dirac operator for the Robertson-Walker metric, is given by $$\sigma(D^2)= p_2 + p_1 + p_0,$$ where the homogeneous components $p_i$ of order $i$ are written as $$\begin{aligned} \label{symbolHopf} p_2&=&\xi _1^2+\frac{1}{a^2}\xi _2^2+\frac{1}{a^2 \sin ^2(\eta )}\xi _3^2+\frac{ 1}{a^2\cos ^2(\eta)}\xi _4^2, \nonumber \\ p_1&=& \frac{-3 i a a'}{a^2}\xi _1+\frac{-i a' \gamma ^{12}-2i \cot (2\eta )}{a^2}\xi _2 -\frac{i a' \csc (\eta ) \gamma ^{13}+i \cot (\eta ) \csc (\eta ) \gamma ^{23}}{a^2}\xi _3 \nonumber \\ && +\frac{i \tan (\eta ) \sec (\eta ) \gamma ^{24}-i a' \sec (\eta ) \gamma ^{14} }{a^2}\xi _4 , \nonumber \\ p_0&=&\frac{1}{4 a(t)^2}\Big(-6 a(t) a''(t)-3 a'(t)^2+\csc ^2(\eta )+\sec ^2(\eta ) \nonumber \\ &&+4+2 a'(t) (\cot (\eta )-\tan (\eta ))\gamma ^{12}\Big). \end{aligned}$$ Terms up to $a_{10}$ and their Agreement with Chamseddine-Connes’ Result {#Termsupto10} ======================================================================== The computation of the terms in the expansion of the spectral action for a spin manifold, or equivalently the calculation of the heat coefficients, can be achieved by recursive formulas while working in the heat kernel scheme of local invariants of elliptic differential operators and index theory [@GilBook1]. Pseudodifferential calculus is an effective tool for dealing with the necessary approximations for deriving the small time asymptotic expansions in which the heat coefficients appear. Universal formulas in terms of the Riemann curvature operator and its contractions and covariant derivatives are written in the literature only for the terms up to $a_{10}$, namely Gilkey’s formulas up to $a_6$ [@GilBook1; @GilBook2] and the formulas in [@AmsBerOc; @Avr; @Van] for $a_8$ and $a_{10}$. Small time heat kernel expansions using pseudodifferential calculus. 
{#heatcoefsbypesudo} -------------------------------------------------------------------- In [@GilBook1], by appealing to the Cauchy integral formula and using pseudodifferential calculus, recursive formulas for the heat coefficients of elliptic differential operators are derived. That is, one writes [^2] $$e^{-tD^2}=-\frac{1}{2\pi i}\int_\gamma e^{-t\lambda}(D^2-\lambda)^{-1}d\lambda,$$ where the contour $\gamma$ goes around the non-negative real axis in the counterclockwise direction, and one uses pseudodifferential calculus to approximate $(D^2-\lambda)^{-1}$ via the homogeneous terms appearing in the expansion of the symbol of the parametrix of $D^2-\lambda$. Although left and right parametrices have the same homogeneous components, for the purpose of finding recursive formulas for the coefficients appearing in each component, which will be explained shortly, it is more convenient for us to consider the right parametrix $\tilde{R}(\lambda)$. Therefore, the next task is to compute recursively the homogeneous pseudodifferential symbols $r_j$ of order $-2-j$ in the expansion of $\sigma(\tilde{R}(\lambda))$. 
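Before specializing to the Robertson-Walker symbols, the mechanics of the parametrix-plus-contour-integral scheme can be illustrated in the trivial one-dimensional case $p_2=\xi^2$ (the flat Laplacian $-d^2/dx^2$): the residue of $e^{-\lambda}\,r_0^j$ at $\lambda=\xi^2$ produces the factor $-e^{-\xi^2}/(j-1)!$ appearing in the identities quoted below, and the remaining Gaussian $\xi$-integral yields the familiar leading heat coefficient. A sympy sketch of this (an added illustration, with the contour taken counterclockwise around the pole):

```python
from sympy import symbols, exp, residue, integrate, simplify, sqrt, pi, oo, factorial

xi = symbols('xi', real=True)
lam = symbols('lambda')
p2 = xi**2            # principal symbol of -d^2/dx^2 in one dimension
r0 = 1 / (p2 - lam)   # leading symbol of the parametrix of p2 - lambda

# (1/(2*pi*i)) times the contour integral of e^{-lambda} r0^j equals the
# residue at lambda = xi^2, namely -e^{-xi^2}/(j-1)!:
for j in (1, 2, 3):
    res = residue(exp(-lam) * r0**j, lam, xi**2)
    assert simplify(res + exp(-xi**2) / factorial(j - 1)) == 0

# The xi-integration then yields the leading heat coefficient sqrt(pi),
# reproducing the flat one-dimensional heat kernel (4*pi*t)^(-1/2) after
# the overall (2*pi)^(-1) normalization of the xi-integral.
e0 = integrate(exp(-xi**2), (xi, -oo, oo))
print(e0)  # sqrt(pi)
```

In four dimensions the same residue computation, applied factor by factor, is what produces the $(j-1)!$ and the Gaussian moments in the closed formula for $e_n(x)$ derived below.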
Using the calculus of symbols, with the crucial nuance that $\lambda$ is considered to be of order 2, one finds that $$r_0=(p_2-\lambda)^{-1},$$ and for any $n\geq 1$ $$\begin{aligned} \label{recursive1} r_n=-r_0\sum_{\begin{array}{c}|\alpha|+j+2-k=n\\ j<n\end{array}} \frac{(-i)^{|\alpha|}}{\alpha!}d^\alpha_\xi p_k\,d_x^\alpha r_j.\end{aligned}$$ To summarize the process of obtaining the heat coefficients: one then uses these homogeneous terms in the Cauchy integral formula to approximate the integral kernel of $e^{-t D^2}.$ Integration of the kernel of this operator on the diagonal yields a small time asymptotic expansion of the form $$\nonumber {\rm Tr}(e^{-tD^2})\sim \sum_{n=0}^\infty \frac{t^{(n-4)/2}}{16\pi^4}\int {\rm tr}(e_n(x)) \,dvol_g \qquad (t\to 0),$$ where $$\label{engeneralform} e_n(x) \sqrt{\det g}=\frac{-1}{2\pi i}\int \int_\gamma e^{-\lambda}r_n(x,\xi,\lambda)\,d\lambda \,d\xi.$$ For detailed discussions, we refer the reader to [@GilBook1]. It is clear from the expressions for the homogeneous components $p_i$ that cross derivatives of $p_2$ vanish and that $d_\xi^\alpha p_k=0$ if $|\alpha|>k$. Furthermore, $\frac{\partial}{\partial\phi_k}r_n=0$ for $n \geq 0$, and the summation is written as $$\begin{aligned} \label{rnshort} r_n&=&-r_0\,p_0\,r_{n-2} -r_0\,p_1\,r_{n-1} +ir_0\frac{\partial}{\partial\xi_1}p_1 \frac{\partial}{\partial t }r_{n-2} +ir_0\frac{\partial}{\partial\xi_2}p_1 \frac{\partial}{\partial \eta}r_{n-2} \nonumber \\ &&+ir_0\frac{\partial}{\partial\xi_1}p_2 \frac{\partial}{\partial t }r_{n-1} +ir_0\frac{\partial}{\partial\xi_2}p_2 \frac{\partial}{\partial \eta }r_{n-1} +\frac{1}{2}r_0\frac{\partial^2}{\partial\xi_1^2}p_2 \frac{\partial^2}{\partial t^2 }r_{n-2} \nonumber \\ &&+\frac{1}{2}r_0\frac{\partial^2}{\partial\xi_2^2}p_2 \frac{\partial^2}{\partial \eta^2 }r_{n-2}.
\end{aligned}$$ Using induction, we find that $$\label{rnja} r_n=\sum_{ \begin{array}{c} 2j-2-|\alpha|=n \\ n/2+1 \leq j \leq 2n+1\end{array}}r_{n,j,\alpha }(x)\, r_0^j \,\xi^\alpha.$$ For example, one can see that for $n=0$ the only non-zero $r_{0,j,\alpha}$ is $r_{0,1,\bf{0}}=1$, and for $n=1$ the non-vanishing terms are $$r_{1,2,{\bf e}_k}=\frac{\partial p_1}{\partial \xi_k}, \qquad r_{1,3,2{\bf e}_l+{\bf e}_k}=-2ig^{kk}\frac{\partial g^{ll}}{\partial x_k},$$ where ${\bf e}_j$ denotes the $j$-th standard unit vector in $\mathbb{R}^4$. It then follows from the preceding equations that $$\begin{aligned} \label{en} e_n(x) \,a(t)^{3}\sin(\eta)\cos(\eta)&=\frac{-1}{2\pi i}\int_{\mathbb{R}^4}\int_\gamma e^{-t\lambda} r_n(x,\xi,\lambda)\, d\lambda \,d\xi\nonumber\\ &=\sum r_{n,j,\alpha}(x)\int_{\mathbb{R}^4}\xi^\alpha \frac{-1}{2\pi i}\int_\gamma e^{-t\lambda}r_0^j\,d\lambda \,d\xi\\ &=\sum \frac{c_\alpha}{(j-1)!} r_{n,j,\alpha} \,a(t)^{\alpha_2+\alpha_3+\alpha_4+3}\sin(\eta)^{\alpha_3+1}\cos(\eta)^{\alpha_4+1},\nonumber \end{aligned}$$ where $$c_\alpha=\prod_k \Gamma\left(\frac{\alpha_k+1}{2}\right)\frac{(-1)^{\alpha_k}+1}{2}.$$ It is straightforward to justify the latter using these identities: $$\begin{aligned} \frac{1}{2\pi i}\int_\gamma e^{-\lambda}r_0^jd\lambda&=&(-1)^{j}\frac{(-1)^{j-1}}{(j-1)!}e^{-||\xi||^2}=\frac{-1}{(j-1)!}\prod_{k=1}^4 e^{-g^{kk}\xi_k^2}, \nonumber \\ \int_\mathbb{R}x^ne^{-bx^2}dx&=&\frac{1}{2} \left((-1)^n+1\right) b^{-\frac{n}{2}-\frac{1}{2}} \Gamma \left(\frac{n+1}{2}\right).\nonumber\end{aligned}$$ A key point that facilitates our calculations and the proof of our main theorem presented in §\[proofofrationality\] is the derivation of recursive formulas for the coefficients $r_{n, j, \alpha}$ as follows.
By substituting the expansion of $r_n$ into the recursion, we find a recursive formula of the form $$\begin{aligned} \label{rnjarec} r_{n,j,\alpha}&= -p_0r_{n-2,j-1,\alpha}-\sum_{k}\frac{\partial p_1}{\partial\xi_k} r_{n-1,j-1,\alpha-{\bf e}_k}\nonumber\\ &\qquad +i\sum_{k}\frac{\partial p_1}{\partial \xi_k}\frac{\partial}{\partial x_k}r_{n-2,j-1,\alpha}+i(2-j)\sum_{k,l}\frac{\partial g^{ll}}{\partial x_k}\frac{\partial p_1}{\partial \xi_k}r_{n-2,j-2,\alpha-2{\bf e}_l}\nonumber\\ &\qquad+2i\sum_k g^{kk} \frac{\partial}{\partial x_k}r_{n-1,j-1,\alpha-{\bf e}_k}+i (4-2j)\sum_{k,l}g^{kk}\frac{\partial g^{ll}}{\partial x_k} r_{n-1,j-2,\alpha-2{\bf e}_l-{\bf e}_k}\\ &\qquad +\sum_k g^{kk} \frac{\partial^2}{\partial x_k^2}r_{n-2,j-1,\alpha}+(4-2j)\sum_{k,l}g^{kk}\frac{\partial g^{ll}}{\partial x_k} \frac{\partial}{\partial x_k}r_{n-2,j-2,\alpha-2{\bf e}_l}\nonumber\\ &\qquad +(2-j)\sum_{k,l}g^{kk} \frac{\partial^2 g^{ll}}{\partial x_k^2} r_{n-2,j-2,\alpha-2{\bf e}_l}\nonumber \\ &\qquad +(3-j)(2-j)\sum_{k,l,l'}g^{kk} \frac{\partial g^{ll}}{\partial x_k}\frac{\partial g^{l'l'}}{\partial x_k} r_{n-2,j-3,\alpha-2{\bf e}_l-2{\bf e}_{l'}}.\nonumber\end{aligned}$$ The mechanism described above for computing the heat coefficients involves heavy computations, which we carry out by computer programming. Calculating explicitly the functions $e_n(x)$, $n=0, 2, \dots, 12$, and computing their integrals over $\mathbb{S}_a^3$ with computer assistance, we find the explicit polynomials in $a(t)$ and its derivatives recorded in the sequel, which describe the corresponding terms in the expansion of the spectral action for the Robertson-Walker metric. That is, each function $a_n$ recorded below is the outcome of $$\begin{aligned} a_n &=&\frac{1}{16\pi^4}\int_{\mathbb{S}_a^3}{\rm tr}(e_n) \,dvol_g \nonumber \\ &=&\frac{1}{16\pi^4}\int_0^{2\pi}\int_0^{2\pi}\int_0^{\pi/2}{\rm tr}(e_n) \, a(t)^{3}\sin(\eta)\cos(\eta) \,d\eta \,d\phi_1 \,d\phi_2.
\nonumber\end{aligned}$$ The terms up to $a_{6}$ ----------------------- These terms were computed in [@ChaConRW] by their direct method, which is based on the Euler-Maclaurin summation formula and the Feynman-Kac formula, and they were checked by Gilkey’s universal formulas. Our computations based on the method explained in the previous subsection also give the same result. The first term, whose integral up to a universal factor gives the volume, is given by $$a_0=\frac{a(t)^3}{2}.$$ Since the latter appears as the leading term in the small time asymptotic expansion of the heat kernel, it is related to Weyl’s law, which recovers the volume from the asymptotic distribution of the eigenvalues of $D^2$. The next term, which is related to the scalar curvature, has the expression $$a_2= \frac{1}{4} a(t) \left(a(t) a''(t)+a'(t)^2-1\right).$$ The next term $a_4$, whose integral is topological, is related to the Gauss-Bonnet term (cf. [@ChaConRW]) and is written as $$a_4=\frac{1}{120} \Big(3 a^{(4)}(t) a(t)^2+3 a(t) a''(t)^2-5 a''(t)+9 a^{(3)}(t) a(t) a'(t)-4 a'(t)^2 a''(t)\Big).$$ The term $a_6$, which is the last term for which Gilkey’s universal formulas are written, is given by $a_6=\frac{1}{5040 a(t)^2}\Big(9 a^{(6)}(t) a(t)^4-21 a^{(4)}(t) a(t)^2-3 a^{(3)}(t)^2 a(t)^3-56 a(t)^2 a''(t)^3+42 a(t) a''(t)^2+36 a^{(5)}(t) a(t)^3 a'(t)+6 a^{(4)}(t) a(t)^3 a''(t)-42 a^{(4)}(t) a(t)^2 a'(t)^2+60 a^{(3)}(t) a(t) a'(t)^3+21 a^{(3)}(t) a(t) a'(t)+240 a(t) a'(t)^2 a''(t)^2-60 a'(t)^4 a''(t)-21 a'(t)^2 a''(t)-252 a^{(3)}(t) a(t)^2 a'(t) a''(t)\Big).$\ The terms $a_8$ and $a_{10}$ ---------------------------- These terms were computed by Chamseddine and Connes in [@ChaConRW] using their direct method. In order to form a check on the final formulas, they suggested using the universal formulas of [@AmsBerOc; @Avr; @Van] to calculate these terms and compare the results.
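As a sanity check on the closed formulas for $a_2$, $a_4$ and $a_6$ above (an added illustration; the original verification is against Gilkey's universal formulas), one can reduce them to the round case $a(t)=\sin t$: they collapse to $-\tfrac{1}{2}\sin^3 t$, $\tfrac{11}{120}\sin^3 t$ and $\tfrac{31}{2520}\sin^3 t$, whose coefficients match the constants $11/120$ and $31/2520$ appearing in the round-metric expansion of [@ChaConRW] recalled in the final section. A sympy sketch:

```python
from sympy import symbols, sin, Rational, simplify

t = symbols('t')
a = sin(t)                  # the round case a(t) = sin(t)
d = lambda k: a.diff(t, k)  # a^{(k)}(t)

a2 = Rational(1, 4) * a * (a * d(2) + d(1)**2 - 1)
a4 = Rational(1, 120) * (3 * d(4) * a**2 + 3 * a * d(2)**2 - 5 * d(2)
                         + 9 * d(3) * a * d(1) - 4 * d(1)**2 * d(2))
a6 = Rational(1, 5040) / a**2 * (
    9 * d(6) * a**4 - 21 * d(4) * a**2 - 3 * d(3)**2 * a**3 - 56 * a**2 * d(2)**3
    + 42 * a * d(2)**2 + 36 * d(5) * a**3 * d(1) + 6 * d(4) * a**3 * d(2)
    - 42 * d(4) * a**2 * d(1)**2 + 60 * d(3) * a * d(1)**3 + 21 * d(3) * a * d(1)
    + 240 * a * d(1)**2 * d(2)**2 - 60 * d(1)**4 * d(2) - 21 * d(1)**2 * d(2)
    - 252 * d(3) * a**2 * d(1) * d(2))

# each simplifies to a rational multiple of sin(t)**3
print(simplify(a2), simplify(a4), simplify(a6))
```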
As mentioned earlier, Gilkey’s universal formulas were used in [@ChaConRW] to check the terms up to $a_6$, however, they are written in the literature only up to $a_6$ and become rather complicated even for this term. In this subsection, we pursue the computation of the terms $a_8$ and $a_{10}$ in the expansion of the spectral action for Robertson-Walker metrics by continuing to employ pseudodifferential calculus, as presented in §\[heatcoefsbypesudo\], and check that the final formulas agree with the result in [@ChaConRW]. The final formulas for $a_8$ and $a_{10}$ are the following expressions: $$a_8=$$ $-\frac{1}{10080 a(t)^4}\Big(-a^{(8)}(t) a(t)^6+3 a^{(6)}(t) a(t)^4+13 a^{(4)}(t)^2 a(t)^5-24 a^{(3)}(t)^2 a(t)^3-114 a(t)^3 a''(t)^4+43 a(t)^2 a''(t)^3-5 a^{(7)}(t) a(t)^5 a'(t)+2 a^{(6)}(t) a(t)^5 a''(t)+9 a^{(6)}(t) a(t)^4 a'(t)^2+16 a^{(3)}(t) a^{(5)}(t) a(t)^5-24 a^{(5)}(t) a(t)^3 a'(t)^3-6 a^{(5)}(t) a(t)^3 a'(t)+69 a^{(4)}(t) a(t)^4 a''(t)^2-36 a^{(4)}(t) a(t)^3 a''(t)+60 a^{(4)}(t) a(t)^2 a'(t)^4+15 a^{(4)}(t) a(t)^2 a'(t)^2+90 a^{(3)}(t)^2 a(t)^4 a''(t)-216 a^{(3)}(t)^2 a(t)^3 a'(t)^2-108 a^{(3)}(t) a(t) a'(t)^5-27 a^{(3)}(t) a(t) a'(t)^3+801 a(t)^2 a'(t)^2 a''(t)^3-588 a(t) a'(t)^4 a''(t)^2-87 a(t) a'(t)^2 a''(t)^2+108 a'(t)^6 a''(t)+27 a'(t)^4 a''(t)+78 a^{(5)}(t) a(t)^4 a'(t) a''(t)+132 a^{(3)}(t) a^{(4)}(t) a(t)^4 a'(t)-312 a^{(4)}(t) a(t)^3 a'(t)^2 a''(t)-819 a^{(3)}(t) a(t)^3 a'(t) a''(t)^2+768 a^{(3)}(t) a(t)^2 a'(t)^3 a''(t)+102 a^{(3)}(t) a(t)^2 a'(t) a''(t)\Big),$ \ and $$a_{10}=$$ $\frac{1}{665280 a(t)^6}\Big(3 a^{(10)}(t) a(t)^8-222 a^{(5)}(t)^2 a(t)^7-348 a^{(4)}(t) a^{(6)}(t) a(t)^7-147 a^{(3)}(t) a^{(7)}(t) a(t)^7-18 a''(t) a^{(8)}(t) a(t)^7+18 a'(t) a^{(9)}(t) a(t)^7-482 a''(t) a^{(4)}(t)^2 a(t)^6-331 a^{(3)}(t)^2 a^{(4)}(t) a(t)^6-1110 a''(t) a^{(3)}(t) a^{(5)}(t) a(t)^6-1556 a'(t) a^{(4)}(t) a^{(5)}(t) a(t)^6-448 a''(t)^2 a^{(6)}(t) a(t)^6-1074 a'(t) a^{(3)}(t) a^{(6)}(t) a(t)^6-476 a'(t) a''(t) a^{(7)}(t) a(t)^6-43 a'(t)^2 a^{(8)}(t) 
a(t)^6-11 a^{(8)}(t) a(t)^6+8943 a'(t) a^{(3)}(t)^3 a(t)^5+21846 a''(t)^2 a^{(3)}(t)^2 a(t)^5+4092 a'(t)^2 a^{(4)}(t)^2 a(t)^5+396 a^{(4)}(t)^2 a(t)^5+10560 a''(t)^3 a^{(4)}(t) a(t)^5+39402 a'(t) a''(t) a^{(3)}(t) a^{(4)}(t) a(t)^5+11352 a'(t) a''(t)^2 a^{(5)}(t) a(t)^5+6336 a'(t)^2 a^{(3)}(t) a^{(5)}(t) a(t)^5+594 a^{(3)}(t) a^{(5)}(t) a(t)^5+2904 a'(t)^2 a''(t) a^{(6)}(t) a(t)^5+264 a''(t) a^{(6)}(t) a(t)^5+165 a'(t)^3 a^{(7)}(t) a(t)^5+33 a'(t) a^{(7)}(t) a(t)^5-10338 a''(t)^5 a(t)^4-95919 a'(t)^2 a''(t) a^{(3)}(t)^2 a(t)^4-3729 a''(t) a^{(3)}(t)^2 a(t)^4-117600 a'(t) a''(t)^3 a^{(3)}(t) a(t)^4-68664 a'(t)^2 a''(t)^2 a^{(4)}(t) a(t)^4-2772 a''(t)^2 a^{(4)}(t) a(t)^4-23976 a'(t)^3 a^{(3)}(t) a^{(4)}(t) a(t)^4-2640 a'(t) a^{(3)}(t) a^{(4)}(t) a(t)^4-12762 a'(t)^3 a''(t) a^{(5)}(t) a(t)^4-1386 a'(t) a''(t) a^{(5)}(t) a(t)^4-651 a'(t)^4 a^{(6)}(t) a(t)^4-132 a'(t)^2 a^{(6)}(t) a(t)^4+111378 a'(t)^2 a''(t)^4 a(t)^3+2354 a''(t)^4 a(t)^3+31344 a'(t)^4 a^{(3)}(t)^2 a(t)^3+3729 a'(t)^2 a^{(3)}(t)^2 a(t)^3+236706 a'(t)^3 a''(t)^2 a^{(3)}(t) a(t)^3+13926 a'(t) a''(t)^2 a^{(3)}(t) a(t)^3+43320 a'(t)^4 a''(t) a^{(4)}(t) a(t)^3+5214 a'(t)^2 a''(t) a^{(4)}(t) a(t)^3+2238 a'(t)^5 a^{(5)}(t) a(t)^3+462 a'(t)^3 a^{(5)}(t) a(t)^3-162162 a'(t)^4 a''(t)^3 a(t)^2-11880 a'(t)^2 a''(t)^3 a(t)^2-103884 a'(t)^5 a''(t) a^{(3)}(t) a(t)^2-13332 a'(t)^3 a''(t) a^{(3)}(t) a(t)^2-6138 a'(t)^6 a^{(4)}(t) a(t)^2-1287 a'(t)^4 a^{(4)}(t) a(t)^2+76440 a'(t)^6 a''(t)^2 a(t)+10428 a'(t)^4 a''(t)^2 a(t)+11700 a'(t)^7 a^{(3)}(t) a(t)+2475 a'(t)^5 a^{(3)}(t) a(t)-11700 a'(t)^8 a''(t)-2475 a'(t)^6 a''(t)\Big).$ Computation of the Term $a_{12}$ in the Expansion of the Spectral Action {#Term12} ======================================================================== We pursue the computation of the term $a_{12}$ in the expansion of the spectral action for Robertson-Walker metrics by employing pseudodifferential calculus to find the term $r_{12}$ for the parametrix of $\lambda - D^2$, which is homogeneous 
of order $-14$, and by performing the appropriate integrations. Since there is no universal formula in the literature for this term, we have performed two heavy computations, one in Hopf coordinates and the other in spherical coordinates, to form a check on the validity of the outcome of our calculations. Another efficient way of computing the term $a_{12}$ is to use the direct method of [@ChaConRW]. The result of the computation in Hopf coordinates. {#exprfora12} -------------------------------------------------- Continuing the recursive procedure commenced in the previous section and exploiting computer assistance, while the calculation becomes significantly heavier for the term $a_{12}$, we find the following expression: $$a_{12}=$$ $\frac{1}{17297280 a(t)^{8}}\Big(3 a^{(12)}(t) a(t)^{10}-1057 a^{(6)}(t)^2 a(t)^9-1747 a^{(5)}(t) a^{(7)}(t) a(t)^9-970 a^{(4)}(t) a^{(8)}(t) a(t)^9-317 a^{(3)}(t) a^{(9)}(t) a(t)^9-34 a''(t) a^{(10)}(t) a(t)^9+21 a'(t) a^{(11)}(t) a(t)^9+5001 a^{(4)}(t)^3 a(t)^8+2419 a''(t) a^{(5)}(t)^2 a(t)^8+19174 a^{(3)}(t) a^{(4)}(t) a^{(5)}(t) a(t)^8+4086 a^{(3)}(t)^2 a^{(6)}(t) a(t)^8+2970 a''(t) a^{(4)}(t) a^{(6)}(t) a(t)^8-5520 a'(t) a^{(5)}(t) a^{(6)}(t) a(t)^8-511 a''(t) a^{(3)}(t) a^{(7)}(t) a(t)^8-4175 a'(t) a^{(4)}(t) a^{(7)}(t) a(t)^8-745 a''(t)^2 a^{(8)}(t) a(t)^8-2289 a'(t) a^{(3)}(t) a^{(8)}(t) a(t)^8-828 a'(t) a''(t) a^{(9)}(t) a(t)^8-62 a'(t)^2 a^{(10)}(t) a(t)^8-13 a^{(10)}(t) a(t)^8+45480 a^{(3)}(t)^4 a(t)^7+152962 a''(t)^2 a^{(4)}(t)^2 a(t)^7+203971 a'(t) a^{(3)}(t) a^{(4)}(t)^2 a(t)^7+21369 a'(t)^2 a^{(5)}(t)^2 a(t)^7+1885 a^{(5)}(t)^2 a(t)^7+410230 a''(t) a^{(3)}(t)^2 a^{(4)}(t) a(t)^7+163832 a'(t) a^{(3)}(t)^2 a^{(5)}(t) a(t)^7+250584 a''(t)^2 a^{(3)}(t) a^{(5)}(t) a(t)^7+244006 a'(t) a''(t) a^{(4)}(t) a^{(5)}(t) a(t)^7+42440 a''(t)^3 a^{(6)}(t) a(t)^7+163390 a'(t) a''(t) a^{(3)}(t) a^{(6)}(t) a(t)^7+35550 a'(t)^2 a^{(4)}(t) a^{(6)}(t) a(t)^7+3094 a^{(4)}(t) a^{(6)}(t) a(t)^7+34351 a'(t) a''(t)^2 a^{(7)}(t) a(t)^7+19733 
a'(t)^2 a^{(3)}(t) a^{(7)}(t) a(t)^7+1625 a^{(3)}(t) a^{(7)}(t) a(t)^7+6784 a'(t)^2 a''(t) a^{(8)}(t) a(t)^7+520 a''(t) a^{(8)}(t) a(t)^7+308 a'(t)^3 a^{(9)}(t) a(t)^7+52 a'(t) a^{(9)}(t) a(t)^7-2056720 a'(t) a''(t) a^{(3)}(t)^3 a(t)^6-1790580 a''(t)^3 a^{(3)}(t)^2 a(t)^6-900272 a'(t)^2 a''(t) a^{(4)}(t)^2 a(t)^6-31889 a''(t) a^{(4)}(t)^2 a(t)^6-643407 a''(t)^4 a^{(4)}(t) a(t)^6-1251548 a'(t)^2 a^{(3)}(t)^2 a^{(4)}(t) a(t)^6-43758 a^{(3)}(t)^2 a^{(4)}(t) a(t)^6-4452042 a'(t) a''(t)^2 a^{(3)}(t) a^{(4)}(t) a(t)^6-836214 a'(t) a''(t)^3 a^{(5)}(t) a(t)^6-1400104 a'(t)^2 a''(t) a^{(3)}(t) a^{(5)}(t) a(t)^6-48620 a''(t) a^{(3)}(t) a^{(5)}(t) a(t)^6-181966 a'(t)^3 a^{(4)}(t) a^{(5)}(t) a(t)^6-18018 a'(t) a^{(4)}(t) a^{(5)}(t) a(t)^6-319996 a'(t)^2 a''(t)^2 a^{(6)}(t) a(t)^6-11011 a''(t)^2 a^{(6)}(t) a(t)^6-115062 a'(t)^3 a^{(3)}(t) a^{(6)}(t) a(t)^6-11154 a'(t) a^{(3)}(t) a^{(6)}(t) a(t)^6-42764 a'(t)^3 a''(t) a^{(7)}(t) a(t)^6-4004 a'(t) a''(t) a^{(7)}(t) a(t)^6-1649 a'(t)^4 a^{(8)}(t) a(t)^6-286 a'(t)^2 a^{(8)}(t) a(t)^6+460769 a''(t)^6 a(t)^5+1661518 a'(t)^3 a^{(3)}(t)^3 a(t)^5+83486 a'(t) a^{(3)}(t)^3 a(t)^5+13383328 a'(t)^2 a''(t)^2 a^{(3)}(t)^2 a(t)^5+222092 a''(t)^2 a^{(3)}(t)^2 a(t)^5+342883 a'(t)^4 a^{(4)}(t)^2 a(t)^5+36218 a'(t)^2 a^{(4)}(t)^2 a(t)^5+7922361 a'(t) a''(t)^4 a^{(3)}(t) a(t)^5+6367314 a'(t)^2 a''(t)^3 a^{(4)}(t) a(t)^5+109330 a''(t)^3 a^{(4)}(t) a(t)^5+7065862 a'(t)^3 a''(t) a^{(3)}(t) a^{(4)}(t) a(t)^5+360386 a'(t) a''(t) a^{(3)}(t) a^{(4)}(t) a(t)^5+1918386 a'(t)^3 a''(t)^2 a^{(5)}(t) a(t)^5+98592 a'(t) a''(t)^2 a^{(5)}(t) a(t)^5+524802 a'(t)^4 a^{(3)}(t) a^{(5)}(t) a(t)^5+55146 a'(t)^2 a^{(3)}(t) a^{(5)}(t) a(t)^5+226014 a'(t)^4 a''(t) a^{(6)}(t) a(t)^5+23712 a'(t)^2 a''(t) a^{(6)}(t) a(t)^5+8283 a'(t)^5 a^{(7)}(t) a(t)^5+1482 a'(t)^3 a^{(7)}(t) a(t)^5-7346958 a'(t)^2 a''(t)^5 a(t)^4-72761 a''(t)^5 a(t)^4-11745252 a'(t)^4 a''(t) a^{(3)}(t)^2 a(t)^4-725712 a'(t)^2 a''(t) a^{(3)}(t)^2 a(t)^4-27707028 a'(t)^3 a''(t)^3 a^{(3)}(t) a(t)^4-819520 
a'(t) a''(t)^3 a^{(3)}(t) a(t)^4-8247105 a'(t)^4 a''(t)^2 a^{(4)}(t) a(t)^4-520260 a'(t)^2 a''(t)^2 a^{(4)}(t) a(t)^4-1848228 a'(t)^5 a^{(3)}(t) a^{(4)}(t) a(t)^4-205296 a'(t)^3 a^{(3)}(t) a^{(4)}(t) a(t)^4-973482 a'(t)^5 a''(t) a^{(5)}(t) a(t)^4-110136 a'(t)^3 a''(t) a^{(5)}(t) a(t)^4-36723 a'(t)^6 a^{(6)}(t) a(t)^4-6747 a'(t)^4 a^{(6)}(t) a(t)^4+17816751 a'(t)^4 a''(t)^4 a(t)^3+721058 a'(t)^2 a''(t)^4 a(t)^3+2352624 a'(t)^6 a^{(3)}(t)^2 a(t)^3+274170 a'(t)^4 a^{(3)}(t)^2 a(t)^3+24583191 a'(t)^5 a''(t)^2 a^{(3)}(t) a(t)^3+1771146 a'(t)^3 a''(t)^2 a^{(3)}(t) a(t)^3+3256248 a'(t)^6 a''(t) a^{(4)}(t) a(t)^3+389376 a'(t)^4 a''(t) a^{(4)}(t) a(t)^3+135300 a'(t)^7 a^{(5)}(t) a(t)^3+25350 a'(t)^5 a^{(5)}(t) a(t)^3-15430357 a'(t)^6 a''(t)^3 a(t)^2-1252745 a'(t)^4 a''(t)^3 a(t)^2-7747848 a'(t)^7 a''(t) a^{(3)}(t) a(t)^2-967590 a'(t)^5 a''(t) a^{(3)}(t) a(t)^2-385200 a'(t)^8 a^{(4)}(t) a(t)^2-73125 a'(t)^6 a^{(4)}(t) a(t)^2+5645124 a'(t)^8 a''(t)^2 a(t)+741195 a'(t)^6 a''(t)^2 a(t)+749700 a'(t)^9 a^{(3)}(t) a(t)+143325 a'(t)^7 a^{(3)}(t) a(t)-749700 a'(t)^{10} a''(t)-143325 a'(t)^8 a''(t))\Big).$ Agreement of the result with computations in spherical coordinates. 
------------------------------------------------------------------- Taking a route similar to that of §\[DiracinHopf\], we explicitly write the Dirac operator for the Robertson-Walker metric in spherical coordinates $$\nonumber ds^{2}=dt^{2}+a^{2}\left( t\right) \big ( d\chi^{2}+\sin^{2}(\chi) \left( d\theta^{2}+\sin^{2}(\theta) \, d\varphi^{2}\right) \big ).$$ Using the computations carried out in [@ChaConRW] with the orthonormal coframe $$dt, \qquad a(t)\, d \chi, \qquad a(t)\, \sin \chi \,d \theta, \qquad a(t)\, \sin \chi \, \sin \theta \,d \varphi,$$ the corresponding matrix of connection 1-forms for the Levi-Civita connection is written as $$\left ( \begin{array}{cccc} 0 &-a'(t)d\chi & -a'(t)\sin(\chi)d\theta & -a'(t)\sin(\chi)\sin(\theta)d\varphi\\ a'(t)d\chi &0 & -\cos(\chi)d\theta &-\cos(\chi)\sin(\theta)d\varphi \\ a'(t)\sin(\chi)d\theta & \cos(\chi)d\theta & 0 &-\cos(\theta)d\varphi \\ a'(t)\sin(\chi)\sin(\theta)d\varphi&\cos(\chi)\sin(\theta)d\varphi & \cos(\theta)d\varphi & 0\\ \end{array} \right ).$$ Lifting to the spin bundle by means of the Lie algebra isomorphism $\mu:\mathfrak{so}(4)\to \mathfrak{spin}(4)$ and writing the formula for the Dirac operator yield the following expression for this operator in spherical coordinates: $$\begin{aligned} D &=& \gamma^1 \frac{\partial}{\partial t}+\gamma^2 \frac{1}{a}\frac{\partial}{\partial \chi}+\gamma^3 \frac{1}{a\, \sin \chi} \frac{\partial}{\partial \theta}+\gamma^4 \frac{1}{a\, \sin \chi \, \sin \theta} \frac{\partial}{\partial \varphi} \nonumber \\ &&+\frac{3a'}{2a}\gamma^1+\frac{\cot(\chi)}{a}\gamma^2+\frac{\cot(\theta)}{2a\sin(\chi)}\gamma^3. \nonumber\end{aligned}$$ Thus the pseudodifferential symbol of $D$ is given by $$\begin{aligned} \sigma_D({ x,\xi})&=&i\gamma^1\xi_1 +\frac{i}{a}\gamma^2\xi_2+\frac{i}{a\sin(\chi)}\gamma^3\xi_3+\frac{i}{a\sin(\chi)\sin(\theta)}\gamma^4\xi_4 \nonumber \\ &&+\frac{3a'}{2a}\gamma^1+\frac{\cot(\chi)}{a}\gamma^2+\frac{\cot(\theta)}{2a\sin(\chi)}\gamma^3.
\nonumber\end{aligned}$$ Accordingly, the symbol of $D^2$ is the sum $p_2'+p_1'+p_0'$ of three homogeneous components $$\begin{aligned} p_2'&=&\xi _1^2+\frac{1}{a(t)^2}\xi _2^2+\frac{1}{a(t)^2 \sin ^2(\chi )}\xi _3^2+\frac{ 1}{a(t)^2\sin ^2(\theta ) \sin ^2(\chi )}\xi _4^2, \nonumber \\ p_1'&=&-\frac{ 3 i a'(t)}{a(t)}\xi _1-\frac{i }{a(t)^2} \left(\gamma ^{12} a'(t)+2 \cot (\chi )\right)\xi _2 \nonumber \\ &&-\frac{i }{a(t)^2} \left(\gamma ^{13} \csc (\chi ) a'(t)+\cot (\theta ) \csc ^2(\chi )+\gamma ^{23} \cot (\chi ) \csc (\chi )\right)\xi _3\nonumber \\ &&-\frac{i }{a(t)^2} (\csc (\theta ) \csc (\chi ) a'(t)\gamma ^{14}+\cot (\theta ) \csc (\theta ) \csc ^2(\chi )\gamma ^{34} \nonumber \\ &&+\csc (\theta ) \cot (\chi ) \csc (\chi )\gamma ^{24} )\xi _4, \nonumber \\ p_0'&=&\frac{1}{8 a(t)^2}\left(-12 a(t) a''(t)-6 a'(t)^2+3 \csc ^2(\theta ) \csc ^2(\chi )-\cot ^2(\theta ) \csc ^2(\chi )+\right. \nonumber \\ && \left. 4 i \cot (\theta ) \cot (\chi ) \csc (\chi )-4 i \cot (\theta ) \cot (\chi ) \csc (\chi )-4 \cot ^2(\chi )+5 \csc ^2(\chi )+4\right) \nonumber \\ &&-\frac{\left(\cot (\theta ) \csc (\chi ) a'(t)\right)}{2 a(t)^2}\gamma ^{13} -\frac{ \left(\cot (\chi ) a'(t)\right)}{a(t)^2}\gamma ^{12}-\frac{ (\cot (\theta ) \cot (\chi ) \csc (\chi ))}{2 a(t)^2}\gamma ^{23}. \nonumber\end{aligned}$$ We have performed the computation of the heat coefficients up to the term $a_{12}$ using the latter symbols and have checked the agreement of the result with the computations in Hopf coordinates, presented in the previous subsections. This is in particular of great importance for the term $a_{12}$, since it ensures the validity of our computations performed in two different coordinates. Agreement with the full expansion for the round metric. ------------------------------------------------------- We first recall the full expansion for the spectral action for the round metric, namely the case $a(t) = \sin (t)$, worked out in [@ChaConRW]. 
Then we show that the term $a_{12}$ presented in §\[exprfora12\] reduces correctly to the round case. The method devised in [@ChaConRW] has wide applicability in spectral action computations, since it can be used whenever the eigenvalues of the square of the Dirac operator have a polynomial expression while their multiplicities are also given by polynomials. In the case of the round metric on $\mathbb{S}^4$, after remarkable computations based on the Euler-Maclaurin formula, this method leads to the following expression with control over the remainder term [@ChaConRW]: $$\begin{aligned} \frac{3}{4}{\rm Trace}(f(tD^2)) &=& \int_0^\infty f(tx^2)(x^3-x)dx+\frac{11 f(0)}{120}-\frac{31 f'(0) t}{2520} +\frac{41 f''(0) t^2}{10080} \nonumber \\ &&-\frac{31 f^{(3)}(0) t^3}{15840}+\frac{10331 f^{(4)}(0) t^4}{8648640}-\frac{3421 f^{(5)}(0) t^5}{3931200}+\dots +R_m. \nonumber\end{aligned}$$ This implies that the term $a_{12}$ in the expansion of the spectral action for the round metric is equal to $\frac{10331}{6486480}$. To check our calculations against this result, we find that for $a(t)=\sin(t)$ the expression for $a_{12}(t)$ reduces to $\frac{10331 \sin ^3(t)}{8648640},$ and hence $$a_{12}=\int_0^\pi a_{12}(\mathbb{S}^4)\,dt=\frac{4}{3} \frac{10331}{8648640}=\frac{10331}{6486480},$$ which is in complete agreement with the result in [@ChaConRW] mentioned above. Chamseddine-Connes’ Conjecture {#ProofofConjecture} =============================== In this section we prove a conjecture of Chamseddine and Connes from [@ChaConRW]. More precisely, we show that the term $a_{2n}$ in the asymptotic expansion of the spectral action for Robertson-Walker metrics is, up to multiplication by $a(t)^{3-2n}$, of the form $Q_{2n}(a,a',\dots,a^{(2n)})$, where $Q_{2n}$ is a polynomial with rational coefficients.
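The arithmetic of the round-metric check above is easy to reproduce independently. The following sketch (plain Python, not the computer-algebra setup used for the actual heat-coefficient computations) verifies $\int_0^\pi\sin^3(t)\,dt=4/3$ numerically and the resulting rational value of $a_{12}$ exactly.

```python
from fractions import Fraction
import math

# For a(t) = sin(t) the density is a_12(t) = 10331 sin^3(t)/8648640; its
# integral over [0, pi] uses int_0^pi sin^3(t) dt = 4/3.  Check that integral
# numerically with a composite midpoint rule (the integrand is smooth):
N = 100000
step = math.pi / N
integral = sum(math.sin(step * (i + 0.5))**3 for i in range(N)) * step
assert abs(integral - 4/3) < 1e-8

# Exact rational bookkeeping: (4/3) * 10331/8648640 = 10331/6486480.
a12 = Fraction(4, 3) * Fraction(10331, 8648640)
assert a12 == Fraction(10331, 6486480)
print(a12)  # 10331/6486480
```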
Proof of rationality of the coefficients in the expressions for $a_{2n}$ {#proofofrationality} ------------------------------------------------------------------------ A crucial point that enables us to furnish the proof of our main theorem, namely the proof of the conjecture mentioned above, is the independence of the integral kernel of the heat operator associated with the Dirac operator of the Robertson-Walker metric from the variables $\phi_1, \phi_2, \eta$. Note that since the symbol and the metric are independent of $\phi_1, \phi_2$, the computations involved in the symbol calculus clearly imply the independence of the terms $e_n$ from these variables. However, the independence of $e_n$ from $\eta$ is not evident; it is proved as follows. \[ind\] The heat kernel $k(t, x, x)$ for the Robertson-Walker metric is independent of $\phi_1,\phi_2, \eta$. The round metric on $\mathbb{S}^3$ is the bi-invariant metric on ${\rm SU}(2)$ induced from the Killing form of its Lie algebra $\mathfrak{su}(2)$. The corresponding Levi-Civita connection restricted to the left invariant vector fields is given by $\frac{1}{2}[X,Y]$, and to the right invariant vector fields by $-\frac{1}{2}[X,Y]$. Since the Killing form is ${\rm ad}$-invariant, we have $$\langle [X,Y],Z\rangle+\langle Y,[X,Z]\rangle=0,\qquad X,Y,Z\in \mathfrak{su}(2),$$ which implies that in terms of the connection on left (right) invariant vector fields $X,Y,Z$, it can be written as $$\label{Killingequ} \langle \nabla_YX,Z\rangle+\langle Y,\nabla_ZX\rangle=0.$$ Considering the fact that $\nabla X:\mathfrak{X}(M)\to \mathfrak{X}(M)$ is an endomorphism of the tangent bundle, the latter identity holds for any $Y,Z\in\mathfrak{X}(M)$. Therefore, the equation (\[Killingequ\]) is the Killing equation and shows that any left and right invariant vector field on ${\rm SU}(2)$ is a Killing vector field.
By direct computation in Hopf coordinates, we find the following vector fields which respectively form bases for left and right invariant vector fields on ${\rm SU}(2)$: $$\begin{aligned} X^L_1&=\frac{\partial}{\partial \phi_1}+\frac{\partial}{\partial \phi_2},\\ X^L_2&=\sin (\phi_1+\phi_2)\frac{\partial}{\partial \eta}+\cot(\eta )\cos (\phi_1+\phi_2)\frac{\partial}{\partial \phi_1}-\tan (\eta )\cos (\phi_1+\phi_2) \frac{\partial}{\partial \phi_2},\\ X^L_3&=\cos (\phi_1+\phi_2)\frac{\partial}{\partial \eta}-\cot (\eta ) \sin (\phi_1+\phi_2)\frac{\partial}{\partial \phi_1}+\tan (\eta ) \sin(\phi_1+\phi_2)\frac{\partial}{\partial \phi_2},\\ X^R_1&=-\frac{\partial}{\partial \phi_1}+\frac{\partial}{\partial \phi_2},\\ X^R_2&=-\sin(\phi_1-\phi_2)\frac{\partial}{\partial \eta}-\cot (\eta ) \cos (\phi_1-\phi_2)\frac{\partial}{\partial \phi_1}-\tan(\eta )\cos(\phi_1-\phi_2)\frac{\partial}{\partial \phi_2},\\ X^R_3&=\cos(\phi_1-\phi_2)\frac{\partial}{\partial \eta}-\cot (\eta ) \sin(\phi_1-\phi_2)\frac{\partial}{\partial \phi_1}-\tan(\eta )\sin (\phi_1-\phi_2)\frac{\partial}{\partial \phi_2}.\end{aligned}$$ One can check that these vector fields are indeed Killing vector fields for the Robertson-Walker metrics on the four dimensional space. Thus, for any isometry invariant function $f$ we have: $$\begin{aligned} &&\frac{\partial}{\partial \phi_1}f=\frac{1}{2}(X^L_1-X^R_1)f=0, \nonumber \\ &&\frac{\partial}{\partial \phi_2}f=\frac{1}{2}(X^L_1+X^R_1)f=0,\nonumber \\ &&\frac{\partial}{\partial \eta}f=(\sin(\phi_1+\phi_2)X^L_2+\cos(\phi_1+\phi_2)X^L_3)f=0. \nonumber\end{aligned}$$ In particular, the heat kernel restricted to the diagonal, $k(t, x, x)$, is independent of $\phi_1, \phi_2,\eta$, and so are the coefficient functions $e_n$ in its asymptotic expansion. We stress that although $e_n (x)$ is independent of $\eta,\phi_1,\phi_2$, its components denoted by $e_{n, j, \alpha}$ in the proof of the following theorem are not necessarily independent of these variables. 
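The linear relations used in the last step can be spot-checked numerically. In the following sketch (our own illustrative Python, not part of the paper's computations), each vector field is stored as its coefficient triple with respect to $(\partial_\eta,\partial_{\phi_1},\partial_{\phi_2})$, and all three identities are verified at random points.

```python
import math, random

# Coefficients (d/d eta, d/d phi1, d/d phi2) of the invariant fields above.
def XL1(e, p1, p2): return (0.0, 1.0, 1.0)
def XL2(e, p1, p2):
    s, c = math.sin(p1 + p2), math.cos(p1 + p2)
    return (s, math.cos(e) / math.sin(e) * c, -math.tan(e) * c)
def XL3(e, p1, p2):
    s, c = math.sin(p1 + p2), math.cos(p1 + p2)
    return (c, -math.cos(e) / math.sin(e) * s, math.tan(e) * s)
def XR1(e, p1, p2): return (0.0, -1.0, 1.0)

random.seed(1)
for _ in range(100):
    e  = random.uniform(0.1, math.pi / 2 - 0.1)
    p1 = random.uniform(0.0, 2 * math.pi)
    p2 = random.uniform(0.0, 2 * math.pi)
    # d/d phi1 = (XL1 - XR1)/2  and  d/d phi2 = (XL1 + XR1)/2
    assert all(abs((a - b) / 2 - t) < 1e-12
               for a, b, t in zip(XL1(e, p1, p2), XR1(e, p1, p2), (0, 1, 0)))
    assert all(abs((a + b) / 2 - t) < 1e-12
               for a, b, t in zip(XL1(e, p1, p2), XR1(e, p1, p2), (0, 0, 1)))
    # d/d eta = sin(p1+p2) XL2 + cos(p1+p2) XL3
    s, c = math.sin(p1 + p2), math.cos(p1 + p2)
    comb = [s * a + c * b for a, b in zip(XL2(e, p1, p2), XL3(e, p1, p2))]
    assert all(abs(x - t) < 1e-12 for x, t in zip(comb, (1, 0, 0)))
print("identities verified")
```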
\[rationalitytheorem\] The term $a_{2n}$ in the expansion of the spectral action for the Robertson-Walker metric with cosmic scale factor $a(t)$ is of the form $$\frac{1}{a(t)^{2n-3}}\,Q_{2n}\left(a(t),a'(t),\dots,a^{(2n)}(t)\right),$$ where $Q_{2n}$ is a polynomial with rational coefficients. Using we can write $$\begin{aligned} \label{ensum} e_n&=\sum_{\begin{array}{c}2j-2-|\alpha|=n\\ n/2+1\leq j\leq 2n+1\end{array}} c_\alpha \,e_{n,j,\alpha}, \end{aligned}$$ where $$e_{n,j,\alpha}=\frac{1}{(j-1)!}r_{n,j,\alpha} \,a(t)^{\alpha_2+\alpha_3+\alpha_4}\sin(\eta)^{\alpha_3}\cos(\eta)^{\alpha_4}.$$ The recursive equation implies that $$\label{recforenja} e_{n,j,\alpha}=$$ $\frac{1}{(j-1)a(t)}\Big( (\gamma^{14} a'(t)-\tan(\eta)\gamma^{24})e_{n-1,j-1,\alpha-{\bf e}_4} +(\gamma^{13}a'(t)+\cot(\eta)\gamma^{23})e_{n-1,j-1,\alpha-{\bf e}_3} +(\gamma^{12}a'(t)+(2\alpha_4-1)\tan(\eta)+(1-2\alpha_3)\cot(\eta))e_{n-1,j-1,\alpha-{\bf e}_2} +4a'(t)e_{n-1,j-2,\alpha-{\bf e}_1-2{\bf e}_2} +4a'(t)e_{n-1,j-2,\alpha-{\bf e}_1-2{\bf e}_3} +4a'(t)e_{n-1,j-2,\alpha-{\bf e}_1-2{\bf e}_4} +(-2\alpha_2-2\alpha_3-2\alpha_4+3)a'(t)e_{n-1,j-1,\alpha-{\bf e}_1} +2a(t)\frac{\partial}{\partial t} e_{n-1,j-1,\alpha-{\bf e}_1} -4\tan(\eta)e_{n-1,j-2,\alpha-{\bf e}_2-2{\bf e}_4} +4\cot(\eta)e_{n-1,j-2,\alpha-{\bf e}_2-2{\bf e}_3} +2\frac{\partial}{\partial\eta} e_{n-1,j-1,\alpha-{\bf e}_2}\Big) +\frac{1}{(j-1)a(t)^2}\Big( a(t)^2\frac{\partial^2}{\partial t^2} e_{n-2,j-1,\alpha} +4a'(t)a(t)\frac{\partial}{\partial t} e_{n-2,j-2,\alpha-2{\bf e}_2} +4a'(t)a(t)\frac{\partial}{\partial t} e_{n-2,j-2,\alpha-2{\bf e}_3} +4a'(t)a(t)\frac{\partial}{\partial t} e_{n-2,j-2,\alpha-2{\bf e}_4} +(-2\alpha_2-2\alpha_3-2\alpha_4+3)a'(t)a(t)\frac{\partial}{\partial t} e_{n-2,j-1,\alpha} +4a'(t)^2 e_{n-2,j-3,\alpha-4{\bf e}_2} +8a'(t)^2 e_{n-2,j-3,\alpha-2{\bf e}_2-2{\bf e}_3} +8a'(t)^2 e_{n-2,j-3,\alpha-2{\bf e}_2-2{\bf e}_4} +4\cot(\eta)\frac{\partial}{\partial\eta}e_{n-2,j-2,\alpha-2{\bf e}_3} -4\tan(\eta)\frac{\partial}{\partial\eta} e_{n-2,j-2,\alpha-2{\bf e}_4} +\frac{\partial^2}{\partial\eta^2} e_{n-2,j-1,\alpha} +\big(2\cot(\eta)\gamma^{12}a'(t)+(-4(\alpha_2+\alpha_3+\alpha_4-2)a'(t)^2+4(-(\alpha_3-1)\csc^2(\eta)+\alpha_3+\alpha_4-2)+2a(t)a''(t))\big)e_{n-2,j-2,\alpha-2{\bf e}_3} +\big((\cot(\eta)(1-2\alpha_3)+(2\alpha_4-1)\tan(\eta))+\gamma^{12} a'(t)\big)\frac{\partial}{\partial\eta} e_{n-2,j-1,\alpha} +\big((-4(\alpha_2+\alpha_3+\alpha_4-2)a'(t)^2+4(-(\alpha_4-1)\sec^2(\eta)+\alpha_3+\alpha_4-2)+2a(t)a''(t))-2\gamma^{12}\tan(\eta)a'(t)\big)e_{n-2,j-2,\alpha-2{\bf e}_4} +8(a'(t)^2-1)e_{n-2,j-3,\alpha-2{\bf e}_3-2{\bf e}_4} +4(\cot^2(\eta)+a'(t)^2)e_{n-2,j-3,\alpha-4{\bf e}_3} +4(\tan^2(\eta)+a'(t)^2)e_{n-2,j-3,\alpha-4{\bf e}_4} +(2a(t)a''(t)-4(\alpha_2+\alpha_3+\alpha_4-2)a'(t)^2)e_{n-2,j-2,\alpha-2{\bf e}_2} +\big(\frac{1}{2}(\cot(\eta)(1-2\alpha_3)+(2\alpha_4-1)\tan(\eta))\gamma^{12} a'(t)+\frac{1}{4}((4\alpha_3^2-1)\csc^2(\eta)-4(\alpha_3+\alpha_4-1)^2+(2\alpha_2+2\alpha_3+2\alpha_4-3)(2\alpha_2+2\alpha_3+2\alpha_4-1)a'(t)^2+\sec^2(\eta)(4\alpha_4^2-1)-2(2\alpha_2+2\alpha_3+2\alpha_4-3)a(t)a''(t))\big)e_{n-2,j-1,\alpha}\Big).
$ The functions associated with the initial indices are: $$\begin{aligned} &&e_{0,1,0,0,0,0}=1, \qquad e_{1,2,1,0,0,0}= \frac{3ia'(t)}{a(t)}, \qquad e_{1,3,1,2,0,0}= \frac{2ia'(t)}{a(t)}, \nonumber \\ && e_{1,3,1,0,2,0}= \frac{2ia'(t)}{a(t)}, \qquad e_{1,3,1,0,0,2}= \frac{2ia'(t)}{a(t)}, \qquad e_{1,3,0,1,0,2}=-\frac{2i\tan(\eta)}{a(t)}, \nonumber \\ &&e_{1,3,0,1,2,0}=\frac{2i\cot(\eta)}{a(t)}, \qquad e_{1,2,0,0,1,0}= \frac{i\gamma^{13} a'(t)}{a(t)}+\frac{i\gamma^{23}\cot(\eta)}{a(t)}, \nonumber \\ && e_{1,2,0,0,0,1}=\frac{i\gamma^{14}a'(t)}{a(t)}-\frac{i\gamma^{24}\tan(\eta)}{a(t)}, \qquad e_{1,2,0,1,0,0}=\frac{2i\cot(2\eta)}{a(t)}+ \frac{i\gamma^{12}a'(t)}{a(t)}. \nonumber\end{aligned}$$ It is then apparent that $e_{0}$ and $e_1$ are, respectively, a polynomial in $a(t)$, and a polynomial in $a(t)$ and $a'(t)$, divided by some powers of $a(t)$. Thus, it follows from the above recursive formula that all $e_{n,j,\alpha}$ are of this form. Accordingly, we have $$e_n=\frac{P_n}{a(t)^{d_n}},$$ where $P_n$ is a polynomial in $a(t)$ and its derivatives with matrix coefficients. Writing $e_{n,j,\alpha}=P_{n,j,\alpha}/a(t)^{d_n}$, we obtain $d_n=\max\{d_{n-1}+1,d_{n-2}+2\}.$ Starting with $d_0=0$ and $d_1=1$, the recursion yields $d_n=n$, and we conclude that $$e_{n,j,\alpha}=\frac{1}{a^{n}(t)}P_{n,j,\alpha}(a(t),\dots,a^{(n)}(t)),$$ where $P_{n,j,\alpha}$ is a polynomial whose coefficients are matrices with entries in the algebra generated by $\sin(\eta),\csc(\eta), \cos(\eta), \sec(\eta)$ and rational numbers. In the calculation of the even terms $a_{2n}$, only even $\alpha_k$ have contributions in the summation (\[ensum\]). This implies that the corresponding $c_\alpha$ is a rational multiple of $\pi^2$ and $P_{2n}$ is a polynomial with rational matrix coefficients, which is independent of the variables $\eta,\phi_1,\phi_2$ by Lemma \[ind\].
Hence $$a_{2n}=\frac{1}{16\pi^4}\int_{\mathbb{S}_a^3}{\rm tr}(e_{2n})\,dvol_g= \frac{2\pi^2a(t)^3}{16\pi^4}\,{\rm tr}\Big(\frac{P_{2n}}{a(t)^{2n}}\Big)= \frac{Q_{2n}}{a(t)^{2n-3}},$$ where $Q_{2n}$ is a polynomial in $a(t), a'(t), \dots, a^{(2n)}(t)$ with rational coefficients. The polynomials $P_{n, j, \alpha}$ also satisfy recursive relations that illuminate interesting features about their structure. \[monomialformofpnja\] Each $P_{n, j ,\alpha}$ is a finite sum of the form $$\sum c_k \, a(t)^{k_0}a'(t)^{k_1}\cdots a^{(n)}(t)^{k_n},$$ where each $c_k$ is a matrix of functions that are independent of the variable $t$, and $ \sum_{j=0}^n k_j=\sum_{j=0}^n jk_j=l,$ for some $0 \leq l \leq n$. This follows from an algebraically lengthy recursive formula for $P_{n, j ,\alpha}$ which stems from the equation (\[recforenja\]), similar to the recursive formula for $e_{n, j, \alpha}$ in the proof of Theorem \[rationalitytheorem\]. In addition, one needs to find the following initial cases: $$\begin{aligned} &&P_{0, 1, 0, 0, 0, 0} = I, \qquad P_{1, 2, 1, 0, 0, 0} = 3 i a'(t), \quad P_{1, 2, 0, 0, 1, 0} = i\gamma^{13} a'(t) + i\gamma^{23}\cot(\eta), \nonumber \\ &&P_{1, 2, 0, 0, 0, 1} = i\gamma^{14} a'(t) - i\gamma^{24}\tan(\eta), \qquad P_{1, 2, 0, 1, 0, 0} = 2 i\cot(2\eta) + i\gamma^{12} a'(t), \nonumber \\ &&P_{1, 3, 0, 1, 0, 2} = -2 i\tan(\eta),\qquad P_{1, 3, 0, 1, 2, 0} = 2 i\cot(\eta), \qquad P_{1, 3, 1, 2, 0, 0} = 2 i a'(t),\nonumber \\ &&P_{1, 3, 1, 0, 2, 0} = 2 i a'(t), \qquad P_{1, 3, 1, 0, 0, 2} = 2 i a'(t). \nonumber \end{aligned}$$ A recursive formula for the coefficient of the highest order term in $a_{2n}$ ----------------------------------------------------------------------------- The highest derivative of the cosmic scale factor $a(t)$ in the expression for $a_n$ is seen in the term $a(t)^{n-1}a^{(n)}(t)$, which has a rational coefficient by Theorem \[rationalitytheorem\].
Let us denote the coefficient of $a(t)^{n-1}a^{(n)}(t)$ in $a_n$ by $h_n$. Since the coefficients $h_n$ satisfy the recursive relations derived in the proof of the following proposition, one can find the following closed formula for these coefficients. The coefficient $h_n$ of $a(t)^{n-1}a^{(n)}(t)$ in $a_n$ is equal to $$\sum_{\begin{array}{c} [n/2]+1\leq j\leq 2n+1\\ 0\leq k \leq j-n/2-1 \end{array}} \Gamma\left(\frac{2k+1}{2}\right)H_{n,j,2k},$$ where, starting from $$\begin{aligned} &&H_{1,2,1}=H_{1,3,1}=\frac{3 i}{2 \sqrt{\pi }},\qquad H_{2,4,2}=-\frac{1}{\sqrt{\pi }}, \nonumber \\ &&H_{2,3,0}=H_{2,2,0}=\frac{3}{4 \sqrt{\pi }},\qquad H_{2,3,2}=-\frac{3}{2 \sqrt{\pi }}, \nonumber\end{aligned}$$ the quantities $H_{n,j,\alpha}$ are computed recursively by $$H_{n,j,\alpha}=\frac{1}{j-1}(H_{n-2,j-1,\alpha}+2 i H_{n-1,j-1,\alpha-1}).$$ It follows from Proposition \[monomialformofpnja\] that the highest derivative of $a(t)$ in $a_n$ appears in the term $a(t)^{n-1}a^{(n)}(t)$. By a careful analysis of the equation (\[recforenja\]) we find that only the terms $$\frac{1}{j-1}\Big (a(t)^2 \frac{\partial ^2}{\partial t^2}P_{n-2,j-1,\alpha}+ 2 i a(t) \frac{\partial}{\partial t} P_{n-1,j-1,\alpha-{\bf e}_1} \Big)$$ contribute to its recursive formula. Denoting the corresponding monomial in $P_{n,j,\alpha}$ by $H_{n,j,\alpha}a(t)^{n-1}a^{(n)}(t)$ and substituting it into the above formula we obtain the equation $$H_{n,j,\alpha}=\frac{1}{j-1}(H_{n-2,j-1,\alpha}+2 i H_{n-1,j-1,\alpha-{\bf e}_1}),$$ for any $n>2$. Denoting $$H_{n,j,\alpha_1}=\sum \prod_{k=2}^4 \Gamma\left(\frac{\alpha_k+1}{2}\right)\frac{(-1)^{\alpha_k}+1}{2}\,{\rm tr}\left(\frac{ 1}{(2 \pi )^2}\int _0^{\pi/2 }H_{n,j,\alpha_1,\alpha_2,\alpha_3,\alpha_4}d\eta \right),$$ the recursive formula converts to $$H_{n,j,\alpha}=\frac{1}{j-1}(H_{n-2,j-1,\alpha}+2 i H_{n-1,j-1,\alpha-1}).$$ Thus, the coefficient of $a(t)^{n-1}a^{(n)}(t)$ in $a_n$ is given by the above expression.
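The closed formula translates directly into exact rational arithmetic. In the sketch below (our own plain-Python illustration), the rescaled variable $G_{n,j,\alpha}=\sqrt{\pi}\,i^{-n}H_{n,j,\alpha}$ is a bookkeeping device of ours that keeps every entry rational, and we use the identity $\Gamma((2k+1)/2)=\sqrt{\pi}\,(2k)!/(4^k k!)$; the recursion then reproduces the tabulated values of $h_n$.

```python
from fractions import Fraction
from math import factorial

# G[(n, j, alpha)] = sqrt(pi) * i^(-n) * H_{n,j,alpha} is rational; the stated
# recursion H_{n,j,a} = (H_{n-2,j-1,a} + 2i H_{n-1,j-1,a-1})/(j-1) becomes
# G_{n,j,a} = (2 G_{n-1,j-1,a-1} - G_{n-2,j-1,a})/(j-1).
G = {(1, 2, 1): Fraction(3, 2), (1, 3, 1): Fraction(3, 2),
     (2, 2, 0): Fraction(-3, 4), (2, 3, 0): Fraction(-3, 4),
     (2, 3, 2): Fraction(3, 2),  (2, 4, 2): Fraction(1)}
NMAX = 12
for n in range(3, NMAX + 1):
    for j in range(2, 2 * n + 2):
        for alpha in range(0, n + 1):
            val = (2 * G.get((n - 1, j - 1, alpha - 1), Fraction(0))
                   - G.get((n - 2, j - 1, alpha), Fraction(0))) / (j - 1)
            if val:
                G[(n, j, alpha)] = val

h = {}
for n in range(2, NMAX + 1, 2):
    s = Fraction(0)
    for j in range(n // 2 + 1, 2 * n + 2):
        for k in range(0, j - n // 2):
            # Gamma((2k+1)/2) = sqrt(pi) (2k)!/(4^k k!); the sqrt(pi) cancels
            s += Fraction(factorial(2 * k), 4 ** k * factorial(k)) \
                 * G.get((n, j, 2 * k), Fraction(0))
    h[n] = (-1) ** (n // 2) * s      # i^n = (-1)^{n/2} for even n

assert h[2] == Fraction(1, 4) and h[4] == Fraction(1, 40)
print(h[6], h[8])  # 1/560 1/10080
```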
Using the above proposition we find that: $$\begin{aligned} && h_2 = \frac{1}{4}, \qquad h_4= \frac{1}{40}, \qquad h_6= \frac{1}{560}, \qquad h_8= \frac{1}{10080}, \qquad h_{10}=\frac{1}{221760}, \nonumber \\ && h_{12}=\frac{1}{5765760}, \qquad h_{14} = \frac{1}{172972800}, \qquad h_{16} = \frac{1}{5881075200}, \nonumber \\ && h_{18}=\frac{1}{223480857600}, \qquad h_{20}= \frac{1}{9386196019200}. \nonumber\end{aligned}$$ Conclusions {#Conclusions} =========== Pseudodifferential calculus is an effective tool for applying heat kernel methods to compute the terms in the expansion of a spectral action. We have used this technique to derive the terms up to $a_{12}$ in the expansion of the spectral action for the Robertson-Walker metric on a 4-dimensional geometry with a general cosmic scale factor $a(t)$. Performing the computations in Hopf coordinates, which reflect the symmetry of the space more conveniently, at least from a technical point of view, we proved the independence of the integral kernel of the corresponding heat operator from three coordinates of the space. This allowed us to furnish the proof of the conjecture of Chamseddine and Connes on the rationality of the coefficients of the polynomials in $a(t)$ and its derivatives that describe the general terms $a_{2n}$ in the expansion. The terms up to $a_{10}$ were previously computed in [@ChaConRW] using their direct method, where the terms up to $a_6$ were checked against Gilkey’s universal formulas [@GilBook1; @GilBook2]. The outcome of our computations confirms the previously computed terms; in particular, this provides an independent check on the terms $a_8$ and $a_{10}$. In order to confirm our calculation for the term $a_{12}$, we have performed a completely different computation in spherical coordinates and checked its agreement with our calculation in Hopf coordinates.
It is worth emphasizing that the high complexity of the computations, which is overcome by computer assistance, raises the need to derive the expressions in at least two different ways to ensure their validity. We have found a formula for the coefficient of the term with the highest derivative of $a(t)$ in $a_{2n}$ for all $n$, and we make the following observation. The polynomials $Q_{2n}$ in $a_{2n}=Q_{2n}\left(a(t),a'(t),\dots,a^{(2n)}(t)\right)/a(t)^{2n-3}$ are of the following form up to $Q_{12}$: $$Q_{2n}(x_0, x_1, \dots, x_{2n}) = \sum c_k\, x_0^{k_0} x_1^{k_1} \cdots x_{2n}^{k_{2n}}, \qquad c_k \neq0,$$ where the summation is over all tuples of non-negative integers $k=(k_0,k_1, \dots, k_{2n})$ such that either $ \sum k_j = 2n$ while $ \sum j k_j = 2n$, or $ \sum k_j = 2n-2$ while $ \sum j k_j = 2n-2$. This provides evidence, and hope, that further investigations, which are under way, will shed more light on the general structure of the terms $a_{2n}$. Acknowledgments {#acknowledgments .unnumbered} =============== We are indebted to Alain Connes for helpful discussions and encouragement on the present topic. F.F. thanks the Institut des Hautes Études Scientifiques (I.H.E.S.) and its IT department, in particular Francois Bachelier, for their support and the excellent environment and facilities during his visit in the Fall of 2013. [99]{} P. Amsterdamski, A. Berkin, and D. O’Connor, [*$b_8$ Hamidew coefficient for a scalar field*]{}, Classical Quantum Gravity 6 (1989), 1981–1991. I. G. Avramidi, [*The covariant technique for the calculation of the heat kernel asymptotic expansion*]{}, Phys. Lett. B 238 (1990), 92–97. A. H. Chamseddine, A. Connes, [*Universal formula for noncommutative geometry actions: unification of gravity and the standard model,*]{} Phys. Rev. Lett. 77 (1996), no. 24, 4868–4871. A. H. Chamseddine, A. Connes, [*The spectral action principle,*]{} Comm. Math. Phys. 186 (1997), no. 3, 731–750. A. H. Chamseddine, A.
Connes, [*Conceptual explanation for the algebra in the noncommutative approach to the standard model,* ]{} Phys. Rev. Lett. 99 (2007), no. 19, 191601. A. H. Chamseddine, A. Connes, [*Quantum gravity boundary terms from the spectral action of noncommutative space,*]{} Phys. Rev. Lett. 99 (2007), no. 7, 071302. A. H. Chamseddine, A. Connes, [*Why the standard model,*]{} J. Geom. Phys. 58 (2008), no. 1, 38–47. A. H. Chamseddine, A. Connes, [*The uncanny precision of the spectral action,*]{} Comm. Math. Phys. 293 (2010), no. 3, 867–897. A. H. Chamseddine, A. Connes, [*Spectral action for Robertson-Walker metrics,*]{} J. High Energy Phys. 2012, no. 10, 101. A. H. Chamseddine, A. Connes, M. Marcolli, [*Gravity and the standard model with neutrino mixing,*]{} Adv. Theor. Math. Phys. 11 (2007) 991–1089. A. Connes, [*Noncommutative geometry*]{}, Academic Press, 1994. A. Connes, [*Gravity coupled with matter and the foundation of non-commutative geometry,*]{} Comm. Math. Phys. 182 (1996), no. 1, 155–176. A. Connes, [*Noncommutative geometry and the standard model with neutrino mixing,*]{} J. High Energy Phys. 2006, no. 11, 081, 19 pp. A. Connes, [*On the spectral characterization of manifolds,*]{} J. Noncommut. Geom. 7 (2013), no. 1, 1–82. A. Connes, M. Marcolli, [*Noncommutative Geometry, Quantum Fields and Motives*]{}, American Mathematical Society Colloquium Publications, 55, 2008. C. Estrada, M. Marcolli, [*Noncommutative mixmaster cosmologies,*]{} Int. J. Geom. Methods Mod. Phys. 10 (2013), no. 1, 1250086, 28 pp. P. Gilkey, [*Invariance theory, the heat equation, and the Atiyah-Singer index theorem*]{}, Mathematics Lecture Series, 11. Publish or Perish, Inc., Wilmington, DE, 1984. P. Gilkey, [*Asymptotic formulae in spectral geometry,*]{} Chapman & Hall/CRC, 2004. J. M. Gracia-Bondia, B. Iochum, T. Schücker, [*The standard model in noncommutative geometry and fermion doubling*]{}, Phys. Lett. B 416 (1998), no. 1-2, 123–128. B. Iochum, C. Levy, D. V. 
Vassilevich, [*Global and local aspects of spectral actions,*]{} J. Phys. A 45 (2012), no. 37, 374020, 19 pp. B. Iochum, C. Levy, D. Vassilevich, [*Spectral action for torsion with and without boundaries,*]{} Comm. Math. Phys. 310 (2012), no. 2, 367–382. D. Kolodrubetz, M. Marcolli, [*Boundary conditions of the RGE flow in the noncommutative geometry approach to particle physics and cosmology,*]{} Phys. Lett. B 693 (2010), no. 2, 166–174. H. B. Lawson, M.-L. Michelsohn, [*Spin geometry*]{}, Princeton University Press, 1989. M. Marcolli, [*Feynman motives,*]{} World Scientific Publishing Co. Pte. Ltd., 2010. M. Marcolli, [*Building cosmological models via noncommutative geometry,*]{} Int. J. Geom. Methods Mod. Phys. 8 (2011), no. 5, 1131–1168. M. Marcolli, E. Pierpaoli, [*Early universe models from noncommutative geometry*]{}, Adv. Theor. Math. Phys. 14 (2010), no. 5, 1373–1432. M. Marcolli, E. Pierpaoli, K. Teh, [*The coupling of topology and inflation in noncommutative cosmology,*]{} Comm. Math. Phys. 309 (2012), no. 2, 341–369. M. Marcolli, E. Pierpaoli, K. Teh, [*The spectral action and cosmic topology*]{}, Comm. Math. Phys. 304 (2011), no. 1, 125–174. W. Nelson, J. Ochoa, M. Sakellariadou, [*Constraining the noncommutative Spectral Action via astrophysical observations,*]{} Phys. Rev. Lett., Vol. 105 (2010), 101602. W. Nelson, M. Sakellariadou, [*Natural inflation mechanism in asymptotic noncommutative geometry*]{}, Phys. Lett. B 680: 263–266, 2009. W. Nelson, M. Sakellariadou, [*Cosmology and the noncommutative approach to the standard model*]{}, Phys. Rev. D 81: 085038, 2010. A. Sitarz, [*Spectral action and neutrino mass,*]{} Europhys. Lett. 86,10007 (2009). W. D. van Suijlekom, [*Renormalization of the spectral action for the Yang-Mills system,*]{} J. High Energy Phys. 2011, no. 3, 146, 9 pp. W. D. van Suijlekom, [*Renormalization of the asymptotically expanded Yang-Mills spectral action,*]{} Comm. Math. Phys. 312 (2012), no. 3, 883–912. A. E. 
van de Ven, [*Index-free heat kernel coefficients*]{}, Classical Quantum Gravity 15 (1998), 2311–2344. [^1]: [*E-mail addresses*]{}: ffathiz@uwo.ca, aghorba@uwo.ca, masoud@uwo.ca [^2]: Hereafter in this paper $t$ denotes the first variable of the space when it appears in $a(t)$ and its derivatives and it denotes the time when it appears in the heat operator and the associated small time asymptotic expansions.
--- abstract: 'A code for the numerical evaluation of hyperelliptic theta-functions is presented. Characteristic quantities of the underlying Riemann surface such as its periods are determined with the help of spectral methods. The code is optimized for solutions of the Ernst equation where the branch points of the Riemann surface are parameterized by the physical coordinates. An exploration of the whole parameter space of the solution is thus only possible with an efficient code. The use of spectral approximations allows for an efficient calculation of all quantities in the solution with high precision. The case of almost degenerate Riemann surfaces is addressed. Tests of the numerics using identities for periods on the Riemann surface and integral identities for the Ernst potential and its derivatives are performed. It is shown that an accuracy of the order of machine precision can be achieved. These accurate solutions are used to provide boundary conditions for a code which solves the axisymmetric stationary Einstein equations. The resulting solution agrees with the theta-functional solution to very high precision.' address: - 'Institut für Astronomie und Astrophysik, Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany' - 'LUTh, Observatoire de Paris, 92195 Meudon Cedex, France' author: - 'J. Frauendiener' - 'C. Klein' title: ' Hyperelliptic Theta-Functions and Spectral Methods ' --- Introduction ============ Solutions to integrable differential equations in terms of theta-functions were introduced with the works of Novikov, Dubrovin, Matveev, Its, Krichever, …(see [@DubNov75; @ItsMat75; @Kric78; @algebro]) for the Korteweg-de Vries (KdV) equation. Such solutions to e.g. the KdV, the Sine-Gordon, and the Non-linear Schrödinger equation describe periodic or quasi-periodic solutions, see [@dubrovin81; @algebro]. They are given explicitly in terms of Riemann theta-functions defined on some Riemann surface. 
Though all quantities entering the solution are in general given in explicit form via integrals on the Riemann surface, work with theta-functional solutions admittedly has not reached the importance of soliton solutions. The main reason for the more widespread use of solitons is that they are given in terms of algebraic or exponential functions. On the other hand the parameterization of theta-functions by the underlying Riemann surface is very implicit. The main parameters, typically the branch points of the Riemann surface, enter the solutions as parameters in integrals on the Riemann surface. A full understanding of the functional dependence on these parameters seems to be possible only numerically. In recent years algorithms have been developed to establish such relations for rather general Riemann surfaces as in [@tretkoff84] or via Schottky uniformization (see [@algebro]), which have been incorporated successively in numerical and symbolic codes, see [@seppala94; @hoeij94; @gianni98; @deconinck01; @deconinck03] and references therein (the last two references are distributed along with Maple 6, respectively Maple 8, and as a Java implementation at [@riemann]). For an approach to express periods of hyperelliptic Riemann surfaces via theta constants see [@enoric2003]. These codes are convenient for studying theta-functional solutions of equations of KdV type where the considered Riemann surfaces are ‘static’, i.e., independent of the physical coordinates. In these cases the characteristic quantities of the Riemann surface have to be calculated only once; merely the comparatively fast summation approximating the theta series by a finite sum, as e.g. in [@deconinck03], has to be carried out in dependence of the space-time coordinates. The purpose of this article is to study numerically theta-functional solutions of the Ernst equation [@ernst] which were given by Korotkin [@Koro88].
In this case the branch points of the underlying hyperelliptic Riemann surface are parameterized by the physical coordinates, the spectral curve of the Ernst equation is in this sense ‘dynamical’. The solutions are thus not studied on a single Riemann surface but on a whole family of surfaces. This implies that the time-consuming calculation of the periods of the Riemann surface has to be carried out for each point in the space-time. This includes limiting cases where the surface is almost degenerate. In addition the theta-functional solutions should be calculated to high precision in order to be able to test numerical solutions for rapidly rotating neutron stars such as provided e.g. by the spectral code `LORENE` [@Lorene]. This requires a very efficient code of high precision. We present here a numerical code for hyperelliptic surfaces where the integrals entering the solution are calculated by expanding the integrands with a Fast Cosine Transformation in MATLAB. The precision of the numerical evaluation is tested by checking identities for periods on Riemann surfaces and by comparison with exact solutions. The code is in principle able to deal with general (non-singular) hyperelliptic surfaces, but is optimized for a genus 2 solution to the Ernst equation which was constructed in [@prl2; @prd3]. We show that an accuracy of the order of machine precision ($\sim 10^{-14}$) can be achieved at a space-time point in general position with 32 polynomials and in the case of almost degenerate surfaces which occurs e.g., when the point approaches the symmetry axis with at most 256 polynomials. Global tests of the numerical accuracy of the solutions to the Ernst equation are provided by integral identities for the Ernst potential and its derivatives: the equality of the Arnowitt-Deser-Misner (ADM) mass and the Komar mass (see [@komar; @wald]) and a generalization of the Newtonian virial theorem as derived in [@virial]. 
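The basic idea behind the spectral treatment of the periods can be illustrated on an elliptic example (the following is an illustrative Python sketch of ours, not the MATLAB code described in the paper): after the substitution $x=\cos\theta$ the square-root singularities at the branch points $\pm1$ are absorbed into the Chebyshev weight, so Gauss-Chebyshev quadrature of the remaining analytic factor converges exponentially.

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean; gives the exact complete elliptic integral
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period(k, N=32):
    # int_{-1}^{1} dx / sqrt((1-x^2)(1-k^2 x^2)) by Gauss-Chebyshev quadrature:
    # nodes x_m = cos((2m+1) pi/(2N)); the weight pi/N absorbs 1/sqrt(1-x^2)
    s = 0.0
    for m in range(N):
        x = math.cos((2 * m + 1) * math.pi / (2 * N))
        s += 1.0 / math.sqrt(1.0 - k * k * x * x)
    return math.pi / N * s

k = 0.5
exact = math.pi / agm(1.0, math.sqrt(1.0 - k * k))   # = 2K(k) via the AGM
assert abs(period(k) - exact) < 1e-13
```

For hyperelliptic curves the same substitution is applied between pairs of branch points, with the remaining analytic factors of $1/\mu$ sampled at the Chebyshev nodes, which is the mechanism behind the machine-precision accuracy reported above.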
We use the numerical data for the theta-functions determined in this way to provide ‘exact’ boundary values on a sphere for the program library `LORENE` [@Lorene], which was developed for a numerical treatment of rapidly rotating neutron stars. `LORENE` solves the boundary value problem for the stationary axisymmetric Einstein equations with spectral methods. We show that the theta-functional solution is reproduced to the order of $10^{-11}$ and better. The paper is organized as follows: in section \[sec:ernsteq\] we collect useful facts on the Ernst equation and hyperelliptic Riemann surfaces, in section \[sec:spectral\] we summarize basic features of spectral methods and explain our implementation of various quantities. The calculation of the periods of the hyperelliptic surface and the non-Abelian line integrals entering the solution is performed together with tests of the precision of the numerics. In section \[sec:integrals\] we check integral identities for the Ernst potential. The test of the spectral code `LORENE` is presented in section \[sec:lorene\]. In section \[sec:concl\] we add some concluding remarks. Ernst equation and hyperelliptic Riemann surfaces {#sec:ernsteq} ================================================= The Ernst equation for the complex valued potential $\mathcal{E}$ (we denote the real and the imaginary part of $\mathcal{E}$ with $f$ and $b$ respectively) depending on the two coordinates $(\rho,\zeta)$ can be written in the form $$\Re \mathcal{E}\left(\mathcal{E}_{\rho\rho}+\frac{1}{\rho} \mathcal{E}_{\rho}+\mathcal{E}_{\zeta\zeta}\right)= \mathcal{E}_{\rho}^{2}+\mathcal{E}_{\zeta}^{2} \label{ernst1}.$$ The equation has a physical interpretation as the stationary axisymmetric Einstein equations in vacuum (see appendix and references given therein). Its complete integrability was shown by Maison [@maison] and Belinski-Zakharov [@belzak].
For real Ernst potential, the Ernst equation reduces to the axisymmetric Laplace equation for $\ln \mathcal{E}$. The corresponding solutions are static and belong to the so-called Weyl class, see [@exac]. Algebro-geometric solutions to the Ernst equation were given by Korotkin [@Koro88]. The solutions are defined on a family of hyperelliptic surfaces $\mathcal{L}(\xi,\bar{\xi})$ with $\xi=\zeta-i\rho$ corresponding to the plane algebraic curve $$\mu^{2}=(K-\xi)(K-\bar{\xi})\prod_{i=1}^{g}(K-E_{i})(K-F_{i}) \label{hyper1},$$ where $g$ is the genus of the surface and where the branch points $E_{i}$, $F_{i}$ are independent of the physical coordinates and for each $i$ subject to the reality condition $E_{i}=\bar{F}_{i}$ or $E_{i},F_{i}\in \mathbb{R}$. Hyperelliptic Riemann surfaces are important since they show up in the context of algebro-geometric solutions of various integrable equations such as KdV, sine-Gordon and Ernst. Whereas it is a non-trivial problem to find a basis for the holomorphic differentials on general surfaces (see e.g. [@deconinck01]), in the hyperelliptic case (see e.g. [@algebro]) it is given by $$d\nu_k = \left( \frac{dK}{\mu}, \frac{KdK}{\mu},\ldots, \frac{K^{g-1}dK}{\mu} \right) \label{basis},$$ which is the main simplification in the use of these surfaces. We introduce on $\mathcal{L}$ a canonical basis of cycles $(a_{k},b_{k})$, $k=1,\ldots,g$. The holomorphic differentials $d\omega_k$ are normalized by the condition on the $a$-periods $$\int_{a_{l}}^{}d\omega_{k}=2\pi i \delta_{lk}. \label{normholo}$$ The matrix of $b$-periods is given by $\mathbf{B}_{ik} = \int_{b_{i}}^{}d\omega_{k}$. The matrix $\mathbf{B}$ is a so-called Riemann matrix, i.e. it is symmetric and has a negative definite real part. The Abel map $\omega: \mathcal{L} \to \mbox{Jac}(\mathcal{L})$ with base point $E_{1}$ is defined as $\omega(P)=\int_{E_{1}}^{P}d\omega_k$, where $\mbox{Jac}(\mathcal{L})$ is the Jacobian of $\mathcal{L}$.
The theta-function with characteristics corresponding to the curve $\mathcal{L}$ is given by $$\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{x}|\mathbf{B})= \sum_{\mathbf{n}\in\mathbb{Z}^{g}}^{}\exp\left\{\frac{1}{2} \langle\mathbf{B}(\mathbf{p}+\mathbf{n}),(\mathbf{p}+\mathbf{n}) \rangle+\langle\mathbf{p}+\mathbf{n},2i\pi\mathbf{q}+\mathbf{x} \rangle\right\} \label{theta},$$ where $\mathbf{x}\in \mathbb{C}^{g}$ is the argument and $\mathbf{p},\mathbf{q}\in \mathbb{C}^{g}$ are the characteristics. We will only consider half-integer characteristics in the following. The theta-function with characteristics is, up to an exponential factor, equivalent to the theta-function with zero characteristic (the Riemann theta-function, denoted by $\Theta$) and shifted argument, $$\Theta_{\mathbf{p}\mathbf{q}}(\mathbf{x}|\mathbf{B})= \Theta(\mathbf{x}+\mathbf{B}\mathbf{p}+2i\pi\mathbf{q})\exp\left\{ \frac{1}{2}\langle\mathbf{B}\mathbf{p},\mathbf{p} \rangle+\langle\mathbf{p},2i\pi\mathbf{q}+\mathbf{x} \rangle\right\}. \label{theta2}$$ We denote by $d\omega_{PQ}$ a differential of the third kind, i.e., a 1-form which has poles in $P,Q\in \mathcal{L}$ with respective residues $+1$ and $-1$. This singularity structure characterizes the differential only up to an arbitrary linear combination of holomorphic differentials. The meromorphic differentials can be normalized by the condition that all $a$-periods vanish. We use the notation $\infty^{\pm}$ for the infinite points on the different sheets of the curve $\mathcal{L}$, namely $\mu/K^{g+1}\to \pm 1$ as $K\to \infty^{\pm}$. The differential $d\omega_{\infty^{+}\infty^{-}}$ is given up to holomorphic differentials by $-K^{g}dK/\mu$. It is well known that the $b$-periods of normalized differentials of the third kind can be expressed in terms of the Abel map (see e.g.
[@dubrovin81]), $$\int_{b_{k}}^{}d\omega_{PQ}=\omega_{k}(P)-\omega_{k}(Q), \quad k=1,\ldots,g \label{period}.$$ In [@prl; @prd2] a physically interesting subclass of Korotkin’s solutions was identified which can be written in the form $$\mathcal{E}=\frac{\Theta_{\mathbf{p}\mathbf{q}}(\omega(\infty^{+})+\mathbf{u})}{ \Theta_{\mathbf{p}\mathbf{q}}(\omega(\infty^{-})+\mathbf{u})}\cdot e^{I} \label{ernst2},$$ where $\mathbf{u}=(u_k)\in\mathbb{C}^g$ and where $$I=\frac{1}{2\pi i}\int_{\Gamma}^{}\ln G(K)\,d\omega_{\infty^{+} \infty^{-}}(K), \qquad u_k=\frac{1}{2\pi i} \int_{\Gamma}^{}\ln G(K)\,d\omega_k. \label{path1}$$ Here $\Gamma$ is a piece-wise smooth contour on $\mathcal{L}$ and $G(K)$ is a non-zero Hölder-continuous function on $\Gamma$. The contour $\Gamma$ and the function $G$ have to satisfy the reality conditions that with $K\in \Gamma$ also $\bar{K}\in \Gamma$ and $\bar{G}(\bar{K})=G(K)$; both are independent of the physical coordinates. In the following we will discuss the example of the solution constructed in [@prl2; @prd3], which can be interpreted as a disk of collisionless matter. For a physical interpretation see [@prd4]. The solution is given on a surface of the form (\[hyper1\]) with genus 2. The branch points independent of the physical coordinates are subject to the relations $E_{i}=\bar{F}_{i}$, $i=1,2$ and $E_{1}=-F_{2}$. The branch points are parameterized by two real parameters $\lambda$ and $\delta$. Writing $E_{1}^{2}=\alpha +i\beta$ with real $\alpha$, $\beta$, we have $$\alpha=-1+\frac{\delta}{2}, \quad \beta=\sqrt{\frac{1}{\lambda^{2}} +\delta-\frac{\delta^{2}}{4}} \label{disk1}.$$ The contour $\Gamma$ is the part of the covering of the imaginary axis in the upper sheet between $-i$ and $i$; the function $G$ has the form $$G(K)=\frac{\sqrt{(K^{2}-\alpha)^{2}+\beta^{2}}+K^{2}+1}{ \sqrt{(K^{2}-\alpha)^{2}+\beta^{2}}-K^{2}-1}.
\label{disk2}$$ The parameter $\delta$ varies between $\delta=0$, corresponding to the solution first given in [@NeuMei95], and $\delta_{s}=2(1+\sqrt{1+1/\lambda^{2}})$, the static limit in which $\beta=0$. In the latter case the Riemann surface degenerates; the resulting Ernst potential (\[ernst2\]) is real and can be expressed in terms of objects corresponding to the surface $\mathcal{L}_{0}$ of genus 0 defined by the relation $\mu_{0}^{2}=(K-\xi)(K-\bar{\xi})$. The lower limit for the parameter $\lambda$ is $\lambda=0$, the so-called Newtonian limit, where the branch points $E_{i}$, $F_{i}$ tend to infinity. Since $G$ is also of order $\lambda$ in this limit, the lowest order contributions are again real and defined on the surface $\mathcal{L}_{0}$. This case corresponds to the disk limit of the Maclaurin ellipsoids, see [@bitr]. The upper limit for $\lambda$ is infinity for $\delta\neq 0$ and $\lambda_{c}=4.629\ldots$ for $\delta=0$. The limiting situation is special in the second case since the resulting spacetime is no longer asymptotically flat and since the axis is singular. The invariant circumference of the disk is zero in this case, which implies that the disk shrinks to a point for an observer in the exterior of the disk, see [@prd4]. For physical reasons the solution was discussed in [@prd4] in dependence of two other real parameters $\epsilon$ and $\gamma$. Here $\epsilon$ is related to the redshift of photons emitted at the center of the disk and detected at infinity. It varies between 0 in the Newtonian limit and 1 in the ultra-relativistic limit, where photons cannot escape to infinity. Thus, $\epsilon$ is a measure of how relativistic the situation is. The parameter $\gamma$ is a measure of how static the solution is; it varies between 0 (the static limit) and 1. For the functional relations between $\epsilon$, $\gamma$ and $\lambda$, $\delta$ see [@prd4].
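As a quick numerical illustration of these properties of $G$, the following sketch (Python/NumPy instead of the MATLAB used in the paper; the point $K$ and the parameter values are arbitrary choices for illustration) checks the reality condition $\bar{G}(\bar{K})=G(K)$ and the fact that $\ln G$ is of order $\lambda$ and proportional to $1+K^{2}$ in the Newtonian limit:

```python
import numpy as np

# Evaluate G(K) of (disk2) with alpha, beta taken from (disk1).
def G(K, lam, delta):
    alpha = -1.0 + delta / 2.0
    beta = np.sqrt(1.0 / lam**2 + delta - delta**2 / 4.0)
    root = np.sqrt((K**2 - alpha)**2 + beta**2 + 0j)  # force complex branch
    return (root + K**2 + 1.0) / (root - K**2 - 1.0)

lam, delta = 0.5, 1.0
K = 0.3 + 0.7j                      # generic point off the branch cuts

# reality condition: conj(G(conj(K))) = G(K)
assert abs(np.conj(G(np.conj(K), lam, delta)) - G(K, lam, delta)) < 1e-12

# Newtonian limit: ln G ~ 2*lam*(1 + K^2) for small lambda, so the
# ratio ln G / (1 + K^2) is (to leading order) independent of K
lam_small = 1e-4
K1, K2 = 0.2j, 0.8j                 # two points on the contour Gamma
r1 = np.log(G(K1, lam_small, delta)) / (1.0 + K1**2)
r2 = np.log(G(K2, lam_small, delta)) / (1.0 + K2**2)
assert abs(r1 - r2) < 1e-3 * abs(r1)
```

Both checks pass to well below the stated tolerances; the second one is the numerical counterpart of the statement that the Newtonian limit is governed by $\ln G\propto 1+K^{2}$.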
The constant $\Omega$ (constant with respect to the physical coordinates) appearing in the following can be considered as a natural scale for the angular velocities in the disk; for a definition see [@prd4]. The coordinate $\rho$ can take all non-negative real values, the coordinate $\zeta$ all real values. The example we are studying here has an equatorial symmetry, $$\mathcal{E}(\rho,-\zeta)=\bar{\mathcal{E}}(\rho,\zeta) \label{eq:eqsym}.$$ It is therefore sufficient to consider only non-negative values of $\zeta$. The case $\rho=0$ corresponds to the axis of symmetry where the branch cut $[\xi,\bar{\xi}]$ degenerates to a point. As was shown in [@prd2; @prd4], the Ernst potential can be written in this limit in terms of theta-functions on the elliptic surface $\mathcal{L}_{1}$ defined by $\mu_{1}^{2}= (K^{2}-\alpha)^{2}+\beta^{2}$, i.e. the surface $\mathcal{L}$ with the cut $[\xi,\bar{\xi}]$ removed. Near the axis the Ernst potential has the form (see [@fay; @prd2]) $$\mathcal{E}(\rho,\zeta)=\mathcal{E}_{0}(\zeta)+\rho^{2} \mathcal{E}_{1}(\zeta)+\mathcal{O}(\rho^{4}); \label{eq:nearaxis}$$ here $\mathcal{E}_{0}$ and $\mathcal{E}_{1}$ are independent of $\rho$, and $\mathcal{E}_{0}$ is the axis potential. This formula could be used to calculate the potential close to the axis. However, we considered only values of $\rho$ greater than $10^{-5}$ and did not experience any numerical problems; consequently we did not use formula (\[eq:nearaxis\]). For large values of $r=|\xi|$, the Ernst potential has the asymptotic expansion $$\mathcal{E}=1-\frac{2m}{r}+\frac{2m^{2}}{r^{2}} -\frac{2iJ\zeta}{r^{3}}+\mathcal{O}(1/r^{3}); \label{eq:ernstinfinity}$$ here the constants (with respect to $\xi$) $m$ and $J$ are the ADM mass and the angular momentum of the space-time, respectively. They can be calculated on the axis in terms of elliptic theta-functions, see [@prd4]. Formula (\[eq:ernstinfinity\]) is used for values of $r>10^{6}$.
In the limit $\xi=E_{2}$, the Ernst potential can be given on the surface $\Sigma_{0}$ of genus 0 obtained by removing the cuts $[\xi,\bar{\xi}]$ and $[E_{2},F_{2}]$ from the surface $\mathcal{L}$. The potential can thus be given in this case in terms of elementary functions, see [@prd4]. In the equatorial plane $\zeta=0$, the Riemann surface $\mathcal{L}$ has an additional involution $K\to-K$ as can be seen from (\[hyper1\]). This implies that the surface can be considered as a covering of an elliptic surface, see [@algebro; @prd2]. The theta-functions in (\[ernst2\]) can be written as sums of theta-functions on the covered surface and on the Prym variety, which happens to be an elliptic surface as well in this case. We use this fact at the disk ($\zeta=0$, $\rho\leq 1$), where the moving branch points are situated on $\Gamma$. There all quantities can be expressed in terms of quantities defined on the Prym surface $\Sigma_{w}$ defined by $\mu_{w}^{2}=(K+\rho^{2})((K-\alpha)^{2}+\beta^{2})$, see [@prd4].

Numerical implementations {#sec:spectral}
=========================

The numerical task in this work is to approximate and evaluate analytically defined functions as accurately and efficiently as possible. To this end it is advantageous to use (pseudo-)spectral methods, which are distinguished by their excellent approximation properties when applied to smooth functions. Here the functions are known to be analytic except at isolated points. In this section we explain the basic ideas behind the use of spectral methods and describe in detail how the theta-functions and the Ernst potential can be obtained to a high degree of accuracy.

Spectral approximation
----------------------

The basic idea of spectral methods is to approximate a given function $f$ globally on its domain of definition by a linear combination $$f \approx \sum_{k=0}^N a_k \phi_k,$$ where the functions $\phi_k$ are taken from some class of functions which is chosen appropriately for the problem at hand.
The coefficients $a_k$ are determined by requiring that the linear combination should be ‘close’ to $f$. Thus, one could require that $||f - \sum_{k=0}^N a_k \phi_k||$ should be minimal for some norm. Another possibility is to require that $\left< f -\sum_{k=0}^N a_k \phi_k, \chi_l\right> = 0$ for $l=0,\ldots,N$ with an appropriate inner product and associated orthonormal basis $\chi_l$. This is called the Galerkin method. Finally, one can demand that $f(x_l) = \sum_{k=0}^N a_k \phi_k(x_l)$ at selected points $(x_l)_{l=0,\ldots,N}$. This is the so-called collocation method, which is the one we will use in this paper. In this case the function values $f_l=f(x_l)$ and the coefficients $a_k$ are related by the matrix $\Phi_{lk} = \phi_k(x_l)$. The choice of the expansion basis depends to a large extent on the specific problem. For periodic functions there is the obvious choice of trigonometric polynomials $\phi_k(x) = \exp(2\pi i k x)$, while for functions defined on a finite interval the most commonly used functions are orthogonal polynomials, in particular Chebyshev and Legendre polynomials. While the latter are important because of their relationship with the spherical harmonics on the sphere, the former are used because they have very good approximation properties and because one can use fast transform methods when computing the expansion coefficients from the function values, provided one chooses the collocation points $x_l=\cos(\pi l/N)$ (see [@fornberg] and references therein). We will use here collocation with Chebyshev polynomials. Let us briefly summarize their basic properties.
The Chebyshev polynomials $T_n(x)$ are defined on the interval $I=[-1,1]$ by the relation $$T_n(\cos(t)) = \cos(n t), \text{where } x = \cos(t),\qquad t\in[0,\pi].$$ They satisfy the differential equation $$\label{eq:diffeqcheb} (1-x^2)\, \phi''(x) - x \phi'(x) + n^2 \phi(x) = 0.$$ The addition theorems for sine and cosine imply the recursion relations $$\label{eq:recurscheb} T_{n+1}(x) - 2 x\, T_n(x) + T_{n-1}(x) = 0,$$ for the polynomials $T_n$ and $$\label{eq:recursderiv} \frac{T'_{n+1}(x)}{n+1} - \frac{T'_{n-1}(x)}{n-1} = 2 T_n(x)$$ for their derivatives. The Chebyshev polynomials are orthogonal on $I$ with respect to the Hermitian inner product $$\left< f, g \right> = \int_{-1}^1 f(x) \bar g(x) \,\frac{d x}{\sqrt{1-x^2}}.$$ We have $$\label{eq:ortho} \left< T_m , T_n \right> = c_m \frac\pi2\, \delta_{mn}$$ where $c_0=2$ and $c_l=1$ otherwise. Now suppose that a function $f$ on $I$ is sampled at the points $x_l=\cos(\pi l/N)$ and that $\sum_{n=0}^N a_n T_n$ is the interpolating polynomial. Defining $c_0=c_N=2$, $c_n=1$ for $0<n<N$ in the discrete case and the numbers $F_n = c_n a_n$ we have $$\begin{split} f_l &= \sum_{n=0}^N a_n T_n(x_l) = \sum_{n=0}^N a_n T_n(\cos(\pi l/N)) \\ &= \sum_{n=0}^N a_n \cos(\pi nl/N) = \sum_{n=0}^N \frac{F_n}{c_n}\cos(\pi nl/N) \end{split}.$$ This looks very much like a discrete cosine series, and in fact one can show [@briggshenson] that the coefficients $F_n$ are related to the values $f_l$ of the function by a discrete cosine transform (DCT) $$F_n = \frac2N\sum_{l=0}^N \frac{f_l}{c_l}\cos(\pi nl/N).$$ Note that, up to a numerical factor, the DCT is its own inverse. This relationship between the Chebyshev polynomials and the DCT is the basis for the efficient computations, because the DCT can be performed numerically by using the fast Fourier transform (FFT) together with pre- and postprocessing of the coefficients [@fornberg].
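The passage from sampled values to Chebyshev coefficients via an FFT-based DCT can be sketched in a few lines (Python/NumPy rather than the paper's MATLAB; the even-extension trick below is one standard way to realize the DCT with a plain FFT):

```python
import numpy as np

def cheb_coeffs(f, N):
    """Chebyshev coefficients a_n of the interpolant of f at the points
    x_l = cos(pi*l/N), computed via an FFT-based DCT."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    fl = f(x)
    # Even extension of the samples turns the cosine series into a
    # discrete Fourier series of length 2N.
    ext = np.concatenate([fl, fl[-2:0:-1]])
    F = np.real(np.fft.fft(ext)) / N          # F_n = c_n * a_n
    a = F[:N + 1]
    a[0] /= 2.0                                # divide by c_0 = 2
    a[N] /= 2.0                                # divide by c_N = 2
    return a

# Sanity check: T_3(x) = 4x^3 - 3x has the single coefficient a_3 = 1.
a = cheb_coeffs(lambda x: 4 * x**3 - 3 * x, 8)
assert abs(a[3] - 1.0) < 1e-13
assert max(abs(a[n]) for n in range(9) if n != 3) < 1e-13
```

The check confirms that the transform reproduces the coefficients of a known polynomial to rounding accuracy.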
The fast transform allows us to switch easily between the representations of the function in terms of its sampled values and in terms of the expansion coefficients $a_n$ (or $F_n$). The fact that $f$ is approximated globally by a finite sum of polynomials allows us to express any operation applied to $f$ approximately in terms of the coefficients. Let us illustrate this in the case of integration. So we assume that $f = p_N =\sum_{n=0}^N a_n T_n$ and we want to find an approximation of the integral of $p_N$, i.e., the function $$F(x) = \int_{-1}^x f(s)\, ds,$$ so that $F'(x)=f(x)$. We make the ansatz $F(x) = \sum_{n=0}^N b_n\, T_n(x)$ and obtain the equation $$F' = \sum_{n=0}^N b_n\,T'_n = \sum_{n=0}^N a_n T_n = f.$$ Expressing $T_n$ in terms of the $T'_n$ using (\[eq:recursderiv\]) and comparing coefficients implies the relations $$b_1 = \frac{2a_{0} - a_{2}}{2}, \qquad b_n = \frac{a_{n-1} - a_{n+1}}{2n}\quad \text{for }1< n < N,\qquad b_N = \frac{a_{N-1}}{2N}$$ between the coefficients, which determine all $b_n$ in terms of the $a_n$ except for $b_0$. This free constant is determined by the requirement that $F(-1)=0$, which implies (because $T_n(-1)=(-1)^n$) $$b_0 = - \sum_{n=1}^N (-1)^n b_n.$$ These coefficients $b_n$ determine a polynomial $q_N$ of degree $N$ which approximates the indefinite integral $F(x)$ of the $N$-th degree polynomial $f$. The exact function is a polynomial of degree $N+1$ whose highest coefficient is proportional to the highest coefficient $a_N$ of $f$. Thus, ignoring this term we make an error whose magnitude is of the order of $|a_N|$, so that the approximation is the better the smaller $|a_N|$ is. The same is true when a smooth function $f$ is approximated by a polynomial $p_N$. Then, again, the indefinite integral will be approximated well by the polynomial $q_N$ whose coefficients are determined as above, provided the highest coefficients in the approximating polynomial $p_N$ are small.
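The recursion for the antiderivative coefficients can be sketched as follows (a Python illustration; the helper `cheb_eval` is only used for the check, and the input is assumed to have at least three coefficients):

```python
import numpy as np

def cheb_antiderivative(a):
    """Coefficients b_n of the polynomial approximating the
    antiderivative F with F(-1) = 0, given Chebyshev coefficients a_n
    (at least three coefficients assumed)."""
    N = len(a) - 1
    b = np.zeros(N + 1)
    b[1] = (2 * a[0] - a[2]) / 2.0
    for n in range(2, N):
        b[n] = (a[n - 1] - a[n + 1]) / (2 * n)
    b[N] = a[N - 1] / (2 * N)
    # free constant fixed by F(-1) = 0, using T_n(-1) = (-1)^n
    b[0] = -sum((-1)**n * b[n] for n in range(1, N + 1))
    return b

def cheb_eval(c, x):
    """Evaluate a Chebyshev series via T_n(x) = cos(n arccos x)."""
    return sum(cn * np.cos(n * np.arccos(x)) for n, cn in enumerate(c))

# Check on f(x) = T_1(x) = x, whose antiderivative with F(-1) = 0
# is (x^2 - 1)/2.
a = np.zeros(6); a[1] = 1.0
b = cheb_antiderivative(a)
x = 0.37
assert abs(cheb_eval(b, x) - (x**2 - 1) / 2.0) < 1e-12
```

The check reproduces the exact antiderivative of a low-degree polynomial, for which the truncation error discussed above vanishes.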
From the coefficients $b_n$ we can also find an approximation to the definite integral $\int_{-1}^1 f(s)\,ds = F(1)$ by evaluating $$q_N(1) = \sum_{n=0}^Nb_n = 2\sum_{l=0}^{\lfloor (N-1)/2\rfloor}b_{2l+1}.$$ Thus, to find an approximation of the integral of a function $f$ we proceed as described above, first computing the coefficients $a_n$ of $f$, then computing the $b_n$ and finally calculating the sum of the odd coefficients.

Implementation of the square-root
---------------------------------

The Riemann surface $\mathcal{L}$ is defined by an algebraic curve of the form $$\mu^{2}=(K-\xi)(K-\bar{\xi})\prod_{i=1}^{g}(K-E_{i})(K-\bar E_{i}),$$ where in our case we have $g=2$ throughout. In order to compute the periods and the theta-functions related to this Riemann surface it is necessary to evaluate the square-root $\sqrt{\mu^2(K)}$ for arbitrary complex numbers $K$. In order to make this a well-defined problem we introduce the cut-system as indicated in Fig. \[fig:cut-system\]. On the cut surface the square-root $\mu(K)$ is defined as in [@heil] as the product of square-roots of monomials $$\mu=\sqrt{K-\xi\phantom{\bar\xi}} \sqrt{K-\bar{\xi}} \prod_{i=1}^{g} \sqrt{K-E_{i}} \sqrt{K-\bar E_{i}}. \label{eq:root}$$ Square-root routines such as the one available in MATLAB usually have their branch-cut along the negative real axis. The expression (\[eq:root\]) is holomorphic on the cut surface, so that we cannot simply take the built-in square-root when computing $\sqrt{\mu^2(K)}$. Instead we need to use the information provided by the cut-system to define adapted square-roots. Let $\arg(z)$ be the argument of a complex number $z$ with values in $]-\pi,\pi[$ and consider two factors in (\[eq:root\]) such as $$\sqrt{K-P_1}\sqrt{K-P_2}$$ where $P_1$ and $P_2$ are two branch points connected by a branch-cut. Let $\alpha=\arg(P_2-P_1)$ be the argument of the line from $P_1$ to $P_2$.
Now we define the square-root $\sqrt[(\alpha)]{\cdot}$ with branch-cut along the ray with argument $\alpha$ by computing for each $z\in\mathbb{C}$ the square-root $s:=\sqrt{z}$ with the available MATLAB routine and then putting $$\sqrt[(\alpha)]{z} = \left\{ \begin{array}{rl} s & \alpha/2<\arg(s)<\alpha/2 + \pi\\ -s & \text{otherwise} \end{array} \right. .$$ With this square-root we compute the two factors $$\sqrt[(\alpha)]{K-P_1}\sqrt[(\alpha)]{K-P_2}.$$ It is easy to see that this expression changes sign exactly when the branch-cut between $P_1$ and $P_2$ is crossed. We compute the expression (\[eq:root\]) by multiplying the pairs of factors which correspond to the branch-cuts. This procedure is not possible in the case of the non-linear transformations we use to evaluate the periods in certain limiting cases. In these cases the root is chosen in such a way that the integrand is a continuous function on the path of integration.

Numerical treatment of the periods
----------------------------------

The quantities entering formula (\[ernst2\]) for the Ernst potential are the periods of the Riemann surface and the line integrals $\mathbf{u}$ and $I$. The value of the theta-function is then approximated by a finite sum. The periods of a hyperelliptic Riemann surface can be expressed as integrals between branch points. Since we need in our example the periods of the holomorphic differentials and of the differential of the third kind with poles at $\infty^{\pm}$, we have to consider integrals of the form $$\int_{P_{i}}^{P_{j}}\frac{K^{n}dK}{\mu(K)}, \quad n=0,1,2 \label{period1},$$ where $P_{i}$, $P_{j}$, $i,j=1,\ldots,6$, denote branch points of $\mathcal{L}$.
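The adapted square-root described above can be sketched as follows (Python; `np.sqrt`, like MATLAB's built-in routine, has its branch-cut along the negative real axis; the branch points below are arbitrary illustrative values):

```python
import numpy as np

def sqrt_alpha(z, alpha):
    """Square-root with branch-cut along the ray with argument alpha:
    the principal root s is kept if arg(s) lies in (alpha/2, alpha/2 + pi),
    otherwise -s is returned."""
    s = np.sqrt(z + 0j)
    # shift arg(s) by alpha/2 modulo 2*pi and test the half-plane
    phi = np.mod(np.angle(s) - alpha / 2.0, 2 * np.pi)
    return np.where((phi > 0) & (phi < np.pi), s, -s)

def root_pair(K, P1, P2):
    """Product of the two adapted roots for a pair of branch points
    connected by a straight branch-cut from P1 to P2."""
    alpha = np.angle(P2 - P1)
    return sqrt_alpha(K - P1, alpha) * sqrt_alpha(K - P2, alpha)

# The product changes sign exactly when the cut between P1 and P2 is
# crossed: compare two points slightly to either side of a vertical cut.
P1, P2 = 1.0 - 1.0j, 1.0 + 1.0j
Kl, Kr = 0.999 + 0.0j, 1.001 + 0.0j
assert np.real(root_pair(Kl, P1, P2) / root_pair(Kr, P1, P2)) < 0

# Away from the cut the product agrees with the principal branch.
assert abs(root_pair(3.0, P1, P2) - np.sqrt((3 - P1) * (3 - P2))) < 1e-12
```

The first assertion exhibits the sign change across the cut; the second shows that the construction does not disturb the root where no cut is crossed.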
In general position we use a linear transformation of the form $K =ct+d$ to transform the integral (\[period1\]) to the normal form $$\label{eq:int_aperiod} \int_{-1}^1 \frac{\alpha_0 + \alpha_1 t + \alpha_2 t^2}{\sqrt{1-t^2}} \;H(t) \,dt,$$ where the $\alpha_i$ are complex constants and where $H(t)$ is a continuous (in fact, analytic) complex-valued function on the interval $[-1,1]$. This form of the integral suggests expressing the powers $t^n$ in the numerator in terms of the first three Chebyshev polynomials $T_0(t)=1$, $T_1(t)=t$ and $T_2(t)= 2t^2-1$ and approximating the function $H(t)$ by a linear combination of Chebyshev polynomials $$H(t) = \sum_{n\ge0} h_n T_n(t).$$ The integral is then calculated with the help of the orthogonality relation (\[eq:ortho\]) of the Chebyshev polynomials. Since the Ernst potential has to be calculated for all $\rho,\zeta\in \mathbb{R}^{+}_{0}$, it is convenient to use the cut-system of Fig. \[fig:cut-system\]. In this system the moving cut does not cross the immovable cut. In addition the system is adapted to the symmetries and reality properties of $\mathcal{L}$. Thus the periods $a_{2}$ and $b_{2}$ are related to $a_{1}$ and $b_{1}$ via complex conjugation. For the analytical calculations of the Ernst potential in the limit of collapsing cuts, we chose in [@prd2] cut-systems adapted to the respective situation. In the limit $\xi\to \bar{\xi}$ we used, for instance, a system where $a_{2}$ is the cycle around the cut $[\xi,\bar{\xi}]$. This has the effect that only the $b$-period $b_{2}$ diverges logarithmically in this case, whereas the remaining periods stay finite as $\rho$ tends to 0. In the cut-system of Fig. \[fig:cut-system\], all periods diverge as $\ln \rho$. Since the divergence is only logarithmic, this does not pose a problem for values of $\rho>10^{-5}$. In addition, the integrals which have to be calculated in the evaluation of the periods are the same in both cut-systems.
Thus there is no advantage in using different cut-systems for the numerical work. To test the numerics we use the fact that the integral of any holomorphic differential along a contour surrounding the cut $[E_{1},F_{1}]$ in the positive direction is equal to minus the sum of all $a$-periods of this differential. Since this condition is not implemented in the code it provides a strong test for the numerics. It can be seen in Fig. \[fig:test\_periods\] that 16 to 32 polynomials are sufficient in general position to achieve optimal accuracy. Since MATLAB works with 16 digits, machine precision is in general limited to 14 digits due to rounding errors. These rounding errors are also the reason why the accuracy drops slightly when a higher number of polynomials is used. Consequently, the use of a low number of polynomials not only requires fewer computational resources but also reduces the rounding errors. It is therefore worthwhile to reformulate a problem if a high number of polynomials would otherwise be necessary to obtain optimal accuracy. Such situations occur in the calculation of the periods when the moving branch points almost coincide, which happens on the axis of symmetry in the space-time or at spatial infinity. As can be seen from Fig. \[fig:test\_periods\], for $\rho=10^{-3}$ and $\zeta=10^{3}$ not even 2048 polynomials (this is the limit due to memory on the low-end computers we were using) produce sufficient accuracy. The reason for these problems is that the function $H$ in (\[eq:int\_aperiod\]) behaves like $1/\sqrt{t+\rho}$ near $t=0$. For small $\rho$ this behavior is only satisfactorily approximated by a large number of polynomials. We therefore split the integral into two integrals, between $F_{2}$ and $(F_{2}+\bar{\xi})/2$ and between $(F_{2}+\bar{\xi})/2$ and $\bar{\xi}$. The first integral is calculated with the Chebyshev integration routine after the substitution $t=\sqrt{K-F_{2}}$.
This substitution leads to a regular integrand also at the branch point $F_{2}$. The second integral is calculated with the Chebyshev integration routine after the substitution $K-\zeta=\rho\sinh(t)$. This takes care of the almost collapsing cut $[\xi,\bar{\xi}]$. It can be seen in Fig. \[fig:test\_periods\] that 128 polynomials are sufficient to obtain machine precision even in almost degenerate situations. The cut-system in Fig. \[fig:cut-system\] is adapted to the limit $\bar{\xi}\to F_{2}$ as far as the $a$-periods are concerned, since the cut which collapses in this limit is encircled by an $a$-cycle. However, there are similar problems as above in the determination of the $b$-periods. For $\bar{\xi}\sim F_{2}$ we split the integrals for the $b$-periods as above into two integrals, between $F_{1}$ and $0$ and between $0$ and $F_{2}$. For the first integral we use the integration variable $t = \sqrt{K-F_1}$, for the second $K=\Re F_{2}-i\Im F_{2}\sinh t$. Since the Riemann matrix (the matrix of $b$-periods of the holomorphic differentials after normalization) is symmetric, the error in the numerical evaluation of the $b$-periods can be estimated via the asymmetry of the calculated Riemann matrix. We define the function $err(\rho,\zeta)$ as the maximum of the norm of the difference in the $a$-periods discussed above and the difference of the off-diagonal elements of the Riemann matrix. This error is presented for a whole space-time in Fig. \[fig:error\]. The values for $\rho$ and $\zeta$ vary between $10^{-4}$ and $10^{4}$. On the axis and at the disk we give the error for the elliptic integrals (only the error in the evaluation of the $a$-periods, since the Riemann matrix has just one component). For $\xi\to \infty$ the asymptotic formulas for the Ernst potential are used. The calculation is performed with 128 polynomials, and with up to 256 for $|\xi|>10^{3}$. It can be seen that the error is in this case globally below $10^{-13}$.
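The evaluation of an integral in the normal form (\[eq:int\_aperiod\]) via the orthogonality relation (\[eq:ortho\]) can be sketched as follows (Python/NumPy; `chebinterpolate` plays the role of the DCT step, and the function name `normal_form_integral` is an illustrative choice):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def normal_form_integral(alpha, H, deg=32):
    """Approximate I = int_{-1}^{1} (a0 + a1 t + a2 t^2) H(t) / sqrt(1-t^2) dt
    using H = sum h_n T_n and <T_m, T_n> = c_m (pi/2) delta_mn
    with c_0 = 2, c_m = 1 otherwise."""
    h = C.chebinterpolate(H, deg)
    a0, a1, a2 = alpha
    # 1 = T_0, t = T_1, t^2 = (T_0 + T_2)/2
    return (a0 * np.pi * h[0]
            + a1 * np.pi * h[1] / 2.0
            + a2 * (np.pi * h[0] + np.pi * h[2] / 2.0) / 2.0)

# Check: with H = 1, int_{-1}^1 t^2 / sqrt(1-t^2) dt = pi/2.
I = normal_form_integral((0.0, 0.0, 1.0), lambda t: np.ones_like(t))
assert abs(I - np.pi / 2.0) < 1e-13
```

Only the first three coefficients of $H$ enter the result, which is why the method is so cheap once the coefficients are available.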
Numerical treatment of the line integrals
-----------------------------------------

The line integrals $\mathbf{u}$ and $I$ in (\[ernst2\]) are linear combinations of integrals of the form $$\int_{-i}^{i}\frac{\ln G(K)K^{l}dK}{\mu(K)}, \qquad l=0,1,2 \label{eq:line1}.$$ In general position, i.e., not close to the disk and for $\lambda$ small enough, the integrals can be calculated directly with the Chebyshev integration routine after the transformation $K=it$. To test the numerics we consider the Newtonian limit ($\lambda\to0$), where the function $\ln G$ is proportional to $1+K^{2}$, i.e. we calculate the test integral $$\int_{-i}^{i}\frac{(1+K^{2})\;dK}{\sqrt{(K-\zeta)^{2}+\rho^{2}}} \label{eq:testline}.$$ We compare the numerical result with the analytical one in Fig. \[fig:line\]. In general position machine precision is reached with 32 polynomials. When the moving cut approaches the path $\Gamma$, i.e., when the space-time point comes close to the disk, the integrand in (\[eq:testline\]) develops cusps near the points $\xi$ and $\bar{\xi}$. In this case a satisfactory approximation becomes difficult even with a large number of polynomials. Therefore we split the integration path into $[-i,-i\rho]$, $[-i\rho,i\rho]$ and $[i\rho,i]$. Using the reality properties of the integrands, we only calculate the integrals between $0$ and $i\rho$, and between $i\rho$ and $i$. In the first case we use the transformation $K= \zeta+\rho\sinh t$ to evaluate the integral with the Chebyshev integration routine, in the second case we use the transformation $t = \sqrt{K-\bar{\xi}}$. It can be seen in Fig. \[fig:line\] that machine precision can be reached even at the disk with 64 to 128 polynomials. The values at the disk are, however, determined in terms of elliptic functions, which is more efficient than the hyperelliptic formulae.
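The effect of the $\sinh$ substitutions can be illustrated on a model integral with a nearly singular integrand (a Python sketch; Gauss-Legendre quadrature stands in for the Chebyshev integration routine of the text):

```python
import numpy as np

# Model integral: int_{-1}^{1} dt / sqrt(t^2 + rho^2). Its integrand
# has a peak of width rho, which defeats any fixed-order quadrature,
# while the substitution t = rho*sinh(u) removes the near-singularity
# completely (the transformed integrand is identically 1).

rho = 1e-3
exact = 2.0 * np.arcsinh(1.0 / rho)

nodes, weights = np.polynomial.legendre.leggauss(32)

# naive quadrature directly on [-1, 1]
naive = np.sum(weights / np.sqrt(nodes**2 + rho**2))

# after t = rho*sinh(u), the domain becomes [-U, U] with U = arcsinh(1/rho)
U = np.arcsinh(1.0 / rho)
u = U * nodes                                   # rescale nodes to [-U, U]
integrand = rho * np.cosh(u) / np.sqrt(rho**2 * np.sinh(u)**2 + rho**2)
substituted = U * np.sum(weights * integrand)

assert abs(substituted - exact) < 1e-10         # substitution: exact result
assert abs(naive - exact) > 1e-1                # direct evaluation fails badly
```

The same mechanism is at work in the substitutions $K=\zeta+\rho\sinh t$ used above: the change of variables absorbs the small scale $\rho$ into the integration variable.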
To treat the case where $\delta\lambda^{2}$ is not small, it is convenient to rewrite the function $G$ in (\[disk2\]) in the form $$\ln G(K) =2\ln \left(\sqrt{(K^{2}-\alpha)^{2}+\beta^{2}}+K^{2} +1\right)-\ln \left(\frac{1}{\lambda^{2}}-\delta K^{2}\right) \label{eq:log}.$$ In the limit $\delta \lambda^{2}\to \infty$ with $\delta$ finite, the second term in (\[eq:log\]) becomes singular for $K=0$. Even for $\delta\lambda^{2}$ large but finite, the approximation of the integrand by Chebyshev polynomials requires a huge number of coefficients, as can be seen from Fig. \[fig:logreg\]. It is therefore sensible to ‘regularize’ the integrand near $K=0$. Instead of the function $\ln \left(\frac{1}{\lambda^{2}}-\delta K^{2}\right) F(K)$, where $F(K)$ is a $C^{\infty}$ function near $K=0$, we consider the function $$\ln \left(\frac{1}{\lambda^{2}}-\delta K^{2}\right)\left( F(K)-F(0)-F'(0)K-\ldots-\frac{1}{n!}F^{(n)}(0)K^{n}\right) \label{eq:logreg}.$$ The parameter $n$ is chosen such that the spectral coefficients of (\[eq:logreg\]) are of the order of $10^{-14}$ for a given number of polynomials, see Fig. \[fig:logreg\]. There we consider the integral $$\int_{-i}^{i}\frac{\ln G(K)dK}{\sqrt{(K^{2}-\alpha)^{2}+\beta^{2}}} \label{eq:axisreg},$$ which has to be calculated on the axis. We show the absolute values of the coefficients $a_{k}$ in an expansion of the integrand in Chebyshev polynomials, $\sum_{k=1}^{N}a_{k}T_{k}$. It can be seen that one has to go up to $n=6$ in (\[eq:logreg\]). The integral $\int_{\Gamma}^{}\ln G(K) F(K)\,dK$ is then calculated numerically as the integral of the function (\[eq:logreg\]); the subtracted terms are integrated analytically.
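The acceleration of the coefficient decay by the subtraction in (\[eq:logreg\]) can be illustrated as follows (a Python sketch; $F=\cos$ is a hypothetical smooth stand-in for the actual factor, the parameter values are arbitrary, and $n=6$ is the order quoted in the text):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

lam2_inv, delta = 1e-6, 1.0       # 1/lambda^2 small <-> delta*lambda^2 large
log_term = lambda t: np.log(lam2_inv + delta * t**2)  # K = i*t on Gamma

F = np.cos                         # stand-in smooth factor F(K)
# Taylor polynomial of F at 0 up to order n = 6
taylor = lambda t: 1.0 - t**2 / 2.0 + t**4 / 24.0 - t**6 / 720.0

deg = 64
a_raw = C.chebinterpolate(lambda t: log_term(t) * F(t), deg)
a_reg = C.chebinterpolate(lambda t: log_term(t) * (F(t) - taylor(t)), deg)

tail_raw = np.max(np.abs(a_raw[40:]))
tail_reg = np.max(np.abs(a_reg[40:]))
# The subtraction suppresses the high-order coefficients by orders of
# magnitude, since the residual factor vanishes like t^8 at the
# near-singularity of the logarithm.
assert tail_reg < 1e-3 * tail_raw
```

The subtracted Taylor part is then integrated analytically against the logarithm, exactly as described for (\[eq:logreg\]).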
In this way one can ensure that the line integrals are calculated in the whole space-time with machine precision: close to the Newtonian limit, we use an analytically known test function to check the integration routine; for general situations we check the quality of the approximation of the integrand by Chebyshev polynomials via the spectral coefficients, which have to become smaller than $10^{-14}$.

Theta-functions
---------------

The theta series (\[theta\]) for the Riemann theta-function (i.e. the theta-function in (\[theta\]) with zero characteristic; theta-functions with characteristics follow from (\[theta2\])) is approximated by the sum $$\Theta(\mathbf{x}|\mathbf{B}) =\sum_{n_{1}=-N}^{N}\sum_{n_{2}=-N}^{N}\exp\left\{ \frac{1}{2}n_{1}^{2}B_{11}+n_{1}n_{2}B_{12}+\frac{1}{2}n_{2}^{2}B_{22} +n_{1}x_{1}+n_{2}x_{2}\right\}. \label{eq:thetasum}$$ The value of $N$ is determined by the condition that terms in the series (\[theta\]) for $n>N$ are strictly smaller than some threshold value $\epsilon$, which is taken to be of the order of $10^{-16}$. To this end we determine the eigenvalues of $\mathbf{B}$ and demand that $$N> -\frac{1}{B_{max}}\left(||\mathbf{x}||+\sqrt{||\mathbf{x}||^{2} +2\ln \epsilon B_{max}}\right) \label{eq:N},$$ where $B_{max}$ is the real part of the eigenvalue with maximal real part ($\mathbf{B}$ is negative definite). For a more sophisticated analysis of theta summations see [@deconinck03]. In general position we find values of $N$ between 4 and 8. For very large values of $\zeta$ close to the axis, $N$ can become larger than 40, which however did not lead to any computational problems. To treat more extreme cases it could be helpful to take into account that the eigenvalues of $\mathbf{B}$ can differ by more than an order of magnitude in our example. In these cases a summation over an ellipse rather than a circle in the $(n_{1},n_{2})$ plane, i.e. different limiting values for $n_{1}$ and $n_{2}$ as in [@deconinck03], will be more efficient.
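The truncated sum (\[eq:thetasum\]) can be sketched and checked against the (quasi-)periodicity properties of the theta-function (Python; the Riemann matrix below is an arbitrary admissible example, not one arising from $\mathcal{L}$):

```python
import numpy as np

def theta(x, B, N=10):
    """Genus-2 Riemann theta-function via the truncated sum over
    n in [-N, N]^2 of exp(0.5*<B n, n> + <n, x>)."""
    n1, n2 = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
    expo = (0.5 * (B[0, 0] * n1**2 + 2 * B[0, 1] * n1 * n2
                   + B[1, 1] * n2**2)
            + n1 * x[0] + n2 * x[1])
    return np.sum(np.exp(expo))

# A Riemann matrix: symmetric with negative definite real part.
B = np.array([[-2.0 + 0.5j, 0.3], [0.3, -1.5]])
x = np.array([0.1 + 0.2j, -0.3j])
e1 = np.array([1.0, 0.0])

# periodicity: theta(x + 2*pi*i*e_1) = theta(x)
assert abs(theta(x + 2j * np.pi * e1, B) - theta(x, B)) < 1e-12

# quasi-periodicity: theta(x + B e_1) = exp(-B_11/2 - x_1) * theta(x)
lhs = theta(x + B @ e1, B)
rhs = np.exp(-B[0, 0] / 2 - x[0]) * theta(x, B)
assert abs(lhs - rhs) < 1e-12
```

Both identities hold exactly for the infinite series, so they provide a cheap internal consistency test of any truncated implementation, in the same spirit as the period tests above.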
In our case the computation of the integrals entering the theta-functions was, however, always the most time-consuming part, so that an optimization of the summation of the theta-function would not have a noticeable effect. Due to the vectorization techniques in MATLAB, the theta summation always took less than 10 % of the calculation time for a value of the Ernst potential. Between 50 and 70 % of the processor time is used for the determination of the periods. On the low-end PCs used, the calculation time varied between 0.4 and 1.2 s, depending on the number of polynomials used. We show a plot of the real part of the Ernst potential for $\lambda=10$ and $\delta=1$ in Fig. \[fig:f\]. For $\rho,\zeta>1$, we use $1/\rho,1/\zeta$ as coordinates, which makes it possible to plot the whole space-time in Weyl coordinates. The non-smoothness of the coordinates across $\rho=1=1/\rho$ and $\zeta=1=1/\zeta$ is noticeable in the plot. Asymptotically the potential is equal to 1. The disk is situated in the equatorial plane between $\rho=0$ and $\rho=1$. At the disk, the normal derivatives of $f$ are discontinuous. The imaginary part of the Ernst potential in this case is given in Fig. \[fig:b\]. It vanishes at infinity and at the regular part of the equatorial plane. At the disk, the potential has a jump.

Integral identities {#sec:integrals}
===================

In the previous section we have tested the accuracy of the numerics locally, i.e. at single points in the space-time. Integral identities have the advantage that they provide a global test of the numerical precision since they sum up the errors. In addition they require the calculation of the potentials in extended regions of the space-time, which makes it possible to explore the numerics for rather general values of the physical coordinates.
The identities we are considering in the following are the well-known equivalence of a mass calculated at the disk (the Komar mass) and the ADM mass determined at infinity, see [@komar; @wald], and a generalization of the Newtonian virial identity, see [@virial] and the appendix. The derivatives of the Ernst potential occurring in the integrands can be related to derivatives of theta-functions, see [@prd2]. Since we are interested here in the numerical treatment of theta-functions with spectral methods, we determine the derivatives with spectral methods, too (see section 3). The integrals are again calculated with the Chebyshev integration routine. The main problem in this context is the singular behavior of the integrands, e.g. at the disk, which is a singularity of the space-time. As before, this will lead to problems in the approximation of these terms via Chebyshev polynomials. This can lead to a drop in accuracy that is mainly due to numerical errors in the evaluation of the integrand, not of the potentials which we want to test. An important point is therefore the use of integration variables which are adapted to the possible singularities. Mass equalities --------------- The equality between the ADM mass and the Komar mass provides a test of the numerical treatment of the elliptic theta-functions at the disk by means of the elliptic theta-functions on the axis. Since this equality is not implemented in the code, it provides a strong test. The Komar mass at the disk is given by formula (\[virial2\]) of the appendix. In the example we are considering here, the normal derivatives at the disk can be expressed via tangential derivatives (see [@prd4]), which makes a calculation of the derivatives solely within the disk possible. 
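For readers unfamiliar with spectral differentiation, the standard Chebyshev differentiation matrix (here in the classical form popularized by Trefethen) shows how such derivatives are obtained on collocation points; this is a generic illustration, not the routine used in the paper:

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev differentiation matrix D and nodes x on [-1, 1]
    such that (D @ f(x)) approximates f'(x) with spectral accuracy."""
    x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev-Lobatto nodes
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    # negative-sum trick: diagonal fixed so that D annihilates constants
    D -= np.diag(D.sum(axis=1))
    return D, x
```

For analytic functions the error of `D @ f` decays exponentially with `n`, which is the behavior exploited throughout the paper's spectral derivative and integration routines.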
We implement the Komar mass in the form $$m_{K}= \int_{0}^{1}d\rho\frac{b_{\rho}}{4\Omega^{2}\sqrt{\rho^{2}-\delta f^{2}+2f/\lambda}} \left(f+\frac{\Omega^{2}}{f}(\rho^{2}-a^{2}f^{2})\right) \label{eq:komar2}.$$ The integrand is known to vanish as $\sqrt{1-\rho^{2}}$ at the rim of the disk, which is the typical behavior for such disk solutions. Since $\sqrt{1-\rho^{2}}$ is not analytic in $\rho$, an expansion of the integrand (\[eq:komar2\]) in Chebyshev polynomials in $\rho$ would not be efficient. We will thus use $t= \sqrt{1-\rho^{2}}$ as the integration variable. This takes care of the behavior at the rim of the disk. Since in general the integrand in (\[eq:komar2\]) depends on $\rho^{2}$, this variable can be used in the whole disk. In the ultra-relativistic limit for $\delta\neq 0$, the function $f$ vanishes as $\rho$. In such cases it is convenient either to take two domains of integration or to use a different variable of integration. We chose the second approach with $\rho=\sin x$ (this corresponds to the disk coordinates (\[eq:diskcoor\])). Yet, strongly relativistic situations still lead to problems, since $f$ vanishes in this case at the center of the disk, as does $b_{\rho}$, which leads to a ‘0/0’ limit. In Fig. \[fig:mtest\] one can see that the masses are in general equal to the order of $10^{-14}$. In these calculations, between 128 and 256 polynomials were used. We show the dependence for $\gamma=0.7$ and several values of $\epsilon$, as well as for $\epsilon=0.8$ and several values of $\gamma$. The accuracy drops in the strongly relativistic, almost static situations ($\epsilon$ close to 1, $\gamma$ close to zero) since the Riemann surface is almost degenerate in this case ($\beta\to 0$). In the ultra-relativistic limit for $\delta=0$, the situation is no longer asymptotically flat, which implies that the masses formally diverge. For $\epsilon=0.95$, the masses are still equal to the order of $10^{-13}$. 
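The effect of such adapted integration variables can be illustrated on the model integral $\int_0^1\sqrt{1-\rho^2}\,d\rho=\pi/4$, whose integrand has the same square-root behavior at the rim. The sketch below uses Gauss-Legendre quadrature as a stand-in for the paper's Chebyshev integration routine; after the substitution $\rho=\sin x$ the integrand becomes the entire function $\cos^2 x$, and spectral accuracy is restored:

```python
import numpy as np

def gl_integral(func, a, b, n):
    """Gauss-Legendre quadrature of func over [a, b] with n nodes."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(weights, func(t))

exact = np.pi / 4
# direct integration: square-root branch point at rho = 1 ruins convergence
err_direct = abs(gl_integral(lambda r: np.sqrt(1 - r**2), 0.0, 1.0, 16) - exact)
# after rho = sin x: sqrt(1 - rho^2) d rho = cos^2 x dx, an entire integrand
err_sub = abs(gl_integral(lambda x: np.cos(x)**2, 0.0, np.pi / 2, 16) - exact)
```

With only 16 nodes the substituted integral is exact to machine precision, while the direct one converges only algebraically because of the branch point.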
Not surprisingly, the accuracy drops for $\epsilon=0.9996$ to the order of $10^{-4}$. Virial-type identities ---------------------- Generalizations of the Newtonian virial theorem are used in numerics (see [@virial]) as a test of the quality of the numerical solution of the Einstein equations. Since they involve integrals over the whole space-time, they test the numerics globally and thus provide a valid criterion for the entire range of the physical coordinates. The identity which is checked here is a variant of the one given in [@virial], which is adapted to possible problems at the zeros of the real part of the Ernst potential, the so-called ergosphere; see [@prd4] for the disk solutions discussed here. Eq. (\[virial20\]) relates integrals of the Ernst potential and its derivatives over the whole space-time to corresponding integrals at the disk. Since the numerics at the disk has been tested above, this provides a global test of the evaluation of the Ernst potential. As before, derivatives and integrals will be calculated via spectral methods. The problem one faces when integrating over the whole space-time is the singular behavior of the fields on the disk, which represents a discontinuity of the Ernst potential. The Weyl coordinates in which the solution is given are not optimal to describe the geometry near the disk. Hence, a huge number of polynomials is necessary to approximate the integrands in (\[virial20\]). Even with $512$ polynomials for each coordinate, the coefficients of an expansion in Chebyshev polynomials did not drop below $10^{-6}$ in more relativistic situations. Even though the computational limits are reached, the identity (\[virial20\]) is only satisfied to the order of $10^{-8}$, which is clearly related to the poor choice of coordinates. 
We therefore use for this calculation so-called disk coordinates $\eta$, $\theta$ (see [@bitr]), which are related to the Weyl coordinates via $$\rho+i\zeta=\cosh(\eta+i\theta) \label{eq:diskcoor}.$$ The coordinate $\eta$ varies between $\eta=0$, the disk, and infinity, the coordinate $\theta$ between $-\pi/2$ and $\pi/2$. The axis is given by $\theta=\pm \pi/2$, the equatorial plane in the exterior of the disk by $\theta=0$ and $\eta\neq0$. Because of the equatorial symmetry, we consider only positive values of $\theta$. The surfaces of constant $\eta$ are confocal ellipsoids which approach the disk for small $\eta$. For large $\eta$, the coordinates are close to spherical coordinates. To evaluate the integrals in (\[virial20\]), we perform the $\eta$-integration up to a value $\eta_{0}$ as well as the $\theta$-integration with the Chebyshev integration routine. The parameter $\eta_{0}$ is chosen such that the deviation from spherical coordinates becomes negligible, typically $\eta_{0}=15$. The integral from $\eta_{0}$ to infinity is then carried out analytically with the asymptotic formula (\[eq:ernstinfinity\]). It turns out that an expansion in $64$ to $128$ polynomials for each coordinate is sufficient to provide a numerically optimal approximation within the used precision. This illustrates the convenience of the disk coordinates in this context. The virial identity is then satisfied to the order of $10^{-12}$. We plot the deviation of the sum of the integrals in (\[virial20\]) from zero for several values of $\lambda$ and $\gamma$ in Fig. \[fig:virialtest\]. The drop in accuracy for strongly relativistic, almost static situations ($\gamma$ small and $\epsilon$ close to 1) is again due to the almost degenerate Riemann surface. The lower accuracy in the case of strongly relativistic situations for $\gamma=1$ reflects the fact that the disk is shrinking to a point in this limit. 
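The map (\[eq:diskcoor\]) and its stated limiting cases (disk at $\eta=0$, axis at $\theta=\pm\pi/2$, confocal ellipsoids at constant $\eta$) are easy to check numerically; the helper below (our naming, for illustration) converts disk coordinates to Weyl coordinates:

```python
import numpy as np

def disk_to_weyl(eta, theta):
    """Map disk coordinates (eta, theta) to Weyl coordinates (rho, zeta)
    via rho + i*zeta = cosh(eta + i*theta)."""
    w = np.cosh(eta + 1j * theta)
    return w.real, w.imag
```

Since $\cosh(\eta+i\theta)=\cosh\eta\cos\theta+i\sinh\eta\sin\theta$, a surface of constant $\eta$ satisfies $(\rho/\cosh\eta)^2+(\zeta/\sinh\eta)^2=1$, i.e. it is an ellipse with the disk rim $\rho=1$ as focal ring.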
To maintain the needed resolution one would have to use more polynomials in the evaluation of the virial-type identity, which was not possible on the used computers. Testing `LORENE` {#sec:lorene} ================ One purpose of exact solutions of the Einstein equations is to provide test-beds for numerical codes to check the quality of the numerical approximation. In the previous sections we have established that the theta-functional solutions can be numerically evaluated to the order of machine precision, which implies that they can be used in this respect. The code we are considering here is a C++ library called `LORENE` [@Lorene], which was constructed to treat problems from relativistic astrophysics such as rapidly rotating neutron stars. The main idea is to solve Poisson-type equations iteratively via spectral methods. To this end, an equation such as the Ernst equation (\[ernst1\]) is written in the form $$\Delta \mathcal{F} = \mathcal{G}(\mathcal{F},r,\theta,\phi) \label{eq:poisson},$$ where spherical coordinates $r$, $\theta$, $\phi$ are used, and where $\mathcal{G}$ is some possibly non-linear functional of $\mathcal{F}$ and the coordinates. The system (\[eq:poisson\]) is to be solved for $\mathcal{F}$, which can be a vector. In an iterative approach, the equation is rewritten as $$\Delta \mathcal{F}_{n+1} = \mathcal{G}(\mathcal{F}_{n},r,\theta,\phi),\quad n=1,2,\ldots \label{eq:poisson2}.$$ Starting from some initial function $\mathcal{F}_{0}$, in each step of the iteration a Poisson equation is solved for a known right-hand side. For the stationary axisymmetric Einstein equations which we are considering here, it was shown in [@schaudt] that this iteration will converge exponentially for small enough boundary data if the initial values are close to the solution of the equation in some Banach space norm. 
It turns out that one can always start the iteration with Minkowski data, but it is necessary to use a relaxation: instead of the solution $\mathcal{F}_{n+1}$ of (\[eq:poisson2\]), it is better to take the combination $\tilde{\mathcal{F}}_{n+1}=(1-\kappa)\mathcal{F}_{n+1}+\kappa \mathcal{F}_{n}$ with $\kappa\in ]0,1[$ (typically $\kappa=0.5$) as a new value in the source $\mathcal{G}_{n+1}$ to provide numerical stability. The iteration is in general stopped if $||\mathcal{F}_{n+1} -\mathcal{F}_{n}||<10^{-10}$. The Ernst equation (\[ernst1\]) is already in the form (\[eq:poisson\]), but it has the disadvantage that the equation is no longer strongly elliptic at the ergo-sphere, where $\Re(\mathcal{E})=0$. In physical terms, this apparent singularity is just a coordinate singularity, and the theta-functional solutions are analytic there. The Ernst equation in the form (\[eq:poisson\]) has a right-hand side of the form ‘$0/0$’ for $\Re \mathcal{E}=0$, which causes numerical problems especially in the iteration process, since the zeros of the numerator and the denominator will only coincide for the exact solution. The disk solutions we are studying here have ergo-spheres in the shape of cusped toroids (see [@prd4]). Therefore it is difficult to take care of the ‘$0/0$’ limit by using adapted coordinates. Consequently, the use of the Ernst picture is restricted to weakly relativistic situations without ergo-spheres in this framework. To be able to treat strongly relativistic situations, we use a different form of the stationary axisymmetric vacuum Einstein equations which is derived from the standard $3+1$-decomposition, see [@eric1]. We introduce the functions $\nu$ and $N_{\phi}$ via $$e^{2\nu}=\frac{\rho^{2}f}{\rho^{2}-a^{2}f^{2}},\quad N_{\phi}=\frac{\rho af^{2}}{\rho^{2}-a^{2}f^{2}}, \label{eq:nuN}$$ where $ae^{2U}$ is the $g_{t\phi}$ component of the metric leading to the Ernst potential, see (\[eq:wlp\]) in the appendix. 
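The structure of such a relaxed Poisson iteration can be sketched with a one-dimensional toy analogue (our construction, not `LORENE` code; the equation $u''=\sin(u)+f$, the finite-difference Poisson solver, and the manufactured solution $u_\ast(x)=x(1-x)$ are illustrative assumptions):

```python
import numpy as np

def solve_relaxed(n=101, kappa=0.5, tol=1e-12, max_iter=200):
    """Iterate u_{m+1}'' = sin(u_m) + f with homogeneous Dirichlet data,
    mixing old and new iterates with relaxation parameter kappa."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u_exact = x * (1.0 - x)
    f = -2.0 - np.sin(u_exact)   # manufactured so that u_exact solves the BVP
    m = n - 2                    # number of interior grid points
    # standard second-difference ("Poisson") matrix on the interior points
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    u = np.zeros(n)              # start from trivial ("Minkowski-like") data
    for _ in range(max_iter):
        u_new = np.zeros(n)
        u_new[1:-1] = np.linalg.solve(A, np.sin(u[1:-1]) + f[1:-1])
        u_next = (1.0 - kappa) * u_new + kappa * u   # relaxation step
        if np.max(np.abs(u_next - u)) < tol:
            return x, u_next, u_exact
        u = u_next
    return x, u, u_exact
```

Each step solves a linear Poisson problem with a frozen right-hand side, exactly as in (\[eq:poisson2\]); the relaxation damps the update and stabilizes the fixed-point iteration.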
Expressions for $a$ in terms of theta-functions are given in [@prd4]. The vacuum Einstein equations for the functions (\[eq:nuN\]) read $$\begin{aligned} \Delta \nu & = & \frac{1}{2}\rho^{2}e^{-4\nu}(N_{\phi,\rho}^{2}+ N_{\phi,\zeta}^{2}) \label{eq:nu}, \\ \Delta N_{\phi} -\frac{1}{\rho^{2}}N_{\phi}& = & 4\rho(N_{\phi,\rho}(e^{2\nu})_{\rho}+ N_{\phi,\zeta}(e^{2\nu})_{\zeta}). \label{eq:Npfi}\end{aligned}$$ By putting $V=N_{\phi}\cos\phi$ we obtain the flat 3-dimensional Laplacian acting on $V$ on the left-hand side, $$\Delta V = 4\rho(V_{\rho}(e^{2\nu})_{\rho}+ V_{\zeta}(e^{2\nu})_{\zeta}). \label{eq:V}$$ Since the function $e^{2\nu}$ can only vanish at a horizon, it is globally non-zero in the examples we are considering here. Thus the system of equations (\[eq:nu\]) and (\[eq:V\]) is strongly elliptic, even at an ergo-sphere. The disadvantage of this regular system is the non-linear dependence of the potentials $\nu$ and $N_{\phi}$ on the Ernst potential and $a$ via (\[eq:nuN\]). Thus we lose roughly one order of magnitude in accuracy due to rounding errors. Though we have shown in the previous sections that we can guarantee the numerical accuracy of the data for $f$ and $af$ to the order of $10^{-14}$, the values for $\nu$ and $V$ are only reliable to the order of $10^{-13}$. To test the spectral methods implemented in `LORENE`, we provide boundary data for the disk solutions discussed above on a sphere around the disk. For these solutions it would have been more appropriate to prescribe data at the disk, but `LORENE` was developed to treat objects of spherical topology such as stars, which suggests the use of spherical coordinates. It would be possible to include coordinates like the disk coordinates of the previous section in `LORENE`, but this is beyond the scope of this article. Instead we want to use the Poisson-Dirichlet routine, which solves a Dirichlet boundary value problem for the Poisson equation for data prescribed at a sphere. 
We prescribe the data for $\nu$ and $N_{\phi}$ on a sphere of radius $R$ and solve the system (\[eq:nu\]) and (\[eq:V\]) iteratively in the exterior of the sphere. If the iteration converges, we compare the numerical solution in the exterior of the sphere with the exact solution. Since spherical coordinates are not adapted to the disk geometry, a huge number of spherical harmonics would be necessary to approximate the potentials if $R$ is close to the disk radius. The limited memory on the used computers imposes an upper limit of 64 to 128 harmonics. We choose the radius $R$ and the number of harmonics such that the Fourier coefficients in $\theta$ drop below $10^{-14}$ to make sure that the provided boundary data contain the relevant information to the order of machine precision. The exterior of the sphere where the boundary data are prescribed is divided into two domains, one from $R$ to $2R$ and one from $2R$ to infinity. In the second domain $1/r$ is used as a coordinate. For the $\phi$ dependence, which is needed only for the operator in (\[eq:V\]), 4 harmonics in $\phi$ are sufficient. Since `LORENE` is adapted to the solution of the Poisson equation, it is to be expected that it reproduces the exact solution best for nearly static situations, since the static solutions solve the Laplace equation. The most significant deviations from the exact solution are therefore expected for $\delta=0$. For the case $\lambda=3$, we consider 32 harmonics in $\theta$ on a sphere of radius $R=1.5$. The iteration is stopped if $||\mathcal{F}_{n+1}-\mathcal{F}_{n}||<5\times 10^{-10}$, which is the case in this example after 90 steps. The exact solution is reproduced to the order of $10^{-11}$. The absolute value of the difference between the exact and the numerical solution on a sphere of radius 3 is plotted in Fig. \[fig:maxdifftheta\] as a function of $\theta$. There is no significant dependence of the error on $\theta$. The maximal deviation is typically found on or near the axis. 
As can be seen from Fig. \[fig:maxdiffr\], which shows the dependence on $r$ along the axis, the error decreases almost linearly with $1/r$ except for some small oscillations near infinity. We have plotted the maximal difference between the numerical and the exact solution for a range of the physical parameters $\lambda$ and $\delta$ in Fig. \[fig:gamma\]. As can be seen, the expectation is met that the deviation from the exact solution increases if the solution becomes more relativistic (larger $\epsilon$). As already mentioned, the solution can be considered as exactly reproduced if the deviation is below $10^{-13}$. Increasing the value of $\gamma$ for fixed $\epsilon$ leads to less significant effects, though the solutions become less static with increasing $\gamma$. For $\delta=0$, the ultra-relativistic limit $\lambda\to 4.629\ldots$ corresponds to a space-time with a singular axis which is not asymptotically flat, see [@prd4]. Since `LORENE` expands all functions in a Galerkin basis with regular axis in an asymptotically flat setting, solutions close to this singular limit cannot be approximated. Convergence gets much slower and can only be achieved with considerable relaxation. For $\lambda=4$ and $\delta=0$ we needed nearly 2000 iterations with a relaxation parameter of $\kappa=0.9$. The approximation is rather crude (of the order of one percent). For higher values of $\lambda$ no convergence could be obtained. This is, however, due to the singular behavior of the solution in the ultra-relativistic limit. In all other cases, `LORENE` is able to reproduce the solution to the order of $10^{-11}$ and better; more static and less relativistic cases are reproduced with the provided accuracy. Conclusion {#sec:concl} ========== In this article we have presented a scheme based on spectral methods to treat hyperelliptic theta-functions numerically. It was shown that an accuracy of the order of machine precision could be obtained with an efficient code. 
As shown, spectral methods are very convenient if analytic functions are approximated. Close to singularities, such as the degeneration of the Riemann surface, analytic techniques must be used to end up with analytic integrands in the example discussed. The obtained numerical data were used to provide boundary values for the code `LORENE`, which made possible a comparison of the numerical solution of the boundary value problem with the numerically evaluated theta-functions. For a large range of the physical parameters the numerical solution was of the same quality as the provided data. The main errors in `LORENE` are introduced by rounding errors in the iteration. This shows that spectral methods provide a reliable and efficient numerical treatment both for elliptic equations and for hyperelliptic Riemann surfaces. However, to maintain the global quality of the numerical approximation, an analytical understanding of the solutions is necessary in order to treat their non-analyticities. Einstein equations and integral identities ========================================== The Ernst equation has a geometric interpretation in terms of the stationary axisymmetric Einstein equations in vacuum. The metric can be written in this case in the Weyl-Lewis-Papapetrou form (see [@exac]) $$ds^{2}=g_{ab}dx^{a}dx^{b}=-f(dt+ad\phi)^{2}+(e^{2k}(d\rho^{2}+d\zeta^{2}) +\rho^{2}d\phi^{2})/f \label{eq:wlp},$$ where $\rho$ and $\zeta$ are Weyl’s canonical coordinates and $\partial_{t}$ and $\partial_{\phi}$ are the commuting, asymptotically timelike and spacelike Killing vectors, respectively. In this case the vacuum field equations are equivalent to the Ernst equation (\[ernst1\]) for the complex potential $\mathcal{E}$. For a given Ernst potential, the metric (\[eq:wlp\]) can be constructed as follows: the metric function $f$ is equal to the real part of the Ernst potential. 
The functions $a$ and $k$ can be obtained via a line integration from the equations $$a_{\xi}=2\rho\frac{(\mathcal{E}-\bar{\mathcal{E}})_{\xi}}{ (\mathcal{E}+\bar{\mathcal{E}})^{2}} \label{axi},$$ and $$k_{\xi}=(\xi-\bar{\xi}) \frac{\mathcal{E}_{\xi}\bar{\mathcal{E}}_{\xi}}{ (\mathcal{E}+\bar{\mathcal{E}})^{2}}\;. \label{kxi}$$ This implies that $a$ is the dual of the imaginary part of the Ernst potential. The equation (\[kxi\]) for $k$ follows from the equations $$R_{\alpha\beta}=\frac{1}{2f^{2}}\Re(\mathcal{E}_{\alpha} \bar{\mathcal{E}}_{\beta}),\quad \alpha,\beta=1,2,3 \label{eq:ricci},$$ where $R$ is the (three-dimensional) Ricci tensor corresponding to the spatial metric $\mathrm{h}=\mbox{diag}(e^{2k},e^{2k},\rho^{2})$. This reflects a general structure of the vacuum Einstein equations in the presence of a Killing vector. For the Ricci scalar one finds $$-\frac{1}{2}e^{2k}R = k_{\rho\rho}+k_{\zeta\zeta} \label{virial17}.$$ We denote by $h$ the determinant of the metric $\mathrm{h}$. The Komar integral [@komar; @wald] of the twist of the timelike Killing vector $\xi=\partial_{t}$ over the whole spacetime establishes the equivalence between the asymptotically defined ADM mass and the Komar mass $m_{K}$, $$2 \int_{disk}^{}dV\left(T_{ab}-\frac{1}{2}g_{ab}T^{c}_{c}\right)n^{a} \xi^{b} \label{virial2}=: m_{K},$$ where the integration is carried out over the disk, where $n_{a}$ is the normal at the disk, and where $T_{ab}$ is the energy momentum tensor of the disk given in [@prd4]. In other words the ADM mass can be calculated either asymptotically or locally at the disk. 
To obtain an identity which does not involve only surface integrals, we consider as in [@virial] an integral over the trace of equation (\[eq:ricci\]) for the Ricci-tensor, $$R=\frac{h^{\alpha\beta}\mathcal{E}_{\alpha} \bar{\mathcal{E}}_{\beta}}{2f^{2}} \label{virial4}.$$ To avoid numerical problems at the set of zeros of $f$, the so-called ergo-sphere (see [@prd4] for the disk solutions studied here), we multiply both sides of equation (\[eq:ricci\]) by $f^{3}$. Integrating the resulting relation over the whole space-time, we find after partial integration $$-\int_{0}^{1}d\rho \rho f^{3}k_{\zeta}+ \int_{0}^{\infty}d\rho\int_{-\infty}^{\infty}d\zeta ((\rho f^{3})_{\rho}k_{\rho}+(\rho f^{3})_{\zeta}k_{\zeta})= \int_{0}^{\infty}d\rho\int_{-\infty}^{\infty}d\zeta \rho f( \mathcal{E}_{\rho}\bar{\mathcal{E}}_{\rho}+ \mathcal{E}_{\zeta}\bar{\mathcal{E}}_{\zeta}) \label{eq:virial5};$$ here the only contributions of a surface integral arise at the disk, since $k\propto 1/r^{2}$ for $r\to\infty$ and since the axis is regular ($k$ vanishes on the axis). If we replace $k$ via (\[kxi\]), we end up with an identity for the Ernst potential and its derivatives, $$\begin{aligned} && -\int_{0}^{1}d\rho \rho^{2}f (\mathcal{E}_{\rho} \bar{\mathcal{E}}_{\zeta} +\mathcal{E}_{\zeta}\bar{\mathcal{E}}_{\rho}) +\frac{3}{2}\int_{0}^{\infty}\int_{0}^{\infty}d\rho d\zeta \rho^{2}(\mathcal{E}_{\rho}(\bar{\mathcal{E}}_{\rho}^{2} +\bar{\mathcal{E}}_{\zeta}^{2})+\bar{\mathcal{E}}_{\rho} (\mathcal{E}_{\rho}^{2}+\mathcal{E}_{\zeta}^{2})) \nonumber\\ && = 2\int_{0}^{\infty}\int_{0}^{\infty}d\rho d\zeta \rho f \mathcal{E}_{\zeta} \bar{\mathcal{E}}_{\zeta} \label{virial20}.\end{aligned}$$ This identity (as the identity given in [@virial]) can be seen as a generalization of the Newtonian virial theorem. The relation (\[virial20\]) coincides with the corresponding relation of [@virial] only in the Newtonian limit. 
This reflects the fact that generalizations of a Newtonian result to a general relativistic setting are not unique. Our formulation is adapted to the Ernst picture and avoids problems at the ergo-spheres; it thus seems well suited to test the numerics for Ernst potentials in terms of theta-functions. Acknowledgment {#acknowledgment .unnumbered} ============== We thank A. Bobenko, D. Korotkin, E. Gourgoulhon and J. Novak for helpful discussions and hints. CK is grateful for financial support by the Marie-Curie program of the European Union and the Schloessmann foundation. [99]{} V.A. Belinskii, V.E. Zakharov, Integration of the Einstein equations by the methods of inverse scattering theory and construction of explicit multisoliton solutions, *Sov. Phys. JETP* [**48**]{} (1978) 985-994. E. D. Belokolos, A. I. Bobenko, V. Z. Enolskii, A. R. Its and V. B. Matveev, *Algebro-Geometric Approach to Nonlinear Integrable Equations*, Berlin: Springer, (1994). J. Binney and S. Tremaine, *Galactic Dynamics* (Princeton Univ. Press, Princeton, 1987). W. L. Briggs and V. E. Henson, [*The DFT, an owner’s manual for the discrete Fourier transform*]{}, Siam Philadelphia, 1995. B. Deconinck and M. van Hoeij, Computing Riemann matrices of algebraic curves, *Physica D*, **152-153**, 28 (2001). B. Deconinck, M. Heil, A. Bobenko, M. van Hoeij and M. Schmies, Computing Riemann Theta Functions, to appear in *Mathematics of Computation*. B.A. Dubrovin, V.B. Matveev, S.P. Novikov, Non-linear equations of Korteweg-de Vries type, finite-zone linear operators, and Abelian varieties, *Russian Math. Surveys*, [**31**]{} 59-146 (1976). B.A. Dubrovin, Theta functions and non-linear equations, *Russ. Math. Surv.* **36**, 11 (1981). V.Z. Enolski, P.H. Richter, Periods of hyperelliptic integrals expressed in terms of $\theta$-constants by means of Thomae formulae, to appear in *Phil. Trans. London Math. Soc.*, (2003). F.J. Ernst, New formulation of the axially symmetric gravitational field problem, *Phys. 
Rev.* **167**, 1175 (1968). J.D. Fay, *Theta-functions on Riemann surfaces*, Lect. Notes in Math. [**352**]{}, Springer (1973). B. Fornberg, *A practical guide to pseudospectral methods*, Cambridge University Press, Cambridge (1996). J. Frauendiener and C. Klein, Exact relativistic treatment of stationary counter-rotating dust disks: physical properties, *Phys. Rev. D* **63**, 84025 (2001). P. Gianni, M. Seppälä, R. Silhol, B. Trager, Riemann Surfaces, Plane Algebraic Curves and Their Period Matrices, *J. Symb. Comp.* **26**, 789 (1998). E. Gourgoulhon and S. Bonazzola, A formulation of the virial theorem in general relativity, *Class. Quant. Grav.* **11**, 443 (1994). E. Gourgoulhon, P. Haensel, R. Livine, E. Paluch, S. Bonazzola, and J.-A. Marck, Fast rotation of strange stars, *Astron. and Astrophys.*, **349** 851 (1999). M. Heil, *Numerical Tools for the study of finite gap solutions of integrable systems*, PhD thesis, TU Berlin (1995). M. van Hoeij, An algorithm for computing an integral basis in an algebraic function field, *J. Symb. Comput.* **18**, 353 (1994). A.R. Its, V.B. Matveev, Schrödinger operators with finite-gap spectrum and N-soliton solutions of the Korteweg-de Vries equation, *Theor. Math. Physics* [**23**]{} (1), 51-67 (1975). C. Klein and O. Richter, On a class of physically realistic solutions to the Ernst equation, *Phys. Rev. Lett.*, **79**, 565 (1997). C. Klein and O. Richter, Physically realistic solutions to the Ernst equation on hyperelliptic Riemann surfaces, *Phys. Rev. D*, **58**, CID 124018 (1998). C. Klein and O. Richter, Exact relativistic gravitational field of a stationary counter-rotating dust disk, *Phys. Rev. Lett.* **83**, 2884 (1999). C. Klein, Exact relativistic treatment of stationary counter-rotating dust disks: boundary value problems and solutions, *Phys. Rev. D*, **63** 64033 (2001). A. Komar, ‘Covariant Conservation Laws in General Relativity’, *Phys. Rev.*, **113**, 934 (1959). D. 
Korotkin, Finite-gap solutions of the stationary axisymmetric Einstein equation, *Theor. Math. Phys.* [**77**]{} 1018-1031 (1989). D. Korotkin and V. Matveev, Theta Function Solutions of the Schlesinger System and the Ernst Equation, *Funct. Anal. Appl.*, **34** 1 (2000). D. Kramer, H. Stephani, E. Herlt and M. MacCallum, *Exact Solutions of Einstein’s Field Equations*, Cambridge: CUP (1980). I.M. Krichever, *Russ. Math. Surveys*, [**44**]{} No.32 144-225 (1989). D. Maison, Are the stationary axially symmetric Einstein equations completely integrable?, *Phys. Rev. Lett.* [**41**]{} (1978) 521-524. G. Neugebauer, R. Meinel, General relativistic gravitational field of the rigidly rotating disk of dust: Solution in terms of ultraelliptic functions, *Phys. Rev. Lett.* [**75**]{} 3046-3048 (1995). U. Schaudt, On the Dirichlet problem for the stationary and axisymmetric Einstein equations, [*Comm. Math. Phys.*]{}, [**190**]{}, 509, (1998). M. Seppälä, Computation of period matrices of real algebraic curves, *Discrete Comput. Geom.* **11**, 65 (1994). C.L. Tretkoff and M.D. Tretkoff, Combinatorial group theory, Riemann surfaces and differential equations, *Contemp. Math.* **33**, 467 (1984). R. Wald, *General Relativity*, Chicago, London: The University of Chicago Press (1984). www.lorene.obspm.fr www-sfb288.math.tu-berlin.de/~jtem/
--- abstract: 'The control of the spatial distribution of micrometer-sized dust particles in capacitively coupled radio frequency discharges is relevant for research and applications. Typically, dust particles in plasmas form a layer located at the sheath edge adjacent to the bottom electrode. Here, a method of manipulating this distribution by the application of a specific excitation waveform, i.e. two consecutive harmonics, is discussed. Tuning the phase angle $\theta$ between the two harmonics allows one to adjust the discharge symmetry via the Electrical Asymmetry Effect (EAE). An adiabatic (continuous) phase shift leaves the dust particles at an equilibrium position close to the lower sheath edge. Their levitation can be correlated with the electric field profile. By applying an abrupt phase shift the dust particles are transported between both sheaths through the plasma bulk and partially reside at an equilibrium position close to the upper sheath edge. Hence, the potential profile in the bulk region is probed by the dust particles, providing indirect information on plasma properties. The respective motion is understood by an analytical model, showing both the limitations and possible ways of optimizing this sheath-to-sheath transport. A classification of the transport depending on the change in the dc self bias is provided, and the pressure dependence is discussed.' 
address: | $^1$ Institute for Plasma and Atomic Physics, Ruhr University Bochum, 44780 Bochum, Germany\ $^2$ Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, H-1525 Budapest POB 49, Hungary\ $^3$ Department of Electronics, Kyushu University, 819-0395 Fukuoka, Japan author: - 'Shinya Iwashita$^1$, Edmund Schüngel$^1$, Julian Schulze$^1$, Peter Hartmann$^2$, Zoltán Donkó$^2$, Giichiro Uchida$^3$, Kazunori Koga$^3$, Masaharu Shiratani$^3$, Uwe Czarnetzki$^1$' title: 'Transport control of dust particles via the Electrical Asymmetry Effect: experiment, simulation, and modeling' --- Introduction {#Introduction} ============ Dusty plasmas exhibit interesting physical phenomena [@DustyPlasmasBasic; @FortovPysRep2005] such as the interaction of the plasma sheath [@ParticleSheath1; @ParticleSheath2; @ParticleSheath3; @Melzer] and bulk [@ParticleBulk] with the dust particles, the occurrence of waves [@ParticleWaves1] and instabilities [@ParticleInstab1; @ParticleInstab2; @ParticleInstab3], phase transitions [@Phase1; @Phase2; @Phase3; @Phase4; @Phase5], and the formation of Coulomb crystals [@Thomas; @Chu; @Hayashi; @Arp]. They have drawn great attention for industrial applications because dust particles in plasmas play various roles: on one hand, the accumulation of dust particles is a major problem for device operation in fusion plasma reactors as well as for semiconductor manufacturing [@Bonitz; @Shukla; @Bouchoule; @Krasheninnikov; @Selwyn], i.e. they are impurities to be removed. On the other hand, they are of general importance for deposition purposes [@DustDepo1; @DustDepo2] and it is well known that an enhanced control of such dust particles in plasmas has the potential to realize the bottom-up approach of fabricating novel materials, e.g., microelectronic circuits, medical components, and catalysts [@ShirataniJPD11; @Koga; @Wang; @Yan; @Fumagalli; @Kim]. 
In all cases the manipulation of dust particles, which is realized by controlling the forces exerted on them, such as electrostatic, thermophoretic, ion drag, and gravitational forces, or externally applied ones, e.g., created by a laser beam [@Nosenkoa; @MorfillPoP10; @Laser1; @Laser2], is crucially important. Furthermore, the use of dust particles as probes of these forces, revealing plasma properties, is a current topic of research [@MorfillPRL04; @DustProbes2].\ We have developed a novel method to control the transport of dust particles in a capacitively coupled radio frequency (CCRF) discharge by controlling the electrical symmetry of the discharge [@Iwashita]. Alternative dust manipulation methods using electrical pulses applied to wires have also been reported [@SamsonovPRL2002; @PustylnikPRE2006; @KnapekPRL2007; @PustylnikPoP2009]. Our dust manipulation method is based on the Electrical Asymmetry Effect (EAE) [@Heil]. The EAE allows one to generate and control a dc self bias, $\eta$, electrically even in geometrically symmetric discharges. It is based on driving one electrode with a particular voltage waveform, $\phi_{\sim}(t)$, which is the sum of two consecutive harmonics with an adjustable phase shift, $\theta$: $$\label{EQappvol} \phi_{\sim}(t)=\frac{1}{2}\phi_0[\cos(2\pi f t+\theta)+\cos(4 \pi f t) ].$$ Here, $\phi_0$ is the identical amplitude of both harmonics. In such discharges, $\eta$ is an almost linear function of $\theta$. In this way, separate control of the mean ion energy and flux at both electrodes is realized in an almost ideal way. At low pressures of a few Pa, the EAE additionally allows one to control the maximum sheath voltage and width at each electrode by adjusting $\theta$ [@Heil], resulting in the control of forces exerted on dust particles, such as electrostatic and ion drag forces. In contrast to the pulsing methods mentioned above, the change in the phase angle does not require a change in the applied power or RF amplitude. 
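The origin of the amplitude asymmetry behind the EAE can be seen directly from Eq. (\[EQappvol\]): the positive and negative excursions of the dual-frequency waveform are unequal, and their imbalance is reversed by the phase. A short numerical check (illustrative only; normalized units with $\phi_0=1$ and the fundamental frequency scaled out of the time axis):

```python
import numpy as np

def waveform_extrema(theta, phi0=1.0, samples=200001):
    """Max and min of phi(t) = phi0/2 [cos(tau + theta) + cos(2 tau)]
    over one fundamental period, tau = 2*pi*f*t, sampled on a fine grid."""
    tau = np.linspace(0.0, 2.0 * np.pi, samples)
    phi = 0.5 * phi0 * (np.cos(tau + theta) + np.cos(2.0 * tau))
    return phi.max(), phi.min()
```

For $\theta=0$ one finds $\phi_{max}=\phi_0$ but $|\phi_{min}|\approx0.56\,\phi_0$; for $\theta=\pi/2$ the situation is mirrored. It is this phase-controlled reversal of the waveform asymmetry that drives the sign change of the dc self bias $\eta$.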
Furthermore, it is a radio frequency technique, i.e. no dc voltage is applied externally and the EAE is, therefore, applicable to capacitive discharge applications with dielectric electrode surfaces, without the need for additional electrodes or power supplies for the pulsing. The EAE can be optimized with respect to the control range of the dc self bias by choosing non-equal voltage amplitudes for the individual harmonics [@EAE7] or by adding more consecutive harmonics to the applied voltage waveform [@EAE11; @BoothJPD12]. In this study we intend to describe the basic mechanisms of the manipulation of the dust particle distribution in electrically asymmetric CCRF discharges. Thus, we restrict ourselves to the simplest case described by Eq. (\[EQappvol\]). It is important for the analysis carried out in this work that the dust density is sufficiently low so that the plasma parameters are not disturbed by the dust particles. A large concentration of dust particles disturbs the electron density and can cause a significant change of the dc self bias when distributed asymmetrically between the sheaths [@Boufendi2011; @Watanabe1994; @EddiJPD2013]. The critical parameter for the disturbance is Havnes’ value: $P = 695 T_e r_d n_d / n_i$, where $T_e$, $r_d$, $n_d$ and $n_i$ are electron temperature, radius of dust particles, their number density and ion density, respectively [@Thomas; @Havnes1990]. $P$ is basically the ratio of the charge density of dust particles to that of ions. The concentration of dust particles disturbs the electron density for $P > 1$, while it does not for $P \ll 1$. In the critical region ${P_c} = 0.1-1$ the charge of the dust particles becomes significant in the total charge balance [@Havnes1990]. We calculate $P \approx 10^{-3}$ for our experiment, which is well below $P_c$. For this estimation, direct images of dust particles were analyzed and a mean distance between particles of about 1 mm was determined.
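This estimate can be reproduced in a few lines. The sketch below assumes the standard convention for Havnes' parameter ($T_e$ in eV, $r_d$ in $\mu$m, densities in m$^{-3}$) and illustrative input values: $T_e$ = 3 eV, $r_d$ = 0.75 $\mu$m, $n_d \sim 10^9$ m$^{-3}$ from the $\approx$ 1 mm inter-particle distance, and a bulk ion density of order $10^{15}$-$10^{16}$ m$^{-3}$; the result is of the order $10^{-4}$-$10^{-3}$, consistent with the value quoted above:

```python
def havnes_P(T_e_eV, r_d_um, n_d, n_i):
    """Havnes' parameter P = 695 * T_e * r_d * n_d / n_i
    (T_e in eV, r_d in micrometers, densities in m^-3)."""
    return 695.0 * T_e_eV * r_d_um * n_d / n_i

# Illustrative (assumed) values for this discharge
P = havnes_P(3.0, 0.75, 1.0e9, 6.6e15)
print(f"P = {P:.1e}")   # far below the critical region P_c = 0.1-1
```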
Thus, the concentration of dust particles is quite low in this study and they do not disturb the plasma.\ This paper is structured in the following way: this introduction is followed by a description of the methods used in this work. There, information on the experimental setup as well as the numerical simulation method is provided, and the analytical approaches to the RF sheath driven by non-sinusoidal voltage waveforms and to the motion of dust particles in the plasma bulk region are explained. The results, which are presented and discussed in the third section, include the control of the dc self bias in dusty plasmas via the EAE, the change of the dust levitation position when changing the phase angle adiabatically (continuously), the motion of dust particles through the plasma bulk when tuning the phase angle abruptly, and a classification of the dust particle transport depending on the change in the dc self bias and the discharge conditions. Finally, concluding remarks are given in section four. Methods ======= Experiment ---------- ![Sketch of the experimental setup.[]{data-label="FIGsetup"}](fig1.eps) Figure \[FIGsetup\] shows the experimental setup. The experiments are carried out using a CCRF discharge operated in argon gas at $p$ = 2 - 13 Pa, excited by applying $\phi_{\sim}(t)$ according to Eq. (\[EQappvol\]) with $f$ = 13.56 MHz and $\phi_0$ = 200 - 240 V. The applied voltage and the dc self bias are measured using a high voltage probe. Details of the electrical circuit have been provided in previous papers [@Julian; @Iwashita]. The lower (powered) and upper (grounded) electrodes of 100 mm diameter are placed at a distance of $d=22$ mm. The plasma is confined radially between the electrodes by a glass cylinder to improve the discharge symmetry. Both the grounded chamber and the powered electrode are water cooled to eliminate the influence of the thermophoretic force.
The upper electrode has a 20 mm diameter hole sealed with a fine sieve in the center for injecting SiO$_2$ dust particles of 1.5 $\mu$m in size, from a dispenser situated above the upper electrode. The gap between the upper electrode and the dispenser, which is located at the center of the upper electrode, is sealed with a teflon ring to prevent any disturbances due to gas flowing through the gap. The supply of argon gas inside the glass cylinder is realized through slits of a teflon ring, which is placed between the glass cylinder and the grounded electrode. An aluminum ring (100 mm outer diameter, 60 mm inner diameter, 2 mm height) is set on the lower electrode to confine the dust particles radially. The injected dust particles initially tend to reside relatively near the edge inside the aluminum ring, therefore the observation area is taken to be in the region of 2 mm $\leq z \leq$ 22 mm and 18 mm $\leq r \leq $ 25 mm using a two dimensional laser light scattering (2DLLS) method [@Bouchoule; @ShirataniJPD11; @Koga; @XuLLS] as shown in Fig. \[FIGsetup\]. A vertical laser sheet passes between the two electrodes, with height and width of 20 mm and 1 mm, respectively. The laser power is 150 mW at 532 nm. The light scattered by the dust particles is detected through a side window using a CCD camera equipped with an interference filter and running at a frame rate of 30 pictures per second. PIC/MCC simulation ------------------ The rf discharge is described by a simulation code based on the Particle-In-Cell approach combined with Monte Carlo treatment of collision processes, PIC/MCC [@DonkoJPD09; @DonkoAPL09; @DonkoPSST11]. The code is one-dimensional in space and three-dimensional in velocity space. The simulations are performed in pure argon, although PIC/MCC simulations of dusty plasmas have already been reported [@Choi; @Schweigert; @Matyash]. 
Our approximation is based on the assumption that the dust particles represent only a minor perturbation to the plasma, which is justified for low concentrations of dust particles, as is the case in this study. It has been proven that the simulations can be used to explain the motion of dust particles qualitatively as described in [@Iwashita], and the forthcoming analysis further confirms their applicability. The PIC/MCC simulations are performed at pressures between 4 and 12 Pa. Although our simulations are not capable of accounting for any two-dimensional effects, the simulation data are helpful to understand the experimental findings, which are analyzed in the direction perpendicular to the electrode surfaces only. In the simulations the discharge is driven by a voltage specified by Eq. (\[EQappvol\]). Electrons are reflected from the electrode surfaces with a probability of 0.2 and the secondary electron emission coefficient is set to $\gamma$ = 0.1. Based on the simulation results, the time averaged forces acting on dust particles, i.e. the ion drag force, $F_i$, electrostatic force, $F_e$ and gravity, $F_g$, are calculated as a function of the position between the electrodes [@Iwashita]. Here, the model of $F_i$ provided by Barnes et al. [@Barnes] is applied. $F_e$ and $F_g$ are simply expressed as $F_e = Q_dE$ and $F_g = m_dg$, where $E$ and $m_d$ are the time averaged electric field and mass of dust particles, respectively. The charge of dust particles is calculated from the standard formula for isolated dust particles, $Q_d = 1400 r_d T_e$, given, e.g., by Bonitz [@Bonitz] or Piel [@Piel], yielding $Q_d \approx -3300e$ in the plasma bulk (see Fig. \[Dustcharge\]), which is close to the typical value reported elsewhere [@ParticleBulk]. Here $e$ is the elementary charge. The typical error in the plasma bulk due to the spatial inhomogeneity is estimated to be about 10 %.
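The bulk dust charge and the resulting gravitational force can be sketched as follows. The charge formula uses the standard convention ($r_d$ in $\mu$m, $T_e$ in eV, result in elementary charges); the SiO$_2$ mass density of $2 \cdot 10^3$ kg/m$^3$ is an assumed round value:

```python
import math

def dust_charge_number(r_d_um, T_e_eV):
    """Standard estimate Z_d ~ 1400 * r_d[um] * T_e[eV] for an isolated grain
    (e.g., Bonitz or Piel); the grain charge is Q_d = -Z_d * e."""
    return 1400.0 * r_d_um * T_e_eV

e = 1.602e-19                                   # elementary charge (C)
Z_d = dust_charge_number(0.75, 3.0)             # ~3e3 elementary charges
Q_d = -Z_d * e                                  # net (negative) charge (C)

rho = 2.0e3                                     # assumed SiO2 density (kg/m^3)
m_d = rho * 4.0/3.0 * math.pi * (0.75e-6)**3    # mass of a 1.5 um particle
F_g = m_d * 9.81                                # gravity on one particle (N)
print(f"Z_d = {Z_d:.0f} e, m_d = {m_d:.2e} kg, F_g = {F_g:.2e} N")
```

With $T_e$ = 3 eV this reproduces the order of magnitude $Q_d \approx -3300e$ used throughout the text.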
Finally, the spatial profiles of the potential energy are derived from the net forces exerted on dust particles. ![Estimated spatial profile of the dust charge based on the standard formula [@Bonitz; @Piel] (Ar, 8 Pa, $\phi_0$ = 200 V, $\theta$ = $0^{\circ}$). The dashed line shows the spatial average in the plasma bulk, that is used in the manuscript. The location of dust particles in equilibrium near the lower electrode, which is obtained experimentally, is also shown.[]{data-label="Dustcharge"}](fig2.eps) Analytical model of the RF sheath driven by an arbitrary voltage waveform {#SectionSheathModel} ------------------------------------------------------------------------- In this section a model of CCRF discharges is combined with the Child-Langmuir approximation to obtain the main properties of the RF sheath, i.e. the time dependent sheath width and the spatio-temporal distribution of the potential and electric field inside the sheath, in an electrically asymmetric capacitive discharge. The goal is to calculate the time averaged sheath electric field and correlate this field with the levitation of the dust particles above the powered electrode in case of an adiabatic phase shift, discussed in section \[adiabatic\]. The dynamics of the sheath in a “classical” dual frequency discharge driven by two substantially different frequencies has been modeled using similar approaches [@DFsheath1; @DFsheath2; @DFsheath3]. According to the model, which has been introduced in [@Heil; @VQ; @Czarnetzki11], we find the following expression for the sheath voltage at the powered electrode normalized by $\phi_0$: $$\bar{\phi}_{sp}(t) = - \left[\frac{-{\varepsilon}q_t + \sqrt{{\varepsilon}{q_t}^2 - (1-\varepsilon)[\bar\eta + \bar\phi_{\sim}(t)]}}{1 - \varepsilon}\right]^2. 
\label{EQphibar}$$ Here $\varepsilon$, $q_t$, $\bar\eta$ and $\bar\phi_{\sim}(t)$ are the symmetry parameter as defined and discussed in [@Heil], normalized total charge, the dc self bias as well as the applied voltage normalized by $\phi_0$, respectively. Eq. (\[EQphibar\]) provides the sheath voltage as a function of time. In order to obtain a spatio-temporal model of the sheath electric field, the collisionless Child-Langmuir sheath theory [@Lieberman] can be applied at low pressures of a few Pa. To simplify the analysis, we restrict ourselves to a one-dimensional scenario. In this approximation, the maximum width of the sheath adjacent to the powered electrode is expressed as $s_{max,p} = \frac{\sqrt{2}}{3} \lambda_{De} \left(2 \left| \hat{\phi}_{sp} \right| e / T_e\right)^\frac{3}{4}$, where $\hat{\phi}_{sp}$, $\lambda_{De}$ and $T_e$ are the maximum of the sheath voltage at the powered electrode, the Debye length and the electron temperature at the sheath edge (in eV), respectively. The time dependent sheath width is given by the scaling with the sheath voltage: $s_p(t) = s_{max,p} \left( \phi_{sp}(t) / \hat{\phi}_{sp} \right)^{\frac{3}{4}}$. The minimum voltage drop across the powered sheath, $\hat{\phi}_{sp} <0$, is found from the voltage balance: $\phi_{\sim}(t) + \eta = \phi_{sp} + \phi_{sg} + \phi_b$ at the time of minimum applied voltage. Here $\phi_{sg}$ and $\phi_b$ are the sheath voltage at the grounded electrode and the bulk voltage, respectively. Neglecting the floating potential at the grounded sheath and $\phi_b$ yields $\hat{\phi}_{sp} \approx \tilde\phi_{min}+\eta,$ so that the minimum sheath voltage can easily be deduced from experimentally measured values, for instance. Here $\tilde\phi_{min}$ is the minimum of the applied voltage. 
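The Child-Langmuir expressions above translate directly into a short numerical sketch. The inputs below are illustrative assumptions only: $T_e$ = 3 eV and $\lambda_{De}$ = 644 $\mu$m (the values used later in section \[adiabatic\]) and a maximum powered-sheath voltage of $-150$ V:

```python
import numpy as np

def s_max(lambda_De, phi_sp_hat, T_e_eV):
    """Maximum collisionless (Child-Langmuir) sheath width,
    s_max = sqrt(2)/3 * lambda_De * (2|phi_sp_hat|/T_e)^(3/4), T_e in eV."""
    return np.sqrt(2.0)/3.0 * lambda_De * (2.0*abs(phi_sp_hat)/T_e_eV)**0.75

def s_of_t(phi_sp_t, phi_sp_hat, lambda_De, T_e_eV):
    """Instantaneous sheath width from the scaling s ~ |phi_sp|^(3/4)."""
    return s_max(lambda_De, phi_sp_hat, T_e_eV) * (phi_sp_t/phi_sp_hat)**0.75

# Assumed illustrative inputs
smax = s_max(644e-6, -150.0, 3.0)
print(f"s_max,p = {smax*1e3:.1f} mm")
```

The sheath fully expands ($s_p = s_{max,p}$) when the full voltage drops across it and collapses toward zero as $\phi_{sp}(t) \rightarrow 0$.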
Assuming that both the electric field and the potential are zero at the sheath edge, the spatio-temporal profile of the electric potential in the sheath region at the powered electrode ($0 \leq z \leq s_p(t)$) is expressed by [@Oksuz] $$\label{EQsheathvoltage} \phi_{sp}(z,t) = - \frac{T_e}{2e} \left( \frac{3}{\sqrt{2}}\frac{s_p(t)-z}{\lambda_{De}} \right)^{\frac{4}{3}}.$$ Here $z$ = 0 is the position of the powered electrode. Finally, the spatio-temporal profile of the electric field in the sheath region is found by differentiation: $$\label{Efield} E_{sp}(z,t)=-\frac{\partial \phi_{sp}(z,t)}{\partial z}=-\frac{\sqrt{2} T_e}{e \lambda_{De}}\left( \frac{3}{\sqrt{2}}\frac{s_p(t)-z}{\lambda_{De}} \right)^{\frac{1}{3}}$$ Eq. (\[Efield\]) is used to understand the dust motion as a consequence of the adiabatic (continuous) phase change and to determine the electron density in section \[adiabatic\]. Model to describe the motion of dust particles {#SectionTransportModel} ---------------------------------------------- ![Spatial profile of electrostatic force, $F_e$, ion drag force, $F_i$, and gravity, $F_g$, exerted on dust particles. The spatial profile is obtained from PIC/MCC simulation (Ar, 8 Pa, $\phi_0$ = 200 V, $\theta$ = $0^{\circ}$).[]{data-label="Forces"}](fig1a.eps) The motion of dust particles in plasmas is determined by the forces exerted on them [@Bouchoule; @Barnes; @Garrity; @Morfill; @Goree; @Piel]. Here, we propose a simple analytical model to describe the one-dimensional transport of dust particles between both sheaths through the plasma bulk. Models of the dust motion based on the force balance have already been reported [@Chu; @Couedel; @NefedovNJP2003; @LandNJP2007; @Zhakhovskii; @Graves]. We would like to emphasize again that the concentration of dust particles is quite low in this study and they do not disturb the plasma, which is different from the condition under which these models have been provided.
Our approach focuses on analyzing the particular dust transport which has been obtained experimentally when changing the phase angle abruptly, and in fact the model proposed here can explain the experimental results. Further studies are required to investigate non-Hamiltonian effects [@Tuckerman; @Kompaneets] and clarify their role for the physics presented in this work. In reactors with horizontal plane parallel electrodes separated by a discharge gap, $d$, and in the absence of thermophoretic forces, negatively charged dust particles tend to be confined at the sheath edges, where the forces exerted on them balance. Right after introducing the dust particles into the discharge volume, they are typically located around the lower sheath edge due to gravity. Let us focus on the motion of dust particles between the sheath edge of the bottom (powered) electrode $(z = s_p)$ and the upper (grounded) one $(z = d-s_g)$, e.g., after applying an upward force at the lower equilibrium position. Later on, we will approximate the electrostatic force around the sheath edges as hard walls, i.e. the particles are instantaneously reflected without any change in their kinetic energy. This assumption is justified due to the fact that the electrostatic force caused by the bulk electric field (see Fig. \[Forces\]) or the interaction between dust particles is negligible under our conditions. One reason for this quite small bulk electrostatic force is the relatively high ion density in the bulk, which is also realized in the void formation in dusty plasmas [@Bouchoule; @Bonitz]. In contrast to our situation, the electrostatic force is of vital importance in complex plasmas, where the major contribution of negative charges to the total charge balance in the bulk is given by the dust particles and not by the electrons (see e.g., [@Chu; @Couedel; @Takahashi]). The inter-particle force, i.e. the Coulomb force, can be comparable to the sheath electrostatic force under certain conditions [@Hwang].
This becomes crucial particularly when the lateral motion of dust particles is discussed. This study is, however, focused only on their vertical motion. Additionally, dust particles are initially located only at the lower sheath edge due to the balance between the sheath electrostatic force and the ion drag force, suggesting that these two forces are dominantly exerted on the dust particles in this study. Thus, the vertical component of the Coulomb force is much smaller than the respective component of the sheath electrostatic force and the ion drag force. In our model, small errors occur only at the bulk side of the sheath edge (equilibrium position of dust particles) where the electrostatic force is neither close to zero nor represents a hard wall. The dust particles are assumed not to perturb the plasma. Within the plasma bulk region, the dust particle motion is associated with the following force balance: $$\label{EQmombal} m_d\ddot{z} = -m_dg - m_d\nu\dot{z} + F_i(z).$$ Here, $m_d$, $g$, $\nu$, and $F_i$ are the mass of a dust particle, the acceleration of gravity, the frequency of momentum loss due to collisions between dust particles and gas atoms [@Piel; @Epstein], and the ion drag force, respectively. Note that the gas friction force $m_d\nu\dot{z}$ is derived from the assumption that the velocity of dust particles is much smaller than the thermal velocity of gas molecules. Therefore, the dependence of $\nu$ on the particle velocity can be neglected. Any interaction between the dust particles, e.g., a repulsive Coulomb force [@Thomas; @Chu; @Hayashi; @Arp; @Takahashi; @LinIJPD94], is not taken into account. Although the force profiles shown in Fig. 
\[Forces\] suggest that gravity can be neglected, we keep the corresponding term in the force balance to ensure the applicability of the resulting formulae for all types of particles, e.g., different sizes and/or mass densities (materials).\ There are several models of the ion drag force [@Barnes; @KhrapakPRL2003; @Fortov2] and the analytical description of this force remains an interesting research topic in itself. There are discussions in the literature on the validity of the different models. Although more sophisticated models are available, the Barnes model [@Barnes] is applied here in order to calculate the ion drag force in a simple way. The formula is generally considered to be accurate at low dust densities as pointed out e.g., in [@Bouchoule; @Piel], which is the case in this study. We assume that $n_i$ as well as the ion velocity, $v_i$, are expressed by trigonometric functions, as it results from the basic diffusion estimation in a steady state CCRF discharge [@Lieberman]: $$\begin{aligned} \label{iondensityprofile} n_i (z) & = & n_{i0} \cos\left[\left(z-\frac{d}{2}\right)\frac{\pi}{\Lambda_{i}}\right], \\ v_i (z) & = & v_{i0} \tan\left[\left(z-\frac{d}{2}\right)\frac{\pi}{\Lambda_{i}}\right]. \label{ionvelocityprofile}\end{aligned}$$ Here, the maximum ion density, $n_{i0}$, and ion velocity, $v_{i0}$, are constants. $\Lambda_{i}$ is the ion diffusion length; the value is actually close to the distance between the discharge center and the sheath edges. These input parameters are determined by fitting to the PIC/MCC simulation data as shown in Fig. \[Fitting\]. ![Spatial profile of ion density and velocity obtained from the PIC/MCC simulation and fit functions of the analytical model (Ar, 8 Pa, $\phi_0$ = 200 V, $\theta$ = $0^{\circ}$).[]{data-label="Fitting"}](fig3.eps) The estimated model quantities from this fitting are $n_{i0}=6.6 \cdot 10^{15}$ m$^{-3}$, $v_{i0}=344$ m s$^{-1}$, $\Lambda_i=15.5$ mm, and $d=23.0$ mm, respectively. 
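The trigonometric profiles of Eqs. (\[iondensityprofile\]) and (\[ionvelocityprofile\]) with the fit parameters quoted above can be sketched as follows; the density peaks and the drift velocity vanishes at the discharge centre, while the drift grows steeply towards the sheath edges:

```python
import math

# Fit parameters obtained from the PIC/MCC profiles (Fig. 3)
n_i0, v_i0 = 6.6e15, 344.0        # peak ion density (m^-3) and velocity (m/s)
Lambda_i, d = 15.5e-3, 23.0e-3    # ion diffusion length and gap (m)

def n_i(z):
    """Ion density profile (cosine model)."""
    return n_i0 * math.cos((z - d/2.0) * math.pi / Lambda_i)

def v_i(z):
    """Ion drift velocity profile (tangent model)."""
    return v_i0 * math.tan((z - d/2.0) * math.pi / Lambda_i)

print(f"centre:    n_i = {n_i(d/2.0):.2e} m^-3, v_i = {v_i(d/2.0):.1f} m/s")
print(f"near edge: v_i = {v_i(d/2.0 + 0.45*Lambda_i):.0f} m/s")
```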
The ion drag force consists of the collection force due to ions hitting the particle surface and the orbit force due to Coulomb collisions with the drifting ions. In low pressure CCRF discharges the orbit force [@Barnes], $$F_{i,orb}= 4 \pi n_i v_s m_i b_{\pi/2}^2 \Gamma,$$ typically dominates. Here, $v_s$, $m_i$, $b_{\pi/2}$ and $\Gamma$ are the mean ion velocity, the ion mass, the impact parameter and the Coulomb logarithm [@Barnes], respectively: $$\begin{aligned} b_{\pi/2} & = & \frac{eQ_d}{4\pi\epsilon_0m_iv_s^2}, \\ \Gamma & = & \frac{1}{2}\ln\left(\frac{\lambda_{De}^2+b_{\pi/2}^2}{r_d^2(1-\frac{2e\phi_f}{m_i v_s^2})+b_{\pi/2}^2}\right).\end{aligned}$$ Note that these quantities depend on the radius ($r_d$), floating potential ($\phi_f$), and charge ($Q_d$) of the dust particles. In this paper, we use the simplifying assumption that the dust particle charge is negative and constant: $Q_d \approx -3300e$ as shown in Fig. \[Dustcharge\].\ In our approach, we neglect the thermal motion of the ions, i.e. the mean ion velocity $v_s$ is given by the drift component, $v_i$: $$v_s=\left(\frac{8k_BT_i}{\pi m_i}+v_i^2\right)^{\frac{1}{2}} \approx v_i.$$ Applying the approximation $F_i \approx F_{i,orb} \propto n_iv_i$, the ion drag force becomes $$F_i (z) = \bar{F}_{i0} \sin\left[\left(z-\frac{d}{2}\right)\frac{\pi}{\Lambda_i}\right].$$ Here, the maximum ion drag force ($\bar{F}_{i0}$) is a constant. In order to solve Eq. (\[EQmombal\]) analytically only the linear variation of the sine function is considered here: $$\label{EQiondrag} F_i (z) \approx F_{i0} \left(z-\frac{d}{2}\right) \frac{\pi}{\Lambda_{i}},$$ with $F_{i0}= 4 \pi m_i n_{i0} v_{i0} b_{\pi/2}^2 \Gamma$. The input parameters obtained from Fig. \[Fitting\] provide $F_{i0}=3.8 \cdot 10^{-13}$ N. Eq. (\[EQiondrag\]) corresponds to a strong simplification of $F_i (z)$ and deviations from the exact solution appear, particularly in the regions close to the sheath edges.
However, our aim is to explain the transport of dust particles through the plasma bulk with this model. In the bulk region, the model is a reasonable approach, since it includes the most relevant forces in this region. Furthermore, the forthcoming analysis shows that the basic features of particle motion and the experimental observation of the dust transport can be explained reasonably well by this approach.\ After inserting Eq. (\[EQiondrag\]) into Eq. (\[EQmombal\]) a second order linear ordinary differential equation $$\label{EQmombalsimple} m_d\ddot{z} + m_d\nu\dot{z} - F_{i0} \left[\left(z-\frac{d}{2}\right)\frac{\pi}{\Lambda_{i}}\right] + m_dg = 0$$ needs to be solved. Note that Eq. (\[EQmombalsimple\]) represents a harmonic oscillator in the space coordinate $(z-d/2)\pi/\Lambda_{i}$ with frequency $\sqrt{F_{i0}/m_d}$, which is externally driven by gravity and damped by collisions. Finally, using the initial conditions $z(0)=z_0$ and $\dot{z}(0)=u_{0}$, which corresponds to the initial velocity of dust particles, the trajectory of dust particles is given by $$\label{EQtrajectory} z(t) = \left[\beta_1 \cosh \left(\alpha t \right) + \beta_2 \sinh \left(\alpha t\right)\right] e^{-\frac{\nu}{2}t} + \delta.$$ Here, $\alpha$, $\beta_1$, $\beta_2$, and $\delta$ are: $$\begin{aligned} \alpha & = & \sqrt{\left( \frac{\nu}{2}\right)^2 + \frac{\pi F_{i0}}{m_d \Lambda_{i}}}, \\ \beta_1 & = & z_0 - \frac{d}{2} - \frac{m_d \Lambda_{i} g}{\pi F_{i0}}, \\ \beta_2 & = & \left( u_0 + \beta_1 \frac{\nu}{2} \right) \alpha^{-1}, \\ \delta & = & \frac{m_d \Lambda_{i} g}{\pi F_{i0}} + \frac{d}{2}.
\end{aligned}$$ From this trajectory of the dust particles, the kinetic energy is obtained: $$\label{EQenergy} W(t) = \frac{1}{2} m_d\dot{z}^2(t) = \frac{m_d}{8\alpha^2} \left( -A e^{\alpha t} + B e^{-\alpha t} \right)^2 e^{-\nu t},$$ where $A$ and $B$ are defined as $$\begin{aligned} A & = & g + \frac{F_{i0} \pi d}{2 m_d \Lambda_{i}} - z_0 \frac{F_{i0} \pi}{m_d \Lambda_{i}} + u_0 \left( \frac{\nu}{2} - \alpha \right), \\ B & = & g + \frac{F_{i0} \pi d}{2 m_d \Lambda_{i}} - z_0 \frac{F_{i0} \pi}{m_d \Lambda_{i}} + u_0 \left( \frac{\nu}{2} + \alpha \right).\end{aligned}$$ Eq. (\[EQenergy\]) is used to describe the dust energy as a consequence of the abrupt phase change in section \[abrupt\]. This rather complex result will be compared to the simple assumption that the kinetic energy of the dust particles is not affected by the particular shape of the potential profile and that the loss of the energy of the dust particles is only due to gas friction. Then, the velocity and kinetic energy of the dust particles can be estimated as $$\begin{aligned} u_{d}(t) = u_{0}e^{-\frac{\nu}{2}t}, \\ \label{EQenergysimple0} W(t) = \frac{1}{2}m_{d}u_{d}^{2}(t) = W_0e^{-{\nu}t} . \label{EQenergysimple}\end{aligned}$$ Here $W_0$ is the initial kinetic energy of dust particles. Eq. (\[EQenergysimple\]) is used to determine the potential profile experimentally using the spatial profile of the laser light scattering (LLS) intensity from dust particles in section \[abrupt\]. It should be noted that in practice the dust charge fluctuates and the reflection of the dust particles at the sheath edge is “soft”. Again, our model aims to describe the dust transport observed in this study in a simple way, and thus simple assumptions, e.g., a constant dust charge and a rough approximation of the electrostatic force as a hard wall, are applied here.
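The closed-form trajectory of Eq. (\[EQtrajectory\]) can be verified numerically against the differential equation Eq. (\[EQmombalsimple\]). In the sketch below, $F_{i0}$, $\Lambda_i$ and $d$ are the fit values from above; the dust mass assumes a 1.5 $\mu$m SiO$_2$ sphere of density $2 \cdot 10^3$ kg/m$^3$, and $\nu = 100$ s$^{-1}$ is an assumed Epstein friction rate at a few Pa:

```python
import math

F_i0, Lam, d, g = 3.8e-13, 15.5e-3, 23.0e-3, 9.81
m_d = 2.0e3 * 4.0/3.0 * math.pi * (0.75e-6)**3   # assumed dust mass (kg)
nu = 100.0                                       # assumed friction rate (1/s)
z0, u0 = 5.7e-3, 1.0                             # lower sheath edge, 1 m/s up

k = math.pi * F_i0 / (m_d * Lam)      # linearized ion-drag "spring" rate (1/s^2)
alpha = math.sqrt((nu/2.0)**2 + k)
delta = d/2.0 + g/k                   # equilibrium offset d/2 + m_d*g*Lam/(pi*F_i0)
beta1 = z0 - delta
beta2 = (u0 + beta1*nu/2.0) / alpha

def z(t):
    """Dust trajectory, Eq. (EQtrajectory)."""
    return (beta1*math.cosh(alpha*t) + beta2*math.sinh(alpha*t)) \
           * math.exp(-nu*t/2.0) + delta

print(f"z(0) = {z(0.0)*1e3:.2f} mm, z(1 ms) = {z(1.0e-3)*1e3:.2f} mm")
```

A finite-difference check confirms that this expression satisfies both the initial conditions and the force balance; the inverted-oscillator character (hyperbolic functions) reflects the ion drag pushing the particles away from the discharge centre.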
Results and Discussion ====================== dc self bias control via the EAE in a plasma containing a small amount of dust ------------------------------------------------------------------------------ ![Experimentally obtained dc self bias as a function of the phase angle $\theta$ with and without dust particles for different neutral gas pressures. The applied voltage amplitude is kept constant at $\phi_0$ = 200 V. Solid symbols relate to discharges without and open symbols to ones with dust particles. Square: 2 Pa, triangle: 4 Pa, inverted triangle: 8 Pa.[]{data-label="FIGbias"}](fig4.eps) Fig. \[FIGbias\] shows the dc self bias, $\eta$, obtained from the experiment, as a function of the phase angle, $\theta$. $\eta$ is generated as a monotonic function of $\theta$. As described in detail before [@Heil; @EAE7; @EAE11; @Julian; @Czarnetzki11; @Eddi; @DonkoJPD09; @DonkoAPL09], the EAE allows one to control the discharge symmetry electrically. The control range for gas pressures between 2 and 8 Pa and an applied voltage amplitude of $\phi_0$ = 200 V is found to be about 45 % of the applied voltage amplitude. Therefore, a strong change in both the time averaged sheath voltages ($\eta=\left\langle \phi_{sp}(t)\right\rangle+\left\langle \phi_{sg}(t)\right\rangle$) and the maximum sheath voltages as a function of $\theta$ can be expected. $\eta$ is shifted towards negative values because the discharge setup becomes effectively geometrically asymmetric due to the parasitic effect of capacitive coupling between the glass cylinder and the grounded chamber walls [@Coburn; @Savas; @Julian; @Booth10; @Booth12; @Booth12_2]. This effect tends to be stronger at higher pressures. It is important to note that in this study no significant difference of $\eta$ in cases with and without dust particles is observed, indicating that the presence of a low dust concentration does not influence the plasma significantly.
Therefore, the models described in the previous section are indeed applicable as pointed out already in section \[Introduction\] by estimating Havnes’ value $P$. Adiabatic phase change {#adiabatic} ---------------------- ![Spatial profile of the measured LLS intensity from the dust particles around the lower electrode as a function of the phase angle $\theta$ combined with the electric field calculated from the analytical model (Ar, 2 Pa, $\phi_0$ = 200 V). The observation of the LLS intensity within the lower region (0 mm $\leq z \leq$ 2 mm) is blocked by the aluminum ring.[]{data-label="FIGllsfield"}](fig5.eps) The dust particles injected into the discharge are initially located at the sheath edge adjacent to the lower electrode. Any adiabatic (continuous) change of $\theta$ leaves the dust particles at an equilibrium position close to this lower sheath edge as shown in Fig. \[FIGllsfield\]. By increasing the phase angle from $0^{\circ}$ to $90^{\circ}$ adiabatically, the time averaged sheath width becomes smaller and both the mean and the maximum sheath voltages at the lower electrode decrease. Therefore, the equilibrium position of the dust particles is shifted closer towards the electrode. This change of the equilibrium position can be understood by the electric field profile obtained from the analytical model described in section \[SectionSheathModel\] using input parameters of $T_e$ = 3 eV and $\lambda_{De}$ = 644 $\mu$m calculated under the assumption of $n_e$ = $4 \times 10^{14}$ m$^{-3}$ (see lines in Fig. \[FIGllsfield\]). Electron density and temperature are taken from the PIC/MCC simulations because we applied a glass cylinder to confine the plasma. Thus, performing Langmuir probe measurements is not possible. We find very good agreement between the measured LLS and the part of the electric field distribution at a strength of about -4 kV/m, i.e. where forces exerted on dust particles balance. 
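The model inputs quoted above are mutually consistent and can be checked in a few lines: for $T_e$ = 3 eV and $n_e = 4 \times 10^{14}$ m$^{-3}$ the electron Debye length evaluates to $\approx$ 644 $\mu$m, and the static Child-Langmuir field of Eq. (\[Efield\]) then gives fields of order $-10$ kV/m at the electrode, falling to a few kV/m near the sheath edge. The sheath width of 5 mm used below is an illustrative assumption:

```python
import math

eps0, e = 8.854e-12, 1.602e-19
T_e, n_e = 3.0, 4.0e14               # eV, m^-3 (values quoted in the text)

lambda_De = math.sqrt(eps0 * T_e / (e * n_e))    # electron Debye length (m)
print(f"lambda_De = {lambda_De*1e6:.0f} um")

def E_sheath(z, s_p):
    """Static Child-Langmuir field of Eq. (Efield); T_e in eV, lengths in m,
    z = 0 at the powered electrode, s_p the instantaneous sheath width."""
    return -math.sqrt(2.0)*T_e/lambda_De \
           * (3.0/math.sqrt(2.0)*(s_p - z)/lambda_De)**(1.0/3.0)

# field magnitude decreases monotonically from the electrode to the sheath edge
print(f"E(0) = {E_sheath(0.0, 5.0e-3):.0f} V/m, "
      f"E(near edge) = {E_sheath(4.9e-3, 5.0e-3):.0f} V/m")
```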
![Distribution of the electric field at (a) $\theta=0^\circ$ and (b) $\theta=90^\circ$ as a function of spatial position within the phase angle dependent maximum sheath width and time resulting from the model shown in Eq. (\[Efield\]) (Ar, 2 Pa, $\phi_0$ = 200 V). $T_{rf}$ = 74 ns. The sheath reaches the region above dashed line only once per rf period.[]{data-label="FIGfield00field90"}](fig6.eps) ![Strength of the time averaged electric field as a function of position corresponding to Fig. \[FIGfield00field90\]. The dashed line is drawn according to that in Fig. \[FIGfield00field90\] (b). The gradient of the time averaged electric field changes at the boundary indicated by the dashed line.[]{data-label="FIGfield00field90_2"}](fig7.eps) When $\theta$ is changed from $0^{\circ}$ to $90^{\circ}$, the maximum of the time averaged electric field in the powered electrode sheath, i.e. $\left\langle E\right\rangle_{max}$ found at the electrode, becomes smaller due to the decrease in the mean sheath voltage. In addition, the change in the shape of the applied voltage as a function of $\theta$ leads to a change in the sheath voltage, $\phi_{sp}(t)$, which causes a change in the spatial distribution of the time averaged electric field. As it becomes clear from Fig. \[FIGfield00field90\] and \[FIGfield00field90\_2\], the slope of $\left\langle E\right\rangle(z)$ becomes flatter in the upper part of the sheath with increasing $\theta$, i.e. the time averaged voltage drop over this region becomes smaller. In particular, the field is relatively small during the second half of the rf period (see dashed line in Fig. \[FIGfield00field90\] (b)). Thus, the broadening of the equilibrium position (region of bright LLS) is well understood by the analytical model. This correlation analysis of the dust equilibrium position combined with the spatial electric field profile is applicable as a diagnostic tool to estimate plasma parameters, i.e. 
the dust particles can serve as electrostatic probes [@MorfillPRL04; @Kersten09; @DustProbes2; @DustProbes3; @DustProbes4]. The correlation analysis yields the maximum sheath extension as the only free fitting parameter, which depends on electron temperature and density ($s_{max,p} \propto \lambda_{De} / T_e^{3/4} \propto n_e^{-1/2} T_e^{-1/4}$). Hence, $s_{max,p}$ is more sensitive to changes in the electron density and, if the electron temperature is known, $n_e$ can be obtained assuming that these plasma parameters are constant, independently of $\theta$. In our discharge configuration, it is not possible to measure $T_e$. However, estimating $T_e \approx 3$ eV, for instance, results in an electron density of about $n_e \approx 4 \cdot 10^{14}$ m$^{-3}$ at the sheath edge under the condition of Fig. \[FIGllsfield\] (Ar, 2 Pa and $\phi_0$ = 200 V). Note that the charge of dust particles becomes smaller than that in the plasma bulk when they are closer to the sheath edge as shown in Fig. \[Dustcharge\], i.e. the charge of the dust particles observed in Fig. \[FIGfield00field90\_2\] might be smaller than $-3300e$, which is assumed as the dust charge in this paper. Further study is required to discuss this topic in detail. Abrupt phase change {#abrupt} ------------------- When the phase angle is changed abruptly from $90^{\circ}$ to $0^{\circ}$, i.e. much faster than the reaction time scale of the particles, all dust particles are transported upwards into the plasma bulk and undergo rapid oscillations between the sheaths. Thereafter, a fraction of the particles reaches the upper sheath region and settles there (see Fig. \[FIGtransport\](a)). In this way, sheath-to-sheath transport is realized [@Iwashita]. Before discussing the conditions, under which sheath-to-sheath transport is possible, in more detail, this particle motion should be understood.
As in the case of the adiabatic phase change, dust particles injected into the discharge are initially located at the sheath edge adjacent to the lower electrode. If the phase is changed abruptly from $90^{\circ}$ to $0^{\circ}$, the dust particles are suddenly located in a region of high potential due to their inertia. Consequently, they bounce back and forth between both sheaths, while being decelerated by gas friction (see Fig. \[Transportmodel\]) [@Iwashita]. As described in section \[SectionTransportModel\], the motion of dust particles is determined by gravity, the ion drag force pushing the particles out of the bulk towards the sheaths, deceleration due to friction by collisions with the neutral gas, as well as electrostatic forces due to the sheath electric field, which basically can be regarded as boundaries, thus spatially confining the particle motion. Afterwards, they reside inside the potential well at either the upper or the lower sheath edge [@Iwashita]. The shape of the potential profile consists of a peak close to the discharge center, two minima located around the sheath edges and steep rises inside the sheaths. The difference in the height of the two minima is mainly caused by gravity in the absence of thermophoretic forces. The term “potential” is valid only, if the result does not depend on the particle velocity, i.e. if the time scale of the dust particle motion is the slowest of all time scales of interest here. This condition is fulfilled: for instance, the thermal motion of both the neutral and the ionized gas atoms is about two orders of magnitude faster compared to the dust particle motion (the maximum dust velocity estimated from the experimental results (Fig. \[FIGtransport\]) is a few m/s at most). Therefore, the potential profile is provided independently from the dust velocity.\ ![Spatiotemporal profiles of the measured LLS intensity by the dust particles within the discharge gap (Ar, (a) 8 Pa and (b) 12 Pa, $\phi_0$ = 200 V). 
The abrupt phase change takes place at [*t $\approx$*]{} 0 ms. Observation of the lower region (0 mm $\leq z \leq$ 2 mm) is blocked by the aluminum ring. The upper (diamond and triangle) and lower (circle and square) points are taken to obtain the upper and lower potential wells in Fig. \[FIGpotexp\], respectively. The arrow illustrates the estimation of an initial velocity of $u_0 \approx 1$ m/s.[]{data-label="FIGtransport"}](fig8.eps) ![Model of sheath-to-sheath transport of dust particles [@Iwashita]. The potential profile is calculated from PIC/MCC simulation data (Ar, 4 Pa, $\phi_0$ = 200 V). $L_1$ and $L_2$ are the widths of the upper and lower potential wells, respectively, at $\theta = 0^\circ$.[]{data-label="Transportmodel"}](fig9.eps) It is possible to determine this potential distribution qualitatively from the experimental results. Hence, information on basic plasma properties might be achievable from this analysis. The shapes of the potential wells at the upper and lower sheath edges are obtained from the LLS profile (see the four kinds of points in Fig. \[FIGtransport\](a)). The points are taken at the contour line which both exists throughout the entire plasma bulk region and shows a reasonably high intensity. Note that the resulting data points are also close to the region of maximum gradient of the LLS intensity. The upper (diamond and triangle) and lower (circle and square) points correspond to the confinement regions of dust particles in the potential wells at the upper and lower sheath edges, respectively. In order to deduce the potential distribution from them, the temporal evolution of the energy of the dust particles needs to be known. The simplest model of the dust motion is applied here, i.e. dust particles lose their kinetic energy only due to gas friction. This approximation allows an analytical treatment of $W(t)$ via Eq. (\[EQenergysimple\]). Using the data points shown in Fig. 
\[FIGtransport\] (a) and replacing the time scale by the corresponding energy, the potential profile shown in Fig. \[FIGpotexp\] is obtained. Here, the potential energy scale is normalized by the initial energy of the dust particles. An estimation yields $W_0 \approx m_d u_0^2 /2 \approx 1.8 \times 10^{-15}$ J (11 keV) for an initial velocity of $u_0 \approx 1$ m/s, which was obtained from the spatiotemporal profile of the LLS intensity by the dust particles (see arrow in Fig. \[FIGtransport\]). Taking into account the uncertainty in $W_0$, we restrict ourselves to a qualitative discussion of the potential profile in this study. Comparing this profile to the one calculated from the simulation data shown in Fig. \[FIGpotpic\], we see that the position of the lower potential minimum agrees well between the experiment and the PIC simulation ($z \approx 5.7$ mm). In the experiment the upper minimum is located at 18.6 mm, whereas the position in the simulation is 16.9 mm. This difference is probably caused by the effective geometrical asymmetry of the discharge in the experiment, which is also indicated by the self bias voltage, $\eta$ (see 8 Pa case in Fig. \[FIGbias\]). In the PIC simulation the discharge is geometrically symmetric, thus yielding a symmetric dc self bias curve ($\eta(\theta=0^\circ)=-\eta(\theta=90^\circ) \approx -52$ V) and a wider sheath compared to the experiment at the grounded side for all $\theta$. The lowest part of the potential curve resulting from the experimental data cannot be obtained by this approach (see the curve at around z = 5 mm in Fig. \[FIGpotexp\]), since the residual spatial distribution is caused by the residual energy, $W_r$, of dust particles in equilibrium position due to thermal motion and Coulomb interaction, respectively, as well as the spatial resolution of the optical measurements (see the LLS intensity from dust particles after 100 ms in Fig. \[FIGtransport\]), which are neglected in our simple model. 
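The quoted initial energy can be reproduced with a one-line estimate. The dust mass used below is an assumption (it is not stated in this excerpt) chosen so that the quoted $W_0$ is reproduced; only the initial velocity $u_0 \approx 1$ m/s is taken from the text.

```python
QE  = 1.602e-19   # J per eV
m_d = 3.6e-15     # kg (assumed dust particle mass, not given in the excerpt)
u_0 = 1.0         # m/s, initial velocity read off the LLS profile

# Kinetic energy estimate W0 = m_d * u0^2 / 2, as in the text
W_0 = 0.5 * m_d * u_0**2
print(f"W0 = {W_0:.2e} J = {W_0 / QE / 1e3:.1f} keV")   # ~1.8e-15 J ~ 11 keV
```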
Except for this region, the dust particles can be used as probes to determine the potential, which depends on plasma properties via $F_i (z)$ and $F_{e}(z)$, in a major part of the discharge region. The probability for the trapping of dust particles at the upper sheath, $P_{trans}$, might be roughly estimated by the width of the upper potential well divided by the sum of the widths of the lower and upper potential wells, which is expressed as $L_1 / (L_1 + L_2)$, in the simple approximation made above (see Fig. \[Transportmodel\]) [@Iwashita]. Here $L_1$ and $L_2$ are the widths of the upper and lower potential wells, respectively. The probability calculated this way is about 0.5 for the experiment at 8 Pa and $\phi_0$=200 V, which agrees well with that calculated for the simulation potential profile.\ Furthermore, the potential profile can be used to obtain input parameters for the analytical model of dust transport described in section \[SectionTransportModel\]. For this model the potential profile in the plasma bulk is obtained by integrating Eq. (\[EQmombalsimple\]). Due to the small-angle approximation for the ion drag force (Eq. (\[EQiondrag\])) the potential profile is expressed by a simple parabola: $U(z)=U_0 - \left[F_{i,0} \left( z - d \right) \frac{z \pi}{2 \Lambda_i} - m_d g z \right]$, where $U_0$ is an integration constant. The model curve resulting from fits of equations \[iondensityprofile\] and \[ionvelocityprofile\] to PIC simulation data is shown in Fig. \[FIGpotpic\]. One can find a difference between the central maxima of the potential profiles obtained from the PIC/MCC simulation for $\theta = 90^{\circ}$ and $0^{\circ}$. This derives from the spatial profiles of the ion drag force (mainly the orbit force), i.e. 
the direction of the ion drag force changes at the center of the plasma bulk [@Iwashita] and the gradient of the force profile for $\theta = 90^{\circ}$ in this region is steeper than that for $\theta = 0^{\circ}$, resulting in the difference of the central maxima for $\theta = 90^{\circ}$ and $0^{\circ}$. The model shows reasonable agreement with the potential profile using the exact values from the PIC/MCC simulation within the plasma bulk. As discussed above, deviations can be observed close to the sheath edges, e.g., due to the simplified treatment of the electrostatic force as a hard wall. ![Potential profile at $\theta = 0^\circ$ obtained from the measured 2DLLS intensity shown in Fig. \[FIGtransport\] (a) using a simple model. The potential energy scale is normalized by a rough estimation of the initial energy of the dust particles.[]{data-label="FIGpotexp"}](fig10.eps) ![Potential profile calculated from PIC/MCC simulations data (Ar, 8 Pa, $\phi_0$=200 V). The model curve resulting from fits of equations \[iondensityprofile\] and \[ionvelocityprofile\] to PIC simulation data is shown, as well.[]{data-label="FIGpotpic"}](fig11.eps) Figure \[FIGtrajectory\] shows the trajectories of dust particles calculated from Eq. (\[EQtrajectory\]) and using the input parameters given above, for different values of the initial velocity. Right after the time of the abrupt phase shift all dust particles gain a certain initial velocity. If the initial velocity is below $u_0 \approx 1.0$ m/s, they cannot overcome the central maximum of the potential and bounce only inside the lower potential well. Dust particles with the initial velocity above $u_0 \approx 1.25$ m/s travel through the whole plasma bulk just after the phase shift. Dust particles with an initial velocity of $u_0 \approx 1.5$ m/s oscillate back and forth in the bulk region. However, their final equilibrium position is again located around the lower sheath. 
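The parabolic bulk potential given above can be sketched numerically. All parameter values in this snippet are illustrative assumptions (the excerpt does not list them); the snippet only demonstrates the shape of $U(z)$: a single central maximum, shifted slightly upward by gravity.

```python
import math

# Bulk potential from the text: U(z) = U0 - [F_i0 (z - d) z pi / (2 Lam_i) - m_d g z].
# All numbers below are assumptions chosen only to show the shape.
m_d   = 3.6e-15   # kg, assumed dust mass
g     = 9.81      # m/s^2
F_i0  = 5e-13     # N, assumed ion drag amplitude
Lam_i = 8e-3      # m, assumed small-angle length scale of the ion drag
d     = 22.5e-3   # m, assumed width parameter of the bulk region

def U(z, U0=0.0):
    return U0 - (F_i0 * (z - d) * z * math.pi / (2 * Lam_i) - m_d * g * z)

# Analytic peak of the parabola: dU/dz = -A(2z - d) + m_d g = 0
A = F_i0 * math.pi / (2 * Lam_i)
z_peak = d / 2 + m_d * g / (2 * A)

# Numeric scan confirms the single central maximum
zs = [i * d / 2000 for i in range(2001)]
z_num = max(zs, key=U)
print(f"central potential peak at z = {z_num * 1e3:.2f} mm")
```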
Therefore, from the model the initial velocity required to realize sheath-to-sheath transport is found within certain intervals, e.g., dust particles having $u_0$ = 2.0 m/s end up in the upper potential minimum while those having $u_0$ = 1.75 m/s do not. The conclusion obtained from Fig. \[FIGtrajectory\] can be summarized by introducing the number of passages of dust particles through the plasma bulk, $N_{trans}$.

  $u_0$ (m/s)   1.00   1.25   1.5   1.75   2.0
  ------------- ------ ------ ----- ------ -----
  $N_{trans}$   0      1      2     2      3

  : \[table1\] Summary of the effective transport of dust particles through the plasma bulk obtained in this study, depending on the initial velocity (Ar, 8 Pa, $\phi_0=200$ V). An odd $N_{trans}$ realizes sheath-to-sheath transport, while an even $N_{trans}$ does not.

Any odd number of $N_{trans}$ means that sheath-to-sheath transport is realized, whereas even numbers of $N_{trans}$ correspond to a final position close to the initial position at the lower sheath edge (table \[table1\]). We also note that the trajectory for $u_0 \approx$ 1.25 m/s obtained from the model agrees well with the experimental result (Fig. \[FIGtransport\](a)). ![Trajectories of dust particles calculated from the model for different initial velocities (Ar, 8 Pa, $\phi_0$=200 V). The input parameters fitted to the data calculated from PIC/MCC simulations (see Fig. \[FIGpotpic\]) are used.[]{data-label="FIGtrajectory"}](fig12.eps) ![Time evolution of the kinetic energy of dust particles after the abrupt phase shift according to the $u_0 = 1.25$ m/s case in Fig. \[FIGtrajectory\] (Ar, 8 Pa, $\phi_0$ = 200 V).[]{data-label="FIGenergy"}](fig13.eps) Using Eq. (\[EQenergy\]) the time evolution of the kinetic energy of the dust particles after the abrupt phase shift is obtained as shown in Fig. \[FIGenergy\]. An anharmonic oscillation is superimposed on the simple exponential decay of the dust velocity assumed in Eq. (\[EQenergysimple\]). 
The sharp edges in these oscillations are due to the treatment of the electrostatic forces as hard walls. When the dust particles bounce between the sheath edges, they do not just lose their kinetic energy on long timescales, but they also gain kinetic energy temporarily due to the ion drag force while moving from the discharge center towards the sheaths. However, the kinetic energy stays below $W_0 e^{-\nu t}$ between $t=$ 0 and the time of trapping in one of the two potential wells. This is because the potential profile leads to a deceleration of the dust particles just after the abrupt phase change. Therefore, the dust particles spend even more time on their way to the upper sheath and undergo more collisions with the neutral gas, resulting in enhanced friction losses. The information on the trajectory and energy provided by the analytical model of dust transport is useful for the optimization of the transport: a monoenergetic initial distribution within one of the velocity intervals allowing sheath-to-sheath transport, e.g., $u_0 \approx 1.25$ m/s in the case discussed here, is favorable for transporting as many particles as possible to the upper sheath. Moreover, the outcome of the model suggests that the rough estimate of the probability of successful particle transport, $P_{trans}$, given above might overestimate the fraction of particles residing at the upper sheath edge, because the energy loss on the way from the upper sheath to the potential peak is much smaller than the energy loss occurring on the way from the lower sheath to the peak. In general, this model only requires the peak ion density in the discharge center and the electron temperature as input parameters, which could be measured by other diagnostic methods. However, such methods cannot easily be applied in our experimental setup. Upgrading the experimental setup to obtain these key parameters is required for our further study. 
Classification of transport conditions
--------------------------------------

![Experimentally obtained classification of the dust particle transport as a function of $\Delta \bar{\eta}$ and pressure. The voltage amplitude is kept at $\phi_0$ = 200 V for $p<$10 Pa and $\phi_0$ = 200-240 V for $p\geq$10 Pa, respectively.[]{data-label="FIGtransclass"}](fig14.eps) ![Normalized measured LLS intensity from dust particles around the upper sheath edge ($I_{upper} / I_{all}$) as a function of $\Delta \bar{\eta}$ for the abrupt phase shift (Ar, 4 Pa, $\phi_0$ = 200 V). $I_{upper} / I_{all}$ is obtained by dividing the sum of the LLS intensity from dust particles around the upper sheath edge by that from dust particles around both sheath edges. []{data-label="Velocity"}](fig15.eps) We now turn to the discussion of conditions under which sheath-to-sheath transport is possible. The key parameter for this transport is the rapid change of the dc self bias, $\Delta \eta$, which can be easily controlled between $\Delta {\eta}_{min}=0$ and $\Delta {\eta}_{max}= {\eta}(90^\circ) - {\eta}(0^\circ)$ by choosing certain intervals of the change in the phase angle (see Fig. \[FIGbias\]). As shown in Fig. \[FIGtransclass\], a threshold value of $\Delta \bar{\eta}$ is apparently required to achieve the transport of a fraction of the particles to the upper equilibrium position. Here the difference of the normalized dc self bias $\Delta \bar{\eta}$ is given by $\Delta \bar{\eta} = [\eta(\theta_2) - \eta(\theta_1)] / \phi_0 $ in case of a phase shift from $\theta_1$ to $\theta_2$. The threshold increases with pressure, due to the increasing collisionality and, even more importantly, a stronger ion drag force, i.e. the central peak in the potential distribution becomes higher with increasing pressure. Therefore, it becomes more difficult for the particles to overcome this potential barrier. 
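The definition of $\Delta \bar{\eta}$ can be sketched with the symmetric simulation values quoted earlier ($\eta(0^\circ) = -\eta(90^\circ) \approx -52$ V at $\phi_0 = 200$ V); the sign convention for $\eta(90^\circ)$ is an assumption here, so only the magnitude is reported.

```python
# Normalized dc self bias change for a phase shift theta1 -> theta2,
# Delta eta_bar = [eta(theta2) - eta(theta1)] / phi_0, as defined in the text.
def delta_eta_bar(eta_theta1, eta_theta2, phi_0):
    return (eta_theta2 - eta_theta1) / phi_0

phi_0 = 200.0                        # V
eta_0deg, eta_90deg = -52.0, 52.0    # V (symmetric PIC case; sign of eta(90°) assumed)

# Abrupt shift 90° -> 0°: maximal magnitude of the normalized bias change
d_max = abs(delta_eta_bar(eta_90deg, eta_0deg, phi_0))
print(f"maximal |Delta eta_bar| = {d_max:.0%}")   # 52%
```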
If $\Delta \bar{\eta}$ is smaller than the threshold, sheath-to-sheath transport is not realized: the dust particles reach a certain position below this potential peak and are forced towards the equilibrium position around the lower electrode sheath again (see Fig. \[FIGtransport\](b)). In this case, similar to the adiabatic phase change, information on the local plasma properties might be gained from this disturbance of the particle distribution. In particular, we observe that the maximum displacement of the dust particles strongly depends on global parameters, such as pressure and voltage, in the experiment. However, a very good spatio-temporal resolution of the LLS measurements is required, which is not provided in our experiment. At low pressures, the sheath-to-sheath transport is possible within a wide range of $\Delta \bar{\eta}$ (see Fig. \[FIGtransclass\]). However, as it has been motivated by the model results shown in Fig. \[FIGtrajectory\], the fraction of dust particles might vary as a function of $\Delta \bar{\eta}$. Figure \[Velocity\] shows the normalized LLS intensity from dust particles around the upper sheath edge ($I_{upper} / I_{all}$) as a function of $\Delta \bar{\eta}$, for the abrupt phase shift. A low pressure of 4 Pa has been applied here. $I_{upper} / I_{all}$ is obtained by dividing the sum of the LLS intensity from dust particles around the upper sheath edge by that from dust particles around both sheath edges. The maximum of $I_{upper} / I_{all}$ is seen at $\Delta \bar{\eta}$ = 23%, and sheath-to-sheath transport is not achieved for $\Delta \bar{\eta}$ $<$ 16%. These results indicate that the optimum initial velocity for sheath-to-sheath transport is slightly above the minimum value where sheath-to-sheath transport is realized. 
It also becomes clear that the change in the dc self bias, $\Delta \bar{\eta}$, for the [*efficient*]{} sheath-to-sheath transport is found at a certain interval, e.g., dust particles are transported efficiently for $\Delta \bar{\eta}$ = 48% and $\Delta \bar{\eta}$ = 23%, while they are not for $\Delta \bar{\eta}$ = 41% (see Fig. \[Velocity\]). The initial velocity of dust particles, $u_0$, is controlled by changing $\Delta \bar{\eta}$, since the temporally averaged sheath voltage depends almost linearly on the dc self bias [@EAEpower] and the initial energy of the dust particles can be approximated as proportional to the change of the mean sheath voltage. Hence, $u_0 \propto \sqrt{\Delta \bar{\eta}}$, and these results support the model of the dust motion described above (Fig. \[FIGtrajectory\]).\

Conclusion
==========

The opportunities of controlling the transport of dust particles via the EAE have been discussed using the results of experiments, simulations, and analytical models. For these models, it has been confirmed that the dust particles do not significantly perturb the electrical properties of the discharge. In the case of an adiabatic tuning of the phase angle between the applied harmonics the dust particles are kept at an equilibrium position close to the lower sheath edge and their levitation is correlated with the time averaged electric field profile. This might provide the opportunity to estimate the electron density by using the dust particles as electrostatic probes. In the case of an abrupt phase shift ($90^{\circ}$ $\rightarrow$ $0^{\circ}$) the dust particles are transported upwards, i.e. they move between both sheaths through the plasma bulk. The trajectory as well as the temporal evolution of the dust particle energy are well understood using an analytical model. It is found that an initial velocity of the dust particles of about 1.25 m/s is required to push them over the potential hill located around the center of the plasma bulk. 
Thus, changing the applied voltage waveform via the EAE allows transporting a fraction of the dust particles from the equilibrium position around the lower sheath edge to the one at the upper electrode sheath, i.e. sheath-to-sheath transport is realized. The model also predicts that the initial velocity to realize sheath-to-sheath transport is found at certain intervals, which is in agreement with the dependence of the probability of sheath-to-sheath transport (fraction of LLS intensity at the upper sheath edge) on the change in the dc self bias found in the experiment. Furthermore, a certain threshold value of the rapid change of the dc self bias is required to achieve sheath-to-sheath transport. If the change in the dc self bias lies below the threshold value, the dust particles move within the lower potential well. Due to an increase in the collisionality and in the height of the potential peak, the threshold increases and the displacement decreases as a function of neutral gas pressure. This research was supported by the German Federal Ministry for the Environment (0325210B), the Alexander von Humboldt Foundation, the RUB Research Department Plasma, and the Hungarian Scientific Research Fund (OTKA-K-77653+IN-85261, K-105476, NN-103150). References {#references .unnumbered} ========== [27]{} Fortov V E, Khrapak A G, Khrapak S A, Molotkov V I and Petrov O V 2004 [*Phys. Usp.*]{} [**47**]{} 447 Fortov V E, Ivlev A V, Khrapak S A, Khrapak A G, Morfill G E 2005 [*Phys. Rep.*]{} [**421**]{} 1 Nitter T 1996 [*Plasma Sources Sci. Technol.*]{} [**5**]{} 93 Melzer A, Trottenberg T and Piel A 1994 [*Phys. Lett. A*]{} [**191**]{} 301 Ivlev A V, Sütterlin R, Steinberg V, Zuzic M and Morfill G 2000 [*Phys. Rev. Lett.*]{} [**85**]{} 4060 H[ü]{}bner S and Melzer A 2009 [*Phys. Rev. Lett.*]{} [**102**]{} 215001 Khrapak S A, Ratynskaia S V, Zobnin A V, Usachev A D, Yaroshenko V V, Thoma M H, Kretschmer M, Hofner M, Morfill G E, Petrov O F and Fortov V E 2005 [*Phys. Rev. 
E*]{} [**72**]{} 016406 Kalman G, Rosenberg M and DeWitt H E 2000 [*Phys. Rev. Lett.*]{} [**84**]{} 6030 Nunomura S, Goree J, Hu S, Wang X and Bhattacharjee A 2002 [*Phys. Rev. E*]{} [**65**]{} 066402 Couedel L, Mikikian M, Samarian A A and Boufendi L 2010 [*Phys. Plasmas*]{} [**17**]{} 083705 Cavarroc M, Jouanny M C, Radouane K, Mikikian M and Boufendi L 2006 [*J. Appl. Phys.*]{} [**99**]{} 064301 Hamaguchi S, Farouki R T and Dubin D H E 1997 [*Phys. Rev. E*]{} [**56**]{} 4671 Melzer A, Homann A and Piel A 1996 [*Phys. Rev. E*]{} [**53**]{} 2757 Meijer E J and Frenkel D 1991 [*J. Chem. Phys.*]{} [**94**]{} 2269 Schweigert V A, Schweigert I V, Melzer A, Homann A and Piel A 1998 [*Phys. Rev. Lett.*]{} [**80**]{} 5345 Aschinger A and Winter J 2012 [*New J. Phys.*]{} [**14**]{} 093036 Thomas H, Morfill G E and Demmel V 1994 [*Phys. Rev. Lett.*]{} [**73**]{} 652 Chu J H and I Lin 1994 [*Phys. Rev. Lett.*]{} [**72**]{} 4009 Hayashi Y and Tachibana K 1994 [*Jpn. J. Appl. Phys.*]{} [**33**]{} L804 Arp O, Block D and Piel A 2004 [*Phys. Rev. Lett.*]{} [**93**]{} 165004 Bonitz M, Henning C and Block D 2010 [*Rep. Prog. Phys.*]{} [**73**]{} 066501 Shukla P K and Eliasson B 2009 [*Rev. Mod. Phys.*]{} [**81**]{} 25 Bouchoule A 1999 [*Dusty Plasmas*]{} (Chichester: John Wiley & Sons) Krasheninnikov S I and Soboleva T K 2005 [*Plasma Phys. Contr. Fusion*]{} [**47**]{} A339 Selwyn G S, Singh J and Bennett R S 1989 [*J. Vac. Sci. Technol.*]{} A [**7**]{} 2758 Cavarroc M, Mikikian M, Tessier Y and Boufendi L 2008 [*IEEE Trans. Plasma Sci.*]{} [**36**]{} 1016 Roca i Cabarrocas P, Nguyen-Tran Th, Djeridane Y, Abramov A, Johnson E and Patriarche G 2007 [*J. Phys. D: Appl. Phys.*]{} [**40**]{} 2258 Shiratani M, Koga K, Iwashita S, Uchida G, Itagaki N and Kamataki K 2011 [*J. Phys. D: Appl. Phys.*]{} [**44**]{} 174038 Koga K, Iwashita S and Shiratani M 2007 [*J. Phys. D: Appl. 
Phys.*]{} [**40**]{} 2267 Wang X, Ocola L E, Divan R S and Sumant A V 2012 [*Nanotechnology*]{} [**23**]{} 075301 Yan H, Choe H S, Nam S W, Hu Y, Das S, Klemic J F, Ellenbogen J C and Lieber C M 2011 [*Nature*]{} [**470**]{} 240 Fumagalli F, Kylián O, Amato L, Hanǔs J and Rossi F 2012 [*J. Phys. D: Appl. Phys.*]{} [**45**]{} 135203 Kim H H, Ogata A, Schiorlin M, Marotta E and Paradisi C 2011 [*Catal. Lett.*]{} [**141**]{} 277 Nosenko V, Goree J and Piel A 2006 [*Phys. Plasmas*]{} [**13**]{} 032106 Nosenko V, Ivlev A V, and Morfill G E 2010 [*Phys. Plasmas*]{} [**17**]{} 123705 Piel A and Melzer A 2002 [*Adv. Space Res.*]{} [**29**]{} 1255 Klindworth M, Melzer A, Piel A and Schweigert V A 2000 [*Phys. Rev. B*]{} [**61**]{} 8404 Annaratone B M, Antonova T, Thomas H M, and Morfill G E 2004 [*Phys. Rev. Lett.*]{} [**93**]{} 185001 Beckers J, Ockenga T, Wolter M, Stoffels W W, van Dijk J, Kersten H and Kroesen G M W 2011 [*Phys. Rev. Lett.*]{} [**106**]{} 115002 Iwashita S, Uchida G, Schulze J, Sch[ü]{}ngel E, Hartmann P, Shiratani M, Donkó Z and Czarnetzki U 2012 [*Plasma Sources Sci. Technol.*]{} [**21**]{} 032001 Samsonov D, Ivlev A V, Quinn R A, Morfill G and Zhdanov S 2002 [*Phys. Rev. Lett.*]{} [**88**]{} 095004 Pustylnik M Y, Ohno N, Takamura S, and Smirnov R 2006 [*Phys. Rev. E*]{} [**74**]{} 046402 Knapek C A, Samsonov D, Zhdanov S, Konopka U, and Morfill G E 2007 [*Phys. Rev. Lett.*]{} [**98**]{} 015004 Pustylnik M Y, Ivlev A V, Thomas H M, Morfill G E, Vasilyak L M, Vetchinin S P, Polyakov D N, and Fortov V E 2009 [*Phys. Plasmas*]{} [**16**]{} 113705 Heil B G, Czarnetzki U, Brinkmann R P and Mussenbrock T 2008 [*J. Phys. D: Appl. Phys.*]{} [**41**]{} 165202 Schulze J, Schüngel E, Donkó Z and Czarnetzki U 2009 [*J. Appl. Phys.*]{} **106** 063307 Schulze J, Schüngel E, Donkó Z and Czarnetzki U 2011 [*Plasma Sources Sci. Technol.*]{} [**20**]{} 015017 Lafleur T and Booth J P 2012 [*J. Phys. D: Appl. 
Phys.*]{} [**45**]{} 395203 Boufendi L, Jouanny M Ch, Kovacevic E, Berndt J and Mikikian M 2011 [*J. Phys. D: Appl. Phys.*]{} [**44**]{} 174035 Watanabe Y, Shiratani M, Fukuzawa T and Kawasaki H 1994 [*Plasma Sources Sci. Technol.*]{} [**3**]{} 355 Schüngel E, Mohr S, Iwashita S, Schulze J and Czarnetzki U 2013 [*J. Phys. D: Appl. Phys.*]{} [**46**]{} 175205 Havnes O, Aanesen T K, and Melandso F 1990 [*J. Geophys. Res.*]{} [**95**]{} 6581 Schulze J, Sch[ü]{}ngel E and Czarnetzki U 2009 [*J. Phys. D: Appl. Phys.*]{} [**42**]{} 092005 Xu R, 2002 [*Particle Characterization: Light Schattering Methods*]{} (Dordrecht: Kluwer Academic Publishers) Donkó Z, Schulze J, Heil B G, Czarnetzki U 2009 [*J. Phys. D: Appl. Phys.*]{} [**42**]{} 025205 Donkó Z, Schulze J, Czarnetzki U and Luggenh[ö]{}lscher D 2009 [*Appl. Phys. Lett.*]{} [**94**]{} 131501 Donkó Z 2011 [*Plasma Sources Sci. Technol.*]{} [**20**]{} 024001 Choi J S, Ventzew P L G, Hoekstra R J and Kushner M J 1994 [*Plasma Sourc. Sci. Technol.*]{} [**3**]{} 419 Schweigert I V, Alexandrov A L, Ariskin D A, Peeters F M, Stefanovi[ć]{} V, Kova[c]{}evi[ć]{} E, Berndt J and Winter J 2008 [*Phys. Rev. E*]{} [**78**]{} 026410 Matyash K, Schneider R, Taccogna F, Hatayama A, Longo S, Capitelli M, Tskhakaya D and Bronold F X 2007 [*Contrib. Plasma Phys.*]{} [**47**]{} 595 Barnes M S, Keller J H, Forster J C, O’Neill J A and Coultas D K 1992 [*Phys. Rev. Lett.*]{} [**68**]{} 313 Piel A 2010 [*Plasma Physics*]{} (Berlin: Springer) Robiche J, Boyle P C, Turner M M and Ellingboe A R 2003 [*J. Phys. D: Appl. Phys.*]{} **36** 1810 Franklin R N 2003 [*J. Phys. D: Appl. Phys.*]{} **36** 2660 Jiang W, Mao M and Wang Y N 2006 [*Phys. Plasmas*]{} [**13**]{} 113502 Schulze J, Heil B G, Luggenhölscher D, Brinkmann R P and Czarnetzki U 2008 [*J. Phys. D: Appl. Phys.*]{} **41** 195212 Czarnetzki U, Schulze J, Sch[ü]{}ngel E and Donkó Z 2011 [*Plasma Sources Sci. 
Technol.*]{} [**20**]{} 024010 Lieberman M A and Lichtenberg A J 2005 [*Principles of Plasma Discharges and Materials Processing, 2nd Ed.*]{} (Hoboken: John Wiley & Sons) Oksuz L and Hershkowitz N 2005 [*Plasma Sources. Sci. Technol.*]{} [**14**]{} 201 Garrity M P, Peterson T W, Garrett L M, and O’Hanlon J F, 1995 [*J. Vac. Sci. Technol.*]{} A [**13**]{} 2939 Rothermel H, Hagl T, Morfill G E, Thoma M H and Thomas H M 2002 [*Phys. Rev. Lett.*]{} [**89**]{} 175001 Liu B, Goree J, Fortov V E, Lipaev A, Molotkov V I, Petrov O F, Morfill G E, Thomas H M and Ivlev A V 2010 [*Phys. Plasmas*]{} [**17**]{} 053701 Cou[ë]{}del L, Nosenko V, Zhdanov S K, Ivlev I V, Thomas H M, and Morfill G E 2009 [*Phys. Rev. Lett.*]{} [**103**]{} 215001 Nefedov A P, Morfill G E, Fortov V E, Thomas H M, Rothermel H, Hagl T, Ivlev A V, Zuzic M, Klumov B A, Lipaev A M, Molotkov V I, Petrov O F, Gidzenko Y P, Krikalev S K, Shepherd W, Ivanov A I, Roth M, Binnenbruck H, Goree J A and Semenov Y P 2003 [*New J. Phys.*]{} [**5**]{} 33 Land V and Goedheer W J 2007 [*New J. Phys.*]{} [**9**]{} 246 Zhakhovski V V, Molotkov V I, Nefedov A P, Torchinski V M, Khrapak A G, and Fortov V E 1997 [*JETP Lett.*]{} [**66**]{} 419 Graves D B, Daugherty J E, Kilgore M D and Porteous R K 1994 [*Plasma Sources Sci. Technol.*]{} [**3**]{} 433 Tuckerman M E, Mundy C J, and Martyna G J 1999 [*Europhys. Lett.*]{} [**45**]{} 149 Kompaneets R, Vladimirov S V, Ivlev A V, Tsytovich V, Morfill G 2006 [*Phys. Plasmas*]{} [**13**]{} 072104 Takahashi K, Oishi T, Shimomai K, Hayashi Y, and Nishino S 1998 [*Phys. Rev. E*]{} [**58**]{} 7805 Hwang H H and Kushner M 1997 [*J. Appl. Phys.*]{} [**82**]{} 2106 Epstein P S 1924 [*Phys. Rev.*]{} [**23**]{} 710 Chu J H, Du Ji-Bin and Lin I, 1994 [*J Phys. D: Appl. Phys.*]{} [**27**]{} 296 Khrapak S A, Ivlev A V, Morfill G E, Zhdanov S K 2003 [*Phys. Rev. 
Lett.*]{} [**90**]{} 225002 Fortov V E and Morfill G E 2009 [*Complex and Dusty Plasmas*]{} (Boca Raton: CRC Press) Sch[ü]{}ngel E, Zhang Q Z, Iwashita S, Schulze J, Hou L J, Wang Y N and Czarnetzki U 2011 [*J. Phys. D: Appl. Phys.*]{} [**44**]{} 285205 Coburn J W and Kay E 1972 [*J. Appl. Phys.*]{} [**43**]{} 4965 Lieberman M A and Savas S E 1990 [*J. Vac. Sci. Technol. A*]{} [**8**]{} 1632 Johnson E V, Verbeke T, Vanel J C and Booth J P 2010 [*J Phys. D: Appl. Phys.*]{} [**43**]{} 412001 Johnson E V, Delattre P A, and Booth J P 2012 [*Appl. Phys. Lett.*]{} [**100**]{} 133504 Lafleur T, Boswell R W, and Booth J P 2012 [*Appl. Phys. Lett.*]{} [**100**]{} 194101 Basner R, Sigeneger F, Loffhagen D, Schubert G, Fehske H and Kersten H 2009 [*New J. Phys.*]{} [**11**]{} 013041 Schubert G, Basner R, Kersten H and Fehske H 2011 [*Eur. Phys. J. D*]{} [**63**]{} 431 Maurer H R, Schneider V, Wolter M, Basner R, Trottenberg T and Kersten H 2011 [*Contrib. Plasma Phys.*]{} [**51**]{} 218 Schüngel E, Schulze J, Donkó Z and Czarnetzki U 2011 [*Phys. Plasmas*]{} **18** 013503
---
title: '**Exclusion of measurements with excessive residuals**'
---

\ \ Sobolev Astronomical Institute of St. Petersburg State University,\ Universitetskij Prospekt 28, Staryj Peterhof, St. Petersburg 198504, Russia\ \*Email: nii@astro.spbu.ru\

An adjustable algorithm for the exclusion of conditional equations with excessive residuals is proposed. The criteria applied in the algorithm use variable exclusion limits which decrease as the number of equations goes down. The algorithm is easy to use; it possesses rapid convergence, minimal subjectivity, and a high degree of generality.

*Keywords:* Estimation of model parameters; Conditional equations; Large residuals; Criteria of exclusion\

**1. Introduction** {#introduction .unnumbered}
===================

In many astronomical (and not only astronomical) problems of estimation of model parameters, it is important to exclude, on reasonable grounds, unreliable data which produce large residuals, i.e., deviations, $\varepsilon$, of measurements from the accepted model: $|\varepsilon_j|/\sigma_j\gg 1$, where $\sigma_j$ is the standard deviation for the $j$-th measurement, $j=1,\:\ldots,\:N$, and $N$ is the number of measurements, i.e., of conditional equations. The occurrence of large residuals (“blunders”) contradicts the basic assumption of least-squares fitting of normally distributed measurement errors and can cause strong biases of the parameter estimates. The common “$3\sigma$” criterion to exclude blunders, $$\label{3s} \frac{|\varepsilon_j|}{\sigma_j}>k=3,$$ does not take into account that the probability of an accidental occurrence of a residual satisfying (\[3s\]) increases with $N$ and becomes non-negligible already at $N$ of order several tens. In this paper, a more adjustable algorithm for the exclusion of equations with excessive residuals, based on a variable criterion limit, is elaborated. 
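The failure of the fixed $3\sigma$ limit at large $N$ can be made quantitative with a short computation; this is a sketch using the probability integral $\psi(z) = \operatorname{erf}(z/\sqrt{2})$.

```python
import math

def psi(z):
    """Probability integral: psi(z) = P(|X| < z) for a standard normal X."""
    return math.erf(z / math.sqrt(2))

# Per-measurement probability that |eps|/sigma > 3 purely by chance
p_single = 1 - psi(3)
print(f"1 - psi(3) = {p_single:.4f}")   # ~0.0027

# Probability that at least one of N residuals exceeds 3 sigma by accident:
# already over 10% for N of order several tens, as the text notes
p_any = {N: 1 - psi(3) ** N for N in (10, 50, 100)}
for N, p in p_any.items():
    print(f"N = {N:3d}: P(at least one |eps|/sigma > 3) = {p:.3f}")
```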
**2. Algorithm of excluding measurements with excessive residuals** {#emalgorithm-of-excluding-measurements-with-excessive-residuals .unnumbered}
===================================================================

1. For a given $N$, a value of $\kappa$ which satisfies the equation $$\left[1-\psi( \kappa)\right]N=1, \qquad \psi(z)\equiv\sqrt{\frac2\pi}\int^z_0 e^{-\frac{1}{2}t^2}dt,$$ where $\psi(z)$ is the probability integral, is found. The expectation value for the number of conditional equations with residuals $$\label{k1s} {|\varepsilon_j|/\sigma_j}>\kappa$$ equals one if the residuals are normally distributed. A larger number of equations with such residuals may be considered as probably excessive.

2. The number $L$ of equations satisfying the criterion (\[k1s\]) is determined.

3. If $L>1$, the $L-L'$ equations with the largest values of $|\varepsilon_j|/\sigma_j$ are excluded from consideration. Here, $L'\ge 1$ is a parameter of the algorithm.

4. The criterion (\[3s\]) with $k$ depending on $N$ is applied to the remaining equations, in particular if $L=1$: $$\label{kg} {|\varepsilon_j|/\sigma_j}>k_\gamma(N),$$ where $k_\gamma$ is the root of the equation $$\label{kg(N)} 1-\left[\psi( k_\gamma)\right]^N=\gamma.$$ Here, $\gamma$ is an accepted confidence level. For low $\gamma$, i.e., for low $1-\psi(k_\gamma)$, in lieu of (\[kg(N)\]) an approximate equation can be used: $$\label{kg(N)1} [1-\psi( k_\gamma)]N=\gamma.$$

5. Following the exclusion of equations with excessive residuals, a new solution of the problem is found from the remaining equations. Thereupon points 1–4 of this algorithm are applied again with the new estimates of the parameters and $\sigma_j$. The iterations are stopped when no further exclusion happens.

The probability ${\cal P}(L)$ of an accidental occurrence of $L$ residuals satisfying (\[k1s\]) can be approximately evaluated with the Poisson distribution, which in this case gives ${\cal P}(L)={e^{-1}/L!}$. 
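Steps 1 and 4 of the algorithm, together with the Poisson estimate, can be sketched as follows; inverting $\psi$ by bisection is just one convenient stdlib-only choice.

```python
import math

def psi(z):
    """Probability integral psi(z) = P(|X| < z) for a standard normal X."""
    return math.erf(z / math.sqrt(2))

def solve_psi(target, lo=0.0, hi=10.0, iters=80):
    """Invert psi by bisection (psi is monotone increasing on [0, inf))."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 100
kappa = solve_psi(1 - 1 / N)                 # step 1: [1 - psi(kappa)] N = 1
k_gamma = solve_psi((1 - 0.05) ** (1 / N))   # step 4: 1 - psi(k_gamma)^N = gamma

print(f"kappa(N=100)   = {kappa:.3f}")       # ~2.576
print(f"k_gamma(N=100) = {k_gamma:.2f}")     # ~3.47 for gamma = 0.05

# Poisson estimate of the accidental-occurrence probabilities, P(L) = e^-1 / L!
P = [math.exp(-1) / math.factorial(L) for L in range(6)]
print(f"P(L>=2) = {1 - P[0] - P[1]:.3f}")    # ~0.264
```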
This approach gives $${\cal P}(L\ge 2) \approx 0.264,\qquad {\cal P}(L\ge 3) \approx 0.080,\qquad {\cal P}(L\ge 4) \approx 0.019.$$ Thus numbers of $L=3$ and $4$ can be considered excessive, i.e., $L'=2$ or $3$, respectively, can be accepted. However, if unbiased parameters are more important than an unbiased residual variance, $L'=1$ is also allowed. Point 4 of the algorithm is essential in the case of only a single (or a few) very large blunder(s), when point 3 cannot come into action. A level of $\gamma=0.05$, being the standard one in many statistical criteria, can be accepted.

**Acknowledgments** {#acknowledgments .unnumbered}
===================

The work is partly supported by the Russian Foundation for Basic Research grant 08-02-00361 and the Russian President Grant for State Support of Leading Scientific Schools of Russia no. NSh-1323.2008.2.
--- author: - | \ [calin.barbat@web.de]{} title: Dualization of projective algebraic sets by using Gröbner bases elimination techniques --- Introduction ============ I read about the duality principle in the book [@gie] and saw some examples of the dualization of quadrics in the books [@gie], [@kom], and of plane curves in the more recent introductory book [@GF94]. This last book gives some particular examples of how the dualization is carried out, but no general method. Some authors mention that variables have to be eliminated from a system. For plane curves the system is derived nicely in [@bri]. For a single hypersurface I recently found [@pw], p. 104f. But for intersections of hypersurfaces the only example I found was the intersection of two hypersurfaces in [@kom], treated as a general example without being specific. So in this article I derive the system for the intersection case. I recommend reading [@GP] for the theoretical background on projective space, homogeneous polynomials, ideals, projective varieties, etc., which is not covered in the present work. For an introduction to Gröbner bases see [@cox] or [@fro]. I used the methods derived in this article to dualize some examples and also checked them against the examples given in [@hr]. Motivation with plane curves ============================ In what follows, we assume that the denominators do not vanish. Think of the inversion radius $r$ as having the value $i=\sqrt{-1}$. (Other values are also permitted, e.g. $1$.) We consider different representations of plane curves. Parametric ---------- We consider a parametrically given plane curve $c(t)=(x(t), y(t))^t$. Then we can define the pedal curve of $c(t)$ with respect to the origin as $$p(c(t))= \frac{y(t)\,x'(t) - x(t)\,y'(t)}{{x'(t)}^2 + {y'(t)}^2} \left( \begin{array}{c}-y'(t)\\x'(t)\end{array} \right)$$ (the pedal is the locus of the feet of the perpendiculars from the origin to the tangents of the curve $c(t)$). 
We can also define what it means to invert $c(t)$ with respect to the circle of radius $r$ around the origin: $$\iota(c(t))=\frac{r^2}{{x(t)}^2 + {y(t)}^2} \left( \begin{array}{c}x(t)\\y(t)\end{array} \right)$$ By composing the two maps given above we get the dual of $c(t)$ as the inverse of the pedal: $$d(c(t))=\iota(p(c(t)))=\frac{r^2}{y(t)\,x'(t) - x(t)\,y'(t)} \left( \begin{array}{c}-y'(t)\\x'(t)\end{array} \right)$$ This is best explained by a commutative diagram: $$\xymatrix{ & c(t) \ar[dl]_d \ar[d]^p \\ \iota(p(c(t))) \ar[ur] \ar[r]^\iota & p(c(t)) \ar[l] }$$ Complex ------- Now we do the same for a curve $z(t)$ in the complex plane. The pedal is $$p(z(t)) = \frac{\overline{z'(t)}\,z(t) - \overline{z(t)}\,z'(t)}{2\,\overline{z'(t)}}$$ The inverse is $$\iota(z(t))=\frac{r^2}{\overline{z(t)}}$$ and the dual is (again by composition) $$d(z(t))=\iota(p(z(t)))=\frac{2\,r^2\,z'(t)}{\overline{z(t)}\,z'(t)-\overline{z'(t)}\,z(t)}$$ A commutative diagram similar to the one in the parametric case holds. Implicit -------- For implicitly given curves $f(x, y)=0$ we cannot give explicit formulas for the pedal curve, but we give a method for computing it. We need the gradient $\nabla f(x, y) = \left(\frac{\partial f}{\partial x}(x, y), \frac{\partial f}{\partial y}(x, y)\right)^t$. Assume $p=(x,y)$ is a point of $f$ and $P=(X,Y)$ is a point of the pedal of $f$. Then, by the definition of the pedal, the following must hold: 1. $p=(x,y)$ is a point of $f$: $f(x,y)=0$. 2. $P$ lies on the tangent to $f$ at $p$: $(P-p)^t \nabla f(x, y) =0$. 3. $P$, viewed as a vector from the origin, is orthogonal to the tangent direction of $f$ at $p$: $P^t \left(\frac{\partial f}{\partial y}(x, y), -\frac{\partial f}{\partial x}(x, y)\right)^t =0$. By eliminating $(x,y)$ from these three equations we get an equation in $(X,Y)$ which is the pedal curve. For convenience we substitute $(X,Y)\mapsto (x,y)$. In what follows, we will see how the elimination can be done with Gröbner bases. 
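As an illustration of ours (the paper's own computations later use [Singular]{}; here the same elimination is sketched in Python with SymPy), the three conditions for the parabola $f = y - x^2$ can be fed to a lex Gröbner basis computation, and the basis elements free of $x, y$ describe the pedal curve:

```python
from sympy import symbols, groebner

x, y, X, Y = symbols('x y X Y')

f = y - x**2                       # implicitly given curve f(x, y) = 0
fx, fy = f.diff(x), f.diff(y)      # gradient of f

# The three conditions from the text:
eqs = [
    f,                          # 1. p = (x, y) is a point of f
    (X - x)*fx + (Y - y)*fy,    # 2. P = (X, Y) lies on the tangent to f at p
    X*fy - Y*fx,                # 3. P is orthogonal to the tangent direction
]

# Lex order with x > y > X > Y: the basis elements containing only
# X, Y generate the elimination ideal, i.e. the pedal curve.
G = groebner(eqs, x, y, X, Y, order='lex')
pedal = [g for g in G.exprs if not (g.free_symbols & {x, y})]
```

For this parabola the elimination yields a scalar multiple of $X^2 + 4X^2Y + 4Y^3$; the foot of the perpendicular from the origin to the tangent at the curve point $(1,1)$, namely $(2/5, -1/5)$, indeed lies on it.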
The inverse of $f(x, y)=0$ is $f\left(\frac{r^2\,x}{x^2 + y^2},\frac{r^2\,y}{x^2 + y^2}\right)=0$. The dual of $f$ is the composition of inversion and pedal as constructed above. Theory ====== Case of one homogeneous polynomial ---------------------------------- First we consider the following projective algebraic set $$V(p) = \{{\bf x} \in \mathbb{K}^{n+1}\mid p({\bf x})=0\}$$ with $\mathbb{K}$ an algebraically closed field, ${\bf x}=(x_0, x_1, \ldots, x_n)$ a point of $\mathbb{K}^{n+1}$ and $p$ a homogeneous polynomial from $\mathbb{K}[x_0, x_1, \ldots, x_n]$. $V(p)$ consists of all roots ${\bf x}=(x_0, x_1, \ldots, x_n)$ of $p$ and is a hypersurface in the projective space ${\mathbb P}^n$. Let ${\bf u} = (u_0, u_1, \ldots, u_n)$ be a normal vector to $V(p)$ at a regular point ${\bf x} \in V(p)$. On the other hand, we know that the gradient $$\nabla p({\bf x}) = \left(\frac{\partial p}{\partial x_0}({\bf x}), \frac{\partial p}{\partial x_1}({\bf x}), \ldots, \frac{\partial p}{\partial x_n}({\bf x})\right)^t$$ is normal to $V(p)$ at $\bf x$. Therefore ${\bf u}$ and $\nabla p({\bf x})$ are linearly dependent. This can be written as ${\bf u}=\lambda\nabla p({\bf x})$ with a factor $\lambda$. We can form the following system $$\begin{aligned} \left\{\begin{aligned} \begin{split} p({\bf x}) &= 0 \\ {\bf u} - \lambda \nabla p({\bf x}) &= {\bf 0} \\ \end{split} \end{aligned}\right.\label{ds1}\end{aligned}$$ We define the set $V^*(p)=\{{\bf u} \in \mathbb{K}^{n+1}\mid p({\bf x}) = 0, \, {\bf u} - \lambda \nabla p({\bf x}) = 0 \}$ of partial solutions to the system (\[ds1\]) to be the dual of $V(p)$. Note that we are not interested in a complete solution of (\[ds1\]), but only in the partial solutions, which I call here the $\bf u$-part of the solution. The $\bf u$-part of the solution of this system is the result of applying the Gauß map to $p({\bf x}) = 0$, where the Gauß map (see [@s], p. 
103) is given – in Chow coordinates (see [@l]) – by $$\gamma : {\bf x} \mapsto {\bf u} = \lambda \nabla p({\bf x})$$ Now we want to construct a system equivalent to (\[ds1\]) but simpler in structure, describing the same dual algebraic set $V^*(p)$. There exists a system $B$ of polynomials in $\mathbb{K}[u_0, u_1, \ldots, u_n]$ with the same solution set $V^*(p)$ as the system (\[ds1\]). We start with the system (\[ds1\]) viewed as a system of polynomials $$q_j \in \mathbb{K}[x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n]$$ and eliminate the first $n+2$ variables. The system is: $$\begin{aligned} \left\{\begin{aligned} \begin{split} q_1(x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n) &= p(x_0, \ldots, x_n) = 0 \\ q_2(x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n) &= u_0 - {\lambda \frac{\partial p}{\partial x_0}(x_0, \ldots, x_n)} = 0 \\ &\vdots \\ q_{i+2}(x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n) &= u_i - {\lambda \frac{\partial p}{\partial x_i}(x_0, \ldots, x_n)} = 0 \\ &\vdots \\ q_{n+2}(x_0, x_1, \ldots, x_n, \lambda, u_0, u_1, \ldots, u_n) &= u_n - {\lambda \frac{\partial p}{\partial x_n}(x_0, \ldots, x_n)} = 0 \\ \end{split} \end{aligned}\right. \end{aligned}$$ Let $G$ be a Gröbner basis for the ideal $S = (q_1, \ldots, q_j, \ldots, q_{n+2})$ with respect to an elimination ordering, where e.g. ${\bf x} > {\bf \lambda} > {\bf u}$. By the Elimination Theorem of [@cox] this basis $G$ eliminates ${\bf x}$ and ${\bf \lambda}$, and $B=G \cap \mathbb{K}[u_0, u_1, \ldots, u_n]$ is a Gröbner basis of the elimination ideal $E=S \cap \mathbb{K}[u_0, u_1, \ldots, u_n]$. We get $E=(B)$. By construction, this basis $B$ is a system of polynomials in $\mathbb{K}[u_0, u_1, \ldots, u_n]$ having the same partial solution set $V^*(p)$ as the original system (\[ds1\]). Because of this property we define $E=(B)$ to be the dual ideal of the original ideal $(p)$. 
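A small runnable sketch of ours (in Python with SymPy rather than the [Singular]{} used later) of system (\[ds1\]) and its elimination, for the conic $p = x_1^2 - x_0x_2$, whose dual is classically the conic $4u_0u_2 = u_1^2$:

```python
from sympy import symbols, groebner

x0, x1, x2, lam, u0, u1, u2 = symbols('x0 x1 x2 lam u0 u1 u2')

p = x1**2 - x0*x2                         # a smooth conic in P^2
grad = [p.diff(v) for v in (x0, x1, x2)]  # gradient of p

# System (ds1): p(x) = 0 together with u - lam * grad p(x) = 0
eqs = [p] + [ui - lam*gi for ui, gi in zip((u0, u1, u2), grad)]

# Eliminate x0, x1, x2 and lam with a lex Groebner basis (x > lam > u);
# the basis elements containing only the u's generate the dual ideal E.
G = groebner(eqs, x0, x1, x2, lam, u0, u1, u2, order='lex')
dual = [g for g in G.exprs if not (g.free_symbols & {x0, x1, x2, lam})]
```

The elimination ideal here is generated by a scalar multiple of $u_1^2 - 4u_0u_2$, as expected for the dual of a smooth conic.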
With the canonical isomorphism $u_i \stackrel{\cong}{\mapsto} x_i$ we can map the ideal $E \subset \mathbb{K}[{\bf u}]$ to an ideal $D \subset \mathbb{K}[{\bf x}]$, which leads to the following commutative diagram: $$\xymatrix{ (p) \subset \mathbb{K}[{\bf x}] \ar[r]^{\gamma} \ar[d] & E \subset \mathbb{K}[{\bf u}] \ar[dl]^{\cong} \\ D \subset \mathbb{K}[{\bf x}] }$$ As an example we dualize the quadric $(n=3)$: $$-b_{0}x_{0}^{2}+2b_{1}x_{0}x_{1}-\frac{b_{1}-1}{a_{1}}x_{1}^{2}+\frac{1}{a_{2}}x_{2}^{2}+\frac{1}{a_{3}}x_{3}^{2} = 0$$ The system (\[ds1\]) is here: $$\left\{ \begin{aligned} -b_{0}x_{0}^{2}+2b_{1}x_{0}x_{1}-\frac{b_{1}-1}{a_{1}}x_{1}^{2}+\frac{1}{a_{2}}x_{2}^{2}+\frac{1}{a_{3}}x_{3}^{2} &=0 \\ 2b_{0}x_{0}\lambda_{1}-2b_{1}x_{1}\lambda_{1}+u_{0} &=0 \\ -2b_{1}x_{0}\lambda_{1}+\frac{2b_{1}-2}{a_{1}}x_{1}\lambda_{1}+u_{1} &=0 \\ -\frac{2}{a_{2}}x_{2}\lambda_{1}+u_{2} &=0 \\ -\frac{2}{a_{3}}x_{3}\lambda_{1}+u_{3} &=0 \end{aligned} \right.$$ By eliminating $x_i$ and $\lambda$ from these equations we get one equation in $u_i$, which is the dual quadric $$\begin{array}{c} (b_{1}-1)u_{0}^{2}+2a_{1}b_{1}u_{0}u_{1}+a_{1}b_{0}u_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})u_{2}^{2}+(a_{1}a_{3}b_{1}^{2}-a_{3}b_{0}b_{1}+a_{3}b_{0})u_{3}^{2} = 0 \end{array}$$ As another example we dualize a quadric over ${\mathbb{Q}}[w,x,y,z]$: $$\left( \begin{array}{c} w^{2}-x^{2} \end{array} \right)$$ and get $$\left( \begin{array}{c} z, \\ y, \\ w^{2}-x^{2} \end{array} \right)$$ How about dualizing this result? This simple example shows that we must be able to dualize ideals given by more than one polynomial. This is done in the next section. Case of a system of homogeneous polynomials ------------------------------------------- The same method can be extended and applied to finite sets of polynomials, which geometrically represent the intersection of the corresponding hypersurfaces. 
We consider the projective algebraic set $$V := V(I) = V(p_1, p_2, \ldots, p_m) = \{{\bf x} \in \mathbb{K}^{n+1}\mid p_1({\bf x})=\ldots=p_m({\bf x})=0\}$$ with $\mathbb{K}$ an algebraically closed field, ${\bf x}=(x_0, x_1, \ldots, x_n)$ a point of $\mathbb{K}^{n+1}$ and $I$ the ideal generated by $m \geq 1$ homogeneous polynomials $p_1, p_2, \ldots, p_m$ from $\mathbb{K}[{\bf x}]$. The dual projective algebraic set $V^*$ of $V$ is also a projective algebraic set which is the zero set of an ideal $E$ generated by polynomials from $\mathbb{K}[u_0, u_1, \ldots, u_n]$. We get this elimination ideal $E$ by eliminating (using a suitable Gröbner basis) the $m+n+1$ variables $x_i$ and $\lambda_j$ from the $m+n+1$ equations: $$\begin{aligned} \left\{\begin{aligned} \begin{split} p_1(x_0, \ldots, x_n) &= 0 \\ &\vdots \\ p_m(x_0, \ldots, x_n) &= 0 \\ u_0 - \sum_{j=1}^{m}{\lambda_j \frac{\partial p_j}{\partial x_0}(x_0, \ldots, x_n)} &= 0 \\ &\vdots \\ u_i - \sum_{j=1}^{m}{\lambda_j \frac{\partial p_j}{\partial x_i}(x_0, \ldots, x_n)} &= 0 \\ &\vdots \\ u_n - \sum_{j=1}^{m}{\lambda_j \frac{\partial p_j}{\partial x_n}(x_0, \ldots, x_n)} &= 0 \\ \end{split} \end{aligned}\right. \label{ds}\end{aligned}$$ The $x_i$ are point coordinates and the $u_i$ hyperplane coordinates, but their roles can be interchanged. 
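The same setup can be sketched in Python with SymPy (an illustration of ours; the example pair of polynomials is the one also worked by hand below):

```python
from sympy import symbols, groebner

x0, x1, x2, x3, l1, l2 = symbols('x0 x1 x2 x3 l1 l2')
u0, u1, u2, u3 = symbols('u0 u1 u2 u3')

ps = [x2**2 - x3**2, x0 - x2]      # m = 2 homogeneous polynomials
xs = (x0, x1, x2, x3)
us = (u0, u1, u2, u3)
ls = (l1, l2)

# System (ds): p_j(x) = 0 and u_i - sum_j lam_j * dp_j/dx_i(x) = 0
eqs = list(ps) + [
    ui - sum(lj*pj.diff(xi) for lj, pj in zip(ls, ps))
    for xi, ui in zip(xs, us)
]

# Eliminate the m + n + 1 variables x_i, lam_j (lex order, x > lam > u);
# basis elements free of x and lam generate the elimination ideal E.
G = groebner(eqs, x0, x1, x2, x3, l1, l2, u0, u1, u2, u3, order='lex')
dual = [g for g in G.exprs if not (g.free_symbols & set(xs + ls))]
```

For this pair, the elimination ideal comes out generated by $u_1$ and $u_0^2+2u_0u_2+u_2^2-u_3^2$, matching the hand computation.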
With the Jacobi matrix of ${\bf p}({\bf x})=(p_1, p_2, \ldots, p_m)^t({\bf x})$: $$J = \begin{pmatrix} \frac{\partial p_1}{\partial x_0} & \frac{\partial p_1}{\partial x_1} & \ldots & \frac{\partial p_1}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial p_m}{\partial x_0} & \frac{\partial p_m}{\partial x_1} & \ldots & \frac{\partial p_m}{\partial x_n} \end{pmatrix}$$ and setting ${\bf \lambda}=(\lambda_1, \lambda_2, \ldots, \lambda_m)^t$ we can write the system (\[ds\]) vectorially as $$\begin{aligned} \left\{\begin{aligned} {\bf p}({\bf x})&={\bf 0} \\ {\bf u} - \sum_{j=1}^{m}{\lambda_j \nabla p_j({\bf x})} &= {\bf u}-J^t({\bf x}) {\bf \lambda} = {\bf 0} \\ \end{aligned}\right.\end{aligned}$$ For $m=1$ this is the same as system (\[ds1\]). We redefine the Gauß map: $$\gamma : {\bf x} \mapsto {\bf u} = J^t({\bf x}) {\bf \lambda}$$ We now dualize the following ideal $$\left( \begin{array}{c} x_{2}^{2}-x_{3}^{2}, \\ x_{0}-x_{2} \end{array} \right)$$ The system (\[ds\]) is here (written as ideal) $$\left( \begin{array}{c} x_{2}^{2}-x_{3}^{2}, \\ x_{0}-x_{2}, \\ -\lambda_{2}+u_{0}, \\ u_{1}, \\ -2x_{2}\lambda_{1}+\lambda_{2}+u_{2}, \\ 2x_{3}\lambda_{1}+u_{3} \end{array} \right)$$ A sorted Gröbner basis with the elimination property is $$\left( \begin{array}{c} u_{1}, \\ u_{0}^{2}+2u_{0}u_{2}+u_{2}^{2}-u_{3}^{2}, \\ \lambda_{2}-u_{0}, \\ 2x_{3}\lambda_{1}+u_{3}, \\ x_{2}u_{3}+x_{3}u_{0}+x_{3}u_{2}, \\ x_{2}u_{0}+x_{2}u_{2}+x_{3}u_{3}, \\ 2x_{2}\lambda_{1}-u_{0}-u_{2}, \\ x_{2}^{2}-x_{3}^{2}, \\ x_{0}-x_{2} \end{array} \right)$$ and we see that the first two elements generate the elimination ideal $E$ for this example $$\left( \begin{array}{c} u_{1}, \\ u_{0}^{2}+2u_{0}u_{2}+u_{2}^{2}-u_{3}^{2} \end{array} \right)$$ Main diagram ------------ We have, by denoting with $D(\sqrt{I})$ the dual of the radical ideal of $I$: $$\xymatrix{ \sqrt{I} \ar[d]_{\gamma} & & I \ar[d]_<<<<{\gamma}\ar[ll]_{\supseteq} & \\ D(\sqrt{I}) \ar[d]\ar[r]^{\subseteq} & 
\sqrt{D(\sqrt{I})} \ar[d]\ar@/^1.5pc/[rr]^>>>>{\subseteq}|\hole & D(I) \ar[d]\ar[r]^{\subseteq} & \sqrt{D(I)} \ar[d]\\ V(D(\sqrt{I})) & V(\sqrt{D(\sqrt{I})}) \ar[l]_{\supseteq} & V(D(I)) & V(\sqrt{D(I)}) \ar[l]_{\supseteq} \ar@/^1.5pc/[ll]^>>>>{\supseteq} }$$ (The bent arrow in the middle of the diagram needs a proof.) As an example explaining this diagram consider: $$I=\left( \begin{array}{c} z^{2}, \\ x+y-z \end{array} \right)$$ Then we have $$D(\sqrt{I})=\sqrt{D(\sqrt{I})}=\left( \begin{array}{c} x-y \end{array} \right)$$ $$D(I)=\left( \begin{array}{c} x-y, \\ y^{2}+2yz+z^{2} \end{array} \right)$$ $$\sqrt{D(I)}=\left( \begin{array}{c} y+z, \\ x-y \end{array} \right)$$ By adding one polynomial we get another example: $$I=\left( \begin{array}{c} z^{2}, \\ x+y-z, \\ x^{3}+y^{3} \end{array} \right)$$ $$D(\sqrt{I})=\sqrt{D(\sqrt{I})}=\left( \begin{array}{c} x-y \end{array} \right)$$ $$D(I)=\left( \begin{array}{c} xy-y^{2}+xz-yz, \\ x^{2}-2xy+y^{2} \end{array}\right)=\left( \begin{array}{c} (x-y)(y+z), \\ (x-y)^{2} \end{array} \right)$$ $$\sqrt{D(I)}=\left( \begin{array}{c} x-y \end{array} \right)$$ Examples ======== Steiner’s Roman surface ----------------------- I wrote a [Singular]{} procedure [dual]{} using the fast elimination provided by [Singular]{} with the combination of [hilb]{} and [eliminate]{} for computing the dual ideal $D$ of a given ideal $I$. The procedure [dual]{} takes as argument a finitely generated homogeneous ideal $I=\left( p_1, \ldots, p_m \right)$, where all generating polynomials $p_k$ are homogeneous and elements of the [basering]{}. A first example [Singular]{} input file for testing the functionality is given in the appendix. This file dualizes Steiner’s Roman surface (see [@hr] for other nice dualization examples): 1. First we set the [basering]{} to ${\mathbb{Q}}[x_{0},x_{1},x_{2},x_{3}]$. 2. 
As an example we want to dualize the ideal $$I=\left( \begin{array}{c} x_{1}^{2}x_{2}^{2}-x_{0}x_{1}x_{2}x_{3}+x_{1}^{2}x_{3}^{2}+x_{2}^{2}x_{3}^{2} \end{array} \right)$$ 3. We call procedure [dual]{} on this ideal and the first thing which it does is to adjoin auxiliary variables to our ring. The new ring is: $${\mathbb{Q}}[x_{0},x_{1},x_{2},x_{3},\lambda_{1},u_{0},u_{1},u_{2},u_{3}]$$ 4. The next step is constructing the following ideal generated by the system (\[ds\]) of equations, which in this case is $$S=\left( \begin{array}{c} x_{1}^{2}x_{2}^{2}-x_{0}x_{1}x_{2}x_{3}+x_{1}^{2}x_{3}^{2}+x_{2}^{2}x_{3}^{2}, \\ x_{1}x_{2}x_{3}\lambda_{1}+u_{0}, \\ -2x_{1}x_{2}^{2}\lambda_{1}+x_{0}x_{2}x_{3}\lambda_{1}-2x_{1}x_{3}^{2}\lambda_{1}+u_{1}, \\ -2x_{1}^{2}x_{2}\lambda_{1}+x_{0}x_{1}x_{3}\lambda_{1}-2x_{2}x_{3}^{2}\lambda_{1}+u_{2}, \\ x_{0}x_{1}x_{2}\lambda_{1}-2x_{1}^{2}x_{3}\lambda_{1}-2x_{2}^{2}x_{3}\lambda_{1}+u_{3} \end{array} \right)$$ 5. The Gr[ö]{}bner basis for $S$ w.r.t. the elimination order is (we only show some elements at the beginning and the end here because it is bigger than this page) $$G= \left( \begin{array}{c} 4u_{0}^{3}-u_{0}u_{1}^{2}-u_{0}u_{2}^{2}+u_{1}u_{2}u_{3}-u_{0}u_{3}^{2}, \\ x_{3}^{3}\lambda_{1}u_{0}u_{1}^{2}+x_{3}^{3}\lambda_{1}u_{0}u_{2}^{2}-x_{3}^{3}\lambda_{1}u_{1}u_{2}u_{3}+u_{0}^{2}u_{1}u_{2}-2u_{0}^{3}u_{3}, \\ 4x_{3}^{3}\lambda_{1}u_{0}^{2}-x_{3}^{3}\lambda_{1}u_{3}^{2}+u_{0}u_{1}u_{2}-2u_{0}^{2}u_{3}, \\ \cdots, \\ x_{0}^{2}x_{2}x_{3}\lambda_{1}-4x_{1}^{2}x_{2}x_{3}\lambda_{1}-4x_{2}^{3}x_{3}\lambda_{1}-2x_{0}x_{1}x_{3}^{2}\lambda_{1}+x_{0}u_{1}+2x_{2}u_{3}, \\ x_{0}^{2}x_{1}x_{3}\lambda_{1}-4x_{1}^{3}x_{3}\lambda_{1}-4x_{1}x_{2}^{2}x_{3}\lambda_{1}-2x_{0}x_{2}x_{3}^{2}\lambda_{1}+x_{0}u_{2}+2x_{1}u_{3} \end{array} \right)$$ 6. 
The elimination ideal $E$ consists here only of the first element of $G$ $$E=\left( \begin{array}{c} 4u_{0}^{3}-u_{0}u_{1}^{2}-u_{0}u_{2}^{2}+u_{1}u_{2}u_{3}-u_{0}u_{3}^{2} \end{array} \right)$$ This is the dual ideal of $I$. We can interpret the $x$ as point coordinates and the $u$ as hyperplane coordinates. But we want to be able to pass this ideal $E$ to [dual]{} and dualize it too! For this to work, we have to make one final step in [dual]{} and this is to map the $x$s to the $u$s and vice versa, giving as result the dual ideal of $I$ in point coordinates, which is $$D=\left( \begin{array}{c} 4x_{0}^{3}-x_{0}x_{1}^{2}-x_{0}x_{2}^{2}+x_{1}x_{2}x_{3}-x_{0}x_{3}^{2} \end{array} \right)$$ 7. When calling [dual]{} on $D$ we get $I$ as dual ideal of $D$. We won’t show the steps here, but the reader can generate them using the example input file from the appendix. The input file to [Singular]{} for this example generates the figure \[fig:steiner\] by using [surfex.lib]{} which in turn uses the program [surf]{}. ![Steiner surface and its dual surface[]{data-label="fig:steiner"}](ex03_steiner.jpg){width="8cm"} Parametrized quadric -------------------- We present now another example, the forth-and-back dualization of a somewhat special quadric. 1. Our ring is at the beginning ${\mathbb{Q}}(a_{1},a_{2},a_{3},b_{0},b_{1})[x_{0},x_{1},x_{2},x_{3}]$. 2. The quadric is given by the zero set of the following ideal (this zero set is a projective algebraic set) $$Q=\left( \begin{array}{c} -b_{0}x_{0}^{2}+2b_{1}x_{0}x_{1}-\frac{b_{1}-1}{a_{1}}x_{1}^{2}+\frac{1}{a_{2}}x_{2}^{2} \end{array} \right)$$ 3. We adjoin auxiliary variables and the new ring is $${\mathbb{Q}}(a_{1},a_{2},a_{3},b_{0},b_{1})[x_{0},x_{1},x_{2},x_{3},\lambda_{1},u_{0},u_{1},u_{2},u_{3}]$$ 4. 
Here we construct the system (\[ds\]) of equations $$S=\left( \begin{array}{c} -b_{0}x_{0}^{2}+2b_{1}x_{0}x_{1}-\frac{b_{1}-1}{a_{1}}x_{1}^{2}+\frac{1}{a_{2}}x_{2}^{2}, \\ 2b_{0}x_{0}\lambda_{1}-2b_{1}x_{1}\lambda_{1}+u_{0}, \\ -2b_{1}x_{0}\lambda_{1}+\frac{2b_{1}-2}{a_{1}}x_{1}\lambda_{1}+u_{1}, \\ -\frac{2}{a_{2}}x_{2}\lambda_{1}+u_{2}, \\ u_{3} \end{array} \right)$$ 5. The Gr[ö]{}bner basis of $S$ is $$G=\left( \begin{array}{c} u_{3}, \\ (b_{1}-1)u_{0}^{2}+2a_{1}b_{1}u_{0}u_{1}+a_{1}b_{0}u_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})u_{2}^{2}, \\ 2x_{2}\lambda_{1}-a_{2}u_{2}, \\ (a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})x_{1}u_{2}-a_{1}b_{1}x_{2}u_{0}-a_{1}b_{0}x_{2}u_{1}, \\ (2a_{1}b_{1}^{2}-2b_{0}b_{1}+2b_{0})x_{1}\lambda_{1}-a_{1}b_{1}u_{0}-a_{1}b_{0}u_{1}, \\ a_{2}b_{0}x_{0}u_{2}-a_{2}b_{1}x_{1}u_{2}+x_{2}u_{0}, \\ a_{1}b_{0}x_{0}u_{1}-(b_{1}-1)x_{1}u_{0}-2a_{1}b_{1}x_{1}u_{1}-a_{1}b_{1}x_{2}u_{2}, \\ x_{0}u_{0}+x_{1}u_{1}+x_{2}u_{2}, \\ 2b_{0}x_{0}\lambda_{1}-2b_{1}x_{1}\lambda_{1}+u_{0}, \\ a_{1}a_{2}b_{0}x_{0}^{2}-2a_{1}a_{2}b_{1}x_{0}x_{1}+(a_{2}b_{1}-a_{2})x_{1}^{2}-a_{1}x_{2}^{2} \end{array} \right)$$ 6. The elimination ideal (the dual of $Q$) is $$E=\left( \begin{array}{c} (b_{1}-1)u_{0}^{2}+2a_{1}b_{1}u_{0}u_{1}+a_{1}b_{0}u_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})u_{2}^{2}, \\ u_{3} \end{array} \right)$$ 7. We substitute $u \mapsto x$ in $E$ and get $$D=\left( \begin{array}{c} (b_{1}-1)x_{0}^{2}+2a_{1}b_{1}x_{0}x_{1}+a_{1}b_{0}x_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})x_{2}^{2}, \\ x_{3} \end{array} \right)$$ This ideal $D$ can be now again dualized which constitutes our next example. 1. After adjoining the auxiliary variables, our ring is $${\mathbb{Q}}(a_{1},a_{2},a_{3},b_{0},b_{1})[x_{0},x_{1},x_{2},x_{3},\lambda_{1},\lambda_{2},u_{0},u_{1},u_{2},u_{3}]$$ Notice here that – since our ideal $D$ has two generators – we have to adjoin two $\lambda$s (the procedures do this automatically). 2. 
The system (\[ds\]) – in fact an ideal too – is here $$S_2=\left( \begin{array}{c} (b_{1}-1)x_{0}^{2}+2a_{1}b_{1}x_{0}x_{1}+a_{1}b_{0}x_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})x_{2}^{2}, \\ x_{3}, \\ -(2b_{1}-2)x_{0}\lambda_{1}-2a_{1}b_{1}x_{1}\lambda_{1}+u_{0}, \\ -2a_{1}b_{1}x_{0}\lambda_{1}-2a_{1}b_{0}x_{1}\lambda_{1}+u_{1}, \\ -(2a_{1}a_{2}b_{1}^{2}-2a_{2}b_{0}b_{1}+2a_{2}b_{0})x_{2}\lambda_{1}+u_{2}, \\ -\lambda_{2}+u_{3} \end{array} \right)$$ 3. The corresponding Gr[ö]{}bner basis is $$G_2=\left( \begin{array}{c} a_{1}a_{2}b_{0}u_{0}^{2}-2a_{1}a_{2}b_{1}u_{0}u_{1}+(a_{2}b_{1}-a_{2})u_{1}^{2}-a_{1}u_{2}^{2}, \\ \lambda_{2}-u_{3}, \\ x_{3}, \\ (2a_{1}a_{2}b_{1}^{2}-2a_{2}b_{0}b_{1}+2a_{2}b_{0})x_{2}\lambda_{1}-u_{2}, \\ a_{1}x_{1}u_{2}-a_{1}a_{2}b_{1}x_{2}u_{0}+(a_{2}b_{1}-a_{2})x_{2}u_{1}, \\ (2a_{1}^{2}b_{1}^{2}-2a_{1}b_{0}b_{1}+2a_{1}b_{0})x_{1}\lambda_{1}-a_{1}b_{1}u_{0}+(b_{1}-1)u_{1}, \\ x_{0}u_{2}+a_{2}b_{0}x_{2}u_{0}-a_{2}b_{1}x_{2}u_{1}, \\ (b_{1}-1)x_{0}u_{1}-a_{1}b_{0}x_{1}u_{0}+2a_{1}b_{1}x_{1}u_{1}+a_{1}b_{1}x_{2}u_{2}, \\ x_{0}u_{0}+x_{1}u_{1}+x_{2}u_{2}, \\ (2b_{1}-2)x_{0}\lambda_{1}+2a_{1}b_{1}x_{1}\lambda_{1}-u_{0}, \\ (b_{1}-1)x_{0}^{2}+2a_{1}b_{1}x_{0}x_{1}+a_{1}b_{0}x_{1}^{2}+(a_{1}a_{2}b_{1}^{2}-a_{2}b_{0}b_{1}+a_{2}b_{0})x_{2}^{2} \end{array} \right)$$ 4. The new elimination ideal is $$\left( \begin{array}{c} a_{1}a_{2}b_{0}u_{0}^{2}-2a_{1}a_{2}b_{1}u_{0}u_{1}+(a_{2}b_{1}-a_{2})u_{1}^{2}-a_{1}u_{2}^{2} \end{array} \right)$$ 5. After mapping the $x$s and $u$s we finally get $Q$ again $$\left( \begin{array}{c} a_{1}a_{2}b_{0}x_{0}^{2}-2a_{1}a_{2}b_{1}x_{0}x_{1}+(a_{2}b_{1}-a_{2})x_{1}^{2}-a_{1}x_{2}^{2} \end{array} \right)$$ One might wonder why this looks different from the initial equation for $Q$, but if you multiply the original equation by the non-zero factor $-a_{1}a_{2} \neq 0$, an operation which does not change the zero set, you see that you get the same quadric. 
8-shaped space curve -------------------- In this example we intersect a sphere and a cylinder to get an 8-shaped space curve, which is given by the ideal: $$I = \left( \begin{array}{c} x^{2}+y^{2}-1, \\ x^{2}+y^{2}+z^{2}-2x-3 \end{array} \right)$$ You can see these surfaces and their intersection in figure \[fig:8begin\]. Now we dualize the ideal $I$ and get the following ideal (after dehomogenizing) with one polynomial as generator: $$D = \left( \begin{array}{c} 4x^{6}+12x^{4}y^{2}+12x^{2}y^{4}+4y^{6}-12x^{4}z^{2}-24x^{2}y^{2}z^{2}-12y^{4}z^{2}-15x^{2}z^{4}+\\ 12y^{2}z^{4}-4z^{6}+36x^{3}z^{2}+36xy^{2}z^{2}+18xz^{4}-8x^{4}-16x^{2}y^{2}-8y^{4}-20x^{2}z^{2}-\\ 20y^{2}z^{2}+z^{4}-4xz^{2}+4x^{2}+4y^{2} \end{array} \right)$$ $D$ is the dual of the 8-shaped space curve. Both are depicted in figure \[fig:8dual\]. ![8-shaped curve as intersection of cylinder and sphere[]{data-label="fig:8begin"}](ex09_8_begin.jpg){width="8cm"} ![8-shaped curve and its dual surface[]{data-label="fig:8dual"}](ex09_8_final.jpg){width="8cm"} The [Singular]{} code for this example is:

    // load libs
    LIB "duality.lib";
    LIB "surfex.lib";
    ring r=0,(t,x,y,z),dp;  // ring over Q
    short = 0;              // print polynomials with ^
    // two polynomials
    poly cylinder = x^2+y^2-1;
    poly sphere = (x-1)^2+y^2+z^2-2^2;
    // intersection ideal (a space curve shaped like an 8)
    ideal i1 = cylinder, sphere;  // inhomogeneous for plotting
    ideal i2 = homog(i1, t);      // homogeneous for dualising
    ideal d1 = dual(i2);          // dual of intersection ideal (a surface)
    poly d2 = subst(d1[1], t, 1); // dehomogenize dual for plotting
    d2;                           // show result
    // plot everything
    // (in surfex you may want to set transparency options for some surfaces)
    plotRotatedList(list(cylinder, sphere, i1, d2), list(x,y,z));

Examples from the introductory book [@GF94] ------------------------------------------- As further evidence that the procedure [dual]{} works, I dualize some plane algebraic curves from the book [@GF94] without including the intermediary output 
from [Singular]{}. The Neil parabola is given by $$\left( \begin{array}{c} x_{1}^{3}-x_{0}x_{2}^{2} \end{array} \right)$$ and its dual is $$\left( \begin{array}{c} 4x_{1}^{3}+27x_{0}x_{2}^{2} \end{array} \right)$$ You can see an illustration in figure \[fig:ex05\_neil\_parabola\] ![Neil parabola (red) and its dual[]{data-label="fig:ex05_neil_parabola"}](ex05_neil_parabola.jpg){width="4cm"} The Newton knot is given by $$\left( \begin{array}{c} x_{0}x_{1}^{2}+x_{1}^{3}-x_{0}x_{2}^{2} \end{array} \right)$$ and its dual cardioid is $$\left( \begin{array}{c} 4x_{0}x_{1}^{3}-4x_{1}^{4}+27x_{0}^{2}x_{2}^{2}-36x_{0}x_{1}x_{2}^{2}+8x_{1}^{2}x_{2}^{2}-4x_{2}^{4} \end{array} \right)$$ You can see an illustration in figure \[fig:ex06\_newton\_knot\] ![Newton knot (red) and its dual cardioid[]{data-label="fig:ex06_newton_knot"}](ex06_newton_knot.jpg){width="4cm"} The hypocycloid is given by $$\left( \begin{array}{c} x_{0}^{2}x_{1}^{2}-2x_{0}^{2}x_{1}x_{2}-2x_{0}x_{1}^{2}x_{2}+x_{0}^{2}x_{2}^{2}-2x_{0}x_{1}x_{2}^{2}+x_{1}^{2}x_{2}^{2} \end{array} \right)$$ and its dual is calculated as $$\left( \begin{array}{c} x_{0}^{3}+3x_{0}^{2}x_{1}+3x_{0}x_{1}^{2}+x_{1}^{3}+3x_{0}^{2}x_{2}-21x_{0}x_{1}x_{2}+3x_{1}^{2}x_{2}+3x_{0}x_{2}^{2}+3x_{1}x_{2}^{2}+x_{2}^{3} \end{array} \right)$$ You can see an illustration in figure \[fig:ex07\_hypocycloid\] ![Hypocycloid (red) and its dual[]{data-label="fig:ex07_hypocycloid"}](ex07_hypocycloid.jpg){width="4cm"} The next example was inspired by the Klein quartic. 
I changed one of the ellipses to a hyperbola and the generated ideal is $$\left( \begin{array}{c} x_{0}^{4}-\frac{5}{4}x_{0}^{2}x_{1}^{2}+\frac{1}{4}x_{1}^{4}-\frac{3}{4}x_{0}^{2}x_{2}^{2}+\frac{15}{16}x_{1}^{2}x_{2}^{2}-\frac{1}{4}x_{2}^{4}-\frac{1}{70}x_{0}^{2} \end{array} \right)$$ and its dual ideal is calculated by [Singular]{} as generated by the polynomial $$\left( \begin{array}{c} 12390875x_{0}^{12}-120264375x_{0}^{10}x_{1}^{2}+442991850x_{0}^{8}x_{1}^{4}-822808000x_{0}^{6}x_{1}^{6}+\\ 827628480x_{0}^{4}x_{1}^{8}-431827200x_{0}^{2}x_{1}^{10}+91888128x_{1}^{12}+2186625x_{0}^{10}x_{2}^{2}-\\ 148231125x_{0}^{8}x_{1}^{2}x_{2}^{2}+902043450x_{0}^{6}x_{1}^{4}x_{2}^{2}-1921126200x_{0}^{4}x_{1}^{6}x_{2}^{2}+\\ 1725988320x_{0}^{2}x_{1}^{8}x_{2}^{2}-560787840x_{1}^{10}x_{2}^{2}-116455850x_{0}^{8}x_{2}^{4}+713525750x_{0}^{6}x_{1}^{2}x_{2}^{4}-\\ 784988540x_{0}^{4}x_{1}^{4}x_{2}^{4}-703298400x_{0}^{2}x_{1}^{6}x_{2}^{4}+914535936x_{1}^{8}x_{2}^{4}+\\ 232142400x_{0}^{6}x_{2}^{6}-539359800x_{0}^{4}x_{1}^{2}x_{2}^{6}-507564960x_{0}^{2}x_{1}^{4}x_{2}^{6}-\\ 598014720x_{1}^{6}x_{2}^{6}-197686720x_{0}^{4}x_{2}^{8}+58816800x_{0}^{2}x_{1}^{2}x_{2}^{8}+119161344x_{1}^{4}x_{2}^{8}+\\ 80183040x_{0}^{2}x_{2}^{10}+32722560x_{1}^{2}x_{2}^{10}-12753408x_{2}^{12} \end{array} \right)$$ You can see an illustration in figure \[fig:ex08\_my\_quartic\]. 
![A quartic (red) inspired by Klein’s quartic and the dual curve[]{data-label="fig:ex08_my_quartic"}](ex08_my_quartic.jpg){width="4cm"} Appendix ======== The code for the Steiner surface example is:

    /////////////////////////////////////////////
    // This procedure calculates the dual ideal of the homogeneous ideal id
    // The output is a homogeneous ideal in the same ring
    /////////////////////////////////////////////
    proc dual(ideal I)
    {
      def R0=basering;
      if(npars(R0)>0)     {ERROR("Use a base ring without parameters!");};
      if(ord_test(R0)!=1) {ERROR("The base ring must have a global ordering!");};
      if(homog(I)!=1)     {ERROR("The input ideal must be homogeneous!");};
      // get some information about the base ring and the input ideal
      int n=nvars(R0);
      int m=ncols(I);
      // change variables and compute transposed Jacobi matrix of I
      def NR=changevar("x()",R0);
      setring NR;
      ideal I=fetch(R0, I);
      matrix J=transpose(jacob(I));
      // adjoin auxiliary variables to the ring
      def E1=extendring(m,"l()","dp",1,NR);
      def R=extendring(n,"u()","dp",1,E1);
      setring R;
      matrix J=fetch(NR, J);
      // set up system S
      ideal I=fetch(NR, I);
      matrix L=matrix([u(1..n)])-J*matrix([l(1..m)]);
      ideal S=I,L;
      // eliminate first m+n variables from S by Groebner bases method
      int j,k;
      poly prod=1;
      for(k=1;k<=n;k++){prod=prod*x(k);};
      for(j=1;j<=m;j++){prod=prod*l(j);};
      intvec v=hilb(std(S),1);
      ideal I1=eliminate(S,prod,v);
      // resubstitute variables, such that the output can be used again as input
      map f=R,(u(1..n),l(1..m),x(1..n));
      ideal I2=ideal(f(I1));
      // restore initial ring and return the result
      setring R0;
      export R0;
      return(fetch(R, I2));
    }
    /////////////////////////////////////////////
    LIB "surfex.lib";          // Load library
    ring R1 = 0,(x(0..3)),dp;  // Define ring
    // First example: Steiner's Roman surface
    ideal I = (x(1)*x(2))^2+(x(1)*x(3))^2+(x(2)*x(3))^2-x(0)*x(1)*x(2)*x(3);
    ideal D = dual(I);
    ideal DD = dual(D);
    I; D; DD;                  // show results
    // Plot with surfex
    ring R2 = 0,(x,y,z),dp;
    map f=(R1, 1,x,y,z);
    plotRotatedList(list(f(I),
    f(D)), list(x,y,z));

[GPS09]{}

: [*Ebene algebraische Kurven*]{}, Birkhäuser Boston, (1981).

: [*Gröbner Bases Tutorial*]{}, http://www.cs.amherst.edu/\~dac/lectures/gb1.handout.pdf, (2007).

: [*Ebene algebraische Kurven*]{}, Vieweg Verlag, Braunschweig/Wiesbaden, (1994).

: [*An introduction to Gröbner bases*]{}, Pure and Applied Mathematics, Wiley-Interscience Series of Texts, Monographs, and Tracts. Chichester: John Wiley and Sons, (1997).

: [*Vorlesungen über höhere Geometrie*]{}, Vieweg Verlag, Braunschweig, (1982).

: [*A [Singular]{} Introduction to Commutative Algebra*]{}, 2nd Edition. Springer Verlag, Berlin, Heidelberg, New York, (2007).

: [Singular]{} 3.1.0 – [*A computer algebra system for polynomial computations*]{}, http://www.singular.uni-kl.de, (2009).

: [*Vorlesungen über analytische Geometrie des Raumes*]{}, K.F. Koehler Verlag, Leipzig, (1940).

: [*Images of the polar maps for hypersurfaces*]{}, arXiv:0811.0754v1 \[math.AG\], (5 Nov 2008).

: [*Polar and dual varieties of real curves and surfaces*]{}, http://www.ima.umn.edu/2006-2007/W9.18-22.06/activities/Piene-Ragni/Piene\_190906.pdf, (2006).

: [*Computational line geometry*]{}, Springer Verlag, (2001).

: [*An invitation to algebraic geometry*]{}, Springer Verlag, (2000).
--- author: - Yu Hu - James Trousdale - Krešimir Josić - 'Eric Shea-Brown' title: Motif Statistics and Spike Correlations in Neuronal Networks --- Acknowledgements {#acknowledgements .unnumbered} ================ We thank Chris Hoffman and Brent Doiron for their helpful insights. This work was supported by NSF grants DMS-0817649, DMS-1122094, and a Texas ARP/ATP award to KJ, and by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and NSF Grants DMS-1056125 and DMS-0818153 to ESB.
--- abstract: 'We present optical candidates for 75 X-ray sources in a $\sim 1$ deg$^2$ region overlapping the medium deep ROSAT survey (Molthagen et al. 1997). These candidates are selected using the multi-color CCD imaging observations made for the T329 field of the Beijing-Arizona-Taipei-Connecticut (BATC) Sky Survey, which utilizes the NAOC 0.6/0.9m Schmidt telescope with 15 intermediate-band filters covering the wavelength range 3360-9745 Å. These X-ray sources are relatively faint (CR $\ll 0.2\ \rm s^{-1}$) and thus are mostly not included in the RBS catalog; they also remained X-ray sources without optical candidates in a previous identification program carried out by the Hamburg Quasar Survey. Almost all the X-ray sources are observed to have one or more spatially associated optical candidates within their position-error circles down to the magnitude $m_V \sim 23.1$. We have classified 149 of the 156 optical candidates detected for 73 of the 75 X-ray sources with a new method, which we have termed the SED-based Object Classification Approach (SOCA), that also predicts a redshift for non-stellar objects. These optical candidates include: 31 QSOs, 39 stars, 37 starburst galaxies, 42 galaxies, and 7 “just” visible objects. Twenty-eight X-ray error circles have only one visible object in them: 9 QSOs, 3 normal galaxies, 8 starburst galaxies, 6 stars, and two of the “just” visible objects. We have also cross-correlated the positions of these optical objects with NED, the FIRST radio source catalog and the 2MASS catalog. Separately, we have also SED-classified the remaining 6011 objects in our field of view. Optical objects are found in the error circles at the $6.5\sigma$ level above what one would expect from a random distribution; only QSOs are over-represented in these error circles at greater than 4$\sigma$ frequency. 
We estimate redshifts for all extragalactic objects, and find a good correspondence of our predicted redshifts with the measured redshifts (a mean error of 0.04 in $\Delta z$). There appears to be a supercluster at z $\sim$ 0.3-0.35 in this direction, and many of the galaxies in the X-ray error circles are found in this redshift range.' author: - | Haotong Zhang, Suijian Xue, David Burstein,\ Xu Zhou, Zhaoji Jiang, Hong Wu, Jun Ma, Jiansheng Chen,\ and Zhenlong Zou title: 'Multicolor Photometric Observations of Optical Candidates to Faint ROSAT X-ray Sources in a 1 deg$^2$ field of the BATC Survey' --- [**keywords:**]{} X-rays: galaxies - galaxies: active - catalog: surveys Introduction ============ Combined optical and X-ray data allow one to obtain information about the luminosity functions of various types of X-ray sources as well as their evolution with redshift. In turn, this information can be used to further constrain models for the production of the X-ray background at different flux levels. Much effort has so far been devoted to the optical identification of the X-ray sources in the ROSAT/Bright Source (RBS) catalog (e.g., Voges et al. 1999; Rutledge et al. 2000) as well as of X-ray sources in some individual ROSAT deep survey observations (e.g., Lehmann et al. 2001). Yet, it is often unknown how many of the detections that occur in X-ray error circles are real associations of optical counterparts with these X-ray sources, and how many of these associations are due to random chance. To assess this, one needs to have detected and identified all optical objects in a given image, and then see what percentages of these objects (QSOs, galaxies, stars) are found near or within the areas covered by the X-ray error circles. 
This is precisely the kind of data we have for 75 X-ray sources detected with the ROSAT PSPC to a flux limit $S_x({\rm 0.1-2.4 keV}) \geq 5.3\times10^{-14}$, in a 1 deg$^2$ field of view, as this field of view was also observed in our multicolor images for the Beijing-Arizona-Taipei-Connecticut Sky Survey (BATC survey). The relevant X-ray and optical data are presented in § 2. Details of the object classification procedure as well as the selection of the X-ray candidates are given in § 3. Associated information that can be gleaned from these data is given in § 4. We summarize our results in § 5. The data and analysis ===================== The X-ray data -------------- The X-ray data come from a catalog obtained from a medium deep ROSAT survey in the HQS field HS 47.5/22 (Molthagen, Wendker, & Briel, 1997). The survey consists of 48 overlapping ROSAT PSPC pointings which were added up to produce a final catalog containing 574 X-ray sources with broad band (0.1-2.4 keV) count rates between $\sim3\times10^{-3}\rm cts\ s^{-1}$ and $\sim0.2\rm cts\ s^{-1}$, in a field of view (FOV) of $\sim 2.3\rm\ deg^2$. Molthagen et al. adopt an X-ray error circle of 2$\sigma$ + 10$''$ in radius, with the value of $\sigma$ coming from their observations. This is the X-ray error circle used in the present analysis. There was a preliminary identification of these X-ray sources with the HQS plates (Molthagen et al. 1997). Only a few objects, all brighter than $m_B\approx18^m.5$, have recognizable spectra. At $m_{\rm B}>18^m.5$, many objects are generally classified as weak and extremely blue, blue or red. For many X-ray sources no spectral classification was possible, the optical object simply being visible or the field of view empty. 75 of the 574 HQS sources fall on one program field of the BATC survey, T329, centered at 09:56:24.46, +47:35:08.4 (J2000), forming a subsample of the ROSAT medium deep survey in a 1 deg$^2$ field. 
(One-third of the BATC fields are centered on a known quasar. For field T329, this quasar is PC0953+4749 with z = 4.46, originally discovered by Schneider, Schmidt & Gunn 1991. Ironically, this QSO is not an X-ray source in the HQS field.) The X-ray brightness distribution of these 75 sources is shown in Fig.\[f1\]. The distribution of these 75 sources in our field of view is shown in Fig.\[x\_dist\]. Molthagen et al. associate 25 optical candidates with these 75 X-ray sources, or a frequency of 1/3: 6 QSOs or active galaxies; 7 QSO/active galaxy candidates (classified as such or extremely blue); 1 star; 8 stellar candidates; 1 galaxy candidate; 2 faint red objects; 5 unidentified spectra (including overlaps); 39 visible on the HQS direct plate only; and 6 empty fields (i.e., no counterpart on the HQS plate). The BATC optical data --------------------- Optical observations of BATC field T329 were carried out from 1996-1999 as part of the BATC Survey. Our survey utilizes the 0.6/0.9m Schmidt telescope of the Xinglong Observing Station of the National Astronomical Observatory of China (NAOC), equipped with 15 intermediate-band filters covering the wavelength range 3360-9745 Å. With this facility our survey is designed to do multi-color CCD ($2048\times2048$) imaging of 500 selected, $\sim 1$ deg$^2$ fields-of-view for multiple scientific purposes (cf. Fan et al. 1996; Shang et al. 1998; Zheng et al. 1999; Zhou et al. 1999; Yan et al. 2000; Kong et al. 2000; Ma et al. 2002; Wu et al. 2002). The dataset for T329 consists of a number of individual direct CCD images in each of the 15 BATC passbands. These images are first treated individually (bias, dark and flat-fielding corrections) and then combined to produce deep images. Information on the passbands used for the present study, including filter parameters, total exposure time, number of flux calibration images obtained, and the magnitude limit for that passband, is given in Table \[table1\]. 
Details on the BATC flux calibration procedure are given in several previous papers (Fan et al. 1996; Zhou et al. 1999; Yan et al. 2000) and the reader is referred to those papers for this information. Further discussion of the observations made in field T329 that are separate from the X-ray identification issue is given in Zhou et al. (2003). The final product of the BATC observations of field T329 is a catalog of 6160 point-like optical objects in our $58'\times58'$ field of view, with astrometry and photometry in 15 colors. SED classification ------------------ We are in the process of developing a SED-based Object Classification Approach (termed SOCA) for the BATC photometric system (Zhang, et al., in preparation). The SED of each object in our field of view, observed through $n$ filters, is compared to the SED computed for a set of template spectra. The aim is to find the best fit between the observed photometry and the model photometry through a standard $\chi^2$ minimization procedure: $$\chi^2=\sum^n_{i=1}\left(\frac{f^{obs}_i-A \cdot f^{temp}_i}{{\sigma}^{obs}_i}\right)^2$$ where $f^{obs}_i$ and $f^{temp}_i$ are the observed and the template fluxes in the $i$th band respectively, ${\sigma}^{obs}_i$ is the error on the observed flux in this band, and $A$ is the normalization constant, which is calculated by minimizing $\chi^2$. In order to apply this method to SED classification, we currently employ three sets of template spectra: 1. The stellar library of Gunn & Stryker (1983) is used, including most spectral classes in the MK system (this will be updated when new data are available); 2. The observed spectra of nearby galaxies (Kinney et al. 1996), including normal galaxies (elliptical, S0, Sa, Sb and Sc) and starburst (SB) galaxies (SB1-6) with different internal extinctions, are used. Normal galaxies are redshifted from 0 to 1.0 in steps of 0.01; SB galaxies are redshifted from 0 to 1.5 in steps of 0.01. 3. A QSO template set is composed of a series of simulated quasar spectra. 
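The $\chi^2$ minimization above admits a closed-form normalization: setting $\partial\chi^2/\partial A = 0$ gives $A = \sum_i f^{obs}_i f^{temp}_i/(\sigma^{obs}_i)^2 \big/ \sum_i (f^{temp}_i)^2/(\sigma^{obs}_i)^2$. A minimal sketch of this fit and of the template selection (our illustration, not the SOCA code itself):

```python
# Minimal sketch of the chi^2 template fit: for each template, the
# normalization A minimizing chi^2 has the closed form
# A = sum(f_obs*f_temp/sigma^2) / sum(f_temp^2/sigma^2).

def chi2_fit(f_obs, sigma, f_temp):
    """Return (chi2, A) for one template fitted to one observed SED."""
    num = sum(fo * ft / s**2 for fo, ft, s in zip(f_obs, f_temp, sigma))
    den = sum(ft**2 / s**2 for ft, s in zip(f_temp, sigma))
    A = num / den
    chi2 = sum(((fo - A * ft) / s)**2 for fo, ft, s in zip(f_obs, f_temp, sigma))
    return chi2, A

def classify(f_obs, sigma, templates):
    """Return (chi2, name) of the template with the minimum chi^2."""
    return min((chi2_fit(f_obs, sigma, ft)[0], name) for name, ft in templates.items())
```

The same machinery is applied to every template class (star, galaxy, QSO, starburst galaxy), and the class of the overall minimum-$\chi^2$ template is adopted.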
These spectra have been constructed by fixing the emission line intensity ratios (cf. Wilkes 1986), while varying the Ly$\alpha$ equivalent width (65 $\pm$ 34 Å) and the continuum index $\alpha$ (-0.75 $\pm$ 0.5). Ly$\alpha$ forest absorption has been modeled according to M[ø]{}ller & Jakobsen (1990) and Madau (1995). Redshift estimates are set between 0.0 and 6.0 in steps of 0.01 in $z$. Representative template spectra used in the present paper are given in Fig.\[f2\]. The template SEDs are obtained by convolving the template spectra with the measured passband of each filter. As the template SEDs are morphologically classified, some templates may represent two or more morphologically similar classes. For example, an SED classified as a starburst galaxy can also possibly be matched to that of a QSO. A value of $\chi^2$ is calculated for the correspondence of every template to each object SED. The minimum $\chi^2$ for each kind of template (star/galaxy/QSO/starburst galaxy) is calculated. The template with the minimum $\chi^2$ fit is taken as the best fit. In this fitting process we include only those objects observed in at least 5 filters (thereby excluding, e.g., heavily saturated stars). The redshift estimates found for non-stellar objects (galaxies, QSOs) by this template-fitting process are useful for statistical studies of this field of view. Optical Candidates ================== Optical Objects near or within the X-Ray Error Circles ------------------------------------------------------ The CCD limiting magnitudes range from 20.5 to 23.5 mag, tending to be fainter in the bluer filters, and brighter in the more sky-limited redder filters (cf. Table \[table1\]). Our deeper, direct CCD observations, combined with our ability to classify the SEDs of the objects detected, permit us both to detect more objects than the HQS survey, and to classify more of the objects detected. 
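The redshift grid search just described can be sketched as follows. This is our illustration with idealized top-hat passbands, not the BATC pipeline, which convolves with the measured filter curves; a rest-frame template is shifted to each trial $z$, synthetic band fluxes are formed, and the $z$ minimizing $\chi^2$ (with the analytic normalization) is kept.

```python
# Illustrative sketch of the redshift grid search (assumed machinery):
# shift a rest-frame template to each trial z, synthesize fluxes in
# top-hat passbands, and keep the z with the minimum chi^2.

def band_flux(template, lo, hi, z, n=50):
    """Mean observed-frame flux of a redshifted template in a top-hat band."""
    lams = [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]
    return sum(template(l / (1.0 + z)) for l in lams) / n

def best_redshift(f_obs, sigma, bands, template, z_lo=0.0, z_hi=6.0, dz=0.01):
    best = (float("inf"), None)
    z = z_lo
    while z <= z_hi + 1e-9:
        f_t = [band_flux(template, lo, hi, z) for lo, hi in bands]
        num = sum(fo * ft / s**2 for fo, ft, s in zip(f_obs, f_t, sigma))
        den = sum(ft**2 / s**2 for ft, s in zip(f_t, sigma))
        A = num / den if den > 0 else 0.0
        chi2 = sum(((fo - A * ft) / s)**2 for fo, ft, s in zip(f_obs, f_t, sigma))
        best = min(best, (chi2, round(z, 2)))
        z += dz
    return best[1]
```

With one strong emission line in the template, the minimum is sharply localized: the line must land in the right passband (and split across band edges in the right proportion) for $\chi^2$ to be small.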
The total area covered by the X-ray searches we have done for the 75 X-ray sources corresponds to 31.52 arcmin$^2$, or 0.00937 of the 3364 arcmin$^2$ sky area subtended by the BATC CCD. This area includes additional area searched beyond the nominal 2$\sigma$ error circle for 13 (17%) of the X-ray-detected objects, most of these within 1-2$''$ of the original error circle. The 6160 objects detected in the full image field were selected with the same criteria as those we use for the X-ray error circles. If the optical objects and the X-ray sources are randomly associated, we expect to detect $31.52/3364 \times 6160 = 58$ optical objects. Our observations find optical candidates (stars, galaxies, galaxy groups, starburst galaxies, QSOs) in 73 of the 75 X-ray error circles. We detect a total of 156 optical objects in these 75 X-ray error circles. Of these we can definitely SOCA-identify 140 and tentatively identify 9 more (7 galaxies and 2 stars); 7 objects are only “visible,” and one X-ray circle (RXJ0955.5+4735) is blank in our image (though two stars lie just outside this error circle). This makes a total of 149 optically-identified candidates found in the BATC catalog that can be SOCA-classified and are found in or near 73 of the 75 X-ray error circles in our field of view. One of the two remaining X-ray sources (RXJ0954.0+4756) has a “just” visible object within the X-ray error circle that has a position coincident with a known radio source, but is too faint to yield a reliable SED. The other remaining source (RXJ0953.7+4722) also has one “just” visible source within its error circle. The difference between the number of objects detected in the X-ray circles and the number randomly expected is $149 - 58 = 91\pm14$ (assuming Gaussian errors). The difference between detected objects and random placement of objects in these X-ray error circles is thus significant at the 91/14 = 6.5$\sigma$ level. 
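The chance-coincidence arithmetic above can be checked in a few lines (our sketch; the quoted $\pm 14$ follows from Gaussian errors on the two counts, which the text rounds before forming $91/14 = 6.5$):

```python
# Back-of-envelope check of the chance-coincidence estimate: the X-ray
# error circles cover 31.52 of the 3364 arcmin^2 CCD field, so a random
# scattering of the 6160 cataloged objects would drop ~58 of them into
# the circles, against 149 actually classified there.

def random_expectation(area_circles, area_field, n_objects):
    return area_circles / area_field * n_objects

expected = random_expectation(31.52, 3364.0, 6160)   # ~57.7, i.e. 58
observed = 149
excess = observed - round(expected)                  # 91
err = (observed + round(expected)) ** 0.5            # ~14.4, quoted as 14
significance = excess / err                          # ~6.3 (6.5 with err = 14)
```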
It would appear that the X-ray circles do tend to include more objects than randomly placed circles put on the rest of this field of view, when the data are sampled to faint magnitude levels. Table \[table2\] gives the relevant information on the optical candidates that are associated with these X-ray sources. The first 4 columns in Table \[table2\] come from the original X-ray catalog: X-ray source name, brightness in the 0.1–2.4 keV passband, 2$\sigma$ error circle radius in units of arcseconds, and the original HQS identification. The label assigned to each candidate optical object, a,b,c,$\ldots$, plus the observed position of the optical candidate (in J2000 coordinates), are given in the next three columns. Columns 8-12 give the derived information for each optical candidate: $\Delta r$ is the offset of the optical candidate position from the center of the X-ray error circle; $m_V$ denotes the V magnitude of the optical candidate (an upper limit is given if the candidate is only visible, but not measurable), derived from the relation $m_{\rm V}=m_{\rm g} + 0.3233(m_{\rm f} - m_{\rm h}) + 0.0590$ (Zhou et al., 2003); $f_{xo}$ is the ratio of X-ray to optical flux, calculated from the 0.1-2.4 keV count-rate and optical V magnitude, viz. $f_{\rm xo} = {\rm log}(f_x/f_o) = {\rm log(PSPC\ counts/s}\times10^{-11}) +0.4m_V + 5.37$ (Maccacaro et al. 1988); Pred $z$ is the redshift that the SOCA estimates for galaxies and QSOs; where there are known candidates that are clearly identified on our images, the identity of these candidates is given. Finding charts of all the X-ray sources in our summed image at 7050 Å (our j filter) are given in Fig.\[idt\]. The SEDs for all 149 SOCA-identified optical candidates in or near 73 of the 75 X-ray source error circles are given in Fig. \[sed\], in which the best template fit is also plotted for each optical candidate. 
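The two conversion relations just quoted translate directly into code (coefficients exactly as given in the text; this is our transcription, not released BATC software):

```python
# The V magnitude from BATC g, f, h magnitudes (Zhou et al. 2003) and the
# X-ray-to-optical flux ratio (Maccacaro et al. 1988), as quoted in the text.
from math import log10

def batc_to_V(m_g, m_f, m_h):
    """V magnitude from the BATC g, f and h magnitudes."""
    return m_g + 0.3233 * (m_f - m_h) + 0.0590

def fxo(pspc_count_rate, m_V):
    """log10(f_x/f_o) from the 0.1-2.4 keV PSPC count rate and V magnitude."""
    return log10(pspc_count_rate * 1e-11) + 0.4 * m_V + 5.37
```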
The label on each SED gives the template plotted, the predicted redshift (if a galaxy or QSO), and the value of the $\chi^2$ fit. In the case of two known objects (a nearby galaxy and a bright star; see next section), their previously-known identifications are given in place of the SOCA classification in Table \[table2\]. Candidate Associations ---------------------- Most X-ray sources contain more than one optical candidate within their error circles. Choosing which one is the probable X-ray source is educated guesswork at best. Rather than assign probabilities of the likelihood of each optical candidate’s association with these X-ray sources, we prefer to give the reader the statistics of how the candidates relate to the full data on the 6160 objects found in our field of view. In the 75 X-ray error circles, to a magnitude limit of V $\approx$ 23, we have found: 31 QSOs, 39 stars, 37 starburst galaxies, 42 galaxies, and 7 “only visible” objects. If we take the 6160 objects we find in our field of view, excluding the objects within the X-ray error circles, the analogous counts for these objects are: 341 QSOs, 1912 stars, 1508 starburst galaxies, 2076 galaxies, and 174 objects unclassified for a variety of reasons (too few filters observed, in the halo of a bright star, etc.). If we assume these objects are randomly distributed in this field of view, we expect random associations within our X-ray error circles to be 3.2 QSOs, 17.9 stars, 14.1 starburst galaxies, and 19.5 galaxies (discarding the unclassified objects). Therefore, the excesses of objects found in the X-ray error circles over the random placement of those identified objects in the field of view are: QSOs: $31-3.2 =27.8\pm5.8$; stars: $39-17.9 = 21.1\pm7.9$; starburst galaxies: $37-14.1 =22.9\pm7.1$; and galaxies: $42-19.5 = 22.5\pm7.8$. On the plus side, all classes of objects are more represented within the X-ray error circles than what would randomly be there. 
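The per-class expectations can be reproduced with the same chance-coincidence bookkeeping (our sketch; the field counts are those outside the error circles, scaled by the 0.00937 area fraction):

```python
# Per-class chance-coincidence bookkeeping: field counts outside the error
# circles, scaled by the area fraction, give the random expectations
# quoted in the text, and the excess over them the per-class significance.
field_counts = {"QSO": 341, "star": 1912, "starburst": 1508, "galaxy": 2076}
in_circles   = {"QSO": 31,  "star": 39,   "starburst": 37,   "galaxy": 42}
area_fraction = 31.52 / 3364.0                      # ~0.00937

for cls, n_field in field_counts.items():
    expected = area_fraction * n_field
    excess = in_circles[cls] - expected
    err = (in_circles[cls] + expected) ** 0.5       # Gaussian error on the difference
    print(f"{cls}: excess {excess:.1f} +/- {err:.1f} ({excess / err:.1f} sigma)")
```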
On the minus side, only the QSOs have a highly significant overdensity ($4.7\sigma$) within the X-ray error circles, while stars, starburst galaxies and galaxies are only there at the 2.8-3.2$\sigma$ level. In Figs. \[stardis\]-\[qsodis\] we show the distribution of each kind of object (QSO, star, starburst galaxy, galaxy) in our field of view, relative to the placement and size of the X-ray error circles. That QSOs are the most statistically reliable detections comes as little surprise, as this was already well-known (e.g., Shanks et al. 1991, Georgantopoulos et al. 1996, McHardy et al. 1998). We find only one optical candidate in 28 of these error circles, these being: 9 QSOs, 3 normal galaxies, 8 starburst galaxies, 6 Galactic stars, and 2 candidates that are “just” visible on our image. The single candidates in these X-ray error circles have an asterisk by their SOCA classification in Table \[table2\]. Supplemental Data and Redshift Distributions ============================================ Radio and Near-IR identifications --------------------------------- We have cross-correlated the positions of all of the X-ray sources in our field-of-view with positions in the FIRST (Faint Images of the Radio Sky at Twenty centimeters) radio survey, the infrared 2MASS (2 Micron All Sky Survey) catalog, and the NED catalog. We find that seven of the X-ray error circles have FIRST radio sources within them that are coincident with optical sources, in addition to one associated with another radio source (HS0954+4815 as given by NED; see Table \[table2\]). Of these 8 associated radio sources, just two do not have SEDs in our data, and only three have another object in their X-ray error circle. There are a total of 100 radio sources in our image. The probability of having one radio source within 2$''$ of any optical source in this image is 0.47 (taking the joint probability). 
This puts the probability that all eight position matches of radio sources with optical sources occur by chance at $2.3\times10^{-6}$. Hence, most of the radio sources are likely associated with their optical counterparts. In contrast, of the seven 2MASS objects with positions coincident with our optical candidates, only 2 are classified as galaxies \[one normal galaxy, RXJ0953.8+4740(a), and one starburst galaxy, RXJ0956.9+4731(a)\], and five are identified with late-type stars that are the dominant stellar candidates within these particular X-ray error circles. Spectroscopic Observations and Objects of Special Note ------------------------------------------------------ We have obtained spectra of a subsample of our optically-detected candidates to test our classification method and SOCA redshift estimates. These spectra were taken with the slit spectrograph on NAOC’s 2.16 m telescope at its Xinglong Observing Station, and the Multiobject Fiber Spectrograph (MOFS) on the 6m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences. Nine of these spectra are shown in Fig.\[spec\], with the BATC fluxes overlaid. Included are 7 QSOs, 1 starburst galaxy and the HII galaxy associated with UGC 5354. In addition, a search of the SIMBAD and NASA Extragalactic Database (NED) catalogs comes up with redshifts for two additional QSOs associated with these X-ray sources (cf. Table \[table2\]). The correspondence of spectroscopically-determined redshifts with our SOCA-determined redshifts is excellent for 10 of the objects (a mean error in $z$ of 0.04 for these 10 objects), while it is off by 1.03 for one additional object (RXJ0958.5+4738(a); shown in Fig. \[spec\]). Examination in detail of the SOCA fitting procedure for this one object shows that the BATC filter system mistakes the emission line in the 9190 [Å]{} filter for \[OII\] 3727, while the spectrum shows this must be H$\alpha$. 
We have also made careful visual inspections of each of the optical counterparts within the X-ray error circles, relating their visual appearance to the SEDs we obtain for them (something the reader can also do, using Fig. \[sed\] and Fig. \[idt\]). We give special note to the objects found in the following X-ray sources: RXJ0953.8+4740: There is evidently a galaxy group in this error circle, comprised of objects b, c and d. This group has previously been identified as PDCS 36 (the 36th cluster/group found in the Palomar Distant Cluster Survey, Postman et al. 1996). RXJ0954.0+4756: This object is too faint to identify optically, but its position is coincident with the radio source 7C0952+4814 = FIRST J095401.1+475644. RXJ0954.8+4715: The second object in Table 2 listed for this source has a position coincident with the radio source CDS90-R307B = FIRST J095453.2+471533. RXJ0955.1+4729: There are 7 objects found in the X-ray error circle. RXJ0955.1+4729(e) is a confirmed QSO at a redshift of 2.15 and (d) is a late-type star. What is interesting is that a, b, c, f and g are classified as normal galaxies at redshifts 0.33, 0.36, 0.36, 0.39 and 0.33 respectively. Visual inspection also tends to put them at similar distances, making them a possible galaxy group. RXJ0956.7+4729: There are 3 objects within the error circle. From the image we can see that all of them are within the diffraction spikes of the nearby bright star, thus their SEDs are suspect. As a result, we put question marks by their identifications in Table \[table2\]. RXJ0958.8+4744: This is part of the nearby, interacting galaxy UGC 5354. Part of this galaxy system is a small HII galaxy off to one side. RXJ0958.9+4745: This is a bright, cataloged F8 star, BD+48-1823. Redshift Distribution --------------------- In assembling the estimated redshifts for our X-ray associated objects, we noticed that many of them tended to be clustered around a redshift of 0.30 $\pm$ 0.05. 
Given the accuracy of our redshifts for galaxies ($<$0.1 for most individual objects), this is significant. The top histogram in Fig. \[redshift\] shows the redshift distribution for the 110 galaxy and QSO X-ray source candidates in our $58'\times58'$ field of view. It is evident that there is a high overabundance of objects, mostly galaxies, in the redshift range 0.25-0.35. The bottom histogram in Fig. \[redshift\] shows the redshift distribution for all 3663 SOCA-classified galaxies in our field of view. While the peak at redshift 0.25-0.35 is still there in the full galaxy sample, the contrast of that peak appears to be more significant for those galaxy candidates found in or near the error circles for these X-ray sources. At this redshift, a degree-sized field of view corresponds to a $\sim 20$ Mpc region of space. This means that there is a collection of superclusters in this redshift interval. This is similar to what one would see if one looked back at the local universe by sighting along a line through the Perseus cluster, the Local Supercluster, the Great Attractor and the Shapley concentration, stretched out over a redshift range of nearly 20,000 km/sec. In other words, seeing an overdense region of galaxies over a redshift range of 0.1 is not that unusual in our universe. Summary ======= Based on the 15-color photometric observations and SED-based object classification approach (SOCA), as well as the multiwavelength cross-correlations, we find 156 optical candidates, to a magnitude limit close to $V \approx 23$, within or near the error circles for all 75 X-ray sources in our field of view. Among them are 31 QSOs (nine of which have spectroscopic confirmation), 37 starburst galaxies (2 of which have spectroscopic confirmation), 42 normal galaxies (including 3 possible galaxy groups), 39 stars (one of which is a BD object), and 7 just “visible” objects with no classifications (two of which are coincident with known radio sources). 
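The $\sim 20$ Mpc figure quoted above for a degree-sized field at $z \sim 0.3$ can be checked under an assumed cosmology (our sketch; we take a matter-dominated Einstein-de Sitter model with $H_0 = 70$ km/s/Mpc, a choice not stated in the paper):

```python
# Rough check of the claim that a degree-sized field at z ~ 0.3 spans
# ~20 Mpc, assuming an Einstein-de Sitter (matter-only) cosmology with
# H0 = 70 km/s/Mpc (an assumption, not a parameter from the paper).
import math

C_KMS = 299792.458   # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed)

def comoving_distance_eds(z):
    """Comoving distance in an Einstein-de Sitter universe, in Mpc."""
    return 2.0 * C_KMS / H0 * (1.0 - 1.0 / math.sqrt(1.0 + z))

def transverse_size_per_degree(z):
    """Comoving size subtended by one degree at redshift z, in Mpc."""
    return comoving_distance_eds(z) * math.pi / 180.0
```

This gives roughly 18 Mpc per degree at $z = 0.3$; the precise value depends on the adopted cosmology, but the order of the paper's estimate holds.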
Two of the X-ray error circles have only just “visible” objects in them. We find 8 radio sources (out of 100 in our image) that are coincident with an optical object within the X-ray error circles, making it likely that many of these optical objects are the counterparts to these radio sources. Separately, we have also SED-classified the 6011 additional objects in our 3364 arcmin$^2$ field of view to the same apparent magnitude limit (i.e., not including those found near or in the X-ray error circles). Of these 6011 objects, 341 are QSOs, 1912 are stars, 1508 are starburst galaxies, 2076 are galaxies and 174 are unclassified for a number of objective reasons. The area of our X-ray circles subtends 31.52 arcmin$^2$, or 0.00937 of our full 3364 arcmin$^2$ field of view. Comparing the objects detected with those that could be randomly found in these error circles, we have: a $6.5\sigma$ detection for all 149 classified objects, $4.7\sigma$ for the QSOs, and $2.8-3.2\sigma$ for stars, starburst galaxies and galaxies. Twenty-eight error circles have only one object in or near them, including: 9 QSOs, 3 normal galaxies, 8 starburst galaxies, 6 stars and 2 “just” visible objects. In sum, in this paper we perform an exercise that few have been able to do with their X-ray data. By being able to SED-classify all objects in our 3364 arcmin$^2$ field of view down to V $\sim$ 23, we can ask how the kinds of optical candidates found within the X-ray error circles compare to those randomly found in the field of view. The answer is that while all classified objects (QSOs, starburst galaxies, normal galaxies and stars) are overrepresented in the X-ray error circles compared to a random distribution, it is only the QSOs that are found in these X-ray circles with high statistical significance. Yet, at the same time, there are 6.5$\sigma$ more objects within these X-ray circles than if objects were randomly distributed in this image. 
So, while we know that one of the objects in each of our X-ray error circles is likely the X-ray source, in the absence of independent knowledge, choosing which object it is is still more arbitrary than scientific. The BATC Survey is supported by the Chinese Academy of Sciences, the Chinese National Natural Science Foundation and the Chinese State Committee of Sciences and Technology. The present work was partially supported by the Chinese National Key Basic Research Science Foundation (NKBRSFG19990754). This research has made use of the NASA/IPAC Extra-galactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Brandt, W.N., et al., 2001, AJ, 122, 1 Fan, X.H., et al., 1996, AJ, 112, 628 Fischer, J.-U., et al., 1998, AN, 319, 347 Georgantopoulos, I., Stewart, G.C., Shanks, T., Boyle, B.J., & Griffiths, R.E., 1996, MNRAS, 280, 276 Gunn, J.E., & Stryker, L.L., 1983, ApJS, 52, 121 Kinney, A.L., Bohlin, R.C., Calzetti, D., Panagia, N., & Wyse, R.F.G., 1993, ApJS, 86, 5 Kinney, A.L., Calzetti, D., Bohlin, R.C., McQuade, K., Storchi-Bergmann, T., & Schmitt, H.R., 1996, ApJ, 467, 38 Kong, X., et al., 2000, AJ, 119, 2745 Lehmann, I., et al., 2000, A&A, 354, 35 Lehmann, I., et al., 2001, A&A, 371, 833 Ma, J., Zhou, X., Wu, H., Chen, J.S., Jiang, Z.J., Xue, S.J., & Zhu, J., 2002, ChJAA, 2, 127 Maccacaro, T., Gioia, I.M., Wolter, A., Zamorani, G., & Stocke, J., 1988, ApJ, 326, 680 Madau, P., 1995, ApJ, 441, 18 McHardy, I.M., et al., 1998, MNRAS, 295, 641 Molthagen, K., Wendker, H.J., & Briel, U.G., 1997, A&AS, 126, 509 M$\o$ller, P., & Jakobsen, P., 1990, A&A, 228, 299 Mushotzky, R.F., Cowie, L.L., Barger, A.J., & Arnaud, K.A., 2000, Nature, 404, 459 Postman, M., Lubin, L.M., Gunn, J.E., Oke, J.B., Hoessel, J.G., Schneider, D.P., & Christensen, J.A., 1996, AJ, 111, 615 Rutledge, R.E., Brunner, R.J., Prince, T.A., & Lonsdale, C., 2000, ApJS, 131, 335 Schmidt, M., et al., 1998, A&A, 329, 495 Shang, Z.H., et al., 1998, ApJL, 504, L23 Shanks, T., Georgantopoulos, I., Stewart, G.C., Pounds, K.A., Boyle, B.J., & Griffiths, R.E., 1991, Nature, 353, 315 Schneider, D.P., Schmidt, M., & Gunn, J.E., 1991, AJ, 101, 2004 Sutherland, W., & Saunders, W., 1992, MNRAS, 259, 413 Voges, W., et al., 1999, A&A, 349, 389 Wilkes, B.J., 1986, MNRAS, 218, 331 Wolf, C., et al., 2001, A&A, 365, 681 Yan, H.J., et al., 2000, PASP, 112, 691 Zamorani, G., et al., 1999, A&A, 346, 731 Zheng, Z.Y., et al., 1999, AJ, 117, 2757 Zhou, X., Chen, J.S., Xu, W., Zhang, M., Jiang, Z.J., Zheng, Z.Y., & Zhu, J., 1999, PASP, 111, 909 Zhou, X., et al., 2003, A&A, 397, 361
--- abstract: | Experience collected in mesoscopic dynamic modeling of externally driven systems indicates the absence of potentials that could play the role of equilibrium or nonequilibrium thermodynamic potentials, yet thermodynamics-like modeling of these systems is often found to provide a good description, good understanding, and predictions that agree with results of experimental observations. This apparent contradiction is explained by noting that the dynamic and the thermodynamics-like investigations on a given mesoscopic level of description are not directly related. Their relation is indirect. They both represent two aspects of dynamic modeling on a more microscopic level of description. The thermodynamic analysis arises in the investigation of the way the more microscopic dynamics reduces to the mesoscopic dynamics (reducing dynamics) and the mesoscopic dynamic analysis in the investigation of the result of the reduction (reduced dynamics). author: - | Miroslav Grmela[^1]\ École Polytechnique de Montréal,\ C.P.6079 suc. Centre-ville Montréal, H3C 3A7, Québec, Canada title: ' **Externally driven macroscopic systems: Dynamics versus Thermodynamics**' --- Introduction {#Intr} ============ The Boussinesq equations are a well-known example of a mathematical formulation of mesoscopic dynamics of externally driven macroscopic systems. The mesoscopic level on which the physics is regarded in this example is the level of fluid mechanics, the system itself is a horizontal layer of fluid heated from below (Rayleigh-Bénard system), and the external driving forces are the gravitational force and the imposed temperature gradient. Analysis of solutions of the Boussinesq equations reveals properties observed in experiments (e.g. the observed passage from less organized to more organized behavior presents itself as a bifurcation in solutions). Many other examples of this type can be found for instance in [@Halp]. 
One of the common features of the dynamical equations that arise in these examples (a feature that has been noted in [@Halp]) is that it does not seem possible, at least in general, to associate them with a potential whose landscape would provide pertinent information about their solutions \[ *“... there is no evidence for any global minimization principles controlling the structure ...” - see the last paragraph of the Conclusion in [@Halp]*\]. Since potentials of this type are essential in any type of thermodynamics, this observed common feature seems to point to the conclusion that there is no thermodynamics of externally driven systems. On the other hand, there is a long tradition (starting with Prigogine in [@Prigogine]) of investigating externally driven systems with the methods of thermodynamics. Roughly speaking, responses of macroscopic systems to external forces are seen as adaptations minimizing their resistance. The thermodynamic potentials involved in this type of consideration (i.e. potentials used to characterize the “resistance”) are usually various versions of the work done by external forces and the entropy production. There are many examples of very successful and very useful considerations of this type (see e.g. [@Umb]). In Section \[EX5\] we illustrate the thermodynamic analysis in the context of an investigation of the morphology of immiscible blends. Specifically, we show how the thermodynamic argument provides an estimate of the concentrations at the point of phase inversion, i.e. at the point at which the morphology of a mixture of two immiscible fluids changes in such a way that the roles of being encircled and encircling are exchanged (i.e. the continuous phase and the dispersed phase exchange their roles). 
The experience collected in investigations of externally driven systems can thus be summed up by saying that mesoscopic dynamical modeling indicates an impossibility of using thermodynamics-like arguments, yet arguments of this type are often found to be very useful and pertinent. There are in fact well known examples [@Keizer] in which both dynamic and thermodynamic approaches were developed and the potentials used in the thermodynamic analysis are proven to play no significant role in the dynamic analysis. Our objective in this paper is to suggest an explanation of this apparent contradiction. We show that the dynamic and the thermodynamic analyses made on a given mesoscopic level of description are not directly related. Their relation is indirect. They are both aspects of a single dynamic analysis made on a more microscopic (i.e. involving more details) level of description. An investigation of the way the microscopic dynamics reduces to the mesoscopic dynamics provides the mesoscopic thermodynamics (Section \[RD\]) and the investigation of the final result of the reduction provides the mesoscopic dynamics. It is important to emphasize that we are using in this paper the term “thermodynamics” in a general sense (explained in Section \[RD\]). While the classical equilibrium thermodynamics and the Gibbs equilibrium statistical mechanics are particular examples of the general thermodynamics presented in Section \[RD\], they are not the ones that are the most pertinent for discussing externally driven systems. Multiscale Mesoscopic Models {#MMM} ============================= Given an externally driven system (or a family of such systems), how do we formulate its dynamical model? The most common way to do it (called hereafter a direct derivation) proceeds in the following three steps. First, the behavior of the externally driven macroscopic systems under consideration is observed experimentally in certain types of measurements called hereafter *meso-measurements*. 
In the second step, the experience collected in the meso-measurements together with an insight into the physics taking place in the observed systems leads to the choice of the level of description, i.e. the choice of state variables (we shall denote them by the symbol $y$), and equations $$\label{Gdyn} \dot{y}=g(y,\zeta, \mathcal{F}^{meso})$$ governing their time evolution. By $\zeta$ we denote the material parameters (i.e. the parameters through which the individual nature of the physical systems under consideration is expressed) and by $\mathcal{F}^{meso}$ the external forces. In the third step, the governing equations (\[Gdyn\]) are solved and the solutions are compared with results of observations. If the comparison is satisfactory, the model represented by (\[Gdyn\]) is called a well established mesoscopic dynamical model (e.g. the Boussinesq model is a well established model of the Rayleigh-Bénard systems). The choice of state variables $y$ in the second step is usually made by trying to formulate the simplest possible model, in the sense that the chosen state variables are related as closely as possible to the quantities observed in the *meso* measurements. The original derivation of the Boussinesq equations constituting the dynamic model of the Rayleigh-Bénard system provides a classical example of the direct derivation. The chosen mesoscopic level is in this example the level of fluid mechanics (the classical hydrodynamic fields serve as state variables $y$). The comparison made in the third step shows indeed agreement between predictions of the model and results of experimental observations. Hereafter, we shall refer to the collection of *meso* measurements and the mathematical model (\[Gdyn\]) as a *meso level* description. We now pick one well established mesoscopic model (e.g. the Boussinesq model). There are immediately two conclusions that we can draw. The first one is that there exist more microscopic levels (i.e.
levels involving more details; we shall call them *MESO levels*) on which the physical system under investigation can be described. This is because the chosen *meso level* (e.g. the level of fluid mechanics) ignores many microscopic details that appear to be irrelevant to our interests (determined by meso-measurements and also by intended *meso* applications). We recall that there always exists at least one well established *MESO level*, on which states are described by position vectors and velocities of the $\sim 10^{23}$ particles composing the macroscopic systems under consideration (provided we remain in the realm of classical physics). Such an ultimately microscopic model will hereafter be called the *MICRO* model. The second conclusion is that if we choose a *MESO level* and we find it to be well established (i.e. its predictions agree with results of more detailed *MESO* measurements), then we have to be able to see in solutions to its governing equations the following two types of dynamics: (i) reducing dynamics describing the approach of the *MESO* dynamics to the *meso* dynamics, and (ii) reduced *MESO* dynamics, that is, the *meso* dynamics. This is because both the original *meso* model and the more microscopic *MESO* model have been found to be well established. Following further the second conclusion, we see that we have now an alternative way to derive the governing equations of our original *meso* model. In addition to its direct mesoscopic derivation described above, we can derive it also by constructing first a more microscopic *MESO* model and then recognizing the *meso* model as a pattern in solutions to its governing equations. This new way of deriving the *meso* model seems to be complicated and indeed, it is rarely used.
Nevertheless, it is important that this alternative way of derivation exists and that, by following it, we obtain at least two new results: (a) the material parameters $\zeta$, through which the individual nature of macroscopic systems is expressed in the *meso* model (\[Gdyn\]), appear as functions of the material parameters playing the same role in the more microscopic *MESO* model, and (b) we obtain the reducing dynamics, which gives rise to thermodynamics (as we show in Section \[RD\]). The above consideration motivates us to start our investigation of externally forced macroscopic systems with two mesoscopic models instead of only one such model (\[Gdyn\]). The second model (the *MESO* model) is formulated on a more microscopic level than the level on which the model (\[Gdyn\]) is formulated. By “a more microscopic model” we mean that more details are taken into account in the model. We write the governing equations of the second model formally as $$\label{Fdyn} \dot{x}=G(x,\varsigma, \mathcal{F}^{MESO})$$ where $x$ denotes state variables, $\varsigma$ material parameters and $\mathcal{F}^{MESO}$ the external influence. The state space used in the *meso* model (\[Gdyn\]) is denoted by the symbol $N$ (i.e. $y\in N$) and the state space used in the more microscopic *MESO* model (\[Fdyn\]) is denoted by the symbol $M$ (i.e. $x\in M$). We shall hereafter call the dynamics described by (\[Fdyn\]) the *MESO* dynamics and the dynamics described by (\[Gdyn\]) the *meso* dynamics. How do we formulate the *MESO* model (\[Fdyn\])? In its direct derivation we proceed in the same way as we do in the direct derivation of the *meso* model (\[Gdyn\]). The difference is only in that the *meso* measurements are replaced by more detailed *MESO* measurements and that the same type of physics as the one expressed in (\[Gdyn\]) is now expressed in (\[Fdyn\]) in more detail.
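The relation between the pair (\[Fdyn\])–(\[Gdyn\]) can be illustrated on a minimal fast-slow caricature. All equations, variable names and parameter values below are our illustrative assumptions, not models taken from the text: a two-component *MESO* state in which the fast component relaxes quickly onto the slow one, after which the slow component alone obeys a closed *meso* equation.

```python
# Minimal caricature of a MESO model (fast + slow variable) whose slow
# part reduces, after a fast transient, to a closed meso model.
# All equations and parameter values are illustrative assumptions.
eps = 1e-3            # ratio of the fast to the slow time scale
dt, t_end = 1e-4, 1.0

x, y = 5.0, 1.0       # MESO initial state; x starts far from y
t = 0.0
while t < t_end:
    dx = -(x - y) / eps    # fast (reducing) dynamics: drives x toward y
    dy = -y * x            # slow dynamics, coupled to x
    x, y = x + dt * dx, y + dt * dy
    t += dt

# Reduced meso model: once x ~ y, dy/dt = -y**2, solved by
# y(t) = y0 / (1 + y0 * t); compare with the full MESO solution.
y_meso = 1.0 / (1.0 + t_end)
print(abs(y - y_meso))     # small: the meso pattern emerges in MESO
```

In this sketch the fast contraction of $x$ onto the manifold $x=y$ plays the role of the reducing dynamics, while the closed equation for $y$ is the reduced (*meso*) dynamics recognized as a pattern in the *MESO* phase portrait.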
As an example of *meso* dynamics (\[Gdyn\]) we can take the Boussinesq equations describing, on the level of fluid mechanics (i.e. the *meso level* in this example is the level of fluid mechanics), the Rayleigh-Bénard system. The corresponding *MESO level* could be the level of kinetic theory, on which the state variable $x$ is the one particle distribution function and Eq.(\[Fdyn\]) is a kinetic equation expressing the same physics as the one expressed in the Boussinesq equations but on the level of kinetic theory. Having both *MESO* and *meso* dynamics, we are in a position to provide a new derivation of the *meso* dynamics (\[Gdyn\]) and also to identify the reducing $MESO\rightarrow meso$ dynamics that, as we shall see below in Section \[RDMmde\], provides us with a new *meso thermodynamics*. The process leading from *MESO level* to *meso level* is conveniently seen (see Section \[EX2\]) as a pattern recognition in the *MESO* phase portrait. By *MESO* phase portrait we mean a collection of trajectories (i.e. solutions to (\[Fdyn\])) passing through all $x\in M$ for a large family of the material parameters $\varsigma$ and external forces $\mathcal{F}^{MESO}$. The pattern that we search for is the one which can be interpreted as representing the mesoscopic phase portrait corresponding to the *meso* dynamics (\[Gdyn\]). We prefer to refer to the process involved in the passage from *MESO* to *meso* dynamics as a pattern recognition process rather than the more frequently used “coarse graining” process, since the latter term evokes procedures (e.g. making pixels and averaging in them) that are manifestly coordinate dependent and thus geometrically (and consequently also physically) meaningless.

Reducing Dynamics, Thermodynamics {#RD}
==================================

We now proceed to investigate the pattern recognition process leading from *MESO* dynamics to *meso* dynamics. We recognize first its complexity.
We recall for instance that this type of investigation constitutes in fact Hilbert’s famous 6th problem (see [@GKHilb]). Roughly speaking, any investigation of the $MESO\,\rightarrow\,meso$ passage consists essentially in splitting the *MESO* dynamics (\[Fdyn\]) into the *meso* dynamics (\[Gdyn\]) (that we call *reduced dynamics* if we regard it in the context of the $MESO\,\rightarrow\,meso$ passage) and another dynamics that makes the reduction (that we call *reducing dynamics*). While most investigations of the $MESO\,\rightarrow\,meso$ passages have focused in the past on the reduced dynamics, we show that investigations of the reducing dynamics are also interesting and bring in fact additional important information that can be interpreted as an introduction of thermodynamics on the *meso* level. The reduced dynamics (i.e. *meso* dynamics) together with the thermodynamics implied by the reducing dynamics then express (on *meso level*) the complete physics of the macroscopic system under consideration. More details of the behavior of the macroscopic systems under consideration are seen on the *MESO* level (represented by (\[Fdyn\])) than on the *meso* level. Let $\mathcal{P}^{MESO}$ and $\mathcal{P}^{meso}$ be the phase portraits corresponding to the *MESO* dynamics (\[Fdyn\]) and the *meso* dynamics (\[Gdyn\]) respectively. Our problem is to recognize $\mathcal{P}^{meso}$ as a pattern inside $\mathcal{P}^{MESO}$. In the pattern recognition process we recover the less detailed viewpoint expressed in (\[Gdyn\]) (which arises in the pattern recognition process as the reduced dynamics), but in addition we also begin to see the reducing dynamics making the pattern emerge. In this section we argue that the reducing dynamics is in its essence thermodynamics. In order to be able to justify the use of the term “thermodynamics” we begin by recalling the standard (i.e.
Gibbs) formulation of the classical thermodynamics and show subsequently that the reducing dynamics is indeed its natural extension. The level of description used in the classical equilibrium thermodynamics is called in this paper *equilibrium level*. In this section we concentrate on establishing a unified formulation of the reducing dynamics. We show that the formalism puts under a single umbrella the thermodynamics of driven systems and well established classical, microscopic, and mesoscopic equilibrium and nonequilibrium thermodynamics. The unification power of the formalism is in this section the principal argument supporting it. In the following section (Section \[EX\]) we then collect illustrative examples and applications providing additional support.

Classical equilibrium thermodynamics; statics {#RDET}
---------------------------------------------

The point of departure of the classical equilibrium thermodynamics is ***equilibrium Postulate 0***, the postulate of the *existence of equilibrium states*. For example, Callen formulates it [@Callen] as follows: \[*“... in all systems there is a tendency to evolve toward states in which the properties are determined by intrinsic factors and not by previously applied external influences. Such simple terminal states are, by definition, time independent. They are called equilibrium states...”*\]. The level of description on which investigations are limited only to macroscopic systems at equilibrium states will be called *equilibrium* level. No time evolution takes place on this level. The next postulate addresses the *state variables* used on *equilibrium* level to characterize the equilibrium states introduced in the previous postulate.
***equilibrium Postulate I*** *The state variables on *equilibrium* level are the state variables needed to formulate overall macroscopic mechanics (the number of moles $N$, the volume $V$, and the macroscopic mechanical kinetic energy $E_{mech}$) and in addition the internal energy $E_{int}$, which is a new, extra mechanical quantity serving as an independent state variable. The internal energy $E_{int}$ then combines with the macroscopic mechanical energy $E_{mech}$ to define the overall total energy $E=E_{mech}+E_{int}$. We shall denote the state variables of the classical equilibrium thermodynamics by the symbol $\omega$ (i.e. $\omega=(E,N,V)$) and the equilibrium state space by $\Omega$ (i.e. $\omega\in \Omega$)*. The third postulate addresses the way the equilibrium states are reached. ***equilibrium Postulate II*** \(i) *The fundamental thermodynamic relation consists of three potentials* $$\label{classftr} N^{(ee)}(\omega); \, E^{(ee)}(\omega);\,S^{(ee)}(\omega)$$ Two of the potentials, namely the number of moles $N^{(ee)}$ and the energy $E^{(ee)}$, are universal: $N^{(ee)}=N;\,\,E^{(ee)}=E$. The entropy $S^{(ee)}(\omega)$ is not universal. It is the quantity in which, on *equilibrium level*, the individual nature of the macroscopic systems under consideration is expressed. The association between $S^{(ee)}(\omega)$ and the macroscopic systems can be obtained, if we remain inside *equilibrium level*, only by experimental observations (whose results are collected in the so called thermodynamic tables). The entropy $S^{(ee)}(E,V,N)$ is required to satisfy the following three properties: (i) $S^{(ee)}(E,V,N)$ is a real valued and sufficiently regular function of $\omega$, (ii) $S^{(ee)}(E,V,N)$ is homogeneous of degree one (i.e.
$S^{(ee)}(\lambda E,\lambda V,\lambda N)= \lambda S^{(ee)}(E,V,N)$, which means that the energy, number of moles, volume, and entropy are all extensive variables), and (iii) $S^{(ee)}(E,V,N)$ is a concave function (we exclude from our considerations in this paper critical states and phase transitions). \(ii) *Equilibrium states are defined as states at which the entropy $S^{(ee)}(\omega)$ reaches its maximum allowed by constraints (i.e. the MaxEnt principle on equilibrium level)*. Since we consider in this paper thermodynamics associated with passages between two general levels, we need a clear notation. The upper index $(ee)$ in the potentials introduced in (\[classftr\]) means $equilibrium \rightarrow equilibrium$, i.e. the passage in which the starting level is *equilibrium* level and the target level is also *equilibrium* level. If the passage that we investigate is *MICRO* $\rightarrow$ *equilibrium* (in Section \[RDMee\] below), we shall use $(MIe)$; if the passage is *MESO* $\rightarrow$ *equilibrium* (in Section \[RDMMee\]), we shall use $(Me)$; and in the investigation of the passage *MESO* $\rightarrow$ *meso* (in Section \[RDMmde\]), we shall use $(Mm)$. The first letter in the upper index always denotes the level on which the quantity is defined and the second letter the level to which the reduction aims or the level from which it is reduced (see (\[MIimpl\]) or (\[Meimpl\]) below). In order to write explicitly the MaxEnt principle, we introduce $$\label{Phiclass} \Phi^{(ee)}(\omega;T,\mu)=-S^{(ee)}(\omega)+E^{*}E^{(ee)}(\omega)+N^{*}N^{(ee)}(\omega)$$ called a thermodynamic potential on *equilibrium* level. By $\omega^*=(E^*,N^*,V^*)$ we denote the conjugate state variables; $E^{*}$ is conjugate to $E$ (i.e. $E^*=S^{(ee)}_E$), $N^*$ is conjugate to $N$ (i.e. $N^*=S^{(ee)}_N$), and $V^*$ is conjugate to $V$ (i.e. $V^*=S^{(ee)}_V$). We use hereafter the shorthand notation $S_E=\frac{\partial S}{\partial E}$, etc.
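The MaxEnt principle encoded in (\[Phiclass\]) can be checked on a toy fundamental relation. The entropy below is a monatomic ideal-gas form with all physical constants set to one, and the values of $T$, $\mu$, $V$, together with all helper names, are our illustrative assumptions rather than an example from the text; the sketch verifies that $\Phi^{(ee)}$ is stationary at the equilibrium state and that, by Euler homogeneity, its minimal value equals $-PV/T$.

```python
import math

# Toy fundamental relation: a monatomic ideal-gas entropy with all
# physical constants set to one (an illustrative assumption):
#   S(E, V, N) = N*ln(V/N) + (3/2)*N*ln(E/N) + s0*N
s0 = 1.0
def S(E, V, N):
    return N * math.log(V / N) + 1.5 * N * math.log(E / N) + s0 * N

T, mu, V = 2.0, -3.0, 1.0   # chosen intensive parameters (assumptions)

def Phi(E, N):
    # Phi^{(ee)} = -S + (1/T)*E - (mu/T)*N, at fixed volume V
    return -S(E, V, N) + E / T - (mu / T) * N

# Stationarity Phi_E = 0 gives E = (3/2)*N*T; Phi_N = 0, i.e.
# S_N = -mu/T, then fixes N:
N_eq = V * (1.5 * T) ** 1.5 * math.exp(s0 - 2.5 + mu / T)
E_eq = 1.5 * N_eq * T

# the gradient of Phi indeed vanishes at (E_eq, N_eq)
h = 1e-6
dPhi_dE = (Phi(E_eq + h, N_eq) - Phi(E_eq - h, N_eq)) / (2 * h)
dPhi_dN = (Phi(E_eq, N_eq + h) - Phi(E_eq, N_eq - h)) / (2 * h)
print(abs(dPhi_dE) < 1e-6 and abs(dPhi_dN) < 1e-6)

# the minimal value is the Legendre-transformed entropy; by Euler
# homogeneity it equals -P*V/T, with the ideal-gas pressure P = N*T/V
P = N_eq * T / V
print(abs(Phi(E_eq, N_eq) - (-P * V / T)) < 1e-9)
```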
In the classical equilibrium thermodynamics the conjugate variables have particular names, namely $E^*=\frac{1}{T}$, $N^*=S_N=-\frac{\mu}{T}$, $V^*=S_V=\frac{P}{T}$, where $T$ is the temperature, $\mu$ the chemical potential, and $P$ the pressure. The entropy $S^{(ee)}(E,V,N)$ transforms, under the Legendre transformation, into its conjugate $S^{(ee)*}(\mu,T)$, $$\label{eqimp} S^{(ee)*}(\mu,T)=[\Phi^{(ee)}(\omega;T,\mu)]_{\omega=\omega_{eq}(T,\mu)}$$ where $\omega_{eq}(T,\mu)$ is a solution of $\Phi^{(ee)}_{\omega}=0$. As a direct consequence of the homogeneity of $S^{(ee)}$, $S^{(ee)*}(T,\mu) =-\frac{PV}{T}$. We note that the MaxEnt principle in the classical equilibrium thermodynamics does not address the time evolution leading to the equilibrium states (i.e. it does not address the process of preparing macroscopic systems for equilibrium thermodynamic observations). It addresses only the question of what is the final result of such time evolution. We shall introduce such time evolution later in this paper.

MICRO $\rightarrow$ equilibrium; Gibbs equilibrium statistical mechanics; statics {#RDMee}
----------------------------------------------------------------------------------

Another part of the classical equilibrium theory is the Gibbs equilibrium statistical mechanics, which investigates the passage $MICRO \rightarrow equilibrium$. We shall formulate the physical basis of the Gibbs theory again in three postulates that are direct adaptations of the three postulates in Section \[RDET\] to *MICRO level*. The first postulate, Postulate 0, is the same as in the classical equilibrium thermodynamics except that we include in it the statement that *MICRO level* is also well established. The second postulate addresses the state variables ***MICRO $\rightarrow$ equilibrium Postulate I***.
*State variables on MICRO level are the position vectors $\boldsymbol{r}=(\boldsymbol{r}_1,...,\boldsymbol{r}_N)$ and momenta $\boldsymbol{v}=(\boldsymbol{v}_1,...,\boldsymbol{v}_N)$ of $N$ particles, $N\sim 10^{23}$ (or alternatively the $N$-particle distribution function $f(\boldsymbol{r},\boldsymbol{v})$)*. Next, we proceed to the third postulate, which addresses the time evolution. Since the reduced time evolution in the passage *MICRO* $\rightarrow$ *equilibrium* is no time evolution, the time evolution taking place on *MICRO level* is the reducing time evolution. The *MICRO level* time evolution $(\boldsymbol{r},\boldsymbol{v})_0\mapsto(\boldsymbol{r},\boldsymbol{v})_t$ is governed by Hamilton’s equations $\left(\begin{array}{cc}\dot{\boldsymbol{r}}\\ \dot{\boldsymbol{v}}\end{array}\right)=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\left(\begin{array}{cc}E^{(MICRO)}_{\boldsymbol{r}}\\E^{(MICRO)}_{\boldsymbol{v}}\end{array}\right)$, where $E^{(MICRO)}(\boldsymbol{r},\boldsymbol{v})$ is the microscopic energy. This microscopic time evolution induces the time evolution $f_0(\boldsymbol{r},\boldsymbol{v})\mapsto f_t(\boldsymbol{r},\boldsymbol{v})=f_0((\boldsymbol{r},\boldsymbol{v})_{-t})$. In the Gibbs equilibrium statistical mechanics only two aspects of the *MICRO* time evolution are retained: (1) conservation of the total mass $N^{(MIe)}(f)$ and the total energy $E^{(MIe)}(f)$ defined below in (\[microftr\]), and (2) an assumption about the *MICRO* trajectories, namely an ergodic-type hypothesis. The third postulate is thus the following.
***MICRO $\rightarrow$ equilibrium Postulate II*** \(i) *The fundamental thermodynamic relation consists of three potentials $$\begin{aligned} \label{microftr} N^{(MIe)}(f)&=&\int d\boldsymbol{r}\int d\boldsymbol{v}\,f(\boldsymbol{r},\boldsymbol{v})\nonumber \\ E^{(MIe)}(f)&=&\int d\boldsymbol{r}\int d\boldsymbol{v}\,E^{(MICRO)}(\boldsymbol{r},\boldsymbol{v}) f(\boldsymbol{r},\boldsymbol{v})\nonumber \\ S^{(MIe)}(f)&=&-k_B\int d\boldsymbol{r}\int d\boldsymbol{v}\,f(\boldsymbol{r},\boldsymbol{v})\ln f(\boldsymbol{r},\boldsymbol{v})\end{aligned}$$ where $k_B$ is the Boltzmann constant, $N^{(MIe)}(f)$ has the physical interpretation of the number of moles, and $E^{(MIe)}(f)$ is the energy.* The map leading from the state space of the Liouville representation of classical mechanics to the state space of the classical equilibrium thermodynamics will be denoted by the symbol $\mathfrak{P}^{(MIe)}$, i.e. $$\label{PMIe} f\mapsto \mathfrak{P}^{(MIe)}(f)=(N^{(MIe)}(f),E^{(MIe)}(f))$$ \(ii) *$N^{(MIe)}(f)$ and $E^{(MIe)}(f)$ introduced in the fundamental thermodynamic relation (\[microftr\]) are conserved during the time evolution*. \(iii) *Particle trajectories $(\boldsymbol{r},\boldsymbol{v})_0\mapsto(\boldsymbol{r},\boldsymbol{v})_t$ fill up the microscopic phase space $M^{(MICRO)}$ (i.e. $(\boldsymbol{r},\boldsymbol{v})\in M^{(MICRO)}$) so that time averages can be replaced with averages (by using certain measures) in $M^{(MICRO)}$ (*the ergodic hypothesis*)*. \(iv) *Equilibrium states are defined as states at which $S^{(MIe)}(f)$ reaches its maximum allowed by constraints (i.e. the MaxEnt principle for the MICRO $\rightarrow$ equilibrium passage). The expression (\[microftr\]) for $S^{(MIe)}(f)$ is in the Gibbs theory universally valid for all macroscopic systems.
The quantity that on MICRO level expresses the individual nature of the macroscopic systems under consideration is only the energy $E^{(MIe)}(f)$*. In order to write explicitly the MaxEnt principle on the *MICRO* level, we introduce, as we did in the previous section, the thermodynamic potential $$\label{Phi} \Phi^{(MIe)}(f;T,\mu)=-S^{(MIe)}(f)+\frac{1}{T}E^{(MIe)}(f)-\frac{\mu}{T}N^{(MIe)}(f)$$ The fundamental thermodynamic relation on *equilibrium level* implied by the fundamental thermodynamic relation (\[microftr\]) on *MICRO level* is given by $$\begin{aligned} \label{MIimpl} N^{(eMI)}(\omega)&=&[N^{(MIe)}(f)]_{f=f_{eq}}\nonumber \\ E^{(eMI)}(\omega)&=&[E^{(MIe)}(f)]_{f=f_{eq}}\nonumber \\ S^{(eMI)*}(\mu,T)&=&[\Phi^{(MIe)}(f;T,\mu)]_{f=f_{eq}}=-\frac{PV}{T}\end{aligned}$$ where $f_{eq}(\boldsymbol{r},\boldsymbol{v};T,\mu)$, the solutions of $\Phi^{(MIe)}_{f(\boldsymbol{r},\boldsymbol{v})}=0$, are the equilibrium states. They form a manifold $\mathcal{M}_{eq}\subset M$ (i.e. $f_{eq}(\boldsymbol{r},\boldsymbol{v};T,\mu)\in \mathcal{M}_{eq}$) that is an invariant manifold with respect to the *MICRO* time evolution. There is no time evolution taking place on $\mathcal{M}_{eq}$. The upper index $(eMI)$ means that the quantity belongs to *equilibrium level* and is obtained from an analysis taking place on *MICRO level*. This notation was already introduced in the text following Eq.(\[classftr\]). We note that the MaxEnt principle on *MICRO level* (i.e. the $MICRO \rightarrow equilibrium$ Postulate II), as well as the equilibrium Postulate II in the classical equilibrium thermodynamics (see Section \[RDET\]), does not really address the time evolution leading to equilibrium states.
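The stationarity condition $\Phi^{(MIe)}_f=0$ can be made concrete in a discrete caricature. The energy levels, parameter values and the unnormalized "distribution" $p$ below are our illustrative assumptions, not the paper's $f$: minimizing the analogue of (\[Phi\]) over $p$ yields the Gibbs form $p_i\propto e^{-(E_i-\mu)/k_BT}$, which we verify by comparing against nearby perturbations.

```python
import math, random

# Discrete caricature of the MaxEnt step: states i with energies E_i and
# weights p_i >= 0 (not necessarily normalized; the total mass plays the
# role of N).  All names and values are illustrative assumptions.
kB, T, mu = 1.0, 1.5, -0.5
E_levels = [0.0, 1.0, 2.0, 3.0]

def Phi(p):
    S = -kB * sum(pi * math.log(pi) for pi in p)          # Gibbs entropy
    E = sum(pi * Ei for pi, Ei in zip(p, E_levels))       # energy
    N = sum(p)                                            # total mass
    return -S + E / T - (mu / T) * N

# Stationarity of Phi gives the Gibbs form p_i ~ exp(-(E_i - mu)/(kB*T)),
# up to the factor exp(-1) coming from d(p ln p)/dp = ln p + 1.
p_eq = [math.exp(-(Ei - mu) / (kB * T) - 1.0) for Ei in E_levels]

# p_eq beats random nearby perturbations (Phi is convex in p)
random.seed(0)
ok = all(
    Phi([max(pi + random.uniform(-0.01, 0.01), 1e-9) for pi in p_eq])
    >= Phi(p_eq)
    for _ in range(200)
)
print(ok)
```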
The *MICRO* time evolution $f_0(\boldsymbol{r},\boldsymbol{v})\mapsto f_t(\boldsymbol{r},\boldsymbol{v})=f_0((\boldsymbol{r},\boldsymbol{v})_{-t})$ introduced in the $MICRO\rightarrow equilibrium$ postulate above leaves the Gibbs entropy introduced in (\[microftr\]) unchanged (see more in Section \[EX2\]). As on *equilibrium level*, the $MICRO \rightarrow equilibrium$ Postulate II addresses only the final result of such evolution. In Section \[EX2\] we shall address the $MICRO\rightarrow equilibrium$ reducing time evolution. We shall write down explicitly the equations governing it. The Gibbs $MICRO\rightarrow equilibrium$ theory enriches the classical equilibrium thermodynamics in particular in the following two points: (i) it brings a microscopic insight into the meaning of the internal energy, and (ii) it offers a way to calculate the fundamental thermodynamic relation from the knowledge of microscopic interactions. Regarding the first point, we note that in the context of *MICRO level* the internal energy is the energy of the particles modulo the overall mechanical energy. The mechanical origin of the internal energy then implies the mechanical nature of the heat and consequently the energy conservation law involved in $equilibrium$ Postulate II.
As for the fundamental thermodynamic relation, the Gibbs equilibrium statistical mechanics (specifically the MaxEnt principle in Postulate II of the Gibbs theory) provides a mapping between the fundamental thermodynamic relation (\[microftr\]) on *MICRO level* (note that it is the particle energy $E^{(MICRO)}(\boldsymbol{r},\boldsymbol{v})$ through which the individual nature of macroscopic systems is expressed on *MICRO level*) and the equilibrium fundamental thermodynamic relation (\[classftr\]) (note that the quantity through which the individual nature of macroscopic systems is expressed on *equilibrium* level is the entropy $S^{(ee)}(N,V,E)$). Finally, we note that the Gibbs equilibrium theory is not supported by a rigorous analysis of the *MICRO* mechanics. Both the ergodic-like behavior of particle trajectories and the tendency of $S^{(MIe)}(f)=-k_B\int d\boldsymbol{r}\int d\boldsymbol{v}\,f(\boldsymbol{r},\boldsymbol{v})\ln f(\boldsymbol{r},\boldsymbol{v})$ to reach its maximum (allowed by constraints) during the microscopic time evolution remain, for most macroscopic systems, unproven assumptions. The support for the Gibbs theory comes from plausible assumptions, illustrations, and the success of its applications. The same will then be true for the general formulation of reducing dynamics presented below.

MESO $\rightarrow$ equilibrium; statics {#RDMMee}
----------------------------------------

So far, we have considered only the ultimate microscopic level (called *MICRO level*) and the ultimate macroscopic level (called *equilibrium level*). Now we take into consideration also mesoscopic levels and formulate a general thermodynamics associated with the passage *MESO* $\rightarrow$ *meso* (that we call hereafter simply thermodynamics). We begin with Postulate 0.
We modify it by noting that *equilibrium level* is not the only well established level that is less microscopic than *MICRO level*. There is in fact a whole family of such levels (for example the fluid mechanics and kinetic theory levels). These well established mesoscopic levels differ from the equilibrium level by the fact that, in general, a time evolution takes place on them (we recall that no time evolution takes place on the equilibrium level) and also by the fact that they are applicable also to macroscopic systems subjected to external influences (e.g. the level of fluid mechanics is applicable to the Rayleigh-Bénard system). We thus replace the postulate of the existence of equilibrium states with a more general ***MESO Postulate 0*** *There exist well established mesoscopic levels*. The remaining two postulates are the same as in the Gibbs theory except that the state variable $f(\boldsymbol{r},\boldsymbol{v})$ used on *MICRO level* is replaced by other state variables used on mesoscopic levels. As we have done already in Eq.(\[Fdyn\]), we shall denote it on *MESO* level by the symbol $x$. For example, $x$ can be the one particle distribution function (used in kinetic theory) or the hydrodynamic fields (used in fluid mechanics): ***MESO $\rightarrow$ equilibrium Postulate I***. *State variables on MESO level are quantities denoted by the symbol $x$*. We recall that *MESO level* is a well established level (i.e. theoretical predictions on *MESO level* agree with results of *MESO* experimental observations). This then means that $x$ is known. ***MESO $\rightarrow$ equilibrium Postulate II (statics)***.
\(i) *The fundamental thermodynamic relation consists of a specification of three potentials $$\label{MESOftr} N^{(Me)}(x),E^{(Me)}(x),S^{(Me)}(x)$$ denoting the number of moles, energy, and entropy respectively.* The map leading from the *MESO* state space $M$ to the state space of the classical equilibrium thermodynamics will be denoted by the symbol $\mathfrak{P}^{(Me)}$, i.e. $$\label{PMe} x\mapsto\mathfrak{P}^{(Me)}(x)=(N^{(Me)}(x),E^{(Me)}(x))$$ (compare with (\[PMIe\])). \(ii) *Equilibrium states are defined as states at which the entropy $S^{(Me)}(x)$ reaches its maximum allowed by constraints (i.e. the MaxEnt principle for the MESO $\rightarrow$ equilibrium passage)*. As in the previous sections, we introduce the thermodynamic potential $$\label{Phi1} \Phi^{(Me)}(x;T,\mu)=-S^{(Me)}(x)+\frac{1}{T}E^{(Me)}(x)-\frac{\mu}{T}N^{(Me)}(x)$$ Equilibrium states $x_{eq}$ are states at which $\Phi^{(Me)}(x;T,\mu)$ reaches its minimum. Consequently, $x_{eq}$ are solutions to $$\label{Phieq} \Phi^{(Me)}_x(x;T,\mu)=0.$$ Such states, called equilibrium states, form an equilibrium manifold denoted by the symbol $\mathcal{M}_{eq}\subset M$. The fundamental thermodynamic relation on *equilibrium level* implied by the fundamental thermodynamic relation (\[MESOftr\]) on *MESO level* is given by $$\begin{aligned} \label{Meimpl} N^{(eM)}(\omega)&=&[N^{(Me)}(x)]_{x=x_{eq}}\nonumber \\ E^{(eM)}(\omega)&=&[E^{(Me)}(x)]_{x=x_{eq}}\nonumber \\ S^{(eM)*}(\mu,T)&=&[\Phi^{(Me)}(x;T,\mu)]_{x=x_{eq}}=-\frac{PV}{T}\end{aligned}$$ where $x_{eq}(T,\mu)$, the equilibrium states, are solutions of (\[Phieq\]). The upper index $(eM)$ means that the quantity belongs to *equilibrium level* and is obtained from an analysis taking place on *MESO level*. This notation was already introduced in the text following Eq.(\[classftr\]).
Summing up, the difference between the Gibbs equilibrium statistical mechanics and the mesoscopic equilibrium theory formulated above is in Postulate 0, in the fundamental thermodynamic relation, and in the arguments supporting the theory. Postulate 0 now includes also the existence of mesoscopic levels. Regarding the fundamental thermodynamic relation, all three potentials $N^{(Me)}(x),E^{(Me)}(x)$, and $S^{(Me)}(x)$ have to be specified. The same three potentials have to be specified also in the Gibbs theory (see (\[microftr\])), but two of them, namely $N^{(MIe)}$ and $S^{(MIe)}$, are universal. On *MESO level*, neither of them is universally applicable. For example, let $x$ be the one particle distribution function. The fundamental thermodynamic relation (\[microftr\]), but now transposed to the level of kinetic theory (i.e. the $N$-particle distribution function is replaced by the one particle distribution function), leads to the fundamental thermodynamic relation representing the ideal gas on *equilibrium* level (recall that if $f$ in (\[microftr\]) is replaced by the one particle distribution function then the only energy is the kinetic energy); in order to include more complex macroscopic systems, e.g. the van der Waals gas, one has to modify both the energy (by introducing a mean field energy) and the entropy (see more in [@Gr71], where the corresponding reducing dynamics is also specified). As for the supporting arguments, they now mainly come from relating the *MESO* equilibrium theory to the Gibbs theory. *MESO* equilibrium theories are indeed an organic part of the Gibbs equilibrium statistical mechanics. They arise as its simplified versions applicable to particular families of macroscopic systems.
MESO $\rightarrow$ equilibrium; reducing dynamics {#RDMMde}
--------------------------------------------------

An important advantage of investigating the passage $MESO \rightarrow equilibrium$ instead of the passage $MICRO \rightarrow equilibrium$ is that we can more easily investigate the reducing dynamics. We have seen in Section \[RDMee\] that in order to pass from $MICRO$ dynamics to *equilibrium level*, we need assumptions (that, at least in general, remain unproven) about the ergodic-type behavior of microscopic trajectories. On the other hand, mesoscopic-type experimental observations include also direct observations of the approach to equilibrium. Based on results of such observations, mathematical formulations of particular examples of the reducing dynamics $MESO \rightarrow equilibrium$ have been developed (for example the Navier-Stokes-Fourier equations of fluid mechanics and the Boltzmann kinetic equation of gas dynamics). The Boltzmann kinetic equation was also the first time evolution equation for which the passage $kinetic\,\,theory \rightarrow equilibrium$ was explicitly investigated (by Ludwig Boltzmann). In investigations of the reducing dynamics representing the passage $MESO \rightarrow equilibrium$ we can therefore use results obtained independently in several particular examples of well established mesoscopic dynamical theories. The abstract formulation of the reducing dynamics $MESO \rightarrow equilibrium$ presented below has emerged as a common mathematical structure of such well established theories. The first step was made by Clebsch [@Clebsch], who realized that particle dynamics and the Euler fluid mechanics share the structure of Hamiltonian dynamics. The investigation initiated by Clebsch then continued in particular in the works of Arnold [@Arnold] and of Marsden and Weinstein [@MW].
Independently, Landau and Ginzburg [@LG] and Cahn and Hilliard [@CH] have recognized a common structure of gradient dynamics in the part of the time evolution that is represented in (\[GENERIC\]) by the second term on its right-hand side. Time evolution equations involving both the Hamiltonian and the gradient part appeared first in [@DV], in [@Grmboulder] (presented at the AMS-IMS-SIAM Joint Summer Research Conference in the Mathematical Sciences on Fluids and Plasmas: Geometry and Dynamics, held at the University of Colorado, Boulder, CO, USA, 17–23 July 1983) and in [@Morr], [@Kauf], [@GrPhysD], [@BEd]. In [@GO], [@OG] the abstract equation (\[GENERIC\]) has been called GENERIC. Its formulation in the context of contact geometry is presented in [@Grmcontact]. Specific realizations of (\[GENERIC\]) on many examples of *MESO* levels can be found in [@Obook], [@Grmadv]. Now we proceed to the formulation. Postulate 0 and Postulate I remain the same as in Section \[RDMMee\]. In Postulate II we replace the static MaxEnt principle with a dynamic MaxEnt principle: we explicitly specify the dynamics realizing the maximization of entropy. The third postulate *MESO $\rightarrow$ equilibrium Postulate II (statics)* in Section \[RDMMee\] is thus replaced with the dynamic postulate ***MESO $\rightarrow$ equilibrium Postulate II (dynamics)*** \(i) *remains the same as in MESO $\rightarrow$ equilibrium Postulate II (statics)* \(ii) *The time evolution making the passage MESO $\rightarrow$ equilibrium is governed by the GENERIC equation* $$\label{GENERIC} \dot{x}=\left[TL(x)x^*-\Xi_{x^*}(x,X^{(CR)}(x^*))\right]_{x^*=\Phi^{(Me)}_x}$$ In the rest of this section we explain the meaning of the symbols appearing in (\[GENERIC\]) and prove that the time evolution governed by (\[GENERIC\]) indeed brings $x$ to the equilibrium states $x_{eq}\in\mathcal{M}_{eq}$ that are solutions of $\Phi^{(Me)}_x=0$. 
The first term on the right hand side of (\[GENERIC\]) represents the part of the time evolution of $x$ that is on *MESO level* directly inherited from *MICRO level*. It generates the Hamiltonian time evolution. The operator $L$, transforming the covector $x^*$ into a vector, is a Poisson bivector. This means that $\{A,B\}=<A_x,LB_x>$ is a Poisson bracket, i.e. $\{A,B\}=-\{B,A\}$ and the Jacobi identity $\{A,\{B,C\}\}+\{B,\{C,A\}\}+\{C,\{A,B\}\}=0$ holds. By the symbols $A,B,C$ we denote sufficiently regular real valued functions of $x$; $<,>$ denotes the pairing in $M$. We recall that on *MICRO level* the Hamiltonian time evolution is generated by the energy $E(x)$. We recall that in the particular case when $x=(r,v)^T$, where $r$ is the particle position vector, $v$ the particle momentum, and $()^T$ means the transpose of $()$, then $L=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)$, and the equation governing the time evolution of $(r,v)$ is $\left(\begin{array}{cc}\dot{r}\\ \dot{v}\end{array}\right)=\left(\begin{array}{cc}0&1\\-1&0\end{array}\right) \left(\begin{array}{cc}E_r\\ E_v\end{array}\right)$. In order to keep the energy as the sole generator of dynamics also on *MESO level*, we require that the operator $L^{(Me)}$ is degenerate in the sense that $\{A,S^{(Me)}\}=\{A,N^{(Me)}\}=0$ for all $A$. In the terminology established in investigations of Hamiltonian systems, the potentials $S^{(Me)}$ and $N^{(Me)}$ are required to be Casimirs of the Poisson bracket $\{A,B\}$. If this is the case then, indeed, the first term on the right hand side of (\[GENERIC\]) is $LE^{(Me)}_x$. A direct consequence of the antisymmetry and the degeneracy of $L$ is that this Hamiltonian part of the time evolution leaves the energy and the generating potential $\Phi^{(Me)}$ (see (\[Phi1\])) unchanged (i.e. $(\dot{\Phi}^{(Me)})_{Hamilton}=<\Phi^{(Me)}_x,L\Phi^{(Me)}_x>=<E^{(Me)}_x,LE^{(Me)}_x>=0$). Examples of the operator $L$ on many *MESO levels* can be found in [@Grmadv]. 
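The invariance argument just made can be checked numerically. The following sketch (our own illustration, with a harmonic-oscillator energy that is not taken from the text) verifies that the canonical $L$ is antisymmetric and that, as a consequence, $<E_x,LE_x>=0$: the Hamiltonian part of the time evolution leaves the energy unchanged.

```python
import numpy as np

# Canonical Poisson bivector for x = (r, v): constant and antisymmetric.
L = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def E_x(x):
    # Gradient of the illustrative energy E = (r^2 + v^2)/2.
    return x.copy()

rng = np.random.default_rng(0)
# Antisymmetry of L ...
assert np.allclose(L, -L.T)
for _ in range(5):
    x = rng.normal(size=2)
    g = E_x(x)
    # ... implies <E_x, L E_x> = 0 at every state: energy is conserved
    # along the Hamiltonian part dx/dt = L E_x.
    assert abs(g @ (L @ g)) < 1e-12
```

The same check applies verbatim to any antisymmetric $L$; the degeneracy conditions for $S^{(Me)}$ and $N^{(Me)}$ would be tested analogously with their gradients in place of $E_x$.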
Before leaving the Hamiltonian part of the time evolution we make a comment about the role that the Jacobi identity plays in it. If the second term on the right hand side of (\[GENERIC\]) is absent then the equation $\dot{x}=LE^{(Me)}_x$ governing the time evolution can also be written in the form $\dot{A}=\{A,E^{(Me)}\}$ for all $A$. If we now replace $A$ with $\{A,B\}$ we obtain $\dot{\{A,B\}}=\{\{A,B\},E^{(Me)}\}$. But $\dot{\{A,B\}}= \{\dot{A},B\}+\{A,\dot{B}\}=\{\{A,E^{(Me)}\},B\}+\{A,\{B,E^{(Me)}\}\}$. The Jacobi identity guarantees that these two time derivatives of $\{A,B\}$ are equal and thus that the Poisson bracket remains unchanged during the time evolution. If we now consider the time evolution governed by (\[GENERIC\]), involving also the second term on the right hand side, the Poisson bracket is not preserved during the time evolution even if the Jacobi identity holds. The role of the Jacobi identity in the GENERIC time evolution is thus much less important than in the Hamiltonian time evolution. Hereafter, we shall call a time evolution a GENERIC time evolution even if the Jacobi identity remains unproven. The second term on the right hand side of (\[GENERIC\]) is the part that arises due to the fact that *MESO level* is not *MICRO level*. This means that some microscopic details that are seen on *MICRO level* are ignored on *MESO level*. This ignorance then influences the time evolution in such a way that the potential $\Phi^{(Me)}$ approaches its minimum. By $\Xi(x,X)$, called a dissipation potential, we denote a sufficiently regular real valued function of $x\in M$ and of $X$, which is called a thermodynamic force. Its specification $X=X^{(CR)}(x^*)$ as a function of $x^*$ is called a *constitutive relation*. The superscript “CR” stands for Constitutive Relation. 
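The interplay of the two terms can be made concrete with a minimal numerical sketch of a GENERIC-type evolution: a two-dimensional state with the canonical Poisson bivector and a quadratic dissipative contribution acting only on the second component (a damped oscillator). All concrete choices below (the potential, the matrices, the step size) are our illustrative assumptions, not taken from the text; the point is only that the potential decreases along the flow, while the Hamiltonian part alone would conserve it.

```python
import numpy as np

# Toy isothermal GENERIC-type flow for x = (r, v):
#   dx/dt = L Phi_x - M Phi_x,
# with the canonical L and a symmetric positive semi-definite M acting
# on v only.  Phi here doubles as the energy (illustrative choice).
L = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = np.diag([0.0, 0.5])

def Phi(x):
    return 0.5 * np.dot(x, x)

x = np.array([1.0, 0.0])
dt = 1e-3
values = [Phi(x)]
for _ in range(20000):           # explicit Euler, integrate to t = 20
    g = x                         # Phi_x = x for the quadratic Phi
    x = x + dt * (L @ g - M @ g)
    values.append(Phi(x))

# The dissipative term drives Phi toward its minimum ...
assert values[-1] < values[0]
# ... and the state approaches the solution of Phi_x = 0.
assert Phi(x) < 1e-3
```

Dropping the `- M @ g` term leaves only the Hamiltonian part, along which `Phi` stays (numerically) constant.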
We assume that the dissipation potential satisfies the following properties: $$\begin{aligned} \label{Xi} &&\Xi(x,0)=0\nonumber \\ && \Xi\,\,reaches\,\,its\,\,minimum\,\,at\,\,X=0\nonumber\\ &&\Xi\,\,is\,\,a\,\,convex\,\,function\,\,in\,\,a\,\,neighborhood\,\,of\,\,X=0\end{aligned}$$ Regarding the constitutive relations, we assume that $$\label{crelprop} <x^*,\Xi_{x^*}(x,X^{(CR)}(x^*))>=\alpha <X^{(CR)},\Xi_{X^{(CR)}}(x,X^{(CR)})>$$ where $\alpha >0$ is a parameter. In addition, we require that the dissipation potential $\Xi$ is degenerate in the following sense: $$\begin{aligned} \label{degXi} <[x^*]_{x^*=E^{(Me)}_x},\Xi_{x^*}>=<[x^*]_{x^*=N^{(Me)}_x},\Xi_{x^*}>&=&0\nonumber \\ <x^*,[\Xi_{x^*}]_{x^*=E^{(Me)}_x}>=<x^*,[\Xi_{x^*}]_{x^*=N^{(Me)}_x}>&=&0\end{aligned}$$ The simplest example of a dissipation potential $\Xi$ satisfying (\[Xi\]) is the quadratic potential $\Xi=<X,\Lambda X>$, where $\Lambda $ is a matrix with the required degeneracy that is positive definite on vectors outside its nullspace. More general potentials arise in particular in chemical kinetics (see [@Grmchem]). It has been suggested in [@Beretta] to regard $\Lambda$ as a metric tensor. This interpretation then brings Riemannian geometry into dissipative dynamics. We emphasize that this geometrical viewpoint is limited to the quadratic dissipation potential. In the case of nonlinear dissipation potentials (for example those arising in chemical kinetics - see also Section \[EX2\]), a geometrical interpretation is still possible but the classical Riemannian geometry has to be replaced by a more general geometry. As an example of a constitutive relation satisfying (\[crelprop\]) we mention the Fourier constitutive relation in the investigation of heat transfer. In this example $x^*=\frac{1}{T({{\boldmath \mbox{$r$}}})}$, where $T({{\boldmath \mbox{$r$}}})$ is the local temperature, and the constitutive relation is $X^{(CR)}(x^*)=\nabla \left(\frac{1}{T({{\boldmath \mbox{$r$}}})}\right)$. 
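For the quadratic potential mentioned above, the defining properties (\[Xi\]) and the degeneracy can be checked directly. In the sketch below the matrix $\Lambda$ is an illustrative choice of ours: positive definite on two directions and with a one-dimensional nullspace standing in for the required degeneracy.

```python
import numpy as np

# Quadratic dissipation potential Xi(X) = <X, Lam X>, with Lam symmetric
# and positive semi-definite; its nullspace encodes the degeneracy.
Lam = np.diag([2.0, 1.0, 0.0])   # third direction: nullspace

def Xi(X):
    return X @ Lam @ X

rng = np.random.default_rng(1)
assert Xi(np.zeros(3)) == 0.0                        # Xi(0) = 0
for _ in range(100):
    X = rng.normal(size=3)
    Y = rng.normal(size=3)
    t = rng.uniform()
    assert Xi(X) >= 0.0                              # minimum at X = 0
    # Convexity: Xi(tX + (1-t)Y) <= t Xi(X) + (1-t) Xi(Y).
    assert Xi(t * X + (1 - t) * Y) <= t * Xi(X) + (1 - t) * Xi(Y) + 1e-12
# Degeneracy: forces along the nullspace of Lam do not dissipate.
assert Xi(np.array([0.0, 0.0, 3.0])) == 0.0
```

A nonlinear dissipation potential (e.g. the `cosh`-type potentials of chemical kinetics) would have to be tested against (\[Xi\]) in the same way, but only in a neighborhood of $X=0$.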
Direct calculations lead to $$<x^*,\Xi_{x^*}>=\left<\frac{1}{T({{\boldmath \mbox{$r$}}})},\Xi_{\frac{1}{T({{\boldmath \mbox{$r$}}})}}\right>=\int d{{\boldmath \mbox{$r$}}}\,\frac{1}{T({{\boldmath \mbox{$r$}}})}\,\Xi_{\frac{1}{T({{\boldmath \mbox{$r$}}})}} =-\int d{{\boldmath \mbox{$r$}}}\,\frac{1}{T({{\boldmath \mbox{$r$}}})}\,\nabla\cdot\Xi_{\nabla\left(\frac{1}{T({{\boldmath \mbox{$r$}}})}\right)}= \int d{{\boldmath \mbox{$r$}}}\,\nabla\left(\frac{1}{T({{\boldmath \mbox{$r$}}})}\right)\cdot\Xi_{\nabla\left(\frac{1}{T({{\boldmath \mbox{$r$}}})}\right)}=<X^{(CR)},\Xi_{X^{(CR)}}>$$ provided the boundary conditions guarantee that the integrals over the boundary arising in the by-parts integration (leading to the last equality) equal zero. We see that in this example $\alpha=1$. In the context of chemical kinetics, where the thermodynamic forces $X$ are chemical affinities, the parameter $\alpha\neq 1$ (see Section \[EX1\] and [@Grmchem]; for example, for the dissipation potential (\[XiN\]) the coefficient $\alpha=\frac{1}{2}$ and for the dissipation potential $\Xi$ appearing in the Boltzmann equation (\[intlin\]) the coefficient $\alpha=\frac{1}{4}$). Dissipation potentials and constitutive relations will play an important role also in the investigation of the passage *MESO* $\rightarrow$ *meso* in Section \[RDMmde\] below. It follows directly from (\[GENERIC\]) and from the properties of $L$, $\Xi$, and $X^{CR}$ listed above that $$\label{asGEN} \dot{\Phi}^{(Me)}=-<x^*,\Xi_{x^*}(x,X^{CR})>=-\alpha <X^{CR},\Xi_{X^{CR}}(x,X^{CR})>\leq 0$$ The second equality is the required property (\[crelprop\]) of constitutive relations and the last inequality is a direct consequence of the properties (\[Xi\]). The inequality (\[asGEN\]) allows us to see the thermodynamic potential $\Phi^{(Me)}$ as a Lyapunov function associated with the approach of solutions of (\[GENERIC\]) to $x_{eq}$ given by (\[Phieq\]). 
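The Fourier-type by-parts computation above has an exact discrete analogue, which can serve as a sanity check of $\alpha=1$. In the sketch below (our own illustration, on a hypothetical 1D grid) the field `a` stands for $x^*=1/T$, the forward difference plays the role of $\nabla$, and its adjoint plays the role of $-\nabla\cdot$; the vanishing boundary terms of the by-parts integration correspond to the natural (Neumann-like) handling of the endpoints.

```python
import numpy as np

n = 64
rng = np.random.default_rng(2)
a = rng.normal(size=n)            # stands for x* = 1/T on a 1D grid

X = np.diff(a)                    # discrete gradient, X = nabla x*
# Quadratic dissipation potential Xi = (1/2) sum X_i^2, so Xi_X = X.
# dXi/da via the adjoint of the difference operator (discrete "-div").
dXi_da = np.zeros(n)
dXi_da[:-1] -= X
dXi_da[1:] += X

lhs = np.dot(a, dXi_da)           # <x*, Xi_{x*}>
rhs = np.dot(X, X)                # <X^{(CR)}, Xi_{X^{(CR)}}>
assert abs(lhs - rhs) < 1e-10     # alpha = 1, as in the Fourier example
```

The identity holds here to machine precision because the discrete summation by parts produces no boundary remainder at all.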
We have thus proven that Eq.(\[GENERIC\]) indeed makes the passage *MESO* $\rightarrow$ *equilibrium*. If, in addition, we assume that $L$ and $\Xi$ are degenerate in the sense that $\{S,A\}=0$ for all $A$ (i.e., in the terminology of Hamiltonian dynamics, the entropy $S^{(Me)}$ is a Casimir of the Poisson bracket $\{A,B\}$), and $<E_x,\Xi_{S^{(Me)}_x}>=0$, $<N^{(Me)}_x,\Xi_{S^{(Me)}_x}>=0$ and $<x^*,\Xi_{E^{(Me)}_x}>=0$, $<x^*,\Xi_{N^{(Me)}_x}>=0 \,\,\forall x^*$, then, in addition to the inequality (\[asGEN\]), also the following equalities (conservation laws) $$\begin{aligned} \label{consEGEN} \dot{E}^{(Me)}(x)&=&0\\\label{consNGEN} \dot{N}^{(Me)}(x)&=&0\end{aligned}$$ hold. MESO $\rightarrow$ meso; reducing dynamics {#RDMmde} ------------------------------------------- In this section we come to the main subject of this paper. We consider externally driven macroscopic systems whose time evolution is governed on *MESO level* by (\[Fdyn\]). External forces prevent the approach to equilibrium, which means that *equilibrium level* is inaccessible and the approach $MESO \rightarrow equilibrium$ does not exist. Suppose, however, that the behavior of the externally driven macroscopic systems under consideration is found to be well described also on *meso level*, which is more macroscopic (i.e. takes into account fewer details) than *MESO level*. This then means that by investigating solutions of the governing equations on *MESO level* we have to be able to recover the governing equations on *meso level*. In addition, such an investigation will also reveal the reducing dynamics making the passage $MESO \rightarrow meso$. In Section \[RDMMde\] we have shown how thermodynamics on *equilibrium level* arises from the reducing dynamics $MESO \rightarrow equilibrium$ or $meso \rightarrow equilibrium$. 
In this section we show how thermodynamics on *meso level* (we shall call it Constitutive Relation *meso* thermodynamics or, in short form, ***CR meso-thermodynamics*** to distinguish it from the $equilibrium\,\, thermodynamics$ discussed above in Sections \[RDET\] - \[RDMMde\]) arises from the reducing dynamics $MESO \rightarrow meso$. We recall that the formulation of thermodynamics presented in Section \[RDMMde\] (i.e. the formulation of thermodynamics implied by the reducing dynamics $MESO \rightarrow equilibrium$) has been supported mainly by the unification that it brings to various versions of mesoscopic thermodynamics that have emerged in the last one hundred fifty years in well studied and essentially independently developed (each on the basis of its own experimental evidence) mesoscopic dynamical theories. We do not find such examples in the context of the passage $MESO \rightarrow meso$. We do find, however, important results in nonequilibrium thermodynamics, like for instance dissipation thermodynamics (see references in [@GPK]) and extended thermodynamics (see e.g. [@Joubook], [@MullRugg]). We expect them to become, in some form, a part of the general formulation of *CR meso-thermodynamics*. Our goal is thus to formulate *CR meso-thermodynamics* in such a way that equilibrium thermodynamics, dissipation thermodynamics, and extended thermodynamics appear as its different aspects. ### Motivating example {#motex} Before formulating the three postulates of $MESO\rightarrow meso$ thermodynamics, we work out a particular example. First, we present the physical idea and then we formulate it mathematically. We begin with a given *meso level* represented by (\[Gdyn\]). For example, we can think of Eq.(\[Gdyn\]) as standing for the Navier-Stokes-Fourier set of equations. The corresponding more detailed *MESO level*, represented by Eq.(\[Fdyn\]), will be constructed as an extension of (\[Gdyn\]). 
Following [@MullRugg], [@Joubook], the extension from *meso* to *MESO* is made, roughly speaking, by replacing the second term on the right hand side of (\[GENERIC\]) (i.e. the dissipative term) with a new state variable (denoted by the symbol $J$ and interpreted physically as a flux corresponding to the state variable $y$). The time evolution of $J$ is then governed by a newly introduced equation that involves a dissipative term and is coupled to the time evolution of $y$. We require that $J$ dissipates rapidly to a quasi-stationary state $z_{qeq}(y)$ at which it becomes completely enslaved to $y$. At such a quasi-stationary state, the newly constructed *MESO* dynamics reduces to the original *meso* dynamics represented by (\[Gdyn\]). We shall now make an additional requirement. Having realized that all equations governing the reducing time evolution $MESO \rightarrow equilibrium$ possess the structure (\[GENERIC\]), we require that in the absence of the dissipative term the time evolution of $(y,J)$ is Hamiltonian. In this example we restrict ourselves to *meso* dynamics (\[Gdyn\]) that is GENERIC (\[GENERIC\]) without the Hamiltonian part. Moreover, we consider only isothermal systems (see also Section \[EX3\]) and, for the sake of simplicity, we omit the potential $N$ representing the number of moles. The time evolution equation (\[Gdyn\]) thus takes the form $$\label{I0} \dot{y}=-[\Xi^{(me)}_{y^*}(y,y^*)]_{y^*=\Phi^{(me)}_y}$$ where $\Phi^{(me)}(y,T)$ is the thermodynamic potential and $\Xi^{(me)}(y,y^*)$ the dissipation potential associated with the *meso* $\rightarrow$ *equilibrium* passage. The temperature $T$ is a constant. We investigate first the passage *meso* $\rightarrow$ *equilibrium*. We see immediately that (\[I0\]) implies $\dot{\Phi}^{(me)}=-\left[<y^*,\Xi^{(me)}_{y^*}>\right]_{y^*=\Phi^{(me)}_y}\leq 0$ provided $\Xi^{(me)}$ satisfies the properties (\[Xi\]). 
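The gradient flow (\[I0\]) can be illustrated with the simplest possible choices. In the sketch below both the thermodynamic potential and the dissipation potential are quadratic; these concrete forms (and the equilibrium value $y_{eq}=2$) are our own illustrative assumptions, not taken from the text.

```python
# Isothermal gradient flow (I0): dy/dt = -Xi_{y*}(y*) at y* = Phi_y,
# with illustrative quadratic choices
#   Phi(y) = (y - 2)^2 / 2   and   Xi(y*) = (y*)^2 / 2,  so Xi_{y*} = y*.
def Phi(y):
    return 0.5 * (y - 2.0) ** 2

def Phi_y(y):
    return y - 2.0

y, dt = 0.0, 1e-2
phis = [Phi(y)]
for _ in range(5000):
    y = y - dt * Phi_y(y)      # explicit Euler step of (I0)
    phis.append(Phi(y))

assert all(b <= a for a, b in zip(phis, phis[1:]))   # Phi never increases
assert abs(y - 2.0) < 1e-6                           # y -> y_eq with Phi_y = 0
```

For these quadratic choices the discrete map contracts toward $y_{eq}$ with factor $1-dt$ per step, so the monotone decrease of `Phi` holds exactly, not only up to discretization error.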
This thermodynamic potential then implies the fundamental thermodynamic relation on *equilibrium level* $$\label{I20} \Phi^{(em)*}(T)= [\Phi^{(me)}(y,T)]_{y=y_{eq}(T)}$$ where $y_{eq}(T)$ is a solution of $\Phi^{(me)}_y=0$. By the upper index $(em)$ in $\Phi^{(em)*}(T)$ appearing in (\[I20\]) we indicate (see the paragraph following Eq.(\[classftr\])) that this quantity belongs to *equilibrium level* and is obtained by the MaxEnt reduction from *meso level*. So far, we have looked from *meso level* to *equilibrium level*. Now we look in the opposite direction, towards *MESO level* involving more details. We extend the *meso* dynamics (\[I0\]) to *MESO* dynamics by following the physical considerations sketched in the beginning of this section. The state variables $x$ on *MESO level* become $x=(y,J)$, where $J$ is a newly adopted state variable having the physical interpretation of a “flux” of $y$. Equation (\[Fdyn\]) is proposed to have the form $$\begin{aligned} \label{I3} \dot{y}&=& \Gamma [J^*]_{J^*=\Phi^{(Me)}_J}\nonumber \\ \dot{J}&=& -\Gamma^T [y^*]_{y^*=\Phi^{(Me)}_y} - [\Theta^{(Me)}_{J^*}(y,J^*)]_{y^*=\Phi^{(Me)}_y;J^*=\Phi^{(Me)}_J}\end{aligned}$$ where $\Gamma$ is an operator, $\Gamma^T$ is its transpose, and $\Phi^{(Me)}(y,J)$ is the thermodynamic potential associated with the *MESO* $\rightarrow$ *equilibrium* passage. The dissipation potential $\Theta^{(Me)}(y,J^*)$ is the Legendre transformation of the dissipation potential $\Xi^{(Me)}(y,X^*)$, where $X^*=\Theta^{(Me)}_{J^*}$ (i.e. $\Theta^{(Me)}(y,J^*) =[-\Xi^{(Me)}(y,X^*) +X^*J^*]_{X^*=X_0^*(y,J^*)}$, where $X_0^*(y,J^*)$ is a solution of $[-\Xi^{(Me)}(y,X^*) +X^*J^*]_{X^*}=0$). If $\Xi^{(Me)}(y,X^*)$ satisfies the properties (\[Xi\]) then $\Theta^{(Me)}(y,J^*)$ satisfies them as well, and vice versa. 
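The Legendre transformation used here can be checked numerically on a scalar quadratic dissipation potential, for which the closed form of the transform is elementary. The coefficient and the grid below are our illustrative choices.

```python
import numpy as np

# Legendre transform of the quadratic dissipation potential
#   Xi(X*) = lam (X*)^2 / 2,  whose closed form is Theta(J*) = (J*)^2 / (2 lam).
lam = 3.0

def Xi(Xstar):
    return 0.5 * lam * Xstar ** 2

def Theta_numeric(Jstar):
    # Theta(J*) = max over X* of [X* J* - Xi(X*)], evaluated on a grid.
    Xs = np.linspace(-50.0, 50.0, 200001)
    return np.max(Xs * Jstar - Xi(Xs))

for Jstar in (-2.0, 0.0, 1.5):
    exact = Jstar ** 2 / (2.0 * lam)
    assert abs(Theta_numeric(Jstar) - exact) < 1e-4
```

Since `Xi` is convex with minimum zero at the origin, `Theta_numeric` inherits the same three properties (\[Xi\]), consistent with the "and vice versa" remark above.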
The time evolution equation (\[I3\]) is again GENERIC (\[GENERIC\]) but, contrary to (\[I0\]), it now also has the Hamiltonian part\ $\left(\begin{array}{cc}0&\Gamma\\-\Gamma^T&0\end{array}\right)\left(\begin{array}{cc}y^*\\J^*\end{array}\right)$. The operator $\left(\begin{array}{cc}0&\Gamma\\-\Gamma^T&0\end{array}\right)$ is skew symmetric for any operator $\Gamma$, but the bracket corresponding to it does not necessarily satisfy the Jacobi identity for every $\Gamma$. In view of the remark that we made in Section \[RDMMde\] about the role of the Jacobi identity in GENERIC, we still consider (\[I3\]) as being GENERIC. At this point we note that the extension that we made above differs from the extensions made in [@MullRugg], [@Joubook] by requiring that the nondissipative part of the extended equation is Hamiltonian. As a consequence, the flux appearing on the right hand side of the first equation in (\[I3\]) is conjugate to the flux appearing on the left hand side of the second equation of (\[I3\]). The reason that this feature of the extension is not seen in Refs.[@MullRugg] and [@Joubook] is that the master structure for extensions in Refs.[@MullRugg] and [@Joubook] is the classical Grad-hierarchy reformulation of the Boltzmann equation, which addresses only a very special physical system (namely the ideal gas) and thus, in terms of our formulation, only a very special class of functions $\Phi^{(Me)}(y,J)$ and $\Phi^{(me)}(y)$. First, we again establish the passage *MESO* $\rightarrow$ *equilibrium*. It follows directly from (\[I3\]) that $$\label{eprodM} \dot{\Phi}^{(Me)}=-\left[J^*\Theta^{(Me)}_{J^*}\right]_{J^*=\Phi^{(Me)}_J}\leq 0$$ provided $\Theta^{(Me)}$ satisfies the properties (\[Xi\]). In the same way as on *meso* level we arrive at the fundamental thermodynamic relation on *equilibrium* level $$\label{I21} \Phi^{(eM)*}(T)=[\Phi^{(Me)}(y,J,T)]_{y=y_{eq}(T); J=J_{eq}(T)}$$ where $y_{eq}(T)$ and $J_{eq}(T)$ are solutions of $\Phi^{(Me)}_y=0$ and $\Phi^{(Me)}_J=0$. 
The thermodynamic potential $\Phi^{(Me)}(y,J)$ represents a more detailed picture of the physics taking place in the macroscopic system under consideration than the picture represented by $\Phi^{(me)}(y)$. Depending on the particular forms of $\Phi^{(Me)}(y,J)$ and $\Phi^{(me)}(y)$, some of the details taken into consideration on *MESO level* may or may not show up in the equilibrium fundamental thermodynamic relation on *equilibrium level*. In general, the *equilibrium level* fundamental thermodynamic relations (\[I20\]) and (\[I21\]) are not identical. Next, we reduce (\[I3\]) to (\[I0\]). Let the operator $\Gamma$, the dissipation potential $\Theta^{(Me)}$, and the thermodynamic potential $\Phi^{(Me)}$ be such that $J$ evolves in time more rapidly than $y$. If this is the case then we regard the time evolution governed by (\[I3\]) as proceeding in two stages. In the first stage (the reducing evolution), the time evolution of $J$ is governed by the second equation in (\[I3\]) in which $y$ (and thus also $y^*$) is kept fixed. This reducing (fast) time evolution is thus governed by $$\label{I4} \dot{J}=-\Phi^{(Mm)}_{J^*}$$ where $$\label{I5} \Phi^{(Mm)}(X^{(CR)*}(y^*),J^*)=\Theta^{(Me)}(J^*)-X^{(CR)*}(y^*)J^*$$ with the constitutive relation $$\label{CR1} X^{(CR)*}(y^*)=-\Gamma^T y^*$$ In order to distinguish the conjugates with respect to the thermodynamic potential $\Phi^{(Me)}$ (i.e. $y^*=\Phi^{(Me)}_y; J^*=\Phi^{(Me)}_J$) from the conjugates with respect to the dissipation potential $\Theta$, we do not use the upper index star to denote $\Theta_{J^*}$ but, following the traditional notation established in nonequilibrium thermodynamics, we write $X^*=\Theta_{J^*}$. Still following the traditional terminology of nonequilibrium thermodynamics, we call $X^*$ the thermodynamic force corresponding to the thermodynamic flux $J^*$. Now we turn our attention to solutions of (\[I4\]). 
We see immediately that $$\label{I100} \dot{\Phi}^{(Mm)}=-\Phi^{(Me)}_{JJ}(\Phi^{(Mm)}_{J^*})^2\leq 0$$ which means that $J$ tends, as $t\rightarrow \infty$, to $J_{qeq}^*(y^*)$, which is a solution of $$\label{I101} \Theta^{(Me)}_{J^*}(J^*)=X^{(CR)*}(y^*)$$ We see that the potential $\Theta^{(Me)}$ plays different roles in the analysis of $MESO \rightarrow meso$ and in the analysis of $MESO\rightarrow equilibrium$. In the former analysis it plays the same role as the thermodynamic potential $\Phi^{(Me)}$ plays in the investigation of the approach $MESO \rightarrow equilibrium$ governed by (\[I3\]). In the latter analysis it plays a role that is closely related to the entropy production (see (\[eprodM\])). The relation $$\label{Mmth} \Phi^{(mM)*}(y^*)=\Phi^{(Mm)}(X^{(CR)*}(y^*),J_{qeq}^*(y^*))=-\Xi^{(Me)}(y,X^{(CR)*}(y^*))$$ is the fundamental thermodynamic relation on *meso level* implied by the fast time evolution governed by (\[I4\]). If we insert $J_{qeq}^*$ (i.e. the solution of (\[I101\])) into the first equation in (\[I3\]) we arrive at $$\label{Xxrelation} \Xi^{(me)}(y,y^*)=[\Xi^{(Me)}(y,X)]_{X=X^{(CR)*}(y,y^*)}$$ The analysis presented above can be summed up in two results. *Result 1* Equation (\[I4\]) governs the reducing time evolution (i.e. the time evolution making the reduction $MESO\rightarrow meso$) and (\[Mmth\]) is the fundamental thermodynamic relation on *meso* level implied by it. *Result 2* The reducing time evolution equation (\[I4\]) is explicitly related to the *MESO* time evolution equation (\[I3\]) and to the *meso* time evolution equation (\[I0\]). The *MESO* dynamics (\[I3\]) splits into the (fast) reducing dynamics (\[I4\]) followed by the (slow) reduced dynamics (\[I0\]). ### General formulation {#GF} In this section we formulate Result 1 in a more general context. Result 2 requires a detailed specification of *MESO* dynamics and a detailed analysis of the phase portrait $\mathcal{P}^{MESO}$ that it generates. 
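The fast–slow splitting of Result 2 can be observed numerically on the simplest scalar realization of (\[I3\]). All choices in the sketch below (scalar $\Gamma=1$, quadratic potentials giving $y^*=y$, $J^*=J$ and $\Theta^{(Me)}_{J^*}=J/\tau$, the values of $\tau$ and the step size) are our illustrative assumptions.

```python
import math

# Scalar realization of (I3):  dy/dt = J,  dJ/dt = -y - J/tau.
# For small tau, J relaxes quickly to the quasi-equilibrium J_qeq = -tau*y
# (the solution of Theta_{J*} = X^{(CR)*} = -y), after which y follows the
# slow reduced dynamics dy/dt ~ -tau*y, i.e. (I0).
tau, dt = 1e-2, 1e-3
y, J = 1.0, 0.0
for _ in range(2000):                 # integrate to t = 2 >> tau
    y, J = y + dt * J, J + dt * (-y - J / tau)

# After the fast transient, J is enslaved to y ...
assert abs(J + tau * y) < 1e-3
# ... while y has barely moved (slow decay at rate tau).
assert abs(y - math.exp(-2.0 * tau)) < 1e-2
```

The two assertions are exactly the two stages: the first stage drives $J$ onto the quasi-equilibrium manifold, the second stage is the reduced *meso* dynamics along it.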
Except for a few simple illustrations presented in Section \[EX\], we shall not attempt in this paper to formulate Result 2 in general terms. Our objective is to adapt the three Postulates of $MESO\rightarrow equilibrium$ thermodynamics (formulated in Sections \[RDET\], \[RDMee\], \[RDMMee\], \[RDMMde\] above) to $MESO\rightarrow meso$ thermodynamics. First we note an important difference between the reductions $MESO\rightarrow equilibrium$ and $MESO\rightarrow meso$. In the former reduction the target level is *equilibrium level*, i.e. a level of description on which no time evolution takes place. In such reducing dynamics, the fundamental thermodynamic relations consist of the equilibrium state variables $\omega$ expressed in terms of the state variables used on the initial level and of the entropy driving the reduction (see (\[microftr\]) and (\[MESOftr\])). In the latter reduction the target level is *meso level*, on which the time evolution does take place. The fundamental thermodynamic relation corresponding to the $MESO\rightarrow meso$ reduction must again include the state variables $y$ expressed in terms of $x$, but it must also include the vector field $g$ on *meso level* (see (\[Gdyn\])) expressed in terms of $x$. We present now a setting in which we subsequently formulate the fundamental thermodynamic relation of $MESO \rightarrow meso$ thermodynamics. We begin with *MESO* dynamics (\[Fdyn\]) and with the map $$\label{PMm} \mathfrak{P}^{(Mm)}:M\rightarrow N; x\mapsto y=y(x)$$ allowing us to express the state variables on *meso level* in terms of the state variables on *MESO level* (compare with (\[PMIe\]) and (\[PMe\])). 
We apply the map $\mathfrak{P}^{(Mm)}$ to (\[Fdyn\]) and obtain $$\label{Gdynn} \dot{y}=\mathfrak{P}(G(x))$$ Hereafter, we shall write the right hand side of (\[Gdynn\]) in the form $$\label{J} \mathfrak{P}(G(x))=\Gamma({{\boldmath \mbox{$J$}}}(x))$$ where $\Gamma$ is a fixed operator and ${{\boldmath \mbox{$J$}}}(x)=(J_1(x),...,J_n(x))$ are quantities called ***thermodynamic fluxes***. For example, if (\[Fdyn\]) is the Boltzmann kinetic equation (with the one particle distribution function $f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})$, where ${{\boldmath \mbox{$r$}}}$ is the position vector and ${{\boldmath \mbox{$v$}}}$ the momentum of one particle) and the map $\mathfrak{P}^{(Mm)}$ is a projection on the first five moments in ${{\boldmath \mbox{$v$}}}$, then $\Gamma =-\frac{\partial}{\partial {{\boldmath \mbox{$r$}}}}$ and ${{\boldmath \mbox{$J$}}}$ are higher order moments. In chemical kinetics (see [@Grmchem]) $\Gamma$ is the stoichiometric matrix. With (\[J\]), the time evolution equation (\[Gdynn\]) takes the form $$\label{JJJ} \dot{y}=\Gamma({{\boldmath \mbox{$J$}}}(x))$$ Its right hand side does not represent, at least in general, a vector field on $N$. In order for it to be one, it has to be closed, i.e. evaluated at $x=\hat{x}(y)$ $$\label{Gdynnc} \dot{y}=[\Gamma({{\boldmath \mbox{$J$}}}(x))]_{x=\hat{x}(y)}$$ The map $y\mapsto \hat{x}(y)$ is a map $N\rightarrow M$ called a ***closure map***. In the formulation of the $MESO \rightarrow meso$ thermodynamics we shall limit ourselves to the *meso* dynamics (\[Gdyn\]) that has the form (\[Gdynnc\]), with no restriction on the closure map. It will be the reducing dynamics that determines it. We are now in a position to formulate the three Postulates of $MESO\rightarrow meso$ thermodynamics. We shall follow closely the formulation of the reducing dynamics *MESO* $\rightarrow$ *equilibrium* presented in Section \[RDMMde\]. 
***MESO $\rightarrow$ meso Postulate 0*** is the same as the *MESO* $\rightarrow$ *equilibrium* Postulate 0 in Section \[RDMMde\]. ***MESO $\rightarrow$ meso Postulate I*** *$x\in M$ are state variables on MESO level, $y\in N$ are state variables on meso level and (\[Gdynnc\]) with unrestricted closure map is a family of the time evolution equations on meso level.* ***MESO $\rightarrow$ meso Postulate II*** \(i) *The fundamental thermodynamic relation consists of the specification of the following quantities* $$\label{Mmcr} y(x),{{\boldmath \mbox{$J$}}}(x),\Phi^{(0Mm)}(x)$$ The first quantity in (\[Mmcr\]) is the map $\mathfrak{P}^{(Mm)}:M\rightarrow N$ which expresses the state variables $y$ used on *meso level* in terms of the state variables $x$ used on the more microscopic *MESO level*. The reducing time evolution introduced below leaves the space $N$ unchanged. The quantities ${{\boldmath \mbox{$J$}}}=(J_1,...,J_n)$ are the thermodynamic fluxes appearing in the target *meso* dynamics (\[Gdynnc\]). The final quantity $\Phi^{(0Mm)}$ is the thermodynamic potential $\Phi^{(0Mm)}: M\rightarrow \mathbb{R}$. Following (\[Phi\]) and (\[Phi1\]), we write it in the form $$\label{Phi0} \Phi^{(0Mm)}(x,\theta)=-S^{(Mm)}(x)+\frac{1}{\theta}W^{(Mm)}(x)$$ The motivating example discussed above in Section \[motex\] and the reducing dynamics discussed below indicate that $[S^{(Mm)}(x)]_{x_{qeq}}$ (where $x_{qeq}$ is the $t\rightarrow \infty$ solution of the reducing dynamics) has the physical interpretation of the entropy production on *meso level* and $[W^{(Mm)}(x)]_{x_{qeq}}$ has the physical interpretation of the work per unit time performed by external forces. The quantity $\theta$ is a temperature or a quantity having the physical dimension of temperature. 
\(ii) *The $MESO\rightarrow meso$ reducing time evolution is governed by* $$\label{CRGENERIC} \dot{x}=\left[\theta L^{(Mm)}(x)x^*-\Xi^{(Mm)}_{x^*}(x,x^*)\right]_{x^*=\Phi^{(Mm)}_x}$$ where $$\label{PhiMm} \Phi^{(Mm)}(x,{{\boldmath \mbox{$X$}}})=\Phi^{(0Mm)}(x,\theta)+\sum_{i=1}^{n}X_iJ_i(x)$$ is the thermodynamic potential. The quantities ${{\boldmath \mbox{$X$}}}=(X_1,...,X_n)$ are called thermodynamic forces corresponding to the thermodynamic fluxes ${{\boldmath \mbox{$J$}}}=(J_1,...,J_n)$. As in (\[GENERIC\]), the operator $L^{(Mm)}$ is a Poisson bivector and $\Xi^{(Mm)}$ is a dissipation potential. Both $L^{(Mm)}$ and $\Xi^{(Mm)}$ are required to be degenerate so that the space $N$ remains invariant under the time evolution governed by (\[CRGENERIC\]). The same considerations as the ones that led us in Section \[RDMMde\] to the conclusion that solutions to (\[GENERIC\]) have the property $x\rightarrow x_{eq}$ as $t\rightarrow \infty$ lead us to the conclusion that solutions of (\[CRGENERIC\]) have the property $x\rightarrow x_{qeq}$ as $t\rightarrow \infty$, where $x_{qeq}$ is a solution to $$\label{qeq} \Phi^{(Mm)}_x=0$$ These states are time independent (i.e. steady) states with respect to the reducing dynamics but they are, in general, not steady states with respect to either the original *MESO* dynamics or the reduced dynamics. Provided the thermodynamic potential $\Phi^{(Mm)}$ is specified, $x_{qeq}$ depends on $({{\boldmath \mbox{$X$}}},\theta)$. In order for $x_{qeq}$ to play the role of the closure $\hat{x}(y)$, $({{\boldmath \mbox{$X$}}},\theta)$ have to be specified as functions of $y$. We shall call such a specification a ***constitutive relation*** $({{\boldmath \mbox{$X$}}}^{(CR)}(y),\theta^{(CR)}(y))$. The asymptotic solution $x_{qeq}$ of (\[CRGENERIC\]) with ${{\boldmath \mbox{$X$}}}={{\boldmath \mbox{$X$}}}^{(CR)}(y)$ and $\theta =\theta^{(CR)}(y)$ will be denoted $x_{cl}(y)$. The lower index “cl” stands for “closure”. 
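The way the imposed forces shift the quasi-equilibrium can be seen on the smallest possible instance of (\[CRGENERIC\]). The sketch below drops the Hamiltonian part and uses scalar quadratic choices of our own ($\Phi^{(0Mm)}(x)=x^2/2$, $J(x)=x$, a constant imposed force $X$); it shows that the flow settles at $x_{qeq}$ solving $\Phi^{(Mm)}_x=0$, which differs from the minimum of the undriven potential.

```python
# Scalar reducing flow without the Hamiltonian part:
#   Phi^{(Mm)}(x) = x^2/2 + X*x,   dx/dt = -Phi^{(Mm)}_x = -(x + X).
# The quasi-equilibrium is x_qeq = -X, not the undriven x_eq = 0:
# the imposed thermodynamic force shifts the steady state.
X = 0.7

def Phi_Mm_x(x):
    return x + X

x, dt = 0.0, 1e-2
for _ in range(5000):
    x -= dt * Phi_Mm_x(x)

assert abs(x + X) < 1e-8     # x -> x_qeq = -X
```

Setting `X = 0` recovers the $MESO\rightarrow equilibrium$ situation, with the flow settling at the minimum of $\Phi^{(0Mm)}$ itself.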
We shall comment on constitutive relations at the end of this section. Now, we assume that the constitutive relations are known. The fundamental thermodynamic relation on *meso level* implied by the fundamental thermodynamic relation (\[Mmcr\]) on *MESO level* is the following: $$\begin{aligned} \label{Mmimpl} {{\boldmath \mbox{$J$}}}^{(CR)}({{\boldmath \mbox{$X$}}}^{(CR)},\theta^{(CR)})&=& [{{\boldmath \mbox{$J$}}}(x)]_{x=x_{cl}} \\\label{thr} y&=&[y(x)]_{x=x_{cl}}\nonumber \\ S^{(mM)*}({{\boldmath \mbox{$X$}}}^{(CR)},\theta^{(CR)})&=&[\Phi^{(Mm)}]_{x=x_{cl}}\end{aligned}$$ The relation (\[Mmimpl\]) is the specification of the reduced dynamics. The unspecified closure $\hat{x}(y)$ appearing in (\[Gdynnc\]) is now specified: $\hat{x}(y)=x_{cl}({{\boldmath \mbox{$X$}}}^{(CR)},\theta^{(CR)})$. The first line in (\[thr\]) is the same as the first two lines in the fundamental thermodynamic relations (\[MIimpl\]) and (\[Meimpl\]) implied by $MICRO\rightarrow equilibrium$ thermodynamics. The second line in (\[thr\]) is again the same as the third line in (\[MIimpl\]) and (\[Meimpl\]). It is a thermodynamic relation on *meso* level implied by the reducing dynamics (\[CRGENERIC\]). We emphasize that this relation is not implied by the reduced dynamics. As for the notation, we use the upper index $(mM)$ to denote that the quantity is formulated on *meso level* and is implied by dynamics on *MESO level* (see the explanation of the notation in the text after (\[classftr\])). If we compare the fundamental thermodynamic relation (\[Mmimpl\]), (\[thr\]) implied by $MESO\rightarrow meso$ with the fundamental thermodynamic relations (\[MIimpl\]) and (\[Meimpl\]) implied by $MICRO\rightarrow equilibrium$ and $MESO\rightarrow equilibrium$, we see that the new feature in (\[Mmimpl\]), (\[thr\]) is the reduced dynamics (\[Mmimpl\]). Indeed, the reduced dynamics in the $MESO\rightarrow equilibrium$ reduction is no dynamics at all and thus there is no need to specify it. 
If, on the other hand, we compare (\[Mmimpl\]), (\[thr\]) with standard investigations of reductions that put into focus only the reduced dynamics, the second line in (\[thr\]) is new. It represents new thermodynamics implied by the reducing dynamics. We also emphasize that the fundamental thermodynamic relation (\[Mmimpl\]), (\[thr\]) exists independently of whether the states in the reduced dynamics are steady or time dependent. Finally, we return to the constitutive relations $({{\boldmath \mbox{$X$}}}^{(CR)}(y),\theta^{(CR)}(y))$ introduced in the text after Eq.(\[qeq\]). First, we note that in the context of $MESO\rightarrow equilibrium$ investigations in Sections \[RDMee\], \[RDMMee\] and \[RDMMee\], constitutive relations are specifications of $\omega^*$. This means that in constitutive relations arising in $MESO\rightarrow equilibrium$ investigations we are expressing the conditions under which the macroscopic systems under consideration are investigated. This is also true in the context of $MESO\rightarrow meso$ investigations but with two new features. First, the conditions now also involve external forces. The imposed external forces are expressed in some of the forces $X$. We shall denote them by ${{\boldmath \mbox{$X$}}}^{(ext)}$. Second, the remaining forces, denoted ${{\boldmath \mbox{$X$}}}^{(int)}$, must be specified, as well as the free energy $\Phi^{(0Mm)}$, by solving the *MESO* time evolution equation (\[Fdyn\]) (i.e. constructing the phase portrait $\mathcal{P}^{MESO}$ and extracting from it the slower changing pattern representing the reduced *meso* time evolution; see also the discussion in Section \[RD\]). This, of course, can be done only if (\[Fdyn\]) is specified in more detail. We have done it in the example discussed in Section \[motex\] and we shall make other illustrations in the next Section \[EX\]. At this point we only mention that it is in the constitutive relations where the entropy $S^{(me)}$ enters the analysis. 
The entropy $S^{(Mm)}$ then typically becomes closely related to the production of the entropy $S^{(me)}$. Recall for example the Fourier and Navier-Stokes constitutive relations in fluid mechanics (see more in Section \[EX4\]). They are expressed in terms of the conjugate state variables with respect to the local entropy that, in the classical fluid mechanics, plays the role of $S^{(me)}$. We have also seen similar constitutive relations in Section \[motex\]. We end this section with a few remarks. More comments and illustrations are then in Section \[EX\]. The CR fundamental thermodynamic relation (\[Mmimpl\]) is a relation involving only the state variables and the material parameters used on *meso level* (\[Gdyn\]). From the physical point of view, we expect that even if it is not directly related to Eq.(\[Gdyn\]), it reflects important properties of solutions of (\[Gdyn\]). This is because both Eq.(\[Gdyn\]) and the relation (\[Mmimpl\]) address the same physics even if expressed on different levels of description. In particular, we anticipate, on physical grounds, that the presence of bifurcations in solutions to (\[Gdyn\]), expressing mathematically the presence of sudden changes in behavior (e.g. the onset of convection in the Rayleigh-Bénard system), is manifested in the CR fundamental thermodynamic relation (\[Mmimpl\]) as phase transitions. This anticipation is based on the experimentally observed growth of fluctuations in meso-measurements of macroscopic systems in situations in which their behavior changes dramatically (we shall call them critical situations). From this observation we then conclude that in critical situations the “distance” between *meso* and *MESO levels* diminishes and the critical behavior manifests itself on both *meso* and *MESO levels*. Since the CR fundamental thermodynamic relation is inherited from the *MESO level*, we expect to see the critical behavior also in it. 
Even without specifying the CR thermodynamic potential $\Phi^{(Mm)}$, the fact that the constitutive relations arise from minimizing it implies Maxwell-type reciprocity relations (that, from the mathematical point of view, express the symmetry of the second derivatives of $\Phi^{(Mm)}$) among the thermodynamic fluxes and forces. When the potential $\Phi^{(Mm)}$ is quadratic, these reciprocity relations become Onsager’s relations. Examples of reciprocity relations that arise in chemical kinetics are worked out in Section III B in [@GrmPavKlika]. We ask now the following question. Given an externally driven macroscopic system, how do we find the CR thermodynamic potential $\Phi^{(Mm)}$ (see (\[PhiMm\])) corresponding to it? If we ask the same question, but with externally unforced macroscopic systems and with the thermodynamic potential $\Phi^{(Me)}$ (see (\[Phi1\])) replacing externally driven macroscopic systems and the CR thermodynamic potential $\Phi^{(Mm)}$, then the answer is the following. On the most macroscopic level (that is, for externally unforced systems, the level of classical equilibrium thermodynamics - see Section \[RDET\]), the only way we can identify the thermodynamic potential $\Phi^{(ee)}$ is by making experimental observations (e.g. observation of the relation among $P,V,T$ and of the specific heat - see Section \[RDET\]). The knowledge of $\Phi^{(ee)}$ on the level of the classical equilibrium thermodynamics can then be transferred, via the local equilibrium assumption, also to the level of fluid mechanics. On the level of kinetic theory, we can take as the point of departure for the search for $\Phi^{(Me)}$ the Boltzmann kinetic equation (playing in this example the role of *MESO* dynamics) and arrive (following Boltzmann) at the Boltzmann entropy by investigating properties of its solutions. 
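The Maxwell-type reciprocity relations mentioned above can be checked numerically for any smooth potential: if all fluxes derive from a single potential, the matrix of flux-force sensitivities is symmetric. A small sketch with a hypothetical two-force potential (the specific $\Phi$ is invented for illustration; derivatives by finite differences):

```python
import numpy as np

# Maxwell-type reciprocity: if fluxes derive from one potential,
#   J_i(X) = dPhi/dX_i,  then  dJ_i/dX_j = dJ_j/dX_i  (symmetric Hessian).
# Hypothetical non-quadratic potential of two forces (illustration only):
def Phi(X1, X2):
    return np.cosh(X1) + np.cosh(X2) + 0.3 * X1 * X2 + 0.1 * X1**2 * X2**2

h = 1e-4
def d(f, i, X):              # central finite difference along component i
    e = np.zeros(2); e[i] = h
    return (f(*(X + e)) - f(*(X - e))) / (2 * h)

X = np.array([0.7, -0.4])
J = [lambda a, b, i=i: d(Phi, i, np.array([a, b])) for i in range(2)]  # fluxes
L12 = d(J[0], 1, X)          # dJ_1/dX_2
L21 = d(J[1], 0, X)          # dJ_2/dX_1
assert abs(L12 - L21) < 1e-4  # Onsager-type symmetry, with nonlinear corrections
```

In the quadratic case the mixed derivative is constant and the symmetric matrix $L_{ij}$ is exactly the Onsager matrix; here it still exists pointwise because both fluxes come from one $\Phi$.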
On the *MICRO level*, it suffices to know all the mechanical interactions expressed in the energy $E^{MICRO}$ since the entropy on the *MICRO level* is the universal Gibbs entropy (\[microftr\]). On *meso levels*, we may find $\Phi^{(Me)}$ by MaxEnt reduction from the *MICRO level* (i.e. by maximizing the Gibbs entropy subject to constraints expressing the mapping from *MICRO* to *meso* state spaces) and/or by relating entropy to concepts arising in information theory and the theory of probability. In Section \[EX2\] we shall suggest a possible universal *MICRO level* CR entropy. Reducing Dynamics: Examples {#EX} =========================== Our objective in this section is to make a few comments and illustrations that will bring a more concrete content to the investigation discussed in previous sections. As for the $MESO\rightarrow equilibrium$ passage, many very specific illustrations can be found in [@Grmadv] and references cited therein and in [@Obook]. In Section \[EX2\], we work out a new illustration in which *MESO level* is replaced by *MICRO level*. In Sections \[EX3\] and \[EX4\] we develop two simple examples illustrating the $MESO\rightarrow meso$ passage. In Section \[EX5\] we use the *CR meso-thermodynamics* to estimate volume fractions at which phase inversion occurs in a blend of two immiscible fluids. In Section \[EX1\] we comment about reductions seen as pattern recognition in phase portraits. Before proceeding to specific illustrations, we shall comment about the physics and the experimental basis of the general thermodynamics presented above. We begin with the classical equilibrium thermodynamics. This theory has emerged from an attempt to combine the mechanics involved in large scale mechanical engines with heat. As it became clear later in the Gibbs equilibrium statistical mechanics, heat is a manifestation, on the macroscopic scale, of the mechanics on the microscopic (atomic) scale. 
To combine the large scale mechanics with heat is to combine large scale mechanics with microscopic mechanics. The objective of the classical equilibrium thermodynamics is to incorporate the microscopic mechanics (or heat which, at the time when thermodynamics was emerging, was a rather mysterious concept) into the large scale mechanics by ignoring all that is irrelevant to our direct macroscopic interest. In the classical equilibrium thermodynamics this has been achieved by enlarging the concept of mechanical energy (by introducing a new type of energy, namely the internal energy) and by introducing the concept of entropy together with the MaxEnt principle. The setting of the classical equilibrium thermodynamics is thus a two-level setting: one level (macroscopic) is of our direct interest and the other (microscopic) is not of our direct interest. We cannot however completely ignore it since it influences what happens on the macroscopic level. It is the concept of entropy that on the macroscopic level represents all from the microscopic level that is important for describing the behavior that directly interests us. All the other details involved on the microscopic level are ignored. The essence of the classical equilibrium thermodynamics is thus to provide a relation $MICRO\rightarrow macro$ between two levels of description. Its experimental basis consists of observations showing that indeed the “minimalist” inclusion of the microscopic level offered by the classical equilibrium thermodynamics leads to predictions that agree with the macroscopic experimental observations. In the formulation of general thermodynamics we have extended the classical equilibrium thermodynamics by keeping its two-level $MICRO\rightarrow macro$ setting but we have replaced the *MICRO* and *macro* levels with two general *MESO* and *meso* levels. Thermodynamics (including the classical equilibrium thermodynamics) is a theory of theories or, in other words, a metaphysics. 
The experimental basis of thermodynamics consists of meta-observations showing that behavior observed and well described on one level can also be observed and well described on another level. Direct experimental observations, contrary to meta-observations, are observations made on a single level. They provide the experimental basis of individual levels. The concept of entropy can only be understood in the two-level $MESO\rightarrow meso$ viewpoint of thermodynamics. Often asked questions like, for instance, “does the entropy exist for driven systems?” should be replaced with the question: can the behavior of the driven system under investigation be described on two separate levels? If the answer to this latter question is yes then the answer to the former question is also yes. Since both well established levels are applicable, solutions of the time evolution on the level involving more details must approach solutions on the second level involving less details. The entropy is then the potential driving the approach. In conclusion of the above comment about the general thermodynamics we note that the very wide scope of thermodynamics (on the one hand it is a metaphysics and on the other hand it is a very practically oriented engineering tool) is certainly one of the reasons for its attractiveness but it is also a reason (at least one of the reasons) for unusually strong disagreements among its practitioners. Pattern recognition in the phase portrait, Chapman-Enskog method {#EX1} ---------------------------------------------------------------- An archetype example of the *MESO* time evolution equation (\[Fdyn\]) is the Boltzmann kinetic equation. An archetype example of a $MESO\rightarrow meso$ investigation is the Chapman-Enskog analysis of the passage from the Boltzmann equation to fluid mechanics. In this section we illustrate the pattern recognition viewpoint of reductions on the $MICRO\rightarrow MESO$ derivation of the Boltzmann equation and on the Chapman-Enskog method. 
### MICRO $\rightarrow$ MESO introduction of the Boltzmann equation {#derBE} We emphasize that our objective is not to derive rigorously the Boltzmann equation from *MICRO* mechanics but only to illustrate how it can arise in the pattern recognition process in $\mathcal{P}^{MICRO}$. In order to be able to recognize patterns in the phase portrait $\mathcal{P}^{MICRO}$ we have to generate it (or at least to obtain some pertinent information about it). Since $\mathcal{P}^{MICRO}$ is a collection of particle trajectories we have to find the trajectories, i.e. we have to solve the *MICRO* time evolution equations. It is important to realize that it is not the *MICRO* vector field (i.e. the *MICRO* time evolution equations) that is our starting point in the pattern recognition process but the collection of trajectories that it generates (i.e. solutions to the *MICRO* time evolution equations). In the case of a dilute ideal gas (i.e. a macroscopic system composed of particles that do not interact except for occasional binary collisions) the particle trajectories can be seen as a composition of straight lines (representing free particle motion) and two intersecting lines (representing binary collisions). Intersections of three (or more) lines at one point (representing ternary (or higher order) collisions) are, due to the dilution, very rare and we therefore ignore them. We choose the one particle distribution function $f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})$ as the *meso* state variable and try to recognize its time evolution (in particular the vector field generating it) that can be seen as a pattern in $\mathcal{P}^{MICRO}$. We begin with the straight line ${{\boldmath \mbox{$r$}}}\rightarrow {{\boldmath \mbox{$r$}}}+\frac{{{\boldmath \mbox{$v$}}}}{m}t$. We can see this line as a trajectory generated by $\dot{{{\boldmath \mbox{$r$}}}}=\frac{{{\boldmath \mbox{$v$}}}}{m}$ and $\dot{{{\boldmath \mbox{$v$}}}}=0$. 
By $m$ we denote the mass of one particle and $t$ denotes the time. This particle time evolution induces the time evolution $f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})\rightarrow f({{\boldmath \mbox{$r$}}}-\frac{{{\boldmath \mbox{$v$}}}}{m}t,{{\boldmath \mbox{$v$}}})$ in one particle distribution functions. This time evolution is then generated by the vector field $$\label{ffl1} \frac{\partial f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})}{\partial t}=-\frac{\partial}{\partial {{\boldmath \mbox{$r$}}}}\left(\frac{{{\boldmath \mbox{$v$}}}}{m}f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})\right)$$ We have thus arrived at the vector field representing the first feature of particle trajectories. We turn now to the second feature, i.e. to two intersecting lines representing binary collisions. Formally, we regard this feature as two straight lines, corresponding to momenta $({{\boldmath \mbox{$v$}}}_1,{{\boldmath \mbox{$v$}}}_2)$, meeting at the position with coordinate ${{\boldmath \mbox{$r$}}}_1$ and continuing as two straight lines corresponding to momenta $({{\boldmath \mbox{$v$}}}'_1,{{\boldmath \mbox{$v$}}}'_2)$. The ingoing momenta $({{\boldmath \mbox{$v$}}}_1,{{\boldmath \mbox{$v$}}}_2)$ and the outgoing momenta $({{\boldmath \mbox{$v$}}}'_1,{{\boldmath \mbox{$v$}}}'_2)$ are related by the relations $$\begin{aligned} \label{conBE1} v_1^2+v_2^2&=&(v_1')^2+(v_2')^2\nonumber \\ {{\boldmath \mbox{$v$}}}_1+{{\boldmath \mbox{$v$}}}_2&=&{{\boldmath \mbox{$v$}}}_1'+{{\boldmath \mbox{$v$}}}_2'\end{aligned}$$ expressing the mechanics of the collision. More details about the collision mechanics (that would make the relation between the ingoing and outgoing momenta one-to-one) are ignored and are not a part of the second feature. 
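The constraints (\[conBE1\]) indeed do not determine the outgoing momenta uniquely; the ignored microscopic detail can be parametrized, for instance, by a unit vector ${{\boldmath \mbox{$n$}}}$ as in the standard hard-sphere collision rule. A sketch (equal masses assumed; the parametrization is one common choice, not prescribed by the text) verifying that any such choice satisfies (\[conBE1\]):

```python
import numpy as np

rng = np.random.default_rng(0)

def collide(v1, v2, n):
    """Outgoing velocities of an elastic binary collision, parametrized by a
    unit vector n (the microscopic detail ignored in the text); equal masses."""
    dv = np.dot(v1 - v2, n) * n
    return v1 - dv, v2 + dv

v1, v2 = rng.normal(size=3), rng.normal(size=3)
n = rng.normal(size=3); n /= np.linalg.norm(n)
w1, w2 = collide(v1, v2, n)

# The constraints (conBE1): momentum and kinetic energy are conserved,
# whatever the unit vector n is.
assert np.allclose(w1 + w2, v1 + v2)                      # momentum
assert np.isclose(w1 @ w1 + w2 @ w2, v1 @ v1 + v2 @ v2)   # energy
```

Averaging over the unparametrized detail ${{\boldmath \mbox{$n$}}}$ is exactly what produces the gain-loss balance discussed next.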
In terms of one particle distribution functions we express it therefore as the time evolution generated by the gain-loss balance, or in other words, by considering $({{\boldmath \mbox{$v$}}}'_1,{{\boldmath \mbox{$v$}}}'_2)\leftrightarrow({{\boldmath \mbox{$v$}}}_1,{{\boldmath \mbox{$v$}}}_2)$ as a chemical reaction obeying the constraint (\[conBE1\]). We shall see in Section \[EX2\] that the vector field generating such a gain-loss balance is given by $$\begin{aligned} \label{intlin1} \frac{\partial f({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)}{\partial t}&=&-\Xi^{(BE)}_{f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)}\nonumber \\ &&=\int d2\int d1'\int d2'\widetilde{W}^{(BE)}(f(1')f(2')-f(1)f(2))\end{aligned}$$ where $\Xi^{(BE)}$ is the dissipation potential, $\widetilde{W}^{(BE)}$ is a quantity appearing in it (see details in Section \[EX2\] below), and $f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)$ is a conjugate of $f({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)$ with respect to an entropy $S^{(BE)}(f)$ (i.e. $f^*=S^{(BE)}_f$). With a particular choice of these quantities, the right hand side of (\[intlin1\]) becomes the classical Boltzmann collision operator (see Section \[EX2\]). Both features of the particle phase portrait $\mathcal{P}^{MICRO}$ are thus expressed in the time evolution of one particle distribution functions as the sum of the vector fields (\[ffl1\]) and (\[intlin1\]). The kinetic equation that we are obtaining in this way is the Boltzmann kinetic equation. The nonclassical formulation in which the Boltzmann equation emerges from our derivation has several advantages. One of them is that the H-theorem (i.e. $\dot{S}^{(BE)}\geq 0$) is in it manifestly visible. 
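A space-homogeneous caricature of the gain-loss structure of (\[intlin1\]) makes the H-theorem visible numerically: four discrete "velocities" $a,b,c,d$ with a single collision channel $(a,b)\leftrightarrow(c,d)$, assumed energy-compatible, and a constant rate (all concrete choices below are hypothetical):

```python
import numpy as np

# Discrete caricature of the gain-loss collision term (intlin1):
# one channel (a,b) <-> (c,d) with constant rate W (hypothetical choices).
W, dt = 1.0, 1e-3
f = np.array([0.4, 0.3, 0.2, 0.1])        # occupation numbers of a, b, c, d

def entropy(f):
    return -np.sum(f * np.log(f))

S_prev = entropy(f)
for _ in range(20000):
    R = W * (f[2] * f[3] - f[0] * f[1])   # gain-loss balance of the channel
    f = f + dt * np.array([R, R, -R, -R])
    S = entropy(f)
    assert S >= S_prev - 1e-12            # H-theorem: entropy never decreases
    S_prev = S

# Detailed balance at the collisional equilibrium: f_a f_b = f_c f_d
assert abs(f[0] * f[1] - f[2] * f[3]) < 1e-8
```

The entropy production per channel is $W(f_cf_d-f_af_b)\ln(f_cf_d/f_af_b)\geq 0$, the discrete analogue of $\dot{S}^{(BE)}\geq 0$.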
Another advantage is that we have in fact derived a generalization of the Boltzmann equation since we do not have to choose $S^{(BE)}(f)=-k_B\int d{{\boldmath \mbox{$r$}}}\int d{{\boldmath \mbox{$v$}}}f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}}) \ln f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})$. The entropy $S^{(BE)}(f)$ can be a more general potential $\int d{{\boldmath \mbox{$r$}}}\int d{{\boldmath \mbox{$v$}}}c(f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}}))$, where $c$ is an unspecified but sufficiently regular concave function $\mathbb{R}\rightarrow\mathbb{R}$. With such a more general entropy $S^{(BE)}(f)$ we still have the H-theorem $\dot{S}^{(BE)}\geq 0$ since, as we convince ourselves by a direct verification, the time evolution generated by (\[ffl1\]) does not change $S^{(BE)}(f)$. In addition, we also see that the step in the above introduction of the Boltzmann equation where the time reversibility breaks and the dissipation emerges is our ignorance of details of trajectories during binary collisions. ### Chapman-Enskog method {#CEmeth} Let $\mathcal{P}^{MESO}$ and $\mathcal{P}^{meso}$ be the phase portraits corresponding to the *MESO* dynamics (\[Fdyn\]) and the *meso* dynamics (\[Gdyn\]) respectively. Our problem is to recognize $\mathcal{P}^{meso}$ as a pattern inside of $\mathcal{P}^{MESO}$. While this viewpoint of the $MESO \rightarrow meso$ reduction does provide a good intuitive understanding of the process, it does not provide a practical way to proceed. The archetype method offering such a procedure is the Chapman-Enskog method (see e.g. [@GKHilb]). This method was originally developed for reducing the Boltzmann kinetic equation to the Navier-Stokes-Fourier hydrodynamic equations but it can be applied to any $MESO \rightarrow meso$ passage. 
The pattern recognition process becomes, in the context of the Chapman-Enskog method, the process of identifying a manifold $\mathcal{M}\subset M$ that satisfies the following two requirements: (i) $\mathcal{M}$ is in one-to-one relation to $N$, (ii) $\mathcal{M}$ is quasi-invariant (i.e. $\mathcal{M}$ is “as much as possible” invariant with respect to the *MESO* time evolution taking place on $M$). We shall sketch below the geometrical essence of the method in three steps.\ *Chapman-Enskog, Step 1* By using an insight into the physics involved in the *MESO* dynamics, we write the *MESO* vector field $G$ as a sum of $G_0$, playing the dominant role, and $G_1$ that is seen as a perturbation (i.e. we write $G=G_0+G_1$). The splitting of the vector field $G$ then induces a splitting of the search for the quasi-invariant manifold $\mathcal{M}\subset M$ into two stages. A first approximation $\mathcal{M}^{(0)}$ of $\mathcal{M}$ (called the zero Chapman-Enskog approximation) is identified in the first stage (the second step in the Chapman-Enskog method) by neglecting $G_1$ (i.e. we consider $G=G_0$). In the second stage (the third step in the Chapman-Enskog method) the manifold $\mathcal{M}^{(0)}$ is deformed into $\mathcal{M}^{(1)}$ that is called a first Chapman-Enskog approximation of $\mathcal{M}$. In the case of (\[Fdyn\]) being the Boltzmann kinetic equation, $G_0$ is the Boltzmann collision term (since the pieces of the trajectories involving binary collisions are seen as being dominant in the phase portrait $\mathcal{P}^{MESO}$).\ *Chapman-Enskog, Step 2* In this step we identify $\mathcal{M}^{(0)}$. We define it as a manifold on which the dominant vector field $G_0$ disappears (i.e. we solve the equation $[G_0]_{\mathcal{M}^{(0)}}=0$). The quantities that parametrize $\mathcal{M}^{(0)}$ are then chosen to be the *meso* state variables $y$ expressed in terms of the *MESO* state variables $x$. We thus obtain a mapping $\Pi:M\rightarrow N; x\mapsto y$. 
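For the Boltzmann instance of Step 2, the mapping $\Pi$ sends $f$ to the hydrodynamic fields and $\mathcal{M}^{(0)}$ consists of (local) Maxwellians. A one-dimensional, space-homogeneous sketch (grid, units $k_B=m=1$ and the non-Maxwellian test distribution are illustrative choices):

```python
import numpy as np

# Sketch of the mapping Pi: f -> hydrodynamic fields (1D velocity space).
v = np.linspace(-10, 10, 4001)
dv = v[1] - v[0]

def moments(f):                           # Pi : M -> N
    rho = np.sum(f) * dv                  # density
    u = np.sum(v * f) * dv / rho          # mean velocity
    T = np.sum((v - u)**2 * f) * dv / rho # temperature (k_B = m = 1)
    return rho, u, T

def maxwellian(rho, u, T):                # an element of M^(0)
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u)**2 / (2 * T))

f = np.exp(-np.abs(v - 1.0))              # some non-Maxwellian distribution
rho, u, T = moments(f)
f0 = maxwellian(rho, u, T)

# Pi is one-to-one on M^(0): the Maxwellian reproduces the same fields y.
assert np.allclose(moments(f0), (rho, u, T), rtol=1e-6)
```

This is exactly the one-to-one correspondence between $\mathcal{M}^{(0)}$ and $N$ required in (i).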
This mapping subsequently induces a one-to-one mapping $\Pi^{(\mathcal{M})}:\mathcal{M}^{(0)}\rightarrow N$. Next, the vector field $[G]_{\mathcal{M}^{(0)}}$ is projected (by the projection induced by $\Pi^{(\mathcal{M})}$ ) on the tangent space of $\mathcal{M}^{(0)}$. We denote the projected vector field by the symbol $G^{(0)}$. Finally, the vector field $G^{(0)}$ is projected (again by the projection induced by $\Pi^{(\mathcal{M})}$) on the tangent space of $N$. This is then the vector field on $N$, denoted by $g^{(0)}$ and called a zero Chapman-Enskog approximation of $G$ on $N$. In the case of the *MESO* dynamics (\[Fdyn\]) being the Boltzmann kinetic theory, the mapping $\Pi$ is the standard mapping from one particle distribution functions to hydrodynamic fields, $\mathcal{M}^{(0)}$ is the manifold whose elements are local Maxwell distribution functions, and $g^{(0)}$ is the right hand side of the Euler (reversible and nondissipative) hydrodynamic equations.\ *Chapman-Enskog, Step 3* The first Chapman-Enskog approximation $\mathcal{M}^{(1)}$ of $\mathcal{M}$ is found in this step. We note that the manifold $\mathcal{M}^{(0)}$ is not an invariant manifold since the vectors $[G]_{\mathcal{M}^{(0)}}$ do not lie in the tangent spaces attached to $x_0\in \mathcal{M}^{(0)}$. We want to make it more invariant. We therefore deform $\mathcal{M}^{(0)}$ into $\mathcal{M}^{(1)}$ $( x_0\mapsto x_1)$ in such a way that $G^{(1)}\equiv [G]_{\mathcal{M}^{(0)}}$, where $G^{(1)}$ is the vector field $G$ attached to the points $x_1$ and projected on $\mathcal{M}^{(1)}$. We note that the manifold $\mathcal{M}^{(1)}$ is still not invariant (since, in general, $[G]_{\mathcal{M}^{(1)}}\neq [G]_{\mathcal{M}^{(0)}}$) but it is expected to be “more” invariant than $\mathcal{M}^{(0)}$ since the vector field $G_1$ is just a perturbation of $G_0$ (and consequently the deformation $\mathcal{M}^{(0)}\rightarrow \mathcal{M}^{(1)}$ is small). 
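The three steps can be sketched on a toy two-variable fast-slow system playing the role of (\[Fdyn\]); the system, the small parameter and the correction $c(y)$ below are invented for illustration (the correction follows from the standard quasi-invariance expansion):

```python
# Toy MESO system with fast-slow splitting G = G0 + G1:
#   dy/dt = -y + x           (perturbation G1, slow)
#   dx/dt = -(x - y**2)/eps  (dominant part G0, fast)
# Step 2: M^(0) is where G0 vanishes: x = y**2.
# Step 3: deforming, x = y**2 + eps*c(y); the quasi-invariance condition
# gives c(y) = 2*y**2 - 2*y**3 (first Chapman-Enskog correction).
eps, dt = 1e-3, 1e-5
y, x = 0.5, 3.0                       # generic initial state off the manifold
for _ in range(200000):               # integrate well past the fast transient
    y, x = y + dt * (-y + x), x - dt * (x - y**2) / eps

err0 = abs(x - y**2)                                  # distance to M^(0)
err1 = abs(x - (y**2 + eps * (2 * y**2 - 2 * y**3)))  # distance to M^(1)
assert err1 < err0 / 10               # M^(1) is "more invariant" than M^(0)
```

Neither manifold is exactly invariant, but the trajectory hugs $\mathcal{M}^{(1)}$ an order of magnitude more closely, which is the content of the deformation $\mathcal{M}^{(0)}\rightarrow \mathcal{M}^{(1)}$.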
The vector field $g^{(1)}$ projected on $N$ is the first Chapman-Enskog approximation of $G$. In the case of (\[Fdyn\]) being the Boltzmann equation, the vector field $g^{(1)}$ is the right hand side of the Navier-Stokes-Fourier (irreversible and dissipative) hydrodynamic equations. If both *MESO* dynamics (\[Fdyn\]) and *meso* dynamics (\[Gdyn\]) are known and well established (i.e. they both have emerged from direct derivations involving *MESO* measurements and *meso* measurements respectively) then the Chapman-Enskog type derivation of (\[Gdyn\]) from (\[Fdyn\]) brings additional information. First, the domain of applicability of (\[Gdyn\]) inside of the domain of applicability of (\[Fdyn\]) is identified, and second, the mapping $\xi \mapsto \zeta$ emerges (i.e. the material parameters $\zeta$ with which individual features of the systems under consideration are expressed on the *meso level* become functions of the material parameters $\xi$ used for the same purpose on the *MESO level*). Examples of applications of the Chapman-Enskog method in many types of mesoscopic dynamics (including for instance the dynamics describing chemical reactions) can be found in [@Gorbbook], [@GKHilb]. We make now three additional comments about the Chapman-Enskog method. *Comment 1* We note that there is no thermodynamics in the Chapman-Enskog method. How can we bring it in? Following the viewpoint of thermodynamics presented in previous sections, we have to turn our attention not only to the pattern (i.e. in this case to the submanifold $\mathcal{M}_1$) but also to the reducing time evolution bringing $x\in M$ to it. As we have seen, the reducing time evolution is generated by a potential so, at least, we should try to identify the potential. As for the manifold $\mathcal{M}_0$, the reducing time evolution is the Boltzmann equation without $G_1$ (i.e. Eq.(\[intlin1\])) and the potential is obviously the Boltzmann entropy. 
Indeed, $\mathcal{M}_0$ can be obtained by MaxEnt reduction of the Boltzmann entropy. The manifold on which $S^{(BE)}(f)$ reaches its maximum subject to constraints representing the fluid mechanics fields expressed in terms of the one particle distribution function is exactly the submanifold $\mathcal{M}_0$. For example in [@Gorbbook], this is the way the submanifold $\mathcal{M}_0$ is introduced. The second step in the Chapman-Enskog method can thus be seen as a part of the investigation of reducing dynamics. Can we follow this path and interpret thermodynamically also the third step in the Chapman-Enskog method (i.e. the deformation of $\mathcal{M}_0$ to $\mathcal{M}_1$)? In order to make such an interpretation, we look for a potential $S^{(BE)}_1(f)$ that satisfies the following properties: (i) $S^{(BE)}_1(f)$ is a deformation of $S^{(BE)}(f)$, (ii) its maximum is reached at $\mathcal{M}_1$, and (iii) it generates the time evolution in which $\mathcal{M}_1$ is approached (similarly as $S^{(BE)}(f)$ generates the time evolution in which $\mathcal{M}_0$ is approached). The partial results related to this problem that are reported in Section 4.2 of [@Grmadv] indicate that the potential obtained in this way is indeed the CR-entropy generating the reducing time evolution that is involved in the passage from kinetic theory to fluid mechanics. *Comment 2* For the kinetic equation (\[intlin1\]) the submanifold $\mathcal{M}_0$ is an invariant manifold. For the full Boltzmann kinetic equation (i.e. the kinetic equation combining (\[ffl1\]) and (\[intlin1\])) neither $\mathcal{M}_0$ nor $\mathcal{M}_1$ are invariant manifolds. In fact, as it has been shown by Grad in [@Grad], and Desvillettes and Villani in [@Vill], the only invariant manifold is the manifold $\mathcal{M}_{eq}$ of equilibrium states (i.e. time independent solutions of the full Boltzmann equation). Both manifolds $\mathcal{M}_0$ and $\mathcal{M}_1$ are quasi-invariant manifolds. 
Grad, Desvillettes and Villani have proven that solutions to the full Boltzmann equation may come very close to $\mathcal{M}_0$ and $\mathcal{M}_1$ (that is why we can call these manifolds quasi-invariant manifolds) but they never fall on either of them. They only fall eventually on the submanifold $\mathcal{M}_{eq}$ that is a submanifold of both $\mathcal{M}_0$ and $\mathcal{M}_1$. *Comment 3* Reduction to the submanifold $\mathcal{M}_0$ results in the Euler fluid mechanics equations and reduction to its deformation $\mathcal{M}_1$ results in the Navier-Stokes-Fourier fluid mechanics equations. Both these fluid mechanics equations are particular realizations of (\[GENERIC\]) and thus both are physically meaningful. In principle, it is possible to continue the deformations of $\mathcal{M}_0$. Similarly as we have made the deformation $\mathcal{M}_0\rightarrow\mathcal{M}_1$ we can make the next deformation $\mathcal{M}_1\rightarrow \mathcal{M}_2$. In other words, we can proceed to the second Chapman-Enskog approximation. Will the resulting reduced time evolution again be physically meaningful (in the sense that it will be a particular realization of (\[GENERIC\]))? Experience collected in the investigations of higher order Chapman-Enskog approximations (e.g. the investigation of the linearized Boltzmann equation in [@GRHELV]) seems to indicate that the answer to this question is negative. MICRO $\rightarrow$ equilibrium reducing dynamics {#EX2} ------------------------------------------------- Many illustrations of the GENERIC equation (\[GENERIC\]) in kinetic theory, fluid mechanics, and solid mechanics of simple and complex fluids can be found in [@Obook], [@Grmadv] and references cited therein. In this section we develop an additional new illustration. We return to the Gibbs equilibrium statistical mechanics presented in Section \[RDMee\] and ask the following question. 
What is the GENERIC time evolution of the N-particle distribution function $f_N(1,...,N)$ (i.e. the time evolution governed by (\[GENERIC\]) with $x=f_N(1,...,N)$) that realizes the maximization of the Gibbs entropy (see (\[microftr\])) postulated in MaxEnt in point (iv) of the *MICRO $\rightarrow$ equilibrium Postulate II*? We first introduce one such time evolution equation and then discuss its possible nonuniqueness. We use hereafter a shorthand notation $1\equiv ({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1), 2\equiv ({{\boldmath \mbox{$r$}}}_2,{{\boldmath \mbox{$v$}}}_2),...$. We look for a dissipation potential $\Xi^{(N)}$ which brings N-particle distribution functions $f_N(1,...,N)$ to the Gibbs distribution $(f_N)_{eq}$ (i.e. to $f_N$ for which the thermodynamic potential $\Phi$ given in (\[Phi\]) reaches its minimum). In other words, we look for $\Xi^{(N)}$ for which solutions to $$\label{Nkin} \frac{\partial f_N(1,2,...,N)}{\partial t}=-\Xi^{(N)}_{f^*_N(1,2,...,N)}$$ approach, as $t\rightarrow\infty$, $(f_N)_{eq}$. Inspired by the dissipation potential arising in the Guldberg-Waage chemical kinetics (see [@Grmchem]) and the dissipation potential generating the Boltzmann collision integral [@GrB], we propose $$\begin{aligned} \label{XiN} \Xi^{(N)}&=&\int d1...\int dN\int d1'...\int dN'W^{(N)}(f_N,1,...,N,1',...,N')\nonumber \\ &&\times\left(e^{\frac{1}{2}X^{(N)}}+e^{-\frac{1}{2}X^{(N)}}-2\right)\end{aligned}$$ where the thermodynamic forces are given by $$\label{Nforce} X^{(N)}=\frac{1}{k_B}(f^*_N(1,2,...,N)-f^*_N(1',2',...,N')),$$ $f^*_N=S_{f_N}$, $S(f_N)$ is the Gibbs entropy (\[microftr\]), $$\label{12} (1,2,...,N)\rightleftarrows (1',2',...,N')$$ are one-to-one transformations in which the microscopic energy $E^{MICRO}(1,...,N)$ remains constant, i.e. 
$$\label{12con} E^{MICRO}(1,2,...,N)=E^{MICRO}(1',2',...,N'),$$ and $W^{(N)}\geq 0$ are nonnegative material parameters that are different from zero ($W^{(N)}\neq 0$) only if the constraint (\[12con\]) holds and $W^{(N)}$ is symmetric with respect to $(1,2,...,N)\rightarrow (1',2',...,N')$. In the Guldberg-Waage chemical kinetics (see [@Grmchem]), the transformation (\[12\]) is interpreted as a chemical reaction. We shall demonstrate below that the Boltzmann collision operator is the right hand side of (\[Nkin\]) with $N=1$, dissipative forces $X^{(1)}$ given in (\[X1\]) and the transformation (\[12\]) given in (\[binreacx\]) and (\[conBE\]). Before proving that solutions to (\[Nkin\]) approach $(f_N)_{eq}$, we write the time evolution equation (\[Nkin\]) explicitly. With the Gibbs entropy (\[microftr\]), Eq.(\[Nkin\]) takes the form $$\begin{aligned} \label{Nkinexp} \frac{\partial f_N(1,2,...,N)}{\partial t}&=&-\Xi^{(N)}_{f^*_N(1,2,...,N)}\nonumber \\ &&=\int d1'...\int dN'\widetilde{W}^{(N)} (f_N(1',...,N')-f_N(1,...,N))\nonumber \\\end{aligned}$$ where $\widetilde{W}^{(N)}=\frac{W^{(N)}}{2k_B(f_N(1,...,N)f_N(1',...,N'))^{\frac{1}{2}}}$. The Legendre transformation $\Theta^{(N)}(J)$ of $\Xi^{(N)}(X)$ is $$\begin{aligned} \label{Theta} \Theta^{(N)}(J)&=&2\int d1...\int dN\int d1'...\int dN' W\nonumber \\ &&\times\left[\hat{J}\ln\left(\hat{J}+\sqrt{1+(\hat{J})^2}\right)-\left(\sqrt{1+(\hat{J})^2} -1\right)\right]\end{aligned}$$ where $\hat{J}=\frac{J}{W}$. Now we prove that solutions to (\[Nkin\]) (or (\[Nkinexp\])) approach, as $t\rightarrow \infty$, the Gibbs distribution $(f_N)_{eq}$. First, we see that the right hand side of (\[Nkin\]) equals zero if $X=0$. In view of (\[12con\]), equation $X=0$ is solved by $f_N=(f_N)_{eq}$. 
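The closed form (\[Theta\]) of the Legendre transformation can be verified numerically. For a single scalar force (a drastic simplification of (\[XiN\]), for illustration only), $\Xi(X)=W(e^{X/2}+e^{-X/2}-2)$ and a brute-force maximization of $JX-\Xi(X)$ reproduces the integrand of (\[Theta\]):

```python
import numpy as np

W = 1.3                                  # illustrative material parameter
def Xi(X):                               # cosh-type dissipation potential (XiN)
    return W * (np.exp(X / 2) + np.exp(-X / 2) - 2)

def Theta(J):                            # claimed Legendre transform (Theta)
    Jh = J / W
    return 2 * W * (Jh * np.log(Jh + np.sqrt(1 + Jh**2))
                    - (np.sqrt(1 + Jh**2) - 1))

J = 0.7
Xs = np.linspace(-10, 10, 200_001)       # brute-force sup over the force X
assert np.isclose(np.max(J * Xs - Xi(Xs)), Theta(J), atol=1e-6)
```

The supremum is attained at $X=2\,\mathrm{arcsinh}(J/W)$, i.e. at the flux-force relation $J=\Xi_X=W\sinh(X/2)$, which reduces to the linear Onsager form $J\approx WX/2$ for small forces.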
Since the thermodynamic potential $\Phi$ plays the role of the Lyapunov function for the approach to $(f_N)_{eq}$ (see (\[asGEN\])), we see that solutions to (\[Nkin\]) (which takes the form (\[Nkinexp\]) provided the entropy is the Gibbs entropy) approach, as $t\rightarrow \infty$, the Gibbs distribution $(f_N)_{eq}$. The time evolution governed by (\[Nkin\]) indeed brings macroscopic systems to the states investigated in the Gibbs equilibrium statistical mechanics (see Section \[RDMee\]). The dissipation potential $\Xi^{(N)}$ given in (\[XiN\]) (or equivalently its Legendre transformation $\Theta^{(N)}$ given in (\[Theta\])) can therefore be regarded as the universal CR-entropy on the *MICRO level*, just as the Gibbs entropy (\[microftr\]) is the universal entropy on the *MICRO level*. Consequently, we can find CR-entropies on *meso levels* in the same way as we find entropies on *meso levels*. We can either try to extract them from the time evolution (generating the $meso \rightarrow equilibrium$ passage in the case of entropy, or the $MESO \rightarrow meso$ passage in the case of CR-entropy), or we can attempt to reduce them (by MaxEnt) from the universally valid expressions (the Gibbs entropy (\[microftr\]) in the case of entropy and the CR-entropy (\[XiN\]) in the case of CR-entropy). We now show that the dissipation potential (\[XiN\]) can be seen as a natural extension of the dissipation potential generating the Boltzmann collision operator arising in one particle kinetic theory. At the end of this section we then investigate other possible vector fields that can describe the approach to equilibrium. The time evolution equation (\[Nkin\]) has a well-defined meaning for any $N\geq 2$. For $N=1$, i.e. for the level of one particle kinetic theory, we cannot make the transformation (\[12\]) and we cannot therefore directly use (\[Nkin\]). In order to be able to introduce transformations of the type (\[12\]) in one particle kinetic theory, we need a partner.
We shall denote the coordinates of the particle by $({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)$ and of its partner by $({{\boldmath \mbox{$r$}}}_2,{{\boldmath \mbox{$v$}}}_2)$. From the physical point of view, we regard the transformation (\[12\]) in the context of one particle kinetic theory as a binary collision between the particle and its partner. We therefore write (\[12\]) in the form $$\label{binreacx} ({{\boldmath \mbox{$v$}}}_1,{{\boldmath \mbox{$v$}}}_2)\rightleftarrows ({{\boldmath \mbox{$v$}}}'_1,{{\boldmath \mbox{$v$}}}'_2),$$ with the constraint $$\begin{aligned} \label{conBE} v_1^2+v_2^2&=&(v_1')^2+(v_2')^2\nonumber \\ {{\boldmath \mbox{$v$}}}_1+{{\boldmath \mbox{$v$}}}_2&=&{{\boldmath \mbox{$v$}}}_1'+{{\boldmath \mbox{$v$}}}_2'\end{aligned}$$ replacing the constraint (\[12con\]). The binary collisions are assumed to take place at a fixed point with the spatial coordinate ${{\boldmath \mbox{$r$}}}_1$. The constraints (\[conBE\]) express the conservation of energy and momentum in the collisions.
With the new transformation (\[binreacx\]) we then replace the thermodynamic force (\[Nforce\]) by $$\label{X1} X^{(1)}=\frac{1}{k_B}(f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)+f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_2)-f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}'_1)-f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}'_2)).$$ The time evolution equation (\[Nkin\]) now takes the form $$\begin{aligned} \label{intlin} \frac{\partial f({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)}{\partial t}&=&-\Xi^{(1)}_{f^*({{\boldmath \mbox{$r$}}}_1,{{\boldmath \mbox{$v$}}}_1)}\nonumber \\ &&=\int d2\int d1'\int d2'\widetilde{W}^{(1)}(f(1')f(2')-f(1)f(2))\end{aligned}$$ where $\widetilde{W}^{(1)}=\frac{W^{(1)}}{2k_B(f(1)f(2)f(1')f(2'))^{\frac{1}{2}}}$, and $W^{(1)}$ is symmetric with respect to the transformations ${{\boldmath \mbox{$v$}}}_1 \leftrightarrows {{\boldmath \mbox{$v$}}}_2, \, {{\boldmath \mbox{$v$}}}'_1 \leftrightarrows {{\boldmath \mbox{$v$}}}'_2$ and $({{\boldmath \mbox{$v$}}}_1,{{\boldmath \mbox{$v$}}}_2)\rightleftarrows ({{\boldmath \mbox{$v$}}}'_1,{{\boldmath \mbox{$v$}}}'_2)$. Equation (\[intlin\]) is the Boltzmann kinetic equation without the free flow term (i.e. the right hand side of (\[intlin\]) is the Boltzmann collision operator; see more in [@GrPhysD], [@Grmadv]). Since the Boltzmann collision dissipation appears to be essentially a special case of the dissipation introduced in (\[Nkin\]), we can indeed regard the dissipation potential $\Xi^{(N)}$ in (\[XiN\]) as a natural extension of the dissipation potential generating the classical Boltzmann binary collision dissipation. There is however an interesting difference between the Boltzmann dissipation in (\[intlin\]) and the dissipation appearing in (\[Nkin\]). The former is weaker than the latter since the Boltzmann dissipation drives solutions only to local equilibrium while the dissipation appearing in (\[Nkin\]) drives solutions to the total equilibrium.
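To see the binary-collision dissipation at work, here is a minimal discrete-velocity sketch at a single spatial point (a Broadwell-type toy model; the four velocities, the rate $\sigma$, the initial data and the step size are all hypothetical). The only collision retained, $((1,0),(-1,0))\rightleftarrows((0,1),(0,-1))$, is one permitted by the constraints (\[conBE\]).

```python
import numpy as np

# Broadwell-type toy model: four unit velocities (+-1,0), (0,+-1) at one point.
# The single allowed collision conserves momentum (zero) and energy, cf. (conBE).
f = np.array([0.4, 0.3, 0.2, 0.1])     # occupations f1, f2, f3, f4
sigma, dt = 1.0, 0.01                  # hypothetical collision rate and step
H = [float(np.sum(f * np.log(f)))]     # H-function (negative of the entropy)
for _ in range(5000):
    gain = sigma * (f[2] * f[3] - f[0] * f[1])   # quadratic in f, as in (intlin)
    f = f + dt * np.array([gain, gain, -gain, -gain])
    H.append(float(np.sum(f * np.log(f))))

# H decreases monotonically and the detailed-balance (local equilibrium)
# condition f1*f2 = f3*f4 is reached, with total mass conserved
```

The run relaxes to the local-equilibrium manifold $f_1f_2=f_3f_4$; it is the restoration of the free-flow term that would further select the total equilibrium, as discussed next.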
In general, we say that the dissipation generated by the vector field $\left(vector\,field\right)_1$ is stronger than the dissipation generated by the vector field $\left(vector\,field\right)_2$ if the inequality $\dot{\Phi}\leq 0$ holds for both vector fields but $\mathcal{M}_1\subset \mathcal{M}_2$. By $\mathcal{M}_i$ we denote the manifold whose elements are states approached as $t\rightarrow\infty$ in the time evolution generated by the vector field $\left(vector\,field\right)_i$; $i=1,2$. Let us now consider two vector fields: one is given by the right hand side of (\[intlin\]) and the other by the right hand side of $$\label{ffl} \frac{\partial f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}})}{\partial t}=-\frac{\partial}{\partial {{\boldmath \mbox{$r$}}}}({{\boldmath \mbox{$v$}}}f({{\boldmath \mbox{$r$}}},{{\boldmath \mbox{$v$}}}))$$ From the physical point of view, (\[ffl\]) is the one-particle kinetic equation representing a gas of completely noninteracting particles with no collisions. It is a (continuity) Liouville equation corresponding to the particle dynamics $\dot{{{\boldmath \mbox{$r$}}}}={{\boldmath \mbox{$v$}}}; \,\dot{{{\boldmath \mbox{$v$}}}}=0$. We note that the vector field (\[ffl\]) is nondissipative (i.e. Eq.(\[ffl\]) implies $\dot{\Phi}=0$) while, as we have shown above, the vector field (\[intlin\]) is dissipative (i.e. Eq.(\[intlin\]) implies $\dot{\Phi}\leq 0$) and the manifold $\mathcal{M}_{leq}$ corresponding to it is the manifold composed of local Maxwell distribution functions. Grad in [@Grad], and Desvillettes and Villani (in full generality) in [@Vill], have proven the following result. The manifold $\mathcal{M}_{teq}$ corresponding to the sum of the vector fields (\[ffl\]) and (\[intlin\]) (i.e. to the vector field appearing in the Boltzmann kinetic equation) is the manifold composed of total Maxwell distribution functions.
Since $\mathcal{M}_{teq}\subset \mathcal{M}_{leq}$, we see that we can make the dissipation generated by a vector field stronger simply by adding to it an appropriate nondissipative vector field. This result, if transposed to the setting of N-particle dynamics for $N\geq 2$, indicates that vector fields with weaker dissipation than the vector field (\[Nkinexp\]) can possibly still drive solutions to the Gibbs equilibrium distribution function $(f_N)_{eq}$ provided the nondissipative vector field arising in the Liouville N-particle equation is added to them. What could be the vector fields that have a weaker dissipation than the vector field (\[Nkinexp\])? One way to construct them is to keep the dissipation potential (\[XiN\]), the thermodynamic force (\[Nforce\]) and the interaction (\[12\]), but to introduce stronger constraints so that solutions to $X^{(N)}=0$ form a smaller manifold. For example, we can replace the constraint (\[12con\]) with the constraint: $(1',...,N')$ is just a reordering of $(1,...,N)$. In the case of $N=2$, this constraint becomes $(1',2')=(2,1)$. For such a dissipative vector field the manifold $\mathcal{M}$ of states approached as $t\rightarrow\infty$ is the manifold of symmetric distribution functions. The time evolution generated by this vector field drives distribution functions to their symmetric parts; it performs the symmetrization. The following question then arises. Is this symmetrization dissipation, if combined with the nondissipative Liouville vector field, strong enough to drive solutions to the Gibbs equilibrium distribution $(f_N)_{eq}$? If the answer is negative (as is probably the case), the next question is then to identify the dissipation potential with the weakest possible dissipation that, if combined with the Liouville vector field, does drive solutions to $(f_N)_{eq}$. In this paper we leave these questions unanswered.
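The symmetrization dynamics just described is easy to exhibit numerically. In the sketch below (a toy $N=2$ model with a $4\times4$ discrete state space, $W\equiv 1$ and hypothetical initial data), the constraint $(1',2')=(2,1)$ reduces the dissipation to an exchange between $f(1,2)$ and $f(2,1)$.

```python
import numpy as np

# N = 2 with the reordering constraint (1',2') = (2,1): the dissipation only
# exchanges f(i,j) with f(j,i), so it relaxes f to its symmetric part while
# conserving every pair sum f(i,j) + f(j,i).
rng = np.random.default_rng(1)
f0 = 1.0 + rng.random((4, 4))   # strictly positive initial distribution
f0 /= f0.sum()
f = f0.copy()
kB, dt = 1.0, 0.01
for _ in range(5000):
    Wt = 1.0 / (2.0 * kB * np.sqrt(f * f.T))
    f = f + dt * Wt * (f.T - f)

# the limit is the symmetric part (f0 + f0.T) / 2 of the initial distribution
```

The manifold $\mathcal{M}$ reached here is visibly much larger than the Gibbs equilibrium: every symmetric distribution is a fixed point, which is precisely why this dissipation is weaker than that of (\[Nkinexp\]).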
Before leaving this section we note that another interesting variation of the constraint (\[conBE\]) arises in the investigation of granular gases (i.e. gases composed of particles of macroscopic size). In this case the collisions are inelastic, so that the first line in (\[conBE\]) is missing. A granular gas is an interesting example of an externally driven macroscopic system that can naturally be investigated on the level of kinetic theory (see e.g. [@granular]). Its thermodynamic investigation could then be based on the CR-thermodynamic potential with the dissipation potential $\Xi^{(1)}$ given in (\[XiN\]), and with two imposed forces: the thermodynamic force $X^{(1)}$ (see (\[X1\])) representing the inelastic collisions, and a mechanical force $X^{(mech)}$ representing for example shaking. equilibrium $\rightarrow$ equilibrium (imposed temperature) {#EX3} ----------------------------------------------------------- The external force in this example is the energy exchange with a thermal bath that is kept at a constant temperature $\mathfrak{T}$. In this case the externally driven macroscopic system evolves to an equilibrium state. This means that in this investigation of the $MESO \rightarrow meso$ passage the *meso level* is the equilibrium level. The difference between the $MESO \rightarrow equilibrium$ passage investigated in Section \[RDMMde\] and the $MESO \rightarrow equilibrium$ passage investigated in this example is that the state variables $y_{eq}$ at the equilibrium level are not $(E,V,N)$ (as in Section \[ET\]) but $(E^*,V,N)$, where $E^*=S_E=\frac{1}{T}$ is the conjugate of $E$. The CR thermodynamic potential $\Psi$ driving the evolution is, in this example, the thermodynamic potential (\[Phi1\]) with $T=\mathfrak{T}$, i.e. $\Phi(x,\mathfrak{T},\mu)=-S(E,V,N)+\frac{1}{\mathfrak{T}}E-\frac{\mu}{\mathfrak{T}}N$.
The CR-GENERIC equation (\[CRGENERIC\]) is, in this example, the GENERIC time evolution (\[GENERIC\]) with $T=\mathfrak{T}$ and with the degeneracies of $L$ and $\Xi$ that guarantee the mass conservation (\[consNGEN\]) but not the energy conservation (\[consEGEN\]). The resulting fundamental thermodynamic relation is the Legendre transformation $(E,N,V)\rightarrow(\frac{1}{T},N,V)$ of the fundamental thermodynamic relation $S=S(E,V,N)$ implied by the GENERIC reducing time evolution discussed in Section \[RDMMde\]. The above analysis becomes particularly interesting if we choose the *MESO level* to be the equilibrium level with state variables $(E,V,N)$. In this case of the $MESO \rightarrow meso$ passage both the *MESO* and the *meso levels* are *equilibrium levels*. They differ only in state variables: on the *MESO level* the state variables are $(E,N,V)$ and on the *meso level* $(E^*,V,N)$. The reducing time evolution in this $equilibrium \rightarrow equilibrium$ passage is the time evolution making the Legendre transformation $(E,N,V)\rightarrow(\frac{1}{T},N,V)$. The thermodynamic potential generating it is $$\label{phie} \Phi(E,N,V,\mathfrak{T})=-S(E,N,V)+\frac{1}{\mathfrak{T}}E$$ and the time evolution equation (\[CRGENERIC\]) becomes $$\label{ee} \dot{E}=-[\Xi_X(E,X)]_{X=-S_E+\frac{1}{\mathfrak{T}}}$$ that, if we choose the quadratic dissipation potential $\Xi(E,X)=\frac{1}{2}\Lambda X^2$, where $\Lambda>0$ is a material parameter, becomes $\dot{E}= -\Lambda (-S_E+\frac{1}{\mathfrak{T}})$. Cattaneo $\rightarrow$ Fourier (imposed temperature gradient) {#EX4} ------------------------------------------------------------- In this example the external force is an imposed temperature gradient. We denote it by the symbol $\nabla\frac{1}{\mathfrak{T}}$. This force prevents approach to the *equilibrium level*. The most macroscopic level (i.e.
the level with least details) on which macroscopic systems subjected to a temperature gradient can be described is the level of fluid mechanics (we shall call it hereafter the FM-level) on which the state variables are: $x=(\rho({{\boldmath \mbox{$r$}}}),{{\boldmath \mbox{$u$}}}({{\boldmath \mbox{$r$}}}),e({{\boldmath \mbox{$r$}}}))$, where ${{\boldmath \mbox{$r$}}}$ is the position vector, $\rho({{\boldmath \mbox{$r$}}})$ is the mass field (mass per unit volume at ${{\boldmath \mbox{$r$}}}$), ${{\boldmath \mbox{$u$}}}({{\boldmath \mbox{$r$}}})$ is the momentum field, and $e({{\boldmath \mbox{$r$}}})$ the energy field. In this example we shall limit ourselves only to the state variable $e({{\boldmath \mbox{$r$}}})$. All other state variables are assumed to be already at equilibrium. In this setting we now investigate the passage $MESO \rightarrow FM$. First, we recall that in the absence of the imposed temperature gradient (i.e. if $\nabla\frac{1}{\mathfrak{T}}=0$), the macroscopic systems under consideration will approach the *equilibrium level* and we can therefore consider the level of fluid mechanics (the FM-level) as the *MESO level* and investigate the passage $FM \rightarrow equilibrium$. The GENERIC equation (\[GENERIC\]) representing this passage is well known and can be found for example in [@Grmadv]. Now we switch on the external force (i.e. $\nabla\frac{1}{\mathfrak{T}}\neq 0$) and investigate the $MESO \rightarrow FM$ passage. We proceed to find the CR-GENERIC equation representing it. The state variable that evolves in the reducing time evolution is the heat flux $J^{(h)}$. The CR relation is the Fourier constitutive relation: $X^{(h)}_i=\nabla_i\frac{1}{S^{(leq)}_E}$, where $S^{(leq)}(\rho({{\boldmath \mbox{$r$}}}),{{\boldmath \mbox{$u$}}}({{\boldmath \mbox{$r$}}}),e({{\boldmath \mbox{$r$}}}))$ is the local equilibrium entropy on the *FM level*. We use hereafter the indices $i=1,2,3;\, j=1,2,3$ and the summation convention.
The CR thermodynamic potential in this example is $$\label{CRFM} \Phi^{(MFM)}(J^{(h)}; (\nabla\frac{1}{\mathfrak{T}}))=-S^{(0MFM)}(J^{(h)})+(\nabla\frac{1}{\mathfrak{T}})_iJ^{(h)}_i$$ In the CR-GENERIC equation (\[CRGENERIC\]) we neglect the Hamiltonian time evolution (i.e. we put $\mathcal{L}\equiv 0$) and, for the sake of simplicity, choose the dissipation potential $\Xi^{(FMe)}(J^{(h)},X^{(h)})=\frac{1}{2}\int d{{\boldmath \mbox{$r$}}}\,\Lambda^{(h)}X_i^{(h)}X_i^{(h)}$, where $\Lambda^{(h)}>0$ is a material parameter. With these specifications the CR-GENERIC equation (\[CRGENERIC\]) becomes $$\label{delt} \dot{J}^{(h)}_i=-\Lambda^{(h)}(-S^{(0MFM)}_{J^{(h)}_i}+(\nabla\frac{1}{\mathfrak{T}})_i)$$ The fundamental thermodynamic relation on the *FM level* implied by the *MESO level* CR-GENERIC equation (\[delt\]) is $$\label{frFM} S^{(FMM)*}(e({{\boldmath \mbox{$r$}}});(\nabla\frac{1}{\mathfrak{T}}))=\left[\Phi^{(MFM)}(J^{(h)}; (\nabla\frac{1}{\mathfrak{T}}))\right]_{S^{(0MFM)}_{J^{(h)}_i}=(\nabla\frac{1}{\mathfrak{T}})_i}$$ where $\Phi^{(MFM)}$ is the CR thermodynamic potential (\[CRFM\]). We note that Eq.(\[delt\]) is the well-known Cattaneo equation [@Cattaneo] provided the imposed external force $\nabla\frac{1}{\mathfrak{T}}$ is replaced by $\nabla\frac{1}{T}$, where $T({{\boldmath \mbox{$r$}}})$ is the local temperature. There is however an important difference between the role it plays in the extended thermodynamic theories in [@MullRugg], [@Joubook] and in this paper. In the context of our investigation it is the equation describing the approach of the Cattaneo extended fluid dynamics (playing the role of the *MESO level*) to the classical fluid mechanics with the Fourier constitutive relation (playing the role of the *meso level*). The Cattaneo time evolution driven by the CR thermodynamic potential (\[CRFM\]), implying the CR fundamental thermodynamic relation (\[frFM\]) on the level of classical fluid mechanics, describes the $MESO\rightarrow meso$ passage.
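A minimal numerical sketch of the relaxation (\[delt\]) for a single scalar flux component may be useful here; the quadratic form $S^{(0MFM)}(J)=-J^2/(2\lambda)$, the constant imposed gradient $g$ and all numerical values are illustrative assumptions, not choices made in the text.

```python
# Scalar caricature of Eq. (delt): with S0(J) = -J**2 / (2*lam) one has
# -S0'(J) = J/lam, so dJ/dt = -Lam*(J/lam + g) relaxes exponentially (with
# relaxation time lam/Lam) to the stationary value solving S0'(J) = g.
lam, Lam, g, dt = 2.0, 1.0, 0.5, 0.001
J = 0.0
for _ in range(50000):
    J += dt * (-Lam * (J / lam + g))

J_eq = -lam * g   # stationary Fourier-type constitutive relation, here -1.0
```

The stationary state is exactly the constitutive relation selected in (\[frFM\]); the time evolution only describes how the extended (Cattaneo) level forgets the independent flux variable.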
In the extended theories investigated in [@MullRugg], [@Joubook] the Cattaneo equation is the equation arising in the $MESO\rightarrow equilibrium$ passage. It is just an extra equation (governing the time evolution of the extra state variable and coupled to the other time evolution equations) in the set of extended fluid mechanics equations whose solutions are required to approach equilibrium states. This means that the physical systems under investigation in [@MullRugg], [@Joubook] are externally unforced. Thermodynamics of immiscible blends; phase inversion {#EX5} ---------------------------------------------------- In this section we apply CR-thermodynamics to immiscible blends. We recall that the extension of the classical equilibrium thermodynamics of single component macroscopic systems to multicomponent miscible blends led Gibbs to the completion of the mathematical formulation of equilibrium thermodynamics. Further extensions to immiscible blends require leaving the realm of equilibrium thermodynamics and entering CR-thermodynamics. Imposed external forces (e.g. imposed flows in the mixing process) prevent approach to equilibrium states. Moreover, extra variables addressing the morphology of the interfaces among the components are needed to characterize their states. We shall not attempt in this paper to make a systematic investigation of CR-thermodynamics of immiscible blends. We shall concentrate only on one particular problem and use thermodynamics to investigate it. The immiscible blend that we consider is composed of two immiscible fluids (component “1” and component “2”). The problem that we investigate is phase inversion. Let initially the component “1” form a continuous phase in which the component “2” is dispersed. This means that the component “2” resides inside drops encircled completely by the component “1”. Every two points in the component “1” can be joined by a line that lies completely inside the component “1”.
We shall now increase the amount of the component “2”. We anticipate that at some volume fraction $\phi_2$ of the second component the roles of the two components change: the second component becomes the continuous phase and the first component becomes the dispersed phase. At the critical state at which the change occurs both components form a continuous phase. The morphology at the critical state is called a co-continuous morphology. The problem that we want to investigate is to estimate the critical value of $\phi_2$ as a function of the properties of the components (for instance viscosity, elasticity, etc.) and of the blending conditions (i.e. the externally imposed forces). The co-continuous morphology is in particular very important in applications involving blends of polymer melts (see e.g. [@Favis]). In order to have a specific example in mind, we can think of immiscible blends of oil and water. If water is the continuous phase then the blend is milk, if oil is the continuous phase then the blend is butter. One way to approach the problem of phase inversion is by attempting to formulate a dynamical model of immiscible blends. In this paper we shall not take this route. We turn directly to thermodynamics. The concept with which we begin is the CR thermodynamic potential (\[PhiMm\]). Next, we regard phase inversion as a phase transition. In view of the comment that we made at the end of Section \[GF\], this means that at the point of phase inversion $$\label{Phinver} \Phi^{(mM)}_{1/2}=\Phi^{(mM)}_{2/1}$$ where $\Phi^{(mM)}_{i/k}$ is the CR thermodynamic potential when the component “i” forms the continuous phase and the component “k” the dispersed phase. This is the equation that answers our question. It remains now only to specify the CR thermodynamic potential $\Phi^{(mM)}$. We shall limit ourselves to the simplest specifications that nevertheless illustrate the power of CR-thermodynamics.
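Before specifying $\Phi^{(mM)}$, it may help to see how the balance (\[Phinver\]) fixes the critical composition in practice. The sketch below uses the simple entropy-production and elastic-work forms adopted in the next paragraph (cf. (\[P1\]) and (\[W\])), with purely hypothetical numerical values, and checks that the resulting closed-form ratio (\[result\]) indeed equalizes the two CR potentials.

```python
# Hypothetical material data: shape factors, viscosities, elastic constants,
# deformation displacements, shear rate and temperature (all made up).
a1, a2 = 1.0, 1.2
eta1, eta2 = 3.0, 1.0
H1, D1, H2, D2 = 1.0, 0.2, 2.0, 0.3
gdot, T0 = 10.0, 300.0

# closed-form critical ratio phi1/phi2, Eq. (result)
r = (a2 * eta1 * gdot + H2 * D2**2 / T0) / (a1 * eta2 * gdot + H1 * D1**2 / T0)
phi2 = 1.0 / (1.0 + r)
phi1 = 1.0 - phi2

# the two CR potentials Phi_{1/2} and Phi_{2/1} of Eq. (Phinver), built from
# S^{(0mM)} and W as in (P1) and (W), coincide at this composition
Phi12 = -a2 * phi2 * eta1 * gdot**2 + phi1 * H1 * D1**2 * gdot / T0
Phi21 = -a1 * phi1 * eta2 * gdot**2 + phi2 * H2 * D2**2 * gdot / T0
```

In the viscosity-dominated limit the ratio reduces to $\alpha_2\eta_1/\alpha_1\eta_2$, and with $\alpha_1=\alpha_2=1$ to the empirical $\eta_1/\eta_2$.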
Following (\[Psi\]), we write $$\label{Pimbl} \Phi^{(mM)}= -S^{(0mM)}+\frac{1}{T_0}W$$ where we consider $S^{(0mM)}$ to be simply the local entropy production due to the presence of the dispersed phase, $W$ the work involved in elastic deformations of the dispersed droplets, and $T_0$ the temperature of the blend. Our problem now is to estimate $S^{(0mM)}$ and $W$ in the mixture in which $\phi_1$ and $\phi_2$ are not too different. Let the component “1”, forming the continuous phase, be a fluid of viscosity $\eta_1$. The main contribution to the entropy production $S^{(0mM)}_{1/2}$ comes from the flow in the continuous phase “1”. $S^{(0mM)}_{1/2}$ is thus $\alpha_2\eta_1\phi_2 \dot{\gamma}^2$, where $\alpha_2$ is a parameter depending on the shape of the inclusion and $\dot{\gamma}$ is the absolute value of the shear rate. The main contribution to the work $W_{1/2}$ is assumed to come also from the continuous phase, i.e. $W_{1/2} \sim H_1D_1^2\dot{\gamma}$, where $H_1$ is the elastic constant and $D_1$ is the deformation displacement of the matrix. We therefore obtain $$\label{P1} S^{(0mM)}_{1/2}=\alpha_2\phi_2\eta_1\dot{\gamma}^2;\,\,\, S^{(0mM)}_{2/1}=\alpha_1\phi_1\eta_2\dot{\gamma}^2$$ and $$\label{W} W_{1/2}=\phi_1H_1D_1^2\dot{\gamma};\,\,\, W_{2/1}=\phi_2H_2D_2^2\dot{\gamma}$$ By inserting these relations into (\[Pimbl\]) and (\[Phinver\]) we arrive finally at the estimate of the critical volume fraction $$\label{result} \frac{\phi_1}{\phi_2}=\frac{\alpha_2\eta_1\dot{\gamma}+\frac{1}{T_0}H_2D_2^2}{\alpha_1\eta_2\dot{\gamma}+\frac{1}{T_0}H_1D_1^2}$$ The above estimate appears to be an extension of several empirical formulas that can be found in the literature. For example, if we neglect the elasticity of the two fluids (i.e. we put $W_{1/2}=W_{2/1}=0$), or if the mixing is very vigorous (i.e.
if $\dot{\gamma}$ is large so that the terms in (\[result\]) involving the viscosity are much larger than the terms involving the elastic energy), or if $T_0$ is large, then (\[result\]) becomes $\frac{\phi_1}{\phi_2}=\frac{\alpha_2\eta_1}{\alpha_1\eta_2}$. If, in addition, we neglect the shape factor (i.e. $\alpha_1=\alpha_2=1$) then we arrive at the estimate $\frac{\phi_1}{\phi_2}=\frac{\eta_1}{\eta_2}$, which is indeed the empirical formula introduced in [@phinv]. Concluding remarks {#CR} ================== Reducing dynamics $MESO\rightarrow meso$ is a dynamics bringing a mesoscopic level of description (called *MESO level*) to another mesoscopic level of description (called *meso level*) that involves fewer details. By identifying the reducing dynamics with thermodynamics we have been able to formulate a general thermodynamics that encompasses the classical equilibrium thermodynamics (corresponding to $equilibrium\rightarrow equilibrium$), the equilibrium statistical mechanics (corresponding to $MICRO\rightarrow equilibrium$), mesoscopic equilibrium thermodynamics (corresponding to $MESO\rightarrow equilibrium$), and thermodynamics of externally driven systems (corresponding to $MESO\rightarrow meso$). The general thermodynamics is presented in three postulates. The first postulate (called Postulate 0 in order to keep as much as possible the traditional terminology) states that there exist well established mesoscopic levels of description. By well established we mean well tested against experimental observations. This postulate generalizes the postulate of the existence of equilibrium states that serves as a basis of the classical equilibrium and the Gibbs statistical equilibrium thermodynamics. The second postulate (called Postulate I) is about the state variables used on the *MESO* and *meso levels* and about the potentials needed to formulate mechanics. Again, this postulate generalizes Postulate I of the classical equilibrium thermodynamics.
The third postulate (Postulate II) addresses the process (called a preparation process) in which the macroscopic systems are prepared to states at which the *meso* description is found to agree with a certain family of experimental observations forming the experimental basis of the *meso* description. The time evolution making the preparation process is called the reducing time evolution. In the classical equilibrium thermodynamics this postulate is the static Maximum Entropy principle (static MaxEnt principle) specifying only the final result of the preparation process. In the general *MESO* and *meso* descriptions, it is the dynamic MaxEnt principle postulating the equation governing the time evolution making the preparation processes. Two important results arise on the *meso level* from the static or the dynamic MaxEnt principles. The first is the *meso level* time evolution (the time evolution reduced from the time evolution on the *MESO level*). The second is the fundamental thermodynamic relation that is constructed from the potential generating the reducing time evolution. The generating potential has the physical interpretation of entropy if the *meso level* in the approach $MESO \rightarrow meso$ is the equilibrium level, and of entropy production (or a quantity related to it) if the *meso level* in the approach $MESO \rightarrow meso$ is a general *meso level*. In the classical, or the Gibbs statistical, equilibrium thermodynamics (i.e. if the *meso level* in the approach $MESO \rightarrow meso$ is the equilibrium level) the reduced *meso* dynamics is no dynamics at all. The fundamental thermodynamic relation is, in the context of the classical or the Gibbs statistical equilibrium thermodynamics, the classical equilibrium fundamental thermodynamic relation. In the context of *MESO* and *meso* descriptions of externally driven macroscopic systems it is a new relation on the *meso level* representing its thermodynamics.
This thermodynamics is not directly related to the *meso* dynamics (just as the classical equilibrium fundamental thermodynamic relation is unrelated to any dynamics, there being no dynamics at equilibrium). It represents an extra piece of information about macroscopic systems. The *meso* dynamics is the *MESO* dynamics seen on the *meso level*, and the thermodynamics is information extracted from the way the details (that are seen on the *MESO level* but are invisible on the *meso level*) are being forgotten. The fourth postulate (Postulate III of the classical equilibrium thermodynamics) addresses the value of entropy at zero absolute temperature. Investigations of macroscopic systems at such extreme conditions are outside the scope of this paper. We are not extending this postulate to mesoscopic dynamical theories. In conclusion, we have demonstrated that if we limit ourselves to one fixed *meso level* (e.g. the level of fluid mechanics) then the dynamic and the thermodynamic modeling represent two essentially independent ways to investigate externally driven macroscopic systems. In particular, the validity and the pertinence of the thermodynamic *meso* models do not depend on establishing their relation to the dynamic *meso* models. If however we make our investigation simultaneously on two well established levels, one *MESO level* (that involves more details than the *meso level*) and the other the chosen *meso level*, then we can derive both the *meso* dynamics and the *meso* thermodynamics from the *MESO* dynamics. The derivation consists of splitting the *MESO* time evolution into the reducing time evolution (providing the *meso* thermodynamics) and the reduced time evolution that becomes the *meso* time evolution. In most investigations of reductions, attention is paid only to the reduced dynamics. We hope that this paper will stimulate investigations of both reduced and reducing dynamics. What are the arguments supporting the three postulates of the general thermodynamics?
In the case of the $equilibrium\rightarrow equilibrium$ passage they become the standard postulates of the classical equilibrium thermodynamics. In the case of the $MICRO\rightarrow equilibrium$ passage they become a formulation (equivalent to many other existing formulations) of the Gibbs equilibrium statistical mechanics. In the case of the $MESO\rightarrow equilibrium$ passage, a large body of supporting evidence has been collected (see in particular [@Grmadv] and references cited therein and in [@Obook]). In the case of the $MESO\rightarrow meso$ passage there is a much smaller number of examples that have been worked out. The support in this case comes, in addition to the support coming from the detailed analysis in the examples, from the unification that the general thermodynamics brings (see more in the text at the beginning of Section \[EX\]).\ \ **Acknowledgements** This research was partially supported by the Natural Sciences and Engineering Research Council of Canada.\ Arnold, V. Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications dans l’hydrodynamique des fluides parfaits. Annales de l’Institut Fourier 1966. Beretta, G.P. Steepest entropy ascent model for far-nonequilibrium thermodynamics: Unified implementation of the maximum entropy production principle. Phys. Rev. E 2014, 90, doi:10.1103/PhysRevE.90.042113. Beris, A.N. and Edwards, B.J. Thermodynamics of Flowing Systems (Oxford University Press, Oxford, 1994). Cahn, J.; Hilliard, J. Free Energy of a Nonuniform System. I. Interfacial Free Energy. J. Chem. Phys. 1958, 28, 258-267. Callen, H. Thermodynamics: An Introduction to the Physical Theories of Equilibrium Thermostatics and Irreversible Thermodynamics; Wiley: Hoboken, NJ, USA, 1960. Cattaneo, C. Sulla conduzione del calore, Atti del Seminario Matematico e Fisico della Università di Modena. 1948, 3, 83–101 (in Italian). Clebsch, A. Über die Integration der hydrodynamischen Gleichungen.
Journal für die reine und angewandte Mathematik 1859, 56, 1–10. Cross, M.C.; Hohenberg, P.C. Pattern formation outside of equilibrium. Rev. Mod. Phys. 1993, 65, 851–1112. Desvillettes, L.; Villani, C. On the trend to global equilibrium for spatially inhomogeneous kinetic systems: The Boltzmann equation. Invent. Math. 2005, 159, 245–316. Dzyaloshinski, I.E. and Volovick, G.E. “Poisson brackets in condensed matter physics” Ann. Phys. (NY) 1980, 125, 67. Favis, B.D. and Chalifoux, J.P., Poly. Eng. Sci. 1987, 27, 1591. Ginzburg, V.; Landau, L. On the theory of superconductivity. Zhur. Eksp. Theor. Fiz. 1950, 20, 1064-1082. Gorban, A.N.; Karlin, I.V. Invariant Manifolds for Physical and Chemical Kinetics: Lecture Notes in Physics; Springer: Berlin, Germany, 2005; Volume 660. Gorban, A.N.; Karlin, I.V. Hilbert’s 6th problem: Exact and approximate hydrodynamic manifolds for kinetic equations. Bull. Am. Math. Soc. 2013, doi:10.1090/S0273-0979-2013-01439-3. Grad, H. On Boltzmann’s H-theorem. J. Soc. Indust. Math. 1965, 13, 259–277. Grmela, M. Kinetic equation approach to phase transitions, J. Stat. Mech. 1971, 3, 347-364. Grmela, M. Onsager’s symmetry in higher order fluid dynamics, Helv. Phys. Acta 1977, 50, 393-406. Grmela, M. Particle and Bracket Formulations of Kinetic Equations. Contemp. Math. 1984, 28, 125–132. Grmela, M., Bracket formulation of diffusion-convection equations. Physica D 1986, 21, 179. Grmela, M. Reciprocity relations in thermodynamics. Physica A 2002, 309, 304–328. Grmela, M., Multiscale equilibrium and nonequilibrium thermodynamics in chemical engineering, Adv. Chem. Eng. 2010, 39, 75. Grmela, M. Fluctuations in extended mass-action-law dynamics. Physica D 2012, 241, 976–986. Grmela, M. “Geometry of Mesoscopic Nonequilibrium Thermodynamics”, Entropy 2015, 17, 5938-5964. Grmela, M. and Öttinger, H.C., Phys. Rev. E 1997, 56, 6620. Grmela, M.; Pavelka, M.; Klika, V. “Reductions and Extensions in Mesoscopic Dynamics” Phys.
Rev.E, 92, 032111 (2015) Jordhamo, G.M.; Manson, J.A.; Sperling, L.H., Poly. Eng. Sci. 1986, 26, 517 Jou, D.; Casas-Vàzquez, J.; Lebon, G. Extended Irreversible Thermodynamics, 4th ed.; Springer: Berlin, Gremany, 2010. Kaufman, A.N., Phys. Lett. A 100, 419 (1984). Keizer, J. “On the kinetic meaning of the second law of thermodynamics” J. Chem. Phys. 1976, 64, 4466-4474 Lucia, U. and Grazzini, G., “Second Law Today: Using Maximum-Minimum Entropy Generation” Entropy, 2015, 17, 778-7797 Marsden, J.; Weinstein, A. Coadjoint orbits, vortices, and Clebsch variables for incompressible fluids. Physica D 1983, 7, 305-323. Morrison, P.J., Phys. Lett. A 1984, 100, 423. Müller, I.; Ruggeri, T. Rational Extended Thermodynamics; Springer: New York, NY, USA, 1998. Öttinger, H.C., Beyond Equilibrium Thermodynamics (Wiley, New York, 2005). Öttinger, H.C. and Grmela, M., Phys. Rev. E 56, 6633 (1997). Pavelka, M.; Klika, V.; Grmela, M. Time reversal in nonequilibrium thermodynamics. Phys. Rev. E, **92**, 032111 (2015) Prigogine, I., “Thermodynamics of Irreversible Processes” , John Wiley and Sons, (1955) Pëschel, T, and Brilliantov, N.V. Eds. “Granular Gas Dynamics” Lecture Notes in Physics, (2003) Ruggeri, T. and Sugiyama, M. “Rational Extended Thermodynamics beyond the Monoatomic Gas” Springer (2015) [^1]: e-mail: miroslav.grmela@polymtl.ca
{ "pile_set_name": "ArXiv" }
[**AMS, a particle spectrometer in space [^1]**]{} [M. Buénerd]{}\ \ [for the AMS collaboration]{} Introduction ============ Accurate measurements of particle fluxes close to earth have been performed recently by the AMS experiment, bringing a body of excellent new data on the particle populations in the low-altitude terrestrial environment. These results should rejuvenate the long-standing interest of a broad community of scientists in the interactions between the cosmic ray (CR) flux and the atmosphere, and in the dynamics of particles in the earth's neighborhood. They certainly open new prospects for accurate studies of these phenomena, to investigate the interaction mechanisms generating the observed populations. The AMS experiment took its first data during a precursor flight on June 2–12, 1998, on the Space Shuttle DISCOVERY. The flight was originally intended as a qualification test for the spectrometer instrumentation. The orbit altitude was close to 370 km. During 100 hours of counting, about 10$^8$ events were recorded, providing new results of high quality on the particle distributions at the altitude of the detector. Some of these results were rather unexpected. They illustrate the discovery potential of the experiment in its future steps. This contribution is devoted to a general presentation of the project, of the results obtained during this first experimental test and their interpretation, and of the goals and plans of the forthcoming phase II of the experimental program. The first part will deal with a description of the measurements performed and with the questions they raise on the dynamics of the detected particles in the earth environment. The second part will describe a phenomenological approach, based on a simulation, to account for the observed distributions.
The third and last part will consist of a description of the phase II AMS spectrometer, which will operate on the International Space Station as of October 2003 and which will be very different from the version flown on the shuttle, and of its physics program. The AMS01 precursor flight ========================== The spectrometer operation during the flight was very successful, with only a few instrumental defects, none having significant consequences for the quality of the measurements achieved. The spectrometer ---------------- Figure \[AMS01\] shows a cut view in perspective of the spectrometer which was flown on the shuttle. The apparatus included a cylindrical permanent magnet generating a 0.15 Tesla dipole field perpendicular to the axis of the cylinder inside its volume [@AIMANT]. The inner volume was mapped with a tracker consisting of 6 planes of silicon microstrips, partially equipped at this stage, allowing the reconstruction of particle trajectories [@TRACK]. The tracker planes also provided dE/dX measurements of the particles. Above and below the magnet, two double planes of scintillator hodoscopes, with perpendicular orientations of their paddles, provided both a measurement of the particle time of flight (TOF) and of their specific energy loss (dE/dX). The paddle location and the position sensitivity inside the paddles also provided a complementary determination of the particle hit coordinates, useful for background rejection. A skirt of scintillators around the tracker was used to veto particles outside the fiducial angular acceptance of the counter. At the bottom of the device, a threshold Cherenkov counter equipped with n=1.035 aerogel material allowed $p/e^+$ and $\bar{p}/e^-$ discrimination below the $p(\bar{p})$ threshold around 4 GeV/c [@ATC]. Results ------- Some of the results have already been published [@HEBAR; @PROT1; @PROT2; @LEPT; @HE].
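As an aside on the aerogel counter described above: the quoted $p(\bar{p})$ threshold follows directly from the Cherenkov condition $\beta > 1/n$, which gives a threshold momentum $p_{th}=m/\sqrt{n^{2}-1}$. The short sketch below checks this with the standard particle masses; the function name is mine, and only the $n=1.035$ index is taken from the text.

```python
import math

def cherenkov_threshold(mass_gev, n):
    """Threshold momentum (GeV/c) above which a particle of the given
    mass radiates Cherenkov light in a medium of refractive index n."""
    return mass_gev / math.sqrt(n * n - 1.0)

# Aerogel with n = 1.035, as in the AMS01 counter.
p_proton = cherenkov_threshold(0.938, 1.035)      # ~3.5 GeV/c, consistent with the "around 4 GeV/c" quoted
p_electron = cherenkov_threshold(0.000511, 1.035) # ~2 MeV/c: electrons and positrons essentially always radiate
print(round(p_proton, 2), round(p_electron, 5))
```

The large gap between the electron and proton thresholds is what makes the counter useful for $p/e^+$ discrimination below a few GeV/c.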
The measured data are still under analysis, however, and the physics issues addressed by the experiment are being actively investigated. Some of the latter are discussed in the following. ### Search for antimatter The first stated objective of the experiment is the search for primordial antimatter in space. It was therefore very important to investigate the capability of the spectrometer to identify antiparticles with Z$\geq$2, and to identify and reject background events.\ $\bullet$ [**Antihelium [[@HEBAR]]{} -** ]{} Figure \[AHE\] shows the spectral distribution of Z=2 particles as a function of their rigidity, i.e., momentum/charge, the sign of the charge being measured by the sign of the trajectory curvature in the tracker. Positive rigidities correspond to He particles, whereas antihelium nuclei are expected on the negative side. A few fake antihelium events, due to soft interactions in the detector, were rejected by means of appropriate cuts on the energy deposit in the tracker planes. Finally, the experiment allowed a new upper limit to be set on the $\overline{He}$/$He$ fraction in cosmic rays, of $1.1\times 10^{-6}$. See [@BESS] for recent results from the BESS experiment.\ $\bullet$ [**Antimatter nuclei Z$>$2 -** ]{} The particle identification capabilities of the spectrometer have also been used to search for antimatter nuclei with Z$>$2. This search has been negative so far. The limit obtained will be reported in a future publication. ### Protons [[@PROT1; @PROT2]]{} The CR proton distribution was already very well known from previous experiments before the AMS flight. The measurements were intended to be used for checking and calibrating the experiment, no new result being expected. Figure \[PROTONS\] shows the kinetic energy distributions of incoming particles (towards earth) measured by AMS in bins of latitude. The spectra show some expected features, like the power-law decrease with energy.
The geomagnetic cutoff (GC), due to the sweeping away of particles by the earth's magnetic field below a critical momentum, is clearly observed in the spectra, decreasing from about 15 GeV around the equator down to zero in the polar region. The spectrum at high latitudes is in good agreement with previous measurements. Although no significant flux was expected below the GC, a strong rise of the spectra at low energy is observed at all low latitudes, with a strong enhancement in the equatorial region. The albedo (outgoing particle) spectra at the same latitudes do not show, as expected, the high-energy features due to the incoming CR flux; instead they display a single component peaked at low energy and overlapping almost perfectly (to within 1%) with the low-energy component of the incoming flux. These features indicate that we are dealing with a population of trapped particles circling around the earth's magnetic field lines, exactly as in the Van Allen belts but at much higher energy and much closer to earth. This will be confirmed by the analysis reported below. ### Leptons [[@LEPT]]{} The flux of leptons has been measured up to about 100 GeV for electrons. It was limited to about 3 GeV for positrons by the $p/e^+$ discrimination range set by the Cherenkov counter threshold for protons. $\bullet$ [**Electrons**]{} The electron spectra show quite similar features to the proton spectra, with the low-energy component of the downgoing flux and the upgoing flux almost perfectly overlapping in the equatorial region. In addition, these components of the lepton flux have exactly the same shape, to within statistical errors, as for protons, indicating that the particles are likely involved in the same dynamical process. $\bullet$ [**Positrons**]{} The positron spectra are similar to the electrons' over the range investigated.
The surprising feature is that the positron to electron flux ratio is about 4 in the equatorial region, while in the cosmic flux it is about 0.1, and about one in the atmosphere. The origin of this feature is an open question which is being addressed by the groups of the collaboration. Figure \[LEPTONS\] shows the distributions of electrons and positrons over the positron ID range in the equatorial region (left) and the distribution of the e$^+$/e$^-$ ratio in latitude. ### Ions $\bullet$ [**Deuterium [[@DEUT]]{} -** ]{} The flux of deuterium has been measured and some preliminary results are available. $\bullet$ [**Helium [[@HE]]{} -** ]{} The measured flux of helium is in agreement with previous measurements and does not show a strong rise of flux below the GC as the proton flux does. However, a small flux of $^3$He is found below the GC, which probably originates at least partly from the fragmentation of cosmic $^4$He (figure \[HELIUMS\]). A consistent picture based on known nuclear reaction mechanisms is being investigated to account for these populations of light nuclei [@DERHE]. $\bullet$ [**Z$>$2 Nuclei -** ]{} Some significant samples of light ions with 2$<$Z$\leq\approx$10 have been measured during this run. They are still being analyzed. Origin of the measured proton flux [[@DER00]]{} =============================================== Simulation program ------------------ The inclusive spectrum of protons at the altitude of AMS (390-400 km) has been calculated by means of a computer simulation program built for this purpose. CR particles are generated with their natural abundance and momentum distributions. They are propagated inside the earth's magnetic field. Particles are allowed to interact with atmospheric nuclei and produce secondary protons, with cross sections and multiplicities as discussed below. Each secondary proton is then propagated and allowed to collide as in the previous step. A reaction cascade can thus develop through the atmosphere.
The reaction products are counted when they cross the virtual sphere at the altitude of the AMS spectrometer, upward and downward. Particles undergo energy loss by ionisation before and after the interaction. Multiple scattering effects have not been included at this stage. Each event is propagated until the particle disappears by either colliding with a nucleus, being stopped in the atmosphere, or escaping to outer space beyond twice the production altitude. Note that particles are counted each time they cross the sphere at the detection altitude. The contributions of trapped particles are thus weighted statistically with their numbers of crossings, which increases their contribution to the final spectrum. The secondary nucleon spectrum generated has to cover two orders of magnitude in kinetic energy, between about 100 MeV and 10 GeV. The main component of the proton production cross section was obtained by means of analytical relations fitted to 14.6 GeV $p+Be$ data. The scaling properties of the cross section were checked with the FRITIOF/PYTHIA (Lund) event generator. Since this generator is not expected to account for the very low energy and backward proton emission (target-like to negative rapidities), this latter component was incorporated using a parametrization. The respective contributions to the total multiplicity-weighted proton production cross section were 352 mb for the QE component and 88 mb for the DI component. Cross sections on atmospheric nuclei were renormalized from the original data or parametrizations obtained on different nuclei, using ratios of geometrical cross sections. Results ------- Many features of the dynamics of particles in the earth's magnetic field appear in the simulation results. Some of them are discussed in the published paper. Others will be reported later.
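The renormalization of the $p+Be$ production cross sections to atmospheric nuclei "using ratios of geometrical cross sections" can be sketched as follows. The specific $\sigma \propto (A_p^{1/3}+A_t^{1/3})^{2}$ form and the function below are my assumption of one common geometric parametrization, not necessarily the collaboration's exact recipe.

```python
def geometric_ratio(a_target_new, a_target_ref, a_proj=1):
    """Ratio of geometrical cross sections sigma ~ (A_p^(1/3) + A_t^(1/3))^2,
    used to rescale a cross section measured on one target nucleus
    (mass number a_target_ref) to another (a_target_new)."""
    r = lambda a: a ** (1.0 / 3.0)
    new = (r(a_proj) + r(a_target_new)) ** 2
    ref = (r(a_proj) + r(a_target_ref)) ** 2
    return new / ref

# Rescale the 14.6 GeV p+Be (A=9) parametrization to nitrogen (A=14),
# the dominant atmospheric nucleus:
print(round(geometric_ratio(14, 9), 3))
```

The rescaling factor is modest (of order 20%) because the geometric radius grows only as $A^{1/3}$.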
Figure \[DISTRIB\] shows the experimental kinetic energy distributions of downward (left) and upward (right) protons measured in several bins of latitude, compared to the results of the simulation. No free parameter was used for normalization to the data: the calculated results are entirely determined by the physics input to the calculation. The agreement between the data and the simulation is remarkably good, at all latitudes and for both the inward and outward flux. In particular, the cutoff region is very well reproduced, which indicates that the treatment of the particle dynamics and kinematics is sound. The shaded histograms in the figure correspond to secondary particles in the simulation. The fraction of events originating from the DI component of the proton production cross section described previously varies from about 10% in the equatorial region up to 25% in the polar region, with a momentum distribution peaking at low kinetic energy and essentially contained below 500 MeV. It can be concluded from this result that the proton flux measured by AMS can be accounted for, to a good accuracy, by the single interaction of the incoming CR flux with the atmosphere. Future prospects ---------------- This successful result opens a world of new prospects on the phenomenology of particles in the earth environment. Besides the ongoing investigations of the other AMS results described above, some other issues of general or particular interest are being addressed or will be addressed soon, like the study of the atmospheric neutrino flux and the secondary antiproton populations close to earth. The same type of approach can also be used for particle propagation in the galactic interstellar medium and the study of the various astrophysical issues associated with this propagation.
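The latitude dependence of the geomagnetic cutoff that the simulation reproduces so well is captured, in a pure dipole field, by the Størmer vertical-cutoff formula $R_c \approx k\,\cos^4\lambda$ with $k \approx 14.9$ GV. The sketch below uses that standard dipole constant, which is textbook geophysics rather than a number taken from this paper.

```python
import math

def stormer_vertical_cutoff(lat_deg, k_gv=14.9):
    """Vertical cutoff rigidity (GV) in a dipole field at geomagnetic
    latitude lat_deg (degrees): R_c = k * cos^4(latitude)."""
    return k_gv * math.cos(math.radians(lat_deg)) ** 4

print(round(stormer_vertical_cutoff(0.0), 1))   # equator: 14.9 GV, cf. the ~15 GeV cutoff seen by AMS
print(round(stormer_vertical_cutoff(60.0), 2))  # high latitude: 0.93 GV, cutoff nearly gone
```

The steep $\cos^4\lambda$ dependence is why the measured cutoff drops from about 15 GeV at the equator to essentially zero in the polar bins.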
AMS02, a particle observatory in space ====================================== The second phase of the AMS experiment will begin in October 2003 with the launch of the spectrometer and its installation on the International Space Station (ISS) for a 3-to-5-year campaign of measurements. Spectrometer structure ---------------------- The new spectrometer, shown in figure \[AMS2\], will improve on the AMS01 version (figure \[AMS01\]) in many respects. Its main change consists of the permanent magnet (B$_{max}$=0.15 T) of AMS01 being replaced by a superconducting magnet in AMS02 (B$_{max}$=1 T), which will result in a 6 times better resolution and a 6 times larger momentum range because of this larger magnetic field. In addition, several new detectors will be implemented in AMS02: A transition radiation detector (TRD) will provide lepton identification up to above 300 GeV and an improved tracking accuracy. A Cherenkov imager (RICH) will allow nuclear isotope identification up to about 13 GeV/c for masses around carbon [@RICH]. An electromagnetic calorimeter (ECAL) will provide the energy measurement of electromagnetic particles ($\gamma$, leptons) and their discrimination from hadrons up to the TeV range. The synchrotron radiation detector (SRD) would provide e$^+$/e$^-$ identification/discrimination at very high energies [@SRD]. Physics program --------------- The physics program to be covered with the new instrument is wide, with a high discovery potential and a significant probability of unexpected results and new findings. Basically, the spectrometer will be able to accumulate statistics larger by 3 to 4 orders of magnitude than those measured so far by other embarked experiments, for all the species studied. The range in rigidity will extend from around 300 MV up to 3 TV, depending on the particle species, with a good identification capability for leptons, hadrons, and ions, provided by the spectrometer instrumentation.
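The quoted factor of 6 in resolution and momentum reach can be traced to the field ratio alone: for a fixed tracker geometry, the sagitta of a track, and hence the relative momentum resolution and the maximum detectable rigidity, scale linearly with $B$. This scaling law is standard spectrometer physics rather than something spelled out in the text; the quoted factor presumably also folds in tracker changes.

```python
def field_gain(b_new_tesla, b_old_tesla):
    """Sagitta (and thus momentum-resolution / momentum-reach) gain
    obtained by raising the dipole field at fixed tracker geometry."""
    return b_new_tesla / b_old_tesla

# AMS02 superconducting magnet vs AMS01 permanent magnet:
print(round(field_gain(1.0, 0.15), 1))  # ~6.7, consistent with the "6 times" quoted
```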
The TRD counter will provide lepton-hadron (mainly positron-proton) discrimination up to the TRD proton threshold around 300 GeV. The RICH will allow both charge and mass measurements: the charge can be obtained from the photon yield, proportional to Z$^2$, for momenta going from threshold up to the upper limit of the counter momentum resolution. The mass measurement is obtained from the ring image processing, over a range limited by the space resolution of the counter, as shown in the table (see [@RICH] for details). The final performances and range will depend on the choice of Cherenkov radiators. The electromagnetic calorimeter will allow the energy measurement of electromagnetic particles, leptons and photons, up to the TeV range, and lepton-hadron discrimination from the measured P/E ratio over the same range. These features will allow a high-statistics study of many cosmic ray species including $e^+$, $e^-$, $p$, $\bar{p}$, and the lightest ions $d,t,^{3,4}$He. Heavier light ions will also be studied, with mass identification up to A$\approx 20$ and elements up to Z$\approx 20$, depending on the final performances of the instrumentation of the spectrometer (RICH in particular). Unstable ions like $^{10}$Be and $^{26}$Al are of particular interest since they provide a measurement of the confinement time of charged particles in the galaxy (galactic chronometers) [@BO00]. The corresponding antimatter nuclei will be searched for with equivalent instrumental performances in identification and kinematic range. The metrological perspectives are summarized in table \[PROSP\].

\[CHARAC\]

[llll]{}
Particles & P$_{min}$ & P$_{max}$ & Comments\
e$^-$ & $\approx$0.3 & $\approx$3000 & Upper limit set by rigidity resolution\
e$^+$ & $\approx$0.3 & $\approx$300 & Upper limit set by TRD\
proton & $\approx$0.3 & $\approx$3000 & Upper limit set by rigidity resolution\
Ions Z$<\approx$20 & $\approx$0.3 & $\approx$1500 ? & Depending on RICH performances\
Ions A$<$4 & 1 to 4 & $\approx$20 & Depending on RICH performances\
Ions 4$<$A$<\approx$20 & 1 to 4 & $\approx$12 & \
$\bar{p}$ & $\approx$0.3 & $\approx$3000 & Depending on $\bar{p}/e^-$ discrimination\
$\overline{ions}$ & $\approx$0.3 & $\approx$1500 & $\overline{He}$, $\overline{C}$\

\[PROSP\]

These capabilities will make it possible to address, with an unmatched sensitivity, the main scientific objectives of the program:\ 1) The search for antimatter in space [@MBAR]: The experimental signature for the detection of an antinucleus basically requires a determination of its charge in modulus and sign, the key point being the sign of the charge provided by the radius of curvature of the trajectory in the tracker, its accuracy, and its contamination by various backgrounds. The results discussed previously have shown that the background level is under control (see details in [@HEBAR]). It will be further improved with the upgraded tracker equipment, in number of planes and readout electronics.\ 2) The search for dark matter in space through the signature of neutralino annihilations. The latter are expected to generate kinematic structures in the spectra of their annihilation products [@DARKM]. Such structures will be searched for in $\bar{p}$ and e$^+$ spectra.\ 3) In addition, the study of the various ions within the spectrometer (RICH) range of identification will be carried out, in particular for the $^{10}$Be isotope. This is illustrated in figure \[BE10\] with the result of a simulation incorporating the ID resolution of the spectrometer provided by the RICH and a theoretical $^{10}$Be momentum distribution. It is seen that the sample measured in 6 weeks of counting time on the ISS would provide highly accurate data over a range totally unexplored so far by previous experiments. The study of this sample will provide an estimate of the accuracy to be expected on the propagation parameters [@BO00].
See ref. [@GAMMAS] for a study of a possible high energy gamma ray astronomy program with AMS. SUMMARY and CONCLUSION ====================== In summary, it has been shown that the first engineering flight of the AMS experiment was very successful, both instrumentally and scientifically. This first step has provided a significant number of new, unexpected physics results on the particle populations close to Earth. Although no new physics has emerged so far from these data, they are new and important, and are expected to provide significant improvements in our understanding of the particle flux in the environment close to earth at the completion of the study. These results, obtained in a region extensively explored by previous balloon or satellite experiments, illustrate the discovery potential of the spectrometer, due to its instrumental characteristics (large geometrical acceptance, large momentum range and dynamic range, and particle identification capability) and to the long duration of the measurement campaign scheduled for the main phase of the experiment on the ISS, where close to 3 orders of magnitude more statistics than in the first flight will be collected, in much better instrumental conditions. [99]{} S. Ahlen et al., Nucl. Inst. and Meth. in Phys. A350(1994)351 D. Alvisi et al., Nucl. Inst. and Meth. in Phys. A437(1999)212 B. Alpat et al., Nucl. Inst. and Meth. in Phys. A446(2000)522; ibid, Nucl. Inst. and Meth. in Phys. A439(2000)53 D. Barancourt et al., Nucl. Inst. and Meth. in Phys., in press; preprint astro-ph/0010242 The AMS collaboration, J. Alcaraz et al., Phys. Lett. B461(1999)387 M. Nozaki et al., Proc. of the 26th Int. Cosmic Ray Conf., Salt Lake City, Aug. 17-25, 1999, vol. 3, p. 85. The AMS collaboration, J. Alcaraz et al., Phys. Lett. B472(2000)215 The AMS collaboration, J. Alcaraz et al., Phys. Lett. B490(2000)27 The AMS collaboration, J. Alcaraz et al., Phys. Lett. B484(2000)10 G. Lamanna, AMS note 2000-07-02 The AMS collaboration, J. Alcaraz et al., Phys.
Lett. B494(2000)193 L. Derome and M. Buénerd, in preparation. L. Derome et al., Phys. Lett. B489(2000)1; L. Derome and M. Buénerd, Nucl. Phys. A, in press. See http://ams.cern.ch/AMS/ams\_homepage.html M. Buénerd and Z. Ren, Nucl. Inst. and Meth. in Phys. A454(2000)476; T. Thuillier et al., Nucl. Inst. and Meth. in Phys. A442(2000)74; T. Thuillier, PhD thesis, Université J. Fourier, Grenoble (France), 1999; M. Pohl and H. Hofer, Nucl. Inst. and Meth. in Phys. A416(1998)59 See the references in H. Kurki-Suonio and E. Sihvola, Phys. Rev. D62:103508, 2000 E. Diehl et al., Phys. Rev. D52(1995)4223; S. Rudaz and F.W. Stecker, ApJ 325(1988)16; see E.A. Baltz et al., Phys. Rev. D61:023514, 2000; and Phys. Rev. D59(1999)023511 for a recent status of the field A. Bouchet et al., Nucl. Phys. A, in press. R. Battiston et al., Astropart. Phys. 13(2000)51 [^1]: Talk given at the XXIV Symposium on Nuclear Physics, January 3-6, 2001, Taxco, Mexico.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. We study the behavior of analytic structure in the boundary of $\Omega$ and obtain a compactness result for Hankel operators on the Bergman space of $\Omega$.' address: 'Bowling Green State University, Department of Mathematics and Statistics, Bowling Green, Ohio 43403 ' author: - 'Timothy G. Clos' bibliography: - 'rrrefs.bib' title: Hankel Operators on the Bergman spaces of Reinhardt Domains and Foliations of Analytic Disks --- Introduction ============ Let $\Omega\subset \mathbb{C}^n$ for $n\geq 2$ be a bounded domain. We let $dV$ be the (normalized) Lebesgue volume measure on $\Omega$. Then $L^2(\Omega)$ is the space of measurable, square-integrable functions on $\Omega$. Let $\mathcal{O}_{\Omega}$ be the collection of all holomorphic (analytic) functions on $\Omega$. Then the Bergman space $A^2(\Omega):=\mathcal{O}_{\Omega}\cap L^2(\Omega)$ is a closed subspace of $L^2(\Omega)$, a Hilbert space. Therefore, there exists an orthogonal projection $P:L^2(\Omega)\rightarrow A^2(\Omega)$, called the Bergman projection. The Hankel operator with symbol $\phi\in L^{\infty}(\Omega)$ is then defined as $$H_{\phi}f:=(I-P)(\phi f)$$ where $I$ is the identity operator and $f\in A^2(\Omega)$. Previous Work ============= Compactness of Hankel operators on the Bergman spaces of bounded domains, and its relationship to analytic structure in the boundary of these domains, is an ongoing research topic. In one complex dimension, Axler in [@Axler86] completely characterizes compactness of Hankel operators with conjugate holomorphic, $L^2$ symbols. There, the emphasis is on whether the symbol belongs to the little Bloch space. This requires that the derivative of the complex conjugate of the symbol satisfy a growth condition near the boundary of the domain.\ The situation is different in several variables for conjugate holomorphic symbols.
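Before turning to the several-variable results, a concrete one-variable computation may help fix the definition (an assumed illustration on $\Omega=\mathbb{D}$ with the normalized measure, not taken from the papers cited): for the symbol $\phi=\bar z$, orthogonality of the monomials in $A^2(\mathbb{D})$ and $\|z^k\|^2=1/(k+1)$ give, for $n\geq 1$,

```latex
\langle \bar{z}\, z^{n}, z^{m}\rangle
  = \int_{\mathbb{D}} z^{n}\,\overline{z^{m+1}}\, dV
  = \begin{cases} \dfrac{1}{n+1}, & m = n-1,\\[4pt] 0, & m \neq n-1, \end{cases}
\qquad
P(\bar{z}\, z^{n})
  = \frac{\langle \bar{z} z^{n}, z^{n-1}\rangle}{\|z^{n-1}\|^{2}}\, z^{n-1}
  = \frac{n}{n+1}\, z^{n-1},
```

so that $H_{\bar z} z^{n} = \bar z\, z^{n} - \frac{n}{n+1}\, z^{n-1}$, a nonzero element of the orthogonal complement of $A^2(\mathbb{D})$; Axler's little-Bloch criterion then governs whether such operators are compact.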
In [@clos], the author completely characterizes compactness of Hankel operators with conjugate holomorphic symbols on convex Reinhardt domains in $\mathbb{C}^n$ whose boundary contains a certain class of analytic disks. The proof relied on using the analytic structure in the boundary to show that a compact Hankel operator with a conjugate holomorphic symbol must be the zero operator, assuming certain conditions on the boundary of the domain. In particular, the symbol is identically constant if certain conditions are satisfied. An example of a domain where these conditions are satisfied is the polydisk in $\mathbb{C}^n$ (as seen in [@Le10] and [@clos]).\ In [@CelZey] the authors studied the compactness of Hankel operators with symbols continuous up to the closure of bounded pseudoconvex domains via compactness multipliers. They showed that if $\phi\in C(\overline{\Omega})$ is a compactness multiplier, then $H_{\phi}$ is compact on $A^2(\Omega)$. The authors of [@CelZey] approached the problem using the compactness estimate machinery developed in [@StraubeBook].\ Hankel operators with symbols continuous up to the closure of the domain are also studied in [@CuckovicSahutoglu09] and [@ClosSahut]. The paper [@CuckovicSahutoglu09] considered Hankel operators with symbols that are $C^1$-smooth up to the closure of bounded convex domains in $\mathbb{C}^2$. The paper [@ClosSahut] considered symbols that are continuous up to the closure of bounded convex Reinhardt domains in $\mathbb{C}^2$. Thus the regularity of the symbol was reduced at the expense of a smaller class of domains.\ Many of these results characterize the compactness of these operators by the behavior of the symbol along analytic structure in the domain. For bounded pseudoconvex domains in $\mathbb{C}^n$, compactness of the $\overline{\partial}$-Neumann operator implies the compactness of Hankel operators with symbols continuous up to the closure of the domain.
See [@FuSt] and [@StraubeBook] for more information on compactness of the $\overline{\partial}$-Neumann operator. For example, the ball in $\mathbb{C}^n$ has a compact $\overline{\partial}$-Neumann operator, and hence any Hankel operator with symbol continuous up to the closure of the ball is compact on the Bergman space of the ball. The compactness of the $\overline{\partial}$-Neumann operator on the ball in $\mathbb{C}^n$ follows from the convexity of the domain and the absence of analytic structure in the boundary of the domain. See [@StraubeBook].\ As shown in [@dbaressential], the existence of analytic structure in the boundary of bounded convex domains is an impediment to the compactness of the $\overline{\partial}$-Neumann operator. It is therefore natural to ask whether a Hankel operator with symbol continuous up to the closure of the domain can be compact if the $\overline{\partial}$-Neumann operator is not compact. As we shall see, the answer is yes. On the polydisk in $\mathbb{C}^n$, [@Le10] showed that the answer to this question is yes, despite the non-compactness of the $\overline{\partial}$-Neumann operator. For bounded convex domains in $\mathbb{C}^n$ for $n\geq 2$, relating the compactness of Hankel operators with continuously differentiable symbols to the geometry of the boundary is well studied. See [@CuckovicSahutoglu09]. They give a more general characterization than [@Le10] for symbols that are $C^1$-smooth up to the closure of the domain. For symbols that are only continuous up to the closure of bounded convex Reinhardt domains in $\mathbb{C}^2$, there is a complete characterization in [@ClosSahut].\ The Main Result =============== In this paper we investigate the compactness of Hankel operators on the Bergman spaces of smooth bounded pseudoconvex complete Reinhardt domains. These domains may not be convex as in [@ClosSahut], but they are almost locally convexifiable.
That is, for any $(p_1,p_2)\in b\Omega$ away from the coordinate axes, there exist $r>0$ and a biholomorphism $T:B((p_1,p_2),r)\rightarrow \mathbb{C}^2$, where $$B((p_1,p_2),r):=\{(z_1,z_2)\in \mathbb{C}^2: |z_1-p_1|^2+|z_2-p_2|^2<r^2\},$$ such that $B((p_1,p_2),r)\cap \Omega$ is a domain and $T(B((p_1,p_2),r)\cap \Omega)$ is convex. We will use this fact, along with a result in [@CuckovicSahutoglu09], to localize the problem. We then analyze the geometry of analytic structure in the resulting convex domain. Then we perform the analysis on the boundary of this convex domain, using the boundary geometry previously established, to prove the main result.\ \[thmmain\] Let $\Omega\subset\mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. Then $\phi\in C(\overline{\Omega})$ is such that $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$ if and only if $H_{\phi}$ is compact on $A^2(\Omega)$. We will assume $\phi\circ f$ is holomorphic for any holomorphic function $f:\mathbb{D}\rightarrow b\Omega$ and show that $H_{\phi}$ is compact on $A^2(\Omega)$, as the converse of this statement appears as a corollary in [@CCS]. Analytic structure in the boundary of pseudoconvex complete Reinhardt domains in $\mathbb{C}^2$ =============================================================================================== We will first investigate the geometry of non-degenerate analytic disks in the boundary of Reinhardt domains. We define the following collection for any bounded domain $\Omega\subset \mathbb{C}^n$: $$\Gamma_{\Omega}:=\overline{\bigcup_{f\in A(\mathbb{D})\cap C(\overline{\mathbb{D}})\, , f \,\text{non-constant}}\{f(\mathbb{D}) \,|\, f:\mathbb{D}\rightarrow b\Omega\}}$$ Let $\Omega\subset \mathbb{C}^n$ for $n\geq 2$ be a domain.
We say $\Gamma\subset b\Omega$ is an analytic disk if there exists $F:\mathbb{D}\rightarrow \mathbb{C}^n$ so that every component function of $F$ is holomorphic on $\mathbb{D}$ and continuous up to the boundary of $\mathbb{D}$, and $F(\mathbb{D})=\Gamma$.\ One observation is that for any Reinhardt domain $\Omega\subset \mathbb{C}^n$, if $F(\mathbb{D})\subset b\Omega$ is an analytic disk where $F(\zeta):=(F_1(\zeta), F_2(\zeta),...,F_n(\zeta))$, then for any $(\theta_1,\theta_2,...,\theta_n)\in \mathbb{R}^n$, $G(\mathbb{D})\subset b\Omega$ is also an analytic disk, where $$G(\zeta):=(e^{i\theta_1}F_1(\zeta), e^{i\theta_2}F_2(\zeta),...,e^{i\theta_n}F_n(\zeta)).$$ We say an analytic disk $f(\mathbb{D})$, where $f=(f_1,f_2,...,f_n)$, is trivial or degenerate if $f_j$ is identically constant for all $j\in \{1,2,...,n\}$. Otherwise, we say the analytic disk is non-trivial or non-degenerate.\ Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. If $g(\mathbb{D})\subset b\Omega$ is an analytic disk so that $\overline{g(\mathbb{D})}\cap \{z_2=0\}\neq \emptyset$ or $\overline{g(\mathbb{D})}\cap \{z_1=0\}\neq \emptyset$, then $g(\zeta)=(g_1(\zeta),0)$ or $g(\zeta)=(0,g_2(\zeta))$, respectively. There are possibly infinitely many continuous families of non-trivial analytic disks in the boundary of a bounded complete Reinhardt domain $\Omega$ in $\mathbb{C}^2$. Hence, by compactness of the boundary of $\Omega$, there are subsets of $b\Omega$ that are accumulation sets of families of analytic disks. The next lemma gives us some insight into the structure of these accumulation sets. \[disklim\] Suppose $\Omega\subset \mathbb{C}^2$ is a bounded complete Reinhardt domain and $\{\Gamma_j\}_{j\in \mathbb{N}}\subset b\Omega$ is a sequence of pairwise disjoint, continuous families of analytic disks so that $\Gamma_j\rightarrow \Gamma_0$ as $j\rightarrow \infty$, where $\Gamma_0=\{e^{i\theta}F(\mathbb{D}):\theta\in [0,2\pi]\}$.
Then there exist $c_1,c_2\in \mathbb{C}$ so that $F\equiv (c_1,c_2)$. Let $\sigma$ denote the Lebesgue measure on the boundary. Without loss of generality, we may assume the $\Gamma_j$ are families of non-degenerate analytic disks, and so we may assume $\sigma(\Gamma_j)>0$ for all $j\in \mathbb{N}$. If $\sigma(\Gamma_0)>0$, then we consider the sequence of indicator functions of $\Gamma_j$, denoted $\chi_{\Gamma_j}$. By assumption, $\chi_{\Gamma_j}\rightarrow \chi_{\Gamma_0}$ pointwise as $j\rightarrow \infty$. Hence an application of the Lebesgue dominated convergence theorem shows that $\sigma(\Gamma_j)\rightarrow \sigma(\Gamma_0)$, and so $\sigma(\Gamma_j)\geq \delta>0$ for sufficiently large $j\in \mathbb{N}$. Since the $\Gamma_j$ are pairwise disjoint and $\Omega$ is bounded, this is a contradiction. So $\sigma(\Gamma_0)=0$. Now write $\Lambda_j(\zeta):=(f_j(\zeta),g_j(\zeta))$, where $f_j$, $g_j$ are holomorphic on $\mathbb{D}$ and continuous up to the boundary of $\mathbb{D}$. Furthermore, $$\Gamma_j=\{e^{i\theta}\Lambda_j:\theta\in [0,2\pi]\}.$$ Then, there exist $f,g$ so that $$\sup\{\text{dist}((f_j(\zeta),g_j(\zeta)), (f(\zeta),g(\zeta))):\zeta\in \overline{\mathbb{D}}\}\rightarrow 0$$ as $j\rightarrow \infty$. Therefore, one can show $f_j\rightarrow f$ and $g_j\rightarrow g$ uniformly on $\overline{\mathbb{D}}$ as $j\rightarrow \infty$. So $f$ and $g$ are holomorphic on $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}$. To show $f$ and $g$ are constant, it suffices to show they are constant on some open subset of $\mathbb{D}$. Assume $f$ is not identically constant. If $g$ is constant, then by the open mapping theorem $F(\mathbb{D})$ is open in $\mathbb{C}\times \mathbb{R}$, which cannot occur since $\sigma(F(\mathbb{D}))\leq \sigma(\Gamma_0)=0$. So, we assume both $f$ and $g$ are not identically constant, so the zeros of $f'$ and $g'$ have no accumulation point in $\mathbb{D}$.
Then by a holomorphic change of coordinates, there exists an open simply connected set $D\subset \mathbb{D}$ so that $F(D)$ is biholomorphic to a subset $K$ of $\mathbb{C}\times\{0\}$. Hence, again by the open mapping theorem, $f$ is constant on $D$ since $K$ has measure zero, and so $f$ is constant on $\overline{\mathbb{D}}$ by the identity principle. \[lembiholo\] Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. Suppose $f:\mathbb{D}\rightarrow b\Omega$ and $g:\mathbb{D}\rightarrow b\Omega$ are holomorphic on $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}$. Assume that $\overline{f(\mathbb{D})}\cap \overline{g(\mathbb{D})}\neq \emptyset$. Furthermore, assume $\overline{f(\mathbb{D})}\cap(\{z_1=0\}\cup \{z_2=0\})=\emptyset$ and $\overline{g(\mathbb{D})}\cap(\{z_1=0\}\cup \{z_2=0\})=\emptyset$. Then, $f(\mathbb{D})$ and $g(\mathbb{D})$ are biholomorphically equivalent to analytic disks contained in a unique complex line. Let $\zeta_0\in \mathbb{D}$ and $\zeta_1\in \mathbb{D}$ be such that $f(\zeta_0)=g(\zeta_1)$. Without loss of generality, by composing with a biholomorphism of the unit disk that sends $\zeta_0$ to $\zeta_1$, we may assume $f(\zeta_0)=g(\zeta_0)$. Then, there exist $r>0$ and a biholomorphism $T:B(f(\zeta_0),r)\rightarrow \mathbb{C}^2$ so that $f^{-1}(B(f(\zeta_0),r)\cap f(\mathbb{D}))\subset \mathbb{D}$ and $g^{-1}(B(f(\zeta_0),r)\cap g(\mathbb{D}))\subset \mathbb{D}$ and $T(B(f(\zeta_0),r)\cap \Omega)$ is convex. Then, $A:=f^{-1}(B(f(\zeta_0),r)\cap f(\mathbb{D}))\cap g^{-1}(B(f(\zeta_0),r)\cap g(\mathbb{D}))$ is open, non-empty, simply connected, and bounded. By the Riemann mapping theorem, there exists a biholomorphism $R:\mathbb{D}\rightarrow A$. Then, $T\circ f\circ R$ and $T\circ g\circ R$ are analytic disks in the boundary of a bounded convex domain. Hence they are contained in a complex line by [@CuckovicSahutoglu09 Lemma 2].
In fact, they are contained in the same complex line because the two disks have closures with non-empty intersection and the domain has a smooth boundary. That is, if $L_{\alpha}:=\{(a_1\zeta+b_{\alpha},c_1\zeta+d_{\alpha}):\zeta\in \mathbb{C}\}$ and $L_{\beta}:=\{(a_2\zeta+b_{\beta},c_2\zeta+d_{\beta}):\zeta\in \mathbb{C}\}$ are one-parameter families of complex lines, depending continuously on the parameters $\alpha$ and $\beta$, that locally foliate the boundary, with $(L_{\alpha_0}\cap L_{\beta_0})\cap b\Omega\neq \emptyset$, then $a_1=a_2$. The argument uses the fact that boundary normal vectors must vary smoothly. Furthermore, one can conclude $L_{\alpha_0}=L_{\beta_0}$ since one can show $b_{\alpha_0}=b_{\beta_0}$ and $d_{\alpha_0}=d_{\beta_0}$. \[propconvex\] Let $\Omega\subset \mathbb{C}^2$ be a smooth bounded convex domain. Let $\{\Gamma_j\}_{j\in \mathbb{N}}$ be a collection of analytic disks in $b\Omega$ so that $$\nabla:=\overline{\bigcup_{j\in \mathbb{N}}\Gamma_j}$$ is connected. Then there exist a convex set $S$ and a non-constant holomorphic function $F:\mathbb{D}\rightarrow b\Omega$ so that $F$ is continuous up to $\overline{\mathbb{D}}$, $F(\mathbb{D})=S$, and $\nabla\subset \overline{S}$. By Lemma \[lembiholo\], there exists a complex line $L=\mathbb{C}\times \{0\}$ so that $\nabla\subset L$ and, by convexity of the domain, $L\cap \Omega=\emptyset$. Then the convex hull of $\nabla$, denoted $\mathcal{H}(\nabla)$, is contained in $L\cap\overline{\Omega}$. Since $\nabla$ contains a non-trivial analytic disk, the interior of $\mathcal{H}(\nabla)$ is non-empty. We denote this non-empty interior by $I$. Assume $\overline{I}\neq \mathcal{H}(\nabla)$. Let $z_0\in \mathcal{H}(\nabla)\setminus \overline{I}$. Then there is a positive Euclidean distance from $z_0$ to $\overline{I}$. Let $K$ denote the union of all line segments from $z_0$ to $bI$.
Then $K$ has non-empty interior, which contradicts the convexity of $\mathcal{H}(\nabla)$. Therefore, $I$ is a non-empty simply connected bounded open set in $\mathbb{C}$, so there is a biholomorphism from $\mathbb{D}$ to $I$ that extends continuously to $\overline{\mathbb{D}}$ by smoothness of the boundary of $\Omega$. Then Lemma \[lembiholo\] implies that any disk in the boundary of a bounded pseudoconvex complete Reinhardt domain $\Omega\subset \mathbb{C}^2$ is contained in a continuous family of analytic disks, called $\Gamma$. Furthermore, this continuous family can be represented as $$\Gamma=\{(e^{i\theta}F_1(\zeta), e^{i\theta}F_2(\zeta)):\theta\in [0,2\pi],\, \zeta\in \mathbb{D}\}$$ since $b\Omega$ is three (real) dimensional and $\Gamma$ locally foliates $b\Omega$. Locally Convexifiable Reinhardt domains in $\mathbb{C}^2$ ========================================================= \[almost\] Let $\Omega\subset\mathbb{C}^n$ be a bounded pseudoconvex complete Reinhardt domain. Then, $\Omega$ is almost locally convexifiable. That is, for every $(p_1,p_2,...,p_n)\in b\Omega\setminus(\{z_1=0\}\cup \{z_2=0\}\cup...\cup\{z_n=0\})$ there exist $r>0$ and a biholomorphism $L$ on $B((p_1,p_2,...,p_n),r)$ so that $L(B((p_1,p_2,...,p_n),r)\cap \Omega)$ is convex. Our understanding of analytic structure in the boundary of bounded convex domains is a crucial part of the proof of Theorem \[thmmain\]. The following proposition is proven in [@CuckovicSahutoglu09]. \[prophull\] Let $\Omega\subset \mathbb{C}^n$ be a bounded convex domain. Let $F:\overline{\mathbb{D}}\rightarrow b\Omega$ be a non-constant holomorphic map. Then the convex hull of $F(\mathbb{D})$ is an affine analytic variety. We note there are no analytic disks in the boundary of $B((p_1,p_2,...,p_n),r)$ because of convexity and the fact that Property (P) (see [@Cat]) is satisfied on the boundary. We define the following directional derivatives. We assume $\phi\in C(\overline{\Omega})$.
Let $\vec{U}=(u_1,u_2)$ be a unit complex tangential vector at $p:=(p_1,p_2)\in b\Omega$. Then, if they exist as pointwise limits, $$\partial_b^{\vec{U},p}\phi:=\lim_{t\rightarrow 0}\frac{\phi(p_1+tu_1,p_2+tu_2)-\phi(p_1,p_2)}{t}$$ and $$\overline{\partial}_b^{\vec{U},p}\phi:=\lim_{t\rightarrow 0}\frac{\phi(\overline{p_1+tu_1},\overline{p_2+tu_2})-\phi(\overline{p_1},\overline{p_2})}{t}.$$ The following lemma uses these directional derivatives to characterize when a continuous function $\phi$ is holomorphic ‘along’ analytic disks in the boundary of the domain. \[directional\] Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. Suppose $\phi\in C(\overline{\Omega})$. Then $\phi\circ g$ is holomorphic for any non-constant holomorphic $g:\mathbb{D}\rightarrow b\Omega$ if and only if for every $p\in g(\mathbb{D})$ and $\vec{U}$ tangent to $b\Omega$ at $p$, $$\partial_b^{\vec{U},p}\phi$$ exists as a pointwise limit and $$\overline{\partial}_b^{\vec{U},p}\phi=0.$$ Suppose $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$. The first case to consider is if $\overline{f(\mathbb{D})}$ intersects either coordinate axis. If $\overline{f(\mathbb{D})}$ intersects either coordinate axis, then by smoothness of $b\Omega$, $f(\mathbb{D})$ is contained in an affine analytic variety and is either vertical or horizontal. That is, $f(\mathbb{D})$ is contained in the biholomorphic image of $\mathbb{D}$. And so one can show $\partial_b^{\vec{U},p}\phi$ exists and $\overline{\partial}_b^{\vec{U},p}\phi=0$.\ Thus, we may now assume $f:=(f_1,f_2):\mathbb{D}\rightarrow b\Omega$ is holomorphic and neither $f_1$ nor $f_2$ is identically constant. This implies $\overline{f(\mathbb{D})}$ is away from both coordinate axes. Then $f(\mathbb{D})$ is contained in a family of analytic disks $\Gamma$ which foliate the boundary near $f(\mathbb{D})$. Let $p\in f(\mathbb{D})$.
By Lemma \[almost\] and Proposition \[prophull\], there exists a biholomorphism $T:B(p,r)\rightarrow \mathbb{C}^2$ so that $T(f(\mathbb{D}))\subset \mathbb{C}\times \{\alpha\}$ for some $\alpha\in [-1,1]$. Furthermore, we may assume $T\circ f:=g$ where $g=(g_1,\alpha)$ and $g_1:\mathbb{D}\rightarrow \mathbb{C}$ is a biholomorphism with a continuous extension to the unit circle. We may assume $g_1$ is a biholomorphism by Proposition \[propconvex\]. Let $\phi\circ T^{-1}=\widetilde{\phi}$. We will first show that the tangential directional derivative $\partial_b^{\vec{U},p}\widetilde{\phi}$ and the conjugate tangential directional derivative $\overline{\partial}_b^{\vec{U},p}\widetilde{\phi}$ exist on $T(\Gamma)\subset \{(z_1,\alpha): z_1\in \mathbb{C},\, \alpha\in [-1,1]\}$ and $\overline{\partial}_b^{\vec{U},p}\widetilde{\phi}=0$ on $T(\Gamma)$ if and only if $\widetilde{\phi}\circ g$ is holomorphic for any holomorphic $g$ so that $g(\mathbb{D})\subset T(\Gamma)$. First we suppose $\widetilde{\phi}\circ g$ is holomorphic and $g(\mathbb{D})\subset T(\Gamma)$. Then we consider a unit vector $\vec{U}=(u,0)$ so that $\vec{U}$ is tangent to $g(\mathbb{D})$. We may consider the restriction of $\widetilde{\phi}$ to $\overline{T(\Gamma)}$ to be a function of $(z_1,\overline{z_1},\alpha)$. That is, $$\widetilde{\phi}|_{\overline{T(\Gamma)}}=\widetilde{\phi}(z_1,\overline{z_1},\alpha).$$ Then for $({p_1},\alpha)\in g(\mathbb{D})$ we choose $t_0\in \mathbb{R}\setminus\{0\}$ so that for all $t$ with $|t_0|>|t|>0$ we have $({p_1+tu},\alpha)\in g(\mathbb{D})$.
Then, using the fact that $\widetilde{\phi}\circ g$ is holomorphic, we have $$\begin{aligned} &\frac{\widetilde{\phi}(p_1,\overline{p_1+tu},\alpha)-\widetilde{\phi}(p_1,\overline{p_1},\alpha)}{t}\\ =&\frac{\widetilde{\phi}(g_1\circ g_1^{-1}(p_1),\overline{g_1}\circ \overline{g_1^{-1}}(\overline{p_1+tu}),\alpha)-\widetilde{\phi}(g_1\circ g_1^{-1}(p_1),\overline{g_1}\circ \overline{g_1^{-1}}(\overline{p_1}),\alpha)}{t}\\ \rightarrow &\frac{\partial(\widetilde{\phi}\circ g\circ g_1^{-1})}{\partial \overline{z_1}}=0\\\end{aligned}$$ as $t\rightarrow 0$ and at $(p_1,\alpha)\in g(\mathbb{D})$. By a similar argument, it can be shown that $$\partial_b^{\vec{U},p}\widetilde{\phi}:=\lim_{t\rightarrow 0}\frac{\widetilde{\phi}({p_1+tu},\alpha)-\widetilde{\phi}({p_1},\alpha)}{t}$$ exists and is finite on $T(\Gamma)$. Next we assume $$\overline{\partial}_b^{\vec{U},p}\widetilde{\phi}:=\lim_{t\rightarrow 0}\frac{\widetilde{\phi}(\overline{p_1+tu},\alpha)-\widetilde{\phi}(\overline{p_1},\alpha)}{t}=0$$ on $T(\Gamma)$ and $$\partial_b^{\vec{U},p}\widetilde{\phi}:=\lim_{t\rightarrow 0}\frac{\widetilde{\phi}({p_1+tu},\alpha)-\widetilde{\phi}({p_1},\alpha)}{t}$$ exists and is finite on $T(\Gamma)$. Then $$\frac{\partial (\widetilde{\phi}\circ g)(\zeta)}{\partial \overline{\zeta}}=\partial_b^{\vec{U},p}\widetilde{\phi}\,\frac{\partial g}{\partial\overline{\zeta}}+\overline{\partial}_b^{\vec{U},p}\widetilde{\phi}\,\frac{\partial \overline{g}}{\partial\overline{\zeta}}=0,$$ so by composing $\widetilde{\phi}$ with $T$, we have that $\phi\circ f$ is holomorphic. \[approx\] Let $\Omega\subset \mathbb{C}^2$ be a bounded pseudoconvex complete Reinhardt domain with a smooth boundary. Suppose $\phi\in C(\overline{\Omega})$ is such that $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$.
Let $\Gamma\subset b\Omega$ be a continuous family of non-trivial analytic disks so that $\overline{\Gamma}$ is disjoint from the closure of any other non-trivial family of analytic disks in $b\Omega$. Then there exists $\{\psi_n\}_{n\in \mathbb{N}}\subset C^{\infty}(\overline{\Omega})$ so that the following holds. 1. $\psi_n\rightarrow \phi$ uniformly on $\overline{\Gamma}$ as $n\rightarrow \infty$. 2. $\psi_n\circ f$ is holomorphic for any holomorphic $f$ so that $f(\mathbb{D})\subset \Gamma$. Let $\nabla\subset b\Omega$ be a non-degenerate analytic disk so that $f(\mathbb{D})=\nabla$, where $f=(f_1,f_2)$ is holomorphic and continuous up to $\overline{\mathbb{D}}$. Furthermore, assume $\nabla$ is away from the coordinate axes. By Lemma \[lembiholo\] and Proposition \[prophull\], there is a local holomorphic change of coordinates $T$ so that $T(\nabla)$ is contained in an affine analytic variety. By Proposition \[propconvex\], we may assume $T(\nabla)$ is convex and $\overline{T(\nabla)}\subset \overline{T(\Gamma)}$, where $\Gamma$ is the continuous family of disks containing $f(\mathbb{D})$ and away from the closure of any other non-degenerate analytic disk. Then the restriction $\phi|_{\Gamma}=\phi(z_1,\alpha)$, where $z_1\in (T(U\cap b\Omega))\subset\{(z_1,z_2)\in \mathbb{C}^2:z_2=0\}$ and $\alpha\in [-1,1]$. Without loss of generality, extend $\phi$ as a continuous function on $\mathbb{C}^2$. As notation, $\mathbb{D}_{\frac{1}{n}}:=\{z\in \mathbb{C}:|z|<\frac{1}{n}\}$.
We let $\chi\in C^{\infty}_0(\mathbb{D})$ be so that $0\leq \chi\leq 1$, $\chi$ is radially symmetric, and $\int_{\mathbb{C}}\chi=1$.\ Similarly, we let $\widetilde{\chi}\in C^{\infty}_0(-1,1)$ be so that $0\leq \widetilde{\chi}\leq 1$, $\widetilde{\chi}$ is even, and $\int_{\mathbb{R}}\widetilde{\chi}=1$.\ Then we define the smooth mollifiers $\{\chi_n\}_{n\in \mathbb{N}}\subset C^{\infty}_0(\mathbb{D}_{\frac{1}{n}}\times \left(-\frac{1}{n},\frac{1}{n}\right))$ by $$\chi_n(z_1,\alpha):=n^3\chi(nz_1)\widetilde{\chi}(n\alpha).$$ Then, there exists a holomorphic change of coordinates $H:V\rightarrow \mathbb{C}^2$ so that $T(\Gamma)\subset V$ and $H(T(\Gamma))=\mathbb{D}_s\times (-1,1)$ for some fixed radius $s>0$. For every $n\in \mathbb{N}$, choose $0<r_n<1$ so that $$-1<r_n(\alpha-\beta)<1$$ and $$|r_n(z_1-\lambda)|<s$$ for every $(z_1,\alpha)\in \mathbb{D}_s\times (-1,1)$ and for all $$(\lambda,\beta)\in \mathbb{D}_{\frac{1}{n}}\times \left(-\frac{1}{n},\frac{1}{n}\right).$$ Then we define the convolution of $\phi\circ T^{-1}$ with $\{\chi_n\}$ in the following manner. $$\psi_n(z_1,\alpha):=\int_{\mathbb{C}\times \mathbb{R}}\phi\circ T^{-1}(r_n(z_1-\lambda),r_n(\alpha-\beta))\chi_n(\lambda,\beta)\,dA(\lambda)\,d\beta.$$ Let us extend $\psi_n$ trivially to $\mathbb{C}^2$ and denote this trivial extension again by $\psi_n$, abusing notation. Now, we have everything we need to show $\psi_n\circ g$ is holomorphic for every holomorphic $g:\mathbb{D}\rightarrow T(\Gamma)$. Using Lemma \[directional\], for every $n\in \mathbb{N}$, $$\lim_{t\rightarrow 0}\frac{\phi\circ T^{-1}(r_n(\overline{z_1+tu}-\lambda),r_n(\alpha-\beta))-\phi\circ T^{-1}(r_n(\overline{z_1}-\lambda),r_n(\alpha-\beta))}{t}=0$$ pointwise and $$\lim_{t\rightarrow 0}\frac{\phi\circ T^{-1}(r_n({z_1+tu}-\lambda),r_n(\alpha-\beta))-\phi\circ T^{-1}(r_n({z_1}-\lambda),r_n(\alpha-\beta))}{t}$$ exists and is finite for every $(\lambda,\beta)\in \mathbb{D}_{\frac{1}{n}}\times (-\frac{1}{n},\frac{1}{n})$.
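As a quick sanity check (our computation, not part of the original argument), the change of variables $w=nz_1$, $\tau=n\alpha$ confirms that each $\chi_n$ has total mass one, as a mollifier should:

```latex
\int_{\mathbb{C}\times \mathbb{R}}\chi_n(z_1,\alpha)\,dA(z_1)\,d\alpha
=\int_{\mathbb{C}\times \mathbb{R}} n^{3}\chi(nz_1)\,\widetilde{\chi}(n\alpha)\,dA(z_1)\,d\alpha
=\int_{\mathbb{C}}\chi(w)\,dA(w)\int_{\mathbb{R}}\widetilde{\chi}(\tau)\,d\tau
=1,
```

since $dA(z_1)=n^{-2}\,dA(w)$ and $d\alpha=n^{-1}\,d\tau$; hence each $\psi_n$ is a weighted average of scaled translates of $\phi\circ T^{-1}$.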
Therefore, using the fact that the $\chi_n$ are compactly supported and using the Lebesgue dominated convergence theorem, we have that $$\overline{\partial}_b^{\vec{U},p}\psi_n=0$$ for any unit vector $\vec{U}$ tangent to $T(\Gamma)$, for all $p\in T(\Gamma)$, and for all $n\in \mathbb{N}$. Furthermore, $${\partial}_b^{\vec{U},p}\psi_n$$ exists for any unit vector $\vec{U}$ tangent to $T(\Gamma)$, $p\in T(\Gamma)$, and $n\in \mathbb{N}$. Therefore, by Lemma \[directional\], the $\psi_n$ are holomorphic along analytic disks in $T(\Gamma)$. Furthermore, it can be shown that $\psi_n\circ T\rightarrow \phi$ uniformly on $\overline{\Gamma}$ as $n\rightarrow \infty$. Now if $\Gamma$ intersects the coordinate axes, then the analytic disks are horizontal or vertical by smoothness of $b\Omega$. So, we perform the convolution procedure as in [@ClosSahut] without using a holomorphic change of coordinates. For a linear operator $T:G\rightarrow H$ between Hilbert spaces, we define the essential norm as $$\|T\|_e:=\inf\{\|T-K\|\,:\,K:G\rightarrow H\,\text{ compact}\}.$$ \[enorm\] Let $\Omega\subset \mathbb{C}^n$ be a bounded convex domain. Suppose $\Gamma_{\Omega}\neq \emptyset$ is defined as above. Assume $\{\phi_n\}_{n\in \mathbb{N}}\subset C(\overline{\Omega})$ is so that $\phi_n\rightarrow 0$ uniformly on $\Gamma_{\Omega}$ as $n\rightarrow \infty$. Then, $\lim_{n\rightarrow \infty}\|H_{\phi_n}\|_e=0$. The next proposition is similar to the theorem in [@CuckovicSahutoglu09], with one major difference: there, the boundary was assumed to be smooth, whereas here we assume the boundary is only piecewise smooth. \[onefamily\] Let $\Omega\subset \mathbb{C}^2$ be a bounded convex domain so that the boundary of $\Omega$ contains no analytic disks except for one continuous family, called $\Gamma_{\Omega}$. Let $\phi\in C^{\infty}(\overline{\Omega})$ be so that $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$. Then, $H_{\phi}$ is compact on $A^2(\Omega)$.
Without loss of generality, we may assume $$\Gamma_{\Omega}\subset \{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}.$$ Assuming $\phi\circ f$ is holomorphic for any holomorphic $f:\mathbb{D}\rightarrow b\Omega$, one can show that the tangential directional derivative $\overline{\partial}_b \phi$ exists along $\Gamma_{\Omega}$. Furthermore, $\frac{\partial\phi}{\partial\overline{z_1}}=0$ on $\Gamma_{\Omega}$. We wish to construct a smooth function $\psi\in C^{\infty}(\overline{\Omega})$ so that $\psi \equiv \phi$ on $\Gamma_{\Omega}$ and $\overline{\partial}\psi=0$ on $\Gamma_{\Omega}$. To do this, we will use the idea of a defining function. There exists a smooth function $\rho\in C^{\infty}(\mathbb{C}^2)$ so that $\rho\equiv 0$ on $\overline{\{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}}$ and $|\nabla \rho|>0$ on $\overline{\{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}}$. Furthermore, by scaling the tangential and normal vector fields on $\overline{\{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}}$, we may assume $$\frac{\partial\rho}{\partial\overline{z_1}}\Big|_{\overline{\{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}}}=0$$ and $$\frac{\partial\rho}{\partial\overline{z_2}}\Big|_{\overline{\{(z_1,\alpha):z_1\in \mathbb{C}\, , \alpha\in (-1,1)\}}}=1.$$ Now we define $$\psi:=\phi-\rho\left(\frac{\partial\phi}{\partial\overline{z_2}}\right).$$ Then $\overline{\partial}\psi=0$ on $\Gamma_{\Omega}$ and also $\psi=\phi$ on $\Gamma_{\Omega}$. Then by Proposition \[enorm\], $\|H_{\phi-\psi}\|_e=0$ and so $H_{\phi-\psi}$ is compact on $A^2(\Omega)$. To show $H_{\psi}$ is compact, we use the fact that $\overline{\partial}\psi=0$ on $\Gamma_{\Omega}$ together with the same argument seen in [@CuckovicSahutoglu09] showing that $H_{\widetilde{\beta}}$ is compact if $\overline{\partial}\widetilde{\beta}=0$ on $\Gamma_{\Omega}$. Therefore we conclude $H_{\phi}$ is compact.
Proof of Theorem \[thmmain\] ============================ The idea is to use the following result, which will allow us to localize the problem. \[local\] Let $\Omega\subset \mathbb{C}^n$ for $n\geq 2$ be a bounded pseudoconvex domain and $\phi\in L^{\infty}(\Omega)$. If for every $p\in b\Omega$ there exists an open neighbourhood $U$ of $p$ such that $U\cap \Omega$ is a domain and $$H^{U\cap \Omega}_{R_{U\cap\Omega}(\phi)}R_{U\cap\Omega}$$ is compact on $A^2(\Omega)$, then $H^{\Omega}_{\phi}$ is compact on $A^2(\Omega)$. We will also use the following lemma appearing in [@CuckovicSahutoglu09]. \[bi\] Let $\Omega_1$ and $\Omega_2$ be bounded pseudoconvex subsets of $\mathbb{C}^n$. Suppose $\phi\in C^{\infty}(\overline{\Omega_1})$ is so that $H_{\phi}$ is compact on $A^2(\Omega_1)$. Let $T:\Omega_2\rightarrow \Omega_1$ be a biholomorphism with a smooth extension to the boundary. Then $H_{\phi\circ T}$ is compact on $A^2(\Omega_2)$. As we shall see, the collection $\Gamma_{\Omega}$ of all non-constant analytic disks in $b\Omega$ will play a crucial role in our understanding of the compactness of Hankel operators on various domains in $\mathbb{C}^n$ for $n\geq 2$. There are several cases to consider, depending on where $p\in b\Omega$ is located. 1. $p\in \Gamma_{\Omega}\subset b\Omega$ but away from the coordinate axes. 2. $p\in b\Omega\setminus \Gamma_{\Omega}$. 3. $p\in \{z_1=0\}\cup \{z_2=0\}$. We will first consider the case where $p$ is away from $\Gamma_{\Omega}$. We let $p:=(p_1,p_2)\in b\Omega$ and assume $p\in b\Omega\setminus \Gamma_{\Omega}$. So there exists $r>0$ sufficiently small so that the boundary $b(B(p,r)\cap \Omega)$ contains no analytic disks. Furthermore, there exists a biholomorphism $T:B(p,r)\rightarrow \mathbb{C}^2$ so that $T(B(p,r)\cap \Omega)$ is a convex domain. Therefore, since any analytic disk in $bT(B(p,r)\cap \Omega)$ must be the image (under $T$) of a disk in $b(B(p,r)\cap\Omega)$, there are no analytic disks in $bT(B(p,r)\cap \Omega)$.
By convexity and compactness of the $\overline{\partial}$-Neumann operator, the Hankel operator $$H_{\phi\circ T^{-1}}^{T(B(p,r)\cap \Omega)}$$ is compact on $A^2(T(B(p,r)\cap \Omega))$. And so this proves $H^{U\cap \Omega}_{R_{U\cap\Omega}(\phi)}$ is compact on $A^2(U\cap\Omega)$, where $U:=B(p,r)$.\ If $p\in (\{z_1=0\}\cup \{z_2=0\})\cap b\Omega$, then by smoothness of the domain, either $p$ is contained in an analytic disk, $p$ is a limit point of a sequence of analytic disks, or $p$ is contained in a part of the boundary satisfying Property (P). If $p\in b\Omega$ is contained in a non-degenerate analytic disk, then locally the analytic disks are horizontal or vertical, by smoothness of the domain. Without loss of generality, assume the family of analytic disks is vertical. So, using the argument in [@ClosSahut], we can approximate the continuous symbol $\phi$ uniformly on $\Gamma_{U\cap\Omega}$, for some ball $U$ centered at $p$, by a sequence of smooth functions $\psi_n$ so that $\psi_n$ is holomorphic along any analytic disk contained in $b(U\cap\Omega)$. As in [@ClosSahut], we use [@CuckovicSahutoglu09] and the uniform approximation on $\Gamma_{U\cap\Omega}$ to conclude that $H^{U\cap\Omega}_{\phi|_{U\cap\Omega}}$ is compact on $A^2(U\cap\Omega)$.\ Note that if $p\in b\Omega$ is contained in a part of the boundary satisfying Property (P) (see [@Cat]), then the local $\overline{\partial}$-Neumann operator $N_1^{U\cap\Omega}$ is compact since there exists a neighbourhood $U$ of $p$ so that $U\cap\Omega$ is convex, and so $H^{U\cap\Omega}_{\phi|_{U\cap\Omega}}$ is compact on $A^2(U\cap\Omega)$.\ Lastly, suppose $p\in b\Omega\setminus (\{z_1=0\}\cup \{z_2=0\})$ and $p\in \Gamma_{\Omega}$. We will first assume $p$ is contained in a limit set of a discrete sequence of families of analytic disks. We may assume discreteness due to Lemma \[lembiholo\], Proposition \[propconvex\], and smoothness of the boundary of $\Omega$.
Then by Lemma \[disklim\], this limit set equals exactly $\{p\}$. We will first assume $p$ is not contained in the closure of a single non-trivial analytic disk. Let $U:=B(p,r)$ be chosen so that $U\cap \Omega$ is a domain and $T(U\cap\Omega)$ is convex for some biholomorphism $T:U\rightarrow \mathbb{C}^2$. Denote this discrete collection of continuous families of analytic disks by $\{\Gamma_j\}_{j\in \mathbb{N}}\subset b(U\cap\Omega)$. Furthermore, we may assume $$\Gamma_{T(U\cap\Omega)}=\bigcup_{j\in \mathbb{N}}\Gamma_j.$$ Then $\{T(\Gamma_j)\}_{j\in \mathbb{N}}$ is a discrete collection of families of affine analytic disks. Then for each $j\in \mathbb{N}$ there exist pairwise disjoint open neighborhoods $V_j$ with strongly pseudoconvex boundaries so that $T(\Gamma_j)\subset V_j$. Let $\rho_j$ be smooth cutoff functions so that $\rho_j\equiv 1$ on a neighborhood of $T(\Gamma_j)$ and $\rho_j$ is compactly supported in $V_j$. Define $$\widetilde{\phi}_j:=\rho_j (\phi\circ T^{-1}-\phi(p_1,p_2)).$$ We wish to show $H_{ \widetilde{\phi}_j}$ are compact on $A^2(T(U\cap\Omega))$ for all $j\in \mathbb{N}$. By Proposition \[onefamily\] and Proposition \[approx\], we approximate $\phi\circ T^{-1}-\phi(p_1,p_2)$ by a sequence $\{\psi^j_n\}_{n\in \mathbb{N}}\subset C^{\infty}(\mathbb{C}^2)$ so that $\psi^j_n\rightarrow \phi\circ T^{-1}-\phi(p_1,p_2)$ uniformly on $\overline{T(\Gamma_j)}$ as $n\rightarrow \infty$ and the $\psi^j_n$ are holomorphic along $T(\Gamma_j)$. Then, $\rho_j\psi^j_n$ are holomorphic along any analytic disk in $bT(U\cap\Omega)$ for all $j,n\in \mathbb{N}$ and $\rho_j\psi^j_n\in C^{\infty}(\mathbb{C}^2)$. Fix $j, n\in \mathbb{N}$. Then, there exists a function $\delta_{j,n}\in C^{\infty}(\mathbb{C}^2)$ so that 1. $\overline{\partial}\delta_{j,n}=0$ on $\Gamma_{T(U\cap\Omega)}$. 2. $\delta_{j,n}=\rho_j\psi^j_n$ on $\Gamma_{T(U\cap\Omega)}$.
Therefore, by an argument similar to the proof of Proposition \[onefamily\], $H^{T(U\cap\Omega)}_{\delta_{j,n}}$, $H^{T(U\cap\Omega)}_{\rho_j\psi^j_n-\delta_{j,n}}$, and therefore $H^{T(U\cap\Omega)}_{\rho_j\psi^j_n}$ are compact on $A^2(T(U\cap\Omega))$ for all $j,n\in \mathbb{N}$.\ Furthermore, $$\rho_j\psi^j_n\rightarrow \widetilde{\phi}_j$$ uniformly on $\Gamma_{T(U\cap\Omega)}$ as $n\rightarrow \infty$. Then by convexity of $T(U\cap\Omega)$ and Proposition \[enorm\], $H_{\widetilde{\phi}_j}$ are compact on $A^2(T(U\cap\Omega))$ for all $j\in \mathbb{N}$. One can show that $$\alpha_N:=\sum_{j=1}^N\widetilde{\phi}_j$$ converges uniformly to $\phi\circ T^{-1}-\phi(p_1,p_2)$ on $\Gamma_{T(U\cap\Omega)}$ as $N\rightarrow \infty$. Also, $H_{\alpha_N}$ are compact on $A^2(T(U\cap\Omega))$ for all $N\in \mathbb{N}$, being finite sums of compact operators. Furthermore, $\alpha_N\in C^{\infty}(\overline{T(U\cap\Omega)})$ for all $N$. Then by Lemma \[bi\], $H_{\alpha_N\circ T}$ are compact on $A^2(U\cap\Omega)$ for all $N$, and so $H^{U\cap\Omega}_{\phi|_{U\cap\Omega}}$ is compact on $A^2(U\cap \Omega)$. So, we have the following. For all $p:=(p_1,p_2)\in b\Omega$ there exists $r>0$ so that $B(p,r)\cap\Omega$ is a domain and $$H^{B(p,r)\cap\Omega}_{\phi|_{B(p,r)\cap\Omega}}$$ is compact on $A^2(B(p,r)\cap \Omega)$. Then by composing with the restriction operator $R:A^2(\Omega)\rightarrow A^2(B(p,r)\cap\Omega)$, we have that $$H^{B(p,r)\cap\Omega}_{\phi|_{B(p,r)\cap\Omega}}R$$ is compact on $A^2(\Omega)$. Then by Proposition \[local\], $H_{\phi}$ is compact on $A^2(\Omega)$. Next, we assume there exists a non-trivial analytic disk $\Gamma_0\subset bT(U\cap\Omega)$ so that $p\in \overline{\Gamma_0}$ and $\{p\}$ is the limit set of $\{\Gamma_j\}_{j\geq 1}$.
Then we can represent $$\Gamma_{U\cap\Omega}=\bigcup_{j\geq 0,\,\theta\in [0,2\pi]}\{e^{i\theta}\Gamma_j\}.$$ For $0<r<1$ we define $$\Gamma_r:=\bigcup_{f(\mathbb{D})\subset \Gamma_{U\cap\Omega},\,\theta\in [0,2\pi]}\{e^{i\theta}f(r\mathbb{D})\}.$$ By convolving $\phi$ with a mollifier in $[0,2\pi]$, there exists $\{\tau_n\}_{n\in \mathbb{N}}\subset C(\overline{\Omega})$ so that $\tau_n\rightarrow \phi$ uniformly on $\overline{\Gamma_r}$ as $n\rightarrow \infty$, and for every $(z_1,z_2)\in \Gamma_r$ and every $\vec{T}$ complex tangent to $b(U\cap\Omega)$ at $(z_1,z_2)$, the directional derivative of $\tau_n$ in the direction of $\vec{T}$ at $(z_1,z_2)$ exists. Furthermore, by the smoothness of $\tau_n$ in the $\theta$ variable, the directional derivative in the complex normal direction at $(z_1,z_2)$ also exists. Thus $\tau_n$ satisfies the compatibility condition for the Whitney extension theorem. See [@stein] and [@mal] for more information on the Whitney extension theorem. Therefore, there exists $\widetilde{\tau}_n\in C^1(\overline{\Omega})$ so that $\widetilde{\tau}_n\equiv \tau_n$ on $\Gamma_r$ and both the tangential and normal directional derivatives of $\widetilde{\tau}_n$ agree with those of $\tau_n$. In particular, $\tau_n\circ f$ are holomorphic on $\mathbb{D}$ for any $n\in \mathbb{N}$ and $f(\mathbb{D})\subset \Gamma_{U\cap\Omega}$. Thus $H_{\tau_n}$ is compact on $A^2(\Omega)$ by [@CuckovicSahutoglu09] and Proposition \[enorm\]. And so, using Proposition \[enorm\] again and letting $r\rightarrow 1^-$, we conclude $H_{\phi|_{U\cap\Omega}}R_{U\cap\Omega}$ is compact on $A^2(\Omega)$. And so by Proposition \[local\], $H_{\phi}$ is compact on $A^2(\Omega)$.
--- abstract: 'We discuss the time evolution of quotations of stocks and commodities and show that corrections to the orthodox Bachelier model inspired by the quantum mechanical time evolution of particles may be important. Our analysis shows that traders’ tactics can interfere as waves do and traders’ strategies can be reproduced from the corresponding Wigner functions. The proposed interpretation of the chaotic movement of market prices implies that the Bachelier behaviour follows from short-time interference of tactics adopted (paths followed) by the rest of the world considered as a single trader, and that the Ornstein-Uhlenbeck corrections to the Bachelier model should qualitatively matter only for large time scales. The famous Smithian invisible hand is interpreted as a short-time tactics of the whole market considered as a single opponent. We also propose a solution to the currency preference paradox.' author: - | Edward W. Piotrowski\ Institute of Theoretical Physics, University of Białystok,\ Lipowa 41, Pl 15424 Białystok, Poland\ e-mail: <ep@alpha.uwb.edu.pl>\ Jan Sładkowski\ Institute of Physics, University of Silesia,\ Uniwersytecka 4, Pl 40007 Katowice, Poland\ e-mail: <sladk@us.edu.pl> title: Quantum diffusion of prices and profits --- Introduction ============ We have formulated a new approach to quantum game theory [@1]-[@3] that is suitable for the description of market transactions in terms of supply and demand curves [@4]-[@8]. In this approach quantum strategies are vectors (called states) in some Hilbert space and can be interpreted as superpositions of trading decisions. Tactics or moves are performed by unitary transformations on vectors in the Hilbert space (states). The idea behind using quantum games is to explore the possibility of forming linear combinations of amplitudes that are complex Hilbert space vectors (interference, entanglement [@3]) whose squared absolute values give probabilities of players’ actions.
It is generally assumed that a physical observable (e.g. energy, position), defined by the prescription for its measurement, is represented by a linear Hermitian operator. Any measurement of an observable produces an eigenvalue of the operator representing the observable with some probability. This probability is given by the squared modulus of the coordinate corresponding to this eigenvalue in the spectral decomposition of the state vector describing the system. This is often an advantage over a classical probabilistic description, where one always deals directly with probabilities. The formalism has potential applications outside physical laboratories [@4]. Strategies, and not the apparatus or installation for actual playing, are at the very core of the approach. Spontaneous or institutionalized market transactions are described in terms of projective operations acting on Hilbert spaces of strategies of the traders. Quantum entanglement is necessary (non-trivial linear combinations of vectors-strategies have to be formed) to strike the balance of trade. This approach predicts the property of indivisibility of attention of traders (no cloning theorem) and unifies the English auction with the Vickrey one, attenuating the motivation properties of the latter [@5]. Quantum strategies create unique opportunities for making profits during intervals shorter than the characteristic thresholds for an effective market (Brownian motion) [@5]. On such a market, prices correspond to Rayleigh particles approaching an equilibrium state. Although the effective market hypothesis assumes immediate price reaction to new information concerning the market, the information flow rate is limited by physical laws such as the constancy of the speed of light. Entanglement of states allows one to apply quantum protocols of super-dense coding [@6] and get ahead of a “classical trader”. Besides, a quantum version of the famous Zeno effect [@4] controls the process of reaching the equilibrium state by the market.
Quantum arbitrage based on such phenomena seems to be feasible. Interception of profitable quantum strategies is forbidden by the impossibility of cloning of quantum states. There are apparent analogies with quantum thermodynamics that allow one to interpret market equilibrium as a state with vanishing financial risk flow. Euphoria, panic or herd instinct often cause violent changes of market prices. Such phenomena can be described by non-commutative quantum mechanics. A simple tactics that maximizes the trader’s profit on an effective market follows from the model: [*accept profits equal to or greater than the one you have formerly achieved on average*]{} [@7].\ The player’s strategy $|\psi\rangle$[^1] belongs to some Hilbert space and has two important representations $\langle q|\psi\rangle\negthinspace$ (demand representation) and $\langle p|\psi\rangle\negthinspace$ (supply representation), where $q$ and $p$ are logarithms of prices at which the player is buying or selling, respectively [@4; @8]. After consideration of the following facts: - error theory: second moments of a random variable describe errors - M. Markowitz’s portfolio theory - L. Bachelier’s theory of options: the random variable $q^{2} + p^{2}$ measures the joint risk for a stock buying-selling transaction (and the Merton & Scholes work that earned them the Nobel Prize in 1997) we have defined canonically conjugate Hermitian operators (observables) of demand $\mathcal{Q}_k$ and supply $\mathcal{P}_k$ corresponding to the variables $q$ and $p$ characterizing the strategy of the $k$-th player. 
This led us to the definition of the observable that we call [*the risk inclination operator*]{}: $$H(\mathcal{P}_k,\mathcal{Q}_k):=\frac{(\mathcal{P}_k-p_{k0})^2}{2\,m}+ \frac{m\,\omega^2(\mathcal{Q}_k-q_{k0})^2}{2}\,, \label{hamiltonian}$$ where $p_{k0}\negthinspace:=\negthinspace\frac{ \phantom{}_k\negthinspace\langle\psi|\mathcal{P}_k|\psi\rangle_k } {\phantom{}_k\negthinspace\langle\psi|\psi\rangle_k}\,$, $q_{k0}\negthinspace:=\negthinspace\frac{ \phantom{}_k\negthinspace\langle\psi|\mathcal{Q}_k|\psi\rangle_k } {\phantom{}_k\negthinspace\langle\psi|\psi\rangle_k}\,$, $\omega\negthinspace:=\negthinspace\frac{2\pi}{\theta}\,$. $ \theta$ denotes the characteristic time of transaction [@7; @8] which is, roughly speaking, an average time spread between two opposite moves of a player (e.g. buying and selling the same commodity). The parameter $m\negthinspace>\negthinspace0$ measures the risk asymmetry between buying and selling positions. Analogies with the quantum harmonic oscillator allow for the following characterization of quantum market games. One can introduce the constant $h_E$ that describes the minimal inclination of the player to risk, $ [\mathcal{P}_k,\mathcal{Q}_k]=\frac{i}{2\pi}h_E$. As the lowest eigenvalue of the positive definite operator $H$ is $\frac{1}{2}\frac{h_E}{2\pi} \omega$, $h_E$ is equal to the product of the lowest eigenvalue of $H(\mathcal{P}_k,\mathcal{Q}_k) $ and $2\theta$. $2\theta $ is in fact the minimal interval during which it makes sense to measure the profit. Let us consider a simple market with a single commodity $\mathfrak{G}$. A consumer (trader) who buys this commodity measures his/her profit in terms of the variable $\mathfrak{w}\negthinspace=\negthinspace-\mathfrak{q}$. The producer who provides the consumer with the commodity uses $\mathfrak{w}\negthinspace=\negthinspace-\mathfrak{p}$ to this end. 
Analogously, an auctioneer uses the variable $\mathfrak{w}\negthinspace=\negthinspace\mathfrak{q}$ (we neglect the additive or multiplicative constant brokerage) and a middleman who reduces the store and sells twice as much as he buys would use the variable $\mathfrak{w}\negthinspace=\negthinspace-2\hspace{.1em}\mathfrak{p}-\mathfrak{q}$. Various subjects active on the market may manifest different levels of activity. Therefore it is useful to define a standard for the “canonical” variables $\mathfrak{p}$ and $\mathfrak{q}$ so that the risk variable [@8] takes the simple form $\tfrac{\mathfrak{p}^2}{2}\negthinspace+\negthinspace\tfrac{\mathfrak{q}^2}{2}$ and the variable $\mathfrak{w}$ measuring the profit of a concrete market subject dealing in the commodity $\mathfrak{G}$ is given by $$u\,\mathfrak{q}+v\,\mathfrak{p}+\mathfrak{w}(u,v)=0\,, \label{rowprosrad}$$ where the parameters $u$ and $v$ describe the activity. The dealer can modify his/her strategy $|\psi\rangle$ to maximize the profit, but this should be done within the specification characterized by $u$ and $v$. For example, let us consider a fundholder who restricts himself to purchasing realties. From his point of view, there is neither need nor opportunity to modify the supply representation of his strategy because this would not increase the financial gain from the purchases. One can easily show, by recalling the explicit form of the probability amplitude $|\psi\rangle\negthinspace\in\negthinspace\mathcal{L}^2$, that the triple $(u,v,|\psi\rangle)$ describes properties of the profit random variable $\mathfrak{w}$ gained from trade in the commodity $\mathfrak{G}$. 
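Before moving on, the characterization above can be illustrated numerically: discretizing the risk inclination operator on a grid, its lowest eigenvalue should approach the minimal inclination to risk, $\frac{1}{2}\frac{h_E}{2\pi}\omega$. The following is a minimal sketch (not from the original paper), assuming units $m=\omega=\hslash_E=1$ and $p_{k0}=q_{k0}=0$, so that the predicted lowest eigenvalue is $\frac{1}{2}$:

```python
import numpy as np

# Finite-difference sketch of H = P^2/2 + Q^2/2 acting on wave functions
# of the demand variable q (units m = omega = hbar_E = 1).
n, L = 1001, 10.0
q = np.linspace(-L, L, n)
h = q[1] - q[0]
# P^2/2 via the central second difference -(1/2) d^2/dq^2
kinetic = (np.diag(np.full(n, 1.0 / h**2))
           - 0.5 * np.diag(np.full(n - 1, 1.0 / h**2), 1)
           - 0.5 * np.diag(np.full(n - 1, 1.0 / h**2), -1))
H = kinetic + np.diag(q**2 / 2)
E0 = np.linalg.eigvalsh(H)[0]
# minimal inclination to risk: lowest eigenvalue hbar_E * omega / 2 = 1/2
assert abs(E0 - 0.5) < 1e-3
```

The same spectrum also reproduces the equal spacing of the eigenvalues quoted in footnote [^3].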
We will use the Wigner function $W(p,q)$ defined on the phase space $(p,q)$ $$\begin{aligned} W(p,q)&:=& h^{-1}_E\int_{-\infty}^{\infty}e^{i\hslash_E^{-1}p x} \;\frac{\langle q+\frac{x}{2}|\psi\rangle\langle\psi|q-\frac{x}{2}\rangle} {\langle\psi|\psi\rangle}\; dx\\ &=& h^{-2}_E\int_{-\infty}^{\infty}e^{i\hslash_E^{-1}q x}\; \frac{\langle p+\frac{x}{2}|\psi\rangle\langle\psi|p-\frac{x}{2}\rangle} {\langle\psi|\psi\rangle}\; dx,\end{aligned}$$ to measure the (pseudo-)probabilities of the player’s behaviour implied by his/her strategy $|\psi\rangle$ (the positive constant $h_E=2\pi\hslash_E$ is the dimensionless economical counterpart of the Planck constant discussed in the previous section [@4; @8]). Therefore if we fix values of the parameters $u$ and $v$ then the probability distribution of the random variable $\mathfrak{w}$ is given by a marginal distribution $W_{u,v}(w)dw$ that is equal to the Wigner function $W(p,q)$ integrated over the line $u\,p+v\,q+w=0$: $$W_{u,v}(w):=\iint\displaylimits_{\mathbb{R}^2}W(p,q)\,\delta(u\, q\negthinspace+\negthinspace v\,p\,\negthinspace+\negthinspace w,0) \,dpdq\, , \label{defiont}$$ where the Dirac delta function is used to force the constraint ($\delta(u\, q\negthinspace+\negthinspace v\,p\,\negthinspace+\negthinspace w,0)$). The above integral transform $(W\negthinspace:\negthinspace\mathbb{R}^2 \negthinspace\rightarrow\negthinspace \mathbb{R})\longrightarrow (W\negthinspace:\negthinspace\mathbb{P}^2\negthinspace\rightarrow\negthinspace\mathbb{R})$ is known as the Radon transform [@9]. Let us note that the function $W_{u,v}(w)$ is homogeneous of order $-1$, that is $$W_{\lambda u,\lambda v}(\lambda w)=|\lambda|^{-1}W_{u,v}(w)\,.$$ Some special examples of the (pseudo-)measure $W_{u,v}(w)dw$ were previously discussed in [@4; @8; @10]. 
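To make Eq. $(\ref{defiont})$ concrete, the marginal can be evaluated numerically for the ground-state strategy, whose Wigner function is the Gaussian $W(p,q)=\frac{1}{\pi}\text{e}^{-(p^2+q^2)}$ (in units $\hslash_E=1$). The sketch below (an illustration, not the authors' code) resolves the Dirac delta in $p$ and checks the homogeneity property stated above:

```python
import numpy as np

def wigner0(p, q):
    # ground-state Wigner function, W(p,q) = exp(-(p^2+q^2))/pi  (hbar_E = 1)
    return np.exp(-(p**2 + q**2)) / np.pi

def marginal(u, v, w, q=np.linspace(-9.0, 9.0, 4001)):
    # W_{u,v}(w): the delta in the Radon transform fixes p = -(u*q + w)/v
    # and contributes a factor 1/|v|
    f = wigner0(-(u * q + w) / v, q)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q)) / abs(v)

u, v, w = 0.6, 0.8, 0.5
# closed form for the Gaussian: exp(-w^2/(u^2+v^2)) / sqrt(pi*(u^2+v^2))
exact = np.exp(-w**2 / (u**2 + v**2)) / np.sqrt(np.pi * (u**2 + v**2))
assert abs(marginal(u, v, w) - exact) < 1e-6
# homogeneity of order -1: W_{lu,lv}(lw) = |l|^{-1} W_{u,v}(w)
lam = 2.5
assert abs(marginal(lam*u, lam*v, lam*w) - marginal(u, v, w) / lam) < 1e-6
```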
The squared absolute value of a pure strategy in the supply representation is equal to $W_{0,1}(p)$ ($|\langle p|\psi\rangle|^2=W_{0,1}(p)$) and in the demand representation the relation reads $|\langle q|\psi\rangle|^2=W_{1,0}(q)$. The marginal distribution $W_{u,v}(w)$ is positive definite for all values of $u$ and $v$. If we express the variables $u$ and $v$ in the units $\hslash_E^{-\frac{1}{2}}$ then the definitions of $W(p,q)$ and $W_{u,v}$ lead to the following relation between $W_{u,v}(w)$ and $\langle p|\psi\rangle$ or $\langle q|\psi\rangle$ for both representations[^2] [@11]: $$W_{u,v}(w)=\frac{1}{2\pi|v|}\Bigl|\int_{-\infty}^\infty\negthinspace\negthinspace \text{e}^{\frac{\text{i}}{2v}(up^2+2pw)}\langle p|\psi\rangle\, dp\,\Bigr|^2. \label{radonpsi}$$ The integral representation of the Dirac delta function $$\delta(uq\negthinspace+\negthinspace vp\negthinspace+\negthinspace w,0)=\frac{1}{2\pi}\int^\infty_{-\infty}\text{e}^{ \text{i}k(uq+vp+w)}dk \label{hraddel}$$ helps with finding the transformation inverse to $(\ref{defiont})$. The result is: $$W(p,q)=\frac{1}{4\pi^2}\iiint\displaylimits_{\mathbb{R}^3}\cos(uq\negthinspace+\negthinspace vp\negthinspace+\negthinspace w)\, W_{u,v}(w)\,dudvdw\,. \label{odwrrado}$$ Traders using the same strategy (or single traders that can adapt their moves to variable market situations) can form a sort of “tomographic picture” of their strategies by measuring profits from trading in the commodity $\mathfrak{G}$. These pictures would be influenced by various circumstances and characterized by values of $u$ and $v$. These data can be used for reconstruction of the respective strategies expressed in terms of the Wigner functions $W(p,q)$ according to the formula $(\ref{odwrrado})$. 
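Equation $(\ref{radonpsi})$ can likewise be checked numerically. For the ground-state strategy $\langle p|\psi\rangle=\pi^{-1/4}\text{e}^{-p^2/2}$ the marginal is known in closed form, $W_{u,v}(w)=\text{e}^{-w^2/(u^2+v^2)}/\sqrt{\pi(u^2+v^2)}$; the following sketch (illustrative only, assuming $\hslash_E=1$) evaluates the oscillatory integral directly:

```python
import numpy as np

p = np.linspace(-12.0, 12.0, 24001)
h = p[1] - p[0]
psi = np.pi**-0.25 * np.exp(-p**2 / 2)   # ground-state supply representation

def W_uv(u, v, w):
    # right-hand side of the fractional-Fourier-type relation between
    # the marginal W_{u,v}(w) and <p|psi>
    amp = h * np.sum(np.exp(1j * (u * p**2 + 2 * p * w) / (2 * v)) * psi)
    return abs(amp)**2 / (2 * np.pi * abs(v))

u, v, w = 0.6, 0.8, 0.5
exact = np.exp(-w**2 / (u**2 + v**2)) / np.sqrt(np.pi * (u**2 + v**2))
assert abs(W_uv(u, v, w) - exact) < 1e-6
```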
Example: marginal distribution of an adiabatic strategy ------------------------------------------------------- Let us consider the Wigner function of the $n$-th excited[^3] state of the harmonic oscillator [@12]$$W_n(p,q)dpdq=\frac{(-1)^n}{\pi\hslash_E}\thinspace e^{-\frac{2H(p,q)}{\hslash_E\omega}} L_n\bigl(\frac{4H(p,q)}{\hslash_E\omega}\bigr)dpdq\,,$$ where $L_{n}$ is the $n$-th Laguerre polynomial. We can calculate (cf. the definition $(\ref{defiont})$) the marginal distribution corresponding to a fixed-risk strategy (that is, the associated risk is not a random variable). We call such a strategy an adiabatic strategy [@4]. The identity [@13] $$\int_{-\infty}^\infty \text{e}^{\text{i}kw-\frac{k^2}{4}}\,L_n \Bigl(\frac{k^2}{2}\Bigr)\,dk=\frac{2\sqrt{\pi}}{2^{n}\,n!}\,\text{e}^{-w^2}H^2_n(w)\, ,$$ where $H_n(w)$ are the Hermite polynomials, Eq $(\ref{hraddel})$ and the generating function for the Laguerre polynomials, $\frac{1}{1-t}\thinspace \text{e}^\frac{xt}{t-1}=\sum_{n=0}^\infty L_n(x)\,t^n$, lead to $$W_{n,u,v}(w)=\frac{1}{2^{n}\,n!\,\sqrt{\pi(u^2+v^2)}}\,\,\text{e}^{-\frac{w^2}{u^2+v^2}} \,H^2_n\Bigl(\frac{w}{\sqrt{u^2+v^2}}\Bigr)=|\langle w|\psi_n\rangle|^2 \label{hradwkw}.$$ This is the squared absolute value of the probability amplitude expressed in terms of the variable $w$ (note that it integrates to unity for all $u$ and $v$). It should be possible to interpret Eq $(\ref{hradwkw})$ in terms of stochastic interest rates, but this is outside the scope of the present paper. Canonical transformations ========================= Let us call canonical those linear transformations $(\mathcal{P},\mathcal{Q})\negthinspace\rightarrow\negthinspace(\mathcal{P'},\mathcal{Q'})$ of the operators $\mathcal{P}$ and $\mathcal{Q}$ that do not change their commutator $\mathcal{P}\mathcal{Q}\negthinspace-\negthinspace\mathcal{Q}\hspace{.1em}\mathcal{P}$. 
The canonical transformations that preserve additivity of the supply and demand components of the risk inclination operator $\tfrac{\mathcal{P}^2}{2m}+\tfrac{m\mathcal{Q}^2}{2}\ $ [@4; @8] can be expressed in the compact form $$\begin{pmatrix} \mathcal{P}\\\mathcal{Q} \end{pmatrix}=\begin{pmatrix} \tfrac{\text{Re}\,z}{z\overline{z}}&\text{Im}\,z\vspace{.5ex}\\ -\tfrac{\text{Im}\,z}{z\overline{z}}&\text{Re}\,z \end{pmatrix} \begin{pmatrix} \mathcal{P'}\\\mathcal{Q'} \end{pmatrix}\,, \label{hradcytcyt}$$ where $z\negthinspace\in\negthinspace\overline{\mathbb{C}}$ is a complex parameter that is related to the risk asymmetry parameter $m$ by $m\negthinspace=\negthinspace z\overline{z}$. Changes in the absolute value of the parameter $z$ correspond to different proportions of distribution of the risk between buying and selling transactions. Changes in the phase of the parameter $z$ may result in mixing of supply and demand aspects of transactions. For example, the phase shift $\tfrac{\pi}{4}$ leads to the new canonical variables $\mathcal{P'}=\mathcal{Y}:=\tfrac{1}{\sqrt{2}}\,(\mathcal{P} \negthinspace-\negthinspace\mathcal{Q})$ and $\mathcal{Q'}=\mathcal{Z}:=\tfrac{1}{\sqrt{2}}\,(\mathcal{P}\negthinspace +\negthinspace\mathcal{Q})$. The new variable $\mathcal{Y}$ describes the arithmetic mean deviation of the logarithm of price from its expectation value in trading in the asset $\mathfrak{G}$. Accordingly, the new variable $\mathcal{Z}$ describes the profit made in one buying-selling cycle in trading in the asset $\mathfrak{G}$. Note that the normalization is forced by the requirement of canonicality of the transformations. In the following we will use the Schrödinger-like picture for the description of strategies. Therefore strategies will be functions of the variable $y$, the properly normalized value of the logarithm of the market price of the asset in question. 
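The canonicality of the transformation $(\ref{hradcytcyt})$ is equivalent to the transformation matrix having unit determinant, which is easy to verify for any $z$. A small illustrative sketch (not part of the original paper):

```python
import numpy as np

def canonical_matrix(z):
    # matrix of the (P,Q) -> (P',Q') transformation; preservation of the
    # commutator [P, Q] is equivalent to det = 1
    m = abs(z)**2          # the risk asymmetry parameter m = z * conj(z)
    return np.array([[z.real / m, z.imag],
                     [-z.imag / m, z.real]])

for z in [1 + 2j, 0.3 - 0.7j, np.exp(1j * np.pi / 4)]:
    assert abs(np.linalg.det(canonical_matrix(z)) - 1.0) < 1e-12

# |z| = 1 with phase pi/4 reproduces the mixing P' = (P-Q)/sqrt(2),
# Q' = (P+Q)/sqrt(2) quoted in the text (after inverting the matrix)
M = canonical_matrix(np.exp(1j * np.pi / 4))
assert np.allclose(np.linalg.inv(M), np.array([[1, -1], [1, 1]]) / np.sqrt(2))
```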
The dual description in terms of the profit variable $z$ is also possible and does not require any modification due to the symmetrical form of the risk inclination operator $H(\mathcal{Y},\mathcal{Z})$ [@4; @8]. The player’s strategy represents his/her actual position on the market. To insist on a distinction, we will define tactics as the way the player decides to change his/her strategy according to the acquired information, experience and so on. Therefore, in our approach, strategies are represented by vectors in Hilbert space and tactics are linear transformations acting on strategies (not necessarily unitary, because some information can drastically change the player’s behaviour!). Diffusion of prices =================== The classical description of the time evolution of the logarithm of the price of an asset is known as the Bachelier model. This model is based on the supposition that the probability density of the logarithm of the price fulfills a diffusion equation with an arbitrage-forbidding drift. Therefore we will suppose that the (quantum) expectation value of the arithmetic mean of the logarithm of the price of an asset $E(\mathcal{Y})$ is a random variable described by the Bachelier model. So the price variable $y$ has the properties of a particle performing a random walk that can be described as a Brownian particle at large time scales $t$ and as a Rayleigh particle at short time scales $\gamma$ [@16]. The superposition of these two motions gives a correct description of the behaviour of the random variable $y$. It seems that the parameters $t$ and $\gamma$ should be treated as independent variables because the first one parameterizes the evolution of the “market equilibrium state” and the second one parameterizes the “quantum” process of reaching the market equilibrium state [@7; @17]. Earlier [@14], we introduced canonical portfolios as equivalence classes of portfolios having assets with equal proportions. 
An external observer describes the moves performed by the portfolio manager as a draw in the following lottery. Let $p_{n}, n=1,...,N$ be the probability of the purchase of $w_{n}$ units of the $n$-th asset. Our analysis led us to a Gibbs-like probability distribution: $$p_{n}\left( c_{0},\dots ,c_{N}\right) = \frac{ \exp \left( \beta c_{n}w_{n} \right) } {\sum _{k=0}^{N}\exp \left( \beta c_{k}w_{k} \right) }\label{pkanon}.$$ The coefficient $c_{n}$ denotes the present relative price of a unit of the asset $\mathfrak{G}_{n}$, $c_{n}=\frac{u_{n}}{\overline{u}_{n}}$, where $u_{n}$ is the present price of the $n$-th asset and $\overline{u}_{n}$ its price at the moment of drawing. Now let us consider an analogue of the canonical Gibbs distribution function $$\text{e}^{-\gamma H(\mathcal{Y},\mathcal{Z})}, \label{htakryk}$$ where we have denoted the Lagrange multiplier by $\gamma$ instead of the more customary $\beta$ for later convenience. The analysis performed in Ref. [@1; @15] allows one to interpret $(\ref{htakryk})$ as a non-unitary tactics leading to a new strategy[^4]: $$\text{e}^{-\gamma H(\mathcal{Y},\mathcal{Z})}|\psi\rangle = |\psi ' \rangle. \label{dzialanie}$$ Therefore the parameter $\gamma$ can be interpreted as the inverse of the temperature ($\beta \sim (\text{temperature})^{-1}$) of a canonical portfolio that represents strategies of traders having the same risk inclination (cf. Ref. [@14]). These traders adopt tactics such that the resulting strategy forms a ground state of the risk inclination operator $H(\mathcal{Y},\mathcal{Z})$ (that is, they aim at the minimal eigenvalue). We call tactics characterized by a constant inclination to risk, $E(H(\mathcal{P},\mathcal{Q}))=\text{const}$, and maximal entropy, thermal tactics. Regardless of the possible interpretations, adoption of the tactics $(\ref{htakryk})$ means that traders have in view minimization of the risk (within the available information on the market). 
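The Gibbs-like lottery $(\ref{pkanon})$ is simply a softmax over the products $c_n w_n$; it can be sketched as follows (illustrative code with made-up asset data):

```python
import numpy as np

def purchase_probabilities(c, w, beta):
    # probability of purchasing w_n units of the n-th asset, given the
    # relative prices c_n and the Lagrange multiplier beta
    x = beta * np.asarray(c, float) * np.asarray(w, float)
    x -= x.max()                         # stabilize the exponentials
    e = np.exp(x)
    return e / e.sum()

c = [1.0, 0.9, 1.2]      # hypothetical relative prices u_n / u_n-bar
w = [2.0, 1.0, 0.5]      # hypothetical numbers of purchased units
p = purchase_probabilities(c, w, beta=1.5)
assert abs(p.sum() - 1.0) < 1e-12
# beta -> 0 is the maximal-entropy (uniform) lottery
assert np.allclose(purchase_probabilities(c, w, beta=0.0), 1.0 / 3.0)
```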
It is convenient to adopt such a normalization (we are free to fix the Lagrange multiplier) of the operator of the tactics that the resulting strategy is its fixed point. This normalization preserves the additivity property, $ \mathcal{R}_{\gamma_1+\gamma_2}\negthinspace=\negthinspace \mathcal{R}_{\gamma_2}\mathcal{R}_{\gamma_1} $, and allows consecutive (iterative) implementing of the tactics. The operator representing such thermal tactics takes the form ($\omega\negthinspace=\negthinspace\hslash_E\negthinspace=\negthinspace1$) $$\mathcal{R}_\gamma:=\text{e}^{-\gamma (H(\mathcal{Y},\mathcal{Z})-\frac{1}{2})}\, .$$ Note that the operator $H(\mathcal{Y},\mathcal{Z})-\frac{1}{2}$ annihilates the minimal risk strategy (remember that the minimal eigenvalue is $\frac{1}{2}$). The integral representation of the operator $\mathcal{R}_\gamma$ (heat kernel) acting on strategies $\langle y|\psi\rangle\negthinspace\in\negthinspace\mathcal{L}^2$ reads: $$\langle y|\mathcal{R}_\gamma\psi\rangle=\int_{-\infty}^{\infty} \negthinspace\negthinspace \mathcal{R}_\gamma(y,y') \langle y'|\psi\rangle dy', \label{forhradof}$$ where (the Mehler formula [@18]) $$\mathcal{R}_\gamma(y,y')=\tfrac{1}{\sqrt{\pi(1-\text{e}^{-2\gamma})}}\,\,\text{e}^{-\frac{y^2- {y'}^2}{2^{\vphantom{2}}}-\frac{(\text{e}^{-\gamma}y-y')^2}{1-\text{e}^{-2\gamma}}}\,.$$ $\mathcal{R}_\gamma(y,y')$ gives the probability density of a Rayleigh particle changing its velocity from $y'$ to $y$ during the time $\gamma$. Therefore the fixed point condition for the minimal risk strategy takes the form $$\int_{-\infty}^{\infty}\mathcal{R}_\gamma(y,y')\, \text{e}^{\frac{y^2-{y'}^2}{2}}dy'=1\,.$$ From the mathematical point of view, the tactics $\mathcal{R}_\gamma$ is simply an Ornstein-Uhlenbeck process. It is possible to construct a representation of the Hilbert space $\mathcal{L}^2$ in which the fixed point of the thermal tactics corresponds to a constant function. 
This is convenient because the “functional” properties are “shifted” to the probability measure $\widetilde{dy}\negthinspace:=\negthinspace \tfrac{1}{\sqrt{\pi}}\,\text{e}^{-y^2}\negthinspace dy$. After the transformation $\mathcal{L}^2(dy)\negthinspace\rightarrow\negthinspace \mathcal{L}^2(\widetilde{dy})$, eigenvectors of the risk inclination operator are given by Hermite polynomials (the transformation in question reduces to the multiplication of vectors in $\mathcal{L}^2$ by the function $\sqrt[4]{\pi}\,\text{e}^{\tfrac{y^2}{2}}$). Now Eq $(\ref{forhradof})$ takes the form: $$\widetilde{\langle y|\mathcal{R}_\gamma\psi\rangle}=\int_{-\infty}^{\infty}\negthinspace\negthinspace \widetilde{\mathcal{R}}_\gamma(y,y')\, \widetilde{\langle y'|\psi\rangle}\, \widetilde{dy'}\,,$$ where $$\widetilde{\mathcal{R}}_\gamma(y,y'):=\tfrac{1}{\sqrt{1-\text{e}^{-2\gamma}}}\,\text{e}^{{y'^2}- \frac{(\text{e}^{-\gamma}y-y')^2}{1-\text{e}^{-2\gamma}}}\,.$$ In this way we get the usual description of the Ornstein-Uhlenbeck process in terms of the kernel $\widetilde{\mathcal{R}}_\gamma(y,y')$ being a solution to the Fokker-Planck equation [@19]. “Classical” picture of quantum diffusion ======================================== Let us consider the integral kernel of the one-dimensional exponent of the Laplace operator $\text{e}^{-\frac{\gamma}{2}\, \frac{\partial^2}{\partial y^2}}\negthinspace$ representing the fundamental solution of the diffusion equation $$\frac{\partial f(y,\gamma)}{\partial \gamma}=\frac{1}{2}\, \frac{\partial^2 f(y,\gamma)}{\partial y^2}\,\,.$$ The kernel takes the following form $$\mathcal{R}^0_\gamma(y,y'):=\tfrac{1}{\sqrt{2\pi\gamma}}\, \text{e}^{-\frac{(y-y')^2}{2\gamma}}\,,$$ and the appropriate measure invariant with respect to $\mathcal{R}^0_\gamma(y,y')$ reads: $$dy_0:=\tfrac{1}{\sqrt{\pi\gamma}}\, \text{e}^{-\frac{y^2}{2\gamma}}dy\,.$$ The corresponding stochastic process is known as the Wiener-Bachelier process. 
In physical applications the variables $y$ and $\gamma$ are interpreted as position and time, respectively (Brownian motion). Let us define the operators $\mathcal{X}_k$ acting on $\mathcal{L}^2$ as multiplications by functions $x_k(y(\gamma_k))$ for successive steps $k\negthinspace=\negthinspace1,\ldots,n$ such that $-\tfrac{\gamma}{2}\negthinspace\leq\negthinspace\gamma_1\negthinspace\leq \negthinspace\ldots\negthinspace\leq\negthinspace\gamma_n\negthinspace \leq\negthinspace\frac{\gamma}{2}$. The corresponding (conditional) Wiener measure $dW^\gamma_{y,y'}$ for $ y\negthinspace=\negthinspace y(-\tfrac{\gamma}{2})$ and $y'\negthinspace=\negthinspace y(\tfrac{\gamma}{2})$ is given by the operator $$\int\negthinspace\prod\limits_{k=1}^n x_k(y(\gamma_k))\,dW^\gamma_{y,y'}:=\Bigl( \text{e}^{-\tfrac{\gamma_1+\gamma/2}{2}\tfrac{\partial^2}{\partial y^2}} \mathcal{X}_1 \text{e}^{-\tfrac{\gamma_2-\gamma_1}{2}\tfrac{\partial^2}{\partial y^2}} \mathcal{X}_2\cdots\mathcal{X}_n \text{e}^{-\tfrac{\gamma/2-\gamma_n}{2}\tfrac{\partial^2}{\partial y^2}} \Bigr)(y,y')\,.$$ If the operators $\mathcal{X}_k$ are constant ($x_k(y(\gamma_k))\negthinspace\equiv\negthinspace1$) then $$\int dW^\gamma_{y,y'}=\mathcal{R}^0_\gamma(y,y')\,.$$ The Wiener measure allows one to rewrite the integral kernel of the thermal tactics in the form [@18] $$\mathcal{R}_\gamma(y,y')=\int\mathcal{T}^{\prime }\,\text{e}^{-\int \limits_{-\gamma/2}^{\gamma/2} \frac{y^2(\gamma')-1}{2}\,\,d\gamma'}dW^\gamma_{y,y'} \label{feynmankac}$$ known as the Feynman-Kac formula, where $\mathcal{T}^{\prime }$ is the anti-time ordering operator. 
According to the quantum interpretation of path integrals [@20] we can expand the exponent function in Eq $(\ref{feynmankac})$ to get “quantum” perturbative corrections to the Bachelier model that result from interference[^5] of all possible classical scenarios of profit changes in the time spread $\gamma$, cf [@21].[^6] These quantum corrections are unimportant for short time intervals $\gamma\negthinspace\ll\negthinspace 1$ and the Ornstein-Uhlenbeck process resembles the Wiener-Bachelier one. This happens, for example, for “high temperature” thermal tactics and for disorientated markets (traders)[^7]. In effect, due to the cumulativity of dispersion during averaging for the normal distribution $\eta(x,\sigma^2)$ $$\int_{-\infty}^\infty\negthinspace\negthinspace\eta(x\negthinspace+\negthinspace y,\sigma^2_1)\, \eta(y,\sigma^2_2)\,dy=\eta(x, \sigma^2_1\negthinspace+\sigma^2_2)$$ the whole quantum random walk parameterized by $\gamma$ can be incorporated additively into the mobility parameter of the classical Bachelier model. This explains changes in the mobility of the logarithm of prices in the Bachelier model that follow, for example, from changes in the tactics temperature or received information. Therefore the intriguing phenomenon of market price evolution can be interpreted in a reductionistic way as a quantum process. In this case the Bachelier model is a consequence of a short-time tactics adopted by the Smithian invisible hand (under the perfect competition assumption all other traders can be considered as an abstract trader dealing with any single real trader) [@4]. From the quantum point of view the Bachelier behaviour follows from short-time interference of tactics adopted (paths followed) by the rest of the world considered as a single trader. 
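The cumulativity of dispersion used above is the standard convolution property of Gaussian densities; a quick numerical check (illustrative only, with $\eta(x,\sigma^2)$ taken as the centred normal density):

```python
import numpy as np

def eta(x, var):
    # centred normal density with variance var
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

x = np.linspace(-20.0, 20.0, 8001)
h = x[1] - x[0]
v1, v2 = 0.7, 1.8
# integral of eta(x0 + y, v1) * eta(y, v2) over y, at a few sample points x0
samples = np.array([-3.0, -1.0, 0.0, 0.5, 2.0])
conv = np.array([h * np.sum(eta(x0 + x, v1) * eta(x, v2)) for x0 in samples])
# dispersions add: the result is eta(x0, v1 + v2)
assert np.max(np.abs(conv - eta(samples, v1 + v2))) < 1e-10
```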
After a time $\gamma\negthinspace\ll\negthinspace1$, the collected information about the market results in a change of tactics that should lead the trader to the strategy being a ground state of the risk inclination operator (localized in the vicinity of the corrected expectation value of the price of the asset in question). This should be done in such a way that the actual price of the asset is equal to the expected price corrected by the risk-free rate of return (arbitrage-free martingale) [@22]. Both interpretations of the chaotic movement of market prices imply that the Ornstein-Uhlenbeck corrections to the Bachelier model should qualitatively matter only for large $\gamma$ scales. An attentive reader has certainly noticed that we have supposed that the drift of the logarithm of the price of an asset must be a martingale (that is typical of financial mathematics [@22]). Now suppose that we live in some imaginary state where the ruler is in a position to decree the exchange rate between the local currency $\mathfrak{G}$ and some other currency $\mathfrak{G'}$. The value of the logarithm of the price of $\mathfrak{G}$ (denoted by $\mathfrak{n}$) is proportional to the result of a measurement of the position of a one-dimensional Brownian particle. Any owner of $\mathfrak{G}$ will praise the ruler for such a policy and prefer $\mathfrak{G}$ to $\mathfrak{G'}$ because the price of $\mathfrak{G}$ in units of $\mathfrak{G'}$ will, on average, rise (the process $\exp \mathfrak{n}$ is a sub-martingale). For the same reasons a foreigner will be content with preferring $\mathfrak{G'}$ to $\mathfrak{G}$. This paradoxical currency-preference property of price drifts suggests that the common assumption about logarithms of asset prices being martingales should be carefully analyzed prior to investment. 
If one measures future profits from possessing $\mathfrak{G}$ with the anticipated change in the quotation of $\mathfrak{n}$, then the paradox is solved and the expectation values of the profits from possessing $\mathfrak{G}$ or $\mathfrak{G'}$ are equal to zero. Therefore the common reservation about using logarithms of exchange rates as martingales, made to avoid Siegel’s paradox, is fulfilled [@22] (cf. Bernoulli’s solution to the Petersburg paradox [@23]). Note that if we suppose that the price of an asset, and not its logarithm, is a martingale, then the proposed model of quantum price diffusion remains valid if we suppose that the observer’s reference system drifts with a suitably adjusted constant velocity (in the logarithm-of-price variable).\ Final remarks ============= We have proposed a model of price movements that is inspired by the quantum mechanical evolution of physical particles. The main novelty is the use of complex amplitudes whose squared moduli describe the probabilities. Therefore such phenomena as interference of tactics (strategies) are possible. The analysis of the movement of market prices implies that the Bachelier behaviour follows from short-time interference of tactics adopted by the rest of the world considered as a single trader, and that the Ornstein-Uhlenbeck corrections to the Bachelier model should qualitatively matter only for large time scales. Roughly speaking, traders dealing in the asset $\mathfrak{G}$ act as a sort of (quantum) tomograph and their strategies can be reproduced from the corresponding Wigner functions in a way analogous to the mathematical tomography used in medicine. Therefore we can speculate about the possibility of using the experience acquired in medicine, geophysics and radioastronomy to investigate the intricacies of supply and demand curves.\ [**Acknowledgments**]{} This paper has been supported by the [**Polish Ministry of Scientific Research and Information Technology**]{} under the (solicited) grant No [**PBZ-MIN-008/P03/2003**]{}. 
[99]{} D. Meyer, [*Quantum strategies*]{}, [*Phys. Rev. Lett. *]{}[**82**]{} (1999) 1052. J. Eisert, M. Wilkens, and M. Lewenstein, [*Quantum games and quantum strategies*]{}, [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 3077. E. W. Piotrowski, J. Sładkowski, [*The next stage: quantum game theory*]{}, in [*“Progress in Mathematical Physics Research”*]{}, Nova Science Publishers, Inc. (2004); quant-ph/0308027. E. W. Piotrowski and J. Sładkowski, [*Quantum market games*]{}, [*Physica*]{} [**A 312**]{} (2002) 208; quant-ph/0104006. H. E. Stanley et al, [*Quantifying economic fluctuations*]{}, [*Physica A*]{} 302 (2001) 126. M. A. Nielsen, I. L. Chuang, [*Quantum Computation and Quantum Information*]{}, Cambridge University Press, Cambridge (2000). E. W. Piotrowski and J. Sładkowski, [*The Merchandising Mathematician Model*]{}, [*Physica*]{} [**A 318**]{} (2003) 496, cond-mat/0102174. E. W. Piotrowski and J. Sładkowski, [*Quantum-like approach to financial risk: quantum anthropic principle*]{}, [*Acta Phys. Pol. *]{}[**B32**]{} (2001) 3873; quant-ph/0110046. S. Helgason, [*The Radon transform*]{}, Birkhäuser, Boston (1999). E. W. Piotrowski and J. Sładkowski, [*Quantum bargaining games*]{}, [*Physica A*]{} 308 (2002) 391; quant-ph/0107140. V. I. Man’ko and R. V. Mendes, [*Non-commutative time frequency tomography*]{}, [*Phys. Lett. *]{}[**A 263**]{} (1999) 53; physics/9712022. V. I. Tatarskii, Uspiekhi Fiz. Nauk [**139**]{} (1983) 587. V. I. Man’ko, [*Conventional quantum mechanics without wave function and density matrix*]{}, in [*New perspectives in quantum mechanics*]{}, eds. S. Hacyan et al, AIP (1999). E. W. Piotrowski and J. Sładkowski, [*The thermodynamics of portfolios*]{}, [*Acta Phys. Pol. *]{}[[**B**]{} 32 (2001) 597]{}. E. W. Piotrowski, J. Sładkowski and J. Syska, [*Interference of quantum strategies*]{}, [*Physica* ]{}[**A 318**]{} (2003) 516; quant-ph/0205087. N. G. van Kampen, [*Stochastic Processes in Physics and Chemistry*]{}, Elsevier, New York (1983). 
E. E. Haven, [*A discussion on embedding the Black-Scholes option pricing model in a quantum physics setting*]{}, [*Physica A*]{} 304 (2002) 507. J. Glimm and A. Jaffe, [*Quantum Physics. A Functional Integral Point of View*]{}, Springer-Verlag, New York (1981). , Dordrecht, Amsterdam (1997). H. Kleinert, [*Path integrals in quantum mechanics, statistics and polymer physics*]{}, World Scientific, Singapore (1995). M. Kac, [*Probability and Related Topics in Physical Sciences*]{}, Interscience, New York (1959). M. P. Taylor, [*The economics of exchange rates*]{}, [*Journ. of Econ. Lit.*]{} [**33**]{} (1995) 13. M. H. DeGroot, [*Optimal Statistical Decisions*]{}, McGraw Hill, New York (1970). [^1]: We use the standard Dirac notation. The symbol $|\ \rangle$ with a letter $\psi$ in it denoting a vector parameterized by $\psi$ is called a [*ket*]{}; the symbol $\langle\ |\negthinspace$ with a letter in it is called a [*bra*]{}. Actually a [*bra*]{} is a dual vector to the corresponding [*ket*]{}. Therefore scalar products of vectors take the form $\langle \phi |\psi\rangle\negthinspace$ ([*bracket*]{}) and the expectation value of an operator $A$ in the state $|\psi\rangle\negthinspace$ is given by $\langle \psi |A\psi\rangle\negthinspace$. [^2]: One must remember that switching the roles of $p$ and $q$ must be accompanied by switching $u$ with $v$. [^3]: Eigenvalues of the operator $H(\mathcal{P}_k,\mathcal{Q}_k)$ can be parameterized by natural numbers including 0. The $n$-th eigenvalue is equal to $n +\frac{1}{2}$ in units of $\hslash_E$. The lowest eigenvalue state is called the ground state; the others are called excited states. [^4]: If the numbers $c_{n}w_{n}$ are eigenvalues of some bounded below Hermitian operator $H$ then we get the statistical operator $\frac{\text{e}^{ -\beta H}}{Tr \text{e}^{ -\beta H} }$. 
The expectation value of any observable $\mathcal{X}$ is given by $\langle\mathcal{X}\rangle_H :=\frac{Tr \mathcal{X}\text{e}^{ -\beta H}}{Tr \text{e}^{ -\beta H }}$. [^5]: Roughly speaking, path integrals sum up all possible ways of evolution (“paths”) with phases (weights) resulting from interaction. [^6]: Note that in probability theory one measures the risk associated with a random variable by the squared standard deviation. According to this we could define the complex profit operator $\mathcal{A}:=\tfrac{1}{\sqrt{2}}\,(\mathcal{Y}+\text{i}\,\mathcal{Z})$. The appropriate risk operator would take the form $H(\mathcal{A}^\dag,\mathcal{A}) =\mathcal{A}^\dag\mathcal{A} +\tfrac{1}{2}$. [^7]: That is, the parameter $\gamma$ is very small (but positive).
--- abstract: 'In a recent paper, a new parametrization for the dark matter (DM) speed distribution $f(v)$ was proposed for use in the analysis of data from direct detection experiments. This parametrization involves expressing the *logarithm* of the speed distribution as a polynomial in the speed $v$. We present here a more detailed analysis of the properties of this parametrization. We show that the method leads to statistically unbiased mass reconstructions and exact coverage of credible intervals. The method performs well over a wide range of DM masses, even when finite energy resolution and backgrounds are taken into account. We also show how to select the appropriate number of basis functions for the parametrization. Finally, we look at how the speed distribution itself can be reconstructed, and how the method can be used to determine if the data are consistent with some test distribution. In summary, we show that this parametrization performs consistently well over a wide range of input parameters and over large numbers of statistical ensembles and can therefore reliably be used to reconstruct both the DM mass and speed distribution from direct detection data.' author: - 'Bradley J. Kavanagh' bibliography: - 'Model\_Indep.bib' title: 'Parametrizing the local dark matter speed distribution: a detailed analysis' --- Introduction ============ The dark matter (DM) paradigm has enjoyed much success in explaining a wide range of astronomical observations (for a review, see e.g. Ref. [@Bertone:2005]). As yet no conclusive evidence has been provided for the identity of particle DM, though there are a variety of candidates, including the supersymmetric neutralino [@Jungman:1996], sterile neutrinos [@Dodelson:1994], axions [@Duffy:2009] and the lightest Kaluza-Klein particle [@Kolb:1984]. Here, we focus on the search for particles which belong to the generic class of Weakly Interacting Massive Particles (WIMPs). 
Direct detection experiments [@Goodman:1985; @Drukier:1986] aim to measure the energies of nuclear recoils induced by WIMP DM in the Galactic halo. Under standard assumptions about the DM halo, these data can be used to extract the WIMP mass and interaction cross section, allowing us to check for consistency with other search channels (such as indirect detection [@Lavalle:2012] and collider experiments [@Battaglia:2010]) and to probe underlying models of DM. Direct detection experiments are traditionally analyzed within the framework of the Standard Halo Model (SHM), in which WIMPs are assumed to have a Maxwell-Boltzmann speed distribution in the Galactic frame. The impact of uncertainties in the WIMP speed distribution has been much studied (see e.g. Refs. [@Green:2010; @Peter:2011; @Fairbairn:2012]), leading to the conclusion that such uncertainties may introduce a bias into any reconstruction of the WIMP mass from direct detection data. As yet, the speed distribution is unknown, although a number of proposals have been put forward for its form, including analytic parametrizations (e.g. Ref. [@Lisanti:2010]) and distributions reconstructed from the potential of the Milky Way [@Bhattacharjee:2012] or from N-body simulations [@Vogelsberger:2009; @Kuhlen:2010; @Kuhlen:2012; @Mao:2012]. Recent results from N-body simulations which attempt to include the effects of baryons on structure formation also report the possible presence of a dark disk in the Milky Way [@Read:2009; @Read:2010; @Kuhlen:2013]. With such a wide range of possibilities, we should take an agnostic approach to the speed distribution, not only to avoid introducing bias into the analysis of data, but also with the hope of measuring the speed distribution and thereby probing the formation history of the Milky Way. Several methods of evading these uncertainties have been proposed. 
These include simultaneously fitting the parameters of the SHM and dark matter properties [@Strigari:2009; @Peter:2009], fitting to empirical forms of the speed distribution (e.g. Ref. [@Pato:2011]) and fitting to a self-consistent distribution function [@Pato:2013]. However, these methods typically require that the speed distribution can be well fitted by a particular functional form. More model-independent methods, such as fitting the moments of the speed distribution [@Drees:2007; @Drees:2008] or using a step-function speed distribution [@Peter:2011], have also been presented. However, these methods can still introduce a bias into the measurement of the WIMP mass and perform less well with the inclusion of realistic experimental energy thresholds. In a recent paper [@Kavanagh:2013a] (hereafter referred to as Paper 1), a new parametrization of the speed distribution was presented, which allowed the WIMP mass to be extracted from hypothetical direct detection data without prior knowledge of the speed distribution itself. Paper 1 demonstrated this for a WIMP of mass 50 GeV, using several underlying distribution functions. In the present paper, we extend this analysis to a wider range of masses. We also aim to demonstrate the statistical properties of the method and show how realistic experimental parameters affect its performance. Finally, we will also elaborate on some of the technical details of the method and assess its ability to reconstruct the underlying WIMP speed distribution. Section \[sec:DDRate\] of this paper explains the direct detection event rate formalism and presents the parametrization of the speed distribution introduced in Paper 1. In Sec. \[sec:ParameterRecon\], the methodology for testing the parametrization is outlined. In Section \[sec:Parametrization\], we consider the choice and number of basis functions for the method. We then study the performance of the method as a function of input WIMP mass (Sec. 
\[sec:mass\]) and when Poisson fluctuations in the data are taken into account (Sec. \[sec:stats\]). In Sec. \[sec:Recon\], we demonstrate how the speed distribution can be extracted from this parametrization and examine whether or not different distribution functions can be distinguished. Finally, we summarize in Sec. \[sec:Conclusions\] the main results of this paper. Direct detection event rate {#sec:DDRate} =========================== Dark matter direct detection experiments aim to measure the energies $E$ of nuclear recoils induced by interactions with WIMPs in the Galactic halo. Calculation of the event rate at such detectors has been much studied (e.g. Refs. [@Goodman:1985; @Drukier:1986; @Lewin:1996; @Jungman:1996]). For a target nucleus with nucleon number $A$, interacting with a WIMP of mass $m_\chi$, the event rate per unit detector mass is given by: $$\label{eq:Rate} \frac{\textrm{d}R}{\textrm{d}E} = \frac{\rho_0 \sigma_p}{2 m_\chi \mu_{\chi p}^2} A^2 F^2(E) \eta(v_\textrm{min})\,,$$ where $\rho_0$ is the local dark matter mass density, $\sigma_p$ is the WIMP-proton spin-independent cross section and the reduced mass is defined as $\mu_{A B} = m_A m_B/(m_A + m_B)$. The Helm form factor $F^2(E)$ [@Helm:1956] describes the loss of coherence of spin-independent scattering due to the finite size of the nucleus. A wide range of possible interactions have been considered in the literature, including inelastic [@Smith:2001], isospin-violating [@Kurylov:2003] and more general non-relativistic interactions [@Fan:2010; @Fitzpatrick:2012; @Fitzpatrick:2013]. We focus here on the impact of the WIMP speed distribution on the direct detection event rate. We therefore restrict ourselves to considering only spin-independent scattering, which is expected to dominate over the spin-dependent contribution for heavy nuclei, due to the $A^2$ enhancement in the rate. 
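To make the ingredients of Eq. \[eq:Rate\] concrete, here is a minimal numerical sketch of the reduced mass and the coherent $A^2$ enhancement. The proton mass is a standard value; the 50 GeV WIMP mass and xenon target are the benchmarks used later in the text.

```python
def reduced_mass(m_a, m_b):
    # mu_AB = m_A m_B / (m_A + m_B), as defined below Eq. (Rate); GeV
    return m_a * m_b / (m_a + m_b)

m_chi = 50.0   # GeV, benchmark WIMP mass used throughout the text
m_p = 0.938    # GeV, proton mass
A_xe = 131     # xenon nucleon number

mu_chi_p = reduced_mass(m_chi, m_p)  # enters the 1/(2 m_chi mu^2) prefactor
coherent_gain = A_xe**2              # spin-independent A^2 enhancement: 17161
```

For $m_\chi \gg m_p$ the WIMP-proton reduced mass saturates near $m_p$, which is why the prefactor depends only weakly on the WIMP mass at high masses.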
Information about the WIMP velocity distribution $f(\textbf{v})$ is encoded in the function $\eta$, sometimes referred to as the mean inverse speed, $$\label{eq:eta} \eta(v_\textrm{min}) = \int_{v > v_\textrm{min}} \frac{f(\textbf{v})}{v} \, \textrm{d}^3\textbf{v}\,,$$ where $\textbf{v}$ is the WIMP velocity in the reference frame of the detector. The integration is performed only over those WIMPs with sufficient speed to induce a nuclear recoil of energy $E$. The minimum required speed for a target nucleus of mass $m_N$ is $$\label{eq:v_min} v_\textrm{min}(E) = \sqrt{\frac{m_N E}{2\mu_{\chi N}^2}}\,.$$ We distinguish between the directionally averaged velocity distribution $$f(v) = \oint f(\textbf{v}) \, \textrm{d}\Omega_{\textbf{v}}\,,$$ and the 1-dimensional speed distribution $$f_1(v) = \oint f(\textbf{v}) v^2 \textrm{d}\Omega_{\textbf{v}}\,.$$ The distribution function should in principle be time-dependent, due to the motion of the Earth around the Sun. However, this is expected to be a percent-level effect (for a review, see e.g. Ref. [@Freese:2013]) and we therefore assume that $f_1(v)$ is time independent in the present work. We consider several benchmark speed distributions in this work, including the SHM and the SHM with the addition of a moderate dark disk which accounts for 23% of the total WIMP density [@Kuhlen:2013]. We model the speed distributions as combinations of Gaussian functions in the Earth frame $$\label{eq:gaussian} g(\textbf{v}) = N \exp\left(-\frac{(\textbf{v} - \textbf{v}_\textrm{lag})^2}{2\sigma_v^2}\right) \Theta(v_\textrm{esc} - |\textbf{v} - \textbf{v}_\textrm{lag}|)\,,$$ where $\textbf{v}_\textrm{lag}$ specifies the peak velocity of the distribution in the Earth frame and $\sigma_v$ the velocity dispersion. We truncate the distribution above the escape speed $v_\textrm{esc}$ in the Galactic frame and the factor $N$ is required to satisfy the normalization condition (Eq. \[eq:normalization\]). 
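As a hedged numerical sketch of Eqs. \[eq:v\_min\] and \[eq:eta\]: for an isotropic truncated Maxwell-Boltzmann halo (Eq. \[eq:gaussian\] with $\textbf{v}_\textrm{lag} = 0$, so the angular integral is trivial), the mean inverse speed can be evaluated by simple quadrature. The grid size and the xenon nuclear mass below are illustrative choices, not values fixed by the text.

```python
import numpy as np

C_LIGHT = 2.998e5  # speed of light in km/s

def trapz(y, x):
    # simple trapezoidal quadrature (avoids version-specific numpy names)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def v_min(E_keV, m_N, m_chi):
    # Eq. (v_min): minimum WIMP speed (km/s) giving a recoil of E_keV;
    # masses in GeV, recoil energy converted to GeV via the 1e-6 factor
    mu = m_N * m_chi / (m_N + m_chi)
    return np.sqrt(m_N * E_keV * 1e-6 / (2.0 * mu**2)) * C_LIGHT

# Isotropic truncated Maxwell-Boltzmann speed distribution:
# f_1(v) ~ v^2 exp(-v^2 / 2 sigma_v^2), cut off at v_esc = 544 km/s
v_grid = np.linspace(0.0, 544.0, 2000)
sigma_v = 156.0  # km/s, SHM dispersion from Tab. [tab:distributions]
f1 = v_grid**2 * np.exp(-v_grid**2 / (2.0 * sigma_v**2))
f1 /= trapz(f1, v_grid)  # enforce the normalization of f_1(v)

def eta(vm):
    # Eq. (eta): mean inverse speed, integrating only over v > v_min
    mask = v_grid > vm
    return trapz(f1[mask] / v_grid[mask], v_grid[mask])
```

For a 50 GeV WIMP on xenon ($m_N \approx 122$ GeV), this gives $v_\textrm{min}(7\,\textrm{keV}) \approx 175$ km/s, so a sizeable fraction of the halo lies above the xenon threshold, and $\eta$ decreases monotonically with $v_\textrm{min}$.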
We use the value $v_\textrm{esc} = 544 {\,\textrm{km s}^{-1}}$, which lies within the 90% confidence limits obtained from the RAVE survey [@RAVE:2007; @RAVE:2013]. In addition, we also use the speed distribution of Lisanti et al. [@Lisanti:2010], which has the following form in the Earth’s frame: $$\label{eq:lisanti} f(\textbf{v}) = N \left[\exp\left(\frac{v_\textrm{esc}^2 - |\textbf{v} - \textbf{v}_0|^2}{k v_0^2}\right) -1\right]^k \Theta(v_\textrm{esc} - |\textbf{v} - \textbf{v}_0|)\,.$$ We use the parameter values $k = 2$ and $v_0 = 220 {\,\textrm{km s}^{-1}}$ in this work. We summarize in Tab. \[tab:distributions\] the different speed distributions considered. We also plot several of these in Fig. \[fig:Ensemble\_distributions\] for reference.

  Speed distribution benchmark   Fraction   $v_\textrm{lag} / {\,\textrm{km s}^{-1}}$   $\sigma_v / {\,\textrm{km s}^{-1}}$
  ------------------------------ ---------- ------------------------------------------- -------------------------------------
  SHM                            1          220                                         156
  SHM+DD                         0.77       220                                         156
                                 0.23       50                                          50
  Stream                         1          400                                         20
  Bump                           0.97       220                                         156
                                 0.03       500                                         20
  Double-peak                    0.5        200                                         20
                                 0.5        400                                         20
  Lisanti et al.                            $v_0 = 220 {\,\textrm{km s}^{-1}}$          $k = 2$

  : Summary of the benchmark speed distributions considered in this work.[]{data-label="tab:distributions"}

![Several of the benchmark speed distributions used in this work. They are defined in Eqs. \[eq:gaussian\] and \[eq:lisanti\] with parameters from Tab. \[tab:distributions\]. These distributions are the SHM (solid blue), SHM+DD (dashed green), Lisanti et al. (dot-dashed red) and the stream (dotted magenta).[]{data-label="fig:Ensemble_distributions"}](SpeedDistributions-Ensemble.pdf){width="49.00000%"} In Paper 1, a parametrization for the WIMP speed distribution was introduced, for use in the analysis of direct detection data. The parametrization of Paper 1 has the form: $$\label{eq:parametrization} f_1(v) = v^2 \exp\left\{ -\sum_{k =0}^{N-1} a_k P_k(v)\right\}\,,$$ where $P_k(v)$ is some basis of polynomial functions of $v$. 
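A minimal sketch of evaluating this parametrization with shifted Legendre polynomials (the basis used in Paper 1), fixing $a_0$ by quadrature from the normalization condition of Eq. \[eq:normalization\]. The coefficient values and grid size below are arbitrary illustrations.

```python
import numpy as np
from numpy.polynomial import legendre

V_MAX = 1000.0  # km/s, cut-off for the parametrization

def shifted_legendre(v, k):
    # P_k(v) = L_k(2 v / v_max - 1), mapping [0, v_max] onto [-1, 1]
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return legendre.legval(2.0 * v / V_MAX - 1.0, coeffs)

def f1_param(v, a):
    # f_1(v) = v^2 exp{-sum_k a_k P_k(v)}: positive by construction
    exponent = sum(a_k * shifted_legendre(v, k) for k, a_k in enumerate(a))
    return v**2 * np.exp(-exponent)

def with_normalization(a_rest):
    # Fix a_0 by quadrature so that f_1 integrates to 1; this works
    # because P_0 = 1, so exp(-a_0) is an overall rescaling of f_1
    v = np.linspace(0.0, V_MAX, 4001)
    a = np.concatenate(([0.0], np.asarray(a_rest, dtype=float)))
    y = f1_param(v, a)
    integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * (v[1:] - v[:-1])))
    a[0] = np.log(integral)
    return a
```

Because the exponential of a real polynomial is always positive, no positivity constraint on the coefficients is needed during sampling.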
We fit the coefficients $\left\{a_1, ..., a_{N-1}\right\}$ using data, and fix $a_0$ by the normalization condition $$\label{eq:normalization} a_0 = \ln\left(\int_{0}^\infty v^2 \exp\left\{ -\sum_{k = 1}^{N-1} a_k P_k(v)\right\} \, \textrm{d}v\right)\,.$$ This form of parametrization ensures that the distribution function $f_1(v)$ is everywhere positive and can be used to fit an arbitrary underlying directionally-averaged distribution function (given a sufficiently large number of polynomial basis functions). We explore in Sec. \[sec:Parametrization\] which basis functions should be used in the parametrization, as well as how many basis functions are required. Parameter Reconstruction {#sec:ParameterRecon} ======================== In order to assess the performance of the parametrization method, we attempt to reconstruct the WIMP mass $m_\chi$ and polynomial coefficients $\left\{a_1, ..., a_{N-1} \right\}$ using the nested sampling software <span style="font-variant:small-caps;">MultiNest</span> [@MultiNest1; @MultiNest2; @MultiNest3]. We also include the WIMP-proton spin-independent cross section $\sigma_p$ as a free parameter. However, we are forced to treat the cross section as a nuisance parameter. As has previously been noted [@Kavanagh:2012; @Kavanagh:2013a], taking an agnostic approach to the DM speed distribution means that we do not know what fraction of WIMPs lie above the energy thresholds of the experiments. While this does not adversely impact the reconstruction of the WIMP mass, it does result in a strong degeneracy, such that only lower limits can be placed on the cross section using such methods. In any case, the cross section appears in the event rate (Eq. \[eq:Rate\]) in the degenerate combination $\rho_0 \sigma_p$. Uncertainties on the local DM density $\rho_0$ are at present on the order of a factor of 2 (see e.g. 
[@Iocco:2011; @Bovy:2012; @Zhang:2013; @Nesti:2013]) and thus any reconstruction of the cross section would be subject to the same systematic uncertainty. In this work, we focus instead on reconstructing the WIMP mass and the shape of the speed distribution. For concreteness, we use the values $\sigma_p = 10^{-45} \textrm{ cm}^2$ and $\rho_0 = 0.3 \textrm{ GeV cm}^{-3}$ throughout this work. Experimental benchmarks {#sec:experiments} ----------------------- In order to generate mock data sets, we consider three idealized mock experiments, loosely based on detectors which are currently in development. As previous work has shown [@Kavanagh:2012; @Peter:2013a], the WIMP mass and speed distribution are degenerate when data from only a single experiment are considered. However, this degeneracy can be broken by including data from additional experiments with different nuclear target masses. The three target materials we consider here are Xenon, Argon and Germanium. We describe each experiment in terms of its nucleon number $A$, fiducial detector mass $m_\textrm{det}$, efficiency $\epsilon$ and energy sensitivity window $\left[E_\textrm{min}, E_\textrm{max}\right]$. We incorporate the effects of detector sensitivity, analysis cuts and detector down-time into the value of the efficiency $\epsilon$, which we take to be energy independent for simplicity. We consider a total exposure time for all experiments of $t_\textrm{exp} = \textrm{ 2 years}$. The experimental parameter values used in this work are summarized in Tab. \[tab:experiments\]. 
  Experiment   Target Mass, $A$   Detector Mass (fid.), $m_\textrm{det}$/kg   Efficiency, $\epsilon$   Energy Range/keV
  ------------ ------------------ ------------------------------------------- ------------------------ -----------------------
  Xenon        131                1100 [@Aprile:2012a]                        0.7 [@Aprile:2012b]      7-45 [@Aprile:2010]
  Argon        40                 1000                                        0.9 [@Benetti:2007]      30-100 [@Grandi:2005]
  Germanium    73                 150 [@Bauer:2013b]                          0.6 [@Bauer:2013a]       8-100 [@Bauer:2013a]

  : Summary of the experimental parameter values used in this work.[]{data-label="tab:experiments"}

The exact parameter values we used in this work do not strongly impact the results we present. However, it is important to note that the total mass and exposure of the experiments will affect the total number of events observed. This in turn will affect the precision of the reconstructions. For example, we have chosen a total Argon mass of 1000 kg. This is the stated target for Argon-based experiments which are in development (e.g. Ref. [@Badertscher:2013]), though at present typical fiducial masses for Argon prototypes are of the order of 100 kg [@Grandi:2005]. The data we have generated do not represent the ‘high-statistics’ regime: across all three experiments the total number of events observed is roughly 200-300 with as few as 10 events in the Germanium detector for some scenarios. Using a smaller exposure (or equivalently a smaller interaction cross section) will reduce the precision of the results, but should not introduce any additional bias. We also briefly consider the impact of a *larger* number of events in Sec. \[sec:Recon\]. Parameter sampling ------------------ We make parameter inferences using a combination of Bayesian and frequentist statistics. 
Bayes’ theorem for the probability of a particular set of theoretical parameters $\boldsymbol{\Theta}$ given the observed data $\textbf{D}$ is: $$P(\boldsymbol{\Theta}|\textbf{D}) = \frac{P(\boldsymbol{\Theta}) P(\textbf{D}|\boldsymbol{\Theta})}{P(\textbf{D})}\,,$$ where $P(\boldsymbol{\Theta})$ is the prior on the parameters and $P(\textbf{D})$ is the Bayesian evidence, which acts as a normalizing factor and has no impact on parameter inference. We summarize the priors used in this work in Tab. \[tab:priors\]. We also summarize in Tab. \[tab:MultiNest\] the MultiNest sampling parameters used.

  Parameter                     Prior type    Prior range
  ----------------------------- ------------- -----------------------------------
  $m_\chi / \textrm{ GeV}$      log-flat      $\left[10^{0}, 10^{3}\right]$
  $\sigma_p / \textrm{ cm}^2$   log-flat      $\left[10^{-46}, 10^{-42}\right]$
  $\left\{a_k\right\}$          linear-flat   $\left[-50, 50\right]$
  $R_{BG} / \textrm{dru}$       log-flat      $\left[10^{-12}, 10^{-5}\right]$

  : Summary of the priors used in this work.[]{data-label="tab:priors"}

  Parameter           Value
  ------------------- -----------
  $N_\textrm{live}$   10000
  efficiency          0.25
  tolerance           $10^{-4}$

  : Summary of the MultiNest sampling parameters used in this work.[]{data-label="tab:MultiNest"}

The factor $P(\textbf{D}|\boldsymbol{\Theta})$ is simply the likelihood of the data given the parameters $\boldsymbol{\Theta}$. In Sec. \[sec:Parametrization\] and Sec. \[sec:mass\], we consider the effects of varying the form of the parametrization and of varying the input WIMP mass. In order to eliminate the effects of Poisson noise, we use Asimov data [@Cowan:2013] for these sections. This means that we divide the energy window of each experiment into bins of width 1 keV. We then set the observed number of events $N_{o,i}$ in bin $i$ equal to the expected number of events $N_{e,i}$. In this case, we use the binned likelihood, calculated for $N_b$ energy bins: $$\label{eq:binnedL} \mathcal{L}_b = \prod_{i = 1}^{N_b} \frac{N_{e,i}^{N_{o,i}} \textrm{e}^{-N_{e,i}}}{N_{o,i}!}\,.$$ In Sec. \[sec:stats\] and Sec. 
\[sec:Recon\], we consider many realisations of data, including the effects of Poisson noise. We therefore use the extended likelihood which has previously been used by both the Xenon [@Aprile:2011] and CDMS [@Ahmed:2009] collaborations, which for a single experiment is given by: $$\label{eq:unbinnedL} \mathcal{L} = \frac{N_e^{N_o} \textrm{e}^{-N_e}}{N_o!} \prod_{i = 1}^{N_o} P(E_i)\,,$$ where the expected number of events is given by: $$\label{eq:N_expected} N_e = \epsilon m_\textrm{det}t_\textrm{exp}\int_{E_\textrm{min}}^{E_\textrm{max}} \frac{\textrm{d}R}{\textrm{d}E}\, \textrm{d}E\,,$$ and the normalised recoil spectrum is given by: $$\label{eq:eventdistribution} P(E) = \frac{ \epsilon m_\textrm{det} t_\textrm{exp}}{N_e} \frac{\textrm{d}R}{\textrm{d}E}\,.$$ The total likelihood is then the product over all experiments considered. Using nested sampling, we can extract the full posterior probability distribution of the parameters $P(\boldsymbol{\Theta}|\textbf{D})$, as well as the likelihood $\mathcal{L}(\Theta)$. However, we often want to make inferences not jointly for all parameters but for only a subset (treating the remaining as nuisance parameters). If we conceptually partition the parameter space into the parameters of interest $\boldsymbol{\psi}$ and the remaining nuisance parameters $\boldsymbol{\phi}$, we would like to make inferences about the values of $\boldsymbol{\psi}$, without reference to the values of $\boldsymbol{\phi}$. One option for doing this is to calculate the marginalized posterior distribution, obtained by integrating the posterior probability over the parameters we are not interested in: $$P_m(\boldsymbol{\psi}) = \int P(\boldsymbol{\psi}, \boldsymbol{\phi}) \, \textrm{d}\boldsymbol{\phi}\,.$$ This method performs well for small numbers of observations (compared to the number of free parameters in the fit). 
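Both the binned likelihood of Eq. \[eq:binnedL\] and the extended likelihood of Eq. \[eq:unbinnedL\] are easy to evaluate in log form; a minimal sketch follows. Here `math.lgamma(n + 1)` generalizes $\ln(n!)$ to the non-integer counts that Asimov data sets produce, and the bin counts, event energies and flat spectrum are invented toy inputs, not outputs of the detector model.

```python
import math

def log_binned_likelihood(N_obs, N_exp):
    # Eq. (binnedL) in log form; lgamma(n + 1) generalizes ln(n!)
    # to the non-integer "observed" counts of an Asimov data set
    return sum(n_o * math.log(n_e) - n_e - math.lgamma(n_o + 1.0)
               for n_o, n_e in zip(N_obs, N_exp))

def log_extended_likelihood(energies, N_e, pdf):
    # Eq. (unbinnedL) in log form:
    # ln L = N_o ln N_e - N_e - ln(N_o!) + sum_i ln P(E_i)
    N_o = len(energies)
    logL = N_o * math.log(N_e) - N_e - math.lgamma(N_o + 1.0)
    return logL + sum(math.log(pdf(E)) for E in energies)

# Toy inputs: Asimov-style bin counts, and three events with a flat
# normalized spectrum on the 7-45 keV xenon window
asimov = [4.2, 3.1, 1.7, 0.6]
flat_pdf = lambda E: 1.0 / (45.0 - 7.0)
logL_binned = log_binned_likelihood(asimov, asimov)
logL_events = log_extended_likelihood([10.0, 22.0, 31.0], 3.0, flat_pdf)
```

The Poisson factor is maximized when the expected count matches the observed count, which is why the overall rate normalization (and hence $\sigma_p$) is driven mainly by the total number of events.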
We take the mode of the distribution to be the reconstructed parameter value and construct p% *minimal* credible intervals, which include those parameter values with $P_m(\boldsymbol{\psi}) \geq h$, where $h$ is chosen such that p% of the probability distribution lies within the interval. The marginalized posterior method is used in Sec. \[sec:stats\] and Sec. \[sec:Recon\], where in some cases the number of events observed in an experiment is less than 10. An alternative method is to calculate the profile likelihood. This is obtained by maximizing the full likelihood function over the nuisance parameters: $$\label{eq:profilelikelihood} \mathcal{L}_p(\boldsymbol{\psi}) = \max_{\boldsymbol{\phi}} \mathcal{L}(\boldsymbol{\psi},\boldsymbol{\phi})\,.$$ For a large number of observations, we can take the value which maximizes $\mathcal{L}_p$ as the reconstructed value and construct confidence intervals using the asymptotic properties of the profile likelihood. We use the profile likelihood for parameter inferences in Sec. \[sec:Parametrization\] and Sec. \[sec:mass\], as the Asimov data sets provide a large number of measurements of $N_{e,i}$ over a large number of bins. The profile likelihood can also lead to less noisy reconstructions than the marginalized posterior, especially when the dimensionality of the parameter space becomes high, as in Sec. \[sec:Parametrization\] and Sec. \[sec:mass\]. Testing the parametrization {#sec:Parametrization} =========================== We now consider the two questions: how many basis functions are required and which polynomial basis should be used? In order to answer these questions, we use the two benchmark distribution functions illustrated in Fig. \[fig:VaryingN\_distributions\]. We have chosen these benchmarks not because they are necessarily realistic distribution functions but because they should be difficult to fit using standard techniques and fitting functions (e.g. [@Lisanti:2010]). 
The first distribution (referred to as ‘bump’) is a SHM distribution with the addition of a small bump, which contributes just 3% of the total WIMP population and could correspond to a small sub-halo or stream [@Vogelsberger:2009]. This should be difficult to fit because it represents only a very small deviation from the standard scenario. The second distribution (referred to as ‘double-peak’) has a sharp and rapidly varying structure, which we anticipate should be difficult to capture using a small number of basis functions. ![Benchmark speed distributions used in Sec. \[sec:Parametrization\] to test the performance of the parametrization as a function of the number and type of basis functions.[]{data-label="fig:VaryingN_distributions"}](SpeedDistributions-VaryingN.pdf){width="49.00000%"} Varying the number of basis functions ------------------------------------- We first investigate how the reconstructed WIMP mass $m_\textrm{rec}$ and uncertainty vary with the number of basis functions $N$. For now, we fix our choice of basis to shifted Legendre polynomials, as used in Paper 1: $$P_k(v) = L_k\left(2\frac{v}{v_\textrm{max}} - 1\right)\,,$$ where $L_k$ is the Legendre polynomial of order $k$, and $v_\textrm{max}$ is a cut off for the parametrization. We should choose $v_\textrm{max}$ to ensure that $f_1(v)$ is negligible above the cut off. However, too high a choice of $v_\textrm{max}$ will result in $f_1(v)$ being close to zero over a large range of the parametrization, making fitting more difficult. We use the value $v_\textrm{max} = 1000 {\,\textrm{km s}^{-1}}$, which lies significantly above the Galactic escape speed. The lower panel of Fig. \[fig:BUMP\_LEG\] shows the best fit mass and 68% confidence intervals as a function of $N$, using as input a WIMP of mass 50 GeV and the ‘bump’ distribution function. The reconstructed mass very rapidly settles close to the true value, using as few as three basis functions. 
This is because adding the bump near $v \sim 500 {\,\textrm{km s}^{-1}}$ still leaves the mean inverse speed relatively smooth, so a large number of basis functions are not required. The correct mass is reconstructed and we emphasize in the lower panel of Fig. \[fig:BUMP\_LEG\] that the reconstruction is stable with the addition of more basis functions. We should also consider how the quality of the fit changes as a function of $N$. We would expect that adding fit parameters should always lead to a better fit. Eventually, the fit should be good enough that adding additional basis functions will no longer improve it significantly. We can then be confident that our reconstruction is accurate and not an artifact of using too few basis functions. In order to investigate this, we utilise the Bayesian Information Criterion (BIC) [@Schwarz:1978], which is given by: $$BIC = N_p\,\textrm{ln}(N_m) - 2\,\textrm{ln}(\mathcal{L}_\textrm{max}) \, ,$$ where $N_p$ is the number of free parameters, $N_m$ is the number of measurements or observations and $\mathcal{L}_\textrm{max}$ is the maximum likelihood value obtained in the reconstruction. For the case of binned data, $N_m$ corresponds simply to the total number of energy bins across all experiments. This criterion penalises the inclusion of additional free parameters and in comparing several models, we should prefer the one which minimises the BIC. ![Bayesian information criterion (BIC) as a function of the number of basis functions for an underlying ‘bump’ distribution function, 50 GeV WIMP and using Legendre polynomial basis functions (upper panel). Also shown (lower panel) are the reconstructed WIMP mass (dashed blue line), 68% confidence interval (shaded blue region) and underlying WIMP mass (solid horizontal black line).[]{data-label="fig:BUMP_LEG"}](VaryingN_BUMP_LEG.pdf){width="49.00000%"} The upper panel of Fig. 
\[fig:BUMP\_LEG\] shows the BIC (in arbitrary units) as a function of the number of basis functions for the ‘bump’ distribution function. The BIC is comparable for the cases of $N=2$ and $N=3$, indicating that the quality of the fit is improved slightly by the addition of another basis function. However, adding further basis functions does not have a significant impact on the maximum likelihood, leading to an increase in the BIC. This coincides with the stabilization of the reconstructed mass around the true value and we conclude that only two or three basis functions are required to provide a good fit to the data. Figure \[fig:DP\_LEG\] shows the corresponding results for the ‘double-peak’ distribution function. Here, we note that the bias induced by using too small a number of basis functions is larger than for the case of the ‘bump’ distribution, due to the more complicated structure in this case. The BIC is minimized for $N=7$, indicating that additional basis functions do not significantly improve the quality of the fit to data. This suggests that the shape of the speed distribution can be well fit by $N\geq7$ basis functions. As shown in the lower panel of Fig. \[fig:DP\_LEG\], the reconstruction of the WIMP mass is stable around the true mass for these values of $N$. ![As Fig. \[fig:BUMP\_LEG\] but for an underlying ‘double-peak’ distribution function.[]{data-label="fig:DP_LEG"}](VaryingN_DP_LEG.pdf){width="49.00000%"} We propose that such a procedure be used in the analysis of real data, should a dark matter signal be observed at multiple detectors. We have shown that by analyzing the reconstructed mass as a function of $N$ we can recover the true mass and that by using the BIC we can be confident that we have obtained an adequate fit to data. Choice of basis functions ------------------------- We now consider the second question posed at the start of Sec. \[sec:Parametrization\]: which polynomial basis should be used? 
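Before turning to that question, the BIC comparison used above is simple to sketch numerically. The maximum log-likelihood values below are hypothetical placeholders, the parameter count is an illustrative bookkeeping choice ($m_\chi$, $\sigma_p$ and the $N-1$ fitted coefficients), and we write the criterion in Schwarz's standard form $\textrm{BIC} = N_p \ln(N_m) - 2\ln\mathcal{L}_\textrm{max}$.

```python
import math

def bic(log_L_max, N_p, N_m):
    # Schwarz's criterion: penalizes extra free parameters;
    # the model with the smallest BIC is preferred
    return N_p * math.log(N_m) - 2.0 * log_L_max

# Hypothetical maximum log-likelihoods for N = 2, 3, 4 basis functions;
# N_p = N + 1 counts m_chi, sigma_p and the N - 1 fitted coefficients
max_logL = {2: -160.0, 3: -150.0, 4: -149.5}
N_m = 200  # e.g. total number of 1 keV bins across the three windows
best_N = min(max_logL, key=lambda n: bic(max_logL[n], n + 1, N_m))  # -> 3
```

In this toy example, going from $N=2$ to $N=3$ buys a large likelihood gain that outweighs the $\ln(N_m)$ penalty, while the marginal gain at $N=4$ does not, so the BIC selects $N=3$.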
We see immediately that a naive power series of the form $$\textrm{ln}f(v) \approx a_0 + a_1 v + a_2 v^2 + a_3 v^3 + ...\,,$$ is not practical for the purposes of parameter estimation. Higher powers of $v$ will have rapidly growing contributions to $\textrm{ln} f$, meaning that the associated coefficients must be rapidly decreasing in order to suppress these contributions. When fitting the SHM using just 5 terms, the values of the $a_k$ for a simple power series would span around 13 orders of magnitude. Ideally, we would like to specify an identical prior on each of the coefficients. However, in this scenario this would result in a highly inefficient exploration of the parameter space when some of the terms are so small. This problem can be significantly improved by rescaling $v$. We choose to rescale by a factor of $v_\textrm{max} = 1000 {\,\textrm{km s}^{-1}}$, and cut off the distribution function at $v_\textrm{max}$. The basis functions $(v/v_\textrm{max})^k$ are now less than unity by construction and the coefficients $a_k$ are now dimensionless: $$\textrm{ln}f(v) \approx a_0 + a_1 (v/v_\textrm{max}) + a_2 (v/v_\textrm{max})^2 + a_3 (v/v_\textrm{max})^3 + ...\,.$$ We now address the problem of *conditioning* of the polynomial basis (see e.g. Refs. [@Gautschi:1978; @Wilkinson:1984]). Conditioning is a measure of how much the value of a polynomial changes, given a small change in the coefficients. For a well-conditioned polynomial, small changes in the coefficients are expected to lead to small changes in the value of the polynomial. This is ideal for parameter estimation as it leads to a more efficient exploration of the parameter space. Orthogonal polynomial basis functions typically have improved conditioning [@Gautschi:1978] and we consider two specific choices: the Legendre polynomials, which have already been considered, and the Chebyshev polynomials. 
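A quick numerical check of the scaling problem described above, and of the boundedness of an orthogonal basis (here Chebyshev, via `numpy.polynomial.chebyshev`); the velocity grid is an illustrative choice.

```python
import numpy as np
from numpy.polynomial import chebyshev

V_MAX = 1000.0
v = np.linspace(0.0, V_MAX, 101)  # km/s

# Unscaled monomials v^k span 12 orders of magnitude over the fit
# range for k up to 4 alone, so the fitted coefficients would have
# to span a comparable (inverse) range
raw_peaks = [np.max(np.abs(v**k)) for k in range(5)]
spread = max(raw_peaks) / min(raw_peaks)  # 1e12

# Rescaled monomials (v / v_max)^k and Chebyshev polynomials T_k on
# the mapped interval [-1, 1] all peak at 1, so a common prior on
# the coefficients is reasonable
x = 2.0 * v / V_MAX - 1.0
cheb_peaks = [np.max(np.abs(chebyshev.chebval(x, np.eye(5)[k])))
              for k in range(5)]
```

The $10^{12}$ spread from the monomials alone is comparable to the roughly 13 orders of magnitude quoted above once the fitted SHM coefficients are included.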
The Chebyshev polynomials are used extensively in polynomial approximation theory [@Mason:2002] and are expected to be well conditioned [@Gautschi:1978]. We have checked that the reconstruction results using Chebyshev polynomials are largely indistinguishable from the case of Legendre polynomials for both the ‘bump’ and ‘double-peak’ distributions and as a function of $N$. This leads us to conclude that the accuracy of the reconstruction is independent of the specific choice of basis. However, the reconstruction was much faster in the case of the Chebyshev basis. This is illustrated in Fig. \[fig:times\], which shows the time taken for reconstruction of the ‘bump’ benchmark as a function of $N$. The time taken grows much more slowly for the Chebyshev basis (roughly as $N^2$) than for the Legendre basis (roughly as $N^3$). We have also checked that this difference is not an artifact of how we calculate the basis functions. These results indicate that this choice of basis provides both reliable and efficient reconstruction for the WIMP mass and we therefore use the Chebyshev basis in the remainder of this work. ![Time taken (using 4 processors in parallel) for the reconstruction of the ‘bump’ benchmark, as a function of number of basis functions. The time taken using the Chebyshev basis (blue squares) grows more slowly with $N$ than for the Legendre basis (red triangles).[]{data-label="fig:times"}](RunTimes.pdf){width="49.00000%"} Varying $m_\chi$ {#sec:mass} ================ In previous work [@Kavanagh:2013a], this parametrization method was only tested for a single WIMP mass of $50 \textrm{ GeV}$. Here, we extend this analysis to a wider range of WIMP masses. We generate Asimov data for WIMP masses of 10, 20, 30, 40, 50, 75, 100, 200 and 500 GeV and reconstruct the best fit WIMP mass $m_\textrm{rec}$ and 68% and 95% confidence intervals from the profile likelihood. We use the SHM as a benchmark distribution function and use a fixed number of $N=5$ basis functions. 
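The confidence intervals in these scans come from the profile likelihood of Eq. \[eq:profilelikelihood\]; a brute-force sketch with a purely illustrative quadratic toy likelihood (the grids and the correlated-Gaussian form are assumptions for the demo, not part of the analysis pipeline):

```python
import numpy as np

def profile_loglike(loglike, psi_grid, phi_grid):
    # Eq. (profilelikelihood): L_p(psi) = max_phi L(psi, phi),
    # here by exhaustive search over the nuisance grid
    return np.array([max(loglike(psi, phi) for phi in phi_grid)
                     for psi in psi_grid])

def toy_loglike(psi, phi):
    # correlated Gaussian toy: profiling out phi leaves -0.5 (psi - 1)^2
    return -0.5 * ((psi - 1.0)**2 + (phi - psi)**2)

psi_grid = np.linspace(-2.0, 4.0, 121)
phi_grid = np.linspace(-4.0, 6.0, 201)
Lp = profile_loglike(toy_loglike, psi_grid, phi_grid)
best_psi = psi_grid[np.argmax(Lp)]  # the profiled maximum, at psi = 1
```

In practice the profiling is done over the points visited by the nested sampler rather than over an explicit grid, but the construction is the same: for each value of the parameter of interest, keep the best likelihood found over the nuisance parameters.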
The results are shown in Fig. \[fig:VaryingM\], along with the line $m_\textrm{rec} = m_\chi$ for reference. For large values of $m_\chi$, the shape of the event spectrum becomes independent of $m_\chi$ [@Green:2008], which results in a widening of the confidence intervals as the WIMP mass increases. For low mass WIMPs, fewer events are observed in each bin, again resulting in wider confidence intervals. It should be noted that for this analysis we have used Asimov data, in which the exact (non-integer) number of events is recorded in each bin. For low mass WIMPs, this means that the spectrum (and therefore the correct WIMP mass) is still well reconstructed using Asimov data, in spite of the small number of events. The tightest constraints are obtained when the input WIMP mass is close to the masses of several of the detector nuclei (in the range 30-80 GeV). There also appears to be no bias in the WIMP mass: the reconstruction matches the true mass across all values considered. ![Reconstructed WIMP mass $m_\textrm{rec}$ (central dashed blue line) as a function of input WIMP mass $m_\chi$ as well as 68% and 95% intervals (inner and outer blue dashed lines respectively). The line $m_\textrm{rec} = m_\chi$ (solid red line) is also plotted for reference.[]{data-label="fig:VaryingM"}](VaryingM.pdf){width="49.00000%"} An alternative parametrization method was proposed in Ref. [@Kavanagh:2012], in which the *momentum* distribution of halo WIMPs was parametrized. For a given speed distribution, the corresponding momentum distribution may be broad and easily reconstructed for high mass WIMPs. However, for low mass WIMPs the momentum distribution would be much narrower, owing to their lower momenta. The momentum parametrization method therefore performs poorly for low mass WIMPs. The parametrization presented in this paper does not suffer from similar problems. So far, we have only considered idealized direct detection experiments. 
We now apply the method to more realistic mock detectors, taking into account the effects of finite energy resolution, as well as unrejected background events. We assume here that each experiment has a Gaussian energy resolution with fixed width $\sigma_E = 1 \textrm{ keV}$, such that the observed event rate for recoils of energy $E$ is given by: $$\frac{\textrm{d}R}{\textrm{d}E} = \int_{0}^{\infty} \frac{1}{\sqrt{2 \pi} \sigma_E}\exp\left\{-\frac{(E-E')^2}{2\sigma_E^2}\right\} \frac{\textrm{d}{R'}}{\textrm{d}E'} \, \textrm{d}E'\,,$$ where the primed event rate is the underlying (perfect resolution) rate. We also assume a constant flat background rate for each experiment $R_\textrm{BG} = 10^{-6}$ events/kg/keV/day (which has been suggested as a possible background rate for Xenon1T [@Aprile:2010] and WArP-100L [@Grandi:2005]) when generating mock data sets. However, we allow the flat background rate in each experiment to vary as a free parameter during the fit. We have chosen relatively generic resolution and background parameters in this work, because the precise details of energy resolution and background shape and rate will depend on the specific experiment under consideration. Instead, we hope to show that the inclusion of more realistic experimental setups does not introduce an additional bias or otherwise spoil the good properties of the method presented here. Figure \[fig:VaryingM\_real\] shows the reconstructed mass as a function of input mass in this more realistic scenario. The 68% and 95% confidence intervals are now wider and the reconstructed mass does not appear to be as accurate. For input masses above $\sim$100 GeV, the uncertainties become very wide, with only a lower limit of $m_\textrm{rec} > 20 \textrm{ GeV}$ being placed on the WIMP mass. Due to the poorer energy resolution the shape of the energy spectrum is less well-determined. In addition, a flat background contribution can mimic a higher mass WIMP, as it leads to a flatter spectrum. 
This leads to a strong degeneracy, as a wide range of mass values can provide a good fit to the data. For high input masses, the profile likelihood is approximately constant above $m_\textrm{rec} \sim 20 \textrm{ GeV}$, indicating that there is no sensitivity to the underlying WIMP mass. In spite of this, the true mass values still lie within the 68% and 95% confidence intervals. In addition, the poor values for the reconstructed mass for heavy WIMPs are a side effect of the loss of sensitivity. Because the profile likelihood is approximately flat, the maximum likelihood point is equally likely to be anywhere within the 68% interval. These effects would be present even if we had considered a fixed form for the speed distribution. However, when we allow for a range of possible speed distributions, the effects become more pronounced. These results show that for more realistic experimental scenarios, the method presented in this paper remains reliable over a range of masses, though its precision may be significantly reduced. ![As fig. \[fig:VaryingM\] but including the effects of finite energy resolution and non-zero backgrounds, as described in the text.[]{data-label="fig:VaryingM_real"}](VaryingM_real.pdf){width="49.00000%"} Statistical properties {#sec:stats} ====================== We now consider the impact of statistical fluctuations on the reconstruction of the WIMP mass. In reality, the number of events observed $N_o$ at a given experiment will be Poisson distributed about the expected value $N_e$, while the observed distribution of recoil energies will not exactly match that expected from the calculated event rate. The fundamental statistical limitations of future direct detection experiments have been studied in detail in Ref. [@Strege:2012]. In this work, we generate 250 realisations of data from the mock experiments described in Tab. \[tab:experiments\]. Each realisation of the mock data is generated as follows: 1. 
Calculate the number of expected events $N_e$, given $\left\{m_\chi, \sigma_p, f(v)\right\}$, using Eq. \[eq:N\_expected\], 2. Pick the number of observed events $N_o$ from a Poisson distribution with mean $N_e$, 3. Pick recoil energies $\left\{E_1, E_2, ..., E_{N_o}\right\}$, from the distribution $P(E)$ in Eq. \[eq:eventdistribution\], 4. Repeat for all three experiments. For each realisation, we then use the method described in Sec. \[sec:ParameterRecon\] (using $N = 5$ basis functions) to reconstruct the WIMP mass and 68% and 95% credible intervals. Figure \[fig:Realisations\] shows the distribution of reconstructed masses for an input mass of 50 GeV for three benchmark speed distributions: SHM, SHM+DD and Lisanti et al. as described in Sec. \[sec:DDRate\]. In all three cases, the reconstructions are peaked close to the true value, regardless of the underlying distribution. For the SHM+DD distribution, the spread of reconstructions is slightly wider (with more reconstructions extending up to higher masses). This is due to the smaller number of events for this benchmark, making the data sets more susceptible to Poisson fluctuations. In order to assess the accuracy of the reconstructed value of the mass $m_\textrm{rec}$, we also calculate the bias $b$ for each realisation: $$\label{eq:bias} b = \textrm{ln}(m_\textrm{rec} / \textrm{GeV}) - \textrm{ln}(m_\textrm{true} / \textrm{GeV})\,.$$ We compare the logarithms of the mass values because we have used logarithmically-flat priors on the WIMP mass. In Tab. \[tab:bias\] we show the average bias across all 250 realisations for each of the three benchmark distributions. In all three cases, the average bias is consistent with zero. Even in the SHM+DD case, which shows larger fluctuations away from the true value, there is no statistical bias. ![Distribution of the reconstructed mass $m_\textrm{rec}$ for 250 mock data sets generated using several benchmark speed distributions, defined in Sec. \[sec:DDRate\]. 
These are the SHM (top), SHM+DD (middle) and Lisanti et al. (bottom) distributions. The input WIMP mass of $m_\chi = 50 \textrm{ GeV}$ is shown as a vertical dashed red line.[]{data-label="fig:Realisations"}](SHM_ensemble.pdf "fig:"){width="49.00000%"} ![Distribution of the reconstructed mass $m_\textrm{rec}$ for 250 mock data sets generated using several benchmark speed distributions, defined in Sec. \[sec:DDRate\]. These are the SHM (top), SHM+DD (middle) and Lisanti et al. (bottom) distributions. The input WIMP mass of $m_\chi = 50 \textrm{ GeV}$ is shown as a vertical dashed red line.[]{data-label="fig:Realisations"}](DD_ensemble.pdf "fig:"){width="49.00000%"} ![Distribution of the reconstructed mass $m_\textrm{rec}$ for 250 mock data sets generated using several benchmark speed distributions, defined in Sec. \[sec:DDRate\]. These are the SHM (top), SHM+DD (middle) and Lisanti et al. (bottom) distributions. The input WIMP mass of $m_\chi = 50 \textrm{ GeV}$ is shown as a vertical dashed red line.[]{data-label="fig:Realisations"}](LIS_ensemble.pdf "fig:"){width="49.00000%"} [m[1in]{}|c]{} Benchmark speed distribution & Mean bias $\langle b \rangle$\ SHM & 0.002 $\pm$ 0.008\ SHM+DD & 0.005 $\pm$ 0.007\ Lisanti et al. & 0.01 $\pm$ 0.01\ We also test the *coverage* of the credible intervals which have been constructed. For a $p\%$ credible interval, we expect that the true parameter value of the WIMP mass will lie within the interval in $p\%$ of realisations. In this case, we say that the method provides *exact coverage*. However, if the true parameter lies within the interval in fewer than $p\%$ of realisations, our reconstructed credible intervals are too narrow and provide *undercoverage*. Alternatively, we obtain *overcoverage* when the true parameter lies within the interval more often than $p\%$ of the time. Table \[tab:coverage\] shows the coverage values for the $68\%$ and $95\%$ intervals obtained in this section.
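The bias of Eq. \[eq:bias\] and the coverage test can be sketched in a few lines. The reconstruction itself is replaced here by an invented toy scatter and a fixed interval width, so all numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = 50.0        # input WIMP mass in GeV
n_real = 250         # number of mock realisations

# Toy stand-in for the full reconstruction chain: each realisation
# returns a best-fit mass and a 68% credible interval of fixed
# half-width in log-mass (both choices are illustrative only).
m_rec = m_true * np.exp(rng.normal(0.0, 0.1, n_real))
half = 0.15
lo, hi = m_rec * np.exp(-half), m_rec * np.exp(half)

# Bias: difference of log-masses, averaged over realisations
b = np.log(m_rec) - np.log(m_true)
mean_bias = b.mean()
err = b.std(ddof=1) / np.sqrt(n_real)    # standard error on the mean

# Coverage: fraction of realisations whose interval contains the truth
coverage = np.mean((lo <= m_true) & (m_true <= hi))
```

With these toy numbers the expected coverage is about 87% rather than 68%; in the paper the intervals come from the actual posterior, so exact coverage is a non-trivial check of the method.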
In each case, the coverage is very close to exact. We have also checked that these intervals only provide exact coverage for the true WIMP mass of 50 GeV. Other values of $m_\textrm{rec}$ are contained within the intervals less frequently than the true value, again indicating that this parametrization allows for unbiased and statistically robust reconstructions of the WIMP mass. [m[1in]{}|cc]{} Benchmark speed distribution & 68% coverage & 95% coverage\ SHM & 71 $\pm$ 3 % & 94 $\pm$ 3 %\ SHM+DD & 68 $\pm$ 3 % & 91 $\pm$ 4 %\ Lisanti et al. & 70 $\pm$ 3 % & 95 $\pm$ 3 %\ Reconstructing $f_1(v)$ {#sec:Recon} ======================= Using the method described in this paper, we can obtain the posterior probability distribution for the coefficients $\left\{ a_1, ..., a_{N-1}\right\}$ given the data, which we refer to as $P(\textbf{a})$. We would like to be able to present this information in terms of the distribution function $f_1(v)$ in order to compare with some known distribution or look for particular features in the distribution. However, because the distribution function is normalized, the values of $f_1$ at different speeds will be strongly correlated. We illustrate here how robust comparisons with benchmark distributions can be made. As a first step, we can attempt to sample from $P(\textbf{a})$ in order to obtain $P(f_1(v))$. This is the probability distribution for the value of $f_1$ at a particular speed $v$, marginalizing over the values of $f_1$ at all other speeds. We can repeat this for a range of speeds to obtain 68% and 95% credible intervals for the whole of $f_1(v)$. The result of this procedure is presented in Fig. \[fig:f\], for a randomly selected realisation from the SHM ensemble of Sec. \[sec:stats\]. The underlying SHM distribution is shown as a solid line, while the 68% and 95% marginalized intervals are shown as dark and light shaded regions respectively.
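The sampling step just described can be sketched as follows. The polynomial form below is a toy stand-in for the actual parametrization of Paper 1, and the "posterior samples" are random numbers rather than genuine fit output:

```python
import numpy as np

rng = np.random.default_rng(2)
v = np.linspace(0.0, 800.0, 200)          # speed grid in km/s

# Stand-in posterior samples of the coefficients a_k (random here;
# in the analysis they would be drawn from P(a) by the sampler).
samples = rng.normal(0.0, 0.05, size=(500, 4))

def f1_sample(a, v, v_scale=1000.0):
    """Toy parametrization: the log of the speed distribution is a
    polynomial in v/v_scale; each sampled curve is normalized to one."""
    x = v / v_scale
    log_f = -(4.0 * x) ** 2 - sum(a_k * x ** (k + 1) for k, a_k in enumerate(a))
    f = v ** 2 * np.exp(log_f)
    return f / np.trapz(f, v)

curves = np.array([f1_sample(a, v) for a in samples])
lo68, hi68 = np.percentile(curves, [16.0, 84.0], axis=0)  # marginal 68% band
```

Evaluating the band speed-by-speed in this way is exactly the marginalization over the values of $f_1$ at all other speeds.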
In this naive approach, we see that there is little shape information which can be recovered from the reconstruction, with only upper limits being placed on the speed distribution. ![Reconstructed speed distribution for a single realisation of data, generated for a 50 GeV WIMP. 68% and 95% credible intervals are shown as dark and light shaded regions respectively, while the underlying SHM distribution function is shown as a solid blue line.[]{data-label="fig:f"}](f_SHM.pdf){width="49.00000%"} This method performs poorly because, as initially mentioned in Sec. \[sec:ParameterRecon\], we have no information about the fraction of dark matter particles below the energy threshold of our experiments. If this fraction is large, the event rate for a given cross-section is suppressed. However, increasing the cross-section will increase the total event rate. There is thus a degeneracy between the shape of the speed distribution and the cross-section, meaning that we can only probe the shape of $f_1(v)$, rather than its overall normalization. This degeneracy has not been accounted for in Fig. \[fig:f\]. We can attempt to correct for this by adjusting the normalization of $f_1(v)$. If we fix $f_1(v)$ to be normalized to unity above $v_a$ (where $v_a \approx 171 {\,\textrm{km s}^{-1}}$ is the lowest speed probed by the experiments for a WIMP of mass 50 GeV), we can compare the shapes of the underlying and reconstructed distribution functions. This is illustrated in Fig. \[fig:f\_scaled\], which shows that we now broadly reconstruct the correct shape of $f_1(v)$. Below $v_a$, the value of $f_1(v)$ is poorly constrained, because the experiments provide no information about the shape of the distribution below threshold. There remain several issues with this approach. In order to utilize this method, we must know the approximate value of the lowest speed probed by the experiments. However, this value is set by the WIMP mass.
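For reference, the lowest speed probed at a given threshold follows from standard elastic-scattering kinematics, $v_\textrm{min} = \sqrt{m_N E/(2\mu^2)}$, with $\mu$ the WIMP–nucleus reduced mass. In the sketch below the xenon-like target mass and the 7 keV threshold are our assumed inputs, not the paper's exact experimental parameters:

```python
import math

C_LIGHT = 299792.458  # speed of light in km/s

def v_min(E_keV, m_N, m_chi):
    """Minimum WIMP speed (km/s) that can produce a nuclear recoil of
    energy E_keV (keV) on a target nucleus of mass m_N (GeV), for a
    WIMP of mass m_chi (GeV): v_min = sqrt(m_N * E / (2 * mu**2))."""
    mu = m_N * m_chi / (m_N + m_chi)   # reduced mass in GeV
    E = E_keV * 1.0e-6                 # keV -> GeV
    return C_LIGHT * math.sqrt(m_N * E / (2.0 * mu * mu))
```

For a 50 GeV WIMP on a xenon-like nucleus ($m_N \approx 122$ GeV) with an assumed 7 keV threshold this gives roughly 175 km/s, the same order as the $v_a \approx 171 {\,\textrm{km s}^{-1}}$ quoted above.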
We could determine $v_a$ using the reconstructed WIMP mass, but this would be subject to significant uncertainty. In addition, direct reconstructions of the speed distribution are easily biased. The upper limit of the energy windows of the experiments corresponds to a particular WIMP speed (for a given WIMP mass). WIMPs above this speed still contribute to the total event rate, but contribute no spectral information. The reconstructed shape of the high speed tail of the distribution is therefore not constrained by the data, but may affect the reconstructed value of $f_1$ at lower speeds. ![Reconstructed speed distribution for the same realisation of data as Fig. \[fig:f\]. In this case, we have also normalized $f_1(v)$ to unity above $v_a \approx 171 {\,\textrm{km s}^{-1}}$ (vertical dashed line). This is the lowest speed accessible to the experiments for a WIMP of mass 50 GeV. 68% and 95% credible intervals are shown as dark and light shaded regions respectively, while the underlying SHM distribution function is shown as a solid blue line.[]{data-label="fig:f_scaled"}](f_SHM_scaled_line.pdf){width="49.00000%"} An alternative approach is to reconstruct the mean inverse speed $\eta(v)$ (defined in Eq. \[eq:eta\]) at some speed $v$. Because $\eta(v)$ is an integral function of $f_1$, it is less prone to bias as it takes into account the full shape of the distribution at speeds greater than $v$. However, we do not know the normalization of $f_1$ and so we must normalize $\eta$ appropriately. For each point sampled from $P(\textbf{a})$, we calculate $\eta$. We then divide by $\alpha(v)$, the fraction of WIMPs above speed $v$, calculated using the same parameter point: $$\label{eq:alpha} \alpha(v) = \int_{v}^{\infty} f_1(v') \, \textrm{d}v'\,.$$ We will write this rescaled mean inverse speed as $\eta^*(v) = \eta(v)/\alpha(v)$. The value of $\eta^*(v)$ is a measure of the shape of the distribution function above $v$. 
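Putting the two integrals together, $\eta^*(v)$ can be computed directly from a tabulated speed distribution. A sketch with an illustrative Maxwellian (the function name and the $v_0 = 220$ km/s parameter are our choices, not the paper's benchmarks):

```python
import numpy as np

def eta_star(f1, v_grid, v):
    """Rescaled mean inverse speed eta*(v) = eta(v)/alpha(v), where
    eta(v) = int_v^inf f1(v')/v' dv' and alpha(v) is the fraction of
    WIMPs above speed v (Eq. eq:alpha); trapezium-rule integrals."""
    mask = v_grid >= v
    vg, fg = v_grid[mask], f1[mask]
    return np.trapz(fg / vg, vg) / np.trapz(fg, vg)

# Illustrative Maxwellian speed distribution with v0 = 220 km/s
v_grid = np.linspace(1.0, 1000.0, 4000)
f1 = v_grid ** 2 * np.exp(-((v_grid / 220.0) ** 2))
f1 /= np.trapz(f1, v_grid)   # normalization (it cancels in the ratio anyway)
```

Multiplying $f_1$ by any constant leaves the ratio unchanged, which is the property that makes $\eta^*$ a pure shape measure.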
However, information about the normalization of the distribution has been factored out by dividing by $\alpha(v)$. We no longer need to know the value of $v_a$ in order to obtain information about the shape of the distribution at higher speeds. We may still need to decide the speed down to which we trust our reconstruction, but this no longer relies on an arbitrary choice of $v_a$ to normalize the reconstructions at all speeds. In Fig. \[fig:eta\_stats\], we plot the mean reconstructed value of $\eta^*$ at several values of $v$, using 250 realisations of the 50 GeV SHM benchmark. We also show the mean upper and lower limits of the 68% credible intervals as errorbars. The form of $\eta^*$ for the SHM is shown as a solid blue line. In all cases except for $v=100 {\,\textrm{km s}^{-1}}$, the mean reconstructed value is close to the true value, indicating that $\eta^*$ can be reconstructed without bias using this method. At low speeds, the reconstructed value deviates from the true value. In addition, the credible intervals lead to *under*coverage in the $v=100 {\,\textrm{km s}^{-1}}$ case. However, this point lies below the lowest speed to which the experiments are sensitive and therefore we cannot trust the reconstruction at this low speed. We have checked that for the remaining values of $v$ the method provides exact or overcoverage, indicating that at higher speeds we can use $\eta^*$ as a reliable and statistically robust measure of the shape of the distribution. ![Mean reconstructed values of the rescaled mean inverse speed $\eta(v)/\alpha(v)$ at several values of $v$, calculated over 250 realisations of data using a 50 GeV WIMP and underlying SHM distribution function. Errorbars indicate the mean upper and lower limits of the 68% credible intervals. 
The underlying form of $\eta(v)/\alpha(v)$ obtained from the SHM is shown as a solid blue line.[]{data-label="fig:eta_stats"}](Eta.pdf){width="49.00000%"} In the case of a single realisation of data, we would like to compare the probability distribution for $\eta^*(v)$ (obtained from $P(\textbf{a})$) to the value calculated from some test distribution. We note that several distributions may produce the same value of $\eta^*(v)$ at a given value of $v$. Thus, we may fail to reject a distribution function which is not the true distribution. However, if the calculated value of $\eta^*(v)$ does lie outside the $p\%$ interval, we can reject it at the $p\%$ level. We can increase the discriminating power of this method by repeating this reconstruction over all speeds and checking to see if the benchmark value of $\eta^*$ is rejected at any value of $v$. The result of this procedure is shown in Fig. \[fig:eta\] for a single realisation of data generated using an SHM distribution (the same data as in Figs. \[fig:f\] and \[fig:f\_scaled\]). We plot the 68%, 95% and 99% credible intervals as shaded regions, as well as the values of $\eta^*(v)$ calculated from several benchmark speed distributions. We will focus on the intermediate speed range ($v \gtrsim 200 {\,\textrm{km s}^{-1}}$), as we do not know *a priori* the lowest speed to which the experiments are sensitive. ![Rescaled mean inverse speed $\eta(v)/\alpha(v)$, reconstructed from a single realisation of data using a 50 GeV WIMP and underlying SHM distribution function. At each value of $v$ we calculate 68%, 95% and 99% credible intervals (shown as shaded intervals). We also show the calculated values of $\eta(v)/\alpha(v)$ for several possible benchmark speed distributions: SHM (solid blue), SHM+DD (dashed green), Lisanti et al. (dot-dashed red) and stream (dotted magenta).
The benchmark curves are truncated when the underlying distribution function goes to zero.[]{data-label="fig:eta"}](SHM_lores.pdf){width="49.00000%"} The reconstructed intervals are consistent with a range of possible distribution functions. The SHM and SHM+DD distributions are identical over a wide range of speeds. This is because above $\sim 200 {\,\textrm{km s}^{-1}}$, the two distributions differ in normalization but not in shape. Differences appear between the two at low speeds where their shapes diverge. The Lisanti et al. distribution results in a larger deviation from the SHM, but not sufficiently large to differentiate between the two distributions given the size of the uncertainties. Finally, the stream distribution results in a significantly different form for $\eta^*(v)$. At approximately $400 {\,\textrm{km s}^{-1}}$, the curve for the stream distribution lies outside the reconstructed 99% credible interval. We can therefore use this method to reject the stream distribution at the 99% confidence level. Figure \[fig:eta\_hires\] shows the results of a reconstruction using a larger exposure. In this case, we generate data using the Lisanti et al. distribution and an exposure increased by a factor of $2.5$, resulting in approximately 1000 events across the three detectors. As expected, the resulting credible intervals are now substantially narrower. The stream distribution now lies significantly outside the 99% interval. In Fig. \[fig:eta\_hires\_zoom\], we show the same results, but focusing in on the region around $v \sim 400 {\,\textrm{km s}^{-1}}$. At certain points, the SHM and SHM+DD distributions now lie outside the 95% credible interval, suggesting that with a number of events of the order of 1000, we may be able to reject these benchmarks. ![As Fig. \[fig:eta\], but using as input a Lisanti et al. speed distribution and an exposure time which is 2.5 times longer.[]{data-label="fig:eta_hires"}](LIS_hires.pdf){width="49.00000%"} ![As Fig. 
\[fig:eta\_hires\], but focusing on the region around $v \sim 400 {\,\textrm{km s}^{-1}}$. Notice that in the range $400-550 {\,\textrm{km s}^{-1}}$, both the SHM and SHM+DD curves lie at or below the lower limit of the 95% credible interval.[]{data-label="fig:eta_hires_zoom"}](LIS_hires_zoom.pdf){width="49.00000%"} While the method displayed in Fig. \[fig:f\_scaled\] allows the approximate shape of the speed distribution to be reconstructed, reconstructions of $\eta^*(v)$ allow more statistically robust statements to be made about the underlying speed distribution. In particular, Fig. \[fig:eta\_hires\_zoom\] illustrates that with larger exposures deviations from Maxwellian speed distributions can be detected in a model-independent fashion. Conclusions {#sec:Conclusions} =========== We have studied in detail the parametrization for the local dark matter speed distribution introduced in Paper 1. This method involves writing the logarithm of the speed distribution as a polynomial in speed $v$ and fitting the polynomial coefficients (along with the WIMP mass and cross section) to the data. In this paper we have attempted to disentangle the influence of different benchmark speed distributions, different benchmark WIMP masses and different forms for the parametrization. We summarize our conclusions as follows: - We have shown that the reconstruction of the WIMP mass is robust under changes in the number of basis functions $N$. We have used the Bayesian Information Criterion (BIC) to compare models with different values of $N$ and have shown that minimizing the BIC allows us to determine how many basis functions are required for a reliable reconstruction. We have also demonstrated that the results of the method do not depend strongly on the choice of basis functions, but that the speed of reconstructions may be improved by using the Chebyshev polynomial basis.
- We have shown that the method leads to unbiased reconstructions of the WIMP mass for masses in the range 10-500 GeV. Incorporating realistic experimental parameters, including non-zero backgrounds and finite energy resolution, reduces the precision of these reconstructions. In particular, for large values of the input mass, we can only place a lower limit of approximately 20 GeV on the reconstructed mass. This is significantly lower than in the idealized case, where we can typically constrain the WIMP mass to be heavier than around 50 GeV. - We have used several ensembles of data realisations to demonstrate the statistical properties of the method, including unbiased reconstructions and exact coverage of the WIMP mass. - We have presented several ways of displaying the reconstructed WIMP speed distribution using this method. In order to make robust statistical inferences about the speed distribution, we calculate the probability distribution of $\eta(v)/\alpha(v)$. This is the mean inverse speed $\eta(v)$, which appears in the direct detection event rate (Eq. \[eq:Rate\]), rescaled by the fraction of WIMPs $\alpha(v)$ above speed $v$. This can be used as a measure of the *shape* of the distribution function, from which the unknown normalization has been factored out. We can then compare to the expected value of $\eta(v)/\alpha(v)$ from a given benchmark speed distribution, allowing us to distinguish between different underlying models. We have shown that this parametrization method is statistically robust and works well over a large range of input parameters, both in terms of particle physics and astrophysics. The inclusion of more realistic experimental parameters does not introduce any additional bias, but does reduce the precision of reconstructions. We obtain unbiased estimates of the WIMP mass over large numbers of data sets. Finally, we have shown that we can distinguish different forms of the speed distribution.
With around 1000 events, it may be possible to detect minor deviations from the Standard Halo Model and begin to search for more interesting structure in the speed distribution of the Milky Way. The author thanks Anne M. Green and Mattia Fornasa for helpful comments. BJK is supported by STFC. Access to the University of Nottingham High Performance Computing Facility is also gratefully acknowledged.
--- abstract: 'In contrast with the invertible setting, Anosov endomorphisms may have infinitely many unstable directions. Here we prove, under a transitivity assumption, that an Anosov endomorphism on a closed manifold $M$ is either special (that is, every $x \in M$ has only one unstable direction) or for a typical point in $M$ there are infinitely many unstable directions. Another result of this work is the semi-rigidity of the unstable Lyapunov exponent of a $C^{1+\alpha}$ codimension one Anosov endomorphism that is $C^1$-close to a linear endomorphism of $\mathbb{T}^n$ ($n \geq 2$). In the appendix we give a proof of ergodicity for $C^{1+\alpha},$ $\alpha > 0,$ conservative Anosov endomorphisms.' address: - 'Departamento de Matemática, IM-UFAL Maceió-AL, Brazil.' - 'Departamento de Matemática, ICMC-USP São Carlos-SP, Brazil.' author: - 'F. Micena' - 'A. Tahzibi' title: On the Unstable Directions and Lyapunov Exponents of Anosov Endomorphisms --- [^1] Introduction {#section.preliminaries} ============ In the 1970s, the works [@PRZ] and [@MP] generalized the notion of Anosov diffeomorphism to non-invertible maps, introducing the notion of Anosov endomorphism. We consider $M$ a $C^{\infty}$ closed manifold. [@PRZ] \[defprz\] Let $f: M \rightarrow M$ be a $C^1$ local diffeomorphism.
We say that $f$ is an Anosov endomorphism if there are constants $C > 0$ and $\lambda > 1,$ such that for every $f$-orbit $(x_n)_{n \in \mathbb{Z}}$ there is a splitting $$T_{x_i} M = E^s_{x_i} \oplus E^u_{x_i}, \forall i \in \mathbb{Z},$$ which is preserved by $Df$ and for all $n > 0 $ we have $$||Df^n(x_i) \cdot v|| \geq C^{-1} \lambda^n ||v||, \;\mbox{for every}\; v \in E^u_{x_i} \;\mbox{and for any} \; i \in \mathbb{Z}$$ $$||Df^n(x_i) \cdot v|| \leq C\lambda^{-n} ||v||, \;\mbox{for every}\; v \in E^s_{x_i} \;\mbox{and for any} \; i \in \mathbb{Z}.$$ Anosov endomorphisms can be defined in an equivalent way ([@MP]): [@MP] \[defmp\] A $C^1$ local diffeomorphism $f: M \rightarrow M$ is said to be an Anosov endomorphism if $Df$ uniformly contracts a continuous sub-bundle $E^s \subset TM$ into itself, and the action of $Df$ on $TM/E^s$ is uniformly expanding. Sakai [@SA] proved that, in fact, the definitions $\ref{defprz}$ and $\ref{defmp}$ are equivalent. A contrast between Anosov diffeomorphisms and Anosov endomorphisms is the lack of structural stability of the latter. Indeed, $C^1$-close to any linear Anosov endomorphism $A$ of the torus, Przytycki [@PRZ] constructed an Anosov endomorphism with infinitely many unstable directions for some orbit, and consequently he showed that $A$ is not structurally stable. However, it is curious to observe that the topological entropy is locally constant among Anosov endomorphisms. Indeed, take the lift of the Anosov endomorphism to the inverse limit space (see preliminaries for the definition). At the level of the inverse limit space, two nearby Anosov endomorphisms are conjugate ([@PRZ], [@BerRov]) and lifting to the inverse limit space does not change the entropy.
Two endomorphisms (permitting singularities) $f_1, f_2$ are $C^1$-inverse limit conjugate if there exists a homeomorphism $h : M^{f_1} \rightarrow M^{f_2}$ such that $h \circ \tilde{f_1} = \tilde{f_2} \circ h,$ where $\tilde{f_i}$ is the lift of $f_i$ to the orbit space (see preliminaries). Denote by $p$ the natural projection $p: \overline{M} \rightarrow M,$ where $\overline{M}$ is the universal covering. Note that an unstable direction $E^u_{\overline{f}}(y)$ projects on an unstable direction of $T_x M, x = p(y),$ following Definition $\ref{defprz},$ that is $Dp(y) \cdot (E^u_{\overline{f}}(y)) = E^u(\tilde{x}), $ where ${\tilde{x}}= p (\mathcal{O}(y)).$ \[propMP\][@MP] $f$ is an Anosov endomorphism of $M$ if and only if the lift $\overline{f}: \overline{M} \rightarrow \overline{M} $ is an Anosov diffeomorphism of $\overline{M},$ the universal cover of $M.$ An advantage of working with the latter definition is that in $\overline{M}$ we can construct invariant foliations $\mathcal{F}^s_{\overline{f}}, \mathcal{F}^u_{\overline{f}}.$ Given an Anosov endomorphism and an $f$-orbit ${\tilde{x}}= (x_n)_{n \in \mathbb{Z}},$ we denote by $ E^u({\tilde{x}})$ the unstable subspace of $T_{x_0}(M)$ corresponding to the orbit $(x_n)_{n \in \mathbb{Z}}.$ In [@PRZ] one constructs examples of Anosov endomorphisms such that $E^u({\tilde{x}}) \neq E^u (\tilde{y})$ when $x_0 = y_0,$ but $(x_n)_n \neq (y_n)_n.$ In fact, it is possible that $x_0 \in M$ has uncountably many unstable directions, see [@PRZ]. An Anosov endomorphism for which $E^u({\tilde{x}})$ just depends on $x_0$ (a unique unstable direction for each point) is called a special Anosov endomorphism. A linear Anosov endomorphism of the torus is an example of a special Anosov endomorphism. A natural question is whether it is possible to find an example of a (non-special) Anosov endomorphism such that every $x \in M$ has a finite number of unstable directions.
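As a concrete illustration (our example, not taken from the text): the integer matrix

```latex
% A non-invertible linear Anosov endomorphism of T^2 (illustrative example)
A = \begin{pmatrix} 4 & 1 \\ 1 & 1 \end{pmatrix}, \qquad
\det A = 3, \qquad
\operatorname{spec}(A) = \left\{ \tfrac{5+\sqrt{13}}{2},\, \tfrac{5-\sqrt{13}}{2} \right\}
\approx \{4.30,\, 0.70\}
```

induces a $3$-to-$1$ local diffeomorphism of $\mathbb{T}^2$ which is hyperbolic (no eigenvalue on the unit circle), hence a genuinely non-invertible Anosov endomorphism; being linear, it is special: for every point and every pre-history the unstable direction is the eigendirection of the larger eigenvalue.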
It is also interesting to understand the structure of points with infinitely many unstable directions. For transitive Anosov endomorphisms we prove the following dichotomy: \[teo1\] Let $f: M \rightarrow M$ be a transitive Anosov endomorphism, then: 1. Either $f$ is a special Anosov endomorphism, 2. Or there exists a residual subset $\mathcal{R} \subset M,$ such that for every $x \in \mathcal{R},$ $x$ has infinitely many unstable directions. Observe that when $M$ is the torus $\mathbb{T}^n, n \geq 2,$ all Anosov endomorphisms of $\mathbb{T}^n$ are transitive, see [@AH]. Analysing the unstable Lyapunov exponents of the Anosov endomorphism, similarly to [@MT], we can prove: \[teo2\] Let $A: \mathbb{T}^n \rightarrow \mathbb{T}^n, n \geq 2$ be a linear Anosov endomorphism, with $\dim E^u_A = 1.$ Then there is a $C^1$ open set $\mathcal{U},$ containing $A,$ such that for every $C^{1 + \alpha},$ $\alpha> 0,$ conservative Anosov endomorphism $f \in \mathcal{U}$ we have $\lambda^u_f(x) \leq \lambda^u(A),$ for $m$-almost every $x \in \mathbb{T}^n,$ where $m$ is the Lebesgue measure of $\mathbb{T}^n.$ To prove Theorem \[teo2\], the neighbourhood $\mathcal{U}$ can be chosen small enough that every $f \in \mathcal{U}$ has its lift conjugate to $A$ in $\mathbb{R}^n.$ By this fact, we may assume a priori that $\dim E^u_f = 1$ as well. General Preliminary Results {#section.preliminaries} ============================== In this section we present some classical results on the theory of Anosov endomorphisms that will be important for the rest of this work. The Inverse Limit Space.
------------------------ Given a compact metric space $(X,d)$ and a continuous map $f: X \rightarrow X$, we define a new compact metric space, called the inverse limit space of $f$ or the natural extension of $f$: $$X^f := \left\{(x_n)_{n \in \mathbb{Z}} \in \prod_{i \in \mathbb{Z}} X_i|\;\; X_i = X, \;\forall \; i \in \mathbb{Z} \;\;\mbox{and}\;\; f(x_i) = x_{i+1} \forall i\in \mathbb{Z} \right\}.$$ In this text we denote $X^f$ by $\widetilde{X}.$ We also denote by ${\tilde{x}}$ an element $(x_n)_{n \in \mathbb{Z}}$ of $\widetilde{X}.$ We introduce a metric $\widetilde{d}$ in $\widetilde{X}$ as follows: $$\tilde{d}(\tilde{x}, \tilde{y}) = \displaystyle\sum_{i \in \mathbb{Z}}\frac{d(x_i, y_i)}{2^{|i|}}.$$ It is easy to see that $(\tilde{X}, \tilde{d})$ is a compact metric space. Consider $\pi: \widetilde{X} \rightarrow X,$ the projection onto the zero coordinate, that is, if ${\tilde{x}}= (x_n)_{n \in \mathbb{Z}}, $ then $\pi({\tilde{x}}) = x_0.$ One can verify that $\pi$ is continuous. A pre-history of $x$ is a sequence of the form ${\tilde{x}}_{-} = (\ldots, x_{-2}, x_{-1}, x_0 = x),$ such that $f(x_{-i})= x_{-i + 1}, i =1, 2, \ldots.$ Denote by $X^f_{-}$ or $\widetilde{X}_{-}$ the space of all pre-histories with $x_0 \in X.$ The space $(\widetilde{X}_{-}, \widetilde{d})$ is also compact, and the distance between two pre-histories of the same point $x_0 \in X$ is $\displaystyle\sum_{i=0}^{\infty}\frac{d_M(x_{-i}, y_{-i})}{2^i}.$ In the Anosov endomorphism context, $E^u({\tilde{x}})$ depends only on ${\tilde{x}}_{-},$ and this is why many times in this work we deal only with pre-histories. Some Nice Properties of Anosov Endomorphisms. --------------------------------------------- The set of $C^1$ Anosov endomorphisms is open, just as for Anosov diffeomorphisms. However, structural stability in the usual sense does not hold for Anosov endomorphisms (see the correct setting for structural stability of Anosov endomorphisms in Berger-Rovella [@BerRov]).
The set of Anosov endomorphisms of a manifold $M$ is open in the $C^1$ topology. \[prz\] [@PRZ] Let $f: M \rightarrow M$ be an Anosov endomorphism. Then the map $$\tilde{x} \mapsto E^u (\tilde{x})$$ is continuous. Let $f: M \rightarrow M$ be an Anosov endomorphism. Denote by $\mathcal{E}^u_f(x) := \displaystyle\bigcup_{\tilde{x}: \pi(\tilde{x}) = x} E^u(\tilde{x})$ the union of all unstable directions defined at $x.$ Considering Definitions \[defmp\] and \[defprz\], a natural question arises: What is the relation between $\mathcal{E}^u_f(x)$ and $ \bigcup_{y \in p^{-1}(x)} Dp(E^u_{\overline{f}}(y)) ?$ Observe that $\mathcal{E}^u_f(x)$ is not necessarily $\displaystyle\bigcup_{p(y) = x} Dp(y) \cdot ( E^u_{\overline{f}}(y)).$ Indeed, the latter is a countable union and the former may be uncountable (see [@PRZ]). Let $f: M \rightarrow M$ be an Anosov endomorphism, then $$\mathcal{E}^u_f(x)=\overline{\displaystyle\bigcup_{p(y) = x} Dp(y) \cdot( E^u_{\overline{f}}(y))}.$$ First of all we would like to mention that $ \mathcal{E}^u_f(x)$ is a closed subset of the $u$-dimensional Grassmannian of $T_xM.$ This is an immediate corollary of Theorem \[prz\]. Clearly $\displaystyle\bigcup_{p(y) = x} Dp(y) \cdot( E^u_{\overline{f}}(y)) \subset \mathcal{E}^u_f(x).
$ So, $\overline{\displaystyle\bigcup_{p(y) = x} Dp(y) \cdot( E^u_{\overline{f}}(y))} \subseteq \mathcal{E}^u_f(x).$ For the reverse inclusion, let $E^u({\tilde{x}})$ be an unstable direction at $x \in M.$ We want to prove that $E^u({\tilde{x}}) \in \overline{\displaystyle\bigcup_{p(y) = x} Dp(y) \cdot( E^u_{\overline{f}}(y))}.$ We claim that given any finite pre-history $(x_{-k}, \ldots , x_{-2}, x_{-1}, x=x_0 ),$ there is a finite piece of $\overline{f}$-orbit $(y_{-k}, \cdots, \overline{f}^k(y_{-k})),$ which projects on $(x_{-k}, \ldots , x_{-2}, x_{-1}, x ),$ that is $$p( \overline{f}^{j}y_{-k}) = x_{-k+j}, \quad j \in \{0, 1, \cdots, k\}.$$ Indeed, choose any $y_{-k} \in \overline{M}$ such that $p(y_{-k}) = x_{-k}.$ As $p\circ \overline{f} = f \circ p,$ the piece of $\overline{f}$-orbit of $y_{-k}$ projects on $(x_{-k}, \ldots , x_{-2}, x_{-1}, x ).$ Now for each $k$ consider $\mathcal{O}(y_{-k}),$ the full $\overline{f}$-orbit of $y_{-k}.$ It is clear that $p(\mathcal{O}(y_{-k}))$ converges to $\tilde{x}$ in $M^f.$ Recall that $$\label{kk} E^u(p(\mathcal{O}(y_{-k}))) = Dp (E^u(\overline{f}^k(y_{-k}))).$$ By a continuity argument (Theorem \[prz\]) we have $$E^u(p(\mathcal{O}(y_{-k}))) \rightarrow E^u(\tilde{x})$$ and using \[kk\] we obtain $$Dp (E^u(\overline{f}^k(y_{-k}))) \rightarrow E^u(\tilde{x}),$$ which completes the proof. The next lemma is useful for the rest of this paper. \[angle\] Suppose that $f: M \rightarrow M$ is an Anosov endomorphism such that there are two different unstable directions $E^u_1$ and $E^u_2$ at $x.$ Then the angle $\angle(Df^n(x)(E^u_1), Df^n(x)(E^u_2) )$ goes to zero as $n \rightarrow + \infty.$ In fact, suppose that $\dim(E^s) = k, \dim(E^u) = n.$ Suppose that $E^u_1(x) \neq E^u_2(x)$ for $x \in M.$ Consider $\{v_1, \ldots, v_n\}$ and $\{u_1, \ldots, u_n\}$ bases for $E^u_1(x)$ and $E^u_2(x)$ respectively.
Since $E^u_1 (x) \neq E^u_2 (x),$ there is $u_i,$ say $u_1,$ such that $B = \{u_1, v_1, \ldots, v_n\}$ is a linearly independent set. Let $E := < u_1, v_1, \ldots, v_n >,$ with $\dim(E) = n+ 1,$ be the subspace generated by $B.$ Observe that $\dim(E) + \dim(E^s) = n + k + 1 > n+k = \dim(T_xM).$ This implies that $E \cap E^s$ is non-trivial. Let $0 \neq v_s \in E \cap E^s;$ we have $$v_s = cu_1 + v,$$ where $c \neq 0$ and $ v \in E^u_{1} (x).$ Considering the following properties of vectors in the stable and unstable bundles: - $||Df^n(x) v_s|| \rightarrow 0,$ $||Df^n(x) u_1|| \rightarrow + \infty,$ $||Df^n(x) v ||\rightarrow + \infty,$ it comes out that $\angle([Df^n(x)u_1], Df^n(x)E^u_1(x)) \rightarrow 0.$ In fact, the same argument shows that $\angle([Df^n(x)u_i], Df^n(x)E^u_1(x)) \rightarrow 0$ for all $u_i$ not in $E^u_1(x).$ Thus $$\lim_{n \rightarrow \infty}\angle(Df^n(x)(E^u_1 (x)), Df^n(x)(E^u_2(x)) )= 0.$$ Proof of the Theorem \[teo1\]. ============================== In the course of the proof of the main result we need to analyse the number of unstable directions as a function of $x \in M$ as follows: Let $u: M \rightarrow \mathbb{N} \cup \{\infty\}$ be defined as $$u(x) := \#( \mathcal{E}_f^u(x)),$$ which assigns to each $x$ the “number" of all possible unstable directions at $T_x{M}$. A simple and useful remark is the following: \[nondecreasing\] $u(x)$ is non-decreasing along the forward orbit of $x.$ It is enough to use that $f$ is a local diffeomorphism and $Df(x)$ is injective. However, we emphasize that it is not clear whether $u(x)$ is constant or not along the orbit. This is because every pre-history of $x$ is included in a pre-history of $f(x).$ \[quase\] Let $f: M \rightarrow M$ be a transitive Anosov endomorphism. Then either there is $x \in M$ such that $u(x) = \infty,$ or $u$ is uniformly bounded on $M;$ in fact, in this case $f$ is a special Anosov endomorphism.
Suppose that $u(x) < \infty$ for all $x \in M.$ Define the sets $$\Lambda_k =\{x \in M\, | \, u(x) \leq k \}.$$ The sets $\Lambda_k$ are closed. Indeed, by continuity (Theorem \[prz\]), $M \setminus \Lambda_k$ is open. Now observe that $$M = \displaystyle\bigcup_{k=1}^{+\infty}\Lambda_k;$$ by the Baire category theorem, there is $k_0 \geq 1$ such that $int(\Lambda_{k_0}) \neq \emptyset.$ Now we claim that $$M= \Lambda_{k_0}.$$ To prove the claim, take $x$ arbitrary in $M$ with $l$ unstable directions and $V_x$ a small neighbourhood of $x$ such that any point in $V_x$ has at least $l$ unstable directions. Consider a point with dense orbit, take an iterate of it lying in $V_x,$ and then a later iterate that belongs to $int(\Lambda_{k_0}).$ By Lemma \[nondecreasing\] we conclude that $l \leq k_0,$ and this yields $M = \Lambda_{k_0}.$ Finally, we prove that $ M = \Lambda_1,$ implying that $f$ is a special Anosov endomorphism. Suppose that there is $x \in M$ such that $u(x) \geq 2$ and choose $E^u_1(x), E^u_2(x)$ two different unstable directions at $T_x(M).$ Let $\alpha > 0 $ be the angle between $E^u_1(x)$ and $E^u_2(x).$ Consider $U_x$ a small neighbourhood of $x$ such that every $y \in U_x$ has at least two unstable directions, say $E^u_1(y)$ and $E^u_2(y),$ with $\angle(E^u_1(y), E^u_2(y)) > \displaystyle\frac{\alpha}{2}.$ Let $x_0$ be a point with dense orbit. Let $n_1$ be a large number satisfying - $f^{n_1}(x_0) \in U_x, $ - $\angle (Df^{n_1}(x_0)\cdot E,Df^{n_1}(x_0)\cdot F ) < \displaystyle\frac{\alpha}{3} ,$ for any $E,F \in \mathcal{E}^u_f(x_0).$ The choice of $n_1$ is possible thanks to the denseness of the forward orbit of $x_0$ and Lemma \[angle\].
By definition of $U_x,$ the two above properties imply that either $E^u_1(f^{n_1}(x_0))$ or $E^u_2(f^{n_1}(x_0))$ is not contained in $Df^{n_1}(x_0)\cdot\mathcal{E}^u_f(x_0).$ So, we obtain $$u(f^{n_1}(x_0)) \geq u(x_0) + 1.$$ By repeating this argument, it is possible to obtain an infinite sequence $ f^{n_k}(x_0)$ such that $$u(f^{n_{k+1}}(x_0)) \geq u(f^{n_{k}}(x_0)) + 1,$$ contradicting $M = \Lambda_{k_0}.$ Ending the Proof of Theorem \[teo1\] ------------------------------------ To finalize the proof of Theorem \[teo1\] it remains to prove that $u(x) = \infty$ for a residual set $\mathcal{R} \subset M,$ whenever $f$ is not a special Anosov endomorphism. In fact, suppose that there is $x \in M$ such that $u(x) = + \infty.$ Given $k > 0,$ fix exactly $k$ different unstable directions at $x,$ and $U^k_x$ a neighbourhood of $x$ such that $u(y) \geq k$ for every $y \in U^k_x.$ Now, since $f$ is transitive, the open set $V^k = \displaystyle\bigcup_{i \geq 0} f^i(U^k_x) $ is dense in $M.$ Finally, consider $$\mathcal{R} := \bigcap_{k \geq 1} V^k,$$ which is a residual set. By construction, given $x \in \mathcal{R}$ we have $u(x) \geq k$ for every $k \geq 1,$ which implies $u(x) = +\infty.$ This completes the proof of Theorem \[teo1\]. Proof of Theorem \[teo2\]. =========================== Given $f: \mathbb{T}^n \rightarrow \mathbb{T}^n$ an Anosov endomorphism, by Proposition \[propMP\] the lift $\overline{f}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is an Anosov diffeomorphism. Let $f_*: \mathbb{T}^n \rightarrow \mathbb{T}^n$ be the linearisation of $f$. By the linearisation of $f$ we mean the unique linear endomorphism of the torus homotopic to $f.$ By Theorem 8.1.1 in [@AH], the linearisation map is hyperbolic.
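Hyperbolicity of a linearisation amounts to a concrete check on an integer matrix: no eigenvalue may have modulus one. As a sketch (the matrix below is an illustrative choice of ours with determinant $2$, hence a genuine non-invertible endomorphism of $\mathbb{T}^2$ with $\dim E^u = 1$, the setting of Theorem \[teo2\]; it is not taken from the text):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 real matrix [[a, b], [c, d]]
    (real-spectrum case: discriminant assumed nonnegative)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_hyperbolic(a, b, c, d, tol=1e-9):
    """True when no eigenvalue lies on the unit circle."""
    return all(abs(abs(lam) - 1.0) > tol for lam in eig2(a, b, c, d))

# Illustrative linearisation candidate on the 2-torus:
# eigenvalues 2 + sqrt(2) > 1 and 0 < 2 - sqrt(2) < 1.
lam_u, lam_s = eig2(3, 1, 1, 1)
```

Since `lam_u * lam_s` equals the determinant, the unstable expansion rate here is `lam_u = 2 + sqrt(2)`, i.e. $\lambda^u_A = \log(2+\sqrt{2})$ in the notation used later.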
Although $\mathbb{R}^n$ is not compact, since $\overline{f}$ is a lift of $f,$ the derivative of $\overline{f}$ is periodic on compact fundamental domains of $\mathbb{T}^n.$ This periodicity allows us to prove, in the $\mathbb{R}^n$ setting, results analogous to those for Anosov diffeomorphisms in the compact case. \[abscont\] Let $f: \mathbb{T}^n \rightarrow \mathbb{T}^n$ be a $C^{1+\alpha}-$Anosov endomorphism. Then for $\overline{f}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ there exist transversally absolutely continuous foliations $\mathcal{F}^u_{\overline{f}}$ and $\mathcal{F}^s_{\overline{f}}$ tangent to $E^u_{\overline{f}}$ and $E^s_{\overline{f}}$ respectively. The proof is similar to the compact case, [@HPS]. \[quasi isometric\] A foliation $W$ of $\mathbb{R}^n$ is quasi-isometric if there exist positive constants $Q$ and $b$ such that for all $x, y$ in a common leaf of $W$ we have $$d_W(x, y) \leq Q^{-1} || x - y|| + b.$$ Here $d_W$ denotes the Riemannian distance along the leaves of $W$ and $\|x-y\|$ is the Euclidean distance. \[remarkquasi\] Observe that if $||x - y||$ is large enough, we can take $b = 0 $ in the above definition. \[quasi\_iso\_fol\] Let $A$ be as in Theorem \[teo2\]. If $f$ is an Anosov endomorphism $C^1-$sufficiently close to $A,$ then $\mathcal{F}^{s,u }_{\overline{f}}$ are quasi-isometric foliations. The proof of this lemma follows directly from a proposition due to Brin [@Br]. \[brin\] Let $W$ be a $k-$dimensional foliation of $\mathbb{R}^m.$ Suppose that there is an $(m-k)-$dimensional plane $\Delta$ such that $T_x W(x) \cap \Delta =\{0\}$ and $\angle (T_x W(x) , \Delta) \geq \beta > 0$ for every $x \in \mathbb{R}^m.$ Then $W$ is quasi-isometric.
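Brin's criterion can be checked numerically on a toy example. Below, a leaf of a made-up one-dimensional foliation of $\mathbb{R}^2$ — the graph $y = \varepsilon \sin x$, whose tangent lines stay uniformly transverse to the vertical plane $\Delta$ — satisfies the quasi-isometry bound with $Q^{-1} = \sqrt{1+\varepsilon^2}$ and $b = 0$ (the leaf and constants are ours, chosen only for illustration):

```python
import math

EPS = 0.5  # slope bound of the leaves; controls the angle with Delta

def leaf_point(x):
    """A point on the illustrative leaf y = EPS * sin(x)."""
    return (x, EPS * math.sin(x))

def arc_length(x0, x1, steps=10000):
    """Leaf distance d_W between leaf_point(x0) and leaf_point(x1),
    approximated by a fine polygonal sum."""
    total, h = 0.0, (x1 - x0) / steps
    for i in range(steps):
        p = leaf_point(x0 + i * h)
        q = leaf_point(x0 + (i + 1) * h)
        total += math.hypot(q[0] - p[0], q[1] - p[1])
    return total

def euclid(x0, x1):
    """Euclidean distance between the same two leaf points."""
    p, q = leaf_point(x0), leaf_point(x1)
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Quasi-isometry constant predicted by the slope bound:
Q_INV = math.sqrt(1 + EPS * EPS)
ratio = arc_length(0.0, 50.0) / euclid(0.0, 50.0)
```

Since the slope of every leaf is bounded by `EPS`, the arc length between two leaf points is at most $\sqrt{1+\varepsilon^2}$ times their horizontal separation, which is in turn at most the Euclidean distance; the computed `ratio` therefore lands in $(1, \sqrt{1+\varepsilon^2}]$.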
Consider $U $ a $C^1-$open set containing $A,$ such that for every $f \in U,$ $\overline{f}$ and $\overline{A}$ are $C^1$-close in the universal cover $\mathbb{R}^n.$ The $C^1$ neighborhood $U$ is taken such that $$\label{angulou} |\angle (E^u_{\overline{f}}(x), E^u_A ) | < \alpha,$$ $$\label{angulos} |\angle (E^s_{\overline{f}}(x), E^s_A ) | < \alpha,$$ for any $x \in \mathbb{R}^n,$ where $\alpha$ is a small number less than $\frac{1}{2}\angle(E^u_A, E^s_A).$ For the foliation $\mathcal{F}^u_{\overline{f}}$ take $\Delta := E^s_A,$ and for the foliation $\mathcal{F}^s_{\overline{f}},$ $\Delta := E^u_A.$ Applying Proposition \[brin\] completes the proof. \[nice\] For any Anosov endomorphism $f: \mathbb{T}^n \rightarrow \mathbb{T}^n$ close to its linearisation $A$, the following properties hold in the universal covering: 1. For each $k \in \mathbb{N}$ and $C > 1$ there is $M$ such that $$||x - y|| > M \Rightarrow \frac{1}{C} \leq \displaystyle\frac{||\overline{f}^kx - \overline{f}^k y|| }{||A^kx - A^ky||} \leq C.$$ 2. $ \displaystyle\lim_{||y - x || \rightarrow +\infty} \frac{y-x}{||y - x ||} = E_A^{\sigma}, \;\; y \in \mathcal{F}^{\sigma}_{\bar{f}} (x), \sigma \in \{s, u\},$ uniformly. The proof follows the lines of [@H]; we repeat it for completeness.
Let $K$ be a fundamental domain of $\mathbb{T}^d$ in $\mathbb{R}^d, d \geq 2.$ Restricted to $K $ we have $$||\overline{f}^k - A^k|| < +\infty.$$ For $\overline{x} \in \mathbb{R}^d,$ there are $x \in K$ and $\overrightarrow{n} \in \mathbb{Z}^d$ such that $\overline{x} = x + \overrightarrow{n};$ since $f_{\ast} = A,$ we obtain: $$||\overline{f}^k(\overline{x}) - A^k(\overline{x})|| = ||\overline{f}^k(x + \overrightarrow{n}) - A^k(x +\overrightarrow{ n})|| = ||\overline{f}^k(x) +A^k \overrightarrow{n} - A^kx - A^k\overrightarrow{n} || < +\infty.$$ Now, for every $x, y \in \mathbb{R}^d,$ $$||\overline{f}^k x - \overline{f}^ky|| \leq ||A^k x - A^k y|| + 2||\overline{f}^k - A^k||_0,$$ $$||A^k x - A^ky|| \leq ||\overline{f}^k x - \overline{f}^k y|| + 2||\overline{f}^k - A^k||_0,$$ where $$||\overline{f}^k - A^k||_0 = \max_{x \in K}\{||\overline{f}^k(x) - A^k(x)||\}.$$ Since $A$ is non-singular, if $||x - y|| \rightarrow + \infty,$ then $||A^kx - A^k y|| \rightarrow + \infty.$ So, dividing both expressions by $||A^kx - A^k y|| $ and letting $||x - y|| \rightarrow + \infty,$ we obtain the proof of the first item. For the second item, we consider only the case of $E^s_A;$ for $E^ u$ just take $A^{-1}$ and $(\overline{f})^{-1}$ and the same proof holds. Let $|\theta^s| = \max\{\,|\theta| \;| \; \theta\; \mbox{is an eigenvalue of $A$ and}\; 0 < |\theta| < 1 \}.$ Fix a small $\varepsilon > 0$ and consider $\delta > 0$ such that $0 < (1+ 2\delta)|\theta^s| < 1.$ If $f$ is sufficiently $C^1-$close to $A,$ then $\overline{f}$ is an Anosov diffeomorphism on $\mathbb{R}^d$ with contracting constant less than $(1+ \delta)|\theta^s|.$ Using the hyperbolic splitting, there is $k_0 \in \mathbb{N}$ such that if $v \in \mathbb{R}^d,$ $k > k_0$ and $$||A^k v || < (1 + 2\delta)^k |\theta^s|^k ||v||,$$ then $$||\pi^u_A(v)|| < \varepsilon ||\pi^s_A(v)||.$$ Consider $k > k_0$ and $M$ sufficiently large, satisfying the first item with $C = 2$ and in accordance with Remark \[remarkquasi\].
Take $y \in \mathcal{F}^s_{\overline{f}}(x)$ with $||x - y|| > M.$ Let $d^s$ denote the Riemannian distance on stable leaves of $\mathcal{F}^s_{\overline{f}};$ using the quasi-isometry property of the foliation $\mathcal{F}^s_{\overline{f}},$ we get $$d^s(\overline{f}^k x, \overline{f}^k y) < ((1 + \delta)|\theta^s|)^k d^s(x,y) \Rightarrow$$ $$||\overline{f}^k x - \overline{f}^k y|| < ((1 + \delta)|\theta^s|)^k (Q^ {-1}|| x - y||) \Rightarrow$$ $$||A^k x - A^k y|| < 2 ((1 + \delta)|\theta^s|)^k (Q^ {-1}|| x - y||).$$ Finally, for large $k$ we have $$2Q^{-1} ((1 + \delta)|\theta^s|)^k \leq ((1 + 2\delta) |\theta^s|)^k,$$ so $$||\pi^u_A(x - y)|| < \varepsilon ||\pi^s_A(x- y)||.$$ [@MT] \[linalg\] Let $f : \mathbb{T}^d \rightarrow \mathbb{T}^d$ be an Anosov endomorphism close to $A: \mathbb{T}^d \rightarrow \mathbb{T}^d$ such that $\dim E^u_A = 1.$ Then for all $n \in \mathbb{N}$ and $\varepsilon > 0$ there exists $M$ such that for $x, y$ with $y \in \mathcal{F}^u_{\overline{f}}(x)$ and $||x - y||> M$ we have $$(1 - \varepsilon)e^{n\lambda^{u}_A } ||y -x|| \leq \|A^n(x) - A^n(y)\| \leq (1 + \varepsilon)e^{n\lambda^{u}_A } ||y -x||,$$ where $\lambda^{u}_A$ is the Lyapunov exponent of $A$ corresponding to $E^{u}_A.$ Denote by $E^{u}_A$ the eigenspace corresponding to $\lambda^{u}_A$ and $|\mu| = e^{\lambda^{u}_A},$ where $\mu$ is the eigenvalue of $A$ in the $E^{u}_A$ direction. Let $N \in \mathbb{N}$ and choose $y \in \mathcal{F}^{u}_{\overline{f}}(x)$ such that $|| x - y || > M.$ By Corollary \[nice\], we have $$\frac{x - y}{|| x - y||} = v + e_M,$$ where the vector $v = v_{E^{u}_A}$ is a unit eigenvector of $A$ in the $E^{u}_A$ direction and $e_M$ is a correction vector that converges to zero uniformly as $M$ goes to infinity.
We have $$A^N \left( \frac{x - y}{|| x - y||} \right) = \mu^N v + A^N e_M = \mu^N \left(\frac{x - y}{|| x - y||} \right) -\mu^N e_M + A^N e_M.$$ This implies that $$\begin{aligned} || x - y || (|\mu|^N - |\mu|^N ||e_M|| - ||A||^N || e_M||) \leq || A^N (x - y)|| \\ \leq || x - y || (|\mu|^N + |\mu|^N ||e_M|| + ||A||^N || e_M||).\end{aligned}$$ Since $N$ is fixed, we can choose $M > 0$ such that $$|\mu|^N ||e_M|| + ||A||^N || e_M|| \leq \varepsilon |\mu|^N,$$ and the lemma is proved. By the Multiplicative Ergodic Theorem for endomorphisms ([@QXZ]), the unstable Lyapunov exponent of a typical point is independent of the unstable direction. We denote by $\lambda^u(x) = \lambda^u(\tilde{x})$ the unique unstable Lyapunov exponent of $x$ in our context, where $\dim (E^u) =1.$ Let $A: \mathbb{T}^n \rightarrow \mathbb{T}^n, n \geq 2,$ be a conservative linear Anosov endomorphism with $\dim E^u_A = 1.$ Then there is a $C^1$ open set $\mathcal{U}$ containing $A$ such that for every conservative $C^{1+\alpha},$ $\alpha > 0,$ Anosov endomorphism $f \in \mathcal{U}$ we have $\lambda^u_f(x) \leq \lambda^u_A$ for $m$-almost every $x \in \mathbb{T}^n,$ where $m$ is the Lebesgue measure of $\mathbb{T}^n.$ Suppose by contradiction that there is a positive Lebesgue measure set $Z \subset \mathbb{T}^n$ such that for every $x \in Z$ we have $\lambda^u_{\overline{f}}(x) > (1 + 5 \varepsilon) \lambda^u_A,$ for a small $\varepsilon > 0.$ Since $\overline{f}$ is $C^{1+\alpha},$ the unstable foliation $\mathcal{F}_{\overline{f}}^u$ is upper absolutely continuous. So, there is a positive Lebesgue measure set $B \subset \mathbb{R}^n$ such that for every point $x \in B$ we have $$m^u_x(\mathcal{F}^u_{\overline{f}}(x) \cap Z) > 0 \label{1}$$ where $m^u_x$ is the Lebesgue measure of the leaf $\mathcal{F}^u_{\overline{f}}(x)$.
Choose $p \in B$ satisfying (\[1\]) and consider an interval $[x,y]_u \subset \mathcal{F}^u_{\overline{f}}(p) $ satisfying $m^u_p([x,y]_u \cap Z) > 0$ and such that the length of $[x,y]_u$ is bigger than $M,$ as required in Lemma \[linalg\] and Corollary \[nice\]. We can choose $M$ such that $$||Ax - Ay|| < (1 + \varepsilon)e^{\lambda^u_A } ||y -x||$$ and $$\frac{|| \overline{f}(x) - \overline{f}(y)|| }{ ||Ax - Ay||} < 1 + \varepsilon$$ whenever $d^u(x, y) \geq M,$ where $d^u$ denotes the Riemannian distance in unstable leaves. The above equations imply that $$|| \overline{f}(x) -\overline{f}(y)|| < (1+ \varepsilon)^2 e^{\lambda^u_A} || y - x||.$$ Inductively, we assume that for $n \geq 1$ we have $$|| \overline{f}^n(x) - \overline{f}^n(y)||< (1+\varepsilon)^{2n} e^{n \lambda^u_A }|| y - x||. \label{induction}$$ Since $f$ expands uniformly in the $u-$direction we have $d^u(\overline{f}^n(x), \overline{f}^n(y)) > M,$ consequently $$\begin{aligned} ||\overline{ f}(\overline{f}^nx) - \overline{f}(\overline{f}^ny)|| &<& (1+\varepsilon)|| A(\overline{f}^nx) - A(\overline{f}^ny)|| \\ &<& (1 + \varepsilon)^2 e^{\lambda^u_A} || \overline{f}^nx - \overline{f}^n y||\\ &<& (1+\varepsilon)^{2(n+1)} e^{(n+1)\lambda^u_A} || y - x||.\end{aligned}$$ For each $n > 0,$ let $A_n \subset Z$ be the following set $$A_n = \{ x \in Z \colon\;\; \|D\overline{f}^k(x)|E^u_{\overline{f}}(x) \| > (1+2\varepsilon)^{2k} e^{k\lambda^u_A} \;\; \mbox{for any} \;\; k \geq n\}.$$ We have $m(Z) > 0$ and $Z_n := (A_n \cap Z) \uparrow Z,$ as $(1 + 5 \varepsilon) > (1 + 2\varepsilon)^2$ for small $\varepsilon > 0.$ Define the number $\alpha_0 > 0$ such that $$\displaystyle\frac{m^u_p([x,y]_u \cap Z)}{m^u_p([x,y]_u)} = 2 \alpha_0.$$ Since $Z_n \cap [x,y]_u \uparrow Z \cap [x,y]_u, $ there is $n_0 \in \mathbb{N}$ such that if $n \geq n_0,$ then $$m^u_p ([x,y]_u \cap Z_n) = \alpha_n \cdot m^u_p([x,y]_u),$$ with $\alpha_n > \alpha_0.$ Thus, for $n \geq n_0$ we have: $$\begin{aligned} ||\overline{f}^nx -
\overline{f}^ny || &>& Q \displaystyle\int_{[x,y]_u \cap Z_n} ||Df^n(z)|| dm^u_p(z) \\ &>& Q (1+ 2\varepsilon)^{2n} e^{n \lambda_A^u } m^u_p ([x,y]_u \cap Z_n) \\ &>& \alpha_0 Q^2 (1 + 2\varepsilon)^{2n} e^{n\lambda^u_A} \|x-y\|. \label{conclusion}\end{aligned}$$ The inequalities $(\ref{induction})$ and $(\ref{conclusion})$ give a contradiction. Appendix: Ergodicity of Anosov Endomorphisms ============================================= In this section we establish the ergodicity of $C^{1+\alpha}$ conservative Anosov endomorphisms. Before this, we recall a classical result; see [@QXZ]. Let $T: X \rightarrow X $ be a continuous map of a compact metric space. For any $T-$invariant Borel probability measure $\mu$ on $X,$ there exists a unique $\tilde{T}-$invariant Borel probability measure $\tilde{\mu}$ on $X^T$ such that $\pi_{\ast}\tilde{\mu} = \mu.$ In particular, for a measurable set $A \subset X,$ we have $\mu(A) = \tilde{\mu}(\pi^{-1}(A)). $ Moreover, $\mu$ is ergodic if, and only if, $\tilde{\mu}$ is ergodic. Here $\pi: X^T \rightarrow X$ is the projection onto the zeroth coordinate. To prove the ergodicity, we use the theory of SRB measures of endomorphisms. We refer the reader to [@QZ] for further details on the SRB theory of endomorphisms. See also [@BT] for the number of ergodic SRB measures for surface endomorphisms in terms of homoclinic equivalence classes. Let $f: M \rightarrow M$ be a $C^{1+\alpha}$ conservative Anosov endomorphism. Then $(f,m)$ is ergodic, where $m$ is a volume form on $M.$ The above theorem seems to be folklore in the ergodic theory of hyperbolic dynamics; however, we did not find any written proof. Initially, observe that if $f$ is an Anosov endomorphism, then $f$ satisfies Axiom A. Since we are supposing that $f$ is $m-$preserving, $\Omega(f) = M$ and $f$ is transitive (see [@MP] or [@PRZ]).
By the Pesin entropy formula for endomorphisms, we know that $$h_m(f) = \int_M \sum_{i} \lambda_i^{+}(x)m_i(x) dm,$$ where $m_i(x)$ is the algebraic multiplicity of $\lambda_i(x).$ In fact, we want to prove that the ergodic decomposition of $m$ is trivial. If $m$ is not ergodic, by the ergodic decomposition theorem we can suppose that $m$ admits at least two ergodic components $\mu_1$ and $\mu_2$ such that $$\label{SRB} h_{\mu_k}(f) = \int_M \sum_{i} \lambda_i^{+}(x)m_i(x) d\mu_k, \; k=1,2.$$ Denote by $B_i = B(\mu_i), i=1,2,$ the basins of the measures $\mu_1$ and $\mu_2$ respectively, $$B_i = \left\{ x \in M \,\middle|\, \frac{1}{n} \sum_{j = 0}^{n-1} \varphi(f^j(x)) \rightarrow \int_M \varphi d\mu_i \;\, \mbox{for every}\; \varphi \in C^0(M)\right\}, i =1,2;$$ we have $\mu_i(B_i) = 1.$ By the SRB characterization given in [@QZ], the measures $\mu_1$ and $\mu_2$ are SRB, since $\mu_1$ and $\mu_2$ satisfy formula $(\ref{SRB}).$ Moreover, using the SRB theory developed in [@QZ], the measures $\tilde{\mu}_1, \tilde{\mu}_2$ are SRB measures, and for $\tilde{m}-$a.e.
$\tilde{x} \in M^f,$ we have $$\pi(\tilde{\mu}_i^{\eta(\tilde{x})}(\tilde{x})) \ll m^u_{\tilde{x}}, \; i =1,2,$$ where $\eta(\tilde{x})$ is the atom of a subordinated partition for $\tilde{m}$ and $m^u_{\tilde{x}}$ is the Lebesgue volume defined on $\mathcal{F}^u_f({\tilde{x}}).$ Since $\tilde{\mu}_i(B(\tilde{\mu}_i)) = 1, i =1,2,$ we have that $\mu_i(\pi(B(\tilde{\mu}_i)))= 1.$ By absolute continuity of conditional measures with respect to Lebesgue measure (in fact equivalence), there exist $\tilde{x}_1, \tilde{x}_2$ such that the set $$F^u_i := B_i \cap \pi(B(\tilde{\mu}_i))\cap \pi(\eta(\tilde{x}_i)), i = 1, 2,$$ has full Lebesgue measure in $\pi(\eta(\tilde{x}_i)), i =1,2.$ Now we saturate $F^u_i $ by leaves of $\mathcal{F}^s_{f}.$ That is, we take $D_i: = \displaystyle\bigcup_{z \in F^u_i } \mathcal{F}^s_f(z),$ the union of stable leaves through points of $F^u_i.$ As $\mathcal{F}^s_f$ is a continuous foliation, these saturations contain open sets modulo zero (w.r.t. $m$). Now, if $z_i \in D_i, $ then $\mathcal{F}^s_f(z_i )$ intersects $F^u_i$ in a point $y_i.$ Since $y_i, z_i$ are in the same stable leaf, $$\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{j = 1}^n \varphi(f^j(z_i)) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{j = 1}^n \varphi(f^j(y_i)) = \int_M \varphi d\mu_i , \; i =1,2;$$ thus $D_i \subset B_i, i =1,2.$ Now, since $f$ is transitive, there is $N$ such that $f^N(D_1) \cap D_2 \neq \emptyset$ in an open set (modulo a null $m$ set); since $f^N(B_1) \subset B_1,$ in particular $B_1 \cap B_2 \neq \varnothing,$ so $\mu_1 = \mu_2,$ a contradiction. [RRRRRR]{} N. Aoki, K. Hiraide, Topological Theory of Dynamical Systems. MR 95m:58095. P.M. Balagafsheh, A. Tahzibi, On the Number of SRB Measures for Surface Endomorphisms. 2014. P. Berger, A. Rovella, On the inverse limit stability of endomorphisms. , 30 (2013), no. 3, 463–475. M. Brin, On dynamical coherence. , 23(2) :395–401, 2003. A. Hammerlindl, Leaf Conjugacies on the Torus. 33 (2013), no. 3, 896–933. M. Hirsch, C. Pugh and M. Shub, Invariant Manifolds. 583, Springer-Verlag, New York, 1977. R. Mañé, C. Pugh, Stability of endomorphisms. Lecture Notes in Math., 468, Springer, 1975, 175–184. F. Micena, A. Tahzibi, Regularity of foliations and Lyapunov exponents for partially hyperbolic Dynamics. (2013), no. 33, 1071–1082. F. Przytycki, Anosov endomorphisms. 58 (1976) :249–285. M. Qian, J-S. Xie, S. Zhu, Smooth ergodic theory for endomorphisms. Vol. 1978, Springer-Verlag, Berlin Heidelberg, 2009. M. Qian, S. Zhu, SRB measures and Pesin’s entropy formula for endomorphisms. 354(4) (2002) :1453–1471. K. Sakai, Anosov maps on closed topological manifolds. 39 (1987) :505–519. [^1]:
--- abstract: | We identify general trends in the (in)civility and complexity of political discussions occurring on Reddit between January 2007 and May 2017 – a period spanning both terms of Barack Obama’s presidency and the first 100 days of Donald Trump’s presidency. We then investigate four factors that are frequently hypothesized as having contributed to the declining quality of American political discourse – (1) the rising popularity of Donald Trump, (2) increasing polarization and negative partisanship, (3) the democratization of news media and the rise of fake news, and (4) the merging of fringe groups into mainstream political discussions. author: - Rishab Nithyanand - Brian Schaffner - Phillipa Gill bibliography: - 'bibliography.bib' title: Online Political Discourse in the Trump Era --- Introduction ============ The 2016 election featured the two most disliked candidates in modern US presidential election history competing in the context of decades of increasing partisan polarization [@schaffner2017making]. In this paper we explore how online political discourse during the election differed from discourse occurring prior to it, in terms of incivility and linguistic complexity. We find that incivility in online political discourse, even in non-partisan forums, is at an all-time high, and the linguistic complexity of discourse in partisan forums has declined from a seventh-grade level to a first-grade level. The election was noteworthy for the high levels of incivility and declining complexity of discourse among political elites, particularly Donald Trump [@schaffnertrump]. Research has shown that when people are exposed to incivility from political elites, they themselves respond by using more offensive rhetoric [@gervais2014following; @kwon2017aggression]. We explore how Trump’s increasing popularity impacted the civility and complexity of discourse in partisan forums.
Our work uncovers a strong correlation between Trump’s rise in popularity and the increasing incivility observed in Republican forums on Reddit. In many ways, the 2016 campaign was the logical culmination of two decades of affective polarization that witnessed Democrats and Republicans grow increasingly negative in their feelings about the opposing party. Political scientists have documented the increasing polarization among Americans for quite some time [@abramowitz2008polarization]; however, more recent work has emphasized the emotion-based (affective) nature of this polarization. Drawing on social identity theory [@tajfel1979integrative], studies have found that one of the defining features of partisan polarization is the increasingly negative feelings that members of one party have for the other party [@iyengar2012affect]. We measure the incidence of negative partisanship in political forums and find a strong correlation with incivility, supporting the theory that partisan identity leads people to experience emotions of both enthusiasm and anger [@mason2016cross; @huddy2015expressive]. Anger, in particular, is likely to give rise to incivility due to its ability to motivate political action [@groenendyk2014emotional; @valentino2011election; @huddy2015expressive]. Thus, as Americans experience political anger more frequently, they are likely to be motivated to go online to engage in political discussions [@ryan2012makes]. While we see that the 2016 election was not very dissimilar to 2012 (in terms of the incidence of negative partisanship), we find that negative partisanship has shown an upward trend even after inauguration day (unlike in 2012). We also find that hatred towards political entities of both parties was at an all-time high during the 2016 elections, reinforcing the theory that 2016 was the ideal year for a non-establishment candidate.
The 2016 campaign also witnessed unprecedented rhetoric from a major presidential candidate regarding the credibility of the news media. Additionally, during this time, public distrust of and anger at the political establishment and traditional news media was at an all-time high [@gallup-media]. Taken together, these conditions can lead individuals to engage in partisan motivated reasoning [@weeks2015emotions], which can fuel the spread and belief of “fake news”. We explore how frequently misinformation was shared and discussed online. We find that during the elections, Republican forums shared and discussed articles from outlets known to spread conspiracy theories, heavily biased news, and fake news at a rate 16 times higher than prior to the election – and more than at any other time in the past decade. Our study shows that this misinformation fuels the uncivil nature of discourse. The racism (Trump’s statements concerning Mexicans, Muslims, and other broad groups), sexism (the Access Hollywood recordings), and general incivility exhibited by the Trump campaign did not have any significant impact on his presidential run. In fact, recent events ([*e.g.,* ]{}Charlottesville and other Unite the Right rallies) have shown that these actions have emboldened and brought fringe groups into the mainstream. We investigate partisan forums and find a significant overlap between participants in mainstream Republican and extremist forums. We uncover a strong correlation between the rise in offensive discourse and discourse participation from extremists. Reddit and the Reddit Dataset ============================= Reddit is the fourth most visited site in the United States and the ninth most visited site in the world [@Alexa-Reddit]. At a high level, Reddit is a social platform which enables its users to post content to individual forums called *subreddits*.
Reddit democratizes the creation and moderation of these subreddits – [*i.e.,* ]{}any user may create a new subreddit and most content moderation decisions are left to moderators chosen by the individual subreddit. Subscribers of a subreddit are allowed to up-vote and down-vote posts made by other users. These votes determine which posts are visible on the front page of the subreddit (and even the front page of Reddit). Reddit also allows its users to discuss and have conversations about each post through the use of comments. Specifically, subscribers of a subreddit can make and also reply to comments on posts made within the subreddit. Like posts, the comments may also be up-voted and down-voted. These votes determine which comments are visible to users reading the discussion. Reddit is an attractive platform for analyzing political behaviour for three main reasons: First, the democratization of content moderation and discussion combined with the ability of participants to use pseudonymous identities has resulted in a strong online disinhibition effect and free-speech culture on Reddit [@Reddit-Freespeech]. This is unlike Facebook, which has stronger moderation policies and requires accounts to register with their email addresses and real names (although the enforcement of both is questionable). Second, Reddit enables users to participate in long conversations and complex discussions which are not limited by length. This is unlike Twitter, which limits posts and replies to 280 characters (prior to Sep 26, 2017 this limit was 140 characters [@TweetLength]). Finally, Reddit allows scraping of its content and discussions. This has enabled the community to build a dataset [^1] including every comment and post made since the site was made public in 2005. As of October 2017, the Reddit dataset includes a total of 3.5 billion comments from 25.3 million authors made on 398 million posts.
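At this scale, per-month statistics are most naturally computed in a single streaming pass over the dump, which is distributed as JSON-lines records. A minimal sketch (the `created_utc` and `subreddit` field names match the public dump; the `POLITICAL` set is a stand-in for the 124-subreddit list described below, not the actual list):

```python
import json
import time
from collections import Counter

# Placeholder for the curated list of political subreddits.
POLITICAL = {"politics", "hillaryclinton", "the_donald"}

def monthly_counts(lines):
    """Tally comments per (month, political/non-political) bucket
    from an iterable of JSON-lines comment records."""
    counts = Counter()
    for line in lines:
        c = json.loads(line)
        month = time.strftime("%Y-%m", time.gmtime(int(c["created_utc"])))
        kind = ("political" if c["subreddit"].lower() in POLITICAL
                else "non-political")
        counts[(month, kind)] += 1
    return counts

sample = [
    '{"created_utc": 1467331200, "subreddit": "politics"}',  # 2016-07-01 UTC
    '{"created_utc": 1467331201, "subreddit": "aww"}',
]
counts = monthly_counts(sample)
```

In practice the same pass would be run over the compressed monthly dump files rather than an in-memory list.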
We categorize the posts and comments in the dataset into two categories: political and non-political. Posts and comments made in subreddits categorized by *r/politics* moderators as “related” subreddits [^2] are tagged as political. We also tag the subreddits dedicated to all past Democratic, Libertarian, and Republican presidential candidates as political. All other subreddits are tagged as non-political. In total, our political dataset contained comments and posts from 124 subreddits – each individually categorized as general-interest, democratic, libertarian, republican, international, or election-related. In our study we focus on comments and posts made between December 1$^{st}$, 2005 and May 1$^{st}$, 2017 – 100 days into Donald Trump’s presidency. We analyze every comment and post made in our set of political subreddits during this period – 130 million comments in 3 million posts – and contrast these with a random (10%) sample of non-political comments made during the same period – a total of 332 million comments in 12 million posts. Figure \[fig:comments-analyzed\] shows the number of political and non-political comments analyzed during each month from December 2005 to May 2017. It should be noted that the first political subreddit appeared only in January 2007 – therefore we have no political content to analyze before this period. ![**(log-scale)** Number of comments analyzed during each month from December 2005 to June 2017.
For each election year, P indicates the start of the primaries, R/DNom indicates the month when the Republican/Democrat candidate became the presumptive nominee, R/DNC indicates the month of the Republican/Democratic National Conventions, E indicates the election month, and I indicates the Presidential Inauguration.[]{data-label="fig:comments-analyzed"}](./figures/comments-analyzed-logscale.png){width=".49\textwidth"} Civility and Complexity of Discourse {#sec:discourse} ==================================== In order to understand how online political discourse has evolved, we focus on two concepts: (in)civility and complexity of discourse. Incivility in political discourse --------------------------------- We use the prevalence of offensive speech in political discussions on Reddit as a metric for incivility. Previous work [@mutz2006hearing] has defined uncivil discourse as “communication that violates the norms of politeness” – a definition that clearly includes offensive speech. [**Identifying offensive speech.** ]{} In order to identify whether a Reddit comment contains offensive speech, we make use of the offensive speech classifier proposed by Nithyanand [*et al.*]{} [@Nithyanand-FOCI2017]. At a high level, the classifier uses a Random Forest model built upon the cosine similarities between a “hate vector” and annotated training data, both embedded within a 100-dimensional word embedding constructed from every Reddit comment. The approach yields an accuracy between 89% and 96% on testing data. The complete specification and evaluation are described in [@Nithyanand-FOCI2017]. We note that the classifier is unable to differentiate between offensive comments and comments which quote offensive content – [*e.g.,* ]{}comments quoting Donald Trump’s candidacy announcement speech, which included derogatory remarks about Mexican immigrants [@Trump-announcement], were also classified as offensive.
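The core signal the classifier feeds to its Random Forest can be sketched in a few lines: embed a comment as the mean of its word vectors and measure cosine similarity to a hate vector. The 3-dimensional vectors, the vocabulary, and the bare-score output below are toy stand-ins for illustration only; the actual system of [@Nithyanand-FOCI2017] uses a 100-dimensional embedding trained on Reddit and a trained Random Forest rather than a raw similarity:

```python
import math

# Toy word vectors and hate vector (made up for illustration).
WORD_VECS = {
    "idiot":  (0.9, 0.1, 0.0),
    "moron":  (0.8, 0.2, 0.1),
    "policy": (0.0, 0.9, 0.3),
    "vote":   (0.1, 0.8, 0.4),
}
HATE_VEC = (1.0, 0.0, 0.0)

def embed(comment):
    """Mean of the word vectors of in-vocabulary tokens."""
    words = [WORD_VECS[w] for w in comment.lower().split() if w in WORD_VECS]
    if not words:
        return (0.0, 0.0, 0.0)
    n = len(words)
    return tuple(sum(v[i] for v in words) / n for i in range(3))

def cosine(v, w):
    dot = sum(a * b for a, b in zip(v, w))
    nv = math.sqrt(sum(a * a for a in v))
    nw = math.sqrt(sum(a * a for a in w))
    return dot / (nv * nw) if nv and nw else 0.0

def hate_score(comment):
    """Cosine similarity between the comment embedding and the hate
    vector -- the feature the real classifier builds on."""
    return cosine(embed(comment), HATE_VEC)
```

With these toy vectors, `hate_score("you idiot moron")` is close to 1 while `hate_score("vote on policy")` is near 0, which is the separation the downstream model exploits.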
To identify the entities in offensive comments, we use the SpaCy [@spacy] entity recognition toolkit augmented with a custom dictionary of political entities. [.485]{} ![image](./figures/offensiveness/paper-offensive-trend.png){width="\textwidth"} [.485]{} ![image](./figures/offensiveness/paper-authors-pol-apol-offensive-trend.png){width="\textwidth"} [.485]{} ![image](./figures/offensiveness/paper-offensive-trend-parties.png){width="\textwidth"} [.485]{} ![image](./figures/offensiveness/paper-authors-parties-offensive-trend.png){width="\textwidth"} [**Trends in offensive political discourse.** ]{} shows how the incidence of offensiveness has changed over time for subreddits in our political and non-political datasets. We find that offensive comments in political subreddits have always been at least as frequent as offensive comments in non-political subreddits. shows the fraction of all authors that posted at least one offensive comment during each month. We find that authors of comments in political subreddits are much more likely (nearly 35% more, on average) to be offensive than authors not participating in political discussions. Our data shows that the difference in incidence rates of offensive comments between political and non-political subreddits has dramatically increased since the start of the 2016 US presidential elections. In fact, we see that prior to 2014 there is only one month – June 2011, during the debt-ceiling crisis in Congress and after Obama’s announcement to withdraw large numbers of American forces from Afghanistan – where political comments were over 20% more likely to be offensive than non-political comments. Since then, we notice this to be true for short periods of time in 2014 and 2015, and for the entire period from July 2016 until May 2017.
Inspecting the offensive comments made during these periods, we find that large fractions (over 35%) of offensive comments were targeted at law enforcement authorities and the Black Lives Matter movement for the events surrounding the deaths of James Boyd (2014), Michael Brown (2014), and Freddie Gray (2015). The increase in incivility of discourse since July 2016 is attributed to the start of the US Presidential elections and the conclusions of the Democratic and Republican National Conventions – with over 80% of all offensive comments targeted at the two political parties and politicians including Hillary Clinton, Bernie Sanders, and Donald Trump. Worryingly, even after the elections and inauguration, the incidence of offensiveness in political comments and the fraction of offensive political comment authors have continued to grow. As of May 2017, we find that (1) approximately 10% of all political comments are classified as offensive, nearly 30% higher than for non-political comments and (2) nearly one-third of all political comment authors made offensive comments, over 70% higher than for non-political comment authors. *Takeaway:* Our results show that political discourse from May 2016 to May 2017 has been more offensive (and by our definition, uncivil) than any other 12-month period in Reddit’s 12-year history. [**Subreddits responsible for offensive political discourse.** ]{} shows how the incidence of offensiveness has changed over time in subreddits categorized as Democratic, Libertarian, and Republican. We find several interesting long-term trends – until 2015, comments on Democratic subreddits were on average 23% and 15% more likely to be offensive than comments on Republican and Libertarian subreddits, respectively. However, since 2015, comments on Republican subreddits have been on average 46% and 7% more likely to be offensive than comments on Democratic and Libertarian subreddits.
We find similar trends in the fraction of all authors that posted at least one offensive comment in a Democratic, Libertarian, and Republican subreddit during each month. The incidence of offensiveness in Libertarian subreddits, on the other hand, remains fairly stable through the entire period of the study, with only one spike over the 10% mark in June 2015 – the month Donald Trump announced his candidacy. Looking closer at specific events responsible for spikes in offensive discourse reveals that, prior to the start of the 2016 election season, comments in the Republican subreddits were most offensive (12% incidence rate) during early 2011 and 2014 – the period during Barack Obama’s 2011/2014 State of the Union addresses and the attempts to repeal (2011) and expand (2014) the Affordable Care Act. We see a large spike in the incidence of offensive comments starting from Donald Trump’s candidacy announcement in June 2015 (5.1% of comments and 12% of authors) to Trump’s victory of the Republican nomination in May 2016 (12.8% of comments and 35% of authors). Further, in spite of a drop in the incidence of offensiveness in comments to 11.6% after the elections, the fraction of offensive comment authors has continued to grow, reaching 38% as of May 2017. On the Democratic side, 2015 was the least offensive period in Democratic subreddits, with the incidence of offensive comments varying between 4.8% and 7%. Further, despite the growing rate of offensiveness during the 2016 primaries and general election – peaking between the election in November 2016 (6.3% of comments) and the inauguration in January 2017 (8.5% of comments) – this period remained the least offensive election cycle in Democratic subreddits, even compared to 2012 when Barack Obama was uncontested in the primaries. It is interesting to note that in spite of the low incidence of offensiveness, this period saw the highest number of offensive comment authors in the Democratic subreddits – peaking at 25% in October 2016.
*Takeaway:* Offensive political discourse has grown at a high rate in Republican subreddits. As of May 2017, comments in Republican subreddits were 55% more likely to be offensive than comments in Democratic subreddits, with nearly twice as many authors of offensive comments. [.485]{} ![image](./figures/writing-levels/paper-political-vs-nonpolitical.png){width="\textwidth"} [.485]{} ![image](./figures/writing-levels/paper-party-comparisons.png){width="\textwidth"} Complexity of political discourse --------------------------------- We focus on linguistic complexity and use the Flesch-Kincaid readability grade-level [@Flesch-Kincaid] as a metric. The Flesch-Kincaid metric assigns higher scores to text containing longer words and sentences – which generally tend to be more complex. This approach has been used in the past to understand the complexity of political speeches and is used in government and military documents in the United States. $$\label{eq:flesch-kincaid} Grade = 0.39 \times \frac{words}{sentences} + 11.8 \times \frac{syllables}{words} - 15.59$$ [**Trends in linguistic complexity of discourse.** ]{} shows the linguistic complexity of comments made during each month in political and non-political subreddits, and also broken down by Democratic, Libertarian, and Republican subreddits. We see that discourse in political subreddits is generally more complex than in non-political subreddits, despite being highly variable over time. Deeper analysis shows that this variability is introduced by the inclusion of the large “general-interest” political subreddit communities ([*e.g.,* ]{}*r/politics* and *r/worldnews*), which have over 1 million comment authors. Considering only the partisan subreddits, we see that comments had an average readability grade-level between 7.8 (Democratic subreddits) and 7.5 (Republican and Libertarian subreddits) until December 2015, with only marginal variations throughout.
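The grade-level formula above is straightforward to compute. The sketch below is self-contained; its syllable counter (counting vowel groups) is a common heuristic and only approximates the syllabification a production readability tool would use.

```python
import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level of `text`, per the equation above.

    Sentences are counted via terminal punctuation and syllables via
    vowel groups -- both heuristics, adequate for short comments.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59

simple = flesch_kincaid_grade("The cat sat.")
hard = flesch_kincaid_grade(
    "Extraordinary circumstances necessitate comprehensive deliberation."
)
```

As expected, longer words and sentences push the grade up: `hard` is far above `simple` here.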
During the 2016 Democratic and Republican primaries (January - June 2016), however, there were significant drops in complexity – Democratic and Republican subreddits had an average reading grade-level of 2.6 and 1.9, respectively. Complexity of discourse on Libertarian subreddits, on the other hand, improved to a 7.6 grade. These results suggest that the highly contested intra-party primaries on both sides led to a much lower quality of discourse, even on partisan subreddits. Since the end of the primaries (June 2016 - May 2017), complexity of discourse in Democratic subreddits improved to a 6.9 grade-level while discourse in Republican subreddits further declined to a 1.1 grade-level. During this same time, discourse in Libertarian subreddits also slightly declined to a 6.6 grade-level. *Takeaway:* The complexity of discourse in partisan subreddits was at its historical lowest during the 2016 primaries and presidential elections. While the complexity of discourse has recovered in the Democratic subreddits since the election, it has continued to decline to a first-grade level in Republican subreddits. The Trump Effect {#sec:trump-effect} ================ Anecdotal evidence has suggested that the rise in Donald Trump’s popularity resulted in more offensive political discourse. This has been referred to as the “Trump Effect” [@trump-effect]. Since we cannot prove or disprove the causal nature of the Trump Effect, we instead study the linear correlation between Donald Trump’s popularity and the offensiveness and complexity of political discourse (measured above). As a metric for Trump’s popularity, we use poll data aggregated by Real Clear Politics during the 2016 elections [@rcp-polls] and approval/disapproval data aggregated by 538 since the start of the Trump presidency [@538-polls]. We split the poll data from Real Clear Politics into two categories: primary and general-election related polls.
For the period between Trump’s candidacy announcement speech and his clinching of the Republican nomination (June 2015 - May 2016), we focus only on his weekly average vote share in polls related to the Republican primaries. Similarly, from July 2016 (the conclusion of the Democratic and Republican National Conventions) until November 8, 2016 (Election day), we focus only on Trump’s weekly average vote share in polls related to the general election. For the period following the presidential inauguration (Jan 2017 - May 2017), we focus only on Trump’s average approval and disapproval ratings as reported by 538. [**The Republican primaries (June 2015 - May 2016).** ]{} During the primaries, we find that Trump’s rise in popularity was strongly positively correlated with the rise of offensive discourse in Republican subreddits (Pearson correlation co-efficient: .84, p-value &lt; .0001) and strongly negatively correlated with the complexity of discourse in Republican subreddits (Pearson correlation co-efficient: -.65, p-value &lt; .0001). We do not find statistically significant correlations between Trump’s rise in popularity and political discourse in the Democratic or Libertarian subreddits. [**The general election (July 2016 - November 2016).** ]{} Trump’s popularity during the general election did not have a significant correlation with the complexity of discourse in any subreddits. However, his popularity was moderately correlated with offensiveness in Democratic subreddits (Pearson correlation co-efficient: .49, p-value &lt; .005). Interestingly, Hillary Clinton’s popularity during this period was also moderately correlated with the offensiveness in Republican subreddits (Pearson correlation co-efficient: .54, p-value &lt; .005). This points to the change in the nature of discourse from intra- to inter-party elections – [*i.e.,* ]{}offensive discourse in inter-party elections is correlated with the success of the “other”.
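The correlation figures reported throughout are plain Pearson coefficients over aligned time series (e.g., weekly poll share versus weekly offensive-comment rate). A minimal NumPy version, as a sketch of the computation rather than the authors' exact pipeline:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two aligned series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Toy aligned series: as one rises with the other, r approaches +1.
r_pos = pearson_r([1, 2, 3], [2, 4, 6])   # perfectly increasing together
r_neg = pearson_r([1, 2, 3], [3, 2, 1])   # perfectly opposed
```

In practice one would use `scipy.stats.pearsonr`, which also returns the p-values quoted in the text.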
This supports recent scholarship noting the rise of negative partisanship and the fact that individuals are generally motivated to engage in political discourse due to anger with the opposition [@huddy2015expressive]. [**Donald Trump’s Presidency (January 2017 - May 2017).** ]{} During the first 100 days of Trump’s presidency, we find that there is only a statistically significant correlation between his approval (and disapproval) ratings and the offensiveness in Republican subreddits. As was the case during the general elections, there is no statistically significant correlation between Trump’s popularity and complexity of discourse. We find a moderate negative correlation between Trump’s approval rating and offensive discourse in Republican subreddits (Pearson correlation co-efficient: -.59, p-value: &lt; .05) and a moderate positive correlation between Trump’s disapproval rating and offensiveness in Republican subreddits (Pearson correlation co-efficient: .55, p-value &lt; .05). It is unclear if this rise in offensiveness occurs due to attempts to “double down” in support of Trump or due to displeasure with the course of Trump’s presidency. *Takeaway:* We find that Donald Trump’s popularity is always, at least moderately, correlated with the offensiveness of political discourse. During the primaries, Trump’s popularity was strongly correlated with the rise of offensiveness in Republican subreddits. During the general election, Trump’s popularity was moderately correlated with offensiveness in Democratic subreddits and during his presidency, there is a moderate negative correlation between his approval ratings and offensiveness in Republican subreddits. Negative Partisanship {#sec:polarization} ===================== Recent work [@negative-partisanship] has suggested that “persistent and durable repulsion from a political party”, defined as *negative partisanship*, has an effect on voting decisions and election turnout. 
We explore the incidence of negative partisanship on Reddit and seek to understand how it relates to the decline of civility and complexity of discourse. We use two metrics as a measure of negative partisanship: (1) the fraction of political comments in a partisan subreddit that express strong negative sentiments towards the opposition party – [*e.g.,* ]{}the fraction of all comments in the Democratic subreddits which express negative sentiments towards the Republican party, and (2) the number of political entities that are most commonly featured in comments classified as offensive ([*i.e.,* ]{}considering all subreddits). While the first metric captures the traditional definition of negative partisanship, the second captures the trend of a growing hatred towards all political entities (or, the establishment). We use NLTK’s Vader [@sentiment-vader] sentiment analysis method to identify the sentiment of a comment. Vader returns a compound sentiment score in the \[-1, +1\] range, where -1 is the most negative sentiment and +1 is the most positive sentiment. We only consider comments with a compound sentiment $\le-.70$ – [*i.e.,* ]{}strongly negative comments. To identify political entities in comments, we use the SpaCy entity recognition method [@spacy] with a custom dictionary of political entities (manually curated from the common nouns that occur close to the words “Democrats”, “Republicans”, and “Libertarians” in our Reddit word embedding). When there are multiple political entities in a comment, it is unclear how to properly associate the sentiment of the comment with each entity – [*i.e.,* ]{}our sentiment analysis is at the comment-level, not entity-level – therefore we discard these comments. The same approach is used to identify entities that are the targets of offensive comments.
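The filtering rule above (compound score at most -0.70, exactly one political entity per comment) can be sketched as follows. `compound_score` and `political_entities` are stand-ins for NLTK's `SentimentIntensityAnalyzer` and the SpaCy entity pass, and the toy inputs are illustrative, not from the dataset:

```python
def strongly_negative(comments, compound_score, political_entities,
                      threshold=-0.70):
    """Keep comments that are strongly negative and mention exactly
    one political entity, returning (comment, entity) pairs."""
    kept = []
    for c in comments:
        ents = political_entities(c)
        # Sentiment is comment-level, not entity-level, so comments
        # mentioning multiple entities are discarded (as in the text).
        if len(ents) == 1 and compound_score(c) <= threshold:
            kept.append((c, ents[0]))
    return kept

# Toy demonstration with hard-coded scores and entity lists.
comments = [
    "The Republicans ruined everything",      # strongly negative, 1 entity
    "Democrats and Republicans both failed",  # 2 entities -> discarded
    "The Democrats did fine",                 # positive -> discarded
]
scores = {comments[0]: -0.82, comments[1]: -0.91, comments[2]: 0.44}
ents = {comments[0]: ["Republicans"],
        comments[1]: ["Democrats", "Republicans"],
        comments[2]: ["Democrats"]}
kept = strongly_negative(comments, scores.get, ents.get)
```

Here only the first comment survives the filter, paired with its single target entity.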
![Fraction of comments referencing the opposition party that have strong negative sentiments.[]{data-label="fig:negative-partisanship"}](./figures/sentiment-transition/negative-partisanship.png){width=".49\textwidth"} shows the fraction of comments referencing opposition parties that have strong negative sentiments ([vs. ]{}comments that refer to opposition parties and have other sentiments). We find that Libertarians are most likely to refer to the Democratic and Republican party with strong negative sentiments – on average, over 45% of all references to these parties are strongly negative and only 7% are positive. While the Democratic subreddits have generally expressed negative sentiments against opposition parties, the trend declined in the period prior to and during the early phase of the Democratic primaries, suggesting that intra-party elections shift the focus away from inter-party dynamics. Between Super Tuesday III (April 2016) and Election night, negative partisanship on the Democratic subreddits nearly doubled from 19% to 37%. We see a similar trend in the Republican subreddits. This is possibly explained by the fact that Hillary Clinton and Donald Trump had all but clinched their parties’ nominations after Super Tuesday III and the focus of their supporters shifted to the general election. Between Trump’s clinching of the nomination and May 2017, negative partisanship on the Republican subreddits grew from 20% to 39% (a 30-month high), displaying signs of continuing the upward trend. In contrast, since the conclusion of the 2016 elections, negative partisanship on the Democratic subreddits declined to 28% in May 2017.
When considering only data since June 2015 – the start of the primary campaign season – we find that there are statistically significant correlations between the incidence of negative partisanship and the decline of civility in political discourse, suggesting that incivility in political subreddits is frequently targeted at opposition parties. The observed correlation is found to be much stronger in Democratic subreddits (Pearson correlation co-efficient: .75, p-value &lt; .0005) than in Republican subreddits (Pearson correlation co-efficient: .39, p-value &lt; .001). We also find a moderate negative correlation between the complexity of discourse on Republican subreddits and the incidence of negative partisanship (Pearson correlation co-efficient: -.40, p-value &lt; .01). ![Number of Democratic and Republican entities in Reddit’s 100 most commonly offended entities.[]{data-label="fig:targets-of-offensiveness"}](./figures/offensiveness/offended_entity_counts.png){width=".49\textwidth"} To gain a general sense of how political entities are viewed by Reddit (all subreddits, including non-political), we ranked (all) entities by the number of times they were the sole entity in a comment classified as offensive. The results are illustrated in . We find that political entities have always been amongst Reddit’s top 100 most offended entities since 2006, peaking during the 2008, 2012, and 2016 presidential elections. The number of political entities making an appearance in the 100 most offended entities was at an all-time high (of nine entities) during the months leading up to the 2016 elections. Interestingly, we also find that the sitting President and other “establishment” figures such as the Speaker and Majority Leader always rank in the top 20 most offended entities. *Takeaway:* Although negative partisanship was at a 30-month high on Republican subreddits, it was comparable to the 2012 election season.
However, the hatred shown towards specific political “establishment” entities was unprecedented – suggesting that 2016 was indeed the year of the outsider. Fake News and the Democratization of Media {#sec:news} ========================================== In this section we explore the impact of news media consumption habits on the quality of political discourse. Specifically we focus on the impact of media from controversial outlets (known for peddling conspiracy theories, [*etc.*]{}) and democratized social platforms (YouTube and Twitter) that are increasingly being repurposed for dissemination of “news”. [**Rise of controversial media outlets.** ]{} In our study we focus on the impact of conspiracy theory peddling, heavily biased, fake, and foreign state-sponsored news outlets on political discourse on Reddit. We use tags assigned by the OpenSources project [^3] to identify when a news outlet falls in the above categories. We broadly categorize these outlets as *controversial*. We observe that of the 833 outlets identified by the OpenSources project, 487 domains were active prior to May 2015, 219 domains made their first appearance on Reddit after June 2015, and 127 domains did not appear on Reddit. [.49]{} ![image](./figures/controversial-domains/posts-made-pol-apol.png){width="\textwidth"} [.49]{} ![image](./figures/controversial-domains/posts-made-partisan.png){width="\textwidth"} [.49]{} ![image](./figures/controversial-domains/comments-made-pol-apol.png){width="\textwidth"} [.49]{} ![image](./figures/controversial-domains/comments-made-partisan.png){width="\textwidth"} shows the amount of activity (in terms of posts and comments) surrounding all controversial outlets. We find that Republican subreddits were orders of magnitude more likely to be exposed to articles associated with these outlets than any other group – accounting for over 80% of all posting and commenting activity on links to controversial outlets, during and after the 2016 election cycle. 
Interestingly, we see that this was not the case prior to the elections. Links to controversial media outlets were up to 600% and 1600% more likely during the Republican primaries and the general election than in the months prior to the start of the 2015 Republican primaries. Since the start of Trump’s presidency, the activity surrounding links to controversial outlets continues to remain high. Upon further investigation, we find that the subreddits *r/The\_Donald* and *r/conservative* were the most commonly targeted subreddits. Although we do not perform a thorough investigation of this anomalous behaviour in this paper, we use this as evidence in our ongoing investigation of a coordinated misinformation campaign targeted at Republican subreddits. In general (across all political subreddits), the incidence of offensiveness is nearly 30% higher in comments associated with controversial posts (compared to all non-controversial posts). This provides a possible explanation for why discourse was much more offensive in Republican subreddits. This hypothesis is supported by a reasonably strong positive and statistically significant correlation between the incidence of controversial posts and fraction of offensive comments (Pearson correlation co-efficient: .59, p-value &lt; .0001). We do not find statistically significant correlations between the complexity of discourse in Republican subreddits and incidence of posts from controversial outlets, however. In the Democratic subreddits, we find that a majority of posts (64%) from controversial outlets had no comment activity – suggesting that these were removed by subreddit moderators or ignored by the community. There were no statistically significant correlations between the incidence of controversial posts and political discourse in the Democratic subreddits. *Takeaway:* Republican subreddits experienced a 1600% increase in links to controversial media outlets during the general elections. 
Combined with the inflammatory nature typical of these articles, this offers an explanation for the drastic growth of offensiveness in Republican subreddits. In Democratic subreddits, there is little to no activity on posts from controversial media outlets, suggesting more effective moderation and community policing. [.49]{} ![Ranking (by number of comments generated) of YouTube, Twitter, and Facebook among posts from all media platforms in partisan subreddits.[]{data-label="fig:media"}](./figures/media-consumption/yt-ranks.png "fig:"){width="\textwidth"} [.49]{} ![Ranking (by number of comments generated) of YouTube, Twitter, and Facebook among posts from all media platforms in partisan subreddits.[]{data-label="fig:media"}](./figures/media-consumption/tw-ranks.png "fig:"){width="\textwidth"} [.49]{} ![Ranking (by number of comments generated) of YouTube, Twitter, and Facebook among posts from all media platforms in partisan subreddits.[]{data-label="fig:media"}](./figures/media-consumption/fb-ranks.png "fig:"){width="\textwidth"} [**Social platforms as news sources.** ]{} Recent polls by Gallup [@gallup-media] have shown that trust in traditional media sources is at an all-time low and is continuing to decline. Simultaneously, the 2016 US presidential election witnessed an explosion of political discourse on social and democratized media platforms – particularly YouTube, Twitter, and Facebook. This is confirmed by our data, which shows how YouTube, Twitter, and Facebook have risen to prominence as top sources of discussion and information in political subreddits. The changing landscape of media consumption for politics is apparent. When ranked by the amount of discussion generated (in terms of comments posted), we find that each category of subreddits has a preferred social media platform. Republican subreddits have used YouTube as their top information source since the 2008 US presidential elections.
Since this time, YouTube has been ranked in the top 10 media outlets for all but five months. In fact, it remained ranked number one all through the period since the conclusion of the Republican National Convention until May 2017. On the Democratic and Libertarian subreddits, we see that YouTube only occasionally appears within the top 10 media outlets. Instead we see that Twitter was the top source of discussion on the Democratic subreddits for the period since Super Tuesday I until Election day. Interestingly, unlike the Republican affinity for YouTube which has been constantly high since 2009, the Democratic affinity for Twitter increased drastically during the primaries. We find that Republican subreddits are also increasingly using Twitter as a source of information with the site moving into and staying in the top 5 ranks since February 2017. Facebook does not appear to have a major impact in Democratic and Republican subreddits – only occasionally entering the top 20 ranks. However, Libertarian subreddits have consistently had Facebook amongst their top 15 media outlets since the start of the 2012 election cycle. Since the conclusion of the 2016 elections, Facebook has become the top information source for Libertarian subreddits. In terms of impact on political discourse, we find statistically significant negative correlations between the incidence of posts from social platforms and the complexity of discourse, both in the Democratic (Pearson correlation co-efficient: -.32, p-value &lt; .0005) and Republican (Pearson correlation co-efficient: -.64, p-value &lt; .0001) subreddits. When considering all political subreddits, a similar negative correlation was found (Pearson correlation co-efficient: -.31, p-value: &lt; .001). No statistically significant correlations were found when considering the offensiveness of political discourse. 
*Takeaway:* Posts linking to social media platforms generated significant amounts of activity in subreddits associated with all parties during the 2016 elections – Democratic subreddit engagement with posts from Twitter reached a historical high, Republican subreddits continued to show a strong preference for posts linking to videos on YouTube, and Libertarian affinity for posts linking to Facebook pages continued to grow. Social media posts have a moderate negative correlation with the complexity of discourse. Fringe Groups in the Mainstream {#sec:fringe} =============================== Recent events – [*e.g.,* ]{}Unite the Right and White Nationalist rallies across the country and the Anti-Fascist rallies in response to them – have shown that fringe groups and extremists have now infiltrated mainstream political discourse in the real world. In this section we investigate their participation in mainstream political subreddits. To measure the influence of extremist groups, we identify redditors that are simultaneously active in at least one hate subreddit and one political subreddit. We say that a redditor is *active* in a subreddit for a given month if, in that month, (1) the subreddit accounts for at least 10% of their total comments or posts, or (2) they made at least 10 posts or comments in the subreddit. Our list of hate subreddits includes 274 (banned, quarantined, and still open) subreddits associated with racism – [*e.g.,* ]{}*r/coontown* and *r/nazi*, sexism – [*e.g.,* ]{}*r/TheRedPill* and *r/mensrights*, violence – [*e.g.,* ]{}*r/killingwomen* and *r/beatingtrannies*, and peddling conspiracy theories and fake news – [*e.g.,* ]{}*r/conspiracy* and *r/blackcrime*. The list of subreddits was gathered through mining comments from *r/againsthatesubreddits* and announcements of subreddit bans and quarantines.
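The activity threshold above reduces to a simple predicate. The sketch below restates the rule; the argument names are ours, not from the paper:

```python
def is_active(count_in_sub, monthly_total,
              threshold_frac=0.10, threshold_abs=10):
    """A redditor is 'active' in a subreddit for a month if that
    subreddit accounts for >= 10% of their monthly posts+comments,
    or they made >= 10 posts/comments there (the rule in the text)."""
    if monthly_total == 0:
        return False
    return (count_in_sub / monthly_total >= threshold_frac
            or count_in_sub >= threshold_abs)
```

So a prolific user clears the bar on volume alone (10 posts in a 200-post month), while a low-volume user clears it on concentration (2 of their 10 posts in one subreddit).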
We note that 87 of our 274 hate subreddits have been active for over 5 years and that 218 were active even prior to the start of the 2016 US presidential election season (May 2015). [.485]{} ![Infiltration of fringe (hateful) groups into mainstream political subreddits.[]{data-label="fig:fringe"}](./figures/fringe-groups/raw-posts.png "fig:"){width="\textwidth"} [.485]{} ![Infiltration of fringe (hateful) groups into mainstream political subreddits.[]{data-label="fig:fringe"}](./figures/fringe-groups/raw-comments.png "fig:"){width="\textwidth"} [.485]{} ![Infiltration of fringe (hateful) groups into mainstream political subreddits.[]{data-label="fig:fringe"}](./figures/fringe-groups/general-interest.png "fig:"){width="\textwidth"} shows the number of posts and comments made on mainstream (partisan and non-partisan) political subreddits by redditors who were, by our definition, active in an extremist subreddit. We see a startling rise in the number of posts made by members of extremist subreddits in the partisan and non-partisan political subreddits. Between December 2015 and Election day, there was a 200% increase in the number of posts made by fringe authors in Democratic subreddits and a 6600% increase in the number of posts made by these authors in Republican subreddits! At their peak in November 2016, these authors accounted for over 9% and 14% of all posts in Democratic and Republican subreddits, respectively. Since November, however, we see that both have declined. As of May 2017, activity of fringe authors on Democratic subreddits has returned to pre-election levels, while Republican subreddits continue to experience 3900% more posting activity from fringe redditors (compared to December 2015). Further analysis reveals that the fringe subreddits contributing the most to Republican subreddits, in terms of posting and comment activity, are *r/conspiracy*, *r/TheRedPill*, *r/KotakuInAction*, and *r/mensrights*.
During November 2016, we found that over 40% of all active posters in the following subreddits were simultaneously active on *r/The\_Donald* – *r/metacanada* (a right-wing extremist Canadian subreddit), *r/whiterights*, *r/physical\_removal* (a recently banned subreddit promoting violence against “liberals”), and *r/new\_right*. We observe similar overlaps even in non-partisan *general-interest* political subreddits. We find strong statistically significant correlations between the number of comments and posts by fringe authors and the levels of offensiveness in political discourse for partisan and non-partisan subreddits. On the Democratic subreddits there was a very strong positive correlation (Pearson correlation co-efficient: .81, p-value &lt; .0001), while the correlation was slightly weaker on the Republican (Pearson correlation co-efficient: .58, p-value &lt; .0001) and non-partisan (Pearson correlation co-efficient: .73, p-value &lt; .0001) subreddits. We also found that Republican subreddits experienced a reduction in complexity of discourse that was moderately correlated with the increasing participation from fringe authors (Pearson correlation co-efficient: -.56, p-value &lt; .0001). *Takeaway:* At the height of the 2016 presidential elections, Republican subreddits saw an order of magnitude more activity from active members of extremist subreddits, while Democratic subreddits saw activity from these authors double. Since the election, these authors have continued to participate heavily in Republican subreddits. This infiltration is positively correlated with the rise of offensiveness in all political discourse. Discussion {#sec:conclusions} ========== Our investigation of the nature of discourse on Reddit over the past decade has yielded important insights about how increasing affective partisanship has influenced the civility of online political discussions.
First, political discussions have become substantially more offensive in nature since the launch of the general election campaign for president in July 2016. Notably, this rise in incivility is overwhelmingly located on Republican (rather than Democratic) subreddits. This pattern is consistent with other research that suggests that polarization has largely been asymmetric, with Republicans exhibiting much more extremity than Democrats [@grossmann2016asymmetric]. Second, our analysis suggests that the substantial increase in incivility on reddit was strongly correlated with the rise of Donald Trump, negative partisanship, and the mainstreaming of fringe groups. When Trump was performing well in the polls, incivility also increased, suggesting that his ascendancy either (1) elicited strong negative reactions from his opponents or (2) emboldened his supporters, including holders of extremist ideologies. Negative partisanship was especially evident during the general election campaign, as Trump’s increasing success elicited more offensive rhetoric in Democratic subreddits while increasing poll results for Clinton were associated with more offensive remarks on the Republican side. Research on negative partisanship predicts that anger will increase when the opposing party is doing well [@mason2016cross; @huddy2015expressive], something we see play out clearly on reddit during the general election campaign. Third, to further analyze the role of negative partisanship, we examined the sentiments of comments that targeted either party. We find that negative partisanship continues to grow on Republican subreddits but that it has ebbed a bit on Democratic subreddits since the 2016 election. On one hand, this runs counter to what we might expect, as it is usually partisans from the losing party who react to an election outcome with anger.
On the other hand, this fits with the research suggesting that Republicans generally express higher levels of negative partisanship than Democrats [@iyengar2012affect]. Furthermore, it may signal the unique nature of Trump’s presidency. Specifically, as Trump took office without winning the popular vote and has constantly been under criticism since his inauguration, it may not be particularly surprising that the Republican base feels that their party’s status (and the legitimacy of Trump’s presidency) is under threat. This would explain why negative partisanship has remained high, even as Republicans control both branches of the federal government. Ultimately, we are able to demonstrate another unfortunate consequence of America’s political polarization – namely, the fact that online political discussions have become remarkably less civil and complex. While these trends are disturbing, we do provide some reason for hope that the situation can improve. After all, much of our evidence suggests that the degradation in discourse is tied to the rise of Trump. Thus, it is possible that our political discussions may become less offensive when his presence in the limelight fades. [^1]: <https://bigquery.cloud.google.com/dataset/fh-bigquery:reddit_comments> [^2]: <https://www.reddit.com/r/politics/wiki/relatedsubs> [^3]: <http://www.opensources.co/>
--- abstract: 'The method of fusion barrier distribution has been widely used to interpret the effect of nuclear structure on heavy-ion fusion reactions around the Coulomb barrier. We discuss a similar, but less well known, barrier distribution extracted from large-angle quasi-elastic scattering. We argue that this method has several advantages over the fusion barrier distribution, and offers an interesting tool for investigating unstable nuclei.' author: - 'K. Hagino' - 'N. Rowley' title: ' Quasi-elastic barrier distribution as a tool for investigating unstable nuclei' --- Introduction ============ It has been well recognized that heavy-ion collisions at energies around the Coulomb barrier are strongly affected by the internal structure of colliding nuclei [@DHRS98; @BT98]. The couplings of the relative motion to the intrinsic degrees of freedom (such as collective inelastic excitations of the colliding nuclei and/or transfer processes) result in a single potential barrier being replaced by a number of distributed barriers. It is now well known that a barrier distribution can be extracted experimentally from the fusion excitation function $\sigma_{\rm fus}(E)$ by taking the second derivative of the product $E\sigma_{\rm fus}(E)$ with respect to the center-of-mass energy $E$, that is, $d^2(E\sigma_{\rm fus})/dE^2$ [@RSS91]. The extracted fusion barrier distributions have been found to be very sensitive to the structure of the colliding nuclei [@DHRS98; @L95], and thus the barrier distribution method has opened up the possibility of exploiting the heavy-ion fusion reaction as a “quantum tunneling microscope” in order to investigate both the static and dynamical properties of atomic nuclei. The same barrier distribution interpretation can be applied to the scattering process as well. In particular, it was suggested in Ref.
[@ARN88] that the same information as the fusion cross section may be obtained from the cross section for quasi-elastic scattering (a sum of elastic, inelastic, and transfer cross sections) at large angles. Timmers [*et al.*]{} proposed to use the first derivative of the ratio of the quasi-elastic cross section $\sigma_{\rm qel}$ to the Rutherford cross section $\sigma_R$ with respect to energy, $-d (d\sigma_{\rm qel}/d\sigma_R)/dE$, as an alternative representation of the barrier distribution [@TLD95]. Their experimental data have revealed that the quasi-elastic barrier distribution is indeed similar to that for fusion, although the former may be somewhat smeared and thus less sensitive to nuclear structure effects (see also Refs. [@PKP02; @MSS03; @SMO02] for recent measurements). As an example, we show in Fig. 1 a comparison between the fusion and the quasi-elastic barrier distributions for the $^{16}$O + $^{154}$Sm system [@HR04]. ![ (a) The fusion barrier distribution for the $^{16}$O + $^{154}$Sm reaction. The solid line is obtained with the orientation-integrated formula with $\beta_2=0.306$ and $\beta_4$= 0.05. The dashed lines indicate the contributions from the six individual eigenbarriers. These lines are obtained by using a Woods-Saxon potential with a surface diffuseness parameter $a$ of 0.65 fm. The dotted line is the fusion barrier distribution calculated with a potential which has $a$ = 1.05 fm. (b) Same as Fig. 1(a), but for the quasi-elastic barrier distribution. (c) Comparison between the barrier distribution for fusion (solid line) and that for quasi-elastic scattering (dashed line). These functions are both normalized to unit area in the energy interval between 50 and 70 MeV.](fig1) In this contribution, we undertake a detailed discussion of the properties of the quasi-elastic barrier distribution [@HR04], which are less well known than those of the fusion counterpart.
We shall discuss possible advantages for its exploitation, putting a particular emphasis on future experiments with radioactive beams. Quasi-elastic barrier distributions =================================== Let us first discuss heavy-ion reactions between inert nuclei. The classical fusion cross section is given by, $$\sigma^{cl}_{\rm fus}(E)=\pi R_b^2\left(1-\frac{B}{E}\right)\,\theta(E-B),$$ where $R_b$ and $B$ are the barrier position and the barrier height, respectively. From this expression, it is clear that the first derivative of $E\sigma^{cl}_{\rm fus}$ is proportional to the classical penetrability for a 1-dimensional barrier of height $B$ or equivalently the s-wave penetrability, $$\frac{d}{dE}[E\sigma^{cl}_{\rm fus}(E)]=\pi R_b^2\,\theta(E-B) =\pi R_b^2\,P_{cl}(E),$$ and the second derivative to a delta function, $$\frac{d^2}{dE^2}[E\sigma^{cl}_{\rm fus}(E)]=\pi R_b^2\,\delta(E-B). \label{clfus}$$ In quantum mechanics, the tunneling effect smears the delta function in Eq. (\[clfus\]). If we define the fusion test function as $$G_{\rm fus}(E)=\frac{1}{\pi R_b^2}\frac{d^2}{dE^2} [E\sigma_{\rm fus}(E)],$$ this function has the following properties: i) it is symmetric around $E=B$, ii) it is centered on $E=B$, iii) its integral over $E$ is unity, and iv) it has a relatively narrow width of around $\hbar\Omega\ln(3+\sqrt{8})/\pi \sim 0.56 \hbar\Omega$, where $\hbar\Omega$ is the curvature of the Coulomb barrier. We next ask ourselves the question of how best to define a similar test function for a scattering problem. In the pure classical approach, in the limit of a strong Coulomb field, the differential cross section for elastic scattering at $\theta=\pi$ is given by, $$\sigma_{\rm el}^{cl}(E,\pi)=\sigma_R(E,\pi)\,\theta(B-E),$$ where $\sigma_R(E,\pi)$ is the Rutherford cross section.
Thus, the ratio $\sigma_{\rm el}^{cl}(E,\pi)/\sigma_R(E,\pi)$ is the classical reflection probability $R(E)$ ($=1-P(E)$), and the appropriate test function for scattering is [@TLD95], $$G_{\rm qel}(E)=-\frac{dR(E)}{dE} =-\frac{d}{dE}\left(\frac{\sigma_{\rm el}(E,\pi)}{\sigma_R(E,\pi)}\right). \label{qeltest}$$ In realistic systems, due to the effect of nuclear distortion, the differential cross section deviates from the Rutherford cross section even at energies below the barrier. Using the semi-classical perturbation theory, we have derived a semi-classical formula for the backward scattering which takes into account the nuclear effect to the leading order [@HR04]. The result for a scattering angle $\theta$ reads, $$\frac{\sigma_{\rm el}(E,\theta)}{\sigma_R(E,\theta)} =\alpha(E,\lambda_c)\cdot |S(E,\lambda_c)|^2, \label{ratio}$$ where $S(E,\lambda_c)$ is the total (Coulomb + nuclear) $S$-matrix at energy $E$ and angular momentum $\lambda_c = \eta\cot(\theta/2)$, with $\eta$ being the usual Sommerfeld parameter. Note that $|S(E,\lambda_c)|^2$ is nothing but the reflection probability of the Coulomb barrier, $R(E)$. For $\theta=\pi$, $\lambda_c$ is zero, and $|S(E,\lambda_c=0)|^2$ is given by $$|S(E,\lambda_c=0)|^2 = R(E) = \frac{\exp\left[-\frac{2\pi}{\hbar\Omega}(E-B)\right]} {1+\exp\left[-\frac{2\pi}{\hbar\Omega}(E-B)\right]}$$ in the parabolic approximation. $\alpha(E,\lambda_c)$ in Eq. (\[ratio\]) is given by $$\begin{aligned} \alpha(E,\lambda_c)&=&1+\frac{V_N(r_c)}{ka}\, \frac{\sqrt{2a\pi k\eta}}{E}\,\\ &\times& \left[1-\frac{r_c}{Z_PZ_Te^2}\cdot 2V_N(r_c) \left(\frac{r_c}{a}-1\right)\right],\end{aligned}$$ where $k=\sqrt{2\mu E/\hbar^2}$, with $\mu$ being the reduced mass for the colliding system. The nuclear potential $V_N(r_c)$ is evaluated at the Coulomb turning point $r_c=(\eta+\sqrt{\eta^2+\lambda_c^2})/k$, and $a$ is the diffuseness parameter in the nuclear potential. 
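Setting the nuclear distortion factor $\alpha(E,\lambda_c)$ to one, the parabolic-approximation formulas above can be checked numerically: the test function $-dR/dE$ should integrate to unity, peak at $E=B$, and have a full width at half maximum of $\hbar\Omega\ln(3+\sqrt{8})/\pi \approx 0.56\,\hbar\Omega$. A minimal sketch (the barrier height and curvature below are assumed illustrative values, not fitted to any system):

```python
import numpy as np

B = 60.0           # barrier height in MeV (assumed illustrative value)
hbar_omega = 4.0   # barrier curvature hbar*Omega in MeV (assumed illustrative value)

def reflection(E):
    """Reflection probability R(E) = |S(E, lambda_c=0)|^2 in the parabolic approximation."""
    x = 2.0 * np.pi * (E - B) / hbar_omega
    return np.exp(-x) / (1.0 + np.exp(-x))

E = np.linspace(B - 20.0, B + 20.0, 4001)
G_qel = -np.gradient(reflection(E), E)   # quasi-elastic test function -dR/dE

area = np.trapz(G_qel, E)                # property iii): unit integral
peak = E[np.argmax(G_qel)]               # property ii): centered on E = B
half = G_qel.max() / 2.0
inside = E[G_qel >= half]
fwhm = inside[-1] - inside[0]            # property iv): ~0.56 hbar*Omega
```

With these parameters the integral comes out as unity, the peak sits at $E=B$, and the width reproduces $0.56\,\hbar\Omega$ to within the grid resolution.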
![ The ratio of elastic scattering to the Rutherford cross section at $\theta=\pi$ (upper panel) and the quasi-elastic test function $G_{\rm qel}(E)=-d/dE (\sigma_{\rm el}/\sigma_R)$ (lower panel) for the $^{16}$O + $^{144}$Sm reaction. ](fig2) Figure 2 shows an example of the excitation function of the cross sections and the corresponding quasi-elastic test function, $G_{\rm qel}$ at $\theta=\pi$ for the $^{16}$O + $^{144}$Sm reaction. Because of the nuclear distortion factor $\alpha(E,\lambda_c)$, the quasi-elastic test function behaves a little less simply than that for fusion. Nevertheless, the quasi-elastic test function $G_{\rm qel}(E)$ behaves rather similarly to the fusion test function $G_{\rm fus}(E)$. In particular, both functions have a similar, relatively narrow, width, and their integral over $E$ is unity. We may thus consider that the quasi-elastic test function is an excellent analogue of the one for fusion, and we exploit this fact in studying barrier structures in heavy-ion scattering. In the presence of the channel couplings, the fusion and the quasi-elastic cross sections may be given as a weighted sum of the cross sections for uncoupled eigenchannels, $$\begin{aligned} \sigma_{\rm fus}(E)&=&\sum_\alpha w_\alpha \sigma_{\rm fus}^{(\alpha)}(E), \label{crossfus}\\ \sigma_{\rm qel}(E,\theta)&=&\sum_\alpha w_\alpha \sigma_{\rm el}^{(\alpha)}(E,\theta), \label{crossqel}\end{aligned}$$ where $\sigma_{\rm fus}^{(\alpha)}(E)$ and $\sigma_{\rm el}^{(\alpha)}(E,\theta)$ are the fusion and the elastic cross sections for a potential in the eigenchannel $\alpha$. 
These equations immediately lead to the expressions for the barrier distribution in terms of the test functions, $$\begin{aligned} D_{\rm fus}(E)&=&\frac{d^2}{dE^2}[E\sigma_{\rm fus}(E)]= \sum_\alpha w_\alpha \pi R_{b,\alpha}^2\,G_{\rm fus}^{(\alpha)}(E), \label{weightedsum} \\ D_{\rm qel}(E)&=& -\frac{d}{dE}\left(\frac{\sigma_{\rm qel}(E,\pi)}{\sigma_R(E,\pi)}\right) = \sum_\alpha w_\alpha G_{\rm qel}^{(\alpha)}(E). \end{aligned}$$ Advantages over fusion barrier distributions ============================================ There are certain attractive experimental advantages to measuring the quasi-elastic cross section $\sigma_{\rm qel}$ rather than the fusion cross section $\sigma_{\rm fus}$ to extract a representation of the barrier distribution. These are: i) less accuracy is required in the data for taking the first derivative rather than the second derivative, ii) whereas measuring the fusion cross section requires specialized recoil separators (electrostatic deflector/velocity filter) usually of low acceptance and efficiency, the measurement of $\sigma_{\rm qel}$ needs only very simple charged-particle detectors, not necessarily possessing good resolution either in energy or in charge, and iii) several effective energies can be measured at a single beam energy, since, in the semi-classical approximation, each scattering angle corresponds to scattering at a certain angular momentum, and the cross section can be scaled in energy by taking into account the centrifugal correction. Estimating the centrifugal potential at the Coulomb turning point $r_c$, the effective energy may be expressed as [@TLD95] $$E_{\rm eff}\sim E -\frac{\lambda_c^2\hbar^2}{2\mu r_c^2} =2E\frac{\sin(\theta/2)}{1+\sin(\theta/2)}. \label{Eeff}$$ Therefore, one expects that the function $-d/dE (\sigma_{\rm el}/\sigma_R)$ evaluated at an angle $\theta$ will correspond to the quasi-elastic test function (\[qeltest\]) at the effective energy given by eq. (\[Eeff\]).
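Point iii) is easy to illustrate: with eq. (\[Eeff\]), a single beam energy already samples a range of effective energies as the detection angle varies. A short sketch (the beam energy below is an assumed illustrative value):

```python
import numpy as np

def effective_energy(E, theta):
    """Effective energy of eq. (Eeff): E_eff = 2 E sin(theta/2) / (1 + sin(theta/2))."""
    s = np.sin(theta / 2.0)
    return 2.0 * E * s / (1.0 + s)

E_beam = 65.0                                    # MeV, assumed illustrative value
thetas = np.deg2rad([180.0, 170.0, 160.0, 150.0, 140.0])
E_eff = effective_energy(E_beam, thetas)         # one beam energy -> several E_eff
```

At $\theta=\pi$ the effective energy equals the beam energy, and it decreases monotonically as the detector moves away from $180^{\rm o}$, which is why a few detector rings at backward angles already map out a stretch of the excitation function.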
This last point not only improves the efficiency of the experiment, but also allows the use of a cyclotron accelerator where the relatively small energy steps required for barrier distribution experiments cannot be obtained from the machine itself [@PKP02]. Moreover, these advantages all point to greater ease of measurement with low-intensity exotic beams, which will be discussed in the next section. ![ Comparison of the ratio $\sigma_{\rm el}/\sigma_R$ (upper panel) and its energy derivative $-d/dE (\sigma_{\rm el}/\sigma_R)$ (lower panel) evaluated at two different scattering angles. ](fig3) In order to check the scaling property of the quasi-elastic test function with respect to the angular momentum, Fig. 3 compares the functions $\sigma_{\rm el}/\sigma_R$ (upper panel) and $-d/dE (\sigma_{\rm el}/\sigma_R)$ (lower panel) obtained at two different scattering angles. The solid line is evaluated at $\theta=\pi$, while the dotted line at $\theta=160^{\rm o}$. The dashed line is the same as the dotted line, but shifted in energy by $E_{\rm eff}-E$. Evidently, the scaling does work well, both at energies below and above the Coulomb barrier, although it becomes less good as the scattering angle decreases [@HR04]. Quasi-elastic scattering with radioactive beams =============================================== Low-energy radioactive beams have become increasingly available in recent years, and heavy-ion fusion reactions involving neutron-rich nuclei have been performed for a few systems [@SYW04; @LSG03; @RSC04]. New generation facilities have been under construction at several laboratories, and many more reaction measurements with exotic beams at low energies will be performed in the near future. 
Although it would still be difficult to perform high-precision measurements of fusion cross sections with radioactive beams, the measurement of the quasi-elastic barrier distribution, which can be obtained much more easily than the fusion counterpart as we discussed in the previous section, may be feasible. Since the quasi-elastic barrier distribution contains similar information to the fusion barrier distribution, the quasi-elastic measurements at backward angles may open up a novel way to probe the structure of exotic neutron-rich nuclei. ![ The excitation function for quasi-elastic scattering (upper panel) and the quasi-elastic barrier distribution (lower panel) for the $^{32}$Mg + $^{208}$Pb reaction around the Coulomb barrier. The solid and the dashed lines are the results of coupled-channels calculations which assume that $^{32}$Mg is a rotational and a vibrational nucleus, respectively. The single octupole-phonon excitation in $^{208}$Pb is also included in the calculations. ](fig4) In order to demonstrate the usefulness of the study of the quasi-elastic barrier distribution with radioactive beams, we take as an example the $^{32}$Mg + $^{208}$Pb reaction, where the quadrupole collectivity of the neutron-rich $^{32}$Mg remains to be clarified experimentally. Fig. 4 shows the excitation function of the quasi-elastic scattering (upper panel) and the quasi-elastic barrier distribution (lower panel) for this system. The solid and dashed lines are results of coupled-channels calculations where $^{32}$Mg is assumed to be a rotational or a vibrational nucleus, respectively. We include the quadrupole excitations in $^{32}$Mg up to the second member (that is, the first 4$^+$ state in the rotational band for the rotational coupling, or the double phonon state for the vibrational coupling). In addition, we include the single octupole phonon excitation at 2.615 MeV in $^{208}$Pb.
We use a version of the computer code [CCFULL]{} [@HRK99] in order to integrate the coupled-channels equations. One clearly sees well separated peaks in the quasi-elastic barrier distribution both for the rotational and for the vibrational couplings. Moreover, the two lines are considerably different at energies around and above the Coulomb barrier, although the two results are rather similar below the barrier. We can thus expect that the quasi-elastic barrier distribution can indeed be utilized to discriminate between the rotational and the vibrational nature of the quadrupole collectivity in $^{32}$Mg, although these results might be somewhat perturbed by other effects which are not considered in the present calculations, such as double octupole-phonon excitations in the target, transfer processes or hexadecapole deformations. We mention that the distorted-wave Born approximation (DWBA) yields identical results for both rotational and vibrational couplings (to first order). In order to discriminate whether the transitions are vibration-like or rotation-like, at least second-step processes (reorientation and/or couplings to higher members) are necessary. The coupling effect plays a more important role in low-energy reactions than at high and intermediate energies. Therefore, we expect that quasi-elastic scattering around the Coulomb barrier will provide a useful means to allow the detailed study of the structure of neutron-rich nuclei in the near future. This work was supported by the Grant-in-Aid for Scientific Research, Contract No. 16740139, from the Japan Society for the Promotion of Science. [99]{} M. Dasgupta, D.J. Hinde, N. Rowley, and A.M. Stefanini, Annu. Rev. Nucl. Part. Sci. [**48**]{}, 401 (1998). A.B. Balantekin and N. Takigawa, Rev. Mod. Phys. [**70**]{}, 77 (1998). N. Rowley, G.R. Satchler, and P.H. Stelson, Phys. Lett. [**B254**]{}, 25 (1991). J.R. Leigh [*et al.*]{}, Phys. Rev. C[**52**]{}, 3151 (1995). M.V. Andres, N. Rowley, and M.A.
Nagarajan, Phys. Lett. [**202B**]{}, 292 (1988). H. Timmers [*et al.*]{}, Nucl. Phys. [**A584**]{}, 190 (1995). E. Piasecki [*et al.*]{}, Phys. Rev. C[**65**]{}, 054611 (2002). D.S. Monteiro [*et al.*]{}, Nucl. Phys. [**A725**]{}, 60 (2003). R.F. Simoes [*et al.*]{}, Phys. Lett. [**B527**]{}, 187 (2002). K. Hagino and N. Rowley, Phys. Rev. C[**69**]{}, 054610 (2004). C. Signorini [*et al.*]{}, Nucl. Phys. [**A735**]{}, 329 (2004). J.F. Liang [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 152701 (2003). R. Raabe [*et al.*]{}, Nature [**431**]{}, 823 (2004). K. Hagino, N. Rowley, and A.T. Kruppa, Comp. Phys. Comm. [**123**]{}, 143 (1999).
--- abstract: 'Given a directed graph $D=(N,A)$ and a sequence of positive integers $1 \leq c_1< c_2< \dots < c_m \leq |N|$, we consider those path and cycle polytopes that are defined as the convex hulls of simple paths and cycles of $D$ of cardinality $c_p$ for some $p \in \{1,\dots,m\}$, respectively. We present integer characterizations of these polytopes by facet defining linear inequalities for which the separation problem can be solved in polynomial time. These inequalities can simply be transformed into inequalities that characterize the integer points of the undirected counterparts of cardinality constrained path and cycle polytopes. Beyond that, we investigate some further inequalities, in particular inequalities that are specific to odd/even paths and cycles.' author: - Volker Kaibel and Rüdiger Stephan title: On cardinality constrained cycle and path polytopes --- Introduction ============ Let $D=(N,A)$ be a directed graph on $n$ nodes that has neither loops nor parallel arcs, and let $c = (c_1,\dots,c_m)$ be a nonempty sequence of integers such that $1 \leq c_1 < c_2 < \dots < c_m \leq n$ holds. Such a sequence is called a *cardinality sequence*. For two different nodes $s,t \in N$, the *cardinality constrained (s,t)-path polytope*, denoted by ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$, is the convex hull of the incidence vectors of simple directed $(s,t)$-paths $P$ such that $|P| = c_p$ holds for some $p \in \{1,\dots,m\}$. The *cardinality constrained cycle polytope* ${P_C^{c}(D)}$, similarly defined, is the convex hull of the incidence vectors of simple directed cycles $C$ with $|C|=c_p$ for some $p$. Note that, since $D$ does not have loops, we may assume $c_1 \geq 2$ when we investigate cycle polytopes. The undirected counterparts of these polytopes are defined similarly. We denote them by ${P_{s,t-\mbox{\scriptsize path}}^{c}(G)}$ and ${P_C^{c}(G)}$, where $G$ is an undirected graph.
We denote the associated polytopes without cardinality restrictions by $P_{s,t-\mbox{\scriptsize path}}(D)$, $P_{s,t-\mbox{\scriptsize path}}(G)$, $P_C(D)$, and $P_C(G)$. Cycle and path polytopes, with and without cardinality restrictions, defined on graphs or digraphs, are already well studied. For a literature survey on these polytopes see Table 1.

---------------------------------------------- -------------------------------------------------------------
Schrijver [@Schrijver2003], chapter 13: dominant of $P_{s,t-\mbox{\scriptsize path}}(D)$
Stephan [@Stephan]: $P_{s,t-\mbox{\scriptsize path}}^{(k)}(D)$
Dahl, Gouveia [@DG]: $P_{s,t-\mbox{\scriptsize path}}^{\leq k}(D):= P_{s,t-\mbox{\scriptsize path}}^{(1,\dots,k)}(D)$
Dahl, Realfsen [@DR]: $P_{s,t-\mbox{\scriptsize path}}^{\leq k}(D)$, $D$ acyclic
Nguyen [@Nguyen]: dominant of $P_{s,t-\mbox{\scriptsize path}}^{\leq k}(G)$
Balas, Oosten [@BO]: directed cycle polytope $P_C(D)$
Balas, Stephan [@BST]: dominant of $P_C(D)$
Coullard, Pulleyblank [@CP], Bauer [@Bauer]: undirected cycle polytope $P_C(G)$
Hartmann, Özlük [@HO]: $P_C^{(k)}(D)$
Maurras, Nguyen [@MN1; @MN2]: $P_C^{(k)}(G)$
Bauer, Savelsbergh, Linderoth [@BLS]: $P_C^{\leq k}(G)$
---------------------------------------------- -------------------------------------------------------------
: **Literature survey on path and cycle polyhedra**

Those publications that treat cardinality restrictions discuss only the cases $\leq k$ or $= k$, while we address the general case. In particular, we assume $m \geq 2$. The main contribution of this paper will be the presentation of IP-models (or IP-formulations) for cardinality constrained path and cycle polytopes whose inequalities generally define facets with respect to complete graphs and digraphs. Moreover, the associated separation problem can be solved in polynomial time. The basic idea of this paper can be presented best for cycle polytopes.
Given a finite set $B$ and a cardinality sequence $b=(b_1,\dots,b_m)$, the set ${\mbox{CHS}}^{b}(B):=\{F \subseteq B : |F|=b_p \mbox{ for some } p\}$ is called a *cardinality homogenous set system*. Clearly, $P_C^c(D) = \mbox{conv} \{\chi^C \in \mathbb{R}^A\;|\; C \mbox{ simple cycle}, \, C \in CHS^c(A)\}$, where $CHS^{c}(A)$ is the cardinality homogeneous set system defined on the arc set $A$ of $D$. According to Balas and Oosten [@BO], the integer points of the cycle polytope $P_C(D)$ can be characterized by the system $$\label{model1} \begin{array}{rcll} x(\delta^{\mbox{\scriptsize out}}(i))- x(\delta^{\mbox{\scriptsize in}}(i)) & = & 0 & \mbox{for all } i \in N,\\ x(\delta^{\mbox{\scriptsize out}}(i)) & \leq & 1 & \mbox{for all } i \in N,\\ - x((S:N \setminus S)) + x(\delta^{\mbox{\scriptsize out}}(i))+ x(\delta^{\mbox{\scriptsize out}}(j)) & \leq & 1& \mbox{for all } S \subset N,\\ & & & 2 \leq |S| \leq n-2,\\ & & & i \in S, j \in N \setminus S,\\ x(A) & \geq & 2,\\ \multicolumn{3}{r}{x_{ij} \in \{0,1\}} & \mbox{for all } (i,j) \in A. \end{array}$$ Here, $\delta^{\mbox{\scriptsize out}}(i)$ and $\delta^{\mbox{\scriptsize in}}(i)$ denote the set of arcs leaving and entering node $i$, respectively; for an arc set $F \subseteq A$ we set $x(F):=\sum_{(i,j) \in F} x_{ij}$; for any subsets $S,T$ of $N$, $(S:T)$ denotes $\{(i,j) \in A| i \in S, j \in T\}$. Moreover, for any $S \subseteq N$, we denote by $A(S)$ the subset of arcs whose both endnodes are in $S$. Grötschel [@Groetschel] presented a complete linear description of a cardinality homogeneous set system. 
For $CHS^{c}(A)$, the model reads: $$\label{model2} \begin{array}{l} \hspace{3.7cm} \begin{array}{rcccll} 0 & \leq & x_{ij}& \leq & 1 & \mbox{for all } (i,j) \in A,\\ c_1 & \leq & x(A) & \leq & c_m, \end{array} \\ \\ (c_{p+1} - |F|) \; x(F)-(|F| - c_p) \; x(A \setminus F) \leq c_p(c_{p+1}-|F|) \\ \hspace{1cm} \mbox{for all } F \subseteq A \mbox{ with } c_p < |F| < c_{p+1} \mbox{ for some } p \in \{1,\dots,m-1\}. \end{array}$$ The *cardinality bounds* $c_1 \leq x(A) \leq c_m$ exclude all subsets of $A$ whose cardinalities are out of the bounds $c_1$ and $c_m$, while the latter class of inequalities of model (\[model2\]), which are called *cardinality forcing inequalities*, cut off all arc sets $F \subseteq A$ of forbidden cardinality between the bounds, since for each such $F$, the cardinality forcing inequality associated with $F$ is violated by $\chi^F$: $$(c_{p+1}-|F|) \chi^F(F)-(|F|-c_p)\chi^F(A \setminus F)= |F|(c_{p+1}-|F|)> c_p(c_{p+1}-|F|).$$ However, for any $H \in CHS^{c}(A)$ the inequality associated with $F$ is valid. If $|H| \leq c_p$, then $(c_{p+1}-|F|) \chi^H(F)-(|F|-c_p)\chi^H(A \setminus F) \leq (c_{p+1}-|F|)\,|H \cap F| \leq c_p (c_{p+1}-|F|)$, and equality holds if $|H|=c_p$ and $H \subseteq F$. If $|H| \geq c_{p+1}$, then $(c_{p+1}-|F|) \chi^H(F)-(|F|-c_p)\chi^H(A \setminus F) \leq |F|(c_{p+1}-|F|) - (c_{p+1}-|F|)(|F|-c_p) = c_p (c_{p+1}-|F|)$, and equality holds if $|H|=c_{p+1}$ and $F \subseteq H$. Combining both models obviously results in an integer characterization for the cardinality constrained cycle polytope ${P_C^{c}(D)}$. However, the cardinality forcing inequalities in this form are quite weak, that is, they define very low dimensional faces of ${P_C^{c}(D)}$. The key for obtaining stronger cardinality forcing inequalities for $P_C^c(D)$ is to count the nodes of a cycle rather than its arcs.
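Before passing to this node-counting strengthening, the behaviour of the arc-based cardinality forcing inequalities of model (\[model2\]) can be confirmed by brute force on a toy ground set (the ground set of size $6$ and the sequence $c=(2,5)$ below are illustrative choices, not tied to any particular digraph):

```python
from itertools import combinations

ground = range(6)     # toy ground set standing in for the arc set A
c_p, c_p1 = 2, 5      # consecutive allowed cardinalities c_p < c_{p+1}

def lhs(F, H):
    """Left-hand side (c_{p+1}-|F|)|H ∩ F| - (|F|-c_p)|H \\ F| evaluated at chi^H."""
    inter = len(F & H)
    return (c_p1 - len(F)) * inter - (len(F) - c_p) * (len(H) - inter)

def rhs(F):
    return c_p * (c_p1 - len(F))

for F in map(set, combinations(ground, 3)):   # forbidden size: c_p < 3 < c_{p+1}
    assert lhs(F, F) > rhs(F)                 # chi^F is cut off ...
    for k in (c_p, c_p1):                     # ... while every allowed set is feasible
        assert all(lhs(F, set(H)) <= rhs(F) for H in combinations(ground, k))
```

Equality is attained exactly in the tight cases mentioned above, e.g. for $|H|=c_p$ with $H \subseteq F$, or $|H|=c_{p+1}$ with $F \subseteq H$.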
The trivial, but crucial observation here is that, for the incidence vector $x\in\{0,1\}^A$ of a cycle in $D$ and for every node $i\in N$, we have $x(\delta^{\mbox{\scriptsize out}}(i))=1$ if the cycle contains node $i$, and $x(\delta^{\mbox{\scriptsize out}}(i))=0$ if it does not. Thus, for every $W\subseteq N$ with $c_p<|W|<c_{p+1}$ for some $p\in\{1,\dots,m-1\}$, the cardinality-forcing inequality $$(c_{p+1} - |W|) \sum_{i \in W} x(\delta^{\mbox{\scriptsize out}}(i)) - (|W| - c_p) \sum_{i \in N \setminus W} x(\delta^{\mbox{\scriptsize out}}(i)) \leq c_p(c_{p+1} - |W|),$$ is valid for ${P_C^{c}(D)}$, cuts off all cycles $C$, with $c_p < |C| < c_{p+1}$, that visit $\min \{|C|, |W|\}$ nodes of $W$, and is satisfied with equality by all cycles of cardinality $c_p$ or $c_{p+1}$ that visit $\min \{|C|, |W|\}$ nodes of $W$. Using these inequalities yields the following integer characterization for ${P_C^{c}(D)}$: $$\label{model3} \begin{array}{rcll} x(\delta^{\mbox{\scriptsize out}}(i))- x(\delta^{\mbox{\scriptsize in}}(i)) & = & 0 & \mbox{for all } i \in N,\\ x(\delta^{\mbox{\scriptsize out}}(i)) & \leq & 1 & \mbox{for all } i \in N,\\ - x((S:N \setminus S)) + x(\delta^{\mbox{\scriptsize out}}(i))+ x(\delta^{\mbox{\scriptsize out}}(j)) & \leq & 1& \mbox{for all } S \subset N,\\ & & & 2 \leq |S| \leq n-2,\\ & & & i \in S, j \in N \setminus S,\\ \\ x(A) & \geq & c_1,\\ x(A) & \leq & c_m,\\ \\ (c_{p+1} - |W|) \sum_{i \in W} x(\delta^{\mbox{\scriptsize out}}(i)) & & & \\ - (|W|- c_p) \sum_{i \in N \setminus W} x(\delta^{\mbox{\scriptsize out}}(i)) & & & \\ - c_p (c_{p+1} - |W|) & \leq & 0 & \forall \; W \subseteq N: \; \exists p\\ & & & \mbox{with } c_p < |W| <c_{p+1},\\ \\ x_{ij}& \in& \{0,1\} & \mbox{for all } (i,j) \in A.
\end{array}$$ However, in the polyhedral analysis of cardinality constrained path and cycle polytopes we will focus on the directed cardinality constrained path polytope for a simple reason: valid inequalities for ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$ can easily be transformed into valid inequalities for the other polytopes. In particular, from the IP-model for ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$ that we present in section 3 we derive IP-models for the remaining polytopes $\mathcal{P}$, as illustrated in Figure 1, such that a transformed inequality is facet defining for $\mathcal{P}$ when the original inequality is facet defining for ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$. In addition, the subpolytopes $P_{s,t-\mbox{\scriptsize path}}^{(c_p)}(D)$ of $P_{s,t-\mbox{\scriptsize path}}^{c}(D)$ were studied in [@Stephan]. Theorem \[T3\] in Section 2 and Table 1 in [@Stephan] imply that they are of codimension 1 whenever $4 \leq c_p \leq n-1$, provided that we have an appropriate digraph $D$. Thus, any facet defining inequality $\alpha x \leq \alpha_0$ for $P_{s,t-\mbox{\scriptsize path}}^{(c_p)}(D)$ which is also valid for $P_{s,t-\mbox{\scriptsize path}}^{c}(D)$ can easily be shown to be facet defining also for $P_{s,t-\mbox{\scriptsize path}}^{c}(D)$ if $\alpha y =\alpha_0$ holds for some $y \in P_{s,t-\mbox{\scriptsize path}}^{c}(D) \setminus P_{s,t-\mbox{\scriptsize path}}^{(c_p)}(D)$. So, many facet proofs in the present paper need not be given from scratch, but can be traced back to results in [@Stephan]. Figure 1. ${P_{s,t-\mbox{\scriptsize path}}^{c}(D)}$ and related polytopes. In the following we investigate the cardinality constrained path polytope $ P_{0,n-\mbox{\scriptsize path}}^c(D)$ defined on a digraph $D=(N,A)$ with node set $N = \{0,\dots,n\}$. In particular, $s=0, t=n$.
Since $(0,n)$-paths do not use arcs entering $0$ or leaving $n$, we may assume that $\delta^{\mbox{\scriptsize in}}(0)= \delta^{\mbox{\scriptsize out}}(n)= \emptyset$. Next, suppose that $A$ contains the arc $(0,n)$ and the cardinality sequence $c$ starts with $c_1=1$. Then the equation $$\dim P_{0,n-\mbox{\scriptsize path}}^{(c_1,c_2,\dots,c_m)}(D) = \dim P_{0,n-\mbox{\scriptsize path}}^{(c_2,\dots,c_m)}(D)+1$$ obviously holds. Moreover, an inequality $\alpha x \leq \alpha_0$ defines a facet of $P_{0,n-\mbox{\scriptsize path}}^{(c_2,\dots,c_m)}(D)$ if and only if the inequality $\alpha x + \alpha_0 x_{0n} \leq \alpha_0$ defines a facet of $P_{0,n-\mbox{\scriptsize path}}^{(1,c_2,\dots,c_m)}(D)$. Thus, the consideration of cardinality sequences starting with 1 does not give any new insights into the facial structure of cardinality constrained path polytopes. So we may assume that $A$ does not contain the arc $(0,n)$. So, for our purposes it suffices to suppose that the arc set $A$ of $D$ is given by $$\label{arcset} A=\{(0,i), (i,n) : i=1,\dots,n-1\} \bigcup \{(i,j) : 1 \leq i,j \leq n-1, i \neq j\}.$$ Therefore, by default, we will deal with the directed graph $\tilde{D}_n = (\tilde{N}_n, \tilde{A}_n)$, where $\tilde{N}_n=\{0,1,\dots,n\}$ and $\tilde{A}_n=A$ is given by (\[arcset\]). The remainder of the paper is organized as follows: In Section 2, we examine the relationship between directed path and cycle polytopes. In Section 3, we consider the inequalities of the IP-model for the directed cardinality constrained path polytope ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ and give necessary and sufficient conditions for them to be facet defining. Moreover, we present some further classes of inequalities that also cut off forbidden cardinalities. Finally, in Section 4, we transform facet defining inequalities for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ into facet defining inequalities for the other polytopes.
The relationship between directed path and cycle polytopes ========================================================== This section generalizes the results in [@Stephan], Section 2. Denote by $\mathcal{P}$ the set of simple $(0,n)$-paths $P$ in ${\tilde{D}_n}=({\tilde{N}_n},{\tilde{A}_n})$. Let $D'$ be the digraph that arises by removing node $0$ from ${\tilde{D}_n}$ and identifying $\delta^{\mbox{\scriptsize out}}(0)$ with $\delta^{\mbox{\scriptsize out}}(n)$. Then, $D'$ is a complete digraph on node set $\{1,\dots,n\}$ and $\mathcal{P}$ becomes the set $\mathcal{C}^n$ of simple cycles that visit node $n$. The convex hull of the incidence vectors of cycles $C \in \mathcal{C}^n$ in turn is the restriction of the cycle polytope defined on $D'$ to the hyperplane $x(\delta^{\mbox{\scriptsize out}}(n))=1$. Balas and Oosten [@BO] showed that the *degree constraint* $$x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$$ induces a facet of the cycle polytope defined on a complete digraph. Hence, the path polytope $P_{0,n-\mbox{\scriptsize path}}({\tilde{D}_n})$ is isomorphic to a facet of the cycle polytope $P_C(D')$. From the next theorem we conclude that this relation holds also for cardinality constrained path and cycle polytopes. We start with some preliminary statements from linear algebra. \[L1\] Let $k \neq \ell$ be natural numbers, let $x^1,x^2,\dots,x^r \in \mathbb{R}^p$ be vectors satisfying the equation $1^T x^i = k$, and let $y \in \mathbb{R}^p$ be a vector satisfying the equation $1^Ty=\ell$. Then the following holds:\ (i) $y$ is not in the affine hull of the set $\{x^1,\dots,x^r\}$.\ (ii) The points $x^1,\dots,x^r$ are affinely independent if and only if they are linearly independent.
$\Box$ According to the terminology of Balas and Oosten [@BO], for any digraph $D=(N,A)$ on $n$ nodes we call the polytope $$P_{CL}^c(D):= \{(x,y) \in P_C^c(D) \times \mathbb{R}^n : y_i=1-x(\delta^{\mbox{\scriptsize out}}(i)),i=1,\dots,n\}$$ the *cardinality constrained cycle-and-loops polytope*. Its integer points are the incidence vectors of spanning unions of a simple cycle and loops. \[L2\] The points $x^1,\dots,x^p \in {P_C^{c}(D)}$ are affinely independent if and only if the corresponding points $(x^1,y^1),\dots,(x^p,y^p) \in P_{CL}^c(D)$ are affinely independent. The map $f : P_{CL}^c(D) \to P_C^c(D), \; (x,y) \mapsto x$ is an affine isomorphism. \[T3\] Let $D_n=(N,A)$ be the complete digraph on $n \geq 3$ nodes and $c=(c_1,\dots,c_m)$ a cardinality sequence with $m \geq 2$. Then the following holds:\ (i) The dimension of ${P_C^{c}(D_n)}$ is $(n-1)^2$.\ (ii) For any node $i \in N$, the degree inequality $x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$ defines a facet of ${P_C^{c}(D_n)}$. \(i) Balas and Oosten [@BO] proved that $\dim P_C(D_n)=(n-1)^2$, while Theorem 1 of Hartmann and Özlük [@HO] says that $$\label{dimeq} \dim P_C^{(k)}(D_n)= \left \{ \begin{array}{ll} |A|/2 -1,& \mbox{if } k=2,\\ n^2-2n, & \mbox{if } 2<k<n \mbox{ and } n \geq 5,\\ n^2-3n+1, & \mbox{if } k=n \mbox{ and } n \geq 3, \end{array} \right .$$ and $\dim P_C^{(3)}(D_4)= 6$. Since ${P_C^{c}(D_n)}\subseteq P_C(D_n)$, it follows immediately that $\dim {P_C^{c}(D_n)}\leq (n-1)^2$. When $n=3$, $m \geq 2$ implies ${P_C^{c}(D_n)}=P_C(D_n)$, and thus $\dim P_C^{(2,3)}(D_3)=4$. When $n=4$, the statement can be verified using a computer program, for instance, with `polymake` [@polymake]. For $n \geq 5$ the claim follows from (\[dimeq\]) and Lemma \[L1\] (i) unless $c=(2,n)$: there exists some cardinality $c_p$ with $2<c_p<n$, and thus there are $n^2-2n+1$ affinely independent vectors $x^r \in P_C^{(c_p)}(D_n) \subset {P_C^{c}(D_n)}$.
Moreover, since $m \geq 2$, there is a vector $y \in {P_C^{c}(D_n)}$ of another cardinality which is affinely independent of the points $x^r$. Hence, ${P_C^{c}(D_n)}$ contains $n^2-2n+2$ affinely independent points, proving $\dim {P_C^{c}(D_n)}= (n-1)^2$. When $c=(2,n)$, the above argument fails, since the dimensions of both polytopes $P_C^{(2)}(D_n)$ and $P_C^{(n)}(D_n)$ are less than $n^2-2n$. Setting $d_n:=\dim P_C^{(n)}(D_n)$, we see that there are $d_n+1=n^2-3n+2$ linearly independent points $x^r \in P_C^{(2,n)}(D_n) \cap P_C^{(n)}(D_n)$ satisfying $1^T x^r = n$. Clearly, the points $(x^r,y^r) \in P_{CL}^{(2,n)}(D_n)$ are also linearly independent. Next, consider the point $(x^{23},y^{23})$, where $x^{23}$ is the incidence vector of the 2-cycle $\{(2,3),(3,2)\}$, and $n-1$ further points $(x^{1i},y^{1i})$, where $x^{1i}$ is the incidence vector of the 2-cycle $\{(1,i),(i,1)\}$. The incidence matrix $Z$ whose rows are the vectors $(x^r,y^r)$, $r=1,2,\dots,d_n+1$, $(x^{23},y^{23})$, and $(x^{1i},y^{1i})$, $i=2,3,\dots,n$, is of the form $$Z= \begin{pmatrix} X & \mathbf{0} \\ Y & L \\ \end{pmatrix},$$ where $$L= \left( \begin{array}{c|c} 1 & 0 \hspace{0.2cm} 0 \hspace{0.2cm} 1 \cdots 1 \\ \hline \\[-.5em] \mathbf{0} & E-I \\ \end{array} \right) .$$ Here $E$ is the $(n-1) \times (n-1)$ matrix of all ones and $I$ the $(n-1) \times (n-1)$ identity matrix. $E-I$ is nonsingular, and thus $L$ is of rank $n$. $X$ is of rank $d_n+1$, and hence rank $(Z)=d_n+1+n=n^2-2n+2$. Together with Lemma \[L2\], this yields the desired result. \(ii) When $n \leq 4$, the statement can be verified using a computer program. When $n \geq 5$ and $4 \leq c_p < n$ for some index $p \in \{1, \dots,m\}$, the claim can be shown along the lines of the proof of part (i) using Theorem 11 of Hartmann and Özlük [@HO], which says that the degree constraint defines a facet of $P_C^{(c_p)}(D_n)$. It remains to show that the claim is true for $c \in \{(2,3),(2,n),(3,n),(2,3,n)\}$, $n \geq 5$. W.l.o.g.
consider the inequality $x(\delta^{\mbox{\scriptsize out}}(1)) \leq 1$. When $c=(2,3)$, consider all 2- and 3-cycles whose incidence vectors satisfy $x(\delta^{\mbox{\scriptsize out}}(1))=1$. These are exactly $n^2-2n+1$ cycles, namely the 2-cycles $\{(1,j),(j,1)\}$, $j=2,\dots,n$, and the 3-cycles $\{(1,j),(j,k),(k,1)\}$ for all arcs $(j,k)$ that are not incident with node $1$. Their incidence vectors are affinely independent, and hence, the degree constraint is facet defining for $P_C^{(2,3)}(D_n)$. This also implies that it induces a facet of $P_C^{(2,3,n)}(D_n)$. Turning to the case $c=(2,n)$, note that the degree constraint is satisfied with equality by all Hamiltonian cycles. Hence, we have $d_n+1$ linearly independent Hamiltonian cycles and, again, the 2-cycles $\{(1,i),(i,1)\}$, which are linearly independent of them. Finally, let $c=(3,n)$. Besides $d_n+1$ Hamiltonian cycles, consider the 3-cycles $\{(1,3),(3,4),(4,1)\}$ and $\{(1,2),(2,j),(j,1)\}$, $j=3,\dots,n$. Then the $n^2-2n+1$ corresponding points in $P_{CL}^c(D_n)$ form a nonsingular matrix. Hence, the desired result follows by Lemma \[L2\]. Given a cardinality sequence $c=(c_1,\dots,c_m)$ with $m \geq 2$ and $c_1 \geq 2$, Theorem \[T3\] implies that $\dim {P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}= n^2-2n$. From Theorem \[T3\] another important fact can be derived. Facet defining inequalities for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ can easily be lifted to facet defining inequalities for ${P_C^{c}(D_n)}$. For sequential lifting, see Nemhauser and Wolsey [@NW]. \[lifting\] Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$ and $c_1 \geq 2$. Let $\alpha x \leq \alpha_0$ be a facet defining inequality for $P_{0,n-\mbox{\scriptsize path}}^{c}({\tilde{D}_n})$ and $\gamma$ the maximum of $\alpha(C)$ over all cycles $C$ in ${\tilde{D}_n}$ with $|C|=c_p$ for some $p$.
Setting $\alpha_{ni}:=\alpha_{0i}$ for $i=1,\dots,n-1$, the inequality $$\sum_{i=1}^n \sum_{j=1 \atop j \neq i}^n \alpha_{ij} x_{ij} + (\gamma -\alpha_0)x(\delta^{\mbox{\scriptsize out}}(n)) \leq \gamma$$ defines a facet of $P_C^{c}(D_n)$. $\Box$ No similar relationship seems to hold between undirected cycle and path polytopes. Facets of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ {#SecFacets} ============================================================ Let $D=(N,A)$ be a digraph on node set $N=\{0,\dots,n\}$. The integer points of $P_{0,n-\mbox{\scriptsize path}}^c(D)$ are characterized by the following system: $$\label{pathmodel} \begin{array}{rcll} x(\delta^{\mbox{\scriptsize out}}(i))- x(\delta^{\mbox{\scriptsize in}}(i)) & = & \multicolumn{2}{l}{ \left \{ \begin{array}{r@{}l} 1 & \mbox{ if } i=0,\\ 0 &\mbox{ if } i \in N \setminus \{0,n\},\\ -1 & \mbox{ if } i=n,\\ \end{array} \right.} \\ x(\delta^{\mbox{\scriptsize out}}(i)) & \leq & 1 & \mbox{for all } i \in N \setminus \{0,n\},\\ x((S:N \setminus S)) - x(\delta^{\mbox{\scriptsize in}}(j)) & \geq & 0 & \forall S \subset N: 0,n \in S, j \in N \setminus S,\\ \\ x(A) & \geq & c_1,\\ x(A) & \leq & c_m,\\ \\ (c_{p+1} - |W|+1) \sum_{i \in W} x(\delta^{\mbox{\scriptsize out}}(i)) & & & \\ - (|W|-1 - c_p) \sum_{i \in N \setminus W} x(\delta^{\mbox{\scriptsize out}}(i)) & & & \\ - c_p (c_{p+1} - |W|+1) & \leq & 0 & \forall \; W \subseteq N: \; 0,n \in W, \; \exists p\\ & & & \mbox{with } c_p < |W|-1 <c_{p+1},\\ \\ x_{ij}& \in& \{0,1\} & \mbox{for all } (i,j) \in A. \end{array}$$ Here, the cardinality forcing inequalities arise in a slightly different form than for cycles, since a simple path visits one node more than it has arcs, whereas a simple cycle visits exactly as many nodes as arcs. The first three constraint classes together with the integrality constraints ensure that $x$ is the incidence vector of a simple $(0,n)$-path $P$ (cf. [@Stephan]). The cardinality bounds and the cardinality forcing inequalities guarantee that $|P|=c_p$ for some $p$.
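For small instances the system above can be checked exhaustively. The following Python sketch (all helper names are ours; it is an illustration, not part of the paper) enumerates every $0/1$ arc vector of $\tilde{D}_4$ with $c=(2,4)$ and verifies that the vectors satisfying all constraint classes are exactly the incidence vectors of the simple $(0,4)$-paths with $2$ or $4$ arcs:

```python
from itertools import combinations, permutations, product

# Exhaustive check of the IP-model on the instance n = 4, c = (2, 4).
n, cs = 4, (2, 4)
N = range(n + 1)
inner = range(1, n)
A = [(0, i) for i in inner] + [(i, n) for i in inner] + \
    [(i, j) for i in inner for j in inner if i != j]

def out_sum(x, i):   # x(delta_out(i))
    return sum(x[a] for a in A if a[0] == i)

def in_sum(x, i):    # x(delta_in(i))
    return sum(x[a] for a in A if a[1] == i)

def feasible(x):
    # flow conservation constraints
    for i in N:
        b = 1 if i == 0 else (-1 if i == n else 0)
        if out_sum(x, i) - in_sum(x, i) != b:
            return False
    # degree constraints
    if any(out_sum(x, i) > 1 for i in inner):
        return False
    # one-sided min-cut constraints
    for size in range(2, n + 1):
        for S in combinations(N, size):
            if 0 in S and n in S:
                cut = sum(x[a] for a in A if a[0] in S and a[1] not in S)
                if any(cut < in_sum(x, j) for j in N if j not in S):
                    return False
    # cardinality bounds
    if not cs[0] <= sum(x.values()) <= cs[-1]:
        return False
    # cardinality forcing inequalities; the only gap here is |W| = 4,
    # where the general inequality specializes to sum_W - sum_{N\W} <= c_1
    for W in combinations(N, 4):
        if 0 in W and n in W:
            lhs = sum(out_sum(x, i) for i in W) \
                  - sum(out_sum(x, i) for i in N if i not in W)
            if lhs > cs[0]:
                return False
    return True

model = {tuple(sorted(a for a in A if x[a]))
         for x in (dict(zip(A, bits)) for bits in product((0, 1), repeat=len(A)))
         if feasible(x)}

paths = set()
for k in cs:                       # simple (0,n)-paths with k arcs
    for mid in permutations(inner, k - 1):
        seq = (0,) + mid + (n,)
        paths.add(tuple(sorted(zip(seq, seq[1:]))))

assert model == paths              # the model describes exactly the feasible paths
```

In particular, the forcing inequality for $|W|=4$ cuts off all $3$-arc paths, while the cut constraints exclude unions of a short path with a disjoint cycle.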
Dahl and Gouveia [@DG] gave a complete linear description of $P_{0,n-\mbox{\scriptsize path}}^{(1,2,3)}(D')$, where $D' = (N, A \cup \{(0,n)\})$. So, we also have one for $P_{0,n-\mbox{\scriptsize path}}^{(2,3)}(D)$. Consequently, from now on we exclude the case $c=(2,3)$ with respect to directed path polytopes. More precisely, in the sequel we consider only the set of cardinality sequences ${\mbox{CS}}:= \{c=(c_1,\dots, c_m) : m \geq 2, 2 \leq c_1 < \dots < c_m \leq n, c \neq (2,3) \}$. However, as the proof of Theorem \[T3\] indicates, the polyhedral analysis of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ becomes much harder if $c \in \{(2,n),(3,n),(2,3,n)\}$. To avoid overloading the paper with lengthy arguments, we skip precisely these cases and refer the interested reader to [@KS]. Given a valid inequality $ax \leq a_0$, a $(0,n)$-path $P$ is said to be *tight* if $a(P)=a_0$. Due to the flow conservation constraints, two different inequalities that are valid for $P_{0,n-\mbox{\scriptsize path}}^c(D)$ may define the same face. The next theorem, which is an adaptation of a result of Hartmann and Özlük [@HO], says how those inequalities can be identified. \[equiv\] Let $\alpha x \geq \alpha_0$ be a valid inequality for $P_{0,n-\mbox{\scriptsize path}}^c(D)$ and let $T$ be a spanning tree of $D$. Then for any specified set of coefficients $\beta_{ij}$ for the arcs $(i,j) \in T$, there is an equivalent inequality $\alpha' x \geq \alpha_0$ for $P_{0,n-\mbox{\scriptsize path}}^c(D)$ such that $\alpha'_{ij} = \beta_{ij}$ for $(i,j) \in T$. $\Box$ Facets related to cardinality restrictions ------------------------------------------ The cardinality bounds $x({\tilde{A}_n}) \geq c_1$ and $x({\tilde{A}_n}) \leq c_m$ define facets of the cardinality constrained path polytope ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if $4 \leq c_i \leq n-1$ for $i=1,m$ (see Table 1 of [@Stephan]).
Next, we turn to the cardinality forcing inequalities. For ease of notation, we analyze them for the polytope $P^* := \{x \in {P_C^{c}(D_n)}| x(\delta^{\mbox{\scriptsize out}}(1))=1\}$, which is isomorphic to ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. \[TCF\] Let $D_n=(N,A)$ be the complete digraph on $n \geq 4$ nodes and $W$ a subset of $N$ with $1 \in W$ and $c_p < |W| < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$. The cardinality-forcing inequality $$\label{CF} (c_{p+1} - |W|) \sum_{i \in W} x(\delta^{\mbox{\scriptsize out}}(i)) - (|W| - c_p) \sum_{i \in N \setminus W} x(\delta^{\mbox{\scriptsize out}}(i)) \leq c_p(c_{p+1} - |W|)$$ defines a facet of $P^*$ if and only if either $c_{p+1}-|W| \geq 2$ and $c_{p+1} < n$, or $c_{p+1}=n$ and $|W| = n-1$. Assuming that $|W| + 1 = c_{p+1} < n$, we see that (\[CF\]) is dominated by the nonnegativity constraints $x_{ij} \geq 0$ for $(i,j) \in A(N \setminus W)$. When $c_{p+1} = n$ and $n - |W| \geq 2$, (\[CF\]) is dominated by another inequality of the same form for some $W' \supset W$ with $|W'| = n-1$. Therefore, if inequalities (\[CF\]) are not facet defining, then they are dominated by other inequalities of the IP-model that are facet defining for $P^*$. Suppose that $c_{p+1} - |W| \geq 2$ and $c_{p+1} < n$. By choice, $|W| \geq 3$ and $|N \setminus W| \geq 3$. Moreover, assume that the equation $bx = b_0$ is satisfied by all points that satisfy (\[CF\]) at equality. Setting $\iota := c_{p+1}-|W|$, we will show that $$\label{conc1} \begin{array}{rcll} b_{1i} & = & \iota & \forall \; i \in N \setminus \{1\} \\ b_{i1} & = & \iota & \forall \: i \in W \setminus \{1\}, \\ b_{ij} & = & \kappa & \forall \: i \in W \setminus \{1\}, j \in N \setminus \{1\}, \\ b_{ij} & = & \lambda & \forall \: i \in N \setminus W, j \in N \setminus \{1\}, \\ b_{i1} & = & \mu & \forall \: i \in N \setminus W \end{array}$$ for some $\kappa \neq 0, \lambda, \mu$.
Then, considering a tight cycle of length $c_p$ and two tight cycles of length $c_{p+1}$, one using an arc in $(N \setminus W: \{1\})$, the other not, yields the equation system $$\begin{array}{rcl} b_0 & = & 2 \iota + (c_p-2) \kappa \\ b_0 & = & \iota + (|W|-1) \kappa + (c_{p+1} - |W| -1) \lambda + \mu \\ b_0 & = & 2 \iota + (|W|-2) \kappa + (c_{p+1}-|W|) \lambda \end{array}$$ which solves to $$\begin{array}{rcl} b_0 & = & 2 \iota + (c_p-2) \kappa \\ \mu & = & \iota + (\frac{|W|-c_p}{ |W|-c_{p+1}}-1) \kappa \\ \lambda & = & \frac{|W|-c_p}{ |W|-c_{p+1}} \kappa. \end{array}$$ Thus, $bx=b_0$ is the equation $$\begin{array}{rcl} \iota x(\delta^{\mbox{\scriptsize out}}(1)) + \iota x(\delta^{\mbox{\scriptsize in}}(1)) + (\frac{|W|-c_p}{ |W|-c_{p+1}}-1) \kappa \sum\limits_{i \in N \setminus W} x_{i1} \\ + \kappa \sum\limits_{i \in W \setminus \{1\}} x(\delta^{\mbox{\scriptsize out}}_1(i)) + \frac{|W|-c_p}{ |W|-c_{p+1}} \kappa \sum\limits_{i \in N \setminus W} x(\delta^{\mbox{\scriptsize out}}_1(i)) & = & 2 \iota + (c_p-2) \kappa, \end{array}$$ where $\delta^{\mbox{\scriptsize out}}_1(i) := \delta^{\mbox{\scriptsize out}}(i) \setminus \{(i,1)\}$. Adding $\kappa -\iota$ times the equations $x(\delta^{\mbox{\scriptsize out}}(1))=1$ and $x(\delta^{\mbox{\scriptsize in}}(1))=1$ and multiplying the resulting equation with $- \frac{|W|-c_{p+1}}{\kappa}$, we see that $bx=b_0$ is equivalent to (\[CF\]). To show (\[conc1\]), we may assume without loss of generality that $2 \in W$ and $b_{1i}= c_{p+1}-|W|$, $i \in N \setminus \{1\}$, and $b_{21}= c_{p+1}-|W|$, by Theorem \[equiv\]. Next, let $\mathcal{R}$ be the set of subsets of $N$ of cardinality $c_{p+1}$ that contain $W$, i.e., $$\mathcal{R} := \{R \subset N| \: |R|= c_{p+1}, R \supset W\}.$$ For any $R \in \mathcal{R}$, the $c_{p+1}$-cycles on $R$ are tight tours on $R$. 
Theorem 23 of Grötschel and Padberg [@GP] implies that there are $\tilde{\alpha}_i^R, \tilde{\beta}_i^R$ for $i \in R$ such that $b_{ij} = \tilde{\alpha}_i^R + \tilde{\beta}_j^R$ for all $(i,j) \in A(R)$. Setting $$\begin{array}{rcll} \alpha_i^R & := & \tilde{\alpha}_i^R - \tilde{\alpha}_1^R & (i \in R), \\ \beta_i^R & := & \tilde{\beta}_i^R + \tilde{\alpha}_1^R & (i \in R), \end{array}$$ yields $\alpha_i^R + \beta_j^R = b_{ij}$ for all $(i,j) \in A(R)$. Since $\alpha_1^R = 0$ and $b_{1i} = \iota$, it follows that $\beta_i^R = \iota$ for all $i \in R \setminus \{1\}$. In a similar manner one can show for any $S \in \mathcal{R}$ the existence of $\alpha_i^S, \beta_i^S$ for $i \in S$ with $\alpha_1^S = 0$, $\beta_j^S = \iota$ for $j \in S \setminus \{1\}$, and $\alpha_i^S + \beta_j^S = b_{ij}$ for all $(i,j) \in A(S)$. This implies immediately that $\alpha_i^R = \alpha_i^S$ and $\beta_i^R = \beta_i^S$ for all $i \in R \cap S$. Thus, there are $\alpha_i, \beta_i$ for all $i \in N$ such that $\alpha_1 = 0$, $\beta_i = \iota$ for $i \in N \setminus \{1\}$, and $b_{ij} = \alpha_i +\beta_j $ for all $(i,j) \in A$. Next, consider a tight $c_{p}$-cycle that contains the arcs $(1,k), (k,j)$ but does not visit node $\ell$ for some $j,k,\ell \in W$. Replacing node $k$ by node $\ell$ yields another tight $c_{p}$-cycle, and therefore $b_{1k} + b_{kj} = b_{1\ell }+b_{\ell j }$, which implies that $\alpha_k = \alpha_{\ell }$ for all $k, \ell \in W \setminus \{1\}$. Thus, there is $\kappa$ such that $b_{ij}=\kappa$ for all $i \in W \setminus \{1\}$, $j \in N \setminus \{1\}$. Moreover, it follows immediately that $b_{i1}=\iota$ for all $i \in W \setminus \{1\}$. One can show analogously that $\alpha_i = \alpha_j$ for all $i,j \in N \setminus W$. This implies the existence of $\lambda, \mu$ with $b_{ij} = \lambda$ for all $i \in N \setminus W$, $j \in N \setminus \{1\}$ and $b_{i1} = \mu$ for all $i \in N \setminus W$.
Finally, when $|W|+1 = c_{p+1} = n$, we show that there are $n^2 - 2n$ affinely independent points $x \in P^*$ satisfying (\[CF\]) at equality. Without loss of generality, let $W= \{1,\dots,n-1\}$. Because each tour is tight with respect to (\[CF\]), there exist $n^2-3n+2$ linearly independent points $(x^r,y^r) \in Q := \{(x,y) \in P_{CL}^c(D_n) | x(\delta^{\mbox{\scriptsize out}}(1)) = 1\}$ with $y^r = 0$. Furthermore, consider the incidence vectors of the $n-2$ cycles $(1,2,\dots,c_p)$, $(1,3,4,\dots,c_p+1),\dots, (1,n-2,n-1,2,3,\dots,c_p-2)$, $(1,n-1,2,3,\dots,c_p-1)$. The corresponding points in $Q$ are linearly independent and they are also linearly independent of the points $(x^r,y^r)$. Hence, (\[CF\]) is also facet defining if $|W|+1 = c_{p+1} = n$. \[TRS\] Let $D_n=(N,A)$ be the complete digraph on $n$ nodes, and let $1 \in W \subset N$ with $c_p < |W| < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$. The *cardinality-subgraph inequality* $$\label{RS} 2x(A(W)) - (|W|-c_p-1) [x((W:N \setminus W)) + x((N \setminus W:W))] \leq 2 c_p$$ is valid for $P^*$ and induces a facet of $P^*$ if and only if $p+1 < m$ or $c_{p+1}=n = |W|+1$. A cycle of length at most $c_p$ uses at most $c_p$ arcs of $A(W)$ and thus its incidence vector satisfies (\[RS\]). A cycle $C$ of length at least $c_{p+1}$ uses at most $|W|-1$ arcs in $A(W)$, and if $C$ visits any node in $W$ at all, then it uses at least 2 arcs in $(W:N \setminus W) \cup (N \setminus W:W)$ and hence, $$\begin{array}{lrr} \multicolumn{3}{l}{2 \chi^C(A(W)) - (|W|-c_p-1) [\chi^C ((W:N \setminus W)) + \chi^C((N \setminus W:W))] \hspace{1cm}} \\ & & \leq 2(|W|-1) - 2(|W|-c_p-1) = 2c_p. \end{array}$$ In particular, all cycles of feasible length that visit node $1$ satisfy (\[RS\]). To see that the conditions are necessary, assume first that $p+1=m$ and $c_m < n$.
If $c_{p+1}-c_p=2$, then (\[RS\]) does not induce a facet of $P^*$ for the same reason as the corresponding cardinality forcing inequality does not induce a facet of $P^*$. Indeed, both inequalities define the same face. When $c_{p+1}-c_p > 2$, then it is easy to see that the face induced by (\[RS\]) is a proper subset of the face defined by the cardinality forcing inequality (\[CF\]), and thus, it is not facet defining. The same argument applies when $p+1=m$, $c_m =n$, and $n - |W| > 1$. To show that (\[RS\]) defines a facet when the conditions are satisfied, suppose that the equation $bx=b_0$ is satisfied by every $x \in P^*$ that satisfies (\[RS\]) at equality. Using Theorem \[equiv\] we may assume that $b_{w1}= 2$ for some $w \in W$, $b_{1i}=2$ for all $i \in W$, and $b_{iw} = -(|W|-c_p-1)$ for all $i \in N \setminus W$. Let $q,r \in N \setminus W$ be two nodes that are equal if $c_{p+1}=|W|+1$ and otherwise different. Then, all $(q,r)$-paths of length $|W|+1$ whose internal nodes are all the nodes of $W$ satisfy the equation $bx = b_0$. (Note, in case $c_{p+1}=|W|+1$, the paths are Hamiltonian cycles.) Thus, there exist $\alpha_q$, $\beta_r$, and $\alpha_j$, $\beta_j$ for $j \in W$ with $$\begin{array}{rcll} b_{qj} & = & \alpha_q+\beta_j & (j \in W)\\ b_{ir} & = & \alpha_i+\beta_r & (i \in W)\\ b_{ij} & = & \alpha_i+\beta_j & ((i,j) \in A(W)). \end{array}$$ Without loss of generality we may assume that $\beta_w=0$. Since $b_{1j}=2$, it follows that $\alpha_1 = 2$, $\beta_j = 0$ for all $j \in W \setminus \{1\}$, and $\alpha_q = -(|W|-c_p-1)$. When $c_p=2$, use the tight 2-cycles $\{(1,j),(j,1)\}$ for $j \in W \setminus \{1\}$. When $c_p \geq 3$, consider a tight $c_p$-cycle that starts with $(1,i),(i,j)$ and skips node $k$ for some $i,j,k \in W \setminus \{1\}$. Replacing the arcs $(1,i),(i,j)$ by $(1,k),(k,j)$ yields another tight $c_p$-cycle, and thus the equation $b_{1i}+b_{ij}=b_{1k}+b_{kj}$.
In either case, it follows that $b_{j1} = 2$ for $j \in W \setminus \{1\}$ and there is $\lambda$ such that $b_{ij} = \lambda$ for all $(i,j) \in A(W \setminus \{1\})$. Summarizing our intermediate results and adding further easily obtainable observations, we see that $$\begin{array}{rcll} \label{eq3} b_{1i} & = & 2 & (i \in W \setminus \{1\}) \\ b_{i1} & = & 2 & (i \in W \setminus \{1\}) \\ b_{ij} & = & \lambda & ((i,j) \in A(W \setminus \{1\})) \\ b_{qi} & = & -(|W|-c_p-1) & (i \in W \setminus \{1\}) \\ b_{q1} & = & -(|W|-c_p-1) + 2 - \lambda \\ b_{ir} & = & -(|W|-c_p-1)(\lambda-1) & (i \in W \setminus \{1\}) \\ b_{1r} & = & -(|W|-c_p-1)(\lambda-1) + 2 - \lambda \\ b_0 & = & 4 + (c_p-2) \lambda \end{array}$$ holds. So, when $c_{p+1}=n$, we have $q=r$ and $N \setminus W = \{q\}$, and thus, $bx = b_0$ is the equation $$\begin{array}{rcl} 2 x(\delta^{\mbox{\scriptsize out}}(1)) - \lambda x_{1q}+2x(\delta^{\mbox{\scriptsize in}}(1))-\lambda x_{q1} + \lambda x(A(W \setminus \{1\}))\\ -(|W|-c_p-1)x(\delta^{\mbox{\scriptsize out}}(q)) - (|W|-c_p-1)(\lambda-1) x(\delta^{\mbox{\scriptsize in}}(q)) & = & 4 + (c_p-2) \lambda. \end{array}$$ Adding $(1-\frac{\lambda}{2})(|W|-c_p-1)$ times the equation $x(\delta^{\mbox{\scriptsize out}}(q))-x(\delta^{\mbox{\scriptsize in}}(q))=0$ and $(\lambda-2)$ times the equations $x(\delta^{\mbox{\scriptsize out}}(1))=1$ and $x(\delta^{\mbox{\scriptsize in}}(1))=1$, we see that $bx=b_0$ is equivalent to (\[RS\]), and hence (\[RS\]) is facet defining. Otherwise, that is, if $p+1 < m$, (\[eq3\]) holds for each pair of nodes $q,r \in N \setminus W$. Moreover, letting $k, l \in W \setminus \{1\}$ with $k \neq l$, it can be seen that every $(k,l)$-path $P$ of length $c_{p+1}-|W|+1$ or $c_m - |W|+1$ whose internal nodes are in $N \setminus W$ satisfies the equation $bx= -\lambda(|W|-c_p-1)$.
Thus, there are $\pi_k, \pi_l$, and $\{\pi_j | j \in N \setminus W\}$ such that $$\begin{array}{rcll} b_{kj} & = & \pi_k - \pi_j & (j \in N \setminus W)\\ b_{jl} & = & \pi_j - \pi_l & (j \in N \setminus W)\\ b_{ij} & = & \pi_i - \pi_j & ((i,j) \in A(N \setminus W)). \end{array}$$ Since $b_{kj}=-(|W|-c_p-1)(\lambda-1)$ for $j \in N \setminus W$, it follows that $\pi_i = \pi_j$ for all $i,j \in N \setminus W$, which implies that $b_{ij} = 0$ for all $(i,j) \in A(N \setminus W)$. Hence, $bx = b_0$ is the equation $$\begin{array}{rcl} 2 x(\delta^{\mbox{\scriptsize out}}(1))+2x(\delta^{\mbox{\scriptsize in}}(1))-\lambda \sum_{i \in N \setminus W} (x_{1i}+x_{i1}) \\ + \lambda x(A(W \setminus \{1\}))-(|W|-c_p-1)x((N \setminus W:W)) \\ - (|W|-c_p-1)(\lambda-1) x((W:N \setminus W))& = & 4 + (c_p-2) \lambda. \end{array}$$ Adding $(1-\frac{\lambda}{2})(|W|-c_p-1)$ times the equation $$x((N \setminus W:W))-x((W: N \setminus W))=0$$ and $(\lambda-2)$ times the equations $x(\delta^{\mbox{\scriptsize out}}(1))=1$ and $x(\delta^{\mbox{\scriptsize in}}(1))=1$, we see that $bx=b_0$ is equivalent to (\[RS\]), and hence (\[RS\]) is facet defining. Facets unrelated to cardinality restrictions -------------------------------------------- \[Tnon\] Let $c \in {\mbox{CS}}$ and $n \geq 4$. The *nonnegativity constraint* $$\label{nnc} x_{ij} \geq 0$$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if either $c \neq (2,n)$, or $c=(2,n)$, $n \geq 5$, and $(i,j)$ is an inner arc. By Theorem 3.1 of [@Stephan], (\[nnc\]) defines a facet of $P_{0,n-\mbox{\scriptsize path}}^{(k)}(\tilde{D}_n)$ whenever $4 \leq k \leq n-1$. Hence, Lemma \[L1\] implies that (\[nnc\]) is facet defining for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if $n \geq 5$ and there is an index $p$ with $4 \leq c_p \leq n-1$. For $c \in \{(2,n),(3,n),(2,3,n)\}$, see [@KS]. \[Tdegree\] Let $c \in {\mbox{CS}}$, $n \geq 4$, and let $i$ be an internal node of ${\tilde{D}_n}$.
The degree constraint $$\label{degree} x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$$ induces a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ unless $c=(2,n)$. When $n \geq 5$ and $4 \leq c_p \leq n-1$ for some index $p$, (\[degree\]) can be shown to induce a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ using Lemma \[L1\] of this paper and Theorem 3.2 of [@Stephan], which says that (\[degree\]) induces a facet of $P_{0,n-\mbox{\scriptsize path}}^{(c_p)}({\tilde{D}_n})$. For $c \in \{(2,n),(3,n),(2,3,n)\}$, see [@KS]. \[Tosmc\] Let $c = (c_1,\dots,c_m) \in {\mbox{CS}}$, $n \geq 4$, $S \subset {\tilde{N}_n}$, $0,n \in S$, and $v \in {\tilde{N}_n}\setminus S$. The *one-sided min-cut inequality* $$\label{osmc} x((S:{\tilde{N}_n}\setminus S)) - x(\delta^{\mbox{\scriptsize in}}(v)) \geq 0$$ induces a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if $|{\tilde{N}_n}\setminus S| \geq 2$, $|S| \geq c_1+1$, and $c \neq (2,n)$. *Necessity.* When ${\tilde{N}_n}\setminus S = \{v\}$, (\[osmc\]) becomes the trivial inequality $0x \geq 0$, and thus it is not facet defining. When $|S| \leq c_1$, all feasible $(0,n)$-paths $P$ satisfy $|P \cap (S:{\tilde{N}_n}\setminus S)| \geq 1$, and hence, (\[osmc\]) can be obtained by summing up the inequality $x((S:{\tilde{N}_n}\setminus S)) \geq 1$ and the degree constraint $-x(\delta^{\mbox{\scriptsize in}}(v)) \geq -1$. When $c=(2,n)$, see [@KS]. *Sufficiency.* By Theorem 3.4 of [@Stephan], (\[osmc\]) induces a facet of $P_{0,n-\mbox{\scriptsize path}}^{(k)}({\tilde{D}_n})$ for $4 \leq k \leq n-2$ if and only if $|S| \geq k+1$ and $|{\tilde{N}_n}\setminus S| \geq 2$. Hence, when $|S| \geq c_i+1$ for some index $i \in \{1,\dots,m\}$ with $c_i \geq 4$ and $|{\tilde{N}_n}\setminus S| \geq 2$, inequality (\[osmc\]) is facet defining for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ by applying Lemma \[L1\]. In particular, this finishes the proof when $i=1$.
Note that in case $i=m$, $c_i \geq 4$ and $|S| \geq c_i+1$ imply $4 \leq c_m \leq n-2$, since $|S| \leq n-1$. When $c_1 = 2$ or $c_1 = 3$, see [@KS]. We introduce a further class of inequalities whose undirected counterparts we need later for the characterization of the integer points of ${P_C^{c}(K_n)}$. \[Tmincut\] Let $c \in {\mbox{CS}}$, $n \geq 4$, $S \subset {\tilde{N}_n}$, and $0,n \in S$. The *min-cut inequality* $$\label{mc} x((S:{\tilde{N}_n}\setminus S)) \geq 1$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if $|S| \leq c_1$, and it is facet defining if and only if $3 \leq |S| \leq c_1$ and $|{\tilde{N}_n}\setminus S| \geq 2$. When $c \neq (3,n)$, the theorem follows from Theorem 3.3 of [@Stephan], Lemma \[L1\], and the fact that $m \geq 2$. When $c=(3,n)$, see [@KS]. Inequalities specific to odd or even paths ------------------------------------------ \[TOP\] Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 2$, and $c_p$ even for $1 \leq p \leq m$, and let ${\tilde{N}_n}= S \; \dot{\cup} \; T$ be a partition of ${\tilde{N}_n}$ with $0 \in S$, $n \in T$. The *odd path exclusion constraint* $$\label{ocec} x({\tilde{A}_n}(S))+ x({\tilde{A}_n}(T)) \geq 1$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ and defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if (i) $c_1=2$ and $|S|, |T| \geq \frac{c_2}{2}+1$, or (ii) $c_1 \geq 4$ and $|S|,|T| \geq \frac{c_2}{2}$. Clearly, each $(0,n)$-path of even length uses at least one arc in ${\tilde{A}_n}(S) \cup {\tilde{A}_n}(T)$. Thus, inequality (\[ocec\]) is valid. When $|S|$ or $|T|$ is less than $\frac{c_2}{2}$, then there is no $(0,n)$-path of length $c_p$, $p \geq 2$, that satisfies (\[ocec\]) at equality, which implies that (\[ocec\]) cannot be facet defining for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. Thus $|S|, |T| \geq \frac{c_2}{2}$ holds if (\[ocec\]) is facet defining.
For $c_1=2$ we even have to require $|S|, |T| \geq \frac{c_2}{2}+1$. For the sake of contradiction assume w.l.o.g. that $|S|=\frac{c_2}{2}$. Then $|T| \geq \frac{c_2}{2}+1$ follows. However, for an inner arc $(i,j) \in {\tilde{A}_n}(S)$ there is no tight $(0,n)$-path of cardinality $c_2$ that uses $(i,j)$. Next, let (i) or (ii) be true. The conditions imply that for $p=1$ or $p=2$ both $c_p \geq 4$ and $|S|,|T| \geq \frac{c_p}{2}+1$ hold. Restricted to the polytope $P_{0,n-\mbox{\scriptsize path}}^{(c_p)}({\tilde{D}_n})$, inequality (\[ocec\]) is equivalent to the max-cut inequality $x((S:T)) \leq \frac{c_p}{2}$, which was shown to be facet defining for $P_{0,n-\mbox{\scriptsize path}}^{(c_p)}({\tilde{D}_n})$ (see Theorem 3.5 of [@Stephan]). Thus there are $n^2-2n-1$ linearly independent points in ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}\cap P_{0,n-\mbox{\scriptsize path}}^{(c_p)}({\tilde{D}_n})$ satisfying (\[ocec\]) at equality. Moreover, the conditions ensure that there is also a tight $(0,n)$-path of cardinality $c_q$, where $q=3-p$. By Lemma \[L1\] (i), the incidence vector of this path is affinely independent of the former points, and hence, (\[ocec\]) defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. \[TEP\] Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 3$, and $c_p$ odd for $1 \leq p \leq m$, and let ${\tilde{N}_n}= S \; \dot{\cup} \; T$ be a partition of ${\tilde{N}_n}$ with $0,n \in S$. The *even path exclusion constraint* $$\label{ecec} x({\tilde{A}_n}(S))+ x({\tilde{A}_n}(T)) \geq 1$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ and defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ if and only if (i) $c_1=3$, $|S|-1 \geq \frac{c_2+1}{2}$, and $|T| \geq \frac{c_2-1}{2}$, or (ii) $c_1 \geq 5$ and $\min (|S|-1,|T|) \geq \frac{c_2-1}{2}$. Up to one special case, Theorem \[TEP\] can be proved quite similarly to Theorem \[TOP\].
Hence, we skip the proof here and refer the interested reader to [@KS]. \[TMCF\] Let $D_n=(N,A)$ be the complete digraph on $n \geq 6$ nodes and $c=(c_1,\dots,c_m)$ a cardinality sequence with $m \geq 3$, $c_1 \geq 2$, $c_m \leq n$, and $c_{p+2}=c_{p+1}+2=c_{p}+4$ for some $2 \leq p \leq m-2$. Moreover, let $N=P \; \dot{\cup} \; Q \; \dot{\cup} \; \{r\}$ be a partition of $N$, where $P$ contains node $1$ and satisfies $|P|=c_p+1=c_{p+1}-1$. Then the *modified cardinality forcing inequality* $$\label{MCF} \sum_{v \in P} x(\delta^{\mbox{\scriptsize out}}(v)) - \sum_{v \in Q} x(\delta^{\mbox{\scriptsize out}}(v)) + x((Q:\{r\})) - x((P:\{r\})) \leq c_p$$ defines a facet of $P^* = \{x \in {P_C^{c}(D_n)}| x(\delta^{\mbox{\scriptsize out}}(1))=1\}$. The arcs that are incident with node $r$ have coefficient zero. Let $C$ be a cycle that visits node $1$ and is of feasible length. If $C$ does not visit node $r$, then $C$ clearly satisfies (\[MCF\]), since the restriction of (\[MCF\]) to the arc set $A(N \setminus \{r\})$ is an ordinary cardinality forcing inequality (\[CF\]). When $C$ visits node $r$ and uses at most $c_p$ arcs whose corresponding coefficients are equal to one, then $C$ also satisfies (\[MCF\]), since all those coefficients that are not equal to 1 are $0$ or $-1$. So, let $C$ with $|C| \geq c_{p+1}$ visit node $r$ and use as many arcs whose corresponding coefficients are equal to one as possible. These are exactly $|P|$ arcs, which are contained in $A(P) \cup (P:Q)$. But then $C$ must use at least one arc in $A(Q) \cup (Q:P)$ whose coefficient is $-1$. Hence, also in this case $C$ satisfies (\[MCF\]), which proves the validity of (\[MCF\]). To show that (\[MCF\]) is facet defining, suppose that the equation $bx = b_0$ is satisfied by all points that satisfy (\[MCF\]) at equality. By Theorem \[equiv\], we may assume that $b_{1r}=b_{r1}=0$ and $b_{1i}=1$ for $i \in N \setminus \{1,r\}$.
By considering the $c_{p+1}$-cycles with respect to $P \cup \{j\}$ for $j \in N \setminus P$, one can show along the lines of the proof of Theorem \[TCF\] that there are $\alpha_k$, $\beta_k$, $k \in N$, with $b_{ij}=\alpha_i+\beta_j$ for all $(i,j) \in A$, $\alpha_1=0$, $\beta_r=0$, and $\beta_j=1$ for $j \in N \setminus \{1,r\}$. In particular, when $c_{p}=2$, the tight 2-cycles $\{(1,i),(i,1)\}$, $i \in P$, yield $\alpha_k=\alpha_\ell$ for $k,\ell \in P \setminus \{1\}$. Otherwise one can show as in the proof of Theorem \[TCF\] that $\alpha_k = \alpha_\ell$ for all $k,\ell \in P \setminus \{1\}$. Thus, there is $\kappa$ such that $\alpha_i= \kappa$ for $i \in P$, $i \neq 1$. This in turn implies that there is $\lambda$ with $\alpha_j= \lambda$ for $j \in Q$ by considering tight $c_{p+1}$-cycles. Then, the equation $b_{r1}=0$, a tight cycle of length $c_p$, and two tight cycles of length $c_{p+1}$, one visiting node $r$, the other a node $j \in Q$, yield the equation system $$\begin{array}{rcl} b_{r1} & = & 0\\ b_0 & = & (c_p-1)(\kappa+1) +\beta_1 \\ b_0 & = & c_p(\kappa+1)\\ b_0 & = & c_p(\kappa+1)+\lambda+\beta_1+1\\ \end{array}$$ which solves to $$\begin{array}{rcl} b_0 & = & c_p(\kappa+1)\\ \lambda & = &- \kappa -2\\ \beta_1 & = & \kappa+1\\ \alpha_r & = & -\kappa -1. \end{array}$$ Next, consider for $i \in P \setminus \{1\}$, $j,k \in Q$ a $c_{p+2}$-cycle $C$ that starts in node 1, then visits all nodes in $P \setminus \{1,i\}$, followed by the nodes $j$, $r$, $i$, $k$, and finally returns to 1. Since $C$ is tight, we can derive the equation $$1+(c_p-1)(\kappa+1)+ b_{jr}+(\alpha_r+1)+(\kappa+1)+(\lambda+\beta_1)=b_0$$ which solves to $b_{jr}=\kappa$. By considering further tight $c_{p+2}$-cycles one can deduce that $b_{ri}=-\kappa$ for $i \in Q$ and $b_{jk}=-\kappa-1$ for $(j,k) \in A(Q)$.
Thus, $bx=b_0$ is the equation $$\begin{array}{rcl} x(\delta^{\mbox{\scriptsize out}}(1) \setminus \{(1,r)\})-x((Q: \{1\}))+ (2\kappa+1)x((P \setminus \{1\}: \{1\}))\\ + (\kappa+1) \sum_{i \in P \setminus \{1\}} x(\delta^{\mbox{\scriptsize out}}(i) \setminus \{(i,1),(i,r)\})\\ - (\kappa+1) \sum_{i \in Q} x(\delta^{\mbox{\scriptsize out}}(i) \setminus \{(i,1),(i,r)\})\\ - \kappa x(\delta^{\mbox{\scriptsize out}}(r) \setminus \{(r,1)\}) + \kappa x(\delta^{\mbox{\scriptsize in}}(r) \setminus \{(1,r)\})& = & c_p(\kappa+1). \end{array}$$ Adding $\kappa$ times the equations $x(\delta^{\mbox{\scriptsize out}}(1))-x(\delta^{\mbox{\scriptsize in}}(1))=0$ and $x(\delta^{\mbox{\scriptsize out}}(r))-x(\delta^{\mbox{\scriptsize in}}(r))=0$, we see that $bx=b_0$ is equivalent to (\[MCF\]), and hence, (\[MCF\]) defines a facet. Separation ---------- All inequalities of the IP-model as well as the min-cut inequalities and the modified cardinality forcing inequalities can be separated in polynomial time. For the one-sided min-cut inequalities (\[osmc\]), separation consists in finding a minimum $\{0,n\}-l$-cut in ${\tilde{D}_n}$ for each node $l \in {\tilde{N}_n}\setminus \{0,n\}$. The cardinality forcing inequalities can be separated with a greedy algorithm. To this end, let $x^* \in \mathbb{R}^{{\tilde{A}_n}}_+$ be a fractional point. Set $y^*_i := x^*(\delta^{\mbox{\scriptsize out}}(i))$ for $i=0,\dots,n-1$, and apply the greedy separation algorithm 8.27 of Grötschel [@Groetschel] on input data $y^*, {\tilde{N}_n}$, and $c$. To separate the modified cardinality forcing inequalities this algorithm can be applied $n-1$ times as subroutine, namely: for each internal node $r$ of ${\tilde{N}_n}$, apply it on the subgraph induced by ${\tilde{N}_n}\setminus \{r\}$. Next, the separation problem for the odd (even) path exclusion constraints is equivalent to the maximum cut problem which is known to be NP-hard. 
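To make the greedy separation idea concrete, here is a minimal sketch in Python; it is not Grötschel's algorithm 8.27 itself, the function and variable names are ours, and we assume (\[CF\]) takes the directed form $(c_{p+1}-|W|)\sum_{i \in W} x(\delta^{\mbox{\scriptsize out}}(i)) - (|W|-c_p)\sum_{i \notin W} x(\delta^{\mbox{\scriptsize out}}(i)) \leq c_p(c_{p+1}-|W|)$, the analogue of the undirected version stated later. The key observation is that for a fixed size $|W|$, the left-hand side is maximized by choosing $W$ as the $|W|$ nodes with the largest values $y^*_i$, so one pass over the sorted values suffices for every admissible size.

```python
def separate_cardinality_forcing(y, c):
    """Greedy separation sketch for the cardinality forcing inequalities.

    y[i] = x*(delta_out(i)) for each node i; c is a strictly increasing
    cardinality sequence.  Returns (W, p, violation) for the most violated
    candidate, or None if no inequality of this family is violated.
    """
    n = len(y)
    # nodes sorted by y* in decreasing order; prefix sums of sorted values
    order = sorted(range(n), key=lambda i: y[i], reverse=True)
    prefix = [0.0]
    for i in order:
        prefix.append(prefix[-1] + y[i])
    total = prefix[-1]
    best = None
    for p in range(len(c) - 1):
        lo, hi = c[p], c[p + 1]
        for size in range(lo + 1, hi):      # c_p < |W| < c_{p+1}
            if size > n:
                break
            in_w = prefix[size]             # largest possible sum over W
            lhs = (hi - size) * in_w - (size - lo) * (total - in_w)
            rhs = lo * (hi - size)
            if lhs > rhs + 1e-9:
                viol = lhs - rhs
                if best is None or viol > best[2]:
                    best = (set(order[:size]), p, viol)
    return best
```

On the toy instance $y^*=(1,1,1,0,0)$ with $c=(2,4)$, the only admissible size is $|W|=3$, and the sketch reports the node set $\{0,1,2\}$ with violation $1$.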
Turning to the cardinality-subgraph inequalities (\[RS\]), it seems to be very unlikely that there is a polynomial time algorithm that solves the separation problem for this class of inequalities. Assume that we are given an instance $(D'=(N',A'),c=(c_1,\dots,c_m), x^*)$ of the separation problem, where $x^* \in \mathbb{R}^{A'}$ is a fractional point satisfying $x^*(\delta^{\mbox{\scriptsize out}}(1))=1$. (We consider the separation problem for $P^*$.) In the special case $m=2$ and $c_2-c_1=2$, the separation problem for the inequalities (\[RS\]) and $x^*$ reduces to finding a subset $W^*$ of $N'$ of cardinality $k:=c_1+1$ such that $1 \in W^*$ and $x^*(A'(W^*)) > 2c_1$. This problem can be tackled on the underlying graph $G'=(N',E')$ with edge weights $w_e:=x^*_{ij}+x^*_{ji}$ for $e=[i,j] \in E'$, where $x^*_{ij}$ is set to zero if the arc $(i,j)$ is not in $A'$. The associated optimization problem $\max w(E'(W)), W \subseteq N', 1 \in W, |W|=k$, is a variant of the weighted version of the densest $k$-subgraph problem, which is known to be NP-hard (see Feige and Seltser [@FS]). Facets of the other polytopes ============================= In this section, we derive facet defining inequalities for related polytopes mentioned in the introduction from facet defining inequalities for the cardinality constrained path polytope ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. Facets of the directed cardinality constrained cycle polytope ------------------------------------------------------------- \[CfacetsCCCP\] Let $D_n=(N,A)$ be the complete digraph on $n \geq 3$ nodes and $c=(c_1,\dots,c_m)$ a cardinality sequence with $m \geq 2$ and $c_1 \geq 2$.
Then the following statements hold:\ (a) The nonnegativity constraint $x_{ij} \geq 0$ defines a facet of ${P_C^{c}(D_n)}$.\ (b) The degree constraint $x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$ defines a facet of ${P_C^{c}(D_n)}$ for every $i \in N$.\ (c) Let $S$ be a subset of $N$ with $2 \leq |S| \leq n-2$, let $v \in S$ and $w \in N \setminus S$. The *multiple cycle exclusion constraint* $$\label{mcec} x(\delta^{\mbox{\scriptsize out}}(v))+x(\delta^{\mbox{\scriptsize out}}(w)) - x((S:N \setminus S)) \leq 1$$ induces a facet of ${P_C^{c}(D_n)}$ if and only if $|S|,|N \setminus S| \geq c_1$ and $c \notin \{(2,3),(2,n)\}$.\ (d) For any $S \subset N$ with $|S|,|N \setminus S| \leq c_1-1$, the min-cut inequality $$\label{cmc} x((S:N \setminus S)) \geq 1$$ is valid for ${P_C^{c}(D_n)}$ and induces a facet of ${P_C^{c}(D_n)}$ if and only if $|S|,|N \setminus S| \geq 2$.\ (e) Let $S$ be a subset of $N$ and $j \in N \setminus S$. The one-sided min-cut inequality $$\label{cosmc} x((S:N \setminus S)) -x(\delta^{\mbox{\scriptsize out}}(j)) \geq 0$$ defines a facet of ${P_C^{c}(D_n)}$ if and only if $|S| \geq c_1$ and $ 2 \leq |N \setminus S| \leq c_1-1$.\ (f) The cardinality bound $x(A) \geq c_1$ defines a facet of ${P_C^{c}(D_n)}$ if and only if $c_1=3$ and $n \geq 5$ or $4 \leq c_1 \leq n-1$. Analogously, $x(A) \leq c_m$ defines a facet of ${P_C^{c}(D_n)}$ if and only if $c_m=3$ and $n \geq 5$ or $4 \leq c_m \leq n-1$.\ (g) Let $W$ be a subset of $N$ with $c_p < |W| < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$. The cardinality-forcing inequality (\[CF\]) defines a facet of ${P_C^{c}(D_n)}$ if and only if $c_{p+1}-|W| \geq 2$ and $c_{p+1} < n$ or $c_{p+1}=n$ and $|W| = n-1$.\ (h) Let $W$ be a subset of $N$ such that $c_p < |W| < c_{p+1}$ holds for some $p \in \{1,\dots,m-1\}$. 
The cardinality-subgraph inequality (\[RS\]) is valid for ${P_C^{c}(D_n)}$ and induces a facet of ${P_C^{c}(D_n)}$ if and only if $p+1 < m$ or $c_{p+1}=n = |W|+1$.\ (i) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 2$, and $c_p$ even for $1 \leq p \leq m$, and let $N = S \; \dot{\cup} \; T \; \dot{\cup} \; \{n\}$ be a partition of $N$. The *odd cycle exclusion constraint* $$\label{opec} x(A(S))+ x(A(T)) + x((T: \{n\}))- x((\{n\}:T)) \geq 0$$ is valid for ${P_C^{c}(D_n)}$ and defines a facet of ${P_C^{c}(D_n)}$ if and only if ($\alpha$) $c_1=2$ and $|S|, |T| \geq \frac{c_2}{2}$, or ($\beta$) $c_1 \geq 4$ and $|S|,|T| \geq \frac{c_2}{2}-1$.\ (j) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 3$, and $c_p$ odd for $1 \leq p \leq m$, and let $N = S \; \dot{\cup} \; T$ be a partition of $N$. The *even cycle exclusion constraint* $$\label{epec} x(A(S))+ x(A(T)) \geq 1$$ is valid for ${P_C^{c}(D_n)}$ and defines a facet of ${P_C^{c}(D_n)}$ if and only if $|S|, |T| \geq \frac{c_2-1}{2}$.\ (k) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 3$, $c_1 \geq 2$, $c_m \leq n$, $n \geq 6$, and $c_{p+2}=c_{p+1}+2=c_{p}+4$ for some $2 \leq p \leq m-2$. Moreover, let $N=P \; \dot{\cup} \; Q \; \dot{\cup} \; \{r\}$ be a partition of $N$, with $|P|=c_p+1=c_{p+1}-1$. Then the modified cardinality forcing inequality defines a facet of ${P_C^{c}(D_n)}$. \(a) When $n \leq 4$, the statement can be verified using a computer program. When $c=(2,3)$ and $n \geq 5$, we apply Theorem 10 of Hartmann and Özlük which says that $x_{ij} \geq 0$ defines a facet of $P_C^{(p)}(D_n)$ whenever $p \geq 3$ and $n \geq p+1$. Thus, there are $n^2-2n$ 3-cycles satisfying $x_{ij} \geq 0$ at equality. Together with Lemma \[L1\] applied on these tight 3-cycles and any 2-cycle not using arc $(i,j)$, we get the desired result. 
The remaining statements of (a) follow by applying Theorems \[Tnon\] and \[lifting\].\ (b) First, when $c=(2,3)$, one can show along the lines of the proof of Proposition 5 of Balas and Oosten [@BO] that $x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$ defines a facet of ${P_C^{c}(D_n)}$. Next, when $(2,3) \neq c \neq (2,n)$, the degree constraint can be shown to induce a facet using Theorems \[Tdegree\] and \[lifting\]. Finally, when $c=(2,n)$, see [@KS].\ (c) Suppose that $c=(2,3)$. Then inequality (\[mcec\]) is dominated by the nonnegativity constraint $x_{ij} \geq 0$ for any arc $(i,j) \in (S: N \setminus S) \cup (N \setminus S: S)$ that is neither incident with $v$ nor with $w$. Next, suppose that $c=(2,n)$. Inequality (\[mcec\]) is equivalent to the subtour elimination constraint $x(A(S)) \leq |S|-1$ with respect to the ATSP $P_C^{(n)}(D_n)$. Thus, we have $n^2-3n+1$ tours satisfying (\[mcec\]) at equality. But we have only $n-1$ tight 2-cycles, and consequently, (\[mcec\]) does not induce a facet. Next, if $|S| \leq c_1-1$, then (\[mcec\]) is the sum of the valid inequalities $x(\delta^{\mbox{\scriptsize out}}(v)) - x((S:N \setminus S)) \leq 0$ and $x(\delta^{\mbox{\scriptsize out}}(w)) \leq 1$. Finally, if $|N \setminus S| \leq c_1-1$, then (\[mcec\]) is the sum of the inequalities $x(\delta^{\mbox{\scriptsize out}}(w)) - x((S:N \setminus S)) \leq 0$ and $x(\delta^{\mbox{\scriptsize out}}(v)) \leq 1$ (cf. Hartmann and Özlük [@HO, p. 162]). Suppose that the conditions in (c) are satisfied. First, consider the inequality (\[mcec\]) on the polytope $Q:=\{x \in {P_C^{c}(D_n)}: x(\delta^{\mbox{\scriptsize out}}(1))=1\}$, which is isomorphic to the path polytope ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. Then, (\[mcec\]) is equivalent to the one-sided min-cut inequality (\[osmc\]), which defines a facet of $Q$ by Theorem \[Tosmc\]. Thus, also (\[mcec\]) defines a facet of $Q$.
Now, by application of Theorem \[lifting\] on $Q$ and (\[mcec\]) we obtain the desired result. (When $c_1 \geq 4$, the statement can also be proved with Theorem 14 of Hartmann and Özlük [@HO].)\ (d) If $|S|=1$ or $|N \setminus S|=1$, then (\[cmc\]) is an implicit equation. So, let $|S|,|N \setminus S| \geq 2$, which implies that $c_1 \geq 3$. From Theorem \[Tmincut\] it follows that (\[cmc\]) defines a facet of $Q:= \{x \in {P_C^{c}(D_n)}: x(\delta^{\mbox{\scriptsize out}}(i)) =1\}$, and hence, by Theorem \[lifting\], it also defines a facet of ${P_C^{c}(D_n)}$.\ (e) When $|N \setminus S| \geq c_1$, (\[cosmc\]) is obviously not valid. When $|N \setminus S|=1$, (\[cosmc\]) is the flow constraint $x(\delta^{\mbox{\scriptsize in}}(j))- x(\delta^{\mbox{\scriptsize out}}(j)) = 0$. When $|S| \leq c_1-1$ and $|N \setminus S| \leq c_1-1$, (\[cosmc\]) is the sum of the valid inequalities $x((S: N \setminus S)) \geq 1$ and $-x(\delta^{\mbox{\scriptsize out}}(j)) \geq -1$. Suppose that $|S| \geq c_1$ and $ 2 \leq |N \setminus S| \leq c_1-1$. Then in particular $c_1 \geq 3$ holds. For any node $i \in S$, (\[cosmc\]) defines a facet of $Q:= \{x \in {P_C^{c}(D_n)}: x(\delta^{\mbox{\scriptsize out}}(i))=1\}$, by Theorem \[Tosmc\]. Applying Theorem \[lifting\], we see that (\[cosmc\]) therefore also defines a facet of ${P_C^{c}(D_n)}$.\ (f) Since $\dim \{x \in {P_C^{c}(D_n)}: x(A)=c_i\} = \dim P_C^{(c_i)}(D_n)$, the claim follows directly from Theorem 1 of Hartmann and Özlük [@HO].\ (g)-(i) Necessity can be proved as in the corresponding part of the proof of Theorem \[TCF\] (\[TRS\], \[TOP\]) while sufficiency can be shown by applying Theorem \[lifting\] on Theorem \[TCF\] (\[TRS\], \[TOP\]).\ (j) By Theorem 15 of Hartmann and Özlük [@HO], (\[epec\]) defines a facet of $P_C^{(c_1)}(D_n)$.
Moreover, the cardinality conditions for $S$ and $T$ ensure that there is a tight cycle of cardinality $c_2$, and hence, by Lemma \[L1\], (\[epec\]) defines a facet of ${P_C^{c}(D_n)}$.\ (k) Apply Theorem \[lifting\] on Theorem \[TMCF\]. Facets of the undirected cardinality constrained cycle polytope --------------------------------------------------------------- In this section, we consider the undirected cardinality constrained cycle polytope ${P_C^{c}(K_n)}$ defined on the complete graph $K_n=(N,E)$, where $c$ is a cardinality sequence with $3 \leq c_1 < \dots < c_m \leq n$ and $m \geq 2$. It was shown in [@KMV] and [@MN2] that $\dim P_C^{(p)}(K_n) = |E|-1$ for $3 \leq p \leq n-1$ and $n \geq 5$. Thus, it is easy to verify that $\dim {P_C^{c}(K_n)}= |E|=n(n-1)/2$ for all $n \geq 4$, since $m \geq 2$. Note that in the case $n=4$, ${P_C^{c}(K_n)}= P_C(K_n)$, and by Theorem 2.3 of Bauer [@Bauer], $\dim P_C(K_4)=6=|E|$. Facet defining inequalities for ${P_C^{c}(K_n)}$ can be derived directly from the inequalities mentioned in Corollary \[CfacetsCCCP\] (b)-(h), since these inequalities are equivalent to symmetric inequalities. A valid inequality $cx \leq \gamma$ for ${P_C^{c}(D_n)}$ is said to be *symmetric* if $c_{ij}=c_{ji}$ holds for all $i < j$. Due to the flow conservation constraints, it is equivalent to a symmetric inequality if and only if the system $t_i-t_j=c_{ij}-c_{ji}$ is consistent (see Hartmann and Özlük [@HO] and Boros et al. [@BHHS]). One can show that the undirected counterpart $\sum_{1 \leq i < j \leq n} c_{ij} y_{ij} \leq \gamma$ of a symmetric inequality $c x \leq \gamma$ is valid for ${P_C^{c}(K_n)}$. Moreover, it induces a facet of ${P_C^{c}(K_n)}$ if $cx \leq \gamma$ induces a facet of ${P_C^{c}(D_n)}$. This follows from an argument of Fischetti [@Fischetti], originally stated for the ATSP and STSP, which is also mentioned in Hartmann and Özlük [@HO] in the context of directed and undirected $p$-cycle polytopes $P_C^{(p)}(D_n)$ and $P_C^{(p)}(K_n)$.
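The consistency test for the system $t_i-t_j=c_{ij}-c_{ji}$ amounts to fixing one potential and propagating: set $t_1=0$, derive every other $t_j$ from the equation for the pair $(1,j)$, and verify the remaining equations. The following sketch illustrates this; the function name and the node set $1,\dots,n$ are our assumptions, not notation from [@HO] or [@BHHS].

```python
def symmetric_equivalent(coeff, n):
    """Check whether the difference system t_i - t_j = c_ij - c_ji (i < j)
    is consistent, i.e. whether the inequality with arc coefficients
    coeff[(i, j)] is equivalent to a symmetric one.  Nodes are 1..n;
    missing arcs have coefficient 0.  Returns the potentials t, or None
    if the system is inconsistent."""
    d = lambda i, j: coeff.get((i, j), 0.0) - coeff.get((j, i), 0.0)
    t = {1: 0.0}
    for j in range(2, n + 1):
        t[j] = t[1] - d(1, j)          # forced by the equation for (1, j)
    for i in range(1, n + 1):          # verify all remaining equations
        for j in range(i + 1, n + 1):
            if abs(t[i] - t[j] - d(i, j)) > 1e-9:
                return None
    return t
```

For instance, the degree constraint $x(\delta^{\mbox{\scriptsize out}}(1)) \leq 1$ on three nodes passes the test, while the support of a directed 3-cycle $x_{12}+x_{23}+x_{31}$ fails it.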
\[CfacetsuCCCP\] Let $K_n=(N,E)$ be the complete graph on $n \geq 3$ nodes and $c=(c_1,\dots,c_m)$ a cardinality sequence with $m \geq 2$ and $c_1 \geq 3$. Then the following statements hold:\ (a) For any $e \in E$, the nonnegativity constraint $y_e \geq 0$ defines a facet of ${P_C^{c}(K_n)}$ if and only if $n \geq 5$.\ (b) The degree constraint $y(\delta(i)) \leq 2$ defines a facet of ${P_C^{c}(K_n)}$ for every $i \in N$.\ (c) Let $S$ be a subset of $N$ with $c_1 \leq |S| \leq n-c_1$, let $v \in S$ and $w \in N \setminus S$. Then, the two-sided min-cut inequality $$\label{umcec} y(\delta(v))+y(\delta(w)) - y((S:N \setminus S)) \leq 2$$ induces a facet of ${P_C^{c}(K_n)}$.\ (d) For any $S \subset N$ with $|S|,|N \setminus S| \leq c_1-1$, the min-cut inequality $$\label{ucmc} y((S:N \setminus S)) \geq 2$$ is valid for ${P_C^{c}(K_n)}$ and induces a facet of ${P_C^{c}(K_n)}$ if and only if $|S|,|N \setminus S| \geq 2$.\ (e) Let $S$ be a subset of $N$ and $j \in N \setminus S$. The one-sided min-cut inequality $$\label{ucosmc} y((S:N \setminus S)) -y(\delta(j)) \geq 0$$ defines a facet of ${P_C^{c}(K_n)}$ if and only if $|S| \geq c_1$ and $ 2 \leq |N \setminus S| \leq c_1-1$.\ (f) The cardinality bound $y(E) \geq c_1$ defines a facet of ${P_C^{c}(K_n)}$. The cardinality bound $y(E) \leq c_m$ defines a facet of ${P_C^{c}(K_n)}$ if and only if $c_m < n$.\ (g) Let $W$ be a subset of $N$ with $c_p < |W| < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$. The cardinality-forcing inequality $$(c_{p+1}-|W|) \sum_{i \in W} y(\delta(i)) - (|W|-c_p)\sum_{i \in N \setminus W} y(\delta(i)) \leq 2 c_p( c_{p+1}-|W|)$$ defines a facet of ${P_C^{c}(K_n)}$ if and only if $c_{p+1}-|W| \geq 2$ and $c_{p+1} < n$ or $c_{p+1}=n$ and $|W| = n-1$.\ (h) Let $W$ be a subset of $N$ such that $c_p < |W| < c_{p+1}$ holds for some $p \in \{1,\dots,m-1\}$.
The cardinality-subgraph inequality $$2y(E(W)) - (|W|-c_p-1) y((W:N \setminus W)) \leq 2 c_p$$ is valid for ${P_C^{c}(K_n)}$ and induces a facet of ${P_C^{c}(K_n)}$ if and only if $p+1 < m$ or $c_{p+1}=n = |W|+1$.\ (i) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 3$, and $c_p$ odd for $1 \leq p \leq m$, and let $N = S \; \dot{\cup} \; T$ be a partition of $N$. The even cycle exclusion constraint $$y(E(S))+ y(E(T)) \geq 1$$ is valid for ${P_C^{c}(K_n)}$ and defines a facet of ${P_C^{c}(K_n)}$ if and only if $|S|, |T| \geq \frac{c_2-1}{2}$. \(a) When $n \leq 5$ the statement can be verified using a computer program. When $n \geq 6$, the claim follows from Proposition 2 of Kovalev, Maurras, and Vaxés [@KMV], Proposition 2 of Maurras and Nguyen [@MN2], and the fact that $m \geq 2$. (b)-(i) All directed inequalities occurring in Corollary \[CfacetsCCCP\] (b)-(h) and (j) are equivalent to symmetric inequalities. For example, the degree constraint $x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$ is equivalent to $x(\delta^{\mbox{\scriptsize out}}(i))+x(\delta^{\mbox{\scriptsize in}}(i)) \leq 2$. Via the identification $y(\delta(i)) \cong x(\delta^{\mbox{\scriptsize out}}(i)) + x(\delta^{\mbox{\scriptsize in}}(i))$ we see that $y(\delta(i)) \leq 2$ defines a facet of ${P_C^{c}(K_n)}$ if $x(\delta^{\mbox{\scriptsize out}}(i)) \leq 1$ defines a facet of ${P_C^{c}(D_n)}$. Necessity can be shown with similar arguments as for the directed counterparts of these inequalities. The inequalities mentioned in Corollary \[CfacetsuCCCP\] (a)-(c), (e)-(g) together with the integrality constraints $y_e \in \{0,1\}$ for $e \in E$ provide a characterization of the integer points of ${P_C^{c}(K_n)}$. In this context note that if $|N\setminus S|=2$, the inequalities in (e) are equivalent to the well-known parity constraints $$y(\delta(j) \setminus \{e\}) - y_e \geq 0 \hspace{2cm} (j \in N, e \in \delta(j))$$ mentioned for example in [@Bauer]. 
The odd cycle exclusion constraints as well as the modified cardinality forcing inequalities from Corollary \[CfacetsCCCP\] are neither symmetric nor equivalent to symmetric inequalities. Hence, we did not derive counterparts of these inequalities for ${P_C^{c}(K_n)}$. Of course, given a valid inequality $cx \leq c_0$ for ${P_C^{c}(D_n)}$, one obtains a valid inequality $\tilde{c} y \leq 2c_0$ for ${P_C^{c}(K_n)}$ by setting $\tilde{c}_{ij} := c_{ij}+c_{ji}$ for $i<j$. However, it turns out that the counterparts of these two classes of inequalities are irrelevant for a linear description of ${P_C^{c}(K_n)}$. Facets of the undirected cardinality constrained path polytope -------------------------------------------------------------- The undirected cardinality constrained $(0,n)$-path polytope ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ is the symmetric counterpart of ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. Here, $K_{n+1}=(N,E)$ denotes the complete graph on node set $N=\{0,\dots,n\}$. In the sequel we confine ourselves to the set ${\mbox{CS}}$ of cardinality sequences $c=(c_1,\dots,c_m)$ with $m \geq 2$, $c_1 \geq 2$, and $c \neq (2,3)$. Let $K_{n+1} =(N,E)$ be the complete graph on node set $N=\{0,\dots,n\}$, $n \geq 4$, and let $c=(c_1,\dots,c_m) \in {\mbox{CS}}$ be a cardinality sequence. Then the following holds: 1. $\dim {P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}=|E|-3$. 2. The nonnegativity constraint $y_e \geq 0$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $c \neq (2,n)$ or $c=(2,n)$ and $e$ is an internal edge. \(i) All points $y \in {P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ satisfy the equations $$\begin{aligned} y_{0n} & = & 0, \label{101} \\ y(\delta(0)) & = & 1, \label{102} \\ y(\delta(n)) & = & 1. \label{103}\end{aligned}$$ Thus, the dimension of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ is at most $|E|-3$.
When $4 \leq c_i <n$ for some $i \in \{1,\dots,m\}$, then the statement is implied by Theorem 4.7 of [@Stephan], saying that $\dim P_{0,n-\mbox{\scriptsize path}}^{(c_i)}(K_{n+1})=|E|-4$, and the fact that $m \geq 2$. When $c \in \{(2,n), (3,n), (2,3,n)\}$, see [@KS]. \(ii) When $4 \leq c_i < n$ for some $i \in \{1,\dots,m\}$, then the claim follows from Theorem 4.9 of [@Stephan] and the fact that $m \geq 2$. Otherwise, $c=(2,n)$, $c=(3,n)$, or $c=(2,3,n)$. Then see [@KS]. The concept of symmetric inequalities can be used to derive facet defining inequalities for ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ from those for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$. A valid inequality $cx \leq c_0$ for the directed path polytope ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ is said to be *[pseudo-symmetric ]{}* if $c_{ij}=c_{ji}$ for all $1 \leq i < j \leq n-1$. It is equivalent to a [pseudo-symmetric ]{}inequality if and only if the system $t_i-t_j=c_{ij}-c_{ji}$ for $1 \leq i <j \leq n-1$ is consistent. In [@Stephan] it was shown that the undirected counterpart $\bar{c}y \leq c_0$ of a [pseudo-symmetric ]{}inequality $cx \leq c_0$ (obtained by setting $\bar{c}_{0i}=c_{0i}$, $\bar{c}_{in}=c_{in}$ for all internal nodes $i$ and $\bar{c}_{ij}=c_{ij}$ for all $1 \leq i < j \leq n-1$) is facet defining for $P_{0,n-\mbox{\scriptsize path}}^{(p)}(K_{n+1})$ if $cx \leq c_0$ is facet defining for $P_{0,n-\mbox{\scriptsize path}}^{(p)}({\tilde{D}_n})$. The same holds for ${P_{0,n-\mbox{\scriptsize path}}^c(\tilde{D}_n)}$ and ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$. \[PfacetsuCCCP\] Let $K_{n+1}=(N,E)$ be the complete graph on node set $N=\{0,\dots,n\}$ with $n \geq 4$, and let $c=(c_1,\dots,c_m) \in {\mbox{CS}}$ be a cardinality sequence.
Then we have:\ (a) The degree constraint $y(\delta(i)) \leq 2$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ for every node $i \in N \setminus \{0,n\}$ unless $c=(2,n)$.\ (b) Let $S$ be a subset of $N$ with $0,n \in S$ and $|S| \leq c_1$. Then, the min-cut inequality $$y((S:N \setminus S)) \geq 2$$ induces a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $|S| \geq 3$ and $|N \setminus S| \geq 2$.\ (c) Let $S \subset N$ with $0,n \in S$, $j \in N \setminus S$, and $|S| \geq c_1+1$. Then, the one-sided min-cut inequality $$y((S:N \setminus S))- y(\delta(j))\geq 0$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ and induces a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $|N \setminus S| \geq 2$.\ (d) The cardinality bound $y(E) \geq c_1$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $c_1 \geq 4$. The cardinality bound $y(E) \leq c_m$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $c_m < n$.\ (e) Let $W$ be a subset of $N$ with $0,n \in W$ and $c_p < |W|-1 < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$. The cardinality-forcing inequality $$(c_{p+1}-|W|+1) \sum_{i \in W} y(\delta(i)) - (|W|-c_p-1)\sum_{i \in N \setminus W} y(\delta(i)) \leq 2 c_p( c_{p+1}-|W|+1)$$ defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $c_{p+1}-|W|+1 \geq 2$ and $c_{p+1} < n$ or $c_{p+1}=n$ and $|W| = n$.\ (f) Let $W$ be a subset of $N$ such that $0,n \in W$ and $c_p < |W|-1 < c_{p+1}$ for some $p \in \{1,\dots,m-1\}$.
The cardinality-subgraph inequality $$2y(E(W)) - (|W|-c_p-2) y((W:N \setminus W)) \leq 2 c_p$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ and induces a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if $p+1 < m$ or $c_{p+1}=n = |W|$.\ (g) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 2$, and $c_p$ even for $1 \leq p \leq m$, and let $N = S \; \dot{\cup} \; T$ be a partition of $N$ with $0 \in S$, $n \in T$. The odd path exclusion constraint $$y(E(S))+ y(E(T)) \geq 1$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ and defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if (i) $c_1=2$ and $|S|, |T| \geq \frac{c_2}{2}+1$, or (ii) $c_1 \geq 4$ and $|S|,|T| \geq \frac{c_2}{2}$.\ (h) Let $c=(c_1,\dots,c_m)$ be a cardinality sequence with $m \geq 2$, $c_1 \geq 3$, and $c_p$ odd for $1 \leq p \leq m$, and let $N = S \; \dot{\cup} \; T$ be a partition of $N$ with $0,n \in S$. The even path exclusion constraint $$y(E(S))+ y(E(T)) \geq 1$$ is valid for ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ and defines a facet of ${P_{0,n-\mbox{\scriptsize path}}^{c}(K_{n+1})}$ if and only if ($\alpha$) $c_1=3$, $|S|-1 \geq \frac{c_2+1}{2}$, and $|T| \geq \frac{c_2-1}{2}$, or ($\beta$) $c_1 \geq 5$ and $\min (|S|-1,|T|) \geq \frac{c_2-1}{2}$.$\Box$ As already mentioned, the modified cardinality forcing inequalities are not equivalent to [pseudo-symmetric ]{}inequalities. Concluding remarks ================== Restricting the set of feasible solutions of a combinatorial optimization problem to those that satisfy some specified cardinality constraints always can be done by adding the corresponding cardinality forcing inequalities inherited from the polytope associated with the respective cardinality homogeneous set system. 
However, as we have demonstrated with the example of paths and cycles, one may end up with rather weak formulations unless this is done carefully: Imposing the restrictions on the number of vertices leads to formulations with facet defining inequalities, while the straightforward approach using the arcs does not result in strong inequalities. It would be interesting to see whether this is similar for cardinality restricted versions of other optimization problems. Moreover, we believe that there should be other interesting situations where knowledge on a master polyhedron (like the cardinality homogeneous set systems polyhedron) and on a polyhedron associated with particular combinatorial structures (like paths and cycles) can be brought into fruitful interplay. [99]{} E. Balas and M. Oosten, *On the cycle polytope of a directed graph*, Networks 36 No. 1 (2000), pp. 34-46. E. Balas and R. Stephan, *On the cycle polytope of a directed graph and its relaxations*, submitted to Networks. P. Bauer, *A Polyhedral Approach to the Weighted Girth Problem*, Aachen, 1995. P. Bauer, J.T. Linderoth, and M.W.P. Savelsbergh, *A branch and cut approach to the cardinality constrained circuit problem*, Mathematical Programming, Ser. A 91 (2002), pp. 307-348. E. Boros, P. Hammer, M. Hartmann, and R. Shamir, *Balancing problems in acyclic networks*, Discrete Applied Mathematics 49 (1994), pp. 77-93. C. Coullard and W.R. Pulleyblank, *On cycle cones and polyhedra*, Linear Algebra Appl. 114/115 (1989), pp. 613-640. G. Dahl and L. Gouveia, *On the directed hop-constrained shortest path problem*, Operations Research Letters 32 (2004), pp. 15-22. G. Dahl and B. Realfsen, *The Cardinality-Constrained Shortest Path Problem in 2-Graphs*, Networks 36 No. 1 (2000), pp. 1-8. U. Feige and M. Seltser, *On the densest k-subgraph problem*, Technical report, Department of Applied Mathematics and Computer Science, The Weizmann Institute, Rehovot, 1997. M.
Fischetti, *Clique tree inequalities define facets of the asymmetric traveling salesman polytope*, Discrete Applied Mathematics 56 (1995), pp. 9-18. E. Gawrilow and M. Joswig, *polymake: A framework for analyzing convex polytopes*, in: G. Kalai and G.M. Ziegler (eds.), *Polytopes - Combinatorics and Computation*, DMV-Seminars, pp. 43-74, Birkhäuser-Verlag, Basel, 2000, see also http://www.math.tu-berlin.de/polymake M. Grötschel, *Cardinality homogeneous set systems, cycles in matroids, and associated polytopes*, in: M. Grötschel, *The sharpest cut. The impact of Manfred Padberg and his work*, MPS-SIAM Series on Optimization 4, SIAM, 2004, pp. 199-216. M. Grötschel and M.W. Padberg, *Polyhedral theory*, in: E.L. Lawler et al. (eds.), *The traveling salesman problem. A guided tour of combinatorial optimization*, Chichester, New York, and others, 1985, pp. 251-305. M. Hartmann and Ö. Özlük, *Facets of the $p$-cycle polytope*, Discrete Applied Mathematics 112 (2001), pp. 147-178. V. Kaibel and R. Stephan, *On cardinality constrained cycle and path polytopes*, ZIB-Report, October 2007, available at `www.zib.de/bib/pub/index.en.html`. M. Kovalev, J.-F. Maurras, and Y. Vaxés, *On the convex hull of 3-cycles of the complete graph*, Pesquisa Operacional 23 (2003), pp. 99-109. J.-F. Maurras and V.H. Nguyen, *On the linear description of the 3-cycle polytope*, European Journal of Operational Research, 1998. J.-F. Maurras and V.H. Nguyen, *On the linear description of the k-cycle polytope, $PC_n^k$*, International Transactions in Operational Research 8 (2001), pp. 673-692. G.L. Nemhauser and L.A. Wolsey, *Integer and Combinatorial Optimization*, Wiley, New York, 1988. V.H. Nguyen, *A complete description for the k-path polyhedron*, Proceedings of the Fifth International Conference on Modelling, Computation and Optimization in Information Systems and Management Science, pp. 249-255. R. Stephan, *Facets of the (s,t)-p-path polytope*, arXiv: math.OC/0606308, submitted, June 2006. R.
Stephan, *Polytopes associated with length restricted directed circuits*, Master's Thesis, Technische Universität Berlin, 2005. A. Schrijver, *Combinatorial Optimization*, Vol. A, Berlin et al., 2003.
--- abstract: 'We propose a method to improve image clustering using sparse text and the wisdom of the crowds. In particular, we present a method to fuse two different kinds of document features, image and text features, and use a common dictionary or “wisdom of the crowds” as the connection between the two different kinds of documents. With the proposed fusion matrix, we use topic modeling via non-negative matrix factorization to cluster documents.' author: - - - - bibliography: - 'asilomar\_imgtxt.bib' title: Improving Image Clustering using Sparse Text and the Wisdom of the Crowds --- Introduction ============ There has been substantial research in organizing large image databases. Often, these images have corresponding text, such as captions in textbooks and metatags. We investigate strategies to use this text information to improve the clustering of image documents into groups of similar images. Image clustering is used for image database management, content-based image searches, and image classification. In this paper, we present a method for improving image clustering using sparse text and freely obtainable information from the internet. The motivation behind our method stems from the idea that we can fuse image and text documents and use the “wisdom of the crowds” (WOC), the freely obtainable information, to connect the sparse text documents, where each WOC document acts as a representative of a single class. In Section 2, we briefly touch upon related material. In Section 3, we introduce our method of fusing text and image documents using the term frequency-inverse document frequency weighting scheme. We then describe how non-negative matrix factorization is used for the purpose of topic modeling in Section 4. In Section 5, we present results from an application of our method. Related Works ============= There have been many studies on text document clustering and image clustering.
A general joint image and text clustering strategy proceeds in two steps: first, the two different types of documents must be combined into a single document feature matrix; then, a clustering technique is applied. The term frequency-inverse document frequency (TF-IDF) is a technique to create a feature matrix from a collection, or corpus, of documents. TF-IDF is a weighting scheme that weighs features in documents based on how often a word occurs in an individual document compared with how often it occurs in other documents [@tf-idf_original]. TF-IDF has been used for text mining, near-duplicate detection, and information retrieval. When dealing with text documents, the natural features to use are words (i.e. delimiting strings by white space to obtain features). We can represent each word by a unique integer. In order to use text processing techniques for image databases, we generate a collection of image words in two steps. First, we obtain a collection of image features, and then define a mapping from the image features to the integers. To obtain image features, we use the scale invariant feature transform (SIFT) [@lowe2004distinctive]. We then use k-means to cluster the image features into $K$ different clusters. The mapping from the image feature to the cluster is used to identify image words, and results in the image Bag-Of-Words model [@fei2005bayesian]. Topic modeling is used to uncover a hidden topical structure of a collection of documents. There have been studies on using large-scale data collections to improve classification of sparse, short segments of text, which usually cluster inaccurately due to sparseness of text [@phan2008learning]. Latent Dirichlet Allocation (LDA), singular value decomposition (SVD), and non-negative matrix factorization (NNMF) are just some of the models that have been used in topic modeling [@arora2012learning]. In our method, we integrate these techniques to combine and cluster different types of documents.
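As an aside, the Bag-Of-Words step described above can be sketched in a few lines of NumPy; this assumes a codebook of centroids already obtained by k-means, and the function name is ours. Each local descriptor (e.g. a SIFT vector) is assigned to its nearest centroid, and an image is represented by the histogram of these "visual words".

```python
import numpy as np

def bag_of_words(features, codebook):
    """Map each descriptor (row of features, shape (f, d)) to its nearest
    centroid in codebook (shape (K, d)) and return the K-bin histogram
    of visual-word occurrences for one image."""
    # squared distances between every descriptor and every centroid
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = dists.argmin(axis=1)                  # the image "words"
    return np.bincount(words, minlength=len(codebook))
```

Stacking these histograms row-wise over all images yields the matrix $A$ used below.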
We use SIFT to obtain image features and term frequency-inverse document frequency to generate a feature matrix in the fused collection of documents. Then, we use non-negative matrix factorization to learn a representation of the corpus which is used to cluster the documents. Fusing Image and Text Documents =============================== We denote a collection of image documents $D = \{d_1, ..., d_n\}$ and a collection of sparse text documents $S = \{s_1,...,s_m\}$, where text document $s_i$ describes image document $d_i$ for $i=1,...,m$. Some of the text documents may be empty, indicating the absence of any labeled text. **Image Documents** Using the scale invariant feature transform (SIFT) and k-means, we obtain $A \in \mathbb{R}^{n \times p}$ where $p$ is the number of image features, $n$ is the number of image documents, and element $A_{i,j}$ represents the number of times the image document $d_i$ contains the $j^{th}$ feature. **Wisdom of the Crowds** Due to the sparse nature of the text documents we are considering, the WOC is needed to link features that represent a single class. For example, if one wishes to obtain a class of documents and images about cats, text and images from a Wikipedia page on cats can be used as the wisdom of the crowds. Using Wikipedia, we collect WOC documents $W = \{w_1, ..., w_k\}$ where $k$ is the number of clusters we wish to cluster the images into. Each $w_i$ is a text document that contains features that collectively describe a single class. To create text features, we parse text documents by white space (i.e. break up text by words) and obtain a corpus $f = (f_1, ..., f_q)$ of $q$ unique features. Let $C \in \mathbb{R}^{k \times q}$. Each $C_{i,j}$ is the number of times the feature $f_j$ appears in $w_i$. **Text Documents** In the same manner as with WOC documents, we parse text documents into features to obtain a corpus.
In most cases, the features in this corpus have already appeared somewhere in the WOC documents, so we use the same $f = (f_1,...,f_q)$ from the previous step. If that is not the case, “missing" features can simply be appended to the list of features and the $C$ matrix extended to reflect the absence of the missing features. We calculate $B \in \mathbb{R}^{m \times q}$ where $m$ is the number of text documents, $q$ is the number of features in corpus $f$, and element $B_{i,j}$ is the number of times text document $s_i$ contains the feature $f_j$. We then extend $B \in \mathbb{R}^{m \times q}$ to $B \in \mathbb{R}^{n \times q}$ such that $B_{i,j}= 0$ for $i = m+1,...,n$, $j=1,...,q$. Intuitively, this means that none of the text features knowingly describes the $m+1, ..., n$ image documents. We combine the image feature matrix $A$, the text feature matrix $B$, and the WOC matrix $C$ to initialize matrix $M$: $$M = \begin{bmatrix} A & B\\ 0 & C\\ \end{bmatrix},$$ where $M \in \mathbb{R}^{(n+k) \times (p+q)}$ and $0 \in \{0\}^{k \times p}$. We call $M$ our mixed document feature matrix. Each row represents a document and each column represents a feature (either an image feature or a text feature). Without the reweighting using IDF, it is difficult to use sparse text to aid in image classification. This is because the frequency of the image features outweighs any effect the sparse text has in the classification of image documents. The inverse document frequency matrix $\text{IDF} \in \mathbb{R}^{(p+q) \times (p+q)}$ is defined as the diagonal matrix with nonzero elements: $$\text{IDF}_{j,j} = \log\dfrac{n+k}{|\{i : M_{i,j}>0\}|},$$ where ${|\{i : M_{i,j}>0\}|}$ is the number of documents containing the $j^{th}$ feature. We then replace $M$ by $M \times \text{IDF}$.
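The assembly of the mixed matrix $M$ and the IDF reweighting can be sketched as follows. This is a minimal illustration with plain nested lists; the function name and the toy sizes are ours, and the document count in the IDF logarithm is simply taken to be the number of rows of $M$.

```python
import math

def fuse_and_reweight(A, B_sparse, C):
    """Build the mixed document-feature matrix M = [[A, B], [0, C]] and
    apply the inverse-document-frequency reweighting M <- M * IDF.

    A: n x p image-feature counts; B_sparse: m x q text counts (m <= n);
    C: k x q wisdom-of-the-crowds counts. All plain nested lists.
    """
    n, p = len(A), len(A[0])
    k, q = len(C), len(C[0])
    # extend B with zero rows for the images that carry no caption text
    B = B_sparse + [[0] * q for _ in range(n - len(B_sparse))]
    M = [A[i] + B[i] for i in range(n)] + [[0] * p + C[i] for i in range(k)]
    n_docs = len(M)
    for j in range(p + q):
        df = sum(1 for row in M if row[j] > 0)
        idf = math.log(n_docs / df) if df else 0.0
        for row in M:
            row[j] *= idf
    return M

A = [[2, 0], [1, 3]]    # 2 image documents, 2 image features
B_sparse = [[1, 0]]     # only the first image has a caption
C = [[4, 1]]            # one WOC document over the same 2 text features
M = fuse_and_reweight(A, B_sparse, C)
```

A feature present in every row of $M$ would be zeroed out by the IDF factor, while rare features are amplified, which is what lets the sparse captions compete with the dense image counts.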
![Example of an M matrix with 4500 image documents, 9 WOC documents, and 450 text documents.[]{data-label="fig:mixmat"}](mixMat.jpg) Topic Modeling using Non-negative Matrix Factorization ====================================================== We use non-negative matrix factorization (NNMF) on the document feature matrix to cluster documents into topics. We consider the document feature matrix as a set of $(n+k)$ points in a $(p+q)$-dimensional space. Each document is a point and each feature is a dimension. We want to reduce the dimensionality of this space to $k^* \ll \min(n+k, p+q)$ dimensions [@Lsas]. NNMF is a method that takes a non-negative matrix $M_+ \in \mathbb{R}^{(n+k) \times (p+q)}$ and factors it into two non-negative matrices $U_+ \in \mathbb{R}^{(n+k) \times k^*}$ and $V_+ \in \mathbb{R}^{k^* \times (p+q)}$ where $k^*$ is the rank of the desired lower dimensional approximation to $M$ [@LeeSeung]. We take the $(p+q)$-dimensional feature space and project it onto a $k^*$-dimensional topic space where $k^*$ is the number of desired classes. Denoting the squared Frobenius norm of $M$ as $||M||_F^2 = \sum_i\sum_j M_{i,j}^2$, we wish to obtain $U$ and $V$ by minimizing the following cost function: $$||M - UV||^2_F. \label{eq:nmf}$$ Intuitively, $U_{i,j}$ tells us how well document $d_i$ fits into topic $j$ and $V_{i,j}$ tells us how well the $j^{th}$ feature describes the $i^{th}$ topic. In most applications of topic modeling using NNMF, a document $d_i$ belongs to topic $j$ if $$j = \operatorname*{arg\,max}_z U_{i,z}.$$ Because of the geometric nature of the NNMF topic modeling method, we also investigate the clusters that result from a k-means clustering on the rows of $U$, that is, on the locations of documents in the reduced-dimension topical space. Results ======= Evaluation Metrics ------------------ Purity and z-Rand scores are metrics used to evaluate cluster quality [@traud2008comparing], [@amigo2009comparison].
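A minimal pure-Python sketch of this factorization step uses the classical Lee–Seung multiplicative updates for the Frobenius cost. The paper does not specify which NNMF solver it uses, so this is one standard choice; the toy matrix, the seed, and the small $\epsilon$ guarding against division by zero are illustrative.

```python
import random

def matmul(X, Y):
    # plain nested-list matrix product (X: a x b, Y: b x c)
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def nnmf(M, k, iters=1000, eps=1e-9, seed=0):
    """Factor non-negative M into U (rows x k) and V (k x cols), both
    non-negative, via Lee-Seung multiplicative updates for ||M - UV||_F^2."""
    rng = random.Random(seed)
    rows, cols = len(M), len(M[0])
    U = [[rng.random() for _ in range(k)] for _ in range(rows)]
    V = [[rng.random() for _ in range(cols)] for _ in range(k)]
    for _ in range(iters):
        Vt = [list(c) for c in zip(*V)]
        num, den = matmul(M, Vt), matmul(matmul(U, V), Vt)   # M V^T, U V V^T
        U = [[U[i][a] * num[i][a] / (den[i][a] + eps) for a in range(k)]
             for i in range(rows)]
        Ut = [list(c) for c in zip(*U)]
        num, den = matmul(Ut, M), matmul(matmul(Ut, U), V)   # U^T M, U^T U V
        V = [[V[a][j] * num[a][j] / (den[a][j] + eps) for j in range(cols)]
             for a in range(k)]
    return U, V

def frob_err(M, U, V):
    UV = matmul(U, V)
    return sum((M[i][j] - UV[i][j]) ** 2
               for i in range(len(M)) for j in range(len(M[0])))

# toy rank-2 corpus: two "topics" with disjoint feature support
M = [[1, 2, 0], [2, 4, 0], [0, 0, 3], [0, 0, 6]]
U, V = nnmf(M, 2)
err = frob_err(M, U, V)
```

On this toy matrix the first two rows and the last two rows end up assigned to different topics via $\arg\max_z U_{i,z}$, mirroring the assignment rule in the text.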
**Purity** Purity is a well known clustering measure that depends on some ground truth. This metric compares a clustering to the ground truth by examining the intersection of the ground truth clusters with the new clusters. Purity is computed as follows: $$Purity(G,C) = \frac{1}{m} \sum_i \max_j |g_j \cap c_i|.$$ Here, $m$ is the number of documents, $G = \{g_1, ..., g_k\}$ is the ground truth or class assignment where each $g_j$ is the set of indices belonging to the $j^{th}$ class, and $C = \{c_1, ..., c_t\}$ is the clustering from some method where each $c_i$ is the set of indices belonging to the $i^{th}$ cluster. It is important to note that purity is sensitive to the number of clusters. If every document had its own cluster, then the purity for this set of clusters would be 1. To address this sensitivity, we also look at the z-Rand metric. **z-Rand** To define the z-Rand score we first define $p$ to be the number of pairs of documents that are in the same cluster both as determined by our method and in the ground truth (i.e. the number of document pairs that are correctly clustered together). The z-Rand score, $z_R$, is defined as: $$z_R = \dfrac{(p-\mu_p)}{\sigma_p},$$ where $\mu_p$ and $\sigma_p$ are the expected value and the standard deviation of $p$ under a hypergeometric distribution with the same cluster sizes. Intuitively, we are comparing the number of correctly identified pairings to the number of correctly identified pairings that would result if the pairings were randomly selected. The higher the z-Rand score, the better the clustering, as the clusters created are very different from randomly picked clusters.\ We apply our method to the Electro-Optical (EO) dataset provided by China Lake. This dataset consists of 9 classes of images where each class contains 500 images of a single vehicle from different angles.
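The purity formula above translates directly into code (a toy illustration; the clusterings and the function name are our own):

```python
def purity(ground_truth, clusters, n_docs):
    """Purity(G, C) = (1/m) * sum_i max_j |g_j intersect c_i|, with both
    clusterings given as lists of sets of document indices."""
    return sum(max(len(g & c) for g in ground_truth)
               for c in clusters) / n_docs

G = [{0, 1, 2}, {3, 4, 5}]   # ground-truth classes
C = [{0, 1, 3}, {2, 4, 5}]   # clustering to be evaluated
score = purity(G, C, 6)
```

Each cluster contributes its largest overlap with any class (2 and 2 here), giving $4/6 \approx 0.67$; a clustering identical to the ground truth scores 1, but so does the degenerate one-document-per-cluster solution, which is why the z-Rand score is used alongside purity.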
Because this dataset does not contain text data, we use Wikipedia articles to create sparse text captions for a varying number of documents by randomly selecting 5 words from each Wikipedia article to be an image caption. Using only the image documents and NNMF, the clusters produced score a mean purity of 0.6397 and mean z-Rand of 1460.7. Matrix Purity z-Rand ----------- -------------------- --------------------- A 0.6397 $\pm$ 0.012 1460.7 $\pm$ 52.18 $[A : B]$ 0.6597 $\pm$ 0.01 1538.6 $\pm$ 45.71 M 0.769 $\pm$ 0.0012 1909.5 $\pm$ 136.55 : Results from the EO data set. A is only using image features, $[A:B]$ is image features with sparse text, and M is image and text features with the additional dictionary.[]{data-label="tab:mainresults"} For our first experiment, we investigate the usefulness of fusing image and text documents together and using the appropriate reweighting. In Table \[tab:mainresults\], we compare using only image features, using image and sparse text features, and using image features, sparse text features, and a dictionary. As one can see, using only image features does the worst, while using sparse text features helps only slightly. We attribute this slight improvement to the fact that the text documents are sparse. When we use the WOC, we get a significant increase in purity and z-Rand. We also investigated the effect of varying the percentage of documents with both image and text features and found that, in general, regardless of the number of image documents that contained sparse text, the purity stayed between 0.76 and 0.78, while the z-Rand ranged from 1877.0 to 1938.2. To improve results, one may also remove stop words from the text features. Stop words are commonly used words such as ‘the’, ‘a’, and ‘is’. When we did this, we obtained a mean purity of 0.778. We found that each class can be broken down into three subclasses: front of vehicle, back of vehicle, and sides.
So, using $k^*=27$, we greatly improve our results as shown in Table \[tab:27topics\]. % of documents with labels purity z-Rand ---------------------------- ----------------------- ----------------------- 0.2 0.88126$\pm$0.0035533 1579.9675$\pm$15.6499 0.4 0.87793$\pm$0.0046333 1571.9551$\pm$14.8006 0.6 0.88403$\pm$0.0043387 1566.322$\pm$13.8568 0.8 0.88341$\pm$0.0042595 1580.0351$\pm$13.7404 1 0.88071$\pm$0.0038168 1576.8284$\pm$16.8794 : Purity and z-Rand over different percentages of image documents with text documents, where the number of text documents is $m = \lfloor{np}\rfloor$, for the EO data set using $k^* = 27$.[]{data-label="tab:27topics"} Conclusion ========== Fusing text documents and image documents makes it possible to improve image clusters. The results from the EO data set show that our method improves the image clusters compared to using NNMF on only the image document feature matrix $A$. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported in part by AFOSR MURI grant FA9550-10-1-0569. Arjuna Flenner was supported by ONR grants number N0001414WX20237 and N0001414WX20170. Deanna Needell was partially supported by Simons Foundation Collaboration grant $\#274305$.
Introduction ============ The single band two dimensional Hubbard Hamiltonian[@HUBBARD] has recently received considerable attention due to possible connections with high temperature superconductors. Indeed, evidence is accumulating that this Hamiltonian may describe, at least qualitatively, some of the normal state properties of the cuprates.[@review] Exact Diagonalization (ED) and Quantum Monte Carlo (QMC) have been used to model static properties like the behavior of spin correlations and magnetic susceptibility both at half-filling and with doping.[@review] Comparisons of dynamic quantities like the spectral weight and density of states with angle-resolved photoemission results[@flat-exper; @flat; @bulut; @hanke; @berlin] have also proven quite successful. Significantly, while analytic calculations have pointed towards various low temperature superconducting instabilities, such indications have been absent in numerical work.[@review] Historically, however, the Hubbard model was first proposed to model magnetism and metal-insulator transitions in 3D transition metals and their oxides,[@HUBBARD] rather than superconductivity. Now that the technology of numerical work has developed, it is useful to reconsider some of these original problems. A discussion of possible links between the 3D Hubbard model and photoemission results for ${\rm YTiO_3}$, ${\rm Sr VO_3}$ and others[@fujimori; @inoue; @morikawa] has already recently occurred. In such perovskite ${\rm Ti^{3+}}$ and ${\rm V^{4+}}$ oxides, which are both in a $3d^1$ configuration, the hopping amplitude $t$ between transition-metal ions can be varied by modifying the $d-d$ neighboring overlaps through a tetragonal distortion. Thus, the strength of the electron correlation $U/t$ can be varied by changing the composition. In fact, a metal-insulator transition has been reported in the series ${\rm SrVO_3}-{\rm Ca VO_3}-{\rm La Ti O_3}-{\rm YTiO_3}$. 
On the metallic side, a quasiparticle band is experimentally observed near the Fermi energy $E_F$, as well as a high energy satellite associated to the lower Hubbard band (LHB).[@fujimori; @rrmp] Spectral weight is transferred from the quasiparticle to the LHB as $U/t$ is increased at half-filling. In this paper, we report the first use of Quantum Monte Carlo, combined with analytic continuation techniques, to evaluate the spectral function and density of states for the 3D Hubbard Hamiltonian. The motivation is twofold. First, we want to compare general properties of the 3D Hubbard Hamiltonian with the extensive studies already reported in the literature[@WHITE; @jarrell; @rrmp; @review] for the 2D and infinite-D cases. Of particular importance is the presence of quasiparticles near the half-filling regime, as well as the evolution of spectral weight with doping. Many of the high-Tc cuprates contain ${\rm CuO_2}$ planes that are at least weakly coupled with each other, and thus the study of the 3D system may help in understanding part of the details of the cuprates. More generally, the Hubbard Hamiltonian is likely to continue being one of the models used to capture the physics of strongly correlated electrons, so we believe it is important to document its properties in as many environments as possible for potential future comparisons against experiments. Secondly, we discuss a particular illustration of such contact between Hubbard Hamiltonian physics and experiments on 3D transition metal oxides. In addition to the studies of half-filled systems with varying correlation energy mentioned above, experiments where the band filling is tuned by changing the chemical composition have also been reported.[@fujimori2; @morikawa; @tokura] One compound that has been carefully investigated in this context is ${\rm Y_{1-x} Ca_x Ti O_3}$. At $x=0$ the system is an antiferromagnetic insulator. As $x$ increases, a metal-insulator transition is observed in PES studies. 
The lower and upper Hubbard bands (LHB and UHB) are easily identified even with $x$ close to 1, which would naively correspond to small electronic density in the single band Hubbard model, i.e. a regime where $U/t$ is mostly irrelevant. In the experiments, a very small amount of spectral weight is transferred to the Fermi energy, filling the gap observed at half-filling (i.e. generating a “pseudogap”). Analysis of the PES results of these compounds using the paramagnetic solution of the Hubbard Hamiltonian in infinite-D [@metzner], a limit where dynamic mean field theory becomes exact (see section II), has resulted in qualitative agreement [@jarrell; @georges; @rrmp] with the experimental results. At and close to half-filling there is an antiferromagnetic (AF) solution which becomes unstable against a paramagnetic (PM) solution at a critical concentration of holes. In the PM case, weight appears in the original Hubbard gap as reported experimentally. However, this analysis of the spectral weight in terms of the infinite-D Hamiltonian is in contradiction with results for the density of states reported in the 2D Hubbard model[@review] where it is found that upon hole (electron) doping away from half-filling the chemical potential $\mu$ moves to the top (bottom) of the valence (conduction) band. The results at $\langle n \rangle =1$ in 2D already show the presence of a robust quasiparticle peak which is absent in the insulating PM solution of the $D=\infty$ model. That is, in the 2D system the large peak in the density of states observed away from half-filling seems to evolve from a robust peak already present at half-filling. On the other hand, at $D=\infty$ a feature resembling a “Kondo-resonance” is $generated$ upon doping if the paramagnetic solution is used. 
This peak in the density of states does not have an analog at half-filling unless frustration is included.[@jarrell] Studies in 3D may help in the resolution of this apparent discontinuity of the physics of the Hubbard model when the dimension changes from 2 to $\infty$. The proper way to carry out a comparison between $D=3$ and $\infty$ features is to base the analysis on ground state properties. With this restriction, i.e. using the AF solution at $D=\infty$ and close to half-filling, rather than the PM solution, we found that the $D=3$ and $\infty$ results are in good agreement. In this paper we will consider which of these situations the 3D Hubbard Hamiltonian better corresponds to, and therefore whether the single band Hubbard Hamiltonian provides an adequate description of the density of states of 3D transition-metal oxides. Model and Methods ================= The single band Hubbard Hamiltonian is $$\begin{aligned} H & = & -t \sum_{{\bf \langle ij \rangle} \sigma} ( c^\dagger_{ {\bf i} \sigma} c_{{\bf j} \sigma} + h.c.) - \mu \sum_{{\bf i}\sigma} n_{{\bf i}\sigma} \nonumber \\ & & + U \sum_{\bf i} (n_{{\bf i} \uparrow} - 1/2 ) (n_{{\bf i} \downarrow} - 1/2 ), \label{hubbard}\end{aligned}$$ where the notation is standard. Here ${\bf \langle ij \rangle }$ represents nearest-neighbor links on a 3D cubic lattice. The chemical potential $\mu$ controls the doping. For $\mu=0$ the system is at half filling ($\langle n \rangle=1$) due to particle-hole symmetry. $t\equiv 1$ will set our energy scale. We will study the 3D Hubbard Hamiltonian using a finite temperature, grand canonical Quantum Monte Carlo (QMC) method [@blankenbecler] which is stabilized at low temperatures by the use of orthogonalization techniques [@white]. The algorithm is based on a functional-integral representation of the partition function obtained by discretizing the “imaginary-time” interval $[0,\beta]$ where $\beta$ is the inverse temperature.
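The physics of Eq. (\[hubbard\]) can be made concrete on the smallest nontrivial cluster. For two sites at half-filling in the $S_z=0$ sector (using the equivalent interaction $U\sum_{\bf i} n_{{\bf i}\uparrow}n_{{\bf i}\downarrow}$, i.e. absorbing $\mu$ and the constant shifts, which only move the zero of energy), the exact ground-state energy is $(U-\sqrt{U^2+16t^2})/2$. The sketch below is our own illustration, not from the paper; the sign pattern of the hopping matrix depends on the basis-ordering convention, but the spectrum does not.

```python
import math

def two_site_hubbard_ground_state(t=1.0, U=4.0, iters=3000):
    """Ground-state energy of the two-site Hubbard model at half-filling,
    S_z = 0 sector, basis (|ud,0>, |0,ud>, |u,d>, |d,u>).
    Power iteration on (c*I - H) isolates the lowest eigenvalue."""
    H = [[U,  0, -t, -t],
         [0,  U, -t, -t],
         [-t, -t, 0,  0],
         [-t, -t, 0,  0]]
    c = U + 5 * t                 # shift so c - E0 is the largest eigenvalue
    v = [1.0, 0.5, 0.3, 0.2]      # generic start vector
    for _ in range(iters):
        w = [c * v[i] - sum(H[i][j] * v[j] for j in range(4)) for i in range(4)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Hv = [sum(H[i][j] * v[j] for j in range(4)) for i in range(4)]
    return sum(v[i] * Hv[i] for i in range(4))   # Rayleigh quotient <v|H|v>

E0 = two_site_hubbard_ground_state(t=1.0, U=4.0)
```

For $t=1$, $U=4$ this gives $E_0=(4-\sqrt{32})/2\approx-0.83$, interpolating smoothly between the free value $-2t$ at $U=0$ and the Heisenberg-like value $-4t^2/U$ at large $U$, the same exchange scale $J=4t^2/U$ invoked later for the quasiparticle bandwidth.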
The Hubbard interaction is decoupled by a two-valued Hubbard-Stratonovich transformation [@hirsch] yielding a bilinear time-dependent fermionic action. The fermionic degrees of freedom can hence be integrated out analytically, and the partition function (as well as observables) can be written as a sum over the auxiliary fields with a weight proportional to the product of two determinants, one for each spin species. At half-filling ($\langle n \rangle=1$), it can be shown by particle-hole transformation of one spin species $(c_{{\bf i}\downarrow} \rightarrow (-1)^{{\bf i}} c_{{\bf i}\downarrow}^\dagger)$ that the two determinants differ only by a positive factor, hence their product is positive definite. At general fillings, however, the product can become negative, and this “minus-sign problem” restricts the application of QMC to relatively high temperature (of order 1/30 of the bandwidth) off half-filling. The QMC algorithm provides a variety of static and dynamic observables. One equal time quantity in which we are interested is the magnetic (spin-spin) correlation function, $$C({\bf l}) = \frac{1}{N} \sum_{{\bf j}} \langle m_{\bf j} m_{{\bf j+l}} \rangle. \label{correl}$$ Here $m_{\bf j}=\sum_\sigma\sigma n_{{\bf j}\sigma}$ is the local spin operator, and $N$ is the total number of lattice sites. Static correlations have also been investigated in earlier studies of the $3D$ Hubbard model [@HIRSCHn; @rts] where the antiferromagnetic phase diagram at half filling was explored. To obtain dynamical quantities in real time or frequency, the QMC results in imaginary time have to be analytically continued to the real time axis. Since we are mostly interested in the one-particle spectrum we measure the one-particle Green function $G({\bf p},\tau)$. 
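As a check of the normalization of Eq. (\[correl\]), one can evaluate $C({\bf l})$ for a perfectly Néel-ordered classical configuration $m_{\bf j}=(-1)^{j_x+j_y+j_z}$, for which it reduces to the staggered pattern $C({\bf l})=(-1)^{l_x+l_y+l_z}$ that the QMC data approach at low temperature and strong coupling. This is a toy consistency check of the definition, not the QMC measurement itself.

```python
def neel_correlation(L, l):
    """C(l) = (1/N) sum_j m_j m_{j+l} for the classical Neel configuration
    m_j = (-1)^(x+y+z) on an L^3 periodic lattice (L even)."""
    N = L ** 3

    def m(x, y, z):
        return (-1) ** (x + y + z)

    lx, ly, lz = l
    total = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                total += m(x, y, z) * m((x + lx) % L, (y + ly) % L, (z + lz) % L)
    return total / N
```

With $L$ even the periodic wrap preserves sublattice parity, so $C({\bf l})$ alternates in sign with the parity of $l_x+l_y+l_z$; in the interacting model $|C({\bf l})|$ is reduced from 1 by quantum and thermal fluctuations.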
The imaginary part of $G({\bf p},\omega)$ (in real frequency) defines the spectral weight function at momentum ${\bf p}$, $A({\bf p},\omega)$, which is related to $G({\bf p},\tau)$ by: $$G({\bf p},\tau) = \int_{-\infty}^\infty d\omega \, A({\bf p},\omega) \, \frac{e^{-\tau\omega}}{1+e^{\beta\omega}}. \label{conti}$$ $A({\bf p},\omega)$ can in principle be calculated by inverting Eq.(\[conti\]), but the exponential behavior of the kernel at large values of $|\omega|$ makes this inversion difficult numerically. $G({\bf p},\tau)$ is quite insensitive to details of $A({\bf p},\omega)$, in particular at large frequencies. Since $G({\bf p},\tau)$ is known only on a finite grid in the interval $[0,\beta]$ and there only within the statistical errors given by the QMC-sampling, solving Eq.(\[conti\]) for $A({\bf p},\omega)$ is an ill-posed problem. A large number of solutions exists, and the problem is to find criteria to select out the correct one. This can be done by employing the Maximum Entropy (ME) method [@gubernatis]. Basically, ME finds the “most likely” solution $A({\bf p},\omega)$ which is consistent with the data and all information that is known about the solution (like positivity, normalization, etc.). ME avoids “overfitting” to the data by a “smoothing” technique that tries to assimilate the resulting $A({\bf p},\omega)$ to a flat default model. In the absence of any data ($G({\bf p},\tau)$) ME would converge to the default model, which is chosen to be a constant within some large frequency interval. There is no adjustable parameter in the ME application. One needs accurate data for $G({\bf p},\tau)$ with a statistical error of $O(10^{-4})$ to get reliable results for $A({\bf p},\omega)$. In principle, one can calculate analytically the first and second (and higher) moments of the spectral weight and include this information in the ME procedure.
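For a model spectral function made of a few delta peaks, the integral in Eq. (\[conti\]) collapses to a finite sum, which is convenient for checking conventions: at $\tau=0$ the kernel reduces to the Fermi function, so $G({\bf p},0)$ is the occupation of the mode, and a particle-hole-symmetric $A$ gives exactly $1/2$, as appropriate at half-filling. The toy parameters below are our own illustration, not from the paper.

```python
import math

def green_from_peaks(peaks, tau, beta):
    """Evaluate Eq. (conti) for a model spectral function consisting of
    delta peaks, A(w) = sum_i w_i * delta(w - w_i), with the kernel
    exp(-tau*w) / (1 + exp(beta*w)) as written in the text."""
    return sum(wt * math.exp(-tau * w) / (1.0 + math.exp(beta * w))
               for w, wt in peaks)

beta = 10.0
# particle-hole symmetric two-peak model of a gapped insulator
peaks = [(-2.0, 0.5), (2.0, 0.5)]
n_occ = green_from_peaks(peaks, 0.0, beta)   # G(tau=0) = Fermi-weighted weight
```

Because $1/(1+e^{\beta\omega})+1/(1+e^{-\beta\omega})=1$, the symmetric peaks contribute exactly half the total spectral weight to $G(\tau{=}0)$; the same exponential kernel is what suppresses the sensitivity of $G(\tau)$ to the high-$|\omega|$ details of $A$, making the inversion ill-posed.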
However, we chose to calculate the moments afterwards from the resulting function $A({\bf p},\omega)$, and to compare them with the analytically known results as a further test. The agreement was in all cases within 10%. Still, the ME method provides only a rough estimate of the true spectral weight functions. Band gaps and the positions of significant peaks are usually well captured, but fine structure, which needs a high frequency resolution, is hard to detect within the ME approach. Integrating $A({\bf p},\omega)$ over the momenta gives the one-particle density of states (DOS) $N(\omega)$. However, technically, it is preferable to first integrate $G({\bf p},\tau)$, which reduces the statistical errors, and then perform the analytic continuation. The DOS will be compared to results from the dynamical mean-field theory of infinite dimensions, $D=\infty$ [@metzner]. In this limit, with the proper scaling of the hopping element ($t=t^*/\sqrt{Z}$, with $Z$ being the coordination number) the one-particle self energy becomes local or, equivalently, momentum independent, and the lattice problem is mapped onto a single site problem. The constant $t^*$ is set to $t^*=\sqrt{6}$ to obtain the same energy scale, $t=1$, when compared to the $3D$ case. [@bethe] In contrast to conventional mean-field theories, the self energy remains frequency dependent, preserving important physics. Spatial fluctuations are neglected, an approximation which becomes exact in the limit $Z\to\infty$ ($Z=2D$ for the simple cubic lattice). Even in $D=\infty$, the remaining local interacting problem cannot be solved analytically but will also be treated by a finite temperature QMC [@fye] supplemented by a self-consistency iteration [@georges; @rrmp]. The advantage is that the system can be investigated in the thermodynamic limit with a modest amount of computer time. Due to its local character, the $D=\infty$ approach cannot provide information on momentum dependent spectral functions.
However, recently a $k$-resolved spectral function has been studied[@rspec] in $D=\infty$. Among other things, the $D=\infty$ limit has been used recently to study the AF phase diagram in the Hubbard model [@jarrell]. The agreement of the Néel temperature with 3D results is good.[@rts] In $D=\infty$ it is further possible to suppress AF long range order artificially by restricting the calculation to the (at low temperatures unstable) paramagnetic solution at half-filling. In this way, one may simulate frustration due to the lattice structure or orbital degeneracy, although in the absence of calculations for hypercubic lattices with nearest and next-nearest uniform hopping amplitudes it is still a conjecture how close this approach is to including these effects fully. Half-filling ============ Quantum Monte Carlo ------------------- We first study the single particle spectral weight $A({\bf p},\omega)$ at relatively strong coupling, $U=8$, and half-filling ($\langle n \rangle=1$) at a low temperature of $T=1/10$. A gap is clearly present in the spectrum (Fig. 1) which is compatible with the expectation that the half-filled Hubbard model on a bipartite lattice is an antiferromagnetic insulator for all nonzero values of the coupling $U/t$. The spectral weight has four distinct features (two in the LHB and two identical ones in the UHB, as expected from particle-hole symmetry). In the UHB there is weight at a high energy, roughly in the interval between $\sim 5t$ and $8t$. This broad feature likely corresponds to the incoherent part of the spectral function found in previous simulations for the 2D Hubbard and $t-J$ models.[@review] The dominant scale of this incoherent weight is $t$, and since it is located far from the top of the valence band its presence is not important for the low temperature properties of the system. Much more interesting is the sharper peak found close to the gap in the spectrum.
This band dispersion starts at a binding energy of approximately $\omega= -4t$ at momenta $(0,0,0)$ and moves up in energy obtaining its maximum value at $\omega\approx -2t$ at momenta $(0,\pi/2,\pi)$ and $(\pi/2,\pi/2,\pi/2)$ in Fig. 1. The width of the peak diminishes as the top of the valence band is reached. Similar structure was discussed before in studies of 2D systems, which had a somewhat higher resolution, as a “quasiparticle” band corresponding to a hole moving coherently in an antiferromagnetic background.[@review; @moreo] This quasiparticle should be visualized as a hole distorting the AF order parameter in its vicinity. In this respect it is like a spin-polaron or spin-bag,[@schrieffer] although “string states” likely influence its dispersion and shape.[@review] The quasiparticle (hole plus spin distortion) movement is regulated by the exchange $J$, rather than $t$. Using the center of the quasiparticle peaks of Fig. 1 as an indication of the actual quasiparticle pole position, we obtain a bandwidth $W$ of about $2$ to $3t$ or, equivalently, $4$ to $6J$ using ${ J=4t^2/U}$ for $U=8$. However, due to the low resolution of the ME procedure, reflected in part in the large width of the peaks of Fig. 1, it is difficult to show more convincingly within QMC/ME that the quasiparticle bandwidth is indeed dominated by $J$. Note that moving from $(0,0,0)$ to $(\pi,\pi,\pi)$ along the main diagonal of the Brillouin Zone (BZ), the PES part of the spectrum (i.e. the weight at $\omega <0$) loses intensity. There is a clear transfer of weight from PES at small $|{\bf p}|$ to IPES at large $|{\bf p}|$, as observed in 2D simulations.[@bulut1] In addition, note that there is PES weight above the (naive) Fermi momentum of this half-filled system. For example, at ${\bf p} = (0,\pi,\pi)$, spectral weight at $\omega <0$ can be clearly observed. Similarly, at ${\bf p} = (0,0,\pi)$ weight in the IPES region is found. 
This effective doubling of the size of the unit cell in all three directions is a consequence of the presence of AF long range order. The hole energy at ${\bf p}=(0,0,0)$ and $(\pi,\pi,\pi)$ becomes degenerate in the bulk limit and the quasiparticle band, for example along the main diagonal of the BZ, has a reflection symmetry with respect to $(\pi/2,\pi/2,\pi/2)$, as observed in our results (Fig. 1). However, note that the actual intensity of the AF-induced PES weight close to $(\pi,\pi,\pi)$ is a function of the coupling. As $U/t \rightarrow 0$, the intensity of the AF-induced region is also reduced to zero. The presence of this AF-generated feature has recently received attention in the context of the 2D high-Tc cuprates.[@schrieffer; @shadow] While its presence in PES experiments at optimal doping is still under discussion, these features clearly appear in PES experimental studies of half-filled insulators, like ${\rm Sr_2 Cu O_2 Cl_2}$.[@wells] Thus, while this behavior has been primarily discussed in the context of 2D systems, angle-resolved PES (ARPES) studies of 3D insulators like ${\rm LaTiO_3}$ might also show such features. In Fig. 2 we show the density of states (DOS) $N(\omega)$ of the $4^3$ lattice. The two features described before, namely quasiparticle and incoherent background, in both PES and IPES are clearly visible. Also shown in Fig. 2 is the temperature effect on $N(\omega)$ which is weak for the given temperatures ($\beta=10$ and 4). The basic features are still retained, only the gap is slightly reduced and the quasiparticle peak less pronounced at the higher temperature. Results corresponding to a larger coupling, $U=12$, at $\beta=4$ are shown in Fig. 3. The gap increases and the quasiparticle band becomes sharper as $U/t$ grows, as expected if its bandwidth is regulated by $J$. $A({\bf p},\omega)$ at $U=12$, $\beta=4$ are similar to the results shown in Fig. 1. A characteristic double peak like that seen in Figs.
2,3 has been observed in the X-ray absorption spectrum of ${\rm LaFeO_3}$ [@sarma], which is a strongly correlated, wide-gapped antiferromagnetic insulator. The peaks appear at energies of about 2.2 eV and 3.8 eV. When Fe is substituted by Ni, this structure vanishes and the system becomes a paramagnetic metal at a Ni concentration of about 80%. If we choose for comparison with the calculated DOS (Fig. 3) a hopping amplitude of $t=0.5$ eV, giving a reasonable d-bandwidth of $W=6$ eV, the positions of the quasiparticle peak and the maximum of the incoherent band for $U=12t$ are at about $\omega_1\approx 4t=2$ eV and $\omega_2\approx 8.4t=4.2$ eV, respectively. The agreement with the experimental values is fairly good considering the crude simplifications of the Hubbard model such as neglecting orbital degeneracy and charge transfer effects. Even the estimated charge gap of Fig. 3, defined by the onset of spectral weight relative to the Fermi energy, $\Delta_{charge}\approx 3t=1.5$ eV, is not too far from the experimental value of $\sim 1.1$ eV. The ratio $r=\omega_2/\omega_1$ decreases with $U$ since for $U\gg t$ both energies are expected to converge to $U/2$. While for $U=12t$, $r\approx 2.1$ is comparable to the experimental value ($\sim 1.7$), it is too large for $U=8$ ($r \approx 2.9 $), showing that under the assumption of a single band Hubbard model description for ${\rm LaFeO_3}$, the effective on-site interaction is at least of the size of the d-bandwidth. Another feature which has been attributed to antiferromagnetic ordering was found in a high resolution PES study of $\rm{V_2O_3}$. [@shin; @rozen2] In the AF insulator at $T=100K$ the spectrum shows a shoulder at $\omega_1=-0.8$ eV which is absent in the paramagnetic metal at $T=200K$. This shoulder might be reminiscent of the quasiparticle peak.
The maximum of the lower Hubbard band is at about $\omega_2 \approx -1.3$ eV, giving a ratio $\omega_2/\omega_1\approx 1.6$ similar to that observed in ${\rm LaFeO_3}$. The on-site interaction was estimated to be about 1.5 times the bandwidth.[@shin] It is interesting to compare the results obtained in our simulations with those found in the $D=\infty$ limit of the Hubbard model. At half-filling for arbitrary coupling strength, the $D=\infty$ model has an AF insulating ground state. Its DOS is shown in Fig. 4, using the same coupling and temperature as in the 3D simulation. $N(\omega)$ for both cases are similar, and they are also similar to results found before in 2D, suggesting that the physics of holes in an antiferromagnetic system is qualitatively the same irrespective of whether a 2D, 3D or ${\rm \infty}$D lattice is used, at least within the accuracy of present QMC/ME simulations. SDW mean-field and Born approximation ------------------------------------- Since the data shown in the previous subsection correspond to holes in a system with AF long-range order, it is natural to compare our results against those found in mean-field approximations to the half-filled 3D Hubbard model that incorporate magnetic order in the ground state. The “spin-density-wave” mean-field approximation has been extensively used in the context of the 2D Hubbard model,[@sdw] and here we will apply it to our 3D problem. For a lattice of $N$ sites, the self-consistent equation for the gap $\Delta$ is $$1={{U}\over{2N}} \sum_{\bf p} {{1}\over{E_{\bf p}}}, \label{sdw}$$ where $E_{\bf p} = \sqrt{ \epsilon_{\bf p}^2 + \Delta^2 }$ is the quasiparticle energy, and $\epsilon_{\bf p} = -2t (\cos p_x + \cos p_y + \cos p_z)$ is the bare electron dispersion. The resulting quasiparticle dispersion is shown in Fig. 5 compared against the results of the QMC/ME simulation. The overall agreement is good if the coupling $U$ in the gap equation Eq.(\[sdw\]) is tuned to a value $U \sim 5.6$.
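The gap equation (\[sdw\]) is easily solved numerically: its right-hand side decreases monotonically in $\Delta$, diverges as $\Delta\to 0$ on any momentum grid containing $\epsilon_{\bf p}=0$ points, and vanishes as $\Delta\to\infty$, so simple bisection applies. The grid size and bracket below are illustrative choices of ours, not from the paper.

```python
import math

def gap_rhs(delta, U, t, L):
    """Right-hand side of the SDW gap equation, U/(2N) * sum_p 1/E_p,
    on an L^3 grid of momenta p = 2*pi*n/L with E_p = sqrt(eps_p^2 + delta^2)."""
    total = 0.0
    for nx in range(L):
        for ny in range(L):
            for nz in range(L):
                eps = -2.0 * t * (math.cos(2 * math.pi * nx / L)
                                  + math.cos(2 * math.pi * ny / L)
                                  + math.cos(2 * math.pi * nz / L))
                total += 1.0 / math.sqrt(eps * eps + delta * delta)
    return U * total / (2.0 * L ** 3)

def solve_gap(U, t=1.0, L=8, tol=1e-10):
    """Bisection for the self-consistent gap: gap_rhs(delta) crosses 1
    exactly once between a tiny and a large delta."""
    lo, hi = 1e-8, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap_rhs(mid, U, t, L) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

delta = solve_gap(U=8.0)
```

For $U=8$, $t=1$ this converges to a self-consistent $\Delta$ of order a few $t$, consistent with the remark that the SDW MF gap overestimates the QMC result unless $U$ is renormalized downward.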
It is reasonable that a reduced $U$ should be required for such a fit, since the SDW MF gap is usually larger than the more accurate QMC result. Similar renormalizations of $U$ in comparing QMC and approximate analytic work have been discussed in the context of fitting the magnetic response,[@renormu] and have also been explicitly calculated.[@vandongen] Fig. 5 shows many of the features observed in the numerical simulation, namely a hole dispersion which is maximized at $(\pi/2,\pi/2,\pi/2)$ for the momenta shown there, an overall bandwidth smaller than the noninteracting one, and the presence of AF-induced features in the dispersion above the naive Fermi momentum. Thus the SDW MF approach qualitatively captures the correct hole quasiparticle bandwidth $J$ at half-filling. However, a spurious degeneracy appears in the hole dispersion in this approximation: momenta satisfying $\cos p_x + \cos p_y + \cos p_z =0$ have the same energy. This degeneracy is not required by symmetry and is an artifact of the SDW approach. In addition, $A({\bf p},\omega)$ in the SDW approximation has only one peak in the PES region for each value of the momentum, missing entirely the incoherent part. While it may not be necessary to fix this problem in this case, it is important in general to be able to go beyond SDW MF. To do this, the self-consistent Born approximation[@born] (SCBA) for one hole in the 3D $t-J$ model, which corresponds to the strong coupling limit of the Hubbard model, can be used. This technique accurately reproduces exact diagonalization results in the 2D case.[@born] Actually, the dispersion of a dressed hole in an antiferromagnet within the SCBA for a bilayer system, and also for a 3D cubic lattice, has been recently studied.[@sasha] Here, for completeness, we reproduce some of the results of Ref.[@sasha], and compare them against those of the 3D Hubbard model obtained with the SDW MF approximation and QMC calculations (Fig. 6). 
The comparison is carried out at $J/t \sim 0.3$, which corresponds to $U/t \sim 13$. The maximum of the dispersion in the valence band using the SCBA now lies at $({\pi/2, \pi/2, \pi/2})$, removing the spurious SDW MF degeneracy. On the scale of Fig. 6 the splitting between this momentum and $(\pi,\pi/2,0)$ is difficult to resolve, since it corresponds to about 100 K. Note that the bandwidth predicted by the SDW MF technique is approximately a factor of two larger than the more accurate prediction of the SCBA. However, for this larger value of $U$ it does not appear possible to fit simultaneously the SDW MF bandwidth and band gap to the results of QMC by the same renormalization of $U$, something which can be done successfully at weaker coupling, $U=8$. The QMC points at this intermediate coupling value, where $U$ equals the bandwidth, lie in between the SDW MF and SCBA. Though the uncertainties in the QMC results are rather large, we expect the agreement between SCBA and QMC results to improve as the coupling increases. The best fit of the SCBA data[@sasha] is $\epsilon({\bf p})\,=\,c\,+\,0.082(\cos p_x\cos p_y+ \cos p_y\cos p_z+\cos p_x\cos p_z)\, +\,0.022(\cos 2p_x+\cos 2p_y+\cos 2p_z)$ (eV), if $J=0.125$ eV and $t=0.4$ eV are used. The constant $c$ is defined by the SDW MF gap (Fig. 6). As in the case of the 2D problem, holes tend to move within the same sublattice to avoid distorting the AF background.[@review] Working at small $J/t$, the bandwidth of the 3D $t-J$ hole quasiparticle was found to scale as $J$,[@sasha] as occurs in two dimensions. Finite Hole Density =================== D=3 --- We can also use the QMC approach to study the 3D Hubbard model away from half-filling for temperatures down to about 1/30 of the bandwidth, a value for which $T\sim J$ for the present strong coupling values. First, we study the influence of doping and temperature on the spin-spin correlation function $C({\bf l})$. 
At half-filling $C({\bf l})$ shows strong antiferromagnetic correlations over the whole $4^3$-lattice at $\beta=10$ (Fig. 7). At $\beta=2$ the correlations are significantly weakened, and with additional doping ($\langle n \rangle =0.88$) all correlations are suppressed besides those between nearest neighbors. These appear to be stable against doping. The density of local moments, $\sqrt{C(0)}$, reaches its low-temperature limit at an energy scale set by $U$ and hence is unaffected by the change of $\beta$ from $\beta=2$ to $\beta=10$ (note that longer-range spin correlations form at a temperature set by the much smaller energy scale $J$). $\sqrt{C(0)}$ is, to first order, proportional to the electronic density and hence slightly reduced at $\langle n \rangle =0.88$. There has been considerable discussion concerning the relationship between the spin-spin correlations and the presence of a gap in the density of states. In particular, it was observed[@WHITE] that if $N(\omega)$ is evaluated on lattices of increasing size at fixed temperature, a well-formed gap appearing on small lattices disappears when the spatial extent exceeds the spin-spin correlation length. Decreasing the temperature (and hence increasing the range of the spin correlations) allows the gap to reform. Similar effects are seen here in 3D. Fig. 8a shows the density of states on a $4^3$ lattice at several densities, $U=8$ and $\beta=2$. At this temperature the charge gap is not fully developed, and the quasiparticle peaks cannot be resolved. The result with doping is similar to that reported on 2D lattices.[@bulut2] The chemical potential $\mu$ moves to the top of the valence band as the density is reduced from half-filling. A large peak is generated which increases in intensity as $\langle n \rangle$ is further reduced. The weight of the upper part of the spectrum (reminiscent of the UHB) decreases with doping due to the reduced effective interaction. Similar results are shown in Fig. 
8b but for a $6^3$ lattice. There is not much difference between the two lattices, showing that within the resolution of the ME procedure finite-size effects are small. The large peak that appears in Fig. 8a-b at finite hole density is crossed by $\mu$ as the density is reduced. At $\langle n \rangle = 0.94$, the peak is located to the left of $\mu$, at $\langle n \rangle = 0.88$ it has reached the chemical potential, and at $\langle n \rangle = 0.72$, the peak has moved to the right. This is in agreement with the behavior observed in both 2D QMC and ED simulations,[@dos] and it may be of relevance for estimations of superconducting critical temperatures if a source of carrier attraction is identified.[@dos1] The results of the previous section at half-filling obtained at low temperatures $(T\sim 1/10)$ revealed a sharp quasiparticle peak in the DOS at the top of the valence band and bottom of the conduction band. Numerical studies of 2D lattices have shown that the peak intensity at $T=0$ is the $largest$ at half-filling.[@dos] Away from half-filling, the peak is still visible but it is broader than at $\langle n \rangle =1$.[@dos] Thus there is no evidence that the sharp peak in the DOS of the doped system has been generated dynamically and represents a “Kondo-resonance” induced by doping, as has sometimes been suggested,[@jarrell] and as Fig. 8, obtained at the relatively high temperature $\beta=2$, might seem to imply. Another important quantity to study is the quasiparticle residue $Z$. The SCBA results show that $Z$ is small but finite for the case of one hole in an antiferromagnetic insulator state, and actually the results are very similar in 3D and 2D systems.[@born; @sasha] Numerical results provide a similar picture.[@review] On the other hand, $Z$ vanishes in the $D=\infty$ approach working in the paramagnetic state as the doping $\delta$ tends to zero. Note that in this state there are no AF correlations ($\xi_{AF} =0$). 
Thus, it is clear that the hole quasiparticle at half-filling observed in the 2D and 3D systems is $not$ related with the quasiparticle-like feature observed in the PM state at $D=\infty$. In Fig. 9, we show $A({\bf p},\omega)$ obtained on the $4^3$ lattice, $U=8$, $T=1/2$ and various densities away from half-filling. The gap is now absent. From the energy location of the maximum of the dominant peak in Fig. 9a-c, the quasiparticle dispersion can be obtained. The results are shown in Fig. 10. It is remarkable that the quasiparticle dispersion resembles that of a noninteracting system, i.e., $\epsilon_{\bf p} = -2t^\star (\cos p_x + \cos p_y + \cos p_z)$, with a scale increasing from $t^\star \sim t/4$ to $t/3$ with doping. This dispersion certainly does not exhaust all the spectral weight; a large incoherent part still remains at this coupling, density and temperature. Similar results were observed in 2D.[@bulut; @moreo; @dos; @ortolani] Only vestiges remain of the AF-induced weight in PES near $(\pi,\pi,\pi)$. However, this drastic reduction of the AF-induced intensity may be caused by the high temperature of the simulation, as observed in the spin-spin correlation function (Fig. 7). $D=\infty$ ----------- The previous subsection and the results at half-filling have shown that the DOS of the 3D Hubbard model has a large peak at the top of the valence band. The peak is crossed by the chemical potential as $\langle n \rangle $ decreases. This behavior is in apparent contradiction with results reported at $D=\infty$, where a peak is generated upon doping if the “paramagnetic” solution to the mean-field problem is selected. At $D=\infty$, there are only two very distinct magnetic ground states. One has AF long-range order, and the other is a paramagnet with strictly $zero$ AF correlation length, i.e., without short-range antiferromagnetic fluctuations. 
Thus, at $D=\infty$ the transition is abrupt from a regime with $\xi_{AF} = \infty$ to $\xi_{AF} = 0$. This does not occur in finite dimensions, where AF correlations build up smoothly before the long-range-order regime is reached. This qualitative difference is depicted in Fig. 11. A $\xi_{AF}$ as small as a couple of lattice spacings can be robust enough to induce important changes in the carrier dispersion, and may even be enough to induce superconductivity, as many theories for the 2D high-Tc cuprates conjecture. We believe that the absence of a regime of intermediate-size AF correlations at large D is the key ingredient that explains the differences reported here between D=2,3 and ${\rm D=\infty}$. In Fig. 12a, the $D=\infty$ DOS in the AF phase is shown at $\langle n \rangle =1$ and 0.94. For these densities the AF phase is energetically stable. We observe the tendency of the large peak at the bottom of the valence band to move towards the chemical potential, in good agreement with the 3D Quantum Monte Carlo simulations. As found in 2D, the intensity of the peak decreases as we move away from half-filling if the temperature is low enough. In Fig. 12b, the DOS in the $D=\infty$ limit working in the paramagnetic phase is shown at several densities. For the present interaction, $U=8$, the paramagnetic solution remains metallic at all temperatures, even at half-filling.[@jarrell] The results are qualitatively different from those observed in the AF regime. At $\langle n \rangle =1$ a large peak at the chemical potential is clearly visible. Upon hole doping this peak gradually moves toward higher energies. At sufficiently strong doping the DOS of the PM phase (Fig. 12b) resembles the results for the $3D$ lattices (Fig. 8), which is not surprising since AF correlations in $3D$ are strongly suppressed at the present temperature. 
Close to half-filling, however, the $3D$ results are closer to the DOS of the AF phase, where a strong peak is observed on the left-hand side of the chemical potential $\mu$. This result is gratifying since the proper way to compare $D=3$ and $\infty$ results is by using the actual ground states in each dimension. In $D=\infty$, at low temperatures, the crossing of the peak by $\mu$ is expected at the point where the AF phase becomes unstable against doping. **Conclusions** =============== In this paper we have calculated the single-particle properties of the 3D single-band Hubbard model using Quantum Monte Carlo and the SDW mean-field and SCBA approximations. Our results have many similarities with those reported previously in 2D systems. At half-filling, peaks at the top of the valence band and bottom of the conduction band are observed in the DOS. Their behavior is associated with spin polarons with a bandwidth of order the exchange $J$. We found similarities to, and semi-quantitative agreement with, experimentally observed features in the spectra of strongly correlated 3D AF insulators, $\rm{LaFeO_3}$ and $V_2O_3$. As we dope the system, the sharp peak associated with these quasiparticles is crossed by the chemical potential as the density $\langle n \rangle $ changes. The PES weight observed away from half-filling is already present at half-filling. No new states are generated by doping. This result must be contrasted with that observed experimentally in, e.g., ${\rm Y_{1-x} Ca_x Ti O_3}$ using angle-integrated PES. In this case, spectral weight which is not present in the insulator appears at $E_F$ in the metallic regime as we dope the system. This behavior does not seem to be reproduced by the single-band Hubbard model Eq.(1) in 3D, whose physics appears to be very close to that of 2D. 
Indeed, for the 2D cuprates it has been shown experimentally that the states found at $E_F$ upon doping are already present at half-filling.[@fujimori3] An exception among 3D materials is $\rm{NiS_{2-x}Se_x}$, which remains antiferromagnetic throughout the metal-insulator transition induced by (homovalent) Se substitution or temperature. PES spectra at $x=0.5$ for different temperatures [@matsuura] show a strong peak close to the Fermi energy which does $not$ disappear in the insulator. Instead, the peak is shifted off the Fermi energy and only very slightly reduced in weight. Since this situation is not described within the paramagnetic $D=\infty$ approach, AF correlations are presumably essential for the low-energy electronic excitations of this system. The success of the $D=\infty$ approach to the Hubbard model in describing the physics of ${\rm Y_{1-x} Ca_x Ti O_3}$, ${\rm SrVO_3}$ and ${\rm CaVO_3}$, however, appears to depend crucially upon forcing the paramagnetic solution of the equations.[@rozen1] In this case, states are actually $generated$ in the Hubbard gap after a small hole doping is introduced. Of course, it may be that the “arbitrary” choice of this paramagnetic solution, which is not the actual minimum of the free energy, is well motivated, since it mimics the presence of physical effects like frustration which destroy long-range order in real materials. More work is needed to show that this scenario is realized for realistic densities and couplings. An alternative explanation for the discrepancy between the PM solution in infinite-D and finite-dimensional results may lie in the finite resolution of the combination of Monte Carlo simulations and Maximum Entropy techniques. However, the SCBA and results at half-filling and low T show that it is likely that at $\langle n \rangle =1$ we have quasiparticle states in the DOS. 
In studies of the single-band Hubbard Hamiltonian in 2D, and in the present analysis in 3D, it is clear that short-range AF correlations play an important role close to $\langle n \rangle =1$. In particular, the states created at the top of the valence band are likely to be spin polarons with a finite quasiparticle residue $Z$. PES states observed at finite hole doping evolve continuously from those present at half-filling. Experiments on the 2D high-Tc cuprates seem to present similar features, while the results for the 3D perovskites are very different, in the sense that the coherent weight observed away from half-filling has no reported remnant at half-filling. Still, strong AF correlations are apparently present in several 3D transition-metal oxides and influence the low-energy spectrum at least on the insulating side of the transition. The introduction of frustration in the single-band Hubbard model in 3D, perhaps through next-nearest-neighbor hoppings, will reduce AF correlations, and in particular the AF-induced charge gap, and might be sufficient to produce an evolution of spectral weight upon doping closer to the experimental findings. However, it might be that 3D models which explicitly include orbital degeneracy will be necessary to reproduce the physics of the transition-metal oxides, as has recently been described for NiO chains[@nio] and Mn oxides.[@muller] Indeed, a recent argument presented by Kajueter et al.[@kajueter] to justify the use of the $D=\infty$ model provides a more realistic explanation for the apparent link between theory in this limit and 3D transition-metal oxide results. The idea is that the physics of the real perovskite 3D oxides is influenced by the orbital degeneracy. Presumably this effect leads to a drastic reduction of the antiferromagnetic correlations that dominate the physics of the 2D and 3D single-band systems. 
Multiple orbitals, together with Hund’s coupling, produce an effective magnetic frustration that may reduce the AF correlation length to a negligible value even close to the AF insulator at half-filling. Such a frustration effect could be strong enough to generate a finite critical coupling $U/t$ at half-filling. Acknowledgments =============== We thank A. Fujimori, M. Rozenberg and A. Moreo for useful discussions. We are grateful to A. Sandvik for providing his Maximum Entropy program. Most simulations were done on a cluster of HP-715 workstations at the Electrical and Computer Engineering Department at UC Davis. We thank P. Hirose and K. Runge for technical assistance. E. D. is supported by grant NSF-DMR-9520776. R. T. S. is supported by grant NSF-DMR-9528535. M. U. is supported by a grant from the Office of Naval Research, ONR N00014-93-1-0495 and by the Deutsche Forschungsgemeinschaft. We thank the National High Magnetic Field Laboratory (NHMFL) and the Center for Materials Research and Technology (MARTECH) for additional support. New address: Theoretische Physik III, Universität Augsburg, D–86135 Augsburg, Germany. Electronic address: ulmke@physik.uni-augsburg.de M.C. Gutzwiller, Phys. Rev. Lett. [**10**]{}, 159 (1963); J. Hubbard, Proc. Roy. Soc. [**A 276**]{}, 238 (1963); J. Kanamori, Prog. Theor. Phys. [**30**]{}, 275 (1963). E. Dagotto, Rev. Mod. Phys. [**66**]{}, 763 (1994), and references cited therein. D.S. Dessau et al., Phys. Rev. Lett. [**71**]{}, 2781 (1993); K. Gofron et al., Phys. Rev. Lett. [**73**]{}, 3302 (1994). E. Dagotto, A. Nazarenko and M. Boninsegni, Phys. Rev. Lett. [**73**]{}, 728 (1994). N. Bulut, D.J. Scalapino and S.R. White, Phys. Rev. [**B50**]{}, 7215 (1994). R. Preuss, W. Hanke, and W. von der Linden, Phys. Rev. Lett. [**75**]{}, 1344 (1995). M. Langer, J. Schmalian, S. Grabowski, and K.H. Bennemann, Phys. Rev. Lett. [**75**]{}, 4508 (1995). A. Fujimori, I. Hase, H. Namatame, Y. Fujishima, Y. Tokura, H. Eisaki, S. Uchida, K. 
Takegahara, and F. M. F. de Groot, Phys. Rev. Lett. [**69**]{}, 1796 (1992). I. H. Inoue, I. Hase, Y. Aiura, A. Fujimori, Y. Haruyama, T. Maruyama, and Y. Nishihara, Phys. Rev. Lett. [**74**]{}, 2539 (1995). K. Morikawa, T. Mizokawa, K. Kobayashi, A. Fujimori, H. Eisaki, S. Uchida, F. Iga, and Y. Nishihara, Phys. Rev. [**B 52**]{}, 13711 (1995). M. Vekic and S.R. White, Phys. Rev. [**B47**]{}, 1160 (1993); G.S. Feng and S.R. White, Phys. Rev. [**B46**]{}, 8691 (1992); and M. Vekic and S.R. White, Phys. Rev. [**B47**]{}, 5678 (1992). M. Jarrell and T. Pruschke, Z. Phys. [**B 90**]{}, 187 (1993). A. Fujimori, I. Hase, M. Nakamura, H. Namatame, Y. Fujishima, Y. Tokura, M. Abbate, F. M. F. de Groot, J. C. Fuggle, O. Strebel, M. Doce, and G. Kaindl, Phys. Rev. [**B 46**]{}, 9841 (1992). Y. Tokura, Y. Taguchi, Y. Okada, Y. Fujishima, T. Arima, K. Kumagai, and Y. Iye, Phys. Rev. Lett. [**70**]{}, 2126 (1993). W. Metzner and D. Vollhardt, Phys. Rev. Lett. [**62**]{}, 324 (1989); for a review see D. Vollhardt, in [*Correlated Electron Systems*]{}, ed. V. J. Emery (World Scientific, Singapore, 1993), p. 57. For a review see A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, Rev. Mod. Phys. [**68**]{}, 13 (1996). For a review see Th. Pruschke, M. Jarrell, and J. K. Freericks, Adv. Phys. [**44**]{}, 187 (1995). R. Blankenbecler, D. J. Scalapino, R. L. Sugar, Phys. Rev. [**D 24**]{}, 2278 (1981). S. R. White, D. J. Scalapino, R. L. Sugar, E. Y. Loh, J. E. Gubernatis, R. T. Scalettar, Phys. Rev. [**B 40**]{}, 506 (1989). J. E. Hirsch, Phys. Rev. [**B 28**]{}, 4059 (1983). J. E. Hirsch, Phys. Rev. [**B 35**]{}, 1851 (1987). R. T. Scalettar, D. J. Scalapino, R. L. Sugar and D. Toussaint, Phys. Rev. [**B 39**]{}, 4711 (1989). For a review see M. Jarrell and J. E. Gubernatis, Phys. Rep. [**269**]{}, 133 (1996). We choose a half-elliptic, non-interacting density of states which becomes exact for the Bethe lattice with infinite connectivity. 
Like the real $3D$ DOS, and in contrast to the hypercubic lattice in $D=\infty$, it has a finite bandwidth $(W=4t^*)$ and algebraic band edges. J. E. Hirsch and R. M. Fye, Phys. Rev. Lett. [**56**]{}, 2521 (1986). M. Rozenberg, G. Kotliar, and H. Kajueter, preprint. A. Moreo et al., Phys. Rev. [**B 51**]{}, 12045 (1995). A. Kampf and J. R. Schrieffer, Phys. Rev. [**B 41**]{}, 6399 (1990). N. Bulut, D. J. Scalapino, and S. R. White, Phys. Rev. Lett. [**73**]{}, 748 (1994). P. Aebi et al., Phys. Rev. Lett. [**72**]{}, 2757 (1994); S. Haas et al., Phys. Rev. Lett. [**74**]{}, 4281 (1995). B.O. Wells, et al., Phys. Rev. Lett. [**74**]{}, 964 (1995). D. D. Sarma et al., Phys. Rev. [**B 49**]{}, 14238 (1994). S. Shin et al., J. Phys. Soc. Jpn. [**64**]{}, 1230 (1994). M.J. Rozenberg, G. Kotliar, H. Kajueter, G.A. Thomas, D.H. Rapkine, J.M. Honig, and P. Metcalf, Phys. Rev. Lett. [**75**]{}, 105 (1995). J. R. Schrieffer, X. G. Wen, and S. C. Zhang, Phys. Rev. [**B 39**]{}, 11663 (1989). N. Bulut, D.J. Scalapino, and S.R. White, Phys. Rev. [**B47**]{}, 14599 (1993). P. van Dongen, Phys. Rev. Lett. [**67**]{}, 757 (1991). The self-consistent perturbation theory around the SDW MF solution provides a renormalization factor about $q=0.29$ in the limit of small $U$ in $D=3$ and $D=\infty$. Numerically it was found that $q$ increases with $U$, see e.g. [@jarrell]. S. Schmitt-Rink, C.M. Varma, and A.E. Ruckenstein, Phys. Rev. Lett. [**60**]{}, 2793 (1988); F. Marsiglio, A.E. Ruckenstein, S. Schmitt-Rink, and C.M. Varma, Phys. Rev. [**B43**]{}, 10882 (1991); G. Martinez and P. Horsch, Phys. Rev. [**B44**]{}, 317 (1991); Z. Liu and E. Manousakis, Phys. Rev. [**B45**]{}, 2425 (1992). A. Nazarenko and E. Dagotto, preprint. N. Bulut, D.J. Scalapino and S.R. White, Phys. Rev. Lett. [**72**]{}, 705 (1994). A. Nazarenko, S. Haas, J. Riera, A. Moreo, and E. Dagotto, preprint. 
Recently proposed theories of high-Tc make extensive use of a large accumulation of weight in the DOS near the chemical potential to enhance the critical temperature \[E. Dagotto, A. Nazarenko and A. Moreo, Phys. Rev. Lett. [**74**]{}, 310 (1995)\]. E. Dagotto, F. Ortolani and D. Scalapino, Phys. Rev. [**B 46**]{}, 3183 (1992). A. Fujimori et al., Phys. Rev. [**B 39**]{}, 2255 (1989); Phys. Rev. [**B 40**]{}, 7303 (1990). A. Y. Matsuura, Z.-X. Shen, D. S. Dessau, C.-H. Park, T. Thio, J. W. Bennett, O. Jepson, Phys. Rev. [**B 53**]{}, R7584 (1996). M. J. Rozenberg, I. H. Inoue, H. Makino, F. Iga, Y. E. Dagotto, J. Riera, A. Sandvik, and A. Moreo, Phys. Rev. Lett. [**76**]{}, 1731 (1996). E. Müller-Hartmann and E. Dagotto, preprint. H. Kajueter, G. Kotliar, and G. Moeller, preprint Rutgers Univ. (1996).
--- abstract: 'In this paper, we study the multifractal Hausdorff and packing dimensions of Borel probability measures and study their behavior under orthogonal projections. In particular, we try through these results to improve the main result of M. Dai in [@D] about the multifractal analysis of a measure of multifractal exact dimension.' address: - 'bilel.selmi@fsm.rnu.tn' - | Faculty of Sciences of Monastir\ Department of Mathematics\ 5000-Monastir\ Tunisia author: - Bilel SELMI date: 'May 20, 2011' title: Multifractal dimensions for projections of measures --- Multifractal analysis, Dimensions of measures, Projection. Introduction ============ The notion of dimension is an important tool in the classification of subsets of $\mathbb{R}^n$. The Hausdorff and packing dimensions appear as some of the most common examples in the literature. The determination of a set's dimension is naturally connected to the auxiliary Borel measures supported by the set. Moreover, the estimation of a set's dimension is naturally related to the dimension of a probability measure $\nu$ in $\mathbb{R}^n$. In this way, thinking in particular of sets of measure zero or one leads to the respective definitions of the lower and upper Hausdorff dimensions of $\nu$ as follows $$\underline{\dim}(\nu)=\inf\Big\{\dim(E);\; E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\dim}(\nu)=\inf\Big\{\dim(E);\; E \subseteq \mathbb{R}^n\; \text{and}\;\nu(E)=1\Big\},$$ where $\dim(E)$ denotes the Hausdorff dimension of $E$ (see [@F]). If $\underline{\dim}(\nu)= \overline{\dim}(\nu)$, this common value is denoted by ${\dim}(\nu)$. In this case, we say that $\nu$ is unidimensional. 
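For orientation, two elementary (and standard) examples of these definitions, not taken from the references above:

```latex
% Lebesgue measure restricted to the unit cube is unidimensional of
% dimension n: any E with positive Lebesgue measure has dim(E) = n.
\nu=\mathcal{L}^n|_{[0,1]^n}:\quad
  \nu(E)>0\ \Rightarrow\ \dim(E)=n,
  \ \text{so}\ \underline{\dim}(\nu)=\overline{\dim}(\nu)=n.
% A Dirac mass is unidimensional of dimension 0, via E = {x_0}.
\nu=\delta_{x_0}:\quad
  \nu(\{x_0\})=1,\ \dim(\{x_0\})=0,
  \ \text{so}\ \dim(\nu)=0.
```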
Similarly, we define respectively the lower and upper packing dimensions of $\nu$ by $$\underline{\operatorname{Dim}}(\nu)=\inf\Big\{\operatorname{Dim}(E);\;E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\operatorname{Dim}}(\nu)=\inf\Big\{\operatorname{Dim}(E);\;E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)=1\Big\},$$ where $\operatorname{Dim}(E)$ is the packing dimension of $E$ (see [@F]). Also, if the equality $\underline{\operatorname{Dim}}(\nu)= \overline{\operatorname{Dim}}(\nu)$ is satisfied, we denote by ${\operatorname{Dim}}(\nu)$ their common value. The lower and upper Hausdorff dimensions of $\nu$ were studied by A.H. Fan in [@FF; @FF1]. They are related to the Hausdorff dimension of the support of $\nu$. A similar approach, concerning the packing dimensions, was developed by Tamashiro in [@T]. There are numerous works in which estimates of the dimension of a given measure are obtained [@NBC; @B; @D; @F; @FLR; @H1; @H2; @H; @L; @BBSS]. When $\overline{\dim}(\nu)$ is small (resp. $\underline{\dim}(\nu)$ is large), it means that $\nu$ is singular (resp. regular) with respect to the Hausdorff measure. Similar considerations apply to the upper and lower packing dimensions.\ Note that, in many works (see for example [@F; @H1; @H2; @H]), the quantities $\underline{\dim}(\nu)$, $\overline{\dim}(\nu)$, $\underline{\operatorname{Dim}}(\nu)$ and $\overline{\operatorname{Dim}}(\nu)$ are related to the asymptotic behavior of the function $\alpha_\nu(x,r)= \frac{\log\nu(B(x,r))}{\log r}$. One of the main problems in multifractal analysis is to understand the multifractal spectrum, the Rényi dimensions and their relationship with each other. 
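The relation to $\alpha_\nu(x,r)$ mentioned above is commonly stated as follows (a standard formulation, recorded here for the reader's convenience; see the works cited above for precise hypotheses):

```latex
\underline{\dim}(\nu)
  =\operatorname*{ess\,inf}_{x\sim\nu}\,\liminf_{r\to0}\alpha_\nu(x,r),
\qquad
\overline{\dim}(\nu)
  =\operatorname*{ess\,sup}_{x\sim\nu}\,\liminf_{r\to0}\alpha_\nu(x,r),
\\[2mm]
\underline{\operatorname{Dim}}(\nu)
  =\operatorname*{ess\,inf}_{x\sim\nu}\,\limsup_{r\to0}\alpha_\nu(x,r),
\qquad
\overline{\operatorname{Dim}}(\nu)
  =\operatorname*{ess\,sup}_{x\sim\nu}\,\limsup_{r\to0}\alpha_\nu(x,r).
```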
During the past 20 years, there has been enormous interest in computing the multifractal spectra of measures in the mathematical literature, and within the last 15 years the multifractal spectra of various classes of measures in Euclidean space $\mathbb{R}^n$ exhibiting some degree of self-similarity have been computed rigorously (see [@F; @LO; @Pe] and the references therein). In an attempt to develop a general theoretical framework for studying the multifractal structure of arbitrary measures, Olsen [@LO] and Pesin [@Pes] suggested various ways of defining an auxiliary measure in very general settings. For more details and background on multifractal analysis and its applications, the reader is also referred to the following essential references [@NB; @NBC; @BJ; @BB; @BBH; @BenM; @CO; @DB; @BD1; @FM1; @MMB; @MMB1; @LO; @O2; @O1; @Ol; @SH1; @SELMI1; @SELMI; @W; @W1; @W3; @W4]. In this paper, we give a multifractal generalization of the results about the Hausdorff and packing dimensions of measures. We first estimate the multifractal Hausdorff and packing dimensions of a Borel probability measure. We try through these results to improve the main result of M. Dai in [@D Theorem A] about the multifractal analysis of a measure of exact multifractal dimension. We especially rely on the multifractal formalism developed by Olsen in [@LO]. Then, we investigate a relationship between the multifractal dimensions of a measure $\nu$ and those of its projections onto a lower-dimensional linear subspace. Preliminaries ============= We start by recalling the multifractal formalism introduced by Olsen in [@LO]. This formalism was motivated by Olsen’s wish to provide a general mathematical setting for the ideas present in the physics literature on multifractals. 
Let $E\subset \mathbb{R}^n$ and $\delta>0$. We say that a collection of balls $\big(B(x_i, r_i)\big)_i$ is a centered $\delta$-packing of $E$ if $$\forall i,\; 0<r_i<\delta,\quad x_i\in E,\; \text{and} \quad B(x_i,r_i)\cap B(x_j, r_j)=\emptyset,\quad{\forall}\; i\neq j.$$ Similarly, we say that $\big(B(x_i, r_i)\big)_i$ is a centered $\delta$-covering of $E$ if $$\forall i,\; 0<r_i<\delta, \quad x_i\in E, \quad \text{and} \qquad E\subset \bigcup_i \; B(x_i, r_i).$$ Let $\mu$ be a Borel probability measure on $\mathbb{R}^n$. For $q, t\in\mathbb{R}$, $E \subseteq{\mathbb R}^n$ and $\delta>0$, we define $$\overline{{\mathcal P}}^{q,t}_{\mu,\delta}(E) =\displaystyle \sup \left\{\sum_i \mu(B(x_i,r_i))^q (2r_i)^t\right\},$$ where the supremum is taken over all centered $\delta$-packings of $E$. The generalized packing pre-measure is given by $$\overline{{\mathcal P}}^{q,t}_{\mu}(E) =\displaystyle\inf_{\delta>0}\overline{{\mathcal P}}^{q,t}_{\mu,\delta}(E).$$ In a similar way, we define $$\overline{{\mathcal H}}^{q,t}_{\mu,\delta}(E) = \displaystyle\inf \left\{\sum_i \mu(B(x_i,r_i))^q(2r_i)^t\right\},$$ where the infimum is taken over all centered $\delta$-coverings of $E$. The generalized Hausdorff pre-measure is defined by $$\overline{{\mathcal H}}^{q,t}_{\mu}(E) = \displaystyle\sup_{ \delta>0}\overline{{\mathcal H}}^{q,t}_{\mu,\delta}(E).$$ In particular, we adopt the conventions $0^q=\infty$ for $q\leq0$ and $0^q=0$ for $q>0$. Olsen [@LO] introduced the following modifications on the generalized Hausdorff and packing measures, $${\mathcal H}^{q,t}_{\mu}(E)=\displaystyle\sup_{F\subseteq E}\overline{{\mathcal H}}^{q,t}_{\mu}(F)\quad\text{and}\quad {\mathcal P}^{q,t}_{\mu}(E) = \inf_{E \subseteq \bigcup_{i}E_i} \sum_i \overline{\mathcal P}^{q,t}_{\mu}(E_i).$$ The functions ${\mathcal H}^{q,t}_{\mu}$ and ${\mathcal P}^{q,t}_{\mu}$ are metric outer measures and thus measures on the family of Borel subsets of $\mathbb{R}^n$. 
An important feature of the Hausdorff and packing measures is that ${\mathcal P}^{q,t}_{\mu}\leq{\overline{\mathcal P}}^{q,t}_{\mu}$. Moreover, there exists an integer $\xi\in\mathbb{N}$ such that ${\mathcal H}^{q,t}_{\mu}\leq\xi{\mathcal P}^{q,t}_{\mu}.$ The measure ${\mathcal H}^{q,t}_{\mu}$ is of course a multifractal generalization of the centered Hausdorff measure, whereas ${\mathcal P}^{q,t}_{\mu}$ is a multifractal generalization of the packing measure. In fact, it is easily seen that, for $t\geq0$, one has $$2^{-t} {\mathcal H}^{0,t}_{\mu}\leq {\mathcal H}^{t}\leq {\mathcal H}^{0,t}_{\mu}\quad\text{ and}\quad{\mathcal P}^{0,t}_{\mu}={\mathcal P}^{t},$$ where ${\mathcal H}^{t}$ and ${\mathcal P}^{t}$ denote respectively the $t$-dimensional Hausdorff and $t$-dimensional packing measures. The measures ${\mathcal H}^{q,t}_{\mu}$ and ${\mathcal P}^{q,t}_{\mu}$ and the pre-measure ${\overline{\mathcal P}}^{q,t}_{\mu}$ assign in the usual way a multifractal dimension to each subset $E$ of $\mathbb{R}^n$. They are respectively denoted by $\dim_{\mu}^q(E)$, $\operatorname{Dim}_{\mu}^q(E)$ and $\Delta_{\mu}^q(E)$ (see [@LO]) and satisfy $$\begin{array}{lllcr} \dim_{\mu}^q(E) &=&\inf \Big\{ t\in\operatorname{\mathbb{R}}; \quad {\mathcal H}^{{q},t}_{\mu}(E) =0\Big\}=\sup \Big\{ t\in\operatorname{\mathbb{R}}; \quad {\mathcal H}^{{q},t}_{\mu}(E) =+\infty\Big\}, \\ \\ \operatorname{Dim}_{\mu}^q(E) &=& \inf \Big\{ t\in\operatorname{\mathbb{R}}; \quad {\mathcal P}^{{q},t}_{\mu}(E) =0\Big\}=\sup \Big\{ t\in\operatorname{\mathbb{R}}; \quad {\mathcal P}^{{q},t}_{\mu}(E) =+\infty\Big\}, \\ \\ \Delta_{\mu}^q(E) &=& \inf \Big\{ t\in\operatorname{\mathbb{R}}; \quad \overline{\mathcal P}^{{q},t}_{\mu}(E) =0\Big\}=\sup\Big\{ t\in\operatorname{\mathbb{R}}; \quad \overline{\mathcal P}^{{q},t}_{\mu}(E) =+\infty\Big\}. 
\end{array}$$ The number $\dim_{\mu}^q(E)$ is an obvious multifractal analogue of the Hausdorff dimension $\dim(E)$ of $E$ whereas $\operatorname{Dim}_{\mu}^q(E)$ and $\Delta_{\mu}^q(E)$ are obvious multifractal analogues of the packing dimension $\operatorname{Dim}(E)$ and the pre-packing dimension $\Delta(E)$ of $E$ respectively. In fact, it follows immediately from the definitions that $$\dim(E)=\dim_{\mu}^0(E),\;\;\;\operatorname{Dim}(E)=\operatorname{Dim}_{\mu}^0(E)\quad\text{and}\quad\Delta(E)=\Delta_{\mu}^0(E).$$ Multifractal Hausdorff and packing dimensions of measures ========================================================= Now, we introduce the multifractal analogous of the Hausdorff and packing dimensions of a Borel probability measure. The lower and upper multifractal Hausdorff dimensions of a measure $\nu$ with respect to a measure $\mu$ are defined by $$\underline{\dim}_{\mu}^q(\nu)=\inf\Big\{\dim_{\mu}^q(E);\; E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\dim}_{\mu}^q(\nu)=\inf\Big\{\dim_{\mu}^q(E);\; E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)=1\Big\}.$$ We denote by ${\dim}_{\mu}^q(\nu)$ their common value, if the equality $\underline{\dim}_{\mu}^q(\nu)= \overline{\dim}_{\mu}^q (\nu)$ is satisfied. The lower and upper multifractal packing dimensions of a measure $\nu$ with respect to a measure $\mu$ are defined by $$\underline{\operatorname{Dim}}_{\mu}^q(\nu)=\inf\Big\{\operatorname{Dim}_{\mu}^q(E);\; E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)>0\Big\}$$ and $$\overline{\operatorname{Dim}}_{\mu}^q(\nu)=\inf\Big\{\operatorname{Dim}_{\mu}^q(E);\; E \subseteq \mathbb{R}^n\; \text{and}\; \nu(E)=1\Big\}.$$ When $\underline{\operatorname{Dim}}_{\mu}^q(\nu)= \overline{\operatorname{Dim}}_{\mu}^q(\nu)$, we denote by ${\operatorname{Dim}}_{\mu}^q(\nu)$ their common value. Let $\mu, \nu$ be two Borel probability measures on $\mathbb{R}^n$. 1. 
We say that $\mu$ is absolutely continuous with respect to $\nu$ and write $\mu\ll \nu$ if, for any set $A\subset\operatorname{\mathbb{R}}^n$, $\nu(A)=0\Rightarrow \mu(A)=0$. 2. $\mu$ and $\nu$ are said to be mutually singular and we write $\mu\bot \;\nu$ if there exists a set $A\subset\operatorname{\mathbb{R}}^n$, such that $\mu(A)=0=\nu(\operatorname{\mathbb{R}}^n\setminus A).$ The quantities $\underline{\dim}_\mu^q(\nu)$ and $\overline{\dim}_\mu^q(\nu)$ (resp. $\underline{\operatorname{Dim}}_\mu^q (\nu)$ and $\overline{\operatorname{Dim}}_\mu^q(\nu)$) allow one to compare the measure $\nu$ with the generalized Hausdorff (resp. packing) measure. More precisely, we have the following result. \[0\] Let $\mu, \nu$ be two Borel probability measures on $\mathbb{R}^n$ and $q\in \mathbb{R}$. We have, 1. $\underline{\dim}_{\mu}^q(\nu)=\sup\Big\{t\in\operatorname{\mathbb{R}};\;\nu\ll{\mathcal H}^{q,t}_{\mu}\Big\}$ and $\overline{\dim}_{\mu}^q(\nu)=\inf\Big\{t\in\operatorname{\mathbb{R}};\; \nu\bot{\mathcal H}^{q,t}_{\mu}\Big\}.$ 2. $ \underline{\operatorname{Dim}}_{\mu}^q(\nu)=\sup\Big\{t\in\operatorname{\mathbb{R}};\; \nu\ll{\mathcal P}^{q,t}_{\mu}\Big\}$ and $ \overline{\operatorname{Dim}}_{\mu}^q(\nu)=\inf\Big\{t\in\operatorname{\mathbb{R}};\; \nu\bot{\mathcal P}^{q,t}_{\mu}\Big\}.$ [**Proof.**]{} 1. Let us prove that $\underline{\dim}_{\mu}^q(\nu) =\sup\Big\{ t\in\operatorname{\mathbb{R}};\;\nu\ll{\mathcal H}^{q,t}_{\mu}\Big\}$. Define $$s=\sup\Big\{t\in\operatorname{\mathbb{R}};\; \nu\ll{\mathcal H}^{q,t}_{\mu}\Big\}.$$ For any $t<s$ and $E\subseteq\operatorname{\mathbb{R}}^n$, such that $\nu(E)>0$, we have ${\mathcal H}^{q,t}_{\mu}(E)>0$. It follows that $\dim_{\mu}^q(E)\geq t$ and then, $\underline{\dim}_{\mu}^q(\nu)\geq t$. We deduce that $\underline{\dim}_{\mu}^q(\nu)\geq s$.\ On the other hand, for any $t>s$, there exists a set $E\subseteq\operatorname{\mathbb{R}}^n$, such that $\nu(E)>0$ and ${\mathcal H}^{q,t}_{\mu}(E)=0$.
Consequently, $\dim_{\mu}^q(E)\leq t$ and so, $\underline{\dim}_{\mu}^q(\nu)\leq t$. This leads to $\underline{\dim}_{\mu}^q(\nu)\leq s$. Now, we prove that $\overline{\dim}_{\mu}^q(\nu)=\inf\Big\{t\in\operatorname{\mathbb{R}};\; \nu\bot{\mathcal H}^{q,t}_{\mu}\Big\}.$ For this, we define $$s'=\inf\Big\{t\in\operatorname{\mathbb{R}};\; \nu\bot{\mathcal H}^{q,t}_{\mu}\Big\}.$$ For $t>s'$, there exists a set $E\subseteq\operatorname{\mathbb{R}}^n$, such that ${\mathcal H}^{q,t}_{\mu}(E)=0=\nu(\operatorname{\mathbb{R}}^n\setminus E)$. Then, $\dim_{\mu}^q(E)\leq t$. Since $\nu(E)=1$, we get $\overline{\dim}_{\mu}^q(\nu)\leq t$, and hence $\overline{\dim}_{\mu}^q(\nu)\leq s'$.\ Now, for $t<s'$, every set $E\subseteq\operatorname{\mathbb{R}}^n$ with $\nu(E)=1$ satisfies ${\mathcal H}^{q,t}_{\mu}(E)>0$, since otherwise $\nu$ and ${\mathcal H}^{q,t}_{\mu}$ would be mutually singular. It is immediately seen that $\dim_{\mu}^q(E)\geq t$. Then, $\overline{\dim}_{\mu}^q(\nu)\geq t$. It follows that $\overline{\dim}_{\mu}^q(\nu)\geq s'$. This ends the proof of assertion (1). 2. The proof of assertion (2) is given in [@L Theorem 2].$\hfill\square$ When the upper multifractal Hausdorff (resp. packing) dimension of the measure is small, it means that the measure $\nu$ is “very singular" with respect to the generalized multifractal Hausdorff (resp. packing) measure. In the same way, when the lower multifractal Hausdorff (resp. packing) dimension of the measure is large, then the measure $\nu$ is “quite regular" with respect to the generalized multifractal Hausdorff (resp. packing) measure. The quantities $\underline{\dim}_{\mu}^q(\nu)$, $\overline{\dim}_{\mu}^q(\nu)$, $\underline{\operatorname{Dim}}_{\mu}^q(\nu)$ and $\overline{\operatorname{Dim}}_{\mu}^q(\nu)$ are related to the asymptotic behavior of the function ${\alpha}_{\mu,\nu}^q(x,r)$, where $${\alpha}_{\mu,\nu}^q(x,r)=\frac{\log \nu\big(B(x,r)\big) -q\log\mu\big(B(x,r)\big) }{\log r}.$$ Notice that the characterization of the lower and upper packing dimensions by the function $\alpha_{\mu,\nu}^q$ is proved by J.
Li in [@L Theorem 3]. In the following theorem we prove similar results for the Hausdorff dimensions. \[1\] Let $\mu, \nu$ be two Borel probability measures on $\mathbb{R}^n$ and $q\in \mathbb{R}$. Let $$\underline{\alpha}_{\mu,\nu}^q(x)=\displaystyle\liminf_{ r\to 0}\; {\alpha}_{\mu,\nu}^q(x,r)\quad\text{and}\quad \overline{\alpha}_{\mu,\nu}^q(x)=\displaystyle\limsup_{ r\to 0}\; {\alpha}_{\mu,\nu}^q(x,r).$$ We have, 1. $ \underline{\dim}_{\mu}^q(\nu)=\operatorname{ess\,inf}\underline{\alpha}_{\mu,\nu}^q(x) \quad\text{and}\quad \overline{\dim}_{\mu}^q(\nu)=\operatorname{ess\,sup}\underline{\alpha}_{\mu,\nu}^q(x). $ 2. $ \underline{\operatorname{Dim}}_{\mu}^q(\nu)=\operatorname{ess\,inf}\overline{\alpha}_{\mu,\nu}^q(x)\quad\text{and}\quad \overline{\operatorname{Dim}}_{\mu}^q(\nu)=\operatorname{ess\,sup}\overline{\alpha}_{\mu,\nu}^q(x), $ where the essential bounds are taken with respect to the measure $\nu$. [**Proof.**]{} 1. We prove that $\underline{\dim}_{\mu}^q(\nu) =\text{ess}\inf\underline{\alpha}_{\mu,\nu}^q(x)$. Let $\alpha<\text{ess}\inf\underline{\alpha}_{\mu,\nu}^q(x)$. For $\nu$-almost every $x$, there exists $r_0>0$ such that, for $0<r<r_0$, $$\nu(B(x,r))<\mu(B(x,r))^q ~~r^\alpha.$$ Set $$F_n=\left\{x;\;\nu(B(x,r))<\mu(B(x,r))^q ~~r^\alpha,\;\, \text{for}\;\,0<r<\frac1n\right\}.$$ Let $F=\cup_n F_n$. It is clear that $\nu(F)=1$. Let $E$ be a Borel subset of $\operatorname{\mathbb{R}}^n$ satisfying $\nu(E)> 0$. We have $\nu(E\cap F)>0$ and there exists an integer $n$, such that $\nu(E\cap F_n)>0$. Let $\delta>0$ and $\big(B(x_i,r_i)\big)_i$ be a centered $\delta$-covering of $E \cap F_n$.
We have $$\displaystyle\sum_{i}\nu(B(x_{i},r_{i}))\leq 2^{-\alpha} \displaystyle\sum_{i}\mu(B(x_{i},r_{i}))^q (2r_i)^\alpha,$$ so that $$2^{\alpha}\nu(E\cap F_n)\leq \overline{{\mathcal H}}^{q,\alpha}_{\mu,\delta}(E\cap F_n).$$ Letting $\delta \to 0$ gives that $$2^{\alpha}\nu(E\cap F_n)\leq \overline{{\mathcal H}}^{q,\alpha}_{\mu}(E\cap F_n)\leq {\mathcal H}^{q,\alpha}_{\mu}(E\cap F_n).$$ It follows that $${\mathcal H}^{q,\alpha}_{\mu}(E)\geq{\mathcal H}^{q,\alpha}_{\mu}(E\cap F_n)>0\;\Rightarrow\; \dim_{\mu}^q(E)\geq \alpha.$$ We have proved that $$\underline{\dim}_{\mu}^q(\nu)\geq\text{ess}\inf\underline{\alpha}_{\mu,\nu}^q(x).$$ On the other hand, suppose that $\text{ess}\inf\underline{\alpha}_{\mu,\nu}^q(x)=\alpha$. For $\varepsilon>0$, let $$E_\varepsilon=\Big\{x\in\operatorname{supp}\nu;\; \underline{\alpha}_{\mu,\nu}^q(x)<\alpha+\varepsilon\Big\}.$$ It is clear that $\nu(E_\varepsilon)>0$. This means that $\underline{\dim}_{\mu}^q(\nu)\leq{\dim}_{\mu}^q(E_\varepsilon)$. We will prove that $${\dim}_{\mu}^q(E_\varepsilon)\leq \alpha+\varepsilon, \;\; \forall\; \varepsilon>0.$$ Let $E\subset E_\varepsilon$ and $x\in E$. Then, for all $\delta>0$ we can find $0<r_x<\delta$, such that $$\nu(B(x,r_x))>\mu(B(x,r_x))^q ~~r_x^{\alpha+\varepsilon}.$$ Take $\delta>0$.
The family $\Big(B(x,r_x)\Big)_{x\in E}$ is a centered $\delta$-covering of $ E.$ Using Besicovitch’s Covering Theorem (see [@F; @M1]), we can construct $\xi$ finite or countable sub-families $\Big(B(x_{1j},r_{1j})\Big)_j$,....,$\Big(B(x_{\xi j},r_{\xi j})\Big)_j$, such that $$E\subseteq\displaystyle\bigcup_{i=1}^\xi\bigcup_jB(x_{ij},r_{ij}) \quad \text{and} \quad \Big(B(x_{ij},r_{ij})\Big)_j\quad \text{is a } \delta\text{-packing of }E.$$ We get $$\begin{aligned} \displaystyle\sum_{i,j}\mu(B(x_{ij},r_{ij}))^q (2r_{ij})^{\alpha+\varepsilon} &\leq& 2^{\alpha+\varepsilon} \sum_{i,j} \nu(B(x_{ij},r_{ij}))\leq \xi\, 2^{\alpha+\varepsilon} \nu(\mathbb{R}^n).\end{aligned}$$ Consequently, $$\overline{\mathcal H}^{q,\alpha+\varepsilon}_{\mu,\delta}(E)\leq\xi 2^{\alpha+\varepsilon} \nu(\mathbb{R}^n)\;\Rightarrow\; \overline{\mathcal H}^{q,\alpha+\varepsilon}_{\mu}(E)\leq\xi 2^{\alpha+\varepsilon} \nu(\mathbb{R}^n).$$ We thus obtain $${\mathcal H}^{q,\alpha+\varepsilon}_{\mu}(E_\varepsilon)\leq\xi 2^{\alpha+\varepsilon} \nu(\mathbb{R}^n)<\infty.$$ Therefore, $$\dim_{\mu}^q(E_\varepsilon)\leq \alpha+\varepsilon\quad\text{and}\quad \underline{\dim}_{\mu}^q(\nu)\leq\text{ess}\inf\underline{\alpha}_{\mu,\nu}^q(x).$$ We prove in a similar way that $\overline{\dim}_{\mu}^q(\nu)=\text{ess}\sup\underline{\alpha}_{\mu,\nu}^q(x)$. \[cor1\] Let $\mu, \nu$ be two Borel probability measures on $\mathbb{R}^n$ and take $q, \alpha\in \mathbb{R}$. We have, 1. $\underline{\dim}_{\mu}^q(\nu)\geq\alpha$ if and only if $\underline{\alpha}_{\mu,\nu}^q(x)\geq\alpha$ for $\nu$-a.e. $x$. 2. $\overline{\dim}_{\mu}^q(\nu)\leq\alpha$ if and only if $\underline{\alpha}_{\mu,\nu}^q(x)\leq\alpha$ for $\nu$-a.e. $x$. 3. $\underline{\operatorname{Dim}}_{\mu}^q(\nu)\geq\alpha$ if and only if $\overline{\alpha}_{\mu,\nu}^q(x)\geq\alpha$ for $\nu$-a.e. $x$. 4. $\overline{\operatorname{Dim}}_{\mu}^q(\nu)\leq\alpha$ if and only if $\overline{\alpha}_{\mu,\nu}^q(x)\leq\alpha$ for $\nu$-a.e. $x$.
[**Proof.**]{} Follows immediately from Theorem \[1\]. We recall the definition of the deranged Cantor set (see [@B11; @B1; @B12; @BB]). Let $I_\emptyset= [0, 1]$. For each $\varepsilon\in\{1, 2\}^n$, $n \in\operatorname{\mathbb{N}}$, we obtain the left and right sub-intervals $I_{\varepsilon,1}$ and $I_{\varepsilon,2}$ of $I_\varepsilon$ by deleting an open middle sub-interval of $I_\varepsilon$, inductively. We consider the sequence $$\mathcal{C}_n =\displaystyle\bigcup_{\varepsilon\in\{1, 2\}^n} I_\varepsilon.$$ $\{\mathcal{C}_n \}_{n\in\operatorname{\mathbb{N}}}$ is a decreasing sequence of closed sets. For each $n\in \operatorname{\mathbb{N}}$ and each $\varepsilon\in\{1, 2\}^n$, we put $$|I_{\varepsilon,1} | / |I_{\varepsilon} |= c_{\varepsilon,1}\quad\text{and}\quad |I_{\varepsilon,2} | / |I_{\varepsilon} |= c_{\varepsilon,2},$$ where $|I|$ is the diameter of $I$. The set $\mathcal{C}=\displaystyle\bigcap_{n\geq0}\mathcal{C}_n$ is called a deranged Cantor set. Let $\nu$ be a probability measure supported by the deranged Cantor set $\mathcal{C}$ and $\mu$ be the Lebesgue measure on $I_\emptyset$. For $\varepsilon_1,...,\varepsilon_n\in\{1,2\}$, we denote by $I_{\varepsilon_1,...,\varepsilon_n}$ the basic set of level $n$. For $x\in \mathcal{C}$, we denote by $I_n(x)$ the $n$-th level set containing $x$.
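The construction just described is easy to carry out numerically. In the sketch below (an illustration; the helper `deranged_levels` and its `ratios` callback are our own naming), each interval $I_\varepsilon$ spawns children $I_{\varepsilon,1}$ and $I_{\varepsilon,2}$ with prescribed length ratios $c_{\varepsilon,1}$ and $c_{\varepsilon,2}$; the constant choice $c_{\varepsilon,1}=c_{\varepsilon,2}=1/3$ recovers the classical middle-third Cantor set.

```python
def deranged_levels(ratios, n):
    """Build the level sets C_0, ..., C_n of a deranged Cantor set.
    `ratios(eps)` returns (c1, c2), the length ratios of the left/right
    children of the interval indexed by the word `eps` (a tuple over {1, 2})."""
    levels = [[((), 0.0, 1.0)]]  # triples (word, left endpoint, right endpoint)
    for _ in range(n):
        nxt = []
        for word, a, b in levels[-1]:
            c1, c2 = ratios(word)
            length = b - a
            nxt.append((word + (1,), a, a + c1 * length))   # left child
            nxt.append((word + (2,), b - c2 * length, b))   # right child
        levels.append(nxt)
    return levels

# constant ratios 1/3: the classical middle-third Cantor set
levels = deranged_levels(lambda word: (1/3, 1/3), 3)
print(len(levels[3]))   # 8 intervals at level 3
print(levels[3][0])     # leftmost basic interval, of length ~1/27
```

Allowing `ratios` to depend on the word (or on an external random source) produces genuinely deranged examples in which the levels are no longer self-similar.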
We introduce the sequence of random variables $X_n$ defined by $$X_n(x)=-\log_3\left(\frac{\nu(I_n(x))}{\nu(I_{n-1}(x))}\right).$$ We have $$\frac{S_n(x)}{n}=\frac{X_1(x)+...+X_n(x)}{n}= \frac{\log \nu(I_n(x))}{\log | I_n(x)|}.$$ By Lemma 1 in [@BB], we have, for all $x\in \mathcal{C}$, $$\liminf_{n\to\infty}\frac{\log \nu(I_n(x))}{\log |I_n(x)|}=\liminf_{r\to0}\frac{\log \nu(B(x,r))}{\log r}$$ and $$\limsup_{n\to\infty}\frac{\log \nu(I_n(x))}{\log |I_n(x)|}=\limsup_{r\to0}\frac{\log \nu(B(x,r))}{\log r}.$$ The quantities $\underline{\dim}_{\mu}^q(\nu)$ and $\overline{\dim}_{\mu}^q(\nu)$ are related to the asymptotic behavior of the sequence $\displaystyle\frac{S_n}{n}$. More precisely, we have the following two relations $$\underline{\dim}_{\mu}^q(\nu) =\text{ess}\inf\left\{\liminf_{n\to\infty} \frac{S_n(x)}{n}-q\right\} \quad\text{and}\quad \overline{\dim}_{\mu}^q(\nu) =\text{ess}\sup\left\{\liminf_{n\to\infty} \frac{S_n(x)}{n}-q\right\}.$$ In the same way, we can also prove that $$\underline{\operatorname{Dim}}_{\mu}^q(\nu) =\text{ess}\inf\left\{\limsup_{n\to\infty}\frac{S_n(x)}{n}-q\right\} \quad\text{and}\quad \overline{\operatorname{Dim}}_{\mu}^q(\nu) =\text{ess}\sup\left\{\limsup_{n\to\infty}\frac{S_n(x)}{n}-q\right\}.$$ We say that the measure $\nu$ is $(q, \mu)$-unidimensional if $\overline{\dim}_{\mu}^q(\nu)=\underline{\dim}_{\mu}^q(\nu)$. We also say that $\nu$ has an exact multifractal packing dimension whenever $\overline{\operatorname{Dim}}_{\mu}^q(\nu) =\underline{\operatorname{Dim}}_{\mu}^q(\nu)$. In general, a Borel probability measure is not $(q,\mu)$-unidimensional, and $\overline{\operatorname{Dim}}_{\mu}^q(\nu)\neq\underline{\operatorname{Dim}}_{\mu}^q(\nu)$. In what follows, we are interested in the $(q, \mu)$-unidimensionality and ergodicity of $\nu$ and in the computation of its multifractal Hausdorff and packing dimensions. Our purpose in the following theorem is to prove the main theorem of M.
Dai [@D Theorem A] under less restrictive hypotheses. \[p\] The measure $\nu$ is $(q, \mu)$-unidimensional with ${\dim}_{\mu}^q(\nu)=\alpha$ if and only if the following two conditions are satisfied. 1. There exists a set $E$ of $\operatorname{\mathbb{R}}^n$ with ${\dim}_{\mu}^q(E)=\alpha$, such that $\nu(E)=1$. 2. $\nu(E)=0$, for every Borel set $E$ satisfying ${\dim}_{\mu}^q(E)<\alpha$. [**Proof.**]{} We can deduce from Theorems \[0\] and \[1\] that $\nu$ is $(q, \mu)$-unidimensional with ${\dim}_{\mu}^q(\nu)=\alpha$ if and only if we have the following assertions. 1. $\nu$ is absolutely continuous with respect to ${\mathcal H}^{q,\alpha-\varepsilon}_{\mu}$, for all $\varepsilon>0$. 2. $\nu$ and ${\mathcal H}^{q,\alpha+\varepsilon}_{\mu}$ are mutually singular, for all $\varepsilon>0$. Then, the proof of Theorem \[p\] becomes an easy consequence of the following lemma. [@D] The following conditions are equivalent. 1. We have, 1. there exists a set $E$ of $\operatorname{\mathbb{R}}^n$ with ${\dim}_{\mu}^q(E)=\alpha$, such that $\nu(E)=1$. 2. $\nu(E)=0$, for every Borel set $E$ satisfying ${\dim}_{\mu}^q(E)<\alpha$. 2. We have, 1. $\nu\ll{\mathcal H}^{q,\alpha-\varepsilon}_{\mu}$ for all $\varepsilon>0$. 2. $\nu\bot{\mathcal H}^{q,\alpha+\varepsilon}_{\mu}$ for all $\varepsilon>0$. Theorem \[p\] improves Dai’s result [@D Theorem A] (we need not assume that $\mu$ is a doubling measure). The symmetric results are true as well. \[20\] Let $\mu, \nu$ be two Borel probability measures on $\mathbb{R}^n$ and take $\alpha, q\in \operatorname{\mathbb{R}}$. The following conditions are equivalent. 1. $\overline{\operatorname{Dim}}_{\mu}^q(\nu)=\underline{\operatorname{Dim}}_{\mu}^q(\nu)=\alpha$. 2. We have, 1. there exists a set $E\subset\operatorname{\mathbb{R}}^n$ with $\operatorname{Dim}_{\mu}^q(E)=\alpha$, such that $\nu(E)=1$, 2. if $\; E\subset\operatorname{\mathbb{R}}^n$ satisfies $\operatorname{Dim}_{\mu}^q(E)<\alpha$, then $\nu(E)=0$. 3. We have, 1.
$\nu \ll {\mathcal P}^{q,\alpha-\epsilon}_{\mu}$, for all $\epsilon>0$. 2. $\nu \bot {\mathcal P}^{q,\alpha+\epsilon}_{\mu}$, for all $\epsilon>0$. [**Proof.**]{} We can deduce from Theorems \[0\] and \[1\] that the assertions (1) and (3) are equivalent. We only need to prove the equivalence of the assertions (2) and (3). Assume that the measure $\nu$ satisfies the hypotheses (a) and (b) of (2). Let $\epsilon>0$ and let $E\subset\operatorname{\mathbb{R}}^n$ be such that ${\mathcal P}^{q,\alpha-\epsilon}_{\mu}(E)=0$. Then, $\operatorname{Dim}_{\mu}^q(E)\leq \alpha-\epsilon<\alpha$. By condition (b) of (2), we obtain $\nu(E)=0$. Thus, $$\nu \ll {\mathcal P}^{q,\alpha-\epsilon}_{\mu},\quad\text{for all } \epsilon>0.$$ Thanks to condition (a) of (2), there exists a set $E\subset\operatorname{\mathbb{R}}^n$ of multifractal packing dimension $\alpha$, such that $\nu(E)=1$ and $\operatorname{Dim}_{\mu}^q(E)=\alpha< \alpha+\epsilon$, for all $\epsilon>0$. Then, ${\mathcal P}^{q,\alpha+\epsilon}_{\mu}(E)=0$. Thus, $$\nu \bot {\mathcal P}^{q,\alpha+\epsilon}_{\mu},\quad\text{for all } \epsilon>0.$$ Now, assume that $\nu$ satisfies conditions (a) and (b) of (3). This means that $\nu \ll {\mathcal P}^{q,\alpha -\epsilon}_{\mu}$, for all $\epsilon>0$. Taking a Borel set $E$ with $\operatorname{Dim}_{\mu}^q(E) =\beta<\alpha$ and $\epsilon=\frac{\alpha-\beta}2$, we get ${\mathcal P}^{q,(\alpha+\beta)/2}_{\mu}(E)=0.$ Then, $\nu(E)=0$. Since $\nu \bot {\mathcal P}^{q,\alpha +\epsilon }_{\mu}$, for all $\epsilon>0$, there exists a set $F_\epsilon$ with ${\mathcal P}^{q,\alpha +\epsilon }_{\mu}(F_\epsilon)=0$ and $\nu(F_\epsilon)=1$. Hence, $\operatorname{Dim}_{\mu}^q(F_\epsilon)\leq\alpha+\epsilon$. Choose a sequence $(\epsilon_k)_k$ such that $\epsilon_k\to 0$ as $k\to +\infty$ and consider the set $F=\displaystyle\bigcap_{k\geq 1} F_{\epsilon_k}$.
It is clear that $\nu(F)=1$ and $$\operatorname{Dim}_{\mu}^q(F) \leq\displaystyle\liminf_{k\to\infty} \operatorname{Dim}_{\mu}^q(F_{\epsilon_k})\leq \displaystyle\liminf_{k\to\infty}(\alpha+\epsilon_k)=\alpha.$$ If $\operatorname{Dim}_{\mu}^q(F) =\alpha$, then the condition (a) of (2) is satisfied for $E=F$.\ If $\operatorname{Dim}_{\mu}^q(F) <\alpha$, then putting $E=F\cup G$, for some Borel set $G$ of multifractal packing dimension $\alpha$, we obtain $$\nu(E)=1 \qquad \text{and}\qquad \operatorname{Dim}_{\mu}^q(E) =\max \Big\{ \operatorname{Dim}_{\mu}^q(F), \operatorname{Dim}_{\mu}^q(G)\Big\} =\alpha.$$ Let $\mu$ be the Lebesgue measure on $\mathbb{R}^n$, $\nu$ be a compactly supported Borel probability measure on $\mathbb{R}^n$ and $T:\operatorname{supp}\nu\rightarrow\operatorname{supp}\nu$ a $K$-Lipschitz map. Suppose that $\nu$ is $T$-invariant and ergodic on $\operatorname{supp}\nu$. Then, $$\overline{\dim}_{\mu}^q(\nu)=\underline{\dim}_{\mu}^q(\nu)\quad\text{and}\quad \overline{\operatorname{Dim}}_{\mu}^q(\nu)=\underline{\operatorname{Dim}}_{\mu}^q(\nu).$$ [**Proof.**]{} Since $T$ is a $K$-Lipschitz map, we have $T(B(x,r))\subseteq B(T(x), Kr)$. Since $\nu$ is $T$-invariant, we deduce that $$\nu\big(B(x,r)\big)\leq\nu\big(T^{-1}\big(T(B(x,r))\big)\big)\leq\nu\big(T^{-1}\big(B(T(x), Kr)\big)\big)=\nu\big(B(T(x), Kr)\big).$$ It follows that $$\begin{aligned} \frac{\log \nu\big(B(x,r)\big)-q\log\mu\big(B(x,r)\big) }{\log r} &=& \frac{\log \nu\big(B(x,r)\big) }{\log r}-q\,\frac{\log \mu\big(B(x,r)\big) }{\log r} \\ &\geq& \frac{\log \nu\big(B(T(x),K r)\big) }{\log (K r)} \times \frac{\log (K r)}{\log r}-q\,\frac{\log \mu\big(B(x,r)\big) }{\log r},\end{aligned}$$ which, since $\mu$ is the Lebesgue measure, proves that $$\underline{\alpha}_{\mu,\nu}^q(x)\geq\underline{\alpha}_{\mu,\nu}^q(T(x))\quad\text{and} \quad\overline{\alpha}_{\mu,\nu}^q(x)\geq\overline{\alpha}_{\mu,\nu}^q(T(x)).$$ Since $\nu$ is $T$-invariant, the function $\underline{\alpha}_{\mu,\nu}^q(x) -\underline{\alpha}_{\mu,\nu}^q(T(x))$ (resp.
$\overline{\alpha}_{\mu,\nu}^q(x)-\overline{\alpha}_{\mu,\nu}^q(T(x))$) is nonnegative and satisfies $$\int\Big(\underline{\alpha}_{\mu,\nu}^q(x)-\underline{\alpha}_{\mu,\nu}^q(T(x))\Big) d\nu(x)=0\quad \left(\text{resp.}\;\;\int\Big(\overline{\alpha}_{\mu,\nu}^q(x)-\overline{\alpha}_{\mu,\nu}^q(T(x))\Big) d\nu(x)=0\right).$$ We can conclude that $$\underline{\alpha}_{\mu,\nu}^q(x)=\underline{\alpha}_{\mu,\nu}^q(T(x))\quad\text{and} \quad\overline{\alpha}_{\mu,\nu}^q(x)=\overline{\alpha}_{\mu,\nu}^q(T(x))\quad \text{for }\; \nu\; \text{-a.e. } x$$ and hence that the functions $\underline{\alpha}_{\mu,\nu}^q$, $\overline{\alpha}_{\mu,\nu}^q$ are $T$-invariant $\nu$-almost everywhere. On the other hand, the measure $\nu$ is ergodic and $$-qn\leq\underline{\alpha}_{\mu,\nu}^q\leq \overline{\alpha}_{\mu,\nu}^q\leq (1-q)n\qquad \nu\text{-almost everywhere}.$$ It follows that $\underline{\alpha}_{\mu,\nu}^q$ (resp. $\overline{\alpha}_{\mu,\nu}^q$) is $\nu$-almost everywhere constant, which yields $$\overline{\dim}_{\mu}^q(\nu)=\underline{\dim}_{\mu}^q(\nu)\quad\text{and}\quad \overline{\operatorname{Dim}}_{\mu}^q(\nu)=\underline{\operatorname{Dim}}_{\mu}^q(\nu).$$ In the case where $\overline{\alpha}_{\mu,\nu}^q(x)=\underline{\alpha}_{\mu,\nu}^q(x)=\alpha$ for $\nu$-almost all $x$, we have ${\dim}_{\mu}^q(\nu)={\operatorname{Dim}}_{\mu}^q(\nu)=\alpha. $ The results developed by Heurteaux in [@H1; @H2; @H] and by Fan et al. in [@FF; @FF1; @FLR] are obtained as special cases of the multifractal theorems above when $q=0$. We say that the probability measure $\mu$ is a quasi-Bernoulli measure on the Cantor set $\mathcal{C}= \{0, 1\}^{\operatorname{\mathbb{N}}^*}$ if we can find $C \geq 1$ such that $$\forall x,y \in \mathcal{F}\qquad C^{-1} \mu(x)\mu(y)\leq\mu(x y)\leq C \mu(x)\mu(y),$$ where $\mathcal{F}$ is the set of words written with the alphabet $\{0, 1\}$. Let $\mathcal{F}_n$ be the set of words of length $n$. For $x = x_1 x_2 ... \in \mathcal{C}$, let $I_n(x)$ be the unique cylinder of $\mathcal{F}_n$ that contains $x$.
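As a concrete toy check (the helper name `log2_mu` is ours), a Bernoulli product measure on $\mathcal{C}$ is quasi-Bernoulli with constant $C=1$, since $\mu(xy)=\mu(x)\mu(y)$ exactly. For such a measure the almost-sure local exponent $\lim_n \log_2\mu(I_n(x))/(-n)$ equals the entropy $-(p_0\log_2 p_0+p_1\log_2 p_1)$, by the strong law of large numbers. The sketch below verifies both facts numerically, working throughout with $\log_2\mu$ to avoid floating-point underflow on long words.

```python
import math
import random

def log2_mu(word, p0=0.3):
    """log_2 of the Bernoulli(p0) product measure of the cylinder `word`
    (a string over the alphabet {0, 1})."""
    return word.count("0") * math.log2(p0) + word.count("1") * math.log2(1 - p0)

# quasi-Bernoulli with C = 1: mu(xy) = mu(x) mu(y), i.e. log2_mu is additive
x, y = "0110", "10"
assert abs(log2_mu(x) + log2_mu(y) - log2_mu(x + y)) < 1e-12

# local exponent log_2 mu(I_n(x)) / (-n) along a mu-typical point x
random.seed(0)
n = 20000
word = "".join("0" if random.random() < 0.3 else "1" for _ in range(n))
exponent = log2_mu(word) / (-n)
entropy = -(0.3 * math.log2(0.3) + 0.7 * math.log2(0.7))
print(exponent, entropy)  # the two values are close for large n
```

Genuinely quasi-Bernoulli (non-product) examples, such as Markov measures, satisfy the same inequality only with some $C>1$, and the analogous limit statements are the content of the results recalled next.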
Let us introduce the function $\tau_\mu$ defined, for $p\in\operatorname{\mathbb{R}}$, by $$\tau_\mu(p)=\displaystyle\lim_{n\to\infty} \tau_\mu(p,n)\quad\text{with}\quad \tau_\mu(p,n)=\displaystyle\frac1{n\log 2} \log \left(\sum_{x\in\mathcal{F}_n} \mu(x)^p\right).$$ Let $\mu$ and $\nu$ be two probability measures on $\mathcal{C}$ such that $\nu\ll\mu$ and $\mu$ is a quasi-Bernoulli measure. Then, $\tau'_\mu(1)$ exists and we have $$\begin{aligned} \label{E1} \displaystyle\lim_{ n\to \infty} \frac{\log \mu\big(B(x,2^{-n})\big)} {\log (2^{-n})}=\displaystyle\lim_{ n\to \infty} \frac{\log_2 \mu\big(I_n(x)\big)} {-n}= -\tau'_\mu(1) \text{ for}\; \mu\text{-a.e. } x\in \mathcal{C},\end{aligned}$$ and $$\begin{aligned} \label{E2} \displaystyle\lim_{ n\to \infty}\;\; \frac{\log_2 \nu\big(I_n(x)\big)} {-n}= -\tau'_\nu(1)= -\tau'_\mu(1)\quad \text{for}\; \nu\text{-a.e. } x\in \mathcal{C}.\end{aligned}$$ For more details about these two results, the reader can see [@H3; @H2]. It follows immediately from them that the measure $\nu$ is $(q, \mu)$-unidimensional and $${\dim}_{\mu}^q(\nu)={\operatorname{Dim}}_{\mu}^q(\nu)=(q-1)\tau'_\mu(1)=(q-1)\tau'_\nu(1).$$ Projection results =================== In this section, we show that the multifractal Hausdorff and packing dimensions of a measure $\nu$ are preserved under almost every orthogonal projection. First, we briefly recall some basic definitions and facts which will be used repeatedly in what follows. Let $m$ be an integer with $0<m<n$ and $G_{n,m}$ the Grassmannian manifold of all $m$-dimensional linear subspaces of $\mathbb{R}^n$. Denote by $\gamma_{n,m}$ the invariant Haar measure on $G_{n,m}$, such that $\gamma_{n,m}(G_{n,m})=1$. For $V\in G_{n,m}$, define the projection map $\pi_V: \mathbb{R}^n \longrightarrow V$ as the usual orthogonal projection onto $V$.
Then, the set $\{\pi_V,\; V \in G_{n,m}\}$ is compact in the space of all linear maps from $\mathbb{R}^n$ to $\mathbb{R}^m$, and the identification of $V$ with $\pi_V$ induces a compact topology on $G_{n,m}$. Also, for a Borel probability measure $\mu$ with compact support on $\mathbb{R}^n$, denoted by $\operatorname{supp}\mu$, and for $V\in G_{n,m}$, define the projection $\mu_V$ of $\mu$ onto $V$ by $$\mu_V(A)=\mu(\pi_V^{-1}(A))\quad \forall A\subseteq V.$$ Since $\mu$ is compactly supported and $\operatorname{supp}\mu_V=\pi_V(\operatorname{supp}\mu)$ for all $V\in G_{n,m}$, for any continuous function $f: V\longrightarrow\mathbb {R}$ we have $$\displaystyle\int_V fd\mu_V=\int f(\pi_V(x))d\mu(x),$$ whenever these integrals exist. For more details, see for example [@FM; @FN; @M1; @M2; @SB; @SS]. The convolution kernel is defined, for $1\leq m< n$ and $r>0$, by $$\begin{array}{llll} \overline{\phi}_r^m:\; & \mathbb{R}^n &\longrightarrow & \mathbb{R} \\ & x & \longmapsto & \gamma_{n,m}\Big\{V\in G_{n,m};\; |\pi_V(x)|\leq r\Big\}. \end{array}$$ Moreover, define $$\begin{array}{llll} \phi_r^m: & \mathbb{R}^n & \longrightarrow & \mathbb{R} \\ & x & \longmapsto & \min\Big\{1\:,\: r^m|x|^{-m}\Big\}. \end{array}$$ The function $\phi_r^m(x)$ is equivalent to $\overline{\phi}_r^m (x)$, and we write $\phi_r^m(x)\asymp \overline{\phi}_r^m(x)$. For a probability measure $\mu$ and for $V\in G_{n,m}$, we have $$\label{r}\mu\ast\phi_r^m(x)\asymp\mu\ast \overline{\phi}_r^m(x)=\int\mu_V(B(x_V,r))dV,$$ where $x_V=\pi_V(x)$, and $$\mu\ast\phi_r^m(x)=\int\min\Big\{1 \:,\: r^m|x-y|^{-m}\Big\}d\mu(y).$$ So, integrating by parts and converting into spherical coordinates (see [@FN]), we obtain $$\mu\ast\phi_r^m(x)=mr^m\int_r^{+\infty}u^{-m-1}\mu(B(x,u))du.$$ We now present the tools, as well as the intermediate results, which will be used in the proofs of our main results. The following straightforward estimates concern the behaviour of the convolution $\mu\ast\phi_r^m(x)$ as $r\to0$.
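The integration-by-parts formula can be sanity-checked on the simplest possible measure. For a point mass $\mu=\delta_y$ with $d=|x-y|$ (a toy example; the function names below are ours), the defining integral gives $\mu\ast\phi_r^m(x)=\min\{1,(r/d)^m\}$, while $\mu(B(x,u))=\mathbf{1}_{\{u\geq d\}}$ reduces the right-hand side to an elementary integral over $u\geq\max(r,d)$:

```python
def kernel_direct(r, d, m):
    """mu * phi_r^m(x) from the defining integral for mu = delta_y,
    with d = |x - y|: the integral of min(1, r^m |x-y|^-m) d mu(y)."""
    return min(1.0, (r / d) ** m)

def kernel_by_parts(r, d, m):
    """The formula m r^m * int_r^inf u^(-m-1) mu(B(x,u)) du for mu = delta_y:
    mu(B(x,u)) = 1 exactly when u >= d, so the integral collapses to
    int_{max(r,d)}^inf u^(-m-1) du = max(r, d)^(-m) / m."""
    lo = max(r, d)
    return m * r ** m * (lo ** (-m) / m)

for r, d, m in [(0.1, 0.5, 2), (0.5, 0.1, 2), (0.25, 0.25, 3)]:
    assert abs(kernel_direct(r, d, m) - kernel_by_parts(r, d, m)) < 1e-12
print("both expressions agree")
```

For a general compactly supported $\mu$ the same identity follows by applying this point-mass computation to each $y$ and integrating in $d\mu(y)$, which is the content of the spherical-coordinates argument cited from [@FN].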
[@FN]\[l1\] Let $1\leq m \leq n$ and $\mu$ be a compactly supported Borel probability measure on $\operatorname{\mathbb{R}}^n$. For all $x\in \mathbb{R}^n$, we have $$c r^m\leq\mu\ast\phi_r^m(x)$$ for all sufficiently small $r$, where $c>0$ is independent of $r$. Let $E\subseteq \mathbb{R}^n$ and $0 < s <+ \infty$. We say that $E$ is $s$-Ahlfors regular if it is closed and if there exist a Borel measure $\mu$ on $\mathbb{R}^n$ and a constant $1\leq C_E < +\infty$, such that $\mu(E)>0$ and $$C_E^{-1} r^s\leq \mu(B(x,r))\leq C_E r^s,\quad\text{for all}\;\; x\in E\;\;\;\text{and}\;\;\; 0<r\leq 1.$$ \[l2\] Let $0<m\leq n$. 1. Let $\mu$ be a compactly supported Borel probability measure on $\operatorname{\mathbb{R}}^n$. Then, for all $x\in \mathbb{R}^n$ and $r>0$, $$\mu(B(x,r))\leq\mu\ast\phi_r^m(x).$$ 2. Suppose that $\mu$ is a compactly supported Borel probability measure on $\mathbb{R}^n$ with support contained in an $s$-Ahlfors regular set for some $0 < s \leq m$. Then, for all $\varepsilon>0$ and $\mu$-almost all $x$, there is $c>0$ such that $$c~r^{-\varepsilon}\mu(B(x,r))\geq\mu\ast\phi_r^m(x)$$ for all sufficiently small $r$. [**Proof.**]{} The proof of assertion (1) is exactly the same as that given in [@FN]. Assertion (2) is nothing but Lemma 5.8 of [@O]. We use the properties of $\mu\ast\phi_r^m(x)$ to obtain a relationship between the kernels and the projected measures. [@FN]\[l3\] Let $1\leq m \leq n$ and $\mu$ be a compactly supported Borel probability measure on $\operatorname{\mathbb{R}}^n$. We have, 1. Let $\varepsilon>0$. For all $V\in G_{n,m}$, for $\mu$-almost all $x$ and for sufficiently small $r$, $$r^\varepsilon \mu\ast\phi_r^m(x)\leq\mu_V(B(x_V,r)).$$ 2. Let $\varepsilon>0$.
For $\gamma_{n,m}$-almost all $V\in G_{n,m}$, for all $x\in\mathbb{R}^n$ and for sufficiently small $r$, $$r^{-\varepsilon} \mu\ast\phi_r^m(x)\geq\mu_V(B(x_V,r)).$$ Throughout this section, we consider a compactly supported Borel probability measure $\mu$ on $\mathbb{R}^n$ with support contained in an $s$-Ahlfors regular set for some $0\leq s\leq m < n$, and a compactly supported Borel probability measure $\nu$ on $\mathbb{R}^n$ such that $\operatorname{supp}\nu \subseteq \operatorname{supp}\mu$ and $\nu\ll\mu$. We introduce the functions $\underline{\alpha}_{\mu,\nu}^{q,m}$ and $\overline{\alpha}_{\mu,\nu}^{q,m}$ defined by $$\underline{\alpha}_{\mu,\nu}^{q,m}(x)=\displaystyle\liminf_{ r\to 0}\;\; \frac{\log \nu\ast\phi_r^m(x) -q\log\mu\ast\phi_r^m(x)}{\log r}$$ and $$\overline{\alpha}_{\mu,\nu}^{q,m}(x)=\displaystyle\limsup_{ r\to 0}\;\; \frac{\log \nu\ast\phi_r^m(x)-q\log\mu\ast\phi_r^m(x)}{\log r}.$$ \[prop2\] Let $q\in \mathbb{R}$. We have, for $\nu$-almost all $x$: 1. If $q>0$, then $$\underline{\alpha}_{\mu,\nu}^{q,m}(x)=\underline{\alpha}_{\mu,\nu}^{q}(x).$$ 2. If $q\leq0$ and $\underline{\alpha}_{\mu,\nu}^{q}(x)\leq m(1-q)$, then $$\underline{\alpha}_{\mu,\nu}^{q,m}(x)=\underline{\alpha}_{\mu,\nu}^{q}(x).$$ [**Proof.**]{} 1. We will prove that, for $\nu$-almost all $x$, we have $ \underline{\alpha}_{\mu,\nu}^{q,m}(x)\leq\underline{\alpha}_{\mu,\nu}^{q}(x). $ The proof of the other inequality is similar.
By using Lemma \[l2\], we have $$\log\nu(B(x,r))\leq\log\nu\ast\phi_r^m(x).$$ Since $\nu$ is absolutely continuous with respect to $\mu$ and $q>0$, we have, for $\nu$-almost all $x$, $$-q\big(\log c-\varepsilon\log r+\log\mu(B(x,r))\big)\leq -q\log\mu\ast\phi_r^m(x).$$ So, for $\nu$-almost all $x$, $$\frac{\log\nu(B(x,r))-q\big(\log c-\varepsilon\log r+\log\mu(B(x,r))\big)}{\log r}\geq \frac{\log\nu\ast\phi_r^m(x)-q\log\mu\ast\phi_r^m(x)}{\log r}.$$ Finally, letting $\varepsilon \to 0$, we get $\underline{\alpha}_{\mu,\nu}^{q}(x)\geq \underline{\alpha}_{\mu,\nu}^{q,m}(x).$ 2. The inequality $\underline{\alpha}_{\mu,\nu}^{q,m}(x)\leq m(1-q)$ follows immediately from Lemma \[l1\]. By using Lemma \[l2\], we have $$\log\nu(B(x,r))\leq\log\nu\ast\phi_r^m(x).$$ Since $q\leq0$, we have $$-q\log\mu(B(x,r))\leq-q\log\mu\ast\phi_r^m(x).$$ It follows that $\underline{\alpha}_{\mu,\nu}^{q,m}(x)\leq \underline{\alpha}_{\mu,\nu}^{q}(x)$. The proof of the other inequality is similar to that given for assertion (1). The following proposition is a consequence of Lemma \[l3\]. \[prop1\] Let $q\in \mathbb{R}$. For $\gamma_{n,m}$-almost all $V\in G_{n,m}$ and $\nu$-almost all $x$, we have $$\underline{\alpha}_{\mu_V,\nu_V}^{q}(x_V)=\underline{\alpha}_{\mu,\nu}^{q,m}(x)$$ and $$\overline{\alpha}_{\mu_V,\nu_V}^{q}(x_V)=\overline{\alpha}_{\mu,\nu}^{q,m}(x).$$ The following theorem presents general relations between the multifractal Hausdorff and packing dimensions of a measure $\nu$ and those of its orthogonal projections. Let $q\in \mathbb{R}$. 1. For $\gamma_{n,m}$-almost all $V\in G_{n,m}$, we have $$\underline{\operatorname{Dim}}_{\mu_V}^q(\nu_V)=\operatorname{ess\,inf}\overline{\alpha}_{\mu,\nu}^{q,m}(x)\quad\text{ and}\quad \overline{\operatorname{Dim}}_{\mu_V}^q(\nu_V)=\operatorname{ess\,sup}\overline{\alpha}_{\mu,\nu}^{q,m}(x),$$ where the essential bounds are taken with respect to the measure $\nu$. 2. For $\gamma_{n,m}$-almost all $V\in G_{n,m}$, we have 1.
for $q>0$, $$\begin{array}{ccll} \underline{\dim}_{\mu_V}^q(\nu_V) = \underline{\dim}_\mu^q(\nu) \quad\text{and}\quad\overline{\dim}_{\mu_V}^q(\nu_V) =\overline{\dim}_\mu^q(\nu). \end{array}$$ 2. for $q\leq0$ and $\overline{\dim}_\mu^q(\nu)\leq m(1-q),$ $$\underline{\dim}_{\mu_V}^q(\nu_V)=\underline{\dim}_\mu^q(\nu) \quad\text{and}\quad \overline{\dim}_{\mu_V}^q(\nu_V)=\overline{\dim}_\mu^q(\nu).$$ [**Proof.**]{} Follows immediately from Propositions \[prop2\] and \[prop1\] and Corollary \[cor1\]. Due to Proposition 5.10 in [@O], the result is optimal. If, in addition, $q=0$, then the results of Falconer and O’Neil hold (see [@FN]). Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank the referee for the careful reading of the manuscript and for various suggestions and improvements. [99]{} N. Attia and B. Selmi. [*Regularities of multifractal Hewitt-Stromberg measures*]{}. Commun. Korean Math. Soc., (to appear). N. Attia, B. Selmi and Ch. Souissi. *Some density results of relative multifractal analysis*. Chaos, Solitons $\&$ Fractals, [**103**]{} (2017), 1-11. I.S. Baek. *On deranged Cantor sets*. Kyungpook Math. Journal, [**38**]{} (1998), 363-367. I.S. Baek. *Weak local dimension on deranged Cantor sets*. Real Analysis Exchange, [**26**]{} (2001), 553-558. I.S. Baek. *Multifractal by self-similar measures*. J. Appl. Math. $\&$ Computing, [**23**]{} (2007), 497-503. I.S. Baek. *On multifractal of Cantor dust*. Acta Mathematica Sinica, English Series, [**25**]{} (2009), 1175-1182. A. Batakis and B. Testud. *Multifractal analysis of inhomogeneous Bernoulli products*. Journal of Statistical Physics, [**142**]{} (2011), 1105-1120. F. Ben Nasr and J. Peyrière. [*Revisiting the multifractal analysis of measures*]{}. Rev. Mat. Iberoam., [**25**]{} (2013), 315-328. F. Ben Nasr and I. Bhouri. [*Spectre multifractal de mesures boréliennes sur $\mathbb{R}^d$*]{}. C.R. Acad. Sci. Paris Ser. I Math., [**325**]{} (1997), 253-256. F.
Ben Nasr, I. Bhouri and Y. Heurteaux. [*The validity of the multifractal formalism: results and examples*]{}. Adv. in Math., [**165**]{} (2002), 264-284. A. Ben Mabrouk. [*A higher order multifractal formalism*]{}. Stat. Prob. Lett., [**78**]{} (2008), 1412-1421. J. Cole. [*Relative multifractal analysis*]{}. Chaos, Solitons $\&$ Fractals, [**11**]{} (2000), 2233-2250. M.F. Dai. *Multifractal analysis of a measure of multifractal exact dimension*. Nonlinear Analysis: Theory, Methods $\&$ Applications. [**70**]{} (2008), 1069-1079. Z. Douzi and B. Selmi. *Multifractal variation for projections of measures*. Chaos, Solitons $\&$ Fractals. [**91**]{} (2016), 414-420. Z. Douzi and B. Selmi. *On the projections of mutual multifractal spectra*. arXiv:1805.06866v1, (2018). K.J. Falconer. *Techniques in fractal geometry*. Wiley, New York, (1997). K.J. Falconer and P. Mattila. *The packing dimensions of projections and sections of measures*. Math. Proc. Cambridge Philos. Soc., [**119**]{} (1996), 695-713. K.J. Falconer and T.C. O’Neil. *Convolutions and the geometry of multifractal measures*. Math. Nachr., [**204**]{} (1999), 61-82. A.H. Fan. *Sur la dimension des mesures*. Studia. Math., [**111**]{} (1994), 1-17. A.H. Fan. *On ergodicity and unidimensionality*. Kyushu. J. Math., [**48**]{} (1994), 249-255. A.H. Fan, K.S. Lau and H. Rao. *Relationships between different dimensions of a measure*. Monatsh. Math., [**135**]{} (2002), 191-201. A. Farhat and A. Ben Mabrouk, [*A joint multifractal analysis of finitely many non Gibbs-Ahlfors type measures*]{}. viXra:1808.0576, (2018). Y. Heurteaux. *Inégalités de Harnack à la frontière pour des opérateurs paraboliques*. Thèse. (1989). Y. Heurteaux. *Sur la comparaison des mesures avec les mesures de Hausdorff*. C. R. Acad. Sci. Paris Sér.
I Math., [**321**]{} (1995), 61-65. Y. Heurteaux. *Estimations de la dimension inférieure et de la dimension supérieure des mesures*. Ann. Inst. H. Poincaré Probab. Statist., [**34**]{} (1998), 309-338. Y. Heurteaux. *Dimension of measures: the probabilistic approach*. Publ. Mat., [**51**]{} (2007), 243-290. J. Li. *A note on multifractal packing dimension of measures*. Anal. Theory Appl., [**25**]{} (2009), 147-153. P. Mattila. *The Geometry of Sets and Measures in Euclidean Spaces*. Cambridge University Press, Cambridge. (1995). P. Mattila. *Hausdorff dimension, orthogonal projections and intersections with planes*. Annales Academiae Scientiarum Fennicae. Series A Mathematica. [**1**]{} (1975), 227-244. M. Menceur, A. Ben Mabrouk and K. Betina. [*The multifractal formalism for measures, review and extension to mixed cases*]{}. Anal. Theory Appl., [**32**]{} (2016), 77-106. M. Menceur and A. Ben Mabrouk. [*A mixed multifractal formalism for finitely many non Gibbs Frostman-like measures*]{}. arXiv:1804.09034v1, (2018). T.C. O’Neil. *The multifractal spectra of projected measures in Euclidean spaces*. Chaos, Solitons $\&$ Fractals. [**11**]{} (2000), 901-921. L. Olsen. *A multifractal formalism*. Advances in Mathematics. [**116**]{} (1995), 82-196. L. Olsen. [*Multifractal dimensions of product measures*]{}. Math. Proc. Camb. Phil. Soc., [**120**]{} (1996), 709-734. L. Olsen. [*Measurability of multifractal measure functions and multifractal dimension functions*]{}. Hiroshima Math. J., [**29**]{} (1999), 435-458. L. Olsen. [*Dimension inequalities of multifractal Hausdorff measures and multifractal packing measures*]{}. Math. Scand., [**86**]{} (2000), 109-129. P.Y. Pesin. [*Dimension type characteristics for invariant sets of dynamical systems*]{}. Russian Math. Surveys. [**43**]{} (1988), 111-151. P.Y. Pesin.
[*Dimension theory in dynamical systems, Contemporary views and applications*]{}. Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, (1997). S. Shen. [*Multifractal analysis of some inhomogeneous multinomial measures with distinct analytic Olsen’s $b$ and $B$ functions*]{}. Journal of Statistical Physics. [**159**]{} (2015), 1216-1235. B. Selmi. *A note on the effect of projections on both measures and the generalization of $q$-dimension capacity*. Probl. Anal. Issues Anal., [**5**]{} (23) (2016), 38-51. B. Selmi. *Measure of relative multifractal exact dimensions*. Adv. Appl. Math. Sci., [**17**]{} (2018), 629-643. B. Selmi. [*Some results about the regularities of multifractal measures*]{}. Korean J. Math. [**26**]{} (2018), 271-283. B. Selmi. [*On the strong regularity with the multifractal measures in a probability space*]{}. Analysis and Mathematical Physics (to appear). B. Selmi and N. Yu. Svetova. *On the projections of mutual $L^{q,t}$-spectrum*. Probl. Anal. Issues Anal., [**6**]{} (24) (2017), 94-108. M. Tamashiro. *Dimensions in a separable metric space*. Kyushu. J. Math., [**49**]{} (1995), 143-162. M. Wu. [*The multifractal spectrum of some Moran measures*]{}. Sci. China. Ser. A Math., [**48**]{} (2005), 97-112. M. Wu. [*The singularity spectrum $f(\alpha)$ of some Moran fractals*]{}. Monatsh. Math., [**144**]{} (2005), 141-155. M. Wu and J. Xiao. [*The singularity spectrum of some non-regularity Moran fractals*]{}. Chaos, Solitons $\&$ Fractals. [**44**]{} (2011), 548-557. J. Xiao and M. Wu. [*The multifractal dimension functions of homogeneous Moran measure*]{}. Fractals. [**16**]{} (2008), 175-185.
--- abstract: 'In conventional imaging experiments, objects are localized in a position space and such optically responsive objects can be imaged with a convex lens and can be seen by a human eye. In this paper, we introduce an experiment on three-dimensional imaging of a pattern which is localized in a three-dimensional phase space. The phase space pattern cannot be imaged with a lens in a conventional way and it cannot be seen by a human eye. In this experiment, a phase space pattern is produced from object transparencies and imprinted onto the phase space of an atomic gaseous medium, with a Doppler-broadened absorption profile at room temperature, by utilizing velocity selective hole burning in the absorption profile. The pattern is localized in a unique three-dimensional phase space which is a subspace of the six-dimensional phase space. Imaging of the localized phase space pattern is performed at different momentum locations. In addition, imaging of the imprinted pattern of an object of nonuniform transmittance is presented.' author: - Mandip Singh and Samridhi Gambhir title: 'Three-dimensional imaging of a pattern localized in a phase space' --- In most imaging experiments, the structure of an object is defined in a position space. The structural pattern can be stationary or, for a dynamic object, non-stationary *w.r.t.* time. An image of such an optically responsive object can be produced with a convex lens; therefore, such an object can be seen with a camera or by a human eye. In this paper, we go beyond the conventional notion of imaging. The structural pattern of the objects in our experiment is defined in a phase space; therefore, such a pattern cannot be imaged with a lens or a camera, and a human eye cannot visualize such a pattern. In this paper, we introduce a three-dimensional (3D) imaging of a pattern localized in a phase space.
The pattern is localized in a unique 3D subspace, of the six-dimensional (6D) phase space, involving two position and one momentum coordinates. However, the pattern is delocalized in a 3D position subspace and in a 3D momentum subspace of the 6D phase space, separately. In the experiment presented in this paper, the pattern of interest is produced by object transparencies and imprinted onto the phase space of an atomic gaseous medium at room temperature. The experiment is performed by utilizing velocity selective hole-burning [@lamb; @bennet; @haroche; @hughes2; @scholten1; @boudot; @schm] in the Doppler-broadened absorption profile of an atomic gaseous medium. Tomographic images of the pattern localized in a 3D phase space are then captured with an imaging laser beam. The imaging laser beam does not interact with the actual objects used to produce the localized phase space pattern. Imaging of objects localized in a position space has been realized with quantum $\&$ classical methods with undetected photons [@zeilinger_1; @wong]. Quantum imaging with undetected photons, unlike ghost imaging [@bar; @imphase; @qimaging; @ghim; @boyd; @shih; @lugiato], does not rely on coincidence detection of photons. In this paper, a pattern of an object of nonuniform transmittance is also imprinted onto the phase space of an atomic medium and the pattern is then imaged at a constant location of momentum. A localized pattern in a 3D subspace of the 6D phase space is shown in Fig. \[fig1\] (a), where a two-dimensional position space is spanned by orthogonal position unit vectors $\hat{x}$ and $\hat{y}$ and a third dimension corresponds to the $z$-component of momentum, $p_{z}$. ![\[fig1\] *(a) A localized pattern in a 3D phase space and its three tomograms. (b) Experimental schematic diagram, a linearly polarized imaging laser beam is overlapped in an atomic gaseous medium with counter-propagating object laser beams. 
A 2D transverse intensity profile of the imaging laser beam at different detunings is captured with an EMCCD camera. (c) A 2D transverse intensity profile of the overlapped object laser beams prior to their entrance into the atomic medium. All three alphabets are overlapped with each other. (d) Transmittance, for the imaging laser beam, of the atomic medium in presence of object laser beams without masks. Three peaks labeled as $1$, $2$ and $3$ correspond to a velocity selective hole-burning, in the Doppler-broadened absorption profile, produced by object laser beams of frequencies $\nu_{1}$, $\nu_{o}$ and $\nu_{2}$, respectively.*](fig1.png) In the experiment, $p_{z}$ is the $z$-component of momentum of atoms. The pattern is stationary *w.r.t.* time. A 2D planar section of a localized 3D phase space pattern at a constant $p_{z}$ represents a tomogram of the localized pattern. In Fig. \[fig1\] (a), three different tomograms at three different momenta are shown. Tomograms with an image of the English script alphabets $\bf{C}$, $\bf{A}$ and $\bf{T}$ are localized at $p_{z}$ equal to $p_{1}$, $p_{2}$ and $p_{3}$, respectively. Furthermore, in a 3D position space, spanned by orthogonal position unit vectors $\hat{x}$, $\hat{y}$ and $\hat{z}$, each tomogram is completely delocalized on the $z$-axis; that is, all images are overlapped with each other and distributed at all points on the $z$-axis. However, in a 3D momentum space, spanned by orthogonal unit vectors $\hat{p}_{x}$, $\hat{p}_{y}$ and $\hat{p}_{z}$ of momentum components along the $\hat{x}$, $\hat{y}$ and $\hat{z}$ directions, each tomogram is delocalized in all planes parallel to the $p_{x}$-$p_{y}$ plane. The subspace where the pattern is completely localized is a unique 3D subspace of the 6D phase space, as shown in Fig. \[fig1\] (a). In the remaining 3D subspaces of the 6D phase space the pattern is delocalized. 
In this paper, the stationary localized 3D phase space pattern of interest is produced from objects located in the position space. The pattern is then imprinted onto the phase space of an atomic gas obeying the Maxwell velocity distribution, in the form of the difference of the number density of atoms in the ground state and the excited state. Tomographic sections of the 3D phase space pattern are then imaged with an imaging laser beam, where by varying the frequency of the laser beam the location, $p_{z}$, of the tomogram can be shifted. In the experiment, a stationary pattern in the phase space of atoms is produced at room temperature (25$^{o}$C) by velocity selective hole-burning in the Doppler-broadened absorption profile of an atomic gaseous medium. Consider a linearly polarized object laser beam, of frequency $\nu_{p}$ and transverse intensity profile $I_{p}(x,y,\nu_{p})$, propagating in an atomic gaseous medium in a direction opposite to the $z$-axis. The intensity profile $I_{p}(x,y,\nu_{p})$ represents a 2D image of an object in position space. This image information is transferred to a velocity class of the atomic gaseous medium at temperature $T$ by velocity selective atomic excitation. Consider an atomic gaseous medium where an isolated stationary atom has a ground quantum state $|g\rangle$ of energy $E_{g}$ and an excited quantum state $|e\rangle$ of energy $E_{e}$ with linewidth $\Gamma$. The object laser beam is on resonance with a velocity class of atoms whose $z$-component of velocity equals $v_{r}=2\pi(\nu_{o}-\nu_{p})/k$, where $\nu_{o}=(E_{e}-E_{g})/h$ and $k=2\pi/\lambda$ is the magnitude of the propagation vector of the object laser beam of wavelength $\lambda$. Atoms of other velocity classes are out of resonance due to the Doppler shift. The transverse Doppler shift is negligible because of the non-relativistic velocity regime at room temperature. In the absence of an object laser beam, all the atoms are in the ground state. 
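Since $k=2\pi/\lambda$, the resonant velocity class reduces to $v_{r}=\lambda(\nu_{o}-\nu_{p})$, which can be checked against the numbers quoted later in the text ($\lambda\simeq780$ nm, $\pm40$ MHz detunings). A minimal sketch; the function name is our own:

```python
import math

def resonant_velocity(wavelength_m, detuning_hz):
    """z-component of velocity (m/s) of the atomic class addressed by an
    object beam detuned by (nu_o - nu_p) from the stationary-atom resonance."""
    k = 2 * math.pi / wavelength_m        # magnitude of the propagation vector
    return 2 * math.pi * detuning_hz / k  # v_r = 2*pi*(nu_o - nu_p)/k

# Rb D2 line, lambda ~ 780 nm; nu_o - nu_p = +40 MHz selects the
# v_z = +31.2 m/s class, matching the velocity classes quoted in the text.
print(round(resonant_velocity(780e-9, 40e6), 1))  # 31.2
```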
Let $n$ be the number of atoms per unit volume of the gaseous medium. According to the Maxwell velocity distribution, the fraction of atoms with velocity in an interval $dv_{z}$ around $v_{z}$ at temperature $T$ is $f(v_{z}) dv_{z}=(m/2 \pi k_{B} T)^{1/2}e^{-m v^{2}_{z}/2 k_{B} T} dv_{z}$, where $k_{B}$ is the Boltzmann constant and $m$ is the mass of an atom. Let $L$ be the length of the atomic medium along the beam propagation direction. In the presence of an object laser beam the ground state atoms of the resonant velocity class are promoted to the excited state. The steady state difference of atomic number densities in the ground state ($n_{1}$) and in the excited state ($n_{2}$) at $v_{z}$ is $n_{1}(x,y,v_{z})-n_{2}(x,y,v_{z})=n f(v_{z})/(1+I_{p}({x,y,\nu_{p}})\Gamma^{2}/(4I_{s}((2 \pi \nu_{p}-2 \pi \nu_{o}+kv_{z})^{2}+\Gamma^{2}/4)))$, where $I_{s}$ is the saturation intensity of the atomic transition. If attenuation and diffraction of the object laser beam are negligible, then the transverse intensity profile of the object laser beam is imprinted in the form of an atomic population difference in the resonating velocity class of atoms. This pattern is delocalized in the longitudinal direction, *i.e.* in the direction of propagation of the object laser beam. However, the pattern is localized in the transverse plane of coordinates $x$, $y$ at a $z$-component of momentum of atoms, $p_{z} = m v_{r}$. If three different overlapping object laser beams of the same linear polarization, different intensity profiles and different frequencies are passed through the atomic medium, then each beam imprints a different pattern in a different velocity class, each one located at a different $p_{z}$ corresponding to the resonant velocity class of atoms addressed by the resonating object laser beam. As a result, a localized pattern of all objects is imprinted onto a 3D subspace of the 6D phase space of atoms. 
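The two formulas above can be evaluated numerically. In this sketch the constants (room temperature, an approximate $^{87}$Rb mass, a nominal linewidth) and all function names are our illustrative assumptions, not part of the original text:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_RB = 1.443e-25     # approximate mass of a 87Rb atom, kg

def maxwell_fraction(v_z, temperature):
    """Maxwell weight f(v_z): fraction of atoms per unit v_z interval."""
    a = M_RB / (2 * K_B * temperature)
    return math.sqrt(a / math.pi) * math.exp(-a * v_z ** 2)

def population_difference(v_z, s0, delta_omega, k, gamma, temperature=298.0):
    """Steady-state (n1 - n2)/n at velocity v_z, for saturation parameter
    s0 = I_p/I_s and object-beam detuning delta_omega = 2*pi*(nu_p - nu_o)."""
    lorentz = (gamma ** 2 / 4) / ((delta_omega + k * v_z) ** 2 + gamma ** 2 / 4)
    return maxwell_fraction(v_z, temperature) / (1 + s0 * lorentz)

# A resonant, saturating beam (s0 = 10) burns a hole in the v_z = 0 class,
# while classes far outside the power-broadened width stay almost thermal.
k = 2 * math.pi / 780e-9        # rad/m
gamma = 2 * math.pi * 6e6       # rad/s (nominal linewidth)
print(population_difference(0.0, 10.0, 0.0, k, gamma)
      < maxwell_fraction(0.0, 298.0))  # True
```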
The nearest frequency separation of the object laser beams has to be much larger than the linewidth of the transition to reduce the overlapping of resonating velocity classes. To image the localized phase space pattern, a counter-propagating imaging laser beam of frequency $\nu$ is overlapped with the object laser beams passing through the atomic gaseous medium. The polarization of the imaging laser beam is perpendicular to the polarization of the object laser beams. The total absorption coefficient $\alpha$ of the imaging laser beam at frequency detuning $\delta\nu=\nu-\nu_{o}$ is a convolution of the population difference and the absorption cross-section of an atom, such that $\alpha(x,y,\delta\nu) =\int^{\infty}_{-\infty} [n_{1}(x,y,v_{z})-n_{2}(x,y,v_{z})] \sigma_{o}(\Gamma^{2}/4) dv_{z}/((2 \pi \delta\nu- kv_{z})^{2}+\Gamma^{2}/4)$, where $\sigma_{o}$ is the peak absorption cross-section of the atomic transition. The absorption of the imaging laser beam decreases if it interacts with a velocity class of atoms excited by the object laser beams, *i.e.* where $n_{2}(x,y,v_{z})$ is nonzero. This produces velocity selective hole-burning in the Doppler-broadened absorption profile of the atomic medium. For an incident transverse intensity profile of the imaging laser beam $I_{r}(x,y,\delta\nu)$, the transmitted imaging laser beam intensity profile after passing through the gaseous medium is $I_{r}(x,y,\delta\nu) \exp(-\mathrm{OD}(x,y,\delta\nu))$, where $\mathrm{OD}(x,y,\delta\nu)=\alpha(x,y,\delta\nu) L$ is the optical density of the atomic medium. The optical density profile at a detuning $\delta\nu$ corresponds to a tomographic section of the phase space pattern at $p_{z}= 2 \pi m \delta\nu /k$. The optical density of the medium decreases if the object laser beams are present. An image of a tomographic section can be constructed by measuring the change in the optical density profile caused by the object laser beams. 
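The convolution integral above can be approximated by a midpoint Riemann sum over the thermal velocity range. The linewidth and wavelength values and all names below are our illustrative assumptions:

```python
import math

GAMMA = 2 * math.pi * 6e6  # assumed natural linewidth of the transition, rad/s
K = 2 * math.pi / 780e-9   # propagation-vector magnitude, rad/m

def lorentzian(delta_omega, v_z):
    """Single-atom line shape seen by the counter-propagating imaging beam,
    equal to 1 when 2*pi*delta_nu = k*v_z."""
    return (GAMMA ** 2 / 4) / ((delta_omega - K * v_z) ** 2 + GAMMA ** 2 / 4)

def alpha(delta_omega, pop_diff, sigma0=1.0, v_max=600.0, n=6000):
    """alpha(delta_nu): integral of (n1 - n2)(v_z) * sigma0 * L(v_z) dv_z,
    approximated by a midpoint rule on [-v_max, v_max]."""
    dv = 2 * v_max / n
    return sum(pop_diff(-v_max + (i + 0.5) * dv) * sigma0
               * lorentzian(delta_omega, -v_max + (i + 0.5) * dv) * dv
               for i in range(n))

# A hole burned in the v_z = 0 class lowers the absorption at zero detuning:
flat = lambda v_z: 1.0                                    # no object beams
hole = lambda v_z: 1.0 / (1.0 + 10.0 * lorentzian(0.0, v_z))
print(alpha(0.0, hole) < alpha(0.0, flat))  # True
```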
A 3D image of the phase space pattern can be constructed from tomograms obtained at different detunings of the imaging laser beam. In the experiment, the objects are three 2D transparency masks, where each mask consists of an image of an alphabet of the English script, $\bf{C}$ (on mask-$1$), $\bf{A}$ (on mask-$2$) and $\bf{T}$ (on mask-$3$), as shown in Fig. \[fig1\] (b). All alphabets are transparent and the remaining part of each mask is completely opaque to light. The object laser beams are initially passed through single mode (SM) polarization maintaining optical fibers to produce beams of Gaussian transverse intensity profile, where the optical fibers are utilized as transverse mode filters. The mode filtered and collimated object laser beams of frequencies $\nu_{1}$, $\nu_{o}$ and $\nu_{2}$ are then passed through mask-$1$, mask-$2$ and mask-$3$, respectively. After the masks, the transverse intensity profiles of the object laser beams correspond to the alphabets $\bf{C}$ (at $\nu_{1}$), $\bf{A}$ (at $\nu_{o}$) and $\bf{T}$ (at $\nu_{2}$). All three object laser beams are overlapped on polarization beam splitters (PBS-$3$, PBS-$2$). The overlapped object laser beams are linearly $x$-polarized by a polarizer with its pass axis aligned along the $x$-axis ($x$-polarizer). Two half wave-plates are placed, before and after the PBS-$2$, to rotate the linear polarization of the object laser beams to equalize the intensities. The image of the transverse intensity profile of the overlapped object laser beams, prior to their entrance into the atomic medium, is shown in Fig. \[fig1\] (c), where the images of the alphabets are overlapped with each other. A different alphabet is imprinted on a light field of different frequency and momentum. Therefore, the intensity profile of each object laser beam also corresponds to a tomogram in the 3D phase space. 
The overlapped object laser beams are passed through an atomic gaseous medium, which is a $10$ cm long rubidium ($^{87}$Rb) vapour cell shielded from external magnetic fields. The linewidth of the resonant transition of the atomic medium is broadened due to the Doppler shift caused by the motion of atoms. An object laser beam of frequency $\nu_{o}$ is on resonance with the atomic transition of stationary $^{87}$Rb atoms, where the ground quantum state is $5^{2}S_{1/2}$ with total angular momentum quantum number $F=2$ ($|g\rangle$) and the excited quantum state is $5^{2}P_{3/2}$ with $F=3$ ($|e\rangle$). For stationary atoms the wavelength of this transition is $\lambda\simeq780$ nm. The object laser is frequency-locked to the transition and its frequency is shifted by acousto-optic modulators. The object laser frequency $\nu_{2}$ is red detuned by $-40$ MHz and the frequency $\nu_{1}$ is blue detuned by $+40$ MHz from the resonant transition for stationary atoms, as shown in Fig. \[fig1\] (b). The nearest frequency separation of the object laser beams is much larger than the linewidth of the transition, $5.75$ MHz, and much smaller than the Doppler broadening of the resonant transition. The frequency spread of all laser beams is less than $1$ MHz. Frequency detuning of the beams is measured with a resolution of $0.1$ MHz. An object laser beam of frequency $\nu_{o}$ is on resonance with the atomic velocity class of $v_{z}=0$. Therefore, an image of the alphabet $\bf{A}$ is imprinted in the zeroth velocity class in the form of an atomic population difference. An object laser beam of frequency $\nu_{1}$ is on resonance with the velocity class $v_{z}=-31.2$ m/sec; therefore, an image of the alphabet $\bf{C}$ is imprinted in this velocity class of atoms. An object laser beam of frequency $\nu_{2}$ is on resonance with the velocity class $v_{z}=+31.2$ m/sec; therefore, an image of the alphabet $\bf{T}$ is imprinted in this velocity class of atoms. 
Atoms of each velocity class are uniformly distributed in the position space volume of the atomic gaseous medium. Therefore, the imprinted pattern is completely delocalized along the length of the atomic gaseous medium in the beam propagation direction. All the imprinted images form a localized pattern in a unique 3D subspace of the 6D phase space of the atomic gaseous medium. To image the imprinted phase space pattern, a linearly polarized imaging laser beam is passed through the atomic medium in the direction opposite to the propagation direction of the object laser beams. The object and imaging laser beams are produced by two independent lasers. The imaging laser is also frequency-locked to the same resonant transition of stationary atoms and its frequency is shifted by acousto-optic modulators. Prior to its entrance into the atomic medium, the transverse intensity profile of the imaging laser beam of frequency $\nu_{r}$ and detuning $\delta\nu=\nu_{r}-\nu_{o}$ is $I_{r}(x,y,\delta\nu)$. The imaging laser beam is $y$-polarized, which is perpendicular to the linear polarization of the object laser beams, and its peak intensity is much lower than the saturation intensity of the atomic transition. After passing through the atomic medium, the imaging laser beam is reflected by PBS-$1$ and its transverse intensity distribution at different detunings is captured with an electron-multiplying-charge-coupled-device (EMCCD) camera without gain multiplication. The transmittance of the atomic vapour cell for the imaging laser beam, at different detunings $\delta\nu$, in presence of the object laser beams without masks is shown in Fig. \[fig1\] (d). Three peaks labeled as $1$, $2$ and $3$ correspond to velocity selective hole-burning, in the Doppler-broadened absorption profile, caused by object laser beams of frequencies $\nu_{1}$, $\nu_{o}$ and $\nu_{2}$, respectively. 
The object and imaging laser beams are counter-propagating; therefore, a peak in the transmittance due to hole-burning by a higher frequency object laser beam appears at a lower frequency of the imaging laser beam. To measure the imaging laser beam detuning precisely, a part of the object laser light is extracted and red detuned by $190$ MHz from the resonant transition. ![\[fig2\] *Three tomographic images, of a localized pattern in the 3D phase space, captured at different detunings of the imaging laser beam. Each image is a plot of a change in the optical density, $\Delta \mathrm{OD}(y,z,\delta\nu)$.*](fig2.png) The extracted object laser light is overlapped with a part of the imaging laser light of the same polarization on a non-polarization beam splitter (BS). A beating signal of the two lasers is detected with a fast response photodetector and measured with a radio frequency spectrum analyzer, as shown in Fig. \[fig1\] (b). The detuning is determined from the frequency of the beating signal, which corresponds to the frequency difference of the two lasers. The intensity profile of the imaging laser beam after traversing through the atomic medium in presence of the object laser beams is $I_{on}(x,y,\delta\nu)=I_{r}(x,y,\delta\nu) \exp(-\mathrm{OD}(x,y,\delta\nu))$. The optical density $\mathrm{OD}(x,y,\delta\nu)$ is constructed at detuning $\delta\nu$. The optical density is higher in the absence of the object laser beams. The change in the optical density after switching on the object laser beams is $\Delta \mathrm{OD}(x,y,\delta\nu)=-\log(I_{on}(x,y,\delta\nu)/I_{off}(x,y,\delta\nu))$, where $I_{off}(x,y,\delta\nu)$ is the intensity profile of the imaging laser beam after traversing through the atomic medium in the absence of the object laser beams. The frequency of the imaging laser beam is red detuned by $\delta\nu=-40$ MHz from the resonant transition. Its transverse intensity profile $I_{off}(x,y,\delta\nu)$ is captured with an EMCCD camera in the absence of the three object laser beams. 
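The optical-density change is evaluated pixel by pixel from the two camera frames; a minimal sketch with a toy image pair (array shapes and values are illustrative only):

```python
import math

def delta_od(i_on, i_off):
    """Pixel-wise change in optical density between frames taken with the
    object beams on and off: delta_OD = -ln(I_on / I_off)."""
    return [[-math.log(on / off) for on, off in zip(r_on, r_off)]
            for r_on, r_off in zip(i_on, i_off)]

# Where a hole is burned the medium transmits more (I_on > I_off), so the
# computed delta_OD is negative there; untouched pixels give delta_OD = 0.
i_off = [[100.0, 100.0], [100.0, 100.0]]
i_on = [[100.0, 150.0], [150.0, 100.0]]
d = delta_od(i_on, i_off)
print(d[0][0] == 0.0, d[0][1] < 0.0)  # True True
```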
After a time delay, another image of the imaging laser beam intensity $I_{on}(x,y,\delta\nu)$ is captured in the presence of the three object laser beams. A $y$-polarizer is placed in front of the EMCCD camera to block any back reflection of the object laser beams from optical components. A change in the optical density profile, $\Delta \mathrm{OD}(x,y,\delta\nu)=-\log(I_{on}(x,y,\delta\nu)/I_{off}(x,y,\delta\nu))$, of the atomic medium is evaluated. Similar measurements are performed for detunings $\delta\nu=0$ MHz and $\delta\nu=+40$ MHz. For each detuning of the imaging laser beam, a different tomographic section of the 3D phase space localized pattern is captured. A series of three tomographic images is shown in Fig. \[fig2\] for the three different detunings. The imaging laser beam is reflected by PBS-$1$; therefore, the images are constructed after making a reflection transformation in a plane parallel to the $y$-$z$ plane and $\Delta \mathrm{OD}(x,z,\delta\nu)$ is transformed to $\Delta \mathrm{OD}(y,z,\delta\nu)$. The three tomographic images resemble the English script alphabets of the object transparencies, *i.e.* $\bf{C}$ (at $\delta\nu=-40$ MHz), $\bf{A}$ (at $\delta\nu=0$ MHz) and $\bf{T}$ (at $\delta\nu=+40$ MHz). By combining all the tomographic images, the word $\bf{CAT}$ is formed, as shown in Fig. \[fig2\]. The imaging laser beam detuning and the $z$-component of the resonating velocity class corresponding to each tomographic image are shown on top of each tomogram. ![\[fig3\] *(a) A photograph of overlapping neutral density filters. (b) An image, $\Delta \mathrm{OD}(y,z,\delta\nu=0)$, of the area enclosed by the square shown in (a).*](fig3.png) In another experiment, an object of nonuniform transmittance is constructed by overlapping two neutral density filters of neutral densities ($\mathrm{ND}$) $0.3$ and $0.6$, as shown in a photograph, Fig. \[fig3\] (a). 
Four different regions $R_{1}$ ($\mathrm{ND}=0$), $R_{2}$ ($\mathrm{ND}=0.3$), $R_{3}$ ($\mathrm{ND}=0.6$) and $R_{4}$ ($\mathrm{ND}=0.9$) are formed. This object is positioned in place of mask-$2$ in the path of the object laser beam of frequency $\nu_{o}$. The image of the part of the object enclosed by the square shown in Fig. \[fig3\] (a) is captured with the imaging laser beam. The experiment is performed with a single object laser beam. The intensity profile $I_{p}(x,y,\delta\nu=0)$ of the object laser beam after passing through the object consists of four different regions of different intensity levels. Therefore, it produces four regions of different depths of hole-burning in the atomic gaseous medium. The imaging laser beam is on resonance with the velocity class $v_{z}=0$ and an image of $\Delta\mathrm{OD}(y,z,\delta\nu=0)$ of the atomic gaseous medium is constructed, as shown in Fig. \[fig3\] (b), which is an image of the overlapping neutral density filters. In conclusion, the experiments presented in this paper provide a unique way to produce and image a 3D pattern localized only in a unique 3D subspace of the 6D phase space. [**[Contribution of Authors:]{}** ]{}Mandip Singh (MS) created the idea, MS designed and performed the experiment, MS and PhD student Samridhi Gambhir (SG) made the masks, SG plotted the data shown in Fig. 1 (d). MS wrote the paper. [99]{} W. E. Jr. Lamb, “Theory of an optical maser," Phys. Rev. **134**, A1429 (1964). W. R. Jr. Bennett, “Hole burning effects in a [H]{}e-[N]{}e optical maser," Phys. Rev. **126**, 580 (1962). S. Haroche and F. Hartmann, “Theory of saturated-absorption line shapes," Phys. Rev. A. **6**, 1280 (1972). M. L. Harris, C. S. Adams, S. L. Cornish, I. C. McLeod, E. Tarleton and I. G. Hughes, “Polarization spectroscopy in rubidium and cesium," Phys. Rev. A. **73**, 062509 (2006). L. P. Maguire, R. M. W. van Bijnen, E. Mese and R. E. Scholten, “Theoretical calculation of saturated absorption spectra for multi-level atoms," J. Phys. B: At. Mol. 
Opt. Phys **39**, 2709 (2006). M. A. Hafiz, D. Brazhnikov, G. Coget, A. Taichenachev, V. Yudin, E. de Clercq and R. Boudot, “High-contrast sub-[D]{}oppler absorption spikes in a hot atomic vapor cell exposed to a dual-frequency laser field," New J. Phys. **19**, 073028 (2017). S. Putz, A. Angerer, D. O. Krimer, R. Glattauer, W. J. Munro, S. Rotter, J. Schmiedmayer and J. Majer, “Spectral hole burning and its application in microwave photonics," Nat. Photonics, **11**, 36 (2017). G. B. Lemos, V. Borish, G. D. Cole, S. Ramelow, R. Lapkiewicz and A. Zeilinger, “Quantum imaging with undetected photons," Nature, **512**, 409 (2014). J. H. Shapiro, D. Venkatraman and F. N. C. Wong, “Classical imaging with undetected photons," Sci. Rep. **5**, 10329 (2015). T. B. Pittman, Y. H. Shih, D. V. Strekalov and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A. **52**, R3429 (1995). A. F. Abouraddy, P. R. Stone, A. V. Sergeinko, B. E. A. Saleh and M. C. Teich, “Entangled-photon imaging of a pure phase object," Phys. Rev. Lett. **93**, 213903 (2004). A. Gatti, E. Brambilla and L. Lugiato, “Quantum imaging," Prog. Opt. **51**, 251-348 (2008). R. S. Aspden, D. S. Tasca, R. W. Boyd and M. J. Padgett, “[EPR]{}-based ghost imaging using a single-photon-sensitive camera," New J. Phys. **15**, 073032 (2013). R. S. Bennink, S. J. Bentley and R. W. Boyd, “Two-Photon coincidence imaging with a classical source," Phys. Rev. Lett. **89**, 113601 (2002). A. Valencia, G. Scarcelli, M. D’Angelo and Y. Shih, “Two-photon imaging with thermal light," Phys. Rev. Lett. **94**, 063601 (2005). F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light," Phys. Rev. Lett. **94**, 183602 (2005).
--- abstract: | #### Morphology and defects: {#morphology-and-defects .unnumbered} Issues of Ge hut cluster array formation and growth at low temperatures on the Ge/Si(001) wetting layer are discussed on the basis of explorations performed by high resolution STM and [*in-situ*]{} RHEED. Dynamics of the RHEED patterns in the process of Ge hut array formation is investigated at low and high temperatures of Ge deposition. Different dynamics of RHEED patterns during the deposition of Ge atoms in different growth modes is observed, which reflects the difference in adatom mobility and their ‘condensation’ fluxes from Ge 2D gas on the surface for different modes, which in turn control the nucleation rates and densities of Ge clusters. Data of HRTEM studies of multilayer Ge/Si heterostructures are presented with the focus on low-temperature formation of perfect films. #### Photo-emf spectroscopy: {#photo-emf-spectroscopy .unnumbered} Heteroepitaxial Si [*p–i–n*]{}-diodes with multilayer stacks of Ge/Si(001) quantum dot dense arrays built in intrinsic domains have been investigated and found to exhibit the photo-emf in a wide spectral range from 0.8 to 5$\mu$m. An effect of wide-band irradiation by infrared light on the photo-emf spectra has been observed. Photo-emf in different spectral ranges has been found to be differently affected by the wide-band irradiation. A significant increase in photo-emf is observed in the fundamental absorption range under the wide-band irradiation. The observed phenomena are explained in terms of positive and neutral charge states of the quantum dot layers and the Coulomb potential of the quantum dot ensemble. A new design of quantum dot infrared photodetectors is proposed. 
#### Terahertz spectroscopy: {#terahertz-spectroscopy .unnumbered} By using a coherent-source spectrometer, the first measurements of terahertz dynamical conductivity (absorptivity) spectra of Ge/Si(001) heterostructures were performed at frequencies ranging from 0.3 to 1.2 THz in the temperature interval from 300 to 5K. The effective dynamical conductivity of the heterostructures with Ge quantum dots has been found to be significantly higher than that of a structure with the same amount of bulk germanium (not organized in an array of quantum dots). The excess conductivity is not observed in structures with a Ge coverage of less than 8Å. When a Ge/Si(001) sample is cooled down, the conductivity of the heterostructure decreases. address: | (1) A M Prokhorov General Physics Institute of RAS, 38 Vavilov Street, Moscow 119991, Russia\ (2) Technopark of GPI RAS, 38 Vavilov Street, Moscow, 119991, Russia\ (3) Moscow Institute of Physics and Technology, Institutsky Per. 9, Dolgoprudny, Moscow Region, 141700, Russia author: - 'Vladimir A Yuryev$^{1,2}$' - 'Larisa V Arapkina$^{1}$' - 'Mikhail S Storozhevykh$^{1}$' - 'Valery A Chapnin$^{1}$' - 'Kirill V Chizh$^{1}$' - 'Oleg V Uvarov$^{1}$' - 'Victor P Kalinushkin$^{1,2}$' - 'Elena S Zhukova$^{1,3}$' - 'Anatoly S Prokhorov$^{1,3}$' - 'Igor E Spektor$^{1}~$ and Boris P Gorshunov$^{1,3}$' bibliography: - 'EMNC2012-article.bib' title: | Ge/Si(001) heterostructures with dense arrays of Ge\ quantum dots: morphology, defects, photo-emf spectra\ and terahertz conductivity --- Introduction {#introduction .unnumbered} ============ Artificial low-dimensional nano-sized objects, like quantum dots, quantum wires and quantum wells, as well as structures based on them, are promising systems for the improvement of existing devices and for the development of principally new devices for opto-, micro- and nano-electronics. Besides, the investigation of the physical properties of such structures is also of fundamental importance. 
In both regards, exciting prospects are opened by quantum dots, which can be considered as artificial atoms with a controlled number of charge carriers and a discrete energy spectrum [@Pchel_Review-TSF; @Pchel_Review]. Arrays of a *large* number of quantum dots, including multilayer heterostructures, make it possible to create artificial “solids" whose properties can be controllably changed by varying the characteristics of the constituent elements (“atoms") and/or the environment (semiconductor matrix). The rich set of exciting physical properties in such systems originates from single-particle and collective interactions that depend on the number and mobility of carriers in quantum dots, the Coulomb interaction between the carriers inside a quantum dot and in neighbouring quantum dots, charge coupling between neighbouring quantum dots, polaron and exciton effects, etc. Since the characteristic energy scales of these interactions (distance between energy levels, Coulomb interaction between charges in quantum dots, one- and multiparticle exciton and polaron effects, plasmon excitations, etc.) are of the order of several meV [@3-Colomb_interactions-Dvur; @4-Drexler-InGaAs; @5-Lipparini-far_infrared], an appropriate experimental tool for their study is provided by optical spectroscopy in the far-infrared and terahertz bands. To get access to these effects, one has to extend the operation range of the spectrometers to the corresponding frequency domain, that is, to the terahertz frequency band. Because of the inaccessibility of this band, and especially of its lowest-frequency part below 1 THz (that is $\apprle 33$cm$^{-1}$), for standard infrared Fourier-transform spectrometers, the corresponding data are presently missing in the literature. 
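As a quick numerical aside (not part of the original analysis), the correspondence between the meV-scale interaction energies and the terahertz band follows directly from $E = h\nu$ and $\tilde\nu = \nu/c$; the sketch below, with constant and function names of our own choosing, reproduces the “1 THz $\approx$ 33 cm$^{-1}$” figure quoted above.

```python
# Unit conversions linking quantum-dot energy scales (meV) to the
# terahertz band probed by far-infrared spectroscopy.
H_MEV_PER_THZ = 4.135667696   # Planck constant h, in meV per THz
C_CM_PER_PS = 0.0299792458    # speed of light, in cm/ps (1 ps^-1 = 1 THz)

def mev_to_thz(energy_mev):
    """Photon frequency nu = E / h, in THz."""
    return energy_mev / H_MEV_PER_THZ

def thz_to_inverse_cm(nu_thz):
    """Wavenumber nu / c, in cm^-1."""
    return nu_thz / C_CM_PER_PS

print(mev_to_thz(1.0))         # ~0.242 THz: 'several meV' sits at roughly 0.5-2 THz
print(thz_to_inverse_cm(1.0))  # ~33.4 cm^-1, matching '1 THz, that is ~33 cm^-1'
```

So an energy scale of a few meV falls squarely in the sub-2-THz window of the BWO spectrometer described below.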
In this paper, we present the results of the first detailed measurements of the absolute dynamical (AC) conductivity of multilayer Ge/Si heterostructures with Ge quantum dots at terahertz and sub-terahertz frequencies and in the temperature range from 5 to 300K. In addition, for at least two decades, multilayer Ge/Si heterostructures with quantum dots have been candidates for the role of photosensitive elements of monolithic IR arrays, promising to replace and excel platinum silicide in this important branch of sensor technology [@Wang-properties; @Wang-Cha; @Dvur-IR-20mcm]. Unfortunately, to date the achievements in this field have been less than modest. We believe that this state of affairs may be improved by rigorous investigation of the formation, defects and other aspects of the materials science of such structures, especially those which may affect device performance and reliability, focusing on the identification of the reasons for low quantum efficiency and detectivity, high dark currents and the tendency to degrade with time, as well as on the search for ways to overcome these deficiencies. New approaches to device architecture and design as well as to principles of functioning are also desirable. This article reports our latest data on the morphology and defects of Ge/Si heterostructures. On the basis of our recent results on the photo-emf in the Si [*p–i–n*]{}-structures with Ge quantum dots, which are also reported in this article, we propose a new design of photovoltaic quantum dot infrared photodetectors. 
Methods {#methods .unnumbered} ======= Equipment and techniques {#equipment-and-techniques .unnumbered} ------------------------ The Ge/Si samples were grown and characterized using an integrated ultrahigh-vacuum instrument [@classification; @stm-rheed-EMRS; @CMOS-compatible-EMRS; @VCIAN2011] built on the basis of the Riber SSC2 surface science center with the EVA32 molecular-beam epitaxy (MBE) chamber equipped with the RH20 reflection high-energy electron diffraction (RHEED) tool (Staib Instruments) and connected through a transfer line to the GPI-300 ultrahigh-vacuum scanning tunnelling microscope (STM) [@gpi300; @STM_GPI-Proc; @STM_calibration]. Electron-beam evaporation sources were used for Ge and Si deposition. A Knudsen effusion cell was used when boron doping was applied for QDIP [*p–i–n*]{}-structure formation. A pressure of about $5\times 10^{-9}$Torr was maintained in the preliminary sample cleaning (annealing) chamber. The MBE chamber was evacuated down to about $10^{-11}$Torr before each process; the pressure increased to nearly $2\times 10^{-9}$Torr at most during the Si substrate cleaning and to $10^{-9}$Torr during Ge or Si deposition. The residual gas pressure did not exceed $10^{-10}$Torr in the STM chamber. Additional details of the experimental instruments and process control can be found in Ref.[@VCIAN2011]. RHEED measurements were carried out [*in situ*]{}, i.e., directly in the MBE chamber during a process [@stm-rheed-EMRS]. STM images were obtained in the constant tunnelling current mode at room temperature. The STM tip was zero-biased while the sample was positively or negatively biased when scanned in the empty- or filled-states imaging mode. Structural properties of the Ge/Si films were explored using the Carl Zeiss Libra-200 FE HR high-resolution transmission electron microscope (HRTEM). The images were processed using the WSxM software [@WSxM]. 
For obtaining spectra of the photo-electromotive force (photo-emf), a setup enabling sample illumination by two independent beams was used: one beam was wide-band infrared (IR) radiation generated by a tungsten bulb and passed through a Si or Ge filter (bias lighting), and the other was beam-chopper-modulated narrow-band radiation cut from globar emission by an IR monochromator tunable in the range from 0.8 to 20 $\mu$m. The spectra were taken at the chopping frequency of 12.5 Hz at temperatures ranging from 300 to 70 K and at a widely varied power of the bias lighting. The measurements of the terahertz dynamic conductivity and absorptivity of Ge/Si heterostructures at room and cryogenic temperatures (down to 5 K) were performed using a spectrometer based on backward-wave oscillators (BWO) as radiation sources. This advanced experimental technique is described in detail below in a separate section. Sample preparation procedures {#sample-preparation-procedures .unnumbered} ----------------------------- ### Preparation of samples for STM and RHEED {#preparation-of-samples-for-stm-and-rheed .unnumbered} Initial samples for STM and RHEED studies were $8\times 8$ mm$^{2}$ squares cut from specially treated commercial boron-doped Czochralski-grown (CZ) Si$(100)$ wafers ($p$-type, $\rho\,= 12~{\Omega}\,$cm). After washing and chemical treatment following the standard procedure described elsewhere [@cleaning_handbook], which included washing in ethanol, etching in a mixture of HNO$_3$ and HF and rinsing in deionized water [@VCIAN2011], the silicon substrates were loaded into the airlock and transferred into the preliminary annealing chamber where they were outgassed at a temperature of around 565 °C for more than 6h. 
After that, the substrates were moved for final treatment and Ge deposition into the MBE chamber where they were subjected to a two-stage annealing during heating, with stoppages at 600 °C for 5min and at 800 °C for 3min [@classification; @stm-rheed-EMRS]. The final annealing at a temperature greater than 900 °C was carried out for nearly 2.5min with a maximum temperature of about 925 °C (1.5min). Then, the temperature was rapidly lowered to about 750 °C. The rate of the further cooling was around 0.4 °C/s, which corresponded to the ‘quenching’ mode applied in [@stm-rheed-EMRS]. The surfaces of the silicon substrates were completely purified of the oxide film as a result of this treatment [@our_Si(001)_en; @phase_transition; @stm-rheed-EMRS]. Ge was deposited directly on the deoxidized Si(001) surface. The deposition rate was varied from about $0.1$ to $0.15$Å/s; the effective Ge film thickness $(h_{\rm Ge})$ was varied from 3 to 18Å for different samples. The substrate temperature during Ge deposition $(T_{\rm gr})$ was 360 °C for the low-temperature mode and 600 or 650 °C for the high-temperature mode. The rate of the sample cooling down to room temperature was approximately 0.4 °C/s after the deposition. ### Preparation of multilayer structures {#preparation-of-multilayer-structures .unnumbered} Ge/Si heterostructures with buried Ge layers were grown on CZ $p$-Si$(100)$:B wafers ($\rho\,= 12~{\Omega}\,$cm) washed and outgassed as described above. 
Deoxidized Si(001) surfaces were prepared by a process that allowed us to obtain clean substrate surfaces (this was verified by STM and RHEED) and perfect epitaxial interfaces with Si buffer layers (verified by HRTEM): the wafers were annealed at 800 °C under a Si flux of $\apprle 0.1$Å/s until the total amount of the deposited Si, expressed in the units of the Si film thickness indicated by the film thickness monitor, reached 30[Å]{}; 2-minute stoppages of Si deposition were made first twice after every 5Å and then twice after every 10Å. Afterwards, a $\sim$100nm thick Si buffer was deposited on the prepared surface at a temperature of $\sim$650 °C. Then, a single Ge layer or a multilayer Ge/Si structure was grown. The number of Ge layers in multilayer structures reached 15 but usually was 5; their effective thickness ($h_{\rm Ge}$), constant within each sample, was varied from sample to sample in the range from 4 to 18Å; the thickness of the Si spacers ($h_{\rm Si}$) was $\sim$50nm. The Ge deposition temperature was $\sim$360 °C; Si spacers were grown at $\sim$530 °C. A heterostructure formed in such a way was capped by a $\sim$100nm thick Si layer grown at $\sim$530 °C. All layers were undoped. The samples were quenched after the growth at a rate of $\sim$0.4 °C/s. ### [Growth of *p–i–n*]{}-structures {#growth-of-pin-structures .unnumbered} [*p–i–n*]{}-structures were grown on commercial phosphorus-doped CZ $n$-Si(100) substrates ($\rho = 0.1\,\Omega$cm). Si surfaces were prepared for structure deposition in the same way as for the growth of the multilayer structures. $i$-Si buffer domains of various thicknesses were grown on the clean surfaces at $\sim$650 °C. Then, a stacked structure of several periods of quantum dot (QD) dense arrays separated by Si barriers was grown under the same conditions as the multilayer structures; $h_{\rm Si}$ was widely varied in different structures, reaching 50nm; $h_{\rm Ge}$ was always 10Å. 
A sufficiently thick undoped Si layer separated the stacked QD array from the Si:B cap doped during the growth; both layers were grown at $\sim$530 °C. Figure \[fig:p-i-n\_Schematics\] demonstrates two such structures (referred to as R163 and R166) which are in the focus of this article. Their caps were doped to $5\times 10^{18}$ and $10^{19}$cm$^{-3}$ in the R163 and R166 samples, respectively. Buffer layer and barrier thicknesses were 99 and 8nm in the R163 structure and 1690 and 30nm in R166. Mesas were formed on the samples for photoelectric measurements. Ohmic contacts were formed by thermal deposition of aluminum. Terahertz BWO-spectroscopy {#terahertz-bwo-spectroscopy .unnumbered} -------------------------- The BWO-spectrometers provide broad-band operation (frequencies $\nu$ ranging from 30 GHz to 2 THz), high frequency resolution ($\Delta \nu/\nu = 10^{-5}$), a broad dynamic range (40–50 dB), continuous frequency tuning and, very importantly, the possibility of *direct* determination of the spectra of any “optical” parameter, like the complex conductivity, the complex dielectric permittivity, etc. (‘direct’ means that no Kramers–Kronig analysis—typical for far-infrared Fourier-transform spectroscopy—is needed). The principle of operation of BWO-spectrometers is described in detail in the literature (see, e.g., [@6-Kozlov-Volkov; @7-Gorshunov-BWO_spectroscpoy]). It is based on the measurement of the complex transmission coefficient $Tr^{*} = Tr\exp(i\varphi)$ of a plane-parallel sample with subsequent calculation of the spectra of its optical parameters from those of the transmission coefficient amplitude $Tr(\nu)$ and the phase $\varphi(\nu)$. 
The corresponding expression can be written as [@8-Born-Wolf; @9-Dressel] $$Tr^{*}=Tr\exp(i\varphi) = \frac{T_{12}T_{21}\exp(i\delta)}{1+ T_{12}T_{21}\exp(2i\delta)}.\label{eqn:THz-Eq1}$$ Here $$\begin{aligned} \nonumber T_{pq}= t_{pq}\exp(i\varphi_{pq}),\quad t^{2}_{pq}=\frac{4(n_{p}^2+k_{p}^2)}{(k_p + k_q)^2+(n_p + n_q)^2},\quad \varphi_{pq}= \arctan\left\{\frac{k_pn_p-k_qn_q}{n_p^2+k_p^2+n_pn_q+k_pk_q}\right\}\end{aligned}$$ are the Fresnel coefficients for the ‘air–sample’ interfaces; the indices $p,~ q = 1,~ 2$ refer, respectively, to air (refractive index $n_1 = 1$, extinction coefficient $k_1 = 0$) and to the material of the sample $(n_2,~k_2)$; $\delta = \frac{2{\pi}d}{\lambda}(n_2+ik_2)$, where $d$ is the sample thickness and $\lambda$ is the radiation wavelength. The sample parameters (for instance, $n_2$ and $k_2$) are found for each fixed frequency by solving two coupled equations for the two unknowns, $Tr(n_2, k_2, \nu) = Tr_{\mathrm{exp}}(\nu)$ and $\varphi(n_2, k_2, \nu) = \varphi_{\mathrm{exp}}(\nu)$ \[here $Tr_{\mathrm{exp}}(\nu)$ and $\varphi_{\mathrm{exp}}(\nu)$ are the measured quantities\]. The so-found values of $n_2(\nu)$ and $k_2(\nu)$ can then be used to derive the spectra of the complex permittivity $\varepsilon^*(\nu) = \varepsilon'(\nu) + i \varepsilon''(\nu) = n_2^2 - k^2_2 + 2 i n_2 k_2$, the complex conductivity $\sigma^*(\nu) = \sigma_1(\nu) + i \sigma_2(\nu) = \nu n_2 k_2 + i \nu (\varepsilon_{\infty} - \varepsilon')/2$, etc. ($\varepsilon_{\infty}$ is the high-frequency contribution to the permittivity). If the sample is characterized by a low enough absorption coefficient, Fabry–Perot-like interference of the radiation within the plane-parallel layer leads to interference maxima and minima in the transmission coefficient spectra. 
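The single-slab response described by Eq. (\[eqn:THz-Eq1\]) can be evaluated numerically. The sketch below (our own illustration, not the spectrometer's software) uses the equivalent characteristic-matrix form of the slab transmission at normal incidence, together with the $\varepsilon^* = (n_2 + ik_2)^2$ relation quoted above; all function names are ours.

```python
import cmath

def slab_transmission(n2, k2, d, lam):
    """Complex transmission Tr* of a plane-parallel slab in air at normal
    incidence (transfer-matrix equivalent of Eq. (1)); d and lam share units."""
    N2 = n2 + 1j * k2                    # complex refractive index of the sample
    delta = 2 * cmath.pi * d * N2 / lam  # complex phase thickness
    m00 = cmath.cos(delta)
    m01 = -1j * cmath.sin(delta) / N2
    m10 = -1j * N2 * cmath.sin(delta)
    m11 = cmath.cos(delta)
    # air (N = 1) on both sides of the slab
    return 2.0 / (m00 + m01 + m10 + m11)

def permittivity(n2, k2):
    """eps' and eps'' from (n2, k2), as in the text: eps* = (n2 + i k2)^2."""
    return (n2 * n2 - k2 * k2, 2.0 * n2 * k2)
```

As a sanity check, a lossless slab of half-wave optical thickness ($\delta = \pi$) transmits with $|Tr^*| = 1$: this is exactly the Fabry–Perot interference maximum exploited below for low-absorption samples.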
In this case there is no need to measure the phase shift spectra, since the pairs of optical quantities of the sample can be calculated from the transmission coefficient spectrum alone: the absorptive part (like $\varepsilon''$ or $\sigma_1$) is determined from the amplitudes of the interference maxima and the refractive part (like $\varepsilon'$ or $n$) is calculated from their positions [@6-Kozlov-Volkov; @7-Gorshunov-BWO_spectroscpoy]. When measuring the dielectric response of films (like the heterostructures in the present case) on dielectric substrates, first the dielectric properties of the substrate material are determined by the standard techniques just described. Next, one measures the spectra of the transmission coefficient and of the phase shift of the film–substrate system, and it is these spectra that are used to derive the dielectric response of the film by solving two coupled equations for the two unknowns—the “optical” parameters of the film. The corresponding expression for the complex transmission coefficient of a two-layer system can be written as [@8-Born-Wolf; @9-Dressel]: $$Tr^*_{1234}= Tr\exp(i\varphi)= \frac{T_{12}T_{23}T_{34}\exp\{i (\delta_2+\delta_3)\}}{1+T_{23}T_{34}\exp(2i\delta_3)+T_{12}T_{23}\exp(2i\delta_2)+ T_{12}T_{34}\exp\{2i(\delta_2+\delta_3)\}}, \label{eqn:THz-Eq2}$$ where the indices 1 and 4 refer to the media on the two sides of the sample, i.e., of the film on the substrate, and $\delta_p = \frac{2{\pi}d_p}{\lambda}(n_p + i k_p)$, with $d_p$ being the film and substrate thicknesses ($p = 2,~ 3$). The other notations are the same as in Eq. (\[eqn:THz-Eq1\]). The measurements are performed in a quasioptical configuration; no waveguides are used [@6-Kozlov-Volkov; @7-Gorshunov-BWO_spectroscpoy], which makes the measurement schemes extremely flexible. All measurement and analysis procedures are PC-controlled. The most important parameters of the BWO-spectrometer are summarized in Table \[tab:BWO\_parameters\]. 
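The film-on-substrate inversion described above — measure $Tr$ and $\varphi$ of the two-layer stack, then solve two coupled equations for the film's two unknowns — can be sketched as follows. A transfer-matrix product stands in for the explicit Eq. (\[eqn:THz-Eq2\]), and a naive grid search stands in for whatever root solver the actual spectrometer software uses; all names and the test values are illustrative assumptions.

```python
import cmath

def layer_matrix(N, d, lam):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * cmath.pi * d * N / lam
    return (cmath.cos(delta), -1j * cmath.sin(delta) / N,
            -1j * N * cmath.sin(delta), cmath.cos(delta))

def two_layer_transmission(N_film, d_film, N_sub, d_sub, lam):
    """Complex transmission of film + substrate in air, cf. Eq. (2)."""
    a00, a01, a10, a11 = layer_matrix(N_film, d_film, lam)
    b00, b01, b10, b11 = layer_matrix(N_sub, d_sub, lam)
    m00 = a00 * b00 + a01 * b10
    m01 = a00 * b01 + a01 * b11
    m10 = a10 * b00 + a11 * b10
    m11 = a10 * b01 + a11 * b11
    return 2.0 / (m00 + m01 + m10 + m11)

def fit_film(tr_measured, d_film, N_sub, d_sub, lam, n_grid, k_grid):
    """Recover (n, k) of the film from one measured complex transmission
    by brute-force minimisation of |model - measurement|."""
    best = None
    for n in n_grid:
        for k in k_grid:
            model = two_layer_transmission(n + 1j * k, d_film, N_sub, d_sub, lam)
            err = abs(model - tr_measured)
            if best is None or err < best[0]:
                best = (err, n, k)
    return best[1], best[2]
```

A synthetic round trip — generate $Tr^*$ for a known film on a known substrate, then confirm the search returns the same $(n, k)$ — is a convenient way to validate such an inversion before applying it to measured spectra.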
Results and Discussion {#results-and-discussion .unnumbered} ====================== Morphology and defects {#morphology-and-defects-1 .unnumbered} ---------------------- ### STM and RHEED study of Ge/Si(001) QD arrays: morphology and formation {#stm-and-rheed-study-of-gesi001-qd-arrays-morphology-and-formation .unnumbered} Previously, we have shown in a number of [*STM studies*]{} [@classification; @VCIAN2011; @Nucleation_high-temperatures; @Hut_nucleation; @initial_phase; @CMOS-compatible-EMRS] that the process of hut array nucleation and growth at low temperatures starts from the occurrence of two types of 16-dimer nuclei [@Hut_nucleation] on wetting layer (WL) patches of 4-ML height [@initial_phase], giving rise to two known species of $\{105\}$-faceted clusters—pyramids and wedges [@classification]—which then, growing in height (both types) and in length (wedges), gradually occupy the whole wetting layer, coalesce and start to form a nanocrystalline Ge film (Figure \[fig:STM-360\]) [@VCIAN2011; @CMOS-compatible-EMRS]. This is the life cycle of hut arrays at temperatures below 600 °C. We refer to cluster growth at these temperatures as the low-temperature mode. At high temperatures (above 600 °C), only pyramids represent the family of huts: they were found to nucleate on the WL patches in the same process of 16-dimer structure occurrence as at low temperatures [@Nucleation_high-temperatures]. We failed to find wedges or their nuclei when Ge was deposited at these temperatures, and this fact still awaits a theoretical explanation. In addition to pyramids, shapeless Ge heaps, which facet during annealing, have been observed on the WL in the vicinity of pits and interpreted as possible precursors of large faceted clusters [@VCIAN2011; @Nucleation_high-temperatures]. 
Note that a mechanism of Ge hut formation via faceting of some shapeless structures appearing near WL irregularities, which resembles the process described in the current article, was previously considered as the only way of Ge cluster nucleation on Si(001) [@Nucleation; @Goldfarb_2005]. Now we realize that huts nucleate in a different way [@Hut_nucleation] and that the formation of the faceting heaps at high temperatures is a process competing with the appearance of real pyramidal huts, which arise due to the formation of the 16-dimer nuclei on tops of WL patches [@Hut_nucleation; @CMOS-compatible-EMRS; @initial_phase]. Yet, further evolution of the Ge heaps into finalized faceted clusters, such as domes, in the course of Ge deposition is not excluded [@Nucleation_high-temperatures]. During further growth at high temperatures, pyramids reach large sizes, becoming much greater than their low-temperature counterparts, and usually form incomplete facets or split edges (Figure \[fig:STM-600\]). An incomplete facet seen in Figure \[fig:STM-600\]a and especially a “pelerine” of multiple incomplete facets seen in Figure \[fig:STM-600\]b,c around the pyramid top indicate unambiguously that this kind of cluster grows from top to bottom, completing facets rather uniformly from apexes to bases, with the bottom corners of the facets filled last. Sometimes this results in edge splitting near the pyramid base (Figure \[fig:STM-600\]b,d). [*RHEED*]{} has allowed us to carry out [*in-situ*]{} explorations of the forming cluster arrays. We have compared the RHEED patterns of Ge/Si(001) surfaces during Ge deposition at different temperatures and the dynamics of the diffraction patterns during sample heating and cooling. Diffraction patterns of reflected high-energy electrons for samples of thin ($h_{\rm Ge}=$ 4Å) Ge/Si(001) films deposited at high (650 or 600 °C) and low (360 °C) temperatures with equal effective thicknesses are presented in Figure \[fig:rheed\]a,b. 
The patterns are similar and represent a typical $(2\times 1)$ structure of the Ge WL; reflexes associated with the appearance of huts (the 3D-reflexes) are absent in both images, which agrees with the data of the STM analysis. The diffraction patterns presented in Figure \[fig:rheed\]a,c,e are related to samples with $h_{\rm Ge}$ increasing from 4 to 6Å. The 3D-reflexes are observed only in the pattern of the samples with $h_{\rm Ge}=$ 6[Å]{}, which is also in good agreement with the STM data [@VCIAN2011; @initial_phase]. The influence of sample annealing at the deposition temperature is illustrated by the complementary pair of RHEED patterns given in Figure \[fig:rheed\]c,d. Annealing of specimens at the temperature of growth (650 °C) resulted in the appearance of the 3D-reflexes (Figure \[fig:rheed\]d), which also corresponds to the results of our STM studies [@VCIAN2011]. The difference in the evolution of the diffraction patterns during Ge deposition is a characteristic feature of the high-temperature mode of growth in comparison with the low-temperature one. The initial Si(001) surface before Ge deposition is $(2\times 1)$ reconstructed. At high temperatures, as $h_{\rm Ge}$ increases, the diffraction patterns evolve as $(2\times 1)\rightarrow$ $(1\times 1)\rightarrow$ $(2\times 1)$ with very weak half-order reflexes. The brightness of the half-order reflexes gradually increases (the $(2\times 1)$ structure becomes pronounced) and the 3D-reflexes arise only during sample cooling (Figure \[fig:RHEED\_cool-600\]). At low temperatures, the RHEED patterns change as $(2\times 1)\rightarrow$ $(1\times 1)\rightarrow$ $(2\times 1)\rightarrow$ $(2\times 1)+3$D-reflexes. The resultant pattern does not change during sample cooling. This observation reflects the process of Ge cluster “condensation” from the 2D gas of mobile Ge adatoms. 
High Ge mobility and a low cluster nucleation rate in comparison with fluxes to competitive sinks of adatoms determine the observed difference in the surface structure formation at high temperatures as compared with that at low temperatures [@VCIAN2011; @Nucleation_high-temperatures], when the adatom flux to nucleating and growing clusters predominates and the adatom (addimer) mobility is relatively small. ### STM and HRTEM study of Ge/Si heterostructures with QD array: morphology and defects {#stm-and-hrtem-study-of-gesi-heterostructures-with-qd-array-morphology-and-defects .unnumbered} Structures overgrown with Si were examined by means of HRTEM for structural perfection or possible defects, e.g., imperfections induced by the array defects reported in Ref. [@defects_ICDS-25]. The HRTEM data evidence that extended defects do not arise on the buried Ge clusters at low $h_{\rm Ge}$ and that perfect epitaxial heterostructures with quantum dots form under these conditions, which enables the formation of defectless multilayer structures suitable for device applications. Figure \[fig:TEM-6A\] relates to the five-layer Ge/Si structure with [*h$_{\mathrm{Ge}}$*]{}= 6Å. We succeeded in resolving separate Ge clusters whose height is, according to our STM data [@VCIAN2011; @classification], $\apprle$3ML over the WL patches (Figure \[fig:TEM-6A\]a,b). The lattice structure next to the cluster apex is not disturbed (Figure \[fig:TEM-6A\]c,d); its parameters estimated from the Fourier transform of an image taken from this domain (Figure \[fig:TEM-6A\]e,f), $\sim 5.4$Å along the \[001\] direction and $\sim 3.8$Å along \[110\], coincide within the accuracy of the measurements with the parameters of the undisturbed Si lattice. Stacking faults (SF) have been found to arise above Ge clusters at $h_{\rm Ge}$ as large as 10Å (Figure \[fig:TEM-10A1L\]). SFs often damage Si structures with overgrown Ge layers at these values of $h_{\rm Ge}$. 
A highly perfect structure is observed around the Ge clusters in Figure \[fig:TEM-10A1L\]a, although their height is up to 1.5 nm over the WL (the typical height of huts is known from both our STM and HRTEM data). Yet, a tensile strained domain containing such extended defects as SFs and twin boundaries forms over the cluster shown in Figure \[fig:TEM-10A1L\]b,c (twinning is clearly observable in Figure \[fig:TEM-10A1L\]d). One can see, however, that this cluster is extraordinarily high: its height over the WL exceeds 3.5 nm. Such huge clusters have been described by us previously as defects of arrays [@defects_ICDS-25]; we predicted in that article that such formations can destroy Ge/Si structures by generating high stress fields in the Si spacer layers and, as a consequence, introducing extended defects in device structures. As seen in Figure \[fig:TEM-10A1L\]b,c, the stress field spreads under the cluster into the Si buffer layer grown at a much higher temperature than the cap. Unfortunately, the huge Ge hut clusters (as we showed in Ref. [@defects_ICDS-25], they are not domes) usually appear in the arrays; their number density was estimated as $\sim$$10^9$cm$^{-2}$ from the STM data. Strain domains are also seen next to Ge clusters in the five-layer structures depicted in Figure \[fig:TEM-9-10A\] ([*h$_{\mathrm{Ge}}$*]{}= 9 or 10Å). We found that such domains are not inherent to all cluster vicinities but only to some of them (Figure \[fig:TEM-9-10A\]a,d). The disturbed strained domains give a contrast different from that of the undisturbed Si lattice (Figure \[fig:TEM-9-10A\]e). Zoom-in micrographs of the disturbed regions show a perfect order of atoms in the crystalline lattice (Figure \[fig:TEM-9-10A\]b,c,e,f) everywhere except for the closest vicinities of the Si/Ge interface, where point defects and a visible lattice disordering immediately next to the cluster are registered (Figure \[fig:TEM-9-10A\]b,c,e,f). 
However, somewhat farther from the interfaces but still near the cluster apexes the crystalline order is restored (Figure \[fig:TEM-9-10A\]h). We have estimated the lattice parameter in the disturbed regions from the Fourier transforms of the HRTEM micrographs taken in these domains (Figure \[fig:TEM-9-10A\]i). The values we obtained appeared to vary from region to region. Yet, they usually appreciably exceeded the Si lattice parameter. Moreover, they often reached the Ge parameter of $\sim$5.6–5.7Å along \[001\] and $\sim$4Å along \[110\]. This might be explained either by appreciable diffusion of Ge from the clusters (previously, we have already reported an appreciable diffusion of Si into Ge clusters in analogous structures from the covering Si layers grown at 530 °C [@Raman_conf; @our_Raman_en]) or by Si lattice stretching under the stress. Likely, both factors act. It is worthwhile emphasising that the stretched domains usually do not contain extended defects, as is seen from the HRTEM micrographs, except for the cases of array defects (huge clusters) like that demonstrated in Figure \[fig:TEM-10A1L\]. We suppose that the extended defects in these regions arise because the strain exceeds the elastic limit near huge clusters. Finally, we have tried to find out whether huge clusters exist in arrays with [*h$_{\mathrm{Ge}}$*]{}= 9Å (Figure \[fig:STM-9A\]). We have been convinced that even in rather uniform arrays large clusters (Figure \[fig:STM-9A\]e), which might generate considerable stress, are abundant, and even huge ones (Figure \[fig:STM-9A\]d), which should produce lattice disordering (extended defects), are present. The effect of such defects as huge clusters on device performance and the cause of their appearance in hut arrays await further detailed studies. 
Photo-emf of Ge/Si [*p–i–n*]{}-structures {#photo-emf-of-gesi-pin-structures .unnumbered} ----------------------------------------- ### Photo-emf spectra {#photo-emf-spectra .unnumbered} We have investigated heteroepitaxial Si [*p–i–n*]{}-diodes with multilayer stacks of Ge/Si(001) QD dense arrays built into the intrinsic domains and found them to exhibit a photo-emf in the wide spectral range from 0.8 to 5 $\mu$m [@NES-2011; @photon-2011]. An effect of wide-band irradiation by infrared light on the photo-emf spectra has been observed. Here we describe the most representative data obtained for two radically different structures denoted as R163 and R166 (Figure \[fig:p-i-n\_Schematics\]). Typical photo-emf spectra obtained for the R163 and R166 structures are presented in Figure \[fig:r163\_r166\]. In the spectra, we mark out three characteristic ranges which respond differently to the bias lighting and depend differently on its power. (i) [*Wavelength range from 0.8 to 1.0 $\mu$m*.]{} The photo-emf response increases with the increase in the bias lighting power, reaches a maximum at $P \approx 0.63$ mW/cm$^2$ with a Si filter and at $P \approx 2.6$ mW/cm$^2$ with a Ge filter, and decreases with a further increase in the power. \(ii) [*Wavelength range from 1.1 to 2.6 $\mu$m*.]{} The photo-emf response decreases monotonically in this range with the increase in the power of the bias lighting with either filter, Si or Ge. \(iii) [*Wavelength range &gt;2.6 $\mu$m*.]{} The photo-emf response increases with the increase in the bias lighting power and comes through its maximum at $P \approx 0.63$ mW/cm$^2$ if a Si filter is used and at $P \approx 0.25$ mW/cm$^2$ for a Ge filter. The response decreases with further growth of the bias lighting power for a Si filter and remains unchanged when a Ge filter is used. We propose the following model to explain these observations: In the studied structures, all QD layers are located in the $i$-domain (Figure \[fig:bands\]). 
One can see from these sketches that some QD layers are positively charged (the ground states of their QDs are above the Fermi level and hence filled with holes) while others are neutral (the QDs’ ground states are below the Fermi level and hence empty). Then, one may consider a QD layer as a single ensemble of interacting centers, because the average distance between the QDs’ apexes is about 13 nm whereas the QDs’ bases adjoin. Consequently, one can imagine an allowed energy band with some bandwidth, determined by the dispersion of the QDs’ sizes and composition, and a certain density of states in this band. Let us explore in detail every range of the photo-emf spectra taking into account the proposed model. ### Wavelength range from 0.8 to 1.0$\mu$m {#wavelength-range-from-0.8-to-1.0mum .unnumbered} Without bias lighting, all radiation in the Si fundamental absorption range can be believed to be absorbed in Si (the cap layer, spacers, buffer layer and substrate), and the QDs are not involved in the absorption, so the total charge of the QD layers remains unaltered. Electron–hole pairs are generated in the intrinsic region of the [*p–i–n*]{}-diode as a result of the absorption and are separated by the junction field, which converts the radiation to emf. However, the carrier separation is hindered because of the presence of potential barriers for holes in the valence band which are produced by the charged QD layers situated in the intrinsic domain. The calculated height of these barriers equals 0.1 to 0.2 eV depending on the layer position in the structure. Transitions from the QD ensemble states to the valence and conduction bands of Si start under bias lighting. Carriers excited by the bias lighting do not contribute to the photo-emf signal measured at the modulation frequency of the narrow-band radiation. QDs that have captured a photon change their charge state. 
The effective layer charge decreases as a result of the absorption of the bias lighting radiation, which leads to a reduction of the potential barrier height and to more efficient carrier separation in the junction field. This process explains the increase in the photo-emf response in the fundamental absorption range under bias lighting. ### Wavelength range from 1.1 to 2.6 $\mu$m {#wavelength-range-from-1.1-to-2.6mum .unnumbered} This band lies entirely below the Si fundamental absorption range. Therefore the response in this region cannot be explained in terms of absorption in bulk Si. One can explain the presence of the photo-emf signal in this region with the following model: Both hole transitions from the QD ensemble states to the valence band and electron transitions from the QD ensemble states to the conduction band are possible due to absorption of photons with energies between $\sim 1.12$ and $\sim 0.4$ eV. The probability of each kind of transition is determined by the photon energy, the density of states in the QD ensemble and the effective charge of the QD layer. It follows from theoretical studies [@Gerasimenko_Si-mat_nanoelectr; @Brudnyi-Ge-small_QD] and photoluminescence experiments [@PL-Si/Ge_1.4-1.8mcm; @PL-Si/Ge] that photons with energies ranging from 0.7 to 0.9 eV are required for electron transitions from the QD states to the conduction band. However, it is necessary to mention the research on photoconductivity [@Talochkin-Lateral_photoconductivity_Ge/Si], in which electron transitions at low photon energy ($\sim 0.4$ eV) have been shown to be likely. The availability of these transitions is explained by the dispersion of QD sizes and composition, the effect of diffusion at the hetero-interface and deformation effects. The likelihood of electron transitions drops rapidly as the photon energy decreases, because of the reduction of the density of states in the QD ensemble when approaching the conduction band edge. 
This is the reason for the observed monotonic decrease in the photo-emf signal with increasing radiation wavelength in this range. At the same time, switching on the bias lighting leads to a growth of the concentration of unmodulated (“dark”) carriers, to depletion of the QDs and, as a consequence, to the observed reduction of the photo-emf response at the chopping frequency. ### Wavelength range >2.6 $\mu$m {#wavelength-range-2.6mum .unnumbered} As mentioned above, electron transitions can happen at a low energy of the exciting radiation ($\sim$0.4 eV), which corresponds to a wavelength of $\sim$3.1 $\mu$m. Yet, the photo-emf signal is observed in our measurements at radiation wavelengths up to 5 $\mu$m. The presence of the photo-emf response in this range can only be explained if the QD layer is considered as a single ensemble of mutually interacting centers. An effective positive charge in the QD layer forms a potential well for electrons in the conduction band. This leads to a reduction of the energy needed for electron transitions from the QD ensemble states to the conduction band. Partial emptying of the states makes electron transitions possible and, at the same time, does not lead to a significant change in the depth of the potential wells. As a result, electron transitions can happen at exciting radiation energies as low as 0.25 eV. Hole transitions can also happen at these energies via a large number of excited states in the QD ensemble. It may be concluded that the likelihood of the electron transitions decreases faster than that of the hole transitions as the exciting radiation energy decreases in the considered wavelength range. However, the levels first have to be emptied by the electron transitions to make the hole transitions possible. This can be achieved by using additional radiation in the spectral domain where the probability of the electron transitions is high. 
So, bias lighting stimulates the hole transitions by exciting electrons, which empties the levels. In this case the electron concentration is not modulated, as distinct from the hole concentration, which is modulated at the chopper frequency. This explains the observed low magnitude of the photo-emf in the wavelength range >2.6 $\mu$m and its increase under bias lighting. ### Influence of buffer layer thickness on photo-emf spectra {#influence-of-buffer-layer-thickness-on-photo-emf-spectra .unnumbered} As seen from Figure \[fig:bands\], the buffer layer thickness determines the position of the QD layers in the intrinsic domain and thus controls the relative position of the Fermi level and the mini-band of the QD array in the region where the QD layers are situated. The charge of the QD layer is determined by the band occupation of the QD ensemble which, in turn, is controlled by the Fermi level location. For this reason the effect of bias lighting on the photo-emf generated by the narrow-band radiation in the fundamental absorption range is much stronger for the R166 structure, which has a thick buffer layer, than for the R163 one. This is clearly seen in Figure \[fig:bias\]. The absolute value of the photo-emf in the R166 structure is lower than that in the R163 sample due to higher potential barriers for holes in the valence band. Yet, the photo-emf response increases with the growth of the bias lighting power much more strongly in the R166 [*p–i–n*]{}-diode than in the R163 one. 
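The photon-energy-to-wavelength correspondences quoted in this discussion ($\sim$0.4 eV corresponding to $\sim$3.1 $\mu$m, and 0.25 eV to roughly the 5 $\mu$m edge of the measured range) follow from $\lambda[\mu\mathrm{m}] \approx 1.2398/E[\mathrm{eV}]$. The snippet below is only an illustrative sanity check of these numbers, not part of the original analysis:

```python
# Convert photon energy (eV) to vacuum wavelength (um) via lambda = h*c/E.
H_C_EV_UM = 1.23984  # h*c in eV*um (CODATA value, rounded)

def energy_to_wavelength_um(energy_ev: float) -> float:
    """Return the vacuum wavelength in micrometres for a photon energy in eV."""
    return H_C_EV_UM / energy_ev

# 0.4 eV: the low-energy electron-transition threshold cited in the text (~3.1 um).
# 0.25 eV: the lowest transition energy inferred from the photo-emf data (~5 um).
```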
### Prospective photovoltaic IR detectors {#prospective-photovoltaic-ir-detectors .unnumbered} On the basis of our results on the photo-emf in the Si [*p–i–n*]{}-structures with Ge quantum dots, we have recently proposed [@Yur1-patent-Ge] a new design of photovoltaic quantum dot infrared photodetectors. It enables the detection of variations of the photo-emf produced by narrow-band radiation in the Si fundamental absorption range (a reference beam) under the effect of the wide-band IR radiation, which induces changes in the Coulomb potential of the quantum dot ensemble and thereby affects the efficiency of the photovoltaic conversion of the reference beam. In these detectors, the quantum dot array resembles the grid of a triode controlled by the detected IR light. The reference narrow-band radiation generates a potential between the anode and cathode of this optically driven quantum dot triode; the magnitude of this voltage depends on the charge of the QD grid (Figure \[fig:bands\]). Such detectors can be fabricated on the basis of any appropriate semiconductor structures with potential barriers, e.g., [*p–i–n*]{}-structures, $p$–$n$-junctions or Schottky barriers, and built-in arrays of nanostructures. There are many ways to deliver the reference beam to the detector, e.g., by irradiating the sensor with a laser or LED. We propose, however, that surface plasmon polaritons delivered to the detector structures by plasmonic waveguides [@Bozhevolnyi-waveguides; @Zayats-waveguides] be applied as the reference beams in the detector circuits. This approach makes such detectors, if based on Si, fully compatible with existing CMOS fabrication processes [@Zayats-Si_waveguides] which, in turn, opens a way to the development of plasmonic IR detector arrays on the basis of monolithic silicon technology. 
THz conductivity of multilayer Ge/Si QD arrays {#thz-conductivity-of-multilayer-gesi-qd-arrays .unnumbered} ---------------------------------------------- The effective dynamic conductivity of the Ge quantum dot layers was determined by measuring the transmission coefficient spectra of heterostructures grown on Si(001) substrates. The characteristics of the substrates were determined beforehand, as demonstrated by Figures \[fig:THz-fig1\] and \[fig:THz-fig2\]. In Figure \[fig:THz-fig1\], the interferometric pattern in the transmission coefficient spectrum $Tr(\nu)$ of a plane-parallel Si substrate is clearly seen. The pronounced dispersion of the $Tr(\nu)$ peaks and their temperature dependence allow us to extract the parameters of the charge carriers (holes) by fitting the spectra with Eq. (\[eqn:THz-Eq2\]) and by modelling the sample properties with the Drude conductivity model, in which the complex AC conductivity is given by the expression [@9-Dressel; @10-Sokolov] $$\sigma^*(\nu) = \sigma_1(\nu) + i\sigma_2(\nu) = \frac{\sigma_0\gamma^2}{\gamma^2+\nu^2} + i\frac{\sigma_0\nu\gamma}{\gamma^2+\nu^2}. \label{eqn:THz-Eq3}$$ Here $\sigma_1$ is the real part and $\sigma_2=\nu(\varepsilon_{\infty} - \varepsilon')/2$ is the imaginary part of the conductivity, $\varepsilon_{\infty}$ is the high-frequency dielectric constant, $\sigma_0=\nu^2_{\mathrm{pl}}/2\gamma$ is the DC conductivity, $\nu_{\mathrm{pl}} = (ne^2 /{\pi}m^*)^{\frac{1}{2}}$ is the plasma frequency of the carriers, $n$, $e$ and $m^*$ are, respectively, their concentration, charge and effective mass and $\gamma$ is their scattering rate. Figure \[fig:THz-fig2\] shows the temperature variation of the plasma frequency and the scattering rate of the charge carriers. The lowering of the plasma frequency is mainly connected with the freezing out of the carriers, and the $\gamma(T)$ behaviour is well described by a $T^{-\frac{3}{2}}$ dependence, as expected. 
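The limiting behaviour of the Drude expression above is easy to verify numerically: at $\nu=0$ the real part reduces to the DC conductivity $\sigma_0$, and at $\nu=\gamma$ the real and imaginary parts are both $\sigma_0/2$. A minimal sketch (illustrative only; the parameter values are arbitrary, not fitted values from the paper):

```python
def drude_conductivity(nu: float, sigma0: float, gamma: float) -> complex:
    """Complex Drude AC conductivity sigma*(nu) = sigma1 + i*sigma2.

    sigma0 is the DC conductivity and gamma the carrier scattering rate,
    both in the same (arbitrary) frequency units as nu.
    """
    denom = gamma**2 + nu**2
    sigma1 = sigma0 * gamma**2 / denom   # real part
    sigma2 = sigma0 * nu * gamma / denom  # imaginary part
    return complex(sigma1, sigma2)
```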
The values of the effective dynamical conductivity and the absorption coefficient $\alpha = 4{\pi}k/\lambda$ of the heterostructures with Ge quantum dots were determined based on measurements of the terahertz transmission coefficient spectra of the Si substrate with the heterostructure on it, compared to the spectra of the same substrate with the heterostructure etched away; this allowed us to avoid the influence of (even slight) differences in the dielectric properties of substrates cut from a standard commercial silicon wafer. By comparing the so-measured transmissivity spectra we reliably detect the small changes in the amplitudes of the interference maxima of a bare substrate caused by the heterostructures. This is demonstrated by Figure \[fig:THz-fig3\]: at $T = 300$K we clearly and firmly register a 2% lowering of the peak transmissivity introduced by the heterostructure. On cooling, the difference decreases, and we were not able to detect it below about 170K, see Figure \[fig:THz-fig4\]. Correspondingly, as is seen in Figure \[fig:THz-fig4\], the AC conductivity of the heterostructure decreases on cooling, along with the conductivity of the Si substrate. The latter observation might be an indication that the charges are delivered into the quantum dot array from the substrate; this statement, however, needs further exploration. Measuring the room temperature spectra, we have found that the AC conductivity and the absorption coefficient of the heterostructure do not depend on the effective thickness (measured by the quartz sensors during MBE) of the germanium layer ($h_{\mathrm{Ge}}$) for $h_{\mathrm{Ge}}$ ranging from 8 to 14Å, see Figure \[fig:THz-fig5\]. For larger coverage, $h_{\mathrm{Ge}}>14$Å, both quantities start to decrease. 
One of the main findings of this work is that the AC conductivity and the absorption coefficient of the Ge/Si heterostructures are significantly higher than those of a structure with the same amount of germanium not organized in an array of quantum dots. The crucial role played by the quantum dots is supported by the decrease of $\sigma_{\rm AC}$ and $\alpha$ observed for large germanium coverage ($h_{\mathrm{Ge}}>14$Å), when the structurization into quantum dots gets less pronounced and the thickness of the Ge layer becomes more uniform. On the other hand, it is worth noting that no extra absorption of terahertz radiation was detected in the samples with low coverage, $h_{\mathrm{Ge}}=4.4$ and 6Å. This can be explained either by the absence of quantum dots in such a thin Ge layer or by their small sizes, by a large fraction of the free wetting layer or by relatively large distances between the clusters compared to their sizes, i.e., by the absence or smallness of the effect of the quantum dots on the dielectric properties of the heterostructure. As seen from Figure \[fig:THz-fig5\], the values $\sigma_{\rm AC}\approx 100\,\Omega^{-1}\mathrm{cm}^{-1}$ and $\alpha\approx 4000\,\mathrm{cm}^{-1}$ are considerably higher than the values measured for bulk germanium: $\sigma_{\rm AC}({\rm Ge})\approx 10^{-2}\,\Omega^{-1}\mathrm{cm}^{-1}$, by about four orders of magnitude, and $\alpha({\rm Ge})\approx 40\,\mathrm{cm}^{-1}$, by about two orders of magnitude. Assuming that the AC conductivity of the heterostructure is connected with the response of (quasi-)free carriers, one can express it with the standard formula $\sigma=e\mu n =ne^2(2\pi \gamma m^*)^{-1}$ ($\mu$ is the mobility of the charge carriers). Then the observed increase has to be associated with a considerable enhancement either of the mobility (suppression of the scattering rate) of charge carriers within the quantum dot array or of their concentration. 
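The quoted enhancement factors follow directly from the measured values; a minimal check, with the numbers taken from the text (Figure THz-fig5 and the bulk-Ge reference values):

```python
import math

# Values quoted in the text, in Ohm^-1 cm^-1 and cm^-1, respectively.
sigma_qd, sigma_ge = 100.0, 1e-2   # AC conductivity: QD heterostructure vs bulk Ge
alpha_qd, alpha_ge = 4000.0, 40.0  # absorption coefficient: QD heterostructure vs bulk Ge

orders_sigma = math.log10(sigma_qd / sigma_ge)  # enhancement of sigma, in decades
orders_alpha = math.log10(alpha_qd / alpha_ge)  # enhancement of alpha, in decades
```

As expected, the conductivity enhancement is four orders of magnitude and the absorption enhancement is two.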
The second possibility has to be disregarded, since the total concentration of charges in the sample (substrate plus heterostructure) remains unchanged. As far as the mobility increase is concerned, we are not aware of a mechanism that could lead to its growth by orders of magnitude when charges get localized within the quantum dot array. Another interpretation of the observed excess AC conductivity could be based on some kind of [*resonance*]{} absorption of terahertz radiation. Infrared experiments are known to exhibit resonances in quantum dot arrays caused by transitions between quantized energy levels, as well as between the split levels and the continuum of the valence or conduction band [@4-Drexler-InGaAs; @11-Heitmann; @12-Boucaud; @13-Weber; @14-Savage]. Carriers localized within quantum dots can form bound states with the carriers in the surrounding continuum (excitons) or with optical phonons (polarons), which can in turn interact with each other and form collective complexes [@3-Colomb_interactions-Dvur; @4-Drexler-InGaAs; @12-Boucaud; @13-Weber; @14-Savage; @15-Hameau]. Plasma excitations generated by electromagnetic radiation in an assembly of conducting clusters or quantum dots also have energies of about 10 meV [@16-Sikorski; @18-Dahl; @17-Demel], i.e., they fall into the THz band. It is important that these effects can be observed not only at low but also at elevated temperatures, up to room temperature. At this stage, we are not able to unambiguously identify the origin of the THz absorption seen at $T = 170$ to 300K in the Ge/Si heterostructures with Ge quantum dots. Among the aforementioned, the mechanisms involving polarons or plasma excitations seem to be the least affected by thermal fluctuations and could be considered as possible candidates. 
To get detailed insight into the microscopic nature of the observed effect, further investigations of heterostructures with various geometric and physical parameters, as well as in wider frequency and temperature intervals, are in progress. Conclusions {#conclusions .unnumbered} =========== In conclusion, we summarize the main results of the article. Using high resolution STM and [*in-situ*]{} RHEED we have explored the processes of Ge hut cluster array formation and growth at low temperatures on the Ge/Si(001) wetting layer. The different dynamics of the RHEED patterns in the process of Ge hut array formation at low and high temperatures of Ge deposition reflects the difference in adatom mobility and their fluxes from the 2D gas of mobile particles (atoms, dimers and dimer groups) on the surface, which govern the nucleation rates and densities of the arising Ge clusters. HRTEM studies of multilayer Ge/Si heterostructures with buried arrays of Ge huts have shown that the domains of stretched lattice occurring over Ge clusters in Si layers at high Ge coverages usually do not contain extended defects. We suppose that extended defects in these regions arise when the strain exceeds an elastic limit near huge clusters. Silicon [*p–i–n*]{}-diodes with multilayer stacks of Ge cluster arrays built in the [*i*]{}-domains have been found to exhibit a photo-emf in a wide spectral range from 0.8 to 5 $\mu$m. A significant increase in the photo-emf response in the fundamental absorption range under wide-band IR radiation has been reported and explained in terms of positive and neutral charge states of the quantum dot layers and the Coulomb potential of the quantum dot ensemble. A new type of photovoltaic QDIPs is proposed in which the photovoltage generated by a reference beam in the fundamental absorption band is controlled by the QD grid charge induced by the detected IR radiation [@Yur1-patent-Ge]. 
Using a BWO-spectrometer, the first measurements of the terahertz dynamical conductivity spectra of Ge/Si heterostructures were carried out at frequencies ranging from 0.3 to 1.2 THz in the temperature interval from 5 to 300K. The effective dynamical conductivity of the heterostructures with Ge quantum dots has been found to be significantly higher than that of a structure with the same amount of Ge not organized in quantum dots. The excess conductivity is not observed in structures with a Ge coverage of less than 8Å. When a Ge/Si sample is cooled down, the conductivity of the heterostructure decreases. Abbreviations {#abbreviations .unnumbered} ============= AC, alternating current; BWO, backward-wave oscillator; CMOS, complementary metal-oxide semiconductor; CZ, Czochralski or grown by the Czochralski method; DC, direct current; emf, electromotive force; HRTEM, high resolution transmission electron microscope; IR, infrared; LED, light emitting diode; MBE, molecular beam epitaxy; ML, monolayer; QD, quantum dot; QDIP, quantum dot infrared photodetector; RHEED, reflection high energy electron diffraction; SF, stacking fault; SIMS, secondary ion mass spectroscopy; STM, scanning tunneling microscope; WL, wetting layer; UHV, ultra-high vacuum. Competing interests {#competing-interests .unnumbered} =================== The authors declare that they have no competing interests. Authors’ contributions {#authors-contributions .unnumbered} ===================== VAY conceived of the study and designed it, performed data analysis, and took part in discussions and interpretation of the results; he also supervised and coordinated the research projects. LVA participated in the design of the study, carried out the experiments, performed data analysis, and took part in discussions and interpretation of the results. MSS investigated the photo-emf spectra; he carried out the experiments, performed data analysis, and took part in discussions and interpretation of the results. 
VAC participated in the design of the study and took part in discussions and interpretation of the results; he also supervised the research performed by young scientists and students. KVC took part in the experiments on the photo-emf spectra and the terahertz conductivity; he prepared the experimental samples and took part in discussions and interpretation of the results. OVU performed the HRTEM studies and took part in discussions and interpretation of the results. VPK participated in the design of the study and took part in discussions and interpretation of the results; he also supervised the research project. ESZ carried out the experiments on terahertz spectroscopy; she performed measurements and data analysis, and took part in discussions and interpretation of the results. ASP participated in the terahertz spectroscopy studies; he took part in discussions and interpretation of the results. IES participated in the terahertz spectroscopy studies; he took part in discussions and interpretation of the results. BPG performed the explorations by terahertz spectroscopy; he participated in the design of the study, performed measurements and data analysis, and took part in discussions and interpretation of the results; he also supervised the research project. 
Acknowledgements {#acknowledgements .unnumbered} ================ Tables {#tables .unnumbered} ====== Table \[tab:BWO\_parameters\] - Main parameters of the terahertz BWO-spectrometer {#tabletabbwo_parameters---main-parameters-of-the-terahertz-bwo-spectrometer .unnumbered} --------------------------------------------------------------------------------- Figures {#figures .unnumbered} ======= ![image](Fig_1_Morph) Figure \[fig:STM-360\] - STM images of Ge/Si(001) quantum dot arrays grown at 360 °C: {#figurefigstm-360---stm-images-of-gesi001-quantum-dot-arrays-grown-at-360 .unnumbered} ----------------------------------------------------------------------------------- $h_{\rm Ge}$ (Å) is (a) 6, (b) 8, (c) 10, (d) 14, (e) 15, (f) 18. ![image](Figure_3_Morph) Figure \[fig:STM-600\] - STM empty-state images of high-temperature pyramids: {#figurefigstm-600---stm-empty-state-images-of-high-temperature-pyramids .unnumbered} ------------------------------------------------------------------------------ $T_{\rm gr}=650$ °C; (a) $87\times87$nm, steps of the incomplete upper left facet, running normal to the base side, are seen near the left corner of the pyramid; (b) $87\times87$nm, a cluster with edges split near the base and an apex formed by a set of incomplete {105} facets; (c) $57\times57$nm, a magnified image of a facet with several {105} incomplete facets near an apex; (d) $22\times22$nm, a split edge near a base. 
![image](Figure_2_Morph) Figure \[fig:rheed\] - *In situ* RHEED patterns of Ge/Si(001) films: {#figurefigrheed---in-situ-rheed-patterns-of-gesi001-films .unnumbered} --------------------------------------------------------------------- *E* = 10keV, \[110\] azimuth; (a) $T_{\rm gr} =$ 650 °C, $h_{\rm Ge}=$ 4Å; (b) $T_{\rm gr} =$ 360 °C, $h_{\rm Ge}=$ 4Å; (c) $T_{\rm gr} =$ 650 °C, $h_{\rm Ge}=$ 5Å; (d) $T_{\rm gr} =$ 650 °C, $h_{\rm Ge}=$ 5Å, annealing at the deposition temperature for 7 min; (e) $T_{\rm gr} =$ 650 °C, $h_{\rm Ge}=$ 6Å, a similar pattern is obtained for $T_{\rm gr} =$ 600 °C; the patterns were obtained at room temperature after sample cooling. ![image](Figure_4_Morph.eps) Figure \[fig:RHEED\_cool-600\] - RHEED patterns of Ge/Si(001) deposited at 600 °C obtained during sample cooling: {#figurefigrheed_cool-600---rheed-patterns-of-gesi001-deposited-at-600-obtained-during-sample-cooling .unnumbered} --------------------------------------------------------------------------------------------------------------- $h_{\rm Ge}=$ 6Å; *E* = 10keV, \[110\] azimuth; cooling rate is $\sim 0.4$ °C/s (see the cooling diagram in Ref. [@stm-rheed-EMRS]); (a) $T=600$ °C, before cooling; (b)–(d) during cooling, time from the beginning of cooling (min.): (b) 1, (c) 2, (d) 3; (e) room temperature, after cooling; arrows indicate the arising -reflexes to demonstrate the process of the $(2\times 1)$ pattern appearance; the images were cut from frames of a film. 
![image](Figure_0_HRTEM) Figure \[fig:TEM-6A\] - HRTEM data for the five-layer Ge/Si heterostructure with buried Ge clusters: {#figurefigtem-6a---hrtem-data-for-the-five-layer-gesi-heterostructure-with-buried-ge-clusters .unnumbered} ----------------------------------------------------------------------------------------------------- [*h$_{\mathrm{Ge}}$*]{}= 6Å (see Figure \[fig:STM-360\]a); (a) a long shot, the mark is 100 nm; (b) Ge clusters resolved in a layer, figure ‘1’ indicates one of the clusters, ‘2’ shows a WL segment; the mark is 50 nm; (c),(d) magnified images of a Ge cluster, the panel (d) corresponds to the light square in the panel (c); the marks are 10 and 5 nm, respectively; (e) a close-up image of a domain next to the top of the cluster imaged in (d); (f) the Fourier transform of the image (e), the measured periods are $\sim 5.4$Å along \[001\] and $\sim 3.8$Å along \[110\]; arrows in panels (c) to (f) indicate the \[001\] direction. ![image](Figure_2_HRTEM) Figure \[fig:TEM-10A1L\] - HRTEM images of the one-layer Ge/Si structures with buried Ge clusters: {#figurefigtem-10a1l---hrtem-images-of-the-one-layer-gesi-structures-with-buried-ge-clusters .unnumbered} --------------------------------------------------------------------------------------------------- [*h$_{\mathrm{Ge}}$*]{}= 10Å (see Figure \[fig:STM-360\]c); (a) a perfect epitaxial structure of Ge and Si layers; the mark is 10 nm; (b), (c) a huge cluster (>3.5 nm high) gives rise to tensile strain generating point and extended defects in the Si cap, the stress field spreads under the cluster \[the mark is 10 nm in (b)\]; (d) a magnified image obtained from the tensile domain, extended defects are seen; ‘1’ denotes Ge clusters, ‘2’ is a domain under tensile stress, ‘3’ indicates a twin boundary. 
![image](Figure_4_HRTEM) Figure \[fig:TEM-9-10A\] - TEM data for the five-layer Ge/Si heterostructures, [*T$_{\mathrm{gr}}$*]{}= 360 °C: {#figurefigtem-9-10a---tem-data-for-the-five-layer-gesi-heterostructures-t_mathrmgr-360 .unnumbered} ------------------------------------------------------------------------------------------------------------- \(a) to (c) [*h$_{\mathrm{Ge}}$*]{}= 9Å; (d) to (i) [*h$_{\mathrm{Ge}}$*]{}= 10Å; (a) domains of tensile strain in Si over Ge clusters are observed more or less distinctly near most clusters, but not around all; the surface is down; the mark equals 20 nm; (b), (c) zoom in on two strained domains, no extended defects are observed; (d) strained domains are more pronounced, the strain is well recognized even under some clusters; (e) a magnified image of a strained domain; the strained lattice is well contrasted with the normal one; (f) zoom in on the dilated lattice, a perfectly ordered lattice is observed; (g) a Si domain next to the Ge/Si interface near the cluster apex, a vacancy (‘V’) and a disordered lattice (upper right corner) are revealed; letter ‘I’ indicates the direction to the interface along $\langle 11\overline{1}\rangle$; (h) the same as in (g) but somewhat farther from the interface, the lattice is perfect; (i) the Fourier transform of an image obtained from a strained domain demonstrates an enhanced lattice parameter (the strain varies from domain to domain, the estimated lattice period in the \[001\] direction sometimes reaches $\sim$5.6Å). 
![image](Figure_1_HRTEM) Figure \[fig:STM-9A\] - STM images of Ge/Si(001), [*h$_{\mathrm{Ge}}$*]{}= 9Å, [*T$_{\mathrm{gr}}$*]{}= 360 °C: {#figurefigstm-9a---stm-images-of-gesi001-h_mathrmge-9-t_mathrmgr-360 .unnumbered} ------------------------------------------------------------------------------------------------------------ \(a) to (d) array top views with different magnifications; (e) a large cluster in the array, $\sim$2.5 nm high; (f) a huge cluster (>3.5 nm high) interpreted as an array defect. ![image](Fig_1a_EMF)(a)\ ![image](Fig_1b_EMF)(b) Figure \[fig:p-i-n\_Schematics\] - Schematics of the [*p–i–n*]{}-structures: {#figurefigp-i-n_schematics---schematics-of-the-pin-structures .unnumbered} ----------------------------------------------------------------------------- \(a) R163, (b) R166. ![image](Fig_2-3_EMF) Figure \[fig:r163\_r166\] - Photo-emf spectra of the [*p–i–n*]{} structures: {#figurefigr163_r166---photo-emf-spectra-of-the-pin-structures .unnumbered} ----------------------------------------------------------------------------- (a) R163: (1) without bias lighting; (2)–(5) under bias lighting (Ge filter): (2) $W=0.25$mW/cm$^2$; (3) $W=0.77$mW/cm$^2$; (4) $W=1.5$mW/cm$^2$; (5) $W=2.16$mW/cm$^2$; (b) R166: (1) without bias lighting; (2)–(6) under bias lighting (Si filter): (2) $W=0.63$mW/cm$^2$; (3) $W=3.3$mW/cm$^2$; (4) $W=5.3$mW/cm$^2$; (5) $W=12$mW/cm$^2$; (6) $W=17.5$mW/cm$^2$. ![image](Fig_4a_EMF)(a)\ ![image](Fig_4b_EMF)(b) Figure \[fig:bands\] - Schematics of band structures of [*p–i–n*]{}-diodes: {#figurefigbands---schematics-of-band-structures-of-pin-diodes .unnumbered} ---------------------------------------------------------------------------- \(a) R163, (b) R166; figure ‘1’ indicates potential barriers for holes in the valence band. ![image](Fig_5_EMF) Figure \[fig:bias\] - Dependence of photo-emf response of the R163 and R166 [*p–i–n*]{}-structures on bias lighting power density. 
{#figurefigbias---dependence-of-photo-emf-response-of-the-r163-and-r166-pin-structures-on-bias-lighting-power-density. .unnumbered} ----------------------------------------------------------------------------------------------------------------------------------- ![image](Figure_1_THz) Figure \[fig:THz-fig1\] - Spectra of transmission coefficient of a silicon substrate (a commercial wafer, $\rho = 12\,\Omega$cm), measured at two temperatures using two different BWOs working in spectral ranges from 11 cm$^{-1}$ to 24 cm$^{-1}$ and from 29 cm$^{-1}$ to 39 cm$^{-1}$: {#figurefigthz-fig1---spectra-of-transmission-coefficient-of-a-silicon-substrate-a-commercial-wafer-rho-12omegacm-measured-at-two-temperatures-using-two-different-bwo-working-in-spectral-ranges-from-11-cm-1-to-24-cm-1-and-from-29-cm-1-to-39-cm-1 .unnumbered} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Dots show the measurement results, lines are least-squares fits based on the Drude conductivity model, as described in the text. ![image](Figure_2_THz) Figure \[fig:THz-fig2\] - Temperature dependences of the silicon substrate parameters obtained by fitting the transmission coefficient spectra as shown in Figure \[fig:THz-fig1\] and described in the text: {#figurefigthz-fig2---temperature-dependences-of-the-silicon-substrate-parameters-obtained-by-fitting-the-transmission-coefficient-spectra-as-shown-in-figurefigthz-fig1-and-described-in-the-text .unnumbered} -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \(a) plasma frequency of charge carriers and (b) scattering rate. Solid line in (b) shows the $T^{-3/2}$ behavior. 
![image](Figure_3_THz) Figure \[fig:THz-fig3\] - Spectra of transmission coefficient of Ge/Si heterostructure on Si substrate (solid symbols) and of bare substrate (open symbols) measured at two different temperatures: {#figurefigthz-fig3---spectra-of-transmission-coefficient-of-gesi-heterostructure-on-si-substrate-solid-symbols-and-of-bare-substrate-open-symbols-measured-at-two-different-temperatures .unnumbered} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Horizontal lines show the difference in peak transmissivity that is observed at 300 K and disappears at $\sim$170 K. The peak positions are shifted due to a slight difference in the Si substrate thickness. ![image](Figure_4_THz) Figure \[fig:THz-fig4\] - Temperature dependences of dynamical conductivity of Ge/Si heterostructure and of Si substrate: {#figurefigthz-fig4---temperature-dependences-of-dynamical-conductivity-of-gesi-heterostructure-and-of-si-substrate .unnumbered} -------------------------------------------------------------------------------------------------------------------------- Frequency is around 1 THz. ![image](Figure_5_THz) Figure \[fig:THz-fig5\] - Terahertz conductivity and absorption coefficient of Ge/Si heterostructure with Ge quantum dots versus Ge coverage: {#figurefigthz-fig5---terahertz-conductivity-and-absorption-coefficient-of-gesi-heterostructure-with-ge-quantum-dots-versus-ge-coverage .unnumbered} ---------------------------------------------------------------------------------------------------------------------------------------------- \(a) terahertz conductivity, (b) absorption coefficient; lines are guides to the eye.
#### Introduction. {#introduction. .unnumbered} Two new large colliders with relativistic heavy nuclei, the RHIC and the LHC, are scheduled to be in operation in the near future. The charge numbers $Z_1=Z_2=Z$ of the nuclei with masses $M_1=M_2=M$ and their Lorentz factors $\gamma_1=\gamma_2=\gamma=E/M$ are the following: $$\begin{aligned} Z=79\,, \ \gamma &=&\,\;108 \ {\mathrm {for \ RHIC \ (Au--Au \ collisions)}}\, \nonumber \\ Z=82\,, \ \gamma &=&3000 \ {\mathrm {for \ LHC \ \ (Pb--Pb \ collisions)}}\,. \label{1}\end{aligned}$$ Here $E$ is the heavy ion energy in the c.m.s. One of the important processes at these colliders is $$Z_1Z_2\to Z_1Z_2 \, e^+e^- \,. \label{2}$$ Its cross section is huge. In the Born approximation (see Fig. \[f1\] with $n=n'=1$) the total cross section according to the Racah formula [@R] equals $\sigma_{\mathrm{Born}} = 36$ kbarn for the RHIC and 227 kbarn for the LHC. Therefore it will contribute as a serious background to a number of experiments; besides, this process is the leading beam loss mechanism (for details see the review [@BB]). The cross sections of the process (\[2\]) in the Born approximation are known with an accuracy of $\sim 1/ \gamma^2$ (see, for example, Refs. [@R; @KLBGMS] and more recent calculations reviewed in Refs. [@BB; @BHT]). However, besides the Born amplitude $M_{\mathrm {Born}} =M_{11}$, the other amplitudes $M_{nn'}$ (see Fig. \[f1\]) also have to be taken into account for heavy nuclei, since in this case the parameter of the perturbation series, $Z\alpha$, is of the order of unity. Therefore, the whole series in $Z\alpha$ has to be summed to obtain the cross section with sufficient accuracy. Following Ref. [@BM], we call the difference $d\sigma_{\mathrm{Coul}}$ between the whole sum $d \sigma$ and the Born approximation the Coulomb correction (CC): $$d\sigma = d \sigma_{\mathrm{Born}} + d \sigma_{\mathrm{Coul}}\,. 
\label{4}$$ Such a CC is well known in the photoproduction of $e^+e^-$ pairs on atoms (see Ref. [@BM] and §98 of [@BLP]). The Coulomb correction to the total cross section of that process decreases the Born contribution by about 10 % for a Pb target. For the pair production of reaction (\[2\]) with $Z_1\alpha \ll 1$ and $Z_2 \alpha \sim 1$ the CC has been obtained in Refs. [@NP; @BB]. Recently this correction has been calculated for the pair production in the collisions of muons with heavy nuclei [@IKSS]. The results of Refs. [@NP; @BB; @IKSS] agree with each other in the corresponding kinematic regions and noticeably change the Born cross sections. Formulae for the CC for two heavy ions were suggested ad hoc in Sect. 7.3 of [@BB]. However, our calculations presented here show that this suggestion is incorrect. In the present paper we calculate the Coulomb correction for process (\[2\]) omitting terms of the order of $1$ % compared with the main term given by the Born cross section. We find that these corrections are negative and quite important: $$\begin{aligned} \sigma_{\mathrm{Coul}}/ \sigma_{\mathrm{Born}} &=& -25\, \% \;\; {\mathrm for \ \ RHIC}\,, \nonumber \\ \sigma_{\mathrm{Coul}}/ \sigma_{\mathrm{Born}} &=& -14\, \% \;\; {\mathrm for \ \ LHC}\,. \label{5}\end{aligned}$$ This means that at the RHIC the background process with the largest cross section will have a production rate 25 % smaller than expected. Our main notations are given in Eq. (\[1\]) and Fig. \[f1\]; besides, $(P_1+P_2)^2 = 4E^2 = 4 \gamma^2 M^2$, $q_i= (\omega_i,\, {\bf q}_i)= P_i-P_i'$, $\varepsilon= \varepsilon_++\varepsilon_-$ and $$\sigma_0=\frac{\alpha^4 Z_1^2 Z_2^2}{\pi m^2} \,, \;\; L= \ln{P_1P_2 \over 2M_1 M_2}= \ln{\gamma^2} \label{3}$$ where $m$ is the electron mass. The quantities ${\mathbf q}_{i\perp}$ and ${\mathbf p}_{\pm\perp}$ denote the transverse part of the corresponding three–momenta.
Throughout the paper we use the well known function[@BM] $$f(Z) = Z^2\alpha^2 \sum_{n=1}^{\infty} {1\over n(n^2+Z^2\alpha^2)}\,,$$ its particular values for the colliders under discussion are $f(79)=0.313$ and $f(82)=0.332$. #### Selection of the leading diagrams and the structure of the amplitude. {#selection-of-the-leading-diagrams-and-the-structure-of-the-amplitude. .unnumbered} Let ${\cal M}$ be the sum of the amplitudes $M_{nn'}$ of Fig. \[f1\]. It can be presented in the form $$\begin{aligned} \label{7a} {\cal M}&=& \sum_{nn'\geq 1 } M_{nn'}= M_{\mathrm{Born}} +M_1+{\tilde M}_1+ M_2\,,\\ M_1 &=& \sum_{n'\geq 2} M_{1n'}\,, \ \ \tilde M_1 = \sum_{n\geq 2} M_{n1}\,, \ \ M_2= \sum_{nn'\geq 2} M_{nn'} \,. \nonumber\end{aligned}$$ The Born amplitude $M_{\mathrm{Born}}$ contains the one–photon exchange both with the first and the second nucleus, whereas the amplitude $M_1$ ($\tilde M_1$) contains the one–photon exchange only with the upper (lower) nucleus. In the last amplitude $M_2$ we have no one–photon exchange. According to this classification we write the total cross section as $$\sigma = \sigma_{\mathrm{Born}} +\sigma_1 +\tilde\sigma_1 + \sigma_2 \label{7}$$ where $$\begin{aligned} &&d\sigma_{\mathrm{Born}} \propto |M_{\mathrm{Born}}|^2\,, \nonumber \\ &&d\sigma_1 \propto 2 {\mathrm Re}(M_{\mathrm{Born}} M_1^*) +|M_1|^2 \,, \nonumber \\ &&d\tilde\sigma_1 \propto 2 {\mathrm Re}(M_{\mathrm{Born}} \tilde M_1^*) +| \tilde M_1|^2 \,, \nonumber \\ &&d\sigma_2 \propto 2 {\mathrm Re}\left( M_{\mathrm{Born}} M_2^* + M_1\tilde M_1^* +M_1M_2^* \right. \nonumber \\ && \left. \hspace{1cm}+\tilde M_1M_2^* \right) + |M_2|^2 \,. \nonumber\end{aligned}$$ It is not difficult to show that the ratio $\sigma_i / \sigma_{\mathrm{Born}}$ is a function of $(Z\alpha)^2$ only but not of $Z \alpha$ itself. Additionally we estimate the leading logarithms appearing in the cross sections $\sigma_i$. 
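As a quick numerical cross-check (our illustration, not part of the derivation), the function $f(Z)$ can be summed directly; combining it with $\sigma_0$ and $L$ of Eq. (\[3\]) and with the final result $\sigma_{\mathrm{Coul}}=-\frac{56}{9}\sigma_0 f(Z) L^2$ of Eq. (\[188new\]) reproduces the correction sizes quoted in Eq. (\[5\]). The physical constants below ($\alpha\approx 1/137.036$, $\hbar/m_ec\approx 386.16$ fm) are standard values and not taken from the text:

```python
import math

ALPHA_EM = 1.0 / 137.036      # fine-structure constant (assumed standard value)
LAMBDA_C = 386.159            # electron Compton wavelength hbar/(m_e c) in fm
FM2_TO_BARN = 1.0 / 100.0     # 1 barn = 100 fm^2

def f(Z, n_terms=200_000):
    """f(Z) = (Z*alpha)^2 * sum_{n>=1} 1/(n*(n^2 + (Z*alpha)^2))."""
    nu2 = (Z * ALPHA_EM) ** 2
    return nu2 * sum(1.0 / (n * (n * n + nu2)) for n in range(1, n_terms + 1))

def sigma0_barn(Z):
    """sigma_0 = alpha^4 Z^4 / (pi m^2) of Eq. (3), converted to barn."""
    return ALPHA_EM ** 4 * Z ** 4 / math.pi * LAMBDA_C ** 2 * FM2_TO_BARN

def sigma_coul_barn(Z, gamma):
    """Total Coulomb correction: -(56/9) sigma_0 f(Z) L^2 with L = ln(gamma^2)."""
    L = math.log(gamma ** 2)
    return -(56.0 / 9.0) * sigma0_barn(Z) * f(Z) * L ** 2

# sigma_Born: 36 kbarn (RHIC, Au-Au) and 227 kbarn (LHC, Pb-Pb), as quoted above
for name, Z, gamma, born_barn in (("RHIC", 79, 108, 36e3), ("LHC", 82, 3000, 227e3)):
    ratio = sigma_coul_barn(Z, gamma) / born_barn
    print(f"{name}: f({Z}) = {f(Z):.3f}, sigma_Coul/sigma_Born = {100 * ratio:.0f} %")
```

The script reproduces $f(79)\approx 0.313$ and $f(82)\approx 0.332$, and ratios close to $-25\,\%$ and $-14\,\%$, consistent with Eq. (\[5\]) and the quoted Born cross sections.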
The integration over the transferred momentum squared $q_1^2$ and $q_2^2$ results in two large Weizsäcker–Williams (WW) logarithms $\sim L^2$ for $\sigma_{\mathrm{Born}}$ and in one large WW logarithm $\sim L$ for $\sigma_1$ and $\tilde\sigma_1$. The cross section $\sigma_2$ contains no large WW logarithm. Therefore, the relative contribution of the cross sections $\sigma_i$ is $\sigma_1 / \sigma_{\mathrm{Born }} =\tilde\sigma_1 / \sigma_{\mathrm{Born}} \sim (Z\alpha)^2 /L$ and ${\sigma_2 / \sigma_{\mathrm{Born}}} \sim (Z\alpha)^2 /L^2 \, < 0.4$ % for the colliders (\[1\]). As a result, with an accuracy of the order of $1 \%$ we can neglect $\sigma_2$ in the total cross section and use the equation $$\sigma = \sigma_{\mathrm{Born}} +\sigma_1 +\tilde\sigma_1\,. \label{9}$$ With that accuracy it is sufficient to calculate $\sigma_1$ and $\tilde\sigma_1$ in the leading logarithmic approximation (LLA) only, since the next-to-leading-log terms are of the order of $(Z\alpha /L)^2$. This fact greatly simplifies the calculations. The calculation in the LLA can be performed using the equivalent photon or WW approximation. The main contribution to $\sigma_1$ and $\tilde\sigma_1$ is given by the region $(\omega_1/\gamma)^2 \ll -q_1^2 \ll m^2$ and $(\omega_2/\gamma)^2 \ll -q_2^2 \ll m^2$, respectively. In the first region the main contribution arises from the amplitudes $M_{\mathrm{Born}}+M_1$ (in the second region $M_{\mathrm{Born}}+\tilde M_1$). The virtual photon with four–momentum $q_1$ is almost real and the amplitude can be expressed via the amplitude $M_\gamma$ for the real photoproduction $\gamma Z_2\to Z_2 e^+e^-$ (see, for example, §99 of Ref. [@BLP]) $$M_{\mathrm{Born}}+M_1\approx \sqrt{4 \pi \alpha} Z_1 \frac{ |{\mathbf{q}}_{1\perp}|} {(-q_1^2)} \, \frac{2 E}{\omega_1} \, M_\gamma \,. \label{amp}$$ The amplitude $M_\gamma$ has been calculated in Ref. [@BM].
We use the convenient form of that amplitude derived in the works [@OM] and [@IM]: $$M_\gamma= ( f_1 \, M_\gamma^{\mathrm{Born}} + {\mathrm i} f_2 \, \Delta M_\gamma) \, {\mathrm e}^{{\mathrm i} \Phi} \label{Mgamma}$$ where $M_\gamma^{\mathrm{Born}}$ is the Born amplitude for the $\gamma Z_2 \to Z_2 e^+ e^-$ process. This Born amplitude depends on the transverse momenta ${\mathbf p}_{\pm\perp}$ only via the two combinations $A=\xi_+-\xi_-$ and ${\mathbf B}=\xi_+ {\mathbf p}_{+\perp} + \xi_- {\mathbf p}_{-\perp}$ where $\xi_\pm= m^2/( m^2+{\mathbf p}_{\pm\perp}^2)$. The quantity $\Delta M_\gamma$ is obtained from $M_\gamma^{\mathrm{Born}}$ by replacing $A\to \xi_++\xi_--1$ and ${\mathbf B}\to \xi_+ {\mathbf p}_{+\perp} - \xi_- {\mathbf p}_{-\perp}$. All the nontrivial dependence on the parameter $Z_2 \alpha \equiv \nu$ is accumulated in the Bethe-Maximon phase $$\Phi=\nu \, \ln\frac{(p_+P_2) \xi_+}{(p_-P_2) \xi_-} \label{phase}$$ and in the two functions (with $z=1 - (-q_2^2/m^2) \xi_+\xi_-$) $$f_1=\frac{ F({\mathrm i} \nu,-{\mathrm i} \nu; 1 ; z)} { F({\mathrm i} \nu,-{\mathrm i} \nu; 1 ; 1 )} \,,\ \ f_2=\frac{1-z}{\nu} f_1'(z)\,. \label{f1f2}$$ The function $f_1(z)$ and its derivative $f_1'(z)$ are given with the help of the Gauss hypergeometric function $F(a,b;c;z)$. It can be clearly seen that in the region ${\mathbf p}_{\pm\perp}^2 \sim m^2$ the amplitude $M_\gamma$ differs considerably from the $M_\gamma^{\mathrm{Born}}$ amplitude and, therefore, the whole amplitude ${\cal M}$ differs from its Born limit $M_{\mathrm{Born}}$. Let us stress that just this transverse momentum region ${\mathbf p}_{\pm\perp}^2 \sim m^2$ gives the main contribution to the total Born cross section $\sigma_{\mathrm{Born}}$ and to $\sigma_1$. Outside this region the CC vanishes.
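Although the parameters of the hypergeometric function in Eq. (\[f1f2\]) are imaginary, $f_1$ is real, since $({\mathrm i}\nu)_n(-{\mathrm i}\nu)_n=\prod_{k=0}^{n-1}(k^2+\nu^2)$. A minimal sketch (function names are ours) that sums the Gauss series term by term; the closed form $F({\mathrm i}\nu,-{\mathrm i}\nu;1;1)=\sinh(\pi\nu)/(\pi\nu)$ serves as a check:

```python
import math

def gauss_F(nu, z, n_terms=200_000):
    """Sum of the Gauss series for F(i*nu, -i*nu; 1; z); the Pochhammer
    product (i*nu)_n * (-i*nu)_n = prod_{k=0}^{n-1} (k^2 + nu^2) is real."""
    term, total = 1.0, 1.0
    for n in range(1, n_terms + 1):
        term *= ((n - 1) ** 2 + nu * nu) / (n * n) * z
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

def f1(nu, z):
    """f_1(z) of Eq. (f1f2): F(i nu, -i nu; 1; z) / F(i nu, -i nu; 1; 1)."""
    return gauss_F(nu, z) / gauss_F(nu, 1.0)

nu = 82.0 / 137.036   # Z_2 * alpha for a Pb nucleus
closed_form = math.sinh(math.pi * nu) / (math.pi * nu)   # F(i nu, -i nu; 1; 1)
print(gauss_F(nu, 1.0), closed_form)
print(f1(nu, 0.5))    # deviation from 1 signals the Coulomb corrections
```

At $z=1$ the series converges only like $1/n$, which is why a large number of terms is used; for $z<1$ the convergence is geometric.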
Indeed, for ${\mathbf p}_{\pm\perp}^2 \ll m^2$ or ${\mathbf p}_{\pm\perp}^2 \gg m^2$ the variable $ z \approx 1$, therefore, $f_1\approx 1$, $f_2 \approx 0$ and $$M_{\mathrm{Born}}+M_1= M_{\mathrm{Born}} {\mathrm e}^{{\mathrm i} \Phi} \,. \label{limit}$$ Note that the region ${\mathbf p}_{\pm\perp}^2 \gg m^2$ gives a negligible contribution to the total cross section $\sigma$; however, this region might be of interest for some experiments. The results of Ref. [@BM], which are used here in the form of Eqs. (\[amp\])-(\[limit\]), are the basis for our consideration. These results were confirmed in a number of papers (see, for example, Refs. [@Qclas; @IM]) using various approaches. Recently in Refs. [@DIRACEQ] the Coulomb effects were studied within the framework of a light–cone or an eikonal approach. However, the approximations used in Refs. [@DIRACEQ] fail to reproduce the classical results of Bethe and Maximon [@BM]. To show this explicitly, we consider the simple case $Z_1 \alpha \ll 1, \; Z_2 \alpha \equiv \nu \sim 1$ in which the principal result of Refs. [@DIRACEQ] for the amplitude takes the form ${\cal M}=M_{\mathrm{Born}}+ M_1= M_{\mathrm{Born}} \exp ({\mathrm i} \Psi)$ with $\Psi = \nu \ln {\mathbf q}_{2\perp}^2$ in obvious contradiction to Eqs. (\[amp\])-(\[Mgamma\]). Since in the works [@DIRACEQ] different statements on the applicability range of their results can be found, we take as an example the common region $(\omega_i/\gamma_i)^2 \ll {\mathbf q}_{i\perp}^2 \ll m^2$. But even in that region their expression for the matrix element does not reproduce Eq. (\[limit\]), since their phase $\Psi$ does not coincide with the Bethe–Maximon phase $\Phi$, i.e. $\Psi\neq \Phi$. #### CC to the energy distribution and to the total cross section. {#cc-to-the-energy-distribution-and-to-the-total-cross-section.
.unnumbered} As it was explained in the previous section, the basic expression for the cross section $d\sigma_1$ in the LLA can be directly obtained using the WW approximation. To show clearly the terms omitted in the LLA, we start with a more exact expression for $d \sigma_1$ derived for the case of $\mu Z $ collisions considered in Ref. [@IKSS]. The reason is that for the most interesting region (when the energy of relativistic $e^{\pm}$ pairs is much smaller than the nucleus energy) the muon in the $\mu Z$ scattering as well as the upper nucleus of the ion–ion collision can be equally well treated as spinless and pointlike particles. Using Eqs. (14) and (17) from Ref. [@IKSS] (given in the lab frame of the muon projectile on a nucleus target) and the invariant variables $x_{\pm}= (p_{\pm} P_2)/(q_1 P_2)$, $y= (q_1 P_2) / (P_1P_2)$ we obtain $d \sigma_1$ for the pair production in $Z_1Z_2$ collisions in the invariant form (and at $y\ll 1$) $$\begin{aligned} d\sigma_1& =& - \frac{4}{3} \sigma_0 f(Z_2) \left\{ \left[ (1+\xi) a-1\right] \ln \frac{1+\xi}{\xi} - \right. \nonumber \\ &-& \left. a + \frac{4-a}{1+\xi} \, \right\} {dy\over y} dx_+dx_- \delta(x_++x_- - 1) \label{10}\end{aligned}$$ with $a= 2 (1+x_+^2+x_-^2) \,, \ \ \xi= \left( M_1 y/m\right)^2 x_+x_- $. The main contribution to $\sigma_1$ is given by the region $$\frac{M_1^2 M_2^2 }{(P_1 P_2)^2} \ll \xi \ll 1\,. \label{11}$$ The corresponding expression for $d\tilde\sigma_1$ can be obtained by making the replacements $$d\tilde\sigma_1 = d\sigma_1(q_1 \to q_2, P_1 \leftrightarrow P_2, Z_1\leftrightarrow Z_2)\,. \label{12}$$ Below we consider only the experimentally most interesting case when in the collider system ($\gamma_1=E_1/M_1 \sim \gamma_2=E_2/M_2$) both $e^+$ and $e^-$ are ultrarelativistic ($\varepsilon_\pm \gg m$). We assume that the $z$-axis is directed along the initial three-momentum of the first nucleus ${\mathbf P}_1$. 
To obtain the energy distribution of $e^+$ and $e^-$ in the LLA we have to take into account two regions $p_{\pm z} \gg m$ and $(-p_{\pm z}) \gg m$ where the lepton pair is produced either in the forward or the backward direction. In the first region we have $ x_\pm= \varepsilon_{\pm}/\varepsilon$, $y=\varepsilon/E_1$, and from Eqs. (\[10\])-(\[11\]) we obtain in the LLA $$\begin{aligned} &d\sigma_1^{(1)}&= \nonumber \\ &-&4 \,\sigma_0 f(Z_2) \left(1 - \frac{4\varepsilon_+ \varepsilon_-}{3 \varepsilon^2} \right) \, \ln \frac{(m \gamma_1)^2}{ \varepsilon_+ \varepsilon_-} \, {d\varepsilon_+ d \varepsilon_- \over \varepsilon^2} \,, \label{13new} \\ && m \ll \varepsilon_\pm \ll m \gamma_1 \,. \nonumber\end{aligned}$$ In the second region we have $x_\pm \approx \varepsilon_{\mp}/ \varepsilon$, $y\approx m^2 \varepsilon /( 4 E_1 \varepsilon_+\varepsilon_-)$ and $$\begin{aligned} &d\sigma_1^{(2)}&= \nonumber \\ &-&4 \,\sigma_0 f(Z_2) \left(1 - \frac{4\varepsilon_+ \varepsilon_-}{3 \varepsilon^2} \right) \, \ln \frac{\gamma_1^2 \varepsilon_+ \varepsilon_-}{m^2} \, {d\varepsilon_+ d \varepsilon_- \over \varepsilon^2} \,, \label{14new} \\ && m \ll \varepsilon_\pm \ll m \gamma_2 \,. \nonumber\end{aligned}$$ Summing up these two contributions, we find $$d\sigma_1= - 8 \, \sigma_0 f(Z_2) \left(1 - \frac{4\varepsilon_+ \varepsilon_-}{3 \varepsilon^2} \right) \, \ln \gamma_1^2 \, {d\varepsilon_+ d \varepsilon_- \over \varepsilon^2} \,.
\label{13}$$ To obtain $\sigma_1$ we have to integrate the expressions (\[13new\]) and (\[14new\]) over $\varepsilon_-$ (with logarithmic accuracy) $$\begin{aligned} d\sigma_1^{(1)} &=& - \frac{28}{9} \sigma_0 f(Z_2) \, \ln \frac{(m \gamma_1)^2}{\varepsilon_+^2} \, \frac{d\varepsilon_+} {\varepsilon_+} \,, \label{15new} \\ &&m\ll \varepsilon_+\ll m \gamma_1 \,, \nonumber\end{aligned}$$ $$\begin{aligned} d\sigma_1^{(2)} &=& - \frac{28}{9} \sigma_0 f(Z_2) \, \ln \frac{(\gamma_1 \varepsilon_+)^2}{m^2} \, \frac{d\varepsilon_+} {\varepsilon_+} \,, \label{16new} \\ &&m\ll \varepsilon_+\ll m \gamma_2 \nonumber\end{aligned}$$ from which it follows that $$d\sigma_1= - \frac{28}{9} \sigma_0 f(Z_2)\, \ln \gamma_1^2 \, \frac{d\varepsilon_+} {\varepsilon_+} \,. \label{17new}$$ The further integration of Eqs. (\[15new\]), (\[16new\]) over $\varepsilon_+$ results in $$\sigma_1=- \frac{28}{9} \sigma_0 f(Z_2)\, \left[ \ln \frac{P_1 P_2}{ 2 M_1 M_2} \right]^2 \,. \label{18new}$$ This expression is in agreement with the similar result for the $\mu Z$ scattering (see Eq. (31) from [@IKSS] for $Z_1 =1, Z_2=Z$). The corresponding formulae for $\tilde\sigma_1$ can be obtained from Eqs. (\[13\]), (\[17new\]) and (\[18new\]) by replacing $\gamma_1\leftrightarrow \gamma_2$, $Z_1\leftrightarrow Z_2$. The whole CC contribution $d \sigma_{\mathrm{Coul}}= d( \sigma_1+ \tilde\sigma_1)$ for the symmetric case $Z_1=Z_2=Z$ and $\gamma_1=\gamma_2=\gamma$ takes the following form $$d\sigma_{\mathrm Coul}= - 16 \,\sigma_0 f(Z) \left(1 - \frac{4\varepsilon_+ \varepsilon_-}{3 \varepsilon^2} \right) \, L \, \frac{d\varepsilon_+ d \varepsilon_-} {\varepsilon^2} \label{133}$$ at $ m\ll \varepsilon_{\pm} \ll m \gamma \,$, $$d\sigma_{\mathrm Coul}= - \frac{112}{9} \sigma_0 f(Z) \, L\, \frac {d\varepsilon_+ } {\varepsilon_+} \label{1333} \\$$ at $ m\ll \varepsilon_+ \ll m \gamma \,$, and $$\sigma_{\mathrm Coul}=- \frac{56}{9} \sigma_0 f(Z)\, L^2 \,. 
\label{188new}$$ The size of this correction for the two colliders was given before in Eq. (\[5\]). The total cross section with and without Coulomb correction as function of the Lorentz factor $\gamma$ is illustrated in Fig. \[f3\] for Pb nuclei. #### Conclusion. {#conclusion. .unnumbered} We have calculated the Coulomb corrections to $e^+e^-$ pair production in relativistic heavy ion collisions for the case of colliding beams. Our main results are given in Eqs. (\[133\])-(\[188new\]). We have restricted ourselves to the Coulomb corrections for the energy distribution of electrons and positrons and for the total cross section. In our analysis we neglected contributions which are of the relative order of $\sim (Z\alpha)^2/L^2$. The CC to the angular distribution of $e^+e^-$ can be easily obtained in a similar way, however only with an accuracy $Z\alpha/L^2$. Since our basic formulae (\[10\]), (\[12\]) are given in the invariant form, a similar calculation can be easily repeated for fixed–target experiments. This interesting question will be considered in a future work. [*Acknowledgments.*]{} — We are very grateful to G. Baur, Yu. Dokshitzer, U. Eichmann, V. Fadin, I. Ginzburg and V. Telnov for useful discussions. V.G.S. acknowledges support from Volkswagen Stiftung (Az. No. I/72 302). D.Yu.I. and V.G.S. are partially supported by the Russian Foundation for Basic Research (code 96-02-19114). [99]{} Email address: d-ivanov@math.nsc.ru Email address: schiller@tph204.physik.uni-leipzig.de Email address: serbo@math.nsc.ru G. Racah, Nuovo Cim. [**14**]{}, 93 (1937). C. A. Bertulani, G. Baur, Phys. Rep. [**163**]{}, 299 (1988). V. N. Baier, V. S. Fadin, ZhETF [**61**]{}, 476 (1971); E. A. Kuraev, V. G. Lasurik-Elzufin, Pis’ma ZhETF [**13**]{}, 391 (1971); V. M. Budnev, I. F. Ginzburg, G. V. Meledin, V. G. Serbo, Nucl. Phys. B [**63**]{}, 519 (1973). G. Baur, K. Henken, D. Trautman, J. Phys. G [**24**]{}, 1657 (1998). H. Bethe, L. C. Maximon, Phys. Rev. 
[**93**]{}, 768 (1954); H. Davies, H. Bethe, L. C. Maximon, Phys. Rev. [**93**]{}, 788 (1954). V. B. Berestetskii, E. M. Lifshitz, L. B. Pitaevskii, Quantum Electrodynamics (Nauka, Moscow, 1989). A. I. Nikishov, N. V. Pichkurov, Sov. J. Nucl. Phys. [**35**]{}, 561 (1982). D. Yu. Ivanov, E. A. Kuraev, A. Schiller, V. G. Serbo, Phys. Lett. B [**442**]{}, 453 (1998). H. Olsen, L. C. Maximon, Phys. Rev. [**114**]{}, 887 (1959). D. Ivanov, K. Melnikov, Phys. Rev. D [**57**]{}, 4025 (1998). B. Segev and J. C. Wells, Phys. Rev. A [**57**]{}, 1849 (1998); A. J. Baltz, L. McLerran, Phys. Rev. C [**58**]{}, 1679 (1998); U. Eichmann, J. Reinhardt, S. Schramm, W. Greiner, nucl-th/9804064; U. Eichmann, J. Reinhardt, W. Greiner, nucl-th/9806031. H. Olsen, L. C. Maximon, and H. Wergeland, Phys. Rev. [**106**]{}, 27 (1957); V. N. Baier, V. M. Katkov, ZhETF [**55**]{}, 1542 (1965).
--- abstract: | This paper is a contribution to the study of the subgroup structure of exceptional algebraic groups over algebraically closed fields of arbitrary characteristic. Following Serre, a closed subgroup of a semisimple algebraic group $G$ is called irreducible if it lies in no proper parabolic subgroup of $G$. In this paper we complete the classification of irreducible connected subgroups of exceptional algebraic groups, providing an explicit set of representatives for the conjugacy classes of such subgroups. Many consequences of this classification are also given. These include results concerning the representations of such subgroups on various $G$-modules: for example, the conjugacy classes of irreducible connected subgroups are determined by their composition factors on the adjoint module of $G$, with one exception. A result of Liebeck and Testerman shows that each irreducible connected subgroup $X$ of $G$ has only finitely many overgroups and hence the overgroups of $X$ form a lattice. We provide tables that give representatives of each conjugacy class of connected overgroups within this lattice structure. We use this to prove results concerning the subgroup structure of $G$: for example, when the characteristic is $2$, there exists a maximal connected subgroup of $G$ containing a conjugate of every irreducible subgroup $A_1$ of $G$. address: 'School of Mathematics, University of Bristol, Bristol, BS8 1TW, UK, and Heilbronn Institute for Mathematical Research, Bristol, UK' author: - 'Adam R. Thomas' bibliography: - 'biblio.bib' title: The Irreducible Subgroups of Exceptional Algebraic Groups --- [^1] [^1]: The author is indebted to Prof. M. Liebeck for his help in producing this paper. He would also like to thank Dr A. Litterick and Dr T. Burness for their comments on previous versions of this paper. Finally, the author would like to thank the anonymous referee for their careful reading of this paper and many insightful comments and corrections.
--- abstract: 'In this work the diffusion in the quenched trap model with diverging mean waiting times is examined. The approach of randomly stopped time is extensively applied in order to obtain an asymptotically exact representation of the disorder-averaged positional probability density function. We establish that the dimensionality and the geometric properties of the lattice, on top of which the disorder is imposed, dictate the plausibility of a mean-field approximation that will only include annealed disorder. Specifically, for any case when the probability to return to the origin ($Q_0$) is less than $1$, i.e. the transient case, the quenched trap model can be mapped onto the continuous time random walk. The explicit form of the mapping is provided. In the case when an external force is applied to a tracer particle in a medium described by the quenched trap model, the response to such a force is calculated and a non-linear response for sufficiently low dimensionality is observed.' author: - Stanislav Burov bibliography: - './quenchedLiterature.bib' title: The Transient Case of The Quenched Trap Model --- Introduction ============ Brownian Motion is probably the simplest manifestation of transport in a random environment. In this case the particle path is constantly modified by collisions with molecules that compose the surrounding medium. The trajectory will appear as if the direction of motion changes randomly as a function of time, and a simple random walk (RW) is quite useful to describe the motion. The continuum representation of a RW is a regular diffusion [@Weiss]. When the motion of the particle occurs in a complex medium, the simple RW might be insufficient for a proper description of the transport. In many materials the basic linear dependence of the mean squared displacement (MSD), $\langle x^2(t) \rangle$, is missing and instead $\langle x^2(t)\rangle\sim t^{\alpha} $ while $0<\alpha<1$.
Such behavior is termed anomalous subdiffusion, and materials where it appears include living cells [@Metzler2011; @LiveCell; @Tabei; @Bariers], blinking quantum dots [@QuantumD], plasma membranes [@Krapf2011], filamentous networks [@BurovPnas] and many more [@Sokolov2005]. The modeling of transport in these systems is quite complicated when compared to the original RW. In the works of Scher and Montroll [@ScherMontroll] the continuous time random walk (CTRW) approach for transport in amorphous materials was developed. The idea behind the CTRW is the existence of regions of local arrest, i.e. traps, where the traced particle waits for some random time before it continues its motion inside the medium. When the expected random waiting times diverge the behavior is non-ergodic [@Bel2005; @YongHe] and the CTRW will produce the mentioned subdiffusive scaling of the MSD. While the CTRW became extremely popular and widely applied [@Bouchaud; @Klafter; @Kutner2017], this approach treats the disorder in the medium as annealed and uncorrelated. Quenched disorder is more physically appealing in many situations, but it implies the existence of strong correlations that in turn introduce significant difficulties in calculating basic properties of the transport [@Kehr]. When the local dwell times of the CTRW are fixed, the model is known as the quenched trap model (QTM). The QTM was found to be an important model that describes glassy behavior such as aging, weak-ergodicity breaking and non-self-averaging [@BouchaudAg; @Monthus1996; @Rinn2000; @Rinn2001; @Bertin; @Burov2007]. Beyond its applications, the difficulty of untangling the behavior dictated by the quenched disorder has made the QTM, and methods for its solution, a fundamental problem of anomalous transport [@Bouchaud]. The presence of the mentioned correlations, imposed by the quenched disorder, makes the treatment of the QTM a highly non-trivial task.
Over the years many theoretical methods were devised to advance the general understanding of the QTM. The method of semi-equilibration [@Derrida] allowed one to determine the average velocity and diffusion constant in the one-dimensional ($d=1$) case for non-anomalous transport. Description of the QTM in terms of master equations and their representation in Fourier space produced the scaling behavior of the QTM propagator at the origin [@Bernasconi; @Alexander]. The Renormalization Group approach [@Machta] and scaling arguments [@Bouchaud1987] established the existence of a critical dimension, $d=2$, for the QTM and the scaling behavior of the MSD. Based on these works a qualitative understanding emerged that for sufficiently high dimension ($d>2$) the behavior of the QTM can be mapped onto the mean-field representation, i.e. the CTRW. Further, the behavior of the QTM was studied for various lattices under the simplification of a directed walk, i.e. without returns to previously visited traps [@Aslangul]. The decimation of disorder allowed Monthus to calculate (among other quantities) the behavior of the positional probability density function (PDF) in the $d=1$ case in the limit of very low temperatures [@Monthus; @MonthusSec]. A rigorous probabilistic approach to the QTM led to mathematically exact scaling theorems [@BenArous1; @BenArous2] and to further generalization of the QTM to such models as the randomly trapped random walk [@BenArous3; @Cerny01]. The effect of fractal structures on the QTM [@Akimoto2015] and the behavior of the QTM under the influence of a bias [@Akimoto2019] are part of current research. The previously obtained results suggest that for any dimension $d>2$ the behavior of the QTM converges to that of the CTRW. A simple hand-waving argument that supports this qualitative result is that in sufficiently high dimensions the traced particle rarely returns to the same lattice point, thus reducing the effect of strong correlations imposed by the quenched disorder.
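This transience argument can be illustrated with a quick Monte Carlo estimate (ours, not from the literature cited above) of the probability that a simple random walk ever returns to the origin: in $d=1$ it tends to $1$, while in $d=3$ it is close to P[ó]{}lya's constant $\approx 0.34$. Truncating the walks at a finite number of steps is an assumption of the sketch, so the $d=1$ value falls slightly below $1$:

```python
import random

def return_probability(d, n_walks=1000, n_steps=2500, seed=1):
    """Fraction of simple random walks on Z^d that revisit the origin
    within n_steps -- a finite-time proxy for the return probability Q_0."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(n_walks):
        pos = [0] * d
        for _ in range(n_steps):
            pos[rng.randrange(d)] += rng.choice((-1, 1))
            if not any(pos):          # back at the origin
                returned += 1
                break
    return returned / n_walks

print("d=1:", return_probability(1))  # recurrent: tends to 1 as n_steps grows
print("d=3:", return_probability(3))  # transient: close to Polya's value 0.34
```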
P[ó]{}lya’s theorem [@Weiss] states that the probability to return to the origin (or any previously occupied position) is less than $1$ for any dimension above $d=2$. A valid question is: what is the quantitative form of the mapping between the QTM and the CTRW? Can one extend this mapping to cases where the dimensionality is low but the previously raised hand-waving argument still holds, i.e. the biased case? In this manuscript we will provide an explicit form of the mapping between the QTM and the CTRW for any transient case in any dimension. By using the randomly stopped time approach, which was originally developed for the $d=1$ case [@Burov1; @Burov2], we manage to obtain a subordination of the spatial process to the temporal $\alpha$-stable process. Unlike the CTRW, where the subordinated spatial process advances as a function of the number of jumps [@Bouchaud; @Fogedby; @Barkai], for the QTM the local time of the spatial process is quite different. A brief summary of part of our results was published in Ref. [@Burov2017]. This paper is organized as follows. In Sec. \[section\_def\] the QTM is defined together with the local time, the measurement time and the subordination approach. In Sec. \[salphaSec\] the local time $S_\alpha$ is explored; the mean value of the local time is computed in Sec. \[meansalpha\] and the second moment in Sec. \[secondsalpha\]. In Sec. \[deltafunction\] we summarize the results of the first and second moment calculations and show that the local time converges to the number of jumps that the process has performed. In Section \[doublesubordination\] the previously established convergence of the local time is exploited in order to establish an explicit mapping between the CTRW and the QTM, by means of double subordination. The formulas are applied to the one-dimensional case of the biased QTM. In Sec.
\[nonlinresp\] we obtain analytic expressions for the moments of the transient case of the QTM and show how the quenched disorder gives rise to a non-linear response to an externally applied field. The summary is provided in Sec. \[summary\]. Several Appendices supply specific technical calculations and are referred to in the manuscript. The Quenched Trap Model and Subordination {#section\_def} ========================================= The QTM is defined as a random jump process of a particle on top of a lattice of dimension $d$. For every lattice point ${\bf x}$ a quenched random variable $\tau_{\bf x}$ is defined. This quenched variable $\tau_{\bf x}$ defines the time that the particle is going to spend at ${\bf x}$ before jumping to some other site ${\bf x}'$, i.e. $\tau_{\bf x}$ is the local dwell time. The probability to jump from ${\bf x}$ to ${\bf x}'$ is provided by $p({\bf x}',{\bf x})$. In the following we will assume translational invariance of the lattice, which leads to $p({\bf x}',{\bf x})$ of the form $p({\bf x}'-{\bf x})$. The quenched dwell times $\{\tau_{\bf x}\}$ are real, positive and independently distributed random variables with $$\psi(\tau_{\bf x})\sim\tau^{-(1+\alpha)}A\big/|\Gamma(-\alpha)|\qquad \left(\tau_{\bf x}\to\infty\right) \label{psitaudef}$$ as the PDF ($A>0$). The value of the exponent $\alpha$ is bounded to $0<\alpha<1$. For such values of $\alpha$ the average dwell time diverges, $\int_0^\infty\tau\psi(\tau)\,d\tau\to\infty$, and the model gives rise to anomalous subdiffusion and aging [@BouchaudAg]. The physical picture behind this definition of the QTM is a thermally activated particle that is jumping between various energetic traps. When a particle is in a trap, the average escape time $\tau$ is provided by the Arrhenius law $\tau\propto \exp\left(E_{\bf x}/T\right)$, where $E_{\bf x}>0$ is the depth of the trap ${\bf x}$ and $T$ is the temperature.
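The Arrhenius picture can be checked directly: sampling trap depths $E$ from an exponential density with mean $T_g$ and setting $\tau=\exp(E/T)$ yields $P(\tau>x)=x^{-T/T_g}$, i.e. the power-law tail of Eq. (\[psitaudef\]) with $\alpha=T/T_g$. A minimal sketch (the parameter values are illustrative, not taken from the text):

```python
import math
import random

def sample_dwell_times(n, T, Tg, seed=7):
    """Quenched dwell times from the trap picture: tau = exp(E/T) with
    trap depths E drawn from the exponential density f(E) = exp(-E/Tg)/Tg."""
    rng = random.Random(seed)
    return [math.exp(rng.expovariate(1.0 / Tg) / T) for _ in range(n)]

T, Tg = 0.5, 1.0               # glassy phase: alpha = T/Tg = 0.5 < 1
alpha = T / Tg
taus = sample_dwell_times(200_000, T, Tg)

# the tail of the dwell-time distribution should follow P(tau > x) = x^(-alpha)
for x in (10.0, 100.0):
    empirical = sum(t > x for t in taus) / len(taus)
    print(f"P(tau > {x:g}): empirical {empirical:.4f}, predicted {x ** -alpha:.4f}")
```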
When the distribution of the $E_{\bf x}$s is $f(E)=\frac{1}{T_g}\exp\left(-E/T_g\right)$, the average escape time is distributed according to Eq. (\[psitaudef\]), with $\alpha=T/T_g$. For low temperatures $T<T_g$, glassy behavior, i.e. aging and non-self-averaging, is observed [@Bertin]. The QTM is thus a version of transport on top of a random energetic landscape with an exponential distribution of trap depths. We wish to perform a separation of the QTM into two processes. The first one is a spatial process on top of the lattice. This process is defined by the jump probabilities $p({\bf x}'-{\bf x})$ with some local time. The other process is a temporal process that transforms the local time into the measurement time $t$ and is defined by the dwell times. How exactly the measurement time and the local time are defined and related to each other is crucial for the solution of the QTM. Measurement Time and Local Time {#loctime} ------------------------------- During the measurement time $t$, the particle has visited several lattice points and stayed exactly $\tau_{\bf x}$ on each visit to lattice site ${\bf x}$. The measurement time $t$ is then simply given by $$t=\sum_{\bf x} n_{\bf x}\tau_{\bf x} \label{measurtime}$$ where $n_{\bf x}$ is the number of times the particle visited site ${\bf x}$ and the summation is over all the lattice points. While the $\tau_{\bf x}$ are independent, identically distributed (I.I.D) random variables, the $n_{\bf x}$ are correlated. Indeed, the number of times the particle visited site ${\bf x}$ shouldn’t be very different from the number of times the particle visited adjacent sites. The local time for the spatial process is defined as $$S_\alpha=\sum_{\bf x} \left(n_{\bf x}\right)^\alpha. \label{localtime}$$ The variable $$\eta=t/(S_\alpha)^{1/\alpha} \label{etadef}$$ is of high interest, especially in the $t\to\infty$ and $S_\alpha\to\infty$ limit.
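Eqs. (\[measurtime\])-(\[etadef\]) can be made concrete in a few lines; the visit counts and dwell times below are illustrative numbers, not taken from the text:

```python
def measurement_time(visits, dwell):
    """Eq. (measurtime): t = sum_x n_x * tau_x (one quenched tau_x per site)."""
    return sum(n * tau for n, tau in zip(visits, dwell))

def local_time(visits, alpha):
    """Eq. (localtime): S_alpha = sum_x n_x^alpha."""
    return sum(n ** alpha for n in visits)

# a toy quenched landscape of four visited sites
visits = [3, 1, 2, 5]            # n_x: number of visits to each site
dwell = [2.0, 30.0, 1.5, 0.2]    # tau_x: quenched dwell time of each site
alpha = 0.5

t = measurement_time(visits, dwell)   # 3*2 + 1*30 + 2*1.5 + 5*0.2 = 40.0
s = local_time(visits, alpha)         # sqrt(3) + 1 + sqrt(2) + sqrt(5)
eta = t / s ** (1.0 / alpha)          # Eq. (etadef)
print(t, s, eta)
```

Note that $t$ depends on the quenched $\tau_{\bf x}$, while $S_\alpha$ depends only on the trajectory through the visitation numbers $n_{\bf x}$.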
Let us consider the $\{n_{\bf x}\}$ as fixed (the outcome of a given experiment); then $\eta$ depends on the realization of the disorder, i.e. $\{\tau_{\bf x}\}$. The PDF of $\eta$ is found by examining the disorder-averaged $\exp(-u\eta)$, i.e. $\langle \exp\left(-u\eta\right)\rangle$, which is given by $$\langle e^{-u\eta} \rangle =\displaystyle \langle \exp\left( -u \sum_{\bf x}\frac{n_{\bf x}\tau_{\bf x}}{(S_\alpha)^{1/\alpha}}\right) \rangle. \label{etalaplace}$$ Since the $\{\tau_{\bf x}\}$ are I.I.D, Eq. (\[etalaplace\]) takes the form $$\langle e^{-u\eta} \rangle =\displaystyle \prod_{\bf x} {\hat{\psi}}\left[\frac{n_{\bf x}u}{(S_\alpha)^{1/\alpha}}\right], \label{etalaplace02}$$ where the product is over all the lattice sites and ${\hat{\psi}}(u)=\int_0^\infty\exp(-\tau_{\bf x} u)\psi(\tau_{\bf x})\,d\tau_{\bf x}$. Due to Eq. (\[psitaudef\]) the small-$u$ ($u\to 0$) limit of ${\hat{\psi}}(u)$ is ${\hat{\psi}}(u)\sim 1-Au^\alpha$ and Eq. (\[etalaplace02\]) takes the form $$\langle e^{-u\eta} \rangle =\displaystyle \prod_{\bf x} \left( 1-\frac{n_{\bf x}^\alpha}{S_\alpha}Au^\alpha\right). \label{etalpalace03}$$ When all the multiplications are performed on the r.h.s. of Eq. (\[etalpalace03\]) the leading term is $1$. The next term is $-\sum_{\bf x}n_{\bf x}^\alpha Au^\alpha/S_\alpha$, which is simply $-A u^\alpha$. The following term is $\frac{1}{2}\sum_{\bf x}\sum_{{\bf x}'}n_{\bf x}^\alpha n_{{\bf x}'}^\alpha A^2 u^{2\alpha}/S_\alpha^2$, which takes the form $\frac{1}{2}A^2u^{2\alpha}$. By computing the next terms with higher orders of $u$ we obtain that the r.h.s. is of the form $\sum_{j=0}^\infty(-Au^\alpha)^j/\Gamma[j+1]$, which is simply the Taylor expansion of $\exp\left(-Au^\alpha\right)$. When taking into account the higher orders of $u$ in the expansion of ${\hat{\psi}}(u)=1-Au^\alpha+Bu^\beta+...$ (where $\beta>\alpha$), we show in Appendix \[sbetaproof\] that in the limit of $S_\alpha\to\infty$ all these terms converge to $0$ and do not contribute to the r.h.s. of Eq.
(\[etalpalace03\]). Finally we can state that in the large $S_\alpha$ limit $$\langle e^{-u\eta} \rangle = e^{-A u^\alpha} \label{etalaplacefnl}$$ which means that the PDF of $\eta$ is the one-sided L[é]{}vy stable distribution $l_{\alpha,A,1}$ [@Klafter; @Barkai]. We have thus obtained the distribution of $\eta$, and with it the distribution of the measurement time $t$ for a given local time $S_\alpha$, since $t=S_\alpha^{1/\alpha}\eta$. Because $S_\alpha$ is positive and strictly growing as we let the particle jump from one lattice point to another, we can invert the relation in Eq. (\[etadef\]), $S_\alpha=(t/\eta)^\alpha$, use the known distribution of $\eta$, and obtain the PDF of $S_\alpha$ for a given measurement time $t$, $${\cal P}_t\left(S_\alpha\right)\sim \frac{t}{\alpha}S_\alpha^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{S_\alpha^{1/\alpha}}\right) \label{salphadist}$$ in the large $t$ limit. The measurement time $t$ is the quantity that is set in any experiment or calculation. Eq. (\[salphadist\]) describes the probability to obtain various $S_\alpha$ when averaging over disorder and letting the process evolve up to time $t$. We use this disorder-averaged relation between the local time $S_\alpha$ and $t$ in the next subsection while constructing the representation of the QTM propagator in terms of the two processes.

Subordination {#Subordintaion}
-------------

The probability $p({\bf x}'-{\bf x})$ describes the transition probability between two lattice points. It completely determines the spatial process on top of any translationally-invariant lattice, as long as we don’t take the disorder due to traps into account. For example, it determines the probability to find the particle at position ${\bf x}$ after $N$ jumps. In this case $N$ is the local time of the spatial process; the process is terminated when the number of performed jumps reaches a specific threshold and the position is recorded.
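For the special case $\alpha=1/2$, $A=1$ the one-sided stable law has a closed form (quoted later in Eq. (\[lohalfdist\])), which coincides with the law of $1/(2Z^2)$ for a standard Gaussian $Z$ — a standard property of the Lévy–Smirnov density, not a result of this paper. The short Monte Carlo sketch below (our own) uses it to check Eq. (\[etalaplacefnl\]) and to sample ${\cal P}_t(S_\alpha)$ by inverting Eq. (\[etadef\]):

```python
import numpy as np

rng = np.random.default_rng(1)

# eta ~ l_{1/2,1,1}: Levy-Smirnov law, sampled as 1/(2 Z^2) for Gaussian Z
Z = rng.standard_normal(1_000_000)
eta = 1.0 / (2.0 * Z ** 2)

# Laplace-transform check of Eq. (etalaplacefnl): <exp(-u eta)> = exp(-u^{1/2})
u = 1.0
mc = np.exp(-u * eta).mean()
exact = np.exp(-np.sqrt(u))

# inverting Eq. (etadef): local times compatible with measurement time t,
# i.e. samples from P_t(S_alpha) of Eq. (salphadist)
t, alpha = 1e3, 0.5
S_alpha = (t / eta) ** alpha
```

With $10^6$ samples the Monte Carlo estimate of the Laplace transform agrees with $e^{-\sqrt{u}}$ to a few parts in a thousand.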
Any strictly growing function of the jumps can be considered as a local time, specifically $S_\alpha$. When the process starts $S_\alpha$ equals zero, and its value is updated each time the particle performs a jump. As $S_\alpha$ crosses a given value the process is terminated. The quantity $P_{S_\alpha}({\bf x})$ is the probability to find the particle at position ${\bf x}$ (starting from the origin) after local time $S_\alpha$ has passed. Due to the dependence of $S_\alpha$ on the local visitation numbers $n_{\bf x}$ (Eq. (\[localtime\])), the local time is a function of both the number of jumps and the trajectory taken by the particle. The PDF $P({\bf x},t)$ to find the particle at position ${\bf x}$ after measurement time $t$ is obtained by conditioning on all the possible $S_\alpha$ that can occur during the process. One needs to sum over all the possible $P_{S_\alpha}({\bf x})$ multiplied by the appropriate probability to observe such $S_\alpha$ at time $t$, for a given disorder. After averaging over disorder the PDF takes the form $$\langle P({\bf x},t) \rangle=\sum_{S_\alpha}P_{S_\alpha}({\bf x}) {\cal P}_t(S_\alpha) \label{subordination01}$$ and due to Eq. (\[salphadist\]), in the $t\to\infty$ limit we obtain $$\langle P({\bf x},t) \rangle\sim\int_0^\infty P_{S_\alpha}({\bf x}) \frac{t}{\alpha}S_\alpha^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{S_\alpha^{1/\alpha}}\right)\,dS_\alpha, \label{subordination02}$$ where we have replaced the summation by an integral [@Bouchaud]. Eq. (\[subordination02\]) represents the propagator of the QTM as a subordination of two processes: a spatial process, which has no disorder but is terminated at the random local time $S_\alpha$, and a temporal process, which involves the disorder and maps the local time onto the measurement time. While the function $l_{\alpha,A,1}(\dots)$ is known, the missing part is the probability $P_{S_\alpha}({\bf x})$, which we obtain below for the case of a transient spatial process.
Local time $S_\alpha$ {#salphaSec}
=====================

The propagator $P_{S_\alpha}({\bf x})$ lacks the disorder that is present in the QTM and describes a simple jump process on a lattice, but nevertheless it is highly non-trivial. The main complication is the stopping time $S_\alpha$, which depends on the path taken by the particle. If the local time is simply the number of jumps $N$, the probability to find the particle at ${\bf x}$ after $N$ jumps is completely defined by the corresponding probabilities after $(N-1)$ jumps. This is not the case for $P_{S_\alpha}({\bf x})$. An arrival at ${\bf x}$ does not increase $S_\alpha$ by $1$, as happens with the number of jumps; rather, the increase of $S_\alpha$ depends on the total number of times that ${\bf x}$ was previously visited. In the case of a $1$-dimensional simple random walk (RW) the shape of $P_{S_\alpha}({\bf x})$ was computed previously [@Burov1] in the limit $\alpha\to 0$. In this example $P_{S_\alpha}({\bf x})$ has a very distinctive V shape (with a minimum at the origin) and is quite different from the regular Gaussian propagator of the random walk. Before obtaining $P_{S_\alpha}({\bf x})$, a study of the properties of $S_\alpha$ is in order, specifically of the first two moments of $S_\alpha$, i.e. ${\overline{S_\alpha}}$ and ${\overline{S_\alpha^2}}$. The averaging ${\overline{\dots}}$ is with respect to many trajectories of the RW on a lattice without traps. The results of Sec.\[meansalpha\] and Sec.\[secondsalpha\] are summarized in Sec.\[deltafunction\].

${\overline{S_\alpha}}(N)$ {#meansalpha}
--------------------------

The mean value of $S_\alpha$ is obtained from Eq. (\[localtime\]), ${\overline{S_\alpha}}=\sum_{\bf x}\overline{n_{\bf x}^\alpha}$.
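Before computing the moments, a two-line check (our own sketch, with hypothetical trajectory names) makes the path dependence discussed above concrete: two trajectories with the same number of jumps $N$ give very different local times. A walk hopping back and forth between two sites realizes a minimal-type value $\sim 2(N/2)^\alpha$, while a walk that never revisits a site realizes the maximal value $N+1$:

```python
from collections import Counter

def local_time(sites, alpha):
    """S_alpha = sum_x n_x^alpha over visitation counts, Eq. (localtime)."""
    return sum(n ** alpha for n in Counter(sites).values())

N, alpha = 100, 0.5
zigzag = [i % 2 for i in range(N + 1)]   # hops between sites 0 and 1
ballistic = list(range(N + 1))           # never revisits a site

s_min = local_time(zigzag, alpha)        # = 51^0.5 + 50^0.5, about 14.2
s_max = local_time(ballistic, alpha)     # = N + 1 = 101
```

The quantity $S_\alpha$ is therefore not a function of $N$ alone, which is exactly why $P_{S_\alpha}({\bf x})$ cannot be built from a one-step recursion.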
Defining $\beta_N({\bf x};k)$ to be the probability for the RW to visit lattice site ${\bf x}$ exactly $k$ times after $N$ steps, we write the average local time after $N$ steps as $${\overline{S_\alpha}}(N)=\sum_{\bf x}\sum_{k=0}^\infty k^\alpha \beta_N({\bf x};k). \label{salphamean01}$$ The probability $\beta_N({\bf x};k)$ is the probability to arrive at ${\bf x}$ at least $k$ times minus the probability to arrive at least $k+1$ times during $N$ jumps. Since the $k$th arrival must occur during these $N$ jumps, $\beta_N({\bf x};k)$ is expressed as $$\begin{array}{ll} \beta_N({\bf x};k)=\sum_{m=1}^N f_m({\bf x};k) - \sum_{m=1}^N f_m({\bf x};k+1) & \qquad {\bf x}\neq {\bf 0} \\ \beta_N({\bf 0};k)=\sum_{m=1}^N f_m({\bf 0};k-1) - \sum_{m=1}^N f_m({\bf 0};k) & \end{array} \label{betaxk01}$$ where $f_N({\bf x};k)$ is the probability to reach site ${\bf x}$ for the $k$th time after $N$ steps. By defining $f_N({\bf 0})$ to be the probability of first return to the origin (${\bf x}={\bf 0}$) after $N$ steps, we write the recursive form for $f_N({\bf x};k)$ $$f_N({\bf x};k+1)=\sum_{m=0}^N f_m({\bf x};k)f_{N-m}({\bf 0}). \label{firstpassagesdef}$$ The generating function ${\hat f}_z({\bf x};k)=\sum_{N=0}^\infty z^N f_N({\bf x};k)$ is then $${\hat f}_z({\bf x};k)=\left[{\hat f}_z({\bf 0})\right]^{k-1}{\hat f}_z({\bf x}) \label{frecursive}$$ where ${\hat f}_z({\bf 0})$ is the generating function of the probability of first return to ${\bf 0}$ and ${\hat f}_z({\bf x})$ is the generating function of the probability of first arrival at ${\bf x}$. Eq. (\[betaxk01\]) and Eq. (\[frecursive\]) provide the generating function of $\beta_N({\bf x};k)$ $$\begin{array}{ll} {\hat \beta}_z({\bf x};k)=\frac{1}{1-z}\left[1-{\hat f}_z({\bf 0})\right]\left[{\hat f}_z({\bf 0})\right]^{k-1}{\hat f}_z({\bf x}) & \qquad {\bf x}\neq {\bf 0} \\ {\hat \beta}_z({\bf 0};k)=\frac{1}{1-z}\left[1-{\hat f}_z({\bf 0})\right]\left[{\hat f}_z({\bf 0})\right]^{k-1} & \end{array} \label{betaxk02}$$ Eq.
(\[betaxk02\]) allows us to compute the generating function of ${\overline{S_\alpha}}(N)$, while the summation $\sum_{\bf x}{\hat f}_z({\bf x})$ can be obtained by means of $c_N({\bf x})$, the probability to find the particle at position ${\bf x}$ after $N$ steps (started at ${\bf 0}$). Since $c_N({\bf x})$ is related to $f_N({\bf x})$ by $$c_N({\bf x}) = \delta_{N,0}\delta_{{\bf x},{\bf 0}} + \sum_{m=1}^N f_m({\bf x}) c_{N-m}({\bf 0}) \label{cnxdefinition}$$ the generating functions ${\hat f}_z({\bf x})$ and ${\hat c}_z({\bf x})$ are connected by $$\begin{array}{l} {\hat f}_z({\bf x\neq 0}) = {\hat c}_z({\bf x\neq 0})\big/{\hat c}_z({\bf 0}) \\ {\hat f}_z({\bf 0}) =1- 1\big/{\hat c}_z({\bf 0}) . \end{array} \label{candfgenerating}$$ Together with the fact that $\sum_{\bf x} c_N({\bf x}) =1$ and consequently $\sum_{\bf x} {\hat c}_z({\bf x})=1/(1-z)$, Eqs.(\[salphamean01\],\[betaxk02\],\[candfgenerating\]) result in $${\overline {{\hat{S}_\alpha}}}(z) = \left[\frac{1-{\hat f}_z({\bf 0})}{1-z} \right]^2 \sum_{k=0}^\infty k^\alpha {\hat f}_z({\bf 0})^{k-1}. \label{salphameanz01}$$ For the case when the spatial process is transient and the probability of eventually returning to the origin, $Q_0=\sum_{N=0}^\infty f_N({\bf 0})$, is less than $1$, the asymptotic behavior ($N\to\infty$) is readily obtained from Eq. (\[salphameanz01\]). For $z\to 1$, ${\hat f}_z({\bf 0})\to Q_0<1$. The fact that $\sum_{N=0}^\infty N z^N = z/(1-z)^2$ and the Tauberian theorem [@Weiss] imply that $${\overline {S_\alpha}}(N)\sim \Lambda N \qquad (N\to\infty) \label{salphaNlarge}$$ where $$\Lambda = \frac{\left[1-Q_0\right]^2}{Q_0} Li_{-\alpha}(Q_0) \label{lambdaconst}$$ and $Li_a(b)=\sum_{k=1}^\infty b^k/k^a$ is the polylogarithm function. The form of the average $S_\alpha$ expressed in Eq. (\[salphaNlarge\]) will be essential in the following for the asymptotic representation of $\langle P({\bf x},t) \rangle$ in the transient case by means of $P_{S_\alpha}({\bf x})$.
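The prefactor $\Lambda$ is straightforward to evaluate numerically. The sketch below (our own) sums the polylogarithm series $Li_{-\alpha}(Q_0)=\sum_{k\geq 1}k^{\alpha}Q_0^{k}$ directly and implements Eq. (\[lambdaconst\]):

```python
def polylog_neg(alpha, x, tol=1e-15):
    """Li_{-alpha}(x) = sum_{k>=1} k^alpha x^k, summed directly for 0 < x < 1."""
    s, xk, k = 0.0, 1.0, 0
    while True:
        k += 1
        xk *= x
        term = k ** alpha * xk
        s += term
        if term < tol * s:
            return s

def Lambda(alpha, Q0):
    """Prefactor of the mean local time, <S_alpha>(N) ~ Lambda N, Eq. (lambdaconst)."""
    return (1 - Q0) ** 2 / Q0 * polylog_neg(alpha, Q0)
```

For instance, $\Lambda(\alpha=1/2,\,Q_0=0.6)\approx 0.596$, the setting used in Fig. \[salphaconverge\].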
The average behavior of $S_\alpha$ suggests that the local time $S_\alpha$ is not very different from the regular local time, i.e. the number of jumps $N$, at least for the transient case $Q_0<1$. The behavior of the second moment of $S_\alpha$ should indicate whether one can indeed replace the local time $S_\alpha$ by a linear function of $N$.

${\overline{S_\alpha^2}}(N)$ {#secondsalpha}
----------------------------

The goal of this section is to provide the conditions for a plausible substitution of $S_\alpha$ by its average value ${\overline{S_\alpha}}$. The second moment of $S_\alpha$ is computed in a fashion similar to the computation of the first moment in Sec. \[meansalpha\], and Eq. (\[salphamean01\]) for the first moment of $S_\alpha$ is generalized to $${\overline{S_\alpha^2}}(N) = \sum_{{\bf x}}\sum_{{\bf x}'}\sum_{k_1}\sum_{k_2}k_1^\alpha k_2 ^\alpha \beta_N\left({\bf x};k_1,{\bf x}';k_2\right) \label{secmomdef}$$ where $\beta_N\left({\bf x};k_1,{\bf x}';k_2\right)$ is the probability that in $N$ steps the RW will visit site ${\bf x}$ exactly $k_1$ times and the site ${\bf x}'$ exactly $k_2$ times. This probability is calculated in terms of $f_N\left({\bf x},k_1;{\bf x}',k_2\right)$, the probability to arrive at ${\bf x}$ after $N$ steps for the $k_1$th time while visiting ${\bf x}'$ exactly $k_2$ times. $\beta_N\left({\bf x};k_1,{\bf x}';k_2\right)$ is the probability that the $k_1$th arrival was performed but not the $(k_1+1)$th, i.e. $$\begin{array}{ll} \beta_N\left({\bf x};k_1,{\bf x}';k_2\right)= & \sum_{l=0}^N \left\{\left[ f_l\left({\bf x},k_1;{\bf x}',k_2\right)-f_l\left({\bf x},k_1+1;{\bf x}',k_2\right) \right]\right. \\ & \left. +\left[ f_l\left({\bf x}',k_2;{\bf x},k_1\right)-f_l\left({\bf x}',k_2+1;{\bf x},k_1\right) \right]\right\}\qquad (k_1+k_2\geq 2). \end{array} \label{twopointbeta01}$$ The range $k_1>0$ and $k_2>0$ is sufficient since $\beta_N({\bf x},k_1;{\bf x'},k_2)$ is multiplied by $k_1^\alpha k_2^\alpha$ in Eq. (\[secmomdef\]).
We define the probability to start at ${\bf x}$ and after $N$ steps to reach ${\bf x}'$, without visiting ${\bf x}$ or ${\bf x}'$ on the way, as $M_N({\bf x},{\bf x}')$, and the probability to start at ${\bf x}$ and return to the same site after $N$ steps, without visiting ${\bf x}$ or ${\bf x}'$ on the way, as $T_N({\bf x},{\bf x}')$. The probability $f_N({\bf x},k_1;{\bf x}';k_2)$ is recursively expressed in terms of $M_N({\bf x},{\bf x}')$ and $T_N({\bf x},{\bf x}')$: $$f_N({\bf x},k_1+1;{\bf x}';k_2)= \sum_{l=0}^N \left[ f_l({\bf x},k_1;{\bf x}';k_2)T_{N-l}({\bf x},{\bf x}')+f_l({\bf x}',k_2;{\bf x};k_1)M_{N-l}({\bf x}',{\bf x}) \right] \label{frstpsgmt01}$$ where $f_N({\bf x},0;{\bf x'},k_2)=0$. Eq. (\[frstpsgmt01\]) leads to the following expression in $z$ space $${\hat f}_z({\bf x},k_1+1;{\bf x}',k_2)= {\hat f}_z({\bf x},k_1;{\bf x}',k_2){\hat T}_z({\bf x},{\bf x}')+ {\hat f}_z({\bf x}',k_2;{\bf x},k_1){\hat M}_z({\bf x}',{\bf x}). \label{frstpsgmt02}$$ Applying the additional transformation $k_1\to\xi_1$ and $k_2\to\xi_2$, by performing the double summation $\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty\xi_1^{k_1}\xi_2^{k_2}$ on both sides of Eq. (\[frstpsgmt02\]), yields $$\left[ 1-\xi_1{\hat T}_z({\bf x},{\bf x'}) \right] {\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x}',\xi_2) - \xi_1{\hat M}_z({\bf x}',{\bf x}) {\hat {\tilde f}}_z({\bf x}',\xi_2;{\bf x},\xi_1) =\xi_1{\hat f'}_z({\bf x},1;{\bf x'},\xi_2) \label{frstpsgxi01}$$ where ${\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x}',\xi_2)=\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty\xi_1^{k_1}\xi_2^{k_2}{\hat f}_z({\bf x},k_1;{\bf x}',k_2)$ and\ ${\hat f'}_z({\bf x},1;{\bf x'},\xi_2)=\sum_{k_2=1}^\infty\xi_2^{k_2}{\hat f}_z({\bf x},1;{\bf x}',k_2)$.
In a similar fashion we obtain $$\left[ 1-\xi_2{\hat T}_z({\bf x'},{\bf x}) \right] {\hat {\tilde f}}_z({\bf x'},\xi_2;{\bf x},\xi_1) - \xi_2{\hat M}_z({\bf x},{\bf x'}) {\hat {\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2) =\xi_2{\hat f'}_z({\bf x'},1;{\bf x},\xi_1). \label{frstpsgxi02}$$ Eqs. (\[frstpsgxi01\],\[frstpsgxi02\]) are linear equations in terms of ${\hat{\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2)$ and ${\hat{\tilde f}}_z({\bf x'},\xi_2;{\bf x},\xi_1)$, which have the solution $${\hat{\tilde f}}_z({\bf x},\xi_1;{\bf x'},\xi_2) = \frac{\xi_1\left[1-\xi_2 {\hat T}_z({\bf x'},{\bf x})\right]{\hat f'}_z({\bf x},1;{\bf x'},\xi_2)+\xi_1\xi_2{\hat M}_z({\bf x'},{\bf x}){\hat f'}_z({\bf x'},1;{\bf x},\xi_1)} {[1-\xi_1{\hat T}_z({\bf x},{\bf x'})] [1-\xi_2{\hat T}_z({\bf x'},{\bf x})] -\xi_1\xi_2{\hat M}_z({\bf x},{\bf x'}){\hat M}_z({\bf x'},{\bf x})}. \label{genformf01}$$ Since $f_N({\bf x'},k+1;{\bf x},0)=\sum_{l=0}^N f_l({\bf x'},k;{\bf x},0) T_{N-l}({\bf x'},{\bf x})$, the transform ${\hat f'}_z({\bf x'},\xi_2;{\bf x},0)=\sum_{k=1}^\infty \xi_2^k {\hat f}_z({\bf x'},k;{\bf x},0)$ is $${\hat f'}_z({\bf x'},\xi_2;{\bf x},0) = \frac{\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0)}{1-\xi_2 {\hat T}_z({\bf x'},{\bf x})}. \label{specformf02}$$ By using the expression $f_N({\bf x},1;{\bf x'},k_2)=\sum_{l=0}^N f_l({\bf x'},k_2;{\bf x},0) M_{N-l}({\bf x'},{\bf x})$ and Eq. (\[specformf02\]) we obtain $${\hat f'}_z({\bf x},1;{\bf x'},\xi_2) = \frac{\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0) {\hat M}_z({\bf x'},{\bf x})} {1-\xi_2 {\hat T}_z({\bf x'},{\bf x})}, \label{specformf03}$$ and then by substitution of Eqs. (\[genformf01\],\[specformf03\]) in Eq. (\[twopointbeta01\]), and using Eq.
(\[frstpsgmt02\]), we obtain for ${\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)=\sum_{N=0}^\infty\sum_{k_1=1}^\infty\sum_{k_2=1}^\infty z^N {\xi_1}^{k_1}{\xi_2}^{k_2} \beta_N({\bf x};k_1,{\bf x'};k_2)$ $$\begin{array}{ll} {\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)= & \frac{1}{1-z}\left\{ \left( 1-{\hat T}_z({\bf x},{\bf x'})-{\hat M}_z({\bf x'},{\bf x}) \right) \frac{\xi_1\xi_2 {\hat f}_z({\bf x'},1;{\bf x},0) {\hat M}_z({\bf x'},{\bf x})+{\xi_1}^2\xi_2{\hat M}_z({\bf x'},{\bf x})\frac{ {\hat f}_z({\bf x},1;{\bf x},0) {\hat M}_z({\bf x},{\bf x'})} {1-\xi_1 {\hat T}_z({\bf x},{\bf x'})}} {[1-\xi_1{\hat T}_z({\bf x},{\bf x'})] [1-\xi_2{\hat T}_z({\bf x'},{\bf x})] -\xi_1\xi_2{\hat M}_z({\bf x},{\bf x'}){\hat M}_z({\bf x'},{\bf x})}\right. \\ & \left.+ \left( 1-{\hat T}_z({\bf x'},{\bf x})-{\hat M}_z({\bf x},{\bf x'}) \right) \frac{\xi_2\xi_1 {\hat f}_z({\bf x},1;{\bf x'},0) {\hat M}_z({\bf x},{\bf x'})+{\xi_2}^2\xi_1{\hat M}_z({\bf x},{\bf x'})\frac{ {\hat f}_z({\bf x'},1;{\bf x'},0) {\hat M}_z({\bf x'},{\bf x})} {1-\xi_2 {\hat T}_z({\bf x'},{\bf x})}} {[1-\xi_2{\hat T}_z({\bf x'},{\bf x})] [1-\xi_1{\hat T}_z({\bf x},{\bf x'})] -\xi_2\xi_1{\hat M}_z({\bf x'},{\bf x}){\hat M}_z({\bf x},{\bf x'})} \right\} . \end{array} \label{twopointbeta02}$$ The generating functions of the two-point probabilities $T_N({\bf x},{\bf x'})$, $M_N({\bf x},{\bf x'})$ and $f_N({\bf x},1;{\bf x'},0)$ that define the behavior of ${\hat{\tilde{\beta}}}_z({\bf x},\xi_1;{\bf x'},\xi_2)$ are expressed in terms of the generating function of the probability of first arrival $f_N({\bf x})$, which is provided by Eq. (\[candfgenerating\]).
In Appendix \[twopintgen\] we show that $$\begin{array}{lll} {\hat f}_z({\bf x},1;{\bf x'},0) & = & \frac{{\hat f}_z({\bf x})-{\hat f}_z({\bf x'}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})} \\ {\hat M}_z({\bf x},{\bf x'}) & = & \frac{{\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})} \\ {\hat T}_z({\bf x},{\bf x'}) & = & \frac{{\hat f}_z({\bf 0})-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}. \end{array} \label{twopointonepoint}$$ Since the generating function of $\beta_N({\bf x},k_1;{\bf x'},k_2)$ is represented in terms of ${\hat f}_z({\bf x})$, ${\hat f}_z({\bf x'})$, ${\hat f}_z({\bf x-x'})$ and ${\hat f}_z({\bf x'-x})$, the summation over ${\bf x}$ and ${\bf x'}$ can be carried out in the $t\to\infty$ limit. Due to Eq. (\[candfgenerating\]) and the already mentioned fact that $\sum_{\bf x}{\hat c}_z({\bf x})=1/(1-z)$, the summation over all possible ${\bf x}$ and ${\bf x'}$ on the right hand side of Eq. (\[twopointbeta02\]) can be expanded in a power series in $1/(1-z)$. The Tauberian theorem [@Weiss] states that the leading order in $N$ space is provided by the leading order of $1/(1-z)$ in the $z\to 1$ limit in $z$ space. It is clear that $\sum_{\bf x}\sum_{\bf x'}{\hat c}_z({\bf x}){\hat c}_z({\bf x'})=1/(1-z)^2$, but in Eq. (\[twopointbeta02\]) all the products of generating functions of single-point probabilities have mixed arguments, e.g. ${\hat c}_z({\bf x}){\hat c}_z({\bf x-x'})$, ${\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$ and all other possibilities. Moreover, substitution of Eq. (\[twopointonepoint\]) and Eq. (\[candfgenerating\]) in Eq. (\[twopointbeta02\]) shows that most of the products include more than two factors, e.g. ${\hat c}_z({\bf x}){\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$.
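The taboo generating functions can also be checked numerically. The sketch below (our own construction, not the appendix's derivation) computes $f_N$, $T_N$ and $M_N$ for the $1$-dimensional biased RW by propagating occupation probabilities with absorbing sites, truncates the generating functions at $N_{\max}$ (negligible error at $z=1/2$), and verifies the ${\hat M}_z$ and ${\hat T}_z$ identities of Eq. (\[twopointonepoint\]) for ${\bf x}=0$, ${\bf x'}=1$:

```python
import numpy as np

q, z, Nmax = 0.7, 0.5, 200
L = Nmax + 2  # lattice half-width, never reached in Nmax steps

def fp_series(absorb, record):
    """Generating function sum_n z^n P(first visit to the set `absorb`
    happens at step n AND lands on site `record`), for a RW started at the
    origin (the starting point itself does not count as a visit)."""
    p = np.zeros(2 * L + 1)
    p[L] = 1.0                   # origin
    g, zn = 0.0, 1.0
    for _ in range(Nmax):
        new = np.zeros_like(p)
        new[1:] += q * p[:-1]    # jump right with probability q
        new[:-1] += (1 - q) * p[1:]
        p, zn = new, zn * z
        g += zn * p[L + record]  # absorbed now, at the recorded site
        for a in absorb:
            p[L + a] = 0.0       # taboo (absorbing) sites
    return g

f1  = fp_series([1], 1)     # first arrival at +1
fm1 = fp_series([-1], -1)   # first arrival at -1
f0  = fp_series([0], 0)     # first return to 0
T01 = fp_series([0, 1], 0)  # 0 -> 0 avoiding {0, 1} in between
M01 = fp_series([0, 1], 1)  # 0 -> 1 avoiding {0, 1} in between
den = 1 - f1 * fm1
```

With ${\bf x}=0$, ${\bf x'}=1$ one has ${\hat f}_z({\bf x'-x})={\hat f}_z(1)$ and ${\hat f}_z({\bf x-x'})={\hat f}_z(-1)$, so the identities read $T01=(f0-f1\cdot fm1)/den$ and $M01=(f1-f0\cdot f1)/den$; both hold to machine precision, as does the closed form ${\hat f}_z({\bf 0})=1-\sqrt{1-4q(1-q)z^2}$ following from ${\hat c}_z({\bf 0})$ and Eq. (\[candfgenerating\]).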
In Appendix \[fouriecxz\] we show that $$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}){\hat c}_z({\bf x'-x})= \frac{1}{(1-z)^2} \label{doublesumz01}$$ for any transient RW (the roles of ${\bf x}$ and ${\bf x'}$ can be interchanged). Any other term of the form $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x})$ or $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}){\hat c}_z({\bf x}){\hat c}_z({\bf x'-x})$ (or, more generally, any product of more than two factors) grows slower than $1/(1-z)^2$ when $z\to 1$ (see Appendix \[fouriecxz\]). This means that when expanding the denominators in Eq. (\[twopointbeta02\]) and utilizing Eq. (\[twopointonepoint\]), all the terms in the expansion, except the zero order, i.e. $1/[1-\xi_2{\hat T}_z({\bf x'},{\bf x})] [1-\xi_1{\hat T}_z({\bf x},{\bf x'})]$, will grow slower than $1/(1-z)^2$ after summation over ${\bf x}$ and ${\bf x'}$. Then in the $z\to 1$ limit we use $$\begin{array}{llll} {\hat f}_z({\bf x},1;{\bf x'},0) & = & {\hat f}_z({\bf x}) & \\ {\hat M}_z({\bf x},{\bf x'}) & = & {\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x}) & \qquad (z\to 1) \\ {\hat T}_z({\bf x},{\bf x'}) & = & {\hat f}_z({\bf 0}) & \end{array} \label{zto1lim01}$$ and the only relevant terms in the summation over ${\bf x}$ and ${\bf x'}$ are $$\begin{array}{ll} \sum_{\bf x}\sum_{\bf x'} {\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)\underset{z\to 1}{\longrightarrow} \sum_{\bf x}\sum_{\bf x'} \frac{(1-{\hat f}_z({\bf 0}))^2}{1-z} \frac{\xi_1 \xi_2}{[1-\xi_1{\hat f}_z({\bf 0})] [1-\xi_2{\hat f}_z({\bf 0})] } & \left\{ {\hat f}_z({\bf x'}) {\hat f}_z({\bf x-x'}) \right. \\ & \left.+ {\hat f}_z({\bf x}) {\hat f}_z({\bf x'-x}) \right\}. \end{array} \label{sumbetazto1}$$ Substituting the expressions in Eq. (\[doublesumz01\]) and Eq. (\[candfgenerating\]) into Eq.
(\[sumbetazto1\]) leads to $$\sum_{\bf x}\sum_{\bf x'} {\hat{\tilde{\beta}}}_z({\bf x};\xi_1,{\bf x'};\xi_2)\underset{z\to 1}{\longrightarrow} \frac{2(1-{\hat f}_z({\bf 0}))^4}{(1-z)^3} \frac{\xi_1 \xi_2}{[1-\xi_1{\hat f}_z({\bf 0})] [1-\xi_2{\hat f}_z({\bf 0})] }, \label{sumbetazto1_01}$$ and since $\sum_{k=1}^\infty \xi^k \left({\hat f}_z({\bf 0})\right)^{k-1}=\xi\big/[1-\xi{\hat f}_z({\bf 0})]$, $$\sum_{\bf x}\sum_{\bf x'} {\hat{{\beta}}}_z({\bf x};k_1,{\bf x'};k_2)\underset{z\to 1}{\longrightarrow} \frac{2(1-{\hat f}_z({\bf 0}))^4}{(1-z)^3} {\hat f}_z({\bf 0})^{k_1-1} {\hat f}_z({\bf 0})^{k_2-1}. \label{sumbetazto1_1}$$ Eventually from Eq. (\[sumbetazto1\_1\]) and Eq. (\[secmomdef\]) we obtain $${\overline{\hat{S_\alpha^2}}}(z)\underset{z\to 1}{\longrightarrow} \frac{(1-{\hat f}_z({\bf 0}))^4}{{\hat f}_z({\bf 0})^2} \left\{Li_{-\alpha}\left({\hat f}_z({\bf 0})\right)\right\}^2 \frac{2}{(1-z)^3} \label{secmomzform01}$$ and then, according to the identity $\sum_{N=0}^\infty z^N N(N-1)=2z^2/(1-z)^3$ and the Tauberian theorem, the asymptotic behavior of ${\overline{S_\alpha^2}}(N)$ is $${\overline{{S_\alpha^2}}}(N)\sim \frac{(1-Q_0)^4}{Q_0^2} \left\{Li_{-\alpha}(Q_0)\right\}^2 N^2 \qquad N\to\infty. \label{secmomzform02}$$ This relation shows that for any transient RW, in the large $N$ limit the second moment of $S_{\alpha}(N)$ converges to the square of the mean of $S_{\alpha}(N)$, i.e. $$\frac{{\overline{S_{\alpha}(N)^2}}} {{\overline{S_{\alpha}(N)}}^2} \underset{N\to\infty}{\longrightarrow}1. \label{convergensmom}$$ [0.48]{} ![ Convergence of $S_\alpha(N)$ to $\Lambda N$. Both panels describe the behavior of $S_\alpha$ for a one-dimensional RW with probability $0.7$ to make a step $+1$ and probability $0.3$ to make a step $-1$. The thick line in both panels is the simulation result while the dashed line is the theoretical prediction of Eq. (\[deltaconverge\]) with $\Lambda$ provided by Eq. (\[lambdaconst\]). For both panels $Q_0=0.6$.
Panel [**(a)**]{} presents the case with $\alpha=0.5$ while panel [**(b)**]{} presents the case with $\alpha=0.25$. []{data-label="salphaconverge"}](./sconverg_a.pdf "fig:"){width="\textwidth"}   [0.48]{} ![](./sconverg_b.pdf "fig:"){width="\textwidth"}

Convergence to a $\delta$-function {#deltafunction}
----------------------------------

We have shown that the distribution of $S_\alpha$ is such that in the $N\to\infty$ limit the square of the first moment converges to the second moment. The minimal value of $S_\alpha/N$ is $2(N/2)^\alpha /N$, which is attained if the RW performs $N/2$ back-and-forth jumps between two sites. The maximal value of $S_\alpha /N$ is $1$, which is attained if the RW never visits any site twice. Since those two limits are attained only for very specific trajectories of the RW, the probability of the minimal and maximal values of $S_\alpha$ converges to $0$ in the $N\to\infty$ limit. For the random variable $$s=S_\alpha/N, \label{sdivNdef}$$ the PDF $\lambda(s)$ is defined for $0\leq s \leq 1$, and $\lambda(s)\to 0$ when $s\to 0$ or $s\to 1$. Moreover, the proven equivalence of ${\overline{S_\alpha^2}}$ and ${\overline{S_\alpha}}^2$ in the $N\to\infty$ limit means that $$\left(\int_0^1s\lambda(s)\,ds\right)^2=\int_0^1\left(s\right)^2\lambda(s)\,ds\qquad N\to\infty.
\label{jenseneq01}$$ Since $\lambda(s)$ is a PDF and $\left(\dots\right)^2$ is a strictly convex function, Jensen's inequality [@Jensen] states that $\left(\int_0^1s\lambda(s)\,ds\right)^2\leq\int_0^1\left(s\right)^2\lambda(s)\,ds$, and the equality is achieved only when $s$ is constant, i.e. when $\lambda(s)$ is a $\delta$-function. Then from Eq. (\[salphaNlarge\]) we obtain that $$\lambda(s)\underset{N\to\infty}{\longrightarrow} \delta\left(s-\Lambda\right), \label{deltaconverge}$$ where the constant $\Lambda$ is provided in Eq. (\[lambdaconst\]). This result means that in the large $N$ limit the local time $S_\alpha$ and the number of jumps $N$ are equivalent up to the transformation $S_\alpha\to \Lambda N$. This result is presented in Fig. \[salphaconverge\], where the random variable $S_\alpha(N)/N$ (obtained from numerical simulation) converges to a non-zero constant for large $N$. In the next section we utilize this result to establish the form of $P_{S_\alpha}({\bf x})$ and a simplified representation of the positional probability density function, i.e. $P({\bf x},t)$.

Double subordination and the equivalence of CTRW and the transient QTM {#doublesubordination}
======================================================================

The PDF $P({\bf x},t)$, as presented by Eq. (\[subordination02\]), depends on $P_{S_\alpha}({\bf x})$. The form of $P_{S_\alpha}({\bf x})$ is obtained by using the subordination approach once more: the local time $S_\alpha$ is subordinated to $N$, the number of jumps performed, and the spatial process is provided by $W_N({\bf x})$, the PDF of the regular RW, i.e. $$P_{S_\alpha}({\bf x})=\sum_{N=0}^\infty W_N({\bf x}) {\cal G}_{S_\alpha}(N,{\bf x}). \label{doublesub01}$$ ${\cal G}_{S_\alpha}(N,{\bf x})$ is the probability that $N$ steps were performed before reaching ${\bf x}$, for a given value of $S_\alpha$. In the previous section we have shown that in the $N\to\infty$ limit the PDF of $s=S_\alpha/N$, i.e.
$\lambda(s)$, converges to $\delta(s-\Lambda)$. For $\lambda(s)$, $S_\alpha$ is the random variable and $N$ is the parameter. For ${\cal G}_{S_\alpha}$, $N$ is the random variable and $S_\alpha$ is the parameter. The convergence of $\lambda(s)$ to a $\delta$-function shows that in the $N\to\infty$ limit these two quantities are interchangeable, and then for a transient RW $${\cal G}_{S_\alpha}(N,{\bf x})\underset{S_\alpha\to\infty}{\longrightarrow} \delta\left({S_\alpha-\Lambda N}\right), \label{gsalpharep}$$ independent of the value of ${\bf x}$. The double subordination approach gives the disorder-averaged PDF $\langle P({\bf x},t) \rangle$ the form $$\langle P({\bf x},t) \rangle=\sum_{S_\alpha}\sum_{N=0}^\infty W_N({\bf x}) {\cal G}_{S_\alpha}(N,{\bf x}) {\cal P}_t(S_\alpha) \label{doublesub02}$$ where we used Eqs.(\[subordination01\],\[doublesub01\]). When taking the limit $t\to\infty$, the form of ${\cal P}_t(S_\alpha)$ in Eq. (\[salphadist\]) dictates that only large $S_\alpha$ need to be considered, and then according to Eq. (\[gsalpharep\]) only large $N$ are of interest. Finally we obtain that $$\langle P({\bf x},t) \rangle \sim\int_0^\infty W_{ N}({\bf x}) \frac{t\big/\Lambda^{1/\alpha}}{\alpha} N^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t\big/\Lambda^{1/\alpha}}{N^{1/\alpha}}\right)\,dN \qquad t\to\infty, \label{pxtformfin}$$ where the transition to integration is the regular practice of the subordination technique [@Bouchaud]. It is important to notice that in the case of the continuous time random walk (CTRW) [@Weiss] the particle experiences at each jump a new waiting time $\tau$, independent of previous visits, even if it is currently located in a previously visited site. This makes the CTRW a kind of mean-field approximation of the QTM; specifically, according to Eq. (\[localtime\]), for the CTRW $S_\alpha=N$.
Accordingly, only one level of subordination is needed and $P_{S_\alpha}({\bf x})$ is simply $W_N({\bf x})$, which leads to $$\langle P({\bf x},t) \rangle_{CTRW} \sim\int_0^\infty W_{ N}({\bf x}) \frac{t}{\alpha} N^{-1/\alpha-1}l_{\alpha,A,1}\left(\frac{t}{N^{1/\alpha}}\right)\,dN \qquad t\to\infty. \label{pxtctrw}$$ Comparison of Eq. (\[pxtctrw\]) and Eq. (\[pxtformfin\]) leads to $$\langle P({\bf x},t) \rangle_{QTM} \sim \langle P({\bf x},t/\Lambda^{1/\alpha}) \rangle_{CTRW} \qquad t\to\infty, \label{equivalence}$$ or, simply put: the disorder-averaged propagator of a transient QTM is equivalent to the propagator of the CTRW taken at time $t/\Lambda^{1/\alpha}$. We have thus proved that a simple transformation of time for the CTRW, $$t\to t\big/\Lambda^{1/\alpha} \label{timechange}$$ makes this model sufficient to asymptotically represent the transient case of the QTM. Eq. (\[equivalence\]) states that whenever the propagator of the CTRW can be computed [@Barkai], the propagator of the QTM can be computed as well. The constant $\Lambda$ is provided by Eq. (\[lambdaconst\]) and displayed in Fig. \[qzeroplot\] for $0\leq Q_0 <1$. ![ Behavior of $\Lambda=\frac{\left[1-Q_0\right]^2}{Q_0} Li_{-\alpha}(Q_0)$ (the $Q_0$-dependent pre-factor of the temporal transformation $t\to t/\Lambda^{1/\alpha}$) as a function of the return probability $Q_0$, for $\alpha=0.75$. The divergence for $Q_0\to 1$ signifies the limitation of this transformation strictly to the transient case. []{data-label="qzeroplot"}](./qzeroplt.pdf){width="50.00000%"} The constant $\Lambda^{-1/\alpha}$ is positive, and it is $>1$ for any $0<Q_0<1$. In the limit $Q_0\to 1$, i.e. the approach to the recurrent case, $Li_{-\alpha}(Q_0)\sim (1-Q_0)^{-1-\alpha}$ [@Abramowitz] and $\Lambda^{-1/\alpha}\sim(1-Q_0)^{-(1-\alpha)/\alpha}$ diverges as $Q_0\to1$. This divergence signifies the limitation of the presented result to the transient case $0\le Q_0 <1$.
When $Q_0=0$ the QTM is exactly described by the CTRW, since the particle never returns to a previously visited site; indeed, in this case $\Lambda^{-1/\alpha}=1$. For any $0<Q_0 <1$ the constant is greater than $1$. This means that the QTM process is faster than the CTRW, i.e. the two models attain the same PDFs, but for the QTM this is achieved on shorter time-scales. Such behavior can be attributed to the fact that the CTRW never resamples previously visited traps (the disorder is annealed), while this is not true for the QTM. Since the CTRW never resamples previously visited traps, it has a higher probability (when compared to the QTM) to find deeper traps, which means that its propagation is going to be slower than that of the QTM, on average. For the $1$-dimensional case of a biased RW on a simple lattice with constant spacing, $W_N( x)$ is a binomial distribution that is very well approximated by the Gaussian $$W_N(x)=\frac{1}{\sqrt{2\pi 4q(1-q)N}}e^{-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}}\qquad \left(N \gg 1\right), \label{gauss1dbias}$$ where $q$ is the probability to jump one step to the right on the lattice and $1-q$ is the probability to jump to the left. The return probability for this process is $Q_0=2(1-q)$, as proven in the next section. For several values of $\alpha$ the form of $l_{\alpha,A,1}$ is explicitly known [@Barkai], specifically for $\alpha=1/2$, $$l_{1/2,1,1}(\eta)=\frac{1}{2\sqrt{\pi}}\eta^{-3/2}e^{-\frac{1}{4\eta}}. \label{lohalfdist}$$ Then according to Eq. (\[pxtformfin\]), for the $1$-dimensional case the PDF is provided by $$\begin{array}{ll} \langle P(x,t) \rangle \sim & \displaystyle \int_0^\infty \frac{ e^{-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}} }{\sqrt{2\pi^2 4q(1-q)N t}} \left(\frac{2(1-q)}{(2q-1)^2 Li_{-1/2}(2(1-q))}\right)^{-1} \\ & \exp\left[-\frac{N^2}{4t}\left(\frac{2(1-q)}{(2q-1)^2 Li_{-1/2}(2(1-q))}\right)^{-2}\right] \,dN\qquad (t\to\infty). \end{array} \label{onedbiasedpxt}$$ In Fig.
\[pxtbiasfig\] we perform a comparison between a numerical simulation of the QTM and the theoretical result of Eq. (\[onedbiasedpxt\]). The comparison is performed for $t=10^3$ and it is excellent for this finite time. ![ Comparison of the numerical simulation of the PDF for a 1d QTM with bias and the theoretical prediction of Eq. (\[onedbiasedpxt\]). The symbols are the numerical simulation while the thick line is the theory without fitting parameters. The parameters of the case are: $A=1$, $q=0.7$, $\alpha=1/2$ and the spacing of the lattice is $1$. []{data-label="pxtbiasfig"}](./pxtq07alp05.pdf){width="50.00000%"} Moments of the QTM and non-linear response {#nonlinresp} ------------------------------------------ The explicit form of the disorder averaged PDF, expressed by Eq. (\[pxtformfin\]), permits evaluation of different moments $\langle {\bf x}^\mu \rangle$. Indeed, the approximation works in a regime where the measurement time is sufficiently large and many jumps have been performed. In this limit the probability density $W_N({\bf x})$ attains the Gaussian form and all the moments $\int |{\bf x}|^\mu W_N({\bf x})d\,{\bf x}$ can be easily computed [@Winkelbauer]. Generally we can say that $$\int |{\bf x}|^\mu W_N({\bf x})d\,{\bf x} = B_\mu N^{\gamma_\mu}. \label{gammamudef}$$ The constant $B_\mu$ depends on the power $\mu$ and on the lattice that determines the properties of the Gaussian approximation, i.e. the second moment and the mean of the Gaussian distribution. Then according to Eq. (\[pxtformfin\]) the $\mu$th moment $\langle |{\bf x}|^\mu \rangle$ is provided by $\int_0^\infty (B_\mu t\big/\Lambda^{1/\alpha}\alpha)N^{\gamma_\mu-1-1/\alpha}l_{\alpha,A,1}\left(t/(\Lambda N)^{1/\alpha}\right)dN$. 
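For $\alpha=1/2$, where the one-sided Lévy density is explicit (Eq. (\[lohalfdist\])), its basic properties can be checked numerically. The sketch below (ours, with hypothetical helper names) verifies the normalization and the moment of order $q=-1$, which should equal $\Gamma[3]/\Gamma[2]=2$ according to the moment formula invoked in the next step:

```python
# Sketch: numerical checks of the explicit alpha = 1/2 one-sided Levy density
import math
from scipy.integrate import quad

def l_half(eta):
    # l_{1/2,1,1}(eta) = eta^{-3/2} exp(-1/(4 eta)) / (2 sqrt(pi))
    return eta**-1.5 * math.exp(-1.0/(4.0*eta)) / (2.0*math.sqrt(math.pi))

norm = quad(l_half, 0, math.inf)[0]                 # normalization, should be 1
mom = quad(lambda e: l_half(e)/e, 0, math.inf)[0]   # order q = -1 moment, should be 2
print(norm, mom)
```
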
Since $\int_0^\infty y^q l_{\alpha,A,1}(y)dy=A^{q/\alpha}\Gamma[1-q/\alpha]/\Gamma[1-q]$ (for $q/\alpha<1$) [@Barkai], the expression for the moments of ${\bf x}$ takes the form $$\langle |{\bf x}|^\mu \rangle= \frac{\Gamma[1+\gamma_\mu]}{A^{\gamma_\mu}\Gamma[1+\alpha\gamma_\mu]}\frac{B_\mu}{\Lambda^{\gamma_\mu}} t^{\alpha\gamma_\mu}. \label{momentexpression}$$ The constants $\gamma_\mu$, $B_\mu$ and $Q_0$ depend only on the lattice dimension and the type of the RW on top of this lattice. Of special interest is the behavior of the first moment when an external force is applied, i.e. the response of the system to a bias. In the QTM the force is applied in such a way that it does not affect the dwell times $\tau_{\bf x}$ but rather determines the transition probabilities between different locations [@Bertin02; @MonthusSec; @Deborah01]. When the imposed external force $F_0$ is sufficiently weak the transition probabilities $p({\bf x - x'})$ should be proportional to $\exp(F_0({\bf x-x'})/2k_BT)$ for the transition from ${\bf x'}$ to ${\bf x}$, and to $\exp(-F_0({\bf x-x'})/2k_BT)$ for the reverse transition. Here we assume that the force is constant and applied in the direction of ${\bf x-x'}$; otherwise one needs to use the projection of the force on the ${\bf x-x'}$ direction. Since we are interested only in the limit of a weak force it is possible to expand the exponential up to first order in $F_0$. In the case of a simple binomial RW on top of a $1$-dimensional lattice the probability $q$ to perform a jump to the right will be $q=\frac{1}{2}(1+F_0a/2k_BT)$ and the probability to jump to the left $1-q=\frac{1}{2}(1-F_0a/2k_BT)$, where $a$ is the lattice spacing. For dimensions $d\geq 2$ a similar expansion takes place; the only difference is that $F_0$ will be multiplied by some $\cos(\theta)$, where $\theta$ is the appropriate angle between the direction of the force and the local axis of the lattice. 
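The passage from the subordination integral for $\langle |{\bf x}|^\mu \rangle$ to the closed form of Eq. (\[momentexpression\]) can be verified numerically for $\alpha=1/2$, $A=1$, using the explicit density of Eq. (\[lohalfdist\]). This is a sketch of ours; the parameter values are arbitrary and the variable names hypothetical:

```python
# Sketch: check the N-integral for <x(t)> against Eq. (momentexpression)
# for alpha = 1/2, A = 1, gamma_1 = 1, B_1 = 2q - 1.
import math
from scipy.integrate import quad
from mpmath import polylog

alpha, q, t = 0.5, 0.7, 100.0
Q0 = 2*(1 - q)
Lam = float((1 - Q0)**2 * polylog(-alpha, Q0) / Q0)  # Eq. (lambdaconst)
gamma1, B1 = 1.0, 2*q - 1                            # biased 1d walk

def l_half(eta):  # explicit one-sided Levy density, Eq. (lohalfdist)
    return eta**-1.5 * math.exp(-1.0/(4.0*eta)) / (2.0*math.sqrt(math.pi))

def integrand(N):
    # B_1 * t/(Lam^{1/alpha} alpha) * N^{gamma_1 - 1 - 1/alpha} * l(t/(Lam N)^{1/alpha})
    return (B1 * t/(Lam**(1/alpha)*alpha) * N**(gamma1 - 1 - 1/alpha)
            * l_half(t/(Lam*N)**(1/alpha)))

numeric = quad(integrand, 0, math.inf)[0]
closed = (math.gamma(1 + gamma1)/math.gamma(1 + alpha*gamma1)) * B1/Lam**gamma1 * t**(alpha*gamma1)
print(numeric, closed)   # the two should agree
```
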
The presence of the force affects not only the constant $B_\mu$ in Eq. (\[momentexpression\]) but also the constant $\Lambda$, by means of $Q_0$. Of special interest is the one-dimensional case. For $d=1$, in the absence of an external force, $Q_0=1$ [@Weiss]. When a small external force $F_0$ is added, $Q_0$ is decreased but still attains values in the vicinity of $1$ and consequently (due to the form of $\Lambda$ in Eq. (\[lambdaconst\])) contributes a non-trivial dependence of the first moment on the force. The one-dimensional case in the presence of a weak force $F_0$ is the case of traps on a simple one-dimensional lattice with probability $q=\frac{1}{2}(1+F_0a/2k_BT)$ to jump to the right and $1-q$ to jump to the left. For the spatial process $W_N(x)$ this is the case of a binomial random walk and thus for sufficiently large $N$ the Gaussian limit is attained $$W_N(x)\sim\frac{\exp\left[-\frac{\left(x-(2q-1)N\right)^2}{8q(1-q)N}\right]} {\sqrt{8\pi q(1-q)N}} \label{binomial1dspat}$$ and $$\int_{-\infty}^{\infty} x W_N(x)\,dx = (2q-1) N \label{binom1dmoment}$$ meaning that $B_1=2q-1$ and $\gamma_1=1$. Eq. (\[binom1dmoment\]) describes the linear response to the external force for the spatial part of the QTM. The return probability $Q_0=\sum_{N=0}^\infty f_N({\bf 0})=\lim_{z\to 1}{\hat f}_{z}({\bf 0})$ is provided by Eq. (\[candfgenerating\]), while the Fourier transform of the jump probability $p({\bf x})$, ${\overline p}({\bf k})=\sum_{\bf x} e^{i{\bf k \cdot x}}p({\bf x})$, dictates the form of ${\hat c}_z({\bf 0})$ in dimension $d$ [@Weiss]: $${\hat c}_z({\bf 0})=\frac{1}{(2\pi)^d}\int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi}\frac{1}{1-z{\overline p({\bf k})}}\,d^d{\bf k}. \label{cxgenerating}$$ For $d=1$, ${\overline{p}}(k)=q\exp(ik)+(1-q)\exp(-ik)$ and Eq. 
(\[cxgenerating\]) is $${\hat c}_z({\bf 0})=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{1-z(q\exp(i k)+(1-q)\exp(-i k))}\,dk; \label{cxgendeq1}$$ by changing the variable to $y=\exp(ik)$ the integral in Eq. (\[cxgendeq1\]) is transformed into $${\hat c}_z({\bf 0})=\frac{1}{2\pi i}\oint_{|y|=1}\frac{1}{y-zq y^2-z(1-q)}\,dy. \label{cxgendeq1_01}$$ For any $z<1$ the two solutions of $-zq y^2+y-z(1-q)=0$ are located on the real line; one of them satisfies $y>1$ and the other $y<1$. This means that the integral is determined by a single pole inside the unit circle for all $z<1$. This pole is located at $y=(1-q)/q$ for $z=1$, and the integral in Eq. (\[cxgendeq1\_01\]) in the $z\to 1$ limit is $${\hat c}_{1}({\bf 0})=\frac{1}{2q-1}. \label{cxgebdeq1_02}$$ Then according to Eq. (\[candfgenerating\]), for $d=1$ the probability to return to the starting point, given that the process is biased (i.e. $q>1/2$), is $$Q_0=2(1-q). \label{qzerodeq1}$$ Finally, according to Eqs. (\[momentexpression\],\[binom1dmoment\],\[qzerodeq1\]) and Eq. (\[lambdaconst\]) we obtain $$\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{1}{A\Gamma[1+\alpha]} \frac{2(1-q)}{(2q-1) Li_{-\alpha}\left[2(1-q)\right]} t^\alpha, \label{xmoment1d01}$$ and when explicitly writing the probability $q=\frac{1}{2}(1+F_0a/2k_BT)$ and using the fact that the spacing of the lattice is $a$, $\langle x(t) \rangle$ is transformed into $$\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{a}{A\Gamma[1+\alpha]} \frac{[1-F_0a/2k_BT]}{(F_0a/2k_BT) Li_{-\alpha}\left[1-F_0a/2k_BT\right]} t^\alpha. \label{xmoment1d01_xa01}$$ For small $F_0\to 0$ we use the asymptotic relation $Li_{-\alpha}(1-y)\sim \Gamma[1+\alpha]y^{-\alpha-1}$ [@Abramowitz] and obtain the non-linear response to an externally applied small force $$\langle x(t) \rangle \underset{t\to\infty}{\sim}\frac{a}{A\Gamma[1+\alpha]^2} \left(\frac{F_0a}{2k_BT} \right)^\alpha t^\alpha. 
\label{xmoment1d01_xa02}$$ ![ Comparison of the numerical simulation of the first moment $\langle x(t)\rangle$ for a 1d QTM in the presence of an external force $F_0$ and the theoretical predictions of Eqs. (\[xmoment1d01\_xa01\],\[xmoment1d01\_xa02\]). The symbols describe the results of the numerical simulation with $t=10^8$, $\alpha=1/2$, $a=1$ and $A=1$. The thick line is the theory as described by Eq. (\[xmoment1d01\_xa01\]) without fitting parameters and the dashed line is the prediction of Eq. (\[xmoment1d01\_xa02\]). For sufficiently small $F_0a/k_BT$ the two theoretical results coincide. []{data-label="biasedforcefig"}](./qtmbiased1dforce02.pdf){width="60.00000%"} A convincing comparison between the analytical results of Eqs. (\[xmoment1d01\_xa01\],\[xmoment1d01\_xa02\]) and numerical simulation is presented in Fig. \[biasedforcefig\]. It is clear from the figure that the theoretical results of Eq. (\[xmoment1d01\_xa01\]) and Eq. (\[xmoment1d01\_xa02\]) coincide for a sufficiently small external force $F_0$. The behavior of the first moment for small forces, as described by Eq. (\[xmoment1d01\_xa02\]), does not satisfy linear response. The response to an external force is anomalous, and the force enters the equation with an exponent $\alpha<1$. This behavior for a $1$-dimensional biased QTM was previously predicted by using scaling analysis [@Bouchaud; @Bertin02] and was also obtained by exploitation of Renormalization Group techniques in the limit of low temperatures, i.e. $\alpha\to 0$ [@MonthusSec]. The non-linear response is present only due to the strong disorder and the quenched nature of the disorder. For the annealed case with power-law waiting times the response is linear [@Bouchaud]. From the treatment of the $1$-dimensional case it becomes clear that the non-linearity appears solely due to the presence of $\Lambda$ in the denominator of Eq. (\[momentexpression\]). According to Eq. (\[lambdaconst\]), $\Lambda$ depends on $Q_0$ in a non-trivial fashion. 
When a small external force is present it alters the probability of return $Q_0$. Of special interest are the cases where $Q_0=1$ when $F_0=0$. Addition of a small $|F_0|$ will decrease $Q_0$ and introduce a non-linear contribution due to the divergence of $1/\Lambda$ in the limit $Q_0\to1$. For the cases where $Q_0<1$ even in the absence of an external force, the addition of a non-zero external force slightly decreases $Q_0$, which translates into a small change in $\Lambda$, and the linear response is not affected. Then, according to the classical result of P[ó]{}lya [@Weiss], the non-linear response is to be expected for $d=1,2$, while for any higher dimension the strong quenched disorder will not alter the linear response to an external field. Summary ======= The properties of transport in the QTM have been extensively explored over the years. In this manuscript we provided an explicit mapping between the transient cases of the QTM and the widely used CTRW. This result allows one to generalize any result that is known for the CTRW to the case of the QTM. Immediate applications include first-passage properties [@Redner], super-diffusive fluctuations for anomalous transport [@Lindenberg; @Voituriez], representation by means of fractional equations [@Klafter], large deviation properties [@BarkaiBurov20] and many more. The non-trivial dependence of the mapping on the probability to return to the origin, $Q_0$, implies that we should expect very important differences between the QTM and the CTRW for low dimensions even when the process is transient, such as the existence of a non-linear response to an externally applied field, which was calculated for the QTM and is absent for the CTRW. The developed theoretical framework of double subordination and two-point probabilities has merit of its own. We hope that these methods will help in addressing the recurrent case of the QTM. 
Finally, we would like to note that the existence of explicit mappings between the QTM and other models of transport in disordered media, such as the barrier model [@Sollich], may allow one to address the general case of transport in a random-potential landscape [@SokolovCamb]. [**Acknowledgments:**]{} This work was supported by the Pazy foundation grant 61139927. I thank D.A. Kessler for fruitful discussions. Appendix ======== Additional terms of ${\hat{\psi}}(u)$ {#sbetaproof} ------------------------------------- In Section \[loctime\] it was shown that when the expansion of ${\hat \psi}(u)$ is of the form ${\hat \psi}(u)\sim 1-Au^\alpha$, Eq. (\[etalaplacefnl\]) holds. Here we show that additional terms in the expansion, i.e. ${\hat \psi}(u)\sim 1-Au^\alpha+Bu^\beta$ with $\beta>\alpha$, will not change this equation when $S_\alpha \to \infty$. In such a case $$\langle e^{-u\eta}\rangle = \displaystyle \prod_{\bf x} \left( 1-\frac{n_{\bf x}^\alpha}{S_\alpha}Au^\alpha +\frac{n_{\bf x}^\beta}{S_\alpha^{\beta/\alpha}}Bu^\beta\right) \label{betprooffull01}$$ and the multiplication will produce the terms mentioned in Sec. \[loctime\] and also terms of the form $\sum_{\bf x}n_{\bf x}^\beta B u^\beta/S_\alpha ^{\beta/\alpha}$, $\sum_{\bf x}\sum_{\bf x'}n_{\bf x}^\alpha n_{\bf x'}^\beta A B u^{\alpha+\beta}/S_\alpha^{1+\beta/\alpha}$, $\sum_{\bf x}\sum_{\bf x'}n_{\bf x}^\beta n_{\bf x'}^\beta B^2 u^{2\beta}/S_\alpha^{2\beta/\alpha}$, etc. Since $\sum_{\bf x} n_{\bf x}^\beta=S_\beta$, the behavior of the term $\sum_{\bf x}n_{\bf x}^\beta B u^\beta/S_\alpha ^{\beta/\alpha}$ is dictated by the ratio $S_\beta/S_\alpha^{\beta/\alpha}$. For the transient case, i.e. in the presence of a bias or for $d>2$, we have shown in Sec. \[salphaSec\] that ${\overline S_\alpha}\sim \Lambda N$ when $N\to\infty$. This means that in the limit of many jumps, $N\to\infty$, the ratio $S_\beta/S_\alpha^{\beta/\alpha}$ decays like $N^{-\frac{\beta}{\alpha}+1}$, ($\beta>\alpha$). 
Therefore, all the terms that are not of the form $\left(\sum_{\bf x} \frac{n_{\bf x}^\alpha}{S_\alpha} A u^\alpha\right)^j$ will decay to $0$ in the $N\to\infty$ limit. We can then state that only the first two terms in the expansion of ${\hat \psi}(u)$ ($1-Au^\alpha$) are needed. Generating functions of two-point probabilities {#twopintgen} ----------------------------------------------- In Sec. \[secondsalpha\] three two-point probabilities were crucial for the behavior of $\beta_N({\bf x},k_1;{\bf x'},k_2)$: [**I**]{} $f_N({\bf x},1;{\bf x'},0)$, [**II**]{} $M_N({\bf x},{\bf x'})$ and [**III**]{} $T_N({\bf x},{\bf x'})$. The probability $f_N({\bf x},1;{\bf x'},0)$ is the probability to start at point ${\bf 0}$ and after $N$ steps to reach the point ${\bf x}$ for the first time, without visiting ${\bf x'}$ even once. So from all the possibilities to reach ${\bf x}$ for the first time after $N$ steps we must subtract those where the point ${\bf x'}$ was visited at least once (before reaching ${\bf x}$), i.e. $$f_N({\bf x},1;{\bf x'},0)=f_N({\bf x}) - \sum_{l=0}^N f_l({\bf x'},1;{\bf x},0)f_{N-l}({\bf x-x'}), \label{app_fngen01}$$ where $f_N({\bf x})$ is the first-passage probability defined in Eq. (\[candfgenerating\]); the translational invariance of the lattice was utilized. According to Eq. (\[app\_fngen01\]) the $z$-transform of $f_N({\bf x},1;{\bf x'},0)$ is $${\hat f}_z({\bf x},1;{\bf x'},0)={\hat f}_z({\bf x}) - {\hat f}_z({\bf x'},1;{\bf x},0){\hat f}_z({\bf x-x'}). \label{app_fngen02}$$ By switching the places of ${\bf x}$ and ${\bf x'}$ in Eq. (\[app\_fngen01\]) and performing a $z$-transform we obtain $${\hat f}_z({\bf x'},1;{\bf x},0)={\hat f}_z({\bf x'}) - {\hat f}_z({\bf x},1;{\bf x'},0){\hat f}_z({\bf x'-x}). \label{app_fngen03}$$ Substitution of Eq. (\[app\_fngen03\]) into Eq. 
(\[app\_fngen02\]) leads to an expression for ${\hat f}_z({\bf x},1;{\bf x'},0)$ in terms of the generating function of $f_N({\bf x})$: $${\hat f}_z({\bf x},1;{\bf x'},0)= \frac{{\hat f}_z({\bf x})-{\hat f}_z({\bf x'}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}. \label{app_fngen04}$$ The probability $M_N({\bf x},{\bf x'})$ is the probability to start at ${\bf x}$ and after $N$ steps to reach ${\bf x'}$ for the first time, without returning to ${\bf x}$ on the way. Due to the translational invariance of the lattice, $M_N({\bf x},{\bf x'})$ is expressible in terms of $f_N({\bf x},1;{\bf x'},0)$, i.e. $M_N({\bf x},{\bf x'})=f_N({\bf x'-x},1;{\bf 0},0)$. Then according to Eq. (\[app\_fngen04\]) the generating function of $M_N({\bf x},{\bf x'})$ is $${\hat M}_z({\bf x},{\bf x'})= \frac{{\hat f}_z({\bf x'-x})-{\hat f}_z({\bf 0}){\hat f}_z({\bf x'-x})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}. \label{app_mgen01}$$ The probability $T_N({\bf x},{\bf x'})$ is the probability to start at ${\bf x}$ and return to it after $N$ steps without visiting ${\bf x'}$ on the way. Once again the translational invariance of the lattice allows us to utilize $f_N({\bf x},1;{\bf x'},0)$, and hence $T_N({\bf x},{\bf x'})=f_{N}({\bf 0},1;{\bf x-x'},0)$. Then according to Eq. (\[app\_fngen04\]), the generating function of $T_N({\bf x},{\bf x'})$ is provided by $${\hat T}_z({\bf x},{\bf x'}) = \frac{{\hat f}_z({\bf 0})-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}{1-{\hat f}_z({\bf x'-x}){\hat f}_z({\bf x-x'})}. \label{app_tgen01}$$ Properties of $c_N({\bf x})$ and summation over all lattice points {#fouriecxz} ------------------------------------------------------------------ The probability to find the particle at position ${\bf x}$ after $N$ steps (when starting at ${\bf 0}$), $c_N({\bf x})$, is normalized, i.e. $\sum_{\bf x}c_N({\bf x})=1$, where the summation is over all possible lattice points. 
This leads to the following relation $$\sum_{\bf x} c_N({\bf x}) e^{i{\bf a\cdot x}} \underset{{\bf a}\to{\bf 0}}{\longrightarrow}1 \label{afr_single01}$$ and consequently for the generating function ${\hat c}_z({\bf x})=\sum_{N=0}^\infty z^N c_N({\bf x})$ $$\sum_{\bf x} {\hat c}_z({\bf x}) e^{i{\bf a\cdot x}} \underset{{\bf a}\to{\bf 0}}{\longrightarrow}\frac{1}{1-z}. \label{afr_single02}$$ For the single jump probability $p({\bf x})$ the characteristic function is defined as ${\hat p}({\bf a})=\sum_{x_1}\sum_{x_2}\dots\sum_{x_d}p({\bf x})e^{i{\bf a\cdot x}}$, where ${\bf x}=(x_1,x_2,\dots,x_d)$ are all possible single steps on the lattice. Since all the jumps of the RW on the lattice are independent, $\sum_{\bf x} c_N({\bf x})e^{i{\bf a\cdot x}}=\left({\hat p}({\bf a})\right)^N$ and according to Eq. (\[afr\_single02\]) $$\sum_{\bf x} {\hat c}_z({\bf x}) e^{i{\bf a\cdot x}}= \frac{1}{1-z{\hat p}({\bf a})} \underset{{\bf a}\to{\bf 0}}{\longrightarrow}\frac{1}{1-z}. \label{afr_single03}$$ According to Eq. (\[afr\_single03\]) the double sum $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})$ is simply $$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})=\underset{{\bf a}\to{\bf 0}}{\lim}\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}= \underset{{\bf a}\to{\bf 0}}{\lim}\frac{1}{\left(1-z{\hat p}({\bf a})\right)^2}=\frac{1}{(1-z)^2}. \label{afr_double01}$$ This result is simply extended to the case of $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})$. Indeed, $$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}=\sum_{\bf x}{\hat c}_z({\bf x})e^{i2{\bf a\cdot x}} \sum_{\bf x'}{\hat c}_z({\bf x'-x})e^{i{\bf a\cdot(x'-x)}}, \label{afr_double02}$$ due to translational invariance the right hand side of Eq. 
(\[afr\_double02\]) equals $\frac{1}{1-z{\hat p}(2{\bf a})}\frac{1}{1-z{\hat p}({\bf a})}$ and we obtain $$\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x})e^{i{\bf a\cdot x}}e^{i{\bf a\cdot x'}}=\underset{{\bf a}\to{\bf 0}}{\lim}\frac{1}{1-z{\hat p}(2{\bf a})}\frac{1}{1-z{\hat p}({\bf a})}=\frac{1}{(1-z)^2}. \label{afr_double03}$$ Sums of terms of the form ${\hat c}_z({\bf x'}){\hat c}_z({\bf x-x'})$ produce a similar result. Generally speaking, when the arguments of ${\hat c}_z(\dots){\hat c}_z(\dots)$ cover all possible points $({\bf x},{\bf x'})$ of the $2d$ lattice, the double summation will provide the result $1/(1-z)^2$. We turn now to the calculation of sums of the form $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$. For this case the behavior of $\sum_{\bf x'}{\hat c}_z({\bf x'-x}){\hat c}_z({\bf x -x'})e^{i{\bf a\cdot x'}}$ must be inspected. According to the convolution theorem $$\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})e^{i{\bf a\cdot x'}}=\left(\frac{1}{2\pi}\right)^d \int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi} \frac{1}{1-z{\hat p}({\bf b})}\frac{1}{1-z{\hat p}({\bf b-a})}d^d{\bf b}, \label{afr_triple01}$$ where $d^d{\bf b}$ is $db_1\,db_2\dots db_d$. When the ${\bf a}\to{\bf 0}$ limit is taken, the integrand on the right hand side of Eq. (\[afr\_triple01\]) is simply $1\big/(1-z{\hat p}({\bf b}))^2$. Moreover, the asymptotic limit $N\to\infty$ translates into the $z\to 1$ limit in $z$ space. In this limit the main contribution to the integral in Eq. (\[afr\_triple01\]) comes from the values of ${\bf b}$ in the vicinity of ${\bf 0}$, since ${\hat p}({\bf 0})=1$ and the integrand converges to $1/(1-z)^2$. We concentrate on two types of ${\hat p}({\bf b})$ expansions in the vicinity of ${\bf b = 0}$. The first type is the linear case $${\hat p}({\bf b})\sim 1+i{\bf b \cdot B}\qquad {\bf b}\to 0. 
\label{afr_tripexp01}$$ This is the case of a RW with a bias in the ${\bf B}$ direction. Then $$\left(\frac{1}{2\pi}\right)^d \int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi} \frac{1}{\left(1-z{\hat p}({\bf b})\right)^2}d^d{\bf b} \underset{z\to 1}{\sim} \left(\frac{1}{2\pi}\right)^d \int_{-\pi}^{\pi}\dots\int_{-\pi}^{\pi} \frac{1}{\left(1-z(1+i{\bf b\cdot B})\right)^2}d^d{\bf b}, \label{afr_tripexp01int01}$$ and since $1\big/{\left(1-z(1+i{\bf b\cdot B}) \right)^2}=(1-z)^{-2}\left[1+i\frac{z}{1-z}{\bf b\cdot B}\right]^2\big/\left[1+\frac{z^2}{(1-z)^2}({\bf b\cdot B})^2\right]^2$ we obtain for Eq. (\[afr\_tripexp01int01\]) (after the substitution ${\bf b'}=\frac{z}{1-z}{\bf b}$) $$\left(1-z\right)^{d-2} \left(\frac{1}{2\pi}\right)^d \int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}}\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}}\dots\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}} \frac{\left[1+i{\bf b'\cdot B}\right]^2}{\left[1+({\bf b'\cdot B})^2\right]^2}d^d{\bf b'}. \label{afr_tripexp01int02}$$ We see that in the $z\to 1$ limit the $z$ dependence arrives from the $(1-z)^{d-2}$ pre-factor and from the fact that the range of integration diverges as $1/(1-z)$. For $d=1$ extra caution is needed, since the pre-factor $1/(1-z)$ diverges while the integral $\int_{-\infty}^{\infty}\left[1+ib'B\right]^2\big/\left[1+(b'B)^2\right]^2db'=0$. Exact calculation of the integral in Eq. (\[afr\_tripexp01int02\]) for $d=1$ shows that $$\frac{1}{2\pi(1-z)}\int_{-\frac{z\pi}{1-z}}^{\frac{z\pi}{1-z}} \frac{[1+ib'B]^2}{\left[1+(b'B)^2\right]^2}d\,b'=\frac{z}{1+z(z-2+B^2\pi^2 z)}\underset{z\to 1}{\longrightarrow} \frac{1}{B^2\pi^2}, \label{afr_tripexp01d1}$$ which is a constant that does not diverge in the $z\to 1$ limit. This proves that for $d=1$, in the case of a present bias ($B\neq 0$), the sum $\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'})$ converges to a constant when $z\to 1$, so the double sum $\sum_{\bf x}\sum_{\bf x'}\dots$ diverges as $1/(1-z)$ (and not as $1/(1-z)^2$) in the $z\to 1$ limit. 
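The $z\to1$ limit $1/(B^2\pi^2)$ of Eq. (\[afr\_tripexp01d1\]) can be checked by direct quadrature (a sketch of ours; only the real part of the integrand is kept, the imaginary part being odd in $b'$):

```python
# Sketch: the d=1 biased-case integral approaches 1/(B^2 pi^2) as z -> 1
import math
from scipy.integrate import quad

B, z = 0.5, 0.999
L = z*math.pi/(1 - z)   # half-range of integration

def integrand(b):
    # Re [1 + i b B]^2 / [1 + (b B)^2]^2 = (1 - (b B)^2) / (1 + (b B)^2)^2
    u2 = (b*B)**2
    return (1 - u2)/(1 + u2)**2

val = quad(integrand, -L, L, points=[0.0], limit=400)[0]/(2*math.pi*(1 - z))
print(val, 1/(B*math.pi)**2)   # the two should be close
```
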
For any $d\geq 2$ the pre-factor $(1-z)^{d-2}$ in Eq. (\[afr\_tripexp01int02\]) is not diverging and the only possible divergences come from the range of the integration when $z\to 1$. Inspection of the function $[1+i \sum_{j=1}^d b'_j B_j]^2\big/\left[1+(\sum_{j=1}^d b'_j B_j)^2\right]^2$ shows that when $|b'_1|\to\infty$ the leading order of this function is $\sim 1/(b'_1B_1+\sum_{j=2}^db'_j B_j)^2$. Integration over $b'_1$ provides a leading order of $1/(b'_2 B_2+\sum_{j=3}^db'_j B_j)$ for $|b'_2|\to\infty$. The next integration, over $b'_2$, will provide a leading order of $\log\left(\sum_{j=3}^d b'_j B_j\right)$ for the other $b'_j$s. By continuing the integration over all the different $b'_j$ ($d$ integrals in total) we obtain that the integrals in Eq. (\[afr\_tripexp01int02\]) diverge as $|(1-z)^{2-d}\log\left(1-z\right)|$ when $z\to 1$. Then from Eq. (\[afr\_tripexp01d1\]), Eq. (\[afr\_tripexp01int02\]) and Eq. (\[afr\_triple01\]) it is established that $$\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'}) \underset{z\to 1}{\sim} \left\{ \begin{array}{ll} \frac{1}{B^2\pi^2} & d=1 \\ |\log\left(1-z\right)| & d\geq2 \end{array} \right. \label{afr_triple01fin}$$ Finally, we have shown that for any dimension of the lattice $d$, when the RW has a bias (i.e. ${\bf B\neq 0}$), the double sum $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ grows no faster than $|\log\left(1-z\right)|/(1-z)$ in the $z\to 1$ limit. The second type of behavior is the case without bias, i.e. $${\hat p}({\bf b})\sim 1-\left({\bf b \cdot B}\right)^2 \qquad {\bf b}\to 0. \label{afr_tripexp02}$$ In a similar fashion as Eq. 
(\[afr\_tripexp01int02\]) was derived, we obtain that $$\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'}) \underset{z \to 1}{\longrightarrow} \left(1-z\right)^{d/2-2} \left(\frac{1}{2\pi}\right)^d \int_{-\sqrt{\frac{z}{1-z}}\pi}^{\sqrt{\frac{z}{1-z}}\pi}\dots\int_{-\sqrt{\frac{z}{1-z}}\pi}^{\sqrt{\frac{z}{1-z}}\pi} \frac{1}{\left[1+({\bf b'\cdot B})^2\right]^2}d^d{\bf b'}. \label{afr_tripexp02int01}$$ The integral on the right hand side of Eq. (\[afr\_tripexp02int01\]) is always positive and the integration coordinates can be transformed into generalized polar coordinates. In this case the only non-constant integration is of the form $\int_0^{\sqrt{\frac{z}{1-z}}\pi|{\bf B}|}r^{d-1}/(1+r^2)^2\,dr$, which diverges as $(1-z)^{2-d/2}$ for $d>4$, diverges as $|\log(1-z)|$ for $d=4$, and converges for any $d<4$. Eventually, in the $z\to 1$ limit $$\sum_{\bf x'}{\hat c}_z({\bf x'}){\hat c}_z(-{\bf x'}) \underset{z \to 1}{\sim} \left\{ \begin{array}{ll} (1-z)^{-3/2} & d=1 \\ (1-z)^{-1} & d=2 \\ (1-z)^{-1/2} & d=3 \\ |\log\left(1-z\right)| & d=4 \\ \mbox{const} & d>4 \end{array} \right. \label{afr_trip02fin}$$ We have thus shown that for any dimension $d>2$, when the RW has no bias (i.e. ${\bf B= 0}$), the double sum $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ grows no faster than $(1-z)^{-3/2}$ in the $z\to 1$ limit. We have proven that, for the specific case of $\sum_{\bf x}\sum_{\bf x'} {\hat c}_z({\bf x}) {\hat c}_z({\bf x'-x}){\hat c}_z({\bf x-x'})$ and a transient RW, the double sum diverges slower than $1/(1-z)^2$ in the $z\to 1$ limit. This result holds also for any double summation over ${\bf x}$ and ${\bf x'}$ of triple multiplications of the probability densities ${\hat c}_z({\bf x-x'}){\hat c}_z({\bf x'-x}){\hat c}_z({\bf x'})$ (or any permutation of the positions), again due to the properties of the convolution integrals that lead to Eqs. (\[afr\_triple01fin\],\[afr\_trip02fin\]). 
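For $d=1$ without bias, the $(1-z)^{-3/2}$ entry of Eq. (\[afr\_trip02fin\]) can be illustrated using the known closed form of the symmetric-walk generating function, ${\hat c}_z(x)=\big(\frac{1-\sqrt{1-z^2}}{z}\big)^{|x|}\big/\sqrt{1-z^2}$, which we assume here from the literature [@Weiss] (the sketch and its variable names are ours):

```python
# Sketch: estimate the divergence exponent of sum_x c_z(x) c_z(-x)
# for the unbiased 1d walk, using the assumed closed form of c_z(x).
import math

def s(eps):
    z = 1.0 - eps
    r = (1.0 - math.sqrt(1.0 - z*z))/z        # c_z(x) = r^{|x|}/sqrt(1-z^2)
    # sum over all x: (1 + 2 sum_{x>=1} r^{2x}) / (1-z^2)
    return (1.0 + r*r)/((1.0 - z*z)*(1.0 - r*r))

# slope of log s versus log(1/eps) should approach 3/2
expo = math.log(s(1e-6)/s(1e-5))/math.log(10.0)
print(expo)
```
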
When the double summation is performed over a multiplication of more than three ${\hat c}_z({\bf x})$s, the result will be equivalent to several convolution integrals. Since each convolution reduces the order of the divergence in $1/(1-z)$, additional convolutions will only reduce the divergences that appear in Eqs. (\[afr\_triple01fin\],\[afr\_trip02fin\]). This means that the results of this section show that [*any double summation over ${\bf x}$ and ${\bf x'}$ of an $n$-fold multiplication of positional PDFs diverges slower than $1/(1-z)^2$ when $z\to 1$, if the RW is transient*]{}.
--- abstract: 'The quantum critical dynamics of quantum phase transitions is considered. In the framework of a unified theory, based on the Keldysh technique, we consider the crossover from the classical to the quantum description of the dynamics of a boson many-particle system close to a second order quantum phase transition. It is shown that in this case the upper critical space dimension of this model is $d_c^{+}=2$; therefore the quantum critical dynamics approach is useful in the case $d<2$. In the one-dimensional system the phase coherence time diverges at the quantum critical point, $g_c$, and has the form $\tau \propto -\ln |g-g_c|/(g-g_c) $, while the correlation radius diverges as $r_c\propto |g-g_c|^{-\nu } (\nu = 0.6)$.' author: - 'Vasin M.G.' title: 'Quantum critical dynamics of the boson system in the Ginsburg-Landau model' --- Introduction ============ We consider the dissipative critical dynamics of the quantum phase transitions (QPT) taking place in a system of coupled anharmonic oscillators with a one-component order parameter ($n=1$), corresponding, for example, to the Ising magnet [@Sachdev]. It is well known that at $T=0$, in the regime of quantum fluctuations (zero-point fluctuations), the ordering phase transition is possible in these systems [@Kagan]. In addition, it is believed that the critical exponents of this phase transition are determined with the help of a simple rule: the exponents of the phase transition in a $d$-dimensional system at $T=0$ are the same as at $T\neq 0$ but in a system whose dimension is greater by one unit: $d_{eff}=d+z=d+1$ [@Pankov]. Hence one can conclude that the upper critical space dimension of the considered system is $d^+_{cr}=3$. Let us call this the quantum mechanical (QM) approach. However, when describing the dynamics of the statistical ensemble of coupled oscillators ($N\to \infty$) one needs to take into account the dissipation effect [@Weiss]. 
In the case $\hbar \omega \gg kT$ this leads to a change of the critical exponents and of the universality class of the phase transition in the one-dimensional system: $z\thickapprox 2$, $d^+_{cr}=2$ [@HH]. In this paper we describe the critical dynamics of the Ising magnet system close to the QPT using the Keldysh technique [@K]. This approach was developed for the description of the dynamics of non-equilibrium quantum systems. Therefore, we expect it to allow us to describe the crossover from the critical dynamics to the quantum critical dynamics (QCD) within a uniform technique. We also believe that it will help us to outline the borders of applicability of the QM and QCD approaches to the QPT description. Crossover from the critical dynamics to the quantum critical dynamics in the Keldysh technique ============================================================================================== Let us consider the quantum critical dynamics of the Ginsburg-Landau model in terms of the Keldysh technique. The Lagrangian of this model has the following form: $$\begin{gathered} \mathcal{L}\thickapprox(\vec\partial \phi )^2+\mu(g)\phi^2+v(g)\phi^4, \label{I}\end{gathered}$$ where $\phi $ is the scalar order parameter field, which obeys Bose statistics. We suppose $\mu $ and $v$ to depend on some external parameter $g$ that controls the state of the system. It is convenient to describe the non-equilibrium dynamics of quantum systems in terms of the Keldysh technique. Since we aim at a uniform description of both quantum ($T\to 0$) and classical ($T\gg 0$) systems, we consider the system interacting with a heat bath at the temperature $T$. 
According to the Keldysh approach to the description of the non-equilibrium dynamics of the system, one should write the generating functional in the form $$\begin{gathered} W=\int \mathfrak{D}\vec\phi \exp\left\{i\int d^{d+1}x \mathcal{L}(\phi_{cl},\,\phi_q;\,g_{cl},\,g_q)\right\},\end{gathered}$$ where $\vec \phi=\{\phi_q,\,\phi_{cl}\}$, $\phi_{cl}$ and $\phi_q $ are the “classical” and “quantum” parts of the order parameter respectively, $g_{cl}$ and $g_q$ are the sources of these fields, and $\mathcal{L}$ is the Lagrangian density of the fields. Below it will be more convenient to move from the Minkowski space to the Euclidean one by the Wick rotation, $t= -ix_4 $. Then $$\begin{gathered} W=\int \mathfrak{D}\vec\phi \exp\left\{-\int d^{d}kd\omega \mathcal{L}(\phi_{cl},\,\phi_q;\,g_{cl},\,g_q)\right\}.\end{gathered}$$ Note that in this case every contact of the system with any environment, including external noise, is described as an interaction with the heat bath, while the “internal (quantum) noise” is included directly in the description. In this case, according to [@K], one can write the Keldysh Lagrangian in the form $$\begin{gathered} \mathcal{L}=\mathcal{L}_{free}+\mathcal{L}_{int}+\mathcal{L}_{noise},\end{gathered}$$ where $$\begin{gathered} \mathcal{L}_{free}=\phi_q\left( \varepsilon_k-i\gamma \omega\right)\phi_{cl}+\phi_{cl}\left( \varepsilon_k+i\gamma \omega\right)\phi_q,\\[10pt] \mathcal{L}_{int}= -U(\phi_{cl}+\phi_q,\,g_{cl}+g_q)+U(\phi_{cl}-\phi_q,\,g_{cl}-g_q),\\ \mathcal{L}_{noise}=\phi_q \left(2\gamma \omega \coth {\frac {\displaystyle \omega }{\displaystyle T}}\right)\phi_q,\end{gathered}$$ $\varepsilon_k=k^2+\mu(g)$, and $U(\phi)$ is the interaction part. 
According to the Keldysh approach to the description of non-equilibrium dynamics, one can write an expression for the retarded, advanced and Keldysh components of the Green function (matrix) in the form $$\begin{gathered} G^K=G^R\circ F-F\circ G^A ,\end{gathered}$$ where $F$ is a Hermitian matrix ($F=F^{\dag}$), and the circular multiplication sign implies integration over the intermediate time (matrix multiplication) [@K]. One can check that $$\begin{gathered} [G^{-1}]^K=[G^R]^{-1}\circ F-F\circ [G^A]^{-1} .\end{gathered}$$ After the Wigner transform (WT) in the frequency representation we come to $$\begin{gathered} G^K=f(\omega )(G^R-G^A) ,\\ [G^{-1}]^K=f(\omega)\left([G^R]^{-1}-[G^A]^{-1}\right),\end{gathered}$$ where $f(\omega )$ is the distribution function. For a boson system in thermal equilibrium $f=-i\coth (\omega/T)$, where $T$ is the temperature of the heat bath [@K]. This is the fluctuation-dissipation theorem (FDT), which, as shown later, takes different forms in the classical and quantum limits. If we consider a system with dissipation, then $$\begin{gathered} [G^R]^{-1}=\varepsilon_k +i\gamma\omega ,\quad [G^A]^{-1}=\varepsilon_k -i\gamma\omega ,\\ [G^{-1}]^K=2\gamma\omega \coth (\omega/T), \label{a3}\end{gathered}$$ where $\gamma $ is the kinetic coefficient. In the quantum case $T\ll \omega $ (see Fig.\[f1\]) $$\begin{gathered} \coth (\omega/T)\to \mbox{sign}(\omega ) \quad \Rightarrow \quad [G^{-1}]^K=2\gamma |\omega |.\end{gathered}$$ The FDT then has the following form: $ G^K=i\,\mbox{sign}(\omega )(G^R-G^A)$. In the classical case $T\gg \omega$ (see Fig.\[f1\]) $$\begin{gathered} \coth (\omega/T)\to {\frac {\displaystyle T}{\displaystyle \omega }} \quad \Rightarrow \quad [G^{-1}]^K=2\gamma T,\end{gathered}$$ and the system satisfies the usual classical form of the FDT: $G^K=T(G^R-G^A)/i\omega $. ![The red line shows $\coth (\omega/T)$ as a function of $T$ (with $\omega =4$); the green line shows the function $T/\omega $. 
At high temperatures these curves coincide, which corresponds to the critical dynamics. However, $\coth (\omega/T)\to \mbox{sign}(\omega )$ close to $T=0$, where the system is described by the quantum critical dynamics.[]{data-label="f1"}](D.eps) Below we will concentrate on the quantum limit ($\omega \gg T\approx 0$), when $\coth(\omega /T)\to \mbox{sign} (\omega )$, the temperature is not essential in the FDT, $\mathcal{L}_{noise}=\phi_q \left(2\gamma |\omega |\right)\phi_q$, and the Keldysh Green function has the following form: $$\begin{gathered} G^K(\omega )={\frac {\displaystyle 2\gamma |\omega |}{\displaystyle \varepsilon_k^2 +\gamma^2\omega^2}}.\end{gathered}$$ Note that in the case of $k\to 0$ one has $G^K(\omega )={2}/{\gamma |\omega |}$. This is the so-called $1/f$-noise, whose intensity is set not by the temperature but by $\hbar$. One can infer that the presence of $1/f$-noise is a natural property of a cold many-body Bose system, which follows from the quantum character of the dynamics at $T=0$. Quantum critical dynamics of the $d=2-\varepsilon $ Ginzburg–Landau model ========================================================================= We suppose that the system is close to the second order phase transition, when the interaction part of the action can be written as $$\begin{gathered} U\approx\mu(g) \phi^2+v(g_c)\phi^4,\end{gathered}$$ where $\mu(g)=(g-g_c) \to 0$ close to the phase transition point, $g_c$. Below we will consider the quantum limit $T\to 0$ of the critical dynamics of this system in $d=2-\varepsilon$ space dimensions close to the second order critical point. The critical dynamics rests on the hypothesis of dynamic scaling, according to which the action should be invariant with respect to the scale transformations which conformally expand the space and time coordinates ($\omega \propto k^{d_{\omega }}$). 
In this case the total dimension, $D=d+d_{\omega }$ ($d_{\omega}=z$ is the dynamic exponent), plays the same role as the conventional (momentum) dimension, $d_k$, in the static case. The canonical dimensions of the fields and of the model parameters are determined from the condition that the action be dimensionless. The corresponding total canonical dimension, $D[F]$, of any quantity $F$ is determined as: $$D[F]=d[F]+z\cdot d_{\omega }[F],$$ where $d_{\omega }[F]$ is the frequency dimension [@Vas; @Pat]. The canonical dimensions of the quantities of our theory are given in the table:

  $F$                $k$   $\omega$   $\phi_{\rm cl}$      $\phi_{\rm q}$       $v$               $\gamma $   $\mu $
  ------------------ ----- ---------- -------------------- -------------------- ----------------- ----------- --------
  $d[F]$             1     0          $-2+\varepsilon/2$   $-2+\varepsilon/2$   $2+\varepsilon$   $2$         $2$
  $d_{\omega }[F]$   $0$   $1$        $-1/2$               $-1/2$               $-1$              $-1$        $0$
  $D[F]$             $1$   $z=2$      $-3+\varepsilon/2$   $-3+\varepsilon/2$   $\varepsilon$     $0$         $2$

![The graphic representation of the Keldysh, $G^K$ (a), advanced, $G^A$ (b), and retarded, $G^R$ (c), Green functions of the theory.[]{data-label="f2"}](Gren.eps) ![ The graphic representation of the contributions to the renormalization of the theory’s vertices.[]{data-label="f3"}](Graphs.eps) The renormalization procedure is carried out by the standard method. It is assumed that the fields $\phi_{\rm q} $ and $\phi_{\rm cl} $ are slowly varying, such that the Fourier-transformed fields have only long-wave components: $|k|<k_0$; $\omega <\omega_0$. At the first step of the RG transformations one integrates the partition function over the components of the fields in the limited wave band $\Lambda k_0<k<k_0$, $\Lambda^z \omega_0<\omega <\omega _0$. 
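The table entries can be cross-checked directly against $D[F]=d[F]+z\cdot d_{\omega }[F]$ with $z=2$. A short script doing so (each dimension linear in $\varepsilon$ is encoded as a pair $(a,b)$ meaning $a+b\varepsilon$):

```python
# Verify D[F] = d[F] + z*d_omega[F] for every column of the table, z = 2.
z = 2
table = {
    # F         d[F]        d_omega[F]   expected D[F]
    "k":       ((1, 0),     (0, 0),      (1, 0)),
    "omega":   ((0, 0),     (1, 0),      (2, 0)),     # D[omega] = z = 2
    "phi_cl":  ((-2, 0.5),  (-0.5, 0),   (-3, 0.5)),  # -3 + eps/2
    "phi_q":   ((-2, 0.5),  (-0.5, 0),   (-3, 0.5)),
    "v":       ((2, 1),     (-1, 0),     (0, 1)),     # D[v] = eps
    "gamma":   ((2, 0),     (-1, 0),     (0, 0)),     # gamma is marginal
    "mu":      ((2, 0),     (0, 0),      (2, 0)),
}
for F, (d, dw, D) in table.items():
    computed = (d[0] + z * dw[0], d[1] + z * dw[1])
    assert computed == D, (F, computed, D)
```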
The renormalized parameters have the following form: $$\label{CH8R0} \begin{array}{l} \displaystyle \mu^{(R)}=Z_{\mu}Z_{\phi_{\rm q }}Z_{\phi_{\rm cl} }\Lambda^{d+\varepsilon+z}=Z_{\mu}\Lambda^{-2},\\ \displaystyle \gamma^{(R)}=Z_{\gamma}Z_{\phi_{\rm q} }^2\Lambda^{d+\varepsilon+2z}=Z_{\gamma}\Lambda^{0},\\ \displaystyle v^{(R)}=Z_{v}Z_{\phi_{\rm q}}Z_{\phi_{\rm cl}}^3\Lambda^{3d+3\varepsilon+3z}=Z_{v}\Lambda^{\varepsilon}, \end{array}$$ where $Z_{\phi_{\rm q }}, Z_{\phi_{\rm cl} }, Z_{\mu}, Z_v$ and $Z_{\gamma}$ are the renormalization constants. As an example, let us explain the renormalization of $\mu $ in detail. We limit ourselves to the one-loop approximation, which is sufficient to demonstrate all the features of the theory. In this case the graphical representation of the main divergent contribution to the renormalization is shown in Fig.\[f3\]b, and the renormalization constant of $\mu$ has the form: $$\label{CH8Z0} \begin{array}{l} \displaystyle Z_{\mu }\simeq \mu -{\frac {\displaystyle 6\mu v}{\displaystyle (2\pi)^{3}}}\int\limits_{\Lambda^z \omega _0}^{\omega _0}\int\limits_{\Lambda k_0}^{k_0} G^K(k,\omega)G^R(k,\omega)d{\bf k}d\omega =\\[12pt] \displaystyle =\mu -{\frac {\displaystyle 12\mu v\pi^2}{\displaystyle \gamma (2\pi)^3}}\int\limits_{\Lambda k_0}^{k_0}{\frac {\displaystyle dk}{\displaystyle k}}=\mu -{\frac {\displaystyle 3\mu v}{\displaystyle 2\gamma \pi}}\ln(1/\Lambda). \end{array}$$ One can see that the integral in this expression introduces a logarithmically divergent contribution to the $\mu $ renormalization if the momentum dimension is $d[k]\equiv d=2$. In this case we get the following expression for the renormalized value of $\mu $: $$\label{CH8R} \mu^{(R)}=e^{2\xi} Z_{\mu}\simeq e^{2\xi}\left[ \mu -{\frac {\displaystyle 3}{\displaystyle 2}}{\frac {\displaystyle \mu v}{\displaystyle \gamma \pi}}\,\xi\right],$$ where $\xi=\ln(1/\Lambda )$ is the logarithmically divergent factor. 
In the same way one can get the other terms of the renormalized action: $$\label{CH8R1} \displaystyle v^{(R)}=e^{\varepsilon\xi } Z_{v}\simeq e^{\varepsilon\xi }\left[v-{\frac {\displaystyle 9}{\displaystyle 2}}{\frac {\displaystyle v^2}{\displaystyle \gamma \pi}}\,\xi \right].$$ The contribution to the renormalization of the kinetic coefficient $|\omega |\gamma $ is proportional to $|\omega |$: $$\begin{gathered} \label{CH8R2} \gamma^{(R)} =\gamma -{\frac {\displaystyle 3v^2 16\pi^4}{\displaystyle (2\pi)^6\gamma^2}} \ln{(1/\Lambda )}=\gamma -{\frac {\displaystyle 3v^2}{\displaystyle 4\pi^2\gamma ^2}}\,\xi .\end{gathered}$$ Hence, in the one-loop approximation the renormalization group equations of the model under study have the form: $$\label{RG1} \begin{array}{c} \displaystyle {\frac {\displaystyle \partial \ln \mu}{\displaystyle \partial \xi}}=2-{\frac {\displaystyle 3}{\displaystyle 2}}{\frac {\displaystyle v}{\displaystyle \gamma \pi}},\quad \displaystyle {\frac {\displaystyle \partial \gamma}{\displaystyle \partial \xi}}= -{\frac {\displaystyle 3}{\displaystyle 4}}{\frac {\displaystyle v^2}{\displaystyle \pi^2\gamma ^2}},\\[10pt] \displaystyle {\frac {\displaystyle \partial \ln v}{\displaystyle \partial \xi}}= \varepsilon -{\frac {\displaystyle 9}{\displaystyle 2}}{\frac {\displaystyle v}{\displaystyle \gamma \pi}}. \end{array}$$ From the condition for the existence of a fixed point, ${\partial \ln (v)}/{\partial \xi}=0$, we obtain $v=2\gamma\pi\varepsilon/9$. Note that in the case of $d=2$ ($\varepsilon =0$) we get $v=0$. In this case only the quadratic term is relevant, so that at $d=2=d_c^+$ the critical behavior is well described by the Gaussian theory. From the above one can conclude that there is a quantum order-disorder phase transition in one-dimensional systems. This differs appreciably from the classical case, in which the thermal fluctuations control the relaxation processes. 
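The approach of $v$ to the fixed point value can also be seen by integrating the flow equations (\[RG1\]) numerically. A simple Euler sketch (the values of $\varepsilon$ and the starting point are illustrative assumptions; $\mu$ is omitted since it is a relevant coupling and plays no role in locating the fixed point of $v$):

```python
import math

eps = 0.1                 # epsilon = 2 - d, assumed small
v, gamma = 0.1, 1.0       # illustrative starting point of the flow
h = 0.01                  # Euler step in xi = ln(1/Lambda)
for _ in range(20000):    # integrate the flow up to xi = 200
    dv = v * (eps - 4.5 * v / (gamma * math.pi))   # dv/dxi = v * (d ln v/dxi)
    dg = -0.75 * v**2 / (math.pi**2 * gamma**2)    # d gamma/dxi
    v += h * dv
    gamma += h * dg

# v flows to the fixed-point value v* = 2*gamma*pi*eps/9 (gamma itself
# drifts only slowly, at order eps^2, so v tracks v* quasi-statically)
v_star = 2.0 * gamma * math.pi * eps / 9.0
assert abs(v - v_star) / v_star < 0.05
```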
However, this result agrees with the experimental data for quasi-one-dimensional systems. Also, from (\[RG1\]) one can see that in the one-loop estimate the critical exponent is $\nu =0.6$ (at $\varepsilon =1$), and one can predict that the relaxation time, or equivalently, the phase coherence time, diverges at the critical point $g_c$ as $\tau \propto\gamma/\mu\propto -\ln|g-g_c|/(g-g_c)$. Conclusions =========== For Bose systems we have formulated the crossover from the critical dynamics regime of the QPT to the quantum critical dynamics regime in the framework of the non-equilibrium quantum field theory in the Keldysh technique. The key point of this crossover is that the random noise becomes pink when quantum fluctuations dominate over thermal fluctuations, $\hbar\omega >kT$. As a result, the system goes into a different universality class, and the critical interval shifts into the low-dimension area: $0<d<2$. Experimental observations of QPTs show that they can take place in one-dimensional systems. This conforms both with the QM approach and with the QCD description. However, according to QCD, in the case of a two-dimensional system at $T\to0$ the critical exponents should approach the mean-field theory values. This conforms with the recent experimental results [@Zhang], and distinguishes this theory from the QM approach, in which this should be possible only in the case of $d>3$. One can assume that the reason for this is the following: at time scales much longer than the phase coherence time, the description of the phase transition requires the QM approach. However, at the phase transition the phase coherence time diverges, and the experimental time scales cannot exceed it. As a result the observed critical behaviour corresponds to the QCD description. I am grateful to N. M. Shchelkachev and V. N. Ryzhov for helpful discussions of this paper. This work was partly supported by the RFBR grants No. 13-02-91177 and No. 13-02-00579. 
[99]{} Subir Sachdev, [*Quantum Phase Transitions*]{} (Cambridge University Press, ISBN 0521004543), 2001, p. 353; M. I. Kaganov and A. V. Chubukov, Sov. Phys. Usp. **30**, 1015–1040 (1987); S. Pankov et al., Phys. Rev. B **69**, 054426-054436 (2004); U. Weiss, [*Quantum Dissipative Systems*]{} (World Scientific, Singapore, 1999); P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. **49**, 435 (1977); A. Kamenev and A. Levchenko, Advances in Physics **58**, 197 (2009); A. N. Vasil’ev, [*Quantum-Field Renormalization Group in the Theory of Critical Phenomena and Stochastic Dynamics*]{} (CRC Press, Boca Raton, London, New York, Washington, ISBN 0415310024), 2004, p. 705; A. Z. Patashinskii and V. L. Pokrovskii, [*Fluctuation Theory of Phase Transitions*]{} (Pergamon Press, Oxford, New York, Toronto, Sydney, Paris, Frankfurt, 1979), p. 321; X. Zhang, C. L. Hung, S. K. Tung and C. Chin, Science **335**, 1070 (2012);
--- abstract: 'A simple variant of a realistic flavour symmetry scheme for fermion masses and mixings provides a possible interpretation of the diphoton anomaly as an electroweak singlet “flavon”. The existence of TeV scale vector-like T-quarks required to provide adequate values for CKM parameters can also naturally account for the diphoton anomaly. Correlations between $V_{ub}$ and $V_{cb}$ with the vector-like T-quark mass can be predicted. Should the diphoton anomaly survive in a future Run, our proposed interpretation can also be tested in upcoming B and LHC studies.' author: - Cesar Bonilla - Miguel Nebot - Rahul Srivastava - ' José W. F. Valle' title: A flavour physics scenario for the $750$ GeV diphoton anomaly --- The ATLAS [@atlas750] and CMS [@cms750] collaborations have presented first results obtained from proton collisions at the LHC with 13 TeV center-of-mass energy. The ATLAS collaboration sees a bump in the invariant mass distribution of diphoton events at 750 GeV, with a 3.9 sigma significance, while CMS sees a 2.6 sigma excess at roughly the same value. Taking these hints at face value, we suggest a possible theoretical framework to interpret these findings. We propose that the new particle is a singlet scalar boson carrying a flavour quantum number. Our proposed framework accounts for three important aspects of the flavor puzzle:

- the observed value of the Cabibbo angle arises mainly from the down-type quark sector through the Gatto-Sartori-Tonin relation [@Gatto:1968ss];

- the observed pattern of neutrino oscillations [@Forero:2014bxa] is reproduced in a restricted parameter range [@Morisi:2013eca];

- the observed masses of the “down-type” fermions are well described by the generalized b-tau unification formula [@Morisi:2011pt; @Morisi:2013eca; @King:2013hj; @Bonilla:2014xla] $$\label{eq:massrelation} \frac{m_{\tau}}{\sqrt{m_{e}m_{\mu}}}\approx \frac{m_{b}}{\sqrt{m_{s}m_{d}}},$$ predicted by the flavour symmetry of the model. 
There are in principle several possible realizations of the 750 GeV anomaly as a flavon [@Morisi:2012fg; @King:2014nza]: a flavor-carrying singlet scalar. Our main idea is to obtain a scheme where the CERN anomaly may be probed also in the flavor sector. For this purpose we consider a simple variant of the scheme proposed in [@Morisi:2013eca] in order to address the points above. Phenomenological consistency of the model requires the presence of vector-like fermions in order to account for the observations in the quark sector. Their presence can naturally account for a production cross section of the scalar anomaly through gluon–gluon fusion similar to that indicated by ATLAS and CMS [@Morisi:2013eca][^1]. Here we investigate the allowed parameter space of our scheme, which provides an adequate joint description of CKM physics in the B sector and of the recent CERN diphoton data, illustrating how the two aspects are inter-related in our scheme. For definiteness and simplicity, here we focus on a nonsupersymmetric version of the model discussed in [@Morisi:2013eca]. The charge assignments for the fields are as shown in Table \[tab1\].

  Fields               $L$   $E^c$   $Q$   $U^c$   $D^c$   $H^u$   $ H^d$   $T$        $T^c$        $\sigma$     $\sigma'$    $ \xi $
  -------------------- ----- ------- ----- ------- ------- ------- -------- ---------- ------------ ------------ ------------ ----------
  $\mathrm{SU(2)_L}$   $2$   $1$     $2$   $1$     $1$     $2$     $2$      $1$        $1$          $1$          $1$          $1$
  $A_4$                $3$   $3$     $3$   $3$     $3$     $3$     $3$      $1$        $1$          $3$          $3$          $1$
  $\mathrm{Z}_4$       $1$   $1$     $1$   $1$     $1$     $1$     $1$      $\omega$   $\omega^2$   $\omega^3$   $\omega^2$   $\omega$

  : Matter content of the model, where $\omega^4=1$.[]{data-label="tab1"}

Here, $T, T^c$ are a pair of vector-like “quarks” transforming as $(3, 1, 4/3)$ and $(\bar{3}, 1, -4/3)$ under the Standard Model gauge group. The scalars $\sigma, \sigma'$ are singlets under the Standard Model gauge group but transform as $A_4$ triplets and carry $\mathrm{Z}_4$ charge. 
The scalar $\xi$ is also a singlet under the Standard Model gauge group as well as under the $A_4$ symmetry, but transforms as $\omega$ under the $\mathrm{Z}_4$ symmetry. In addition to the above charges, the scalars and fermions also carry an additional $\mathrm{Z}_2$ charge such that the scalar $H^u$ only couples to the up-type quarks, while $H^d$ only couples to the down-type quarks and charged leptons (this $\mathrm{Z}_2$ symmetry would not be needed if supersymmetry were assumed). The invariant Yukawa Lagrangian of the model is given by $$\begin{aligned} \mathcal{L}_f & = & y^u_{ijk} Q_i H^u_j U^c_k + y^d_{ijk} Q_i H^d_j D^c_k + y^l_{ijk} L_i H^d_j E^c_k \nonumber \\ & + & X' T U^c_i \sigma_i + \frac{Y'}{\Lambda} Q_i (H^u \cdot \sigma')_i T^c + y_T T T^c \xi \label{yuk}\end{aligned}$$ where for simplicity we take all the couplings $y_T$, $y^a_{ijk}$ and $X'$, $Y'$ to be real; $a = u,d,l$ and $i,j,k = 1,2,3$. Following Ref. [@Morisi:2013eca], after electroweak symmetry breaking and requiring a certain hierarchy in the flavon vevs, $\vev{H^{u,d}} = (v^{u,d},\varepsilon^{u,d}_1,\varepsilon^{u,d}_2)$ where $\varepsilon_{1,2}^u \ll v^u$ and $\varepsilon_{1,2}^d\ll v^d$, one gets the mass relation between the down-type quarks and the charged leptons given by Eq. (\[eq:massrelation\]). The up-type quark sector gets modified due to the presence of the vector-like quarks, so that the full up-type quark mass matrix is $4 \times 4$ and given by $$\begin{aligned} \label{Mu} M_{u} = \left( \begin{array}{cccc} 0 & a^u \alpha^u & b^u & Y'_1 \\ b^u \alpha^u & 0 & a^u r^u & Y'_2 \\ a^u & b^u r^u & 0 & Y'_3 \\ X'_1 & X'_2 & X'_3 & M'_T \\ \end{array}\right) \end{aligned}$$ where $a^u = y_1^u \varepsilon_1^u $, $b^u = y_2^u \varepsilon_1^u$; $y_{1,2}^u$ being the only two possible Yukawa couplings arising from the $A_4$-tensor in Eq. (\[yuk\]). Also, $r^u = v^u/\varepsilon_1^u$ and $\alpha^u = \varepsilon_2^u / \varepsilon_1^u$. 
Moreover, $X'_i = X' \vev{\sigma_i}$, $Y'_i= Y' \vev{(H^u\cdot \sigma')_i}/\Lambda$ and $M'_T = y_T \vev{\xi}$. The mass matrix in Eq. \[Mu\] is the same as that obtained in [@Morisi:2013eca], where the detailed treatment of the Yukawa sector is given. Notice that the addition of the vector-like quarks changes only the up-sector mass matrix; the down-sector mass matrix, and thus the relation in Eq. (\[eq:massrelation\]), remains unchanged; see [@Morisi:2013eca] for further details. The scalar sector of the model consists of the $\mathrm{SU(2)_L}$ doublet scalars $H^u, H^d$, both transforming as triplets under the $A_4$ symmetry. In addition it contains three types of $\mathrm{SU(2)_L}$ singlet scalars, with $\sigma, \sigma' \sim 3$ under $A_4$ while $\xi \sim 1$ under the $A_4$ symmetry. In order to illustrate how our candidate scalar can account for the $750$ GeV di-photon excess, we consider a simplified scenario. Neglecting the mixing between the $\mathrm{SU(2)_L}$ doublet and singlet scalars $s^{0}=(\xi^0,\sigma^0,\sigma^{\prime0})$, we can phenomenologically express the various scalar mass eigenstates as follows: $$\begin{aligned} h_i & = & \mathcal{U}_{ij} H^{0}_{j},\,\, \ \ (i,j=1,...,6) \nonumber \\ \chi_m & = & \mathcal{O}_{mn} s^{0}_{n}, \ \ (m,n=1,...,7) \label{pscalarmass}\end{aligned}$$ Under this approximation the singlet scalars $\chi_i$ can be further decomposed as $$\begin{aligned} \zeta \equiv \chi_1 &=& \mathcal{O}_{11} \xi^{0}+ \mathcal{O}_{1n} s^{0}_n, \\ \chi_m &=& \mathcal{O}_{mn} s^{0}_{n}, \nonumber \label{sscalarmass} \end{aligned}$$ with $m,n=2,...,7$, and we have identified the flavon field mass eigenstate $\zeta$ as our $750$ GeV resonance candidate. This flavon field is thus composed predominantly of $\mathrm{SU(2)_L}$ singlet scalars. Note that the rotation matrix $\mathcal{O}$ determines the mixing amongst the singlet scalars that form the two $A_4$-triplets and the $A_4$-singlet $\xi$. 
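As a consistency check of the charge assignments in Table \[tab1\], each term of the Yukawa Lagrangian in Eq. (\[yuk\]) must carry vanishing total $\mathrm{Z}_4$ charge. A short script verifying this (the field names are plain-text labels for the table entries; charges are stored as integer exponents $n$ of $\omega^n$, with $\omega^4=1$):

```python
# Z_4 charges from Table [tab1]: invariance of an operator means the
# omega-exponents of its fields sum to 0 mod 4.
Z4 = {"L": 0, "Ec": 0, "Q": 0, "Uc": 0, "Dc": 0, "Hu": 0, "Hd": 0,
      "T": 1, "Tc": 2, "sigma": 3, "sigmap": 2, "xi": 1}

yukawa_terms = [
    ("Q", "Hu", "Uc"),            # y^u  Q H^u U^c
    ("Q", "Hd", "Dc"),            # y^d  Q H^d D^c
    ("L", "Hd", "Ec"),            # y^l  L H^d E^c
    ("T", "Uc", "sigma"),         # X'   T U^c sigma
    ("Q", "Hu", "sigmap", "Tc"),  # (Y'/Lambda) Q (H^u . sigma') T^c
    ("T", "Tc", "xi"),            # y_T  T T^c xi
]
for term in yukawa_terms:
    assert sum(Z4[f] for f in term) % 4 == 0, term
```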
At the LHC $\zeta$ will be predominantly produced through gluon-gluon fusion via a triangle loop involving the vector-like T-quarks. In the absence of mixing between the $\mathrm{SU(2)_L}$ doublet and singlet scalars, the tree level coupling of $\zeta$ to the $W, Z$ bosons can be neglected. Similarly, the coupling of $\zeta$ to down-type fermions is also negligible. However, the coupling of $\zeta$ to up-type quarks is determined by the off-diagonal elements of Eq. (\[Mu\]). Thus, $\zeta$ predominantly couples to the vector-like quarks $T, T^c$. As we show below, the flavour constraints require the vector-like quarks to be quite heavy, so that for a large range of parameters we have $m_T > m_\zeta/2$. Thus, the decay of $\zeta$ to $T, T^c$ is kinematically forbidden. Therefore, $\zeta$ predominantly decays to photons and gluons through the triangle loop involving $T, T^c$, as shown in Fig. (\[Fig:Decays\]), and to up-type quarks through tree-level mixing. Apart from the above channels, $\zeta$ can also decay to $Z \gamma$ and $ZZ$ through analogous triangle loops involving $T, T^c$. Since $m^2_Z \ll m^2_\zeta$, the decay widths to the $\gamma \gamma$, $Z \gamma$ and $Z Z$ channels are proportional to each other, so that $$\begin{aligned} \frac{\Gamma_{Z \gamma}}{\Gamma_{\gamma \gamma}} \approx 2 \, \tan^2 \theta_W \ \ \text{and}\ \ \frac{\Gamma_{Z Z}}{\Gamma_{\gamma \gamma}} \approx \tan^4 \theta_W \end{aligned}$$ where $\theta_W$ is the weak mixing angle. In general $\zeta$ can also decay to two Higgs scalars, as we discuss below. Thus $\zeta$ seems, indeed, an ideal candidate to explain the 750 GeV di-photon excess recently observed at the LHC. Both $g$ and $\gamma$ couple to $T, T^c$ through gauge interactions with interaction strength proportional to $\alpha_s$ and $\alpha$, the strong and electromagnetic coupling constants, respectively. 
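Numerically, taking $\sin^2\theta_W \approx 0.231$ (an assumed input value, not quoted in the text), the ratios above give $\Gamma_{Z\gamma}/\Gamma_{\gamma\gamma} \approx 0.60$ and $\Gamma_{ZZ}/\Gamma_{\gamma\gamma} \approx 0.09$, so the diphoton channel dominates among the loop-induced electroweak final states:

```python
sin2w = 0.231                   # assumed value of sin^2(theta_W)
tan2w = sin2w / (1.0 - sin2w)   # tan^2(theta_W) ~ 0.30

r_Zgamma = 2.0 * tan2w          # Gamma(Z gamma) / Gamma(gamma gamma)
r_ZZ = tan2w ** 2               # Gamma(Z Z)     / Gamma(gamma gamma)

assert abs(r_Zgamma - 0.60) < 0.01
assert abs(r_ZZ - 0.09) < 0.01
```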
One can write down the effective Lagrangian for the coupling of $\zeta$ to gluons, the $Z$ boson and photons: $$\begin{aligned} \mathcal{L}_{\rm{eff}} & = & \frac{ c_\gamma}{4} \, \zeta \, F^{\mu \nu} F_{\mu \nu} + \frac{c_g}{4} \, \zeta \, G^{\mu \nu} G_{\mu \nu} + \frac{ c_{Z\gamma}}{2} \, \zeta \, F^{\mu \nu} Z_{\mu \nu} \nonumber \\ & + & \frac{c_{ZZ}}{4} \, \zeta \, Z^{\mu \nu} Z_{\mu \nu} \label{effl}\end{aligned}$$ where $F^{\mu \nu}, G^{\mu \nu}$ are the usual electromagnetic and colour field strength tensors and $Z_{\mu \nu}$ is the field strength tensor for the $Z$ boson. In Fig. \[fig7\] we show the allowed ranges for the effective couplings required to account for the 750 GeV di-photon excess for both the CMS and ATLAS experiments at the $95 \%$ confidence level. In the 8 TeV run neither ATLAS nor CMS has seen any statistically significant excess in any of the $\gamma \gamma$, $Z \gamma$ and $ZZ$ channels. The constraints from the 8 TeV run on the production cross section times branching fraction, $\sigma \times Br(\zeta \to ff)$ with $f = g,Z,\gamma$, for these decay channels can be obtained from [@Aad:2014fha; @Aad:2015kna; @Aad:2015mna; @CMS-PAS-HIG-14-006]. In Fig. \[fig7\] we have also included the constraints from the non-observation of any signal excess in the $\gamma \gamma$, $Z \gamma$ and $ZZ$ channels in the 8 TeV run. The upper colored region is consistent only with the ATLAS data, the lower one only with the CMS data, while the middle region is consistent with both CMS and ATLAS. The solid and dashed lines delimit the regions disallowed by the 8 TeV data for the $\zeta \to \gamma \gamma$ and $\zeta \to Z \gamma$ decays, respectively. The constraints from the $\zeta \to g g$ and $\zeta \to Z Z$ decays are rather weak and are not shown in the graph. The values of the effective couplings $c_\gamma$ and $c_g$ are determined by only two parameters, namely the mass $m_T$ of the vector-like quarks and the strength of the Yukawa coupling $\zeta T T^c$. 
The allowed parameter ranges for the mass $m_T$ and the Yukawa coupling $y_T$ for both the CMS and ATLAS experiments, as well as the constraints from the 8 TeV run, are shown in Fig. \[fig8\]. In plotting Fig. \[fig8\] we have required that all the Yukawa couplings remain perturbative over the entire range of parameter space. The color scheme of Fig. \[fig8\] is the same as that of Fig. \[fig7\]. As shown in Fig. \[fig8\], the decay $\zeta \to \gamma \gamma$ in our model can explain the diphoton excess observed by both the CMS and ATLAS experiments. Although the non-observation of a similar diphoton excess in the 8 TeV run puts severe constraints on the allowed parameter range, our model still has enough freedom to reconcile these restrictions with the observed 13 TeV excess. A key feature of our proposal is the identification of the 750 GeV anomaly as a flavon, i.e. a scalar state that carries flavour information. Indeed, our scalar $\zeta$ is directly coupled to the quarks, and hence to flavour, so we expect potential correlations between CKM physics and the properties of the observed anomaly. Therefore, in addition to the quark and lepton masses related by Eq. (\[eq:massrelation\]) and the measured neutrino oscillation parameters [@Forero:2014bxa], there are restrictions on the model parameters coming from the consistency of the quark sector, such as the measured quark mixing parameters [@Agashe:2014kda]. In order to explore these implications of the model, one must check, as in Ref. [@Morisi:2013eca] or in other generic vector-like scenarios [@Botella:2008qm; @Botella:2012ju], that our model can indeed adequately reproduce CKM physics. To do this we include a selected set of additional observables sensitive to the new vector-like quark $T$ and to the deviations of the CKM matrix from the standard $3\times 3$ unitary form. In particular, we include: 1. 
neutral meson mixing constraints: in the $B_d^0$ – $\bar B_d^0$ and $B_s^0$ – $\bar B_s^0$ systems, mass differences and “golden” CP asymmetries in $B_d\to J/\Psi K_S$, $B_s\to J/\Psi\Phi$ decays, bounds on the short distance contribution to the mass difference in the $D^0$ – $\bar D^0$ system, and the indirect and direct CP violation parameters $\epsilon_K$ and $\epsilon^\prime/\epsilon_K$ for the $K^0$ – $\bar K^0$ system; 2. rare decays induced by different quark level transitions: $B_s\to\mu^+\mu^-$, $B_d\to\mu^+\mu^-$, $B\to X_s\gamma$, $K_L\to \pi^0\nu\bar\nu$, $K^+\to \pi^+\nu\bar\nu$, and short distance contributions to $D^0\to\mu^+\mu^-$ and $K_L\to\mu^+\mu^-$. Furthermore, to reflect LHC bounds on direct production of the new vector-like quark, we restrict our analysis to values of the mass $m_T>1$ TeV [@Aad:2016qpo]. Once compliance with this set of constraints is ensured, we can address, in addition to the 750 GeV diphoton hint, other features of the model, in particular flavour related ones, like correlations among different observables. Due to the complexity of the problem in terms of the number of independent parameters, we focus on scenarios where the upper $3\times 3$ block of the $M_u$ mass matrix and the remaining mass matrices are fixed, while the $X_i$, $Y_i$ and $M_{44}$ entries, related to the new vector-like quark, are free to vary. Furthermore, we consider separate variations of: (A) only the largest entries, namely $\{X_2,Y_3,M_{44}\}$, or (B) only (all) the $X_i$ entries. To cover the available parameter space while maintaining agreement with all of the above constraints, we conduct a likelihood analysis based on Markov chain Monte Carlo simulations. Figures \[fig:Correlations:A\] and \[fig:Correlations:B\] illustrate the results of such analyses. In Fig. 
\[fig:Correlations:Aa\], the correlation between the values of $|V_{ub}|$ and the mass of the new quark $m_T$ is shown for scenario (A): in that case, accommodating larger $m_T$ values comes at the price of increasingly larger values of $|V_{ub}|$. To further illustrate the situation, Fig. \[fig:Correlations:Ab\] displays the correlation between $|V_{ub}|$ and $|V_{cb}|$ for separate ranges of $m_T$ values, $m_T\in[1.00;1.02]$ TeV, $m_T\in[1.10;1.12]$ TeV and $m_T\in[1.20;1.22]$ TeV. In addition to the features seen in Fig. \[fig:Correlations:Aa\], Fig. \[fig:Correlations:Ab\] shows an additional (milder) correlation between $|V_{cb}|$ and $m_T$: larger masses prefer smaller $|V_{cb}|$ values. Figure \[fig:Correlations:B\] corresponds instead to scenario (B). In this case the correlation trends are reversed with respect to scenario (A): while $|V_{ub}|$ is almost uncorrelated with $m_T$, $|V_{cb}|$ tends to be larger for increasing T-quark masses. Note also that the range of $m_T$ values is much more limited than in scenario (A). Finally, we briefly comment on the issue of the width of the $750$ GeV resonance. The first thing to notice is that with the current low statistics the estimates for the decay width are very poor. This is reflected in the fact that, while the ATLAS experiment prefers a broad decay width of around $45$ GeV, the CMS data suggest a decay width of a few GeV. Such uncertain decay width estimates are likely to change significantly in the next run, if the anomaly survives. In our model, since the $\zeta\to TT$ decay is not kinematically allowed, the decay width of $\zeta$ is a priori narrow. Even though the $\zeta \to t \bar{t}$ decay is mixing-suppressed, it can contribute to the total width from a few hundred MeV up to 10 GeV. Partial widths to lighter fermions are smaller, as is that for the $\zeta \to hh$ decay (where $h$ is the Standard Model Higgs boson), which is constrained by the LHC Run 1 data. 
If future runs confirm that a broad resonance persists, this would imply that a significant novel decay channel of $\zeta$ is at work. This work was supported by MINECO grants FPA2014-58183-P, Multidark CSD2009-00064 and the PROMETEOII/2014/084 grant from Generalitat Valenciana. M.N. acknowledges financial support from the PROMETEOII/2013/017 grant from Generalitat Valenciana. RS would like to thank T. Modak and S. Sadhukhan for useful discussions and suggestions. The numerical computation was done using MadGraph5_aMC@NLO [@Alwall:2014hca] with the NN23LO1 PDF set [@Ball:2013hta]. [10]{} *[Search for resonances decaying to photon pairs in 3.2 fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector]{}*, Technical Report ATLAS-CONF-2015-081, CERN, Geneva (2015), <http://cds.cern.ch/record/2114853>. CMS Collaboration, *[Search for new physics in high mass diphoton events in proton-proton collisions at $\sqrt{s} = 13$ TeV]{}* (2015) CMS-PAS-EXO-15-004, <https://cds.cern.ch/record/2114808>. R. Gatto, G. Sartori and M. Tonin, *[Weak Selfmasses, Cabibbo Angle, and Broken SU(2) x SU(2)]{}*, Phys. Lett. **B28** (1968) 128–130. D. Forero, M. Tortola and J. Valle, *[Neutrino oscillations refitted]{}*, [[](http://dx.doi.org/10.1103/PhysRevD.90.093006)]{}[[](http://dx.doi.org/10.1103/PhysRevD.90.093006)]{}, [[](http://arxiv.org/abs/1405.7540)]{}. S. Morisi et al., *[Quark-Lepton Mass Relation and CKM mixing in an A4 Extension of the Minimal Supersymmetric Standard Model]{}*, [[](http://dx.doi.org/10.1103/PhysRevD.88.036001)]{}[[](http://dx.doi.org/10.1103/PhysRevD.88.036001)]{}, [[](http://arxiv.org/abs/1303.4394)]{}. S. Morisi, E. Peinado, Y. Shimizu and J. W. F. Valle, *[Relating quarks and leptons without grand-unification]{}*, Phys.Rev. **D84** (2011) 036003, [[](http://arxiv.org/abs/1104.1633)]{}. S. King, S. Morisi, E. Peinado and J. W. F. 
Valle, *[Quark-Lepton Mass Relation in a Realistic A4 Extension of the Standard Model]{}*, Phys. Lett. B **724** (2013) 68–72, [[](http://arxiv.org/abs/1301.7065)]{}. C. Bonilla, S. Morisi, E. Peinado and J. W. F. Valle, *[Relating quarks and leptons with the $T_7$ flavour group]{}*, [[](http://dx.doi.org/10.1016/j.physletb.2015.01.017)]{}[[](http://dx.doi.org/10.1016/j.physletb.2015.01.017)]{}, [[](http://arxiv.org/abs/1411.4883)]{}. S. Morisi and J. W. F. Valle, *[Neutrino masses and mixing: a flavour symmetry roadmap]{}*, Fortsch.Phys. **61** (2013) 466–492, [[](http://arxiv.org/abs/1206.6678)]{}. S. F. King et al., *[Neutrino Mass and Mixing: from Theory to Experiment]{}*, [[](http://dx.doi.org/10.1088/1367-2630/16/4/045018)]{}[[](http://dx.doi.org/10.1088/1367-2630/16/4/045018)]{}, [[](http://arxiv.org/abs/1402.4271)]{}. F. Staub et al., *[Precision tools and models to narrow in on the 750 GeV diphoton resonance]{}* (2016), [[](http://arxiv.org/abs/1602.05581)]{}. G. Aad et al. (ATLAS), *[Search for new resonances in $W\gamma$ and $Z\gamma$ final states in $pp$ collisions at $\sqrt s=8$ TeV with the ATLAS detector]{}*, [[](http://dx.doi.org/10.1016/j.physletb.2014.10.002)]{}[[](http://dx.doi.org/10.1016/j.physletb.2014.10.002)]{}, [[](http://arxiv.org/abs/1407.8150)]{}. G. Aad et al. (ATLAS), *[Search for an additional, heavy Higgs boson in the $H\rightarrow ZZ$ decay channel at $\sqrt{s} = 8\;\text{ TeV }$ in $pp$ collision data with the ATLAS detector]{}*, [[](http://dx.doi.org/10.1140/epjc/s10052-015-3820-z)]{}[[](http://dx.doi.org/10.1140/epjc/s10052-015-3820-z)]{}, [[](http://arxiv.org/abs/1507.05930)]{}. G. Aad et al. (ATLAS), *[Search for high-mass diphoton resonances in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector]{}*, [[](http://dx.doi.org/10.1103/PhysRevD.92.032004)]{}[[](http://dx.doi.org/10.1103/PhysRevD.92.032004)]{}, [[](http://arxiv.org/abs/1504.05511)]{}. 
*[Search for new resonances in the diphoton final state in the range between 150 and 850 GeV in pp collisions at $\sqrt{s} = 8~\mathrm{TeV}$]{}*, Technical Report CMS-PAS-HIG-14-006, CERN, Geneva (2014), <http://cds.cern.ch/record/1714076>. K. Olive et al. (Particle Data Group), *[Review of Particle Physics]{}*, [doi:10.1088/1674-1137/38/9/090001](http://dx.doi.org/10.1088/1674-1137/38/9/090001). F. J. Botella, G. C. Branco and M. Nebot, *[Small violations of unitarity, the phase in $B^0_s - \bar{B}^0_s$ and visible $t \to cZ$ decays at the LHC]{}*, Phys. Rev. **D79** (2009) 096009, [arXiv:0805.3995](http://arxiv.org/abs/0805.3995). F. J. Botella, G. C. Branco and M. Nebot, *[The Hunt for New Physics in the Flavour Sector with up vector-like quarks]{}*, [doi:10.1007/JHEP12(2012)040](http://dx.doi.org/10.1007/JHEP12(2012)040), [arXiv:1207.4440](http://arxiv.org/abs/1207.4440). G. Aad et al. (ATLAS), *[Search for single production of vector-like quarks decaying into $Wb$ in $pp$ collisions at $\sqrt{s} =$ 8 TeV with the ATLAS detector]{}* (2016), [arXiv:1602.05606](http://arxiv.org/abs/1602.05606). J. Alwall et al., *[The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations]{}*, [doi:10.1007/JHEP07(2014)079](http://dx.doi.org/10.1007/JHEP07(2014)079), [arXiv:1405.0301](http://arxiv.org/abs/1405.0301). R. D. Ball et al. (NNPDF), *[Parton distributions with QED corrections]{}*, [doi:10.1016/j.nuclphysb.2013.10.010](http://dx.doi.org/10.1016/j.nuclphysb.2013.10.010), [arXiv:1308.0598](http://arxiv.org/abs/1308.0598). [^1]: Indeed vector-like fermions have been suggested to account for the diphoton anomaly. For an extensive reference set see [@Staub:2016dxq].
--- abstract: 'Using angle-resolved photoemission spectroscopy, we report electronic structure for representative members of ternary topological insulators. We show that several members of this family, such as Bi$_2$Se$_2$Te, Bi$_2$Te$_2$Se, and GeBi$_2$Te$_4$, exhibit a singly degenerate Dirac-like surface state, while Bi$_2$Se$_2$S is a fully gapped insulator with no measurable surface state. One of these compounds, Bi$_2$Se$_2$Te, shows tunable surface state dispersion upon its electronic alloying with Sb (Sb$_x$Bi$_{2-x}$Se$_2$Te series). Other members of the ternary family such as GeBi$_2$Te$_4$ and BiTe$_{1.5}$S$_{1.5}$ show an in-gap surface Dirac point, the former of which has been predicted to show nonzero weak topological invariants such as (1;111); thus belonging to a different topological class than BiTe$_{1.5}$S$_{1.5}$. The measured band structure presented here will be a valuable guide for interpreting transport, thermoelectric, and thermopower measurements on these compounds. The unique surface band topology observed in these compounds contributes towards identifying designer materials with desired flexibility needed for thermoelectric and spintronic device fabrication.' author: - 'M. Neupane' - 'S.-Y. Xu' - 'L. A. Wray' - 'A. Petersen' - 'R. Shankar' - 'N. Alidoust' - Chang Liu - 'A. Fedorov' - 'H. Ji' - 'J. M. Allred' - 'Y. S. Hor' - 'T.-R. Chang' - 'H.-T. Jeng' - 'H. Lin' - 'A. Bansil' - 'R. J. Cava' - 'M. Z. Hasan' title: Topological surface states and Dirac point tuning in ternary topological insulators --- Introduction ============ A topological insulator (TI), as experimentally realized in bismuth-based materials, is a novel electronic state of quantum matter characterized by a bulk-insulating band gap and spin-polarized metallic surface states. 
[@Kane; @PRL; @David; @Nature08; @Hasan; @SCZhang; @Suyang_1; @Ran; @Nature; @physics; @David; @Science; @BiSb; @Matthew; @Nature; @physics; @BiSe; @Chen; @Science; @BiTe; @David; @Nature; @tunable; @Pedram; @Nature; @BiSb; @Hor; @PRB; @BiSe; @Essin; @PRL; @Magnetic; @Galvanic; @effect; @Yu; @Science; @QAH; @Qi; @Science; @Monopole; @Linder; @PRL; @Superconductivity; @Liang; @Fu; @PRL; @Superconductivity; @Phuan; @Hor; @arXiv; @BiTe; @superconducting] Owing to time reversal symmetry, topological surface states are protected from backscattering and localization in the presence of weak perturbation, resulting in spin currents with reduced dissipation. On the other hand, bismuth-based materials are also being studied for enhanced thermoelectric device performance. [@Moore] Therefore, it is of general importance to study the band structure of these materials as a starting point. Using angle-resolved photoemission spectroscopy (ARPES) and spin-resolved ARPES, several Bi-based topological insulators have been identified, such as the Bi$_{1-x}$Sb$_x$ alloys [@David; @Nature08; @David; @Science; @BiSb], the Bi$_2X_3$ ($X$ = Se, Te) series and their derivatives.[@Matthew; @Nature; @physics; @BiSe; @Chen; @Science; @BiTe] Although significant efforts have been made to realize multifunctional electronic properties in the existing materials, little success has been obtained so far due to residual bulk conduction.[@Phuan; @Hor; @arXiv; @BiTe; @superconducting; @Suyang] This led to the search for other topological materials, which might potentially be optimized for the realization of functional devices. Recently, ternary topological insulators such as Bi$_2$Se$_2$Te, Bi$_2$Te$_2$Se, Bi$_2$Te$_2$S, GeBi$_2$Te$_4$, and PbBi$_4$Te$_{7}$ have been theoretically predicted to feature multifunctional and flexible electronic structures. [@Suyang_1; @Wang_Johnson; @Lin] However, limited ARPES studies are reported even on Bi$_2$Te$_2$Se to date. 
[@Suyang; @Wang_Johnson; @Lin; @Sergey; @BTS_Ando; @Ong_BTS; @Arakane; @Kimura; @Souma] In this paper, we investigate the electronic structure of four distinct and unique compounds, namely, Bi$_2$Se$_2$Te (Se-rich), Bi$_2$Te$_2$Se (Te-rich), Bi$_2X_{3-x}$S$_x$ ($X$ = Se, Te; $x$ = 1, 1.5), and GeBi$_2$Te$_4$, as representative members of the ternary family. Surface state properties relevant for the enhanced functionality are identified in these materials. First-principles band calculations are also presented for comparison with our experimental data. Our experimental findings are itemized as follows. First, our data suggests that the ternary compound Bi$_2$Se$_2$Te (Se-rich) has a large effective bulk band gap. By tuning the ratio of bismuth to antimony, we are able not only to lower the Fermi level into the band gap but also to fine tune the Fermi level so that it lies exactly at the Dirac point. Second, we show that the Dirac point of Bi$_2$Te$_2$Se (Te-rich) is not isolated from the bulk valence bands when the chemical potential is placed at the Dirac point. Third, we report band structure properties of sulfur doped Bi$_2X_3$ \[Bi$_2X_{3-x}$S$_x$ ($X$ = Se, Te; $x$ = 1, 1.5)\] in some detail. The compound Bi$_2$Te$_{1.5}$S$_{1.5}$, derived from Bi$_2$Te$_3$ by replacing Te with S, shows a large bulk band gap and a single Dirac cone surface state, where the Dirac point is located inside the bulk band gap, in contrast to the related Bi$_2$Te$_3$ where the Dirac point is buried inside the bulk valence band. The detail of crystal growth of this compound is described in Ref. \[40\]. The replacement of Te by S is a critically important process to realize the exposed Dirac point electronic structure in Te-rich sample. Finally, we discuss the electronic structure of GeBi$_2$Te$_4$, which serves as a single Dirac cone topological insulator belonging to a class with nonzero weak topological invariants. 
Despite its high Te-content, this compound exhibits in-gap Fermi level and isolated Dirac node. This is likely due to the change of global crystal potential associated with the Ge sub-lattice. ![(Color online) Crystal structure and topological surface states in ternary spin-orbit compounds: $B_2X_2X'$, $AB_2X_4$, $A_2B_2X_5$ and $AB_4X_7$ \[$A$ = Pb, Ge; $B$ = Bi, Sb; $X, X'$ = Se, Te\]. (a)-(d) crystal structure and calculated bulk and surface band structures for the (111) surface of $B_2X_2X'$, $AB_2X_4$, $A_2B_2X_5$ and $AB_4X_7$, respectively. The bulk band projections are represented by shaded areas.](Fig1){width="8.0"} Methods ======= The first-principles band calculations were performed with the linear augmented plane-wave (LAPW) method using the WIEN2K package[@wien2k] and the projected augmented wave method[@PAW] using the VASP package[@VASP] in the framework of density functional theory (DFT). The generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof[@PBE] was used to describe the exchange correlation potentials. Spin-orbit coupling (SOC) was included as a second variational step using a basis of scalar relativistic eigenfunctions. The surface electronic structure computation was performed with a symmetric slab of six quintuple layers; a vacuum region with thickness larger than 10 $\mathrm{\AA}$ was used. Single crystalline samples of ternary topological insulators were grown using the Bridgman method, which is described elsewhere. [@Hor; @PRB; @BiSe; @BTS_Ando; @Jia] ARPES measurements for the low energy electronic structures were performed at the Synchrotron Radiation Center (SRC), Wisconsin, the Stanford Synchrotron Radiation Lightsource (SSRL), California, and the Advanced Light Source (ALS), California, equipped with high efficiency VG-Scienta SES2002 and R4000 electron analyzers. Samples were cleaved [*in situ*]{} and measured at 10-80 K in a vacuum better than 1 $\times$ 10$^{-10}$ torr. 
They were found to be very stable and without degradation for the typical measurement period of 20 hours. Potassium deposition was performed at beam line 12.0.1 of the ALS from a SAES getter source (SAES Getters USA, Inc.), which was thoroughly degassed before the experiment. Pressure in the experimental chamber stayed below 1$\times$ 10$^{-10}$ torr during deposition. The deposition rate ($\mathrm{\AA}$/Sec) was monitored using commercial quartz thickness monitor (Leybold Inficon Inc., model XTM/2). The deposition amount (thickness) was then obtained by multiplying the deposition rate by the elapsed time. ![image](Fig2){width="16cm"} ![image](Fig3){width="16cm"} Results and discussion ====================== Band calculation ---------------- In Fig. 1 we present crystal structures and first principles theoretical calculations for the (111) bulk and surface electronic structure of $B_2X_2X'$, $AB_2X_4$, $A_2B_2X_5$ and $AB_4X_7$ \[$A$ = Pb, Ge; B = Bi, Sb; $X, X'$ = Se, Te\] as examples for a large family of ternary topological insulators with single Dirac cone. Calculations are presented along the $\bar K-\bar\Gamma-\bar M$ momentum space directions. $B_2X_2X'$ has tetradymite structure with a rhombohedral unit cell belonging to the space group R$\bar3$m. The commonly invoked hexagonal cell consists of three quintuple layers. The natural cleavage plane of Bi$_2$Te$_2$Se lies between two quintuple layers. $A_mB_{2n}X_{m+3n}$ represents a large family of compounds in which $(AX)_m$ layers are inserted into the $B_2X_3$ stacking. The crystal structures of $AB_2X_4$, $A_2B_2X_5$, and $AB_4X_7$ are composed of $X$ layers forming a cubic close packing, with a fraction of octahedral interstices occupied by $A$ and $B$ atoms. [@GBT_; @Structure] The unit cell of $AB_2X_4$ is formed by stacking together three seven-atomic-layer slabs in the sequence $X(1)-B-X(2)-A-X(2)-B-X(1)$ \[Fig. 1(b)\]. The cleavage plane of GeBi$_2$Te$_4$ locates between two seven-atomic-layers. Figs. 
1(c) and 1(d) give two examples of topologically nontrivial compounds with two different kinds of insertion and stacking. The unit cell of Pb$_2$Bi$_2$Se$_5$ consists of nine-atomic-layers which are made by inserting two PbSe layers into Bi$_2$Se$_3$. This crystal cleaves between two nine-atomic-layers, where the van der Waals bonding is weak. PbBi$_4$Te$_7$ consists of alternating seven-atomic-layers of PbBi$_2$Te$_4$ and quintuple layers of Bi$_2$Te$_3$. There are two possible surface terminations along the (111) direction for PbBi$_4$Te$_7$, with the exposure of either a seven-atomic-layer or a five-atomic-layer. [@Suyang; @Sergey] We show the surface bands for the one with the exposure of seven-atomic-layer in Fig. 1(d). A singly degenerate gapless Dirac cone centered at the $\bar{\Gamma}$ point is observed in the representative compounds for each class, indicating that these materials belong to the $Z_2$ = -1 topological class. The numerically predicted bulk band gap varies over an order of magnitude from 0.01 eV to 0.31 eV. Such surface electron kinetics offer a wide range of topologically nontrivial electronic structures ranging from a nearly isotropic Dirac cone (e.g. PbBi$_2$Se$_4$) to strongly anisotropic and doping dependent topological surface states. This remarkable material flexibility provides a wide range of critical electronic properties for realization of different functionalities, which are not even theoretically offered in the most commonly studied Bi$_2X_3$ compounds. ![image](Fig4){width="17.5cm"} Realization of an isolated Dirac node ------------------------------------- The presence of an isolated Dirac node, as well as the tunability of the chemical potential to the isolated Dirac point, is highly favored for application purposes because it reduces the scattering from the bulk bands. 
An important requirement for topological insulators in device oriented applications such as topological quantum information and low power spintronics devices [@spintronics] is the dissipationless surface states in the topological transport regime, i.e., an isolated Dirac cone fully separated from bulk bands, and the Fermi level located at the Dirac point.[@Liang; @Fu; @PRL; @Superconductivity] The full exposure of topological transport regime for dissipationless spin current with tunable surface states is useful for the study of various novel topological phenomena, such as quantum spin Hall effect, magnetoelectric effects, etc. [@Hasan] However, none of the proposed applications have been realized due to the material drawbacks of the existing well-studied topological insulators. Although there are various experimental efforts to realize an isolated Dirac cone by tuning the Fermi level with appropriate doping, [@David; @Nature; @tunable] the essential necessity of external surface deposition process makes this procedure unsuitable for most practical applications. Recently, tuning of Fermi level has been reported by changing the Bi to Sb composition ratio for Sb$_x$Bi$_{2-x}X_3$ ($X$ = Se, Te) single crystals.[@Kong] It is well known that an isolated Dirac node together with a chemical potential lying on the Dirac point through Sb substitution is not possible on either Bi$_{2}$Te$_3$ or Bi$_{2}$Se$_3$. For Sb$_x$Bi$_{2-x}$Te$_3$, though the chemical potential can be tuned by Sb concentration,[@Kong] the Dirac point is always buried inside the bulk valence bands. For Sb$_x$Bi$_{2-x}$Se$_3$, substantial Sb substitution changes the topological property of the system since Sb$_2$Se$_3$ is proven to be a trivial insulator.[@Hasan; @SCZhang] In the following, we discuss the tunable topological surface states in the Sb$_x$Bi$_{2-x}$Se$_2$Te system, in which we realize an isolated Dirac point without any surface deposition. 
The ARPES electronic structure of Sb$_x$Bi$_{2-x}$Se$_2$Te is shown in Fig. 2(a). Bi$_{2}$Se$_2$Te ($x$ = 0) shows well-defined surface states with massless Dirac-like dispersion \[see Fig. 2(a), left\], proving it to be a topological insulator featuring a single Dirac cone with a bulk insulating gap of $\sim$ 250 meV. The Fermi level of this system lies at the bulk conduction band (BCB) and the valence band is located below the Dirac point. Upon substitution of Sb in place of Bi, the Dirac-like topological surface states can be clearly observed in the entire doping range \[see Fig. 2(a)\]. With increasing $x$, the Fermi level $E_F$ moves downward from the BCB, indicating a reduction of the $n$-type bulk carriers. When Sb substitution is further increased, both Dirac point and $E_F$ lie within the bulk energy gap (such as in $x$ = 0.8), and ultimately the Fermi level reaches the isolated Dirac point for $x$ = 1. Upon further increase of $x$ ($x > 1$), $E_F$ moves below the Dirac point, indicating a crossover from $n$- to $p$-type topological insulator \[see Fig. 2(b)\]. The charge neutrality point (CNP), the point where $E_F$ meets the Dirac point, can thus be determined to be located at $x \sim 1$. To further verify the observation of an isolated Dirac point, photon energy dependent measurements have been performed, as shown in Fig. 2(c). While bulk valence bands change with photon energy, the surface bands and the isolated Dirac node show no visible dispersion, suggesting the two-dimensional nature of the surface states. Our measurements thus verify that Sb$_x$Bi$_{2-x}$Se$_2$Te is a tunable topological insulator with an isolated Dirac node. Our first-principles calculations for $x$ = 0, 1, 1.67 (see Fig. 3) show that topological surface states exist in all three doping levels; Dirac point moves away from the bulk bands as doping ($x$) increases, supporting our experimental results. 
![image](Fig5){width="12.8cm"} Insulating Bi$_2$Te$_2$Se ------------------------- An important property for a functional electronic structure of a topological insulator is the isolation of the surface states from the bulk electronic states, since the surface signal would otherwise be washed out by the bulk contribution in transport experiments. Bi$_2$Te$_2$Se is a distinct line compound in the phase diagram known as “Kawazulite” [@kawazulite; @Suyang; @BTS_Ando] (as opposed to a random mixture of Bi$_2$Se$_3$ and Bi$_2$Te$_3$). The interpretation of any surface transport measurements will rely on key band structure properties and parameters, such as Fermi velocity, Fermi momentum, etc., which have yet to be reported on Bi$_2$Te$_2$Se. In Fig. 4(a), we present ARPES electronic structures on three batches of as-grown Bi$_2$Te$_2$Se (“native 1”, “native 2”, and “$n$-type”) with slightly different growth parameters. Our measurements reveal a single Dirac cone on the cleaved (111)-surface. The experimentally observed chemical potentials vary with different sample growth conditions. As shown in Fig. 4(a), the Fermi level of the “native1” batch is slightly above the Dirac node ($E_F=E_D+0.1$ eV) with an average Fermi momentum ($k_F$) of $0.05$ $\mathrm{\AA}^{-1}$. In contrast, the Fermi level of the“native2” batch is more than 0.3 eV above $E_D$, with a larger averaged $k_F$ of $0.1$ $\mathrm{\AA}^{-1}$. For the batch marked as “*n*-type”, the bulk conduction band minimum is observed near its Fermi level, from which we are able to obtain a bulk band gap of $\sim0.3$ eV. The two-dimensional constant energy contour plots of the ARPES intensity at various binding energies ($E_B$) are shown in Fig. 4(c). The Fermi contour of the “native2” batch \[first panel of Fig. 4(c)\] realizes a hexagonal shape within the bulk band gap. Binding energy evolution study of the constant energy contours \[Fig. 
4(c)\] shows that the hexagon gradually reverts to a circle when approaching the Dirac node. In the vicinity of the Dirac point, the valence band feature is observable as a six-fold petal-like intensity pattern at $E_B$ $\sim$ 0.3 eV (Fig. 4(c) right). A three-dimensional representation of the electronic structure of the “native2” batch is shown in Fig. 4(d). From Fig. 4, it is clear that the Dirac point of Bi$_2$Te$_2$Se is not exposed; rather, it is buried in the bulk valence bands. The Fermi velocity ($v_F$) of Bi$_2$Te$_2$Se is estimated to be $6 \times 10^5$ m/s along the $\bar{\Gamma}-\bar{M}$ direction, and $8{\times}10^5$ m/s along the $\bar{\Gamma}-\bar{K}$ direction, which is larger than that of Bi$_2X_3$ (Refs. ). This makes it favorable for a long mean free path ($L=v_F\tau$) on the surface. Sulfur to selenium/tellurium substitution in Bi$_2$(Se/Te)$_3$ -------------------------------------------------------------- The proposed applications of topological insulators require a wide range of tunability of the key electronic parameters of the topological surface states, which is lacking in the widely studied binary TI materials Bi$_2$Se$_3$ and Bi$_2$Te$_3$. In the following, we present a study of the sulfur substitution for Se/Te in Bi$_2$Se$_3$ and Bi$_2$Te$_3$ (see Ref. \[40\] for sample growth and characterization), which brings desired properties to the well-studied binary topological insulators. While Bi$_2$Se$_3$ is a single Dirac cone topological insulator, ARPES measurement shows that Bi$_2$Se$_2$S is a trivial insulator with a band gap of $\sim$ 1.2 eV \[see Fig. 5(a)\]. On the other hand, ARPES measurement on Bi$_2$Te$_{1.5}$S$_{1.5}$ \[Fig. 5(b)\] reveals that it is a topological insulator with a bulk band gap of $\sim$ 0.2 eV (band gap of Bi$_2$Te$_3$ $\sim$ 0.15 eV).
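As a quick numerical check of the mean free path estimate $L = v_F\tau$ quoted above for Bi$_2$Te$_2$Se, one can plug in the two measured Fermi velocities; the scattering time $\tau$ used below is an assumed, illustrative value, not a measured one:

```python
# Mean free path L = v_F * tau for the two ARPES-measured Fermi velocities
# of Bi2Te2Se.  tau is an assumed, illustrative surface scattering time.
v_F_GM = 6e5       # m/s, along Gamma-M (from the ARPES dispersion)
v_F_GK = 8e5       # m/s, along Gamma-K
tau = 1.0e-13      # s, hypothetical scattering time (not from the data)

L_GM = v_F_GM * tau
L_GK = v_F_GK * tau
print(f"L(Gamma-M) = {L_GM * 1e9:.0f} nm")   # 60 nm
print(f"L(Gamma-K) = {L_GK * 1e9:.0f} nm")   # 80 nm
```

At fixed $\tau$, a larger $v_F$ translates directly into a longer $L$, which is why the comparatively high Fermi velocity of Bi$_2$Te$_2$Se is favorable for surface transport.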
More importantly, compared to Bi$_2$Te$_3$ whose Dirac node is buried inside the valence bands, the Dirac point of Bi$_2$Te$_{1.5}$S$_{1.5}$ is completely isolated from the bulk electronic states, which is essential for applications in the Dirac transport regime. The isolated nature of the Dirac point is further verified by photon energy dependence measurements as shown in Fig. 5(c). The surface states show no visible dispersion, and the Dirac node is always at a different binding energy from that of the bulk valence band under different photon energies. The isolated nature of the Dirac node is also supported by our first principles band calculations presented in Fig. 5(d). Furthermore, our experimental observations shown in Figs. 5(a) and (b) suggest different roles of sulfur doping on Bi$_2$Se$_3$ and Bi$_2$Te$_3$, which can be understood by considering their crystal structures. While Bi$_2$Se$_3$ has a rhombohedral unit cell under the space group $R\bar{3}m$ (No. 166), Bi$_2$S$_3$ has an orthorhombic unit cell under the space group $Pnma$ (No. 62)(see Ref. ). The former compound is topologically nontrivial, while the latter is a trivial band insulator with a large gap of about 1 eV. The solid solution Bi$_2X_{3-x}$S$_x$ behaves in such a way that formula with higher concentration of heavier elements prefer the rhombohedral structure, while that with higher concentration of lighter elements prefer the orthorhombic structure. The observed large gap in Bi$_2$Se$_2$S is consistent with the predicted trivial phase with an orthorhombic unit cell. On the other hand, Bi$_2$Te$_{1.5}$S$_{1.5}$ contains heavier elements and our observed gapless surface Dirac cone is consistent with the predicted nontrivial phase with a rhombohedral unit cell. 
GeBi$_2$Te$_4$: A ternary topological insulator with nonzero weak topological invariants ---------------------------------------------------------------------------------------- ![image](Fig6){width="15cm"} Topological insulators are characterized by $Z_2$ topological invariants of the bulk band structure. For a three-dimensional bulk insulator, the topological insulator state is defined by four topological invariants commonly indexed as $[\nu_0;\nu_1\nu_2\nu_3]$ (see Refs. ), where $\nu_0$ is the strong invariant and $ \nu_1,\nu_2,\nu_3$ are the weak invariants. In the bulk computations, we evaluate these four invariants by obtaining the parity symmetry of the Bloch wave functions for all the occupied electronic states at eight time-reversal-invariant points.[@FuKane] While Bi$_2$Se$_3$ belongs to the \[1;000\] class due to a band inversion at the $\Gamma$ point, GeBi$_2$Te$_4$ is predicted to be a strong topological insulator with nonzero weak indices \[1;111\]. This is due to a band inversion at the $Z$-point instead of the $\Gamma$-point, [@FuKane] which is also seen in PbBi$_2$Te$_4$. Figure 6(a) shows the ARPES measured dispersion of the surface Dirac bands of GeBi$_2$Te$_4$. Our data shows an in-gap Fermi level for naturally grown GeBi$_2$Te$_4$ crystals. In order to systematically analyze the surface band structure and Fermi surface warping effects in GeBi$_2$Te$_4$, we plotted constant energy contours at different binding energies \[see Fig. 6(b)\]. The constant energy contour at $E_B$ = 0.02 eV of GeBi$_2$Te$_4$ clearly demonstrates the hexagonal warping effect. [@Liang_Fu; @Xu_BiTe] When the binding energy is increased from the Fermi level, the effect of the bulk potential vanishes and the shape of the contour turns into a circle. Further increasing the binding energy results in a Fermi surface consisting of a single Dirac point with no other features. 
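The parity criterion used above to obtain $[\nu_0;\nu_1\nu_2\nu_3]$ can be sketched in a few lines. The $\delta$ inputs below are illustrative toy assignments (a single band inversion at one TRIM), not values computed from actual band structures:

```python
from itertools import product

def z2_invariants(delta):
    """[nu0; nu1 nu2 nu3] from parity products delta[(n1,n2,n3)] at the
    eight time-reversal-invariant momenta (TRIM), following the Fu-Kane
    criterion.  delta[trim] is the product of parity eigenvalues of the
    occupied Kramers pairs at that TRIM."""
    trims = list(product((0, 1), repeat=3))

    def nu(points):
        p = 1
        for t in points:
            p *= delta[t]
        return 0 if p == 1 else 1        # (-1)^nu = product of deltas

    nu0 = nu(trims)                      # strong index: all eight TRIM
    weak = tuple(nu([t for t in trims if t[k] == 1]) for k in range(3))
    return nu0, weak

# Illustrative inputs: a single band inversion (delta = -1) at one TRIM.
trims = list(product((0, 1), repeat=3))
delta_gamma = {t: (-1 if t == (0, 0, 0) else 1) for t in trims}  # at Gamma
delta_z     = {t: (-1 if t == (1, 1, 1) else 1) for t in trims}  # at Z

print(z2_invariants(delta_gamma))  # (1, (0, 0, 0))  -> [1;000]
print(z2_invariants(delta_z))      # (1, (1, 1, 1))  -> [1;111]
```

An inversion at $\Gamma = (0,0,0)$ reproduces the $[1;000]$ class of Bi$_2$Se$_3$, while an inversion at the $Z$ point, labeled $(1,1,1)$ here, yields the $[1;111]$ class predicted for GeBi$_2$Te$_4$.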
Therefore, GeBi$_2$Te$_4$ realizes an isolated Dirac point, which makes it possible to bring the system into the topological transport regime. Constant energy contours below the Dirac point reveal the lower Dirac cone. Upon further increasing the binding energy, an additional six-fold symmetric feature extending outwards along all $\bar\Gamma$-$\bar M$ directions is observed. Surface potassium deposition measurements are performed in order to estimate the energy position of the bottom of the bulk conduction band from the Dirac point. Fig. 6(c) shows the ARPES measured dispersion of the surface bands of GeBi$_2$Te$_4$ along the $\bar\Gamma$-$\bar K$ momentum direction as potassium is deposited; corresponding energy distribution curves are plotted in Fig. 6(d). The average thickness of the potassium layer is also marked on the top of each panel in Fig. 6(c). With approximately 1.6 $\mathrm{\AA}$ deposition of potassium, the bulk conduction band appears with its bottom located at about 200 meV above the Dirac point. Conclusion ========== We have performed electronic structure measurements for representative members of a large family of ternary topological insulators using ARPES. Our measurements show that the ternary topological insulators Bi$_2$Se$_2$Te, Bi$_2$Te$_2$Se, Bi$_2$Te$_{1.5}$S$_{1.5}$ and GeBi$_2$Te$_4$ exhibit single Dirac cone surface states, which is supported by our first principles band calculations. Among them, Bi$_2$Se$_2$Te, Bi$_2$Te$_{1.5}$S$_{1.5}$ and GeBi$_2$Te$_4$ feature an in-gap Dirac point. Bi$_2$Se$_2$Te has a large effective bulk band gap and it shows a tunable surface state with an isolated Dirac node upon changing chemical composition Bi/Sb. The unique electronic properties of this material class identified in our experiments will be a helpful guide to interpret transport, optical, magnetic and thermoelectric measurements. Acknowledgements ================ The ARPES measurements are supported by NSF-DMR-1006492. 
The Synchrotron Radiation Center is supported by NSF-DMR-0537588. The crystal growth is supported by NSF-DMR-0819860. Work at Northeastern is supported by the Basic Energy Sciences, US Department of Energy (DE-FG02-07ER46352 and AC03-76SF00098), and benefited from the allocation of supercomputer time at NERSC and Northeastern University’s Advanced Scientific Computation Center. T.-R.C. and H.T.J. are supported by the National Science Council and Academia Sinica, Taiwan, and they thank NCHC, CINC-NTU, and NCTS, Taiwan for technical support. The Advanced Light Source is supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The Stanford Synchrotron Radiation Lightsource is supported by the U.S. Department of Energy under Contract No. DE-AC02-76SF00515. M.Z.H. acknowledges additional support from the Advanced Light Source at LBNL and the A. P. Sloan Foundation. [99]{} L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. $\mathbf{98}$, 106803 (2007). D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature $\mathbf{452}$, 970 (2008). S.-Y. Xu, L. A. Wray, Y. Xia, R. Shankar, A. Petersen, A. Fedorov, H. Lin, A. Bansil, Y. S. Hor, D. Grauer, R. J. Cava, and M. Z. Hasan, arXiv:cond-mat/1007.5111v1 (2010). e-print arXiv:1007.5111v1 Y. Ran, Y. Zhang, and A. Vishwanath, Nature Phys. $\mathbf{5}$, 298 (2009). D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, J. Osterwalder, F. Meier, G. Bihlmayer, C. L. Kane, Y.S. Hor, R. J. Cava, and M. Z. Hasan, Science $\mathbf{323}$, 919 (2009). M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. $\mathbf{82}$, 3045 (2010). X.-L. Qi, and S. C. Zhang, Rev. Mod. Phys. $\mathbf{83}$, 1057 (2011). Y. Xia, D. Qian, D. Hsieh, L. Wray, A. Pal, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature Phys. $\mathbf{5}$, 398 (2009). Y. L. Chen, J. G. Analytis, J.-H. Chu, Z. K. Liu, S.-K. Mo, X. L. Qi, H. J. 
Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, and Z.-X. Shen, Science $\mathbf{325}$, 178 (2009). D. Hsieh, Y. Xia, D. Qian, L. Wray, J. H. Dil, F. Meier, J. Osterwalder, L. Patthey, J. G. Checkelsky, N. P. Ong, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature $\mathbf{460}$, 27 (2009). P. Roushan, J. Seo, C. V. Parker, Y. S. Hor, D. Hsieh, D. Qian, A. Richardella, M.Z. Hasan, R. J. Cava, and A. Yazdani, Nature $\mathbf{460}$, 1106 (2009). Y. S. Hor, A. Richardella, P. Roushan, Y. Xia, J. G. Checkelsky, A. Yazdani, M. Z. Hasan, N. P. Ong, and R. J. Cava, Phys. Rev. B $\mathbf{79}$, 195208 (2009). A. M. Essin, J. E. Moore, and D. Vanderbilt, Phys. Rev. Lett. $\mathbf{102}$, 146805 (2009). I. Garate and M. Franz, Phys. Rev. Lett. $\mathbf{104}$, 146802 (2010). R. Yu, W. Zhang, H.-J. Zhang, S.-C. Zhang, X. Dai, and Z. Fang, Science $\mathbf{329}$, 61 (2010). X.-L. Qi, R. Li, J. Zhang, and S.-C. Zhang, Science $\mathbf{323}$, 1184 (2009). J. Linder, Y. Tanaka, T. Yokoyama, A. Sudbo, and N. Nagaosa, Phys. Rev. Lett. $\mathbf{104}$, 067001 (2010). L. Fu and C. L. Kane, Phys. Rev. Lett. $\mathbf{102}$, 216403 (2009). D.-X. Qu, Y. S. Hor, J. Xiong, R. J. Cava, and N. P. Ong, Science $\mathbf{329}$, 821 (2010). Y. S. Hor, J. G. Checkelsky, D. Qu, N. P. Ong, and R. J. Cava, arXiv:1006.0317 (2010). P. Ghaemi, R. S. K. Mong, and J. E. Moore, arXiv:cond-mat/1002.1341v1 (2010). S.-Y. Xu, L. A. Wray, Y. Xia, R. Shankar, A. Petersen, A. Fedorov, H. Lin, A. Bansil, Y. S. Hor, D. Grauer, R. J. Cava, and M. Z. Hasan, arXiv:cond-mat/1007.5111v1 (2010). L.-L. Wang, and D. D. Johnson, Phys. Rev. B, $\mathbf{83}$, 241309(R) (2011). H. Lin, T. Das, L. A. Wray, S.-Y. Xu, M. Z. Hasan, and A. Bansil, New J. Phys. $\mathbf{13}$, 095005 (2011). Z. Ren, A. A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, Phys. Rev. B $\mathbf{82}$, 241306(R) (2010). J. Xiong, Y. Luo, Y. Khoo, S. Jia, R. J. Cava, and N. P. 
Ong, arXiv:1111.6031 (2011). S. V. Eremeev, G. Landolt, T. V. Menshchikova, B. Slomski, Y. M. Koroteev, Z. S. Aliev, M. B. Babanly, J. Henk, A. Ernst, L. Patthey, A. Eich, A. A. Khajetoorians, J. Hagemeister, O. Pietzsch, J. Wiebe, R. Wiesendanger, P. M. Echenique, S. S. Tsirkin, I. R. Amiraslanov, J. H. Dil, and E. V. Chulkov, Nature Commun. $\mathbf{3}$, 635 (2012). T. Arakane, T. Sato, S. Souma, K. Kosaka, K. Nakayama, M. Komatsu, T. Takahashi, Z. Ren, K. Segawa, and Y. Ando, Nature Commun. $\mathbf{3}$, 636 (2012). K. Kuroda, H. Miyahara, M. Ye, S. V. Eremeev, Yu. M. Koroteev, E. E. Krasovskii, E. V. Chulkov, S. Hiramoto, C. Moriyoshi, Y. Kuroiwa, K. Miyamoto, T. Okuda, M. Arita, K. Shimada, H. Namatame, M. Taniguchi, Y. Ueda, and A. Kimura, [Phys. Rev. Lett.]{} **108**, 206803 (2012). S. Souma, K. Eto, M. Nomura, K. Nakayama, T. Sato, T. Takahashi, K. Segawa, and Y. Ando, Phys. Rev. Lett., **106**, 216803 (2011). P. Blaha, K. Schwarz, G. Madsen, D. Kvasnicka, and J. Luitz, *WIEN2k, An Augmented Plane Wave Plus Local Orbitals Program for Calculating Crystal Properties.* (Karlheinz Schwarz, Techn. University Wien, Austria, 2001). P. E. Bl$\ddot{o}$chl, Phys. Rev. B. [**50**]{}, 17953 (1994); G. Kresse and D. Joubert, Phys. Rev. B. [**59**]{}, 1758 (1999). G. Kresse and J. Hafner, Phys. Rev. B. [**48**]{}, 13115 (1993); G. Kresse and J. Furthm$\ddot{u}$ller, Comput. Mater. Sci. [**6**]{}, 15 (1996); Phys. Rev. B. [**54**]{}, 11169 (1996). J. P. Perdew, K. Burke, and M. Ernzerhof, [Phys. Rev. Lett.]{} **77**, 3865-3868 (1996). S. Jia, H. Ji, E. Climent-Pascual, M. K. Fuccillo, M. E. Charles, J. Xiong, N. P. Ong, and R. J. Cava, Phys. Rev. B $\mathbf{84}$, 235206 (2011). K. A. Agaev and S. A. Semiletov, Kristallografiya $\mathbf{10}$, 109 (1965). S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnar, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Science $\mathbf{294}$, 1488 (2001). D. Kong, Y. Chen, J. J. Cha, Q. Zhang, J. G. Analytis, K.
Lai, Z. Liu, S. S. Hong, K. J. Koski, S.-K. Mo, Z. Hussain, I. R. Fisher, Z.-X. Shen, and Y. Cui, Nature nanotech, $\mathbf{6}$, 705 (2011). P. Bayliss, American Mineralogist $\mathbf{76}$, 257 (1991). H. Ji, J. M. Allred, M. K. Fuccillo, M. E. Charles, M. Neupane, L. A. Wray, M. Z. Hasan, and R. J. Cava, Rev. B $\mathbf{85}$, 201103(R) (2012). L. F. Lundegaard, E. Makovicky, T. Boffa-Ballaran, T. Balic-Zunic, Phys. Chem. Miner. $\mathbf{32}$, 578 (2005). L. Fu, and C. L. Kane, Phys. Rev. B, **76**, 045302 (2007). L. Fu, Phys. Rev. Lett., **103**, 266801 (2009). S.-Y. Xu, L. A. Wray, Y. Xia, F. von Rohr, Y. S. Hor, J. H. Dil, F. Meier, B. Slomski, J. Osterwalder, M. Neupane, H. Lin, A. Bansil, A. Fedorov, R. J. Cava, and M. Z. Hasan, arXiv:1101.3985 (2011) (unpublished).
{ "pile_set_name": "ArXiv" }
---
author:
- Dejan Lavbič and Marjan Krisper
title: Facilitating Ontology Development with Continuous Evaluation
---

> **Dejan Lavbič** and Marjan Krisper. 2010. **Facilitating Ontology Development with Continuous Evaluation**, [Informatica **(INFOR)**](https://www.mii.lt/informatica/), 21(4), pp. 533 - 552.

Abstract {#abstract .unnumbered}
========

In this paper we propose facilitating ontology development through constant evaluation of the steps in the development process. Existing methodologies for ontology development are complex and require technical knowledge that business users and developers don’t possess. By introducing an ontology completeness indicator, the developer is guided throughout the development process and constantly aided by recommendations for progressing to the next step and improving the quality of the ontology. In evaluating the ontology, several aspects are considered, from description and partition to consistency, redundancy and anomaly. The applicability of the approach is demonstrated on the Financial Instruments and Trading Strategies (FITS) ontology, with a comparison to other approaches.

Keywords {#keywords .unnumbered}
========

Ontology development methodology, ontology evaluation, ontology completeness, rapid ontology development, semantic web

Introduction
============

The adoption of Semantic Web technologies is lower than expected and is mainly limited to the academic environment; wide adoption in industry is still to come. Reasons for this can be sought both in the technologies themselves and in the development process, since the existence of verified approaches is a good indicator of maturity. As far as technologies are concerned, numerous options are available for all aspects of Semantic Web applications: languages for capturing knowledge, persisting data, inferring new knowledge, querying for knowledge etc.
In the methodological sense there is also a great variety of methodologies for ontology development available, as will be further discussed in section \[related-work\], but the simplicity of the approaches to ontology construction is another issue. Current approaches to ontology development are technically very demanding, require a long learning curve and are therefore inappropriate for developers with little technical skill and knowledge. In the majority of existing approaches an additional role of knowledge engineer is required to mediate between the developers, who hold the actual knowledge, and the ontology engineers, who encode that knowledge in one of the selected formalisms. The use of a business rules management approach [@smaizys_business_2009] seems an appropriate way to simplify the development and use of ontologies in business applications. Besides simplifying the process of ontology creation we also have to focus on the very important aspect of ontology completeness. The problem of error-free ontologies has been discussed in [@fahad_ontological_2008; @porzel_task-based_2004], where several types of errors were identified: inconsistency, incompleteness, redundancy, design anomalies etc. All of these problems have to be addressed already during the development process and not only after development has reached its final steps. In this paper we propose a Rapid Ontology Development (ROD) approach where ontology evaluation is performed during the whole development lifecycle. The idea is to enable developers to focus on the content rather than on the formalisms for encoding knowledge. Based on the recommendations, the developer can improve the ontology and eliminate errors or bad design. It is also very important that the ontology is error free before it is applied. To this end we define the ROD model, which introduces detailed steps for ontology manipulation.
The starting point was to improve existing approaches by simplifying the process and supporting the developer throughout the lifecycle with continuous evaluation, not concluding with the developed ontology but enabling its use in various scenarios. By doing that we try to achieve two things:

- guide the developer through the process of ontology construction and
- improve the quality of the developed ontology.

The remainder of the paper is structured as follows. In section \[related-work\] the state of the art is presented, with a review of existing methodologies for ontology development and of approaches for ontology evaluation. After highlighting some drawbacks of current approaches, section \[ROD\] presents the ROD approach. A short overview of the process and its stages is given, with an emphasis on the ontology completeness indicator. The details of ontology evaluation and of the ontology completeness indicator are given in section \[indicator\], where all the evaluated components (description, partition, redundancy and anomaly) are presented. Section \[evaluation\] evaluates and discusses the proposed approach according to the results obtained in the experiment on the **Financial Instruments and Trading Strategies (FITS)** ontology. Finally, in section \[conclusion-and-future-work\], conclusions and future work are given.

Related work
============

Review of related approaches
----------------------------

An ontology is a vocabulary that is used for describing and presenting a domain, together with the meaning of that vocabulary. The definition of ontology can be viewed from several aspects.
These range from taxonomy [@corcho_methodologies_2003; @sanjuan_text_2006; @veale_analogy-oriented_2006] as knowledge with a minimal hierarchical structure, through vocabulary [@bechhofer_thesaurus_2001; @miller_wordnet:_1995] with words and synonyms, topic maps [@dong_hyo-xtm:_2004; @park_xml_2002] supporting traversal through large amounts of data, and conceptual models [@jovanovic_achieving_2005; @mylopoulos_information_1998] that capture more complex knowledge, to logic theory [@corcho_methodologies_2003; @dzemyda_optimization_2009; @waterson_verifying_1999] with very complex and consistent knowledge. Ontologies are used for various purposes such as natural language processing [@staab_system_1999], knowledge management [@davies_semantic_2006], information extraction [@wiederhold_mediators_1992], intelligent search engines [@heflin_searching_2000], digital libraries [@kesseler_schema_1996], business process modeling [@brambilla_software_2006; @ciuksys_reusing_2007; @magdalenic_dynamic_2009] etc. While the use of ontologies was primarily confined to academia, the situation is now improving with the advent of several methodologies for ontology manipulation. Existing methodologies for ontology development in general try to define the activities for ontology management, the activities for ontology development and the support activities. Several methodologies exist for ontology manipulation and they are briefly presented in the following paragraphs. CommonKADS [@schreiber_knowledge_1999] is in fact not a methodology for ontology development, but is oriented towards knowledge management in information systems, covering the analysis, design and implementation of knowledge. CommonKADS puts an emphasis on the early stages of software development for knowledge management. Enterprise Ontology [@uschold_towards_1995] recommends three simple steps: definition of intention; capturing concepts, mutual relations and expressions based on concepts and relations; and persisting the ontology in one of the available languages.
This methodology is the groundwork for many other approaches and is also used in several ontology editors. METHONTOLOGY [@fernandez-lopez_building_1999] is a methodology for creating ontologies from scratch or by reusing existing ontologies. The framework enables building ontologies at the conceptual level, and this approach is very close to prototyping. Another approach is TOVE [@uschold_ontologies:_1996], where the authors suggest using questionnaires that describe the questions to which the ontology should give answers. That can be very useful in environments where domain experts have very little expertise in knowledge modeling. The authors of HCONE [@kotis_human_2003] present a decentralized approach to ontology development by introducing regions where the ontology is saved during its lifecycle. The OTK Methodology [@sure_methodology_2003] defines the steps of ontology development in detail and introduces two processes – Knowledge Meta Process and Knowledge Process. The steps are also supported by a tool. UPON [@nicola_building_2005] is an interesting methodology that is based on the Unified Software Development Process and is supported by the UML language, but it has not yet been fully tested. The latest proposal is DILIGENT [@davies_semantic_2006], which focuses on different approaches to distributed ontology development. From the information systems development point of view there are several methodologies that share ideas similar to those found in ontology development. The Rapid Ontology Development model presented in this paper mainly follows examples from blended, object-oriented, rapid development and people-oriented methodologies [@avison_information_2006]. Among blended methodologies, which are formed from (the best) parts of other methodologies, the most influential for our approach was Information Engineering [@martin_information_1981], which is viewed as a framework within which a variety of techniques are used to develop good quality information systems in an efficient way.
Among object-oriented approaches there are two representatives – Object-Oriented Analysis (OOA; @booch_object_1993) and the Rational Unified Process (RUP; @jacobson_unified_1999). Especially OOA, with its five major activities – finding classes and objects, identifying structures, identifying subjects, defining attributes and defining services – had a profound effect on our research, and it was extended with support for the design and implementation phases that are not included in OOA. The idea of rapid development methodologies is closely related to the ROD approach, which addresses the issue of rapid ontology development on the basis of rapid development methodologies for information systems. James Martin’s RAD [@martin_rapid_1991] is based on well known techniques and tools but adopts a prototyping approach and focuses on obtaining commitment from the business users. Another rapid approach is the Dynamic Systems Development Method (DSDM; @consortium_dsdm_2005), which has some similarities with Extreme Programming (XP; @beck_extreme_2004). XP attempts to support quicker development of software, particularly for small and medium-sized applications. Compared to the techniques involved in information systems development, ontology development in the ROD approach is mainly based on *holistic techniques* (rich pictures, conceptual models, cognitive mapping), *data techniques* (entity modeling, normalization), *process techniques* (decision trees, decision tables, structured English) and *project management techniques* (estimation techniques). The ROD approach extends the reviewed methodologies by simplifying the development steps and introducing continuous evaluation of the developed ontology. This is achieved by the ontology completeness indicator, which is based on approaches for ontology evaluation.
Based on existing reviews in [@brank_survey_2005; @gangemi_modelling_2006; @gomez-perez_evaluation_1999; @hartmann_d1.2.3_2004] we classify evaluation approaches into the following categories:

- comparing the ontology to a *“golden standard”* [@maedche_measuring_2002],
- using the ontology in an *application* and evaluating the results [@porzel_task-based_2004],
- comparing with a source of data about the *domain to be covered* by the ontology [@brewster_data_2004] and
- *evaluation* done *by humans* [@lozano-tello_ontometric:_2004; @noy_user_2005].

Usually it is more practical to evaluate the different levels of the ontology separately than to try to evaluate the ontology directly as a whole. Therefore, a classification of evaluation approaches based on the level of evaluation is also feasible and is as follows: lexical, vocabulary or data layer; hierarchy or taxonomy; other semantic relations; context or application level; syntactic level; structure, architecture and design. Prior to the application of ontologies we have to assure that they are free of errors. The research performed by @fahad_ontological_2008 resulted in a classification of ontology errors and their consequences. These errors can be divided into inconsistency errors, incompleteness errors, redundancy errors and design anomalies.

Problem and proposal for solution
---------------------------------

The review of existing approaches for ontology development in this section pointed out several drawbacks. The vast majority of ontology development methodologies define a complex process that demands a long learning curve. The required technical knowledge is very high, making ontology development very difficult for non-technically oriented developers. Among methodologies for ontology development there is a lack of the rapid approaches that can be found in traditional software development. On the other hand, methodologies for traditional software development also fail to provide sufficient support for ontology development.
This fact is confirmed by the advent of the several ontology development methodologies presented at the beginning of this section. The majority of reviewed methodologies also include very limited support for the evaluation of developed ontologies. Where such support exists, it is limited to the latter stages of development and not included throughout the process. This paper introduces a novel approach to ontology modeling based on good practices and existing approaches [@allemang_semantic_2008; @cardoso_semantic_2007; @fahad_ontological_2008; @fernandez-lopez_building_1999; @sure_methodology_2003; @uschold_towards_1995] while trying to minimize the need to know the formal syntax required for codifying the ontology, therefore bringing ontology modeling closer to business users, who are the actual knowledge holders. Based on the findings from the comparison of existing methodologies for ontology development and of several evaluation approaches, it has been noted that no approach exists that constantly evaluates the ontology during its lifecycle. The idea of the proposed ROD approach with ontology completeness evaluation, presented in section \[ROD\], is to create a feedback loop between the developed ontology and its completeness by introducing an indicator for completeness. With the ROD approach detailed knowledge of the development methodology is also not required, as the process guides developers through the steps defined in the methodology. By extending existing approaches with constant evaluation, the quality of the final artifact is improved and the development time is minimized, as discussed in section \[indicator\].

Rapid Ontology Development {#ROD}
==========================

Introduction to ROD process
---------------------------

The process for ontology development ROD (Rapid Ontology Development) that we propose is based on existing approaches and methodologies (see section \[related-work\]) but is enhanced with continuous ontology evaluation throughout the complete process.
It is targeted at domain users that are not familiar with the technical background of constructing ontologies. Developers start by capturing concepts, mutual relations and expressions based on concepts and relations. This task can include reusing elements from various resources or defining them from scratch. When the model is defined, the schematic part of the ontology has to be bound to existing instances of that vocabulary. This includes data from relational databases, text files, other ontologies etc. The last step in bringing the ontology into use is creating functional components for employment in other systems.

ROD stages {#ROD-stages}
----------

The ROD development process can be divided into the following stages: *pre-development*, *development* and *post-development*, as depicted in Figure \[fig:ROD-process\]. Every stage delivers a specific output, with the common goal of creating a functional component based on the ontology that can be used in several systems and scenarios. The output of the pre-development stage is a feasibility study that is used in the subsequent development stage to construct the essential model definition. The latter artifact represents the schema of the problem domain, which has to be coupled with instances from the real world. This is conducted in the last stage, post-development, which produces a functional component for usage in various systems.

![Process of rapid ontology development (ROD)[]{data-label="fig:ROD-process"}](img/ROD-process){width="0.7\linewidth"}

The role of constant evaluation, as depicted in Figure \[fig:ROD-process\], is to guide the developer in progressing through the steps of the ROD process, or it can be used independently of the ROD process. In the latter case, based on a semantic review of the ontology, enhancements for ontology improvement are available to the developer in the form of multiple improvement actions, sorted by their impact. Besides the actions and their impacts, a detailed explanation of each action is also available (see Figure \[fig:OC-GUI\]).
![Display of ontology completeness (OC) results and improvement recommendations[]{data-label="fig:OC-GUI"}](img/OC-GUI){width="0.3\linewidth"}

When following the ROD approach, while the developer is in a certain step of the process, the OC measurement is adapted to that step by a redefinition of the weights used in the calculation (see Figure \[fig:OC-weights\] for the distribution of weights by ROD steps). For example, in step 2.1 of the ROD process, where business vocabulary acquisition is performed, there is no need for semantic checks like instance redundancy, lazy concept existence or inverse property existence; the emphasis is rather on the description of the TBox and RBox components and on path existence between concepts. When the OC measurement reaches a threshold (e.g., $80\%$) the developer can progress to the following step (see Figure \[fig:OC-calculation\]). The adapted OC value for every phase is calculated on-the-fly, and whenever the threshold value is crossed a recommendation for progressing to the next step is generated. This way the developer is aided in progressing through the steps of the ROD process, from business vocabulary acquisition to functional component composition. In case the ontology already exists, with the OC measure we can place the completeness of the ontology within the ROD process and start improving the ontology in the suggested phase of development (e.g., the ontology already has a taxonomy defined, so we can continue with step 2.4, where ad hoc binary relations identification takes place).
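The re-weighting and threshold mechanism described above can be sketched in a few lines. This is only an illustration: the phase names, component weights, check scores and the $80\%$ threshold below are invented example values, not the actual ROD configuration.

```python
# Illustrative sketch of phase-adapted OC: the same semantic-check scores
# are re-weighted for each ROD step and compared against a threshold.
# All names, weights and scores here are invented for the example.

THRESHOLD = 0.8

# Per-phase weights over the OC components (each row sums to 1).
PHASE_WEIGHTS = {
    "2.1 vocabulary acquisition": {"description": 0.7, "partition": 0.3},
    "2.3 taxonomy definition":    {"description": 0.4, "partition": 0.3,
                                   "redundancy": 0.3},
}

def adapted_oc(scores, phase):
    """Weighted OC value for one phase of the ROD process."""
    weights = PHASE_WEIGHTS[phase]
    return sum(w * scores.get(check, 0.0) for check, w in weights.items())

def recommend(scores, phase):
    """Return the adapted OC and a progression recommendation."""
    oc = adapted_oc(scores, phase)
    action = "progress to next step" if oc >= THRESHOLD else "keep improving"
    return oc, action

scores = {"description": 0.9, "partition": 0.8, "redundancy": 0.2}
print(recommend(scores, "2.1 vocabulary acquisition"))  # OC ≈ 0.87 → progress
```

In phase 2.3 the same scores yield an adapted OC of only $0.66$, below the threshold, so the recommendation would be to keep improving the ontology before moving on.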
Ontology evaluation and ontology completeness indicator {#indicator}
-------------------------------------------------------

![OC calculation[]{data-label="fig:OC-calculation"}](img/OC-calculation){width="0.7\linewidth"}

The **ontology completeness (OC)** indicator, used for guiding the developer in progressing through the steps of the ROD process and for ensuring the required quality level of the developed ontology, is defined as $$OC = f \left( C, P, R, I \right) \in [0, 1] \label{eq:OC}$$ where $C$ is the set of concepts, $P$ the set of properties, $R$ the set of rules and $I$ the set of instances. Based on these inputs, an output value in the interval $[0, 1]$ is calculated. The higher the value, the more complete the ontology is. OC is a weighted sum of semantic checks, where the weights are dynamically altered when traversing from one phase of the ROD process to another. OC can be further defined as $$OC = \sum_{i=1}^{n} w_i^{'} \cdot leafCondition_i \label{eq:OC-sum}$$ where $n$ is the number of leaf conditions and $leafCondition_i$ is a leaf condition, at which a semantic check is executed. For the relative weights and the leaf condition calculation the following restrictions apply: $\sum_i w_i^{'} = 1$, $\forall w_i^{'} \in [0, 1]$ and $\forall leafCondition_i \in [0, 1]$. The relative weight $w_i^{'}$ denotes the global importance of $leafCondition_i$ and depends on all weights from the leaf to the root condition. The tree of conditions in the OC calculation is depicted in Figure \[fig:OC-tree\] and contains the semantic checks that are executed against the ontology. The top level is divided into the *TBox*, *RBox* and *ABox* components. Subsequent levels are then further divided based on the ontology error classification [@fahad_ontological_2008]. The aforementioned sublevels are *description*, *partition*, *redundancy*, *consistency* and *anomaly*.
![Ontology completeness (OC) tree of conditions, semantic checks and corresponding weights[]{data-label="fig:OC-tree"}](img/OC-tree){width="0.7\linewidth"}

This proposed structure can be easily adapted and altered for custom use. Leaves in the OC calculation tree are implemented as semantic checks, while all preceding elements are aggregations with appropriate weights. The algorithm for computing the ontology completeness (OC) price is depicted in Definition \[def:OC-evaluation\], where $X$ is a condition and $w = w(X, Y)$ is the weight between condition $X$ and condition $Y$.

\[Ontology completeness evaluation algorithm\] $$\begin{aligned} &\text{' Evaluation is executed on top condition "OC components" with weight 1} \\ &\textbf{Evaluate } \boldsymbol{(X, w)} \\ &\quad price_{OC} = 0 \\ &\quad \text{mark condition } X \text{ as visited} \\ &\quad \text{if not exists sub-condition of } X \\ &\qquad \text{' Execute semantic check on leaf element} \\ &\qquad \text{return } w \cdot exec(X) \\ &\quad \text{else for all conditions } Y \text{ that are sub-conditions of } X \text{ such that } Y \text{ is not visited} \\ &\qquad \text{' Aggregate ontology evaluation prices} \\ &\qquad \text{if } w(X, Y) \neq 0 \\ &\qquad \quad price_{OC} = price_{OC} + Evaluate(Y, w(X, Y)) \\ &\quad \text{return } w \cdot price_{OC} \\ &\textbf{End}\end{aligned}$$

Each leaf condition implements a semantic check against the ontology and returns a value $leafCondition \in [0, 1]$. Figure \[fig:OC-weights\] depicts the distribution of the OC components (description, partition, redundancy, consistency and anomaly) for each individual phase of the ROD process (see section \[ROD-stages\]). In the first two phases, 2.1 and 2.2, the developer deals with business vocabulary identification and the enumeration of examples of concepts and properties. With these steps the emphasis is evidently on the description of the ontology, while partition is also taken into consideration.
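The recursive aggregation of Definition \[def:OC-evaluation\] can be sketched as follows. This is an illustrative reimplementation, not the original code: the tree encoding (nested lists of condition/weight pairs) and the toy leaf scores are assumptions made for the example.

```python
# Minimal sketch of the recursive OC evaluation from the definition above.
# Leaf conditions are callables returning a score in [0, 1]; inner nodes
# are lists of (sub_condition, weight) pairs whose weights sum to 1.

def evaluate(node, weight=1.0):
    """Return the weighted OC price of a condition node."""
    if callable(node):                      # leaf: execute the semantic check
        return weight * node()
    price = 0.0                             # inner node: aggregate sub-conditions
    for sub, w in node:
        if w != 0:
            price += evaluate(sub, w)
    return weight * price

# Toy tree: one aggregate whose description check scores 0.8 (weight 0.6)
# and whose partition check scores 0.5 (weight 0.4).
oc_tree = [
    ([(lambda: 0.8, 0.6), (lambda: 0.5, 0.4)], 1.0),
]
print(round(evaluate(oc_tree), 2))  # 0.6*0.8 + 0.4*0.5 = 0.68
```

Cycle handling (the "visited" marks in the definition) is omitted here because a well-formed condition tree has no shared sub-conditions.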
The importance of the description and partition components then decreases in the latter steps, but it still remains above average. In step 2.3 all the other components are introduced (redundancy, consistency and anomaly), because the developer is requested to define the taxonomy of the schematic part of the ontology. While progressing to the latter steps of the ROD process the emphasis shifts to the detailed description of classes and properties, and complex restrictions and rules are also added. At this stage redundancy becomes more important. This distribution of weights remains similarly balanced throughout the last steps, 2.5 and 2.6, of the development phase. In the post-development phase, when functional component composition is performed, the ontology completeness calculation is mainly involved in redundancy, description and anomaly checking. The individual OC components are presented in detail in the following subsections.

![Impact of weights on OC sublevels in ROD process[]{data-label="fig:OC-weights"}](img/OC-weights){width="0.7\linewidth"}

### Description

The description of the ontology’s components is a very important aspect, mainly in the early stages of ontology development. As far as the OC calculation is concerned, several components are considered:

- *existence of entities* (classes and properties) and *instances*,
- (multiple) *natural language descriptions* of the TBox and RBox components and
- *formal descriptions* of concepts and instances.

The notion of existence of entities is very straightforward: if the ontology doesn’t contain any entities, then we have no artifacts to work with. The developer is therefore encouraged by this metric to first define the schematic part of the ontology with classes and properties and then to add elements of the ABox component in the form of individuals. The next aspect is the natural language descriptions of entities.
This element is, despite its simplicity, one of the most important, due to the ability to include these descriptions in the further definition of complex axioms and rules [@vasilecas_towards_2009]. Following the business rules approach [@vasilecas_practical_2008] it is feasible to create templates for entering these data on-the-fly by employing the natural descriptions of entities. The developer is encouraged to describe all entities (classes and properties) in natural language using readable labels (e.g., `rdfs:label` and `rdfs:comment`) that don’t add to the meaning of the captured problem domain but greatly improve the human readability of the defined ontology. When constructing an ontology it is always required to provide labels and descriptions in English, but the use of other languages is also recommended to improve the employment of the ontology. The last aspect of ontology description is the formal description of the TBox and ABox components, which concerns concepts and instances. When describing classes with properties, ontologists tend to forget to define domain and range values. This is evaluated for the schematic part of the ontology, while for instances all required axioms defined in the TBox or ABox are considered; ontologists tend to leave out required details of instances (e.g., cardinality etc.).

### Partition

Partition errors consist of omitting important axioms or information about the classification of a concept, thereby reducing the reasoning power and inference mechanisms. In the OC calculation several components are considered:

- *common classes* and *instances*,
- *external instances* of the ABox component,
- *connectivity of concepts* of the TBox component and
- *hierarchy of entities*.

The notion of common classes deals with the problem of defining a class that is a sub-class of classes that are disjoint. The solution is to check, for every class $C_i$, whether there exist disjoint super-classes $C_j$ and $C_k$.
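This common-classes check can be sketched as follows. The implementation and the toy taxonomy (class names, dictionary encoding of `rdfs:subClassOf` edges and disjointness pairs) are illustrative assumptions, not part of the ROD tool.

```python
# Sketch of the "common classes" partition check: flag every class that
# is (directly or transitively) a sub-class of two disjoint classes.
# The toy taxonomy below is invented for the example.

def superclasses(cls, parents):
    """All direct and transitive super-classes of cls."""
    seen, stack = set(), [cls]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def common_class_errors(parents, disjoint):
    """Classes whose super-classes contain a disjoint pair."""
    errors = []
    for cls in parents:
        sup = superclasses(cls, parents)
        for a, b in disjoint:
            if a in sup and b in sup:
                errors.append(cls)
    return errors

parents = {"Amphibian": ["Animal"], "Robot": ["Machine"],
           "RoboFrog": ["Amphibian", "Robot"]}
disjoint = [("Animal", "Machine")]
print(common_class_errors(parents, disjoint))  # ['RoboFrog']
```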
The situation is similar with common instances, where an instance can be a member of disjoint classes. When decomposing classes into a sub-class hierarchy it is often the case that a super-class instance is not a member of any sub-class. In that case we deal with the problem of external instances. The solution is to check, for every class $C_i$, whether there exists an instance that is a member of $C_i$ but not a member of any class in the set of its sub-classes. The aspect of connectivity of concepts deals with the ontology as a whole, not allowing isolated parts that are mutually disconnected. The first semantic check deals with the existence of inverse properties: to enable full traversal among the classes of the ontology, it is important that every object property has an inverse property defined. The second semantic check deals with the existence of paths between concepts. The ontology is presented as an undirected graph $G = (V, E)$ and we try to identify maximal disconnected sub-graphs. The last aspect of ontology completeness concerning partition deals with the hierarchy of entities. We introduce a data-oriented approach to the definition of the hierarchy of entities, in which technical knowledge from the domain user is not required. It is based on the requirement that for every class and property defined, the ontologist is requested to insert also a few instances (see the preliminary steps of the ROD process introduced in section \[ROD-stages\]). After this requirement is met, a set of competency questions is presented to the domain user, and the result is automatically defined hierarchy axioms (e.g., `rdfs:subClassOf`, `owl:equivalentClass`, `owl:disjointWith`, `rdfs:subPropertyOf` and `rdfs:equivalentProperty`). The approach for disjoint class recommendation is depicted in Definition \[def:disjoint-axiom\], while the approach for the other hierarchy axioms is analogous.
\[Recommend disjoint axiom between classes\] $$\begin{aligned} &\textbf{recommendDisjointWithClasses} \\ &\quad \tau_{\subseteq}^{sibling} = \{ \} \leftarrow \text{ Set of all sub-class pairs } (C, D) \\ &\quad Q_n \leftarrow \text{ Competency questions} \\ &\quad disjointClassRecommend = \{ \} \\ &\quad \text{for each } C_i \in TBox \\ &\qquad \text{add all sub-class pairs of class } C_i \text{ to } \tau_{\subseteq}^{sibling} \\ &\qquad \text{for each sub-class pair } (C_j, C_k) \in TBox \text{ where } C_j \subseteq C_i \wedge C_k \subseteq C_i \wedge C_j \neq C_k \\ &\qquad \quad \text{if } \exists i (C_j), i (C_k) \in ABox : ( \neg Q_1 (C_j, C_k) \wedge \neg Q_3 (C_j, C_k) ) \text{ then} \\ &\qquad \qquad \text{if } C_j \cap C_k \neq \{ \} \text{ then} \\ &\qquad \qquad \quad disjointClassRecommend = disjointClassRecommend \cup (C_j, C_k) \\ &\qquad \qquad \text{end if} \\ &\qquad \quad \text{end if} \\ &\qquad \text{end for} \\ &\quad \text{end for} \\ &\quad price = 1 - \frac{ \left | disjointClassRecommend \right | } { \left | \tau_{\subseteq}^{sibling} \right | } \\ &\quad \text{return } \boldsymbol{disjointClassRecommend} \text{ and } \boldsymbol{price} \\ &\textbf{end}\end{aligned}$$

Using this recommendation approach, domain users can define axioms in the ontology without technical knowledge of the ontology language, because with the data-driven approach (using instances) and competency questions the OC calculation indicator does that automatically.

### Redundancy

Redundancy occurs when particular information is inferred more than once from entities and instances. When calculating OC we take into consideration the following components:

- *identical formal definition* and
- *redundancy in the hierarchy of entities*.

When considering identical formal definitions, all components (TBox, RBox and ABox) have to be checked. For every entity or instance $A_i$ all its axioms are considered.
If the set of axioms of an entity or instance $A_i$ is identical to the set of axioms of an entity or instance $A_j$ and $A_i \neq A_j$, then the entities or instances $A_i$ and $A_j$ have identical formal definitions. This signifies that $A_i$ and $A_j$ describe the same concept under different names (synonyms). Another common redundancy issue in ontologies is redundancy in the hierarchy. This includes sub-class, sub-property and instance redundancy. Redundancy in the hierarchy occurs when the ontologist specifies classes, properties or instances whose hierarchy relations (`rdfs:subClassOf`, `rdfs:subPropertyOf` and `owl:instanceOf`) are asserted both directly and indirectly.

### Consistency

In consistency checking of the developed ontology the emphasis is on finding circulatory errors in the TBox component of the ontology. A circulatory error occurs when a class is defined as a sub-class or super-class of itself at any level of the hierarchy in the ontology. Such errors can occur with distance $0$, $1$ or $n$, depending upon the number of relations traversed down the hierarchy of concepts before arriving back at the concept from which the traversal started. The same also applies to properties. To evaluate the quality of the ontology regarding circulatory errors, the ontology is viewed as a graph $G = (V, E)$, where $V$ is the set of classes and $E$ the set of `rdfs:subClassOf` relations.

### Anomaly

Design anomalies prohibit simplicity and maintainability of taxonomic structures within the ontology. They don’t cause inaccurate reasoning about concepts, but point to problematic and badly designed areas in the ontology. Identification and removal of these anomalies is necessary for improving the usability and maintainability of the ontology. As far as the OC calculation is concerned, several components are considered:

- *chains of inheritance* in the TBox component,
- *property clumps* and
- *lazy entities* (classes and properties).
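Before turning to the individual anomaly checks, the circulatory-error check described under Consistency can be sketched as a cycle search over the `rdfs:subClassOf` graph. The depth-first implementation and the edge list below are illustrative assumptions, not the paper's code.

```python
# Sketch of the circulatory-error check: view the TBox as a directed
# graph of rdfs:subClassOf edges and detect cycles with a DFS.
# The example edge list is invented.

def find_cycles(edges):
    """Return, sorted, the classes that lie on a subClassOf cycle."""
    graph = {}
    for child, parent in edges:
        graph.setdefault(child, []).append(parent)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    on_cycle = set()

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for nxt in graph.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:      # back edge: cycle found
                on_cycle.update(path[path.index(nxt):])
            elif color.get(nxt, WHITE) == WHITE:
                dfs(nxt, path)
        path.pop()
        color[node] = BLACK

    for node in graph:
        if color.get(node, WHITE) == WHITE:
            dfs(node, [])
    return sorted(on_cycle)

edges = [("A", "B"), ("B", "C"), ("C", "A"), ("D", "A")]
print(find_cycles(edges))  # ['A', 'B', 'C']
```

A cycle of length one (a class asserted as its own sub-class) is the distance-$0$ case mentioned above; longer cycles correspond to distances $1$ and $n$.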
The notion of a chain of inheritance concerns the class hierarchy, where the developer can declare classes `rdfs:subClassOf` other classes up to any level. When such a hierarchy of inheritance is long enough and none of the intermediate classes carries any description beyond its inherited child, the ontology suffers from a chain of inheritance. The algorithm for finding and eliminating chains of inheritance is depicted in Definition \[def:chain-of-inheritance\]. \[Find chain of inheritance\] $$\begin{aligned} &\textbf{findChainOfInheritance} \\ &\quad price = 1 \\ &\quad axiom(C) = [ type, entity, value ] \leftarrow \text{ Axiom of class C} \\ &\quad A(C) = \forall axiom(C) : entity = C \leftarrow \text{ Set of asserted axioms of class C} \\ &\quad A_{\subseteq}^{-} \leftarrow \text{ Set of asserted axioms of class } C \text{ without rdfs:subClassOf axiom} \\ &\quad chainOfInheritance = \{ \} \\ &\quad \text{while } \exists C_i, C_j \in TBox \wedge \exists C_1, C_2, \ldots, C_n \in TBox : (C_j \subseteq C_n \subseteq C_{n-1} \subseteq \ldots \subseteq C_2 \subseteq C_1 \subseteq C_i) \wedge \\ &\qquad ( \forall C_1, C_2, \ldots, C_n : \left | superClass(C_n) \right | = 1 \wedge A_{\subseteq}^{-} (C_n) = \{ \} ) \wedge \left | A_{\subseteq}^{-} (C_i) \right | > 0 \wedge \left | A_{\subseteq}^{-} (C_j) \right | > 0 \text{ then} \\ &\qquad \quad price = price - \frac{n}{n_{\subseteq}^{direct}} \\ &\qquad \quad chainOfInheritance = chainOfInheritance \cup \{ C_i, C_j, \{ C_1, C_2, \ldots, C_n \} \} \\ &\quad \text{end while} \\ &\quad \text{return } \boldsymbol{chainOfInheritance} \text{ and } \boldsymbol{price} \\ &\textbf{end}\end{aligned}$$ The next design anomaly is property clumps. This problem occurs when ontologists use repeated groups of properties in different class definitions. Such a group should be replaced by an abstract concept composing those properties in all class definitions where the clump is used.
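One simplified way to detect such clumps is a pairwise approximation of the complete-bipartite-subgraph search formalised in Definition \[def:property-clumps\] below: flag any group of at least two properties shared verbatim by at least two classes (a sketch with illustrative names, not the algorithm's exact selection rule):

```python
# Find property clumps: groups of properties repeated verbatim across
# several class definitions; each clump is a candidate for extraction
# into a shared abstract class.
from itertools import combinations

def find_property_clumps(class_props, min_props=2, min_classes=2):
    """class_props: dict class -> set of its (datatype/object) properties."""
    clumps = {}
    for a, b in combinations(class_props, 2):
        shared = frozenset(class_props[a] & class_props[b])
        if len(shared) >= min_props:
            # every class whose definition contains the whole shared group
            users = {c for c, ps in class_props.items() if shared <= ps}
            if len(users) >= min_classes:
                clumps[shared] = users
    return clumps
```

In the FITS setting, for instance, `price` and `volume` appearing in every instrument class would be reported as one clump, suggesting an abstract shared concept (the property names here are hypothetical).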
To identify property clumps, the approach depicted in Definition \[def:property-clumps\] is used. \[Find property clumps\] $$\begin{aligned} &\textbf{findPropertyClumps} \\ &\quad price \leftarrow 1 \\ &\quad n_R \leftarrow \text{ Number of properties (datatype and object)} \\ &\quad V \leftarrow \text{ Classes and properties} \\ &\quad E \leftarrow \text{ Links between classes and properties} \\ &\quad propertyClumps = \{ \} \\ &\quad \text{while exist complete bipartite sub-graph } K_{m,n}^{'} \text{ of graph } G(V,E) \\ &\qquad \text{select } K_{m,n}^{''} \text{ from } K_{m,n}^{'} \text{, where } \max (\frac{m^{''} \cdot n^{''}}{m^{''} + n^{''}}) \\ &\qquad propertyClumps = propertyClumps \cup K_{m,n}^{''} \\ &\qquad \text{remove all edges from } G(V, E) \text{ that appear in } K_{m,n}^{''} \\ &\qquad price = price - \frac{ m^{''} \cdot n^{''} - (m^{''} + n^{''}) }{n_R} \\ &\quad \text{end while} \\ &\quad \text{return } \boldsymbol{propertyClumps} \text{ and } \boldsymbol{price} \\ &\textbf{end}\end{aligned}$$ The last design anomaly is lazy entities: a lazy entity is a leaf class or property in the taxonomy that never appears in the application and does not have any instances. Eliminating this problem is quite straightforward; it just requires checking all leaf entities and verifying whether they contain any instances. Detected lazy entities should be removed or generalized, or instances should be inserted. Evaluation ========== Method ------ The ROD process was evaluated on the Financial Instruments and Trading Strategies (FITS) ontology that is depicted in Figure \[fig:FITS-ontology\]. ![Financial instruments and trading strategies (FITS)[]{data-label="fig:FITS-ontology"}](img/FITS-ontology){width="0.8\linewidth"} When building the aforementioned ontology, one of the requirements was to follow the Semantic Web mantra of achieving the highest possible level of reuse.
Therefore the main building blocks of the FITS ontology are the common concepts about financial instruments. Furthermore, every source of data (e.g., quotes from Yahoo! Finance in the form of CSV files and direct Web access, the AmiBroker trading program format, etc.) is encapsulated in an ontology of its own and integrated into the FITS ontology. Within every source of data the developer can select which financial instruments he is interested in (e.g., `GOOG`, `AAPL`, `PCX`, `KRKG`, etc.). The last and most important component is the set of financial trading strategies that developers can define. Every strategy was defined in its own ontology (e.g., `FI-Strategy-Simple`, `FI-Strategy-SMA`, `FI-Strategy-Japanese`, etc.). Another requirement was to enable open integration of strategies, so that a developer can select best practices from several other developers and add his own modifications. Two different approaches to constructing the ontology and using it in the aforementioned use case were compared: the rapid ontology development (ROD) approach and an ad-hoc approach to ontology development, based on the existing methodologies CommonKADS, OTK and METHONTOLOGY. With the ROD approach, the proposed method was used with the tools IntelliOnto and Protégé. The entire development process was monitored by iteration, where the ontology completeness price and the number of ontology elements (classes, properties and axioms with rules) were tracked. At the end, the results included the developed ontology, a functional component and information about the development process by iteration. The final version of the ontology was reviewed by a domain expert, who confirmed its adequacy. At the implementation level the ontology was expected to contain about $250$ to $350$ axioms in its schematic part and numerous instances from various sources.
Results and Discussion ---------------------- The process of creating an ontology and exporting it as a functional component was evaluated on the FITS ontology; the results are depicted in Figures \[fig:OC-assessment-ROD\] and \[fig:OC-assessment-ad-hoc\]. The charts show the ontology completeness price and the number of ontology elements across iterations of the process. ![OC assessment and number of ontology elements through iterations and phases of ROD process[]{data-label="fig:OC-assessment-ROD"}](img/OC-assessment-ROD){width="1\linewidth"} ![OC assessment and number of ontology elements through iterations of ad-hoc development process[]{data-label="fig:OC-assessment-ad-hoc"}](img/OC-assessment-ad-hoc){width="1\linewidth"} Comparing ROD to the ad-hoc approach, the following conclusions can be drawn: - the number of iterations needed to develop the required functional component using the ROD approach $(30)$ is lower than with the ad-hoc approach $(37)$, i.e. $23\%$ fewer iterations; - the ontology developed with the ROD approach is more complete and more appropriate for use throughout the development process than with the ad-hoc approach, due to continuous evaluation and immediate alerts to developers. During ontology construction based on the ROD approach, the developer was continuously supported by ontology evaluation and by recommendations for progressing to the next steps. When the developer entered a phase and started performing the tasks associated with it, ontology completeness was evaluated as depicted in Figure \[fig:OC-GUI\]. While the OC was below the threshold value, the developer followed instructions for improving the ontology as depicted in Figure \[fig:OC-calculation\]. The results of the OC evaluation are available in a simple view that displays basic statistics about the ontology (number of concepts, properties, rules, individuals, etc.), a progress bar depicting completeness, and details about the evaluation, improvement recommendations and the history of changes.
The core element is the progress bar, which denotes how complete the ontology is, accompanied by a percentage value. Below it are recommendations for ontology improvement and their gains (e.g., remove circulatory errors $(+10\%)$, describe concepts in natural language $(+8\%)$, connect concepts $(+7\%)$, etc.). When an improvement is selected (e.g., remove circulatory errors), its details are displayed (gain, task and details). The improvement and the planned actions are also clearly depicted graphically on a radar chart (see Figure \[fig:OC-GUI\]). The shaded area with strong border lines presents the current situation, while the red dot shows the TO-BE situation if the selected improvement is followed. When the OC price crosses the threshold value ($80\%$ in this experiment), a recommendation to progress to a new phase is generated. In our example, for instance, the recommendation to progress from phase 2.5 to phase 2.6 was generated in the 20th iteration with an OC value of $91.3\%$, while in the 19th iteration the OC value was $76.5\%$. Figure \[fig:OC-assessment-ROD\] displays the ontology completeness price and the number of ontology elements. While progressing through steps and phases, the number of ontology elements grows constantly. The OC price, on the other hand, fluctuates: it increases until the threshold to progress to the next phase is reached, and decreases when a new phase is entered. Based on recommendations from the system, the developer improves the ontology and the OC price increases again. With the introduction of OC, the steps of ontology development are constantly measured, enabling developers to focus on content rather than technical details (e.g., language syntax, best modeling approach, etc.). Conclusions and Future work {#conclusion-and-future-work} =========================== Current methodologies and approaches for ontology development require very experienced users and developers, while the ROD approach proposed here is more suitable for less technically oriented users.
With the constant evaluation of the developed ontology introduced in this approach, developers get a tool for constructing ontologies with several advantages: - the technical knowledge required for ontology modeling is decreased, - the process of ontology modeling does not end with the last successful iteration, but continues with the post-development activities of using the ontology as a functional component in several scenarios, and - the developing ontology is continuously evaluated, with recommendations for improvement. In ontology evaluation several components are considered: description, partition, redundancy, consistency and anomaly. The description of the ontology's components is a very important aspect, mainly in the early stages of ontology development, and covers the existence of entities, natural language descriptions and formal descriptions. This data is furthermore used for advanced axiom construction in later stages. Partition errors concern the omission of important axioms and can take the form of common classes, external instances, hierarchies of entities, etc. Redundancy concerns information being inferred more than once and includes identical formal definitions and redundancy in the hierarchy. In consistency checking the emphasis is on finding circulatory errors, while anomalies do not cause inaccurate reasoning about concepts, but point to badly designed areas in the ontology; the latter includes checking for chains of inheritance, property clumps, lazy entities, etc. It has been demonstrated on a case study from the financial trading domain that a developer can build a Semantic Web application for financial trading, based on ontologies, that consumes data from various sources and enables interoperability. The solution can easily be packed into a functional component and used in various systems.
Future work includes improving the ontology completeness indicator by adding more semantic checks, providing wider support for functional components, and creating a plug-in for the most widely used ontology editors to allow constant ontology evaluation. Another planned improvement is integration with popular social networks to enable rapid, reuse-based ontology development.
--- abstract: 'The IceCube Neutrino Observatory with its 1-km$^3$ in-ice detector and the 1-km$^2$ surface detector (IceTop) constitutes a three-dimensional cosmic ray detector well suited for general cosmic ray physics. Various measurements of cosmic ray properties, such as energy spectra, mass composition and anisotropies, have been obtained from analyses of air showers at the surface and/or atmospheric muons in the ice.' address: 'Humboldt-Universität zu Berlin and DESY' author: - 'H. Kolanoski (for the IceCube Collaboration)' bibliography: - 'ecrs\_pcr2\_hlt\_Kolanoski.bib' title: Cosmic Ray Physics with the IceCube Observatory --- Introduction ============ The IceCube Neutrino Observatory [@achterberg06; @Kolanoski_HLT_icrc2011] is a detector situated in the ice at the geographic South Pole at a depth of about 2000 m. The observatory is primarily designed to measure neutrinos from below, using the Earth as a filter to discriminate against the muon background induced by cosmic rays (neutrino results are reported elsewhere in these proceedings [@kappes_HLT_ecrs2012]). IceCube also includes an air shower array on the surface, called IceTop, extending IceCube’s capabilities for cosmic ray physics. Construction of the IceCube Neutrino Observatory was completed in December 2010. IceCube can be regarded as a cubic-kilometer scale three-dimensional cosmic ray detector, with the air showers (mainly the electromagnetic component) measured by the surface detector IceTop and the high energy muons and neutrinos measured in the ice. In particular the measurement of the electromagnetic component in IceTop in coincidence with the high energy muon bundle, originating from the first interactions in the atmosphere, has a strong sensitivity to composition. Here IceCube offers the unique possibility to clarify the cosmic ray composition and spectrum in the range between about 300 TeV and 1 EeV, including the ‘knee’ region and a possible transition from galactic to extra-galactic cosmic rays.
Detector ======== #### IceCube: The main component of the IceCube Observatory is an array of 86 strings equipped with 5160 light detectors in a volume of 1 km$^3$ at a depth between 1450m and 2450m (Fig.\[fig:I3Array\]). The nominal IceCube string spacing is 125 m on a hexagonal grid. A part of the detector, called DeepCore, is more densely instrumented resulting in a lower energy threshold. ![Left: The IceCube detector with its components DeepCore and IceTop in the final configuration (January 2011). In this paper we present data taken with the still incomplete detector. We will refer to the configuration as IC79/IT73, for example, meaning 79 strings in IceCube and 73 stations in IceTop. The final detector has the configuration IC86/IT81. Right: View of a cosmic ray event which hits IceTop and IceCube. The size of the colored spots is proportional to the signal in the DOMs, the colors encode the signal times, separately for IceCube and IceTop. []{data-label="fig:I3Array"}](I3Array_vector_Jan2011_modHK_red "fig:"){width="62.00000%"}![Left: The IceCube detector with its components DeepCore and IceTop in the final configuration (January 2011). In this paper we present data taken with the still incomplete detector. We will refer to the configuration as IC79/IT73, for example, meaning 79 strings in IceCube and 73 stations in IceTop. The final detector has the configuration IC86/IT81. Right: View of a cosmic ray event which hits IceTop and IceCube. The size of the colored spots is proportional to the signal in the DOMs, the colors encode the signal times, separately for IceCube and IceTop. []{data-label="fig:I3Array"}](BigEvent.pdf "fig:"){width="37.00000%"} Each string, except those of DeepCore, is equipped with 60 light detectors, called ‘Digital Optical Modules’ (DOMs), each containing a $10''$ photo multiplier tube (PMT) to record the Cherenkov light of charged particles traversing the ice. 
In addition, a DOM houses complex electronic circuitry supplying signal digitisation, readout, triggering, calibration, data transfer and various control functions. The most important feature of the DOM electronics is the recording of the analog waveforms in $3.3{\,\mathrm{ns}}$ wide bins for a duration of $422{\,\mathrm{ns}}$. With a coarser binning a ‘fast ADC’ extends the time range to 6.4$\mu$s. #### IceTop: The 1-km$^2$ IceTop air shower array [@ITDet-IceCube:2012nn] is located above IceCube at a height of 2835 m above sea level, corresponding to an atmospheric depth of about 680 g/cm$^2$. It consists of 162 ice Cherenkov tanks, placed at 81 stations mostly near the IceCube strings (Fig.\[fig:I3Array\]). In the center of the array, a denser station distribution forms an in-fill array with a lower energy threshold (about 100TeV). Each station comprises two cylindrical tanks, 10 m apart, with an inner diameter of $1.82{\,\mathrm{m}}$ and filled with ice to a height of $90{\,\mathrm{cm}}$. Each tank is equipped with two DOMs which are operated at different PMT gains to cover linearly a dynamic range of about $10^5$ with a sensitivity to a single photoelectron (the thresholds, however, are around 20 photoelectrons). DOMs, electronics and readout scheme are the same as for the in-ice detector. Cosmic Ray spectrum {#sec:spectrum} =================== ![First evaluation of one year of data taken with the 73-station configuration of IceTop in 2010. The events were required to have more than 5 stations and zenith angles in the range $\cos\theta \geq 0.8$. The spectrum is shown for the two assumptions ‘pure proton’ and ‘pure iron’ for the primary composition. []{data-label="fig:IT73-spectrum-p-Fe"}](IT26_spectrum-v2_2.pdf){width="100.00000%"} ![First evaluation of one year of data taken with the 73-station configuration of IceTop in 2010. The events were required to have more than 5 stations and zenith angles in the range $\cos\theta \geq 0.8$.
The spectrum is shown for the two assumptions ‘pure proton’ and ‘pure iron’ for the primary composition. []{data-label="fig:IT73-spectrum-p-Fe"}](FullYear_cosZenith_above_08_3and_MoreStations.pdf){width="100.00000%"} ![Composition analysis (IC40/IT40 configuration) [@ITIC40-composition_Abbasi:2012]. Left: Simulated correlation between the energy loss of the muon bundles in the ice (K70) and the shower size at the surface (S125) for proton and iron showers. The shading indicates the percentage of protons over the sum of protons and iron in a bin. The lines of constant primary energy are labeled with the logarithms of the energies. Right: IceCube result for the average logarithmic mass of primary cosmic rays compared to other measurements (references in [@ITIC40-composition_Abbasi:2012]). []{data-label="fig:composition_ITIC40"}](pretty_plot_berries_ICRC_zaxis_v2-eps-converted-to.pdf "fig:"){width="43.00000%"} ![Composition analysis (IC40/IT40 configuration) [@ITIC40-composition_Abbasi:2012]. Left: Simulated correlation between the energy loss of the muon bundles in the ice (K70) and the shower size at the surface (S125) for proton and iron showers. The shading indicates the percentage of protons over the sum of protons and iron in a bin. The lines of constant primary energy are labeled with the logarithms of the energies. Right: IceCube result for the average logarithmic mass of primary cosmic rays compared to other measurements (references in [@ITIC40-composition_Abbasi:2012]). []{data-label="fig:composition_ITIC40"}](compositionplot2-eps-converted-to.pdf "fig:"){width="54.00000%"} Figure \[fig:IT26\_spectrum-v2\_2\] shows the energy spectrum from 1 to 100 PeV [@IT26-spectrum_Abbasi:2012wn] determined from 4 months of data taken in the IT26 configuration in 2007. The relation between the measured shower size and the primary energy is mass dependent.
Good agreement of the spectra in three zenith angle ranges was found for the assumption of pure proton and a simple two-component model (see [@IT26-spectrum_Abbasi:2012wn]). For zenith angles below 30[$^{\circ}$]{}, where the mass dependence is smallest, the knee in the cosmic ray energy spectrum was observed at about 4.3PeV with the largest uncertainty coming from the composition dependence (+0.38PeV and -1.1PeV). The spectral index changes from 2.76 below the knee to 3.11 above the knee. There is an indication of a flattening of the spectrum above about 20PeV which was also seen by the experiments GAMMA [@Gamma-Garyaka:2008gs], Tunka [@Kuzmichev_HLT_ecrs2012] and KASCADE-Grande [@Haungs_HLT_ecrs2012]. A first preliminary evaluation of IceTop data from the 2010/11 season with 79 IceCube strings and 73 IceTop stations is shown in Fig. \[fig:IT73-spectrum-p-Fe\]. Cosmic ray composition {#sec:composition} ====================== As mentioned in the introduction, the combination of the in-ice detector with the surface detector offers a unique possibility to determine the spectrum and mass composition of cosmic rays from about 300 TeV to 1 EeV. The first such analysis exploiting the IceTop-IceCube correlation was done on a small data set corresponding to only one month of data taken with about a quarter of the final detector for energies from 1 to 30 PeV [@ITIC40-composition_Abbasi:2012]. From the measured input variables, shower size and muon energy loss (Fig.\[fig:composition\_ITIC40\], left), the primary energy and mass were determined using a neural network. The resulting average logarithmic mass is shown in Fig.\[fig:composition\_ITIC40\], right. These results are still dominated by systematic uncertainties, such as the energy scale of the muons in IceCube and the effects of snow accumulation on the IceTop tanks.
A similar analysis of IceTop-IceCube coincidences is in progress using the IC79/IT73 data set taken in 2010 (the energy spectrum obtained with these data is displayed in Fig. \[fig:IT73-spectrum-p-Fe\]). The studies indicate that there will be enough statistics for composition analysis up to about 1 EeV. The systematic uncertainties related to the models can be reduced by including different mass sensitive variables, like the zenith angle dependence of the shower size [@IT26-spectrum_Abbasi:2012wn], muon rates in the surface detector and shower shape variables (see discussion in [@Kolanoski_HLT_icrc2011]). PeV-gamma rays {#sec:pevgamma} ============== IceCube can efficiently distinguish PeV gamma rays from the background of cosmic rays by exploiting coincident in-ice signals as a veto. Gamma-ray air showers have a much lower muon content than cosmic ray air showers of the same energy. Candidate events are selected from those showers that lack a signal from a muon bundle in the deep ice. Results of one year of data, taken in the IC40/IT40 configuration, are shown in Fig. \[fig:pevgammarays\_limits\] [@Stijn_icrc2011]. The projected gamma-ray sensitivity of the final detector is also given. ![Limits on the diffuse gamma ray flux relative to the cosmic ray flux from a region within 10[$^{\circ}$]{} from the Galactic Plane (IC40/IT40, purple line).
The plot also includes the only other available limits, from CASA-MIA [@CASA-MIA-Chantell:1997gs], and the expected one-year sensitivity for the complete IceCube detector (blue dashed line for the whole covered energy range, blue dots for smaller energy bins).\ [ ]{}[]{data-label="fig:pevgammarays_limits"}](pevgammarays_limits.pdf){width="48.00000%"} Transient events {#sec:transients} ================ Transient events such as solar flares or gamma ray bursts, if they generate very high fluxes of low energy particles, could be observed as general rate increases above the noise level in the IceTop DOMs, even if the particles could not be detected individually. This was first demonstrated with the observation of the Dec 13, 2006 solar flare event [@Sun-flare-Abbasi08]. The detector readout has since been set up such that counting rates can be obtained at different thresholds, allowing cosmic ray spectra during a flare to be unfolded [@Takao_IT_icrc2011]. Atmospheric muons in the ice {#sec:muons_inice} ============================ In this section, analyses of atmospheric muons in IceCube (without requiring air shower detection in IceTop) are presented. The related atmospheric neutrinos, an irreducible background for the cosmic neutrino search, are discussed elsewhere in these proceedings [@kappes_HLT_ecrs2012]. Muon spectrum and composition ----------------------------- Atmospheric muon and neutrino spectra measured with IceCube probe the shower development of cosmic rays with primary energies above about 10 TeV. To penetrate to the IceCube depth and be detectable, the muons must have energies above about 500 GeV. Methods have been developed to distinguish single high-energy muons, identified by their stochastic energy loss [@Berghaus_icrc2011], from muon bundles with a rather smooth energy deposition. Figure \[fig:Muon-bundle-spectrum\] shows a cosmic ray spectrum derived from an analysis of muon bundles.
The flux is plotted against an energy estimator, $E_{mult}$, which is derived from the measured muon multiplicity in the bundles using the empirical formula $N_{\mu} \sim A^{0.23} E^{0.77}$ with iron as reference nucleus ($A=56$). The data are compared to the predictions from different models. None of the models matches particularly well, especially not at low energies (where threshold effects might cause some experimental uncertainty). The data indicate that some additional component at higher energies is required, for example the extra-galactic ‘mixed component’ in the model “Gaisser-Hillas 3a” (see Fig.1 in [@Gaisser:2012zz] and discussion in [@Berghaus_isvhecri_2012]). There is also an interesting flattening observable above about 10 PeV which might be connected to the flattening observed in the same region in the IceTop spectrum (Figs. \[fig:IT26\_spectrum-v2\_2\] and \[fig:IT73-spectrum-p-Fe\]) and by other experiments (see Section \[sec:spectrum\]). This analysis is complementary to the composition analysis and can be exploited to test the consistency of models in a wide energy range from well below the knee to above some EeV. ![Energy spectrum of primary cosmic rays obtained from muon bundles in IceCube. The energy estimator $E_{mult}$ is derived from the measured muon multiplicity in the bundles which is composition dependent. See explanation in the text.\ [ ]{} []{data-label="fig:Muon-bundle-spectrum"}](Muon-bundle-spectrum.pdf){width="48.00000%"} Muons with high transverse momenta ---------------------------------- At high energies the muons reach the in-ice detector in bundles which are, for primaries above about 1PeV, collimated within radii of the order of some 10m. Most of the muons stem from soft peripheral collisions with little transverse momentum transfer. Perturbative QCD calculations, however, predict the occurrence of muons with higher transverse momenta in some fraction of the events.
Large transverse momenta of muons show up as a lateral separation from the muon bundle. In Fig. \[fig:LS-dist\] this lateral distribution obtained from IC59 data [@LSMuons-Abbasi:2012he] is shown along with a fit by an exponential plus a power law function. The power law part indicates the onset of hard scattering in this regime of $p_T \approx 2-15$ GeV/c, as expected from perturbative QCD. However, the zenith angular dependence shown in Fig. \[fig:LS-zenith\] cannot be described by the commonly used models Sibyll and QGSJET, while it is reasonably reproduced by DPMJET. The reasons for these differences have to be understood and could have important implications for air shower simulations. ![\[fig:LS-dist\] Lateral distribution of laterally separated muons obtained from IC59 data, with a fit by an exponential plus a power law function.](scale_data_and_fit_paper_plot_simple_log.pdf){width="100.00000%"} ![\[fig:LS-zenith\] The cosine distribution of the directions of bundles with laterally separated muons compared to simulations using commonly used interaction models.](paper_plot_cos_zen_maxmuon_levels_models_l7.pdf){width="100.00000%"} Cosmic ray anisotropy {#sec:anisotropy} ===================== IceCube collects large amounts of cosmic ray muon events, about $10^{11}$ events in every year of running with the full detector. These events have been used to study cosmic ray anisotropies on multiple angular scales, for the first time in the Southern sky [@Abbasi_anisotropy:2010mf; @Abbasi_anisotropy:2011ai; @Abbasi_anisotropy:2011zka]. ![Left: Relative intensity maps for the low-energy (top) and high-energy (bottom) data sets. Right: Projections of the maps onto right ascension in the declination band -75[$^{\circ}$]{} to -25[$^{\circ}$]{}. In the projection plot, the error bars are statistical while the colored boxes indicate the systematic uncertainty.
The curves are empirical fits.[]{data-label="fig:IT-aniso-RelInt"}](IT-Aniso-RI-proj.pdf){width="100.00000%"} While the previous analyses exploited data from the in-ice muons only, now also first results from data taken with IceTop are available. The advantage of using IceTop is a better energy resolution, which allows a finer energy binning if sufficient statistics are available. Figure \[fig:IT-aniso-RelInt\] shows skymaps of relative intensities determined from IceTop data for primary energies centered around 400 TeV and 2 PeV (still with a rather coarse binning). The data were taken over 3 years in the configurations IT59, IT73, IT81. The 400 TeV data confirm the in-ice observations [@Abbasi_anisotropy:2011zka], in particular the change in phase compared to the 20-TeV observations. The new result is that at 2 PeV the anisotropy as a function of right ascension has a similar shape as at 400 TeV but apparently becomes stronger. So far, the anisotropies observed on multiple angular scales and at different energies have not been explained. Theoretical explanations such as local magnetic fields affecting the cosmic ray streams and/or nearby sources of cosmic rays are discussed. The determination of the energy dependence of the anisotropies will be crucial for validating such explanations. Conclusion ========== The presented results on cosmic ray properties, such as energy spectra, mass composition and anisotropies, demonstrate the high, partly unique, potential of the IceCube Observatory for studying cosmic ray physics. The IceCube/IceTop system covers an energy range from well below the knee to the expected onset of an extra-galactic component. References {#references .unnumbered} ==========
--- abstract: 'As part of a study of BSM corrections to leptonic decays of the $B_c$ meson, Tran et al. [@Tran:2018kuv] use the covariant confining quark model (CCQM) to estimate the matrix element of the pseudo-scalar current between the vacuum and the $B_c$ meson. We note that this matrix element can be determined using existing lattice QCD results.' author: - 'C. T. H. Davies' - 'C. McNeile' title: 'Comment on Implications of new physics in the decays $B_c \to (J/\psi,\eta_c)\tau\nu$' --- Introduction ============ The paper by Tran et al. [@Tran:2018kuv] discusses Beyond the Standard Model (BSM) contributions to leptonic and semi-leptonic decays of the $B_c$ meson. This is a very topical calculation because of the tantalizing hints of violations of lepton universality in various $B$ and $B_c$ meson decays found by the LHCb collaboration [@Aaij:2017tyk]. To quantify the constraints from these analyses it is important to have reliable values for the operator matrix elements involved, with quantified uncertainties. The pseudo-scalar matrix element {#sec:psme} ================================ Tran et al. [@Tran:2018kuv] consider a Hamiltonian of corrections to the standard model: $${\cal H}_{eff} = \frac{4 G_F V_{cb}}{\sqrt{2}} ( {\cal O}_{V_L} + \sum_{X=S_i, V_i, T_L} \delta_{l \tau} X {\cal O_X} ) \label{eq:Hnew}$$ and work out the phenomenology for the leptonic and semi-leptonic decays of the $B_c$ meson. The operators considered are: $$\begin{aligned} {\cal O}_{V_i} & = & (\overline{c} \gamma^\mu P_i b ) (\overline{l} \gamma_{\mu} P_L \nu_l ) , \\ {\cal O}_{S_i} & = & (\overline{c} P_i b ) (\overline{l} P_L \nu_l ), \\ {\cal O}_{T_L} & = & (\overline{c} \sigma^{\mu\nu} P_L b ) (\overline{l} \sigma_{\mu\nu} P_L \nu_l ) , \label{eq:BSMcontribution}\end{aligned}$$ where $\sigma_{\mu\nu} = i [\gamma_\mu , \gamma_\nu] / 2 $, $P_L = (1 - \gamma_5 ) / 2 $, and $P_R = (1 + \gamma_5 ) / 2 $.
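As a quick sanity check of the chiral projector algebra entering these operators, the identities $P_L+P_R=1$, $P_{L,R}^2=P_{L,R}$ and $P_L P_R=0$ can be verified numerically in an explicit chiral representation of $\gamma_5$ (a sketch only; the choice of representation is immaterial):

```python
import numpy as np

# gamma_5 with off-diagonal 2x2 identity blocks (chiral-type representation)
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
gamma5 = np.block([[Z2, I2],
                   [I2, Z2]])

I4 = np.eye(4)
PL = (I4 - gamma5) / 2  # left-handed projector
PR = (I4 + gamma5) / 2  # right-handed projector

# Projector identities: completeness, idempotence, orthogonality
assert np.allclose(PL + PR, I4)
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)
assert np.allclose(PL @ PR, np.zeros((4, 4)))
```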
The delta function in the Hamiltonian in equation \[eq:Hnew\] takes into account lepton flavor violation in this model. The complex $X$ are the Wilson coefficients from the Beyond the Standard Model (BSM) theory. We note that there is no suppression of the operators by the scale of the BSM physics, because the three additional operators all have the same dimension as the operators in the standard model. The leptonic decay constant of the $B_c$ meson, $f_{B_c}$: $$\label{eq:fbc} \langle 0 \mid \overline{c} \gamma_5 \gamma_{\mu} b \mid B_c \rangle = f_{Bc} p_{\mu} ,$$ is used in the standard model calculation of the annihilation rate of the $B_c$ meson to leptons via a $W$ boson. The additional operators in equation \[eq:BSMcontribution\] require the introduction of the pseudo-scalar matrix element of the $B_c$ meson defined via $$\langle 0 \mid \overline{c} \gamma_5 b \mid B_c \rangle = f_{Bc}^{P}(\mu) M_{B_c}.$$ The matrix element $f_{Bc}^{P}$ depends on the renormalization scale $\mu$ in QCD. A physical result is obtained when it is combined with the Wilson coefficient, which also depends on $\mu$, from the BSM theory. The leptonic branching fraction of the $B_c$ meson is $$\begin{gathered} {\cal B} (B_c \rightarrow \tau \nu) = \frac{G_F^2}{8 \pi} \mid V_{cb} \mid^2 \tau_{B_c} m_{B_c} m_{\tau}^2 \\ \left( 1 - \frac{m_\tau^2}{m_{B_c}^2} \right)^2 f_{B_c}^2 A_{BSM},\end{gathered}$$ where $A_{BSM}$ is $$A_{BSM} = \mid 1 - (V_R - V_L) + \frac{m_{B_c}}{m_\tau} \frac{f_{B_c}^{P} }{f_{B_c}} (S_R - S_L) \mid^2 .$$ In the standard model $A_{BSM}$ = 1. If there are experimental deviations of the leptonic decay of the $B_c$ meson from the value in the standard model, then the values of $f_{B_c}$ and $f_{B_c}^{P}$ are required to constrain the values of the Wilson coefficients $V_R$, $V_L$, $S_R$, and $S_L$ of the BSM theory. The Wilson coefficients also contribute to semi-leptonic decays of heavy-light mesons, so additional constraints on them can be obtained.
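For orientation, the branching fraction formula above is easy to evaluate numerically. The sketch below sets $A_{BSM}=1$ and uses the lattice value $f_{B_c}=0.427$ GeV quoted later in this comment; $G_F$, $|V_{cb}|$, $m_\tau$ and the $B_c$ lifetime are not given in the text and are assumed here at typical reference values, with the lifetime converted to natural units via $\hbar$:

```python
import math

# Assumed external inputs (not from this comment; typical reference values)
G_F = 1.1663787e-5   # Fermi constant, GeV^-2
V_cb = 0.041         # CKM element |V_cb| (assumed)
m_tau = 1.77686      # tau mass, GeV
tau_Bc = 0.510e-12   # B_c lifetime in seconds (assumed)
hbar = 6.582119e-25  # GeV * s

# Inputs quoted in the text
m_Bc = 6.274         # B_c mass, GeV
f_Bc = 0.427         # B_c decay constant, GeV (lattice)

tau_Bc_gev = tau_Bc / hbar                     # lifetime in GeV^-1
phase_space = (1.0 - m_tau**2 / m_Bc**2) ** 2  # helicity-suppression factor
A_BSM = 1.0                                    # standard model

BR = (G_F**2 / (8 * math.pi) * V_cb**2 * tau_Bc_gev
      * m_Bc * m_tau**2 * phase_space * f_Bc**2 * A_BSM)
print(f"BR(B_c -> tau nu) ~ {BR:.3f}")  # roughly 2 percent
```

With these inputs the standard-model branching fraction comes out at the few-percent level, consistent with the expectation that this mode is helicity suppressed but still sizeable for the $\tau$.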
This is a modern update of the experimental origins of the V-A theory in the standard model, where experimental data were used to constrain the interactions between quarks (see [@Das:2009zzd] for example). Although the leptonic decay of the $B_c$ meson has not been observed experimentally, the constraints from a LEP1 measurement allowed Tran et al. [@Tran:2018kuv] to put bounds on the $S_L$ and $S_R$ couplings. The CCQM is used in [@Tran:2018kuv; @Ivanov:2016qtw] to estimate $f_{Bc}^{P}(\mu)$, although without giving the scale $\mu$ at which it is determined. Lattice QCD results {#sec:lattice} =================== The decay constant of the $B_c$ has been calculated in lattice QCD using two different approaches which give results in good agreement [@McNeile:2012qf; @Colquhoun:2015oha]. The most accurate result comes from using the Highly Improved Staggered Quark (HISQ) formalism [@Follana:2006rc]. In this formalism there is an exact partially conserved axial current (PCAC) [@Kilcup:1986dg] relation $$\partial_{\mu} A_{\mu} = (m_1 + m_2) P \;\;. \label{eq:PCACdefn}$$ From the pseudoscalar matrix element times quark mass we can then obtain the matrix element of the temporal axial current (at zero spatial momentum) needed for eq. (\[eq:fbc\]) with absolute normalisation. This is done in [@McNeile:2012qf] for heavy-charm pseudoscalar mesons for a range of heavy quark masses and values of the lattice spacing, $a$. This enables the heavy quark mass dependence of the heavy-charm decay constant to be mapped out in the continuum ($a \rightarrow 0$) limit and a result for $f_{B_c}$ to be obtained when the heavy quark mass corresponds to that of the $b$. The value obtained is $$\label{eq:fbcresult} f_{B_c} = 0.427(6)(2) \,\mathrm{GeV},$$ and a complete error budget is given in [@McNeile:2012qf]. A completely different approach for $f_{B_c}$, based on the lattice discretisation of nonrelativistic QCD (NRQCD) [@Lepage:1992tx], is given in [@Colquhoun:2015oha].
There the matrix element of the temporal axial current is calculated directly but, since there is no PCAC relation on the lattice in this case, the current is matched to that of continuum QCD using lattice QCD perturbation theory through $\mathcal{O}(\alpha_s)$ [@Monahan:2012dq]. A result for $f_{B_c}$ of 0.434(15) GeV is obtained, where the uncertainty is dominated by that from lattice discretisation effects and systematic uncertainties in matching the current. Although the uncertainty is larger here than in the HISQ case, the agreement between the two results is confirmation of our understanding of the errors from the two approaches. Since, in the HISQ case [@McNeile:2012qf], the lattice PCAC relation was used to determine $f_{B_c}$, it is clear that we could also have determined $f_{Bc}^{P}(\mu)$. Since $f_{Bc}^P(\mu)$ runs with $\mu$ it is much more convenient to determine it in combination with quark masses. The PCAC relation, eq. (\[eq:PCACdefn\]) on the lattice yields the following relationship between $f_{B_c}$ and $f_{Bc}^P$: $$(m_b+m_c)f_{Bc}^{P} = M_{B_c} f_{Bc} . \label{eq:pcaclatt}$$ Here $m_b$ and $m_c$ are the bare lattice quark masses. Since both sides of this equation are scheme- and scale-invariant, we can instead apply this relationship in the continuum using the continuum results for $f_{B_c}$ obtained from lattice QCD calculations. Then $$f_{Bc}^{P}(\mu) = \frac{M_{B_c} f_{Bc} }{m_b(\mu) + m_c(\mu)}, \label{eq:pcacdecay}$$ where $m_b(\mu)$ and $m_c(\mu)$ are the bottom and charm quark masses at the scale $\mu$ in a standard continuum scheme, such as $\overline{\mathrm{MS}}$. The quark masses are also most conveniently and accurately obtained from lattice QCD calculations, see for example [@McNeile:2010ji]. 
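Anticipating the numerical inputs quoted just below, eq. (\[eq:pcacdecay\]) amounts to simple arithmetic; a minimal sketch using the $\overline{\mathrm{MS}}$ masses at 3 GeV and the lattice $f_{B_c}$:

```python
# Inputs quoted in the text: MSbar masses at mu = 3 GeV and lattice f_Bc
m_c = 0.986          # GeV, m_c(3 GeV, n_f = 4)
mb_over_mc = 4.51    # quark-mass ratio m_b/m_c
f_Bc = 0.427         # GeV, lattice result for the decay constant
M_Bc = 6.274         # GeV, B_c mass from experiment

m_b = mb_over_mc * m_c               # m_b(3 GeV) in GeV
f_Bc_P = M_Bc * f_Bc / (m_b + m_c)   # eq. (pcacdecay)
print(f"f_Bc^P(3 GeV) = {f_Bc_P:.3f} GeV")  # ~ 0.493 GeV
```

The central value reproduces the 0.493 GeV quoted in eq. (\[eq:fbcpresult\]); the quoted uncertainty additionally folds in the errors on the inputs.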
We use results from [@McNeile:2010ji] for the quark masses in the $\overline{\mathrm{MS}}$ scheme at a standard scale of 3 GeV, $\overline{m}_c$(3 GeV, $n_f$=4) = 0.986(6) GeV, $m_b/m_c$ = 4.51(4), $f_{B_c}$ from [@McNeile:2012qf] (0.427(6) GeV) and $M_{B_c}$ = 6.274(1) GeV from experiment [@Aaij:2016qlz]. This gives, in the $\overline{\mathrm{MS}}$ scheme, $$\label{eq:fbcpresult} \overline{f}_{Bc}^P (\mbox{3 GeV}) = 0.493(9) \,\,\mathrm{GeV}$$ where the uncertainty is dominated by that from the lattice QCD result for $f_{B_c}$. The result for $\overline{f}_{Bc}^{P}$ can be run to different values of $\mu$ using the inverse of the running of the $\overline{\mathrm{MS}}$ quark mass [@Chetyrkin:1997dh; @Vermaseren:1997fq]. The result for $f_{B_c}$ computed using the CCQM [@Tran:2018kuv] of 0.489 GeV is 15% larger than that obtained from the lattice QCD results discussed above. The systematic uncertainty from using the CCQM is estimated in [@Tran:2018kuv] as 10%. The result given in [@Tran:2018kuv] for $f_{B_c}^{P}$ of 0.645 GeV is hard to interpret or compare to the lattice QCD values since no scheme or scale for it is given. Lattice QCD results for the form factors of $B_c$ semileptonic decay to charmonium states are as yet preliminary [@Colquhoun:2016osw] but will provide a further point of comparison in future. Conclusions =========== Weak decays of the $B_c$ meson provide exciting opportunities for constraining new physics as growing datasets from the LHC, along with new analyses, become available [@Tran:2018kuv]. The theoretical input of hadronic parameters, such as decay constants and form factors of the $B_c$, needs to be firmly based on ‘first-principles’ approaches to QCD, such as lattice QCD. This allows not only the result to be given but also a well-motivated uncertainty on its value.
To this end, we collect here existing lattice QCD results, with their associated uncertainty, for the $B_c$ decay constant and we derive from them a value for the pseudoscalar current matrix element. Our calculations were done on the Darwin Supercomputer as part of STFC’s DiRAC facility jointly funded by STFC, BIS and the Universities of Cambridge and Glasgow. This work was funded by STFC. [10]{} C.-T. Tran, M. A. Ivanov, J. G. Körner, and P. Santorelli, (2018), arXiv:1801.06927. LHCb, R. Aaij [*et al.*]{}, Phys. Rev. Lett. [**120**]{}, 121801 (2018), arXiv:1711.05623. A. Das, J. Phys. Conf. Ser. [**196**]{}, 012004 (2009). M. A. Ivanov, J. G. Körner, and C.-T. Tran, Phys. Rev. [**D94**]{}, 094028 (2016), arXiv:1607.02932. C. McNeile, C. T. H. Davies, E. Follana, K. Hornbostel, and G. P. Lepage, Phys. Rev. [**D86**]{}, 074503 (2012), arXiv:1207.0994. HPQCD, B. Colquhoun [*et al.*]{}, Phys. Rev. [**D91**]{}, 114509 (2015), arXiv:1503.05762. HPQCD Collaboration, E. Follana [*et al.*]{}, Phys.Rev. [**D75**]{}, 054502 (2007), arXiv:hep-lat/0610092. G. W. Kilcup and S. R. Sharpe, Nucl. Phys. [**B283**]{}, 493 (1987). G. P. Lepage, L. Magnea, C. Nakhleh, U. Magnea, and K. Hornbostel, Phys. Rev. [**D46**]{}, 4052 (1992), arXiv:hep-lat/9205007. C. Monahan, J. Shigemitsu, and R. Horgan, Phys.Rev. [**D87**]{}, 034017 (2013), arXiv:1211.6966. C. McNeile, C. T. H. Davies, E. Follana, K. Hornbostel, and G. P. Lepage, Phys. Rev. [**D82**]{}, 034512 (2010), arXiv:1004.4285. LHCb, R. Aaij [*et al.*]{}, Phys. Rev. [**D95**]{}, 032005 (2017), arXiv:1612.07421. K. G. Chetyrkin, Phys. Lett. [**B404**]{}, 161 (1997), arXiv:hep-ph/9703278. J. A. M. Vermaseren, S. A. Larin, and T. van Ritbergen, Phys. Lett. [**B405**]{}, 327 (1997), arXiv:hep-ph/9703284. HPQCD, B. Colquhoun, C. Davies, J. Koponen, A. Lytle, and C. McNeile, PoS [**LATTICE2016**]{}, 281 (2016), arXiv:1611.01987.
--- author: - - - bibliography: - '../bibliography.bib' title: 'Generating Optimal Privacy-Protection Mechanisms via Machine Learning' ---
--- abstract: | We present a general approach to deriving bounds on the generalization error of randomized learning algorithms. Our approach can be used to obtain bounds on the average generalization error as well as bounds on its tail probabilities, both for the case in which a new hypothesis is randomly generated every time the algorithm is used—as often assumed in the probably approximately correct (PAC)-Bayesian literature—and in the single-draw case, where the hypothesis is extracted only once. For this last scenario, we present a novel bound that is explicit in the central moments of the information density. The bound reveals that the higher the order of the information density moment that can be controlled, the milder the dependence of the generalization bound on the desired confidence level. Furthermore, we use tools from binary hypothesis testing to derive a second bound, which is explicit in the tail of the information density. This bound confirms that a fast decay of the tail of the information density yields a more favorable dependence of the generalization bound on the confidence level. author: - '\' bibliography: - 'reference.bib' title: Generalization Error Bounds via $m$th Central Moments of the Information Density --- Introduction {#sec:introduction} ============ A recent line of research, initiated by the work of Russo and Zou [@russo16-05b] and then followed by many recent contributions [@xu17-05a; @bassily18-02a; @bu19-01a; @esposito19-12a], has focused on obtaining bounds on the generalization error of randomized learning algorithms in terms of information-theoretic quantities, such as mutual information. The resulting bounds are *deterministic*, i.e., data-independent, and allow one to assess the speed of convergence of a given learning algorithm in terms of sample complexity [@shalev-shwartz14-a p. 44]. 
A parallel development has taken place in the machine learning and statistics community, where the probably approximately correct (PAC)-Bayesian framework, pioneered by McAllester [@mcallester98-07a], has resulted in several upper bounds on the generalization error. These bounds, which are expressed in terms of the relative entropy between a prior and a posterior distribution on the hypothesis class (see, e.g., [@guedj19-01a] for a recent review), are typically *empirical*, i.e., data-dependent, and can be used to design learning algorithms [@catoni07-a]. One difficulty in comparing the bounds on the generalization error available in the literature is that they sometimes pertain to different quantities. To illustrate this point, we need to introduce some key quantities, which will be used in the remainder of the paper. Following the standard terminology in statistical learning theory, we let $\setZ$ be the instance space, $\setW$ be the hypothesis space, and $\ell: \setW\times \setZ \rightarrow \positivereals$ be the loss function. A training data set $Z^n=[Z_1,\dots,Z_n]$ is a set of $n$ samples drawn independently from a distribution $P_Z$ defined on $\setZ$. We denote by $P_{Z^n}$ the product distribution induced by $P_Z$. A randomized learning algorithm is characterized by a conditional probability distribution $P_{W\!\given\! Z^n}$ on $\mathcal{W}$. Finally, we let the generalization error for a given hypothesis $w$ be defined as the difference between the empirical and population risks $$\label{eq:gen} {\textnormal{gen}}(w,z^n)=\frac{1}{n}\sum_{k=1}^{n}\ell(w,z_k) -\Ex{P_Z}{\ell(w,Z)}.$$ Throughout the paper, we shall assume that the loss function $\ell(w,Z)$ is $\sigma$-subgaussian [@wainwright19-a Def. 2.2] under $P_Z$ for all $w\in \setW$. The line of work initiated with [@russo16-05b] deals with bounding the average generalization error $$\label{eq:average-gen} \Ex{P_{W\!
Z^n}}{ {{\textnormal{gen}}(W,Z^n)}}.$$ Specifically, upper bounds on the absolute value of this quantity were first presented in [@russo16-05b] and then improved in [@xu17-05a Thm. 1] and [@bu19-01a Prop. 1]. On the contrary, the PAC-Bayesian approach seeks lower bounds on the probability [@guedj19-01a] $$\label{eq:pac-bayesian} P_{Z^n}\lefto[\abs{\Ex{P_{W\!\given\! Z^n}}{{{\textnormal{gen}}(W,Z^n)}}} \leq \epsilon \right].$$ Characterizing such a probability, which is in the spirit of the PAC framework, is relevant when a new hypothesis $W$ is drawn from $P_{W\!\given\! Z^n}$ every time the algorithm is used. As can be verified by, e.g., comparing the proof of [@xu17-05a Lemma 1] and the proof of [@guedj19-10a Prop. 3],[^1] for the subgaussian case, one can obtain bounds both on  and on  that are explicit in the mutual information $I(W;Z^n)$ and in the relative entropy $\relent{P_{W\!\given\! Z^n}}{P_W}$, respectively, by using the Donsker-Varadhan variational formula for relative entropy. One may also be interested in the scenario in which the hypothesis $W$ is drawn from $P_{W\!\given\! Z^n}$ only once, i.e., it is kept fixed for all uses of the algorithm. In such a scenario, which, following the terminology used in [@catoni07-a p. 12], we shall refer to as a *single-draw* scenario, the probability of interest is $$\label{eq:single-draw} P_{W\! Z^n}\lefto[\abs{{{\textnormal{gen}}(W,Z^n)}} \leq \epsilon \right].$$ Bounds on this probability that depend on the mutual information $I(W;Z^n)$ were provided in [@xu17-05a Thm. 3] and [@bassily18-02a]. Several novel bounds, which are explicit in information-theoretic quantities such as $f$-divergence, $\alpha$-mutual information, and maximal leakage, were recently derived in [@esposito19-12a]. Interestingly, all these bounds make use of a different set of tools compared with the ones used to establish bounds on  and , with one of the main ingredients being the data processing inequality for $f$-divergences. 
Furthermore, they yield drastically different estimates for the generalization error. Specifically, let us assume that we want  to be greater than $1-\delta$ where, throughout the paper, $\delta \in (0,1)$. Then a slight refinement of the analysis in [@bassily18-02a] yields the following bound on $\epsilon$: $$\label{eq:sample_complexity_mi} \epsilon\geq \sqrt{\frac{2\sigma^2}{n}\left(\frac{I(W;Z^n)+H_b(\delta)}{\delta}+\log 2\right)}. $$ Here, $H_b(\delta)$ denotes the binary entropy function. Throughout the paper, $\log(\cdot)$ denotes the natural logarithm. In contrast, the analysis in [@esposito19-12a Cor. 5] yields the following bound for $\alpha>1$: $$\label{eq:sample_complexity_alpha_mi} \epsilon\geq \sqrt{\frac{2\sigma^2}{n} \left[I_{\alpha}(W;Z^n)+\log 2 + \frac{\alpha}{\alpha-1} \log \frac{1}{\delta}\right]}.$$ Here, $I_{\alpha}(\cdot,\cdot)$ is the $\alpha$-mutual information $$\label{eq:alpha_MI} I_{\alpha}(W;Z^n) = \frac{\alpha}{\alpha-1}\log \Exop_{P_{Z^n}}\lefto[\Exop^{1/\alpha}_{P_{W}}\lefto[\lefto(\frac{\dv P_{W\! Z^n}}{\dv P_W\! P_{Z^n}}\right)^{\alpha}\right]\right],$$ where $\dv P_{W\! Z^n}/\dv P_W\! P_{Z^n}$ is the Radon-Nikodym derivative. Note that, since $\lim_{\delta\to 0} H_b(\delta)/\delta+\log \delta = 1$, the dependence of $\epsilon$ on $\delta$ in  is of order $1/\sqrt{\delta}$. In contrast, it is of order $\sqrt{(\alpha/(\alpha-1)) \log(1/\delta)}$ in , which is typically more favorable. For example, in the limit $\alpha\to\infty$, the $\alpha$-mutual information converges to the maximal leakage [@issa16-a Thm. 1], and $\epsilon$ depends on $\delta$ only through the term $\sqrt{\log(1/\delta)}$. The analysis in [@esposito19-12a], however, does not reveal why using $\alpha$-mutual information rather than mutual information results in a more benign dependence of the generalization error on the confidence parameter $\delta$. Moreover, the choice $\alpha=1$, for which $I_{\alpha}(W;Z^n)$ reduces to $I(W;Z^n)$, renders the bound in  vacuous.
#### Contributions {#contributions .unnumbered} Inspired by the treatment of the generalization error for the case of the $0-1$ loss function reported in [@catoni07-a], we present a single framework for deriving bounds on the generalization error that can be applied to both average and tail analyses, both of a PAC-Bayesian and single-draw flavor. As a product of our analysis, we obtain a probabilistic generalization error bound for the single-draw scenario, which results in the following bound on $\epsilon$ to guarantee that  is greater than $1-\delta$: $$\label{eq:sample_complexity_moments} \epsilon\geq \sqrt{\frac{2\sigma^2}{n}\left(I(W;Z^n)+\frac{M_m(W;Z^n)}{(\delta/2)^{1/m}}+\log\frac{2}{\delta}\right)}.$$ Here, $$\label{eq:central_moment_infodens} M_m(W;Z^n) = \Exop^{1/m}_{P_{W\! Z^n}}\lefto[\abs{ \imath(W,Z^n) - I(W;Z^n)}^m \right]$$ is the $m$th root of the $m$th central moment of the information density $$\label{eq:info_density} \imath(w,z^n)=\log \frac{\dv P_{W\! Z^n} }{\dv P_W\! P_{Z^n}}(w,z^n).$$ The bound in  is derived as a data-independent relaxation of an underlying data-dependent bound. Comparing  with , we see that the existence of higher central moments of $\imath(W,Z^n)$ results in a more favorable scaling of the error bound with $\delta$. This implies that one can obtain generalization error bounds that are explicit in the mutual information and have a more favorable dependence on $\delta$ than the one given in . In the limit $m\to\infty$, the dependence is of order $\sqrt{\log (1/\delta)}$, but the resulting bound is less tight than the maximal leakage bound in [@esposito19-12a Cor. 5]. However, through a more refined analysis, we also obtain a bound that is tighter than the maximal leakage bound in some cases. 
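To make the different $\delta$-dependencies concrete, the sketch below evaluates the confidence-penalty terms appearing inside the square roots of the mutual-information bound, the $m$th-central-moment bound, and a $\log(1/\delta)$-type leakage bound in the $\alpha\to\infty$ limit. The values assigned to the information measures are purely illustrative assumptions; only the scaling in $\delta$ is meaningful:

```python
import math

def penalty_mi(delta, I=1.0):
    """Penalty of the mutual-information bound: (I + H_b(delta))/delta + log 2."""
    Hb = -delta * math.log(delta) - (1 - delta) * math.log(1 - delta)
    return (I + Hb) / delta + math.log(2)

def penalty_moment(delta, I=1.0, M_m=1.0, m=4):
    """Penalty of the m-th central-moment bound: I + M_m/(delta/2)^(1/m) + log(2/delta)."""
    return I + M_m / (delta / 2) ** (1.0 / m) + math.log(2 / delta)

def penalty_leakage(delta, L=1.0):
    """log(1/delta)-type penalty of the alpha -> infinity (maximal leakage) limit."""
    return L + math.log(2) + math.log(1 / delta)

for delta in (1e-2, 1e-4, 1e-6):
    print(delta, penalty_mi(delta), penalty_moment(delta), penalty_leakage(delta))
# The MI penalty grows like 1/delta, the moment penalty like delta**(-1/m),
# and the leakage-type penalty only like log(1/delta).
```

As $\delta\to 0$ the three penalties separate by orders of magnitude, matching the discussion above: controlling higher moments of the information density buys a milder dependence on the confidence level.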
To shed further light on the role of the tail of the information density in determining the dependence of $\epsilon$ on $\delta$, we derive an additional probabilistic single-draw bound, based on a change of measure argument [@polyanskiy19-a Thm. 12.5] that is used to establish strong converse bounds in binary hypothesis testing. It results in the following bound on $\epsilon$: $$\label{eq:strongconv_gen_thm_intro} \epsilon \geq \sqrt{\frac{2\sigma^2}{n}\lefto(\gamma + \log\frac{2}{\delta - P_{W\! Z^n}\lefto[{{\imath}(W,Z^n)}>\gamma\right]}\right)}.$$ Similar to , this bound reveals that for a fixed $\delta$, low values of $\epsilon$ require fast-decaying tails of the information density random variable. Indeed, $\gamma$ in  should be chosen sufficiently large to make the argument of the $\log$ positive. However, large values of $\gamma$ also contribute to a large $\epsilon$. Bounds via a Subgaussian Inequality {#sec:main_results} =================================== In this section, we derive several types of bounds on the absolute value of the generalization error of a randomized learning algorithm. The following theorem gives an inequality that will later be used to derive both average and tail bounds for the generalization error. \[thm:mainineq\] Let $Z^n$ consist of $n$ i.i.d. samples drawn according to $P_Z$. Assume that $\ell(w,Z)$ is $\sigma$-subgaussian under $P_Z$ for all $w\in \mathcal W$. Assume that $P_{W\! Z^n}$ is absolutely continuous with respect to $P_W\! P_{Z^n}$. Then, for all $\lambda\in \reals$, $$\label{eq:mainineq} \Exop_{P_{W\! Z^n}}\lefto[\exp\lefto(\lambda {{\textnormal{gen}}(W,Z^n)}- \frac{\lambda^2\sigma^2}{2n} -{{\imath}(W,Z^n)}\right)\right]\leq 1.$$ Since $\ell(w,Z)$ is $\sigma$-subgaussian and the $Z_i$ are i.i.d., the random variable $\frac{1}{n}\sum_{i=1}^n \ell(w,Z_i)$ is $\sigma/\sqrt{n}$-subgaussian, i.e., $$\begin{gathered} \Exop_{P_{Z^n} } \lefto[\exp\lefto(\lambda\left(\frac{1}{n}\sum_{i=1}^n \ell(w,Z_i) - \Exop_{P_Z}\lefto[\ell(w,Z)\right] \right)\right)\right] \\ \leq \exp\lefto(\frac{\lambda^2\sigma^2}{2n}\right).\end{gathered}$$ Reorganizing terms and taking the expectation with respect to $P_W$, we get $$\label{eq:subgauss_without_indicator} \Exop_{P_W\!
P_{Z^n} } \lefto[\exp\lefto(\lambda {{\textnormal{gen}}(W,Z^n)}- \frac{\lambda^2\sigma^2}{2n}\right)\right]\leq 1.$$ Now, let $E$ be the union of all sets $\setE\in \mathcal{W}\times \mathcal{Z}^n$ such that $P_{W\!Z^n}(\setE)=0$, and let $\bar E$ denote its complement. It follows from  that $$\Exop_{P_W\! P_{Z^n} } \lefto[1_{\bar E}\cdot\exp\lefto(\lambda {{\textnormal{gen}}(W,Z^n)}- \frac{\lambda^2\sigma^2}{2n}\right)\right]\leq 1,$$ where $1_{\bar E}$ is the indicator function of the set $\bar E$. To obtain , we perform a change of measure from $P_W\! P_{Z^n}$ to $P_{W\! Z^n}$, as per [@polyanskiy19-a Prop. 17.1(4)]. We next show how the inequality  can be used to derive previously known and novel bounds on the generalization error. Average Generalization Error ---------------------------- As a first corollary of Theorem \[thm:mainineq\], we derive a bound on the average generalization error , recovering the result in [@xu17-05a Thm. 1]. \[cor:ExpectedGen\] Under the assumptions of Theorem \[thm:mainineq\], $$\abs{ \Ex{P_{W\! Z^n}}{{{\textnormal{gen}}(W,Z^n)}}} \leq \sqrt{\frac{2\sigma^2}{n}I(W;Z^n)}.$$ We apply Jensen’s inequality to , which yields $$\label{eq:intermediate_average} \exp\biggl(\lambda\Ex{P_{W\! Z^n}}{{{\textnormal{gen}}(W,Z^n)}} - \frac{\lambda^2\sigma^2}{2n} -\Ex{P_{W\! Z^n}}{\imath(W,Z^n)} \biggr) \leq 1.$$ Noting that $\Ex{P_{W\! Z^n}}{\imath(W,Z^n)}=I(W;Z^n)$ we get, after taking the $\log$ of both sides of  and reorganizing terms, the nonnegative parabola in $\lambda$ $$\lambda^2\frac{\sigma^2}{2n} - \lambda \Ex{P_{W\! Z^n}}{{{\textnormal{gen}}(W,Z^n)}} + I(W;Z^n) \geq 0.$$ Since the discriminant of a nonnegative parabola is nonpositive, we get $$\Exop^2_{P_{W\! Z^n}}[{{\textnormal{gen}}(W,Z^n)}] - \frac{2\sigma^2}{n}I(W;Z^n)\leq 0,$$ which yields the desired bound. 
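The exponential inequality underlying these proofs is easy to probe numerically. The Monte Carlo sketch below (an illustration, not part of the proofs) takes a loss bounded in $[0,1]$, hence $\sigma=1/2$-subgaussian by Hoeffding's lemma, draws $W$ independently of $Z^n$ so that the information density vanishes, and checks that the empirical mean of $\exp(\lambda\,\mathrm{gen}(W,Z^n)-\lambda^2\sigma^2/(2n))$ stays below one up to Monte Carlo fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam = 50, 0.5, 2.0
trials = 20000

vals = np.empty(trials)
for t in range(trials):
    w = rng.uniform(0.0, 1.0)             # hypothesis, independent of the data
    z = rng.uniform(0.0, 1.0, size=n)     # Z^n with P_Z = Unif[0, 1]
    # Loss |w - z| is bounded in [0, 1]; its population risk for Z ~ Unif[0,1]
    # is the integral of |w - z| over [0, 1]:
    pop_risk = (w**2 + (1 - w)**2) / 2
    gen = np.abs(w - z).mean() - pop_risk  # empirical minus population risk
    vals[t] = np.exp(lam * gen - lam**2 * sigma**2 / (2 * n))

print(vals.mean())  # should not exceed 1 by more than Monte Carlo noise
```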
PAC-Bayesian Tail Bounds {#sec:pac-bayesian-results} ------------------------ Next, we use Theorem \[thm:mainineq\] to obtain two tail bounds on the absolute value of the generalization error averaged over $P_{W\!\given\! Z^n}$ in . The first one, presented in Corollary \[cor:PAC-bayesian\], recovers a classical data-dependent PAC-Bayesian bound (see, e.g., [@guedj19-10a Prop. 3]) for the special case in which $P_W$ is taken as the prior distribution and $P_{W\!\given\! Z^n}$ is taken as the posterior distribution. The second one, presented in Corollary \[cor:PAC-bayesian-data-independent\], is a relaxation of the first bound, which makes it data-independent. This bound, which depends on the $m$th moment of the relative entropy $\relent{P_{W\vert Z^n}}{P_W}$, recovers the bound given in [@bassily18-02a App. A.3] for the case $m=1$. \[cor:PAC-bayesian\] Under the assumptions in Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ under $P_{Z^n}$: $$\begin{gathered} \abs{ \Exop_{P_{W\vert Z^n} }{{\textnormal{gen}}(W,Z^n)}} \\ \leq \sqrt{\frac{2\sigma^2}{n}\left(\relent{P_{W\vert Z^n}}{P_W} +\log \frac{1}{\delta}\right)}.\label{eq:multidrawBase} \end{gathered}$$ Similarly to the proof of Corollary \[cor:ExpectedGen\], we apply Jensen’s inequality to , but now only with respect to the conditional expectation of $W$ given $Z^n$. This yields $$\label{eq:singleDrawJensen} \Exop_{P_{Z^n}}\lefto[\exp\lefto(\lambda \Exop_{P_{W\vert Z^n}}\lefto[{{\textnormal{gen}}(W,Z^n)}\right] - \frac{\lambda^2\sigma^2}{2n} - \relent{P_{W\vert Z^n}}{P_W}\right)\right]\leq 1,$$ where we used that $$\Ex{P_{W\vert Z^n=z^n}}{\imath(W,z^n)}=\relent{P_{W\vert Z^n=z^n}}{P_W}.$$ Next, we use Markov’s inequality in the following form: let $U\distas P_U$ be a nonnegative random variable s.t. $\Ex{}{U}\leq 1$. Then $$\begin{aligned} P_U[U>1/\delta]< \Ex{}{U}\delta\leq \delta.\label{eq:MarkovTrick}\end{aligned}$$ Using  in , we conclude that $$P_{Z^n}\lefto[\exp\lefto(\lambda \Exop_{P_{W\vert Z^n}}\lefto[{{\textnormal{gen}}(W,Z^n)}\right] - \frac{\lambda^2\sigma^2}{2n} - \relent{P_{W\vert Z^n}}{P_W}\right)\leq \frac{1}{\delta}\right]\geq 1-\delta.$$ Reorganizing terms, we obtain: $$P_{Z^n}\lefto[\lambda \Exop_{P_{W\vert Z^n}}\lefto[{{\textnormal{gen}}(W,Z^n)}\right] - \frac{\lambda^2\sigma^2}{2n} - \relent{P_{W\vert Z^n}}{P_W}\leq \log \frac{1}{\delta}\right]\geq 1-\delta.$$
The desired bound  now follows from the same discriminant analysis as in the proof of Corollary \[cor:ExpectedGen\]. The bound in Corollary \[cor:PAC-bayesian\] is data-dependent because the upper bound on the generalization error depends on the specific instance of $Z^n$. In the next corollary, we apply Markov’s inequality once more to make the bound data-independent. \[cor:PAC-bayesian-data-independent\] Under the assumptions in Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ under $P_{Z^n}$ for all $m>0$: $$\begin{gathered} \abs{ \Ex{P_{W\vert Z^n} } {{{\textnormal{gen}}(W,Z^n)}} } \\ \leq \sqrt{\frac{2\sigma^2}{n}\left(\frac {\Exop^{1/m}_{P_{Z^n}}\lefto[\relent{P_{W\vert Z^n}}{P_W}^m\right] } {(\delta/2)^{1/m}} +\log \frac{2}{\delta}\right)}.\label{eq:multidrawMI}\end{gathered}$$ Applying Markov’s inequality to the random variable $\relent{P_{W\vert Z^n}}{P_W}^m$, we obtain after some manipulations $$\begin{gathered} P_{Z^n}\Biggl[\relent{P_{W\vert Z^n}}{P_W} \leq \frac {\Exop^{1/m}_{P_{Z^n}}\lefto[\relent{P_{W\vert Z^n}}{P_W}^m\right] } {\delta^{1/m}} \Biggr]\\ \geq 1-\delta. \label{eq:MultidrawProofMarkov}\end{gathered}$$ We now observe that the two probability bounds  and  together with the union bound imply that, with probability at least $1-2\delta$ under $P_{Z^n}$, $$\begin{gathered} \abs{ \Ex{P_{W\vert Z^n} } {{{\textnormal{gen}}(W,Z^n)}} }\\ \leq \sqrt{\frac{2\sigma^2}{n}\left(\frac {\Exop^{1/m}_{P_{Z^n}}\lefto[\relent{P_{W\vert Z^n}}{P_W}^m\right] } {\delta^{1/m}} +\log \frac{1}{\delta}\right)}.\end{gathered}$$ The desired result then follows by the substitution $\delta\rightarrow \delta/2$. Note that when $m=1$, we have $$\Exop_{P_{Z^n}}\lefto[\relent{P_{W\vert Z^n}}{P_W}\right]=I(W;Z^n)$$ and the bound  coincides with the one reported in [@bassily18-02a App. 3]. Some additional remarks on  are provided in Section \[sec:remarks\]. 
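The moment form of Markov's inequality driving these relaxations is easy to verify empirically. The sketch below uses an exponential random variable as a stand-in for the relative entropy (chosen purely for illustration) and checks that $P[U \leq \Exop[U] + \Exop^{1/m}[|U-\Exop[U]|^m]/\delta^{1/m}] \geq 1-\delta$ for several values of $m$:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.exponential(scale=1.0, size=200000)  # nonnegative surrogate random variable

delta = 0.05
for m in (1, 2, 4):
    mean = U.mean()
    # m-th root of the m-th central moment (empirical)
    central = (np.abs(U - mean) ** m).mean() ** (1.0 / m)
    threshold = mean + central / delta ** (1.0 / m)
    coverage = (U <= threshold).mean()
    print(m, coverage)  # each coverage should be at least 1 - delta = 0.95
    assert coverage >= 1 - delta
```

For light-tailed variables the guarantee holds with a large margin; heavier tails inflate the central moments and push the threshold out, exactly as the data-independent bounds above suggest.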
Single-Draw Probabilistic Bounds {#sec:sigle-draw-results} -------------------------------- We now use Theorem \[thm:mainineq\] to derive two tail bounds on the absolute value of the single-draw generalization error in . As in Section \[sec:pac-bayesian-results\], we first state a data-dependent bound in Corollary \[cor:single-draw-data-dependent\]. Then, we relax this to two different data-independent bounds in Corollaries \[cor:single-draw-data-independent\] and \[cor:single-draw-maximal-leakish\]. All three bounds are novel, to the best of our knowledge. \[cor:single-draw-data-dependent\] Under the assumptions in Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ under $P_{W\! Z^n}$:[^2] $$\label{eq:singledrawBase} \abs{ {{\textnormal{gen}}(W,Z^n)}} \leq \sqrt{\frac{2\sigma^2}{n}\left(\imath(W,Z^n) +\log \frac{1}{\delta}\right)}.$$ Applying Markov’s inequality  directly to , we conclude that $$\begin{gathered} P_{W\! Z^n}\biggo[\exp\biggo(\lambda {{\textnormal{gen}}(W,Z^n)}- \frac{\lambda^2\sigma^2 }{2n} -{{\imath}(W,Z^n)}\bigg)\leq \frac{1}{\delta} \bigg]\\ \geq 1-\delta,\end{gathered}$$ from which the desired result follows by the same discriminant analysis as in the proof of Corollary \[cor:ExpectedGen\]. \[cor:single-draw-data-independent\] Under the assumptions in Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ under $P_{W\! Z^n}$: $$\begin{gathered} \abs{ {{\textnormal{gen}}(W,Z^n)}} \leq \label{eq:singledrawMoment} \\ \sqrt{\frac{2\sigma^2}{n}\left(I(W;Z^n)+\frac{M_m(W;Z^n)}{(\delta/2)^{1/m}} +\log \frac{2}{\delta}\right)},\end{gathered}$$ where $M_m(W;Z^n)$, defined in , is the $m$th root of the $m$th central moment of the information density. 
We shall use Markov’s inequality in the following form: for a random variable $U$, $$\label{eq:markov-for-arbitrary-rv} P_U\lefto[U \leq \Exop[U]+ \frac{\Exop^{1/m}[\abs{U-\Exop[U]}^m]}{\delta^{1/m}} \right]\geq 1-\delta.$$ Applying  to the information density random variable, we conclude that, with probability at least $1-\delta$, $$\label{eq:moment_bound_on_inf_dens} {{\imath}(W,Z^n)}\leq I(W;Z^n)+\frac{M_m(W;Z^n)}{\delta^{1/m}}.$$ It now follows from , , and the union bound that, with probability at least $1-2\delta$, $$\begin{gathered} \abs{{{\textnormal{gen}}(W,Z^n)}}\leq \\ \sqrt{\frac{2\sigma^2}{n}\left(I(W;Z^n)+\frac{M_m(W;Z^n)}{\delta^{1/m}}+\log \frac{1}{\delta} \right)}.\end{gathered}$$ The desired result follows after the substitution $\delta\rightarrow \delta/2$. \[cor:single-draw-maximal-leakish\] Under the assumptions in Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ under $P_{W\! Z^n}$: $$\begin{gathered} \abs{ {{\textnormal{gen}}(W,Z^n)}} \leq \label{eq:singledrawMaximalLeakish} \\ \sqrt{\frac{2\sigma^2}{n}\left(\log \esssup_{P_{Z^n} }\Exop_{P_{W}}\lefto[ \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}\right] +2\log \frac{2}{\delta}\right)}.\end{gathered}$$ The following holds with probability $1$ under $P_{W\! Z^n}$: $$\begin{aligned} \imath(W,Z^n) = \log \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}} \leq \log \esssup_{P_{Z^n\vert W}} \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}.\end{aligned}$$ The assumption that $P_{W\!Z^n}\ll P_W\!P_{Z^n}$ means that any set in the support of $P_{W\!Z^n}$ is also in the support of $P_W\!P_{Z^n}$. We can therefore weaken the $\esssup$ as follows: $$\begin{aligned} \log \esssup_{P_{Z^n\vert W}} \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}} \leq \log \esssup_{P_{Z^n}} \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}.\end{aligned}$$ Markov’s inequality implies that, with probability at least $1-\delta$ under $P_W$, $${{\imath}(W,Z^n)}\leq \log \esssup_{P_{Z^n}}\Ex{P_W}{\frac{\dv P_{W\! 
Z^n}}{\dv P_W\!P_{Z^n}}} + \log\lefto(\frac{1}{\delta}\right),$$ which, combined with  through the union bound and the substitution $\delta\rightarrow \delta/2$, gives the desired result. Remarks on the Tail Bounds in Sections \[sec:pac-bayesian-results\] and \[sec:sigle-draw-results\] {#sec:remarks} -------------------------------------------------------------------------------------------------- The single-draw tail bound in  reveals a relation between the central moments of the information density and the confidence parameter $\delta$. Specifically, the higher the moment of the information density that can be controlled, the more benign the dependence of the generalization error bound on $\delta$. A similar observation holds for the data-independent PAC-Bayesian bound , in which controlling higher moments of the random variable $\relent{P_{W\!\given\! Z^n}}{P_W}$ leads to a more favorable dependence of the generalization bound on $\delta$. In the limit $m\to\infty$ the bound in  reduces to $$\begin{gathered} \label{eq:singledrawMomentInf} \abs{ {{\textnormal{gen}}(W,Z^n)}} \leq \\ \sqrt{\frac{2\sigma^2}{n}\left(I(W;Z^n)+{M_\infty(W;Z^n)} +\log \frac{2}{\delta}\right)},\end{gathered}$$ where $M_\infty(W;Z^n)=\esssup_{P_{W\!Z^n}} \abs{\imath(w,z^n)- I(W;Z^n)}$. So, in this limit, the dependence on $\delta$ is of order $\sqrt{\log(1/\delta)}$. However, the bound  is tighter than , up to the factor $2$ multiplying the logarithm. It is also tighter than the maximal leakage bound in [@esposito19-12a Cor. 10] and the max information bound in [@dwork15-06a Thm. 4] with $\beta=0$, up to the aforementioned factor of $2$. Indeed, let the max information be defined as $$\label{eq:max_mi} I_{\textnormal{max}}(W;Z^n) = \esssup_{w,z^n}\imath(w,z^n).$$ It is readily verified that $$I_\textnormal{max}(W;Z^n)\leq I(W;Z^n)+{M_\infty(W;Z^n)}.$$ As shown in [@esposito19-12a Lem. 
12], $\mathcal{L}(Z^n \rightarrow W)\leq I_{\textnormal{max}}(W;Z^n)$, where $\mathcal{L}(Z^n \rightarrow W)$ denotes the maximal leakage. The bound in  can be relaxed to give a maximal leakage bound, since $$\begin{aligned} \mathcal{L}(Z^n \rightarrow W) &=& \log \Exop_{P_{W}}\lefto[\esssup_{P_{Z^n}} \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}\right] \label{eq:lower_bound_of_maximal_leakage}\\ &\geq& \log \esssup_{P_{Z^n} }\Exop_{P_{W}}\lefto[ \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}\right].\end{aligned}$$ Thus, provided that $$\log \esssup_{P_{Z^n} }\Exop_{P_{W}}\lefto[ \frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}\right] \leq \mathcal{L}(Z^n \rightarrow W) + \log\frac{2}{\delta},$$ we have established that the bound in  is stronger than, in turn, the maximal leakage bound in [@esposito19-12a Cor. 10], the max information bound in [@dwork15-06a Thm. 4] with $\beta=0$, and . In the next section, we present a different approach to obtaining single-draw tail bounds, which reveals a coupling between $\delta$ and the tail of the information density random variable. Bounds via the Strong Converse ============================== As pointed out in Section \[sec:introduction\], a key tool for deriving the single-draw bound  is the data processing inequality for $f$-divergences. This is also true for some of the bounds presented in [@esposito19-12a]. In the context of binary hypothesis testing, it is known that such an inequality only leads to a weak converse bound on the region of achievable error rates. To obtain a strong converse, one needs to use [@polyanskiy19-a Lem. 12.2] (restated in Lemma \[lem:strong\_converse\_lemma\] below for convenience), which provides a bound on the probability of an event under a distribution $P$ in terms of its probability under $Q$. \[lem:strong\_converse\_lemma\] Let $E$ be an arbitrary event and $P$ and $Q$ be probability measures such that $P$ is absolutely continuous with respect to $Q$. 
Then, for all $\gamma\in\reals$, $$\label{lem:StrongConv} P[E] \leq P\lefto[\log \frac{\dv P}{\dv Q} > \gamma \right] + e^\gamma Q[E].$$ As we shall show next, this inequality can be turned into a generalization bound by choosing $P$, $Q$, and $E$ appropriately. \[thm:strong\_conv\_bound\] Under the assumptions of Theorem \[thm:mainineq\], the following bound holds with probability at least $1-\delta$ over $P_{W\! Z^n}$: $$\begin{gathered} \label{eq:strongconv_gen_thm} \abs{{{\textnormal{gen}}(W,Z^n)}}\\ \leq\sqrt{ \frac{2\sigma^2}{n} \left( \gamma + \log\lefto(\frac{2}{\delta-P_{W\! Z^n}\lefto[\imath(W,Z^n)\geq \gamma\right]} \right) \right)}\end{gathered}$$ for all $\gamma$ for which the arguments of the logarithm and the square root are nonnegative. With $P=P_{W\! Z^n}$, $Q=P_W\! P_{Z^n}$ and $$\label{eq:high_error_event} E=\{(w,z^n): \abs{\textnormal{gen}(w,z^n)} > \epsilon \},$$ we apply Lemma \[lem:strong\_converse\_lemma\] to get $$\label{eq:StrongConvPf1} P_{W\! Z^n} [E] \leq {P_{W\! Z^n} }\left[\imath(W,Z^n)\geq \gamma \right] + e^\gamma {P_W\! P_{Z^n}}[E].$$ The $\sigma$-subgaussianity of the loss function implies that [@wainwright19-a Eq. (2.9)] $${P_{Z^n}}\lefto[\abs{\textnormal{gen}(w,Z^n)}>\epsilon\right] \leq 2\exp\lefto(-n{\epsilon^2}/{(2\sigma^2)} \right).\label{eq:hoeffding_q_bound}$$ Inserting  into , we obtain $$\begin{gathered} \label{eq:strong_conv_bound} P_{W\! Z^n}[\abs{{{\textnormal{gen}}(W,Z^n)}}>\epsilon] \\\leq P_{W\! Z^n}\lefto[\imath(W,Z^n)\geq \gamma\right] + 2\exp\lefto(\gamma-n{\epsilon^2}/{(2\sigma^2)}\right).\end{gathered}$$ We get the desired result by imposing that the right-hand side of  is less than $\delta$ and solving for $\epsilon$. Unlike the bounds in Section \[sec:main\_results\], this bound depends on the tail distribution of the information density. For a given $\delta$, the parameter $\gamma$ needs to be chosen large enough to make the factor $\delta-P_{W\! Z^n}[\imath(W,Z^n)\geq \gamma]$ positive. 
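As a side note, the change-of-measure inequality of Lemma \[lem:strong\_converse\_lemma\] is easy to verify numerically on a toy example. The sketch below checks it exhaustively for a pair of hypothetical discrete distributions on three points, over all events and several values of $\gamma$.

```python
import math

# Two discrete distributions on {0, 1, 2}; P is absolutely continuous w.r.t. Q.
P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]
log_ratio = [math.log(p / q) for p, q in zip(P, Q)]

def lemma_holds(event, gamma):
    # check P[E] <= P[log(dP/dQ) > gamma] + e^gamma * Q[E]
    lhs = sum(P[i] for i in event)
    tail = sum(P[i] for i in range(len(P)) if log_ratio[i] > gamma)
    rhs = tail + math.exp(gamma) * sum(Q[i] for i in event)
    return lhs <= rhs + 1e-12

events = [[], [0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2]]
gammas = [-1.0, 0.0, 0.5, 1.0, 2.0]
print(all(lemma_holds(E, g) for E in events for g in gammas))
```

The inequality holds for every event and every $\gamma$, as the splitting of $P[E]$ over the two events $\{\log(\dv P/\dv Q) > \gamma\}$ and its complement guarantees.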
However, choosing $\gamma$ too large makes the bound loose because of the $\gamma$ term that is added to the $\log$. This reveals a trade-off between the rate of decay of the tail of the information density and the confidence level $\delta$. Controlling the tail of the information density results in a tighter bound than the moment-based bound in  and the maximal leakage bound  [@esposito19-12a Cor. 10] (up to constants). Indeed, these two bounds can be obtained by further upper-bounding the right-hand side of , as we shall discuss next. Moment-Based Single-Draw Tail Bound {#sec:rederive_sd_tail_moments} ----------------------------------- By Markov’s inequality, $$\begin{aligned} P_{W\! Z^n}\lefto[\imath(W,Z^n)\geq \gamma\right] &\leq& P_{W\! Z^n}\lefto[\abs{\imath(W,Z^n)-I(W;Z^n)}\geq \gamma-I(W;Z^n)\right]\\ &\leq& \frac{M_m^m(W;Z^n)}{\lefto(\gamma-I(W;Z^n)\right)^m}. \label{eq:strong_conv_rederive_2}\end{aligned}$$ We now set $$\begin{aligned} \label{eq:gamma} \gamma = \frac{M_m(W;Z^n)}{(\delta/2)^{1/m}}+I(W;Z^n).\end{aligned}$$ Substituting  in , we conclude that $$P_{W\! Z^n}\lefto[\imath(W,Z^n)\geq \gamma\right] \leq {\delta}/{2}.$$ Inserting this upper bound into we obtain $$\epsilon \leq \sqrt{\frac{2\sigma^2}{n} \left(I(W;Z^n)+\frac{M_m(W;Z^n)}{(\delta/2)^{1/m}} + \log\frac{4}{\delta} \right)},$$ which coincides with , up to a $\log 2$ term. Maximal Leakage Single-Draw Tail Bound {#sec:rederive_max_info} -------------------------------------- With probability $1$ over $P_{W\! Z^n}$, using the assumption that $P_{W\!Z^n}\ll P_W\!P_{Z^n}$, $$P_{W\! Z^n}[{{\imath}(W,Z^n)}\geq \gamma] \leq P_W\lefto[\esssup_{P_{Z^n}}\frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}\geq e^\gamma\right].$$ Thus, Markov’s inequality implies that $$P_{W\! Z^n}[{{\imath}(W,Z^n)}\geq \gamma] \leq e^{-\gamma}\Ex{P_W}{\esssup_{P_{Z^n}}\frac{\dv P_{W\! Z^n}}{\dv P_W\!P_{Z^n}}}.$$ Setting $\gamma = \mathcal{L}(Z^n \rightarrow W) + \log(2/\delta)$ and using this result in , we get, with probability at least $1-\delta$ over $P_{W\! Z^n}$, $$\begin{aligned} \abs{{{\textnormal{gen}}(W,Z^n)}} \leq \sqrt{\frac{2\sigma^2}{n}\lefto(\mathcal{L}(Z^n \rightarrow W) + \log\frac{2}{\delta} + \log\frac{4}{\delta}\right)}.\end{aligned}$$ [^1]: For the case in which the prior and posterior distributions in [@guedj19-10a Prop. 
3] are set to $P_W$ and $P_{W\!\given\! Z^n}$, respectively. [^2]: Note that the argument of the square root can be negative, but that this happens with probability at most $\delta$. Therefore, the right-hand side of  is well-defined with probability at least $1-\delta$.
--- abstract: 'We study the spin-orbit coupling induced by the splitting between TE and TM optical modes in a photonic honeycomb lattice. Using a tight-binding approach, we calculate analytically the band structure. Close to the Dirac point, we derive an effective Hamiltonian. We find that the local reduced symmetry ($\mathrm{D_{3h}}$) transforms the TE-TM effective magnetic field into an emergent field with a Dresselhaus symmetry. As a result, particles become massive, but no gap opens. The emergent field symmetry is revealed by the optical spin Hall effect.' author: - 'A. V. Nalitov' - 'G. Malpuech' - 'H. Terças' - 'D. D. Solnyshkov' bibliography: - 'reference.bib' title: 'Spin-orbit coupling and optical spin Hall effect in photonic graphene' --- Spin-orbit coupling in crystals allows to create and control spin currents without applying external magnetic fields. These phenomena have been described in the seventies [@Dyakonov] and are nowadays called the spin Hall effect (SHE) [@Hirsch1999; @reviewSHE]. In 2005, the interplay between the spin-orbit coupling and the specific crystal symmetry of graphene [@Geim2007] has been proposed [@Kane2005] to be at the origin of a new type of spin Hall effect, the quantum spin Hall effect, in which the spin currents are supported by surface states and are topologically protected [@QSHE; @Kane2010]. This result has a special importance, since it defines a new class of $Z_2$-topological insulator [@Kane2005b], not associated with the quantization of the total conductance, but associated with the quantization of the spin conductance. However, from an experimental point of view, the realization of any kind of SHE is difficult, because spin-orbit coupling does not only lead to the creation of spin current, but also to spin decoherence [@Dyakonov2]. In graphene, the situation is even worse, since the spin-orbit coupling is extremely weak. 
Deposition of adatoms has been proposed to increase the spin-orbit coupling [@Gmitra2013], and it allowed the recent observation of the SHE [@Balakrishnan2013], but associated with a very short spin relaxation length, of the order of 1 $\mu$m. On the other hand, artificial honeycomb lattices for atomic Bose Einstein Condensates (BEC) [@hon_atom] and photons [@Peleg2007; @Kuhl2010; @Polini2013; @Kalesaki2014; @Jacqmin2014] have been realized. These systems are gaining a lot of attention due to the large possible control over the system parameters, up to complete Hamiltonian engineering [@Hafezi; @Umucalilar]. In BECs, the recent implementation of synthetic magnetic fields [@Lin1] and of non-Abelian, Rashba-Dresselhaus gauge fields [@Lin2] appears promising in view of the achievement of topological insulator analogs. Photonic systems, and specifically photonic honeycomb lattices appear even more promising. They are based on coupled wave guide arrays [@Rechtsman2013], on photonic crystals with honeycomb symmetry [@Won2011], and on etched planar cavities [@Jacqmin2014]. A photonic Floquet topological insulator has been recently reported [@Chong2013], and some others based on the magnetic response of metamaterials predicted [@Khanikaev]. In photonic systems, spin-orbit coupling naturally appears from the energy splitting between the TE and TM optical modes and from structural anisotropies. Both effects can be described in terms of effective magnetic fields acting on the photon (pseudo)-spin [@Shelykh2010]. In planar cavity systems, the TE-TM effective field breaks the rotational symmetry, but preserves both time reversal and spatial inversion symmetries. It is characterized by a $k^2$ scaling and a double azimuthal dependence. This spin-orbit coupling is at the origin of the optical spin Hall effect (OSHE)[@Kavokin2005; @Leyder2007] and of the acceleration of effective magnetic monopoles [@Hivet; @Bramwell2012; @Solnyshkov2013]. 
As recently shown [@Tercas2014], the specific TE-TM symmetry can be locally transformed into a non-Abelian gauge field in a structure with a reduced spatial symmetry. In this work, we calculate the band structure of photonic graphene in the presence of the intrinsic spin-orbit coupling induced by the TE-TM splitting. We derive an effective Hamiltonian which allows to extract an effective magnetic field acting on the photon pseudo-spin only. We find that the low symmetry ($\mathrm{D_{3h}}$) induced by the honeycomb lattice close to the Dirac points transforms the TE-TM field into an emergent field with a Dresselhaus symmetry. Particles become massive but no gap opens. The dispersion topology shows large similarities with that of bilayer graphene [@McCann2006] and of monolayer graphene with Rashba spin-orbit coupling [@Rakyta2010], featuring trigonal warping [@Dresselhaus1974] and a Lifshitz transition [@Lifshitz1960]. The symmetry of these states is revealed by the optical spin Hall effect (OSHE) which we describe by simulating resonant optical excitation of the $\Gamma$, K and K’ points. The OSHE at the $\Gamma$ point shows four spin domains associated with the TE-TM symmetry. The OSHE at the K and K’ points shows two domains characteristic of the Dresselhaus symmetry. The spin domains at the K and K’ points are inverted, which is a signature of the restored $\mathrm{D_{6h}}$ symmetry when the two valleys are included. In what follows, in order to be specific, we consider a honeycomb lattice based on a patterned planar microcavity similar to the one recently fabricated and studied [@Jacqmin2014]. This does not reduce the generality of our description, which can apply to other physical realizations of honeycomb lattices, in optical and non-optical systems. In [@Jacqmin2014], quantum wells were embedded in the cavity which provided the strong coupling regime and the formation of cavity exciton-polaritons. 
Here, we will consider the linear regime, a parabolic in-plane dispersion, and no applied magnetic field. In such case, photons and exciton-polaritons behave in a similar way and our formalism applies to both types of particles. \[fig1\] ![(color online) A schematic sketch of the tight-binding model. (a) Photon tunneling between microcavity pillars is described as photon propagation through “waveguide”-like links. (b) Polarization dependence of tunneling probability due to TE-TM energy splitting: $L$ state which is polarized longitudinally to the “waveguide” link is closer in energy to the degenerate pillar-pinned states than the transversely-polarized state $T$, resulting in higher L-photon tunneling probability through the link. ](fig1.pdf "fig:") *Tight-binding model* First, we describe the spin-orbit coupling in photonic graphene structure (figure 1a) within the tight-binding approximation. We take a basis of $\sigma\pm$ polarized photon states localized on each pillar of the lattice as a zeroth approximation for the tight-binding model and introduce the hopping of photons from a pillar to one of its nearest neighbors as a perturbation $\hat{V}$ on this basis. To illustrate the polarization dependence of the hopping probability, let us consider two neighbouring pillars $A$ and $B$, shown in Figure (1b). The photon hopping between them may be described as propagation through a “waveguide”-like link. TE-TM energy splitting imposes a slight difference $\delta J$ in tunneling matrix elements for states linearly-polarized longitudinally ($L$) and transversely ($T$) to vector $\mathbf{d}_\varphi$ linking the pillars [@suppl], as it was recently shown for the eigenstates in a photonic benzene molecule [@Vera]. In that framework, the matrix elements read: $$\langle A, L \vert \hat{V} \vert B, L\rangle \equiv -J-\delta J/2, \quad \langle A, T \vert \hat{V} \vert B, T\rangle \equiv -J+\delta J/2. 
\notag$$ While a photon is in a link, TE-TM field does not rotate its eigenstate polarizations $L$ and $T$, implying no cross-polarization matrix elements: $$\langle A, L \vert \hat{V} \vert B, T\rangle = \langle A, T \vert \hat{V} \vert B, L\rangle = 0. \notag$$ In $\sigma\pm$ basis, the probability of spin flip during hopping is linear in $\delta J$ and its phase gain depends on the angle $\varphi$ between the link and the horizontal axis: $$\langle A, \pm \vert \hat{V} \vert B, \pm \rangle = -J, \quad \langle A, + \vert \hat{V} \vert B,- \rangle = - \delta J e^{-2\mathrm{i}\varphi}. \notag$$ This phase factor reflects the fact that when a link is rotated by 90 degrees, $L$ and $T$ polarization basis is inverted: if $L$ was horizontal, it becomes vertical and vice versa. A photon state may be described in the bispinor form $\Phi = \left( \Psi_A^+, \Psi_A^-, \Psi_B^+, \Psi_B^- \right)^{\mathrm{T}}$, with $\Psi_{A(B)}^\pm$ being the wave function on both sublattices in both spin components. The effective Hamiltonian acting on a plane wave bispinor $\Phi_\mathbf{k}$ then has a block matrix form: $$\label{Hamiltonian} \mathrm{H}_\mathbf{k} = \left( \begin{matrix} \mathrm{0} & \mathrm{F}_{\mathbf{k}} \\ \mathrm{F}_{\mathbf{k}}^\dagger & \mathrm{0} \end{matrix} \right), \quad \mathrm{F}_{\mathbf{k}} = - \left( \begin{matrix} f_{\mathbf{k}} J & f_{\mathbf{k}}^+ \delta J \\ f_{\mathbf{k}}^- \delta J & f_{\mathbf{k}} J \end{matrix} \right),$$ where complex coefficients $f_{\mathbf{k}}$,$f_{\mathbf{k}}^\pm$ are defined by: $$f_{\mathbf{k}}=\sum_{j=1}^3 \exp(-\mathrm{i}\mathbf{k d}_{\varphi_j}),\quad f_{\mathbf{k}}^\pm = \sum_{j=1}^3 \exp(-\mathrm{i}\left[\mathbf{k d}_{\varphi_j} \mp 2 \varphi_j \right]), \notag$$ and $\varphi_j = 2 \pi (j-1) / 3$ is the angle between the horizontal axis and the direction to the $j$th nearest neighbor of a type-A pillar. 
Its diagonalization results in a biquadratic equation on the photon dispersion, having two pairs of solutions $\pm E_\mathbf{k}^\pm$, given by: $$\begin{aligned} \label{disp_sol} &2 (E_\mathbf{k}^\pm)^2 = 2 \vert f_\mathbf{k} \vert^2 J^2 + \left( \vert f_\mathbf{k}^+ \vert^2 + \vert f_\mathbf{k}^-\vert^2 \right) \delta J^2 \pm \\ &\pm \sqrt{(\vert f_\mathbf{k}^+ \vert^2 - \vert f_\mathbf{k}^- \vert^2)^2 \delta J^4 + 4 \vert f_\mathbf{k} f_\mathbf{k}^{+*}+f_\mathbf{k}^* f_\mathbf{k}^- \vert ^2 J^2 \delta J^2}. \notag\end{aligned}$$ The dispersion is plotted along the principal direction in Figure (2a), and the trigonal warping effect which is a characteristic of bilayer graphene [@McCann2006] and of monolayer graphene with Rashba spin-orbit coupling [@Rakyta2010] is shown on the Figure (2b) in the vicinity of the K point. When $\delta J = J /2$, trigonal warping disappears. The crossing points originating from different Dirac points meet and annihilate. The dispersion topology changes – a phenomenon associated with the so-called Lifshitz transition in Fermionic systems [@Lifshitz1960]. If $\delta J \ll J$, the distance $\delta K$ between a K point and the additional pockets is approximately given by $(\delta J / J)^2 a^{-1}$. \[fig2\] ![(color online) The eigenstates of the effective Hamiltonian due to the honeycomb potential and the TE-TM splitting. (a) Photon dispersion branches along principal directions. The inset demonstrates a zoomed K valley – the dispersion is gapless. (b) Isoenergetic lines around K point, illustrating the trigonal warping: Dirac cones split in four pockets. (c) and (d) Linear polarization map and the pseudospin texture of the lowest energy state.](fig2.pdf "fig:") The effective Hamiltonian may be expressed in terms of pseudospin operators $\boldsymbol{\sigma}$ and $\mathbf{s}$, having the same matrix form of Pauli matrices vector and corresponding to sublattice (A/B) and polarization (H/V) degrees of freedom. 
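The dispersion (\[disp\_sol\]) can be cross-checked by direct numerical diagonalization of the Bloch Hamiltonian (\[Hamiltonian\]). Below is a minimal sketch of such a check; the values of $a$, $J$ and $\delta J$ are illustrative, and the nearest-neighbor vectors $\mathbf{d}_{\varphi_j}$ are taken with length $a$ at the angles $\varphi_j$ defined above.

```python
import numpy as np

a, J, dJ = 1.0, 1.0, 0.2                # illustrative lattice constant and tunneling constants
phis = 2 * np.pi * np.arange(3) / 3     # angles to the three nearest neighbors

def blocks(k):
    # f_k, f_k^+ and f_k^- for momentum k = (kx, ky)
    kd = k[0] * a * np.cos(phis) + k[1] * a * np.sin(phis)
    f  = np.exp(-1j * kd).sum()
    fp = np.exp(-1j * (kd - 2 * phis)).sum()
    fm = np.exp(-1j * (kd + 2 * phis)).sum()
    return f, fp, fm

def bands_numeric(k):
    # eigenvalues of the 4x4 Bloch Hamiltonian [[0, F], [F^dag, 0]]
    f, fp, fm = blocks(k)
    F = -np.array([[f * J, fp * dJ], [fm * dJ, f * J]])
    H = np.block([[np.zeros((2, 2)), F], [F.conj().T, np.zeros((2, 2))]])
    return np.linalg.eigvalsh(H)

def bands_analytic(k):
    # the closed-form solution of the biquadratic equation
    f, fp, fm = blocks(k)
    s = 2 * abs(f)**2 * J**2 + (abs(fp)**2 + abs(fm)**2) * dJ**2
    r = np.sqrt((abs(fp)**2 - abs(fm)**2)**2 * dJ**4
                + 4 * abs(f * np.conj(fp) + np.conj(f) * fm)**2 * J**2 * dJ**2)
    Ep, Em = np.sqrt((s + r) / 2), np.sqrt((s - r) / 2)
    return np.sort([-Ep, -Em, Em, Ep])

k = np.array([0.37, -0.81])             # arbitrary test momentum
print(np.allclose(bands_numeric(k), bands_analytic(k)))
```

With these conventions the two evaluations agree at generic momenta; the check relies only on the $4\times4$ Bloch matrix (\[Hamiltonian\]).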
It may further be separated into a polarization-independent part $H^{(0)}_\mathbf{k}$, coupling $\boldsymbol{\sigma}$ with momentum and giving a standard graphene dispersion with two Dirac valleys K and K$^\prime$, and a spin-orbit term $H^{\mathrm{SO}}_\mathbf{k}$, coupling $\mathbf{s}$ with $\boldsymbol{\sigma}$ and momentum: $$\begin{aligned} H_\mathbf{k}^{\mathrm{(0)}} =& - J \sigma_+ f_\mathbf{k} +h.c., \label{H_0}\\ H_\mathbf{k}^{\mathrm{SO}} =& - \delta J \sigma_+ \otimes \left( f_\mathbf{k}^+ s_+ + f_\mathbf{k}^- s_- \right) + h.c., \label{H_SO}\end{aligned}$$ where $\sigma_\pm = (\sigma_x \pm \mathrm{i} \sigma_y)/2$, $s_\pm = (s_x \pm \mathrm{i} s_y)/2$, and the $\otimes$ symbol denotes Kronecker product. Expanding expressions (\[H\_0\],\[H\_SO\]) and keeping the main order in $\mathbf{q=k-K}$, we further isolate the momentum-independent part $H^\mathrm{SO}_{\mathbf{K}}$ coupling $\mathbf{s}$ with $\boldsymbol{\sigma}$ and rewrite both terms in the low-energy approximation: $$\begin{aligned} H_\mathbf{q}^{\mathrm{(0)}} =& \hbar v_F \left( \tau_z q_x \sigma_x + q_y \sigma_y \right), \label{H_01} \\ H^{\mathrm{SO}} =& \Delta \left( \tau_z \sigma_y s_y - \sigma_x s_x \right), \label{H_SOK}\end{aligned}$$ where $v_F = 3 J a / (2 \hbar)$, $\Delta = 3 \delta J / 2$ and $\tau_z$ equals $+1$ and $-1$ for K and K$^\prime$ valleys respectively. Here we use the same basis as the one of Kane and Mele [@Kane2005] in order to allow for a direct comparison with their Hamiltonian. This basis is different from the original basis of Wallace [@Wallace] which is used in Eq. (\[Hamiltonian\]). The passage from Wallace to Kane is obtained by writing $q_x\rightarrow q_y$, $q_y\rightarrow -q_x$. If one restricts the state space by locally fixing the sublattice $\boldsymbol{\sigma}(\mathbf{k})$ pseudospin and valley $\tau_z$, the spin-orbit term may be treated as an interaction with an emergent field. 
As an example, if one considers eigenstates of the main term (\[H\_01\]) in one Dirac valley with a fixed energy sign $c=\pm 1$, spin-orbit term (\[H\_SOK\]) transforms to a symmetry-allowed Dresselhaus-like emergent field: $$\begin{aligned} H_{c}^{\mathrm{SO}} =& -\Delta c \left( q_x s_x + q_y s_y\right) /q. \label{H_SOKt}\end{aligned}$$ This term, having a well-defined physical origin, is similar in spirit to the Rashba term introduced by Kane and Mele [@Kane2005; @Kane2005b]. The effective field described by the spin-orbit term (\[H\_SOKt\]) splits the degenerate massless photon branches by $3 \delta J$, and their linear polarization only depends on the direction of $\mathbf{q}$ and not on its absolute value. However, if $q < \Delta / \hbar v_F = (\delta J / J) a^{-1}$, the spin orbit term cannot be considered as a perturbation of the main term (\[H\_01\]); the interplay between the two terms gives an effective photon mass $m^*=(2 c \hbar^2 \delta J)/(3 a^2 J^2)$ in this region of reciprocal space. The pseudospin pattern (defining the linear polarization of light) of the lowest energy eigenstate reflects the effective field acting on the particles, because the pseudospin aligns with this field. The pattern over the whole reciprocal space is shown in Figure (2c). Figure (2d) shows a zoom on the K point, where the emergent Dresselhaus-like field is clearly identified. Figure (2c) also clearly shows that the effective fields have opposite signs close to the K and K’ points. From this analytical calculation of the dispersion, we can conclude that the particular type of spin-orbit coupling we consider does not open a gap in the K point of the Brillouin zone, but leads to the appearance of massive particles. This, among other consequences, should induce a strong modification of the Klein tunneling effect. 
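The number of spin domains produced by each field texture can be anticipated from the winding number of the in-plane effective field around the excitation point. A quick check, with the field directions taken from the symmetry analysis above and all overall constants dropped:

```python
import numpy as np

# Sample the in-plane field direction on a loop around the relevant point:
# TE-TM field near Gamma ~ (cos 2phi, sin 2phi); emergent field near K ~ -(cos phi, sin phi).
phi = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
te_tm = np.stack([np.cos(2 * phi), np.sin(2 * phi)])
dressel = -np.stack([np.cos(phi), np.sin(phi)])

def winding(fx, fy):
    # total rotation of the field direction around the loop, in units of 2*pi
    ang = np.unwrap(np.arctan2(fy, fx))
    return round((ang[-1] - ang[0]) * len(ang) / (len(ang) - 1) / (2 * np.pi))

print(winding(*te_tm), winding(*dressel))
```

The double winding at $\Gamma$ translates into four spin domains in the OSHE, while the single winding of the Dresselhaus-like field at K translates into two.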
As shown in [@Liu2012], where Klein tunneling in the presence of a Rashba term was considered, the tunneling is suppressed for energies close to the K point, where the dispersion is not linear anymore, but is recovered for higher energies. The best evidence of the presence of a spin-orbit coupling inducing an effective magnetic field of a specific symmetry is the optical spin-Hall effect: rotation of the particle spin around the effective wavevector-dependent field during its propagation. The resonant excitation around the $\Gamma$ point with linearly polarised light should lead to a radial expansion of the wave-packet accompanied by a precession of the photon pseudo-spin. The double azimuthal dependence of the effective field orientation should lead, as in the planar case, to the formation of four spin domains [@Kavokin2005; @NOSHE]. Close to the K and K’ points, the Dresselhaus effective field orientation follows the azimuthal angle and only two spin domains should form [@Vishnevsky2013; @Tercas2014]. *Numerical simulation* In the following, in order to check the validity of the tight-binding approximation and the observability of the OSHE in realistic structures and experiments (including the broadening induced by the finite lifetime), we study numerically the propagation of polarised light in the photonic graphene structure. We consider a structure etched out of a planar microcavity, where the graphene atoms are represented by overlapping pillars (figure 3a). 
The equation of motion for the photonic spinor wavefunction reads: $$\begin{aligned} & i\hbar \frac{{\partial \psi _ \pm }} {{\partial t}} = - \frac{{\hbar ^2 }} {{2m}}\Delta \psi _ \pm + U\psi _ \pm - \frac{{i\hbar }} {{2\tau }}\psi _ \pm + \\ & + \beta {\left( {\frac{\partial }{{\partial x}} \mp i\frac{\partial }{{\partial y}}} \right)^2}{\psi _ \mp } +P_0 e^{ { - \frac{{\left( {t - t_0 } \right)^2 }} {{\tau _0^2 }}}}e^{ { - \frac{{\left( {{\mathbf{r}} - {\mathbf{r}}_0 } \right)^2 }} {{\sigma ^2 }}}}e^{ {i\left( {{\mathbf{kr}} - \omega t} \right)} } \notag\end{aligned}$$ where $\psi(r)=\{\psi_+(r), \psi_-(r)\}$ are the two circular components of the photon wave function, $m$ is the cavity photon mass, $\tau$ the lifetime. This equation is similar to the one describing the photon motion in a planar cavity in the presence of TE-TM splitting [@Shelykh2010], described by the parameter $\beta ={\hbar ^{2}}\left( {m_{l}^{-1}-m_{t}^{-1}}\right) /4$, where $m_{l,t}$ are the effective masses of TM and TE polarized particles respectively and $m=2{m_{t}}{m_{l}}/\left( {{m_{t}}+{m_{l}}}\right)$. We have taken $m_t=5\times10^{-5}m_0$, $m_l=0.95m_t$, where $m_0$ is the free electron mass. The only difference lies in the introduction of the honeycomb lattice potential $U(r)$ shown in figure 3a (24x24 elementary cells). $P_{0}$ is the amplitude of the pulsed pumping (identical for both components, corresponding to horizontal polarization), the pulse duration is $\tau_0=1$ ps, the size of the spot $\sigma=15$ $\mu$m. Pumping is localized in real space and in reciprocal space close to the selected point ($\Gamma$, K or K’). The lifetime was taken $\tau=25$ ps. \[fig3\] ![(color online). Optical spin Hall effect in photonic graphene. 
Circular polarization degree as a function of coordinates: a) the potential used in the simulations; b) excitation at $\Gamma$ point (TE-TM field); c) excitation at K point (Dresselhaus effective field); d) excitation at K’ point (field inverted with respect to K).](fig3.pdf "fig:") We have performed numerical simulation of optical spin Hall effect in photonic graphene using a high-resolution (512x512) representation of a potential, similar to the one already studied in experiments [@Jacqmin2014]. The nVidia CUDA graphical processor was used to carry out the integration of the 2D spinor Schroedinger equation. Figure (3-b,c,d) shows the snapshots taken at $t=30$ ps of the circular polarization degree as a function of coordinates. Panel b) shows the polarization degree for the excitation in the $\Gamma$ point, where the field has the typical TE-TM texture, evidenced by the 4 polarization domains [@Kavokin2005; @Leyder2007; @NOSHE]. Panels c) and d) demonstrate the optical spin Hall effect for the $K$ and $K'$ points respectively, where the field has the texture of the Dresselhaus spin-orbit coupling. This is evidenced by 2 polarization domains in real space [@Vishnevsky2013; @Tercas2014], which are inverted between the $K$ and $K'$ points, reflecting the fact that the fields around $K$ and $K'$ are opposite. The texture of the optical spin-Hall effect is a clear demonstration of the different nature of the effective magnetic field due to the spin-orbit coupling in the two Dirac points (K and K’) of the Brillouin zone. From this numerical experiment, we clearly see the advantage of photonic systems, which allow to excite and analyze any point of the dispersion, much more easily than in solid-state systems. Other very interesting consequences of our work rely on the possibilities offered by the manipulation of the lattice geometry in photonic systems and by the mixed exciton-photon nature of exciton-polaritons. 
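For readers wishing to reproduce such simulations, the spinor equation of motion above can be integrated with a split-step Fourier scheme, in which the kinetic and TE-TM terms are propagated exactly in reciprocal space. The sketch below is a minimal version with dimensionless illustrative parameters (not the experimental ones); the honeycomb potential, pump and decay terms are omitted and would enter as a real-space half-step, and the relative sign of the coupling term is a convention here.

```python
import numpy as np

# Grid and illustrative parameters (hbar = 1 units)
N, Lbox = 64, 40.0
dx = Lbox / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

m, beta, dt = 1.0, 0.05, 0.05

# Exact propagator of the k-space block [[k^2/2m, beta*(kx-i ky)^2], [beta*(kx+i ky)^2, k^2/2m]]:
# exp(-i k^2 dt / 2m) * [cos(beta k^2 dt) I - i sin(beta k^2 dt) B / (beta k^2)],
# which holds because the off-diagonal part B squares to (beta k^2)^2 times the identity.
phase0 = np.exp(-1j * K2 * dt / (2 * m))
c, s = np.cos(beta * K2 * dt), np.sin(beta * K2 * dt)
off = np.where(K2 > 0, (KX - 1j * KY) ** 2 / np.where(K2 > 0, K2, 1.0), 0.0)

def step(psi_p, psi_m):
    fp, fm = np.fft.fft2(psi_p), np.fft.fft2(psi_m)
    gp = phase0 * (c * fp - 1j * s * off * fm)
    gm = phase0 * (c * fm - 1j * s * np.conj(off) * fp)
    # (a real-space potential / pump / decay half-step would be applied here)
    return np.fft.ifft2(gp), np.fft.ifft2(gm)

# Gaussian, sigma+ polarized initial state
psi_p = np.exp(-(X**2 + Y**2) / 8.0).astype(complex)
psi_m = np.zeros_like(psi_p)
n0 = np.sum(np.abs(psi_p)**2 + np.abs(psi_m)**2)
for _ in range(40):
    psi_p, psi_m = step(psi_p, psi_m)
n1 = np.sum(np.abs(psi_p)**2 + np.abs(psi_m)**2)
print(abs(n1 / n0 - 1.0) < 1e-10)  # unitary evolution conserves the norm
```

The $\sigma_-$ component generated from the $\sigma_+$ input carries the double azimuthal phase of the TE-TM coupling, which is what produces the polarization domains discussed above.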
The system geometry is the tool that has been used to create a photonic topological insulator [@Rechtsman2013]. Combined with spin-orbit coupling, it opens very broad perspectives. The mixed nature of exciton-polaritons provides a magnetic response of the system at optical frequencies, which is of interest for realizing a photonic topological insulator [@Haldane2008; @Soljacic09]. It also induces a very strong non-linear optical response. The non-linear spin Hall effect, associated with the transmutation of topological defects and the focusing of spin currents, has already been described in planar structures [@NOSHE]. The behaviour of soliton states in photonic topological insulators was recently considered [@Segev]. More generally, the interactions allow an exciton-polariton gas to behave as a quantum fluid [@Carusotto2013] with spin-anisotropic interactions [@Shelykh2010]. Polaritonic graphene [@Jacqmin2014] therefore opens very large possibilities for the study of interacting spinor quantum fluids in the presence of different types of real and effective magnetic fields, suggesting access to different types of quantum phases. To conclude, we have studied the spin-orbit coupling induced by the TE-TM splitting in a microcavity etched in the shape of a graphene lattice. Within the tight-binding approximation we found the eigenstates of the system, derived an effective Hamiltonian and found the effective fields acting on the photon spin. The symmetry of the field is lowered close to the Dirac points, where it takes the form of a Dresselhaus field. The experimental observability of the optical spin Hall effect induced by this spin-orbit coupling is verified by numerical simulations. We acknowledge discussions with M. Glazov, A. Amo, I. Carusotto, and J. Bloch. This work has been supported by the ITN INDEX (289968), ANR Labex GANEX (Grant No. ANR-11-LABX-0014), ANR Quandyde (ANR-11-BS10-001) and IRSES POLAPHEN (246912).
--- abstract: 'We present predictions for the flux averaged muon energy spectra of quasi-elastic (QE) and 1-pion production events for the K2K long-baseline experiment. Using general kinematical considerations, we show that the muon energy spectra closely follow the neutrino energy spectrum with a downward shift of the energy scale by $0.15\ \gev$ (QE) and $0.4\ \gev$ (1-pion production). These predictions seem to agree with the observed muon energy spectra in the K2K nearby detector. We also show the spectral distortion of these muon energy spectra due to neutrino oscillation for the SK detector. Comparison of the predicted spectral distortions with the observed muon spectra of the 1-Ring and 2-Ring muon events in the SK detector will help to determine the oscillation parameters. The results will be applicable to other LBL experiments as well.' author: - 'Ji–Young Yu$^1$, E. A. Paschos$^1$, D. P. Roy$^2$, I. Schienbein$^3$' title: 'Muon spectra of Quasi-Elastic and 1-pion production events at the KEK LBL neutrino oscillation experiment' --- Introduction ============ Recently the KEK to Kamioka long-baseline neutrino oscillation experiment (K2K) has published its first result [@Ahn:2002up], which confirms the existence of ${\ensuremath{\nu_\mu}}$ oscillation as seen in the Super-Kamiokande (SK) atmospheric neutrino data [@Fukuda:1998mi]. The observed oscillation parameters from K2K agree well with the neutrino mass and mixing angles deduced from the atmospheric neutrino oscillation data [@Fukuda:1998mi] $$\sin^2 2 \theta \simeq 1\quad \text{and} \quad \Delta m^2 \simeq 3 \times 10^{-3} \evsq \ .$$ As is well known, in a two flavor scenario, the probability for a muon neutrino with energy $E_\nu$ to remain a muon neutrino after propagating the distance $L$ is given by the following expression $$P_{\mu\mu} = 1-\sin^2 2\theta\sin^2 \Big(\frac{\Delta m^2 L}{4 E_\nu}\Big) \ .
\label{eq:pmumu}$$ The standard approach to measuring the oscillation parameters is to determine the oscillation probability in Eq. (\[eq:pmumu\]) as a function of $E_\nu$. At the position of the minimum $\Delta m^2$ can be determined from the condition $\tfrac{\Delta m^2 L}{4 E_{\nu,{\rm min}}} \overset{!}{=} \tfrac{\pi}{2}$ and $\sin^2 2\theta$ from $P_{\mu\mu}(E_{\nu,{\rm min}})\overset{!}{=} 1-\sin^2 2\theta$. The neutrino energy is not directly measurable but can be reconstructed from the simple kinematics of quasi-elastic (QE) scattering events. Measuring the energy $E_\mu$ and the polar angle $\theta_\mu$ of the produced muon allows one to reconstruct $E_\nu$ with the help of the following relation (even if the scattered proton is not observed) $$E_\nu=E_\nu[E_\mu,\cos \theta_\mu] = \frac{M E_\mu - {\ensuremath{m_{\mu}}}^2/2}{M - E_\mu + |\vec{k}_\mu| \cos\theta_\mu} \ . \label{eq:E-reconstruction}$$ Here $M$ denotes the proton mass, $m_\mu$ the muon mass and $\vec{k}_\mu$ is the three-momentum of the muon in the laboratory system. However, in practice there are some difficulties. First of all, the experimental one-ring muon events (${\ensuremath{1\rm{R}\mu}}$) are not pure QE event samples. About 30$\%$ of the ${\ensuremath{1\rm{R}\mu}}$ events are 1-pion production events with unidentified or absorbed pions. For the 1-pion events Eq. (\[eq:E-reconstruction\]) would systematically underestimate the true neutrino energy [@Walter:NuInt02].
Secondly, the reconstruction of $E_\nu$ becomes more complicated when including the binding energy $\epsilon_B$ and the Fermi motion of the target nucleons $$\begin{aligned} E_\nu &=& E_\nu[E_\mu,\cos \theta_\mu,\vec{p},\epsilon_B] \\ &=& \frac{(E_p+\epsilon_B) E_\mu - (2 E_p \epsilon_B +\epsilon^2_B+ {\ensuremath{m_{\mu}}}^2)/2-\vec{p}\cdot \vec{k}_\mu} {E_p+\epsilon_B-E_\mu+|\vec{k}_\mu|\cos\theta_\mu-|\vec{p}|\cos\theta_p} \ , \nonumber \label{eq:E-reconstruction1}\end{aligned}$$ where $\vec{p}$ is the three momentum and $E_p = \sqrt{M^2 + \vec{p}^2}$ the energy of the initial nucleon. Further, $\theta_p$ is the polar angle of the target nucleon w.r.t. the direction of the incoming neutrino. Neglecting $\epsilon_B$ and the momentum $\vec{p}$, Eq. (\[eq:E-reconstruction\]) is recovered. Since the momentum $\vec{p}$ is unknown, $0 \le |\vec{p}| \le p_F$ where $p_F$ is the Fermi momentum, this will lead to an uncertainty of the reconstructed neutrino energy at given values $E_\mu$, $\cos\theta_\mu$, and $\epsilon_B$ of about -9$\%$ to +6$\%$ for a single event. Hence we see no reliable way of reconstructing the neutrino energy for the ${\ensuremath{1\rm{R}\mu}}$ sample on an event-by-event basis. On the other hand, the muon energy is a directly measurable quantity for each event. Therefore it seems to us to be a better variable than the reconstructed neutrino energy for testing the spectral distortion phenomenon. In this talk we summarize the basic ideas and the main results of [@Paschos:2003ej], where we have used kinematic considerations to predict the muon energy spectra of the QE and 1-pion resonance production events which constitute the bulk of the charged-current ${\ensuremath{\nu_\mu}}$ scattering events in the K2K experiment. These predictions can be checked with the observed muon energy spectra from the nearby detector. We also present the distortion of these muon spectra due to ${\ensuremath{\nu_\mu}}$ oscillation, which one expects to see at the SK detector.
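Both the survival probability and the reconstruction formulas above are easy to put into code. The sketch below is a minimal Python implementation; the K2K-like numbers ($L=250$ km, $\Delta m^2 = 3\times 10^{-3}\ {\rm eV}^2$), the standard conversion constant $1.267$ (for $\Delta m^2$ in ${\rm eV}^2$, $L$ in km, $E_\nu$ in GeV) and the choice of coplanar nucleon and muon momenta in the general formula are assumptions made for illustration:

```python
import math

M, MMU = 0.938, 0.1057  # proton and muon masses (GeV)

def p_mumu(e_nu, sin2_2theta=1.0, dm2=3e-3, baseline=250.0):
    """Two-flavour survival probability (dm2 in eV^2, baseline in km, e_nu in GeV)."""
    return 1.0 - sin2_2theta * math.sin(1.267 * dm2 * baseline / e_nu) ** 2

def e_nu_qe(e_mu, cos_theta):
    """QE reconstruction for a free target nucleon at rest."""
    k_mu = math.sqrt(e_mu ** 2 - MMU ** 2)
    return (M * e_mu - MMU ** 2 / 2) / (M - e_mu + k_mu * cos_theta)

def e_nu_general(e_mu, cos_theta, p, cos_theta_p, eps_b):
    """Reconstruction with binding energy eps_b and nucleon momentum p,
    taking the nucleon momentum in the muon scattering plane (an assumption)."""
    k_mu = math.sqrt(e_mu ** 2 - MMU ** 2)
    e_p = math.sqrt(M ** 2 + p ** 2)
    p_dot_k = p * k_mu * (cos_theta * cos_theta_p
                          + math.sqrt(1 - cos_theta ** 2) * math.sqrt(1 - cos_theta_p ** 2))
    num = (e_p + eps_b) * e_mu - (2 * e_p * eps_b + eps_b ** 2 + MMU ** 2) / 2 - p_dot_k
    den = e_p + eps_b - e_mu + k_mu * cos_theta - p * cos_theta_p
    return num / den

# oscillation minimum: 1.267 * dm2 * L / E = pi/2  =>  E ~ 0.6 GeV for these numbers
e_min = 1.267 * 3e-3 * 250.0 / (math.pi / 2)
```

Setting $p=\epsilon_B=0$ in `e_nu_general` recovers the simple formula exactly, while scanning $|\vec{p}|$ up to the Fermi momentum gives an event-by-event spread of the kind quoted above.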
Comparison of the predicted muon spectra with those of the observed QE and 1-pion events at the SK detector will be very useful in determining the oscillation parameters. Flux averaged muon energy spectra ================================= The flux averaged muon energy spectra for QE and 1-pion events are given by $$\big<\frac{{\rm d}\sigma^R}{{\rm d} E_\mu}\big> \equiv \int f(E_\nu) \frac{{\rm d}\sigma^R}{{\rm d} E_\mu} {\rm d} E_\nu \label{eq:xs}$$ where $f(E_\nu)$ is the neutrino flux at K2K for the nearby detector (ND) and ’R’ denotes the QE and the $\Delta$ resonance contribution to 1-pion production, which dominates the latter. Simple kinematic considerations lead to the following approximation for the flux averaged muon energy spectra, both for QE and 1-pion production [@Paschos:2003ej] $$\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>\ \simeq \sigma_{tot}^R(\overline{E_R})\ f(\overline{E_R})\ , \label{eq:app}$$ with $$\begin{aligned} \overline{E_R} &=& E_\mu + \Delta E^R = E_\mu + \begin{cases} 0.15\ \gev & \text {for \ QE}\\ 0.4\ \gev & \text {for \ 1-pion} \ . \end{cases} \label{eq:shift}\end{aligned}$$ Furthermore, it is well known that the total cross sections for QE and $\Delta$ production tend to constant values for neutrino energies of about $1\ \gev$ and $1.4\ \gev$, respectively: $\sigma_{tot}^R[E_\nu] \to N^R$. Hence, for muon energies larger than about $1.2\ \gev$, Eq. (\[eq:app\]) can be further simplified by replacing $\sigma_{tot}^R$ by its constant asymptotic value $N^R$: $$\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>\ \simeq N^R \times f(\overline{E_R}) \quad \text{for}\quad E_\mu \gtrsim 1.2\ \gev \ , \label{eq:app2}$$ with $$\begin{aligned} N^R &=& \begin{cases} 4.5\ \fb & \text {for \ QE}\\ 5.5\ \fb & \text {for \ 1-pion} \ .
\end{cases} \label{eq:norm}\end{aligned}$$ The normalizations correspond to the average cross-section per nucleon for a $H_2O$ target [@Paschos:2000be]. Thus we conclude that at large muon energies $E_\mu \gtrsim 1\ \gev$ the flux averaged muon energy cross section $\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>$ is directly proportional to the neutrino flux shifted in energy. The normalizations $N^R$ and energy shifts $\Delta E^R$ are predictions of our theoretical calculation [@Paschos:2003ej] which can be verified experimentally with the muon energy spectra of the QE and 1-pion events observed at the ND. Furthermore, Eqs. , , and also apply to the far detector (FD) if one replaces the flux at the ND by the flux at the FD which is distorted by the neutrino oscillation probability $P_{\mu\mu}(E_\nu)$: $$f(E_\nu) \to {\ensuremath{f_{\rm SK}}}(E_\nu) = f(E_\nu) \times P_{\mu\mu}(E_\nu)\ . \label{eq:fdflux}$$ Comparing these predictions with the observed muon energy spectra of the QE and 1-Pion events of the SK detector will test the spectral distortion due to ${\ensuremath{\nu_\mu}}$ oscillation and determine the oscillation parameters. Particularly, on the higher energy side of the peak the relative size of the SK to the ND cross-sections provides a direct measure of the spectral distortion and hence the underlying oscillation parameters: $$\begin{aligned} \frac{\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>_{{\rm FD}}} {\big<\frac{{\ensuremath{{\operatorname{d}}}}\sigma^R}{{\ensuremath{{\operatorname{d}}}}E_\mu}\big>_{{\rm ND}}} \simeq P_{\mu\mu}(\overline{E_R}) \quad \text{for}\quad E_\mu \gtrsim 1.2\ \gev \ . \label{eq:ratio}\end{aligned}$$ In Sec. \[sec:numeric\] we present exact calculations of the QE [@Paschos:2001np] and 1-pion production cross sections. 
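The content of the shifted-flux approximation can be illustrated numerically before turning to the exact results. In the toy check below, the muon-energy distribution at fixed $E_\nu$ is modelled by a narrow Gaussian kernel centred at $E_\mu = E_\nu - \Delta E^R$ and normalised to $N^R$; this kernel is an assumption standing in for the exact QE cross section, and the Lorentzian flux uses the parameters quoted in the next section:

```python
import numpy as np

E0, GAMMA = 1.2, 0.6             # Lorentzian flux parameters (GeV)
dE, width, NR = 0.15, 0.1, 4.5   # QE shift (GeV), assumed kernel width (GeV), N^R (fb)

def flux(e_nu):
    # unit-normalised Lorentzian model of the K2K spectrum
    return (GAMMA / np.pi) / ((e_nu - E0) ** 2 + GAMMA ** 2)

def kernel(e_mu, e_nu):
    # toy d(sigma)/dE_mu at fixed E_nu: Gaussian at E_nu - dE, total area NR
    return NR * np.exp(-((e_mu - (e_nu - dE)) ** 2) / (2 * width ** 2)) \
        / (width * np.sqrt(2 * np.pi))

e_nu = np.linspace(0.05, 6.0, 4000)
de = e_nu[1] - e_nu[0]
e_mu = np.linspace(1.0, 2.5, 16)

# flux average: integrate flux(E_nu) * kernel over E_nu for each E_mu
exact = (flux(e_nu)[None, :] * kernel(e_mu[:, None], e_nu[None, :])).sum(axis=1) * de
approx = NR * flux(e_mu + dE)    # shifted-flux approximation
rel_dev = np.max(np.abs(exact / approx - 1.0))
```

For this smooth flux the two agree to within a few percent over $E_\mu \gtrsim 1\ \gev$, which is the content of the statement that the muon spectrum is the neutrino spectrum shifted in energy.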
The dominant contribution to 1-pion production is from $\Delta$ resonance production for which we take the formalism of Refs. [@Schreiner:1973mj; @Paschos:2002mb; @eap:2003]. For completeness we have also included the $P_{11}(1440)$ and $S_{11}(1535)$ resonance contributions for which we used the parameterization and the form factors of Ref. [@Fogli:1979cz]. We estimate the contribution from still higher resonances along with the non-resonant background to be no more than $5-10\%$ of the 1-pion production cross section at these low energies (see Fig. \[fig:1\]). Therefore the accuracy of our prediction should be as good as that of the K2K experiment. We compare our exact calculations with the approximation in Eqs.  and from which the range of validity can be inferred. A comparison with the approximation in Eq.  can be found in Ref. [@Paschos:2003ej] (see Figs. 2 and 3). Numerical results {#sec:numeric} ================= ![K2K neutrino energy spectrum. The solid line is the exact spectrum normalized to unit area. The dashed line shows the spectrum approximated by the Lorentzian in Eq. (\[lorentz\]). We also show for comparison, as a dotted line, the same Lorentzian but with normalization 1.4 instead of 1.25.[]{data-label="fig:1"}](k2k_spectrum.eps){width="8.5cm"} In this section we present our numerical results. Fig. \[fig:1\] shows the K2K neutrino energy spectrum. In addition, we show the spectrum approximated by a Lorentzian, given by $$\begin{aligned} f_{\rm{L}}(E_\nu) &=& \frac{N}{\pi}\frac{\Gamma} {(E_\nu-E_0)^2+\Gamma^2} \label{lorentz} \nonumber\\ E_0 &=& 1.2 \ \gev,\ \Gamma = 0.6 \ \gev \ . \end{aligned}$$ where $N$ is an appropriate normalization factor. ![Exact predictions of the muon energy spectra for the (a) Nearby and (b) SK detectors of the K2K experiment.
The QE (solid line) and the 1-pion production (dashed line) cross-sections are shown along with the $\Delta$ resonance contribution (dotted line).[]{data-label="fig:2"}](spectrum_near.eps "fig:"){width="8.5cm"} ![Exact predictions of the muon energy spectra for the (a) Nearby and (b) SK detectors of the K2K experiment. The QE (solid line) and the 1-pion production (dashed line) cross-sections are shown along with the $\Delta$ resonance contribution (dotted line).[]{data-label="fig:2"}](spectrum_far.eps "fig:"){width="8.5cm"} Fig. \[fig:2\]a shows the predicted muon energy spectra for the QE (solid line) and 1-pion production (dashed line) processes. Clearly one can see that the peak at $E_0= 1.2\ \gev$ is shifted to the left by $\Delta E \simeq 0.15\ \gev$ for the QE and $\Delta E \simeq 0.4-0.5\ \gev$ for the $\Delta$ resonance production which dominates the 1-pion production process. The steepness of the muon energy spectra at low energies reflects the threshold rise of $\sigma^R$ and the steep neutrino flux. On the other hand, the muon energy spectra closely follow the shape of the neutrino energy spectrum on the right side of the peak. The predicted exact muon energy spectra of Fig. \[fig:2\]a agree reasonably well with the corresponding spectra of the K2K ND [@Ahn:2002up] for both the QE and the non-QE sample. In particular, one can compare the predicted QE spectrum with their simulated QE spectrum shown in Fig. 1 of Ref. [@Ahn:2002up]. Their Figs. 1a and c show separately the QE muon momentum distribution for the 1-Ring muon (${\ensuremath{1\rm{R}\mu}}$) sample of the 1KT and the QE-enhanced sample of the FGD respectively. The two play complementary roles in covering the complete muon energy range, as the 1KT and the FGD have high efficiencies at $E_\mu < 1\ \gev$ and $E_\mu \gtrsim 1\ \gev$, respectively [@Ahn:2002up].
One cannot compare our predicted muon energy spectra with these figures quantitatively without folding in these efficiency factors, which are not available to us. But there is good qualitative agreement between the predicted QE spectrum of our Fig. \[fig:2\]a and their Fig. 1c at $E_\mu \gtrsim 1\ \gev$ and Fig. 1a at $E_\mu < 1\ \gev$. While the former shows the position of the peak and the shape of the spectrum to the right, the latter shows the broadening of the spectrum down to $E_\mu \simeq 0.4\ \gev$. Similarly, one sees good agreement between the predicted muon energy spectrum of our Fig. \[fig:2\]a for 1-pion events and the non-QE spectra of their Fig. 1c,d at $E_\mu \gtrsim 1\ \gev$ and Fig. 1a at $E_\mu < 1\ \gev$. Thus one has a simple and robust prediction for the shape of the muon energy spectrum in terms of the neutrino spectrum not only for the QE events but also for the 1-pion production events, which dominate the inelastic events. Fig. \[fig:2\]b shows the corresponding muon energy spectra of the QE and 1-pion events for the SK detector, predicted by Eqs.  and . One can clearly see the distortion of the muon energy spectrum due to the ${\ensuremath{\nu_\mu}}$ oscillation. They should be compared with the observed muon energy spectra of the 1-Ring and 2-Ring muon events at the SK detector, after taking into account the pion detection efficiency. We hope such a comparison will be done by the K2K collaboration. For the estimation of the pion detection efficiency it is necessary to consider nuclear effects, i.e., Pauli blocking, nuclear absorption and charge exchange of the produced pions which can rescatter several times in the nucleus. Therefore we have included these effects following the prescription of Refs. [@Adler:1974qu; @Paschos:2000be; @Schienbein:2003sm].
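These nuclear effects drive the event-rate bookkeeping discussed next. The arithmetic can be checked in a few lines, assuming the approximate fractions quoted in the text (about 35% genuine QE and ~15% multi-pion CC events with the remainder single pion, a ~20% pion absorption loss, and ~10% of the surviving pions below detection threshold):

```python
# Bookkeeping of which charged-current events look QE-like at SK, using the
# fractions quoted in the text (all values approximate).
f_qe, f_multi = 0.35, 0.15
f_1pi = 1.0 - f_qe - f_multi                  # ~50% single-pion events
survive_absorption = 1.0 - 0.20               # ~20% of pions absorbed in the nucleus
above_threshold = 1.0 - 0.10                  # ~10% of survivors below 100 MeV
detectable_pion = survive_absorption * above_threshold   # fraction with a pion ring
qe_like = f_qe + f_1pi * (1.0 - detectable_pion)         # QE-like fraction of CC events
```

The result reproduces the ~70% detectable-pion and ~50% QE-like figures given in the text.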
Since the dominant contribution to 1-pion production processes comes from $\Delta$ resonance production on oxygen, we have evaluated the effects of nuclear absorption and rescattering on the produced pions for this case. The relevant charged current subprocesses are $\nu p \to \mu^- p \pi^+$, $\nu n \to \mu^- n \pi^+$ and $\nu n \to \mu^- p \pi^0$ with relative cross-sections $9:1:2$ for the dominant contribution from $\Delta$ resonance. Fig. \[fig:5\] shows the effects of nuclear corrections on the produced $\pi^+$ and $\pi^0$ spectra from these processes for the nearby detector averaged over the neutrino spectrum. The results are very similar for the SK detector. Nuclear rescattering effects result in enhancing the $\pi^0$ events at the cost of the dominant $\pi^+$ component. Taken together, we see a nearly $20 \%$ drop in the rate of 1-pion events due to nuclear absorption of the produced pion. Moreover, about $10 \%$ of the remaining events correspond to the pion momentum being less than the Cerenkov threshold of $100\ \mev$. Therefore one expects about $70\%$ of the $\Delta$ events to give a detectable pion ring at the SK detector, while the remaining $30\%$ appear like QE events. Adding the latter to the $35 \%$ of genuine QE events would imply that about $50 \%$ of the CC events will appear QE-like at the SK detector. Alternatively, the observed muon energy spectrum of the sum of 1-Ring and 2-Ring muon events could be compared with the predicted spectrum of the sum of QE and 1-pion events. Although a part of the 2-Ring events may come from multi-pion production, the resulting error may be small since multi-pion events at the ND constitute only $\sim 15 \%$ of CC events. ![The momentum distribution of the decay pion for charged current resonance production in oxygen with (solid line) and without nuclear correction (dotted line).
The dashed line takes only into account the effect of Pauli blocking.[]{data-label="fig:5"}](ppi_spectra_ppip_o.eps "fig:"){width="8.5cm"} ![The momentum distribution of the decay pion for charged current resonance production in oxygen with (solid line) and without nuclear correction (dotted line). The dashed line takes only into account the effect of Pauli blocking.[]{data-label="fig:5"}](ppi_spectra_ppi0_o.eps "fig:"){width="8.5cm"} ![The predicted exact muon energy spectra (solid lines) and the approximation (dashed lines) according to Eq.  for QE and 1-pion production for the (a) Nearby detector and (b) SK detector of the K2K experiment.[]{data-label="fig:3"}](app_spec_nearby.eps "fig:"){width="8.5cm"} ![The predicted exact muon energy spectra (solid lines) and the approximation (dashed lines) according to Eq.  for QE and 1-pion production for the (a) Nearby detector and (b) SK detector of the K2K experiment.[]{data-label="fig:3"}](app_spec_far.eps "fig:"){width="8.5cm"} Next, we turn in Fig. \[fig:3\] to a comparison of the exact calculation for the muon energy spectra (solid lines) with the approximation (dashed lines) made on the right hand side of Eq. . Fig. \[fig:3\]a shows the results for the ND and Fig. \[fig:3\]b for the FD obtained with the neutrino flux specified in Eq. . One can see that the approximation is in perfect agreement with the exact calculation for muon energies $E_\mu \ge 1.2\ \gev$ for single pion production and $E_\mu \ge 1\ \gev$ for QE scattering. Hence, in this region the muon energy spectra are directly proportional to the neutrino flux shifted in energy. ![image](ratio_near_far.eps){width="8.5cm"} ![image](ratio_near_far_pion.eps){width="8.5cm"} Finally, in Figs. \[fig:4\]a and \[fig:4\]b we show the far-near-ratio of the muon energy spectra for QE and 1-pion production, respectively. In this case, the dashed lines depict the exact result and the solid lines the oscillation probabilities $P_{\mu\mu}$ given in Eq.  
which have been evaluated at the shifted energies $\overline{E_R}$ given in Eq. (\[eq:shift\]). Again, at $E_\mu \ge 1\ \gev$ (QE) and $E_\mu \ge 1.2\ \gev$ (1-pion production) the approximation in Eqs.  and works well and a measurement of the far-near-ratio of such pure QE or 1-pion event samples in this kinematic region would give direct access to the oscillation probability. However, as has been discussed above, the observable 1-Ring and 2-Ring muon events are superpositions of QE, 1-pion, and multi-pion events, making the analysis more complicated. Conclusions =========== The muon energy spectra of QE and 1-pion events provide a complementary approach to experimental extractions of the atmospheric neutrino oscillation parameters at K2K. The results are based on quite general kinematic considerations and will also be applicable to other future long-baseline experiments like J2K, MINOS and the CERN-Gran Sasso experiments, which plan to use low-energy ${\ensuremath{\nu_\mu}}$ beams [@Itow:NuInt01; @Lipari:NuInt01]. Therefore it will be very useful to extend this analysis to the beam energy spectra and the target nuclei of these experiments. [**[Acknowledgment]{}**]{}\ J.-Y. Yu wishes to thank the organizers of the ICFP03 in Seoul for the kind invitation and financial support.
--- author: - 'Tim Adamo,' - Eduardo Casali - '& Stefan Nekovar' bibliography: - 'biblio.bib' title: Ambitwistor string vertex operators on curved backgrounds --- Introduction ============ Ambitwistor strings [@Mason:2013sva; @Berkovits:2013xba] have many surprising properties; while much attention has rightly been paid to their utility for computing scattering amplitudes, they can also be defined on non-linear background fields [@Adamo:2014wea; @Adamo:2018hzd]. On such curved backgrounds the ambitwistor string is described by a chiral worldsheet CFT with free OPEs, which allows for many *exact* computations in these backgrounds, in stark contrast to conventional string theories where an expansion in the inverse string tension is needed (cf., [@Fradkin:1985ys; @Callan:1985ia; @Abouelsaood:1986gd]). For instance, the fully non-linear equations of motion for NS-NS supergravity [@Adamo:2014wea] and gauge theory [@Adamo:2018hzd] emerge as exact worldsheet anomaly cancellation conditions, and ambitwistor strings have been used to compute 3-point functions on gravitational and gauge field plane wave backgrounds [@Adamo:2017sze], correctly reproducing results found with ‘standard’ space-time techniques [@Adamo:2017nia]. Thus far, only an RNS formalism for the ambitwistor string has been shown to be quantum mechanically consistent at the level of the worldsheet. While pure spinor and Green-Schwarz versions of the ambitwistor string (or deformations thereof) have been defined on curved backgrounds [@Chandia:2015sfa; @Chandia:2015xfa; @Azevedo:2016zod; @Chandia:2016dwr], it is not clear that they are anomaly-free since only classical worldsheet calculations have been done in these frameworks. In this paper we study the heterotic and type II ambitwistor strings in the RNS formalism, at the expense of only working with NS-NS backgrounds.
These backgrounds will be non-linear, and generic apart from constraints imposed by nilpotency of the BRST operator (i.e., anomaly cancellation): the Yang-Mills equations in the heterotic case and the NS-NS supergravity equations in the type II case. For each of these models, we construct vertex operators in the $(-1,-1)$ picture for all NS-NS perturbations of the backgrounds and investigate the constraints imposed on the operators by BRST closure. In the heterotic model we consider only one such vertex operator whose BRST closure imposes the linearised gluon equations of motion (as well as gauge-fixing conditions) on the perturbation around a Yang-Mills background. In the type II model we consider three vertex operator structures, corresponding to symmetric rank-two tensor, anti-symmetric rank-2 tensor, and scalar perturbations. With a background metric (obeying the vacuum Einstein equations), BRST closure fixes the two tensorial perturbations to be a linearised graviton and $B$-field respectively. On a general NS-NS background (composed of a non-linear metric, $B$-field and dilaton), the three structures are combined into a single vertex operator, whose BRST closure imposes the linearised supergravity equations of motion on the perturbations. We comment on the descent procedure for obtaining vertex operators in picture number zero, as well as the prospects for obtaining integrated vertex operators. We also mention some unresolved issues regarding the GSO projection in curved background fields. Heterotic ambitwistor string ============================ As a warm up we first describe the vertex operator for a gluon in the heterotic ambitwistor string on a generic Yang-Mills background field since the calculations here are mostly straightforward. 
This model was defined in a gauge background in [@Adamo:2018hzd]; as usual for ambitwistor strings, the worldsheet action is free $$\begin{aligned} \label{wsa2} S=\frac{1}{2\,\pi}\int_{\Sigma}\Pi_{\mu}\,\dbar X^{\mu}+\frac{1}{2}\,\Psi_{\mu}\,\dbar\Psi^{\mu} +S_{C}\,,\end{aligned}$$ where $\Sigma$ is a closed Riemann surface and $S_{C}$ is the action for a holomorphic current algebra for some gauge group. The bosonic field $X^\mu$ is a worldsheet scalar, and $\Pi_{\mu}$ is its spin $1$ conjugate. The real fermions $\Psi^{\mu}$ are spin $\frac{1}{2}$ fields on the worldsheet. The action implies free OPEs for the worldsheet fields, along with the usual OPE for a holomorphic worldsheet current algebra: $$\begin{aligned} \begin{split}\label{OPEs} &X^{\mu}(z)\,\Pi_{\nu}(w)\sim \frac{\delta^{\mu}_{\nu}}{z-w}\,, \qquad \Psi^{\mu}(z)\,\Psi^{\nu}(w)\sim \frac{\eta^{\mu\nu}}{z-w}\,,\\ &j^{{\mathsf{a}}}(z)\,j^{\mathsf{b}}(w)\sim \frac{k\,\delta^{\mathsf{ab}}}{(z-w)^2} + \frac{f^{\mathsf{abc}}\,j^{\mathsf{c}}(w)}{z-w}\,, \end{split}\end{aligned}$$ where $\eta_{\mu\nu}$ is the $d$-dimensional Minkowski metric, $k$ is the level of the current algebra, and $f^{\mathsf{abc}}$ are the structure constants of the gauge group. At the level of the worldsheet fields, dependence on a background gauge field enters through the non-standard gauge transformations of the field $\Pi_{\mu}$. From now on we take the $k\rightarrow 0$ limit to decouple gravitational degrees of freedom from the model [@Berkovits:2004jj; @Adamo:2018hzd]. In addition to the stress-energy tensor $T$, two other (holomorphic) currents are gauged: one is fermionic of spin $\frac{3}{2}$ while the other is bosonic of spin $2$.
These currents depend explicitly on the background gauge field $A_{\mu}^{{\mathsf{a}}}$; the spin $\frac{3}{2}$ current is $$\begin{aligned} \label{Gcurr} \mathsf{G}=\Psi^{\mu}\left(\Pi_{\mu}-A^{\mathsf{a}}_{\mu}\,j^{\mathsf{a}}\right)\,,\end{aligned}$$ and the spin $2$ current is $$\begin{aligned} \label{Hcurr} \mathsf{H} = \Pi^2 - 2\, \Pi^\mu A_{\mu}^{\mathsf{a}} j^\mathsf{a} + A_\mu^\mathsf{a} A^{\mu \mathsf{b}} j^\mathsf{a} j^\mathsf{b} + \Psi^\mu \Psi^\nu F_{\mu\nu}^\mathsf{a} j^\mathsf{a} - \partial\left( \partial_\mu A^{\mu \mathsf{a}} j^\mathsf{a} \right) + f^{\mathsf{a}\mathsf{b}\mathsf{c}} j^\mathsf{c} A^{\mu \mathsf{b}} \partial A_\mu^\mathsf{a}\,.\end{aligned}$$ Here $F_{\mu\nu}^\mathsf{a}$ is the field strength of $A_\mu^{\mathsf{a}}$. It is straightforward to show that these currents obey $$\begin{aligned} \label{Hcurr0} \mathsf{G}(z)\,\mathsf{G}(w)\sim \frac{\mathsf{H}}{z-w}\,,\end{aligned}$$ without any conditions on the background field. Constraints on $A_{\mu}^{{\mathsf{a}}}$ emerge by requiring the gauging of the currents $\mathsf{G}$ and $\mathsf{H}$ to be quantum mechanically consistent on the worldsheet. Indeed, this gauging leads to the modification of the worldsheet action by ghost systems $$\begin{aligned} \label{hghosts} S\,\to\,S+\frac{1}{2\pi}\int_{\Sigma}b\,\dbar c+\tilde{b}\,\dbar\tilde{c}+\beta\,\dbar\gamma\,,\end{aligned}$$ and an associated BRST charge $$\begin{aligned} \label{hBRST} Q=\oint c\,T +bc\,\partial c+\gamma\,\mathsf{G}+\tilde{c}\,\mathsf{H}+\tilde{b}\,\gamma^2\,,\end{aligned}$$ for $T$ the full stress-energy tensor (including all ghost and current algebra contributions, except the $(b,c)$ system) and all expressions assumed to be normal-ordered. Here $(b,c)$ are the fermionic ghosts associated to gauging holomorphic worldsheet gravity, $(\beta,\gamma)$ are the bosonic ghosts associated to gauging $\mathsf{G}$, and $(\tilde{b},\tilde{c})$ are the fermionic ghosts associated to gauging $\mathsf{H}$. Both $c,\tilde{c}$ are spin $-1$ while $\gamma$ is spin $-\frac{1}{2}$. Requiring $Q^2=0$ gives the anomaly cancellation conditions for the theory.
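Since $T$ retains its free-field form even in the gauge background, the conformal anomaly can be tallied exactly as in the flat-space heterotic model of [@Mason:2013sva]; the bookkeeping below uses the standard central charges of weight-$\lambda$ first-order systems and is a consistency check rather than a new computation:

```latex
% Central charges of the constituent systems:
%   (X^\mu,\Pi_\mu):            bosonic, weights (0,1)    -> +2 per dimension
%   \Psi^\mu:                   real fermions, weight 1/2 -> +1/2 per dimension
%   (b,c), (\tilde b,\tilde c): fermionic, \lambda = 2    -> -26 each
%   (\beta,\gamma):             bosonic, \lambda = 3/2    -> +11
\begin{aligned}
  c_{\rm tot} = 2d + \frac{d}{2} + c_{j} - 26 - 26 + 11
              = \frac{5d}{2} + c_{j} - 41\,,
\end{aligned}
% so c_tot = 0 requires a current algebra with c_j = 41 - 5d/2,
% e.g. c_j = 16 in d = 10.
```

This is the same balance found for the flat-space heterotic ambitwistor string, consistent with the statement below that the conformal anomaly constrains the dimension and current algebra but not $A_{\mu}^{\mathsf{a}}$.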
The holomorphic conformal anomaly – controlled entirely through $T$ – constrains the space-time dimension in terms of the central charge of the current algebra, but puts no restrictions on $A_{\mu}^{{\mathsf{a}}}$. However, the $\{\mathsf{G},\mathsf{H}\}$ algebra is also anomalous unless it closes: $\mathsf{G}(z)\mathsf{H}(w)\sim 0$. This requirement *does* constrain the background gauge field: $$\begin{aligned} \mathsf{G}(z)\mathsf{H}(w)\sim 0 \iff D_{[\mu}F_{\nu\alpha]}^\mathsf{a}=0=D^\mu F_{\mu\nu}^\mathsf{a}\,,\end{aligned}$$ where $D=\partial + A$ is the covariant derivative. These equations are the usual Bianchi identity obeyed by the field strength and the Yang-Mills equations. As expected, vanishing of BRST anomalies imposes on-shell conditions on the background fields. Gluon vertex operator {#glVO} --------------------- Our goal is now to describe perturbations of the Yang-Mills background $A_{\mu}^{{\mathsf{a}}}$ at the level of vertex operators in the worldsheet CFT. Let $a_\mu^{\mathsf{a}}(X)$ be a perturbation of the background. A natural ansatz for an associated vertex operator in the ‘fixed’ picture (i.e., picture number $-1$) is $$\begin{aligned} V = c\tilde{c}\, \delta(\gamma)\, \Psi^\mu\, a_\mu^\mathsf{a}\, j^\mathsf{a}\,.\end{aligned}$$ This is an admissible vertex operator if it is annihilated by the BRST operator $Q$. Since $V$ is a conformal primary of spin zero, the only interesting contributions to $QV$ come from higher poles in OPEs with the currents $\mathsf{G}$ and $\mathsf{H}$.
Using the free OPEs (\[OPEs\]), it is straightforward to show that $$\begin{aligned} \mathsf{G}(z)V(w)\sim - \frac{c\tilde{c}\, \delta(\gamma)\,D^\mu a_\mu^\mathsf{a}\,j^{{\mathsf{a}}}(w)}{(z-w)^2}+\cdots\,,\end{aligned}$$ and $$\begin{aligned} \mathsf{H}(z)V(w)\sim \frac{c\tilde{c}\, \delta(\gamma)\,\Psi^\nu j^{{\mathsf{a}}}}{(z-w)^2}\left(D^\mu D_\mu a_\nu^\mathsf{a} + 2 f^{\mathsf{a}\mathsf{b}\mathsf{c}} a^{\mathsf{b} \mu} F^\mathsf{c}_{\mu \nu}\right)(w) + \cdots\,,\end{aligned}$$ where the $+\cdots$ represent single pole terms in the OPE which will not contribute to the action of the BRST charge. In particular, these OPEs indicate that $$\begin{aligned} \label{gQV} QV=-\partial\gamma\,c\tilde{c}\,\delta(\gamma)\,D^{\mu}a_{\mu}^{\mathsf{a}}\,j^{\mathsf{a}}+\partial\tilde{c}\,c\tilde{c}\,\delta(\gamma)\,\Psi^{\nu}\left(D^{\mu}D_{\mu}a_{\nu}^{\mathsf{a}}+2f^{\mathsf{abc}}a^{\mathsf{b}\mu}F^{\mathsf{c}}_{\mu\nu}\right)j^{\mathsf{a}}\,.\end{aligned}$$ So requiring $QV=0$ imposes the Lorenz gauge condition ($D^{\mu}a_{\mu}^{{\mathsf{a}}}=0$) as well as the linearised Yang-Mills equations $$\begin{aligned} \label{linYM} D^{\mu}D_{\mu}a_{\nu}^{\mathsf{a}}+2f^{\mathsf{abc}}a^{\mathsf{b}\mu}F^{\mathsf{c}}_{\mu\nu}=0\end{aligned}$$ on the perturbation. In other words, the vertex operator lies in the BRST cohomology if and only if $a_{\mu}^{{\mathsf{a}}}$ describes an on-shell gluon fluctuation on the non-linear Yang-Mills background. The standard descent procedure (cf., [@Friedan:1985ge; @Verlinde:1987sd; @Witten:2012bh]) can be used to obtain the gluon vertex operator in zero picture number. To do this, we simply use the standard picture changing operator $\delta(\beta)\mathsf{G}$ to get $$\begin{aligned} c\tilde{c}U(w) & =\lim_{z\rightarrow w}\delta(\beta)\mathsf{G}(z)\,V(w) \\ & =c\tilde{c}\left(\Psi^\mu\Psi^\nu D_\nu a^\mathsf{a}_\mu j^\mathsf{a}+(\Pi^\mu-\mathsf{A}^{\mu\mathsf{a}}j^{\mathsf{a}}) a_\mu^{\mathsf{b}}j^{\mathsf{b}}-f^{\mathsf{abc}}a_{\mu}^{\mathsf{b}}\,j^{\mathsf{c}}\,\partial A^{\mu\mathsf{a}}\right)(w)\,. \label{Dgluon}\end{aligned}$$ An equivalent way to derive $U(w)$ is by linearising the current $\mathsf{H}$ around a Yang-Mills background, keeping in mind that the perturbation $a_{\mu}^{{\mathsf{a}}}$ obeys the Lorenz gauge condition.
Further descent into an integrated vertex operator using the $b$-ghost and the stress-energy tensor can be carried out as in the usual string. How to perform the descent using the $\tilde{b}$-ghost and $\mathsf{H}$ current remains an open question, although it is well-known how to do so in a flat background [@Mason:2013sva; @Adamo:2013tsa; @Ohmori:2015sha]. Type II ambitwistor string ========================== We now move on to the type II ambitwistor string on a curved NS-NS background composed of a metric $g_{\mu\nu}$, $B$-field $B_{\mu\nu}$ and dilaton $\Phi$. This model was defined in [@Adamo:2014wea] with worldsheet action $$\begin{aligned} \label{IIwsa} S=\frac{1}{2\pi}\int_{\Sigma}\Pi_{\mu}\,\bar\partial X^{\mu}+\bar\psi_{\mu}\,\bar\partial\psi^{\mu}+\frac{R_{\Sigma}}{4}\,\log\!\left(e^{-2\Phi}\sqrt{g}\right),\end{aligned}$$ where $(\psi^{\mu},\bar\psi_{\nu})$ is a complex fermion system of spin $\frac{1}{2}$. The final term, proportional to the worldsheet curvature $R_{\Sigma}$, is required to ensure quantum mechanical diffeomorphism invariance, but does not affect local calculations (such as OPEs) since this curvature can always be set to zero in a small neighborhood on the worldsheet. Thus, the OPEs between worldsheet fields remain free and independent of the background fields: $$\begin{aligned} \label{OPE} X^{\mu}(z)\,\Pi_{\nu}(w)\sim \frac{\delta^{\mu}_{\nu}}{z-w}\,, \qquad \psi^{\mu}(z)\,\bar\psi_{\nu}(w)\sim \frac{\delta^{\mu}_{\nu}}{z-w}\,,\end{aligned}$$ although $\Pi_{\mu}$ does not transform covariantly under a space-time diffeomorphism [@Adamo:2014wea]. The type II model features the gauging of three additional currents, as well as the holomorphic stress-energy tensor. 
Two of these are spin $\frac{3}{2}$ fermionic currents, $$\begin{aligned} \label{GeneralG} \mathcal{G}=&\psi^\mu\Pi_\mu + \partial(\psi^\mu\Gamma^\kappa{}_{\mu\kappa})-2\partial(\psi^\mu\partial_\mu\Phi)+\frac{1}{3!}\psi^\mu\psi^\nu\psi^\kappa H_{\mu\nu\kappa}\,,\\ \label{GeneralGbar} \bar{\mathcal{G}}=&g^{\mu\nu}\bar\psi_\nu(\Pi_\mu-\Gamma^\kappa{}_{\mu\lambda}\bar\psi_\kappa\psi^\lambda) - g^{\mu\nu}\partial(\bar\psi_\kappa\Gamma^\kappa{}_{\mu\nu})-2\partial(g^{\mu\nu}\bar\psi_\mu\partial_\nu\Phi)+\frac{1}{3!}\bar\psi_\mu\bar\psi_\sigma\bar\psi_\lambda H^{\mu\sigma\lambda}\,,\end{aligned}$$ where $\Gamma^{\kappa}_{\mu\nu}$ are the Christoffel symbols of $g_{\mu\nu}$ and $H_{\mu\nu\sigma}$ is a background three-form. The third current is bosonic of spin $2$, given by[^1] $$\begin{aligned} \begin{split} \label{MostGeneralH} \mathcal{H}=& g^{\mu\nu}\left(\Pi_\mu-\Gamma^\kappa{}_{\mu\lambda}\bar\psi_\kappa\psi^\lambda\right)\left(\Pi_\nu-\Gamma^\kappa{}_{\nu\lambda}\bar\psi_\kappa\psi^\lambda\right) -\frac{1}{2}R^{\kappa\lambda}{}_{\mu\nu}\bar\psi_\kappa\bar\psi_\lambda\psi^\mu\psi^\nu \\ & - g^{\mu \nu} \partial\left( \Pi_\rho \Gamma^\rho_{\mu \nu} \right) -\bar\psi_\kappa\partial\psi^\lambda g^{\mu\nu}\partial_\lambda\Gamma^\kappa{}_{\mu\nu} + \psi^\mu \partial_\mu \left( g^{\rho \sigma} \partial (\bar{\psi}_\kappa \Gamma^\kappa_{\rho \sigma} )\right) \\ &+\frac{1}{2} g^{\mu\nu} H_{\mu \kappa \lambda} \psi^\kappa \psi^\lambda \left(\Pi_\nu - \Gamma^ \rho_{\nu\sigma} \bar{\psi}_\rho \psi^\sigma \right) + \frac{1}{2} \left( \Pi_\mu - \Gamma^\kappa_{\mu\lambda} \bar{\psi}_\kappa \psi^\lambda \right) H_\nu^{\;\; \rho \sigma} \bar{\psi}_\rho \bar{\psi}_\sigma \\ &+\frac{1}{4} g^{\mu\nu} H_{\mu\kappa\lambda} \psi^\kappa \psi^\lambda H_\nu^{\;\; \rho\sigma} \bar{\psi}_\rho \bar{\psi}_\sigma -\frac{1}{3!} \psi^\mu \bar{\psi}_\nu \bar{\psi}_\kappa \bar{\psi}_\lambda \nabla_\mu H^{\nu\kappa\lambda} - \frac{1}{3!} \bar{\psi}_\mu \psi^\nu \psi^\kappa \psi^\lambda \nabla^\mu 
H_{\nu\kappa\lambda} \\ &+ \frac{1}{2} H^{\mu\nu\kappa} \bar{\psi}_\kappa \partial \left( H_{\mu\nu\lambda} \psi^\lambda\right) + \partial \left(H_{\kappa \lambda \nu} \psi^\nu \right) g^{\kappa \sigma} \Gamma^\lambda_{\sigma \rho} \psi^\rho \,- \partial \left(H_{\kappa \lambda \nu} \psi^\nu g^{\kappa \sigma} \Gamma^\lambda_{\sigma \rho} \psi^\rho \right) \\ & - \frac{1}{2} \partial_\sigma H_{\mu\nu\rho} \psi^\nu \psi^\rho \partial g^{\sigma\mu} - \frac{1}{12} H^{\mu \nu \rho} \partial^2 H_{\mu \nu \rho} + \frac{1}{2} \Gamma^\rho_{\mu\nu} H_{\sigma \lambda \rho} \psi^\sigma \psi^\lambda \partial g^{\mu \nu} \\ &-2 \partial \left(g^{\mu \nu} \Pi_\mu \partial_\nu \Phi \right) - \partial \left(\bar{\psi}_\kappa \psi^\lambda ( 2 \nabla^\kappa \partial_ \lambda \Phi -2 g^{\mu\nu} \Gamma^\kappa_{\mu \lambda} \partial_\nu \Phi ) \right). \end{split}\end{aligned}$$ These currents are covariant with respect to target space diffeomorphisms and conformal primaries of the worldsheet CFT. This is despite the fact that they contain various terms which do not appear to be manifestly covariant, due to the requirement of normal-ordering on the worldsheet. Gauging these currents along with holomorphic worldsheet gravity leads to a BRST operator $$\begin{aligned} \label{IIcQ} Q=\oint c\,T +bc\partial c+ \frac{\tilde{c}}{2}\,\cH + \bar{\gamma}\,\cG +\gamma\,\bar{\cG}-2\gamma\bar{\gamma}\tilde{b}\,,\end{aligned}$$ where the $(b,c)$, $(\tilde{b},\tilde{c})$, $(\beta,\gamma)$ ghost systems have the same quantum numbers as in the heterotic case, and $(\bar{\beta},\bar{\gamma})$ have the same quantum numbers as their un-barred cousins (i.e., they are bosonic and $\bar{\gamma}$ has spin $-\frac{1}{2}$). 
The stress tensor can be broken into matter and ghost contributions $T=T_{\mathrm{m}}+T_{\mathrm{gh}}$, with $$\begin{aligned} \label{stress_tensor} T_\mathrm{m}= -\Pi_\mu\partial X^\mu -\frac{1}{2}(\bar\psi_\mu\partial\psi^\mu+\psi^\mu\partial\bar\psi_\mu)-\frac{1}{2}\partial^2\log(e^{-2\Phi}\sqrt{g})\end{aligned}$$ for the matter fields and $$\begin{aligned} T_{\mathrm{gh}} = \tilde{c}\partial \tilde{b} - 2\tilde{b}\partial \tilde{c} - \frac{3}{2}\beta\partial\gamma - \frac{1}{2}\gamma\partial\beta - \frac{3}{2}\bar\beta\partial\bar\gamma - \frac{1}{2}\bar\gamma\partial\bar\beta\end{aligned}$$ for the ghost fields, where we again exclude the $(b,c)$ system. As in the heterotic model, $Q^2=0$ is obstructed by a conformal anomaly and anomalies related to the gauged currents – in this case $\{\cG,\bar\cG,\cH\}$. The conformal anomaly imposes no constraints on the background fields and is eliminated by selecting the critical space-time dimension $d=10$. The other anomalies vanish if the algebra of currents is quantum mechanically closed: $$\begin{aligned} \label{curalg} \cG(z)\,\cG(w)\sim 0\sim\bar{\cG}(z)\,\bar{\cG}(w)\,, \qquad \cG(z)\,\bar{\cG}(w)\sim\frac{\cH(w)}{z-w}\,,\end{aligned}$$ and these conditions impose constraints on the background fields. The requirement that the $\cG(z)\cG(w)$ and $\bar{\cG}(z)\bar{\cG}(w)$ OPEs be non-singular imposes $$\begin{aligned} \label{bianchi1} \partial_{[\mu}H_{\nu\rho\sigma]}=0\,, \qquad R_{[\mu\nu\rho]\sigma}=0\,, \qquad R_{(\mu\nu)\rho\sigma}=0\,,\end{aligned}$$ which are the usual Bianchi identities and symmetries of the Riemann tensor of the background metric, along with ${\mathrm{d}}H=0$. This latter statement indicates that (locally) $H={\mathrm{d}}B$; that is, $H$ arises as the field strength of a background $B$-field. 
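The arithmetic behind the critical dimension $d=10$ quoted above can be made explicit. This is a sketch assuming the textbook central charge $c=\mp 2(6\lambda^{2}-6\lambda+1)$ for an anticommuting/commuting first-order system of weights $(\lambda,1-\lambda)$, applied to the field content of this model: the bosonic $(\Pi,X)$ system and the two $\beta\gamma$-type systems contribute positively, the complex fermions and the two $bc$-type systems as usual.

```python
# Central charge of a first-order system with weights (lam, 1 - lam):
# c = -2(6 lam^2 - 6 lam + 1) if anticommuting, opposite sign if commuting
# (standard free-CFT result, assumed here).
def first_order_c(lam, fermionic):
    c = -2 * (6 * lam ** 2 - 6 * lam + 1)
    return c if fermionic else -c

def type_II_anomaly(d):
    return (
        d * first_order_c(1, False)      # (Pi, X): +2 per dimension
        + d * first_order_c(0.5, True)   # complex fermions: +1 per dimension
        + 2 * first_order_c(2, True)     # (b, c) and (b-tilde, c-tilde): -26 each
        + 2 * first_order_c(1.5, False)  # (beta, gamma) and barred copy: +11 each
    )

print(type_II_anomaly(10))  # vanishes precisely in d = 10
```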
Dynamical constraints on the background fields emerge from the final closure requirement of , which imposes $$\begin{aligned} R+4\nabla_\mu\nabla^\mu\Phi-4\nabla_\mu\Phi\nabla^\mu\Phi-\frac{1}{12}H^2 & =0\,,\nonumber\\ R_{\mu\nu}+2\nabla_\mu\nabla_\nu\Phi-\frac{1}{4}H_{\mu\rho\sigma}H_\nu{}^{\rho\sigma} & =0\,,\label{SugraEOM}\\ \nabla_\kappa H^\kappa{}_{\mu\nu}-2H^\kappa{}_{\mu\nu}\nabla_\kappa\Phi & =0\,.\nonumber\end{aligned}$$ These are precisely the field equations for the NS-NS sector of type II supergravity, so vanishing of BRST anomalies enforces the appropriate equations of motion on the background fields. Graviton vertex operator ------------------------ To begin, consider the type II model with *only* a background metric $g_{\mu\nu}$ turned on, and let $h_{\mu\nu}(X)$ be a symmetric, traceless perturbation of this metric. A fixed picture vertex operator associated to this perturbation is given by $$\begin{aligned} \label{gravityop} V_{h}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\,\cO_{h}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\left(\bar{\psi}_\mu\psi^\nu h^\mu{}_\nu-\frac{1}{2}(\partial g_{\mu\nu})h^{\mu\nu}\right).\end{aligned}$$ Note that this contains a quantum correction term proportional to a worldsheet derivative. While this quantum correction vanishes for flat or certain highly symmetric backgrounds (e.g., a plane wave metric written in Brinkmann coordinates [@Adamo:2017sze]), it plays a crucial role on a general background. For $V_{h}$ to be an admissible vertex operator, it must be annihilated by the BRST operator . Since $V_h$ is a conformal primary of spin 0 on the worldsheet, any potential obstructions to its $Q$-closure arise from OPEs between the operator $\cO_h$ and the currents , and with $H_{\mu\nu\rho}=0=\Phi$. 
One finds: $$\begin{aligned} &\mathcal{G}(z)\,\cO_{h}(w)\sim-\frac{\psi^\nu\,\nabla_{\mu} h^{\mu}{}_{\nu}}{(z-w)^2}(w)+\cdots\,, \\ &\bar{\mathcal{G}}(z)\,\cO_{h}(w)\sim\frac{g^{\rho\sigma}\bar{\psi}_\mu\,\nabla_\rho h^\mu{}_\sigma}{(z-w)^2}(w)+\cdots\,,\end{aligned}$$ and $$\begin{gathered} \label{gback1} \frac{\mathcal{H}(z)}{2}\,\cO_h(w)\sim \frac{h^{\mu\nu}R_{\mu\nu}}{(z-w)^3}(w)+\frac{\bar\psi_\alpha\psi^\beta}{2\,(z-w)^2}\left(\nabla_{\kappa}\nabla^{\kappa} h^{\alpha}_{\beta} - 2R^{\alpha}{}_{\sigma \gamma \beta} h^{\sigma \gamma} \right. \\ \left. -R^{\sigma\alpha} h_{\sigma \beta} -R^{\sigma}{}_{\beta} h^{\alpha}_{\sigma}+2h^\lambda{}_\beta R_\alpha{}_\lambda\right)(w) +\frac{\partial X^\gamma}{(z-w)^2}\left(\frac{1}{2}h_{\mu\nu}\partial_\gamma R^{\mu\nu}\right. \\ \left.+\frac{1}{4}\partial_\gamma g^{\mu\nu}(\nabla_{\alpha}\nabla^{\alpha} h_{\mu \nu} - 2 R_{\mu \alpha \beta \nu} h^{\alpha \beta} -R^\lambda_{\; \mu} h_{\lambda \nu} -R^\lambda_{\; \nu} h_{ \mu \lambda})\right)(w)+\cdots\,,\end{gathered}$$ where the $+\cdots$ stand for terms which do not contribute to the action of the BRST operator. 
Since the background metric obeys the vacuum Einstein equations ($R_{\mu\nu}=0$), these OPEs imply that $$\begin{gathered} \label{ggraviton} QV_{h}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\bigg[\partial\gamma\,\bar{\psi}_\mu\,\nabla^{\nu} h^\mu{}_\nu-\partial\bar\gamma\, \psi^\nu\,\nabla_{\mu} h^{\mu}{}_{\nu} \\ \left.+\frac{\partial\tilde{c}\,\bar\psi_\mu\psi^\nu}{2}\left(\nabla_{\alpha}\nabla^{\alpha} h^{\mu}_{\nu} - 2R^{\mu}{}_{\alpha \beta \nu} h^{\alpha \beta}\right) +\frac{\partial\tilde{c}\,\partial g^{\mu\nu}}{4}\,\left(\nabla_{\alpha}\nabla^{\alpha} h_{\mu \nu} - 2 R_{\mu \alpha \beta \nu} h^{\alpha \beta}\right)\right]\,.\end{gathered}$$ Thus, the OPEs between the vertex operator and the currents $\cG$, $\bar{\cG}$ impose the de Donder gauge condition $$\begin{aligned} \label{deDonder} \nabla^{\mu}h_{\mu\nu}=0\,,\end{aligned}$$ which is consistent with expectations from the flat background case [@Mason:2013sva]. The OPE between the vertex operator and the current $\cH$ leads to the linearised Einstein equation for a metric perturbation on a vacuum Einstein background: $$\begin{aligned} \label{linEin} \nabla_{\alpha}\nabla^{\alpha}h_{\mu\nu}-2R_{\mu\alpha\beta\nu}\,h^{\alpha\beta}=0\,.\end{aligned}$$ In other words, requiring $QV_{h}=0$ imposes precisely the physical gauge-fixing and linearised equation of motion for a graviton on the perturbation $h_{\mu\nu}$. What happens when the background $B$-field and dilaton are switched on? Keeping the form for the vertex operator, it remains to check the action of the *full* (i.e., with $g_{\mu\nu}$, $H_{\mu\nu\rho}$ and $\Phi$) BRST operator on $V_h$. The additional background fields do not change the fact that $QV_{h}$ is governed entirely by the OPEs between $\cO_h$ and the currents , and , although these OPEs are now substantially more complicated. 
One finds that $$\begin{aligned} &\mathcal{G}(z)\,\cO_{h}(w)\sim-\frac{\psi^\nu}{(z-w)^2}\left(\nabla_\mu h^\mu{}_\nu-2h^\mu{}_\nu\partial_\mu\Phi\right)+\cdots \,,\\ &\bar{\mathcal{G}}(z)\,\cO_{h}(w)\sim\frac{g^{\rho\sigma}\bar{\psi}_\mu}{(z-w)^2}\left(\nabla_\rho h^\mu{}_\sigma-2h^\mu{}_\rho\partial_\sigma\Phi\right)+\cdots\,,\end{aligned}$$ while the OPE between $\cH$ and $\cO_h$ is $$\begin{gathered} \frac{\cH(z)}{2}\,\cO_{h}(w)\sim \frac{h^{\mu\nu}}{(z-w)^3}\left(R_{\mu\nu}+2\nabla_\mu\nabla_\nu\Phi-\frac{1}{4}H_{\mu\rho\sigma}H_\nu{}^{\rho\sigma}\right) \\ +\frac{\bar\psi_{\alpha}\psi^{\beta}}{(z-w)^2}\left[h^{\lambda}_{\beta}\left(R^{\alpha}{}_{\lambda}+2\nabla^\alpha\nabla_\lambda\Phi-\frac{1}{4}H^{\alpha}{}_{\rho\sigma}H_\lambda{}^{\rho\sigma}\right) + \frac{1}{2}\left(\nabla_{\lambda}\nabla^{\lambda}h^{\alpha}_{\beta}-2R^{\alpha}{}_{\sigma\rho\beta}h^{\sigma\rho}-R^{\sigma\alpha}h_{\sigma\beta}\right. \right. \\ -R^{\sigma}{}_{\beta}h^{\alpha}_{\sigma}-h^{\rho}_{\sigma} H_{\beta\rho\kappa}H^{\alpha\sigma\kappa}-2(h^{\alpha}_{\sigma}\nabla_{\beta}\partial^{\sigma}\Phi+h_{\beta\sigma}\nabla^{\alpha}\partial^{\sigma}\Phi+\nabla_{\sigma}h^{\alpha}_{\beta} \partial^{\sigma}\Phi)\Big)\bigg] \\ +\frac{1}{(z-w)^2}\left[\frac{h_{\mu\nu}}{2}\,\partial\!\left(R^{\mu\nu}+2\nabla^\mu\nabla^\nu\Phi-\frac{1}{4}H^{\mu}{}_{\rho\sigma}H^{\nu\rho\sigma}\right)+\frac{\partial g^{\mu\nu}}{4}\left(\nabla_{\lambda}\nabla^{\lambda}h_{\mu\nu}-2R_{\mu\alpha\beta\nu}h^{\alpha\beta} \right.\right. 
\\ -R^{\lambda}{}_{\mu}h_{\lambda\nu}-R^{\lambda}{}_{\nu}h_{\lambda\mu}-h^{\lambda}_{\sigma}H_{\mu\lambda\alpha}H_{\nu}{}^{\sigma\alpha}-2\left(h_{\mu\sigma}\nabla_{\nu}\partial^{\sigma}\Phi+h_{\nu\sigma}\nabla_{\mu}\partial^{\sigma}\Phi+\nabla_{\sigma}h_{\mu\nu}\partial^{\sigma}\Phi\right)\Big)\bigg] \\ +\frac{\psi^{\rho}\psi^{\sigma}}{2\,(z-w)^2}\left(\nabla_{\nu}h_{\lambda\sigma}\,H_{\rho}{}^{\nu\lambda}+\frac{h^{\alpha\beta}}{2}\,\nabla_{\alpha}H_{\beta\sigma\rho}\right)-\frac{\bar\psi_{\rho}\bar{\psi}_{\sigma}}{2\,(z-w)^2}\left(\nabla_{\nu}h_{\lambda}^{\sigma}\,H^{\rho\nu\lambda}+\frac{h^{\alpha\beta}}{2}\,\nabla_{\alpha}H_{\beta}{}^{\sigma\rho}\right) \\ +\cdots\,,\end{gathered}$$ where all numerators are evaluated at $w$ on the worldsheet, and $+\cdots$ again denotes terms which will not contribute to the action of the BRST operator. Using the fact that the background fields obey the non-linear equations of motion , this means that $$\begin{gathered} \label{NSgraviton} QV_{h}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\bigg[\partial\gamma\,\bar{\psi}_\mu\,(\nabla^{\nu} h^\mu{}_\nu-2h^{\mu}{}_{\nu}\partial^{\nu}\Phi)-\partial\bar\gamma\, \psi^\nu\,(\nabla_{\mu} h^{\mu}{}_{\nu}-2h^{\mu}{}_{\nu}\partial_{\mu}\Phi) \\ +\frac{\partial\tilde{c}}{4}\,\left(2\bar\psi^{\mu}\psi^{\nu}+\partial g^{\mu\nu}\right)\left(\nabla_{\lambda}\nabla^{\lambda}h_{\mu\nu}-2R_{\mu\rho\sigma\nu}h^{\rho\sigma}-R^{\lambda}{}_{\mu}h_{\lambda\nu}-R^{\lambda}{}_{\nu}h_{\lambda\mu}\right. 
\\ \left.-h^{\lambda}_{\sigma}H_{\mu\lambda\alpha}H_{\nu}{}^{\sigma\alpha}-2\left(h_{\mu\sigma}\nabla_{\nu}\partial^{\sigma}\Phi+h_{\nu\sigma}\nabla_{\mu}\partial^{\sigma}\Phi+\nabla_{\sigma}h_{\mu\nu}\partial^{\sigma}\Phi\right)\right) \\ +\frac{\partial\tilde{c}}{2}\left(\psi^{\mu}\psi^{\nu}-\bar{\psi}^{\mu}\bar{\psi}^{\nu}\right)\left(\nabla_{\rho}h_{\lambda\nu}\,H_{\mu}{}^{\rho\lambda}-\frac{h^{\rho\sigma}}{2}\,\nabla_{\rho}H_{\sigma\mu\nu}\right)\bigg]\,,\end{gathered}$$ where indices are raised and lowered with the background metric. The requirement $QV_{h}=0$ therefore imposes the generalized de Donder gauge condition $$\begin{aligned} \label{gdeDonder} \nabla^{\mu}h_{\mu\nu}=2h^{\mu}{}_{\nu}\,\partial_{\mu}\Phi\,,\end{aligned}$$ as well as the linearised equation of motion $$\begin{gathered} \label{linNSEin} \nabla_{\lambda}\nabla^{\lambda}h_{\mu\nu}-2R_{\mu\rho\sigma\nu}h^{\rho\sigma}-R^{\lambda}{}_{\mu}h_{\lambda\nu}-R^{\lambda}{}_{\nu}h_{\lambda\mu}-h^{\lambda}_{\sigma}H_{\mu\lambda\alpha}H_{\nu}{}^{\sigma\alpha} \\ -2\left(h_{\mu\sigma}\nabla_{\nu}\partial^{\sigma}\Phi+h_{\nu\sigma}\nabla_{\mu}\partial^{\sigma}\Phi+\nabla_{\sigma}h_{\mu\nu}\partial^{\sigma}\Phi\right)=0\,.\end{gathered}$$ As desired, this is precisely the linearisation of the symmetric tensor equation from for a metric perturbation. However, we also obtain an *antisymmetric* constraint from the last line of : $$\begin{aligned} \label{skewgrav} \nabla_{\rho}h_{\lambda\nu}\,H_{\mu}{}^{\rho\lambda}-\frac{h^{\rho\sigma}}{2}\,\nabla_{\rho}H_{\sigma\mu\nu}=0\,.\end{aligned}$$ From a space-time perspective, this is unexpected: given a symmetric, traceless perturbation $h_{\mu\nu}$, one only expects to obtain the symmetric equation of motion . The antisymmetric equation arises because the background fields $\{g,H,\Phi\}$ are still treated as fluctuating quantum fields by the worldsheet theory. Indeed, these background fields are functionals of the worldsheet field $X^{\mu}(z)$, which is a full quantum field contributing to all OPEs. This means that the perturbation $h_{\mu\nu}$ can backreact on the background geometry, leading to additional constraints. 
In particular, a metric perturbation sources terms in the antisymmetric equation of motion for the background fields [^2]. At the level of a space-time variational problem, this corresponds to evaluating the space-time action on $\{g+h,H,\Phi\}$ and varying it with respect to all these fields. Projecting the resulting equations of motion onto the parts linear in $h$ gives the symmetric equation and the antisymmetric equation as well as the trivial scalar constraint. Consequently, the graviton vertex operator only makes sense in the BRST cohomology in the presence of a background metric alone. When a full NS-NS background is turned on, $QV_h=0$ leads to the physical gauge-fixing condition and correct equation of motion , but also an additional backreaction constraint . We will see the resolution of this issue in Section \[NSNSvertex\]. B-field vertex operator ----------------------- Consider a $B$-field perturbation $b_{\mu\nu}(X)$, which is anti-symmetric ($b_{\mu\nu}=b_{[\mu\nu]}$). As in the graviton case, initially we seek a vertex operator to describe this perturbation on a background metric $g_{\mu\nu}$ alone. Using consistency with the flat space GSO projection as a guide, the candidate vertex operator in the fixed picture is: $$\begin{aligned} \label{0bvertex} V^{(0)}_{b}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\left(\psi^{\mu}\psi^{\nu}b_{\mu\nu}-\bar\psi_{\mu}\bar\psi_{\nu}b^{\mu\nu}\right).\end{aligned}$$ It is straightforward to compute the action of the BRST operator $Q$ on $V^{(0)}_{b}$; since the operator is a conformal primary of spin zero with a canonical ghost structure, $QV^{(0)}_b$ is controlled entirely by the OPEs between the terms in brackets in and the currents $\cG$, $\bar{\cG}$, $\cH$ (with $H_{\mu\nu\rho}=0=\Phi$). 
This leads to $$\begin{gathered} \label{0bfield} QV^{(0)}_{b}=c\tilde{c}\,\delta(\gamma)\delta(\bar\gamma)\,\bigg[\partial\gamma\,\bar{\psi}_\nu\,\nabla_{\mu} b^{\mu\nu}+\partial\bar\gamma\, \psi^\nu\,\nabla^{\mu} b_{\mu\nu} \\ +\frac{\partial\tilde{c}}{4}\left(\psi^{\mu}\psi^{\nu}-\bar{\psi}^{\mu}\bar{\psi}^{\nu}\right)\left(\nabla_{\lambda}\nabla^{\lambda} b_{\mu\nu} - 2R_{\sigma\mu\nu\rho} b^{\sigma\rho}+2R^{\sigma}{}_{\mu} b_{\nu\sigma} \right)\bigg]\,.\end{gathered}$$ Using the vacuum Einstein equations for the background, $QV^{(0)}_b=0$ imposes the gauge-fixing constraint $$\begin{aligned} \label{0bgauge} \nabla^{\mu}b_{\mu\nu}=0\,,\end{aligned}$$ as well as the equation of motion $$\begin{aligned} \label{0bEOM} \nabla_{\lambda}\nabla^{\lambda}b_{\mu\nu}-2R_{\sigma\mu\nu\rho}\,b^{\sigma\rho}=0\end{aligned}$$ on the perturbation. Sure enough, is precisely the linearised equation of motion for a $B$-field propagating on a vacuum Einstein background. From our experience with the graviton vertex operator, we know that a $B$-field perturbation in a general NS-NS background will source the linearised scalar and symmetric tensor equations of motion, leading to unwanted constraints on the perturbation. Nevertheless, it is instructive to see how this arises by constructing a vertex operator for the perturbation $b_{\mu\nu}$ with a background metric, $B$-field and dilaton. It is easy to see that $V^{(0)}_{b}$ is no longer correct in this case; we claim that it must be supplemented by additional terms with non-standard worldsheet ghost structure. To write these terms down, we must bosonize the worldsheet ghost systems $(\beta, \gamma)$ and $(\bar\beta,\bar\gamma)$ [@Friedan:1985ge]. Let $\phi$ be a chiral scalar on the worldsheet, and $(\eta,\xi)$ be a pair of fermions of spin $+1$ and $0$, respectively. These fields have OPEs $$\begin{aligned} \label{bgs} \phi(z)\,\phi(w)\sim-\log(z-w)\,, \qquad \xi(z)\,\eta(w)\sim\frac{1}{z-w}\,,\end{aligned}$$ and are related to the ghosts $(\beta,\gamma)$ by $$\begin{aligned} \label{bgs1} \gamma=\eta\,\e^{\phi}\,, \qquad \beta=\e^{-\phi}\,\partial\xi\,,\end{aligned}$$ using the fact that an exponential of the chiral scalar $\e^{k\phi}$ has spin $-(k+\frac{k^2}{2})$. 
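The spin assignments implicit in this bosonization can be checked with the quoted formula (a sketch; $\eta$ and $\partial\xi$ both carry spin $+1$, as stated above):

```python
def exp_spin(k):
    """Spin of e^{k phi}, from phi(z)phi(w) ~ -log(z - w): -(k + k^2/2)."""
    return -(k + k * k / 2)

gamma_spin = 1 + exp_spin(1)   # gamma = eta e^{phi}; eta has spin +1
beta_spin = 1 + exp_spin(-1)   # beta = e^{-phi} dxi; dxi has spin 0 + 1
print(gamma_spin, beta_spin)   # -0.5 and 1.5, as required
```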
An additional copy of each system, $\bar\phi$, $(\bar{\eta},\bar{\xi})$ is introduced (with identical statistics) for the $(\bar\beta,\bar\gamma)$ ghost system. With these bosonized ghost systems, the $B$-field vertex operator on a general NS-NS background is given by $$\begin{aligned} \label{bvertex} V_{b}=V^{(0)}_{b}+\mathcal{O}^{(1)}_{b}+\bar{\mathcal{O}}^{(1)}_{b}\,,\end{aligned}$$ where the additional operators are $$\begin{aligned} \begin{split} \mathcal{O}_b^{(1)} & = \frac{c \tilde{c}}{4} \partial \tilde{c} \, \partial \xi \, \e^{-2 \phi} \e^{-\bar{\phi}} \,\psi^\mu H_{\mu\rho\sigma} b^{\rho \sigma}\,, \\ \bar{\mathcal{O}}_b^{(1)} &= \frac{c \tilde{c}}{4} \partial \tilde{c} \, \partial \bar{\xi} \, \e^{-2 \bar{\phi}} \e^{-{\phi}} \,\bar{\psi}_\mu H^{\mu\rho\sigma} b_{\rho \sigma}\,. \end{split} \end{aligned}$$ The fact that these additional operators are required is perhaps not surprising, since the background $B$-field couples to the BRST operator in a manner that is distinctly different to the background metric. We must now check the action of the BRST operator on $V_b$. While $QV_{b}^{(0)}$ was governed entirely by the OPEs between the currents $\cG$, $\bar{\cG}$ and $\cH$, the same is not true of $QV_b$. This is due to the non-standard ghost structure of $\cO_{b}^{(1)}$, $\bar{\cO}^{(1)}_{b}$. 
For instance, there are now non-trivial OPEs with the structure constant terms in that must be accounted for: $$\begin{aligned} -2 \tilde{b} \gamma \bar{\gamma}(z)\, \mathcal{O}^{(1)}_b(w) &\sim \frac{c \tilde{c} e^{-\bar{\phi}} \eta }{z-w} \frac{\bar{\psi}_\mu H^{\mu\rho\sigma} b_{\rho \sigma}}{2}+\cdots\,, \label{gauge_cancel1} \\ -2 \tilde{b} \gamma \bar{\gamma} (z) \bar{\cO}^{(1)}_b(w) &\sim \frac{c \tilde{c} e^{-{\phi}} \bar{\eta} }{z-w} \frac{{\psi}^\mu H_{\mu\rho\sigma} b^{\rho \sigma}}{2}+\cdots\,, \label{gauge_cancel2}\end{aligned}$$ making use of the general rule $$\begin{aligned} \e^{\pm\phi}(z)\,\e^{k\phi}(w)= (z-w)^{\mp k}:\e^{\pm\phi}(z)\,\e^{k\phi}(w):\end{aligned}$$ for OPEs between exponentials of the chiral scalar. Note that contributions from the expansion of $\e^{\pm\phi}(z)$ are of crucial importance, canceling algebraic contributions to the OPEs $$\begin{aligned} \bar{\gamma} \mathcal{G}(z)\, V^{(0)}_b(w) &\sim -\frac{c \tilde{c} \e^{-\phi} \bar{\eta}}{z-w} \left( \bar{\psi}_\beta \left(\nabla_\alpha b^{\alpha \beta} - 2b^{\alpha \beta} \partial_\alpha \Phi \right) + \frac{\psi^\mu H_{\mu\rho\sigma} b^{\rho \sigma}}{2} \right)\,, \label{gauge_cond1} \\ \gamma \bar{\mathcal{G}}(z)\, V^{(0)}_b(w) &\sim -\frac{c \tilde{c} \e^{-\bar{\phi}} {\eta}}{z-w} \left( {\psi}^\beta \left(\nabla^\alpha b_{\alpha \beta} - 2b_{\alpha \beta} \partial^\alpha \Phi \right) + \frac{\bar{\psi}_\mu H^{\mu\rho\sigma} b_{\rho \sigma}}{2} \right)\,.\label{gauge_cond2}\end{aligned}$$ Similarly, at every stage of this calculation it is crucial to consider all possible contributions from ghosts to the OPEs. Note that contributions from the stress-energy tensor terms in $Q$ remain trivial, since both $\cO_{b}^{(1)}$ and $\bar{\cO}^{(1)}_{b}$ are conformal primaries of spin zero – despite their non-trivial ghost structure. 
The final result of these calculations is $$\begin{gathered} \label{bfield} QV_b= \frac{c \tilde{c}}{4} \partial \tilde{c}\, \e^{-\phi} \e^{-\bar{\phi}}\bigg[\bar{\psi}_\rho {\psi}^\sigma \left( H^{\mu \alpha \rho} ({\mathrm{d}}b)_{\mu\alpha\sigma} + H_{\mu \alpha\sigma} ({\mathrm{d}}b)^{\mu\alpha\rho} \right)+ \partial g^{\rho\sigma} \left( H^{\mu \beta}{}_\rho ({\mathrm{d}}b)_{\mu\beta\sigma} \right) \\ +(\psi^\mu \psi^\nu - \bar{\psi}^\mu \bar{\psi}^\nu)\left(\nabla_{\lambda}\nabla^{\lambda} b_{\mu\nu} - 2R_{\alpha\mu\nu\beta} b^{\alpha\beta} + 2R^\alpha{}_{\mu} b_{\nu \alpha }- 2\partial^\alpha \Phi \nabla_\alpha b_{\mu\nu} +4 b_{\alpha \mu} \nabla_\nu \partial^\alpha \Phi\right)\bigg] \\ -c \tilde{c}\, \e^{-\phi} \bar{\eta}\,\bar{\psi}_\beta \left(\nabla_\alpha b^{\alpha \beta} - 2b^{\alpha \beta} \partial_\alpha \Phi \right)-c \tilde{c}\, \e^{-\bar{\phi}} \eta\, \psi^\beta \left(\nabla^\alpha b_{\alpha \beta} - 2b_{\alpha \beta} \partial^\alpha \Phi \right) \\ +\frac{c \tilde{c}}{12} \partial \tilde{c}\, \e^{-\phi}\, \partial \e^{-\bar{\phi}}\,H^{\mu\nu\rho} ({\mathrm{d}}b)_{\mu\nu\rho}- \frac{c \tilde{c}}{12} \partial \tilde{c}\, \partial \e^{-\phi}\, \e^{-\bar{\phi}}\,H^{\mu\nu\rho} ({\mathrm{d}}b)_{\mu\nu\rho}\,,\end{gathered}$$ where $({\mathrm{d}}b)_{\mu\alpha\sigma}=\partial_\mu b_{\alpha\sigma}+\partial_\alpha b_{\sigma\mu}+\partial_\sigma b_{\mu\alpha}$ and all terms proportional to the background equations of motion have been set to zero. As desired, setting $QV_b=0$ enforces the gauge condition $$\begin{aligned} \label{bgauge} \nabla^{\mu}b_{\mu\nu}=2b_{\mu\nu}\,\partial^{\mu}\Phi\,,\end{aligned}$$ along with the linearised equation of motion for a $B$-field perturbation on a NS-NS background: $$\begin{aligned} \label{bEOM} \nabla_{\lambda}\nabla^{\lambda}b_{\mu\nu}-2R_{\alpha\mu\nu\beta}\,b^{\alpha\beta}+2R^{\alpha}{}_{[\mu}\,b_{\nu]\alpha}-2\,\partial^{\alpha}\Phi\,\nabla_{\alpha}b_{\mu\nu}+4\,b_{\alpha[\mu}\,\nabla_{\nu]}\partial^{\alpha}\Phi=0\,.\end{aligned}$$ We also obtain additional scalar and symmetric backreaction constraints on the perturbation: $$\begin{aligned} \label{sbfield} H^{\rho\sigma}{}_{(\mu}\,({\mathrm{d}}b)_{\nu)\rho\sigma}=0=H^{\mu\nu\rho}\,({\mathrm{d}}b)_{\mu\nu\rho}\,.\end{aligned}$$ So as expected, $V_b$ only makes sense in the BRST cohomology on a purely metric background. 
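The combination $({\mathrm{d}}b)_{\mu\alpha\sigma}$ defined above is totally antisymmetric whenever $b_{\mu\nu}$ is antisymmetric, which is what makes the scalar constraint a contraction of two three-forms. A small check (a sketch using a 2-form with components linear in $x$, so the partial derivatives are exact constants and no numerics are needed):

```python
import random

random.seed(0)
d = 4
# b_{nu rho}(x) = C[nu][rho][sigma] x^sigma with b antisymmetric, so that
# partial_sigma b_{nu rho} = C[nu][rho][sigma] exactly.
C = [[[0.0] * d for _ in range(d)] for _ in range(d)]
for n in range(d):
    for r in range(n + 1, d):
        for s in range(d):
            v = random.uniform(-1.0, 1.0)
            C[n][r][s] = v
            C[r][n][s] = -v

def db(m, n, r):
    """(db)_{mu nu rho} = d_mu b_{nu rho} + d_nu b_{rho mu} + d_rho b_{mu nu}."""
    return C[n][r][m] + C[r][m][n] + C[m][n][r]

# Total antisymmetry: swapping any adjacent pair of indices flips the sign.
antisymmetric = all(
    abs(db(m, n, r) + db(n, m, r)) < 1e-12 and abs(db(m, n, r) + db(m, r, n)) < 1e-12
    for m in range(d) for n in range(d) for r in range(d)
)
print(antisymmetric)
```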
Dilaton vertex operator ----------------------- In usual superstring theory, the form of the dilaton vertex operator [@Kataoka:1990ga] is complicated by the fact that the dilaton couples to the worldsheet action through the Fradkin-Tseytlin term [@Fradkin:1985ys]. A similar mechanism is in play in the ambitwistor string, visible at the level of the BRST charge through the last term in the matter stress-energy tensor . For a scalar perturbation on space-time $\varphi(X)$, the associated ambitwistor string vertex operator is composed of four terms: $$\begin{aligned} \label{DilatonVO} V_\varphi = \cO^{(1)}_\varphi + \bar{\cO}^{(1)}_\varphi + \cO^{(2)}_\varphi + \bar{\cO}^{(2)}_\varphi\,,\end{aligned}$$ where $$\begin{aligned} \cO^{(1)}_\varphi &= - c \tilde{c}\, \partial \tilde{c}\, \partial \xi \, \e^{-2 \phi}\, \e^{- \bar{\phi}}\,{\psi}^\mu \partial_\mu \varphi\,, \\ \bar{\cO}^{(1)}_\varphi &= - c \tilde{c}\, \partial \tilde{c}\, \partial \bar{\xi} \, \e^{-2 \bar{\phi}}\, \e^{- {\phi}}\, \bar{\psi}_\mu \partial^\mu \varphi\,, \\ \cO^{(2)}_\varphi &= 2\, c \tilde{c}\, \partial \e^{-\phi}\, \e^{-\bar{\phi}}\, \varphi\,, \\ \bar{\cO}^{(2)}_\varphi &= -2\, c \tilde{c}\, \e^{-\phi}\, \partial \e^{-\bar{\phi}}\, \varphi\,.\end{aligned}$$ Note that unlike the graviton and $B$-field vertex operators, differs in the flat space limit from other formulae appearing in the literature [@Berkovits:2018jvm]. This is due to our use of a complex fermion system for the spin $\frac{1}{2}$ matter fields on the worldsheet, as opposed to the real fermion system used elsewhere. Unlike the previous cases, not all constituents of $V_{\varphi}$ are conformal primaries. In particular, the operators $\cO^{(2)}_\varphi$ and $\bar{\cO}^{(2)}_\varphi$ are not primary, so when calculating $QV_{\varphi}$ care must be taken to account for contributions from their OPEs with stress tensor terms in the BRST operator. 
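The claim that all four constituents carry spin zero (even though two of them are not primaries) can be verified by weight counting. This is a sketch assuming the same standard weights as before: $c,\tilde{c}$ have spin $-1$, $\xi$ spin $0$, $\psi,\bar\psi$ spin $+\frac{1}{2}$, $\varphi(X)$ weightless, $\e^{k\phi}$ spin $-(k+\frac{k^2}{2})$, and $\partial$ adding one unit.

```python
def exp_spin(k):
    """Spin of e^{k phi} for a bosonization scalar: -(k + k^2/2)."""
    return -(k + k * k / 2)

# O^(1): c, c-tilde, d(c-tilde), d(xi), e^{-2 phi}, e^{-phibar}, psi (times
# the weightless d(varphi))
O1 = -1 - 1 + (-1 + 1) + (0 + 1) + exp_spin(-2) + exp_spin(-1) + 0.5
# O^(2): c, c-tilde, d(e^{-phi}), e^{-phibar}, varphi
O2 = -1 - 1 + (exp_spin(-1) + 1) + exp_spin(-1) + 0
print(O1, O2)  # both zero: spin-zero operators
```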
The relevant OPEs are $$\begin{aligned} \begin{split} (c T + bc\partial c)\!(z)\, \cO^{(2)}_\varphi(w) &\sim - 2\,\frac{c \partial c\, \tilde{c}\, \e^{-\phi } \e^{-\bar{\phi} } }{(z-w)^2}\,\varphi+\cdots\,, \\ (c T + bc\partial c)\!(z)\, \bar{\cO}^{(2)}_\varphi(w) &\sim 2\,\frac{c \partial c\, \tilde{c}\, \e^{-\phi } \e^{-\bar{\phi} } }{(z-w)^2}\,\varphi+\cdots\,, \end{split} \end{aligned}$$ so the anomalous conformal weight contributions cancel between the two operators. The non-trivial ghost structure of all four contributions in necessitates a careful treatment of the ghost contributions to the action of the BRST operator. On a general NS-NS background, the result is $$\begin{gathered} \label{dilaton} QV_{\varphi}=2c\tilde{c}\,\partial\tilde{c}\left(\partial\e^{-\phi}\,\e^{-\bar{\phi}}-\e^{-\phi}\,\partial\e^{-\bar{\phi}}\right)\left(\nabla_{\mu}\partial^{\mu}\varphi-2\,\partial_{\mu}\Phi\,\partial^{\mu}\varphi\right) \\ -c\tilde{c}\,\partial\tilde{c}\,\e^{-\phi}\e^{-\bar\phi}\left[\left(\partial g^{\mu\nu}+2\bar{\psi}^{\mu}\psi^{\nu}\right)\,\nabla_{\mu}\partial_{\nu}\varphi+\frac{1}{2}\left(\psi^{\mu}\psi^{\nu}-\bar{\psi}^{\mu}\bar{\psi}^{\nu}\right)\,H_{\mu\nu\sigma}\,\partial^{\sigma}\varphi\right]\,.\end{gathered}$$ Requiring $QV_{\varphi}=0$ therefore imposes scalar, symmetric and anti-symmetric equations of motion on the perturbation: $$\begin{aligned} \label{scalardilaton} \nabla_{\mu}\partial^{\mu}\varphi-2\,\partial_{\mu}\Phi\,\partial^{\mu}\varphi=0\,,\end{aligned}$$ $$\begin{aligned} \label{tensordilaton} \nabla_{\mu}\partial_{\nu}\varphi=0\,, \qquad H_{\mu\nu\sigma}\,\partial^{\sigma}\varphi=0\,.\end{aligned}$$ As expected, only the scalar equation  is the desired one; the two tensor equations  arise from the backreaction of the scalar perturbation on the metric and $B$-field sectors. However, the situation for the dilaton vertex operator is worse than for the graviton or $B$-field: even with a pure metric background, we still obtain a tensor equation $\nabla_{\mu}\partial_{\nu}\varphi=0$, which over-constrains the perturbation. 
Although the vertex operator gives the correct scalar equation of motion, its inclusion in the BRST cohomology enforces unphysical constraints on the spectrum. NS-NS vertex operator {#NSNSvertex} --------------------- For each of the graviton, $B$-field and dilaton vertex operators, we have seen that the associated vertex operator is not in the BRST cohomology of the type II ambitwistor string on a general NS-NS background. While the graviton and $B$-field vertex operators are BRST-closed on the support of the appropriate linearised field equations on a pure gravity background, the dilaton operator is only BRST-closed on the support of additional, unphysical equations for *any* sector of background fields. These issues are overcome by combining the graviton, $B$-field and dilaton vertex operators into a single NS-NS vertex operator, which simultaneously perturbs each sector of the background. Indeed, from the space-time perspective this is much more natural than exciting a perturbation of one of the fields on its own, since the non-linear equations of motion intertwine all three. This ‘fat graviton,’ sometimes expressed heuristically as $h_{\mu\nu}\oplus b_{\mu\nu}\oplus\varphi$, is the natural perturbation of the NS-NS sector of type II supergravity. The candidate vertex operator is given by summing together each of the three vertex operators constructed above: $$\begin{aligned} \label{NSvertex} V_{\mathrm{NS}}=V_{h}+V_{b}+V_{\varphi}\,,\end{aligned}$$ where $V_{h}$ is given by , $V_b$ by , and $V_{\varphi}$ by . Computing $QV_{\mathrm{NS}}$ is straightforward: we simply add together the results for the BRST operator acting on each of the three components, , and . The distinct ghost structures in the result impose different constraints on the perturbations. From the terms proportional to $c\tilde{c} \e^{-\phi}\bar{\eta}$ and $c \tilde{c}\e^{-\bar{\phi}}\eta$, we obtain the gauge conditions $$\begin{aligned} \label{NSgf} \nabla^{\mu}h_{\mu\nu}=2h_{\mu\nu}\,\partial^{\mu}\Phi\,, \qquad \nabla^{\mu}b_{\mu\nu}=2b_{\mu\nu}\,\partial^{\mu}\Phi\,.\end{aligned}$$ 
Terms proportional to $c\tilde{c}\partial\tilde{c} \e^{-\phi}\e^{-\bar\phi}$ encode tensorial equations of motion. The symmetric equation, which appears contracted into $(2\bar\psi^{(\mu}\psi^{\nu)}+\partial g^{\mu\nu})$, is $$\begin{gathered} \label{NSsym} \nabla_{\lambda}\nabla^{\lambda}h_{\mu\nu}-2R_{\mu\rho\sigma\nu}\,h^{\rho\sigma}-2R^{\lambda}{}_{(\mu}\,h_{\nu)\lambda}-h^{\rho}_{\sigma}\,H_{\mu\rho\lambda}H_{\nu}{}^{\sigma\lambda} \\ -4\left(h_{\sigma(\mu}\,\nabla_{\nu)}\partial^{\sigma}\Phi+\frac{1}{2}\nabla_{\sigma}h_{\mu\nu}\,\partial^{\sigma}\Phi\right)+H_{\rho\sigma(\mu}\,({\mathrm{d}}b)_{\nu)}{}^{\rho\sigma}-4\nabla_{(\mu}\partial_{\nu)}\varphi=0\,,\end{gathered}$$ while the anti-symmetric equation, which appears contracted into $(\psi^{\mu}\psi^{\nu}-\bar{\psi}^{\mu}\bar\psi^{\nu})$, is $$\begin{gathered} \label{NSasym} \nabla_{\lambda}\nabla^{\lambda}b_{\mu\nu}-2R_{\rho\mu\nu\sigma}\,b^{\rho\sigma}+2R^{\sigma}{}_{[\mu}\, b_{\nu]\sigma} +4\left(b_{\sigma[\mu}\,\nabla_{\nu]}\partial^{\sigma}\Phi-\frac{1}{2}\nabla_{\sigma}b_{\mu\nu}\,\partial^{\sigma}\Phi\right) \\ +2\nabla_{\rho}h_{\sigma[\nu}\,H_{\mu]}{}^{\rho\sigma}-h^{\rho\sigma}\,\nabla_{\rho}H_{\sigma\mu\nu}-2 H_{\mu\nu\sigma}\,\partial^{\sigma}\varphi=0\,.\end{gathered}$$ Finally, a scalar equation of motion $$\begin{aligned} \label{NSscalar} \nabla_{\mu}\partial^{\mu}\varphi-2\,\partial_{\mu}\Phi\,\partial^{\mu}\varphi-\frac{1}{24}\,H^{\mu\nu\rho}\,({\mathrm{d}}b)_{\mu\nu\rho}=0\end{aligned}$$ is imposed by terms proportional to the ghost structure $c\tilde{c}\partial\tilde{c}\,(\e^{-\phi}\partial\e^{-\bar\phi}-\partial\e^{-\phi}\e^{-\bar\phi})$. Sure enough, equations are the generalized de Donder gauge conditions for graviton and $B$-field perturbations, while equations – are precisely the linearised equations of motion for the NS-NS sector of type II supergravity. Thus, $V_{\mathrm{NS}}$ is in the BRST cohomology of the type II ambitwistor string if and only if it encodes a physical, on-shell perturbation for the NS-NS sector of supergravity on space-time. 
Discussion {#DiscussionVO}
==========

In this paper, we found vertex operators for the heterotic and type II ambitwistor strings with curved background fields. In the heterotic case, we gave the gluon vertex operator on any Yang-Mills background: BRST closure imposes the physical equations of motion and gauge-fixing constraint on the gluon perturbation. For the type II model things are more subtle. In a pure gravity background, we found graviton and $B$-field vertex operators which are BRST closed when the appropriate physical constraints are imposed on the perturbations. On a general NS-NS background (composed of a metric, $B$-field and dilaton), a fully consistent vertex operator is given by simultaneously encoding perturbations to all three sectors. BRST closure then imposes the appropriate physical constraints on these perturbations, given by the linearised equations of motion and a generalized de Donder gauge. The fact that these vertex operators can be determined *exactly* – without recourse to any background field expansion – points to a significant difference between ambitwistor string and ordinary string theory, where such calculations on a general background would be impossible. It should be noted that a generalization of the vertex operators given here allows for *any* gauge-fixing condition on the perturbations – the procedure is a straightforward extension of what is done on a flat background [@Berkovits:2018jvm]. The Lorenz or (generalized) de Donder conditions obtained here are, in a sense, the ‘minimal’ such gauge-fixing constraints. Of course, one hopes to use these vertex operators to compute physical observables in non-trivial backgrounds. At three points, this requires knowing both the fixed vertex operators (i.e., negative picture number) emphasized here and the descended vertex operators (i.e., picture number zero).
In the heterotic theory, the descended vertex operator is easy to obtain through the standard procedure of linearising the constraint $\mathsf{H}$. In the type II case, one can again follow the standard procedure by colliding $V_{\mathrm{NS}}$ with the picture changing operators $\delta(\bar\beta)\cG$ and $\delta(\beta)\bar{\cG}$, respectively. Some terms in the resulting operator will be $Q$-exact and not contribute to correlation functions; these pure gauge contributions can be isolated by applying the picture changing operators in different orders, and then comparing the results. Equivalently, the descended vertex operator can be computed by linearising the $\cH$ current around the chosen background. On a general NS-NS background, the resulting vertex operator is complicated, but in highly symmetric backgrounds (usually those of interest for perturbative calculations) the descended vertex operator can be quite tractable. For instance, the three-point graviton amplitude on a vacuum plane wave space-time has been computed directly from ambitwistor strings [@Adamo:2017sze]. We expect the descent procedure to be manageable enough for explicit calculation of 3-point functions around other highly symmetric backgrounds. To obtain genus zero, $n$-point worldsheet correlation functions (for $n>3$), the analogue of descent with respect to the $\cH$ current must be understood. In flat backgrounds, where $\cH^{\mathrm{flat}}=\Pi^2$, this procedure is understood and leads to the appearance of the scattering equations [@Mason:2013sva; @Adamo:2013tsa; @Ohmori:2015sha]. However, on general backgrounds $\cH$ has complicated $X$-dependence which obstructs a straightforward evaluation of the path integral. In deformations of the ambitwistor string, where $\cH$ has $X$-dependence even in flat backgrounds, it is still not understood how to perform descent with respect to $\cH$ [@Azevedo:2017yjy; @Jusinskas:2016qjd; @Casali:2016atr].
Clearly, a resolution of this issue is required if ambitwistor strings are to be a useful tool in the study of perturbative QFT on curved backgrounds. Finally, we note that the fate of the GSO projection (which ensures that the spectrum of the type II ambitwistor string is equivalent to that of type II supergravity) in curved space remains unclear. Indeed, in the graviton vertex operator the term proportional to a worldsheet derivative does not obey the naïve GSO projection, but is clearly required to ensure that $QV_h=0$ yields covariant equations. Other terms in the $B$-field and dilaton vertex operators also naïvely seem to be in the GSO-odd sector, but dropping them yields non-covariant or unphysical (algebraic and first derivative) equations of motion. One potential way to address the issue of the GSO projection is to formulate the curved space worldsheet theory with two real fermion systems, rather than the complex fermion system used here. The price to pay is that the action is no longer free and a true background field expansion must be used. OPEs would be calculated order-by-order in perturbation theory, but we expect that calculations of the nilpotency of $Q$ and $Q$-closure of vertex operators will become trivial after a certain low loop order. This follows from the fact that the non-perturbative calculations using the complex fermion model give only a finite number of low order poles in the OPEs. We would like to thank Lionel Mason for useful discussions. TA is supported by an Imperial College Junior Research Fellowship; EC was supported by EPSRC grant EP/M018911/1; SN is supported by EPSRC grant EP/M50659X/1 and a Studienstiftung des deutschen Volkes scholarship. This research is supported in part by U.S. Department of Energy grant DE-SC0009999 and by funds provided by the University of California. [^1]: This expression for $\mathcal{H}$ corrects some typos made in [@Adamo:2014wea].
We have checked that these modifications don’t alter any of the results in [@Adamo:2017sze]. [^2]: The metric perturbation can also source a scalar constraint, but it is easy to see that this vanishes on the support of the background equations of motion.
--- abstract: 'We discuss the relation between the solutions of the Skyrme model of lower degrees and the corresponding axially symmetric Hopfions, which is given by the projection onto the coset space $SU(2)/U(1)$. The interaction energy of the Hopfions is evaluated directly from the product ansatz. Our results show that if the separation between the constituents is not very small, the product ansatz can be considered as a relatively good approximation to the general pattern of the charge one Hopfions interaction both in the repulsive and the attractive channel.' author: - | [A. Acus]{}$^{\dagger}$, [E. Norvaišas]{}$^{\dagger}$ and [Ya. Shnir]{}$^{\star \ddagger}$\ \ \ $^{\dagger}$[Vilnius University, Institute of Theoretical Physics and Astronomy]{}\ [Goštauto 12, Vilnius 01108, Lithuania]{}\ $^{\star}$[BLTP, JINR, Dubna, Russia]{}\ $^{\ddagger}$[Institute of Physics, Carl von Ossietzky University Oldenburg, Germany]{} title: Hopfions interaction from the viewpoint of the product ansatz ---

Introduction
============

Spatially localized particle-like non-perturbative soliton field configurations have a number of applications in a wide variety of physical systems, from modern cosmology and quantum field theory to condensed matter physics. The study of the interaction between the solitons and their dynamical properties has attracted a lot of attention in many different contexts (for a general review see e.g. [@Manton-Sutcliffe]). One of these interesting contexts is the investigation of a new family of materials known as topological insulators, which also makes the fundamental research involving topological solitons relevant. Perhaps the most interesting possibility is the discovery that frustrated magnetic materials may support topological insulator phases, for which wave functions are classified by the Hopf invariant [@Moore2008].
A simple example of topological soliton solutions is given by the class of scalar models from the Skyrme family: the original Skyrme model [@Skyrme:1961vq], the Faddeev-Skyrme model [@Faddeev] in $d=3+1$, and the low-dimensional baby Skyrme model in $2+1$ dimensions [@Bsk]. The Lagrangian of all these models, as they were originally formulated, has a similar structure: it includes the usual sigma-model kinetic term, the Skyrme term, which is quartic in derivatives, and the potential term, which does not contain derivatives. According to Derrick’s theorem [@Derrick], the latter term is optional in $d=3+1$; however, it is necessary to stabilise the soliton configurations in the baby-Skyrme model. A peculiar feature of these models is that the corresponding soliton solutions, Skyrmions and Hopfions, do not saturate the topological bound. In order to attain the topological lower bound and get a relation[^1] between the masses of the solitons and their topological charges $Q$, one has to modify the model, for example drop the quadratic kinetic term [@Adam:2010fg; @Foster:2010zb] or extend the model by coupling the Skyrmions to an infinite tower of vector mesons [@Sutcliffe:2011ig]. Thus, the powerful methods of differential geometry cannot be directly applied to describe the low-energy dynamics of the Skyrmions and Hopfions; one has to analyse the processes of their scattering, radiation and annihilation numerically [@Piette:1994mh; @Battye:1996nt]. Interestingly, the numerical simulations of the head-on collision of charge one Skyrmions reveal the celebrated picture of $\pi/2$ scattering through the intermediate axially-symmetric charge two Skyrmion [@Battye:1996nt], which is typical for BPS configurations like vortices or monopoles (see [@Manton-Sutcliffe]). The same pattern was observed in the baby Skyrme model using the collective coordinate method [@Sutcliffe:1991aua].
However, a recent attempt to model the Hopfion dynamics [@Hietarinta:2011qk] failed to find the channel of right-angle scattering in head-on collisions. Typically, direct simulation of the soliton dynamics involves sophisticated numerical methods, and the calculations require a considerable amount of computational resources; in fact, this problem has been fully investigated only for the low-dimensional baby Skyrme model. Even the simpler task of a full numerical investigation of spinning solitons beyond the rigid body approximation was performed only recently in the Faddeev-Skyrme model [@BattyMareike; @JHSS] and in the baby Skyrme model [@Halavanau:2013vsa; @Battye:2013tka]; in the case of the original Skyrme model in $d=3+1$ this problem has not been investigated yet. Alternatively, one can make assumptions about the character of the soliton interaction by analogy with the dynamical properties of the Bogomol’nyi type solitons [@Manton:1988ba; @Sutcliffe:1991aua; @Schroers:1993yk]. Then the moduli space approximation for low-energy soliton dynamics can be applied. This approach works especially well for the low-dimensional baby Skyrme model because it can be considered as a deformation of the $O(3)$ sigma model. It also explains the observations of right-angle scattering in head-on collisions of the Skyrmions in $d=3+1$; however, the question of the validity of the moduli approximation for the low-energy dynamics of the Hopfions is not quite clear. Another approach to the problem of interaction between the solitons is to consider the asymptotic field of the configurations; then, for example, the Skyrmions can be treated as triplets of scalar dipoles [@Schroers:1993yk; @Manton1994; @Manton:2002pf]. Similarly, the asymptotic fields of both the baby Skyrmion and the Hopfion in the sector of degree one correspond to a doublet of orthogonal dipoles [@Piette:1994mh; @Gladikowski:1996mb; @Ward:2000qj].
Considering this system, Ward predicted the existence of three attractive channels in the interaction of charge one Hopfions with different orientations [@Ward:2000qj]. It was suggested recently to use a simplified dipole-dipole picture of the interaction between the baby Skyrmions in the “easy plane” model; in this description the interaction energy depends only on the average orientation of the dipoles [@Jaykka:2010bq]. In his pioneering paper [@Skyrme:1961vq] Skyrme suggested applying the product ansatz, which yields a good approximation to a configuration of well-separated unit charge Skyrmions. The ansatz is constructed by the multiplication of individual Skyrmion fields; besides the rational map ansatz [@Houghton:1997kg], it can be used to produce an initial multi-Skyrmion configuration for subsequent numerical calculations [@Battye1998]. In a similar way one can construct a system of well-separated baby-Skyrmions using the parametrization of the scalar triplet in terms of the $SU(2)$-valued hermitian matrix fields [@Acus:2009df]. Evidently, the same approach can be used to model a configuration of well separated static Hopfions of degree one. On the other hand, the product ansatz can be applied in the Faddeev-Skyrme model to approximate various multicomponent configurations whose position curve consists of a few disjoint loops, like the $Q=4$ soliton. In this Letter we discuss the relation between the solutions of the Skyrme model of lower degree and the corresponding axially symmetric Hopfions, which is given by the projection onto the coset space $SU(2)/U(1)$. Using this approach we construct the product ansatz of two well-separated single Hopfion configurations. We confirm that the product ansatz correctly reproduces the channels of interaction. Indeed, it is known that, similarly to the case of the Skyrmions, the interaction between the two Hopfions can be repulsive or attractive depending upon the relative orientation of the solitons [@Ward:2000qj].
The model
=========

Let us consider the Faddeev-Skyrme model Lagrangian in 3+1 dimensions with metric $(+,-,-,-)$: $$\label{model} {\cal L} = \frac{1}{32\pi^2}\left(\partial_\mu \phi^a \partial^\mu \phi^a - \frac{1}{4}(\varepsilon_{abc}\phi^a\partial_\mu \phi^b\partial_\nu \phi^c)^2 \right)\,.$$ Here $\phi^a = (\phi^1, \phi^2,\phi^3)$ denotes a triplet of real scalar fields which satisfy the constraint $|\phi^a|^2=1$. The finite energy configurations should approach a constant value at spatial infinity, which we select to be $\phi^a(\infty) = (0,0,1)$. Thus, the static field $\mathbf{\phi}(\mathbf{x})$ defines a map $R^3 \rightarrow S^2$, which is characterized by the Hopf invariant $Q$, since $\pi_3(S^2) = \mathbb{Z}$. Then the finite energy solutions of the model, the Hopfions, are maps $S^3 \to S^2$, and the target space $S^2$ by construction is the coset space $SU(2)/U(1)$. It follows that any coset space element ${\ensuremath{\mathbf{H}}}$ can be projected from a generic $SU(2)$ group element ${\ensuremath{\mathbf{U}}}$. In the circular coordinate system the projection can be written in the following form, $$\label{genericProjection} {\ensuremath{\mathbf{H}}}=2\sum_a (-1)^a \tau_a \phi_{-a}= 2 {\ensuremath{\mathbf{U}}}\tau_0 {\ensuremath{\mathbf{U}}}^\dagger\,,$$ where the Pauli matrices $(\tau_1,\tau_0,\tau_{-1})$ are chosen to satisfy the relation $$\label{pauliDefinition} \tau_a \tau_b =\frac14 (-1)^a \delta_{a,-b}\mathbf{1} -\frac{1}{\sqrt{2}} \left[ \begin{matrix} 1 & 1 &1\\ a & b & c \end{matrix} \right]\tau_c ,$$ and $\left[ \begin{smallmatrix} 1 & 1 &1\\ a & b & c \end{smallmatrix} \right]$ denotes the Clebsch-Gordan coefficient.
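As a minimal numerical sanity check (our sketch, not part of the paper, assuming the standard identification $\tau_0=\sigma_3/2$, which is consistent with $\tau_0^2=\tfrac14$ from the algebra above), one can verify that the projection indeed lands on the coset space: $\mathbf{H}=2\mathbf{U}\tau_0\mathbf{U}^\dagger$ is Hermitian, traceless and squares to the identity, i.e. $\mathbf{H}=\phi^a\sigma_a$ with $|\phi^a|^2=1$.

```python
import numpy as np

# Pauli matrices; tau_0 = sigma_3 / 2 is an assumption of this sketch,
# consistent with tau_0^2 = 1/4 from the stated algebra.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def random_su2(rng):
    """Random SU(2) element U = a0*1 + i a.sigma with a on the unit 3-sphere."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * (a[1] * s1 + a[2] * s2 + a[3] * s3)

def coset_projection(U):
    """H = 2 U tau_0 U^dagger = U sigma_3 U^dagger."""
    return U @ s3 @ U.conj().T

rng = np.random.default_rng(0)
for _ in range(10):
    U = random_su2(rng)
    H = coset_projection(U)
    # Hermitian, traceless and H^2 = 1, so H = phi.sigma with |phi| = 1:
    # a point on S^2 = SU(2)/U(1), as required.
    assert np.allclose(H, H.conj().T)
    assert np.isclose(np.trace(H), 0)
    assert np.allclose(H @ H, np.eye(2))
```

The check is insensitive to which $SU(2)$ element is chosen, reflecting that the projection depends only on the coset $\mathbf{U}\,U(1)$.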
It is convenient to rewrite the Lagrangian  directly in terms of coset space elements ${\ensuremath{\mathbf{H}}}$, $$\label{modelInH} {\cal L} = \frac{1}{64\pi^2}\left({{\rm Tr}}\big\{\partial_\mu {\ensuremath{\mathbf{H}}}\partial^\mu {\ensuremath{\mathbf{H}}}\big\} + \frac{1}{16}{{\rm Tr}}\big\{\bigl[\partial_\mu {\ensuremath{\mathbf{H}}},\partial_\nu {\ensuremath{\mathbf{H}}}\bigr]\bigl[\partial^\mu {\ensuremath{\mathbf{H}}},\partial^\nu {\ensuremath{\mathbf{H}}}\bigr]\big\} \right)\,.$$ The difference between the Skyrmions and Hopfions is that in the latter case the dimensions of the domain space and the target space are not the same, so the topological charge of the Hopfions is not defined locally; it has the meaning of a linking number in the domain space [@Faddeev]. There have been many investigations of the solutions of the model  [@Gladikowski:1996mb; @Sutcliffe:2007ui; @Battye1998; @Hietarinta2000]. Here we restrict our consideration to the axially symmetric configurations of lower degrees $Q=1,2$ which are conventionally labeled as ${\cal A}_{1,1}$ and ${\cal A}_{2,1}$ [@Sutcliffe:2007ui]. An approximation to these solutions can be constructed via Hopf projection of the corresponding Skyrmion configurations with baryon numbers $B=1$ and $B=2$, respectively. Indeed, it has been shown [@Battye1998; @Su:2008] that, up to a constant, the solution for the charge $Q=1$ Hopfion can be written in a form which is equivalent to the standard hedgehog solution of the Skyrme model with the usual profile function $F(r)$. This construction yields the Hopfion with mass $1.232$. In our conventions   the ${\cal A}_{1,1}$ configuration is ${\ensuremath{\mathbf{H}}}_1$ which is a projection of the Skyrmion matrix-valued field ${\ensuremath{\mathbf{U}}}_0$, i.e.
$$\label{hopfion1Projection} {\ensuremath{\mathbf{H}}}_1({\ensuremath{\mathbf{r}}})=2{\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}) \tau_0 {\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}})\,,$$ where ${\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}})$ denotes the usual spherically symmetric Skyrmion which is parametrised via the hedgehog ansatz $$\label{skyrmionAnsatz} {\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}})=\exp \bigl(2{\mathrm{i}}(\hat{{\ensuremath{\mathbf{r}}}}\cdot{\ensuremath{\boldsymbol{\tau}}}) F(r)\bigr)\,.$$ Here $\hat{{\ensuremath{\mathbf{r}}}}$ denotes the unit position vector and $F(r)$ is a monotonically decreasing profile function of the Skyrmion with the usual boundary conditions, $F(0)=\pi,~~F(\infty)=0$. In terms of the triplet scalar fields $\phi_a$ in the circular coordinate system defined by $\phi_{\pm 1}=\mp\frac{1}{\sqrt{2}}\bigl(\phi^1\pm\mathrm{i}\phi^2\bigr)$ and $\phi_0=\phi^3$, the projected Skyrme configuration can be written as[^2] $$\label{skyrmionAnsatzInphi} \phi_a=2\sin^2 F(r) \hat{r}_0 \hat{r}_a +\mathrm{i} a \sin \bigl(2 F(r)\bigr) \hat{r}_a + \cos \bigl(2 F(r)\bigr) \delta_{0,a} \,.$$ Evidently, although the ansatz   depends on the radial variable only, expression clearly demonstrates that the corresponding ${\cal A}_{1,1}$ Hopfion does not possess spherical symmetry; the projection breaks it down to axial symmetry [@Battye1998]. The residual $O(2)$ symmetry of global rotations by the phase $\alpha$ around the third axis in the internal space changes the triplet in the following way $$\label{hopfionrotation} \phi_{+1} \rightarrow \phi_{+1} \mathrm{e}^{i\alpha},\quad \phi_{-1} \rightarrow \phi_{-1} \mathrm{e}^{-i\alpha},\quad \phi_{0} \rightarrow \phi_{0}\,.$$ Recall that in the case of the hedgehog ansatz the iso-rotation of the configuration is equivalent to a rotation of the vector $\hat{r}_a$ by the angle $\alpha$ around the $z$-axis.
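The breaking of spherical symmetry can be seen numerically. The sketch below (our illustration, not from the paper, assuming $\boldsymbol{\tau}=\boldsymbol{\sigma}/2$ so that $\mathbf{U}_0=\cos F+\mathrm{i}\sin F\,(\hat{\mathbf{r}}\cdot\boldsymbol{\sigma})$) projects the hedgehog field and checks that the third component reproduces the $a=0$ case of the projected ansatz, $\phi_0=2\sin^2F\,\hat r_0^2+\cos 2F$, which depends on the direction $\hat r_0$ and not only on $r$.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [s1, s2, s3]

def hedgehog(n, F):
    """U_0 = exp(2i F n.tau) = cos F + i sin F (n.sigma), since (n.sigma)^2 = 1."""
    ns = n[0] * s1 + n[1] * s2 + n[2] * s3
    return np.cos(F) * np.eye(2) + 1j * np.sin(F) * ns

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)       # unit position vector r-hat
F = 1.1                      # sample value of the profile function

U = hedgehog(n, F)
H1 = U @ s3 @ U.conj().T     # H_1 = 2 U_0 tau_0 U_0^dagger
phi = np.array([np.trace(H1 @ s).real / 2 for s in sig])

assert np.isclose(phi @ phi, 1.0)                               # phi on S^2
assert np.isclose(phi[2], 2 * np.sin(F)**2 * n[2]**2 + np.cos(2 * F))
```

The explicit $n_z$-dependence of $\phi^3$ is exactly the statement that only axial symmetry survives the projection.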
The position curve of a Hopfion is commonly chosen to be the curve $\phi^{-1}(0,0,-1)$, the preimage of the point $(0,0,-1)$ which is antipodal to the vacuum $(0,0,1)$. For the simplest ${\cal A}_{1,1}$ Hopfion this is a circle in the $x$-$y$ plane whose radius $r_c$ is defined by $F(r_c) = \pi/2$, with numerical value $r_c = 0.8763$. Small deviations $F(r) = F(r_c)+\epsilon$ then define the tube around the position curve where $\vartheta \approx \pi/2$ and $\varphi = [0,2\pi)$. From the parametrization we can define the orientation of the single Hopfion. Indeed, a clockwise rotation by an angle $\varphi$ in the equatorial plane corresponds to a counterclockwise rotation of the tube point on the target space \[circle\] $$\begin{aligned} \phi^1 \approx& 2\epsilon \sin \varphi,\label{cir1}\\ \phi^2 \approx& -2\epsilon \cos \varphi,\label{cir2}\\ \phi^3 \approx& -1.\end{aligned}$$ Here, for the sake of convenience, we are using the component notation of the field $\phi$. Note that the Hopfion charge can be inverted by the transformation ${\ensuremath{\mathbf{H}}}\rightarrow {\ensuremath{\mathbf{H}}}^* $ or $\phi_{a} \rightarrow (-1)^a \phi_{-a}$. It is easy to see that in this case the signs of the right hand sides of and are inverted, thus a clockwise rotation about the $z$ axis in the domain space corresponds to a clockwise rotation on the target space. So for the Hopfion with negative topological charge the point on the tube rotates in the opposite direction. Hereafter we restrict our investigation to the case of positive values of $Q$ only. For a single Hopfion we can rotate the points on the tube by applying the rotation transform via the $SU(2)$ matrix $$\label{hopfionRotationM} {\ensuremath{\mathbf{H}}}\rightarrow D(\alpha){\ensuremath{\mathbf{H}}}D(-\alpha)\, .$$ Evidently this transformation is equivalent to and it leaves invariant the Lagrangian .
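The phase rotation can be checked explicitly. In the sketch below (our illustration; the direction of the rotation, i.e. the sign of $\alpha$ in the exponent, is a convention we fix by hand as $D(\alpha)=\exp(\mathrm{i}\alpha\sigma_3/2)$), conjugation leaves $\phi_0$ untouched and multiplies the circular components $\phi_{\pm1}$ by opposite phases.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(2)
phi = rng.normal(size=3)
phi /= np.linalg.norm(phi)                     # a point on S^2
H = phi[0] * s1 + phi[1] * s2 + phi[2] * s3    # H = phi . sigma

alpha = 0.9
# D(alpha) = exp(i alpha sigma_3 / 2); the overall sign of alpha is
# a convention of this sketch.
D = np.diag([np.exp(1j * alpha / 2), np.exp(-1j * alpha / 2)])
Hp = D @ H @ D.conj().T

phip = np.array([np.trace(Hp @ s) / 2 for s in (s1, s2, s3)]).real

# Circular components: phi_{+1} = -(phi^1 + i phi^2)/sqrt(2), phi_0 = phi^3.
c = lambda p: -(p[0] + 1j * p[1]) / np.sqrt(2)
assert np.isclose(phip[2], phi[2])                        # phi_0 invariant
assert np.isclose(c(phip), np.exp(-1j * alpha) * c(phi))  # pure phase rotation
```

As stated in the text, this is just a rotation of the tube points about the $z$-axis in the internal space.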
Let us now consider two identical Hopfions of degree one which are placed at the points ${\ensuremath{\mathbf{R}}}/2$ and $-{\ensuremath{\mathbf{R}}}/2$ and separated by a distance $R$, as shown in Fig. \[fig:1\]. There the polar angle $\Theta$ yields the orientation of the Hopfions relative to the $z$-axis. Note that in this frame the pattern of interaction is invariant with respect to the spatial rotations of the system around the $z$-axis by an azimuthal angle $\Phi$. [ \[fig:1\] ]{} First, we suppose that both separated Hopfions are counterclockwise oriented and they are in phase, i.e. $\Delta \alpha =0$. This system can be approximated by the product ansatz $$\label{hopfionProductAnsatz} {\ensuremath{\mathbf{H}}}_2^{\Delta \alpha =0}({\ensuremath{\mathbf{r}}})=2 {\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^\prime){\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime\prime})\tau_0{\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime\prime}){\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^\prime)\,$$ where ${\ensuremath{\mathbf{r}}}^\prime={\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{R}}}/2$ and ${\ensuremath{\mathbf{r}}}^{\prime\prime}={\ensuremath{\mathbf{r}}}-{\ensuremath{\mathbf{R}}}/2$. Here the fields of both Hopfions tend at the spatial boundary to the same asymptotic value $(0,0,1)$. Note, however, that in the constituent system of two identical Hopfions of degree one, contrary to the single Hopfion case, the transformation  of one of the Hopfions ${\ensuremath{\mathbf{H}}}$ does not leave the Lagrangian invariant; it becomes a function of the relative phase difference $\alpha$. Further, in addition to the ansatz we can consider two separated Hopfions of degree one with opposite phases, $\Delta \alpha =\pi$.
Using the definition we can express this system in terms of the matrix ${\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}})$, thus the corresponding product ansatz is different from : $$\label{hopfionProductAnsatzAntiParallel} \begin{split} {\ensuremath{\mathbf{H}}}_2^{\Delta \alpha =\pi}({\ensuremath{\mathbf{r}}})=8 {\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^\prime)\tau_0{\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime\prime})\tau_0{\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime\prime})\tau_0{\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^\prime)\,, \end{split}$$ An advantage of the product ansatz approximations and is that they ensure the conservation of the total topological charge for any separation $R$ and spatial orientation of the constituents. The simple additive ansatz of two unit charge Hopfions used by Ward [@Ward:2000qj] to construct the Hopfion of degree two can be considered as a good approximation only if the Hopfions are well separated. Substitution of the product ansatzes and into the Lagrangian allows us to write down the expressions for the corresponding energy densities of both configurations as a function of the components of the position vectors $r_i^{\prime}$ and $r_j^{\prime\prime}$ (cf. Fig. \[fig:1\]). Using the Gröbner basis method implemented in [*Mathematica*]{}, we can collect these components into various combinations. It appears that in all cases the expressions for the local energy, as well as for the corresponding topological charge density, are functions only of the distances $r^\prime$ and $r^{\prime\prime}$, the dot product ${\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime}$, the $z$-components of the vectors $r_0^{\prime}$, $r_0^{\prime\prime}$ and the cross product $({\ensuremath{\mathbf{r}}}^{\prime}\times{\ensuremath{\mathbf{r}}}^{\prime\prime})_0$.
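A quick algebraic check of the in-phase product ansatz (our sketch, not from the paper): in the coincident limit $R\to 0$ one has $\mathbf{U}_0(\mathbf{r}')=\mathbf{U}_0(\mathbf{r}'')=\mathbf{U}_0(\mathbf{r})$, and the ansatz collapses to the projection of a single hedgehog with doubled profile, since $\exp(2\mathrm{i}(\hat{\mathbf{r}}\cdot\boldsymbol{\tau})F)^2=\exp(2\mathrm{i}(\hat{\mathbf{r}}\cdot\boldsymbol{\tau})\,2F)$.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def hedgehog(n, F):
    """U_0 = exp(2i F n.tau) = cos F + i sin F (n.sigma), assuming tau = sigma/2."""
    ns = n[0] * s1 + n[1] * s2 + n[2] * s3
    return np.cos(F) * np.eye(2) + 1j * np.sin(F) * ns

rng = np.random.default_rng(3)
n = rng.normal(size=3)
n /= np.linalg.norm(n)
F = 0.7

U = hedgehog(n, F)
# In-phase product ansatz at coincident points (r' = r'' = r):
H2 = (U @ U) @ s3 @ (U @ U).conj().T   # 2 U_0 U_0 tau_0 (U_0 U_0)^dagger
# ... equals the projection of a single hedgehog with profile 2F.
Ud = hedgehog(n, 2 * F)
H_doubled = Ud @ s3 @ Ud.conj().T
assert np.allclose(H2, H_doubled)
```

This doubling of the profile is what produces the $\sin^2\bigl(2F(r)\bigr)$ structure in the coincident-limit charge density computed later in the text.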
Let us now express these quantities in terms of the Hopfion’s position coordinates $R,\Theta,\Phi$ and the spherical coordinates $r,\theta,\varphi$; then the numerical integration of the corresponding local densities over the variables $\varphi,\vartheta$ and $r$ yields the total energy (mass) of the system and its topological charge. In order to do this we apply some useful identities: $$\begin{aligned} \label{substitutions} r^{\prime}=&\bigl(r^2+R^2/4+r R (\cos\Theta \cos\vartheta+\sin\Theta \sin\vartheta \cos(\varphi-\Phi))\bigr)^{1/2},\notag\\ r^{\prime\prime}=&\bigl(r^2+R^2/4-r R (\cos\Theta \cos\vartheta+\sin\Theta \sin\vartheta \cos(\varphi-\Phi))\bigr)^{1/2},\notag\\ ({\ensuremath{\mathbf{r}}}^{\prime}\times{\ensuremath{\mathbf{r}}}^{\prime\prime})_0=&\frac{r R \sin\Theta \sin\vartheta \sin(\Phi-\varphi)}{r^{\prime}r^{\prime\prime}},\notag\\ ({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})=&\frac{r^2-(R^2/4)}{r^{\prime} r^{\prime\prime}},\\ r^{\prime}_0=&\frac{r \cos\vartheta+(R/2)\cos \Theta}{r^\prime},\notag\\ r^{\prime\prime}_0=&\frac{r \cos\vartheta-(R/2)\cos \Theta}{r^{\prime\prime}}\,.\notag\end{aligned}$$ Now we illustrate the calculation procedure with a particular example, the evaluation of the local topological charge density, which in the circular coordinates is[^3] $$\label{barDensGen} \begin{split} {\cal Q}({\ensuremath{\mathbf{r}}}^{\prime},{\ensuremath{\mathbf{r}}}^{\prime\prime})=&{\mathrm{i}}\sqrt{2}(-1)^{a+b} \left[ \begin{matrix} 1 & 1 &1\\ a & b & a+b \end{matrix} \right] \mathop{\mathrm{Tr}}\Bigl( \nabla_a \bigl({\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime}){\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime\prime})\bigr) {\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime\prime}){\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime})\\ \times&\nabla_b
\bigl({\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime}){\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime\prime})\bigr) {\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime\prime}){\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime}) \nabla_{-a-b} \bigl({\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime}){\ensuremath{\mathbf{U}}}_0({\ensuremath{\mathbf{r}}}^{\prime\prime})\bigr) {\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime\prime}){\ensuremath{\mathbf{U}}}_0^\dagger({\ensuremath{\mathbf{r}}}^{\prime}) \Bigr)\,. \end{split}$$ Substitution of the product ansatz of two aligned ${\cal A}_{1,1}$ Hopfions into yields the following expression for the topological charge density $$\label{hopfionBarionDensityParallel} \begin{split} &{\cal Q}^{\Delta \alpha =0}({\ensuremath{\mathbf{r}}}^{\prime},{\ensuremath{\mathbf{r}}}^{\prime\prime})= -6\biggl( (1-({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})^2)F^\prime(r^{\prime}) F^\prime(r^{\prime\prime})\Bigl(\frac{\sin (2 F(r^\prime))}{r^{\prime}}+\frac{\sin (2 F(r^{\prime\prime}))}{r^{\prime\prime}}\Bigr) \\ &+\frac{2\sin ^2F(r^\prime)}{r^{\prime 2}} \Bigl(F^\prime(r^{\prime\prime}) ({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})^2+F^\prime(r^\prime)\Bigr) +\frac{2\sin ^2F(r^{\prime\prime})}{r^{\prime\prime 2}} \Bigl(F^\prime(r^\prime) ({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})^2+F^\prime(r^{\prime\prime})\Bigr)\\ &+\frac{(1-({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})^2)}{r^\prime r^{\prime\prime}} \Bigl(\frac{\sin(2 F(r^\prime)) \sin ^2F(r^{\prime\prime})}{r^{\prime\prime}}\Bigr) +\frac{\sin ^2F(r^\prime) \sin (2 F(r^{\prime\prime}))}{r^{\prime}}\\ &+\frac{2 \sin F(r^\prime) \sin F(r^{\prime\prime}) \bigl(F^\prime(r^\prime)+F^\prime(r^{\prime\prime})\bigr)} {r^\prime r^{\prime\prime}} \Bigl(-2 
\sin F(r^\prime) \sin F(r^{\prime\prime}) ({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})\\ &+ (1+({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})^2) \cos F(r^\prime) \cos F(r^{\prime\prime})\Bigr) \biggr)\,. \end{split}$$ It is possible to compute the total topological charge of the configuration to verify the correctness of our construction. This task becomes somewhat simpler since the expression depends only on the variables $r^\prime$, $r^{\prime\prime}$ and the dot product $({\ensuremath{\mathbf{r}}}^{\prime}\cdot{\ensuremath{\mathbf{r}}}^{\prime\prime})$. If we suppose that both Hopfions are sitting on top of each other, i.e. $R\rightarrow 0$, then from and we find $$\label{parBarDensLimit} \lim_{R\rightarrow 0} {\cal Q}^{\Delta \alpha =0}({\ensuremath{\mathbf{r}}}^{\prime},{\ensuremath{\mathbf{r}}}^{\prime\prime})= -\frac{24 \sin^2\bigl(2F(r)\bigr) F^\prime(r) }{r^2}.$$ Thus, this formula is different from its counterpart for the topological charge one configuration by a factor of two. Evidently, the total topological charge of the configuration can then be obtained by evaluating the integral over the domain $$Q=\frac{1}{24 \pi^2}\int_0^{2\pi}{\mathrm{d}}\varphi\int_0^\pi{\mathrm{d}}\vartheta\sin\vartheta\int_0^\infty{\mathrm{d}}r r^2 {\cal Q}({\ensuremath{\mathbf{r}}}^{\prime},{\ensuremath{\mathbf{r}}}^{\prime\prime})$$ when the parameters $R$, $\Theta$ and $\Phi$ are arbitrary. Using the above mentioned boundary conditions on the profile function $F(r)$, we arrive at $Q=2$, as expected. The same procedure can be repeated when the Hopfions are in opposite phase, i.e. for the configuration given by the product ansatz .
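In the coincident limit the radial integral is in fact profile-independent: substituting $u=F(r)$ reduces it to $\int\sin^2(2F)\,\mathrm{d}F$ between the boundary values, which equals $\pi/2$. A short numerical sketch (ours, with an illustrative monotonic profile interpolating between $\pi$ at the origin and $0$ at infinity, not a solution of the field equations) confirms $Q=2$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative profile with the hedgehog boundary values F(0) = pi,
# F(inf) = 0 (an assumption of this sketch; the true profile is found
# numerically from the field equations).
F = lambda r: np.pi / (1.0 + r)
dF = lambda r: -np.pi / (1.0 + r)**2

# R -> 0 limit of the in-phase charge density: the r^2 of the measure
# cancels the 1/r^2, and the angular integration contributes 4*pi.
radial, _ = quad(lambda r: -24 * np.sin(2 * F(r))**2 * dF(r), 0, np.inf)
Q = 4 * np.pi * radial / (24 * np.pi**2)

assert np.isclose(Q, 2.0)   # total charge of the coincident configuration
```

The same substitution shows that the answer does not depend on the detailed shape of $F(r)$, only on its boundary values.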
In this case, however, the corresponding topological charge density depends on the variables ${\ensuremath{\mathbf{r}}}_0^{\prime}$ and ${\ensuremath{\mathbf{r}}}_0^{\prime\prime}$ as well as the above mentioned set of variables, thus the result is a bit more complicated than and is not represented here. Explicitly, in the limit $R\rightarrow 0$ it results in a function of the radial variable $r$ and the angle $\theta$ which is different from its counterpart and possesses a double zero at the origin $$\label{opositeBarDensLimit} \lim_{R\rightarrow 0} {\cal Q}^{ \Delta \alpha =\pi}({\ensuremath{\mathbf{r}}}^{\prime},{\ensuremath{\mathbf{r}}}^{\prime\prime})= -\frac{96 \cos^2\theta \sin^4\bigl(F(r)\bigr) F^\prime(r) }{r^2}.$$ However, the integration of this function subject to the same boundary conditions also gives the same result $Q=2$ for any values of the separation $R$ and the orientation angles $\Theta$ and $\Phi$. Thus, we can identify this expression with the topological charge density of the ${\cal A}_{2,1}$ Hopfion.

Numerical results
=================

In the general case, the evaluation of the total topological charge and the energy of the configuration constructed via a product ansatz requires some numerical computations. In Figs. \[fig:4\] and \[fig:5\] the calculated isosurfaces of the topological charge densities are presented for some fixed values of the set of orientation parameters, both for the Hopfions which are in phase and in the opposite phases, i.e. applying the ansatz and , respectively. Evidently, if the separation parameter $R$ is not very small, the product ansatz field given by , correctly reproduces the familiar structure of the system of two ${\cal A}_{1,1}$ Hopfions. Note that in the first case, as the separation parameter $R$ goes to some small but still non-zero value, the unstable axially symmetric charge two ${\cal A}_{1,2}$ configuration [@Hietarinta:1998kt] is recovered.
[ \[fig:4\] ]{} [ \[fig:5\] ]{} Let us now evaluate the energy of interaction between the Hopfions. In particular, for each set of fixed values of the orientation parameters $R$, $\Theta$ and $\Phi$, the integration of the energy density yields the value of the interaction energy of the Hopfions once the masses of two infinitely separated Hopfions (i.e. $M_0=2\times1.232\times 32\pi^2$) are subtracted. We have performed simulations with varying values of the parameters $R$, $\Theta$ and $\Phi$. In Figs. \[fig:2\] and \[fig:3\] we present the integrated product ansatz interaction energy as a function of the orientation parameters for in-phase and opposite-phase Hopfions, respectively[^4]. First, from our results we can conclude that the above product ansatz fields, both and , correctly reproduce the pattern of interaction between the Hopfions based on the simplified dipole-dipole approximation [@Ward:2000qj]. Indeed, both for configurations which are in phase and in opposite phases, the orientation along the direction given by $\Theta = 0$ matches the Channel A discussed by Ward [@Ward:2000qj]. The orientation angle $\Theta = \pi/2$ corresponds to the Channel B. Note that the Channel C represents the interaction between a Hopfion and an anti-Hopfion and is therefore out of the scope of the present work. Other values of $\Theta$ correspond to intermediate relative orientations of the Hopfions. When the Hopfions are in phase and $\Theta = 0$ (Channel A) there is a shallow attractive window for separations $R$ larger than 4, as can be seen from Fig. \[fig:2\] (b). Evidently, this attractive channel is very narrow because the interaction potential quickly becomes repulsive as the value of $\Theta$ increases. Note that the repulsive part of the potential is concentrated inside the core where the product ansatz approximation is not very useful. If $\Theta=\pi/2$ (i.e.
the Hopfions are in a side-by-side position), the interaction potential is always repulsive, as displayed in Fig. \[fig:2\] (c). The energy of interaction for other orientations of the Hopfions is represented by the surface depicted in Fig. \[fig:2\](a). The pattern of interaction between the opposite-phase ${\cal A}_{1,1}$ Hopfions is rather different. Fig. \[fig:3\] (b) shows the corresponding energy of interaction as a function of the orientation parameters. Evidently, in contrast with Fig. \[fig:2\] (b), in the Channel A ($\Theta = 0$) the interaction is always repulsive for any values of the separation parameter $R$. However, in the Channel B ($\Theta = \pi/2$) the interaction energy takes a relatively large negative value at a separation of about the size of the core, $r_c = 0.8763$, and then gradually approaches zero as the separation between the Hopfions increases, as shown in Fig. \[fig:3\] (c). Finally, in Fig. \[fig:3\] (a) we depict the energy of interaction between the Hopfions which are in opposite phases as a function of the orientation parameters $R$ and $\Theta$. We have also checked that the integrated interaction energy does not depend on the azimuthal angle $\Phi$, as expected, though the expressions demonstrate that the energy density functional explicitly depends on this orientation parameter[^5]. To sum up, the product ansatz successfully captures the basic pattern of the interaction between the ${\cal A}_{1,1}$ Hopfions; our calculations suggest that for an arbitrary orientation of the Hopfions the system will evolve towards the state with minimal energy shown in Fig. \[fig:3\] (c). Qualitatively, this conclusion is in agreement with recent results of full 3d numerical simulations of the Hopfion dynamics presented in [@Hietarinta:2011qk].
[ \[fig:2\] ]{} [ \[fig:3\] ]{} Conclusion {#conclusion .unnumbered} ========== Using the Hopf projection of the Skyrme field and the product ansatz approximation we have investigated the pattern of interaction between the axially symmetric ${\cal A}_{1,1}$ Hopfions; in particular, we analysed how the interaction energy depends on the orientation parameters, the separation $R$ and the polar angle $\Theta$. We have shown that this approach correctly reproduces both the repulsive and attractive interaction channels discussed previously in the limit of dipole-dipole interactions. Here we mainly restricted our discussion to the two most interesting cases, considering in-phase and opposite-phase ${\cal A}_{1,1}$ Hopfions. Finally, let us note that the product ansatz can be applied to construct a system of interacting Hopfions of higher degrees. This can be done if, instead of the matrix-valued hedgehog Skyrmion field, we project the corresponding rational map Skyrmions [@Houghton:1997kg; @Battye1998]. On the other hand, setting the value of the separation parameter $R$ to about the size of the core may be used to approximate various linked solitons; for example, the configuration ${\cal L}_{1,1,1}^{1,1,1}$ can be constructed as a projection of the product of three matrix-valued Skyrme fields . Acknowledgements {#acknowledgements .unnumbered} ================ This work is supported by the A. von Humboldt Foundation (Ya.S.) and also by the European Social Fund under the Global Grant measure, VP1-3.1-ŠMM-07-K-02-046 (A.A.). [99]{} N. Manton and P. Sutcliffe, [*Topological Solitons*]{}, (Cambridge University Press, Cambridge, England, 2004). J. E. Moore, Y. Ran and X.-G. Wen, Phys. Rev. Lett. [**101**]{} (2008) 186805 \[arXiv:0804.4527v2\]. T. H. R. Skyrme, Proc. Roy. Soc. Lond.  A [**260**]{} (1961) 127. L.D. Faddeev, *Quantization of solitons*, Princeton preprint IAS-75-QS70 (1975)\ L.D. Faddeev and A. Niemi, Nature [**387**]{}, 58 (1997); Phys. Rev. Lett.
[**82**]{}, 1624 (1999). B.M.A. Piette, W.J. Zakrzewski, H.J.W. Mueller-Kirsten, D.H. Tchrakian, Phys. Lett. B [**320**]{} (1994) 294.\ B.M.A. Piette, B.J. Schroers, W.J. Zakrzewski, Z. Phys. C [**65**]{} (1995) 165. G.H. Derrick, J. Math. Phys. [**5**]{} (1964) 1252. C. Adam, J. Sanchez-Guillen and A. Wereszczynski, Phys. Lett. B [**691**]{} (2010) 105 \[arXiv:1001.4544 \[hep-th\]\]. D. Foster, Phys. Rev. D [**83**]{} (2011) 085026 \[arXiv:1012.2595 \[hep-th\]\]. P. Sutcliffe, JHEP [**1104**]{} (2011) 045 \[arXiv:1101.2402 \[hep-th\]\]. N. S. Manton, Phys. Rev. Lett.  [**60**]{} (1988) 1916. P. M. Sutcliffe, Nonlinearity [**4**]{} (1991) 4, 1109. B. J. Schroers, Z. Phys. C [**61**]{} (1994) 479 \[hep-ph/9308236\]. N. S. Manton, Acta Phys. Polon. B [**25**]{} (1994) 1757. N. S. Manton, B. J. Schroers and M. A. Singer, Commun. Math. Phys.  [**245**]{} (2004) 123 \[hep-th/0212075\]. B. M. A. G. Piette, B. J. Schroers and W. J. Zakrzewski, Nucl. Phys. B [**439**]{} (1995) 205 \[hep-ph/9410256\]. R. A. Battye and P. M. Sutcliffe, Phys. Lett. B [**391**]{} (1997) 150 J. Gladikowski and M. Hellmund, Phys. Rev.  D [**56**]{} (1997) 5194 \[arXiv:hep-th/9609035\]. R. S. Ward, Phys. Lett. B [**473**]{} (2000) 291 \[hep-th/0001017\]. J. Jaykka and M. Speight, Phys. Rev. D [**82**]{} (2010) 125030 \[arXiv:1010.2217 \[hep-th\]\]. J. Hietarinta, J. Palmu, J. Jaykka and P. Pakkanen, New J. Phys.  [**14**]{} (2012) 013013 \[arXiv:1108.5551 \[hep-th\]\]. A.F. Vakulenko and L.V. Kapitansky, Sov. Phys. Dokl. [**24**]{}, 432 (1979). R. Battye, M. Haberichter, Phys. Rev. D [**87**]{} (2013) 105003 (11pp). D. Harland, J. Jäykkä., Ya. Shnir and M. Speight, J. Phys. A: Math. Theor. [**46**]{} (2013) 225402 (18pp). A. Halavanau and Y. Shnir, Phys. Rev. D [**88**]{} (2013) 085028 \[arXiv:1309.4318 \[hep-th\]\]. R. A. Battye and M. Haberichter, Phys. Rev. D [**88**]{} (2013) 125016 arXiv:1309.3907 \[hep-th\]. A. Acus, E. Norvaisas and Y. Shnir, Phys. Lett. 
B [**682**]{} (2009) 155 \[arXiv:0909.5281 \[hep-th\]\]. C. J. Houghton, N. S. Manton and P. M. Sutcliffe, Nucl. Phys. B [**510**]{} (1998) 507 \[hep-th/9705151\]. P. Sutcliffe, Proc. Roy. Soc. Lond.  A [**463**]{} (2007) 3001 \[arXiv:0705.1468 \[hep-th\]\]. R. Battye and P. Sutcliffe, Phys. Rev. Lett.  [**81**]{} (1998) 4798. J. Hietarinta and P. Salo, Phys. Rev. D [**62**]{} (2000) 081701. Wang-Chang Su, Chin. J. Phys.  [**40**]{} (2002) 516 \[hep-th/0107187\]. J. Hietarinta and P. Salo, Phys. Lett. B [**451**]{} (1999) 60 \[hep-th/9811053\]. C. J. Houghton, N. S. Manton and P. M. Sutcliffe, Nucl. Phys. B [**510**]{} (1998) 507 [^1]: This relation is linear for Skyrmions; however, for Hopfions the Vakulenko-Kapitanski bound in $d=3+1$ is $E=c Q^{3/4}$ [@VK] where $c$ is some constant. [^2]: Note that there is a misprint in the corresponding expression for the component $\phi_2$ in [@Acus:2009df]. [^3]: Recall that the Hopf charge of the configuration we constructed via projection is given by the topological charge of the Skyrme field [@Battye1998]. [^4]: Note that in order to provide a reasonable approximation to the system of two separated ${\cal A}_{1,1}$ Hopfions, the separation parameter $R$ must be larger than the size of the core $r_c$. [^5]: A Mathematica notebook with all calculation details can be downloaded from <http://mokslasplius.lt/files/Hopfion2013.tgz>.
--- author: - 'Yiwei Sun, Suhang Wang$^{\mathsection}$, Xianfeng Tang, Tsung-Yu Hsieh, Vasant Honavar' title: Node Injection Attacks on Graphs via Reinforcement Learning --- [^1] [^1]: $^{\mathsection}$Corresponding Author
Although memristive devices with threshold voltages are the norm rather than the exception in experimentally realizable systems, their SPICE programming is not yet common. Here, we show how to implement such systems in the SPICE environment. Specifically, we present SPICE models of a popular voltage-controlled memristive system specified by five different parameters for the PSPICE and NGSPICE circuit simulators. We expect this implementation to find widespread use in circuit design and testing. In the last few years, circuit elements with memory, namely, memristive [@chua76a], memcapacitive and meminductive [@diventra09a] systems have attracted considerable attention from different disciplines due to their capability of non-volatile low-power information storage, potential applications in analog and digital circuits, and their ability to store and manipulate information on the same physical platform [@pershin11a]. However, when these elements are combined into complex circuits, progress in this field relies significantly on the tools at our disposal. One such tool is the SPICE simulation environment, commonly used in circuit simulations and testing. While several SPICE models of memristive [@Biolek2009-1; @Benderli2009-1; @Biolek2009-2; @Shin10a; @Rak10a; @Yakopcic11a; @Kvatinsky12a], memcapacitive [@Biolek2009-2; @Biolek10b] and meminductive [@Biolek2009-2; @Biolek11b] elements are already available, they typically [@Biolek2009-1; @Benderli2009-1; @Biolek2009-2; @Shin10a; @Rak10a] rely on physical models without a threshold (see, e.g., Refs. [@strukov08a; @joglekar09a]). Threshold-type switching is instead an extremely important common feature of memristive devices (for examples, see Ref. [@pershin11a]) and, due to physical constraints, likely to be common in memcapacitive and meminductive elements as well [@diventra13a].
Indeed, it is threshold-type switching that is responsible for non-volatile information storage, serves as a basis for logic operations [@borghetti10a; @pershin12a], etc., and it therefore cannot be neglected. For instance, experimentally demonstrated memristive logic circuits [@borghetti10a] and emerging memory architectures [@linn10a] support fixed-threshold modeling [@pershin09b] of memristive devices. Moreover, the atomic migration responsible for resistance switching in many important experimental systems is induced by the applied field and not by the electric current flow. Therefore, models with a voltage threshold [@pershin09b; @Yakopcic11a] are physically better justified than those with a current threshold [@Kvatinsky12a]. In the present paper we introduce a SPICE model for a memristive device with threshold voltage that has been proposed by the present authors [@pershin09b]. Using this type of memristive device, we have already demonstrated and analyzed several electronic circuits including a learning circuit [@pershin09b], memristive neural networks [@pershin10c], logic circuits [@pershin12a], analog circuits [@pershin10d] and circuits transforming memristive response into memcapacitive and meminductive ones [@pershin09e]. These previous results thus demonstrate the range of applicability of the selected physical model. As a consequence, we expect its SPICE implementation to find numerous applications as well. ![image](fig1){width="6.5cm"} The equations describing memristive systems can be formulated in the voltage- or current-controlled form [@chua76a]. In some cases, a voltage-controlled memristive system can be easily re-formulated as a current-controlled one and vice versa [@pershin11a].
Let us then focus on voltage-controlled memristive systems whose general definition (for an $n$th-order voltage-controlled memristive system) is given by the following relations $$\begin{aligned} I(t)&=&R_M^{-1}\left(X,V_M,t \right)V_M(t) , \label{Condeq1}\\ \dot{X}&=&f\left( X,V_M,t\right) \label{Condeq2}\end{aligned}$$ where $X$ is the vector representing $n$ internal state variables, $V_M(t)$ and $I(t)$ denote the voltage and current across the device, and $R_M$ is a scalar, called the [*memristance*]{} (for memory resistance). \[t\] ![image](fig2){width="5.5cm"}

.subckt memristor pl mn PARAMS: Ron=1K Roff=10K Rinit=5K alpha=0 beta=1E13 Vt=4.6
Bx 0 x I='(f1(V(pl,mn))>0) && (V(x)<Roff) ? {f1(V(pl,mn))}: (f1(V(pl,mn))<0) && (V(x)>Ron) ? {f1(V(pl,mn))}: {0}'
Cx x 0 1 IC={Rinit}
R0 pl mn 1E12
Rmem pl mn r={V(x)}
.func f1(y)={beta*y+0.5*(alpha-beta)*(abs(y+Vt)-abs(y-Vt))}
.ends

.subckt memristor pl mn PARAMS: Ron=1K Roff=10K Rinit=5K beta=1E13 Vtp=4.6 Vtm=4.6 nu1=0.0001 nu2=0.1
Gx 0 x value={f1(V(pl)-V(mn))*(f2(f1(V(pl)-V(mn)))*f3(Roff-V(x))+f2(-f1(V(pl)-V(mn)))*f3(V(x)-Ron))}
Raux x 0 1E12
Cx x 0 1 IC={Rinit}
Gpm pl mn value={(V(pl)-V(mn))/V(x)}
.func f1(y)={beta*(y-Vtp)/(exp(-(y-Vtp)/nu1)+1)+beta*(y+Vtm)/(exp(-(-y-Vtm)/nu1)+1)}
.func f2(y1)={1/(exp(-y1/nu1)+1)}
.func f3(y)={1/(exp(-y/nu2)+1)}
.ends

A specific realization of a voltage-controlled memristive system [*with threshold*]{} has been suggested by the present authors in Ref. [@pershin09b]. Such a memristive system is described by $$\begin{aligned} I&=&X^{-1}V_M, \label{eq3} \\ \frac{\textnormal{d}X}{\textnormal{d}t}&=&f\left( V_M\right) \left[ \nonumber \theta\left( V_M\right)\theta\left( R_{off}-X\right) + \right. \\ &{} &\qquad \qquad \qquad \left.
\theta\left(-V_M\right)\theta\left( X-R_{on}\right)\right], \label{eq4}\end{aligned}$$ with $$f(V_M)=\beta V_M+0.5\left( \alpha-\beta\right)\left[ |V_M+V_t|-|V_M-V_t| \right] \label{eq5}$$ where $V_t$ is the threshold voltage, $R_{on}$ and $R_{off}$ are limiting values of the memristance $R_M\equiv X$, and the $\theta$-functions (step functions) are used to limit the memristance to the region between $R_{on}$ and $R_{off}$. The important model parameters are the coefficients $\alpha$ and $\beta$ that characterize the rate of memristance change at $|V_M|< V_t$ and $|V_M|> V_t$, respectively. These two coefficients define the slopes of the $f(V_M)$ curve below and above the threshold (see Fig. \[fig1\]). When $\alpha=0$ (Fig. \[fig1\](b)), the device state changes only if $\left| V_M \right|>V_t$. Note that Eqs. (\[eq3\])-(\[eq5\]) are written in such a way that a positive/negative voltage applied to the top terminal with respect to the bottom terminal denoted by the black thick line always tends to increase/decrease the memristance $R_M$ (the opposite convention has been used in Ref. [@pershin09b]). ![image](fig3){width="4.5cm"} The SPICE model for these devices is formulated following the general idea of Ref. [@Biolek2009-1]. For NGSPICE circuit simulator, the memristive system is realized as a sub-circuit combining a behavioral resistor $R$ (a resistor whose resistance can be specified by an expression), a current source $\uparrow$, and a capacitor $C$. Table \[tbl1\] presents the code of the sub-circuit. Its second line ([*Bx ...*]{}) defines the current source with the current specified through ternary functions. (A ternary function is defined in the code as $a$ ? $b$ : $c$ , which means “IF a, THEN b, ELSE c” [@ngspice].) The purpose of these functions is to limit $R_M$ between $R_{on}$ and $R_{off}$. The third line of the code in Table \[tbl1\] specifies the capacitor $C$ ([*Cx ...*]{}) with an initial condition. 
The fourth line ([*Rmem ...*]{}) defines the behavioral resistor whose resistance takes the same numerical value as the voltage across the capacitor. The next line ([*.func ...*]{}) provides the function $f$ according to Eq. (\[eq5\]). Using the model of Table \[tbl1\] with the NGSPICE simulator, we have not experienced any convergence problems that could potentially result from “IF-THEN” statements. Clearly, Eqs. (\[eq3\])-(\[eq5\]) could be programmed differently, employing smoothing functions (e.g., arctan(), sigmoid or similar functions) as we do below in the case of the PSPICE simulator model. Moreover, instead of the “IF...THEN” statement in Table \[tbl1\], one can use a step-function-based expression. In this case, the “Bx ...” line of the code should be replaced with “Bx 0 x I=$\{$f1(V(pl,mn))\*(u(V(pl,mn))\*u(Roff-V(x))+u(V(mn,pl))\*u(V(x)-Ron))$\}$”. For the PSPICE circuit simulator, the SPICE model of the memristive device with threshold is formulated slightly differently, without the use of a behavioral resistor. Instead, we employ an additional current source playing the role of a behavioral resistor [@Basso05a]. In addition, in order to avoid convergence problems, the function $f$ in Eq. (\[eq5\]) should be smoothed. In the most important case of $\alpha=0$, the smoothing of $f$ is straightforward. Table \[tbl2\] presents the code for the PSPICE circuit simulator for this case. In Table \[tbl2\], $nu1$ and $nu2$ are smoothing parameters used in the smoothed step functions $f2$ and $f3$ (although we prefer to use different smoothing parameters for functions of voltages and resistances, a common smoothing function could also be used). We have verified that the simulation results are identical in both versions of SPICE and that the PSPICE code is also compatible with the LTspice circuit simulator. In addition, we note that the value of $beta$ in Table \[tbl2\] was selected to match the switching times of real memristive devices, which are in the nanosecond range.
We suggest selecting a maximum allowable time step not exceeding 0.01 ns when using this value of $beta$. Let us consider a memristive device with threshold directly connected to a sinusoidal voltage source $V(t)=V_0 \sin(2\pi \nu t)$ as presented in Fig. \[fig3\]. The circuit simulations are performed as a transient analysis of the circuit taking into account initial conditions (the [*uic*]{} option of [*.tran*]{}) within the NGSPICE circuit simulator. In our simulations, we consider two different types of memristive devices with threshold corresponding to the two cases of the function $f(V_M)$ presented in Fig. \[fig1\]. In the first case (which can be dubbed a memristive device with a [*soft threshold*]{}) the coefficients $\alpha,\beta>0$ and $\alpha <\beta$. In this case, the memristance changes at any $V\neq 0$. However, the change is faster when the applied voltage magnitude is above the threshold voltage ($|V|>V_t$). In the second case (Fig. \[fig1\](b)), $\alpha =0$. Consequently, the memristance changes only when the applied voltage exceeds the threshold voltage ($|V|>V_{t}$). This second case is closer to the actual behavior of many experimentally realizable memristive systems [@pershin11a]. We call this type of system a memristive device with a [*hard threshold*]{}. \[fig4\] Fig. \[fig4\] presents selected results of our simulations showing the circuit dynamics at long times (the initial transient interval is omitted). We consider two types of memristive devices – one with a soft and another with a hard threshold ($\alpha=0.1\beta$ and $\alpha=0$, respectively) – and plot the applied voltage, current and memristance as functions of time for these two cases at a frequency $\nu=0.05$ GHz. Clearly, in both cases, the current through the device is not of a simple sine form.
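For readers without a SPICE installation, the same transient behavior can be sketched by a direct Euler integration of Eqs. (\[eq3\])-(\[eq5\]) with the hard threshold $\alpha=0$. The model parameters below match the defaults of Table \[tbl1\]; the 6 V drive amplitude and the number of steps are our own illustrative choices, not values from the text.

```python
import numpy as np

# Hard-threshold model parameters of Eqs. (3)-(5), as in Table 1.
Ron, Roff, Rinit = 1e3, 1e4, 5e3     # ohms
alpha, beta, Vt = 0.0, 1e13, 4.6     # beta in ohm/(V s), Vt in volts
V0, nu = 6.0, 0.05e9                 # illustrative sine drive: 6 V at 0.05 GHz
dt = 1e-12                           # 1 ps, below the suggested 0.01 ns maximum

def f(V):  # Eq. (5)
    return beta * V + 0.5 * (alpha - beta) * (abs(V + Vt) - abs(V - Vt))

X = Rinit
Xmin = Xmax = X
for n in range(100000):              # five drive periods
    V = V0 * np.sin(2.0 * np.pi * nu * n * dt)
    # Theta-function window of Eq. (4): grow only below Roff, shrink only above Ron
    if (V > 0 and X < Roff) or (V < 0 and X > Ron):
        X = min(max(X + f(V) * dt, Ron), Roff)
    Xmin, Xmax = min(Xmin, X), max(Xmax, X)

# With this drive amplitude the memristance sweeps its full range each period:
print(Xmin, Xmax)  # 1000.0 10000.0
```

Since $f(V)=0$ for $|V|<V_t$ when $\alpha=0$, the state only moves during the portions of each half-cycle where the drive exceeds the threshold, reproducing the stepped $R_M(t)$ behavior discussed below.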
The plot of the memristance as a function of time demonstrates that the range of memristance change in (a) is larger than in (b) (actually, in (a), $R_M$ switches between $R_{on}$ and $R_{off}$). The vertical dashed line in Fig. \[fig4\](a) helps in noticing that the memristance starts changing as soon as the sign of the applied voltage changes. In Fig. \[fig4\](b), instead, the change of $R_M$ occurs solely when $|V|>V_t$. As a consequence, the shapes of $R_M(t)$ in Fig. \[fig4\](a) and (b) are slightly different, and the steps in $R_M(t)$ in Fig. \[fig4\](b) are shifted along the horizontal axis compared to those in Fig. \[fig4\](a). The current as a function of voltage at several selected values of $\nu$ is plotted in Fig. \[fig5\]. Clearly, these curves are typical frequency-dependent pinched hysteresis loops [@chua76a; @diventra09a]. The character of the loops for memristive systems with hard and soft thresholds is slightly different. While for memristive systems with a soft threshold the curve for the lowest frequency has the smallest loop span, the situation for the memristive system with a hard threshold is the opposite: the largest loop span occurs at the lowest frequency. This result, however, is not surprising if we take into account the fact that in the memristive system with a soft threshold the change of $R_M$ occurs at lower voltages. Moreover, the insets of Fig. \[fig5\] demonstrate the memristance $R_M(t)$ as a function of $V(t)$ at a particular frequency. It is not difficult to notice that in the case of the memristive system with a hard threshold (shown in the inset of Fig. \[fig5\](b)), $R_M$ changes only when $\left| V\right|$ exceeds $V_t=4.6$ V. We have developed and tested a SPICE model of memristive devices with threshold voltage.
In this model, the limiting conditions for the memristance are realized using ternary functions which adhere more closely to the actual physical situation, compared with the window functions approach previously suggested [@joglekar09a]. The memristive device is realized as a sub-circuit consisting of several elements. While the present model is based on a single internal state variable, $X$, it can be easily generalized to more complex physical models involving several internal state variables. We would like to note that the NGSPICE model presented in Table \[tbl1\] was included into the last distribution of NGSPICE [@ngspice]. Moreover, different convergence and simulation issues of memelements in SPICE will be considered in our future publication [@Biolek13a]. Finally, we note that threshold models of memcapacitive and meminductive systems can be implemented in the SPICE environment in a similar way. [**Acknowledgements**]{} This work has been partially supported by NSF grants No. DMR-0802830 and ECCS-1202383, and the Center for Magnetic Recording Research at UCSD. [99]{} CHUA, L. O., KANG, S. M. Memristive devices and systems. *Proc. IEEE*, 1976, vol. 64, p. 209 - 223. DI VENTRA, M., PERSHIN, Y. V., CHUA, L. O. Circuit elements with memory: Memristors, memcapacitors, and meminductors. *Proc. IEEE*, 2009, vol. 97, no. 10, pp. 1717 - 1724. PERSHIN, Y. V., DI VENTRA, M. Memory effects in complex materials and nanoscale systems. *Advances in Physics*, 2011, vol. 60, p. 145 - 227. BIOLEK, Z., BIOLEK, D., BIOLKOVA, V. SPICE model of memristor with nonlinear dopant drift. *Radioengineering*, 2009, vol. 18, no. 2, p. 210 - 214. BENDERLI, S., WEY, T. A. On [SPICE]{} macromodelling of TiO2 memristors. *Electron. Lett.*, 2009, vol. 45, no. 7, p. 377 - 378. BIOLEK, Z., BIOLEK, D., BIOLKOVA, V. SPICE modeling of memristive, memcapacitative and meminductive systems. *Proc. of ECCTD ’09, European Conference on Circuit Theory and Design*, August 23-27, 2009, p. 249 - 252. 
SHIN, S., KIM, K., KANG, S.-M. Compact models for memristors based on charge-flux constitutive relationships. *IEEE Trans. Comput.-Aided Design Integr. Circuits Syst.*, 2010, vol. 29, p. 590. RAK, A., CSEREY, G. Macromodeling of the memristor in SPICE. *IEEE Trans. Comput.-Aided Design Integr. Circuits Syst.*, 2010, vol. 29, p. 632. YAKOPCIC, C., TAHA, T. M., SUBRAMANYAM, G., PINO, R. E., ROGERS, S. A memristor device model. *[IEEE]{} El. Dev. Lett.*, 2011, vol. 32, p. 1436. KVATINSKY, S., FRIEDMAN, E. G., KOLODNY, A., WEISER, U. C. TEAM: ThrEshold adaptive memristor model. *[IEEE]{} Trans. Circ. Syst. I*, 2013, vol. 60, p. 211. BIOLEK, D., BIOLEK, Z., BIOLKOVA, V. SPICE modelling of memcapacitor. *El. Lett.*, 2010, vol. 46, p. 520. ——. PSPICE modeling of meminductor. *Analog. Integr. Circ. Sig. Process.*, 2011, vol. 66, p. 129. STRUKOV, D. B., SNIDER, G. S., STEWART, D. R., WILLIAMS, R. S. The missing memristor found. *Nature*, 2008, vol. 453, p. 80 - 83. JOGLEKAR, Y. N., WOLF, S. J. The elusive memristor: properties of basic electrical circuits. *Eur. J. Phys.*, 2009, vol. 30, p. 661. DI VENTRA, M., PERSHIN, Y. V. On the physical properties of memristive, memcapacitive, and meminductive systems. *Nanotechnology (in press)*, 2013; arXiv:1302.7063. BORGHETTI, J., SNIDER, G. S., KUEKES, P. J., YANG, J. J., STEWART, D. R., WILLIAMS,R. S. ‘Memristive’ switches enable ‘stateful’ logic operations via material implication. *Nature*, 2010, vol. 464, p. 873 - 876. PERSHIN, Y. V., DI VENTRA, M. Neuromorphic, digital and quantum computation with memory circuit elements *Proc. [IEEE]{}*, 2012, vol. 100, p. 2071. LINN, E., ROSEZIN, R., WASER, R. Complementary resistive switches for passive nanocrossbar memories. *Nature Mat.*, 2010, vol. 9, p. 403. PERSHIN, Y. V., LA FONTAINE, S., DI VENTRA, M. Memristive model of amoeba learning. *Phys. Rev. E*, 2009, vol. 80, p. 021926. PERSHIN, Y. V., DI VENTRA, M. 
Experimental demonstration of associative memory with memristive neural networks. *Neural [N]{}etworks*, 2010, vol. 23, p. 881. ——. Practical approach to programmable analog circuits with memristors. *[IEEE]{} Trans. Circ. Syst. I*, 2010, vol. 57, p. 1857. ——. Memristive circuits simulate memcapacitors and meminductors. *Electronics Letters*, 2010, vol. 46, p. 517 - 518. NENZI, P., VOGT, H. Ngspice Users Manual, version 25plus. 2013. Available at: http://ngspice.sourceforge.net/docs/ngspice-manual.pdf BASSO, C. SPICE analog behavioral modeling of variable passives. *Power Electronics Technology*, April 2005, pp. 57 - 59. BIOLEK, D., DI VENTRA, M., PERSHIN, Y. V. in preparation. [**About Authors…**]{} **Yuriy V. PERSHIN** was born in Russia. He received his Ph.D. degree in theoretical physics from the University of Konstanz, Konstanz, Germany, in 2002. His research interests span broad areas of nanotechnology, including physics of semiconductor nanodevices, spintronics, and biophysics. **Massimiliano Di Ventra** was born in Italy. He received his Ph.D. degree in theoretical physics from the Ecole Polytechnique Federale de Lausanne, Switzerland, in 1997. His research interests are in the theory of electronic and transport properties of nanoscale systems, non-equilibrium statistical mechanics, DNA sequencing/polymer dynamics in nanopores, and memory effects in nanostructures for applications in unconventional computing and biophysics.
--- address: Stanford University author: - Zhangsihao Yang - Or Litany - Tolga Birdal - Srinath Sridhar - Leonidas Guibas bibliography: - 'references.bib' title: Continuous Geodesic Convolutions for Learning on 3D Shapes --- Geometric Deep Learning, Shape Descriptors, Shape Segmentation, Shape Matching
--- abstract: 'The mineral barlowite, , has been the focus of recent attention due to the possibility of substituting the interlayer $^{2+}$ site with non-magnetic ions to develop new quantum spin liquid materials. We re-examine previous methods of synthesizing barlowite and describe a novel hydrothermal synthesis method that produces large single crystals of barlowite and Zn-substituted barlowite ($_3$$_x$$_{1-x}$). The two synthesis techniques yield barlowite with indistinguishable crystal structures and spectroscopic properties at room temperature; however, the magnetic ordering temperatures differ by 4 K and the thermodynamic properties are clearly different. The dependence of properties upon synthetic conditions implies that the defect chemistry of barlowite and related materials is complex and significant. Zn-substituted barlowite exhibits a lack of magnetic order down to *T* = 2 K, characteristic of a quantum spin liquid, and we provide a synthetic route towards producing large crystals suitable for neutron scattering.' address: - 'Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA' - 'Department of Chemistry, Stanford University, Stanford, California 94305, USA' - 'Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, USA' - 'Department of Applied Physics, Stanford University, Stanford, California 94305, USA' author: - 'Rebecca W.  Smaha' - Wei He - 'John P. Sheckelton' - Jiajia Wen - 'Young S.
Lee' bibliography: - 'mendeley.bib' title: 'Synthesis-Dependent Properties of Barlowite and Zn-Substituted Barlowite' --- Crystal growth, Quantum spin liquid, Magnetic properties, Crystal structure determination, Spectroscopy, Heat capacity Introduction ============ Quantum spin liquid (QSL) materials have an exotic magnetic ground state characterized by the spins evading conventional magnetic long-range order down to *T* = 0 K and possessing long-range quantum entanglement.[@Balents2010; @Norman2016] One way to explain this ground state is as a resonating valence bond state, in which singlets of entangled spins fluctuate over the lattice but never break translational symmetry.[@Anderson1973] Through the possibility of obtaining long-range quantum entanglement of spins, a better understanding of the QSL ground state opens avenues to develop materials for topological quantum computing applications.[@Ioffe2002] In addition, investigating QSL candidate materials may have important implications for our understanding of high temperature superconductivity.[@Anderson1987; @Savary2017] One of the best experimentally realized QSL candidates is the metal oxyhalide mineral herbertsmithite, .[@Braithwaite2004; @Shores2005; @Han2012; @Fu2015] Herbertsmithite has a rhombohedral, layered structure consisting of alternating kagomé lattice planes of $^{2+}$ ions with layers of nonmagnetic $^{2+}$ ions that serve to magnetically isolate the kagomé layers. Extreme magnetic frustration can be found when there are competing antiferromagnetic (AFM) interactions between nearest-neighbor S = 1/2 spins on a kagomé lattice, which consists of a network of corner-sharing triangles.
The physics of herbertsmithite has been studied extensively, but chemical and synthetic limitations have held it back: a small fraction of excess $^{2+}$ impurities on the interlayer Zn site results in interlayer magnetic coupling that obscures the intrinsic QSL behavior.[@DeVries2012; @Han2016b] The mineral barlowite[@Elliott2014; @Han2014], , is another rare example of a material that has an isolated, undistorted S = 1/2 kagomé lattice. It contains $^{2+}$ ions on its interlayer site, presumably causing it to have a transition to long-range magnetic order at 15 K.[@Han2014; @Jeschke2015] Barlowite, therefore, is not a QSL material; however, DFT calculations show that substituting the interlayer site with nonmagnetic $^{2+}$ or $^{2+}$ should suppress the long-range magnetic order and lead to a QSL state.[@Guterding2016a; @Liu2015a] It has a different coordination environment around the interlayer $^{2+}$ (trigonal prismatic as opposed to octahedral in herbertsmithite) and perfect AA stacking of the kagomé layers, while herbertsmithite has ABC stacking. It has been predicted that these differences will yield a significantly lower amount of $^{2+}$ impurities on the interlayer site in Zn- or Mg-substituted barlowite compared to herbertsmithite, opening up new avenues to study the intrinsic physics of QSL materials.[@Liu2015a] Here, we re-examine the synthesis of barlowite after noting a discrepancy between the morphology of crystals of natural barlowite (described as “platy" along the *c*-axis[@Elliott2014]) and crystals of synthetic barlowite (rods down the *c*-axis).[@Han2016; @Pasco2018] We present a new method of synthesizing large single crystals of barlowite that are structurally and spectroscopically identical to polycrystalline barlowite at room temperature. However, at low temperatures, the magnetic transition temperature shifts by 4 K. 
Slight modifications of these two methods produce polycrystalline and single crystalline Zn-substituted barlowite (Cu$_3$Zn$_x$Cu$_{1-x}$(OH)$_6$FBr) showing a lack of magnetic order down to *T* = 2 K, consistent with a QSL ground state. This comparison of synthesis methods has implications for past and future studies of related synthetic minerals, especially copper oxysalts produced hydrothermally. The large dependence of properties on synthetic route suggests that the defect chemistry of copper oxysalts is more complex than previously believed, implying that a true understanding of this class of materials requires careful control over synthesis.

Experimental Details
====================

Materials and Methods
---------------------

(Alfa, Cu 55%), (Alfa, 96%), HBr (Alfa, 48% wt), (BTC, 99.999%), (BTC, 99.5%), (Alfa, 99%), (Alfa, 99%), deionized (DI) H$_2$O (EMD Millipore), and (Aldrich, 99.9%) were used as purchased. Mid- and near-infrared (IR) measurements were performed on a Thermo Fisher Scientific Nicolet 6700 Fourier transform infrared spectrometer (FTIR) with a Smart Orbit diamond attenuated total reflectance (ATR) accessory. Raman measurements were performed on a Horiba LabRAM Aramis spectrometer with a CCD detector, 1800 grooves/mm grating, and 532 nm laser. DC magnetization measurements were performed on a Quantum Design Physical Properties Measurement System (PPMS) Dynacool from 2 to 350 K under applied fields of 0.005 T, 1.0 T, and 9.0 T. Heat capacity measurements were performed in the PPMS Dynacool on either a pressed single pellet of powder mixed with Ag powder in a 1:2 mass ratio or on a single crystal affixed to a sapphire platform using Apiezon-N grease.

Syntheses
---------

**1**: (1.5477 g), (0.2593 g), and HBr (0.8 mL) were sealed in a 45 mL PTFE-lined stainless steel autoclave with 36 mL DI H$_2$O or D$_2$O. This was heated over 3 hours to 175 $^{\circ}$C and held for 72 hours before being cooled to room temperature over 48 hours.
The products were recovered by filtration and washed with DI H$_2$O, yielding polycrystalline barlowite. **1-a**: This was prepared as **1** above but with the following heating profile: it was heated over 3 hours to 175 $^{\circ}$C and held for 17 days before being cooled to room temperature over 48 hours. **Zn-1**: (0.5307 g), (0.0593 g), and (0.5405 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 10 mL DI H$_2$O. This was heated over 3 hours to 210 $^{\circ}$C and held for 24 hours before being cooled to room temperature over 30 hours. The products were recovered by filtration and washed with DI H$_2$O, yielding polycrystalline Zn-substituted barlowite. **2**: (0.4569 g) and (0.9119 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 15 mL DI H$_2$O. **Zn-2**: (0.2742 g), (0.4653 g), and (1.1724 g) were sealed in a 23 mL PTFE-lined stainless steel autoclave with 15 mL DI H$_2$O. For both, the autoclave was heated over 3 hours to 175 $^{\circ}$C and held for 72 hours, then cooled to 80 $^{\circ}$C over 24 hours. It was held at 80 $^{\circ}$C for 24 hours before being cooled to room temperature over 12 hours. The products were recovered by filtration and washed with DI H$_2$O, yielding barlowite or Zn-substituted barlowite crystals mixed with a polycrystalline byproduct, which was removed by sonication in acetone.

X-ray Diffraction
-----------------

Single crystal diffraction (SCXRD) experiments were conducted at Beamline 15-ID at the Advanced Photon Source (APS), Argonne National Laboratory, using a Bruker D8 diffractometer equipped with a PILATUS3 X CdTe 1M detector or a Bruker APEXII detector. Datasets were collected at 300 K using a wavelength of 0.41328 [Å]{}.
The data were integrated and corrected for Lorentz and polarization effects using <span style="font-variant:small-caps;">saint</span> and corrected for absorption effects using <span style="font-variant:small-caps;">sadabs</span>.[@BrukerAXSSoftwareInc2016] The structures were solved using intrinsic phasing in <span style="font-variant:small-caps;">apex3</span> and refined using the <span style="font-variant:small-caps;">shelxtl</span>[@sheldrick2015] and <span style="font-variant:small-caps;">olex2</span>[@Dolomanov2009] software. Hydrogen atoms were inserted at positions of electron density near the oxygen atom and were refined with a fixed bond length and an isotropic thermal parameter 1.5 times that of the attached oxygen atom. Thermal parameters for all other atoms were refined anisotropically. High resolution synchrotron powder X-ray diffraction (PXRD) data were collected at 300 K using beamline 11-BM at the APS using a wavelength of 0.412728 [Å]{}. Samples were measured in kapton capillaries; crystalline samples were crushed into a powder. Rietveld refinements were performed using <span style="font-variant:small-caps;">gsas-II</span>.[@toby2013] Atomic coordinates and isotropic atomic displacement parameters were refined for each atom. Following results from SCXRD, the site occupancy of the interlayer Cu or Zn was fixed at 0.3333; however, for **Zn-1** the Zn position refined to nearly octahedral coordination, so it was placed on the octahedral site with an occupancy of 1. Hydrogen was excluded.

Results and discussion
======================

Synthesis
---------

Attempts to replicate the reported synthesis of barlowite[@Han2014] were stymied by its use of an unstable[@Appelman1969] and commercially unavailable reagent. Replacing it with an alternative reagent yielded crystals an order of magnitude smaller than those reported previously[@Jeschke2015; @Han2016] and too small for neutron scattering experiments.
Thus, we developed two alternate synthetic routes, Method 1 (reactions **1a** and **1b**) and Method 2 (reactions **2a** and **2b**), to produce barlowite and Zn-substituted barlowite. Method 1 produces polycrystalline barlowite mixed with small crystals (up to 0.5 mm, **1**) or polycrystalline Zn-substituted barlowite (no crystals, **Zn-1**). Method 1 is similar to the first reported syntheses of barlowite[@Han2014] and Zn-substituted barlowite,[@Feng2017a] but we utilize slightly different reagents in Methods 1a and 1b, and we have optimized the stoichiometry and temperature profile. The higher temperature used here for barlowite (175 $^{\circ}$C, Method 1a) yields 0.5 mm crystals more efficiently than synthesis at 120 $^{\circ}$C.[@Pasco2018] However, neither this method nor any literature report on Zn-substituted barlowite[@Feng2017; @Feng2017a] produces crystals, which will hinder future neutron studies of its possible QSL ground state. Method 2 produces large crystals (up to 2 mm) of both barlowite (**2**) and Zn-substituted barlowite (**Zn-2**). Method 2a is an entirely novel route for the growth of single crystals of barlowite, which must be mechanically separated from the byproduct; the preferential formation and stability of the byproduct phase aid the growth of high-quality barlowite crystals. Method 2a is modified to produce Zn-substituted barlowite (Method 2b) by adding a large excess of the Zn source and correspondingly increasing the stoichiometry. As measured by inductively coupled plasma atomic emission spectroscopy (ICP-AES), the Zn content of polycrystalline **Zn-1** is 0.95, and the Zn content of **Zn-2** averaged over several crystals is 0.33. These were produced using 1.5 and 5 equivalents of the respective Zn reagents, and the difference can be attributed to the nearly five orders of magnitude higher solubility in water of the Zn source used in Method 1b compared to that used in Method 2b. Both methods utilize the moderate temperature and pressure range accessible to PTFE-lined autoclaves.
PTFE is essential given the presence of fluorine in the reaction; attempting to synthesize single crystalline barlowite in a quartz tube, as is done for herbertsmithite,[@Chu2011; @Han2011] is futile since HF is expected to etch the quartz before barlowite forms. The two synthesis methods produce barlowite crystals with different morphologies. Method 1 crystals (Figure \[fgr:Struc\]C) grow as small hexagonal rods whose long axis is the *c*-axis, similar to those reported in the literature,[@Han2016; @Pasco2018] while Method 2 crystals (Figure \[fgr:Struc\]D) grow as larger hexagonal plates flattened along the *c*-axis. Naturally-occurring barlowite crystals were described as “platy” and thus are likely more similar to those produced by Method 2.[@Elliott2014]

Structure and Composition
-------------------------

The reported room-temperature crystal structure, solved via single crystal X-ray diffraction (SCXRD) in space group *P*6~3~/*mmc* (No. 194),[@Han2014] agrees well with that of naturally-occurring barlowite.[@Elliott2014] Recent reports disagree on whether the crystal structure is hexagonal[@Feng2017a] or orthorhombic[@Pasco2018] at room temperature. Single crystal X-ray diffraction measurements at beamline 15-ID at the APS at *T* = 300 K performed on crystals of **1**, **2**, and **Zn-2** showed no signs of symmetry lowering or pseudomerohedral twinning, so we assign the structure as hexagonal space group *P*6~3~/*mmc*. Lattice parameters and refinement details for the SCXRD structures can be found in Table S1,[@supp] and extracted bond distances can be found in Table S2. ![image](fig1_new.png){width="17cm"} The structure of barlowite is depicted in Figure \[fgr:Struc\]B, showing the kagomé plane of highly Jahn-Teller (4+2)-distorted octahedra and the coordination environment around the interlayer Cu site.
This interlayer Cu is disordered over three symmetry-equivalent sites and has trigonal prismatic coordination, isostructural to claringbullite.[@Burns1995a] While rare, trigonal prismatic Cu$^{2+}$ occurs in several other copper oxysalt minerals besides claringbullite, including buttgenbachite[@Fanfani1973] and connellite.[@Mclean1972] Each symmetry-equivalent interlayer site has four short Cu–O bonds ($\sim$2.0 [Å]{}) to the nearer oxygens and two long Cu–O bonds ($\sim$2.4 [Å]{}). The near-identical ionic radii, X-ray scattering factors, and neutron scattering factors of Cu and Zn make it extremely difficult to distinguish the site occupancies of these two elements accurately using diffraction techniques. Zn$^{2+}$ is not Jahn-Teller active, and therefore substituting it onto the kagomé site is energetically unfavorable. Having up to 15% excess Cu$^{2+}$ ions on the Zn interlayer site is possible and has been shown in herbertsmithite using anomalous scattering measurements.[@Freedman2010] In the absence of anomalous diffraction measurements to definitively determine the Cu:Zn ratio and site occupancies in Zn-substituted barlowite, we fix the Zn to substitute on the interlayer site in both single crystal and Rietveld refinements. Following the ICP-AES results, we fix the Cu:Zn ratio to 3:1 and 3.67:0.33 in **Zn-1** and **Zn-2**, respectively. High resolution synchrotron PXRD datasets were collected at beamline 11-BM at the APS at *T* = 300 K for barlowite and Zn-substituted barlowite synthesized using both methods. A representative Rietveld refinement in space group *P*6~3~/*mmc* for **1** is shown in Fig. \[fgr:Struc\]A, and crystallographic data are tabulated in Table S3. The remaining Rietveld refinements are shown in Figures S1–S3. The refinements show that the two methods produce crystallographically identical samples and support the assignment of hexagonal symmetry.
Selected bond distances are shown in Table \[tbl:bond300K\]; the structural effect of Zn substitution is visible as a shift in the triplicated, disordered interlayer site (Cu2 or Zn1). The length of one side of the triangle formed by this disorder (Cu2–Cu2) in both **1** and **2** is approximately 0.74 [Å]{}, while in **Zn-1** the Zn moves to the center of this site and becomes octahedral. Since Zn$^{2+}$ is not Jahn-Teller active, it sits closer to the center of an octahedron instead of at the extremes of a trigonal prismatic geometry, which is more likely for the Jahn-Teller active Cu$^{2+}$.[@Burns1996] The intermediate Zn1–Zn1 distance in **Zn-2** (0.59 [Å]{}) reflects the lower amount of Zn$^{2+}$ that has been substituted onto the interlayer site compared to **Zn-1**. These trends are corroborated by bond distances extracted from single crystal refinements (Table S2).

  **atoms**    Barlowite **1**    Barlowite **2**    **atoms**    **Zn-1**           **Zn-2**
  ------------ ------------------ ------------------ ------------ ------------------ ------------------
  Cu1–Cu1      3.33871(0) [Å]{}   3.34001(0) [Å]{}   Cu1–Cu1      3.33796(0) [Å]{}   3.33812(0) [Å]{}
  Cu1–O1       1.9635(11) [Å]{}   1.9755(9) [Å]{}    Cu1–O1       1.9758(6) [Å]{}    1.9594(11) [Å]{}
  Cu1–Br1      3.01999(0) [Å]{}   3.02027(0) [Å]{}   Cu1–Br1      3.02416(0) [Å]{}   3.02153(0) [Å]{}
  Cu2–O1 (1)   2.0030(18) [Å]{}   1.9861(17) [Å]{}   Zn1–O1 (1)   2.1076(9) [Å]{}    2.0294(13) [Å]{}
  Cu2–O1 (2)   2.0030(17) [Å]{}   1.9861(16) [Å]{}   Zn1–O1 (2)   n/a                2.0293(12) [Å]{}
  Cu2–O1 (3)   2.4455(12) [Å]{}   2.4266(10) [Å]{}   Zn1–O1 (3)   n/a                2.3860(12) [Å]{}
  Cu2–Cu2      0.74153(0) [Å]{}   0.73180(0) [Å]{}   Zn1–Zn1      0 [Å]{}            0.59253(0) [Å]{}

\[tbl:bond300K\]

Fourier Transform Infrared and Raman Spectroscopy
-------------------------------------------------

Polycrystalline (Method 1) and crushed single crystals (Method 2) of barlowite and Zn-substituted barlowite were examined using attenuated total reflectance (ATR) Fourier Transform Infrared Spectroscopy (FTIR) at room temperature, shown in Figure \[fgr:IR\]A.
The spectra have a broad band of O–H stretches centered at $\sim$3100 cm$^{-1}$. The modes in the 700–1060 cm$^{-1}$ range are assigned to CuO–H and ZnO–H deformations, while the modes in the 400–700 cm$^{-1}$ range are likely due to Cu–O or Zn–O stretches (Figure \[fgr:IR\]B).[@Braithwaite2004; @Schuiskii2013; @Sithole2012] The modes shift slightly between barlowite and Zn-substituted barlowite, reflecting the mixture of Cu and Zn in the *M*O–H and *M*–O regions. Both barlowite samples have modes at 1056 and 1020 cm$^{-1}$ and a stronger mode at 850 cm$^{-1}$, which shift to 1040 and 782 cm$^{-1}$ when nearly a full equivalent of Zn is substituted into the structure (**Zn-1**). As **Zn-2** has much less Zn (0.33 compared to 0.95), its spectrum shows a combination of the two end points, resulting in broad modes at 1020, 845, and 780 cm$^{-1}$. In the *M*–O region (400–700 cm$^{-1}$), the mode at 553 cm$^{-1}$ is found in all samples. The relative strength of the mode at 490 cm$^{-1}$ in **Zn-1**, which is seen only as a weak shoulder in **Zn-2**, may be due to the much higher amount of Zn present in **Zn-1**. These assignments are supported by a shift in the O–H band and in the modes in the 700–1060 cm$^{-1}$ region upon deuteration, whereas features below 600 cm$^{-1}$ are unaffected (see Figure S4). While the presence of fluorine may affect the stretching frequencies compared to herbertsmithite, the recent assignment of all modes below 1100 cm$^{-1}$ as F–H or F–D stretches[@Pasco2018] is not supported by other work on the spectroscopy of Cu- or Zn-containing hydroxy minerals[@Braithwaite2004; @Schuiskii2013; @Sithole2012] and does not explain the shift in modes within the 400–1060 cm$^{-1}$ region between barlowite and Zn-substituted barlowite. While slight spectral differences between barlowite and Zn-substituted barlowite are expected due to the substitution of Zn, both synthetic routes produce spectroscopically equivalent samples.
![A), B) FTIR and C) Raman spectra ($\lambda$ = 532 nm) of barlowite and Zn-substituted barlowite comparing the two synthesis methods.[]{data-label="fgr:IR"}](Fig2.png) Raman spectroscopy at room temperature was performed on polycrystalline (Method 1) and single crystalline (Method 2) barlowite and Zn-substituted barlowite with a laser excitation of 532 nm (Figure \[fgr:IR\]C). There is a strong mode at approximately 75 cm$^{-1}$ and weaker modes at 185 and 430 cm$^{-1}$ in all samples, with minor shifts. Both barlowite samples have a relatively strong mode at 520 cm$^{-1}$, while in **Zn-1** it shifts to 500 cm$^{-1}$. **Zn-2** contains both modes, reflecting its mixture of Zn and Cu on the interlayer site. While the spectra show good agreement between the two barlowite samples, some differences exist between the Zn-substituted barlowite samples. There are additional peaks at 350 and 985 cm$^{-1}$ in **Zn-1**; we hypothesize that these may be due to the larger amount of Zn present compared to **Zn-2**.

Magnetic Susceptibility
-----------------------

Low-field ($\mu_0$*H* = 0.005 T) zero field cooled (ZFC) and field cooled (FC) DC susceptibility measurements were performed on polycrystalline **1** and a collection of single crystals of **2** (Figure \[fgr:susc\]A). **1** has a steep onset at *T$_N$* = 15 K, consistent with previous reports.[@Han2014; @Jeschke2015] However, **2** has a gradual onset at *T$_N$* = 11 K as well as a second transition at *T* = 6 K. The higher temperature transition in both samples appears to have some ferromagnetic (FM) character, as indicated by the bifurcation between the FC and ZFC measurements.[@Domenicali1950] The magnitude of the magnetization of **1** is approximately twice that of **2** at *T* = 2 K.
Barlowite synthesized by Method 1 using a longer dwelling time at 175 $^{\circ}$C (17 days instead of 3 days; denoted **1-a**) exhibits a higher ordering temperature (*T$_N$* $\approx$ 16.5 K) as well as a different response between *T* = 2 and 15 K, yielding a larger magnitude of the magnetization. The difference in low temperature magnetic properties between materials with seemingly identical room temperature structures and spectroscopic properties calls into question the validity of comparing samples reported in the literature using different synthesis methods. Curie-Weiss fits of high temperature (*T* = 180–350 K) inverse susceptibility data of barlowite (Figure \[fgr:susc\]B) reveal slight differences between the two methods. A diamagnetic correction $\chi_0$ = -0.00025 emu/mol was obtained from an initial fit for **1**, and this value was fixed for all subsequent Curie-Weiss fits, with the assumption that the difference in diamagnetic correction between the samples is negligible. As shown in Table \[tbl:CW\], there is good agreement between the values of the effective magnetic moment ($\mu_{eff}$) per Cu$^{2+}$ ion and the *g* factor (assuming S = 1/2) for all barlowite samples. The values of the molar Curie constant (C) are slightly different but both reasonable for Cu$^{2+}$. The extracted Weiss temperatures ($\Theta$) are quite large, indicating strong antiferromagnetic (AFM) interactions, and both show good agreement with the reported value ($\Theta$ = -136(10) K).[@Han2014] The deviations in susceptibility from the Curie-Weiss fit below 180 K and the large ratios between the Weiss temperature and the Néel transition temperature indicate a high degree of magnetic frustration, yielding a frustration index *f* greater than 8 for all barlowite samples.[@Ramirez1994] ![A) ZFC (closed symbols) and FC (open symbols) magnetization of barlowite and **Zn-1** measured in an applied field of $\mu_0$*H* = 0.005 T.
B) Inverse susceptibility data and Curie-Weiss fits extrapolated to the Weiss temperature.[]{data-label="fgr:susc"}](Fig3.png) The Curie-Weiss fit of **1-a** (shown in Figure S5) yields a larger Weiss temperature than **1**, suggesting that a longer synthesis dwelling time affects the magnetism through a process akin to annealing, allowing defects within the structure to move to more energetically favorable positions. Barlowite synthesized via Method 2 (**2**) affords a third set of magnetic properties: it yields the highest Weiss temperature and highest frustration index *f* but the lowest FM transition temperature. The differences between **1**, **1-a**, and **2**, which give identical PXRD patterns at room temperature, imply that the magnetism is disproportionately affected by subtle differences in defects controlled by synthesis conditions.

                            Barlowite[@Han2014]   **1**      **1-a**    **2**       **Zn-1**
  ------------------------- --------------------- ---------- ---------- ----------- ----------
  C (K$\cdot$emu/mol)       –                     1.860(3)   1.863(9)   1.937(11)   1.431(4)
  $\Theta$ (K)              -136(10)              -125(1)    -134(2)    -146(2)     -220(1)
  $\mu_{eff}$ ($\mu_{B}$)   –                     1.928(2)   1.929(5)   1.967(5)    1.953(3)
  *g* factor                2.27                  2.226(2)   2.228(5)   2.272(6)    2.255(3)
  Frustration index *f*     9.1(9)                8.3(6)     8.1(5)     13.3(12)    infinite

Polycrystalline samples of Zn-substituted barlowite without a magnetic transition down to *T* = 2 K have been reported,[@Feng2017; @Feng2017a] although ours is the first report of single crystals. Low-field ZFC and FC DC susceptibility measurements on polycrystalline **Zn-1** are also shown in Figure \[fgr:susc\]A. It shows no signs of magnetic order down to *T* = 2 K, suggesting a QSL ground state. Our syntheses of **Zn-2** have produced materials with 33% substitution of Zn$^{2+}$ on the interlayer site, which suppresses the ordering temperature to *T$_N$* = 4 K.
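The Curie-Weiss procedure described above reduces to a straight-line fit: with the diamagnetic term $\chi_0$ held fixed, $1/(\chi-\chi_0) = (T-\Theta)/C$ is linear in *T*. A minimal sketch (ours, not the analysis code used for this work), run on synthetic data built from the values reported for **1**:

```python
# Straight-line Curie-Weiss fit: with chi0 fixed, 1/(chi - chi0) = (T - Theta)/C
# is linear in T, so ordinary least squares over 180-350 K yields C and Theta.
def curie_weiss_fit(T, chi, chi0=-0.00025):
    """Return (C, Theta) from a least-squares line through 1/(chi - chi0)."""
    x = list(T)
    y = [1.0 / (c - chi0) for c in chi]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(t * t for t in x)
    sxy = sum(t * v for t, v in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope = 1/C
    intercept = (sy - slope * sx) / n                    # intercept = -Theta/C
    C = 1.0 / slope
    Theta = -intercept * C
    return C, Theta

# Synthetic susceptibility generated from the reported values for sample 1
# (C = 1.860 K.emu/mol, Theta = -125 K, chi0 = -0.00025 emu/mol).
T = [180 + 5 * i for i in range(35)]                    # 180-350 K window
chi = [1.860 / (t + 125) - 0.00025 for t in T]
C, Theta = curie_weiss_fit(T, chi)
```

Because the synthetic data lie exactly on a Curie-Weiss curve, the fit returns C = 1.860 K·emu/mol and Θ = −125 K to numerical precision; on real data the 180–350 K window matters because the susceptibility deviates from Curie-Weiss behavior below roughly 180 K.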
Curie-Weiss fits to the high temperature (180–350 K) inverse susceptibility data of **Zn-1** using a diamagnetic correction $\chi_0$ = -0.00025 emu/mol are shown in Figure \[fgr:susc\]B, and the extracted values are summarized in Table \[tbl:CW\]. The molar Curie constant is 76.9% of that of **1**, in good agreement with the theoretical value of 75% for a fully-substituted Zn-barlowite with three magnetic Cu$^{2+}$ ions to barlowite's four. The Weiss temperature $\Theta$ = -220(1) K is more negative than that of barlowite. The less negative values found for barlowite are likely due to a ferromagnetic component from magnetic interactions related to the interlayer Cu.

Heat Capacity
-------------

Heat capacity (HC) measurements were performed from *T* = 2.5–25 K on pressed pellets of polycrystalline **1** and **Zn-1** and a 2.0 mg single crystal of **2**. The powders were mixed with Ag powder to improve thermal conductivity; the contribution of Ag was removed by measuring and subtracting pure Ag. The two barlowite samples show markedly different behavior below 20 K, corroborating the magnetization data. In the molar HC data (*C*, Figure \[fgr:HC\]A), **1** exhibits a broad, asymmetric feature peaking at 13.5 K while **2** has a narrower peak centered at 6.5 K. **Zn-1** does not exhibit a magnetic transition or any other magnetic feature down to *T* = 2.5 K, consistent with a QSL ground state; its HC and that of **Zn-2** are topics of ongoing research and will be discussed further in future work. The small displacement between the curves above 20 K can be ascribed to the uncertainty in the mass normalization.
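The derived quantities quoted above follow from textbook relations: per magnetic ion, $\mu_{eff} \approx \sqrt{8C}\ \mu_B$ (with C in K·emu/mol), $g = \mu_{eff}/\sqrt{S(S+1)}$ for S = 1/2, and the frustration index $f = |\Theta|/T_N$. A quick numerical check of the reported numbers (the function names are ours, not from any analysis package):

```python
import math

MU_EFF_FACTOR = math.sqrt(8.0)  # mu_eff ~ sqrt(3*kB*C/(NA*muB^2)) ~ sqrt(8C) muB

def mu_eff_per_ion(C_molar, n_ions):
    """Effective moment in Bohr magnetons from a molar Curie constant."""
    return MU_EFF_FACTOR * math.sqrt(C_molar / n_ions)

def g_factor(mu_eff, S=0.5):
    """Lande g factor from the effective moment, assuming spin S."""
    return mu_eff / math.sqrt(S * (S + 1))

def frustration_index(Theta, T_N):
    """Ratio of the Weiss temperature to the ordering temperature."""
    return abs(Theta) / T_N

mu1 = mu_eff_per_ion(1.860, 4)   # barlowite 1: four Cu2+ per formula unit
g1 = g_factor(mu1)
f1 = frustration_index(-125, 15)
ratio = 1.431 / 1.860            # Zn-1 vs 1 Curie constants (75% expected)
```

For **1** this reproduces $\mu_{eff} \approx 1.93\ \mu_B$ per Cu$^{2+}$, $g \approx 2.23$, and $f \approx 8.3$, and gives the 76.9% Curie-constant ratio quoted for **Zn-1**.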
For both barlowite samples, the background was fit to the polynomial *C$_{bg}$* = *aT$^2$* + *bT$^3$* between *T* = 18–25 K, following a previous analysis.[@Han2014] One expects the cubic term (*bT$^3$*) to derive from the crystal lattice contribution; however, an additional quadratic term (*aT$^2$*) improved the fit in this range significantly and is likely related to an intrinsic contribution from the kagomé spins. Since we aim to examine the anomalies in the HC related to the magnetic transitions of **1** and **2** and directly compare them to those reported by Han et al.,[@Han2014] we treat this empirical polynomial fit as a background in this discussion. ![Heat capacity (HC) measurements on **1** and **Zn-1** (pressed powder mixed with Ag) and **2** (single crystal). A) Molar HC; the dashed lines indicate *C$_{bg}$* for each sample. B) *C$_{mag}$* calculated by subtracting the background from the molar HC. C) Magnetic entropy normalized as a fraction of the total value per Cu.[]{data-label="fgr:HC"}](Fig4.png) *C$_{bg}$* was subtracted from *C* to obtain the HC related to the magnetic transition (*C$_{mag}$*, Figure \[fgr:HC\]B); *C$_{mag}$*/*T* was integrated from 2.5–25 K to determine the entropy released by this transition (*S*, Figure \[fgr:HC\]C). The *C$_{mag}$* of **1** has a sharp onset and a plateau between 7–14 K. While the onset temperature for **1** ($\sim$15 K) is the same as that reported by Han et al.[@Han2014] for barlowite synthesized using precursors similar to our Method 1a, **1** has a much broader and flatter plateau down to $\sim$7 K than the reported barlowite. This is due to the different relative intensity of a shoulder at $\sim$7 K: it is weaker than the 15 K feature in the Han et al. sample but equally as strong as the 15 K feature in **1**.
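The background-subtraction and entropy-integration procedure can be sketched numerically. In the fragment below the lattice coefficients and the Gaussian stand-in for the magnetic anomaly are invented for illustration (this is not the analysis code used for this work); the point is that fitting *C$_{bg}$* = *aT$^2$* + *bT$^3$* over the transition-free 18–25 K window recovers the background, after which the trapezoidal integral of *C$_{mag}$*/*T* gives the released entropy:

```python
import math

def fit_background(T, C, lo=18.0, hi=25.0):
    """Least-squares fit of C_bg = a*T^2 + b*T^3 over the window [lo, hi]."""
    pts = [(t, c) for t, c in zip(T, C) if lo <= t <= hi]
    s22 = sum(t ** 4 for t, _ in pts)         # normal-equation sums
    s23 = sum(t ** 5 for t, _ in pts)
    s33 = sum(t ** 6 for t, _ in pts)
    r2 = sum(c * t ** 2 for t, c in pts)
    r3 = sum(c * t ** 3 for t, c in pts)
    det = s22 * s33 - s23 * s23
    a = (r2 * s33 - r3 * s23) / det
    b = (s22 * r3 - s23 * r2) / det
    return a, b

def entropy(T, C_mag):
    """Trapezoidal integral of C_mag / T dT (entropy released)."""
    y = [c / t for t, c in zip(T, C_mag)]
    return sum(0.5 * (y[i] + y[i + 1]) * (T[i + 1] - T[i])
               for i in range(len(T) - 1))

# Synthetic heat capacity: an invented lattice background plus a Gaussian
# stand-in for a magnetic anomaly near 13.5 K (illustration only).
a0, b0 = 2.0e-3, 1.5e-4
T = [2.5 + 0.1 * i for i in range(226)]       # 2.5 K to 25 K
peak = [0.8 * math.exp(-0.5 * ((t - 13.5) / 1.0) ** 2) for t in T]
C = [a0 * t ** 2 + b0 * t ** 3 + p for t, p in zip(T, peak)]

a, b = fit_background(T, C)                   # recovers approximately (a0, b0)
C_mag = [c - (a * t ** 2 + b * t ** 3) for t, c in zip(T, C)]
S_mag = entropy(T, C_mag)                     # close to the entropy of the peak
```

The fit succeeds here because the invented anomaly is negligible above 18 K; on real data, any magnetic weight remaining in the fit window biases the background and produces artifacts like the entropy plateaus above 20 K noted in the text.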
That subtle differences occur in the HC between these two samples (whose synthesis methods are much closer to each other than those of **1** and **2**) further reveals the dependence of the physical properties upon synthesis conditions. Compared to **1** and the Han et al. sample, **2** has a broader onset and a sharp peak at 6 K, potentially correlated with the two transitions seen in the magnetization data. The magnetic entropy per Cu released by the transition to long-range magnetic order is plotted in Figure \[fgr:HC\]C and is qualitatively similar to the literature report.[@Han2014] Barlowite **2** has a steeper onset at lower temperature and reaches a higher value than **1**. For both samples, the magnetic entropy is significantly lower than the value expected if all of the Cu spins became ordered at the transition. This may be intrinsic and due to the formation of dynamic spin correlations, as suggested in previous work.[@Han2014] The plateaus in the entropy above 20 K are artifacts of the background subtraction.

Discussion
----------

Overall, the results described above point to a complex picture of materials issues affecting the structure and properties of barlowite. The two versions of barlowite synthesized here differ significantly in their magnetic and thermodynamic properties. The two synthesis methods utilize different sources of Cu$^{2+}$ ions, and we posit that this leads to distinct reaction mechanisms. When barlowite is synthesized from CuF$_2$ (Method 2), the Cu–F bonds must break so that the Cu–O and Cu–Br bonds can form, while each Cu$^{2+}$ ion in the starting material of Method 1 already has four Cu–O bonds. These distinct reaction pathways and transition states could make different types of lattice defects more energetically favorable in each variant of barlowite.
Defects in natural minerals are common, depending greatly on the environment during crystal formation, and can affect magnetic and physical properties dramatically.[@Schock1985; @Hobbs1984] The relatively mild temperature and pressure conditions under which barlowite and other Cu-containing oxysalt minerals crystallize may permit defects such as oxygen or copper vacancies to form, and small differences in these environments seem to have a large effect upon the resulting material. The family of copper oxysalt minerals contains a wide diversity of stable coordination environments for its Cu$^{2+}$ ions;[@Burns1995] small divergences are thus not likely to destabilize the overall structure. Given that the two variants of barlowite are indistinguishable crystallographically at room temperature, either the difference in defects alone is enough to affect the physical properties, which is plausible given the effect of Cu/Zn site mixing upon the magnetic properties of herbertsmithite,[@DeVries2012; @Han2016b] or there may be some difference in low temperature structure engendered by the different reaction pathways. Materials synthesis provides the ability to optimize growth conditions to make a sample as pure as possible in order to measure the intrinsic properties. In the case of barlowite, we present two options and must now determine which is the “true” or “best” barlowite. In some areas, such as semiconductor processing, the “best” materials are those that are the most defect-free. As direct measurements of the defect levels in barlowite via transport measurements are complicated by its insulating nature, other metrics must be considered. Potential evaluation criteria for barlowite, taking into consideration that it is the parent compound to a quantum spin liquid material characterized by the lack of long-range magnetic order, could include the temperature of magnetic ordering transitions or the ease of synthesizing crystals suitable for neutron scattering experiments.
However, more work must be done to investigate the low-temperature properties and rich physics of this system; perhaps both variants of barlowite will shed light upon the fundamental excitations of the frustrated antiferromagnetic Cu$^{2+}$ kagomé lattice.

Conclusion
==========

We re-examine the reported synthesis of barlowite (Cu$_4$(OH)$_6$FBr) and Zn-substituted barlowite (Cu$_3$Zn$_x$Cu$_{1-x}$(OH)$_6$FBr), and we present a novel method that yields large single crystals. These two synthetic routes yield barlowite and Zn-substituted barlowite with the same structure and FTIR and Raman spectra at room temperature. However, the magnetic properties of barlowite produced via these two methods diverge at low temperatures: Method 1 barlowite has a transition to long-range magnetic order at *T$_N$* = 15 K, matching previously reported magnetic properties, while Method 2 barlowite has a transition at *T$_N$* = 11 K and a second transition at *T* = 6 K. The heat capacity at low temperature also differs significantly between Method 1 and Method 2 barlowite. Given that both methods produce structurally equivalent materials, this difference raises questions about the role that synthesis-related defects play in the physical properties of barlowite and similar materials. Modifying the two synthesis methods yields Zn-substituted barlowite: Rietveld refinements, ICP-AES analysis, and magnetic data support the successful introduction of Zn into the structure. Method 1 produces polycrystalline Zn-substituted barlowite with a Zn content determined by ICP-AES to be 0.95; it does not order magnetically down to *T* = 2 K and shows highly frustrated behavior consistent with that of a quantum spin liquid material. While Zn-substituted barlowite synthesized via Method 2 orders at *T$_N$* = 4 K, consistent with its lower Zn content of 0.33, it produces the first single crystals of Zn-substituted barlowite. This provides a synthetic route towards the production of large single crystals suitable for neutron scattering.
Acknowledgments =============== The work at Stanford and SLAC was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-76SF00515. ChemMatCARS Sector 15 is principally supported by the Divisions of Chemistry (CHE) and Materials Research (DMR), National Science Foundation, under grant number NSF/CHE-1346572. Use of the PILATUS3 X CdTe 1M detector is supported by the National Science Foundation under the grant number NSF/DMR-1531283. Use of the Advanced Photon Source at Argonne National Laboratory was supported by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. R.S. was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program as well as an NSF Graduate Research Fellowship (DGE-1656518). We thank S. Lapidus for assistance at 11-BM, Y.-S. Chen and S.Y. Wang for assistance at 15-ID, and H.I. Karunadasa for generous access to equipment. Part of this work was performed at the Stanford Nano Shared Facilities (SNSF), supported by the NSF under award ECCS-1542152.
--- abstract: | Le texte concerne des généralisations de l’équation de Markoff en théorie des nombres, déduites des fractions continues. Il décrit la méthode pour une résolution complète de ces nouvelles équations, ainsi que leur interprétation en algèbre et en géométrie algébrique. Cette approche algébrique est complétée par un développement analytique concernant les groupes fuchsiens. Le lien avec la théorie de Teichmüller des tores percés est complètement décrit, les classifiant au moyen d’une théorie de la réduction. Des considérations plus générales au sujet des surfaces de Riemann, les géodésiques et leur étude hamiltonienne sont citées, de même que des applications à la physique, au bruit en $1/f$ et à la fonction zéta. Des idées relatives à d’importantes conjectures sont présentées. On donne aussi des raisons pour lesquelles la théorie de Markoff apparaît dans différents contextes géométriques, grâce à des résultats de décomposition valables dans le groupe $GL(2,\mathbb{Z})$.\ \ \ \ <span style="font-variant:small-caps;">Abstract.</span> The text deals with generalizations of the Markoff equation in number theory, arising from continued fractions. It gives the method for the complete resolution of such new equations, together with their interpretation in algebra and algebraic geometry. This algebraic approach is completed with an analytical development concerning Fuchsian groups. The link with the Teichmüller theory of punctured tori is completely described, and they are classified by means of a reduction theory. More general considerations about Riemann surfaces, geodesics and their Hamiltonian study are quoted, together with applications to physics, $1/f$ noise and the zeta function. Ideas about important conjectures are presented. Reasons why the Markoff theory appears in different geometrical contexts are given, thanks to decomposition results valid in the group $GL(2,\mathbb{Z})$.
author: - Serge Perrine date: '6 April 2003 (version 6)' title: | Investigations around the theory of Markoff ---

$$\text{''To see everything, to hear everything, to lose no idea''}$$ $$\text{\bfseries{Evariste Galois}}$$

$$\text{''To grasp the properties of things according to their mode of existence in the infinitely small''}$$ $$\text{\bfseries{Felix Klein's address on Bernhard Riemann and his influence}}$$

$$\text{''Without hope, one will not find the unhoped-for, which is undiscoverable and inaccessible''}$$ $$\text{\bfseries{Heraclitus}}$$

Acknowledgments
===============

My thanks go to the various people without whom this text would never have seen the light of day, and to all those who helped me put it into shape. I am thinking in particular of the following people:

- Georges Rhin, who throughout these last years paid attention to the various documents I sent him periodically.

- Michel Planat, with whom regular cooperation and fascinating discussions around physical observations he had made did much to sustain my curiosity about the Markoff theory. My interest in the subject came from considerations of information coding. But seeing the Markoff spectrum appear among the physical characteristics of a phase-locked oscillator considerably revived my work. By observing the behavior of custom-built oscillators, could we understand certain parts of this theory that are still enigmatic, and could we conversely build certain noise models useful to physics? These questions guided my work.
- Michel Mendès France and Michel Waldschmidt, who took an interest in my work on several occasions and gave me opportunities to refine and present it. I thank them very warmly for their encouragement and for their uncompromising comments, which I have always regarded as a source of progress.

I would also like to thank Cécile and the children for their great patience in bearing with the considerable time I spent on this work.

General presentation
====================

The aim of the present work is to describe a line of research conducted around the Markoff theory, together with the results it has produced. This theory is a branch of what Hermann Minkowski called the "geometry of numbers" [@Minkowski][@Cassels2]. It gives a partial answer to the following problem:

Given a real quadratic form $f(x,y)=ax^2+bxy+cy^2\in \mathbb{R}[x,y]$, what is the minimal value of $\mid f(x,y)\mid$ when $x$ and $y$ are integers, not both zero?

For a definite form $f(x,y)$, that is one with $\Delta (f)=b^2-4ac<0$, this problem was solved by Joseph Louis Lagrange.
Its solution also follows from a more general result of Charles Hermite [@Hermite], which gives: $$C(f)=\frac{\inf_{(x,y)\in \mathbb{Z}^2-\{(0,0)\}}\mid f(x,y)\mid }{\sqrt{\mid \Delta (f)\mid }}\leq \frac 1{\sqrt{3}}=C(x^2+xy+y^2).$$ It has also been proved ([@Cassels2] p. 33) that for every number $\rho \in ]0,(1/\sqrt{3})]$ one can find a definite quadratic form $f(x,y)\in \mathbb{R}[x,y]$ such that: $$\rho =C(f).$$ If the form $f(x,y)$ is indefinite, that is if $\Delta (f)=b^2-4ac>0$, it has been known since [@Korkine] that: $$C(f)\leq \frac 1{\sqrt{5}}=C(x^2-xy-y^2).$$ For the remaining values one has [@Korkine]: $$C(f)\leq \frac 1{\sqrt{8}}=C(x^2-2y^2).$$ It was in order to understand the indefinite case better that Andrei A. Markoff developed his theory [@Markoff]. It identifies the infinitely many values $C(f)$ lying between $(1/\sqrt{5})$ and $(1/3)$, together with the constant-free gaps separating them. These values are isolated and converge to $(1/3)$. For the values below $(1/3)$, until recently there existed no approach comparable to the Markoff theory. Fragmentary results exist on some constant-free gaps, but the overall situation remains poorly understood even today. A synthesis of what was known in 1988 was produced by Thomas W. Cusick and Mary E. Flahive [@Cusick], at the time when the author was defending his thesis on the same subject. The research carried out since then has built on the last two contributions cited. The goal was to go beyond the known results on the subject. A few results on new gaps of the spectrum were found, but fairly quickly the idea took root of seeking a generalization of the Markoff theory, in order to derive from it results analogous to those available above $(1/3)$.
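The constants just defined are easy to explore numerically. The following sketch is added here only for illustration: the function name `markoff_constant` and the finite search box are ad hoc choices, and scanning a finite box of lattice points only estimates the infimum (it is exact for the two extremal forms quoted above, whose minimum is attained at small points).

```python
from math import sqrt

def markoff_constant(a, b, c, box=50):
    """Estimate C(f) for f(x,y) = a*x^2 + b*x*y + c*y^2 by scanning
    lattice points in a finite box; a heuristic, since the true
    infimum may be approached only outside the box."""
    disc = b * b - 4 * a * c
    if disc <= 0:
        raise ValueError("expected an indefinite form (positive discriminant)")
    best = min(
        abs(a * x * x + b * x * y + c * y * y)
        for x in range(-box, box + 1)
        for y in range(-box, box + 1)
        if (x, y) != (0, 0)
    )
    return best / sqrt(disc)

# The two extremal forms quoted in the text:
print(markoff_constant(1, -1, -1))  # 1/sqrt(5), the maximum of the spectrum
print(markoff_constant(1, 0, -2))   # 1/sqrt(8)
```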
In parallel, the discovery in physics, around special oscillators, of physical values equal to the constants $C(f)$ given by the Markoff theory was particularly motivating. This achievement, due to Michel Planat [@Planat1], led to the idea of building particular oscillators making it possible to "see" the structure of the Markoff spectrum in places where it is chaotic enough to remain unknown to this day. Exploring this subject, and its possible link with a model of $1/f$ noise, itself still rather enigmatic, gradually became an important project. Building new theories analogous to Markoff's in this context appeared useful. Notation was therefore devised to make it possible to grasp new theories more general than the original Markoff theory. This objective, glimpsed at the end of the author's thesis work, had not led at that time to significant and complete examples. The approach consisted in understanding how to build, directly on sequences of positive integers, a tree-like generation process that always yields sequences attached to one and the same Diophantine equation of Markoff type. In this respect, the article [@Perrine] proved decisive. It provided this mode of construction for certain fairly general sequences, arranging for them to remain attached to the equation $$x^2+y^2+z^2=4xyz-x.$$ One thus obtained a complete theory yielding approximation constants converging to the value $(1/4)$, as well as a few gaps of the spectrum. It then became apparent that the construction discovered left invariant equations of a more general form. On this occasion, the natural link with Dedekind sums [@Rademacher] was brought to light.
This made it possible to identify other equations with which one can build approximation constants converging to $(1/3)$ as in the classical Markoff theory, but this time from below. Information was thus obtained on a totally unknown part of the spectrum. A complete example was worked out in detail [@Perrine4] for the equation $$x^2+y^2+z^2=3xyz+2x.$$ For this equation, all integer solutions in $\mathbb{N}$ and in $\mathbb{Z}$ were given. It is remarkable that, unlike the classical Markoff theory, the positive integer solutions fall into two classes rather than one. It was shown, however, how these two classes give rise to a single tree of Cohn triples, for which the construction on integer sequences applies completely. Cohn triples are defined in general by the condition $x>y>z$. The constants given by the preceding equation differ from those exhibited in the same zone of the Markoff spectrum by David J. Crisp and William Moran [@Crisp]. Thus the geometric model built by Harvey Cohn from the Poincaré half-plane $\mathcal{H}$, extended by the study of self-intersecting closed geodesics on the punctured torus [@Series], became insufficient to describe the complexity of the Markoff spectrum in the neighborhood of $(1/3)$. The project was therefore undertaken to revisit this geometric interpretation. This was carried out, and it made it possible to understand the nature of the equations that were progressively being identified.
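The small positive solutions of the equation $x^2+y^2+z^2=3xyz+2x$ can be exhibited by a naive finite search. This is an illustrative sketch only, not the resolution method of [@Perrine4]; the bound is arbitrary, and the equation is symmetric in $y,z$ but not in $x$.

```python
def solutions(bound=60):
    """Enumerate positive integer solutions of x^2 + y^2 + z^2 = 3xyz + 2x
    inside a finite box (naive search, for illustration only)."""
    sols = set()
    for x in range(1, bound + 1):
        for y in range(1, bound + 1):
            for z in range(1, bound + 1):
                if x * x + y * y + z * z == 3 * x * y * z + 2 * x:
                    sols.add((x, y, z))
    return sols

sols = solutions()
print(sorted(sols))  # small solutions such as (1, 3, 1) and (1, 8, 3) appear
```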
Before that, in [@Perrine7] the tree-like construction on sequences of positive integers was extended to exhibit other, slightly more general equations giving approximation constants in the neighborhood of $(1/3)$: $$x^2+y^2+z^2=3xyz+sx,\;\;s>0.$$ For such equations with $s>0$, the existence of a finite number of classes of solutions was shown in [@Perrine]. The same result also holds for $s\leq 0$. But while in one case ($s>0$) a notion of fundamental solution must be introduced to obtain this result, in the other case ($s\leq 0$) it is a different notion of minimal solution that allows one to conclude. Moreover, these last equations turn out to be linked to one another through the expression of the arithmetic minima of the associated binary quadratic forms. The preceding approach, which gives values $C(f)$ accumulating at $(1/3)$ from below, that is again in the high and poorly known part of the spectrum, was also extended to other situations. Thus a new example of a generalized Markoff theory was treated with the equation $$x^2+y^2+z^2=3xyz+yz-2x.$$ It made it possible to give a new interpretation of old work of Collin J. Hightower [@Hightower]. The corresponding accumulation point equals $1/(1+\sqrt{5})$. It was also understood how knowledge of one part of the spectrum yields information on a lower part of the spectrum. In the last case cited, it is the maximal value of the spectrum, $(1/\sqrt{5})$, that is decisive.
In the end, the right generalization of the Markoff theory was deemed to concern Diophantine equations denoted $M^{s_1s_2}(a,\partial K,u_\theta )$, where $s_1$ and $s_2$ are the respective signs of $\varepsilon _1$ and $\varepsilon _2\in \{-1,+1\}$, $a\in \mathbb{N}\backslash \{0\}$, $\partial K\in \mathbb{Z}$, $u_\theta \in \mathbb{Z}$: $$x^2+\varepsilon _2y^2+\varepsilon _1z^2=(a+1)xyz+(\varepsilon _2\partial K)yz-u_\theta x,\;\;x,y,z\in \mathbb{N}\backslash \{0\}.$$ This form of equation covers those mentioned above, so it seemed that this type of equation was the right one. And indeed it was shown how such equations arise naturally from a computation with continued fractions. It was also shown that they correspond to a trace formula as well as to a remarkable property of the Dedekind $\eta$ function [@Perrine9]. For these equations a general resolution method was developed, akin to the infinite descent dear to arithmeticians. An essential role is played by the triangle group $\mathbf{T}_3$, which classifies the solutions. It was also shown how resorting to Cohn triples makes it possible, in most cases, to conclude that there exists a class containing infinitely many solutions, and that there are finitely many such classes. This number of classes is in fact related to the class number of quadratic fields, but work remains to put this observation into presentable form. The surfaces whose equation has the form just given were studied directly. These cubic surfaces are rational, and a rational parametrization of them was given. Cut by a plane, they yield elliptic curves in many cases. All elliptic curves with rational coefficients are obtained in this way. This gives an idea of phenomena that can affect distinct elliptic curves carried by one and the same cubic surface.
A promising arithmetic subject that emerged in this way concerns the link between generalized Markoff theories and the structure of integer points on elliptic curves [@Perrine8]. Reflections in this last area are not finished. It was also shown that every complete lattice of a quadratic field gives rise to a cubic equation of the preceding type. This important result gives an algebraic meaning to the equations under study. It makes it easy to understand what was just said about the number of classes of solutions. All the preceding constructions also have a common analytic substrate, analogous to the one discovered by Harvey Cohn for the classical Markoff theory [@Cohn]. To understand this geometric interpretation better, punctured tori were studied directly. This introduced a distinction between parabolic and hyperbolic conformal punctured tori. The parabolic case gives a very satisfactory generalization of the Markoff theory, exhibiting Fuchsian groups which were shown to be free on two generators. These are exactly the Fricke groups, all of which are obtained in this way, but they correspond only to the equation of the classical Markoff theory, which characterizes them all. For the hyperbolic case, an original example was constructed illustrating the newly discovered fact that the corresponding Fuchsian groups are not free. Since the surfaces arising in this context, punctured tori, are quotients of the Poincaré half-plane by a Fuchsian group acting on it, Teichmüller theory [@Schneps] is a well-suited framework for approaching the subject. It was therefore studied in depth, to the point of giving a presentation that shows clearly how it generalizes the Markoff theory. Teichmüller theory describes the properties of the different conformal structures defining a given Riemann surface on one and the same topological support.
It determines, by reduction, a crystalline structure about which some information was given in the book [@Perrine9]. It was understood why no elliptic punctured tori need be considered, as well as the nature of the link between our generalized Markoff equations and Teichmüller theory. It was also seen that all parabolic conformal punctured tori defined on the same topological punctured torus can be distinguished by two positive real numbers. This kind of result has been known since the work of R. Fricke [@Cohn]. But the methods stemming from the Markoff theory lead one to restrict attention first to one number, a modulus between 1 and 2. Modulus 1 corresponds to the punctured torus of a so-called Klein group. Modulus 2 corresponds to the punctured torus of the Hecke group [@Hecke]. The two tori studied in [@Cohn] thus appear naturally. All intermediate moduli correspond to other parabolic conformal punctured tori that are isomorphic as topological spaces but not as Riemann surfaces. The fact that these tori are not conformally equivalent has interesting geometric consequences for the classification of the associated Fuchsian groups. This result was completed by showing that all parabolic punctured tori are classified by two real parameters, both defined from the single classical Markoff equation: $$x^2+y^2+z^2=xyz.$$ The modulus defines the fundamental domain, and a second real parameter, called accessory, describes how its edges are identified. The Markoff theories of the equations $M^{s_1s_2}(a,\partial K,u_\theta )$ lead very naturally to defining generators $A$ and $B$ of two-generator Fuchsian groups. In the parabolic case they give the well-known Fricke groups [@Rosenberger] [@Matelski]. This was proved rigorously.
The cases corresponding to matrices $A$ and $B$ with integer coefficients were completely described. This makes it possible to characterize the corresponding parabolic punctured tori. The reduction theory valid for algebraic numbers of degree 2 then extends to the generating systems of these Fricke groups. A result that follows [@Horowitz] concerns the determination of the representations of the two-generator group $\mathbf{F}_2$ in the groups $GL(2,\mathbb{Z})$. Going deeper into this question brought to light the link with the theorem of Dyer and Formanek [@Formanek]. Its classical proof relies on properties of the representations $\rho :Aut(\mathbf{F}_2)\longrightarrow GL(m,\mathbb{Z})$. The corresponding Markoff theories give such representations, arising from the two-generator group $\mathbf{F}_2$ in the group $GL(2,\mathbb{Z})$. Characterizing these representations is essential, and it was understood how this amounts, in most cases, to considering conformal structures on punctured tori. The link sketched on this occasion with knot theory would deserve to be pursued further [@Brumfiel], as if beyond torus knots one could introduce a new subcategory of knots tied to punctured tori. From these reflections one mainly obtained a better knowledge of the group $GL(2,\mathbb{Z})$. Two ternary decompositions that appear to be new were given in [@Perrine1b] for every matrix of $GL(2,\mathbb{Z})$. This makes it possible in particular to relate the classical Markoff theory to the structure of the triangle group $\mathbf{T}_3$ and to represent the latter in $GL(2,\mathbb{Z})$ by means of a dihedral group. It is likely that all finite groups give analogous results and make it possible to build tree structures, and it is conjectured that all of them can be represented in $GL(2,\mathbb{Z})$.
The author believes that all his generalizations of the Markoff equation are obtained by such a procedure. Some results have been obtained in this direction, but they are not yet presentable. An important consequence that could follow is the conjecture that every finite group is obtained as the class group of a real quadratic field. But is there a link between these last Markoff theories and the geodesics of the associated conformal punctured tori? Reflecting on this question, the author envisaged a field of application of his generalizations of the Markoff theory: the coding of geodesics on Riemann surfaces [@Schmutz2]. He explored the natural duality that exists between points and geodesics on such a surface. Unfortunately this apparently new study has not progressed enough to give rise to publication. Some elements were nevertheless given in chapter 7 of the book [@Perrine9]. The particular question of characterizing closed geodesics by the finite integer sequences that code them, and then of building various algebraic properties from those sequences, is very interesting. It is also important for understanding the ergodic approach [@Series] [@Series2] [@Schmutz2]. The geodesics depend on the conformal structure adopted on the topological punctured torus carrying them. The conformal transformations that change one closed geodesic into another define transcoding operations on the associated integer sequences. Here lies a prospect of application to information coding, in particular stream ciphering and pseudo-random generators. Every change of geodesic translates into a deformation of the algebraic structure of these sequences. Reflections on this subject have been numerous, but remain rather incomplete. Some leads for studying the problem further were given in chapter 7 of [@Perrine9].
In particular, it was recalled how the Hamiltonian approach to mechanics develops in such a context, emphasizing its quasi-functorial character. Some consequences follow for the very understanding of what mathematical computation [@Feynman] and certain physical objects are. A point moving freely along a closed geodesic can represent a stable, hence observable, physical system. Changes of solutions in our Diophantine equations then correspond to quantum jumps in the evolution of such a system along different geodesics on a punctured torus. This idea gives a quantum structuring to the system considered, a structure one may hope to find again in real systems. A comparable phenomenon occurs on the elliptic curves of one and the same surface given by our equations. From there, extending the problem to questions of statistical mechanics and ergodic theory is only a step, one long since taken by the symbolic dynamics work of Caroline Series [@Series] [@Series2]. The link is also obvious with the problem of "small divisors", the near-resonances of frequencies in quasi-periodic motion, and certain models of $1/f$ noise (see [@Arnold] [@Yoccoz] [@Herman] [@Dodson] [@Planat3]). Geodesics evoke the Euler-Lagrange calculus of variations and wave propagation, but also the KAM theorem and invariant tori. This last point also underlies the interest of various physicists in the Markoff theory [@Gutzwiller]. If a physical system evolves freely along geodesic trajectories that can be represented on a torus, by identification of two fundamental periodic motions, and if a point of this torus can never be reached, a generalized Markoff theory appears naturally.
The last three themes just mentioned are not completely exhausted by the research summarized here. On the other hand, this research also led to a very systematic deepening of Harvey Cohn's interpretation of the classical Markoff theory. It was thus established that this theory is encountered as soon as the group $GL(2,\mathbb{Z})$ of $2\times 2$ matrices of determinant $\pm 1$ comes into play. The essential reason brought to light is the existence in $GL(2,\mathbb{Z})$ of a non-normal dihedral subgroup $\mathbf{D}_6$ with $12$ elements, intrinsically defining a right quotient $GL(2,\mathbb{Z})/\Re _{\mathbf{D}_6}$ which is identified with the complete tree of the Markoff theory (respectively an equipotent left quotient $GL(2,\mathbb{Z})/_{\mathbf{D}_6}\Re$). This result accounts for the ubiquity of the triangle group $\mathbf{T}_3=\mathbf{C}_2*\mathbf{C}_2*\mathbf{C}_2$, the free product of three cyclic groups $\mathbf{C}_2$ with two elements, in situations as diverse as vector bundles, orders of quaternion algebras, Conway's topograph... [@Rudakov] [@Hirzebruch] [@Vigneras] [@Conway]. The article [@Perrine1b] develops this aspect and was taken up as chapter 6 of the book [@Perrine9]. Throughout the work carried out, a concern for overall coherence was maintained. The point was to leave the overly constraining framework of the single classical Markoff equation and to build other examples, while simultaneously seeking to understand how to grasp the "chaos" of the spectrum of approximation constants of algebraic numbers of degree 2. The aim was also to make it possible to master the applications to physics. These two concerns were the guiding threads of the approach developed over these last years.
Thus a differential operator intrinsically linked to the classical Markoff theory was sought and finally found, the question remaining open of computing its spectrum and comparing it to the Markoff spectrum. The method used to construct it carries over to the equations $M^{s_1s_2}(a,\partial K,u_\theta )$. It led to an interest in hypergeometric equations and in the Lamé equations that arise for the accessory parameters of punctured tori [@Keen5], which are just particular Schrödinger equations whose associated monodromy group can be studied [@Waall]. A developed presentation of the work just mentioned was given in the book [@Perrine9], which can be summarized as follows. A general formalism was worked out and its links with Dedekind sums described. The equations generalizing the classical Markoff equation were identified, and they were interpreted via a trace formula and the sums linked to the Dedekind $\eta$ function. Starting from these equations, their solutions were studied directly. This revealed structures generalizing the one discovered by A. A. Markoff. In a few particular examples, the classes of solutions for the action of the group $\mathbf{T}_3$ were described. The application to the study of the Markoff spectrum was detailed. The link was made with classical subjects of quadratic arithmetic, in particular the search for integer points on elliptic curves. The Fuchsian groups acting on the Poincaré half-plane $\mathcal{H}$ were studied, and the case of free groups on two generators was considered, together with the consequences for the structure of the group $GL(2,\mathbb{Z})$. This showed the algebraic importance of the classical Markoff theory and its link with $K$-theory and the Dyer-Formanek theorem on the automorphism group of a free group.
Studying Riemann surfaces in general, and the Teichmüller theory of the metrics on one and the same surface, numerous perspectives were set out in chapter 7 of the book [@Perrine9], seeking to clarify the context that gives rise to them. One of the points that seems most important to the author concerns the developments relating to the Dedekind $\eta$ function, its link with the Laplacian of objects with hyperbolic geometry, and its generalizations in nuclear physics. Some leads for reflecting on important conjectures were also given. The text that follows condenses the book just summarized, identifying the new results obtained. In each chapter, the first section states the problem addressed in the text that follows, and the last section summarizes the prospects for future research. The reader wishing to go straight to the essentials can therefore, beyond the present introduction, skip all the technical details presented in each chapter and read only the introductions and conclusions. In the detailed sections, only the essentials are given, dwelling neither on the definitions introduced nor on the computations carried out. For the most part the reader is referred to the book [@Perrine9], whose definitions are the generally accepted ones. Everything relating to classical definitions and well-known results has been left out as far as possible. Chapter 5 is devoted to the generalization of the Markoff theory to hyperbolic Riemann surfaces. Care was taken to identify themes that make sense with respect to the problem of coding and quantifying the information carried by such a surface, and more generally with respect to the limitations of the computation that models physics. The chapter contains few new results apart from the differential equation intrinsically linked to the Markoff theory.
It provides the viewpoint developed by the author for understanding the meaning of major conjectures still current. It also develops a deeper meaning of the Dedekind eta function, explaining its infinite product decomposition and the infinite products that follow from it for other classical functions, such as theta functions or elliptic functions. Some groundwork was also laid for making the link with solitons and with current work in noncommutative geometry ([@Connes] to [@Connes6]) and in the theory of quantum chaos. The text uses the same numbering system for propositions as the book [@Perrine9]. Within each chapter they are labeled by two numbers, but they are cited with a preceding number indicating the chapter where they appear. Some new elements discovered since the publication of the book [@Perrine9] have also been added, along with a few additional references that seem important. The bibliography is slightly broader than what is strictly used in the text, to facilitate further work in progress. $$$$

Generalization of the Markoff theory
====================================

Introduction
------------

Historically, the Markoff theory was built around 1880 using continued fractions [@Markoff]. It was then progressively reconsidered by putting the corresponding quadratic forms to the fore [@Cassels]. Today it is usually presented backwards, starting from the resolution of the Diophantine equation that concluded the two founding articles [@Cusick]: $$x^2+y^2+z^2=3xyz,\;\;x,y,z\in \mathbb{N}\backslash \{0\}.$$ At the beginning of the 20th century, the equations to be studied in order to build a generalization of this theory were sought, without success [@Frobenius].
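The tree of positive solutions of the classical equation can be generated by the Vieta involutions, which replace one coordinate by the other root of the quadratic it satisfies (e.g. $x\mapsto 3yz-x$). The sketch below is an illustration added to this summary, with arbitrary names and bound:

```python
def markoff_triples(limit=1000):
    """Generate sorted Markoff triples (x, y, z) with z <= limit for
    x^2 + y^2 + z^2 = 3xyz, starting from (1, 1, 1) and applying the
    three Vieta involutions, e.g. (x, y, z) -> (3yz - x, y, z)."""
    seen = set()
    stack = [(1, 1, 1)]
    while stack:
        t = tuple(sorted(stack.pop()))
        if t in seen or t[2] > limit:
            continue
        seen.add(t)
        x, y, z = t
        stack.extend([(3 * y * z - x, y, z),
                      (x, 3 * x * z - y, z),
                      (x, y, 3 * x * y - z)])
    return seen

triples = markoff_triples()
print(sorted(triples)[:5])  # begins (1,1,1), (1,1,2), (1,2,5), ...
```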
Taking up this problem again, the author considered that returning to continued fractions was the most realistic method for reaching such a goal. He was thus able to build a generalized formalism and the resulting Diophantine equations [@Perrine9], starting from the most general sequences of strictly positive integers $$S=(a_0,a_1,...,a_n).$$

Presentation of the theory
--------------------------

### Notation

The matrix of the sequence $S$ and its determinant are given by $$M_S=M_{(a_0,a_1,...,a_n)}=\left[ \begin{array}{cc} a_0 & 1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{cc} a_1 & 1 \\ 1 & 0 \end{array} \right] ...\left[ \begin{array}{cc} a_n & 1 \\ 1 & 0 \end{array} \right] =\left[ \begin{array}{cc} m & K_1 \\ m-K_2 & K_1-l \end{array} \right] ,$$ $$\varepsilon _S=\det (M_S)=(-1)^{n+1}.$$ The mirror sequence of $S$ is $S^{*}=(a_n,a_{n-1},...,a_0)$, and with $S$ one associates two sequences extended on the left and on the right, with $S\rhd =(\lhd S^{*})^{*}$ and: $$\lhd S=\left\{ \begin{array}{cc} (1,a_0-1,a_1,...,a_n) & \text{if }a_0\neq 1 \\ (a_1+1,...,a_n) & \text{if }a_0=1 \end{array} \right\} .$$ The matrices $M_S$ generate the group $GL(2,\mathbb{Z})$ of matrices of determinant $\pm 1$. They act on the real projective line $P^1(\mathbb{R})=\mathbb{R}\cup \{\infty \}$ or the complex line $P^1(\mathbb{C})=\mathbb{C}\cup \{\infty \}$ by $$\left[ \begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array} \right] (z)=\frac{\alpha z+\beta }{\gamma z+\delta },$$ with the classical notation for continued fractions: $$M_S(\infty )=[S]=[a_0,a_1,...,a_n]=a_0+\frac 1{a_1+\dfrac 1{...+\dfrac 1{a_n}}}.$$ The algebraic numbers of degree 2, called Markoff numbers, whose continued fraction expansion is periodic and can be written with a period $(S^{*},a)$, are denoted $\theta _a(S)=[0,\underline{S^{*},a}]$. An algebraic expression for them can be given.
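The identity $M_S(\infty)=[S]$ can be checked mechanically. The following sketch (added for illustration; `seq_matrix` and `cf_value` are ad hoc names) builds $M_S$ as the indicated matrix product and compares its action on $\infty$, namely the ratio of the first column, with the value of the continued fraction:

```python
from fractions import Fraction

def seq_matrix(S):
    """Product of the matrices [[a_i, 1], [1, 0]] over the sequence S."""
    M = ((1, 0), (0, 1))
    for a in S:
        (p, q), (r, s) = M
        M = ((p * a + q, p), (r * a + s, r))
    return M

def cf_value(S):
    """Value of the finite continued fraction [a_0, a_1, ..., a_n]."""
    v = Fraction(S[-1])
    for a in reversed(S[:-1]):
        v = a + 1 / v
    return v

S = (2, 1, 3)
M = seq_matrix(S)
# M_S(infinity) = m / (m - K_2) is the ratio of the first column of M_S
assert Fraction(M[0][0], M[1][0]) == cf_value(S)
print(M, cf_value(S))
```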
The generalized Markoff theory relies on a decomposition of the form $$S^{*}=(a_n,a_{n-1},...,a_0)=(X_1,b,X_2),$$ where the sequences $X_1$ and $X_2$ define sequence matrices in $GL(2,\mathbb{Z})$: $$M_{X_1}=\left[ \begin{array}{cc} m_1 & m_1-k_{12} \\ k_1 & k_1-l_1 \end{array} \right] \text{ with }\det (M_{X_1})=\varepsilon _1\in \{-1,+1\},$$ $$M_{X_2}=\left[ \begin{array}{cc} m_2 & m_2-k_2 \\ k_{21} & k_{21}-l_2 \end{array} \right] \text{ with }\det (M_{X_2})=\varepsilon _2\in \{-1,+1\}.$$ One thus obtains the following expressions: $$m=(b+1)m_1m_2+m_1k_{21}-m_2k_{12},\;\;\varepsilon _S=-\varepsilon _1\varepsilon _2.$$ Two auxiliary parameters $t_1$, $t_2$ and two important numbers $u$ and $\partial K$ are defined: $$t_1=k_1+k_{12}-m_1,\;\;t_2=k_2+k_{21}-m_2,$$ $$u=m_2t_1-m_1t_2,\;\;\partial K=\varepsilon _2(K_1-K_2).$$ They make it possible to evaluate: $$m_1k_2-m_2k_1=(b+1)m_1m_2-m-u,$$ $$\varepsilon _1m_2=K_1m_1-k_1m,\;\;\;\varepsilon _2m_1=k_2m-K_2m_2.$$ Solving the last two Bezout equations computes $K_1,K_2,k_1,k_2$ from the single triple $(m,m_1,m_2)$ and from $(\varepsilon _1,\varepsilon _2)$. The other parameters are then deduced. This makes it possible to reconstruct the sequence $S^{*}$ and its decomposition with $X_1$ and $X_2$. This method was used to build the first examples of generalized Markoff theories [@Perrine3]. The point discovered was that, for a given $(\varepsilon _1,\varepsilon _2)=(\pm 1,\pm 1)$, and up to the resolution of Bezout equations, the triple $(m,m_1,m_2)$ contains all the information needed to reconstruct the sequences $X_1$ and $X_2$, as well as $b$ and the sequence $S^{*}$, and then the associated matrix decomposition of $M_{S^{*}}$. One can check that there exists a sequence $T$, possibly empty, such that $X_1=(\lhd X_2^{*},c,T)$.
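The first Bezout equation, $\varepsilon_1 m_2=K_1m_1-k_1m$, reduces to a modular inverse computation when $\gcd(m_1,m)=1$. The sketch below is an illustration only (function name and the coprimality assumption are choices of this note, not a statement of the general method):

```python
def solve_bezout(m, m1, m2, eps1):
    """One integer solution (K1, k1) of K1*m1 - k1*m = eps1*m2,
    assuming gcd(m1, m) = 1 so that m1 is invertible mod m."""
    K1 = (eps1 * m2 * pow(m1, -1, m)) % m   # K1*m1 = eps1*m2 (mod m)
    k1 = (K1 * m1 - eps1 * m2) // m          # exact by construction
    return K1, k1

# Hypothetical numerical example: m = 5, m1 = 2, m2 = 2, eps1 = -1.
print(solve_bezout(5, 2, 2, -1))
```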
This imposes a partial mirror property on the sequence $S$: $$\lhd S^{*}=(X_2^{*},c,T,b,X_2).$$ Since the cases $T=\emptyset $ and $X_2=\emptyset $ may occur, this gives an essential result for the construction of the generalization of Markoff theory that we are seeking: Apart from the sequences $(1)$ and $(b,1)$, every sequence $S$ admits a decomposition $$S^{*}=(\lhd X_2^{*},c,T,b,X_2),$$ with $X_2$ and $T$ possibly empty sequences of strictly positive integers, and $b$ and $c$ strictly positive integers. ### Markoff form In the most general case, one has a matrix $M_{(S^{*},a)}$ corresponding to the period of the number $$\theta _a(S)=[0,\underline{S^{*},a}]=[0,\underline{\lhd X_2^{*},c,T,b,X_2,a}].$$ This matrix defines a quadratic form arising from the search for the fixed points of the Möbius transformation defined by the matrix $M_{(S^{*},a)}$ [@Cohn2] [@Series]. This indefinite integral binary quadratic form, called the Markoff form, reads: $$\begin{aligned} mF_\theta (x,y) &=&mx^2+(((a+1)m-K_2)-K_1)xy-((a+1)K_1-l)y^2 \\ &=&m(x-\theta _a(S)y)(x-\overline{\theta _a(S)}y).\end{aligned}$$ A direct computation gives [@Perrine][@Perrine1]: One has: $$F_\theta (K_1,m)=F_\theta (K_2-(a+1)m,m)=\varepsilon _1\varepsilon _2=-\varepsilon _S,$$ $$F_\theta (K_1x+((a+1)K_1-l)y,mx+((a+1)m-K_2)y)=-\varepsilon _SF_\theta (x,y).$$ ### Reduction The reduction theory of binary quadratic forms goes back to C. F. Gauss [@Gauss]. It deals with the indefinite quadratic forms written with real coefficients $\lambda \in \mathbb{R}\backslash \{0\}$ and $\beta $, $\gamma \in \mathbb{R}$ $$\lambda f(x,y)=\lambda (x^2+\beta xy+\gamma y^2).$$ Each one has a strictly positive discriminant $\Delta (\lambda f)=\lambda ^2(\beta ^2-4\gamma )=\lambda ^2\Delta (f)$.
It has an arithmetic minimum $$m(\lambda f)=\inf_{(x,y)\in \mathbb{Z}^2-\{(0,0)\}}\left| \lambda f(x,y)\right| =\left| \lambda \right| m(f).$$ This yields its Markoff constant, which does not depend on the coefficient $\lambda $ $$C(\lambda f)=m(\lambda f)/\sqrt{\Delta (\lambda f)}=m(f)/\sqrt{\Delta (f)}=C(f).$$ The Markoff spectrum is defined as the set of all the Markoff constants of real indefinite quadratic forms. It has a distinguished subset $Mark$ consisting of the constants of indefinite quadratic forms with integer coefficients. This is the quadratic spectrum. The link between the two spectra has been the subject of various works [@Cusick][@Tornheim]. The equivalence of two forms $\lambda f$ and $\lambda ^{\prime }f^{\prime }$ is defined with integers $v_{11}$, $v_{12}$, $v_{21}$, $v_{22}$ satisfying: $$\lambda ^{\prime }f^{\prime }(v_{11}x+v_{12}y,v_{21}x+v_{22}y)=\lambda f(x,y),\;\;v_{11}v_{22}-v_{12}v_{21}=\pm 1.$$ With notation comparable to that of A. A. Markoff [@Markoff], it yields the classical reduction lemma: For every real indefinite quadratic form $\lambda f(x,y)$ there exists an equivalent reduced form $\lambda _0f_0(x,y)$ satisfying the following conditions: $$\lambda _0f_0(x,y)=\lambda _0(x^2+\beta _0xy+\gamma _0y^2)=\lambda _0(x-\xi _0y)(x-\xi _0^{\prime }y),$$ $$\xi _0=\frac{-\beta _0+\sqrt{\beta _0^2-4\gamma _0}}2=[\alpha _0,\alpha _1,...,\alpha _j,...]>1,$$ $$-1<\xi _0^{\prime }=-(1/\eta _0)=\frac{-\beta _0-\sqrt{\beta _0^2-4\gamma _0}}2=-[0,\alpha _{-1},\alpha _{-2},...,\alpha _{-j},...]<0.$$ The sequence of strictly positive integers $(\alpha _n)_{n\in \mathbb{Z}}$ is associated in a unique way (up to the symmetry $\alpha _j\rightarrow \alpha _{-j}$ and the shifts $\alpha _j\rightarrow \alpha _{j+t}$ where $t\in \mathbb{Z}$) with $\lambda f(x,y)$.
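To make the definitions of $m(f)$ and $C(f)$ concrete, here is a small brute-force Python sketch (our own helper names; it is only a numerical illustration, valid when the infimum is attained on short vectors, which is the case for the forms tested below):

```python
import math
from itertools import product

def arithmetic_minimum(f, bound=40):
    """Approximate m(f) = inf |f(x, y)| over nonzero integer pairs by brute
    force in a box; for the small forms below the infimum is attained on
    short vectors, so a modest box is enough."""
    return min(abs(f(x, y))
               for x, y in product(range(-bound, bound + 1), repeat=2)
               if (x, y) != (0, 0))

def markoff_constant(a, b, c, bound=40):
    """C(f) = m(f)/sqrt(Delta(f)) for the indefinite integer form
    f(x, y) = a x^2 + b xy + c y^2 (so b^2 - 4ac > 0)."""
    disc = b * b - 4 * a * c
    assert disc > 0, "the form must be indefinite"
    f = lambda x, y: a * x * x + b * x * y + c * y * y
    return arithmetic_minimum(f, bound) / math.sqrt(disc)

# x^2 - xy - y^2: discriminant 5, arithmetic minimum 1, hence C = 1/sqrt(5).
assert abs(markoff_constant(1, -1, -1) - 1 / math.sqrt(5)) < 1e-12
```

The value $1/\sqrt{5}$ found here is the largest element of the Markoff spectrum, attained by the form associated with the golden ratio.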
Considering the various values $$\xi _j=[\alpha _j,\alpha _{j+1},...,\alpha _{2j},...]>1,$$ $$-1<\xi _j^{\prime }=-(1/\eta _j)=-[0,\alpha _{j-1},\alpha _{j-2},...,\alpha _0,...]<0,$$ $$\frac 2{L_j}=\xi _j-\xi _j^{\prime }=\sqrt{\beta _j^2-4\gamma _j},$$ one defines for every $j\in \mathbb{Z}$ a reduced form equivalent to $\lambda f(x,y)$: $$\lambda _jf_j(x,y)=\lambda _j(x^2+\beta _jxy+\gamma _jy^2)=\lambda _j(x-\xi _jy)(x-\xi _j^{\prime }y).$$ The number $\lambda _j=\lambda _jf_j(1,0)$ is represented by the form $\lambda f(x,y)$. And one has: $$C(\lambda f)=C(f_0)=C(f_j)=\inf_{j\in \mathbb{Z}}(\frac{L_j}2).$$ Since [@Markoff], it has been clear that working with Markoff forms is equivalent to using the classical reduction theory of quadratic forms: Every indefinite quadratic form $f(x,y)$ with integer coefficients defines a finite number of Markoff forms $F_\theta (x,y)$ equivalent to $f(x,y)$, with corresponding Markoff numbers $\theta _a(S)$ lying between 0 and 1, and associated sequences $(S^{*},a)$.
Moreover, the following properties are equivalent: 1/ $F_\theta (x,y)$ is a Markoff form 2/ $F_\theta (-x,y)$ is a reduced form ### Computation of the constants and Diophantine approximation The study of the quadratic spectrum $Mark$ inside the Markoff spectrum can be carried out exhaustively by studying [@Perrine] the constants of the forms $F_\theta (x,y)$: $$\Delta (F_\theta )=\left[ \frac{((a+1)m+K_1-K_2)^2-4\varepsilon _1\varepsilon _2}{m^2}\right] =\frac{\Delta _a(S)}{m^2},$$ $$0<m(F_\theta )=\inf \{\mid F_\theta (x,y)\mid ;(x,y)\in \mathbb{Z}^2-\{(0,0)\}\}=\frac{m-s}m\leq F_\theta (1,0)=1.$$ The theory of the Klein polygon [@Klein2] allows one to write $$0<C(F_\theta )=m(F_\theta )\frac m{\sqrt{\Delta _a(S)}}=\frac{m-s}{\sqrt{\Delta _a(S)}}\leq \frac m{\sqrt{\Delta _a(S)}}.$$ It provides a link with Diophantine approximation: Let $\theta _a(S)$ be a real algebraic Markoff number of degree 2 associated with the form $F_\theta (x,y)$; the set of accumulation points of the set $$\{\mid q(q\theta _a(S)-p)\mid ;p,q\in \mathbb{Z}\},$$ is finite and can be written in the form $$\{\frac{\mid m_j\mid }{\sqrt{\Delta _a(S)}};m_j\in \mathbb{Z}^{*}\},$$ where $m_j$ is an integer represented by the form $mF_\theta (x,y)$ at a convergent $(p_j/q_j)$ of $\theta _a(S)=[0,\underline{S^{*},a}]$: $$mF_\theta (p_j,q_j)=m_j.$$ It is also the set of accumulation points of the set $$\{\mid q(q\overline{\theta _a(S)}-p)\mid ;p,q\in \mathbb{Z}\}.$$ Its smallest value is none other than the Markoff constant $C(F_\theta )=C(\theta _a(S))$. Its largest value can be very different from $C(\theta _a(S))$.
And writing $$\theta _a(S)=[0,\underline{S^{*},a}]=[b_0,b_1,b_2,...],$$ one can also express, with the convergents of this number, $$q_j(q_j\theta _a(S)-p_j)=\frac{(-1)^j}{(b_{j+1}+[0,b_{j+2},b_{j+3},...]+[0,b_j,b_{j-1},...,b_1])},$$ $$C(F_\theta )=C(\theta _a(S))=\frac 1{\lim \sup_{j\rightarrow \infty }(b_{j+1}+[0,b_{j+2},b_{j+3},...]+[0,b_j,b_{j-1},...,b_1])}.$$ ### Positive and negative extrema One is led to ask whether the arithmetic minimum of $F_\theta $ is attained positively or negatively. Denote by $\nu _\theta $ the largest strictly negative value represented by $F_\theta $ and by $\mu _\theta $ the smallest strictly positive value represented by $F_\theta $. Set: $$1\geq \mu _\theta =\frac{m-s_\mu }m>0,\;\;\nu _\theta =-\frac{m-s_\nu }m<0.$$ The situation where $-\nu _\theta =\mu _\theta $, as in classical Markoff theory, is exceptional. This is why it should no longer be used as a decisive argument in the study of Markoff constants, as has been done ever since the work of Remak [@Remak], notably in [@Cassels]. Considering the period of the Markoff number associated with $(S^{*},a)=(\lhd X_2^{*},c,T,b,X_2,a)$, one is led to ask whether the numbers $b$ and $c$ do not determine the way in which the form $F_\theta $ attains its values $\mu _\theta $ or $\nu _\theta $.
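The $\limsup$ formula above can be checked numerically for purely periodic expansions, where the $\limsup$ is a maximum over one period. A Python sketch under that assumption (helper names are ours; tails are truncated, which is harmless since they converge geometrically):

```python
import math

def cf_tail(digits):
    """Value of [0, d_1, d_2, ...] for a finite list of partial quotients."""
    val = 0.0
    for d in reversed(digits):
        val = 1.0 / (d + val)
    return val

def markoff_constant_from_cf(period, terms=200):
    """C(theta) for theta = [0, period, period, ...], using the limsup
    formula: C = 1 / limsup_j (b_{j+1} + [0, b_{j+2}, ...]
    + [0, b_j, ..., b_1]), approximated with truncated tails."""
    digits = list(period) * (3 * terms // len(period) + 2)
    best = 0.0
    for j in range(terms // 2, 3 * terms // 2):   # stay away from both ends
        val = (digits[j] + cf_tail(digits[j + 1:j + 1 + terms])
               + cf_tail(digits[j - 1::-1][:terms]))
        best = max(best, val)
    return 1.0 / best

# theta = [0, 1, 1, 1, ...] (golden ratio case): C = 1/sqrt(5),
# the first constant of the classical Markoff spectrum.
assert abs(markoff_constant_from_cf((1,)) - 1 / math.sqrt(5)) < 1e-9
# All partial quotients equal to 2: C = 1/sqrt(8).
assert abs(markoff_constant_from_cf((2,)) - 1 / math.sqrt(8)) < 1e-9
```

Both checks recover the first two constants of the classical Markoff spectrum, $1/\sqrt{5}$ and $1/\sqrt{8}$.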
In fact this depends on $\varepsilon _1$ and $\varepsilon _2$, since one has: $$\dfrac 1{c+\dfrac 1{[T,b,X_2,a,\lhd X_2^{*},c,...]}+\dfrac 1{[X_2\rhd ,a,X_2^{*},b,T^{*},c,...]}}=\varepsilon _2\frac{mF_\theta (k_2,m_2)}{\sqrt{\Delta _a(S)}}>0.$$ $$\dfrac 1{b+\dfrac 1{[X_2,a,\lhd X_2^{*},c,T,b,...]}+\dfrac 1{[T^{*},c,X_2\rhd ,a,X_2^{*},b,...]}}=-\varepsilon _1\frac{mF_\theta (k_1,m_1)}{\sqrt{\Delta _a(S)}}>0.$$ Writing the last number in the form $$\frac{m-s_b}{\sqrt{\Delta _a(S)}},$$ one obtains the following essential result: With the preceding expressions defining $s_b$, one has: $$s_b=(b-a)m_1m_2-u.$$ This formula, noted in the article [@Perrine7], has a direct proof: $$\begin{aligned} mF_\theta (k_1,m_1) &=&mk_1^2+((a+1)m-K_2-K_1)k_1m_1-((a+1)K_1-l)m_1^2 \\ &=&k_1(mk_1-m_1K_1)+(a+1)m_1(mk_1-m_1K_1)+m_1(m_1l-K_2k_1) \\ &=&-\varepsilon _1(k_1+(a+1)m_1)m_2+\varepsilon _1m_1k_2 \\ &=&\varepsilon _1((b+1)m_1m_2-m-u)-\varepsilon _1(a+1)m_1m_2 \\ &=&-\varepsilon _1(m-((b-a)m_1m_2-u)).\end{aligned}$$ It complements Proposition 1.2.2: With the parameters introduced, the Markoff form satisfies $$\varepsilon _1F_\theta (k_1,m_1)=\varepsilon _2F_\theta (k_2-(a+1)m_2,m_2)=-((m+(a-b)m_1m_2+u)/m)<0.$$ Comparing the cases $\varepsilon _1=1$ and $\varepsilon _1=-1$ now gives: For every Markoff form $F_\theta $, one has the following upper bound for its arithmetic minimum: $$m(F_\theta )\leq \frac{m+u+(a-b)m_1m_2}m,$$ with the following inequalities: $$(b-a)m_1m_2<m+u<(a+b+2)m_1m_2-(a+1)\partial Km_2^2,$$ $$\partial Km_2<m_1.$$ Wanting to study the positive and negative extrema separately could lead one to consider each of the two parts of the Klein polygon on its own.
In fact the continued fractions suited to this purpose are the reduced regular continued fractions, known as Jung-Hirzebruch continued fractions, which read: $$\lbrack [a_0,a_1,...,a_n]]=a_0-\frac 1{a_1-\dfrac 1{...-\dfrac 1{a_n}}}.$$ These new convergents correspond [@Finkel'shtein] to vertices of the upper Klein polygon if and only if $a_n\neq 2$. They are related to the ordinary continued fractions used above ([@Hirzebruch0] (p. 215) [@Myerson] [@Dimca]) by the following general formula: $$\lbrack a_0,a_1,z]=[[a_0+1,2_{a_1-1},z+1]].$$ ### The generalized Markoff equation In the most general case, the existence of a Diophantine equation generalizing Markoff's can be brought out in several ways. As in [@Cassels], one can use a new quadratic form related to $F_\theta (x,y)$: $$\phi _\theta (z,y)=z^2+((a+1)m+K_1-K_2)zy-\varepsilon _Sy^2=m^2F_\theta (x,y),\;\;z=mx-K_1y.$$ It has the following multiplicativity property: One has $$\phi _\theta (z_1,y_1)\phi _\theta (z_2,y_2)=\phi _\theta (z_1z_2+\varepsilon _Sy_1y_2,y_1z_2+z_1y_2+((a+1)m+K_1-K_2)y_1y_2).$$ It is invariant under various transformations [@Cassels]: One has: $$\begin{aligned} \phi _\theta (z,y)=\phi _\theta (-z,-y) \\ =-\varepsilon _S\phi _\theta (y,-\varepsilon _Sz) \\ =\phi _\theta (z+((a+1)m+K_1-K_2)y,-y) \\ =\phi _\theta (-z,y-((a+1)m+K_1-K_2)\varepsilon _Sz) \\ =-\varepsilon _S\phi _\theta (y-\varepsilon _S((a+1)m+K_1-K_2)z,\varepsilon _Sz) \\ =-\varepsilon _S\phi _\theta (-y,-\varepsilon _Sz-((a+1)m+K_1-K_2)\varepsilon _Sy).\end{aligned}$$ This last proposition gives $\phi _\theta (-\varepsilon _1m_2,m_1)=m^2F_\theta (k_1,m_1)$, and the expression obtained for $s_b$ brings out the sought equation $M^{s_1s_2}(b,\partial K,u)$, whose terms depend only on the sequence $S^{*}$: Let $S^{*}=(a_0,a_1,...,a_n)=(X_1,b,X_2)$ be a sequence of positive integers yielding the parameters $m$, $m_1$, $m_2$, $\partial K$, $u$, $
\varepsilon _1$, $\varepsilon _2$; then the triple of integers $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$ is a solution of the Diophantine equation $M^{s_1s_2}(b,\partial K,u)$ $$m^2+\varepsilon _2m_1^2+\varepsilon _1m_2^2=(b+1)mm_1m_2+\varepsilon _2\partial Km_1m_2-um.$$ Writing $u_\theta =u+(a-b)m_1m_2=-s_b$ for every $a\in \mathbb{N}\backslash \{0\}$, the triple of integers $(m,m_1,m_2)$ also satisfies the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ $$m^2+\varepsilon _2m_1^2+\varepsilon _1m_2^2=(a+1)mm_1m_2+\varepsilon _2\partial Km_1m_2-u_\theta m.$$ ### Other proofs Three other proofs of this proposition have been found. They are detailed in the book [@Perrine9]. $\bullet $ The first generalizes Markoff's original computation [@Markoff]. $\bullet $ The second uses Dedekind sums [@Perrine6], whose link with the Markoff equation has long been recognized [@Hirzebruch] (pp. 158-165) through their classical reciprocity formula [@Rademacher]. The Dedekind sum is defined for $(\delta ,\gamma )\in \mathbb{Z}\times \mathbb{Z}-\{0\}$ as follows: $$s(\delta ,\gamma )=s(\delta ,\left| \gamma \right| )=\sum_{k=1}^{\left| \gamma \right| }\left( \left( \frac{k\delta }{\left| \gamma \right| }\right) \right) \left( \left( \frac k{\left| \gamma \right| }\right) \right) .$$ The first mention of the sums $s(\delta ,\gamma )$ appears in the study of the function $\eta $ made by R. Dedekind in his commentary on fragment XXVIII of B. Riemann [@Riemann] (p. 397). This function arose from Eisenstein's computations giving infinite products for the elliptic functions [@Weil4], analogous to those discovered by Euler for the trigonometric functions [@Euler2] (Vol. 1, ch. IX).
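The Dedekind sum just defined is straightforward to compute exactly, and the classical reciprocity formula cited as [@Rademacher] provides a strong consistency check. A Python sketch (exact rational arithmetic; the sawtooth $((x))$ is $0$ at integers and $x-\lfloor x\rfloor -1/2$ otherwise):

```python
from fractions import Fraction

def sawtooth(x):
    """((x)): 0 at integers, else x - floor(x) - 1/2."""
    if x == int(x):
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(d, c):
    """s(d, c) = sum_{k=1}^{|c|} ((k d / |c|)) ((k / |c|))."""
    c = abs(c)
    return sum(sawtooth(Fraction(k * d, c)) * sawtooth(Fraction(k, c))
               for k in range(1, c + 1))

# Rademacher's reciprocity law, for coprime c, d > 0:
# s(d, c) + s(c, d) = -1/4 + (c/d + d/c + 1/(c d)) / 12
for c, d in [(5, 3), (7, 4), (12, 5)]:
    lhs = dedekind_sum(d, c) + dedekind_sum(c, d)
    rhs = Fraction(-1, 4) + (Fraction(c, d) + Fraction(d, c)
                             + Fraction(1, c * d)) / 12
    assert lhs == rhs
```

Exact fractions are essential here: the sums are rationals with denominators that float arithmetic would not reproduce reliably.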
The Dedekind sum appears in the exponent giving $\varepsilon $, the $24$th root of unity in the transformation formula of $\eta $ under an element of $PSL(2,\mathbb{Z})$: $$\eta (\frac{\alpha \tau +\beta }{\gamma \tau +\delta })=\varepsilon (\gamma \tau +\delta )^{\frac 12}\eta (\tau ).$$ $\bullet $ The third proof interprets the equation $M^{s_1s_2}(b,\partial K,u)$ as a trace formula using the matrices $$A_b=M_{(\lhd X_2^{*},b)}=\left[ \begin{array}{cc} bm_2+k_{21} & m_2 \\ bk_2+l_2 & k_2 \end{array} \right] ,\;\;\varepsilon _A=\det (A_b)$$ $$B_c=M_{(X_1^{*}\rhd ,c)}=\left[ \begin{array}{cc} (c+1)m_1-k_1 & m_1 \\ (c+1)(m_1-k_{12})-(k_1-l_1) & m_1-k_{12} \end{array} \right] ,\;\;\varepsilon _B=\det (B_c)$$ $$A_bB_c=M_{(\lhd X_2^{*},b)}M_{(X_1^{*}\rhd ,c)}=M_{(\lhd S\rhd ,c)}=\left[ \begin{array}{cc} (c+1)m-K_1 & m \\ (c+1)K_2-l & K_2 \end{array} \right] .$$ These matrices lie in $GL(2,\mathbb{Z})$ and not merely in $SL(2,\mathbb{Z})$. Our equation follows from a formula of Fricke which gives for $tr(A_bB_cA_b^{-1}B_c^{-1})$ the value: $$\varepsilon _Atr(A_b)^2+\varepsilon _Btr(B_c)^2+\varepsilon _A\varepsilon _Btr(A_bB_c)^2-\varepsilon _A\varepsilon _Btr(A_b)tr(B_c)tr(A_bB_c)-2.$$ It suffices to compute by another method the trace of the commutator $A_bB_cA_b^{-1}B_c^{-1}$ in the case $b=c$ to recover our Diophantine equation as a simple trace formula [@Perrine9]. ### Complement In the general case there is no assumption to be made on the number $\delta =\gcd (m_1,m_2)$. This number can differ from $1$, and it divides $u$. It satisfies: One has the equalities $$\delta =\gcd (m_1,m_2)=\gcd (m_2,m)=\gcd (m,m_1)=\gcd (m,m_1,m_2).$$ The general situation thus differs clearly from classical Markoff theory, where one always has $\delta =1$.
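The Fricke-type trace identity quoted above, with the determinant signs $\varepsilon _A$, $\varepsilon _B$, can be verified on explicit $GL(2,\mathbb{Z})$ matrices. A Python sketch (matrices stored row-major as 4-tuples; our own helper names):

```python
def mul(A, B):
    """Product of 2x2 integer matrices stored row-major as (a, b, c, d)."""
    (a, b, c, d), (e, f, g, h) = A, B
    return (a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h)

def det(A):
    return A[0] * A[3] - A[1] * A[2]

def inv(A):
    """Inverse, valid for integer matrices of determinant +-1."""
    a, b, c, d = A
    s = det(A)
    return (d * s, -b * s, -c * s, a * s)

def tr(A):
    return A[0] + A[3]

def fricke_rhs(A, B):
    """eps_A tr(A)^2 + eps_B tr(B)^2 + eps_A eps_B tr(AB)^2
       - eps_A eps_B tr(A) tr(B) tr(AB) - 2, as in the text."""
    eA, eB, AB = det(A), det(B), mul(A, B)
    return (eA * tr(A) ** 2 + eB * tr(B) ** 2 + eA * eB * tr(AB) ** 2
            - eA * eB * tr(A) * tr(B) * tr(AB) - 2)

A = (2, 1, 1, 1)   # det +1
B = (1, 1, 1, 0)   # det -1: a genuine GL(2,Z) matrix outside SL(2,Z)
commutator = mul(mul(A, B), mul(inv(A), inv(B)))
assert tr(commutator) == fricke_rhs(A, B)
```

For $\varepsilon _A=\varepsilon _B=1$ the formula reduces to the classical Fricke identity $tr[A,B]=tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)-2$ on $SL(2,\mathbb{Z})$.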
Since this last condition is used in quite a central way in the exposition [@Cassels], notably through its Lemmas 5 and 6, one understands a posteriori why a change of paradigm was needed to bring out our generalization of Markoff theory. Perspectives ------------ The preceding computations apply to all indefinite binary quadratic forms. This explains why the Diophantine equations brought to light are very general. We have indicated that they are also given by a trace formula, as well as by a property of the Dedekind function $\eta $. These are entirely new results, opening a very substantial field of inquiry. One may seek to generalize the foregoing to homogeneous forms of higher degree or in more variables. In that context it may be necessary to favor an algorithm [@Gomory] [@Moussafir] [@Lachaud] [@Dimca] generalizing the Jung-Hirzebruch reduced regular continued fractions $[[a_0,a_1,...,a_n]]$, whose use in the foregoing can be made systematic. The Dedekind function $\eta $ comes from Eisenstein's computations for the decomposition of the elliptic functions into infinite products [@Weil4]. A natural question is whether there exists a function generalizing $\eta $ for other trigonometric functions. A further project is to derive from this sums more general than Dedekind's, and to understand what a corresponding reciprocity formula could be, together with an associated Diophantine equation. This project is within reach using the theory of Lie groups [@Baker]. Seeking more general trace formulas from there appears to be a subject of great importance. In connection with work of C. Procesi [@Procesi], another avenue concerns the study of a formula more general than Fricke's for the trace of the commutator of two $2\times 2$ matrices.
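The Jung-Hirzebruch fractions mentioned here convert to ordinary continued fractions by the formula $[a_0,a_1,z]=[[a_0+1,2_{a_1-1},z+1]]$ given earlier, where $2_{a_1-1}$ denotes $a_1-1$ repeated entries equal to $2$. A quick numerical check in Python (helper names are ours):

```python
from fractions import Fraction

def cf(seq):
    """Ordinary continued fraction [a_0, a_1, ..., a_n] (plus signs)."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a + 1 / val
    return val

def jh(seq):
    """Jung-Hirzebruch reduced continued fraction [[a_0, ..., a_n]]
    (minus signs)."""
    val = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        val = a - 1 / val
    return val

# [a_0, a_1, z] = [[a_0 + 1, 2, ..., 2, z + 1]] with a_1 - 1 twos
for a0, a1, z in [(1, 2, 3), (3, 4, 5), (2, 1, 7)]:
    lhs = cf((a0, a1, z))
    rhs = jh((a0 + 1,) + (2,) * (a1 - 1) + (z + 1,))
    assert lhs == rhs
```

Applied repeatedly from the left, this identity systematically converts any ordinary expansion into a reduced regular one.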
Likewise, in connection with what was seen for the positive and negative extrema, it is interesting to examine the consequences for the asymmetric approximations of irrational numbers and the classical result of B. Segre [@Alzer]. Complete resolution of our equations ==================================== Introduction ------------ Having identified a good generalization of the classical Markoff equation, we then studied the direct resolution of the Diophantine equation $M^{s_1s_2}(a,\partial K,u_\theta )$, where $s_1$ and $s_2$ are the respective signs of $\varepsilon _1$ and $\varepsilon _2\in \{-1,+1\}$, $a\in \mathbb{N}\backslash \{0\}$, $\partial K\in \mathbb{Z}$, $u_\theta \in \mathbb{Z}$: $$x^2+\varepsilon _2y^2+\varepsilon _1z^2=(a+1)xyz+(\varepsilon _2\partial K)yz-u_\theta x,$$ $$x,y,z\in \mathbb{N}\backslash \{0\}.$$ The aim was to understand how the solution triples, denoted $(m,m_1,m_2)$, are organized. A method of resolution was worked out on the particular cases $M^{++}(2,0,0)$, $M^{++}(2,0,-2)$, $M^{++}(3,0,1)$. It is essentially described in [@Perrine4]. This method is now complete and allows the resolution of all the equations $M^{s_1s_2}(a,\partial K,u_\theta )$. Method of resolution and consequences ------------------------------------- ### Invariance under the triangle group The classical method for solving the Markoff equation presented in [@Cassels], by avoiding redundancies between solution triples deducible from one another, actually breaks the structure of the set of solutions. To extend it to an equation $M^{s_1s_2}(a,\partial K,u_\theta )$, it is better to consider all the solutions, without restriction. To simplify the problem it is also useful to consider the solutions in $\mathbb{Z}^3$.
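The case $M^{++}(2,0,0)$ listed above is exactly the classical Markoff equation $x^2+y^2+z^2=3xyz$, which gives a concrete test of the general form of the equation. A Python sketch (our own predicate name):

```python
def markoff_equation(a, dK, u, e1, e2):
    """Predicate for M^{s1 s2}(a, dK, u):
    x^2 + e2 y^2 + e1 z^2 = (a+1) x y z + e2 dK y z - u x."""
    def holds(x, y, z):
        return (x * x + e2 * y * y + e1 * z * z
                == (a + 1) * x * y * z + e2 * dK * y * z - u * x)
    return holds

# M^{++}(2, 0, 0) is the classical Markoff equation x^2 + y^2 + z^2 = 3xyz.
classical = markoff_equation(2, 0, 0, 1, 1)
for triple in [(1, 1, 1), (2, 1, 1), (5, 2, 1), (13, 5, 1), (29, 5, 2)]:
    assert classical(*triple)
assert not classical(4, 2, 1)
```

The triples checked here are the first few classical Markoff triples, obtained from $(1,1,1)$ by the involutions introduced below.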
For any set of solutions in $\mathbb{Z}^3$, we say that its intersection with the set $(\mathbb{N}\backslash \{0\})^3$ is its imprint in $(\mathbb{N}\backslash \{0\})^3$. There are several ways to deduce one solution in $\mathbb{Z}^3$ from another. The equation $M^{s_1s_2}(a,\partial K,u_\theta )$ is invariant under the following involutions: $$N:(x,y,z)\longmapsto (x,-y,-z).$$ $$X:(m,m_1,m_2)\longmapsto ((a+1)m_1m_2-m-u_\theta ,m_1,m_2)=(m^{\prime },m_1,m_2),$$ $$Y:(m,m_1,m_2)\longmapsto (m,\varepsilon _2((a+1)mm_2+\varepsilon _2\partial Km_2)-m_1,m_2)=(m,m_1^{\prime },m_2),$$ $$Z:(m,m_1,m_2)\longmapsto (m,m_1,\varepsilon _1((a+1)mm_1+\varepsilon _2\partial Km_1)-m_2)=(m,m_1,m_2^{\prime }).$$ They satisfy the conditions $$N^2=X^2=Y^2=Z^2=Id,$$ $$XN=NX,\;YN=NY,\;ZN=NZ.$$ For $\varepsilon _1=\varepsilon _2$, there is another involution leaving the equation invariant: $$P:(x,y,z)\longmapsto (x,z,y).$$ It satisfies: $$P^2=Id,\;XP=PX,\;ZP=PY,\;YP=PZ,\;NP=PN.$$ Modifying $X$, note that if one uses $m_{\bullet }=(a+1)m_1m_2-m$ instead of $m^{\prime }$, the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ is transformed into an equation of the same shape, namely $M^{s_1s_2}(a,\partial K-\varepsilon _2u_\theta (a+1),-u_\theta )$. This observation makes it possible, when convenient, to concentrate on the equations with $u_\theta =0$ or $s=-u_\theta >0$. With the involutions $X$, $Y$ and $Z$ enters $\mathbf{T}_3$, the triangle group, also denoted $\mathbf{T}^{*}(\infty ,\infty ,\infty )$. It is the free product of three cyclic groups of order two $\mathbf{C}_2$: $$\mathbf{T}_3=\mathbf{C}_2*\mathbf{C}_2*\mathbf{C}_2.$$ By the normal form theorem for such a free product [@Cohen] (p. 26), every element of $\mathbf{T}_3$ can be written as a word $ch=ch(X,Y,Z)$, a product of the formal involutions $X$, $Y$, $Z$ in which any two consecutive letters are distinct.
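The involutions $X$, $Y$, $Z$ are Vieta jumps: each replaces one coordinate by the other root of the quadratic obtained by freezing the remaining two. Their defining properties (they preserve the solution set and square to the identity) can be verified directly; a Python sketch on the classical case (our own function names):

```python
def involutions(a, dK, u, e1, e2):
    """The involutions X, Y, Z of the text, acting on triples (m, m1, m2)."""
    def X(m, m1, m2):
        return ((a + 1) * m1 * m2 - m - u, m1, m2)
    def Y(m, m1, m2):
        return (m, e2 * ((a + 1) * m * m2 + e2 * dK * m2) - m1, m2)
    def Z(m, m1, m2):
        return (m, m1, e1 * ((a + 1) * m * m1 + e2 * dK * m1) - m2)
    return X, Y, Z

def is_solution(t, a, dK, u, e1, e2):
    x, y, z = t
    return (x * x + e2 * y * y + e1 * z * z
            == (a + 1) * x * y * z + e2 * dK * y * z - u * x)

# Classical case M^{++}(2, 0, 0): each involution maps solutions to
# solutions and squares to the identity.
X, Y, Z = involutions(2, 0, 0, 1, 1)
t = (1, 1, 1)
for f in (X, Y, Z, X, Z, Y):
    t = f(*t)
    assert is_solution(t, 2, 0, 0, 1, 1)
    assert f(*f(*t)) == t
```

Iterating $X$, $Y$, $Z$ from $(1,1,1)$ sweeps out the full $\mathbf{T}_3$-orbit of classical Markoff triples.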
Our equation is invariant under the action of the group $\mathbf{C}_2\times \mathbf{T}_3$ built with $N$, $X$, $Y$, $Z$. And since its least obvious part comes from the induced action of $\mathbf{T}_3$, it is on the latter that we put the emphasis. ### Various tree structures on the triangle group In the special case of a transitive and free action of the group $\mathbf{T}_3$ on a set $\Omega $, we say with John H. Conway [@Conway] that the $\mathbf{T}_3$-space $\Omega $ is a topograph. The group $\mathbf{T}_3$ itself can be structured as a topograph. It therefore carries the structure of a graph shaped as a tree, that is, with the definitions of [@Serre], a graph without any circuit of shape $Cir_n$, where $n\geq 1$. Its vertices are the elements of $\mathbf{T}_3$, the root of the tree being the identity of the group, and its edges are labeled with $X$, $Y$, $Z$. The paths (or geodesics) of the tree are also described, starting from the root, by words $ch\in \mathbf{T}_3$, so that the elements of $\mathbf{T}_3$ can be represented in two ways, either by the vertices of the topograph or by its paths originating at its root. From each vertex issue three edges, corresponding to the letters $X$, $Y$, $Z$. Following [@Perrine3], one can define on $\mathbf{T}_3$ a new tree structure on the set of reduced words of $\mathbf{T}_3$ beginning with $XY$ (hence followed by a possibly empty word beginning with $X$ or $Z$). These are called Cohn words. They can be classified by increasing length using the following transformations $G$ and $D$ from $\mathbf{T}_3$ to $\mathbf{T}_3$: $\bullet $ On the left, write the starting word in the form $XW$, and build $W^{\prime }$ from $W$ by exchanging $Y$ and $Z$. Then define the left transform of $XW$ as the word $XYW^{\prime }$.
Clearly, for $XW$ of length $n$ beginning with $XY$, its transform has length $n+1$ and begins with $XYZ$. The transformation $G:XW\rightarrow XYW^{\prime }$ is injective. $\bullet $ On the right, write the starting word in the form $VW$, where $V$ contains only letters $X$ and $Y$ (at least $2$ of them), and $W$ begins with $Z$ or is possibly empty. Then build $V^{\prime }$ by exchanging $X$ and $Y$ in $V$, and define $XV^{\prime }W$ as the right transform of $VW$. Evidently the term $XV^{\prime }W$ begins with $XYX$ and has length $n+1$ whenever $VW$ begins with $XY$ and has length $n$. The transformation $D:VW\rightarrow XV^{\prime }W$ is injective. We have thus obtained a property that could be used to show that in most cases the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ has infinitely many solutions: In the group $\mathbf{T}_3$ generated by $X$, $Y$ and $Z$, for every length $n\geq 2$ there exist $2^{n-2}$ Cohn words of length $n$. They are naturally organized into a tree by the transformations $G$ and $D$ defined from $\mathbf{T}_3$ to $\mathbf{T}_3$. Likewise, one can consider in $\mathbf{T}_3$ the set of reduced words beginning with $X$ (hence followed by a possibly empty word beginning with $Y$ or $Z$). These are called Cassels words. Changing $Y$ into $Z$ in the preceding proposition, one easily gets: In the group $\mathbf{T}_3$ generated by $X$, $Y$ and $Z$, for every length $n\geq 1$ there exist $2^{n-1}$ Cassels words of length $n$. They are naturally organized into a tree. ### The triangle group in $GL(2,\mathbb{Z})$ In [@Perrine1b], drawing the consequences of classical Markoff theory, we showed how the group $\mathbf{T}_3$ is closely related to the group $GL(2,\mathbb{Z})$.
To this end one considers, with the abelianization morphism $\pi ^{\prime }$ of the group $Aut(\mathbf{F}_2)$ with values in $GL(2,\mathbb{Z})$, two matrices generating in $GL(2,\mathbb{Z})$ a dihedral group $\mathbf{D}_6$ with $12$ elements: $$\pi ^{\prime }(t)=\left[ \begin{array}{cc} 1 & 1 \\ -1 & 0 \end{array} \right] ,\;\;\pi ^{\prime }(o)=\left[ \begin{array}{cc} 0 & -1 \\ -1 & 0 \end{array} \right] .$$ One completes them with three matrices of order 2: $$\pi ^{\prime }(X_0)=\left[ \begin{array}{cc} 1 & 0 \\ -2 & -1 \end{array} \right] ,\;\;\pi ^{\prime }(Y_0)=\left[ \begin{array}{cc} -1 & -2 \\ 0 & 1 \end{array} \right] ,\;\;\pi ^{\prime }(Z_0)=\left[ \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right] .$$ These make the group $\mathbf{T}_3$ act in $GL(2,\mathbb{Z})$, by defining the following product, where $ch\in \mathbf{T}_3$ and $\pi _0^{\prime }(\mathbf{T}_3)$ is defined in the obvious way $$ch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0))=\pi _0^{\prime }(ch(X,Y,Z))\in \pi _0^{\prime }(\mathbf{T}_3).$$ From this we deduced the ternary decomposition representing the group $\mathbf{T}_3$ in $GL(2,\mathbb{Z})$: Every element $V\in GL(2,\mathbb{Z})$ decomposes in one and only one way in the form $$\pi ^{\prime }(o)^h\pi ^{\prime }(t)^kch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0)),$$ $$\text{where }h=0,1;\;\;k=0,1,...,5;\text{\ \ }ch\in \mathbf{T}_3.$$ The elements of $\pi _0^{\prime }(\mathbf{T}_3)$ are characterized by the conditions $h=0$ and $k=0$. The group $\pi _0^{\prime }(\mathbf{T}_3)$ is not normal in the group $GL(2,\mathbb{Z})$. It is isomorphic via $\pi _0^{\prime }$ to the group $\mathbf{T}_3$.
The elements of the group $\mathbf{D}_6$, which is not normal in $GL(2,\mathbb{Z})$, are characterized by the condition $$ch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0))=\mathbf{1}_2.$$ The group $\mathbf{D}_6$ introduces two equivalence relations between elements of $GL(2,\mathbb{Z})$ $$V_1\;\Re _{\mathbf{D}_6}\;V_2\;\;\Leftrightarrow \;\;V_1V_2^{-1}\in \mathbf{D}_6\;\;\Leftrightarrow \;\;V_2\in \mathbf{D}_6V_1,$$ $$V_1\;_{\mathbf{D}_6}\Re \;V_2\;\;\Leftrightarrow \;\;V_1^{-1}V_2\in \mathbf{D}_6\;\;\Leftrightarrow \;\;V_2\in V_1\mathbf{D}_6.$$ The right quotient $GL(2,\mathbb{Z})/\Re _{\mathbf{D}_6}=(GL(2,\mathbb{Z})/\mathbf{D}_6)_d$ of the classes $\mathbf{D}_6V_1$ and the left quotient $GL(2,\mathbb{Z})/_{\mathbf{D}_6}\Re =(GL(2,\mathbb{Z})/\mathbf{D}_6)_g$ of the classes $V_1\mathbf{D}_6$, where $V_1\in GL(2,\mathbb{Z})$, are equipotent. These two sets are different because $\mathbf{D}_6$ is not normal in the group $GL(2,\mathbb{Z})$. Writing $V\in GL(2,\mathbb{Z})$ as in the last stated result gives $$Vch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0))^{-1}=\pi ^{\prime }(o)^h\pi ^{\prime }(t)^k\in \mathbf{D}_6.$$ This determines a unique element $ch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0))\in \pi _0^{\prime }(\mathbf{T}_3)$ such that $$V\;\Re _{\mathbf{D}_6}\;ch(\pi ^{\prime }(X_0),\pi ^{\prime }(Y_0),\pi ^{\prime }(Z_0)).$$ Whence another interpretation of the topograph, which can be identified with the complete tree of Markoff theory, or again with the triangle group $\mathbf{T}_3$: The group $\mathbf{T}_3$ is equipotent to the quotient (right or left) of the group $GL(2,\mathbb{Z})$ by its non-normal subgroup $\mathbf{D}_6$. In particular it is a homogeneous $GL(2,\mathbb{Z})$-space. From this we could derive a proposition leading to known results of $K$-theory ([@Rotman] (p. 193), [@Rosenberg] (p. 218 and p. 75), [@Soule] (p. 261), [@Swinnerton]).
One has for $GL(2,\mathbb{Z})$ the following homology groups $$H_1(GL(2,\mathbb{Z}),\mathbb{Z})=GL(2,\mathbb{Z})/[GL(2,\mathbb{Z}),GL(2,\mathbb{Z})]\simeq \mathbf{D}_6/[\mathbf{D}_6,\mathbf{D}_6]\simeq \mathbf{C}_2\times \mathbf{C}_2,$$ $$H_2(GL(2,\mathbb{Z}),\mathbb{Z})\simeq \mathbf{C}_2.$$ Using the free group $\mathbf{F}_2\simeq [SL(2,\mathbb{Z}),SL(2,\mathbb{Z})]$ on two generators, which was shown in [@Perrine1b] to be related to the classical Markoff equation, we obtained: Every element $V\in GL(2,\mathbb{Z})$ decomposes in one and only one way in the form $$\pm W(A_0,B_0)O^hW_k(S,T),$$ $$h\in \{0,1\},$$ $$W(A_0,B_0)\in \mathbf{F}_2=[SL(2,\mathbb{Z}),SL(2,\mathbb{Z})],$$ $$W_k(S,T)\in \{\mathbf{1}_2,S,ST,STS,STST,STSTS\}\;\text{with }k=0,1,...,5.$$ The elements of the subgroup $SL(2,\mathbb{Z})$, which is normal in $GL(2,\mathbb{Z})$, are characterized by the condition $h=0$. The matrices cited in this proposition are the three generators of $GL(2,\mathbb{Z})$: $$S=\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] ,\;\;T=\left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right] ,\;\;O=\left[ \begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array} \right] ,$$ together with words $W(A_0,B_0)$ written multiplicatively in terms of the two commutators which generate $\mathbf{F}_2$ according to [@MagnusKarassSolitar] (pp.
97-98): $$A_0=[(TS)^{-1},S^{-1}]=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] ,\;B_0=[(TS)^{-2},S^{-1}]^{-1}=\left[ \begin{array}{cc} 1 & -1 \\ -1 & 2 \end{array} \right] .$$ All the passages between the two ternary representations of the matrices of the group $GL(2,\mathbb{Z})$ have been made explicit; for this group we could also recover a minimal presentation with the two generators $T$ and $I=OS$ [@BeylRosenberger]: $$GL(2,\mathbb{Z})=<I,T^{-1}\mid I^2=([T^{-1},I]T^{-1})^4=([T^{-1},I]T^{-1}I)^2=\mathbf{1}_2>.$$ The subgroup $\pi _0^{\prime }(\mathbf{T}_3)$ is generated by three matrices computable in $I$ and $T^{-1}$: $$\pi ^{\prime }(X_0)=T^{-1}IOT^{-1}IOIT^{-1}B_0^{-1},\;\;\pi ^{\prime }(Y_0)=IOIOA_0^{-1}TS,\;\;\pi ^{\prime }(Z_0)=IS.$$ Moreover [@BeylRosenberger] the triangle group $\mathbf{T}_3$ is isomorphic to $PGL(2,\mathbb{Z})$, with: $$PGL(2,\mathbb{Z})=<\overline{I},\overline{T}^{-1}\mid \overline{I}^2=([\overline{T}^{-1},\overline{I}]\overline{T}^{-1})^2=([\overline{T}^{-1},\overline{I}]\overline{T}^{-1}\overline{I})^2=\mathbf{1}>.$$ One can check that $\mathbf{F}_2\simeq [PSL(2,\mathbb{Z}),PSL(2,\mathbb{Z})]$ has index $2$ in this group, and that one also has: $$\lbrack PGL(2,\mathbb{Z}),PGL(2,\mathbb{Z})]=<[\overline{I},\overline{T}^{-1}],[\overline{I},\overline{T}]\mid [\overline{I},\overline{T}^{-1}]^3=[\overline{I},\overline{T}]^3=\mathbf{1}>\simeq \mathbf{C}_3\star \mathbf{C}_3.$$ ### Forest and bouquets of solutions Solving the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ in $\mathbb{Z}^3$ amounts to determining the structure of the $\mathbf{T}_3$-space of its solution triples. This is a union of connected $\mathbf{T}_3$-spaces (the $\mathbf{T}_3$-orbits). Each connected $\mathbf{T}_3$-space of solutions in $\mathbb{Z}^3$ is then called a bouquet, denoted $Bq\subset \mathbb{Z}^3$.
The union of the possible bouquets $Bq_1$, $Bq_2$, ..., $Bq_n$, ..., is the forest of solutions in $\mathbb{Z}^3$ of the equation $M^{s_1s_2}(a,\partial K,u_\theta )$. Bouquets and forest being $\mathbf{T}_3$-spaces, they can be structured as a graph whose vertices are the solution triples and whose edges are unoriented. From each vertex issue three edges. Each edge is labeled by the involution $X$, $Y$ or $Z$ allowing one to pass from one endpoint of the edge to the other. The definitions of [@Serre] still apply, so that one can also consider trees of solutions: these are graphs without any circuit of shape $Cir_n$, where $n\geq 1$. The study of examples shows that not all the bouquets of solutions one encounters are trees. ### Height and reduction of the solution triples For every triple $(m,m_1,m_2)\in \mathbb{Z}^3$ of solutions of the equation $M^{s_1s_2}(a,\partial K,u_\theta )$, one defines its height $$h=\max (\mid m\mid ,\mid m_1\mid ,\mid m_2\mid )\geq 0.$$ One can consider three other values built with the involutions $X$, $Y$, $Z$: $$h_X=\max (\mid m^{\prime }\mid ,\mid m_1\mid ,\mid m_2\mid ),$$ $$h_Y=\max (\mid m\mid ,\mid m_1^{\prime }\mid ,\mid m_2\mid ),$$ $$h_Z=\max (\mid m\mid ,\mid m_1\mid ,\mid m_2^{\prime }\mid ).$$ A triple $(m,m_1,m_2)$ is said to be non-fundamental if and only if one of the numbers $h_X$, $h_Y$, $h_Z$ is strictly smaller than $h$; otherwise the triple is called fundamental. The inequalities characterizing this situation make it possible to identify the fundamental triples, each of which defines a bouquet of solutions under the action of the group $\mathbf{T}_3$. Now consider an arbitrary triple of a bouquet of solutions of the equation $M^{s_1s_2}(a,\partial K,u_\theta )$.
If $h_X<h$, apply $X$ and change triple; if $h_Y<h$, apply $Y$ and change triple; if $h_Z<h$, apply $Z$ and change triple. This yields an algorithm whose progress within the bouquet under consideration is controlled by the reduction of the height, which decreases while remaining non-negative. When the height is minimal, one identifies a fundamental triple in the bouquet considered for the equation $M^{s_1s_2}(a,\partial K,u_\theta )$. One thus has a method, analogous to Fermat's infinite descent, for computing all the solutions of this equation in $\mathbb{Z}^3$ and sorting them into bouquets. When working in $(\mathbb{N}\backslash \{0\})^3$ the height is defined without absolute values. It may happen that, for a given triple, the preceding algorithm no longer produces, by applying $X$, $Y$ or $Z$, a new triple in the set $(\mathbb{N}\backslash \{0\})^3$. Such a triple, on which the algorithm stops, is called minimal.

### Fundamental solutions in $(\mathbb{N}\backslash \{0\})^3$

There is a general finiteness result [@Perrine9] for the fundamental solutions of an equation $M^{s_1s_2}(a,\partial K,u_\theta )$: Consider the solutions in $(\mathbb{N}\backslash \{0\})^3$ of a Diophantine equation $M^{s_1s_2}(a,\partial K,u_\theta )$. They are fundamental in only a finite number of cases, apart from the case of the equations $M^{--}(a,-2-u_\theta (a+1),u)$ where $u_\theta <0$: $$x^2-y^2-z^2=(a+1)xyz+(u_\theta (a+1)-2)yz-u_\theta x.$$ The latter have infinitely many fundamental solutions, equal to $(-u_\theta ,m_1,m_1)$ with $m_1\in \mathbb{N}\backslash \{0\}$ arbitrary, and the corresponding bouquets, infinite in number, are finite and can be written $$\{(-u_\theta ,m_1,m_1),((a+1)m_1^2,m_1,m_1)\}.$$ Outside these particular cases, one thus finds only a finite number of bouquets for the action of the group $\mathbf{T}_3$ having a nonempty imprint in $(\mathbb{N}\backslash \{0\})^3$.
This result yielded a proposition guaranteeing that, in most cases, only a finite number of fundamental solutions are found. Consider the solutions in $(\mathbb{N}\backslash \{0\})^3$ of a Diophantine equation $M^{s_1s_2}(a,\partial K,u_\theta )$. If it possesses a bouquet imprint containing infinitely many distinct solutions, then it has only a finite number of bouquets for the action of the group $\mathbf{T}_3$ having a nonempty imprint in $(\mathbb{N}\backslash \{0\})^3$.

### Minimal solutions in $(\mathbb{N}\backslash \{0\})^3$

Some bouquet imprints can only be identified thanks to minimal solutions. For the latter, one has the following characterization [@Perrine9]: Let $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$ be a solution of a Diophantine equation $M^{s_1s_2}(a,\partial K,u_\theta )$ satisfying, up to an inversion of the indices, the condition $m_1\geq m_2\geq 1$. It is minimal if and only if one of the following conditions holds: $$\varepsilon _2m_1^2+\varepsilon _1m_2^2-\varepsilon _2\partial Km_1m_2\leq 0,\;\;\varepsilon _2m^2+\varepsilon _1\varepsilon _2m_2^2+\varepsilon _2u_\theta m\leq 0.$$ It may happen that an equation $M^{s_1s_2}(a,\partial K,u_\theta )$ has a finite number of minimal solutions and no fundamental solution. This is the case for the equation $M^{++}(2,0,-2)$. For $\varepsilon _1=\varepsilon _2=1$, the two conditions $\partial K\leq 2$ and $u_\theta \leq 0$ yield only a finite number of minimal solutions and fundamental solutions. In this case, we established the existence of a finite number of imprints of bouquets of solutions in $(\mathbb{N}\backslash \{0\})^3$ for the equation $M^{++}(a,\partial K,u_\theta )$. For the other cases the situation is rather diverse, depending on the parameters $a$, $\partial K$, $u_\theta $, but in most cases the number of bouquet imprints remains finite.
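On the classical Markoff equation $x^2+y^2+z^2=3xyz$ (the case $M^{++}(2,0,0)$ with $\varepsilon _1=\varepsilon _2=1$), the height-reduction descent described above takes a particularly simple form: each involution replaces one coordinate by its Vieta conjugate. A minimal sketch (the general involutions $X$, $Y$, $Z$ of our equations are not reproduced here):

```python
def descend(triple):
    """Reduce a Markoff triple (x^2 + y^2 + z^2 = 3xyz) by Vieta jumping
    until no coordinate change lowers the height max(|x|, |y|, |z|)."""
    x, y, z = triple
    while True:
        for i in range(3):
            t = [x, y, z]
            others = [t[j] for j in range(3) if j != i]
            new = 3 * others[0] * others[1] - t[i]  # Vieta involution on coordinate i
            if abs(new) < abs(t[i]):                # the height strictly decreases
                t[i] = new
                x, y, z = t
                break
        else:
            # no involution reduces the height: a fundamental triple is reached
            return tuple(sorted((x, y, z), reverse=True))

print(descend((29, 5, 2)))   # → (1, 1, 1)
print(descend((194, 13, 5))) # → (1, 1, 1)
```

Every Markoff triple descends in this way to the fundamental triple $(1,1,1)$, exactly as the general algorithm descends to a fundamental triple of its bouquet.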
### Cohn triples and their use

A solution $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$ of an equation $M^{s_1s_2}(a,\partial K,u_\theta )$ is called a Cohn triple [@Cohn] if and only if $m>m_1>m_2$. Not all the possible solutions in $(\mathbb{N}\backslash \{0\})^3$ are of this type, as shown by the case where $\varepsilon _1=\varepsilon _2$ together with a permutation of $y$ and $z$ in the equation under study. But such solutions appear naturally at the end of the computations of the preceding chapter. Indeed, every pair of sequences $X_2$ and $T$ determines longer and longer continued fractions, which explains a posteriori the inequalities defining the Cohn triples: $$m_2/k_2=[\lhd X_2^{*}],\;\;m_1/k_1=[\lhd X_2^{*},c,T],\;\;m/K_1=[\lhd X_2^{*},c,T,b,X_2].$$ We were able to verify that the Cohn triples of a given bouquet imprint are given by paths of $\mathbf{T}_3$ beginning with $XY$. Starting from such sequences, we devised a process for constructing a tree of Cohn triples for our equations [@Perrine3]. For this we used the combinations $G$, $DD$, $GD$ of the transformations $G$ and $D$ brought to light in the group $\mathbf{T}_3$; this yields Cohn triples when the associated sequences are well defined, that is, with positive integer coefficients (as seen in [@Perrine7], the operators $\lhd $ and $\rhd $ can create problems corresponding to the fact that the bouquet concerned is not a tree). To this end, one first changes the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ into a balanced equation $M^{s_1s_2}(c,\partial K_c,u)$ ensuring the condition $b=c$ and not modifying the sequences $X_2$ and $T$.
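The continued fractions appearing in these formulas can be evaluated exactly with a small helper; the expansion $[2,1,1,2]=13/5$, whose numerator and denominator are the classical Markoff numbers $13$ and $5$, is used below purely as an illustration:

```python
from fractions import Fraction

def cf_value(seq):
    """Evaluate a continued fraction [a0, a1, ..., an] exactly,
    working backwards from the last partial quotient."""
    v = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        v = a + 1 / v
    return v

print(cf_value([2, 1, 1, 2]))  # → 13/5
```

Lengthening the sequence of partial quotients increases the numerator of the value, which is the mechanism behind the inequalities $m>m_1>m_2$ defining the Cohn triples.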
### The algorithmic construction on the right and on the left

The formulas for the transformations $G$, $DD$, $GD$, producing one Cohn triple from another, are the following for the equation $M^{s_1s_2}(c,\partial K_c,u)$:

$\bullet $ The left construction is defined on the sequences by: $$X_2^G=(\lhd T^{*},c,X_2),\;\;T^G=T.$$ From this one deduces $$X_1^G=(\lhd X_2^{*},c,T\rhd ,c,T),$$ $$(S^G\rhd )=(X_2^{*},c,T\rhd ,c,T^{*},c,\lhd T^{*},c,X_2).$$ The Diophantine equation corresponding to the new sequences, of which the Cohn triple $(m^G,m_1^G,m_2^G)$ is a solution, reads: $$M^{s_2,s_1}(c,\partial K_c,\varepsilon _1\varepsilon _2u):x^2+\varepsilon _1y^2+\varepsilon _2z^2=(c+1)xyz+\varepsilon _1\partial K_cyz-\varepsilon _1\varepsilon _2ux.$$

$\bullet $ The right construction is more complex. This was discovered in [@Perrine1]. One must in fact distinguish two cases. Going twice to the right, one defines $$X_2^{DD}=X_2^{*},\;\;T^{DD}=(\lhd X_2^{*},c,T,c,X_2\rhd ).$$ This gives: $$X_1^{DD}=(\lhd X_2,c,\lhd X_2^{*},c,T,c,X_2\rhd ),$$ $$(S^{DD}\rhd )=(X_2,c,\lhd X_2^{*},c,T^{*},c,X_2\rhd ,c,X_2^{*}).$$ The Diophantine equation corresponding to the new sequences, of which the Cohn triple $(m^{DD},m_1^{DD},m_2^{DD})$ is a solution, reads: $$M^{s_1,s_2}(c,\partial K_c,u):x^2+\varepsilon _2y^2+\varepsilon _1z^2=(c+1)xyz+\partial K_cyz-\varepsilon _2ux.$$

$\bullet $ The left construction once, after a passage to the right, is defined with: $$X_2^{DG}=(\lhd X_2^{*},c,T),\;\;T^{DG}=(X_2^{*},c,T^{*},c,X_2).$$ This gives, for the other sequences under consideration, $$X_1^{DG}=(\lhd T^{*},c,X_2\rhd ,c,X_2^{*},c,T^{*},c,X_2),$$ $$(S^{DG}\rhd )=(T^{*},c,X_2\rhd ,c,X_2^{*},c,T,c,X_2,c,\lhd X_2^{*},c,T).$$ One again finds a Diophantine equation corresponding to the new sequences, of which the Cohn triple $(m^{DG},m_1^{DG},m_2^{DG})$ is a solution: $$M^{s_2,s_1}(c,\varepsilon _2\partial K_c,\varepsilon _1u):x^2+\varepsilon _1y^2+\varepsilon _2z^2=(c+1)xyz+\varepsilon _1\varepsilon _2\partial K_cyz-\varepsilon _1ux.$$

### Consequence for the resolution of our equations

The transformations $G$, $DD$, $GD$ gave the following result for the equation $M^{s_1s_2}(c,\partial K_c,u)$: Consider a Cohn triple $(m,m_1,m_2)$ associated with two sequences $X_2$ and $T$, a solution of the balanced Diophantine equation $$M^{s_1s_2}(c,\partial K_c,u):x^2+\varepsilon _2y^2+\varepsilon _1z^2=(c+1)xyz+\varepsilon _2\partial K_cyz-ux.$$ For the Diophantine equations obtained from the preceding one by the right and left transformations, one obtains the expressions $$G:M^{s_1s_2}(c,\partial K_c,u)\longmapsto M^{s_2s_1}(c,\partial K_c,\varepsilon _1\varepsilon _2u),$$ $$DD:M^{s_1s_2}(c,\partial K_c,u)\longmapsto M^{s_1s_2}(c,\varepsilon _2\partial K_c,\varepsilon _2u),$$ $$GD:M^{s_1s_2}(c,\partial K_c,u)\longmapsto M^{s_2s_1}(c,\varepsilon _2\partial K_c,\varepsilon _1u).$$ Moreover, the construction process given on the sequences provides, when the sequences are well defined, a Cohn triple solving the corresponding equation, of size strictly greater than that of the triple $(m,m_1,m_2)$. There then exist infinitely many solutions of the equation $M^{s_1s_2}(c,\partial K_c,u)$ and a finite number of corresponding bouquet imprints. The transposition to values $a$ or $b$ different from $c$ poses no problem, giving an analogous result for $M^{s_1s_2}(a,\partial K,u_\theta )$ or $M^{s_1s_2}(b,\partial K,u)$.

### Construction of the initial sequences $X_2$ and $T$

The numbers $\varepsilon _1$, $\varepsilon _2$, $a$, $\partial K$, $u_\theta $ are given by the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ under consideration. Having, via the resolution method for this equation, a triple $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$ of solutions, one can construct two associated sequences $X_1$ and $X_2$ by solving the Bezout equations in $(K_1,k_1)$ and $(K_2,k_2)$.
One is then reduced to an equation $M^{s_1s_2}(b,\partial K,u)$.

#### Particular case where $\varepsilon _1=\varepsilon _2$

In all the examples studied where $\varepsilon _1=\varepsilon _2$, a case with $T=\emptyset $ was found. We were able to show that this observation is general. Consider an equation $M^{s_1s_2}(b,\partial K,u)$ where $\varepsilon _1=\varepsilon _2$ $$x^2+\varepsilon _2y^2+\varepsilon _2z^2=(b+1)xyz+\varepsilon _2\partial Kyz-ux,$$ such that one can find $m_1$ and $m_2$ in $\mathbb{N}\backslash \{0\}$ satisfying $$m_1^2-(b+\partial K+1)m_1m_2+m_2^2=-u-\varepsilon _2.$$ Then it possesses a solution triple $(m,m_1,m_2)$ such that $$m=m_1^2-\partial Km_1m_2+m_2^2.$$ Writing $c=b+\partial K$, and in the case where $m_1-cm_2\in \mathbb{N}\backslash \{0\}$, a condition guaranteed when $u<0$, one can construct infinitely many solutions of the associated balanced equation thanks to the transformations $G$, $DD$, $GD$, with $T=\emptyset $ and $X_2$ the sequence defined, with $k_{21}=m_1-cm_2>0$, by $$\frac{m_2}{m_1-cm_2}=[X_2],\;\;\det (M_{X_2})=\varepsilon _2.$$ In all these cases one has the solution $(\varepsilon _2,m_1,m_2)$ for the equation $M^{s_1s_2}(b,\partial K,u)$: $$m_1^2-(b+1+\partial K)m_1m_2+m_2^2=-u-\varepsilon _2.$$ The value of $\varepsilon _1\varepsilon _2$ is a strong constraint: it forces $\varepsilon _S=-1$. In fact, when studying numbers $\theta _a(S)$ one can always change the sequence $S$ into $S^{\prime }=(S,a,S)$, and with this latter sequence reduce to $\varepsilon _{S^{\prime }}=-1$. By means of this lengthening transformation of the sequence $S$, one can, for example in the study of Markoff constants, arrange for the constraint $\varepsilon _1=\varepsilon _2$ to always hold. One can then apply the involution $P$ so that the length of the sequence $\lhd X_1$ is greater than or equal to the length of the sequence $X_2$.
This normalization does not change the equation under study, but naturally yields a Cohn triple.

#### General case for $\varepsilon _1$ and $\varepsilon _2$

The preceding proposition has been generalized to the case where one no longer necessarily has the condition $\varepsilon _1=\varepsilon _2$, nor a fortiori the normalization introduced above. For example, for $T=(1)$ we found: Consider a triple $(m,m_1,m_2)\in \mathbb{Z}^3$ satisfying the two relations $$-u-\varepsilon _2=m_1^2-(b+\partial K+1)m_1m_2+\varepsilon _1\varepsilon _2m_2^2,$$ $$m=m_1^2-\partial Km_1m_2+\varepsilon _1\varepsilon _2m_2^2.$$ It is a solution of the equation $M^{s_1s_2}(b,\partial K,u)$. If this triple corresponds to a sequence $T=(1)$ with which one can write $X_1=(\lhd X_2^{*},c,1)$, one has $$\varepsilon _1=-\varepsilon _2,\;\partial K=(c-b),\;m_1=(c+1)m_2+k_{21},$$ $$u+\varepsilon _2=m_2^2-(c+1)m_2k_{21}-k_{21}^2=\Psi _{(c,1)}(m_2,k_{21}).$$ With $c=b+\partial K$, and in the case where $m_1-(c+1)m_2\in \mathbb{N}\backslash \{0\}$, a condition guaranteed when $u<0$, one can construct infinitely many solutions of the associated balanced equation thanks to the transformations $G$, $DD$, $GD$, with $T=(1)$ and $X_2$ the sequence defined, with $k_{21}=m_1-(c+1)m_2>0$, by $$\frac{m_2}{m_1-(c+1)m_2}=[X_2],\;\;\det (M_{X_2})=\varepsilon _2.$$ The first equalities of this proposition come from the following relations of the general case, specialized taking into account the chosen sequence $T$: $$-u-\varepsilon _2\mu =m-(b+1)m_1m_2,\;\;\mu m=m_1^2-\partial Km_1m_2+\varepsilon _1\varepsilon _2m_2^2.$$

### Complementary remarks

In the general case one has a quadratic form $\Psi _{(c,T)}$ $$u+\varepsilon _2\mu =\Psi _{(c,T)}(m_2,k_{21})=(c\kappa _2+\lambda )m_2^2-(c\mu +\kappa _1-\kappa _2)m_2k_{21}-\mu k_{21}^2.$$ The discriminant of $\Psi _{(c,T)}$ is positive in most cases, ensuring that the form $\Psi _{(c,T)}$ is indefinite.
For a given value $u$, and knowing that $\varepsilon _2=\pm 1$, the equation under consideration then possesses infinitely many solutions in $(m_2,k_{21})$ as soon as it possesses one. Hence there are infinitely many possibilities for the sequence $X_2$ when the sequence $T$ is given. A comparable computation is feasible, determining infinitely many possibilities for $T$ when $X_2$ is given. This gives another way to understand the existence of the tree of Cohn triples exhibited above. We were able to establish: In the cases where $\varepsilon _1=\varepsilon _2=1$, one has: $$G=XYPX,\;\;GD=XYP,\;\;DD=XY.$$ These expressions explain in another way why, in the corresponding case, one finds Cohn triples with the three transformations $G$, $GD$, $DD$. Indeed, we already indicated that these triples are characterized by the fact that they correspond to reduced words beginning with $XY$.

### An example of application

All the examples can be treated thanks to the preceding methods. We illustrate this here on one case, that of the equations $M^{++}(2,0,u)$. For $\partial K=0$, we have $c=b$. With $\varepsilon _1=\varepsilon _2=1$ one obtains: $$m_1=bm_2+k_{21},$$ $$m=(b^2+1)m_2^2+2bm_2k_{21}+k_{21}^2=m_2^2+m_1^2,$$ $$u=(b-1)m_2^2-(b-1)m_2k_{21}-k_{21}^2-1=\Psi _{(c,T)}(m_2,k_{21}).$$ This gives a Cohn triple $((bm_2+k_{21}),m_2,1)$ for the equation $M^{++}(b,0,u)$. For $b=2$ and infinitely many values $u=-s<0$, the equation $M^{++}(2,0,u)$ has solutions $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$, in particular if one has, with $(p,q)\in (\mathbb{N}\backslash \{0\})^2$: $$s=p^2+q^2+1-3pq>0.$$ One finds infinitely many such expressions with the Fibonacci numbers: $$s=(1+4F_{2t+1}^2-2F_{2t+1}F_{2t}-F_{2t}^2)=F_{2t+3}^2+F_{2t}^2+1-3F_{2t+3}F_{2t}>0.$$ In other cases there is no solution at all in $(\mathbb{N}\backslash \{0\})^3$.
Indeed, we established: Consider an equation $M^{++}(2,0,u)$ with $u<0$ $$x^2+y^2+z^2=3xyz-ux.$$ It possesses solutions $(m,m_1,m_2)\in (\mathbb{N}\backslash \{0\})^3$ if and only if one can find one satisfying $$0<m<s=-u,\;\;0<m_2<\sqrt{(s-m)m}.$$ In that case, which occurs for infinitely many values $s>0$, it possesses infinitely many solutions. Moreover, for $0<s\leq 50$ the equation $M^{++}(2,0,u)$ admits no solution when one has $$-u=s\in \{1,3,7,9,11,19,23,27,31,43,47\}.$$ In most cases one can write: $$0<s=p_k^2-3p_kp_{k-1}+p_{k-1}^2+1<\;m=p_k^2+p_{k-1}^2,\;\;\;m_2=p_{k-1}.$$ The numbers $p_k$ and $p_{k-1}$ are deduced from Fibonacci numbers and give Markoff constants of the form: $$C(\theta _2(S))=\frac{3p_kp_{k-1}-1}{\sqrt{9(p_k^2+p_{k-1}^2)^2-4}}<\frac 13.$$ As $p_{k-1}$ increases indefinitely, these constants converge to the value $(1/3)$. This yielded: The quadratic Markoff spectrum $Mark$ has $(1/3)$ as its largest accumulation value, both from below and from above. The last proposition can be deduced from another expression: $$-u=-(F_{2t}^2+6F_{2t+1}F_{2t}-F_{4t+3})=F_{2t+1}^2+F_{2t}^2+1-3F_{2t+1}F_{2t}<0.$$ For infinitely many values $u>0$ the equation $M^{++}(2,0,u)$ has solutions in $(\mathbb{N}\backslash \{0\})^3$.

### The equivalent divisibility condition and its consequences

Every Diophantine equation $M^{s_1s_2}(a,\partial K,u_\theta )$ is in fact derived from a simple divisibility condition: $$m\mid m_1^2-\partial Km_1m_2+\varepsilon _2\varepsilon _1m_2^2.$$ Suppose we write $m_1^2-\partial Km_1m_2+\varepsilon _2\varepsilon _1m_2^2=\mu m$; substituting into the equation and simplifying by $m\neq 0$, there remains $$m+\varepsilon _2\mu =(a+1)m_1m_2-u_\theta .$$ This expression determines $u_\theta $.
Combining it with the preceding one so as to eliminate the term $\mu $, one recovers the equation $M^{s_1s_2}(a,\partial K,u_\theta )$, whose essential properties are therefore contained in the single divisibility condition. Without eliminating $\mu $, one also has the equation $M^{-s_1,-s_2}(a,\partial K,u_\theta +2\varepsilon _2\mu )$. This illustrates the phenomenon of equations with common solutions mentioned in [@Perrine5]. If one now writes $$\partial ^{a+1}K=\varepsilon _2(a+1)m+\partial K=\varepsilon _2((a+1)m+K_1-K_2),$$ one has the equivalent divisibility condition $$m\mid (m_1^2-(\partial ^{a+1}K)m_1m_2+\varepsilon _1\varepsilon _2m_2^2)=\phi _\theta (m_1,-\varepsilon _2m_2).$$ The discriminant $\Delta _0=(\partial K)^2-4\varepsilon _1\varepsilon _2$ common to the preceding divisibility conditions makes it possible to classify the singular equations, that is, those such that $\Delta _0\leq 0$ or $\Delta _0$ is a perfect square, as follows:

$\bullet $ For $\varepsilon _2=1$, an equation $M^{s_1s_2}(a,\partial K,u_\theta )$ is called pointed if it is of the form: $$x^2+y^2+z^2=(a+1)xyz-u_\theta x,$$ $$x^2+y^2+z^2=(a+1)xyz\pm yz-u_\theta x.$$ One says that it is a degenerate equation when it reads: $$x^2+y^2+z^2=(a+1)xyz\pm 2yz-u_\theta x,$$ $$x^2+y^2-z^2=(a+1)xyz-u_\theta x.$$

$\bullet $ For $\varepsilon _2=-1$, an equation is called pointed if it reads: $$x^2-y^2-z^2=(a+1)xyz-u_\theta x,$$ $$x^2-y^2-z^2=(a+1)xyz\pm yz-u_\theta x.$$ One says that one is dealing with a degenerate equation when it is of the form: $$x^2-y^2-z^2=(a+1)xyz\pm 2yz-u_\theta x,$$ $$x^2-y^2+z^2=(a+1)xyz-u_\theta x.$$

### The case of equations where $u=0$

Consider a Markoff number $\theta _a(S)$ defining the constant $C(\theta _a(S))$. The application of Dickson's lemma [@Dickson] (ch. 8, vol. 2, pp.
408-409) allows one to assume that: $$S^{*}=(a_n,a_{n-1},...,a_0),\;\;\forall i=0,...,n,\;\;a_i\leq a,$$ $$C(\theta _a(S))=\frac 1{\xi _0-\xi _0^{\prime }}=\frac 1{a+[0,\underline{S,a}]+[0,\underline{S^{*},a}]}=\frac m{\sqrt{\Delta _a(S)}}.$$ In the case where the minimum giving the constant is attained for a unique index $j\in \{0,1,...,(n+1)\}$, one says that the constant is uniquely attained. But it can be attained at several different indices $j\in \{0,1,...,(n+1)\}$; in that case one says that the constant is multiply attained. If the minimum is attained for $j=0$, one says that one is in the super-reduced case. The super-reduced case with multiply attained constant yielded: In the super-reduced case where the Markoff constant of $\theta _a(S)$ is attained for two different indices $0$ and $j\in \{1,...,(n+1)\}$, one has a natural decomposition $$S^{*}=(X_1,a,X_2).$$ With the parameters associated with the sequence $S^{*}$, the associated Markoff equation reads $M^{s_1s_2}(a,\partial K,0)$ $$x^2+\varepsilon _2y^2+\varepsilon _1z^2=(a+1)xyz+\varepsilon _2\partial Kyz.$$ The situation described by this proposition generalizes that of the classical Markoff theory. For $\varepsilon _1=\varepsilon _2=1$, the condition $u=0$ is moreover compatible with the condition $\partial K=0$ only when $a=2$. This is the meaning of the result proved by G. Frobenius [@Frobenius]. To generalize the classical Markoff equation to the other cases identified by the last proposition, one must assume $\partial K\neq 0$. And a converse of this proposition is easy. These results made it possible to study [@Perrine7] equations such as $M^{++}(2,2,0)$ with solution $(3,1,1)$, $M^{++}(2,-2,0)$ with solution $(3,2,1)$, $M^{++}(3,-1,0)$ with solution $(3,1,1)$, as well as the associated constants.
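The three quoted solutions can be checked directly: from the divisibility condition $m\mid m_1^2-\partial K\,m_1m_2+\varepsilon _1\varepsilon _2m_2^2$ introduced above one recovers $\mu $, then $u_\theta $ through $m+\varepsilon _2\mu =(a+1)m_1m_2-u_\theta $, and the full equation can be verified. A small script, in our own phrasing:

```python
def check(a, dK, e1, e2, m, m1, m2):
    """Check (m, m1, m2) against M^{s1 s2}(a, dK, u): recover u from the
    divisibility relation, then verify the full equation, returning u."""
    num = m1 * m1 - dK * m1 * m2 + e1 * e2 * m2 * m2
    assert num % m == 0                    # divisibility condition m | ...
    mu = num // m
    u = (a + 1) * m1 * m2 - m - e2 * mu    # from m + e2*mu = (a+1) m1 m2 - u
    lhs = m * m + e2 * m1 * m1 + e1 * m2 * m2
    rhs = (a + 1) * m * m1 * m2 + e2 * dK * m1 * m2 - u * m
    assert lhs == rhs
    return u

# the three equations quoted above; each u recovered is 0
print(check(2, 2, 1, 1, 3, 1, 1),
      check(2, -2, 1, 1, 3, 2, 1),
      check(3, -1, 1, 1, 3, 1, 1))  # → 0 0 0
```

In all three cases the recovered parameter is $u=0$, confirming that these are instances of the equations $M^{s_1s_2}(a,\partial K,0)$ of this subsection.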
### Application to the study of the Markoff spectrum

The method for analyzing the Markoff spectrum developed by the author [@Perrine4] has been illustrated above in the neighborhood of $(1/3)$. It consists in using a given equation $M^{s_1s_2}(a,\partial K,u_\theta )$ to describe a particular place in the spectrum. Each solution of such an equation provides sequences $X_2$ and $T$, and allows the construction of a constant of the form $C(\theta _a(S))=C(F_\theta )$ in the quadratic spectrum. Moreover, the infinite branches given by any bouquet of solutions of the equation provide accumulation points of the algebraic spectrum $Mark$. These points can correspond, as in the classical Markoff theory, to constants of quadratic forms with real coefficients. They are then constants of the complete Markoff spectrum. The operation of passing from $Mark$ to the complete spectrum ([@Cusick] Chapter 3, [@Cusick1]) corresponds to an operation of topological closure. The Markoff spectrum is thus analyzed as a superposition of subsets of constants of quadratic numbers $\theta _a(S)$ and of their accumulation points. In this way we found new gaps in the spectrum and evaluated its complexity in the neighborhood of $(1/3)$. One can show, using the expression of $C(\theta _a(S))$, that this constant lies in the segment $$U_a=[\frac 1{\sqrt{a^2+4a}},\frac 1{\sqrt{a^2+4}}].$$ The segment $U_1$ reduces to the set $\{1/\sqrt{5}\}$, which contains the largest constant of the Markoff spectrum. The segment $U_2$ gives, in its upper part, between $(1/3)$ and $(1/\sqrt{8})$, the constants provided by the classical Markoff theory. These are isolated numbers, except for the smallest, $(1/3)$, which is an accumulation point from above of Markoff constants. It is known that above the value $(1/3.334367...)$ of R. T. Bumby the spectrum of Markoff constants has measure zero ([@Cusick] p. 76). As shown by Mary E.
Gbur Flahive [@Gbur], this part of the spectrum nevertheless contains infinitely many accumulation points, among them the value $(1/(\sqrt{5}+1))$ discovered by C. J. Hightower [@Hightower]. J. R. Kinney and T. S. Pitcher asserted the existence of infinitely many gaps in the Markoff spectrum above $(1/\sqrt{12})$, as close as desired to this value, which is also an accumulation point of values of the spectrum, but the existence of these gaps remains to be confirmed ([@Perrine] IV 143). The set $U_2$ does not meet the set $U_3$, which exhibits a well-known gap of the Markoff spectrum $$]\frac 1{\sqrt{13}},\frac 1{\sqrt{12}}[.$$ The value $(1/\sqrt{13})$ is the largest value of $U_3$. It is isolated, as shown by O. Perron ([@Cusick] p. 15) by exhibiting the maximal gap $$]\frac{22}{65+9\sqrt{3}},\frac 1{\sqrt{13}}[.$$ The smallest value of $U_3$ is $(1/\sqrt{21})$; it is therefore also contained in $U_4$, whose largest value is $(1/\sqrt{20})$. Between the last two bounds cited lies the value $\textbf{F}$ of G. A. Freiman ([@Cusick] p. 55), located at the edge of a gap of the spectrum and such that every real value between $0$ and $\textbf{F}$ is a Markoff constant: $$\textbf{F}^{-1}=4+\frac{253589820+283748\sqrt{462}}{491993569}.$$ It is in the lower part of $U_2$ and in the upper part of $U_3$ that the distribution of the Markoff constants is least well understood, and that is therefore where we work. When the value of $a$ increases, the number of possibilities for the sequences $T$ and $X_2$ grows. The distribution of the constants in the segment $U_{a+1}$ is thus more complicated than that existing in $U_a$. Moreover, every constant $C(\theta _a(S))$ in $U_a$ gives, thanks to Dickson's lemma ([@Cassels] p. 408), a value of $U_{a+1}$ which is itself an accumulation point of the spectrum.
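The endpoints of the segments $U_a$ and the two facts just mentioned — the gap between $U_2$ and $U_3$, and the position of Freiman's value $\textbf{F}$ between $(1/\sqrt{21})$ and $(1/\sqrt{20})$ — can be confirmed numerically; a quick check:

```python
from math import sqrt

def U(a):
    # endpoints of the segment U_a = [1/sqrt(a^2 + 4a), 1/sqrt(a^2 + 4)]
    return (1 / sqrt(a * a + 4 * a), 1 / sqrt(a * a + 4))

u2, u3, u4 = U(2), U(3), U(4)
# U_2 and U_3 are disjoint: the gap ]1/sqrt(13), 1/sqrt(12)[
print(u3[1] < u2[0])  # → True
# Freiman's value F lies between 1/sqrt(21) and 1/sqrt(20)
F = 1 / (4 + (253589820 + 283748 * sqrt(462)) / 491993569)
print(u3[0] < F < u4[1])  # → True
```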
Thus the largest constant of the Markoff spectrum, $(1/\sqrt{5})\in U_1$, gives the accumulation point $(1/(1+\sqrt{5}))$ of C. J. Hightower in $U_2$. The article of W. R. Lawrence [@Lawrence] shows a comparable phenomenon of greater complexity, establishing that the distribution of the Markoff constants in the lower part of the set $U_a$ is more complicated than the one found in its upper part. Describing the spectrum by decreasing values, the closer one gets to $0$ the more its complexity grows. After a discrete part, then a Cantor-like one, the chaotic aspect of the spectrum disappears all at once when it becomes continuous below the Freiman value $\textbf{F}$. Such a structure resembles that of the spectrum of an operator.

Perspectives
------------

A method for solving the equations $M^{s_1s_2}(a,\partial K,u_\theta )$ has been developed. We have given numerous examples of equations all of whose solutions are known and fit into our general formalism. An important project is to solve as many equations of this type as possible, in order to deepen our knowledge of the Markoff spectrum. This resolution can be automated. One of the difficulties in providing general results concerns the computation of the maximum defining every Markoff constant. In all practical cases this is not a problem, thanks to the theory of the Klein polygon [@Klein2]. The method we developed for studying our equations makes a proof of the conjecture of Frobenius, Cassels and Zagier [@Zagier] [@Button] for the tree of the classical Markoff theory less crucial. We were moreover able to show in [@Perrine7] that this conjecture is quite specific to the classical theory. There is no analogous result for the triples of other equations $M^{s_1s_2}(a,\partial K,u_\theta )$. The conjecture nevertheless remains open, and it can be approached with the procedures summarized in the foregoing.
However, this approach has not yet made it possible to conclude. The notion of height is essential for running the algorithm we developed to solve our equations. In fact it is simply a method of infinite descent adapted from the very classical one of Pierre de Fermat. We therefore now have a set of concrete examples of not completely trivial Diophantine equations on which to test a certain number of classical conjectures about heights ([@Lang] Chapter 2). We saw in the preceding chapter that our equations were also given by a trace formula (see [@Perrine9]). The question arises whether all of them are. This amounts to deepening the way in which the triangle group $\mathbf{T}_3$ embeds in $GL(2,\mathbb{Z})$, and to generalizing the trace approach of [@Perrine1b] to all our equations. A particular point the author would like to examine is the possibility that any countable group $\mathbf{G}$ might be embedded as a subgroup of $GL(2,\mathbb{Z})$. One could thus define a trace for its elements [@Lehrer], and the question arises whether this trace depends on the embedding under consideration. This would also give the beginning of an answer to the problem raised in [@Alperin], explainable by the fact that every matrix group closed in $GL(n,\mathbb{R})$ is a Lie group [@Baker]. One could also, for such a group $\mathbf{G}$, consider the relations $\Re _{\mathbf{G}}$ and $_{\mathbf{G}}\Re $ deduced from it on the right and on the left. One would find a tree structure on the quotient. For $\mathbf{G}$ of finite index in $GL(2,\mathbb{Z})$ this makes a link with the theory of dessins d'enfants ([@Waldschmidt2] p. 99). And when $\mathbf{G}$ is finite, this makes a link with the interpretation of our equations.
This development leads to generalizing our article [@Perrine1b] with a genuine Galois correspondence between finite or countable groups and tree structures defined in $GL(2,\mathbb{Z})$, as well as to an approach to inverse Galois theory [@Serre5]. The consequences for braid groups and mapping class groups (in the sense of [@Birman]) could prove very important. These groups are indeed countable, and would therefore also be embeddable in $GL(2,\mathbb{Z})$, just like the groups $GL(a+1,\mathbb{Z})$, whose properties would thus be accessible through $GL(2,\mathbb{Z})$, a group whose arithmetic we would also like to develop. A related perspective is to extend the foregoing to $GL(a+1,\mathbb{Z})$ and to equations possessing a larger number of terms, such as the one already studied by A. Hurwitz, which generalizes the classical Markoff equation [@Baragar]: $$\sum_{i=0}^{i=a}x_i^2=(a+1)\prod_{i=0}^{i=a}x_i.$$ The underlying results, relative to trees $\mathbf{T}_{a+1}$ with $a+1$ branches at each node and generalizing $\mathbf{T}_3$, could turn out to be very important. The link glimpsed in [@Perrine1b] with the theorem of Dyer and Formanek [@LyndonSchupp] suggests that deep results between $\mathbf{T}_{a+1}$ and $GL(a+1,\mathbb{Z})$ are thus accessible. The author also plans to study how $GL(2,\mathbb{Z})$ can be used to encode information. Ideas of this kind were already presented by W. Magnus, who worked for the Telefunken company after 1930 (see [@Magnus2] p. 186).

Algebraic approach
==================

Introduction
------------

The question studied next concerns the algebraic meaning of our Diophantine equations $M^{s_1s_2}(a,\partial K,u_\theta )$. We were able to give an interpretation of them thanks to lattices of rank 2 over $\mathbb{Z}$.
This made it possible to continue the classification of these Diophantine equations using what is known for quadratic fields, and to reinterpret some of the results already obtained. An essential observation was that every complete lattice of a quadratic field in fact gives rise to a generalized Markoff equation, making it possible to view its bouquets of solutions as describing relations between ideals of quadratic orders. We also showed how our equations give indications on the integer and rational points of elliptic curves, by embedding them in cubic surfaces which are rational. This point brings out a quantum-like phenomenon of abrupt change in the characteristics of a real elliptic curve when the plane giving rise to it, by intersection with the cubic surface, moves. Every real elliptic curve can be obtained in this way; this opens an interesting perspective. The content of this chapter was presented at the Journées Arithmétiques in Lille [@Perrine8].

Link between our equations and real quadratic fields
----------------------------------------------------

In most cases the number $\Delta _\phi =((a+1)m+K_1-K_2)^2-4\varepsilon _1\varepsilon _2$ is positive.
The divisibility condition condensing the equation $M^{s_1s_2}(a,\partial K,u_\theta )$ reads: $$4m\mid (2m_1-\partial ^{a+1}Km_2)^2-\Delta _\phi (m_2)^2=4\phi _\theta (m_1,-\varepsilon _2m_2).$$ Two cases appear according to the parity of $\partial ^{a+1}K=\varepsilon _2((a+1)m+K_1-K_2)$, which are gathered together by setting $$\tau =0\text{ and }d=\frac{\Delta _\phi }4\;\text{if \ }\Delta _\phi \equiv 0\;(\mod\,4),\;\;\tau =1\text{ and }d=\Delta _\phi \;\text{if \ }\Delta _\phi \equiv 1\;(\mod\,4),$$ $$k=\frac{\partial ^{a+1}K-\tau }2\in \mathbb{Z},\;\;\varpi =\frac{\tau +\sqrt{\Delta _\phi }}2=\frac{\tau +\sqrt{d}}{2^\tau },$$ $$P_\varpi (x)=\frac{(2x-\tau )^2-\Delta _\phi }4=x^2-\tau x-\frac{\Delta _\phi -\tau }4.$$ With these notations, the divisibility condition reads simply $$m\mid m_2^2P_\varpi (\frac{m_1-m_2k}{m_2}).$$ In the case of degenerate equations, $\mathbb{Q}(\sqrt{d})$ is not a quadratic field. In the case of pointed equations, $\mathbb{Q}(\sqrt{d})$ is an imaginary quadratic field: $\mathbb{Q}(i)$ for the pointed cases no. 1, where the classical Markoff theory is recovered, and $\mathbb{Q}(j)$ for the pointed cases no. 2. In the other cases $\mathbb{Q}(\sqrt{d})$ is a real quadratic field linked to the equation $M^{s_1s_2}(a,\partial K,u_\theta )$.

### Construction of complete $\mathbb{Z}$-modules

The study of the divisibility condition brought to light is a very classical problem of number theory (see for example [@Legendre] Volume 1 p. 200). It is interpreted in the quadratic field $\mathbb{Q}(\sqrt{d})$ by setting, with $\delta =\gcd (m,m_1,m_2)>0$: $$\mathbf{c}_2=m/\delta ,\;\;\mathbf{e}_2\equiv (m_2k-m_1)/\delta \;\;(\mod\,\mathbf{c}_2)\;\text{with }0\leq \mathbf{e}_2<\mathbf{c}_2,\;\;\mathbf{f}_2=m_2/\delta >0.$$ Following for example [@Faisant] (p. 11) or [@Borevitch] (pp. 144-169), it means that there exists a complete $\mathbb{Z}$-module of $\mathbb{Q}(\sqrt{d})$, also called a lattice of rank $2$ over $\mathbb{Z}$.
It is an ideal of the order $\mathcal{O}_{m_2}=\mathbb{Z}[m_2\varpi ]$ of the quadratic field $\mathbb{Q}(\sqrt{d})$, denoted $$\mathbb{M}_2^{\diamond }=(\delta )(\mathbf{c}_2;\mathbf{e}_2+\mathbf{f}_2\varpi )=\{xm+y(m_2(k+\varpi )-m_1)\mid x,y\in \mathbb{Z}\}.$$ The ring of stabilizers of the lattice $\mathbb{M}_2^{\diamond }$ is an order $\mathcal{O}_{\mathbf{c}_2}=\mathbb{Z}[(m_2/\delta )\varpi ]$ of $\mathbb{Q}(\sqrt{d})$. As a module over $\mathbb{Z}$, the lattice $\mathbb{M}_2^{\diamond }$ has norm $N(\mathbb{M}_2^{\diamond })=m\delta $. The quadratic form associated with this basis has coefficients in $\mathbb{Z}$ and reads: $$f_{\mathbb{M}_2^{\diamond }}(x,y)=\frac 1\delta (mx^2+(m_2\partial ^{a+1}K-2m_1)xy+(\mu -\varepsilon _2(a+1)m_1m_2)y^2).$$ The link with the quadratic forms $\phi _\theta (z,y)$ and $F_\theta (x,y)$ then appears by setting $\mathbf{z}=mx-m_1y$ and $\mathbf{y}=\varepsilon _2m_2y$ in the form $f_{\mathbb{M}_2^{\diamond }}$ associated with $\mathbb{M}_2^{\diamond }$: $$m\delta f_{\mathbb{M}_2^{\diamond }}(x,y)=\phi _\theta (\mathbf{z},\mathbf{y})=N(\mathbf{z-y}(m\theta _a(S)-K_1)).$$ The form $\phi _\theta $ is thus a norm form of the quadratic field $\mathbb{Q}(\sqrt{d})$, which explains its multiplicativity property. 
The preceding computations emphasize the lattice $\mathbb{M}_\theta =\{\mathbf{x}m-\mathbf{y}m\theta _a(S)\mid \mathbf{x},\mathbf{y}\in \mathbb{Z}\}$, with which we obtained: The quadratic form associated with the basis $[1,-(m\theta _a(S)-K_1)]$ of the maximal order $\mathcal{O}_\theta =\mathbb{Z}[\varpi ]=\mathbb{Z}[\mathbf{-}(m\theta _a(S)-K_1)]$ of the quadratic field $\mathbb{Q}(\sqrt{d})$ is, with $N(\mathcal{O}_\theta )=1$, $$\phi _\theta (\mathbf{z},\mathbf{y})=f_{\mathcal{O}_\theta }(\mathbf{z},\mathbf{y})=N(\mathbf{z-y}(m\theta _a(S)-K_1)).$$ This order contains an integral ideal $\mathbb{M}_\theta =\{\mathbf{x}m+\mathbf{y}m\theta _a(S)\mid \mathbf{x},\mathbf{y}\in \mathbb{Z}\}$ of norm $m$, whose quadratic form associated with the basis $[m,-m\theta _a(S)]$ is $$mF_\theta (\mathbf{x},\mathbf{y})=f_{\mathbb{M}_\theta }(\mathbf{x},\mathbf{y})=\frac{N(\mathbf{x-y}m\theta _a(S))}{N(\mathbb{M}_\theta )}.$$ ### Other complete $\mathbb{Z}$-modules The order $\mathcal{O}_{m_2}=\mathbb{Z}[m_2\varpi ]$ is a subring of the maximal order $\mathcal{O}_\theta $. 
With its ideal $\mathbb{M}_2^{\diamond }$ we may set: $\bullet $ For $\varepsilon _2=1$: $$\mathbb{M}_2=\mathbb{M}_2^{\diamond }=\{(x+y((a+1)m_2-k_2))m+(ym_2)m\theta _a(S)\mid x,y\in \mathbb{Z}\}\subset \mathbb{M}_\theta .$$ $\bullet $ For $\varepsilon _2=-1$: $$\overline{\mathbb{M}_2}=\mathbb{M}_2^{\diamond }=\{(x-y((a+1)m_2-k_2))m-(ym_2)m\overline{\theta _a(S)}\mid x,y\in \mathbb{Z}\}\subset \overline{\mathbb{M}_\theta }.$$ With the lattice $\mathbb{M}_{\delta \theta }=\{\mathbf{x}m-\mathbf{y}\delta m\theta _a(S)\mid \mathbf{x},\mathbf{y}\in \mathbb{Z}\}$ of $\mathbb{Q}(\sqrt{d})$, we then have: With the preceding notation and the lattices just introduced, the divisibility condition yields the inclusions $$\mathbb{M}_2\subset \mathbb{M}_{\delta \theta }\subset \mathbb{M}_\theta ,\;\overline{\mathbb{M}_2}\subset \overline{\mathbb{M}_{\delta \theta }}\subset \overline{\mathbb{M}_\theta }.$$ ### A product decomposition In what we have just seen, $m_1$ and $m_2$ could have been exchanged. This leads to a computation analogous to the previous one, in the order $\mathcal{O}_{m_1}=\mathbb{Z}[m_1\varpi ]$ of the same quadratic field $\mathbb{Q}(\sqrt{d})$. It defines a lattice $\mathbb{M}_1^{\diamond }=(\delta )(\mathbf{c}_1;\mathbf{e}_1+\mathbf{f}_1\varpi )$, its norm $m\delta $, its associated quadratic form of discriminant $(m_1^2\Delta _\phi /\delta ^2)$, and its ring of stabilizers $\mathcal{O}_{(m_1/\delta )}=\mathbb{Z}[(m_1/\delta )\varpi ].$ The associated quadratic form is easily computed. 
The order $\mathcal{O}_{m_1}=\mathbb{Z}[m_1\varpi ]$ is another subring of the maximal order $\mathcal{O}_\theta $, which allows us to set: $\bullet $ For $\varepsilon _1=-1$: $$\mathbb{M}_1=\mathbb{M}_1^{\diamond }=\{(x-yk_1)m+(ym_1)m\theta _a(S)\mid x,y\in \mathbb{Z}\}\subset \mathbb{M}_{\delta \theta }\subset \mathbb{M}_\theta .$$ $\bullet $ For $\varepsilon _1=1$: $$\overline{\mathbb{M}_1}=\mathbb{M}_1^{\diamond }=\{(x+yk_1)m-(ym_1)m\overline{\theta _a(S)}\mid x,y\in \mathbb{Z}\}\subset \overline{\mathbb{M}_{\delta \theta }}\subset \overline{\mathbb{M}_\theta }.$$ It then becomes interesting to consider the product $\mathbb{M}_1\mathbb{M}_2$, which indeed makes sense ([@Faisant] p. 20). Completing with the similarity classes [@Faisant] (p. 22), we obtained: In the ideal $\mathbb{M}_{\delta \theta }\mathbb{=}\{\mathbf{x}m-\mathbf{y}\delta m\theta _a(S)\mid \mathbf{x},\mathbf{y}\in \mathbb{Z}\}$ of the order $\mathcal{O}_\theta =\mathbb{Z}[\varpi ]$ there exist two lattices $$\mathbb{M}_1=\{(x-yk_1)m+(ym_1)m\theta _a(S)\mid x,y\in \mathbb{Z}\},$$ $$\mathbb{M}_2=\{(x+y((a+1)m_2-k_2))m+(ym_2)m\theta _a(S)\mid x,y\in \mathbb{Z}\}.$$ The first is an ideal of the ring $\mathcal{O}_{m_1}=\mathbb{Z}[m_1\varpi ]$. Its ring of stabilizers is $\mathcal{O}_{(m_1/\delta )}=\mathbb{Z}[(m_1/\delta )\varpi ]$ and its norm is $m\delta $. The second is an ideal of the ring $\mathcal{O}_{m_2}=\mathbb{Z}[m_2\varpi ]$. Its ring of stabilizers is the order $\mathcal{O}_{(m_2/\delta )}=\mathbb{Z}[(m_2/\delta )\varpi ]$ and its norm is also $m\delta $. Finally $$\mathbb{M}_1\mathbb{M}_2=m\mathbb{M}_{\delta \theta }=\{\mathbf{x}m^2-\mathbf{y}\delta m^2\theta _a(S)\mid \mathbf{x},\mathbf{y}\in \mathbb{Z}\},$$ or, in terms of similarity classes of lattices, $[\mathbb{M}_1][\mathbb{M}_2]=[\mathbb{M}_{\delta \theta }]$. Analogous conditions hold for the conjugate lattices. 
### Equation of an arbitrary complete $\mathbb{Z}$-module Conversely, the data of an arbitrary ideal $\mathbf{I}=(\delta )(\mathbf{c};\mathbf{e}+\mathbf{f}\varpi )$ in an order $\mathcal{O}_{m_2}$ of a quadratic field $\mathbb{Q}(\sqrt{d})$, where $d$ is squarefree, leads to a divisibility condition and to a Diophantine equation, and this for every value of $m_2$. To see this, one generalizes the preceding computations by running them backwards. This gave: Every ideal of an order $\mathcal{O}_{m_2}$ of an arbitrary quadratic field $\mathbb{Q}(\sqrt{d})$ defines a Diophantine relation. With the conditions $\varepsilon _2^{\prime }\in \mathbb{Z}\backslash \{0\}$ and $\varepsilon _1^{\prime }=\varepsilon _2^{\prime }\varepsilon ^{\prime }\in \mathbb{Z}$ it reads $$m^2+\varepsilon _2^{\prime }m_1^2+\varepsilon _1^{\prime }m_2^2=(a+1)mm_1m_2-\varepsilon _2^{\prime }\mathbf{\partial }^{a+1}m_1m_2-u^{\prime }m.$$ With $(m,m_1,m_2)\in (\mathbb{N}\setminus \{0\})^3$ it corresponds to the following conditions, where $\mathbf{\partial }^{a+1}\in \mathbb{Z}$ and $\varepsilon ^{\prime }\in \mathbb{Z}$: $$m\mid (m_1^2-\mathbf{\partial }^{a+1}m_1m_2+\varepsilon ^{\prime }m_2^2),\;\;\delta =\gcd (m,m_1,m_2).$$ Such an equation in $(m,m_1,m_2)$ generalizes our equations $M^{s_1s_2}(a,\partial K,u_\theta )$. It differs from those studied in [@Mordell] or [@Rosenberger2]. It corresponds merely to the data of a lattice in a quadratic field. Since every indefinite integral binary quadratic form can be reduced, and hence yields a Markoff form, the new equations brought to light here can be solved by the same means as those developed above. Such equations have been studied, for example, by G. Rosenberger [@Rosenberger2]. Note that the proposition just stated applies to every ideal of an order of an arbitrary quadratic field, even with $d$ negative. 
The situation described here is therefore much more general than the one considered above. The difference is that we have $\varepsilon _1^{\prime }\in \mathbb{Z},\;\varepsilon _2^{\prime }\in \mathbb{Z}\backslash \{0\}$. The decomposition into a product of two lattices now appears to be tied to having $\varepsilon ^{\prime }=\pm 1$, hence to $\Delta _\phi $ being of the form $(\mathbf{\partial }^{a+1})^2\pm 4$. This property makes it possible to exchange the roles of $m_1$ and $m_2$ in the divisibility condition, and therefore to construct another ideal with which the product of ideals can be formed. In fact, to arrive at the last proposition we imposed the additional constraint that $d$ be squarefree. If instead one allows $\Delta _\phi =(\mathbf{\partial }^{a+1})^2\pm 4=\lambda ^2d$, with $\lambda \in \mathbb{Z}$, which does not change the quadratic field under consideration and leads to solving a Pell-Fermat equation to identify $\lambda $, the preceding computation can be carried out while imposing $\varepsilon _1^{\prime }$, $\varepsilon _2^{\prime }\in \{-1,+1\}$. This shows that our equations are in fact as general as the preceding ones. Choosing one of them amounts, when it is non-singular, to considering a complete lattice in a quadratic field, rather than an arbitrary lattice of such a field. We developed this approach by examining what it means for our equations that the corresponding lattices are strictly similar, as well as the translation, in terms of lattices, of the action of the triangle group $\mathbf{T}_3$ on the solutions and of the existence of a finite number of bouquets of solutions. Indications on the geometric interpretation that can be given of such results are found in [@Hirzebruch0]. The resulting formalism makes it possible to systematize the available results on the link between trees, maximal orders and quadratic forms, as cited in [@Pays] or [@Vigneras] (p. 41). 
The essential point in view is a link between the class number of a quadratic field and the number of bouquets of solutions for some of our equations. Link between our equations and elliptic curves -------------------------------------------------- The idea now developed can be understood very simply in geometric terms. With variables $(x,y,z)\in \mathbb{R}^3$, consider the real cubic surface of equation $M^{s_1s_2}(b,\partial K,u)$. Cut by a plane, it yields a cubic curve which, in various cases, is shown to be elliptic. Having information, thanks to the action of the group $\mathbf{T}_3$, on the integer points of the surface, one hopes to deduce consequences for the integer points of the elliptic curve. Several attempts to implement this idea on the classical Markoff equation proved unsuccessful. But we were able to develop it on our generalized equations; we explain how and why below. We first give an example showing how this approach works. ### An example Consider the equation $M^{++}(2,0,-2)$. A triple of solutions $(m,m_1,m_2)=(73,8,3)$ is known. It corresponds to the parameters $$K_1=K_2=46,\;k_1=k_{12}=5,\;k_2=k_{21}=2.$$ These values satisfy, for instance, the relation $2m_1=5m_2+1$. Combining it with the relation $M^{++}(2,0,-2)$ linking $m$, $m_1$, $m_2$, we obtain: Consider the real curve $E$ with cubic equation $$30xz^2-4x^2+6xz-29z^2+8x-10z-1=0.$$ It is an elliptic curve containing the integer point $(x,z)=(m,m_2)=(73,3)$. Conversely, every integer point $(x,z)=(m,m_2)\in \mathbb{Z}^2$ of this elliptic curve $E$ is moreover such that there exists an integer point $(x,y,z)=(m,m_1,m_2)\in \mathbb{Z}^3$ on the real cubic surface $M^{++}(2,0,-2)$ of equation $$x^2+y^2+z^2=3xyz+2x.$$ The delicate part is proving that $E$ is indeed elliptic. 
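The arithmetic of this example is easy to verify directly. The sketch below checks the triple $(73,8,3)$ against the surface equation, the auxiliary relation $2m_1=5m_2+1$, and the plane cubic $E$ at the point $(x,z)=(73,3)$.

```python
# Worked example: the triple (m, m1, m2) = (73, 8, 3) on the cubic
# surface M^{++}(2,0,-2) and the corresponding point (73, 3) on E.
m, m1, m2 = 73, 8, 3

# surface: x^2 + y^2 + z^2 = 3xyz + 2x
assert m**2 + m1**2 + m2**2 == 3*m*m1*m2 + 2*m

# auxiliary relation used to eliminate y
assert 2*m1 == 5*m2 + 1

# plane cubic E: 30xz^2 - 4x^2 + 6xz - 29z^2 + 8x - 10z - 1 = 0
x, z = m, m2
assert 30*x*z**2 - 4*x**2 + 6*x*z - 29*z**2 + 8*x - 10*z - 1 == 0
print("all checks pass")
```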
For this one uses Nagell's reduction algorithm [@Nagell], as presented in [@HCohen] or [@Connell]. We refer to [@Perrine9] for the effective proof. ### Singular cases Denote by $M^{s_1s_2}(b,\partial K,u)$ the cubic surface under consideration, written like the equation defining it. We cut it by a plane $\Pi _{(t_{1,\rho },t_{2,\rho })}$ of equation $u=t_{1,\rho }z-t_{2,\rho }y$. This equation derives from the expression for $u$ already seen, where we write, with $\rho \in \mathbb{Z}$, $$t_{1,\rho }=k_1+k_{12}-\rho m_1=t_1-(\rho -1)m_1,\;\;t_{2,\rho }=k_2+k_{21}-\rho m_2=t_2-(\rho -1)m_2.$$ The intersection is a curve denoted $E_{(t_{1,\rho },t_{2,\rho })}$. The preceding computation cannot possibly work for the classical Markoff equation $M^{++}(2,0,0)$, since it gives $t_1=t_2=u=0$. In such a case, called totally singular, the plane $\Pi _{(t_{1,\rho },t_{2,\rho })}$ with which to cut our cubic surface is not defined. A fortiori, one does not obtain an elliptic curve, even by changing the value of $\rho $. Many totally singular cases could be manufactured. Apart from these cases, which we now set aside, other situations arise, called partially singular. The plane $\Pi _{(t_{1,\rho },t_{2,\rho })}$ is computable, but its intersection with the cubic surface $M^{s_1s_2}(b,\partial K,u)$ is a curve of degree at most 2. Examples were given in [@Perrine9]. ### General case We now consider the non-singular cases, where necessarily $t_{1,\rho }t_{2,\rho }\neq 0$. For the cubic curve $E_{(t_{1,\rho },t_{2,\rho })}$ one finds an equation with integer coefficients. Nagell's algorithm can be applied to it. Apart from a few particular cases that can be made explicit, the curve produced by this algorithm is elliptic. The cases that escape can be studied separately. 
In this way we have exhibited, for every real cubic surface $M^{s_1s_2}(b,\partial K,u)$, a set of elliptic curves $E_{(t_{1,\rho },t_{2,\rho })}$ attached to it, together with finitely many integer points on the curve $E_{(t_{1,\rho },t_{2,\rho })}$ that also lie on the surface. Restricting to $\rho =0$, every integer point of the cubic surface $M^{s_1s_2}(b,\partial K,u)$ appears on an elliptic curve $E_{(t_{1,\rho },t_{2,\rho })}$ contained in the surface. Conversely, if one considers an integer point $(x,z)=(m,m_2)\in \mathbb{Z}^2$ of an elliptic curve $E_{(t_{1,\rho },t_{2,\rho })}$, its equation provides in $\mathbb{Z}$ a condition forcing $m_1$ to be rational. The particular shape of the degree-2 equation in $m_1$ deduced from the equation $M^{s_1s_2}(b,\partial K,u)$ then shows that $m_1$ is in fact an integer. In other words, the integer points of the curve $E_{(t_{1,\rho },t_{2,\rho })}$ are exactly the integer points of the surface $M^{s_1s_2}(b,\partial K,u)$ lying in the plane $\Pi _{(t_{1,\rho },t_{2,\rho })}$. By Mordell's theorem ([@Mordell] chapter 27), there are only finitely many integer points on the curve $E_{(t_{1,\rho },t_{2,\rho })}$. However, in general the surface $M^{s_1s_2}(b,\partial K,u)$ has infinitely many integer points, as we saw with the tree constructions made by means of Cohn triples. Moreover, in the most general case, they fall into a finite number of orbits under the action of the group $\mathbf{T}_3$. This makes it possible to classify the points of the curve $E_{(t_{1,\rho },t_{2,\rho })}$. For further material on the integer points of elliptic curves and their effective computation, we refer to [@Smart] (XIII.3.). 
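The tree constructions by Cohn triples mentioned above can be illustrated on the classical Markoff equation $x^2+y^2+z^2=3xyz$ (the totally singular case $M^{++}(2,0,0)$): each move of the $\mathbf{T}_3$-type action replaces one coordinate by the other root of the quadratic it satisfies, generating the solution tree from $(1,1,1)$. A minimal sketch:

```python
# Vieta involutions on x^2 + y^2 + z^2 = 3xyz: if (x, y, z) is a
# solution, so are (3yz - x, y, z) and its two analogues.
def neighbors(t):
    x, y, z = t
    return [(3*y*z - x, y, z), (x, 3*x*z - y, z), (x, y, 3*x*y - z)]

seen, frontier = {(1, 1, 1)}, [(1, 1, 1)]
for _ in range(6):                       # a few levels of the tree
    frontier = [n for t in frontier for n in neighbors(t) if n not in seen]
    seen.update(frontier)

triples = sorted({tuple(sorted(t)) for t in seen})
assert all(x*x + y*y + z*z == 3*x*y*z for x, y, z in triples)
markoff_numbers = sorted({max(t) for t in triples})
print(markoff_numbers[:7])   # -> [1, 2, 5, 13, 29, 34, 89]
```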
A more global study of this situation remains to be carried out, bearing in mind that the context of elliptic surfaces ([@Silverman] chapter 3) provides interesting elements of understanding, and that one may consider more general planes with which to cut the surface. ### Geometric description of the cubic surface The real cubic surface $M^{s_1s_2}(b,\partial K,u)$ can be studied with classical methods of algebraic geometry (see for instance [@Hartshorne]). We complexify the variables to simplify the statements when necessary. #### Singular points The equation defining the surface is of order 3: $$F(x,y,z)=(b+1)xyz-x^2-\varepsilon _2y^2-\varepsilon _1z^2+\varepsilon _2\partial Kyz-ux=0.$$ The singular points not at infinity, double points when they exist, are computable: $$x=0,\;\;\partial K=\pm 2,\;\;2z=\partial Ky,\;\;u=(b+1)yz,$$ $$x=u=(\varepsilon _2\varepsilon ^{\prime }(b+1)y^2/3),\;\;\varepsilon _2\partial K=2\varepsilon ^{\prime }-u(b+1),\;\;z=\varepsilon _2\varepsilon ^{\prime }y.$$ Outside all these cases, which are fairly numerous and contain for example the classical Markoff theory, the surface has no singular point and is therefore non-singular. #### Generating lines The surface has double points at infinity, namely the points at infinity of the coordinate axes. These are the vertices **A**, **B**, **C** of a triangle whose sides are generating lines, that is, lines contained in the surface, in this case lying at infinity on the surface. By construction, the other generating lines of the surface are at finite distance and parallel to one of the coordinate planes. They can all be computed [@Perrine9]. In all there are eight generating lines parallel to the plane $yOz$. By the same procedure one obtains eight generating lines parallel to the plane $xOy$ and eight parallel to the plane $xOz$. 
In all, one thus finds the $(3\times 8)+3=27$ real or complex generating lines of Cayley and Salmon for the cubic surface under study [@Henderson]. Using a classical method (for example [@Bouligand] p. 466) one deduces a rational parametrization of the surface, which merely expresses in this particular case the fact that every surface of the third order is rational (unicursal). It is interesting to make such a rational parametrization of $M^{s_1s_2}(b,\partial K,u)$ explicit in order to understand, at the intersection with planes such as those used above, the consequences for the elliptic curves brought to light above. When a double point exists at finite distance on the surface, any line through this point also defines such a rational parametrization of the cubic surface. In the other cases, one can likewise apply the tangent method due to B. Segre [@Segre] to construct a rational parametrization of the surface. #### Rational parametrization of the real cubic surface The construction of such a parametrization was described in [@Perrine9]. One considers the trace of the surface of equation $M^{s_1s_2}(b,\partial K,u)$ in the plane $(b+1)x+\varepsilon _2\partial K=0$. Apart from limiting or impossible cases, it is a conic. This allows us to consider a point $\Omega (\mathcal{X},\mathcal{Y},\mathcal{Z})$ on this conic whose coordinates are written with a first parameter $\mu $. We then pass to a coordinate frame with origin $\Omega $, setting $x=\mathcal{X}+x_0,\;\;y=\mathcal{Y}+y_0,\;\;z=\mathcal{Z}+z_0$. The equation of the surface then reads, with homogeneous polynomials $\Phi _i$ of degree $i$ in $x_0$, $y_0$, $z_0$: $$\Phi _3(x_0,y_0,z_0)+\Phi _2(x_0,y_0,z_0)+\Phi _1(x_0,y_0,z_0)=0.$$ Here the cubic part is $\Phi _3(x_0,y_0,z_0)=(b+1)x_0y_0z_0$, and the tangent plane to the surface at $\Omega $ has equation $\Phi _1(x_0,y_0,z_0)=0$. 
We change frames once more, using it to set $$x_1=x_0,\;\;y_1=y_0,\;\;z_1=((b+1)\mathcal{YZ}-2\mathcal{X}-u)x_0-2\varepsilon _2\mathcal{Y}y_0-2\varepsilon _1\mathcal{Z}z_0.$$ The equation of the surface then reads, with polynomials $\Psi _i$ of degree $i$ in $x_1$, $y_1$, $z_1$: $$\Psi _3(x_1,y_1,z_1)+\Psi _2(x_1,y_1,z_1)+\Psi _1(x_1,y_1,z_1)=0.$$ Using a line of equations $z_1=0$ and $x_1=\lambda y_1$, passing through the double point $\Omega $ of the tangent plane and hence cutting the surface in a third point whose coordinates are computable, we obtain a representation in $\lambda $ and $\mu $ by replacing $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$ by their expressions in terms of $\mu $ and simplifying the resulting formulas. This yields a birational representation, in the two parameters $\lambda $ and $\mu $, of the surface $M^{s_1s_2}(b,\partial K,u)$, which is therefore ([@Hartshorne] p. 422) a rational real surface of Kodaira dimension $\kappa =-1$. Hence the possibility of comparing it to a real projective plane built on the two variables $\lambda $ and $\mu $. This representation degenerates into the one used by H. Cohn in the article [@Cohn4], due to R. Fricke [@Fricke], in the case of the classical Markoff theory. References for obtaining other rational parametrizations of the surfaces $M^{s_1s_2}(b,\partial K,u)$ are found in [@Bajaj]. They make it possible to describe the set of rational points $E(\mathbb{Q})$ of the elliptic curves $E$ introduced as the intersection of the cubic surface with a plane with rational equation. These points are parametrized by means of $\lambda $ and $\mu $ satisfying an additional algebraic constraint, obtained by replacing $y$ and $z$ by their expressions in the relation defining the plane. 
Perspectives ------------ The last topic mentioned, where changing the plane amounts to deforming the real elliptic curve $E$ with occasional quantum-like jumps in the algebraic structures it carries, remains entirely to be explored. We have thought of using it to construct elliptic curves of large rank. The surface $M^{s_1s_2}(b,\partial K,u)$ is used to control the geometry of the real elliptic curves $E$ it contains. These curves are, moreover, not rational. They give a good example of the well-known remark ([@LevyBruhl] p. 171) that the plane sections of a rational surface are not necessarily rational curves. The method followed consisted in using the smallest rational variety containing a given algebraic variety in order to study the latter. Note that the construction of the group structure of an elliptic curve can be adapted to the surface $M^{s_1s_2}(b,\partial K,u)$. A modern approach to non-singular cubic surfaces $\mathfrak{X}$ is found in [@Kollar] (chapter 1), showing how they give rise to a lattice $\mathbb{Z}^7$ equipped with a scalar product of signature $(1,-6)$. This lattice can be described in terms of homology or cohomology. It equals the divisor class group $Pic(\mathfrak{X})$. On such surfaces one can develop a Galois theory with the Weyl group $W(E_6)$, corresponding to the permutations of their 27 lines in 45 tritangent planes [@Hartshorne] (p. 405). One thus brings out, for such a cubic over $\mathbb{C}$, a simple group with $25920$ elements, which can be represented as the unitary group $U_4(2)$ over the field $F_4$, as the symplectic group $PSp_4(3)$ over the field $F_3$, and as the orthogonal group $O_6^{-}(2)$ over the field $F_2$ [@Conway3]. Cubic surfaces are in particular well-known examples of Del Pezzo surfaces [@Hartshorne] (p. 401). 
Restricting to the real case, the Galois theory just mentioned gives indications on the configurations one may find. Magnificent developments around $W(E_6)$ are found in [@Hunt] (chapters 5 and 6). There is an evident link with a particular Steiner system, the projective plane of order 2 known as the Fano plane [@Assmus] (p. 4), with certain regular systems of weights [@Saito3] (p. 522), and with Lie algebras [@Leung]. The real surfaces $M^{s_1s_2}(b,\partial K,u)$ fall under this approach. One may also envisage transposing the article of M. H. Èl'-Huti [@ElHuti]. Developments comparable to those of [@Manin] [@Manin1] (p. 89) make it possible to compute the group of all birational automorphisms of the cubic surface $M^{s_1s_2}(b,\partial K,u)$, and to verify that its action on the set of integer solutions of the corresponding Diophantine equation is transitive. The result obtained is essentially the same as Èl'-Huti's. It gives a geometric representation of the group $\mathbf{T}_3$ by the group of transformations of the surface generated by the reflections with respect to the double points at infinity **A**, **B**, **C**. This group acts transitively on the set of integer solutions of the Diophantine equation $M^{s_1s_2}(b,\partial K,u)$. This yields a geometric interpretation explaining, through the triangle group $\mathbf{T}_3$, the tree structures built with the Cohn triples. One can also characterize, as a group of birational automorphisms of the surface, the group generated by $\mathbf{T}_3$ and the group $W$ of all projective automorphisms of the surface that are biregular outside the set of points of the sides of the triangle **A**, **B**, **C**. 
This makes it possible to describe the Brauer group of the surface $M^{s_1s_2}(b,\partial K,u)$ and to study, on non-trivial examples, contemporary problems of arithmetic geometry [@Lang] [@Cornell] [@Hulsbergen]. We refer to [@Manin1] [@Colliot1] [@Swinnerton] [@Serre3] [@Colliot] [@Jahnel] [@Bajaj] [@Silverman1] for the perspective already envisaged in [@Perrine3], indicating that there is no counterexample to the Hasse principle for our equations. Another avenue of study that also seems promising [@Colliot] (p. 397) is to make a link with the Severi-Brauer surfaces built with the norm of a cubic field. This construction of F. Châtelet gives a special role to the permutation group on three elements, a group represented on our surfaces by geometric transformations permuting the double points **A**, **B** and **C**. The study of the link with elliptic surfaces ([@Hartshorne] (chapter V) [@Shioda] [@Friedman]) is also a direction we would like to pursue, by investigating which kind of set must be removed to pass from one type of surface to another. The other results obtained showed that our surfaces are closely linked to complete lattices of quadratic fields, which is why we believe they yield no counterexample to the Hasse principle. Various perspectives were identified, among them that of relating trees and orders. The local interpretation on our cubic surfaces of all these results is possible. Another idea is to shed light on the great conjectures on elliptic curves that are still unresolved [@Wiles2]. One may pass from quadratic fields to more general fields and seek to transpose the above. Analytic approach ================= Introduction ------------ The classical Markoff theory, notably in Harvey Cohn's presentation [@Cohn2], is linked to the geometry of certain conformal punctured tori and their geodesics. 
The question that arose was whether the same holds for the generalization developed above. This problem has been solved. To do so, we first characterized punctured tori, then made the link with the matrices brought out in the computations of the preceding chapters. This is possible thanks to an equation generalizing Markoff's to every conformal punctured torus. It justifies a posteriori the soundness of the choice of the equations $M^{s_1s_2}(b,\partial K,u)$ that we put forward. The definitions used for hyperbolic geometry are classical and taken from [@Perrine9]. From there we were able to classify the conformal punctured tori built on one and the same topological punctured torus. The originality of what follows lies essentially in the rigorous treatment of parabolic punctured tori. It confirms that these tori are given by the equation of the classical Markoff theory. That all Fricke groups are indeed characterized in this way was stated long ago ([@FrickeKlein] [@Rosenberger] [@Keen4]), but many proofs in the literature have gaps ([@Gilman] p. 3), which does not seem to be the case for our approach. Below we give an example of a statement that must be taken with great caution. The counterexample we gave in the case of a hyperbolic punctured torus, showing that it is associated with a non-free group, seems completely new. And the link discovered with a problem of algebraic geometry gives a common perspective of understanding for the two preceding cases. It relates the group of matrices under consideration to a group of divisors of a surface. This made it possible to elaborate the analytic viewpoint of the theory, whose algebraic viewpoint was sketched in the preceding chapter [@Serre4]. The text that follows develops the approach that led to these results. 
They were presented in lectures given in 1996-1997 at a CNRS thematic school [@Perrine2a] and at the Institut des Matériaux du Mans [@LeMehaute]. Construction of conformal punctured tori -------------------------------------- The punctured tori studied are built from the Poincaré half-plane $\mathcal{H}$. Each of them is naturally indexed by $n$-tuples of real numbers. These numbers are linked by relations organizing them into a new geometric object $\mathcal{V}$. We thus construct a set of Riemann surfaces $(\mathcal{H}/\Gamma _s)_{s\in \mathcal{V}}$, punctured tori with the same underlying topological support but whose geometry is described in a particular way at each point $s\in \mathcal{V}$ of the object. This approach, which amounts to parametrizing the different Riemann surface structures existing on a single topological object, is that of Teichmüller theory [@Keen1] [@Imayoshi] [@Seppala] [@Nag]. We developed it on punctured tori, addressing the problem of choosing the most relevant object $\mathcal{V}$ and the variables one can hide by working up to conformal or isometric equivalence of $\mathcal{H}$. ### The two matrices of a conformal punctured torus To construct a torus $\mathcal{T}^{\bullet }$ punctured by removing a point, one uses four geodesics of $\mathcal{H}$, denoted $\alpha s$, $s\beta $, $\beta p$, $p\alpha $, pairwise non-intersecting, whose endpoints $\alpha $, $s$, $\beta $, $p$ lie on the real line forming the boundary of $\mathcal{H}$. They bound a quadrangular domain of $\mathcal{H}$. We agree that the vertices $\alpha $, $s$, $\beta $, $p$ appear in this order as this boundary is traversed from $-\infty $ to $+\infty $. They are real numbers, but we allow $p$ to possibly take an infinite value. 
Indeed the points $-\infty $ and $+\infty $ of the boundary of $\mathcal{H}$ coincide at the single point at infinity $\infty $, compactifying this boundary into a projective line $\mathcal{S}^1=\mathbf{P}^1(\mathbb{R})$. This boundary compactifies $\mathcal{H}$ itself in a certain way, as a closed half-sphere (or a closed disc). To recover the punctured torus from there, the preceding geodesics are identified in pairs by the transformations $$t_A:\alpha p\rightarrow s\beta ,\;\;t_B:\alpha s\rightarrow p\beta .$$ This amounts to building the torus by gluing, via $t_A$ and $t_B$, the sides of the quadrangular domain defined above. In this operation, the point removed from the torus corresponds to the four points $\alpha $, $s$, $\beta $, $p$, which are identified by $t_A$ or $t_B$. They have no image in the constructed object, since they lie on the boundary of $\mathcal{H}$ and not in $\mathcal{H}$. To preserve as many geometric properties as possible, and not only the underlying topological properties, the transformations $t_A$ and $t_B$ must be isometries of $\mathcal{H}$ for its usual metric. If we also want them to preserve orientation and angles, they must be conformal transformations $t_A$ and $t_B$ given by matrices $A$ and $B$ of $SL(2,\mathbb{R})$. 
Together with the endpoints of the geodesics, the matrices $A$ and $B$ satisfy conditions that allow them to be computed in terms of the numbers $\alpha $, $s$, $\beta $, $p$. Up to conjugation by a matrix $M$ of $SL(2,\mathbb{R})$, one has the following parametric representation for the matrices $A$ and $B$ defining a conformal punctured torus, constructed in $SL(2,\mathbb{R})$ with $\alpha <0$ and $\beta >0$: $$A=\left[ \begin{array}{cc} c\beta & -c\alpha \beta \\ c & (1/c\beta )-c\alpha \end{array} \right] \text{ where }c\neq 0,$$ $$B=\left[ \begin{array}{cc} c^{\prime }\alpha & -c^{\prime }\alpha \beta \\ c^{\prime } & (1/c^{\prime }\alpha )-c^{\prime }\beta \end{array} \right] \text{ where }c^{\prime }\neq 0.$$ Such matrices are associated with the values $\alpha <0$, $s=0$, $\beta >0$, and $p=\infty $ on the boundary of $\mathcal{H}$, which they transform as follows: $$A(\alpha )=s,\;\;A(p)=\beta ,\;\;B(\beta )=s,\;\;B(p)=\alpha .$$ For the associated geodesics of $\mathcal{H}$ they give $$A(\alpha p)=s\beta ,\;\;B(\alpha s)=p\beta .$$ The expressions given for $A$ and $B$ in this proposition follow from the computation of their determinant, which equals $1$. Working up to conformal equivalence of $\mathcal{H}$ has made it possible to hide two parameters. The remaining ones define a real geometric object $\mathcal{V}$ of dimension $4$, through which all the possible pairs $(A,B)$ are indexed. Up to conformal equivalence of $\mathcal{H}$, all the possibilities for conformal punctured tori are indexed by the retained parameters $(\alpha ,\beta ,c,c^{\prime })\in \mathcal{V}$. The geometric object $\mathcal{V}$ is defined by the constraints $$\alpha <0,\;\;\beta >0,\;\;c\neq 0,\;\;c^{\prime }\neq 0.$$

### The Fuchsian group of a conformal punctured torus

Having identified two matrices $A$ and $B$ by the preceding result, one considers in $SL(2,\mathbb{R})$ the group they generate, $G=gp(A,B)$.
Its image under the canonical morphism $\psi $ from $SL(2,\mathbb{R})$ to $PSL(2,\mathbb{R})$ is denoted $$\Gamma =PG=Pgp(A,B)=G/G\cap \{\pm \mathbf{1}_2\}=gp(\psi (A),\psi (B))=gp(a,b).$$ This group of conformal transformations acts on the Poincaré half-plane $\mathcal{H}$. The quotient is a torus punctured by removing a point, $\mathcal{T}_\Gamma ^{\bullet }=\mathcal{H}/\Gamma $. Transporting the metric of $\mathcal{H}$ to this quotient makes the projection $\mathcal{H}\rightarrow \mathcal{T}_\Gamma ^{\bullet }$ a conformal map. We say that $A$ and $B$ are the matrices of the conformal punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ and that the group $\Gamma =Pgp(A,B)$ is a Fuchsian group defining $\mathcal{T}_\Gamma ^{\bullet }$. Of course, the same punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ may correspond to other pairs of generators $(A,B)$ of $G$ and to other pairs of generators $(a,b)$ of the group $\Gamma $.

#### The notion of Fricke group

The classical Markoff theory [@Cohn2] fits into the geometric framework just presented, with $$c=\beta =-c^{\prime }=-\alpha =1,$$ $$A=A_0=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] ,\;\;B=B_0=\left[ \begin{array}{cc} 1 & -1 \\ -1 & 2 \end{array} \right] .$$ These two matrices generate [@MagnusKarassSolitar] the derived subgroup of the discrete group $SL(2,\mathbb{Z})$, whence a Fuchsian subgroup of $PSL(2,\mathbb{Z})$ isomorphic to $\mathbf{F}_2$, the free group of rank $2$.
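This classical example is easy to verify numerically; here is a sketch (in exact rational arithmetic) checking that the parameter choice $c=\beta =1$, $c^{\prime }=\alpha =-1$ in the parametric representation above does yield $A_0$ and $B_0$, that the stated boundary actions hold, and that the commutator trace is $-2$ (the parabolic, classical Markoff case, $\sigma =0$):

```python
from fractions import Fraction as F

def mat(rows):  # 2x2 matrix as a tuple of tuples of Fractions
    return tuple(tuple(F(x) for x in r) for r in rows)

def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(M):  # inverse of a determinant-1 matrix
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

def tr(M):
    return M[0][0] + M[1][1]

# Parametric matrices with alpha = -1, s = 0, beta = 1, p = infinity, c = 1, c' = -1:
alpha, beta, c, cp = F(-1), F(1), F(1), F(-1)
A = mat([[c * beta, -c * alpha * beta], [c, 1 / (c * beta) - c * alpha]])
B = mat([[cp * alpha, -cp * alpha * beta], [cp, 1 / (cp * alpha) - cp * beta]])
assert A == mat([[1, 1], [1, 2]]) and B == mat([[1, -1], [-1, 2]])

def act(M, x):  # Moebius action on a real boundary point x
    (a, b), (c_, d) = M
    return (a * x + b) / (c_ * x + d)

assert act(A, alpha) == 0          # A(alpha) = s
assert act(B, beta) == 0           # B(beta) = s
assert A[0][0] / A[1][0] == beta   # A(p) = beta for p = infinity
assert B[0][0] / B[1][0] == alpha  # B(p) = alpha for p = infinity

L = mul(mul(A, B), mul(inv(A), inv(B)))   # commutator [A, B]
assert tr(L) == -2                        # parabolic case
sigma = tr(A)**2 + tr(B)**2 + tr(mul(A, B))**2 - tr(A)*tr(B)*tr(mul(A, B))
assert sigma == 0                         # classical Markoff relation
```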
Generalizing this example, one says that a Fuchsian group $\Gamma =PG$ is a Fricke group if and only if it satisfies the two conditions [@Rosenberger] [@Schmidt]:\
(1): The group $\Gamma $ is isomorphic to a free group on two generators $\mathbf{F}_2=\mathbb{Z}*\mathbb{Z}$.\
(2): The Riemann surface $\mathcal{H}/\Gamma $ has an underlying topological space homeomorphic to a topological torus punctured by removing a point.\
In general, it is not always easy to prove that $\Gamma $ is a Fuchsian group [@Gilman], nor necessarily easy to show that one is dealing with a free group [@Newman]. For this, a minimum knowledge of the properties of the matrices $A$ and $B$ is required. Below we give an example showing that certain classical results in this area [@LyndonUllman] [@Purzitsky] must be applied with caution. Our very definition of Fricke groups is not the most commonly accepted one: [@Bers], for instance, gives a definition of Fricke modular groups that encompasses the one above. These definitions originate in the treatise [@FrickeKlein].

#### Inverse image

Let $a$ and $b$ denote the two generators of the Fuchsian subgroup $\Gamma $ of $PSL(2,\mathbb{R})$, and let $A$ and $B$ be inverse images of $a$ and $b$ respectively. Lifting to $SL(2,\mathbb{R})$, one can consider four two-generator subgroups whose image in $PSL(2,\mathbb{R})$ under the canonical projection $\psi $ is $\Gamma $: $$gp(A,B),\;\;gp(-A,B),\;\;gp(A,-B),\;\;gp(-A,-B).$$ The corresponding points $\alpha $, $s=0$, $\beta $, $p=\infty $ defined by each of the preceding groups are identical.
Considering the four preceding possibilities, one says that $gp(A,B)$ is the principal group defined by $\Gamma $ if and only if $$tr(A)\geq 0,\;\;tr(B)\geq 0.$$ The three other groups $gp(-A,B)$, $gp(A,-B)$, $gp(-A,-B)$ are called the conjugate groups of $gp(A,B)$. The lifting of a group $\Gamma \subset PSL(2,\mathbb{R})$ to a group $G\subset SL(2,\mathbb{R})$ whose image is $\Gamma $ is studied in [@Kra]. One has: The principal group $gp(A,B)$ defined by a Fricke group $\Gamma =gp(a,b)$ is free. The canonical projection $\psi $ with $\psi (A)=a$ and $\psi (B)=b$ is an isomorphism from $gp(A,B)$ onto $gp(a,b)$. For the conjugate groups one also has isomorphisms $$\psi :gp(A,-B)\simeq \Gamma ,\;\;\psi :gp(-A,B)\simeq \Gamma ,\;\;\psi :gp(-A,-B)\simeq \Gamma .$$ Finally, for the opposite of the identity matrix, $$-\mathbf{1}_2\notin gp(A,B),\;\;-\mathbf{1}_2\notin gp(A,-B),\;\;-\mathbf{1}_2\notin gp(-A,B),\;\;-\mathbf{1}_2\notin gp(-A,-B).$$

### Hyperbolicity of the two matrices of a punctured torus

From the computed expressions of the matrices $A$ and $B$, one easily gets [@Katok1]: The matrices $A$ and $B$ of a punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ are hyperbolic, that is, $$\left| tr(A)\right| >2,\;\;\left| tr(B)\right| >2.$$ Each has two real fixed points, not at infinity, on the boundary of $\mathcal{H}$, and an invariant geodesic joining them, its axis. In particular, for the principal group $gp(A,B)$ of a conformal punctured torus, one has $tr(A)>2,\;\;tr(B)>2$. The relative position of the endpoints of the axes, namely the fixed points $a^{+}$, $a^{-}$, $b^{+}$, $b^{-}$ of $A$ and $B$, is not arbitrary; equivalently, it matters whether the axes of $A$ and $B$ intersect in $\mathcal{H}$. Moreover, these two axes could only coincide if $c^{\prime 2}\alpha =c^2\beta $; but the signs of $\alpha $ and $\beta $ guarantee that this equality never holds.
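Hyperbolicity and the distinctness of the two axes can be checked numerically on any sample point of the parameter space; a sketch (the helper `fixed_points` solves $cx^2+(d-a)x-b=0$, the fixed-point equation on the boundary):

```python
import math

def fixed_points(M):
    """Real boundary fixed points of a hyperbolic M in SL(2, R):
    solutions of c x^2 + (d - a) x - b = 0."""
    (a, b), (c, d) = M
    disc = (d - a) ** 2 + 4 * b * c   # equals tr(M)^2 - 4 when det(M) = 1
    r = math.sqrt(disc)
    return ((a - d + r) / (2 * c), (a - d - r) / (2 * c))

# Sample point of the parameter space V: alpha = -1 < 0, beta = 2 > 0, c = c' = 1.
alpha, beta, c, cp = -1.0, 2.0, 1.0, 1.0
A = ((c * beta, -c * alpha * beta), (c, 1 / (c * beta) - c * alpha))
B = ((cp * alpha, -cp * alpha * beta), (cp, 1 / (cp * alpha) - cp * beta))

trA = A[0][0] + A[1][1]
trB = B[0][0] + B[1][1]
assert abs(trA) > 2 and abs(trB) > 2          # both matrices are hyperbolic

ap, am = fixed_points(A)
bp, bm = fixed_points(B)
# The two axes are distinct: the fixed-point pairs do not coincide.
assert {round(x, 9) for x in (ap, am)} != {round(x, 9) for x in (bp, bm)}
```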
Introducing a cross-ratio allows a known result to be recovered: with the two conditions $\alpha <0$ and $\beta >0$ and the expressions given for $A$ and $B$, the axes of these two hyperbolic matrices are always distinct. They intersect if and only if $$0>[a^{+},a^{-};b^{+},b^{-}]=\frac{(b^{+}-a^{+})}{(b^{+}-a^{-})}\times \frac{(b^{-}-a^{-})}{(b^{-}-a^{+})}.$$ This condition is equivalent to the fact that every interval of the boundary of $\mathcal{H}$ containing the two fixed points of one of the transformations $A$ or $B$ also contains a fixed point of the other. Definitions of the cross-ratio (a "ratio of ratios" rather than a "cross product") vary from author to author; ours is that of [@Toubiana] [@Sidler].

### The role of commutators

Consider the commutator of $A$ and $B$, defined as $$L=[A,B]=ABA^{-1}B^{-1}.$$ This is the classical definition of the commutator, given for instance in [@Beardon] [@Katok1], and not the one found in [@Bourbaki]. It can be computed explicitly. Following [@Cohn], it allows one to consider another matrix $C^{\circ }$ of $G$ such that $$C^{\circ }BA=1,\;\;ABC^{\circ }=L.$$ The commutator arises naturally in our context because $$L(s)=ABA^{-1}B^{-1}(s)=ABA^{-1}(\beta )=AB(p)=A(\alpha )=s.$$ In other words, it contains all the information needed to define the conformal punctured torus determined by $A$ and $B$. If $A$ and $B$ commute, any possibility of defining the torus disappears. Otherwise, every fixed point of $L$ yields possible points $s$, $\beta $, $p$, $\alpha $. In the generic case one finds two possibilities for $s$, hence for $\beta $, $p$, $\alpha $. Note also that for still more general $A$ and $B$ there is no reason for $s$, $\beta $, $p$, $\alpha $ to be real; the procedure can then produce complete tori.
But we leave those cases aside, focusing on the punctured tori constructed above, where the numbers $s$, $\beta $, $p$, $\alpha $ are real. This gives [@Katok1]: with the expressions of the matrices $A$ and $B$ of the punctured torus $\mathcal{T}_\Gamma ^{\bullet }$, the commutator $L=[A,B]$ satisfies $$tr(L)=tr([A,B])\leq -2.$$ One says that $[A,B]$ is a parabolic matrix when $tr([A,B])=-2$ holds, and that it is hyperbolic when the strict inequality $tr([A,B])<-2$ holds. The inverse matrix $L^{-1}$ allows one to introduce a matrix $C$ satisfying $$CAB=1,\;\;BAC=L^{-1}=[B,A]=[A,B]^{-1},\;\;tr(L^{-1})=tr(L).$$ There is also another commutator $K$ defining the same punctured torus as $L$, with $$ABC=1,\;\;CBA=K=[B^{-1},A^{-1}],\;\;tr(K)=tr(L),$$ $$BAC^{\circ }=1,\;\;C^{\circ }AB=K^{-1}=[A^{-1},B^{-1}],\;\;tr(K^{-1})=tr(K),$$ $$K(p)=B^{-1}A^{-1}BA(p)=B^{-1}A^{-1}B(\beta )=B^{-1}A^{-1}(s)=B^{-1}(\alpha )=p.$$ For the traces of the matrices just considered, it is easy to establish: $$tr(C)=tr(C^{\circ }),$$ $$tr(L)=tr(L^{-1})=tr(K)=tr(K^{-1})\leq -2,$$ $$tr(L)+2=tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)\leq 0.$$ The last equality of this proposition is due to Fricke. It introduces a number used in what follows: $$\sigma =tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB).$$

### Parabolic and hyperbolic punctured tori

The last proposition distinguishes two cases for $K$ and $L$ (compare with [@Wolpert]). Let us illustrate with $K$. $\bullet $ If $tr(K)=-2$, then $c^2\beta =-c^{\prime 2}\alpha $ and the matrices $K$ and $L$ are parabolic. The matrix $K$ simplifies to $$K=\left[ \begin{array}{cc} -1 & 2(1-c^2\alpha \beta -c^{\prime 2}\alpha \beta )/(c^{\prime 2}\alpha ) \\ 0 & -1 \end{array} \right] .$$ It gives a parabolic transformation of the Poincaré half-plane $\mathcal{H}$, whose unique fixed point is $p=\infty $. This allows a unique associated torus $\mathcal{T}_\Gamma ^{\bullet }$ to be defined by means of the matrices $A$ and $B$.
This transformation leaves no geodesic of $\mathcal{H}$ invariant; it corresponds to a translation parallel to the real axis. One says that $\mathcal{T}_\Gamma ^{\bullet }$ is a parabolic conformal punctured torus. $\bullet $ If $tr(K)<-2$, the matrices $K$ and $L$ are hyperbolic. $K$ leaves invariant a geodesic of $\mathcal{H}$, the axis of $K$, which, when $c^2\beta +c^{\prime 2}\alpha \neq 0$, is the geodesic of points $z=x+iy$ of $\mathcal{H}$ satisfying $$x=\frac{(c^2\alpha \beta +c^{\prime 2}\alpha \beta -1)}{(c^2\beta +c^{\prime 2}\alpha )}.$$ It has two fixed points on the boundary of $\mathcal{H}$: the point at infinity $p=\infty $ and the intersection $p^{\prime }$ of this geodesic with the boundary of $\mathcal{H}$. The point at infinity $p$ allows an associated torus $\mathcal{T}_\Gamma ^{\bullet }$ to be defined with the points $B(p)=\alpha $, $A(\alpha )=B(\beta )=s$, $A(p)=\beta $. One says that $\mathcal{T}_\Gamma ^{\bullet }$ is a hyperbolic conformal punctured torus. In this case one can check that the geodesic $pp^{\prime }$, invariant under $K$ in $\mathcal{H}$, yields in $\mathcal{T}_\Gamma ^{\bullet }$ a closed geodesic surrounding the puncture. Removing the punctured disk bounded by this geodesic produces a new surface with a hole, $\mathcal{T}_\Gamma ^{\circ }$. In $\mathcal{H}$, one can draw the fundamental domain corresponding to the preimage of $\mathcal{T}_\Gamma ^{\circ }$ and check that it is stable under the group $gp(A,B)$. Everything happens as if the surface $\mathcal{T}_\Gamma ^{\bullet }$ extended $\mathcal{T}_\Gamma ^{\circ }$ so as to shrink the hole to a puncture. The two objects $\mathcal{T}_\Gamma ^{\bullet }$ and $\mathcal{T}_\Gamma ^{\circ }$ have the same topological support, but not the same conformal support.
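The hyperbolic case above admits a minimal numeric check. With the sample values $\alpha =-1$, $\beta =2$, $c=c^{\prime }=1$ (so $c^2\beta +c^{\prime 2}\alpha =1\neq 0$) in the four-parameter matrices, the commutator $K=[B^{-1},A^{-1}]$ fixes $\infty $, its trace lies below $-2$, and its second fixed point $p^{\prime }$ agrees with the stated formula for the axis:

```python
def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(M):  # inverse of a determinant-1 matrix
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

# Four-parameter matrices with alpha = -1, beta = 2, c = c' = 1 (hyperbolic case).
alpha, beta, c, cp = -1.0, 2.0, 1.0, 1.0
A = ((c * beta, -c * alpha * beta), (c, 1 / (c * beta) - c * alpha))
B = ((cp * alpha, -cp * alpha * beta), (cp, 1 / (cp * alpha) - cp * beta))

K = mul(mul(inv(B), inv(A)), mul(B, A))   # K = [B^{-1}, A^{-1}]
trK = K[0][0] + K[1][1]
assert trK < -2                           # hyperbolic commutator
assert abs(K[1][0]) < 1e-12               # lower-left entry 0: K fixes p = infinity

# Second fixed point p' of K (solving z = (K11 z + K12)/K22) and the axis formula.
p_prime = K[0][1] / (K[1][1] - K[0][0])
x_axis = (c**2 * alpha * beta + cp**2 * alpha * beta - 1) / (c**2 * beta + cp**2 * alpha)
assert abs(p_prime - x_axis) < 1e-9       # the axis is the vertical geodesic at x_axis
```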
The remarkable fact in this case is that the hyperbolic punctured torus is doubled, thanks to the other endpoint $p^{\prime }$ of the geodesic invariant under $K$ and to the points deduced from it, with $B(p^{\prime })=\alpha ^{\prime }$, $A(\alpha ^{\prime })=B(\beta ^{\prime })=s^{\prime }$, $A(p^{\prime })=\beta ^{\prime }$. This second torus is distinct from the preceding one. As $c^2\beta +c^{\prime 2}\alpha $ tends to $0$, one sees that $p^{\prime }$ tends to $p=\infty $, $s^{\prime }$ to $s=0$, $\alpha ^{\prime }$ to $\alpha $, and $\beta ^{\prime }$ to $\beta $. The punctured torus becomes parabolic, but double, in the limit. This illustrates the phenomenon of the Schottky double of a non-compact Riemann surface ([@Cohn5] p. 235).

### A three-parameter representation

Having parametrized all conformal punctured tori with a geometric object $\mathcal{V}$ of dimension $4$, we now show how to find other parametrizations of all these tori by a geometric object different from $\mathcal{V}$.
We single out the numbers: $$\lambda =c^{\prime }\alpha ,\;\;\mu =c\beta ,$$ $$\theta _\alpha =-c^{\prime 2}\alpha =-c^{\prime }\lambda >0,\;\;\theta _\beta =c^2\beta =c\mu >0,\;\;\Theta =(\theta _\alpha /\theta _\beta )>0,$$ $$M=tr(AB)^2-tr([A,B])-2=tr(AB)^2-\sigma ,$$ $$M_2=tr(A)tr(AB)-tr(B)+\Theta tr(B),$$ $$M_1=tr(B)tr(AB)-tr(A)+\Theta ^{-1}tr(A).$$ One obtains: $$\lambda =(M_2/M),\;\;\mu =(M_1/M).$$ The expressions for $tr(A)$, $tr(B)$, $tr(AB)$ now give: $$M^2+M_1^2+\Theta ^{-1}M_2^2=tr(A)MM_1,$$ $$M^2+\Theta M_1^2+M_2^2=tr(B)MM_2,$$ $$M^2+\Theta M_1^2+\Theta ^{-1}M_2^2=tr(AB)M_1M_2.$$ The three preceding relations have a common solution in $\Theta $ provided that $$M^2+M_1^2+M_2^2=tr(A)MM_1+tr(B)MM_2-tr(AB)M_1M_2.$$ When the value of $\Theta $ differs from $1$, one finds, with $\varepsilon =\pm 1$: $$\mu =\frac{-(2tr(B)tr(AB)-tr(A)\sigma )+\varepsilon tr(A)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)},$$ $$\lambda =\frac{-(2tr(A)tr(AB)-tr(B)\sigma )-\varepsilon tr(B)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)}.$$ These expressions make sense only if the argument of the radicals is non-negative. Since by construction $\lambda $ and $\mu $ are real and do exist, this condition is guaranteed. The parabolic case, where $tr([A,B])=-2$, stands out by making the term $\sigma ^2-4\sigma $ vanish, which simplifies the expressions for $\lambda $ and $\mu $. Returning to the expressions of the matrices $A$ and $B$, one observes that they are entirely determined by the three real numbers $tr(A)$, $tr(B)$, $tr(AB)$, up to one real parameter, however, which may be taken here to be $\theta _\alpha $. The natural question is therefore what links pairs of matrices $(A,B)$ corresponding to the same traces but to distinct values of $\theta _\alpha $. Consider then two such pairs $(A,B)$ and $(A^{\prime },B^{\prime })$.
With $s=0$ and $p=\infty $, one has by construction: $$\alpha =-\frac{\lambda ^2}{\theta _\alpha },\;\;\beta =\frac{\mu ^2\Theta }{\theta _\alpha }.$$ This gives the cross-ratio $$\lbrack \alpha ,\beta ;s,p]=-\frac 1\Theta (\frac \lambda \mu )^2.$$ The same reasoning applied to $(A^{\prime },B^{\prime })$ leads to the same cross-ratio. On the projective line forming the boundary of $\mathcal{H}$, one thus exhibits two quadruples of points with the same cross-ratio. By a known result ([@Frenkel] p. 248, [@Rees] p. 76), there exists a homography $h$ of $PGL(2,\mathbb{R})=GL(2,\mathbb{R})/(\mathbb{R}\backslash \{0\})$ exchanging them. It yields a conformal transformation of $\mathcal{H}$ that allows one to restrict to $\theta _\alpha =1$ and to state: Up to conjugation by a matrix of $SL(2,\mathbb{R})$, one has the following three-parameter parametric representation for the matrices $A$ and $B$ of the punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ $$A=\left[ \begin{array}{cc} \mu & (\mu \lambda ^2) \\ (1/\Theta \mu ) & ((1+(\lambda ^2/\Theta ))/\mu ) \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2\Theta ) \\ -(1/\lambda ) & ((1+\Theta \mu ^2)/\lambda ) \end{array} \right] .$$ The data of the three parameters $\lambda \neq 0$, $\mu \neq 0$, $\Theta >0$ determines the matrices $A$, $B$, and $AB$, and hence their traces, through the expressions $$tr(A)=((1+(\lambda ^2/\Theta )+\mu ^2)/\mu ),$$ $$tr(B)=((1+\lambda ^2+\Theta \mu ^2)/\lambda ),$$ $$tr(AB)=((1+(\lambda ^2/\Theta )+\Theta \mu ^2)/\lambda \mu ).$$ These values satisfy the additional conditions $$1+\lambda ^2+\mu ^2=tr(A)\mu +tr(B)\lambda -tr(AB)\lambda \mu ,$$ $$\alpha =-\lambda ^2,\;\;s=0,\;\;\beta =\mu ^2\Theta ,\;\;p=\infty .$$ Conversely, the three parameters appearing in these matrices depend only on the three values $tr(A)$, $tr(B)$, $tr(AB)$ and on a sign, through the expressions $$\lambda =\frac{-(2tr(A)tr(AB)-tr(B)\sigma )-\varepsilon tr(B)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)}\neq 0,$$ $$\mu =\frac{-(2tr(B)tr(AB)-tr(A)\sigma )+\varepsilon tr(A)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)}\neq 0,$$ $$\Theta =\frac{2tr(A)^2+2tr(B)^2-tr(B)^2\sigma +\varepsilon tr(B)^2\sqrt{\sigma ^2-4\sigma }}{2tr(A)^2+2tr(B)^2-tr(A)^2\sigma -\varepsilon tr(A)^2\sqrt{\sigma ^2-4\sigma }}>0,$$ where $$\varepsilon =\pm 1,\;\;\sigma =tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)\leq 0.$$ Moreover, the following three properties are equivalent: $$tr(L)=-2,\;\;\sigma =0,\;\;\Theta =1.$$ The expressions given for $A$ and $B$ in this proposition use only three parameters because $\theta _\alpha $ has been hidden by working up to a conformal transformation of $\mathcal{H}$. The remaining parameters define a geometric object $\mathcal{V}^{\prime }$, real of dimension $3$. It indexes, by parameters $(\lambda ,\mu ,\Theta )\in \mathcal{V}^{\prime }$, the corresponding pairs $(A,B)$, and hence the various possibilities for classes of conformal punctured tori. The space $\mathcal{V}^{\prime }$ is defined by the constraints $\lambda \neq 0$, $\mu \neq 0$, $\Theta >0$. Working with $\Gamma =Pgp(A,B)$, one may restrict to $\lambda >0,\;\mu >0,\;\Theta >0$.

### Another representation, with four parameters

In the preceding result, an asymmetry was introduced in the roles played by $\theta _\alpha $ and $\theta _\beta $.
Restoring the symmetry between $\theta _\alpha $ and $\theta _\beta $, we obtained: Up to conjugation by a matrix of $SL(2,\mathbb{R})$, one has the following four-parameter parametric representation for the matrices $A$ and $B$ of the conformal punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ $$A=\left[ \begin{array}{cc} \mu & (\mu \lambda ^2/\Theta _\alpha ) \\ (\Theta _\beta /\mu ) & ((1+(\Theta _\beta /\Theta _\alpha )\lambda ^2)/\mu ) \end{array} \right] ,$$ $$B=\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2/\Theta _\beta ) \\ -(\Theta _\alpha /\lambda ) & ((1+(\Theta _\alpha /\Theta _\beta )\mu ^2)/\lambda ) \end{array} \right] .$$ The parameters appearing in these expressions depend only on the three values $tr(A)$, $tr(B)$, $tr(AB)$ and on a value $\varepsilon =\pm 1$: $$\sigma =tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)\leq 0,$$ $$\lambda =\frac{-(2tr(A)tr(AB)-tr(B)\sigma )-\varepsilon tr(B)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)}\neq 0,$$ $$\mu =\frac{-(2tr(B)tr(AB)-tr(A)\sigma )+\varepsilon tr(A)\sqrt{\sigma ^2-4\sigma }}{2(\sigma -tr(AB)^2)}\neq 0,$$ $$\Theta _\alpha =2tr(A)^2+2tr(B)^2-tr(B)^2\sigma +\varepsilon tr(B)^2\sqrt{\sigma ^2-4\sigma }>0,$$ $$\Theta _\beta =2tr(A)^2+2tr(B)^2-tr(A)^2\sigma -\varepsilon tr(A)^2\sqrt{\sigma ^2-4\sigma }>0.$$ $$\alpha =-(\lambda ^2/\Theta _\alpha ),\;\;s=0,\;\;\beta =(\mu ^2/\Theta _\beta ),\;\;p=\infty .$$ Up to a conjugation defined by a dilation of ratio $\tau ^2$ such that $$\theta _\alpha =\Theta _\alpha \tau ^2,\;\;\theta _\beta =\Theta _\beta \tau ^2,$$ one recovers the earlier parametric expressions $$A=\left[ \begin{array}{cc} \mu & (\mu \lambda ^2/\theta _\alpha ) \\ (\theta _\beta /\mu ) & ((1+(\theta _\beta /\theta _\alpha )\lambda ^2)/\mu ) \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2/\theta _\beta ) \\ -(\theta _\alpha /\lambda ) & ((1+(\theta _\alpha /\theta _\beta )\mu ^2)/\lambda ) \end{array} \right] .$$ Up to a conjugation defined by a dilation of ratio $\Theta _\alpha $, one also recovers the expressions already seen with the parameter $\Theta =(\Theta _\alpha /\Theta _\beta )$. This proposition can be interpreted through a new geometric object $\mathcal{V}^{\prime \prime }$ of dimension $4$, which parametrizes all conformal punctured tori in a new way. Here one uses quadruples $(tr(B),tr(A),tr(BA),\varepsilon )\in \mathcal{V}^{\prime \prime }$; the object $\mathcal{V}^{\prime \prime }$ is defined by $\varepsilon =\pm 1$ and the condition $$tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)\leq 0.$$ The boundary of $\mathcal{V}^{\prime \prime }$, corresponding to the condition $\sigma =0$, yields only parabolic conformal punctured tori. In that case, moreover, the punctured tori associated with $\varepsilon =1$ and $\varepsilon =-1$ are identical. This boundary can therefore be parametrized, forgetting $\varepsilon $, solely by triples $(tr(B),tr(A),tr(BA))$ satisfying the classical Markoff equation: $$tr(A)^2+tr(B)^2+tr(AB)^2=tr(A)tr(B)tr(AB).$$ It was shown in [@Perrine1b] that in this case the group $G=gp(A,B)$ is a free group on two generators $\mathbf{F}_2$ whenever it is contained in $GL(2,\mathbb{Z})$. It determines a Fricke group $Pgp(A,B)$ generated by the classes of $A$ and $B$. The sequel shows how the Markoff equation in fact parametrizes all Fricke groups by points of the boundary of $\mathcal{V}^{\prime \prime }$. This amounts to saying that, for a conformal punctured torus, the properties of being Fricke and of being parabolic are equivalent [@FrickeKlein] [@Rosenberger]. We conjecture that the groups corresponding to hyperbolic conformal punctured tori have, outside the boundary of $\mathcal{V}^{\prime \prime }$, two generators $A$ and $B$ that are related. Constructing the relations linking them is an essential problem whose consequences could be important. Below we give an example where we managed to do so.
This example illustrates our conjecture.

### The role of anti-conformal transformations

In the preceding proposition, one would like, in all cases, to restrict to a parametrization of punctured tori by triples $(tr(B),tr(A),tr(BA))$, thus also dispensing with the term $\varepsilon $ for hyperbolic conformal punctured tori. This is possible if one works only up to isometry of $\mathcal{H}$, that is, by letting its anti-conformal transformations act as well. To see this, it suffices to explain what distinguishes the two cases $\varepsilon =+1$ and $\varepsilon =-1$ corresponding to the same triple $(tr(A),tr(B),tr(AB))$. This led to the following statement: For the two pairs of matrices $(A^{+},B^{+})$ and $(B^{-},A^{-})$ corresponding to the same triple of traces with $\sigma <0$ and, respectively, to $\varepsilon =1$ and $\varepsilon =-1$, there exists a matrix $D\in S^{*}L(2,\mathbb{R})$ such that $$B^{-}=DA^{+}D^{-1},\;\;A^{-}=DB^{+}D^{-1},\;\;\det (D)=-1.$$ The matrix $D$ defines an anti-conformal transformation $\psi (D)=h_{+}^{-}$ of the Poincaré half-plane $\mathcal{H}$ into itself, which transforms the geodesics as follows (reversing orientations): $$\alpha ^{+}p\rightarrow p\beta ^{-},\;\;\alpha ^{+}s\rightarrow s\beta ^{-},\;\;s\beta ^{+}\rightarrow \alpha ^{-}s,\;\;p\beta ^{+}\rightarrow \alpha ^{-}p;$$ $$h_{+}^{-}(\alpha ^{+})=\beta ^{-},\;\;h_{+}^{-}(\beta ^{+})=\alpha ^{-},\;\;h_{+}^{-}(s)=s,\;\;h_{+}^{-}(p)=p;$$ $$h_{+}^{-}(z)=(\frac{\alpha ^{-}}{\beta ^{+}})\overline{z}=(\frac{\beta ^{-}}{\alpha ^{+}})\overline{z}.$$ For the various parameters involved, it gives $$(tr(A^{+}),tr(B^{+}),tr(A^{+}B^{+}))=(tr(B^{-}),tr(A^{-}),tr(A^{-}B^{-})),$$ $$(\lambda ^{+},\mu ^{+},\Theta ^{+})=(\mu ^{-},\lambda ^{-},(1/\Theta ^{-})),$$ $$[\alpha ^{+},\beta ^{+};s,p]=[\beta ^{-},\alpha ^{-};s,p],$$ $$\alpha ^{-}=-(\Theta _\beta ^{+}/\Theta _\alpha ^{-})\beta ^{+}=-(\Theta _\alpha ^{+}/\Theta _\beta ^{-})\beta ^{+},$$ $$\beta ^{-}=-(\Theta _\alpha ^{+}/\Theta _\beta ^{-})\alpha ^{+}=-(\Theta _\beta ^{+}/\Theta _\alpha ^{-})\alpha ^{+}.$$ This result allows one to restrict to the case $\varepsilon =1$ in ordinary computations about punctured tori, when working up to isometry of $\mathcal{H}$. It is interesting to ask what the preceding proposition yields as $\sigma $ tends to $0$. In the limit one finds a parabolic conformal punctured torus with $s=0$ and $p=\infty $. This explains how every parabolic conformal punctured torus is anti-conformally equivalent to itself. In the other cases, the last proposition corresponds to the observations made earlier on the doubling of hyperbolic punctured tori (and the Schottky doubles of a non-compact Riemann surface, [@Cohn5] p. 235). An anti-conformal transformation links the two punctured tori obtained.

Geometric meaning of our equations
----------------------------------

### The cone attached to a punctured torus

Returning to the numbers $M$, $M_1$, $M_2$ introduced earlier, we obtained: Let $A$ and $B$ be the matrices of an arbitrary conformal punctured torus $\mathcal{T}_\Gamma ^{\bullet }$.
With the known expressions, where $\varepsilon =\pm 1$, $$\sigma =tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB),$$ $$\Theta =\frac{2tr(A)^2+2tr(B)^2-tr(B)^2\sigma +\varepsilon tr(B)^2\sqrt{\sigma ^2-4\sigma }}{2tr(A)^2+2tr(B)^2-tr(A)^2\sigma -\varepsilon tr(A)^2\sqrt{\sigma ^2-4\sigma }},$$ $$M_1=tr(B)tr(AB)-tr(A)+\Theta ^{-1}tr(A),$$ $$M_2=tr(A)tr(AB)-tr(B)+\Theta tr(B),$$ $$M=tr(AB)^2-\sigma ,$$ one has the following relation $(FR^{*})$: $$M^2+M_1^2+M_2^2=tr(A)MM_1+tr(B)MM_2-tr(AB)M_1M_2.$$ The equation $(FR^{*})$ defines a quadric in $M$, $M_1$, $M_2$, which is a cone in these parameters, given directly by the matrix $$\left[ \begin{array}{ccc} 1 & -\dfrac{tr(A)}2 & -\dfrac{tr(B)}2 \\ -\dfrac{tr(A)}2 & 1 & \dfrac{tr(AB)}2 \\ -\dfrac{tr(B)}2 & \dfrac{tr(AB)}2 & 1 \end{array} \right] .$$ We call it the cone $(FR^{*})$ associated with the pair of generators $(A,B)$ of the group $gp(A,B)$ of the punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ under consideration. The determinant of its defining matrix equals $$1-\frac 14(tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB))=\frac{4-\sigma }4\geq 1.$$ For the associated conformal punctured torus, the relation $(FR^{*})$ can be regarded as a good generalization of the classical Markoff equation [@Markoff]. Indeed, if $\Theta =1$, that is $\sigma =0$, it simplifies by a factor $tr(AB)^2$ to $$tr(A)^2+tr(B)^2+tr(AB)^2=tr(A)tr(B)tr(AB).$$

### Link with our equations $M^{s_1s_2}(b,\partial K,u)$

It turned out that the equation $(FR^{*})$ corresponds to the equations studied in what precedes.

#### An equivalent equation

In [@Perrine6] we exhibited an equation equivalent to $M^{s_1s_2}(b,\partial K,u)$.
We call this new equation $M(b,r,s,t)$: $$x^2+y^2+z^2=(b+1)xyz+ryz+szx+txy,$$ where $$r=\varepsilon _1K_1-\varepsilon _2K_2,\;\;s=-(\varepsilon _1k_1+k_{12}),\;\;t=\varepsilon _2k_2+k_{21}.$$ The link with the equation $M^{s_1s_2}(b,\partial K,u)$ is made through the two equalities $$\varepsilon _1m_2=K_1m_1-k_1m,\;\;\varepsilon _2m_1=k_2m-K_2m_2.$$

#### Exhibiting the punctured torus and the cone

In the most general case, to establish the equation $M^{s_1s_2}(b,\partial K,u)$, we saw that one could use a Fricke formula to compute the trace of the commutator $[A_b,B_c]=A_bB_cA_b^{-1}B_c^{-1}$, where $$A_b=M_{(\lhd X_2^{*},b)}=\left[ \begin{array}{cc} bm_2+k_{21} & m_2 \\ bk_2+l_2 & k_2 \end{array} \right] ,$$ $$B_c=M_{(X_1^{*}\rhd ,c)}=\left[ \begin{array}{cc} (c+1)m_1-k_1 & m_1 \\ (c+1)(m_1-k_{12})-(k_1-l_1) & m_1-k_{12} \end{array} \right] .$$ This defines $t$, $s$, $r$ by a simple trace computation. In order to work with matrices belonging to $SL(2,\mathbb{R})$, we assume that $$\det (A_b)=\det (B_c)=\det (A_bB_c)=1=\varepsilon _1=\varepsilon _2.$$ The equivalent equation $M(b,r,s,t)$ then takes the form: $$m^2+m_1^2+m_2^2=tr(A_b)mm_1+tr(B_c)mm_2-tr(A_bB_c)m_1m_2.$$ One recognizes the equation $(FR^{*})$ of the cone associated with a conformal punctured torus. This torus is deduced from the group $gp(A_b,B_c)$ with: $$L(s)=A_bB_cA_b^{-1}B_c^{-1}(s)=s,\;\;\alpha =A_b^{-1}(s),\;\;p=B_c^{-1}(\alpha ),\;\;\beta =A_b(p).$$ In the parabolic case, these conditions yield a unique punctured torus; this is the case of the classical Markoff theory. In the hyperbolic case, which is the most frequent one, two punctured tori are identified in this way.

#### An example of a hyperbolic punctured torus

We worked out a hyperbolic example corresponding to the equation $M^{++}(3,0,1)$.
The matrices to consider lie in $SL(2,\mathbb{Z})$: $$A=\left[ \begin{array}{cc} 11 & 3 \\ 7 & 2 \end{array} \right] =M_{(1,1,1,3)},\;\;B=\left[ \begin{array}{cc} 37 & 11 \\ 10 & 3 \end{array} \right] =M_{(3,1,2,3)}.$$ The two conformal punctured tori can be computed. One is given by the points $$s_{+}=\frac{4363+\sqrt{3122285}}{1658}=[3,\underline{1,2,3,3,3,3,2,1}]\approx 3.697225,$$ $$\beta _{+}=\frac{1477+\sqrt{3122285}}{982}=[\underline{3,3,3,2,1,1,2,3}]\approx 3.303461,$$ $$p_{+}=\frac{-44517-\sqrt{3122285}}{155578}=[-1,1,2,2,1,\underline{3,3,2,1,1,2,3,3}]\approx -2.297497,$$ $$\alpha _{+}=\frac{1477-\sqrt{3122285}}{982}=[-1,1,2,\underline{2,1,1,2,3,3,3,3}]\approx -0.295315.$$ The second torus is given by the points $$s_{-}=\frac{4363-\sqrt{3122285}}{1658}=[1,1,1,\underline{3,3,3,3,2,1,1,2}]\approx 1.565743,$$ $$\beta _{-}=\frac{1477-\sqrt{3122285}}{982}=[-1,1,2,\underline{2,1,1,2,3,3,3,3}]\approx -0.295315,$$ $$p_{-}=\frac{-44517+\sqrt{3122285}}{155578}=[-1,1,2,1,1,1,\underline{3,3,3,2,1,1,2,3}]\approx -0.274782,$$ $$\alpha _{-}=\frac{1477+\sqrt{3122285}}{982}=[\underline{3,3,3,2,1,1,2,3}]\approx 3.303461.$$ A remarkable point is that in this case $$\beta _{+}=\alpha _{-},\;\;\beta _{-}=\alpha _{+},$$ whence two matrices $U=B^{-1}A=-A^{-1}B$ and $V=BA^{-1}=-AB^{-1}$ such that $$U^2=V^2=-\mathbf{1}_2,\;\;A=-VB=BU,\;\;B=VA=-AU,$$ $$\beta _{+}=U(\beta _{-})=V(\beta _{-}).$$ In the group $\Gamma =gp(\psi (A),\psi (B))$ one thus finds relations between $\psi (A)$ and $\psi (B)$. They establish that this group is not free. It is therefore not a Fricke group, even though by construction the Riemann surface $\mathcal{H}/\Gamma $ is homeomorphic to a punctured torus.
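The relations above are easy to confirm by direct integer computation; a short sketch:

```python
def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(M):  # inverse of a determinant-1 integer matrix
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

def neg(M):
    return tuple(tuple(-x for x in r) for r in M)

A = ((11, 3), (7, 2))
B = ((37, 11), (10, 3))
I = ((1, 0), (0, 1))

U = mul(inv(B), A)
V = mul(B, inv(A))
assert U == neg(mul(inv(A), B))                       # U = B^{-1}A = -A^{-1}B
assert mul(U, U) == neg(I) and mul(V, V) == neg(I)    # U^2 = V^2 = -1
assert A == mul(B, U) and B == mul(V, A)              # A = BU, B = VA

# Commutator L = ABA^{-1}B^{-1}: its trace is sigma - 2 = 1767.
L = mul(mul(A, B), mul(inv(A), inv(B)))
assert L[0][0] + L[1][1] == 1767
```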
The fixed points of $A$ and $B$ are respectively $$a^{+}=-[\underline{\lhd X_2^{*},a}]=-[\underline{1,1,1,3}]=\frac{9+\sqrt{165}}{14}\approx 1.5604,$$ $$a^{-}=-[0,\underline{X_2\rhd ,a}]=-[0,\underline{1,1,1,3}]=\frac{9-\sqrt{165}}6\approx -0.6409,$$ $$b^{+}=-[\underline{X_1^{*}\rhd ,a}]=-[\underline{3,1,2,3}]=\frac{34+\sqrt{1586}}{20}\approx 3.6912,$$ $$b^{-}=-[0,\underline{\lhd X_1,a}]=-[0,\underline{2,1,3,3}]=\frac{32-\sqrt{1586}}{22}\approx -0.3557.$$ Their respective axes therefore intersect. On the other hand, a simple computation shows that $s_{+}$ and $s_{-}$ are real fixed points of the matrix of trace $\sigma -2=1767$ $$L=ABA^{-1}B^{-1}=\left[ \begin{array}{cc} -1298 & 4799 \\ -829 & 3065 \end{array} \right] \in SL(2,\mathbb{Z}).$$ This example is interesting because it contradicts a theorem stated by R.C. Lyndon and J.L. Ullman [@LyndonUllman] (p. 164), which in this case would allow one to conclude that the group $gp(\psi (A),\psi (B))$ is free. The observation that this article presents at least two difficulties had already been made in [@Purzitsky] (pp. 213-214); it is confirmed here. The equation $(FR^{*})$ of the cone is local and changes at each point $(m,m_1,m_2)$ of the cubic surface $M^{++}(3,0,1)$. At the point $(130,11,3)$ it reads: $$x^2+y^2+z^2=tr(A)xy+tr(B)zx-tr(AB)yz=13xy+40xz-520yz.$$ Equivalent, with $15m_2-4m_1=1$, to $M^{++}(3,0,1)$, it also yields the equation $M(b,r,s,t)$, which reads: $$x^2+y^2+z^2=4xyz-15xz+4xy.$$

#### A direction for further work

The preceding example makes it possible to understand the link between the cubic surface and the Fuchsian group $\Gamma =gp(\psi (A),\psi (B))$. To carry the reflection further, we found indications in [@Shafarevich] (Volume 1, Chapter III 1.6, p. 164). With the parametric representation generalizing Fricke's, constructed in the preceding chapter, one deduces a regular map from the surface to the projective plane and, above all, a non-degenerate pencil of conics.
One can then bring into play a free abelian group [@Shafarevich] (Volume 1, Chapter III 1.6, Theorem 4), from which one may hope to reconstruct the $2\times 2$ matrices under consideration. In such an interpretation, which remains to be worked out in full detail, the divisor class group of the surface is realized at each integer point $(m,m_1,m_2)$ by using a map $s$ of the projective line $\mathbf{P}^1$ into the surface, defining the curve $S=s(\mathbf{P}^1)$, together with a non-singular fibre $F$, from which the group $gp(A,B)$ is deduced. This opens a new direction for deepening the situation under consideration, by attaching it to an important problem of algebraic geometry.

A complete theory for parabolic punctured tori
----------------------------------------------

In the case of parabolic punctured tori, the number of parameters can be reduced further. We saw earlier that this case is that of Fricke groups and that there is a direct link with the classical Markoff equation. This makes it possible to develop a complete reduction theory for these punctured tori [@Rosenberger]. It generalizes what was built in [@Perrine1b] for the classical Markoff theory, and in Chapter 2 for the resolution of our equations by infinite descent.

### Two-parameter representations

Assuming that $gp(A,B)$ is a principal group, one may take $\lambda $ and $\mu $ positive. Two values then suffice to define the matrices $A$ and $B$ in the parabolic case.
We have thus stated: For a parabolic conformal punctured torus $\mathcal{T}_\Gamma ^{\bullet }$, up to conjugation by a matrix of $SL(2,\mathbb{R})$, the matrices $A$ and $B$ of the principal group of $\mathcal{T}_\Gamma ^{\bullet }$ admit the following parametric representation: $$A=\left[ \begin{array}{cc} \mu & (\mu \lambda ^2/\Theta _\alpha ) \\ (\Theta _\alpha /\mu ) & ((1+\lambda ^2)/\mu ) \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2/\Theta _\alpha ) \\ -(\Theta _\alpha /\lambda ) & ((1+\mu ^2)/\lambda ) \end{array} \right] ,$$ with $$\Theta _\alpha =2(tr(A)^2+tr(B)^2),\;\;\alpha =-(\lambda ^2/\Theta _\alpha ),\;\;s=0,\;\;\beta =(\mu ^2/\Theta _\alpha ),\;\;p=\infty .$$ This yields the following parametric representation of the traces $$tr(A)=\frac{1+\lambda ^2+\mu ^2}\mu ,\;\;tr(B)=\frac{1+\lambda ^2+\mu ^2}\lambda ,\;\;tr(AB)=\frac{1+\lambda ^2+\mu ^2}{\lambda \mu },$$ where $$\lambda =(tr(A)/tr(AB))>0,\;\;\mu =(tr(B)/tr(AB))>0.$$ This case is characterized by the Fricke relation $$tr(A)^2+tr(B)^2+tr(AB)^2=tr(A)tr(B)tr(AB).$$ This relation means that the preceding parametric representation of $\mathcal{T}_\Gamma ^{\bullet }$ depends on the two parameters $\lambda $ and $\mu $. Up to a dilation of ratio $\tau =\Theta _\alpha ^{-1}$, the parameter $\Theta _\alpha $ can be removed from the preceding expressions by working up to conformal transformation. The two matrices to consider then take the form $$A=\left[ \begin{array}{cc} \mu & \mu \lambda ^2 \\ (1/\mu ) & ((1+\lambda ^2)/\mu ) \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} \lambda & -\lambda \mu ^2 \\ -(1/\lambda ) & ((1+\mu ^2)/\lambda ) \end{array} \right] .$$ The group $Pgp(A,B)$ they define is a Fricke group, and $gp(A,B)$ is a free group on two generators, isomorphic to $\mathbf{F}_2$.
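As a quick numerical check of this parametrization (an illustration of ours, with arbitrary test values of $\lambda, \mu$), one can verify that the normalized matrices have determinant 1, that the traces are as stated, and that the Fricke relation holds:

```python
import math

def matrices(lam, mu):
    """Normalized parabolic pair (theta eliminated), as in the text."""
    A = [[mu, mu*lam**2], [1/mu, (1 + lam**2)/mu]]
    B = [[lam, -lam*mu**2], [-1/lam, (1 + mu**2)/lam]]
    return A, B

def det(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

lam, mu = 1.3, 0.7           # arbitrary positive test values
A, B = matrices(lam, mu)
s = 1 + lam**2 + mu**2

assert math.isclose(det(A), 1) and math.isclose(det(B), 1)
assert math.isclose(tr(A), s/mu)
assert math.isclose(tr(B), s/lam)
assert math.isclose(tr(mul(A, B)), s/(lam*mu))

x, y, z = tr(A), tr(B), tr(mul(A, B))
# Fricke relation characterizing the parabolic case:
assert math.isclose(x**2 + y**2 + z**2, x*y*z)
```

The common numerator $1+\lambda^2+\mu^2$ of the three traces is what makes the Fricke relation automatic.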
This proposition can be interpreted by means of a geometric object $\mathcal{V}^{\prime \prime \prime }$, a Riemann surface with equation $$x^2+y^2+z^2=xyz\text{.}$$ Each point $(x,y,z)=(tr(B),tr(A),tr(AB))$ of $\mathcal{V}^{\prime \prime \prime }$ defines a pair $(\lambda ,\mu )$ yielding a parabolic conformal punctured torus $\mathcal{T}_{gp(\psi (A),\psi (B))}^{\bullet }$. The parametrization of the matrices by $\lambda $ and $\mu $ is due to Fricke [@Fricke] [@Cohn4]. Moreover, all parabolic punctured tori are obtained in this way from the pairs $(\lambda ,\mu )\in \mathbb{R}^2\backslash \{(0,0)\}$. The genuinely new statement in this proposition is the assertion that the Fuchsian group $gp(\psi (A),\psi (B))=Pgp(A,B)$ is always a Fricke group. To prove it one uses Theorem 8 (p. 221) of [@Purzitsky] with $$tr(A)>2,\;\;tr(B)>2,\;\;tr(L)=tr(ABA^{-1}B^{-1})=-2.$$ One can compute the fixed points $a^{+}$, $a^{-}$, $b^{+}$, $b^{-}$ of $A$ and $B$ as functions of $\lambda $, $\mu $, and check that the cross-ratio $[a^{+},a^{-};b^{+},b^{-}]$ is negative. Having thus verified all the hypotheses of the cited theorem, one applies it to conclude that the group $gp(A,B)$ is discrete and free on two generators, just like $Pgp(A,B)$. Since by construction the Riemann surface $\mathcal{H}/Pgp(A,B)$ is homeomorphic to a once-punctured torus, it follows that $Pgp(A,B)$ is a Fricke group. As the converse follows easily from [@Perrine1b] by showing that the traces are linked by a classical Markoff equation, this property is indeed characteristic of the parabolic case. Moreover, a hyperbolic example where this property fails was given above. In other words, one obtains an equivalence between the category of Fricke groups and that of parabolic conformal punctured tori.
### Examples of parabolic punctured tori

Various examples of Fricke groups associated with parabolic conformal punctured tori are well known.

$\bullet $ The link with the work of A. Schmidt [@Schmidt] introduces $$\mathbf{A}_0=tr(AB),\;\;\mathbf{B}_0=tr(A),\;\;\mathbf{C}_0=tr(B),\;\;k=(1+\lambda ^2+\mu ^2)/\theta ,$$ $$T_0=BA,\;\;U_0=A,\;\;V_0=B^{-1}.$$ This yields a new parametric representation (see [@Perrine9]) specifying how $A$ and $B$ act on the sides of the fundamental domain $p\alpha s\beta $: $$A=\pm \left[ \begin{array}{cc} \sqrt{\beta \theta } & -\alpha \sqrt{\beta \theta } \\ \sqrt{\theta /\beta } & ((1-\alpha \theta )/\sqrt{\beta \theta }) \end{array} \right] ,\;B=\pm \left[ \begin{array}{cc} \sqrt{-\alpha \theta } & -\beta \sqrt{-\alpha \theta } \\ -\sqrt{(\theta /-\alpha )} & ((1-\beta \theta )/\sqrt{-\alpha \theta }) \end{array} \right] .$$ The work of A. Schmidt [@Schmidt] introduces a notion of extended Fricke group, in which a Fricke group sits as a subgroup of index 2. Such an extended group is nothing but a group isomorphic to the triangle group $\mathbf{T}_3$, inside which the index-2 condition singles out the Fricke group uniquely. The extended group corresponds to a thrice-punctured sphere, of which the punctured torus is a two-sheeted covering. These representations of $\mathbf{F}_2$ and $\mathbf{T}_3$ can be extended to a representation of $GL(2,\mathbb{Z})$.
$\bullet $ Almost every matrix $A\in SL(2,\mathbb{R})$ admits a matrix $B$ with which $gp(A,B)$ determines a parabolic conformal punctured torus: Consider a matrix with real coefficients $$A=\pm \left[ \begin{array}{cc} \mathbf{a} & \mathbf{b} \\ \mathbf{c} & \mathbf{d} \end{array} \right] \in SL(2,\mathbb{R}),\;\text{where }\mathbf{bc}>0,\;\;\mathbf{ba}>0,\;\;\mathbf{ac}>0;$$ then $A$ determines a parabolic conformal punctured torus with $$B=\pm \left[ \begin{array}{cc} \sqrt{\mathbf{bc}} & -\mathbf{a}\sqrt{\dfrac{\mathbf{b}}{\mathbf{c}}} \\ -\mathbf{a}\sqrt{\dfrac{\mathbf{c}}{\mathbf{b}}} & \dfrac{(1+\mathbf{a}^2)}{\sqrt{\mathbf{bc}}} \end{array} \right] \in SL(2,\mathbb{R}).$$ The group $gp(A,B)$ is free on two generators. This proposition yields the classical examples [@Cohn1] [@Schmidt] [@SeriesHaas]:

$\bullet $ The Klein group is defined with $\lambda =1$, $\mu =\theta =2$. It is determined by $A$: $$A=\left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} 1 & -2 \\ -2 & 5 \end{array} \right] \text{.}$$

$\bullet $ The group of Markoff theory, which is in fact the free group $\mathbf{F}_2$, is defined with $\lambda =\mu =\theta =1$. It is determined by the single matrix $A_0$: $$A_0=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] ,\;\;B_0=\left[ \begin{array}{cc} 1 & -1 \\ -1 & 2 \end{array} \right] \text{.}$$ One can check that this case reduces to the previous one.

$\bullet $ The Hecke group is defined with $\lambda =\mu =\sqrt{2}/2$, $\theta =1$.
It is determined by: $$A=\left[ \begin{array}{cc} \sqrt{2}/2 & \sqrt{2}/4 \\ \sqrt{2} & 3\sqrt{2}/2 \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} \sqrt{2}/2 & -\sqrt{2}/4 \\ -\sqrt{2} & 3\sqrt{2}/2 \end{array} \right] \text{.}$$

$\bullet $ The group $G_\theta $ is generated by the following matrices, where $\theta >0$: $$A_\theta =\left[ \begin{array}{cc} \mu & (\mu \lambda ^2/\theta ) \\ (\theta /\mu ) & ((1+\lambda ^2)/\mu ) \end{array} \right] ,\;\;B_\theta =\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2/\theta ) \\ -(\theta /\lambda ) & ((1+\mu ^2)/\lambda ) \end{array} \right] .$$ It is conformally equivalent to the group $G_1$ obtained with $\theta =1$ via: $$D_\theta =\frac 1{\sqrt{\theta }}\left[ \begin{array}{cc} \theta & 0 \\ 0 & 1 \end{array} \right] ,\;\;A_1=D_\theta A_\theta D_\theta ^{-1},\;\;B_1=D_\theta B_\theta D_\theta ^{-1}.$$ From this one deduces the change-of-coordinates matrix from a group $G_\theta $ to any other group $G_{\theta ^{\prime }}$. If $\mathcal{T}_{\Gamma _\theta }^{\bullet }$ and $\mathcal{T}_{\Gamma _{\theta ^{\prime }}}^{\bullet }$ denote the associated conformal punctured tori, they are conformally equivalent when $\theta $ and $\theta ^{\prime }$ have the same sign, and anti-conformally equivalent otherwise.
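The construction of $B$ from $A$ in the proposition earlier in this subsection can be tested on the Klein example (a sketch; the helper names are ours): with $A=\left[\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right]$ the formula returns exactly $B=\left[\begin{smallmatrix}1&-2\\-2&5\end{smallmatrix}\right]$, and the commutator has trace $-2$, as required in the parabolic case.

```python
import math

def partner(a, b, c):
    """B built from A = [[a, b], [c, d]] as in the proposition
    (d is determined by det A = 1 and is not needed)."""
    return [[math.sqrt(b*c),       -a*math.sqrt(b/c)],
            [-a*math.sqrt(c/b),    (1 + a**2)/math.sqrt(b*c)]]

def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):   # determinant-1 case
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

A = [[2, 1], [1, 1]]                 # Klein torus generator
B = partner(2, 1, 1)
assert B == [[1.0, -2.0], [-2.0, 5.0]]

# The commutator is parabolic: tr(A B A^{-1} B^{-1}) = -2.
L = mul(mul(A, B), mul(inv(A), inv(B)))
assert math.isclose(L[0][0] + L[1][1], -2)
```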
### Classification of Fricke groups by trace triples

With a trace formalism analogous to that of [@Perrine1b], we found: Let $(A,B)$ and $(A^{\prime },B^{\prime })$ be the respective generating systems of the principal groups of two Fricke groups $\Gamma $ and $\Gamma ^{\prime }$ associated with parabolic conformal punctured tori; then the following properties are equivalent:

1/ The pairs $(A,B)$ and $(A^{\prime },B^{\prime })$ are equivalent under one and the same inner automorphism of $GL(2,\mathbb{R})$: $$A=DA^{\prime }D^{-1},\;\;B=DB^{\prime }D^{-1},\;\;\text{where }D\in GL(2,\mathbb{R}).$$

2/ The two following triples are equal: $$\Pi (A,B)=(tr(B^{-1}),tr(A),tr(B^{-1}A^{-1})),$$ $$\Pi (A^{\prime },B^{\prime })=(tr(B^{\prime -1}),tr(A^{\prime }),tr(B^{\prime -1}A^{\prime -1})).$$

3/ The pairs $(A,B)$ and $(A^{\prime },B^{\prime })$ define the same parameters $\lambda $, $\mu \in \mathbb{R}^{+}$: $$\lambda =(tr(A)/tr(AB))=(tr(A^{\prime })/tr(A^{\prime }B^{\prime })),$$ $$\mu =(tr(B)/tr(AB))=(tr(B^{\prime })/tr(A^{\prime }B^{\prime })).$$

Obviously $1/\Rightarrow 2/\Rightarrow 3/$. The delicate point is to establish the implication $3/\Rightarrow 1/$. This is done by a direct geometric method based on the comparison of cross-ratios. It yields a homography of the real projective line onto the boundary of $\mathcal{H}$, hence a matrix $D\in GL(2,\mathbb{R})$ associated with this homography. The matrix $D$ can be computed explicitly, and one checks that it satisfies condition 1/. This completes the proof by exhibiting the desired Möbius transformation of $\mathcal{H}$. Moreover, we could verify: Every conformal equivalence of a parabolic punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ to itself given by a conjugation in $GL(2,\mathbb{R})$ is equal to the identity.
### Reduction of parabolic punctured tori

Having classified the parabolic punctured tori by means of the conformal transformations of $\mathcal{H}$, we examined what can be done without changing the group, changing only the generating system $(A,B)$. This makes it possible to build a reduction theory in every Fricke group.

#### The involutions

The group $\Gamma =Pgp(A,B)$ is a Fricke group for every parabolic punctured torus $\mathcal{T}_\Gamma ^{\bullet }$, and the group $gp(A,B)$ is free on the two generators $A$ and $B$. We apply the involutive automorphisms of the group $gp(A,B)$ whose expressions come from the classical Markoff theory [@Perrine1b]: $$X_\phi :(A,B)\longrightarrow (A^{-1},ABA),$$ $$Y_\phi :(A,B)\longrightarrow (BAB,B^{-1}),$$ $$Z_\phi :(A,B)\longrightarrow (A^{-1},B).$$ Their action on the trace triple $(x,y,z)=(tr(B^{-1}),tr(A),tr(B^{-1}A^{-1}))$ is: $$\widetilde{X_\phi }:(x,y,z)\longrightarrow (yz-x,y,z),$$ $$\widetilde{Y_\phi }:(x,y,z)\longrightarrow (x,xz-y,z),$$ $$\widetilde{Z_\phi }:(x,y,z)\longrightarrow (x,y,xy-z).$$ These transformations leave the relation $x^2+y^2+z^2=xyz$ invariant.

#### Principal sheet and principal group

The last equation is that of a real surface with a double point $(0,0,0)$ and four sheets deduced from the principal sheet defined by the conditions $x>0,\;\;y>0,\;\;z>0$. The principal sheet is invariant under the transformations $\widetilde{X_\phi }$, $\widetilde{Y_\phi }$, $\widetilde{Z_\phi }$. One passes from one sheet to the others by obvious transformations, which need not leave the group $gp(A,B)$ invariant. Since we are working with a parabolic punctured torus, we use the two parameters $\lambda =(tr(A)/tr(AB))$ and $\mu =(tr(B)/tr(AB))$. On the principal sheet one has $\lambda >0$ and $\mu >0$, that is, values in the first quadrant of the real plane.
For the other pairs of matrices, whose parameters lie in one of the other quadrants, the corresponding parameters are written $\varepsilon _\lambda \lambda $ and $\varepsilon _\mu \mu $, with $\lambda >0$ and $\mu >0$, and $\varepsilon _\lambda $, $\varepsilon _\mu $ in $\{+1,-1\}$. These determine pairs of matrices that can be written $(\varepsilon _\mu A,\varepsilon _\lambda B)$. The groups $gp(\varepsilon _\mu A,\varepsilon _\lambda B)$ and $gp(A,B)$ may differ, but the associated groups of transformations are identical and determine the same Fricke group. All yield the same points $\alpha $, $s=0$, $\beta $, $p=\infty $. One may therefore restrict attention to the principal group $gp(A,B)$, with the conditions $\lambda >0$ and $\mu >0$ characterizing the principal sheet. The others are its conjugate groups. The lifting of a group $\Gamma \subset PSL(2,\mathbb{R})$ to a group $G\subset SL(2,\mathbb{R})$ whose image is $\Gamma $ is studied in [@Kra]. The group $\Gamma $ lifts to $G$ if and only if it has no element of order 2. In the parabolic case there is no difficulty.

#### The reduction

The reduction process transposes easily from the principal group to any conjugate group. On the principal group $gp(A,B)$ one algorithmically builds a sequence of the transformations $\widetilde{X_\phi }$, $\widetilde{Y_\phi }$, $\widetilde{Z_\phi }$ so as to reduce any generating system $(A,B)$. Suppose that this system, with the associated trace triple on the principal sheet, defines the four numbers $$m=\max (x,y,z)>0,$$ $$m_x=\max (yz-x,y,z)>0,$$ $$m_y=\max (x,xz-y,z)>0,$$ $$m_z=\max (x,y,xy-z)>0.$$ The triple $(x,y,z)$ is said to be non-reduced if and only if one of the numbers $m_x$, $m_y$, $m_z$ is strictly smaller than $m$.
One checks easily that for every non-reduced triple, two of the numbers $m_x$, $m_y$, $m_z$ are larger than $m$, the third being smaller than $m$. This allows one to choose a unique involution among $\widetilde{X_\phi }$, $\widetilde{Y_\phi }$, $\widetilde{Z_\phi }$, with which one builds a new triple $(x_1,y_1,z_1)$ such that the value $m_1=\max (x_1,y_1,z_1)$ is strictly smaller than $m$. One continues, repeating the procedure from this last triple, developing an infinite-descent process analogous to the one used to solve our equations. It is easy to check that the algorithm stops at a reduced triple. This gives: Every principal group of the Fricke group $\Gamma $ associated with a parabolic conformal punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ possesses a reduced generating system. The action of the involutions $\widetilde{X_\phi }$, $\widetilde{Y_\phi }$, $\widetilde{Z_\phi }$ translates on the parameters $\lambda $ and $\mu $ into involutions defining an action of $\mathbf{T}_3$ on the quadrant: $$\mathbf{X}_\phi :(\lambda ,\mu )\longrightarrow (\lambda ,\frac{1+\lambda ^2}\mu ),$$ $$\mathbf{Y}_\phi :(\lambda ,\mu )\longrightarrow (\frac{1+\mu ^2}\lambda ,\mu ),$$ $$\mathbf{Z}_\phi :(\lambda ,\mu )\longrightarrow (\frac \lambda {\lambda ^2+\mu ^2},\frac \mu {\lambda ^2+\mu ^2}).$$ This brings out an interesting tiling of a real quadrant by a curvilinear triangle, a tiling due to an action of the triangle group $\mathbf{T}_3$. The points of the quadrant corresponding to the principal sheet that are invariant under $\mathbf{X}_\phi $ lie on a hyperbola $H_X$ with equation $\mu ^2-\lambda ^2=1$. Those invariant under $\mathbf{Y}_\phi $ lie on the hyperbola $H_Y$ with equation $\lambda ^2-\mu ^2=1$. The points invariant under $\mathbf{Z}_\phi $ lie on the circle $H_Z$ with equation $\lambda ^2+\mu ^2=1$.
These three curves bound a curvilinear triangle with vertices $\mathbf{L}(1,0)$, $\mathbf{M}(0,1)$, $\mathbf{N}(\infty ,\infty )$, which is a fundamental domain for the action of the group $\mathbf{T}_3$ on the quadrant.

#### Super-reduction

Inside the curvilinear triangle $\mathbf{LMN}$ itself, one has the condition $$\mu ^2\leq 1+\lambda ^2.$$ But one may exchange the roles of the matrices $A$ and $B$ without changing the group, that is, swap $\lambda $ and $\mu $. One obtains $\lambda \leq \mu $ with the transformation $$P_1:(A,B)\longrightarrow (B,A).$$ One also obtains $1\leq \lambda $ with the transformation $$P_2:(A,B)\longrightarrow (A,B^{-1}A^{-1}).$$ A generating system $(A,B)$ of the principal group associated with a parabolic punctured torus $\mathcal{T}_\Gamma ^{\bullet }$ is called super-reduced if and only if the conditions $$1\leq \lambda \leq \mu ,\;\;\mu ^2\leq 1+\lambda ^2$$ hold. The foregoing allows us to state: Every principal group of the Fricke group $\Gamma $ associated with a parabolic conformal punctured torus $\mathcal{T}_{\Gamma} ^{\bullet }$ possesses a super-reduced generating system.

#### Example: the Klein and Hecke punctured tori

The algorithm can be illustrated on the known examples of Fricke groups [@Cohn1].

$\bullet $ Klein torus: This case was given with $\lambda =1$, $\mu =\theta =2$, which do not satisfy the super-reduction condition. The corresponding triple is $(x,y,z)=(6,3,3)$. It gives $m=6$, $m_x=3$, $m_y=15$, $m_z=15$. One thus identifies the transformation $\mathbf{X}_\phi $, which leads to the following matrices: $$A=\left[ \begin{array}{cc} 1 & -1 \\ -1 & 2 \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] .$$ One then has $(x,y,z)=(3,3,3)$ and $m=3<m_x=m_y=m_z=6$. This time we are in the curvilinear triangle $\mathbf{LMN}$ with the values $\lambda =\mu =1$.
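The descent just described is easy to implement; the following sketch (our own code, using the triple convention of the text) reproduces the reduction of the Klein triple $(6,3,3)$ to $(3,3,3)$ in one step.

```python
def reduce_triple(t):
    """Apply the involutions (x,y,z) -> (yz-x,y,z), (x,xz-y,z), (x,y,xy-z)
    as long as one of them strictly decreases the maximum of the triple."""
    x, y, z = t
    while True:
        m = max(x, y, z)
        candidates = [(y*z - x, y, z), (x, x*z - y, z), (x, y, x*y - z)]
        nxt = min(candidates, key=max)   # the unique descending involution
        if max(nxt) >= m:                # triple is reduced
            return (x, y, z)
        x, y, z = nxt

# Klein torus: (6,3,3) is not reduced; one step of X gives (3,3,3).
assert reduce_triple((6, 3, 3)) == (3, 3, 3)
```

Since each step strictly decreases the maximum over the positive principal sheet, the loop terminates, which is the infinite-descent argument of the text.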
We then see that this simply reduces to the group of Markoff theory, with $B=A_0$ and $A=B_0$. With $\lambda =\mu =1$ we then have a super-reduced generating system of the principal group considered.

$\bullet $ Hecke torus: This case was mentioned with the values $\lambda =\mu =\sqrt{2}/2$ and $\theta =1$. These values do not satisfy the super-reduction condition. This time we lie in the curvilinear triangle $\mathbf{LMN}$. The corresponding triple is now $(x,y,z)=(2\sqrt{2},2\sqrt{2},4)$. It corresponds to the values $m=4$, $m_x=m_y=6\sqrt{2}$, $m_z=4$. No applicable transformation among $\mathbf{X}_\phi $, $\mathbf{Y}_\phi $, $\mathbf{Z}_\phi $ is thus identified. On the other hand, one can apply $P_2$, which yields the following matrices, corresponding to the values $\lambda =1$ and $\mu =\sqrt{2}$: $$A=\left[ \begin{array}{cc} \sqrt{2}/2 & \sqrt{2}/4 \\ \sqrt{2} & 3\sqrt{2}/2 \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} 4 & -(1/2) \\ 2 & 0 \end{array} \right] .$$

### Module of a parabolic conformal punctured torus

Two parabolic conformal punctured tori are said to be of the same type if and only if there exists a conformal equivalence transforming one into the other. The reduction algorithm replaces the generating pair $(A,B)$ of a Fricke group by means of the involutions $X_\phi $, $Y_\phi $, $Z_\phi $; it does not change the conformal punctured torus on which one works. Combining the two methods, the preceding computations attach to the various types of parabolic conformal punctured tori a real number $(\mu ^2/\lambda ^2)$, the module of the punctured torus under consideration. The super-reduction conditions guarantee that one can arrange $$1\leq \frac{\mu ^2}{\lambda ^2}\leq 2.$$ This allowed us to state: Every type of parabolic conformal punctured torus is associated with a positive real number $(\mu ^2/\lambda ^2)$ between 1 and 2, the module of the parabolic conformal punctured torus considered.
The value 1 corresponds to a Klein punctured torus. The value 2 corresponds to a Hecke punctured torus. Every value between 1 and 2 corresponds to a conformal punctured torus. This proposition classifies, by their module, the parabolic conformal punctured tori that can be built on one and the same topological punctured torus. Two conformal punctured tori with different modules $(\mu ^2/\lambda ^2)$ cannot be of the same type. Conversely, two tori with the same module $(\mu ^2/\lambda ^2)$ need not be of the same type. To see this, consider $$\mu ^{\prime }=\kappa \mu ,\;\lambda ^{\prime }=\kappa \lambda ,\;\kappa \neq 0.$$ Except when $\kappa =1$, the quadruples $(\alpha ,s,\beta ,p)$ and $(\alpha ^{\prime },s^{\prime },\beta ^{\prime },p^{\prime })$ are related by a homography, but this homography does not yield suitable relations between the associated matrices $A$, $B$ and $A^{\prime }$, $B^{\prime }$. Only the traces make it possible to guarantee that one is dealing with conformally equivalent parabolic tori. For instance, the torus of Markoff theory is given with $\lambda =\mu =1$, but it is not conformally equivalent to the one obtained with $\kappa =2$ and the matrices $$A^{\prime }=\left[ \begin{array}{cc} 2 & 8 \\ (1/2) & (5/2) \end{array} \right] ,\;B^{\prime }=\left[ \begin{array}{cc} 2 & -8 \\ -(1/2) & (5/2) \end{array} \right] ,$$ because the associated trace triples contain a non-integral rational, which makes it impossible to find $M\in SL(2,\mathbb{Z})$ such that $A^{\prime }=M^{-1}A_0M$ and $B^{\prime }=M^{-1}B_0M$. Examples of this kind were studied in [@Cohn2], where indications are given on the value of the corresponding Markoff constants, but the theory developed by that author is less complete than ours. The two punctured tori of the example just given are not conformally equivalent, although they have the same module, equal to 1 by hypothesis.
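With exact rational arithmetic one can confirm the claim (our sketch): the $\kappa =2$ pair still satisfies the Fricke relation, but its trace triple consists of non-integral rationals, ruling out conjugacy in $SL(2,\mathbb{Z})$ to the Markoff pair $(A_0,B_0)$, whose triple is $(3,3,3)$.

```python
from fractions import Fraction as F

def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

# The kappa = 2 matrices A', B' of the text, with exact entries.
Ap = [[F(2), F(8)], [F(1, 2), F(5, 2)]]
Bp = [[F(2), F(-8)], [F(-1, 2), F(5, 2)]]

x, y, z = tr(Ap), tr(Bp), tr(mul(Ap, Bp))
assert (x, y, z) == (F(9, 2), F(9, 2), F(9, 4))

assert x**2 + y**2 + z**2 == x*y*z                   # still parabolic
assert any(t.denominator != 1 for t in (x, y, z))    # non-integral traces
```

Since conjugation preserves traces, half-integer traces cannot match the integer triple of $(A_0,B_0)$.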
This situation does not occur in the particular case of the Hecke torus, which has module 2. Indeed, the inequalities defining it force $\lambda =1$ and $\mu =\sqrt{2}$, and they characterize the conformal type of the Hecke torus uniquely. In fact, giving a pair $(\lambda ,\mu )$ with the constraints found earlier is equivalent to giving a pair $((\mu ^2/\lambda ^2),\lambda )$, that is, more than the module $(\mu ^2/\lambda ^2)$ alone, this time with the constraints $$1\leq \lambda \leq \frac 1{\sqrt{(\mu ^2/\lambda ^2)-1}}.$$ Fixing the module $(\mu ^2/\lambda ^2)$ allows one to reduce to a common fundamental domain $p=\infty $, $\alpha =-1$, $s=0$, $\beta =(\mu ^2/\lambda ^2)$. But the additional factor $\lambda $ is needed to describe how the sides of this domain are identified by $A$ and $B$, which was described geometrically in [@Perrine9]. This recovers the fact that the conformal type of a parabolic conformal punctured torus requires two parameters to be well defined, as well as the fact that the Hecke tori are defined by their module alone, up to a conformal transformation of $\mathcal{H}$. The reduction theory developed here is not that of [@Katok2], which corresponds rather to a coding of the closed geodesics of a quotient $\mathcal{H}/\Gamma $ with $\Gamma $ a Fuchsian group.

### Appearance of quaternions

Consider a matrix $B\in SL(2,\mathbb{R})$ such that $tr(B)=((1+\lambda ^2+\mu ^2)/\lambda )$.
With the group $G_1$ introduced above, $B_1\in G_1$ and the condition $BD=DB_1$, where $$B_1=\left[ \begin{array}{cc} \lambda & -(\lambda \mu ^2) \\ -(1/\lambda ) & ((1+\mu ^2)/\lambda ) \end{array} \right] ,\;\;D=\left[ \begin{array}{cc} \mathbf{x} & \mathbf{y} \\ \mathbf{z} & \mathbf{t} \end{array} \right] ,\;\;\det (D)=\pm 1,$$ we obtained: If $B\in SL(2,\mathbb{R})$, the following two properties are equivalent:

1/ $tr(B)=((1+\lambda ^2+\mu ^2)/\lambda )$.

2/ There exists $D\in GL(2,\mathbb{R})$ such that $B=DB_1D^{-1}$, where $$B_1=\left[ \begin{array}{cc} \lambda & -\lambda \mu ^2 \\ -(1/\lambda ) & ((1+\mu ^2)/\lambda ) \end{array} \right] .$$

If one now combines this proposition with the search for a matrix $D^{\prime }$ such that $A=D^{\prime }A_1D^{\prime -1}$ and $tr(A)=((1+\lambda ^2+\mu ^2)/\mu )$, one is led to write $$B^{-1}A^{-1}=DB_1^{-1}D^{-1}D^{\prime }A_1^{-1}D^{\prime -1},$$ $$tr(B^{-1}A^{-1})=tr(B_1^{-1}(D^{-1}D^{\prime })A_1^{-1}(D^{-1}D^{\prime })^{-1})=((1+\lambda ^2+\mu ^2)/\lambda \mu ).$$ This introduces a matrix $$W=D^{-1}D^{\prime }=\left[ \begin{array}{cc} \varpi _1 & \varpi _4 \\ \varpi _3 & \varpi _2 \end{array} \right] ,$$ and the explicit computation of its trace yields a quadratic equation that can be interpreted as the norm of a quaternion. One solution of this equation is given by $\varpi _1=\varpi _2=\pm 1$, $\varpi _3=\varpi _4=0$. The other solutions are computable and provide non-trivial quaternions that can be used to characterize the pair $(A,B)$ by the trace triple $$\Pi (A,B)=(tr(B^{-1}),tr(A),tr(B^{-1}A^{-1})).$$

Perspectives
------------

In the foregoing we gave a new geometric interpretation of the Diophantine equations $M^{s_1s_2}(a,\partial K,u_\theta )$ of our generalized Markoff theory.
The link was made with the theory of conformal punctured tori, and we saw a difference between the parabolic case, where the generalization is complete, and the hyperbolic case, where the results are more fragmentary. For the parabolic tori, we have a complete reduction theory that classifies the trace triples under the action of the triangle group $\mathbf{T}_3$. All of them arise from a classical Markoff equation thanks to the condition $$tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)=\sigma =0.$$ They are interpreted through the generating pairs of the non-commutative free group on two generators $\mathbf{F}_2$, to which every Fricke group is isomorphic [@Rosenberger]. This case generalizes the classical Markoff theory completely to parabolic punctured tori. This presentation explains the developments found in [@Cohn1] and [@Cohn2], to which it gives an interpretation, and it makes explicit the work of [@Schmidt]. One can supplement the foregoing with a computation of the associated Markoff constants, bearing in mind that the continued fractions are no longer written with 1s and 2s in this case but contain other values (see [@Cohn2]). Finding an example where the only additional value is 3 does not look like an insurmountable challenge. The hyperbolic conformal punctured tori are themselves given by an equation $M^{s_1s_2}(a,\partial K,u_\theta )$ for which we gave a geometric interpretation, and for which we have an action of the triangle group $\mathbf{T}_3$ and an associated reduction. We also classified them up to isometry of $\mathcal{H}$ with the different condition (compare with [@Jin]): $$tr(A)^2+tr(B)^2+tr(AB)^2-tr(A)tr(B)tr(AB)=\sigma <0.$$ For this classification to hold up to conformal equivalence of $\mathcal{H}$, one must add a value $\varepsilon =\pm 1$ corresponding to the orientation of the punctured torus. There is another reduction for the $\sigma $-hyperbolic tori thus defined.
We saw that the corresponding group is no longer a Fricke group. We gave an example where a relation between generators was computed; it remains to see whether the method used generalizes, and what becomes of super-reduction. This is examined in the next chapter. The invariant geodesics made it possible to understand how the resulting Riemann surfaces are extended from a surface with a hole to a surface with a puncture. We saw how this case covers a situation where two interrelated conformal holed tori appear (Schottky doubles), which coincide in the limiting parabolic case. A reduction theory for the hyperbolic tori can be developed, taking care to work on the two tori simultaneously. One should also see what becomes of the link with quaternions in the hyperbolic case. The hyperbolic example identified with $\sigma =1769$ involves the matrices $$A=\left[ \begin{array}{cc} 11 & 3 \\ 7 & 2 \end{array} \right] ,\;\;B=\left[ \begin{array}{cc} 37 & 11 \\ 10 & 3 \end{array} \right] ,\;\;H=\left[ \begin{array}{cc} 3065 & -4799 \\ 829 & -1298 \end{array} \right] .$$ The associated group $G=<A,B,H\mid [A,B]H=\mathbf{1}_2>$ is not free, since it contains the special elements $U=B^{-1}A$ and $V=BA^{-1}$ with $U^2=V^2=-\mathbf{1}_2$. The direction towards algebraic geometry pointed out in this case around [@Shafarevich] (Volume 1, Chapter III 1.6, p. 164) is an important subject to pursue. We sketched the interpretation, still to be worked out in detail, realizing the divisor class group of our cubic surfaces at each integer point by using a curve $S=s(\mathbf{P}^1)$ representing the projective line in the surface $M^{s_1s_2}(a,\partial K,u_\theta )$ and a non-singular fibre $F$. This approach builds a free group on two generators; it remains to understand what it corresponds to in the hyperbolic case.
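The relation $[A,B]H=\mathbf{1}_2$ for this presentation can be confirmed directly (a sketch of ours, with helper names that are not from the text):

```python
# Check that H is the inverse of the commutator [A, B] = A B A^{-1} B^{-1},
# so that [A, B] H = 1 in the group G of the sigma = 1769 example.

def mul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):   # determinant-1 case
    return [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]

A = [[11, 3], [7, 2]]
B = [[37, 11], [10, 3]]
H = [[3065, -4799], [829, -1298]]

comm = mul(mul(A, B), mul(inv(A), inv(B)))
assert mul(comm, H) == [[1, 0], [0, 1]]
```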
This case could be of great importance for the classification of vector bundles [@Drezet] [@Drezet1] [@Drezet2] [@LePotier] [@Klyachko] [@Klyachko1].

Generalization to Riemann surfaces
==================================

Introduction
------------

The attempts to generalize the classical Markoff theory to broader situations led to a closer study of the conformal geometry of Riemann surfaces. An account of this question is given in [@Perrine9], which describes the author's view of the subject and the links he has formalized with current topics in mathematics and physics. The summary that follows relies on the classical expositions of the subject ([@Siegel] [@Lehner] [@Farkas] [@Beardon] [@Katok1] [@Miranda] [@Serre1] [@Cohn6] [@Douady] [@Stillwell] [@Zieschang] [@Berger1] [@Lehmann] [@Doubrovine] [@Reyssat] [@Nakahara] [@Nash1] [@Nash2]...). It indicates some research directions that the author has chosen to explore. The starting point was to extend the work presented above on punctured tori to the more general setting of Riemann surfaces. This idea was put into practice by attempting to shed light on contemporary problems. Quantum chaos thus gradually became a central concern, and we looked for the links it may have with Markoff theory. In other words, the question was whether the Markoff spectrum can be obtained as the spectrum of an operator, which could explain its appearance in physical objects such as oscillators [@Planat1]. The main results obtained are the following:

$\bullet $ We reformulated the approach to conformal punctured tori by placing it in the broader setting of Fuchsian groups. This made it possible to understand how to study the hyperbolic case, which was not completely treated in the foregoing.
This also established the full link with Teichmüller theory, interpreted here as the theory of representations of a Poincaré group in $PSL(2,\mathbb{R})$. This approach generalizes Markoff theory in a very satisfactory way. One thus has a moduli space that plays the role of the set of trace triples for punctured tori, together with a group acting on this space, generalizing the action of the triangle group in the punctured-torus case. From this we were able to derive new results on Diophantine equations in more variables that can be treated like the Markoff equation; one such example already exists [@Baragar]. To develop the corresponding geometric approach, we looked for the objects to consider in place of Riemann surfaces, which seem somewhat limited from this point of view. This is how Stein spaces came to be considered, but they are certainly not the right notion for further generalizations, and we indicated why Riemann domains should be preferred.

$\bullet $ We then studied the link with the coding of geodesics on a Riemann surface. This is a subject where the available general results remain limited [@Series1] [@Series2], but which has a very important connection with the study of dynamical systems and ergodic theory, in particular closed geodesics. This reflection was carried out with a view to going beyond Markoff theory alone, which nevertheless seems to be the only setting where significant results are available [@Schmutz2].

$\bullet $ The Dedekind eta function underlies all of our work. We showed that it yields a number of the classical transcendental functions on which the finest results of number theory are built. We therefore sought to identify a number of expressions in which this function $\eta $ appears, yielding the modular function, elliptic functions, theta functions, etc.
We showed how the latter matter for information theory, in particular for the coding/quantization duality. An idea we then sought to deepen is that these expressions explain, starting from the infinite-product expansion of $\eta $, many of the classical infinite products of the other functions. One point was left aside, concerning $L$-functions; but they too have a property fitting this scheme [@Ericksson].

$\bullet $ We then sought to understand some of the differential expressions put forward in Harvey Cohn's early work on the geometric interpretation of Markoff theory ([@Cohn1], [@Cohn7], [@Cohn8]). This makes the link with the three-holed sphere, and thereby clarifies some of the work of Asmus Schmidt [@Schmidt]. A hypergeometric approach is also possible from there. Most importantly, this development emphasizes the double uniformization at work on punctured tori.

$\bullet $ This double uniformization comes from the fact that the punctured torus has the Poincaré half-plane as conformal covering, but can be completed into a torus whose conformal covering is $\mathbb{C}$. The torus naturally gives rise to elliptic curves, and hence to cubic equations analogous to those considered before, while the punctured torus, for its part, gives rise to equations analogous to the Markoff equation. Since the author described in the preceding chapters the relations between these two types of algebraic equations, it was appropriate to understand better this situation, whose consequence is the existence of a rich double conformal covering structure [@Mazur1] on punctured tori.
This observation has deep arithmetical implications, comparable to those of the Shimura-Taniyama-Weil conjecture [@Wiles], where one of the coverings is Euclidean and the other hyperbolic. From it we deduced a general construction allowing almost all points of a punctured torus to be parametrized by elliptic functions. Not all the consequences of such an idea have been drawn, in particular what it could yield for a Selberg conjecture. The essential result obtained is that the Dedekind eta function can be interpreted through the Laplacian of a torus. This approach leads to a new fundamental invariant whose use in physics we have identified.

$\bullet $ It is from these observations that the reflections on quantum chaos arise naturally. The whole reflection is organized around a Schrödinger equation whose phase space is a torus or a punctured torus. In the first case, we exhibited a deep link with the quadratic reciprocity law, and hence with the Dedekind function $\eta $ and Gauss sums, though it is curious to note that time becomes discrete. The case where the phase space is a punctured torus remains to be studied, and the author conjectures that it is the one giving the sought-after interpretation of the Markoff spectrum as the spectrum of an operator attached to a Schrödinger equation.

$\bullet $ Going deeper into this subject, we managed to solve the Riemann-Hilbert problem associated with classical Markoff theory. In other words, we were able to write down a Fuchsian differential equation whose monodromy group is generated by the two matrices $A_0$ and $B_0$ which generate the group $[SL(2,\mathbb{Z}),SL(2,\mathbb{Z})]\simeq \mathbf{F}_2$. The spectral analysis of the associated differential operator remains to be deepened.
$\bullet $ Finally, we sketched the link with the theory of vector bundles and showed from there how a $K$-theoretic approach develops. In this context, Markoff theory corresponds to exceptional bundles on the projective plane $P_2(\mathbb{C})$ and to the helices of D. Yu. Nogin [@Nogin] [@Nogin1]. This approach is particularly important because it provides a framework for understanding a number of very important, still unsolved conjectures, such as those of Lichtenbaum or of Birch and Swinnerton-Dyer, through an automorphic interpretation of $K$-theory. Along the way, links with the Riemann hypothesis were explored further.

Brief reminders on Riemann surfaces
-----------------------------------

This section is a short reminder on Riemann surfaces, intended to fix notation for what follows. It may be skipped by the informed reader and develops nothing that cannot be found in [@Perrine9]. The geometric objects considered above, the conformal punctured tori, are quotients of the Poincaré half-plane $\mathcal{H}$ by the action of a Fuchsian group $\Gamma $, that is, a discrete subgroup of $PSL(2,\mathbb{R})$. For most Riemann surfaces one can generalize the theory developed for punctured tori by using a Fuchsian group. Indeed, by taking the quotient of $\mathcal{H}$ under the action of such a group, one can construct all Riemann surfaces except those whose topological support is homeomorphic ([@Farkas] p. 208) to the Riemann sphere $\mathcal{S}^2$, to the once-punctured sphere $\mathbb{C}$, to the twice-punctured sphere $\mathbb{C}^{*}$, or to the torus $\mathcal{T}$.
For any other Riemann surface $\mathcal{M}$, the Poincaré group $\pi _1(\mathcal{M},*)$ can be represented as a subgroup $\Gamma $ of the group $PSL(2,\mathbb{R})$ of automorphisms of its conformal covering $\mathcal{H}$. The surface $\mathcal{M}$ is of the form $\mathcal{H}/\Gamma $, and a group representation $\overline{\rho }:\pi _1(\mathcal{M},*)\rightarrow \Gamma $ carries the geometric data of $\mathcal{M}$. In what follows the surfaces $\mathcal{M}$ are connected, hence path-connected, so that their Poincaré group $\pi _1(\mathcal{M},*)$ does not depend on the base point used to define it.

### Uniformization of Riemann surfaces

A conformal covering of a Riemann surface $\mathcal{M}$ is said to be universal if and only if its Poincaré group $\pi _1(\mathcal{M},*)$ is reduced to the identity element. The Riemann surface is then simply connected. All simply connected Riemann surfaces are known up to conformal equivalence ([@Farkas] p. 206). They correspond to the three models of classical geometry ([@Wolf], [@Nakahara] p. 486), whose conformal structure is unique on the topological support under consideration, with constant positive curvature (spherical case), zero curvature (Euclidean case), and negative curvature (hyperbolic case). They are the following:

$\bullet $ The Riemann sphere $\mathcal{S}^2\mathbb{=}\mathbf{P}_1(\mathbb{C})$, of conformal type $(0,0,0)$.

$\bullet $ The complex plane $\mathbb{C}=\mathcal{S}^2\backslash \{\infty \}$, of conformal type $(0,1,0)$.

$\bullet $ The Poincaré half-plane $\mathcal{H}$, of conformal type $(0,0,1)$.

The Killing-Hopf theorem ([@Stillwell] p. 135) guarantees that every Riemann surface can thus be obtained as the quotient of one of the three surfaces $\mathcal{S}^2$, $\mathbb{C}$, $\mathcal{H}$ by the action of a subgroup of its group of conformal automorphisms.
It forms the basis of three Galois theories, applying respectively to the corresponding Riemann surfaces. If $\mathcal{M}$ is a Riemann surface with simply connected conformal covering $\mathcal{M}^{sc}$ and covering group $\Gamma $, a subgroup of $Aut(\mathcal{M}^{sc})$, the surfaces $\mathcal{M}$ and $\mathcal{M}^{sc}/\Gamma $ are conformally equivalent. Hence the importance of knowing the automorphism groups of the simply connected surfaces ([@Farkas] p. 206): $$Aut(\mathcal{S}^2)\simeq PSL(2,\mathbb{C})\;\text{the group of complex M\"{o}bius transformations},$$ $$Aut(\mathbb{C})\simeq P\Delta L(2,\mathbb{C})\;\text{the group given by the upper triangular matrices},$$ $$Aut(\mathcal{H})\simeq PSL(2,\mathbb{R})\simeq Isom^{+}(\mathcal{H})\text{ the group of real matrices}.$$ The group $PSL(2,\mathbb{C})=\{M\in GL(2,\mathbb{C})\mid \det (M)=1\}/\{\pm 1\}$ of the sphere contains the two other groups cited, which unites the three Galois theories just evoked into a single one. The subgroups of $PSL(2,\mathbb{C})$ are the Kleinian groups [@Series3]. All conformal types of Riemann surfaces having $\mathcal{S}^2$ or $\mathbb{C}$ as universal covering are known ([@Farkas] p. 208). These are the following cases:

$\bullet $ The Riemann sphere $\mathcal{S}^2$ is the only Riemann surface having $\mathcal{S}^2$ as universal conformal covering. Its Poincaré group is trivial.

$\bullet $ $\mathbb{C}$, $\mathbb{C}^{*}\simeq \mathbb{C}/\omega \mathbb{Z}$, and the compact tori $\mathcal{T}=\mathcal{T}_\Lambda =\mathbb{C}/\Lambda $ are the only Riemann surfaces having $\mathbb{C}=\mathcal{S}^2\backslash \{\infty \}$ as universal conformal covering.

- The once-punctured sphere $\mathbb{C}=\mathcal{T}_0^{\bullet }=\mathcal{S}^2\backslash \{\infty \}$ is of conformal type $(0,1,0)$. Its Poincaré group is trivial.
- The twice-punctured sphere $\mathbb{C}^{*}=\mathcal{T}_0^{\bullet \bullet }=\mathcal{S}^2\backslash \{0,\infty \}$ is of conformal type $(0,2,0)$. It can be represented by a cylinder $\mathbb{C}/\omega \mathbb{Z}$ defined with $\omega \in \mathbb{C}^{*}$. Its Poincaré group is $\pi _1(\mathbb{C}^{*},*)\simeq \mathbb{Z}$. In $\mathbb{C}$ the fundamental domain is a strip which, under the group $\mathbb{Z}$, tiles the whole space $\mathbb{C}$.

- The compact tori $\mathcal{T}_\Lambda =\mathbb{C}/\Lambda $ are of conformal type $(1,0,0)$, conformally equivalent to elliptic curves. The canonical projection gives a universal covering of such a torus and can be defined with elliptic functions. Its Poincaré group is $\pi _1(\mathcal{T},*)=\mathbb{Z\oplus Z}\simeq \mathbb{Z}^2$. One can show that $Aut(\mathcal{M})$ is an extension of $\mathbb{C}/\Lambda $ by a finite group ([@Farkas] p. 296, [@Reyssat] p. 48), in general $\{\pm 1\}$. But two cases stand out, corresponding to the square symmetry $\Lambda \simeq \mathbb{Z}\oplus i\mathbb{Z}$ and the hexagonal symmetry $\Lambda \simeq \mathbb{Z}\oplus j\mathbb{Z}$.

$\bullet $ Three conformal types of Riemann surfaces, complementary to the preceding ones, are characterized by the fact that $\mathcal{H}$ is this time their universal conformal covering ([@Farkas] p. 210), but their Poincaré group is commutative. Besides $\mathcal{H}$ itself, they are the following:

- The sphere with one puncture and one hole $\mathcal{D}^{\bullet }=\{z\in \mathbb{C};\;0<\mid z\mid <1\}$, of conformal type $(0,1,1)$, with $\pi _1(\mathcal{D}^{\bullet },*)\simeq \mathbb{Z}$.

- The sphere with two holes $\mathcal{D}_\alpha ^{\circ }=\{z\in \mathbb{C};\;0<\alpha <\mid z\mid <1\}$, of conformal type $(0,0,2)$, with $\pi _1(\mathcal{D}_\alpha ^{\circ },*)\simeq \mathbb{Z}$.
$\bullet $ In all the other cases, which are infinite in number, the universal conformal covering of $\mathcal{M}$ is the Poincaré half-plane $\mathcal{H}$, on which its non-commutative Poincaré group $\pi _1(\mathcal{M},*)$ acts. This group is isomorphic to a Fuchsian group $\Gamma \subset PSL(2,\mathbb{R})\simeq Aut(\mathcal{H})$ acting on $\mathcal{H}$ to give $\mathcal{M}\simeq \mathcal{H}/\Gamma $. In practice [@DeRham], the surface $\mathcal{M}$ can be visualized globally with a fundamental domain for the action of the group $\Gamma $ in $\mathcal{H}$. For the definition of a polygonal fundamental domain one can use the method of [@Keen].

### Riemann surfaces defined by a Fuchsian group

Fuchsian groups $\Gamma $ describe, up to conformal equivalence, all Riemann surfaces apart from those having $\mathcal{S}^2$ or $\mathbb{C}$ as universal covering. We have:

Apart from the types $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(0,2,0)$, $(0,1,1)$, $(0,0,2)$, every Riemann surface is conformally equivalent to a surface that can be obtained as a quotient space of the form $$\mathcal{M}=\mathcal{H}/\Gamma ,$$ where $\Gamma \simeq \pi _1(\mathcal{M},*)$ is a non-commutative Fuchsian group isomorphic to a subgroup of $Aut(\mathcal{H})\simeq PSL(2,\mathbb{R}).$

All surfaces of finite type of genus $g\geq 2$ are described in this way, but so are certain surfaces of genus $0$ or $1$. The parabolic conformal punctured tori, which are of genus $1$, are given by this last proposition. With a parabolic matrix $P=L^{-1}$, the corresponding Fuchsian group has presentation $<\overline{A},\overline{B},\overline{P}\mid [\overline{A},\overline{B}]\overline{P}=\mathbf{1}_2>$.
The hyperbolic conformal punctured tori, also of genus $1$, are given by a Fuchsian group with known presentation $<\overline{A},\overline{B},\overline{H}\mid [\overline{A},\overline{B}]\overline{H}=\mathbf{1}_2>$, where $H$ is a hyperbolic matrix. The three-punctured spheres, which are of genus $0$, that is, the pairs of pants, can be obtained similarly ([@Stillwell] p. 114). The corresponding Fuchsian group is isomorphic to the triangle group $\mathbf{T}_3$.

### Another anti-equivalence of categories

Riemann's fundamental theorem associates to every compact Riemann surface $\mathcal{M}$ a polynomial equation $Q(x,y)=0$. Normalizing $Q$, one reduces to an irreducible algebraic relation between the complex variables $y$ and $x$ of the following form $$\Phi (y,x)=y^n+\phi _1(x)y^{n-1}+...+\phi _n(x)=0,$$ where the $\phi _k(x)\;(1\leq k\leq n\leq m)$ are rational functions of $x$. Their denominators vanish at a finite number of poles, where the value taken by $y$ may be regarded as infinite. Elsewhere, solving such an equation for $y$ gives a multivalued function $y(x)$, each value of $x$ defining $n$ values $y_i(x)$ in $\mathbb{C}$ wherever the discriminant of $\Phi $ does not vanish, that is, on an open set $\mathbb{C}_\Phi $ of $\mathbb{C}$ such that $\mathbb{C}\backslash \mathbb{C}_\Phi $ is finite. For every point $x\in \mathbb{C}_\Phi $, each uniformization $y_i(x)$ gives, by the implicit function theorem, a local holomorphic chart which extends analytically thanks to Puiseux's theorem ([@Dieudonne3] p. 106, [@Arnaudies2] Theorem 7.7). The set of these continuations defines a Riemann surface $\mathcal{N}\subset \mathcal{M}$ with $n$ sheets above $\mathbb{C}_\Phi $. One completes this with the projection $\pi _x$ which to each point $p_i=(x,y_i(x))\in \mathcal{N}$ associates $\pi _x((x,y_i(x)))=x\in \mathbb{C}_\Phi $.
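The finite exceptional set $\mathbb{C}\backslash \mathbb{C}_\Phi$ can be computed explicitly in small cases. As an illustration (the curve $\Phi (y,x)=y^2-(x^3-x)$ is my own example, not from the text), a sketch with SymPy locating the branch points of an elliptic curve seen as a two-sheeted cover of the $x$-line:

```python
from sympy import symbols, discriminant, solve

x, y = symbols('x y')

# Phi(y, x) = y^2 - (x^3 - x): a degree-2 cover of the x-line.
Phi = y**2 - (x**3 - x)

# Discriminant of Phi as a polynomial in y; it vanishes exactly
# where the two sheets y_1(x), y_2(x) collide.
d = discriminant(Phi, y)      # 4*x**3 - 4*x

# The affine branch points: the finite set C \ C_Phi.
branch = solve(d, x)          # x in {-1, 0, 1}
```

Away from these three points (and the point at infinity) the implicit function theorem gives the two holomorphic sheets described above.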
It constitutes an $n$-sheeted covering of $\mathcal{N}$ above $\mathbb{C}_\Phi $. The sheets join at singular points of $\mathcal{M}$, its ramification points. One sees how the sheets join by following the terms $y_i(x)$ as $x$ goes around each point of $\mathbb{C}\backslash \mathbb{C}_\Phi $. At each of these points one thus determines a permutation of the sheets, decomposing into cycles. It makes it possible to extend the local covering induced by $\pi _x$ over a disc punctured at $x$. One adds to $\mathcal{N}$ above $x$ as many points as there are cycles in the permutation of the sheets in its neighbourhood. One does the same at the point at infinity $x=\infty $, using homogeneous coordinates ([@Cartan] p. 205). One can then check that the completed surface $\mathcal{N}$ is none other than $\mathcal{M}$. Gluing all the pieces together yields a global covering extending $\pi _x$ to the original surface, likewise denoted $\pi _x:\mathcal{M}\longrightarrow \mathbf{P}_1(\mathbb{C})$. Between the affine curve defined in $\mathbb{C}^2$ by $\Phi $ and $\mathcal{M}$ there may be a difference involving a finite number of points. But on completing by these points in $\mathbf{P}^1(\mathbb{C})$, the compact Riemann surface becomes a projective curve over $\mathbb{C}$. One then shows that $\pi _x$ is meromorphic on $\mathcal{M}$, and the same holds for the second projection $\pi _y$, definable from the values $y_i(x)$. Finally, it is easy to check that the field $\mathcal{K}(\mathcal{M})$ of meromorphic functions on $\mathcal{M}$ identifies with $\mathbb{C}(\pi _x,\pi _y)$ and has degree $n$ over the field $\mathbb{C}(\pi _x)$, which has transcendence degree $1$ over $\mathbb{C}$. Moreover, the field $\mathbb{C}(\pi _x,\pi _y)$ identifies readily with the field of fractions of the ring $\mathbb{C}[X,Y]/Q(X,Y)\mathbb{C}[X,Y]\simeq \mathcal{K}(\mathcal{M})$.
This construction gives a functor $\mathcal{K}$ from the category of compact connected Riemann surfaces to that of complex function fields, that is, finitely generated extensions of $\mathbb{C}$ of transcendence degree 1. This functor extends to an anti-equivalence of categories (a Galois theory) between compact Riemann surfaces and complex function fields, itself extendable between the category of Riemann surfaces of finite type and that of certain $\mathbb{C}$-algebras ([@Douady] Volume 2, p. 138 and [@Reyssat] p. 71). From algebras to surfaces, one proceeds by considering the set of valuations of the algebra and identifying each of them with a point (a place). The method for constructing the Riemann surface structure on these valuations is detailed in [@Chevalley], [@Lang1] or [@Arnaudies] (p. 92). A simplified account is given in [@Edwards], which makes clear the analogy between arithmetic and function fields so dear to André Weil [@Weil0]. This gives the meaning of the notion of divisor of a surface, as well as of the Riemann-Roch theorem ([@Reyssat] p. 94, [@Edwards] p. 158, [@Arnaudies] p. 182). The link with differential forms can be made via the quotient $\Omega _{\mathcal{K}(\mathcal{M})}$ of the $\mathcal{K}(\mathcal{M})$-vector space of symbols $df$, where $f\in \mathcal{K}(\mathcal{M})$, by the subspace generated by the relations $d(f+f^{\prime })-df-df^{\prime }$, $d(ff^{\prime })-fdf^{\prime }-(df)f^{\prime }$, $dc$ where $c\in \mathbb{C}$. Inside this space one identifies a $\mathbb{C}$-space of meromorphic differential forms, that is, those written $fdz$ with $f\in \mathcal{K}(\mathcal{M})$ meromorphic, in which, taking $f$ holomorphic, one finds a cohomology space $H^1(\mathcal{M},\mathbb{C})$.

### $C^{*}$-algebras

There are other anti-equivalences of categories concerning Riemann surfaces. For example ([@MacLane] p.
93), the Gelfand-Naimark theorem ([@Guichardet0] p. 160) allows one to construct one from the anti-equivalence that exists between the category of Hausdorff topological spaces and that of $C^{*}$-algebras. The latter associates to every topological space the $C^{*}$-algebra of continuous complex functions defined on that space. In the opposite direction, the construction proceeds as developed in [@Connes1] (Theorem 6, p. 25). This is a special case of the Gelfand transform, which associates to every commutative Banach algebra its locally compact spectrum of characters, compact if the algebra is unital ([@Guichardet0] p. 108). This construction gives a very natural framework for the usual Fourier transforms, but above all it yields a direct proof of the fact that every compact space can be viewed as an algebraic space over $\mathbb{C}$. In [@SchwartzEnock] one finds an attempt to extend this equivalence to Kac algebras, a project that has been the object of intense research around Alain Connes's noncommutative geometry [@Connes]. $C^{*}$-algebras are themselves the object of intense research activity, since they structure the sets of observables of quantum mechanics ([@Waldschmidt2] p. 548). The notion of anti-equivalence means that different theories actually speak of the same objects dressed in different disguises, or considered from different points of view, notably according to whether they are studied globally or locally. It would be useful to understand which additional properties of $C^{*}$-algebras translate the properties of certain Riemann surfaces ([@Waldschmidt2] p. 548, [@Fischer]). Speaking of an anti-equivalence of categories or of a Galois theory amounts essentially to the same thing, classical Galois theory having simply provided the first historical example of such an anti-equivalence.
### Extension of surfaces and species of Fuchsian groups

One says that $\mathcal{M}$ is extendable to $\mathcal{M}^{\prime }$, or that $\mathcal{M}^{\prime }$ extends $\mathcal{M}$, if and only if there exists a holomorphic map $f$ from $\mathcal{M}$ to $\mathcal{M}^{\prime }$ such that $\mathcal{M}^{\prime }\backslash f(\mathcal{M})$ has non-empty interior. A hole in the surface $\mathcal{M}$ can be filled with a closed disc with one puncture without changing the nature of the topological support of the surface. Compact surfaces provide examples of non-extendable surfaces. Holed conformal tori, on the contrary, give examples of surfaces extendable to punctured tori. Extension leads to distinguishing Fuchsian groups of the first and of the second kind ([@Beardon] p. 202). For this one uses the set $\Lambda (\Gamma )$ of limit points of the orbits $\Gamma z$, where $z$ lies in a fundamental domain. For a Fuchsian group $\Gamma $ of the second kind, the Riemann surface $\mathcal{H}/\Gamma $ is extendable, and on the boundary of $\mathcal{H}$ one finds for $\Lambda (\Gamma )$ either the empty set, a set with one or two elements, or a perfect set nowhere dense in the boundary of $\mathcal{H}$ ([@Katok1] p. 67). For a group $\Gamma $ of the first kind, the Riemann surface $\mathcal{H}/\Gamma $ is not extendable and the set $\Lambda (\Gamma )$ is dense in the boundary of $\mathcal{H}$.

### Elementary Fuchsian groups

The action of a Fuchsian group $\Gamma $ classifies the points of $\mathcal{H}\cup \mathbb{R}\cup \{\infty \}$ into parabolic, hyperbolic and elliptic points. Beyond groups of the first or second kind, there is another type of Fuchsian group, called elementary, characterized by the fact that it has a finite orbit for its action on the Euclidean closure $\mathcal{H}\cup \mathbb{R}\cup \{\infty \}$ of $\mathcal{H}$.
Such a group is one for which $\Lambda (\Gamma )$ has at most two points ([@Katok1] 3.8, p. 78). If a Fuchsian group $\Gamma $ is not elementary, it contains infinitely many hyperbolic elements, and every elliptic element has finite order ([@Katok1] p. 48). If, on the contrary, a Fuchsian group $\Gamma $ is elementary, it is cyclic (finite or infinite) or conjugate in $PSL(2,\mathbb{R})$ to a group generated by the classes of $g(z)=-1/z$ and $h(z)=kz$ with $k>1$. Conversely, the cyclic subgroups of $PSL(2,\mathbb{R})$ generated by a parabolic or hyperbolic element are Fuchsian, and the cyclic subgroups of $PSL(2,\mathbb{R})$ generated by an elliptic element are Fuchsian if and only if they are finite.

### Signature of a Fuchsian group

Following [@Shimura] (ch. 1.3 to 1.5), [@KumarMurty] (ch. 10) or [@Knapp] (§9.5), we now complete $\mathcal{H}$ by adding to it the cusps for $\Gamma $. They lie on its boundary and give a larger set $\mathcal{H}^{*}$. This defines a new Riemann surface $\mathcal{H}^{*}/\Gamma $, which is compact if the group $\Gamma $ is assumed to be of the first kind. It is denoted $X(\Gamma )$. The set of cusps for $\Gamma $ added to $\mathcal{H}$ is finite and stable under the action of $\Gamma $. In the quotient it fills all the punctures of the Riemann surface $\mathcal{H}/\Gamma $. One says that $X(\Gamma )$ is a modular curve when $\Gamma \subset PSL(2,\mathbb{Z})$ and $\Gamma $ contains the congruence subgroup $\Gamma (n)=PG(n)$ of $PSL(2,\mathbb{Z})$ defined with $$G(n)=\{\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \in SL(2,\mathbb{Z})\mid a\equiv d\equiv 1\;(\mod \,n),\;b\equiv c\equiv 0\;(\mod\,n)\}.$$ $G(2)$ is free on two generators ([@Iversen] p. 154). We write $X(\Gamma (n))=X(n)$.
Thus the modular curve $X(\Gamma _0(n))=X_0(n)$ is defined with $\Gamma =\Gamma _0(n)=PG_0(n)$: $$G(n)\subset G_0(n)=\{\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \in SL(2,\mathbb{Z})\mid c\equiv 0\;(\mod\,n)\}\subset SL(2,\mathbb{Z}).$$

#### Case where the universal covering of such a surface is not $\mathcal{H}$.

For $\Gamma =\Gamma (1)=\Gamma _0(1)=PSL(2,\mathbb{Z})$, one finds $\mathcal{H}^{*}=\mathcal{H}\cup \mathbb{Q}\cup \{\infty \}$, and $\mathcal{H}^{*}/PSL(2,\mathbb{Z})$ is conformally equivalent to the Riemann sphere $\mathcal{S}^2$. By construction we are in a situation covered by what is explained in B. Mazur's article [@Mazur1] on double conformal coverings. Here two uniformizations interact, one Euclidean and the other hyperbolic. The points of $\mathbb{Q}$ are all parabolic, obtainable from $\infty $ by an element of $PSL(2,\mathbb{Z})$, so that $\mathcal{H}^{*}/PSL(2,\mathbb{Z})$ identifies with the modular surface $\mathcal{H}/PSL(2,\mathbb{Z})$ completed by its point at infinity. This conformal equivalence is given by the modular invariant $$J:\tau \in \mathcal{H}/PSL(2,\mathbb{Z})\cup \{\infty \}\simeq \mathcal{H}^{*}/PSL(2,\mathbb{Z})\longmapsto J(\tau )\in \mathbb{C}\cup \{\infty \}\simeq \mathcal{S}^2.$$ This invariant defines $J(\tau )\in \mathbb{C}$ for every $\tau \in \mathcal{H}$, and $J(\tau )=\infty $ for $\tau \in \mathbb{Q}\cup \{\infty \}$. The half-plane $\mathcal{H}$ thus becomes a ramified covering of $\mathbb{C}$ with an elliptic point of ramification $2$ at $i$ and an elliptic point of ramification $3$ at $(-1+i\sqrt{3})/2$. One thus recovers ([@Katok1] p. 71) the fundamental domain of the modular group $PSL(2,\mathbb{Z})$ and its two conjugacy classes of maximal cyclic subgroups of $PSL(2,\mathbb{Z})$, one corresponding to groups of order $2$, the other to groups of order $3$.
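The congruence conditions defining $G(n)$ and $G_0(n)$ are easy to test mechanically; here is a minimal sketch (the helper names `in_G`, `in_G0` are mine, not from the text):

```python
def in_SL2Z(M):
    # M is a pair of rows ((a, b), (c, d)) with integer entries.
    (a, b), (c, d) = M
    return a * d - b * c == 1

def in_G(M, n):
    # G(n): a = d = 1 (mod n) and b = c = 0 (mod n).
    (a, b), (c, d) = M
    return in_SL2Z(M) and a % n == 1 % n and d % n == 1 % n \
        and b % n == 0 and c % n == 0

def in_G0(M, n):
    # G_0(n): only c = 0 (mod n) is required.
    (a, b), (c, d) = M
    return in_SL2Z(M) and c % n == 0

T = ((1, 1), (0, 1))   # translation tau -> tau + 1
S = ((0, -1), (1, 0))  # inversion tau -> -1/tau
M = ((1, 2), (0, 1))   # T^2, a principal congruence element mod 2

assert in_G(M, 2)                      # T^2 lies in G(2)
assert in_G0(T, 5) and not in_G(T, 5)  # G(n) is strictly inside G_0(n)
assert not in_G0(S, 2)                 # S fails c = 0 (mod 2)
```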
This is related to the presentation of $PSL(2,\mathbb{Z})$ as the free product of a cyclic group of order $2$ and a cyclic group of order $3$, as recalled for instance in [@Serre1] (pp. 128-131).

#### Case where the universal covering of $\mathcal{H}^{*}/\Gamma $ is $\mathcal{H}$

This situation defines a new Fuchsian group $\Gamma ^{*}$ and a conformal equivalence $$\mathcal{H}^{*}/\Gamma \simeq \mathcal{H}/\Gamma ^{*}.$$ The compactness of $\mathcal{H}/\Gamma ^{*}$ implies that $\Gamma ^{*}$ produces no cusp, and that the number $r$ of classes of elliptic points for the action of $\Gamma ^{*}$ is finite. Returning by conformal equivalence to $\mathcal{H}^{*}/\Gamma $, the singularities remain. By covering, one can possibly carry out ([@Stillwell] ch. 8) the desingularization of $\mathcal{H}/\Gamma ^{*}$. The elliptic points correspond to marked singular points on the surface. Ramification ([@Douady] ch. VI) describes the phenomena behind the appearance of these points. Considering $$\mathcal{H}_{\Gamma ^{*}}=\mathcal{H}\backslash \{z\mid z\text{ elliptic for }\Gamma ^{*}\},$$ one has the following properties:

1- The surface $\mathcal{H}/\Gamma ^{*}$ extends $(\mathcal{H}_{\Gamma ^{*}}/\Gamma ^{*})$.

2- The canonical map $\pi :\mathcal{H}\rightarrow \mathcal{H}/\Gamma ^{*}$ is locally bijective in a neighbourhood of every point of $\mathcal{H}_{\Gamma ^{*}}$.

3- For every elliptic point $\mathbf{p}_i$ ($i=1$, ..., $r$) in $\mathcal{H}/\Gamma ^{*}$ one can define a number $\upsilon _i$ such that $Card(\pi ^{-1}(\mathbf{p}_i))=\upsilon _i$. This classifies the elliptic points by ordering the numbers $\upsilon _i$ in increasing order. One says that $\upsilon _i$ is the ramification index of the point $\mathbf{p}_i$, or that the points $\mathbf{p}_1$, $...$, $\mathbf{p}_r$ are marked with the numbers $\upsilon _1$, $...$, $\upsilon _r$.
This approach introduces for the Riemann surface $\mathcal{M}\simeq \mathcal{H}/\Gamma $ a signature, also called the signature of $\Gamma $: $$(g;n:\upsilon _1,\upsilon _2,...,\upsilon _r,\upsilon _{r+1},...,\upsilon _n;m),$$ where one writes $$2\leq \upsilon _1\leq \upsilon _2\leq ...\leq \upsilon _r\leq \upsilon _{r+1}=...=\upsilon _n=\infty .$$ This signature indicates that the surface $\mathcal{M}$ of genus $g$ has $r$ elliptic points $\mathbf{p}_1$, $...$, $\mathbf{p}_r$ with ramification indices $\upsilon _1$, $...$, $\upsilon _r$, parabolic points $\mathbf{p}_{r+1}$, $...$, $\mathbf{p}_n$, $n-r$ in number, as well as $m$ holes. Its conformal type $(g,n,m)$ follows.

### Euler-Poincaré invariant

The Euler-Poincaré characteristic of a Fuchsian group $\Gamma $ of signature $(g;n:\upsilon _1,\upsilon _2,...,\upsilon _r,\upsilon _{r+1},...,\upsilon _n;m)$ is defined by: $$-\chi (\Gamma )=2g-2+\sum_{i=1}^n(1-\frac 1{\upsilon _i})+m=2g-2+\sum_{i=1}^r(1-\frac 1{\upsilon _i})+(n-r)+m.$$ This number is positive when the Fuchsian group $\Gamma $ is not reduced to the identity. The covolume of $\Gamma $, that is, the hyperbolic area of $\mathcal{M}$, is ([@Beardon] p. 269) $$Cov(\Gamma )=\mu (\mathcal{M})=2\pi (-\chi (\Gamma )).$$ In the case of a Fuchsian group $\Gamma $ of the first kind, one necessarily has $m=0$, and this formula gives the hyperbolic area of any convex fundamental domain of $\Gamma $ in the Poincaré half-plane $\mathcal{H}$.
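The two formulas above can be evaluated directly from a signature; a minimal sketch (the function names are mine), checked on the modular group, whose signature $(0;3:2,3,\infty ;0)$ recovers the classical covolume $\pi /3$:

```python
import math

def minus_chi(g, upsilons, m):
    # -chi(Gamma) = 2g - 2 + sum_i (1 - 1/v_i) + m,
    # with v_i = math.inf at parabolic points (1 - 1/inf = 1).
    return 2 * g - 2 + sum(1 - (0 if v == math.inf else 1 / v)
                           for v in upsilons) + m

def covolume(g, upsilons, m):
    # Cov(Gamma) = hyperbolic area of M = 2*pi*(-chi(Gamma)).
    return 2 * math.pi * minus_chi(g, upsilons, m)

# PSL(2, Z): genus 0, elliptic points of orders 2 and 3, one cusp.
val = minus_chi(0, [2, 3, math.inf], 0)    # 1/6
area = covolume(0, [2, 3, math.inf], 0)    # pi/3
assert abs(val - 1 / 6) < 1e-12
assert abs(area - math.pi / 3) < 1e-12
```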
The Euler-Poincaré characteristic is also the invariant of the surface $\mathcal{M}$ defined classically as the alternating sum of the Betti numbers of the $r$-simplices produced by a triangulation $$\chi (\Gamma )=\chi_{\mathcal{M}}=\sum_{j=0}^n(-1)^jb_j(\mathcal{M})=b_0(\mathcal{M})-b_1(\mathcal{M})+b_2(\mathcal{M}).$$ For a compact surface $\mathcal{M}$, $b_0(\mathcal{M})$ is the number of connected components, $b_2(\mathcal{M})$ is the number of orientable connected components ([@Lehmann] p. 257 and p. 260), and $b_1(\mathcal{M})$ is given by the number of generators of $\pi_1(\mathcal{M},*)$ or of its abelianization $H_1(\mathcal{M},\mathbb{Z})$, the first singular homology group of $\mathcal{M}$: $$H_1(\mathcal{M},\mathbb{Z})\simeq \pi_1(\mathcal{M},*)/[\pi_1(\mathcal{M},*),\pi_1(\mathcal{M},*)].$$

### Symplectic geometry

In every class of $H_1(\mathcal{M},\mathbb{Z})$ one can find a closed curve $c(t)$ that is infinitely differentiable on $\mathcal{M}$, and even geodesic in various cases. This allows one, at every point $P\in \mathcal{M}$ where two such curves $c_1(t_1)$ and $c_2(t_2)$, corresponding to two distinct elements $\gamma_1$ and $\gamma_2$ of $H_1(\mathcal{M},\mathbb{Z})$, intersect, to consider the basis $(\partial c_1/\partial t_1,\partial c_2/\partial t_2)$ of the tangent space at $P$. Since in the case favored here $\mathcal{M}$ is orientable, with a normal vector $n$ one can define $\varepsilon (P)=1$ or $\varepsilon (P)=-1$ according to whether the triple $(n,\partial c_1/\partial t_1,\partial c_2/\partial t_2)$ is direct or not. Summing over all intersection points of the curves $c_1(t_1)$ and $c_2(t_2)$, one obtains the intersection number $\gamma_1\sqcap \gamma_2$.
Symplectic geometry then enters naturally for such a Riemann surface: extending this intersection number to the whole homology group $H_1(\mathcal{M},\mathbb{Z})$ yields a non-degenerate antisymmetric bilinear form $H_1(\mathcal{M},\mathbb{Z})\times H_1(\mathcal{M},\mathbb{Z})\longrightarrow \mathbb{Z}$. One deduces the existence of symplectic bases and of associated canonical dissections ([@Waldschmidt2] p. 105), which allow $\mathcal{M}$ to be viewed by means of a fundamental domain of $\mathcal{H}$ on whose boundary the canonical dissection can be materialized. This bilinear form extends naturally from $\mathbb{Z}$ to $\mathbb{R}$ as $H_1(\mathcal{M},\mathbb{R})\times H_1(\mathcal{M},\mathbb{R})\longrightarrow \mathbb{R}$. This produces a symplectic vector space [@Cannas]. Note that the fact that $\mathcal{M}$ is associated with a Fuchsian group, and hence orientable, is essential for the validity of the construction just made. One can further extend this form to a positive definite Hermitian form ([@Waldschmidt2] p. 189) through the notion of polarization on complex abelian varieties, which characterizes Jacobians. This point is discussed further below.

### Topological approach to the Poincaré group

For any Riemann surface $\mathcal{M}$, the signature contains all the essential topological data, but no conformal data. It yields a presentation of the Poincaré group $\pi_1(\mathcal{M},*)\simeq \Gamma$. To see this one uses the Seifert-Van Kampen theorem ([@Gramain] p. 30) and a passage to the quotient for the elliptic points.
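The intersection pairing described above can be written down explicitly in a symplectic basis; the sketch below (the helper `intersection_matrix` is illustrative) builds the standard matrix of the form on $H_1(\mathcal{M},\mathbb{Z})\simeq \mathbb{Z}^{2g}$ and checks antisymmetry and non-degeneracy.

```python
import numpy as np

def intersection_matrix(g):
    """Standard symplectic form on H_1(M, Z) = Z^{2g} in a symplectic
    basis (a_1, b_1, ..., a_g, b_g): a_i meets b_i once, all other pairings 0."""
    J = np.zeros((2 * g, 2 * g), dtype=int)
    for i in range(g):
        J[2 * i, 2 * i + 1] = 1
        J[2 * i + 1, 2 * i] = -1
    return J

J = intersection_matrix(2)            # genus 2
assert (J.T == -J).all()              # antisymmetric
assert round(np.linalg.det(J)) == 1   # non-degenerate (indeed unimodular)
```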
This gives: The Poincaré group $\pi_1(\mathcal{M},*)$ of any surface $\mathcal{M}$ of conformal type $(g,n,m)$ admits a presentation with $2g+n+m$ generators and $r+1$ relations, where 1/ The generators are $a_1,b_1,...,a_g,b_g,e_1,...,e_r,p_{r+1},...,p_n,h_1,...,h_m.$ 2/ The relations are $$\prod_{i=1}^g[a_i,b_i]e_1...e_rp_{r+1}...p_nh_1...h_m=1,\;\;\forall i=1,...,r,\;\;e_i^{\upsilon_i}=1.$$ Its signature is $$(g;n:\upsilon_1,\upsilon_2,...,\upsilon_r,\infty_{n-r};m).$$ This result allows the computation of the first homology group $$H_1(\mathcal{M},\mathbb{Z})\simeq \mathbb{Z}^{2g-r}\times \mathbb{Z}/\upsilon_1\mathbb{Z}\times ...\times \mathbb{Z}/\upsilon_r\mathbb{Z}.$$ In the compact case where $n=r$ and $m=0$, one has $\pi_1(\mathcal{M},*)\simeq \mathbf{F}_{2g-r}$, the free group on $2g-r$ generators.

### Conformal approach to the Poincaré group

The conformal data of a surface $\mathcal{M}$ with conformal covering $\mathcal{H}$ come from an injective representation of the group $\pi_1(\mathcal{M},*)$ in the group $Aut(\mathcal{H})=PSL(2,\mathbb{R})$. This rests on the result due to Poincaré ([@Poincare] [@Zieschang] (p. 114) [@Katok1] (p. 90)): Let $\Gamma$ be a Fuchsian group defining a Riemann surface of finite type $\mathcal{M}=\mathcal{H}/\Gamma$ with signature $(g;n:\upsilon_1,\upsilon_2,...,\upsilon_n;m)$; then $\Gamma$ admits a presentation with $2g+n+m$ generators and $r+1$ relations, with 1/ The generators $\overline{A}_1,\overline{B}_1,...,\overline{A}_g,\overline{B}_g,\overline{E}_1,...,\overline{E}_r,\overline{P}_{r+1},...,\overline{P}_n,\overline{H}_1,...,\overline{H}_m$ in $PSL(2,\mathbb{R}).$ 2/ The relations $$\prod_{i=1}^g[\overline{A}_i,\overline{B}_i]\overline{E}_1...\overline{E}_r\overline{P}_{r+1}...\overline{P}_n\overline{H}_1...\overline{H}_m=1,\;\;\forall i=1,...,r,\;\;\overline{E}_i^{\upsilon_i}=\mathbf{1}_2.$$ The terms $\overline{H}_i$ are hyperbolic and are defined up to a permutation and a conjugation in $PSL(2,\mathbb{R})$. The same holds for the terms $\overline{P}_j$, which are parabolic. The terms $\overline{E}_k$ generate maximal, pairwise non-conjugate finite subgroups of $\Gamma$. Every elliptic element of $\Gamma$ is conjugate in $PSL(2,\mathbb{R})$ to a power of some term $\overline{E}_k$, and likewise every parabolic element of $\Gamma$ is conjugate to a power of some term $\overline{P}_j$. Every element of finite order in $\Gamma$ is elliptic. If the Fuchsian group $\Gamma$ is of the first kind, there are no hyperbolic terms $\overline{H}_i$. In that case the group $\Gamma$ is cocompact, that is, $\mathcal{M}=\mathcal{H}/\Gamma$ is a compact Riemann surface, if and only if there are no parabolic terms.

### Lifting to a matrix group

One can now pass back from a Fuchsian group $\Gamma$ to a group $G$ in $SL(2,\mathbb{R})$, defined as an inverse image of $\Gamma$ under the projection of $SL(2,\mathbb{R})$ onto $PSL(2,\mathbb{R})$.
With the canonical morphism $P:SL(2,\mathbb{R})\rightarrow PSL(2,\mathbb{R})$, one says that the subgroup $\Gamma$ of $PSL(2,\mathbb{R})$ is lifted to the group $G$ in $SL(2,\mathbb{R})$ if and only if the restriction $P(G)$ is isomorphic to $\Gamma$. We have already seen for Fricke groups that there may be several inverse images $G$ of $\Gamma$ under $P$. In fact ([@Seppala] p. 136), for every genus $g>1$ every Fuchsian subgroup $\Gamma$ of $PSL(2,\mathbb{R})$ defines $2^{2g}$ distinct groups $G$ lifting $\Gamma$ to $SL(2,\mathbb{R})$. A result of Irwin Kra [@Kra] also states that a group $\Gamma \subset PSL(2,\mathbb{R})$ can be lifted to $SL(2,\mathbb{R})$ if and only if it contains no element of order $2$. Such elements can indeed cause problems, as shown by the example of the transformation $f(z)=-(1/z)$, of order $2$ in $PSL(2,\mathbb{R})$. The matrix corresponding to $f$ in $SL(2,\mathbb{R})$ is of order $4$. Such elements of order $2$, called "cross-caps", destroy the orientability of the surface under study. To find such elements in the corresponding group one must appeal, as in [@Seppala] (p. 70), to the broader notions of Klein surface and dianalytic structure, and the group can then be regarded as a Kleinian group ([@Zieschang] Theorem 3.2.8 p. 71, [@Zieschang1] Theorem 15.9 p. 35, [@Seppala] p. 89). But this case cannot occur for the Riemann surfaces studied here, whose covering is $\mathcal{H}$. Moreover we have: Let $\Gamma$ be a Fuchsian group defining a Riemann surface of finite type $\mathcal{M}=\mathcal{H}/\Gamma$ with signature $(g;n:\upsilon_1,\upsilon_2,...,\upsilon_n;m)$; then the group $\Gamma$ lifts to $SL(2,\mathbb{R})$. It even determines a unique principal group $G$, characterized by the fact that its generators have positive trace.
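The order-$2$ obstruction mentioned above can be verified directly on the example $f(z)=-1/z$; the following minimal sketch checks that the corresponding matrix has order $4$ in $SL(2,\mathbb{R})$ but order $2$ once matrices are identified up to sign, as in $PSL(2,\mathbb{R})$.

```python
import numpy as np

# The transformation f(z) = -1/z corresponds (up to sign) to this SL(2, R) matrix.
S = np.array([[0, -1],
              [1,  0]])
I = np.eye(2, dtype=int)

# In SL(2, R): S^2 = -I, which is not I, so S has order 4.
assert (S @ S == -I).all()
assert (S @ S @ S @ S == I).all()

def proj_equal(A, B):
    """Equality of classes in PSL(2, R): matrices agree up to sign."""
    return (A == B).all() or (A == -B).all()

# In PSL(2, R), S^2 = -I is identified with I: f has order 2.
assert proj_equal(S @ S, I)
```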
The group $\Gamma$ is isomorphic to the group $G$ defined by: 1/ Generators $A_1,B_1,...,A_g,B_g,E_1,...,E_r,P_{r+1},...,P_n,H_1,...,H_m$. 2/ Relations $$\prod_{i=1}^g[A_i,B_i]E_1...E_rP_{r+1}...P_nH_1...H_m=1,\;\;\forall i=1,...,r,\;\;E_i^{\upsilon_i}=\mathbf{1}_2.$$ The matrices $A_i$ and $B_i$ are hyperbolic. The elements $E_1,...,E_r$ are torsion elements in $G$. They are elliptic matrices ($0<tr(E_i)<2$) possessing a fixed point in $\mathcal{H}$. Around the fixed point of $E_i$ the action is locally by a rotation matrix. On the quotient Riemann surface this gives a ramification point of multiplicity $\upsilon_i$. The multiplicity of the corresponding cone point is related to the apex angle of that cone. The elements $P_{r+1},...,P_n$ are parabolic ($tr(P_i)=2$) possessing a fixed point on the boundary of $\mathcal{H}$. On the quotient Riemann surface such a point gives a puncture. The elements $H_1,...,H_m$ are hyperbolic ($2<tr(H_i)$) possessing a fixed geodesic in $\mathcal{H}$. On the quotient Riemann surface such a geodesic allows one to define a hole of which it is the boundary. One can fill this hole with a punctured disk without changing the topological support, only extending the Riemann surface under consideration. If this operation is carried out, the resulting puncture is the image of a point of the boundary of $\mathcal{H}$ around which one can travel along a closed geodesic invariant under the corresponding matrix $H_i$. The last two cases do not occur when dealing with a compact surface. In all cases, $\mathcal{M}\simeq \mathcal{H}/G$. Topologically, nothing distinguishes the terms $p_i$ and $h_j$ in the presentation of $\pi_1(\mathcal{M},*)$, whereas in $SL(2,\mathbb{R})$ the representation of this group brings something new, which corresponds to the conformal structure and is materialized in the values of the traces.
These observations also explain why no elliptic torus had to be considered in the preceding chapter.

### Poincaré's theorem

One finds in [@Katok1] (Ch. 4) a partial converse of what we have just seen. This is Poincaré's theorem, which states that if $g\geq 0$, $r\geq 0$, $\upsilon_i\geq 2$ (for $i=1,...,r$) are integers such that $$2g-2+\sum_{i=1}^r(1-\frac 1{\upsilon_i})>0,$$ then there exists a Fuchsian group $\Gamma$ with signature $(g;r:\upsilon_1,\upsilon_2,...,\upsilon_r;0)$. An explicit construction is available for such a Fuchsian group, said to be geometrically finite, that is, possessing a convex polygonal fundamental domain with $4g+2r$ vertices, bounded by finitely many sides carried by geodesics. This group admits a presentation with $2g+r$ generators and $r+1$ relations, with 1/ Generators $\overline{A}_1,\overline{B}_1,...,\overline{A}_g,\overline{B}_g,\overline{E}_1,...,\overline{E}_r.$ 2/ Relations $$\prod_{i=1}^g[\overline{A}_i,\overline{B}_i]\overline{E}_1...\overline{E}_r=1,\;\;\forall i=1,...,r,\;\;\overline{E}_i^{\upsilon_i}=\mathbf{1}_2.$$ The group $\Gamma$ contains no parabolic element. By construction its covolume is finite, and this group is cocompact. Conversely, for every group $\Gamma$ with these properties, the construction of a fundamental domain with the same characteristics in $\mathcal{H}$, associated with the surface $\mathcal{M}=\mathcal{H}/\Gamma$, is feasible by the method of [@Keen]. Depending on how the Fuchsian group is given, constructing a fundamental domain may prove more or less delicate. In the situation just described one knows generators from which to work. If the group is instead given by congruences in $PSL(2,\mathbb{Z})$, other methods exist, some of which are automated [@Verril].
For example, the fundamental domain of a congruence subgroup $\Gamma$ of level $n$ is contained in a fundamental domain of the group $\Gamma (n)$ identified in [@Kulkarni]. Some Fuchsian groups are not congruence groups ([@Lehner] p. 253). In [@Katok1] Poincaré's construction method is extended to a Fuchsian group of signature $(g;n:\upsilon_1,\upsilon_2,...,\upsilon_r,\infty ,...,\infty ;m)$. This makes it possible to construct a Fuchsian group of the first kind with parabolic elements $P_{r+1},...,P_n$. This case generalizes that of the parabolic conformal punctured tori, which have signature $(1;1:\infty ;1)$. There exist infinitely many such punctured tori that are pairwise non-conformally equivalent, whereas the construction of [@Katok1] provides only one. The same observation can be made for the other signatures, with different fundamental domains giving surfaces that are not conformally equivalent but are topologically identical. This means that Poincaré's construction can be generalized, for instance by no longer placing the center of the exhibited polygon at the center of the unit disk. Poincaré's theorem is further extended in the statement found in [@Beardon] (p. 268), asserting that there exists a Fuchsian group $\Gamma$ of finite type, non-elementary, with signature $(g;n:\upsilon_1,\upsilon_2,...,\upsilon_r,\upsilon_{r+1},...,\upsilon_n;m)$ if and only if the Euler-Poincaré characteristic satisfies the condition: $$-\chi (\Gamma )=2g-2+\sum_{i=1}^r(1-\frac 1{\upsilon_i})+(n-r)+m>0.$$ The value of this Euler-Poincaré characteristic is bounded below by the positive value $1/42$, attained for the Hurwitz group with three elliptic generators $\overline{E}_1,\overline{E}_2,\overline{E}_3$ such that $\overline{E}_1^2=\overline{E}_2^3=\overline{E}_3^7=1$. This observation yields the Hurwitz theorem ([@Farkas] p. 258), stating that the group $Aut(\mathcal{M})$ of conformal automorphisms of a compact Riemann surface $\mathcal{M}$ of genus $g\geq 2$ is finite, of order at most $42\times (2g-2)=84(g-1)$. This also gives Schwarz's theorem ([@Farkas] p. 258), saying that if $\mathcal{M}$ is of genus $g\geq 2$, then $Aut(\mathcal{M})$ is a finite group.

### Associated Coxeter groups

For a Fuchsian group $\Gamma$, assumed here for simplicity to have signature $(g;r:\upsilon_1,\upsilon_2,...,\upsilon_r;0)$, one can introduce ([@Katok1] p. 93) a hyperbolic polygon with $4g+2r$ oriented sides, whose vertices $s_j$ are indexed cyclically and whose vertex angles are all computable from the signature of $\Gamma$. One can then consider the group $\Gamma^{*}$ of all isometries of $\mathcal{H}$ leaving the sides of this polygon invariant. It is generated by the reflections $\sigma_j$ $(j=1,...,4g+2r)$ of $\mathcal{H}$ in the sides $s_js_{j+1}$ of the polygonal fundamental domain of $\Gamma$. Writing $(2\pi /2\nu_j)$ for the angle at the vertex $s_j$, one obtains ([@LaHarpe] p. 135) the relations $\sigma_j^2=1$ and $(\sigma_{j-1}\sigma_j)^{\nu_j}=1$. All the coefficients $\nu_j$ are easy to make explicit in terms of the signature of the group $\Gamma$ or of its principal group $G$. Since reflections reverse angles, this introduces a group of isometries of $\mathcal{H}$ that are not all direct: $$\Gamma^{*}=<\sigma_1,...,\sigma_{4g+2r}\mid \sigma_j^2=1,\;(\sigma_{j-1}\sigma_j)^{\nu_j}=1>.$$ This group acts properly on $\mathcal{H}$, meaning that for every compact subset $\mathcal{C}$ of $\mathcal{H}$ the set of elements $\gamma \in \Gamma^{*}$ such that $\gamma \mathcal{C}\cap \mathcal{C}\neq \emptyset$ is finite. The interior of the initial polygon constitutes a fundamental domain for this action.
The group $\Gamma^{*}$ is a Coxeter group [@Bourbaki1], which makes explicit the link with the theory of Tits buildings, here hyperbolic buildings ([@Ronan]). Inside $\Gamma^{*}$ one recovers $\Gamma$ as the index-2 subgroup of orientation-preserving transformations ([@Katok1] Theorem 3.5.4). As quotient this gives $\mathcal{H}/\Gamma$, a compact Riemann surface of genus $g$ onto which the polygon constructed above projects as a complex with $r+1$ vertices joined by $2g+r$ geodesics drawn on the surface under consideration. This complex allows the computation of the singular homology of the surface. It corresponds to a canonical dissection of the Riemann surface.

### Hyperbolic triangle groups

With $g=0$ and $n=r=3$, the preceding guarantees the existence of a Coxeter group $\mathbf{T}^{*}(\upsilon_1,\upsilon_2,\upsilon_3)$ definable with three reflections $\overline{R}_1,\overline{R}_2,\overline{R}_3$ in the sides of a geodesic triangle of $\mathcal{H}$, provided that $$\sum_{i=1}^3\frac 1{\upsilon_i}<1.$$ This group $\mathbf{T}^{*}(\upsilon_1,\upsilon_2,\upsilon_3)$, called a hyperbolic triangle group, has presentation $$<\overline{R}_1,\overline{R}_2,\overline{R}_3\mid \overline{R}_1^2=\overline{R}_2^2=\overline{R}_3^2=(\overline{R}_1\overline{R}_2)^{\upsilon_3}=(\overline{R}_2\overline{R}_3)^{\upsilon_1}=(\overline{R}_3\overline{R}_1)^{\upsilon_2}=1>.$$ It has a subgroup, the von Dyck group, which can be seen ([@Katok1] p. 99-102) as a Fuchsian group of index 2 with signature $(0;3:\upsilon_1,\upsilon_2,\upsilon_3;0)$: $$\begin{aligned} \mathbf{T}(\upsilon_1,\upsilon_2,\upsilon_3) &=&<\overline{E}_1,\overline{E}_2\mid \overline{E}_1{}^{\upsilon_1}=\overline{E}_2{}^{\upsilon_2}=(\overline{E}_1\overline{E}_2)^{\upsilon_3}=1> \\ &=&<\overline{E}_1,\overline{E}_2,\overline{E}_3\mid \overline{E}_1{}^{\upsilon_1}=\overline{E}_2{}^{\upsilon_2}=\overline{E}_3{}^{\upsilon_3}=\overline{E}_1\overline{E}_2\overline{E}_3=1>.\end{aligned}$$ Among the von Dyck groups one finds $\mathbf{T}(2,3,\infty )=PSL(2,\mathbb{Z})=\Gamma (1)$. Every triangle group is a quotient of the group used in the resolution of our equations, which can itself be regarded as a Coxeter group [@Charney] $$\mathbf{T}_3\cong \mathbf{T}^{*}(\infty ,\infty ,\infty )=\mathbf{C}_2*\mathbf{C}_2*\mathbf{C}_2.$$ In $\mathbf{T}_3$, the index-$2$ subgroup $\mathbf{F}_2\simeq [PSL(2,\mathbb{Z}),PSL(2,\mathbb{Z})]$ can be seen as the Fuchsian group underlying the classical Markoff theory. The von Dyck groups are spherical, Euclidean, or hyperbolic according to whether the following number is greater than, equal to, or less than 1: $$\frac 1{\upsilon_1}+\frac 1{\upsilon_2}+\frac 1{\upsilon_3}.$$ The von Dyck groups concretely provide the passage between the work developed in the present book and the theory of singularities [@Arnold2] [@Oka] [@Milnor3] [@BensonM] [@Hirzebruch0] [@Dimca] [@Lamotke] [@Laufer]. The reduction condition presented in [@Saito3] for regular weight systems and the associated surface singularities is comparable to the one seen in the preceding chapter for parabolic tori.
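The trichotomy above is easy to test mechanically; the sketch below (the helper names `defect` and `kind` are illustrative) classifies a few von Dyck groups and recovers the fact, behind the Hurwitz bound discussed earlier, that $(2,3,7)$ minimizes the positive defect $1-\sum 1/\upsilon_i$ at $1/42$.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def defect(p, q, r):
    """1 - (1/p + 1/q + 1/r): negative, zero or positive according to
    whether T(p, q, r) is spherical, Euclidean or hyperbolic."""
    return 1 - (Fraction(1, p) + Fraction(1, q) + Fraction(1, r))

def kind(p, q, r):
    d = defect(p, q, r)
    return "spherical" if d < 0 else "euclidean" if d == 0 else "hyperbolic"

assert kind(2, 3, 5) == "spherical"    # icosahedral case E(8)
assert kind(2, 3, 6) == "euclidean"
assert kind(2, 3, 7) == "hyperbolic"   # the Hurwitz triple

# Among hyperbolic triples with small entries, (2, 3, 7) minimizes the
# defect, at 1/42: this is the source of the bound 84(g - 1).
hyperbolic = [t for t in combinations_with_replacement(range(2, 50), 3)
              if defect(*t) > 0]
best = min(hyperbolic, key=lambda t: defect(*t))
assert best == (2, 3, 7) and defect(*best) == Fraction(1, 42)
```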
A regular weight system is a quadruple of positive integers $(t,x,y,z)$ such that $t>m=\max (x,y,z)$ and $$\frac{(q^t-q^x)(q^t-q^y)(q^t-q^z)}{(1-q^x)(1-q^y)(1-q^z)}\;\text{is a polynomial in }q\text{.}$$ To it one can associate a singularity at the origin of a surface defined by a polynomial $$\sum_{xi+yj+zk=t}a_{ijk}X^iY^jZ^k.$$ To such a system one associates a discrete subgroup acting on $\mathcal{H}$, $\mathbb{C}$, or $\mathcal{S}^2$. One finds in [@Saito3] how the link is made with Fuchsian groups and $K_3$ surfaces in the hyperbolic case where $t-x-y-z=1>0$, and in [@Nikulin] (p. 665) a mention, due to I. I. Piatetsky-Shapiro and I. R. Shafarevich, of the link between real $K_3$ surfaces and hyperbolic reflection groups. It is also indicated in [@Saito3] (p. 499 table 4) how the link is made with rational surfaces, Kleinian groups, and the A-D-E root systems in the spherical case where $t-x-y-z=-1<0$. One thus finds the finite subgroups of the group $SU(2)$ of unitary matrices of $SL(2,\mathbb{C})$, the universal cover of the group $SO(3)$ of rotations of Euclidean space. This gives the rotation groups $\mathbf{C}_{l+1}$, $\mathbf{D}_{l-2}$, $\mathbf{A}_4$, $\mathbf{S}_4$, $\mathbf{A}_5$, of the Platonic polyhedra (pyramid, bipyramid, tetrahedron, cube or octahedron, icosahedron or dodecahedron), with the simple groups associated via the McKay correspondence ([@Conway2] p. 297, [@Baez], [@Slodowy]) and the corresponding polynomials [@Arnold2] (Vol. 1 p. 139): $$\begin{array}{ccc} A(l)=\mathbf{T}(1,1,l+1) & \text{cyclic of order }l+1\geq 2 & XZ+Y^{l+1} \\ D(l)=\mathbf{T}(2,2,l-2) & \text{dihedral of order }4(l-2)\geq 8 & X^2Y+Y^{l-1}+Z^2 \\ E(6)=\mathbf{T}(2,3,3) & \text{binary tetrahedral }(\rightarrow Fi_{24}) & X^2+Y^3+Z^4 \\ E(7)=\mathbf{T}(2,3,4) & \text{binary octahedral }(\rightarrow B=F_{2+}) & X^2+Y^3+YZ^3 \\ E(8)=\mathbf{T}(2,3,5) & \text{binary icosahedral }(\rightarrow M) & X^2+Y^3+Z^5 \end{array}$$ One thus obtains the spherical von Dyck groups, which can be seen as finite subgroups $\Gamma$ of $PSL(2,\mathbb{C})$ acting discontinuously on $\mathcal{S}^2$ and defining the five regular polyhedra of the classical tilings of the sphere (see [@Berger] vol. 1 p. 44). They correspond to the simple, or Kleinian, singularities [@Klein1] [@Slodowy] [@Slodowy2] [@Arnold4] (p. 26), and to the Dynkin diagrams without double bonds [@Bourbaki1] (Ch. VI § 4 th. 3 p. 197) of the Coxeter groups associated with the resolution of these singularities. Already identified in the scholium of Proposition 18 of Book XIII of Euclid's Elements, and mentioned in Plato's Timaeus, they were put to use in 1621 in the Secret of the World by Johannes Kepler to justify the heliocentric system proposed in 1543 by N. Copernicus in his book of Revolutions. Although the surfaces $M^{s_1s_2}(b,\partial K,u)$ brought to light in the earlier chapters are rational and essentially without singularities, it is interesting to note that the possible types of isolated singularities on cubic surfaces are known. One finds in particular the elliptic cone point corresponding to the Dynkin diagram $E(6)$ and to the normal form of the Euclidean case $t-x-y-z=0$, which is the equation $XY^2-4Z^3+g_2X^2Y+g_3X^3$ of an elliptic surface ([@Friedman] p. 182).
The other possibilities [@Fischer1] correspond to rational double points (Kleinian singularities) and are given by $A(l)$ with $l=1,...,5$; $D(l)$ with $l=4,5$; and $E(6)$.

### Jacobian and theta functions

Restricting again to a surface $\mathcal{H}/\Gamma$ of signature $(g;r:\upsilon_1,\upsilon_2,...,\upsilon_r;0)$, the polygon bounding the fundamental domain just exhibited is called a canonical dissection of the surface. It allows the surface to be reconstructed by successive transformations of its fundamental domain. For a surface $\mathcal{M}$ of genus $g$, assumed compact here, one thus brings out $2g$ cycles $\alpha_1$, ..., $\alpha_{2g}$ with which the holomorphic differentials $\omega_1$, ..., $\omega_g$ of the surface give a period matrix $$\mathbf{\Omega }=\left[ \pi_{jk}\right] =\left[ \int_{\alpha_k}\omega_j\right] ,\;j=1,...,g,\;k=1,...,2g.$$ Its $2g$ column vectors $(\pi_{jk})_{k=1,...,2g}$ generate a discrete subgroup of periods $\Lambda$ of rank $2g$ in $\mathbb{C}^g$, defining the Jacobian $Jac(\mathcal{M})=\mathbb{C}^g/\Lambda$ of $\mathcal{M}$. One also has a canonical embedding generalizing the situation already encountered for elliptic curves: $$\kappa :u\in \mathcal{M}\longmapsto \kappa (u)=(\int_{u_0}^u\omega_j)_{j=1,...g}\in Jac(\mathcal{M}).$$ Each integral of this so-called Jacobi (or Kodaira) map is ill-defined by itself, since it depends on the path of integration. But the $g$-tuple, taken in $Jac(\mathcal{M})$, is well defined.
One can arrange that $\alpha_1=a_1$, ..., $\alpha_g=a_g$, $\alpha_{g+1}=b_1$, ..., $\alpha_{2g}=b_g$, with $\pi_{jk}=\delta_{jk}$ for $k=1,...,g$, and set, with a matrix $\mathbf{M}$: $$\mathbf{M}=\left[ \int_{b_k}\omega_j\right] ,\;j=1,...,g,\;k=1,...,g,$$ $$\Lambda =\mathbb{Z}^g\oplus \mathbf{M}\mathbb{Z}^g.$$ One checks that $\mathbf{M}={}^t\mathbf{M}$ and $\operatorname{Im}(\mathbf{M})>0$, and these two conditions characterize the Siegel upper half-space $\mathcal{H}_g$, on which the symplectic group $Sp(g,\mathbb{Z})$ acts naturally. The interest of this construction is that the Jacobian can itself be embedded, by means of a theta function, into a projective space $\mathbf{P}^n(\mathbb{C})$ whenever it admits a polarization. This is a Hermitian form $H$ defined on $\mathbb{C}^g\times \mathbb{C}^g$ such that $\Im(H)$ takes integer values on the lattice $\Lambda$. The Jacobian then becomes an algebraic group. This is a consequence of a result of Lefschetz ([@KumarMurty], [@Waldschmidt2] p. 192). This result embeds every compact Riemann surface naturally in such a projective space, concretely realizing the embedding given by the theorems of Chow or Kodaira ([@Griffiths] p. 167 or p. 181, [@Serre4] p. 29-30). This construction allows the automorphism group $Aut(\mathcal{M})$ to be represented by a natural monomorphism into the group $Sp(g,\mathbb{Z})$, which thus contains essential information ([@Farkas] p. 287). The corresponding theta function is written, for $u\in \mathbb{C}^g$ and $\mathbf{M}\in \mathcal{H}_g$, $$\theta (u,\mathbf{M})=\sum_{m\in \mathbb{Z}^g}\exp (\pi i(^tm)\mathbf{M}m+2\pi i(^tm)u).$$ The Jacobian $Jac(\mathcal{M})$ is an abelian variety ([@Jost], [@Arnaudies] vol. 3 §24.2) carrying a canonical polarization ([@Maurin] p. 311, [@Waldschmidt2] p. 206).
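For $g=1$ the theta series above can be summed numerically; the sketch below evaluates it at $u=0$, $\mathbf{M}=i$ and compares with the classical special value $\theta(0,i)=\pi^{1/4}/\Gamma(3/4)$ (the truncation bound `terms=30` is an arbitrary choice, ample given the Gaussian decay of the terms).

```python
import cmath
import math

def theta1(u, t, terms=30):
    """Genus-1 instance of the theta series with M = it (t > 0):
    theta(u, it) = sum over m in Z of exp(-pi m^2 t + 2 pi i m u)."""
    return sum(cmath.exp(complex(-math.pi * m * m * t, 2 * math.pi * m * u))
               for m in range(-terms, terms + 1))

# Classical special value: theta(0, i) = sum exp(-pi m^2) = pi^(1/4) / Gamma(3/4).
val = theta1(0, 1).real
assert abs(val - math.pi ** 0.25 / math.gamma(0.75)) < 1e-10
```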
One can characterize the abelian varieties that possess a polarization. They are projective algebraic varieties that can be endowed with an algebraic group law defined by homogeneous polynomials and two maps $\mathcal{M}\times \mathcal{M}\rightarrow \mathcal{M}$ and $\mathcal{M}\rightarrow \mathcal{M}$ written as rational functions with coefficients in the field $\mathcal{K}(\mathcal{M})$ of meromorphic functions defined on $\mathcal{M}$. A remarkable result due to T. Shioda is that Jacobian varieties are characterized by solitons, solutions of the Kadomtsev-Petviashvili equation of plasma theory ([@Shioda]). There also exist two-dimensional tori with no projective embedding ([@Shafarevich] p. 351-356), which therefore are not abelian varieties.

### Automorphic functions

One defines an automorphy factor $\mu$ associated with the Fuchsian group $\Gamma \subset Aut(\mathcal{H})$ by: $$\mu :\Gamma \times \mathcal{H}\longrightarrow \mathbb{C},$$ $$\forall \gamma_1,\gamma_2\in \Gamma ,\;\forall z\in \mathcal{H},\;\mu (\gamma_1\gamma_2,z)=\mu (\gamma_1,\gamma_2z)\mu (\gamma_2,z),$$ $$\forall \gamma \in \Gamma ,\;\;\mu (\gamma ,.)\text{ holomorphic and nonvanishing on }\mathcal{H}\text{.}$$ An automorphic function $f$ for the Fuchsian group $\Gamma$ with automorphy factor $\mu$ is a function defined on $\mathcal{H}$, often assumed meromorphic, such that $$\forall \gamma \in \Gamma ,\;\forall z\in \mathcal{H},\;f(\gamma z)=\mu (\gamma ,z)f(z).$$ An automorphic function is sometimes called modular. But the author prefers to reserve the latter word for a more precise meaning intended for the study of more general situations ([@Kac] p. 257). If $\mu$ is constant and equal to $1$, we simply say here that $f$ is a $\Gamma$-automorphic function.
Such a function, though defined on $\mathcal{H}$, is simply a function defined on $\mathcal{M}=\mathcal{H}/\Gamma$ composed with the canonical projection of $\mathcal{H}$ onto $\mathcal{H}/\Gamma$. Every automorphism $\gamma \in \Gamma \subset Aut(\mathcal{H})$ of the half-plane $\mathcal{H}$ may be regarded locally as a holomorphic function, allowing one to define $$\mu_\gamma =\frac{\partial \gamma }{\partial z},\;\;\mu_\gamma (z)^{-1}=\mu (\gamma ,z).$$ If $F$, defined on $\mathcal{H}$, is invariant under $\gamma \in \Gamma \subset Aut(\mathcal{H})\simeq PSL(2,\mathbb{R})$, consider the associated expression $$F(\gamma z)=F(\frac{az+b}{cz+d})=F(z).$$ When $F$ can be differentiated at $z$, one obtains a function $f=F^{\prime }$ which is $\Gamma$-automorphic and whose automorphy factor is given by: $$f(\gamma z)=f(\frac{az+b}{cz+d})=(cz+d)^2f(z)=\mu (\gamma ,z)f(z).$$ With higher-order derivatives, one must introduce the Schwarzian derivative ([@Ford] p. 99) to find other formulas of this type. One can show that the most general automorphy factors are written ([@Gunning] p. 19), with $2k$ a non-negative integer, as $\mu (\gamma ,z)=(cz+d)^{2k}$. This defines the $\Gamma$-automorphic functions of weight $2k$. These functions permit the definition of the differential forms of degree $k$, also called $k$-differentials or automorphic forms: $f\longrightarrow f(z)dz^k$. Such forms are said to be holomorphic if $f$ is holomorphic ([@Farkas] p. 51 and 87). They allow one to consider the $k$-th $\mathbb{C}$-cohomology space $H^k(\mathcal{M},\mathbb{C})$ as well as the graded commutative algebra ([@Farkas] p. 269) $$H^{*}(\mathcal{M},\mathbb{C})=\bigoplus_{k=0}^\infty H^k(\mathcal{M},\mathbb{C}).$$ They permit the study of the differential aspects of the surface $\mathcal{M}=\mathcal{H}/\Gamma$ and of its Hodge theory [@Lewis].
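The cocycle identity defining an automorphy factor can be checked directly for $\mu(\gamma,z)=(cz+d)^{2k}$; the sketch below (the helpers `act`, `mu` and `compose` are illustrative) verifies it numerically on a few elements of $SL(2,\mathbb{Z})$.

```python
import random

def act(g, z):
    """Moebius action of g = (a, b, c, d) on the upper half-plane."""
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

def mu(g, z, k=1):
    """Automorphy factor mu(g, z) = (cz + d)^{2k}."""
    a, b, c, d = g
    return (c * z + d) ** (2 * k)

def compose(g1, g2):
    """Matrix product, i.e. composition of the Moebius maps."""
    a1, b1, c1, d1 = g1
    a2, b2, c2, d2 = g2
    return (a1*a2 + b1*c2, a1*b2 + b1*d2, c1*a2 + d1*c2, c1*b2 + d1*d2)

# Check mu(g1 g2, z) = mu(g1, g2 z) mu(g2, z) on elements of SL(2, Z)
# at a random point z of H.
T = (1, 1, 0, 1)   # z -> z + 1
S = (0, -1, 1, 0)  # z -> -1/z
z = complex(random.uniform(-1, 1), random.uniform(0.5, 2))
for g1 in (T, S, compose(S, T)):
    for g2 in (T, S, compose(T, S)):
        lhs = mu(compose(g1, g2), z)
        rhs = mu(g1, act(g2, z)) * mu(g2, z)
        assert abs(lhs - rhs) < 1e-9
```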
There is also a link with representation theory, class field theory, and the Langlands program ([@Bump] [@Benson] [@Hochschild] [@RamMurty] [@Gelbart]). The $\Gamma$-automorphic functions of weight $2k$ that are holomorphic on $\mathcal{H}$ define in turn a $\mathbb{C}$-vector space $\mathbf{M}_k(\Gamma )$ and then, with this notation denoting the zero space when $k<0$, a graded algebra given by the direct sum ([@Serre1] p. 145) $$\mathbf{M}(\Gamma )=\bigoplus_{k=-\infty }^\infty \mathbf{M}_k(\Gamma ).$$ This algebra is related to the algebra of meromorphic functions $\mathcal{K}(\mathcal{M})$ mentioned above on the Riemann surface $\mathcal{M}=\mathcal{H}/\Gamma$. One finds in [@Dolgachev] (p. 75) a proof of the fact that if $\Gamma$ is a subgroup of finite index of $PSL(2,\mathbb{Z})$, the algebra $\mathbf{M}(\Gamma )$ is of finite type over $\mathbb{C}$, all the spaces $\mathbf{M}_k(\Gamma )$ being finite-dimensional. For the group $PSL(2,\mathbb{Z})$, one finds in the same reference, or in [@Serre1] (p. 145), the isomorphism of $\mathbf{M}(PSL(2,\mathbb{Z}))$ with the polynomial algebra $\mathbb{C}[X,Y]$. For a more general Fuchsian group $\Gamma$ such that $\mathcal{M}=\mathcal{H}/\Gamma$ is a compact surface, the field of fractions of $\mathbf{M}(\Gamma )$ is an extension $\mathcal{K}(\mathcal{M})$ of finite degree of the field of fractions $\mathbb{C}(X,Y)$ of $\mathbb{C}[X,Y]$. In practice this means that two $\Gamma$-automorphic functions with the same domain of definition are linked by an algebraic relation.
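The isomorphism $\mathbf{M}(PSL(2,\mathbb{Z}))\simeq \mathbb{C}[X,Y]$ is classically realized by the Eisenstein series $E_4$ and $E_6$; assuming their standard $q$-expansions, the sketch below checks to a few orders that the discriminant $(E_4^3-E_6^2)/1728$ agrees with the product formula $q\prod_{n\geq 1}(1-q^n)^{24}$.

```python
from math import comb

N = 12  # work with q-expansions truncated at order q^N

def sigma(n, k):
    """Divisor sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Product of truncated power series given as coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# Normalized Eisenstein series generating M(PSL(2, Z)).
E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

# Discriminant from the generators: Delta = (E4^3 - E6^2)/1728 ...
delta = [(x - y) // 1728 for x, y in zip(mul(mul(E4, E4), E4), mul(E6, E6))]

# ... agrees with the product formula Delta = q * prod_{n>=1} (1 - q^n)^24.
prod = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    for k in range(25):
        if n * k < N:
            factor[n * k] = (-1) ** k * comb(24, k)
    prod = mul(prod, factor)
assert delta == [0] + prod[:N - 1]
assert delta[1:6] == [1, -24, 252, -1472, 4830]  # Ramanujan tau(1..5)
```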
An important consequence ([@Ford] p. 163) is that every automorphic function $f(\tau )$ of a Fuchsian group whose fundamental domain is a region consisting of $k$ copies of the fundamental domain of $PSL(2,\mathbb{Z})$ satisfies, with a polynomial $\Phi$ of degree at most $k$, an algebraic relation $\Phi (f,J)=0$, where $J$ is the modular invariant.

Teichmüller theory generalizing Markoff theory
----------------------------------------------

The central problem of Teichmüller theory consists in describing the different conformal structures that exist on the same topological support $\mathcal{M}_{top}$ of a Riemann surface $\mathcal{M}$, assumed here connected and of finite type. The Poincaré group of the pointed support $\mathcal{M}_{top}$ is denoted $\pi_1(\mathcal{M}_{top},*)$. Classically, Teichmüller theory is presented with a whole differential apparatus. Yet it can be presented in a quasi-algebraic way when $\mathcal{H}$ is the conformal covering of the surface $\mathcal{M}$. This has made it possible to explain how it generalizes Markoff theory. The formalism developed for conformal punctured tori has thus been extended to every Fuchsian group of signature $s$, permitting a very global approach applicable to other Diophantine equations. This has led to new geometric questions, in the perspective of moving beyond surfaces to grasp more complicated objects to which the preceding methods can be generalized. Riemann domains seem particularly well suited to such a project, for reasons that will be explained. We now describe how these reflections were developed, referring to chapter 7 of [@Perrine9] for complements, as well as to [@Seppala] [@Harvey] [@Schneps] [@Krushkal1].
### Representations of the fundamental group

The different conformal structures on $\mathcal{M}_{top}$ are defined by the representations $\overline{\rho}$ of the group $\pi_1(\mathcal{M}_{top},*)$ in the group $PSL(2,\mathbb{R})$, constituting the deformation space $$\mathcal{R}=\mathcal{R}(\pi _1(\mathcal{M}_{top},*),PSL(2,\mathbb{R})).$$ By means of the notion of principal group, the computation of a deformation $\overline{\rho}$ of signature $s=(g;n:\upsilon _1,\upsilon _2,...,\upsilon _r,\upsilon _{r+1},...,\upsilon _n;m)$ can be carried out analytically with the associated representation $\rho :\pi _1(\mathcal{M}_{top},*)\rightarrow SL(2,\mathbb{R})$ such that $\overline{\rho }=P\circ \rho$. It suffices to write out the coefficients of the matrices that are the images under $\rho$ of generators satisfying the relations of a presentation of $\pi _1(\mathcal{M}_{top},*)$. The computation of the generators $\rho (a_1)=A_1$, $\rho (b_1)=B_1$, $...$, $\rho (a_g)=A_g$, $\rho (b_g)=B_g$, $\rho (e_1)=E_1$, $...$, $\rho (e_r)=E_r$, $\rho (p_{r+1})=P_{r+1}$, $...$, $\rho (p_n)=P_n$, $\rho (h_1)=H_1$, $...$, $\rho (h_m)=H_m$ requires $3(2g+n+m)$ real parameters, since each matrix has four entries and determinant $1$. Among these parameters one must take into account $3(r+1)$ equalities between real numbers coming from the relations linking the matrices, together with the $n-r$ equalities $tr(P_i)=2$. This amounts to a total of $n+2r+3$ constraints on these real parameters. Three further parameters can be eliminated by working up to an inner automorphism, that is, up to a conformal transformation of $\mathcal{H}$. This constructs a real variety $\mathcal{V}_s$ of dimension $6g-6-2r+2n+3m$, a part of which, defined by the constraints requiring the traces to exceed $2$, parametrizes the possible Riemann structures on the topological support of $\mathcal{M}_{top}$.
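The parameter count above can be checked mechanically. The following is a minimal sketch (function names are ours, not from the source) verifying that subtracting the $n+2r+3$ relation constraints and the $3$ inner-automorphism parameters from the $3(2g+n+m)$ matrix parameters yields the stated dimension $6g-6-2r+2n+3m$:

```python
# Parameter count for the variety V_s of deformations of signature
# s = (g; n : v_1,...,v_r, v_{r+1},...,v_n; m):
#   - 3(2g+n+m) real parameters (four entries per matrix, determinant 1),
#   - 3(r+1) constraints from the group relations,
#   - n-r trace constraints tr(P_i) = 2,
#   - 3 parameters absorbed by inner automorphisms of PSL(2,R).
def dim_Vs(g, n, r, m):
    parameters = 3 * (2 * g + n + m)
    constraints = 3 * (r + 1) + (n - r)   # = n + 2r + 3
    inner_automorphisms = 3
    return parameters - constraints - inner_automorphisms

# Closed form stated in the text.
def closed_form(g, n, r, m):
    return 6 * g - 6 - 2 * r + 2 * n + 3 * m

# Check agreement on a range of signatures.
for g in range(0, 4):
    for n in range(0, 4):
        for r in range(0, n + 1):
            for m in range(0, 3):
                assert dim_Vs(g, n, r, m) == closed_form(g, n, r, m)

print(dim_Vs(1, 1, 0, 0))  # parabolic punctured torus: dimension 2
```

The two examples from the text come out as expected: the parabolic punctured torus $(g,n,r,m)=(1,1,0,0)$ gives dimension $2$, and the hyperbolic punctured torus $(1,0,0,1)$ gives dimension $3$.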
Each point, denoted $\Pi (\overline{\rho })$, in this part of $\mathcal{V}_s$ corresponds to a conformal structure on $\mathcal{M}_{top}$. For instance, for parabolic punctured tori ($g=1$, $r=0$, $n=1$, $m=0$), the previous chapter exhibited the principal sheet of a variety $\mathcal{V}_{(1;1;0)}$ of dimension $2$ given by the Markoff equation. And for a hyperbolic punctured torus ($g=1$, $r=0$, $n=0$, $m=1$) one likewise finds part of a variety of dimension $3$. Other examples can be found in [@Keen1]. From the principal group $\rho (\pi _1(\mathcal{M}_{top},*))=G$ one then deduces the associated representation $\overline{\rho }=P\circ \rho$ and its image $\Gamma =PG=\overline{\rho }(\pi _1(\mathcal{M}_{top},*))$, which is a Fuchsian group of signature $s=(g;n:\upsilon _1,\upsilon _2,...,\upsilon _r,\upsilon _{r+1},...,\upsilon _n;m)$. In practice one may consider that to each point $\Pi (\overline{\rho })$ of $\mathcal{V}_s$ is attached the Fuchsian group $\Gamma$, a point which can be denoted with the matrices of the principal group $G$ of $\Gamma$, by analogy with what was developed in [@Perrine9]: $$\Pi (A_1,B_1,...,A_g,B_g,E_1,...,E_r,P_{r+1},...,P_n,H_1,...,H_m).$$ The computation just carried out does not determine the whole deformation space of $\pi _1(\mathcal{M}_{top},*)$, but only its subset $\mathcal{R}_s=\mathcal{R}_s(\pi _1(\mathcal{M}_{top},*),PSL(2,\mathbb{R}))$. To recover $\mathcal{R}$, these spaces must be gathered over all the signatures corresponding to the same topological type $(g,n+m)$ of $\pi _1(\mathcal{M}_{top},*)$. As the parabolic and hyperbolic punctured tori show, boundary phenomena appear between the spaces $\mathcal{R}_s$, corresponding to the quantum jumps that the passage from a puncture to a hole constitutes for conformal geometry.
For every $\overline{\rho }\in \mathcal{R}$ a conformal structure is given by considering: $$\mathcal{M}=\mathcal{H}/\overline{\rho }(\pi _1(\mathcal{M}_{top},*))=\mathcal{H}/\Gamma .$$ This is a Riemann surface which, according to the properties of $\overline{\rho }$, may have topological support $\mathcal{M}_{top}$ and one signature or another. Note that homotopy-equivalent topological spaces define isomorphic fundamental groups but need not be homeomorphic ([@Gramain] p. 16), whereas conversely the last equality singles out a model $\mathcal{M}_{top}$ among the homeomorphism classes within a single homotopy class.

### Equivalence of representations and reduction

If $Int(PSL(2,\mathbb{R}))$ is the group of inner automorphisms of $PSL(2,\mathbb{R})$, composition of morphisms gives a natural action of this group on $\mathcal{R}_s$. This makes it possible to hide various parameters tied to these inner automorphisms, hence to work up to conformal equivalence of $\mathcal{H}$. One thus defines ([@Seppala] p. 165) a quotient which is the moduli space of signature $s$: $$\mathcal{M}od(s)=\mathcal{R}_s(\pi _1(\mathcal{M}_{top},*),PSL(2,\mathbb{R}))/Int(PSL(2,\mathbb{R})).$$ Like the real variety $\mathcal{V}_s$ with which it is identified by the preceding computation, this space parametrizes the conformal structures of signature $s$ existing on the supporting topological surface $\mathcal{M}_{top}$. Given the way it was constructed, note that on $\mathcal{V}_s$ it is natural to consider the inner automorphisms defined by a matrix $D\in GL(2,\mathbb{R})$. This then brings in the orientation of $\mathcal{M}$ and yields a result analogous to [@Perrine9] (prop. 5.5.3 and prop. 6.5.3).
More precisely, one has the equivalence between the equality of $$\Pi (A_1,B_1,...,A_g,B_g,E_1,...,E_r,P_{r+1},...,P_n,H_1,...,H_m),$$ $$\Pi (A_1^{\prime },B_1^{\prime },...,A_g^{\prime },B_g^{\prime },E_1^{\prime },...,E_r^{\prime },P_{r+1}^{\prime },...,P_n^{\prime },H_1^{\prime },...,H_m^{\prime }),$$ and the existence of $D\in GL(2,\mathbb{R})$ such that $$A_1^{\prime }=DA_1D^{-1},...,E_1^{\prime }=DE_1D^{-1},...,P_{r+1}^{\prime }=DP_{r+1}D^{-1},...,H_m^{\prime }=DH_mD^{-1}.$$ The part of $\mathcal{V}_s$ exhibited above is invariant under the action of the inner automorphisms defined by the matrices $D\in GL(2,\mathbb{R})$. Computing all the possibilities for $D$ leads to considerations about quaternions, or the Clifford algebras that generalize them. Likewise, since the computations made use of representations $\rho :\pi _1(\mathcal{M}_{top},*)\rightarrow SL(2,\mathbb{R})$, that is, special group representations $\rho :\pi _1(\mathcal{M}_{top},*)\rightarrow GL(2,\mathbb{R})$, the results of that theory can be used in studying the situation under consideration, bearing in mind that in general $\pi _1(\mathcal{M}_{top},*)$ is infinite and non-commutative. This makes the link with the notion of character which, in the opposite direction, enters Teichmüller theory, for instance through the notion of Fricke character. The associated algebraic equation is recovered by the methods of [@Horowitz]. The link with binary quadratic forms can be recovered by generalizing the Frobenius-Schur theorem ([@Serre6] p. 121). It is also possible to let the automorphism group $Aut(\pi _1(\mathcal{M}_{top},*))$ act naturally on $\mathcal{R}_s$, which amounts to changing the system of generators of the group $\pi _1(\mathcal{M}_{top},*)$.
Since the induced action of an element of $Int(\pi _1(\mathcal{M}_{top},*))$ on $\mathcal{R}_s$ is the identity, it is enough to study the action on $\mathcal{R}_s$ of the mapping class group $$\Gamma _{\pi _1(\mathcal{M}_{top},*)}=Aut(\pi _1(\mathcal{M}_{top},*))/Int(\pi _1(\mathcal{M}_{top},*))=Out(\pi _1(\mathcal{M}_{top},*)).$$ This approach generalizes the reduction theory seen in the previous chapter. It defines the Teichmüller space: $$\mathcal{T}eich(s)=\mathcal{M}od(s)/\Gamma _{\pi _1(\mathcal{M}_{top},*)}.$$ It can be identified with the part of $\mathcal{V}_s$ exhibited above, which can now be interpreted as a fundamental domain for the action of the group $\Gamma _{\pi _1(\mathcal{M}_{top},*)}$ on the whole real variety $\mathcal{V}_s$.

### Arithmetic Fuchsian groups

The question arises whether, in what we have just seen, the group $PSL(2,\mathbb{R})$ can be replaced by $PSL(2,\mathbb{Z})$. The non-obvious answer is partially given in chapter 5 of [@Katok1]. It leads one to beware of the terminology of arithmetic Fuchsian group, which is also used in the context of Lie groups and differs from the notion considered here, which reduces to the condition $\Gamma \subset PSL(2,\mathbb{Z})$.

### Complements on group representations

#### Representation varieties

One may also wish to replace $PSL(2,\mathbb{R})$ by $PSL(2,\mathbb{C})$ and work with Kleinian groups rather than Fuchsian groups.
This leads to the notion of the representation variety of a fundamental group [@Lubotzky] [@Brumfiel] $$\rho \in \mathcal{R}(\pi _1(\mathcal{M}_{top},*),PSL(2,\mathbb{C}))\rightarrow (tr\rho (g_1),tr\rho (g_2),...,tr\rho (g_p))\in \mathbb{C}^p,$$ where $\rho :\pi _1(\mathcal{M}_{top},*)\rightarrow PSL(2,\mathbb{C})$ is a representation of $\pi _1(\mathcal{M}_{top},*)$ in $PSL(2,\mathbb{C})$, $p$ is the number of chosen generators of the group $\pi _1(\mathcal{M}_{top},*)$, and $tr$ is the trace in $PSL(2,\mathbb{C})$, to be distinguished from the trace of the corresponding matrix in $SL(2,\mathbb{C})$. The set of complex representations $\rho$ is $\mathcal{R}(\pi _1(\mathcal{M}_{top},*),PSL(2,\mathbb{C}))$. This new subject is itself linked to the study of the Teichmüller space, which it complexifies [@Seppala] (ch. 4). Algebraic relations between the traces are constructed naturally using the method of [@Horowitz] [@Procesi], whence varieties built around the Fricke characters in which the Teichmüller space can be represented [@Bers]. The method yields algebraic structures that also generalize Markoff theory [@Gonzales] [@Saito] [@Saito1] [@Luo]. The procedure can moreover be compared with the method used to show that compact Riemann surfaces are algebraic ([@Cohn5] (p. 120) [@Nag] (p. 98) [@Serre4] [@Ly]). The link can also be made with classical invariant theory ([@Hilbert4] [@Dixmier] [@Liu]), then with Hopf algebras, Galois theory and quantum groups ([@Bergman] [@Chase] (p. 52) [@Demidov] [@Guichardet]). There is also a deep link with the Heaviside calculus (also called umbral, symbolic, of Sylvester, of Boole, of Leibnitz..., [@Humbert] [@Rota] [@Rota1]), which gave rise to distributions by generalization of the Dirac function ([@Heaviside] [@Carson] [@Schwartz] [@Colombeau] [@Cartier]).
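The trace relations produced by the method of [@Horowitz] all rest on the basic $SL(2)$ identity $tr(AB)+tr(AB^{-1})=tr(A)\,tr(B)$, a consequence of the Cayley-Hamilton theorem. The following is a minimal numerical check of that standard identity (the helper names are ours, not from the source):

```python
import random

# 2x2 matrix helpers over the complex numbers.
def mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def tr(X):
    return X[0][0] + X[1][1]

def inv(X):
    # Inverse of a determinant-1 matrix: [[d,-b],[-c,a]].
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

def random_sl2():
    # Random matrix with determinant forced to 1: d = (1+bc)/a.
    a = complex(random.uniform(1, 2), random.uniform(0, 1))
    b = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    c = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    return [[a, b], [c, (1 + b * c) / a]]

# Check tr(AB) + tr(AB^{-1}) = tr(A) tr(B) on random pairs.
random.seed(0)
for _ in range(100):
    A, B = random_sl2(), random_sl2()
    lhs = tr(mul(A, B)) + tr(mul(A, inv(B)))
    assert abs(lhs - tr(A) * tr(B)) < 1e-9
print("identity verified")
```

Since $B+B^{-1}=tr(B)\,I$ in $SL(2)$, the identity follows by multiplying by $A$ on the left and taking traces; iterating it expresses the trace of any word in $A$, $B$ as a polynomial in $tr(A)$, $tr(B)$, $tr(AB)$.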
Deeper study of this subject leads to non-commutative differential calculus [@Demidov], to D-modules ([@Coutinho] [@Bertrand] (p. 14)), etc. It constitutes an essential perspective for future work.

#### Monodromy

A monodromy representation of a group $\Gamma =\pi _1(\mathcal{M}_{top},*)$ is any group homomorphism $$\rho :\pi _1(\mathcal{M}_{top},*)\longrightarrow GL(n,\mathbb{C}).$$ The image of $\rho$ is the monodromy group. These representations can be classified up to inner automorphisms of $GL(n,\mathbb{C})$ and arise in solving the Fuchsian differential equations ([@Yosida1] p. 75, [@Gray], [@Kuga]), which have the following form, where the $a_i$ are holomorphic, or meromorphic, in the domain considered: $$\frac{d^nf}{dz^n}+a_1(z)\frac{d^{n-1}f}{dz^{n-1}}+...+a_n(z)f=0.$$ $\bullet $ For the more general case where $n$ is not necessarily equal to $2$, the foregoing leads to the study of algebraic groups and to the differential Galois theory of Picard-Vessiot, Ritt, Kolchin, Pommaret, etc. We refer to [@Bertrand] [@Yosida2] for a deeper treatment of this subject. $\bullet $ For the case $n=2$ and $\pi _1(\mathcal{M}_{top},*)\simeq \mathbf{F}_2$ generated by $A$ and $B$, the monodromy representations are completely described in [@Yosida1] (p. 80).
Those which are irreducible, that is, without a proper invariant subspace, are characterized up to an inner automorphism of $GL(2,\mathbb{C})$ by expressions $$\rho (A)=\left[ \begin{array}{cc} \lambda _1 & 1 \\ 0 & \lambda _2 \end{array} \right] ,\;\rho (B)=\left[ \begin{array}{cc} \mu _1 & 0 \\ (\nu _1+\nu _2)-(\lambda _1\mu _1+\lambda _2\mu _2) & \mu _2 \end{array} \right] ,\;\lambda _i\mu _j\neq \nu _k.$$ They are determined uniquely by the three pairs $(\lambda _1,\lambda _2)$, $(\mu _1,\mu _2)$, $(\nu _1,\nu _2)$ of eigenvalues of $A$, $B$ and $AB$, provided these satisfy the stated constraints. For example, by diagonalizing the matrices $A_0$ and $B_0$ of the classical Markoff theory, one checks that the constraints are satisfied and that one has: $$\rho (A_0)=\left[ \begin{array}{cc} \frac{3-\sqrt{5}}2 & 1 \\ 0 & \frac{3+\sqrt{5}}2 \end{array} \right] ,\;\rho (B_0)=\left[ \begin{array}{cc} \frac{3-\sqrt{5}}2 & 0 \\ -4 & \frac{3+\sqrt{5}}2 \end{array} \right] .$$ In this case one can write out a solution of the Riemann-Hilbert problem, which consists in reconstructing from the monodromy representation a Fuchsian equation having $\rho$ as its monodromy representation. For this one uses [@Yosida] (th. 4.3.2 p. 85) to compute the associated Riemann scheme.
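These explicit matrices can be checked numerically: both have trace $3$ and determinant $1$ (eigenvalues $\frac{3\pm\sqrt{5}}{2}$), their product again has trace $3$, and the trace triple $(3,3,3)$ satisfies the Markoff-Fricke relation $x^2+y^2+z^2=xyz$, which forces the commutator to have trace $-2$, the condition for a parabolic puncture. A minimal sanity check (not part of the source):

```python
import math

# Explicit monodromy matrices for the classical Markoff case.
s = math.sqrt(5)
A0 = [[(3 - s) / 2, 1], [0, (3 + s) / 2]]
B0 = [[(3 - s) / 2, 0], [-4, (3 + s) / 2]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

x, y, z = tr(A0), tr(B0), tr(mul(A0, B0))
print(round(x, 6), round(y, 6), round(z, 6))       # the trace triple
print(round(x**2 + y**2 + z**2 - x * y * z, 6))    # Markoff-Fricke relation
```

Since $\rho(A_0)$ is upper triangular and $\rho(B_0)$ lower triangular, the eigenvalue pairs $(\lambda_1,\lambda_2)$ and $(\mu_1,\mu_2)$ can be read off the diagonals, in agreement with the normal form above.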
One then reconstructs a Fuchsian equation (a perturbed hypergeometric equation), which, with $\sigma _3+\tau _3=1$ and $\sigma _3+\sigma _3^{-1}=3$, is: $$x(1-x)\frac{d^2u}{dx^2}+(1-2x)\frac{du}{dx}-(\sigma _3\tau _3)u=\frac{1}{4\pi ^2x(1-x)}\log \left(\frac{3+\sqrt{5}}{2}\right)\log \left(\frac{3-\sqrt{5}}{2}\right)u.$$ It identifies a differential operator whose spectral analysis remains to be carried out and compared with the Markoff spectrum: $$L=D^2+\frac{(1-2x)}{x(1-x)}D-\frac{(\sigma _3\tau _3)4\pi ^2x(1-x)+\log \left(\frac{3+\sqrt{5}}{2}\right)\log \left(\frac{3-\sqrt{5}}{2}\right)}{4\pi ^2x^2(1-x)^2}.$$

### The classical presentation of Teichmüller theory

The aim of Teichmüller theory is to determine all the conformal structures on a given topological surface. Since a conformal structure defines an oriented two-dimensional differentiable structure, the problem can be split in two: first construct, on a topological structure, a differentiable structure or the unique Riemannian structure it defines; then construct on the latter a conformal structure. The second problem has a unique solution. The first is much more delicate, especially if the topological surface is not compact. Beyond the foregoing, a solution can be obtained by various other means, such as quasi-conformality. We give here some indications on this classical way of presenting Teichmüller theory.

#### Conformal equivalence classes

Two metrics $ds^2$ and $dt^2$ are said to be conformally equivalent if the identity map $Id:(\mathcal{M},dt^2)\longrightarrow (\mathcal{M},ds^2)$ is a conformal transformation of $\mathcal{M}$. Writing the metric $ds^2$ in the form $ds^2=\lambda \mid dz+\mu (z)d\overline{z}\mid ^2$, one sees that the conformal equivalence classes are parametrized by $\mu (z)$.
Their set $Conf(\mathcal{M})$ can be viewed as a quotient set $Conf(\mathcal{M})=Met(\mathcal{M})/C_{+}^\infty (\mathcal{M})$, where $Met(\mathcal{M})$ is the set of all possible metrics on $\mathcal{M}$, and $C_{+}^\infty (\mathcal{M})$ the multiplicative group of positive, nowhere-vanishing, differentiable real functions $\lambda$ defined on $\mathcal{M}$.

#### Moduli space (or space of diffeomorphism classes)

Two metrics $ds^2$ and $dt^2$ on a topological surface $\mathcal{M}$ are said to be diffeomorphically equivalent if and only if there is an orientation-preserving conformal diffeomorphism $f:(\mathcal{M},dt^2)\longrightarrow (\mathcal{M},ds^2)$. With $ds^2=\lambda \mid dz+\mu (z)d\overline{z}\mid ^2$ one sees that the diffeomorphism classes corresponding to a single conformal equivalence class determined by $\mu (z)$ are parametrized by $\lambda$. Let $Diff_{+}(\mathcal{M})$ be the group of diffeomorphisms of $\mathcal{M}$ to $\mathcal{M}$. The diffeomorphism classes define $\mathcal{M}od(\mathcal{M})=Met(\mathcal{M})/Diff_{+}(\mathcal{M})$, the moduli space of the topological surface $\mathcal{M}$. There is a canonical surjection from $\mathcal{M}od(\mathcal{M})$ to $Conf(\mathcal{M})$.

#### Teichmüller space

Between $C_{+}^\infty (\mathcal{M})$ and $Diff_{+}(\mathcal{M})$ lies the group $Diff_0(\mathcal{M})$ of all diffeomorphisms isotopic to the identity. Two metrics $ds^2$ and $dt^2$ on the topological surface $\mathcal{M}$ are said to be strongly equivalent if and only if there exists $f\in Diff_0(\mathcal{M})$ such that $f:(\mathcal{M},dt^2)\longrightarrow (\mathcal{M},ds^2)$ is conformal. One then calls $\mathcal{T}eich(\mathcal{M})=Met(\mathcal{M})/Diff_0(\mathcal{M})$ the Teichmüller space.
In [@Nash2] (p. 150) one finds a clarification of the validity of this definition, which is adequate only for certain surfaces, and which, to be valid, requires $Met(\mathcal{M})$ to contain only the metrics for which the Gaussian curvature of $\mathcal{M}$ is constant. The problematic surfaces are those whose universal conformal covering is the Riemann sphere $\mathcal{S}^2$ or $\mathbb{C}$. The other cases, with covering $\mathcal{H}$, pose no difficulty. In [@Perrine9] various Teichmüller spaces were given, showing that punctures and holes do not play the same role on a Riemann surface.

#### The Teichmüller group of mapping classes

Since $Diff_0(\mathcal{M})$ is a normal subgroup of $Diff_{+}(\mathcal{M})$, one can also define the Teichmüller group (sometimes called modular), also known as the mapping class group, $\Gamma _{\mathcal{M}}=Diff_{+}(\mathcal{M})/Diff_0(\mathcal{M})$. The group $\Gamma _{\mathcal{M}}$ is a discrete group, interpretable as the group of connected components of the group $Diff_{+}(\mathcal{M})$. It is generated by Dehn twists ([@Nash2] p. 157). The Dehn twists generate the group $\Gamma _{\mathcal{M}}$, but in general they do not constitute a minimal set of generators. The important ones are those that are not homotopic to the identity, for instance because they go around a handle of the surface or a puncture. The group $\Gamma _{\mathcal{M}}$ is isomorphic to a quotient of a group of automorphisms of the fundamental group ([@Ivanov1] (p. 17), [@Zieschang1] (ch. 2)), here denoted $\pi _1(\mathcal{M},*)$: $$\Gamma _{\mathcal{M}}\simeq Aut_{*}(\pi _1(\mathcal{M},*))/Int(\pi _1(\mathcal{M},*))=Out_{*}(\pi _1(\mathcal{M},*)),$$ where $Int(\pi _1(\mathcal{M},*))$ is the group of inner automorphisms of $\pi _1(\mathcal{M},*)$, and $Aut_{*}(\pi _1(\mathcal{M},*))$ is the group of automorphisms of $\pi _1(\mathcal{M},*)$ induced by a homeomorphism of $\mathcal{M}$. The group $Aut_{*}(\pi _1(\mathcal{M},*))$ is contained in the group of all automorphisms $Aut(\pi _1(\mathcal{M},*))$. This introduces the larger group $\Gamma _{\pi _1(\mathcal{M}_{top},*)}=Out(\pi _1(\mathcal{M},*))$, of which $\Gamma _{\mathcal{M}}$ is a subgroup. In [@Perrine9] we gathered a whole set of results, known for fundamental groups and mapping class groups but scattered in the literature on this theme [@Birman] [@Keen2] [@Ivanov2] [@Dehn] [@Wajnryb] [@Gervais]. In various cases the equality $\Gamma _{\mathcal{M}}=\Gamma _{\pi _1(\mathcal{M}_{top},*)}$ is certain, for instance when the Dehn-Nielsen theorem applies, as it does for compact surfaces [@Zieschang] (p. 194). This theorem makes it possible to spell out the link with the presentation of Teichmüller theory by representations made above. The manuscript of Fenchel and Nielsen [@Nielsen1] moreover writes out the homeomorphism induced by any given automorphism in $Aut(\pi _1(\mathcal{M},*))$.

#### Link between Teichmüller space and moduli space

Several quotients have been defined with $\mathcal{M}$: the moduli space $\mathcal{M}od(\mathcal{M})=Met(\mathcal{M})/Diff_{+}(\mathcal{M})$, the Teichmüller space $\mathcal{T}eich(\mathcal{M})=Met(\mathcal{M})/Diff_0(\mathcal{M})$, and the mapping class group $\Gamma _{\mathcal{M}}=Diff_{+}(\mathcal{M})/Diff_0(\mathcal{M})$.
Comparing their definitions exhibits the moduli space as a quotient of the Teichmüller space by the discrete mapping class group, acting on that space properly discontinuously ([@Schneps] p. 12): $$\mathcal{M}od(\mathcal{M})\simeq \mathcal{T}eich(\mathcal{M})/\Gamma _{\mathcal{M}}.$$ Thus $\mathcal{T}eich(\mathcal{M})$ can be considered as a ramified covering above the moduli space $\mathcal{M}od(\mathcal{M})$. This configuration is comparable to that of the Fuchsian groups acting on the covering of Riemann surfaces. One avenue for developing its study emerges from the comparison between Teichmüller spaces and Riemann surfaces with conformal covering $\mathcal{H}$, since: $\bullet $ Teichmüller space has a topological structure ([@Schneps] p. 10). $\bullet $ It carries a real-analytic structure [@Abikoff1]. $\bullet $ It is a component of a real affine variety defined by polynomials with rational coefficients ([@Seppala] p. 175). $\bullet $ It carries a natural metric, the Weil-Petersson metric ([@Nash2] p. 157). $\bullet $ A natural complex-analytic structure can be constructed on it [@Nag] [@Earle]. $\bullet $ It is a Kähler manifold of negative curvature ([@Nash2] p. 157, [@Ahlfors]). $\bullet $ It has the structure of a Stein space ([@Imayoshi] p. 171, [@Bers1]).

### Compactification of Teichmüller space

Being in general of dimension greater than 2, Teichmüller spaces can be viewed as generalizations of Riemann surfaces. In many cases, topological models of Teichmüller spaces are available ([@Nash2] (p. 153), [@Imayoshi] (p. 9), [@Nag] (p. 111), [@Schneps] (p. 18)). Examples of Teichmüller spaces described by algebraic equations are already available ([@Keen2] p. 1206, relations 4-1 and 4-2).
They make it possible to envisage the existence of other Diophantine equations whose resolution resembles that of Markoff and is intrinsically tied to a geometric structure. An already known example of this type is given by [@Baragar]. But what we have just seen offers very many other possibilities. This point is confirmed by the fact that every Stein manifold is biholomorphically equivalent to a complex-analytic submanifold of $\mathbb{C}^n$ for some integer $n$ ([@LaurentThiebaut] p. 180, [@Kaup] p. 269). By compactifying such a manifold, one makes the link with algebraic geometry thanks to Chow's theorem ([@Serre4] p. 29-30). Note that the compactification of a Riemann surface such as $\mathcal{H}$ may fail to be a Riemann surface. Between hyperbolic and parabolic conformal punctured tori we observed some of the phenomena involved in the compactification of Teichmüller spaces. The study of this compactification is one of the perspectives opened by W. P. Thurston [@Fathi]. It takes on a particular significance here, in the spirit of [@Serre4], for it leads conversely to the idea of regarding every Diophantine equation one seeks to solve as given by such a process. Compact simply connected spaces play, in higher dimensions, a role equivalent to that of the Riemann sphere $\mathcal{S}^2$. Compact simply connected complex-analytic manifolds are homeomorphic to spheres ([@Massey] p. 142). For the record, the Poincaré conjecture, which transposes this last result to real 3-dimensional manifolds, remains open, while it has been solved in higher dimensions [@Smale].

### Stein spaces and Riemann domains

A direct study of Stein spaces $\mathfrak{X}$, modeled on that of Riemann surfaces, is a useful avenue for deepening Teichmüller theory.
These spaces are interesting in that they have enough global holomorphic functions to separate their points. Denoting by $\mathcal{O}(\mathfrak{X})$ the unital $\mathbb{C}$-algebra of holomorphic functions from $\mathfrak{X}$ to $\mathbb{C}$, one obtains a topological algebra which is a Fréchet subalgebra of the $\mathbb{C}$-algebra $\mathcal{C}(\mathfrak{X})$ of continuous functions from $\mathfrak{X}$ to $\mathbb{C}$. Every continuous character of the algebra, $\chi :\mathcal{O}(\mathfrak{X})\rightarrow \mathbb{C}$, is defined by a point $x\in \mathfrak{X}$ when the latter space is finite-dimensional: $\mathfrak{X}\subset \mathbb{C}^n$. And the map $\chi \in X(\mathcal{O}(\mathfrak{X}))\rightarrow x\in \mathfrak{X}$ is a homeomorphism ([@Kaup] p. 268). This property is characteristic of Stein spaces ([@Guichardet0] p. 72). Open Riemann surfaces are Stein spaces ([@Kaup] p. 224), as are the complex spaces containing only finitely many points. But compact Riemann surfaces are not Stein spaces ([@Guichardet0] p. 87). This shows clearly that Stein spaces are only a partial generalization of Riemann surfaces, even though they include the Teichmüller spaces. A good definition encompassing Riemann surfaces and Teichmüller spaces in a common formalism seems rather to be that of a Riemann domain ([@Kaup] (p. 38 and p. 96) [@Jarnicki] [@Grauert]). It corresponds to the simply connected domains of holomorphy, characterized by the existence of $f\in \mathcal{O}(\mathfrak{X})$ not holomorphically extendable to any point lying outside $\mathfrak{X}\subset \mathbb{C}^n$. Several other avenues have appeared for deepening the preceding reflections on Markoff theory: $\bullet $ The study of higher-dimensional Fuchsian groups, in the spirit of [@Apanasov] [@Apanasov1] or [@Ratcliffe].
$\bullet $ The Galois theory of finite extensions of fields of fractions $\mathbb{C}(X_1,...,X_n)$. It would be useful to understand whether it is linked to polylogarithms ([@Waldschmidt4] [@Lewin] [@Cathelineau]) and how one can build a Galois theory for partial differential equations, possibly linked to the theory of generalized hypergeometric functions [@Opdam]. $\bullet $ The combinatorial theory of train tracks as presented in [@Mosher1] [@Penner] [@Mosher].

Coding of geodesics
-------------------

One consequence of Teichmüller theory is that the mapping class group has a structure that can be described just like that of the fundamental group. Consequences follow for the coding of the geodesics of a Riemann surface, which we now discuss, together with the links with continued fractions.

### Decomposition of the mapping class group

The mapping class group $\Gamma _{\mathcal{M}}$ decomposes using the two group operations of amalgamated sum and HNN extension ([@Serre] [@Bass] [@Cohen] [@LaHarpe] (III 14) [@Hausmann]): for every Riemann surface $\mathcal{M}$ of conformal type $$(g,n,m)\notin \{(0,0,0),(0,1,0),(0,2,0),(1,0,0)\},$$ that is, having $\mathcal{H}$ as conformal covering, the mapping class group is simply decomposable. To every simply decomposable group one associates a decomposition graph describing all the necessary components and synthesizing all the information needed to combine them. The mapping class group decomposes because the Riemann surface $\mathcal{M}$ decomposes by plumbing into pairs of pants ([@Harvey] (p. 312), [@Bedford] (article by C. Series), [@Ratcliffe] (p. 408), [@Seppala] (p. 117)).
The proof proceeds by going back to the Fuchsian group defining the surface $\mathcal{M}$, which also has the property of being simply decomposable ([@Harvey] p. 312). One is thus reduced to an algebra problem: a simply decomposable group $G$ whose quotient $Out(G)=Aut(G)/Int(G)$ is studied. To conclude, the methods of [@Pietrowski] are used. This approach holds for the fundamental group as for the mapping class group, and is developed in [@Vogtmann]. By considering geodesics of $\mathcal{M}$ whose lengths correspond to the moduli [@Imayoshi], and by attaching a value in a group $\mathbb{Z}/2\mathbb{Z}$ corresponding to the direction in which the geodesic is traversed, together with a letter corresponding to an element of the group $\pi _1(\mathcal{M},*)$ associated with this geodesic, the decomposition graph makes it possible to reconstruct the fundamental group $\pi _1(\mathcal{M},*)$, then $Out(\pi _1(\mathcal{M},*))$, and finally $\Gamma _{\pi _1(\mathcal{M}_{top},*)}$.

### Coding of geodesics

There is a body of work based on continued fractions for coding the closed geodesics of punctured tori [@Series1], whose extension to Riemann surfaces $\mathcal{M}$ more complicated than punctured tori is not yet worked out [@Schmutz2], despite the great interest of this question. The author has therefore looked into this subject, seeking to understand how one should proceed to obtain a good generalization and new results. The essential point that emerges is that the fundamental group $\pi _1(\mathcal{M},*)$ and the first singular homology group $H_1(\mathcal{M},\mathbb{Z})$ of $\mathcal{M}$ contain in many cases the essential information, if only because every class of these groups then contains a geodesic.
Restricting at first to a Fuchsian group of signature $(g;r:\upsilon _1,\upsilon _2,...,\upsilon _r;0)$, one can coherently orient the edges of the polygon defined in [@Katok1] [@Keen] to describe a fundamental domain of this Fuchsian group. This singles out directions of traversal on the geodesic loops of the Riemann surface $\mathcal{M}$ under consideration. From there, every other oriented geodesic of the surface $\mathcal{H}/\Gamma =\mathcal{M}$ can be coded by the method of Morse and Koebe [@Series1]. Each time that, progressing along the geodesic, one crosses a geodesic forming a side of the canonical dissection, one writes down the corresponding letter with a power $+1$ or $-1$, which is the corresponding intersection number. This sign convention uses the orientation of the surface $\mathcal{M}$, as described in [@Waldschmidt2] (p. 105). This makes it possible to associate with every geodesic a word written as a doubly infinite sequence of letters to the power $\pm 1$ taken from the set $$\{\overline{A}_1,\overline{B}_1,...,\overline{A}_g,\overline{B}_g,\overline{E}_1,...,\overline{E}_r\}.$$ Not all sequences are possible. The existence of infinite ones shows that the sequences associated with geodesics do not belong to the group $\Gamma$, whose terms are expressed only as finite words in the same letters. The group $\Gamma$ is therefore too small to describe all the geodesics of the surface, and to reach this goal one must contemplate appealing to categorical operations other than the simple free product of groups. Nevertheless, closed geodesics correspond to periodic infinite words whose period can be coded with the preceding letters, designating elements of $\Gamma \simeq \pi _1(\mathcal{M},*)$. From what is known about punctured tori, not every finite period of this type codes such a closed geodesic.
On the other hand, one knows how to recognize simple closed geodesics, that is, those that do not intersect themselves, in most cases [@Series1] [@Series2]. By the preceding considerations, this result extends to the more general case of a Riemann surface $\mathcal{M}$ of signature $s$. This approach is linked to the homotopy and homology groups, at least in the compact case, where Hilbert's theorem states that every free homotopy class of closed loops of $\mathcal{M}$ contains a closed geodesic, and that any two points of $\mathcal{M}$ can be joined by a geodesic in any given homotopy class ([@Maurin] p. 390). Questions remain there to be explored further. One sees, for example, from what precedes that a geodesic can be described by a doubly infinite sequence of such letters (the analogy with continued fractions is obvious!), and that a diffeomorphism of $\mathcal{M}$ transforms this geodesic into another one codable in the same way with these letters. Such a diffeomorphism acts like a pseudorandom generator (see [@Cusick3] [@Rueppel] [@Sangjing]). It is known that it may fail to be an isometry ([@Berger1] p. 429), hence a fortiori fail to be a conformal automorphism of $\mathcal{M}$. There are other methods for coding geodesics, some more directly oriented toward continued fractions [@Katok] [@Arnoux]. The latter can be brought in by decomposing all the matrices $A_1,B_1,...,A_g,B_g,E_1,...E_r$ involved into products of matrices of the form $$\left[ \begin{array}{cc} a & 1 \\ 1 & 0 \end{array} \right] ^{\pm 1},\;a\in \mathbb{N}\backslash \{0\}.$$ One finds in [@Lehner] (p. 334) and [@Haas] a brief discussion of the Diophantine approximation problems tied to this type of situation. To pursue them one must first extend the above to signatures other than those singled out here, which does not seem insurmountable.
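These matrices $\left[\begin{smallmatrix} a & 1 \\ 1 & 0 \end{smallmatrix}\right]$ are exactly the steps of a continued-fraction expansion: for $p/q=[a_0;a_1,...,a_n]$, the product of the corresponding matrices has first column $(p_n,q_n)$. A minimal sketch (the fraction $43/19$ is an arbitrary example, not from the source):

```python
# Decompose 43/19 into its continued fraction and rebuild it as a product
# of matrices [[a,1],[1,0]]: the first column of the product recovers (43, 19).

def continued_fraction(p, q):
    cf = []
    while q:
        cf.append(p // q)
        p, q = q, p % q
    return cf

def mat_mul(m, n):
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

cf = continued_fraction(43, 19)          # [2, 3, 1, 4]
prod = [[1, 0], [0, 1]]
for a in cf:
    prod = mat_mul(prod, [[a, 1], [1, 0]])
print(cf, prod)                          # [2, 3, 1, 4] [[43, 9], [19, 4]]
```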
The author has some work in progress on this theme, notably to understand the link with Weierstrass points [@Leroy].

### Symbolic dynamics

Symbolic dynamics, and its deterministic-chaos variant, consists in studying this situation by pursuing the founding work of Hadamard ([@Dahan] p. 396, [@Bedford], [@Ruelle], [@Lind], [@Adler], [@Adler1], [@SchmidtK]). Dynamical systems satisfying Anosov's Axiom A enter this context, with the remarkable fact that hyperbolic Riemann surfaces all have a geodesic flow with this property ([@Anosov] for the compact case, and [@Ruelle] (p. 171) for an extension to the non-compact case). This makes it possible to classify homeomorphisms between Riemann surfaces, with an important result due to W. Thurston (cited in [@Perrine2a] or [@Mosher]).

### Ergodic approach

This theme of study connects with subjects as important as thermodynamics, ergodic theory and information [@Ruelle1] [@Billingsley] [@Alseda]; certain zeta functions [@Parry] [@Terras2]; the counting of prime numbers and its analogy with the behavior of certain geodesics [@Parry1] [@Baladi] [@Bowen] [@Watkins] [@Kotani] [@Hurt]; the algebra of quadratic fields and the evaluation of their class numbers [@Sarnak] [@Vivaldi]; the thermodynamic interpretation of the Mahler measure of certain polynomials ([@SchmidtK] paragraph 5.18); Hamiltonian mechanics, since the geodesic flow is a Hamiltonian system on the symplectic manifold $\mathcal{D}_2$ of lines [@Audin]; the KAM theorem on invariant tori and small divisors [@Arnold] [@Dodson1] [@Yoccoz] [@Herman]; holomorphic dynamics and the universal fractal objects it constructs [@Sullivan] [@Yoccoz1]; the spectral analysis of certain operators [@Moser] [@Ruelle2] [@Connes]; limit cycles and the Stokes phenomenon (Hilbert's 16th problem); etc.
Ubiquity of the Dedekind eta function
-------------------------------------

A number of classical results in the theory of information coding, such as the MacWilliams formula [@Macwilliams], are linked to Riemann surfaces. This theme has been pursued, given that the author's research on Markoff theory began with a coding-related concern. This made it possible to identify a rather remarkable link with the Dedekind eta function, which, as we have seen, gives rise to the Diophantine equations generalizing that of Markoff theory. It has also been possible to make precise how most of the usual transcendental functions can be written in terms of the Dedekind eta function, which therefore plays a fundamental role.

### Theta functions

The theta functions defined by a lattice $\Gamma \subset \mathbb{R}^n$ equipped with its natural scalar product are, with $\tau \in \mathcal{H}$ and $q=\exp (2i\pi \tau )$, $$\theta _\Gamma (\tau )=\sum_{m\in \Gamma }\exp (i\pi \tau <m,m>)=\sum_{m\in \Gamma }q^{\frac 12<m,m>}=\sum_{r=0}^\infty a_rq^r.$$ They are the generating functions of the numbers $a_r=Card\{m\in \Gamma \mid <m,m>=2r\}$. They count the points of the lattice $\Gamma $ lying in a sphere of radius $\sqrt{2r}$ centered at the origin. In other words, the coefficients of the Fourier series expansion of these functions give the number of representations of an integer by a positive definite quadratic form. The most general formulas in this domain were given by A. Malyshev ([@Iwaniec] chapter 11). The theta functions defined by an even unimodular lattice $\Gamma \subset \mathbb{R}^n$, identical to its dual, give classical examples ([@Serre1] p. 174) of automorphic functions of even weight $(n/2)$ for the group $PSL(2,\mathbb{Z})$. This is a consequence of the Poisson formula applied to the real function $\Theta (t)=\theta _\Gamma (it)$, that is, of the Jacobi formula ([@Moll] p. 149).
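As an illustration of this counting interpretation of the theta coefficients (an example chosen here, not from the source), one can enumerate the representations of small integers by the quadratic form $m_1^2+m_2^2$ on the lattice $\mathbb{Z}^2$:

```python
# r(n) = #{m in Z^2 : <m, m> = n}: the number of representations of n as a
# sum of two squares, i.e. the Fourier coefficients of the theta function of
# the lattice Z^2.  Brute-force count inside a box large enough for small n.

def reps_two_squares(n, bound=60):
    return sum(1 for x in range(-bound, bound + 1)
                 for y in range(-bound, bound + 1)
                 if x * x + y * y == n)

print([reps_two_squares(n) for n in range(6)])  # [1, 4, 4, 0, 4, 8]
```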
This Poisson formula has a very deep link with the law of quadratic reciprocity [@Berg].

### Link with information coding

The Poisson formula can itself be regarded as a trace formula ([@Terras] ch. 1.3). It yields the MacWilliams formula for the weight polynomials of error-correcting codes [@Macwilliams]. The introduction of theta functions makes it possible to interpret this last result [@Broue] [@Broue1] [@Milnor3] [@BensonM]. It also allows one to translate the relations between certain codes important for applications and certain lattices. For example, the extended Golay code corresponds to the Leech lattice, which is the unique even unimodular lattice of $\mathbb{R}^{24}$ without roots, by a result of J. H. Conway ([@Ebeling] p. 105). This gives the simple group $M_{24}$, the Mathieu group, whose simplicity can be understood through the Galois approach to the corresponding Riemann surface. Likewise, by a theorem of Gleason ([@Ebeling] p. 69), the weight polynomial $W_C(X,Y)$ of every doubly even self-dual code $C\subset \mathbb{F}_2^n$ can be written as a polynomial in $\varphi $ and $\xi $, where $$W_{\widetilde{H}}(X,Y)=X^8+14X^4Y^4+Y^8=\varphi ,$$ the weight polynomial of the extended Hamming code $\widetilde{H}\subset \mathbb{F}_2^8$, and $$W_{\widetilde{G}}(X,Y)=(X^8+14X^4Y^4+Y^8)^3-42X^4Y^4(X^4-Y^4)^4=W_{\widetilde{H}}(X,Y)^3-42\xi ,$$ the weight polynomial of the extended Golay code $\widetilde{G}\subset \mathbb{F}_2^{24}$. For a deeper study of the relation between these two codes and their link with finite geometries such as Steiner systems, we refer to [@Assmus] (§7.11 p. 284).
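The weight polynomial $W_{\widetilde{H}}$ quoted above can be recovered by brute-force enumeration of the $16$ codewords of the extended Hamming code; the generator matrix below is one standard choice (an assumption of this sketch, not taken from the source):

```python
# Weight distribution of the extended [8,4] Hamming code: enumerating its 16
# codewords recovers the coefficients 1, 14, 1 of the weight polynomial
# X^8 + 14 X^4 Y^4 + Y^8 quoted in the text.
from itertools import product
from collections import Counter

G = [[1, 0, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]

weights = Counter()
for msg in product([0, 1], repeat=4):
    word = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    weights[sum(word)] += 1

print(dict(sorted(weights.items())))  # {0: 1, 4: 14, 8: 1}
```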
One interest for our subject of the last expression, that of $W_{\widetilde{G}}$, is that, writing $q=\exp (2i\pi \tau )$ and $$A=A(\tau )=\sum_{n\in \sqrt{2}\mathbb{Z}}q^{\frac 12n.n}=\sum_{n\in 2\mathbb{Z}}q^{\frac 14n.n},\;B=B(\tau )=\sum_{n\in 2\mathbb{Z}+1}q^{\frac 14n.n},$$ one recovers the Dedekind eta function ([@Ebeling] p. 67): $$A^4B^4(A^4-B^4)^4=16q\prod_{n=1}^\infty (1-q^n)^{24}=16\eta (\tau )^{24}.$$ This led the author to deepen the study of lattices ([@Martinet] [@Conway2]) as well as of hyperbolic tilings ([@Ebeling] (chapter 4) [@Vinberg] [@Vinberg1] [@Magnus1]) in order to obtain information on certain codes ([@Goppa]). It has been recognized for some time that a duality exists between information coding and quantization, notably through the use of theta functions. The preceding developments shed light on this observation made in [@Forney], which can be interpreted by appealing to the homology group $H_2(\mathcal{M},\mathbb{Z})$ and the intersection forms already mentioned. The most general several-variable theta functions have already been introduced in connection with the Jacobian variety (see [@Mumford] chapter 2); they are written, with $u\in \mathbb{C}^g$ and $\mathbf{M}\in \mathcal{H}_g$, $$\theta (u,\mathbf{M)=}\sum_{m\in \mathbb{Z}^g}\exp (\pi i(^tm)\mathbf{M}m+2\pi i(^tm)u).$$ They give back $\theta _\Gamma (\tau )=\theta (0,\tau \mathbf{M)}$ with $u=0$ and $<m,m>=(^tm)\mathbf{M}m$. They are important for describing various physical situations such as heat propagation ([@Terras] 1.2 example 1, 1.3 exercise 7), the propagation of solitons, or the behavior of the Josephson junction (see the article by J. A. Zagrodzinski in [@Planat]).
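As a numerical sanity check (an illustration added here, not part of the source), the identity $A^4B^4(A^4-B^4)^4=16\,\eta(\tau)^{24}$ above can be tested at $\tau=i$, where $q=e^{-2\pi}$ is real:

```python
# Numerical check of A^4 B^4 (A^4 - B^4)^4 = 16 eta(tau)^24 at tau = i,
# where q = exp(2 i pi tau) = exp(-2 pi) is real, so plain floats suffice.
import math

q = math.exp(-2 * math.pi)                     # q at tau = i

A = sum(q ** (n * n / 4) for n in range(-40, 41) if n % 2 == 0)
B = sum(q ** (n * n / 4) for n in range(-41, 42) if n % 2 != 0)

eta24 = q * math.prod((1 - q ** n) ** 24 for n in range(1, 200))

lhs = A ** 4 * B ** 4 * (A ** 4 - B ** 4) ** 4
rhs = 16 * eta24
print(abs(lhs - rhs) / rhs)                    # tiny relative error
```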
With $g=1$ and $\mathbf{M}=\tau \mathbf{1}_g$ the preceding expressions give the simpler form studied in chapter 1 of [@Mumford], with $u\in \mathbb{C}$ and $\tau \in \mathcal{H}$: $$\theta (u,\tau \mathbf{)=}\sum_{m\in \mathbb{Z}}\exp (\pi im^2\tau +2\pi imu).$$ For $u=0$, one obtains $A(\tau )=\theta (0,2\tau \mathbf{)}$, giving a link with the Dedekind function $\eta $ ([@Iwaniec] p. 177), which also allows $B(\tau )$ to be expressed through the expression given above for $16\eta (\tau )^{24}$: $$A(\frac{2r-1}4)=\theta (0,\tau -\frac 12\mathbf{)=}\frac{\eta ^2(z)}{\eta (2z)}.$$

### Link with the heat equation

The theta function $\theta (u,\tau \mathbf{)}$ satisfies a heat equation in $u$, for $\tau =it$ with $t\in \mathbb{R}$ positive: $$\frac \partial {\partial t}\theta (u,it\mathbf{)=}\frac 1{4\pi }\frac{\partial ^2}{\partial u^2}\theta (u,it\mathbf{).}$$ One thus finds a fundamental solution of the heat equation for $u\in \mathbb{R}/\mathbb{Z}$. This observation goes back to Fourier himself ([@Fourier] §241, [@Jacobi], [@Weil4] p. 28), who also used the heat equation to develop his series. Eisenstein then used Fourier's work to prove number-theoretic statements about the Riemann $\zeta $ function conjectured earlier in [@Euler2]. Note that among the infinite products used by Eisenstein there appears explicitly another solution of the heat diffusion equation, the Gauss function $$T(u,t)=\frac 1{\sqrt{t}}\exp (-\frac{\pi u^2}t).$$ This observation makes it possible to understand the link between the theory of Brownian motion and the theta and zeta functions [@Yor].

### The four usual theta functions

Certain expressions of the theta functions give back, in the spirit of the old works of C. G. Jacobi, other automorphic functions, for example the elliptic functions.
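The heat equation just stated can be verified term by term: each summand $\exp(-\pi m^2t+2\pi imu)$ satisfies it exactly, hence so does any truncation of the series. A symbolic sketch, assuming sympy is available:

```python
# Check that a truncation of theta(u, it) solves the heat equation
# d/dt theta = (1/(4 pi)) d^2/du^2 theta: differentiating each summand
# exp(-pi m^2 t + 2 pi i m u) gives -pi m^2 times it in both cases.
import sympy as sp

u, t = sp.symbols('u t', real=True)
theta = sum(sp.exp(-sp.pi * m**2 * t + 2 * sp.pi * sp.I * m * u)
            for m in range(-3, 4))

residual = sp.diff(theta, t) - sp.diff(theta, u, 2) / (4 * sp.pi)
print(sp.simplify(residual))  # 0
```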
For this one uses the following theta functions, which also satisfy the heat equation, denoted depending on the author by $$\mathbf{\;}\theta _{jk}(u,\tau \mathbf{)=}\theta \left[ \begin{array}{c} j \\ k \end{array} \right] (u,\tau )=\vartheta _{[2j,-2k]}(\frac u\pi ,\tau )=\sum_{n\in \mathbb{Z}}\exp (\pi i(n+j)^2\tau +2\pi i(n+j)(u+k)).$$ The theta functions make it possible to embed ([@Waldschmidt] p. 193) an elliptic curve in a projective space $\mathbf{P}^{l-1}(\mathbb{C})$ with $l\geq 3$. They also make it possible to write such a curve as the intersection of two quadrics, thanks to classical relations due to Riemann and Jacobi, found for instance in ([@KumarMurty] ch. 7). Theta functions are often presented as elliptic generalizations of the exponential function (for example [@Weisstein]). The wish to use them as one uses the exponential, which passes from a Lie group to a Lie algebra [@Moore], opened up for the author a whole perspective of research.
Usually, one restricts to $4$ the number of theta functions used, thanks to the following properties: $$\vartheta _{[2j,-2k]}(z+\frac 12,\tau )=\vartheta _{[2j,-2k-2]}(z,\tau ),$$ $$\vartheta _{[2j,-2k]}(z+\frac 12\tau \pi ,\tau )=\exp (-i\pi \tau /4-iz-ki\pi )\vartheta _{[2j+2,-2k]}(z,\tau ).$$ This allows one to restrict, with $\mathbf{q}=\exp (i\pi \tau )$, to the following four functions, which admit infinite product decompositions ([@Chandrasekharan] (ch. V) [@Moll] (ch. 3) [@Perrine9]): $$\vartheta (u,\tau )=2\sum_{n=0}^\infty (-1)^n\mathbf{q}^{(n+\frac 12)^2}\sin (2n+1)\pi u=\theta (u,\tau ),$$ $$\vartheta _1(u,\tau )=2\sum_{n=0}^\infty \mathbf{q}^{(n+\frac 12)^2}\cos (2n+1)\pi u=\theta (\frac 12-u,\tau ),$$ $$\vartheta _2(u,\tau )=1+2\sum_{n=1}^\infty (-1)^n\mathbf{q}^{n^2}\cos (2n\pi u)=-i\mathbf{q}^{\frac 14}\exp (-i\pi u)\theta (\frac \tau 2-u,\tau ),$$ $$\vartheta _3(u,\tau )=1+2\sum_{n=1}^\infty \mathbf{q}^{n^2}\cos (2n\pi u)=\mathbf{q}^{\frac 14}\exp (-i\pi u)\theta (\frac{\tau +1}2-u,\tau ).$$

### Expressions in terms of the Dedekind eta function

If $\vartheta ^{\prime }(u,\tau )$ denotes the derivative of $\vartheta $ with respect to $u$, one has an automorphy property involving an eighth root of unity $\kappa $ depending on the matrix used in $SL(2,\mathbb{Z})$: $$\vartheta ^{\prime }(0,\frac{a\tau +b}{c\tau +d})=\kappa (c\tau +d)^{\frac 32}\vartheta ^{\prime }(0,\tau ).$$ This brings out a link with the Dedekind function $\eta $ such that: $$\eta (\tau )^{24}=\mathbf{q}^2\prod_{n\geq 1}(1-\mathbf{q}^{2n})^{24},\;\;\text{where }\mathbf{q}=\exp (i\pi \tau ).$$ One finds for example the following formulas ([@Chandrasekharan] p. 80 and p. 123, [@Knopp] p. 46, [@Apostol] p. 91, [@Grant], [@WalkerP] p.
161): $$\vartheta ^{\prime }(0,\tau )=2\pi \eta ^3(\tau )=-2\pi \eta (\tau /2)\eta ((\tau +1)/2)\eta (2\tau )=\pi \vartheta _1(0,\tau )\vartheta _2(0,\tau )\vartheta _3(0,\tau ),$$ $$\vartheta (0,\tau )=i\exp (-i\pi \tau /9)\eta (\tau /3),$$ $$\;\vartheta _1(0,\tau )=2\frac{\eta ^2(2\tau )}{\eta (\tau )},\;\vartheta _2(0,\tau )=\frac{\eta ^2(\tau /2)}{\eta (\tau )},\;\vartheta _3(0,\tau )=\frac{\eta ^2((\tau +1)/2)}{\eta (\tau )},$$ $$\prod_{0\leq u,v<m, \, (u,v)\neq (0,0)} \theta \left[ \begin{array}{c} 1/2+u/m \\ 1/2+v/m \end{array} \right] (0,\tau )=(-1)^{m-1}m\eta (\tau )^{m^2-1}.$$ The link with the modular invariant $J$ and the function $\mathbf{\lambda }_\Lambda $ follows ([@Chandrasekharan] p. 85): $$J(\tau )=\frac{(\vartheta _1(0,\tau )^8+\vartheta _2(0,\tau )^8+\vartheta _3(0,\tau )^8)^3}{54(\vartheta _1(0,\tau )\vartheta _2(0,\tau )\vartheta _3(0,\tau ))^8},\;\;$$ $$\mathbf{\lambda }_\Lambda (\tau )=\frac{\vartheta _1(0,\tau )^4}{\vartheta _3(0,\tau )^4}=16(\eta ^2(2\tau )\eta (\tau /2)\eta ^{-3}(\tau ))^8.$$ One can likewise write the Jacobi elliptic functions in terms of $\eta $ ([@Chandrasekharan] p. 100 and p. 103, [@Moll] ch. 3, [@WalkerP] (p. 165)). With a constant $c$ one has, by ([@Husemoller] p. 191), $$\wp (u,\mathbb{Z}\oplus \mathbb{Z}\tau )=c-\frac{d^2}{du^2}\log \vartheta (\frac u{\pi \vartheta _3(0,\tau )^2},\tau ).$$ The fact that all these functions can be deduced from $\eta $ shows the fundamental importance of this function, which is also used for approximation computations [@Garvan]. For the $L$-functions the situation is more complicated, but related [@Ericksson].

Hypergeometric approach to Markoff theory
-----------------------------------------

Some remarks made by Harvey Cohn in his study of Markoff theory have been explored further.
### Relation with an elliptic function

In his initial article [@Cohn2], Harvey Cohn gives the following relation to interpret Markoff theory geometrically, with a particular lattice $\Lambda $: $$1-J(\tau )=\wp ^{\prime 2}(z)=4\wp ^3(z)+1.$$ The modulus $J$ is an automorphic function for the Fuchsian group $\Gamma =PSL(2,\mathbb{Z})$ with factor $\mu =1$ and weight $0$. Harvey Cohn says that the triples of matrices $(A,B,C)$ associated with the classical Markoff theory determine a hexagonal tiling of the Poincaré half-plane in $\tau $, and correspond via this relation between $\tau $ and $z$ to a quadrilateral tiling by a lattice $\Lambda $ of the complex plane in $z$. He illustrates this geometrically in a figure showing the matrices denoted $$A_0=\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right] ,\;\;B_0=\left[ \begin{array}{cc} 1 & -1 \\ -1 & 2 \end{array} \right] .$$ The algebraic aspect of these remarks of Harvey Cohn is summarized ([@Perrine9] fig. 7.7) by describing the respective fundamental domains of two tilings of $\mathcal{H}$, both yielding the punctured torus as quotient. The first is a hexagonal tiling $\alpha \beta \gamma \delta \varepsilon \zeta \eta \theta \iota $. The second gives a quadrilateral domain $\kappa \lambda \mu \nu \xi $. The passage between the two can be made by a hyperbolic tangram game whose pieces are portions of the well-known fundamental domain of the group $PSL(2,\mathbb{Z})$.
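The matrices $A_0$, $B_0$ above realize the Markoff triple $(3,3,3)$: their traces satisfy $x^2+y^2+z^2=xyz$, and their commutator has trace $-2$, reflecting the parabolic puncture of the torus. A quick check (an illustration added here):

```python
# Trace relations for Cohn's matrices A_0, B_0: the traces of A_0, B_0 and
# A_0 B_0 give the Markoff triple (3, 3, 3), and the commutator has trace -2.

def mul(m, n):
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

def tr(m):
    return m[0][0] + m[1][1]

A0 = [[1, 1], [1, 2]]
B0 = [[1, -1], [-1, 2]]
A0inv = [[2, -1], [-1, 1]]            # inverses exist in SL(2, Z): det = 1
B0inv = [[2, 1], [1, 1]]

x, y, z = tr(A0), tr(B0), tr(mul(A0, B0))
comm = mul(mul(A0, B0), mul(A0inv, B0inv))
print(x, y, z, tr(comm))                # 3 3 3 -2
print(x**2 + y**2 + z**2 == x * y * z)  # True
```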
To go toward the hexagonal domain, it suffices to apply to the modular domain six matrices of the form $$\left[ \begin{array}{cc} 1 & k \\ 0 & 1 \end{array} \right] \;(k=-2,-1,0,1,2,3).$$ To go toward the quadrilateral domain, it suffices to use the two modular half-domains and the following six matrices ([@Appel] vol. 2 p. 368) $$\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] ,\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] ,\left[ \begin{array}{cc} 0 & -1 \\ 1 & -1 \end{array} \right] ,\left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right] ,\left[ \begin{array}{cc} 1 & -1 \\ 1 & 0 \end{array} \right] ,\left[ \begin{array}{cc} -1 & 0 \\ 1 & -1 \end{array} \right] .$$

### Thrice-punctured sphere and modular invariant

In reality, there is another way to build a Riemann surface from the quadrilateral domain $\kappa \lambda \mu \nu \xi $, and this method was generalized in [@Schmidt]. It suffices to identify $\kappa \lambda $ and $\xi \nu $ by a transformation $a\in PSL(2,\mathbb{Z})$, and $\mu \lambda $ and $\mu \nu $ by $b\in PSL(2,\mathbb{Z})$. One thus builds a thrice-punctured sphere corresponding to the points $0$, $1$, $\infty $. The explicit computation can be carried out and determines for $a$ and $b$ the matrices $$a=\left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right] ,\;b=\left[ \begin{array}{cc} 1 & 0 \\ 2 & 1 \end{array} \right] .$$ These matrices generate the group $\Gamma (2)$, which is free ([@Iversen] p. 154), and determine a unique geometric structure on $\mathcal{H}/\Gamma (2)$. What precedes guarantees, by Riemann's theorem ([@Ford] p. 163), the existence of an algebraic relation between $J$ and an automorphic function for the group $\Gamma (2)$.
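The defining property of the principal congruence subgroup $\Gamma(2)$, congruence to the identity mod $2$, can be checked on words in the generators $a$, $b$; a small sketch on the commutator $[a,b]$ (an illustration added here):

```python
# Every word in a, b and their inverses is an integer matrix of determinant 1
# congruent to the identity mod 2 -- the defining property of Gamma(2).

def mul(m, n):
    return [[m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]],
            [m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]]]

a = [[1, 2], [0, 1]]
b = [[1, 0], [2, 1]]
a_inv = [[1, -2], [0, 1]]
b_inv = [[1, 0], [-2, 1]]

word = mul(mul(a, b), mul(a_inv, b_inv))       # the commutator [a, b]
det = word[0][0] * word[1][1] - word[0][1] * word[1][0]
mod2 = [[e % 2 for e in row] for row in word]
print(word, det, mod2)   # det 1, congruent to the identity mod 2
```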
This relation is usually computed from the expressions given for the elliptic case $$y^2=4x^3-g_2x-g_3=P(x)=4(x-e_1)(x-e_2)(x-e_3).$$ One sets, up to a permutation of $e_1$, $e_2$, $e_3$, $$\nu _{31}=(e_3-e_1)\neq 0,\;\mathbf{x}=\frac{(x-e_1)}{(e_3-e_1)},\;\mathbf{\lambda }_\Lambda =\frac{(e_2-e_1)}{(e_3-e_1)},\;\nu ^2=\frac 1{4\nu _{31}^3},\;\mathbf{y}^2=\nu ^2y^2.$$ This transforms the equation $y^2=P(x)$ into the following Legendre form $$\mathbf{y}^2=\mathbf{x}(\mathbf{x}-1)(\mathbf{x}-\mathbf{\lambda }_\Lambda )\text{ where }\mathbf{\lambda }_\Lambda \notin \{0,1\}.$$ The possible permutations of $e_1$, $e_2$, $e_3$ show that two elliptic curves $E_{\mathbf{\lambda }_\Lambda }$ and $E_{\mathbf{\lambda }_\Lambda ^{\prime }}$ obtained in this way are isomorphic if and only if $$\mathbf{\lambda }_\Lambda ^{\prime }\in \{\mathbf{\lambda }_\Lambda ,\frac 1{\mathbf{\lambda }_\Lambda },1-\mathbf{\lambda }_\Lambda ,\frac 1{1-\mathbf{\lambda }_\Lambda },\frac{\mathbf{\lambda }_\Lambda }{\mathbf{\lambda }_\Lambda -1},\frac{\mathbf{\lambda }_\Lambda -1}{\mathbf{\lambda }_\Lambda }\}.$$ This allows one to restrict to the complex values $$\mathbf{\lambda }_\Lambda \in S_4=\{\lambda \mid \lambda \in \mathbb{C},\;\mid \lambda \mid <1,\;\mid 1-\lambda \mid <1,\;\Re(\lambda )\geq (1/2)\}.$$ Inverting the preceding relations to deduce $e_3$ and $e_2$, one obtains the following expressions, showing that $\mathbf{\lambda }_\Lambda $ does not suffice to define the polynomial $P(x)$ and that the accessory parameter $\nu _{31}$ is indispensable: $$\nu _{31}+\nu _{31}\mathbf{\lambda }_\Lambda =-3e_1,\;4\nu _{31}^2\mathbf{\lambda }_\Lambda e_1=(12e_1{}^2-g_2)e_1=8e_1{}^3+g_3,$$ $$g_2=\frac{4\nu _{31}^2}3(1-\mathbf{\lambda }_\Lambda +\mathbf{\lambda }_\Lambda ^2),\;g_3=\frac{4\nu _{31}^3}{27}(\mathbf{\lambda }_\Lambda +1)(\mathbf{\lambda }_\Lambda -2)(2\mathbf{\lambda }_\Lambda -1),$$ $$g_2^3-27g_3^2=16\nu _{31}^6\mathbf{\lambda }_\Lambda ^2(1-\mathbf{\lambda }_\Lambda )^2.$$ This gives the sought and very classical expression of $J$: $$J=\frac{g_2^3}{g_2^3-27g_3^2}=\frac 4{27}\frac{(1-\mathbf{\lambda }_\Lambda +\mathbf{\lambda }_\Lambda ^2)^3}{\mathbf{\lambda }_\Lambda ^2(1-\mathbf{\lambda }_\Lambda )^2}.$$ Thus $\mathbf{\lambda }_\Lambda $, like $J$, appears as a function of a variable $\tau \in \mathcal{H}$. Viewing $\tau =\omega _2/\omega _1$, where $\omega _1$, $\omega _2$ generate the lattice $\Lambda $, one can observe the action on $\mathbf{\lambda }_\Lambda (\tau )$ of a transformation of $PSL(2,\mathbb{Z})$. It is easy to see that if the transformation lies in $\Gamma (2)$, the group of the thrice-punctured sphere, the value of this function does not change ([@Ford] p. 159). The function $\mathbf{\lambda }_\Lambda (\tau )$ is therefore automorphic for this group, a fundamental domain of which also appears in the preceding figure. Since this fundamental domain $\kappa \lambda \mu \nu \xi $ is made up of copies of the fundamental domain of $PSL(2,\mathbb{Z})$, one recovers in another way, by Riemann's method ([@Ford] p. 163), the existence of the relation linking $J$ and $\mathbf{\lambda }_\Lambda $, which has just been computed. One sees easily ([@Perrine9] fig. 7.8, inspired by [@Cohn5]) what the function $\tau \rightarrow \mathbf{\lambda }_\Lambda (\tau )$ gives.
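The discriminant identity and the expression of $J$ in terms of $\mathbf{\lambda}_\Lambda$ can be checked symbolically (a sketch added here, assuming sympy is available):

```python
# Symbolic check of g_2^3 - 27 g_3^2 = 16 nu^6 l^2 (1-l)^2 and of
# J = g_2^3 / (g_2^3 - 27 g_3^2) = (4/27)(1 - l + l^2)^3 / (l^2 (1-l)^2),
# with l standing for lambda_Lambda and nu for nu_31.
import sympy as sp

l, nu = sp.symbols('lambda nu')
g2 = sp.Rational(4, 3) * nu**2 * (1 - l + l**2)
g3 = sp.Rational(4, 27) * nu**3 * (l + 1) * (l - 2) * (2 * l - 1)

disc = sp.expand(g2**3 - 27 * g3**2 - 16 * nu**6 * l**2 * (1 - l)**2)
J = sp.cancel(g2**3 / (g2**3 - 27 * g3**2)
              - sp.Rational(4, 27) * (1 - l + l**2)**3 / (l**2 * (1 - l)**2))
print(disc, J)  # 0 0
```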
The function $\mathbf{\lambda }_\Lambda (\tau )$ satisfies $$\mathbf{\lambda }_\Lambda (\tau +1)=\frac{\mathbf{\lambda }_\Lambda (\tau )}{\mathbf{\lambda }_\Lambda (\tau )-1},\;\;\mathbf{\lambda }_\Lambda (-\frac 1\tau )=1-\mathbf{\lambda }_\Lambda (\tau ).$$ These conditions bring out two matrices which are easily checked to generate the group of permutations of three elements: $$\mathbf{\lambda }_\Lambda \circ S=\left[ \begin{array}{cc} -1 & 1 \\ 0 & 1 \end{array} \right] \circ \mathbf{\lambda }_\Lambda ,\;\;\mathbf{\lambda }_\Lambda \circ T=\left[ \begin{array}{cc} 1 & 0 \\ 1 & -1 \end{array} \right] \circ \mathbf{\lambda }_\Lambda .$$ Thus a finite group of matrices isomorphic to the group of permutations of $3$ elements is introduced, with $$\mathbf{S}=\left[ \begin{array}{cc} -1 & 1 \\ 0 & 1 \end{array} \right] \rightarrow \left( \begin{array}{ccc} 1 & 2 & 3 \\ 2 & 1 & 3 \end{array} \right) ,\;\;\mathbf{ST}=\left[ \begin{array}{cc} 0 & -1 \\ 1 & -1 \end{array} \right] \rightarrow \left( \begin{array}{ccc} 1 & 2 & 3 \\ 3 & 1 & 2 \end{array} \right) .$$ This permutation group acts on the values of $\mathbf{\lambda }_\Lambda $ with orbits of $6$ elements, except in the following three cases: $\mathbf{\lambda }_\Lambda \in \{1/2,-1,2\}$, i.e. $\tau $ in the class of $i$, giving $J=1$ and the order-$2$ ramification of $J$ (in practice, two lines intersect in the preceding figure, corresponding to a square symmetry); $\mathbf{\lambda }_\Lambda \in \{-\rho ,-\rho ^2\}$, i.e. $\tau $ in the class of $\rho =(-1+i\sqrt{3})/2$, giving $J=0$ and the order-$3$ ramification of $J$ (in practice, three lines intersect in the preceding figure, corresponding to a hexagonal symmetry); $\mathbf{\lambda }_\Lambda \in \{0,1,\infty \}$, i.e. $\tau $ in the class of $\infty $, giving $J=\infty $, outside $\mathcal{H}$ and $\mathbb{C}$.

### The hypergeometric study of the relations of H. Cohn

To understand the origin of the relation used by Harvey Cohn in [@Cohn2] to interpret Markoff theory, consider the expression $$f=\frac 1{27}(\mathbf{\lambda }_\Lambda +1)(\frac 1{\mathbf{\lambda }_\Lambda }+1)(1-\mathbf{\lambda }_\Lambda +1)(\frac 1{1-\mathbf{\lambda }_\Lambda }+1)(\frac{\mathbf{\lambda }_\Lambda }{\mathbf{\lambda }_\Lambda -1}+1)(\frac{\mathbf{\lambda }_\Lambda -1}{\mathbf{\lambda }_\Lambda }+1).$$ By construction this is an invariant for the group of permutations of $3$ elements acting in the $\mathbf{\lambda }_\Lambda $-plane [@Dixmier], expressible in terms of $g_2$ and $g_3$. Carrying out this computation, one easily finds ([@Ford] p. 160) the first part of the expression given by Harvey Cohn: $$1-f=\frac 4{27}\frac{(1-\mathbf{\lambda }_\Lambda +\mathbf{\lambda }_\Lambda ^2)^3}{\mathbf{\lambda }_\Lambda ^2(1-\mathbf{\lambda }_\Lambda )^2}=J.$$ One finds in [@Hunt] (p. 136) a way of treating such an equation. That is not the method used here. Rather, one wants to write $f$ with a particular elliptic function of $\tau $. To do this, one identifies the edges of the domain considered in the $\tau $-plane using the group $[SL(2,\mathbb{Z}),SL(2,\mathbb{Z})]$. This yields as quotient a conformal punctured torus. Just as the preceding computation in $\mathbf{\lambda }_\Lambda $ was linked, up to a few singularities, to the sphere of the modular domain and brought out $J$, so the torus minus a point is linked to a complete torus whose associated Weierstrass function is to be used. This amounts to working at the junction of two uniformizations [@Mazur1], one in $\mathbb{C}$ and then one in $\mathcal{H}$. In his various articles ([@Cohn1], [@Cohn7], [@Cohn8], [@Cohn1]), Harvey Cohn mentions, in connection with the question under study, another formula coming from works of R.
Fricke, to be taken into account, which presupposes a hexagonal symmetry: $$dz=const.\times \frac{dJ}{J^{2/3}(J-1)^{1/2}}.$$ He evokes the difficulty of passing between the various expressions, referring to [@Chudnovski] [@Keen5], where the problem is studied from the viewpoint of an accessory parameter satisfying a Lamé differential equation ([@Yosida2] p. 110), but without very clear conclusions. This question is linked to Hilbert's 22nd problem, that of the numerical uniformization of a Riemann surface, still not completely solved today [@Seppala1], even though the Lamé equations are currently the object of renewed interest [@Arscott] [@Waall]. The last expression relating $z$ and $J$ can be obtained by simple differentiation. Suppose that $$1-J(\tau )=\wp ^{^{\prime }2}(z)=4\wp ^3(z)+1;$$ this gives $$-dJ=12\wp ^2(z)\wp ^{^{\prime }}(z)dz,\;\;\wp ^{^{\prime }}(z)=(1-J)^{1/2},\;\;\wp ^2(z)=(J/4)^{2/3}.$$ Whence, substituting into the expression for $-dJ$, and up to a multiplicative factor, the expression given for $dz$. In the opposite direction, integrating such an expression relating $dz$ and $dJ$ presents difficulties, since it depends on the path considered. Up to a factor, one finds a hypergeometric integral, defined for $\Re(c)>\Re(a)>0$, $\mid x\mid <1$, here with $a=(1/3)$, $c=(5/6)$, $b=0$, or again a Beta function, defined for $\Re(p)>0$, $\Re(q)>0$, here with $p=(1/3)$, $q=(1/2)$: $$F(a,b,c,x)=\frac{\mathbf{\Gamma }(c)}{\mathbf{\Gamma }(a)\mathbf{\Gamma }(c-a)}\int_0^1t^{a-1}(1-t)^{c-a-1}(1-tx)^{-b}dt,$$ $$B(p,q)=\int_0^1t^{p-1}(1-t)^{q-1}dt=\frac{\mathbf{\Gamma }(p)\mathbf{\Gamma }(q)}{\mathbf{\Gamma }(p+q)},\;\;\mathbf{\Gamma }(p)=\int_0^\infty t^{p-1}\exp (-t)dt.$$ The integration difficulties are illustrated in [@Yosida] (p.
85 - 90), where it is shown how integration over a Pochhammer double contour around $[0,1]$ changes $$\int_0^1J^{p-1}(1-J)^{q-1}dJ$$ by multiplying this value by a factor $(1-\exp (2i\pi p))(1-\exp (2i\pi q))$. The hypergeometric function $F(a,b,c,x)$ is a solution of the differential equation with the two singularities $x=0$ and $x=1$, where $x\in \mathbb{C}$: $$E(a,b,c):\;\;x(1-x)\frac{d^2F}{dx^2}+(c-(a+b+1)x)\frac{dF}{dx}-abF=0.$$ When the parameters $a$, $b$, $c$ are real with $c$, $c-a-b$, $a-b$ not integers, one can define on $\mathcal{D}=\mathbb{C}\backslash \{]-\infty ,0]\cup [1,\infty [\}$ the Schwarz map: $$Sch:J\in \mathcal{D}\longrightarrow (F(a,b,c,J):J^{1-c}F(a+1-c,b+1-c,2-c,J))\in \mathbf{P}^1(\mathbb{C}).$$ For $\mid 1-c\mid =(1/\upsilon _1)$, $\mid c-a-b\mid =(1/\upsilon _2)$, $\mid a-b\mid =(1/\upsilon _3)$ strictly smaller than $1$, the image of $\mathcal{H}$ under this map is a triangle of the Riemann sphere with angles $(\pi /\upsilon _1)$ at $Sch(0)$, $(\pi /\upsilon _2)$ at $Sch(1)$, and $(\pi /\upsilon _3)$ at $Sch(\infty )$. One thus recovers the classical groups of tilings by isometries of the simply connected Riemann surfaces ([@Berger] chapter 1), with the three cases: spherical, Euclidean and hyperbolic. The Schwarz map can be extended to $\mathbb{C}\backslash \{0,1\}$ by the reflection principle on the boundary of $\mathcal{H}$, producing conformal transformations ([@Yosida] p.
78) that interpret the link between $J$ and $\mathbf{\lambda }_\Lambda $, showing the decisive character of what happens at certain singular points: $$F(1/12,5/12,1,x):x=J(\tau )\in \mathbb{C}\longmapsto \tau \in \mathcal{H}/PSL(2,Z)\text{ with inverse the function }J,$$ $$F(1/2,1/2,1,x):x=\text{ }\mathbf{\lambda }_\Lambda (\tau )\in \mathbb{C}\backslash \{0,1\}\longmapsto \tau \in \mathcal{H}/\Gamma (2)\text{ with inverse the function }\mathbf{\lambda }_\Lambda .$$ Such an extension effectively makes it possible to consider a Pochhammer contour and to understand the nature of the difficulty encountered. For the values $a=(1/3)$, $b=0$, $c=(5/6)$ of H. Cohn's differential expression relating $dz$ and $dJ$, one has $\mid 1-c\mid =(1/6)$, $\mid c-a-b\mid =(1/2)$, $\mid a-b\mid =(1/3)$. This corresponds to a Euclidean case of a planar hexagonal crystal. One finds in the works of R. Dedekind [@Dedekind] an approach complementary to the above, with an explicit link to the function $\eta $. He shows that the function $w(\tau )$, defined up to a constant by $$w(\tau )=c\frac{J^{\prime }(\tau )^{1/2}}{J(\tau )^{1/3}(1-J(\tau ))^{1/4}},$$ satisfies a hypergeometric differential equation $E((1/12),(1/12),(2/3))$ allowing $w$ to be written as a function of $J$. The function $\eta $ is itself a square root of $w$ up to a coefficient ([@Chandrasekharan] p. 135 or [@Moll] p. 180), with: $$\eta (\tau )^{24}=\frac 1{(48\pi ^2)^3}\frac{J^{\prime }(\tau )^6}{J(\tau )^4(1-J(\tau ))^3}=-\frac 1{4^4\pi ^6}\frac{\mathbf{\lambda }_\Lambda ^{\prime }(\tau )^6}{\mathbf{\lambda }_\Lambda (\tau )^4(1-\mathbf{\lambda }_\Lambda (\tau ))^4}.$$ To go further, it is necessary to link what we have just seen with the perturbed hypergeometric equation brought out above in connection with the monodromy representation defined by the matrices $A_0$ and $B_0$.
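The claim that $F(a,b,c,x)$ solves $E(a,b,c)$ can be checked on a truncation of the series: all coefficients below the truncation order cancel. A symbolic sketch for $F(1/2,1/2,1,x)$, the inverse of $\mathbf{\lambda}_\Lambda$, assuming sympy is available:

```python
# Plug the truncated hypergeometric series F(a,b,c,x) into the operator of
# E(a,b,c); only the tail term of degree N-1 survives, every lower-degree
# coefficient of the residual vanishes.
import sympy as sp

x = sp.symbols('x')
a, b, c = sp.Rational(1, 2), sp.Rational(1, 2), sp.Integer(1)
N = 8

F = sum(sp.rf(a, n) * sp.rf(b, n) / (sp.rf(c, n) * sp.factorial(n)) * x**n
        for n in range(N))
resid = sp.expand(x * (1 - x) * sp.diff(F, x, 2)
                  + (c - (a + b + 1) * x) * sp.diff(F, x) - a * b * F)
low_order = [resid.coeff(x, k) for k in range(N - 1)]
print(low_order)  # [0, 0, 0, 0, 0, 0, 0]
```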
The point just mentioned is the subject of work in progress, planned for presentation at the fifth international conference "Symmetry in Nonlinear Mathematical Physics" in Kyiv, June 2003.

Approach via double uniformization
----------------------------------

Comparing what we have just seen with the results of the previous chapter suggests that in certain cases the value $\mathbf{\lambda }_\Lambda $ can be chosen equal to the modulus $(\mu ^2/\lambda ^2)$ of a parabolic punctured torus extendable to the torus $\Lambda $. Indeed, the equation $\mathbf{y}^2=\mathbf{x}(\mathbf{x}-1)(\mathbf{x}-\mathbf{\lambda }_\Lambda )$ exhibits three roots $\alpha ^{\prime }=1$, $s^{\prime }=0$, $\beta ^{\prime }=\mathbf{\lambda }_\Lambda $, and defines the original Diophantine equation only up to the coefficient $\nu _{31}$. Likewise, the punctured torus is defined with $\alpha =-1$, $s=0$, $\beta =(\mu ^2/\lambda ^2)$, up to the coefficient $\lambda $. Pursuing this theme, the double uniformization property of punctured tori has been constructed, and its consequences for Markoff theory have been drawn. The essential result obtained is the deep relation that exists between the Dedekind eta function and the Laplace-Beltrami operator of a torus. This explains the infinite product decomposition of the function $\eta $. Since this function is linked to many other transcendental functions, this explains the existence of infinite products for all these functions, in particular the theta functions.

### A general construction

Comparing the representation of the tori of modulus $(\mu ^2/\lambda ^2)$ of the previous chapter with the $\mathbf{\lambda }_\Lambda $-plane, one arranges for points of the same ramification order to correspond. Thus $(\mu ^2/\lambda ^2)=2$ corresponds to a ramification of order 2, obtained with $\mathbf{\lambda }_\Lambda =1/2$.
Similarly $(\mu ^2/\lambda ^2)=1$ corresponds to a ramification of order 3, obtained with $\mathbf{\lambda }_\Lambda =-\rho ^2$. This ensures consistency with the ramification of $J$; one observes it on the domain of this function between corresponding values which are real, namely $J=1$ for $\mathbf{\lambda }_\Lambda =1/2$ and $J=0$ for $\mathbf{\lambda }_\Lambda =-\rho ^2=(1+i\sqrt{3})/2$. Conversely, one considers complex values $\mathbf{\lambda }_\Lambda $ lying on the boundary $[(1/2),-\rho ^2]$ of the domain $S_4$ of figure 7.8 of [@Perrine9]. They correspond bijectively to the values $J=J(\mathbf{\lambda }_\Lambda )\in [0,1]\subset \mathbb{R}$. One thus sets $(\mu ^2/\lambda ^2)=\mathbf{\beta }(J)\in [1,2]\subset \mathbb{R}$, with $\mathbf{\beta }$ an increasing bijection from $[0,1]\subset \mathbb{R}$ to $[1,2]\subset \mathbb{R}$. One then chooses $\Theta _\alpha =\lambda ^2$, and reduces to the case $\alpha =-1$, $s=0$, $\beta =\mathbf{\beta }(J)$, $p=\infty $. This normalizes the punctured torus under consideration and yields a parabolic punctured torus defined by the following matrices $A$ and $B$ in $SL(2,\mathbb{R})$ $$A=\left[ \begin{array}{cc} \lambda \sqrt{\mathbf{\beta }(J)} & \lambda \sqrt{\mathbf{\beta }(J)} \\ \dfrac \lambda {\sqrt{\mathbf{\beta }(J)}} & \dfrac{1+\lambda ^2}{\lambda \sqrt{\mathbf{\beta }(J)}} \end{array} \right] ,\;B=\left[ \begin{array}{cc} \lambda & -\lambda \mathbf{\beta }(J) \\ -\lambda & \dfrac{1+\lambda ^2\mathbf{\beta }(J)}\lambda \end{array} \right] .$$ In reality, these two matrices are defined up to the real factor $\lambda >0$. A whole family of distinct tori can fit the same value $\mathbf{\beta }(J)$, and these therefore share common properties. There is no reason to get rid of the coefficient $\lambda $ here.
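As a quick sanity check, one can verify numerically that the matrices $A$ and $B$ above do lie in $SL(2,\mathbb{R})$, i.e. have determinant $1$ for any $\lambda >0$ and any value of $\mathbf{\beta }(J)>0$. A minimal sketch (the sample values $\lambda =0.7$, $\beta =1.4$ are arbitrary):

```python
import math

def torus_matrices(lam, beta):
    """Matrices A, B of the parabolic punctured torus, as 2x2 row tuples."""
    sb = math.sqrt(beta)
    A = ((lam * sb, lam * sb),
         (lam / sb, (1 + lam**2) / (lam * sb)))
    B = ((lam, -lam * beta),
         (-lam, (1 + lam**2 * beta) / lam))
    return A, B

def det(M):
    """Determinant of a 2x2 matrix given as row tuples."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A, B = torus_matrices(0.7, 1.4)  # arbitrary sample values of lambda and beta(J)
print(det(A), det(B))            # both equal to 1 up to rounding
```

The cancellation is algebraic: $\det A=(1+\lambda ^2)-\lambda ^2=1$ and $\det B=(1+\lambda ^2\beta )-\lambda ^2\beta =1$, independently of $\lambda $ and $\beta $.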
A thorough analysis of this situation has been carried out and led to Kronecker's "Jugendtraum" theorem ([@Dieudonne2] vol. 1, p. 236). The two matrices exhibited define a free group in $PSL(2,\mathbb{R})$, which can be abelianized to introduce an elliptic curve. The endomorphisms of the free group they generate are associated with polynomials [@Peyriere]. This stems from the results proved for parabolic conformal punctured tori. One method for producing an associated elliptic curve consists in using the computations of [@Husemoller] (p. 179) and extending them to the value $\mathbf{\lambda }_\Lambda \mathbf{=}-\rho ^2$. Apart from this singular case, which moreover poses no problem ([@Silverman0] ch. VI), the selected values $\mathbf{\lambda }_\Lambda $ satisfy $$\mathbf{\lambda }_\Lambda =\frac 12+i\Im(\mathbf{\lambda }_\Lambda ),\;0\leq \Im(\mathbf{\lambda }_\Lambda )\leq \frac{\sqrt{3}}2,\;\mid \mathbf{\lambda }_\Lambda \mid <1,\;\mid 1-\mathbf{\lambda }_\Lambda \mid <1.$$ This allows one to properly define two periods generating a lattice $\Lambda $ $$\omega _1(\mathbf{\lambda }_\Lambda )=\int_{-\infty }^0\frac{dx}{\sqrt{x(x-1)(x-\mathbf{\lambda }_\Lambda )}},\;\;\omega _2(\mathbf{\lambda }_\Lambda )=\int_1^\infty \frac{dx}{\sqrt{x(x-1)(x-\mathbf{\lambda }_\Lambda )}}.$$ Hence the effective construction of an elliptic function associated with this lattice $$\wp _\Lambda ^{\prime 2}(z)=4\wp _\Lambda ^3(z)-g_2\wp _\Lambda (z)-g_3,$$ $$g_2=\frac{4\nu _{31}^2}3(\mathbf{\lambda }_\Lambda ^2-\mathbf{\lambda }_\Lambda +1),\;g_3=\frac{4\nu _{31}^3}{27}(\mathbf{\lambda }_\Lambda +1)(\mathbf{\lambda }_\Lambda -2)(2\mathbf{\lambda }_\Lambda -1).$$ For $\mathbf{\lambda }_\Lambda =-\rho ^2$, one obtains $g_2=0$. The corresponding elliptic curve, associated with the lattice $\mathbb{Z}[\rho ]$, is indeed definable ([@Silverman] p.
102) by means of the rational equation $y^2=x^3+1$ used above to analyze the differential information given by H. Cohn. Indeed, the latter comes from the equation $y^2=4x^3-3i\sqrt{3}\nu _{31}^3$ obtained with the expressions given for $g_2$ and $g_3$. For $\mathbf{\lambda }_\Lambda =(1/2)$, one obtains $g_3=0$. The corresponding elliptic curve, associated with the lattice $\mathbb{Z}[i]$, is definable ([@Silverman] p. 101) by the equation $y^2=x^3+x$, deducible this time from $y^2=4x^3-\nu _{31}^2x$. The datum of the accessory parameter $\nu _{31}\neq 0$ makes it possible to recover all the data of the elliptic curve with $e_3=e_1+\nu _{31},\;e_2=e_1+\mathbf{\lambda }_\Lambda \nu _{31}$. Up to a conformal transformation of $\mathbb{C}$ built from a translation and a rotation, one can normalize this elliptic curve without changing $\mathbf{\lambda }_\Lambda $, reducing to $e_1=0,\;e_3=\nu _{31}=\parallel \nu _{31}\parallel \in \mathbb{R}^{+}$. Having normalized the punctured torus and the torus by two different methods, one then arranges for the punctured torus to come from the torus by simply removing a point, bearing in mind that the questions of induced metric remain to be checked. The identification of $e_1$ and $e_3=\nu _{31}$ on the torus gives a great circle, which one identifies on the punctured torus with a circle of the same length joining the puncture to itself, and hence corresponding in $\mathcal{H}$ to the geodesic $0\alpha $ where $\alpha =-1$. This defines the transformation $\Upsilon $ from $\mathbb{C}$ to $\mathcal{H}$ with $$\Upsilon (e_1)=-1,\;\Upsilon (e_2)=\infty ,\;\Upsilon (e_3)=0,\;\Upsilon (e_2+e_3)=\beta .$$ The translation $t_{e_3}:z\rightarrow z+e_3$ of $\mathbb{C}$ thus corresponds to the transformation $A$ of $\mathcal{H}$, with $A\circ \Upsilon =\Upsilon \circ t_{e_3}$.
The identification on the other edges, with the same transformation $\Upsilon $ between the respective fundamental domains and the translation $t_{e_2}:z\rightarrow z+e_2$, gives $B^{-1}\circ \Upsilon =\Upsilon \circ t_{e_2}$. Under the preceding conditions, as $J$ varies on the chosen segment, $\mathbf{\lambda }_\Lambda $ varies on the associated arc, and $\tau $ varies in its complex plane. One can impose the constraint $f(\tau )=1-J(\tau )=\wp _\Lambda ^{\prime 2}(z)$; one thus obtains a relation between $J$ and $z$ that can be expressed in differential form. This recovers Harvey Cohn's expressions. The interesting point in this very general construction is that the fundamental domain for the group $gp(A,B)$ has a very simple shape, derived from the points $\alpha =-1$, $s=0$, $\beta =\mathbf{\beta }(J)$, $p=\infty $. Once the number $\beta $ is fixed, this fundamental domain is completely determined. It is fairly easy to characterize the identification of its edges corresponding to a given value $\lambda $, and to compare what different values of $\lambda $ give, by means of an affinity based on the boundary of $\mathcal{H}$. For $J=0$, one finds only $\beta =1$. But the construction just carried out for the boundary of $S_4$ is more general, and extends to every $\mathbf{\lambda }_\Lambda \in S_4$, where it yields hyperbolic punctured tori.

### Notions attached to the torus $\mathcal{T}$

For the torus $\mathcal{T}$ one has $\mathcal{T}eich(\mathcal{T})=\mathcal{H}$ and $\mathcal{M}od(\mathcal{T})=\mathcal{H}/PSL(2,\mathbb{Z})$. Each point of the fundamental domain $\mathcal{M}od(\mathcal{T})$ corresponds to an equivalence class of the torus, that is, an isomorphism class of elliptic curves [@Toubiana] (p. 203). One thus recovers the modular surface punctured at infinity, which can be identified with $\mathbb{C}$ as a Riemann surface thanks to the modular invariant $J$.
A complete description of the Weil-Petersson metric in this case is given in [@Nakahara] (p. 487-491). On this example there exists a Laplace-Beltrami operator $\mathbf{\Delta }$ whose characteristic property, found in [@Gelfand] (p. 41), is to be a second-order differential operator on $\mathcal{H}$ commuting with all the following transformations on the functions $f$ defined on the half-plane $$T(\psi \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] )f(z)=f(\frac{az+b}{cz+d}).$$ Such a transformation represents the group $PSL(2,\mathbb{Z})$, or even a larger group such as $PSL(2,\mathbb{R})$, as a group of operators on a function space on which harmonic analysis can be carried out [@Howe]. This is facilitated by the fact that, for every $\overline{g}=\psi (\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] )\in \Gamma _{\mathcal{H}}=PSL(2,\mathbb{Z})$ and for the Laplacian $\mathbf{\Delta }$, one has a commutation relation $\mathbf{\Delta }T(\overline{g})=T(\overline{g})\mathbf{\Delta }$, which leads one to think of common eigenvectors. For every $\tau =\tau _1+i\tau _2\in \mathcal{T}eich(\mathcal{T})=\mathcal{H}$, this operator is written here with a sign $$\mathbf{\Delta }=-\tau _2^2(\frac{\partial ^2}{\partial \tau _1^2}+\frac{\partial ^2}{\partial \tau _2^2}).$$ The Casimir operator, which has properties comparable to the preceding one, is defined in [@Borel] by $\mathcal{C}^{*}=-2\mathbf{\Delta }$.

#### Automorphic forms and the Laplace-Beltrami operator

Automorphic forms play a special role with respect to these operators, notably because the meromorphic functions on a Riemann surface $\mathcal{H}/\Gamma $ are given by the meromorphic functions on the Poincaré half-plane $\mathcal{H}$ invariant under $\Gamma $, and because the operator $\mathbf{\Delta }$ itself transports from $\mathcal{H}$ to the Riemann surfaces [@RosenbergS].
Since most of the essential equations of physics are expressed in terms of the operator $\mathbf{\Delta }$, and may concern phenomena relating to objects modeled by Riemann surfaces (which one can heat, illuminate, or set vibrating), the study of this situation is very important [@Hopf] [@Safarov]. Automorphic forms themselves group into families with particular properties (Maass waveforms, holomorphic modular forms, etc. [@Bruggeman1]). One may think of using them to obtain values of particular functions, just as Fourier series were used by Dirichlet to prove a number of the values of the zeta function supplied by Euler [@Dirichlet]. An important peculiarity, however, is that in a number of cases the spectrum of eigenvalues of $\mathbf{\Delta }$ has a continuous part. Matters are therefore more complicated than with an ordinary Euclidean Laplacian.
More precisely [@Gelbart], suppose given an automorphic function of weight $k$ for the group $PSL(2,\mathbb{Z})$, with the defining condition now written in the form $$\forall \gamma \in PSL(2,\mathbb{Z}),\;\forall z\in \mathcal{H},\;f(z)=(cz+d)^{-k}f(\gamma z).$$ It allows the definition of a function on $SL(2,\mathbb{R})$ by the expression $$\Phi _f(\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] )=(ci+d)^{-k}\;f(\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] i).$$ For every $g\in SL(2,\mathbb{R})$ and every $\gamma \in SL(2,\mathbb{Z})$, this function satisfies, by the automorphy of $f$, $$(C1):\;\;\Phi _f(\gamma g)=\Phi _f(g).$$ For every rotation matrix one also has, and this is linked to the quotient structure $\mathcal{H}\simeq SL(2,\mathbb{R})/SO(2,\mathbb{R})$, $$(C2):\;\;\Phi _f(g\left[ \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right] )=\exp (ik\theta )\Phi _f(g).$$ If $f$ is assumed holomorphic, one obtains with the Laplacian $\mathbf{\Delta }$ of $SL(2,\mathbb{R})$ (of which that of $\mathcal{H}$ is the image) the condition $$(C3):\;\;\mathbf{\Delta }\Phi _f=-\frac 14k(k-2)\Phi _f=-\frac{(k-1)^2-1}4\Phi _f.$$ This condition simplifies to $\mathbf{\Delta }\Phi _f=-s(s-1)\Phi _f$ if, as above, one restricts to even values $k=2s$. But other eigenfunctions of $\mathbf{\Delta }$ exist [@Howe] [@Taylor] [@Borel]. It is by weakening this condition that Maass invented his waveforms [@Maass]. It is also by studying this situation that Selberg found his celebrated formula generalizing the Poisson formula cited in 5.2, as well as Dirichlet's methods for evaluating Gauss sums or proving the quadratic reciprocity law [@Selberg].
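The automorphy condition above can be tested numerically on the discriminant form $\Delta (\tau )=\eta (\tau )^{24}$, a cusp form of weight $k=12$ for $PSL(2,\mathbb{Z})$. A minimal Python sketch (the truncation order and the sample point $\tau $ are arbitrary choices) checks $f(\gamma \tau )=(c\tau +d)^{k}f(\tau )$ for $\gamma =\left[ \begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix} \right] $, i.e. $f(-1/\tau )=\tau ^{12}f(\tau )$:

```python
import cmath

def delta_form(tau, terms=200):
    """Discriminant form Delta(tau) = q * prod_{n>=1} (1-q^n)^24, q = exp(2 pi i tau),
    truncated after `terms` factors (the product converges very fast for Im(tau) > 0)."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0 + 0j
    for n in range(1, terms + 1):
        prod *= (1 - q**n) ** 24
    return q * prod

tau = 0.2 + 1.1j            # arbitrary point of the upper half-plane
lhs = delta_form(-1 / tau)  # f(gamma tau) for gamma = [[0,-1],[1,0]]
rhs = tau**12 * delta_form(tau)
print(abs(lhs - rhs) / abs(rhs))  # tiny: weight-12 modularity holds
```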
There are two further very important conditions $$(C4):\;\;\int_{SL(2,\mathbb{R})/SL(2,\mathbb{Z})}\mid \Phi _f(g)\mid ^2dg<\infty .$$ This first condition introduces a Hilbert space $L^2(SL(2,\mathbb{R})/SL(2,\mathbb{Z}))$ of square-integrable functions. One also considers, for every $g$, $$(C5):\;\;\int_{\mathbb{R}/\mathbb{Z}}\Phi _f(\left[ \begin{array}{cc} 1 & x \\ 0 & 1 \end{array} \right] g)\,dx=0.$$ This second condition defines a particular subspace $L_0^2$ of the preceding one, the space of "cusp forms", in which one can identify a subspace $A_k(\Gamma )$ isomorphic to the subspace of the forms $f\in S_k(\Gamma )$ vanishing at the cusps. This space is a subspace of the $\mathbb{C}$-vector space $\mathbf{M}_k(\Gamma )$ of automorphic functions.

#### Link with the representations of $SL(2,\mathbb{R})$

An important consequence of the foregoing is that one can deduce from it a right regular, unitary, infinite-dimensional representation of $SL(2,\mathbb{R})$ in the set of unitary operators of $L_0^2$: $$\text{For all }g,h\in SL(2,\mathbb{R}),\;\;R(g)\Phi _f(h)=\Phi _f(gh),$$ where for every $g\in SL(2,\mathbb{R})$ and suitably chosen $\Phi _f$, $\mathbf{\Delta }R(g)\Phi _f=R(g)\mathbf{\Delta }\Phi _f$. This decomposes $R$ with subspaces invariant under $\mathbf{\Delta }$, that is, of eigenfunctions of $\mathbf{\Delta }$, and hence as a direct sum of representations of $SL(2,\mathbb{R})$. These are, moreover, all known [@Howe][@Taylor]. These representations induce representations of the Fuchsian groups, which can be lifted back into $SL(2,\mathbb{R})$ by Proposition 1.4.
One also finds ([@Terras] chapter III) "Fourier series" expressions in terms of $K$-Bessel functions (replacing the sinusoids), where $x+iy\in \mathcal{H}$ $$f(x+iy)=\sum_{n\in \mathbb{Z}}a_n\exp (2i\pi nx)\sqrt{y}K_{it}(2\pi y\mid n\mid ),$$ $$K_s(z)=\frac 12\int_0^\infty \exp (-\frac z2(u+\frac 1u))u^{s-1}du,$$ $$\frac{t^2+1}4\text{ eigenvalue of }\mathbf{\Delta }.$$ An important conjecture due to Selberg asserts that for the groups $\Gamma _0(n)$ associated with Hecke theory ([@Shimura] ch. 3, [@Miyake] § 4.5) this eigenvalue, which corresponds to $s=(1+it)/2$, is greater than or equal to $(1/4)$. In [@Apostol] one finds a finite system of generators of $\Gamma _0(p)$ for $p$ prime, together with associated automorphic forms. What we have just seen amounts to saying that $t$ is real.

#### Link between the Laplacian of a torus and the Dedekind eta function

Consider the operator $\mathbf{\Delta }$ on a conformal torus $\mathcal{T}$ defined by a complex parameter $\tau =\tau _1+i\tau _2\in \mathcal{T}eich(\mathcal{T})=\mathcal{H}$. To represent this torus, one uses [@Nakahara] the complex plane in $z\in \mathbb{C}$ and coordinates associated with a lattice $\Lambda =\mathbb{Z}\xi ^1\oplus \mathbb{Z}\xi ^2$ given by: $$\xi ^1=i\frac{\overline{\tau }z-\tau \overline{z}}{2\tau _2},\;\;\xi ^2=i\frac{\overline{z}-z}{2\tau _2}.$$ The metric of $\mathbb{C}$ defines an induced metric on the torus, from which one can compute the Weil-Petersson measure to adopt, and with which the Laplacian of the torus can be written simply in terms of the Laplacian of the Poincaré half-plane.
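The integral representation of $K_s$ above can be checked numerically. A minimal sketch (step size and cutoff are arbitrary accuracy choices) uses the equivalent form obtained by the substitution $u=e^t$, namely $K_s(z)=\int_0^\infty e^{-z\cosh t}\cosh (st)\,dt$, and compares with the classical closed form $K_{1/2}(z)=\sqrt{\pi /(2z)}\,e^{-z}$:

```python
import math

def bessel_k(s, z, t_max=20.0, steps=40000):
    """K_s(z) = int_0^inf exp(-z cosh t) cosh(s t) dt, by the trapezoidal rule
    (substitution u = e^t in the half-line integral of the text)."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoidal end-point weights
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(s * t)
    return total * h

z = 2.0
approx = bessel_k(0.5, z)
exact = math.sqrt(math.pi / (2 * z)) * math.exp(-z)  # closed form of K_{1/2}
print(approx, exact)  # agree to high precision
```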
For the Laplacian of $\mathcal{H}$, introducing for $i=1,2$ the operator $\partial _i=\partial /\partial \xi ^i$, one has $$\mathbf{\Delta }=-\frac 1{2\tau _2^2}(\mid \tau \mid ^2(\partial _1)^2-2\tau _1\partial _1\partial _2+(\partial _2)^2).$$ One then easily finds eigenfunctions of $\mathbf{\Delta }$ satisfying the right boundary conditions on the parallelogram of the preceding figure, so as to be able to deduce functions on the quotient torus $$\psi _{m,n}(\xi )=\exp (2i\pi (n\xi ^1+m\xi ^2)),\;\;m,n\in \mathbb{Z}.$$ The associated eigenvalues are $$\lambda _{m,n}=\frac{2\pi ^2}{\tau _2^2}(m-n\tau )(m-n\overline{\tau })=\frac{2\pi ^2}{\tau _2^2}\mid m-n\tau \mid ^2,$$ and as $(m,n)$ runs over $\mathbb{Z}^2$ this family coincides with that of the $\mid m+n\tau \mid ^2$. By analogy with what is known for operators on finite-dimensional spaces, the determinant of the Laplacian $\mathbf{\Delta }$ on the torus $\mathcal{T}$ could be envisaged as an infinite product of these eigenvalues $$Det(\mathbf{\Delta })=\prod_{(m,n)\in \mathbb{Z}^2-(0,0)}\frac{2\pi ^2}{\tau _2^2}\mid m+n\tau \mid ^2.$$ But such a definition, involving an infinite product, is insufficient. It can nevertheless be made rigorous by introducing the Eisenstein series $E(\tau ,s)$. We describe here the method for doing so as given in [@Nakahara] (p. 489). The evaluation of such a series uses the Dedekind eta function $\eta $.
The Eisenstein series is defined for $\Re(s)>1$ by $$E(\tau ,s)=\tau _2^s\sum_{(m,n)\in \mathbb{Z}^2\backslash \{(0,0)\}}\frac 1{\mid m+\tau n\mid ^{2s}}=\tau _2^sG_{2s}(\tau ),\quad \quad g_2(\tau )=\tau _2^{-1}E(\tau ,1),$$ it satisfies the functional equation $$\pi ^{-s}\Gamma (s)E(\tau ,s)=\pi ^{-(1-s)}\Gamma (1-s)E(\tau ,1-s),$$ and possesses a limit formula due to Kronecker at its simple pole $s=1$, where $\eta $ and the Euler constant $\gamma $ appear $$E(\tau ,s)=\frac \pi {s-1}+2\pi (\gamma -\log (2)-\log (\sqrt{\tau _2}\mid \eta (\tau )\mid ^2))+O(s-1).$$ The method consists in taking a logarithm and neglecting an infinity of terms $2\pi ^2$, so as to define only the number $$\frac{\det (\mathbf{\Delta })}{\tau _2}=\exp (-\log \tau _2(1+E(\tau ,0))-E^{\prime }(\tau ,0)).$$ One then uses Kronecker's formula and classical expressions for the $\Gamma $ functions to deduce evaluations in $s$ of the two terms that are equal by the functional equation $$sE(\tau ,1-s)=-\pi +2\pi s(\gamma -\log 2-\log (\sqrt{\tau _2}\mid \eta (\tau )\mid ^2)+...),$$ $$\pi ^{1-2s}\frac{\Gamma (1+s)}{\Gamma (1-s)}E(\tau ,s)=\pi E(\tau ,0)+(-2(\log \pi +\gamma )E(\tau ,0)+E^{\prime }(\tau ,0))\pi s+....$$ Comparison gives $$E(\tau ,0)=-1,\;\;E^{\prime }(\tau ,0)=2(\log 2-\log (\sqrt{\tau _2}\mid \eta (\tau )\mid ^2)),$$ that is, with a preceding expression: $$\frac{\det (\mathbf{\Delta })}{\tau _2}=\exp (-E^{\prime }(\tau ,0))=\tau _2\mid \eta (\tau )\mid ^4.$$ This expression gives a particular significance to the Dedekind function in relation to a determinant built from the Laplace-Beltrami operator of the torus. It makes it possible to understand why this function decomposes as a particular infinite product. Writing here $q=\exp (2\pi i\tau )=\mathbf{q}^2,$ one recovers the product given in R. Dedekind's commentary on fragment XXVIII of B. Riemann [@Riemann] (p.
397) $$\eta (\tau )^{24}=q\prod_{n\geq 1}(1-q^n)^{24}.$$ This function has already been encountered as defining an automorphic form of weight 12. Its expression is linked to the discriminant $Disc(E_\Lambda )$ of the elliptic curve $E_\Lambda $ attached to a lattice $\Lambda =\mathbb{Z}\omega _1\oplus \mathbb{Z}\omega _2$ corresponding to a $\mathcal{T}_\Lambda $ for which $\tau =\omega _1/\omega _2$ (taken with positive imaginary part) and $$\eta (\tau )^{24}=g_2^3-27g_3^2=16(e_1-e_2)^2(e_2-e_3)^2(e_3-e_1)^2=Disc(E_\Lambda ),$$ $$J=\frac{g_2^3}{g_2^3-27g_3^2}=\frac 1{54}\frac{((e_1-e_2)^2+(e_2-e_3)^2+(e_3-e_1)^2)^3}{(e_1-e_2)^2(e_2-e_3)^2(e_3-e_1)^2}.$$ We have seen, with the principal branch of the logarithm and for a transformation of $PSL(2,\mathbb{Z})$ defined by a matrix of $SL(2,\mathbb{Z})$, that $$\log \eta (\frac{a\tau +b}{c\tau +d})=\log \eta (\tau )+\frac 14\log (-(c\tau +d)^2)+\pi i\frac{a+d}{12c}-\pi is(d,c).$$ This equality can be summarized by saying that $\eta $ has an automorphy property of weight $(1/2)$. But this requires introducing a 24th root of unity, allowing one to write $$\eta (\frac{a\tau +b}{c\tau +d})=\chi _\eta (\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] ).(c\tau +d)^{(1/2)}\eta (\tau ).$$ From now on we therefore use the definition of [@Kac] (p. 257), more satisfactory than the one used earlier for automorphic functions. One says that $\eta $ is a modular form of weight $(1/2)$ and multiplier system $P\chi _\eta $, where in the most general case $P\chi _\eta :\Gamma =PSL(2,\mathbb{Z})\rightarrow \mathbb{C}\backslash \{0\}$ is a function such that for every $\gamma \in \Gamma $ one has $\mid P\chi _\eta (\gamma )\mid =1$, and, with $P:SL(2,\mathbb{Z})\rightarrow PSL(2,\mathbb{Z})$ the canonical projection, $P\chi _\eta \circ P=\chi _\eta $.
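As a numeric illustration of the product above (a minimal sketch; the truncation order is an arbitrary accuracy choice), one can evaluate it at $\tau =i$ and compare with the classical special value $\eta (i)=\Gamma (1/4)/(2\pi ^{3/4})$:

```python
import cmath
import math

def eta24(tau, terms=200):
    """eta(tau)^24 = q * prod_{n>=1} (1-q^n)^24 with q = exp(2 pi i tau)."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0 + 0j
    for n in range(1, terms + 1):
        prod *= (1 - q**n) ** 24
    return q * prod

value = eta24(1j).real                                    # eta(i)^24 is real and positive
special = (math.gamma(0.25) / (2 * math.pi**0.75)) ** 24  # classical value of eta(i), to the 24th power
print(value, special)  # agree to high relative precision
```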
The function $g_2$ is for its part a modular function of weight $4$ with trivial multiplier system, whence, with the weight-$12$ modularity of the discriminant $g_2^3-27g_3^2$, the weight-$0$ modularity property of $J$. These last two functions can in turn be regarded as eigenvectors of operators that can be made explicit. The discriminant is thus an eigenfunction of the Hecke operators ([@Serre1] p. 168), operators which all commute with the Laplacian, which yields its spectral analysis. We refer to [@Rademacher] (ch. 8, 9) for all the complementary verifications of the preceding computations. The important conclusions are that there is a deep link between the Dedekind function and the Laplace-Beltrami operator of the torus $\mathcal{T}$, and hence also that of $\mathcal{H}$, and that the latter is deeply related to the unitary representations, in an infinite-dimensional Hilbert space $L_0^2$, of the simplest non-compact Lie group, $SL(2,\mathbb{R})$. We have moreover seen how $\mathcal{H}$ admits the quotient $PSL(2,\mathbb{R})$ as automorphism group, so this last property is perfectly understandable. Passing to the torus allows the appearance of an infinite product interpretable as the tractable part of the determinant of a Laplace-Beltrami operator $\mathbf{\Delta }$. Obviously one question that arises is whether Écalle's resurgence technique [@Ecalle] might not allow the preceding computations to be placed in a more satisfactory framework. The link brought out above between the function $\eta $ and an operator [@Kostant] finds a particular application in field theory [@Bunke], suggesting the existence of a genuine functorial construction for this field theory, of much wider scope than the classical developments made possible by cyclotomy and Kronecker's "Jugendtraum" [@Landsman].
#### Gauss sums

The function $P\chi _\eta $ can be studied directly. It has a deep link with Gauss sums ([@Chandrasekharan] (ch. IX), [@Lemmermeyer]), and it is its behavior which in fact enables the cyclotomic proof of the quadratic reciprocity law. Moreover, it is in this factor that the Dedekind sums concentrate, as the exponent of a power. Hence also the link between these sums and quadratic reciprocity. In [@Knopp] (p. 51) one finds an expression of this multiplier using the Jacobi symbol: $$\text{if }c\text{ odd: }\chi _\eta (\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] )=\left( \frac d{\mid c\mid }\right) \exp (\frac{\pi i}{12}(\mathcal{F}-3c)),$$ $$\text{if }c\text{ even: }\chi _\eta (\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] )=(-1)^{\frac{sgn(c)-1}2\frac{sgn(d)-1}2}\left( \frac c{\mid d\mid }\right) \exp (\frac{\pi i}{12}(\mathcal{F}+3d-3-3cd)),$$ $$\text{where }\mathcal{F}=(a+d)c-bd(c^2-1).$$ One can check from this that we are indeed dealing with a 24th root of unity. The Gauss sums are given by $$G(a,n)=\sum_{k=0}^{n-1}\exp (\frac{2i\pi ak^2}n).$$ For $p$ an odd prime and $a$ not congruent to $0$ modulo $p$, with $c=1$ or $c=-i$ according as $p\equiv 1(\mod\,4)$ or $p\equiv 3(\mod\,4)$, they satisfy, for the discrete Fourier transform [@Crandall] (p. 92): $$\left( \frac ap\right) =\frac{G(a,p)}{\frac 12\sqrt{p}(1+i)(1+i^p)}=\frac c{\sqrt{p}}\sum_{k=0}^{p-1}\left( \frac kp\right) \exp (\frac{2i\pi ak}p).$$ From this one can deduce the expression for the class number of ideals of a quadratic field [@Hilbert5] (Theorem 114, p. 135). If $p$ and $q$ are coprime, quadratic reciprocity is proved with $G(p,q)G(q,p)=G(1,pq)$.
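A minimal numeric sketch (pure Python; the small primes chosen are arbitrary) of the quadratic Gauss sums above, checking the classical evaluations $G(1,p)=\sqrt{p}$ for $p\equiv 1\ (\mathrm{mod}\ 4)$, $G(1,p)=i\sqrt{p}$ for $p\equiv 3\ (\mathrm{mod}\ 4)$, and the multiplicative relation $G(p,q)G(q,p)=G(1,pq)$ used for quadratic reciprocity:

```python
import cmath

def gauss_sum(a, n):
    """Quadratic Gauss sum G(a, n) = sum_{k=0}^{n-1} exp(2 pi i a k^2 / n)."""
    return sum(cmath.exp(2j * cmath.pi * a * k * k / n) for k in range(n))

# Classical evaluations for odd primes
print(gauss_sum(1, 5))   # close to sqrt(5), since 5 = 1 mod 4
print(gauss_sum(1, 7))   # close to i*sqrt(7), since 7 = 3 mod 4

# Multiplicativity for coprime moduli: G(p,q) G(q,p) = G(1, pq)
p, q = 3, 5
lhs = gauss_sum(p, q) * gauss_sum(q, p)
rhs = gauss_sum(1, p * q)
print(abs(lhs - rhs))  # ~0
```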
The Gauss sums also satisfy the Landsberg-Schaar identity, from which quadratic reciprocity can be deduced ([@Moll] p. 153, [@Binz]): $$\frac 1{\sqrt{p}}\sum_{n=0}^{p-1}\exp (\frac{2i\pi n^2q}p)=\frac{\exp (i\pi /4)}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp (\frac{-i\pi n^2p}{2q})\;\;(p>0,\;q>0).$$ It is remarkable that this formula arises from the trace of a longitudinal evolution operator associated with a Schrödinger equation. A proof is given in [@Armitage], starting from a Schrödinger equation on a cylindrical phase space, which is modified to make it toroidal, which incidentally discretizes time.

#### Link with the Riemann zeta function

The Riemann zeta function also enters the framework presented. The Eisenstein series can be studied directly as the reproducing kernel of the self-adjoint operator extending the Laplacian on $L^2(\mathcal{M}od(\mathcal{T}))$. We recalled above how this Hilbert-space structure arises naturally. It allows the definition of another reproducing kernel ([@Nakahara] (p. 426) or [@Gilkey]), the heat kernel associated with the elliptic Laplacian operator. In the more general context of a manifold $\mathcal{M}$ embedded in a space of dimension $D$, this kernel is given by the following expression $$h(x,y;t)=<x\mid \exp (-t\mathbf{\Delta })\mid y>=\sum_n\exp (-t\lambda _n)<x\mid n><n\mid y>.$$ It satisfies the heat equation, which, given the choice made for the sign of the Laplacian, reads $\left( \partial _t+\mathbf{\Delta }\right) h(x,y;t)=0$. It allows the definition of the heat semigroup $\{\exp (-t\mathbf{\Delta });t\geq 0\}$.
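The Landsberg-Schaar identity above is easy to check numerically; a minimal sketch (the pairs $(p,q)$ tested are arbitrary positive integers):

```python
import cmath
import math

def landsberg_schaar(p, q):
    """Both sides of the Landsberg-Schaar identity for positive integers p, q."""
    lhs = sum(cmath.exp(2j * cmath.pi * n * n * q / p) for n in range(p)) / math.sqrt(p)
    rhs = (cmath.exp(1j * cmath.pi / 4) / math.sqrt(2 * q)
           * sum(cmath.exp(-1j * cmath.pi * n * n * p / (2 * q)) for n in range(2 * q)))
    return lhs, rhs

for p, q in [(3, 5), (5, 3), (2, 3)]:
    lhs, rhs = landsberg_schaar(p, q)
    print(p, q, abs(lhs - rhs))  # ~0 in each case
```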
The Mellin transform gives a zeta function $\zeta (x,y;s)$ equal to $$\sum_n\frac 1{\mathbf{\Gamma }(s)}\int_0^\infty t^{s-1}\exp (-t\lambda _n)<x\mid n><n\mid y>dt.$$ It determines the following generalized function $\zeta _\Delta $, which has the form of a trace: $$\zeta _{\mathbf{\Delta }}(s)=\int_{\mathcal{M}}\zeta (x,x;s)dx=\sum_n\lambda _n^{-s}.$$ In [@Nakahara] (p. 429), and in more depth in [@Gilkey], these computations are developed toward the definition of an $\eta $-invariant for certain elliptic operators, essentially a signature of a quadratic intersection form, leading to the Atiyah-Patodi-Singer index theorem and important applications in dimension 4 ([@Naber]). This invariant is thus a generalization of the Dedekind eta function to objects broader than Riemann surfaces [@Muller] [@Bismut]. The application to the case where $\mathcal{M}$ is a torus is easy. It allows a more intrinsic definition of a zeta function ([@Lapidus] p. 229), by directly using the trace of a power of the Laplacian $$\zeta _{\mathbf{\Delta }}(s)=tr(\mathbf{\Delta }^{-s})=\sum_{(m,n)\in \mathbb{Z}^2\backslash \{(0,0)\}}\lambda _{m,n}^{-s}.$$ The Riemann conjecture [@Bombieri] seems to correspond, in a certain way, to what happens when the torus under consideration is such that $\tau $ tends to an integer, which introduces in the limit a symmetry breaking dramatically modifying the operator algebra generated by $\mathbf{\Delta }$ on which one works. One can construct a dynamical system for this case whose partition function is $\zeta _{\mathbf{\Delta }}$.
It suffices to follow the method of [@CohenP], in its very clear exposition of the work of [@Connes5], allowing the Riemann function $\zeta $ itself to be regarded as the partition function of a dynamical system $(A,\sigma _t)$, with $A$ a $C^{*}$-algebra and $\sigma _t$ a one-parameter group of automorphisms of $A$. Conversely, the problem of constructing a Hermitian operator which, following Michael Berry [@Berry], could be a Hamiltonian governing a quantum mechanical system whose underlying classical mechanics is chaotic and whose time is irreversible, corresponds to the Hilbert-Pólya conjecture [@Watkins]. A very recent article by Alain Connes [@Connes6] suggests that the Riemann hypothesis could correspond, like the Selberg formula [@Selberg], to a trace formula for such a Hamiltonian [@Fedosov] (theorem 9.5.2, p. 307). One can compare with what the higher-dimensional Arakelov theories give [@Lang] (pp. 172-173). An important question seems to be to properly formalize, in the context presented, the Mellin transform as a particular anti-equivalence of categories, from abelian varieties to operator algebras supporting $\zeta $ functions. Another avenue consists in deepening the link described in [@Chudnovski] between Apéry's approximation of $\zeta (3)$ and Lamé equations, which that article explicitly relates to the Markoff equation. One project consists in considering the operator $L$ introduced above in connection with the matrices $A_0$ and $B_0$, in considering associated orthogonality properties, and in using methods analogous to those developed in [@VanAssche].

#### Boson gases and $1/f$ noise

The preceding formalism has been applied to the statistical mechanics of boson gases. It relies on the link, studied by Ramanujan, between the Dedekind eta function and partitions of integers.
This materializes with Ramanujan's multiplicative function $\mathbf{\tau }_R$ ([@Serre1] p. 156, [@Chowla] p. 57), giving $$\eta (\tau )^{24}=q\prod_{n\geq 1}(1-q^n)^{24}=\sum_{n\geq 1}\mathbf{\tau }_R(n)q^n,\;\;\text{where }q=\exp (2\pi i\tau )=\mathbf{q}^2=\exp (-h\nu /kT).$$ This allows the definition of a per-mode partition function, where $p(n)$, the number of partitions of the integer $n$, is also associated with the function $\eta $ by the formula $$Z(q)=q^{\frac 1{24}}\eta (\tau )^{-1}=\prod_{n\geq 1}(\frac 1{1-q^n})=\sum_{n\geq 0}p(n)q^n=\frac{\exp (\pi i\tau /12)}{\eta (\tau )}.$$ If $\sigma _k(n)$ denotes the sum of the $k$-th powers of the divisors of $n$, one obtains quantities interpretable by analogy with statistical mechanics $$\text{the free energy }F=-kT\sum_{n\geq 1}\sigma _{-1}(n)\exp (-nh\nu /kT),$$ $$\text{the internal energy }E=h\nu \sum_{n\geq 1}\sigma _1(n)\exp (-nh\nu /kT),$$ $$\text{the entropy }S=k\sum_{n\geq 1}(\frac{h\nu }{kT}\sigma _1(n)+\sigma _{-1}(n))\exp (-nh\nu /kT).$$ On this basis, the energy fluctuations in a quartz resonator have been evaluated [@Planat4], revealing a quantum $(1/f)$ noise. Beyond the case of the quartz resonator, the preceding subject should be explored further to show how, in a more general perspective, to give a deep explanation of the $(1/f)$ noise so frequently encountered in nature. A few recent avenues have begun to be explored. They make the link with Ramanujan sums [@Planat9].

### Notions attached to a punctured torus $\mathcal{T}\backslash \{p\}$

The Teichmüller space of the punctured torus is $\mathcal{T}eich(\mathcal{T}\backslash \{p\})=\mathcal{H}$. One also has $\Gamma _{\mathcal{T}\backslash \{p\}}$ $=GL(2,\mathbb{Z})$. This group is denoted $S^{*}L(2,\mathbb{Z})$ to indicate that it acts on $\mathcal{H}$ by conformal and anticonformal transformations.
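A minimal numeric sketch (truncation orders are arbitrary) expanding the two $q$-series above with exact integer polynomial arithmetic, recovering the first Ramanujan numbers $\mathbf{\tau }_R(n)$ and partition numbers $p(n)$:

```python
def poly_mul(a, b, N):
    """Product of two power series given as coefficient lists, truncated at degree N."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                out[i + j] += ai * bj
    return out

N = 10
# eta(tau)^24 / q = prod_{n>=1} (1 - q^n)^24, truncated at degree N
series = [1] + [0] * N
for n in range(1, N + 1):
    factor = [0] * (N + 1)
    factor[0], factor[n] = 1, -1          # the polynomial 1 - q^n
    pw = [1] + [0] * N
    for _ in range(24):
        pw = poly_mul(pw, factor, N)      # (1 - q^n)^24
    series = poly_mul(series, pw, N)
tau_R = [0] + series[:N]                  # shift by q: tau_R[n] = coefficient of q^n
print(tau_R[1:6])                         # [1, -24, 252, -1472, 4830]

# partition numbers p(n) from prod_{n>=1} 1/(1-q^n), by the standard counting recurrence
p = [1] + [0] * N
for n in range(1, N + 1):
    for m in range(n, N + 1):
        p[m] += p[m - n]
print(p[:6])                              # [1, 1, 2, 3, 5, 7]
```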
The results obtained on parabolic punctured tori make it possible to reduce to the action of $SL(2,\mathbb{Z})$ on the Poincaré half-plane in order to describe, in the quotient, the moduli space $\mathcal{M}od(\mathcal{T}\backslash \{p\})$ via the punctured modular surface. These data, drawn from [@Nag] (p. 153), are interesting because they do not correspond to what was seen above in the study of parabolic conformal punctured tori. We gave $$\mathcal{T}eich(\mathcal{T}\backslash \{p\})\simeq \mathcal{F}(\lambda ,\mu )=\{(\lambda ,\mu )\mid \lambda >0,\mu >0\},$$ and described the way $\Gamma _{\mathcal{T}\backslash \{p\}}=GL(2,\mathbb{Z})$ acts on $\mathcal{F}(\lambda ,\mu )$. In the quotient one indeed identifies the diffeomorphism (and hence conformal) equivalence classes on the punctured torus, that is, the moduli of the punctured torus. This corresponds to the commentary on Definition 1.6 of [@Schneps] (p. 10). Everything happens as if $\mathcal{H}$ corresponded to a topological model of the Teichmüller space, and $\mathcal{F}(\lambda ,\mu )$ to a geometric model described by an algebraic equation. The link between these two models was studied in detail in [@Keen5], but that work should be taken up again in the light of the preceding considerations. It is also very important to note that the reduction theory presented for parabolic tori goes much further than what the mere action of $\Gamma _{\mathcal{T}\backslash \{p\}}=GL(2,\mathbb{Z})$ on $\mathcal{F}(\lambda ,\mu )$ gives. Generalizing such a result is conceivable by entering into the study of the presentation of the mapping class groups $\Gamma _{\mathcal{M}}$. Without going that far, we can briefly indicate how the results already encountered in the preceding chapter are recovered, with the remarks formulated by [@Keen5] (p. 203) and stemming from [@Keen1].
We translate what Linda Keen says in the form $$\pi ^{\prime }(\chi )=\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] \text{ acts on }\mathcal{F}(\lambda ,\mu )\text{ by }(\lambda ,\mu )\rightarrow (\frac \mu {\lambda ^2+\mu ^2},\frac \lambda {\lambda ^2+\mu ^2}),$$ $$\pi ^{\prime }(\chi ^{\prime })=\left[ \begin{array}{cc} 1 & 1 \\ -1 & 0 \end{array} \right] \text{ acts on }\mathcal{F}(\lambda ,\mu )\text{ by }(\lambda ,\mu )\rightarrow (\frac \mu \lambda ,\frac 1\lambda ).$$ These two matrices, of order 2 and 3 respectively, are such that their images under $\psi $ in $PSL(2,\mathbb{Z})$ generate this group. One may now regard $\pi ^{\prime }$ as an abelianization morphism, with, in the automorphism group $Aut(\mathbf{F}_2)$ of the free group $\mathbf{F}_2$ on two generators $A$ and $B$, $$\chi =(B,A^{-1}),\quad \quad \chi ^{\prime }=(AB,B^{-1}).$$ It then suffices to consider the action of these two automorphisms on the triple $$(tr(B^{-1}),tr(A),tr(B^{-1}A^{-1}))=(\frac{1+\lambda ^2+\mu ^2}\mu ,\frac{1+\lambda ^2+\mu ^2}\lambda ,\frac{1+\lambda ^2+\mu ^2}{\lambda \mu }),$$ $$\chi \text{ gives }(tr(A),tr(B^{-1}),tr(AB^{-1})),\;\;\chi ^{\prime }\text{ gives }(tr(B^{-1}),tr(B^{-1}A^{-1}),tr(A)).$$ More generally, the group $Aut(\mathbf{F}_2)$ acts via $\pi ^{\prime }$ on $\mathcal{F}(\lambda ,\mu )$. Moreover $\pi ^{\prime }(Aut(\mathbf{F}_2))=GL(2,\mathbb{Z})=\Gamma _{\mathcal{T}\backslash \{p\}}$. We developed the study of this situation, explaining how the group $\mathbf{T}_3=\mathbf{T}^{*}(\infty ,\infty ,\infty )$ appears here. This group was brought to light with the curvilinear triangle $\mathbf{LMN}$.
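These trace formulas can be checked numerically. The sketch below (Python; the helper names are ours) implements the triple $(tr(B^{-1}),tr(A),tr(B^{-1}A^{-1}))$ and the two actions on $\mathcal{F}(\lambda ,\mu )$. One verifies that $\chi ^{\prime }$ permutes the three traces, while $\chi $ replaces $tr(B^{-1}A^{-1})$ by $tr(AB^{-1})=tr(A)\,tr(B^{-1})-tr(B^{-1}A^{-1})$, the Vieta exchange underlying Markoff-type equations; the precise ordering of the permuted entries depends on conventions.

```python
def trace_triple(lam, mu):
    """(tr(B^-1), tr(A), tr(B^-1 A^-1)) as functions of (lambda, mu)."""
    s = 1 + lam ** 2 + mu ** 2
    return (s / mu, s / lam, s / (lam * mu))

def act_chi(lam, mu):
    """Action of chi: (lam, mu) -> (mu/(lam^2+mu^2), lam/(lam^2+mu^2))."""
    r = lam ** 2 + mu ** 2
    return mu / r, lam / r

def act_chi_prime(lam, mu):
    """Action of chi': (lam, mu) -> (mu/lam, 1/lam)."""
    return mu / lam, 1.0 / lam
```

For example, with $(\lambda ,\mu )=(1.3,0.7)$ the triple after $\chi ^{\prime }$ is a permutation of the original one, and after $\chi $ the first two traces are exchanged while the third becomes $xy-z$ for the original $(x,y,z)$.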
### Geometric interpretation of the double uniformization

Comparing the two cases of the torus $\mathcal{T}$ and the punctured torus $\mathcal{T}\backslash \{p\}$, everything happens as if one observed in space the surface $x^2+y^2+z^2=xyz$ and represented this configuration in $\mathcal{H}$. The quadratic forms give the whole space $\mathbb{R}^3$, then projectively $\mathcal{H}$, and one knows how to make $PSL(2,\mathbb{Z})$ act on these spaces. In $\mathbb{R}^3$ one visualizes this surface, and one represents it projectively by $\mathcal{H}$. One thus finds a meaning for the action of $GL(2,\mathbb{Z})$ on this surface. The reduction thus carries much deeper information than the mere inclusion of one topological object in another: it expresses the way in which one geometric object is contained in another. One finds here a meaning comparable to what is explained in B. Mazur's article [@Mazur1] on conformal double coverings: "It is the conjunction of two uniformizations (one, in this case, Euclidean and the other hyperbolic of arithmetic type, that is, periodic with respect to a congruence group) that creates an exceptionally rich structure on elliptic curves and entails deep implications for arithmetic questions (in fact [@Knapp] (ch. XII) the Shimura–Taniyama–Weil conjecture proved by A. Wiles [@Wiles]: an elliptic curve over the rational numbers has a zeta function coming from modular forms of weight 2)." What we have just described between the torus $\mathcal{T}$ and the punctured torus $\mathcal{T}\backslash \{p\}$ gives two possible uniformizations for the conformal punctured torus.
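The surface $x^2+y^2+z^2=xyz$ discussed above carries the same Vieta-exchange structure as the Markoff equation: its positive integer solutions are the Markoff triples scaled by 3. A minimal sketch (Python; the helper names are ours) enumerates them from the root $(3,3,3)$:

```python
def vieta_neighbors(t):
    """The three Vieta moves on a solution (x, y, z) of x^2+y^2+z^2 = xyz:
    replace one coordinate by the other root of the quadratic in it."""
    x, y, z = t
    return [(y * z - x, y, z), (x, x * z - y, z), (x, y, x * y - z)]

def is_solution(t):
    x, y, z = t
    return x * x + y * y + z * z == x * y * z

def solutions_up_to(bound):
    """Solutions with 0 < x <= y <= z <= bound, generated from (3, 3, 3)."""
    seen, stack = set(), [(3, 3, 3)]
    while stack:
        t = tuple(sorted(stack.pop()))
        if t in seen or t[2] > bound:
            continue
        seen.add(t)
        stack.extend(vieta_neighbors(t))
    return sorted(seen)
```

For instance `solutions_up_to(40)` returns $(3,3,3)$, $(3,3,6)$, $(3,6,15)$, $(3,15,39)$, that is, three times the Markoff triples $(1,1,1)$, $(1,1,2)$, $(1,2,5)$, $(1,5,13)$.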
Approach through quantum chaos
------------------------------

Since we have just extended the definition of the Laplacian to punctured tori, a natural question is whether there exists a mechanical interpretation corresponding to the classical Markoff theory, or to the generalizations we have given of it. One must understand whether, in this new context, the Markoff spectrum could be the spectrum of an operator to be constructed on the punctured torus. The idea the author followed to study this question consisted in examining what the theory of quantum chaos gives on various non-conformally-equivalent punctured tori, then in considering the same question on Riemann surfaces, as does [@Gutzwiller], and finally on more complex spaces. For any Riemann surface $\mathcal{M}$ defined by a Fuchsian group, symplectic geometry is introduced naturally by considering the first homology group $H_1(\mathcal{M},\mathbb{Z})$ and the intersection number ([@Waldschmidt2] p. 105). The formalism of Hamiltonian mechanics and of quantization enters from there ([@MacLane1] [@Gotay] [@Fedosov] [@DodsonCTJ] [@Dodson1] [@Fischer] [@Nelson] [@Cassa] [@Takhadjian]), with still much to clarify [@MacKay]. This makes it possible to model certain problems of mechanics by means of such Riemann surfaces. Note that in the mechanics of ordinary solids the Hamiltonian formalism is set up with a finite-dimensional phase space. Things become somewhat more complicated as soon as one tackles problems of hydrodynamics, for the phase space becomes infinite-dimensional, forcing recourse to tools such as Hilbert spaces. But even at this price, other domains of physics do not fit easily into this formalism, one of whose great merits has been to show the importance of topology for physics (see for example [@Casetti] [@Mineev]).
### Some examples

We mention here three examples to illustrate the limits of the Hamiltonian formalism and the avenues for its extension. $\bullet $ The inverse scattering method is used to integrate nonlinear differential equations. Its Hamiltonian interpretation is due to L. D. Faddeev [@Fadeev]. It applies to very important equations of physics (sine-Gordon; Lamé, that is, the one-dimensional periodic Schrödinger equation [@Feldman]; nonlinear Schrödinger; Korteweg–de Vries; etc.) admitting a Hamiltonian presentation with states in a Hilbert space. Certain solitons fall within the domain covered by this development [@Remoissenet], which goes well beyond the sole framework of Riemann surfaces. For a deeper study of solitons we refer to [@Gesztesy]. But Riemann surfaces also intervene in this setting [@Dubrovin]. $\bullet $ The classical Maxwell equations (whose link with Hodge theory the author would like to formalize) govern the propagation of waves and of light. They do not fit into the Hamiltonian formalism unless the dimension of the phase space is extended to infinity. Indeed they describe variations of the electric and magnetic fields at every point of space. The transformation of these fields transports energy and yields, in the absence of charge and current, a wave equation describing the propagation of the wave carrying this energy. The Schrödinger equation applied to a wave function representing an isolated photon gives exactly the Maxwell equations. With an electron, it gives the Dirac equation, which, like the former, lies at the basis of quantum electrodynamics [@Penrose]. The development of a common global framework for the laws of physics just mentioned thus indeed requires the introduction of a Hilbert setting and an analysis of the Schrödinger equation within it.
$\bullet $ Quantum field theory was introduced following Einstein's work on the invariance of Maxwell's equations of electromagnetism under Lorentz transformations $$\nabla (E+iB)=q+ig,\;\frac \partial {\partial t}(E+iB)+i\nabla \times (E+iB)=j_e+ij_m.$$ The desire to make these two equations invariant under other transformations $(E+iB)\rightarrow \exp (i\phi )(E+iB)$ led to conformal field theory and to the attempt to unify gravity with the other forces of nature through string theory. This approach had a high point with the article [@Polyakov]. In reality, this theory seems to be of limited interest, since its domain of application has been found to remain restricted. It is nevertheless established that it admits a Hamiltonian presentation with states in a Hilbert space, a $C^{*}$-algebra of operators and a group of gauge symmetries, that is, the noncommutative geometry of Alain Connes [@Connes] [@Waldschmidt2] (p. 548). The latter should make it possible to extend functorially the perhaps too narrow project of conformal field theory [@Witten] [@Landsman]. A quantization in this theory follows from the preceding remarks, whose essential elements can be found in [@Friedan] [@Gawedski] [@Vafa] [@Puta] [@Nakahara] [@Grandati] [@Bott].

### The Feynman path integral

A generic exposition of this question in the most general coordinates can be found in [@Grosche] (p. 67-91) and [@Golubeva].
On a manifold $\mathcal{M}$ (for example a compact Riemann surface) contained in a space of dimension $D$ and equipped with a metric $ds^2=g_{ab}(\mathbf{q})dq^adq^b$ given in local position parameters $\mathbf{q}=(q^1,...,q^D)$, one can consider the space $L^2(\mathcal{M})$ of square-integrable functions for the scalar product $$\langle f_1,f_2\rangle =\int_{\mathcal{M}}\sqrt{\det (g_{ab})}f_1(\mathbf{q})\overline{f_2(\mathbf{q})}d\mathbf{q},$$ and the Laplace–Beltrami operator, called the Laplacian, where $(g^{ab})$ is the inverse of $(g_{ab})$: $$\mathbf{\Delta }=g^{ab}\partial _a\partial _b+(g^{ab}\Gamma _a+\partial _ag^{ab})\partial _b,\;\;\text{where }\Gamma _a=\frac{\partial \log \sqrt{\det (g_{ab})}}{\partial q^a}.$$ The momentum parameters, Hermitian operators adapted to the scalar product introduced, take a particular form: $$p_a=-i\hbar (\frac \partial {\partial q^a}+\frac{\Gamma _a}2).$$ The operator associated with the energy is defined from the time variable: $$i\hbar \frac \partial {\partial t}.$$ The time-dependent Schrödinger equation ([@Ngo] p. 45) for a particle of mass $m$ moving in a time-independent potential field $V(\mathbf{q})$ on the manifold $\mathcal{M}$ is then written with a Hamiltonian $$i\hbar \frac \partial {\partial t}\psi (\mathbf{q},t)=\left[ -\frac{\hbar ^2}{2m}\mathbf{\Delta }+V(\mathbf{q})\right] \psi (\mathbf{q},t)=H\psi (\mathbf{q},t).$$ In certain cases it has a unique general solution ([@Waldschmidt2] p. 549) given by a Feynman integral built from a probability amplitude $K(\mathbf{q}^{\prime \prime },t^{\prime \prime };\mathbf{q}^{\prime },t^{\prime })$ that a particle leaves its initial position to reach its final position, and thanks to which one can describe the time evolution of the wave function $\psi $ dual to the particle under consideration: $$\psi (\mathbf{q}^{\prime \prime },t^{\prime \prime })=\int_{\mathbb{R}^D}\sqrt{g(\mathbf{q}^{\prime })}K(\mathbf{q}^{\prime \prime },t^{\prime \prime };\mathbf{q}^{\prime },t^{\prime })\psi (\mathbf{q}^{\prime },t^{\prime })d\mathbf{q}^{\prime }.$$ Even if the potential $V(\mathbf{q})$ is zero, this computation can be carried out [@Kleinert] by relying on the geodesics of $\mathcal{M}$. Assuming the global system stable and isolated, that is, in a stationary state, the total energy of the system is a constant, an eigenvalue $E$ of $H$, with which one has $\psi (\mathbf{q},t)=\psi (\mathbf{q},0)\exp (-iEt/\hbar )$ and $$E\psi (\mathbf{q},0)=\left[ -\frac{\hbar ^2}{2m}\mathbf{\Delta }+V(\mathbf{q})\right] \psi (\mathbf{q},0).$$

### The case of the quantum harmonic oscillator

One finds a comparable equation in the case of the one-dimensional ($D=1$) quantum harmonic oscillator, where $V(\mathbf{q})=(1/2)m\omega ^2\mathbf{q}^2$ and $\mathbf{\Delta =}\partial ^2/\partial \mathbf{q}^2$; with the Hermite polynomials ([@Perrine9] pp. 295-296) one obtains the only possible total energies $E_n=E_0+n\hbar \omega $ and the ket vector $\mid n\rangle =\psi _n(\mathbf{q},0)$ associated with each of them. This also gives the Hermitian form to consider, for which these ket vectors form an orthonormal basis of the Hilbert space of associated functions.
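The spectrum $E_n=E_0+n\hbar \omega $ can be illustrated numerically. In the sketch below (Python; units $\hbar =m=\omega =1$, so the exact levels are $E_n=n+\frac 12$), we build the unnormalized eigenfunctions $\psi _n(q)=H_n(q)e^{-q^2/2}$ from the Hermite recurrence and check pointwise, by finite differences, that $H\psi _n=(n+\frac 12)\psi _n$.

```python
from math import exp

def hermite(n, q):
    """Physicists' Hermite polynomial H_n via the recurrence
    H_{n+1} = 2q H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * q
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * q * h1 - 2.0 * k * h0
    return h1

def psi(n, q):
    """Unnormalized eigenfunction psi_n(q) = H_n(q) exp(-q^2/2)."""
    return hermite(n, q) * exp(-q * q / 2.0)

def local_energy(n, q, h=1e-4):
    """(H psi)/psi with H = -(1/2) d^2/dq^2 + (1/2) q^2 (hbar = m = w = 1);
    the second derivative is taken by central differences, so the result
    should equal n + 1/2 wherever psi_n does not vanish."""
    d2 = (psi(n, q + h) - 2.0 * psi(n, q) + psi(n, q - h)) / (h * h)
    return (-0.5 * d2 + 0.5 * q * q * psi(n, q)) / psi(n, q)
```

For instance `local_energy(3, 0.9)` returns a value close to $3.5$, as expected for $E_3$.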
On this space one introduces the three self-adjoint operators corresponding to the observables of position, momentum and energy used: $$\mathbf{Q}=\sqrt{\frac{m\omega }\hbar }\mathbf{q},\;\mathbf{P}=\frac{\mathbf{p}}{\sqrt{m\omega \hbar }},\;\;[\mathbf{P},\mathbf{Q}]=i\neq 0,$$ $$H=\hbar \omega (\mathbf{AA}^{*}-\frac 12)\text{ where }\mathbf{A=}\frac 1{\sqrt{2}}(\mathbf{Q}+i\mathbf{P})\neq \mathbf{A}^{*}.$$ One also has on this space a natural unitary operator ([@Mackey] p. 75) written $$(\frac H{\hbar \omega }+\frac 12+i)(\frac H{\hbar \omega }+\frac 12-i)^{-1},$$ usable to study the associated Riemann hypothesis following the methods of [@Connes6] and [@CohenP]. Finally, one can develop ([@Perrine9] p. 296) a statistical approach to the distribution of the energy states $E_n$ when this oscillator of angular frequency $\omega =2\pi \nu $ is in contact with an external medium much larger than itself, acting as a thermostat of constant temperature $T$. The energy states are quantized in units of $\hbar \omega =h\nu $, where $h$ is Planck's constant and $\hbar =(h/2\pi )$.

### Quantum chaos and geodesics

What we have just summarized for the harmonic oscillator generalizes into the Hamiltonian formulation given above for any manifold, and hence any Riemann surface $\mathcal{M}$. This condenses information on its geometry and leads naturally to a quantization problem, obtained by considering the spectrum of eigenvalues associated with the operator appearing in the Schrödinger equation. A relation can be established with the periodic geodesic orbits of $\mathcal{M}$ thanks to the trace formula stemming from Selberg's work [@Gutzwiller1] [@Watkins]. This is one of the recent developments of the theory of quantum chaos. In [@Colin] (p. 59) it is indicated that to describe the geodesics of $\mathcal{M}$ one may consider a pseudo-differential Hamiltonian $\hbar \sqrt{-\mathbf{\Delta }}$ and reduce to the Schrödinger equation $$i\hbar \frac \partial {\partial t}\psi =\hbar \sqrt{-\mathbf{\Delta }}\psi .$$ A simplification by $\hbar $ occurs in this equation, and its solution is given by the one-parameter group $U(t)=\exp (-it\sqrt{-\mathbf{\Delta }})$. This remark leads to the question of the deep geometric nature of Planck's constant ([@Mendes], [@Fedosov]: "Planck's constant might only take values such that the topological index is an integer."). In the associated statistical approach, the quantum partition function is $$Z(t)=tr(U(t))=\sum_{n=1}^\infty \exp (-i\mu _nt),$$ where the $\mu _n$ correspond to the stationary solutions of the form $\exp (-i\mu _nt)\psi _n(\mathbf{q},0)$ with $$\mathbf{\Delta }\psi _n(\mathbf{q},0)=-\lambda _n\psi _n(\mathbf{q},0),\;\;\mu _n=\sqrt{\lambda _n},\;\;\lambda _1=0<\lambda _2\leq ...\leq \lambda _n\leq ...$$ They are deduced from the eigenvalues $\lambda _n$ of the Laplace operator associated with the manifold $\mathcal{M}$. There is a whole literature on this subject, bearing in mind that this operator is most of the time defined as the negative of the one just used ([@RosenbergS], [@Safarov] articles by I. Chavel pp. 30-75 and M. Shubin pp. 226-283).

### Application to the Markoff theory

When the manifold $\mathcal{M}$ is not compact, the spectrum has no reason to be discrete and may therefore contain a Cantor-like part or a continuous part. One then no longer sees the equivalent of Planck's constant appear as in the case of the quantum harmonic oscillator. We saw above how the Landsberg–Schaar identity on Gauss sums arises from the trace of a longitudinal evolution operator associated with a Schrödinger equation [@Armitage].
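For a concrete instance of the spectral data $0=\lambda _1<\lambda _2\leq \dots $ entering $Z(t)$, one can take the flat square torus $\mathbb{R}^2/(2\pi \mathbb{Z})^2$ (a sketch of ours, not a case treated in the text), whose Laplacian eigenvalues are the integers $m^2+n^2$ with explicit multiplicities:

```python
def flat_torus_spectrum(kmax, count):
    """First eigenvalues of -Delta on the flat torus R^2/(2 pi Z)^2:
    the eigenfunction exp(i(m x + n y)) gives lambda = m^2 + n^2,
    listed with multiplicity in increasing order."""
    eig = sorted(m * m + n * n
                 for m in range(-kmax, kmax + 1)
                 for n in range(-kmax, kmax + 1))
    return eig[:count]
```

The list begins $0,1,1,1,1,2,\dots $, the $\mu _n=\sqrt{\lambda _n}$ follow, and the number of eigenvalues below $\Lambda $ grows like $\pi \Lambda $ (Weyl's law; the Gauss circle count gives $81$ eigenvalues $\leq 25$).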
We indicated how, starting from a cylindrical phase space made toroidal, one recovers quadratic reciprocity, intimately linked to the Dedekind eta function, itself linked to the torus. But we also saw that this approach discretizes time and makes the Schrödinger equation with a continuous time parameter disappear. This seems to indicate that, to go further in the generality of the formalism of the Schrödinger equation, time should be treated like the other observable parameters. The question one then asks is whether this formalism could interpret the Markoff spectrum when the phase space $\mathcal{M}$ is the parabolic punctured torus brought to light by Harvey Cohn in [@Cohn2]. To progress along this path one would have to give a good Schrödinger equation to consider. One should also make sure that one is not then in a case where such an equation has only finitely many solutions, the minimum occurring in the Markoff theory possibly corresponding to a minimization of energy. This work program of the author is only in its beginnings, so that few results can yet be given regarding the proposed approach. One way to progress along this path could be to make explicit the quantum Hamiltonian formulation associated with the phase-locked oscillators of Michel Planat [@Planat]. It seems indeed that they correspond to a punctured toric phase space, thus constituting a more sophisticated model than the one-dimensional quantum oscillator. The question of the possible discrete degeneracy of the Schrödinger equation in this case is an interesting problem.
Some related themes for reflection
----------------------------------

### Links with vector bundles and $K$-theory

For any Riemann surface $\mathcal{M}$, the uniformization theorem of Poincaré, Koebe and Klein provided domains $\mathcal{U}\subset \mathcal{S}^2$ and injective holomorphic transformations $t$ from $\mathcal{U}$ to $\mathcal{M}$ such that at every point $x\in \mathcal{U}$, $t$ locally uniformizes $\mathcal{M}$ at the point $t(x)$, charting the neighborhood of this point in $\mathcal{M}$. Today this result has finally been taken as the definition of Riemann surfaces, following H. Weyl [@Weyl]. Fuchsian groups make it possible to treat some of these surfaces algebraically, and in the associated algebra of automorphic functions one reconstructs the characteristic invariants. In [@Milne1] (p. 53-54) one finds the idea that automorphy factors correspond to cocycles (cohomology is there!), and this author shows that they are in bijective correspondence with vector bundles on the surface, in a way that interprets automorphic functions of weight $2k$ as sections of a bundle $L_k^{*}$ on a compactification $\mathcal{M}^c$ determined by a canonical automorphy factor ($K$-theory appears!). This remark is very important for understanding why the Markoff theory determines exceptional bundles and helices of the projective plane $P_2(\mathbb{C})$ (see [@Grothendieck1], [@Drezet], [@Rudakov], [@Nogin], [@Nogin1], [@Gorodentsev], [@Gorodentsev1], [@Drezet1], [@Drezet2]). It would be of great interest to associate other bundles and helices with the equations $M^{\varepsilon _1\varepsilon _2}(a,\partial K,u)$ brought to light in the present work, if only to better understand the structure of vector bundles on different types of varieties and to classify them [@Van; @de; @Ven] [@LePotier] [@Sen] [@Klyachko] [@Klyachko1] [@Ionescu] [@Baez1].
We conjecture that this is possible. This research belongs to the great tradition of analogies between number fields and function fields dear to André Weil [@Weil0], which led to Alexandre Grothendieck's schemes [@Silverman2] (A.9), then to étale cohomology generalizing Galois theory [@Milne2], to Arakelov geometry [@Soule2], and finally to motivic cohomology [@Levine]. This approach allowed the resolution of the Riemann hypothesis for curves of arbitrary genus over a finite field by André Weil [@Weil2], then for all varieties over a finite field by Pierre Deligne [@Deligne], and the resolution of the Langlands conjecture over function fields [@Laumon] [@Soergel]. A quick summary of the historical development can be found in [@Cartier2] or [@Milne] (p. 97-100). For other perspectives we refer to [@Goss] [@Buium]. One consequence of the research project just mentioned for bundles would be to give a general "automorphic" interpretation of the $K$-groups $K_i(R)$ of D. Quillen's theory. The importance of this question is clearly highlighted in [@Weibel1] (p. 17-18). As for the classical definition of the groups $K_i(R)$, it can be found in [@Rosenberg], or more directly in [@Arlettaz]. To these groups one can transpose results from algebraic number theory such as Dirichlet's unit theorem [@Rosenberg] (p. 288). In these results $R$ denotes a ring of integers of a field $F$, a finite extension of $\mathbb{Q}$, and there is a deep link between these $K$-groups and the function $\zeta _F$ of the field $F$ [@Lichtenbaum] [@Weibel1] [@Bump] [@Benson]. It is also known that zeta functions are linked to Dedekind sums and to toric geometry, which was developed to make a link between the theory of convex sets in a lattice and algebraic geometry [@Ziegler] (p. 224) [@Danilov] [@Pommersheim2]. Finally, the link between toric geometry and automorphic functions is made clearly explicit in works such as [@Borisov] [@Cox1] [@Cox2]. More direct developments on the link between zeta (or $L$) functions and Dedekind sums can be found in works such as [@Stevens] [@Sczech].

### Link with zeta functions

The appearance of zeta functions can be understood through a remark made when theta functions were discussed. Since the spaces of automorphic functions of successive weights are deduced from one another by exponentiations of groups, one can naturally bring out ([@Dieudonne3] p. 297) the Bernoulli numbers (here $\mathbf{b}_n=(-1)^{n+1}b_{2n}>0$) with a "half Poisson formula" involving successive exponentials of an operator $d$; it gives the partition function $Z$ of the harmonic oscillator in the Boltzmann–Planck theory upon replacing $d$ by $-(h\nu /kT)$: $$-\sum_{k\geq 1}\exp (kd)=\frac{\exp (d)}{\exp (d)-1}=\frac 1{1-\exp (-d)}=d^{-1}+\frac 12+\sum_{n\geq 1}(-1)^{n+1}\mathbf{b}_n\frac{d^{2n-1}}{(2n)!}.$$ Applied to an analytic function, such a formula yields the classical Euler–Maclaurin formula ([@Dieudonne3] p. 302, [@Kac1] ch. 25). This formula applies to structures because it is functorial in nature [@Gelfand1]. A translation for Kac–Moody algebras can be found in [@Tits]. One also knows how to pass from a Lie algebra to a Lie group via the exponential, which turns sums into products and traces into determinants ([@Arnold00] p. 116-119). In [@Postnikov] (p. 175) one finds the consequences for the corresponding categories, notably the equivalences of categories between Lie groups and Lie algebras, and in [@Postnikov] (p. 97) how the universal enveloping algebra of a Lie algebra carries a natural Hopf algebra structure. In [@Guichardet] (p. 27) appears the duality between affine algebraic groups and commutative Hopf algebras of finite type, the finite-dimensional semisimple case corresponding to finite groups. The link with braided categories and families of trees is essential [@Moore] [@Larson]. In [@Chari] (p. 4-5) it is also indicated how the category of quantum groups should be defined as dual (that is, anti-equivalent) to that of Hopf algebras. For others [@Majid] quantum groups are nothing but Hopf algebras, which does not satisfy the author of the present text. Since an explicit relation is made with the Hamiltonian presentation of mechanics and its quantization, starting from the work of L. D. Faddeev's school [@Fadeev], one is naturally led to the idea of comparing abelian varieties with quantum groups. The introduction of [@Chari] recalls how this work in mechanics developed [@Moyal], leading to the work of A. Connes ([@Connes], [@Connes2]), with which there is thus a deep duality. In the last formula given, the exponential makes it possible to pass from a group $K_{2k}(\mathcal{M})$ to a space $\mathbf{M}_k(\Gamma )$ whose dimension is known ([@Milne1] p. 45). The sum on the left corresponds to the limit of a sum of groups $\mathbf{M}_k(\Gamma )$ used to build the graded algebra $\mathbf{M}(\Gamma )$. The one on the right corresponds to a particular construction still to be formalized precisely (a classifying space). In this perspective the groups $K_{2k}(\mathcal{M})$ are comparable to cohomology groups $H^{*}(\mathcal{M},\mathbb{Z})$ and hence to $\mathbf{M}(\Gamma )$. The Lichtenbaum conjectures, which fit into this perspective ([@Soule] p. 107), are then written, for $k$ even, $$\frac{Card\,K_{2k-2}(\mathcal{M})}{Card\,K_{2k-1}(\mathcal{M})}=\frac{\mathbf{b}_k}k2^r.$$

### The automorphy of the eta function linked to the golden ratio

The automorphy of $\eta $ is the characteristic property of this function [@Toyoizumi] that gives rise to the Dedekind sum $s$, and whose consequence is the existence of the theory developed in the preceding chapters. This remark leads to the idea of searching, in the works on the Laplace–Beltrami operator or in those on infinite-dimensional unitary representations of Lie algebras such as $SL(2,\mathbb{R})$, for places where the results developed around the generalizations of the Markoff equation could be used. In [@Kac] (p. 270) one finds mention of a result that evokes our work. Let $\theta _a(S)=[0,\underline{S^{*},a}]$ be an algebraic number of degree 2 such that $S=(a_0,a_1,...,a_n)$ is a sequence with $S=S^{*}$. Consider $$f_c(\tau )=q^c\prod_{j=1}^{j=\infty }(1-q^j)^{a_{j-1}}\text{ where }q=\exp (2\pi i\tau ).$$ This expression defines a modular function in the sense of [@Kac] (p. 257) for a group $\Gamma (n)$ if and only if $$c=\frac{(n+2)(a+\sum_{j=0}^na_j)}{24}-\frac 1{4(n+2)}\sum_{j=1}^{n+1}j(n+2-j)a_{j-1}.$$ With the golden ratio $\theta _1(S)=[0,\underline{1}]$, which gives $n=0$, the value obtained is $c=(1/24)$. One thus recovers the Dedekind function $\eta $. The link with the pentagon expressed by this last case also appears in Euler's pentagonal identity ([@Euler1] 1748), quoted in [@Moll] p. 143 or [@Kac1] ch. 12, which expands $\eta $ in a Fourier series and allows its interpretation as the inverse of a partition function of a set of independent oscillators whose frequencies are multiples of a base frequency: $$\sum_{n\in \mathbb{Z}}(-1)^nq^{\frac{n(3n+1)}2}=\prod_{j=1}^{j=\infty }(1-q^j).$$ One can also make precise the link with the Penrose tiling ([@Connes] fig. II.3. p. 89), which for its part, with Vaughan Jones's construction, yields a canonical $C^{*}$-algebra and, as the first non-integer index of a type II$_1$ factor, the golden ratio ([@Connes] p. 507-508, [@Connes4]). The very proof of this last result clearly shows the link that exists with modular functions, Riemann surfaces and knots. Note that the formula given for $f_c$ leads more generally to the definition of modular functions given by products of $\eta $ functions, which physically corresponds to sets of independent oscillators. For $n\in \{2,3,4,6,12\}$ one finds in [@Shimura] (p. 49) such expressions for the surfaces $X(n)$, as in [@Ligozat] for the surfaces $X_0(n)$ of genus 1. This is a subject worth exploring, for which we give a few references [@Cox] [@Kondo] [@MacDonald] [@Voskresenskaya] [@Saito2] [@Robins] [@Okstate] [@Martin] [@Meyer] [@Ligozat] [@Hiramatsu] [@Mackey] (p. 366).

### Link with more general topological spaces

The link with lens spaces, which are themselves tied to the quadratic reciprocity law ([@Bredon] p. 365, [@Sossinsky1] p. 108) and more generally to the $\eta $ invariant of spherical space forms, is deepened in [@Luck] [@Gilkey] [@Gilkey2] [@Hilsum]. This yields a whole set of developments opening onto subjects such as equivariant $K$-theory, Koszul complexes, ... [@Soergel].
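Euler's pentagonal identity quoted above is easy to verify coefficient by coefficient; the following sketch (Python; the helper names are ours) compares the truncated product $\prod_{j\geq 1}(1-q^j)$ with the sparse pentagonal series.

```python
def euler_product(N):
    """Coefficients of prod_{j>=1} (1 - q^j) up to degree N."""
    c = [1] + [0] * N
    for j in range(1, N + 1):
        for n in range(N, j - 1, -1):  # descending: use old values
            c[n] -= c[n - j]
    return c

def pentagonal_series(N):
    """Coefficients of sum_{n in Z} (-1)^n q^{n(3n+1)/2} up to degree N."""
    c = [0] * (N + 1)
    c[0] += 1                          # the n = 0 term
    n = 1
    while n * (3 * n - 1) // 2 <= N:   # exponents for n and -n
        sign = -1 if n % 2 else 1
        for e in (n * (3 * n + 1) // 2, n * (3 * n - 1) // 2):
            if e <= N:
                c[e] += sign
        n += 1
    return c
```

Both functions return the sequence $1,-1,-1,0,0,1,0,1,0,\dots $, the exponents $1,2,5,7,12,15,\dots $ being the generalized pentagonal numbers.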
The Dedekind eta invariant that we used in our work in fact admits a deep generalization, brought to light by the work of Atiyah, Patodi and Singer around 1975. In [@Muller] one finds a survey of this subject made about ten years ago, which highlights the role of cone points and of surface boundaries (heat propagation is perturbed by boundaries and cone points). An explicit link is made with the work of F. Hirzebruch ([@Hirzebruch], [@Hirzebruch1]), which itself emphasizes the link between singularities and continued fractions ([@Laufer] ch. II, [@Oka] p. 95). The eta invariant plays the role of an infinite cyclotomic polynomial, suggesting that a new, broader "Jugendtraum" can be stated, linked to abelian varieties and to combinatorial invariants still to be made precise ([@Gabriel] [@Hsu]), to noncommutative geometry [@Manin2], even to a noncommutative class field theory [@Ihara] [@Ihara1]. Behind these subjects lie the description of isolated surface singularities and the McKay correspondence [@Kapranov] [@Ito] [@Milnor3] [@Yau1] [@Van; @de; @Ven1] (p. 72-89) [@Dimca] [@Lamotke] for the resolution by exceptional curves and the rational A-D-E singularities, Arnold's strange duality and the Verlinde formula, Dynkin diagrams [@Gelfand2] [@Draxler] [@Gabriel] [@Ponomarev], quadratic forms [@Ebeling1] [@Dolgachev1] [@Minac], knots and their monodromy [@Lines] [@Vershinin] [@Yosida2] [@Zieschang2], Verma modules and weight systems [@Saito3] [@Saito2] [@Martin], differential Galois theory [@Gray] [@Put] [@Kac1] [@Bertrand], the representation theory of infinite-dimensional algebras and its consequences for the study of special functions useful in physics [@Cahn] [@Dyson] [@Kac] [@Opdam] [@VanAssche] [@Varchenko], more general reciprocity laws [@Fukuhara] [@Fukuhara1] [@Fukuhara2] [@Brylinski1] [@Diaz] [@Halbritter] [@Hida] [@Iyanaga] [@Hiramatsu] [@Berg], and a noncommutative class field theory closely tied to cohomology [@Ihara1] [@Iyanaga] and to the Riemann conjecture [@Beilinson].

A global perspective by way of conclusion
-----------------------------------------

We have described above several avenues for generalizing the Markoff theory: $\bullet $ Through the computation of continued fractions, we brought to light Diophantine equations $M^{s_1s_2}(b,\partial K,u)$ more general than the classical Markoff equation $M^{++}(2,0,0)$. We showed how to solve them, as well as the link with the triangle group and the group $GL(2,\mathbb{Z})$ containing it. $\bullet $ Through the geometric study of punctured tori, we showed that the Markoff equation $M^{++}(2,0,0)$ permits the description of all parabolic punctured tori. We also showed that our equations $M^{s_1s_2}(b,\partial K,u)$ appear in the general study of punctured tori and are linked to pencils of conics and to a free group on two generators existing in this context.
We also found in this context other equations permitting the description of all hyperbolic punctured tori. $\bullet $ Restricting to Riemann surfaces whose conformal covering is the Poincaré half-plane, we showed that a natural generalization of the Markoff theory is Teichmüller theory. This made it possible to make the link with more general Diophantine equations having characteristics analogous to Markoff's, possibly with more variables. We identified a more general framework, that of Riemann domains, where more general results exist. The equation under consideration appears in this context as relating the characters of the representation of the Poincaré group under consideration.

The present chapter has explored what concerns Riemann surfaces, and we have incorporated into each section various perspectives for future work, to which we do not return here. Some important subjects have been left aside, which we mention for the record: $\bullet $ Noncommutative harmonic analysis [@Gross] and all its developments obtained by considering the motions described by points on curves of a Riemann surface. This theory differs from the commutative harmonic analysis developed on the Riemann surface in the spirit of [@Terras] (chapter 3). In various cases this motion can be decomposed into motions along geodesics corresponding to the generators of the Poincaré group of the surface. Such an approach can lead to differential equations whose study we have left aside in the above. On punctured tori we refer to [@Cherry], who drew on Poincaré's original work to describe the possible equations, and to [@Gray] for the deepening of this subject, which led to the Picard–Vessiot and Drach theories as well as to a specific Galois theory.
$\bullet $ The link with the theory of braids and knots has been mentioned several times. The connection with the preceding developments is provided by a construction of Ivanov [@Ivanov1]. Let $\mathcal{M}$ be a Riemann surface with finitely many punctures. Gluing closed disks onto all the punctures of $\mathcal{M}$ produces a compact surface $\mathcal{N}$. Diffeomorphisms $\mathcal{M}\rightarrow \mathcal{M}$ induce diffeomorphisms $\mathcal{N}\rightarrow \mathcal{N}$, whence a canonical surjective homomorphism from $\Gamma_{\mathcal{M}}$ onto $\Gamma_{\mathcal{N}}$. Its kernel is the braid group $B_n(\mathcal{N})$, where $n$ is the number of punctures of the surface $\mathcal{M}$. This makes explicit the link with the study of rational knots, Conway's "rational tangles" ([@Murasugi] ch. 9, [@Kauffman2]), which are tied to continued fractions and are used in certain applications to the recombination of enzymes and DNA [@Sumners] [@Ernst] [@Dessalles] [@Kari] [@Carbone] [@Salomaa]. $\bullet $ The theory of dessins d'enfants [@Belyi] [@Grothendieck] [@Jones] [@Luo1] [@Waldschmidt2] (p. 99) has scarcely been touched upon. Its development in higher dimension is conceivable. Its analogy with various works of astronomers on the crystallized form of the quantum vacuum is illuminating [@Lehoucq] [@Thurston1]. More generally, all the developments presented around Riemann surfaces make it possible to understand contemporary work in physics that gives them a new importance for applications [@Mineev] [@Davies]. We have mentioned the link with solitons [@Moll] (ex. 2, p. 91) [@Belokolos] [@Gesztesy], for which the preceding approach can be generalized.
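Conway's classification mentioned above attaches to each rational tangle the rational number given by a continued-fraction expansion. As a small illustration of the arithmetic side only (the tangle combinatorics itself is not modeled here), the following sketch computes the expansion of $p/q$ by the Euclidean algorithm and rebuilds the fraction from it:

```python
from fractions import Fraction

def continued_fraction(p, q):
    """Continued-fraction expansion [a0; a1, a2, ...] of p/q via Euclid."""
    terms = []
    while q:
        a, r = divmod(p, q)
        terms.append(a)
        p, q = q, r
    return terms

def evaluate(terms):
    """Rebuild the rational from its expansion (sanity check)."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x
```

For instance, $355/113$ has expansion $[3; 7, 16]$, and evaluating that expansion returns $355/113$ exactly.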
Yet the eta invariant seems to have, in the context of solitons, a fundamental importance, as if it were tied to the energy of the quantum vacuum and its infinitely many elementary vibrations, perhaps even to the $1/f$ noise underlying the background noise of the universe created by the Big Bang singularity, which makes its geometry hyperbolic? $$$$ The problems addressed in this chapter essentially concern Teichmüller theory on Riemann surfaces and modular functions. We have sought to understand how they relate to unsolved problems of great current interest: the Riemann hypothesis, the Poincaré conjecture, the Hodge conjecture [@Lewis], the Birch and Swinnerton-Dyer conjecture [@Wiles2], the explanation of the mass gap in the Yang-Mills equations ([@Nash2] (chapter VIII) [@Nakahara] (chapter 10)), etc. Our approach was developed in order to understand the context of these subjects, with the idea of making a link with the methods of spectral analysis. The relations with Hilbert spaces and $C^{*}$-algebras of operators have been explored, even though we remain far short of presenting the necessary mathematical apparatus [@Wells] [@Witten] [@Vladimirov]. Dimension 2 has been privileged because we worked essentially on Riemann surfaces. Yet it presents very important qualitative differences from the higher dimensions, where we saw that Markoff theory can also be generalized. For instance, the link given by the Dehn-Nielsen theorem between homeomorphisms and conformal transformations is no longer so direct in dimensions greater than 2. $$$$ At the end of these reflections, what seems most fascinating to the author is the link with the nature of computation [@Feynman] [@Benioff] [@Shor] [@Penrose] and with algorithmic information theory.
The idea developing today is that computers have a quantum-mechanical model, and that the latter is the natural development of classical computation, just as quantum mechanics succeeds classical mechanics. As if the analogy dear to Weil, cited several times above [@Weil0] [@Deninger], opened onto a much deeper interaction that one might call the quantization of logic, glimpsed moreover by John von Neumann [@Birkhoff] and well described in [@Weaver]. It largely remains to formalize this analogy, which Rolf Berndt summarizes in his survey of the works of E. Kähler by the following correspondences [@Berndt] $$\text{ring }\rightarrow \text{ object,}$$ $$\text{homomorphism }\rightarrow \text{ perception,}$$ $$\text{ideal }\rightarrow \text{ perspective,}$$ $$\text{Galois field }\rightarrow \text{ oscillator.}$$ The last correspondence, with oscillators, may be surprising, but it has been glimpsed in the preceding pages and is clearly apparent in various works such as [@Bismut1] [@Sierra] [@Prasad] [@Borel] [@Rallis] [@Przebinda] [@Mounier]. It allows one to envisage a quantum interpretation of arithmetic, the number $1$ being representable as an oscillator of frequency $\nu $, the number $2$ corresponding to an oscillator of frequency $2\nu $, and so on. One could thus compare the Heisenberg uncertainty relation to Gödel's well-known undecidability result, and imagine that trees constitute a privileged means of concentrating information that is not independent of these questions. Hilbert's tenth problem could itself suggest a comparable explanation [@Matiiassevitch] (ch. 3-4). Weil's analogy could in turn lead to a deeper understanding of the quantum encoding of information [@Benioff] [@Benioff1] [@Shor] [@Delahaye1] [@Preskill] [@NielsenM].
In the field of algorithmic computation, quantization is indeed now at work [@Feynman], just as solitons are at work in the long-distance transmission of information, and optical processing in certain experimental equipment that will be used in the Internet of the future. In the field of representation, Riemann surfaces appear in the theory of the vision of objects [@Sochen] [@Schmitter] and of nonlinear processes [@Planat] (p. 304). The Euler-Poincaré characteristic and Grothendieck rings appear in the most general algebraic structures and in definable sets [@Krajicek], suggesting the possibility of functorially associating a Riemann surface with every object so structured. The technical limits resemble the more fundamental ones just mentioned [@Lloyd], which resonate with the impossibility of predicting the motion of certain mechanical systems [@Moore1] [@Penrose] (p. 202). Should Heisenberg's uncertainty be interpreted as an algorithmic limit imposed by the logico-mathematical means we use to think about physics? In any case, the integral calculus itself has limits that matter in these questions of computability and affect the results of mechanics itself [@Matiiassevitch] (p. 193), given that computation is conceivable without expenditure of energy and without increase of physical entropy [@Delahaye1] (p. 27). There lies a whole global perspective of reflections on the informational and living nature of mathematics, which the author would like to deepen by examining more closely the intuition that mathematics and information theory are one and the same thing. $$$$ We conclude with a thought of Alexandre Grothendieck, expressed in his Esquisse d'un Programme.
It sums up, by itself, the way the author of the present text conceives his own approach to research: $$\begin{aligned} &&\text{``...the movement of the thought that probes and discovers,} \\ &&\text{groping in the half-light more often than not, with sudden breaks of light} \\ &&\text{when some tenacious false, or simply inadequate, image} \\ &&\text{is at last flushed out and brought to light,} \\ &&\text{and the things that seemed askew fall into place,} \\ &&\text{in the mutual harmony that is their own.''}\end{aligned}$$ $$$$                                                  Metz, February 2003. $$$$ [4442]{} W. Abikoff, The uniformization theorem, Amer. Math. Monthly, October 1981, pp. 574-592 W. Abikoff, The real analytic theory of Teichmüller space, Lecture Notes in Mathematics n$^{\circ }820$, Springer Verlag, 1989 R. Adler, L. Flatto, Geodesic flows, interval maps, and symbolic dynamics, Bull. Amer. Math. Soc. 25, 1991, pp. 229-234 R. Adler, Symbolic dynamics and Markov partitions, Bull. Amer. Math. Soc. 35, n$^{\circ }1$, 1998, pp. 1-56 L. Ahlfors, The complex analytic structure of the space of closed Riemann surfaces, Lecture Notes in Mathematics n$^{\circ }820$, Springer Verlag, 1989 L. Ahlfors, Some remarks on Teichmüller’s space of Riemann surfaces, Ann. of Math. 2 n$^{\circ }74$, 1961, pp. 171-191 A. I. Akhieser, The unity of physical theory and its mathematical formalism, Ukrainian Mathematical Journal, vol. 49 n$^{\circ }12$, 1997, pp. 1791-1797 N. I. Akhieser, Elements of the theory of elliptic functions, Translations of mathematical monographs vol. 79, A.M.S., 1990 J. L. Alperin, A Lie approach to finite groups, Groups - Canberra 1989, Lecture Notes in Math. 1456, Springer Verlag, 1990, pp. 1-9 LL. Alsedà, D. Juher, M. P. Mumbrú, A note on the periodic orbits and topological entropy of graph maps, Proc. Amer. Math. Soc. vol. 129 n$^{\circ }10$, pp. 2911-2946 H.
Alzer, On Segre’s theorem on asymmetric diophantine approximation, Abh. Math. Sem. Univ. Hamburg 67, 1997, pp. 195-203 M. H. Amsler, Des surfaces à courbure négative constante dans l’espace à trois dimensions et leurs singularités, Math. Ann. 130, 1955, pp. 234-254 A. A. Andronov, S. E. Chaikin, A. A. Vitt, Theory of oscillators, Pergamon Press, 1966 D. V. Anosov, Geodesic flows on a compact Riemann manifold of negative curvature, Proc. Steklov Math. Inst. 90, 1967 B. N. Apanasov, Conformal geometry of discrete groups and manifolds, De Gruyter expositions in mathematics n$^{\circ }32$, W. de Gruyter, 2000 B. N. Apanasov, Discrete groups in space and uniformization problems, Math. and Appl. 40, Kluwer Academic Publishers, 1991 T. M. Apostol, Modular functions and Dirichlet series in number theory, Graduate texts in math. n$^{\circ }41$, 1976 T. M. Apostol, Introduction to analytic number theory, Undergraduate texts in math., 1984 P. Appell, E. Goursat, Théorie des fonctions algébriques et de leurs intégrales (tomes I et II), Gauthier Villars, 1929 D. Arlettaz, Algebraic $K$-theory of rings from a topological viewpoint, www.unil.ch/ima/ docs/Personnes/darlettaz.html F. M. Arscott, Periodic differential equations, An introduction to Mathieu, Lamé, and allied functions, Pergamon Press, 1964 V. Armitage, A. Rogers, Gauss sums and quantum mechanics, J. Phys. A: Math. Gen. 33 or arXiv:quant-ph/0003107 v1, 2000 J. M. Arnaudiès, J. Bertin, Surfaces de Riemann, équation de Halphen et groupes polyédraux, Ellipses, 2001 J. M. Arnaudiès, Séries entières, séries de Puiseux, séries de Fourier, Ellipse, 1999 V. I. Arnold, Equations différentielles ordinaires, Editions de Moscou, 1974 V. I. Arnold, Les méthodes mathématiques de la mécanique classique, Editions MIR, 1976 V. I. Arnold, Small denominators I, Trans. Amer. Math. Soc. Ser. 2, n$^{\circ }$46, 1965, pp. 213-284; II, Russian Math. Surveys 18-6, 1963, pp. 9-36; III, $idem$, pp. 85-193 V. I.
Arnold, Catastrophe theory, Springer Verlag, 1984 V. I. Arnold, Singularités des applications différentiables (2 tomes), Editions MIR, 1986 V. I. Arnold, Contact geometry, the geometric method of Gibbs thermodynamics, Gibbs lectures, 1989 V. I. Arnold, V. V. Goryunov, O. V. Lyashko, V.A. Vasil’ev, Singularity theory (I et II), Springer Verlag, 1998 P. Arnoux, Le codage du flot géodésique sur la surface modulaire, L’Enseignement Mathématique, t. 40, 1994, pp.29-48 E. F. Assmus, J. D. Key, Designs and their codes, Cambridge Tracts in Mathematics n$^{\circ }103$, Cambridge University Press, 1992 M. Audin, Les systèmes hamiltoniens et leur intégrabilité, S.M.F. et E.D.P. Science, 2001 M. Audin, Intégrabilité et non-intégrabilité des systèmes hamiltoniens (d’après S.Ziglin, LJ. Morales-Ruiz, J. P. Ramis, Séminaire Bourbaki, 53$^{% \grave{e}me}$ année, 2000-2001, n$^{\circ }$884, mars 2001 C. Audouin, B. Guinot, Les fondements de la mesure du temps, Masson, 1998 B. W. Augenstein, Links between physics and set theory, Chaos, Solitons & Fractals vol. 7 n$^{\circ }11$, 1996, pp. 1761-1798 J. C. Baez, baez@galaxy.ucr.edu (pour ADE: http://math.ucr.edu/home/baez/ADE.html) J. C. Baez, J. Dolan, From finite sets to Feynman diagrams, Mathematics unlimited - 2001 and beyond, Björn Engquist and Wilfried Schmid, 2001 C. L. Bajaj, R. L. Holt, A. N. Netravali, Rational parametrizations of non singular real cubic surfaces, 1998, http://citeseer.nj.nec.com/bajaj98rational.html A. Baker, Matrix groups, An introduction to Lie group theory, Springer, 2001 V. Baladi, Comment compter avec les fonctions zêta?, Gazette des Mathématiciens n$^{\circ }47$, 1991, pp. 79-96 A. Baragar, Integral solutions of Markoff-Hurwitz equations, Journal of Number Theory 49, 1994, pp. 27-44 J. Barwise, J. Seligman, Information flow, the logic of distributed systems, Cambridge tracts in theoretical computer science n$^{\circ }$44, 1997 H. Bass, Algebraic $K$- theory, W. A. Benjamin, 1968 H. 
Bateman, Higher transcendental functions, McGraw-Hill, 1955 A. F. Beardon, The geometry of discrete groups, Graduate Texts in Mathematics 91, Springer Verlag, 1983 A. Beauville, Counting rational curves on $K_3$-surfaces, www.dma.ens.fr/ edition/ Publications/ all.1998.html T. Bedford, M. Keane, C. Series, Ergodic theory, symbolic dynamics and hyperbolic spaces, Oxford University Press, 1991 A. Beilinson, V. Drinfeld, Chiral algebras, University of Chicago, 2001 A. Beilinson, V. A. Ginsburg, V. V. Schechtman, Koszul duality, Journal of Geometry and Physics, vol. 5, n$^{\circ }3$, 1988, reedition in “Geometry and Physics”, Essays in honour of I. M. Gelfand, North-Holland, 1991, pp. 317-350 J. Bellissard, The non commutative geometry of aperiodic solids, Proceedings of the 2001 Summer School of Theoretical Physics, “Geometry, Topology, and Quantum Field Theory”, Villa de Leyva, Colombia, 7-30 July 2001, Kluwer, E. D. Belokolos, A. I. Bobenko, V. Z. Enol’skii, A. R. Its, V. B. Matveev, Algebro-geometric approach to non linear integrable equations, Springer Verlag, 1994 G. V. Belyi, On Galois extensions of a maximal cyclotomic field, Math. USSR Izv. 14, 1980, pp. 247-256 R. L. Benedetto, W. M. Goldman, The topology of the relative character varieties of a quadruply punctured sphere, Experimental Mathematics 8:1, 1999, pp. 85-103 P. Benioff, Quantum mechanics and hamiltonian models of computers, Annals of the New York Academy of Sciences vol. 480, 1986, pp. 475-486 P. Benioff, The computer as a physical system: a microscopic model of computers as represented by Turing machines, Journal of Statistical Physics vol. 22, 1980, pp. 563-591 D. J. Benson, Representations and cohomology, 2 volumes, Cambridge studies in advanced mathematics 30-31, Cambridge University Press, 1991 M. Benson, Analytic equivalence of isolated hypersurface singularities defined by homogeneous polynomials, Proc. Symp. Pure Math. vol. 40, A.M.S., 1983 M. C.
Berg, The Fourier-analytic proof of quadratic reciprocity, John Wiley & Sons, 2000 M. Berger, Géométrie (2 volumes), Nathan, 1990 M. Berger, B. Gostiaux, Géométrie différentielle: variétés, courbes et surfaces, PUF, 1987 G. M. Bergman, Everybody knows what a Hopf algebra is, Contemporary Mathematics, vol. 43, 1985, pp. 25-48 R. Berndt, http://www.math.uni-hamburg.de/home/berndt/ M. Berry, Riemann’s zeta function: a model for quantum chaos, Quantum chaos and statistical nuclear physics, Eds T. H. Seligman and H. Nishioka, Springer Lecture Notes in Physics n$^{\circ }263$, 1986, pp. 1-17 L. Bers, F. P. Gardiner, Fricke spaces, Advances in Maths. n$^{\circ }$62, 1986, pp. 249-284 L. Bers, L. Ehrenpreis, Holomorphic convexity of Teichmüller space, Bull. Amer. Math. Soc. n$^{\circ }$70, 1964, pp. 761-764 D. Bertrand, Groupes algébriques linéaires et théorie de Galois différentielle, Cours de 3ème cycle, Université Paris VI, Premier semestre 1985-1986, notes de cours par René Lardon F. R. Beyl, G. Rosenberger, Efficient presentations of $GL(2,\mathbb{Z})$ and $PGL(2,\mathbb{Z})$, London Mathematical Society, LNS 121, Proceedings of Groups-St Andrews 1985, E. Robertson and C. Campbell eds., 1987, pp. 135-137 F. Bien, Constructions of telephone networks by Galois representations, Notices of the American Mathematical Society, vol. 36, n$^{\circ }$1, 1989, pp. 5-22 P. Billingsley, Ergodic theory and information, John Wiley, 1965 E. Binz, W. Schempp, Quantum hologram and relativistic hodogram: Magnetic resonance tomography and gravitational wavelet detection, Casys 2000 (D. Dubois ed.), AIP conference proceedings n$^{\circ }573$, 2001, pp. 98-131 G. Birkhoff, J. von Neumann, The logic of quantum mechanics, Annals of mathematics, 37, 1936, pp. 823-843 J. S. Birman, Braids, links and mapping class groups, Annals of Mathematics Studies, Princeton University Press, 1975 J. M. Bismut, J.
Cheeger, Transgressed Euler classes of $SL(n,\mathbb{Z})$ vector bundles, adiabatic limits of eta invariants and special values of $L$-functions, Ann. Scient. Ec. Norm. Sup., 4ème série, t. 25, 1992, pp. 335-391 J. M. Bismut, Koszul complexes, harmonic oscillators, and the Todd class, J. Amer. Math. Soc. 3(1), 1990, pp. 159-256 A. Blanchard, Les corps non commutatifs, P.U.F., 1972 E. Bombieri, Problems of the millenium: the Riemann hypothesis, www.claymath.org A. Borel, Automorphic forms on $SL_2(\mathbb{R})$, Cambridge tracts in mathematics 130, Cambridge University Press, 1997 Z. I. Borevitch, I. R. Chafarevitch, Théorie des nombres, Gauthier-Villars, 1967 L. A. Borisov, P. E. Gunnells, Toric varieties and modular forms, Invent. Math. 144, 2001, pp. 297-325 J. M. Borwein, P. B. Borwein, Pi and the AGM, John Wiley and sons, 1986 R. Bott, On some recent interactions between mathematics and physics, Canad. Math. Bull. vol. 28(2), 1985, pp. 129-164 G. Bouligand, Cours de géométrie analytique, Vuibert, 1946 N. Bourbaki, Algèbre, Hermann, 1970 N. Bourbaki, Groupes et algèbres de Lie, chapitres 4, 5 et 6, Hermann, 1968 B. Bowditch, Markoff triples and quasifuchsian groups, Proc. London Math. Soc. vol. 77 part 3, 1998, pp. 697-736 R. Bowen, The equidistribution of closed geodesics, Amer. J. Math. vol. 94, 1972, pp. 413-423 G. E. Bredon, Topology and geometry, Graduate Texts in Math., Springer Verlag, 1991 E. Brieskorn, H. Knörrer, Plane algebraic curves, Birkhäuser, 1986 J. Briggs, F. D. Peat, Un miroir turbulent: guide illustré de la théorie du chaos, InterEditions, 1991 M. Brion, Points entiers dans les polytopes convexes, Séminaire Bourbaki n$^{\circ }$ 780, 1993-1994, Astérisque 227, 1995, pp. 145-169 A. Broise, F. Dal’bo, M. Peigné, Etudes spectrales d’opérateurs de transfert et applications, Astérisque, 1996, pp. 112-177 R. Brooks, H. M. Farkas, I. Kra, Number theory, theta identities, and modular curves, Contemporary Mathematics vol. 201, 1997, pp. 125-154 M.
Broué, Codes et formes quadratiques, Séminaire P. Dubreil, 28$^{\grave{e}me}$ année, 1974-1975, n$% ^{\circ }$23, pp. 01 à 03 M. Broué, M. Enguehard, Polynômes de poids de certains codes et fonction théta de certains réseaux, Ann. Scient. Ec. Norm. Sup. 4$^{\grave{e}me}$ t. 5, 1972, pp. 157-181 R. W. Bruggeman, On the distribution of Dedekind sums, Comtemporary Mathematics, Vol 166, 1994, pp. 197-210 R. W. Bruggeman, Families of automorphic forms, Birkhaüser, 1994 G. W. Brumfiel, H. M. Hilden, $SL(2)$ representations of finitely presented groups, Contemporary Mathematics n$% ^{\circ }$ 1987, chapter 10, 1995 J. L. Brylinski, Loop spaces, characteristic classes and geometric quantization, Progress in mathematics vol 107, Birkhaüser, 1993 J. L. Brylinski, Central extensions and reciprocity laws, Cahier de topologie et géométrie différentielle catégorique, volume XXXVIII-3, 1997, pp. 193-215 J. L. Brylinski, Koszul complexes, differential operators, and the Weil-Tate reciprocity law, Journal of algebra n$^{\circ }$ 230, 2000, pp. 89-100 D. A. Buell, Binary quadratics forms: classical theory and modern computations, Springer Verlag, 1989 D. Bump, Automorphic forms and representations, Cambridge series in advanced mathematics 55, Cambridge University Press, 1996 A. Buium, Differential algebra and diophantine geometry, Hermann, 1994 U. Bunke, M. Olbrich, Selberg zeta and theta - a differential operator approach, Mathematical Research vol 83, Akademie Verlag, 1995 U. Bunke, The $\eta $-invariant as a lagrangian of a topological quantum field, talk given at Wendisch Reitz near Berlin, September 1993, http:// www.uni-math.gwdg.de/ bunke/ linkstopapers.html J. O. Button, The uniqueness of the prime Markoff numbers, J. London Math. Soc. (2), n$^{\circ }$58, 1998, pp. 9-17 J. O. Button, Markoff numbers, principal ideals and continued fraction expansions, J. Number Theory, vol 87, n$^{\circ }$81, march 2001 R. N. Cahn, http://www.physics.lbt.gov/.rncahn/book.html J. 
Calais, Eléments de théorie des groupes, P.U.F, 2ème édition 1996 A. Cannas da Silva, Lectures on symplectic geometry, Lecture Notes in Mathematics n$^{\circ }1764$, Springer Verlag, 2001 A. Carbone, M. Gromov, Mathematical slices of molecular biology, IHES/M/01/03, janvier 2001, Gazette des Mathématiciens, supplément au numéro 88, S.M.F., 2001 A. Carbone, M. Gromov, P. Prusinkiewicz, Pattern formation in biology, vision and dynamics, World Scientific Publishing Co, 2000 J. R. Carson, Electric circuit theory and the operational calculus, New York, 1926 H. Cartan, Théorie élémentaire des fonctions analytiques d’une ou plusieurs variables complexes, Hermann, 1961 J. S. Carter, How surfaces intersect in space (2ed.), Series on knots and everything vol. 2, World Scientific, 1995 P. Cartier, C. De Witt-Morette, Intégration fonctionnelle, éléments d’axiomatique, C. R. Acad. Sci. Paris, t. 316, Série II, 1993, pp. 733-738 P. Cartier, Développements récents sur les groupes de tresses, applications à la topologie et à l’algèbre, Séminaire Bourbaki, 42ème année n$^{\circ }716$, 1989-90, pp. 1-42 P. Cartier, Des nombres premiers à la géométrie algébrique, une brève histoire de la fonction zéta, Séminaire d’histoire des mathématiques de l’Institut Poincaré, 23 janvier 1991 L. Casetti, M. Pettini, E. D. G. Cohen, Geometric approach to hamiltonian dynamics and statistical mechanics, arXiv:cond-mat/9912092v1, 6 décembre 1999 A. Cassa, Quantum physical systems as classical systems, Journal of Mathematical Physics, vol. 42, n$^{\circ }11$, Nov. 2001, pp. 5143-5149 J. W. S. Cassels, An introduction to diophantine approximation, Cambridge Tracts in Math. Physics, n$^{\circ }$45, Cambridge University Press, 1957 J. W. S. Cassels, An introduction to the geometry of numbers, Springer Verlag, 1971 J. L. Cathelineau, Homologie du groupe linéaire et polylogarithmes, Séminaire Bourbaki n$^{\circ }772$, 1993, pp. 01-23 L. S.
Charlap, Bieberbach groups and flats manifolds, Springer Verlag, 1986 G. J. Chaitin, Toward a mathematical definition of life, in The maximal entropy formalism (ed. R. D. Levine, M. Tribus), MIT Press, 1979, pp. 477-498 G. J. Chaitin, Algorithmic information theory, Cambridge University Press, 1987 K. Chandrasekharan, Elliptic functions, Springer Verlag, 1985 V. Chari, A. N. Pressley, A guide to quantum groups, Cambridge University Press, 1994 R. Charney, M. Davis, When is a Coxeter system determined by its Coxeter group?, J. London Math. Soc. (2)61, 2000, pp.441-461 E. Charpentier, N. Nikolski, Leçons de mathématiques d’aujourd’hui, Cassini, 2000 S. Chase, M. Sweedler, Hopf algebras and Galois theory, Springer Verlag, 1969 T. M. Cherry, Analytic quasi-periodic curves of discontinuous type on a torus, Proceedings of the London Math. Soc., Serie 2 vol 44 n$^{\circ }$2210, 1938, pp. 175-215 C. Chevalley, Introduction to the theory of algebraic functions of one variable, Math. surveys, A.M.S., 1951 S. Chowla, The Riemann hypothesis and Hilbert’s tenth problem, Blackie & son Ltd, 1965 D. V. Chudnovski, G. V. Chudnovski, Computational problems in arithmetic of linear differential equations, some diophantine applications, Number theory New York 1985-1988, (D.V. et G.V. Chudnovski, M. B. Natanson, H. Cohn editors) Lecture Notes in Mathematics 1383, Springer Verlag, 1989 R. C. Churchill, Two-generator subgroups of $SL(2,\mathbb{C})$ and the hypergeometric, Riemann and Lamé equations, J. Symbolic Comput. 28 (4-5), 1999, pp. 521-545 D. E. Cohen, Combinatorial group theory, a topological approach, J. London Math. Soc. Student Texts n$^{\circ }$14, Cambridge University Press, 1989 H. Cohen, A course in computational algebraic number theory, Springer-Verlag, 1991 M. Cohen, W. Metzler, A. Zimmermann, What does a basis of $F(a,b)$ look like?, Math. Ann. n$^{\circ }$257, 1981, pp. 435-445 P. 
Cohen, Sur la mécanique statistique d’après les travaux de Bost-Connes, Journées académiques de Lille, 9-10 mars 1998 H. Cohn, Approach to Markoff minimal forms through modular functions, Ann. of Math., vol 61, n$^{\circ }$61, 1955, pp. 1-12 H. Cohn, Representation of Markoff’s binary quadratic forms by geodesics on a perforated torus, Acta Arithmetica XVIII, 1971, pp. 123-136 H. Cohn, Markoff forms and primitive words, Math. Ann. n$^{\circ }$196, 1972, pp. 8-22 H. Cohn, Minimal geodesics on Fricke’s torus covering, Riemann surfaces and related topics, Proceedings of the 1978 Stony Brooks Conference, Princeton University Press, 1980 H. Cohn, Remarks on the cyclotomic Fricke groups, Kleinian groups and related topics, Proceedings Oaxtepec 1981, Lectures Notes in Mathematics n$^{\circ }$971, Springer Verlag, 1982 H. Cohn, Markoff geodesics in matrix theory, Lecture Notes in Pure and Applied Mathematics, n$^{\circ }$147, Marcel Dekker, 1992, pp. 69-92 H. Cohn, Conformal mapping on Riemann surfaces, Dover, 1980 H. Cohn, Introduction to the construction of class fields, Cambridge Studies in Advanced Mathematics n$^{\circ }6$, Cambridge University Press, 1985 H. Cohn, Minimality bounds for traces of Markoff matrices, Canadian Mathematical Society Conference Proceedings, vol. 15, 1995, pp. 109-121 H. Cohn, Some direct limits of primitive homotopy words and of Markoff geodesics, Annals of Mathematics Studies n$^{\circ }79$, 1974, pp. 81-98 Y. Colin de Verdière, Un exemple de chaos classique et quantique: les surfaces de Riemann, chapitre 2 de Turbulence et déterminisme (publié sous la direction M. Lesieur), PUG, 1998 J. F. Colombeau, Multiplication of distributions, Bull. Amer. Math. Soc. 23 n$^{\circ }2$, 1990, pp. 251-268 J. L. Colliot Thélène, Les grands thèmes de François Châtelet, L’Enseignement Mathématique, $% 2^{\grave{e}me}$ série, 1988, pp. 387-405 J. L. 
Colliot Thélène, L’arithmétique des variétés rationnelles, Annales de la Faculté des Sciences de Toulouse, 1992, pp. 295-335 I. Connell, Elliptic curves handbook, McGill University, Montreal, 1996 A. Connes, Géométrie non commutative, InterEditions, Paris, 1990, version étendue: Non commutative geometry, Academic Press, 1994 A. Connes, Brisures de symétrie spontanée et géométrie du point de vue spectral, Séminaire Bourbaki n$% ^{\circ }816$, 1995-1996, pp. 1-37 A. Connes, Trace formula in non commutative geometry and the zeros of the Riemann zeta function, http://www.esi.ac.at, 8 March, 1999, Sel. Math. New Ser. 5, 1999, pp. 29-106 A. Connes, Non commutative geometry, year 2000, arXiv:math.QA/0011193 23/11/2000 A. Connes, Indice des sous facteurs, algèbres de Hecke et théorie des noeuds, Séminaire Bourbaki n$^{\circ }647$, 1984-1985, pp. 1-20 J. B. Bost, A. Connes, Hecke algebras, type III factors and phase transitions with spontaneous symmetry breaking in number theory, Sel. Math., vol.1 n$^{\circ }3$, 1995, pp. 411-457 A. Connes, Explicit formulas as trace formulas, quantized calculus, and spectral interpretation of the zeros of the Riemann zeta function, www.mpim-bonn.mpg.de/html/services/ activities/ dt99/MPI-1999650-d.ps J. H. Conway, The sensual (quadratic) form, The Carus Monograph (M.A.A.) n$^{\circ }$26, 1997 J. H. Conway, An enumeration of knots and links and some of their algebraic properties, Computational problems in abstract algebra (D. Welsh ed.), Pergamon Press, 1970, pp. 329-358 J. H. Conway, N. J. A Sloane, Sphere packings, lattices and groups, Springer Verlag, 1993 J. H. Conway, R. T. Curtis, S. P. Norton, R. A.Parker, R. A. Wilson, Atlas of finite groups, Clarendon Press, 1985 G. Cornell, J. H. Silverman, Arithmetic geometry, Springer Verlag, 1986 S. C. Couthino, A primer of algebraic D-modules, London Mathematical Society Student Texts 33, Cambridge University Press, Cambridge, 1995 D. A. 
Cox, Primes of the form $x^2+ny^2$, Fermat, Class field theory, and complex multiplication, John Wiley and sons, 1989 D. A. Cox, What is a toric variety? http://www.amherst.edu/dacox/ D. A. Cox, Recent developments in toric geometry, Algebraic Geometry Santa Cruz 1995 (J. Kollar, R. Lazarsfeld, D. Morrison eds.), Proc. Symp. Pure Math., AMS, 1997, pp. 389-436 D. A. Cox, Update on toric geometry, http://emis.kaist.ac.kr/journals/SC/2002/6/pdf/ smf\_sem.cong\_6\_1.41.pdf H. S. M. Coxeter, W. O. J. Moser, Generators and relations for discrete groups, Ergebnisse des Mathematik und ihrer Grenzgebeite, 14, Springer Verlag, 1980 R. Crandall, C. Pomerance, Prime numbers-a computational perspective, Springer, 2001 D. J. Crisp and W. Moran, Single self-intersection geodesics and the Markoff spectrum, Number theory with an emphasis on the Markoff spectrum (A.D. Pollington and W. Moran ed.), Lectures Notes in Pure and Applied Mathematics, Dekker, n$^{\circ }147$, 1993, pp. 83-93 R. H. Crowell, R. H. Fox, Introduction to knot theory, Blaisdell Publishing Co, 1963 R. Cuculière, Représentation diophantienne des nombres de Fibonacci, Bulletin de l’APMEP, février 1984 T. W. Cusick and M.E. Flahive, The Markoff and Lagrange spectra, Mathematical Surveys and Monographs n$^{\circ }$ 30, A.M.S., 1989 T. W. Cusick, The connection between the Lagrange and Markoff spectra, Duke Math. Journal, 1975, pp. 507-517 T. W. Cusick, On Perrine’s generalized Markoff equation, Aequationes Mathematicae n$^{\circ }$3, 1993, pp. 203-211 T. W. Cusick, C. Ding, A. Renwall, Stream ciphers and number theory, North Holland Mathematical Library 55, 1998 S. D. Cutkosky, H. Srinivasan, The algebraic fundamental group of a curve singularity, Journal of Algebra 230, 2000, pp.101-126 P. Cvitanovic, Classical and quantum chaos: a cyclist treatise, 1998, http://www.nbi.dk/ ChaosBook/ D’Arcy W. Thompson, Formes et croissance, traduction par D. Teyssié, Seuil, 1994 A. Dahan Dalmedico, J. L. Chabert, K. 
Chemla, Chaos et déterminisme, Seuil, 1992 V. I. Danilov, The geometry of toric varieties, Russian Math. Surveys 33:2, 1978, pp. 97-154 A. Das, Integrable models, Lecture Notes in Physics n$^{\circ }30$, World Scientific, 1989 P. Davies, La nouvelle physique, Flammarion, 1993 M. Davis, Computability and unsolvability, reedition Dover, 1982 M. Dehn, Papers on group theory and topology, Springer Verlag, 1987 J. P. Delahaye, Information noyée, information cachée, Pour la Science n$^{\circ }229$, Novembre 1996, pp. 142-146 J. P. Delahaye, Information complexité et hasard, Hermès, 1999 J. P. Delahaye, L’intelligence et le calcul (de Gödel aux ordinateurs quantiques), Belin/Pour la Science, 2002 R. Dedekind, Gesammelte Math. Werke 1, Vieweg, 1930-1932, pp. 174-201 P. Deligne, La conjecture de Weil, (I) Pub. Math. IHES 43, 1974, pp. 273-308, (II) Pub. Math. IHES 52, 1980, pp. 137-252 M. Demazure, Identités de MacDonald, Séminaire Bourbaki n$^{\circ }$ 483, 1975-1976, pp. 1-11 E. E. Demidov, Some aspects of the theory of quantum groups, Russian Math. Surveys 48:6, 1993, pp. 41-79 C. Deninger, Some analogies between number theory and dynamical systems on foliated spaces, Documenta mathematica, Extra volume ICM, 1998, pp. 163-186 G. De Rham, Sur les polygones générateurs des groupes fuchsiens, Enseignement Mathématique 17, 1971, pp. 49-61 J. L. Dessalles, L’ordinateur génétique, Hermès, 1996 D. Deutsch, A. Ekert, R. Lupacchini, Machine, logic and quantum physics, The Bulletin of Symbolic Logic, vol. 6 n$^{\circ }3$, 2000, pp. 265-283 D. Deutsch, Is there a fundamental bound on the rate at which information can be processed? Phys. Rev. Lett. 42, 1982, pp. 286-288 D. Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer, Proc. Roy. Soc. Lond. Ser. A, vol. 400, 1985, pp. 96-117 D. W. Sumners, Lifting the curtain: using topology to probe the hidden action of enzymes, Notices Amer. Math. Soc. 528, 1995, pp. 5-42 R. Diaz, S.
Robins, The Ehrhart polynomial of a lattice polytope, Annals of Mathematics 145, 1997, pp. 503-518 W. Dicks, M. J. Dunwoody, Groups acting on graphs, Cambridge Studies in Advanced Mathematics n$^{\circ }$17, Cambridge University Press, 1989 L. E. Dickson, History of the theory of numbers, Chelsea reprints, New York, 1992 T. Tom Dieck, Transformation groups, Walter de Gruyter, 1987 B. Dietz, On the gaps of the Lagrange spectrum, Acta Arithmetica 45, 1985, pp. 59-64 J. Dieudonné, Panorama des mathématiques pures - le choix bourbachique, Gauthier Villars, 1977 J. Dieudonné, Pour l’honneur de l’esprit humain - les mathématiques d’aujourd’hui, Hachette Pluriel, 1987 J. Dieudonné, Abrégé d’histoire des mathématiques, Tome I et II, Hermann, 1978 J. Dieudonné, Calcul infinitésimal, Hermann, 1980 A. Dimca, Singularities and topology of hypersurfaces, Springer Verlag, 1992 P. G. L. Dirichlet, Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données, J. Reine Angew. Math. n$^{\circ }4$, 1829, pp. 157-169 P. G. L. Dirichlet, Lectures on number theory, History of Mathematics Sources vol. 16, A.M.S.-L.M.S., 1990 J. Dixmier, Quelques aspects de la théorie des invariants, Conférences faites à l’Université de Pennsylvanie, 1987, La Gazette des Mathématiciens n$^{\circ }43$, Janv. 1990, pp. 39-64 M. P. Do Carmo, Riemannian geometry, Birkhäuser, 1992 M. M. Dodson, Exceptional sets in dynamical systems and diophantine approximation, Preprint, mmq1/at/york.ac.uk M. M. Dodson, J. A. G. Vickers editors, Number theory and dynamical systems, London Mathematical Society Lecture Notes Series 134, Cambridge University Press, 1989 C. T. J. Dodson, Categories, bundles and space-time topology, Kluwer, 1988 I. Dolgachev, Lectures on modular forms, Fall 1997/98, ftp.math.lsa.umich.edu (sur la même adresse et du même auteur : Introduction to physics, Introduction to algebraic geometry, Introduction to string theory.) I.
Dolgachev, Integral quadratic forms: application to algebraic geometry (after V. Nikulin), Séminaire Bourbaki n$^{\circ }611$, 1982-1983, pp. 1-33 R. et A. Douady, Algèbres et théories galoisiennes, Cedic/Nathan, 1979 B. Doubrovine, S. Novikov, A. Fomenko, Géométrie contemporaine - méthodes et applications, 3 tomes, Mir, 1979 P. Dräxler, G. O. Michler, C. M. Ringel, Computational methods for representations of groups and algebras, Progress in Mathematics, vol. 173, Birkhäuser, 1999 J-M. Drezet, J. Le Potier, Fibrés stables et fibrés exceptionnels sur $P^2$, Ann. Sci. Ecole Normale Sup. (4) n$^{\circ }18$, 1985, pp. 193-243 J-M. Drezet, Sur les équations vérifiées par les invariants des fibrés exceptionnels, Forum Math. 8, 1996, pp. 237-265, http://www.math.jussieu.fr/drezet/CV/CV.html J-M. Drezet, Fibrés exceptionnels et suite spectrale de Beilinson sur $P_2(\mathbb{C})$, Math. Ann. 275, 1986, pp. 25-48 B. Dubrovin, I. Krichever, S. Novikov, Schrödinger equations in magnetic fields and Riemann surfaces, Sov. Math. Dokl. 17, 1976, pp. 947-951 P. Du Val, Elliptic functions and elliptic curves, London Mathematical Society Lecture Note Series 9, Cambridge University Press, 1993 F. Dyson, Missed opportunities, Bull. Amer. Math. Soc. vol. 78 n$^{\circ }5$, 1972, pp. 635-652 C. J. Earle, Teichmüller spaces as complex manifolds, Conference at Warwick, 1992 W. Ebeling, Lattices and codes, a course partially based on lectures by F. Hirzebruch, Advanced Lectures in Mathematics, Vieweg, 1994 W. Ebeling, Quadratische Formen und Monodromiegruppen von Singularitäten, Math. Ann. 232, 1978, pp. 463-498 B. Candelpergher, J. C. Nosmas, F. Pham, Approche de la résurgence, Hermann, 1993 H. M. Edwards, Divisor theory, Birkhäuser, 1990 M. Efimov, Géométrie supérieure, Editions MIR, 1981 S. Eilenberg, N. Steenrod, Foundations of algebraic topology, Princeton Univ. Press, 1952 D. Eisenbud, W.
Neumann, Three dimensional link theory and invariants of plane curve singularities, Annals of Mathematics Studies 110, Princeton University Press, 1985 G. Eisenstein, Mathematische Werke (2 tomes), Chelsea, 1975 M. H. El Khuti, Cubic surfaces of Markov type, Math USSR Sbornik, vol 22, n$^{\circ }3$, 1974, pp. 331-346 J. Elstrodt, F. Grunewald, J. Mennicke, Groups acting on hyperbolic spaces, Springer Verlag, 1998 N. Eriksson, $q$-series, elliptic curves and odd values of the partition function, I.J.M.M.S. vol 22 n$^{\circ }1$, 1999, pp. 55-65 C. Ernst, D. W. Sumners, A calculus for rational tangles - application to DNA recombination, Math. Proc. Camb. Phil. Soc. n$^{\circ }108$, 1990, pp. 489-515 L. Euler, Eléments d’algèbre (tome 2: Analyse indéterminée), Bachelier, 1807, pp. 323-375 L. Euler, Opera Omnia 1 II, Teubner et Füssli, 1911, pp. 390-398 L. Euler, Introduction à l’analyse infinitésimale, Barrois, 1796, nouvelle édition ACL-éditions, 1987 L. D. Faddeev, A hamiltonian interpretation of the inverse scattering method, Solitons (R. K. Bullough, P. J. Caudrey ed.), Topics in Current Physics, Springer Verlag, 1980, pp. 339-354 A. Faisant, L’équation diophantienne du second degré, Hermann, 1991 H. M. Farkas, I. Kra, Riemann surfaces, Graduate Texts in Math. 71, Springer Verlag, 1991 H. M. Farkas, I. Kra, Theta constants, Riemann surfaces and the modular group, Graduate Studies in Mathematics vol 37, A.M.S., 2001 A. Fathi, F. Laudenbach, V. Poenaru, Travaux de Thurston sur les surfaces, Astérisque 66-67, 1979 B. Fedosov, Deformation, quantization and index theory, Akademie Verlag, 1996 J. Feldman, H. Knörrer, E. Trubowitz, There is no two dimensional analogue of Lamé’s equation, Math. Ann. 294, 1992, pp. 295-324 J. Ferrand, The action of conformal transformations on a Riemann manifold, Math. Ann. Band 304 H2, 1996, pp. 277-292 R. Feynman, Quantum mechanical computers, Optics News vol. 11, 1985, pp. 11-20 K. H. Fichtner, M.
Ohya, Quantum teleportation with entangled states given by beam splittings, Commun. Math. Phys., 2001, pp. 229-247 Yu. Yu. Finkel’shtein, Klein polygons and reduced regular continued fractions, Russian Math. Surveys, 1993, pp. 198-200 G. Fischer, Quantization induced by geometry, Differential geometry and applications, Proc. Conf. Aug 28 - Sept 1, Brno 1996, pp. 559-565 G. Fischer, Mathematische Modelle, Bildband, Braunschweig, Germany, Vieweg, 1986 (cité par http://mathworld.wolfram.com/RationalDoublePoint.html) L. R. Ford, Automorphic functions, Chelsea Publishing Company, reprint 1972 J. L. Dyer, E. Formanek, The automorphism group of a free group is complete, J. London Math. Soc. (2), 11, 1975, pp. 181-190 G. D. Forney, On the duality of coding and quantizing, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol 14, A.M.S., 1993, pp. 1-14 O. Forster, Lectures on Riemann surfaces, Springer Verlag, 1981 J. Fourier, La théorie analytique de la chaleur, Firmin Didot, Paris, 1822 J. Frenckel, Géométrie pour l’élève professeur, Hermann, 1973 R. Fricke, Über die Theorie der automorphen Modulgruppen, Gött. Nach., 1896, pp. 91-101 D. Friedan, S. Shenker, The analytic geometry of two-dimensional conformal field theory, Nuclear Physics B281, 1987, pp. 509-545 R. Friedman, Algebraic surfaces and holomorphic vector bundles, Universitext, Springer Verlag, 1998 R. Fricke, F. Klein, Vorlesungen über die Theorie der automorphen Funktionen, Teubner, Leipzig, 1897 G. Frobenius, Über die Markoffschen Zahlen, Preuss. Akad. Wiss. Sitzungsber., 1913, pp. 458-487 J. Fuchs, Affine Lie algebras and quantum groups, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 1995 S. Fukuhara, Modular forms, generalized Dedekind symbols and period polynomials, Math. Ann. 310, 1998, pp. 83-101 S. Fukuhara, Generalized Dedekind symbols associated with the Eisenstein series, Proc. Amer. Math. Soc. vol. 127 n$^{\circ }9$, pp. 2561-2568 S. Fukuhara, Y.
Matsumoto, N. Yui, Non commutative polynomial reciprocity formulae, International Journal of Mathematics, vol. 12 n$^{\circ }8$, 2001, pp. 973-986 W. Fulton, Algebraic curves, W. A. Benjamin, 1969 P. Gabriel, A. V. Roiter, Representations of finite-dimensional algebras, Springer Verlag, 1997 J. M. Borwein, F. G. Garvan, Approximations to $\pi $ via the Dedekind eta functions, Canad. Math. Soc. Conference Proceedings, vol. 20, Organic mathematics, 1997, pp. 89-114 C. F. Gauss, Recherches arithmétiques, traduction Poullet Delisle, Courcier, 1807 C. F. Gauss, Recherches générales sur les surfaces courbes, traduction M. E. Roger, Albert Blanchard, 1967 K. Gawedzki, Conformal field theory, Séminaire Bourbaki n$^{\circ }704$, 1988-1989, pp. 1-32 M. E. Gbur, On the lower Markov spectrum, Monat. für Math. 81, 1976, pp. 95-107 S. Gelbart, An elementary introduction to the Langlands’ program, Bull. Amer. Math. Soc. vol 10 n$^{\circ }2$, 1984, pp. 177-219 I. M. Gel’fand, M. I. Graev, I. I. Pyatetskii-Shapiro, Representation theory and automorphic forms, Academic Press, 1990 J. M. Kantor traducteur pour le Bulletin de l’APMEP 1990 de l’article ”Comment je suis devenu mathématicien”, entretien de I. M. Gelfand avec V. Retah et A. Sossinski publié dans la revue Quant, 1989, I. M. Gel’fand, I. N. Bernstein, V. A. Ponomarev, Coxeter functors and Gabriel’s theorem, Russ. Math. Surv. 28(2), 1973, pp. 17-32 I. M. Gelfand, Yu. I. Manin, Homological algebra, Springer Verlag, 1994 S. Gervais, Presentation and central extensions of mapping class groups, Trans. Amer. Math. Soc. vol. 348, n$^{\circ }8$, 1996, pp. 3097-3132 F. Gesztesy, H. Holden, Hierarchies of soliton equations and their algebro-geometric solutions, Cambridge Studies in Advanced Mathematics 79, 2003 J. Gilman, Two-generator discrete subgroups of $PSL(2,\mathbb{R})$, Memoirs of the American Mathematical Society n$^{\circ }561$, September 1995 P. B.
Gilkey, Invariance theory, the heat equation and the Atiyah-Singer index theorem, Publish or Perish, 1984, electronic reprint, 1996 P. B. Gilkey, The geometry of spherical space form groups, World Scientific vol 7, 1989 C. Godbillon, Eléments de topologie algébrique, Hermann, 1971 K. Gödel, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, 1931, Monatshefte für Mathematik und Physik, vol. 38, 1931, pp. 173-198, traduction dans Le théorème de Gödel, Editions du Seuil, 1989 J. R. Goldman, Knots, tangles and electrical networks, Advances in Applied Mathematics n$^{\circ }14$, 1993, pp. 267-306 W. M. Goldman, Ergodic theory on moduli spaces, Annals of Math. 146, 1997, pp. 475-507 V. A. Golubeva, Some problems in the analytic theory of Feynman integrals, Russian Math. Surveys 31 n$^{\circ }2$, 1976, pp. 135-220 R. E. Gomory, An algorithm for integer solutions to linear programming, Recent advances in Mathematical Programming (Graves and Wolf eds.), Mc Graw Hill, 1963, pp. 269-302 F. Gonzales Acuna, J. M. Montesinos Amilibia, On the character variety of group representations in $SL(2,\mathbb{C})$ and $PSL(2,\mathbb{C})$, Math. Z. 214, 1993, pp. 627-652 V. D. Goppa, Geometry and codes, Kluwer Academic Publishers, 1988 A. L. Gorodentsev, A. N. Rudakov, Exceptional vector bundles on projective spaces, Duke Mathematical Journal vol. 54 n$^{\circ }1$, 1987, pp. 115-130 A. L. Gorodentsev, Helix theory and nonsymmetrical bilinear forms, Algebraic geometry and its applications, Proceedings of the 8$^{th}$ Algebraic Geometry Conference, Yaroslavl’ 1992, publication of the Steklov Institute of Mathematics (A. Tikhomirov, A. Tyurin, editors), Vieweg, 1994 D. Goss, What is a Shtuka?, Notices of the American Mathematical Society vol. 50, n$^{\circ }1$, January 2003, pp. 36-37 M. Gotay, J. A. Isenberg, La symplectification de la science, Gazette des mathématiciens n$^{\circ }54$, 1992 (voir également l’article de C.
Viterbo relatif à la topologie symplectique) A. Gramain, Topologie des surfaces, P.U.F., 1971 Y. Grandati, Eléments d’introduction à l’invariance conforme, Centre de Recherches Nucléaires de Strasbourg, CRN-PHTH/91-13, 1991 D. Grant, Some product formulas for theta functions in one and two variables, Acta Arithmetica 102.3, 2002, p. 223 H. Grauert, R. Remmert, Theory of Stein spaces, Grundlehren Math. Wiss. vol. 236, Springer Verlag, 1979 J. Gray, Linear differential equations and group theory from Riemann to Poincaré, Birkhäuser, 1986 M. A. Grayson, The heat equation shrinks embedded plane curves to round points, J. Differential Geometry 26, 1987, pp. 285-314 H. B. Griffiths, Surfaces, Cambridge University Press, second edition 1980 C. Grosche, F. Steiner, Handbook of Feynman path integrals, Springer Verlag, 1998 K. I. Gross, On the evolution of non commutative harmonic analysis, Amer. Math. Monthly 85, 1978, pp. 525-547 A. Grothendieck, Esquisse d’un programme, Preprint 1985 A. Grothendieck, Sur la classification des fibrés holomorphes sur la sphère de Riemann, Amer. J. Math. 79, 1957, pp. 121-138 J. Guenot, R. Narasimhan, Introduction à la théorie des surfaces de Riemann, Monographie n$^{\circ }23$, L’Enseignement Mathématique, 1976 A. Guichardet, Leçons sur certaines algèbres topologiques, Dunod, 1967 A. Guichardet, Groupes quantiques, Interéditions/CNRS, 1995 R. C. Gunning, Lectures on modular forms, Annals of Mathematics Studies, Princeton University Press, 1962 R. C. Gunning, Lectures on Riemann surfaces, Princeton University Press, 1966 C. H. Gupta, Around automorphisms of relatively free groups, Algebra (some recent advances), I. B. S. Passi Ed., Birkhäuser, 1999, pp. 63-74 M. C. Gutzwiller, Chaos in classical and quantum mechanics, IAM 1, Springer Verlag, 1990 M. C. Gutzwiller, The origin of the trace formula, Classical, semiclassical and quantum dynamics in atoms (Editors H. Friedrich et B.
Eckhardt), Lecture Notes in Physics n$^{\circ }485$, Springer Verlag, 1997, pp. 8-28 H. Haefliger, F. Laudenbach, V. Poenaru, L. Siebenmann, Echos du colloque Jean Cerf, Gazette des mathématiciens n$^{\circ }64$, 1995, pp. 3-15 U. Halbritter, Some new reciprocity formulas for generalized Dedekind sums, Results in Mathematics vol. 8, 1985, pp. 21-46 R. Hartshorne, Algebraic geometry, Graduate Texts in Math. n$^{\circ }52$, Springer Verlag, 1977 W. J. Harvey, Spaces of discrete groups, Discrete groups and automorphic functions, Academic Press, 1977 A. Haas, Diophantine approximation on hyperbolic Riemann surfaces, Acta Mathematica, 156, 1986, pp. 33-82 A. Hatcher, Pants decompositions of surfaces, www.math.cornell.edu/hatcher/ J. C. Hausmann, Sur l’usage des critères pour reconnaître un groupe libre, un produit amalgamé ou une HNN-extension, L’Enseignement Mathématique n$^{\circ }27$, 1981, pp. 221-242 S. Hawking et autres, La mort de Newton, Maisonneuve et Larose, 1996 O. Heaviside, On operators in mathematical physics, Proc. Royal Society London, 52, 1893, pp. 504-529 et 54, 1894, pp. 105-143 E. Hecke, Über die Bestimmung Dirichletscher Reihen durch ihre Funktionalgleichung, Math. Ann. 112, 1936, pp. 664-699 Y. Hellegouarch, Invitation aux mathématiques de Fermat-Wiles, Masson, 1997 A. Henderson, The twenty seven lines upon the cubic surface, Cambridge Univ. Press, 1911 Ch. Hermite, Troisième lettre à Jacobi, 6 août 1845, Oeuvres, Gauthier Villars, 1905, pp. 100-121 J. N. Mather et autres, Michael R. Herman, Gazette des Mathématiciens, Avril 2001, n$^{\circ }$ 88, pp. 51-94 H. Hida, Geometric reciprocity laws, www.math.ucla.edu/hida/285b.1.01s C. J. Hightower, The minima of indefinite binary quadratic forms, J. of Number Theory n$^{\circ }$2, 1970, pp. 364-378 G. Higman, B. H. Neumann, H. Neumann, Embedding theorems for groups, J. London Math. Soc. 24, 1949, pp. 247-254 D.
Hilbert, Sur les problèmes futurs des mathématiques, Compte rendu du deuxième congrès de mathématiques du 6 au 12 août 1900, Göttinger Nachrichten, 1900, Réimpression J. Gabay, 1990 D. Hilbert, S. Cohn-Vossen, Geometry and the imagination, Chelsea Publishing Company, reedition 1990 D. Hilbert, Les fondements de la géométrie, Edition critique par P. Rossier, Dunod, 1971 D. Hilbert, P. Bernays, Fondements des mathématiques (2 tomes), L’Harmattan, 2002 D. Hilbert, Theory of algebraic invariants, Cambridge Mathematical Library, Cambridge University Press, 1993 D. Hilbert, Théorie des corps de nombres algébriques, Traduction A. Lévy et Th. Got, Hermann, 1913 M. Hilsum, L’invariant $\eta $ pour les variétés lipschitziennes, J. of Diff. Geom. vol. 55 n$^{\circ }1$, 2000, pp. 1-42 T. Hiramatsu, Theory of automorphic forms of weight 1 - Investigations in number theory, Advances in Pure Mathematics 13, 1988, pp. 503-584 M. Hirsch, Differential topology, Springer Verlag, 1976 F. Hirzebruch, Hilbert modular surfaces, Ens. Math. n$^{\circ }$19, 1973, pp. 183-281 F. Hirzebruch, D. Zagier, The Atiyah-Singer theorem and elementary number theory, Mathematics Lecture Series n$^{\circ }3$, Publish or Perish Inc., 1974, pp. 92-165 F. Hirzebruch, T. Berger, R. Jung, Manifolds and modular forms, Vieweg, 1992 G. Hjorth, A. S. Kechris, The complexity of the classification of Riemann surfaces and complex manifolds, Illinois Journal of Mathematics, vol. 44 n$^{\circ }1$, 2000, pp. 104-137 A. E. Bryson Jr., Yu-Chi Ho, Applied optimal control, Hemisphere Publishing Co, 1975 G. Hochschild, T. Nakayama, Cohomology in class field theory, Ann. of Math. 55, 1952, pp. 348-366 M. Hochster, Prime ideal structure in commutative rings, Trans. Amer. Math. Soc. n$^{\circ }142$, 1969, pp. 43-60 A. S. Holevo, Quantum coding theorems, Russian Math. Surveys 53:6, 1998, pp. 1295-1331 P. Alexandroff, H. Hopf, Topologie, Springer Verlag, 1935 L.
Hopf, Introduction to the differential equations of physics, Dover, 1948 H. Hopf, Differential geometry in the large, Lecture Notes in Mathematics n$^{\circ }$1000, Springer Verlag, 1983 R. D. Horowitz, Characters of free groups represented in the two-dimensional special linear group, Comm. in Pure and Applied Math. vol XXV, 1972, pp. 635-649 R. Howe, Eng Chye Tan, Non abelian harmonic analysis, applications of $SL(2,\mathbb{R})$, Springer Verlag, 1992 T. Hsu, Quilts: central extensions, braid actions and finite groups, Lecture Notes in Mathematics n$^{\circ }$1731, Springer Verlag, 2000 Y. Z. Huang, Two dimensional conformal geometry and vertex operator algebras, Progress in Mathematics n$^{\circ }148$, Birkhäuser, 1997 W. W. Hulsbergen, Conjectures in arithmetic algebraic geometry, Vieweg, 1992 K. Hulek, C. Kahn, S. H. Weintraub, Moduli spaces of abelian surfaces: compactification, degenerations and theta functions, Walter de Gruyter, 1993 P. Humbert, Le calcul symbolique, Hermann, 1934 J. E. Humphreys, Reflection groups and Coxeter groups, Cambridge University Press, 1990 S. P. Humphries, Action of braid groups on determinantal ideals, compact spaces and a stratification of Teichmüller spaces, Invent. Math. 144, 2001, pp. 451-505 B. Hunt, The geometry of some special arithmetic quotients, Lecture Notes in Mathematics 1637, Springer Verlag, 1996 N. E. Hurt, The prime geodesic theorem and quantum mechanics on finite volume graphs: a review, Reviews in Mathematical Physics vol. 13 n$^{\circ }12$, 2001, pp. 1459-1503 D. Husemöller, Elliptic curves, Graduate Texts in Mathematics 111, Springer Verlag, 1986 D. Husemöller, Fibre bundles, Graduate Texts in Mathematics 20, Springer Verlag, 1975 R. C. Hwa, V. L. Teplitz, Homology and Feynman integrals, W. A. Benjamin, 1966 Y. Ihara, The congruence monodromy problems, J. Math. Soc. Japan 20, 1968, pp. 107-121 Y. Ihara, Non abelian class fields over function fields in special cases, Proc. Intern. Congress of Mathematics Nice vol.
1, 1970, pp. 381-389 Y. Imayoshi, M. Taniguchi, An introduction to Teichmüller spaces, Springer Verlag, 1992 L. M. Ionescu, On categorification, arXiv.org, math.CT/9906038, 6 jun 1999 Y. Ito, I. Nakamura, Hilbert schemes and simple singularities, New trends in algebraic geometry (Proceedings of the algebraic geometry symposium, Warwick, 1996), Cambridge University Press, 1999, pp. 151-233 N. V. Ivanov, Algebraic properties of the mapping class groups of surfaces, Geometric and algebraic topology, Banach Center Publications vol. 18, Polish Scientific Publishers, Warszawa, 1986, pp. 15-35 N. V. Ivanov, Automorphisms of Teichmüller modular groups, Topology and geometry (Rohlin seminar), Lecture Notes in Mathematics n$^{\circ }1346$, Springer Verlag, 1989 N. V. Ivanov, Subgroups of Teichmüller modular groups, Translations of Mathematical Monographs n$^{\circ }$115, A.M.S., 1992 B. Iversen, Hyperbolic geometry, London Mathematical Society Student Texts 25, Cambridge University Press, 1992 H. Iwaniec, Topics in classical automorphic forms, Graduate Studies in Mathematics, vol. 17, A.M.S., 1997 S. Iyanaga, The theory of numbers, North Holland, 1975 A. Jackson, http://www.ams.org/new-in-math/mathnews/motivic.html C. G. Jacobi, Fundamenta Nova, Gesammelte Werke 1, pp. 497-538, G. Reimer ed., 1881 (certaines traductions sur www.math.ohio-state.edu/econrad/Jacobi) R. Jacquard et autres, Technique et science informatiques, Hermès, 2000 J. Jahnel, The Brauer-Severi variety associated with a central simple algebra: a survey, Mathematisches Institut, Göttingen, 25/09/2000, http://www.uni-math.gwdg.de/jahnel M. Jarnicki, P. Pflug, Extension of holomorphic functions, De Gruyter Expositions in Mathematics 34, 2000 R. V. Jean, Croissance végétale et morphogénèse, Masson et Presses de l’Université du Québec, 1983 P. Jeanquartier, Transformation de Mellin et développements asymptotiques, L’Enseignement Mathématique, tome XXV Fascicule 1-2, Janvier-Juin 1979, pp. 285-308 Y. Jin, A. L.
Schmidt, A diophantine equation appearing in diophantine approximation, Indag. Mathem. 12 n$^{\circ }4$, 2001, pp. 477-482 D. L. Johnson, Presentation of groups, London Mathematical Society Lecture Notes Series n$^{\circ }$22, Cambridge University Press, 1976 G. A. Jones, Characters and surfaces: a survey, The atlas of finite groups - ten years on (Ed. R. Curtis, R. Wilson), London Mathematical Society Lecture Notes Series n$^{\circ }$249, Cambridge University Press, 1998, pp. 90-118 V. F. R. Jones, Hecke algebra representations of braid groups and link polynomials, Ann. of Math. 126, 1987, pp. 335-388 J. Jost, Compact Riemann surfaces, Springer Verlag, 1997 A. Juhl, Cohomological theory of dynamical zeta functions, Progress in Math. n$^{\circ }$194, Birkhäuser, 2001 V. G. Kac, Infinite dimensional algebras, Dedekind’s $\eta $-function, classical Moebius function and the very strange formula, Adv. in Math. 30, 1978, pp. 85-131 V. G. Kac, An elucidation of ”Infinite-dimensional algebras... and the very strange formula.” $E_8^{(1)}$ and the cube root of the modular invariant, Adv. in Math. 35, 1980, pp. 264-273 V. G. Kac, Infinite dimensional Lie algebras, Cambridge University Press, 1990 V. G. Kac, P. Cheung, Quantum calculus, Universitext, Springer Verlag, 2002 M. Kapranov, E. Vasserot, Kleinian singularities, derived categories and Hall algebras, Math. Ann. 316, 2000, pp. 565-576 L. Kari, DNA computing: arrival of biological mathematics, The Mathematical Intelligencer, vol 19 n$^{\circ }2$, 1997, pp. 9-22 S. Katok, Fuchsian groups, The University of Chicago Press, 1992 S. Katok, Reduction theory for fuchsian groups, Math. Ann. 273, 1986, pp. 461-470 S. Katok, Coding of closed geodesics after Gauss and Morse, Geometriae Dedicata, n$^{\circ }$63, 1996, pp. 123-145 A. Katok, B. Hasselblatt, Introduction to the modern theory of dynamical systems, Cambridge University Press, 1995 L. H. Kauffman, Knots and physics, World Scientific, 1991 L. H. Kauffman, On knots, Ann. of Math.
Stud. 115, Princeton University Press, 1987 L. H. Kauffman, Rational tangles, Advances in Applied Mathematics 18, 1997, pp. 300-332 L. Kaup, B. Kaup, Holomorphic functions of several variables, De Gruyter Studies in Mathematics 3, Walter de Gruyter, 1983 L. Keen, Canonical polygons for finitely generated fuchsian groups, Acta Math. 115, 1965, pp. 1-16 L. Keen, Intrinsic moduli on Riemann surfaces, Annals of Mathematics 84, 1966, pp. 404-420 L. Keen, On Fricke moduli, Advances in the theory of Riemann surfaces (Ed. L. Ahlfors), Annals of Mathematics Studies 66, Princeton University Press, 1971, pp. 205-224 L. Keen, On fundamental domains and the Teichmüller modular group, Contribution to analysis, Academic Press, New York and London, 1974, pp. 185-194 L. Keen, A rough fundamental domain for Teichmüller spaces, Bull. Amer. Math. Soc. vol. 83, n$^{\circ }6$, 1977, pp. 1199-1226 L. Keen, H. E. Rauch, T. Vasquez, Moduli of punctured tori and the accessory parameter of Lamé’s equation, Trans. Amer. Math. Soc. vol 255, 1979, pp. 201-230 G. R. Kempf, Complex abelian varieties and theta functions, Springer Verlag, 1991 M. A. Kervaire, A manifold which does not admit any differentiable structure, Comm. Math. Helv. n$^{\circ }34$, 1960, pp. 257-270 M. A. Kervaire, J. Milnor, Groups of homotopy spheres, Ann. of Math. 77, 1963, pp. 504-537 A. L. Kholodenko, Statistical mechanics of 2+1 gravity from Riemann zeta function and Alexander polynomial: exact results, Journal of Geometry and Physics 38, 2001, pp. 81-139, http://chemistry.clemson.edu/ChemDocs/faculty.html A. L. Kholodenko, Random walks on figure eight: From polymers through chaos to gravity and beyond, Physica A 289, 2001, arXiv:cond-mat/9905221v1, 14 may 1999, http://chemistry.clemson.edu/ChemDocs/faculty.html A. Yu Kitaev, Quantum computations: algorithms and error correction, Russian Math. Surveys 52:6, 1997, pp. 1191-1249 F. Klein, Le programme d’Erlangen, Gauthier Villars, 1974 F.
Klein, The icosahedron and the solution of equations of the fifth degree, Dover, 1956 F. Klein, Sur une représentation géométrique du développement en fraction continue ordinaire, Nouv. Ann. Math. vol. 3 n$^{\circ }$15, pp. 327-331 H. Kleinert, Path integrals in quantum mechanics, statistics and polymer physics, World Scientific, Singapore, 1990 A. A. Klyachko, Moduli of vector bundles and number of classes, Func. Anal. and its Appl. 25, 1991, pp. 81-83 A. A. Klyachko, Stable bundles, representation theory and hermitian operators, Sel. Math. new ser. 4, 1998, pp. 419-445 A. W. Knapp, Elliptic curves, Mathematical Notes n$^{\circ }$40, Princeton University Press, 1992 A. Knauf, Number theory, dynamical systems and statistical mechanics, Max Planck Institute for Mathematics in the Sciences, May 1998, knauf@mis.mpg.de M. I. Knopp, Modular functions in analytic number theory, Markham Pub. Company, 1970 D. E. Knuth, The art of computer programming, Addison Wesley Reading, 1968 N. Koblitz, Introduction to elliptic curves and modular forms, Springer Verlag, 1984 H. Koch, Introduction to classical mathematics 1 (from the quadratic reciprocity law to the uniformization theorem), Kluwer Academic Publishers, 1991 J. Kollár, Complex algebraic geometry, IAS/Park City Mathematics Series vol. 3, AMS/Institute for Advanced Studies, 1997 Y. Komori, T. Sugawa, Bers embedding of the Teichmüller space of a once-punctured torus, www.cajpn.org/complex/pp01/0107.ps.gz T. Kondo, Examples of multiplicative $\eta $-products, Sci. Pap. Coll. Arts and Sci. Univ. Tokyo 35, 1986, pp. 113-189 A. Korkine, G. Zolotareff, Sur les formes quadratiques positives, Math. Annalen, n$^{\circ }6$, 1873, pp. 366-389 B. Kostant, On MacDonald’s $\eta $-function formula, the laplacian and generalized exponents, Advances in Maths. 20, 1976, pp. 179-212 M. Kotani, A note on asymptotic expansions for closed geodesics in homology classes, Math. Ann. 320, 2001, pp. 507-529 I.
Kra, On lifting of Kleinian groups to $SL(2,\mathbb{C})$, Differential geometry and complex analysis (Ed. I. Chavel, H. M. Farkas), Springer Verlag, 1985, pp. 183-193 J. Krajícek, T. Scanlon, Combinatorics with definable sets: Euler characteristics and Grothendieck rings, The Bulletin of Symbolic Logic, vol 6 n$^{\circ }3$, sept. 2000, pp. 311-330 S. L. Krushkal, B. N. Apanasov, N. A. Gusevskii, Kleinian groups and uniformization in examples and problems, Translations of Mathematical Monographs 62, A.M.S., 1986 S. L. Krushkal, Spaces of Riemann surfaces, Israel mathematical conference proceedings vol 14, 2000, www.math.technion.ac.il/pbrooks/imcp14/krushkal.ps M. Kuga, Galois’ dream, Birkhäuser, 1993 R. S. Kulkarni, An arithmetic-geometric method in the study of the subgroups of the modular group, American Journal of Mathematics 113, 1991, pp. 1053-1133 V. Kumar Murty, Introduction to abelian varieties, CRM Monograph Series vol. 3, A.M.S., 1983 G. Lachaud, Klein polygons and geometric diagrams, Sails and Klein Polyhedra, Contemporary Mathematics vol. 210, 1998, pp. 365-385 P. de la Harpe, Topics in geometric group theory, Chicago Lectures in Mathematics, 2000 K. Lamotke, Regular solids and isolated singularities, Vieweg Advanced Lectures in Mathematics, 1986 N. P. Landsman, Quantization as a functor, http://fr-arXiv.org/dvi/math-ph/0107023gz?front S. Lang, Survey of diophantine geometry, Springer Verlag, 1997 S. Lang, Introduction to abelian and algebraic functions, Graduate Texts in Mathematics n$^{\circ }89$, Springer Verlag, 1983 M. L. Lapidus, M. van Frankenhuysen, Fractal geometry and number theory, Birkhäuser, 2000 R. Grossman, R. G. Larson, Hopf algebraic structure of families of trees, J. of Algebra 126, 1989, pp. 185-210 H. B. Laufer, Normal two-dimensional singularities, Annals of Mathematics Studies, Princeton University Press, 1971 G.
Laumon, La correspondance de Langlands sur les corps de fonctions (d’après Laurent Lafforgue), Séminaire Bourbaki n$^{\circ }873$, 1999-2000 C. Laurent-Thiébaut, Théorie des fonctions holomorphes de plusieurs variables, Interéditions/CNRS Editions, 1997 W. R. Lawrence, Subsequent to a theorem of Markoff, J. of Number Theory 12, 1980, pp. 201-209 A. M. Legendre, Théorie des nombres, réédition A. Blanchard, 1955 D. Lehmann, C. Sacré, Géométrie et topologie des surfaces, PUF, 1982 J. Lehner, Discontinuous groups and automorphic functions, Math. Survey, A.M.S., 1964 G. I. Lehrer, Group representations, geometry and topology, Groups - Canberra 1989, Lecture Notes in Math. 1456, Springer Verlag, 1990, pp. 1-9 R. Lehoucq, M. Lachieze-Rey, J. P. Luminet, Cosmic crystallography, Astron. Astrophys. vol. 313, 1996, pp. 339-346 P. M. Gruber, C. G. Lekkerkerker, Geometry of numbers, North Holland Mathematical Library, vol. 37, 1987 A. Le Méhauté, R. R. Nigmatullin, L. Nivanen, Flèches du temps et géométrie fractale, Hermès, 1990 F. Lemmermeyer, Evolution of reciprocity laws from Euler to Artin, Springer Verlag, 2000 J. Le Potier, Lectures on vector bundles, Cambridge Studies in Advanced Mathematics 54, Cambridge University Press, 1997 (version antérieure moins développée: Fibrés vectoriels sur les courbes algébriques, Pub. Math. Univ. Paris 7, octobre 1995) S. Leroy, Points de Weierstrass d’une surface de Riemann compacte, Le journal de maths des élèves, ENS Lyon, vol 1, 1994, n$^{\circ }2$, pp. 16-25 N. C. Leung, ADE-bundles over rational surfaces, configuration of lines and rulings, arXiv:math.AG/0009192v1, 20 sep 2000 M. Levine, Homology of algebraic varieties: an introduction to the works of Suslin and Voevodsky, Bull. A.M.S. vol. 34, n$^{\circ }3$, 1997, pp. 293-312 P. Levy Bruhl, Précis de géométrie, P.U.F., 1967 L. Lewin, Polylogarithms and associated functions, North Holland, 1981 J. D.
Lewis, A survey of the Hodge conjecture, CRM Monograph Series vol 10, American Mathematical Society, 1999 H. Lewy, Water waves on sloping beaches, Bull. Amer. Math. Soc. 52, 1946, pp. 737-775 S. Lichtenbaum, Values of zeta functions, étale cohomology, and algebraic $K$-theory, Lecture Notes in Mathematics 342, Springer Verlag, 1973, pp. 489-501 Ming Li, P. Vitányi, An introduction to Kolmogorov complexity and its applications, Springer Verlag, 1997 G. Ligozat, Courbes modulaires de genre 1, Thèse, Bull. Soc. Math. France, mémoire 43, 1975 (Formes paraboliques normalisées pour $\Gamma _0(N)$, prépub. n$^{\circ }757411$, Paris XI Orsay) D. Lind, B. Marcus, An introduction to symbolic dynamics and coding, Cambridge University Press, 1995 J. H. van Lint, G. van der Geer, Introduction to coding theory and algebraic geometry, Birkhäuser, 1988 D. Lines, Cobordisme de noeuds classiques fibrés et de leur monodromie, Noeuds, tresses et singularités, Séminaire de Plans-sur-Bex (Suisse) en Mars 1982, L’Enseignement Mathématique K. Liu, Invariant theory and the invariants of low dimensional topology, Topology vol. 38 n$^{\circ }4$, 1999, pp. 763-777 S. Lloyd, Quantum mechanical computers and uncomputability, Physical Review Letters vol. 71, 1993, pp. 943-946 J. L. Lluis Puebla, J. L. Loday, H. Gillet, C. Soulé, V. Snaith, Higher algebraic $K$-theory: an overview, Lecture Notes in Mathematics n$^{\circ }$1491, Springer Verlag, 1992 N. I. Lobatcheffsky, Collection complète des oeuvres géométriques, Edition de l’Université de Kasan, tome 2, Pangéométrie ou précis de géométrie fondée sur une théorie générale et rigoureuse des parallèles, 1886 J. L. Loday, Des mammouths à la $K$-théorie algébrique, http://www-irma.u-strasbourg.fr/loday/\[mammouth\] et \[arithmetree\], 2000 A. Lubotzky, A. R. Magid, Varieties of representations of finitely generated groups, Memoirs of the A.M.S. n$^{\circ }336$, vol 58, 1985 W.
Lück, A basic introduction to surgery theory, Preprintereihe des SFB 478, Westfälischen Wilhems Universität Münster, helf 197, Novembre 2001 F. Luo, Characters of $SL(2)$ representations of groups, arXiv:math. GT/9905138, 21/05/99 F. Luo, Grothendieck’s reconstruction principle and 2-dimensional topology and geometry, arXiv: math. GT/9904019, 4 Apr 1990 R. C. Lyndon, J. L. Ullman, Pairs of real 2 by 2 matrices that generate free products, Michigan Math. J. 15, 1968, pp. 161-166 R. C. Lyndon, P. E. Schupp, Combinatorial group theory, Springer Verlag, 1977 O. Ly, On effective decidability of the homeomorphism problem for non compact surfaces, Contemporary Mathematics, vol 250, 1999, p. 89-112 H. Maass, Über eine neue Art von nichtanalytischen automorphen Funktionen und die Bestimmung Dirichletscher Reichen durch Funktional gleichung, Math. Ann. n$^{\circ }121$, 1949, pp. 141-183 I. G. MacDonald, Affine root systems and Dedekind’s $\eta $-function, Invent. Math. 15, 1972, pp. 91-143 I. G. MacDonald, Affine Hecke algebras and orthogonal polynomials, Séminaire Bourbaki n$^{\circ }797$, 1995, pp. 1-18 G. Mack, Universal dynamics, a unified theory of complex systems, emergence, life and death, Commun. Math. Phys. 219, 2001, p. 141-178 R. S. MacKay, Recent progress and outstanding problems in hamiltonian dynamics, Physica D 86, 1995, pp. 122-133 G. W. Mackey, The scope and history of commutative and non commutative harmonic analysis, History of mathematics, vol. 5, A.M.S., L.M.S., 1992 S. MacLane, Categories for the working mathematician, Springer Verlag, 1971 S. MacLane, Hamiltonian mechanics and geometry, 1970, Amer. Math. Monthly 77, 1970, pp. 570-586 F. J. MacWilliams, N. J. Sloane, The theory of error correcting codes, North Holland, 1978 S. Majid, Foundations of quantum group theory, Cambridge University Press, 1995 S. Majid, A Macfarlane, Spectrum generating quantum group of the harmonic oscillator, Int. J. Mod. Phys. A 7, 1992, pp. 4377-4393 W. 
Magnus, Rational representations of Fuchsian groups and non parabolic subgroups of the modular group, Nachrichten des Akademie des Wissenschaften in Göttingen II, Math.-Phys. Klasse (1973), pp. 179-189 W. Magnus, A. Karass, D. Solitar, Combinatorial group theory, Dover (2nd edition), 1976 W. Magnus, Non-euclidean tessellations and their groups, Academic Press, 1974 R. S. Maier, Algebraic solutions of the Lamé equation, revisited, Submitted to the Journal of Differential Equations, 2002, http://uranium.math.arizona.edu/rsm/cv.html A. V. Malyshev, Markoff and Lagrange spectra (survey of the literature), J. Soviet. Math. vol 16 n$^{\circ }$1, 1981, pp. 767-788 Yu. I. Manin, Cubic forms, North Holland, 1986 Yu. I. Manin, M.A. Tfasman, Rational varieties: algebra, geometry, arithmetic, Russian Math. Surveys 41, 1986, pp. 51-116 Yu. I. Manin, Quantum computing and Shor’s factoring algorithm, arXiv:math. Yu. I. Manin, Reflections on arithmetical physics, Conformal invariance and string theory, Academic Press, 1989, pp. 293-303 (voir aussi son article New dimensions in geometry, et le commentaire de M.Atiyah, 25th Arbeuitstagung) Yu. I. Manin, M. Marcolli, Continued fractions, modular symbols, and non commutative geometry, arXiv:math.NT/0102006 v2 7 aug 2001 X. Marsault, Compression et cryptage en informatique, Hermès, 1995 G. V. Margulis, Discrete subgroups of semi simple Lie groups, Springer, 1991 A.A. Markoff, Sur les formes quadratiques indéfinies, Math. Ann. 6, 1879, pp. 381-406; Math. Ann. 17, 1880, pp. 379-399 Y. Martin, Multiplicative $\eta $-quotients, Trans.Amer. Math. Soc. n$^{\circ }$ 348, 1996, pp. 4825 Y. Martin, On Hecke operators and products of the Dedekind $\eta $-function, C.R. Acad. Sci. Paris t.322 ser.1, 1996, pp.307-312 J. Martinet, Les réseaux parfaits des espaces euclidiens, Masson, 1996 B. Maskit, Parameters for fuchsian groups II: topological type (1,1), Annales Academiae Scientarum Fennicae, Series A.I. Mathematica, vol 14, 1989, pp. 265-275 W. 
S. Massey, Algebraic topology: an introduction, Graduate Texts in Mathematics n$^{\circ }56$, Springer Verlag, 1967 J. P. Matelski, The classification of discrete 2-generator subgroups of $PSL(2,\mathbb{R})$, Israel J. Math.42, 1982, pp. 309-317 Y. Matiiassevitch, Le dixième problème de Hilbert, son indécidabilité, Masson, 1995 M. Matsumoto, A presentation of mapping class groups in terms of Artin groups and geometric monodromy of singularities, Math. Ann. 316, 2000, pp. 401-418 K. Maurin, The Riemann legacy, Riemann ideas in mathematics and physics, vol. 417, Kluwer Academic Publishers, 1997 B. Mazur, Arithmetic on curves, Bull. Amer. Math. Soc. vol.14 n$^{\circ }2$, 1986, pp. 207- 259 B. Mazur, Number theory as gadfly, Amer Math. Monthly, 1991, pp. 593-610 B. Mazur, Modular curves and the Eisenstein ideal, Publ. Math. IHES 47, 1977, pp. 33-186 A. Melikidze, Physics literature, Condensed Matter Theory, http:// pupgg.princeton.edu/ melikidze/lit.htmlx R. B. Melrose, The Atiyah-Patodi-Singer index theorem, Research notes in mathematics vol. 4, A. K. Peters, 1993 M. Mendès-France, The Planck constant of a curve, Cours donné à Montréal, juillet 1989 M. Mendès-France, A. Sebbar, Pliages de papiers, fonctions thêta et méthode du cercle, Acta Math. 183, 1999, pp. 101-139 Ch. Mercat, Discrete Riemann surfaces and the Ising model, Commun. Math. Phys. 218, 2001, pp. 177-216 J. L. Meyer, Characters analogues of Dedekind sums and transformations of analytical Eisentstein series, Pacific Math. J.194, 2000, pp. 137-164 S. Meskin, Periodic automorphisms of the two-generator free group, Proceedings of the second international conference on the theory of groups, Canberra, Lecture Notes in Mathematics n$^{\circ }$372, Springer Verlag, 1973, pp. 494-498 J. S. Milne, Elliptic curves, Math 679, Winter 1996, www.math.lsa.umich.edu/jmilne/ J. S. Milne, Modular functions and modular forms, Math 678, Fall 1990, www.math.lsa. umich.edu/jmilne/ J. S. 
Milne, Lectures on etale cohomology, Math 776, Winter 1998, www.math.lsa.umich.edu /jmilne/ J. Milnor, Topology from the differential point of view, University Press of Virginia, 1965 J. Milnor, On manifolds homomorphic to the 7-sphere, Ann. of Math. $64$, 1956, pp. 399-405 J. Milnor, Singular points of complex hypersurfaces, Annals of Mathematics Studies 61, Princeton University Press, 1968 J. Milnor, P. Orlik, Isolated singularities defined by weighted homogeneous polynomials, Topology vol.9, 1970, pp. 385-393 J. Milnor, Hyperbolic geometry : the first 150 years, Bull. Amer. Math. Soc. 6, 1982, pp. 9-24 J. Minác, M. Spira, Witt rings and Galois groups, Ann. of Math.. 144, 1996, pp. 35-60 V. P. Mineev, Topologically stable defects and solitons in ordered media, Harwood Academic Publishers, 1998 H. Minkowski, Geometrie der Zahlen, Leipzig, 1896, réédition Johnson, 1968 Y. N. Minsky, The classification of punctured-torus groups, Ann. of Math. 149 n$^{\circ }2$, 1999, pp. 559-626 R. Miranda, Algebraic curves and Riemann surfaces, Graduate Studies in Mathematics 5, A.M.S., 1997 T. Miyake, Modular forms, Springer Verlag, 1989 R. Mneimné, F. Testard, Introduction à la théorie des groupes de Lie classiques, Hermann, 1997 H. McKean, V. Moll, Elliptic curves, Cambridge University Press, 1999 M. Monatstyrsky, Riemann, topology, and physics, Birkhäuser, 1999 R. Moore, http://www-texdev.mpce.mq.edu.au/Quantum/Quantum/node1.html C. Moore, Predictability and undecidability in dynamical systems, Physical Review Letters vol 64, 1990, pp. 2354-2357 L. J. Mordell, Diophantine equations, Academic Press, 1969 J. Moser, Integrable hamiltonian systems and spectral theory, Accademia Nazionale di Lincei, 1983 L. Mosher, Train tracks expansions of measured foliations, preprint http://newark.rutgers.edu/mosher L. Mosher, What is...a train track? Notices of the American Mathematical Society, March 2003, vol 50, n$^{\circ }3$, p. 354-355 G. D. 
Mostow, Strong rigidity of locally symmetric spaces, Annals of Mathematics Studies 78, Princeton University Press, 1973 G. Mounier, Les ondes en physique : de Pythagore à nos jours (Vibrations, ondes, impulsions), Ellipse, 2002 J. O. Moussafir, Voiles et polyèdres de Klein, géométrie, algorithmes et statistiques, Thèse à l’Université Paris XI Dauphine, 28 janvier 2000 J. E. Moyal, Quantum mechanics as a statistical theory, Proc. Camb. Phil. Soc. 45, 1949, pp. 99-124 A. Mozgova, Culler’s algorithm for the group $SL(2,% \mathbb{Z})$, Methods Funct. Anal. Topology 7(4), 2001, pp. 81-84 W. Muller, The eta invariant - some recent developments, Séminaire Bourbaki, n$^{\circ }787$, 1993-1994, pp. 1-25 D. Mumford, Tata lectures on theta (I), Progress in mathematics vol. 28, Birkäuser, 1983 D. Mumford, Curves and their jacobians, The University of Chicago Press, 1976 D. Mumford, Algebraic geometry I, Complex projective varieties, Springer Verlag, 1978 T. Munzer, P. Burchard, Visualizing the structure of the world wide web in 3d hyperbolic space, http://www.geom.umn.edu/docs/research /webviz, 1995 K. Murasugi, Knot theory and its applications, Birkhaüser, 1996 G. Myerson, On semi-regular finite continued fractions, Archiv. Math. n$^{\circ }48$, 1987, pp. 420-425 G. L. Naber, Topology, geometry, and gauge fields (Foundations), Springer Verlag, 1997 S. Nag, The complex analytic theory of Teichmüller spaces, Wiley Interscience, 1988 T. Nagell, Sur les propriétés arithmétiques des cubiques planes du premier genre, Skrifter Norske Videnskaps-Akademii i Oslo, n$^{\circ }$1, 1935, pp. 1-25 M. Nakahara, Geometry topology and physics, IOP Publishing Limited, 1990 Ch. Nash, S. Sen, Topology and geometry for physicists, Academic Press, 1983 Ch. Nash, Differential topology and quantum field theory, Academic Press, 1996 S. M. Natanzon, Moduli of Riemann surfaces, Hurwitz-type spaces and their superanalogues, Russian Mathematical Surveys 54 n$^{\circ }1$, pp. 61-116 E. 
Nelson, Derivation of the Schrödinger equation from newtonian mechanics, Physical Review 15 n$^{\circ }4$, 28 oct. 1966, pp. 1079-1085 A. Némethi, Dedekind sums and the signature of $% f(x,y)+z^N$, Sel. Math., New ser. 4, 1998, pp. 361-376, Sel. Math., New ser. 5, 1999, pp. 161-179 A. Némethi, The signature of $f(x,y)+z^N$, Singularity theory, B. Bruce, D. Mond eds., LNS 263, Cambridge University Press, 1999 Y. V. Nesterenko, P. Philippon, Introduction to algebraic independence theory, Lecture Notes in Mathematics n$^{\circ }1752$, Springer Verlag, 2001 M. Newman, Integral matrices, Academic Press, 1972 M. Newman, Classification of normal subgroups of the modular group, Trans. Amer. Math. Soc. n$^{\circ }$126, 1967, pp. 267-277 M. Newman, Pairs of matrices generating discrete free groups and free products, Michigan Math. J. n$^{\circ }$15, 1968, pp. 155-166 Ch. Ngô, H. Ngô, Physique quantique, Masson, 1991 J. Nielsen, Collected Mathematical Papers, Vol.1, Birkhaüser, 1986 J. Nielsen, Die Isomorphismengruppe der freien Gruppen, Math. Ann. n$^{\circ }$91, 1924, pp. 169-209, in Collected Mathematical Papers, Vol. 1, Birkhaüser, 1986 W. Fenchel, J. Nielsen, On discontinuous groups of isometric transformations of the non-euclidean plane, Courant, Anniversary volume, 1948, in Collected Mathematical Papers of J. Nielsen, Vol. 1, Birkhaüser, 1986 M. A. Nielsen, I. L. Chuang, Quantum computation and quantum information, Cambridge University Press, 2002 V. V. Nikulin, Discrete reflection groups in Lobachevky spaces and algebraic surfaces, Proceedings of the International Congress of Mathematicians, Berkeley, California, 1986, pp. 654-671 D. Yu. Nogin, Notes on exceptional vector bundles and helices, Lecture Notes in Mathematics 1419, Springer Verlag, 1989, pp. 181-195 D. Yu. Nogin, Spirals of period four and equations of Markov type, Math. Ussr Izvestiya vol. 37, 1991, p. 209-226 L. 
Nottale, Fractal space-time and microphysics, towards a theory of scale relativity, World Scientific,1993 S. Novikov, S. V. Mankov, L. P. Pitaevskii, V.E. Zakharov, Theory of solitons, Plenum Press, 1984 M. Oka, Non-degenerate complete intersection singularity, Hermann, 1997 Ch. Okonek, M. Scheider, H. Spindler, Vector bundles on complex projective spaces, Birkhäuser, 1980 http://www.math.okstate.edu/ loriw/degree2/degree2hm/eta2/eta2.html T. Ono, Vector bundles on a cubis surface whose restrictions to line are rigid, SUT journal of Mathematics vol. 36 n$% ^{\circ }1$, 2000, pp. 83-98 K. Ono, Y. Martin, Eta quotients and elliptic curves, Proc. Amer. Math. Soc. 125 n$^{\circ }11$, 1997, pp. 31689-3176 E. M. Opdam, Multivariable hypergeometric functions, European Congress of Mathematics (Barcelona 2000) vol. 1, C. Casacubuta et als. eds., Progress in Mathematics 201, Birkhauser 2001, voir aussi mat.uab.es/ art3ecm/ opdam.pdf R. P. Osborne, H. Zieschang, Primitives in the free group of two generators, Invent. Math. n$^{\circ }$63, 1981, pp. 17-84 W. Parry, M. Pollicott, An analogue of the prime number theorem and closed orbits of Axiom A flows, Annals of Math. 118, 1983, pp. 573-591 W. Parry, M. Pollicott, Zeta functions and the periodic orbit structure of hyperbolic dynamics, S.M.F., Astérisque vol. 187-188, 1990 I. Pays, Arbres, ordres maximaux et formes quadratiques entières, Number theory Paris 1992-1993, Lecture Notes Series n$^{\circ }$215, London Mathematical Society, 1995, pp. 209-230 A. Karass, A. Pietrowski, D. Solitar, Automorphisms of a free product with an amalgamed subgroup, ContemporaryMathematics 33, A.M.S., 1984, pp. 328-340 R. C. Penner, J. L. Harer, Combinatorics of train tracks, Ann. of Math. Studies vol. 125, Princeton University Press, 1992 R. Penrose, L’esprit, l’ordinateur et les lois de la physique, InterEditions, 1993 J. C. 
Perez, L’ADN décrypté, la découverte et les preuves du langage caché de l’ADN, Préface de Jean Marie Pelt, Résurgence, 1997 G. Cohen, P. Godlewki, S. Perrine, Idempotents of cyclic codes, International symposium on information theory, Ronneby Sweden, 21-24 june 1976, IEEE 76CH1095-9T G. Cohen, P. Godlewki, S. Perrine, Sur les idempotents de codes, C. R. Acad. Sc. t. 284, série A, février 1977, pp. 509-512 S. Perrine, A new aspect of some Post algebras, The eight symposium on multiple valued logic, 24-26 may 1978, Rosemont Illinois S. Perrine, Logique multivalente, Annales des Télécommunications, tome 33, n$^{\circ }11-12$, Nov-Dec 78, pp.376-382 S. Perrine, Thèse, Université de Metz, décembre 1988 S. Perrine, Sur une généralisation de la théorie de Markoff, Journal of Number Theory vol 37 n$^{\circ }$2, 1991, pp. 211-230 S. Perrine, La méthode de Poincaré appliquée à l’arithmétique, Congrès International Henri Poincaré, Nancy, mai 1994 S. Perrine, L’arithmétique sur une surface percée, Annales des Télécommunications, tome 51, n$^{\circ }$7-8, 1996, pp. 407-420 S. Perrine, Sur des equations diophantiennes généralisant celle de Markoff, Annales de la Faculté des Sciences de Toulouse vol VI n$^{\circ }$1, 1997, pp. 127-141 S. Perrine, Un arbre de constantes d’approximation analogue à celui de l’équation diophantienne de Markoff, Journal de Théorie des Nombres de Bordeaux n$^{\circ }10$, 1998, pp. 321-353 S. Perrine, Trees of approximation constants, Contributed talk at the conference on Continued Fractions: from analytic number theory to constructive approximation, University of Missouri, Columbia, May 20-23, 1998, Contemp. Math. n$^{\circ }236$, A.M.S., 1999, pp. 297-310 S. Perrine, Mathématiques et télécommunications, Colloque ”Théorie des nombres, bruit de fréquences et télécommunications”, Institut Henri Poincaré, 3 décembre 1999 S. 
Perrine, On generalized Markoff equations and their interpretation, Noise, Oscillators and Algebraic Randomness, Michel Planat (Ed.), La Chapelle des Bois, 1999, Lecture Notes in Physics n$^{\circ }550$, Springer Verlag, 2000 S. Perrine, About some diophantine equation and the resulting chaos in geodesics, 4$^{\grave{e}me}$ Conférence Internationale CASYS’2000, HEC-Liège, Belgique, Août 2000, Computed Anticipatory Systems, American Institute of Physics, n$^{\circ }$573, 2001, pp. 285-298 S. Perrine, L’interprétation matricielle de la théorie de Markoff classique, Preprint présenté au groupe d’étude des problèmes diophantiens, Paris, Chevaleret, 1/02/2001, International Journal of Mathematics and Mathematical Sciences, vol 32 n$% ^{\circ }4$, 2002, pp. 193-262 S. Perrine, Sommes de Dedekind et généralisations de l’équation diophantienne de Markoff, Journal of Number Theory, vol. 94 n$^{\circ }2$, juin 2002, pp. 224-247 S. Perrine, De la théorie de Markoff aux points entiers sur les courbes elliptiques, XXIIèmes Journées Arithmétiques, Lille, 2-6 juillet 2001 S. Perrine, La théorie de Markoff et ses développements, Tessier & Aschpool, 2002, www.tessier-ashpool.fr J. Peyrière, Wen Zhi-Ying, Wen Zhi-Xiong, Polynômes associés aux endomorphismes de groupes libres, L’Enseignement Mathématique 39, 1993, pp. 153-175 M. Planat (Ed.), Oscillateurs, bruit de fréquence, gigue des solitons, synchronisation des oscillateurs, Annales des Télécommunications tome 51 n$^{\circ }7-8$, juillet-août 1996 M. Planat (Ed.) Noise, oscillators and algebraic randomness, Lectures Chapelle des Bois, 1999, Lecture Notes in Physics n$% ^{\circ }$550, Springer Verlag, 2000 M. Planat, S. Dos Santos, J. Cresson, S.Perrine, 1/f frequency noise in a communication receiver and the Riemann hypothesis, 15$^{th}$ international conference on noise in physical systems and 1/f fluctuations, Hong Kong, 1999 M. Planat, S. Dos Santos, N. Ratier, J. Cresson, S. 
Perrine, Closed to resonance interaction of radiofrequency waves in a Schottky diode mixer: 1/f noise and number theory, 7$^{\grave{e}me}$ Van der Ziel symposium on quantum 1/f noise, St Louis, Août 1998 M. Planat, N. Daher, N. Ratier, A quantum $1/f$ fluctuation in equilibrium: from Planck to Ramanujan, Van der Ziel symposium on quantum $1/f$ noise, Saint Louis, June 2000, Journal de Théorie des Nombres de Bordeaux vol. 14, 2002, p. 585-601 M. Planat, $1/f$ noise, the measurement of time and number theory, Fluctuation and noise letters 1, R63-R67, 2001 M. Planat, E. Henry, The arithmetic of $1/f$ noisein a phase-looked loop, Appl. Phys. Letter 8(13); 2002 M. Planat, Thermal $1/f$ noise from the theory of partitions: application to a quartz resonator, Physica A:318, 2003, pp. 371-386 M. Planat, The hyperbolic geometry of $1/f$ noise in phase locked loop, arXiv.hep-th/0209243 M. Planat, H. Rosu, S. Perrine, Ramanujan sums for signal processing of low frequency noise, arXiv.org/math-ph/pdf/0209/0209002.pdf, Physical Review E66, 056128, 2002 J. E. Pommersheim, Toric varieties, lattice points and Dedekind sums, Math. Ann. n$^{\circ }$295, 1993, pp. 1-24 J. E. Pommersheim, S. Garoufalidis, Values of zeta functions at negative integers, Dedekind sums and toric geometry, J. of the Amer. Math. Soc. Vol 14, n$^{\circ }$1, pp. 1-23 H. Poincaré, Théorie des groupes fuchsiens, Acta. Math. n$^{\circ }$1; 1882, pp. 1-62 H. Poincaré, Sur l’uniformisation des fonctions analytiques, Acta Math. n$^{\circ }31$, 1907, pp. 1-54 H. Poincaré, Oeuvres, Gauthier Villars (11 tomes), 1916-1956, en particulier tome 5 pour l’arithmétique et les formes quadratiques, 1950 A. A. Belavin, A. M. Polyakov, A. B.Zamolodchikov, Infinite conformal symmetry in two dimensional quantum field theory, Nucl. Phys. B241, 1984, p. 333-380, http://ccdb3fs.kek.jp/ cgi-bin/ img-index?198407016 I. N. Bernstein, V. A. Ponomarev, Coxeter functors and Gabriel’s theorem, Russ. Math. Surv. 28(2), 1973, pp. 17-32 E. G. 
C. Poole, Introduction to the theory of linear differential equations, Dover, 1960 A. J. van der Poorten, An introduction to continued fractions, Diophantine Analysis, edited by J.H. Loxton and A. J. van der Poorten, London Mathematical Society Lecture Notes Series 109, 1985, pp. 99-138 A. J. van der Poorten, K. S. Williams, Values of the Dedekind eta function at quadratic irrationalities, Canadian Journal of Mathematics vol. 51 n$^{\circ }1$, 1999, pp. 176-224, Corrigendum vol. 53 n$^{\circ }2$, pp. 434-443. M. Postnikov, Leçons de géométrie, Groupes et algèbres de Lie, Editions MIR, 1985 D. Prasad, Weil representation, Howe duality and the theta correspondance, CRM Proceedings and Lecture Notes vol.1, AMS, 1993, pp. 195-127 J. Preskill, www.theory-caltech.edu/people/preskill/ph229 O. Pretzel, Codes and algebraic curves, ClarendonPress, Oxford, 1998 C. Procesi, The invariant theory of $n\times n$ matrices, Adv. in Math. 16, 1976, pp. 306-381 P. Prusinkiewicz, J. Hanan, Lindenmayer systems, fractals and plants, Lecture Notes in Biomathematics 79, Springer Verlag, 1989 T. Przebinda, The oscillator duality correspondance for the pair $O(2,2),Sp(2,\mathbb{R})$, Mem. Amer.Math. Soc. 79, 1999 N. Purzitsky, Two generator discrete free groups, Math. Z. 126, 1972, pp. 209-223 M. van des Put, M. F. Singer, Galois theory of difference equations, Springer Verlag, 1997 M. Puta, Hamiltonian mechanical systems and geometric quantization, Mathematics and its applications vol. 2690, Kluwer Academic Publishers, 1993 H. Rademacher, Topics in analytic number theory, Springer Verlag, 1973 H. Rademacher, E. Grosswald, Dedekind sums, Carus Monographs 16, 1972 C. Radin, Global order from local sources, Bull.Amer. Math. Soc vol 25 n$^{\circ }2$, 1991, pp. 335-364 T. Radó, Über den Begriff der Riemannschen Fläche, Acta Litt. Sci. Szeged, 2, 1925, pp. 101-121 S. Rallis, $L$-functions and the oscillator representation, Lecture Notes in Math. n$^{\circ }1245$, Springer Verlag, 1987 M. 
Ram Murty, A motivated introduction to the Langlands program, Advances in number theory (F. G. Gouvea and N. Yui editors), Clarendon Press, 1983, pp. 37-66 J. G. Ratcliffe, Foundations of hyperbolic manifolds, Springer Verlag, 1994 M. L. Reed, Algebraic structure of genetic inheritance, Bull. Amer. Math. Soc. vol 34 n$^{\circ }2$, Avril 1997, pp.107-130 E.G. Rees, Notes on geometry, Universitext, Springer Verlag, 1983 R. Remak, Über indefinite binäre quadratische minimal Formen, Math. Ann. 932, 1924, pp. 155-182 M. Remoissenet, Waves called solitons, Springer Verlag, 1996 E. Reyssat, Quelques aspects des surfaces de Riemann, Progress in Mathematics n$^{\circ }77$, Birkhäuser, 1989 B. Riemann, Oeuvres mathématiques, traduction L. Laugel, A. Blanchard, 1968 S. Robins, Generalized Dedekind $\eta $-products, Contemp. Math., vol 166, 1994, pp. 119-128 M. Ronan, Buildings, main ideas and applications, Bull. London Math. Soc. 24, 1992, pp. 1-51, et pp. 97-126 J. Rosenberg, Algebraic $K$-theory and its applications, Graduate texts in mathematics, Springer Verlag, 1994 S. Rosenberg, The laplacian on a Riemann manifold, London Mathematical Society Student Texts 31, Cambridge University Press, 1997 G. Rosenberger, Fuchssche Gruppen - die freies Produkt zweier zyklischer Gruppen sind - und die Gleichung $x^2+y^2+z^2=xyz$, Math. Ann. 199, 1972, pp. 213-227 G. Rosenberger, G. Kern-Isberner, Uber Diskretheitsbedingungen und die diophantische Gleichung $ax^2+by^2+cz^2=dxyz$, Arch. Math. vol 34, 1980, pp. 481-493 G. C. Rota, D. Kahaner, A. Odlysko, On the foundations of combinatorial theory VIII, Finite operator calculus, Journal of mathematical analysis and applications 42, 1973, pp. 684-760 G. C. Rota, J. P. S. Kung, The invariant theory of binary forms, Bull. Amer. Math. Soc. Vol 10 n$^{\circ }1$, January 1984, pp. 25-85 J. J. Rotman, Notes on homological algebra, Van Nostrand Math. Stud. n$^{\circ }$26, 1970 A. N. 
Rudakov, The Markov numbers and exceptionnal bundles on $P^2$, Math. USSR Izvestiya, vol 32, n$^{\circ }$1, 1989, pp.99-112 D. Ruelle, Elements of differentiable dynamics and bifurcation theory, Academic Press, 1989 D. Ruelle, Thermodynamic formalism, Assison Wesley, 1978 D. Ruelle, An extension of the theory of Fredholm determinants, Inst. Hautes Etudes Sci. Publ. Math. 72, 1990, pp.175-193 R. A. Rueppel, Analysis and design of stream ciphers, Springer Verlag, 1986 B. Davies, Y. Safarov, Spectral theory and geometry, London Mathematical Society Lecture Notes Series n$^{\circ }273$, Cambridge University Press, 1999 K. Saito, Character variety of representations of a finitely generated group in $SL_2$, Proceedings of the 37$^{th}$ Taniguchi symposium on topology and Teichmüller spaces, World Scientific Publishing Co., 1996, pp. 253-264 K. Saito, The Teichmüller space and a certain modular function from a viewpoint of group representations, Algebraic geometry and related topics, Ed. Yang - Namikawa - Ueno, International Press, Proceedings of International Symposium Inchoen, Korea, 1993, pp. 41-88 K. Saito, Duality for regular systems of weights, topological field theory, primitive forms and related topics, M. Kashiwara, K. Saito, A. Matsuo, I. Satake editors, Birkhäuser, 1998 (voir références de Extended affine root system I à V, RIMS Kyoto University) K. Saito, Regular system of weights and associated singularities, Advanced Studies in Pure Mathematics 8, Complex analytic singularities, 1986, pp. 479-526 K. Saito, Extended affine root system V (elliptic eta products and $L$-functions, Proceedings on Moonshine and related topics, ed. McKay, CRM Proceedings and Lecture Notes, vol.30, 2001 G. Paun, G. Rozenberg, A. Salomaa, DNA computing : New computing paradigms, Springer Verlag, 1998 Sangjing Lee, Ki Hyoung Ko, Jung Hee Cheon, Jae Woo Han, Ju-sung Kang, Choonsik Park, New public-key cryptosystem using braid groups, http:// knot.kaist.ac.kr /sjlee/ P. 
Sarnak, Class numbers of indefinite binary quadratic forms, J. of Number Theory, vol. 15, 1982, pp. 229-247 A. L. Schmidt, Minimum of quadratic forms with respect to fuchsian groups (I), J. Reine Angew. Math. 286/287, 1976, pp.341-348 K. Schmidt, Dynamical systems of algebraic origin, Birkhäuser, 1995 M. Schmitter, F. Prêtteux, Morphologie mathématique informatique, Masson, 1994 P. Schmutz, Systoles of arithmetic surfaces and the Markoff spectrum, Math. Ann. 305, 1996, pp. 191-203 P. Schmutz-Schaller, Geometry of Riemann surfaces based on closed geodesics, Bull. Amer. Math. Soc. vol 35, n$^{\circ }$1, 1998, pp. 193-214 P. Schmutz-Schaller, A cell decomposition of Teichmüller space based on geodesic length functions, GAFA, Geom.Funct. Anal. vol 11, 2001, pp. 142-174 X. Buff, J. Fehrenbach, P. Lochak, L. Schneps, P.Vogel, Espaces de modules des courbes, groupes modulaires et théorie des champs, Panorama et synthèses, n$^{\circ }$7, SMF, 1999 M. R. Schroeder, Number theory in science and communication, Springer Verlag, 1986 L. Schwartz, Généralisation de la notion de fonction, de dérivation, de transformation de Fourier, et applications mathématiques et physiques, Ann. Univ. Grenoble 21, 1945, p. 57-74 M. Enock, J. M. Schwartz, Kac algebras and duality of locally compact groups, Springer Verlag, 1992 H. Schwerdtfeger, Geometry of complex numbers, Dover, 1962 P. E. Gunnels, R. Sczech, Evaluation of Dedekind sums, Eisenstein cocycles, and special values of $L$-functions, arXiv:math.NT/9909141 v2, 5 oct. 1999 J. B. Seaborn, Hypergeometric functions and their applications, Texts in applied mathematics n$^{\circ }8$, Springer Verlag, 1991 J. McKay, A. Sebbar, Fuchsian groups, automorphic functions and schwarzians, Math. Ann. 318, 2000, pp. 255-275 A. Sebbar, Classification of torsion-free genus zero congruence groups, Proc. Amer. Math. Soc. vol. 129 n$^{\circ }9$, pp. 2517-2527 B. Segre, Arithmetic upon an algebraic surface, Bull. Amer. Math. Soc., 1945, pp. 152-161 B. 
Segre, The non singular cubic surfaces, Oxford University Press, 1942 H. Seifert, W. Threlfall, A textbook of topology, Academic Press, 1980 A. Selberg, Harmonic analysis and discontinous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series, J. Indian Math. Soc. n$^{\circ }$20, 1956, pp. 47-87 R. N. Sen, G. L. Sewell, Fiber budles in quantum physics, J. Math. Phys, vol. 43 n$^{\circ }3$, March2002, pp. 1323-1339 M. Seppälä, T. Sorvali, Geometry of Riemann surfaces and Teichmüller spaces, Mathematics studies 169, North Holland, 1992 M. Seppälä, Myrberg’s numerical uniformization of hyperelliptic curves, to be published in Ann. Acad. Sci. Fenn., http://web.math.fsu.edu/ seppala/ ComputationsOnCurves/index.html C. Series, The geometry of Markoff numbers, The mathematical intelligencer, vol 7, n$^{\circ }$3, 1985, pp. 20-29 J. S. Birman, C. Series, An algorithm for simple curves on surfaces, J. London Math. Soc. (2) 29, 1984, pp. 331-342 C. Series, A. Haas, The Hurwitz constant and diophantine approximation on Hecke groups, J. London Math. Soc. (2) 34, 1986, pp.219-234 C. Series, Some geometrical models of chaotic dynamics, Proc. R. Soc. Lond., A413, 1987, pp. 171-182 D. Mumford, C. Series, D. Wright, Indra’s pearls (The vision of Felix Klein), Cambridge University Press, 2002 J. P. Serre, Homologie singulière des espaces fibrés, applications, Ann. of Maths. II série 54, 1951, pp. 425-505 J. P. Serre, Arbres, amalgames, $SL_2$, Astérisque n$^{\circ }$46, SMF, 1982 J. P. Serre, Cours d’arithmétique, P.U.F., 1970 J. P. Serre, Cohomolgie galoisienne, Lecture Notes in Mathematics n$^{\circ }5$, Springer, 1965 J. P. Serre, Corps locaux, en particulier Chapitre X, Hermann, 1968 J. P. Serre, Géométrie algébrique et géométrie analytique, Ann. Inst. Fourier n$^{\circ }6$, 1955, pp.1-42 J. P. Serre, Topics in Galois theory, Research Notes in Mathematics, Jones and Barlett Publishers, 1992 J. P. 
--- abstract: 'We present the formation of a Kinematically Decoupled Core (KDC) in an elliptical galaxy, resulting from a major merger simulation of two disk galaxies. We show that although the two progenitor galaxies are initially following a prograde orbit, strong reactive forces during the merger can cause a short-lived change of their orbital spin; the two progenitors follow a retrograde orbit right before their final coalescence. This results in a central kinematic decoupling and the formation of a large-scale ($\sim$2 kpc radius) counter-rotating core (CRC) at the center of the final elliptical-like merger remnant ($M_*=1.3\times10^{11}$ M$_\odot$), while its outer parts keep the rotation direction of the initial orbital spin. The stellar velocity dispersion distribution of the merger remnant galaxy exhibits two symmetrical off-centered peaks, comparable to the observed “2-$\sigma$ galaxies”. The KDC/CRC consists mainly of old, pre-merger population stars (older than 5 Gyr), remaining prominent in the center of the galaxy for more than 2 Gyr after the coalescence of its progenitors. Its properties are consistent with KDCs observed in massive elliptical galaxies. This new channel for the formation of KDCs from prograde mergers is in addition to previously known formation scenarios from retrograde mergers and can help towards explaining the substantial fraction of KDCs observed in early-type galaxies.' author: - 'Athanasia Tsatsi, Andrea V. Macciò, Glenn van de Ven, and Benjamin P. Moster' bibliography: - 'ms.bib' title: | A new channel for the Formation of Kinematically Decoupled Cores\ in Early-type galaxies --- Introduction ============ Early-type galaxies (ETGs) are the end-products of complex assembly and evolutionary processes that determine their shape and dynamical structure. Signatures of such past processes in present-day ETGs are likely to be in the form of peculiar kinematic subsystems that reside in their central regions. 
Such subsystems are called Kinematically Decoupled Cores (KDCs) and they are defined as central stellar components with kinematic properties distinct from those of the main body of the galaxy [e.g. @McDermid_2006; @Krajnovic_2011; @Toloba_2014]. KDCs were first discovered using one-dimensional long-slit spectroscopic observations of the stellar kinematics of ETGs. More recently, integral-field unit spectroscopic surveys such as SAURON [@Bacon_2001], ATLAS^3D^ [@Cappellari_2011], or CALIFA [@Sanchez_2012], being able to provide full two-dimensional observations of the stellar kinematics, have favored the detection of KDCs and revealed that a substantial fraction of ETGs in the nearby universe show kinematic decoupling in their central regions. This fraction varies among surveys, depending mainly on technical and sample-selection biases. Notably, the fraction of ETGs that host KDCs in the SAURON sample of 48 E+S0 galaxies [@deZeeuw_2002] is substantially high, especially in the centers of slow-rotating ETGs: 8 out of the 12 slow rotators ($\sim67$%) from the main survey host a KDC [@Emsellem_2007]. In the ATLAS^3D^ volume-limited sample of 260 ETGs, this fraction is 47% [@Krajnovic_2011]. The KDCs found in slow rotators are typically “old and large", with stellar populations that show little or no age difference with respect to their host galaxy (older than 8 Gyr) and sizes larger than 1 kpc [@McDermid_2006; @Kuntschner_2010]. KDCs are also detected in fast-rotating ETGs: 25% of fast rotators from the main SAURON survey host KDCs. KDCs of this type are typically “young and compact", with stellar populations younger than 5 Gyr and sizes of less than a few hundred parsecs [@McDermid_2006]. We note that these fractions establish a lower limit to the true fraction of ETGs with kinematically decoupled regions, considering projection effects, the fact that young and compact KDCs are subject to technical or observational biases [e.g. 
@McDermid_2006], while many ETGs with resolved KDCs in their centers are subject to different classifications throughout the literature [e.g. $2\sigma$-galaxies, see @Krajnovic_2011]. While a consensus has been reached about the prominent existence of KDCs in luminous ETGs, the physical processes and the rate at which they are formed are still poorly understood. Young and compact KDCs in fast rotators might have formed via in-situ star formation. According to this scenario, the stellar component of the KDC is formed in initially kinematically misaligned gaseous regions, probably originating from externally accreted gas or unequal-mass merging, where the orientation of the merging orbit defines the orientation of rotation of the resulting KDC. Following this line of thought, it has been suggested that counter-rotating cores can only result from retrograde mergers. However, this scenario cannot hold for the large and old KDCs found in slow rotators, whose stellar population was probably formed at the same epoch as the main body of the galaxy. In this case, processes such as gas accretion or accretion of low-mass stellar systems are more likely to affect the outer parts of the galaxy and cannot be consistent with observations that show no color gradients between the KDC and the surrounding galaxy [@Carollo_1997]. The most plausible formation scenario that could explain the similarity of the stellar content of the KDC and the main body of the galaxy is major merging. This scenario has been confirmed in simulations [e.g. @Bois_2010; @Bois_2011], resulting in elliptical-like and slow-rotating merger remnants hosting KDCs only when the two progenitor galaxies were initially following retrograde merger orbits. However, observations indicate a lower limit to the true rate of occurrence of KDCs in ETGs which cannot be explained only by retrograde mergers, pointing to the need for additional KDC formation scenarios. 
Here we show that a KDC can also result from an initially prograde major merger. The kinematic decoupling in the center of the final elliptical-like merger remnant can result from a short-lived change of the orbital spin of the two progenitor galaxies right after their second encounter. This new channel for the formation of KDCs might serve as an additional mechanism that can help towards explaining their observed rate of occurrence in ETGs. Simulation parameters ===================== The simulation we use is described in [@Moster_2011]. It was performed using the TreeSPH-code GADGET-2 [@Springel_2005], including star formation and supernova feedback. The two progenitor disk galaxies are identical and they are composed of a cold gaseous disk, a stellar disk and a stellar bulge, which are embedded in a dark-matter and hot-gas halo. The gaseous and the stellar disk of each progenitor galaxy have exponential surface brightness profiles and they are rotationally supported, while the spherical stellar bulge follows a [@Hernquist_1990] profile, and is initially non-rotating[^1]. The dark matter halo has a @Hernquist_1990 profile and a spin parameter consistent with cosmological simulations [@Maccio_2008]. The hot gaseous halo follows the $\beta$-profile [@Cavaliere_FuscoFemiano1976] and is rotating around the spin axis of the disk [see @Moster_2011 for a more detailed description of the galaxy model]. The stellar mass of each progenitor is $M_*=5\times10^{10}$ M$_\odot$ and the bulge-to-disk ratio was chosen to be $B/D=0.22$. The mass of the cold gaseous disk is $M_{g,cold}=1.2\times10^{10}$ M$_\odot$, such that the gas fraction in the disk is 23%. The virial mass of the dark matter halo is $M_{dm}=1.1\times10^{12}$ M$_\odot$, while the mass of the hot gaseous halo is $M_{g,hot}=1.1\times10^{11}$ M$_\odot$. The softening length is 100 pc for stellar, 400 pc for dark matter and 140 pc for gas particles. 
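The Hernquist profile adopted for the bulges and the dark-matter halo has closed-form expressions for the density and the enclosed mass, which make the setup above easy to reproduce. A minimal sketch follows; the scale radius $a$ is not quoted in the text, so the value used here is purely illustrative.

```python
import numpy as np

def hernquist_enclosed_mass(r, m_tot, a):
    """Enclosed mass of a Hernquist (1990) profile:
    M(<r) = M_tot * r^2 / (r + a)^2,
    for total mass m_tot and scale radius a."""
    r = np.asarray(r, dtype=float)
    return m_tot * r**2 / (r + a) ** 2

def hernquist_density(r, m_tot, a):
    """Density of a Hernquist profile:
    rho(r) = M_tot * a / (2 pi r (r + a)^3)."""
    r = np.asarray(r, dtype=float)
    return m_tot * a / (2.0 * np.pi * r * (r + a) ** 3)

# Illustrative numbers only: the 1.1e12 Msun halo from the text,
# with an assumed (not quoted) scale radius a = 30 kpc.
m_dm, a = 1.1e12, 30.0
# the half-mass radius of a Hernquist sphere is a*(1 + sqrt(2))
half_mass_r = a * (1.0 + np.sqrt(2.0))
```

Solving $M(<r)=M_{tot}/2$ gives $r/(r+a)=1/\sqrt{2}$, i.e. $r=a(1+\sqrt{2})$, which the sketch uses as a quick consistency check.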
The two progenitors are initially placed on a nearly unbound prograde parabolic orbit, with an eccentricity of $e=0.95$ and a pericentric distance of $\ensuremath{r_{p_1}}=13.6$ kpc. Such an orbit is representative of the most common major mergers in $\Lambda$CDM cosmology [@Khochfar_Burkert2006]. The two galaxies have an initial separation of $d_{start}$ = 250 kpc. The orbital and the rotation spin of the first galaxy are aligned, while the spin axis of the second galaxy is inclined by $\theta$=$30\,^{\circ}$ with respect to the orbital plane. The simulation lasts for 5 Gyr, such that the remnant elliptical galaxy is fully relaxed. Merger Remnant ============== Structure of the merger remnant ------------------------------- In order to connect the orbital and mass distribution of our simulated galaxy with observable properties, we create two-dimensional mock stellar mass maps as follows. Stellar particles are projected such that the galaxy is seen edge-on with respect to the initial orbital plane of the merger. Particles are then binned on a regular grid centered on the baryonic center of mass of the galaxy. We adopt a distance of 20 Mpc, so that 1 arcsec corresponds roughly to 0.1 kpc. Our grid has a size of 20x20 kpc, covering approximately twice the half-mass radius ($r_h$) of our galaxy, and a pixel size of 0.075 kpc, so that it corresponds to the spatial resolution of the IRAC camera of the Spitzer Space Telescope [@Fazio_2004]. We parametrize the galaxy’s projected stellar mass distribution using the Multi-Gaussian Expansion (MGE) model [@Monnet_1992; @Emsellem_1994], as implemented by [@Cappellari_2002]. The intrinsic shape of the remnant’s stellar particle distribution is parametrized using an iterative method to obtain the best-fitting ellipsoid to the distribution and to extract the eigenvalues of the mass tensor inside this ellipsoid [@Maccio_2008]. 
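The projection-and-binning step described above can be sketched as follows. Only the grid geometry (20x20 kpc, 0.075 kpc pixels, edge-on view with respect to the orbital plane) is taken from the text; the particle arrays are hypothetical stand-ins for simulation output, generated at random purely for illustration.

```python
import numpy as np

def mock_mass_map(pos, mass, extent=10.0, pixel=0.075):
    """Bin projected stellar particle masses on a regular 2D grid.

    pos    : (N, 3) positions [kpc], centered on the baryonic center
             of mass, with the initial orbital plane in x-y
    mass   : (N,) particle masses [Msun]
    extent : half-size of the map [kpc] (10 kpc -> a 20x20 kpc grid)

    The edge-on view keeps x (in-plane) and z (normal to the orbital
    plane) as the two projected coordinates.
    """
    x, z = pos[:, 0], pos[:, 2]
    nbins = int(round(2 * extent / pixel))
    edges = np.linspace(-extent, extent, nbins + 1)
    # 2D mass histogram: total stellar mass per pixel
    grid, _, _ = np.histogram2d(x, z, bins=[edges, edges], weights=mass)
    return grid / pixel**2  # surface mass density [Msun / kpc^2]

# Toy usage with random particles (illustrative only)
rng = np.random.default_rng(0)
pos = rng.normal(scale=3.0, size=(10_000, 3))
mass = np.full(10_000, 1e7)  # 1e7 Msun per particle
sigma = mock_mass_map(pos, mass)
```

A map built this way can then be fed to an MGE fit; the sketch stops at the surface-density grid itself.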
The intermediate-to-long and short-to-long axes ratios that we retrieve are p=0.88 and q=0.54, respectively, and the average projected ellipticity of the remnant is $\epsilon=0.49$, estimated within 2 $\ensuremath{r_h}$. Kinematics of the merger remnant -------------------------------- Stellar particles are projected along the chosen viewing angle and binned on a regular 20x20 kpc grid centered on the baryonic center of mass of the galaxy. In order to mimic real integral-field spectroscopic data, the pixel size of 0.1 kpc corresponds, at the adopted distance of 20 Mpc, approximately to the spatial resolution of the SAURON spectrograph [@Bacon_2001]. The bulk velocity of the galaxy is estimated within a sphere of 50 kpc around the center and subtracted from all particle velocities. Then we extract the mass-weighted stellar line-of-sight mean velocity and velocity dispersion for every pixel. The extracted kinematic maps are spatially binned using the 2D Voronoi binning method [@Cappellari_2003], based on a minimum number of particles per pixel in the map. The signal corresponds to the number of particles per pixel and we adopt Poisson noise, such that our signal-to-noise ratio per bin ($SN_{bin}$) corresponds approximately to a target value $\ensuremath{SN_{T}\sim30}$. We also use a simple logarithmic function inferred from CALIFA data [@Husemann_2013] to construct mock velocity errors of our binned kinematic data: $$\delta\upsilon=\ensuremath{5\,SN_{T}(1+1.4\log{N_{pix}})/SN_{bin}} \mbox{, {km\,s$^{-1}$}}$$ where $N_{pix}$ is the number of pixels per bin. For the purpose of this work, we divide the stellar particles of the remnant into 4 different components: “old stars", which are stars that initially were part of the progenitors’ stellar material (ages $\textgreater$ 5 Gyr), “young stars", which were formed during the merger (ages $\textless$ 5 Gyr), and “all stars", which is the total stellar content of the merger remnant. 
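The per-bin kinematic quantities described above are simple mass-weighted moments, and the mock velocity error is the logarithmic function of Eq. (1). A minimal sketch, assuming particle-level arrays as inputs (the function names are ours, not from the paper's pipeline):

```python
import numpy as np

def los_kinematics(v_los, mass):
    """Mass-weighted line-of-sight mean velocity and velocity
    dispersion for the particles falling in one (Voronoi) bin."""
    w = mass / mass.sum()
    v_mean = np.sum(w * v_los)
    v_disp = np.sqrt(np.sum(w * (v_los - v_mean) ** 2))
    return v_mean, v_disp

def mock_velocity_error(sn_bin, n_pix, sn_target=30.0):
    """Mock velocity error from Eq. (1), inferred from CALIFA data:
    delta_v = 5 * SN_T * (1 + 1.4 * log10(N_pix)) / SN_bin  [km/s]"""
    return 5.0 * sn_target * (1.0 + 1.4 * np.log10(n_pix)) / sn_bin

# A single-pixel bin exactly at the target S/N gives a 5 km/s error
err = mock_velocity_error(sn_bin=30.0, n_pix=1)
```

Note how the error grows with the number of pixels per bin and shrinks as the bin's signal-to-noise ratio exceeds the target value.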
We also track the stellar particles in the remnant that initially formed the bulges of the two progenitor galaxies. These particles form the “Pr.Bulge" (Progenitor bulge) stars. The projected two-dimensional stellar mass and the stellar kinematics for every stellar component are shown in Figures \[fig:figure1\] and \[fig:figure2\]. One can clearly see from the velocity maps the presence of a large-scale KDC of radius $\sim$2 kpc in the center of the elliptical merger remnant (Figure \[fig:figure2\]). This component counter-rotates with respect to the outer body of the galaxy and is most prominent in the “old" stellar population kinematics: stars that initially belonged to the two progenitor galaxies. On the other hand, “young" stars form a stellar disk which is almost aligned with the orbital plane of the two progenitors. This young stellar disk is rotationally supported and strongly prograde-rotating, with a maximum velocity 4 times higher than that of the “old stars". We also note a weak sign of counter-rotation in the central region of the disk, seen in the extracted stellar rotation curve. Notably, stars that were initially part of the two progenitors’ bulges (“Pr.Bulge") are globally counter-rotating in the merger remnant, exhibiting almost solid-body rotation. One can also see the presence of two symmetrical off-centered peaks in the “all stars" stellar velocity dispersion map of Figure \[fig:figure1\]. This feature is commonly observed in ETGs with counter-rotating components (CRC). These galaxies are called “$2\sigma$-galaxies" [@Krajnovic_2011], and they have been associated with external accretion of counter-rotating gas [@Rubin_1992] or major retrograde mergers [@Crocker_2009]. Here we see that a $2\sigma$-galaxy results from a single, prograde major merger. 
We also note that the $2\sigma$-feature is more prominent in the “all stars" map, where the young disk of stars is present, and less strong in the “old stars" map, even though the CRC is more prominent in the latter. In real ETGs a $2\sigma$-feature usually implies the existence of a CRC, while the opposite is more ambiguous. Here we show that the $2\sigma$-feature might arise because of the presence of the CRC, but it is enhanced only if one of the components is fast-rotating[^2]. Origin of the Kinematic Decoupling ================================== In order to understand the origin of the kinematic decoupling in the central region of the galaxy, we study the behavior of the merging orbits of its two progenitors: Figure \[fig:figure3\] shows the separation (d) and the specific orbital angular momentum ($l_z$) for one of the two progenitors as a function of time. The merging orbits are shown in Figure \[fig:figure4\], viewed face-on with respect to the initial orbital plane. At the time the two progenitors reach their first pericenter ($p_{1}$=0.78 Gyr), they become tidally distorted, resulting in long trailing arms that expel loosely bound material from their disks. The orbital angular momentum decreases due to mass loss and dynamical friction (Figure \[fig:figure3\]). This causes the galaxies to approach their second pericenter ($p_{2}$=2.08 Gyr) on almost radial orbits (Figure \[fig:figure4\]). After their second encounter, the two galaxies change their orbital spin and follow a retrograde orbit for $\sim$300 Myr, until they finally reach their coalescence at t$\sim$2.4 Gyr. The sudden change from a prograde to a retrograde merger can be understood in the framework of reactive forces. Due to strong tidal interactions during the merger, the two progenitor galaxies are systems of variable mass; mass is constantly ejected along their short-lived trailing arms. 
The mass loss from each system results in a reactive force, known as the [@Mestschersky_1902] force: $$\overrightarrow{R}=\dot{m}(\overrightarrow{\upsilon}-\overrightarrow{V})$$ where $\dot{m}$ is the mass loss rate, $\overrightarrow{V}$ the bulk velocity of the system and $\overrightarrow{\upsilon}$ the velocity of the outflowing matter. The Mestschersky force acts upon the two galaxies as a “reactive thrust" which, if strong enough, can cause the change of the orbital spin. Figure \[fig:figure5\] shows this effect in detail after the second pericentric passage. After the two progenitors approach closely, strong tidal forces that act upon them result in short-lived trailing arms, which mainly consist of their disks’ stellar component. Loosely bound material gets ejected along these arms, resulting in a strong reactive thrust on the main bodies of the progenitors, causing them to change their orbital spin and follow retrograde trajectories until their coalescence at t$\sim$2.4 Gyr[^3]. The central region of the final remnant is counter-rotating and the width of the last oscillation before coalescence corresponds to the size of the KDC ($\sim$2 kpc), which is prominent in the center of the galaxy for more than 2 Gyr after the kinematic decoupling of its progenitors. Under this framework, one can explain why stars that were initially part of the progenitors’ bulges show global counter-rotation in the post-merger kinematics (Figure \[fig:figure2\]): these stars, more tightly bound in the centers of the galaxies during their close encounters, track the behavior of the orbital spin of their progenitors’ center of mass before coalescence. On the other hand, the outer parts of the galaxy keep the initial prograde spin. Gas and stars ejected during the merger are subsequently re-accreted, while inheriting the outer prograde spin, forming the prograde-rotating outer part of the remnant. 
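The Mestschersky force is a one-line formula, and its sign behavior is what drives the spin reversal. A minimal numerical sketch; the mass-loss rate and velocities below are arbitrary toy values, not taken from the simulation:

```python
import numpy as np

def mestschersky_force(mdot, v_out, v_bulk):
    """Reactive (Mestschersky) force on a system of variable mass:
        R = mdot * (v_out - v_bulk),
    with mdot < 0 for mass loss, v_out the velocity of the outflowing
    matter and v_bulk the bulk velocity of the system."""
    return mdot * (np.asarray(v_out, float) - np.asarray(v_bulk, float))

# Toy numbers (illustrative only): a galaxy losing mass along a
# trailing arm.  With mdot < 0, matter ejected *faster* than the bulk
# motion along +x produces a force along -x, i.e. a thrust opposing
# the motion -- the kind of braking that can reverse the orbital spin.
mdot = -1.0                              # mass-loss rate (negative)
v_bulk = np.array([100.0, 0.0, 0.0])     # bulk velocity [km/s]
v_out = np.array([150.0, 0.0, 0.0])      # outflow velocity [km/s]
R = mestschersky_force(mdot, v_out, v_bulk)  # points along -x
```

Conversely, matter ejected backwards relative to the bulk motion would yield a forward thrust, so the net effect depends on where along the trailing arms the mass leaves the system.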
We suggest that the Mestschersky force is present at every stage of the merger. We interpret the change of sign of the orbital angular momentum near the first apocenter $\alpha_{1}$ as a result of this force (Figure \[fig:figure3\]), which is also seen as a change of curvature of the two merging orbits near $\alpha_{1}$ in Figure \[fig:figure4\]. However, at this time in the merging process the effect is not strong enough to change the orbital spin. Summary and Discussion {#Summary and Discussion} ====================== We have shown that a KDC in an early-type galaxy can result from an initially prograde major merger of two disk galaxies. This finding is in contrast to the commonly suggested idea that KDC formation can only result from retrograde mergers. We show the plausibility of an orbital reversal of a prograde merger, caused by reactive forces that act upon the two progenitors due to mass loss, which results in KDC formation in the final merger remnant. The KDC that resides in the center of the remnant shows strong counter-rotation for more than 2 Gyr after the final coalescence of its progenitors. The KDC is most prominent in the old stellar population of the galaxy (ages $\textgreater$ 5 Gyr) and is large in size (2 kpc radius), making it consistent with observations of KDCs in massive ETGs [@McDermid_2006] and comparable to the observed CRC/2-$\sigma$ galaxies [@Krajnovic_2011]. The fact that it results from an initially prograde merger provides a new channel for KDC formation that can add to the predicted rate of occurrence of KDCs and help towards explaining their observed fraction in ETGs. The suggested formation scenario depends on reactive (Mestschersky) forces that act upon the progenitors due to mass loss during the merger, causing the reversal of the orbital spin. 
Since prograde mergers result in substantial mass loss compared to retrograde mergers [@Toomre_1972; @Barnes_1988], we expect that such an effect is more likely to occur in prograde mergers. We would also expect this effect to depend on the mass ratio, the initial inclination, as well as the structural properties of the progenitor galaxies. Resolution effects might also influence properties of the KDC, such as its size and its position angle [e.g. @Bois_2010]. Using the merger simulations presented in [@Moster_2011], we note that the formation of the KDC does not depend on the particular form of feedback used or the specific values of hot and cold gas fractions employed in the progenitor galaxies. A larger statistical sample of merger simulations will allow us to understand how common this new channel for KDC formation is, which we plan to investigate in future work. We acknowledge financial support to the DAGAL network from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013 under REA grant agreement number PITN-GA-2011-289313. The numerical simulations used in this work were performed on the THEO cluster of the Max-Planck-Institut für Astronomie at the Rechenzentrum in Garching. [^1]: We note that these initial structural properties of the progenitors are influenced by their close interaction, i.e. they develop bulge rotation, bars and spiral arms in the first few hundred Myr of the simulation. [^2]: We should note, however, that most $2\sigma$-galaxies do not exhibit a centrally peaked velocity dispersion, like the one presented here. [^3]: The suggested mechanism could be responsible for the formation of KDCs in non-retrograde close encounters, e.g. [@Barnes_2002].
--- abstract: 'Using a scenario of a hybridized mixture of localized bipolarons and conduction electrons, we demonstrate for the latter the simultaneous appearance of a pseudogap and of strong incoherent contributions to their quasi-particle spectrum which arise from phonon shake-off effects. This can be traced back to temporarily fluctuating local lattice deformations, giving rise to a double-peak structure in the pair distribution function, which should be a key feature in testing the origin of these incoherent contributions, recently seen in angle resolved photoemission spectroscopy ($ARPES$).' address: - | $^{(a)}$ Centre de Recherches sur les Très Basses Températures, Laboratoire Associé á l’Université Joseph Fourier,\ Centre National de la Recherche Scientifique, BP 166, 38042, Grenoble Cédex 9, France - | $^{(b)}$ Dipartimento di Scienze Fisiche “E.R. Caianiello”, Università di Salerno, I-84081 Baronissi (Salerno), Italy\ Unità I.N.F.M. di Salerno author: - 'J.  Ranninger$^{(a)}$ and A. Romano$^{(b)}$' date: 'March 31, 1998' title: '**Interrelation between the pseudogap and the incoherent quasi-particle features of high-$T_c$ superconductors**' --- The appearance of a pseudogap, accompanied by a predominantly incoherent quasi-particle spectrum in certain parts of the Brillouin zone[@ARPES], is considered to be amongst the most significant signatures of high-$T_c$ superconductors ($HT_cSC$) which may contain the key to our understanding of these materials. As suggested earlier, the large incoherent part of the quasi-particle spectrum might come from a coupling of the electrons to collective modes such as spin fluctuations[@Schrieffer-97].
We shall discuss and defend in this Letter a similar point of view based on a scenario of a mixture of intrinsically localized bipolarons and coexisting itinerant electrons, hybridized with each other via charge exchange, permitting bipolarons to disintegrate into pairs of conduction electrons and, in the inverse process, to reconstitute themselves. The location of the bipolarons in high-$T_c$ materials might be sought in the highly polarizable dielectric layers adjacent to the $CuO_2$ planes or possibly inside the polaronic stripes[@Bianconi-97] in those planes themselves - the remainder of those $CuO_2$ planes forming the subsystem housing the itinerant electrons. Taking the bipolarons as quasi-particles without any internal structure, such a scenario is described by the so-called Boson-Fermion model ($BFM$) which has led us to the prediction of pseudogap features in the quasi-particle spectrum[@Ranninger-95], driven by strong local electron-pair correlations. In the present Letter we extend our previous studies by taking into account the internal polaronic structure of the bipolaronic Bosons, as being composed of charge and lattice vibrational degrees of freedom, locked together in a coherent quantum state. A bipolaronic Boson localized on a site $i$ is represented by $$b^{+}_i~e^{-\alpha(a_i-a^{+}_i)}|0\rangle | \Phi(X) ) = b^{+}_i|0\rangle | \Phi(X-X_0) )\,, \quad \label{equ1}$$ where the phonon operators $a_i^{(+)}$ correspond to local lattice deformations. The hard-core Bose operators $b_i^{(+)}$ describe pairs of electrons which are self-trapped inside locally deformed clusters of atoms, characterized by deformed harmonic oscillator states $|\Phi(X-X_0))$ with equilibrium positions shifted by $X_0=2\alpha\sqrt{\hbar/2M\omega_0}$ ($\omega_0$ denotes the characteristic frequency and $M$ the mass of the oscillators).
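The shift $X_0$ quoted above can be verified numerically. The following sketch (an illustration, not part of the original calculation) applies the displacement factor of the bipolaron state to the oscillator ground state in a truncated phonon basis and checks that the mean position moves by $2\alpha$ in units of $\sqrt{\hbar/2M\omega_0}$, i.e. exactly $X_0$; the value $\alpha=2.5$ is the coupling used later in the text:

```python
import numpy as np
from scipy.linalg import expm

alpha, N = 2.5, 60                      # coupling and phonon-basis cutoff
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, number basis
ad = a.T

D = expm(-alpha * (a - ad))             # displacement factor of the bipolaron state
ground = np.zeros(N)
ground[0] = 1.0
shifted = D @ ground                    # displaced oscillator ground state

# <X> in units of sqrt(hbar / 2 M omega_0) is <a + a^dag>; for the displaced
# ground state it equals 2*alpha, reproducing X_0 = 2 alpha sqrt(hbar/2M omega_0).
x_mean = shifted @ (a + ad) @ shifted
print(x_mean)   # ~ 5.0 for alpha = 2.5
```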
The strength of the coupling of the charge carriers to local lattice deformations, ultimately leading to bipolaron formation, is given by $\hbar \omega_0 \alpha$. Such physics is described in terms of the following generalization of the original $BFM$: $$\begin{aligned} H & = & (D-\mu)\sum_{i,\sigma}c^+_{i\sigma}c_{i\sigma} -t\sum_{\langle i\neq j\rangle,\sigma}c^+_{i\sigma}c_{j\sigma} \nonumber \\ & + & (\Delta_B-2\mu) \sum_ib^+_ib_i +v\sum_i [b^+_ic_{i\downarrow}c_{i\uparrow} +c^+_{i\uparrow}c^+_{i\downarrow}b_i] \nonumber \\ & - & \hbar \omega_0 \alpha \sum_ib^+_ib_i(a_i+a_i^{+}) +\hbar \omega_0 \sum_i \left(a^{+}_i a_i +\frac{1}{2}\right). \label{eq2}\end{aligned}$$ Here $c_{i\sigma}^{(+)}$ are Fermionic operators referring to itinerant electrons with spin $\sigma$. The bare hopping integral for the electrons is given by $t$, the bare Fermionic half band width by $D$, the Boson energy level by $\Delta_B$ and the Boson-Fermion pair-exchange coupling constant by $v$. The chemical potential $\mu$ is common to Fermions and Bosons. The indices $i$ denote effective sites involving molecular units made out of adjacent molecular clusters of the metallic Fermionic and dielectric Bosonic subsystems. Because of the small overlap of the oscillator wave functions at different sites we may, to within a first approximation, consider the Boson and Fermion operators as commuting with each other. The original $BFM$, given by the first two lines in Eq.(2), has been investigated in great detail as far as the opening of the pseudogap is concerned and as far as this affects the thermodynamic, transport and magnetic properties[@Ranninger-95]. The opening of the pseudogap in the Fermionic density of states was shown to be driven by the onset of local electron pairing without any superconducting long range order. 
Even without treating the generalized $BFM$ within the self-consistent conserving approximation, used in those studies of the original $BFM$, we find that the atomic limit of this generalized $BFM$ already gives us clear indications of the interrelation between the opening of the pseudogap and the appearance of predominantly incoherent quasi-particle features, as seen in ARPES studies. In order to set the scale of the various parameters in this model we measure them in units of $D$, which for typical $HT_cSC$ is of the order of $0.5~eV$. As in our previous calculations of the original $BFM$, we choose $v$ such that the pseudogap opens up at temperatures of the order of a hundred degrees $K$. We take $v=0.25$ for the present study. We furthermore choose $\alpha=2.5$ such that together with a typical local phonon frequency of the order of $\omega_0=0.1$ we have a reasonable bipolaron binding energy $\varepsilon_{BP}=\alpha^2 \hbar \omega_0$ which pins the chemical potential at about half the renormalized Bosonic level $\tilde\Delta_B= \Delta_B- \hbar \omega_0 \alpha^2$. We choose $\tilde\Delta_B$ to lie close to the band center such that the number of electrons is slightly below half-filling (typically around $0.75$ per site, which is the physically relevant regime of concentrations). For larger binding energies the bipolaronic level would drop below the band of the electrons leading to a situation of [*bipolaronic superconductivity*]{}, which is clearly not realized in $HT_cSC$ since they definitely show a Fermi surface. The idea behind applying the Boson-Fermion scenario to $HT_cSC$ is that we are confronted with inhomogeneous systems consisting of highly polarizable substructures on which localized bipolarons are formed. These local substructures are embedded in the rest of the lattice[@Roehler-97] which is occupied by electrons having a large, either hole- or electron-like, Fermi surface[@Ding-97; @Ino-98], depending on doping.
In such a two-component scenario the electrons scatter in a resonant fashion in and out of the Bosonic bipolaronic states. It is this resonant scattering which is at the origin of the opening of the pseudogap in the normal state of these materials, driven by a precursor of electron pairing[@Ranninger-95], rather than magnetic interactions [@Ding-97]. Generalizing this scenario in the way described above provides a mechanism by which the electrons acquire polaronic features (which, unlike for Bosons, are not of intrinsic nature) via the charge exchange term. This term thus not only controls the opening of the pseudogap as in the original $BFM$ but also the appearance of the strong incoherent contributions to the electron spectrum arising from phonon shake-off effects. Given the two-subsystem picture on which the Boson-Fermion model is based, doping leads primarily to the creation of localized bipolarons which beyond a certain critical concentration are exchanged with the itinerant electrons. For a system such as, for instance, $YBCO$ the number $n_B=\langle b_i^+ b_i \rangle $ of doping-induced bipolarons (approximately given by half the number of dopant $O_2^{2-}(1)$ ions in the chains) varies between $0$ and $0.5$ per effective site and the number of Fermions $n_F=\sum_{\sigma} \langle c^{+}_{i\sigma}c_{i\sigma} \rangle$ would be equal to 1 if the Boson-Fermion exchange coupling were absent. We thus obtain a total number of charge carriers $n_{tot}=n_F+2n_B$ close to 2 for optimally doped systems. We should, however, remember that in real systems doping not only changes $n_{tot}$ but also the relative occupancy of Fermions and Bosons, which seems to be the most important effect in the doping mechanism of these materials, achieved in our model as soon as $v$ is different from zero. We shall in the following solve the generalized $BFM$ in the atomic limit (i.e., putting the second term in Eq.(2) equal to zero) for a grand canonical ensemble.
In this case the eigenstates of the Hamiltonian are $$\begin{aligned} |0,l \rangle& =& |0\rangle \otimes |0) \otimes |\Phi(X)\rangle_l \nonumber \\ |1,l \rangle& =& |\uparrow\rangle \otimes |0) \otimes |\Phi(X)\rangle_l \nonumber \\ |2,l \rangle& =& |\downarrow\rangle \otimes |0) \otimes |\Phi(X)\rangle_l \nonumber \\ |3,l \rangle& =& u_{l,+}|\uparrow \downarrow \rangle \otimes |0) \otimes |\Phi(X)\rangle_{u_{l,+}} \nonumber \\ && \quad\qquad\qquad +v_{l,+}|0\rangle \otimes |1) \otimes |\Phi(X)\rangle_{v_{l,+}} \nonumber \\ |4,l \rangle& =& u_{l,-}|\uparrow \downarrow \rangle \otimes |0) \otimes |\Phi(X)\rangle_{u_{l,-}} \nonumber \\ && \quad\qquad\qquad +v_{l,-}|0\rangle \otimes |1) \otimes |\Phi(X)\rangle_{v_{l,-}} \nonumber \\ |5,l \rangle& =& |\uparrow \rangle \otimes |1) \otimes |\Phi(X-X_0)\rangle_l \nonumber \\ |6,l \rangle& =& | \downarrow \rangle \otimes |1) \otimes |\Phi(X-X_0)\rangle_l \nonumber \\ |7,l \rangle& =& |\uparrow \downarrow \rangle \otimes |1) \otimes |\Phi(X-X_0)\rangle_l \quad ,\end{aligned}$$ where $|\sigma\rangle$ denotes a site occupied by an electron with spin $\sigma$ and $|\!\uparrow\downarrow\rangle$ a site occupied by a pair of electrons with spin up and down. $|0)$ and $|1)$ denote a site unoccupied and, respectively, occupied by a Boson. $|\Phi(X)\rangle_l$ denotes the $l$-th excited oscillator state and $|\Phi(X-X_0)\rangle_l= (a^+-\alpha)^l/\sqrt{l!} \,\exp(\alpha(a-a^+))|\Phi(X)\rangle_0$ the $l$-th excited shifted oscillator state. These two sets of oscillator states are sufficient to describe all the states listed in Eq.(3) except for the states $|3,l \rangle$ and $|4,l \rangle$ for which the corresponding oscillator states are given by $|\Phi(X)\rangle_{u_{l,\pm}}$ and $|\Phi(X)\rangle_{v_{l,\pm}}$.
The latter are determined by numerical diagonalization by expanding them in a set of excited harmonic oscillator states in the form $ u_{l,\pm}|\Phi(X)\rangle_{u_{l,\pm}}= \sum_n u_{l,\pm}^n|\Phi(X)\rangle_n$ and $ v_{l,\pm}|\Phi(X)\rangle_{v_{l,\pm}}= \sum_n v_{l,\pm}^n|\Phi(X)\rangle_n$. For the regime of coupling parameters which we are interested in we take into account up to $50$ phonon states, i.e., $n \leq 50$. It is the states $|3,l \rangle$ and $|4,l \rangle$ which describe the transfer of polaronic features from the localized bipolarons to the conduction electrons when Boson-Fermion exchange processes take place. Since photoemission only couples to the electrons, it is via this transfer of polaronic features to the intrinsically non-polaronic electrons that photoemission spectra show features which are reminiscent of polaronic quasi-particles. These temporarily fluctuating local lattice deformations described by the corresponding oscillator wave functions $|\Phi(X)\rangle_{u_{l,\pm}}$ and $|\Phi(X)\rangle_{v_{l,\pm}}$ are manifest in the pair distribution function ($PDF$) $$g(x)=\frac{1}{Z}\sum_{n,l}\exp(-\beta E(n,l))\langle n,l| \delta(x) |n,l \rangle \quad .$$ Here $Z=\sum_{m=0}^7 \sum_{l=0}^\infty e^{-\beta E(m,l)}$ denotes the partition function, with $E(m,l)$ being the eigenvalues of the eigenstates listed above, given by: $E(0,l)=l \hbar \omega_0$, $E(1,l)=E(2,l)=\varepsilon_0+l\hbar\omega_0$, $E(3,l)=\varepsilon_{l,+}$, $E(4,l)=\varepsilon_{l,-}$, $E(5,l)=E(6,l)= \varepsilon_0+E_0+l\hbar\omega_0-\varepsilon_{BP}$ and $E(7,l)=2\varepsilon_0+E_0+l\hbar\omega_0-\varepsilon_{BP}$, with $\varepsilon_0=D-\mu$ and $E_0=\Delta_B-2\mu$. 
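As a cross-check of this truncated-basis bookkeeping (an illustration with the parameters of this Letter, not the original code), the expansion coefficients of the shifted oscillator states in the unshifted number basis can be generated from a truncated displacement matrix and compared, for the shifted ground state, with the closed-form coherent-state amplitudes $e^{-\alpha^2/2}(-\alpha)^n/\sqrt{n!}$ that follow from the definition $\exp(\alpha(a-a^+))|\Phi(X)\rangle_0$:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

alpha, N = 2.5, 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.T

# Column l of D holds the expansion of the l-th shifted oscillator state
# in the unshifted number basis (cf. the 50-phonon-state truncation above).
D = expm(alpha * (a - ad))

# Closed-form check for l = 0: coherent-state (Franck-Condon) amplitudes.
n = np.arange(12)
closed_form = (np.exp(-alpha**2 / 2) * (-alpha)**n
               / np.sqrt([float(factorial(k)) for k in n]))
print(np.max(np.abs(D[:12, 0] - closed_form)))   # ~ 0 within truncation error
```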
In order to investigate the various physical quantities on the basis of this single-site generalized $BFM$ we must choose $\Delta_B$ so as to guarantee the conditions set out above, that is, a concentration of electrons $n_F \simeq 0.75$ (corresponding to a hole concentration of $\simeq 0.25$) for a total concentration of particles $n_{tot}=2$. In order to achieve these conditions we put the bare bipolaronic level $\Delta_B$ above the bare electronic energy level $D$ such that the bipolaronic level shift $\varepsilon_{BP}$ brings this level down slightly below the bare electronic level. We adjust the precise position of this level by putting $\Delta_B=2D+\hbar \omega_0 \alpha^2-\delta\Delta_B$, with $\delta\Delta_B=0.025$, chosen in order to give $n_F \simeq 0.75$. Given this choice of parameters, we obtain a $PDF$ (illustrated in Fig.1) showing a double-peak structure which merges into a single-peak structure as the temperature is lowered below the characteristic temperature $T^*$, at which, as we shall see below, the pseudogap opens up. The two peak positions characterize the two deformations of the local lattice environment where a given site is alternatively occupied by a pair of electrons or by a bipolaron. Recent $EXAFS$[@Roehler-97], $XANES$[@Conradson-97] and pulsed neutron scattering[@Egami-96] experiments give some indications for such dynamical local lattice fluctuations. Let us now embark on the evaluation of the intensity of the photoemission spectrum $I_{PES}(\omega)$ from a single-site Boson-Fermion system - tantamount to the angle-integrated rather than angle-resolved photoemission spectroscopy when neglecting the effect of the dynamical mean field coming from the itinerancy of the electrons.
We have $I_{PES}(\omega)=I_E(\omega)n_F(\omega)$, where $n_F(\omega)$ denotes the Fermi distribution function and $I_E(\omega)$ the emission part of the total one-particle Fermionic spectral function $$\begin{aligned} &&I(\omega) = \frac{1}{Z}\sum_{m,m';l,l'}\left( e^{-\beta E(m,l)}+ e^{-\beta E(m',l')} \right) \nonumber \\ &&\qquad\quad | \langle m',l'|c_{\uparrow}|m,l \rangle |^2 \delta(\omega-E(m,l)+E(m',l')) \nonumber \\ &&\quad\quad = Z_F\delta(\omega-\varepsilon_0)+\frac{1}{Z}\sum_{l,m,s=\pm} |u_{l,s}^m|^2(e^{-\beta(\varepsilon_0+m\hbar\omega_0)} \nonumber \\ &&\qquad\quad + e^{-\beta\varepsilon_{l,s}} ) \delta(\omega+\varepsilon_0+m\hbar\omega_0-\varepsilon_{l,s}) \nonumber \\ &&+{e^{-\alpha^2}\over Z} \sum_{l,m,s=\pm} \left| \sum_{n \leq l} v_{m,s}^n \sqrt{{l! \over n!}}\sum_{n'=0}^n {n \choose n'} {\alpha^{n-n'}(-\alpha)^{l-n'} \over (l-n')!} \right| ^2 \nonumber \\ &&(e^{-\beta \varepsilon_{m,s}}+ e^{-\beta(\varepsilon_0+E_0+l\hbar\omega_0-\varepsilon_{BP})}) \delta(\omega+\varepsilon_{m,s}-\varepsilon_0 -E_0 \nonumber \\ && -l\hbar\omega_0+\varepsilon_{BP})\quad.\end{aligned}$$ Here $Z_F=\frac{1}{Z} (1+e^{-\beta \varepsilon_0})(1+e^{-\beta (\varepsilon_0+E_0-\varepsilon_{BP})}) n_B(\hbar \omega_0)$ represents the spectral weight of the non-bonding contributions which accounts for the coherent part of the photoemission spectrum, unaffected by any coupling to the Bosons and hence to the phonons ($n_B(\omega)$ denotes the Bose distribution function). The second and third contributions to the spectral function $I(\omega)$ account for the incoherent part of the spectrum. We illustrate in Fig.2 the photoemission spectral intensity $I_{PES}(\omega)$ for different temperatures (in units of $D$). For high temperatures ($T \simeq 0.06$) we observe a strongly broadened spectral function which in shape comes close to that of a typical Fermi liquid.
Upon lowering the temperature this spectral function starts exhibiting a pseudogap and at the same time a broad incoherent contribution (coming from the second and third terms of the expression for $I(\omega)$ in Eq.(5)) emerges. The incoherent part of the spectrum extends over a region in energy which is of the order of the half band width ($\simeq 0.5 eV$) and is practically temperature independent at low temperatures, which seems to be confirmed experimentally[@Norman-97]. The closing up of the pseudogap (measured as the difference in energy between the chemical potential at $\omega=0$ and the midpoint of the leading edge of the photoemission spectrum) as we increase the temperature is illustrated in the inset of Fig.2. The pseudogap has a zero temperature limit of $0.085D\simeq 40\,meV$ and closes up at a characteristic temperature $T^* \simeq 0.06D\simeq 350\,K$, which are reasonable numbers. We should add that the chemical potential for temperatures below $T^*$ turns out to be practically temperature independent. In order to illustrate the closing up of the pseudogap as the temperature approaches $T^*$ we plot in Fig.3 the density of states $I(\omega)$ for different temperatures. We clearly notice a strongly non-symmetric bias near the chemical potential ($\omega = 0$) which seems to be verified in tunneling experiments[@Renner-98]. The work reported in this Letter relates the temperature dependence of the pseudogap to that of the incoherent part of the quasi-particle spectrum. The opening of the pseudogap, being associated with resonant exchange tunneling between intrinsically unpaired electrons and electrons paired up in bipolaronic states, is driven by a metal-insulator cross-over rather than by superconducting fluctuations and thus can open up well above the onset of the superconducting phase.
The broad incoherent part of this spectrum is attributed to phonon shake-off effects arising from the polaronic character of the electrons in the metallic layers, transmitted to them via their resonant scattering into bipolaronic states. The temporarily fluctuating local lattice deformations which are caused in this process are expected to show up in a characteristic double-peak feature of the pair distribution function (measurable by $EXAFS$) and should test whether the incoherent $ARPES$ background is of polaronic origin or not. Our approach is based on an atomic limit calculation of the generalized Boson-Fermion model which is solved exactly by numerical means. The results obtained are not expected to change qualitatively when taking into account the itinerancy of the electrons. This will only introduce possible asymmetries in the Brillouin zone coming from asymmetric coupling $v$ in a more microscopic model (in accordance with the hypothesis of strongly hybridized plane and out of plane states in certain parts of the Brillouin zone, as suggested by $LDA$ calculations[@Andersen-94]) and affect the quasi-particle structure close to the Fermi energy. For this energy regime our self-consistent studies[@Ranninger-95] on the original $BFM$, fully taking into account electron itinerancy but neglecting any coupling to the phonons, reproduce more faithfully the quasi-particle structure, but with an incoherent contribution which is typically of the order of $0.1 \; D$. This is one order of magnitude less than the incoherent contributions due to phonon shake-off reported in this Letter. We can therefore safely treat this problem within the approximation scheme presented here. Our results are moreover robust in the sense that they hold for different total concentrations between $1.5$ and $2$, as long as we enforce the condition that $n_F$ is between $0.7$ and $1$. D.S. Marshall [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 4841 (1996); H.
Ding [*et al.*]{}, Nature [**382**]{}, 51 (1996). Z.-X. Shen and J.R. Schrieffer, Phys. Rev. Lett. [**78**]{}, 1771 (1997). N.L. Saini [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 3467 (1997). J. Ranninger, J.-M. Robin and M. Eschrig, Phys. Rev. Lett. [**74**]{}, 4027 (1995); J. Ranninger and J.-M. Robin, Phys. Rev. [**B53**]{}, R11961 (1996); T. Domanski, J. Ranninger and J.-M. Robin, Solid State Commun. [**105**]{}, 473 (1998). J. Röhler [*et al.*]{}, in “High-$T_c$ Superconductivity 1996: Ten years after the discovery”, E. Kaldis [*et al.*]{} eds. (Kluwer Academic Publishers, Dordrecht, 1997), p.469. H. Ding [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 2628 (1997). A. Ino [*et al.*]{}, preprint (1998). S.D. Conradson, J. Mustre de Leon and A.R. Bishop, Journal of Superconductivity [**10**]{}, 329 (1997). T. Egami and S.J.L. Billinge, in “Physical Properties of High-Temperature Superconductors”, D.M. Ginsberg ed. (World Scientific, 1996), p.265. M.R. Norman [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 3506 (1997). Ch. Renner [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 149 (1998). See for instance O.K. Andersen [*et al.*]{}, Phys. Rev. B [**49**]{}, 4145 (1994) for $YBCO$. Similar results were obtained for $Hg$, $Tl$ and $Bi$ compounds.
--- abstract: 'We discuss the implications of the recent discovery of CP violation in two-body SCS $D$ decays by LHCb. We show that the result can be explained within the SM without the need for any large $SU(3)$ breaking effects. It further enables the determination of the imaginary part of the ratio of the $\Delta U=0$ over $\Delta U=1$ matrix elements in charm decays, which we find to be $(0.65\pm 0.12)$. Within the standard model, the result proves the non-perturbative nature of the penguin contraction of tree operators in charm decays, similar to the known non-perturbative enhancement of $\Delta I=1/2$ over $\Delta I=3/2$ matrix elements in kaon decays, that is, the $\Delta I=1/2$ rule. As a guideline for future measurements, we show how to completely solve the most general parametrization of the $D \to P^+P^-$ system.' author: - Yuval Grossman - Stefan Schacht bibliography: - 'uspin-DeltaACP.bib' title: 'The Emergence of the $\Delta U=0$ Rule in Charm Physics' ---

Introduction \[sec:intro\]
==========================

In a recent spectacular result, LHCb discovered direct CP violation in charm decays at 5.3$\sigma$ [@Aaij:2019kcg].
The new world average of the difference of CP asymmetries [@Aitala:1997ff; @Link:2000aw; @Csorna:2001ww; @Aubert:2007if; @Staric:2008rx; @Aaltonen:2011se; @Collaboration:2012qw; @Aaij:2011in; @Aaij:2013bra; @Aaij:2014gsa; @Aaij:2016cfh; @Aaij:2016dfb] $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} &\equiv a_{CP}^{\mathrm{dir}}(D^0\rightarrow K^+K^-) - a_{CP}^{\mathrm{dir}}(D^0\rightarrow \pi^+\pi^-)\,, \end{aligned}$$ where $$\begin{aligned} a_{CP}^{\mathrm{dir}}(f) &\equiv \frac{ \vert \mathcal{A} (D^0\to f)\vert^2 - \vert {\mathcal{A}}(\overline{D}^0\to f)\vert^2 }{ \vert \mathcal{A}(D^0\to f)\vert^2 + \vert {\mathcal{A}}(\overline{D}^0\to f)\vert^2 }\,, \end{aligned}$$ as provided by the Heavy Flavor Averaging Group (HFLAV) [@Amhis:2016xyh], is given as [@Carbone:2019] $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} &= -0.00164\pm 0.00028\,. \label{eq:HFLAVav} \end{aligned}$$ Our aim in this paper is to study the implications of this result. In particular, working within the Standard Model (SM) and using the known values of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements as input, we see how Eq. (\[eq:HFLAVav\]) can be employed in order to extract low energy QCD quantities, and learn from them about QCD. The new measurement allows, for the first time, the determination of the CKM-suppressed amplitude of singly-Cabibbo-suppressed (SCS) charm decays, which contributes a weak phase relative to the CKM-leading part and thereby leads to a non-vanishing CP asymmetry. More specifically, $\Delta a_{CP}^{\mathrm{dir}}$ allows the determination of the imaginary part of the ratio of the $\Delta U=0$ over $\Delta U=1$ matrix elements. As we show, the data suggest the emergence of a $\Delta U=0$ rule, which has features that are similar to the known $\Delta I=1/2$ rule in kaon physics.
This rule is the observation that in $K \to \pi\pi$ the amplitude into an $I=0$ final state is enhanced by a factor $\sim 20$ with respect to the one into an $I=2$ final state [@Tanabashi:2018oca; @GellMann:1955jx; @GellMann:1957wh; @Gaillard:1974nj; @Bardeen:1986vz; @Buras:2014maa; @Bai:2015nea; @Blum:2015ywa; @Boyle:2012ys; @Buras:2015yba; @Kitahara:2016nld]. This is explained by large non-perturbative rescattering effects. Analogous enhancements in charm decays have previously been discussed in Refs. [@Einhorn:1975fw; @Abbott:1979fw; @Golden:1989qx; @Brod:2012ud; @Grinstein:2014aza; @Bhattacharya:2012ah; @Franco:2012ck; @Hiller:2012xm]. For further recent theoretical work on charm CP violation see Refs. [@Nierste:2017cua; @Nierste:2015zra; @Muller:2015rna; @Grossman:2018ptn; @Buccella:1994nf; @Grossman:2006jg; @Artuso:2008vf; @Khodjamirian:2017zdu; @Buccella:2013tya; @Cheng:2012wr; @Feldmann:2012js; @Li:2012cfa; @Atwood:2012ac; @Grossman:2012ry; @Buccella:2019kpn; @Yu:2017oky; @Brod:2011re]. In Sec. \[sec:decomposition\] we review the completely general U-spin decomposition of the decays $D^0\rightarrow K^+K^-$, $D^0\rightarrow \pi^+\pi^-$ and $D^0\rightarrow K^{\pm}\pi^{\mp}$. After that, in Sec. \[sec:solving\] we show how to completely determine all U-spin parameters from data. Our numerical results which are based on the current measurements are given in Sec. \[sec:numerics\]. In Sec. \[sec:deltau0rule\] we interpret these as the emergence of a $\Delta U=0$ rule, and in Sec. \[sec:DeltaI12inKDB\] we compare it to the $\Delta I=1/2$ rules in $K$, $B$ and $D$ decays. The different effect of $\Delta U=0$ and $\Delta I=1/2$ rules on the phenomenology of charm and kaon decays, respectively, is discussed in Sec. \[sec:UandIrules\]. In Sec. \[sec:conclusions\] we conclude.
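The structure of $a_{CP}^{\mathrm{dir}}$ defined above can be made concrete with a toy amplitude carrying two interfering contributions; the sketch below (generic illustrative values, not a fit to data) shows that a direct CP asymmetry requires both a strong-phase and a weak-phase difference between the two contributions:

```python
import cmath

def a_cp_dir(r, delta, phi):
    """Direct CP asymmetry for A = 1 + r e^{i(delta+phi)} and
    Abar = 1 + r e^{i(delta-phi)}, with delta a strong (CP-even)
    and phi a weak (CP-odd) phase."""
    A = 1 + r * cmath.exp(1j * (delta + phi))
    Abar = 1 + r * cmath.exp(1j * (delta - phi))
    return (abs(A)**2 - abs(Abar)**2) / (abs(A)**2 + abs(Abar)**2)

# The asymmetry vanishes if either phase difference is zero:
print(a_cp_dir(0.1, 0.0, 0.7))   # no strong phase -> 0
print(a_cp_dir(0.1, 0.7, 0.0))   # no weak phase   -> 0
# With both phases present it is nonzero, ~ -2 r sin(delta) sin(phi) for small r:
print(a_cp_dir(0.001, 1.0, 0.5))
```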
Most general amplitude decomposition \[sec:decomposition\]
==========================================================

The Hamiltonian of SCS decays can be written as the sum $$\begin{aligned} \mathcal{H}_{\mathrm{eff}} \sim \Sigma (1,0) - \frac{\lambda_b}{2} (0,0)\,,\end{aligned}$$ where $(i,j) = \mathcal{O}^{\Delta U=i}_{\Delta U_3=j}$, and the combinations of CKM matrix elements that appear are $$\begin{aligned} \Sigma &\equiv \frac{V_{cs}^* V_{us} - V_{cd}^* V_{ud}}{2}\,, \qquad -\frac{\lambda_b}{2} \equiv -\frac{V_{cb}^* V_{ub}}{2} = \frac{V_{cs}^* V_{us} + V_{cd}^* V_{ud} }{2}\,, \end{aligned}$$ where, numerically, $|\Sigma| \gg |\lambda_b|$. The corresponding amplitudes have the structure $$\begin{aligned} \mathcal{A} = \Sigma ( A_{\Sigma}^s - A_{\Sigma}^d ) - \frac{\lambda_b}{2} A_b\,,\end{aligned}$$ where $A_{\Sigma}^s$, $A_{\Sigma}^d$ and $A_b$ contain only strong phases and we also write $A_{\Sigma}\equiv A_{\Sigma}^s - A_{\Sigma}^d$. For the amplitudes we use the notation $$\begin{aligned} \mathcal{A}(K\pi) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow K^+\pi^-)\,, \\ \mathcal{A}(\pi\pi) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow \pi^+\pi^-)\,, \\ \mathcal{A}(KK) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow K^+K^-)\,, \\ \mathcal{A}(\pi K) &\equiv \mathcal{A}(\overline{D}^0 \rightarrow \pi^+ K^-)\,.\end{aligned}$$ The U-spin related quartet of charm meson decays into charged final states can then be written as [@Brod:2012ud; @Muller:2015lua; @Muller:2015rna] $$\begin{aligned} {\mathcal{A}}(K\pi) &= V_{cs} V_{ud}^* \left(t_0 - \frac{1}{2} t_1 \right)\,, \label{eq:decomp-1}\\ {\mathcal{A}}(\pi\pi) &= -\Sigma^*\left(t_0 + s_1 + \frac{1}{2} t_2 \right) -\lambda_b^*\left(p_0 - \frac{1}{2} p_1 \right)\,, \label{eq:decomp-2}\\ {\mathcal{A}}(KK) &= \Sigma^*\left(t_0 - s_1 + \frac{1}{2} t_2 \right) -\lambda_b^* \left(p_0 + \frac{1}{2} p_1 \right)\,, \label{eq:decomp-3}\\ {\mathcal{A}}(\pi K) &= V_{cd} V_{us}^* \left(t_0 + \frac{1}{2} t_1 \right)\,.
\label{eq:decomp-4}\end{aligned}$$ The subscript of the parameters denotes the level of U-spin breaking at which they enter. We write $A(K\pi)$ and $A(\pi K)$ for the Cabibbo-favored (CF) and doubly Cabibbo-suppressed (DCS) amplitude without the CKM factors, respectively. We emphasize that the SM parametrization in Eqs. (\[eq:decomp-1\])–(\[eq:decomp-4\]) is completely general and independent of U-spin considerations. For example, further same-sign contributions in the CF and DCS decays can be absorbed by a redefinition of $t_0$ and $t_2$, see Ref. [@Brod:2012ud]. The meaning as a U-spin expansion only comes into play if we assume a hierarchy for the parameters according to their subscript. The letters used to denote the amplitudes should not be confused with any ideas about the diagrams that generate them. That is, the use of $p_0$ and $t_0$ is there since in some limit $p_0$ is dominated by penguin diagrams and $t_0$ by tree diagrams. Yet, this is not always the case, and thus it is important to keep in mind that all that we do know at this stage is that the above is a general reparametrization of the decay amplitudes, and that each amplitude arises at a given order in the U-spin expansion. In the topological interpretation of the appearing parameters, $t_0$ includes both tree and exchange diagrams, which are absorbed [@Muller:2015lua]. Moreover, $s_1$ contains the broken penguin and $p_0$ includes contributions from tree, exchange, penguin and penguin annihilation diagrams [@Muller:2015lua; @Brod:2012ud]. We note that the U-spin parametrization is completely general when we assume no CPV in the CF and DCS decays, which is also the case to a very good approximation in the SM. Beyond the SM, there can be additional amplitude contributions to the $\overline{D}^0\rightarrow K^+\pi^-$ and $\overline{D}^0\rightarrow \pi^+K^-$ decays which come with a relative weak phase from CP violating new physics. We do not discuss this case any further here.
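The CKM combinations $\Sigma$ and $\lambda_b$ introduced above are tied together by the unitarity relation $V_{cd}^*V_{ud}+V_{cs}^*V_{us}+V_{cb}^*V_{ub}=0$, which fixes their strong hierarchy. A numerical sketch (the angle values are illustrative PDG-like inputs assumed here, not results from this paper) makes this explicit:

```python
import numpy as np

# Standard-parametrization CKM matrix; angle values are illustrative
# PDG-like inputs, not fit results from this paper.
s12, s13, s23, delta = 0.2250, 0.0037, 0.0418, 1.144
c12, c13, c23 = (np.sqrt(1 - s**2) for s in (s12, s13, s23))
e = np.exp(1j * delta)
V = np.array([
    [c12 * c13,                     s12 * c13,                     s13 / e  ],
    [-s12 * c23 - c12 * s23 * s13 * e,  c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e,  -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13],
])
(Vud, Vus, Vub), (Vcd, Vcs, Vcb) = V[0], V[1]

Sigma = (np.conj(Vcs) * Vus - np.conj(Vcd) * Vud) / 2
lam_b = np.conj(Vcb) * Vub

# Unitarity: Vcd* Vud + Vcs* Vus + Vcb* Vub = 0, so -lam_b/2 is indeed
# the second CKM structure of the effective Hamiltonian.
print(abs(np.conj(Vcd) * Vud + np.conj(Vcs) * Vus + lam_b))  # ~ 0
print(abs(lam_b / Sigma))   # ~ 1e-3 or less: the strong hierarchy |lam_b| << |Sigma|
```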
In terms of the above amplitudes, the branching ratios are given as $$\begin{aligned} \mathcal{BR}(D\rightarrow P_1P_2 ) &= \vert \mathcal{A}\vert^2 \times \mathcal{P}(D,P_1,P_2)\,, \nonumber \\ \mathcal{P}(D,P_1,P_2) &= \tau_D \times \frac{1}{16\pi m_D^3} \sqrt{ (m_D^2 - (m_{P_1} - m_{P_2})^2) (m_D^2 - ( m_{P_1} + m_{P_2})^2 ) }\,.\end{aligned}$$ The direct CP asymmetries are [@Golden:1989qx; @Pirtskhalava:2011va; @Nierste:2017cua] $$\begin{aligned} a_{CP}^{\mathrm{dir}} &= \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) \mathrm{Im}\left(\frac{A_b}{A_{\Sigma}}\right)\,.\end{aligned}$$

Solving the complete U-spin System \[sec:solving\]
===================================================

We discuss how to extract the U-spin parameters of Eqs. (\[eq:decomp-1\])–(\[eq:decomp-4\]) from the observables. We are mainly interested in the ratios of parameters and less in their absolute sizes and therefore we consider only quantities normalized to $t_0$, that is $$\begin{aligned} \label{eq:def-u-par} &\tilde{t}_1 \equiv \frac{t_1}{t_0}\,, \qquad \tilde{t}_2 \equiv \frac{t_2}{t_0}\,, \qquad \tilde{s}_1 \equiv \frac{s_1}{t_0}\,, \qquad \tilde{p}_0 \equiv \frac{p_0}{t_0}\,,\qquad \tilde{p}_1 \equiv \frac{p_1}{t_0}\,.\end{aligned}$$ We choose, without loss of generality, the tree amplitude $t_0$ to be real. The relative phase between $\mathcal{A}(K\pi)$ and $\mathcal{A}(\pi K)$ is physical and can be extracted in experimental measurements. However, the relative phases between $\mathcal{A}(\pi\pi)$, $\mathcal{A}(KK)$ and $\mathcal{A}(K\pi)$ are unphysical, i.e. in principle not observable. This corresponds to two additional phase choices that can be made in the U-spin parametrization. Consequently, without loss of generality, we can also choose the two parameters $\tilde{s}_1$ and $\tilde{t}_2$ to be real. Altogether, that makes eight real parameters that we want to extract, not counting the normalization $t_0$.
Of these, four parameters are in the CKM-leading part of the amplitudes and four in the CKM-suppressed one. In the CP limit $\mathrm{Im}\lambda_b \rightarrow 0$ we can absorb $\tilde{p}_0$ and $\tilde{p}_1$ into $\tilde{t}_2$ and $\tilde{s}_1$ respectively, which leaves four real parameters in that limit. The eight parameters can be determined completely from eight observables. Additional observables can then be used in order to overconstrain the system. We divide the eight observables that we use to determine the system into four categories: $(i)$ Branching ratio measurements (3 observables) [@Tanabashi:2018oca]. They are used to calculate the squared matrix elements. We neglect the tiny effects of order $|\lambda_b/\Sigma|$ and we get $$\begin{aligned} \vert A_{\Sigma}(KK)\vert^2 &= \frac{\mathcal{B}( \overline{D}^0\rightarrow K^+K^-) }{ |\Sigma|^2 \mathcal{P}(D^0,K^+,K^-) }\,, \\ \vert A_{\Sigma}( \pi\pi)\vert^2 &= \frac{\mathcal{B}( \overline{D}^0\rightarrow \pi^+\pi^-) }{ |\Sigma|^2 \mathcal{P}(D^0,\pi^+,\pi^-) }\,, \\ \vert A( K\pi)\vert^2 &= \frac{\mathcal{B}( \overline{D}^0\rightarrow K^+\pi^-) }{ |V_{cs} V_{ud}^*|^2 \mathcal{P}(D^0, K^+, \pi^-) }\,, \\ \vert A(\pi K)\vert^2 &= \frac{\mathcal{B}( \overline{D}^0\rightarrow K^-\pi^+) }{ |V_{cd} V_{us}^*|^2 \mathcal{P}(D^0,K^-,\pi^+ ) }\,.\end{aligned}$$ We consider three ratios of combinations of the four branching ratios, which are $$\begin{aligned} R_{K\pi} &\equiv \frac{ \vert A(K\pi)\vert^2 - \vert A(\pi K)\vert^2 }{ \vert A(K\pi) \vert^2 + \vert A(\pi K) \vert^2 }\,, \label{eq:br-ratio-1}\\ R_{KK,\pi\pi} &\equiv \frac{ \vert A(KK)\vert^2 - \vert A(\pi\pi)\vert^2 }{ \vert A(KK)\vert^2 + \vert A(\pi \pi)\vert^2 }\,, \label{eq:br-ratio-2} \\ R_{KK,\pi\pi,K\pi} &\equiv \frac{ \vert A( KK )\vert^2 + \vert A( \pi\pi) \vert^2 - \vert A( K\pi) \vert^2 - \vert A( \pi K) \vert^2 }{ \vert A( KK) \vert^2 + \vert A( \pi \pi) \vert^2 + \vert A( K \pi) \vert^2 + \vert A( \pi K) \vert^2
}\,. \label{eq:br-ratio-3}\end{aligned}$$ $(ii)$ Strong phase which does not require CP violation (1 observable). The relative strong phase between CF and DCS decay modes $$\begin{aligned} \delta_{K\pi} &\equiv \mathrm{arg}\left( \frac{\mathcal{A}(\overline{D}^0\rightarrow K^-\pi^+)}{\mathcal{A}(D^0\rightarrow K^-\pi^+)}\right) = \mathrm{arg}\left(\frac{\mathcal{A}(D^0\rightarrow K^+\pi^-)}{\mathcal{A}(D^0\rightarrow K^-\pi^+)} \right) \end{aligned}$$ can be obtained from time-dependent measurements [@Chau:1993ec; @Browder:1995ay; @Wolfenstein:1995kv; @Blaylock:1995ay; @Falk:1999ts; @Gronau:2000ru; @Bergmann:2000id; @Falk:2001hx; @Grossman:2006jg; @Kagan:2009gb; @Aaij:2016roz] or correlated $D^0 \overline{D}^0$ decays [@Bigi:1986dp; @Xing:1996pn; @Gronau:2001nr; @Atwood:2002ak; @Asner:2005wf; @Asner:2012xb] at a charm-$\tau$ factory. $(iii)$ Integrated direct CP asymmetries (2 observables). In particular we use [@Einhorn:1975fw; @Abbott:1979fw; @Golden:1989qx; @Brod:2012ud; @Grinstein:2014aza; @Franco:2012ck; @Nierste:2017cua; @Nierste:2015zra; @Muller:2015rna; @Hiller:2012xm; @Grossman:2018ptn; @Buccella:1994nf; @Grossman:2006jg; @Artuso:2008vf; @Khodjamirian:2017zdu; @Cheng:2012wr; @Feldmann:2012js; @Li:2012cfa; @Atwood:2012ac; @Grossman:2012ry; @Buccella:2019kpn; @Yu:2017oky; @Brod:2011re] $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} &\equiv a_{CP}^{\mathrm{dir}}(D^0\rightarrow K^+K^-) - a_{CP}^{\mathrm{dir}}(D^0\rightarrow \pi^+\pi^-)\,, \\ \Sigma a_{CP}^{\mathrm{dir}} &\equiv a_{CP}^{\mathrm{dir}}(D^0\rightarrow K^+K^-) + a_{CP}^{\mathrm{dir}}(D^0\rightarrow \pi^+\pi^-)\,.\end{aligned}$$ $(iv)$ Strong phases that require CP violation (2 observables) [@Grossman:2006jg; @Bergmann:2000id; @Kagan:2009gb; @Bigi:1986dp; @Xing:1996pn; @Gronau:2001nr; @Atwood:2002ak; @Nierste:2015zra]. These are the relative phases of the amplitudes of a $\overline{D}^0$ and $D^0$ going into one of the CP eigenstates. 
They are proportional to CPV effects and thus very hard to extract. In particular, $$\begin{aligned} \delta_{KK} &\equiv \mathrm{arg}\left(\frac{\mathcal{A}(\overline{D}^0\rightarrow K^+K^-)}{\mathcal{A}(D^0\rightarrow K^+K^-)} \right)\,, \qquad \delta_{\pi\pi} \equiv \mathrm{arg}\left(\frac{\mathcal{A}(\overline{D}^0\rightarrow \pi^+\pi^-)}{\mathcal{A}(D^0\rightarrow \pi^+\pi^-)} \right) \,.\end{aligned}$$ These can be obtained from time-dependent measurements or measurements of correlated $D^0\overline{D}^0$ pairs. In principle, using the above observables, the system of Eqs. (\[eq:decomp-1\])–(\[eq:decomp-4\]) is exactly solvable, provided the data are sufficiently precise. In the CP limit the branching ratio measurements $(i)$ and the strong phase $(ii)$ are sufficient to determine $\tilde{t}_1$, $\tilde{t}_2$ and $\tilde{s}_1$, which form the complete set of independent parameters in this limit. For our parameter extraction with current data, we expand the observables to first nonvanishing order in the U-spin expansion. We measure the power counting of that expansion with a generic parameter $\varepsilon$, which for nominal U-spin breaking effects is expected to be $\varepsilon \sim 25\%$. All of the explicit results that we give below share the convenient feature that the parameters can be extracted from them up to relative corrections of order $\mathcal{O}(\varepsilon^2)$. Below it is understood that we neglect all effects of that order. In terms of our parameters the ratios of branching ratios are given as $$\begin{aligned} R_{K\pi} &= - \mathrm{Re}(\tilde{t}_1) \,, \\ R_{KK,\pi\pi} &= - 2 \tilde{s}_1\,, \\ R_{KK,\pi\pi,K\pi} &= \frac{1}{2}\left( \tilde{s}_1^2 - \frac{1}{4}\vert \tilde{t}_1\vert^2 + \tilde{t}_2 \right)\,. \end{aligned}$$ By inserting the expressions for $R_{K\pi}$ and $R_{KK,\pi\pi}$ into Eq. (\[eq:br-ratio-3\]) we can solve the above equations for the independent parameter combinations.
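Inverting these leading-order relations amounts to elementary arithmetic. A minimal sketch, valid up to the stated relative $\mathcal{O}(\varepsilon^2)$ corrections (the numerical checks use the measured ratio values quoted in the numerical section):

```python
import math

def extract_parameters(R_Kpi, R_KKpipi, R_KKpipiKpi, delta_Kpi_rad):
    """Leading-order U-spin parameter extraction from the three
    branching-ratio combinations and the strong phase delta_Kpi."""
    re_t1 = -R_Kpi                       # Re(t1~) = -R_Kpi
    im_t1 = -delta_Kpi_rad               # delta_Kpi = -Im(t1~) at this order
    s1 = -0.5 * R_KKpipi                 # s1~ = -R_KK,pipi / 2
    # The combination -Im(t1~)^2/4 + t2~ in terms of the three ratios:
    combo = 2.0 * R_KKpipiKpi - 0.25 * R_KKpipi**2 + 0.25 * R_Kpi**2
    t2 = combo + 0.25 * im_t1**2
    return re_t1, im_t1, s1, t2
```

Plugging in $R_{K\pi}=-0.11$, $R_{KK,\pi\pi}=0.534$, $R_{KK,\pi\pi,K\pi}=0.071$ and $\delta_{K\pi}=8.6^{\circ}$ reproduces the central values of the extraction quoted below.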
The result up to $\mathcal{O}(\varepsilon^2)$ is $$\begin{aligned} \mathrm{Re}( \tilde{t}_1 ) &= - R_{K\pi}\,, \label{eq:Ret1tilde}\\ \tilde{s}_1 &= -\frac{1}{2} R_{KK,\pi\pi}\,, \label{eq:Res1tilde} \\ -\frac{1}{4} \left(\mathrm{Im}\, \tilde{t}_1\right)^2 + \tilde{t}_2 &= 2 R_{KK,\pi\pi,K\pi} - \frac{1}{4}R_{KK,\pi\pi}^2 + \frac{1}{4} R_{K\pi}^2 \,.\label{eq:combi}\end{aligned}$$ We are then able to determine $\tilde{t}_1$ with Eq. (\[eq:Ret1tilde\]) and the strong phase between the CF and DCS mode, see also Ref. [@Bergmann:2000id], $$\begin{aligned} \delta_{K\pi} &= \mathrm{arg}\left(- \frac{1-\frac{1}{2} \tilde{t}_1 }{1+\frac{1}{2} \tilde{t}_1 } \right) = -\mathrm{Im} (\tilde{t}_1)\,, \label{eq:strongphase} \end{aligned}$$ where in the last step we neglect terms of relative order of $\varepsilon^2$. After that we can determine $\tilde{s}_1$ and $\tilde{t}_2$ from Eqs. (\[eq:Res1tilde\]) and (\[eq:combi\]), respectively. The sum and difference of the integrated direct CP asymmetries can be used together with the phases $\delta_{KK}$ and $\delta_{\pi\pi}$ to determine $\tilde{p}_0$ and $\tilde{p}_1$. We have $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} &= \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) \times 4\,\mathrm{Im}\left(\tilde{p}_0 \right) \,, \label{eq:DeltaACPdirParameter}\end{aligned}$$ and $$\begin{aligned} \Sigma a_{CP}^{\mathrm{dir}} = 2\, \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) \times \left[ 2\, \mathrm{Im}(\tilde{p}_0 ) \tilde{s}_1 + \mathrm{Im}(\tilde{p}_1) \right] \,.\end{aligned}$$ Note that also $\Delta a_{CP}^{\mathrm{dir}}$ and $\Sigma a_{CP}^{\mathrm{dir}}$ share the feature of corrections entering only at the relative order $\mathcal{O}(\varepsilon^2)$ compared to the leading result. The measurement of $\Delta a_{CP}^{\mathrm{dir}}$ is basically a direct measurement of $\mathrm{Im}\,\tilde{p}_0$, $$\begin{aligned} \mathrm{Im}\,\tilde{p}_0 &= \frac{1}{4 \mathrm{Im}(\lambda_b/\Sigma)} \Delta a_{CP}^{\mathrm{dir}}\,. 
\label{eq:penguinovertree} \end{aligned}$$ The phases $\delta_{KK}$ and $\delta_{\pi\pi}$ give (see e.g. Ref. [@Nierste:2015zra]) $$\begin{aligned} \mathrm{Re}\left(\frac{A_b(D^0\rightarrow K^+K^-)}{A_{\Sigma}(D^0\rightarrow K^+K^-)} \right) - \mathrm{Re}\left(\frac{A_b(D^0\rightarrow \pi^+\pi^-)}{A_{\Sigma}(D^0\rightarrow \pi^+\pi^-)} \right) &= 4 \mathrm{Re}(\tilde{p}_0) \,, \label{eq:retildep0}\end{aligned}$$ and $$\begin{aligned} \mathrm{Re}\left(\frac{A_b(D^0\rightarrow K^+K^-)}{A_{\Sigma}(D^0\rightarrow K^+K^-)} \right) + \mathrm{Re}\left(\frac{A_b(D^0\rightarrow \pi^+\pi^-)}{A_{\Sigma}(D^0\rightarrow \pi^+\pi^-)} \right) &= 2\, \mathrm{Re}(2 \tilde{p}_0 \tilde{s}_1 + \tilde{p}_1 ) {\nonumber}\\ &= 2 \left[ 2\, \mathrm{Re}(\tilde{p}_0) \tilde{s}_1 + \mathrm{Re}(\tilde{p}_1)\right] \,. \label{eq:retildep1}\end{aligned}$$ As $\tilde{s}_1$ is in principle already determined from the other observables, this then gives us the full information on $\tilde{p}_0$ and $\tilde{p}_1$. As the observables $\delta_{KK}$ and $\delta_{\pi\pi}$ are the hardest to measure, we do not provide here the explicit relation of Eq. (\[eq:retildep0\]) and Eq. (\[eq:retildep1\]) to these observables, noting only that the corresponding parameter combinations can be determined from them in a straightforward way. Taking everything into account, we conclude that the above system of eight observables for eight parameters can be completely solved. In doing so, the values of the CKM elements are used as inputs. We emphasize that in principle, with correlated double-tag measurements at a future charm-tau factory [@Gronau:2001nr; @Goldhaber:1976fp; @Bigi:1986dp; @Xing:1994mn; @Xing:1995vj; @Xing:1995vn; @Xing:1996pn; @Xing:1999yw; @Asner:2005wf; @Asner:2008ft; @Asner:2012xb; @Xing:2019uzz] we could even overconstrain the system. Numerical Results \[sec:numerics\] =================================== We use the formalism introduced in Sec.
\[sec:solving\] now with the currently available measurements. As not all of the observables have yet been measured, we cannot determine all of the U-spin parameters. Yet, we use the ones that we do have data on to get useful information on some of them. - Using Gaussian error propagation without taking into account correlations, from the branching ratio measurements [@Tanabashi:2018oca] $$\begin{aligned} \mathcal{BR}(D^0\rightarrow K^+K^-) &= (3.97\pm 0.07) \cdot 10^{-3}\,, \\ \mathcal{BR}(D^0\rightarrow \pi^+\pi^-) &= (1.407\pm 0.025)\cdot 10^{-3}\,, \\ \mathcal{BR}(D^0\rightarrow K^+\pi^- ) &= (1.366 \pm 0.028) \cdot 10^{-4}\,, \\ \mathcal{BR}(D^0\rightarrow K^- \pi^+ ) &= (3.89\pm 0.04)\cdot 10^{-2}\,,\end{aligned}$$ we obtain the normalized combinations $$\begin{aligned} R_{K\pi} &= -0.11 \pm 0.01\,, \\ R_{KK,\pi\pi} &= 0.534 \pm 0.009\,, \\ R_{KK,\pi\pi,K\pi} &= 0.071 \pm 0.009\,.\end{aligned}$$ - The strong phase between DCS and CF mode for the scenario of no CP violation in the DCS mode is [@Amhis:2016xyh] $$\begin{aligned} \delta_{K\pi} &= \left(8.6^{+9.1}_{-9.7}\right) ^{\circ}\,.\end{aligned}$$ - The world average of $\Delta a_{CP}^{\mathrm{dir}}$ is given in Eq. (\[eq:HFLAVav\]). - The sum of CP asymmetries $\Sigma a_{CP}^{\mathrm{dir}}$ in which CP violation has not yet been observed. In order to get an estimate we use the HFLAV averages for the single measurements of the CP asymmetries [@Amhis:2016xyh; @Aaij:2014gsa; @Aaltonen:2011se; @Aubert:2007if; @Staric:2008rx; @Csorna:2001ww; @Link:2000aw; @Aitala:1997ff] $$\begin{aligned} A_{CP}(D^0\rightarrow \pi^+\pi^-) &= 0.0000 \pm 0.0015\,, \\ A_{CP}(D^0\rightarrow K^+K^-) &= -0.0016 \pm 0.0012\,, \end{aligned}$$ and subtract the contribution from indirect charm CP violation $a_{CP}^{\mathrm{ind}} = (0.028 \pm 0.026)\%$ [@Carbone:2019]. 
We obtain $$\begin{aligned} \Sigma a_{CP}^{\mathrm{dir}} &= A_{CP}(D^0\rightarrow K^+K^-) + A_{CP}(D^0\rightarrow \pi^+\pi^-) - 2 a_{CP}^{\mathrm{ind}} {\nonumber}\\ &= -0.002\pm 0.002\,, \end{aligned}$$ where we do not take into account correlations, which may be sizable. - The phases $\delta_{KK}$ and $\delta_{\pi\pi}$ have not yet been measured, and we cannot get any indirect information about them. From Eqs. (\[eq:Ret1tilde\])–(\[eq:strongphase\]) it follows that $$\begin{aligned} \mathrm{Re}( \tilde{t}_1 ) &= 0.109 \pm 0.011\,, \label{eq:result-ret1tilde}\\ \mathrm{Im}( \tilde{t}_1 ) &= -0.15^{+0.16}_{-0.17}\,, \label{eq:result-imt1tilde} \\ \tilde{s}_1 &= -0.2668 \pm 0.0045\,, \label{eq:result-res1tilde} \\ -\frac{1}{4} \left(\mathrm{Im}\tilde{t}_1\right)^2 + \mathrm{Re}(\tilde{t}_2) &= 0.075\pm 0.018 \,. \label{eq:result-combi}\end{aligned}$$ Employing [@Tanabashi:2018oca] $$\begin{aligned} \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) = (-6.3\pm 0.3)\cdot 10^{-4}\,,\end{aligned}$$ and inserting the measurement of $\Delta a_{CP}^{\mathrm{dir}}$ into Eq. (\[eq:penguinovertree\]), we obtain $$\begin{aligned} \mathrm{Im}\,\tilde{p}_0 &= 0.65 \pm 0.12 \,. \label{eq:resultp0tilde} \end{aligned}$$ Using $\Sigma a_{CP}^{\mathrm{dir}} $ we get $$\begin{aligned} 2 \mathrm{Im}(\tilde{p}_0 ) \tilde{s}_1 + \mathrm{Im}(\tilde{p}_1) &= 1.7\pm 1.6\,. \end{aligned}$$ A few remarks are in order regarding the numerical values obtained. 1. Among the five parameters defined in Eq. (\[eq:def-u-par\]), $\tilde{p}_1$ is the least constrained, as we have essentially no information about it. In order to learn more about it we need measurements of $\Sigma a_{CP}^{\mathrm{dir}}$ as well as of the phases $\delta_{KK}$ and $\delta_{\pi\pi}$. 2. The first-order U-spin breaking parameters are consistently smaller than the leading-order ones, and the second-order ones are smaller still. This is what we expect from the U-spin expansion. 3. Eqs.
(\[eq:result-ret1tilde\])–(\[eq:result-combi\]) suggest that the SU(3)$_F$ breaking of the tree amplitude $\tilde{t}_1$ is smaller than the broken penguin contained in $\tilde{s}_1$. 4. Using Eqs. (\[eq:result-ret1tilde\])–(\[eq:result-combi\]) we can get a rough estimate for the $\mathcal{O}(\varepsilon^2)$ corrections that enter the expression for $\Delta a_{CP}^{\mathrm{dir}}$ in Eq. (\[eq:DeltaACPdirParameter\]). The results on the broken penguin suggest that these corrections do not exceed a level of $\sim 10\%$. We cannot, however, determine these corrections completely without further knowledge of $\tilde{p}_1$. The $\Delta U=0$ rule \[sec:deltau0rule\] ========================================== We now turn to discuss the implications of Eq. (\[eq:resultp0tilde\]). We rewrite Eq. (\[eq:DeltaACPdirParameter\]) as $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} &= 4\, \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) \left|\tilde{p}_0 \right| \sin( \delta_{\mathrm{strong}})\,,\end{aligned}$$ with the unknown strong phase $$\begin{aligned} \delta_{\mathrm{strong}} &= \mathrm{arg}(\tilde{p}_0)\,.\end{aligned}$$ Then the numerical result in Eq. (\[eq:resultp0tilde\]) reads $$\begin{aligned} \left|\tilde{p}_0\right| \sin( \delta_{\mathrm{strong}}) &= 0.65 \pm 0.12\,. \label{eq:mainresult}\end{aligned}$$ Recall that in the group-theoretical language the parameters $t_0$ and $p_0$ are the matrix elements of the $\Delta U=1$ and $\Delta U=0$ operators, respectively [@Brod:2011re]. For the ratio of the matrix elements of these operators we now employ the following parametrization $$\label{eq:defC} \tilde p_0 = B + C e^{i \delta}\,,$$ such that $B$ is the short-distance (SD) ratio and the second term arises from long-distance (LD) effects. While the separation between SD and LD is not well-defined, what we have in mind here is that diagrams with a $b$ quark in the loop are perturbative and those with quarks lighter than the charm quark are not. In Eq.
(\[eq:DeltaI12-generic\]) of Sec. \[sec:DeltaI12inKDB\] below we apply the same decomposition, into a no-QCD part plus corrections to it, also to the $\Delta I=1/2$ rules in $K$, $D$ and $B$ decays to pions. It is instructive to compare all of these systems in the same language. We first argue that in Eq. (\[eq:defC\]), to a very good approximation, $B=1$. This is basically the statement that the perturbative diagrams with an intermediate $b$ are tiny. More explicitly, when we neglect the SD $b$ penguins, we have $$\begin{aligned} Q^{\Delta U=1} \equiv \frac{Q^{\bar{s}s} - Q^{\bar{d}d}}{2}\,,\qquad Q^{\Delta U=0} \equiv \frac{Q^{\bar{s}s} + Q^{\bar{d}d}}{2}\,. \end{aligned}$$ Setting $C=0$ then corresponds to the statement that only $Q^{\bar{s}s}$ can produce $K^+K^-$ and only $Q^{\bar{d}d}$ can produce $\pi^+\pi^-$. This implies that for $C=0$ $$\begin{aligned} { \left< K^+K^- \right| } Q^{\bar{d}d} { \left| D^0 \right> } &= { \left< \pi^+\pi^- \right| } Q^{\bar{s}s} { \left| D^0 \right> } = 0\,, \label{eq:assumption2-1} \end{aligned}$$ and $$\begin{aligned} { \left< K^+K^- \right| } Q^{\bar{s}s} { \left| D^0 \right> } \neq 0\,, \qquad { \left< \pi^+\pi^- \right| } Q^{\bar{d}d} { \left| D^0 \right> } \neq 0\,. \label{eq:assumption2-2}\end{aligned}$$ We then see that $B=1$ since $$\begin{aligned} \frac{{ \left< K^+K^- \right| } Q^{\Delta U=0}{ \left| D^0 \right> } }{ { \left< K^+K^- \right| } Q^{\Delta U=1}{ \left| D^0 \right> } } &= 1\,, \qquad \frac{ { \left< \pi^+\pi^- \right| } Q^{\Delta U=0}{ \left| D^0 \right> } }{ { \left< \pi^+\pi^- \right| } Q^{\Delta U=1}{ \left| D^0 \right> } } = -1\,.
\label{eq:quarkmodel-wo-phase}\end{aligned}$$ We note that in the SU(3)$_F$ limit we also have $$\begin{aligned} { \left< K^+K^- \right| } Q^{\Delta U=1}{ \left| D^0 \right> } &= - { \left< \pi^+\pi^- \right| } Q^{\Delta U=1}{ \left| D^0 \right> }\,, \\ { \left< K^+K^- \right| } Q^{\Delta U=0}{ \left| D^0 \right> } &= { \left< \pi^+\pi^- \right| } Q^{\Delta U=0}{ \left| D^0 \right> }\,,\end{aligned}$$ but this is not used to argue that $B=1$. We then argue that $\delta \sim \mathcal{O}(1)$. The reason is that non-perturbative effects involve on-shell particles, in other words rescattering, and such effects generate large strong phases in the LD contributions, independent of the magnitude of the LD amplitude. For $B=1$ and $\delta \sim \mathcal{O}(1)$, and using the fact that the CKM ratio is small, we conclude that the CP asymmetry is roughly given by the CKM factor times $C$ $$\begin{aligned} \Delta a_{CP}^{\mathrm{dir}} = 4\, \mathrm{Im}\left(\frac{\lambda_b}{\Sigma}\right) \times C \times \sin \delta\,. \end{aligned}$$ Now the question is: what is $C$? As no method is currently available to calculate $C$ with a well-defined theoretical uncertainty, we do not employ a dynamical calculation to provide an SM prediction for $C$ and $\Delta a_{CP}^{\mathrm{dir}}$. Instead, we lay out the principal possibilities and how to interpret them in view of the current data. In order to do so we measure the order of magnitude of the QCD correction term $C$ relative to the no-QCD limit $\tilde{p}_0=1$. Relative to that limit, we differentiate between three cases 1. $C = \mathcal{O}(\alpha_s/\pi)$: Perturbative corrections to $\tilde p_0$. 2. $C = \mathcal{O}(1)$: Non-perturbative corrections that produce strong phases from rescattering but do not significantly change the magnitude of $\tilde p_0$. 3.
$C \gg \mathcal{O}(1)$: Large non-perturbative effects with significant magnitude changes and strong phases from rescattering to $\tilde p_0$. Note that categories (2) and (3) are not different in principle: both involve non-perturbative effects, and they differ only in their size. Some perturbative results concluded that $C=\mathcal{O}(\alpha_s/\pi)$, leading to $\Delta a_{CP}^{\mathrm{dir}}\sim 10^{-4}$ [@Grossman:2006jg; @Bigi:2011re]. Note that the value $\Delta a_{CP}^{\mathrm{dir}} = 1\times 10^{-4}$, assuming an $\mathcal{O}(1)$ strong phase, would correspond numerically to $C \sim 0.04$. We conclude that if there is a good argument that $C$ is of category (1), the measurement of $\Delta a_{CP}^{\mathrm{dir}}$ would be a sign of beyond the SM (BSM) physics, because it would indicate a relative $\mathcal{O}(10)$ enhancement. Had the value of $\Delta a_{CP}^{\mathrm{dir}}$ turned out as large as suggested by the central value of some statistically insignificant earlier measurements [@Aaij:2011in; @Collaboration:2012qw], we would clearly need category (3) in order to explain it, i.e. penguin diagrams that are enhanced in magnitude, see e.g. Refs. [@Brod:2012ud; @Hiller:2012xm; @Cheng:2012wr; @Feldmann:2012js; @Li:2012cfa; @Atwood:2012ac; @Grossman:2012ry; @Brod:2011re]. Another example of category (3) is the $\Delta I=1/2$ rule in the kaon sector, which is further discussed in Secs. \[sec:DeltaI12inKDB\] and \[sec:UandIrules\]. The current data, Eq. (\[eq:mainresult\]), are consistent with category (2). In the SM picture, the measurement of $\Delta a_{CP}^{\mathrm{dir}}$ proves the non-perturbative nature of the $\Delta U=0$ matrix elements, with a mild enhancement from $\mathcal{O}(1)$ rescattering effects. This is the $\Delta U=0$ rule for charm.
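The extraction of $\mathrm{Im}\,\tilde{p}_0$ from Eq. (\[eq:penguinovertree\]) is a one-line computation. In the sketch below, the value $\Delta a_{CP}^{\mathrm{dir}} \approx -1.64\times 10^{-3}$ is an assumed stand-in for the world average referred to in the text (Eq. (\[eq:HFLAVav\]) is not reproduced here), and $\mathrm{Im}(\lambda_b/\Sigma) = -6.3\times 10^{-4}$ is taken from the numerical section.

```python
def im_p0_tilde(delta_acp_dir, im_lambdab_over_sigma):
    """Im(p0~) = Delta a_CP^dir / (4 Im(lambda_b/Sigma)),
    the leading-order inversion of the Delta a_CP^dir relation."""
    return delta_acp_dir / (4.0 * im_lambdab_over_sigma)
```

With the assumed inputs this reproduces the central value $\mathrm{Im}\,\tilde{p}_0 \approx 0.65$ quoted in the text.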
Note that the predictions for $\Delta a_{CP}^{\mathrm{dir}}$ of categories (1) and (2) differ by $\mathcal{O}(10)$, although category (2) contains only an $\mathcal{O}(1)$ nonperturbative enhancement with respect to the no-QCD limit $\tilde{p}_0=1$. We emphasize that a measure for a QCD enhancement is not necessarily its impact on an observable, but the amplitude-level comparison with the absence of QCD effects. We also mention that we do not need SU(3)$_F$ breaking effects to explain the data. Yet, the observation of $\vert \tilde{s}_1\vert > \vert \tilde{t}_1\vert$ in Eqs. (\[eq:result-ret1tilde\])–(\[eq:result-res1tilde\]) provides additional supporting evidence that rescattering is significant. While not a proof of the $\Delta U=0$ rule on its own, this matches its upshot and indicates the importance of rescattering effects also in the broken penguin contained in $\tilde{s}_1$. With future data on the phases $\delta_{KK}$ and $\delta_{\pi\pi}$ we will be able to determine the strong phase $\delta$ of Eq. (\[eq:defC\]). In that way it will be possible to completely determine the characteristics of the emerging $\Delta U=0$ rule. $\Delta I=1/2$ Rules in $K$, $D$ and $B$ Decays \[sec:DeltaI12inKDB\] ===================================================================== It is instructive to compare the $\Delta U=0$ rule in charm with the $\Delta I=1/2$ rule in kaon physics, and also with the corresponding ratios of isospin matrix elements of $D$ and $B$ decays. For a review of the $\Delta I=1/2$ rule see e.g. Ref. [@Buras:2014maa]. In kaon physics we consider $K \to\pi\pi$ decays.
Employing an isospin parametrization we have [@Buras:2014maa] $$\begin{aligned} {\mathcal{A}}(K^+\rightarrow \pi^+\pi^0) &= \frac{3}{2} A_2^K e^{i\delta_2^K}\,, \nonumber\\ {\mathcal{A}}(K^0\rightarrow \pi^+\pi^-) &= A_0^K e^{i\delta_0^K} + \sqrt{\frac{1}{2}} A_2^K e^{i\delta_2^K}\,, \nonumber \\ {\mathcal{A}}(K^0\rightarrow \pi^0\pi^0) &= A_0^K e^{i\delta_0^K} - \sqrt{2} A_2^K e^{i\delta_2^K}\,. \label{eq:kaondata}\end{aligned}$$ Note that the strong phases of $A_0^K$ and $A_2^K$ are factored out, so that $A_{0,2}^K$ contain weak phases only. The data give $$\begin{aligned} \left|\frac{A_0^K}{A_2^K}\right| \approx 22.35\,,\qquad \delta_0^K - \delta_2^K = (47.5\pm 0.9)^{\circ}\,, \label{eq:kaon-deltaI12-rule}\end{aligned}$$ see Ref. [@Buras:2014maa] and references therein for more details. $A_{0,2}^K$ have a small imaginary part stemming from the CKM matrix elements only. To a very good approximation the real parts $\mathrm{Re}(A_0^K)$ and $\mathrm{Re}(A_2^K)$ in the $\Delta I=1/2$ rule depend only on the tree operators [@Buras:2015yba; @Kitahara:2016nld] $$\begin{aligned} Q_1 &= (\bar{s}_{\alpha} u_{\beta})_{V-A} (\bar{u}_{\beta} d_{\alpha})_{V-A}\,, \qquad Q_2 = (\bar{s} u)_{V-A} (\bar{u} d)_{V-A}\,. \end{aligned}$$ The lattice results of Refs. [@Bai:2015nea; @Blum:2015ywa; @Boyle:2012ys] point to an emerging physical interpretation of the $\Delta I=1/2$ rule, namely an approximate cancellation of two contributions in $\mathrm{Re}(A_2^K)$, which does not take place in $\mathrm{Re}(A_0^K)$. These two contributions are different color contractions of the same operator. The isospin decompositions of $D\rightarrow \pi\pi$ and $B\rightarrow \pi\pi$ are completely analogous to Eq. (\[eq:kaondata\]). To differentiate the charm and beauty isospin decompositions from the kaon one, we attach the corresponding superscripts to the respective analogous matrix elements. Omitting the superscripts indicates generic formulas that are valid for all three meson systems.
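The decomposition above implies the familiar isospin sum rule $\mathcal{A}(K^0\rightarrow\pi^+\pi^-) - \mathcal{A}(K^0\rightarrow\pi^0\pi^0) = \sqrt{2}\,\mathcal{A}(K^+\rightarrow\pi^+\pi^0)$, and it can be inverted to recover $|A_0/A_2|$ and the phase difference from the measured amplitudes. A sketch with the $K\to\pi\pi$ normalization used here (amplitude magnitudes are illustrative):

```python
import cmath, math

def kaon_amplitudes(A0, d0, A2, d2):
    """Forward isospin decomposition of K -> pi pi, cf. eq:kaondata."""
    a0 = A0 * cmath.exp(1j * d0)
    a2 = A2 * cmath.exp(1j * d2)
    return (1.5 * a2,                      # K+ -> pi+ pi0
            a0 + a2 / math.sqrt(2.0),      # K0 -> pi+ pi-
            a0 - math.sqrt(2.0) * a2)      # K0 -> pi0 pi0

def invert_isospin(amp_pm, amp_00):
    """Recover A0 e^{i d0} and A2 e^{i d2} from the two K0 amplitudes:
    3 a0 = 2 A(pi+pi-) + A(pi0pi0),  (3/sqrt(2)) a2 = A(pi+pi-) - A(pi0pi0)."""
    a0 = (2.0 * amp_pm + amp_00) / 3.0
    a2 = math.sqrt(2.0) * (amp_pm - amp_00) / 3.0
    return a0, a2
```

Feeding in the measured values $|A_0^K/A_2^K|\approx 22.35$ and $\delta_0^K-\delta_2^K = 47.5^{\circ}$ and inverting recovers them exactly, which serves as a consistency check of the decomposition.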
In order to better understand the anatomy of the $\Delta I=1/2$ rule we use again the form $$\begin{aligned} \frac{A_0}{A_2} &= B + C e^{i\delta}\,, \label{eq:DeltaI12-generic}\end{aligned}$$ analogously to Eq. (\[eq:defC\]) in Sec. \[sec:deltau0rule\] for the $\Delta U=0$ rule. Here, $B$ is again the contribution in the limit of no QCD, and $C e^{i\delta}$ contains the corrections to that limit. Now, as discussed in Refs. [@Buras:1988ky; @Buras:2014maa], in the limit of no strong interactions only the $Q_2$ operator contributes in Eq. (\[eq:DeltaI12-generic\]). Note that the operator $Q_1$ is generated only by QCD corrections. When we switch off QCD, the amplitude into neutral pions vanishes and for $K,D,B\rightarrow \pi\pi$ alike we have [@Buras:1988ky; @Buras:2014maa] $$\begin{aligned} B &= \sqrt{2}\,. \label{eq:kaon-deltaI12-rule-no-qcd}\end{aligned}$$ This corresponds to the limit $\tilde{p}_0 = 1$ that we considered in Sec. \[sec:deltau0rule\] for the $\Delta U=0$ rule. The exact numerical value in Eq. (\[eq:kaon-deltaI12-rule-no-qcd\]) of course depends on the convention used for the normalization of $A_{0,2}$ in the isospin decomposition Eq. (\[eq:kaondata\]), where we follow the convention used in the literature. For the isospin decomposition of $D^+\rightarrow \pi^+\pi^0$, $D^0\rightarrow \pi^+\pi^-$ and $D^0\rightarrow \pi^0\pi^0$, we combine the fit results of Ref. [@Franco:2012ck] to get $$\begin{aligned} \left|\frac{A_0^D}{A_2^D}\right| &= 2.47\pm 0.07\,, \qquad \delta_0^D - \delta_2^D = (\pm 93 \pm 3)^{\circ}\,. \label{eq:charm-deltaI12-rule}\end{aligned}$$ Reproducing the $\Delta I=1/2$ rule for charm, Eq. (\[eq:charm-deltaI12-rule\]), is a prime future testing ground for emerging non-perturbative methods [@Khodjamirian:2017zdu]. Very promising steps on a conceptual level are also being taken in lattice QCD [@Hansen:2012tf]. In $K$ and $D$ decays the contribution of penguin operators to $A_0$ is CKM-suppressed, i.e.
to a good approximation $A_0$ is generated from tree operators only. In $B$ decays the situation is more involved because there is no relative hierarchy between the relevant CKM matrix elements. However, one can separate tree and penguin contributions by including the measurements of CP asymmetries within a global fit, as done in Ref. [@Grinstein:2014aza]. From Fig. 3 therein we find for the ratio of matrix elements of tree operators that $$\begin{aligned} \left| \frac{A_0^B}{A_2^B}\right| \sim \sqrt{2}\end{aligned}$$ is well compatible with the data, with the best-fit point having $\vert A_0^B / A_2^B\vert = 1.5$. The fit result for the phase difference $\delta_0^B - \delta_2^B$ is not given in Ref. [@Grinstein:2014aza]. The emerging picture is the following: the $\Delta I=1/2$ rule in $B$ decays is compatible with, or close to, the no-QCD limit. The $\Delta I=1/2$ rule in kaon physics clearly belongs to category (3) of Sec. \[sec:deltau0rule\]. Here, the non-perturbative rescattering affects not only the phases but also the magnitudes of the corresponding matrix elements. Finally, the $\Delta I=1/2$ rule in charm decays is intermediate and shows an $\mathcal{O}(1)$ enhancement, similar to the $\Delta U=0$ rule that we found in Sec. \[sec:deltau0rule\]. We can understand these differences from the different mass scales that govern $K$, $D$ and $B$ decays. Rescattering effects are most important in $K$ decays, less important but still significant in $D$ decays, and small in $B$ decays. Phenomenology of the $\Delta U=0$ vs. $\Delta I=1/2$ rule \[sec:UandIrules\] ============================================================================ An interesting difference between the $\Delta I=1/2$ rule in kaon decays and the $\Delta U=0$ rule in charm decays is their effect on the phenomenology. Large rescattering enhances the CP violation effects in $D$ decays, but it reduces the effect in kaon decays.
The reason for the difference lies in the fact that in kaon decays the SD decay generates only a $u \bar u$ final state, while in charm decays it generates, to a very good approximation, equal amounts of $d \bar d$ and $s \bar s$ states. We write the amplitudes very generally and up to a normalization factor as $${\cal A} = 1 + r a e^{i(\phi+\delta)}\,, \label{eq:generic-ampl}$$ such that $r$ is real and depends on CKM matrix elements, $a$ is real and corresponds to the ratio of the respective hadronic matrix elements, $\phi$ is a weak phase and $\delta$ is a strong phase. For kaons $a$ is the ratio of matrix elements of the operators $Q^{\Delta I=1/2}$ over $Q^{\Delta I=3/2}$, while for charm it is the ratio of matrix elements of the operators $Q^{\Delta U=0}$ over $Q^{\Delta U=1}$. We first consider the case where we neglect the third generation. In that limit we have for kaons the decomposition $${\cal A}_K = V_{us}V_{ud}^* (A_{1/2} + r_{CG} A_{3/2})\,,$$ where $r_{CG}$ is the Clebsch-Gordan coefficient that can be read off Eq. (\[eq:kaondata\]). For charm we have $${\cal A}_D = V_{cs}V_{us}^* A_1.$$ That means that in the two-generation limit we have $r=1$ for kaons and $r=0$ for charm. If we switch on the third generation we get small corrections to these values in each case: $r\ll 1$ for charm and $|r-1|\ll 1$ for kaons. These effects come from the non-unitarity of the $2 \times 2$ CKM submatrix. For the kaon case there is an extra effect that stems from SD penguins that come with $V_{ts}V_{td}^*$. In both cases we have $\delta \sim \mathcal{O}(1)$ from non-perturbative rescattering, as well as $\phi \sim \mathcal{O}(1)$.
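The generic parametrization of Eq. (\[eq:generic-ampl\]) makes the enhancement-versus-suppression pattern easy to check numerically: building the amplitude and its CP conjugate (obtained by flipping the sign of the weak phase $\phi$) and forming the rate asymmetry directly shows the opposite behavior of the charm-like ($ra \ll 1$) and kaon-like ($ra \gg 1$) regimes. The parameter values below are purely illustrative.

```python
import cmath, math

def direct_cp_asymmetry(r, a, phi, delta):
    """A_CP from A = 1 + r a e^{i(phi+delta)} and its CP conjugate,
    which carries phi -> -phi while the strong phase delta is unchanged."""
    amp = 1.0 + r * a * cmath.exp(1j * (phi + delta))
    amp_bar = 1.0 + r * a * cmath.exp(1j * (-phi + delta))
    num = abs(amp)**2 - abs(amp_bar)**2
    den = abs(amp)**2 + abs(amp_bar)**2
    return num / den
```

For $ra \ll 1$ this reduces to $-2ra\sin\delta\sin\phi$, so increasing $a$ enhances the asymmetry; for $ra \gg 1$ the denominator grows like $(ra)^2$ and increasing $a$ suppresses it.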
The general formula for the direct CP asymmetry is [@Tanabashi:2018oca] $$\begin{aligned} A_{CP} &= -\frac{2 r a \sin(\delta) \sin(\phi) }{ 1 + (ra)^2+ 2 ra \cos(\delta) \cos(\phi) } \approx \begin{cases} -2 r a \sin(\delta) \sin(\phi)&{\mbox{for $ra \ll 1$}},\\ -2 (r a)^{-1} \sin(\delta) \sin(\phi)&{\mbox{for $ra \gg 1$}}.\\ \end{cases} \label{eq:CPasym-general-formula} \end{aligned}$$ Non-perturbative effects enhance $a$ in both kaon and charm decays. This means that the effect visible in the CP asymmetry depends on the value of $r$. For $r a\ll 1$, increasing $a$ enhances the CP asymmetry, while for $r a \gg 1$ it suppresses it. These two cases correspond to the charm and kaon cases, respectively. It follows that the $\Delta I=1/2$ rule in kaons reduces CP-violating effects, while the $\Delta U=0$ rule in charm enhances them. Conclusions \[sec:conclusions\] =============================== From the recent determination of $\Delta a_{CP}^{\mathrm{dir}}$ we derive the ratio of $\Delta U=0$ over $\Delta U=1$ amplitudes as $$\begin{aligned} \vert \tilde{p}_0\vert \sin(\delta_{\mathrm{strong}}) &= 0.65 \pm 0.12\,.\label{eq:mainresult-conclusion}\end{aligned}$$ In principle, two options could explain this result: in the perturbative picture, beyond the SM (BSM) physics is necessary to explain Eq. (\[eq:mainresult-conclusion\]). In the SM picture, on the other hand, we find that all that is required to explain the result is a mild non-perturbative enhancement due to rescattering effects. Therefore, it is hard to argue that BSM physics is required. Our interpretation of the result is that the measurement of $\Delta a_{CP}^{\mathrm{dir}}$ provides a proof for the $\Delta U=0$ rule in charm. The enhancement of the $\Delta U=0$ amplitude is not as significant as the one present in the $\Delta I=1/2$ rule for kaons.
In the future, with more information on the strong phase of $\tilde{p}_0$ from time-dependent measurements or measurements of correlated $D^0\overline{D}^0$ decays, we will be able to completely determine the extent of the $\Delta U=0$ rule. Interpreting the result within the SM implies that we expect a moderate non-perturbative effect and nominal $SU(3)_F$ breaking. The former fact implies that we expect U-spin invariant strong phases to be $\mathcal{O}(1)$. The latter implies that we anticipate the yet to be determined $SU(3)_F$ breaking effects not to be large. Thus, there are two qualitative predictions we can make $$\begin{aligned} \delta_{\mathrm{strong}}\sim \mathcal{O}(1), \qquad a_{CP}^{\mathrm{dir}} (D^0\rightarrow K^+K^-) \approx -a_{CP}^{\mathrm{dir}}(D^0\rightarrow \pi^+\pi^-)\,.\end{aligned}$$ Verifying these predictions will make the SM interpretation of the data more solid. We thank Alex Kagan, Yossi Nir, Giovanni Punzi and Alan Schwartz for useful discussions. The work of YG is supported in part by the NSF grant PHY1316222. SS is supported by a DFG Forschungsstipendium under contract no. SCHA 2125/1-1.
--- abstract: 'In this work, we consider the inverse problem of reconstructing the internal structure of an object from limited x-ray projections. We use a Gaussian process prior to model the target function and estimate its (hyper)parameters from measured data. In contrast to other established methods, this comes with the advantage of not requiring any manual parameter tuning, which usually arises in classical regularization strategies. Our method uses a basis function expansion technique for the Gaussian process which significantly reduces the computational complexity and avoids the need for numerical integration. The approach also allows for reformulation of some classical regularization methods, such as Laplacian and Tikhonov regularization, as Gaussian process regression, and hence provides an efficient algorithm and principled means for their parameter tuning. Results from simulated and real data indicate that this approach is less sensitive to streak artifacts as compared to the commonly used method of filtered backprojection.' address: | $^1$Department of Electrical Engineering and Automation, Aalto University, Finland\ $^2$Department of Information Technology, Uppsala University, Sweden\ author: - 'Zenith Purisha$^1$, Carl Jidling$^2$, Niklas Wahlstr[ö]{}m$^2$, Thomas B. Sch[ö]{}n$^2$, Simo S[ä]{}rkk[ä]{}$^1$' bibliography: - 'sample.bib' title: 'Probabilistic approach to limited-data computed tomography reconstruction' --- 2019 [*Keywords*]{}: computed tomography; limited data; probabilistic method; Gaussian process; Markov chain Monte Carlo Introduction ============ X-ray computed tomography (CT) imaging is a non-invasive method to recover the internal structure of an object by collecting projection data from multiple angles. The projection data is recorded by a detector array and it represents the attenuation of the x-rays which are transmitted through the object. 
Since the 1960s, CT has been applied in a wide range of applications in medicine [@cormack1963representation; @cormack1964representation; @herman1979image; @kuchment2013radon; @national1996mathematics; @shepp1978computerized] and industry [@akin2003computed; @cartz1995nondestructive; @de2014industrial]. Currently, the so-called filtered back projection (FBP) is the reconstruction algorithm of choice because it is very fast [@avinash1988principles; @buzug2008computed]. This method requires dense sampling of the projection data to obtain a satisfying image reconstruction. However, for some decades, the limited-data x-ray tomography problem has been a major concern in, for instance, the medical imaging community. The limited data case—also referred to as [*sparse projections*]{}—calls for a good solution for several important reasons, including: - the need to examine a patient using low radiation doses to reduce the risk of malignancy, or to examine [*in vivo*]{} samples to avoid the modification of the properties of living tissues, - geometric restrictions in the measurement setting that make it difficult to acquire the complete data [@riis2018limited], such as in [*mammography*]{} [@niklason1997digital; @rantala2006wavelet; @wu2003tomographic; @zhang2006comparative] and electron imaging [@fanelli2008electron], - the high demand to obtain the data using short acquisition times and to avoid massive memory storage, and - the need to avoid—or at least minimize the impact of—motion artifacts during the acquisition. Classical algorithms—such as FBP—fail to generate good image reconstruction when dense sampling is not possible and we only have access to limited data. The under-sampling of the projection data makes the image reconstruction (in classical terms) an [*ill-posed*]{} problem [@natterer1986mathematics]. In other words, the inverse problem is sensitive to measurement noise and modeling errors. Hence, alternative and more powerful methods are required. 
Statistical estimation methods play an important role in handling the ill-posedness of the problem by restating the inverse problem as a [*well-posed extension*]{} in a larger space of probability distributions [@kaipio2006statistical]. Over the years there has been a lot of work on tomographic reconstruction from limited data using statistical methods (see, e.g., [@rantala2006wavelet; @bouman1996unified; @haario2017shape; @kolehmainen2003statistical; @siltanen2003statistical; @sauer1994bayesian]). In the statistical approach, incorporation of [*a priori*]{} knowledge is a crucial part in improving the quality of the image reconstructed from limited projection data. The prior can be viewed as the counterpart of the regularization term in classical regularization methods. However, statistical methods, unlike classical regularization methods, also provide a principled means to estimate the parameters of the prior (i.e., the hyperparameters), which corresponds to automatic tuning of regularization parameters. In our work we build the statistical model by using a Gaussian process model [@Rasmussen2006] with a hierarchical prior in which the (hyper)parameters in the prior become part of the inference problem. As this kind of hierarchical prior can be seen as an instance of a Gaussian process (GP) regression model, the computational methods developed for GP regression in the machine learning context [@Rasmussen2006] become applicable. It is worth noting that some works on employing GP methods for tomographic problems have also appeared before. An iterative algorithm to compute a maximum likelihood point in which the prior information is represented by a GP is introduced in [@tarantola2005inverse]. In [@hendriks2018implementation; @jidling2018probabilistic], tomographic reconstruction using GPs to model the strain field from neutron Bragg-edge measurements has been studied. 
Tomographic inversion using GPs for plasma fusion and soft x-ray tomography has been carried out in [@li2013bayesian; @svensson2011non]. Nevertheless, the proposed approach is different from the existing work. Our aim is to employ a hierarchical Gaussian process regression model to reconstruct the x-ray tomographic image from limited projection data. Due to the measurement model involving line integral computations, the direct GP approach does not allow for closed form expressions. The first contribution of this article is to overcome this issue by employing the basis function expansion method proposed in [@SolinSarkka2015], which makes the line integral computations tractable as it detaches the integrals from the model parameters. This approach can be directly used for common GP regression covariance functions such as Matérn or squared exponential. The second contribution of this article is to point out that we can also reformulate classical regularization, in particular Laplacian and Tikhonov regularization, as Gaussian process regression where only the spectral density of the process (although not the covariance function itself) is well defined. As the basis function expansion only requires the availability of the spectral density, we can also build a hierarchical model on top of a classical regularization model and obtain a principled means to tune the regularization parameters. Finally, the third contribution is to present methods for hyperparameter estimation that arise from the machine learning literature and apply the methodology to the tomographic reconstruction problem. In particular, the proposed methods are applied to simulated 2D chest phantom data available in <span style="font-variant:small-caps;">Matlab</span> and real carved cheese data measured with a $\mu$CT system. 
The results show that the reconstruction images created using the proposed GP method outperform the FBP reconstructions in terms of image quality measured as relative error and as peak signal to noise ratio. Constructing the model ====================== The tomographic measurement data -------------------------------- Consider a physical domain $\Omega \subset {{\mathbb R}}^2$ and an attenuation function $f:\Omega\rightarrow{{\mathbb R}}$. The x-rays travel through $\Omega$ along straight lines and we assume that the initial intensity (photons) of the x-ray is $I_0$ and the exiting x-ray intensity is $I_d$. If we denote a ray through the object as a function $s \mapsto (x_1(s),x_2(s))$, then the formula for the intensity loss of the x-ray within a small distance $ds$ is given as: $$\label{calibration1} \frac{dI(s)}{I(s)}= -f(x_1(s),x_2(s)) ds,$$ and by integrating both sides of (\[calibration1\]), the following relationship is obtained $$\label{calibration2} \int_{-R}^{R} f(x_1(s),x_2(s)) ds = \log\frac{I_0}{I_d},$$ where $R$ is the radius of the object or area being examined. In x-ray tomographic imaging, the aim is to reconstruct $f$ using measurement data collected from the intensities $I_d$ of x-rays for all lines through the object taken from different angles of view. The problem can be formulated using the Radon transform, defined as $$\label{Measurement Model} \mathcal{R} f(r,\theta) = \int f(x_1,x_2) d{\mathbf x}_L, $$ where $d{\mathbf x}_L$ denotes the $1$-dimensional Lebesgue measure along the line defined by $L=\{(x_1,x_2) \in {{\mathbb R}}^2 : x_1\cos \theta + x_2\sin\theta = r \}$, where $\theta\in[0,\pi)$ is the angle and $r\in{{\mathbb R}}$ is the distance of $L$ from the origin as shown in Figure \[Radon transform\]. ![An illustration of the Radon transform. It maps the object $f$ on the $(x_1,x_2)$-domain into $\mathcal{R}f$ on the $(r,\theta)$ domain. The measurement data is collected from the intensities $I_d$ of x-rays for all lines $L$ through the object $f(x_1,x_2)$ and from different angles of view. []{data-label="Radon transform"}](figure1 "fig:"){width="10cm"} The parametrization of the straight line $L$ with respect to the arc length $s$ can be written as: $$\begin{split} x_1(s,\theta,r) &= r \, \cos(\theta) - s \, \sin(\theta), \\ x_2(s,\theta,r) &= r \, \sin(\theta) + s \, \cos(\theta). \\ \end{split} \label{eq:xystr}$$ In this work, the object is placed inside a circular disk with radius $R$. Then, as a function of $r$ and $\theta$ the line integral in (\[Measurement Model\]) can be written as $$\label{Radon} \begin{split} \mathcal{R} f(r,\theta) &= \int_{-R}^{R} f(x_1(s,\theta,r),x_2(s,\theta,r)) \, ds \\ &= \int_{-R}^R f({\mathbf x}^0+s\hat{{\mathbf u}}) ds, \end{split}$$ where $$\begin{aligned} {\mathbf x}^0 = \begin{bmatrix} r\cos(\theta) & r\sin(\theta) \end{bmatrix}^{\mathsf{T}}, \qquad \hat{{\mathbf u}} = \begin{bmatrix} -\sin(\theta) & \cos(\theta) \end{bmatrix}^{\mathsf{T}}.\end{aligned}$$ In a real x-ray tomography application, the measurement is corrupted by at least two noise types: photon statistics and electronic noise. In x-ray imaging, a massive number of photons are usually recorded at each detector pixel. In such a case, a Gaussian approximation for the attenuation data in $\eqref{calibration2}$ can be used [@bouman1992generalized; @sachs19993d]. Recall that a logarithm of the intensity is involved in $\eqref{Radon}$, and so additive noise is a reasonable model for the electronic noise. We collect a set of measurements as $$\label{noisy measurement} y_i = \int_{-R}^R f({\mathbf x}_i^0+s\hat{{\mathbf u}}_i) ds + \varepsilon_i,$$ where $i$ corresponds to the data point index. The corresponding inverse problem is: given the noisy measurement data $\{y_i\}_{i=1}^n$ in (\[noisy measurement\]), reconstruct the object $f$. 
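To make the measurement model concrete, the following sketch evaluates a single line integral of the form (\[Radon\]) numerically, using the arc-length parametrization (\[eq:xystr\]). It is only an illustration (the paper's own implementation is in <span style="font-variant:small-caps;">Matlab</span>); the phantom here is a unit disc, for which the exact Radon transform is the chord length $2\sqrt{1-r^2}$, independent of $\theta$.

```python
import numpy as np

def radon_line_integral(f, r, theta, R=2.0, n=4001):
    # Parametrize the ray as in Eq. (xystr):
    #   x1(s) = r cos(theta) - s sin(theta)
    #   x2(s) = r sin(theta) + s cos(theta)
    s = np.linspace(-R, R, n)
    x1 = r * np.cos(theta) - s * np.sin(theta)
    x2 = r * np.sin(theta) + s * np.cos(theta)
    vals = f(x1, x2)
    ds = s[1] - s[0]
    # trapezoidal rule for the integral over s in [-R, R]
    return ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# Unit-disc phantom: f = 1 inside the unit circle, 0 outside.  The exact
# value of the line integral is the chord length 2*sqrt(1 - r^2).
def disc(x1, x2):
    return (x1**2 + x2**2 <= 1.0).astype(float)

val = radon_line_integral(disc, r=0.5, theta=0.3)   # exact: 2*sqrt(0.75)
```

In an actual measurement setup this computation is repeated for every ray (each pair $(r_i,\theta_i)$), which yields the data vector $\{y_i\}$ after noise is added.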
Gaussian processes as functional priors {#functional priors} --------------------------------------- A Gaussian process (GP) [@Rasmussen2006] can be viewed as a distribution over functions, where the function value in each point is treated as a Gaussian random variable. To denote that the function $f$ is modeled as a GP, we formally write $$\begin{aligned} \label{eq:fGP} f({\mathbf x}) \sim {\GP \left(m({\mathbf x}),\,\, k({\mathbf x},{\mathbf x}') \right)}.\end{aligned}$$ The GP is uniquely specified by the *mean function* $m({\mathbf x})=\mathbb{E}[f({\mathbf x})]$ and the *covariance function* $k({\mathbf x},{\mathbf x}')=\mathbb{E}[(f({\mathbf x})-m({\mathbf x}))(f({\mathbf x}')-m({\mathbf x}'))]$. The mean function encodes our prior belief of the value of $f$ in any point. In the absence of better knowledge it is common to pick $m({\mathbf x})=0$, a choice that we also adopt in this paper. The covariance function on the other hand describes the covariance between two different function values $f({\mathbf x})$ and $f({\mathbf x}')$. The choice of covariance function is the most important part in the GP model, as it stipulates the properties assigned to $f$. A few different options are discussed in Section \[sec:cov\_func\]. As data is collected our belief about $f$ is updated. The aim of regression is to predict the function value $f({\mathbf x}_*)$ at an unseen test point ${\mathbf x}_*$ by conditioning on the seen data. Consider direct function measurements of the form $$y_i=f({\mathbf x}_i)+\varepsilon_i,$$ where $\varepsilon_i$ is independent and identically distributed (iid) Gaussian noise with variance $\sigma^2$, that is, $\varepsilon_i\sim{\mathcal{N}}(0,\sigma^2)$. Let the measurements be stored in the vector ${\mathbf y}$. 
Then the mean value and the variance of the predictive distribution $p(f({\mathbf x}_*) \mid {\mathbf y})$ are given by [@Rasmussen2006] \[eq:GPreg\] $$\begin{aligned} \mathbb{E}[f({\mathbf x}_*) \mid {\mathbf y}] &= {\mathbf k}_*^{\mathsf{T}}(K +\sigma^2I)^{-1}{\mathbf y}, \\ \mathbb{V}[f({\mathbf x}_*) \mid {\mathbf y}] &= k({\mathbf x}_*,{\mathbf x}_*)- {\mathbf k}_*^{\mathsf{T}}(K +\sigma^2I)^{-1}{\mathbf k}_*. \end{aligned}$$ Here the vector ${\mathbf k}_*$ contains the covariances between $f({\mathbf x}_*)$ and each measurement while the matrix $K$ contains the covariances between all pairs of measurements, such that $$\begin{aligned} ({\mathbf k}_*)_i &= k({\mathbf x}_i,{\mathbf x}_*), \\ K_{ij} &= k({\mathbf x}_i,{\mathbf x}_j).\end{aligned}$$ An example of GP regression for a two-dimensional input is given in Figure \[fig:se\_ill\_post\]. The red stars indicate the measurements, while the shaded surface is the GP prediction. The blue line highlights a slice of the plot that is shown explicitly to the right, including the $95\%$ credibility region. ![Left: GP prediction (shaded surface) obtained from the measurements (red stars, also indicated by their deviation from the prediction). Right: slice plot of the blue line in the left figure, including the $95\%$ credibility region.[]{data-label="fig:se_ill_post"}](gp_ill.eps){width="\textwidth"} The Gaussian process for x-ray tomography {#GP Xray} ----------------------------------------- In this section, we show how to apply the functional priors presented in Section \[functional priors\] to the x-ray tomography application. Since the x-ray measurements are line integrals of the unknown function $f({\mathbf x})$, they are linear functionals of the Gaussian process. 
Hence, we can define a linear functional $\mathcal{H}_{{\mathbf x},i}$ as follows: $$\begin{aligned} \mathcal{H}_{{\mathbf x},i} f({\mathbf x}) = \int_{-R}^R f({\mathbf x}^0_i+s\hat{{\mathbf u}}_i) ds,\end{aligned}$$ and thus the GP regression problem becomes \[eq:invprob\] $$\begin{aligned} f({\mathbf x}) &\sim {\GP \left(m({\mathbf x}),\,\, k({\mathbf x},{\mathbf x}') \right)}, \\ y_i &= \mathcal{H}_{{\mathbf x},i} f({\mathbf x})+\varepsilon_i. \end{aligned}$$ As discussed, for example, in [@Sarkka:2011; @SolinSarkka2015] the GP regression equations can be extended to this kind of model, which in this case leads to the following: \[eq:GPregH\] $$\begin{aligned} \mathbb{E}[f({\mathbf x}_*)|{\mathbf y}] &= {\mathbf q}_*^{\mathsf{T}}(Q +\sigma^2I)^{-1}{\mathbf y}, \\ \mathbb{V}[f({\mathbf x}_*)|{\mathbf y}] &= k({\mathbf x}_*,{\mathbf x}_*)- {\mathbf q}_*^{\mathsf{T}}(Q +\sigma^2I)^{-1}{\mathbf q}_*, \end{aligned}$$ where $\mathbf{y} = \begin{bmatrix} y_1 & \cdots & y_n \end{bmatrix}^{\mathsf{T}}$ and $$\begin{aligned} \label{eq:crosscov} ({\mathbf q}_*)_i &= \int_{-R}^R k({\mathbf x}_i^0+s\hat{{\mathbf u}}_i,{\mathbf x}_*) ds, \\ \label{eq:Gram} Q_{ij} &= \int_{-R}^R\int_{-R}^R k({\mathbf x}_i^0+s\hat{{\mathbf u}}_i,{\mathbf x}_j^0+s'\hat{{\mathbf u}}_j) ds ds'.\end{aligned}$$ In general we cannot expect closed form solutions to (\[eq:crosscov\])–(\[eq:Gram\]), and numerical computations are then required. However, even with efficient numerical methods, the process of selecting the hyperparameters is tedious since the hyperparameters are in general not decoupled from the integrand and the integrals need to be computed repeatedly in several iterations. In this paper, we avoid this by using the basis function expansion that will be described in Section \[sec:approx\]. 
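The structure of this modified regression problem can be illustrated with a small one-dimensional analogue, where the measurement functionals are interval integrals and the quantities corresponding to (\[eq:crosscov\]) and (\[eq:Gram\]) are computed by brute-force quadrature. This is only a sketch (the paper works in 2D and in <span style="font-variant:small-caps;">Matlab</span>, and avoids exactly this repeated quadrature via the basis expansion of Section \[sec:approx\]); all numerical values are illustrative.

```python
import numpy as np

def k_se(x, xp, sigma_f=1.0, l=1.0):
    # squared exponential covariance, one-dimensional inputs
    return sigma_f**2 * np.exp(-0.5 * (x - xp) ** 2 / l**2)

# 1D analogue of the line-integral data: y_i = integral of f over [a_i, b_i],
# with f_true = sin, so y_i = cos(a_i) - cos(b_i) (noise omitted).
edges = np.linspace(0.0, 2.0 * np.pi, 9)
a, b = edges[:-1], edges[1:]
y = np.cos(a) - np.cos(b)
m = a.size

# Midpoint-rule nodes and weights for each interval.
nq = 60
h = (b - a) / nq                                   # quadrature weights
nodes = a[:, None] + (np.arange(nq) + 0.5) * h[:, None]

# Gram matrix Q_ij and cross-covariances q_*(x), the 1D counterparts of
# (eq:Gram) and (eq:crosscov), evaluated by (double) quadrature.
Q = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        Q[i, j] = h[i] * h[j] * np.sum(k_se(nodes[i][:, None], nodes[j][None, :]))

sigma = 1e-3                                       # small observation noise
alpha = np.linalg.solve(Q + sigma**2 * np.eye(m), y)

def posterior_mean(x):
    q = np.array([h[i] * np.sum(k_se(nodes[i], x)) for i in range(m)])
    return q @ alpha

# Consistency check: integrating the posterior mean over each interval
# should reproduce the integral observations (up to the noise variance).
recon = np.array([h[j] * sum(posterior_mean(x) for x in nodes[j]) for j in range(m)])
```

Note that every hyperparameter update would require recomputing all the entries of $Q$, which is what makes this direct approach expensive in practice.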
Squared exponential and Matérn covariance functions {#sec:cov_func} --------------------------------------------------- An important modeling parameter in Gaussian process regression is the covariance function $k({\mathbf x},{\mathbf x}')$ which can be selected in various ways. Because the basis function expansion described in Section \[sec:approx\] requires the covariance function to be *stationary*, we here limit our discussion to covariance functions of this form. *Stationarity* means that $k({\mathbf x},{\mathbf x}')=k({\boldsymbol{\mathrm{r}}})$ where ${\boldsymbol{\mathrm{r}}}={\mathbf x}-{\mathbf x}'$, so the covariance only depends on the distance between the input points. In that case we can also work with the spectral density, which is the Fourier transform of the stationary covariance function $$\label{eq:fourier} S({\boldsymbol{\omega}}) = \mathcal{F}[k] = \int k({\boldsymbol{\mathrm{r}}})e^{-\text{i}{\boldsymbol{\omega}}^{\mathsf{T}}{\boldsymbol{\mathrm{r}}}}d{\boldsymbol{\mathrm{r}}},$$ where again ${\boldsymbol{\mathrm{r}}}={\mathbf x}-{\mathbf x}'$. Perhaps the most commonly used covariance function in the machine learning context [@Rasmussen2006] is the *squared exponential* (SE) covariance function $$\begin{aligned} \label{eq:SE} k_{\textrm{SE}}({\boldsymbol{\mathrm{r}}}) &= \sigma_f^2\exp\left[ -\frac{1}{2l^2}\|{\boldsymbol{\mathrm{r}}}\|_2^2 \right],\end{aligned}$$ which has the following spectral density $$\begin{aligned} \label{eq:SES} S_\textrm{SE}({\boldsymbol{\omega}})&=\sigma_f^2 (2\pi)^{d/2} l^d \exp\left[ -\frac{l^2 \| {\boldsymbol{\omega}} \|_2^2}{2} \right],\end{aligned}$$ where $d$ is the dimensionality of ${\mathbf x}$ (in our case $d=2$). The SE covariance function is characterized by the magnitude parameter $\sigma_f$ and the *length scale* $l$. The squared exponential covariance function is popular due to its simplicity and ease of implementation. 
It corresponds to a process whose sample paths are infinitely differentiable and thus the functions modeled by it are very smooth. Another common family of covariance functions is given by the Matérn class $$\begin{aligned} k_{\textrm{Matern}}({\boldsymbol{\mathrm{r}}})&= \sigma_f^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\|{\boldsymbol{\mathrm{r}}}\|_2}{l} \right)^\nu K_\nu\left( \frac{\sqrt{2\nu}\|{\boldsymbol{\mathrm{r}}}\|_2}{l} \right), \\ S_{\textrm{Matern}}({\boldsymbol{\omega}})&= \sigma_f^2\frac{2^d \pi^{d/2}\Gamma(\nu+d/2)(2\nu)^\nu}{\Gamma(\nu)l^{2\nu}} \left( \frac{2\nu}{l^2} + \| {\boldsymbol{\omega}} \|_2^2 \right)^{-(\nu+d/2)},\end{aligned}$$ where $K_\nu$ is a modified Bessel function [@Rasmussen2006]. The smoothness of the process is increased with the parameter $\nu$: in the limit $\nu\rightarrow\infty$ we recover the squared exponential covariance function. Gaussian processes are also closely connected to classical spline smoothing [@kimeldorf1970] as well as other classical regularization methods [@kaipio2006statistical; @mueller2012linear] for inverse problems. Although the construction of the corresponding covariance function is hard (or impossible), it is still possible to construct the corresponding spectral density in many cases. With these spectral densities and the basis function method of Section \[sec:approx\], we can construct probabilistic versions of the classical regularization methods as discussed in the next section. Covariance functions arising from classical regularization ---------------------------------------------------------- Let us recall that a classical way to seek solutions to inverse problems is via optimization of a functional of the form $$\mathcal{J}[f] = \frac{1}{2 \sigma^2} \sum_i (y_i - \mathcal{H}_{{\mathbf x},i} f({\mathbf x}))^2 + \frac{1}{2 \sigma_f^2} \int | \mathcal{L} f({\mathbf x}) |^2 \, d{\mathbf x},$$ where $\mathcal{L}$ is a linear operator. 
This is equivalent to a Gaussian process regression problem, where the covariance operator is formally chosen to be $\mathcal{K} = [\mathcal{L}^* \mathcal{L}]^{-1}$. In (classical) Tikhonov regularization we have $\mathcal{L} = \mathcal{I}$ (identity operator) which corresponds to penalizing the norm of the solution. Another option is to penalize the Laplacian which gives $\mathcal{L} = \nabla^2$. Although the kernel of this covariance operator is ill-defined with the classical choices of $\mathcal{L}$ and thus it is not possible to form the corresponding covariance function, we can still compute the corresponding spectral density function by computing the Fourier transform of $\mathcal{L}^* \mathcal{L}$ and then inverting it to form the spectral density: $$S({\boldsymbol{\omega}}) = \frac{\sigma_f^2}{\mathcal{F}[\mathcal{L}^* \mathcal{L}]}.$$ In particular, the minimum norm or (classical) Tikhonov regularization can be recovered by using a white noise prior which is given by the constant spectral density $$\label{eq:STikhonov} S_{\textrm{Tikhonov}}({\boldsymbol{\omega}}) = \sigma_f^2,$$ where $\sigma_f$ is a scaling parameter. Another interesting case is the Laplacian operator based regularization which corresponds to $$\label{eq:SLaplacian} S_{\textrm{Laplacian}}({\boldsymbol{\omega}}) = \frac{\sigma_f^2}{\| {\boldsymbol{\omega}} \|^4_2}.$$ It is useful to note that the latter spectral density corresponds to an $l \to \infty$ limit of the Matérn covariance function with $\nu + d/2 = 2$ and the white noise to $l \to 0$ in either the SE or the Matérn covariance functions. The covariance functions corresponding to the above spectral densities would be degenerate, but this does not prevent us from using the spectral densities in the basis function expansion method described in Section \[sec:approx\] as the method only requires the availability of the spectral density. 
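The Fourier-transform relation (\[eq:fourier\]) between a covariance function and its spectral density can be verified numerically. The following sketch (illustrative only, not part of the paper) checks the SE pair (\[eq:SE\]) and (\[eq:SES\]) in one dimension ($d=1$); the parameter values are arbitrary choices.

```python
import numpy as np

def k_se(r, sigma_f, l):
    # Eq. (eq:SE) in one dimension
    return sigma_f**2 * np.exp(-0.5 * r**2 / l**2)

def s_se(omega, sigma_f, l, d=1):
    # Eq. (eq:SES)
    return sigma_f**2 * (2.0 * np.pi) ** (d / 2) * l**d * np.exp(-0.5 * l**2 * omega**2)

sigma_f, l, omega = 1.3, 0.7, 2.0
r = np.linspace(-10 * l, 10 * l, 20001)
dr = r[1] - r[0]
# S(omega) = integral of k(r) exp(-i omega r) dr; the imaginary part
# vanishes because k is even, so a cosine transform suffices.
s_numeric = np.sum(k_se(r, sigma_f, l) * np.cos(omega * r)) * dr
```

The same numerical check applies to any of the stationary covariance functions above; for the spectral densities (\[eq:STikhonov\]) and (\[eq:SLaplacian\]) only the forward direction is available, since the corresponding covariance functions are degenerate.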
Basis function expansion {#sec:approx} ------------------------ To overcome the computational burden described in Section \[GP Xray\], we consider the approximation method proposed in [@SolinSarkka2015], which relies on the following truncated basis function expansion $$\begin{aligned} \label{eq:BFE} k({\mathbf x},{\mathbf x}')\approx \sum_{i=1}^{m}S(\sqrt{\lambda_i})\phi_i({\mathbf x})\phi_i({\mathbf x}'), \end{aligned}$$ where $S$ denotes the spectral density of the covariance function, and $m$ is the truncation number. The basis functions $\phi_i({\mathbf x})$ and eigenvalues ${\lambda}_i$ are obtained from the solution to the Laplace eigenvalue problem on the domain $\Omega$ $$\begin{aligned} \label{eq:lapl_eig} \begin{cases} \hspace{-4mm} \begin{split} -\Delta\phi_i({\mathbf x})& ={\lambda}_i\phi_i({\mathbf x}), \\ \phi_i({\mathbf x})& =0, \end{split} \end{cases} \quad \begin{split} {\mathbf x}&\in\Omega, \\ {\mathbf x}&\in\partial\Omega. \end{split}\end{aligned}$$ In two dimensions with $\Omega=[-L_1,L_1]\times[-L_2,L_2]$ we introduce the positive integers $i_1\le m_1$ and $i_2\le m_2$. The number of basis functions is then $m=m_1m_2$ and the solution to (\[eq:lapl\_eig\]) is given by $$\begin{aligned} \phi_i({\mathbf x}) &= \frac{1}{\sqrt{L_1L_2}}\sin\big(\varphi_{i_1}(x_1+L_1)\big)\sin\big(\varphi_{i_2}(x_2+L_2)\big) , \\ \lambda_i &= \varphi_{i_1}^2+\varphi_{i_2}^2, \quad \varphi_{i_1}=\frac{\pi i_1}{2L_1}, \quad \varphi_{i_2}=\frac{\pi i_2}{2L_2}, \end{aligned}$$ where $i=i_1+m_1(i_2-1)$. Let us now build the vector ${\boldsymbol\phi}_*\in{{\mathbb R}}^{m\times1}$, the matrix $\Phi\in{{\mathbb R}}^{m\times n}$ and the diagonal matrix $\Lambda\in{{\mathbb R}}^{m\times m}$ as $$\begin{aligned} ({\boldsymbol\phi}_*)_i&=\phi_i({\mathbf x}_*), \\ \label{eq:Phi_entry} \Phi_{ij} &= \int_{-R}^R \phi_i({\mathbf x}^0_j+s\hat{{\mathbf u}}_j)ds, \\ \Lambda_{ii} &= S(\sqrt{\lambda_i}). 
\end{aligned}$$ The entries $\Phi_{ij}$ can be computed in closed form with details given in \[app:compdet\]. Now we substitute $Q\approx\Phi^{\mathsf{T}}\Lambda\Phi$ and ${\mathbf q}_*\approx\Phi^{\mathsf{T}}\Lambda{\boldsymbol\phi}_*$ to obtain \[eq:pred\_appr2\] $$\begin{aligned} \mathbb{E}[f({\mathbf x}_*) \mid {\mathbf y}] &\approx{\boldsymbol\phi}_{*}^{\mathsf{T}}\Lambda \Phi (\Phi^{\mathsf{T}}\Lambda \Phi + \sigma^2 I)^{-1}{\mathbf y}, \\ \mathbb{V}[f({\mathbf x}_*) \mid {\mathbf y}] &\approx {\boldsymbol\phi}_{*}^{\mathsf{T}}\Lambda {\boldsymbol\phi}_{*} - {\boldsymbol\phi}_{*}^{\mathsf{T}}\Lambda \Phi (\Phi^{\mathsf{T}}\Lambda \Phi + \sigma^2I)^{-1} \Phi^{\mathsf{T}}\Lambda {\boldsymbol\phi}_{*}.\end{aligned}$$ When using the spectral densities corresponding to the classical regularization methods in (\[eq:STikhonov\]) and (\[eq:SLaplacian\]), the mean equation reduces to the classical solution (on the given basis). However, for the classical regularization methods we can also compute the variance function, which gives an uncertainty estimate for the solution that is not available in the classical formulation. Furthermore, the hyperparameter estimation methods outlined in the next section provide a principled means to estimate the parameters also in the classical regularization methods. Hyperparameter estimation ========================= In this section, we will consider some methods for estimating the *hyperparameters*. The free parameters of the covariance function, for example, the parameters $\sigma_f$ and $l$ in the squared exponential covariance function, are together with the noise parameter $\sigma$ referred to as the hyperparameters of the model. In this work, we employ a Bayesian approach to estimate the hyperparameters, and comparisons with standard parameter estimation methods such as L-curve and cross-validation methods are given as well. 
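The quality of the truncated expansion (\[eq:BFE\]) of the previous section is easy to verify numerically. The sketch below (illustrative; the paper uses <span style="font-variant:small-caps;">Matlab</span>) approximates an SE kernel with $\sigma_f=l=1$ on $\Omega=[-3,3]^2$ using the Dirichlet-Laplacian eigenpairs given above and compares the result with the exact kernel at two interior points; the domain size, truncation numbers, and test points are arbitrary choices.

```python
import numpy as np

L1 = L2 = 3.0          # domain [-L1, L1] x [-L2, L2]
m1 = m2 = 32           # truncation per dimension, m = m1*m2 basis functions

i1, i2 = np.meshgrid(np.arange(1, m1 + 1), np.arange(1, m2 + 1), indexing="ij")
w1 = np.pi * i1 / (2.0 * L1)      # varphi_{i1}
w2 = np.pi * i2 / (2.0 * L2)      # varphi_{i2}
lam = (w1**2 + w2**2).ravel()     # Laplacian eigenvalues lambda_i

def phi(x):
    # Dirichlet-Laplacian eigenfunctions phi_i(x) on the rectangle
    return (np.sin(w1 * (x[0] + L1)) * np.sin(w2 * (x[1] + L2))).ravel() / np.sqrt(L1 * L2)

def s_se(w, sigma_f=1.0, l=1.0, d=2):
    # SE spectral density (eq:SES) with d = 2
    return sigma_f**2 * (2.0 * np.pi) ** (d / 2) * l**d * np.exp(-0.5 * l**2 * w**2)

x = np.array([0.3, -0.2])
xp = np.array([0.0, 0.5])
k_approx = np.sum(s_se(np.sqrt(lam)) * phi(x) * phi(xp))
k_exact = np.exp(-0.5 * np.sum((x - xp) ** 2))   # SE kernel, sigma_f = l = 1
```

The agreement degrades near the domain boundary, where the Dirichlet condition forces the approximation to zero; in practice the computational domain is chosen somewhat larger than the region of interest.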
Posterior distribution of hyperparameters ----------------------------------------- The marginal likelihood function corresponding to the model is given as $$\label{likelihood} p(\mathbf{y} \mid \sigma_f,l,\sigma) = \mathcal{N}(\mathbf{y} \mid \mathbf{0}, Q(\sigma_f,l) + \sigma^2 \, I),$$ where $Q(\sigma_f,l)$ is defined by (\[eq:Gram\]). The posterior distribution of the parameters can now be written as follows: $$\label{posterior} p(\sigma_f,l,\sigma \mid \mathbf{y}) \propto p(\mathbf{y} \mid \sigma_f,l,\sigma) p(\sigma_f) p(l) p(\sigma),$$ where non-informative priors are used: $p(\sigma_f)\propto \frac{1}{\sigma_f}$, $p(l)\propto \frac{1}{l}$ and $p(\sigma)\propto \frac{1}{\sigma}$. The logarithm of (\[posterior\]) can be written as $$\label{logposterior} \log\,p(\sigma_f,l,\sigma \mid \mathbf{y}) = \text{const.} - \frac{1}{2} \, \log \det (Q + \sigma^2 I) - \frac{1}{2} \mathbf{y}^{\mathsf{T}}(Q + \sigma^2 I)^{-1} \mathbf{y} - \log \sigma_f - \log l - \log \sigma.$$ Given the posterior distribution we have a wide selection of methods from statistics to estimate the parameters. One approach is to compute the maximum a posteriori (MAP) estimate of the parameters by using, for example, gradient-based optimization methods [@Rasmussen2006]. However, using this kind of point estimate loses the uncertainty information of the hyperparameters and therefore in this article we use Markov chain Monte Carlo (MCMC) methods [@brooks2011handbook] which retain the information about the uncertainty in the final result. Metropolis–Hastings sampling of hyperparameters ----------------------------------------------- As discussed in the previous section, the statistical formulation of the inverse problem gives a posterior distribution of the hyperparameters ${\boldsymbol{\varphi}} = (\sigma_f,l,\sigma)$ as the solution rather than single estimates. The MCMC methods are capable of generating samples from the distribution. 
The Monte Carlo samples can then be used for computing the mean, the variance, or some other statistics of the posterior distribution [@Gelman_et_al:2013]. In this work, we employ the Metropolis–Hastings algorithm to sample from the posterior distribution. The L-curve method ------------------ One of the classical methods to obtain information about the optimum value for $\sigma$ is the L-curve method [@hansen1992analysis], which operates by plotting the norm of the solution $\lVert f_{\sigma}({\mathbf x}) \rVert _2$ versus the residual norm $\lVert \mathcal{H}_{{\mathbf x},i} f_{\sigma}({\mathbf x}) - y_i \rVert _2$. The associated L-curve is defined as the continuous curve consisting of all the points $(\lVert \mathcal{H}_{{\mathbf x},i} f_{\sigma}({\mathbf x}) - y_i \rVert _2,\lVert f_{\sigma}({\mathbf x}) \rVert _2)$ for $\sigma \in [0,\infty)$. Cross-validation ---------------- As a comparison, we also consider using cross-validation (CV) methods for model selection. In $k$-fold CV, the data are partitioned into $k$ disjoint sets $\mathbf{y}_{j}$, and at each round $j$ of CV, the predictive likelihood of the set $\mathbf{y}_{j}$ is computed given the rest of the data $\mathbf{y}_{-j}$. These likelihoods are used to monitor the predictive performance of the model. This performance is used to estimate the generalization error, and it can be used to carry out model selection [@kohavi1995study; @Rasmussen2006; @vehtari2017practical]. The Bayesian CV estimate of the predictive fit with given parameters ${\boldsymbol{\varphi}}$ is $$\mbox{CV} = \sum_{j=1}^n \log p(\mathbf{y}_j \mid \mathbf{y}_{-j},{\boldsymbol{\varphi}}),$$ where $p(\mathbf{y}_j \mid \mathbf{y}_{-j},{\boldsymbol{\varphi}})$ is the predictive likelihood of the data $\mathbf{y}_j$ given the rest of the data. The best parameter values with respect to CV can be computed by enumerating the possible parameter values and selecting the one which gives the best fit in terms of CV. 
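The Metropolis–Hastings step described above can be illustrated on a toy version of the problem. The sketch below (not the paper's code) samples only a noise scale $\sigma$ under the Jeffreys prior $p(\sigma)\propto 1/\sigma$ from synthetic Gaussian data, using a Gaussian random walk on $\log\sigma$; in the paper the same scheme runs on the joint posterior (\[posterior\]) of $(\sigma_f,l,\sigma)$. All numbers (data size, proposal scale, chain length) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(2000)            # synthetic data, true sigma = 1
n, S = y.size, np.sum(y**2)

def log_post(theta):
    # theta = log(sigma).  Posterior of sigma with Jeffreys prior 1/sigma;
    # after the change of variables the Jacobian cancels one 1/sigma:
    #   log p(theta | y) = -n*theta - S * exp(-2*theta) / 2 + const
    return -n * theta - 0.5 * S * np.exp(-2.0 * theta)

theta, lp = 0.0, log_post(0.0)
chain = []
for _ in range(6000):
    prop = theta + 0.05 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

sigma_samples = np.exp(np.array(chain[1000:]))    # discard burn-in
sigma_mean = sigma_samples.mean()
```

The retained samples approximate the full posterior of $\sigma$, so the mean, the standard deviation, and histograms such as those reported for the CT experiments can all be read off the chain.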
Experimental results {#sec:expResults} ==================== In this section, we present numerical results using the GP model for limited x-ray tomography problems. All the computations were implemented in <span style="font-variant:small-caps;">Matlab</span> 9.4 (R2018a) and performed on an Intel Core i5 2.3 GHz CPU with 8 GB 2133 MHz LPDDR3 memory. For both simulated data (see Section \[SimulatedData\]) and real data (see Section \[RealData\]) we use $m=10^4$ basis functions in (\[eq:BFE\]). The measurements are obtained from the line integral of each x-ray over the attenuation coefficient of the measured objects. The measurements are taken for each direction (angle of view), and later they will be referred to as projections. The same number of rays in each direction is used. The computation of the hyperparameters is carried out using the Metropolis–Hastings algorithm with $5\,000$ samples, and the first $1\,000$ samples are thrown away ([*burn-in*]{} period). The reconstruction is computed by taking the conditional mean of the object estimate. Simulated data: 2D Chest phantom {#SimulatedData} -------------------------------- As for the simulated data, we use one slice of <span style="font-variant:small-caps;">Matlab</span>’s 3D Chest dataset [@matlab_chest] as a ground truth, $f_{\text{true}}$, which is shown in Figure \[fig:ChestPhantomRec\](a). The size of the phantom is $N \times N$, with $N = 128$. The black region indicates zero values and lighter regions indicate higher attenuation function values. The measurements (i.e. sinogram) of the chest phantom are computed using the [radon]{} command in <span style="font-variant:small-caps;">Matlab</span> and corrupted by additive white Gaussian noise with zero mean and $0.1$ variance ($\sigma_{\text{true}} = 0.32$). Several reconstructions of the chest phantom using different covariance functions, namely squared exponential (SE), Matérn, Laplacian, and Tikhonov, are presented. 
For the SE, Matérn, and Laplacian covariance functions, the parameters $\sigma_f$, $l$, and $\sigma$ are estimated using the proposed method. We use $\nu = 1$ for the Matérn covariance. The Tikhonov covariance is not characterized by a length scale $l$, and hence only $\sigma_f$ and $\sigma$ are estimated. All the estimated parameters are reported in Table \[GP parameters\]. Figure \[fig:Histogram parameters Chest\] presents the histograms of the 1-d marginal posterior distribution of each parameter for the different covariance functions. The histograms show the distribution of the parameter values in the Metropolis–Hastings samples. The results show that the $\sigma_f$ estimate for the SE and Matérn covariances is $0.12$, while for Laplacian and Tikhonov the estimates are $0.05$ and $0.64$, respectively. For the Matérn, Laplacian, and Tikhonov covariance functions, the $\sigma$ estimates are concentrated around similar values, $0.34-0.39$, with standard deviations (SD) between $0.02 - 0.03$. These noise estimates are close to the ground-truth noise level, $\sigma_{\text{true}} = 0.32$, with absolute errors between $0.02 - 0.07$. The SE kernel, in contrast, appears to overestimate the noise, with $\sigma = 0.60$. The length-scale estimates $l$ for the Laplacian and SE covariance functions are concentrated around similar values, while the Matérn covariance yields a higher estimate, $l = 10.14$. Figure \[fig:ChestPhantomRec\](c)-(f) shows GP reconstructions of the 2D chest phantom using different covariance functions from 9 projections (uniformly spaced) over a 180$^\circ$ angle of view, with $185$ rays for each projection. The computation times for all numerical tests are reported in Table \[Computation time\]. The Metropolis–Hastings reconstruction shows a longer computational time due to the need to generate a large number of samples from the posterior distribution.
However, the benefit of this algorithm is that it is easy to implement and reliable for sampling from high-dimensional distributions.

  Target          FBP   SE      Matérn   Laplacian   Tikhonov
  --------------- ----- ------- -------- ----------- ----------
  Chest phantom   0.5   11210   9676     9615        9615

  : Computation times of chest phantom (in seconds)[]{data-label="Computation time"}

The simulated-data reconstructions are evaluated against the following figures of merit:

- the relative error (RE) $$\begin{aligned} \frac{\|f_{\text{true}} - f_{\text{rec}} \|_2}{\| f_{\text{true}}\|_2}, \end{aligned}$$ where $f_{\text{rec}}$ is the image reconstruction, and

- the peak-signal-to-noise ratio (PSNR) $$\begin{aligned} 10\log_{10}\left(\frac{\mathrm{peakval}^2}{\mathrm{MSE}}\right), \end{aligned}$$ where $\mathrm{peakval}$ is the maximum possible value of the image and $\mathrm{MSE}$ is the mean square error between $f_{\text{true}}$ and $f_{\text{rec}}$,

as shown in Table \[Figures of merit\]. In practice, image quality in CT depends on other parameters as well, such as image contrast, spatial resolution, and image noise [@goldman2007principles]. These parameters can be evaluated when the CT device is calibrated with CT numbers for various materials, a high-resolution image is available, and repeated measurements are acquired to record the statistical fluctuations of image noise in the detected x-ray intensity. However, the datasets collected in this work do not include this information, and such an evaluation falls outside the scope of this paper. The results presented here focus on the application of a new algorithm to limited-data CT reconstruction and are reported as a preliminary study. A reconstruction using a conventional method is computed as well with the built-in <span style="font-variant:small-caps;">Matlab</span> function [iradon]{}, which uses the FBP to invert the Radon transform.
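The two figures of merit above can be computed directly from their definitions. The following Python sketch (NumPy standing in for the paper's Matlab implementation) mirrors the formulas:

```python
import numpy as np

def relative_error(f_true, f_rec):
    """RE = ||f_true - f_rec||_2 / ||f_true||_2 (often reported in percent)."""
    f_true = np.asarray(f_true)
    f_rec = np.asarray(f_rec)
    return np.linalg.norm(f_true - f_rec) / np.linalg.norm(f_true)

def psnr(f_true, f_rec, peakval):
    """PSNR = 10 log10(peakval^2 / MSE), with MSE the mean squared error."""
    mse = np.mean((np.asarray(f_true) - np.asarray(f_rec)) ** 2)
    return 10.0 * np.log10(peakval**2 / mse)
```

For 2D images, `np.linalg.norm` returns the Frobenius norm, which matches the vectorized $\ell_2$ norm used in the RE definition.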
It reconstructs a two-dimensional slice of the sample from the corresponding projections. The angles for which the projections are available are given as an argument to the function. Linear interpolation is applied during the backprojection and a Ram–Lak or ramp filter is used. The FBP reconstruction of the chest phantom is shown in Figure \[fig:ChestPhantomRec\](b). For comparison, FBP reconstructions computed using some other filters are shown in Figure \[fig:FBPs\].

  ------------ ----------------- -------------- ---------------
  Covariance   $\sigma_f$ (SD)   $l$ (SD)       $\sigma$ (SD)
  function
  SE           0.12 (0.04)       5.03 (0.03)    0.60 (0.02)
  Matérn       0.12 (0.07)       10.14 (0.08)   0.34 (0.03)
  Laplacian    0.05 (0.10)       4.49 (0.02)    0.39 (0.03)
  Tikhonov     0.64 (0.02)       -              0.35 (0.03)
  ------------ ----------------- -------------- ---------------

  : The GP parameter estimates for the chest phantom. The estimates are calculated using the conditional mean, and the standard deviation (SD) values are also reported in parentheses.[]{data-label="GP parameters"}

![Histograms of the 1-d marginal posterior distribution of the GP parameters. The left, middle, and right columns show the marginal distributions of $\sigma_f$, $l$, and $\sigma$, respectively; the rows correspond to the SE, Matérn, Laplacian, and Tikhonov covariance functions. The estimate of the parameter $l$ is not available for the Tikhonov covariance.[]{data-label="fig:Histogram parameters Chest"}](samples_1_ChestSE)

![(a) The ground truth 2D chest phantom. (b) Filtered backprojection reconstruction (Ram–Lak filter) from 9 projections. (c) GP reconstruction using SE covariance, (d) GP reconstruction using Matérn covariance, (e) GP reconstruction using Laplacian covariance, (f) GP reconstruction using Tikhonov covariance. The GP reconstructions use 9 projections.[]{data-label="fig:ChestPhantomRec"}](ChestImage)

![Filtered backprojection reconstructions using (a) Shepp–Logan filter, (b) Cosine filter, (c) Hamming filter, (d) Hann filter. Values of relative error (RE) are between $23.6 - 25.2\%$ and PSNR values are between $18.1 - 19.9$.[]{data-label="fig:FBPs"}](SheppLogan)

  Method                 RE (%)   PSNR
  ---------------------- -------- -------
  FBP (Ram–Lak filter)   25.86    18.44
  GP-SE                  29.41    21.76
  GP-Matérn              23.26    22.76
  GP-Laplacian           29.18    21.79
  GP-Tikhonov            23.39    22.73
  Lcurve-Laplacian       23.38    22.62
  Lcurve-Tikhonov        23.26    22.63
  CV-Laplacian           25.18    22.31
  CV-Tikhonov            23.47    22.75

  : Figures of merit for chest phantom reconstructions.[]{data-label="Figures of merit"}

We also compared the results to the L-curve method and the CV:

- The L-curve method is applied to the Laplacian and the Tikhonov covariances, and the L-curve plots for different values of the parameter $10^{-1}\leq\sigma\leq10$ for both covariances are shown in Figure \[L-curve chest data\]. Both plots show that the corner of the L-curve is located within $0.2\leq \sigma \leq 1$.
![The L-curve for (a) Tikhonov and (b) Laplacian covariance from the chest phantom reconstruction. The horizontal axes show the residual norm $\lVert \mathcal{H}_{{\mathbf x},i} f_{\sigma}({\mathbf x}) - y_i \rVert _2$ and the vertical axes the solution norm $\lVert f_{\sigma}({\mathbf x}) \rVert _2$.[]{data-label="L-curve chest data"}](LcurveChestTikhonov)

- The CV is tested for the Laplacian and Tikhonov covariances using point-wise evaluation of $10^{-2} \leq \sigma \leq 1$ and $10^{-2}\leq \sigma_f \leq 1$. For the Laplacian covariance, several points of the length scale $1\leq \ell \leq 100$ are tested as well. The minimum prediction error was obtained for $\sigma_f = 0.8$, $\sigma = 0.8$, and $\ell = 10$; for the Tikhonov covariance, it was obtained for $\sigma = 0.5$ and $\sigma_f = 0.5$. The estimates of $\sigma$ for both kernels thus appear to overestimate $\sigma_{\text{true}}$, with absolute errors between $0.18 - 0.48$. The length-scale estimate from the Laplacian covariance, $\ell = 10$, appears to be close to the estimate obtained with the Matérn covariance.

Image reconstructions for both the L-curve and CV methods are shown in Figure \[fig:Parameter Choice Methods\].

![(a) The ground truth 2D chest phantom. (b) & (c) Reconstructions using the L-curve parameter choice method with Laplacian (using $\sigma = 1$) and Tikhonov (using $\sigma = 0.2$) covariance functions, respectively. (d) & (e) Reconstructions using CV with Laplacian and Tikhonov covariance functions, respectively.[]{data-label="fig:Parameter Choice Methods"}](ChestImage)

Real data: Carved cheese {#RealData}
------------------------

We now consider a real-world example using the tomographic x-ray data of a carved cheese slice measured with a custom-built CT device available at the University of Helsinki, Finland. The dataset is available online [@fips_dataset]. For detailed documentation of the acquisition setup, including the specifications of the x-ray systems, see [@bubba2017tomographic]. We use the downsampled sinogram with $140$ rays and 15 projections from a $360^\circ$ angle of view. In the computations, the size of the target is set to $120 \times 120$. Figure \[CheeseRec\](c) shows the GP reconstruction (Matérn covariance function) of the cross section of the carved cheese slice using 15 projections (uniformly spaced) out of a 360$^\circ$ angle of view. For comparison, the FBP reconstruction is shown in Figure \[CheeseRec\](b).

  ------------ ----------------- -------------- ---------------
  Covariance   $\sigma_f$ (SD)   $l$ (SD)       $\sigma$ (SD)
  function
  Matérn       0.012 (0.07)      11.00 (0.08)   0.02 (0.04)
  ------------ ----------------- -------------- ---------------

  : Estimated GP parameters for the carved cheese using the Matérn covariance function. The estimates are calculated using the conditional mean, and the standard deviation (SD) values are also reported in parentheses.[]{data-label="GP parameter cheese"}

The computation times for the carved cheese are reported in Table \[Computation time cheese\].

![(a) FBP reconstruction (Ram–Lak filter) of the carved cheese using dense $360$ projections. (b) Filtered backprojection reconstruction from 15 projections. (c) GP reconstruction using Matérn covariance from 15 projections.[]{data-label="CheeseRec"}](Cheese_slice_1120b.pdf)

  Target          FBP   Matérn
  --------------- ----- --------
  Carved cheese   0.1   12604

  : Computation times of the carved cheese (in seconds)[]{data-label="Computation time cheese"}

Discussion
----------

We have presented x-ray tomography reconstructions from both simulated and real data with limited projections (i.e. sparse sampling) using an approach based on the Gaussian process. However, other limited-data problems, such as limited-angle tomography, could be explored as well. The quality of the GP reconstructions using the different covariance functions is qualitatively rather similar. Quantitatively, however, the reconstruction using the Matérn covariance is the best one: it has the lowest RE, $23.26\%$, and the highest PSNR, $22.76$. PSNR describes the similarity of the original target to the reconstructed image (the higher the value, the better the reconstruction). Figures of merit are not available for the real cheese data since there is no comparable ground truth. Nevertheless, the quality of the reconstruction can be assessed qualitatively by comparing it with the FBP reconstruction obtained with dense $360$ projections from $360$ degrees, shown in Figure \[CheeseRec\](a).
The corresponding parameter estimates for the chest phantom and the cheese are reported in Tables \[GP parameters\] and \[GP parameter cheese\]. For the chest phantom, the estimates of the parameter $\sigma$ using the Matérn, Laplacian, and Tikhonov kernels tend to be close to the true value $\sigma_{\text{true}}$. With the SE covariance, the standard deviation of the noise is overestimated. The reconstructions produced by the FBP benchmark algorithm using sparse projections are overwhelmed by streak artefacts due to the nature of backprojection reconstruction, as shown in Figure \[fig:ChestPhantomRec\](b) for the chest phantom and Figure \[CheeseRec\](b) for the cheese target. The edges of the target are badly reconstructed. Due to the artefacts, especially for the chest phantom, it is difficult to distinguish the lighter region (which is assumed to be tissue) from the black region (air). The FBP reconstruction has the worst quality, which is confirmed in Table \[Figures of merit\]: it has a high RE value ($25.86\%$) and the lowest PSNR ($18.44$). FBP reconstructions computed with different filters are shown in Figure \[fig:FBPs\]. However, there is no significant improvement in the images, as indicated by the RE and PSNR values in the caption as well as by qualitative inspection. On the other hand, the GP reconstructions outperform the FBP algorithm in terms of image quality, as reported in the figures of merit. The PSNR values of the GP-based reconstructions are higher than that of the FBP reconstruction. Nevertheless, in GP reconstructions, sharp boundaries are difficult to achieve due to the smoothness assumptions embedded in the model. The GP prior clearly suppresses the artefacts in the reconstructions, as shown in Figures \[fig:ChestPhantomRec\](c) and \[CheeseRec\](c). In Figure \[fig:ChestPhantomRec\](c), the air and tissue regions are recovered much better, since the prominent artefacts are much weaker.
In Figure \[CheeseRec\](c), the air regions (outside the cheese and in the C and T letters) are much sharper than in the FBP reconstruction. Overall, the results indicate that the image quality can be improved significantly by employing the GP method. In Figure \[fig:Parameter Choice Methods\], the image reconstructions using the L-curve and CV methods are presented. The quality of these reconstructions is reported in Table \[Figures of merit\] as well. In these methods, finer point-wise evaluations might help to improve the quality of the reconstructions. We emphasize that in the proposed GP approach, the parameters of the prior are part of the inference problem (see Equation ). Hence, the difficulty of choosing the prior parameters is avoided. This contrasts with classical regularization methods, in which selecting the regularization parameters is a crucial step for producing a good reconstruction.

Conclusions
===========

We have applied the Gaussian process with a hierarchical prior to computed tomography with limited projection data. The method was implemented to estimate the x-ray attenuation function from the measured data produced by the Radon transform. The performance has been tested on simulated and real data, with promising results. Unlike algorithms commonly used for the limited x-ray tomography problem, which require manual tuning of prior parameters, the proposed GP method offers an easier setup, as it treats the prior parameters as part of the estimation. Hence, it constitutes a promising and user-friendly strategy. The most important part of the GP model is the selection of the covariance function, since it stipulates the properties of the unknown function. As such, it also leaves the most room for improvement. Considering the examples in Section \[sec:expResults\], a common feature of the target functions is that they consist of a number of well-defined, separate regions.
The function values are similar and thus highly correlated within the regions, while the correlation is low at the edges where rapid changes occur. This kind of behavior is hard to capture with a stationary covariance function, which models the correlation as depending only on the distance between the input locations. A non-stationary alternative is provided by, for example, the neural network covariance function, which is known for its ability to model functions with non-smooth features [@Rasmussen2006]. The basis function approximation method employed in this work is only applicable to stationary covariance functions, but other approximations can of course be considered. Despite its success, the computational burden of the proposed algorithm is relatively high. To alleviate this, speed-up strategies are available, such as implementing parallelized GPU code, optimizing the proposal covariances of the sampling strategy, or changing the MCMC algorithm to another one. Investigating finer-resolution images and statistical noise records would also be interesting future research for evaluating other image quality parameters. Moreover, the proposed method can be applied to multidetector CT imaging [@mookiah2018multidetector; @flohr2005multi] as well as to 3D CT problems using sparse data [@sidky2008image; @purisha2018automatic].

Details on the computation of $\Phi$ {#app:compdet}
====================================

Here we derive the closed-form expression of the entries $\Phi_{ij}$ stated in .
We get that $$\begin{split} \Phi_{ij} &= \int_{-R}^R \phi_i({\mathbf x}^0_j+s\hat{{\mathbf u}}_j)\,ds \\ &=\frac{1}{\sqrt{L_1L_2}}\int_{-R}^{R} \sin(\varphi_{i_1} r_j\cos\theta_j - \varphi_{i_1} s \sin\theta_j +\varphi_{i_1}L_1)\sin(\varphi_{i_2} r_j\sin\theta_j + \varphi_{i_2} s \cos\theta_j+\varphi_{i_2} L_2)\, ds \\ & = \frac{1}{\sqrt{L_1L_2}}\int_{-R}^{R} \sin(\alpha_{ij} s + \beta_{ij})\sin(\gamma_{ij} s + \delta_{ij})\, ds \\ & = \frac{1}{2\sqrt{L_1L_2}} \int_{-R}^R \cos((\alpha_{ij}-\gamma_{ij})s+\beta_{ij}-\delta_{ij}) - \cos((\alpha_{ij}+\gamma_{ij})s + \beta_{ij} + \delta_{ij})\,ds \\ & = \frac{1}{2\sqrt{L_1L_2}} \Big[ \frac{1}{\alpha_{ij}-\gamma_{ij}}\sin((\alpha_{ij}-\gamma_{ij})s+\beta_{ij}-\delta_{ij}) - \frac{1}{\alpha_{ij}+\gamma_{ij}}\sin((\alpha_{ij}+\gamma_{ij})s + \beta_{ij} + \delta_{ij})\Big]_{-R}^R \\ & = \frac{1}{2\sqrt{L_1L_2}} \Big( \frac{1}{\alpha_{ij}-\gamma_{ij}}\sin((\alpha_{ij}-\gamma_{ij})R+\beta_{ij}-\delta_{ij}) - \frac{1}{\alpha_{ij}+\gamma_{ij}}\sin((\alpha_{ij}+\gamma_{ij})R + \beta_{ij} + \delta_{ij}) \\ & \qquad - \frac{1}{\alpha_{ij}-\gamma_{ij}}\sin(-(\alpha_{ij}-\gamma_{ij})R+\beta_{ij}-\delta_{ij}) + \frac{1}{\alpha_{ij}+\gamma_{ij}}\sin(-(\alpha_{ij}+\gamma_{ij})R + \beta_{ij} + \delta_{ij})\Big), \end{split}$$ where $$\begin{aligned} \alpha_{ij} &= -\varphi_{i_1}\sin\theta_j, \\ \beta_{ij} &= \varphi_{i_1}r_j\cos\theta_j+\varphi_{i_1}L_1, \\ \gamma_{ij} &= \varphi_{i_2}\cos\theta_j, \\ \delta_{ij} &= \varphi_{i_2}r_j\sin\theta_j+\varphi_{i_2}L_2.\end{aligned}$$

References {#references .unnumbered}
==========
--- author: - 'E. Bravo' bibliography: - '../../ebg.bib' date: 'Received ; accepted ' title: '[$^{16}$O$($p,$\alpha)^{13}$N ]{}makes explosive oxygen burning sensitive to the metallicity of the progenitors of type Ia supernovae' --- Introduction ============ The nucleosynthesis resulting from type Ia supernovae (SNIa) reflects the thermodynamical history of the progenitor white dwarf (WD) during the explosion and its initial chemical composition. Thus, nucleosynthetic constraints coming from observations of supernovae and their remnants are an important source of knowledge of the conditions achieved during the explosion. The optical properties, spectra, and light curves of SNIa over a few weeks around maximum brightness have been used to infer the chemical profile of the ejecta [@2005sth; @2008maz; @2011tan; @2014sas; @2016ash]. However, the ability to constrain the nucleosynthetic products based on optical data is hampered by the complex physics that governs the formation of spectral features in the visible, ultraviolet, and infrared bands. Observations of sufficiently close supernova remnants (SNRs) are an alternative to obtain information about the chemical composition of the ejecta [e.g. @1988ham; @1988fes]. Hundreds to a few thousands of years after the explosion, the ejected elements emit strongly in the X-ray band due to shock heating, and their emission lines can be detected and measured by current X-ray observatories [e.g. @1995hug; @1995van; @2008bad; @2014yam; @2015yam]. Recently, the high spectral resolution of [*Suzaku*]{} has allowed the relative mass ratio of calcium to sulfur, $M_\mathrm{Ca}/M_\mathrm{S}$, to be measured in a few SNRs with a precision of $\sim5\%-16\%$ [@2017mar], with the result that this ratio spans the range $0.17 - 0.28$, with an uncertainty of 0.04 in both limits [for reference, this mass ratio is 0.177 in the solar system; @2003lod]. 
These results have been interpreted in terms of metallicity-dependent yields during explosive oxygen burning. There are two effects to account for in relation to $\alpha$-rich oxygen burning: first, the strength of the enhancement of the yield of calcium at all metallicities, and second, the metallicity dependence of the mass ratio of calcium to sulfur, $M_\mathrm{Ca}/M_\mathrm{S}$, in the ejecta. Both calcium and sulfur are products of explosive oxygen burning, and they are synthesized in conditions of quasi-statistical equilibrium, in which their ratio depends on the quantity of $\alpha$ particles available: $M_\mathrm{Ca}/M_\mathrm{S}\propto X_\alpha^2$ [@2014de]. [@1973woo] studied the conditions under which explosive oxygen burning would reproduce the solar-system abundances. They explained that oxygen burning can proceed through two different branches: $\alpha$-poor and $\alpha$-rich. The $\alpha$-poor branch has the net effect that for every two [${}^{16}$O]{} nuclei destroyed, one [${}^{28}$Si]{} nucleus and one $\alpha$ particle are created. This branch proceeds mainly through the fusion reaction of two [${}^{16}$O]{} nuclei, but the chain [${}^{16}$O]{}$(\gamma,\alpha)$[${}^{12}$C]{}$($[${}^{16}$O]{}$,\gamma)$[${}^{28}$Si]{} contributes to it as well. On the other hand, the $\alpha$-rich branch involves the photo-disintegration of two [${}^{16}$O]{} nuclei to give two [${}^{12}$C]{} plus two $\alpha$ particles, followed by the fusion reaction [${}^{12}$C]{}$($[${}^{12}$C]{}$,\alpha)$[${}^{20}$Ne]{}$(\gamma,\alpha)$[${}^{16}$O]{}, which releases a total of four $\alpha$ particles for each [${}^{16}$O]{} nucleus destroyed. [@1973woo] included the chain [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}in the $\alpha$-rich branch and listed these two reactions (and their inverses) among the most influential reactions for explosive oxygen burning.
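Schematically, the net effect of the two branches described above can be summarized as follows (this simply restates the reaction chains given in the text in compact form):

```latex
% alpha-poor branch: two 16O destroyed, one 28Si and one alpha created,
% via 16O + 16O fusion or 16O(gamma,alpha)12C(16O,gamma)28Si
2\,{}^{16}\mathrm{O} \;\longrightarrow\; {}^{28}\mathrm{Si} + \alpha

% alpha-rich branch: photo-disintegration followed by carbon fusion,
2\times\big[{}^{16}\mathrm{O}(\gamma,\alpha){}^{12}\mathrm{C}\big],
\qquad
{}^{12}\mathrm{C}({}^{12}\mathrm{C},\alpha){}^{20}\mathrm{Ne}(\gamma,\alpha){}^{16}\mathrm{O}
% net: one 16O regenerated, so one 16O destroyed releases four alpha particles
```

The larger $\alpha$-particle supply of the second branch is what raises $M_\mathrm{Ca}/M_\mathrm{S}$ through the quasi-statistical-equilibrium scaling $M_\mathrm{Ca}/M_\mathrm{S}\propto X_\alpha^2$ quoted above.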
[@2012bra] found that the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate and its inverse are among the ones that most strongly affect the abundance of [${}^{40}$Ca]{}, in agreement with [@1973woo]. @2014de and @2016mil noticed that $M_\mathrm{Ca}/M_\mathrm{S}$ can be used to infer the metallicity, $Z$, of the progenitor of SNIa, but they did not identify the source of the metallicity dependence of the calcium and sulfur yields. Later, @2017mar used the measured $M_\mathrm{Ca}/M_\mathrm{S}$ in a few type Ia SNRs of the Milky Way and the LMC to determine the progenitor metallicity, and concluded that there had to be an unknown source of neutronization of the WD matter before the thermal runaway besides that produced during carbon simmering [@2008chm; @2008pir; @2016mar; @2017pie]. They also pointed out that SNIa models that used the standard set of reaction rates were unable to reproduce the high calcium-to-sulfur mass ratio measured in some remnants. In the present work, it is shown that the origin of the metallicity dependence of $M_\mathrm{Ca}/M_\mathrm{S}$ has to be ascribed to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction. In the following section, the mechanisms by which the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction controls the $\alpha$ particle abundance as a function of the progenitor metallicity are explained. If the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction is switched off, the value of $M_\mathrm{Ca}/M_\mathrm{S}$ remains insensitive to metallicity. In Section \[s:limits\], the uncertainty of the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate is reported along with the limits to its value that can be obtained from the measured $M_\mathrm{Ca}/M_\mathrm{S}$ in SNRs. The conclusions of this work are presented in Section \[s:conclusions\]. 
[$^{16}$O$($p,$\alpha)^{13}$N ]{}and metallicity {#s:workings} ================================================ The chain $^{16}$O(p,$\alpha$)$^{13}$N($\gamma$,p)$^{12}$C provides an alternative route to [$^{16}$O$(\gamma,\alpha)^{12}$C ]{}to convert [${}^{16}$O]{} to [${}^{12}$C]{} and feed the $\alpha$-rich branch of explosive oxygen burning [@1973woo]. The chain neither consumes nor produces protons; however, its rate depends on the abundance of free protons. In the shells that experience explosive oxygen burning in SNIa, the neutron excess is closely linked to the progenitor metallicity. At small neutron excess, hence low progenitor metallicity, there are enough protons to make [$^{16}$O$($p,$\alpha)^{13}$N ]{}operational. At large neutron excess, hence large progenitor metallicity, the presence of free neutrons depletes the protons and undermines the efficiency of the chain [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}. This is because in explosive oxygen burning, quasi-statistical equilibrium holds for the abundances of nuclei between silicon and calcium [@1970tru]. In quasi-statistical equilibrium, a large neutron excess leads to a large abundance of neutronized intermediate-mass nuclei such as $^{34}$S or $^{38}$Ar, which react much more efficiently with protons than the $\alpha$-nuclei such as $^{32}$S or $^{36}$Ar that are produced in low-neutron-excess conditions. To illustrate the above ideas, Figs. \[f:1\]-\[f:3\] show the evolution of key quantities related to the branching of explosive oxygen burning into either the $\alpha$-rich or the $\alpha$-poor tracks. Specifically, the plots show the evolution of a mass shell reaching a peak temperature of $4\times10^9$ K in models 1p06\_Z2p25e-4\_$\xi_\mathrm{CO}$0p9 and 1p06\_Z2p25e-2\_$\xi_\mathrm{CO}$0p9, described in @2019bra. 
In short, both models simulate the detonation of a WD with mass $1.06\,M_\odot$ made of carbon and oxygen, whose progenitor metallicities are respectively $Z=2.25\times10^{-4}$ (strongly sub-solar metallicity, hereafter the low-$Z$ case) and $Z=0.0225$ (about 1.6 times solar, hereafter the high-$Z$ case). In both models, the rate of the fusion reaction [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}has been scaled down by a factor 0.1 as suggested by @2017mar [see also Bravo et al. 2019]. A larger proton abundance in the low-$Z$ case at the same temperature and similar oxygen abundance as in the high-$Z$ case implies a larger nucleosynthetic flux from the [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}chain, as can be seen in Fig. \[f:1\], and as a consequence the nucleosynthetic flux from this reaction chain exceeds that from the [$^{16}$O$(\gamma,\alpha)^{12}$C ]{}reaction. Thus, the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction becomes the main source of [${}^{12}$C]{} at the expense of [${}^{16}$O]{}. In the high-$Z$ case, the nucleosynthetic flux due to the [$^{16}$O$($p,$\alpha)^{13}$N$(\gamma$,p$)^{12}$C ]{}chain remains at all times below that due to the [$^{16}$O$(\gamma,\alpha)^{12}$C ]{}reaction. Figure \[f:2\] shows the evolution of the abundances of selected nuclei during the main phase of oxygen burning of the aforementioned mass shell, for both metallicities. The low-$Z$ case displays a proton abundance larger than the high-$Z$ case by a factor of approximately ten, while the mass fractions of $\alpha$ particles and [${}^{12}$C]{} nuclei are also larger by a factor of approximately two. The abundance of oxygen declines faster in the low-$Z$ case, and that of sulfur rises faster at first, but in the end reaches nearly the same equilibrium abundance as in the high-$Z$ case. In contrast, the mass fraction of calcium rises in the low-$Z$ case to approximately five times the value reached in the high-$Z$ case. 
The final values of the calcium-to-sulfur mass ratios obtained in the mass shell are: $M_\mathrm{Ca}/M_\mathrm{S}=0.45$ in the low-$Z$ case, and $M_\mathrm{Ca}/M_\mathrm{S}=0.17$ in the high-$Z$ case. Figure \[f:3\] shows the $\alpha$-efficiency of oxygen burning for both the low-$Z$ and the high-$Z$ case. For the purposes of the present work, the $\alpha$-efficiency is defined as the number of $\alpha$ particles created through both the $\alpha$-rich and the $\alpha$-poor branches divided by the number of [${}^{16}$O]{} nuclei destroyed in the same processes, and is equal to: $$\frac{\delta\alpha}{\delta ^{16}\mathrm{O}} = \frac{R_{{\mathrm{Op}\alpha}}+R_{{\mathrm{O}\gamma\alpha}}+R_{{\mathrm{O+O}}}+2R_{{\mathrm{C+C}}}}{R_{{\mathrm{Op}\alpha}}+R_{{\mathrm{O}\gamma\alpha}}+2R_{{\mathrm{O+O}}}+R_{{\mathrm{C+O}}}-R_{{\mathrm{C+C}}}}\,,$$ where $R$ is the nucleosynthetic flux due to a given reaction, $R_{{\mathrm{Op}\alpha}}=\rho N_\mathrm{Av}\left<\sigma v\right>Y(^{16}\mathrm{O})Y(\mathrm{p})$ makes reference to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction and $R_{{\mathrm{O}\gamma\alpha}}$ to the [$^{16}$O$(\gamma,\alpha)^{12}$C ]{}photodisintegration, $Y$ is the molar fraction of each species involved in the reaction, $R_{{\mathrm{O+O}}}$ makes reference to the fusion reaction [$^{16}\mathrm{O}+^{16}\mathrm{O}$ ]{}, and so on. As explained before, the $\alpha$-efficiency of $\alpha$-poor oxygen burning is 0.5, which would correspond, for instance, to all reaction rates being zero, except that of [$^{16}\mathrm{O}+^{16}\mathrm{O}$ ]{}. The $\alpha$-efficiency of $\alpha$-rich oxygen burning is equal to 4, which would be obtained if $R_{{\mathrm{O+O}}} = R_{{\mathrm{C+O}}} = 0$, and $R_{{\mathrm{C+C}}}=\left(R_{{\mathrm{Op}\alpha}}+R_{{\mathrm{O}\gamma\alpha}}\right)/2$. In Fig. \[f:1\], it can be seen that $R_{{\mathrm{C+C}}}$ is close to the mean of $R_{{\mathrm{Op}\alpha}}$ and $R_{{\mathrm{O}\gamma\alpha}}$. 
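The two limiting values quoted above follow directly from the formula; a minimal transcription makes the check explicit:

```python
def alpha_efficiency(R_Opa, R_Oga, R_OO, R_CO, R_CC):
    """delta(alpha)/delta(16O) as defined in the text; each argument
    is the nucleosynthetic flux R of the corresponding reaction."""
    num = R_Opa + R_Oga + R_OO + 2.0 * R_CC
    den = R_Opa + R_Oga + 2.0 * R_OO + R_CO - R_CC
    return num / den

# alpha-poor limit: all fluxes zero except 16O+16O
print(alpha_efficiency(0.0, 0.0, 1.0, 0.0, 0.0))   # 0.5

# alpha-rich limit: R_OO = R_CO = 0 and R_CC = (R_Opa + R_Oga)/2
print(alpha_efficiency(1.0, 1.0, 0.0, 0.0, 1.0))   # 4.0
```

The flux magnitudes are arbitrary in both checks; only their ratios matter, as in the text.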
As could be expected, the $\alpha$-efficiency shown in Fig. \[f:3\] lies between the two limits, and is larger for the low-$Z$ case, which attains a value close to 2.5. To test the extent to which the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction accounts for the metallicity dependence of $M_\mathrm{Ca}/M_\mathrm{S}$ and $M_\mathrm{Ar}/M_\mathrm{S}$ in type Ia supernova models, I ran one-dimensional SNIa models with a range of progenitor metallicities and the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction switched off, that is, $$R_{{\mathrm{Op}\alpha}}=f_0 \rho N_\mathrm{Av}\left<\sigma v\right>Y(^{16}\mathrm{O})Y(\mathrm{p})\,,$$ with $f_0 = 0$. The code used is the same as in @2019bra, where it is described in detail. The thermonuclear reaction rates used in the simulations are those recommended by the JINA REACLIB compilation [@2010cyb hereafter REACLIB]. Detailed balance is assumed to hold for forward and reverse reactions, that is, the factor $f_0$ is applied as well to the [${}^{13}$N]{}$(\alpha,\mathrm{p})$[${}^{16}$O]{} rate. The results are shown in Fig. \[f:4\], together with the results obtained with the standard rates ($f_0=1$) and the observational constraints derived from the emission lines of SNRs as measured with [*Suzaku*]{} [@2017mar]. Switching off the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate makes $M_\mathrm{Ca}/M_\mathrm{S}$ and $M_\mathrm{Ar}/M_\mathrm{S}$ almost insensitive to the WD progenitor metallicity, at all $Z$ for the mass ratio of argon to sulfur, and at solar and sub-solar $Z$ for the mass ratio of calcium to sulfur. The figure highlights the fact that without the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate these mass ratios cannot cover the full range of values measured in SNRs for any metallicity. The same conclusion holds for different scalings of the [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}reaction rate, and for models of delayed detonation of Chandrasekhar-mass WDs. 
Limits to the rate of [$^{16}$O$($p,$\alpha)^{13}$N ]{}deduced from supernova remnants {#s:limits} ====================================================================================== At present, the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate remains experimentally unconstrained. Its rate can be found in the STARLIB [@2013sal] and REACLIB [@2010cyb] compilations. In STARLIB, this rate was computed using Hauser-Feshbach theory and assigned a conventional (recommended) uncertainty of a factor ten, because of the lack of experimental information. The REACLIB rate is a fit to the rate of [$^{16}$O$($p,$\alpha)^{13}$N ]{}in @1988cau, and is the rate used in the SNIa models reported here. In the temperature range of interest for explosive oxygen burning, $T\simeq(3.5 - 5)\times10^9$ K, the STARLIB rate is larger than the REACLIB rate by a factor that varies between 1.5 and 2.5. An enhanced [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate may also increase the calcium-to-sulfur mass ratio, and is thus an alternative to the scaling down of the [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}reaction rate by a factor 0.1 suggested by @2017mar in order to match the range of $M_\mathrm{Ca}/M_\mathrm{S}$ and $M_\mathrm{Ar}/M_\mathrm{S}$ in SNRs. This is because the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction and its inverse are not in statistical equilibrium at the temperatures reached during explosive oxygen burning, unlike most of the reactions linking intermediate-mass nuclei from silicon to calcium. 
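For context, REACLIB stores each rate as one or more seven-coefficient sets of the standard parameterization sketched below. The evaluator follows the REACLIB functional form; the coefficient set `a_demo` is a placeholder for illustration only, not the fitted $^{16}$O(p,$\alpha$)$^{13}$N parameters:

```python
import math

# Standard REACLIB seven-parameter parameterization of a rate
# N_Av<sigma v>(T9) (Cyburt et al. 2010).  T9 is the temperature in GK.
def reaclib_rate(T9, a):
    return math.exp(a[0] + a[1] / T9 + a[2] * T9 ** (-1.0 / 3.0)
                    + a[3] * T9 ** (1.0 / 3.0) + a[4] * T9
                    + a[5] * T9 ** (5.0 / 3.0) + a[6] * math.log(T9))

# PLACEHOLDER coefficients, for illustration only (not the 16O(p,a)13N fit).
a_demo = [20.0, -1.0, -15.0, 2.0, -0.1, 0.005, 0.5]

# Evaluate over the window relevant to explosive oxygen burning,
# T9 = 3.5 - 5, e.g. to compare two compilations point by point.
for T9 in (3.5, 4.0, 4.5, 5.0):
    print(T9, reaclib_rate(T9, a_demo))
```

Comparing two compilations then amounts to evaluating the ratio of two such fits over the quoted temperature window.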
Figure \[f:5\] shows the relative change in the elemental yields of the SNIa model consisting of the detonation of a $1.06\,M_\odot$ WD with $Z=0.009$, when either the [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}rate is scaled down by a factor ten or when the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate is scaled up by a factor seven (both models labeled M$_\mathrm{ALT}$ in the plot), compared to the same model with all the rates at their standard values (identified in the plot as M$_\mathrm{ON}$). The graph shows that the elements synthesized in significant quantities in the SNIa model, iron-group elements plus intermediate-mass $\alpha$-nuclei, are made in equal proportions in the two M$_\mathrm{ALT}$ models. The same result is obtained for different parameters of the SNIa model; for example, the WD progenitor metallicity. In practice, it is possible to obtain the same proportions of the most abundant elements with intermediate modifications of both the [$^{12}\mathrm{C}+^{16}\mathrm{O}$ ]{}and the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rates. The strong suppression of the metallicity dependence of $M_\mathrm{Ca}/M_\mathrm{S}$ and $M_\mathrm{Ar}/M_\mathrm{S}$ when the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction is switched off suggests that there should be a minimum for its rate, below which the SNR measurements could not be reproduced. I ran the same model of the detonation of a $1.06\,M_\odot$ WD with $Z=2.25\times10^{-4}$ (the effect is most evident at low metallicities) and the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate scaled by different factors, $f_0=0.1$, 0.3, and 0.5. The results are shown in Fig. \[f:5\]. 
The mass ratios obtained with $f_0=0.1$ and 0.3 fall short of covering the observational data, while the results for $f_0=0.5$ are acceptable, in the sense that the resulting $M_\mathrm{Ar}/M_\mathrm{S}$ is within $3\sigma$ of the upper limit of the corresponding observational range [the $1\sigma$ uncertainty of the upper limit of the argon-to-sulfur mass ratio is 0.01; see @2017mar]. Therefore, I chose the last factor, $f_0=0.5$, to establish a lower limit to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate. The restrictions to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}reaction rate derived from the measurements of $M_\mathrm{Ca}/M_\mathrm{S}$ and $M_\mathrm{Ar}/M_\mathrm{S}$ in SNRs are displayed in Fig. \[f:6\], together with the rates from STARLIB, REACLIB, and from @1969wag, the one used by @1973woo. The SNR observational data lead to tighter rate constraints than the uncertainty listed in STARLIB, although the rates provided by this compilation lie within the observationally based rate uncertainty (shaded band in Fig. \[f:6\]). On the other hand, the rates from @1969wag are too large at high temperatures. It is to be expected that future X-ray observatories will be able to provide more stringent constraints on this rate through more accurate data concerning the strength of the emission lines of intermediate-mass elements in SNRs. Conclusions {#s:conclusions} =========== The so-called $\alpha$-rich explosive oxygen burning during type Ia supernova explosions enhances the production of calcium with respect to that of sulfur. From previous studies, it is known that there are two effects to account for in relation to $\alpha$-rich oxygen burning. First, the strength of the enhancement of the yield of calcium at all metallicities, and second, the metallicity dependence of the mass ratio of calcium to sulfur, $M_\mathrm{Ca}/M_\mathrm{S}$, in the ejecta. 
Here, it is demonstrated that a single reaction, [$^{16}$O$($p,$\alpha)^{13}$N ]{}(followed by [${}^{13}$N]{}$+\gamma\rightarrow\mathrm{p}+$[${}^{12}$C]{}), is responsible for the metallicity dependence of $M_\mathrm{Ca}/M_\mathrm{S}$ in the ejecta of type Ia supernovae. This reaction chain boosts $\alpha$-rich oxygen burning when the proton abundance is large, increasing the synthesis of argon and calcium with respect to sulfur and silicon. For high-metallicity progenitors, the presence of free neutrons leads to a drop in the proton abundance and the above chain is not efficient. Through one-dimensional modeling of supernova explosions, it is shown that switching off the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate makes the nucleosynthesis insensitive to the metallicity of the supernova progenitor. Although the rate of [$^{16}$O$($p,$\alpha)^{13}$N ]{}can be found in astrophysical reaction rate libraries, its uncertainty is unconstrained. Assuming that all reaction rates other than [$^{16}$O$($p,$\alpha)^{13}$N ]{}retain their standard values, an increase by a factor of approximately seven of the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate at temperatures of the order of $(3-4)\times10^9$ K is enough to explain the whole range of calcium-to-sulfur mass ratios measured in Milky Way and LMC supernova remnants. These same measurements provide a lower limit to the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate in the mentioned temperature range, on the order of a factor 0.5 with respect to the rate reported by Caughlan & Fowler in 1988. Future measurements of the [$^{16}$O$($p,$\alpha)^{13}$N ]{}rate at the energies of the Gamow peak for temperatures in the range $(3-4)\times10^9$ K are encouraged, as they would help to determine the precise role of this reaction in the synthesis of calcium in type Ia supernovae. This work has benefited from discussions about explosive oxygen burning with Frank Timmes, Broxton Miles, Dean Townsley, Carles Badenes, and Héctor Martínez-Rodríguez. 
Support by the MINECO-FEDER grant AYA2015-63588-P is acknowledged.
--- abstract: 'Effective bounds for the finite number of surjective holomorphic maps between canonically polarized compact complex manifolds of any dimension with fixed domain are proven. Both the case of a fixed target and the case of varying targets are treated. In the case of varying targets, bounds on the complexity of Chow varieties are used.' address: | Ruhr-Universität Bochum\ Fakultät für Mathematik\ D-44780 Bochum\ Germany author: - Gordon Heier title: Effective finiteness theorems for maps between canonically polarized compact complex manifolds --- Effective bounds for automorphism groups {#autsection} ======================================== Hurwitz proved the following effective finiteness theorem on Riemann surfaces. \[Hurbound\] Let $X$ be a smooth compact complex curve of genus $g\geq 2$. Then the group $\operatorname{{Aut}}(X)$ of holomorphic automorphisms of $X$ satisfies $$\#\operatorname{{Aut}}(X)\leq84(g-1).$$ For many years after Hurwitz’s proof, this bound was known to be sharp only for $g=3$ and $g=7$, in which cases there exist, respectively, the classical examples of the Klein quartic in ${{\mathbb{P}}}^2$ given by the homogeneous equation $X^3Y+Y^3Z+Z^3X=0$ and the Fricke curve with automorphism group ${\rm PSL}(2,8)$. Using the theory of finite groups, it was established only in the 1960s by Macbeath that there are infinitely many $g$ for which the above bound is sharp (see [@Macbeath]). Xiao was able to establish the following direct (and clearly sharp due to the above) generalization of Hurwitz’s theorem. Let $X$ be a $2$-dimensional minimal compact complex manifold of general type. Then $$\#\operatorname{{Aut}}(X)\leq(42)^2K_X^2.$$ In arbitrary dimension, the automorphism group of a smooth compact complex manifold of general type is still known to be finite because of the finiteness theorem of Kobayashi-Ochiai ([@KobOch]), which we shall state in the next section. 
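Returning to Hurwitz’s bound, the sharpness claims above can be checked numerically: for the Klein quartic the automorphism group is ${\rm PSL}(2,7)$, and $|{\rm PSL}(2,q)|=q(q^2-1)/\gcd(2,q-1)$. A small illustration:

```python
# Hurwitz's bound #Aut(X) <= 84(g-1), checked against the orders of the
# automorphism groups of the two classical sharp cases mentioned above:
# the Klein quartic (g = 3, Aut = PSL(2,7)) and the Fricke curve
# (g = 7, Aut = PSL(2,8)).
def hurwitz_bound(g):
    assert g >= 2, "the bound applies to genus >= 2"
    return 84 * (g - 1)

def psl2_order(q):
    """|PSL(2,q)| = q(q^2 - 1)/gcd(2, q - 1) for a prime power q."""
    return q * (q * q - 1) // (2 if q % 2 else 1)

print(hurwitz_bound(3), psl2_order(7))   # 168 168
print(hurwitz_bound(7), psl2_order(8))   # 504 504
```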
One is tempted to conjecture that in the case of the canonical line bundle being big and nef or even ample, there is an upper bound of the form $C_nK_X^n$. The preprint [@Ts] makes an attempt to prove this conjecture. In the paper [@Sza], Szabó was able to establish the following effective polynomial upper bound in arbitrary dimension. \[szabobound\] Let $X$ be an $n$-dimensional compact complex manifold whose canonical line bundle is big and nef. Then the number of birational automorphisms of $X$ is no more than $$(2(n+1)(n+2)!(n+2)K_X^n)^{16n3^n}.$$ The multiple $2(n+1)(n+2)!(n+2)K_X$ is large enough to give a birational morphism from $X$ to projective space. This is proven in [@CaSchn page 8], using results of Demailly [@DBound] and Kollár [@Koeffbase] on effective base point freeness of adjoint line bundles. The goal of [@CaSchn] is to obtain a polynomial bound for the special case of automorphism groups that are abelian. In arbitrary dimension, effective pluricanonical (birational) embeddings are essential in proving finiteness statements of the type considered in this paper. They enable us to bring the problem into the context of projective varieties and to establish uniform boundedness. In the case of $K_X$ being ample, the following effective theorem on pluricanonical embeddings is available. \[effpluri\] If $X$ is a compact complex manifold of complex dimension $n$ whose canonical line bundle $K_X$ is ample, then $mK_X$ is very ample for any integer $$\label{pluricaneffbound} m\geq (e+\frac 1 2)n^\frac 7 3+\frac 1 2 n^\frac 5 3 + (e+\frac 1 2)n^\frac 4 3 + 3n+ \frac 1 2 n^\frac 2 3+5,$$ where $e \approx 2.718$ is Euler’s number. From now on, we will set $k=k(n)$ to be the round-up of the effective very ampleness bound given in \[pluricaneffbound\]. To our knowledge, Szabó’s theorem is the one that provides the best bound at this point. 
However, its proof relies on several methods previously introduced by other authors (e.g., see [@HuckSauer]) and uses the classification of finite simple groups in an essential way. In light of this, the much more straightforward method of Howard and Sommese, which was introduced in [@HoSo], still deserves to be noticed. Their method is applicable not only to automorphisms (see next section); it also represents an instance of a proof based entirely on boundedness and rigidity, which, technically speaking, is the main focus of the present paper. Howard and Sommese prove for the case of a canonically polarized manifold that $\#Aut(X)$ is bounded from above by a number which depends only on the Chern numbers of $X$. Based on their result, we now state the following effective finiteness theorem. \[HoSoAutbound\] Let $X$ be a compact complex manifold of dimension $n$ whose canonical line bundle is ample. Then $$\#\operatorname{{Aut}}(X) \leq \left((n+1)^2k^nn!2^{n^2}(2k)^{\frac 1 2 n(n+1)}(1+2kn)^nK_X^n\right)^{(k^nK_X^n+n)^2-1} .$$ Before we prove this theorem, we need to prove two auxiliary propositions which make the method of Howard and Sommese entirely effective. The first proposition will be used to bound the dimension of the target projective space for the pluricanonical embedding given by $kK_X$. It is a standard argument. \[proph0bound\] Let $X$ be an $n$-dimensional compact complex manifold and $L$ a very ample line bundle on $X$. Then $$h^0(X,L)\leq L^n+n.$$ We proceed by induction. The case $n=1$ follows immediately from the Riemann-Roch Theorem. Let $D$ be an effective divisor on $X$ such that $\operatorname{{\mathcal O}}_X(D)=L$. 
One has the standard short exact sequence $$0\to\operatorname{{\mathcal O}}_X\to\operatorname{{\mathcal O}}_X(L)\to\operatorname{{\mathcal O}}_D(L)\to 0.$$ From this exact sequence, we obtain $$h^0(X,L)\leq h^0(X,\operatorname{{\mathcal O}}_X)+h^0(D,\operatorname{{\mathcal O}}_D(L)).$$ By induction, we find that $$h^0(X,L)\leq 1+(L_{|D})^{n-1}+n-1 = L^n+n.$$ Secondly, we use a result of Demailly, Peternell and Schneider in [@DPS] to compute a bound for the Chern class intersection numbers that occur in the well-known formula for the degree of the ($1$-codimensional) dual of a projective variety. Our effective result is the following. \[chernintersection\] Let $X$ be a compact complex manifold of dimension $n$ whose canonical line bundle is ample. Let $k$ again denote the round-up of the constant defined in \[pluricaneffbound\]. Then the following holds for $i=1,\ldots,n$. $$|c_i(\Omega_X^1).K_X^{n-i}|\leq i!2^{in}(2k)^{\frac 1 2 i(i+1)}(1+2kn)^iK_X^n.$$ Recall that $k$ is such that $kK_X$ is very ample. It follows from the Castelnuovo-Mumford theory of regularity that $\Omega_X^1(2kK_X)$ is generated by global sections and therefore nef. We may thus apply [@DPS Corollary 2.6] to obtain $$\begin{aligned} 0&\leq&c_i(\Omega_X^1(2kK_X))K_X^{n-i}\nonumber\\ &\leq& (c_1(\Omega_X^1(2kK_X)))^iK_X^{n-i}\nonumber\\ &=&(c_1(\Omega_X^1)+2knK_X)^iK_X^{n-i}\nonumber\\ &=&(1+2kn)^iK_X^n\label{B}\end{aligned}$$ for $i=1,\ldots,n$. In [@Ful page 56], one finds the formula $$\label{chernclassformula} c_i(\Omega_X^1(2kK_X))=\sum_{\nu=0}^{i}{n-\nu\choose i-\nu}c_\nu(\Omega_X^1)(2kK_X)^{i-\nu},$$ which enables us to prove the Proposition by an induction. The inequality clearly holds in the case $i=1$. 
For $1<i\leq n$, note that it follows from \[chernclassformula\] that $$\begin{aligned} c_i(\Omega_X^1(2kK_X))K_X^{n-i}&=&\left(\sum_{\nu=0}^{i}{n-\nu\choose i-\nu}c_\nu(\Omega_X^1)(2kK_X)^{i-\nu}\right)K_X^{n-i}\\ &=&\sum_{\nu=0}^{i}{n-\nu\choose i-\nu}c_\nu(\Omega_X^1)(2k)^{i-\nu}K_X^{n-\nu}.\end{aligned}$$ Taking absolute values, the triangle inequality yields $$\begin{aligned} &&|c_i(\Omega_X^1)K_X^{n-i}|\\ &\leq&c_i(\Omega_X^1(2kK_X))K_X^{n-i}+\sum_{\nu=0}^{i-1}{n-\nu\choose i-\nu}|c_\nu(\Omega_X^1)(2k)^{i-\nu}K_X^{n-\nu}|\\ &\stackrel{\eqref{B}}{\leq}&(1+2kn)^iK_X^n+\sum_{\nu=0}^{i-1}{n-\nu\choose i-\nu}(2k)^{i-\nu}|c_\nu(\Omega_X^1)K_X^{n-\nu}|\\ &\stackrel{Ind.}{\leq}&(1+2kn)^iK_X^n+\sum_{\nu=0}^{i-1}{n-\nu\choose i-\nu}(2k)^{i-\nu}\nu! 2^{\nu n} (2k)^{\frac 1 2 \nu(\nu+1)}(1+2kn)^\nu K_X^n\\ &\leq&(1+2kn)^iK_X^n+i2^n(2k)^i(i-1)!2^{(i-1)n}(2k)^{\frac 1 2 i(i-1)}(1+2kn)^{i-1}K_X^{n}\\ &\leq&(1+2kn)^iK_X^n+2^n(2k)^{\frac 1 2 i(i+1)}i!2^{(i-1)n}(1+2kn)^{i-1}K_X^{n}\\ &\leq&(1+2kn)^iK_X^n+(2k)^{\frac 1 2 i(i+1)}i!2^{in}(1+2kn)^{i-1}K_X^{n}\\ &\leq&i!2^{in}(2k)^{\frac 1 2 i(i+1)}(1+2kn)^{i}K_X^{n}\quad \text{q.e.d.}\end{aligned}$$ Now that we have all necessary effective tools at our disposal, we can proceed to the proof of Theorem \[HoSoAutbound\]. The proof given in [@HoSo] yields that $$\label{HoSoChern} \#\operatorname{{Aut}}(X)\leq \left(\sum_{j=0}^{n}(-1)^j(n+1-j)(kK_X)^{n-j}c_j(X)\right)^{(h^0(kK_X))^2-1}.$$ Substituting the numerical bounds derived in Propositions \[proph0bound\] and \[chernintersection\] and estimating in an obvious way, we obtain that $$\#\operatorname{{Aut}}(X) \leq\left((n+1)^2k^nn!2^{n^2}(2k)^{\frac 1 2 n(n+1)}(1+2kn)^nK_X^n\right)^{(k^nK_X^n+n)^2-1}.$$ Effective finiteness theorems for maps with a fixed target {#fixed target} ========================================================== For surjective meromorphic maps between compact complex spaces there is the following finiteness theorem due to Kobayashi-Ochiai. 
Let $X$ be any compact complex space and $Y$ a compact complex space of general type. Then the number of surjective meromorphic maps between $X$ and $Y$ is finite. There are no known effective versions of this theorem due to the fact that there are no effective birational embedding theorems for manifolds with merely big canonical line bundle in higher dimensions. The case of $X$ and $Y$ being smooth compact complex curves has been known for a long time as the Theorem of de Franchis, based on [@deF]. Not surprisingly, there are effective bounds in this case that depend only on the genus $g$ of $X$. However, these bounds are often obtained in the more general case of varying targets (see [@HS], [@Guerra]) or in complete analogy to the higher dimensional case. Somewhat surprisingly, those authors that consider specifically the case of two [*fixed*]{} Riemann surfaces and investigate e.g. the induced homomorphisms on the first homology groups (as e.g. in [@tanabe]) do not seem to be able to do much better numerically than those who consider more general situations. All bounds are exponential in $g$, and the question of the true nature of the dependence on $g$ seems to be completely open. Since maps between fixed Riemann surfaces seem to be closer in spirit to automorphisms than to the case of varying targets, where the bound is not polynomial (see next section), and based on some other preliminary evidence, we venture the following conjecture. \[deFranchisConj\] There is a polynomial function $B(g)$ with the following property. For two fixed smooth compact complex curves $X$ and $Y$ of genus at least $2$ with the genus of $X$ equal to $g$, the number of surjective holomorphic maps from $X$ to $Y$ is no more than $B(g)$. As was already indicated in the previous section, the method of Howard and Sommese for automorphism groups can also be used to obtain a bound for the number of maps between any two fixed canonically polarized manifolds. 
The details of this straightforward generalization can be found in [@BD]. In fact, the bound one arrives at is the same as the expression we already encountered in \[HoSoChern\]. So we simply state the following theorem. \[efffixedtarget\] Let $X$ and $Y$ be fixed compact complex manifolds with ample canonical line bundles. Let $n$ be the dimension of $X$. Then the number of surjective holomorphic maps between $X$ and $Y$ is no more than $$\left((n+1)^2k^nn!2^{n^2}(2k)^{\frac 1 2 n(n+1)}(1+2kn)^nK_X^n\right)^{(k^nK_X^n+n)^2-1} .$$ As we move on, we remark that a bound for the above theorem can also be obtained by using the Chow variety method discussed in the next section, since the graph of a surjective holomorphic map $X\to Y$ corresponds to an isolated point in a certain Chow variety of $X\times Y$. However, since this leads in fact to a worse bound, we will not discuss this in detail. Effective finiteness theorems for maps with varying targets =========================================================== The following theorem is often referred to as the Theorem of de Franchis-Severi. Its statement is obtained from the statement of the de Franchis Theorem by allowing the targets $Y$ to vary among smooth compact complex curves of genus at least $2$. \[deFSev\] Let $X$ be a smooth compact complex curve. Then the set of all holomorphic maps $f:X\to Y$, where $Y$ is any (variable) smooth compact complex curve of genus at least $2$, is finite. In [@HS], Howard and Sommese proved that if $X$ is of genus $g$, the number of holomorphic maps in Theorem \[deFSev\] modulo automorphisms of the target spaces is no more than $$\label{HSbound} \left(\frac 1 2 (2\sqrt{6}(g-1)+1)^{2+2g^2}g^2(g-1)(\sqrt{2})^{g(g-1)}+1\right).$$ We denote this expression by $\mathcal S'(g)$. 
Since the cardinality of the automorphism group of any one of the targets is at most $\tfrac{1}{2}\cdot 84(g-1)$ due to Hurwitz, one can alternatively say that the number of holomorphic maps in Theorem \[deFSev\] is no more than $$\mathcal S(g):= 42(g-1)\cdot\mathcal S'(g).$$ In their paper, Howard and Sommese apparently overlooked the fact that their technique counts maps only modulo automorphisms of the targets. This fact was observed by Kani in his paper [@Kani]. This oversight can, of course, easily be remedied by adding the factor $42(g-1)$. In Kani’s paper, isomorphism classes of targets are counted instead of maps by means of a “packing argument”. We chose to quote the result of Howard and Sommese because its proof is closer to the point of view taken in the present paper. It is certainly interesting to note that [@Kani §4] exhibits a relatively straightforward example of a series of Riemann surfaces $X$ that shows that the cardinality of the sets of maps defined in Theorem \[deFSev\] cannot be bounded by a polynomial in $g$. In fact, what is shown is that the number of isomorphism classes of targets in these sets cannot be bounded by a polynomial. Therefore, Kani’s example does not contradict our Conjecture \[deFranchisConj\]. We would like to remark that the statements in [@BD page 802] and [@TsaiIMRN page 110], which say that there is such a contradiction, represent a misinterpretation of Kani’s example. The following conjecture (which is sometimes referred to as Iitaka-Severi Conjecture) represents a generalization of Theorem \[deFSev\]. As we shall see in the proof of Theorem \[section3thm\], the difficulty in proving it lies in the fact (and only in the fact) that there are no uniform birational embedding theorems for manifolds with big canonical line bundle in higher dimensions (not even ineffective ones). Let $X$ be a compact complex manifold. 
Then the number of birational equivalence classes of compact complex manifolds of general type having a member $Y$ for which there exists a dominant rational map from $X$ to $Y$ is finite. Our result in this section is the following effective generalization of Theorem \[deFSev\]. \[section3thm\] Let $X$ be an $n$-dimensional compact complex manifold whose canonical line bundle $K_X$ is ample. Then the number $\mathcal F(X)$ of surjective holomorphic maps $f:X\to Y$, where $Y$ is any $n$-dimensional compact complex manifold with ample canonical bundle, is no more than $$2^nk^nK_X^n\cdot{(N+1)\cdot 2^nk^nK_X^n\choose N}^{(N+1)(2^nk^nK_X^n{2^nk^nK_X^n+n-1\choose n}+{2^nk^nK_X^n+n-1\choose n-1})},$$ where $$N=(k^nK_X^n+n)^2-1,$$ and $k=k(n)$ is the effective very ampleness bound from [Theorem \[effpluri\]]{}. Although the bound for $\mathcal F(X)$ looks somewhat complicated, its behavior with respect to $n$ and $K_X^n$ is easy to determine using Stirling’s formula. Namely, there exist explicit (exponential) functions $\alpha(n), \beta(n)$ such that $$\mathcal F (X)\leq (\alpha(n)K_X^n)^{\beta(n)(K_X^n)^{n+5}}.$$ In the case of $Y$ being a compact complex surface of general type, effective bounds in the same spirit as our Theorem \[section3thm\] were given by Tsai in [@TsaiJAG] (see also [@TsaiIMRN], [@TsaiCrelle], and [@Guerra]). Ineffective results related to Theorem \[section3thm\] have also been established in [@MDLM], [@Maehara], [@BD]. It is well known that a result of the type of Theorem \[section3thm\] can be proved by what is commonly referred to as a “boundedness and rigidity argument”. The basic idea is to show that the objects in question can be associated to Chow points in a finite (or even effectively finite) number of Chow varieties and that in any irreducible component the Chow points can correspond to at most one of the objects in question. 
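As a side note on the size of the bound just stated, one can evaluate its logarithm numerically. The sketch below transcribes the very-ampleness bound $k(n)$ and the bound of Theorem \[section3thm\] term by term, and evaluates them at the illustrative sample values $n=1$ and $K_X^n=2$ (a genus-2 curve); these sample values are chosen here for illustration and do not appear in the original text:

```python
import math

# k(n): round-up of the very-ampleness bound of Theorem [effpluri],
# transcribed term by term (e is Euler's number).
def k_very_ample(n):
    e = math.e
    return math.ceil((e + 0.5) * n ** (7 / 3) + 0.5 * n ** (5 / 3)
                     + (e + 0.5) * n ** (4 / 3) + 3 * n
                     + 0.5 * n ** (2 / 3) + 5)

def log10_binom(a, b):
    """log10 of the binomial coefficient C(a, b), via lgamma."""
    return (math.lgamma(a + 1) - math.lgamma(b + 1)
            - math.lgamma(a - b + 1)) / math.log(10)

# Illustrative sample values: n = 1 and K_X^n = 2 (a genus-2 curve).
n, Kn = 1, 2
k = k_very_ample(n)                      # k(1) = 16
d = 2 ** n * k ** n * Kn                 # degree bound 2^n k^n K_X^n
N = (k ** n * Kn + n) ** 2 - 1
expo = (N + 1) * (d * math.comb(d + n - 1, n)
                  + math.comb(d + n - 1, n - 1))
log10_bound = math.log10(d) + expo * log10_binom((N + 1) * d, N)
print(f"k = {k}, log10 of the F(X)-bound ~ {log10_bound:.3g}")
```

Even for this smallest case, the exponent is in the millions, consistent with the super-exponential behavior $\mathcal F(X)\leq(\alpha(n)K_X^n)^{\beta(n)(K_X^n)^{n+5}}$ noted above.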
Then, clearly, the number of the objects in question is no more than the number of relevant irreducible components of the Chow varieties. Keeping this strategy in mind, we now start the proof. Let $f:X\to Y$ be one of the maps under consideration and let $\Gamma_f$ denote its graph. Let $p_1,p_2$ denote the two canonical projections of $X\times Y$. Let $\phi_f$ denote the isomorphism $X\to \Gamma_f, x \mapsto (x,f(x))$. The line bundle $p_1^*(kK_X)\otimes p_2^*(kK_Y)$ is very ample on $X\times Y$ and embeds $\Gamma_f\subset X\times Y$ into $$\begin{aligned} X\times {{\mathbb{P}}}^{h^0(Y,kK_Y)-1}\hookrightarrow{{\mathbb{P}}}^{h^0(X,kK_X)\cdot h^0(Y,kK_Y)-1}.\end{aligned}$$ Due to Proposition \[proph0bound\] and the fact that $K_X^n\geq K_Y^n$ (see below), we can assume $\Gamma_f$ to be embedded into $$X\times{{\mathbb{P}}}^{N_1}\hookrightarrow {{\mathbb{P}}}^N$$ with $$N_1 := k^nK_X^n+n-1$$ and $$N := (k^nK_X^n+n)^2-1.$$ The degree of $\Gamma_f$ (measured in ${{\mathbb{P}}}^N$) can be estimated as follows: $$\begin{aligned} \deg(\Gamma_f)&=&\int_{\Gamma_f}(p_1^*c_1(kK_X)+p_2^*c_1(kK_Y))^n\\ &=&\int_X\phi_f^*(p_1^*c_1(kK_X)+p_2^*c_1(kK_Y))^n\\ &=&\int_X (c_1(kK_X)+f^*c_1(kK_Y))^n\\ &\leq&\int_X (2c_1(kK_X))^n\\ &=&2^nk^nK_X^n.\end{aligned}$$ Note that the inequality is due to the fact that $K_X=f^*K_Y+D$, where $D$ is an effective divisor and $K_X$ and $f^*K_Y$ are ample, whence $$\begin{aligned} K_X^{n-j}\left(f^*K_Y\right)^j&=&K_X^{n-j-1}\left(f^*K_Y+D\right)\left(f^*K_Y\right)^j\\ &=&K_X^{n-j-1}\left(f^*K_Y\right)^{j+1}+K_X^{n-j-1} . D . \left(f^*K_Y\right)^j\\ &\geq& K_X^{n-j-1}\left(f^*K_Y\right)^{j+1}\end{aligned}$$ for $j=0,\ldots,n-1$. In particular, this computation yields $$K_X^n\geq (f^*K_Y)^n\geq K_Y^n,$$ which we used previously. We now come to the rigidity part of the proof. 
In [@TsaiCrelle Corollary 3.2] it is shown that if $\pi:Z\to \Delta$ is a holomorphic family of smooth projective varieties of general type over a disk with $Z_0\cong Y$, there is no surjective holomorphic map $F:X\times\Delta\to Z$ with $F(X\times\{t\})=\pi^{-1}(t)$ unless $Z\cong Y\times \Delta$ and $F(\cdot,t)$ is independent of $t$. Now take an irreducible component $I$ of $\operatorname{{Chow}}_{n,d}(X\times{{\mathbb{P}}}^{N_1})$ that contains a point corresponding to one of our graphs $\Gamma_f \subset X\times Y\subset X\times{{\mathbb{P}}}^{N_1}$. According to our previous boundedness considerations, we have $d\leq 2^nk^nK_X^n$. To be able to apply the rigidity property stated above, we need the following parametrization statement. For the details of its proof, we refer to [@Maehara Section 3], noting that our situation is essentially the same as the one treated by Maehara. There is a Zariski-open subset $U\subset I$ such that all Chow points $[\Gamma] \in U$ correspond to surjective holomorphic maps $f_{[\Gamma]}:X\to Y_{[\Gamma]}$ with $Y_{[\Gamma]}\subset{{\mathbb{P}}}^{N_1}$ being an $n$-dimensional projective manifold of general type. Moreover, $U$ contains all Chow points $[\Gamma_f]\in I$ that come from graphs of maps $f:X\to Y$ of the type considered in the statement of the Theorem. Based on this parametrization statement, the above-mentioned rigidity property implies that for $[\Gamma_1],[\Gamma_2] \in U$, we have $f_{[\Gamma_1]}=f_{[\Gamma_2]}$, i.e. the number $\mathcal F(X)$ is no more than the number of relevant irreducible components of $\operatorname{{Chow}}_{n,d}(X\times{{\mathbb{P}}}^{N_1})$ for $d=1,\ldots, 2^nk^nK_X^n$. Clearly, only those components of $\operatorname{{Chow}}_{n,d}(X\times{{\mathbb{P}}}^{N_1})$ are relevant whose general points represent irreducible cycles. However, from [@Kollarbook] (and also [@Guerra]), the following proposition is known. 
Let $W\subset {{\mathbb{P}}}^n$ be a projective variety defined by equations of degree no more than $\tilde \delta$. Let $\operatorname{{Chow}}'_{k,\delta}(W)$ denote the union of those irreducible components of $\operatorname{{Chow}}_{k,\delta}(W)$ whose general points represent irreducible cycles. Then the number of irreducible components of $\operatorname{{Chow}}'_{k,\delta}(W)$ is no more than $${(n+1)\max\{\delta,\tilde \delta\}\choose n}^{(n+1)(\delta {\delta+k-1\choose k }+{\delta+k-1 \choose k-1})}.$$ Bounds on the complexity (i.e. the number of irreducible components) of Chow varieties have previously been produced by a number of authors. For example, the problem was extensively studied in Catanese’s [@Ca], and also in the papers of Green-Morrison ([@GM]) and Tsai ([@TsaiIMRN]). A new approach to handling Chow varieties of $1$-dimensional cycles is introduced in [@heiereffshaf]. Since the degrees of the defining equations of $X\times {{\mathbb{P}}}^{N_1}\subset {{\mathbb{P}}}^N$ under the Segre embedding are no more than $k^nK^n_X$, we conclude that our cardinality $\mathcal F(X)$ can be estimated from above by $$\begin{aligned} &&\sum_{d=1}^{2^nk^nK_X^n} \# \text{ of irreducible components of }\operatorname{{Chow}}'_{n,d}(X\times{{\mathbb{P}}}^{N_1})\\ &\leq&2^nk^nK_X^n\cdot{({N}+1)\cdot 2^nk^nK_X^n\choose {N}}^{({N}+1)(2^nk^nK_X^n{2^nk^nK_X^n+n-1\choose n}+{2^nk^nK_X^n+n-1\choose n-1})}.\end{aligned}$$ We remark that the nonequidimensional case (i.e. $\dim X > \dim Y$) can be reduced to our Theorem \[section3thm\] by taking hyperplane sections. We shall express this fact as follows. \[nonequidimcase\] If we take the targets $Y$ in [Theorem \[section3thm\]]{} to be $n'$-dimensional with $n-n'>0$, then the analogous cardinality $\mathcal F_{n'}(X)$ is no more than the number obtained when replacing $K_X^n$ with $((n-n')k+1)^{n'}k^{(n-n')}K_X^n$ in the bound obtained in [Theorem \[section3thm\]]{}. 
We keep the notation from the proof of Theorem \[section3thm\]. For a generic hyperplane section $X\cap H$ of $X$ in ${{\mathbb{P}}}^{N_1}$, the restriction of the maps in question to $X\cap H$ is still surjective (for an easy proof of this fact see [@MDLM]). Therefore, after taking $n-n'$ general hyperplane sections, we obtain an $n'$-dimensional submanifold $\tilde X$ to which we can apply Theorem \[section3thm\]. An $(n-n')$-fold iteration of the adjunction formula yields $$\begin{aligned} K_{\tilde X}^{n'}&=&(\frac 1 k \mathcal O(1)|_{\tilde X}+(n-n')\mathcal O(1)|_{\tilde X})^{n'}\\ &=&((n-n')+\frac 1 k)^{n'}k^nK_X^n\\ &=&((n-n')k+1)^{n'}k^{(n-n')}K_X^n.\end{aligned}$$ The strategy of an effective boundedness and rigidity proof can be used in a number of similar settings. For example, in the paper [@heiereffshaf], a uniform effective bound is established for the finiteness statement of the Shafarevich Conjecture over function fields (Theorem of Parshin-Arakelov). The arguments in that paper are more delicate due to the more complicated situation (one has to deal with moduli maps instead of maps of the form $f:X\to Y$), but the underlying principle is essentially the same. It is a great pleasure to thank Professor Yum-Tong Siu for many invaluable discussions on (effective) algebraic geometry in general and finiteness theorems of the type discussed in the present paper in particular. These discussions took place while I enjoyed the generous hospitality of the Mathematics Department of Harvard University and the Institute of Mathematical Research at the University of Hong Kong. It is with sincere gratitude that I acknowledge support through the Schwerpunktprogramm “Globale Methoden in der komplexen Geometrie” of the Deutsche Forschungsgemeinschaft through the chair of Professor Alan Huckleberry at Bochum University. [MDLM82]{} T. Bandman and G. Dethloff. Estimates of the number of rational mappings from a fixed variety to varieties of general type. 
, 47(3):801–824, 1997. F. Catanese. Chow varieties, [H]{}ilbert schemes and moduli spaces of surfaces of general type. , 1(4):561–595, 1992. F. Catanese and M. Schneider. Polynomial bounds for abelian groups of automorphisms. , 97(1-2):1–15, 1995. Special issue in honour of Frans Oort. J.-P. Demailly. A numerical criterion for very ample line bundles. , 37(2):323–374, 1993. M. de Franchis. Un teorema sulle involuzioni irrazionali. , 36:368, 1913. J.-P. Demailly, Th. Peternell, and M. Schneider. Compact complex manifolds with numerically effective tangent bundles. , 3(2):295–345, 1994. W. Fulton. , volume 2 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics \[Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics\]*]{}. Springer-Verlag, Berlin, second edition, 1998. M. Green and I. Morrison. The equations defining [C]{}how varieties. , 53(3):733–747, 1986. L. Guerra. Complexity of [C]{}how varieties and number of morphisms on surfaces of general type. , 98(1):1–8, 1999. G. Heier. Effective freeness of adjoint line bundles. , 7:31–42, 2002. (arXiv:math.AG/0108118) G. Heier. Uniformly effective [S]{}hafarevich [C]{}onjecture on families of hyperbolic curves over a curve with prescribed degeneracy locus. arXiv:math.AG/0311085. A. Howard and A. J. Sommese. On the orders of the automorphism groups of certain projective manifolds. In [*Manifolds and Lie groups (Notre Dame, Ind., 1980)*]{}, pages 145–158. Birkhäuser Boston, Mass., 1981. A. Howard and A. J. Sommese. On the theorem of de [F]{}ranchis. , 10(3):429–436, 1983. A. T. Huckleberry and M. Sauer. On the order of the automorphism group of a surface of general type. , 205(2):321–329, 1990. A. Hurwitz. Über algebraische [G]{}ebilde mit eindeutigen [T]{}ransformationen in sich. , 41:403–442, 1893. E. Kani. Bounds on the number of nonrational subfields of a function field. , 85(1):185–198, 1986. S. Kobayashi and T. Ochiai. 
Meromorphic mappings onto compact complex spaces of general type. , 31(1):7–16, 1975. J. Koll[á]{}r. Effective base point freeness. , 296(4):595–605, 1993. J. Koll[á]{}r. . Springer-Verlag, Berlin, 1996. A. M. Macbeath. On a theorem of [H]{}urwitz. , 5:90–96, 1961. K. Maehara. A finiteness property of varieties of general type. , 262(1):101–123, 1983. M. Martin-Deschamps and R. Lewin-M[é]{}n[é]{}gaux. Surfaces de type général dominées par une variété fixe. , 110(2):127–146, 1982. E. Szab[ó]{}. Bounding automorphism groups. , 304(4):801–811, 1996. M. Tanabe. A bound for the theorem of de [F]{}ranchis. , 127(8):2289–2295, 1999. I-Hsun Tsai. Dominant morphisms on surfaces of general type modulo biholomorphic equivalence. , (3):101–111, 1997. I-Hsun Tsai. Dominating the varieties of general type. , 483:197–219, 1997. I-Hsun Tsai. Chow varieties and finiteness theorems for dominant maps. , 7(4):611–625, 1998. H. Tsuji. Bound of automorphisms of projective varieties of general type. arXiv:math.AG/0004138. Gang Xiao. Bound of automorphisms of surfaces of general type. [I]{}. , 139(1):51–77, 1994. Gang Xiao. Bound of automorphisms of surfaces of general type. [II]{}. , 4(4):701–793, 1995.
--- abstract: 'In the framework of the search for dark matter in galactic halos in the form of massive astrophysical compact halo objects (MACHOs), we discuss the status of microlensing observations towards the Magellanic Clouds and the Andromeda galaxy, M31. The detection of a few microlensing events has been reported, but an unambiguous conclusion on the halo content in the form of MACHOs has not yet been reached. A more detailed modelling of the expected signal and a larger statistics of observed events are mandatory in order to shed light on this important astrophysical issue.' author: - 'S. Calchi Novati' title: Microlensing in Galactic Halos --- Introduction ============ Gravitational microlensing, as first noted in [@ref:pacz86], is a very efficient tool for the detection and the characterisation of massive astrophysical compact halo objects (MACHOs), a possible component of dark matter halos. Following the first exciting detections of microlensing events [@ref:macho93; @ref:eros93; @ref:ogle93], by now the detection of $\sim 30$ events has been reported towards the Magellanic Clouds and our nearby galaxy, M31, and first interesting conclusions on this issue have been reported (Section \[sec:LMC\] and Section \[sec:M31\]). Soon enough, however, the Galactic bulge proved to be a most interesting target as well [@ref:pacz91], and indeed by now the number of observed microlensing events along this line of sight exceeds by two orders of magnitude that observed towards the Magellanic Clouds and M31. In that case the contribution from the dark matter halo is expected to be extremely small compared to that of either bulge or disc (faint) stars [@ref:griest91]. Microlensing searches towards the Galactic bulge are therefore important as they allow one to constrain the inner Galactic structure [@ref:pacz94]. Recently, the MACHO [@ref:popowski05], OGLE [@ref:sumi06] and EROS [@ref:hamadache06] collaborations presented the results of their observational campaigns towards this target. 
A remarkable conclusion is the agreement among these different searches on the observed value of the optical depth, and the agreement of the measurements with the theoretical expectations [@ref:evans02; @ref:hangould03]. For a more recent discussion see also [@ref:novati07], where the issue of the bulge mass spectrum is treated. The microlensing quantities {#sec:ml} =========================== Microlensing events are due to a lensing object passing near the line of sight towards a background star. Because of the event configuration, the observable effect during a microlensing event is an apparent transient amplification of the star’s flux (for a review see e.g. [@ref:roulet97]). The *optical depth* is the instantaneous probability that at a given time a given star is amplified enough to give rise to an observable event. This quantity is the probability of finding a lens within the “microlensing tube”, a tube around the line of sight of (variable) radius equal to the *Einstein radius*, $R_\mathrm{E}=\sqrt{4G\mu_l/c^2\, D_l D_{ls}/D_s}$, where $\mu_l$ is the lens mass, $D_l,\,D_s$ are the distances to the lens and to the source, respectively, and $D_{ls}=D_s-D_l$. The optical depth reads $$\label{eq:tau} \tau = \frac{4\pi G D_s^2}{c^2}\int_{0}^{1} \mathrm{d}x \rho(x) x(1-x)\,,$$ where $\rho$ is the *mass* density distribution of lenses and $x\equiv D_l/D_s$. The optical depth provides valuable information on the overall density distribution of the lensing objects, but it cannot be used to further characterise the events; in particular, it does not depend on the lens mass. This is because lighter (heavier) objects are, for a given total mass of the lens population, more (less) numerous, but their lensing cross section is smaller (larger), and the two effects cancel out. The optical depth turns out to be an extremely small quantity, of order of magnitude $\sim 10^{-6}$. This implies that one has to monitor extremely large sets of stars to achieve reasonable statistics. 
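As a back-of-the-envelope check of this order of magnitude, one can evaluate the optical depth integral for a toy cored isothermal model of the Galactic halo along the LMC line of sight. All numerical parameters below (local halo density, core radius, solar galactocentric distance, LMC distance and Galactic coordinates) are illustrative assumptions, not values taken from this paper.

```python
import math

# Toy evaluation of the optical depth for a cored isothermal Galactic
# halo, line of sight towards the LMC.  Parameter values are illustrative.
G_OVER_C2 = 4.785e-14        # G/c^2 in pc / M_sun
RHO0 = 7.9e-3                # local halo density, M_sun / pc^3 (assumed)
A, R0, D_S = 5.0e3, 8.5e3, 5.0e4   # core radius, R_sun, LMC distance (pc)
COS_TERM = math.cos(math.radians(-32.9)) * math.cos(math.radians(280.5))

def rho(d):
    """Cored isothermal halo density at distance d (pc) along the l.o.s."""
    r2 = R0**2 + d**2 - 2.0 * R0 * d * COS_TERM   # galactocentric radius^2
    return RHO0 * (R0**2 + A**2) / (r2 + A**2)

def tau(n_steps=10_000):
    """tau = 4 pi G D_s^2 / c^2 * integral of rho(x) x (1-x) dx over [0,1]."""
    h = 1.0 / n_steps
    s = sum(rho((i + 0.5) * h * D_S) * (i + 0.5) * h * (1.0 - (i + 0.5) * h)
            for i in range(n_steps)) * h          # midpoint rule
    return 4.0 * math.pi * G_OVER_C2 * D_S**2 * s
```

With these (assumed) parameters `tau()` comes out at roughly $5\times10^{-7}$, in line with the $\sim 10^{-6}$ order of magnitude quoted above.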
The experiments measure the number of events and their characteristics, in particular their durations. To evaluate these quantities one makes use of the microlensing *rate*, which expresses the number of lenses that pass through the volume element of the microlensing tube $\mathrm{d}^3x$ in the time interval $\mathrm{d}t$ for a given lens number density distribution $n(\vec{x})$ and velocity distribution $f(\vec{v})$ $$\label{eq:rate} \mathrm{d} \Gamma = \frac{n_l\,\mathrm{d}^3 x}{\mathrm{d}t} \times f(\vec{v}_l) \mathrm{d}^3 v_l\,.$$ The volume element of the microlensing tube is $\mathrm{d}^3 x=(\vec{v}_{r\bot} \cdot \hat{\vec{n}}) \mathrm{d}t \mathrm{d}S$. Here $\mathrm{d}S=\mathrm{d}l\mathrm{d}D_l$ is the portion of the tube external surface, with $\mathrm{d}l=u_t R_\mathrm{E} \mathrm{d}\alpha$, where $u_t$ is the maximum impact parameter, $\vec{v}_{r}$ is the lens velocity relative to the microlensing tube and $\vec{v}_{r\bot}$ its component in the plane orthogonal to the line of sight, and $\hat{\vec{n}}$ is the unit vector normal to the tube inner surface at the point where the microlensing tube is crossed by the lens. The velocity of the lenses entering the tube is $\vec{v}_l=\vec{v}_r+\vec{v}_t$, where $\vec{v}_t$ is the tube velocity. The differential rate is directly related to the number of expected microlensing events through $\mathrm{d}N=N_\mathrm{obs} T_\mathrm{obs} \mathrm{d}\Gamma$, where $N_\mathrm{obs}$ and $T_\mathrm{obs}$ are the number of monitored sources and the whole observation time, respectively. Furthermore, the distribution of the durations of the microlensing events, the *Einstein time* $t_\mathrm{E}=R_\mathrm{E}/v_{r\bot}$, can also be deduced from the differential microlensing rate, as $\mathrm{d}\Gamma/\mathrm{d}t_\mathrm{E}$. Besides the lens mass, the key quantity one is usually interested in, $t_\mathrm{E}$ depends also on other, usually unobservable, quantities. 
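The Einstein radius and Einstein time defined above can be evaluated directly. A minimal sketch, with illustrative lens parameters (a $0.4~\mathrm{M}_\odot$ halo lens in front of an LMC source, transverse velocity $200$ km/s; these numbers are assumptions for the example, not fits from the text):

```python
import math

# Einstein radius and Einstein time for a single point lens.
G_OVER_C2 = 4.785e-14      # G/c^2 in pc / M_sun
PC_KM = 3.086e13           # km per parsec

def einstein_radius_pc(m_lens, d_l, d_s):
    """R_E = sqrt(4 G m/c^2 * D_l D_ls / D_s); distances in pc, mass in M_sun."""
    return math.sqrt(4.0 * G_OVER_C2 * m_lens * d_l * (d_s - d_l) / d_s)

def einstein_time_days(m_lens, d_l, d_s, v_perp_kms):
    """t_E = R_E / v_perp, with the transverse lens velocity in km/s."""
    r_e_km = einstein_radius_pc(m_lens, d_l, d_s) * PC_KM
    return r_e_km / v_perp_kms / 86400.0
```

For example, `einstein_time_days(0.4, 1.0e4, 5.0e4, 200.0)` gives $t_\mathrm{E}\approx 44$ days, of the order of the durations typically reported towards the LMC.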
It is therefore suitable to observe a large enough number of events so as to be able to deal statistically with the degeneracies intrinsic to the parameter space of microlensing events. Finally, note that, in calculating the microlensing quantities, the optical depth and the rate, one can also take into account the source spatial and velocity distributions. Microlensing towards the Magellanic Clouds {#sec:LMC} ========================================== ![Top left: projection on the sky plane of the column density of the LMC disc and bar. The numerical values on the contours are in $10^7~\mathrm{M}_\odot~\mathrm{kpc}^{-2}$ units. The three innermost contours correspond to 10, 20 and $30\times 10^7~\mathrm{M}_\odot~\mathrm{kpc}^{-2}$. The locations of the MACHO (black stars and empty diamonds) and EROS (triangles) microlensing candidates are shown. The $x-y$ axes are directed towards West and North respectively. From top right to bottom left: contour maps of the optical depth for lenses in the Galactic halo, LMC halo and self lensing, respectively. The numerical values are in $10^{-8}$ units. Also shown, the location of the fields observed by the MACHO collaboration. (Figures adapted from [@ref:mancini04].)[]{data-label="fig:lmc-tau"}](lmc1 "fig:"){width="7cm"} ![](lmc2 "fig:"){width="7cm"} ![](lmc3 "fig:"){width="7cm"} ![](lmc4 "fig:"){width="7cm"} The first surveys aimed at the detection of microlensing events were carried out towards the Large and Small Magellanic Clouds (LMC and SMC, respectively), so as to probe the MACHO content of the Galactic halo. The main results have been obtained by the MACHO [@ref:macho00] and the EROS [@ref:eros07] collaborations. MACHO reported the detection of 13-17 microlensing events towards the LMC, and concluded that a rather significant (mass) fraction of the Galactic halo, $f\sim~20\%$, is made up of dark mass objects of $\sim~0.4~\textrm{M}_\odot$. On the other hand, EROS reported the detection of 1 event towards the SMC and no events towards the LMC, whereas they evaluated, for a full halo of $0.4~\textrm{M}_\odot$ MACHOs, an expected number of microlensing events of $\sim 39$. Correspondingly, EROS put a rather severe *upper* limit on the halo fraction in the form of MACHOs, $f<0.08$ for $0.4~\textrm{M}_\odot$ MACHOs. The disagreement between the results obtained by the MACHO and the EROS collaborations leaves the issue of the halo content in the form of MACHOs open. A first possible issue is the nature of the flux variations reported by the MACHO collaboration. Indeed, microlensing searches are plagued by variable stars (which represent the overwhelming majority of the detected flux variations) masquerading as microlensing events. However, Bennett [@ref:bennet05] performed a new analysis on the MACHO data set, concluding that “\[…\] the main conclusions of the MACHO LMC analysis are unchanged by the variable star contamination\[…\]”. Looking for MACHO events, the second background is constituted by “self-lensing” events, where, besides the source, the lens also belongs to some luminous star population (either in the LMC itself or possibly, along the line of sight, in the Galactic disc). 
This possibility was first addressed in [@ref:sahu94; @ref:wu94] and has been further discussed by several authors (e.g. [@ref:gould95; @ref:gyuk00]). Besides these possible background contaminations, a few aspects of the EROS analysis are worth being mentioned. First, while the fields observed by MACHO towards the LMC are all concentrated around the central region, EROS monitored a much larger region. This alleviates the issue of self lensing but also that of a possible clumpiness of the Galactic halo right along the line of sight towards the LMC (this argument is balanced, however, by the much smaller expected rate in the outer with respect to the inner LMC regions). Second, EROS restricted its analysis to a subsample of bright sources, this choice being motivated by the superior photometric precision of the corresponding light curves (so as to reduce possible contamination from variable stars), and by the possibility of a better understanding of the so-called “blending” effect. The latter issue is of particular relevance in microlensing analyses and concerns the ability to correctly evaluate the source flux, in the absence of amplification, in crowded fields where the observed objects can be, to some extent, blends of several stars. It is worth stressing that a similar approach was crucial in reaching the agreement between the theoretical expectations and the measured values of the optical depth in the case of observations towards the Galactic bulge. ![Scatter plot of the observed values (empty boxes) of the Einstein time and of the expected values of the median duration (filled stars) with respect to the self-lensing optical depth evaluated along the direction of the events. The dashed line at $\tau_\mathrm{SL}=2$ approximately delimits the inner LMC region, where a good agreement is found between the two values for most of the observed events, and the outer region, where the rise in the expected duration is clearly not observed. 
(Figure adapted from [@ref:mancini04].)[]{data-label="fig:lmc-rate"}](lmc-te){width="11cm"} ![Galactic (top) and LMC dark matter halo fraction, median value with 68% CL error, as a function of the MACHO mass. For values in the mass range $(0.1-0.3)~\mathrm{M}_\odot$, preferred for LMC lenses on the basis of an analysis of the event duration and spatial distributions, the LMC halo dark matter fraction turns out to be significantly larger than the Galactic one. (Figure adapted from [@ref:novati06].) []{data-label="fig:lmc-halo"}](lmc-2f){width="9cm"} More recently, new analyses of the MACHO results have been undertaken. In [@ref:jetzer02] it is shown that the observed events are probably distributed among different components (disc, Galactic halo, the LMC halo and self lensing). Taking advantage of a new modelling of both the luminous and the dark components of the LMC [@ref:vdm02], in [@ref:mancini04] the self-lensing issue has been once more addressed considering the set of microlensing events reported by the MACHO collaboration. In Figure \[fig:lmc-tau\] the density profile of the luminous components of the LMC, disc and bar, is shown together with the optical depth profiles for the Galactic halo, the LMC halo and self lensing. Furthermore, through an analysis of the differential microlensing rate it has been shown that self-lensing events cannot contribute to all of the $\sim~10$ observed events. First, the expected number for self lensing turns out to be significantly smaller, about 1-2 events at most. Second (Figure \[fig:lmc-rate\]), for self-lensing events one expects a peculiar signature in the relationship between the event duration and their spatial distribution, with longer durations expected in the outer LMC region (where, correspondingly, the self-lensing optical depth turns out to be smaller). Such a relationship, however, is not observed. 
The same set of MACHO events has also been analysed in [@ref:novati06], where in particular the question of a possible significant contribution to the observed events of lenses belonging to the dark matter halo of the LMC, as opposed to those of the Galactic halo population, has been addressed (this possibility had previously been discussed in [@ref:gould93; @ref:kerins99]). In particular, studying both the spatial and the duration distributions, it is shown that only a fraction of the events have characteristics that match those expected for the latter population, hinting that a population of somewhat lighter, $\sim~0.2~\mathrm{M}_\odot$, LMC halo MACHOs may indeed contribute to the observed events. Challenging the usual assumption of equal halo fractions in the form of MACHOs for the Galactic and the LMC halos, it was then shown (Figure \[fig:lmc-halo\]) that for MACHO masses in the range $(0.1-0.3)~\mathrm{M}_\odot$ the LMC halo mass fraction can be significantly larger than the Milky Way’s, so that up to about half of the observed events could indeed be attributed to the LMC MACHO dark matter halo. Microlensing towards M31 {#sec:M31} ======================== The contradictory results of the microlensing campaigns towards the Magellanic Clouds make it necessary to probe the MACHO distribution along different lines of sight. The Andromeda galaxy, M31, nearby and similar to the Milky Way, is a suitable target for this search [@ref:crotts92; @ref:baillon93; @ref:jetzer94]. First, it allows us to explore the Galactic halo along a different line of sight. Second, it has its own dark matter halo that, as we look at it from outside, can be studied globally. We stress that this is a fundamental advantage with respect to the Magellanic Clouds searches. 
The analysis shows that, as an order of magnitude, for a given MACHO mass and halo fraction, one expects about 3 microlensing events due to MACHOs in the M31 halo for each event due to a MACHO in the Galactic halo (in fact, in the latter case the number of available lenses is enormously smaller, by about 4 orders of magnitude, but this is almost balanced by the much larger value of the Einstein radius). Finally, the high inclination of the M31 disc is expected to produce a strong gradient in the spatial distribution of microlensing events, which can in principle give an unmistakable signature of M31 halo microlensing events. ![Projected on M31, the boundaries of the observed INT fields are shown together with the positions of the microlensing candidates (circles) reported by the POINT-AGAPE collaboration. Note in particular the position of the microlensing event N2, located rather far away from the M31 centre, where the expected self-lensing signal is very low. Note also that S4, the M31-M32 microlensing event, and S5, a possible binary event not included in the selection, are not included in the analysis for the determination of the halo fraction in the form of MACHOs. (Figure adapted from [@ref:novati05].)[]{data-label="fig:int-fields"}](INTevts4){width="8cm"} Compared to the distance to the LMC and the SMC, $\sim 50~\mathrm{kpc}$, the distance to M31, $\sim 770~\mathrm{kpc}$, is larger by more than an order of magnitude. As a consequence, the potential sources of microlensing events are not going to be, as for the Magellanic Clouds, resolved objects. This calls for a dedicated technique, usually referred to as “pixel lensing”, whose key feature is the fact that one monitors flux variations of unresolved objects in each pixel element of the image [@ref:gould96]. 
Although, in principle, *all* stars in the pixel field are possible sources, one can only detect lensing events due either to bright enough stars or to extremely high amplification events (in any case all stars in the pixel field contribute to the overall flux background). As first shown by the AGAPE group [@ref:agape97], the former case is by far the most likely, with, in any case, a number of potential sources per square arcsecond that can easily exceed, in the more crowded regions, a few hundred. With respect to the analyses towards the Magellanic Clouds this means that a huge number of sources is potentially available. However, one must deal with the difficulty that the source flux is not a directly observable quantity. This adds a further degeneracy to the microlensing parameter space; in particular it does not allow one to unambiguously determine the Einstein time from the observed event duration. Instead, what is directly observable is the so-called $t_{1/2}$, the full width at half maximum of the flux variation visible above the background. It turns out that, for self-lensing events as well as for MACHO events in the mass range preferred by the Magellanic Clouds searches, $t_{1/2}$ is of the order of only a few days. This is shorter than the typical durations observed towards the Magellanic Clouds, and this is relevant in the analysis as it makes it easier to distinguish the contaminating background of variable stars from genuine microlensing flux variations. The first convincing detection of a microlensing event towards M31 was reported by the AGAPE group [@ref:agape99]. At the same time, other collaborations have undertaken searches for microlensing towards M31 and the detection of a few more candidate events has been discussed: Columbia-VATT [@ref:crotts96], POINT-AGAPE [@ref:auriere01; @ref:paulin03], SLOTT-AGAPE [@ref:novati02; @ref:novati03], WeCAPP [@ref:riffeser03], MEGA [@ref:mega04], NainiTal [@ref:joshi05]. 
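The relation between the observable $t_{1/2}$ and the (unobservable) Einstein time can be made concrete with the standard point-lens amplification curve. The sketch below is an illustration of the geometry, not the pipeline actually used by the collaborations: it recovers $t_{1/2}$ from an assumed $t_\mathrm{E}$ and impact parameter $u_0$ by numerically inverting the amplification.

```python
import math

def amplification(u):
    """Standard point-lens (Paczynski) amplification A(u)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def t_half(u0, t_e):
    """Full width at half maximum of the flux excess F_s (A - 1).

    u0  : impact parameter in Einstein-radius units
    t_e : Einstein time (result comes out in the same unit)
    """
    a_half = (amplification(u0) + 1.0) / 2.0   # where A - 1 is half its peak
    lo, hi = u0, 100.0                         # A(u) is decreasing in u
    for _ in range(200):                       # bisection on A(u) = a_half
        mid = 0.5 * (lo + hi)
        if amplification(mid) > a_half:
            lo = mid
        else:
            hi = mid
    u_half = 0.5 * (lo + hi)
    return 2.0 * t_e * math.sqrt(u_half * u_half - u0 * u0)
```

In the high-amplification regime ($u_0\ll 1$) this reproduces the analytic limit $t_{1/2}\simeq 2\sqrt{3}\,u_0 t_\mathrm{E}$, which makes explicit why, for faint sources requiring large amplifications, $t_{1/2}$ can be of only a few days even for Einstein times of weeks.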
Beyond the detection of viable microlensing candidates, in order to draw conclusions on the physical issue of the nature of the lenses, and therefore on the halo content in the form of MACHOs, one must develop models able to predict the expected signal. With respect to the Magellanic Clouds searches a few important differences arise. First, as a direct consequence of the unresolved-source issue, for M31 experiments one has to model the luminosity functions of the sources. A related issue is that the intrinsic M31 surface brightness shows a strong gradient moving towards the galaxy centre, and this introduces a spatially dependent noise level one has to take into account in order to correctly predict the level of amplification needed, for a given source magnitude, to give rise to a detectable microlensing event [@ref:kerins01]. The second important difference is the ratio of expected self lensing versus MACHO lensing, which in the case of observations towards M31 is much larger than for observations towards the LMC. This is a consequence of the fact that the M31 luminous components are much more massive than those of the LMC. The exact figure depends on the observed field of view, the MACHO mass and halo fraction (and also on the not-so-well-known self-lensing contribution). However, comparing for instance the MACHO and the POINT-AGAPE analyses (to be discussed below) for full halos of $\sim~0.5~\mathrm{M}_\odot$ MACHOs, this ratio turns out to be *larger*, in the M31 case, by about one order of magnitude. As for the search for the MACHO lensing signal, this expected self-lensing signal therefore constitutes an unwanted background one must be able to deal with and eventually get rid of. On the other hand, a relatively large self-lensing signal is important as it allows one to study the characteristics of the M31 stellar populations. 
A complete analysis of the microlensing signal, observed and expected, has been performed by the POINT-AGAPE [@ref:novati05] and the MEGA [@ref:mega06] collaborations. The two groups shared the same set of data, taken at the 2.5m INT telescope over a period of 4 years (1999-2002), but carried out completely independent analyses. As for the data analysis, in particular, POINT-AGAPE used the so-called “pixel-photometry” [@ref:agape97], while MEGA used the DIA (difference image analysis) photometry [@ref:tomaney96]. Furthermore, the two groups followed different strategies for the determination of the efficiency of their analysis pipeline, this step being the fundamental link between the theoretical predictions and the results of the data analysis. ![The POINT-AGAPE results: Most probable values, upper and lower 95% CL limit for the halo fraction as a function of the MACHO mass. (Figure adapted from [@ref:novati05].)[]{data-label="fig:pa-res"}](pa){width="9cm"} The conclusions of these two experiments turned out to be in disagreement. POINT-AGAPE claims evidence for a MACHO contribution to Galactic halos, whereas MEGA finds its detected signal to be compatible with self lensing. POINT-AGAPE [@ref:novati05] restricted its search to bright microlensing events, and reported the detection of 6 microlensing candidates, for which the possible variable star contamination was thoroughly discussed and eventually discarded. Among these 6, one was found to be located right along the line of sight to M32, an M31 satellite galaxy; it was attributed to an intergalactic M31-M32 microlensing event [@ref:paulin02] and therefore excluded from the analysis of the halo content in form of MACHOs. 
Through a Monte Carlo analysis of the expected signal it was then shown that the expected self-lensing signal, for viable M31 luminous models, was at most $\sim~1$ event, to be compared with $\sim~7$ events expected for full halos (both the Galactic and the M31 one) of $0.5~\mathrm{M}_\odot$ MACHOs. Taking into account the spatial distribution of the observed events (in this respect, the position of one event located rather far away from the M31 centre, Figure \[fig:int-fields\], turned out to be particularly relevant), POINT-AGAPE concluded claiming that “\[…\] the observed signal is much larger than expected from self lensing alone and we conclude, at the 95% confidence level, that at least 20% of the halo mass in the direction of M31 must be in the form of MACHOs if their average mass lies in the range $0.5-1~\mathrm{M}_\odot$ \[…\]” (Figure \[fig:pa-res\]). MEGA [@ref:mega06] identified 14 microlensing candidates, reaching, however, an altogether different conclusion: “\[…\] the observed event rate is consistent with the rate predicted for self-lensing - a MACHO halo fraction of 30% or higher can be ruled out at the 95% confidence level \[…\]”. These contradictory results gave rise to a thorough debate. One of the main points of disagreement is, in fact, the prediction of the expected self-lensing signal and its characterisation with respect to MACHO lensing (e.g. [@ref:kerins01; @ref:baltz05; @ref:riffeser06]). The results of the MEGA experiment have been further analysed in [@ref:ingrosso06; @ref:ingrosso07], where in particular the spatial and duration distributions of the observed events have been considered, with the conclusion that self lensing cannot explain all the reported microlensing candidates. Conclusions =========== Microlensing searches towards the Magellanic Clouds and the Andromeda galaxy, M31, have given, in recent years, first exciting though somewhat contradictory conclusions. 
The detection of microlensing events has been reported; however, their interpretation with respect to the halo dark matter issue is still open to debate. Along both lines of sight, “evidence” for a MACHO signal as well as null results have been reported. (We may stress that, in the framework of galaxy formation theory, even a relatively “small” halo fraction contribution in form of MACHOs, say at the 10%-20% level, may turn out to be relevant). New observational campaigns are currently under way to further address this interesting issue. The SuperMACHO collaboration [@ref:rest05] is observing the LMC with a much larger field of view than previous campaigns. Towards M31, the Angstrom [@ref:kerins06] and the PLAN [@ref:loiano07] campaigns are currently underway. Both, in different ways, aim at the central M31 region, so as to properly characterise the self-lensing signal. Furthermore, as opposed to previous analyses, there is an effort to achieve a more suitable sampling of the observational data so as to allow a better reconstruction of the microlensing event parameters, in particular of the Einstein time. It is a pleasure to thank the organisers of this First Italian-Pakistan Workshop on Relativistic Astrophysics for the warm ambience we enjoyed throughout this interesting meeting. Thanks to Jean Kaplan for carefully reading the manuscript.
--- abstract: 'We present a fair and optimistic [@ben:90; @aso:97] quantum contract signing protocol between two clients that requires no communication with the third trusted party during the exchange phase. We discuss its fairness and show that it is possible to design such a protocol for which the probability of a dishonest client to cheat becomes negligible, and scales as $N^{-1/2}$, where $N$ is the number of messages exchanged between the clients. Our protocol is not based on the exchange of [*signed*]{} messages: its fairness is based on the laws of quantum mechanics. Thus, it is abuse-free [@abuse-free_1], and the clients do not have to generate new keys for each message during the Exchange phase. We discuss a real-life scenario when the measurement errors and qubit state corruption due to noisy channels occur and argue that for real, good enough measurement apparatus and transmission channels, our protocol would still be fair. Our protocol could be implemented by today’s technology, as it requires in essence the same type of apparatus as the one needed for BB84 cryptographic protocol [@bb84]. Finally, we briefly discuss two alternative versions of the protocol, one that uses only two states (based on B92 protocol [@b92]) and the other that uses entangled pairs (based on [@ekert:91]), and show that it is possible to generalize our protocol to an arbitrary number of clients.' author: - 'N. Paunković$^{1}$, J. Bouda$^{2}$ and P. Mateus$^{1}$' title: Fair and optimistic quantum contract signing --- I. Introduction {#sec:introduction} =============== Contract signing [@Even.Yacobi:Relationsamongpublic-1980] is an important security task with many applications, namely to stock market and others [@CSapplications]. It is a two party protocol between Alice and Bob who share a common contract and want to exchange each others’ commitments to it, thus binding to the terms of the contract. Usually, commitment is done by signing the contract on the spot. 
With the development of technology, situations where the parties involved are physically far apart become more relevant every day – distant people can communicate using ordinary mail or e-mail, the internet, etc. This poses new challenges to the problem. Forcing spatially distant parties to exchange signatures opens the possibility of a fraud: Bob may get the commitment from Alice (a copy of the contract with her signature on it) without committing himself, which creates an [*unfair situation*]{}. Indeed, having Alice’s commitment enables Bob to appeal to a judge to bind the contract (i.e., to enforce it; declare it valid), by showing Alice’s commitment to the contract (together with his signature stamped on it). On the other hand, although Alice did commit, she cannot prove it (prove that she sent her commitment to Bob) and thus cannot appeal to a judge. Moreover, she cannot prove that she did not receive Bob’s commitment, so he can safely claim that he behaved honestly and sent his commitment to Alice. Note that initially Bob did not commit, but having Alice’s commitment puts him in a position to [*later in time*]{} choose whether to sign the contract or not, and thus bind it or not, while Alice has no power to do either of the two. This situation is particularly relevant in a stock market, for example, where prices of stocks may vary over time, but agents must commit [*beforehand*]{} to sell/buy at a certain time in the [*future*]{}, for previously fixed prices. The unfairness allows Bob to make risky decisions in the stock market without being bound to them, unless he decides so. The problem when distant parties wish to commit to a common contract lies in the impossibility for an agent, say Alice, to prove whether she has indeed committed to it or not. 
Often, the contract signing problem is said to be a variant of the so-called [*return receipt*]{} (or [*certified mail*]{}) [*problem*]{}, in which the parties involved exchange mails, asking for proof of whether the other side received the message or not. A simple solution to this unfair situation is to have a trusted third party (usually referred to as Trent) mediating the transaction: Alice and Bob send their commitments to Trent, who then returns the receipts to the senders, and performs the message exchange [*only*]{} upon receiving both commitments. However, Trent’s time and resources are expensive and should be avoided as much as possible. Unfortunately, it has been shown that there is no fair and viable contract signing protocol [@Even.Yacobi:Relationsamongpublic-1980; @fis:lyn:pat:85], unless during the signature exchange phase the signing parties communicate with a common trusted agent, i.e., Trent. By a [*fair*]{} protocol we mean that either both parties get each other’s commitment or neither does. By a [*viable*]{} protocol we mean that, if both parties behave honestly, they will both get each other’s commitments. The essence of the proof of the above impossibility result is rather simple, and is related to the impossibility of establishing distributed consensus in asynchronous networks [@fis:lyn:pat:85]. The simple assumption we need for the proof is the following: the protocol consists of a number of messages exchanged between the two parties, so that eventually, upon the termination of the protocol, both Alice and Bob acquire the signature of the other. This can be done by either sending pieces of signatures in each message, or, in more sophisticated scenarios, by sending partial information needed to calculate, upon running a complex algorithm, the signature needed [@even:82; @even:goldreich:lempel:85]. 
We see that if such a protocol existed, it would have a final step where one message is exchanged, say, from Bob to Alice. In that case, before sending his last message, Bob would already have all the information required for him to compute Alice’s signature of the contract (while Alice does not). Therefore, if he does not send the last message, the protocol reaches an unfair state. Note the essential importance of asynchronicity – it is not possible for [*distant*]{} parties to arrange in advance that messages are sent [*simultaneously*]{}. One way to get around this difficulty is to consider [*optimistic*]{} protocols that do not require communication with Trent unless something wrong comes up (some message is missing, etc.) [@aso:97]. In such protocols, the clients contact Trent regarding the given contract before the actual signing takes place. Trent notes the request and assigns the particular contract to the clients, this way [*initializing*]{} the signing protocol. After that, the clients [*exchange*]{} messages between each other such that, if the protocol is executed correctly by both sides, both will end up with signed messages. In case something goes wrong (a message not conforming to the protocol is sent, or communication is interrupted), Trent is contacted in order to [*bind*]{} the contract. Another workaround is to relax the fairness condition probabilistically. [*Probabilistic fairness*]{} allows one agent to have at most $\varepsilon$ more probability of binding the contract over the other agent, at each step of the protocol. In this case, for an arbitrarily small $\varepsilon$, classical solutions have been found where the number of exchanged messages between the agents is minimized [@rabin:83; @ben:90]. In addition to being (probabilistically) fair, in the protocol by Rabin [@rabin:83] the joint probability that one agent can bind the contract, while the other cannot, is also always smaller than a given $\varepsilon$. 
Moreover, the second protocol by Ben-Or [*et al.*]{} [@ben:90] satisfies an even stronger condition: the conditional probability that an agent cannot bind the contract, given that the other can, can be made arbitrarily small. Note that the two notions are not exclusive: the protocol [@ben:90] is both fair and optimistic. In this paper, we present a (probabilistically) [*fair*]{} contract signing protocol where [*no information*]{} is exchanged with a trusted third party (Trent) during the exchange phase. This way, it avoids possible communication bottlenecks that are otherwise inherent when involving Trent. Information exchange takes place during the initialization phase and possibly later during the (contract) binding phase (the protocol is [*optimistic*]{} [@aso:97]: Trent is rarely asked to bind the contract because, due to protocol fairness, cheating does not pay off). Unlike previous classical proposals, in our quantum protocol the messages exchanged between the clients (Alice and Bob) during the exchange phase do [*not*]{} have to be [*signed*]{}. Consequently, our protocol is abuse-free [@abuse-free_1]: a client has no proof that (s)he communicated with the other client, attempting to sign a [*given*]{} contract. In our protocol only two signed messages are exchanged. This is very important when one wants to achieve unconditional security. In the case of classical protocols, digital pseudo-signatures [@Chaum.Roijakkers-Unconditionally-SecureDigitalSignatures-1991] should be used, where the key is single-use and expensive to generate. Finally, our protocol can be used in solving some purely quantum protocols involving timely decisions between spatially distant parties, such as the case of simultaneous dense coding and teleportation [@daowen:11]. In classical cryptography the contract exchange is done in such a way that the respective participants learn some information (signed message, etc.) bit by bit, thus increasing their knowledge. 
In order to bind the contract they have to present the (complete) information to Trent. Our approach is somewhat different, and is based on the laws of quantum physics. Quantum systems obey the laws of quantum physics, which exhibit some counterintuitive features that are quite distinct from those of classical physics. The principles of quantum superposition, entanglement and interference, to name just a few, have found numerous applications in the growing field of quantum information and computation, bringing about many advantages of quantum-based information processing protocols over those using classical systems only. Quantum cryptography [@bb84; @qcryptography-security] guarantees secure communication, and devices based on its principles can already be bought on the market today; Shor’s algorithm for factoring [@Shor] is exponentially faster than any known classical counterpart; the use of entanglement can in some cases of distributed information processing protocols considerably decrease the amount of information exchange between distant parties needed to perform a given task, thus decreasing the (quantum) communication complexity of the problem [@complexity], and can even eliminate any need of communication that is classically necessary for achieving a goal of common interest of separated parties [@pseudo-telepathy] (a feature often referred to in the literature as [*pseudo-telepathy*]{}). For an overview of the field of quantum information and computation, see for example [@Nielsen]. In solving the contract signing problem, we use quantum complementarity, a consequence of which is the impossibility of unambiguously discriminating between two quantum states unless they are mutually orthogonal [@ivanovic; @helstrom]. This fact is at the heart of the security of quantum cryptographic protocols. 
In particular, as the setups of the famous BB84 quantum key distribution protocol [@bb84] and of our contract signing protocol have conceptual similarities, the security of the former is closely related to the fairness of the latter. At the end, we show an alternative version of our contract signing protocol that uses quantum entanglement, a non-local version of complementarity[^1]. Entanglement provides the possibility of replacing the exchange of classical information with Trent during the exchange phase by establishing the [*proper*]{} type of correlations between the distant parties, in a similar fashion as in Ekert’s key distribution protocol that uses entangled states [@ekert:91]. The very same mechanism of commitment to one specific choice can be used to establish, e.g., a bit commitment protocol. Note that unconditionally secure bit commitment is not possible without Trent [@Mayers-Uncon_secur_quant:1996; @Lo+Chau-quant_commi_reall:1997], although it is realizable using other assumptions as well. The paper is organized as follows. In the next section we present the quantum contract signing protocol that requires no communication with Trent during the exchange phase, and is optimistic and fair. In Section III we present the conditions we require for the protocol to be fair. In Section IV we discuss its fairness for the case of ideal measurements restricted to only two alternative single-particle observables. In Section V we analyze the case of general multi-particle measurements (POVMs) and argue that under the assumption of noisy channels and realistic detectors with error rates, it is still possible to design a fair protocol. We also discuss alternative protocols that, instead of four states, use either two non-orthogonal states, or entangled pairs. Finally, we present a generalization to the case of more than two agents. In the Conclusions we present a short overview of the results and some possible future lines of research. II. 
The protocol {#sec:protocol} ================ In order for it to be fair, any contract signing protocol has to force a client to make [*only one*]{} out of two possible choices - accept or reject the contract. A conceptually similar situation occurs in quantum physics, where an observer cannot simultaneously measure two complementary observables, say the position and the velocity of a quantum system. In other words, an observer is forced to choose to measure [*only one*]{} out of two possible observables and gain information about only one out of two physical properties of the system observed. For example, by measuring the exact position of a quantum particle, we are left with complete uncertainty about its velocity, and vice versa (Heisenberg uncertainty relations). This basic feature of quantum physics is the essential ingredient of our protocol: instead of sending to Trent the information that explicitly states the acceptance or rejection of the contract, Alice reveals her choice by measuring one of the two complementary observables, thus acquiring information about only one of the two possible features of the system given to her by Trent (and the same for Bob). Gaining information about one feature thus corresponds to the acceptance, while acquiring information about the other corresponds to the rejection of the contract. This information can later be used as a proof of a client’s choice. As a client’s measurement is local, no information is exchanged between a client and Trent during the exchange phase. Only later, during the possible binding phase, this (classical) information obtained by a client’s measurement is confronted with Trent’s (classical) information about the quantum state in which he prepared the quantum system distributed to a client, and thus used as verification of a client’s choice. To ensure timely decisions, Trent provides Alice with the classical information about the quantum state in which Bob’s quantum system is prepared, and vice versa. 
This way, the clients can confront each others’ measurement results with the classical data provided by Trent, thus obtaining each others’ commitment choices before a certain fixed moment in time. Since quantum mechanics is essentially a probabilistic theory, the clients are supplied with a number of systems, giving rise to the probabilistic fairness of the protocol. In our protocol, we use the simplest two-dimensional quantum systems, called qubits. The complementary observables could be seen as spin components (for electrons), or linear polarizations (for photons), along two mutually orthogonal axes. We will denote the two observables measured on single qubits as [*the Accept*]{} observable $\hat{A}$ and [*the Reject*]{} observable $\hat{R}$. Measuring $\hat{A}$ corresponds to the acceptance, while measuring $\hat{R}$ corresponds to the rejection of the contract. The two observables $\hat{A}$ and $\hat{R}$ are required to be mutually complementary and are given by mutually unbiased bases [@ivanovic] $\mathcal{B}_A = \{ |0\rangle, |1\rangle \}$ ([*the Accept*]{} basis) and $\mathcal{B}_R = \{ |-\rangle, |+\rangle \}$ ([*the Reject*]{} basis), respectively, such that $|\pm\rangle=(|1\rangle\pm |0\rangle)/\sqrt{2}$. Both observables have the same eigenvalues, $0$ and $1$, such that $$\begin{aligned} \label{observables} \hat{A} & = & 1\cdot |1\rangle\langle 1| + 0\cdot |0\rangle\langle 0|, \nonumber \\ \hat{R} & = & 1\cdot |+\rangle\langle +| + 0\cdot |-\rangle\langle -|. \end{aligned}$$ During the Initialization phase, Trent randomly prepares qubits, each in one of the four states $\{ |0\rangle,|1\rangle,|-\rangle,|+\rangle \}$, taken from the Accept or the Reject basis. 
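The mutual unbiasedness of $\mathcal{B}_A$ and $\mathcal{B}_R$, which underlies everything that follows, can be checked numerically. A minimal sketch (not part of the protocol; it only verifies the observables above and the overlap condition $|\langle a|r\rangle|^2 = 1/2$ for every pair of basis vectors):

```python
import numpy as np

# Accept basis: computational basis; Reject basis: Hadamard basis.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketm = (ket1 - ket0) / np.sqrt(2.0)   # |->
ketp = (ket1 + ket0) / np.sqrt(2.0)   # |+>

# Observables with eigenvalue 1 on |1> / |+> and 0 on |0> / |->.
A_hat = np.outer(ket1, ket1)
R_hat = np.outer(ketp, ketp)

# Mutual unbiasedness: every Accept/Reject overlap has |<a|r>|^2 = 1/2,
# so measuring A on a Reject-basis state yields a uniformly random outcome.
for a in (ket0, ket1):
    for r in (ketm, ketp):
        assert abs(abs(a @ r) ** 2 - 0.5) < 1e-12
```

The two projectors do not commute, so no measurement can reveal both the Accept and the Reject "feature" of the same qubit, which is exactly what forces a client into a single choice.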
Thus, each qubit state $|\psi\rangle \in \{ |0\rangle,|1\rangle,|-\rangle,|+\rangle \}$ is defined by two classical bits $C=(C_b,C_s)$, the first of which defines the basis ($C_b=1$ if $|\psi\rangle \in \mathcal{B}_A$, while $C_b=0$ otherwise), while the second defines the particular state from a given basis ($C_s=1$ if $|\psi\rangle \in \{|1\rangle,|+\rangle \}$, while $C_s=0$ otherwise). For each qubit sent to Alice in a state $|\psi\rangle^{\cal{A}}$, Trent sends to Bob the classical bits $C^{\cal{A}}= C(|\psi\rangle^{\cal{A}})$ assigned to this state, and analogously for Bob. During the Exchange phase, which might take place much later than the Initialization phase, the clients agree upon the actual contract, decide whether to commit to it or not, and exchange each others’ commitments. On the system in the state $|\psi\rangle^{\cal{A}}$, Alice measures either $\hat{A}$ or $\hat{R}$, depending on whether she decides to accept or reject the contract, respectively, and then communicates her result $M^{\cal{A}}$ to Bob. Since the two observables are defined by a pair of mutually unbiased bases, the statistics of measurement results performed on a sequence of qubits, each of which is randomly prepared in one of the four states $\{ |0\rangle,|1\rangle,|-\rangle,|+\rangle \}$, will be dramatically different depending on which observable is measured. For instance, when Alice measures $\hat{A}$ on qubits prepared in the Accept basis, the measurement outcomes are perfectly correlated with the corresponding classical bits $C_s^{\cal{A}}$ given to Bob (i.e., $M^{\cal{A}} = C_s^{\cal{A}}$); when she measures $\hat{A}$ on qubits prepared in the Reject basis, the results obtained are completely uncorrelated with the corresponding classical bits, and analogously for $\hat{R}$. 
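The two correlation patterns can be illustrated with a small classical simulation of Trent’s preparation and an honest Accept measurement: for the purpose of the statistics, an $\hat{A}$ measurement returns $C_s$ deterministically on Accept-basis qubits and a fair coin flip on Reject-basis qubits (a sketch; names and structure are ours):

```python
import random

def trent_prepare(n, rng):
    """Trent's description of each qubit as the pair (C_b, C_s):
    C_b = 1 for the Accept basis, C_s selects |1>/|+> vs |0>/|->."""
    return [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(n)]

def measure_accept(c_b, c_s, rng):
    """Outcome of measuring the Accept observable on qubit (C_b, C_s):
    deterministic in the Accept basis, uniformly random in the Reject basis."""
    return c_s if c_b == 1 else rng.randint(0, 1)

rng = random.Random(7)
qubits = trent_prepare(100_000, rng)
results = [measure_accept(cb, cs, rng) for cb, cs in qubits]

match_acc = [m == cs for (cb, cs), m in zip(qubits, results) if cb == 1]
match_rej = [m == cs for (cb, cs), m in zip(qubits, results) if cb == 0]
print(sum(match_acc) / len(match_acc))  # 1.0: perfect correlation
print(sum(match_rej) / len(match_rej))  # ~0.5: no correlation
```

Swapping the roles of the two bases gives the mirror-image statistics for an $\hat{R}$ measurement, so the two choices leave mutually exclusive fingerprints in the exchanged results.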
This way, by choosing one of the two measurements performed on a sequence of qubits, Alice produces one of two mutually exclusive sets of measurement outcomes that serve as a signature of her choice (to accept or to reject the contract). By sending the results to Bob, she informs him of her decision by some fixed moment in time $t_0$. The same is done by Bob. In the Binding phase, [*each*]{} party is asked to confront her/his measurement results with Trent’s corresponding classical bits. The perfect correlation between measurement results and the corresponding classical information for qubits prepared in the Accept/Reject basis confirms a client’s Accept/Reject choice. Trent declares the contract valid, giving signed certificates to both clients, if a client, say Alice, accepted the contract, while Bob did not reject it, or vice versa. Note that it is impossible, except with negligible probability, to produce perfect correlations on both sets of qubits (a direct consequence of the security [@qcryptography-security] of the BB84 protocol [@bb84]). In this and the next two sections we restrict ourselves to the case of ideal measurements and noiseless quantum channels (from Trent to the clients). The real-case scenario is discussed in the penultimate section of the paper. Obviously, the Exchange phase as described above suffers from the same problem as any classical protocol not involving information exchange with Trent: upon receiving Alice’s results, Bob can stop communication and safely postpone his decision to accept or reject the contract to a later moment in time. Thus, we require that the measurements and the exchange of measurement outcomes between clients happen in steps, (qu)bit by (qu)bit. 
Nevertheless, if we require perfect correlations between the measurement results and the corresponding classical information for [*all*]{} qubits distributed to a client, then in $1/4$ of the cases an agent will, already after the first step, be in a position to choose to, with probability one, either bind or reject the contract. Imagine the following situation. Alice is the first to perform a measurement and send the classical information to Bob. She is the honest party; she wants to accept the contract and thus she measures $\hat{A}$ on her first qubit. With probability $1/2$, her first qubit will be prepared in one of the states from the Reject basis. By measuring $\hat{A}$ on one of the $\{ |-\rangle,|+\rangle \}$ states, there is probability $1/2$ that her result does not match the value of $C_{s_1}^{\cal{A}}$, and so with probability $1/4$ Alice will not be able to achieve perfect correlations between her measurement outcomes and the corresponding classical data of the qubits prepared in states from the Reject basis. In other words, with probability $1/4$ an honest side will, already after the first step of the Exchange phase, be unable to reject the contract, even before the other side has performed any measurement. In such a situation, Bob can safely stop communication and postpone his choice until a later moment in time. Thus, we require that in order to accept/reject the contract, a client has to establish perfect correlations on $\alpha N_{A/R}$ qubits prepared in the Accept/Reject basis (with the total number of qubits $N=N_A+N_R$), where $1/2 < \alpha < 1$. We call $\alpha$ the [*acceptance ratio*]{} (note that, for a protocol to be viable, it is necessary that $\alpha > 1/2$). In this scenario, a client is allowed to obtain $(1-\alpha)N_{A/R}$ wrong results for the states prepared in the basis of her/his choice (Accept or Reject). 
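The requirement $\alpha > 1/2$ does real work here: for a client who measured in the wrong basis, each comparison with Trent’s bit is a fair coin flip, so the chance of nevertheless reaching the threshold $\alpha N_{A/R}$ is a binomial tail that decays exponentially in the number of qubits. A quick sketch of this tail probability (illustrative only; the function name is ours):

```python
import math

def p_fake(n, alpha):
    """Probability that a client who measured n qubits in the *wrong*
    basis still matches Trent's bits on at least ceil(alpha*n) of them:
    the tail of a Binomial(n, 1/2) distribution."""
    k_min = math.ceil(alpha * n)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
```

For instance, already for a hundred qubits per basis and $\alpha = 3/4$ this probability is far below $10^{-5}$, and it keeps shrinking exponentially as $N$ grows, which is why a client cannot keep both the Accept and the Reject option alive.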
If $\alpha$ is sufficiently large, this eliminates the possibility of obtaining good enough correlations for both groups of qubits, this way eliminating the above-mentioned unfair situation. In the rest of this section, we present a formal description of our protocol. Our protocol is optimistic and as such it is divided into three phases: the Initialization, the Exchange (see Fig. \[Initialization and Exchange Figure\]) and the Binding phase. During the Exchange phase the agents exchange their measurement results. If both clients are honest and perform measurements according to the protocol (measure the Accept observable), the Exchange phase will end up with both clients having their probabilities to bind the contract [*exponentially*]{} (in the number of qubits) close to one: the protocol is viable and optimistic (clients do not need to contact Trent as they already know the answer). If a client, say Bob, is dishonest and performs measurements other than those prescribed by the protocol (or just guesses the outcomes), he will unavoidably obtain wrong outcomes for some of the qubits from the Accept basis. If Alice detects a wrong result (Bob’s [*cheating*]{}), she interrupts the exchange and proceeds to the Binding phase. In the realistic case of measurement errors, Alice will have to set a threshold for the allowed number of wrong results below which she continues with the exchange. We discuss this at the end of the paper. ![(color online) The Initialization and The Exchange Phase.[]{data-label="Initialization and Exchange Figure"}](Fig12.pdf){width="8.0cm" height="6.5cm"} [**The Initialization Phase:**]{} [*Trent produces $N$ pairs of qubits in states $(|\psi\rangle_m^{\cal{A}},|\psi\rangle_m^{\cal{B}})$ with the corresponding classical description $(C_m^{\cal{A}},C_m^{\cal{B}})=((C_{b_m}^{\cal{A}},C_{s_m}^{\cal{A}}),(C_{b_m}^{\cal{B}},C_{s_m}^{\cal{B}}))$, with $m\in \{1,\ldots N \}$. 
The rule for assigning the classical data to the corresponding qubit states is the following: $C_{b_m}^{\cal{A/B}}=1$ if $|\psi\rangle_m^{\cal{A/B}} \in \mathcal{B}_A$, while $C_{b_m}^{\cal{A/B}}=0$ otherwise; $C_{s_m}^{\cal{A/B}}=1$ if $|\psi\rangle_m^{\cal{A/B}} \in \{|1\rangle,|+\rangle \}$, while $C_{s_m}^{\cal{A/B}}=0$ otherwise. Each qubit state is randomly chosen from the set $\{ |0\rangle,|1\rangle,|-\rangle,|+\rangle \}$. Trent distributes to Alice $N$ qubits $|\psi\rangle_m^{\cal{A}}$ and $2N$ classical bits $C_m^{\cal{B}}$, and analogously for Bob, keeping a copy of the classical data to himself. He assigns a unique identifier (number) to all data so that it can be linked in the Exchange phase to a specific contract.*]{} [**The Exchange Phase:**]{} [*Alice and Bob agree on a contract and exchange signed messages containing the contract, the identifier of the qubit sequence they want to use, and some previously arranged moment in time $t_0$ setting the time limit for finishing the Exchange phase. (This does not bind them to the contract!) Alice and Bob perform measurements on their qubits and exchange the measurement results with each other. Without loss of generality, we assume Alice is the first to start communication. She measures an observable of her choice ($\hat{A}$ or $\hat{R}$) on the state $|\psi\rangle_1^{\cal{A}}$, obtaining the result $M_1^{\cal{A}} \in \{ 0,1 \}$, and sends it to Bob. Bob compares $M_1^{\cal{A}}$ with $C_{s_1}^{\cal{A}}$. If the values are different, Alice measured her qubit in the basis corresponding to*]{} $(1+C_{b_1}^{\cal{A}}) \ \mbox{mod} \ 2$. [*Otherwise, the comparison is inconclusive. Next, Bob repeats the procedure described for Alice. The rest of the exchange consists in repeating the above procedure for the states $(|\psi\rangle_m^{\cal{A}},|\psi\rangle_m^{\cal{B}})$ with $m\in \{2,\ldots N \}$. 
If a client, say Alice, does not obtain a result from Bob by $t_0$, or receives for a qubit from the Accept basis a result different from the corresponding classical data ($C_{b_m}^{\cal{B}}=1 \wedge M_m^{\cal{B}}\neq C_{s_m}^{\cal{B}}$), she immediately proceeds to the Binding phase.*]{} [**The Binding Phase:**]{} [*At the beginning of the Binding phase Trent chooses $\alpha \in (1/2,1)$ randomly and independently, according to a publicly known probability distribution $p(\alpha)$. Without loss of generality, we assume that Alice contacts Trent to decide the validity of the contract. She decides according to her preference whether she wants to bind or to reject the contract. In the former case she measures all unmeasured qubits in the Accept basis, in the latter in the Reject basis. Both parties then report to Trent, for each respective qubit, whether they measured it in the Accept or the Reject basis, and submit the respective measurement outcomes. Trent verifies whether their measurement outcomes correspond to their claims. If there is a mismatch in the measurements of, say, Bob, he is declared a cheater and Trent considers only Alice’s measurement outcomes. Let $N_A^{\cal{A}}$ ($N_R^{\cal{A}}$) denote the number of Alice’s qubits prepared in the Accept (Reject) basis, and analogously $N^{\cal{B}}_A$ and $N_R^{\cal{B}}$ for Bob. The contract is declared valid if Alice presents at least $\alpha N_A^{\cal{A}}$ accept results and Bob presents less than $\alpha N_R^{\cal{B}}$ reject results, or when Bob presents at least $\alpha N_A^{\cal{B}}$ accept results and Alice presents less than $\alpha N_R^{\cal{A}}$ reject results. In case a client, say Bob, supplied incorrect measurement outcomes (see above), Trent declares the contract valid if Alice presents at least $\alpha N_A^{\cal{A}}$ accept results. In all other cases the contract is declared invalid.*]{} III. 
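For the case in which both parties’ reported outcomes pass Trent’s verification, the verdict rule of the Binding phase can be summarized in a few lines (a sketch; the function name and signature are ours):

```python
def trent_verdict(alpha, n_a_alice, acc_alice, n_r_alice, rej_alice,
                  n_a_bob, acc_bob, n_r_bob, rej_bob):
    """Trent's decision rule (honest-reports case): the contract is
    valid iff one client reaches the accept threshold on her/his
    Accept-basis qubits while the other stays below the reject
    threshold on her/his Reject-basis qubits."""
    alice_accepts = acc_alice >= alpha * n_a_alice
    alice_rejects = rej_alice >= alpha * n_r_alice
    bob_accepts = acc_bob >= alpha * n_a_bob
    bob_rejects = rej_bob >= alpha * n_r_bob
    if (alice_accepts and not bob_rejects) or \
       (bob_accepts and not alice_rejects):
        return "valid"
    return "invalid"
```

Note that an honest accepter automatically lands in the "undecided" zone on her Reject-basis qubits (roughly half of those comparisons match, well below any $\alpha > 1/2$), so two honest accepters get a valid verdict while an unambiguous rejecter blocks it.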
Fairness conditions {#sec:fairness-conditions} ======================== As noted in the previous section, our protocol is (probabilistically) viable: if both clients are honest, the probability to bind the contract is exponentially close to one. Therefore, it is also optimistic: honest clients do not need to contact Trent in order to obtain the verdict that, with exponentially high probability, they already know. In case a client, say Bob, is not honest, i.e. is not measuring the Accept observable in every step of the protocol, we say that he is cheating. Any cheating strategy will inevitably have a non-zero probability of producing a wrong result on qubits from the Accept basis, thus allowing Alice to detect Bob’s cheating and move on to the Binding phase. In case the Exchange phase is terminated due to detected cheating, all we can predict is the probability that Trent declares the contract as valid. This probability depends on the moment when the exchange was aborted, as well as on the actions of both parties (before and after the exchange was terminated). The preferences of the signing parties may change (due to commodity price changes, etc.) before it is possible to reach Trent. We say that the parties are symmetric if the probability that Trent declares the contract as valid is (almost) the same regardless of whether honest Alice wants to bind the contract and Bob wants to reject it, or vice versa. Note that we do not care about the trivial cases when both want to reject or both want to bind the contract. This notion of symmetry is close to the weak coin tossing problem [@Mochon-Quant_weak_coin-:2004]: if both Alice and Bob want the same outcome ($0$ or $1$), there is no need to guarantee an unbiased coin toss. On the other hand, it is vital to assure as little bias as possible if their preferences are contradictory. In this section, we present our formal fairness conditions that allow for such symmetric positions of the clients. 
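The inevitability of detection can be made quantitative with a small simulation. Under the protocol's preparation (each qubit uniformly one of the four states), a single Reject measurement hits an Accept-basis qubit with probability $1/2$ and then produces a wrong result with probability $1/2$, so $k$ dishonest measurements escape detection with probability $(3/4)^k$ (this closed form appears as $p_w(\delta m)$ in the next section). The toy model below, with our illustrative state encoding, checks it by Monte Carlo:

```python
import random

def escape_probability(k):
    """Probability that k Reject-observable measurements on uniformly
    prepared qubits produce no detectable wrong result: each step is
    wrong with probability (1/2)*(1/2) = 1/4, hence (3/4)**k."""
    return (3 / 4) ** k

def simulate_escape(k, trials=20000, seed=1):
    """Monte Carlo check of the closed form (illustrative model only)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # a wrong result occurs iff the qubit is from the Accept basis
        # (probability 1/2) and the Reject measurement collapses onto
        # the non-matching outcome (probability 1/2)
        if all(not (rng.random() < 0.5 and rng.random() < 0.5)
               for _ in range(k)):
            ok += 1
    return ok / trials
```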
In the following, we assume Alice to be the honest and Bob the dishonest client. The [*probability to bind the contract*]{}, $P_{bind}$, is the probability that during the Binding phase Trent, upon receiving classical information from both Alice and Bob (i.e., their measurement results), declares the contract as valid and hands in to both Alice and Bob signed (i.e., validated) copies of the contract. The probability to bind is a function of the step $m$ after which the communication is broken (typically, if cheating is noticed, or the connection between the clients is broken). It is also a function of the clients’ strategies. In this section, we will consider the case of one-particle measurements only. As shown in Section V., the general case of multi-particle measurements, acting on at most $L$ qubits, can be reduced to the case of one-qubit measurements, by considering the blocks of $L$ qubits as units and then applying the same reasoning to the sequence of $N/L$ such units. The case of global measurements on all $N$ qubits is discussed at the end of Section V. Restricting ourselves to one-qubit measurements, for a fixed parameter $\alpha$, the probability to bind is a function of the clients’ measurements before the step $m$, $X^{\cal{A}}$ for Alice, $X^{\cal{B}}$ for Bob, and their behavior after the step $m$, $Y^{\cal{A}}$ for Alice, $Y^{\cal{B}}$ for Bob. Thus, $$P_{bind} = P_{bind} (m, \alpha, (X^{\cal{A}}, X^{\cal{B}}), (Y^{\cal{A}}, Y^{\cal{B}})).$$ Alice is an honest client, so her strategy is $X^{\cal{A}} = \hat{A}^{\otimes m}$. In Section V. we show that, when measured on qubits prepared in one of the states from the Accept and the Reject bases, general one-qubit measurements produce probability distributions equivalent to those obtained when only the Accept and the Reject observables are measured. 
Therefore, Bob’s strategy, in which he measures the $\hat{A}$ observable $(m - \delta m)$ times, while the $\hat{R}$ observable is measured $\delta m$ times (the order is irrelevant), is $X^{\cal{B}} = \hat{A}^{\otimes (m - \delta m)} \hat{R}^{\otimes (\delta m)}$. Thus, we can write $X^{\cal{B}} = X^{\cal{B}} (m, \delta m)$. The only two relevant strategies after step $m$ are those in which one client wants to bind the contract and the other does not. The one who wants it bound measures the Accept observable, $Y = \hat{A}^{\otimes (N-m)}$. The other, who does not want it bound, measures the Reject observable, $Y = \hat{R}^{\otimes (N-m)}$. If Alice wants the contract bound, $Y^{\cal{A}} = \hat{A}^{\otimes (N-m)}$, and Bob does not, $Y^{\cal{B}} = \hat{R}^{\otimes (N-m)}$, we call the corresponding probability Alice’s probability to bind the contract, $P^{\cal{A}}_{bind} (m, \alpha, X^{\cal{A}}, X^{\cal{B}})$, and analogously for Bob (note that in this notation, the clients’ strategies after step $m$ are not explicitly written). We will calculate the clients’ probabilities to bind the contract in the next section. Here, we only note that we do not discuss the cases when after step $m$ both clients want the same: if the protocol is fair for the cases when the clients’ wishes are opposite (a conservative assumption), it will also be fair when they wish the same. Averaging over all possible Bob’s strategies $X^{\cal{B}}$ (Alice is honest, so her strategy is known), we obtain $$P_{bind}^{\cal{A}} (m, \alpha) = \sum_{X^{\cal{B}}} p(X^{\cal{B}}) P_{bind}^{\cal{A}} (m, \alpha, X^{\cal{A}}, X^{\cal{B}}),$$ and analogously for $P^{\cal{B}}_{bind} (m, \alpha)$. 
Here, $p(X^{\cal{B}})$ is the probability that Bob chooses the particular strategy $X^{\cal{B}} = X^{\cal{B}} (m, \delta m)$, and is given by the probability $p_w(\delta m) = 1-(3/4)^{\delta m}$ to obtain a wrong result when measuring the Reject observable $\delta m$ times (obviously, wrong results are, in the case of ideal measurements with no errors, possible only on qubits from the Accept basis). For our protocol to be fair, we require that at each step $m$ of the Exchange phase, the difference between the agents’ (Alice and Bob) probabilities to bind the contract can be made arbitrarily small: for any given $\varepsilon$, $$|P^{\cal{B}}_{bind}(m, \alpha) - P^{\cal{A}}_{bind}(m, \alpha) | < \varepsilon.$$ In order to make our protocol even more symmetric, we introduce [*the probability to cheat*]{} of a dishonest client (Bob). It is the product of Bob’s probability to bind and the probability that Alice will not bind the contract: $$\begin{aligned} P_{ch}^{\cal{B}} (m, \alpha, X^{\cal{A}}, X^{\cal{B}}) &=& P^{\cal{B}}_{bind} (m, \alpha, X^{\cal{A}}, X^{\cal{B}}) \\ && \times [1 - P^{\cal{A}}_{bind} (m, \alpha, X^{\cal{A}}, X^{\cal{B}})]. \nonumber\end{aligned}$$ After averaging over Bob’s strategies $X^{\cal{B}} (m)$, the probability to cheat of the dishonest client Bob is: $$P_{ch}^{\cal{B}} (m, \alpha) = \sum_{X^{\cal{B}}} p(X^{\cal{B}}) P_{ch}^{\cal{B}} (m, \alpha, X^{\cal{A}}, X^{\cal{B}}).$$ Our second fairness requirement is that a dishonest client’s (Bob’s) probability to cheat $P_{ch}^{\cal{B}} (m, \alpha)$ is also negligible[^2]. Note that in our protocol the Binding phase requires [*both*]{} clients to confront their measurement results, [*both*]{} obtaining the [*same*]{} verdict by Trent at the end. 
Therefore, although the probability to cheat is the product of two probabilities, the probability that Bob can bind the contract and the probability that Alice cannot, it is not itself a probability of an event (Bob’s probability to bind is obtained under the assumption that after step $m$ he measures $\hat{A}$ and Alice $\hat{R}$; to obtain Alice’s probability to bind, we assume that after step $m$ she is the one to measure $\hat{A}$ while Bob measures $\hat{R}$). Yet, it can serve as a measure of the protocol’s fairness, as it quantifies the agent’s freedom to choose between binding and refusing the contract later in time. We discuss this in more detail at the end of this section. At the end of the paper, we show that it is straightforward to design a protocol in which it is not necessary that both clients are (simultaneously) present during the Binding phase. In this scenario, the corresponding probability to cheat becomes the joint probability that one client can bind the contract, while the other cannot. Unfortunately, although the first fairness criterion is satisfied for the above protocol, its probability to cheat can be as high as $1/4$: in case both clients measure the Accept observable, both clients’ probabilities to bind are monotonic functions, such that for big enough $N$ there exists $m_s$ for which $P^{\cal{A}}_{bind}(m_s, \alpha) \approx P^{\cal{B}}_{bind}(m_s, \alpha) \approx 1/2$. But if $\alpha$, determined by Trent by sampling a random variable described by a publicly known probability distribution $p(\alpha)$, is itself unknown to the clients, and determined only later during the Binding phase, the expected probability to cheat $\bar{P}_{ch}(m)$ becomes negligible for big enough $N$. Thus, our second fairness condition finally reads as: for any given $\varepsilon$, $$\! \bar{P}^{\cal{B}}_{ch}(m) \! = \! 
\int p(\alpha) P_{bind}^{\cal{B}}(m,\alpha)[1-P_{bind}^{\cal{A}}(m,\alpha)]{\mathbf d\alpha} < \varepsilon.$$ The coefficient $\alpha$ is sampled randomly by Trent to achieve stronger security requirements. This assures a symmetric position of the honest and the cheating participant even before Trent is contacted during the Binding phase: if the agents are temporarily unable to contact Trent, cheaters should not profit from this in a significant way. A client, say Bob, may be willing to take the risk and stop the protocol prematurely during the Exchange phase, provided such a situation can assure him some reasonable position. Consider a contract where Alice buys orange juice from Bob for $X$ units per liter. According to the market expectation, with probability $p$ the price should increase and with probability $(1-p)$ decrease. When the price goes up to $X'>X$, Alice wants to enforce the contract, since otherwise she would have to buy the juice at a higher price. Bob wants the contract to be canceled, to sell the juice at the higher price. In case the price drops, the situation is symmetric. Bob may be willing to take the risk parameterized by $\delta$ in the following sense. The joint probability that the price drops and he will be able to enforce the contract is at least $\delta$, as well as the joint probability that the price increases and Alice won’t be able to enforce the contract. The latter gives him protection from financial losses, while the former allows him to spare some money. This is formalized as $$(\exists\, 0 \! \le \! p \! \le \! 1\ \! ) \left[p(1-P_{bind}^{\cal{A}})\geq\delta\,\wedge\ (1-p)P_{bind}^{\cal{B}}\geq\delta\right].$$ Thus, to prevent the reasonability of Bob’s cheating we require that $$(\forall\, 0 \!\! \le \! p \! \le \!\! 1\ \!\! ) \left[p(1-P_{bind}^{\cal{A}})\le\delta\,\vee\ (1-p)P_{bind}^{\cal{B}}\le\delta\right].$$ Let us denote by $Y\stackrel{def}{=}P_{bind}^{\cal{B}}(m,\alpha)[1-P_{bind}^{\cal{A}}(m,\alpha)]$ the random variable parameterized by $\alpha$. 
The expected value $$E(Y)=\int p(\alpha) P_{bind}^{\cal{B}}(m,\alpha)[1-P_{bind}^{\cal{A}}(m,\alpha)]{\mathbf d\alpha} %\equiv \bar{P}_{ch}(m)$$ is nothing but the [*expected probability to cheat*]{} $\bar{P}_{ch}(m)$ (note that due to $P^{\cal{B}}_{bind}(m, \alpha) \approx P^{\cal{A}}_{bind}(m, \alpha)$, we have $\bar{P}_{ch}(m) \equiv \bar{P}^{\cal{B}}_{ch}(m) \approx \bar{P}^{\cal{A}}_{ch}(m)$ ). Using the above fairness criterion $\bar{P}_{ch}(m) < \varepsilon$, the Chebyshev inequality, and putting $\delta^3=\varepsilon$, we obtain $\mbox{Prob}_\alpha[Y<\delta+\delta^3]\geq 1-\delta.$ Thus, the probability $\delta$ can be made arbitrarily small with arbitrarily high probability. IV. Fairness of the protocol: Ideal case {#sec:fairness-ideal} ======================================== In the following, we show that our protocol is [*fair*]{}: both $|P^{\cal{B}}_{bind}(m, \alpha) - P^{\cal{A}}_{bind}(m, \alpha)|$ and $\bar{P}_{ch}(m)$ can be made arbitrarily low. In this section, we assume that only $\hat{A}$ or $\hat{R}$ are measured, and that no measurement errors or qubit state corruption occur. In the next section, we discuss general (one- or multi-qubit) observables and the real-life scenario of imperfect measurements and noisy channels. In case Bob is cheating during the Exchange phase, he will be detected after a small number of steps, with probability growing exponentially in the number of qubits measured by Alice. Assume Bob’s cheating is detected after Alice measured $m$ qubits. Alice terminates the Exchange phase and the participants proceed with the Binding phase, which can be delayed (Trent is offline). Meanwhile, the participants are allowed to change their preferences and we would like to examine the symmetry of their positions. We are interested only in the situation when Alice wants to bind the contract and Bob wants to reject it, and vice versa. In the former case Alice tries to do her best to bind the contract. 
This means she measures all unmeasured qubits in the Accept basis and sends her results to Trent. Bob does his best to invalidate the contract, measuring his qubits in the Reject basis. Note that possible lying about the measurement basis of a given qubit is detected with probability growing exponentially in the number of wrongfully reported measurement results, so the number of measurements Bob can lie about is strictly limited. If the cheating was detected and the Exchange phase was terminated after $m$ steps, Alice’s probability to bind the contract is, for a given $\alpha$, given by the following expression: $$P^{\cal{A}}_{bind} (m, \alpha, X^{\cal{A}}, X^{\cal{B}}) = P^{\cal{A}}_A (m, \alpha, X^{\cal{A}}) [1 - P^{\cal{B}}_R (m, \alpha, X^{\cal{B}})],$$ and analogously for Bob. The probability to accept the contract, $P_A$, is the probability that a client who wants the contract bound, thus after step $m$ measuring the Accept observable, will pass Trent’s test during the Binding phase. The probability to reject the contract, $P_R$, is the probability that a client who does not want the contract bound, thus after step $m$ measuring the Reject observable, will invalidate the contract during the Binding phase. Note that under this convention it is redundant to specify the clients’ strategies $Y$ after the step $m$. Also, the clients’ probabilities to accept and reject the contract depend only upon their own strategies before the step $m$, and not those of the opponent. Alice’s probability to accept the contract is $P^{\cal{A}}_A (m, \alpha) = 1$: she is an honest client and thus is measuring $\hat{A}$ in every step until the interruption; she wants the contract bound, so she continues to measure the Accept observable. Her probability to reject the contract can be written in a simplified form as $P^{\cal{A}}_R (m, \alpha)$, bearing in mind that in this case $X^{\cal{A}} (m) = \hat{A}^{\otimes m}$ and $Y^{\cal{A}} (m) = \hat{R}^{\otimes (N-m)}$. 
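The bookkeeping in the displayed expression is simple enough to sketch directly. Using the honest-client value $P^{\cal{A}}_A = 1$, and anticipating the estimate $P^{\cal{B}}_A \approx 1$ for an undetected cheater derived in the following paragraph, each client's probability to bind reduces to one minus the opponent's probability to reject:

```python
def p_bind(p_accept_self, p_reject_other):
    """P_bind = P_A(self) * (1 - P_R(opponent)), as in the displayed formula."""
    return p_accept_self * (1 - p_reject_other)

def bind_probabilities(p_R_alice, p_R_bob, p_A_bob=1.0):
    """Return (P_bind^A, P_bind^B) with honest Alice (P_A^A = 1) and a
    dishonest Bob whose cheating went undetected (P_A^B ~ 1)."""
    return p_bind(1.0, p_R_bob), p_bind(p_A_bob, p_R_alice)
```

The rejection probabilities fed into these helpers are the quantities computed in the remainder of this section.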
Out of $m$ steps, a dishonest Bob measures the Accept observable $(m - \delta m)$ times, and the remaining $\delta m$ times the Reject observable. Therefore $X^{\cal{B}} (m, \delta m) = \hat{A}^{\otimes (m - \delta m)} \hat{R}^{\otimes (\delta m)}$. When rejecting the contract, his strategy is $Y^{\cal{B}} (m) = \hat{R}^{\otimes (N-m)}$, and we see that Bob’s probability to reject is the same as Alice’s probability to reject, had the interruption occurred $\delta m$ steps before, $ P^{\cal{B}}_R (m, \alpha, \delta m) = P^{\cal{A}}_R (m-\delta m, \alpha)$. When measuring the Reject observable, the probability to obtain wrong results on states from the Accept basis, and thus to be detected cheating, approaches one exponentially fast, $p_w(\delta m) = 1-(3/4)^{\delta m}$. Therefore, for $1/2 < \alpha < 1$ and large enough $N$, $\delta m << (1-\alpha )N$. The expected $\delta m$ is small and since for large enough $N$ all the probabilities are slowly varying functions of $m$, we have $ P^{\cal{B}}_A (m, \alpha, \delta m) \approx P^{\cal{A}}_A (m, \alpha) = 1$, while $P^{\cal{A}}_R (m, \alpha) \approx P^{\cal{A}}_R (m-\delta m, \alpha) = P^{\cal{B}}_R (m, \alpha)$. Therefore, the first fairness condition $P^{\cal{A}}_{bind}(m, \alpha) \approx P^{\cal{B}}_{bind}(m, \alpha)$ is satisfied. 
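The shift relation $P^{\cal{B}}_R(m,\alpha,\delta m) = P^{\cal{A}}_R(m-\delta m,\alpha)$ and the smallness of the expected $\delta m$ can both be written down directly. With each dishonest Reject measurement detected with probability $1/4$ (the per-step rate behind $p_w$), the number of such measurements before detection is geometric with mean $4$:

```python
def bob_reject_probability(p_R_honest, m, delta_m, alpha):
    """Bob's probability to reject equals the honest one shifted by delta_m:
    P_R^B(m, alpha, delta_m) = P_R^A(m - delta_m, alpha)."""
    return p_R_honest(m - delta_m, alpha)

def expected_delta_m(p_step=0.25, cutoff=500):
    """Mean number of Reject measurements Bob performs before being
    detected, when each one is detected with probability p_step
    (a geometric distribution with mean 1/p_step)."""
    return sum(d * (1 - p_step) ** (d - 1) * p_step for d in range(1, cutoff))
```

The mean of $4$ is independent of $N$, which is exactly why $\delta m \ll (1-\alpha)N$ for large $N$.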
To show that the second fairness condition is also satisfied, we first note that, due to $P^{\cal{A}}_R (m, \alpha) \approx P^{\cal{B}}_R (m, \alpha)$ and $ P^{\cal{B}}_A (m, \alpha) \approx P^{\cal{A}}_A (m, \alpha) = 1$, the two probabilities to cheat are almost the same and can be, for a given $\alpha$, written in terms of a single expected probability to reject the contract $$\label{ch2} P_{ch}(m;\alpha)=P_R(m;\alpha)(1-P_R(m;\alpha)).$$ The expected probability to reject, for a given $\alpha$, is $$P_R(m;\alpha) = \sum_{N_R=0}^{N} q(N_R)P_R(m;\alpha, N_R),$$ where $P_R(m;\alpha, N_R)$ is the probability to (be able to) reject the contract (obtain less than $(1-\alpha)N_R$ wrong results on qubits from the Reject basis, measuring the Accept observable on the first $m$ qubits), for a given acceptance ratio $\alpha$ and the number of qubits prepared in the Reject basis $N_R$, keeping for simplicity the $N$ dependence implicit, and $q(N_R) = 2^{-N}\left( \begin{array}{c} N \\ N_R \end{array} \right)$ is the probability to have exactly $N_R$ states from the Reject basis. Note that $P_R(m;\alpha, N_R)$ is calculated under the assumption that the Accept observable is measured during the first $m$ steps, while after that the Reject observable is measured for the remaining $(N-m)$ steps. For $m<(1-\alpha)N_R$ there is always a chance to reject the contract, thus $P_R(m;\alpha, N_R) = 1$. 
Otherwise, $$\label{reject} P_R(m;\alpha, N_R) = \sum_{n=n^\prime}^{m^\prime} P(n \mbox{ in R};m, N_R)P_R(n \mbox{ in R};\alpha, N_R).$$ Here, the probability that exactly $n$ out of the first $m$ qubit states are from the Reject basis is given by[^3] $$P(n \mbox{ in R};m, N_R) = \left( \begin{array}{c} m \\ n \end{array} \right) \left( \begin{array}{c}N- m \\ N_R-n \end{array} \right) \left( \begin{array}{c} N \\ N_R \end{array} \right)^{-1},$$ while the probability of being able to reject the contract if $n$ qubits are from the Reject basis is given by[^4] $$P_R(n \mbox{ in R};\alpha, N_R) = 2^{-n} \sum_{i=0}^T \left( \begin{array}{c} n \\ i \end{array} \right),$$ where $T=(1-\alpha)N_R-1$ if $n\geq (1-\alpha)N_R$ and $T=n$ otherwise. Due to the constraint of having exactly $N_{A/R}$ qubits from the Accept/Reject basis, the range of the summation for $n$ in equation \[reject\] is given by $n^\prime=0$ for $m\leq N_A$ while $n^\prime=m-N_A$ otherwise, and $m^\prime=m$ for $m\leq N_R$ while $m^\prime=N_R$ otherwise. Finally, the expected probability to cheat, with respect to a given probability distribution $p(\alpha)$ on the segment $I_{\alpha}$, is $$\bar{P}_{ch}(m) = \int_{I_{\alpha}}p(\alpha)P_{ch}(m;\alpha){\mathbf d\alpha}.$$ Using the simplest uniform probability $p(\alpha)=1/|I_{\alpha}|$ on the segment $I_{\alpha}=[0.9,0.99]$, we numerically evaluated the expected probability to cheat $\bar{P}_{ch}(m)$ for up to $N=600$, while for the “typical” case of $N_A=N_R$ we managed to evaluate it up to $N=8000$, see Fig. \[Results Figure\]. We see that the [*fairness condition*]{} $\mbox{sup}_m \bar{P}_{ch}(m) << 1$ is satisfied (for $N=600$ we got $\mbox{sup}_m \bar{P}_{ch}(m) = \bar{P}_{ch}(92)= 0.0811$, while for $N_A=N_R$ we have $\mbox{sup}_m \bar{P}_{ch}(m) = \bar{P}_{ch}(1455)= 0.0247$ for $N=8000$), as explicitly shown in Fig. \[Results Figure\]. 
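The quantities above are straightforward to evaluate numerically. The sketch below implements Eq. \[reject\] and the averages over $N_R$ and $\alpha$ for small $N$; the only assumption we add is how to round the generally non-integer threshold $(1-\alpha)N_R$ (we require strictly fewer than $(1-\alpha)N_R$ wrong results, which reproduces $T=(1-\alpha)N_R-1$ for integer thresholds):

```python
from math import ceil, comb

def q(N, N_R):
    """Probability that exactly N_R of the N uniformly prepared qubits
    are from the Reject basis: 2**(-N) * C(N, N_R)."""
    return comb(N, N_R) / 2 ** N

def p_n_in_R(n, m, N, N_R):
    """Hypergeometric probability that n of the first m qubits are from
    the Reject basis, given N_R Reject qubits among all N."""
    return comb(m, n) * comb(N - m, N_R - n) / comb(N, N_R)

def p_R_given_n(n, alpha, N_R):
    """Probability to still be able to reject after measuring A-hat on n
    Reject-basis qubits: 2**(-n) * sum_{i=0}^{T} C(n, i).  We require
    strictly fewer than (1-alpha)*N_R wrong results, i.e.
    T = min(n, ceil((1-alpha)*N_R) - 1) -- our rounding assumption."""
    T = min(n, ceil((1 - alpha) * N_R) - 1)
    if T < 0:
        return 0.0
    return sum(comb(n, i) for i in range(T + 1)) / 2 ** n

def p_R(m, alpha, N, N_R):
    """P_R(m; alpha, N_R): the sum over n in Eq. [reject], with
    n' = max(0, m - N_A) and m' = min(m, N_R)."""
    N_A = N - N_R
    return sum(p_n_in_R(n, m, N, N_R) * p_R_given_n(n, alpha, N_R)
               for n in range(max(0, m - N_A), min(m, N_R) + 1))

def expected_p_cheat(m, N, alphas):
    """P_ch(m; alpha) = P_R * (1 - P_R), with P_R first averaged over
    q(N_R) and P_ch then averaged over a grid of alpha values standing
    in for the distribution p(alpha) on [0.9, 0.99]."""
    total = 0.0
    for alpha in alphas:
        pr = sum(q(N, N_R) * p_R(m, alpha, N, N_R) for N_R in range(N + 1))
        total += pr * (1 - pr)
    return total / len(alphas)
```

The cost is roughly quadratic in $N$ per value of $m$ and $\alpha$, which is fast enough to reproduce the qualitative shape of the curves in Fig. \[Results Figure\] for moderate $N$.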
Moreover, the numerical fit gives $\mbox{sup}_m \bar{P}_{ch}(m) \propto N^{-1/2}$ behavior, giving us the scaling of the complexity of the protocol with respect to the number $N$ of exchanged messages between Alice and Bob. ![(color online) The expected probability to cheat $\bar{P}_{ch}(m)$ (upper row) and the maximal expected probability to cheat $\mbox{sup}_m \bar{P}_{ch}(m)$ (lower row) for the uniform $p(\alpha)$ on $I_{\alpha}=[0.9,0.99]$. The plots in the left column represent results for our protocol, while those in the right column are for the restricted “typical” case of $N_A=N_R$. Note the scaling behavior $\mbox{sup}_m \bar{P}_{ch}(m) \propto N^{-1/2}$. \ []{data-label="Results Figure"}](Fig34.pdf){width="8.5cm" height="6.0cm"} V. Fairness of the protocol: General measurements and noise {#sec:fairness-general_measurements} =========================================================== First, we consider only one-qubit orthogonal measurements. Since Alice is an honest client, $X^{\cal{A}} = \hat{A}^{\otimes m}$, we have that $P^{\cal{A}}_A (m, \alpha, X^{\cal{A}}) = 1$ and $P^{\cal{A}}_R (m, \alpha, X^{\cal{A}}) = P_R(m, \alpha)$. Bob is a dishonest client and his strategy $X^{\cal{B}}$ consists of measuring $(m-k)$ times the Accept observable $\hat{A}$ and $k$ times the observable $\hat{K}= 0\cdot |m\rangle\langle m| + 1\cdot |m^\bot\rangle\langle m^\bot|$, where $$\!\! |m\rangle \! = \! \cos\frac{\theta}{2}|0\rangle + e^{i\varphi}\!\sin\frac{\theta}{2}|1\rangle \! = \! \cos\frac{\theta^\prime}{2}|-\rangle + e^{i\varphi^\prime}\!\!\sin\frac{\theta^\prime}{2}|+\rangle.$$ Let $m = m_a + m_r$, where $m_a$ is the number of the Accept and $m_r$ the number of the Reject qubits among the first $m$ qubits, and let $k_a$ be the number of measurements of $\hat{K}$ on the $m_a$ qubits from the Accept basis, and analogously for $k_r$. 
Bob’s measurements of $\hat{K}$ are equivalent to[^5] (for simplicity, we omit writing the $\alpha$ and $N_R$ dependences): - $q_a\cdot k_a$ measurements of $\hat{A}$ and $\delta m_a = [1-q_a]\cdot k_a$ measurements of $\hat{R}$ on qubits from the Accept basis, where $q_a=\cos\theta$. Thus, the probability to notice cheating after $k_a$ measurements of $\hat{K}$ is $\tilde{p}_w(\delta m_a)=\tilde{p}_w([1-q_a] k_a)= 1-(1/2)^{[1-q_a] k_a}$ and Bob’s probability to accept the contract is $P^{\cal{B}}_A (m, \alpha, X^{\cal{B}}) \equiv P_A (m, \alpha; m_a, \delta m_a)$; - $q_r\cdot k_r$ measurements of $\hat{A}$ and $\delta m_r = [1-q_r]\cdot k_r$ measurements of $\hat{R}$ on qubits from the Reject basis, where $q_r=\cos\theta^\prime$. Thus, Bob’s measurements are equivalent to $(m_r - \delta m_r)$ measurements of $\hat{A}$ on qubits from the Reject basis and his probability to reject is $P^{\cal{B}}_R(m, \alpha, X^{\cal{B}}) \leq P_R(m-\delta m_r,\alpha)$. Since $\tilde{p}_w(\delta m_a) = 1-(1/2)^{[1-q_a] k_a}$ must stay small for Bob to remain undetected, either $k_a$ is small, or $q_a \approx 1$. In case of $q_a \approx 1$, we have that $q_r \approx 0$: the observable $\hat{K}$ is close to $\hat{A}$ and Bob’s strategy is close to that of an honest client. In case $k_a$ is small, we have that $\delta m_a$ is also small and Bob’s probability to accept the contract is close to one, $P^{\cal{B}}_A (m, \alpha, X^{\cal{B}}) = P_A (m, \alpha; m_a, \delta m_a) \approx 1$. Moreover, since for typical cases $k_a \approx k_r$, we have that typically Bob’s probability to reject the contract is $P^{\cal{B}}_R(m, \alpha, X^{\cal{B}}) \approx P_R(m,\alpha)$. Therefore, the corrected probability to cheat, averaged over all possible distributions of states (from the Accept and the Reject bases) and all possible strategies of the clients, will not be considerably altered and the protocol would still be fair, even if an arbitrary number of observables $\hat{K}_i$ is allowed. 
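The bookkeeping in the two items above can be sketched directly; following the text we take $q_a=\cos\theta$ and $q_r=\cos\theta^\prime$, with the assumption that the angles are such that both cosines lie in $[0,1]$:

```python
from math import cos

def equivalent_strategy(k_a, k_r, theta, theta_prime):
    """Translate k_a (k_r) measurements of K-hat on Accept (Reject) qubits
    into the equivalent numbers of effective R-hat measurements and the
    resulting detection probability, following the decomposition above."""
    q_a, q_r = cos(theta), cos(theta_prime)
    dm_a = (1 - q_a) * k_a      # effective R-hat steps on Accept qubits
    dm_r = (1 - q_r) * k_r      # effective R-hat steps on Reject qubits
    p_detect = 1 - 0.5 ** dm_a  # probability the cheating is noticed
    return dm_a, dm_r, p_detect
```

For $\theta = 0$ the observable $\hat{K}$ coincides with $\hat{A}$ and the detection probability vanishes, recovering the honest-client limit discussed above.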
Note that, according to the above and the analysis from the previous section, the complexity of the protocol, for the case of general one-qubit orthogonal measurements, scales as $N \propto \varepsilon^{-2}$, where $N$ is the number of exchanged messages and $\varepsilon$ a given upper bound for the probability to cheat. The above argument can be generalized to joint $L$-qubit measurements, if $L \propto N^t$ and $t<1$: for every joint observable (or general POVM) $\hat{O}_L \neq \otimes_{i=1}^L\hat{A}_i$ there is a non-zero probability $q_L$ that at least one wrong result will be obtained on the accept qubits, so the probability of escaping detection after $k$ measurements of $\hat{O}_L$ scales as $(1-q_L)^k$ (note that for ideal measurements, Alice will notice cheating as soon as she receives the [*first*]{} wrong result from Bob). Thus, in case of performing joint measurements on at most $L$ qubits, for $N/L >> 1$, the protocol would still be fair. In case $L \propto N$ the fairness of our protocol can be seen as a consequence of the unconditional security of the BB84 protocol in the presence of noisy channels and imperfect sources and detectors [@qcryptography-security]: if we interpret $(1- \alpha)$ as the error rate due to the noise (see the paragraph below), then in order to successfully cheat in our contract signing protocol, an agent would have to be correct on [*both*]{} $\alpha N_A$ qubits from the Accept basis and $\alpha N_R$ qubits from the Reject basis, which in turn contradicts the unconditional security of the BB84 protocol. Namely, in order to pass the test by Alice and Bob and thus learn the secret key, Eve has to know the states of qubits from both mutually unbiased bases. In the presence of errors, she needs to be correct only on an $\alpha$ fraction of the qubits. Since quantum cryptography is unconditionally secure, that is not possible, except with exponentially small probability. Thus, our quantum contract signing protocol is fair. 
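Both scaling claims — the exponential suppression of undetected joint-measurement cheating and the $N \propto \varepsilon^{-2}$ complexity implied by $\sup_m \bar{P}_{ch}(m) \propto N^{-1/2}$ — can be written down in one place. The proportionality constant $c$ below is a placeholder, not a value derived in the text:

```python
from math import ceil

def undetected_probability(q_L, k):
    """If each measurement of a joint observable O_L (different from the
    honest A-hat tensor power) yields at least one wrong Accept-basis
    result with probability q_L > 0, then k such measurements all escape
    detection with probability (1 - q_L)**k, exponentially small in k."""
    return (1 - q_L) ** k

def required_messages(eps, c=1.0):
    """From sup_m P_ch(m) ~ c / sqrt(N): number of exchanged messages
    needed so that the maximal expected probability to cheat stays
    below eps (c is a hypothetical fit constant)."""
    return ceil((c / eps) ** 2)
```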
Note that, although the errors due to imperfect technology are inevitable, it is possible, at least in principle, to make them arbitrarily low. Therefore, if for some error rate $(1- \alpha)$ it is not possible, unless with negligible probability, to have the right results on $\alpha N_{A/R}$ qubits from both the Accept and the Reject bases, then for some better equipment the fairness condition would be satisfied. In the case of measurement errors and noisy channels, one must introduce the error tolerance $\eta = M_w/M$, where $M_w=\langle m_w\rangle\equiv\eta M$ is the expected number of wrong results obtained in measuring an observable on $M$ qubits prepared in states from the observable’s eigenbasis. The coefficient $\eta$ gives the ratio of unavoidably produced wrong results: to detect cheating would then mean to obtain more wrong results than expected according to $\eta$. For $\eta < (1- \alpha)$ and big enough $N$, our protocol would therefore still be fair. Our protocol can be modified to use only two non-orthogonal pure states, just as the B92 cryptographic protocol [@b92] is obtained by modifying the BB84. Obviously, the probabilities determining the features and complexity of the protocol would be quantitatively different, but the protocol would clearly still be fair. We note that it is straightforward to design a protocol in which a single client can contact Trent to obtain a signed contract. In this case, Trent sends to a client, say Alice, classical information about only a half of, randomly chosen, Bob’s qubit states. This information is used by Alice to check Bob’s measurements: whether he is measuring the Accept observable, or not. If so, he is measuring the Accept observable on the rest of his qubits as well (he does not know for which qubits Alice has the classical information and for which not). 
Thus, the results Bob provided her for the rest of his qubits are used by Trent to verify Alice’s data: instead of Bob appearing with his results, Alice provides those given to her by Bob. Note that in this case, unlike before, the corresponding average probability to cheat is the probability of a real event: the joint probability that one agent cannot bind the contract, while the other can. This version of the protocol is also suited for the use of entangled pairs, instead of qubits in definite pure states. In this case, Trent produces $4N$ pairs of maximally entangled qubits in the singlet state $|\psi^- \rangle = (|01\rangle - |10\rangle )/\sqrt{2}$. One qubit of each of the first $2N$ entangled pairs is distributed to Alice, while the other is kept by Trent. One qubit of each of the remaining $2N$ pairs is sent to Bob. Then, Trent randomly measures either of the two observables (Accept or Reject) on each of the $4N$ qubits left with him. Random $N$ results obtained from measuring the first $2N$ qubits are sent to Bob (together with the classical information of their positions within the first $2N$ pairs), and analogously with the other $2N$ results ($N$ of which are sent to Alice). We see that this situation is equivalent to the above, when instead of entangled pairs, Trent sends qubits in pure states. Finally, we note that it is simple to generalize our protocol to the case of an arbitrary number of parties. In case of $n$ parties, the classical information about the preparation of each of the $N$ qubits sent to, say, the first client is divided into $(n-1)$ sets $S_i$ of $N/(n-1)$ qubits each, with $i = 2, \ldots, n$, such that each set $S_i$ is sent to the $i$-th client. The protocol is executed in $N$ rounds, such that in the $i$-th step of each round, the $i$-th client performs a single-qubit measurement and publicly announces the result to all of the remaining $(n-1)$ clients. 
To declare the contract valid, i.e. binding for all of the $n$ clients, the requirements analogous to the ones for the two-party protocol must be satisfied between each of the $\left( \begin{array}{c} n \\ 2 \end{array} \right)$ pairs of clients. This, as well as the above mentioned generalizations of our protocol, will be discussed in more detail elsewhere. VI. Conclusions {#conclusions} =============== We have presented a fair and optimistic quantum protocol for signing contracts that does not require the exchange of information with the trusted party during the Exchange phase. Unlike the classical proposals, its fairness is based on the laws of physics rather than on sending signed messages, which are only computationally secure. Thus, no keys are generated during the Exchange phase and the protocol is abuse-free. For single-qubit orthogonal measurements, the complexity of the protocol scales as $N \propto \varepsilon^{-2}$, where $N$ is the number of exchanged messages and $\varepsilon$ a given threshold for the probability to cheat. Analogously to the previous proposal of a quantum contract signing protocol [@qsig-previous], the present one could also be performed using either entangled pairs or two, instead of four, pure states, but it does not require tamper-proof devices nor is it based on the effects of decoherence (which are necessary for achieving [@qsig-previous]). Also, it is simple to generalize it to involve many clients. Our protocol can easily be altered such that it is enough for only one client to present her/his results in order to bind the contract and obtain the document declaring the contract as valid. In this case, the probability to cheat becomes the joint probability that one client can bind the contract, while the other cannot. The fairness condition of a negligible probability to cheat is in this case slightly weaker than the one of [@ben:90]. 
Also, in our quantum protocol the trusted party is slightly more involved than in the classical protocol presented in [@rabin:83]. Nevertheless, being based on the laws of physics, our protocol has other security advantages over the classical counterparts. Moreover, it can be used for designing other quantum information and security protocols. Indeed, our approach was used to design the protocols of simultaneous dense coding and teleportation between spatially distant clients [@daowen:11]. Finally, our protocol could be easily performed with the current technology used in quantum cryptography. The possible future lines of research include a detailed study of the effects of noise and multi-particle measurements on the fairness of the protocol and its complexity. Today’s quantum technology is still in its infancy, and experimental realizations and technological applications of quantum information protocols are relatively rare and not always as reliable as classical alternatives. Even the security of quantum cryptography, which is proven to be unconditional under the effects of noise, is still very much dependent on the technology used. Nevertheless, the development of quantum technology has already proven to be remarkable and beyond many of the early expectations, and will surely continue. Another possible line of future research could be a further study of alternative versions of quantum contract signing protocols with possibly better properties, and designing novel quantum information, security and computation protocols involving spatially distant clients that require timely decisions. NP and PM thank the project of SQIG at IT, funded by FCT and EU FEDER projects QSec PTDC/EIA/67661/2006 and QuantPrivTel PTDC/EEA-TEL/103402/2008, IT Project QuantTel, and Network of Excellence, Euro-NF. The authors acknowledge discussions with V. Božin and Č. Brukner. S. Even and Y. Yacobi, Technical Report 175, Technion (1980). N. Asokan, V. Shoup and M. Waidner, 
IEEE J. Sel. Areas Commun. [**18**]{}, 593 (2000). M. J. Fischer, N. A. Lynch and M. Paterson, J. ACM [**32**]{}, 374 (1985). S. Even, Technical Report 231, Computer Science Department, Technion, Israel (1982). S. Even, O. Goldreich and A. Lempel, [*Communications of the ACM*]{} [**28**]{}, 637 (1985). M. Rabin, [*J. Computer and System Sciences*]{} [**27**]{}, 256 (1983). M. Ben-Or, O. Goldreich, S. Micali and R. L. Rivest, [*IEEE Trans. Inf. Theo.*]{} [**36**]{}, 40 (1990). N. Asokan, M. Schunter and M. Waidner, [*CCS’97*]{}, 7 (1997). J. Garay, M. Jakobsson and P. MacKenzie, CRYPTO ’99, LNCS 1666, pp. 449–466, Springer-Verlag (1999). D. Chaum and S. Roijakkers, CRYPTO ’90, LNCS 537, 206–214 (1991); G. Wang, IEEE Transactions on Information Forensics and Security [**5**]{} (2010). H. Situ, D. Qiu, P. Mateus and N. Paunković, submitted for publication, arXiv:1106.3956v1 \[quant-ph\]. C. H. Bennett and G. Brassard, [*IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India*]{}, 175 (1984). H.-K. Lo and H. F. Chau, Science [**283**]{}, 2050 (1999); P. W. Shor and J. Preskill, Phys. Rev. Lett. [**85**]{}, 441 (2000); D. Mayers, J. ACM [**48**]{}, 351 (2001); V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dusek, N. Lutkenhaus and M. Peev, Rev. Mod. Phys. [**81**]{}, 1301 (2009). P. W. Shor, [*Proc. 35th Annual Symposium on Foundations of Computer Science*]{}, IEEE Press, 124 (1994). Č. Brukner, M. Zukowski, J.-W. Pan and A. Zeilinger, Phys. Rev. Lett. [**92**]{}, 127901 (2004); H. Buhrman, R. Cleve, S. Massar and R. de Wolf, arXiv:0907.3584 (2009). G. Brassard, A. Broadbent and A. Tapp, [*Proceedings of the 8th International Workshop on Algorithms and Data Structures*]{}, Vol. [**2748**]{} of Lecture Notes in Computer Science, 1 (2003); Č. Brukner, N. Paunković, T. Rudolph, and V. Vedral, Int. J. Quant. Info. [**4**]{}, 365 (2006). M. A. Nielsen and I. L.
Chuang, [*Quantum Computation and Quantum Information*]{}, Cambridge University Press (2000). I. D. Ivanović, J. Phys. A [**14**]{}, 3241 (1981). C. W. Helstrom, [*Quantum Detection and Estimation Theory*]{}, Academic Press, New York (1976). A. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991). H.-K. Lo and H. F. Chau, Phys. Rev. Lett. [**78**]{}, 3410 (1997). D. Mayers, PhysComp’96, Boston (1996). C. Mochon, FOCS 2004, quant-ph/0403193. C. H. Bennett, Phys. Rev. Lett. [**68**]{}, 3121 (1992). J. Bouda, P. Mateus, N. Paunković and J. Rasga, Int. J. Quant. Inf. [**6**]{}, 219 (2008). [^1]: Both quantum complementarity and quantum entanglement could be seen as consequences of the Heisenberg uncertainty principle and, ultimately, the superposition principle. [^2]: Obviously, if the second fairness requirement is satisfied, the first is satisfied as well: having a negligible probability to cheat is a stronger condition than the symmetry between the clients’ probabilities to bind the contract. [^3]: Since each sequence of $N$ qubit states, prepared such that exactly $N_R$ are from the Reject basis, is equally probable, the probability $P(n \mbox{ in R};m)$ is given by the ratio of the number of sequences whose $n$ out of $N_R$ qubits prepared in the Reject basis are among the first $m$ qubits, while the other $(N_R-n)$ are among the remaining $(N-m)$ qubits, $\left( \begin{array}{c} m \\ n \end{array} \right) \left( \begin{array}{c}N- m \\ N_R-n \end{array} \right)$, to the total number of allowed sequences, $\left( \begin{array}{c} N \\ N_R \end{array} \right)$. [^4]: When measuring the Accept observable $\hat{A}$ on a state from the Reject basis, each of the two possible results is equally probable. Therefore, each sequence of $n$ such measurement results has the probability $2^{-n}$. If exactly $i$ out of $n$ individual-qubit results do not match those given by Trent to the other client, and if $i<(1-\alpha)N$, then it is still possible to reject the contract.
There are $\left( \begin{array}{c} n \\ i \end{array} \right)$ such measurement results for given $n$. [^5]: First, we consider measurements of $\hat{K}$ on qubits whose states are from the Accept basis. Let $p_k(0|0)$ be the conditional probability to obtain the result $0$ when measuring an observable $\hat{K}$ on the state $|0\rangle$, and analogously for the other conditional probabilities. Notice that, when measuring an arbitrary one-qubit orthogonal observable $\hat{K}$, one has $p_k(0|0)=p_k(1|1)=s_a$: the probabilities to obtain the right result on the states from the Accept basis are the same. The two probabilities are also equal when one measures $\hat{A}$ with frequency (i.e., probability) $q_a$ and $\hat{R}$ with frequency $1-q_a$. The probability to obtain the right result on the states from the Accept basis is then $S_a= q_a+(1-q_a)/2=(1+q_a)/2$: when measuring $\hat{A}$ (with probability $q_a$) one obtains the right result with probability $1$, while when measuring $\hat{R}$ (with probability $1-q_a$) one obtains the right result with probability $1/2$. In order for these two measurements to be equivalent, it is necessary that $s_a=S_a=(1+q_a)/2$. In the case of the orthogonal observable $\hat{K}$, we have $s_a=\cos^2(\theta/2)$, and thus $q_a=\cos\theta$. The same argument applies to measurements performed on the states from the Reject basis.
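The counting arguments of footnotes [^3] and [^5] lend themselves to a quick numerical check. The sketch below uses arbitrarily chosen values of $N$, $N_R$ and $m$ (they correspond to no particular run of the protocol) and verifies that the hypergeometric probabilities $P(n \mbox{ in R};m)$ sum to one, and that the orthogonal measurement at angle $\theta$ is equivalent to the mixed $\hat{A}$/$\hat{R}$ strategy with $q_a=\cos\theta$.

```python
from math import comb, cos, pi

# Footnote [^3]: P(n in R; m) is hypergeometric; summed over all allowed n
# the numerators must reproduce the total number of sequences C(N, N_R).
N, N_R, m = 20, 10, 7          # invented example values
total = sum(comb(m, n) * comb(N - m, N_R - n)
            for n in range(max(0, N_R - (N - m)), min(m, N_R) + 1))
assert total == comb(N, N_R)   # Vandermonde's identity

# Footnote [^5]: an orthogonal measurement at angle theta succeeds on the
# Accept basis with s_a = cos^2(theta/2); the equivalent mixed strategy
# (measure A with probability q_a = cos(theta), R otherwise) succeeds with
# S_a = q_a + (1 - q_a)/2 = (1 + q_a)/2, and the two must coincide.
for k in range(1, 10):
    theta = k * pi / 10
    s_a = cos(theta / 2) ** 2
    q_a = cos(theta)
    assert abs(s_a - (1 + q_a) / 2) < 1e-12
```

Both checks are exact identities (the Vandermonde convolution and the half-angle formula), so they hold for any admissible choice of parameters.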
--- address: | DESY, Notkestr. 85, D-22603 Hamburg, Germany\ E-mail: fpschill@mail.desy.de author: - 'F.-P. Schilling  \[H1 Collaboration\]' title: | Diffractive Jet Production in DIS\ – Testing QCD Factorisation --- Overview ======== At HERA, colour singlet exchange or [*diffractive*]{} processes are studied in deep-inelastic $ep$ scattering (DIS), where the exchanged photon with virtuality $Q^2$ provides a probe to determine the QCD (i.e. quark and gluon) structure of diffractive exchange. In [@collins], it was proven that QCD hard scattering factorisation is valid in diffractive DIS, so that [*diffractive parton distributions*]{} $p_i^D$ in the proton can be defined as quasi-universal objects. The hypothesis of a factorising $x_{{I\!\!P}}$ dependence ([*Regge factorisation*]{}) is often used in addition. Measurements of inclusive diffractive DIS in terms of the [ *diffractive structure function*]{} $F_2^{D(3)}(x_{{I\!\!P}},\beta,Q^2)$ mainly constrain the diffractive quark distribution. By contrast, diffractive dijet production is directly sensitive to the gluon distribution $g^D(z,\mu^2)$ (Fig. \[fig2\]), which can be inferred only indirectly from the scaling violations of $F_2^{D(3)}$. QCD factorisation can be tested by predicting the dijet cross sections using the pdf’s extracted from $F_2^{D(3)}$. ![Inclusive diffractive scattering at HERA [*(left)*]{} and diffractive dijet production [*(right)*]{}, viewed in a partonic picture.[]{data-label="fig2"}](djets-kine.grey.eps){height="3.0cm"} Furthermore, the predictions of a variety of phenomenological models for diffractive DIS such as soft colour neutralisation or 2-gluon exchange can be confronted with the dijet cross sections. Data Selection and Cross Section Measurement ============================================ The data sample corresponds to an integrated luminosity of $\mathcal{L}=18.0 \rm\ pb^{-1}$ and was obtained with the H1 detector at HERA. 
Dijet events were identified using the CDF cone algorithm and diffractive events were selected by the requirement of a large rapidity gap in the outgoing proton direction. The kinematic range of the measurement is $4<Q^2<80 \ \mathrm{GeV^2}$, $p^*_{T, jet}>4 \ \mathrm{GeV}$, $x_{{I\!\!P}}<0.05$, $M_Y<1.6 \ \mathrm{GeV}$ and $|t|<1.0 \ \mathrm{GeV^2}$. The cross sections were corrected for detector and QED radiative effects and the systematic uncertainties, which dominate the total errors, were carefully evaluated. Diffractive Parton Distributions ================================ Parton distributions for the diffractive exchange[^1] were extracted from DGLAP QCD fits to $F_2^{D(3)}(x_{{I\!\!P}},\beta,Q^2)$ in [@h1f2d94]. The parton distributions were found to be dominated by gluons ($80-90\%$ of the exchange momentum). If these parton distributions, which evolve according to the DGLAP equations, are used to predict the diffractive dijet cross sections, a very good agreement is obtained (Fig. \[fig5ab\]). Fig. \[fig7\] shows the measurement of the dijet cross section as a function of $z_{{I\!\!P}}^{(jets)}$, an estimator for the parton momentum fraction of the diffractive exchange which enters the hard process (Fig. \[fig2\] right). A very good agreement in shape and normalisation is obtained if the [ *fit 2*]{} parton distributions from [@h1f2d94] are used. The [*fit 3*]{} parameterisation, in which the gluon distribution is peaked towards high $z_{{I\!\!P}}$ values, is disfavoured[^2]. Using different factorisation scales ($\mu^2=Q^2+p_T^2$ in Fig. \[fig7\]a, $\mu^2=p_T^2$ in Fig. \[fig7\]b) or including a resolved virtual photon contribution (Fig. \[fig7\]a) in the model prediction does not alter these conclusions. The dijet data thus strongly support the validity of QCD factorisation in diffractive DIS and give tight constraints on the diffractive gluon distribution in both shape and normalisation. 
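For orientation, $z_{{I\!\!P}}^{(jets)}$ is reconstructed from hadron-level quantities; a sketch of the commonly used definition $z_{{I\!\!P}}^{(jets)}=(Q^2+M_{12}^2)/(Q^2+M_X^2)$, with $M_{12}$ the dijet invariant mass and $M_X$ the mass of the full diffractive system, is given below. This definition is assumed here for illustration, since it is not spelled out in the text, and the numbers are invented.

```python
# Hadron-level estimator of the parton momentum fraction of the
# diffractive exchange entering the hard subprocess (assumed definition):
#   z_pom_jets = (Q^2 + M12^2) / (Q^2 + MX^2)
def z_pom_jets(Q2, M12_sq, MX_sq):
    return (Q2 + M12_sq) / (Q2 + MX_sq)

# Invented example point: Q^2 = 20 GeV^2, M_12 = 12 GeV, M_X = 20 GeV.
z = z_pom_jets(20.0, 144.0, 400.0)
```

By construction $0 < z_{{I\!\!P}}^{(jets)} \leq 1$, and the estimator approaches $1$ when the dijet system exhausts the whole diffractive mass.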
The measured $z_{{I\!\!P}}^{(jets)}$ cross sections in bins of the scale $\mu^2=Q^2+p_T^2$ (Fig. \[fig8\]a) are in good agreement with the prediction based on a DGLAP evolution of the diffractive parton distributions. The $z_{{I\!\!P}}^{(jets)}$ cross sections in bins of $x_{{I\!\!P}}$ (Fig. \[fig8\]b) demonstrate consistency with Regge factorisation. In a Regge framework, the energy dependence of the cross section is determined in terms of an effective [*pomeron intercept*]{} $\alpha_{{I\!\!P}}(0)=1.17\pm0.07$ (stat.+syst.) from the $x_{{I\!\!P}}$ cross section (Fig. \[fig6\]a), consistent with the result from [@h1f2d94]. The cross section as a function of $\beta$ is shown in Fig. \[fig6\]b. Soft Colour Neutralisation Models ================================= In Fig. \[fig10\], the cross sections are compared with models based on the ideas of soft colour neutralisation to produce large rapidity gaps. These are the original version of the ‘soft colour interactions’ model (SCI) [@sci], the improved version of SCI based on a generalised area law [@scinew] and the ‘semi-classical model’ [@semicl]. The original SCI and the semi-classical models give good descriptions of the differential distributions. However, none of these models is yet able to simultaneously reproduce shapes and normalisations of the dijet cross sections. Colour Dipole and 2-gluon Exchange Models ========================================= Models for diffractive DIS based on the diffractive scattering of $q\bar{q}$ or $q\bar{q}g$ photon fluctuations off the proton by 2-gluon exchange are confronted with the data in Fig. \[fig11\] for the limited kinematic range of $x_{{I\!\!P}}<0.01$, where contributions from quark exchange can be neglected. The ‘saturation model’ [@sat], which takes only $k_T$-ordered configurations of the final state partons into account, reproduces the shapes of the differential distributions, but underestimates the cross sections by a factor of two. The model of Bartels et al. 
[@bartels], in which non-$k_T$-ordered configurations are also taken into account, is found to be in reasonable agreement with the data if a free parameter $p_{T,g}^{cut}$ is fixed to $1.5 \rm\ GeV$[^3]. Conclusions =========== Diffractive dijet production has been shown to be a powerful tool to gain insight into the underlying QCD dynamics of diffraction, in particular the role of gluons. Factorisable, gluon-dominated diffractive parton distributions successfully describe diffractive jet production and inclusive diffraction in DIS at the same time, in agreement with QCD factorisation. [99]{} H1 Collaboration, C. Adloff [*et al.*]{}, . F.-P. Schilling, Ph.D. thesis, Univ. Heidelberg (2001), DESY-THESIS-2001-010. J. Collins, , err.-ibid. [**D 61**]{} (2000) 019902. H1 Collaboration, C. Adloff [*et al.*]{}, . A. Edin, G. Ingelman, J. Rathsman, . J. Rathsman, . W. Buchmüller, T. Gehrmann, A. Hebecker, . K. Golec-Biernat, M. Wüsthoff, . J. Bartels, H. Lotter, M. Wüsthoff, ;\ J. Bartels, H. Jung, M. Wüsthoff, . [^1]: The assumption of Regge factorisation was found to be compatible with the data. [^2]: The corresponding gluon distributions are shown above the cross sections. [^3]: $p_{T,g}^{cut}$ corresponds to the minimum $p_T$ of the final state gluon in the case of $q\bar{q}g$ production.
--- abstract: 'This article continues a discussion raised in previous publications (LANL preprint server, nucl-th/0202006 and nucl-th/0202020). I try to convince my opponents that general arguments are not “my case" and may be applied to their model.' author: - 'V. Yu. Ponomarev[@byline2]' title: 'On “the authentic damping mechanism” of the phonon damping model. II [^1]' --- To briefly recall a discussion which is already distributed over several publications: A damping mechanism of giant resonances (GR) is well established and now represents basic knowledge in nuclear structure physics. Calculations performed by many groups of authors within different microscopic approaches confirm that a spreading width (due to a coupling of collective modes, phonons, to complex configurations) is the main part of the total GR width in medium and heavy nuclei. In light nuclei, a coupling to continuum (an escape width) also plays an essential role. The damping mechanism of GRs in the phenomenological phonon damping model (PDM) in its PDM-1 version is different from that (see an important clarification in [@note0]). A collective phonon fragments within the PDM-1 as a result of coupling to simple and not to complex configurations, i.e. only the so-called Landau damping mechanism is accounted for. The coupling strength is a phenomenological model parameter which is adjusted to reproduce the GR width known from experiment. The agreement with data provided by fits within the PDM may be characterized as ranging from very good to excellent. In a recent article [@n9] which raised the present discussion, it has been concluded that fits of this type confirm “the [**authentic**]{} damping mechanism” of the PDM as “the result of coupling between collective phonon and non-collective $p$-$h$ configurations” (i.e. the well-established knowledge on the GR properties was put in doubt). This conclusion has been criticized in my article [@m].
It has been argued that this model has the Breit-Wigner (BW) form for the phonon distribution as an [*ad hoc*]{} input and thus, even an excellent description of the available data is not surprising. The fruitfulness of the idea of drawing conclusions from fits in which model parameters are adjusted to describe physical observables has been put in doubt. Although my evaluation of the PDM in [@m] was made from the point of view of general physical grounds, Dang [*et al.*]{} did not agree with me in the forthcoming publication [@nn]. They claim that I consider some specific case (“his case") which cannot be attached to the PDM and that all my arguments “[*are either wrong or irrelevant*]{}". I cannot agree with their conclusion and present below additional arguments in a sequence following the paragraphs in [@nn]: [**2.**]{} For the giant dipole resonance (GDR), the energy scale associated with variations in the coupling matrix between a phonon and uncorrelated $1p1h$ states is of the order of a few hundred keV. The width of the GDR strength function is of the order of a few MeV. So, I do not agree that the condition cited in [@nn] from [@Boh69] is satisfied in the GDR region: why should a few MeV be small compared to a few hundred keV? I know only one PDM-1 article [@n1] in which it is assumed that a phonon interacts 40 times more strongly with some specific configurations than with other ones (see more on this article in [**9.**]{} below). In all other PDM-1 papers, we find a single phonon which interacts equally with all $1p1h$ configurations. I do not want to discuss here the PDM fits at non-zero temperature. To keep on reproducing the data in hot nuclei, Dang [*et al.*]{} have to assume for unclear reasons that a phonon prefers to interact with $1p1p$ and $1h1h$ configurations about 10 times more strongly than with $1p1h$ configurations. Again, as in the case of cold nuclei, the goal of providing the best fits is preferred over an understanding of the physics. I think it is a blind alley for theory.
It is true that the PDM equations are presented in a general form, with different $V_{q_1 s_1}$, in many papers by this group. But the point is that they are never used in this form in actual calculations. For this reason, I prefer to discuss what is used in calculations rather than what is written and not used even by the PDM authors themselves. [**3.**]{} It is very simple to transform Eq. (1) in [@nn] for $m_q^{(2)}$ into Eq. (1) in [@m] for $W_2$, although Dang [*et al.*]{} claim it is impossible. For that, one needs to switch off the additional PDM smearing, i.e., consider the limit $\varepsilon \to 0$. This immediately leads to the first line of Eq. (2D-14) in [@Boh69]. Eq. (1) in [@m] (for a constant coupling strength) or its general form in [@Boh69]: $$\hspace*{60mm}W_2 = \sum_{a,\alpha} (V_{a \alpha})^2 \hspace*{60mm} \mbox{(2D-14)}$$ for the second moment $W_2$ is relevant to the PDM as well as to any model which deals with interacting systems. Of course, to perform this transformation one should use the PDM strength function introduced in Ref. [@o1]: $$S_q(E) = \frac{1}{\pi} \frac{\gamma_q(E)}{\left(E-\omega_{q}-P_q(E)\right)^2+\gamma_q^2(E)}~ \label{e1}$$ where $\gamma_q(E)$ is the PDM damping, $P_q(E)$ is the polarization operator (see, e.g., Ref. [@o1] for definitions), and $\omega_{q}$ is a phonon energy, a model parameter. The strength function $S_q(E)$ represents the fragmentation properties of a PDM phonon over the eigen-states of the PDM Hamiltonian, smeared with an additional parameter $\varepsilon$. The parameter $\varepsilon$ appears in $\delta(E) = \varepsilon/[\pi \cdot (E^2+ \varepsilon^2)]$ for the $\delta$-functions in $\gamma_q(E)$.
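For concreteness, the behaviour of the strength function (\[e1\]) with a constant coupling can be sketched numerically. All parameters below (the phonon energy $\omega_q$, the coupling $V$, the smearing $\varepsilon$, and an equidistant toy $1p1h$ spectrum) are invented for illustration and do not correspond to any published PDM fit.

```python
import math

# Toy setup: one phonon at omega_q coupled with a constant strength V to
# N equidistant "1p1h" poles E_s; the delta functions in gamma_q(E) are
# smeared with the Lorentzian of width eps, as in the text.
omega_q = 15.0          # phonon energy (MeV), illustrative
V       = 0.5           # constant coupling (MeV), illustrative
eps     = 0.5           # smearing parameter epsilon (MeV)
N       = 50
poles   = [5.0 + 20.0 * s / (N - 1) for s in range(N)]

def self_energy(E):
    """Sigma(E + i*eps) = sum_s V^2 / (E - E_s + i*eps)."""
    return sum(V**2 / complex(E - Es, eps) for Es in poles)

def S_q(E):
    """Strength function (e1): P_q = Re Sigma, gamma_q = -Im Sigma."""
    sigma = self_energy(E)
    P, gamma = sigma.real, -sigma.imag
    return gamma / math.pi / ((E - omega_q - P)**2 + gamma**2)

# S_q is a spectral function: non-negative and of (almost) unit weight
# once integrated over a window wide enough to contain all eigen-peaks.
grid = [-30.0 + 0.02 * i for i in range(4501)]
norm = sum(S_q(E) for E in grid) * 0.02
```

Within the numerical accuracy the strength function carries unit spectral weight, and its envelope is a single bump centered near $\omega_q$: the BW behaviour discussed in the text.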
I point this out because the strength function (\[e1\]) has been replaced in the subsequent PDM articles [@n9; @nn; @n1; @n2; @n3; @n4; @n5; @n6; @n7; @n8; @n10; @n11; @n12; @n13] by its approximate form: $$S_q'(E) = \frac{1}{\pi} \frac{\gamma_q(E)}{\left(E-E_{GDR}\right)^2+\gamma_q^2(E)} \label{e2}$$ where $E_{GDR}$ should be taken as a solution of $$f(E) \equiv E-\omega_{q}-P_q(E)=0~. \label{e3}$$ Eq. (\[e2\]) has been obtained from Eq. (\[e1\]) by expanding $P_q(E)$ near a solution of Eq. (\[e3\]), $E_{GDR}$, and then extrapolating the properties of this approximation far away from $E_{GDR}$. In the limit $\varepsilon \to 0$, Eq. (\[e3\]) has $N+1$ solutions corresponding to the eigen-energies of the PDM Hamiltonian. [**4.**]{} I never claimed that the BW form for the phonon distribution is assumed within the PDM. But it is indeed an [*ad hoc*]{} input for PDM calculations. I may refer again to [@Boh69] where we read that “[*the Breit-Wigner form for the strength function is an immediate consequence of the assumption of a constant coupling to the other degrees of freedom of the system*]{}". The BW under discussion has nothing to do with the definition of the PDM strength function. Indeed, in the limit $\varepsilon \to 0$, $S_q(E)$ turns into a set of infinitely narrow lines while their envelope still remains the BW. [**5.**]{} I do not agree that the calculation with random values of $E_{\alpha}$ in [@m] “[*no longer corresponds to the PDM*]{}". I have used the PDM Hamiltonian, and the details of the spectrum and model parameters are only technical details of a calculation. The purpose of my calculation is to demonstrate that “[*the crucial feature of the PDM is the use of realistic single-particle energies*]{}" [@nn] is of marginal importance when the configuration space is not small; everything is determined by the BW discussed above. $E_0$ in Ref. [@nn] belongs to the Lorentz line in a hypothetical nucleus and not to my PDM fits. Eigen-energies in my calculation in Ref.
[@m] were obtained from Eq. (\[e3\]) in the limit $\varepsilon \to 0$. [**6.**]{} I agree that if something “[*is by no mean\[s\] obvious*]{}" it has to be checked. My experience with microscopic calculations tells me that an increase of collectivity tends to increase the coupling strength. Of course, it is not necessary that everybody should trust my experience. But then, there are no other alternatives: the one who puts it in doubt should check it independently. [**7.**]{} I never claimed that there are some reasons “[*why the values of $f_1$ for $^{40}$Ca and $^{48}$Ca should be the same*]{}", as there are no reasons to keep this parameter fixed along chains of isotopes. As pointed out in [@m], this parameter has no physical meaning. My issue is that one cannot learn anything from agreement with experiment obtained from fits in which a free parameter is adjusted to the observable being described. It is important to stress once again: On page 4 of the article under discussion [@n9], we find: “[*For double closed-shell nuclei $^{16}$O and $^{40,48}$Ca, where the pairing gap is zero, such a kind of enlargement of configuration space is compensated simply by a renormalization of $f_1$, which reduces its value by $\sim 25\%$ for $^{16}$O, and $\sim 35-37\%$ for $^{40,48}$Ca.*]{}" I read this statement, which comes after the discussion of open-shell nuclei, as a renormalization of $f_1$ in closed-shell nuclei with respect to open-shell nuclei. Or do Dang [*et al.*]{} mean to say that calculations in double-magic nuclei have been performed with pairing, as one may now conclude from [@nn]: “[*The results for GDR in $^{16}$O have been obtained already within the enlarged space*]{}"? Obviously, only the authors know whether they renormalized their $f_1$ in calculations along chains or not. Of course, I will take out the statement on the $f_1$ renormalization from $^{16}$O to $^{18}$O if it is not true.
But before that, I need some help from the authors as to how a reader should interpret the above cited statements. [**8.**]{} It is clearly explained in [@m] why, when comparing the PDM predictions in $^{40}$Ca to the data [@Har00], the strongest $1^-$ state observed should be excluded from consideration (because it has a two-phonon nature and two-phonon states are not included in the PDM model space). Thus, the PDM 0.25% of the TRK EWS corresponds to 0.007% and not to 0.025% from this experiment. It seems to me that Dang [*et al.*]{} try again to hide a huge disagreement by a misleading comparison. The same conclusion, that the PDM is not capable of reproducing “[*the significant experimental difference in the E1 strengths*]{}" of $^{40}$Ca/$^{48}$Ca, has been obtained independently by another group of authors [@Har01]. As they write, “[*It is important to note that the parameters of the PDM are adjusted to reproduce the gross structure of the GDR while investigations of $\gamma$-ray strength function models show that the extrapolation of the strength distribution down to energies below the particle threshold leads to unrealistic high dipole strengths and overestimates the experimental data*]{}". Thus, it is not only my point of view that the conclusion in [@n9] (on a quantitative description of the pygmy dipole resonance within the PDM) is not justified. It is not true that the PDM with a structureless phonon has no problems with double counting. If a phonon internal structure is not accounted for, this does not change the physical meaning of the phonon. The PDM configuration space contains a phonon and uncorrelated $1p1h$ configurations. The latter are also excited from the ground state and each of them has its own $B_{1p1h}(E1)$ value. If the $1p1h$ spectrum is rather complete (which is always true in the PDM calculations), these uncorrelated $1p1h$ states alone exhaust about 100% of the TRK EWSR.
But the PDM physics is determined only by the phonon strength function and not by its sum with the $N$ strength functions of the uncorrelated $1p1h$ configurations. This is equivalent to $B_{1p1h}(E1) = 0$ within the PDM. [**9.**]{} The previous article on pygmy resonances by Dang [*et al.*]{} [@n1] was not a subject of [@m]. But if Dang [*et al.*]{} raise a discussion on it in [@nn], I have some comments on it too: The capability of the phonon damping model (PDM) to describe giant dipole resonance (GDR) damping in neutron-rich nuclei has been tested in [@n1]. To mimic essential differences between double-magic and exotic nuclei, a coupling between a phonon and some $1p1h$ configurations near the Fermi surface has been strongly enhanced in the latter. As a result, a phonon interacts with these selected $1p1h$ configurations with a strength “[*equal to 41 MeV for oxygen isotopes, 13.856 MeV for calcium isotopes, and 6.928 MeV for tin isotopes*]{}" [@note1]. Let us try to understand how it is possible to sustain such an enormous coupling strength, which is far away from nuclear structure scales, and report an agreement with experiment from this type of calculation. For that, I have repeated the PDM calculations for $^{16}$O and $^{18}$O at zero temperature, keeping all the details of Ref. [@n1] and employing a realistic $1p1h$ spectrum from a Hartree-Fock calculation with SGII Skyrme forces [@note2]. The results of my calculations [@code] are shown in Fig. \[f1\]. The difference in $S_q(E)$ between $^{16}$O and $^{18}$O in Fig. \[f1\] is dramatic but not surprising. It is due to the fact that a phonon couples to all $1p1h$ configurations with an equal strength of $F_1=1.025$ MeV (a PDM parameter) in $^{16}$O, while the coupling strength for the $[1p_{1/2} 2d_{5/2}]_{\nu}$ configuration at $E_{[1p_{1/2} 2d_{5/2}]_{\nu}}=8.2$ MeV in $^{18}$O has been enhanced to $F_1'= 40 F_1 = 41$ MeV, following the details of the calculations in Ref. [@n1].
As a result, we find the GDR in $^{18}$O between 40 and 60 MeV, and about 40% of its strength is pushed to $-20$ MeV. The energy of the ground state is 0 MeV. Comparing the dashed lines in Fig. \[f1\], one may be surprised that $S_q'(E)$ does not feel this enormous matrix element of 41 MeV in $^{18}$O, confirming the results in [@n1]. But a very important detail is that $S_q'(E)$ in Fig. \[f1\] has been calculated with $E_{GDR} = 22.5$ MeV from Ref. [@n1]. It has been done in an attempt to reproduce the GDR strength functions published in Fig. 3 of [@n1], which are found to be in agreement with the data available for $^{16}$O and $^{18}$O. Taking into account that the employed $1p1h$ spectrum might not be exactly the same as in Ref. [@n1], it is possible to conclude that the dashed curves in Fig. \[f1\] reproduce the results in Figs. 3a, 3b of Ref. [@n1] at a rather good qualitative level. The only problem is that it is not possible to obtain $E_{GDR} = 22.5$ MeV in $^{18}$O, reported in Ref. [@n1], with parameters from this article as a solution of Eq. (\[e3\]). To demonstrate this, let us consider the behavior of the function $f(E)$ of Eq. (\[e3\]) in $^{16}$O and $^{18}$O. In $^{16}$O, it shows a continuous increase with fluctuations reflecting the $1p1h$ poles smeared by the parameter $\varepsilon$, and crosses the line $y=0$ in my calculation at $E_{GDR} = 18.5$ MeV (see Fig. \[f2\], left). In $^{18}$O (Fig. \[f2\], right), the fluctuation around $E_{[1p_{1/2} 2d_{5/2}]_{\nu}}$ increases enormously because of the 41 MeV coupling matrix element corresponding to this pole, yielding a spurious solution of Eq. (\[e3\]) at this energy. The physical PDM solutions of Eq. (\[e3\]) in $^{18}$O have $E_{GDR} = -19.7$, 50.6, and 55.0 MeV, which are very different from $E_{GDR} = 22.5$ MeV reported in Ref. [@n1]. $S_q'(E)$ calculated with any of them is dramatically different from the one in Fig. \[f1\] (right). The “$^{18}$O effect" can be obtained even without any calculations.
For that, one may neglect the phonon coupling to all $1p1h$ configurations (with the “weak" matrix element $F_1$) except for the $[1p_{1/2} 2d_{5/2}]_{\nu}$ configuration. Then, in the limit $\varepsilon \to 0$, Eq. (\[e3\]) transforms into the quadratic equation: $$E-\omega_{q}-\frac{(F_1')^2 \cdot n}{E-E_{[1p_{1/2} 2d_{5/2}]_{\nu}}}=0~. \label{e4}$$ where the factor $n$ accounts for the partial occupation of the $\nu2d_{5/2}$ level in $^{18}$O. Eq. (\[e4\]) yields the PDM eigen-states at $-19.0$ and 49.4 MeV, with a phonon strength distribution among them of 40% and 60%, respectively. It becomes clear that the agreement with experiment for $^{18}$O (and accordingly for other neutron-rich nuclei) reported in Ref. [@n1] has been obtained by making use of the approximate PDM strength function $S_q'(E)$ and a GDR energy which is not a solution of Eq. (\[e3\]), as announced. The correct PDM strength function with parameters from Ref. [@n1] for $^{18}$O is presented by the solid curve in the right part of Fig. \[f1\]. [**1 and 10.**]{} I have examined the PDM from the point of view of general physical grounds. My arguments and conclusions are presented in [@m] and above. I think a reader may independently conclude whether general rules do not apply to this model (as the claims of Dang [*et al.*]{} in [@nn] may be understood) and whether one learns any physics from the PDM fits in Refs. [@n9; @n1; @o1; @n2; @n3; @n4; @n5; @n6; @n7; @n8; @n10; @n11; @n12; @n13; @n14; @n15; @n16], although agreement with experiment is always reported by the authors. [99]{} Permanent address: Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna, Russia. V. Yu. Ponomarev, LANL preprint server, nucl-th/0202006v1, 1 Feb. 2002. N. Dinh Dang [*et al.*]{}, LANL preprint server, nucl-th/0202020v1, 7 Feb. 2002.
In many articles by Dang [*et al.*]{}, we find that a coupling to complex configurations is effectively accounted for within the PDM-1 in a model parameter $f$ responsible for coupling to simple configurations (the possibility to provide similar fits within the PDM-1 and the PDM-2, which phenomenologically accounts for coupling to complex configurations, is used as the only proof). As pointed out in Refs. [@m; @mm], this is a misleading statement and nothing else. Indeed, from a theoretical point of view, higher-order effects may be claimed to be effectively included in a free parameter of lower-order graphs only if the higher-order diagrams can be transformed into the lower-order diagrams with a renormalized vertex. The PDM diagrams are published in Ref. [@n2]: in Fig. 1a for a coupling to simple and in Figs. 1b-e for a coupling to complex configurations. Since none of the diagrams in Figs. 1b-e can be transformed into the diagram in Fig. 1a with a renormalized vertex, the statement that higher-order processes can be effectively accounted for by lower-order processes cannot be regarded as theoretically proven. It is also not possible to agree that two fits performed within two, in principle, different approaches (PDM-1 and PDM-2) may be used as a theoretical proof that one model automatically includes all the ingredients of the other. It is important to notice that the PDM strength parameter $f$ has to be reduced by about two orders of magnitude in the transition from the PDM-1 to the PDM-2. This takes place because the configuration space of the PDM-2 is much larger than that of the PDM-1, and there is only one purpose of the PDM fits, namely to reproduce the data. C. A. Bertulani, P. F. Bortignon, V. Yu. Ponomarev, and V. V. Voronov, Phys. Rev. Lett. [**87**]{}, 269201 (2001). A. Bohr and B. R. Mottelson, [*Nuclear Structure, vol. I*]{} (New York, Benjamin, 1969). T. Hartmann, J. Enders, P. Mohr, K. Vogt, S. Volz, and A. Zilges, Phys. Rev. Lett. [**85**]{}, 274 (2000). T.
Hartmann, J. Enders, P. Mohr, K. Vogt, S. Volz, and A. Zilges, Phys. Rev. C [**65**]{}, 034301 (2002). An interaction strength of 41 MeV is not a misprint. In Ref. [@n1] we find it again as $F_1'=40F_1$, where $F_1=1.025$ MeV for $^{16}$O in table 1. In Ref. [@nn], it is mentioned as “[*the parameter $f_1$ in Ref. \[5\] of \[4\] was increased significantly near the Fermi surface*]{}". To avoid misunderstanding, the parameters $F_1$ and $f_1$ are the same; only different notations are used in various PDM articles. I thank Dr. G. Colò for the $1p1h$ spectrum. Calculations have been performed with a Fortran code pdm.f of 42 lines. It is available from the author. The calculation time for both $S_q(E)$ and $S_q'(E)$ is about 1 sec. on a 166 MHz PC. N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**63**]{}, 044302 (2001). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**61**]{}, 064304 (2001). N. Dinh Dang [*et al.*]{}, Nucl. Phys. [**A636**]{}, 427 (1998). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**58**]{}, 3374 (1998). N. Dinh Dang [*et al.*]{}, Phys. Lett. [**445B**]{}, 1 (1998). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**59**]{}, 3128 (1999). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**60**]{}, 34306 (1999). N. Dinh Dang [*et al.*]{}, Nucl. Phys. [**A645**]{}, 536 (1999). N. Dinh Dang [*et al.*]{}, Phys. Rev. Lett. [**85**]{}, 1827 (2000). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**61**]{}, 027302 (2000). N. Dinh Dang [*et al.*]{}, Nucl. Phys. [**A675**]{}, 531 (2000). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**64**]{}, 024302 (2001). N. Dinh Dang [*et al.*]{}, Phys. Rev. C [**64**]{}, 027303 (2001). N. Dinh Dang, Nucl. Phys. [**A687**]{}, 253c (2001). N. Dinh Dang [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 4145 (1998). N. Dinh Dang [*et al.*]{}, Nucl. Phys. [**A649**]{}, 201c (1999). N. Dinh Dang [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 269202 (2001). [^1]: I can neither confirm nor disprove that an article [@m] was rejected by PRC, as a reader learns from [@nn].
It is common practice that a manuscript submitted to any scientific journal remains confidential information of the author(s) until the manuscript is accepted for publication. In PRC, it is protected by a secret accession code. It seems to me unethical, to say the least, to publish openly on the Web confidential information of another person, as Mr. N. Dinh Dang did in [@nn]. An editorial decision may be published only with a formal permission from the Editors. I doubt that Mr. N. Dinh Dang has it. I also interpret this fact as an attempt to influence the reader with non-scientific arguments in a scientific discussion.
--- abstract: 'We predict a non-monotonous temperature dependence of the persistent currents in a ballistic ring coupled strongly to a stub in the grand canonical as well as in the canonical case. We also show that such a non-monotonous temperature dependence can naturally lead to a $\phi_0/2$ periodicity of the persistent currents, where $\phi_0$=h/e. There is a crossover temperature $T^*$, below which persistent currents increase in amplitude with temperature while they decrease above this temperature. This is in contrast to persistent currents in rings, which are affected monotonously by temperature. $T^*$ is parameter-dependent but of the order of $\Delta_u/\pi^2k_B$, where $\Delta_u$ is the level spacing of the isolated ring. For the grand-canonical case $T^*$ is half of that for the canonical case.' address: | $^a$Fl.48, 93-a prospect Il’icha, 310020 Khar’kov, Ukraine\ $^b$S.N. Bose National Centre for Basic Sciences, JD Block, Sector 3, Salt Lake City, Calcutta 98, India. author: - 'M. V. Moskalets$^a$ and P. Singha Deo$^{b,}$[@eml]' title: '**[Temperature enhanced persistent currents and “$\phi_0/2$ periodicity”]{}**' --- Introduction ============ Although the magnitude of persistent current amplitudes in metallic and semiconductor mesoscopic rings [@but] has received experimental attention [@exp], little attention has been given to the qualitative features of the persistent current. Qualitative features reflect the underlying phenomena and are more important than the order of magnitude. Incidentally, the order of magnitude and sign of the persistent currents in metallic rings are still not understood. With this background in mind, we study the temperature dependence of persistent currents in a ring strongly coupled to a stub [@buet]. We predict a non-monotonous temperature dependence of the amplitude of persistent currents in this geometry for the grand-canonical as well as for the canonical case. 
We show that there is a crossover temperature ($T^*$) above which it decreases with temperature and below which it increases with temperature, and the energy scales determining this crossover temperature are quantified. This is in contrast to the fact that in the ring, temperature monotonously affects the amplitude of persistent currents. However, so do dephasing and impurity scattering, which are again directly or indirectly temperature dependent [@but; @Cheung], except perhaps in very restrictive parameter regimes where it is possible to realize a Luttinger liquid in the ring in the presence of a potential barrier [@krive]. A recent study, however, shows that in the framework of a Luttinger liquid, a single potential barrier leads to a monotonous temperature dependence of the persistent currents for non-interacting as well as for interacting electrons [@mos99]. We also show a temperature-induced switchover from $\phi_0$ periodicity to $\phi_0/2$ periodicity. This is a very non-trivial temperature dependence of the fundamental periodicity that cannot be obtained in the ring geometry. There is also another motivation behind studying the temperature dependence of persistent currents in this ring-stub system. In the ring, the monotonous behavior of the persistent current amplitude with temperature stems from the fact that the states in a ring pierced by a magnetic flux exhibit a strong parity effect [@Cheung]. There are two ways of defining this parity effect in the single channel ring (multichannel rings can be treated using the same concepts, as mentioned briefly at the end of this paragraph). 
In the single-particle picture (possible only in the absence of electron-electron interaction), it can be defined as follows: states with an even number of nodes in the wave function carry diamagnetic currents (positive slope of the eigenenergy versus flux) while states with an odd number of nodes in the wave function carry paramagnetic currents (negative slope of the eigenenergy versus flux) [@Cheung]. In the many-body picture (without any electron-electron interaction), it can be defined as follows: if $N$ is the number of electrons (spinless) in the ring, the persistent current carried by the $N$-body state is diamagnetic if $N$ is odd and paramagnetic if $N$ is even [@Cheung]. Leggett conjectured [@leg] that this parity effect remains unchanged in the presence of electron-electron interaction and impurity scattering of any form. His arguments can be simplified to say that when electrons move in the ring, they pick up three different kinds of phases: 1) the Aharonov-Bohm phase due to the flux through the ring, 2) the statistical phase due to electrons being Fermions and 3) the phase due to the wave-like motion of electrons depending on their wave vector. The parity effect is due to competition between these three phases along with the constraint that the many-body wave function satisfy the periodic boundary condition (which means if one electron is taken around the ring with the other electrons fixed, the many-body wave function should pick up a phase of 2$\pi$ in all). Electron-electron interaction or simple potential scattering cannot introduce any additional phase, although it can change the kinetic energy or the wave vector and hence modify the third phase. Simple variational calculations showed that the parity effect still holds [@leg]. 
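For a clean single-channel ring this parity rule can be checked in a few lines. The sketch below is our illustration, not a computation from this paper: the quadratic level scheme $(n-\phi/\phi_0)^2$ is the standard textbook idealization of a clean ring, and the level cutoff and step size are arbitrary choices. It fills the $N$ lowest single-particle levels and reads off the sign of the flux slope of the ground-state energy:

```python
def ground_energy(n_el, x, nmax=20):
    # Ground-state energy of n_el spinless electrons in a clean 1D ring
    # threaded by a flux x = phi/phi_0; the single-particle levels are
    # (n - x)^2 in units of h^2/(2 m L^2), with integer n.
    levels = sorted((n - x) ** 2 for n in range(-nmax, nmax + 1))
    return sum(levels[:n_el])

def response(n_el, dx=1e-4):
    # Diamagnetic = positive slope of E versus flux at phi -> 0+
    # (the convention used in the text); paramagnetic = negative slope.
    slope = (ground_energy(n_el, dx) - ground_energy(n_el, 0.0)) / dx
    return "diamagnetic" if slope > 0 else "paramagnetic"

print([response(n) for n in range(1, 7)])
# -> strictly alternating: diamagnetic for odd N, paramagnetic for even N
```

The strict alternation with the parity of $N$ is exactly the feature that the ring-stub system destroys.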
Multichannel rings can be understood by treating impurities as perturbations to decoupled multiple channels, which means small impurities just open up small gaps at level crossings within the Brillouin zone and keep all qualitative features of the parity effect unchanged. Strong impurity scattering in the multichannel ring can, however, introduce strong level correlations, which is an additional phenomenon. Whether and how the parity effect gets modified by these correlations is an interesting problem. In a one-dimensional (1D) system where we have a stub of length $v$ strongly coupled to a ring of length $u$ (see the left bottom corner in Fig. 1), we can have a bunching of levels with the same sign of persistent currents, [@Deo95] i.e., many consecutive levels carry persistent currents of the same sign. This is essentially a breakdown of the parity effect. The parity effect breaks down in this single channel system because there is a new phase that does not belong to any of the three phases discussed by Leggett and mentioned in the preceding paragraph. This new phase cancels the statistical phase and so the N-body state and the (N+1)-body state behave in similar ways or carry persistent currents of the same sign [@deo96; @sre]. When the Fermi energy is above the value where we have a node at the foot of the stub (that results in a transmission zero in transport across the stub), there is an additional phase of $\pi$ arising due to a slip in the Bloch phase [@deo96] (the Bloch phase is the third kind of phase discussed above, but the extra phase $\pi$ due to slips in the Bloch phase is completely different from any of the three phases discussed above because this phase change of the wave function is not associated with a change in the group velocity or kinetic energy or the wave vector of the electron [@deo96; @sre]). The origin of this phase slip can be understood by studying the scattering properties of the stub structure. 
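A minimal numerical check of the mechanism described next (the stub length $v=1$ and the offset $10^{-3}$ are arbitrary illustrative choices): the stub acts as an effective $\delta$-potential of strength $k\cot(kv)$ [@deo96], and this strength flips from large negative to large positive as $kv$ crosses $\pi$, which is what makes the scattering phase, and hence the Bloch phase, slip.

```python
import math

def stub_strength(k, v):
    # Strength of the effective delta-function potential replacing the
    # stub: g(k) = k * cot(k v) (the mapping of Ref. [deo96]).
    return k / math.tan(k * v)

v, eps = 1.0, 1e-3
g_below = stub_strength(math.pi / v - eps, v)  # just below k v = pi
g_above = stub_strength(math.pi / v + eps, v)  # just above k v = pi
print(g_below, g_above)  # large negative, then large positive
```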
One can map the stub into a $\delta$-function potential of the form $k \cot (kv) \delta (x-x_0)$ [@deo96]. So one can see that the strength of the effective potential is $k \cot (kv)$ and is energy dependent. Also, the strength of the effective potential is discontinuous at $kv=n \pi$. Infinitesimally above $\pi$ an electron faces a positive potential while infinitesimally below it faces a negative potential. As the effective potential is discontinuous as a function of energy, the scattering phase, which is otherwise a continuous function of energy, in this case turns out to be discontinuous as the Fermi energy sweeps across the point $kv=\pi$. As the scattering phase of the stub is discontinuous, the Bloch phase of the electron in the ring-stub system is also discontinuous. This is pictorially demonstrated in Figs. 2 and 3 of Ref. [@deo96]. If, within an energy scale $\Delta_u\propto 1/u$ (the typical level spacing for the isolated ring of length $u$), there are $n_b\sim\Delta_u/\Delta_v$ such phase slips (where $\Delta_v\propto 1/v$ is the typical level spacing of the isolated stub), then each phase slip gives rise to an additional state with the same slope, and there are $n_b$ states of the same slope, or the same parity, bunching together with a phase slip of $\pi$ between each of them [@deo96]. The fact that there is a phase slip of $\pi$ between two states of the same parity was generalized later, arguing from the oscillation theorem, which is equivalent to Leggett’s conjecture for the parity effect [@lee]. Transmission zeros are an inherent property of Fano resonances generically occurring in mesoscopic systems, and this phase slip is believed to have been observed [@the] in a transport measurement [@sch]. For an elaborate discussion on this, see Ref. [@tan]. A similar case was studied in Ref. 
[@wu], where they show that the transmission zeros and abrupt phase changes arise due to degeneracy of “dot states” with states of the “complementary part” and hence these are also Fano-type resonances. The purpose of this work is to show a very non-trivial temperature dependence of persistent currents due to the breakdown of the parity effect. The temperature effects predicted here, if observed experimentally, will further confirm the existence of parity-violating states, which is a consequence of this new phase. To be precise, the new phase is the key source of the results discussed in this work. Theoretical treatment ===================== We concentrate on the single channel system to bring out the essential physics. The multichannel ring also shows a very strong bunching of levels even though the rotational symmetry is completely broken by the strongly coupled stub and wide gaps open up at the level crossings [@sre] within the Brillouin zone. Hence let us consider a one-dimensional loop of circumference $u$ with a one-dimensional stub of length $v$, which contains noninteracting spinless electrons. The quantum-mechanical potential is zero everywhere. A magnetic flux $\phi$ penetrates the ring (see the left bottom corner in Fig. 1). In this paper we consider both the grand-canonical case (when particle exchange with a reservoir at temperature $T$ is present and the reservoir fixes the chemical potential $\mu$; in this case we will denote the persistent current as $I_\mu$) and the canonical case (when the number $N$ of particles in the ring-stub system is conserved; in this case we will denote the persistent current as $I_N$). For the grand canonical case we suppose that the coupling to the reservoir is weak enough that the eigenvalues of the electron wave number $k$ are not affected by the reservoir [@Cheung]. They are defined by the following equation [@Deo95]. 
$$\cos(\alpha)=0.5\sin(ku)\cot(kv)+\cos(ku), \label{Eq1}$$ where $\alpha=2\pi\phi/\phi_0$, with $\phi_0=h/e$ being the flux quantum. Note that Eq. (\[Eq1\]) is obtained under the Griffith boundary conditions, [@Griffith] which take into account both the continuity of the electron wave function and the conservation of current at the junction of the ring and the stub, together with the hard-wall boundary condition at the dead end of the stub. Each of the roots $k_n$ of Eq. (\[Eq1\]) determines a one-electron eigenstate with energy $\epsilon_n=\hbar^2k_n^2/(2m)$ as a function of the magnetic flux $\phi$. Further, we calculate the persistent current $I_{N/\mu}=-\partial F_{N/\mu}/\partial \phi$ [@Byers], where $F_N$ is the free energy for the regime $N=const$ and $F_\mu$ is the thermodynamic potential for the regime $\mu=const$. In the latter case, for a system of noninteracting electrons the problem is greatly simplified, as we can use the Fermi distribution function $f_0(\epsilon)=(1+\exp[(\epsilon-\mu)/T] )^{-1}$ when we fill up the energy levels in the ring-stub system, and we can write the persistent current as follows [@Cheung]. $$I_\mu=\sum_n I_n f_0(\epsilon_n), \label{Eq2}$$ where $I_n$ is the quantum-mechanical current carried by the $n$th level and is given by [@Deo95] $$\frac{\hbar I_n}{e}=\frac{2k_n\sin(\alpha)}{\frac{u}{2}\cos(k_nu) \cot(k_nv)-[\frac{v}{2}{\rm cosec}^2(k_nv)+u]\sin(k_nu)}. \label{Eq3}$$ For the case of $N=const$ we must calculate the partition function $Z$, which determines the free energy $F_N=-T\ln(Z)$ [@Landau], $$Z=\sum_m \exp\left( -\frac{E_m}{T} \right), \label{Eq4}$$ where $E_m$ is the energy of a many-electron level. For a system of $N$ spinless noninteracting electrons, $E_m$ is a sum over $N$ different (pursuant to the Pauli principle) one-electron energies, $E_m=\sum_{i=1}^{N} \epsilon_{n_i}$, where the index $m$ numbers the different series $\{\epsilon_{n_1},...,\epsilon_{n_N}\}_m$. 
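Eqs. (\[Eq1\])-(\[Eq3\]) can be turned into a short numerical sketch. In the code below the grid resolution, the bisection depth and the unit convention $\hbar^2/2m = 1$ (so that $\epsilon_n = k_n^2$) are our illustrative choices, not prescriptions from the paper:

```python
import math

def det_eq(k, u, v, alpha):
    # Eq. (1): zero at the allowed wave numbers k_n(phi)
    return 0.5 * math.sin(k * u) / math.tan(k * v) + math.cos(k * u) - math.cos(alpha)

def eigen_k(u, v, alpha, kmax, ngrid=20000):
    # Bracket sign changes of Eq. (1) on a grid and bisect; intervals
    # straddling a pole of cot(k v) (i.e. k v = n pi) are skipped,
    # since the sign flip there is not a root.
    roots = []
    grid = [kmax * (i + 1) / ngrid for i in range(ngrid)]
    for a, b in zip(grid, grid[1:]):
        if math.floor(a * v / math.pi) != math.floor(b * v / math.pi):
            continue
        fa = det_eq(a, u, v, alpha)
        if fa * det_eq(b, u, v, alpha) >= 0:
            continue
        for _ in range(60):          # plain bisection
            m = 0.5 * (a + b)
            fm = det_eq(m, u, v, alpha)
            if fa * fm <= 0:
                b = m
            else:
                a, fa = m, fm
        roots.append(0.5 * (a + b))
    return roots

def level_current(k, u, v, alpha):
    # Eq. (3), in units of e/hbar
    den = 0.5 * u * math.cos(k * u) / math.tan(k * v) \
          - (0.5 * v / math.sin(k * v) ** 2 + u) * math.sin(k * u)
    return 2.0 * k * math.sin(alpha) / den

def grand_canonical_current(u, v, alpha, mu, T, kmax):
    # Eq. (2): Fermi-weighted sum of single-level currents,
    # with energies eps_n = k_n^2 in our units.
    return sum(level_current(k, u, v, alpha) / (1.0 + math.exp((k * k - mu) / T))
               for k in eigen_k(u, v, alpha, kmax))
```

Note that the current vanishes identically at $\alpha = 0$, since every level current in Eq. (\[Eq3\]) carries a factor $\sin(\alpha)$.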
For instance, the ground-state energy is $E_0=\sum_{n=1}^{N}\epsilon_n$. Results and discussions ======================= First we consider the peculiarities of the persistent current $I_\mu$, $i.e.,$ for the regime $\mu=const$. In this case the persistent current is determined by Eqs. (\[Eq1\])-(\[Eq3\]). Our calculations show that the character of the temperature dependence of the persistent currents depends essentially on the position of the Fermi level $\mu$ relative to the groups of levels with similar currents. If the Fermi level lies symmetrically between two groups (which occurs if $u/\lambda_F=n$ or $n+0.5$, where $n$ is an integer and $\lambda_F$ is the Fermi wavelength), then the current changes monotonously with temperature, as depicted in Fig. 1 (the dashed curve). In this case the low-lying excited levels carry a current which is opposite to that of the ground state; the line shape of the curve is similar to that of the ring [@Cheung]. On the other hand, if the Fermi level lies within a group ($u/\lambda_F\sim n+0.25$), then the low-lying excited states carry persistent currents of the same sign. In that case the current increases at low temperatures, as shown in Fig. 1 (the dotted curve). At low temperatures the currents carried by the low-lying excited states add up with the ground-state current. However, these excited states are populated only at the cost of the ground-state population. Although in the clean ring higher levels carry larger persistent currents, this is not true for the ring-stub system. This is because the scattering properties of the stub are energy-dependent, and at a higher energy the stub can scatter more strongly. Hence several energy scales, such as the temperature, the Fermi energy and the number of populated levels, compete with each other to determine the temperature dependence. 
A considerable enhancement of the persistent current amplitude appears in our calculations for all choices of parameters whenever the Fermi energy is approximately at the middle of a group of levels that have the same slope. At higher temperatures, when a large number of states get populated, the current decreases exponentially. So in this case the current amplitude has a maximum as a function of the temperature, and we can define the temperature corresponding to the maximum as the crossover temperature $T^*$. It is worth mentioning that in the ring system, although there is no enhancement of persistent currents due to temperature, one can define a crossover temperature below which persistent currents decrease less rapidly with temperature. Essentially this is because at low temperatures thermal excitations are not possible because of the large single-particle level spacings. Hence this crossover temperature is the same as the energy scale that separates two single-particle levels, $i.e.,$ the crossover temperature is proportional to the level spacing $\Delta=hv_F/L$ in the ideal ring at the Fermi surface, where $v_F$ is the Fermi velocity and $L$ is the length of the ring. The crossover temperature obtained by us in the ring-stub system is of the same order of magnitude, $i.e.,$ $\Delta_u=hv_F/u$, although different in meaning. In the case of $u/\lambda_F=n+0.25$, at low temperatures we show the possibility of obtaining $\phi_0/2$ periodicity, although the parity effect is absent in this system. This is shown in Fig. 2, where we plot $I_{\mu}/I_0$ versus $\phi/\phi_0$ at a temperature $k_BT/\Delta_u$=0.01 in solid lines, which clearly shows a $\phi_0/2$ periodicity. Two mechanisms were previously known to give rise to a $\phi_0/2$ periodicity of persistent currents. 
The first is due to the parity effect [@los], which does not exist in our system, and the second is due to the destructive interference of the first harmonic, which can only appear in a system coupled to a reservoir so that the Fermi energy is an externally adjustable parameter. The latter mechanism can be understood by putting $k_FL$=$(2n\pi+\pi /2)$ in eq. 2.11 of Ref. [@Cheung]. If the latter were the case in our situation, then the periodicity should remain unaffected by temperature, and for fixed $N$ we should only get $\phi_0$ periodicity [@Cheung], because then the Fermi energy is not an externally adjustable parameter but is determined by $N$. We show in Fig. 2 (dashed curve) that the periodicity changes with temperature, and in the next two paragraphs we will also show that one can obtain $\phi_0/2$ periodicity for fixed $N$. The dashed curve in Fig. 2 is obtained at a temperature $k_BT/\Delta_u$=0.15 and it shows a $\phi_0$ periodicity. As is known, the crossover temperature depends on the harmonic number $m$: $T^*_m=T^*/m$ [@Cheung]. In this case a particular harmonic can actually increase with temperature initially and decrease later, different harmonics reaching their peaks at different temperatures. Therefore, the second harmonic, which peaks at a lower temperature than the first harmonic, can exceed the first harmonic in certain temperature regimes. At higher temperatures it decreases with temperature faster than the first harmonic, and so at higher temperature the $\phi_0$ periodicity is recovered. In view of the strong dependence of the considered features on the chemical potential, we consider further the persistent current $I_N$ in the ring-stub system with a fixed number of particles, $N=const$. In this case we calculate the persistent current using the partition function (Eq. (\[Eq4\])). 
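The canonical calculation just mentioned can be sketched directly from Eq. (\[Eq4\]): the partition function is a sum over all ways of distributing $N$ electrons among the one-electron levels, and $I_N=-\partial F_N/\partial\phi$ follows by finite differences. The truncation to the lowest few levels and the numerical parameters below are our illustrative choices; the level list itself must come from solving Eq. (\[Eq1\]) at each flux, which is not repeated here.

```python
import itertools
import math

def free_energy(levels, n_el, T, nkeep=14):
    # Eq. (4): Z summed over all placements of n_el spinless electrons
    # in the lowest nkeep one-electron levels -- a truncation that is
    # adequate when T is small compared to the omitted level spacings.
    low = sorted(levels)[:nkeep]
    e0 = sum(low[:n_el])                     # ground-state energy E_0
    z = sum(math.exp(-(sum(c) - e0) / T)
            for c in itertools.combinations(low, n_el))
    return e0 - T * math.log(z)              # F_N = -T ln Z, written stably

def canonical_current(levels_of_flux, n_el, T, phi, dphi=1e-4):
    # I_N = -dF_N/dphi by central differences; levels_of_flux(phi) must
    # return the one-electron energies at the given flux.
    fp = free_energy(levels_of_flux(phi + dphi), n_el, T)
    fm = free_energy(levels_of_flux(phi - dphi), n_el, T)
    return -(fp - fm) / (2.0 * dphi)
```

For a single electron on one flux-dependent level $\epsilon(\phi)$ this reduces to $I=-{\rm d}\epsilon/{\rm d}\phi$ at low $T$, as it should.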
The numerical calculations show that in the canonical case there is also a non-monotonous temperature dependence of the persistent current amplitude, as in the grand-canonical case. This is shown in Fig. 1 by the solid curve. The maximum of $I_N(T)$ is more pronounced if $v/u$ is large and the number of electrons ($N$) is small. Besides, if the number of electrons is more than $n_b/2$, then the maximum does not exist. The crossover temperature is higher by a factor of 2 as compared to that in $I_\mu$. This was also found for the 1D ring [@Cheung; @Loss92], where, as mentioned before, the crossover temperature has a different meaning. To show that one can have $\phi_0/2$ periodicity for fixed $N$, we plot in the inset of Fig. 2 the first harmonic $I_1/I_0$ (solid curve) and the second harmonic $I_2/I_0$ (dotted curve) of $I_N$ for $N$=5, $v$=7$k_F$ and $u$=2.5$k_F$. At low temperature the second harmonic exceeds the first harmonic because the stub reduces the level spacing and in a sense can adjust the Fermi energy in the ring to create a partial but not exact destruction of the first harmonic. There are distinct temperature regimes where $I_1$ exceeds $I_2$ and vice versa, and the two curves peak at completely different temperatures. $I_2$ also exhibits more than one maximum. Experimentally, different harmonics can be measured separately, and the first harmonic as shown in Fig. 2 can show a tremendous enhancement with temperature. An important conclusion that can be drawn from Fig. 2 is that the observation of $\phi_0/2$ as well as $\phi_0$ periodicity is possible quite naturally even in the absence of the parity effect, because the absence of the parity effect also means that one can obtain an enhancement of the persistent current amplitude with temperature, and as a result an enhancement of a particular harmonic with temperature, with different harmonics peaking at different temperatures. 
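The harmonic bookkeeping used above is elementary Fourier analysis: the $m$-th harmonic is the sine amplitude of the (odd, $\phi_0$-periodic) current, and whichever harmonic dominates sets the apparent periodicity. A self-contained sketch with an invented toy current (its amplitudes are hypothetical, not fitted to Fig. 2; $\phi_0=1$ here):

```python
import math

def harmonic(current, m, nsamp=512):
    # m-th sine-Fourier amplitude of a current with period phi_0 = 1:
    # I_m = 2 <I(phi) sin(2 pi m phi)>; cosine terms vanish because the
    # persistent current is odd in the flux.
    return (2.0 / nsamp) * sum(
        current(i / nsamp) * math.sin(2.0 * math.pi * m * i / nsamp)
        for i in range(nsamp))

# Hypothetical low-temperature current in which the second harmonic
# dominates, mimicking the phi_0/2-periodic regime:
toy = lambda p: 0.3 * math.sin(2 * math.pi * p) + 1.1 * math.sin(4 * math.pi * p)
dominant = max((1, 2, 3), key=lambda m: abs(harmonic(toy, m)))
print(dominant)  # -> 2, i.e. the signal repeats with period phi_0 / 2
```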
Conclusions =========== In summary, we would like to state that the temperature dependence of persistent currents in a ring strongly coupled to a stub exhibits very nontrivial features. Namely, at small temperatures it can show an enhancement of the amplitude of persistent currents in the grand-canonical as well as in the canonical case. The fundamental periodicity of the persistent currents can change with temperature. If detected experimentally, these features can lead to a better understanding of the qualitative features of persistent currents. They will also confirm the existence of parity-violating states, which is only possible if there is a new phase apart from the three phases considered by Leggett [@leg] while generalizing the parity effect. This new phase is the sole cause of the nontrivial temperature dependence. There is a crossover temperature $T^*$ above which the amplitude of persistent currents decreases with temperature. How the crossover temperature is affected by electron correlation effects and dephasing should lead to interesting theoretical and experimental explorations in the future. Finally, given the large discrepancies between theory and experiments for the persistent currents in disordered rings, one cannot completely rule out the possibility of parity violation in the ring system as well. The stub is not the only way to produce this new phase that leads to a violation of the parity effect. There can be more general ways of getting transmission zeros [@lee] that may also lead to parity violation. In that case, the ring-stub system may prove useful as a theoretical model to understand the consequences of parity violation. Its consequences for the temperature dependence shown here may motivate future work in this direction. [99]{} email: deo@boson.bose.res.in M. B[ü]{}ttiker, Y. Imry, and R. Landauer, Phys. Lett. [**96A**]{}, 365 (1983). L. P. Levy et al, Phys. Rev. Lett. [**64**]{}, 2074 (1990); V. Chandrasekhar et al, Phys. Rev. Lett. 
[**67**]{}, 3578 (1991); D. Mailly et al, Phys. Rev. Lett. [**70**]{}, 2020 (1993). The weak coupling limit was earlier studied by M. B[ü]{}ttiker, Phys. Scripta T [**54**]{}, 104 (1994). Coupled rings were studied by T.P. Pareek and A.M. Jayannavar, Phys. Rev. B [**54**]{}, 6376 (1996). H.F.Cheung, Y.Gefen, E.K.Riedel, W.H.Shih, Phys. Rev. B [**37**]{}, 6050 (1988). I.V. Krive et al, Phys. Rev. B [**52**]{}, 16451 (1995); A.S.Rozhavsky, J. Phys.: Condens. Matter [**9**]{}, 1521 (1997); I. V. Krive et al, cond-mat/9704151. M.V. Moskalets, Physica E [**5**]{}, 124 (1999). A.J. Leggett in: Granular nano-electronics, eds. D. K. Ferry, J.R. Barker and C. Jacobony, NATO ASI Ser. B [**251**]{} (Plenum, New York, 1991) p. 297. P.Singha Deo, Phys. Rev. B [**51**]{}, 5441 (1995). P. Singha Deo, Phys. Rev. B [**53**]{}, 15447 (1996). P. A. Sreeram and P. Singha Deo, Physica B [**228**]{}, 345 (1996). H.-W.Lee, Phys. Rev. Lett., [**82**]{}, 2358 (1999). P.Singha Deo and A.M.Jayannavar, Mod. Phys. Lett. B [**10**]{}, 787 (1996); P.Singha Deo, Solid St. Communication [**107**]{}, 69 (1998); C.M.Ryu et al, Phys. Rev. B [**58**]{}, 3572 (1998); Hongki Xu et al, Phys. Rev. B, [**57**]{}, 11903 (1998). R. Schuster et al, Nature [**385**]{}, 417 (1997). T. Taniguchi and M. Büttiker, Phys. Rev. B [**60**]{}, 13814 (1999). J.Wu et al, Phys. Rev. Lett. [**80**]{}, 1952 (1998). S. Griffith, Trans. Faraday. Soc. [**49**]{}, 650 (1953). N. Byers, C.N. Yang, Phys. Rev. Lett. [**7**]{}, 46 (1961); F. Bloch, Phys. Rev. B [**2**]{}, 109 (1970). L.D. Landau and E.M. Lifschitz, Statistical Physics (Pergamon, London, 1959). D. Loss and P. Goldbart, Phys. Rev. B [**43**]{}, 13762 (1991). D. Loss, Phys. Rev. Lett. [**69**]{}, 343 (1992). Fig. 1. The ring of length $u$ with a stub (resonant cavity) of length $v$ threaded by a magnetic flux $\phi$ (left bottom corner). 
The dependence of the current amplitude $I_\mu$ in units of $I_0=ev_F/u$ on the temperature $T$ in units of $\Delta_u/2\pi^2k_B$ for the regime $\mu=const$ with $v=15\lambda_F$ and $u=(5+x)\lambda_F$ at $x=0$ (dashed curve) and $x=0.25$ (dotted curve); and $I_N/I_0$ for the isolated ring-stub system with $v/u=10$ and $N=3$ (solid curve). For an appropriate scale, curves 2 and 3 are multiplied by factors of 3 and 15, respectively. Fig. 2. The dependence of the persistent current $I_\mu$ in units of $I_0=ev_F/u$ on the magnetic flux $\phi$ in units of $\phi_0$ for the regime $\mu=const$ with $v=15\lambda_F$ and $u=5.25\lambda_F$ for $T/\Delta_u=0.01$ (dashed curve) and $T/\Delta_u=0.15$ (solid curve). Curve 2 is multiplied by a factor of 5 for an appropriate scale. The inset shows the first harmonic $I_1$ (solid curve) and the second harmonic $I_2$ (dotted curve) of $I_N$ in units of $I_0$, for $N$ fixed at 5, $v$=7$k_F$ and $u$=2.5$k_F$, versus temperature in units of $\Delta_u/2\pi^2k_B$.
--- abstract: | Detailed Monte Carlo Inversion analysis of the spectral lines from three Lyman limit systems (LLS) \[$N$(H[i]{}) $\ga 1.0\times10^{17}$ [cm$^{-2}\,$]{}\] and nine lower $N$(H[i]{}) systems \[$2\times10^{14}$ [cm$^{-2}\,$]{}$\la N$([H]{}[i]{}) $\la 2\times10^{16}$ [cm$^{-2}\,$]{}\] observed in the VLT/UVES spectra of Q0347–3819 (in the range $2.21 \leq z \leq 3.14$) and of APM BR J0307–4945 (at $z = 4.21$ and 4.81) is presented. Combined with the results from a previous work, the analyzed LLSs show that they are a [*heterogeneous*]{} population originating in different environments. A functional dependence of the line-of-sight velocity dispersion $\sigma_{\rm v}$ on the absorber size $L$ is confirmed: the majority of the analyzed systems follow the scaling relation $\sigma_{\rm v} \sim (N_{\rm H}\,L)^{0.3}$ (with $N_{\rm H}$ being the total gas column density). This means that most absorbers may be related to virialized systems like galaxies or their halos. Previously noted enhancement of the metal content in small size systems is also confirmed: metallicities of $Z \sim (1/3-1/2)\,Z_\odot$ are found in systems with $L \la 0.4$ kpc, whereas we observe much lower metal abundances in systems with larger linear sizes. For the first time in LLSs, a pronounced \[$\alpha$-element/iron-peak\] enrichment is revealed: the absorber at [$z_{\rm abs}\,$]{}= 2.21 shows \[O/Fe\] = $0.65\pm0.11$, \[Si/Fe\] = $0.51\pm0.11$, and \[Mg/Fe\] = $0.38\pm0.11$. Several absorption systems exhibit characteristics which are very similar to that observed in high-velocity clouds in the Milky Way and may be considered as high-redshift counterparts of Galactic HVCs. author: - 'S. A. Levshakov, I. I. Agafonova, S. D’Odorico, A. M. Wolfe,' - 'M. Dessauges-Zavadsky' title: 'Metal abundances and kinematics of quasar absorbers – II. 
Absorption systems toward Q0347–3819 and APM BR J0307–4945 ' --- Introduction ============ With the present paper we continue to study the chemical composition and the kinematic characteristics of quasar absorption systems using a new computational procedure, the Monte Carlo Inversion algorithm (MCI), developed earlier in a series of papers \[see Levshakov, Agafonova & Kegel (2000); hereafter LAK\]. The MCI technique allows us to recover self-consistently the physical parameters of the intervening gas cloud (such as the average gas number density $n_0$, the column densities for different species $N_{\rm a}$, the kinetic temperature $T_{\rm kin}$, the metal abundances $Z_{\rm a}$, and the linear size $L$), the statistical characteristics of the underlying hydrodynamical fields (such as the line-of-sight velocity dispersion $\sigma_{\rm v}$ and the density dispersion $\sigma_{\rm y}$), and the line-of-sight density $n_{\rm H}(x)$ and velocity $v(x)$ distributions (here $x$ is the dimensionless coordinate in units of $L$). Having this comprehensive information, we are able to classify the absorbers more reliably and hence to obtain important clues concerning the physical conditions in intervening galaxies, galactic halos and large scale structure objects at high redshifts. Besides, it will also be possible to constrain existing theories of galaxy formation, since the observed statistics of the damped Ly$\alpha$ (DLA) and Lyman limit (LLS) systems is believed to be a strong test of different cosmological models (e.g. Gardner et al. 2001; Prochaska & Wolfe 2001). In the first part of our study (Levshakov et al. 2002a, hereafter Paper I) we reported results on the absorption systems at [$z_{\rm abs}\,$]{}= 1.87, 1.92 and 1.94 toward the HDF-South quasar J2233–606. These systems exhibit many metal lines with quite complex structures. 
It was found that all profiles can be well described under the assumption of a homogeneous metallicity and a unique photoionizing background. According to the estimated sizes, velocity dispersions and metal contents, the absorbers at [$z_{\rm abs}\,$]{}= 1.92 and 1.87 were related to galactic halos, whereas the system at [$z_{\rm abs}\,$]{}= 1.94 was more likely formed in an irregular star-forming galaxy. It was also found that the linear size and the line-of-sight velocity dispersion for all three absorbers obey a scaling relation of the kind expected for virialized systems. The present paper deals with absorbers observed in the spectra of Q0347–3819 ($z_{\rm em} = 3.23$) and APM BR J0307–4945 ($z_{\rm em} = 4.75$, see § 2.1). Both spectra include several dozens of systems containing metals, but most of them are weak and severely blended and hence do not allow us to estimate the underlying physical parameters with reasonable accuracy. After a preliminary analysis, only 12 systems were chosen for the inversion with the MCI, and their properties are described below. The structure of the paper is as follows. § 2 describes the data sets. In § 3 our model assumptions and basic equations are specified. The estimated parameters for the individual systems are given in § 4. The implications of the obtained results for theories of LLS origin are discussed in § 5, and our conclusions are reported in § 6. The Appendix contains a table with typical parameters of different absorbers which are referred to in the present study. Observations ============ The spectroscopic observations of Q0347–3819 and APM BR J0307–4945 obtained with the UV-Visual Echelle Spectrograph UVES (D’Odorico et al. 2000) on the VLT/Kueyen 8.2 m telescope are described in detail by D’Odorico, Dessauges-Zavadsky & Molaro (2001) and by Dessauges-Zavadsky et al. (2001), respectively. Both spectra were observed with a spectral resolution FWHM $\simeq 7$ [km s$^{-1}\,$]{}. 
For the analysis of metal systems from the Q0347–3819 spectrum with lines in the range 4880 – 6730 Å, which was not covered by the VLT observations, we used a portion of the Q0347–3819 spectrum obtained with the High-Resolution Echelle Spectrograph HIRES (Vogt et al. 1994) on the 10 m W. M. Keck I telescope (Prochaska & Wolfe 1999). The spectral resolution in this case was about 8 [km s$^{-1}\,$]{}. The VLT/UVES data are now available for public use in the VLT data archive. The majority of the metal systems in the spectrum of Q0347–3819 were identified in Levshakov et al. (2002b), whereas the [$z_{\rm abs}\,$]{}= 4.21 system toward APM BR J0307–4945 was distinguished by Dessauges-Zavadsky et al.[^1] (2001) as consisting of two sub-systems: one at [$z_{\rm abs}\,$]{}= 4.211 and the other at [$z_{\rm abs}\,$]{}= 4.218. A new system at [$z_{\rm abs}\,$]{}= 4.81 is analyzed here for the first time. Emission redshift of APM BR J0307–4945 -------------------------------------- The emission redshift of this distant quasar, $z_{\rm em} = 4.728\pm0.015$, was previously measured by Péroux et al. (2001) from the +\] $\lambda1400.0$ and $\lambda1549.1$ lines observed in the $\sim 5$ Å resolution spectrum obtained with the 4 m Cerro Tololo Inter-American Observatory telescope. In our VLT/UVES spectrum of this quasar a few additional lines can be identified which are useful for the redshift measurements. The most important of them is the weak $\lambda1304$ line. From earlier studies (see, e.g., Tytler & Fan 1992 and references cited therein) it is known that ‘low-ionization lines’ such as $\lambda1304$ are systematically redshifted and narrower than ‘high-ionization’ lines such as , , and Ly$\alpha$. In Fig. 1 we compare the profile with those of and of a wide blend of the Ly$\alpha$++ lines. All these lines are shown on the same velocity scale, which is defined by the $\lambda1304$ center corresponding to $z_{\rm em} = 4.7525$. 
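As a small arithmetic aid (our illustration; the rest wavelength is taken at its nominal 1304 Å label rather than the exact laboratory wavelength of the blend), the observed-frame centroid of the $\lambda1304$ line implied by this value of $z_{\rm em}$ is:

```python
def redshift(lam_obs, lam_rest):
    # Emission redshift from a line centroid: 1 + z = lam_obs / lam_rest
    return lam_obs / lam_rest - 1.0

lam_obs = 1304.0 * (1.0 + 4.7525)  # observed-frame position, Angstrom
print(round(lam_obs, 2))  # -> 7501.26
```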
This line is redshifted with respect to the $z_{\rm em}$ value deduced by Péroux et al. from the measurements of the +\] and profiles. Because the Ly$\alpha$ emission line is blended with other emission lines and its blue wing is distorted by numerous narrow absorption lines, profile comparison cannot be very accurate in this case. Nevertheless, we found a smooth fit to the $\lambda1549$ profile (which is unblended and shows significant asymmetry) and used this synthetic profile for comparison with other lines (altering the amplitude of the synthetic profile to match the line profile while keeping its center unchanged). Fig. 1 shows that this simplified procedure indeed allows us to achieve a fairly good concordance between the line and the Ly$\alpha$++ blend. This could indicate that the redshift of the quasar is higher than that measured by Péroux et al., its actual value being $z_{\rm em} \simeq 4.753$.

Model assumptions and the MCI procedure
=======================================

The complete description of the MCI code is given in LAK and its most updated version in Paper I. Since this technique is relatively new, we briefly outline it here and stress its differences from the Voigt profile fitting (VPF) procedure commonly used for QSO absorption-line analysis. The VPF deconvolution is based on the assumption that the observed complex line profiles are caused by several separate clouds randomly distributed along the line of sight. In every cloud, the gas is characterized by a constant density and normally distributed velocities (the $b$ parameter usually estimated in the VPF procedure just stands for the dispersion of the velocity distribution). Because of the constant gas density, the ionizing structure inside the cloud is described by a single ionization parameter $U$, which can be estimated from the measured column densities of lines of different ions with the help of a photoionization code if the spectrum of the background ionizing radiation is given.
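The single-cloud picture underlying the VPF can be illustrated with a short sketch: each cloud contributes a Voigt-shaped optical depth set by a column density $N$, a Doppler parameter $b$ and a redshift $z$. This is not the authors' code, only a minimal illustration; the atomic data below (rest wavelength, oscillator strength, damping constant) are the standard values for the 1548 Å transition and are used purely as an example.

```python
import numpy as np
from scipy.special import wofz

def voigt_tau(wl, N, b, z, wl0=1548.195, f=0.1908, gamma=2.643e8):
    """Optical depth of a single 'cloud' in the VPF picture.

    wl    : observed wavelength grid [Angstrom]
    N     : column density [cm^-2]
    b     : Doppler parameter [km/s]
    z     : cloud redshift
    wl0, f, gamma : rest wavelength [A], oscillator strength and
                    damping constant [s^-1] (1548 A values, used
                    here purely for illustration)
    """
    c = 2.998e10               # speed of light [cm/s]
    nu0 = c / (wl0 * 1e-8)     # rest frequency [Hz]
    dnu_D = nu0 * (b * 1e5) / c  # Doppler width [Hz]
    # rest-frame frequency of each observed pixel
    nu = c / (wl * 1e-8) * (1.0 + z)
    x = (nu - nu0) / dnu_D       # dimensionless frequency offset
    a = gamma / (4.0 * np.pi * dnu_D)  # damping parameter
    H = wofz(x + 1j * a).real    # Voigt function H(a, x)
    # classical cross-section prefactor pi e^2 / (m_e c) = 0.02654 cm^2 Hz
    return 0.02654 * f * N / (np.sqrt(np.pi) * dnu_D) * H

# an unsaturated line at z = 2.21: peak optical depth well below 1
wl = np.linspace(1547, 1550, 400) * (1 + 2.21)
tau = voigt_tau(wl, N=1e12, b=20.0, z=2.21)
flux = np.exp(-tau)
```

In the VPF each absorption feature gets its own $(N, b, z)$ triple; the MCI described below replaces this set of discrete clouds by continuous density and velocity fields.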
However, numerous cosmological hydrodynamical calculations performed in the previous decade have shown that the QSO absorption lines more likely arise in the smoothly fluctuating intergalactic medium in a network of sheets, filaments, and halos (e.g., Cen et al. 1994; Miralda-Escudé et al. 1996; Theuns et al. 1998). This model also finds support in modern high resolution spectroscopic observations: the increasing spectral resolution reveals progressively more and more complex profiles. A very important characteristic of a continuous absorbing medium is that the contribution to any point within the line profile comes not from a single separate area (a ‘cloud’) but from all volume elements (‘clouds’) distributed along the sightline within the absorbing region and having the same radial velocity (see, for details, § 2.2 in LAK). If the absorption systems are indeed formed in the fluctuating IGM, then the VPF procedure described above, which interprets each absorption feature in the line profile as caused by one distinct cloud, is not, in general, physically justified. In some systems this approach can produce rather erratic results such as extremely varying metallicities between subcomponents, negative kinetic temperatures, exotic UV background spectra, etc. (see examples in Levshakov et al. 1999; LAK; Paper I). The MCI procedure is based on the assumption that all lines observed in a metal system arise in a continuous absorbing gas slab of thickness $L$ with a fluctuating gas density and a random velocity field. We also assume that within the absorber the metal abundances are constant, the gas is optically thin to the ionizing UV radiation, and the gas is in thermodynamic and ionization equilibrium. The last assumption means that the fractional ionizations of different ions are determined exclusively by the gas density and vary from point to point along the sightline.
These fractional ionization variations are precisely the cause of the observed diversity of profile shapes between ions of low- and high-ionization stages. Whereas most of the above mentioned assumptions are quite natural, that of a constant metallicity over the entire absorbing region needs additional discussion. On one hand, it is required from a mathematical point of view. Namely, the splitting of the velocity and density fields is effective if all observed ions share the same velocity distribution but respond differently to the gas density. If, in addition to the varying density and velocity, one allowed the metallicity to vary as well, the inverse problem would become fully degenerate, i.e. it would have an infinitely large number of solutions. On the other hand, constant metallicity has some observational support: results obtained in numerous studies of the Galactic halo chemical composition reveal no systematic differences in the gas-phase abundances within galactocentric distances of $7-10$ kpc in various directions (e.g., Savage & Sembach 1996). But, of course, we cannot exclude the case when the line of sight passes through many types of environments with different enrichment histories within a given absorber. If the metallicities within such an absorber differ only slightly ($\la 0.5$ dex), the observed lines of different ions can be well fitted by synthetic profiles calculated with some average value of the metal content. If, however, the differences in the metallicities are really large ($\ga 1$ dex), a self-consistent fitting of all observed lines becomes impossible. In this case we have to split the absorber into separate regions having different metal abundances (see § 4.2.2 and 4.3.1 for examples). It is well known that the measured metallicities depend in a crucial way on the adopted background ionizing spectrum.
We started in all cases with the Haardt-Madau (HM) background ionizing spectra (Haardt & Madau 1996), computing the fractional ionizations and the kinetic temperatures with the photoionization code CLOUDY (Ferland 1997). If the fitting with the HM spectrum was impossible, we used other spectra, e.g. the Mathews & Ferland (MF) spectrum (Mathews & Ferland 1987). The MCI procedure itself is implemented in the following way. Within the absorbing region the radial velocity $v(x)$ and the total hydrogen density $n_{\rm H}(x)$ along the line of sight are considered as two random fields which are represented by their sampled values at equally spaced intervals $\Delta x$, i.e. by the vectors $\{ v_1, \ldots, v_k \}$ and $\{ n_1, \ldots, n_k \}$ with $k$ large enough ($\sim 150-200$) to describe the narrowest components of the complex spectral lines. The radial velocity is assumed to be normally distributed with dispersion $\sigma_{\rm v}$, whereas the gas density is distributed log-normally with mean $n_0$ and second central dimensionless moment $\sigma_{\rm y}$ ($y = n_{\rm H}/n_0$). Both stochastic fields are calculated using Markovian processes (see LAK for the mathematical basics). The model parameters estimated in the least-squares minimization of the objective function (see eqs. \[29\] and \[30\] in LAK) include $\sigma_{\rm v}$ and $\sigma_{\rm y}$ along with the total hydrogen column density $N_{\rm H}$, the mean ionization parameter $U_0$, and the metal abundances $Z_a$ for the elements $a$ observed in a given absorption-line system. The computations are carried out in two steps: first a point in the parameter space ($N_{\rm H}, U_0, \sigma_{\rm v}, \sigma_{\rm y}, Z_{\rm a})$ is chosen at random, and then an optimal configuration of $\{v_i\}$ and $\{n_i\}$ for this parameter set is searched for. These steps are repeated until a minimum of the objective function ($\chi^2 \sim 1$ per degree of freedom) is achieved.
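A toy version of the two stochastic fields can clarify this parameterization. The sketch below is not the exact scheme of LAK (which should be consulted for the actual construction); it draws a Gaussian velocity field and a log-normal density field as first-order Markov chains on $k$ grid points, with an illustrative neighbor-correlation parameter `f`.

```python
import numpy as np

def markov_fields(k=180, sigma_v=60.0, sigma_y=0.8, n0=1e-3, f=0.1, seed=1):
    """Sample the MCI's two random fields on k grid points.

    v(x): Gaussian with dispersion sigma_v [km/s]
    n(x): log-normal with mean n0 [cm^-3] and dimensionless
          second central moment sigma_y of y = n_H/n0
    f   : neighbor correlation of the chain (illustrative knob)
    """
    rng = np.random.default_rng(seed)

    def ar1(k, f):
        # unit-variance first-order Markov chain
        e = rng.standard_normal(k)
        u = np.empty(k)
        u[0] = e[0]
        for i in range(1, k):
            u[i] = f * u[i - 1] + np.sqrt(1.0 - f * f) * e[i]
        return u

    v = sigma_v * ar1(k, f)
    # log-normal moments: if y = exp(g) with g ~ N(-s^2/2, s^2),
    # then <y> = 1 and Var(y) = exp(s^2) - 1 = sigma_y^2
    s2 = np.log(1.0 + sigma_y**2)
    y = np.exp(np.sqrt(s2) * ar1(k, f) - s2 / 2.0)
    return v, n0 * y
```

In the actual procedure such $\{v_i\}$ and $\{n_i\}$ configurations are the inner optimization variables, adjusted for each trial point $(N_{\rm H}, U_0, \sigma_{\rm v}, \sigma_{\rm y}, Z_a)$ until $\chi^2 \sim 1$ per degree of freedom.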
To optimize the configurations of $\{v_i\}$ and $\{n_i\}$, a simulated annealing algorithm with the Tsallis acceptance rule (Xiang et al., 1997) and an adaptive annealing temperature choice is used (details are given in Paper I). The following important fact should be taken into account when interpreting the results obtained with the MCI technique. The mean ionization parameter $U_0$ is related to the parameters of the gas cloud as (see eq. \[28\] in LAK) $$U_0 = \frac{n_{\rm ph}}{n_0} (1 + \sigma^2_{\rm y})\; . \label{eq:F1}$$ Here $n_{\rm ph}$ is the number density of photons with energies above 1 Ry, which is determined by the intensity of the adopted background ionizing spectrum. This equation shows that if the density field is fluctuating ($\sigma_{\rm y} > 0$), then with the same mean density $n_0$ and the same background ionizing spectrum we obtain a higher value of $U_0$ without any additional sources of ionization. In this case, intermittent regions of low and high ionization caused by the density fluctuations will occur along the sightline. On the other hand, for a given $U_0$ the mean gas density $n_0$ is also higher in a fluctuating medium as compared to a completely homogeneous gas cloud ($\sigma_{\rm y} = 0$). Since the linear size of the absorber is $L = N_{\rm H}/n_0$, the sizes estimated under the assumption of a constant density (as, e.g., in the VPF) may turn out to be too large. Another important question is whether the MCI solution is unique and accurate. In general, inverse problems are highly non-linear and ill-posed, which implies multiple solutions and/or very broad uncertainty ranges for the recovered parameters. To produce a physically reasonable solution, one has to account for all available information related to the case under study. For instance, the more lines of different ionic transitions are included in the analysis, the more accurate the result, since both low- and high-density regions are probed simultaneously.
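Equation (1) can be inverted to make the point about sizes concrete: for a fixed $U_0$ and photon density, a fluctuating medium implies a larger $n_0$ and hence a smaller $L = N_{\rm H}/n_0$. The numbers below are purely illustrative, not taken from the paper's tables.

```python
def mean_density_and_size(U0, sigma_y, N_H, n_ph):
    """Invert eq. (1): n0 = n_ph (1 + sigma_y^2) / U0, then L = N_H / n0.

    U0      : mean ionization parameter
    sigma_y : dimensionless density-fluctuation moment
    N_H     : total hydrogen column density [cm^-2]
    n_ph    : photon number density above 1 Ry [cm^-3]
    Returns (n0 [cm^-3], L [kpc]).
    """
    KPC = 3.086e21                          # cm per kpc
    n0 = n_ph * (1.0 + sigma_y**2) / U0
    return n0, N_H / n0 / KPC

# with sigma_y > 0 the same U0 implies a denser, hence smaller,
# absorber than the homogeneous case sigma_y = 0
n0_f, L_f = mean_density_and_size(U0=1e-2, sigma_y=1.0, N_H=1e19, n_ph=1e-5)
n0_h, L_h = mean_density_and_size(U0=1e-2, sigma_y=0.0, N_H=1e19, n_ph=1e-5)
```

With $\sigma_{\rm y} = 1$ the mean density doubles and the inferred size halves relative to the homogeneous case, which is exactly why constant-density (VPF-style) sizes may come out too large.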
One may also compare the relative metal abundances predicted by nucleosynthetic theories with those provided by the MCI. An odd pattern may indicate a misleading solution. The recovered linear sizes must also be in agreement with the characteristic sizes of the absorbers stemming from observations of gravitationally lensed quasars and quasar pairs, which show $L \la 100$ kpc. One of the main problems in the analysis of high-redshift QSO spectra is line blending, which significantly hampers the inversion of the observed spectra. As compared to the VPF method, the MCI is much more robust in dealing with blended lines due to the assumption that all ions trace the same underlying gas density and velocity distributions. This means that we are able to reconstruct both distributions using unblended parts of different lines. It is obvious that the accuracy of the recovered parameters improves with an increasing number of lines and a greater variety of ions involved in the analysis. A priori we do not know which parts of the lines are blended and which are not. To clarify this, several test runs with different arrangements of lines are carried out until a self-consistent fit for the majority of lines observed in the spectrum is found.

Results on individual metal systems
===================================

All results given below in Tables 1-3 were obtained using the MCI procedure as described in Paper I. Given the shape and the intensity at 1 Ry of the background ionizing radiation, the errors of the fitting parameters $U_0$, $N_{\rm H}$, $\sigma_{\rm v}$, $\sigma_{\rm y}$ and $Z_{\rm a}$ are about 15-20%, the errors of the estimated column densities are less than 10%, whereas the derived parameters $n_0$ and $L$ are estimated with about 50% accuracy. These errors, however, should be considered as internal in the sense that they merely reflect the configuration of the parameter space in the vicinity of a minimum of the objective function.
To what extent the recovered parameters may correspond to their real values is discussed separately for each individual absorbing system. We note in passing that the density $n_0$, and hence the linear size $L$, scales with the intensity of the radiation field (see eq. \[1\]). The analyzed metal systems are described within three categories: (1) Lyman limit systems with $N$() $> 5\times10^{16}$ [cm$^{-2}\,$]{}, (2) Ly$\alpha$ absorbers with $N$() $< 5\times10^{16}$ [cm$^{-2}\,$]{}, and (3) Ly$\alpha$ systems with a probable metallicity gradient. Their physical properties are compared with those of the different types of absorbers listed in the Appendix.

Lyman limit systems
-------------------

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.21

This system consists of a broad saturated Ly$\alpha$ hydrogen line spread over 500 [km s$^{-1}\,$]{} and of metal lines of low- and high-ionization species: $\lambda1302$, $\lambda1334$, $\lambda\lambda2796,$ 2803, $\lambda1670$, $\lambda\lambda1190,$ 1193, $\lambda\lambda1608,$ 2344, 2382, 2586, 2600, $\lambda1854$, $\lambda1206$ and and doublets as well. The physical parameters obtained with the MCI are presented in Table 1, whereas the corresponding observed and synthetic spectra are shown in Fig. 2. The recovered density and velocity distributions along the line of sight are plotted in Fig. 3. The intermittent high- and low-density regions giving rise to, respectively, low- and high-ionization species (a multiphase medium) are clearly seen. This means that the lines of different ionization stages arise in different areas despite having the same radial velocities \[see, e.g., the regions with $v(x) \simeq 0$ [km s$^{-1}\,$]{} in Fig. 3\]. According to these data, the system under study is a compact (380 pc) warm cloud ($T_{\rm kin} \simeq 9000$ K) with a high metal content (0.6 solar value for O) and a high velocity dispersion ($\sigma_{\rm v} \simeq 80$ [km s$^{-1}\,$]{}).
The inferred column density of $4.6\times10^{17}$ [cm$^{-2}\,$]{} classifies this system as a typical LLS \[since $\log N$() $> 17$ [cm$^{-2}\,$]{}\]. In principle, this value lies beyond the applicability limit of the MCI, which is formally valid for $\tau_{912} < 1$ \[if $N$() = $4.6\times10^{17}$ [cm$^{-2}\,$]{}, then $\tau_{912} = 3$\]. Besides, having only one saturated Ly$\alpha$ line, we cannot in any case say for sure that the estimated value is the real hydrogen column density. But there are also reasons that make the obtained solution quite plausible. First, we consider an absorber as a clumpy region, which implies that the ionizing radiation may penetrate the cloud from different directions without being significantly altered (i.e. we assume that the density of the background ionizing radiation is not much reduced and its spectral distribution is not changed considerably in a gas cloud which is not a slab of uniform density). Second, the observed mixture of the and lines and the and lines makes it possible to fix the mean ionization parameter $U_0$ quite tightly because the fractional ionization curves for, e.g., and are very different. Thus, the column density can hardly be less than the value estimated by the MCI, since in that case the metallicity would be higher than solar. Besides, the same velocity interval covered by both the Ly$\alpha$ and $\lambda1206$ lines suggests that we probably observe the real profile of the Ly$\alpha$ (a blend with the Ly$\beta$ line from the system at [$z_{\rm abs}\,$]{}= 2.8102, which falls in the range $-40$ [km s$^{-1}\,$]{}$< \Delta v < 140$ [km s$^{-1}\,$]{}, has little influence on the Ly$\alpha$ red wing – see Fig. 2). Higher values of $N$() cannot be excluded, but additional calculations have shown that the maximum of the column density is limited: both available wings of the Ly$\alpha$ line do not allow an increase in $N$() by more than a factor of three.
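The quoted validity check is a one-line computation: the Lyman-limit optical depth is the neutral hydrogen column density times the photoionization cross-section at 912 Å, $\sigma_{912} \simeq 6.3\times10^{-18}$ cm$^2$ (the standard hydrogen value, not stated explicitly in the text).

```python
def tau_912(N_HI, sigma_912=6.3e-18):
    """Lyman-limit optical depth: tau_912 = N(HI) * sigma_912.

    sigma_912 ~ 6.3e-18 cm^2 is the hydrogen photoionization
    cross-section at 912 A (1 Ry).
    """
    return N_HI * sigma_912

# the quoted case: N(HI) = 4.6e17 cm^-2 gives tau_912 ~ 3,
# beyond the MCI's formal tau_912 < 1 regime
t = tau_912(4.6e17)
# tau_912 = 1 is reached already at N(HI) ~ 1.6e17 cm^-2
N_limit = 1.0 / 6.3e-18
```

This is why a three-fold increase in $N$(H I) allowed by the Ly$\alpha$ wings pushes the system only deeper into the optically thick regime without changing its classification.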
This uncertainty does not change, fortunately, the main characteristic of this system: we do observe at [$z_{\rm abs}\,$]{}= 2.21398 a compact ($L < 1$ kpc) metal-rich ($Z > 0.1\,Z_\odot$) cloud. As seen from Fig. 2, the synthetic profiles represent well all unblended spectral features (the broad absorption features at the position of the doublet are not consistent with each other and, hence, cannot be attributed to the real profiles; the same holds for the doublet). The measured relative abundances are discussed later in § 5.3. Here we note that the set of identified species and their pattern, as well as the estimated mass ($\sim 10^4 M_\odot$), resemble the parameters measured in so-called high and intermediate velocity clouds (HVC, IVC) in the Local Group (Wakker 2001). Although the nature of the HVCs and IVCs is still poorly understood, they are unlikely to be phenomena isolated to the Milky Way. The system at [$z_{\rm abs}\,$]{}= 2.21 may be a similar HVC encountered in a high-redshift galactic halo. A more precise determination of its properties would require the higher Lyman series lines to be included in the analysis (these lines, however, can be observed with space telescopes only).

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.81

Here we identified three neutral hydrogen lines and several metal absorption lines of both low and high ionic transitions (see Fig. 4; the doublet falls, unfortunately, in a wavelength coverage gap of the Keck spectrum). The estimated physical parameters are listed in Table 1, whereas the corresponding synthetic spectra are shown in Fig. 4 by the smooth curves. It should be noted, however, that the solution with the column density of $5.5\times10^{16}$ [cm$^{-2}\,$]{} is not unique because all available hydrogen lines are saturated and partly blended in the red wings. We also found another solution with $N$() $= 10^{17}$ [cm$^{-2}\,$]{}.
The solution presented in Table 1 has been chosen because it delivered a more or less consistent set of all parameters: the velocity dispersion $\sigma_{\rm v} \simeq 60$ [km s$^{-1}\,$]{}, the mean gas density $n_0 \simeq 10^{-3}$ [cm$^{-3}\,$]{}, the linear size $L \simeq 30$ kpc, and the metallicity $Z \sim 1/10\,Z_\odot$. For comparison, the second solution with $N$() $=10^{17}$ [cm$^{-2}\,$]{} gives the metallicity $Z \simeq 1/20\,Z_\odot$, the size $L \simeq 70$ kpc and the same other parameters. According to both sets of estimated parameters and the fact that rather strong low-ionization lines of and are observed, the [$z_{\rm abs}\,$]{}= 2.81 system could be related to an inner galactic halo.

### APM BR J0307–4945, [$z_{\rm abs}\,$]{}= 4.21

In the available spectral range we identified the hydrogen lines Ly$\alpha$, Ly$\beta$ and Ly$\gamma$ as well as the metal lines $\lambda1334$, $\lambda1526$, $\lambda977$, $\lambda989$, $\lambda1854$, $\lambda1206$ and the and doublets. From the metal line profiles, two main subsystems can be clearly distinguished: the first at $v \simeq -200$ [km s$^{-1}\,$]{}, and the second at $v \simeq 200$ [km s$^{-1}\,$]{} (see Fig. 5). The analysis of the [$z_{\rm abs}\,$]{}= 4.21 absorber has been carried out in two steps: ($i$) the subsystem at $v \simeq 200$ [km s$^{-1}\,$]{} was treated separately (the results are listed in column 4 in Table 1 and the corresponding synthetic spectra are shown by the solid lines in Fig. 5), ($ii$) both systems were fitted together (column 5 in Table 1, the dotted lines in Fig. 5). The recovered metallicities are high (about $1/3\,Z_\odot$ at $v \simeq 200$ [km s$^{-1}\,$]{}, and slightly higher at $v \simeq -200$ [km s$^{-1}\,$]{}) and their pattern is nearly solar. At low redshifts similar characteristics are measured, e.g., in high-metallicity blue compact galaxies.
At high redshifts, some radio galaxies are known which show two spatially resolved emitting regions, slightly sub-solar metallicities and sizes of about tens of kpc, e.g., MRC 2104–242 at $z = 2.49$ (Overzier et al. 2001) or TN J1338–1942 at $z = 4.11$ (De Breuck et al. 1999). It is also known that galaxies showing morphological evidence of a merger have excessive velocity widths of their spectral lines (e.g. Mallén-Ornelas et al. 1999). Thus we conclude that the LLS under study probably arises when the line of sight intersects two clumps which may be merging. The observed range of the metal lines (from $\simeq -300$ to 400 [km s$^{-1}\,$]{}) and their very complex structures are also in line with this picture. Although the recovered values are self-consistent, we cannot guarantee their uniqueness for the following reasons. The hydrogen lines are blended and saturated and, hence, the real $N$() value may be higher, leading to lower metal abundances. Besides, in our calculations the HM background ionizing spectrum was adopted. However, it is probable that in a close pair of galaxies where significant star-forming activity is triggered by the merging, the HM spectrum can be affected by local sources. In particular, the discrepancies between the observed and theoretical intensities seen in the and components at $v = 100$ [km s$^{-1}\,$]{} as well as the estimated overabundant ratio of \[C/Si\] $\simeq 0.1$ (usually \[Si/C\] lies between 0 and 0.4, see the Appendix) may be caused by an inadequate choice of the background ionizing spectrum. Nevertheless, the observed metal profiles are quite consistent with the HM spectrum and therefore we expect that the influence of the local sources may not be very strong and, hence, the interpretation of this absorption system will not be significantly altered.
Absorbers with $N$() $< 5\times10^{16}$ [cm$^{-2}\,$]{}
-------------------------------------------------------

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.53 and 2.65

These two metal systems with [$z_{\rm abs}\,$]{}= 2.5370 and 2.65044 show broad and saturated hydrogen Ly$\alpha$ lines and lines of silicon and carbon in different ionization stages. The computational results are presented in Table 2; the observed and synthetic profiles are shown in Figs. 6 and 7. The solutions listed in Table 2 are not unique because of the single hydrogen line and the blended low-ionization lines of $\lambda$1260 and $\lambda$1334. If in reality the $\lambda$1260 absorption is absent at [$z_{\rm abs}\,$]{}= 2.5370, then a solution with a higher $U_0$ and, hence, a larger linear size can also be obtained. Therefore we consider the estimated linear size of 13 kpc as a lower limit. The system at [$z_{\rm abs}\,$]{}= 2.650 may have low-ionization $\lambda1334$ absorption resulting in a lower mean ionization parameter and a smaller linear size as compared with those listed in Table 2. A rather high overabundance ratio \[Si/C\] $\simeq 0.5$ estimated for this system also allows us to speculate that the real ionization parameter may be lower. But these considerations do not change the classification of both absorbers: most probably they are hosted by the halos of distant galaxies.

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.962 and 2.966

These two systems are separated by only 300 [km s$^{-1}\,$]{} but demonstrate very different physical characteristics (see Table 2 and Figs. 8 and 9). The multiple hydrogen lines available in their spectra make it possible to estimate quite accurately the hydrogen column densities and all other physical parameters.
The [$z_{\rm abs}\,$]{}= 2.96171 system shows a broad Ly$\alpha$ line extending over 400 [km s$^{-1}\,$]{} and weak absorption lines of highly ionized silicon and carbon \[the $\lambda977$ line is contaminated by the Ly$\gamma$ line from the [$z_{\rm abs}\,$]{}= 2.97915 system (see Fig. 11); the $\lambda$1334 line falls in a wavelength coverage gap of the Keck spectrum; the $\lambda\lambda$ 1260, 1193 lines are strongly blended\]. The derived physical parameters are typical for a galactic halo absorber: a low-density ($n_0 \simeq 3\times10^{-4}$ [cm$^{-3}\,$]{}), low-metallicity ($Z \simeq 0.01\,Z_\odot$), hot ($T_{\rm kin} \simeq 40000$ K) cloud of $L \simeq 20$ kpc size. The large overabundance of silicon as compared with carbon (\[Si/C\] $\la 0.8$) can be explained by the uncertainty in the estimated carbon abundance: only one weak $\lambda1548$ line is available for the analysis. The adjacent system at [$z_{\rm abs}\,$]{}= 2.96591 is, on the contrary, a very compact ($L \simeq 120$ pc), warm ($T_{\rm kin} \simeq 10000$ K) cloud, 5 times denser and 30 times more metal abundant. This absorber also reveals a weak $\lambda989$ line \[contaminated in the right wing by the H$_2$ L11-0P(2) line from the [$z_{\rm abs}\,$]{}= 3.025 DLA system – see Fig. 5 in Levshakov et al. 2002b\]. We do not detect high-amplitude fluctuations in the density and velocity fields in this compact system (see Fig. 10), and as a result the observed metal line profiles are almost symmetric. The low-density region with the space coordinate $0 \leq x \leq 0.2$ does not contribute much to the line profiles, although a weak absorption arising in this gas can be seen in Fig. 9 in the $\lambda$977 and $\lambda1548$ lines at $v \simeq -30$ [km s$^{-1}\,$]{}. Since it is hardly possible that a cloud with a 120 pc size could exist in space on its own, the two systems are probably physically related.
For instance, this small, metal-enriched cloud may be a condensation of supernova-heated gas in the galactic halo seen in the [$z_{\rm abs}\,$]{}= 2.96171 absorption lines. This process, known as a galactic fountain (Bregman 1980), is believed to give origin to the high-metallicity HVCs observed in the halo of the Milky Way. The velocity excess of the HVCs is usually greater than 90 [km s$^{-1}\,$]{}, which is consistent with the shift of $\simeq 120$ [km s$^{-1}\,$]{} between the redward absorption in the Ly$\alpha$ profile at [$z_{\rm abs}\,$]{}= 2.96171 and the centre of the Ly$\alpha$ line at [$z_{\rm abs}\,$]{}= 2.96591.

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.98

Although this system belongs to the most commonly observed type in the Ly$\alpha$ forest (apart from hydrogen lines, only weak and doublets and no apparent absorption in other ionic species are registered), its hydrogen lines are unusually broad, extending over 600 [km s$^{-1}\,$]{}. The available Ly$\alpha$, Ly$\beta$ and Ly$\gamma$ lines (see Fig. 11) allow us to estimate the hydrogen column density with sufficiently high accuracy. According to the recovered physical parameters (see Table 2), in this case we are dealing with a large ($L \simeq 50$ kpc) cloud of rarefied ($n_0 \simeq 10^{-4}$ [cm$^{-3}\,$]{}), metal-poor ($Z < 0.01\,Z_\odot$) and hot ($T_{\rm kin} \simeq 40000$ K) gas. The blending of the $\lambda977$ and $\lambda1206$ lines and the weakness of the $\lambda1393$ line do not allow us to estimate accurately the mean ionization parameter. The $U_0$ value presented in Table 2 should be considered as a lower limit. If in reality $U_0$ is higher, then the absorber may have a lower mean gas density and a larger linear size. The most probable host for this system might be an external region of a giant galactic halo or a large scale structure object.
### Q0347–3819, [$z_{\rm abs}\,$]{}= 3.14

The unsaturated Ly$\gamma$ line gives an accurate estimate of the total neutral hydrogen column density (Fig. 12, Table 2). Clean continuum windows seen at the expected positions of the metal lines make it possible to set upper limits on the metal abundances and to calculate the total hydrogen column density $N_{\rm H}$. The result obtained shows a rather low-metallicity cloud with \[C/H\] $< -2.2$ and a linear size $L > 13$ kpc. One may expect to observe similar systems in the outer parts of galactic halos.

### APM BR J0307–4945, [$z_{\rm abs}\,$]{}= 4.81

This is the most distant absorber in our set where a low metal abundance can be directly measured. Its hydrogen Ly$\alpha$ line is clearly seen at $\Delta v = 3000$ [km s$^{-1}\,$]{} in the wide emission blend Ly$\alpha$++ shown in Fig. 1[**c**]{}. The absorption lines in Fig. 13 give the redshift [$z_{\rm abs}\,$]{}= 4.8101 and, thus, this system has [$z_{\rm abs}\,$]{}$> z_{\rm em}$. A velocity difference of the same order of magnitude ($\Delta v \simeq 3000$ [km s$^{-1}\,$]{}) between the H$_2$-bearing cloud at [$z_{\rm abs}\,$]{}= 2.811 (Levshakov & Varshalovich 1985) and the quasar redshift $z_{\rm em} = 2.770$ (Foltz et al. 1988) has been observed toward PKS 0528–250. This H$_2$ cloud seems to be at a distance larger than 10 kpc from the quasar, as shown by Srianand & Petitjean (1998). In our case, however, we are not able to estimate the proximity of the quasar. We can only assume that the photoionization of the [$z_{\rm abs}\,$]{}= 4.8101 system could be affected by the quasar radiation. This assumption is supported by the following facts. It turned out to be impossible to fit all available lines with the HM ionizing spectrum: the relative intensities of the $\lambda977$ and $\lambda1548, 1550$ lines required very high $U_0$ values ($U_0 > 0.1$) which contradicted the shallow extended wings of the Ly$\alpha$ line.
On the contrary, the MF ionizing spectrum, appropriate for an AGN, allowed us to fit all lines and delivered a self-consistent set of physical parameters (Table 2, Figs. 13 and 14). The intensity at 1 Ry was set to $J_{912} = 10^{-22}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ sr$^{-1}$, which corresponds to the intensity of the HM spectrum at $z = 4.9$. According to the recovered values, the absorber at [$z_{\rm abs}\,$]{}= 4.81 is a metal-poor cloud with a linear size of about 25 kpc and a mean density $n_0 \simeq 2\times10^{-4}$ [cm$^{-3}\,$]{}. Fig. 14 shows that the shallow wings of the Ly$\alpha$ line are produced by low-density gas streaming out, whereas the central region of the cloud remains very quiet. Since we clearly see only the doublet[^2] and have upper limits for the intensities of the $\lambda977$ and $\lambda1393$ lines, the mean ionization parameter $U_0$ listed in Table 2 should be considered as a lower limit, implying that lower metal abundances and lower gas densities may also be possible. Taking into account that the density $n_0$, and hence the linear size $L$, scales with the intensity of the radiation field (see eq. \[1\]), the value of $L$ becomes quite uncertain. It could be larger or smaller than the estimated size of 25 kpc (this uncertainty is marked by ‘?’ in Table 2). An absorption system with similar spectral characteristics (wide and shallow wings of the Ly$\alpha$ line, a weak doublet) was previously observed by Reimers et al. (2001) at [$z_{\rm abs}\,$]{}= 1.674 toward HE 0515–4414 ($z_{\rm em} = 1.73$). The system at [$z_{\rm abs}\,$]{}= 1.674 shows in addition a strong doublet which, unfortunately, cannot be identified in the [$z_{\rm abs}\,$]{}= 4.81 cloud because of blending in the dense Ly$\alpha$ forest. Thus we may conclude that such systems are probably formed in gas clouds affected by the QSO radiation.
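For orientation, the photon density $n_{\rm ph}$ entering eq. (1) can be estimated from an intensity such as the adopted $J_{912}$ by integrating $J_\nu/h\nu$ over the ionizing range. The sketch below assumes a single power-law spectrum, $J_\nu = J_{912}(\nu/\nu_0)^{-\alpha}$, which neither the HM nor the MF spectrum actually is; the slope $\alpha$ is purely illustrative.

```python
import math

def n_ph_powerlaw(J912, alpha):
    """Photon number density above 1 Ry for a power-law background.

    n_ph = (4 pi / c) * Int_{nu0}^inf (J_nu / h nu) dnu
         = 4 pi J912 / (c h alpha)   for J_nu = J912 (nu/nu0)^-alpha

    J912  : intensity at 1 Ry [erg cm^-2 s^-1 Hz^-1 sr^-1]
    alpha : spectral slope (> 0), illustrative only
    """
    C = 2.998e10      # speed of light [cm/s]
    H = 6.626e-27     # Planck constant [erg s]
    return 4.0 * math.pi * J912 / (C * H * alpha)

# J912 = 1e-22 (the value adopted here at z = 4.9) with alpha = 1.8
# gives n_ph of a few times 1e-6 cm^-3
n_ph = n_ph_powerlaw(1e-22, 1.8)
```

Combined with eq. (1), such an $n_{\rm ph}$ and the fitted $U_0$ and $\sigma_{\rm y}$ give the mean density $n_0$ and hence the size $L = N_{\rm H}/n_0$; this is the sense in which $L$ scales with the adopted intensity.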
Ly$\alpha$ systems with a metallicity gradient
----------------------------------------------

### Q0347–3819, [$z_{\rm abs}\,$]{}= 2.848 and 2.899

These systems present several seemingly unblended hydrogen lines and pronounced lines of highly ionized silicon and carbon (Figs. 15, 16; Table 3). Additionally, the $\lambda1238$ and $\lambda1031$ lines were identified at [$z_{\rm abs}\,$]{}= 2.848. In spite of the multiple lines, it turned out that the MCI failed to fit adequately all available ionic profiles in the apparent velocity ranges when homogeneous metallicities over the entire absorbing regions were assumed. The most sensitive restrictions in these calculations are set by the continuum windows seen in the profiles of the strong $\lambda977$ and $\lambda1206$ lines. We consider the observed lines from these systems as arising in clouds with metallicity gradients for the following reasons. The hydrogen Ly$\alpha$ and Ly$\beta$ lines seen at [$z_{\rm abs}\,$]{}= 2.848 in the $\Delta v$ range between 100 [km s$^{-1}\,$]{} and 400 [km s$^{-1}\,$]{} do not show any metal absorption in this velocity interval. The absorption features seen in the range 140 [km s$^{-1}\,$]{}$\leq \Delta v \leq 400$ [km s$^{-1}\,$]{} in Fig. 15 (panel ) and in the range 180 [km s$^{-1}\,$]{}$\leq \Delta v \leq 400$ [km s$^{-1}\,$]{} in panel cannot be attributed to the corresponding hydrogen sub-system, since otherwise we should observe pronounced lines in the same velocity range. If the redward hydrogen absorption is physically connected with the blueward one, then we may conclude that the [$z_{\rm abs}\,$]{}= 2.848 absorber has a metallicity gradient. A very similar picture is seen in the [$z_{\rm abs}\,$]{}= 2.899 system for the blueward portions of the hydrogen lines in the range $-220$ [km s$^{-1}\,$]{}$\la \Delta v \la -80$ [km s$^{-1}\,$]{} (Fig. 16).
The $\lambda977$ line (which is the most sensitive absorption line in a wide range of the ionization parameter, from $\log U_0 \ga -4$ to $\log U_0 \simeq 0$) shows no absorption in this velocity range. In principle, the [$z_{\rm abs}\,$]{}= 2.899 system may be caused by a blending effect when the line of sight intersects two separate clouds (one of them supposedly metal-free) which have approximately the same radial velocities. As for the [$z_{\rm abs}\,$]{}= 2.848 system, a fortuitous blending can be ruled out because we observe two absorption features of approximately equal depth at $v \simeq -80$ [km s$^{-1}\,$]{} in the Ly$\alpha$ and profiles – a configuration noticed previously at [$z_{\rm abs}\,$]{}= 1.385 toward HE 0515–4414 by Reimers et al. (2001), i.e. this unusual pattern is not unique and seems to indicate some special kind of absorption system. Preliminarily we may conclude that the system at [$z_{\rm abs}\,$]{}= 2.848, as well as probably that at [$z_{\rm abs}\,$]{}= 2.899, arises when the line of sight intersects a distant halo with very low (if any) metal content and encounters a metal-rich HVC. Unfortunately, an accurate quantitative analysis of both systems cannot be carried out because the velocity excesses of the assumed HVCs are not large enough to separate the contributions to the hydrogen lines from the halos and from the clouds, as was possible in the case of the [$z_{\rm abs}\,$]{}= 2.962 and [$z_{\rm abs}\,$]{}= 2.965 systems. Therefore the results presented in Table 3 are tentative, but they are physically reasonable and self-consistent. The corresponding synthetic spectra are shown in Figs. 15 and 16 by the solid lines. It should be noted that for both systems, only one wing of the available lines (marked with the horizontal bold lines) was included in the analysis, whereas the synthetic profiles for the entire lines of the encountered HVCs were computed using the velocity and density distributions estimated from the metal profiles.
According to the data from Table 3, the suggested HVCs belong to different types. The system at [$z_{\rm abs}\,$]{}= 2.899 has a metal abundance pattern, a set of observed ions (notice the pronounced doublet) and a size similar to those estimated for the supposed HVC at [$z_{\rm abs}\,$]{}= 2.965, and consistent with the parameters of the HVCs observed in the Milky Way (see Appendix). The system at [$z_{\rm abs}\,$]{}= 2.848 is more highly ionized ( and and no ) and has a size of several kiloparsecs. Highly ionized HVCs with similar parameters were observed by Sembach et al. (1999) near the Milky Way. Their origin is still uncertain. The [$z_{\rm abs}\,$]{}= 2.848 absorber may belong to the intercluster gas clouds in a distant group of galaxies, as was suggested by Sembach et al. for the local highly ionized HVCs.

Discussion
==========

The origin of metal systems
---------------------------

Metal systems with $N$() $< 5\times10^{16}$ [cm$^{-2}\,$]{} are usually believed to originate in galactic halos at different galactocentric distances. At low redshifts ($z < 1$) the galaxies associated with certain metallic absorptions (e.g. ) can in most cases be identified directly (e.g. Chen et al. 2001a). Our results on absorption systems with $z \ga 2$ also support this assumption: the absorbers with [$z_{\rm abs}\,$]{}= 1.87 (Paper I), 2.54, 2.65, 2.962, 2.98 (present paper) are produced by metal-enriched ($Z < 0.1\,Z_\odot$), hot ($T_{\rm kin} \ga 20000$ K), rarefied ($n_0 \simeq 10^{-4} - 10^{-3}$ [cm$^{-3}\,$]{}) gas clouds which have typical linear sizes of $L > 10$ kpc. These parameters are consistent with contemporary models of galactic halos (e.g. Viegas, Friaca & Gruenwald 1999). The nature of Lyman limit absorbers is less understood. Mo & Miralda-Escudé (1996) associate them with cold photoionized clouds randomly moving in hot spherical halos.
The clouds are supposed to form from initial density inhomogeneities in the accreting intergalactic gas during its cooling. Both the cloud and the halo should then reveal the same metallicity since they are formed from the same gas. In our study two LLSs, with [$z_{\rm abs}\,$]{}= 1.92 (Paper I) and 2.81 (present paper), can be related to absorbers of this type. However, this scenario obviously fails to explain metal-abundant ($Z > 0.1\,Z_\odot$) systems since it is hard to understand how the whole halo can be metal-enriched to such a high level. It was shown by hydrodynamic simulations (e.g. Katz et al. 1996; Gardner et al. 2001) that LLSs can also arise on lines of sight that pass through small protogalaxies. We found two systems, with [$z_{\rm abs}\,$]{}= 1.94 (Paper I) and 4.21 (present paper), that can be explained within this framework. These metal-rich ($Z \simeq 1/3\,Z_\odot$) absorbers with sizes of several kpc are probably hosted by objects that may be akin to the local compact blue galaxies. Some absorbers in our present study ([$z_{\rm abs}\,$]{}= 2.21, 2.965, and, possibly, 2.89) reveal small linear sizes ($L < 1$ kpc) together with very high metal content ($Z \simeq 1/2\,Z_\odot$). These three systems may be explained in the framework of the process known as a galactic fountain: metal-enriched (supernova-heated) gas rises from the inner region of a galaxy and condenses into clouds within the hot galactic halo. After formation, the clouds fall back toward the galaxy centre because of their higher density. It is supposed that the high-metallicity HVCs observed in the Milky Way halo are formed by this mechanism (Bregman 1980). The HVCs are common objects in our Galaxy and are detected in every longitude and latitude region. If the galactic fountain also operates in distant galaxies, it would be quite probable to encounter such a cloud on a line of sight which intersects the galactic halo, as also discussed by Charlton et al. (2001).
Another type of HVCs – hot, highly ionized clouds with sizes of several kiloparsecs – is represented by the absorption system at [$z_{\rm abs}\,$]{}= 2.848. The origin of this type of HVCs is uncertain, but they may be produced by intergalactic metal-enriched gas falling onto metal-poor galactic halos. Measured abundances of C and Si are depicted versus logarithmic sizes of the studied systems in Fig. 17. Systematically higher metal abundances are seen in compact systems with linear sizes $L < 4$ kpc. This result seems to indicate that the more effective metal enrichment occurs within relatively compact regions. Our results show that Lyman limit systems are a [*heterogeneous*]{} population which is formed in at least three different environments. This should be taken into account when statistics of LLSs are used to verify different models in hydrodynamic cosmological simulations.

$\sigma_{\rm v} - N_{\rm H}\,L$ relation
----------------------------------------

If QSO metal systems are formed in gas clouds gravitationally bound to intervening galaxies, the internal kinematics of the QSO absorbers should be closely related to the total masses of the host galaxies. In the case of the galactic population, different types of galaxies show different scaling relations between the linear size and the velocity width of emission lines (e.g., Mallén-Ornelas et al. 1999). A possible correlation between the absorber linear size $L$ and its line-of-sight velocity dispersion $\sigma_{\rm v}$ was also mentioned in Paper I. The correlation between $\sigma_{\rm v}$ and $L$ stems from the virial theorem, which states $$\sigma^2_{\rm v} \sim \frac{M}{L} \sim n_0\,L^2 = N_{\rm H}\,L\; . \label{eq:E1}$$ Assuming that the gas systems are in quasi-equilibrium, one can expect $\sigma_{\rm v} \sim \sqrt{N_{\rm H}\,L}$. In Fig. 18 we examine our systems by comparing their kinematics ($\sigma_{\rm v}$) with measured sizes ($L$) and total gas column densities ($N_{\rm H}$).
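The scaling test behind such a diagram is an ordinary least-squares line in the log-log plane of $\sigma_{\rm v}$ versus $N_{\rm H}\,L$. The following is a minimal sketch with synthetic stand-in data (not the measured sample), built to obey a power law with slope 0.3 plus scatter:

```python
import numpy as np

def fit_slope(sigma_v, NH_L):
    """Return (kappa, intercept) of log10(sigma_v) = kappa*log10(NH_L) + b."""
    x = np.log10(NH_L)
    y = np.log10(sigma_v)
    kappa, b = np.polyfit(x, y, 1)   # degree-1 least-squares fit
    return kappa, b

# Synthetic points obeying sigma_v ~ (N_H * L)^0.3 with 0.05 dex scatter:
rng = np.random.default_rng(0)
NH_L = 10.0 ** rng.uniform(37.0, 41.0, size=20)      # arbitrary log range
sigma_v = 10.0 ** (0.3 * np.log10(NH_L) - 10.0
                   + rng.normal(0.0, 0.05, size=20))
kappa, _ = fit_slope(sigma_v, NH_L)   # recovers a slope close to 0.3
```

A measured slope near 0.5 would match the strict quasi-equilibrium expectation $\sigma_{\rm v} \sim (N_{\rm H}\,L)^{1/2}$; unknown impact parameters and halo density profiles can flatten the fitted value.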
Shown are the data for all QSO absorbers studied in Paper I and in the present paper except for the systems at [$z_{\rm abs}\,$]{}= 2.848 and 2.899 (Table 3), which show inhomogeneous metallicities. It is seen that in the $\log (\sigma_{\rm v})$ versus $\log (N_{\rm H}\,L)$ diagram, most systems with linear sizes $L > 1$ kpc lie along the line with the slope $\kappa = 0.30\pm0.03$ (1 $\sigma$ c.l.). Taking into account that we know neither the impact parameters nor the halo density distributions, this result can be considered a quite good fit to the relation (1). Hence we may conclude that most absorbers with $L > 1$ kpc are gravitationally bound to systems that appear to be in virial equilibrium at the cosmic time when the corresponding Ly$\alpha$ absorbers were formed. A possible consequence of this conclusion is that, since the most metal-rich absorbers identified in the QSO spectra arise in galactic systems, the question of whether the intergalactic matter is metal-enriched or pristine remains open.

\[$\alpha$-element/iron-peak\] ratio
------------------------------------

The metal abundances measured in the [$z_{\rm abs}\,$]{}= 2.21 LLS (Table 1) can be used to estimate the $\alpha$-element to iron-peak group ratio, which is a good indicator of the chemical evolutionary status of high-redshift gas clouds. During the chemical evolution, heavy elements produced in stars show different nucleosynthetic histories so that their relative abundances vary with cosmic time. Oxygen and other $\alpha$-chain elements are mainly produced by Type II SNe, while iron is also a product of Type Ia SNe which have longer evolution scales. In the early stages of the chemical evolution of galaxies ($\Delta t \la 2\times10^7$ yr) the interstellar gas is likely enriched by Type II SNe products, while at $\Delta t \ga 10^8$ yr, the \[$\alpha$/Fe\] ratio should decline. Observations reveal both low \[e.g.
$\simeq 0.1-0.2$ in the [$z_{\rm abs}\,$]{}= 3.390 dust-free DLA (Q0000–2620; Molaro et al. 2001) and in the [$z_{\rm abs}\,$]{}= 3.386 DLA (Q0201+1120; Ellison et al. 2000)\], and high \[e.g. $\simeq 0.7$ in the DLA I Zw 18 (Levshakov, Kegel & Agafonova 2001) and $0.68\pm0.08$ in the [$z_{\rm abs}\,$]{}= 3.025 DLA (Q0347–3819; Levshakov et al. 2002b)\] ratios of \[$\alpha$-element/iron-peak\]. Oxygen, with its weak affinity for dust grains, is a good tracer of the $\alpha$-element abundances. Nevertheless, the intrinsic \[$\alpha$/Fe\] ratio may be affected by depletion of iron since, being a refractory element, iron may be partly locked into dust grains. The dust content in the [$z_{\rm abs}\,$]{}= 2.21 LLS may not, however, be too high. The relative abundances of the $\alpha$-elements O, Mg and Si are \[Si/O\] $= -0.14\pm0.11$ and \[Mg/O\] = $-0.27\pm0.11$. In Galactic stars the $\alpha$-elements show the same behaviour relative to iron-peak elements (oversolar at \[Fe/H\] $\la -1$; see, e.g., Goswami & Prantzos 2000). We thus expect to find solar $\alpha$-element ratios in dust-free absorbing regions, as observed, e.g., in the above-mentioned [$z_{\rm abs}\,$]{}= 3.390 DLA toward Q0000–2620. The negative value of \[Mg/O\] found in this LLS may indicate the presence of some amount of dust with a depletion factor of about 0.2 dex for the magnesium abundance. If, however, only the gas-phase abundances of O and Fe are taken, the upper bound on the \[O/Fe\] ratio is $0.65\pm0.11$, which is comparable with that found, for instance, in the [$z_{\rm abs}\,$]{}= 3.025 DLA toward Q0347–3819 where the dust-to-gas ratio is $\simeq 1/30$ of the mean Galactic interstellar medium value (Levshakov et al. 2002b). The enrichment in $\alpha$-elements in the [$z_{\rm abs}\,$]{}= 2.21 LLS is also supported by the relative abundances of Si and Mg to Fe: \[Si/Fe\] = $0.51\pm0.11$ and \[Mg/Fe\] = $0.38\pm0.11$.
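The bracket ratios quoted here follow the standard definition $[X/Y] = \log(Z_X/Z_Y) - \log(X/Y)_\odot$. A minimal sketch, using the recovered absolute abundances from Table 1 and an assumed solar scale (roughly the Grevesse & Sauval 1998 values; the paper itself uses Holweger 2001, so small offsets are expected):

```python
import math

# Assumed solar abundances, log10(X/H) + 12 (Grevesse & Sauval 1998 scale):
SOLAR_LOG_ABUNDANCE = {"O": 8.83, "Fe": 7.50, "Si": 7.55, "Mg": 7.58}

def bracket(Z_X, Z_Y, X, Y):
    """[X/Y] = log10(Z_X/Z_Y) - log10(X/Y)_solar, with Z_X = X/H by number."""
    obs = math.log10(Z_X / Z_Y)
    sol = SOLAR_LOG_ABUNDANCE[X] - SOLAR_LOG_ABUNDANCE[Y]
    return obs - sol

# Absolute abundances recovered for the z_abs = 2.21 LLS (Table 1):
Z_O, Z_Fe = 4.4e-4, 5.0e-6
o_fe = bracket(Z_O, Z_Fe, "O", "Fe")   # close to the quoted 0.65
```

With these assumed solar values the result lands near 0.6, i.e. within the quoted $0.65\pm0.11$; the residual offset comes from the different solar reference.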
Thus, the absorbing cloud at [$z_{\rm abs}\,$]{}= 2.21 appears to be a chemically young object.

Summary
=======

We have deduced the physical properties of ten absorption-line systems in the range $\Delta z = 2.21 - 2.966$ toward Q0347–3819 and of two systems at [$z_{\rm abs}\,$]{}= 4.21 and 4.81 toward APM BR J0307–4945. The main conclusions are as follows:

1. The analyzed Lyman limit systems belong to a [*heterogeneous*]{} population which is formed by at least three groups of absorbers: ($i$) extended metal-poor gas halos of distant galaxies; ($ii$) gas in dwarf galaxies; ($iii$) metal-enriched gas arising from the inner galactic regions and condensing into clouds within the hot galactic halo (galactic fountain). While the interpretation of a single system is sometimes subject to large uncertainties, as discussed in Section 4, the existence of a wide spread of properties among the different systems is firmly established.

2. The correlation between the line-of-sight velocity dispersion $\sigma_{\rm v}$ and the linear size $L$ of the absorbing systems noted in Paper I is confirmed. New results show that large-size QSO absorbers ($L > 1$ kpc) obey a scaling law $\sigma_{\rm v} \sim (N_{\rm H}\,L)^{0.3}$ over two decades in velocity dispersion and in the product $(N_{\rm H}\,L)$. This means that the majority of the metal absorbers are probably bound to galactic systems and, hence, the question of whether the IGM is enriched or pristine requires further investigation.

3. Systematically higher metal abundances are found in compact systems: in our sample there are no small-size systems ($L < 1$ kpc) with metallicity lower than $0.1\,Z_\odot$.

4. The gas-phase metal abundances from the [$z_{\rm abs}\,$]{}= 2.21 LLS reveal a pronounced \[$\alpha$-element/iron-peak\] enhancement with \[O/Fe\] = $0.65\pm0.11$ at the $6\sigma$ confidence level; this is the first time this abundance pattern has been unambiguously found in an LLS.
The measured \[O/Fe\] ratio implies that the chemical age of this LLS is $\la 10^8$ yr.

5. The absorption systems at [$z_{\rm abs}\,$]{}= 2.21 and 2.965, and possibly the systems at [$z_{\rm abs}\,$]{}= 2.848 and 2.899 toward Q0347–3819, show characteristics very similar to those observed for different types of HVCs in the Milky Way and may be interpreted as the high-redshift counterparts of these Galactic objects.

We thank Prof. W. H. Kegel for helpful correspondence and our anonymous referee for many helpful suggestions. S.A.L. gratefully acknowledges the hospitality of the National Astronomical Observatory of Japan (Mitaka) where this work was performed. The work of S.A.L. and I.I.A. is supported in part by the RFBR grant No. 00-02-16007.

Possible counterparts of QSO absorbers
======================================

In our interpretation of the nature of the different metal systems with $N$() $\simeq 10^{14}-10^{17}$ [cm$^{-2}\,$]{} we have referred to a set of absorbers whose physical parameters are summarized in Table 4. Comparison between the reference absorbers and the QSO systems is based on the following parameters:

1. Linear sizes. The systems can be divided into small-size ($L < 1$ kpc), intermediate-size (1 kpc $\la L \la 10$ kpc), and large-size (10 kpc $\la L \la 150$ kpc) absorbing regions connected with galactic gaseous envelopes, as well as into very large-size ($L \ga$ 150 kpc) absorbers (filaments) showing correlation with the large-scale distribution of galaxies (e.g. Penton et al. 2002). Filaments are probably intergalactic material not recycled by galaxies.

2. Metallicities. Values of $0.3 \la Z/Z_\odot \la 1$, $0.03 \la Z/Z_\odot \la 0.3$, and $Z/Z_\odot < 0.03$ classify the systems as metal rich, metal enriched, and metal poor, respectively.

3. Metallicity patterns. Relative element abundances allow us to distinguish between chemically young and old systems (low and high relative \[$\alpha$-element/iron-peak\] ratio, respectively).
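The size and metallicity thresholds above can be stated as a small classification helper; this is only a restatement of the scheme in the two items, not code from the paper:

```python
def size_class(L_kpc):
    """Size classes from the Appendix: small / intermediate / large / filament."""
    if L_kpc < 1.0:
        return "small"
    if L_kpc <= 10.0:
        return "intermediate"
    if L_kpc <= 150.0:
        return "large"
    return "very large (filament)"

def metallicity_class(Z_over_Zsun):
    """Metallicity classes: metal poor / metal enriched / metal rich."""
    if Z_over_Zsun < 0.03:
        return "metal poor"
    if Z_over_Zsun <= 0.3:
        return "metal enriched"
    return "metal rich"

# e.g. the z_abs = 2.21 LLS (L = 0.38 kpc, Z ~ 0.5 Z_sun) classifies as a
# small, metal-rich system, matching the HVC-type interpretation above.
```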
Bajtlik, S., Duncan, R. C., & Ostriker, J. P. 1988, ApJ, 327, 570
Bergeron, J., & Boissé, P. 1991, A&A, 243, 344
Bregman, J. N. 1980, ApJ, 236, 577
Cen, R., Miralda-Escudé, J., Ostriker, J. P., & Rauch, M. 1994, ApJ, 437, L9
Charlton, J. C., Churchill, C. W., & Rigby, J. R. 2001, ASP Conf. Ser., 240, 487
Chen, H.-W., Lanzetta, K. M., Webb, J. K., & Barcons, X. 1998, ApJ, 498, 77
Chen, H.-W., Lanzetta, K. M., & Webb, J. K. 2001a, ApJ, 556, 158
Chen, H.-W., Lanzetta, K. M., Webb, J. K., & Barcons, X. 2001b, ApJ, 559, 654
De Breuck, C. et al. 1999, A&A, 352, L51
Dessauges-Zavadsky, M. et al. 2001, A&A, 370, 426
D’Odorico, S. et al. 2000, Proc. SPIE, 4005, 121
D’Odorico, S., Dessauges-Zavadsky, M., & Molaro, P. 2001, A&A, 368, L21
Ellison, S., Songaila, A., Schaye, J., & Pettini, M. 2000, AJ, 120, 1175
Ferland, G. J. 1997, A Brief Introduction to Cloudy (Internal Rep.; Lexington: Univ. Kentucky)
Foltz, C. B., Chaffee, F. H., Jr., & Black, J. H. 1988, ApJ, 324, 267
Gardner, J. P., Katz, N., Hernquist, L., & Weinberg, D. H. 2001, ApJ, 559, 131
Goswami, A., & Prantzos, N. 2000, A&A, 359, 191
Grevesse, N., & Sauval, A. J. 1998, Space Sci. Rev., 85, 161
Haardt, F., & Madau, P. 1996, ApJ, 461, 20
Holweger, H. 2001, in Solar and Galactic Composition, ed. R. F. Wimmer-Schweingruber, AIP Conf. Proc., 598, 23
Izotov, Y. I., & Thuan, T. X. 1999, ApJ, 511, 639
Katz, N., Weinberg, D. H., Hernquist, L., & Miralda-Escudé, J. 1996, ApJ, 457, L57
Levshakov, S. A., & Varshalovich, D. A. 1985, MNRAS, 212, 517
Levshakov, S. A., Takahara, F., & Agafonova, I. I. 1999, ApJ, 517, 609
Levshakov, S. A., Agafonova, I. I., & Kegel, W. H. 2000, A&A, 360, 833 \[LAK\]
Levshakov, S. A., Kegel, W. H., & Agafonova, I. I. 2001, A&A, 373, 836
Levshakov, S. A., Agafonova, I. I., Centurión, M., & Mazets, I. E. 2002a, A&A, 383, 813 \[Paper I\]
Levshakov, S. A., Dessauges-Zavadsky, M., D’Odorico, S., & Molaro, P. 2002b, ApJ, 565, 696
Mallén-Ornelas, G., Lilly, S. J., Crampton, D., & Schade, D.
1999, ApJ, 518, L83
Mathews, W. G., & Ferland, G. J. 1987, ApJ, 323, 456
Miralda-Escudé, J., Cen, R., Ostriker, J. P., & Rauch, M. 1996, ApJ, 471, 582
Mo, H. J., & Miralda-Escudé, J. 1996, ApJ, 469, 589
Molaro, P., Levshakov, S. A., Dessauges-Zavadsky, M., & D’Odorico, S. 2001, ApJ, 549, 90
Overzier, R. A., Röttgering, H. J. A., Kurk, J. D., & De Breuck, C. 2001, A&A, 367, L5
Penton, S. V., Stocke, J. T., & Shull, J. M. 2002, ApJ, 565, 720
Péroux, C., Storrie-Lombardi, L. J., McMahon, R. G., Irwin, M., & Hook, I. M. 2001, AJ, 121, 1799
Prochaska, J. X., & Wolfe, A. 1999, ApJS, 121, 369
Prochaska, J. X., & Wolfe, A. 2001, ApJ, 560, L33
Savage, B. D., & Sembach, K. R. 1996, ARA&A, 34, 279
Schaye, J. 2001, ApJ, 559, 507
Sembach, K. R., Savage, B. D., Lu, L., & Murphy, E. M. 1999, ApJ, 515, 108
Songaila, A. 1998, AJ, 115, 2184
Srianand, R., & Petitjean, P. 1998, A&A, 335, 33
Steidel, C. C., Dickinson, M., Meyer, D. M., Adelberger, K. L., & Sembach, K. R. 1997, ApJ, 480, 568
Theuns, T., Leonard, A., Efstathiou, G., Pearce, F. R., & Thomas, P. A. 1998, MNRAS, 301, 478
Tytler, D., & Fan, X.-M. 1992, ApJS, 79, 1
Viegas, S. M., Friaca, A. C. S., & Gruenwald, R. 1999, MNRAS, 309, 355
Vogt, S. S. et al. 1994, Proc. SPIE, 2198, 362
Wakker, B. P. 2001, ApJS, 136, 463
Xiang, Y., Syn, D. Y., Fan, W., & Gong, X. G. 1997, Phys. Lett.
A, 233, 216 [lcccc]{} $U_0$ & 2.1E-3 & 4.0E-2 & 7.5E-3 & 8.1E-3\ $N_{\rm H}$, [cm$^{-2}\,$]{}& 2.2E19 & 8.3E19 & 4.2E19 & 9.1E19\ $\sigma_{\rm v}$, [km s$^{-1}\,$]{}& 80.8 & 59.5 & 80.0 & 170.0\ $\sigma_{\rm y}$ & 1.0 & 1.15 & 1.2 & 1.25\ $Z^b_{\rm C}$ & 2.0E-4 & 3.1E-5 & 1.3E-4 & 2.0E-4\ $Z_{\rm N}$ & $\ldots$ & $<$4.0E-6 & $<$2.5E-5 & $<$2.5E-5\ $Z_{\rm O}$ & 4.4E-4 & $\ldots$ & $\ldots$ & $\ldots$\ $Z_{\rm Mg}$ & 1.5E-5 & $\ldots$ & $\ldots$ & $\ldots$\ $Z_{\rm Al}$ & $\ldots$ & $\ldots$ & $<$2.5E-6 & $<$2.5E-6\ $Z_{\rm Si}$ & 2.0E-5 & 4.2E-6 & 1.1E-5 & 1.5E-5\ $Z_{\rm Fe}$ & 5.0E-6 & $\ldots$ & $\ldots$ & $\ldots$\ $[Z_{\rm C}]^c$ & $-0.28$ & $-1.10$ & $-0.47$ & $-0.27$\ $[Z_{\rm N}]$ & $\ldots$ & $<-1.3$ & $<-0.5$ & $<-0.5$\ $[Z_{\rm O}]$ & $-0.10$ & $\ldots$ & $\ldots$ & $\ldots$\ $[Z_{\rm Mg}]$ & $-0.37$ & $\ldots$ & $\ldots$ & $\ldots$\ $[Z_{\rm Al}]$ & $\ldots$ & $\ldots$ & $<-0.5$ & $<-0.5$\ $[Z_{\rm Si}]$ & $-0.24$ & $-0.90$ & $-0.51$ & $-0.36$\ $[Z_{\rm Fe}]$ & $-0.75$ & $\ldots$ & $\ldots$ & $\ldots$\ $N$(H[i]{}), [cm$^{-2}\,$]{}& 4.6E17 & 5.5E16 & 1.4E17 & 2.3E17\ $N$(O[i]{}) & 1.5E14 & $\ldots$ & $\ldots$ & $\ldots$\ $N$(C[ii]{}) & 1.5E15 & 5.2E13 & 4.8E14 & 1.2E15\ $N$(Mg[ii]{})& 1.1E14 & $\ldots$ & $\ldots$ & $\ldots$\ $N$(Si[ii]{})& 2.3E14 & 8.2E12 & 4.8E13 & 1.0E14\ $N$(Fe[ii]{})& 1.3E13 & $\ldots$ & $\ldots$ & $\ldots$\ $N$(C[iii]{})& $\ldots$ & 1.6E15 & 4.3E15 & 1.3E16\ $N$(N[iii]{})& $\ldots$ & $<$2.0E14 & $<$6.3E14 & $<$1.6E15\ $N$(Al[iii]{})& $<$8.1E12 & $\ldots$ & $<$1.1E13 & $<$3.4E13\ $N$(Si[iii]{})& 2.0E14 & 1.2e14 & 2.6E14 & 6.8E14\ $N$(C[iv]{})& 7.7E13 & $\ldots$ & 5.6E14 & 2.0E15\ $N$(Si[iv]{})& 3.0E13 & 6.7E13 & 9.0E13 & 2.9E14\ $n_0$, [cm$^{-3}\,$]{}& 2.0E-2 & 9.2E-4 & 2.5E-3 & 2.5E-3\ $\langle T \rangle$, K & 9.1E3 & 2.4E4 & 1.2E4 & 1.2E4\ $T_{\rm min}$ & 8.6E3 & 1.7E4 & 1.1E4 & 1.1E4\ $T_{\rm max}$ & 9.8E3 & 3.3E4 & 1.4E4 & 1.5E4\ $L$, kpc & 0.38 & 30 & 6.0 & 13\ [lccccccc]{} $U_0$ & 2.9E-2 & 4.2E-2 & 8.5E-2 & 2.0E-2 & 0.15 & 0.11 
& 3.8E-2\ $N_{\rm H}$, [cm$^{-2}\,$]{}& 4.4E19 & 1.9E19 & 2.0E19 & 5.1E17 & 3.6E19 & 1.2E19 & 1.6E19\ $\sigma_{\rm v}$, [km s$^{-1}\,$]{}& 68.5 & 73.0 & 88.0 & 11.6 & 110.8 & 25.3 & 33.8\ $\sigma_{\rm y}$ & 1.05 & 0.7 & 0.85 & 0.92 & 1.12 & 1.0 & 0.87\ $Z^b_{\rm C}$ & 5.0E-6 & 3.7E-5 & 2.0E-6 & 1.5E-4 & 3.1E-6 & $<$ 1.8E-6 & 3.3E-6\ $Z_{\rm N}$ & $\ldots$ & $\ldots$ & $\ldots$ & 1.8E-5 & $\ldots$ & $\ldots$ & $\ldots$\ $Z_{\rm Si}$ & 1.2E-6 & 1.2E-5 & 1.1E-6 & 3.2E-5 & $<$3.5E-6 & $<$ 9.0E-7 & $<7.0$E-7\ $[Z_{\rm C}]^c$ & $-1.89$ & $-1.02$ & $>-2.3$ & $-0.42$ & $-2.07$ & $<-2.3$ & $<-2.0$\ $[Z_{\rm N}]$ & $\ldots$ & $\ldots$ & $\ldots$ & $-0.67$ & $\ldots$ & $\ldots$ & $\ldots$\ $[Z_{\rm Si}]$ & $-1.45$ & $-0.47$ & $-1.49$ & $-0.03$ & $<-1.0$ & $<-1.6$ & $<-1.7$\ $N$(H[i]{}), [cm$^{-2}\,$]{}& 2.2E16 & 6.2E15 & 2.0E15 & 3.6E14 & 1.9E15 & 8.7E14 & 3.7E15\ $N$(C[ii]{}) & $\ldots$ & $\ldots$ & $\ldots$ & 9.4E11 & $\ldots$ & $<$4.7E10 & $<$1.5E11\ $N$(Si[ii]{})& $\leq$1.2E12 & $\ldots$ & $\ldots$ & $<$1.8E11 & $\ldots$ & $<$1.6E10 & $\ldots$\ $N$(C[iii]{})& $\ldots$ & $\ldots$ & 1.1E13 & 4.3E13 & 1.8E13 & $<$4.6E12 & $<4.8$E12\ $N$(N[iii]{})& $\ldots$ & $\ldots$ & $\ldots$ & 5.2E12 & $\ldots$ & $\ldots$ & $\ldots$\ $N$(Si[iii]{})& 1.6E13 & 2.7E13 & 1.1E12 & 3.6E12 & $<$9.1E11 & $<$2.8E11 & $\ldots$\ $N$(C[iv]{})& 4.7E13 & 2.5E14 & 8.4E12 & 1.9E13 & 2.0E13 & $<$4.5E12 & 9.4E12\ $N$(Si[iv]{})& 7.3E12 & 3.7E13 & 8.5E11 & 5.2E12 & $<$1.4E12 & $<$3.0E11 & $<6.7$E11\ $n_0$, [cm$^{-3}\,$]{}& $<$1.1E-3 & 5.7E-4 & $<$3.2E-4 & 1.5E-3 & $<$2.4E-4 & $<$3.1E-4 & 2.3E-4?\ $\langle T \rangle$, K & 2.8E4 & 1.8E4 & 4.0E4 & 1.0E4 & 4.4E4 & 3.4E4 & 3.2E4\ $T_{\rm min}$ & 1.9E4 & 1.4E4 & 2.6E4 & 7.0E3 & 3.0E4 & 2.7E4 & 2.2E4\ $T_{\rm max}$ & 4.3E4 & 2.3E4 & 5.0E4 & 2.1E4 & 7.0E4 & 5.3E4 & 4.3E4\ $L$, kpc & $>$13 & 11 & $>$20 & 0.12 & $>$50 & $>$13 & 25?\ [lcc]{} $U_0$ & 0.1 & 2.2E-2\ $N_{\rm H}$, [cm$^{-2}\,$]{}& 2.3E18 & 1.9E18\ $\sigma_{\rm v}$, [km s$^{-1}\,$]{}& 34.9 & 54.3\ 
$\sigma_{\rm y}$ & 0.9 & 1.3\ $Z^b_{\rm C}$ & 9.1E-5 & 9.6E-5\ $Z_{\rm N}$ & $\la$2.5E-5 & $\ldots$\ $Z_{\rm O}$ & 3.5E-4 & $\ldots$\ $Z_{\rm Si}$ & $<$2.5E-5 & 2.4E-5\ $[Z_{\rm C}]^c$ & $-0.67$ & $-0.60$\ $[Z_{\rm N}]$ & $\la -0.5$ & $\ldots$\ $[Z_{\rm O}]$ & $-0.21$ & $\ldots$\ $[Z_{\rm Si}]$ & $<-0.14$ & $-0.15$\ $N$(H[i]{}), [cm$^{-2}\,$]{}& 1.8E14 & 4.1E15\ $N$(C[ii]{}) & $\ldots$ & 1.1E13\ $N$(Si[ii]{})& $\ldots$ & 3.6E12\ $N$(C[iii]{})& 2.7E13 & 1.4E14\ $N$(Si[iii]{})& $\ldots$ & 2.3E13\ $N$(C[iv]{})& 5.8E13 & 3.0E13\ $N$(Si[iv]{})& $<$1.3E12 & 1.3E13\ $N$(N[v]{}) & $\la$ 8.3E12 & $\ldots$\ $N$(O[vi]{}) & 5.3E13 & $\ldots$\ $n_0$, [cm$^{-3}\,$]{}& 2.5E-4 & 2.0E-3\ $\langle T \rangle$, K & 2.3E4 & 1.2E4\ $T_{\rm min}$ & 1.9E4 & 9.9E3\ $T_{\rm max}$ & 2.9E4 & 1.4E4\ $L$, kpc & 3.0 & 0.33\ [lcclc]{} 1. HVC-type absorbers inside& & & \[C/Fe\]$\simeq0.2$, \[N/Fe\]$\simeq0.6$ & 1,2\ galactic halos & $L < 1$ & $0.1 \la X \la 1$ & \[O/Fe\]$\simeq0.7$, \[Mg/Fe\]$\simeq0.2$ &\ & & & \[Al/Fe\]$\simeq0.3$, \[Si/Fe\]$\simeq0.3$ &\ 2. Gas in dwarf galaxies of & & & &\ ($i$) low-metallicity &$1 \la L \la 10$ &$0.03 \la X \la 0.06$ & \[C/Fe\]$\simeq-0.2$, \[O/Fe\]$\simeq0.3$ & 3,4\ & & &\[N/Fe\]$\simeq-0.2$, \[Si/Fe\]$\simeq0.1$ &\ ($ii$) high-metallicity &$1 \la L \la 10$ &$0.06 < X \la 0.3$ & \[C/Fe\]$\simeq\,\,\,\, 0.2$, \[O/Fe\]$\simeq0.4$ &3,4\ & & &\[N/Fe\]$\simeq-0.1$, \[Si/Fe\]$\simeq0.3$ &\ 3. Gaseous envelopes$^b$ & & & &\ of galaxies seen in & & & &\ ($i$) H[i]{} Ly$\alpha$&$\,\,\,10h^{-1} \la L \la 160h^{-1}$ & & & 5,10\ ($ii$) Mg[ii]{}$\lambda\lambda 2796, 2803$ & $15h^{-1} \la L \la 75h^{-1}$ &$X < 1$ & & 6,7\ ($iii$) C[iv]{}$\lambda\lambda 1548, 1550$ & $100h^{-1} \la L \la 180h^{-1}$ &$X < 1$ & & 8,9\ \ 4. 
Large-scale filaments & & & &\ around galaxies & $L \ga 150h^{-1}$ &$X \ll 1$ & \[Si/C\]$\simeq0.4$, \[N/C\]$\simeq-0.7$ & 10 [^1]: Data are listed in Table 5 which is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5). [^2]: The $\lambda$1550 Å component is slightly blended with the night sky lines, whereas the $\lambda1548$ Å line is clear as found from the comparison of the two spectra of the APM BR J0307–4945 taken in a two month interval.
---
abstract: 'In this short article we develop recent proposals to relate Yang-Baxter sigma-models and non-abelian T-duality. We demonstrate explicitly that the holographic space-times associated to both (multi-parameter)-$\beta$-deformations and non-commutative deformations of ${\cal N}=4$ super Yang-Mills gauge theory including the RR fluxes can be obtained via the machinery of non-abelian T-duality in Type II supergravity.'
---

[**Marginal and non-commutative deformations\ via non-abelian T-duality**]{}

[Ben Hoare$^{a}$ and Daniel C. Thompson$^{b}$]{}

[*$^{a}$ Institut für Theoretische Physik, ETH Zürich,\ Wolfgang-Pauli-Strasse 27, 8093 Zürich, Switzerland.*]{}

[*$^{b}$ Theoretische Natuurkunde, Vrije Universiteit Brussel & The International Solvay Institutes,\ Pleinlaan 2, B-1050 Brussels, Belgium.*]{}

[*E-mail:  *]{} [<bhoare@ethz.ch>, <Daniel.Thompson@vub.ac.be>]{}

Introduction {#sec:intro}
============

There is a rich interplay between the three ideas of T-duality, integrability and holography. Perhaps the most well studied example of this is the use of the TsT transformation to ascertain the gravitational dual space-times to certain marginal deformations of ${\cal N}=4$ super Yang-Mills gauge theory [@Lunin:2005jy]. Whilst this employs familiar T-dualities of $U(1)$ isometries in space-time, T-duality can be extended to both non-abelian isometry groups and to fermionic directions in superspace. Such generalised T-dualities also have applications to holography. Fermionic T-duality [@Berkovits:2008ic; @Beisert:2008iq] was critical in understanding the scattering amplitude/Wilson loop duality at strong coupling. T-duality of non-abelian isometries has been employed as a solution-generating technique in Type II supergravity [@Sfetsos:2010uq], relating for instance $AdS_5\times S^5$ to (a limit[^1] of) the space-times corresponding to ${\cal N}=2$ non-Lagrangian gauge theories.
Developing the recent results of [@Hoare:2016wsk; @Borsato:2016pas], this note will investigate further the role that generalised notions of T-duality can play in holography. A new perspective on deformations of the $AdS_5 \times S^5$ superstring has come from the study of Yang-Baxter deformations of string $\sigma$-models [@Klimcik:2002zj; @Klimcik:2008eq; @Klimcik:2014bta; @Delduc:2013fga; @Delduc:2013qra]. These are integrable algebraic constructions which deform the target space of the $\sigma$-model through the specification of an antisymmetric $r$-matrix solving the (modified) classical Yang-Baxter equation ((m)cYBE). If the $r$-matrix solves the mcYBE then, applied to the supercoset formulation of strings in $AdS_5\times S^5$ [@Metsaev:1998it; @Berkovits:1999zq], these give rise to $\eta$-deformed space-times which are conjectured to encode a quantum group $q$-deformation of ${\cal N}=4$ super Yang-Mills with a deformation parameter $q \in \mathbb{R}$ [@Delduc:2014kha; @Arutyunov:2013ega; @Arutyunov:2015qva]. However, the $\eta$-deformed worldsheet theory appears to be only globally scale invariant [@Hoare:2015gda; @Hoare:2015wia]: the target space-time does not solve exactly the Type II supergravity equations [@Arutyunov:2015qva] but rather a generalisation thereof [@Arutyunov:2015mqj]. Classically, $\eta$-deformations are related via a generalised Poisson-Lie T-duality [@Vicedo:2015pna; @Hoare:2015gda; @Sfetsos:2015nya; @Klimcik:2015gba; @Klimcik:2016rov; @Delduc:2016ihq] to a class of integrable deformations of (gauged) WZW models known as $\lambda$-deformations [@Sfetsos:2013wia; @Hollowood:2014rla; @Hollowood:2014qma], which do however have target space-times solving the usual supergravity equations of motion [@Sfetsos:2014cea; @Demulder:2015lva; @Borsato:2016zcf; @Chervonyi:2016ajp]. There is also evidence that the latter class corresponds to a quantum group deformation of the gauge theory, but with $q$ a root of unity [@Hollowood:2015dpa].
If instead the $r$-matrix solves the unmodified cYBE (a homogeneous $r$-matrix), first considered in [@Kawaguchi:2014qwa], the YB $\sigma$-models have been demonstrated to give a wide variety of integrable target space-times including those generated by TsT transformations [@Matsumoto:2014nra; @Matsumoto:2015uja; @Matsumoto:2014gwa; @Matsumoto:2015jja; @vanTongeren:2015soa; @Kyono:2016jqy; @Osten:2016dvf]. For these models the corresponding dual theory can be understood in terms of a non-commutative $\mathcal{N} = 4$ super Yang-Mills with the non-commutativity governed by the $r$-matrix and the corresponding Drinfel’d twist [@vanTongeren:2015uha; @vanTongeren:2016eeb]. Recently it has been shown that such YB $\sigma$-models can also be understood in terms of non-abelian T-duality: given an $r$-matrix one can specify a (potentially non-abelian) group of isometries of the target space with respect to which one should T-dualise [@Hoare:2016wsk]. The deformation parameter appears by first centrally extending this isometry group and then T-dualising. Following a Buscher-type procedure, the Lagrange multiplier corresponding to the central extension is non-dynamical. In particular, it is frozen to a constant value and thereby plays the role of the deformation parameter. This conjecture was proven in the NS sector in [@Borsato:2016pas], where a slightly different perspective was also given. If one integrates out only the central extension, the procedure above can be seen to be equivalent to adding a total-derivative $B$-field, constructed from a 2-cocycle on the isometry group with respect to which we dualise, and then dualising. In this note we develop this line of reasoning. We begin by outlining the essential features of Yang-Baxter $\sigma$-models and the technology of non-abelian T-duality in Type II supergravity.
After demonstrating that a centrally-extended T-duality can be reinterpreted as a non-abelian T-duality of a coset based on the Heisenberg algebra, we show how the machinery of non-abelian T-duality developed for Type II backgrounds can be readily applied to the construction of [@Hoare:2016wsk; @Borsato:2016pas]. We confirm that the centrally-extended non-abelian T-duals produce the full Type II supergravity backgrounds corresponding to $\beta$-deformations (when the duality takes place in the $S^5$ factor of $AdS_5\times S^5$), non-commutative deformations (when performed in the Poincaré patch of $AdS_5$) and dipole deformations (when performed in both the $S^{5}$ and $AdS_5$ simultaneously). In appendices \[app:sugra\] and \[app:algconv\] we outline our conventions for supergravity and certain relevant algebras respectively. In a third appendix, \[app:furtherexamples\], we include some additional worked examples, including one for which the non-abelian T-duality is anomalous and the target space solves the generalised supergravity equations. The supergravity backgrounds in this note have appeared in the literature in the past, but the derivation and technique presented here are novel and simple and, we hope, may have utility in the construction of more general supergravity backgrounds.

Yang-Baxter sigma-models {#sec:yangbaxter}
========================

Given a semi-simple Lie algebra $\mathfrak{f}$ (and corresponding group $F$) we define an antisymmetric operator $R$ obeying $$\label{eq:cybe} [R X , R Y] - R\left([R X, Y]+ [X,RY] \right) = c [ X, Y] \ , \quad X,Y \in \mathfrak{f} \ ,$$ where the cases $c=\pm 1$ and $c=0$ are known as the modified classical and classical Yang-Baxter equations (mcYBE and cYBE) respectively. We adopt the notation $X\wedge Y = X\otimes Y - Y \otimes X$ and define e.g.
$$r= T_1 \wedge T_2 + T_3 \wedge T_4 +\dots \ , \quad RX = {\operatorname{Tr}}_2 ( r (\mathbb{I}\otimes X)) \ .$$ We define an inner product by the matrix trace of generators, ${\operatorname{Tr}}(T_{A} T_{B})$, and lower and raise indices with this inner product and its inverse. In this way the $r$-matrix acts as $$\label{eq:rmatdef} R(T_{A}) \equiv R_{A}{}^{B}T_{B} \ , \quad R_{A}{}^{B} = {\operatorname{Tr}}\left( {\operatorname{Tr}}_2 ( r (\mathbb{I}\otimes T_{A})) T^{B} \right) \ .$$ Suppose we have a $\mathbb{Z}_{2}$ grading $\mathfrak{f} = \mathfrak{g}\oplus \mathfrak{k}$ for a subalgebra $\mathfrak{g}$. Let $T_{A}$ be generators for $\mathfrak{f}$, $T_{\alpha}$ those of $\mathfrak{g}$ and $T_{i}$ the remaining orthogonal generators of $\mathfrak{k}$. We introduce a projector to the coset defined by $P(T_{\alpha})= 0$ and $P(T_{i})= T_{i}$ or, in matrix form, $$P(T_{A}) \equiv P_{A}{}^{B}T_{B} \ , \quad P_{A}{}^{B} = {\operatorname{Tr}}\left( P( T_{A}) T^{B} \right) \ .$$ We also define the adjoint action for $g\in F$ by $${\operatorname{Ad}}_{g} T_{A} \equiv gT_{A} g^{-1} \equiv D_{A}{}^{B}(g) T_{B} \ , \quad D_{AB} = {\operatorname{Tr}}(g T_{A} g^{-1} T_{B} ) \ .$$ Let the two-dimensional worldsheet field $g$ be a coset representative for $F/G$ with which we define the currents $$J_\pm = J_{\pm}^{A}T_{A} = g^{-1} \partial_{\pm} g \ , \quad J_{\pm}^{A}= {\operatorname{Tr}}(g^{-1} \partial_{\pm} g T^{A}) \ ,$$ where we use light-cone coordinates $\partial_\pm = \partial_0 \pm \partial_1$.
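The partial-trace definition of $R$ can be checked concretely on a small example. The sketch below is our own illustration, not drawn from the references: it realises $\mathfrak{sl}(2,\mathbb{R})$ with the standard jordanian $r$-matrix $r = H \wedge E$, and verifies that the resulting operator solves the homogeneous ($c=0$) cYBE on the whole algebra.

```python
import numpy as np

# Sketch (our example): sl(2,R) generators and the jordanian r-matrix
# r = H ^ E, a standard solution of the c = 0 classical Yang-Baxter equation.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])

def wedge(X, Y):
    # X ^ Y = X (x) Y - Y (x) X as a matrix on the tensor-product space
    return np.kron(X, Y) - np.kron(Y, X)

def R_of(r, X, dim=2):
    # R X = Tr_2( r (I (x) X) ): partial trace over the second tensor slot
    M = (r @ np.kron(np.eye(dim), X)).reshape(dim, dim, dim, dim)
    return np.trace(M, axis1=1, axis2=3)

r = wedge(H, E)
comm = lambda X, Y: X @ Y - Y @ X

def cybe_lhs(X, Y):
    # left-hand side of the (m)cYBE, [RX, RY] - R([RX, Y] + [X, RY])
    RX, RY = R_of(r, X), R_of(r, Y)
    return comm(RX, RY) - R_of(r, comm(RX, Y) + comm(X, RY))

# here R(H) = -2E, R(E) = 0, R(F) = H, and the cYBE holds with c = 0
for X in (H, E, F):
    for Y in (H, E, F):
        assert np.allclose(cybe_lhs(X, Y), 0.0)
```

For contrast, running the same check with $r = E \wedge F$ instead returns $-[X,Y]$, i.e. a solution of the modified equation with $c=-1$.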
The standard (bosonic) $\sigma$-model whose target is the coset space $F/G$ is $$\label{eq:cosetPCM} {\cal L } = {\operatorname{Tr}}(J_+ P(J_-) ) \ .$$ To define the Yang-Baxter model first we let $$R_{g} = {\operatorname{Ad}}_{g^{-1}} R {\operatorname{Ad}}_{g} \ , \quad (R_{g})_{A}{}^{B} = D(g)_{A}{}^{C}R_{C}{}^{D}D(g^{-1})_{D}{}^{B} \ ,$$ and define the operator $${\cal O} = \mathbb{I} - \eta R_{g}P \ , \quad {\cal O} _{A}{}^{B} = \delta_{A}{}^{B} - \eta P_{A}{}^{C} (R_{g})_{C}{}^{B} \ ,$$ in which we have explicitly introduced the deformation parameter $\eta$. Later we will restrict to the case $c=0$ in , in which case the parameter $\eta$ can be absorbed into the definition of $R$. The Yang-Baxter $\sigma$-model on a coset is given by [@Delduc:2014kha; @Matsumoto:2014nra] $${\cal L } = {\operatorname{Tr}}(J_+ P( {\cal O}^{-1}J_-) ) = J_{+}^{A} E_{AB} J_{-}^{B} \ , \quad E_{AB} = {\cal O}^{-1}{}_{B}{}^{C} P_{CA} \ .$$ Non-abelian T-duality Technology {#sec:natd} ================================ In this section we will mainly follow the approach of [@Sfetsos:2010uq; @Lozano:2011kb; @Itsios:2013wd] including the transformation of RR fluxes. Some subtleties are caused by the dualisation in a coset space and the approach here is slightly different to the one in [@Lozano:2011kb]. Let us consider the standard (bosonic) $\sigma$-model whose target is the coset space $F/G$, whose Lagrangian is given in eq. , and perform the non-abelian T-dual with respect to a subgroup $H \subset F$ (which need not be, and in our applications mostly will not be, either semi-simple or a subgroup of $G$). Let $H_a$ be the generators of $\mathfrak{h}$ and $\tilde{H}^a$ generators of a dual algebra $\mathfrak{h}^\star$ normalised such that ${\operatorname{Tr}}(H_a \tilde{H}^b)= \delta^b_a$. Let us parametrise the coset representative as $g= h \hat{g}$.
We define $\hat{J}= \hat{g}^{-1} d \hat{g}$ and $L = L^a H_a = h^{-1}dh$ such that $$J= \hat{J}+ L^a H_a^{\hat{g} }\ , \quad H_a^{\hat{g} } ={\operatorname{Ad}}_{\hat{g}^{-1}} H_a \ .$$ We also define $$G_{ab} = {\operatorname{Tr}}( H^{\hat{g}}_{a} P( H^{\hat{g}}_{b} ) ) \ , \quad Q_{a} = {\operatorname{Tr}}( \hat{J} P( H^{\hat{g}}_{a} ) ) \ .$$ In this notation the $H$ isometry of the target space is manifest since the metric corresponding to eq. is $$ds^2 = {\operatorname{Tr}}( \hat{J} P(\hat{J} )) + 2 Q^T L + L^T G L = {\operatorname{Tr}}( \hat{J} P(\hat{J} )) - Q^TG^{-1} Q + e^T e \ ,$$ where we introduce the frame fields $$G= \kappa^T \kappa \ , \quad e= \kappa \left( L + G^{-1} Q \right) .$$ We perform the dualisation by introducing a $\mathfrak{h}$-valued connection with components $A_{\pm} = A_{\pm}^{a}H_{a}$ and a $\mathfrak{h}^\star$-valued Lagrange multiplier $V= v_{a} \tilde{H}^{a}$. We covariantise the currents $$J^{\nabla}_{\pm} = g^{-1} d g + g^{-1} A_{\pm } g \ ,$$ such that we are gauging a left action of some $\tilde{h} \in H$ $$g \rightarrow \tilde{h} g \ , \quad A \rightarrow \tilde{h} A \tilde{h} ^{-1} - d \tilde{h} \tilde{h}^{-1} \ ,$$ and consider $${\cal L }^{ \nabla} = {\operatorname{Tr}}(J^{\nabla}_+ P(J^{\nabla}_-) ) + {\operatorname{Tr}}(V F_{+ -} ) \ ,$$ where the field strength is $F_{+-} = \partial_{+} A_{-} - \partial_{-} A_{+} + [A_{+} , A_{-}]$. We continue by gauge fixing on the group element $g = \hat{g}$ i.e. $h=1$.[^2] Integrating out the Lagrange multipliers enforces a flat connection, whose solution is pure gauge, $$\label{eq:puregauge} A_\pm = h^{-1}\partial_\pm h = L_\pm \ ,$$ and upon substituting back into the action one recovers the starting $\sigma$-model.
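The rewriting of the metric above is just completing the square in $L$; it can be verified symbolically. A minimal sketch with a generic symmetric $2\times 2$ matrix $G$ and arbitrary vectors $L$, $Q$ (symbol names are ours):

```python
import sympy as sp

# Completing the square: 2 Q^T L + L^T G L = -Q^T G^{-1} Q + e^T e
# with e = kappa (L + G^{-1} Q) and G = kappa^T kappa, so that
# e^T e = (L + G^{-1} Q)^T G (L + G^{-1} Q).
g11, g12, g22 = sp.symbols('g11 g12 g22')
l1, l2, q1, q2 = sp.symbols('l1 l2 q1 q2')
G = sp.Matrix([[g11, g12], [g12, g22]])   # symmetric, generically invertible
L = sp.Matrix([l1, l2])
Q = sp.Matrix([q1, q2])

lhs = (2 * Q.T * L + L.T * G * L)[0]
e_sq = ((L + G.inv() * Q).T * G * (L + G.inv() * Q))[0]
rhs = (-Q.T * G.inv() * Q)[0] + e_sq

assert sp.simplify(lhs - rhs) == 0
```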
On the other hand, integrating by parts the derivative terms of the gauge fields yields $${\cal L }^{ \nabla} = {\operatorname{Tr}}(\hat{J}_{+} P(\hat{J}_-) ) + A_{+}^{a}A_{-}^{b} M_{ab} + A_{+}^{a}( \partial_{-} v_{a}+ Q_{-a} ) - A_{-}^{a}( \partial_{+} v_{a} - Q_{+a} ) \ ,$$ in which we have pulled back the one-forms $Q$ and $\hat{J}$ to the worldsheet and defined $$\begin{aligned} F_{ab} &= {\operatorname{Tr}}([H_{a} ,H_{b}]V) = f_{ab}{}^{c} v_{c} \ , \quad M_{ab} =G_{ab} + F_{ab} \ . \end{aligned}$$ The gauge field equations of motion now read $$\label{eq:gauge} A_{-} = - M^{-1} ( \partial_{-} v + Q_{-} ) \ , \quad A_{+} = M^{-T} ( \partial_{+} v - Q_{+ } ) \ .$$ Combining these equations of motion for the gauge field in eqs.  and sets up the canonical transformation between T-dual theories. Substitution of the gauge field equation of motion into the action yields the T-dual model given by $$\begin{split}\label{eq:dualmodellag} {\cal L }_{dual} & = {\operatorname{Tr}}(\hat J_+ P (\hat J_-)) - A_+^T M A_- \\ & = {\operatorname{Tr}}(\hat{J}_{+} P(\hat{J}_-) ) + ( \partial_{+} v_{a} - Q_{+a} )(M^{-1})^{ab} ( \partial_{-} v_{b} + Q_{-b} ) \ , \end{split}$$ where in the first line $A_\pm$ are evaluated on the gauge field equation of motion eq. . The NS fields can be read directly from this $\sigma$-model and in particular the dual metric is given as $$\widehat{ds}^2 = {\operatorname{Tr}}( \hat{J} P(\hat{J} )) - Q^TG^{-1} Q + \widehat{e}_\pm^T \widehat{e}^{\vphantom{T}}_\pm \ ,$$ with $\widehat{e}_\pm$ given by the push forwards to target space of $$\label{eq:dualframe} \widehat{e}_\pm = \kappa \left( A_\pm + G^{-1} Q_\pm \right) \ ,$$ evaluated on the gauge field equation of motion . On the worldsheet left and right moving fermionic sectors couple to the frame fields $\widehat{e}_{+}$ and $\widehat{e}_-$ respectively. 
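The substitution of the gauge field equations of motion back into the gauged action can be spot-checked numerically. The sketch below (our own check, with random placeholder data for $M$, $Q_\pm$ and $\partial_\pm v$) confirms that the terms involving $A_\pm$ collapse to the quoted dual Lagrangian:

```python
import numpy as np

# Check: with A- = -M^{-1}(d-v + Q-) and A+ = M^{-T}(d+v - Q+),
# A+^T M A- + A+ . (d-v + Q-) - A- . (d+v - Q+)
# equals (d+v - Q+)^T M^{-1} (d-v + Q-).
rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n))                           # M = G + F, generic
dvp, dvm = rng.normal(size=n), rng.normal(size=n)     # d+v, d-v
Qp, Qm = rng.normal(size=n), rng.normal(size=n)       # Q+, Q-

Am = -np.linalg.solve(M, dvm + Qm)
Ap = np.linalg.solve(M.T, dvp - Qp)

L_gauged = Ap @ M @ Am + Ap @ (dvm + Qm) - Am @ (dvp - Qp)
L_dual = (dvp - Qp) @ np.linalg.inv(M) @ (dvm + Qm)
assert np.isclose(L_gauged, L_dual)
```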
Since they define the same metric they are related by a local Lorentz rotation $$\label{eq:Lorentztrans} \Lambda \widehat{e}_{-} = \widehat{e}_{+} \ , \quad \Lambda = -\kappa M^{-T}M \kappa^{-1}$$ This Lorentz rotation lifts to spinors via $$\label{eq:Spinortrans} \Omega^{-1}\Gamma^a \Omega= ( \Lambda\cdot\Gamma)^a \ .$$ Using the Clifford isomorphism we convert the poly-form sum of RR fluxes $$\label{eq:Polyform} {\cal P}= e^{\Phi} ( F_{1 } + F_{3} + F_{5} - \star F_3 + \star F_1 ) \ ,$$ to a bi-spinor matrix. The T-duality rule is then given by $$\widehat{{\cal P}} ={\cal P} \cdot \Omega^{-1} \ .$$ The relationship between the local Lorentz rotations and RR field transformation in the case of abelian T-duality in curved space was made explicit in the work of Hassan [@Hassan:1999bv] and developed in the present context in [@Sfetsos:2010uq]. Note that although we have “bootstrapped” the transformation rule for the RR sector from knowledge of the NS sector it seems rather likely that the same conclusion can be reached in e.g. the pure spinor superstring by a straightforward extension of the arguments presented for abelian [@Benichou:2008it] and fermionic T-duality [@Sfetsos:2010xa].[^3] Finally let us turn to the transformation of the dilaton field under non-abelian T-duality. For the non-abelian duality to preserve conformality the dualisation procedure must avoid the introduction of a mixed gravitational-gauge anomaly [@Alvarez:1994np; @Elitzur:1994ri] and the structure constants of the algebra in which we dualise should satisfy $$n_a \equiv f_{ab}{}^b = 0 \ .$$ When this is the case the dual dilaton comes from the Gaussian integration in the path integral [@Buscher:1987qj] $$\label{eq:diltrans} \widehat{\Phi} = \Phi - \frac{1}{2}\log \det M \ .$$ On the other hand if $n_a \neq 0$ the dual model is not expected to be conformal, however it will be globally scale invariant. In this case we still define the dual “dilaton” to be given by . 
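The mechanics of the spinorial lift can be illustrated in two Euclidean dimensions, where an even element $\Omega = \cos t\,\mathbb{I} - \sin t\,\Gamma^{12}$ conjugates the gamma matrices into a rotation. A sketch (our own, using Pauli matrices as the gammas):

```python
import numpy as np

# Take Gamma^1 = sigma_1, Gamma^2 = sigma_2 in 2 Euclidean dimensions, and
# extract Lambda^a_b from Omega^{-1} Gamma^a Omega = Lambda^a_b Gamma^b
# via Lambda^a_b = (1/2) Tr( Omega^{-1} Gamma^a Omega Gamma^b ).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
G12 = s1 @ s2
t = 0.3
Omega = np.cos(t) * np.eye(2) - np.sin(t) * G12

gammas = [s1, s2]
Lam = np.array([[0.5 * np.trace(np.linalg.inv(Omega) @ Ga @ Omega @ Gb)
                 for Gb in gammas] for Ga in gammas]).real

assert np.allclose(Lam @ Lam.T, np.eye(2))   # Lambda is orthogonal
assert np.isclose(np.linalg.det(Lam), 1.0)   # with unit determinant
```

The conjugation doubles the angle: $\Lambda$ is the rotation by $2t$, which is the usual two-to-one relation between the spin group and the rotation group.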
The global scale invariance then implies that, for example, the one-loop metric and $B$-field beta-functions (defined in of appendix \[app:sugra\]) only vanish up to diffeomorphisms and gauge transformations. This is in contrast to the conformal case, for which the beta-functions of the metric, $B$-field and dilaton vanish identically, while the RR fluxes solve the first order equations in eq. of appendix \[app:sugra\]. It transpires that the globally scale invariant models that arise from dualising with $n_a \neq 0$ satisfy a stronger set of equations than those of global scale invariance [@Hoare:2016wsk]. These are a modification of the Type II supergravity equations [@Arutyunov:2015mqj; @Wulff:2016tju; @Sakatani:2016fvh] that depend on a particular Killing vector $I$ of the background such that when $I = 0$ standard Type II supergravity is recovered. These equations are given in eqs. of appendix \[app:sugra\]. As mentioned above we take the dual “dilaton” field in these equations to still be defined in terms of the original dilaton via the transformation , while the one-forms $X$, $Z$ and $W$ are defined in terms of $\Phi$ and the Killing vector $I$ as in eq. of appendix \[app:sugra\]. To show that the dual background solves the modified supergravity equations we follow the derivation in [@Hoare:2016wsk]. After splitting the Lagrange multiplier as $v_a = u_a + y n_a$, it transpires that shifting $y$ is a symmetry of the dual background and T-dualising $y \to \tilde y$ gives a conformal $\sigma$-model with a dilaton linear in $\tilde y$. From the results of [@Arutyunov:2015mqj] this then implies that, in our conventions, the dual model solves the modified supergravity equations with $I^y = - 1$. 
The classical bosonic string Lagrangian in conformal gauge, $$\label{eq:cbsacg} \mathcal{L} = \partial_+ x^m (G_{mn} + B_{mn}) \partial_- x^n \ ,$$ has the property that when we replace $\partial_- x^m \to I^m$ it equals $W_n \partial_+ x^n$ where the one-form $W$, defined in eq. , is given by $$W_n = I^m (G_{mn} - B_{mn}) \ .$$ Following this procedure in the dual model with $I^y = - 1$ and the remaining components vanishing, we find that $$\label{eq:Ampush} W_n \partial_+ x^n = - A_+^a n_a \ ,$$ with $A_+$ evaluated on the gauge field equation of motion . To summarise: if the T-duality is anomalous then the background solves the modified supergravity equations with the one-form $W$, which can be used to define the modification, given by the push forward of the $A_+$ component of the gauge field evaluated on its equations of motion. Centrally-extended duality {#sec:centralext} ========================== Let us now consider non-abelian T-dualities with respect to centrally-extended algebras. In particular we consider the setup of [@Hoare:2016wsk; @Borsato:2016pas] in which case the dualities are equivalent to Yang-Baxter deformations for homogeneous $r$-matrices. The aim of this section is to extend this to the RR fluxes using the technology outlined in section \[sec:natd\]. We start by recalling that for a homogeneous $r$-matrix for a Lie algebra $\mathfrak{f}$ $$\label{eq:rmatans} r= \sum_{j} \eta_j \, \big( \sum_{i=1}^{n(j)} a_{ij} \, X_{ij} \wedge Y_{ij} \big) \ ,$$ the generators $\{X_{ij},Y_{ij}\}$ (for each fixed $j$) form a basis for a subalgebra $\mathfrak{h}$, which admits a central extension. In eq. $\eta_{j}$ are free parameters, while $a_{ij}$ are fixed real coefficients.
For each free parameter we introduce a central extension, such that the centrally-extended algebra has a basis $\mathfrak{h}^{ext} = \{X_{ij}, Y_{ij}\} \oplus \{Z_j\}$, with commutation relations $[X_{ij},Y_{ij}]^{ext} = [X_{ij},Y_{ij}] + a_{ij}^{-1} Z_j$ (for fixed $i$ and $j$), and $[X_{ij},Z_j]^{ext} = [Y_{ij},Z_j]^{ext} = 0$. This is the centrally-extended algebra with respect to which we dualise. The precise relation between the centrally-extended non-abelian T-dual and the Yang-Baxter deformation was established in the NS sector in [@Borsato:2016pas]. The $R$-operator (see eq. ) governing a certain Yang-Baxter deformation defines an invertible map from $\mathfrak{h}^\star$ to $\mathfrak{h}$. Recalling our parametrisation of the $F/G$ coset representative $g= h \hat{g}$ with $h \in H$, we may write $h = \exp(R(X))$ for $X\in \mathfrak{h}^\star$. If $\mathfrak{h}$ is abelian then the relation between the Lagrange multipliers parametrising the T-dual model and the YB deformed model is simple: $V= \eta^{-1} R(X)$. When $\mathfrak{h}$ is non-abelian the relation is more involved [@Borsato:2016pas]. One can formally set up the non-abelian T-dual of the central extension by considering the coset of the centrally-extended algebra by the central generators. To see this precisely let us consider the Heisenberg algebra, i.e. the central extension of $U(1)^2$ $$[X, Y ]= Z \ , \quad [X , Z] = [ Y,Z ] = 0 \ .$$ We let $T_{1}= X, T_{2}= Y$ and $T_{3} = Z$ and hence the only non-vanishing structure constant is $f_{12}{}^{3}=1$.
We introduce the matrix generators $$T_{1} = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right) \ , \quad T_{2} = \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{array} \right) \ , \quad T_{3 }= \left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right) \ ,$$ and the group element $$g= \exp \left[ x_{1} T_{1} + x_{2}T_{2} + (x_{3}-\frac{1}{2} x_{1}x_{2} ) T_{3}\right] = \left( \begin{array}{ccc} 1& x_{1} &x_{3} \\ 0 &1 & x_{2}\\ 0 & 0 & 1 \\ \end{array} \right) \ .$$ The left-invariant one-forms $g^{-1}dg= L^{i} T_{i}$ are $$L^{1} = dx_{1} \ , \quad L^{2 } = dx_{2} \ , \quad L^{3} = dx_{3} - x_{1} dx_{2} \ .$$ We consider a $\sigma$-model based on this algebra $$\label{eq:Heisenberg} {\cal L} = E_{ab} L_+^a L_-^b = f_1 L^{1}_+ L^{1}_- + f_2 L^{2}_+ L^{2}_- + \lambda L^{3}_+ L^{3}_- \ ,$$ i.e. $E = \operatorname{diag} (f_1,f_2, \lambda)$, where we allow $f_{1,2}$ to be functions of any spectator coordinates. In the limit $\lambda \rightarrow 0$ the theory develops a gauge invariance (the coordinate $x_3$ drops out of the action altogether) and reduces to the $\sigma$-model whose target space is simply $ds^2 = f_1 dx_1^2 + f_2 dx_2^2$. This Rube Goldberg construction allows us now to go ahead and perform a non-abelian T-duality on the coset following the techniques of [@Lozano:2011kb]. The resulting dual $\sigma$-model is given by $$\mathcal{L}_{dual} = \partial_+ v_a (M^{-1})^{ab} \partial_- v^b$$ in which $$\begin{split} M_{ab} = E_{ab} + f_{ab}{}^c v_c & = \left( \begin{array}{ccc} f_1 & v_3 & 0 \\ -v_3 & f_2 & 0 \\ 0 & 0 & \lambda \end{array} \right) \ , \\ (M^{-1})^{ab} & = \left( \begin{array}{ccc} h f_2 & - h v_3 & 0 \\ h v_3 & h f_1 & 0 \\ 0 & 0 & \frac{1}{\lambda} \end{array} \right) \ , \quad h= \frac{1}{f_1 f_2 + v_3^2}\ . \end{split}$$ The matrix $M^{-1}$ diverges in the limit of interest $\lambda \to 0$.
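Before taking the limit, both the matrix realisation of the Heisenberg algebra and the displayed inverse of $M$ can be checked symbolically (a short sketch, using sympy):

```python
import sympy as sp

# Check the matrix realisation of the Heisenberg algebra, [X, Y] = Z with
# Z central, and the displayed inverse of M = E + f_{ab}^c v_c.
T1 = sp.Matrix([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
T2 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
T3 = sp.Matrix([[0, 0, 1], [0, 0, 0], [0, 0, 0]])
assert T1 * T2 - T2 * T1 == T3            # [T1, T2] = T3
assert T1 * T3 - T3 * T1 == sp.zeros(3)   # T3 is central
assert T2 * T3 - T3 * T2 == sp.zeros(3)

f1, f2, lam, v3 = sp.symbols('f1 f2 lambda v3')
M = sp.Matrix([[f1, v3, 0], [-v3, f2, 0], [0, 0, lam]])
h = 1 / (f1 * f2 + v3**2)
Minv = sp.Matrix([[h * f2, -h * v3, 0], [h * v3, h * f1, 0], [0, 0, 1 / lam]])
assert sp.simplify(M.inv() - Minv) == sp.zeros(3, 3)
```

The lone $1/\lambda$ entry of $M^{-1}$ is the term that blows up as $\lambda \to 0$, which is the divergence discussed in the text.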
In particular, the coefficient of the kinetic term for $v_3$ becomes infinite in the limit and this can be understood as freezing $v_3$ to a constant value. To see this let us rewrite the dual $\sigma$-model as $$\mathcal{L}_{dual} = \partial_+ v_\alpha (M^{-1})^{\alpha\beta} \partial_- v^\beta + \lambda a_+ a_- + a_+ \partial_-v_3 - a_- \partial_+ v_3 \ , \quad \alpha,\beta = 1,2 \ ,$$ where we integrate over $a_\pm$. Now taking $\lambda \to 0$ and then integrating out $a_\pm$ we find $\partial_\pm v_3 = 0$ and indeed $v_3$ is frozen to a constant value. This final step is analogous to the Buscher procedure considered in [@Hoare:2016wsk]. The true target space of the dual model is then spanned by the coordinates $v_1 \equiv y_2$ and $v_2 \equiv y_1$, while $v_3 \equiv \nu$ is a constant parameter. The dual metric, B-field and dilaton shift are easily ascertained: $$\label{eq:eq1} \widehat{ds}^2 = h ( f_1 dy_1^2 + f_2 dy_2^2 ) \ , \quad \widehat{B} = \nu h dy_1 \wedge dy_2 \ , \quad \widehat{\Phi} = \Phi + \frac{1}{2}\log h \ .$$ Frame fields for the dual geometry as seen by left and right movers [@Lozano:2011kb] are given by $$\label{eq:eq2} \widehat{e}_{+}^{\, i} = (\kappa\cdot M^{-1})^{a i} d v_a \ , \quad \widehat{e}_{-}^{\,i} = - (\kappa \cdot M^{-1})^{ i a } dv_a \ , \quad i=1,2 \ , \quad a = 1,2,3 \ ,$$ where $\frac{1}{2} (E+ E^T)= \kappa^T \kappa$.
Explicitly we have $$\label{eq:eq3} \widehat{e}_+ = \left(\begin{array}{c} h \sqrt{f_1} (f_2 dy_2 + \nu dy_1) \\ h \sqrt{f_2}(f_1 dy_1 - \nu d y_2) \end{array}\right) \ , \quad \widehat{e}_- = \left(\begin{array}{c} h \sqrt{f_1} (-f_2 dy_2 + \nu dy_1) \\ h \sqrt{f_2}( -f_1 dy_1 - \nu d y_2) \end{array}\right) \ .$$ The plus and minus frames are then related by a Lorentz rotation $$\label{eq:eq4} \Lambda \cdot \widehat{e}_- = \widehat{e}_+ \ , \quad \Lambda = h \left(\begin{array}{cc} \nu^2 -f_1f_2 & - 2 \nu \sqrt{f_1 f_2} \\ 2\nu\sqrt{f_1 f_2} & \nu^2 - f_1 f_2 \end{array}\right) \ , \quad \det \Lambda = 1 \ , \quad \Lambda \cdot \Lambda^T = \mathbb{I} \ .$$ This coset-based construction is interesting; however, for calculational purposes it is enough to follow the T-duality rules for the non-centrally-extended dualisation, while replacing the structure constants entering the $\dim H \times \dim H$ matrix $F_{ab}= {\operatorname{Tr}}([ H_{a}, H_{b}] V)$ with the corresponding central extension and the centrally-extended Lagrange multipliers i.e. $V^{ext} = v_{a} H^{a} + v_{\mu }Z^{\mu}$ and $F^{ext}_{ab}= {\operatorname{Tr}}([ H_{a}, H_{b}]^{ext} V^{ext})$. Applications {#sec:examples} ============ Let us now turn to specific examples for which we construct the dual RR fluxes corresponding to various centrally-extended non-abelian T-dualities of $AdS_5 \times S^5$ using the technology outlined in section \[sec:natd\]. Here we will consider certain deformations that are well-known to correspond to TsT transformations. In appendix \[app:furtherexamples\] we consider further examples that correspond to Yang-Baxter deformations with time-like abelian and non-abelian $r$-matrices.
Application 1: Non-Commutative Deformations {#ssec:app1} ------------------------------------------- The first application we consider is the string background dual to non-commutative $\mathcal{N} = 4$ super Yang-Mills [@Hashimoto:1999ut; @Maldacena:1999mh] $$\begin{aligned} \nonumber ds^2 &= \frac{du^2}{u^2} + u^2 \left( -dt^2 + dx_1^2 + \tilde h (dx^2_2 + dx^2_3) \right) + d\Omega_5^2 \ , \quad \tilde h = \frac{1}{1+ a^4 u^4} \ , \\ \label{eq:mrback} B &= a^2 \tilde h u^4 dx_2 \wedge dx_3 \ , \quad \exp 2 \Phi = g_0^2 \tilde h \ , \\ \nonumber F_3 &= -\frac{4}{g_0} a^2 u^3 dt\wedge dx_1 \wedge du \ , \quad F_5= \frac{4}{g_0} \tilde h u^3 (1+\star) \, du \wedge dt \wedge dx_1 \wedge dx_2 \wedge dx_3 \ .\end{aligned}$$ Starting from the undeformed background $$\label{eq:undefads5} \begin{aligned} ds^2 &= \frac{du^2}{u^2} + u^2 \left( -dt^2 + dx_1^2 + dx^2_2 + dx^2_3 \right) + d\Omega_5^2 \ , \quad \exp 2 \Phi = g_0^2 \ , \\ F_5& = \frac{4}{g_0} u^3 (1+\star) \, du \wedge dt \wedge dx_1 \wedge dx_2 \wedge dx_3 \ , \end{aligned}$$ we now consider the non-abelian T-dual with respect to the central extension of $U(1)^2$, where the $U(1)^2$ is generated by shifts in $x_2$ and $x_3$. Using eqs. – with $y_1 = \frac{x_3}{a^2}$, $y_2 = \frac{x_2}{a^2}$, $f_1 = f_2 = u^2$ and setting the deformation parameter $\nu=a^{-2}$ we find that the plus and minus frames are given by $$\widehat e_+ = \left(\begin{array}{c} \frac{hu}{a^4} ( a^2 u^2 dx_2 + dx_3) \\ \frac{hu}{a^4}( a^2 u^2 dx_3 - dx_2) \end{array}\right) \ , \quad \widehat e_- = \left(\begin{array}{c} \frac{hu}{a^4} (- a^2 u^2 dx_2 + dx_3) \\ \frac{hu}{a^4}(- a^2 u^2 dx_3 - dx_2) \end{array}\right) \ , \quad h = \frac{a^4}{1+a^4 u^4} \ .$$ The Lorentz rotation of induces a spinorial action according to given by $$\Omega = \sqrt{\frac{h}{a^4}} \left( \mathbb{I} - a^2 u^2 \Gamma^{23} \right) \ .$$ Now let us consider the duality transformation of the five-form RR flux supporting the $AdS_5 \times S^5$ geometry . 
The self-dual five-form flux can be written as $F_5 = (1+\star) f_5$, where $$f_5 = \frac{4}{g_0}u^3 du \wedge dt \wedge d x_1\wedge dx_2 \wedge dx_3 \equiv \frac{4}{g_0} e^u \wedge e^0 \wedge e^1 \wedge e^2 \wedge e^3 \ .$$ The corresponding poly-form of eq.  is then given by $${\cal P} = 4 \Gamma^{u 0 1 2 3} - 4 \Gamma^{56789} \ .$$ The transformation of the poly-form under T-duality is given by $$\widehat{{\cal P}}= {\cal P} \cdot \Omega^{-1} = 4\sqrt{\frac{h}{a^4}} \Gamma^{u 0 1 2 3} - 4\sqrt{\frac{h}{a^4}}a^2 u^2 \Gamma^{u 0 1 } + \text{duals} \ .$$ Extracting the dual background from the above data we find $$\label{eq:mrback2} \begin{aligned} \widehat{ds}^2 &= \frac{du^2}{u^2} + u^2 \big( -dt^2 + dx_1^2 + \frac{h}{a^4} ( dx_2^2 + dx_3^2 ) \big) + d\Omega_5^2 \ , \\ \widehat{B}&= -a^{-2}\frac{h}{a^4} dx_2 \wedge dx_3 \ , \quad \exp(2\widehat{\Phi}) = (g_0a^2)^2 \frac{h}{a^4} \ , \\ \widehat{F}_3&= -\frac{4}{g_0 a^2} a^2 u^3 du \wedge dt \wedge dx_1 \ , \quad \widehat{F}_5 = \frac{4}{g_0 a^2} \frac{h}{a^4} u^3 (1+\star)\, du \wedge dt \wedge dx_1 \wedge dx_2 \wedge dx_3 \ . \\ \end{aligned}$$ Noting that $\tilde h = a^{-4}h$, we then immediately see that this is precisely the background up to the constant shift of the dilaton $g_0 \to g_0 a^{-2}$. A small subtlety is that while there is precise agreement between $H = dB$ in and , the $B$-field itself differs by a gauge: $$\widehat{B} =- a^{-2} \frac{1}{1+a^4 u^4} dx_2 \wedge dx_3 = -a^{-2}dx_2 \wedge dx_3 + \frac{ a^2 u^4}{1+a^4 u^4}dx_2 \wedge dx_3 \ .$$ This is always the case in these comparisons [@Hoare:2016wsk; @Borsato:2016pas] and from now on by agreement we always mean up to a gauge term in the $B$-field. Application 2: Marginal Deformations {#ssec:app2} ------------------------------------ ${\cal N}=4$ super Yang-Mills with gauge group $SU(N)$ admits a class of marginal deformations that preserve ${\cal N}= 1$ supersymmetry [@Leigh:1995ep].
The corresponding superpotential for these theories is $$W = \kappa {\operatorname{Tr}}\Big( \Phi_1 [\Phi_2, \Phi_3]_q + \frac{h}{4}\big( \sum_{i=1}^3 \Phi_i^2 \big) \Big) \ ,$$ in which the commutator is $q$-deformed i.e. $[\Phi_i, \Phi_j]_{q} = \Phi_i \Phi_j - q \Phi_j \Phi_i$. For the case where $h=0$ and $q= e^{i \beta}$ with $\beta$ real, known as the $\beta$-deformation, the seminal work of Lunin and Maldacena [@Lunin:2005jy] provides the gravitational dual background constructed via a TsT solution generating technique consisting of a sequence of T-duality, coordinate shift and T-duality. In this case integrability has been shown on both the string [@Frolov:2005ty; @Frolov:2005dj; @Alday:2005ww] and gauge side [@Roiban:2003dw; @Berenstein:2004ys; @Frolov:2005ty; @Beisert:2005if] of the AdS/CFT correspondence. The cubic deformation ($q=1$ and $h \neq 0$) is far less understood, with integrability not expected and, as of now, no known complete gravitational dual constructed. A more general class of non-supersymmetric deformations[^4] of this gauge theory is defined by a scalar potential $$V= {\operatorname{Tr}}\Big( |[\Phi_1, \Phi_2]_{q_3}|^2 + |[\Phi_2, \Phi_3]_{q_1}|^2 + |[\Phi_3, \Phi_1]_{q_2}|^2 \Big) + {\operatorname{Tr}}\Big( \sum_{i=1}^3 [\Phi_i , \bar{\Phi}_i] \Big)^2 \ ,$$ where $q_i = e^{-2\pi i \gamma_i}$. This three-parameter deformation, known as the $\gamma$-deformation, enjoys integrability both in the gauge theory [@Frolov:2005iq] and in the worldsheet $\sigma$-model with the target space given by the postulated gravitational dual background constructed in [@Frolov:2005dj]. Upon setting all three deformation parameters equal this reduces to the $\beta$-deformation with enhanced ${\cal N}=1$ supersymmetry and hence we will proceed with the general case. Rather remarkably the string $\sigma$-model in the $\gamma$-deformed target space can be obtained as Yang-Baxter $\sigma$-model [@Kyono:2016jqy; @Osten:2016dvf].
Let us consider the bosonic sector, restricting our attention to the five-sphere of $AdS_5\times S^5$; the $AdS$ factor plays no role in what follows. It is convenient to follow [@Frolov:2005dj] and parametrise the $S^5$ in coordinates adapted to the $U(1)^3$ isometry $$\label{eq:metgam} ds_{S^5}^2 = d\alpha^2 + \S_\alpha^2 d\xi^2 + \C_\alpha^2 d\phi_1^2 + \S_\alpha^2 \C_\xi^2 d\phi_2^2 + \S_\alpha^2 \S_\xi^2 d\phi_3^2 = \sum_{i= 1\dots 3} dr_i^2 + r_i^2 d\phi_i^2 \ ,$$ where $r_1 = \C_\alpha$, $r_2= \S_\alpha \C_\xi$, $r_3= \S_\alpha \S_\xi$ with $\C_x$ and $\S_x$ denoting $\cos x$ and $\sin x$ respectively. The sphere can be realised as the coset $SU(4)/SO(5)$ for which a particular coset representative is given by $$\label{eq:paragam} g = e^{\frac{1}{2} \sum_{m=1}^3 \phi^m h_m } e^{-\frac{\xi}{2} \gamma^{13}} e^{\frac{i}{2} \alpha \gamma^1} \ ,$$ where $ \gamma^{13}$ and $\gamma^1$ are certain $SU(4)$ generators (see appendix \[app:algconv\] for conventions) and $h_i$ are the three Cartan generators. Letting $P$ be the projector onto the coset and $J_\pm = g^{-1} \partial_\pm g$ the pull-backs of the left-invariant one-form, the $S^5$ $\sigma$-model Lagrangian is $${\cal L } = {\operatorname{Tr}}(J_+ P(J_-) ) \ ,$$ with the parametrisation giving the $\sigma$-model with target space metric .
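The equality of the two forms of the round metric reduces to the statement that $\sum_i dr_i^2 = d\alpha^2 + \sin^2\!\alpha\, d\xi^2$ for the given $r_i(\alpha,\xi)$, since the $d\phi_i^2$ terms carry coefficient $r_i^2$ on both sides. A quick symbolic check (our own sketch, with formal differential symbols):

```python
import sympy as sp

# With r1 = cos(alpha), r2 = sin(alpha) cos(xi), r3 = sin(alpha) sin(xi),
# check sum_i dr_i^2 = d alpha^2 + sin^2(alpha) d xi^2.
alpha, xi = sp.symbols('alpha xi')
d_alpha, d_xi = sp.symbols('d_alpha d_xi')   # formal differentials
r = [sp.cos(alpha), sp.sin(alpha) * sp.cos(xi), sp.sin(alpha) * sp.sin(xi)]
dr = [sp.diff(ri, alpha) * d_alpha + sp.diff(ri, xi) * d_xi for ri in r]

lhs = sp.expand(sum(dri**2 for dri in dr))
rhs = d_alpha**2 + sp.sin(alpha)**2 * d_xi**2
assert sp.simplify(lhs - rhs) == 0

# the r_i parametrise a unit two-sphere, as they must for the round S^5
assert sp.simplify(sum(ri**2 for ri in r) - 1) == 0
```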
Starting with the $r$-matrix $$r= \frac{\nu_1}{4} h_2 \wedge h_3 + \frac{\nu_3}{4} h_1 \wedge h_2 + \frac{\nu_2}{4} h_3 \wedge h_1 \ ,$$ it was shown in [@Matsumoto:2014nra; @vanTongeren:2015soa] that the NS sector of the Yang-Baxter $\sigma$-model matches the $\gamma$-deformed target space explicitly given by $$\label{eq:gammadef} \begin{aligned} ds^2 & = ds^2_{AdS}+ \sum_{i= 1\dots 3} ( dr_i^2 + G r_i^2 d\phi_i^2) + G r_1^2 r_2^2 r_3^2 \Big( \sum_{i= 1\dots 3} \nu_i d\phi_i \Big)^2 \ ,\\ B & = G ( r_1^2 r_2^2 \nu_3 d\phi_1 \wedge d\phi_2 + r_1^2 r_3^2 \nu_2 d\phi_3 \wedge d\phi_1 + r_2^2 r_3^2 \nu_1 d\phi_2 \wedge d\phi_3 ) \ , \\ \end{aligned}$$ with $$G^{-1}\equiv \lambda= 1+ r_1^2 r_2^2 \nu_3^2 + r_3^2 r_1^2 \nu_2^2 + r_2^2 r_3^2 \nu_1^2 \ ,$$ where the parameters $\nu_i$ are related to the $\gamma_i$ of the field theory by a factor of the $AdS$ radius [@Frolov:2005dj], which we suppress throughout. We would like to interpret this in terms of the centrally-extended (non-)abelian T-duality introduced in section \[sec:centralext\]. To do so we find it expedient to make a basis transformation of the Cartan generators; let us assume $\nu_3 \neq 0$ and define $$\tilde{h}_1 = h_1 - \frac{\nu_1}{\nu_3} h_3 \ , \quad \tilde{h}_2 = h_2 - \frac{\nu_2}{\nu_3} h_3 \ , \quad \tilde{h}_3 = h_3 + \frac{\nu_1}{\nu_3} h_1 + \frac{\nu_2}{\nu_3} h_2 \ .$$ In this basis the $r$-matrix simply reads $$r= \frac{\nu_3}{4} \tilde{h}_1 \wedge \tilde{h}_2 \ .$$ We also introduce a new set of angles such that $\tilde{h}_i \tilde{\phi}_i = h_i \phi_i$ (where the sum over $i$ is implicit). Written in this way it is clear that we should consider a centrally-extended (non-)abelian T-duality along the $\tilde{h}_1$ and $\tilde{h}_2$ directions.
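That the change of basis collapses the three-parameter $r$-matrix to a single wedge can be checked by treating the Cartan generators as abstract basis vectors (a sketch; the realisation of the $h_i$ as $\mathbb{R}^3$ unit vectors is our own device):

```python
import sympy as sp

# Treat h_1, h_2, h_3 as abstract basis vectors and check that
# (nu3/4) h~1 ^ h~2 reproduces the original three-parameter r-matrix.
n1, n2, n3 = sp.symbols('nu1 nu2 nu3')
h = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]

def wedge(X, Y):
    # X ^ Y = X (x) Y - Y (x) X, here as an antisymmetric 3x3 matrix
    return X * Y.T - Y * X.T

r = (n1 / 4) * wedge(h[1], h[2]) + (n3 / 4) * wedge(h[0], h[1]) \
    + (n2 / 4) * wedge(h[2], h[0])
ht1 = h[0] - (n1 / n3) * h[2]
ht2 = h[1] - (n2 / n3) * h[2]
r_tilde = (n3 / 4) * wedge(ht1, ht2)

assert sp.simplify(r - r_tilde) == sp.zeros(3, 3)
```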
To proceed we define a slightly exotic set of frame fields for the $S^5$, adapted to the dualisation as described $$\begin{aligned} e^\alpha &= d \alpha \ , \quad e^\xi = \sin \alpha d\xi \ , \quad e^1 = \frac{1}{\varphi \sqrt{\lambda-1} } \left( r_1^2 \varphi^2 d\phi_1 - r_2^2 r_3^2 \nu_1 \nu_2 d\phi_2 - r_2^2 r_3^2 \nu_1 \nu_3 d\phi_3 \right) \ , \\ e^2 & = \frac{1}{\varphi } \left( r_2^2 \nu_3 d\phi_2 - r_3^2 \nu_2 d\phi_3 \right) \ , \quad e^3 = \frac{r_1 r_2 r_3}{\sqrt{\lambda -1} }\sum_{i} \nu_i d\phi_i \ , \end{aligned}$$ where $\varphi=(r_2^2 \nu_3^2 + r_3^2 \nu_2^2)^{\frac{1}{2}}$. Though these frames depend on $\nu_i$ the overall metric remains the round $S^5$ independent of $\nu_i$. The advantage of this basis is that the T-dualisation acts only on the $e^1$ and $e^2$ directions. We non-abelian T-dualise with respect to the central extension of $\tilde{h}_1$ and $\tilde{h}_2$, making the gauge fixing choice $$\hat{g} = e^{\frac{1}{2} \tilde{\phi}_3 \tilde{h}_3 } e^{-\frac{\xi}{2} \gamma^{13}} e^{\frac{i}{2} \alpha \gamma^1}$$ and parametrising the Lagrange multipliers as $$v_1 = - \frac{2 }{\nu_{3}} \tilde{\phi}_2 \ , \quad v_2 =\frac{2 }{\nu_{3}} \tilde{\phi}_1 \ , \quad v_3 = \frac{4}{\nu_3} \ , \quad dv_3= 0 \ .$$ After some work one finds the dual metric is exactly that of eq.  with a $B$-field matching up to a gauge transformation.[^5] The dual dilaton is given by $$e^{\widehat{\Phi} - \phi_0} = \frac{\nu_3}{4 \sqrt{\lambda}} \ .$$ The frame fields produced by dualisation, using eq.
, are $$\begin{aligned} \widehat{e}^{\,\alpha} &= e^\alpha \ , \quad \widehat{e}^{\,\xi} = e^\xi \ , \quad \widehat{e}^{\,3} = e^3 \ , \\ \widehat{e}^{\,1} &\equiv \widehat{e}^{\,1}_{+} = \frac{1}{ \lambda \varphi \sqrt{\lambda-1} } \left( r_1^2 \varphi^2 d\phi_1 - r_2^2 (r_3^2 \nu_1 \nu_2 + (\lambda-1)\nu_3 ) d\phi_2 - r_3^2 (r_2^2 \nu_1 \nu_3 - (\lambda-1)\nu_2 ) d\phi_3 \right) \ , \\ \widehat{e}^{\,2} &\equiv \widehat{e}^{\,2}_{+} = \frac{1}{ \lambda \varphi } \left( r_1^2 \varphi^2 d\phi_1 + r_2^2 ( \nu_3 - \nu_1 \nu_2 r_3^2 ) d\phi_2 - r_3^2( \nu_2 + \nu_1 \nu_3 r_2^2 ) d\phi_3 \right) \ . \end{aligned}$$ Following the dualisation procedure the Lorentz transformation in eq.  is given by $$\Lambda = \frac{1}{\lambda} \left(\begin{array}{cc} 2-\lambda & - 2 \sqrt{\lambda - 1} \\ 2 \sqrt{\lambda -1} & 2- \lambda \end{array} \right) \ ,$$ for which the corresponding action on spinors is simply $$\Omega =\frac{1}{\sqrt{\lambda}} \mathbb{I} - \frac{\sqrt{\lambda -1 }}{\sqrt{\lambda} }\Gamma^{12} \ .$$ Then acting on the poly-form we ascertain the T-dual fluxes $$\begin{aligned} \widehat{F}_3&= -4 e^{-\phi_0} r_1 r_2 r_3 \, e^\alpha \wedge e^\xi \wedge \left( \nu_1 d\phi_1 + \nu_2 d\phi_2 + \nu_3 d\phi_3 \right) \ , \\ \widehat{F}_5 &= (1 +\star) \frac{4 e^{-\phi_0}}{\lambda} r_1 r_2 r_3 \, e^\alpha \wedge e^\xi \wedge d\phi_1 \wedge d\phi_2 \wedge d\phi_3 \ , \end{aligned}$$ in complete agreement with the results of [@Frolov:2005dj]. To close this section let us make a small observation. For the $\beta$-deformation $\nu_1 = \nu_2 = \nu_3 \equiv \gamma$ there is a special simplification when $\gamma = \frac{1}{n}$, $n\in \mathbb{Z}$. In this case the deformed gauge theory is equivalent to that of D3 branes on the discrete torsion orbifold $\mathbb{C}^3/\Gamma$ with $\Gamma = \mathbb{Z}_n \times \mathbb{Z}_n$. These cases are also special in the dualisation procedure above.
Notice that the Lagrange multiplier $v$ corresponding to the central extension is inversely proportional to $\gamma$ and hence the orbifold points correspond to cases where $v$ is integer quantised. Moreover, recalling that non-abelian T-duality with respect to a centrally-extended $U(1)^2$ is equivalent to first adding a total derivative $B$-field, i.e. making a large gauge transformation, and then T-dualising with respect to $U(1)^2$, where the required total derivative is again given by the expression in footnote \[foot:bdiff\], we find that at the orbifold points ($\nu_1 = \nu_2 = \nu_3 \equiv \gamma = \frac1n$) the integral of this total derivative $$\frac1{4\pi^2} \int B_2 = \frac{n}{12\pi^2} \int (d\phi_2 \wedge d\phi_3 + d \phi_1 \wedge d \phi_2 + d\phi_3 \wedge d\phi_1) = n \ ,$$ is also integer quantised. Application 3: Dipole Deformations {#ssec:app3} ---------------------------------- Dipole theories [@Bergman:2000cw; @Bergman:2001rw] are a class of non-local field theories obtained from regular (or even non-commutative) field theories by associating to each non-gauge field $\Phi_a$ a vector $L^\mu_a$ and replacing the product of fields with a non-commutative product $$(\Phi_1 \tilde\star \Phi_2 )(x) \equiv \Phi_1(x- \frac{1}{2} L_2) \Phi_2 (x+ \frac{1}{2} L_1) \ .$$ Whilst intrinsically non-local, these theories can be mapped to local field theories with a tower of higher-order corrections. For small $L$ the leading correction is the coupling to a dimension 5 operator, which for $\mathcal{N} = 4$ SYM was identified in [@Bergman:2000cw] as $$\Delta {\cal L} = L^\mu \cdot {\cal O}_\mu \ , \quad {\cal O}_\mu^{IJ} = \frac{i}{g^2_{YM}} \textrm{tr}\left(F_\mu{}^\nu \Phi^{[I} D_{\nu} \Phi^{J]}+ (D_\mu \Phi^K)\Phi^{[K}\Phi^I\Phi^{J]} \right) \ .$$ In [@Bergman:2001rw] the supergravity dual to this dipole deformation was constructed.
When aligned in the $x^3$ direction the dipole vector $L$ specifies a constant element in $\mathfrak{su}(4)$ which defines in the $\bf{4}$ a $4\times 4$ traceless hermitian matrix $U$ and in the $\bf{6}$ a $6\times 6$ real antisymmetric matrix $M$. In terms of these matrices the supergravity metric is given by [@Bergman:2001rw] $$ds^2 = \frac{R^2}{z^2} \left( -dt^2 + dx_1^2 + dx_2^2 + f_{1}^{-1}z^2 d x_3^2 \right) + R^2 \left( d\textrm{n}^Td\textrm{n} + \lambda^2 f_{1}^{-1} (\textrm{n}^T M d \textrm{n})^2 \right) \ ,$$ where $\textrm{n}$ is a unit vector in $\mathbb{R}^6$, $\lambda = R^4 (\alpha^\prime)^{-2}= 4 \pi g^2_{YM}N$ and $$f_1 = \frac{z^2}{R^2}+ \lambda^2 \textrm{n}^T M^T M \textrm{n} \ .$$ The deformation acts in both $S^5$ and $AdS_5$. The eigenvalues of a $6 \times 6$ real antisymmetric matrix are three imaginary numbers and their complex conjugates. If we take the three independent eigenvalues of $M$ to be equal, $M^T M$ is a positive constant, $l^2/\lambda^2$, times the identity matrix, and hence $$f_1 = z^2 + l^2 \ ,$$ where we have set $R = 1$. Though this case preserves no supersymmetry, it does yield a simple metric on the five-sphere; viewed as a $U(1)$ fibration over $\mathbb{C}\mathbf{P}^2$ (given in appendix \[app:algconv\] in eq. ) the deformation acts to change the radius of this fibration such that it depends on the function $f_1$ [@Bergman:2001rw], which now only depends on the $AdS$ radial coordinate. To arrive at this dipole deformation via centrally-extended non-abelian T-duality we gauge the central extension of the $U(1)^{2}$ subgroup generated by $\{ \mathfrak{P}_{3} , (S_{12}+ S_{34}+ S_{56}) \}$. We gauge fix the coset representative $$\hat{g} = g_{AdS_{5}}\oplus g_{S^{5}} \ ,\quad x_{3}\rightarrow 0 \ , \quad \phi \rightarrow 0 \ ,$$ where $g_{AdS_{5}}$ is the parametrisation relevant for the Poincaré patch and $g_{S^{5}}$ is given in eq. . 
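The simplification of $f_1$ quoted above is elementary linear algebra and can be checked directly; the following sketch is an addition to the text, with arbitrary sample values for $l$, $\lambda$ and $z$:

```python
import numpy as np

l, lam = 0.6, 2.0  # sample values of the scale l and the coupling lambda

# 6x6 real antisymmetric M with its three independent eigenvalues equal:
# three identical 2x2 rotation blocks with coefficient l/lam
J = np.array([[0.0, -1.0], [1.0, 0.0]])
M = (l/lam) * np.kron(np.eye(3), J)
assert np.allclose(M.T, -M)

# eigenvalues come in purely imaginary pairs +- i l/lam
ev = np.linalg.eigvals(M)
assert np.allclose(sorted(ev.real), 0)
assert np.allclose(np.abs(ev.imag), l/lam)

# M^T M is the positive constant l^2/lam^2 times the identity ...
assert np.allclose(M.T @ M, (l/lam)**2 * np.eye(6))

# ... so for any unit vector n, f_1 = z^2 + lam^2 n^T M^T M n = z^2 + l^2 (R = 1)
n = np.random.default_rng(0).normal(size=6)
n /= np.linalg.norm(n)
z = 1.7
f1 = z**2 + lam**2 * n @ M.T @ M @ n
assert np.isclose(f1, z**2 + l**2)
```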
The Lagrange multipliers are then parametrised as $$v_{1} = \frac{\phi}{l} \ , \quad v_{2}= \frac{x_{3}}{l} \ , \quad v_{3} = \frac{1}{l} \ .$$ Following the general formulae one arrives at the T-dual frame fields $$\widehat{e}^{\,1}_{\pm} = \frac{z}{z^{2} + l^{2}} \big( dx_{3} \pm l \Psi \big) \ , \quad \widehat{e}^{\,2}_{\pm} = \frac{z}{z^{2} + l^{2}} \big( -z \Psi \pm \frac{l}{z} dx_{3} \big) \ ,$$ in which $\Psi$ is the global one-form corresponding to the $U(1)$ fibration defined in eq. . It is a simple matter to extract the Lorentz rotation in the spinor representation $$\Omega = \frac{1}{\sqrt{z^{2}+ l^{2}} }\left(z \mathbb{I} - l \Gamma^{12} \right) \ .$$ Here $\Gamma_{12}$ refers to the directions in tangent space given by frames $\widehat{e}^{\,1}$ and $\widehat{e}^{\,2}$. This is a product of two gamma matrices, one with legs in $S^{5}$ and the other in $AdS_{5}$. Therefore, the action of $\Omega$ only produces a five-form in the dualised target space. In fact since, for example, $z\widehat{e}^{\,2}_+ - l \widehat{e}^{\,1}_+ = - z \Psi$ one finds that $F_5$ is only altered by an overall constant scaling that could be re-absorbed into a shift of the dilaton. The final result is the target space geometry $$\begin{aligned} \widehat{ds}^{2} &= \frac{1}{z^{2}} \left( -dx_{0}^{2}+ dx_{1}^{2}+dx_{2}^{2 } + dz^{2}\right) + ds^{2}_{\mathbb{C}\mathbf{P}^2} + \frac{1}{z^{2}+l^{2}} dx_{3}^{2 } + \frac{z^{2}}{z^{2}+l^{2}}\Psi^{2} \ , \\ \widehat{B} &= \frac{l}{z^{2}+ l^{2}}\Psi \wedge dx_{3} + \frac{1}{l} dx_{3} \wedge d\phi \ , \quad e^{ 2(\widehat{\Phi}-\phi_{0})} = \frac{z^{2}l^{2}}{z^{2}+l^{2}} \ , \quad \widehat{F}_5 = \frac{1}{l} F_5 \ . \end{aligned}$$ Modulo a gauge transformation in $\widehat{B}$ this agrees with the geometry of [@Bergman:2001rw]. 
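As a small consistency check (an addition, not part of the text), the T-dual frame fields of the dipole background can be contracted in the two-dimensional basis $(dx_3, \Psi)$ to confirm that they reproduce the $\frac{1}{z^2+l^2}dx_3^2$ and $\frac{z^2}{z^2+l^2}\Psi^2$ terms of the dual metric, and that $z\,\widehat{e}^{\,2}_+ - l\,\widehat{e}^{\,1}_+ = -z\Psi$ as used above:

```python
import numpy as np

z, l = 1.3, 0.6  # sample values of the AdS radial coordinate and dipole scale

# one-forms written as coefficient vectors in the basis (dx3, Psi)
e1 = np.array([1.0, l]) * z/(z**2 + l**2)    # \hat e^1_+ = z/(z^2+l^2) (dx3 + l Psi)
e2 = np.array([l/z, -z]) * z/(z**2 + l**2)   # \hat e^2_+ = z/(z^2+l^2) (-z Psi + (l/z) dx3)

# the induced quadratic form (e1)^2 + (e2)^2 in the (dx3, Psi) basis
G = np.outer(e1, e1) + np.outer(e2, e2)
# matches the dx3^2 and Psi^2 terms of the dual metric, with no cross term
assert np.allclose(G, np.diag([1/(z**2 + l**2), z**2/(z**2 + l**2)]))

# the combination quoted in the text: z e^2_+ - l e^1_+ = -z Psi
assert np.allclose(z*e2 - l*e1, np.array([0.0, -z]))
```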
Concluding Comment {#sec:conclusions} ================== In this article we have demonstrated that the holographic dual of many known deformations of gauge theories can be understood in terms of non-abelian T-duality, extending the construction in the NS sector of [@Hoare:2016wsk; @Borsato:2016pas] to the RR sector. In section \[sec:examples\] we tested the construction on a number of examples: a non-commutative deformation, the $\gamma$-deformation, a dipole deformation and, in appendix \[app:furtherexamples\], a unimodular non-abelian deformation and a jordanian deformation. There are a number of interesting open directions. Our construction involved only bosonic generators of the $\mathfrak{psu}(2,2|4)$ algebra of the superstring; it would be interesting to extend this to more general $r$-matrices, including those containing fermionic generators. Furthermore, to formalise the relation between the Yang-Baxter deformations and non-abelian dualities it would be useful to understand how the spinor rotation defining the deformed RR fluxes in the former [@Borsato:2016ose] is related to that in the latter, which was the subject of the present article. Additionally, one would like to understand whether solutions of the modified cYBE (i.e. $\eta$-deformations and their Poisson-Lie dual $\lambda$-deformations) can be understood in this framework. Finally, and perhaps optimistically, one might hope that generalised notions of T-duality can be employed to find gravitational duals of other non-integrable marginal deformations of gauge theories. Acknowledgements ================ It is a pleasure to thank Saskia Demulder, Carlos Núñez, Arkady Tseytlin, Linus Wulff and Konstantinos Zoubos for discussions concerning aspects of this work. The work of BH is partially supported by grant no. 615203 from the European Research Council under the FP7. 
The work of DT is supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, and in part by the “FWO-Vlaanderen” through the project G020714N and a postdoctoral fellowship, and by the Vrije Universiteit Brussel through the Strategic Research Program “High-Energy Physics”.

(Modified) Supergravity Conventions {#app:sugra}
===================================

In this appendix we summarise our conventions for the (modified) Type IIB supergravity equations. Similar equations exist for Type IIA. Let us define the following beta-functions $$\label{eq:betafunctions} \begin{aligned} \beta_{mn}^{G} &= R_{mn} + 2\nabla_{m}\nabla_{n}\Phi -\frac{1}{4}H_{mpq}H_{n}{}^{pq} \\ & - e^{2\Phi}\Big( \frac{1}{2}({F_1}^2)_{mn}+\frac{1}{4}({F_3}^2)_{mn} +\frac{1}{96}({F_5}^2)_{mn} - \frac{1}{4}g_{mn} \big( F_1^2 +\frac{1}{6}F_{3}^2 \big)\Big) \ , \\ \beta_{mn}^B &= d[e^{-2\Phi} \star H] - F_1\ww \star F_3 - F_3 \ww F_5 \ , \\ \beta^\Phi &= R+ 4 \nabla^2 \Phi - 4 (\partial \Phi)^2 - \frac{1}{12} H^2 \ . \end{aligned}$$ For a globally scale invariant $\sigma$-model the beta-functions for the metric and $B$-field vanish up to diffeomorphisms and gauge transformations. There should then be similar second-order equations for the RR fluxes. The Type IIB supergravity equations, i.e. the critical string equations, are given by $$\label{eq:sugraeq} \begin{aligned} \beta^G_{mn} & = 0 \ , \quad \beta^B_{mn} = 0 \ , \quad \beta^\Phi = 0 \ , \\ d\cF_1 & = d\Phi \ww \cF_1 \ , \quad d\star\cF_1 + H\ww \star \cF_3 = d\Phi \ww\star \cF_1 \ , \\ d\cF_3 - H\ww \cF_1 &= d\Phi \ww \cF_3 \ , \quad d\star \cF_3 + H\ww \star \cF_5 = d\Phi \ww \star \cF_3 \ ,\\ d\cF_5 - H\ww \cF_3&= d\Phi \ww \cF_5 \ , \quad \cF_5 = \star \cF_5 \ , \\ \end{aligned}$$ where we have defined $\cF = e^\Phi F$. 
There exists a modification to the supergravity equations that still implies the global scale invariance conditions, but now depends on an additional Killing vector of the background, $I$. These modified supergravity equations can be understood as follows. We start from a solution of the Type II supergravity equations for which the metric, $B$-field and weighted RR fluxes $\cF$ have a $U(1)$ isometry corresponding to shifts in the coordinate $y$, but where the dilaton breaks this isometry via a piece linear in $y$, i.e. $\Phi = cy + \ldots$. The supergravity equations only depend on $d\Phi$ and hence we can ask what happens if we formally T-dualise in $y$. The dual background then solves the modified equations with the Killing vector corresponding to shifts in the dual coordinate to $y$ [@Arutyunov:2015mqj]. Alternatively they follow from the requirement that the Type II Green-Schwarz string action is $\kappa$-symmetric [@Wulff:2016tju]. Recently they have also been formulated in an $O(d,d)$ invariant manner, as a modification of Type II double field theory [@Sakatani:2016fvh]. The modified Type IIB supergravity equations are $$\label{eq:modsugra1} \begin{aligned} \beta_{mn}^{G} &= - \nabla_m W_n - \nabla_n W_m \ , \quad e^{2\Phi} \beta_{mn}^{B} = 2 \star d W +2 W \wedge \star H \ , \\ \beta^{\Phi} &= 4 \star d \star W -4 \star( (W+ 2 d\Phi) \ww \star W) \ , \\ d\cF_1 & = Z\ww \cF_1 + \star (I \ww \star \cF_3) \ , \quad d\star\cF_1 + H\ww \star \cF_3 = Z\ww\star \cF_1 \ , \quad \star (I \ww \star \mathcal{F}_1) = 0 \ , \\ d\cF_3 - H\ww \cF_1 &= Z\ww \cF_3 + \star (I \ww \star \cF_5) \ , \quad d\star \cF_3 + H\ww \star \cF_5 = Z\ww \star \cF_3- \star( I \wedge \cF_1 ) \ , \\ d\cF_5 - H\ww \cF_3&= Z\ww \cF_5- \star (I \ww \cF_3) \ , \quad \cF_5 = \star \cF_5 \ , \end{aligned}$$ where $I$ is a one-form corresponding to a certain Killing vector of the background, i.e. 
$$\label{eq:modsugra2} \mathcal{L}_I G= \mathcal{L}_I B = \mathcal{L}_I \Phi = \mathcal{L}_I \cF_{1,3,5} = 0 \ ,$$ and the one-forms $Z$, $X$ and $W$ are constructed from $I$ and $\Phi$ $$\label{eq:modsugra3} Z = d\Phi - \iota_I B \ , \quad X= I + Z \ , \quad W = X - d\Phi = I - \iota_I B \ .$$ It is important to note that for the modified system of equations to be invariant under the gauge freedom $B \to B + d \Lambda$ (where for simplicity we assume that $\mathcal{L}_I \Lambda = 0$) the “dilaton” field $\Phi$ must now transform as $\Phi \to \Phi - \iota_I \Lambda$, and hence is not unique. This can be understood by starting from a Weyl-invariant background with a dilaton linear in an isometric direction $y$, $\Phi = c y + \ldots$. If we shift $y$ by an arbitrary function of the transverse coordinates this ansatz is preserved, however the explicit form of the dilaton is changed. After “T-dualising” in $y$ this coordinate redefinition then maps to a gauge transformation under which the dual “dilaton” field now transforms. Conventions for Algebras {#app:algconv} ======================== In this appendix we outline our conventions for the algebras $\mathfrak{so}(4,2)$ and $\mathfrak{so}(6)$ for which we largely adopt those of [@Arutyunov:2009ga]. For $SO(4,2)$ we start by defining the $\gamma$ matrices $$\gamma_0 = i \sigma_3 \otimes \sigma_0 \ , \quad \gamma_1= \sigma_2 \otimes \sigma_2 \ , \quad \gamma_2= -\sigma_2 \otimes \sigma_1\ , \quad \gamma_3= -\sigma_1 \otimes \sigma_0 \ , \quad \gamma_4= \sigma_2 \otimes \sigma_3 \ ,$$ in terms of which the generators of $SO(4,2)$ are given by $$T_{ij} = \frac{1}{4} [\gamma_{i}, \gamma_j] \ , \quad T_{i5} = \frac{1}{2} \gamma_i \ , \quad i,j = 0, \ldots, 4 \ .$$ The $SO(4,1)$ subgroup is generated by $T_{ij}$ for $i,j = 0, \ldots, 4$. 
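The $\gamma$ matrices defined above can be verified to satisfy the Clifford algebra $\{\gamma_i, \gamma_j\} = 2\eta_{ij}\mathbb{I}$ with $\eta = \operatorname{diag}(-1,1,1,1,1)$; the short numerical check below is an addition to the text:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma matrices as defined above
g0 = 1j*np.kron(s3, s0)
g1 = np.kron(s2, s2)
g2 = -np.kron(s2, s1)
g3 = -np.kron(s1, s0)
g4 = np.kron(s2, s3)
gam = [g0, g1, g2, g3, g4]

# Clifford algebra with signature (-,+,+,+,+) on indices 0..4
eta = np.diag([-1, 1, 1, 1, 1])
for i in range(5):
    for j in range(5):
        anti = gam[i] @ gam[j] + gam[j] @ gam[i]
        assert np.allclose(anti, 2*eta[i, j]*np.eye(4))
```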
The projector onto the orthogonal complement is given by $$P(X)= - {\operatorname{Tr}}(X T_{05}) T_{05} + \sum_{i=1}^4 {\operatorname{Tr}}(X T_{i5}) T_{i5} \ .$$ A useful adapted basis when considering the Poincaré patch is $$\mathfrak{D}= T_{45} \ , \quad \mathfrak{P}_\mu = T_{\mu 5} - T_{\mu 4} \ , \quad \mathfrak{K}_\mu = T_{\mu 5} + T_{\mu 4} \ , \quad \mathfrak{M}_{\mu \nu} = T_{\mu \nu} \ , \quad \mu = 0,\ldots, 3 \ .$$ We also use $\mathfrak{M}_{+i} = \mathfrak{M}_{0i} +\mathfrak{M}_{1i} $ for $i=2,3$. The bosonic $AdS_5$ $\sigma$-model is given by $${\cal L } = {\operatorname{Tr}}(J_+ P(J_-) ) \ ,$$ for $J= g^{-1}dg$ and when the gauge-fixed group element is parametrised as $$\label{eq:gAdS} g= \exp\left[ \eta^{\mu \nu} x_\mu \mathfrak{P}_\nu \right] z^{\mathfrak{D} } \ ,$$ the target space metric is given on the Poincaré patch by $$ds^2=\frac{1}{z^2} \left( dz^2 + \eta^{\mu \nu} dx_\mu dx_\nu \right) \ .$$ As usual the coordinate $u$ used in section \[ssec:app1\] is related to $z$ by $u = z^{-1}$. In the examples that we consider we dualise with respect to a subalgebra $\mathfrak{h}\subset \mathfrak{so}(4,2)$ which need not be contained in the $\mathfrak{so}(4,1)$ subalgebra specified above. 
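A quick numerical check (an addition, not part of the text) confirms that the adapted basis obeys the expected conformal-algebra relations $[\mathfrak{D},\mathfrak{P}_\mu]=\mathfrak{P}_\mu$, $[\mathfrak{D},\mathfrak{K}_\mu]=-\mathfrak{K}_\mu$ and $[\mathfrak{P}_\mu,\mathfrak{P}_\nu]=0$ in the explicit $4\times 4$ representation built from the $\gamma$ matrices above:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma matrices as defined above
gam = [1j*np.kron(s3, s0), np.kron(s2, s2), -np.kron(s2, s1),
       -np.kron(s1, s0), np.kron(s2, s3)]

def T(i, j):
    # T_{ij} = [gamma_i, gamma_j]/4 for i, j <= 4 and T_{i5} = gamma_i/2
    if j == 5:
        return gam[i]/2
    return (gam[i] @ gam[j] - gam[j] @ gam[i])/4

com = lambda a, b: a @ b - b @ a

D = T(4, 5)
P = [T(mu, 5) - T(mu, 4) for mu in range(4)]
K = [T(mu, 5) + T(mu, 4) for mu in range(4)]

for mu in range(4):
    assert np.allclose(com(D, P[mu]), P[mu])    # [D, P_mu] = P_mu
    assert np.allclose(com(D, K[mu]), -K[mu])   # [D, K_mu] = -K_mu
    for nu in range(4):
        assert np.allclose(com(P[mu], P[nu]), 0)  # translations commute
```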
For $\mathfrak{so}(6)\cong \mathfrak{su}(4)$ we supplement $\gamma_i$, $i=1,\ldots, 4$, defined above with $\gamma_5 = -i \gamma_0$ and construct the (anti-hermitian) generators $$S_{ij} = \frac{1}{4} [\gamma_{i}, \gamma_j] \ , \quad S_{i6} = \frac{i}{2} \gamma_i \ , \quad i,j = 1, \ldots, 5 \ .$$ The Cartan subalgebra is generated by $$h_1= i {\operatorname{diag}}(1,1,-1,-1) \ , \quad h_2= i {\operatorname{diag}}(1,-1,1,-1) \ , \quad h_3= i {\operatorname{diag}}(1,-1,-1,1) \ .$$ We take the $\mathfrak{so}(5)$ subalgebra to be generated by $S_{ij}$ for $i=1,\ldots,5$, such that the projector onto the orthogonal complement of this subalgebra is $$P(X) = \sum_{i=1}^5{\operatorname{Tr}}( X\cdot S_{i6}) S_{i6} \ ,$$ where here ${\operatorname{Tr}}$ stands for the negative of the matrix trace. A coset representative for $SO(6)/SO(5)$ can be chosen as $$g = \exp[\frac{1}{2} \phi^m h_m ] \exp[-\frac{\xi}{2} \gamma^{13}] \exp[\frac{i\alpha}{2} \gamma^1] \ ,$$ leading to the $\sigma$-model parametrisation of $S^5$ employed in section \[ssec:app2\]. An alternative parametrisation is given by $$\label{eq:CPparam} g= \exp[\frac{i\phi}{2} \gamma_5] \cdot \big[s \, \mathbb{I} + \frac{it}{2} \big( e^{i\phi} \alpha (\gamma_1 - i \gamma_2) + e^{-i\phi} \bar\alpha (\gamma_1 + i \gamma_2) + e^{i\phi} \beta (\gamma_3- i \gamma_4) + e^{-i\phi}\bar\beta (\gamma_3+ i \gamma_4) \big) \big] \ ,$$ where $$r = 1+ |\alpha|^2 + |\beta|^2 \ , \quad s^2 = \frac{1}{2\sqrt{r} }(1+ \sqrt{r}) \ , \quad t^2 = \frac{1}{2\sqrt{r} (1+ \sqrt{r})} \ .$$ These coordinates give a metric on $S^5$ that makes manifest the structure of $S^5$ as a $U(1)$ fibration over $\mathbb{C}\mathbf{P}^2$ $$\label{eq:CPform} \begin{aligned} &ds^2_{S^5} = ds^{2}_{\mathbb{C}\mathbf{P}^2} + \Psi^{2}\ , \quad ds^{2}_{\mathbb{C}\mathbf{P}^2} = \frac{1}{r}( |d\alpha|^2 + |d\beta|^2 ) - \frac{1}{r^2}| \omega |^2 \ , \\ & \Psi= d\phi + \frac{1}{r} \Im( \omega) \ , \quad \omega = \bar\alpha d \alpha + \bar\beta d \beta \ . 
\end{aligned}$$ The global one-form $\Psi = \sum_{i=1\dots 3} x_i dy_i - y_i d x_i$ where $z_i = x_i + i y_i$ are coordinates on $\mathbb{C}^3$ given by $z_1 = \frac{1}{\sqrt{r}} e^{i \phi}$, $z_2 = \frac{\alpha}{\sqrt{r}} e^{i \phi}$, $z_3 = \frac{\beta}{\sqrt{r}} e^{i \phi}$. One can think of $\Psi$ as a contact form whose corresponding Reeb vector has orbits which are the $S^1$ fibres. For computational purposes we note that frame fields for $\mathbb{C}\mathbf{P}^2$ can be found in e.g. [@Eguchi:1980jx]. When dealing with the dipole deformation in section \[ssec:app3\] we will need the full ten-dimensional space-time. This is readily achieved by taking a block diagonal decomposition, i.e. $g = g_{AdS_{5}} \oplus g_{S^5}$, with the generators of $\mathfrak{su}(2,2) \oplus \mathfrak{su}(4)$ given by $8 \times 8$ matrices, with the $\mathfrak{su}(2,2)$ and $\mathfrak{su}(4)$ generators in the upper left and lower right $4 \times 4$ blocks respectively. Traces are then replaced with “supertrace” (the bosonic restriction of the supertrace on $\mathfrak{psu}(2,2|4)$) given by the matrix trace of the upper $\mathfrak{su}(2,2)$ block minus the matrix trace of the lower $\mathfrak{su}(4)$ block. Further Examples of Deformations in AdS5 {#app:furtherexamples} ======================================== In section \[sec:examples\] we considered non-abelian T-dualities with respect to a centrally-extended two-dimensional abelian algebra, demonstrating that this is equivalent to a TsT transformation of the full supergravity background. There are additional classes of deformations that can be constructed as non-abelian T-duals. These come from considering particular non-semisimple subalgebras of $\mathfrak{su}(2,2) \oplus \mathfrak{su}(4)$, whose existence relies on the non-compactness of $\mathfrak{su}(2,2)$. 
There are a number of such algebras that are non-abelian and admit central extensions [@Borsato:2016ose], such that when we T-dualise the metric with respect to this centrally-extended subalgebra we find a deformation of the original metric [@Hoare:2016wsk; @Borsato:2016pas] that coincides with a certain Yang-Baxter deformation. To illustrate this richer story we present a summary of two examples showing how the techniques described in this paper also apply, i.e. the R-R fluxes following from non-abelian T-duality agree with those of the Yang-Baxter $\sigma$-model. An $r$-matrix $$r = r^{ab} T_a \wedge T_b \ ,$$ is said to be non-abelian if $[T_a , T_b]\neq 0$ for at least some of the generators. An $r$-matrix is said to be unimodular if $$r^{ab} [T_a, T_b] = 0 \ .$$ For a solution of the classical Yang-Baxter equation the unimodularity of the $r$-matrix is equivalent to the unimodularity ($f_{ab}{}^b = 0$) of the corresponding subalgebra. In [@Borsato:2016ose] it was shown that the background defined by a Yang-Baxter $\sigma$-model based on a non-unimodular non-abelian $r$-matrix is not a supergravity background, but rather solves the modified supergravity described above. The first example we discuss corresponds to a non-abelian but unimodular $r$-matrix, while the second is a non-unimodular $r$-matrix. Unimodular r-matrix {#sapp:340} ------------------- The first example corresponds to an $r$-matrix considered in [@Borsato:2016ose] $$r = \eta~ \mathfrak{M}_{23}\wedge \mathfrak{P}_1 + \zeta~ \mathfrak{P}_2\wedge \mathfrak{P}_3 \ .$$ This is non-abelian e.g. $[\mathfrak{M}_{23}, \mathfrak{P}_2]= - \mathfrak{P}_3$, but since $[\mathfrak{M}_{23}, \mathfrak{P}_1]= [\mathfrak{P}_{2}, \mathfrak{P}_3]=0$ it is unimodular. In [@Borsato:2016ose] it was shown that the corresponding deformation is nevertheless equivalent to two non-commuting TsT transformations, with a non-linear coordinate redefinition in between. 
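The unimodularity criterion for this $r$-matrix can be verified directly in the matrix representation of appendix \[app:algconv\]; the sketch below (an addition to the text) checks that $[\mathfrak{M}_{23},\mathfrak{P}_2]=-\mathfrak{P}_3$ while $[\mathfrak{M}_{23},\mathfrak{P}_1]=[\mathfrak{P}_2,\mathfrak{P}_3]=0$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma matrices and generators as defined in the appendix
gam = [1j*np.kron(s3, s0), np.kron(s2, s2), -np.kron(s2, s1),
       -np.kron(s1, s0), np.kron(s2, s3)]

def T(i, j):
    # T_{ij} = [gamma_i, gamma_j]/4 for i, j <= 4 and T_{i5} = gamma_i/2
    if j == 5:
        return gam[i]/2
    return (gam[i] @ gam[j] - gam[j] @ gam[i])/4

com = lambda a, b: a @ b - b @ a

M23 = T(2, 3)
P = [T(mu, 5) - T(mu, 4) for mu in range(4)]

# the algebra is non-abelian: [M_23, P_2] = -P_3 ...
assert not np.allclose(com(M23, P[2]), 0)
assert np.allclose(com(M23, P[2]), -P[3])
# ... but the commutator of each wedged pair in r vanishes (unimodularity)
assert np.allclose(com(M23, P[1]), 0)   # [M_23, P_1] = 0
assert np.allclose(com(P[2], P[3]), 0)  # [P_2, P_3] = 0
```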
On the other hand it was discussed from the perspective of non-abelian T-duality in [@Hoare:2016wsk] where the relevant subalgebra was $\mathfrak{h}= \{ \mathfrak{M}_{23}, \mathfrak{P}_1 , \mathfrak{P}_2, \mathfrak{P}_3 \}$. The gauge freedom can be used to fix the coset representative in eq.  to $\hat{g} = e^{-x_{0 }\mathfrak{P}_{0}} z^{\mathfrak{D}}$, but there remains one residual gauge symmetry which is used to fix a Lagrange multiplier to zero. The Lagrange multipliers are parametrised by $$v_1= -\frac{x_1}{\eta} + \frac{r^2}{2 \zeta} \ , \quad v_2= \frac{\theta}{\eta}\ , \quad v_3=\frac{r}{\zeta}\ ,\quad v_4= 0\ , \quad v_5= \frac{1}{\eta}\ , \quad v_6= \frac{1}{\zeta} \ ,$$ where $v_5$ and $v_6$ correspond to the two central generators and $r$ and $\theta$ are polar coordinates on the $x_2 ,x_3$ plane. Applying the non-abelian T-duality technology one finds the dual geometry is $$\begin{aligned} \widehat{ds}^2 &=\frac{1}{z^2} \left( dz^2 - dx_0^2\right) + \widehat{e}_\pm \cdot \widehat{e}_\pm + ds^2_{S^5} \ , \\ \widehat{e}^{\,1}_+ &= \frac{dx_1 \left(\zeta ^2+z^4\right)+\eta r \left(-\zeta dr +r z^2 d\theta \right)}{z f } \ , \\ \widehat{e}^{\,2}_+& = \frac{z \left(\zeta dr+\eta r dx_1-r z^2 d \theta \right)}{f} \ , \\ \widehat{e}^{\,3}_+&= \frac{-dr \left(\eta ^2 r^2+z^4\right)+\zeta \eta r dx_1-\zeta r z^2 d\theta }{z f}\ , \end{aligned}$$ where $f= \zeta ^2+\eta ^2 r^2+z^4$, while the remaining NS fields are $$\widehat{B} = \frac{-\zeta \eta r dr \wedge d\theta + \left(\zeta ^2+z^4\right) dx_1\wedge d \theta }{\eta f } \ , \quad e^{-2(\widehat{\Phi} - \phi_0)} = \frac{f}{\zeta ^2 \eta ^2 z^4} \ .$$ The Lorentz rotation $\Lambda e_- = e_+$ is given by $$\Lambda = \frac{1}{f} \left( \begin{array}{ccc} z^4+\zeta ^2-r^2 \eta ^2 & -2 r z^2 \eta & 2 r \zeta \eta \\ 2 r z^2 \eta & z^4-\zeta ^2-r^2 \eta ^2 & - 2 z^2 \zeta \\ 2 r \zeta \eta & 2 z^2 \zeta & z^4-\zeta ^2+r^2 \eta ^2 \\ \end{array} \right) \ ,$$ with the corresponding spinor representation $$\Omega 
= \frac{1}{\sqrt{f} } \left( z^2 \mathbb{I} - r \eta \Gamma^{12} - \zeta \Gamma^{23} \right) \ .$$ This completes the IIB supergravity solution with the three-form and five-form flux $$\begin{aligned} F_3 &=\frac{4 e^{-\phi_0} }{z^5 \zeta \eta }\left( \zeta dx_0\wedge dx_1 \wedge dz - r \eta dx_0 \wedge dr \wedge dz \right) \ , \\ F_5&= (1+\star) \frac{-4 e^{-\phi_0} r }{ z \zeta \eta f} dx_0\wedge dx_1 \wedge dr \wedge dz\wedge d\theta \ , \end{aligned}$$ in agreement with the expressions following from the Yang-Baxter $\sigma$-model [@Borsato:2016ose].

Non-unimodular r-matrix {#sapp:366a}
-----------------------

The final example we consider is an $r$-matrix that can be found by infinitely boosting the Drinfel’d-Jimbo solution to the modified classical Yang-Baxter equation for $\mathfrak{su}(2,2)$ [@Hoare:2016hwh] $$\label{eq:rmatnm} r= \eta \left(\mathfrak{D} \wedge \mathfrak{P}_0 + \mathfrak{M}_{01}\wedge \mathfrak{P}_1 + \mathfrak{M}_{+2}\wedge \mathfrak{P}_2 + \mathfrak{M}_{+3}\wedge \mathfrak{P}_3 \right) \ .$$ This $r$-matrix is of jordanian type, and the corresponding deformations of the $AdS_5 \times S^5$ superstring were first studied in [@Kawaguchi:2014qwa; @Kawaguchi:2014fca]. Furthermore, the $r$-matrix is non-unimodular and the corresponding dualisation of $AdS_5$ with respect to the non-abelian subalgebra $$\mathfrak{h}= \{\mathfrak{D} , \mathfrak{P}_0 , \mathfrak{M}_{01}, \mathfrak{P}_1, \mathfrak{M}_{+2} , \mathfrak{P}_2 , \mathfrak{M}_{+3} , \mathfrak{P}_3\} \ ,$$ is afflicted with a mixed gravity/gauge anomaly (i.e. $n_a= f_{ab}{}^b \neq 0$) [@Elitzur:1994ri]. The algebra $\mathfrak{h}$ admits a single central extension, with the commutator of each pair of generators being extended by the same generator. Since all directions are dualised the coset representative is fully fixed to $\hat{g}=1$, leaving three further gauge fixings to be made on the dynamical Lagrange multipliers. 
We parametrise these as $$v_1= \frac{x_0}{\eta} \ , \quad v_2= \frac{-1+z}{\eta} \ , \quad v_3 = \frac{x_1}{\eta} \ , \quad v_5 + i v_7 = \frac{r e^{i\theta}}{\eta} \ , \quad v_4=v_6=v_8=0 \ , \quad v_9 = \frac{1}{\eta} \ ,$$ where $v_9$ corresponds to the central direction. The dual metric is given by $$\begin{aligned} &\widehat{ds}^2 = \widehat{e}^{\,i}_\pm \eta_{ij} \widehat{e}^{\,j}_\pm + \widehat{ds}^2_{S^5} \ , \quad \eta_{ij} = \textrm{diag} (1,-1,1, 1,1) \ , \\ &\widehat{e}^{\,1}_+= \frac{1}{p}(-\eta dx_0 + z dz) \ , \quad \widehat{e}^{\,2}_+= \frac{1}{p}(-z dx_0 + \eta dz) \ , \quad \widehat{e}^{\,3}_+= -\frac{z}{q}(z^2 dx_1 + r \eta dr) \ ,\\ & \widehat{e}^{\,4}_+ + i \widehat{e}^{\,5}_+ = \frac{e^{i\theta}}{q}\left( r z \eta dx_1 -z^3 dr - \frac{i q r}{z} d\theta \right) \ , \end{aligned}$$ where $p= z^2-\eta^2$ and $q= z^4+ r^2 \eta^2$. The remaining NS fields are $$\widehat{B} = \frac{z}{p \eta}dz\wedge dx_0 + \frac{r\eta}{q} dr \wedge dx_1 \ , \quad e^{-2(\widehat{\Phi}-\phi_0)} = \frac{p q z^2}{\eta^8} \ .$$ The $SO(1,4)$ Lorentz rotation has a block diagonal decomposition $\Lambda = \Lambda_1 \oplus \Lambda_2$ with $$\Lambda_1 = \frac{1}{p}\left(\begin{array}{cc} z^2+\eta^2 & 2 z \eta \\ 2 z \eta & z^2 +\eta^2 \end{array} \right) \ , \ \ \ \Lambda_2= \frac{1}{q}\left(\begin{array}{ccc} z^4 -r^2 \eta^2 & 2 r z^2 \eta {\cal C}_\theta & 2 r z^2 \eta {\cal S}_\theta \\ -2 r z^2 \eta {\cal C}_\theta & z^4 - r^2 \eta^2 {\cal C}_{2\theta} & -r^2 \eta^2 {\cal S}_{2\theta} \\ -2 r z^2 \eta {\cal S}_\theta &- r^2 \eta^2 {\cal S}_{2\theta} & z^4 + r^2 \eta^2 {\cal C}_{2\theta} \end{array}\right) \ .$$ The corresponding spinor rotation $\Omega = \Omega_1\cdot\Omega_2$ is given by (recalling the signature is such that $(\Gamma^{2})^{2}= - \mathbb{I} $ whilst the remaining $(\Gamma^{i})^{2}= \mathbb{I}$) $$\begin{aligned} \Omega_1 = \frac{1}{\sqrt{p}} \left( z \mathbb{I} + \eta \Gamma^{12} \right) \ , \quad \Omega_2 = \frac{1}{\sqrt{q}} \left(z^2 \mathbb{I} + r\eta \cos\theta \Gamma^{34} + 
r\eta \sin\theta \Gamma^{35} \right) \ . \end{aligned}$$ This gives the fluxes $$\begin{aligned} F_1&= \frac{ 4e^{-\phi_0}}{\eta^2} r^2 d\theta \ , \quad F_3 = \frac{4e^{-\phi_0} r z^4}{\eta^3 q } dx_1\wedge dr\wedge d\theta - \frac{ 4e^{-\phi_0} r^2 z}{\eta^3 p} dx_0\wedge dz \wedge d\theta \ , \\ F_5&= (1+\star) \frac{ - 4e^{-\phi_0} r z^5}{\eta^4 q p} dx_0\wedge dx_1\wedge dr \wedge dz \wedge d\theta \ . \end{aligned}$$ These fluxes do not solve their Bianchi identities, nor their equations of motion. Instead they solve the generalised supergravity equations above with the modification determined by the one-form $W$ given by the push forward of the worldsheet gauge field $A_+$ as in eq. , which in turn, via eq. , yields $$I = 4\frac{\eta}{p} dx_0 - 2 \frac{z^2 \eta }{q} dx_1 \ .$$ The expressions for the metric, $e^{\widehat{\Phi}} F$ and $I$ agree with those of the background presented in [@Orlando:2016qqu]. Recalling that the “dilaton” field now transforms under the gauge freedom $B \to B + d\Lambda$, we also find that the “dilaton” and $B$-field match up to a gauge transformation. [99]{} O. Lunin and J. M. Maldacena, “Deforming field theories with $U(1) \times U(1)$ global symmetry and their gravity duals,” JHEP [**0505**]{} (2005) 033 \[[[arXiv:hep-th/0502086](http://arxiv.org/abs/hep-th/0502086)]{}\]. N. Berkovits and J. Maldacena, “Fermionic T-Duality, Dual Superconformal Symmetry, and the Amplitude/Wilson Loop Connection,” JHEP [**0809**]{} (2008) 062 \[[[arXiv:0807.3196](http://arxiv.org/abs/0807.3196)]{}\]. N. Beisert, R. Ricci, A. A. Tseytlin and M. Wolf, “Dual Superconformal Symmetry from $AdS_5 \times S^5$ Superstring Integrability,” Phys. Rev. D [**78**]{} (2008) 126004 \[[[arXiv:0807.3228](http://arxiv.org/abs/0807.3228)]{}\]. K. Sfetsos and D. C. Thompson, “On non-abelian T-dual geometries with Ramond fluxes,” Nucl. Phys. B [**846**]{} (2011) 21 \[[[arXiv:1012.1320](http://arxiv.org/abs/1012.1320)]{}\]. Y. Lozano and C. 
Núñez, “Field theory aspects of non-Abelian T-duality and $\mathcal{N} = 2$ linear quivers,” JHEP [**1605**]{} (2016) 107 \[[[arXiv:1603.04440](http://arxiv.org/abs/1603.04440)]{}\]. B. Hoare and A. A. Tseytlin, “Homogeneous Yang-Baxter deformations as non-abelian duals of the $AdS_5$ sigma-model,” J. Phys. A [**49**]{} (2016) no.49, 494001 \[[[arXiv:1609.02550](http://arxiv.org/abs/1609.02550)]{}\]. R. Borsato and L. Wulff, “Integrable deformations of T-dual $\sigma$ models,” \[[[arXiv:1609.09834](http://arxiv.org/abs/1609.09834)]{}\]. C. Klimcik, “Yang-Baxter $\sigma$-models and dS/AdS T duality,” JHEP [**0212**]{}, 051 (2002) \[[[arXiv:hep-th/0210095](http://arxiv.org/abs/hep-th/0210095)]{}\]. C. Klimcik, “On integrability of the Yang-Baxter $\sigma$-model,” J. Math. Phys. [**50**]{}, 043508 (2009) \[[[arXiv:0802.3518](http://arxiv.org/abs/0802.3518)]{}\]. C. Klimcik, “Integrability of the bi-Yang-Baxter $\sigma$-model,” Lett. Math. Phys. [**104**]{}, 1095 (2014) \[[[arXiv:1402.2105](http://arxiv.org/abs/1402.2105)]{}\]. F. Delduc, M. Magro and B. Vicedo, “On classical $q$-deformations of integrable $\sigma$-models,” JHEP [**1311**]{} (2013) 192 \[[[arXiv:1308.3581](http://arxiv.org/abs/1308.3581)]{}\]. F. Delduc, M. Magro and B. Vicedo, “An integrable deformation of the $AdS_5 \times S^5$ superstring action,” Phys. Rev. Lett. [**112**]{}, no. 5, 051601 (2014) \[[[arXiv:1309.5850](http://arxiv.org/abs/1309.5850)]{}\]. R. R. Metsaev and A. A. Tseytlin, “Type IIB superstring action in $AdS_5 \times S^5$ background,” Nucl. Phys. B [**533**]{} (1998) 109 \[[[arXiv:hep-th/9805028](http://arxiv.org/abs/hep-th/9805028)]{}\]. N. Berkovits, M. Bershadsky, T. Hauer, S. Zhukov and B. Zwiebach, “Superstring theory on $AdS_2 \times S^2$ as a coset supermanifold,” Nucl. Phys. B [**567**]{} (2000) 61 \[[[arXiv:hep-th/9907200](http://arxiv.org/abs/hep-th/9907200)]{}\]. F. Delduc, M. Magro and B. 
Vicedo, “Derivation of the action and symmetries of the $q$-deformed $AdS_5 \times S^5$ superstring,” JHEP [**1410**]{} (2014) 132 \[[[arXiv:1406.6286](http://arxiv.org/abs/1406.6286)]{}\]. G. Arutyunov, R. Borsato and S. Frolov, “S-matrix for strings on $\eta$-deformed $AdS_5 \times S^5$,” JHEP [**1404**]{}, 002 (2014) \[[[arXiv:1312.3542](http://arxiv.org/abs/1312.3542)]{}\]. G. Arutyunov, R. Borsato and S. Frolov, “Puzzles of $\eta$-deformed $AdS_5 \times S^5$,” JHEP [**1512**]{} (2015) 049 \[[[arXiv:1507.04239](http://arxiv.org/abs/1507.04239)]{}\]. B. Hoare and A. A. Tseytlin, “On integrable deformations of superstring sigma models related to $AdS_n \times S^n$ supercosets,” Nucl. Phys. B [**897**]{} (2015) 448 \[[[arXiv:1504.07213](http://arxiv.org/abs/1504.07213)]{}\]. B. Hoare and A. A. Tseytlin, “Type IIB supergravity solution for the T-dual of the $\eta$-deformed $AdS_5 \times S^5$ superstring,” JHEP [**1510**]{} (2015) 060 \[[[arXiv:1508.01150](http://arxiv.org/abs/1508.01150)]{}\]. G. Arutyunov, S. Frolov, B. Hoare, R. Roiban and A. A. Tseytlin, “Scale invariance of the $\eta$-deformed $AdS_5 \times S^5$ superstring, T-duality and modified type II equations,” Nucl. Phys. B [**903**]{} (2016) 262 \[[[arXiv:1511.05795](http://arxiv.org/abs/1511.05795)]{}\]. B. Vicedo, “Deformed integrable $\sigma$-models, classical R-matrices and classical exchange algebra on Drinfel’d doubles,” J. Phys. A [**48**]{} (2015) no.35, 355203 \[[[arXiv:1504.06303](http://arxiv.org/abs/1504.06303)]{}\]. K. Sfetsos, K. Siampos and D. C. Thompson, “Generalised integrable $\lambda$- and $\eta$-deformations and their relation,” Nucl. Phys. B [**899**]{} (2015) 489 \[[[arXiv:1506.05784](http://arxiv.org/abs/1506.05784)]{}\]. C. Klimcik, “$\eta$ and $\lambda$ deformations as ${\cal E}$-models,” Nucl. Phys. B [**900**]{} (2015) 259 \[[[arXiv:1508.05832](http://arxiv.org/abs/1508.05832)]{}\]. C. Klimcik, “Poisson-Lie T-duals of the bi-Yang-Baxter models,” Phys. Lett. 
B [**760**]{} (2016) 345 \[[[arXiv:1606.03016](http://arxiv.org/abs/1606.03016)]{}\]. F. Delduc, S. Lacroix, M. Magro and B. Vicedo, “On $q$-deformed symmetries as Poisson-Lie symmetries and application to Yang-Baxter type models,” J. Phys. A [**49**]{} (2016) no.41, 415402 \[[[arXiv:1606.01712](http://arxiv.org/abs/1606.01712)]{}\]. K. Sfetsos, “Integrable interpolations: From exact CFTs to non-abelian T-duals,” Nucl. Phys. B [**880**]{}, 225 (2014) \[[[arXiv:1312.4560](http://arxiv.org/abs/1312.4560)]{}\]. T. J. Hollowood, J. L. Miramontes and D. M. Schmidtt, “Integrable Deformations of Strings on Symmetric Spaces,” JHEP [**1411**]{} (2014) 009 \[[[arXiv:1407.2840](http://arxiv.org/abs/1407.2840)]{}\]. T. J. Hollowood, J. L. Miramontes and D. M. Schmidtt, “An Integrable Deformation of the $AdS_5 \times S^5$ Superstring,” J. Phys. A [**47**]{} (2014) 49, 495402 \[[[arXiv:1409.1538](http://arxiv.org/abs/1409.1538)]{}\]. K. Sfetsos and D. C. Thompson, “Spacetimes for $\lambda$-deformations,” JHEP [**1412**]{}, 164 (2014) \[[[arXiv:1410.1886](http://arxiv.org/abs/1410.1886)]{}\]. S. Demulder, K. Sfetsos and D. C. Thompson, “Integrable $\lambda$-deformations: Squashing Coset CFTs and $AdS_5\times S^5$,” JHEP [**1507**]{} (2015) 019 \[[[arXiv:1504.02781](http://arxiv.org/abs/1504.02781)]{}\]. R. Borsato, A. A. Tseytlin and L. Wulff, “Supergravity background of $\lambda$-deformed model for $AdS_2 \times S^2$ supercoset,” Nucl. Phys. B [**905**]{} (2016) 264 \[[[arXiv:1601.08192](http://arxiv.org/abs/1601.08192)]{}\]. Y. Chervonyi and O. Lunin, “Supergravity background of the $\lambda$-deformed $AdS_3 \times$ S$^3$ supercoset,” Nucl. Phys. B [**910**]{} (2016) 685 \[[[arXiv:1606.00394](http://arxiv.org/abs/1606.00394)]{}\]. T. J. Hollowood, J. L. Miramontes and D. M. Schmidtt, “S-Matrices and Quantum Group Symmetry of k-Deformed Sigma Models,” J. Phys. A [**49**]{} (2016) no.46, 465201 \[[[arXiv:1506.06601](http://arxiv.org/abs/1506.06601)]{}\]. I. Kawaguchi, T. 
Matsumoto and K. Yoshida, “Jordanian deformations of the $AdS_5 \times S^5$ superstring,” JHEP [**1404**]{} (2014) 153 \[[[arXiv:1401.4855](http://arxiv.org/abs/1401.4855)]{}\]. T. Matsumoto and K. Yoshida, “Lunin-Maldacena backgrounds from the classical Yang-Baxter equation - towards the gravity/CYBE correspondence,” JHEP [**1406**]{} (2014) 135 \[[[arXiv:1404.1838](http://arxiv.org/abs/1404.1838)]{}\]. T. Matsumoto and K. Yoshida, “Schrödinger geometries arising from Yang-Baxter deformations,” JHEP [**1504**]{} (2015) 180 \[[[arXiv:1502.00740](http://arxiv.org/abs/1502.00740)]{}\]. T. Matsumoto and K. Yoshida, “Integrability of classical strings dual for noncommutative gauge theories,” JHEP [**1406**]{} (2014) 163 \[[[arXiv:1404.3657](http://arxiv.org/abs/1404.3657)]{}\]. T. Matsumoto and K. Yoshida, “Yang-Baxter $\sigma$-models based on the CYBE,” Nucl. Phys. B [**893**]{} (2015) 287 \[[[arXiv:1501.03665](http://arxiv.org/abs/1501.03665)]{}\]. S. J. van Tongeren, “On classical Yang-Baxter based deformations of the $AdS_5 \times S^5$ superstring,” JHEP [**1506**]{} (2015) 048 \[[[arXiv:1504.05516](http://arxiv.org/abs/1504.05516)]{}\]. H. Kyono and K. Yoshida, “Supercoset construction of Yang-Baxter deformed $AdS_5 \times S^5$ backgrounds,” Prog. Theor. Exp. Phys. (2016) 083B03 \[[[arXiv:1605.02519](http://arxiv.org/abs/1605.02519)]{}\]. D. Osten and S. J. van Tongeren, “abelian Yang-Baxter Deformations and TsT transformations,” \[[[arXiv:1608.08504](http://arxiv.org/abs/1608.08504)]{}\]. S. J. van Tongeren, “Yang-Baxter deformations, AdS/CFT, and twist-noncommutative gauge theory,” Nucl. Phys. B [**904**]{} (2016) 148 \[[[arXiv:1506.01023](http://arxiv.org/abs/1506.01023)]{}\]. S. J. van Tongeren, “Almost abelian twists and AdS/CFT,” \[[[arXiv:1610.05677](http://arxiv.org/abs/1610.05677)]{}\]. Y. Lozano, E. Ó Colgáin, K. Sfetsos and D. C. 
Thompson, “Non-abelian T-duality, Ramond Fields and Coset Geometries,” JHEP [**1106**]{} (2011) 106 \[[[arXiv:1104.5196](http://arxiv.org/abs/1104.5196)]{}\]. G. Itsios, C. Núñez, K. Sfetsos and D. C. Thompson, “Non-abelian T-duality and the AdS/CFT correspondence: new $\mathcal{N}=1$ backgrounds,” Nucl. Phys. B [**873**]{} (2013) 1 \[[[arXiv:1301.6755](http://arxiv.org/abs/1301.6755)]{}\]. S. F. Hassan, “T duality, space-time spinors and RR fields in curved backgrounds,” Nucl. Phys. B [**568**]{} (2000) 145 \[[[arXiv:hep-th/9907152](http://arxiv.org/abs/hep-th/9907152)]{}\]. R. Benichou, G. Policastro and J. Troost, “T-duality in Ramond-Ramond backgrounds,” Phys. Lett. B [**661**]{} (2008) 192 \[[[arXiv:0801.1785](http://arxiv.org/abs/0801.1785)]{}\]. K. Sfetsos, K. Siampos and D. C. Thompson, “Canonical pure spinor (Fermionic) T-duality,” Class. Quant. Grav. [**28**]{} (2011) 055010 \[[[arXiv:1007.5142](http://arxiv.org/abs/1007.5142)]{}\]. Ö. Kelekci, Y. Lozano, N. T. Macpherson and E. Ó. Colgáin, “Supersymmetry and non-Abelian T-duality in type II supergravity,” Class. Quant. Grav.  [**32**]{} (2015) no.3, 035014 \[[[arXiv:1409.7406](http://arxiv.org/abs/1409.7406)]{}\]. E. Alvarez, L. Alvarez-Gaume and Y. Lozano, “On non-abelian duality,” Nucl. Phys. B [**424**]{} (1994) 155 \[[[arXiv:hep-th/9403155](http://arxiv.org/abs/hep-th/9403155)]{}\]. S. Elitzur, A. Giveon, E. Rabinovici, A. Schwimmer and G. Veneziano, “Remarks on non-abelian duality,” Nucl. Phys. B [**435**]{} (1995) 147 \[[[arXiv:hep-th/9409011](http://arxiv.org/abs/hep-th/9409011)]{}\]. T. H. Buscher, “Path Integral Derivation of Quantum Duality in Nonlinear Sigma Models,” Phys. Lett. B [**201**]{} (1988) 466. L. Wulff and A. A. Tseytlin, “Kappa-symmetry of superstring sigma model and generalized 10d supergravity equations,” JHEP [**1606**]{} (2016) 174 \[[[arXiv:1605.04884](http://arxiv.org/abs/1605.04884)]{}\]. Y. Sakatani, S. Uehara and K.
Yoshida, “Generalized gravity from modified DFT,” \[[[arXiv:1611.05856](http://arxiv.org/abs/1611.05856)]{}\]. A. Hashimoto and N. Itzhaki, “Noncommutative Yang-Mills and the AdS/CFT correspondence,” Phys. Lett. B [**465**]{} (1999) 142 \[[[arXiv:hep-th/9907166](http://arxiv.org/abs/hep-th/9907166)]{}\]. J. M. Maldacena and J. G. Russo, “Large N limit of noncommutative gauge theories,” JHEP [**9909**]{} (1999) 025 \[[[arXiv:hep-th/9908134](http://arxiv.org/abs/hep-th/9908134)]{}\]. R. G. Leigh and M. J. Strassler, “Exactly marginal operators and duality in four-dimensional $\mathcal{N} = 1$ supersymmetric gauge theory,” Nucl. Phys. B [**447**]{} (1995) 95 \[[[arXiv:hep-th/9503121](http://arxiv.org/abs/hep-th/9503121)]{}\]. S. A. Frolov, R. Roiban and A. A. Tseytlin, “Gauge-string duality for superconformal deformations of $\mathcal{N}=4$ super Yang-Mills theory,” JHEP [**0507**]{}, 045 (2005) \[[[arXiv:hep-th/0503192](http://arxiv.org/abs/hep-th/0503192)]{}\]. S. Frolov, “Lax pair for strings in Lunin-Maldacena background,” JHEP [**0505**]{}, 069 (2005) \[[[arXiv:hep-th/0503201](http://arxiv.org/abs/hep-th/0503201)]{}\]. L. F. Alday, G. Arutyunov and S. Frolov, “Green-Schwarz strings in TsT-transformed backgrounds,” JHEP [**0606**]{}, 018 (2006) \[[[arXiv:hep-th/0512253](http://arxiv.org/abs/hep-th/0512253)]{}\]. R. Roiban, “On spin chains and field theories,” JHEP [**0409**]{} (2004) 023 \[[[arXiv:hep-th/0312218](http://arxiv.org/abs/hep-th/0312218)]{}\]. D. Berenstein and S. A. Cherkis, “Deformations of $\mathcal{N} = 4$ SYM and integrable spin chain models,” Nucl. Phys. B [**702**]{} (2004) 49 \[[[arXiv:hep-th/0405215](http://arxiv.org/abs/hep-th/0405215)]{}\]. N. Beisert and R. Roiban, “Beauty and the twist: The Bethe ansatz for twisted N=4 SYM,” JHEP [**0508**]{} (2005) 039 \[[[arXiv:hep-th/0505187](http://arxiv.org/abs/hep-th/0505187)]{}\]. J. Fokken, C. Sieg and M. Wilhelm, “Non-conformality of $\gamma_i$-deformed $\mathcal{N} = 4$ SYM theory,” J. Phys. 
A [**47**]{} (2014) 455401 \[[[arXiv:1308.4420](http://arxiv.org/abs/1308.4420)]{}\]. M. Spradlin, T. Takayanagi and A. Volovich, “String theory in beta deformed spacetimes,” JHEP [**0511**]{} (2005) 039 \[[[arXiv:hep-th/0509036](http://arxiv.org/abs/hep-th/0509036)]{}\]. S. A. Frolov, R. Roiban and A. A. Tseytlin, “Gauge-string duality for (non)supersymmetric deformations of $\mathcal{N} = 4$ super Yang-Mills theory,” Nucl. Phys. B [**731**]{} (2005) 1 \[[[arXiv:hep-th/0507021](http://arxiv.org/abs/hep-th/0507021)]{}\]. A. Bergman and O. J. Ganor, “Dipoles, twists and noncommutative gauge theory,” JHEP [**0010**]{} (2000) 018 \[[[arXiv:hep-th/0008030](http://arxiv.org/abs/hep-th/0008030)]{}\]. A. Bergman, K. Dasgupta, O. J. Ganor, J. L. Karczmarek and G. Rajesh, “Nonlocal field theories and their gravity duals,” Phys. Rev. D [**65**]{} (2002) 066005 \[[[arXiv:hep-th/0103090](http://arxiv.org/abs/hep-th/0103090)]{}\]. R. Borsato and L. Wulff, “Target space supergeometry of $\eta$ and $\lambda$-deformed strings,” JHEP [**1610**]{} (2016) 045 \[[[arXiv:1608.03570](http://arxiv.org/abs/1608.03570)]{}\]. G. Arutyunov and S. Frolov, “Foundations of the $AdS_5 \times S^5$ Superstring. Part I,” J. Phys. A [**42**]{} (2009) 254003 \[[[arXiv:0901.4937](http://arxiv.org/abs/0901.4937)]{}\]. T. Eguchi, P. B. Gilkey and A. J. Hanson, “Gravitation, Gauge Theories and Differential Geometry,” Phys. Rept. [**66**]{} (1980) 213. B. Hoare and S. J. van Tongeren, “On jordanian deformations of $AdS_5$ and supergravity,” J. Phys. A [**49**]{} (2016) no.43, 434006 \[[[arXiv:1605.03554](http://arxiv.org/abs/1605.03554)]{}\]. I. Kawaguchi, T. Matsumoto and K. Yoshida, “A Jordanian deformation of AdS space in type IIB supergravity,” JHEP [**1406**]{} (2014) 146 \[[[arXiv:1402.6147](http://arxiv.org/abs/1402.6147)]{}\]. D. Orlando, S. Reffert, J. i. Sakamoto and K. Yoshida, “Generalized type IIB supergravity equations and non-abelian classical $r$-matrices,” J. Phys.
A [**49**]{} (2016) no.44, 445403 \[[[arXiv:1607.00795](http://arxiv.org/abs/1607.00795)]{}\]. [^1]: A more precise field theoretic explanation of what this limit means has been proposed in [@Lozano:2016kum]. [^2]: In some cases this does not fully fix the gauge and additional conditions should be imposed on the Lagrange multipliers $ V= v_a \tilde{H}^a$; details of this are discussed in [@Lozano:2011kb]. [^3]: An explicit demonstration of the RR transformation law in the context of supersymmetry in $SU(2)$ non-Abelian T-duality can be found in [@Kelekci:2014ima]. [^4]: Care needs to be taken in the interpretation of this deformation. Away from the supersymmetric point the $\gamma_i$ deformation is not conformal due to a running coupling of a double-trace operator [@Fokken:2013aea] and indeed the gravitational dual has a tachyon [@Spradlin:2005sv]. [^5]: As with the previous example, the $B$-field obtained by the central extension dualisation procedure differs by a closed piece $\Delta B = \frac{1}{\nu_1^2+ \nu_2^2 + \nu_3^2} \left( \nu_1 d\phi_2\wedge d\phi_3 + \nu_3 d\phi_1\wedge d\phi_2+ \nu_2 d\phi_3\wedge d\phi_1 \right)$.\[foot:bdiff\]
--- abstract: 'The dependence of the Lyapunov exponent on the closeness parameter, $\epsilon$, in tangent bifurcation systems is investigated. We study and illustrate two averaging procedures for defining Lyapunov exponents in such systems. First, we develop theoretical expressions for an isolated tangency channel in which the Lyapunov exponent is defined on single channel passes. Numerical simulations were done to compare theory to measurement across a range of $\epsilon$ values. Next, as an illustration of defining the Lyapunov exponent on many channel passes, a simulation of the intermittent transition in the logistic map is described. The modified theory for the channels is explained and a simple model for the gate entrance rates is constructed. An important correction due to the discrete nature of the iterative flow is identified and incorporated in an improved model. Realistic fits to the data were made for the Lyapunov exponents from the logistic gate and from the full simulation. A number of additional corrections which could improve the treatment of the gates are identified and briefly discussed.' --- [**[Lyapunov Exponents for the Intermittent Transition to Chaos]{}**]{} and [**Walter Wilcox**]{}$^{a,b}$\ $^{a}$Department of Physics, Baylor University, Waco, TX 76798\ $^{b}$Department of Physics, University of Kentucky, Lexington, KY 40506 Chaos is the study of dynamical systems that have a sensitive dependence on initial conditions. Much attention has been paid to the two main routes to chaos: pitchfork bifurcation and tangent bifurcation. If we consider the general difference equation mapping, $$\begin{aligned} x_{n+1}=F(x_{n}),\label{recur}\end{aligned}$$ then tangent bifurcation, also called type I intermittency \[Pomeau & Manneville, 1980\], occurs when a tangency develops in iterates of $F(x_{n})$ across the $x_{n}=x_{n+1}$ reflection line. (Pitchfork bifurcations occur when iterates of $F(x_{n})$ possess perpendicular crossings of this line.)
Just before the tangency occurs (characterized by the closeness parameter, $\epsilon$, being small), the map is almost tangent to the reflection line and a long channel is formed. When the iterations enter, a long laminar-like flow is established, with nearly periodic behavior. Once the iterations leave the channel, they behave chaotically, then re-enter the channel. The result is a long region of laminar flow that is intermittently interrupted by chaotic intervals. This occurs when $\epsilon$ is near zero and tangency is about to occur, hence the two names: intermittent chaos and tangent bifurcation. Experimentally, type I intermittency has been observed in turbulent fluids \[Bergé [*et al.*]{}, 1980\], nonlinear oscillators \[Jeffries & Perez, 1982\], chemical reactions \[Pomeau [*et al.*]{}, 1981\], and Josephson junctions \[Yeh & Kao, 1983\]. An excellent introduction to the intermittency route to chaos is given in Schuster \[1995\]. In the pioneering studies \[Manneville & Pomeau, 1979\] and \[Pomeau & Manneville, 1980\], it was found that the number of iterations followed an $\epsilon^{-1/2}$ dependence and that the Lyapunov exponent varied as $\epsilon^{1/2}$ for a logistic mapping ($z=2$). In the work by \[Hirsch [*et al.*]{}, 1982\], an expression for the number of iterations spent inside the channel was developed. The equation for the third iterate, i.e. $F(F(F(x)))$ or $F^{(3)}(x)$ where $F(x)=Rx(1-x)$, was expanded in a Taylor series about one of the tangency points for $R_{c}=1+\sqrt{8}$. In the case of the logistic map, we get $$\begin{aligned} F^{(3)}(x)= x_{c}+(x-x_{c})+a_{c}(x-x_{c})^{2}+b_{c}(R_{c}-R),\label{f3}\end{aligned}$$ where $x_{c}$ is one of the three contact points.
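This alternation of laminar stretches and chaotic bursts is easy to exhibit numerically. The sketch below is illustrative only: the value of $\epsilon$, the trajectory length, and the laminarity threshold are arbitrary choices, and "laminar" iterates are flagged simply by asking whether the third iterate barely moves.

```python
import math

# Type I intermittency in the logistic map just below the period-3
# tangency at R_c = 1 + sqrt(8). All numerical choices are illustrative.
R_C = 1.0 + math.sqrt(8.0)
eps = 1.0e-6
R = R_C - eps

def F(x):
    return R * x * (1.0 - x)

x = 0.3
for _ in range(1000):          # discard an initial transient
    x = F(x)

traj = []
for _ in range(200_000):
    traj.append(x)
    x = F(x)

# Flag "laminar" iterates: the third iterate barely moves while the
# orbit is trapped in one of the tangency channels.
laminar = [abs(traj[n + 3] - traj[n]) < 1.0e-2 for n in range(len(traj) - 3)]

# Longest laminar stretch, in units of single map iterations.
best = run = 0
for flag in laminar:
    run = run + 1 if flag else 0
    best = max(best, run)
print("longest laminar stretch:", best, "iterations")
```

At this $\epsilon$ the laminar stretches last hundreds of iterations, separated by short chaotic bursts.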
After a transformation that centers and rescales the system around $x_{c}$ ($y_{n}\equiv \frac{x_{n}-x_{c}}{b_{c}}$), the recursion relation can be put into the form $$\begin{aligned} y_{n+1}=ay_{n}^{2}+y_{n}+\epsilon,\label{what}\end{aligned}$$ where $\epsilon\equiv R_{c}-R>0$ and $a\equiv a_{c}b_{c}$. The more general case can be studied as a first, second, or any iterate instead of just the third iterate, as long as a tangency develops. To derive an analytic description of the trajectory, \[Hirsch [*et al.*]{}, 1982\] switched from a difference equation to a differential equation. Thus, they considered $$\begin{aligned} \frac{dy}{dn}=ay^{2}+\epsilon.\label{diff}\end{aligned}$$ This approximation is justified as long as the number of iterations in the channel is large enough or, alternately, the step size between iterations is small compared to the channel length. This is an easy differential equation to solve. One obtains $$\begin{aligned} n(y_{in})=\frac{1}{\sqrt{a\epsilon}}\left[ \tan^{-1}\left( y_{out}\sqrt{\frac{a}{\epsilon}}\,\right) -\tan^{-1}\left( y_{in} \sqrt{\frac{a}{\epsilon}}\,\right) \right].\label{firstn}\end{aligned}$$ Here $y_{in}$ is the entrance value for the tangency channel, $y_{out}$ is the exit value, and one has that $$\begin{aligned} -y_{out} \le y_{in} \le y_{out}.\label{limits}\end{aligned}$$ \[Hirsch [*et al.*]{}, 1982\] observed that the entrance points for the logistic map, $y_{in}$ ($R_{c}\ge R$), had a probability distribution that was roughly uniform. Given this distribution, the average number of iterations to travel the length of the channel is given as $$\begin{aligned} <n>\equiv \frac{1}{2y_{out}}\int_{-y_{out}}^{y_{out}}n(y_{in})dy_{in} =\frac{1}{\sqrt{a\epsilon}}\,\tan^{-1}\left( y_{out}\sqrt{\frac{a}{\epsilon}}\,\right).\label{it}\end{aligned}$$ \[Hirsch [*et al.*]{}, 1982\] also derived a form for the average number of iterations for an arbitrary universality class.
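Eq.(\[firstn\]) can be checked directly against the discrete recursion; the parameter values in this sketch are illustrative.

```python
import math

# Iterate the z = 2 normal form y_{n+1} = a*y_n^2 + y_n + eps through the
# channel and compare the step count with the continuum arctan formula,
# Eq. (firstn). Parameter values are illustrative.
a, eps = 1.0, 1.0e-6
y_out = 1.0e-2                 # channel exit point
y_in = -y_out                  # enter at the far end of the channel

y, n_direct = y_in, 0
while y < y_out:
    y = a * y * y + y + eps
    n_direct += 1

q = math.sqrt(a / eps)
n_theory = (math.atan(y_out * q) - math.atan(y_in * q)) / math.sqrt(a * eps)
print(n_direct, round(n_theory, 1))
```

With these values the direct count and the continuum formula agree to well under a percent, since the per-step displacement is small everywhere in the channel.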
The universality class, $z$, is given by the lowest non-vanishing power of $(x-x_{c})$ in the expansion around the tangency point. For tangency to develop, $z$ must always be an even number: $$\begin{aligned} y_{n+1}=ay_{n}^{z}+y_{n}+\epsilon.\label{genz}\end{aligned}$$ This leads to the differential equation, $$\begin{aligned} \frac{dy}{dn}=ay^{z}+\epsilon.\label{genzdiff}\end{aligned}$$ and to the number of iterations, $$\begin{aligned} n(y_{in})=a^{-1/z}\epsilon^{-1+1/z}\int\limits_{y_{in} \sqrt[z]{\frac{a}{\epsilon}}}^{y_{out}\sqrt[z]{\frac{a}{\epsilon}}}\frac{d{\bar y}}{{\bar y}^{z}+1}.\label{n}\end{aligned}$$ The average number of iterations is given by $$\begin{aligned} <n>=\frac{1}{2}a^{-1/z}\epsilon^{-1+1/z}\int\limits_{-y_{out} \sqrt[z]{\frac{a}{\epsilon}}}^{y_{out}\sqrt[z]{\frac{a}{\epsilon}}}\frac{d{\bar y}}{{\bar y}^{z}+1},\label{avgn}\end{aligned}$$ when the entrance distribution is again uniform. The numerical simulations in \[Hirsch [*et al.*]{}, 1982\] agreed with predicted values quite well. There are two manners in which Lyapunov exponents may be defined in a simulation with many trajectories. One may define a procedure which measures the Lyapunov exponent on a given trajectory, for example a single channel pass, and then averages over these trajectories. Another possibility is to measure the exponent across many trajectories or channel passes, using a binning procedure to define variances. We will use both procedures here to illustrate the theory. The first procedure will be termed a [*single pass*]{} measurement, the second a [*many pass*]{} measurement. We will develop the theory for the first procedure in the next Section, which will then be illustrated in Section 3 by a simulation in an isolated tangency channel for general $z$. As an illustration of a many pass measurement, a simulation of the intermittent transition in the logistic map will be described in Section 4.
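The general-$z$ channel duration, Eq.(\[n\]), can be checked the same way as in the $z=2$ case; the sketch below does so for $z=4$, with a simple midpoint quadrature for the rescaled integral (all parameter values illustrative).

```python
import math

# Check of Eq. (n) for universality class z = 4: direct iteration of
# y_{n+1} = a*y_n^4 + y_n + eps against the rescaled integral.
# Parameter values are illustrative.
z, a, eps = 4, 1.0, 1.0e-6
y_out = (eps / a) ** (1.0 / z)
y_in = -y_out

y, n_direct = y_in, 0
while y < y_out:
    y = a * y ** z + y + eps
    n_direct += 1

lo = y_in * (a / eps) ** (1.0 / z)     # rescaled limits: here -1 and +1
hi = y_out * (a / eps) ** (1.0 / z)
m = 20_000
h = (hi - lo) / m
integral = sum(h / ((lo + (k + 0.5) * h) ** z + 1.0) for k in range(m))
n_theory = a ** (-1.0 / z) * eps ** (-1.0 + 1.0 / z) * integral
print(n_direct, round(n_theory, 1))
```

The $\epsilon^{-1+1/z}$ prefactor is visible directly: halving $\epsilon$ here multiplies both counts by very nearly $2^{3/4}$.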
The modified theory will be motivated and a simple phenomenological model of the data will then be given in Section 5. In Section 6 an improved expression for the inverse number density, $\frac{dy}{dn}$, due to the discrete nature of the iterative flow will be developed. This will improve the comparison of the model to measurement. Finally, we will summarize our findings and make suggestions for further improvements in the model in the final Section. Our analysis of the system described by Eqs.(\[genz\]) and (\[genzdiff\]) is built on the work of both \[Pomeau & Manneville, 1980\] and \[Hirsch [*et al.*]{}, 1982\]. In contrast to the situation for the average number, $<n>$, little work has been done to develop expressions for the Lyapunov exponents for the tangency channel in intermittent systems. We are interested in understanding the $\epsilon$ dependence of the Lyapunov exponent for $z=2$, finding the constant of proportionality, and generalizing the results for an arbitrary universality class, $z$. The Lyapunov exponent is a measurement which characterizes the sensitive dependence on initial conditions of chaotic systems. It is defined as the coefficient of the average exponential growth per unit time between initial and final states of a system, which in this case we will take to be a single pass through a tangency channel. It is given in the case of the one-dimensional mappings considered here by \[Scheck, 1994\] $$\begin{aligned} \lambda \equiv \lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^{n}\ln|\frac{dF(y_{i})}{dy_{i}}|.\label{lydef}\end{aligned}$$ This gives us our starting point for deriving a theory for the Lyapunov exponent for a system with an arbitrary universality class.
In that case, the function $F(y_{i})$ from (\[genz\]) is $$\begin{aligned} F(y_{i})=ay_{i}^{z} +y_{i}+\epsilon,\label{feqn}\end{aligned}$$ so $$\begin{aligned} \frac{dF(y_{i})}{dy_{i}}=1 +azy_{i}^{z-1}.\label{fdiff}\end{aligned}$$ Since we are interested in the Lyapunov exponent for the tangency channel, there are only a finite number of steps during which the iterations are confined to the channel and the appropriate value of $n$ for this trajectory is the total number of iterations in the channel, $n(y_{in})$. With this in mind, the Lyapunov exponent is modeled by $$\begin{aligned} \lambda(y_{in}) \equiv \frac{1}{n(y_{in})}\int\limits_{y_{in}}^{y_{out}}dn \ln|1 +azy^{z-1}|,\label{lyexp1}\end{aligned}$$ where we have replaced the discrete sum by an integral over $n$-space. Again, this step is justified as long as the number of iterations is large enough so that the values of the natural logs of the slope are almost continuous. From Eq.(\[genzdiff\]), we have $$\begin{aligned} dn = \frac{dy}{ay^{z}+\epsilon},\label{dn}\end{aligned}$$ so that $$\begin{aligned} \lambda(y_{in})=\frac{1}{n(y_{in})}\int\limits_{y_{in}}^{y_{out}}dy\frac{\ln|1 +azy^{z-1}|}{ay^{z}+\epsilon}.\label{lyexp2}\end{aligned}$$ This gives the Lyapunov exponent for the system starting at $y_{in}$ and ending at $y_{out}$. In the logistic map or any other system, the entrance into the tube is random. Since the starting points are randomly distributed, it is more useful to derive a formula for the average Lyapunov exponent per pass.
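Before averaging, the single-pass expression Eq.(\[lyexp2\]) can itself be tested against a literal pass through the channel. In this sketch the parameters are illustrative, and the entrance point is deliberately chosen off-center so that the logarithms do not nearly cancel.

```python
import math

# Single-pass exponent for the z = 2 channel: the discrete average of
# ln|F'(y)| = ln|1 + 2*a*y| along one pass, against the continuum integral
# of Eq. (lyexp2) evaluated by midpoint quadrature. Illustrative values.
a, eps = 1.0, 1.0e-8
y_out = 1.0e-3
y_in = -0.5 * y_out            # asymmetric entrance, partway down the channel

y, logs = y_in, []
while y < y_out:
    logs.append(math.log(abs(1.0 + 2.0 * a * y)))
    y = a * y * y + y + eps
lam_direct = sum(logs) / len(logs)

m = 200_000
h = (y_out - y_in) / m
n_cont = I = 0.0
for k in range(m):
    yk = y_in + (k + 0.5) * h
    w = h / (a * yk * yk + eps)     # dn = dy / (a*y^2 + eps)
    n_cont += w
    I += w * math.log(abs(1.0 + 2.0 * a * yk))
lam_theory = I / n_cont
print(lam_direct, lam_theory)
```

The two estimates differ only by the discreteness of the iterative flow, which is small here because the per-step motion is tiny compared to the channel length.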
Using the above formula for $\lambda(y_{in})$, we can calculate the average value of the Lyapunov exponent and obtain $$\begin{aligned} <\lambda>\equiv \int\limits_{-y_{out}}^{y_{out}}dy_{in}\lambda(y_{in})P(y_{in}),\label{avgly}\end{aligned}$$ where the probability function $P(y_{in})$ satisfies $$\begin{aligned} \int\limits_{-y_{out}}^{y_{out}}dy_{in}P(y_{in})=1.\label{prob}\end{aligned}$$ For the present, let us consider the special case of a uniform distribution, $$\begin{aligned} P(y_{in})=\frac{1}{2y_{out}}.\label{uniform}\end{aligned}$$ Using this probability distribution, we obtain $$\begin{aligned} <\lambda>=\frac{1}{2y_{out}}\int\limits_{-y_{out}}^{y_{out}} dy_{in}\frac{I(y_{in})}{n(y_{in})} ,\label{avgly2}\end{aligned}$$ where $$\begin{aligned} I(y_{in})\equiv \int\limits_{y_{in}}^{y_{out}}dy\frac{\ln|1 +azy^{z-1}|}{ay^{z}+\epsilon},\label{Idef}\end{aligned}$$ and where $n(y_{in})$ is given by Eq.(\[n\]) above. One approximation and a change of variables are necessary to make this formula more usable. One important step is to define the value for $y_{out}$ as $$\begin{aligned} y_{out}\equiv s\sqrt[z]{\frac{\epsilon}{a}},\label{scale}\end{aligned}$$ where $s$ is a positive scale factor that can be independently set in order to model a given system. Clearly, $s$ cannot be arbitrarily large. A natural requirement is that the derivative in (\[fdiff\]) be positive, making the possible “throat” of the channel end at the point where $\frac{dF}{dy}=0$. This gives that $$\begin{aligned} s_{max} \equiv z^{\frac{1}{1-z}}a^{\frac{1}{z(1-z)}}\epsilon^{-\frac{1}{z}},\label{smax}\end{aligned}$$ is the maximum value of $s$ for given $z$, $\epsilon$ and $a$. With the value of $y_{out}$ from (\[scale\]), the integral $I(y_{in})$ can be simplified with a change of variables.
Let $$\begin{aligned} y'=\frac{y}{y_{out}}.\label{vchange}\end{aligned}$$ Therefore $$\begin{aligned} I(y_{in})=sa^{-\frac{1}{z}}\epsilon^{-1+\frac{1}{z}} \int\limits_{\frac{y_{in}}{s\sqrt[z]{\frac{\epsilon}{a}}}}^{1}dy' \frac{\ln(1 +za^{\frac{1}{z}}\epsilon^{1-\frac{1}{z}}s^{z-1}y'^{z-1})}{s^{z}y'^{z}+1}, \label{I2}\end{aligned}$$ where the absolute value in the natural log is no longer necessary. The Taylor series expansion for natural log is ($|x|<1$) $$\begin{aligned} \ln (1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\ldots\, .\label{taylor}\end{aligned}$$ Using this approximation we have simply $$\begin{aligned} I(y_{in}) \approx zs^{z}\int\limits_{\frac{y_{in}}{s\sqrt[z]{\frac{\epsilon}{a}}}}^{1} dy' \frac{y'^{z-1}}{s^{z}y'^{z}+1}=\ln \left( \frac{s^{z}+1}{\frac{a}{\epsilon} y_{in}^{z}+1}\right) ,\label{I3}\end{aligned}$$ as long as $$\begin{aligned} s << s_{max}. \label{approx}\end{aligned}$$ Our simplified formula for the average Lyapunov exponent is now $$\begin{aligned} <\lambda>=\frac{1}{2s}a^{2/z}\epsilon^{1-2/z} \int\limits_{-y_{out}}^{y_{out}}\frac{dy_{in}} {\int\limits_{y_{in}\sqrt[z]{\frac{a}{\epsilon}}}^{s}\frac{d{\bar y}}{{\bar y}^{z}+1}}\ln \left( \frac{s^{z}+1} {\frac{a}{\epsilon} y_{in}^{z}+1}\right). \label{almost}\end{aligned}$$ We now make the same scale change in the $y_{in}$ integral as in the $y$ integral in (\[vchange\]): $$\begin{aligned} {\hat y} \equiv \frac{y_{in}}{y_{out}}.\label{vchange2}\end{aligned}$$ Therefore $$\begin{aligned} <\lambda>=\frac{1}{2}a^{1/z}\epsilon^{1-1/z} \int\limits_{-1}^{1}\frac{d{\hat y}}{\int\limits_{s{\hat y}}^{s}\frac{d{\bar y}}{{\bar y}^{z}+1}}\ln \left( \frac{s^{z}+1}{s^{z}{\hat y}^{z}+1}\right). \label{final}\end{aligned}$$ As one can see, for constant $s$ the average Lyapunov exponent varies as $\epsilon^{1-1/z}$ with a constant of proportionality determined by the parameters $a$ and $s$. 
In the case where $z=2$ this gives $$\begin{aligned} <\lambda>=\frac{1}{2}\sqrt{a\epsilon} \int\limits_{-1}^{1}\frac{d{\hat y}}{\tan^{-1}(s)-\tan^{-1}(s{\hat y})} \ln \left( \frac{s^{2}+1} {s^{2}{\hat y}^{2}+1}\right). \label{z=2}\end{aligned}$$ For a general probability distribution, we would have instead $$\begin{aligned} <\lambda>=a^{1/z}\epsilon^{1-1/z} \int\limits_{-1}^{1} d{\hat y}\, \frac{P({\hat y})} {\int\limits_{s{\hat y}}^{s}\frac{d{\bar y}}{{\bar y}^{z}+1}}\ln \left( \frac{s^{z}+1} {s^{z}{\hat y}^{z}+1} \right),\label{final2}\end{aligned}$$ where $$\begin{aligned} \int_{-1}^{1} d{\hat y}P({\hat y}) = 1.\label{P}\end{aligned}$$ In the case of constant scale factor $s$, we therefore see that the single pass tangency channel Lyapunov exponent behaves like $\epsilon^{1-1/z}$. The integral in Eq.(\[z=2\]) was calculated using numerical methods and compared against numerical simulations of the logistic map ($z=2$). In all cases we used a simulation consisting of 10,000 Monte Carlo runs for each data point. As can be seen in Figs. 1-3, for values of $s=0.1$, 1.0 and 10, the theoretical values agree with the simulation values for a uniform probability distribution for small enough $\epsilon$. At low $\epsilon$, the agreement is excellent, with the theoretical value straddled by the upper and lower error values of the simulation. In the $s=0.1$ simulation, the assumption that the discrete sum can be approximated by a continuous integral breaks down at large enough $\epsilon$ due to the very small number of iterations in the channel. For the $s=10$ simulation, the natural log approximation, Eq.(\[taylor\]), starts to break down and is the main cause of the divergence between theory and simulation. The least divergence between theory and simulation occurs when $s \approx 1$. The calculations become more time-consuming at larger $s$ due to the increased number of iterations necessary to pass through the channel.
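As an illustration of this comparison, Eq.(\[z=2\]) can be evaluated by quadrature and set against a small Monte Carlo over uniformly distributed entrance points. The sketch below uses illustrative values of $a$, $\epsilon$, $s$ and pass count, not those of the simulations reported in the Figures.

```python
import math, random

# Quadrature evaluation of Eq. (z=2) versus a direct Monte Carlo of the
# z = 2 normal form with uniform entrances. All values are illustrative.
a, eps, s = 1.0, 1.0e-5, 1.0
y_out = s * math.sqrt(eps / a)

m = 20_000
h = 2.0 / m
theory = 0.0
for k in range(m):
    yh = -1.0 + (k + 0.5) * h
    num = math.log((s * s + 1.0) / (s * s * yh * yh + 1.0))
    den = math.atan(s) - math.atan(s * yh)
    theory += h * num / den
theory *= 0.5 * math.sqrt(a * eps)

random.seed(1)
passes = []
for _ in range(4000):
    y = random.uniform(-y_out, y_out)
    tot, cnt = 0.0, 0
    while y < y_out:
        tot += math.log(abs(1.0 + 2.0 * a * y))
        y = a * y * y + y + eps
        cnt += 1
    if cnt:
        passes.append(tot / cnt)
mc = sum(passes) / len(passes)
print(theory, mc)
```

At this small $\epsilon$ the quadrature and Monte Carlo values agree to within a few percent, consistent with the behavior described for $s \approx 1$.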
We examined the more general expression, Eq.(\[final2\]), when $z=2$ for other probability distributions including a Gaussian and a $|y|$ distribution of normal deviates. Although these results are not illustrated here, the theoretical and simulation results were again in excellent agreement for small enough $\epsilon$. We also examined the $\epsilon$ dependence for higher universality classes. However, due to the large amount of computer time it takes to do such simulations, we have data only for one additional $z$ value. For a universality class of $z=4$, the Lyapunov dependence should be $\epsilon^{3/4}$, which is clearly confirmed in Fig. 4. The Section 2 expressions for $<\lambda>$ are displayed in a form appropriate for a simulation at a fixed value of the scale, $s$. However, the approximation employed, Eq.(\[approx\]), is simply the condition that the channel length be much smaller than the total gate size (determined by the tangency point and the point at $\frac{dF}{dy}=0$). Thus, the expression Eq.(\[final2\]) holds also for a fixed tangency channel fraction, $f$, $$\begin{aligned} f\equiv \frac{y_{out}}{(az)^{\frac{1}{1-z}}}= s \epsilon^{\frac{1}{z}}z^{\frac{1}{z-1}}a^{\frac{1}{z(z-1)}},\label{fagain}\end{aligned}$$ as long as $$\begin{aligned} f<<1. \label{f}\end{aligned}$$ The quantity $(az)^{\frac{1}{1-z}}$ is just the total model gate size. For $z=2$ the relationship between $f$ and $s$ is simply $$\begin{aligned} f = 2s\sqrt{a\epsilon}.\label{z=2f}\end{aligned}$$ When the change from $s$ to $f$ is made in Eq.(\[final2\]), the result is no longer proportional to $\epsilon^{1/2}$ but in fact goes to a constant at small values of $\epsilon$. Mathematically, this is due to the fact that the denominator, proportional to the number of iterates in the channel for a starting position ${\hat y}$, falls off like $\epsilon^{1/2}$ when ${\hat y}\approx 1$. 
Physically, the Lyapunov exponent is being dominated by the small number of iterates associated with entrances on the far side of the narrow channel. Fig. 5 shows the predicted and measured values of the Lyapunov exponent, $<\lambda>$ for the case $f=0.1$, i.e., the channel length is one-tenth the size of the gate, when the entrance probability is again uniform. As expected, and unlike the cases presented above at fixed $s$, the value of the Lyapunov exponent becomes constant at small $\epsilon$, the theoretical value remaining about $10\%$ larger than simulation. This amount of deviation is what we would expect since the term kept in the expansion of the natural log in Eq.(\[taylor\]) is of order $f$ and the first neglected term is of order $f^{2}$. Measurements at fixed $f$ will be important for the simulations done in the full logistic map to be considered in the next Section. As was pointed out earlier, there are two manners in which Lyapunov exponents can be defined in channel simulations, which we called the single pass and many pass definitions. We have already described and illustrated a single pass simulation. As an example of a many pass situation, we consider a simulation of the tangency gates in the logistic map. In doing so, we find it convenient to first describe the details and results of the numerical simulation. The theory and the model used to fit to the data will then be developed together in Section 5. As outlined in the Introduction, the equation for the third iterate of the logistic map for small $\epsilon$ may be expanded in a Taylor series about each of the tangency points. The positions of the tangency points are given to high precision in Table I. (Knowledge of any one gives the others through the basic recurrence relation.) 
Expanding to third order in $(x-x_{c})$, one has for the third iterate, $$\begin{aligned} F^{(3)}(x)= x_{c}+(x-x_{c})+a_{c}(x-x_{c})^{2}+c_{c}(x-x_{c})^{3}+b_{c}(R_{c}-R).\label{fthing}\end{aligned}$$ The values of $a_{c}$, $b_{c}$ and $c_{c}$ are given in Table I. Again introducing $y_{n}=\frac{x_{n}-x_{c}}{b_{c}}$, the difference equation for all three of the gates takes the form (note that $c_{c}b_{c}^{2}=-196$ to high accuracy for all gates), $$\begin{aligned} y_{n+1}-y_{n}=\epsilon +ay_{n}^{2}-\frac{2}{49}a^{2}y_{n}^{3},\label{ndiff}\end{aligned}$$ where the constant $a$ takes on the value $$\begin{aligned} a=69.29646455628\ldots\,\, .\end{aligned}$$ An interesting aspect of the simulation is the exclusion of certain $x$-values from the logistic map at finite $\epsilon$. We have labeled the tangency gates in increasing order of their $x_{c}$ values. Referring to the third iterate map, shown for $\epsilon=0$ in Fig. 6, it is clear by drawing a horizontal line that gate 1 is reachable only under steady-state conditions from points close to point C in the Figure. Likewise, gate 3 is reachable only from previous iterates starting on or near points A or B in the figure. (Since $x=0.5$ is a symmetry point in the mapping, the values of $F^{(3)}(x)$ at points A and B are the same.) Note in this context that the laminar flow through gates 1 and 2 is from smaller $x$-values to larger ones, whereas the flow through gate 3 is in the opposite direction. Iterates entering gate 1 from the point C will actually enter at the value $x_{L}\equiv F^{(3)}(x_{C})$; points between $x=0$ and $x=x_{L}$ will never be reached. This is after a possible transient of a single iteration. Likewise, iterates entering gate 3 from points A and B will enter at the value $x_{R}\equiv F^{(3)}(x_{A,B})$; points between $x=x_{R}$ and $x=1$ are never reached, again after a possible 1-iteration transient.
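The expansion constants can be checked numerically. The sketch below locates the middle tangency point by bisecting on the slope condition $\frac{dF^{(3)}}{dx}=1$ and then estimates $a_{c}$ and $b_{c}$ by central finite differences; the bracket and step sizes are illustrative choices.

```python
import math

# Locate the middle tangency point x_c of F^(3) at R_c = 1 + sqrt(8) and
# check the quoted constant a = a_c * b_c. Bracket and finite-difference
# step sizes are illustrative choices.
R_C = 1.0 + math.sqrt(8.0)

def F3(x, R):
    for _ in range(3):
        x = R * x * (1.0 - x)
    return x

def dF3(x, R):
    # chain rule: (F^3)'(x) = F'(F(F(x))) * F'(F(x)) * F'(x)
    d = 1.0
    for _ in range(3):
        d *= R * (1.0 - 2.0 * x)
        x = R * x * (1.0 - x)
    return d

# At the tangency F^(3)(x_c) = x_c and (F^(3))'(x_c) = 1; bisect on the
# slope condition around the middle gate.
lo, hi = 0.50, 0.52
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dF3(mid, R_C) - 1.0 < 0.0:
        lo = mid
    else:
        hi = mid
x_c = 0.5 * (lo + hi)

# a_c = (1/2) d^2 F^(3)/dx^2 and b_c = -dF^(3)/dR, both at (x_c, R_c).
h = 1.0e-5
a_c = 0.5 * (dF3(x_c + h, R_C) - dF3(x_c - h, R_C)) / (2.0 * h)
b_c = -(F3(x_c, R_C + h) - F3(x_c, R_C - h)) / (2.0 * h)
print(x_c, a_c * b_c)   # product should be close to 69.2965
```

The sign convention here takes $b_{c}$ as minus the $R$-derivative, so that $b_{c}(R_{c}-R)$ matches the expansion above.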
By drawing a horizontal line, gate 2 is seen to be reachable from 4 separate $x$-regions (excluding the points to the left of $x_{L}$ and to the right of $x_{R}$). The measurements leading to values and variances of the Lyapunov exponents were taken from a single trajectory of $1.6$ million iterations of the third-iterate logistic map at each $\epsilon^{1/2}$ value following an initial “heating” to remove the transient $x$-values. Monte Carlo error bars for the Lyapunov exponents were then measured by breaking the single trajectory into 100 bins. Runs were made both at fixed gate fraction, $f$, as well as at fixed scale factor, $s$. Fig. 7 shows a ${\rm log}_{10}$ plot of the contribution of each of the three gates when the fraction of the gate being measured is $f=0.1$, the same used in Fig. 5. Note that $f=0.1$ indicates the fraction of the model gate size, characterized by Eq. (\[what\]), not the actual gate size. The ratio of the model to actual gate length is about $1.025$. The gate Lyapunov exponents, $<\lambda >_{g}$, are normalized to the number of third-iterate hits within each gate, which at small $\epsilon$ just approaches $1/3$ of the total iterations. We notice that even at the larger $\epsilon$ values the contribution from different gates is the same within errors. Fig. 8 shows the contribution of each of the three gates to the Lyapunov exponent, $<\lambda>_{g}$, in a simulation with a fixed value of the scale factor, $s=1.0$. Eq.(\[it\]) implies that such a simulation will have a mixture of half iterations inside the gates and half outside, as indeed is observed. The error bars here are larger in a relative sense than in Fig. 7 because the gate size is shrinking like $\epsilon^{1/2}$ (see Eq.(\[scale\])), leading to smaller statistics. Also unlike Fig. 7 there are significantly different contributions from the three gates at larger $\epsilon$ values, the middle gate having an enhanced $<\lambda >_{g}$.
It is only at values of $\epsilon$ below and including $\epsilon^{1/2}=0.512\times 10^{-3}$ that a distinction between the gates can no longer be seen. However, this may simply be the result of the larger statistical fluctuations present at smaller $\epsilon$. Note that the fixed $f$ data in Fig. 7 shows an approximate $\epsilon^{1/2}$ behavior, while the fixed $s$ simulation in Fig. 8 behaves approximately as $\epsilon$ at small $\epsilon$ values. These behaviors are in contrast to the simulations in Section 3 where the fixed $f$ data went to a constant at small $\epsilon$ and the fixed $s$ data behaved like $\epsilon^{1/2}$. The $\epsilon$ behavior of the logistic map Lyapunov exponents will be commented on further in the next Section. In order to model the Lyapunov exponents for the gates, it is necessary to have an understanding of the entrance probability for the gates as a function of position in the gate. There are two ways in which a laminar flow may begin in the tangency channel. Primarily, entrance into the gate occurs as a continuous flow from just outside the gate. Alternatively, the flow can begin in a discontinuous fashion from a disjoint region of the map. These two entrance routes will be termed the [*continuous*]{} and [*discontinuous*]{} types, respectively. Fig. 9 presents a measurement of the binned discontinuous entrance rate, $n_{d}$, for the first gate of the $f=0.1$ simulation at $\epsilon^{1/2}=0.128\times 10^{-3}$. The data is divided into 19 bins with bin size of $\Delta x_{bin}=0.5624 \times 10^{-4}$, which is just the model gate width divided by 100. The first bin (which would have extended from -10 to -9 in the figure units) contains both continuous and discontinuous entrances. Since we have not attempted to separate the discontinuous from the continuous entrances in this bin and because the total rate is off scale, this first bin is not shown.
The entrance rate seems to be fairly uniform in this Figure; gate 2 and (the mirror image of) gate 3 look very similar. This approximate entrance uniformity for small $f$ will be useful in setting up a simple model of the gate contribution to the Lyapunov exponent, which will be described in the next Section. We will now develop theoretical expressions for the $\epsilon$-dependence of the Lyapunov exponents for the logistic map. For this purpose we need to develop expressions for $<n>$ and $<\lambda >$ in a many pass simulation, as opposed to the single pass considerations in Sections 2 and 3. In a many pass simulation, the number of iterations in the gate will be weighted by the entrance [*rate*]{} rather than probability. Therefore, Eq.(\[it\]), generalized to an arbitrary probability distribution, is replaced with $$\begin{aligned} <n>_{g}= \frac{1}{2}\int_{-1}^{1}d{\hat y}\,\frac{dN({\hat y})}{d{\hat y}}n(y_{in}), \label{huh}\end{aligned}$$ where $$\begin{aligned} n(y_{in})=\frac{1}{\sqrt{a\epsilon}}(\tan^{-1}(\frac{f}{2\sqrt{a\epsilon}})-\tan^{-1} (\frac{f{\hat y}}{2\sqrt{a\epsilon}})). \label{nn}\end{aligned}$$ We also are using the dimensionless variable ${\hat y}$ introduced in Eq.(\[vchange2\]) ($y_{in}={\hat y}y_{out}$). The functional form of $\frac{dN({\hat y})}{d{\hat y}}$ has yet to be specified. Note that $n_{d}$ in Fig. 9 is given by $n_{d}=\frac{dN({\hat y})}{d{\hat y}}\Delta {\hat y}_{bin}$ where in this case $\Delta {\hat y}_{bin}=0.1$. The form for the Lyapunov exponent, Eqs.(\[avgly2\]) and (\[Idef\]), must also be modified. It is again the rate rather than the probability which is relevant. 
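For a uniform entrance rate, the quadrature in Eq. (\[huh\]) can be checked against a closed form: the odd arctangent term of Eq. (\[nn\]) integrates to zero over ${\hat y}\in[-1,1]$, leaving $\tan^{-1}(f/2\sqrt{a\epsilon})/\sqrt{a\epsilon}$ per unit entrance rate. A short numerical sketch of this bookkeeping follows; the parameter values mirror the $f=0.1$ runs, while the quadrature grid is an arbitrary choice.

```python
import numpy as np

A, F, EPS = 34.0, 0.1, (0.128e-3) ** 2   # a, f and epsilon as in the f = 0.1 runs

def n_in_gate(yhat, a=A, f=F, eps=EPS):
    """Iterations per channel passage for entry point y_in = yhat * y_out, Eq. (nn)."""
    r = np.sqrt(a * eps)
    return (np.arctan(f / (2.0 * r)) - np.arctan(f * yhat / (2.0 * r))) / r

def mean_n_uniform(m=200_001):
    """<n>_g of Eq. (huh) for a unit uniform entrance rate, by trapezoidal quadrature."""
    y = np.linspace(-1.0, 1.0, m)
    v = n_in_gate(y)
    h = y[1] - y[0]
    return 0.5 * h * (v.sum() - 0.5 * (v[0] + v[-1]))
```

The same cancellation of the odd term is the reason only the combination $2N_{c}+\frac{dN_{d}}{d{\hat y}}$ appears in Eq. (\[nnn\]) below.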
In addition, for a many pass simulation the Lyapunov integrand in Eq.(\[avgly2\]) must be weighted by the ratio of the number of iterations for each passage through the channel, $n(y_{in})$, to the average total number of iterations, $<n>_{g}$, resulting in $$\begin{aligned} <\lambda>_{g}= \frac{1}{2<n>_{g}}\int_{-1}^{1} d{\hat y}\,\frac{dN({\hat y})}{d{\hat y}}I(y_{in}), \label{blaa}\end{aligned}$$ where $$\begin{aligned} I(y_{in})=\frac{2}{f}\int_{{\hat y}}^{1} dy'\,\frac{\ln (1+fy')}{y'^{2} +\frac{4a\epsilon}{f^{2}}}.\label{Iguy}\end{aligned}$$ Making the same approximation as in Eq.(\[taylor\]) above, this simplifies to $$\begin{aligned} <\lambda>_{g}= \frac{1}{2<n>_{g}}\int_{-1}^{1}d{\hat y}\,\frac{dN({\hat y})}{d{\hat y}}\ln \left( \frac{1+\frac{4a\epsilon}{f^{2}}} {{\hat y}^{2}+\frac{4a\epsilon}{f^{2}}} \right). \label{lambda}\end{aligned}$$ We use $f$ instead of $s$ in these formulas since most of the simulations in the following will use fixed $f$. As we have seen in the last Section, the Lyapunov exponents from the three gates are apparently indistinguishable at small enough $\epsilon$. A very useful simplification therefore is to ignore the distinction between the gates and model the Lyapunov exponent as if there were only two regions, the gate (or periodic) region and the outside (or chaotic) region. In addition, Fig. 9 suggests that a reasonable model of the channel region is to assume that the discontinuous entrance rate is uniform. This last simplification is only possible for small enough $f$, the entrance rate in the complete tangency channels being far from uniform. These assumptions will allow us to construct a very simple model of the $\epsilon$ dependence of the Lyapunov exponents. The choice of $f=0.1$ to separate the two regions is arbitrary. A smaller value would result in an even more uniform entrance rate than Fig. 9; however, one would also lose statistics because of the smaller gate size. 
As we will see, the gate fraction $f$ will be used to formally separate the outside and inside gate Lyapunov exponent behaviors. A new aspect of modeling the actual gates of the logistic map is the fact that the entrance to the gates can be from flow further up the channel or from a completely disjoint part of the map. These possibilities were termed the continuous and discontinuous routes in the last Section. One may show that the continuous entrances occur within a scaled distance of $\sim f/2 +2a\epsilon/f$ from the lower limit ($-1$) of the integrals in Eqs.(\[huh\]) and (\[blaa\]). In Fig. 9 this contribution would extend about halfway through the first (deleted) bin. Since these entrances always occur in a narrow range of the integrations in these equations for the range of $\epsilon$ considered, it is reasonable to model this contribution by a Dirac delta function located at the lower limit, ${\hat y}=-1$. In addition, we saw in Fig. 9 that the discontinuous part of the entrance rate was approximately uniform. Thus we will model the entrance rate with $$\begin{aligned} \frac{dN({\hat y})}{d{\hat y}}=2N_{c}\delta({\hat y}+1) + \frac{dN_{d}}{d{\hat y}}, \label{Neq}\end{aligned}$$ where $N_{c}$ and $\frac{dN_{d}}{d{\hat y}}$ are constants in ${\hat y}$. This gives a simple model of the contributions to $<n>_{g}$ and $<\lambda >_{g}$ from the logistic gates: $$\begin{aligned} <n>_{g}= \frac{1}{\sqrt{a\epsilon}}\tan^{-1}(\frac{f}{2\sqrt{a\epsilon}})\left( 2N_{c}+\frac{dN_{d}}{d{\hat y}} \right), \label{nnn}\end{aligned}$$ $$\begin{aligned} <\lambda>_{g}= \frac{1}{2<n>_{g}}\frac{dN_{d}}{d{\hat y}}\int_{-1}^{1}d{\hat y}\ln \left( \frac{1+\frac{4a\epsilon}{f^{2}}} {{\hat y}^{2}+\frac{4a\epsilon}{f^{2}}} \right). \label{lambdax}\end{aligned}$$ Notice that $N_{c}$ drops out of the expression for $<\lambda >_{g}$. 
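The remaining integral in Eq. (\[lambdax\]) also has a closed form: writing $b=2\sqrt{a\epsilon}/f$, it equals $4\left(1-b\tan^{-1}(1/b)\right)$, which tends to $4$ as $\epsilon\rightarrow 0$. The sketch below checks this against direct quadrature; the parameter values match the $f=0.1$ runs, and the grid size is arbitrary.

```python
import numpy as np

def lambda_integral(a, f, eps, m=400_001):
    """Direct quadrature of I = int_{-1}^{1} ln[(1 + 4 a eps/f^2) /
    (yhat^2 + 4 a eps/f^2)] dyhat, the integral appearing in Eq. (lambdax)."""
    b2 = 4.0 * a * eps / f**2
    y = np.linspace(-1.0, 1.0, m)
    v = np.log((1.0 + b2) / (y**2 + b2))
    h = y[1] - y[0]
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

def lambda_integral_closed(a, f, eps):
    """Closed form 4 (1 - b arctan(1/b)), with b = 2 sqrt(a eps)/f."""
    b = 2.0 * np.sqrt(a * eps) / f
    return 4.0 * (1.0 - b * np.arctan(1.0 / b))
```

Since the bracket tends to $4$ at small $\epsilon$, the fixed-$f$ gate exponent of Eq. (\[lambdax\]) inherits the $\epsilon^{1/2}$ scaling of $\frac{dN_{d}}{d{\hat y}}$ once $<n>_{g}$ saturates, as discussed below.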
In fitting the data, we also need the expression for the small $\epsilon$ limit of Eq.(\[nnn\]), which we will call $n_{T}$: $$\begin{aligned} n_{T}\equiv \frac{\pi}{2\sqrt{a\epsilon}}\left( 2N_{c}+\frac{dN_{d}}{d{\hat y}} \right). \label{nT}\end{aligned}$$ As pointed out previously, $f$ represents a separation parameter for the gate and outside regions. Inside the gates one expects from the previous results that the number of iterations associated with a given traverse of the gate will increase like $\epsilon^{-1/2}$ at small $\epsilon$, independent of the form of the entrance probability. Thus in the many pass simulations of the full logistic map, one expects any given fraction $f$ of the gates at small $\epsilon$ to eventually contain essentially all iterates. This means from Eq.(\[nT\]) that the quantity $2N_{c}+\frac{dN_{d}}{d{\hat y}}$ must scale like $\epsilon^{1/2}$ for $n_{T}$ to become constant. This behavior will be assumed for these quantities individually. In addition, we assume that for small enough gate fraction the continuous entrance rate is uniform. Given these assumptions, the parameters $N_{c}$ and $\frac{dN_{d}}{d{\hat y}}$ can be parameterized as, $$\begin{aligned} N_{c} &\equiv& \sqrt{a\epsilon}(1-f) n_{T} K_{c}, \\ \label{K1} \frac{dN_{d}}{d{\hat y}}&\equiv&\sqrt{a\epsilon}f n_{T} K_{d}, \label{K2}\end{aligned}$$ where $K_{c}$ and $K_{d}$ are constants. We can now better understand the $\epsilon$ dependencies in the Lyapunov exponents in the logistic map simulation seen in the last Section. For a many pass simulation we expect the model Lyapunov exponent from Eqs.(\[lambdax\]) and (\[K2\]) for fixed $f$ to behave like $\epsilon^{1/2}$ at small $\epsilon$. This is because $<n>_{g}$ saturates to the value $n_{T}$ while the rate in Eq.(\[K2\]) goes like $\epsilon^{1/2}$. 
In contrast, for fixed scale $s$ the exponent now acquires an extra $\epsilon^{1/2}$ factor from the gate fraction $f$ in (\[K2\]) and is expected to decrease like $\epsilon$, as was seen in the actual simulations. This is just a result of the shrinking gate size of the fixed $s$ simulation. As a result of the number flow into the gate regions as $\epsilon$ decreases, the outside region becomes sparsely visited, but with the same local density of iterates. (The shape of the outside region changes very little for small $\epsilon$.) This implies the Lyapunov exponent measured in the outside region will go to a constant for small $\epsilon$. The flows being described are all due to the result Eq.(\[n\]) for $n(y_{in})$ and can be thought of as applications of the renormalization flow arguments, without external noise, in \[J. E. Hirsch [*et. al.*]{}, 1982\] and \[ B. Hu and J. Rudnick, 1982\]. Although the emphasis here is on the gate Lyapunov exponents, one can now make a rough model of the complete logistic map exponent. Letting $<\lambda>_{g}$ represent the Lyapunov exponent expression for the fixed $f$ gate (laminar region) from Eq.(\[lambdax\]) and $<\lambda>_{o}$ be the outside (chaotic region) contribution, the expression for the Lyapunov exponent for the complete logistic map in this model is $$\begin{aligned} <\lambda >^{3rd} = \frac{n_{o}}{n_{T}}<\lambda>_{o}+\frac{n_{g}}{n_{T}}<\lambda>_{g}, \label{fullL}\end{aligned}$$ where $n_{o}+n_{g}=n_{T}$. That is, $<\lambda >^{3rd}$ is just assumed to be a sum of the exponents $<\lambda>_{g}$ and $<\lambda>_{o}$ weighted by the relative number of iterations spent in the two regions. A more fundamental description of the logistic map would independently calculate $<\lambda>_{o}$. However, from the previous arguments we expect $<\lambda>_{o}$ to simply be a constant at small $\epsilon$. It will be evaluated from a fit to the data. 
Numerically, the gate contribution in Eq.(\[fullL\]) is only about $1\%$ of the total through most of the $\epsilon$ range considered for $f=0.1$. Both terms in Eq.(\[fullL\]) go like $\epsilon^{1/2}$, but in different ways. The outside term behaves like $\epsilon^{1/2}$ because the outside number $n_{o}$ has this dependence; the gate term also behaves this way because $<\lambda>_{g}$ itself goes like $\epsilon^{1/2}$ at fixed $f$ as explained above. Note that all Lyapunov exponents in Eq.(\[fullL\]) are normalized to the number of third-iterate steps in the simulation. This is symbolized by writing $<\lambda>^{3rd}$ for the complete logistic map Lyapunov exponent. We must remember to divide this value by three to calculate the usual single-iterate value: $$\begin{aligned} <\lambda>^{1st}=\frac{1}{3}<\lambda>^{3rd}. \label{twolams}\end{aligned}$$ Fig. 10 presents the results of fitting Eq.(\[lambdax\]) to the data for the $f=0.1$ gate exponent and Fig. 11 gives the measured and model $<\lambda>^{1st}$ values for the complete logistic map. There are three parameters needed to do these fits: $K_{c}$, $K_{d}$ and $<\lambda>_{o}$. To evaluate these constants we simply fit the values of these expressions to the measured values of $<\lambda>^{1st}$ and $<\lambda>_{g}$ at a value of $\epsilon^{1/2}=0.128\times 10^{-3}$, near the middle of the exponential range in these figures, as well as the maximum number of gate iterations, $n_{T}$. Since we are averaging over the properties of all three gates, $n_{T}= \frac{1}{3}\times 1.6 \times 10^{6}$ for the simulation in Section 4. This gives $K_{c}=0.358\times 10^{-2}$, $K_{d}=0.739\times 10^{-2}$ and $<\lambda>_{o}=0.962$. Of course since Figs. 10 and 11 were used to fit the model parameters, we need an independent test of how well the model truly represents the data. For this purpose we also present Fig. 12 and Table II. Fig. 12 compares the model results against the data for a $s=1.0$ simulation. 
As explained above, this data falls like $\epsilon$. The theoretical results are satisfactory, although they seem somewhat high compared to measurement. In addition, in Table II we give the fit results for the rates $N_{c}$ and $\frac{dN_{d}}{d{\hat y}}$ compared to measurement. The measured value for $\frac{dN_{d}}{d{\hat y}}$ is actually just an average over all three gates of binned data similar to Fig. 9, and the value for $N_{c}$ is the average value in the three entrance bins minus the average of the entrances in the other bins. The results for $N_{c}$ are good but the fit values of $\frac{dN_{d}}{d{\hat y}}$ are approximately a factor of 2 larger than measurement. As we will see in the next Section, this is largely the result of an inaccurate characterization of the inverse number density, $\frac{dy}{dn}$. Although the expression for the inverse number density, Eq.(\[genzdiff\]), is symmetric (even) in $y$ for $z=2$, there are at least three sources of asymmetry in the actual gate. First, the discontinuous entrance rate $\frac{dN_{d}}{d{\hat y}}$ raises the value of the hit density on the exit sides of all three gates. Second, the term proportional to $(x-x_{c})^{3}$ in Eq.(\[fthing\]) shows that there is a small intrinsic asymmetry in the shape of the gates themselves, raising the hit density in the same sense. Most interestingly, there is also a contribution due to the finite step size of the laminar flow through the gates which contributes even for a perfectly symmetric gate. This will now be described. Remembering that a finite difference gave rise to the left hand side of Eq.(\[genzdiff\]), a more accurate differential characterization of the iterative flow is $$\begin{aligned} \frac{dy}{dn}+\frac{1}{2}\frac{d^{2}y}{dn^{2}}=ay^{2}+\epsilon.\label{diff2}\end{aligned}$$ We will use the method of successive approximants to evaluate the second derivative term. 
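The successive-approximation algebra carried out next is elementary but easy to get wrong by a sign; the sketch below verifies the expanded first-order flow term by term. The values of $a$, $\epsilon$ and the $y$ grid are arbitrary illustrative choices.

```python
import numpy as np

a, eps = 34.0, 1.0e-6            # arbitrary illustrative parameter values
y = np.linspace(-0.05, 0.05, 101)

rhs0 = a * y**2 + eps            # zeroth-order flow dy/dn|_0, Eq. (diff3)
d2y0 = 2.0 * a * y * rhs0        # chain rule: d^2y/dn^2|_0 = (d rhs0/dy) * dy/dn|_0, Eq. (diff4)
rhs1 = rhs0 - 0.5 * d2y0         # first-order solution of Eq. (diff2)

# expanded form quoted as Eq. (diff5): a y^2 + eps (1 - a y) - a^2 y^3
rhs1_expanded = a * y**2 + eps * (1.0 - a * y) - a**2 * y**3
```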
To zeroth order, $$\begin{aligned} \frac{dy}{dn}|_{0}=ay^{2}+\epsilon.\label{diff3}\end{aligned}$$ Thus the lowest order result for the second derivative is just $$\begin{aligned} \frac{d^{2}y}{dn^{2}}|_{0}=2ay(ay^{2}+\epsilon).\label{diff4}\end{aligned}$$ Inserting this back into the starting point, Eq.(\[diff2\]), we now have the improved result $$\begin{aligned} \frac{dy}{dn}|_{1}=ay^{2}+\epsilon (1-ay)-a^{2}y^{3}.\label{diff5}\end{aligned}$$ With this improvement, a better formula for the gate Lyapunov exponent is given by $$\begin{aligned} <\lambda >_{g} & = & \frac{2N_{c}}{<n>_{g}}\int_{-1}^{1} dy' \frac{y'}{y'^{2}+\frac{4a\epsilon}{f^{2}}(1-\frac{f}{2}y') -\frac{f}{2}y'^{3}} \\ \nonumber & + & \frac{dN_{d}}{d{\hat y}}\frac{1}{<n>_{g}}\int_{-1}^{1}d{\hat y}\int_{{\hat y}}^{1} dy'\frac{y'}{y'^{2}+\frac{4a\epsilon}{f^{2}}(1-\frac{f}{2}y')-\frac{f}{2}y'^{3}}. \label{newlam}\end{aligned}$$ Note that the intrinsic contribution to the inverse hit rate from Eq.(\[ndiff\]) is of the same sign but 49/2 times smaller than the term from Eq.(\[diff5\]) and so is neglected. (The intrinsic term would also slightly alter the numerator; see Eq.(\[lydef\]).) Notice that the continuous entrance term, $N_{c}$, now [*does*]{} contribute to the expression for $<\lambda>_{g}$ since the inverse number density is no longer an even function. Unfortunately, the inner integral can no longer be done analytically and a double integral survives. Because the emphasis here is on modeling the Lyapunov exponent and because of the difficulty of performing the numerical integration leading to $n_{T}$ at small values of $\epsilon$, we have not attempted to make the same correction in the expression for $<n>_{g}$. Thus, we continue to use the expression Eq.(\[nnn\]) above for $<n>_{g}$ in the gate region. When the same sort of fit is made to the simulation data as in Section 5, there is surprisingly little change in the functional forms in Figs. 
10, 11 and 12, although the inverse number rates of the two models are considerably different and the $N_{c}$ term is now contributing about $50\%$ of the total. The new fit gives $K_{c}\simeq K_{d} = 0.379\times 10^{-2}$. (The value of $<\lambda>_{o}$ does not change from the previous fit.) The major improvement occurs in the value of the discontinuous entrance density, $\frac{dN_{d}}{d{\hat y}}$ (see the Improved model columns of Table II), which is now within $5\%$ of the measured value at $\epsilon^{1/2}=0.128\times 10^{-3}$, where the fit is actually made. However, the value for $N_{c}$ has increased and is now approximately $6\%$ larger than the measured value. (Note that $2N_{c}+\frac{dN_{d}}{d{\hat y}}$ in Table II is required to have the same value in the two models because the form for $<n>_{g}$ is unchanged.) This problem should be cured when the more accurate result for the number density implied by Eq.(\[diff5\]) is used in the expression for $n(y_{in})$ in Eq.(\[n\]). Tangent bifurcation or intermittent chaos is a common occurrence in systems that exhibit chaotic behavior. In these systems the intermittent behavior can be modeled by differential and difference equations of some universality class. We found that the Lyapunov exponent for isolated gates in a single channel pass can be modeled given the universality class, the parameters of the difference equation, the scale factor $s$ or fraction $f$ of the gate size, and the closeness factor $\epsilon$. Our main theoretical result for these systems, subject to the restriction of a sufficient number of steps in the channel and the small gate approximation in Eqs.(\[approx\]) or (\[f\]), is that the average Lyapunov exponent is given by Eqs.(\[final2\]) and (\[P\]). Single pass numerical simulations were consistent with these expressions. At fixed scale factor $s$ these results gave a Lyapunov exponent proportional to $\epsilon^{1-1/z}$ for a tangency channel with general universality class, $z$. 
We also showed that a simulation at fixed gate fraction $f$ gave a result which instead became constant at small values of $\epsilon$ for $z=2$ due to a small number of entrances on the far side of the narrow channel. Simulations were also performed on the full logistic map near the intermittent transition at $R=1+\sqrt{8}$. Modified expressions for the gate number, Eq.(\[huh\]), and Lyapunov exponent, Eq.(\[lambda\]), for a many channel pass simulation were motivated. A new aspect encountered in the description of the actual tangency channels was the existence of a continuous flow contribution into the tangency gate. Two phenomenological models of the channel were constructed and examined. A very simple model was considered which was able to give a fairly realistic characterization of the various Lyapunov exponents and the continuous, $N_{c}$, and discontinuous, $\frac{dN_{d}}{d{\hat y}}$, entrance parameters. We also derived a first-order correction to the inverse hit rate due to the discrete nature of the iterative flow, which mainly improved the comparison with the measured discontinuous entrance parameter. Besides applying the finite discretization correction to the number density expression, there is considerable room for improving the present model of the logistic gates. For example, the modeling of the continuous contribution to the gates as a delta function is clearly oversimplified; no attempt has been made to resolve the shape of the continuous entrance rate in the binning procedure of Fig. 9. In addition, the approximation used for the logarithm, Eq.(\[taylor\]), can be removed at the cost of a more complicated numerical evaluation. Finally, further discretization corrections to both $<\lambda>_{g}$ and $<n>_{g}$ should result in an improved characterization of the inverse number density, leading to better comparison with the measured rates and functional behaviors at larger $\epsilon$ values. 
This work was supported in part by NSF Grants PHY-9424124 and PHY-9722073. Some of the numerical calculations were performed on the Power Challenge Array at the National Center for Supercomputing Applications.

Bergé, P., Dubois, M., Manneville, P. & Pomeau, Y. \[1980\], Intermittency in Rayleigh-Bénard Convection, [*J. Physique Lett.*]{} [**41**]{}, L341-L354.

Hirsch, J. E., Huberman, B. A. & Scalapino, D. J. \[1982\], Theory of Intermittency, [*Phys. Rev.*]{} [**A25**]{}, 519-532.

Hirsch, J. E. & Nauenberg, M. \[1982\], Intermittency in the Presence of Noise: A Renormalization Group Formulation, [*Phys. Lett.*]{} [**87A**]{}, 391-393.

Hu, B. & Rudnick, J. \[1982\], Exact Solutions to the Feigenbaum Renormalization-Group Equations for Intermittency, [*Phys. Rev. Lett.*]{} [**48**]{}, 1645-1648.

Jeffries, C. & Pérez, J. \[1982\], Observation of a Pomeau-Manneville Intermittent Route to Chaos in a Nonlinear Oscillator, [*Phys. Rev.*]{} [**A26**]{}, 2117-2122.

Manneville, P. & Pomeau, Y. \[1979\], Intermittency and the Lorenz Model, [*Phys. Lett.*]{} [**75A**]{}, 1-2.

Pomeau, Y. & Manneville, P. \[1980\], Intermittent Transition to Turbulence in Dissipative Dynamical Systems, [*Commun. Math. Phys.*]{} [**74**]{}, 189-197.

Pomeau, Y., Roux, J. C., Rossi, A., Bachelart, S. & Vidal, C. \[1981\], Intermittent Behavior in the Belousov-Zhabotinsky Reaction, [*J. Physique Lett.*]{} [**41**]{}, L271-L273.

Scheck, F. A. \[1994\] [*Mechanics*]{} (Springer, Berlin) 2nd ed., p. 371.

Schuster, H. G. \[1995\] [*Deterministic Chaos*]{} (VCH, Weinheim) 3rd ed., pp. 79-102.

Yeh, W. J. & Kao, Y. H. \[1983\], Intermittency in Josephson Junctions, [*Appl. Phys. Lett.*]{} [**42**]{}, 299-301. 
               $x_{c}$                  $a_{c}$                  $b_{c}$                   $c_{c}$
  ------------ ------------------------ ------------------------ ------------------------- ------------------------
  [gate 1]{}   $0.1599288184463\dots$   $88.91012989368\dots$    $0.7793989800616\dots$    $-322.6535182739\dots$
  [gate 2]{}   $0.5143552770620\dots$   $34.14530797001\dots$    $2.029457886780\dots$     $-47.58783903539\dots$
  [gate 3]{}   $0.9563178419736\dots$   $-310.6483669763\dots$   $-0.2230704292148\dots$   $-3938.873792041\dots$

  : Values of the tangency points, $x_{c}$, and the constants $a_{c}$, $b_{c}$ and $c_{c}$ in Eq.(39).

  ------------------------ -------------------------- -------------------------- ---------------------- ---------------------- ---------------------- ----------------------
  $\epsilon^{1/2}$          $N_{c}$ (meas.)            $\frac{dN_{d}}{d{\hat y}}$ (meas.)   $N_{c}$ (simple)   $\frac{dN_{d}}{d{\hat y}}$ (simple)   $N_{c}$ (improved)   $\frac{dN_{d}}{d{\hat y}}$ (improved)
  $0.8\times 10^{-5}$       $0.1021(6)\times 10^{2}$   $0.112(5)\times 10^{1}$    $0.101\times 10^{2}$   $0.232\times 10^{1}$   $0.107\times 10^{2}$   $0.119\times 10^{1}$
  $0.16\times 10^{-4}$      $0.2028(8)\times 10^{2}$   $0.224(6)\times 10^{1}$    $0.203\times 10^{2}$   $0.464\times 10^{1}$   $0.214\times 10^{2}$   $0.237\times 10^{1}$
  $0.32\times 10^{-4}$      $0.403(1)\times 10^{2}$    $0.466(8)\times 10^{1}$    $0.406\times 10^{2}$   $0.927\times 10^{1}$   $0.429\times 10^{2}$   $0.474\times 10^{1}$
  $0.64\times 10^{-4}$      $0.804(2)\times 10^{2}$    $0.93(2)\times 10^{1}$     $0.812\times 10^{2}$   $0.185\times 10^{2}$   $0.857\times 10^{2}$   $0.949\times 10^{1}$
  $0.128\times 10^{-3}$     $0.1607(2)\times 10^{3}$   $0.181(2)\times 10^{2}$    $0.162\times 10^{3}$   $0.371\times 10^{2}$   $0.171\times 10^{3}$   $0.190\times 10^{2}$
  $0.256\times 10^{-3}$     $0.3181(3)\times 10^{3}$   $0.368(2)\times 10^{2}$    $0.325\times 10^{3}$   $0.742\times 10^{2}$   $0.343\times 10^{3}$   $0.380\times 10^{2}$
  $0.512\times 10^{-3}$     $0.6274(4)\times 10^{3}$   $0.718(4)\times 10^{2}$    $0.649\times 10^{3}$   $0.148\times 10^{3}$   $0.686\times 10^{3}$   $0.759\times 10^{2}$
  $0.1024\times 10^{-2}$    $0.1216(1)\times 10^{4}$   $0.1411(5)\times 10^{3}$   $0.130\times 10^{4}$   $0.297\times 10^{3}$   $0.137\times 10^{4}$   $0.152\times 10^{3}$
  $0.2048\times 10^{-2}$    $0.2297(1)\times 10^{4}$   $0.2623(7)\times 10^{3}$   $0.260\times 10^{4}$   $0.594\times 10^{3}$   $0.274\times 10^{4}$   $0.304\times 10^{3}$
  $0.4096\times 10^{-2}$    $0.4135(2)\times 10^{4}$   $0.4769(9)\times 10^{3}$   $0.519\times 10^{4}$   $0.119\times 10^{4}$   $0.548\times 10^{4}$   $0.607\times 10^{3}$
  ------------------------ -------------------------- -------------------------- ---------------------- ---------------------- ---------------------- ----------------------

  : Results from two models for the continuous number contribution, $N_{c}$, and the discontinuous number density, $\frac{dN_{d}}{d{\hat y}}$, when $f=0.1$.

1. Simulation of the system described by Eq.(\[genz\]) compared to the Lyapunov exponent given by Eq.(\[z=2\]) ($z=2$; uniform entrance probability). We plot $\log_{10} <\lambda>$ against $\log_{10} \epsilon^{1/2}$; the prediction (\[z=2\]) is given by the dotted line. The Monte Carlo error bars on the calculation are extremely small and are given by the data point bars. We are using $a=34$ (the same as in \[Hirsch [*et. al.*]{}, 1982\]) and $s=0.1$, with $\epsilon^{1/2}$ ranging in value from $0.8\times 10^{-5}$ upwards by factors of 2.

2. The same as Fig. 1 but for $s=1$.

3. The same as in Fig. 1 except for $s=10$. Note that the largest $\epsilon^{1/2}$ value, present in Figs. 1 and 2, violates the bound $s<s_{max}$ of the text and has been excluded.

4. The case of $z=4$, $s=0.1$, $a=34$ and uniform entrance probability.

5. Contribution to the Lyapunov exponent, as a function of $\log_{10} \epsilon^{1/2}$, from a simulation involving 10,000 gate entrances when $f$, the gate fraction, is set to $0.1$, the entrance probability is uniform and $a=34$. The theoretical result from Eq.(\[z=2\]) is given by the dotted line.

6. The third iterate of the logistic equation, $F^{(3)}(x)$, as a function of $x$ when $\epsilon=0$. The three points where the third iterate makes tangential contact with the $45$-degree line are the tangency points. 
The points A, B and C in the map are discussed in the text.

7. Contributions of the three tangency gates to the logistic map Lyapunov exponent, $<\lambda >_{g}$, for $f=0.1$. Gate 1 data is given by the circles, gate 2 by the squares and gate 3 by the triangles. Note that the abscissa values of the gate 1 and 3 data points have been shifted to the left and right, respectively, for clarity of presentation.

8. Contributions of the three tangency gates to the logistic map Lyapunov exponent, $<\lambda >_{g}$, for $s=1.0$. The meaning of the symbols is the same as in Fig. 7.

9. The number of discontinuous entrances, $n_{d}$, for gate 1 when $f=0.1$ and $\epsilon^{1/2}=0.128\times 10^{-3}$ as a function of bin number centered about the first tangency point, $x_{c}$. Each data point corresponds to entrances in a bin of size $\Delta x_{bin}= 0.5624\times 10^{-4}$. (The bin extending from -10 to -9 is not shown; see the text.)

10. Comparison of the model results for the Lyapunov exponent for the $f=0.1$ simulation with the averaged data from Fig. 7. The model values, given by a dotted line (simple model) and a solid line (improved model), are indistinguishable. The data point at $\log_{10}(0.128\times 10^{-3})=-3.893\ldots$ is used for the fit.

11. Comparison of the model results for the Lyapunov exponent ($1st$ iterate) for the entire map with the data. The model values, given by a dotted line (simple model) and a solid line (improved model), are again indistinguishable. The data point at $\log_{10}(0.128\times 10^{-3})=-3.893\ldots$ is used for the fit.

12. Comparison of the model results for the Lyapunov exponent for the $s=1.0$ simulation with the averaged gate data from Fig. 8. The model values are given by a dotted line (simple model) and a solid line (improved model).
--- abstract: 'Monitoring the optical phase change in a fiber enables a wide range of applications where fast phase variations are induced by acoustic signals or vibrations in general. However, the quality of the estimated fiber response strongly depends on the method used to modulate the light sent to the fiber and capture the variations of the optical field. In this paper, we show that distributed optical fiber sensing systems can advantageously exploit techniques from the telecommunication domain, as those used in coherent optical transmission, to enhance their performance in detecting mechanical events, while jointly offering a simpler setup than widespread pulse-cloning or spectral-sweep based schemes with acousto-optic modulators. We periodically capture an overall fiber Jones matrix estimate thanks to a novel probing technique using two mutually orthogonal complementary (Golay) pairs of binary sequences applied simultaneously in phase and quadrature on two orthogonal polarization states. A perfect channel response estimation of the sensor array is achieved, subject to conditions detailed in the paper, thus enhancing the sensitivity and bandwidth of coherent $\phi$-OTDR systems. High sensitivity, linear response, and bandwidth coverage up to $18~\mathrm{kHz}$ are demonstrated with a sensor array composed of 10 fiber Bragg gratings (FBGs).' address: | Nokia Bell Labs Paris-Saclay, 1 route de Villejust, 91620 Nozay, FRANCE\ christian.dorize@nokia-bell-labs.com\ elie.awwad@nokia-bell-labs.com author: - Christian Dorize and Elie Awwad title: Enhancing performance of coherent OTDR systems with polarization diversity complementary codes --- [99]{} A. Masoudi and T. P. Newson, “Contributed Review: Distributed optical fibre dynamic strain sensing,” Review of Scientific Instruments **87**(1), 011501 (2016). L. Palmieri and L. Schenato, “Distributed optical fiber sensing based on Rayleigh scattering,” The Open Optics Journal **7**(1), 104–127 (2013). Y. Shi, H. 
Feng and Z. Zeng, “A long distance phase-sensitive optical time domain reflectometer with simple structure and high locating accuracy,” Sensors **15**(9), 21957–21970 (2015). G. Yang, X. Fan, S. Wang, B. Wang, Q. Liu and Z. He, “Long-Range Distributed Vibration Sensing Based on Phase Extraction From Phase-Sensitive OTDR,” IEEE Photonics Journal **8**(3), 1–12 (2016). X. Fan, G. Yang, S. Wang, Q. Liu and Z. He, “Distributed Fiber-Optic Vibration Sensing Based on Phase Extraction From Optical Reflectometry,” J. Lightw. Technol. **35**(16), 3281–3288 (2017). D. Chen, Q. Liu, X. Fan, Z. He, “Distributed fiber-optic acoustic sensor with enhanced response bandwidth and high signal-to-noise ratio,” J. Lightw. Technol. **35**(10), 2037–2043 (2017). H. F. Martins, K. Shi, B. C. Thomsen, S. M.-Lopez, M. G.-Herraez and S. J. Savory, “Real time dynamic strain monitoring of optical links using the backreflection of live PSK data,” Opt. Express **24**(19), 22303–22318 (2016). Q. Yan, M. Tian, X. Li, Q. Yang and Y. Xu, “Coherent $\phi$-OTDR based on polarization-diversity integrated coherent receiver and heterodyne detection,” in IEEE 25th Optical Fiber Sensors Conference (OFS), 1–4 (2017). K. Kikuchi, “Fundamentals of coherent optical fiber communications,” J. Lightw. Technol. **34**(1), 157–179 (2016). F. Zhu, Y. Zhang, L. Xia, X. Wu and X. Zhang, “Improved $\phi$-OTDR sensing system for high-precision dynamic strain measurement based on ultra-weak fiber Bragg grating array,” J. Lightw. Technol. **33**(23), 4775–4780 (2015). F.A.Q. Sun, W. Zhang, T. Liu, Z. Yan and D. Liu, “Wideband fully-distributed vibration sensing by using UWFBG based coherent OTDR,” in IEEE/OSA Optical Fiber Communications Conference and Exhibition (OFC), 1–3 (2017). M. Golay, “Complementary series,” in IRE Transactions on Information Theory **7**(2), 82–87 (1961). M. Nazarathy, S.A. Newton, R.P. Giffard, D.S. Moberly, F. Sischka, W.R. Trutna and S. 
Foster, “Real-time long range complementary correlation optical time domain reflectometer,” J. Lightw. Technol. **7**(1), 24–38 (1989). X. Huang, “Complementary Properties of Hadamard Matrices,” in International Conference on Communications, Circuits and Systems, 588–592 (2006). R. Posey, G. A. Johnson and S. T. Vohra, “Strain sensing based on coherent Rayleigh scattering in an optical fibre,” Electronics Letters **36**(20), 1688–1689 (2000). Introduction ============ Fiber optic sensors, being intrinsically immune to electromagnetic interference and fairly resistant in harsh environments, meet a growing interest in monitoring applications (structural health monitoring, railway surveillance, pipeline monitoring...). Distributed fiber optic sensors based on optical reflectometry make use of a variety of light scattering effects occurring in the fiber such as Raman, Brillouin, and Rayleigh backscattering to measure temperature (with any of the three effects) or mechanical variations such as strains (only with the two latter) [@Mas16]. Optical fiber sensors may also be customized or enhanced by periodically inscribing fiber Bragg gratings (FBGs) to amplify the backscattered optical field [@Mas16] resulting in a quasi-distributed system with a resolution fixed by the distance between gratings. The main characteristics of a distributed sensor are its sensitivity, spatial resolution and maximum reach. Another important feature for dynamic phenomena distributed sensing is the bandwidth of the mechanical events that the sensor is able to detect, which is closely related to the targeted sensitivity and the sensor length. *Detecting* and *quantifying* sound waves and vibrations, known as distributed acoustic sensing (DAS) or distributed vibration sensing (DVS) is critical in areas of geophysical sciences and surveillance of sensitive sites or infrastructures. Phase and coherent optical-time-domain (resp. optical-frequency-domain) reflectometry (OTDR, resp. 
OFDR) systems are usually based on an interrogator sending one or more short light pulses or frequency sweeps [@Pal13; @Shi15; @Yan16; @Fan17]. The detector consists of a simple photodiode if, for instance, two pulses at slightly different frequencies are separately launched in the sensing fiber [@Mas16]. In case single pulses are sent, an imbalanced Mach-Zehnder interferometer and a phase detector, or a balanced coherent detector that mixes the backscattered pulse with a local oscillator, are used at the receiver side to detect relative phase changes in the Rayleigh backscattered optical field [@Mas16; @Shi15; @Yan16; @Fan17]. The main limitations of these phase-OTDR systems are, first, a trade-off between the spatial resolution and the maximum reach, given that a high spatial resolution forces the use of short pulses, resulting in a low signal-to-noise ratio; and second, a trade-off between maximum reach and the covered mechanical bandwidth, the latter being equal to half of the scanning rate of the pulses. A reflectometry scheme based on the injection of several linear-frequency-modulated probe pulses was suggested in [@Che17] to relax these two trade-offs, showcasing a $9~\mathrm{kHz}$ bandwidth with a $10~\mathrm{m}$ resolution over a $24.7~\mathrm{km}$-long fiber. However, the interrogators in these schemes all rely on individual probing pulses generated by acousto-optic modulators or even more complex structures. They are also vulnerable to polarization fading effects given that the Rayleigh backscattered light is polarization dependent. A dual-polarization coherent receiver, which detects all the backscattered information by projecting the received optical field over two orthogonal polarization states, can fix this problem, as shown in recent works [@Mar16; @Yan17].
In order to further relax the reach-spatial resolution trade-off, our approach in this paper consists in continuously probing the sensor using a training sequence that modulates the optical carrier injected in the fiber, as done in [@Mar16]. In [@Mar16], random binary sequences modulate two polarization states to probe a $500~\mathrm{m}$-long sensor and detect a sinusoidal strain of $500~\mathrm{Hz}$; with random sequences, however, a perfect optical channel estimation can only be reached asymptotically, for very long sequences. Hence, we design in this work optimized probing sequences of finite length that allow the covered bandwidth to be extended. The proposed DAS scheme consists in transmitting polarization-multiplexed coded sequences designed from complementary Golay pairs, and detecting the backscattered optical signal using a polarization-diversity coherent receiver typically used in optical fiber transmission systems [@Kik16], followed by a correlation-based post-processing to extract the channel response. As is well known, Rayleigh backscattering is randomly distributed along the fiber and the distributed scatterers reflect different amounts of energy. For this reason, in order to concentrate on the performance of the proposed interrogator, the experimental part of this paper focuses on a fiber sensor with explicit and deterministic back-reflectors using periodically inserted FBGs that turn the fiber into a sensor array, as in [@Zhu15; @Sun17], with a resolution of $10~\mathrm{m}$. We show that the proposed DAS solution is capable of spatially resolving dynamic strains of up to $18~\mathrm{kHz}$ even after displacing the sensor array by $25~\mathrm{km}$ of SMF.
The paper is organized as follows: in section 2, we introduce the theory underpinning the coded sequences designed to scan the sensor array through polarization multiplexing; in section 3, we describe the experimental setup built to test the DAS system; the results are given in section 4, in static mode first to quantify the noise limits, followed by a dynamic mode analysis during which the sensor array is perturbed at two different locations by two independent vibrations.

Theory
======

Notation
--------

In the following, the $\ast$ and $\otimes$ operators denote convolution and correlation respectively; $\delta(t)$ stands for the delta function. The correlation and convolution between signals $a(n)$ and $b(n)$ are related since $a(n)\otimes b(n) = a(n)\ast b^\ast(-n)$, $b^\ast$ standing for the complex conjugate of $b$ and $n$ being a time index. $E_{tx}$ and $E_{ty}$ (resp. $E_{rx}$ and $E_{ry}$) denote the two polarization tributaries of the optical field at the transmitter side (resp. receiver side). The optical field vector generated at the transmitter side is given by: $$\overrightarrow{E}_t (n) = \left[ \begin{array}{c} A_{tx}(n)\exp(i\phi_{tx}(n)) \\ A_{ty}(n)\exp(i\phi_{ty}(n)) \\ \end{array}\right]\exp(i(2\pi\nu_0nT_S +\phi_0(n))),~~~~n=[1\ldots N]$$ where $A_{tx}$, $A_{ty}$ are the modulated amplitudes of the x- and y- polarization tributaries, $\phi_{tx}$, $\phi_{ty}$ are the modulated phases of the x- and y- polarization tributaries, $\nu_0$ is the optical carrier frequency, $\phi_0$ is the phase noise generated by the laser and $T_S$ is the symbol duration. The impulse response of a fiber section is represented by a $2\times2$ Jones matrix: $$\mathbf{H} = \left[ \begin{array}{cc} h_{xx} & h_{xy}\\ h_{yx} & h_{yy}\\ \end{array}\right]$$ where $h_{xx,xy,yx,yy}$ are complex numbers describing the relation between the polarization tributaries at the input and output of a fiber section.
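For illustration, the correlation/convolution identity above can be checked numerically. The sketch below uses NumPy (whose `correlate` routine conjugates its second argument, matching the $\otimes$ convention above); it is a standalone check, not part of the sensing processing chain, and the sequence values are arbitrary.

```python
import numpy as np

# Two short complex test sequences (arbitrary values, for illustration only).
a = np.array([1 + 1j, -1 + 0.5j, 0.0 + 0.25j])
b = np.array([0.5 - 1j, 0.0 + 1j, -1.0 + 0.0j])

# Correlation a (x) b, conjugating the second sequence (NumPy's convention).
corr = np.correlate(a, b, mode="full")

# Same result via convolution with the time-reversed conjugate: a * conj(b(-n)).
conv = np.convolve(a, np.conj(b)[::-1], mode="full")

assert np.allclose(corr, conv)
```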
The location and characterization of any mechanical excitation impacting the sensor array is extracted from the space-time table of impulse responses periodically estimated at each fiber array section.

Design of polarization-diversity coded sequences
------------------------------------------------

Our objective is twofold: achieving a perfect estimate of the Jones matrix impulse response and maximizing the number of estimates per time unit to enhance the covered mechanical bandwidth. Let us consider two real binary sequences of size $N_G$ each that form a complementary, or Golay, pair [@Gol61], such that: $$G_{a1}(n)\otimes G_{a1}(n) + G_{b1}(n)\otimes G_{b1}(n) = \delta(n) \label{Gol1}$$ Thanks to the above complementary property, probing a channel with such a sequence pair allows for a perfect impulse response estimation in case of a basic single-input-single-output channel. Practically, the transmission of the two complementary sequences is applied successively in time and the response estimation is extracted after a correlation-based post-processing at the receiver side [@Naz89]. Notice that perfect estimation requires that the channel remain stationary during the overall probing time. A natural extension of the single-input-single-output channel case to the $2\times2$ Jones matrix impulse response consists in successively probing each of the two polarization tributaries by means of the above procedure. However, this further extends the probing time, thus reducing the number of impulse response estimations per second and limiting the system bandwidth. Today’s optical transmission systems, based on coherent technology, use a polarization diversity transmitter and receiver to jointly propagate independent signals onto each of the two orthogonal polarization axes. This polarization degree of freedom is generally underused in the fiber sensing domain.
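Returning to the complementary property in Eq. \[Gol1\], it is easy to reproduce numerically. The following NumPy sketch builds a pair by the classic doubling recursion (one possible construction, used here purely for illustration); since the sequences are not normalized, the delta is scaled by $2N_G$.

```python
import numpy as np

def golay_pair(n_g):
    """Build a complementary (Golay) pair of length n_g (a power of 2)
    by the classic doubling recursion: a -> [a, b], b -> [a, -b]."""
    a, b = np.array([1.0]), np.array([1.0])
    while len(a) < n_g:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

Ga1, Gb1 = golay_pair(8)
s = np.correlate(Ga1, Ga1, "full") + np.correlate(Gb1, Gb1, "full")

# Sum of auto-correlations is a delta at zero lag (scaled by 2*N_G = 16).
expected = np.zeros(15)
expected[7] = 16.0
assert np.allclose(s, expected)
```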
To the authors’ knowledge, the sole work that considered polarization diversity at both the transmitter and the receiver side is [@Mar16]. Our purpose is to study, for complementary codes, the conditions to achieve a perfect Jones matrix estimation with a simultaneous probing of the two polarization axes, thus keeping the channel stationarity constraint the same as for a single-input-single-output channel. During a period of $N$ symbol times, we modulate the $x$ (resp. $y$) polarization of the optical signal at the transmitter side by $N_G$-long sequences $G_x(n) = G_{xI}(n)+i G_{xQ}(n)$ (resp. $G_y(n) = G_{yI}(n)+i G_{yQ}(n)$) at a given symbol rate $F_S=1/T_S$, and send zeros during the remaining $N-N_G$ slots. Hence: $$E_{tx,ty}(n+kN) = \begin{cases} ~~G_{xI,yI}(n)+i G_{xQ,yQ}(n) & \text{if}~ n~\text{mod}~N\leq N_G\\ ~~0&\text{elsewhere} \end{cases}$$ Let $E_{rx}(n)$ and $E_{ry}(n)$ be the sampled outputs of a coherent polarization diversity receiver at a rate of one sample per symbol. They are given by the convolution of the transmitted signal and the impulse response of the sensor array: $$\begin{split} E_{rx}(n) &= h_{xx}(n)\ast E_{tx}(n)+h_{xy}(n)\ast E_{ty}(n)\\ E_{ry}(n) &= h_{yx}(n)\ast E_{tx}(n)+h_{yy}(n)\ast E_{ty}(n) \end{split}$$ In the following, we only develop $E_{rx}(n)$ for the sake of simplicity, since a similar procedure can be applied to $E_{ry}(n)$.
At the receiver side, a correlation is performed between the received signal $E_{rx}(n)$ and the code sent over $E_{tx}(n)$ to extract $h^\prime_{xx}(n)$ the estimate of $h_{xx}(n)$: $$\begin{split} h^\prime_{xx}(n) &= E_{rx}(n)\otimes\left(G_{xI}(n)+i G_{xQ}(n)\right)\\ &= \left( h_{xx}\ast(G_{xI}+i G_{xQ})+h_{xy}\ast(G_{yI}+i G_{yQ})\right) \otimes\left(G_{xI}+i G_{xQ}\right)\\ &= h_{xx}\ast(G_{xI}+i G_{xQ})\otimes(G_{xI}+i G_{xQ})+h_{xy}\ast(G_{yI}+i G_{yQ})\otimes(G_{xI}+i G_{xQ})\\ &= h_{xx}\ast(G_{xI}\otimes G_{xI}+ G_{xQ}\otimes G_{xQ} + i(G_{xQ}\otimes G_{xI}-G_{xI}\otimes G_{xQ})) \\ &+ h_{xy}\ast(G_{yI}\otimes G_{xI}+G_{yQ}\otimes G_{xQ} + i(G_{yQ}\otimes G_{xI}-G_{yI}\otimes G_{xQ})) \\ &= h_{xx}(n)\ast(g_{0x}(n)+ig_{1x}(n))+h_{xy}(n)\ast(g_{2x}(n)+ig_{3x}(n)) \end{split} \label{hxx}$$ where we partially dropped the $n$ index for clarity and define the following sequences: $$\begin{split} g_{0x}(n) &= G_{xI}(n)\otimes G_{xI}(n) + G_{xQ}(n)\otimes G_{xQ}(n)\\ g_{1x}(n) &= G_{xQ}(n)\otimes G_{xI}(n) - G_{xI}(n)\otimes G_{xQ}(n)\\ g_{2x}(n) &= G_{yI}(n)\otimes G_{xI}(n) + G_{yQ}(n)\otimes G_{xQ}(n)\\ g_{3x}(n) &= G_{yQ}(n)\otimes G_{xI}(n) - G_{yI}(n)\otimes G_{xQ}(n)\\ \end{split}$$ Hence, the conditions for perfect estimation of $h_{xx}(n)$, i.e. 
$E\left[ h^\prime_{xx}(n)\right] = h_{xx}(n)$ are: $$g_{0x}(n) = \delta(n),~~g_{1x}(n) =g_{2x}(n) =g_{3x}(n) = 0 \label{eq:cd1}$$ Similarly, $E_{rx}(n)$ is correlated with the code sent over $E_{ty}(n)$ to extract $h^\prime_{xy}(n)$: $$\begin{split} h^\prime_{xy}(n) &= E_{rx}(n)\otimes\left(G_{yI}(n)+i G_{yQ}(n)\right)\\ &= h_{xx}\ast(G_{xI}\otimes G_{yI}+ G_{xQ}\otimes G_{yQ} + i(G_{xQ}\otimes G_{yI}-G_{xI}\otimes G_{yQ})) \\ &+ h_{xy}\ast(G_{yI}\otimes G_{yI}+G_{yQ}\otimes G_{yQ} + i(G_{yQ}\otimes G_{yI}-G_{yI}\otimes G_{yQ})) \\ &= h_{xx}(n)\ast(g_{2y}(n)+ig_{3y}(n))+h_{xy}(n)\ast(g_{0y}(n)+ig_{1y}(n)) \end{split} \label{hxy}$$ Again, the conditions for perfect estimation of $h_{xy}(n)$ come down to: $$g_{0y}(n) = \delta(n),~~g_{1y}(n) =g_{2y}(n) =g_{3y}(n) = 0 \label{eq:cd2}$$ Developing the correlation equations with $E_{ry}$ instead of $E_{rx}$ to estimate $h_{yx}(n)$ and $h_{yy}(n)$ yields the same conditions as those in Eqs. \[eq:cd1\] and \[eq:cd2\]. To build polarization-multiplexed training sequences satisfying these conditions, let us consider two mutually orthogonal complementary pairs of Golay sequences $\lbrace G_{a1},G_{b1}\rbrace$ and $\lbrace G_{a2},G_{b2}\rbrace$: $$\begin{split} G_{a1}(n)\otimes G_{a2}(n) + G_{b1}(n)\otimes G_{b2}(n) &= 0\\ G_{a1}(n)\otimes G_{b1}(n) + G_{a2}(n)\otimes G_{b2}(n) &= 0\\ \end{split} \label{Had1}$$ The proof of existence of mutually orthogonal pairs of complementary sequences can be found in [@Hua06]. One basic example set of sequences of size $N_G=4$ satisfying these properties is: $G^4_{a1}=\left[1,-1,-1,-1\right] $, $G^4_{b1}=\left[-1,1,-1,-1\right] $, $G^4_{a2}=\left[-1,-1,1,-1\right] $, $G^4_{b2}=\left[1,1,1,-1\right] $.
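The properties of this quoted $N_G=4$ example can be verified directly. The following NumPy sketch (illustrative only) checks the complementarity of each pair and both mutual-orthogonality conditions at every lag.

```python
import numpy as np

xc = lambda a, b: np.correlate(a, b, mode="full")  # correlation a (x) b

# The example set of size N_G = 4 given in the text.
Ga1 = np.array([ 1., -1., -1., -1.]); Gb1 = np.array([-1.,  1., -1., -1.])
Ga2 = np.array([-1., -1.,  1., -1.]); Gb2 = np.array([ 1.,  1.,  1., -1.])

# Each pair is complementary: auto-correlations sum to 2*N_G at zero lag only.
for a, b in [(Ga1, Gb1), (Ga2, Gb2)]:
    s = xc(a, a) + xc(b, b)
    assert s[3] == 8 and np.all(np.delete(s, 3) == 0)

# The pairs are mutually orthogonal: both cross conditions vanish at all lags.
assert np.all(xc(Ga1, Ga2) + xc(Gb1, Gb2) == 0)
assert np.all(xc(Ga1, Gb1) + xc(Ga2, Gb2) == 0)
```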
Larger sequences of length $N_G=2^{p+2},p\geq1$ are derived recursively: $$\begin{split} G^{N_G}_{a1} &= \left[ G^{N_G/2}_{a1}, G^{N_G/2}_{b1} \right] \\ G^{N_G}_{b1} &= \left[ G^{N_G/2}_{a1},-G^{N_G/2}_{b1} \right] \\ G^{N_G}_{a2} &= \left[ G^{N_G/2}_{a2}, G^{N_G/2}_{b2} \right] \\ G^{N_G}_{b2} &= \left[ G^{N_G/2}_{a2},-G^{N_G/2}_{b2} \right] \end{split} \label{Gol2}$$ We now study the feasibility of polarization-multiplexed transmission of training sequences jointly satisfying properties \[Gol1\] and \[Had1\] to achieve perfect impulse response estimation of the Jones matrix, then we define the mapping of these sequences over binary modulation formats. First, we modulate one polarization channel and set the other polarization to zero. To measure $h_{xx}(n)$ and $h_{yx}(n)$, the two sequences of a single Golay pair $\lbrace G_{a1},G_{b1}\rbrace$ are transmitted successively through a binary phase shift keying (BPSK) modulation (one coded bit per symbol $\lbrace -1,1\rbrace$) on $G_{xI}(n)$ while $G_{xQ}(n)=G_{yI}(n)=G_{yQ}(n)=0$, resulting in $g_{1x}(n)=g_{2x}(n)=g_{3x}(n)=0$ and $g_{0x}(n)=G_{xI}(n)\otimes G_{xI}(n)$. The transmitted signal can be expressed as: $$E_{tx}(n+kN) = \begin{cases} ~~G_{a1}(n) & 0\leq n<N_G\\ ~~0 & N_G\leq n <N_G+N_{sep}\\ ~~G_{b1}(n-N_G-N_{sep}) & N_G+N_{sep}\leq n < 2N_G+N_{sep}\\ ~~0 & 2N_G+N_{sep}\leq n < N \end{cases}$$ The code is defined as the two complementary sequences sent successively with a guard interval of length $N_{sep}$ following each sequence, as shown in the upper part of Fig. \[fig:GolayBPSK\](a). The code periodicity is then $N = 2(N_G+N_{sep})$ symbols. The upper part of Fig. \[fig:GolayBPSK\](b) shows the periodic auto-correlation of this code, highlighting a zero-auto-correlation zone in the range $-(N_G/2+N_{sep})<n<(N_G/2+N_{sep})$.
This finite zero-correlation zone translates into a constraint on the impulse response of the sensor array: to achieve perfect estimation of $h_{xx}$ and $h_{xy}$, the sensor array channel response must spread over a time $T_{IR}<(N_G/2+N_{sep})T_S$. Moreover, $N_{sep}$ can be set to 0, yielding $N=2N_G$ and consequently maximizing the duty cycle. Hence, the sensor array can be continuously interrogated with a periodic code made of two complementary sequences sent successively such that $T_{code}=NT_S>4T_{IR}$.

![(a) PDM-BPSK sequences. (b) Auto- and cross- correlations with PDM-BPSK.[]{data-label="fig:GolayBPSK"}](fig1a){width="220pt" height="170pt"}
![](fig1b){width="220pt" height="170pt"}

We now extend the single polarization case to dual polarization states, applying a BPSK modulation on each of the two orthogonal polarization states (Polarization Division Multiplexing or PDM). A Golay pair $\lbrace G_{a1},G_{b1}\rbrace$ is applied to $G_{xI}(n)$ and a mutually orthogonal pair $\lbrace G_{a2},G_{b2}\rbrace$ is simultaneously applied to $G_{yI}(n)$, as shown in the lower part of Fig. \[fig:GolayBPSK\](a). The auto-correlation for $G_{yI}(n)$ has the same properties as for $G_{xI}(n)$. The lower part of Fig. \[fig:GolayBPSK\](b) shows the cross-correlation $g_{2x}(n)$ between $G_{yI}(n)$ and $G_{xI}(n)$, expected to be null over a window of $2N_G$ samples. Therefore, a perfect estimation of the Jones matrix of the sensor array is possible using polarization-coded BPSK sequences under the same conditions as in the single polarization case.

![(a) PDM-QPSK sequences. (b) Auto- and cross- correlations with PDM-QPSK.[]{data-label="fig:GolayQPSK"}](fig2a){width="220pt" height="180pt"}
![](fig2b){width="220pt" height="180pt"}

Minimizing the probing time $T_{code}$ is desirable to increase the number of channel impulse response measurements per second, hence expanding the covered bandwidth. Instead of temporally multiplexing the two sequences from the complementary pair to probe the channel, we may shorten this probing time by half if we modulate the two complementary sequences in phase and quadrature over each polarization tributary through a quadrature phase shift keying (QPSK) modulation. A QPSK constellation consists of the following four complex numbers of unit energy: $\sqrt{2}/2\lbrace 1+i,-1+i,1-i,-1-i\rbrace$. Keeping $N_{sep}=0$, we set $E_{tx}(n)=G_{a1}(n)+iG_{b1}(n)$ and $E_{ty}(n)=G_{a2}(n)+iG_{b2}(n)$ to create the PDM-QPSK coded sequences with $N=N_G$, as shown in Fig. \[fig:GolayQPSK\](a). Recalling the properties in Eqs. \[Gol1\] and \[Had1\], we get $g_{0x}(n)=g_{0y}(n)=\delta(n)$ and $g_{2x}(n)=g_{2y}(n)=0$. Thus, Eqs. \[hxx\] and \[hxy\] come down to: $$\begin{split} h^\prime_{xx}(n) &= h_{xx}(n) +i(h_{xx}(n)\ast g_1(n)+h_{xy}(n)\ast g_3(n))\\ h^\prime_{xy}(n) &= h_{xy}(n) -i(h_{xy}(n)\ast g_1(n)+h_{xx}(n)\ast g_3(-n)) \end{split}$$ where $g_1(n)=G_{b1}(n)\otimes G_{a1}(n)-G_{a1}(n)\otimes G_{b1}(n)$ and $g_3(n)=G_{b2}(n)\otimes G_{a1}(n)-G_{a1}(n)\otimes G_{b2}(n)$. The auto- and cross- correlation terms for this modulation scheme in Fig. \[fig:GolayQPSK\](b) show that the conditions for perfect estimation of $h_{xx}(n)$ and $h_{xy}(n)$ are not fulfilled, since $g_{1x}(n)$ and $g_{3x}(n)$ are not null. However, it is noteworthy that $g_{1x}(n)$ equals zero for every second index and $g_{3x}(n)$ equals zero for every fourth index.
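The perfect-estimation claim for the PDM-BPSK scheme above can be reproduced in a few lines. The sketch below is a simplified, noiseless illustration (no laser phase noise, no receiver noise, and a generic random FIR channel rather than an FBG array): a 2$\times$2 Jones channel is probed simultaneously on both polarizations with the $N_G=4$ example set, and every tap is recovered exactly by summing the correlations of the two complementary halves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mutually orthogonal complementary pairs of size N_G = 4 (from the text).
Ga1 = np.array([ 1., -1., -1., -1.]); Gb1 = np.array([-1.,  1., -1., -1.])
Ga2 = np.array([-1., -1.,  1., -1.]); Gb2 = np.array([ 1.,  1.,  1., -1.])
N_G = 4

# A random complex 2x2 FIR Jones channel with L taps (the unknown to estimate).
L = 6
H = rng.normal(size=(2, 2, L)) + 1j * rng.normal(size=(2, 2, L))

def probe(gx, gy):
    """Noiseless received x/y fields for one burst: r = H * [gx, gy]^T."""
    rx = np.convolve(H[0, 0], gx) + np.convolve(H[0, 1], gy)
    ry = np.convolve(H[1, 0], gx) + np.convolve(H[1, 1], gy)
    return rx, ry

# Burst 1 carries (Ga1 on x, Ga2 on y); burst 2 carries (Gb1, Gb2).
r1x, r1y = probe(Ga1, Ga2)
r2x, r2y = probe(Gb1, Gb2)

def estimate(r1, r2, a, b):
    """Sum correlations of the two complementary halves, extract the L taps."""
    s = np.correlate(r1, a, "full") + np.correlate(r2, b, "full")
    return s[N_G - 1:N_G - 1 + L] / (2 * N_G)

H_est = np.array([[estimate(r1x, r2x, Ga1, Gb1), estimate(r1x, r2x, Ga2, Gb2)],
                  [estimate(r1y, r2y, Ga1, Gb1), estimate(r1y, r2y, Ga2, Gb2)]])

# Cross terms cancel by mutual orthogonality; recovery is exact.
assert np.allclose(H_est, H)
```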
Hence, if we consider a standard equally-spaced-FBG sensor array, a perfect channel estimation is achieved subject to the following condition: the symbol interval $T_S$ is set to one fourth of the dual path delay between the reflectors, which yields $F_S=1/T_S = 4p(\frac{c_f}{2d_s})$ where $d_s$ is the distance between two consecutive FBGs, $p\in\mathbb{N}^\ast$ and $c_f=c/n_g$, $n_g$ being the group refractive index of the fiber and $c$ the velocity of light. Thus, combining polarization multiplexing and the suggested QPSK coding leads, in this specific case, to a probing period reduced by a factor of four compared to a standard use of complementary sequences, which enhances the sensitivity and/or extends the bandwidth of the measurement system. In this case, the new constraint on the channel response for perfect estimation is $T_{code}>T_{IR}$.

Optical phase extraction from Jones matrix
------------------------------------------

Even though we estimate the full Jones matrix of each fiber segment, we focus in this work only on the optical phase $\phi$, which can be computed as half the phase of the determinant of the dual-pass Jones matrix of each segment at the subsequent FBG reflector: $$\phi = 0.5\angle(h^\prime_{xx}h^\prime_{yy}-h^\prime_{xy}h^\prime_{yx})$$ The phase is periodically estimated to capture its evolution at each sensor and at consecutive times, achieving a spatio-temporal map of the mechanical/acoustic events surrounding the sensor array, with a spatial resolution of $d_s$ and an estimate computed every $T_{code}$ seconds. Since we are interested in the phase evolution in each fiber segment, the differential phase is computed, with the first reflected phase selected as a reference.
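The extraction formula can be illustrated on a toy Jones matrix (not measured data): a common optical phase multiplying a real polarization rotation is doubled in the determinant, so the factor $0.5$ recovers it exactly.

```python
import numpy as np

def segment_phase(J):
    """Optical phase from a 2x2 dual-pass Jones matrix: half the phase
    of its determinant (insensitive to the polarization rotation part)."""
    det = J[0, 0] * J[1, 1] - J[0, 1] * J[1, 0]
    return 0.5 * np.angle(det)

# Toy check: phase phi on top of an arbitrary rotation by theta.
phi, theta = 0.7, 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # det(R) = 1
J = np.exp(1j * phi) * R                          # det(J) = exp(2j*phi)

assert np.isclose(segment_phase(J), phi)
```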
Experimental setup ================== The experimental test-bed consists of a coherent transmitter and receiver - similar to the ones used in long-haul optical communication systems - forming the interrogator, and connected to the sensor array through an optical circulator as shown in Fig. \[fig:ExpSetup\]. The light from a $\mathrm{RIO}^{\mathrm{TM}}$ laser with a linewidth of $600~\textrm{Hz}$ emitting a power of $10~\mathrm{dBm}$ at $\lambda_0=1549.1~\mathrm{nm}$ is split into two to be used as a carrier at the transmitter and a local oscillator at the receiver (self-homodyne configuration). The sensor array consists of 10 FBGs with a reflectivity of $10^{-3}$ separated by $10~\mathrm{m}$ of fiber. The dual path optical delay between two $d_s$-spaced FBG reflectors is $\tau_s = 2n_gd_s/c$. The symbol duration $T_S$ has to be selected to fulfil $T_S=\tau_s/K$ where $K\geq1$ is an integer. For $d_s=10~\mathrm{m}$, the symbol rate $1/T_S$ has to be chosen as a multiple of $40~\textrm{MHz}$. At the transmitter side, the carrier is modulated using a dual-polarization I/Q Mach-Zehnder modulator. Four RF signals accounting for the in-phase and quadrature components of each polarization are generated at various symbol rates (multiples of $40~\textrm{MHz}$) and amplified before reaching the modulator. The probing sequences are continuously generated without any guard band. The modulated optical signal is then injected in the sensor array through a circulator. ![Experimental Setup (PEA: piezoelectric actuator).[]{data-label="fig:ExpSetup"}](fig3){width="400pt"} The reflected light from the FBGs goes through a circulator towards a dual-polarization coherent mixer used to detect the in-phase and in-quadrature components over two polarization states. 
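The quoted delay and symbol-rate figures can be reproduced with a short calculation, assuming a fiber group index of about $1.5$ (a value assumed here for illustration; the text does not state it numerically):

```python
c = 3.0e8      # speed of light in vacuum (m/s), approximate
n_g = 1.5      # assumed fiber group index (not given in the text)
c_f = c / n_g  # light velocity in the fiber
d_s = 10.0     # FBG spacing (m)

tau_s = 2 * n_g * d_s / c     # dual-path delay between reflectors, ~0.1 us
base_rate = c_f / (2 * d_s)   # 1 / tau_s = 10 MHz
F_S = 4 * base_rate           # QPSK condition F_S = 4p(c_f/2d_s) with p = 1

assert abs(tau_s - 0.1e-6) < 1e-12  # ~0.1 us round-trip delay
assert F_S == 40e6                  # symbol rate is a multiple of 40 MHz
```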
The different interfering optical signals are detected by balanced photodiodes of $1.6~\textrm{GHz}$ bandwidth and the four RF signals $I_X$,$Q_X$,$I_Y$,$Q_Y$ are sampled at $500~\textrm{MSa/s}$ by an oscilloscope during a measurement window $T_{acq}$. The sensor array is inserted in a mechanically-insulated box to isolate it from the lab environment (acoustic and mechanical vibrations from lab occupants, fans of various instruments, ...). To accurately quantify the performance of our sensing system, two independent mechanical stimuli are applied at two different locations: one between the second and third FBG (approximately at $25~\mathrm{m}$ from the circulator) and another between the ninth and tenth FBG (around $95~\mathrm{m}$ from the circulator). At these locations, $1.5~\textrm{m}$ of fiber is coiled around a cylindrical piezoelectric actuator having an outer diameter of $5~\mathrm{cm}$. The actuators are excited by frequency generators with sinusoidal tones of peak-to-peak amplitudes $V_{pp1,pp2}$ and frequencies $f_{e1,e2}$.

Experimental results
====================

Static regime
-------------

As a first step, we check the phase stability as a function of time at each FBG reflector without applying any mechanical excitation. The sensing array, placed in its insulated box, is continuously probed at a symbol rate of $160~\mathrm{MSymbol/s}$. Figure \[fig:Static\](a) shows the signal intensity captured at the receiver side after a correlation process for one transmitted code. The peaks correspond to the reflections on each of the 10 FBGs separated by $10~\mathrm{m}$ of fiber (corresponding to a $0.1~\mathrm{\mu s}$ round trip delay). The optical phase of the signal reflected at each FBG is extracted from the Jones matrix at each peak location and the procedure is periodically repeated for each received code.

![(a) Measured intensities at the receiver side. (b) Estimated phases in static mode.[]{data-label="fig:Static"}](fig4){width="200pt" height="150pt"}
![](fig5){width="200pt" height="150pt"}

To quantify the phase stability in static mode, we measure the standard deviation (std) of the phase at each FBG over a time frame of $20~\textrm{ms}$. A capture of the estimated phases in static mode is shown in Fig. \[fig:Static\](b) for the ten FBGs. Note that the first FBG serves as the optical phase reference from which the differential phase at the next FBGs is extracted; the phase of the first FBG is thus ignored in the following analyses. For each measurement, we record an average standard deviation by averaging the std values at the nine FBGs. The received signal power at the input of the coherent mixer is measured at $-27~\mathrm{dBm}$. Next, we measure this average std for various lengths of the probing code. The choice of the code length is driven by the trade-off between the measurement noise and the coherence length of the laser source: when probing the sensor array with a very short code, the collected energy over a single code is low, which makes us vulnerable to the receiver noise; conversely, a very long code spreads over a duration that exceeds the coherence time of the laser source, which invalidates the phase reference and the relative phases computed subsequently. This is illustrated in Fig. \[fig:CodeLength\_SNRmargin\](a), where the phase standard deviation increases on the left edge (for codes shorter than $3~\mathrm{\mu s}$) and on the right edge (for codes longer than $3~\mathrm{ms}$) because of the correlation noise and of the coherence loss respectively. The used laser source - dedicated to sensing applications - has a $600~\mathrm{Hz}$ linewidth, corresponding to a coherence time of $0.5~\mathrm{ms}$.
Between these two limits, the standard deviation of the phase is relatively constant, around $10~\mathrm{mrad}$.

![(a) Standard deviation of estimated phases as a function of code length. (b) Standard deviation of estimated phases as a function of signal power at the receiver input.[]{data-label="fig:CodeLength_SNRmargin"}](fig6){width="200pt" height="150pt"}
![](fig7){width="190pt" height="150pt"}

Later on, we test the dependence of this result on the optical power of the signal at the input of the coherent receiver by fixing the code length to $3.2~\mathrm{\mu s}$ and varying the signal power level from $-27~\mathrm{dBm}$ down to $-52~\mathrm{dBm}$. The local oscillator power was fixed to $7~\mathrm{dBm}$. In a dual-polarization coherent receiver used in a homodyne configuration, the detected in-phase and in-quadrature photocurrents at the outputs of the balanced photodiodes $I_I$ and $I_Q$ for the two polarization states $X$ and $Y$ are given by: $$\begin{split} I_{I,X/Y} &\propto \sqrt{P_{S,X/Y}P_{LO,X/Y}}\cos(\phi_{X,Y}+\phi_{LO}) + \eta_{I,X/Y}\\ I_{Q,X/Y} &\propto \sqrt{P_{S,X/Y}P_{LO,X/Y}}\sin(\phi_{X,Y}+\phi_{LO}) + \eta_{Q,X/Y} \end{split}$$ where $P_{S,X/Y}$ is the optical power of the signal projected on the $X$ (resp. on the $Y$) polarization axis of the receiver, $P_{LO,X/Y}$ is the optical power of the local oscillator projected on the $X$ (resp. on the $Y$) polarization, $\phi_{X,Y}$ stands for the optical phase of the signal projected on the $X$ (resp. on the $Y$) polarization, $\phi_{LO}$ is the laser phase noise, and $\eta_{I/Q,X/Y}$ is an additive white Gaussian noise added at the receiver side (a combination of shot noise and thermal noise).
Figure \[fig:CodeLength\_SNRmargin\](b) shows that the phase stability slowly deteriorates when reducing the signal power down to $-48~\mathrm{dBm}$ at the receiver input. This slow decrease was investigated through numerical simulations of the noise sources at the receiver side and was found to be mainly due to a limitation imposed by the relative intensity noise (RIN) of the laser and shot noise at the photodiodes rather than thermal noise. The swift rise of the std below $-48~\mathrm{dBm}$ is due to a phase unwrapping problem when phase variations become too high.

![Standard deviation of estimated phases as a function of reach.[]{data-label="fig:Reach"}](fig8){width="7cm"}

Another important feature of an optical fiber sensing system is its reach, or the maximum distance that can be covered. In our scheme, the reach limit is given by the increase in phase noise for increased round-trip distances. To assess the reach, fiber spools of increasing lengths were added between the interrogator and the sensor array, and the average standard deviation of the estimated phases was computed for each length. Figure \[fig:Reach\] shows the obtained results. The used code length was $82~\mathrm{\mu s}$, and the observation window was fixed to $40~\mathrm{ms}$. A tenfold degradation of the standard deviation is observed when moving from $0$ to a one-way distance of $25~\mathrm{km}$. Furthermore, beyond $34~\mathrm{km}$, the estimated phases are corrupted by a phase noise with much larger variance resulting from phase unwrapping errors (a round-trip distance of $68~\mathrm{km}$, approaching the coherence length of the laser source, around $l_c=c_f/(\pi\Delta\nu)=100~\mathrm{km}$ for $\Delta\nu=600~\mathrm{Hz}$).

Dynamic regime
--------------

The sensor array is now tested in dynamic mode by means of two identical piezoelectric actuators placed at $25~\mathrm{m}$ and $95~\mathrm{m}$ from the sensor input. The used actuator is a $5~\mathrm{cm}$-outer-diameter ring with a radial efficiency of $400~\mathrm{pm/V}$.
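The quoted coherence figures follow directly from the linewidth; a short check (again assuming a group index of about $1.5$, a value not stated in the text):

```python
import math

delta_nu = 600.0     # laser linewidth (Hz)
n_g = 1.5            # assumed fiber group index
c_f = 3.0e8 / n_g    # light velocity in the fiber (m/s)

tau_c = 1 / (math.pi * delta_nu)  # coherence time, ~0.53 ms
l_c = c_f * tau_c                 # coherence length in fiber, ~106 km

assert abs(tau_c - 0.53e-3) < 0.01e-3  # matches the quoted ~0.5 ms
assert 100e3 < l_c < 110e3             # matches the quoted ~100 km
```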
$1.5~\mathrm{m}$ of fiber is wound around each piezo, leading to $25~\mathrm{nm}$ of fiber extension per volt of excitation voltage. We measured a phase shift of 1 radian for a $75~\mathrm{nm}$ fiber extension, obtained by applying an excitation of 3 Volts. As a first test, we simultaneously apply a $500~\mathrm{Hz}$ (resp. $200~\mathrm{Hz}$) sine wave with a $10~\mathrm{Vpp}$ (resp. $4~\mathrm{Vpp}$) magnitude on the first (resp. second) actuator. The sensor array is probed with an $82~\mathrm{\mu s}$-long PDM-QPSK code. Figure \[fig:DisSens\](a) shows the phase measured as a function of time at each of the 10 FBGs. The black dotted curve represents the absolute phase measured at the first FBG. More interesting are the phase evolutions captured at the third and tenth FBGs: both sine waves are easily identified and their magnitudes are simply scaled by the radial efficiency of the actuator. Furthermore, phases measured at the other FBG locations are stable as a function of time, proving the absence of crosstalk between sensors. We performed additional measurements with a single active actuator excited by a $500~\mathrm{Hz}$ sine wave to quantify the minimum rejection over all the remaining unexcited segments, and measured a crosstalk rejection level of $-30~\mathrm{dB}$, as shown in Fig. \[fig:DisSens\](b).

![(a) Distributed sensing capability showing low crosstalk between sensors. (b) Crosstalk level at other sensors when only a single one is excited.[]{data-label="fig:DisSens"}](fig9a){width="220pt" height="175pt"}
![](fig9bV3){width="195pt" height="175pt"}

Later on, the evolution of the phase magnitude as a function of the excitation voltage has also been quantified and is shown in Fig.
\[fig:Dyn\_Sen\](a) for a $1~\mathrm{kHz}$ sine wave. The probing code length is $82~\mathrm{\mu s}$ and the observation window is $40~\mathrm{ms}$ long. We observe a linear behavior ($20~\mathrm{dB}$ dynamic range) for voltages between $0.1~\mathrm{Vpp}$ and $20~\mathrm{Vpp}$. We could not further increase the voltage with our low-frequency signal generator. Below $0.1~\mathrm{V}$, the noise floor induced by the phase noise of the laser becomes prominent. The dynamic range of the sensing system can be enhanced with a laser having a narrower linewidth. However, the demonstrated dynamic range is already acceptable to analyze a wide range of mechanical signals. Next, we measure the sensitivity of our system, i.e. the smallest detectable change in the sensed variable at a given frequency, often expressed in terms of $\mathrm{rad}/\sqrt{\mathrm{Hz}}$. For that, one piezoelectric actuator is excited with a pure tone of constant amplitude producing a $2\pi$ peak-to-peak phase variation. The phase is captured over an observation window of $8~\mathrm{ms}$ and the probing code length is fixed to $20.48~\mathrm{\mu s}$. Sensitivity is computed from the normalized power spectral density of the estimated phase at the FBG following the sine wave stimulus as $\sqrt{N_{B}/F_{max}}$, where $N_{B}$ is the noise power in a frequency resolution of $B=125~\mathrm{Hz}$ corresponding to the $8~\mathrm{ms}$ observation window and $F_{max}=1/(2T_{code})$ is the maximum mechanical bandwidth of the used code ($24.4~\mathrm{kHz}$ in this case) [@Pos00]. The measured sensitivity, between $10~\mathrm{and}~20~\mathrm{\mu rad}/\sqrt{\mathrm{Hz}}$, is shown in Fig. \[fig:Dyn\_Sen\](b) for frequencies in the range of $\left[100:18000\right]~\mathrm{Hz}$. Furthermore, we measured the sensitivity after displacing the sensor array by adding $25~\mathrm{km}$ of SMF and noticed a tenfold deterioration in sensitivity (between $100~\mathrm{and}~200~\mathrm{\mu rad}/\sqrt{\mathrm{Hz}}$).
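The bandwidth and resolution figures entering this sensitivity computation follow directly from the code and window durations:

```python
T_code = 20.48e-6  # probing code duration (s)
T_obs = 8e-3       # observation window (s)

F_max = 1 / (2 * T_code)  # maximum mechanical bandwidth of the code
B = 1 / T_obs             # frequency resolution of the spectral estimate

assert round(F_max) == 24414     # ~24.4 kHz, as quoted
assert abs(B - 125.0) < 1e-6     # 125 Hz resolution bin
```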
![(a) Dynamic range: peak-to-peak phase magnitude versus peak-to-peak voltage for a $1~\mathrm{kHz}$ sine wave. (b) Sensitivity in $\mathrm{rad/\sqrt{Hz}}$ for sine waves between $\left[100:18000\right]~\mathrm{Hz}$.[]{data-label="fig:Dyn_Sen"}](fig10 "fig:"){width="190pt" height="140pt"} ![(a) Dynamic range: peak-to-peak phase magnitude versus peak-to-peak voltage for a $1~\mathrm{kHz}$ sine wave. (b) Sensitivity in $\mathrm{rad/\sqrt{Hz}}$ for sine waves between $\left[100:18000\right]~\mathrm{Hz}$.[]{data-label="fig:Dyn_Sen"}](fig11 "fig:"){width="190pt" height="140pt"} Fixing the probing code length at $26~\mathrm{\mu s}$, which corresponds to a mechanical bandwidth of $19~\mathrm{kHz}$, the power spectral response is now measured by applying on the actuator a $1~\mathrm{s}$-long chirp excitation that linearly explores the audio bandwidth from $20~\mathrm{Hz}$ to $18~\mathrm{kHz}$. The obtained power spectral density of the phase response from the stimulated sensor is shown in Fig. \[fig:Linearity\]. The disturbances visible on the left part of the figure are induced by the limited measurement window; they disappear when the low-frequency part is measured over a larger window. The rise in the response observed above $10~\mathrm{kHz}$ comes from the actuator and is induced by its first resonance peak, located at $20~\mathrm{kHz}$ (as specified by the manufacturer of the piezoelectric actuator). This resonance peak can be digitally compensated, after which the power spectral response would be flat within $\left[20:18000\right]~\mathrm{Hz}$. This linearity, added to the previously showcased dynamic range and sensitivity, demonstrates the ability of the system to reliably capture distributed audio/mechanical signals over a spectral range as wide as that of the human hearing system. 
Although the demonstrated range is bounded by $18~\mathrm{kHz}$ in this work, the use of shorter probing codes generated at higher symbol rates will allow us to explore even higher frequencies. ![Power spectral response of the system measured over the audio bandwidth.[]{data-label="fig:Linearity"}](fig12){width="250pt" height="210pt"} Conclusion ========== We introduced novel polarization-multiplexed codes derived from Golay sequences that provide perfect optical channel estimation for phase and polarization sensing applications (only the former is covered in this paper), with a code-length flexibility that adjusts the system mechanical bandwidth to the application requirement. The underlying setup is derived from the one used in coherent optical telecommunication systems, with the requirement of a low-phase-noise laser source used in a self-homodyne configuration. Thanks to the binary nature of the proposed sequences, no acousto-optic modulators or digital-to-analog converters (DACs) are needed to generate the probing excitation at the transmitter side. In addition, the sensor array can be continuously probed by periodic codes, maximizing the signal-to-noise ratio and the covered bandwidth. The main limiting parameter is the laser coherence: the duration of the probing codewords plus the round-trip delay in the sensor array should remain within the coherence time of the laser source to guarantee a targeted sensitivity value. An FBG-based sensor array excited with piezoelectric actuators was used to experimentally quantify the system performance. With a $600~\textrm{Hz}$ linewidth laser source modulated by our proposed PDM-QPSK code, a sensitivity of $10~\mathrm{\mu rad}/\sqrt{\mathrm{Hz}}$ was measured for mechanical perturbations up to $18000~\mathrm{Hz}$, thus covering the entire spectral range of the human hearing system. 
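The coherence-time requirement stated above can be checked with back-of-the-envelope arithmetic. The sketch below assumes a Lorentzian lineshape (coherence time $\tau_c \approx 1/(\pi\,\Delta\nu)$) and a group index of about 1.468 for standard SMF; both are textbook values, not figures from this paper.

```python
# Rough check that the probing code plus the array round trip fit within the
# coherence time of a 600 Hz linewidth laser (Lorentzian-lineshape assumption).
import math

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468            # approximate group index of standard SMF (assumed)

def coherence_time_s(linewidth_hz: float) -> float:
    """Coherence time of a Lorentzian laser line, tau_c ~ 1/(pi * linewidth)."""
    return 1.0 / (math.pi * linewidth_hz)

def round_trip_delay_s(fiber_length_m: float) -> float:
    """Round-trip propagation delay through a given length of SMF."""
    return 2.0 * fiber_length_m * N_GROUP / C_VACUUM
```

With these assumptions, a $600~\mathrm{Hz}$ laser yields $\tau_c \approx 530~\mathrm{\mu s}$, comfortably above the $82~\mathrm{\mu s}$ code plus the round trip in a short array; adding $25~\mathrm{km}$ of SMF consumes roughly a further $245~\mathrm{\mu s}$, which is consistent with the reduced sensitivity observed in that configuration.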
Acknowledgments {#acknowledgments .unnumbered} =============== We warmly thank Ole Henrik Waagaard from Alcatel Submarine Networks Norway AS for his help in the development of the theoretical section.
--- abstract: 'Let $M$ denote the moduli space of stable vector bundles of rank $n$ and fixed determinant of degree coprime to $n$ on a non-singular projective curve $X$ of genus $g \geq 2$. Denote by ${\mathcal{U}}$ a universal bundle on $X \times M$. We show that, for $x,y \in X,\; x \neq y$, the restrictions ${\mathcal{U}}|\{x\} \times M$ and ${\mathcal{U}}|\{y\} \times M$ are stable and non-isomorphic when considered as bundles on $M$.' address: - | H. Lange\ Mathematisches Institut\ Universität Erlangen-Nürnberg\ Bismarckstraße $1\frac{ 1}{2}$\ D-$91054$ Erlangen\ Germany - | P.E. Newstead\ Department of Mathematical Sciences\ University of Liverpool\ Peach Street, Liverpool L69 7ZL, UK author: - 'H. Lange' - 'P. E. Newstead' title: On Poincaré bundles of vector bundles on curves --- [^1] Introduction ============ Let $X$ be a non-singular projective curve of genus $g \geq 2$ over the field of complex numbers. We denote by $M = M(n,L)$ the moduli space of stable vector bundles of rank $n$ with determinant $L$ of degree $d$ on $X$, where gcd$(n,d) = 1$. We denote by ${\mathcal{U}}$ a universal bundle on $X \times M$. For any $x \in X$ we denote by ${\mathcal{U}}_x$ the bundle ${\mathcal{U}}|\{x\} \times M$ considered as a bundle on $M$. In a paper of M. S. Narasimhan and S. Ramanan [@nr] it was shown that ${\mathcal{U}}_x$ is a simple bundle and that the infinitesimal deformation map $$\label{eq1} T_{X,x} {\rightarrow}H^1(M, \mbox{End} ({\mathcal{U}}_x))$$ is bijective for all $x \in X$. In [@bbn Proposition 2.4] it is shown that ${\mathcal{U}}_x$ is semistable with respect to the unique polarization of $M$. In fact, ${\mathcal{U}}_x$ is stable; since we could not locate a proof of this in the literature, we include one here. Let ${\mathcal{M}}$ denote the moduli space of stable bundles on $M$ having the same Hilbert polynomial as ${\mathcal{U}}_x$. 
Then (\[eq1\]) implies that the natural morphism $$X {\rightarrow}{\mathcal{M}}$$ is étale and surjective onto a component ${\mathcal{M}}_0$ of ${\mathcal{M}}$. It is stated in [@nr] that it can be easily deduced from the results of that paper that the map $X {\rightarrow}{\mathcal{M}}_0$ is also injective. This would imply that the curve $X$ can be identified with ${\mathcal{M}}_0$. However no proof of this fact seems to be given. There is a proof in a paper of A. N. Tyurin [@tyu Theorem 2], but this seems to us to be incomplete. We offer here a proof which is in the spirit of [@tyu]. To be more precise, our main result is the following theorem.\ [**Theorem**]{} [*Let $X$ be a non-singular projective curve of genus $g \geq 2$. If $x,y \in X, \; x \neq y$, then ${\mathcal{U}}_x \not\simeq {\mathcal{U}}_y$.*]{}\ Note that if $X$ is a general curve of genus $g \geq 3$ or any curve of genus 2, then $X$ does not admit étale coverings $X {\rightarrow}{\mathcal{M}}_0$ of degree $>1$. So for such curves the theorem is immediate. For the proof we can therefore assume that $g \geq 3$. In fact, our proof fails for $g=2$. In Section 2 we prove the stability of ${\mathcal{U}}_x$. In Sections 3 and 4 we make some cohomological computations, from which a family of stable bundles on $X$ can be constructed. This construction is carried out in Section 5, where we also use the morphism to $M$ given by this family in order to prove the theorem. Stability of ${\mathcal{U}}_x$ ============================== Let $X$ be a non-singular projective curve of genus $g \geq 2$. Let $n \geq 2$ and $d$ be integers with gcd$(n,d) = 1$. There are uniquely determined integers $l$ and $e$ with $0<l<n$ and $0 \leq e<d$ such that $$\label{eq2} ld-en = 1.$$ The bundles ${\mathcal{U}}_x$ were shown to be semistable in [@bbn Proposition 2.4], but the proof does not seem to imply stability directly, even though we know also by [@nr] that ${\mathcal{U}}_x$ is simple. 
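As a purely illustrative aside (not part of the paper's arguments), the unique pair $(l,e)$ of (\[eq2\]) can be computed numerically from the inverse of $d$ modulo $n$; the helper name below is ours, and $d > 0$ coprime to $n$ is assumed.

```python
# Compute the unique 0 < l < n, 0 <= e < d with l*d - e*n = 1 (gcd(n, d) = 1).
def l_and_e(n: int, d: int):
    l = pow(d, -1, n)        # d is invertible mod n since gcd(n, d) = 1
    e = (l * d - 1) // n     # then e is forced by l*d - e*n = 1
    return l, e
```

For example, $n=3$, $d=2$ gives $(l,e)=(2,1)$, and indeed $2\cdot 2 - 1\cdot 3 = 1$.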
\[propos2.1\] For all $x \in X$, the vector bundle ${\mathcal{U}}_x$ is stable with respect to the unique polarization of $M$. By [@bbn Proposition 2.4] the bundle ${\mathcal{U}}_x$ is semistable. By [@ram Remark 2.9] and possibly after tensoring ${\mathcal{U}}$ by a line bundle on $M$, $$c_1({\mathcal{U}}_x) = l \alpha,$$ where $\alpha$ is the positive generator of $H^2(M)$. By (\[eq2\]), $l$ and $n$ are coprime. It follows that ${\mathcal{U}}_x$ is stable. Cohomological constructions =========================== Let $l$ and $n$ be as in (\[eq2\]). Let $V$ be a semistable vector bundle of rank $l$ and degree $l(n-l)+e$ and $W$ a semistable bundle of rank $n-l$ and degree $d-e-l(n-l)$ on $X$. Then $$\deg (W^* \otimes V) = nl(n-l) -1.$$ Let $q_i, \; i=1,2,$ denote the projections of $X \times X$ on the two factors, $\Delta$ the diagonal of $X \times X$ and write for brevity $$U = q_1^*(W^* \otimes V).$$ \[lem2.1\] For $ n \geq 2$ and $1 \leq i \leq n$,\ [*(a)*]{} $h^0(U(-i\Delta)|\Delta) = (n+(2i-1)(g-1))l(n-l) -1$;\ [*(b)*]{} $h^1(U(-i\Delta)|\Delta) = 0$. Identifying $\Delta$ with $X$, we have $U(-i\Delta)|\Delta = W^* \otimes V \otimes K_X^i$. Since $$\deg (W^* \otimes V \otimes K_X^i) = (n+(2g-2)i)l(n-l) - 1 > l(n-l)(2g-2)$$ and $W^* \otimes V$ is semistable, (b) holds and Riemann-Roch gives (a). \[lem2.2\] For $n \geq 2$, $$h^1(U(-n\Delta)) = gh^0(W^* \otimes V) + l(n-l)(n-1)(g(n-1) + 1) -(n-1).$$ For $0 \leq i \leq n$, consider the exact sequence $$\label{eqn12} 0 {\rightarrow}U(-(i+1)\Delta) {\rightarrow}U(-i\Delta) {\rightarrow}U(-i\Delta)|\Delta {\rightarrow}0$$ on $X \times X$. For $i=0$, this sequence gives $$0 {\rightarrow}H^1(U(-\Delta)) {\rightarrow}H^1(U) \stackrel{\psi}{{\rightarrow}} H^1(U|\Delta),$$ since the restriction map $H^0(U) {\rightarrow}H^0(U|\Delta)$ is an isomorphism. The map $\psi$ is surjective, since its restriction to the Künneth component $H^1(W^* \otimes V) \otimes H^0({\mathcal{O}}) \subset H^1(U)$ is an isomorphism. 
Hence $$\begin{array}{ll} h^1(U(-\Delta)) &= h^1(U) - h^1(U|\Delta)\\ &= h^1(W^* \otimes V) h^0({\mathcal{O}}) + h^0(W^* \otimes V)h^1({\mathcal{O}}) - h^1(W^* \otimes V)\\ &= g\cdot h^0(W^* \otimes V). \end{array}$$ For $1 \leq i \leq n-1$, the sequence (\[eqn12\]) gives, by Lemma \[lem2.1\] (b), $$0 {\rightarrow}H^0(U(-i\Delta)|\Delta) {\rightarrow}H^1(U(-(i+1)\Delta)) {\rightarrow}H^1(U(-i\Delta)) {\rightarrow}0.$$ This gives, by Lemma \[lem2.1\] (a) and the above computation, $$\begin{array}{l} h^1(U(-n\Delta)) = h^1(U(-\Delta)) + \sum_{i=1}^{n-1} h^0(U(-i\Delta)|\Delta)\\ \quad = g h^0(W^* \otimes V) + \sum_{i=1}^{n-1}((n+(2i-1)(g-1))l(n-l) -1)\\ \quad = g h^0(W^* \otimes V) + l(n-l)(n-1)(g(n-1) + 1) - (n-1). \end{array}$$ \[lem2.3\] Let $n \geq 2$ and $x \in X$. Then, except in the case when $n=2$ and $W^* \otimes V \simeq {\mathcal{O}}(x)$, $$h^1(U(-n\Delta-X\times\{x\}) = h^1(U(-\Delta - X \times \{x\})) + l(n-l)(n-1)^2g -(n-1).$$ For $1 \leq i \leq n-1$ consider the exact sequence $$\begin{array}{ll} 0 {\rightarrow}U(-(i+1)\Delta - X \times \{x\}) {\rightarrow}& U(-i\Delta-X \times \{x\})\\ &{\rightarrow}U(-i\Delta-X \times \{x\})|\Delta {\rightarrow}0 \end{array}$$ on $X \times X$. Identifying $\Delta$ with $X$, we have $$U(-i\Delta-X \times \{x\})|\Delta \simeq K_X^i \otimes W^* \otimes V(-x).$$ If either $i \geq 2$ or $ n \geq 3$, $$\deg(K_X^i \otimes W^* \otimes V(-x)) > l(n-l)(2g-2).$$ So semistability implies $$\label{eqn2} h^1(K_X^i \otimes W^* \otimes V(-x)) = 0.$$ If $n=2$ and $i=1$, then $W^* \otimes V$ has rank 1 and $$\deg(K_X \otimes W^* \otimes V(-x)) = 2g-2.$$ So (\[eqn2\]) is still true, unless $W^* \otimes V \simeq {\mathcal{O}}(x)$. 
Now Riemann-Roch implies $$h^0(K_X^i \otimes W^* \otimes V(-x)) = ((2g-2)i + n-g)l(n-l) -1.$$ Hence applying the above sequence $n-1$ times, we get $$\begin{array}{l} h^1(U(-n\Delta-X \times \{x\}) = \\ \quad = h^1(U(-\Delta - X \times \{x\})) + \sum_{i=1}^{n-1} h^0(K_X^i \otimes W^* \otimes V(-x))\\ \quad = h^1(U(-\Delta -X \times \{x\})) + \sum_{i=1}^{n-1} \{((2g-2)i +n-g)l(n-l) -1\}\\ \quad = h^1(U(-\Delta -X \times \{x\})) + l(n-l)(n-1)^2g -(n-1). \end{array}$$ Now suppose $(V,W)$ is a general pair of bundles on $X$ with the given ranks and degrees. Here by “general” we mean that the theorem of Hirschowitz (see [@hir]) is true, which says that either $H^0(W^* \otimes V) = 0$ or $H^1(W^* \otimes V) = 0$. \[prop2.4\] For $n \geq 3$, $g \geq 3$ and $(V,W)$ general, there is a 2-dimensional vector subspace $T_0 \subset H^1(U(-n\Delta))$ such that the restriction map $$\label{eqn3} H^1(U(-n\Delta)) {\rightarrow}H^1(W^* \otimes V(-nx))$$ is injective on $T_0$ for all $x \in X$. Consider the exact sequence $$0 {\rightarrow}U(-n\Delta-X \times \{x\}) {\rightarrow}U(-n\Delta) {\rightarrow}U(-n\Delta)|X \times \{x\} {\rightarrow}0$$ on $X \times X$. Since $U(-n\Delta)|X \times \{x\} \simeq W^* \otimes V(-nx)$ is of degree $-1$ and $W^* \otimes V$ is semistable, this gives $h^0(W^* \otimes V(-nx)) = 0$ and thus $$0 {\rightarrow}H^1(U(-n\Delta-X\times \{x\}) {\rightarrow}H^1(U(-n\Delta)) {\rightarrow}H^1(W^* \otimes V(-nx)).$$ We claim that $$\label{eq3} C := h^1(U(-n\Delta)) - h^1(U(-n\Delta - X \times \{x\})) \geq 3.$$ According to Lemmas \[lem2.2\] and \[lem2.3\], $$\begin{array}{ll} C & = gh^0(W^* \otimes V) + l(n-l)(n-1) - h^1(U(-\Delta-X \times \{x\})). 
\end{array}$$ Now the exact sequence $$0 {\rightarrow}U(-\Delta-X \times \{x\}) {\rightarrow}U(-X \times \{x\}) {\rightarrow}U(-X \times \{x\})|\Delta {\rightarrow}0$$ implies $$\begin{array}{ll} h^1(U(-\Delta-X \times \{x\}) &\leq h^0(U(-X \times \{x\})|\Delta) + h^1(U(-X \times \{x\}))\\ &= h^0(W^* \otimes V(-x)) + gh^0(W^* \otimes V). \end{array}$$ Hence $$C \geq l(n-l)(n-1) - h^0(W^* \otimes V(-x)).$$ According to the above mentioned theorem of Hirschowitz, either $H^0(W^* \otimes V) = 0$ or $H^1(W^* \otimes V) = 0$. In the first case also $H^0(W^* \otimes V(-x)) = 0$ and thus $$C \geq l(n-l)(n-1) \geq 3.$$ In the second case Riemann-Roch implies $$h^0(W^* \otimes V(-x)) \leq h^0(W^* \otimes V) = (n+1-g)l(n-l) -1$$ and thus, for $g \geq 3$, $$C \geq l(n-l)(g-2) + 1 \geq 3.$$ We have thus proved (\[eq3\]) in all cases. This implies that the codimension of the union of the kernels of (\[eqn3\]) for $x \in X$ is at least 2. Hence there is a vector subspace $T_0$ of dimension 2 meeting this union in 0 only. The case $n=2$ ============== Now suppose $n=2$, which implies $l=1$. So $V$ and $W$ are line bundles with $\deg(W^* \otimes V) = 1$. In this case the proof of Proposition \[prop2.4\] fails. In fact, we have to choose $V$ and $W$ such that $$W^* \otimes V \simeq {\mathcal{O}}(x_0)$$ for some fixed $x_0 \in X$. Then Lemmas \[lem2.1\] and \[lem2.2\] remain true and so does Lemma \[lem2.3\] except when $x=x_0$. \[prop3.1\] For $n=2$, there is a $(g-1)$-dimensional vector subspace $T_1 \subset H^1(U(-2\Delta))$ such that the restriction map $$H^1(U(-2\Delta)) {\rightarrow}H^1(W^* \otimes V(-2x))$$ is injective on $T_1$ for all $x \in X$. 
Since $h^0(W^* \otimes V) = 1$, Lemma \[lem2.2\] says that $$h^1(U(-2\Delta)) = 2g.$$ Lemma \[lem2.3\] implies that, if $x \neq x_0$, then $$\label{eqn4} h^1(U(-2\Delta - X \times \{x\})) = h^1(U(-\Delta - X \times \{x\})) + g-1.$$ If $x=x_0$, then the same proof gives $$\label{eqn5} h^1(U(-2\Delta - X \times \{x\})) \leq h^1(U(-\Delta - X \times \{x\})) + g.$$ Now consider the exact sequence $$\label{eqn6} 0 {\rightarrow}U(-\Delta-X \times \{x\}) {\rightarrow}U(-X \times \{x\}) {\rightarrow}U(-X \times \{x\})|\Delta {\rightarrow}0$$ on $X \times X$. Since under the identification of $\Delta$ with $X$, $$U(-X \times \{x\})|\Delta \simeq {\mathcal{O}}(x_0-x),$$ we get, for $x \neq x_0$, $$0 {\rightarrow}H^1(U(-\Delta - X \times \{x\})) {\rightarrow}H^1(U(- X \times \{x\})) \stackrel{\varphi}{{\rightarrow}} H^1({\mathcal{O}}(x_0-x)).$$ The map $\varphi$ is surjective, since its dual is the canonical injection $$H^0(K_X(x-x_0)) {\rightarrow}\mbox{Hom}(H^0({\mathcal{O}}(x_0)),H^0(K_X(x))) = H^0(K_X(x)).$$ Hence $$\begin{array}{ll} h^1(U(-\Delta - X \times \{x\})) & = h^1(U(-X \times \{x\})) - h^1({\mathcal{O}}(x_0-x))\\ & = h^0({\mathcal{O}}(x_0))h^1({\mathcal{O}}(-x)) - h^1({\mathcal{O}}(x_0-x))\\ & = g - (g-1) = 1. \end{array}$$ If $x = x_0$, the map $\varphi$ is still surjective and thus an isomorphism. So (\[eqn6\]) implies $$h^1(U(-\Delta - X \times \{x\})) = h^0({\mathcal{O}}(x_0-x)) = 1.$$ Now (\[eqn4\]) and (\[eqn5\]) give $$\label{eqn7} h^1(U(-2\Delta-X \times \{x\})) \left\{ \begin{array}{lll} \leq g+1 & if & x=x_0,\\ = g& if & x \neq x_0. 
\end{array} \right.$$ Now $$0 {\rightarrow}U(-2\Delta-X \times \{x\}) {\rightarrow}U(-2\Delta) {\rightarrow}U(-2\Delta)|X \times \{x\} {\rightarrow}0$$ gives $$0 {\rightarrow}H^1(U(-2\Delta-X \times \{x\})) {\rightarrow}H^1(U(-2\Delta)) {\rightarrow}H^1(W^* \otimes V(-2x)).$$ So the kernel of the restriction map is $H^1(U(-2\Delta-X \times \{x\}))$ which, together with (\[eqn7\]), implies the assertion as in the proof of Proposition \[prop2.4\]. Proof of the Theorem for $g \geq 3$ =================================== We want to consider extensions of the form $$0 {\rightarrow}q_1^*V(-(n-l)\Delta) {\rightarrow}E {\rightarrow}q_1^*W(l\Delta) {\rightarrow}0 \eqno(e)$$ on $X \times X$. The extension $(e)$ is classified by an element $e \in H^1(U(-n\Delta))$. The restriction of $(e)$ to $X \times \{x\}$ is the extension $$0 {\rightarrow}V(-(n-l)x) {\rightarrow}E_x {\rightarrow}W(lx) {\rightarrow}0$$ corresponding to the image of $e$ in $H^1(W^* \otimes V(-nx))$. We can therefore choose a vector subspace $T_0$ of $H^1(U(-n\Delta))$ of dimension 2 such that, for all $0 \neq e \in T_0$, the image of $e$ in $H^1(W^* \otimes V(-nx))$ is non-zero. Note that $$\begin{array}{ll} \det E_x & = \det (V(-(n-l)x)) \otimes \det (W(lx))\\ & = \det V \otimes {\mathcal{O}}(-l(n-l)x) \otimes \det W \otimes {\mathcal{O}}(l(n-l)x)\\ & = \det V \otimes \det W \end{array}$$ for all $x$. On the other hand, by [@ram Lemma 2.1], provided $V$ and $W$ are stable, the bundle $E_x$ is stable for all $0 \neq e \in T_0$ and all $x \in X$.\ Let ${\mathbb P}^1 = P(T_0)$ and consider the product variety $X \times X \times {\mathbb P}^1$. Let $p_i$ and $p_{ij}$ denote the projections of $X \times X \times {\mathbb P}^1$. 
The non-trivial extensions of the form $(e)$ with $e \in T_0$ form a family parametrized by ${\mathbb P}^1$ which has the form (see for example [@ram Lemma 2.4]) $$\label{eqn8} 0 {\rightarrow}p_1^*V \otimes p_{12}^*{\mathcal{O}}(-(n-l)\Delta) {\rightarrow}{\mathcal{E}}{\rightarrow}p_1^*W \otimes p_{12}^*{\mathcal{O}}(l\Delta) \otimes p_3^*(\tau^*) {\rightarrow}0,$$ where $\tau$ is the tautological hyperplane bundle on ${\mathbb P}^1$.\ [*Proof of the Theorem*]{}. By what we have said above, ${\mathcal{E}}$ is a family of stable bundles on $X$ of fixed determinant $L = \det V \otimes \det W$ parametrized by $X \times {\mathbb P}^1$. This gives a morphism $$f: X \times {\mathbb P}^1 {\rightarrow}M$$ such that $$(\mbox{id} \times f)^* {\mathcal{U}}\simeq {\mathcal{E}}\otimes p_{23}^*(N)$$ for some line bundle $N \in \mbox{Pic}(X \times {\mathbb P}^1)$. Considering $${\mathcal{E}}_x = {\mathcal{E}}|\{x\} \times X \times {\mathbb P}^1$$ as a bundle on $X \times {\mathbb P}^1$, we have $$f^*{\mathcal{U}}_x \simeq {\mathcal{E}}_x \otimes N.$$ Hence, in order to complete the proof of the theorem, it suffices to show that the bundle ${\mathcal{E}}_x \otimes N$ determines the point $x$. For this we compute the Chern class $c_2({\mathcal{E}}_x \otimes N)$ in the Chow group $\mbox{CH}^2(X \times {\mathbb P}^1)$. From (\[eqn8\]) we get $$\label{eqn9} c_1({\mathcal{E}}) = p_1^*\beta - (n-l)p_3^*h$$ where $\beta$ is the class of $\det V \otimes \det W$ in $\mbox{CH}^1(X)$ and $h$ is the positive generator of $\mbox{CH}^1({\mathbb P}^1)$. For the computation of $c_2({\mathcal{E}})$ we use the formula $$c_2({\mathcal{F}}\otimes {\mathcal{L}}) = c_2({\mathcal{F}}) + (r-1)c_1({\mathcal{F}})c_1({\mathcal{L}}) + {r \choose 2} c_1({\mathcal{L}})^2$$ for any vector bundle ${\mathcal{F}}$ of rank $r$ and any line bundle ${\mathcal{L}}$. The only terms in $c_2({\mathcal{E}})$ which can possibly survive in $c_2({\mathcal{E}}_x)$ when restricting are those involving $[\Delta]h$. 
So $c_2(p_1^*V \otimes p_{12}^* {\mathcal{O}}(-(n-l)\Delta))$ does not contribute. The coefficient of $[\Delta]h$ in $c_2(p_1^*W \otimes p_{12}^*{\mathcal{O}}(l\Delta) \otimes p_3^*(\tau^*))$ is ${n-l \choose 2}(-2l) $ and the coefficient of $[\Delta]h$ in $$c_1(p_1^*V \otimes p_{12}^*{\mathcal{O}}(-(n-l)\Delta))\cdot c_1(p_1^*W \otimes p_{12}^*{\mathcal{O}}(l\Delta) \otimes p_3^*(\tau^*))$$ is $-l(n-l)(-(n-l)) = l(n-l)^2$. This implies $$c_2({\mathcal{E}}_x) = l(n-l)(-(n-l-1) + n-l)(x \times p) = l(n-l)(x \times p),$$ where $p$ is the class of a point in ${\mathbb P}^1$. Hence, using (\[eqn9\]), we get that $$c_2({\mathcal{E}}_x \otimes N) = l(n-l)(x \times p) + \gamma$$ with $\gamma \in \mbox{CH}^2(X \times {\mathbb P}^1)$ independent of $x$. If ${\mathcal{U}}_x \simeq {\mathcal{U}}_y$, then $l(n-l)((x-y) \times p) = 0$ in $\mbox{CH}^2(X \times {\mathbb P}^1)$. This is equivalent to $$l(n-l)(x-y) = 0 \quad \mbox{in} \quad \mbox{CH}^1(X) = \mbox{Pic}(X).$$ Hence $x-y$ is a point of finite order dividing $l(n-l)$ in $\mbox{Pic}^0(X)$. But there are only finitely many such points in $\mbox{Pic}^0(X)$ and any such point has at most 2 representations of the form $x-y$ ( 2 occurs only if $X$ is hyperelliptic). So, for general $x \in X$, there is no $y \in X$ such that $x-y$ is of finite order dividing $l(n-l)$ in $\mbox{Pic}^0(X)$. Now, as stated in the introduction, the natural morphism $X {\rightarrow}{\mathcal{M}}_0, \; x \mapsto {\mathcal{U}}_x$ is étale and surjective. We have now proved that this étale morphism has degree 1. Hence it is an isomorphism, which completes the proof of the theorem. $\square$ [CAV]{} V. Balaji, L. Brambila-Paz and P. E. Newstead: *Stability of the Poincaré Bundle*. Math. Nachr. 188 (1997), 5-15. A. Hirschowitz: *Problème de Brill-Noether de Rang Supérieur*. Université de Nice, Prépublication Mathématiques No 91 (1986). M. S. Narasimhan and S. Ramanan: *Deformations of the moduli space of vector bundles over an algebraic curve*. Ann. 
of Math. 101 (1975), 391-417. S. Ramanan: *The moduli space of vector bundles over an algebraic curve*. Math. Ann. 200 (1973), 69-84. A. N. Tyurin: *The geometry of moduli of vector bundles*. Usp. Mat. Nauk 29:6 (1974), 59-88; English translation: Russian Math. Surv. 29:6 (1974), 57-88. [^1]: Both authors are members of the research group VBAC (Vector Bundles on Algebraic Curves). The second author acknowledges support from EPSRC Grant No. EP/C515064, and would like to thank the Mathematisches Institut der Universität Erlangen-Nürnberg for its hospitality.
--- abstract: 'In this paper the stability of a closed-loop cascade control system in the trajectory tracking task is addressed. The considered plant consists of underlying second-order fully actuated perturbed dynamics and a first-order system which describes the input dynamics. The main theoretical result presented in the paper concerns stability conditions formulated based on the Lyapunov analysis for the cascade control structure taking advantage of the active disturbance rejection approach. In particular, limitations imposed on the feasible set of observer bandwidths are discussed. In order to illustrate the characteristics of the closed-loop control system, simulation results are presented. Furthermore, the controller is verified experimentally using a two-axis telescope mount. The obtained results confirm that the considered control strategy can be efficiently applied for mechanical systems when a high tracking precision is required.' author: - | Rados[ł]{}aw Patelski, Dariusz Pazderski\ Poznań University of Technology\ Institute of Automation and Robotics\ ul. Piotrowo 3a, 60-965 Poznań, Poland bibliography: - 'bibDP.bib' title: 'Tracking control for a cascade perturbed control system using active disturbance rejection paradigm[^1]' --- Introduction ============ Set-point regulation and trajectory tracking constitute elementary tasks in control theory. It is well known that a fundamental method of stabilisation by means of a smooth static state feedback has significant limitations, which come, among others, from the inability to measure the full state as well as from the occurrence of parametric and structural model uncertainties. Thus, for these reasons, various adaptive and robust control techniques are required to improve the performance of the closed-loop system. In particular, algorithms used for the state and disturbance estimation are of great importance here. 
The use of high gain observers (HGOs) is well motivated in the theory of linear dynamic systems, where it is commonly assumed that state estimation dynamics are negligible with respect to the dominant dynamics of the closed-loop system. A similar approach can be employed successfully for a certain class of nonlinear systems, where establishing a fast convergence of estimation errors may be sufficient to ensure stability, [@KhP:2014]. Naturally, the HGO is a basic tool to support a control feedback when a plant model is only roughly known. Here one can mention the model-free control paradigm introduced by Fliess and others, [@Fliess:2009; @FlJ:2013], as well as the active disturbance rejection control (ADRC) proposed by Han and Gao, [@Han:1998; @Gao:2002; @Gao:2006; @Han:2009]. It turns out that the above-mentioned control methodology can be highly competitive with respect to the classic PID technique in many industrial applications, [@SiGao:2005; @WCW:2007; @MiGao:2005; @CZG:2007; @MiH:2015; @NSKCFL:2018]. Furthermore, it can be regarded as an alternative control approach in comparison to the sliding mode control technique proposed by Utkin and others, [@Utk:77; @Bartol:2008], where bounded matched disturbances are rejected due to fast-switching discontinuous controls. Thus, it is possible to stabilise the closed-loop control system, in the sense of Filippov, on a prescribed, possibly time-varying, sliding surface, [@Bart:96; @NVMPB:2012]. Currently, second- and higher-order sliding techniques for control and state estimation are also being explored, [@Levant:1993; @Levant:1998; @Bartol:1998; @Cast:2016]. It is worth recalling a recent control algorithm based on higher-order sliding modes to solve the tracking problem in finite time for a class of uncertain mechanical systems in robotics, [@Gal:2015; @Gal:2016]. 
From a theoretical point of view, some questions arise regarding the conditions of application of control techniques based on a disturbance observer, with particular emphasis on maintaining the stability of the closed-loop system. Recently, new results concerning this issue have been reported for ADRC controllers, [@SiGao:2017; @ACSA:2017]. In this paper we further study the ADRC methodology taking into account a particular structure of the perturbed plant. Basically, we deal with a cascade control system which is composed of two parts. The first component is represented by second-order dynamics which constitute an essential part of the plant. It is assumed that the system is fully actuated and subject to matched-type disturbances with bounded partial derivatives. The second component is defined by an elementary first-order linear system which describes the input dynamics of the entire plant. Simultaneously, it is supposed that the state and control input of the second-order dynamics are not fully available. It can be seen that the considered plant corresponds well to a class of mechanical systems equipped with a local feedback applied at the level of actuators. As a result of the additional dynamics, real control forces are not directly accessible, which may deteriorate the stability of the closed-loop system. In order to analyse the closed-loop system we take advantage of Lyapunov tools. Basically, we investigate how an extended state observer (ESO) affects the stability when additional input dynamics are considered. Furthermore, we formulate stability conditions and estimate error bounds. In particular, we show that the observer gains cannot be made arbitrarily large, as is commonly recommended in the ADRC paradigm. Such an obstruction is a result of the occurrence of input dynamics which are not explicitly taken into account in the feedback design procedure. 
To the best of the authors’ knowledge, the Lyapunov stability analysis for the considered control structure taking advantage of the ADRC approach has not been addressed in the literature so far. Theoretical results are illustrated by numerical simulations and experiments. The experimental validation is conducted on a real two-axis telescope mount driven by synchronous gearless motors, [@KPKJPKBJN:2019]. Here we show that the considered methods provide the high tracking accuracy which is required in such an application. Additionally, we compare the efficiency of compensation terms computed based on the reference trajectory and on on-line estimates, in order to improve the tracking performance. The paper is organised as follows. In Section 2 the model of a cascade control process is introduced. Then a preliminary feedback is designed and a corresponding extended state observer is proposed. The stability of the closed-loop system is studied using Lyapunov tools and stability conditions with respect to the considered control structure are formulated. Simulation results are presented in Section 3 in order to illustrate the performance of the controller. In Section 4 extensive experimental results are discussed. Section 5 concludes the paper. Controller and observer design ============================== Dynamics of a perturbed cascaded system --------------------------------------- Consider a second-order fully actuated control system defined as follows $$\left\{ \begin{array}{cl} \dot{x}_{1} & =x_{2},\\ \dot{x}_{2} & =Bu+h(x_{1},x_{2})+q(x_{1},x_{2},u,t), \end{array}\right.\label{eq:general:nominal system}$$ where $x_{1},\,x_{2}\in\mathbb{R}^{n}$ are state variables, $B\in\mathbb{R}^{n\times n}$ is a non-singular input matrix while $u\in\mathbb{R}^{n}$ stands for an input. Functions $h:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{n}$ and $q:\mathbb{R}^{2n}\times\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ denote known and unknown components of the dynamics, respectively. 
Next, it is assumed that input $u$ in the above plant is not directly accessible for a control purpose; instead, it is governed by the following first-order dynamics $$\dot{u}=T^{-1}\left(-u+v\right),\label{eq:general:input dynamics}$$ where $v\in\mathbb{R}^{n}$ is regarded as a real input and $T\in\mathbb{R}^{n\times n}$ is a diagonal matrix of positive time constants. In fact, both dynamics constitute a cascaded third-order plant, for which the underlying component is represented by the second-order dynamics, while the first-order part corresponds to stable input dynamics. Control system design --------------------- The control task investigated in this paper deals with tracking of a reference trajectory specified for the output of the system, which is determined by $y:=x_1$. Simultaneously, it is assumed that variables $x_2$ and $u$ are unavailable for measurement and the only information is provided by the output. To be more precise, we define an at least $C^3$-continuous reference trajectory $x_{d}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}$ and consider the output tracking error $\tilde{y}:=x_d-x_1$. Additionally, to quantify the difference between $u$ and $v$, we introduce the error $\tilde{u}:=v-u$. Since $v$ is viewed as an alternative input of the plant, one can rewrite its dynamics as $$\left\{ \begin{array}{cl} \dot{x}_{1} & =x_{2},\\ \dot{x}_{2} & =Bv-B\tilde{u}+h+q. \end{array}\right.\label{eq:general:nominal_system_input_v}$$ For control design purposes, the tracking error will be considered with respect to the state of this system. Consequently, one defines $$e = \begin{bmatrix}e_1\\ e_2\end{bmatrix}:=\begin{bmatrix}\tilde{y}\\ \dot{x}_d-x_2\end{bmatrix}=\begin{bmatrix}x_d-x_1\\ \dot{x}_d-x_2\end{bmatrix}\in\mathbb{R}^{2n}.$$ Accordingly, taking the time derivative of $e$, one can obtain the following open-loop error dynamics $$\left\{ \begin{array}{cl} \dot{e}_{1} & =e_{2},\\ \dot{e}_{2} & =\ddot{x}_{d}-Bv+B\tilde{u}-h-q. 
\end{array}\right.\label{eq:general:tracking error dynamics}$$ In order to stabilise system in a vicinity of zero, the following preliminary control law is proposed $$v:=B^{-1}\left(K_{p}\left(x_{d}-\hat{x}_{1}\right)+K_{d}\left(\dot{x}_{d}-\hat{x}_{2}\right)-h_{u}+\ddot{x}_{d}-w_c\right),\label{eq:general:control law}$$ where $K_{p},K_{d}\in\mathbb{R}^{n\times n}$ are diagonal matrices of constant positive gains, while $\hat{x}_1\in\mathbb{R}^n$, $\hat{x}_2\in\mathbb{R}^n$ and $w_c\in\mathbb{R}^n$ denote estimates of the states and of a disturbance, respectively. These estimates are computed by an observer defined further below. Term $h_{u}:\mathbb{R}^{4n}\rightarrow\mathbb{R}^{n}$ is a compensation function, designed in an attempt to attenuate the influence of $h$ on the closed-loop dynamics, and is defined using available signals as follows $$h_{u}:=h_{1}(\hat{x}_{1},\hat{x}_{2})+h_{2}(x_{d},\dot{x}_{d}),\label{eq:general:known dynamics compensation}$$ while $h_1$ and $h_2$ satisfy $$h_{1}(x_{1},x_{2})+h_{2}(x_{1},x_{2})=h(x_{1},x_{2}).\label{eq:general:known dynamics}$$ Next, in order to simplify the design of the observer, we rewrite dynamics . Firstly, we consider a new form which does not introduce any change to the system dynamics and reads $$\left\{ \begin{array}{cl} \dot{x}_{1} & =x_{2},\\ \dot{x}_{2} & =Bu+h_u+h-h_{u}+q. \end{array}\right.\label{eq:general:nominal system rewritten}$$ Secondly, according to the active disturbance rejection methodology, it is assumed that $$z_{3}:=q+h-h_{u}$$ describes an augmented state which can be regarded as the total disturbance. Correspondingly, one can introduce the extended state $z=\begin{bmatrix}z_{1}^{T} & z_{2}^{T} & z_{3}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{3n}$, where $z_1:=x_1$ and $z_2:=x_2$. As a result, the following extended form of dynamics can be established $$\left\{ \begin{array}{cl} \dot{z}_{1} & =z_{2},\\ \dot{z}_{2} & =Bu+h_{u}+z_{3},\\ \dot{z}_{3} & =\dot{q}+\dot{h}-\dot{h}_{u}.
\end{array}\right.\label{eq:general:extended system-1}$$ Now, in order to estimate state $z$ we define the following Luenberger-like observer $$\left\{ \begin{array}{cl} \dot{\hat{z}}_{1} & =K_{1}\left(z_{1}-\hat{z}_{1}\right)+\hat{z}_{2},\\ \dot{\hat{z}}_{2} & =K_{2}\left(z_{1}-\hat{z}_{1}\right)+\hat{z}_{3}+h_{u}+Bv,\\ \dot{\hat{z}}_{3} & =K_{3}\left(z_{1}-\hat{z}_{1}\right), \end{array}\right.\label{eq:general:observer}$$ where $\hat{z}=\begin{bmatrix}\hat{z}_{1}^{T} & \hat{z}_{2}^{T} & \hat{z}_{3}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{3n}$ denotes the estimate of $z$ and $K_{1},K_{2},K_{3}\in\mathbb{R}^{n\times n}$ are diagonal matrices of positive observer gains which are chosen based on linear stability criteria. Since the estimates $\hat{z}$ are expected to converge to the real values of $z$, let the observation errors be expressed as $\tilde{z}:=z-\hat{z}$. Taking the time derivative of $\tilde{z}$, using , and recalling one obtains the following dynamics $$\dot{\tilde{z}}=H_{o}\tilde{z}+C_{0}B\tilde{u}+C_{1}\dot{z}_{3},\label{eq:general:observator error dynamics}$$ where $$\label{eq:general:Ho_def} H_{o}=\begin{bmatrix}-K_{1} & I & 0\\ -K_{2} & 0 & I\\ -K_{3} & 0 & 0 \end{bmatrix}\in\mathbb{R}^{3n\times 3n},$$ $$C_{0}=\begin{bmatrix}0& -I& 0\end{bmatrix}^T,\ C_{1}=\begin{bmatrix}0& 0& I \end{bmatrix}^T\in\mathbb{R}^{3n\times n},$$ while $I$ stands for the identity matrix of size $n\times n$. Here, it is required that $H_{o}$ is Hurwitz, which can be guaranteed by a proper choice of the observer gains. Next, we recall tracking dynamics and feedback . It is proposed that the compensating term in , which partially rejects unknown disturbances, is defined by the estimate provided by observer , namely $w_c:=\hat{z}_3$.
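One common way to pick gains that make $H_o$ Hurwitz is the bandwidth parameterisation (a sketch under the assumption $n=1$, so each block of $H_o$ is scalar; the specific parameterisation is a standard ESO tuning choice, not a prescription taken from this paper). For a single axis, $\det(sI-H_o)=s^3+K_1 s^2+K_2 s+K_3$, so $K_1=3\omega_0$, $K_2=3\omega_0^2$, $K_3=\omega_0^3$ places a triple observer pole at $s=-\omega_0$:

```python
# Bandwidth parameterisation of the ESO gains (scalar case n = 1).
def eso_gains(w0):
    return 3.0 * w0, 3.0 * w0 ** 2, w0 ** 3

def char_poly_coeffs(k1, k2, k3):
    # coefficients of det(sI - H_o) for H_o = [[-k1, 1, 0],
    #                                          [-k2, 0, 1],
    #                                          [-k3, 0, 0]]
    return (1.0, k1, k2, k3)

def is_hurwitz_3rd_order(k1, k2, k3):
    # Routh-Hurwitz test for s^3 + k1 s^2 + k2 s + k3
    return k1 > 0 and k3 > 0 and k1 * k2 > k3

k1, k2, k3 = eso_gains(10.0)
# (s + 10)^3 = s^3 + 30 s^2 + 300 s + 1000, hence a Hurwitz H_o.
```

Since $k_1 k_2 = 9\omega_0^3 > \omega_0^3 = k_3$ for any $\omega_0>0$, the Routh test is satisfied for every positive bandwidth.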
Consequently, by substituting into , the following is obtained $$\dot{e}=H_{c}e+W_{1}\tilde{z}+C_{2}B\tilde{u},\label{eq:general:regulator error dynamics}$$ where $$\label{eq:general:Hc_def} H_{c}=\begin{bmatrix}0 & I\\ -K_{p} & -K_{d} \end{bmatrix},\ W_{1}=\begin{bmatrix}0 & 0 & 0\\ -K_{p} & -K_{d} & -I \end{bmatrix},\ C_{2}=\begin{bmatrix}0\\ I \end{bmatrix}\in\mathbb{R}^{2n\times n}$$ and $H_{c}$ is Hurwitz for $K_{p}\succ 0$ and $K_{d}\succ 0$. Further, in order to facilitate the design and analysis of the closed-loop system, we take advantage of a scaling operator defined by $$\Delta_m\left(\alpha\right):=\mathrm{diag}\left\{\alpha^{m-1}I,\, \alpha^{m-2}I,\, \ldots,\, I\right\}\in\mathbb{R}^{mn\times mn},$$ where $\alpha>0$. Then we define the following scaled tracking and observation errors $$\begin{aligned} \bar{e}:=&\left(\kappa\omega\right)^{-1}\Delta_2\left(\kappa\omega\right)e,\label{eq:general:regulator auxiliary errors}\\ \bar{z}:=&\omega^{-2}\Delta_3\left(\omega\right)\tilde{z},\label{eq:general:observer auxiliary errors} \end{aligned}$$ where $\omega\in\mathbb{R}_{+}$ is a scaling parameter which modifies the bandwidth of the observer, while $\kappa\in\mathbb{R}_{+}$ denotes a relative bandwidth of the feedback determined with respect to $\omega$. Embracing this notation, one can introduce the following scaled gains $$\label{eq:design:scaled_gains} \bar{K}_c:=\left(\kappa\omega\right)^{-1}K_c \Delta_2^{-1}\left(\kappa\omega\right),\, \bar{K}_o:=\omega^{-3}\Delta_3\left(\omega\right) \left[K_1^T\ K_2^T\ K_3^T\right]^T,$$ while $K_c:=\left[K_p\ K_d\right]\in\mathbb{R}^{n\times 2n}$.
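The scaling operator is easy to sanity-check numerically. A small sketch (scalar case $n=1$, so each identity block $I$ reduces to $1$; the numbers are illustrative):

```python
# Delta_m(alpha) = diag(alpha**(m-1), alpha**(m-2), ..., 1) acting on a vector.
def delta(m, alpha, vec):
    assert len(vec) == m
    return [alpha ** (m - 1 - i) * v for i, v in enumerate(vec)]

# Scaled observation error z_bar = omega**-2 * Delta_3(omega) * z_tilde,
# i.e. component-wise (z1, z2 / omega, z3 / omega**2).
omega = 10.0
z_tilde = [1.0, 2.0, 3.0]
z_bar = [zi / omega ** 2 for zi in delta(3, omega, z_tilde)]

# Handy identity used when inverting the scaled gains:
# Delta_m(alpha)**-1 == Delta_m(1/alpha).
recovered = delta(3, 1.0 / omega, delta(3, omega, z_tilde))
```

So the scaling weights the lower-order error components up, which is what lets a single parameter $\omega$ (or $\kappa\omega$) play the role of a bandwidth in the scaled dynamics.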
Additionally, exploring relationships outlined in the Appendix, one can rewrite dynamics and as follows $$\begin{aligned} \dot{\bar{e}}=&\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u},\label{eq:general:regulator auxiliary error dynamics}\\ \dot{\bar{z}}=&\omega\bar{H}_{o}\bar{z}+\omega^{-1} C_{0}B\tilde{u}+\omega^{-2} C_{1}\dot{z}_{3},\label{eq:general:observator auxiliary error dynamics}\end{aligned}$$ with $\bar{H}_c$ and $\bar{H}_o$ being Hurwitz matrices of the forms , defined in terms of the scaled gains $\bar{K}_c$ and $\bar{K}_o$, respectively. Similarly, $\bar{W}_1$ corresponds to $W_1$ parameterised by the new gains. Since $\bar{H}_c$ and $\bar{H}_o$ are Hurwitz, the following Lyapunov equations are satisfied $$\bar{P}_c\bar{H}_c^{T}+\bar{H}_c\bar{P}_c=-\bar{Q}_c,\ \bar{P}_o\bar{H}_o^{T}+\bar{H}_o\bar{P}_o=-\bar{Q}_o \label{eq:general:Lyapunov equation}$$ for some symmetric, positive definite matrices $\bar{Q}_c,\, \bar{P}_c\in\mathbb{R}^{2n\times 2n}$ and $\bar{Q}_o,\, \bar{P}_o\in\mathbb{R}^{3n\times 3n}$.

Stability analysis of the closed-loop cascaded control system
-------------------------------------------------------------

Lyapunov stability of the closed-loop system is considered now.
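Before proceeding, the Lyapunov equations above can be verified concretely. A hedged sketch for the controller part (scalar case $n=1$, hypothetical gains $\bar{K}_p=1$, $\bar{K}_d=2$ and $\bar{Q}_c=I$ chosen only for illustration): for $\bar{H}_c=\begin{bmatrix}0&1\\-k_p&-k_d\end{bmatrix}$ the equation $\bar{P}_c\bar{H}_c^T+\bar{H}_c\bar{P}_c=-\bar{Q}_c$ reduces to three scalar equations in the entries of the symmetric $\bar{P}_c$:

```python
# Closed-form solution of P*Hc^T + Hc*P = -Q for Hc = [[0, 1], [-kp, -kd]],
# Q = diag(q1, q3), P = [[p1, p2], [p2, p3]] symmetric (scalar case n = 1):
#   2*p2 + q1 = 0,  p3 - kp*p1 - kd*p2 = 0,  -2*(kp*p2 + kd*p3) + q3 = 0.
def lyap_2x2(kp, kd, q1=1.0, q3=1.0):
    p2 = -q1 / 2.0
    p3 = (q3 / 2.0 - kp * p2) / kd
    p1 = (p3 - kd * p2) / kp
    return p1, p2, p3

def residual(kp, kd, p1, p2, p3, q1=1.0, q3=1.0):
    # entries of P*Hc^T + Hc*P + Q, all of which should vanish
    r11 = 2 * p2 + q1
    r12 = p3 - kp * p1 - kd * p2
    r22 = -2 * (kp * p2 + kd * p3) + q3
    return r11, r12, r22

p1, p2, p3 = lyap_2x2(kp=1.0, kd=2.0)
# P is positive definite: p1 > 0 and det P = p1*p3 - p2**2 > 0.
```

The same computation with a general solver applies to $\bar{H}_o$; the point is only that for Hurwitz matrices such a positive definite solution always exists.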
For this purpose, a state which consists of the tracking, observation and input errors is defined as $$\bar\zeta=\begin{bmatrix}\bar{e}^T&\bar{z}^T&\tilde{u}^T\end{bmatrix}^T\in\mathbb{R}^{6n}.\label{eq:general:stability:errors}$$ A positive definite function is proposed as follows $$V(\bar{\zeta})=\frac{1}{2}\bar{e}^{T}\bar{P}_c\bar{e}+\frac{1}{2}\bar{z}^{T}\bar{P}_o\bar{z}+\frac{1}{2}\tilde{u}^{T}\tilde{u}.\label{eq:general:stability:lyapunov proposition}$$ Its derivative takes the form $$\begin{aligned} \dot{V}(\bar{\zeta})=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega \bar{e}^T \bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_0 B\tilde{u}\\&+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{z}_{3}-\tilde{u}^{T}T^{-1}\tilde{u}+\tilde{u}^{T}\dot{v}.\label{eq:general:stability:lyapunov derivative} \end{aligned}$$ The derivative of control law $v$ defined by can be expressed in terms of $\bar{\zeta}$ as (the details are outlined in the Appendix) $$\dot{v}=B^{-1}\left(\omega^{3}\left(\kappa^3\bar{K}_c\bar{H}_c\bar{e}+\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}\right)-\dot{h}_u+\dddot{x}_{d}\right),\label{eq:general:stability:control law derivative}$$ where $\bar{K}_c:=\left[\bar{K}_p\ \bar{K}_d\right]\in\mathbb{R}^{n\times 2n}$ and $\bar{W}_2:=\left[\bar{K}_c\ I \right]\in\mathbb{R}^{n\times{3n}}$.
Substituting (\[eq:general:stability:control law derivative\]) and $\dot{z}_{3}$ into (\[eq:general:stability:lyapunov derivative\]) leads to $$\begin{aligned} \dot{V}(\bar{\zeta})=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega\bar{e}^T\bar{P}_c\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_0 B\tilde{u}\\&+\left(\kappa\omega\right)^3 \tilde{u}^{T}B^{-1}\bar{K}_c\bar{H}_c\bar{e}+\omega^3\tilde{u}^{T}B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}-\tilde{u}^{T}T^{-1}\tilde{u}\\ &+\tilde{u}^{T}B^{-1}\dddot{x}_d+\tilde{u}^{T}B^{-1}\dot{h}_u+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\left(\dot h-\dot{h}_u\right)+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{q}(z_{1},z_{2},u,t).\label{eq:general:stability:lyapunov derivative split} \end{aligned}$$ In order to simplify the stability analysis, the derivative $\dot{V}$ will be decomposed into four terms defined as follows $$\begin{aligned} Y_1:=&-\frac{1}{2}\kappa\omega\bar{e}^{T}\bar{Q}_c\bar{e}-\frac{1}{2}\omega\bar{z}^{T}\bar{Q}_o\bar{z}+\kappa^{-1}\omega\bar{e}^T\bar{P}_c\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}\bar{e}^{T}\bar{P}_c C_2B\tilde{u}+\omega^{-1}\bar{z}^T\bar{P}_o C_0 B\tilde{u}\\&+\left(\kappa\omega\right)^3 \tilde{u}^{T}B^{-1}\bar{K}_c\bar{H}_c\bar{e}+\omega^3\tilde{u}^{T}B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\bar{z}-\tilde{u}^{T}T^{-1}\tilde{u},\\ Y_2:=& \tilde{u}^{T}B^{-1}\dddot{x}_d,\,Y_3:=\tilde{u}^{T}B^{-1}\dot{h}_u+\omega^{-2}\bar{z}^{T}\bar{P}_o C_1\left(\dot h-\dot{h}_u\right),\,Y_4:= \omega^{-2}\bar{z}^{T}\bar{P}_o C_1\dot{q}(z_{1},z_{2},u,t). \end{aligned}$$ Each term of $\dot{V}$ will now be considered separately.
First, consider $Y_{1}$, which mainly represents the influence of the input dynamics on the nominal system. Negative definiteness of this term will be the starting point for the further analysis of the closed-loop stability. Let it be rewritten using the matrix notation as $$Y_{1} =-\frac{1}{2}\omega\bar{\zeta}^{T}Q_{Y1}\bar{\zeta},\label{eq:general:stability:Y1}$$ where $$\begin{split} Q_{Y1}=\left[\begin{matrix}\kappa\bar{Q}_c &-\kappa^{-1}\bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)&Q_{Y1_{13}}\\-\kappa^{-1}\left(\bar{P}_c\bar{W}_1\Delta_3\left(\kappa\right)\right)^T&\bar{Q}_o&Q_{Y1_{23}}\\ Q_{Y1_{13}}^T&Q_{Y1_{23}}^T&2\omega^{-1} T^{-1} \end{matrix}\right]\in\mathbb{R}^{6n\times 6n} \end{split}$$ while $$\begin{aligned} Q_{Y1_{13}} =& -\kappa^{-1}\omega^{-2}\bar{P}_c C_2B-\kappa^3\omega^2\left(B^{-1}\bar{K}_c\bar{H}_c\right)^T,\\ Q_{Y1_{23}}=&-\omega^{-2}\bar{P}_o C_0 B-\omega^2\left(B^{-1}\left(\kappa\bar{K}_c\bar{W}_1 \Delta_3\left(\kappa\right)+\bar{W}_2\Delta_3\left(\kappa\right)\bar{H}_o\right)\right)^T. \end{aligned}$$ It can be shown that there exist sets $\Omega_v, \mathrm{K}_v \subset \mathbb{R}_{+}$ such that for every $\omega\in\Omega_v$ and $\kappa\in\mathrm{K}_v$ the matrix $Q_{Y1}$ remains positive definite. Both sets $\Omega_v$ and $\mathrm{K}_v$ strongly depend on the time-constant matrix $T$ and the input matrix $B$ of the nominal system. In the absence of other disturbances the system would remain asymptotically stable for such a choice of the parameters $\omega$ and $\kappa$. The influence of the other elements of $\dot{V}(\bar{\zeta})$ will be considered in terms of upper bounds which can be imposed on them.

\[assu:desired trajectory\] Let the desired trajectory $x_{d}$ be chosen such that the norms of $x_{d},\dot{x}_{d},\ddot{x}_{d},\dddot{x}_{d}$ are bounded, respectively, by constant positive scalars $x_{b0},x_{b1},x_{b2},x_{b3}\in\mathbb{R}_{+}$.

Establishing an upper bound for the norm of $Y_{2}$ is straightforward using the Cauchy-Schwarz inequality.
$$\begin{aligned} Y_{2} & =\tilde{u}^{T}B^{-1}\dddot{x}_{d},\nonumber \\ \left\Vert Y_{2}\right\Vert & \leq\left\Vert \tilde{u}\right\Vert \cdot\left\Vert B^{-1}\dddot{x}_{d}\right\Vert \nonumber \\ & \leq\left\Vert \bar{\zeta}\right\Vert \left\Vert B^{-1}\right\Vert x_{b3}.\label{eq:general:stability:Y2}\end{aligned}$$ Now, $Y_{3}$ is to be considered. This term comes from the imperfect compensation of the known dynamics in the nominal system, and it can be further split as follows $$Y_{31}:=\omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(\dot{h}-\dot{h}_{u}\right),\ Y_{32}:=\tilde{u}^{T}B^{-1}\dot{h}_{u}.\label{eq:general:stability:Y3}$$

\[assu:bounded dynamics\] Let functions $h_{1}(a,b)$ and $h_{2}(a,b)$ be defined such that the norms of the partial derivatives $\frac{\partial}{\partial a}h_{1}(a,b)$, $\frac{\partial}{\partial b}h_{1}(a,b)$, $\frac{\partial}{\partial a}h_{2}(a,b)$, $\frac{\partial}{\partial b}h_{2}(a,b)$ are bounded for every $a,b\in\mathbb{R}^{n}$ by $h_{1a},h_{1b},h_{2a},h_{2b}\in\mathbb{R}_{+}$, respectively.
By applying the chain rule to calculate the derivatives of each function and substituting the difference of the error and the desired trajectory for the state variables, term $Y_{31}$ can be expressed as $$Y_{31}=\omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(W_{h1}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix} - W_{h2}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3(\kappa)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)+W_{h3}\left(\omega\bar{H}_{o}\bar{z}+\omega^{-1}C_{0}B\tilde{u}\right)\right),\label{eq:general:stability:Y31 equation}$$ where $$\begin{aligned} W_{h1} & =\begin{bmatrix}\left(\frac{\partial h_{1}}{\partial z_{1}}+\frac{\partial h_{2}}{\partial z_{1}}-\frac{\partial h_{2}}{\partial x_{d}}-\frac{\partial h_{1}}{\partial\hat{z}_{1}}\right) & \left(\frac{\partial h_{1}}{\partial z_{2}}+\frac{\partial h_{2}}{\partial z_{2}}-\frac{\partial h_{2}}{\partial\dot{x}_{d}}-\frac{\partial h_{1}}{\partial\hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\ W_{h2} & =\begin{bmatrix}\left(\frac{\partial h_{1}}{\partial z_{1}}+\frac{\partial h_{2}}{\partial z_{1}}-\frac{\partial h_{1}}{\partial\hat{z}_{1}}\right) & \kappa\omega\left(\frac{\partial h_{1}}{\partial z_{2}}+\frac{\partial h_{2}}{\partial z_{2}}-\frac{\partial h_{1}}{\partial\hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\ W_{h3} & =\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \omega\frac{\partial h_{1}}{\partial\hat{z}_{2}} & 0\end{bmatrix}.
\nonumber\end{aligned}$$ The norm of this term is bounded by $$\begin{aligned} \left\Vert Y_{31}\right\Vert \leq & \omega^{-2}\left\Vert \bar{\zeta}\right\Vert \left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\left(2h_{1a}+2h_{2a}\right)x_{b1}+\left(2h_{1b}+2h_{2b}\right)x_{b2}\right) \nonumber \\ & +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\omega^{-1}\kappa\left\Vert W_{h2b}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)+\omega^{-3}\kappa^{-1}\left\Vert W_{h2b}\right\Vert \left\Vert C_{2}B\right\Vert\right) \nonumber \\ & +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\omega^{-1}\left\Vert W_{h3b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert +\omega^{-2}\left\Vert B\right\Vert h_{1b} \right), \label{eq:general:stability:Y31 bound}\end{aligned}$$ where $W_{h2b}=\begin{bmatrix}2h_{1a} + h_{2a} & \kappa\omega\left(2h_{1b} + h_{2b}\right)\end{bmatrix}$ and $W_{h3b} = \begin{bmatrix}h_{1a} & \omega h_{1b} & 0 \end{bmatrix}$. Having established the upper bound of $Y_{31}$, we can perform a similar analysis with respect to $Y_{32}$. Let $Y_{32}$ be rewritten as $$Y_{32} = \tilde{u}^T B^{-1} \left(W_{h4}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix} - W_{h5}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}\Delta_3(\kappa)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)-W_{h6}\left(\omega\bar{H}_{o}\bar{z}+\omega^{-1}C_{0}B\tilde{u}\right)\right), \label{eq:general:stability:Y32 equation}$$ where $$\begin{aligned} W_{h4} & =\begin{bmatrix}\left(\frac{\partial h_{2}}{\partial x_{d}}+\frac{\partial h_{1}}{\partial \hat{z}_{1}}\right) & \left(\frac{\partial h_{2}}{\partial \dot{x}_{d}}+\frac{\partial h_{1}}{\partial \hat{z}_{2}}\right)\end{bmatrix}, \nonumber \\ W_{h5} & =\begin{bmatrix}\frac{\partial h_{1}}{\partial \hat{z}_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial \hat{z}_{2}}\end{bmatrix}, \nonumber \\ W_{h6} & = W_{h3}.
\nonumber \end{aligned}$$ An upper bound on the norm of $Y_{32}$ can be expressed by the following inequality $$\begin{aligned} \left\Vert Y_{32} \right\Vert \leq & \left\Vert \bar{\zeta}\right\Vert \left\Vert B^{-1}\right\Vert \left(\left(h_{1a}+h_{2a}\right)x_{b1}+\left(h_{1b}+h_{2b}\right)x_{b2}\right) \nonumber \\ & +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert B^{-1}\right\Vert \left(\kappa\omega\left\Vert W_{h5b}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)+\left(\kappa\omega\right)^{-1}\left\Vert W_{h5b}\right\Vert \left\Vert C_{2}B\right\Vert \right) \nonumber \\ & +\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert B^{-1}\right\Vert \left(\omega\left\Vert W_{h6b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert +\omega^{-1}\left\Vert W_{h6b}\right\Vert \left\Vert C_{0}B\right\Vert \right),\label{eq:general:stability:Y_32 bound}\end{aligned}$$ where $W_{h5b} = \begin{bmatrix}h_{1a} & \kappa\omega h_{1b}\end{bmatrix}$ and naturally $W_{h6b}=W_{h3b}$. A remark can be made now about the structure of $W_{h2}$, $W_{h3}$, $W_{h4}$ and $W_{h5}$. It may be recognised that the elements of these matrices can be divided into a group of derivatives calculated with respect to the first argument and a group calculated with respect to the second one. The former are not scaled by either the observer or the regulator bandwidth, while the latter are scaled by a factor of $\kappa\omega$ or $\omega$. As will be shown later in the analysis, this difference has a significant influence on the system stability and on the ability of the controller to reduce the tracking errors. Lastly, an upper bound needs to be established for $Y_{4}$ to complete the stability analysis. This final term comes from the nominal disturbance $q(z_1, z_2, u, t)$ alone.
By the chain rule it can be shown that $$Y_{4} = \omega^{-2}\bar{z}^{T}\bar{P}_{o}C_{1}\left(W_{q1}\begin{bmatrix}\dot{x}_{d}\\\ddot{x}_{d}\end{bmatrix}+\kappa\omega W_{q2}\bar{H}_{c}\bar{e}+\kappa^{-1}\omega W_{q2}\bar{W}_{1}\Delta_3\left(\kappa\right)\bar{z}+\left(\kappa\omega\right)^{-1}W_{q2}C_{2}B\tilde{u}-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}\right),\label{eq:general:stability:Y4}$$ where $W_{q1}=\begin{bmatrix}\frac{\partial q}{\partial z_1} & \frac{\partial q}{\partial z_2}\end{bmatrix}$ and $W_{q2} = \begin{bmatrix}\frac{\partial q}{\partial z_1} & \kappa\omega\frac{\partial q}{\partial z_2}\end{bmatrix}$.

\[assu:disturbance derivatives\] Let the partial derivatives $\frac{\partial}{\partial z_{1}}q(z_{1},z_{2},u,t),\frac{\partial}{\partial z_{2}}q(z_{1},z_{2},u,t),\frac{\partial}{\partial u}q(z_{1},z_{2},u,t),\frac{\partial}{\partial t}q(z_{1},z_{2},u,t)$ be defined in the whole domain and let their norms be bounded by constants $q_{z1},q_{z2},q_{u}$ and $q_{t}\in\mathbb{R}_{+}$, respectively.

Under Assumption \[assu:disturbance derivatives\] the norm of $Y_{4}$ is bounded by $$\begin{aligned} \left\Vert Y_{4}\right\Vert \leq & \omega^{-2}\left\Vert \bar{\zeta}\right\Vert \left\Vert \bar{P}_{o}C_{1}\right\Vert \left(q_{z1}x_{b1}+q_{z2}x_{b2}+\left\Vert B\right\Vert q_{z2}+\left\Vert T^{-1}\right\Vert q_{u}+\left\Vert \bar{P}_{o}C_{1}\right\Vert q_{t}\right)\label{eq:general:stability:Y4 bound}\\ & +\kappa\omega^{-1}\left\Vert \bar{\zeta}\right\Vert ^{2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left\Vert W_{q2}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right).\nonumber \end{aligned}$$ With general bounds for each term of $\dot{V}(\bar{\zeta})$ established, conclusions concerning the system stability can finally be drawn.
For the sake of convenience, let an auxiliary measure of the negative definiteness of the Lyapunov function derivative, $\Lambda_V$, and of its perturbation, $\Gamma_V$, be defined as $$\begin{aligned} \Lambda_V := & \frac{1}{2}\omega\lambda_{\min}(Q_{Y1}) -\kappa\omega\left\Vert B^{-1}\right\Vert \left\Vert W_{h5b}\right\Vert \left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right)-\omega\left\Vert B^{-1}\right\Vert \left\Vert W_{h6b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert \nonumber \\ & -2h_{1b}-\omega^{-1}\left\Vert W_{h3b}\right\Vert \left\Vert \bar{H}_{o}\right\Vert -\kappa\omega^{-1}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(\left\Vert W_{h2b}\right\Vert +\left\Vert W_{q2}\right\Vert \right)\left(\left\Vert \bar{H}_{c}\right\Vert +\left\Vert \bar{W}_{1}\right\Vert \right) \nonumber \\ & -\omega^{-2}\left\Vert B\right\Vert h_{1b}-\omega^{-3}\kappa^{-1}\left\Vert W_{h2b}\right\Vert \left\Vert C_{2}B\right\Vert, \label{eq:general:stability:Lyapunov negative definiteness} \\ \Gamma_V := & \left\Vert B^{-1}\right\Vert \left(\left(h_{2a}+h_{1a}\right)x_{b1}+\left(h_{2b}+h_{1b}\right)x_{b2}\right) \nonumber \\ & +\omega^{-2}\left\Vert \bar{P}_{o}C_{1}\right\Vert \left(q_{z1}x_{b1}+q_{z2}x_{b2}+\left\Vert B\right\Vert q_{z2}+\left\Vert T^{-1}\right\Vert q_{u}+\left\Vert \bar{P}_{o}C_{1}\right\Vert q_{t}\right), \label{eq:general:stability:Lyapunov perturbation}\end{aligned}$$ where $\lambda_{\min}(Q)$ stands for the smallest eigenvalue of matrix $Q$. Then the upper bound of $\dot{V}(\bar\zeta)$ can be expressed as $$\dot{V}(\bar\zeta) \leq -\Lambda_V\left\Vert \bar\zeta \right\Vert^2 + \Gamma_V\left\Vert \bar\zeta \right\Vert. \label{eq:general:stability:Lyapunov derivative bound}$$ Now, the following conditions can be stated 1. \[enum:general:stability:condition 1\] $\omega\in\Omega_v, \kappa\in\mathrm{K}_v$, 2.
\[enum:general:stability:condition 2\] $\Lambda_V > 0$, and the following theorem concludes the presented analysis. The perturbed cascade system (\[eq:general:nominal system\])-(\[eq:general:input dynamics\]) satisfying Assumptions \[assu:desired trajectory\]-\[assu:disturbance derivatives\], controlled by feedback (\[eq:general:control law\]) supported by the extended state observer (\[eq:general:observer\]), remains practically stable if there exist symmetric, positive definite matrices $\bar{Q}_o$ and $\bar{Q}_c$ such that conditions \[enum:general:stability:condition 1\] and \[enum:general:stability:condition 2\] are simultaneously satisfied. The scaled tracking errors $\bar\zeta$ are then bounded as follows $$\label{eq:control:conclusion} \lim_{t\rightarrow\infty}\left\Vert \bar{\zeta}(t)\right\Vert\leq \frac{\Gamma_{V}}{\Lambda_{V}}.$$ The foregoing proposition remains valid only if Assumptions \[assu:desired trajectory\]-\[assu:disturbance derivatives\] are satisfied. While Assumption \[assu:desired trajectory\] concerns the desired trajectory only and can easily be fulfilled for any system with state $x_1$ defined on $\mathbb{R}^n$, a closer look at the remaining assumptions ought to be taken now. Similar in nature, both concern imperfectly known parts of the system dynamics, the difference being whether an attempt to implicitly compensate these dynamics is made or not. As a known dynamic term satisfying Assumption \[assu:bounded dynamics\] can also be treated as an unknown disturbance, without loss of generality only Assumption \[assu:disturbance derivatives\] has to be commented on here. It can be noted that for many commonly considered systems this assumption cannot be satisfied. A mechanical system equipped with revolute kinematic pairs is an example of such a system: due to the Coriolis and centrifugal forces, its dynamics have neither a bounded time derivative nor a bounded partial derivative with respect to the second state variable.
Engineering practice shows, nonetheless, that for systems in which the cross-coupling is insignificant due to a proper mass distribution, this assumption can be approximately satisfied, at least in a bounded set of the state space, and the stability analysis holds. The requirement that the partial derivatives of any disturbance in the system be bounded is a restrictive one, yet less conservative than the boundedness of the time derivative of the total disturbance commonly assumed in the ADRC analysis. In this sense, the presented analysis is more liberal than the ones considered in the literature, and it can be expected that the enforced assumptions are easier to justify.

Numerical simulations
=====================

To further investigate the behaviour of the system in the presence of unmodelled dynamics governing the input signal, numerical simulations have been conducted. The model of the system has been implemented in the Matlab-Simulink environment. The second order, single degree of freedom system and the first order dynamics of the input have been modelled according to the following equations $$\left\{ \begin{array}{cl} \dot{x}_{1} & =x_{2},\\ \dot{x}_{2} & =u, \end{array}\right.\label{eq:simulation:system}$$ where $$\dot{u}=\frac{1}{T}\left(-u+v\right)\label{eq:simulation:input}$$ and $v$ is the controllable input of the system. Parameters $T$ and $\omega$ of the controller were modified in the simulations to investigate how they affect the closed-loop stability and the tracking accuracy. The chosen parameters of the system are presented in Table \[tab:simulation:gains\]. The desired trajectory $x_d$ was selected as a sine wave with unitary amplitude and a frequency of $\unit[\frac{10}{2\pi}]{Hz}$.
  $\bar{K}_{1}$   $\bar{K}_{2}$   $\bar{K}_{3}$   $\bar{K}_{p}$   $\bar{K}_{d}$   $\kappa$
  --------------- --------------- --------------- --------------- --------------- ----------
  $3$             $3$             $1$             $1$             $2$             $0.01$

  : Auxiliary gains of the observer and controller\[tab:simulation:gains\]

Selected results of the simulations are presented in Figs. \[fig:simulation:T01 adrc\]-\[fig:simulation:T1 pd\]. Tracking errors of the two state variables are presented in the plots: the error of $x_{1}$ is plotted with a solid line, while $e_{2}$ is plotted with a dashed line in each figure. The integral of the squared error $e_1$ (ISE criterion) and the integral of the squared control signal $v$ (ISC criterion) have been calculated for each simulation and are given above the plots to quantify the obtained tracking results. Tests were performed for different values of $T$ and $\omega$, as well as with the compensation term $w_c=\hat{z}_{3}$ enabled or disabled, cf. . It can be clearly seen that the existence of an upper bound of $\Omega_v$ is confirmed by the simulation results, as predicted by Eq. . As expected, the value of this bound decreases as the time constant $T$ increases. In the conducted simulations it was not possible to observe and confirm the existence of any lower bound imposed on $\Omega_v$, and for an arbitrarily small $\omega$ the stability of the system was maintained. Secondly, the influence of the disturbance rejection term $\hat{z}_{3}$ is clearly visible and is twofold. For $\omega$ chosen to satisfy stability condition \[enum:general:stability:condition 1\], it can be observed that the presence of the disturbance estimate allows a significant decrease of the tracking error $e_2$ caused by the input dynamics, which were not modelled during the controller synthesis. Basically, the residual value of error $e_2$ becomes smaller for a higher value of the bandwidth $\omega$. The error trajectory $e_1$ is also slightly modified; however, this effect is negligible according to the ISE criterion.
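The simulation setup can be re-created in a short script. The sketch below is a hedged re-implementation, not the original Matlab-Simulink model: it assumes the scalar case with $B=1$, $h=q=0$, recovers $K_1=3\omega$, $K_2=3\omega^2$, $K_3=\omega^3$, $K_p=(\kappa\omega)^2$, $K_d=2\kappa\omega$ from the auxiliary gains in Table \[tab:simulation:gains\], and uses illustrative values of $T$, $\omega$, a larger $\kappa$ than the table's $0.01$ and a slower unit-frequency reference, all chosen here only so that the example settles quickly:

```python
import math

def simulate_adrc(T=0.002, omega=50.0, kappa=0.2, dt=1e-4, t_end=10.0):
    # unscaled gains recovered from K1_bar = K2_bar = 3, K3_bar = 1,
    # Kp_bar = 1, Kd_bar = 2 via the scaled-gain definitions
    K1, K2, K3 = 3 * omega, 3 * omega ** 2, omega ** 3
    kw = kappa * omega
    Kp, Kd = kw ** 2, 2 * kw
    x1 = x2 = u = 0.0                     # plant and actuator states
    z1h = z2h = z3h = 0.0                 # extended state observer states
    err_tail, t = 0.0, 0.0
    while t < t_end:
        xd, xd1, xd2 = math.sin(t), math.cos(t), -math.sin(t)
        if t > 0.8 * t_end:               # record the residual tracking error
            err_tail = max(err_tail, abs(xd - x1))
        # preliminary feedback with disturbance compensation w_c = z3_hat
        v = Kp * (xd - z1h) + Kd * (xd1 - z2h) + xd2 - z3h
        # extended state observer (B = 1, h_u = 0)
        e_obs = x1 - z1h
        dz1, dz2, dz3 = K1 * e_obs + z2h, K2 * e_obs + z3h + v, K3 * e_obs
        # plant x1' = x2, x2' = u and actuator u' = (1/T)(-u + v), Euler step
        x1, x2, u = x1 + dt * x2, x2 + dt * u, u + dt * (-u + v) / T
        z1h, z2h, z3h = z1h + dt * dz1, z2h + dt * dz2, z3h + dt * dz3
        t += dt
    return err_tail

steady_err = simulate_adrc()
```

With the observer bandwidth well below the actuator pole $1/T$, the residual error stays small; pushing $\omega$ towards $1/T$ reproduces the loss of stability discussed above.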
Nonetheless, the usage of the disturbance estimate leads to a significant shrinkage of the set $\Omega_v$. It is plainly visible that removing the $\hat{z}_{3}$ estimate may restore the stability of the system in comparison with the simulation scenarios obtained using the corresponding ADRC controller.

Experimental results
====================

Practical experiments have been undertaken in order to further investigate the considered control problem. All experiments were carried out using the robotic telescope mount developed at the Institute of Automatic Control and Robotics of Poznan University of Technology, [@KPKJPKBJN:2019]. The plant consists of a robotic mount and an astronomical telescope with a mirror of diameter 0.5 m. The robotic mount alone includes two axes driven independently by $\unit[24]{V}$ permanent magnet synchronous motors (PMSM) with high-precision ring encoders producing absolute position measurement with 32-bit resolution. The control algorithms have been implemented in C++ using a Texas Instruments AM4379 Sitara processor with an ARM Cortex-A9 core clocked at $\unit[600]{MHz}$. Besides the control structure, the prepared firmware contains several additional blocks necessary for conducting proper astronomical research. The controller itself is implemented in a cascade form which consists of independent current and position loops. Both loops work simultaneously at a frequency of $\unit[10]{kHz}$. The current loop, designed to precisely track the desired torque of the motor, employs the Park-Clarke transformation of the measured phase currents to express the motor dynamics in *q-d* coordinates.
Both *q* and *d* axes are then controlled by independent PI regulators with a feedforward term and anti-windup correction which satisfy the following equation $$\begin{aligned}\dot{v} & =k_{i}\left(\tilde{i}-k_{s}\left(k_{p}\tilde{i}+v+u_{r}-\mathrm{sat}\left(k_{p}\tilde{i}+v+u_{r},U_{m}\right)\right)\right),\\ u & =\mathrm{sat}\left(k_{p}\tilde{i}+v+u_{r},U_{m}\right), \end{aligned} \label{eq:experiments:current loop}$$ where $\tilde{i}$ stands for the current tracking error, $v$ is the integrator state, $u$ is the regulator output, $u_{r}$ expresses the feedforward term, $k_{p}$, $k_{i}$ and $k_{s}$ are positive regulator gains, and finally $\mathrm{sat}(u^{*},U_{m})$ is the saturation function of signal $u^{*}$ up to the value $U_{m}$. The output voltage $u$ is generated using a PWM output. The current in the *d* axis is stabilised at zero, while the current in the *q* axis tracks the desired current of the axis. The relation between the desired torque and the desired current is modelled as a constant gain equal to $\unit[2.45]{\frac{Nm}{A}}$. The desired torque is computed in the position loop by the active disturbance rejection based controller designed for the second order mechanical system modelled as follows $$\left\{ \begin{array}{cl} \dot{x}_{1} & =x_{2}\\ \dot{x}_{2} & =B\tau+\smash{\underbrace{f_{c}\cdot\mathrm{tanh}(f_{t}\cdot x_{2})}_{h(x_{2})}}, \end{array}\right.\label{eq:experiment:model}\\*[0.625\normalbaselineskip]$$ where $x_{1}\in\mathbb{R}^2$ and $x_{2}\in\mathbb{R}^2$ are the positions and velocities of the axes, $B$ is the input matrix with diagonal coefficients $B_{1,1}=\frac{1}{5},B_{2,2}=\frac{1}{30}$, $f_c$ is the constant positive Coulomb friction coefficient, while $f_{t}=10^{3}$ expresses a scaling term which defines the steepness of the friction model. The velocity of the axis is approximated in the experiments using either the observer estimate $\hat{z}_{2}$ or the desired trajectory derivative $\dot{x}_{d}$.
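The anti-windup mechanism of the current loop can be sketched in discrete time as follows (a hedged illustration of the back-calculation scheme in the equation above; the gains, limit and step size are made up for the example and are not the firmware's):

```python
# Discrete-time PI with back-calculation anti-windup, following the structure
# v' = ki*(i_err - ks*(u_raw - sat(u_raw))), u = sat(u_raw).
def sat(x, limit):
    return max(-limit, min(limit, x))

def pi_antiwindup_step(i_err, v, u_ff, kp, ki, ks, u_max, dt):
    u_raw = kp * i_err + v + u_ff        # unsaturated regulator output
    u = sat(u_raw, u_max)                # actual (PWM-limited) output
    # while saturated, the ks-term bleeds off the excess so v cannot wind up
    v += dt * ki * (i_err - ks * (u_raw - u))
    return u, v

# Illustrative run with a persistent error that keeps the output saturated:
v = 0.0
for _ in range(1000):
    u, v = pi_antiwindup_step(i_err=1.0, v=v, u_ff=0.0,
                              kp=2.0, ki=50.0, ks=0.5, u_max=1.0, dt=1e-3)
```

Here the integrator state settles at a finite value instead of growing without bound, which is exactly what the $k_s$ correction buys when the PWM voltage limit $U_m$ is active.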
The assumed model of the friction force is strongly local, in the sense that different values of $f_{c}$ are required for different accelerations at the time instant when the sign of the velocity changes. This locality was overcome during the experiments by manual changes of the $f_{c}$ coefficient. While the torque generated by the motor is treated as the input signal of the mechanical system, there exist residual dynamics defined by the current loop which are not modelled in the position loop. Here, we assume that these dynamics can be approximated by , and thus we can infer the stability according to the mathematical analysis considered in Section 2. Other disturbances come chiefly from the flexibility of the mount, the ignored cross-coupling reactions between the joints and the torque ripples generated by the synchronous motors. Though some of these disturbances do not globally satisfy the assumptions accepted for the theoretical analysis of the system stability, in the considered scenario the influence of these dynamics is insignificant. Due to the small desired velocities chosen in the experiment, these assumptions can be approximately satisfied here. All gains of the controllers chosen for the experiments are collected in Table \[tab:experiment:gains\].

            Horizontal axis    Vertical axis
  --------- ------------------ -------------------
  $K_{1}$   $1.2\cdot10^{3}$   $2.4\cdot10^{2}$
  $K_{2}$   $5.7\cdot10^{5}$   $2.28\cdot10^{4}$
  $K_{3}$   $10^{8}$           $0.8\cdot10^{6}$
  $K_{p}$   $225$              $225$
  $K_{d}$   $24$               $24$

  : \[tab:experiment:gains\]Gains of the controllers and observers

Here we present selected results of the experiments. In the investigated experimental scenarios both axes were in motion simultaneously, and the desired trajectory was designed as a sine wave with a period of $\unit[30]{s}$ and a maximum velocity of $50v_{s}$ in the first experiment and $500v_{s}$ in the second, where $v_{s}=\unit[7.268\cdot10^{-5}]{\frac{rad}{s}}$ stands for the nominal velocity of stars in the night sky.
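For intuition about these magnitudes, a quick unit conversion (illustrative arithmetic only) shows that $v_s$ is the familiar sidereal tracking rate of roughly 15 arcseconds per second, so the two experiments command about 50 and 500 times that speed:

```python
import math

v_s = 7.268e-5                               # rad/s, nominal stellar velocity
arcsec_per_rad = 180.0 / math.pi * 3600.0

v_s_arcsec = v_s * arcsec_per_rad            # sidereal rate, ~15 arcsec/s
v_fast_deg = 500 * v_s * 180.0 / math.pi     # second experiment, ~2.1 deg/s
```

Even the faster trajectory thus stays far below typical slewing speeds, consistent with the small-velocity assumption invoked above.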
During system operation, significant changes of the friction forces are clearly visible and the influence of the compensation term can be easily noticed. Since the friction terms vary significantly around zero velocity, the tracking accuracy is decreased there. In such cases the disturbance estimation is not performed fast enough. Furthermore, in the considered application one cannot select larger gains of the observer due to the additional dynamics imposed by the actuator and delays in the control loop. Here, one can recall the relationship which states that the tracking precision depends on the bound of $\Gamma_V$, cf. . Thus, one can expect that the tracking accuracy increases in operating conditions where the disturbances become slowly time-varying. This is well illustrated in the experiments, where the friction terms change over a wide range. Each experiment presents results obtained with different approaches to the design of the $h_{u}$ term. Once again, the integral squared error was calculated for each of the presented plots to ease the evaluation of the obtained results. A series of conclusions can be drawn from the presented results. Due to the inherently more disturbed dynamics of the horizontal axis, any improvement using friction compensation for slow trajectories is hardly achieved. Meanwhile, the compensation term based on the desired trajectory effectively decreases the tracking error bound in all other experiments. As may be expected, the compensation function based on estimates of the state variables is unable to provide acceptable tracking quality due to the inherent noise in the signal and the existence of input dynamics. It can be noted that in the first experiment the friction compensation term allows one to decrease the bound of the tracking error, while the overall quality expressed by the ISE criterion is worse in comparison to that obtained in the experiment without the corresponding term in the feedback.
This behaviour is not seen in the second experiment, in which a significant improvement was obtained for both axes in terms of the error bound as well as the ISE criterion.

Conclusions
===========

This paper is focused on the application of an ADRC controller to a class of second-order systems subject to differentiable disturbances. In particular, the system is analysed taking into account the presence of first-order input dynamics and unmodelled terms which may include cross-coupling effects between the state variables. By means of Lyapunov analysis, general conditions of practical stability are discussed. It is proved that, even in the presence of additional input dynamics, boundedness of the partial derivatives of the total disturbance can be a sufficient requirement to guarantee stability of the closed-loop system. Using numerical simulations, the considered controller is compared against a simple PD-based regulator. The obtained results confirm that in the case of input dynamics, the bandwidth of the extended observer is limited, which restricts the effectiveness of the ADRC approach. Lastly, practical results of employing an ADRC regulator in the task of trajectory tracking for a robotised astronomical telescope mount are presented. In this application, it is assumed that friction effects are modelled inaccurately and a local drive control loop is treated as unknown input dynamics. The obtained results illustrate that the considered control algorithm can provide a high tracking accuracy. Further research on this topic may include attempts to explore in more detail the conditions for a feasible selection of the observer parameters in order to guarantee the stability of the closed-loop system. Other forms of input dynamics and observer models can also be considered in future work.
Appendix
========

Selected properties of scaled dynamics
--------------------------------------

Assuming that the errors and gains are scaled according to , the following relationships are satisfied: $$\begin{aligned} \Delta_2\left(\kappa\omega\right)H_c \Delta_2^{-1}\left(\kappa\omega\right) =\kappa\omega\bar{H}_c,\ \Delta_3\left(\omega\right)H_o \Delta_3^{-1}\left(\omega\right) =\omega\bar{H}_o,\\ \Delta_2\left(\kappa\omega\right)W_1=W_1,\, W_1\Delta_3^{-1}\left(\omega\right)=\bar{W}_1\Delta_3\left(\kappa\right)\\ W_2=\Delta_3\left(\kappa\omega\right)\bar{W}_2.\label{eq:app:scalled_terms} \end{aligned}$$

Computation of $\dot{v}$
------------------------

Taking advantage of the estimate $\bar{z}$ and assuming that $w_c:=\hat{z}_3$, one can rewrite as follows $$v=B^{-1}\left(K_c e + K_c \begin{bmatrix}\tilde{z}_1^T&\tilde{z}_2^T \end{bmatrix}^T-h_u+\ddot{x}_d-\hat{z}_3\right),$$ where $K_c := \left[K_p\ K_d\right]$. Equivalently, one has $$v=B^{-1}\left(K_c e - W_2\tilde{z}-h_u+\ddot{x}_d-z_3\right).$$ Consequently, the time derivative of $v$ satisfies $$\begin{aligned} \dot{v}&=B^{-1}\left(K_c \dot{e}+W_2\dot{\tilde{z}}-\dot{h}_u+\dddot{x}_d-\dot{z}_3\right){\stackrel{(\ref{eq:general:regulator error dynamics}),(\ref{eq:general:observator error dynamics})}{=}}B^{-1}\left(K_c H_ce+K_cW_1\tilde{z}+K_c C_2B\tilde{u}+W_2H_o\tilde{z}\right.\\ &\quad\left.+W_2C_oB\tilde{u}+W_2C_1\dot{z}_3-\dot{z}_3-\dot{h}_u+\dddot{x}_d\right)=B^{-1}\left(K_c H_ce+K_cW_1\tilde{z}+W_2H_o\tilde{z}-\dot{h}_u+\dddot{x}_d\right).
\end{aligned}$$

Computations of $Y_3$ and $Y_4$
-------------------------------

By the chain rule it can be shown that $$\dot{h}_1(z_1, z_2) = \begin{bmatrix}\frac{\partial h_{1}}{\partial z_{1}} & \frac{\partial h_{1}}{\partial z_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}-\begin{bmatrix}\frac{\partial h_{1}}{\partial z_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial z_{2}}\end{bmatrix}\dot{\bar{e}},\ \dot{h}_2(z_1,z_2) = \begin{bmatrix}\frac{\partial h_{2}}{\partial z_{1}} & \frac{\partial h_{2}}{\partial z_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}-\begin{bmatrix}\frac{\partial h_{2}}{\partial z_{1}} & \kappa\omega\frac{\partial h_{2}}{\partial z_{2}}\end{bmatrix}\dot{\bar{e}},$$ $$\dot{h}_1(\hat{z}_1,\hat{z}_2)=\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \frac{\partial h_{1}}{\partial\hat{z}_{2}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}-\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \kappa\omega\frac{\partial h_{1}}{\partial\hat{z}_{2}}\end{bmatrix}\dot{\bar{e}}-\begin{bmatrix}\frac{\partial h_{1}}{\partial\hat{z}_{1}} & \omega\frac{\partial h_{1}}{\partial\hat{z}_{2}} & 0\end{bmatrix}\dot{\bar{z}},\ \dot{h}_2(x_d,\dot{x}_d)=\begin{bmatrix}\frac{\partial h_{2}}{\partial x_{d}} & \frac{\partial h_{2}}{\partial\dot{x}_{d}}\end{bmatrix}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}.$$ From here, the following hold $$\begin{aligned} \dot{h} - \dot{h}_u &= W_{h1}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}-W_{h2}\dot{\bar{e}}+W_{h3}\dot{\bar{z}},\ \dot{h}_u = W_{h4}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}-W_{h5}\dot{\bar{e}}-W_{h6}\dot{\bar{z}}, \end{aligned}$$ which leads to the solution for $Y_3$ by means of basic substitution. Now, the computation of the term $Y_4$ will be considered.
The time derivative of the disturbance term $q(z_{1},z_{2},u,t)$ can be expressed in the form $$\begin{aligned} \dot{q}(z_{1},z_{2},u,t)&=\frac{\partial q}{\partial z_{1}}\left(\dot{x}_{d}-\dot{e}_{1}\right)+\frac{\partial q}{\partial z_{2}}\left(\ddot{x}_{d}-\dot{e}_{2}\right)+\frac{\partial q}{\partial u}T^{-1}\left(-u+v\right)+\frac{\partial q}{\partial t}\\ &=W_{q1}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}+W_{q2}\dot{\bar{e}}-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}\\ &=W_{q1}\begin{bmatrix}\dot{x}_{d}\\ \ddot{x}_{d} \end{bmatrix}+W_{q2}\left(\kappa\omega\bar{H}_{c}\bar{e}+\kappa^{-1}\omega\bar{W}_{1}D(\kappa)\bar{z}+\left(\kappa\omega\right)^{-1}C_{2}B\tilde{u}\right)-\frac{\partial q}{\partial u}T^{-1}\tilde{u}+\frac{\partial q}{\partial t}. \end{aligned}$$ [^1]: This work was supported by the National Science Centre (NCN) under the grant No. 2014/15/B/ST7/00429, contract No. UMO-2014/15/B/ST7/00429.
---
author:
-
title: 'DeepBall: Deep Neural-Network Ball Detector'
---

Introduction {#sec:introduction}
============

An ability to accurately detect and track the ball in a video sequence is a core capability of any system aiming to automate analysis of football matches or players' progress. Our method aims to solve the problem of fast and accurate ball detection. It is developed as a part of a computer system for football clubs and academies to track and analyze player performance during both training sessions and regular games. The system is intended to help professional football analysts evaluate the players' performance by allowing automatic indexing and retrieval of interesting events. Detecting the ball in long-shot video footage of a football game is not trivial to automate. The object of interest (the ball) has a very small size compared to other objects visible in the observed scene. Due to the perspective projection, its size varies depending on the position on the play field. The shape is not always circular. When a ball is kicked and moves at high velocity, its image becomes blurry and elliptical. The perceived colour of the ball changes due to shadows and lighting variation. The colour is usually similar to the colour of the white lines on the pitch and sometimes to players' jerseys. Other objects with an appearance similar to the ball can be visible, such as small regions near the pitch lines and regions of players' bodies, such as a head. Situations when the ball is in a player's possession or partially occluded are especially difficult. Figure \[jk:fig:ball\_images\] shows exemplary image patches illustrating the high variance in the ball appearance and the difficulty of the ball detection task.

![image](ball_images.png){width="100.00000%"}

Traditional ball detection methods, e.g. based on variants of the circular Hough transform, deal well with situations where the ball is visible as a single object, separated from the player's body.
They have problems detecting the ball when it is possessed or partially occluded by a player. But for player performance analysis purposes, the most informative frames are those showing players in close contact with the ball. In this paper we present a ball detection method expanding upon state-of-the-art deep convolutional object detection networks. The method operates on a single video frame and is intended as the first stage in the ball tracking pipeline. Our method does not have the limitations associated with earlier methods based on a circular Hough transform. It can deal with situations where the perceived ball shape is not circular due to motion blur. It detects the ball when it is in close contact with, or partially occluded by, a player's body. It can detect multiple balls, located relatively close to each other, in the same image. Another benefit of the proposed method is its flexibility. Due to the fully convolutional design, it can operate on images of any size and produces a ball confidence map of a size proportional to the input image. The detection network is designed with performance in mind. The evaluation performed in Section \[jk:ev\_results\] proves that our method can efficiently process high definition video input in real time.

Related work {#section}
============

The first step in traditional ball detection methods is usually background subtraction. It prevents ball detection algorithms from producing false detections on static parts of the image, such as stadium advertisements. The most commonly used background subtraction approaches are based on chromatic features [@Gong95; @Ali12; @Kia16] or motion detection [@DOr02; @DOr04; @Leo08; @Mazz12]. Segmentation methods based on chromatic features use domain knowledge about the visible scene: the football pitch is mostly green and the ball is mostly white. The colour of the pitch is usually modelled using a Gaussian Mixture Model and either hardcoded in the system or learned.
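As a concrete illustration of the chromatic approach, the sketch below classifies pixels by their distance to an assumed mean pitch colour; a fixed Euclidean threshold stands in for the Gaussian Mixture Model mentioned above, and the colour and threshold values are invented for the example.

```python
import numpy as np

PITCH_GREEN = np.array([60, 140, 70])  # assumed mean pitch colour (RGB)

def pitch_mask(image, thresh=60.0):
    """True where a pixel looks like pitch (background), False for candidate objects."""
    dist = np.linalg.norm(image.astype(np.int16) - PITCH_GREEN, axis=-1)
    return dist < thresh

frame = np.tile(PITCH_GREEN, (4, 4, 1)).astype(np.uint8)  # all-pitch toy frame
frame[1, 2] = (255, 255, 255)                             # one white, ball-like pixel
mask = pitch_mask(frame)
```

Pixels flagged as non-pitch would then be grouped into blobs and filtered by size and shape criteria.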
When the video comes from a static camera, motion-based segmentation is often used. For computational performance reasons, a simple approach is usually applied, based on an absolute difference between consecutive frames or the difference between the current frame and the mean or median image obtained from a few previously processed frames [@High16]. After the background segmentation, heuristic criteria based on chromatic or morphological features are applied to the resulting blobs to locate the ball. These criteria include blob size, colour and shape (circularity, eccentricity) [@Gong95]. Variants of the Circle Hough Transform [@Yuen90], modified to detect spherical rather than circular objects, may be used to verify if a blob contains the ball [@DOr02; @DOr04; @Leo08; @Popp10; @Halb15]. A two-stage approach may be employed to achieve real-time performance and high detection accuracy [@DOr02; @Leo08; @Mazz12]. In this scenario, regions that probably contain the ball are first found (*ball candidate extraction*). Then, the candidates are validated (*ball candidate validation*). In [@Ali12] straight lines are detected using a kernel-based Hough transform and removed from the foreground image to overcome the problem of the ball interfusing with white lines on the pitch. A very similar method is proposed in [@Rao15]. [@Gong95; @Pall08; @Halb15] use multiple successive frames to improve the detection accuracy. In [@Gong95], a detection is confirmed by searching a neighbourhood area of each ball candidate in the successive frame. If a white area with similar size and circularity is found in the next frame, the ball candidate is validated. In [@Pall08] the authors extract ball candidate positions using morphological features (shape and size of the ball). Then, a directed weighted graph is constructed from ball candidates in successive frames. The vertices of the graph correspond to candidate ball positions and edges link candidates found in consecutive frames.
The longest path in the graph is computed to give the ball trajectory. Ball detection methods using morphological features to analyze the shape of blobs produced by background segmentation fail if the ball is touching a player. See the bottom row of Fig. \[jk:fig:ball\_images\] for exemplary images where these methods are likely to fail. [@Halb15] addresses this limitation by using a two-stage approach. First, the ball is detected in non-occluded situations, where it appears as a single object. This is done by applying background subtraction to filter out the temporally static part of the image. Then, foreground blobs are filtered by size and shape to produce ball candidates. Ball candidates are verified by examining a few successive frames and detecting robust partial ball trajectories (tracklets). When the first-stage detector is not able to locate the ball, a second-stage detector specialized for partially occluded situations is used. Ball candidates are found using a Hough circle detector. Foreground object contours are extracted and their Freeman chain code is examined. If a ball candidate corresponds to a 'bump' in the foreground object silhouette, it is retained as a true match. In recent years significant progress has been made in the area of neural-network based object detection. The deep neural-network based YOLO detector [@Redm16] achieves 63.4 mean Average Precision (mAP) on the PASCAL VOC 2007 dataset, whereas the traditional Deformable Parts Model (DPM) detector [@Felz10] scores only 30.4. Current state-of-the-art object detectors can be categorized as one-stage or two-stage. In a two-stage detector, such as Fast R-CNN [@Girs15] or Faster R-CNN [@Ren15], the first stage generates a sparse set of candidate object locations (region proposals). The second stage uses a deep convolutional neural network to classify each candidate location as one of the foreground classes or as background.
One-stage detectors, such as RetinaNet [@Lin17], SSD [@Liu16] or YOLO [@Redm16], do not include a separate region-proposal generation step. A single detector based on a deep convolutional neural network is applied instead. [@Spec17] uses convolutional neural networks (CNNs) to localize the ball under varying environmental conditions. The first part of the network consists of multiple convolution and max-pooling layers which are trained on the standard object classification task. The output of this part is processed by fully connected layers regressing the ball location as a probability distribution along the x- and y-axes. The network is trained on a large dataset of images with annotated ground truth ball positions. The network is reported to have 87% detection accuracy on a custom-made dataset. The limitation of this method is that it fails if more than one ball, or an object very similar to the ball, is present in the image. Our method does not have this limitation. [@Reno18] presents a deep neural network classifier, consisting of convolutional feature extraction layers followed by a fully connected classification layer. It is trained to classify small, rectangular image patches as ball or no-ball. The classifier is used in a sliding-window manner to generate a probability map of the ball occurrence. The method has two drawbacks. First, the set of negative training examples (patches without the ball) must be carefully chosen to include sufficiently hard examples. Also, the rectangular patch size must be manually selected to take into account all the possible ways the ball appears on the scene: big or small due to the perspective, sharp or blurred due to its speed. The method is also not optimal from a performance perspective. Each rectangular image patch is separately processed by the neural network using a sliding-window approach. Then, individual results are combined to produce a final ball probability map.
Our method, in contrast, requires only a single pass of the entire image through the fully convolutional detection network.

Proposed method {#section-1}
===============

The method presented in this paper, called *DeepBall*, is inspired by recent advances in single-pass deep neural network based object detection methods, such as SSD [@Liu16] or YOLO [@Redm16]. The typical architecture of a neural network-based one-stage object detector is modified to make it more appropriate for the ball detection task. The modifications aim at increasing the accuracy of locating small objects and reducing the processing time. The network is designed to take a larger visual context into consideration to correctly classify fragments of the scene containing objects similar to the ball. This is achieved by using the hypercolumn concept introduced in [@Hari15]. In order to increase the performance, we removed unnecessary components typical of a single-stage neural network object detector. Multiple anchor boxes, with different sizes and aspect ratios, are not needed as we detect objects from a single class (the ball) with limited shape and size variance. A localization module, predicting the centre and size of object bounding boxes relative to a grid cell, is unnecessary, as the proposed method produces a dense confidence map predicting the ball location on a pixel level. The method takes a video frame of any resolution as an input and produces a scaled down *ball confidence map* encoding the probability of ball presence at each location. The size of the output *ball confidence map* is $h_f \times w_f$, where $h_f$ and $w_f$ equal the original image height and width divided by the scaling factor $k$ ($k=4$ in our case). The position in the *ball confidence map* with coordinates $(x_f, y_f)$ corresponds to the position $(\lfloor k(x_f-0.5) \rfloor, \lfloor k(y_f-0.5) \rfloor)$ in the input image. See Fig. \[jk:fig:input\_output\] for an exemplary input image and the corresponding *ball confidence map* computed by the trained network.
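Written out as code, the size relation and the coordinate correspondence above are direct formulas (with $k=4$; the few pixels of border shrinkage visible in the architecture table are ignored in this sketch):

```python
import math

K = 4  # scaling factor k between the input image and the confidence map

def confidence_map_size(h, w, k=K):
    """Approximate (h_f, w_f) of the ball confidence map for an h x w input."""
    return h // k, w // k

def map_to_pixels(x_f, y_f, k=K):
    """Confidence-map location (x_f, y_f) -> corresponding input-image pixel."""
    return math.floor(k * (x_f - 0.5)), math.floor(k * (y_f - 0.5))
```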
The actual ball position is retrieved from the *confidence map* using the following approach. First, the location with the highest confidence is found in the *ball confidence map*. If the confidence is lower than a threshold $\theta$, no ball is detected. Otherwise, the location with the highest confidence is returned. In 'training game mode', where more than one ball can be present in the image, more balls are detected. This is done by zeroing-out confidence map values at the previously found maximum and its close neighbourhood (non-max suppression) and searching for the second global maximum. The process is repeated until no new maximum with confidence above the threshold $\theta$ can be found. Pixel coordinates of the ball $(x_p, y_p)$ in the input frame are calculated using the following formula: $(x_p, y_p) = (\lfloor k(x_f-0.5) \rfloor, \lfloor k(y_f-0.5) \rfloor)$, where $(x_f, y_f)$ are the coordinates in the *ball confidence map* with the maximum confidence and $k=4$ is the scaling factor. The threshold $\theta$ is set experimentally, as the value maximizing the detection accuracy on the validation set.

![Part of an exemplary input frame from the test sequence with highlighted ball position (left) and the corresponding *ball confidence map* (right)[]{data-label="jk:fig:input_output"}](heat_map_legend.png){height="1.9cm"}

#### Network architecture

  Block     Layers                 Output size
  --------- ---------------------- ----------------
  Conv1     Conv: 8 7x7 filters,
            stride 2
            Conv: 8 3x3 filters
            Max pool: 2x2 filter   (8, 268, 480)
  Conv2     Conv: 16 3x3 filters
            Conv: 16 3x3 filters
            Max pool: 2x2 filter   (16, 134, 240)
  Conv3     Conv: 32 3x3 filters
            Conv: 32 3x3 filters
            Max pool: 2x2 filter   (32, 67, 120)
  Conv4     Conv: 56 3x3 filters
            Conv: 2 3x3 filters    (2, 268, 480)
  Softmax   Softmax                (2, 268, 480)

  : Details of the *DeepBall* network architecture. Output size is specified in the format: (number of channels, height, width).
Each convolutional layer is followed by a BatchNorm layer and ReLU non-linearity (not shown for brevity). All convolutions use 'same' padding and stride one (except for the first one). \[jk:table2\]

The diagram depicted in Fig. \[jk:fig:network-diagram\] shows the components of our ball detection network and the size of the output of each block. Note that the output size depends on the size of the input image, as the network is fully convolutional and can operate on an image of any size. The input image is processed by three convolutional blocks (Conv1, Conv2 and Conv3) producing convolutional feature maps with decreasing spatial resolution and an increasing number of channels. In contrast to a typical convolutional network design, the outputs from all convolutional blocks are concatenated and jointly fed into the final classification layer. Feature maps produced by convolutional blocks Conv2 and Conv3 are first upsampled to the same spatial resolution as the feature map produced by the first convolutional block (Conv1). Then, the feature map produced by the first convolutional block (Conv1) and the upsampled feature maps from the second and third convolutional blocks (Conv2 and Conv3) are concatenated along the dimension corresponding to the number of channels to form a hypercolumn. The concatenated feature map is fed to the final fully convolutional classification block (Conv4). The classification block consists of two convolutional layers followed by a softmax layer. It outputs a two-channel *ball confidence map*. One channel is interpreted as the probability of the location being background and the other as the probability of the ball. For the ball detection task, one output channel, interpreted as the ball probability, would be sufficient. But the proposed design is extensible and can be easily adapted to accommodate detection of additional object categories, such as players. The detailed architecture of each block is given in Table \[jk:table2\].
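The ball-position retrieval described earlier (global maximum above a threshold $\theta$, non-max suppression around each accepted maximum, and remapping to pixel coordinates with $k=4$) can be sketched as follows; the map size, threshold and suppression radius are illustrative, not the tuned values.

```python
import numpy as np

K = 4  # scaling factor between the input image and the confidence map

def detect_balls(conf_map, theta=0.5, nms_radius=2, max_balls=3):
    """Iteratively pick global maxima above theta, zeroing out each neighbourhood."""
    conf = conf_map.copy()
    detections = []
    for _ in range(max_balls):
        y_f, x_f = np.unravel_index(np.argmax(conf), conf.shape)
        if conf[y_f, x_f] < theta:
            break
        # remap confidence-map coordinates to input-image pixels
        detections.append((int(K * (x_f - 0.5)), int(K * (y_f - 0.5))))
        conf[max(0, y_f - nms_radius):y_f + nms_radius + 1,
             max(0, x_f - nms_radius):x_f + nms_radius + 1] = 0.0  # non-max suppression
    return detections

conf = np.zeros((10, 10))
conf[2, 3] = 0.9   # a strong peak (the ball)
conf[7, 8] = 0.7   # a second, weaker ball ('training game mode')
balls = detect_balls(conf, theta=0.5)
```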
Concatenation of multiple convolutional feature maps from different levels of the network allows using both low-level features from the first convolutional layers and high-level features computed by higher convolutional layers. Information from the first convolutional layers is necessary for precise spatial location of the object of interest. Further convolutional layers operate on feature maps with lower spatial resolution, thus they cannot provide an exact spatial location. But they have bigger receptive fields and their output can provide additional context to improve classification accuracy. This design is inspired by the hypercolumn concept [@Hari15], where outputs from intermediary convolutional layers are upsampled and concatenated in order to allow fine-grained object localization. The network architecture described above was chosen experimentally by evaluating a number of alternative designs. See Section \[jk:section-experimental-results\] for information on the examined variants and their performance.

#### Loss function

The loss function is a modified version of the loss used in the SSD [@Liu16] detector. The proposed network does not regress the position and size of the object's bounding box. The ball position is determined by the maxima of the confidence map computed by the network. Hence only the classification component of the original SSD loss function is used. The loss $\mathcal{L}$ optimized during the training is the cross-entropy loss over ball and background class confidences: $$\begin{aligned} \mathcal{L} \left( c \right) = \frac{1}{N} \left(-\sum_{\left(i,j\right)\in Pos}\log\left(c_{ij}^{ball}\right) \right.
\\ \left.-\sum_{\left(i,j\right)\in Neg}\log\left(c_{ij}^{bg}\right) \right), \end{aligned}$$ where $c_{ij}^{bg}$ is the value of the channel of the ball confidence map corresponding to the background probability at the spatial location $(i, j)$ and $c_{ij}^{ball}$ is the value of the channel of the ball confidence map corresponding to the ball probability at the spatial location $(i, j)$. $Pos$ is the set of positive examples, that is the set of spatial locations on the ball confidence map corresponding to the ground truth ball location. $Neg$ is the set of negative examples, that is the set of spatial locations on the ball confidence map corresponding to the ground truth background. The set of positive examples $Pos$ is constructed as follows. If $(x,y)$ is a true ball position for the image $I$, then the corresponding confidence map location $(i,j) = ( \lfloor x/4 \rfloor, \lfloor y/4 \rfloor )$ and all its nearest neighbours are added to $Pos$. Negative examples (locations without the ball) correspond to locations on the confidence map where the ball, according to the ground truth data, is not present. The number of negative examples is orders of magnitude higher than the number of positive examples (locations with the ball) and this would create a highly imbalanced training set. To mitigate this, we employ a hard negative mining strategy as in [@Liu16]. We choose a limited number of negative examples with the highest confidence loss, so that the ratio of negative to positive examples is at most 3:1.

#### Training dataset

The *DeepBall* network is trained using the publicly available ISSIA-CNR Soccer Dataset [@DOr09]. The dataset contains six synchronized, long-shot views of the football pitch acquired by six Full-HD DALSA 25-2M30 cameras. Three cameras are designated for each side of the playing field, recording at 25 fps. Videos are acquired during matches of the Italian 'Serie A'.
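Before describing the dataset further, the loss defined above can be illustrated on a toy confidence map: cross-entropy over the ball and background channels with 3:1 hard negative mining. This numpy sketch normalizes by the number of examples actually kept and is an illustration, not the training code.

```python
import numpy as np

def detection_loss(c_ball, pos_mask, neg_ratio=3):
    """Cross-entropy over ball/background channels with hard negative mining."""
    eps = 1e-9
    c_bg = 1.0 - c_ball                    # two-channel softmax: channels sum to 1
    pos_losses = -np.log(c_ball[pos_mask] + eps)
    neg_losses = -np.log(c_bg[~pos_mask] + eps)
    # keep only the hardest negatives, at most neg_ratio per positive
    n_neg = int(min(neg_ratio * pos_mask.sum(), neg_losses.size))
    hardest = np.sort(neg_losses)[::-1][:n_neg]
    return (pos_losses.sum() + hardest.sum()) / (pos_mask.sum() + n_neg)

pos_mask = np.zeros((6, 6), dtype=bool)
pos_mask[2, 2] = True                            # ground-truth ball location

good = np.full((6, 6), 0.1)
good[2, 2] = 0.9                                 # confident, correct prediction
loss_good = detection_loss(good, pos_mask)
loss_bad = detection_loss(np.full((6, 6), 0.5), pos_mask)  # uninformative map
```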
There’re 20,000 manually annotated frames in the dataset, out of which 7,000 contain the ball and 13,000 doesn’t or the ball is occluded by players. The ball radius varies from 8 to 16 pixels. Sequences 1, 2, 3 and 4, covering one penalty area and the centre of the football pitch, are used for training. Sequences 5 and 6, covering the side of football pitch not visible on the training sequences, are left aside for the evaluation purposes. Fig. \[jk:fig:training\_sequences\] shows exemplary frames from the sequence 1 and 3. As the training dataset is relatively small, we use data augmentation to increase the variety of training examples and decrease the risk of overfitting. The following transformations are randomly applied to the training images: random color jitter (random change in brightness, contrast, saturation or hue), horizontal flip, random cropping and random scaling (with scale factor between 0.5 and 1.1). The ground truth (ball position) is modified accordingly to align with the transformed image. ![Exemplary frame from the training dataset.[]{data-label="jk:fig:training_sequences"}](sequence1_example.png){width="45.00000%"} The network is trained using a standard gradient descent approach with Adam [@King14] optimizer. The initial learning rate is set to $0.001$ and decreased by 10 after 50 epochs. The training runs for 75 epochs in total. Batch size is set to 16. Experimental results {#jk:section-experimental-results} ==================== #### Evaluation dataset Evaluation is performed on two datasets. The first contains of sequence 5 and 6 from the ISSIA-CNR Soccer Dataset. This sequence covers the part of the football pitch not seen on the training sequences (sequence 1, 2, 3 and 4). ISSIA-CNR dataset is quite demanding because the video has a moderate quality and there’s noticeable blur. One of the team wears white jerseys which makes difficult to distinguish the ball when it’s close to the player. 
#### Evaluation metrics {#sec:metrics}

We evaluate Average Precision (AP), a standard metric used in the assessment of object detection methods. We follow the Average Precision definition from the Pascal VOC 2007 Challenge [@Ever10]. The precision/recall curve is computed from a method's ranked output. Recall is defined as the proportion of all positive examples ranked above a given threshold to all positive examples in the ground truth. Precision is the proportion of all positive examples above that threshold to all examples above that threshold. The AP summarizes the shape of the precision/recall curve, and is defined as the mean precision at a set of eleven equally spaced recall levels: $$\mathrm{AP} = \frac{1}{11} \sum_{r \in \left\{ 0, 0.1, \ldots 1 \right\}} p(r) \; ,$$ where $p(r)$ is the precision at recall level $r$. The ball detection method usually operates under the additional constraint that no more than one object of interest (the ball) is present in the image. Under this constraint, for each image the detector returns the highest response from the ball confidence map greater than the threshold $\theta$ as the ball position. If no location in the ball confidence map is greater than $\theta$, no ball is detected. In this scenario, an image with the ball is classified correctly if the ball is detected at the correct location. An image without the ball is classified correctly if no ball is detected. *Ball detection accuracy* is defined as the proportion of correctly classified images to all processed images. $\theta$ is chosen experimentally, as the value maximizing the accuracy on the validation set.

#### Evaluation results {#jk:ev_results}

Evaluation results are summarized in Table \[jk:table1\]. The results contain the Average Precision and accuracy of the evaluated methods, as defined in the previous section.
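For reference, the eleven-point AP defined in the metrics above can be computed as below; $p(r)$ is taken as the maximum precision over recalls $\geq r$, following the VOC 2007 evaluation, and the (precision, recall) points are toy values:

```python
def average_precision(pr_points):
    """11-point interpolated AP; pr_points is a list of (precision, recall) pairs."""
    ap = 0.0
    for i in range(11):
        r = i / 10.0
        # interpolated p(r): best precision achieved at any recall >= r
        candidates = [p for p, rec in pr_points if rec >= r]
        ap += max(candidates) if candidates else 0.0
    return ap / 11.0

# Toy ranked outputs: a perfect detector and one with precision 0.5 everywhere.
perfect = [(1.0, r / 10.0) for r in range(11)]
half = [(0.5, r / 10.0) for r in range(11)]
```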
The table also lists the number of trainable parameters in each evaluated model and the frame rate, expressed in frames per second, achievable when detecting the ball in Full HD (1920x1080 resolution) video. Frame rates given in the table take into account the time needed to feed a frame through the detection network and infer the ball position from the resultant feature map. They do not include the time needed to load the frame from an input file, convert it to a tensor and load it into the GPU. All methods are implemented in PyTorch [@Pasz17] and run on an nVidia Titan X GPU. Our method yields the best results on the test set (sequences 5 and 6 from the ISSIA-CNR Soccer Dataset). It achieves 0.877 Average Precision and 0.951 ball detection accuracy. For comparison we evaluate two recent ball detection methods, [@Spec17] and [@Reno18], using the same training and test sets and the same data augmentation approach as in our method. [@Spec17] uses a neural network with three convolutional layers followed by two two-layer fully connected heads estimating the ball x and y coordinates. For evaluation we implemented the best performing model proposed in the paper: Model 1 soft-sign. The model performs poorly on the test dataset, achieving only 0.220 Average Precision. This can be attributed to the fact that the original model is intended to detect the ball in videos from RoboCup Soccer matches taken from a closer distance. The ball image is larger and there are no visible distractors such as advertisement stands around the pitch. The method regresses only one ball position per input image. If there are multiple objects with a ball-like appearance, it likely gets confused and fails to produce a meaningful result. Our method computes a dense confidence map indicating probable ball positions. It is more robust against the presence of objects with an appearance similar to the ball.
[@Reno18] uses a network consisting of four convolutional layers followed by a fully connected classification layer. This method scores 0.834 Average Precision and 0.917 accuracy. In contrast to the original method, we enhanced the training set construction process. Negative examples (no-ball patches) do not need to be manually selected. They are mined online during the network training, as regions of the image not containing the ball but incorrectly classified with the highest confidence (hard negative mining). Even with this improvement, the method yields worse Average Precision and detection accuracy than our method. It must be noted that our method not only outperforms the two other neural network based ball detection methods in terms of average precision and detection accuracy, but also has a significantly lower number of trainable parameters and a much higher video processing rate (FPS).

  Method                               | Average Precision | Accuracy  | No. of trainable parameters | FPS
  -------------------------------------|-------------------|-----------|-----------------------------|-----
  DeepBall                             | **0.877**         | **0.951** | 48 658                      | 190
  DeepBall (no data augmentation)      | 0.792             | 0.899     | 48 658                      | 190
  DeepBall (no hypercolumns/context)   | 0.833             | 0.911     | 29 146                      | 270
  [@Spec17]                            | 0.220             | 0.220     | 332 365 744                 | 22
  [@Reno18]                            | 0.834             | 0.917     | 313 922                     | 32

\[jk:table1\]

![Visualization of incorrect detection results. The top row shows image patches where the ball is not detected (false negatives). The bottom row shows patches with an incorrectly detected ball (false positives).[]{data-label="jk:fig:misclassifications"}](ball_not_4.png "fig:"){width="15.00000%"} ![](ball_not_6.png "fig:"){width="15.00000%"} ![](ball_not_8.png "fig:"){width="15.00000%"}\
![](wrong_ball_2.png "fig:"){width="15.00000%"} ![](wrong_ball_4.png "fig:"){width="15.00000%"} ![](ball_not_10.png "fig:"){width="15.00000%"}

Due to the relatively small size of the training set, data augmentation proved to be the key to generalization of the trained network and good performance on the test set. Without data augmentation, Average Precision drops from 0.877 to 0.792. Implementing the hypercolumn concept, by combining convolutional feature maps from different levels of the hierarchy, has a positive impact on the method’s performance. A network with a simpler architecture, which bases classification on the output of the last convolutional layer without combining multiple feature maps, produces worse results: such an architecture scored only 0.833 Average Precision. Fig. \[jk:fig:misclassifications\] shows examples of incorrect detections. The top row shows image patches where our method fails to detect the ball (false negatives).
It can be noticed that misclassification is caused by severe occlusion, where only a small part of the ball is visible, or by blending of the ball image with white parts of a player’s wear or white background objects outside the play field, such as stadium advertisements. The bottom row shows examples of patches where a ball is incorrectly detected (false positives). The detector is sometimes confused by players’ white socks or by background clutter outside the play field.

Conclusions {#section-2}
===========

The article describes an efficient and effective deep neural network based ball detection method. The proposed network has a fully convolutional architecture processing the entire image at once, in a single pass through the network. This is much more computationally effective than the sliding window approach proposed in [@Reno18]. Additionally, the network can operate on images of any size, which can differ from the size of the images used during training. It outputs a scaled-down ball confidence map indicating the estimated ball location. The method performs very well on the challenging ISSIA-CNR Soccer Dataset [@DOr09], achieving 0.877 Average Precision and 0.951 accuracy. It outperforms two other recently proposed neural network-based ball detection methods, [@Spec17] and [@Reno18], while having a lower number of trainable parameters and a significantly higher frame rate. In the future we plan to use temporal information to improve the system’s accuracy. Combining convolutional feature maps from a few subsequent frames gives additional information that may help to discriminate static, ball-like objects (e.g. parts of stadium advertisements or spare balls located outside the play field) from the moving ball.

Acknowledgements {#acknowledgements .unnumbered}
================

This work was co-financed by the European Union within the European Regional Development Fund.
--- author: - 'Simon Caron-Huot,' - 'Einan Gardi,' - 'Joscha Reichel,' - Leonardo Vernazza bibliography: - 'main.bib' title: 'Two-parton scattering amplitudes in the Regge limit to high loop orders' --- Introduction {#intro} ============ The study of QCD scattering in the Regge limit has been an active area of research for over half a century, e.g. [@Kuraev:1977fs; @Balitsky:1978ic; @Lipatov:1985uk; @Mueller:1993rr; @Mueller:1994jq; @Brower:2006ea; @Moult:2017xpp]. While the general problem of high-energy scattering is non-perturbative, in the regime where the exchanged momentum $-t$ is high enough, i.e. $s\gg-t\gg\Lambda_{\rm QCD}^2$ (see figure \[setup\_fig\]), perturbation theory offers systematic tools to analyse this limit. Central to this is the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation [@Kuraev:1977fs; @Balitsky:1978ic], which provides a systematic theoretical framework to resum high-energy (or rapidity) logarithms, $\ln (s/(-t))$, to all orders in perturbation theory. This approach was used extensively to study a range of physical phenomena including the small-$x$ behaviour of deep-inelastic structure functions and parton densities, and jet production with large rapidity gaps. Furthermore, non-linear generalisations of BFKL, known as the Balitsky-JIMWLK equation [@Balitsky:1995ub; @Balitsky:1998kc; @Kovchegov:1999yj; @JalilianMarian:1996xn; @JalilianMarian:1997gr; @Iancu:2001ad], are today a main tool in the theoretical description of dense states of nuclear matter, notably in the context of heavy-ion collisions. While many applications of rapidity evolution equations to phenomenology require the scattering particles to be colour-singlet objects, in the present paper we are concerned with the more theoretical problem of understanding *partonic* scattering amplitudes in the high-energy limit, similarly to refs. 
[@Sotiropoulos:1993rd; @Korchemsky:1993hr; @Korchemskaya:1996je; @Korchemskaya:1994qp; @DelDuca:2001gu; @DelDuca:2013ara; @DelDuca:2014cya; @Bret:2011xm; @DelDuca:2011ae; @Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo]. This is part of a more general programme of understanding the structure of gauge-theory amplitudes and the underlying physical and mathematical principles governing this structure. The basic observation is that gauge dynamics drastically simplifies in the high-energy limit, which renders the amplitudes computable to all orders in perturbation theory, to a given logarithmic accuracy. The present paper continues our recent study [@Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo] of $2\to 2$ partonic amplitudes ($qq\to qq$, $gg\to gg$, $qg\to qg$) in QCD and related gauge theories. ![The $t$-channel exchange dominating the high-energy limit, $s\gg -t>0$. The figure also defines our conventions for momenta assignment and Mandelstam invariants. We shall assume that particles 2 and 3 (1 and 4) are of the same type and have the same helicity.[]{data-label="setup_fig"}](./img/setup_fig-crop.pdf) A key ingredient in these studies is provided once again by rapidity evolution equations, BFKL and its generalisations, which are used to compute high-energy logarithms in these amplitudes order-by-order in perturbation theory. Scattering amplitudes of quarks and gluons are dominated at high energies by the $t$-channel exchange (figure \[setup\_fig\]) of effective degrees of freedom called *Reggeized gluons*. $2\to 2$ amplitudes are conveniently decomposed into *odd* and *even* signature characterising their symmetry properties under $s\leftrightarrow u$ interchange, or crossing symmetry: $$\label{Odd-Even-Amp-Def} {\cal M}^{(\pm)}(s,t) = \tfrac12\Big( {\cal M}(s,t) \pm {\cal M}(-s-t,t) \Big)\,,$$ where odd (even) amplitudes ${\cal M}^{(-)}$ (${\cal M}^{(+)}$) are governed by the exchange of an odd (even) number of Reggeized gluons. 
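The decomposition in eq. (\[Odd-Even-Amp-Def\]) can be made concrete with a toy numerical check: under the crossing $s \to u = -s-t$ at fixed $t$, the even part is invariant and the odd part flips sign, for any function $\mathcal{M}(s,t)$ (the "amplitude" below is arbitrary and purely illustrative):

```python
# Toy check of the signature decomposition of eq. (Odd-Even-Amp-Def): with
# u = -s-t at fixed t, the even/odd parts satisfy M^(+/-)(u,t) = +/- M^(+/-)(s,t).
# The function M below is an arbitrary smooth stand-in, not a real amplitude.

def M(s, t):
    return s**2 + 3.0 * s * t + 5.0 * t

def M_even(s, t):
    return 0.5 * (M(s, t) + M(-s - t, t))

def M_odd(s, t):
    return 0.5 * (M(s, t) - M(-s - t, t))

s, t = 7.0, -2.0
u = -s - t
assert abs(M_even(u, t) - M_even(s, t)) < 1e-12  # signature even
assert abs(M_odd(u, t) + M_odd(s, t)) < 1e-12    # signature odd
assert abs(M_even(s, t) + M_odd(s, t) - M(s, t)) < 1e-12
print("signature decomposition checks pass")
```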
Furthermore, as shown in ref. [@Caron-Huot:2017fxr], these have respectively *real* and *imaginary* coefficients, when expressed in terms of the natural signature-even combination of logarithms, $$\label{L-def} \frac12\left(\log\frac{-s-i0}{-t}+\log\frac{-u-i0}{-t}\right) \simeq \log\left|\frac{s}{t}\right| -i\frac{\pi}{2} \equiv L\,.$$ The real part of the amplitude, ${\cal M}^{(-)}$, is governed, at leading logarithmic (LL) accuracy, by the exchange of a single Reggeized gluon in the $t$ channel. To this accuracy, high-energy logarithms admit a simple exponentiation pattern, namely $$\label{Mreal} {\cal M}^{(-)}_{\rm LL} = (s/(-t))^{\alpha_g(t)} \times {\cal M}^{\rm tree}$$ where the exponent is the *gluon Regge trajectory* (corresponding to a Regge pole in the complex angular momentum plane), $\alpha_g(t)=\frac{\alpha_s}{\pi} C_A \alpha_g^{(1)}(t)+{\cal O}(\alpha_s^2)$, whose leading order coefficient $\alpha_g^{(1)}(t)$ is infrared singular, $\alpha_g^{(1)}(t)\sim \frac{1}{2\epsilon}$ in dimensional regularization with $d=4-2\epsilon$ (see eq. (\[alphag1\]) below). Infrared singularities are well-known to exponentiate, independently of the high-energy limit. Importantly, however, eq. (\[Mreal\]) illustrates the fact that the exponentiation of high-energy logarithms must be compatible with that of infrared singularities, which is a nontrivial constraint on both. This observation and its extension to higher logarithmic accuracy underpins a long line of investigation in refs. [@Sotiropoulos:1993rd; @Korchemsky:1993hr; @Korchemskaya:1996je; @Korchemskaya:1994qp; @DelDuca:2001gu; @DelDuca:2013ara; @DelDuca:2014cya; @Bret:2011xm; @DelDuca:2011ae; @Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo]. The key property of the Reggeized gluon being signature-odd greatly constrains the structure of higher-order corrections.
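Eq. (\[L-def\]) can likewise be checked numerically: modelling the $-i0$ prescription by a tiny negative imaginary part, the signature-even combination of logarithms approaches $\log|s/t| - i\pi/2$ up to corrections suppressed by $t/s$ (a quick sketch, with arbitrary sample values of $s$ and $t$):

```python
import cmath
import math

# Numerical check of eq. (L-def): for s >> -t > 0 the signature-even
# combination of logarithms approaches log|s/t| - i*pi/2.  The -i0
# prescription is modelled by a tiny negative imaginary part; the sample
# values of s and t are arbitrary.
s, t = 1.0e6, -1.0
u = -s - t
i0 = 1e-30j                         # stand-in for the infinitesimal i0
L = 0.5 * (cmath.log((-s - i0) / (-t)) + cmath.log((-u - i0) / (-t)))
target = math.log(abs(s / t)) - 1j * math.pi / 2
print(abs(L - target))              # suppressed by t/s, here ~5e-7
```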
For the real part of the amplitude, the simple exponentiation pattern generated by a single Reggeized gluon is preserved at the next-to-leading logarithmic (NLL) accuracy, except that it requires ${\cal O}(\alpha_s^2)$ corrections to the trajectory and also the introduction of ($s$-independent) impact factors. This simple picture only breaks down when three Reggeized gluons can be exchanged, which first occurs at NNLL accuracy and leads to Regge cuts. This contribution was computed in ref. [@Caron-Huot:2017fxr] through three loops, by constructing an iterative solution of the non-linear Balitsky-JIMWLK equation which tested the mixing between one and three Reggeized gluons. In this paper we focus on the imaginary part of the amplitude, ${\cal M}^{(+)}$, extending our work [@Caron-Huot:2017zfo]. Here the leading tower of logarithms, in which we are interested, is generated by the exchange of *two* Reggeized gluons, starting with a non-logarithmic term at one loop: $$\label{MevenOneloop} {\cal M}^{(+)}_{\rm NLL}\simeq i\pi \left[\frac{1}{2\epsilon} \frac{\alpha_s}{\pi} +{\cal O}\left(\alpha_s^{2} L\right)\right] {\mathbf T}^2_{s-u} {\cal M}^{\rm tree}\,.$$ Here we suppressed subleading terms in $\epsilon$ as well as multiloop corrections, which take the form $\alpha_s^{\ell} L^{\ell-1}$ at $\ell$ loops; because the power of the energy logarithm $L$ is one less than that of the coupling, these are formally next-to-leading logarithms (NLL). In eq. (\[MevenOneloop\]) one may observe another salient feature of this tower of corrections, namely the colour structure, which is even under $s\leftrightarrow u$ interchange (${\cal M}^{\rm tree}$ is odd, and so is the operator ${\mathbf T}^2_{s-u}$ acting on it).
The first term in the square brackets in (\[MevenOneloop\]) is the exact result in the planar limit; we will be interested in the full series of corrections $\alpha_s^{\ell} L^{\ell-1}$, which are all subleading in the large $N_c$ limit (see the definitions of colour operators in eq. (\[TtTsTu\]) below). All higher-order corrections, ${\cal O}(\alpha_s^{\ell}L^{\ell-1})$, in (\[MevenOneloop\]) can be described by the well-known ladder graphs, where each additional loop constitutes an additional rung in the ladder (see figure \[fig:simplebfkl\] below). ![Sketch of evolution generating ladder graphs in the imaginary part of the amplitude. Considering initially emission from the projectile side only, the 0-loop wavefunction (top left) describes a state involving two reggeized gluons. The Reggeized gluons are both off-shell and are characterized by their transverse momenta $k$ and $p-k$. Each application of the BFKL Hamiltonian (the top row) generates an additional rung in the ladder. Upon integrating the $(\ell-1)$-loop wavefunction with the target one obtains the $\ell$-loop amplitude (bottom row).[]{data-label="fig:simplebfkl"}](./img/simplebfkl-crop.pdf) Being the leading contributions to the imaginary part of the amplitude, they are particularly important, and clearly at high energies, where $\alpha_s L\sim {\cal O}(1)$, one should aim at an all-order calculation. These corrections, however, do not feature a simple exponentiation pattern as in eq. (\[Mreal\]); they give rise to a Regge cut rather than a pole. We shall study these corrections using an iterative solution of the BFKL equation, continuing the work of refs. [@Caron-Huot:2013fea; @Caron-Huot:2017fxr; @Caron-Huot:2017zfo]. In [@Caron-Huot:2013fea] higher-order terms in eq. (\[MevenOneloop\]) were computed through four loops – the first order where finite contributions appear (see eqs. (28-29) in [@Caron-Huot:2017fxr]). Subsequently, in ref.
[@Caron-Huot:2017zfo] infrared-singular contributions were computed in dimensional regularization to all orders. The purpose of the present paper is to extend the calculation to finite contributions, and in particular, to obtain the infrared-renormalized amplitude, or hard function, which we expect (together with the soft anomalous dimension) to control any infrared-safe cross section. We are interested in the exact perturbative solution of the BFKL equation for any colour exchange, that is, not restricted to the planar limit. While the BFKL Hamiltonian was famously diagonalized by its authors in the case of color-singlet exchange, the solution is not known in the general case. Adding to the complexity is the fact that amplitudes are infrared singular, forcing us to work in dimensional regularization. While it is not known how to diagonalise the BFKL Hamiltonian in these circumstances, we are able to solve the problem by using two complementary approaches, the first by taking the soft approximation while maintaining dimensional regularization, and the second by considering general (hard) kinematics in strictly two transverse dimensions. Let us briefly describe each of these approaches. The first approach is a computation of the wavefunction describing the emission of two Reggeons at $(\ell-1)$ loops, and the corresponding $\ell$-loop $2\to 2$ amplitude, in the *soft approximation*, where one of the two Reggeized gluons carries a transverse momentum $k$, with $k^2$ significantly smaller than the total momentum transfer of the pair, $-t = p^2$, i.e. the limit characterized by a double hierarchy of scales $k^2\ll p^2 \ll s$. This is the limit used in ref. [@Caron-Huot:2017zfo] to determine all infrared-singular contributions to the amplitude. This was achieved using the simple observation that the wavefunction is itself finite to all orders in perturbation theory and that BFKL evolution closes within this approximation.
All the singularities of the amplitude at any given loop order are in turn produced in the final integration over the wavefunction (corresponding to the transition from the top to the bottom row in figure \[fig:simplebfkl\]). In the present paper, building upon the computation of the wavefunction in [@Caron-Huot:2017zfo] we introduce a symmetrized solution accounting simultaneously for the two soft limits, $k^2\ll p^2$ and $(p-k)^2\ll p^2$, which amounts to an elegant separation between soft and hard contributions to the wavefunction and amplitude. Within this approximation we are able to write down a resummed analytic expression for the amplitude, including its finite contributions. The second approach, which we develop in the present paper, is based on starting with the BFKL equation in exactly two dimensions. Without making any further approximation, we set up an iterative solution of the equation by identifying differential operators that commute with (parts of) the Hamiltonian up to a computable set of contact terms. Evolution induced by the Hamiltonian then becomes trivial within a class of iterated integrals dictated by the nature of the problem: these are the Single-Valued Harmonic Polylogarithms (SVHPLs), first systematically classified by Francis Brown in ref. [@Brown:2004ugm] and then studied and applied in the context of motivic periods [@Brown:2013gia] and Feynman integrals [@Chavez:2012kn; @Schnetz:2013hqa].
The relevance of this class of functions for gauge-theory amplitudes within the Regge limit [@Pennington:2012zj; @Dixon:2012yy; @DelDuca:2013lma; @Dixon:2014voa; @DelDuca:2016lad; @DelDuca:2018hrv] (and beyond [@Almelid:2017qju; @Dixon:2019lnw]) has been recognised in recent years, and it is important also in our current problem: the hard wavefunction, defined in strictly two dimensions, is fully expressible in terms of SVHPLs, and the corresponding contribution to the amplitude can in turn be written in terms of Single-Valued Multiple Zeta Values (SVMZVs). For the ladder graphs relevant here, each additional loop increases the transcendental weight by one unit. The resulting uniform-weight expressions in terms of single-valued functions are significantly simpler as compared to the corresponding ones in terms of ordinary polylogarithms and zeta values. For the final integration over the wavefunction we develop two independent approaches, one relying on analytic continuation and integration over the discontinuities of the wavefunction away from the region where they are single-valued, and the other relying instead on a modified application of the evolution algorithm itself. The two yield identical results. By combining the hard contribution to the amplitude with the dimensional-regularized soft contribution we compute the full amplitude, in principle to any order, and in practice to thirteen loops. The structure of the paper is as follows. In section \[chap:bfkl\] we present the BFKL equation in dimensional regularisation, bring it to a form suitable for iterative solution and review the relation between the off-shell wavefunction and the two-to-two scattering amplitude. We also show how an iterative solution can be obtained for the first few orders directly in dimensional regularization without resorting to any approximation, and explain why this approach does not practically extend to higher orders.
In this context we compute the amplitude numerically through five loops, providing a valuable check for our subsequent calculations. Next, in section \[soft\] we review the soft approximation developed in [@Caron-Huot:2017zfo] and explain how infrared factorization, combined with the finiteness of the wavefunction, facilitates a systematic separation of the latter into ‘soft’ and ‘hard’ components, such that eventually, finite corrections to the infrared-renormalized scattering amplitude can be determined in full. To this end we introduce a symmetrized version of the soft wavefunction, which captures both soft limits, and then derive an analytic expression for the amplitude as a function of $\alpha_sL$, which resums both infrared-divergent and finite contributions to all loops, within the soft approximation. In section \[2d-bfkl\] we turn to discuss the wavefunction in general (hard) kinematics. Working directly in two dimensions we introduce the relevant kinematic variables, analyse the action of the BFKL Hamiltonian and demonstrate that evolution generated by this Hamiltonian translates into an algorithmic procedure in the space of SVHPLs. Having determined the wavefunction order by order, we turn in section \[amplitude\] to compute the corresponding two-to-two scattering amplitude. In section \[numerics\] we perform a numerical study of the resulting wavefunctions and amplitudes, and address the convergence of the perturbative expansion. Finally, in section \[conclusion\] we make some concluding comments and present an outlook for future investigation.

The BFKL equation in dimensional regularisation and the $2\to 2$ amplitude {#chap:bfkl}
==========================================================================

In the high-energy limit, scattering amplitudes are conveniently described in terms of Wilson lines, which dress the external partons.
The evaluation of vacuum expectation values of Wilson lines stretching from minus to plus infinity leads to rapidity divergences, which need to be renormalised. As a consequence, the renormalised amplitude obeys a rapidity evolution equation, which can be shown to correspond to the Balitsky-JIMWLK equation. In this paper we are interested in studying the two-Reggeon exchange contribution to two-parton scattering amplitudes, for which the evolution equation reduces to the BFKL equation [@Caron-Huot:2013fea; @Caron-Huot:2017fxr]. The scattering amplitude can be determined formally to any order in perturbation theory as an iterative solution of the dimensionally-regularised BFKL equation. This procedure was described in [@Caron-Huot:2017zfo], to which we refer for further details. In this section we review the definitions necessary to set up the calculation. In the following we consider the two-Reggeon exchange contribution to $2 \to 2$ scattering amplitudes. We can single out this contribution by introducing a reduced amplitude, in which the one-Reggeon exchange has been removed: $$\label{Mreduced} \hat{\mathcal{M}}_{ij\to ij} \,\equiv\, e^{-\alpha_g(t)\, L\, {\mathbf{T}}_t^2}\, \mathcal{M}_{ij\to ij}\,,$$ where $L$ is the signature-even high-energy logarithm defined in eq. (\[L-def\]), ${{\mathbf{T}}_t^2}$ represents the total colour charge exchanged in the $t$ channel (see eq. (\[TtTsTu\]) below) and $i,j$ are the species indices defining the two-parton scattering; in what follows we will drop these indices, unless explicitly needed. Finally, the function $$\begin{aligned} \alpha_g(t)=\frac{\alpha_s}{\pi} \alpha_g^{(1)}(t)+{\cal O}(\alpha_s^2)\end{aligned}$$ is the *gluon Regge trajectory* introduced already in eq. (\[Mreal\]), where the leading-order coefficient in dimensional regularization with $d=4-2\epsilon$ is given by $$\label{alphag1} \alpha_g^{(1)}(t)= \frac{B_0}{2\epsilon} \left(\frac{-t}{\mu^2}\right)^{-\epsilon}$$ where $$\label{B0} {B_{0}} \equiv {B_{0}}({\epsilon})=e^{{\epsilon}\gamma_{\rm E}} \frac{\Gamma^2(1-{\epsilon})\Gamma(1+{\epsilon}) }{\Gamma(1-2 {\epsilon})} = 1 - \frac12 {\epsilon}^2 \zeta_2 -\frac73 {\epsilon}^3\zeta_3 + O({\epsilon}^4)$$ belongs to a class of bubble integrals which will be defined below. The two-Reggeon cut contributes only to the even amplitude defined in eq. (\[Odd-Even-Amp-Def\]), thus we focus only on this component in the following. As discussed in [@Caron-Huot:2017zfo], the reduced amplitude takes the form of an integral over the two-Reggeon wavefunction ${\Omega}(p,k)$, as follows: $$\label{ReducedAmpNLL} {{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}^{(+)}}\left(\frac{s}{-t}\right) \,=\, -i\pi \int {[{\mathrm{D}}k]}\; {\Omega}(p,k)\; {\mathbf{T}}_{s-u}^2\, {{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij}\,,$$ where $p^2 = -t$. In eq. (\[ReducedAmpNLL\]) the integration measure is $$\label{measure} \int {[{\mathrm{D}}k]} \,\equiv\, \frac{1}{B_0(\epsilon)} \left(\mu^2 e^{\gamma_{\rm E}}\right)^{\epsilon} \int \frac{\mathrm{d}^{2-2\epsilon}k}{4\,\pi^{1-\epsilon}}\; \frac{p^2}{k^2\,(p-k)^2}\,,$$ and ${{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij}$ represents the tree amplitude, given by $${{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij} = 4\pi\alpha_s\, \frac{2s}{t}\, (T_i^b)_{a_1 a_4} (T_j^b)_{a_2 a_3}\, \delta_{\lambda_1\lambda_4}\delta_{\lambda_2\lambda_3}\,,$$ where $\lambda_i$ for $i=1$ through $4$ are helicity indices. The colour operator ${{\mathbf{T}}_{s-u}^2}$ in eq. (\[ReducedAmpNLL\]) acts on ${{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij}$ and it is defined in terms of the usual basis of quadratic Casimirs corresponding to colour flow through the three channels [@Dokshitzer:2005ig; @DelDuca:2011ae]: $$\label{TtTsTu} {\mathbf{T}}_{s-u}^2 \,\equiv\, \frac{1}{2}\left({\mathbf{T}}_s^2 - {\mathbf{T}}_u^2\right) \qquad {\rm with} \qquad \left\{ \begin{array}{l} {\mathbf{T}}_s = {\mathbf{T}}_1+{\mathbf{T}}_2=-{\mathbf{T}}_3-{\mathbf{T}}_4,\\ {\mathbf{T}}_u = {\mathbf{T}}_1+{\mathbf{T}}_3=-{\mathbf{T}}_2-{\mathbf{T}}_4,\\ {\mathbf{T}}_t = {\mathbf{T}}_1+{\mathbf{T}}_4=-{\mathbf{T}}_2-{\mathbf{T}}_3, \end{array}\right.$$ where ${\mathbf{T}}_i$ is the colour-charge operator [@Catani:1998bh] associated with parton $i$. The BFKL equation [@Kuraev:1977fs; @Balitsky:1978ic] for the wavefunction ${\Omega}(p,k)$ in eq.
(\[ReducedAmpNLL\]) takes the form $$\label{BFKL_evolution} \frac{d}{dL}\Omega(p,k)= \frac{\alpha_s B_0(\epsilon)}{\pi} \hat{H} \Omega(p,k)\,,$$ where $L$ is the high-energy logarithm (\[L-def\]) and where the Hamiltonian takes the form [@Caron-Huot:2017zfo] $$\label{Hdef1} \hat{H} \,=\, {(2{C_A}-{{\mathbf{T}}_t^2})}\, {{\hat{H}}_{\mathrm{i}}} + {({C_A}-{{\mathbf{T}}_t^2})}\, {{\hat{H}}_{\mathrm{m}}}\,,$$ where two independent colour factors come along with two different operations: \[Hamil\] $$\begin{aligned} {{\hat{H}}_{\mathrm{i}}}\, \Psi(p,k) &= \int {[{\mathrm{D}}k']}\, f(p,k,k') \left[ \Psi(p,k') - \Psi(p,k) \right] \label{Him}, \\ {{\hat{H}}_{\mathrm{m}}}\, \Psi(p,k) &= J(p,k) \, \Psi(p,k)\, \label{eq:Hm}\,. \end{aligned}$$ The function $f(p,k,k')$ in eq. (\[Him\]) represents the evolution kernel $$\label{bfkl-kernel} f(p,k,k') \,\equiv\, \frac{k^2\,(p-k')^2 + k'^2\,(p-k)^2}{p^2\,(k-k')^2} \,-\, 1\,,$$ and $J(p,k)$ in eq. (\[eq:Hm\]) is defined by $$\begin{aligned} \label{Jp-def2} J(p,k) = \frac{1}{2{\epsilon}} + \int {[{\mathrm{D}}k']}\, f(p,k,k') = \frac{1}{2{\epsilon}} \left[2- {\left(\frac{p^2}{k^2}\right)}^{{\epsilon}} - {\left(\frac{p^2}{(p-k)^2}\right)}^{{\epsilon}} \right].\end{aligned}$$ While it is unknown how to diagonalise this $d$-dimensional Hamiltonian, we may invoke a perturbative solution [@Caron-Huot:2013fea; @Caron-Huot:2017zfo] by expanding the wavefunction in the strong coupling constant: $$\label{OmegaEven} {\Omega}(p,k) \,=\, \sum_{\ell=1}^{\infty} \frac{1}{(\ell-1)!} \left( \frac{\alpha_s B_0(\epsilon)}{\pi} \right)^{\!\ell} L^{\ell-1}\, {{\Omega}^{({\ell-1})}}(p,k)\,,$$ where we set the renormalisation scale equal to the momentum transfer, $\mu^2 = -t = p^2$. Substituting the expanded form of the wavefunction in (\[OmegaEven\]) into the BFKL evolution equation (\[BFKL\_evolution\]) one deduces that $$\label{Hdef0} {{\Omega}^{({\ell-1})}}(p,k) \,=\, \hat{H}\, {{\Omega}^{({\ell-2})}}(p,k)\,,$$ where ${\hat{H}}$ is the BFKL hamiltonian of eq. (\[Hdef1\]), that is, the wavefunction at any given order is found by repeated application of the Hamiltonian, where the initial condition in our normalization is simply $$\label{0th-wavefunction-tilde} {{\Omega}^{({0})}}(p,k) \,=\, 1\,.$$ Next, let us consider the on-shell $2\to 2$ amplitude.
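As a quick floating-point cross-check of the expansion of $B_0(\epsilon)$ quoted in eq. (\[B0\]), one can compare the exact Gamma-function expression against the truncated series (the numerical constants $\gamma_E$ and $\zeta_3$ are hard-coded below):

```python
import math

# Floating-point check of the epsilon-expansion of the bubble factor in
# eq. (B0):  B0(eps) = e^{eps*gammaE} Gamma(1-eps)^2 Gamma(1+eps) / Gamma(1-2eps)
#                    = 1 - (1/2) zeta_2 eps^2 - (7/3) zeta_3 eps^3 + O(eps^4).
# gamma_E and zeta_3 are hard-coded numerical constants.
GAMMA_E = 0.5772156649015329
ZETA2 = math.pi**2 / 6
ZETA3 = 1.2020569031595943

def B0(eps):
    return (math.exp(eps * GAMMA_E) * math.gamma(1 - eps)**2
            * math.gamma(1 + eps) / math.gamma(1 - 2 * eps))

eps = 1e-3
series = 1 - 0.5 * ZETA2 * eps**2 - (7.0 / 3.0) * ZETA3 * eps**3
print(abs(B0(eps) - series))  # residual of order eps^4
```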
Substituting the expanded wavefunction (\[OmegaEven\]) into (\[ReducedAmpNLL\]) we readily obtain the following expansion $$\label{MhatEven} {{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}^{(+)}}\left( \frac{s}{-t} \right) \,=\, \sum_{\ell=1}^{\infty}\left( \frac{\alpha_s}{\pi} \right)^{\!\ell} L^{\ell-1}\, {{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}^{(+,\ell)}}\,,$$ with $$\label{ReducedAmpNLL2} {{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}^{(+,\ell)}} \,=\, -i\pi\, \frac{B_0^{\ell}(\epsilon)}{(\ell-1)!} \int {[{\mathrm{D}}k]}\; {{\Omega}^{({\ell-1})}}(p,k)\; {\mathbf{T}}_{s-u}^2\, {{\mathcal{M}}^{\mathrm{(tree)}}}\,.$$ Namely, integrating over the $(\ell-1)$-th order contribution to the wavefunction yields the $\ell$-th order contribution to the amplitude. A graphical illustration of eq. (\[ReducedAmpNLL2\]) is provided in figure \[fig:bfklwfamp\]. As discussed in the introduction, because of evolution, the amplitude at NLL accuracy can be represented as a ladder. At order $\ell$ it is obtained by closing the ladder and integrating the wavefunction of order $(\ell-1)$ over the resulting loop momentum, according to eq. (\[ReducedAmpNLL2\]). The wavefunction ${{\Omega}^{({\ell-1})}}(p,k)$ in turn is obtained by applying once the leading-order evolution kernel to the wavefunction of order $(\ell-2)$. Graphically, this operation corresponds to adding one rung to the ladder. ![Graphical representation of the amplitude at NLL accuracy, as obtained through evolution. The addition of one rung corresponds to applying once the leading-order evolution on the wavefunction of order $(\ell-2)$. This gives the wavefunction at order $(\ell-1)$, according to eq. (\[Hdef0\]). Closing the ladder and integrating over the resulting loop momentum gives the reduced amplitude, according to eq. (\[ReducedAmpNLL2\]).[]{data-label="fig:bfklwfamp"}](./img/bfklwfamp-crop.pdf) Inspecting eqs. (\[Him\]) and (\[eq:Hm\]) we see that the BFKL evolution consists of an integration and a multiplication part.
The effect of evolution is thus expressed formally in a compact form by introducing a class of functions $$\begin{aligned} \label{J_im_general} {\Omega}_{\mathrm{i},w}(p,k) &\equiv \int {[{\mathrm{D}}k']}f(p,k,k') \left[ {\Omega}_{w}(p,k') - {\Omega}_{w}(p,k) \right], \\ {\Omega}_{\mathrm{m},w}(p,k) &\equiv J(p,k) \, {\Omega}_{w}(p,k) \label{eq:Wmdef}, \end{aligned}$$ where ${\Omega}_{\varnothing}(p,k) \equiv 1$, and $w$ indicates a word made of indices “i” or “m”, which stand for integration and multiplication, according to the action of the two Hamiltonian operators in eqs. (\[Him\]) and (\[eq:Hm\]), respectively. In this notation the first four orders of the wavefunction read, for instance, $$\begin{aligned} {{\Omega}^{({1})}}(p,k) &= {({C_A}-{{\mathbf{T}}_t^2})}{\Omega}_{\rm m}, \label{WavefunctionTwoLoops-b} \\ {{\Omega}^{({2})}}(p,k) &= {({C_A}-{{\mathbf{T}}_t^2})}^2 {\Omega}_{\rm m,m} + {(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}{\Omega}_{\rm i,m}, \label{WavefunctionTwoLoops-b2} \\ \nn {{\Omega}^{({3})}}(p,k) &= {({C_A}-{{\mathbf{T}}_t^2})}^3 {\Omega}_{\rm m,m,m} + {(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}^2 \left( {\Omega}_{\rm i,m,m} + {\Omega}_{\rm m,i,m} \right) \\ & +\, {(2{C_A}-{{\mathbf{T}}_t^2})}^2 {({C_A}-{{\mathbf{T}}_t^2})}{\Omega}_{\rm i,i,m}, \label{eq:wf3loops} \\ \nn {{\Omega}^{({4})}}(p,k) &= {({C_A}-{{\mathbf{T}}_t^2})}^4 {\Omega}_{\rm m,m,m,m} \\ \nn & +\, {(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}^3 \left( {\Omega}_{\rm m,m,i,m} + {\Omega}_{\rm m,i,m,m} + {\Omega}_{\rm i,m,m,m} \right) \\ \nn & +\, {(2{C_A}-{{\mathbf{T}}_t^2})}^2 {({C_A}-{{\mathbf{T}}_t^2})}^2 \left( {\Omega}_{\rm m,i,i,m} + {\Omega}_{\rm i,m,i,m} + {\Omega}_{\rm i,i,m,m} \right) \\ & +\, {(2{C_A}-{{\mathbf{T}}_t^2})}^3 {({C_A}-{{\mathbf{T}}_t^2})}{\Omega}_{\rm i,i,i,m}.
\label{eq:wf4loops}\end{aligned}$$ Symmetries play an important role in determining the general structure of the wavefunction, and from a practical perspective they can be useful to reduce the number of integrals that need to be evaluated at each loop order. The wavefunction is symmetric under swapping the two $t$-channel Reggeons, which can be understood from the graphical representation of the evolution in figure \[fig:bfklwfamp\]. This implies $$\label{left-right-symmetry} {{\Omega}^{({\ell})}}(p,k) \,=\, {{\Omega}^{({\ell})}}(p,p-k)\,,$$ which can be easily verified by showing that the functions $f(p,k,k')$ in (\[bfkl-kernel\]), $J(p,k)$ in (\[Jp-def2\]) and ${{\Omega}^{({0})}}(p,k)$ in (\[0th-wavefunction-tilde\]) obey the same symmetry. This symmetry property will come in handy in section \[soft\], making it possible to capture simultaneously both soft limits, $k^2\to 0$ and $(p-k)^2\to 0$. This, in turn, will be important for implementing a systematic separation between the soft and hard regimes, without needing an extra regulator. Despite the simplifications allowed by symmetries, though, the evaluation of the wavefunction in $2-2{\epsilon}$ transverse dimensions without additional simplifications becomes quickly infeasible. For instance, already the wavefunctions with one or two integrations (one or two occurrences of the index “i”) involve integrals of the type $$\begin{aligned} \label{triang1} \nn {\Omega}_{\rm i,m} &\ni \int {[{\mathrm{D}}k']}\, \frac{(p-k)^2}{(p-k')^2 (k-k')^2} {\left(\frac{p^2}{(k')^2}\right)}^{{\epsilon}}, \\ {\Omega}_{\rm i,i,m} &\ni \int {[{\mathrm{D}}k']}[Dk''] \, \frac{k^2 (p-k'')^2}{(k'')^2 (p-k')^2 (k-k'')^2 (k'-k'')^2} {\left(\frac{p^2}{(k')^2}\right)}^{{\epsilon}},\end{aligned}$$ which are represented respectively in figure \[fig:triang\] (a) and (b). Such integrals evaluate to Appell, and more generally Lauricella, functions in dimensional regularisation.
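The colour-structure pattern visible in the expansions of ${\Omega}^{(1)}$ through ${\Omega}^{(4)}$ above (every length-$\ell$ word in $\{\mathrm{i},\mathrm{m}\}$ ending in “m”, since ${\hat{H}}_{\mathrm{i}}$ annihilates the constant initial condition, with one factor of $(2C_A-{\mathbf{T}}_t^2)$ per “i” and one factor of $(C_A-{\mathbf{T}}_t^2)$ per “m”) can be enumerated mechanically; the following small bookkeeping sketch is ours, not taken from the paper:

```python
from itertools import product

# Bookkeeping behind the expansions of Omega^(1)..Omega^(4): at order ell the
# wavefunction is a sum over all length-ell words in {i, m} ending in "m"
# (H_i annihilates the constant initial condition), with colour prefactor
# (2*C_A - T_t^2)^{#i} * (C_A - T_t^2)^{#m}.  Words are grouped here by the
# number of "i" letters, which fixes the colour prefactor.

def wavefunction_words(ell):
    out = {}
    for prefix in product("im", repeat=ell - 1):
        word = "".join(prefix) + "m"
        out.setdefault(word.count("i"), []).append(word)
    return out

# Order 3 reproduces the word content of Omega^(3):
# {2: ['iim'], 1: ['imm', 'mim'], 0: ['mmm']}
print(wavefunction_words(3))
```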
Given the lack of a systematic classification of these functions in terms of iterated integrals, the evaluation of the wavefunction beyond the third order is not practical. ![Three-mass triangle integrals with massless propagators, which appear in the calculation of the wavefunction at two and three loops. These integrals contribute to the amplitude only starting respectively at four and five loops, due to symmetry constraints, as discussed in the main text. The bubble integral on one of the edges of the triangle clarifies the origin of the propagator which is raised to the power ${\epsilon}$ in eq. .[]{data-label="fig:triang"}](./img/Wavefunction-loops.pdf){width="72.00000%"} The amplitude at order $\ell$ is obtained upon integrating the wavefunction of order $\ell-1$, as indicated in [eq. ]{}. As in the case of the wavefunction, symmetries turn out to be important for simplifying the calculation and for interpreting the result. While the two Reggeons in the wavefunction can be *defined* to originate from either the projectile *or* target Wilson line — which gives the corresponding ladder graphs a sense of direction — this is no longer true at the level of the amplitude. Physically the two cases become indistinguishable, and we refer to this as the target-projectile symmetry. In general, this implies the relation [@Caron-Huot:2017zfo] $$\label{JiJmSymAllOrders} \int {[{\mathrm{D}}k]}\, \frac{p^2}{k^2(p-k)^2} \left[ {\Omega}_{\mathrm{i},w}(p,k) - {\Omega}_{\mathrm{m},w}(p,k) \right] = 0.$$ Furthermore, in the notation of eqs.  and , reversal of the rungs directly translates to the reversal of the indices of the wavefunction. The target-projectile symmetry thus guarantees the equality $$\int {[{\mathrm{D}}k]}\, \frac{p^2}{k^2(p-k)^2}\, {\Omega}_{a_1,\ldots,a_n}(p,k) = \int {[{\mathrm{D}}k]}\, \frac{p^2}{k^2(p-k)^2}\, {\Omega}_{a_n,\ldots,a_1}(p,k).$$ The symmetries discussed above can reduce the number of functions to be computed significantly, and make the calculation of the amplitude trivial up to three loops, since it can be shown that the integration of the wavefunction involves only bubble integrals.
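Under the index-reversal property just stated, wavefunction integrals whose words are related by reading the indices backwards need not be computed twice. A rough count of the independent words (a sketch only; it ignores further relations, such as configurations that vanish identically):

```python
from itertools import product

def independent_words(n):
    """Words of length n over {'i','m'}, grouped up to reversal: reversed
    words integrate to the same amplitude contribution, so one canonical
    representative (the lexicographic minimum of the pair) is kept."""
    words = {''.join(p) for p in product('im', repeat=n)}
    return {min(w, w[::-1]) for w in words}
```

For instance, the $8$ words of length three collapse to $6$ classes, and the $16$ words of length four to $10$.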
Furthermore, the calculation of the amplitude at four loops in dimensional regularisation is still feasible, as it involves bubble integrals and a single more involved kite-like integral, represented in figure \[fig:5loopex\] (a). Up to four loops one obtains [@Caron-Huot:2017zfo] $$\begin{aligned} \label{ReducedAmpNLL2-one-loop} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{1})}} &= i\pi \frac{{B_{0}}}{2{\epsilon}} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \label{ReducedAmpNLL2-two-loop} \nn {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{2})}} &= i\pi \frac{({B_{0}})^2}{2} \left[ \frac{1}{(2{\epsilon})^2} + \frac{9\zeta_3}{2}{\epsilon}+ \frac{27\zeta_4}{4}{\epsilon}^2 + \frac{63\zeta_5}{2}{\epsilon}^3 + {\mathcal{O}}({\epsilon}^4) \right] \\ &\hspace{55mm} \times {({C_A}-{{\mathbf{T}}_t^2})}{{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{ReducedAmpNLL2-three-loops} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{3})}} &= i\pi \frac{{B_{0}}^3}{3!} \left[ \frac{1}{(2{\epsilon})^3} - \frac{11\zeta_3}{4} - \frac{33\zeta_4}{8}{\epsilon}- \frac{357\zeta_5}{4}{\epsilon}^2 + {\mathcal{O}}({\epsilon}^3) \right] \\ & \hspace{55mm} \times {({C_A}-{{\mathbf{T}}_t^2})}^2 {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{ReducedAmpNLL2-four-loops} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{4})}} &= i\pi \frac{{B_{0}}^4}{4!} \bigg\{ {({C_A}-{{\mathbf{T}}_t^2})}^3 \left( \frac{1}{(2{\epsilon})^4} + \frac{175\zeta_5}{2}{\epsilon}+ {\mathcal{O}}({\epsilon}^2) \right) \\ & \hspace{8mm} + {C_A}{({C_A}-{{\mathbf{T}}_t^2})}^2 \left( -\frac{\zeta_3}{8{\epsilon}} - \frac{3}{16}\zeta_4 - \frac{167\zeta_5}{8}{\epsilon}+ {\mathcal{O}}({\epsilon}^2) \right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}. \end{aligned}$$ A thorough discussion of the target-projectile symmetry and its effect on the colour structure of the amplitude has been given in [@Caron-Huot:2017zfo], to which we refer for further details.
In this paper we are interested in evaluating the amplitude, including finite terms, at higher orders in the perturbative expansion. Despite the symmetries discussed above, however, beyond four loops the iterated integrals that appear are hard to evaluate with current methods. A simple and fast way to extend the study of refs. [@Caron-Huot:2013fea; @Caron-Huot:2017zfo] to higher loops is provided by numerical integration methods. In particular, we find sector decomposition as implemented in `pySecDec`/`SecDec` [@Carter:2010hi; @Borowka:2017esm] to be well suited to calculate the nested integrals that enter the five-loop amplitude. Provided the numerical accuracy is high enough, it is straightforward to extract from the results the rational coefficients of the zeta numbers appearing at this loop order. This procedure relies on the observed *homogeneous transcendental weight* property of the $\ell$-loop amplitude: assigning $o({\epsilon}) = -1$, $o(\pi) = 1$ and $o(\zeta_n) = n$, one sees that the terms of the $\ell$-loop amplitude are uniformly of weight $o({{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{\ell})}} ) = \ell$. We can hence deduce which zeta numbers (or powers of $\pi$) may appear at any given order in ${\epsilon}$. Another observation facilitates this procedure at five loops: after dividing the $\ell$-loop amplitude by $B_0^\ell$ there are no occurrences of $\zeta_2 = \pi^2/6$ up to four loops, see e.g. the ${\mathcal{O}}({\epsilon})$ terms of eq. . If we assume this absence of $\zeta_2$ to be an actual property of the amplitude, the finite terms of the five-loop amplitude can only be proportional to a single transcendental number, $\zeta_5$, whereas $\zeta_3 \zeta_2$ is excluded. At this point this approach may seem rather conjectural. However, over the course of the next two sections we develop methods that prove this assumption, and we shall briefly return to it at the end of section \[sec:finiteamp\].
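The weight bookkeeping can be automated. In the sketch below (an illustration of the counting rule only, not of the actual extraction code) a term is described by its power of ${\epsilon}$ and its list of zeta indices; the four-loop terms quoted above, stripped of the overall $i\pi$ and ${B_{0}}^4$ prefactors, all come out with weight $4$:

```python
def weight(eps_power=0, zetas=(), pi_power=0):
    """Transcendental weight with o(eps) = -1, o(pi) = +1, o(zeta_n) = n;
    rational coefficients carry no weight."""
    return -eps_power + pi_power + sum(zetas)

# Four-loop terms: 1/(2 eps)^4, 175/2 zeta_5 eps, -zeta_3/(8 eps),
# -3/16 zeta_4, -167/8 zeta_5 eps.
four_loop_terms = [
    dict(eps_power=-4),
    dict(eps_power=1, zetas=(5,)),
    dict(eps_power=-1, zetas=(3,)),
    dict(eps_power=0, zetas=(4,)),
    dict(eps_power=1, zetas=(5,)),
]
```

Uniform weight then dictates, order by order in ${\epsilon}$, which zeta numbers can multiply a given pole or finite term.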
To obtain the five-loop amplitude ${{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{5})}}$ we integrate the four-loop wavefunction ${{\Omega}^{({4})}}(p,k)$ of  according to eq. . In doing so one is faced with a plethora of multi-loop integrals. Many of them correspond to bubble graphs and can be easily evaluated analytically. Others vanish because of the symmetries discussed above. The remaining integrals can be computed numerically using `pySecDec`. One of the more difficult examples is shown in figure \[fig:5loopex\]. In the depicted case one can integrate out the two internal bubbles and is left with a three-loop integral with two of the propagators raised to non-integer powers: \[eq:5loopex\] \~ . ![Examples of four- and five-loop integrals that enter the calculation of the four- and five-loop amplitude, respectively. The two bubbles may be integrated out, turning them into a two- and three-loop integral with two propagators raised to non-integer powers, *cf.* eq. .[]{data-label="fig:5loopex"}](./img/Amplitude-loops.pdf){width="76.00000%"} After combining all contributions (and reconstructing the zeta numbers in the case of the numerical results) we find $$\begin{gathered} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{5})}} = i\pi \frac{{B_{0}}^5}{5!} \left\{ {({C_A}-{{\mathbf{T}}_t^2})}^4 \left( \frac{1}{32 {\epsilon}^5} - \frac{53 \zeta_5}{2} \right) \right. \\ \left. + {C_A}{({C_A}-{{\mathbf{T}}_t^2})}^3 \left( -\frac{\zeta_3}{16 {\epsilon}^2} - \frac{3 \zeta_4}{32 {\epsilon}} + \frac{253 \zeta_5}{16} \right) - \frac{5}{2} {C_A}^2 {({C_A}-{{\mathbf{T}}_t^2})}^2 \zeta_5 \right\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}. \label{eq:m5num}\end{gathered}$$ This result will serve as a consistency check for our computation below.
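The reconstruction of zeta coefficients from the numerics can be illustrated as follows. Assuming (by the weight argument) that a finite term equals a rational multiple of $\zeta_5$, the rational is recovered from the floating-point result with a continued-fraction fit. This is a sketch; the function name and tolerance are illustrative, not those of the actual code used:

```python
from fractions import Fraction

ZETA5 = 1.0369277551433699  # zeta(5)

def zeta5_coefficient(value, max_denominator=1000):
    """Recover the rational q in value = q * zeta_5 from a numerical
    result, assuming the numerics are accurate to much better than
    1/max_denominator**2."""
    return Fraction(value / ZETA5).limit_denominator(max_denominator)
```

For instance, a numerical finite part of $16.3965\ldots$ reconstructs to $253/16$, matching the coefficient quoted in eq. (\[eq:m5num\]).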
The soft approximation {#soft} ====================== In section \[chap:bfkl\] we have shown how the two-Reggeon contribution to the two-parton scattering amplitude is conveniently described in terms of the reduced amplitude $\hat {\cal M}$. The latter is defined in [eq. ]{} by (multiplicatively) removing the single-Reggeon effect from the full amplitude ${\cal M}$. This allowed us to use BFKL evolution to express the two-Reggeon contribution to $\hat {\cal M}$ in terms of iterated integrals. Beyond four loops these integrals become difficult to evaluate exactly in $d = 4-2{\epsilon}$ dimensions, but as we are going to show now, this is also not necessary. Ultimately we are interested in extracting physical information about the scattering process, and dimensional regularization is used in the present context for the sole purpose of regularizing long-distance singularities[^1]. Here infrared factorization comes into play: the long-distance singularities of ${\cal M}$ can be factorized, ${\cal M}={\bf Z} {\mathcal{H}}$, where the “infrared renormalization” factor ${\bf Z}$ captures all divergences (which famously exponentiate in terms of the soft anomalous dimension, see e.g. [@Sterman:1995fz; @Collins:1989gx; @Korchemskaya:1994qp; @Catani:1998bh; @Aybat:2006mz; @Sterman:2002qn; @Gardi:2009qi; @Becher:2009cu; @Becher:2009qa; @Almelid:2015jia; @Almelid:2017qju]) while the infrared-renormalized amplitude ${\mathcal{H}}$ – sometimes referred to as the “hard function” – is finite, and can be evaluated in four space-time dimensions (or equivalently, two transverse dimensions). To understand this from a physical perspective recall that physical quantities such as cross sections are finite: starting from the infrared-singular amplitude ${\cal M}$, their calculation inevitably incorporates a mechanism of cancellation of the singularities involving soft real-gluon emission.
Once this is implemented, the finite, physical result can only depend on four-dimensional quantities, namely the *soft anomalous dimension* and the *infrared-renormalized amplitude* ${\mathcal{H}}$. In Ref. [@Caron-Huot:2017zfo] we have shown that the soft anomalous dimension associated with the signature-even amplitude, or indeed the relevant infrared renormalization factor ${\bf Z}$, can be computed to all orders by evaluating the reduced amplitude $\hat{\mathcal{M}}$ to ${\mathcal{O}}({\epsilon}^{-1})$. Similarly, we are going to show now (section \[sec:IRfact\]) that the infrared-renormalized amplitude ${\mathcal{H}}$ (in four dimensions) can be completely determined from the reduced amplitude $\hat{\mathcal{M}}$, evaluated at the same accuracy, i.e. to ${\mathcal{O}}({\epsilon}^0)$. This, along with the fact that the corresponding wavefunction $\Omega$ is finite, greatly simplifies the task of performing BFKL evolution to high loop orders, because it allows us to follow an “expansion by region” approach: in section \[sec:soft\_hard\_split\] we split the wavefunction into soft and hard components, each of which is rendered computable using different considerations. The soft wavefunction – giving rise to all the singularities in the amplitude – can be computed analytically in dimensional regularization owing to the drastic simplification of BFKL evolution in this limit, while the hard wavefunction is only required in strictly two transverse dimensions, where BFKL evolution again simplifies (see section \[2d-bfkl\]). These two wavefunction components will subsequently serve to compute the corresponding soft and hard contributions to the reduced amplitude $\hat{\mathcal{M}}$ to the required order, ${\mathcal{O}}({\epsilon}^0)$. In section \[sec:softwave\] we review the main results of Ref. [@Caron-Huot:2017zfo] regarding the all-order computation of the wavefunction within the soft approximation.
We also introduce there a symmetrized soft wavefunction which captures both soft limits. This, in turn, is used in section \[sec:softAmpl\] to compute the corresponding ${\mathcal{O}}({\epsilon}^0)$ contributions to the reduced amplitude. Finally, in section \[hardFdef\] we make use of the results of sections \[sec:IRfact\] and \[sec:softAmpl\] to evaluate the ${\mathcal{O}}({\epsilon}^0)$ soft contributions to the infrared-renormalized amplitude ${\mathcal{H}}$. Infrared factorisation in the high-energy limit\[sec:IRfact\] ------------------------------------------------------------- According to the infrared factorisation theorem (see e.g. [@Sterman:1995fz; @Collins:1989gx; @Korchemskaya:1994qp; @Catani:1998bh; @Aybat:2006mz; @Sterman:2002qn; @Gardi:2009qi; @Becher:2009cu; @Becher:2009qa; @Almelid:2015jia; @Almelid:2017qju]), infrared singularities of an amplitude ${\mathcal{M}}$ are multiplicatively renormalised by a factor ${\bf Z}$, $$\label{IRfacteq} {\mathcal{M}}\left(\{p_i\},\mu, {\alpha_s}(\mu) \right) = {\bf Z}\left(\{p_i\},\mu, {\alpha_s}(\mu) \right)\, {\mathcal{H}}\left(\{p_i\},\mu, {\alpha_s}(\mu) \right),$$ such that the infrared-renormalized amplitude ${\cal H}$ is finite as ${\epsilon}\to 0$. We use a minimal subtraction scheme, where the renormalisation factor ${\bf Z}$ consists of pure poles. It is then given explicitly as the path-ordered exponential of the soft anomalous dimension: $$\label{RGsol} {\bf Z}\left(\{p_i\},\mu, {\alpha_s}(\mu) \right) = {\mathcal{P}}\exp\left\{ -\int_0^{\mu} \frac{d\lambda}{\lambda}\, {\bf \Gamma}\left(\{p_i\},\lambda, {\alpha_s}(\lambda) \right) \right\},$$ where, to the accuracy needed in this paper, we can restrict to the tree-level running coupling: ${\alpha_s}(\lambda) = {\alpha_s}(p) \left(p^2/\lambda^2\right)^{{\epsilon}}$. Given that ${\bf Z}$ was determined in Ref. [@Caron-Huot:2017zfo] to NLL accuracy in the high-energy logarithm, our goal here is to determine the infrared-renormalized amplitude ${\cal H}$ to the same accuracy. Thus we need to specialise [eq. ]{} to the high-energy limit.
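With the tree-level running coupling above, the scale integral in the exponent of ${\bf Z}$ closes in elementary form. The following is a sketch of this standard step for a single power of the coupling (the $\lambda\to 0$ endpoint is convergent for ${\epsilon}<0$): $$\int_0^{\mu} \frac{d\lambda}{\lambda}\, {\alpha_s}(\lambda) \,=\, {\alpha_s}(p)\, (p^2)^{{\epsilon}} \int_0^{\mu} d\lambda\, \lambda^{-1-2{\epsilon}} \,=\, -\frac{{\alpha_s}(p)}{2{\epsilon}} \left(\frac{p^2}{\mu^2}\right)^{{\epsilon}} \,=\, -\frac{{\alpha_s}(\mu)}{2{\epsilon}},$$ which makes explicit how each power of the soft anomalous dimension in the exponent translates into a pure $1/{\epsilon}$ pole in ${\bf Z}$.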
Recalling that in this limit the amplitude splits naturally into even and odd components under the $s \leftrightarrow u$ signature symmetry, we may focus directly on the even component (the odd component was analysed already in [@Caron-Huot:2017fxr]): $$\label{MtoHeven} {\mathcal{M}}^{(+)}_{\rm NLL} = {\bf Z}^{(-)}_{\rm NLL}\, {\mathcal{H}}^{(-)}_{\rm LL} + {\bf Z}^{(+)}_{\rm LL}\, {\mathcal{H}}^{(+)}_{\rm NLL}.$$ Our final goal is to determine ${\mathcal{H}}^{(+)}_{\rm NLL}$. Let us begin by inverting (\[MtoHeven\]), i.e. $$\label{getH} {\mathcal{H}}^{(+)}_{\rm NLL} = - \big({\bf Z}^{-1}\big)^{(+)}_{\rm LL}\, {\bf Z}^{(-)}_{\rm NLL}\, {\mathcal{H}}^{(-)}_{\rm LL} + \big({\bf Z}^{-1}\big)^{(+)}_{\rm LL}\, {\mathcal{M}}^{(+)}_{\rm NLL}.$$ In [eq. ]{} both the leading- and next-to-leading logarithmic renormalisation factors are known: ${\bf Z}^{(+)}_{\rm LL}$, and hence also $\big({\bf Z}^{-1}\big)^{(+)}_{\rm LL}$, is easily determined from the single-Reggeon exchange, see [eqs.  and ]{}: $$\label{Zll} {\bf Z}^{(+)}_{\rm LL} = e^{\frac{x}{2{\epsilon}} {{\mathbf{T}}_t^2}}, \qquad \big({\bf Z}^{-1}\big)^{(+)}_{\rm LL} = e^{-\frac{x}{2{\epsilon}} {{\mathbf{T}}_t^2}},$$ where we defined $x\equiv \frac{\alpha_s}{\pi} L$. The factor ${\bf Z}^{(-)}_{\rm NLL}$ was determined to all orders in perturbation theory in [@Caron-Huot:2017zfo]: comparing eqs. (4.12), (4.14) and (4.17) there we express ${\bf Z}^{(-)}_{\rm NLL} $ as $$\label{Znll} e^{-\frac{x}{2{\epsilon}} {{\mathbf{T}}_t^2}}\, {\bf Z}^{(-)}_{\rm NLL} = \frac{i\pi}{L}\, \frac{e^{\frac{x}{2{\epsilon}} {B_{0}}({\epsilon}) {({C_A}-{{\mathbf{T}}_t^2})}} - 1}{{C_A}-{{\mathbf{T}}_t^2}} \left( 1 - \frac{{C_A}}{{C_A}-{{\mathbf{T}}_t^2}}\, R({\epsilon}) \right)^{-1} {{\mathbf{T}}_{s-u}^2}\bigg|_{\rm poles},$$ where the function $R({\epsilon})$ reads $$\begin{aligned} R({\epsilon}) &= \frac{\Gamma^3(1-{\epsilon})\,\Gamma(1+{\epsilon})}{\Gamma(1-2{\epsilon})} - 1 \nn \\ \label{Rdef} &= -2\zeta_3 \,{\epsilon}^3 -3\zeta_4\, {\epsilon}^4 -6\zeta_5\, {\epsilon}^5 -(10 \zeta_6-2\zeta^2_3 )\, {\epsilon}^6 + {\mathcal{O}}({\epsilon}^7),\end{aligned}$$ see also eq. (3.16) of [@Caron-Huot:2017zfo]. In [eq. ]{} the factor $\exp[- (x {\mathbf{T}}_t^2)/(2{\epsilon})]$ on the l.h.s. is left there, because it corresponds directly to the factor $\big({\bf Z}^{-1}\big)^{(+)}_{\rm LL}$ appearing in [eq. ]{} to the left of ${\bf Z}^{(-)}_{\rm NLL}$. Notice also that the $-1$ in the numerator of the first fraction on the r.h.s. of [eq. ]{} can actually be removed, given that we need to consider only the poles originating from [eq. 
]{}, and this term contributes only at ${\mathcal{O}}({\epsilon}^0)$, given that the second $\epsilon$-dependent factor is regular, i.e. $\big[ 1 - C_A/(C_A -{{\mathbf{T}}_t^2}) \, R({\epsilon}) \big]^{-1} = 1 + {\mathcal{O}}({\epsilon}^3)$. [Eq. ]{} contains also the leading-logarithmic infrared-renormalized amplitude ${\mathcal{H}}^{(-)}_{\rm LL}$, which, as in the case of ${\bf Z}^{(+)}_{\rm LL}$, is determined by single-Reggeon exchange, compare again with [eqs.  and ]{}: $$\label{Hll} {\mathcal{H}}^{(-)}_{\rm LL} = e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}}\, x\, C_A}\, {{\mathcal{M}}^{\mathrm{(tree)}}},$$ where we have substituted ${{\mathbf{T}}_t^2}\to C_A$, given that in ${\mathcal{H}}^{(-)}_{\rm LL}$ the operator ${{\mathbf{T}}_t^2}$ acts on the tree-level amplitude. At this point we have collected all the ingredients needed to explicitly write down the first term in [eq. ]{}. The only missing term on the r.h.s. of this equation is thus the even amplitude itself, ${\mathcal{M}}^{(+)}_{\rm NLL}$. As explained above, in order to determine ${\mathcal{M}}^{(+)}_{\rm NLL}$ by means of BFKL evolution, we wish to express it in terms of the reduced amplitude ${\cal \hat M}^{(+)}_{\rm NLL}$ of [eq. ]{}. Substituting eqs. (\[Mreduced\]), (\[Zll\]) and (\[Hll\]) into [eq. ]{} we get $$\label{getH2} {\mathcal{H}}^{(+)}_{\rm NLL} = - e^{-\frac{x}{2{\epsilon}} {\mathbf{T}}_t^2} {\bf Z}^{(-)}_{\rm NLL} \, e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}} \, x \, C_A} {\mathcal{M}}^{\rm (tree)} +e^{\frac{B_0({\epsilon})-1}{2{\epsilon}} \, x \, {\mathbf{T}}_t^2} {\cal \hat M}^{(+)}_{\rm NLL}\,\,,$$ where the factor $e^{-\frac{x}{2{\epsilon}} {\mathbf{T}}_t^2} {\bf Z}^{(-)}_{\rm NLL}$ of (\[Znll\]) can be readily substituted as well (this will be done in section \[hardFdef\]). Eq. (\[getH2\]) is an important step because (given that $B_0({\epsilon})-1 = {\cal O}({\epsilon}^2)$, eq.
(\[B0\])) it clearly shows that the hard function ${\mathcal{H}}^{(+)}_{\rm NLL} $ at $\epsilon\to 0$ is completely determined once the BFKL-motivated reduced amplitude ${\cal \hat M}^{(+)}_{\rm NLL}$ is known to ${\mathcal{O}}({\epsilon}^0)$, which is the result anticipated at the beginning of this section. With this in mind, we proceed to compute ${\cal \hat M}^{(+)}_{\rm NLL}$ to ${\mathcal{O}}({\epsilon}^0)$. Soft and hard wavefunction and amplitude {#sec:soft_hard_split} ---------------------------------------- Our strategy to compute the finite part of the reduced amplitude ${\cal \hat M}^{(+)}_{\rm NLL}$ at higher orders is to separate soft and hard components of the wavefunction and truncate the latter to two transverse dimensions (${\epsilon}=0$), where BFKL evolution is much more tractable (see section \[2d-bfkl\]). As demonstrated in ref. [@Caron-Huot:2017zfo], the soft limit of the wavefunction, where one of the two Reggeons has a small momentum, e.g., $k^2\ll (p-k)^2\simeq p^2$, fully determines all the singular parts in $\epsilon$. This was used to obtain the all-order result for the renormalisation factor ${\bf Z}^{(-)}_{\rm NLL} $ in [eq. ]{}. In addition, the soft limit generates some ${\cal O}({\epsilon}^0)$ finite contributions, which must be added to those generated by the complementary hard region, where both $k^2$ and $(p-k)^2$ are of order $p^2$. To control ${\cal O}({\epsilon}^0)$ terms a clear separation between the two regions is necessary. We choose to do this at the level of the wavefunction ${\Omega}(p,k)$. Recall that ${\Omega}(p,k)$ is a finite function[^2] of ${\epsilon}$ [@Caron-Huot:2017zfo], i.e. any singularities in the reduced amplitude are generated through the final integration over the wavefunction in (\[ReducedAmpNLL\]).
To proceed we split the wavefunction into two terms: $$\label{OmegaSplitDef} {\Omega}(p,k) = {{\Omega}_{\mathrm{s}}}(p,k) + {{\Omega}_{\mathrm{h}}}(p,k),$$ such that the second term, the hard component, vanishes in soft limits: $$\label{h_definition} \lim_{k\to 0} {{\Omega}_{\mathrm{h}}}(p,k) = \lim_{k\to p} {{\Omega}_{\mathrm{h}}}(p,k) = 0.$$ It then follows from (\[ReducedAmpNLL\]) that no singularities can be generated upon integrating ${{\Omega}_{\mathrm{h}}}(p,k)$ (i.e. all the singularities in ${{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+)}}$ are generated upon integrating ${{\Omega}_{\mathrm{s}}}(p,k)$) and hence only the ${\epsilon}\to 0$ limit of ${{\Omega}_{\mathrm{h}}}$ contributes to the finite part of the reduced amplitude. Denoting the wavefunction in this limit as $$\label{WhardTwod} {{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k) \,\equiv\, \lim_{{\epsilon}\to 0} {{\Omega}_{\mathrm{h}}}(p,k) = {\Omega}^{({\rm 2d})}(p,k) - {{\Omega}_{\mathrm{s}}}^{({\rm 2d})}(p,k),$$ the reduced amplitude (\[ReducedAmpNLL\]), through order ${\cal O}({\epsilon}^0)$, is then given as a sum of soft and hard components: $$\label{eq:redampSplit} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+)}}\left(\frac{s}{-t}\right) = \hat {\cal M}^{(+)}_{\rm NLL,s}\left(\frac{s}{-t}\right) + \hat {\cal M}^{(+)}_{\rm NLL,h}\left(\frac{s}{-t}\right),$$ with \[Msh\] $$\begin{aligned} \hat {\cal M}^{(+)}_{\rm NLL,s}\left(\frac{s}{-t}\right) &=-i\pi \int {[{\mathrm{D}}k]}\,\frac{p^2}{k^2(p-k)^2} {{\Omega}_{\mathrm{s}}}(p,k)\,\, {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij}\,, \\ \label{Mhard} \hat {\cal M}^{(+)}_{\rm NLL,h}\left(\frac{s}{-t}\right) &= -i\pi\lim_{{\epsilon}\to 0}\int {[{\mathrm{D}}k]}\,\frac{p^2}{k^2(p-k)^2}\, {{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)\,\, {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}_{ij\to ij}\, .\end{aligned}$$ Equations (\[eq:redampSplit\]) and (\[Msh\]) are central to our approach and will guide our computations in what follows. They show that, to compute the finite part of the reduced amplitude, we must treat the soft wavefunction exactly as a function of ${\epsilon}$, but we are allowed to truncate the hard wavefunction to ${\cal O}({\epsilon}^0)$.
Note that in (\[Mhard\]) we have already substituted the two-dimensional limit of the hard wavefunction, so taking the ${\epsilon}\to 0$ limit simply amounts to taking the integration momentum $k$ to be two-dimensional. These finite integrals will be done in section \[amplitude\]. Let us briefly summarise our plan for the remainder of this section. After reviewing the main arguments of [@Caron-Huot:2017zfo], our aim in section \[sec:softwave\] is to present a symmetrized version of the soft wavefunction in dimensional regularization, eq. (\[Well-1-ansatz-sym\]), which simultaneously captures the two regions where either of the two Reggeons is soft. We then extract the ${\cal O}(\epsilon^0)$ terms in the wavefunction and resum them; these will be used in section \[amplitude\] to determine the two-dimensional hard wavefunction ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)$ from the full one according to eq. (\[WhardTwod\]). Subsequently in section \[sec:softAmpl\] we use the soft wavefunction, computed to all orders in $\epsilon$, to determine the corresponding contributions to the reduced $2\to 2$ amplitude. We also present an analytic formula resumming these corrections in eq. (\[ReducedAmpNLLresum2\]). Finally, in section \[hardFdef\] we determine the soft wavefunction contribution to the infrared-renormalized amplitude ${\mathcal{H}}^{(+)}_{\rm NLL} $ using eq. (\[getH2\]). The soft wavefunction {#sec:softwave} --------------------- The central property of the wavefunction $\Omega(p,k)$ highlighted in [@Caron-Huot:2017zfo] and already mentioned above, is the fact that it is finite for ${\epsilon}\to 0$, to all orders in perturbation theory. This has far reaching consequences, because it means that all singularities in the amplitude must arise from the last integration in (\[ReducedAmpNLL\]), and originate from the soft limits $k \to 0$ and $k \to p$ of $\Omega(p,k)$.
One finds that it is particularly easy to calculate the wavefunction in these limits: as it turns out, the soft approximation is closed under BFKL evolution, i.e., starting with ${{\Omega}^{(j)}}(p,k)$, with $k$ soft, implies that the momentum $k'$ in ${{\Omega}^{(j-1)}}(p,k')$, which has one rung fewer, can also be taken soft, $k' \to 0$, without affecting the result for ${{\Omega}^{(j)}}(p,k)$. In other words, starting with ${{\Omega}^{(j)}}(p,k)$ where $k$ is soft is equivalent to considering the entire side rail of the ladder consisting of soft momenta, $k'$, $k''$, $\ldots\to 0$. Similarly, starting with $k \to p$ implies that all momenta $(p-k)$, $(p-k')$, $(p-k'')$, $\ldots$, are soft. The symmetry of [eq. ]{}, then, implies that $\Omega(p,k)$ in the two limits $k \to 0$ and $k \to p$ must be the same. In the soft limit the BFKL hamiltonian becomes [@Caron-Huot:2017zfo] $$\begin{aligned} {\Omega}^{(\ell-1)}_s(p,k) &= H_s\, {\Omega}^{(\ell-2)}_s(p,k), \nn \\ H_s\, {\Omega}(p,k) &= {(2{C_A}-{{\mathbf{T}}_t^2})} \int {[{\mathrm{D}}k']}\, f_s(p,k,k') \left[ {\Omega}(p,k') - {\Omega}(p,k) \right] \nn \\ &\quad + {({C_A}-{{\mathbf{T}}_t^2})}\, J_s(p,k)\, {\Omega}(p,k), \label{softH}\end{aligned}$$ where $f_s(p,k,k')$ denotes the soft limit of the kernel $f(p,k,k')$, and $$\label{JpSoft} J_s(p,k) = \frac{1}{2{\epsilon}} \left[ 1 - {\left(\frac{p^2}{k^2}\right)}^{{\epsilon}} \right]$$ is the soft approximation of [eq. ]{}. One finds that the wavefunction becomes a polynomial in $ \xi \equiv (p^2/k^2)^{{\epsilon}}$, i.e., the soft limit turns BFKL evolution into a one-scale problem. The integrals involved in [eq. ]{} are simple bubble integrals of the type $$\label{bubbleGeneral1} \int {[{\mathrm{D}}k']}\, f_s(p,k,k')\, {\left(\frac{p^2}{(k')^2}\right)}^{n {\epsilon}} = -\frac{{B_{n}}({\epsilon})}{2{\epsilon}}\, {\left(\frac{p^2}{k^2}\right)}^{(n+1){\epsilon}},$$ where the integration measure is given in eq. (\[measure\]), and the class of bubble functions ${B_{n}}({\epsilon})$ is $$\label{bubbleGeneral2} {B_{n}}({\epsilon}) = e^{{\epsilon}\gamma_{\rm E}}\, \frac{\Gamma(1-{\epsilon})\,\Gamma(1-(n+1){\epsilon})\,\Gamma(1+(n+1){\epsilon})}{\Gamma(1+n{\epsilon})\,\Gamma(1-(n+2){\epsilon})}.$$ Note that $B_0$ of (\[B0\]) appearing in the gluon Regge trajectory and in the measure (\[measure\]) corresponds to the special case of (\[bubbleGeneral2\]) with $n=0$. Using eq.
(\[bubbleGeneral1\]) one can write the action of the soft Hamiltonian (\[softH\]) on any monomial $\xi^m$ ($m\geq 0$): $$\begin{aligned} \label{softHpower} H_s\, \xi^m &= \frac{1}{2{\epsilon}} \left( (1-\xi)\,{({C_A}-{{\mathbf{T}}_t^2})} + {\hat B_{m}}({\epsilon})\,\xi\,{(2{C_A}-{{\mathbf{T}}_t^2})} \right) \xi^m \nn \\ &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}}{2{\epsilon}} \left(\xi^m - \left[ 1 - {\hat B_{m}}({\epsilon})\, \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}}{{({C_A}-{{\mathbf{T}}_t^2})}} \right] \xi^{m+1} \right),\end{aligned}$$ where we have introduced the notation $$\label{bubblehat} {\hat B_{n}}({\epsilon}) \,\equiv\, 1- \frac{{B_{n}}({\epsilon})}{{B_{0}}({\epsilon})} = 2 n (2 + n)\, \zeta_3\, {\epsilon}^3 + 3 n (2 + n)\, \zeta_4\, {\epsilon}^4 +\ldots.$$ By making repeated use of [eq. ]{} one finds that the wavefunction at order $(\ell-1)$ can be expressed in closed form, as follows [@Caron-Huot:2017zfo]: $$\label{Well-1-ansatz} {{\Omega}_s^{(\ell-1)}}(p,k) = \frac{{({C_A}-{{\mathbf{T}}_t^2})}^{\ell-1}}{(2{\epsilon})^{\ell-1}} \sum_{n=0}^{\ell-1} (-1)^n \binom{\ell-1}{n} {\left(\frac{p^2}{k^2}\right)}^{n{\epsilon}} \prod_{m=0}^{n-1} \left\{1 - {\hat B_{m}}({\epsilon})\, \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}}{{({C_A}-{{\mathbf{T}}_t^2})}}\right\}.$$ As discussed in [@Caron-Huot:2017zfo], this expression can be easily integrated, obtaining an expression for the amplitude which correctly describes its singular part to all orders in perturbation theory. While [eq. ]{} is perfectly valid in the soft limit, it breaks explicitly the symmetry of [eq. ]{} between the two soft limits. As we will see below, it is advantageous to work with expressions where this symmetry is manifest. In this paper, we thus introduce a different soft wavefunction, obtained by symmetrising [eq. ]{} under $k \leftrightarrow (p-k)$: $$\begin{aligned} \label{Well-1-ansatz-sym} \nn {{\Omega}_s^{(\ell-1)}}(p,k) &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}^{\ell-1}}{(2{\epsilon})^{\ell-1}} \sum_{n=0}^{\ell-1} (-1)^n \binom{\ell-1}{n} {\left(\frac{p^2}{k^2}\right)}^{n{\epsilon}} \bigg(\frac{p^2}{(p-k)^2}\bigg)^{n{\epsilon}} \\ &\hspace{3.0cm} \times \, \prod_{m=0}^{n-1} \left\{1 - {\hat B_{m}}({\epsilon}) \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}}{{({C_A}-{{\mathbf{T}}_t^2})}}\right\}\,.\end{aligned}$$ This formula simultaneously captures the correct behaviour of $\Omega(p,k)$ in both soft limits $k \to 0$ and $k \to p$. It will be used in section \[sec:softAmpl\] below to compute the soft contributions to the reduced $2\to 2$ amplitude. Before doing that let us have a closer look at the $\epsilon$ expansion of the soft wavefunction we obtained.
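The advertised cancellation of negative powers of ${\epsilon}$ in the symmetrised soft wavefunction can be probed numerically. In the sketch below the colour operators are replaced by commuting placeholder numbers ($c_1 \sim 2C_A-{\mathbf{T}}_t^2$, $c_2 \sim C_A-{\mathbf{T}}_t^2$, with illustrative values) and ${\hat B_{m}}({\epsilon})$ is truncated to its leading $\zeta_3$ and $\zeta_4$ terms, so this is an ${\mathcal{O}}({\epsilon}^0)$ illustration only, not the exact bookkeeping:

```python
import math

Z3 = 1.2020569031595943   # zeta(3)
Z4 = 1.0823232337111382   # zeta(4)

def b_hat(m, eps):
    # Leading terms of the hatted bubble functions quoted in the text.
    return 2*m*(2 + m)*Z3*eps**3 + 3*m*(2 + m)*Z4*eps**4

def omega_soft_sym(eps, ell=4, c1=5.0, c2=2.0, k2=0.3, pk2=0.5, p2=1.0):
    """Symmetrised soft wavefunction at order ell-1 = 3, with colour
    operators replaced by toy commuting numbers c1, c2."""
    xi = (p2/k2)**eps * (p2/pk2)**eps   # symmetrised power of the two scales
    total = 0.0
    for n in range(ell):                # n = 0 .. ell-1
        prod = 1.0
        for m in range(n):              # m = 0 .. n-1
            prod *= 1.0 - b_hat(m, eps)*c1/c2
        total += (-1)**n * math.comb(ell - 1, n) * xi**n * prod
    return c2**(ell - 1) / (2*eps)**(ell - 1) * total
```

Although each term in the sum over $n$ scales like $1/{\epsilon}^3$, the sum tends to a finite value as ${\epsilon}\to 0$, in line with the finiteness of the wavefunction.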
We recall [@Caron-Huot:2017zfo] that all the negative powers of $\epsilon$ in (\[Well-1-ansatz-sym\]) cancel upon performing the sum over $n$, leading to a finite wavefunction at any loop order. While positive powers of $\epsilon$ in (\[Well-1-ansatz-sym\]) do play a role in the computation of the amplitude, the leading ${\mathcal{O}}({\epsilon}^0)$ terms have a special role: according to eq. (\[WhardTwod\]), they are precisely what must be subtracted from the full two-dimensional wavefunction to obtain the hard wavefunction ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}$. With this in mind, let us write down explicitly the leading terms in $\epsilon$ in the first few orders of the soft wavefunction in (\[Well-1-ansatz-sym\]): \[Wsoft2dim\] $$\begin{aligned} {{{\Omega}^{({0})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= 0, \\ {{{\Omega}^{({1})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}}{2} \log {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)}, \\ {{{\Omega}^{({2})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}^2}{4} \log^2 {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)}, \\ {{{\Omega}^{({3})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}^3}{8} \log^3 {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)} + \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}^2}{2} \zeta_3, \\ \nn {{{\Omega}^{({4})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}^4}{16} \log^4 {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)} \\ & +\, {(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}^3 \log {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)} \zeta_3, \\ \nn {{{\Omega}^{({5})}}_{\mathrm{s}}}(p,k) \big|_{{\mathcal{O}}({\epsilon}^0)} &= \frac{{({C_A}-{{\mathbf{T}}_t^2})}^5}{32} \log^5 {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)} \\ &+\, \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}{({C_A}-{{\mathbf{T}}_t^2})}^4}{4} \left[ 5 \log^2
{\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)} \zeta_3 + 6 \zeta_5 \right]\,.\end{aligned}$$ In fact, these terms exponentiate and can be resummed into the following all-order expression using (\[OmegaEven\]) for $\epsilon=0$, yielding $$\label{eq:wffullsoftresummed} {{\Omega}_{\mathrm{s}}}(p,k) \Big|_{{\mathcal{O}}({\epsilon}^0)} = \left( e^{-\gamma_{\rm E}\, x {({C_A}-{{\mathbf{T}}_t^2})}}\, \frac{\Gamma\big(1-\frac{x}{2}{({C_A}-{{\mathbf{T}}_t^2})}\big)}{\Gamma\big(1+\frac{x}{2}{({C_A}-{{\mathbf{T}}_t^2})}\big)} \right)^{\frac{{(2{C_A}-{{\mathbf{T}}_t^2})}}{{({C_A}-{{\mathbf{T}}_t^2})}}} {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)}^{\frac{x}{2} {({C_A}-{{\mathbf{T}}_t^2})}},$$ with $x = L\, {\alpha_s}/\pi$. Soft contributions to the $2\to2$ amplitude\[sec:softAmpl\] ----------------------------------------------------------- Next, let us consider the soft contribution to the reduced $2\to 2$ scattering amplitude $\hat{\cal M}$. It is straightforward to insert [eq. ]{} into [eq. ]{}, perform the last integration and derive the $\ell$-th order contribution to the amplitude. In particular, given the symmetrised form of [eq. ]{}, the last integration can be done with the integration measure $[{\rm D}k]$ in [eq. ]{}, i.e. avoiding the need to introduce a cut-off as in ref. [@Caron-Huot:2017zfo]. After some rearrangement we get $$\begin{aligned} \label{MellReggeSoft-S} {\hat{\mathcal{M}}}_{{\rm NLL,s}}^{(+,\ell)} &= i\pi\, \frac{{B_{0}}^{\ell}({\epsilon})}{\ell!}\, \frac{{({C_A}-{{\mathbf{T}}_t^2})}^{\ell-1}}{(2{\epsilon})^{\ell}} \sum_{n=1}^{\ell} (-1)^{n+1} \binom{\ell}{n}\, \frac{{\bar B_{n-1}}({\epsilon})}{{B_{0}}({\epsilon})} \nn \\ &\quad \times \prod_{m=0}^{n-2} \left\{ 1 - {\hat B_{m}}({\epsilon})\, \frac{{(2{C_A}-{{\mathbf{T}}_t^2})}}{{({C_A}-{{\mathbf{T}}_t^2})}} \right\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}},\end{aligned}$$ where the functions $B_n({\epsilon})$ and ${\hat B_{n}}({\epsilon})$ have been defined respectively in [eqs.  and ]{}, and we have introduced $${\bar B_{n}}({\epsilon}) = e^{{\epsilon}\gamma_{\rm E}}\, \frac{\Gamma^2(1-(n+1){\epsilon})\,\Gamma(1+(2n+1){\epsilon})}{\Gamma^2(1+n{\epsilon})\,\Gamma(1-(2n+2){\epsilon})}.$$ The coefficients ${\hat{\mathcal{M}}}_{{\rm NLL,s}}^{(+,\ell)}$ in (\[MellReggeSoft-S\]) are of course polynomial in the colour factors. For illustration, we expand [eq. 
]{} to the first few orders in perturbation theory, obtaining $$\begin{aligned} \label{eq:MsoftExpanded_1} {\hat{\mathcal{M}}}_{\rm NLL,s}^{(1)} &= i\pi {B_{0}} \bigg\{\frac{1}{2{\epsilon}} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL,s}^{(2)} &= i\pi \frac{{B_{0}}^2}{2} \bigg\{ \frac{{C_2}}{4 {\epsilon}^2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL,s}^{(3)} &= i\pi \frac{{B_{0}}^3}{3!} \bigg\{ {C_2}^2 \left( \frac{1}{8 {\epsilon}^3} - \frac{11\zeta_3}{4}\right) - {C_1}{C_2}\frac{3\zeta_3}{4}\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL,s}^{(4)} &= i\pi \frac{{B_{0}}^4}{4!} \bigg\{ {C_2}^3 \left( \frac{1}{16 {\epsilon}^4} + \frac{\zeta_3}{8{\epsilon}} + \frac{3\zeta_4}{16}\right) + {C_1}{C_2}^2 \left( -\frac{\zeta_3}{8 {\epsilon}} - \frac{3\zeta_4}{16}\right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL,s}^{(5)} &= i\pi \frac{{B_{0}}^5}{5!} \bigg\{ {C_2}^4 \left( \frac{1}{32 {\epsilon}^5} + \frac{\zeta_3}{16{\epsilon}^2} + \frac{3\zeta_4}{32 {\epsilon}} - \frac{717\zeta_5}{16} \right) \\ &\hspace{1.0cm} +\, {C_1}{C_2}^3 \left( -\frac{\zeta_3}{16 {\epsilon}^2} - \frac{3\zeta_4}{32 {\epsilon}} - \frac{27\zeta_5}{16} \right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL,s}^{(6)} &= i\pi \frac{{B_{0}}^6}{6!} \bigg\{ {C_1}^2 {C_2}^3 \bigg( -\frac{39 \zeta_3^2}{16} \bigg) + {C_1}{C_2}^4 \bigg(-\frac{\zeta_3}{32 {\epsilon}^3} - \frac{3\zeta_4}{64 {\epsilon}^2} - \frac{3\zeta_5}{32 {\epsilon}} - \frac{963\zeta_3^2}{32} + \frac{5 \zeta_6}{32} \bigg) \\ &\hspace{1.0cm} + {C_2}^5 \bigg(\frac{1}{64 {\epsilon}^6} + \frac{\zeta_3}{32 {\epsilon}^3} + \frac{3 \zeta_4}{64 {\epsilon}^2} + \frac{3 \zeta_5}{32 {\epsilon}} - \frac{2879 \zeta_3^2}{32} + \frac{5 \zeta_6}{32} \bigg) \bigg\} 
{{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL,s}^{(7)} &= i\pi \frac{{B_{0}}^7}{7!} \bigg\{ {C_1}^2 {C_2}^4 \bigg( \frac{\zeta_3^2}{32 {\epsilon}} + \frac{3\zeta_3 \zeta_4}{32} \bigg) + {C_1}{C_2}^5 \bigg( - \frac{\zeta_3}{64 {\epsilon}^4} - \frac{3\zeta_4}{128 {\epsilon}^3} - \frac{3\zeta_5}{64 {\epsilon}^2}\\ \nn &\hspace{1.0cm} -\, \frac{3\zeta_3^2}{64 {\epsilon}} - \frac{5 \zeta_6}{64 {\epsilon}} - \frac{9 \zeta_3 \zeta_4}{64} - \frac{729 \zeta_7}{64} \bigg) + {C_2}^6 \bigg( \frac{1}{128 {\epsilon}^7} + \frac{\zeta_3}{64 {\epsilon}^4} + \frac{3 \zeta_4}{128 {\epsilon}^3} + \frac{3 \zeta_5}{64 {\epsilon}^2} \\ &\hspace{1.0cm} +\, \frac{\zeta_3^2}{64 {\epsilon}} + \frac{5 \zeta_6}{64 {\epsilon}} + \frac{3 \zeta_3 \zeta_4}{64} - \frac{90711 \zeta_7}{64} \bigg) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{eq:MsoftExpanded_8} {\hat{\mathcal{M}}}_{\rm NLL,s}^{(8)} &= i\pi \frac{{B_{0}}^8}{8!} \bigg\{ {C_1}^2 {C_2}^5 \bigg( \frac{\zeta_3^2}{64 {\epsilon}^2} + \frac{3\zeta_3 \zeta_4}{64 {\epsilon}} - \frac{1341 \zeta_3 \zeta_5}{32} + \frac{21\zeta_8}{512} \bigg) + {C_1}{C_2}^6 \bigg( -\frac{\zeta_3}{128 {\epsilon}^5} \\ \nn &\hspace{1.0cm} -\, \frac{3\zeta_4}{256 {\epsilon}^4} - \frac{3\zeta_5}{128 {\epsilon}^3} - \frac{3 \zeta_3^2}{128 {\epsilon}^2} - \frac{5 \zeta_6}{128 {\epsilon}^2} - \frac{9 \zeta_3 \zeta_4}{128 {\epsilon}} - \frac{9\zeta_7}{128 {\epsilon}} - \frac{96777 \zeta_3 \zeta_5}{64} \\ \nn &\hspace{1.0cm} -\, \frac{189 \zeta_8}{1024} \bigg) + {C_2}^7 \bigg( \frac{1}{256 {\epsilon}^8} + \frac{\zeta_3}{128 {\epsilon}^5} + \frac{3\zeta_4}{256 {\epsilon}^4} + \frac{3\zeta_5}{128 {\epsilon}^3} + \frac{\zeta_3^2}{128 {\epsilon}^2} + \frac{5 \zeta_6}{128 {\epsilon}^2} \\ &\hspace{1.0cm} + \frac{3 \zeta_3 \zeta_4}{128 {\epsilon}} + \frac{9 \zeta_7}{128 {\epsilon}} - \frac{483837 \zeta_3 \zeta_5}{64} + \frac{147 \zeta_8}{1024} \bigg) \bigg\} 
{{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, , \renewcommand{{C_1}}{(2{C_A}-{{\mathbf{T}}_t^2})} \renewcommand{{C_2}}{({C_A}-{{\mathbf{T}}_t^2})}\end{aligned}$$ where we used the shorthand notation for the colour factors, $C_1=2{C_A}-{{\mathbf{T}}_t^2}$ and $C_2={C_A}-{{\mathbf{T}}_t^2}$. We note that the expansion coefficients display uniform transcendental weight (where, as usual, $1/\epsilon$ has weight 1) and involve exclusively single zeta values (sometimes referred to as ordinary zeta values, namely the values of the Riemann zeta function at integer arguments). We further notice that $\zeta_2$ factors (or $\zeta_2$ times other zeta values, e.g. $\zeta_2\zeta_3$ at weight 5, etc.) do not appear in [eqs. –]{} ($\zeta_2$ terms would be present if we were to expand the factor $B_0^{\ell}({\epsilon})$). Higher even zeta numbers do appear, but we will see below that they have a distinct origin as compared to the odd ones. Given that the expansion coefficients ${\hat{\mathcal{M}}}_{{\rm NLL,s}}^{(+,\ell)}$ involve just single zeta values, and are moreover of uniform weight, it is interesting to explore the possibility of summing up the series to all orders. Indeed, such a summation was achieved for the singular terms in Ref. [@Caron-Huot:2017zfo], so let us compare [eq. ]{} above with the result obtained in [@Caron-Huot:2017zfo]. There we proved that the singular terms of the reduced amplitude admit a simplified form $$\label{MellReggeSoft-0} {\hat{\mathcal{M}}}_{{\rm NLL,s_{\rm simpl.}}}^{(+,\ell)} = i\pi\, \frac{{B_{0}}^{\ell}({\epsilon})}{\ell!}\, \frac{{C_2}^{\ell-1}}{(2{\epsilon})^{\ell}}\, \frac{{B_{-1}}({\epsilon})}{{B_{0}}({\epsilon})} \left( 1- {\hat B_{-1}}({\epsilon}) \frac{{C_1}}{{C_2}} \right)^{-1} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\,.$$ The latter, however, differs from the original soft amplitude obtained from [eq. ]{} starting at ${\cal O}({\epsilon}^{0})$ (compare eqs. (3.13) and (3.15) of [@Caron-Huot:2017zfo]). A nice feature of [eq. ]{} is that the loop functions in the amplitude at order $\ell$ do not depend on the index $\ell$, apart from the factor $\left({B_0}({\epsilon})\right)^{\ell}/\ell!$, and this allows one to easily resum [eq.
]{} to all orders in perturbation theory, obtaining an expression for the integrated soft amplitude ${\hat{\mathcal{M}}}_{{\rm NLL,s_{\rm simpl.}}}$: $$\begin{aligned} \label{MellReggeSoft-res} \begin{split} {\hat{\mathcal{M}}}_{{\rm NLL,s_{\rm simpl.}}} = \frac{i\pi}{L {C_2}} \Bigg\{ & \Big( e^{\frac{B_0}{2{\epsilon}} {C_2}x}-1 \Big) \frac{B_{-1}({\epsilon})}{B_0({\epsilon})} \\ &\quad\times \left( 1- {\hat B_{-1}}({\epsilon}) \frac{{C_1}}{{C_2}} \right)^{-1} \bigg\} \, {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}+ {\mathcal{O}}({\epsilon}^0), \end{split}\end{aligned}$$ with $x = L \,{\alpha_s}/\pi $ (see also eq. (3.18) of [@Caron-Huot:2017zfo]). This formula, however, does not correctly capture the non-singular terms obtained with a cut-off, for which no similar simplification was found. We nevertheless show that an all-order resummation formula can be found for the ${\cal O}(\epsilon^0)$ corrections to the amplitude defined in our current symmetric scheme, [eq. ]{}. To this end we consider the coefficients defined as the finite part of the *difference* between those in the soft amplitude, eq. (\[MellReggeSoft-S\]), and in its simplified version, eq. (\[MellReggeSoft-0\]): $$\begin{aligned} \label{deltadef} \begin{split} {\hat{\mathcal{M}}}_{{\rm NLL,s}}^{(+,\ell)} - {\hat{\mathcal{M}}}_{{\rm NLL,s_{\rm simpl.}}}^{(+,\ell)} &\equiv \, i \pi \, \hat\Delta^{(+,\ell)}_{\rm NLL} \, {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}+{\cal O}(\epsilon^1)\\ &\equiv i \pi \, \delta^{(\ell)} \, {C_2}^{\ell-1} \,{{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}+{\cal O}(\epsilon^1)\,. \end{split}\end{aligned}$$ After some rearrangement the coefficients $\hat\Delta^{(+,\ell)}_{\rm NLL}$ can be put into the form \[Delta\_s3\] \^[(+,)]{}\_[NLL]{} &=& [C\_2]{}\^[-1]{} { \_[n=0]{}\^[-1]{} (-1)\^[n+1]{}\ && ( 1- [B\_[n-1]{}]{}() )\^[-1]{} \_[m=0]{}\^[n-2]{}}, where we discarded powers of $B_{0}(\epsilon)$, which do not affect the finite terms. From eq.
(\[Delta\_s3\]) the coefficients $\delta^{(\ell)}$ of (\[deltadef\]) can be determined explicitly in terms of odd $\zeta$ numbers and the ratio of colour factors $r=\frac{{C_1}}{{C_2}}$. They are found to exponentiate in terms of the following rescaled odd $\zeta$ numbers: $$\label{rescaled_zeta} \tilde\zeta_{1+2 n} = \frac{2\, \zeta_{1+2n}}{1+2n} \left(1+ \frac{r-2}{2^{1+2n}} \right),$$ such that the sum: $$\begin{aligned} \label{summing_soft_finite} \begin{split} \sum_{\ell=1}^{\infty} \frac{X^{\ell}}{\ell !}\delta^{(\ell)} =\,&1-\exp\left(\sum_{n=1}^{\infty} X^{2n+1} \tilde\zeta_{2n+1}\right) \\=\,& 1-e^{-\gamma_E r X} \frac{\Gamma\Big(1 - X\Big)}{\Gamma\Big(1 + X\Big)} \frac{\Big[\Gamma\Big(1+\frac{X}{2}\Big)\Big]^{2-r}}{ \Big[\Gamma\Big(1 - \frac{X}{2}\Big)\big]^{2-r}} \end{split}\end{aligned}$$ with $X\equiv {C_2}x$ and $x = L \,{\alpha_s}/\pi $, where we used $$2 \sum_{n=1}^{\infty} \frac{\zeta_{2n+1}}{2n+1}\, x^{2n+1} =-2\, x\, \gamma_E +\log\big(\Gamma(1-x)\big)-\log\big(\Gamma(x+1)\big)\,.$$ We conclude that the series $\hat\Delta^{(+,\ell)}_{\rm NLL}$ exponentiates to $$\begin{gathered} \label{DeltaResum2} \hat\Delta^{(+)}_{\rm NLL} = \frac{1}{L {C_2}} \Bigg[ 1- e^{-\gamma_E {C_1}\, x } \frac{\Gamma\Big(1 - {C_2}x\Big)}{\Gamma\Big(1 + {C_2}x\Big)} \\ \times \, \left(\frac{\Gamma\Big(1+{C_2}\frac{x}{2}\Big)}{\Gamma \Big(1 - {C_2}\frac{x}{2}\Big)}\right)^{-\frac{{\mathbf T}_t^2}{C_A-{\mathbf T}_t^2}} \Bigg].\end{gathered}$$ Using now the fact that the simplified amplitude ${\hat{\mathcal{M}}}_{{\rm NLL,s_{\rm simpl.}}}^{(+,\ell)} $ in [eq. ]{} exponentiates independently, see [eq.
]{}, we obtain $$\begin{gathered} \label{ReducedAmpNLLresum2} {\hat{\mathcal{M}}}_{\rm NLL,s} = \\ i\pi \,\Bigg\{ \frac{e^{\frac{B_0}{2{\epsilon}} {C_2}x}-1}{L {C_2}} \frac{B_{-1}({\epsilon})}{B_0({\epsilon})} \left( 1- {\hat B_{-1}}({\epsilon}) \frac{{C_1}}{{C_2}} \right)^{-1}\,+ \hat\Delta^{(+)}_{\rm NLL} \Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\\ = i\pi \,\Bigg\{ \frac{e^{\frac{B_0}{2{\epsilon}} {C_2}x}-1}{L {C_2}} \left( 1- \frac{C_A}{{C_2}} R({\epsilon})\right)^{-1}\,+ \hat\Delta^{(+)}_{\rm NLL} \Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\, ,\end{gathered}$$ where in the second line we expressed the amplitude in terms of the function $R({\epsilon}) ={{B_0}({\epsilon})}/{{B_{-1}}({\epsilon})} -1 $ of [eq. ]{}. Writing the reduced amplitude as in the second line of [eq. ]{} makes it easier to extract the infrared-renormalized amplitude from the reduced amplitude, as we will see in section \[hardFdef\]. Writing explicitly the factor $\hat\Delta^{(+)}_{\rm NLL} $, the reduced amplitude reads $$\begin{gathered} \label{ReducedAmpNLLresum2B} {\hat{\mathcal{M}}}_{\rm NLL,s} = \frac{i\pi}{L {C_2}} \Bigg\{\left( e^{\frac{B_0}{2{\epsilon}} {C_2}x}-1 \right) \left( 1- \frac{C_A}{{C_2}} R({\epsilon})\right)^{-1}\,+ 1 \\ - e^{-\gamma_E {C_1}\, x} \,\, \frac{\Gamma\Big(1 - {C_2}x\Big)}{\Gamma\Big(1 + {C_2}x\Big)} \left(\frac{\Gamma\Big(1+{C_2}\frac{x}{2}\Big)}{\Gamma \Big(1 - {C_2}\frac{x}{2}\Big)}\right)^{-\frac{{\mathbf T}_t^2}{C_A-{\mathbf T}_t^2}} \Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}.\end{gathered}$$ Of course, upon expansion (\[ReducedAmpNLLresum2B\]) yields back the coefficients of (\[MellReggeSoft-S\]) we listed in (\[eq:MsoftExpanded\_1\]) through (\[eq:MsoftExpanded\_8\]). Having at hand a resummed expression we can gain further insight on number-theoretical features of the expansion coefficients in [eqs. –]{}. 
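The odd-zeta structure above lends itself to a quick numerical cross-check. The sketch below (assuming mpmath; the sample points, the value of the colour ratio $r$, and the tolerances are arbitrary choices) verifies the log-Gamma generating function of the odd zeta values quoted above, and that exponentiating the rescaled odd zeta values $\tilde\zeta_{2n+1}$ reproduces the Gamma-function form entering $\hat\Delta^{(+)}_{\rm NLL}$:

```python
# Numerical cross-check of the odd-zeta series identities used in the text.
import mpmath as mp

mp.mp.dps = 30

# log-Gamma generating function of odd zeta values
x = mp.mpf('0.3')
lhs = 2 * mp.nsum(lambda n: mp.zeta(2*n + 1) / (2*n + 1) * x**(2*n + 1),
                  [1, mp.inf])
rhs = -2 * x * mp.euler + mp.loggamma(1 - x) - mp.loggamma(1 + x)
assert abs(lhs - rhs) < mp.mpf('1e-25')

# exponentiation of the rescaled odd zeta values, with r playing the
# role of C_1/C_2 and X that of C_2 x
r, X = mp.mpf('0.5'), mp.mpf('0.2')
zt = lambda n: 2 * mp.zeta(2*n + 1) / (2*n + 1) * (1 - (2 - r) / 2**(2*n + 1))
series = mp.exp(mp.nsum(lambda n: X**(2*n + 1) * zt(n), [1, mp.inf]))
closed = (mp.exp(-mp.euler * r * X) * mp.gamma(1 - X) / mp.gamma(1 + X)
          * (mp.gamma(1 + X/2) / mp.gamma(1 - X/2))**(2 - r))
assert abs(series - closed) < mp.mpf('1e-25')
```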
We already know based on the derivation above that the $\hat\Delta^{(+)}_{\rm NLL}$ component in (\[ReducedAmpNLLresum2\]) gives rise to odd zeta values only. It then transpires that the sole origin of the even ones is the function $R(\epsilon)$ in the first term. Further number-theoretical features will be discussed in section \[amplitude\], once we have computed the hard contribution to the reduced amplitude. The possibility of resumming the series for the amplitude to all orders, including the finite ${\cal O}(\epsilon^0)$ terms, is highly nontrivial, and it is an additional advantage of the $k \leftrightarrow (p-k)$ symmetric scheme we adopted here for the soft approximation. It will be used below in deriving a resummed expression for the contribution of the soft region to the infrared-renormalized amplitude. From the reduced amplitude to the infrared-renormalized amplitude {#hardFdef} ----------------------------------------------------------------- Now that we have determined the soft wavefunction and the corresponding reduced amplitude, we are in a position to consider again the infrared-renormalized amplitude, as defined in [eq. ]{}. Following [eqs.  and ]{} we split the infrared-renormalized amplitude into a soft and a hard component: $$\label{Hsplit} {\mathcal{H}}^{(+)}_{\rm NLL} = {\mathcal{H}}^{(+)}_{\rm NLL,s} +{\mathcal{H}}^{(+)}_{\rm NLL,h}\,.$$ Then, from [eq. ]{} it follows that \[getH2both\] $$\begin{aligned} \label{getH2-s} {\mathcal{H}}^{(+)}_{\rm NLL,s} &= - e^{-\frac{x}{2{\epsilon}} {\mathbf{T}}_t^2} {\bf Z}^{(-)}_{\rm NLL} \, e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}} \, x \, C_A} {\mathcal{M}}^{\rm (tree)} +e^{\frac{B_0({\epsilon})-1}{2{\epsilon}} \, x \, {\mathbf{T}}_t^2} {\cal \hat M}^{(+)}_{\rm NLL,s}, \\ \label{getH2-h} {\mathcal{H}}^{(+)}_{\rm NLL,h} &= {\cal \hat M}^{(+)}_{\rm NLL,h}\,,\end{aligned}$$ where in (\[getH2-h\]) we neglected positive powers of $\epsilon$ originating in the expansion of $(1-B_0(\epsilon))$, using the fact that ${\cal \hat M}^{(+)}_{\rm NLL,h}$ is itself finite.
Of course such a simplification cannot be applied to (\[getH2-s\]) where there is an interplay between positive powers of $\epsilon$ and negative ones. In section \[sec:softAmpl\] we have determined the reduced soft amplitude, thus we are in a position to explicitly write down the soft part of the infrared-renormalized amplitude at $\epsilon\to 0$, according to [eq. ]{}. Inserting eqs. (\[Znll\]) and (\[ReducedAmpNLLresum2\]) into [eq. ]{} we get $$\begin{gathered} \label{getH3} {\mathcal{H}}^{(+)}_{\rm NLL,s} = i\pi \Bigg\{ - e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}} \, x \, C_A} \bigg[\frac{ e^{\frac{x}{2{\epsilon}} ({C_{A}}-{{\mathbf{T}}_t^2})}}{L({C_{A}}-{{\mathbf{T}}_t^2})} \left( 1 - \frac{C_A}{C_A -{{\mathbf{T}}_t^2}} R({\epsilon}) \right)^{-1} \bigg]_{\rm poles} \\ +e^{\frac{B_0({\epsilon})-1}{2{\epsilon}} \, x \, {\mathbf{T}}_t^2} \Bigg[ \frac{e^{\frac{B_0}{2{\epsilon}} x {C_2}}-1}{L {C_2}} \left( 1- \frac{C_A}{{C_2}} R({\epsilon}) \right)^{-1}\,+ \hat\Delta^{(+)}_{\rm NLL} \bigg]\Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\,,\end{gathered}$$ where we recall that $x = L \, {\alpha_s}/\pi$, and in the first line, corresponding to ${\bf Z}^{(-)}_{\rm NLL}$, we have dropped the $-1$ term in the numerator inside the square brackets, which does not generate any poles (see discussion following [eq. ]{}). 
This expression can be rearranged as follows: first of all, by collecting a factor $e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}} \, x \, C_A}$ we get $$\begin{gathered} \label{getH4first} {\mathcal{H}}^{(+)}_{\rm NLL,s} = i\pi e^{\frac{{B_{0}}({\epsilon})-1}{2{\epsilon}} \, x \, C_A} \Bigg\{-\bigg[\frac{ e^{\frac{x}{2{\epsilon}} ({C_{A}}-{{\mathbf{T}}_t^2})}}{L({C_{A}}-{{\mathbf{T}}_t^2})} \left( 1 - \frac{C_A}{C_A -{{\mathbf{T}}_t^2}} R({\epsilon}) \right)^{-1} \bigg]_{\rm poles} \\ + \Bigg[ \frac{e^{\frac{x}{2{\epsilon}} {C_2}} -e^{\frac{1-B_0({\epsilon})}{2{\epsilon}} \, x \, {C_2}}}{L {C_2}} \left( 1- \frac{C_A}{{C_2}} R({\epsilon}) \right)^{-1} \\ + e^{\frac{1-B_0({\epsilon})}{2{\epsilon}} \, x \, {C_2}} \hat\Delta^{(+)}_{\rm NLL} \bigg]\Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\,.\end{gathered}$$ We see at this point that the second line nicely cancels the poles from the first line. Furthermore, given that $1-B_0({\epsilon}) ={\cal O}({\epsilon}^2)$, see eq. (\[B0\]), and both $\big[ 1 - C_A/(C_A -{{\mathbf{T}}_t^2}) \, R({\epsilon}) \big]^{-1} = 1 + {\mathcal{O}}({\epsilon}^3)$ and $\hat\Delta^{(+)}_{\rm NLL} = {\mathcal{O}}({\epsilon}^0)$, it is safe to set to one all exponentials containing the factor $1-B_0({\epsilon})$. We thus obtain $$\begin{gathered} \label{getH4} {\mathcal{H}}^{(+)}_{\rm NLL,s} = i\pi \Bigg\{ \bigg[\frac{ e^{\frac{x}{2{\epsilon}} ({C_{A}}-{{\mathbf{T}}_t^2})}-1}{L({C_{A}}-{{\mathbf{T}}_t^2})} \left( 1 - \frac{C_A}{C_A -{{\mathbf{T}}_t^2}} R({\epsilon}) \right)^{-1} \bigg]_{{\epsilon}^0} + \hat\Delta^{(+)}_{\rm NLL} \Bigg\} {{\mathbf{T}}_{s-u}^2}\, {{\mathcal{M}}^{\mathrm{(tree)}}}\,,\end{gathered}$$ with $\hat\Delta^{(+)}_{\rm NLL}$ given in [eq. ]{}. For later reference we also expand [eq.
]{} to the first few orders in perturbation theory, obtaining (recall $C_2=C_A-{{\mathbf{T}}_t^2}$): $$\begin{aligned} \label{eq:HsoftExpanded_1} {\cal H}_{\rm NLL,s}^{(1)} &= 0, \\ {\cal H}_{\rm NLL,s}^{(2)} &= 0, \\ {\cal H}_{\rm NLL,s}^{(3)} &= \frac{i\pi}{3!} \bigg\{ - C_A {C_2}\frac{3\zeta_3}{4} -{C_2}^2 \frac{7\zeta_3}{2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL,s}^{(4)} &= \frac{i\pi}{4!} \bigg\{ - C_A {C_2}^2 \frac{3\zeta_4}{16} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL,s}^{(5)} &= \frac{i\pi}{5!} \bigg\{- C_A {C_2}^3 \frac{27\zeta_5}{16} -{C_2}^4 \frac{93\zeta_5}{2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL,s}^{(6)} &= \frac{i\pi}{6!} \bigg\{ - C_A^2 {C_2}^3 \frac{39 \zeta_3^2}{16} -C_A {C_2}^4 \bigg( \frac{1119\zeta_3^2}{32} + \frac{5\zeta_6}{32} \bigg) - {C_2}^5 \frac{245 \zeta_3^2}{2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL,s}^{(7)} &= \frac{i\pi}{7!} \bigg\{ C_A^2 {C_2}^4 \frac{3\zeta_3 \zeta_4}{32} + C_A {C_2}^5 \bigg( \frac{3\zeta_3 \zeta_4}{64} - \frac{729 \zeta_7}{64} \bigg) - {C_2}^6 \frac{5715 \zeta_7}{4} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{eq:HsoftExpanded_8} {\cal H}_{\rm NLL,s}^{(8)} &= \frac{i\pi}{8!} \bigg\{ C_A^2 {C_2}^5 \bigg( - \frac{1341 \zeta_3 \zeta_5}{32} + \frac{21\zeta_8}{512} \bigg) + C_A {C_2}^6 \bigg( - \frac{102141 \zeta_3 \zeta_5}{64} - \frac{105\zeta_8}{1024} \bigg) \\ &\hspace{5.0cm} \,- {C_2}^7 \, 9114 \zeta_3 \zeta_5\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, . \renewcommand{{C_1}}{(2{C_A}-{{\mathbf{T}}_t^2})} \renewcommand{{C_2}}{({C_A}-{{\mathbf{T}}_t^2})}\end{aligned}$$ It is interesting to note that $\zeta_n$ values with even $n$ originate solely from the expansion of the factor $R({\epsilon})$ in [eq.
]{}, while the expansion of the factor $\hat\Delta^{(+)}_{\rm NLL}$ generates only $\zeta_n$ values with odd $n$. The latter property of $\hat\Delta^{(+)}_{\rm NLL}$ makes this function compatible with the class of zeta values we will encounter when considering the two-dimensional amplitude in section \[amplitude\]. In summary, according to (\[Hsplit\]) the infrared-renormalized amplitude is given as a sum of two terms: $\mathcal{H}_{\rm s}$, computed in this section using the soft approximation, plus $\mathcal{H}_{\rm h}$, which is identical to the hard part of the reduced amplitude (see [eq. ]{}). The latter is infrared finite and originates in the hard wavefunction, which can be computed directly in two transverse dimensions. Let us now turn to evaluating it. BFKL evolution in two transverse dimensions {#2d-bfkl} =========================================== As discussed in the introduction and in section \[chap:bfkl\], much of the complication of solving the evolution stems from the $d$-dimensionality of the Hamiltonian. Recalling that the two-reggeon wavefunction is finite at any loop order and that singularities are exclusively created by integration near the soft limit, it should be clear that no regularisation is required if we (a) only care about finite terms, and (b) remove any soft kinematics from the last integration. The latter condition is fulfilled by construction, having defined the split between the hard and soft wavefunctions (\[OmegaSplitDef\]) subject to the condition (\[h\_definition\]): the vanishing of ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)$ in the soft limits guarantees that the corresponding amplitude $\hat {\cal M}^{(+)}_{\rm NLL,h}\left(\frac{s}{-t}\right)$ of eq. (\[Mhard\]) is finite. Our task in this section is therefore to compute ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)$. We do so by iteratively applying the Hamiltonian of eqs. (\[Hdef1\]) and (\[Hamil\]), according to eq. (\[Hdef0\]).
We keep the kinematics general, but in contrast to section \[chap:bfkl\], we work strictly in two transverse dimensions. To exploit the advantage of two-dimensional kinematics let us view the Euclidean momentum vectors $k$, $k'$ and $p$ as complex numbers $$k = k_x + i k_y, \qquad k' = k_x' + i k_y', \qquad p = p_x + i p_y,$$ where the real and imaginary parts are the components of the corresponding momenta, and introduce new variables $z,w \in \mathbb{C}$ according to $$\label{eq:zwdef} z = \frac{k}{k-p}\,, \qquad\quad w = \frac{k'}{k'-p}\,.$$ Since the wavefunction is a function of Lorentz scalars (i.e. squares of momenta) it will be symmetric under the exchange $z \leftrightarrow {\bar{z}}$ with ${\bar{z}}$ the complex conjugate of $z$. In particular, ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)$ depends on the two ratios $$\label{k2zzb} \frac{k^2}{p^2}= \frac{z{\bar{z}}}{(1-z)(1-{\bar{z}})}\,,\quad\qquad \frac{(p-k)^2}{p^2}= \frac{1}{(1-z)(1-{\bar{z}})}\,.$$ These relations also clarify that the symmetry under interchanging the two Reggeons, eq. (\[left-right-symmetry\]), corresponds to $z\to1/z$, and specifically, the two soft limits where one or the other Reggeon is soft correspond respectively to $z\to 0$ and $z\to \infty$. The limit $z\to 1$ instead represents maximally hard kinematics, where both $k^2$ and $(p-k)^2$ are much larger than $p^2$. In the new variables the kernel reads $$\label{eq:f2d} p^2\, f(p,k,k') \;\to\; (1-w)^2 (1-{\bar{w}})^2\, K(w,{\bar{w}},z,{\bar{z}})\,,$$ where $$\label{eq:K2d} K(w,{\bar{w}},z,{\bar{z}}) = \left(\frac{1}{w-z}-\frac{1}{w}\right) \frac{1}{{\bar{w}}-{\bar{z}}} + \frac{1}{w-z} \left(\frac{1}{{\bar{w}}-{\bar{z}}}-\frac{1}{{\bar{w}}}\right) = \frac{2}{(w-z)({\bar{w}}-{\bar{z}})} - \frac{1}{w\,({\bar{w}}-{\bar{z}})} - \frac{1}{(w-z)\,{\bar{w}}}\,.$$ Furthermore, in the limit ${\epsilon}\to 0$, $J(p,k)$ of eq.  becomes $$\label{eq:j2d} J(p,k) \;\to\; j(z,{\bar{z}}) \,\equiv\, \frac12\, \log\frac{z{\bar{z}}}{(1-z)^2(1-{\bar{z}})^2}\,,$$ and the measure reads $$\label{eq:dw2d} \frac{{\mathrm{d}}^{2} k'}{p^2} \;\to\; \frac{{\mathrm{d}}^2 w}{(1-w)^2(1-{\bar{w}})^2}\,.$$ Here, ${\mathrm{d}}^2 w \equiv {\mathrm{d}}{{\rm Re}}(w)\,{\mathrm{d}}{{\rm Im}}(w)$ where the real and imaginary part of $w$ are to be integrated from $-\infty$ to $+\infty$, in accordance with eq. . In applying BFKL evolution we employ the same notation as in the $d$-dimensional case but add the subscript “2d” to avoid confusion.
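The stated properties of the change of variables can be checked numerically. In the sketch below the explicit map $z=k/(k-p)$ (and likewise $w=k'/(k'-p)$) is taken as an assumption consistent with the ratios and limits quoted above; the sample point is arbitrary:

```python
# Sanity check of the complex-variable parametrisation: with z = k/(k-p) the
# ratios k^2/p^2 and (p-k)^2/p^2 take the rational form quoted in the text,
# and the Reggeon interchange k <-> p-k maps z -> 1/z.
k = complex(0.7, -1.3)   # arbitrary non-soft sample point
p = complex(2.1, 0.4)

z = k / (k - p)
zb = z.conjugate()

# k^2 = k kbar etc., viewing Euclidean two-vectors as complex numbers
assert abs(abs(k)**2 / abs(p)**2
           - (z * zb / ((1 - z) * (1 - zb))).real) < 1e-12
assert abs(abs(p - k)**2 / abs(p)**2
           - (1 / ((1 - z) * (1 - zb))).real) < 1e-12

# interchanging the two Reggeons, k <-> p-k, inverts z
z_swapped = (p - k) / ((p - k) - p)
assert abs(z_swapped - 1 / z) < 1e-12
```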
In particular, from here on we express the two-dimensional hard wavefunction as ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(p,k)= {{\Omega}_{\mathrm{2d}}}(z,{\bar{z}})$. We expand it as in eq. (\[OmegaEven\]), where we take $B_0(0)= 1$, i.e. $$\label{OmegaEven_hard} {{\Omega}_{\mathrm{2d}}}(z,{\bar{z}}) = \sum_{\ell=0}^{\infty} \left( \frac{{\alpha_s}}{\pi}\, L \right)^{\ell+1} {{{\Omega}^{({\ell})}}_{\mathrm{2d}}}(z,{\bar{z}})\,,$$ where the coefficients of increasing orders are related by the action of the Hamiltonian according to eq. (\[Hdef0\]), which now reads: $$\label{eq:Htdaction} {{{\Omega}^{({\ell})}}_{\mathrm{2d}}}(z,{\bar{z}}) = {\hat{H}}_{\mathrm{2d}}\, {{{\Omega}^{({\ell-1})}}_{\mathrm{2d}}}(z,{\bar{z}})\,,$$ where $${\hat{H}}_{\mathrm{2d}}\,\Psi(z,{\bar{z}}) = {C_1}\,{{\hat{H}}_{\mathrm{2d,i}}}\,\Psi(z,{\bar{z}}) + {C_2}\,{{\hat{H}}_{\mathrm{2d,m}}}\,\Psi(z,{\bar{z}})\,.$$ Plugging in the above expressions we find the two parts of the Hamiltonian to be $$\begin{aligned} \label{eq:hi2d} {{\hat{H}}_{\mathrm{2d,i}}}\Psi(z,{\bar{z}}) &= \frac{1}{4\pi} \int {\mathrm{d}}^2 w K(w,{\bar{w}},z,{\bar{z}}) \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right], \\ \label{eq:hm2d} {{\hat{H}}_{\mathrm{2d,m}}}\Psi(z,{\bar{z}}) &= j(z,{\bar{z}}) \Psi(z,{\bar{z}}),\end{aligned}$$ where ${{{\Omega}^{({0})}}_{\mathrm{2d}}}(z,{\bar{z}}) = {{\Omega}^{({0})}}(p,k) = 1$. In the next section we proceed to solve for the wavefunction ${{\Omega}_{\mathrm{2d}}}$ by iterating the two-dimensional Hamiltonian. The two-dimensional wavefunction {#sec:wf2d} -------------------------------- It is useful to settle on a language before diving into the iteration of the two-dimensional wavefunction. To this end we introduce the class of iterated integrals dubbed *single-valued harmonic polylogarithms* (SVHPLs), which were first described by Brown in ref. [@Brown:2004ugm]. Since then, several applications of SVHPLs in computing scattering amplitudes have been found, in particular in the context of the high-energy limit, e.g. [@Pennington:2012zj; @Dixon:2012yy; @DelDuca:2013lma; @Dixon:2014voa; @DelDuca:2016lad; @DelDuca:2018hrv], and in the context of infrared singularities in general kinematics [@Almelid:2017qju; @Dixon:2019lnw].
Here we will show that these functions also form a suitable basis for expressing the two-dimensional wavefunction ${{{\Omega}^{({\ell})}}_{\mathrm{2d}}}(z,{\bar{z}})$ defined above. As the name suggests, single-valued harmonic polylogarithms are single-valued functions which can be written as linear combinations of products of harmonic polylogarithms (HPLs) of $z$ with HPLs of ${\bar{z}}$. We shall denote them by ${\mathcal{L}}_\sigma(z,{\bar{z}})$, where $\sigma$ is a sequence of *letters*, typically zeros and ones.[^3] The *letters* are said to form an *alphabet*, $\{0,1\}$, and $\sigma$ is, by analogy, referred to as a *word*. The length of a word is often called the (transcendental) *weight* of the SVHPL. SVHPLs are the natural choice for the two-dimensional evolution, since $j(z,{\bar{z}})$ of eq.  belongs to this class, $$j(z,{\bar{z}}) = \frac12\, {\mathcal{L}}_{0}(z,{\bar{z}}) + {\mathcal{L}}_{1}(z,{\bar{z}})\,,$$ and the action of the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ preserves single-valuedness when acting on a single-valued function. This can be expected on general grounds: any complex pair $z,{\bar{z}}$ identifies a point in the Euclidean transverse momentum plane. Physically there cannot be branch cuts in the Euclidean region; this, by definition, guarantees single-valued results. Indeed, single-valuedness may be confirmed at every step of the iteration. Determining the wavefunction is greatly simplified by working directly with SVHPLs; we briefly summarise their main properties, which will be used below, in Appendix \[app:svhpls\]. As noted upon introducing the variables $z$ and $\bar{z}$ in (\[eq:zwdef\]), the two-dimensional wavefunction is symmetric under $z \leftrightarrow {\bar{z}}$. In addition, as mentioned following (\[k2zzb\]), owing to the symmetry upon interchanging the two Reggeons in [eq. ]{}, the wavefunction is invariant under simultaneously swapping $z \leftrightarrow 1/z$ and ${\bar{z}}\leftrightarrow 1/{\bar{z}}$. Both these symmetries are easily verified by looking at eqs.  
and , where, for the latter symmetry, one changes the integration variables $w \to 1/w$, ${\bar{w}}\to 1/{\bar{w}}$. We will use these properties to simplify the iteration of the wavefunction as well as its results in section \[sec:asalphabet\]. The evolution of the wavefunction in strictly two transverse dimensions according to (\[eq:Htdaction\]) has the following basic characteristics. Firstly, iterating ${{\hat{H}}_{\mathrm{2d,m}}}$ amounts to multiplying by $j(z,{\bar{z}})$ and therefore evaluating shuffle products of SVHPLs. Secondly, each application of ${{\hat{H}}_{\mathrm{2d,i}}}$ adds one layer of integration such that ${{{\Omega}^{({\ell-1})}}_{\mathrm{2d}}}$ can be written as a linear combination of SVHPLs of weight $\ell-1$. A method to calculate the convolution in eq. (\[eq:hi2d\]) in terms of residues was described in chapter 6 of Ref. [@DelDuca:2018hrv]. Here we develop an alternative method: we translate the action of the Hamiltonian into a set of differential equations, which we then solve in terms of SVHPLs. Suppose we wish to compute the action of a linear operator $\hat O$, which may involve integration, on a function $\Psi(z,{\bar{z}})$. Assume now that we find a differential operator $\Delta$, which is linear in logarithmic derivatives with respect to $z$ and ${\bar{z}}$, with the following properties: \[eq:deltacrit\] $$\begin{aligned} \label{eq:deltacriti} i.\ &\Delta \text{ commutes with } \hat{O} \\ \label{eq:deltacritii} ii.\ &\Delta \Psi \text{ is a pure function with a weight that is lower than } \Psi~\text{by one unit}.\end{aligned}$$ Then, $$\label{eq:deltadiffeq} \Delta\left[ \hat{O}\, \Psi(z,{\bar{z}}) \right] = \hat{O}\left[ \Delta\, \Psi(z,{\bar{z}}) \right],$$ and we can compute $\hat{O} \left[ \Psi(z,{\bar{z}}) \right]$ by integrating the differential equation , assuming that the r.h.s. is known explicitly. If it is not, the procedure can be applied recursively, i.e. $$\Delta^2\left[ \hat{O}\, \Psi(z,{\bar{z}}) \right] = \hat{O}\left[ \Delta^2\, \Psi(z,{\bar{z}}) \right],$$ until the r.h.s. is simple enough to be calculated. After each integration a constant has to be fixed, e.g. by matching to known boundary conditions.
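As a toy illustration of this strategy (not one of the actual kernel integrals of the text), suppose the sought function is known only through its image under $\Delta_1 = z\,{\mathrm{d}}/{\mathrm{d}}z$, here chosen to be $-\log(1-z)$; a single one-dimensional integration, with the boundary condition fixed at $z=0$, then recovers the weight-two function $\mathrm{Li}_2(z)$. A minimal sketch using mpmath:

```python
# Recover a function from its image under Delta_1 = z d/dz by one
# one-dimensional integration plus a boundary condition.
import mpmath as mp

mp.mp.dps = 20

# known lower-weight r.h.s.: Delta Psi = -log(1-z), of weight one
rhs = lambda t: -mp.log(1 - t)

# integrate z f'(z) = rhs(z) with f(0) = 0, i.e. f(z) = int_0^z rhs(t)/t dt
f = lambda z: mp.quad(lambda t: rhs(t) / t, [0, z])

# the recovered function is the weight-two polylogarithm Li2(z)
z0 = mp.mpf('0.4')
assert abs(f(z0) - mp.polylog(2, z0)) < mp.mpf('1e-15')
```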
Importantly, because $\Delta$ is assumed to be linear in derivatives with respect to $z$ and ${\bar{z}}$, solving the differential equation amounts to computing a one-dimensional integral. This may be contrasted with the original integral in (\[eq:hi2d\]) which is two-dimensional. Given the r.h.s., solving this differential equation is straightforward, and the result remains in the class of HPLs (see eq. ). The same applies for the class of SVHPLs: to solve the differential equation within this class, we first integrate its holomorphic part according to eq. , and subsequently recover the full result, depending on both $z$ and ${\bar{z}}$, by applying the single-valued map $\mathbf{s}$ defined in eq. . Having outlined the general approach let us see how it is implemented in practice to solve for the wavefunction in (\[eq:hi2d\]). Let us start by taking the $\hat{O}$ in eq. (\[eq:deltadiffeq\]) to coincide with the two-dimensional Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ (we will see below that the final procedure involves considering parts of the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ in turn). The most natural candidate for the operator $\Delta$ in eq. (\[eq:deltadiffeq\]) is $\Delta_1=z {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$, since condition (\[eq:deltacriti\]) is satisfied, as we now show. For generic values of $w$ and $z$ one finds using eq.  $$\label{eq:zddzKsym} z {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\, K(w,{\bar{w}},z,{\bar{z}}) = -{{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}\left[ w\, K(w,{\bar{w}},z,{\bar{z}}) \right] .$$ This implies that $z {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ commutes with the Hamiltonian, $$\begin{aligned} z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}}}\Psi(z,{\bar{z}}) \right] &= \frac{1}{4\pi} \int {\mathrm{d}}^2 w \left\{ \left( -{{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}w K(w,{\bar{w}},z,{\bar{z}}) \right) \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right] \right. \nn \\ &\hspace{45mm} \left. 
- K(w,{\bar{w}},z,{\bar{z}}) \left( z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right) \right\} \nn \\ &= \frac{1}{4\pi} \int {\mathrm{d}}^2 w K(w,{\bar{w}},z,{\bar{z}}) \left[ w{{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}\Psi(w,{\bar{w}}) - z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] \nn \\ \label{eq:zddzhisym} &= {{\hat{H}}_{\mathrm{2d,i}}}\left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] \qquad \text{(for generic $w,z$)}.\end{aligned}$$ fulfilling condition (\[eq:deltacriti\]). However, some extra caution is needed here: the complex-conjugate pairs $w,{\bar{w}}$ and $z,{\bar{z}}$ cannot be treated as independent variables everywhere. Derivatives w.r.t. those variables receive additional contributions from the non-holomorphic or singular points of the function they act on. These “anomalies” are captured by the two-dimensional Poisson equation $$\label{eq:2dpoisson} \partial_w \partial_{{\bar{w}}} \log(w {\bar{w}}) = \pi\, \delta^2(w)\,,$$ namely, by contributions of the form $$\label{eq:cterms} \partial_{{\bar{w}}}\, \frac{1}{w-c} = \pi\, \delta^2(w-c)\,,$$ with $c$ a complex number. The two-dimensional $\delta$ function in the above equations fixes both the real and the imaginary part of its argument such that $$\int {\mathrm{d}}^2 w\; \delta^2(w-c)\, f(w,{\bar{w}}) = f(c,\bar{c})$$ for some function $f$, *cf.* the remark below eq. . For easy bookkeeping let us split a derivative into its *regular part* (“reg”), which is correct in the holomorphic regime, and its *contact terms* (“con”), governed by eq. . Eq.  therefore correctly reads $$\begin{aligned} z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) &\!= \!\left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{reg}}\!\! + \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\nn \\ &\!=\! -\left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}w K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{reg}}\!\! 
+ \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\nn \\ &\!=\! -{{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}\left[ w K(w,{\bar{w}},z,{\bar{z}}) \right] + \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}w K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\!\! + \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\end{aligned}$$ which modifies eq.  to give $$\begin{gathered} \label{eq:zddzhisym2} z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}}}\Psi(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}}}\left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] \\ + \frac{1}{4\pi} \int {\mathrm{d}}^2 w \left\{ \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}w K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}+ \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\right\} \\ \times \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right].\end{gathered}$$ We shall continue to refer to the behaviour in eq.  as the commutativity of $z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ and ${{\hat{H}}_{\mathrm{2d,i}}}$ even though we implicitly mean commutativity *modulo contact terms*. Note that the presence of the contact terms does not conflict with the strategy outlined above; each contact term contains a (two-dimensional) $\delta$-function which makes the integral on the r.h.s. of eq.  easy to evaluate. We will derive the explicit form of the contact terms towards the end of this section, at which point eq. (\[eq:zddzhisym2\]) will become directly usable for determining the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ on the wavefunction $\Psi$. Before doing that, however, we turn our attention to condition (\[eq:deltacritii\]). Concretely, in eq. (\[eq:zddzhisym2\]) the requirement is that $z\frac{d}{dz} \Psi$ should be a pure function of weight one less than $\Psi$ itself. 
We find that the operator $z\frac{d}{dz}$, upon acting on any SVHPL of the form ${\mathcal{L}}_{0,\sigma}(z,{\bar{z}})$, does indeed yield such a pure function, so eq. (\[eq:zddzhisym2\]) becomes: $$\label{eq:zddzL0} z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}}}\,{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}}}\left[ {\mathcal{L}}_\sigma(z,{\bar{z}}) \right] + {\text{(contact terms)}}\,,$$ where we have used eq. . On the other hand, $z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ does not have the same effect when acting on an ${\mathcal{L}}_{1,\sigma}(z,{\bar{z}})$, where one obtains instead $$z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}}}\,{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}}}\left[ \frac{z}{1-z}\, {\mathcal{L}}_\sigma(z,{\bar{z}}) \right] + {\text{(contact terms)}}\,,$$ which does not fulfil the condition , since $\frac{z}{1-z}\,{\mathcal{L}}_\sigma(z,{\bar{z}})$ is not a pure function. One may be tempted to use $(1-z){{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ instead but, unfortunately, this operator does not commute with ${{\hat{H}}_{\mathrm{2d,i}}}$. The solution is to first split the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}= {{\hat{H}}_{\mathrm{2d,i}_1}}+ {{\hat{H}}_{\mathrm{2d,i}_2}}$ with $${{\hat{H}}_{\mathrm{2d,i}_n}}\,\Psi(z,{\bar{z}}) = \frac{1}{4\pi} \int {\mathrm{d}}^2 w\; K_n(w,{\bar{w}},z,{\bar{z}}) \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right]$$ and $$\begin{aligned} K_1(w,{\bar{w}},z,{\bar{z}}) &= \left(\frac{1}{w-z}-\frac{1}{w}\right) \frac{1}{{\bar{w}}-{\bar{z}}} \\ K_2(w,{\bar{w}},z,{\bar{z}}) &= \frac{1}{w-z} \left(\frac{1}{{\bar{w}}-{\bar{z}}}-\frac{1}{{\bar{w}}}\right)\end{aligned}$$ where $K_1(w,{\bar{w}},z,{\bar{z}}) + K_2(w,{\bar{w}},z,{\bar{z}}) = K(w,{\bar{w}},z,{\bar{z}})$, *cf.* eq. . This split is useful because it opens the possibility of identifying different differential operators $\Delta_i$ that commute with the separate components of the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}_1}}$ and ${{\hat{H}}_{\mathrm{2d,i}_2}}$, *and* yield a pure function when acting directly on ${\mathcal{L}}_{0,\sigma}(z,{\bar{z}})$ or on ${\mathcal{L}}_{1,\sigma}(z,{\bar{z}})$, thus simultaneously fulfilling both conditions in . 
Regarding the commutation relations, condition , it is straightforward to verify that the following four relations hold, up to contact terms: \[eq:Hicommut\] $$\begin{aligned} \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}, {{\hat{H}}_{\mathrm{2d,i}_1}}\right] &= {\text{(contact terms)}}, & \left[ z(1-z){{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}, {{\hat{H}}_{\mathrm{2d,i}_1}}\right] &= {\text{(contact terms)}}\label{eq:Hi1ops}, \\ \left[ z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}, {{\hat{H}}_{\mathrm{2d,i}_2}}\right] &= {\text{(contact terms)}}, & \left[ (1-z){{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}, {{\hat{H}}_{\mathrm{2d,i}_2}}\right] &= {\text{(contact terms)}}. \label{eq:Hi2ops} \end{aligned}$$ Let us therefore define the following three differential operators: $$\label{Deltai} \Delta_i = f_i(z)\, {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\,, \qquad f_i(z) = \left\{ \begin{array}{ll} z & i=1 \\ 1-z & i=2 \\ z(1-z) & i=3 \end{array} \right. ,$$ and show that we can arrange the wavefunction, which is a linear combination of ${\mathcal{L}}_{0,\sigma}(z,{\bar{z}})$ and ${\mathcal{L}}_{1,\sigma}(z,{\bar{z}})$, such that condition (\[eq:deltacritii\]) would also be fulfilled. To this end, let us first note that upon acting on ${\mathcal{L}}_{0,\sigma}(z,{\bar{z}})$ with either of the two parts of the Hamiltonian we have (using ): $$\label{eq:hi1diffeq2} z{{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_n}}{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}_n}}\left[ {\mathcal{L}}_\sigma(z,{\bar{z}}) \right] + {\text{(contact terms)}}\,,$$ just as in (\[eq:zddzL0\]). Thus, the remaining challenge is to handle terms containing ${\mathcal{L}}_{1,\sigma}(z,{\bar{z}})$; this is where the additional flexibility of splitting the Hamiltonian pays off. Let us consider first the simplest case of ${{\hat{H}}_{\mathrm{2d,i}_2}}$ where we obtain $$\label{eq:hi2diffeq} (1-z){{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_2}}\,{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}_2}}\left[ {\mathcal{L}}_\sigma(z,{\bar{z}}) \right] + {\text{(contact terms)}}\,.$$ Now ${{\hat{H}}_{\mathrm{2d,i}_2}}\Psi$ can be readily integrated for any $\Psi$ using (\[eq:hi1diffeq2\]) and (\[eq:hi2diffeq\]). 
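At the lowest nontrivial weight the weight-lowering property of the operators $\Delta_i$ can be checked explicitly. The sketch below (assuming mpmath; the sample point and tolerances are arbitrary) uses the holomorphic parts $H_1(z)=-\log(1-z)$, $H_{0,1}(z)=\mathrm{Li}_2(z)$ and $H_{1,1}(z)=\tfrac12\log^2(1-z)$, whose single-valued completions obey the same differential relations, and confirms that each $\Delta_i$ maps the appropriate function (or combination) to the pure weight-one function $H_1$:

```python
# Weight-lowering action of Delta_1, Delta_2 and Delta_3 on low-weight HPLs.
import mpmath as mp

mp.mp.dps = 25
z0 = mp.mpf('0.37')   # arbitrary sample point in (0,1)

H1  = lambda z: -mp.log(1 - z)          # weight 1
H01 = lambda z: mp.polylog(2, z)        # weight 2, word (0,1)
H11 = lambda z: mp.log(1 - z)**2 / 2    # weight 2, word (1,1)

d = lambda f: mp.diff(f, z0)            # numerical d/dz at z0

# Delta_1 = z d/dz acting on an L_{0,sigma}
assert abs(z0 * d(H01) - H1(z0)) < mp.mpf('1e-15')
# Delta_2 = (1-z) d/dz acting on an L_{1,sigma}
assert abs((1 - z0) * d(H11) - H1(z0)) < mp.mpf('1e-15')
# Delta_3 = z(1-z) d/dz acting on the combination L_{0,sigma} + L_{1,sigma}
assert abs(z0 * (1 - z0) * (d(H01) + d(H11)) - H1(z0)) < mp.mpf('1e-15')
```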
Turning to consider ${{\hat{H}}_{\mathrm{2d,i}_1}}$, let us write $${\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) = \left( {\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) + {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) \right) - {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) \label{eq:L1split}$$ and use the linearity of the Hamiltonian to act with it on $({\mathcal{L}}_{1,\sigma} + {\mathcal{L}}_{0,\sigma})$ and $(-{\mathcal{L}}_{0,\sigma})$ separately. We may now apply respectively the differential operators $\Delta_3$ and $\Delta_1$ of (\[Deltai\]) to these terms. With eq.  and one can easily verify that they produce the desired pure functions of lower weight in accordance with : $$\begin{aligned} \label{eq:hi1diffeq1} z(1-z){{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_1}}\left( {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + {\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) \right) \right] &= {{\hat{H}}_{\mathrm{2d,i}_1}}\left[ {\mathcal{L}}_\sigma(z,{\bar{z}}) \right] + {\text{(contact terms)}}\end{aligned}$$ Using (\[eq:hi1diffeq1\]) along with (\[eq:hi1diffeq2\]) we see that also ${{\hat{H}}_{\mathrm{2d,i}_1}}\Psi$ can be integrated for any $\Psi$. Thus, by splitting the Hamiltonian and the wavefunction in a convenient way, we were able to identify linear differential operators that fulfil both requirements in (\[eq:deltacrit\]). In order to complete the process of setting up the differential equations let us now return to derive the explicit form of the contact terms. First, let us write eq. 
for general $\Delta_i = f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ and the two parts of the split Hamiltonian, $$\begin{gathered} \label{eq:fddzhinsym} f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_n}}\Psi(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}_n}}\left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] \\ + \frac{1}{4\pi} \int {\mathrm{d}}^2 w \left\{ \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}f_i(w) K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}+ \left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\right\} \\ \times \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right]\end{gathered}$$ where, according to eqs. (\[eq:hi1diffeq2\]), (\[eq:hi2diffeq\]) and (\[eq:hi1diffeq1\]), the relevant combinations of $i$ and $n$ are $$\begin{aligned} n = 1 \quad &\longrightarrow \quad i = 1 \text{ or } 3 \\ n = 2 \quad &\longrightarrow \quad i = 1 \text{ or } 2\,.\end{aligned}$$ In computing the contact terms in (\[eq:fddzhinsym\]) we note that the $f_i(z)$ are functions of $z$ only whilst being independent of the complex conjugate ${\bar{z}}$. According to eq.  this implies that $$\label{eq:pulloutf} \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}f_i(w) K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}} = f_i(w) \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}$$
for $n=1,2$, and thus (\[eq:fddzhinsym\]) becomes: $$\begin{gathered} \label{eq:fddzhinsym_simpl} f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_n}}\Psi(z,{\bar{z}}) \right] = {{\hat{H}}_{\mathrm{2d,i}_n}}\left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] \\ + \frac{1}{4\pi} \int {\mathrm{d}}^2 w \left\{ \left[f_i(w) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}+ \left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K_n(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}\right\} \\ \times \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right]\,.\end{gathered}$$ Consequently, we only have to consider the following four derivatives, $$\begin{aligned} \label{eq:ddwk1ct} \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}K_1(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}&= \pi \left[ \delta^2(w-z) - \delta^2(w-\infty) \right] \frac{z}{w(w-z)} \\ \label{eq:ddzk1ct} \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K_1(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}&= -\pi \delta^2(z-w) \frac{z}{w(w-z)} \\[3mm] \label{eq:ddwk2ct} \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{w}}}}K_2(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}&= \pi \left[ \delta^2(w-z) - \delta^2(w) \right] \frac{1}{w-z} \\ \label{eq:ddzk2ct} \left[ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}K_2(w,{\bar{w}},z,{\bar{z}}) \right]_{\mathrm{con}}&= -\pi \delta^2(z-w) \frac{1}{w-z},\end{aligned}$$ where in eqs.  and we have dropped terms proportional to $\delta^2(z)$, restricting our calculation to $z \neq 0$ (we emphasise that $z$ is an external variable so this can be consistently done). Due to the sum of contact terms inside the curly brackets in eq.  the terms proportional to $\delta^2(w-z) = \delta^2(z-w)$ in eqs. – cancel identically, so the remaining contact-term contributions are only at $w=\infty$ for $K_1$ and at $w=0$ for $K_2$. 
Using the corresponding $\delta$ functions to turn the integrals over $w$ in (\[eq:fddzhinsym\_simpl\]) into evaluation of limits at infinity and at zero respectively we finally obtain: $$\begin{aligned} \label{eq:fddzhinsym_H1} f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_1}}\Psi(z,{\bar{z}}) \right] &= {{\hat{H}}_{\mathrm{2d,i}_1}}\left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] -\frac{1}{4} \lim_{w\to\infty} \frac{z f_i(w)}{w(w-z)} \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right]\,, \\ \label{eq:fddzhinsym_H2} f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\left[ {{\hat{H}}_{\mathrm{2d,i}_2}}\Psi(z,{\bar{z}}) \right] &= {{\hat{H}}_{\mathrm{2d,i}_2}}\left[ f_i(z) {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}\Psi(z,{\bar{z}}) \right] - \frac{1}{4} \lim_{w\to 0} \frac{f_i(w) }{w-z} \left[ \Psi(w,{\bar{w}}) - \Psi(z,{\bar{z}}) \right]\,.\end{aligned}$$ These equations will be used in the next section to determine the wavefunction. Differential equations and an iterative solution for the wavefunction {#sec:diffeq} --------------------------------------------------------------------- Finding the differential equations is now simply a matter of compiling together the results of the previous section. Starting with the easiest case, $\Delta_1 {{\hat{H}}_{\mathrm{2d,i}_n}}{\mathcal{L}}_{0,\sigma}$, we notice that with $f_1(w) = w$ both the $w\to \infty$ limit in eq. (\[eq:fddzhinsym\_H1\]) and the $w\to 0$ limit in eq. (\[eq:fddzhinsym\_H2\]) vanish, and thus there are no contributions from contact terms in either of these cases. Dividing by $f_1(z) = z$ we arrive at $${{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}_n}}{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) = \frac{{{\hat{H}}_{\mathrm{2d,i}_n}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{z}\,. \label{eq:diffeq0}$$ Next consider the case $\Delta_2 {{\hat{H}}_{\mathrm{2d,i}_2}}{\mathcal{L}}_{1,\sigma}$, corresponding to eq. . Here $f_2(w) = 1-w$ and eq. (\[eq:fddzhinsym\_H2\]) yields $$\label{eq:diffeq1b} {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}_2}}{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) = \frac{{{\hat{H}}_{\mathrm{2d,i}_2}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{1-z} - \frac14\, \frac{{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) - \left[ {\mathcal{L}}_{1,\sigma}(w,{\bar{w}}) \right]_{w,{\bar{w}}\rightarrow 0}}{z(1-z)}\,,$$
where we have divided by $f_2(z) = 1-z$ and used the shorthand $[\ldots]_{w,{\bar{w}}\rightarrow 0}$ to denote the $w,{\bar{w}}\rightarrow 0$ limit of the functions inside the square brackets. This term can, in fact, be dropped as it always contains a single SVHPL whose indices feature (at least) one “1” and, thus, is equal to zero in the limit. The last case, $\Delta_i {{\hat{H}}_{\mathrm{2d,i}_1}}{\mathcal{L}}_{1,\sigma}$, is governed by eqs.  and , using the wavefunction split of eq. (\[eq:L1split\]). Considering in turn the action of eq. (\[eq:fddzhinsym\_H1\]) on $({\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) +{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}))$ with $f_3(w) = w(1-w)$ and on $(- {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}))$ with $f_1(w) = w$, we derive two separate equations, which we then combine using the linearity of the operators ${{\hat{H}}_{\mathrm{2d,i}_1}}$ and ${{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}$ to obtain $$\begin{gathered} \label{eq:diffeq1a} {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}_1}}{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) = \frac{{{\hat{H}}_{\mathrm{2d,i}_1}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{1-z} \\ - \frac14 \frac{{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + {\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) - [{\mathcal{L}}_{0,\sigma}(w,{\bar{w}}) + {\mathcal{L}}_{1,\sigma}(w,{\bar{w}})]_{w,{\bar{w}}\rightarrow \infty}}{1-z} \end{gathered}$$ with $[\ldots]_{w,{\bar{w}}\rightarrow \infty}$ the $w,{\bar{w}}\rightarrow \infty$ limit of the functions inside the square brackets. Taking this limit requires some careful analytic continuation of the relevant SVHPLs to ensure that $w$ and ${\bar{w}}$ stay complex-conjugate as they approach infinity.
Because the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ and its components ${{\hat{H}}_{\mathrm{2d,i}_n}}$ are linear operators one can sum up the above equations – and recombine ${{\hat{H}}_{\mathrm{2d,i}_1}}+ {{\hat{H}}_{\mathrm{2d,i}_2}}\to {{\hat{H}}_{\mathrm{2d,i}}}$ obtaining more compact expressions: \[diffeq\] $$\begin{aligned} {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) &= \frac{{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{z}\,, \label{eq:diffeq0new} \\ {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) &= \frac{{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{1-z} - \frac14 \frac{{\mathcal{L}}_{1,\sigma}(z,{\bar{z}})}{z} \nn \\ &\hspace{-5mm} - \frac14 \frac{{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + 2{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) - [{\mathcal{L}}_{0,\sigma}(w,{\bar{w}}) + {\mathcal{L}}_{1,\sigma}(w,{\bar{w}})]_{w,{\bar{w}}\rightarrow \infty}}{1-z}\,. \label{eq:diffeq1new}\end{aligned}$$ These differential equations compactly represent the action of the Hamiltonian $\hat{H}_{{\rm 2d},{\rm i}}$ according to eq. (\[eq:hi2d\]). By solving them we are able to effectively bypass the computation of the two-dimensional integrals in the latter equation. Since the differential equations only fix the $z$ dependence of the (wave)function — which is a function of both $z$ and ${\bar{z}}$ — a small detour is necessary to recover the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ on SVHPLs: we take the holomorphic part of a given SVHPL, integrate it w.r.t. $z$ according to the differential equations in (\[diffeq\]), and then reconstruct the functional dependence on ${\bar{z}}$ by requiring that the result be single-valued. This ultimately amounts to simply replacing $$\int_0^z \frac{{\mathrm{d}}t}{t}\, {\mathcal{L}}_{\sigma}(t) \;\longrightarrow\; {\mathcal{L}}_{0,\sigma}(z,{\bar{z}})\,, \qquad \int_0^z \frac{{\mathrm{d}}t}{1-t}\, {\mathcal{L}}_{\sigma}(t) \;\longrightarrow\; {\mathcal{L}}_{1,\sigma}(z,{\bar{z}})\,,$$ where ${\mathcal{L}}_{\sigma}(t)$ denotes the holomorphic part of the corresponding SVHPL. For more details on this procedure see appendix \[app:holomorphicpart\].
After each integration we need to fix an integration constant. We find that this is conveniently done by matching with the soft limit. Specifically, it is convenient to consider the soft limit where $k^2/p^2=z{\bar{z}}$ tends to zero. For small $z,{\bar{z}}$, only SVHPLs with all-zero indices can give non-zero contributions; these correspond to powers of logarithms: $${\mathcal{L}}_{\vec{0}_n}(z,{\bar{z}}) = \frac{\log^n(z{\bar{z}})}{n!} \qquad \text{with} \qquad \vec{0}_n = \underbrace{0,\dots,0}_{n\ \text{zeros}} \,.$$ In eq.  we calculated the action of the small-$k$ (or soft) Hamiltonian ${{\hat{H}}_{\mathrm{s}}}$ on powers of $\xi = (k^2/p^2)^{-{\epsilon}}$. The action of ${{\hat{H}}_{\mathrm{i}}}$ in the soft limit can be isolated by looking at the coefficient of $2{C_A}- {{\mathbf{T}}_t^2}$ and is thus $${{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} \left( \frac{k^2}{p^2} \right)^{-m{\epsilon}} = \frac{\hat B_m({\epsilon})}{2{\epsilon}} \left( \frac{k^2}{p^2} \right)^{-(m+1){\epsilon}} \label{eq:Hik2p2}$$ where $\hat B_m({\epsilon})$ is given in eq. (\[bubblehat\]). Expanding both sides in ${\epsilon}$ and matching powers of $\delta=m\epsilon$ in the limit ${\epsilon}\rightarrow 0$ lets us extract the action of ${{\hat{H}}_{\mathrm{i}}}$ in the soft limit on any given power of $\log (k^2/p^2) = \log (z{\bar{z}})$.
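As a small consistency check (a sketch of ours, not from the text), the closed form ${\mathcal{L}}_{\vec{0}_n} = \log^n(z{\bar{z}})/n!$ indeed satisfies the holomorphic recursion $\frac{{\mathrm{d}}}{{\mathrm{d}}z}{\mathcal{L}}_{\vec{0}_n} = {\mathcal{L}}_{\vec{0}_{n-1}}/z$ of the differential equations above:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
n = 5  # check the first few weights

# L_{0...0} with k zeros: log^k(z zbar)/k!
L = [sp.log(z*zb)**k / sp.factorial(k) for k in range(n + 1)]

# d/dz L_{0,sigma} = L_sigma / z, applied to the all-zero words
for k in range(1, n + 1):
    assert sp.simplify(sp.diff(L[k], z) - L[k - 1]/z) == 0
```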
For reference, we find $$\begin{aligned} {{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} {\mathcal{L}}_0(z,{\bar{z}}) &= \mathcal{O}({\epsilon}) \label{HiL0} \\ {{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} {\mathcal{L}}_{0,0}(z,{\bar{z}}) &= \zeta_3 + \mathcal{O}({\epsilon}) \\ {{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} {\mathcal{L}}_{0,0,0}(z,{\bar{z}}) &= \zeta_3 {\mathcal{L}}_0(z,{\bar{z}}) + \mathcal{O}({\epsilon}) \\ {{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} {\mathcal{L}}_{0,0,0,0}(z,{\bar{z}}) &= \zeta_3 {\mathcal{L}}_{0,0}(z,{\bar{z}}) + \zeta_5 + \mathcal{O}({\epsilon}) \\ {{\hat{H}}_{\mathrm{i}}}|_{\text{soft}} {\mathcal{L}}_{0,0,0,0,0}(z,{\bar{z}}) &= \zeta_3 {\mathcal{L}}_{0,0,0}(z,{\bar{z}}) + \zeta_5 {\mathcal{L}}_0(z,{\bar{z}}) + \mathcal{O}({\epsilon}) \label{HiL00000} \end{aligned}$$ etc., from which we observe that the integration constants exhibit a very simple pattern. Specifically, they only contribute single (ordinary) zeta numbers because they are generated upon expanding ${\hat{B}_{m}}({\epsilon})$, which is a product of gamma functions. We can now calculate the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ on any SVHPL by iteratively solving the differential equations  and , starting from the lowest-weight functions, ${\mathcal{L}}_0$ and ${\mathcal{L}}_1$. Effectively, we have set up an algorithm for calculating the two-dimensional wavefunction to any loop order. Due to the finiteness of the wavefunction it is straightforward to verify the results numerically: we integrate eq.  numerically and compare to the analytical result for a number of randomly generated pairs $z,{\bar{z}}$.
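The statement that products of gamma functions only generate single zeta numbers can be illustrated numerically. The ratio below is merely a stand-in (the actual $\hat B_m(\epsilon)$ is defined elsewhere in the paper): its logarithm expands as $\sum_{k\ge2}(-1)^k(2-2^k)\,\zeta_k\,\epsilon^k/k$, i.e. with only ordinary zeta values as coefficients:

```python
import mpmath as mp

mp.mp.dps = 40

# Illustrative gamma-function ratio (an assumption for demonstration only):
#   log[Gamma(1+e)^2 / Gamma(1+2e)] = sum_{k>=2} (-1)^k (2 - 2^k) zeta(k) e^k / k
f = lambda e: mp.log(mp.gamma(1 + e)**2 / mp.gamma(1 + 2*e))

coeffs = mp.taylor(f, 0, 4)  # Taylor coefficients around e = 0
for k in range(2, 5):
    expected = (-1)**k * (2 - 2**k) * mp.zeta(k) / k
    assert abs(coeffs[k] - expected) < mp.mpf('1e-12')
```

In particular the cubic coefficient is $2\zeta_3$, mirroring how odd zeta values enter the soft-limit integration constants.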
Specifically, with $w = w_1 + i w_2$ and $z = z_1 + i z_2$ the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ can be written $$\begin{gathered} {{\hat{H}}_{\mathrm{2d,i}}}\Psi(z,{\bar{z}}) = \frac{1}{2\pi} \int_{-\infty}^\infty {\mathrm{d}}w_1 \int_{-\infty}^\infty {\mathrm{d}}w_2 \frac{w_1 z_1 + w_2 z_2}{(w_1^2 + w_2^2)((w_1 - z_1)^2 + (w_2 - z_2)^2)} \\ \times \left[ \Psi(w_1 + i w_2,w_1 - i w_2) - \Psi(z_1 + i z_2,z_1 - i z_2) \right],\end{gathered}$$ where $\Psi(z,{\bar{z}})$ is a (linear combination of) SVHPL(s). This type of integral is readily evaluated numerically in e.g. `Mathematica`. For the wavefunction up to weight four we find $$\begin{aligned} {{{\Omega}^{({1})}}_{\mathrm{2d}}} &= \frac{1}{2} C_2 \left({\mathcal{L}}_0+2 {\mathcal{L}}_1\right) \label{eq:wtd1} \\ {{{\Omega}^{({2})}}_{\mathrm{2d}}} &= \frac{1}{2} C_2^2 \left({\mathcal{L}}_{0,0}+2 {\mathcal{L}}_{0,1} +2 {\mathcal{L}}_{1,0}+4 {\mathcal{L}}_{1,1}\right)+\frac{1}{4} C_1 C_2 \left(-{\mathcal{L}}_{0,1} -{\mathcal{L}}_{1,0}-2 {\mathcal{L}}_{1,1}\right) \\ {{{\Omega}^{({3})}}_{\mathrm{2d}}} &= \frac{3}{4} C_2^3 \left({\mathcal{L}}_{0,0,0}+2 {\mathcal{L}}_{0,0,1} +2 {\mathcal{L}}_{0,1,0}+4 {\mathcal{L}}_{0,1,1}+2 {\mathcal{L}}_{1,0,0} +4 {\mathcal{L}}_{1,0,1}+4 {\mathcal{L}}_{1,1,0} +8 {\mathcal{L}}_{1,1,1}\right) \nn \\ &\hspace{4mm}+ \frac{1}{4} C_1 C_2^2 \left(2 \zeta_3 -2 {\mathcal{L}}_{0,0,1} -3 {\mathcal{L}}_{0,1,0}-7 {\mathcal{L}}_{0,1,1}-2 {\mathcal{L}}_{1,0,0} -7 {\mathcal{L}}_{1,0,1}-7 {\mathcal{L}}_{1,1,0} -14 {\mathcal{L}}_{1,1,1}\right) \nn \\&\hspace{4mm} +\frac{1}{16} C_1^2 C_2 \left({\mathcal{L}}_{0,0,1} +2 {\mathcal{L}}_{0,1,0}+4 {\mathcal{L}}_{0,1,1} +{\mathcal{L}}_{1,0,0}+4 {\mathcal{L}}_{1,0,1} +4 {\mathcal{L}}_{1,1,0}+8 {\mathcal{L}}_{1,1,1}\right)\\ {{{\Omega}^{({4})}}_{\mathrm{2d}}} &= \frac{3}{2} C_2^4 \left({\mathcal{L}}_{0,0,0,0} +2 {\mathcal{L}}_{0,0,0,1}+2 {\mathcal{L}}_{0,0,1,0}+4 {\mathcal{L}}_{0,0,1,1} +2 {\mathcal{L}}_{0,1,0,0}+4 {\mathcal{L}}_{0,1,0,1}\right.
\nn \\ &\hspace{4mm} \left. +4 {\mathcal{L}}_{0,1,1,0}+8 {\mathcal{L}}_{0,1,1,1} +2 {\mathcal{L}}_{1,0,0,0}+4 {\mathcal{L}}_{1,0,0,1}+4 {\mathcal{L}}_{1,0,1,0} +8 {\mathcal{L}}_{1,0,1,1}\right. \nn \\ &\hspace{4mm} \left. +4 {\mathcal{L}}_{1,1,0,0}+8 {\mathcal{L}}_{1,1,0,1} +8 {\mathcal{L}}_{1,1,1,0}+16 {\mathcal{L}}_{1,1,1,1}\right) \nn \\ &\hspace{4mm} +\frac{1}{8} C_1 C_2^3 \left(-9 {\mathcal{L}}_{0,0,0,1} -14 {\mathcal{L}}_{0,0,1,0}-34 {\mathcal{L}}_{0,0,1,1}-14 {\mathcal{L}}_{0,1,0,0} -42 {\mathcal{L}}_{0,1,0,1}\right. \nn \\ &\hspace{4mm} \left. -44 {\mathcal{L}}_{0,1,1,0}-92 {\mathcal{L}}_{0,1,1,1} -9 {\mathcal{L}}_{1,0,0,0}-34 {\mathcal{L}}_{1,0,0,1}-42 {\mathcal{L}}_{1,0,1,0} -92 {\mathcal{L}}_{1,0,1,1}\right. \nn \\ &\hspace{4mm} \left. -34 {\mathcal{L}}_{1,1,0,0}-92 {\mathcal{L}}_{1,1,0,1} -92 {\mathcal{L}}_{1,1,1,0}-184 {\mathcal{L}}_{1,1,1,1}+8 {\mathcal{L}}_0 \zeta_3 +28 {\mathcal{L}}_1 \zeta_3\right) \nn \\ &\hspace{4mm} +\frac{1}{32} C_1^2 C_2^2 \left(7 {\mathcal{L}}_{0,0,0,1} +15 {\mathcal{L}}_{0,0,1,0}+34 {\mathcal{L}}_{0,0,1,1}+15 {\mathcal{L}}_{0,1,0,0} +56 {\mathcal{L}}_{0,1,0,1}\right. \nn \\ &\hspace{4mm} \left. +56 {\mathcal{L}}_{0,1,1,0}+116 {\mathcal{L}}_{0,1,1,1} +7 {\mathcal{L}}_{1,0,0,0}+40 {\mathcal{L}}_{1,0,0,1}+56 {\mathcal{L}}_{1,0,1,0}\right. \nn \\ &\hspace{4mm} \left. +116 {\mathcal{L}}_{1,0,1,1}+34 {\mathcal{L}}_{1,1,0,0} +116 {\mathcal{L}}_{1,1,0,1}+116 {\mathcal{L}}_{1,1,1,0}+232 {\mathcal{L}}_{1,1,1,1} -44 {\mathcal{L}}_1 \zeta_3\right) \nn \\ &\hspace{4mm} +\frac{1}{64} C_1^3 C_2 \left(-{\mathcal{L}}_{0,0,0,1} -3 {\mathcal{L}}_{0,0,1,0}-6 {\mathcal{L}}_{0,0,1,1}-3 {\mathcal{L}}_{0,1,0,0}\right. \nn \\ &\hspace{4mm} \left. -12 {\mathcal{L}}_{0,1,0,1}-12 {\mathcal{L}}_{0,1,1,0} -24 {\mathcal{L}}_{0,1,1,1}-{\mathcal{L}}_{1,0,0,0}-8 {\mathcal{L}}_{1,0,0,1} -12 {\mathcal{L}}_{1,0,1,0}\right. \nn \\ &\hspace{4mm} \left. 
-24 {\mathcal{L}}_{1,0,1,1}-6 {\mathcal{L}}_{1,1,0,0} -24 {\mathcal{L}}_{1,1,0,1}-24 {\mathcal{L}}_{1,1,1,0}-48 {\mathcal{L}}_{1,1,1,1} +12 {\mathcal{L}}_1 \zeta_3\right) \label{eq:wtd4}\end{aligned}$$ where we introduced the notation $C_1 = 2{C_A}- {{\mathbf{T}}_t^2}$, $C_2 = {C_A}- {{\mathbf{T}}_t^2}$ and wrote ${{{\Omega}^{({\ell})}}_{\mathrm{2d}}} \equiv {{{\Omega}^{({\ell})}}_{\mathrm{2d}}}(z,{\bar{z}})$ and ${\mathcal{L}}_\sigma \equiv {\mathcal{L}}_\sigma(z,{\bar{z}})$ for brevity. Further results up to weight 14 can be found in the ancillary file `2Reggeon-wavefunction-L01-Basis.txt`. Interestingly, a new type of transcendental number appears for the first time in the twelve-loop wavefunction — a so-called multiple zeta value (MZV). While it is no surprise that MZVs do not appear at lower loop orders as we explain in the following two paragraphs, the fact that they *do* appear starting at twelve loops is a non-trivial statement with number-theoretical implications. MZVs are the values of harmonic polylogarithms (HPLs) evaluated at special points, typically their branch points $z=1$ or $z \to \infty$, for example[^4] $H_{0,0,0,0,1,0,0,1}(1) = H_{5,3}(1) = \zeta_{5,3}$. It turns out that SVHPLs only cover a subset of all MZVs when evaluated at $z = {\bar{z}}= 1$ or $z,{\bar{z}}\to \infty$ and we refer to this subset as single-valued multiple zeta values. They are discussed in detail in refs. [@Schnetz:2013hqa; @Brown:2013gia] where the authors show that, up to weight ten, the algebra of single-valued MZVs is generated by ordinary (odd) zeta numbers $\zeta_n$. At weight eleven, however, a new type of number appears, alongside the expected $\zeta_{11}$. We shall call it[^5] $g_{5,3,3}$ and it is defined by $$g_{5,3,3} = -\frac{4}{7}\, \zeta_2^3 \zeta_5 + \frac{6}{5}\, \zeta_2^2 \zeta_7 + 45\, \zeta_2 \zeta_9 + \zeta_{5,3,3}\,, \label{eq:g533}$$ where $\zeta_{5,3,3} = H_{5,3,3}(1)$. There are two sources that contribute (multiple) zeta values to the wavefunction: the integration constants fixed by the soft limit and the $w,{\bar{w}}\to \infty$ limit in eq. .
The former are generated by expanding gamma functions, *cf.* eq.  with eq. , and can therefore contribute only *single* (ordinary) zeta numbers. The value of the large-$w,{\bar{w}}$ limit instead does generally involve (single-valued) *multiple* zeta values. We note that it is guaranteed to multiply the weight-one ${\mathcal{L}}_1(z,{\bar{z}})$ which is generated by the denominator, $1-z$, upon integrating the differential equation . Being the sole source of (single-valued) MZVs, we conclude that such zeta values of weight $w$ can only occur starting at the next loop order, i.e. $\ell=w+1$. Specifically, this explains why $g_{5,3,3}$, which is weight 11, cannot appear at loop orders $\ell < 12$. Indeed, we find that $g_{5,3,3}$ is accompanied by ${\mathcal{L}}_1$ in the twelve-loop wavefunction: $$\begin{gathered} {{{\Omega}^{({12})}}_{\mathrm{2d}}}(z,{\bar{z}}) \supset \frac{1}{80} \left( \frac{88653 C_2^2 C_1^{10}}{2048} -\frac{1021171 C_2^3 C_1^9}{4096} -\frac{3517129 C_2^4 C_1^8}{1024} \right. \\ +\frac{43378313 C_2^5 C_1^7}{1024} -\frac{5951395 C_2^6 C_1^6}{32} +\frac{1583033 C_2^7 C_1^5}{4} \\ \left. -\frac{6320709 C_2^8 C_1^4}{16} + 135513 C_2^9 C_1^3 \right) \times g_{5,3,3} \, {\mathcal{L}}_1(z,{\bar{z}})\,.\end{gathered}$$ According to ref. [@Brown:2013gia] (*cf.* eq. (7.4) there) two more such numbers have to be introduced at weight 13 and, using the same logic, we anticipate that they make an appearance in the 14-loop wavefunction. Indeed, defining $$\label{eq:g553} g_{5,5,3} = 10\, \zeta_2^2 \zeta_9 + \zeta_2 \zeta_{11} + 5\, \zeta_5 \zeta_{5,3} + \zeta_{5,5,3}$$ and $$\label{eq:g733} g_{7,3,3} = - \zeta_2^3 \zeta_7 + \zeta_2^2 \zeta_9 + \zeta_2 \zeta_{11} + 6\, \zeta_5 \zeta_{5,3} + \zeta_{7,3,3}$$ we observe that the 14-loop wavefunction contains the term $$\begin{gathered} {{{\Omega}^{({14})}}_{\mathrm{2d}}}(z,{\bar{z}}) \supset \frac{1}{2240}\left( - \frac{132291047 C_2^2 C_1^{12}}{20480} + \frac{7701138629 C_2^3 C_1^{11}}{183500800} \right.
\\ - \frac{21177619993 C_2^4 C_1^{10}}{81920} - \frac{141869475599 C_2^5 C_1^9}{40960} + \frac{144180124197 C_2^6 C_1^8}{4096} \\ - \frac{1550199662073 C_2^7 C_1^7}{10240} + \frac{941115705999 C_2^8 C_1^6}{2560} - \frac{41630406511 C_2^9 C_1^5}{80} \\ \left. + \frac{15828500247 C_2^{10} C_1^4}{40} - 120229353 C_2^{11} C_1^3 \right) \times g_{5,5,3} \, {\mathcal{L}}_1(z,{\bar{z}})\end{gathered}$$ as well as $$\begin{gathered} {{{\Omega}^{({14})}}_{\mathrm{2d}}}(z,{\bar{z}}) \supset \frac{1}{896} \left( \frac{557319 C_2^2 C_1^{12}}{256} - \frac{296956417 C_2^3 C_1^{11}}{16384} \right. \\ - \frac{3811324785 C_2^4 C_1^{10}}{16384} + \frac{36358896425 C_2^5 C_1^9}{8192} - \frac{125984665967 C_2^6 C_1^8}{4096} \\ + \frac{241764230539 C_2^7 C_1^7}{2048} - \frac{139303244409 C_2^8 C_1^6}{512} + \frac{11897473261 C_2^9 C_1^5}{32} \\ \left. - \frac{2180551359 C_2^{10} C_1^4}{8} + 79134813 C_2^{11} C_1^3 \right) \times g_{7,3,3} \, {\mathcal{L}}_1(z,{\bar{z}}).\end{gathered}$$ The observed term $g_{5,3,3}\, {\mathcal{L}}_1(z,{\bar{z}})$ at twelve loops immediately rules out the possibility of finding a closed-form expression for the two-dimensional wavefunction in terms of gamma functions as was done in the soft limit. The non-zero coefficients of $g_{5,5,3}\, {\mathcal{L}}_1(z,{\bar{z}})$ and $g_{7,3,3}\, {\mathcal{L}}_1(z,{\bar{z}})$ at 14 loops may be seen as a hint that indeed *all* single-valued MZVs appear in the two-dimensional wavefunction — when and as soon as the weight, i.e. the loop order, allows for it. We will, in fact, encounter a contribution proportional to $g_{5,3,3}$ in the amplitude at eleven loops. We will thus return to discuss single-valued MZVs when interpreting our results for the amplitude in section \[sec:finiteamp\]. Before we press ahead and compute the amplitude it is worthwhile exploring the aforementioned symmetries of the wavefunction in some more detail and we do so in the next subsection.
This will ultimately lead to a better understanding of the iteration in two dimensions and enable us to calculate it to even higher loop orders. Alphabets and symmetries {#sec:asalphabet} ------------------------ Throughout this paper we have tried to exploit the symmetries of the evolution to aid calculations and simplify expressions. In this section we explore to what extent symmetries can guide us in the two-dimensional limit. As mentioned in section \[sec:wf2d\], in two dimensions, the wavefunction is invariant under two transformations: complex conjugation and inversion of the arguments. The latter, i.e. the fact that ${{\Omega}_{\mathrm{2d}}}(z,{\bar{z}}) = {{\Omega}_{\mathrm{2d}}}(1/z,1/{\bar{z}})$, corresponds to eq. (\[left-right-symmetry\]), i.e. to swapping the two reggeons, and was used, for example, to identify the two soft limits in section \[soft\]. In the present context, it inspired us to introduce a new alphabet for SVHPLs, as we now explain. Instead of 0 and 1, corresponding to integration over ${\mathrm{d}}\log z$ and ${\mathrm{d}}\log (1-z)$, respectively, we shall use $a$ and $s$. They are associated with integration over ${\mathrm{d}}\log z$ and ${\mathrm{d}}\log z/(1-z)^2$ and thus behave antisymmetrically and symmetrically, respectively, under $z \to 1/z$. In particular $$\label{ElDef} {\mathcal{L}}_s(z,{\bar{z}})=\log \frac{z{\bar{z}}}{(1-z)^2(1-{\bar{z}})^2} \qquad\Longrightarrow\qquad {\mathcal{L}}_s(1/z,1/{\bar{z}})={\mathcal{L}}_s(z,{\bar{z}})\,.$$ The leading-order wavefunction simplifies to ${{{\Omega}^{({1})}}_{\mathrm{2d}}} = \frac{1}{2} C_2 {\mathcal{L}}_s(z,{\bar{z}})$, and at higher orders, the $z \to 1/z$ symmetry implies that the antisymmetric letter $a$ would only ever appear an even number of times. Let us now consider the evolution directly in terms of this alphabet. Using the letters $a$ and $s$ simplifies $j(z,{\bar{z}}) = {\mathcal{L}}_s(z,{\bar{z}})/2$ of eq.  and hence the action of ${{\hat{H}}_{\mathrm{2d,m}}}$ in eq. 
, which now amounts to shuffling an $s$ into the indices of the function it acts on (and multiplying by a $\frac12$), for example $${{\hat{H}}_{\mathrm{2d,m}}}{\mathcal{L}}_{a,s,a,s}(z,{\bar{z}}) = \frac12\, {\mathcal{L}}_{s,a,s,a,s}(z,{\bar{z}}) + {\mathcal{L}}_{a,s,s,a,s}(z,{\bar{z}}) + {\mathcal{L}}_{a,s,a,s,s}(z,{\bar{z}})\,.$$ The action of ${{\hat{H}}_{\mathrm{2d,i}}}$ has a much richer and more complicated structure. However, we notice that at symbol level, i.e. keeping only the highest-weight SVHPLs, it simply amounts to replacing $s \to ss - aa$ and multiplying by $-\frac14$, for example, $$\begin{gathered} \label{eq:hisymbollevel} {{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{a,s,a,s}(z,{\bar{z}}) = -\frac14 \Big( {\mathcal{L}}_{a,s,s,a,s}(z,{\bar{z}}) - {\mathcal{L}}_{a,a,a,a,s}(z,{\bar{z}}) + {\mathcal{L}}_{a,s,a,s,s}(z,{\bar{z}}) - {\mathcal{L}}_{a,s,a,a,a}(z,{\bar{z}}) \Big) \\ + \Sigma_{\text{sub}}\,,\end{gathered}$$ where $\Sigma_{\text{sub}}$ contains products of subleading-weight SVHPLs and zeta numbers, i.e. terms like ${\mathcal{L}}_\sigma \zeta_{n_1} \cdots \zeta_{n_m}$ with $|\sigma| + n_1 + \dots + n_m = 5$ and $|\sigma| < 5$ in the above example. This replacement rule can be derived from the differential equations  and , as we now explain. To this end, let us consider the two cases ${{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{a,\sigma}$ and ${{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{s,\sigma}$ in turn. Considering the former, due to the equivalence of the letters 0 and $a$, eq.  immediately gives the action on ${\mathcal{L}}_{a,\sigma}$ $$\label{eq:diffeqaleading} {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{a,\sigma}(z,{\bar{z}}) = \frac{{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{z}\,.$$ The simple recursive nature of this equation implies that ${{\hat{H}}_{\mathrm{2d,i}}}$ does not affect the $a$ indices of an SVHPL and can, at most, generate subleading terms $\Sigma_{\text{sub}}$ through integration constants, *cf.* eqs. –.
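Both actions are easy to encode at symbol level. A short sketch (ours; leading-weight terms only, dropping $\Sigma_{\text{sub}}$ and integration constants) reproduces the two examples displayed above:

```python
from collections import Counter
from fractions import Fraction

def shuffle_in_s(word):
    """H_2d,m at symbol level: shuffle one letter 's' into the word, times 1/2."""
    out = Counter()
    for i in range(len(word) + 1):
        out[word[:i] + ('s',) + word[i:]] += 1
    return {w: Fraction(c, 2) for w, c in out.items()}

def h_i_leading(word):
    """H_2d,i at symbol level: replace each 's' by 'ss' - 'aa', times -1/4."""
    out = Counter()
    for j, letter in enumerate(word):
        if letter == 's':
            out[word[:j] + ('s', 's') + word[j+1:]] -= 1
            out[word[:j] + ('a', 'a') + word[j+1:]] += 1
    return {w: Fraction(c, 4) for w, c in out.items()}

w = ('a', 's', 'a', 's')
# matches H_2d,m L_{a,s,a,s} = 1/2 L_{s,a,s,a,s} + L_{a,s,s,a,s} + L_{a,s,a,s,s}
assert shuffle_in_s(w) == {
    ('s', 'a', 's', 'a', 's'): Fraction(1, 2),
    ('a', 's', 's', 'a', 's'): Fraction(1),
    ('a', 's', 'a', 's', 's'): Fraction(1),
}
# matches the leading-weight part of eq. (eq:hisymbollevel)
assert h_i_leading(w) == {
    ('a', 's', 's', 'a', 's'): Fraction(-1, 4),
    ('a', 'a', 'a', 'a', 's'): Fraction(1, 4),
    ('a', 's', 'a', 's', 's'): Fraction(-1, 4),
    ('a', 's', 'a', 'a', 'a'): Fraction(1, 4),
}
```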
The action on ${\mathcal{L}}_{s,\sigma}$ can be written as $$\begin{aligned} {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_{s,\sigma}(z,{\bar{z}}) &= {{\frac{{\mathrm{d}}}{{\mathrm{d}}{z}}}}{{\hat{H}}_{\mathrm{2d,i}}}\left[ {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + 2{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) \right] \nn\\ &= \frac{1+z}{z(1-z)} {{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_\sigma(z,{\bar{z}}) \nn \\ &\hspace{10mm} - \frac12 \left( \frac{{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + 2{\mathcal{L}}_{1,\sigma}(z,{\bar{z}})}{1-z} + \frac{{\mathcal{L}}_{1,\sigma}(z,{\bar{z}})}{z} \right) + \Sigma_{\text{sub}} \nn \\ &= \frac{1+z}{z(1-z)} {{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_\sigma(z,{\bar{z}}) \nn \\ &\hspace{10mm} - \frac14 \frac{1+z}{z(1-z)} \left( {\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) + 2{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) \right) + \frac14 \frac{{\mathcal{L}}_{0,\sigma}(z,{\bar{z}})}{z} + \Sigma_{\text{sub}} \nn \\ &= \frac{1+z}{z(1-z)} {{\hat{H}}_{\mathrm{2d,i}}}{\mathcal{L}}_\sigma(z,{\bar{z}}) \nn \\ &\hspace{10mm} - \frac14 \left( \frac{1+z}{z(1-z)} {\mathcal{L}}_{s,\sigma}(z,{\bar{z}}) - \frac{{\mathcal{L}}_{a,\sigma}(z,{\bar{z}})}{z} \right) + \Sigma_{\text{sub}}, \label{eq:diffeqsleading}\end{aligned}$$ where at each step we have collected subleading terms into $\Sigma_{\text{sub}}$. The first term in the final expression is again an inert term, like the one encountered in eq. . The following term, however, creates two leading-weight terms which, upon integration, yield $-\frac14 ({\mathcal{L}}_{s,s,\sigma} - {\mathcal{L}}_{a,a,\sigma})$ and hence confirm the pattern described above eq. . Note that by the recursive nature of the differential equation this applies (separately) to *every* letter $s$ in the word $(s,\sigma)$, not just the first one (see e.g. eq. (\[eq:hisymbollevel\])). In the following we show that it is possible to unravel the recursive definition of ${{\hat{H}}_{\mathrm{2d,i}}}$ beyond symbol level.
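The letterwise dictionary behind the $\{a,s\}$ alphabet, $a = 0$ and $s = 0 + 2\cdot 1$ (from ${\mathrm{d}}\log z/(1-z)^2 = {\mathrm{d}}\log z + 2\,{\mathrm{d}}\log 1/(1-z)$), can be made concrete in a few lines. This is a sketch of ours, valid at leading weight where ${\mathcal{L}}_{s,\sigma} = {\mathcal{L}}_{0,\sigma} + 2{\mathcal{L}}_{1,\sigma}$ applies letter by letter:

```python
from collections import Counter
from itertools import product

# Letter dictionary: a = 0, s = 0 + 2*1
AS_TO_01 = {'a': {('0',): 1}, 's': {('0',): 1, ('1',): 2}}

def expand_word(word):
    """Expand an {a,s}-word into the {0,1} basis, letter by letter."""
    out = Counter()
    for combo in product(*[AS_TO_01[letter].items() for letter in word]):
        coeff, w01 = 1, ()
        for letter01, c in combo:
            coeff *= c
            w01 += letter01
        out[w01] += coeff
    return dict(out)

# L_s = L_0 + 2 L_1, cf. the one-loop wavefunction Omega^(1) = C_2/2 (L_0 + 2 L_1)
assert expand_word(('s',)) == {('0',): 1, ('1',): 2}
# L_{s,s} = L_{0,0} + 2 L_{0,1} + 2 L_{1,0} + 4 L_{1,1},
# matching the C_2^2 bracket of the two-loop wavefunction
assert expand_word(('s', 's')) == {('0', '0'): 1, ('0', '1'): 2,
                                   ('1', '0'): 2, ('1', '1'): 4}
```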
The $\Sigma_{\text{sub}}$ terms in the above equations are generated by two independent and additive sources: the $w,{\bar{w}}\to \infty$ limit in eq.  and the constants of integration as shown in eqs. –. Let us denote them $\Sigma_{\text{sub}(\infty)}$ and $\Sigma_{\text{sub}(0)}$, respectively, with their sum equalling $\Sigma_{\text{sub}}$. Empirically we observe that $\Sigma_{\text{sub}(0)}$ follows a simple pattern when using the $\{a,s\}$ alphabet: $${{\hat{H}}_{\mathrm{i}}}{\mathcal{L}}_{w_1,\dots,w_{\ell-1}}(z,{\bar{z}}) = \Sigma_{\text{lead}} + \Sigma_{\text{sub}(\infty)} + \sum_{\substack{j \geq 3 \\ j\ \text{odd}}}^{\ell} \zeta_j\, {\mathcal{L}}_{w_1,\dots,w_{\ell-j}}(z,{\bar{z}})\,, \label{eq:subtermszero}$$ with $\Sigma_{\text{lead}}$ now the leading-weight contribution governed by eq. . $\Sigma_{\text{sub}(\infty)}$ in turn can be summarised by $$\begin{gathered} {{\hat{H}}_{\mathrm{i}}}{\mathcal{L}}_{w_1,\dots,w_{\ell-1}}(z,{\bar{z}}) = \Sigma_{\text{lead}} + \Sigma_{\text{sub}(0)} + \frac18 \sum_{j=1}^{\ell-1} \left( {\mathcal{L}}_{w_1,\dots,w_j}(z,{\bar{z}}) -{\mathcal{L}}_{w_1,\dots,w_{j-1},a}(z,{\bar{z}}) \right) \\ \times \left[ {\mathcal{L}}_{a,w_{j+1},\dots,w_{\ell-1}}(z,{\bar{z}}) + {\mathcal{L}}_{s,w_{j+1},\dots,w_{\ell-1}}(z,{\bar{z}}) \right]_{z,{\bar{z}}\to \infty}. \label{eq:subtermsinf}\end{gathered}$$ In both these equations the final term in the sum needs to be interpreted with care: in eq. (\[eq:subtermszero\]), for $j=\ell$ one obtains ${\mathcal{L}}_{w_1,\ldots,w_0}\equiv1$ and in eq. (\[eq:subtermsinf\]) for $j=\ell-1$ one obtains in the second factor ${\mathcal{L}}_{a,w_{\ell},\dots,w_{\ell-1}}(z,{\bar{z}}) + {\mathcal{L}}_{s,w_{\ell},\dots,w_{\ell-1}}(z,{\bar{z}}) \equiv {\mathcal{L}}_{a}(z,{\bar{z}}) + {\mathcal{L}}_{s}(z,{\bar{z}})$. Observe that $w_j = s$ in eq. (\[eq:subtermsinf\]) is a necessary yet not sufficient requirement for a non-zero contribution. Being based on observations, the patterns described in eqs.  and need to be verified against the wavefunctions computed in the previous section.
We find perfect agreement with the wavefunction up to and including 13 loops, and are thus confident that the above description is correct. By introducing the $\{a,s\}$ alphabet we have accounted for the symmetry of the wavefunction under inversion, $z \to 1/z$, at symbol level, i.e. as far as leading-weight terms are concerned. Our basis of SVHPLs respects neither this nor the invariance under complex conjugation at function level: in general ${\mathcal{L}}_\sigma(z,{\bar{z}}) \neq {\mathcal{L}}_\sigma(1/z,1/{\bar{z}})$ and ${\mathcal{L}}_\sigma(z,{\bar{z}}) \neq {\mathcal{L}}_\sigma({\bar{z}},z)$. Expecting further simplifications we will therefore construct a set of symmetrised functions in the remainder of this section. In the following we heavily use relations between SVHPLs under a standard set of variable transformations. We summarise the most important aspects of these relations in appendix \[app:svhplvariables\]. Quintessentially, these relations determine the coefficients $c_w$ in ${\mathcal{L}}_\sigma(g(z),g({\bar{z}})) = \sum_w c_w {\mathcal{L}}_w(z,{\bar{z}})$ where the sum runs over all words up to weight $|\sigma|$ and, in the present case, $g(x) = 1/x$ or $g(x) = \bar x$. Let us define $$\label{eq:Fdef} {\mathcal{F}}_{\sigma}(z,{\bar{z}}) \equiv \frac14 \left( {\mathcal{L}}_{\sigma}(z,{\bar{z}}) + {\mathcal{L}}_{\sigma}({\bar{z}},z) + {\mathcal{L}}_{\sigma}(1/z,1/{\bar{z}}) + {\mathcal{L}}_{\sigma}(1/{\bar{z}},1/z) \right)$$ with $\sigma$ a word belonging to an alphabet of one’s choosing. In the following we stick with the $\{a,s\}$ alphabet. We stress that the set of ${\mathcal{F}}$s does not span the space of SVHPLs but it does cover the entire space of wavefunctions.
Due to the symmetries of the wavefunction $$\label{eq:wfsyms1} {{\Omega}_{\mathrm{2d}}}(z,{\bar{z}}) = {{\Omega}_{\mathrm{2d}}}({\bar{z}},z) = {{\Omega}_{\mathrm{2d}}}(1/z,1/{\bar{z}}) = {{\Omega}_{\mathrm{2d}}}(1/{\bar{z}},1/z)$$ and thus $$\label{eq:wfsyms2} {{\Omega}_{\mathrm{2d}}}(z,{\bar{z}}) = \frac14 \left( {{\Omega}_{\mathrm{2d}}}(z,{\bar{z}}) + {{\Omega}_{\mathrm{2d}}}({\bar{z}},z) + {{\Omega}_{\mathrm{2d}}}(1/z,1/{\bar{z}}) + {{\Omega}_{\mathrm{2d}}}(1/{\bar{z}},1/z) \right)$$ one can simply replace ${\mathcal{L}}_\sigma(z,{\bar{z}}) \to {\mathcal{F}}_\sigma(z,{\bar{z}})$ to go from the ${\mathcal{L}}$ to the ${\mathcal{F}}$ basis. It may therefore not be immediately obvious how eq.  simplifies the results. Indeed, it requires a few more steps to showcase the advantages of a symmetrised basis. Firstly, the wavefunction in the ${\mathcal{L}}$ basis contains functions whose indices feature an odd number of the letter $a$. Their leading-weight components are antisymmetric under $z \to 1/z$ because $${\mathcal{L}}_a(z,{\bar{z}}) = -\,{\mathcal{L}}_a(1/z,1/{\bar{z}})\,.$$ Converted to ${\mathcal{F}}$ functions they are hence zero at symbol level or, in other words, equal to products of lower-weight functions and zeta numbers. This can be turned into a recursive algorithm that successively removes all odd-$a$ functions. Schematically, 1. Consider the wavefunction at a given order and replace ${\mathcal{L}}_\sigma(z,{\bar{z}}) \to {\mathcal{F}}_\sigma(z,{\bar{z}})$ 2. Choose an ${\mathcal{F}}_\sigma(z,{\bar{z}})$ where $\sigma$ contains an odd number of $a$ letters. Plug in definition  and rewrite as functions of $z,{\bar{z}}$ using the rules in appendix \[app:svhplvariables\]. The resulting expression will be of lower weight than the original ${\mathcal{F}}_\sigma$, multiplied by zeta numbers. 3. Replace again ${\mathcal{L}}_\sigma(z,{\bar{z}}) \to {\mathcal{F}}_\sigma(z,{\bar{z}})$ 4. Repeat steps 2 & 3 until a fixed point is reached and only functions with an even number of $a$ letters remain. Note that step 3 is valid for the same reason it was legitimate to replace ${\mathcal{L}}_\sigma(z,{\bar{z}}) \to {\mathcal{F}}_\sigma(z,{\bar{z}})$ in the wavefunction, *cf.* eqs.  and .
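The fixed-point procedure can be sketched in a few lines of Python. In this toy sketch the rewrite table is restricted to the low-weight odd-$a$ identities quoted in the examples that follow, with $\zeta_3$ kept symbolic as the label `'z3'`; an expression is a map from (zeta factors, ${\mathcal{F}}$ word) pairs to rational coefficients:

```python
from fractions import Fraction

# rewrite table for odd-a words, taken from the low-weight identities in the text;
# values map (extra zeta factors, replacement word) -> rational coefficient
RULES = {
    ('a',):               {},
    ('a', 's'):           {},
    ('s', 'a'):           {},
    ('a', 's', 's'):      {(('z3',), ()): Fraction(4)},
    ('s', 's', 'a'):      {(('z3',), ()): Fraction(4)},
    ('s', 'a', 's'):      {(('z3',), ()): Fraction(-8)},
    ('s', 's', 's', 'a'): {(('z3',), ('s',)): Fraction(4)},
    ('s', 'a', 's', 's'): {(('z3',), ('s',)): Fraction(4)},
}

def remove_odd_a(expr):
    """Rewrite expr = {(zetas, word): coeff} until no word has an odd number of 'a'."""
    while True:
        odd = next((k for k in expr if k[1].count('a') % 2 == 1), None)
        if odd is None:
            return expr
        coeff = expr.pop(odd)
        zetas, word = odd
        for (extra_zetas, new_word), c in RULES[word].items():
            key = (tuple(sorted(zetas + extra_zetas)), new_word)
            expr[key] = expr.get(key, Fraction(0)) + coeff * c

# toy expression: F_{s,a,s,s} - 2 F_{a,s}  ->  4 zeta_3 F_s
expr = {((), ('s', 'a', 's', 's')): Fraction(1), ((), ('a', 's')): Fraction(-2)}
result = remove_odd_a(expr)
assert result == {(('z3',), ('s',)): Fraction(4)}
```

The real computation replaces the hard-coded table by the variable-transformation rules of appendix \[app:svhplvariables\], but the fixed-point loop is the same.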
To give a few examples for odd-$a$ functions, $$\begin{aligned} {\mathcal{F}}_{a}(z,{\bar{z}}) &= 0 \\ {\mathcal{F}}_{a,s}(z,{\bar{z}}) = {\mathcal{F}}_{s,a}(z,{\bar{z}}) &= 0 \\ {\mathcal{F}}_{a,s,s}(z,{\bar{z}}) = {\mathcal{F}}_{s,s,a}(z,{\bar{z}}) &= 4 \zeta_3 \\ {\mathcal{F}}_{s,a,s}(z,{\bar{z}}) &= -8 \zeta_3 \\ {\mathcal{F}}_{s,s,s,a}(z,{\bar{z}}) = {\mathcal{F}}_{s,a,s,s}(z,{\bar{z}}) &= 4 \zeta_3 {\mathcal{F}}_s(z,{\bar{z}}). \end{aligned}$$ Secondly, we may combine ${\mathcal{F}}_\sigma(z,{\bar{z}})$ and ${\mathcal{F}}_{\tilde \sigma}(z,{\bar{z}})$ with $\tilde \sigma$ the word $\sigma$ reversed, at the cost of generating subleading terms. This is due to the following identity of SVHPLs: $${\mathcal{L}}_{\tilde\sigma}(z,{\bar{z}}) = {\mathcal{L}}_\sigma({\bar{z}},z) + \Sigma_{\text{sub}}\,.$$ For a function ${\mathcal{F}}_\sigma$ this entails $${\mathcal{F}}_{\tilde\sigma}(z,{\bar{z}}) = {\mathcal{F}}_\sigma(z,{\bar{z}}) + \Sigma_{\text{sub}} \label{eq:Fwordrev}$$ due to the invariance under complex conjugation. Besides removing nearly half of the ${\mathcal{F}}$ functions we find the generated subleading terms to sometimes reduce but never increase the complexity of a given expression. For the procedure to be algorithmic one chooses which letter to cumulate in the left (or right) half of a word. For the wavefunction up to four loops and with the same abbreviations as in eqs.
– we find $$\begin{aligned} {{{\Omega}^{({1})}}_{\mathrm{2d}}} &= \frac12 C_2 {\mathcal{F}}_s \label{eq:wtd1f} \\ {{{\Omega}^{({2})}}_{\mathrm{2d}}} &= \frac{1}{8} C_1 C_2 \left({\mathcal{F}}_{a,a} -{\mathcal{F}}_{s,s}\right)+\frac{1}{2} C_2^2 {\mathcal{F}}_{s,s} \\ {{{\Omega}^{({3})}}_{\mathrm{2d}}} &= \frac{1}{16} C_1 C_2^2 \left({\mathcal{F}}_{a,s,a} +6 {\mathcal{F}}_{s,a,a}-7 {\mathcal{F}}_{s,s,s}+8 \zeta_3\right) +\frac{1}{16} C_1^2 C_2 \left({\mathcal{F}}_{s,s,s}-{\mathcal{F}}_{s,a,a}\right) +\frac{3}{4} C_2^3 {\mathcal{F}}_{s,s,s} \\ \begin{split} {{{\Omega}^{({4})}}_{\mathrm{2d}}} &= \frac{1}{16} C_1 C_2^3 \left({\mathcal{F}}_{a,s,s,a} +6 {\mathcal{F}}_{s,a,a,s}+4 {\mathcal{F}}_{s,a,s,a}+12 {\mathcal{F}}_{s,s,a,a} -23 {\mathcal{F}}_{s,s,s,s}\right. \\ &\hspace{4mm} \left.+20 \zeta_3 {\mathcal{F}}_s\right) +\frac{1}{64} C_1^2 C_2^2 \left(-{\mathcal{F}}_{a,s,s,a} -9 {\mathcal{F}}_{s,a,a,s}-2 {\mathcal{F}}_{s,a,s,a}-24 {\mathcal{F}}_{s,s,a,a}\right. \\ &\hspace{4mm} \left.+7 {\mathcal{F}}_{a,a,a,a}+29 {\mathcal{F}}_{s,s,s,s} -4 \zeta_3 {\mathcal{F}}_s\right)+\frac{1}{64} C_1^3 C_2 \left({\mathcal{F}}_{s,a,a,s} +3 {\mathcal{F}}_{s,s,a,a}\right. \\ &\hspace{4mm} \left.-{\mathcal{F}}_{a,a,a,a}-3 {\mathcal{F}}_{s,s,s,s}\right) +\frac{3}{2} C_2^4 {\mathcal{F}}_{s,s,s,s} \label{eq:wtd4f} \end{split}\end{aligned}$$ where we used eq.  in favour of words that start rather than end with the letter $s$. Further results up to weight 13 can be found in the ancillary file `2Reggeon-wavefunction-Fsa-Basis.txt`. Indeed, comparing the results in eqs. – to the wavefunction in terms of standard SVHPLs (and the standard $\{0,1\}$ alphabet) in eqs. – shows the benefits of the new basis. In terms of ${\mathcal{F}}$ functions the wavefunction not only takes a very compact form and is expressed in terms of fewer functions, it also avoids subleading terms in some cases, like the $-\frac{3}{16} {\mathcal{L}}_1 \zeta_3$ in the coefficient of $C_1^3 C_2$ at four loops .
Finite corrections to the amplitude from two-dimensional evolution {#amplitude}
==================================================================

We now have an algorithm for the calculation of the wavefunction $\Omega_{\rm 2d}$ to any loop order, and we shall use it for the computation of the amplitude. Let us recall from section \[soft\] that the soft part has been fully determined, and our goal here is the calculation of the hard part of the amplitude, as defined in [eq. ]{}. This, in turn, requires the hard part of the two-dimensional wavefunction, which according to [eq. ]{} is obtained by subtracting the $d=2$ limit of the soft wavefunction from the full (two-dimensional) wavefunction $\Omega_{\rm 2d}$ of the previous section. To this end we first define $$\label{Wsoft2d} {{\Omega}_{\mathrm{s}}}^{({\rm 2d})}(z,{\bar{z}})\,\equiv \, \left.\lim_{{\epsilon}\to 0} {{\Omega}_{\mathrm{s}}}(p,k)\right|_{\log {\left(\frac{k^2(p-k)^2}{(p^2)^2}\right)}\,\to\, {\mathcal{L}}_s(z,{\bar{z}})}\,,$$ where taking the limit simply corresponds to selecting the leading ${\mathcal{O}}({\epsilon}^0)$ terms in ${{\Omega}_{\mathrm{s}}}(p,k)$. Notice that in the $d=2$ limit we switch to the two-dimensional variables $z$ and ${\bar{z}}$ of eq. , and the single-valued logarithm ${\mathcal{L}}_s(z,{\bar{z}})=\log\frac{z{\bar{z}}}{(1-z)^2(1-{\bar{z}})^2}$ defined in eq. (\[ElDef\]). Having used the symmetrised soft wavefunction we land directly in the class of SVHPLs used in the two-dimensional computation of section \[sec:diffeq\], and in this way the computations of $\Omega_{\rm 2d}$ and ${{\Omega}_{\mathrm{s}}}^{({\rm 2d})}$ are entirely compatible. We note that with the replacement in (\[Wsoft2d\]) the two-dimensional soft wavefunction in  becomes a polynomial in ${\mathcal{L}}_s(z,{\bar{z}})$ at any given order. According to eq. (\[eq:wffullsoftresummed\]), these terms exponentiate and can be resummed to all orders.
Upon applying the change of variables of (\[Wsoft2d\]) this resummed expression is \[eq:wffullsoftresummed\_SVHPL\] [\_]{}\^[([2d]{})]{}(z,[|[z]{}]{})= \^ e\^[ [C\_2]{}\_s(z,[|[z]{}]{})]{} , with $x = L \, {\alpha_s}/\pi$. Using the results in section \[2d-bfkl\] for $\Omega_{\rm 2d}$ and the expansion of [eq. ]{} for ${{\Omega}_{\mathrm{s}}}^{({\rm 2d})}$ we determine ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})} = \Omega_{\rm 2d} -{{\Omega}_{\mathrm{s}}}^{({\rm 2d})}$, and we can proceed to determine the hard part of the amplitude order by order, according to [eq. ]{}. To this end, recall that the hard wavefunction ${{\Omega}_{\mathrm{h}}}^{({\rm 2d})}$ is guaranteed to integrate to finite terms only, hence it can be integrated in strictly two dimensions. Applying the limit ${\epsilon}\to 0$ in [eq. ]{} to the integrand and the integration measure using the variables $z$ and $\bar{z}$ (*cf.* eqs. (\[k2zzb\]) and (\[eq:dw2d\])) we obtain: \^[(+)]{}\_[NLL,h]{}() = -i [\_[s-u]{}\^2]{}[\^]{}\_[ijij]{}, \[eq:hardAmp2d\] where, in practice, we loop-expand both the wavefunction and amplitude, as was done in eq. . The next two subsections are dedicated to the computation of the integral in eq. (\[eq:hardAmp2d\]), thus determining the hard component of the reduced amplitude order by order. In section \[sec:finiteamp\] we combine the soft and hard components of the reduced amplitude according to eq. (\[eq:redampSplit\]), and finally in section \[sec:hardAmpl\] we similarly combine the soft and hard components of the infrared-renormalized amplitude using eqs. (\[Hsplit\]) and (\[getH2both\]). To set up the computation of eq. (\[eq:hardAmp2d\]) let us define $$I \equiv \frac{1}{4\pi} \int \frac{{\mathrm{d}}^2 z}{z {\bar{z}}}\, {{\Omega}_{\mathrm{h}}}^{({\rm 2d})}(z,{\bar{z}}) \label{eq:Idef}$$ and introduce in turn two independent methods for computing these integrals.
For the sake of simplicity of notation, given that the entire computation is done in two dimensions, we shall now drop the $({\rm 2d})$ superscript, and refer to the integrand in (\[eq:Idef\]) as ${{\Omega}_{\mathrm{h}}}(z,{\bar{z}})$. Similarly, while (\[eq:Idef\]) is applied order-by-order, in describing the methods we refrain from using an index for the loop order on either side of (\[eq:Idef\]). The first method, described in section \[sec:method1\] below, is based on using the known analytic structure of the wavefunction, in order to convert the two-dimensional integral into an integral over the discontinuity of the wavefunction. It was inspired by the calculations described in section 7.1 of ref. [@Schwartz:2014wha]. The second method, presented in section \[sec:method2\] below, relies on the symmetry of the wavefunction under inversion, $z \rightarrow 1/z$, ${\bar{z}}\rightarrow 1/{\bar{z}}$, and the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ at fixed external points. Method I: final integration using the discontinuity of the wavefunction {#sec:method1} ----------------------------------------------------------------------- Let us define a regularised version of the integral $I$ in eq. : $$I_{\mathrm{reg}}= \frac{1}{4\pi} \int_{\delta^2 < z {\bar{z}}< 1/\delta^2} \frac{{\mathrm{d}}^2 z}{z {\bar{z}}} {{\Omega}_{\mathrm{h}}}(z,{\bar{z}}) \,, \label{eq:Iregdef}$$ where the cutoff $\delta$ is assumed to be small, $\delta \ll 1$. The introduction of $\delta$ may seem superfluous at this point as $\lim_{z,{\bar{z}}\to 0} {{\Omega}_{\mathrm{2d}}}= \lim_{z,{\bar{z}}\to \infty} {{\Omega}_{\mathrm{2d}}}= \lim_{{\epsilon}\to 0} {{\Omega}_{\mathrm{s}}}$ and thus, using eq. 
(\[eq:wffullsoftresummed\_SVHPL\]), $\lim_{z,{\bar{z}}\to 0} {{\Omega}_{\mathrm{h}}}= \lim_{z,{\bar{z}}\to \infty} {{\Omega}_{\mathrm{h}}}= 0$; more precisely, ${{\Omega}_{\mathrm{h}}}$ vanishes linearly in $z\bar{z}$ in the soft limit, up to logarithms, rendering the integral in (\[eq:Idef\]) convergent, and the difference $I - I_{\mathrm{reg}}= {\mathcal{O}}(\delta^2)$ (up to logarithms). The necessity of this cutoff despite this good convergence will become clear shortly. The exclusion of the points $\{0,\infty\}$ in (\[eq:Iregdef\]) enables us to introduce polar coordinates such that $z{\bar{z}}=r^2$ and $\frac{z}{{\bar{z}}}=e^{2i\theta}$, as now all points in the integration region have a non-vanishing Jacobian: $$\label{eq:lastintpol0} I_{\mathrm{reg}}= \frac{1}{4\pi} \int_\delta^{1/\delta} \frac{{\mathrm{d}}r}{r} \int_0^{2\pi} d\theta\, {{\Omega}_{\mathrm{h}}}\left( re^{i\theta},re^{-i\theta} \right)\,.$$ To proceed we express the angular integral in the latter as an integration in the complex $y$ plane where $y\equiv e^{i\theta}$, getting $$\label{eq:lastintpol} I_{\mathrm{reg}}= \frac{1}{4\pi i} \int_\delta^{1/\delta} \frac{{\mathrm{d}}r}{r} \oint_{|y|=1} \frac{{\mathrm{d}}y}{y} \, {{\Omega}_{\mathrm{h}}}(ry,r/y)\,,$$ where the contour runs along the unit circle. The method outlined in the following is based on deforming the contour in the complex $y$ plane. Essential to this is the fact that the integrand, at any order, is expressed in terms of SVHPLs, whose analytic structure is well understood. These functions are single-valued as long as their arguments are complex conjugates of one another, namely as long as the contour in eq.  runs along the unit circle. Outside of this region, i.e. upon deforming the contour, the SVHPLs in ${{\Omega}_{\mathrm{h}}}(z,{\bar{z}})$ exhibit branch cuts where $z \in [1,\infty]$ and ${\bar{z}}\in [1,\infty]$. In the $r,y$ coordinates of eq.
they correspond to cuts along the real axis in the complex $y$ plane where $y \in [1/r,\infty]$ and $y \in [0,r]$, respectively. ![Position of the branch cuts in $z$ and ${\bar{z}}$ in the complex $y$-plane for $r<1$ (left). The contour along the unit circle in eq.  can be deformed and, consequently, identified with the integral of the ${\bar{z}}$-discontinuity (right), as written in eq. .[]{data-label="fig:ycontour"}](./img/ycontour-crop.pdf) For $r<1$ there is a branch cut-free interval $(r,1/r)$ through which the contour along the unit circle passes, *cf.* the l.h.s. of figure \[fig:ycontour\]. The contour can consequently be shrunk until it corresponds to integrating the ${\bar{z}}$-discontinuity of the wavefunction over $y$ from $0$ to $r$, *cf.* the r.h.s. of figure \[fig:ycontour\]. We can now understand why it is necessary to work with the regularised integral $I_{\mathrm{reg}}$ of eq.  instead of the original $I$ of eq. : while the hard wavefunction ${{\Omega}_{\mathrm{h}}}(z,{\bar{z}})$ vanishes at 0 and $\infty$, its discontinuity, in general, does not. In other words, the contour deformation introduces spurious divergent terms and the cutoff introduced in eq.  regularises them. For $r>1$ the branch cuts of $z$ and ${\bar{z}}$ overlap. However, the discontinuity cancels identically in the interval $(1/r,r)$. Repeating the procedure, we again identify the contour integration with integrating the ${\bar{z}}$-discontinuity of ${{\Omega}_{\mathrm{h}}}(z,{\bar{z}})$ over $y$, this time, from $0$ to $1/r$.
In total, having modified the contour in (\[eq:lastintpol\]) we find $$\begin{aligned} I_{\mathrm{reg}}&= \frac{1}{4\pi i} \left( \int_\delta^1 \frac{{\mathrm{d}}r}{r} \int_0^r \frac{{\mathrm{d}}y}{y} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(ry,r/y)] + \int_1^{1/\delta} \frac{{\mathrm{d}}r}{r} \int_0^{1/r} \frac{{\mathrm{d}}y}{y} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(ry,r/y)] \right) \nonumber \\ &= \frac{1}{4\pi i} \left( \int_\delta^1 \frac{{\mathrm{d}}r}{r} \int_1^\infty \frac{{\mathrm{d}}{\bar{z}}}{{\bar{z}}} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(r^2/{\bar{z}},{\bar{z}})] + \int_1^{1/\delta} \frac{{\mathrm{d}}r}{r} \int_0^1 \frac{{\mathrm{d}}z}{z} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,r^2/z)] \right) \nonumber \\ &= \frac{1}{8\pi i} \left( \int_1^\infty \frac{{\mathrm{d}}{\bar{z}}}{{\bar{z}}} \int_{\delta^2/{\bar{z}}}^{1/{\bar{z}}} \frac{{\mathrm{d}}z}{z} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,{\bar{z}})] + \int_0^1 \frac{{\mathrm{d}}z}{z} \int_{1/z}^{1/(\delta^2 z)} \frac{{\mathrm{d}}{\bar{z}}}{{\bar{z}}} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,{\bar{z}})] \right) \nonumber \\ &= \frac{1}{8\pi i} \left( \int_0^1 \frac{{\mathrm{d}}x}{x} \int_{\delta^2 x}^x \frac{{\mathrm{d}}z}{z} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] + \int_0^1 \frac{{\mathrm{d}}z}{z} \int_{\delta^2 z}^z \frac{{\mathrm{d}}x}{x} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \right) \label{eq:intdisc}\end{aligned}$$ where the two terms correspond respectively to $r<1$ and $r>1$. It is clear from the outset that they are equal: this corresponds to splitting (\[eq:Iregdef\]) at $z\bar{z}=1$, which admits $(z,{\bar{z}})\leftrightarrow (1/z,1/{\bar{z}})$ symmetry. 
In the second line of (\[eq:intdisc\]) we reverted to the variable ${\bar{z}}=r/y$ in the first integral and $z=ry$ in the second; in the third we changed the order of integration before reverting to $z=r^2/{\bar{z}}$ in the first integral and ${\bar{z}}=r^2/z$ in the second; finally in the last line we defined $x=1/{\bar{z}}$ in both integrals. ![Illustration of the integrations in the $r<1$ (I) and $r>1$ (II) contribution to $I_{\mathrm{reg}}$ of eq.  (white triangles, l.h.s.). They can be viewed as the integral over a square (A) plus two wedges (B) minus two small triangles (C) (delimited by dashed lines, r.h.s.).[]{data-label="fig:method1"}](./img/method1-crop.pdf) Let us now discuss the evaluation of the final expression in eq. (\[eq:intdisc\]), where the integration region of the two terms is depicted as the white area in figure \[fig:method1\]. In order to perform the integration it is useful to view the integrals (*cf.* the r.h.s. of figure \[fig:method1\]) as the integral over a square $$\label{eq:square} I_A(\delta) = \frac{1}{8\pi i} \int_{\delta^2}^1 \frac{{\mathrm{d}}z}{z} \int_{\delta^2}^1 \frac{{\mathrm{d}}x}{x} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)]\,,$$ plus (the integrals over) two wedges $$\label{eq:wedges} I_B(\delta) = \frac{1}{8\pi i} \left( \int_0^1 \frac{{\mathrm{d}}x}{x} \int_{\delta^2 x}^{\delta^2} \frac{{\mathrm{d}}z}{z}\, {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] + \int_0^1 \frac{{\mathrm{d}}z}{z} \int_{\delta^2 z}^{\delta^2} \frac{{\mathrm{d}}x}{x}\, {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \right),$$ minus two small triangles $$\label{eq:kite} I_C(\delta) = \frac{1}{8\pi i} \left( \int_0^{\delta^2} \frac{{\mathrm{d}}x}{x} \int_{\delta^2 x}^{x} \frac{{\mathrm{d}}z}{z}\, {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] + \int_0^{\delta^2} \frac{{\mathrm{d}}z}{z} \int_{\delta^2 z}^{z} \frac{{\mathrm{d}}x}{x}\, {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \right),$$ where both $z$ and $x$ are small. Next we would like to evaluate each of these contributions, distinguishing between finite, $\delta$-independent terms, and logarithmically divergent cut-off dependent terms. The discontinuity w.r.t. ${\bar{z}}$ of ${{\Omega}_{\mathrm{h}}}(z,1/x)$ evaluates to HPLs of $z$ and $x$. $I_A(\delta)$ of eq.  thus immediately evaluates to HPLs at argument 1, giving rise to MZVs, and to HPLs at argument $\delta^2$; the latter contain logarithmically divergent terms in $\delta$.
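The region algebra behind this decomposition can be checked symbolically. The sketch below (sympy; a smooth monomial stand-in lumps the measure and the discontinuity together, so only the integration regions are probed) confirms that $I_A + I_B - I_C$ reproduces the two integrals of eq. (\[eq:intdisc\]) up to power-suppressed terms in $\delta$, which vanish in the $\delta \to 0$ limit:

```python
import sympy as sp

z, x, d = sp.symbols('z x delta', positive=True)
f = x  # smooth monomial stand-in for the full integrand (measure times discontinuity)

# the two terms of the original integral (last line of eq. (intdisc))
I1 = sp.integrate(sp.integrate(f, (z, d**2*x, x)), (x, 0, 1))
I2 = sp.integrate(sp.integrate(f, (x, d**2*z, z)), (z, 0, 1))

# square (A), two wedges (B), two small triangles (C)
A = sp.integrate(sp.integrate(f, (z, d**2, 1)), (x, d**2, 1))
B = sp.integrate(sp.integrate(f, (z, d**2*x, d**2)), (x, 0, 1)) \
  + sp.integrate(sp.integrate(f, (x, d**2*z, d**2)), (z, 0, 1))
C = sp.integrate(sp.integrate(f, (z, d**2*x, x)), (x, 0, d**2)) \
  + sp.integrate(sp.integrate(f, (x, d**2*z, z)), (z, 0, d**2))

# A + B - C agrees with I1 + I2 up to power-suppressed terms in delta
diff = sp.simplify((I1 + I2) - (A + B - C))
assert sp.limit(diff, d, 0) == 0
assert sp.series(diff, d, 0, 8).removeO() == 0
```

With the smooth stand-in the mismatch is of order $\delta^8$ and hence irrelevant for the finite and $\log\delta$ terms tracked in the text.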
The first (second) integral in the expression of $I_B(\delta)$ in  is calculated close to $z=0$ ($x=0$), *cf.*  figure \[fig:method1\]. One can therefore expand the discontinuity function in the integrand and discard terms suppressed by powers of $z$ ($x$) keeping only powers of $\log z$ ($\log x$). The inner integrals then yield powers of $\log \delta^2$, $\log \delta^2 x = \log x + \log \delta^2$ and $\log \delta^2 z = \log z + \log \delta^2$, respectively. The outer integrals thereupon generate MZVs from their upper limits; in addition they contain logarithmically divergent terms in $\delta$. Contributions from the lower integration limits are dropped according to the (standard) regularisation of HPLs: $$\lim_{z \to 0} \log z = 0\,. \label{eq:hplreg}$$ A similar analysis of $I_C(\delta)$ in eq.  reveals that only powers of $\log \delta^2$ are generated by the integrations over the two small triangles in figure \[fig:method1\]. Since the original integral $I$ of eq. (\[eq:Idef\]) is finite and $I_{\mathrm{reg}}\to I$ for $\delta \to 0$ all terms proportional to $\log \delta^2$ have to cancel between the three contributions $I_A(\delta)$, $I_B(\delta)$ and $I_C(\delta)$. This enables us to derive a simplified integral in which the logarithmically divergent terms are absent altogether whilst giving the same finite terms: $$\begin{gathered} I_{\mathrm{reg}}= \frac{1}{8\pi i} \int_0^1 \frac{{\mathrm{d}}z}{z} \int_0^1 \frac{{\mathrm{d}}x}{x} {\mathrm{disc_{{\bar{z}}}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \\ + \frac{1}{8\pi i} \left( - \int_0^1 \frac{{\mathrm{d}}x}{x} \int_0^x \frac{{\mathrm{d}}z}{z} {\mathrm{disc_{{\bar{z}}=1}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \big|_{z \ll 1} \right. \\ \left. - \int_0^1 \frac{{\mathrm{d}}z}{z} \int_0^z \frac{{\mathrm{d}}x}{x} {\mathrm{disc_{{\bar{z}}=1}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \big|_{x \ll 1} \right)\,, \label{eq:Iregsimplified}\end{gathered}$$ where all integrals are regulated according to eq.
and ${\mathrm{disc_{{\bar{z}}=1}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \big|_{z \ll 1}$ and ${\mathrm{disc_{{\bar{z}}=1}}} [{{\Omega}_{\mathrm{h}}}(z,1/x)] \big|_{x \ll 1}$ refer to the aforementioned expansion of the integrand around small $z$ and $x$, respectively. The first integral in eq. (\[eq:Iregsimplified\]) reproduces all finite, cut-off independent terms in $I_A(\delta)$ of (\[eq:square\]), while the second and third ones reproduce, respectively, the finite terms in the two integrals in $I_B(\delta)$ in (\[eq:wedges\]); finally, given that no cut-off independent terms are produced by $I_C(\delta)$, it leaves no trace in (\[eq:Iregsimplified\]). The above calculation is biased towards the discontinuity with respect to ${\bar{z}}$ which is purely a matter of choice. A similar calculation can be performed to get an answer in terms of the discontinuity with respect to $z$ or a mixed expression that features both discontinuities. This integration method was further checked as follows. Given a wavefunction (or SVHPL) we expand around $z = {\bar{z}}= 0$ and change variables to the polar coordinates introduced above in eq. . The result is a sum of terms of the form $r^a y^b \log^c(r^2)$ with rational constant coefficients, where $a$, $b$ and $c$ are integers with $a,c \geq 0$. Integrating the azimuth over $[0,\, 2\pi]$ then removes all terms that explicitly depend on $y$, i.e. that have $b \neq 0$. Next, we determine the rational coefficients in terms of harmonic numbers[^6]. This enables us to perform the sum ad infinitum after we integrate term-by-term with respect to $r$.

Method II: final integration as an action of the Hamiltonian {#sec:method2}
------------------------------------------------------------

The previous method, albeit straightforward on paper, is computationally demanding at high loop orders as it requires extensive use of analytic continuations of SVHPLs to calculate discontinuities.
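As an aside, the azimuthal step of the series-expansion cross-check above rests on the elementary fact that only $y$-independent terms survive the angular integration, $\frac{1}{2\pi}\int_0^{2\pi} e^{ib\theta}\,{\mathrm{d}}\theta = \delta_{b0}$. A quick numerical confirmation (Python; the equally spaced sum is exact for these periodic integrands as long as $|b|$ is smaller than the number of sample points):

```python
import cmath, math

def angular_average(b, n=4096):
    # (1/2pi) * integral of e^{i b theta} over [0, 2pi], via an equally spaced sum
    # (exact up to rounding for integer |b| < n)
    return sum(cmath.exp(1j * b * (2 * math.pi * k / n)) for k in range(n)) / n

# only the y-independent (b = 0) terms survive the azimuthal integration
assert abs(angular_average(0) - 1) < 1e-10
for b in (-3, -1, 1, 2, 5):
    assert abs(angular_average(b)) < 1e-10
```

This is what reduces the expanded wavefunction to a single sum over $r^a \log^c(r^2)$ terms that can then be integrated over $r$ term by term.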
It turns out there is an easier way to perform the final integration, which lets us make use of our knowledge about the action of the Hamiltonian, established upon computing the wavefunction in section \[2d-bfkl\]. Consider the action of ${{\hat{H}}_{\mathrm{2d,i}}}$ on the wavefunction ${{\Omega}_{\mathrm{h}}}(1-z,1-{\bar{z}})$, $${{\hat{H}}_{\mathrm{2d,i}}}\,{{\Omega}_{\mathrm{h}}}(1-z,1-{\bar{z}}) = \frac{1}{4\pi} \int {\mathrm{d}}^2 w \; K(w,{\bar{w}},z,{\bar{z}})\, {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}})\,, \label{eq:method2int1}$$ and set $z = {\bar{z}}= 1$ under the integral. Using ${{\Omega}_{\mathrm{h}}}(0,0) = 0$ one gets on the right-hand side: $$\lim_{z,{\bar{z}}\to 1} K(w,{\bar{w}},z,{\bar{z}})\, {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}}) = K(w,{\bar{w}},1,1)\, {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}})\,, \label{eq:integrandat1}$$ with the kernel $$K(w,{\bar{w}},1,1) = \frac{1}{w{\bar{w}}(1-w)(1-{\bar{w}})} + \frac{1}{(1-w)(1-{\bar{w}})} - \frac{1}{w{\bar{w}}}\,,$$ *cf.* eq. . It thus follows that (\[eq:method2int1\]), taken in the limit $z,{\bar{z}}\to 1$, yields: $$\begin{aligned} \label{eq:method2int1.5} \begin{split} \hspace*{-10pt}&{{\hat{H}}_{\mathrm{2d,i}}}{{\Omega}_{\mathrm{h}}}(1-z,1-{\bar{z}}) \big|_{z,{\bar{z}}\to 1} =\\ &\hspace*{20pt} =\, \frac{1}{4\pi} \int {\mathrm{d}}^2 w \left[\frac{ {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}}) }{w{\bar{w}}(1-w)(1-{\bar{w}})} + \frac{ {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}}) }{(1-w)(1-{\bar{w}})} - \frac{ {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}}) }{w{\bar{w}}} \right] \\ &\hspace*{20pt} =\, \frac{1}{4\pi} \int \frac{{\mathrm{d}}^2 w}{w {\bar{w}}} \left[ {{\Omega}_{\mathrm{h}}}\left( \frac{1}{1-w},\frac{1}{1-{\bar{w}}} \right) + {{\Omega}_{\mathrm{h}}}(w,{\bar{w}}) - {{\Omega}_{\mathrm{h}}}(1-w,1-{\bar{w}}) \right]\,, \end{split}\end{aligned}$$ where in the second line we changed the integration variables in the first two terms – in the first using $w \rightarrow w/(w-1)$, and in the second using $w \rightarrow 1-w$, and then factored out a common denominator.
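The cancellation invoked next relies only on the inversion symmetry of the wavefunction. A minimal numeric sketch (Python; a toy inversion-symmetric function in place of ${{\Omega}_{\mathrm{h}}}$) verifies that the first and third integrands indeed coincide pointwise:

```python
import math

def omega_toy(w, wb):
    # toy stand-in for Omega_h: depends on the modulus only, through (log(w*wb))^2,
    # hence invariant under inversion: omega(1/w, 1/wb) = omega(w, wb)
    return math.log((w * wb).real) ** 2

for w in (0.3 + 0.4j, -0.2 + 0.7j, 1.5 - 0.3j):
    wb = w.conjugate()
    lhs = omega_toy(1/(1 - w), 1/(1 - wb))   # first term, after w -> w/(w-1)
    rhs = omega_toy(1 - w, 1 - wb)           # third term
    assert abs(lhs - rhs) < 1e-12            # they cancel in the bracket
```

For any such inversion-symmetric integrand the bracket therefore collapses to the single middle term.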
Given that the wavefunction is symmetric under inversion, ${{\Omega}_{\mathrm{h}}}(1/w,1/{\bar{w}}) = {{\Omega}_{\mathrm{h}}}(w,{\bar{w}})$, the first and third terms in the last equation cancel and we find $${{\hat{H}}_{\mathrm{2d,i}}}\,{{\Omega}_{\mathrm{h}}}(1-z,1-{\bar{z}}) \Big|_{z,{\bar{z}}\to 1} = \frac{1}{4\pi} \int \frac{{\mathrm{d}}^2 w}{w {\bar{w}}}\, {{\Omega}_{\mathrm{h}}}(w,{\bar{w}}) = I\,, \label{eq:method2int3}$$ which can be readily identified with the integral in eq.  which we are interested in computing. We thus conclude that the integral in eq. , representing the hard wavefunction contribution to the reduced amplitude, integrated in exactly two dimensions, may be calculated with the methods we developed for the computation of the two-dimensional wavefunction itself, described in section \[sec:wf2d\]. In practice one rewrites the wavefunction ${{\Omega}_{\mathrm{h}}}(1-z,1-{\bar{z}})$ in terms of SVHPLs of $z$ and ${\bar{z}}$, then applies the Hamiltonian by solving the corresponding differential equations, and finally evaluates the resulting expression at $z,{\bar{z}}= 1$. The last step produces the anticipated MZVs. Method I, described in section \[sec:method1\], and method II outlined here show perfect agreement when applied to the wavefunction. However, we emphasise that while the former may be applied on individual SVHPLs, the latter can only be applied to expressions which are symmetric under inversion of their arguments, *cf.* eqs. (\[eq:method2int1.5\]) and (\[eq:method2int3\]).

Results for the reduced amplitude {#sec:finiteamp}
---------------------------------

With the methods described in the previous sections it is straightforward to integrate the two-dimensional wavefunction and thereby compute the hard contribution to the amplitude, namely the finite terms not captured by the soft limit. Before presenting our results let us recall the number-theoretic observations we made about the amplitude at the end of section \[chap:bfkl\].
There, we claimed that the $\ell$-loop amplitude (divided by $B_0^\ell$) has two important number-theoretic properties: all of its terms have weight $\ell$ and there are no terms proportional to $\zeta_2$. We proved this statement for contributions from the soft limit in section \[soft\], see below eq. (\[eq:MsoftExpanded\_8\]). We now show that it holds also for the hard contributions. We begin by noting that the integrand in (\[eq:Idef\]) is expressed as a pure function of uniform weight, written as sums of products of HPLs. We note that both methods for the last integral, in sections \[sec:method1\] and \[sec:method2\], increase the weight of the functions they act on by one, before evaluating the result at $z = {\bar{z}}= 1$. In method I the action of the discontinuity first lowers the weight of its argument by one; this is then compensated by two consecutive integrations of a $d\log$ form, each raising the weight by one. Method II, in turn, applies the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ on the wavefunction after a variable transformation $z \to 1-z$. Changing the variables of an SVHPL obviously does not change its weight and the action of the Hamiltonian corresponds to integrating a first-order differential equation, which raises the weight of the operand by one. SVHPLs at $z = {\bar{z}}= 1$ evaluate to multiple zeta values (MZVs) of the same weight, as discussed following eqs. –. We remind the reader that the $(\ell-1)$-loop wavefunction consists of weight-$(\ell-1)$ SVHPLs and weight-$(\ell-1)$ products of SVHPLs and zeta numbers and conclude that the hard contributions to the $\ell$-loop amplitude therefore have uniform weight $\ell$. The absence of $\zeta_2$ in the hard component ${\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,\ell)}$ is readily explained by the fact that SVHPLs can, by construction, only ever evaluate to odd zeta numbers, for any argument. We start the discussion of the results by presenting the contributions that originate in the hard region.
They are the immediate result of the previous sections and, through eight loops, read \[reduced\_hard\_results\] $$\begin{aligned} {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,1)} &= 0, \label{eq:m1hardfinite} \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,2)} &= 0, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,3)} &= \frac{i\pi}{3!} \bigg\{ \frac{3\zeta_3}{4} {C_1}{C_2}\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,4)} &= 0, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,5)} &= \frac{i \pi}{5!} \bigg\{-\frac{5\zeta_5}{2} {C_1}^2 {C_2}^2 +\, \frac{45\zeta_5}{2} {C_1}{C_2}^3 \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,6)} &= \frac{i\pi}{6!} \bigg\{\frac{39\zeta_3^2}{16} {C_1}^3 {C_2}^2 - \frac{45\zeta_3^2}{2} {C_1}^2 {C_2}^3 +\, \frac{225\zeta_3^2}{2} {C_1}{C_2}^4 \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,7)} &= \frac{i\pi}{7!} \bigg\{ -\frac{2135\zeta_7}{256} {C_1}^4 {C_2}^2 + \frac{30135\zeta_7}{256} {C_1}^3 {C_2}^3 -\, \frac{20111\zeta_7}{32} {C_1}^2 {C_2}^4 \nn \\ &\hspace{4.0cm} + \frac{6111\zeta_7}{4} {C_1}{C_2}^5\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{{\rm NLL,h}}^{(+,8)} &= \frac{i\pi}{8!} \bigg\{ \frac{611\zeta_3 \zeta_5}{32} {C_1}^5 {C_2}^2 - \frac{643\zeta_3 \zeta_5}{2} {C_1}^4 {C_2}^3 + \frac{8597\zeta_3 \zeta_5}{4} {C_1}^3 {C_2}^4 \nn \\ &\hspace{2.0cm} - 7086\zeta_3 \zeta_5 \, {C_1}^2 {C_2}^5 +\, 13230 \zeta_3 \zeta_5 \, {C_1}{C_2}^6\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \label{eq:m8hardfinite} \end{aligned}$$ where we again used the shorthand notation for the colour factors, $C_1=2{C_A}-{{\mathbf{T}}_t^2}$ and $C_2={C_A}-{{\mathbf{T}}_t^2}$. One may observe the aforementioned homogeneous weight property and absence of even zeta numbers. 
In fact, considering the first eight loop orders, one may get the false impression that each order contains just a single zeta number (or product thereof) and that all of these are ordinary (single) zeta numbers. Both these features are artefacts of looking at low weights, and a much richer structure will be revealed at higher loop orders, as we discuss shortly. Given the identity in [eq. ]{}, i.e. ${\mathcal{H}}^{(+)}_{\rm NLL,h} = {\cal \hat M}^{(+)}_{\rm NLL,h}$, the result of (\[reduced\_hard\_results\]) is sufficient to compute the full infrared-renormalized amplitude ${\mathcal{H}}^{(+)}_{\rm NLL}$ by combining it with the soft contribution ${\mathcal{H}}^{(+)}_{\rm NLL,s}$ of eqs. (\[getH4\]) and (\[eq:HsoftExpanded\_1\]). This will be done in section \[sec:hardAmpl\] below. Before doing this let us combine the hard and soft components for the reduced amplitude itself, and comment further on some number-theoretic properties, as promised. According to eqs. (\[eq:redampSplit\]) and (\[Msh\]), the expressions for the full reduced amplitude through ${\cal O}(\epsilon^0)$ can be easily obtained order by order by summing the results for the soft amplitude provided in eqs. (\[eq:MsoftExpanded\_1\])–(\[eq:MsoftExpanded\_8\]) and those for the hard amplitude in eqs. (\[eq:m1hardfinite\])–(\[eq:m8hardfinite\]) above, where the former accounts for all infrared singularities plus some finite terms, and the latter for the remaining finite contributions.
We obtain $$\begin{aligned} \label{eq:m1finite} {\hat{\mathcal{M}}}_{\rm NLL}^{(1)} &= i\pi {B_{0}} \bigg\{ \frac{1}{2{\epsilon}} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL}^{(2)} &= i\pi \frac{{B_{0}}^2}{2} \bigg\{ \frac{{C_2}}{4 {\epsilon}^2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL}^{(3)} &= i\pi \frac{{B_{0}}^3}{3!} \bigg\{ {C_2}^2 \left( \frac{1}{8 {\epsilon}^3} - \frac{11\zeta_3}{4}\right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\hat{\mathcal{M}}}_{\rm NLL}^{(4)} &= i\pi \frac{{B_{0}}^4}{4!} \bigg\{ {C_1}{C_2}^2 \left( -\frac{\zeta_3}{8 {\epsilon}} - \frac{3\zeta_4}{16}\right) + {C_2}^3 \left( \frac{1}{16 {\epsilon}^4} + \frac{\zeta_3}{8{\epsilon}} + \frac{3\zeta_4}{16}\right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL}^{(5)} &= i\pi \frac{{B_{0}}^5}{5!} \bigg\{ {C_1}^2 {C_2}^2 \left( - \frac{5\zeta_5}{2} \right) + {C_1}{C_2}^3 \left( -\frac{\zeta_3}{16 {\epsilon}^2} - \frac{3\zeta_4}{32 {\epsilon}} + \frac{333\zeta_5}{16} \right) \\ &\hspace{1.0cm} +\, {C_2}^4 \left( \frac{1}{32 {\epsilon}^5} + \frac{\zeta_3}{16{\epsilon}^2} + \frac{3\zeta_4}{32 {\epsilon}} - \frac{717\zeta_5}{16} \right) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL}^{(6)} &= i\pi \frac{{B_{0}}^6}{6!} \bigg\{ {C_1}^3 {C_2}^2 \bigg( \frac{39 \zeta_3^2}{16} \bigg) + {C_1}^2 {C_2}^3 \bigg( -\frac{399 \zeta_3^2}{16} \bigg) \\ \nn &\hspace{1.0cm} +\, {C_1}{C_2}^4 \bigg(-\frac{\zeta_3}{32 {\epsilon}^3} - \frac{3\zeta_4}{64 {\epsilon}^2} - \frac{3\zeta_5}{32 {\epsilon}} + \frac{2637\zeta_3^2}{32} + \frac{5 \zeta_6}{32} \bigg) \\ &\hspace{1.0cm} +\, {C_2}^5 \bigg(\frac{1}{64 {\epsilon}^6} + \frac{\zeta_3}{32 {\epsilon}^3} + \frac{3 \zeta_4}{64 {\epsilon}^2} + \frac{3 \zeta_5}{32 {\epsilon}} - \frac{2879 \zeta_3^2}{32} + \frac{5 \zeta_6}{32} \bigg) 
\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\hat{\mathcal{M}}}_{\rm NLL}^{(7)} &= i\pi \frac{{B_{0}}^7}{7!} \bigg\{ {C_1}^4 {C_2}^2 \bigg( -\frac{2135\zeta_7}{256} \bigg) + {C_1}^3 {C_2}^3 \bigg( \frac{30135\zeta_7}{256} \bigg) + {C_1}^2 {C_2}^4 \bigg( \frac{\zeta_3^2}{32 {\epsilon}} + \frac{3\zeta_3 \zeta_4}{32} -\, \frac{20111\zeta_7}{32}\bigg) \\ \nn &\hspace{1.0cm} + {C_1}{C_2}^5 \bigg( - \frac{\zeta_3}{64 {\epsilon}^4} - \frac{3\zeta_4}{128 {\epsilon}^3} - \frac{3\zeta_5}{64 {\epsilon}^2} - \frac{3\zeta_3^2}{64 {\epsilon}} - \frac{5 \zeta_6}{64 {\epsilon}} - \frac{9 \zeta_3 \zeta_4}{64} +\, \frac{97047 \zeta_7}{64} \bigg) \\ \nn &\hspace{1.0cm} + {C_2}^6 \bigg( \frac{1}{128 {\epsilon}^7} + \frac{\zeta_3}{64 {\epsilon}^4} + \frac{3 \zeta_4}{128 {\epsilon}^3} + \frac{3 \zeta_5}{64 {\epsilon}^2} + \frac{\zeta_3^2}{64 {\epsilon}} + \frac{5 \zeta_6}{64 {\epsilon}} \\ &\hspace{1.0cm} +\, \frac{3 \zeta_3 \zeta_4}{64} - \frac{90711 \zeta_7}{64} \bigg) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{eq:m8finite} {\hat{\mathcal{M}}}_{\rm NLL}^{(8)} &= i\pi \frac{{B_{0}}^8}{8!} \bigg\{ {C_1}^5 {C_2}^2 \bigg( \frac{611\zeta_3 \zeta_5}{32} \bigg) +{C_1}^4 {C_2}^3 \bigg( - \frac{643\zeta_3 \zeta_5}{2} \bigg) +{C_1}^3 {C_2}^4 \bigg( \frac{8597\zeta_3 \zeta_5}{4} \bigg) \\ \nn &\hspace{1.0cm} +\, {C_1}^2 {C_2}^5 \bigg( \frac{\zeta_3^2}{64 {\epsilon}^2} + \frac{3\zeta_3 \zeta_4}{64 {\epsilon}} - \frac{228093 \zeta_3 \zeta_5}{32} + \frac{21\zeta_8}{512} \bigg) + {C_1}{C_2}^6 \bigg( -\frac{\zeta_3}{128 {\epsilon}^5} \\ \nn &\hspace{1.0cm} -\, \frac{3\zeta_4}{256 {\epsilon}^4} - \frac{3\zeta_5}{128 {\epsilon}^3} - \frac{3 \zeta_3^2}{128 {\epsilon}^2} - \frac{5 \zeta_6}{128 {\epsilon}^2} - \frac{9 \zeta_3 \zeta_4}{128 {\epsilon}} - \frac{9\zeta_7}{128 {\epsilon}} + \frac{749943 \zeta_3 \zeta_5}{64} \\ \nn &\hspace{1.0cm} -\, \frac{189 \zeta_8}{1024} \bigg) + {C_2}^7 \bigg( \frac{1}{256 {\epsilon}^8} + \frac{\zeta_3}{128
{\epsilon}^5} + \frac{3\zeta_4}{256 {\epsilon}^4} + \frac{3\zeta_5}{128 {\epsilon}^3} + \frac{\zeta_3^2}{128 {\epsilon}^2} + \frac{5 \zeta_6}{128 {\epsilon}^2} \\ &\hspace{1.0cm} + \frac{3 \zeta_3 \zeta_4}{128 {\epsilon}} + \frac{9 \zeta_7}{128 {\epsilon}} - \frac{483837 \zeta_3 \zeta_5}{64} + \frac{147 \zeta_8}{1024} \bigg) \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, .\end{aligned}$$ These results reproduce the one- to four-loop results of ref. [@Caron-Huot:2013fea], as well as our numerically-determined five-loop result in eq. . In the ancillary file `NLL-reduced-amplitude.txt` we provide the results for the soft, the hard and the full reduced amplitude up to 13 loops. Furthermore, the amplitude can now be calculated to any number of loops with the methods presented in sections \[soft\], \[2d-bfkl\] and \[amplitude\]. Similarly to the wavefunction at twelve loops (and above), the hard contributions to the amplitude (and thus the full amplitude itself) cannot be expressed in terms of ordinary zeta numbers beyond a certain loop order. In fact, most of what we discussed in the context of the wavefunction below eqs. – applies to the amplitude as well: either of the two methods presented in sections \[sec:method1\] and \[sec:method2\] requires an evaluation at $z = {\bar{z}}= 1$, and we hence expect the presence of (single-valued) multiple zeta values starting from weight eleven. Indeed, the eleven-loop amplitude features a term proportional to $g_{5,3,3}$, defined in eq. : $$\begin{gathered} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{11})}} \supset \frac{1}{102400}\bigg(-\frac{149}{6720} {C_1}^8 {C_2}^2 + \frac{26209}{60480} {C_1}^7 {C_2}^3 - \frac{14813}{4320} {C_1}^6 {C_2}^4 + \frac{210383}{15120} {C_1}^5 {C_2}^5 \\ - \frac{7549}{252} {C_1}^4 {C_2}^6 + \frac{39257}{1260} {C_1}^3 {C_2}^7 - 11 {C_1}^2 {C_2}^8 \bigg) \times g_{5,3,3}.
\label{eq:m11g533}\end{gathered}$$ Of course this term is entirely due to the hard component of the amplitude, as the soft one consists exclusively of ordinary (non-single-valued) zeta values, as discussed in section \[soft\]. At twelve loops the reduced amplitude again consists of ordinary zeta numbers $\zeta_n$, as there are no single-valued MZVs of weight 12. Single-valued MZVs then appear again in the thirteen-loop amplitude: $$\begin{gathered} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{13})}} \supset \frac{1}{2207744000}\bigg( \frac{5367943}{497664} {C_1}^{10} {C_2}^2 - \frac{32668315}{124416} {C_1}^9 {C_2}^3 + \frac{6876365071}{2488320 } {C_1}^8 {C_2}^4 \\ - \frac{10213439791}{622080} {C_1}^7 {C_2}^5 + \frac{37444840199}{622080} {C_1}^6 {C_2}^6 - \frac{10827306157}{77760} {C_1}^5 {C_2}^7 \\ + \frac{3841520891}{19440} {C_1}^4 {C_2}^8 - \frac{503783639}{3240} {C_1}^3 {C_2}^9 + 50459 {C_1}^2 {C_2}^{10} \bigg) \times g_{5,5,3}, \label{eq:m13g553}\end{gathered}$$ and $$\begin{gathered} {{{\hat{{\mathcal{M}}}}_{\mathrm{NLL}}}^{(+,{13})}} \supset \frac{1}{2649292800}\bigg( -\frac{1819475}{82944} {C_1}^{10} {C_2}^2 + \frac{5621717}{10368} {C_1}^9 {C_2}^3 - \frac{961202489}{165888} {C_1}^8 {C_2}^4 \\ + \frac{482408111}{13824} {C_1}^7 {C_2}^5 - \frac{5356152533}{41472} {C_1}^6 {C_2}^6 + \frac{1551101681}{5184} {C_1}^5 {C_2}^7 \\ - \frac{543921901}{1296} {C_1}^4 {C_2}^8 + \frac{69045265}{216} {C_1}^3 {C_2}^9 - 96967 {C_1}^2 {C_2}^{10} \bigg) \times g_{7,3,3}, \label{eq:m13g733}\end{gathered}$$ where the single-valued zeta numbers $g_{5,5,3}$ and $g_{7,3,3}$ have been defined in eqs.  and . The fact that the MZV terms in eqs. ,  and  appear in the eleven- and thirteen-loop amplitudes already rules out a simple all-order formula in terms of gamma functions for the reduced amplitude.
This stands in sharp contrast to the contributions associated with the soft limit, both singular and finite, which can be resummed to all orders by means of gamma functions, as we have seen in section \[soft\].

The infrared-renormalized amplitude\[sec:hardAmpl\]
---------------------------------------------------

We conclude this section by discussing the infrared-renormalized amplitude (or hard function), which is perhaps the most physically relevant quantity. According to [eq. ]{}, it is obtained by summing the soft component, given to all orders by the closed expression in [eq. ]{}, and the hard component, which according to [eq. ]{}, coincides with the hard component of the reduced amplitude, ${\cal \hat M}^{(+)}_{\rm NLL,h}$. The latter can be determined to any loop order by following the methods discussed in sections \[sec:method1\] and \[sec:method2\]; however, a closed-form expression cannot be obtained, in contrast to the soft part of the infrared-renormalized amplitude. Thus, in practice we limit ourselves to determining this amplitude to 13 loops, and the result is provided in the ancillary file `NLL-IR-renormalised-amplitude.txt`. Here we provide a sample of the result (with ${\cal H}$ defined in eq. (\[IRfacteq\]) and loop-expanded following eq.
(\[MhatEven\])), up to eight loops: \[eq:HExpanded\] $$\begin{aligned} \label{eq:HExpanded_1} {\cal H}_{\rm NLL}^{(1)} &= 0, \\ {\cal H}_{\rm NLL}^{(2)} &= 0, \\ {\cal H}_{\rm NLL}^{(3)} &= \frac{i\pi}{3!} \bigg\{-{C_2}^2 \frac{11\zeta_3}{4} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL}^{(4)} &= \frac{i\pi}{4!} \bigg\{ - C_A {C_2}^2 \frac{3\zeta_4}{16} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ {\cal H}_{\rm NLL}^{(5)} &= \frac{i\pi}{5!} \bigg\{- C_A^2 {C_2}^2 \frac{5\zeta_5}{2} + C_A {C_2}^3 \frac{253\zeta_5}{16} -{C_2}^4 \frac{53\zeta_5}{2} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\cal H}_{\rm NLL}^{(6)} &= \frac{i\pi}{6!} \bigg\{ C_A^3 {C_2}^2 \frac{39 \zeta_3^2}{16} - C_A^2 {C_2}^3 \frac{141 \zeta_3^2}{8} + C_A {C_2}^4 \bigg(\frac{1275\zeta_3^2}{32} -\, \frac{5\zeta_6}{32} \bigg) \\ & \hspace{2.0cm} - {C_2}^5 \frac{481 \zeta_3^2}{16} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn {\cal H}_{\rm NLL}^{(7)} &= \frac{i\pi}{7!} \bigg\{ - C_A^4 {C_2}^2 \frac{2135 \zeta_7}{256} + C_A^3 {C_2}^3 \frac{21595\zeta_7}{256} + C_A^2 {C_2}^4 \bigg( \frac{3\zeta_3 \zeta_4}{32} -\frac{83293\zeta_7}{256} \bigg) \\ & \hspace{2.0cm} +\, C_A {C_2}^5 \bigg( \frac{3\zeta_3 \zeta_4}{64} + \frac{148277 \zeta_7}{256} \bigg) - {C_2}^6 \frac{13443 \zeta_7}{32} \bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}, \\ \nn \label{eq:HExpanded_8} {\cal H}_{\rm NLL}^{(8)} &= \frac{i\pi}{8!} \bigg\{ C_A^5 {C_2}^2 \frac{611 \zeta_3 \zeta_5}{32} -C_A^4 {C_2}^3 \frac{7233 \zeta_3 \zeta_5}{32} +C_A^3 {C_2}^4 \frac{16867 \zeta_3 \zeta_5}{16} +C_A^2 {C_2}^5 \bigg( - \frac{77383 \zeta_3 \zeta_5}{32} \\ &\hspace{1.0cm} +\, \frac{21\zeta_8}{512} \bigg) + C_A {C_2}^6 \bigg( \frac{174033 \zeta_3 \zeta_5}{64} - \frac{105\zeta_8}{1024} \bigg) - {C_2}^7 \frac{35941\zeta_3 \zeta_5}{32}\bigg\} {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, .\end{aligned}$$ Given that 
${\mathcal{H}}^{(+)}_{\rm NLL,h} = {\cal \hat M}^{(+)}_{\rm NLL,h}$, the same number-theory properties discussed at the end of section \[sec:finiteamp\] apply to the infrared-renormalized amplitude as well. In particular, resummation in terms of gamma functions is excluded.

Numerical analysis and convergence properties {#numerics}
=============================================

The calculation developed in sections \[soft\], \[2d-bfkl\] and \[amplitude\] has allowed us to determine symmetries and general features of the wavefunction and the amplitude, as well as their exact analytic structure to fourteen and thirteen orders in perturbation theory, respectively. We are now interested in performing a numerical analysis, focusing on features which are not directly evident from the analytic expressions, such as the qualitative behaviour of the wavefunction, the relative size of the soft and hard contributions to the wavefunction, and the convergence properties of the infrared-renormalized amplitude as an expansion in $x \equiv L \,{\alpha_s}/\pi$.

Wavefunction
------------

Let us begin by analysing the wavefunction. Given its finiteness, we consider here the leading term in the ${\epsilon}$ expansion, i.e. the two-dimensional soft, hard and full wavefunctions, defined respectively in eqs. (\[Wsoft2d\]), (\[WhardTwod\]) and (\[eq:Htdaction\]).

![Soft, hard and full wavefunction in the complex plane ${\rm Re}(z)$, ${\rm Im}(z)$. Here we plot the singlet component of the wavefunction. The soft and the full wavefunction exhibit singularities at $z = 0$ and $z = \infty$, due to the $z \leftrightarrow 1/z$ symmetry (the latter is not visible in the plots). In addition, there is a singularity at $z = 1$, which appears also in the hard part of the wavefunction. Notice that the singularities at $z = 1$ partly cancel between the soft and hard wavefunctions, such that the full wavefunction exhibits a peak near $z=1$ which is markedly smaller than in either of the two separate contributions.[]{data-label="Wave1"}](./img/Omega1-1.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-2.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-3.pdf "fig:"){width="32.00000%"}\
![](./img/Omega1-4.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-5.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-6.pdf "fig:"){width="32.00000%"}\
![](./img/Omega1-7.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-8.pdf "fig:"){width="32.00000%"} ![](./img/Omega1-9.pdf "fig:"){width="32.00000%"}

As an example, in figures \[Wave1\] and \[Wave27\] we plot the coefficients $\Omega_{\rm s}^{(\rm 2d)(\ell)}$, $\Omega_{\rm h}^{(\rm 2d)(\ell)}$, $\Omega_{\rm 2d}^{(\ell)}$, at third, fourth and fifth order in perturbation theory. In these plots we fix $N_c=3$ and consider specific colour representations, namely the singlet and the 27 representation, such that the Casimir operator in the $t$-channel evaluates to $$\begin{aligned} \label{Tt2num} \begin{array}{ll} \text{singlet}:& {\mathbf T}_{t}^2 \, {\cal M}^{[1]} = 0, \\ \text{27 representation}:\quad\qquad& {\mathbf T}_{t}^2 \, {\cal M}^{[27]} = 2(N_c+1)\, {\cal M}^{[27]}=8\, {\cal M}^{[27]}. \end{array}\end{aligned}$$ We plot the wavefunction in the complex $z$ plane, for $\bar z = z^*$. We observe that the soft and full wavefunctions exhibit peaks at $z = 0$; these are associated with the soft limit. Of course, by the $z \leftrightarrow 1/z$ symmetry discussed in sections \[sec:wf2d\] and \[sec:asalphabet\], there is an identical singularity at $z = \infty$ (which is not visible in the patch of the complex plane shown in the plot). Given the way we separated the soft and hard components, the two-dimensional hard wavefunction is strictly zero in these soft limits (see the discussion following eq. (\[eq:Iregdef\])). All components of the wavefunction have singularities at $z = 1$.
The $z=1$ singularity represents rather different physics, where both Reggeons are *hard*, namely $k^2, (p-k)^2\gg p^2$. It is interesting to note that the singularity at $z = 1$ is always of opposite sign between the soft and hard wavefunctions, such that these contributions cancel to a large extent in the full wavefunction. This observation already allows us to conclude that the soft approximation, although convenient for calculation purposes, does not provide a good numerical approximation for the full wavefunction away from the soft limit. Focusing now on the full wavefunction, the singular behaviour near $z = 0$ and $z = 1$ at $\ell$ loop order can be described respectively by the leading logarithms in the two limits, $\sim c_\ell \log^\ell (z \bar z)$ and $\sim c'_\ell \log^\ell\big[1/(1-z)^2(1-\bar z)^2\big]$, where both the magnitude and the sign of the coefficients $c_\ell$ and $c_{\ell}'$ depend on the colour representation considered. Concerning the limit $z = 0$, the asymptotic behaviour is entirely determined by the soft wavefunction, given that $\Omega_{\rm h}^{\rm 2d}(0,0) = 0$. We obtain the coefficients $c_{\ell}$ by expanding the soft function in [eq. ]{} (compare eqs. (\[eq:wffullsoftresummed\]) and (\[Well-1-ansatz-sym\])). Taking into account [eq. ]{}, we find $$\label{asymptotic0} {\Omega}^{(\ell)}(z,\bar z)\Big|_{z \to 0} \,\simeq\, c_\ell \, \log^\ell (z \bar z)\,, \qquad c_\ell = \frac{1}{\ell!} \left( \frac{{({C_A}-{{\mathbf{T}}_t^2})}}{2} \right)^{\ell}.$$ Given that ${({C_A}-{{\mathbf{T}}_t^2})}= C_A = 3$ for the singlet, while ${({C_A}-{{\mathbf{T}}_t^2})}= -C_A -2 = -5$ for the 27 representation, this explains the sign-oscillating behaviour of the wavefunction near $z = \bar z = 0$ for the singlet, and the constant sign for the 27 representation, which can be seen also in figures \[Wave1\] and \[Wave27\]. Determining the coefficients $c_{\ell}'$ is less trivial, given that near $z = 1$ also $\Omega_{\rm h}^{\rm 2d}$ contributes. An analysis of the asymptotic behaviour up to the 14th order allows us to deduce the pattern and extrapolate.
We find: $$\label{asymptotic1} {\Omega}^{(\ell)}(z,\bar z)\Big|_{z \to 1} \,\simeq\, c'_\ell \, \log^\ell\left[\frac{1}{(1-z)^2(1-\bar z)^2}\right], \qquad c'_\ell = \frac{1}{(\ell!)^2} \left(2-\frac{2{C_A}}{{{\mathbf{T}}_t^2}}\right)_{\ell} \left(-\frac{{{\mathbf{T}}_t^2}}{4}\right)^{\ell},$$ where $(a)_\ell = a(a+1)\cdots(a+\ell-1)$ denotes the Pochhammer symbol; for the singlet, where ${{\mathbf{T}}_t^2}= 0$, the limit of this expression yields $c'_\ell = ({C_A}/2)^\ell/(\ell!)^2$. Once again, we see that the series has alternating or constant signs depending on the colour representation. Specifically, it is sign-alternating for the 27 representation, and it has constant sign for the singlet. Notice that both asymptotic expansions of the wavefunction, near $z=0$ and $z = 1$, can be resummed using [eq. ]{}. We obtain $$\begin{aligned} \label{asymptotic-resum0} {\Omega}(p,k)|_{z \to 0} &= \frac{{\alpha_s}}{\pi} \, (z \bar z)^{\frac{{\alpha_s}}{2 \pi} \, L \, {({C_A}-{{\mathbf{T}}_t^2})}}, \\ \label{asymptotic-resum1} {\Omega}(p,k)|_{z \to 1} &= \frac{{\alpha_s}}{\pi} \, \left\{ \begin{array}{ll} {}_1F_1\left(2 - \frac{2C_A}{{{\mathbf{T}}_t^2}}, 1, -\frac{{\alpha_s}}{4\pi}\, L \, {{\mathbf{T}}_t^2}\log\left( \frac{1}{(1-z)^2(1-\bar z)^2}\right)\right), & \mbox{if } {{\mathbf{T}}_t^2}\neq 0, \\ {}_0F_1\left(1,\frac{{\alpha_s}}{2\pi}\, L \, C_A \log\left( \frac{1}{(1-z)^2(1-\bar z)^2}\right)\right), & \mbox{if } {{\mathbf{T}}_t^2}= 0, \end{array} \right.\end{aligned}$$ where ${}_0F_1$ and ${}_1F_1$ are the confluent hypergeometric limit function and Kummer’s confluent hypergeometric function, respectively. These resummed expressions are valid only in the leading logarithmic approximation in $z{\bar{z}}$ and $(1-z)(1-{\bar{z}})$, respectively. The generalization of (\[asymptotic-resum0\]) to include subleading logarithms of $z{\bar{z}}$ has been given in (\[eq:wffullsoftresummed\]), while a closed-form generalization of (\[asymptotic-resum1\]) is not yet known.

![Soft, hard and full wavefunction in the complex plane ${\rm Re}(z)$, ${\rm Im}(z)$. Here we plot the component corresponding to the $27$ colour representation. []{data-label="Wave27"}](./img/Omega27-1.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-2.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-3.pdf "fig:"){width="32.00000%"}\
![](./img/Omega27-4.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-5.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-6.pdf "fig:"){width="32.00000%"}\
![](./img/Omega27-7.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-8.pdf "fig:"){width="32.00000%"} ![](./img/Omega27-9.pdf "fig:"){width="32.00000%"}

Convergence of the loop expansion of the infrared-renormalized amplitude
------------------------------------------------------------------------

![Partial sums of the *soft* component of the infrared-renormalized amplitude coefficients $\Xi_{\rm NLL,s}^{(+,\ell)}$, up to 15th order, for the singlet (upper plot) and the 27 colour representation (lower plot). The dashed vertical line represents the radius of convergence, $R$, determined from the resummed expression.[]{data-label="Radius-Soft"}](./img/Radius1s.pdf "fig:"){width="70.00000%"}\
![](./img/Radius27s.pdf "fig:"){width="70.00000%"}

![Partial sums of the *hard* component of the infrared-renormalized amplitude coefficients $\Xi_{\rm NLL,h}^{(+,\ell)}$, up to 13th order, for the singlet (upper plot) and the 27 colour representation (lower plot). The dashed vertical line represents the radius of convergence, $R$, determined by the pole closest to $x = 0$, using the method of Padé approximants.[]{data-label="Radius-Hard"}](./img/Radius1h.pdf "fig:"){width="70.00000%"}\
![](./img/Radius27h.pdf "fig:"){width="70.00000%"}

![Partial sums of the infrared-renormalized amplitude coefficients $\Xi_{\rm NLL}^{(+,\ell)}$, up to 13th order, for the singlet (upper plot) and the 27 colour representation (lower plot). The dashed vertical line represents the radius of convergence, $R$, determined by the pole closest to $x = 0$, using the method of Padé approximants.[]{data-label="Radius-Full"}](./img/Radius1.pdf "fig:"){width="70.00000%"}\
![](./img/Radius27.pdf "fig:"){width="70.00000%"}

Having computed finite contributions to the imaginary part of the amplitude to high loop orders, we are in a position to investigate a very interesting theoretical question, namely the convergence properties of the perturbative expansion. Of course, this is done here at fixed logarithmic accuracy, namely considering the amplitude as a function of $x \equiv L\, {\alpha_s}/\pi$. The high-energy limit adds an interesting twist to the question of convergence: within the a priori “perturbative regime” where $\alpha_s(\mu^2)$ is small (recall that $\mu^2$ is naturally determined by the momentum transfer $-t$, and we assume $s\gg-t\gg\Lambda_{\rm QCD}^2$), high-energy logarithms can be arbitrarily large, so the effective expansion parameter $x \equiv L\, {\alpha_s}/\pi$ can nevertheless become large.
Thus, while there is no obvious reason why perturbation theory should break down, the question arises whether we can extend the validity of the calculation to large values of the expansion parameter $x$. In [@Caron-Huot:2017zfo] we studied the infrared-divergent part of the amplitude in detail, and proved that these corrections exponentiate in terms of the soft anomalous dimension. We determined the latter to all orders in perturbation theory, and showed that it is an entire function, having *an infinite radius of convergence* in $x$. We are now in a position to study the convergence of the infrared-renormalized amplitude ${\cal H}_{\rm NLL}^{(+)}$, which we determined analytically to the 13th order in section \[amplitude\]. For convenience, we introduce the amplitude $\Xi$ and its coefficients $\Xi^{(\ell)}$, defined through $$\begin{aligned} \label{MhatEvenB} \begin{split} {\cal H}_{\rm NLL}^{(+)} & = \frac{i \pi}{L} \, \Xi_{\rm NLL}^{(+)} \, {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, \\ & = \frac{1}{L} \sum_{\ell=1}^\infty x^\ell {\cal H}_{\rm NLL}^{(+,\ell)} = \frac{i \pi}{L} \sum_{\ell=1}^\infty x^\ell \, \Xi_{\rm NLL}^{(+,\ell)} \, {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, , \end{split}\end{aligned}$$ such that $$\label{MhatEvenC} {\cal H}_{\rm NLL}^{(+,\ell)} \,=\, i \pi \, \Xi_{\rm NLL}^{(+,\ell)} \, {{\mathbf{T}}_{s-u}^2}{{\mathcal{M}}^{\mathrm{(tree)}}}\, ,$$ cf. [eqs.  and ]{}. In the following we will use equivalent definitions also for the soft and hard parts of the infrared-renormalized amplitude coefficients. Numerical expressions for the coefficients of the infrared-renormalized amplitude $\Xi_{\rm NLL}^{(+,\ell)}$ up to thirteen loops can be obtained starting from the analytic expressions given in the ancillary files[^7], using the relations in [eq. ]{}, and converting the multiple zeta values there into decimal numbers.
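To illustrate this conversion with a short sketch: the low-order coefficients quoted below follow from the analytic results in eqs. (\[eq:HExpanded\_1\])–(\[eq:HExpanded\_8\]) combined with the loop-by-loop relation between ${\cal H}_{\rm NLL}^{(+,\ell)}$ and $\Xi_{\rm NLL}^{(+,\ell)}$. The only extra input is the numerical value of the colour factors; purely for this illustration we take $C_A = 3$ and assume $C_2 = {({C_A}-{{\mathbf{T}}_t^2})}$, i.e. $C_2 = 3$ for the singlet and $C_2 = -5$ for the 27 representation (an identification consistent with the decimal coefficients below, though not spelled out in this excerpt).

```python
from math import pi

# Standard zeta values.
zeta3 = 1.2020569031595943
zeta4 = pi**4 / 90  # zeta(4)

CA = 3.0  # N_c = 3

def xi3(C2):
    # Three loops, from H^(3) in eq. (eq:HExpanded): Xi^(3) = -(11*zeta3/4) * C2^2 / 3!
    return -(11 * zeta3 / 4) * C2**2 / 6

def xi4(C2):
    # Four loops, from H^(4) in eq. (eq:HExpanded): Xi^(4) = -(3*zeta4/16) * C_A * C2^2 / 4!
    return -(3 * zeta4 / 16) * CA * C2**2 / 24

# Singlet (C2 = 3) and 27 representation (C2 = -5), assuming C2 = C_A - T_t^2.
print(xi3(3.0), xi4(3.0))    # compare with the x^3 and x^4 coefficients of the singlet series
print(xi3(-5.0), xi4(-5.0))  # compare with the 27-representation series
```

Evaluating these reproduces the $x^3$ and $x^4$ coefficients of the singlet and 27-representation series below, up to the four-significant-digit rounding used in the text.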
We arrive at $$\begin{aligned} \label{Xi1} \Xi_{\rm NLL}^{(+)[1]} &= -4.959\, x^3 - 0.2283\, x^4 - 9.230\, x^5 - 2.690\, x^6 - 13.13\, x^7 + 1.696\, x^8 \nonumber \\ &\quad - 20.44\, x^9 + 16.54\, x^{10} - 35.99\, x^{11} + 46.06\, x^{12} - 74.05\, x^{13} + {\cal O}(x^{14}), \\ \label{Xi27} \Xi_{\rm NLL}^{(+)[27]} &= -13.77\, x^3 - 0.6342\, x^4 - 199.2\, x^5 + 381.1\, x^6 - 2826\, x^7 + 9380\, x^8 \nonumber \\ &\quad - 46488\, x^9 + 180393\, x^{10} - 797524\, x^{11} + 3.239 \cdot 10^{6}\, x^{12} - 1.374 \cdot 10^{7}\, x^{13} \nonumber \\ &\quad + {\cal O}(x^{14}). \end{aligned}$$ We also consider the soft and hard contributions to the infrared-renormalized amplitude ${\cal H}_{\rm NLL}^{(+)}$, defined by the two terms in [eq. ]{}. Defining the soft $\Xi_{\rm NLL,s}^{(+)}$ and hard $\Xi_{\rm NLL,h}^{(+)}$ coefficients in analogy to [eqs.  and ]{}, we easily obtain numerical expressions for the singlet and the 27 colour representation, as in [eqs.  and ]{}: $$\begin{aligned} \label{Xis1} \Xi_{\rm NLL,s}^{(+)[1]} &= -7.663\, x^3 - 0.2283\, x^4 - 33.73\, x^5 - 78.04\, x^6 - 210.0\, x^7 - 726.9\, x^8 \nonumber \\ &\quad - 2023\, x^9 - 6237\, x^{10} - 18605\, x^{11} - 55822\, x^{12} - 167566\, x^{13} + {\cal O}(x^{14}), \\ \label{Xis27} \Xi_{\rm NLL,s}^{(+)[27]} &= -15.28\, x^3 - 0.6342\, x^4 - 245.7\, x^5 + 641.8\, x^6 - 4445\, x^7 + 19735\, x^8 \nonumber \\ &\quad - 103863\, x^9 + 507855\, x^{10} - 2.566 \cdot 10^{6}\, x^{11} + 1.277 \cdot 10^{7}\, x^{12} \nonumber \\ &\quad - 6.398 \cdot 10^{7}\, x^{13} + {\cal O}(x^{14}), \end{aligned}$$ for the soft part of the infrared-renormalized amplitude, and $$\begin{aligned} \label{Xih1} \Xi_{\rm NLL,h}^{(+)[1]} &= 2.705\, x^3 + 24.50\, x^5 + 75.34\, x^6 + 196.9\, x^7 + 728.6\, x^8 + 2003\, x^9 \nonumber \\ &\quad + 6254\, x^{10} + 18570\, x^{11} + 55869\, x^{12} + 167492\, x^{13} + {\cal O}(x^{14}), \\ \label{Xih27} \Xi_{\rm NLL,h}^{(+)[27]} &= 1.503\, x^3 + 46.45\, x^5 - 260.6\, x^6 + 1619\, x^7 - 10356\, x^8 + 57375\, x^9 \nonumber \\ &\quad - 327462\, x^{10} + 1.768 \cdot 10^{6}\, x^{11} - 9.527 \cdot 10^{6}\, x^{12} \nonumber \\ &\quad + 5.024 \cdot 10^{7}\, x^{13} + {\cal O}(x^{14}), \end{aligned}$$ for its hard part.
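Before the more systematic analysis that follows, a quick ratio test on the soft singlet coefficients in eq. (\[Xis1\]) already hints at geometric growth, i.e. a finite radius of convergence. A minimal sketch, using the four-digit coefficients exactly as quoted:

```python
# Soft singlet coefficients a_l of x^l from eq. (Xis1), l = 3..13,
# rounded to the four significant digits quoted in the text.
a = {3: -7.663, 4: -0.2283, 5: -33.73, 6: -78.04, 7: -210.0, 8: -726.9,
     9: -2023.0, 10: -6237.0, 11: -18605.0, 12: -55822.0, 13: -167566.0}

# For a nearest singularity at |x| = R, the ratios |a_{l+1}/a_l| tend to 1/R.
ratios = [abs(a[l + 1] / a[l]) for l in range(8, 13)]
R_estimate = 1.0 / ratios[-1]
print([round(r, 3) for r in ratios], round(R_estimate, 4))
```

The ratios settle around 3, giving $R \simeq 0.333$, which anticipates the exact soft-singlet radius of convergence derived below from the resummed expression.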
We plot the partial sums of the soft, hard and full infrared-renormalized amplitude as a function of $x$, respectively, in figures \[Radius-Soft\], \[Radius-Hard\] and \[Radius-Full\]. Considering the all-order resummed expression for the soft component of the infrared-renormalized amplitude in [eq. ]{}, we can immediately conclude that it exhibits a finite radius of convergence. The radius of convergence can be identified with the position of the pole closest to the origin in the complex $x$ plane, which we denote $R$ in what follows. Inspecting [eq. ]{}, and in particular the explicit expression for $\hat \Delta_{\rm NLL}^{(+)}$ in [eq. ]{}, we see that the soft part of the infrared-renormalized amplitude has poles when the argument of one of the gamma functions in the numerator equals zero or a negative integer. In general poles appear for both positive and negative $x$: this is an important point we shall return to below. The pole closest to the origin is determined by $1- {({C_A}-{{\mathbf{T}}_t^2})}\, x = 0$, which in turn determines the radius of convergence of the soft part of the infrared-renormalized amplitude to be $R_{\rm s}= 1/{({C_A}-{{\mathbf{T}}_t^2})}$ (the subscript “s” refers to the soft part of the infrared-renormalized amplitude). This corresponds to $R_{\rm s} = 1/3 \simeq 0.333$ for the colour-singlet infrared-renormalized amplitude, and $R_{\rm s} = -1/5 = -0.2$ for the one in the 27 representation. The expected qualitative picture, namely convergence of the partial sums as a function of $\ell$ for any $|x|<|R_{\rm s}|$ and divergence beyond that point, is indeed confirmed upon inspecting figure \[Radius-Soft\]. For the hard contribution to the infrared-renormalized amplitude, and thus also for the complete one, we do not have an all-order expression. Nevertheless, information on the radius of convergence can be extracted from the perturbative expansion by constructing Padé approximants of the infrared-renormalized amplitude.
More specifically, we may use the partial sum of the infrared-renormalized amplitude at any order $\ell$ to construct a rational function of $x$, which reproduces the partial sum upon expansion. Here we choose to use Padé approximants with a second-order denominator[^8]:
$$\Xi_{\rm NLL}^{(+)}\Big|_{\ell} \,=\, \frac{\sum_{k=3}^{\ell-2} a_k\, x^k}{1 + b_1\, x + b_2\, x^2}\,. \label{padeDef}$$
With this definition the Padé approximant has two poles, at $x_{\pm} = \big(-b_1 \pm \sqrt{b_1^2 - 4 b_2}\big)/(2 b_2)$. The pole closest to the origin provides a prediction for the radius of convergence of the series: $R = {\rm min}\{ x_{-}, x_+ \}$. Of course, this prediction is expected to be reliable only upon considering sufficiently high orders, where the series approaches its asymptotic regime. The stability of the deduced value for the radius of convergence with respect to the order $\ell$ provides an indication of whether the asymptotic regime has been reached. Before describing the results, a further comment is due regarding the sign of $R$. Strictly speaking, the radius of convergence is the absolute value of $R$. Here, however, we are interested in keeping track also of the sign of the nearest pole, which indicates whether the series is (asymptotically) of constant sign or oscillating. We shall see that both scenarios are realised. We test this method on the soft part of the infrared-renormalized amplitude, for which the all-order result is known, as discussed above. The results for the singlet and the 27 representation for $N_c=3$ are shown in Table \[tableXminSoft\]. We see that in both cases the pole closest to $x = 0$ ($x^{[1]}_{\rm s,-} = 0.333$ for the singlet and $x^{[27]}_{\rm s,+} = -0.200$ for the 27 representation) approximates very well the exact radius of convergence, $R_s = 1/{({C_A}-{{\mathbf{T}}_t^2})}$.
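The construction just described can be sketched in a few lines: matching two consecutive series coefficients fixes the quadratic denominator, whose roots are the poles $x_\pm$. A sketch under our own conventions (tested on a toy series with known poles, not on the amplitude itself):

```python
import math

def pade_poles(c, m):
    """Fit the denominator 1 + b1*x + b2*x^2 of a Pade approximant to the
    series sum_k c[k] x^k by requiring that the orders x^(m+1) and x^(m+2)
    of S(x)*(1 + b1 x + b2 x^2) vanish; return the two poles
    x_pm = (-b1 +- sqrt(b1^2 - 4 b2)) / (2 b2)."""
    det = c[m] * c[m] - c[m + 1] * c[m - 1]
    b1 = (c[m + 2] * c[m - 1] - c[m + 1] * c[m]) / det
    b2 = (c[m + 1] * c[m + 1] - c[m + 2] * c[m]) / det
    disc = math.sqrt(b1 * b1 - 4.0 * b2)
    return (-b1 - disc) / (2.0 * b2), (-b1 + disc) / (2.0 * b2)

# toy check: 1/((1-2x)(1+x)) = 1 + x + 3x^2 + 5x^3 + 11x^4 + 21x^5 + ...
# has poles at x = 1/2 and x = -1, which the fit recovers exactly
poles = sorted(pade_poles([1, 1, 3, 5, 11, 21, 43, 85], 3))
assert abs(poles[0] + 1.0) < 1e-9 and abs(poles[1] - 0.5) < 1e-9
```

For a series that is exactly rational with quadratic denominator, as in the toy check, the fit is exact at any order; for the amplitude the poles stabilise only as $\ell$ grows.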
| $\ell$ | singlet | 27 |
|--------|---------|-----|
| 10 | $x^{[1]}_{\rm s,-} = 0.335, \quad x^{[1]}_{\rm s,+} = -0.703$ | $x^{[27]}_{\rm s,-} = 0.472, \quad x^{[27]}_{\rm s,+} = -0.200$ |
| 11 | $x^{[1]}_{\rm s,-} = 0.333, \quad x^{[1]}_{\rm s,+} = -1.053$ | $x^{[27]}_{\rm s,-} = 0.446, \quad x^{[27]}_{\rm s,+} = -0.200$ |
| 12 | $x^{[1]}_{\rm s,-} = 0.334, \quad x^{[1]}_{\rm s,+} = -1.866$ | $x^{[27]}_{\rm s,-} = 0.428, \quad x^{[27]}_{\rm s,+} = -0.200$ |
| 13 | $x^{[1]}_{\rm s,-} = 0.333, \quad x^{[1]}_{\rm s,+} = 3.911$ | $x^{[27]}_{\rm s,-} = 0.419, \quad x^{[27]}_{\rm s,+} = -0.200$ |

  : Table summarising the values of $x=\frac{\alpha_s}{\pi}L$ at the poles of the Padé Approximants in eq. (\[padeDef\]), considering the *soft* component of the infrared-renormalized amplitude (indicated by the subscript ${\rm s}$) at orders $\ell=10$ through 13 for the singlet and the 27 representation.[]{data-label="tableXminSoft"}

We thus proceed and apply the same method to the hard component of the infrared-renormalized amplitude. The results are summarised in table \[tableXminHard\].

| $\ell$ | singlet | 27 |
|--------|---------|-----|
| 10 | $x^{[1]}_{\rm h,-} = 0.333, \quad x^{[1]}_{\rm h,+} = -0.753$ | $x^{[27]}_{\rm h,-} = 0.822, \quad x^{[27]}_{\rm h,+} = -0.176$ |
| 11 | $x^{[1]}_{\rm h,-} = 0.332, \quad x^{[1]}_{\rm h,+} = -0.856$ | $x^{[27]}_{\rm h,-} = 0.096, \quad x^{[27]}_{\rm h,+} = -0.179$ |
| 12 | $x^{[1]}_{\rm h,-} = 0.333, \quad x^{[1]}_{\rm h,+} = -1.258$ | $x^{[27]}_{\rm h,-} = -4.392, \quad x^{[27]}_{\rm h,+} = -0.186$ |
| 13 | $x^{[1]}_{\rm h,-} = 0.333, \quad x^{[1]}_{\rm h,+} = -1.244$ | $x^{[27]}_{\rm h,-} = -0.02, \quad x^{[27]}_{\rm h,+} = -0.185$ |

  : Table summarising the values of $x=\frac{\alpha_s}{\pi}L$ at the poles of the Padé Approximants in eq.
(\[padeDef\]), considering the *hard* component of the infrared-renormalized amplitude (indicated by the subscript ${\rm h}$) at orders $\ell=10$ through 13 for the singlet and the 27 representation.[]{data-label="tableXminHard"}

We observe that for the singlet there is a highly stable nearest pole at $x^{[1]}_{\rm h,-} = 0.333$. For the 27 representation, in turn, the stable pole at $x^{[27]}_{\rm h,+} \simeq -0.19$ is not always the one closest to the origin, due to the wide fluctuations of $x^{[27]}_{\rm h,-}$. Finally, for the complete infrared-renormalized amplitude we summarise the results in table \[tableXminFull\]. Here we find highly stable results: $x^{[1]}_{+} \simeq -0.66$ and $x^{[27]}_{+} \simeq -0.24$. We conclude that Padé approximants based on partial sums of orders $\ell=10$ through $13$ yield fairly stable predictions for the poles. Naturally, one still finds some fluctuations, which can be attributed to subasymptotic effects, but an overall consistent picture emerges, and we can deduce an approximate radius of convergence in each case from the position of the poles.

| $\ell$ | singlet | 27 |
|--------|---------|-----|
| 10 | $x^{[1]}_{-} = 1.092, \quad x^{[1]}_{+} = -0.624$ | $x^{[27]}_{-} = 0.393, \quad x^{[27]}_{+} = -0.236$ |
| 11 | $x^{[1]}_{-} = 1.266, \quad x^{[1]}_{+} = -0.666$ | $x^{[27]}_{-} = 0.437, \quad x^{[27]}_{+} = -0.237$ |
| 12 | $x^{[1]}_{-} = 1.311, \quad x^{[1]}_{+} = -0.661$ | $x^{[27]}_{-} = 0.367, \quad x^{[27]}_{+} = -0.238$ |
| 13 | $x^{[1]}_{-} = 1.466, \quad x^{[1]}_{+} = -0.669$ | $x^{[27]}_{-} = 0.461, \quad x^{[27]}_{+} = -0.239$ |

  : Table summarising the values of $x=\frac{\alpha_s}{\pi}L$ at the poles of the Padé Approximants in eq.
(\[padeDef\]), considering the *full amplitude* at orders $\ell=10$ through 13 for the singlet and the 27 representation.[]{data-label="tableXminFull"}

The final results of this analysis are summarised in table \[tableXmin\], where we compare the results for the soft part of the infrared-renormalized amplitude, deduced from the resummed result (which are highly consistent with the Padé approach), with those for the hard component and the complete infrared-renormalized amplitude, which are both based solely on the Padé analysis. In the table we also provide an interpretation of the radius of convergence for the full infrared-renormalized amplitude in terms of the analytic dependence on the colour factors $C_1$ and $C_2$; this will be explained below.

| Representation | singlet | 27 |
|----------------|---------|-----|
| Colour factors | $C_1=6$, $C_2=3$ | $C_1=-2$, $C_2=-5$ |
| Soft ${\cal H}_{\rm NLL,s}^{(+)}$ | $1/{C_2}=1/3$ | $1/{C_2}=-1/5$ |
| Hard ${\cal H}_{\rm NLL,h}^{(+)}$ | $\sim 0.333$ | $\sim-0.19$ |
| Full ${\cal H}_{\rm NLL}^{(+)}$ | $-2/{C_2}\simeq -0.666$ | $1/\big({C_2}- \frac{3}{8}{C_1}\big) \simeq -0.235$ |

  : Summary table for the radius of convergence $R$ of the expansion of the infrared-renormalized amplitude in powers of $x=\frac{\alpha_s}{\pi}L$, determined by identifying the pole closest to $x= 0$ using Padé approximants. We use the shorthand notation $C_1 = {(2{C_A}-{{\mathbf{T}}_t^2})}$ and $C_2= {({C_A}-{{\mathbf{T}}_t^2})}$.[]{data-label="tableXmin"}

The numerical results in the table indicate that the radius of convergence of the full infrared-renormalized amplitude is larger than that of both its soft and hard components.
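The entries of the summary table can be reproduced directly from the quoted pole positions: $1/C_2$ for the soft part, and the two full-amplitude candidates in the last row. A minimal sketch (the helper name `radii` is ours):

```python
def radii(C1, C2):
    """Pole positions entering the summary table: the exact soft-sector
    radius 1/C_2, and the nearest of the two full-amplitude candidates
    1/(C_2 - (3/8) C_1) and -2/C_2."""
    R_soft = 1.0 / C2
    x_a = 1.0 / (C2 - 0.375 * C1)
    x_b = -2.0 / C2
    R_full = min((x_a, x_b), key=abs)  # pole closest to the origin
    return R_soft, R_full

# singlet: C1 = 6, C2 = 3  ->  R_soft = 1/3, R_full = -2/3 ~ -0.666
assert radii(6.0, 3.0) == (1.0 / 3.0, -2.0 / 3.0)
# 27 representation: C1 = -2, C2 = -5  ->  R_soft = -1/5, R_full ~ -0.235
Rs27, Rf27 = radii(-2.0, -5.0)
assert abs(Rs27 - (-0.2)) < 1e-12 and abs(Rf27 - (-0.235)) < 1e-3
```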
Indeed, better convergence is clearly observed looking at successive orders in the full hard function in figure \[Radius-Full\] compared to its soft and hard components in figures \[Radius-Soft\] and \[Radius-Hard\], respectively. The interpretation is clear: the pole that limits the convergence of the soft component of the infrared-renormalized amplitude in the resummed expression, [eq. ]{}, exactly cancels against a similar divergence in the hard component, hence the similar values of $R$ for the soft and hard components in table \[tableXmin\]. Upon cancelling the leading divergence, a subleading pole is exposed, which becomes the dominant obstruction for convergence of the full amplitude. This is of course another indication that the separation of the *finite* ${\cal O}(\epsilon^0)$ terms between the soft and hard regimes is arbitrary; we have already seen that the soft wavefunction cannot approximate the full one away from the soft limit in figures \[Wave1\] and \[Wave27\]. Even more interesting is the observation that the *sign* of the first pole, $R$, which indicates whether the series is asymptotically sign-oscillating ($R<0$) or of constant signs ($R>0$), is negative for the full infrared-renormalized amplitude, while it may be either positive or negative for the separate soft and hard components, as can be seen in table \[tableXmin\]. Upon resumming the perturbative expansion of the full infrared-renormalized amplitude, one expects a smooth extrapolation to high energies when taking the centre-of-mass energy large compared to the momentum transfer, $s\gg-t\gg\Lambda_{\rm QCD}^2$. 
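The role played by the sign of the nearest pole can be illustrated with a toy model (purely illustrative, not the amplitude; the value $r=0.66$ merely mimics the size of the singlet radius):

```python
# Toy model: a function whose nearest singularity sits on the negative real
# axis, f(x) = 1/(1 + x/r), has a sign-alternating Taylor series yet
# extrapolates smoothly to arbitrarily large positive x.
r = 0.66
coeffs = [(-1.0 / r) ** n for n in range(20)]   # f(x) = sum_n coeffs[n] x^n
assert all(coeffs[n] * coeffs[n + 1] < 0 for n in range(19))  # alternating

x = 0.3   # inside the radius of convergence, |x| < r
partial = sum(cn * x**n for n, cn in enumerate(coeffs))
assert abs(partial - 1.0 / (1.0 + x / r)) < 1e-3

# the resummed f stays finite for arbitrarily large positive x,
# even though its series diverges there
assert 0.0 < 1.0 / (1.0 + 100.0 / r) < 1.0
```

A nearest pole on the *positive* axis would instead obstruct any smooth extrapolation of the series beyond $x=R$, which is the physical argument made above.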
Given that the expansion parameter, $x=\frac{\alpha_s(-t)}{\pi} \log\frac{s}{-t}$, gets large (and positive) in this limit, smooth extrapolation (of the resummed expression) to high energies can only be consistent with a finite radius of convergence if the series is sign-oscillating, or, put in stronger terms, if *all* the singularities of the resummed infrared-renormalized amplitude are located away from the positive real axis of $x$. In the example of the soft part of the hard function, singularities appear on the real axis at both positive and negative values. We expect that this would not happen for the full hard function. In other words, the singularities present in the resummed soft part of the hard function at positive $x$ *must* all cancel against similar divergences in the resummed hard part of the hard function. This explains the observations above regarding the radius of convergence of the full hard function versus its soft and hard components, but it applies more generally, also to poles further away from the origin. ![The radius of convergence of the full infrared-renormalized amplitude as a function of the colour operators. In these plots the dots represent the value of $1/R$, for the corresponding value of $C_1={(2{C_A}-{{\mathbf{T}}_t^2})}$ and $C_2={({C_A}-{{\mathbf{T}}_t^2})}$, based on the Padé approximant analysis for $\ell=11$ and $\ell=12$, as indicated in the plots. We superimpose two straight lines, which determine the dependence of $1/R$ on the colour operators, as summarised by eq. (\[Rfull\_exact\]).[]{data-label="RR"}](./img/HRR1.pdf "fig:"){width="75.00000%"}\ ![The radius of convergence of the full infrared-renormalized amplitude as a function of the colour operators. In these plots the dots represent the value of $1/R$, for the corresponding value of $C_1={(2{C_A}-{{\mathbf{T}}_t^2})}$ and $C_2={({C_A}-{{\mathbf{T}}_t^2})}$, based on the Padé approximant analysis for $\ell=11$ and $\ell=12$, as indicated in the plots.
We superimpose two straight lines, which determine the dependence of $1/R$ on the colour operators, as summarised by eq. (\[Rfull\_exact\]).[]{data-label="RR"}](./img/HRR27.pdf "fig:"){width="75.00000%"} To complete the analysis of the radius of convergence of the full hard function we would now like to interpret the numerical values of $R$ obtained in the Padé-based analysis in terms of the colour structures $C_1={(2{C_A}-{{\mathbf{T}}_t^2})}$ and $C_2={({C_A}-{{\mathbf{T}}_t^2})}$[^9]. We start by recalling that for the soft part of the hard function, $R_{ s}= 1/C_2$ depends on $C_2$ only. The Padé-based analysis of the 27 colour representation indicates that this is not so for the full hard function. To obtain an analytic expression it proves useful to depart from the actual values of the colour factors corresponding to physically-relevant representations, and simply repeat the Padé approximant analysis for a range of values of $C_2$ at fixed $C_1$. To this end we plot in figure \[RR\] the numerical values of $R$ emerging from Padé approximants, as a function of $C_2$, for fixed values of $C_1$ (we pick $C_1=6$ and $C_1=-2$, corresponding to the singlet and the 27 representation, for easy reference). More precisely, we display in figure \[RR\] the value of $1/R$ rather than $R$ itself, which makes it easy to recognise the exact linear behaviour. Based on this analysis we deduce the radius of convergence of the full amplitude to be: $$\begin{aligned} \begin{split} \hspace*{-30pt}R=\min \left\{x_a,x_b \right\}\qquad \text{with}&\qquad x_a = \frac{1}{C_2 - \frac{3}{8}C_1} = \frac{1}{{({C_A}-{{\mathbf{T}}_t^2})}- \frac{3}{8}{(2{C_A}-{{\mathbf{T}}_t^2})}}, \\ &\qquad x_b = - \frac{2}{C_2}=- \frac{2}{{({C_A}-{{\mathbf{T}}_t^2})}}. \label{Rfull_exact} \end{split}\end{aligned}$$ Returning to the physically-relevant representations, $x_b$ ends up being closest to the origin ($|x_b|<|x_a|$) for the singlet representation, where $C_1^{[1]} = 6$, $C_2^{[1]} = 3$.
One then obtains from eq. (\[Rfull\_exact\]) a radius of convergence of $R=x^{[1]}_b \simeq -0.6667$, in accordance with the result in table \[tableXmin\]. In turn, $x_a$ gives the pole closest to the origin for the 27 representation, namely for $C_1^{[27]} = -2$, $C_2^{[27]} = -5$, where one obtains from eq. (\[Rfull\_exact\]) $x^{[27]}_a \simeq -0.235$, again in accordance with table \[tableXmin\].

Conclusions {#conclusion}
===========

In this paper we completed the perturbative calculation of $2\to 2$ partonic amplitudes at next-to-leading logarithmic accuracy in the Regge limit to high loop orders. We focused on the previously-unknown even-signature terms, corresponding to the imaginary part of the amplitude, which vanishes at leading-logarithmic accuracy. Building upon our previous work in ref. [@Caron-Huot:2017zfo], where we determined the infrared singularities, we now computed the finite corrections to the hard amplitude ${\cal H}$, which remain after stripping off, or renormalizing, these singularities. We believe that these results — the soft anomalous dimension and hard functions — together exhaust the physical information contained in these partonic amplitudes. Our results are based on the well-established BFKL evolution equation in momentum space. Since the even amplitude vanishes at the leading-logarithmic order, only the leading-order BFKL evolution kernel was needed in our calculation, and the final formulae apply equally to quark and gluon amplitudes. We exploited the fact, observed in [@Caron-Huot:2017zfo], that the two-Reggeon wavefunction is finite. While it is unknown how to diagonalise the BFKL Hamiltonian for arbitrary colour structures beyond the planar limit, we were able to solve the BFKL equation iteratively, treating complementary regions using two different approaches. The first relies on the soft approximation, keeping the dimensional regularization parameter finite – the same method we used in ref.
[@Caron-Huot:2017zfo] to determine the singularities of the amplitude – while the second relies instead on a computation in exactly two transverse dimensions, which captures general hard momentum configurations where both Reggeons carry momenta of the order of the total momentum transfer $p^2=-t$. As shown in eqs. (\[getH2\]) and (\[eq:redampSplit\]), each separated part of the BFKL-motivated reduced amplitude need only be calculated to order ${\cal O}(\epsilon^0)$, and by carefully recombining them we obtained the renormalized amplitude in eq. (\[eq:HExpanded\]). The result passes several consistency checks and agrees with a direct computation in dimensional regularization, which we performed through five loops. The central new computation in this paper is the iterative solution of the BFKL equation in two dimensions, leading to a simple algorithm to compute the two-Reggeon wavefunction to any order, presented in section \[2d-bfkl\]. The result lives inside a very rigid space of functions: the $\ell$-loop wavefunction is a linear combination of weight-$\ell$ single-valued harmonic polylogarithms (SVHPLs) of $z$ and ${\bar{z}}$ with rational coefficients. The algorithm is formulated as an operation on SVHPLs, and it works by producing differential equations in the holomorphic variable $z$ that can be directly integrated in terms of HPLs of $z$, to which we subsequently apply the single-value map to recover the actual wavefunction in terms of SVHPLs of $z$ and ${\bar{z}}$. The hard contribution to the infrared-renormalized amplitude ${\cal H}$ computed using the two-dimensional method admits a rather complex structure, and its resummation goes beyond the scope of the present paper. This is to be contrasted with the soft contribution, which we could resum to all orders in terms of gamma functions, eq. (\[ReducedAmpNLLresum2B\]), including singular as well as finite corrections.
The number-theoretical content of the hard contribution is interesting: by construction it is restricted to single-valued multiple zeta values (see eqs. (\[reduced\_hard\_results\]) and (\[eq:m11g533\])). The presence of multiple zeta values – which make their first appearance at weight 11, involving a single-valued version [@Brown:2013gia; @Schnetz:2013hqa] of $\zeta_{5,3,3}$ – precludes resummation in terms of gamma functions, so the resummed result would clearly be of a different nature to that of the soft contribution. Having obtained explicit analytic expressions for both the two-Reggeon wavefunction and the infrared-renormalized amplitude ${\cal H}$ to high loop order, it is straightforward to study the results numerically. In section \[numerics\] we examine a couple of aspects, first considering the wavefunction and then the infrared-renormalized amplitude. The wavefunction manifests highly regular behaviour as a function of the Reggeon kinematic variables, except in three specific limits. Two of these correspond to the soft limits, $z,{\bar{z}}\to 0$ and $z,{\bar{z}}\to \infty$, while the third, $z,{\bar{z}}\to 1$, corresponds to the limit of large internal momentum. The former are described analytically by the soft wavefunction in eq. (\[eq:wffullsoftresummed\]), and by definition the hard wavefunction vanishes there, while a peak at large momentum is present in both the soft and hard wavefunctions. Interestingly, there is a significant – but incomplete – cancellation between these two, leading to a more modest peak in the full wavefunction. While this phenomenon does not affect the validity of our results, it would be interesting to independently predict this limit of the wavefunction (extending eq. (\[asymptotic-resum1\])), which could help find simpler numerical approximations. Considering the infrared-renormalized amplitude, we focused on one interesting problem, namely the convergence of the perturbative expansion.
We find that the ${\cal O}(\epsilon^0)$ infrared-renormalized amplitude has a finite radius of convergence in the expansion parameter $x=L\alpha_s/\pi$. For the soft contribution, where we have a resummed analytic expression, eq. (\[ReducedAmpNLLresum2\]), this radius of convergence can readily be identified as the first pole of a gamma function, generating asymptotic behaviour $\sim (x(C_A-{{\mathbf{T}}_t^2}))^{\ell}$ at high orders, $\ell\to \infty$. The soft contribution is however not physically meaningful on its own, and the complete infrared-renormalized amplitude features a larger radius of convergence, as shown in figure \[Radius-Full\] (compare with figures \[Radius-Soft\] and \[Radius-Hard\] for the separate soft and hard components). Estimating the convergence radius using Padé approximants for different colour channels, we deduced an empirical formula for the radius of convergence $R$ of the full amplitude in terms of $C_A$ and ${{\mathbf{T}}_t^2}$, eq. (\[Rfull\_exact\]) above. Interestingly, the pole closest to the origin is always on the negative real axis, leading to an asymptotic behaviour of alternating signs. This matches our physical expectation that the resummed expression should smoothly extrapolate to high energies, corresponding to large positive values of $x$, and is similar to what was observed previously for non-global logarithms in ref. [@Larkoski:2016zzc]. It remains for future work to understand the true high-energy (large $x=L\alpha_s/\pi$) behaviour. Let us conclude with a brief summary of the state-of-the-art knowledge of partonic $2\to 2$ scattering amplitudes in the Regge limit. With the completion of this work these amplitudes are known in full to NLL accuracy. 
The signature-odd part, corresponding to the exchange of a single Reggeized gluon, was already known, and is given by a *Regge pole* (\[Mreal\]) with two-loop corrections to the trajectory $\alpha^{(2)}_g(p^2)$, and suitable impact factors (the former, in particular, was calculated in [@Fadin:1995xg; @Fadin:1996tb; @Fadin:1995km; @Blumlein:1998ib]; it can also be extracted from two-loop calculations of $2\to 2$ scattering amplitudes [@DelDuca:2001gu]). The signature-even part, corresponding to a pair of Reggeized gluons, which generate a *Regge cut*, was determined here. The next frontier is therefore NNLL accuracy. In the signature-odd sector the first step was taken in ref. [@Caron-Huot:2017fxr], where the non-linear Balitsky-JIMWLK equation was used to compute the Regge cut contribution generated through the evolution of three Reggeized gluons and their mixing with one Reggeon through three loops. It is very interesting, and indeed – using the techniques we developed in the present paper – technically feasible, to compute higher-loop corrections in this tower of logarithms. NNLL corrections in the signature-even sector are in turn simpler and can be deduced from linear BFKL evolution with an NLO kernel [@Fadin:1995xg; @Fadin:1996tb; @Fadin:1995km; @Blumlein:1998ib], supplemented by suitable impact factors. At N$^3$LL one expects new phenomena such as the mixing of two and four Reggeon states, which can again be computed using the Balitsky-JIMWLK equation. Finally, beyond their immediate relevance to the study of the high-energy limit, the results in this paper can be used to check future multi-loop calculations, and ultimately serve as “boundary data” in a bootstrap programme in which amplitudes are deduced using knowledge of the space of functions, analytic properties, symmetries and special kinematic limits. Such a programme was highly successful in the context of ${\cal N}=4$ supersymmetric Yang-Mills theory, see e.g.
[@Caron-Huot:2016owq; @Dixon:2016nkn], but also, more recently, in the context of the singularity structure of gauge theories including QCD [@Almelid:2017qju]. In both cases, the high-energy limit served as crucial input.

We would like to thank Lorenzo Magnea, Jenni Smillie and Claude Duhr for useful discussions. We would also like to thank Gudrun Heinrich and Stephan Jahn for helpful feedback on the numerical evaluations of integrals with pySecDec. EG’s research is supported by the STFC Consolidated Grant ‘Particle Physics at the Higgs Centre’, JR’s research was supported by the Walter Nimmo and Walter Scott PhD studentship, and LV’s research is supported by Fellini - Fellowship for Innovation at INFN, funded by the European Union’s Horizon 2020 research programme under the Marie Skłodowska-Curie Cofund Action, grant agreement no. 754496. EG and JR would like to thank the McGill Physics department for their kind hospitality. EG, SCH and LV would like to thank the GGI, Florence, for support during the programme ‘Amplitudes in the LHC era’ in Autumn 2018. EG thanks the Simons Foundation for support as a Simons Visiting Scientist in GGI in Autumn 2018 and the CERN theory department for hospitality as a Scientific Associate in 2019. This research is also supported by the National Science and Engineering Council of Canada, the Canada Research Chair program, and the Fonds de Recherche du Québec - Nature et Technologies.

Harmonic polylogarithms {#app:hpls}
=======================

Harmonic polylogarithms (HPLs) [@Remiddi:1999ew] extend the natural logarithm $\log z$, with $z \in \mathbb{C}$, to nested integrals. Similarly to the well-known polylogarithms ${\mathrm{Li}}_n(z)$, they are defined recursively, namely
$$H_{0,\sigma}(z) = \int_0^z \frac{{\mathrm{d}}t}{t}\, H_{\sigma}(t)\,, \qquad H_{1,\sigma}(z) = \int_0^z \frac{{\mathrm{d}}t}{1-t}\, H_{\sigma}(t)\,, \label{eq:hpldef}$$
where $\sigma$ is a “word” of any length made from the letters[^10] $\{0,1\}$. The number of indices of an HPL $H_{\sigma}(z)$ is called the *weight* of the function. By means of eq.
it corresponds to the number of nested integrals. The recursion is closed by the weight-1 identities
$$H_{0}(z) = \log z\,, \qquad H_{1}(z) = -\log(1-z)\,. \label{eq:hpl1def}$$
HPLs form a shuffle algebra and thus obey shuffle product identities
$$H_{\rho}(z)\, H_{\sigma}(z) = \sum_{\tau \in \rho \shuffle \sigma} H_{\tau}(z)\,, \label{eq:hplshuffle}$$
where $\rho \shuffle \sigma$ denotes the shuffle of the words $\rho$ and $\sigma$. The indices of an HPL may be shortened by means of a collapsed notation; one replaces strings of zeros followed by a one according to
$$\underbrace{0,\dots,0}_{n}\,,1 \;\to\; n+1\,, \label{eq:collapsedn}$$
for example $H_{0,1,0,0,1,1}(z) \to H_{2,3,1}(z)$. In the collapsed notation the number of indices is referred to as the *depth* of the function (while their sum now equals the weight). Depending on the context it may be useful to view the HPLs as nested sums. One commonly used definition is
$$H_{\sigma}(z) = \sum_{j=1}^{\infty} z^j\, Z_j(\sigma) \label{eq:hplseriesdef}$$
with
$$Z_j(a,\sigma) = \frac{1}{j^{a}}\sum_{i=2}^{j} Z_{i-1}(\sigma)\,, \qquad Z_j(1) = 1/j\,, \label{eq:zsumdef}$$
where we assume the collapsed notation. Note that the aforementioned depth is equal to the number of nested sums. The Taylor series of an HPL, defined by eq. , whose rightmost index is non-zero, is given by eq.  with . Trailing zeros in the indices of an HPL point to logarithmic divergences at $z=0$. The $\log z = H_0(z)$ terms can be exposed using the shuffle algebra; one considers
$$H_{\sigma}(z)\, H_0(z) = H_{\sigma,0}(z) + \ldots + H_{0,\sigma}(z)$$
and solves for $H_{\sigma,0}(z)$. This procedure can be applied recursively until all trailing zeros are removed. Hence, HPLs can always be written as a series in $z$ and $\log z$. For arguments between 0 and 1, HPLs yield real values. They have branch cuts on the real axis where $z \in [1,\infty)$ and are thus multi-valued functions.

Single-valued harmonic polylogarithms {#app:svhpls}
=====================================

Single-valued harmonic polylogarithms (SVHPLs) [@Brown:2004ugm] are the class of all branch-cut-free, single-valued combinations of HPLs. Their construction is somewhat involved and we will only provide a short summary here.
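Before going further, the nested-sum representation of plain HPLs given in appendix \[app:hpls\] can be checked numerically. A minimal sketch (the $1/j^{a}$ prefactor in the recursion is our reading of the garbling-prone source, chosen so that the weight-1 identity and the shuffle algebra are reproduced):

```python
import math

def Z(j, word):
    """Nested-sum coefficients Z_j(sigma) in collapsed notation, assuming
    Z_j(a, sigma) = j**(-a) * sum_{i=2..j} Z_{i-1}(sigma),
    with base case Z_j(1) = 1/j."""
    if word == (1,):
        return 1.0 / j
    a, rest = word[0], word[1:]
    return sum(Z(i - 1, rest) for i in range(2, j + 1)) / j**a

def hpl(word, z, nmax=300):
    # truncated series H_sigma(z) = sum_j z^j Z_j(sigma)
    return sum(z**j * Z(j, word) for j in range(1, nmax + 1))

z = 0.5
# weight-1 identity: H_1(z) = -log(1-z)
assert abs(hpl((1,), z) + math.log(1.0 - z)) < 1e-10
# shuffle identity: H_1(z) * H_1(z) = 2 H_{1,1}(z)
assert abs(hpl((1,), z) ** 2 - 2.0 * hpl((1, 1), z)) < 1e-10
```

The second assertion checks $H_{1,1}(z)=\tfrac12\log^2(1-z)$, the simplest instance of the shuffle product.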
Further details can be found in e.g. refs. [@Pennington:2012zj; @Dixon:2012yy; @DelDuca:2013lma]. SVHPLs are functions of a complex variable $z$ and its complex conjugate ${\bar{z}}$. They correspond to the linear combinations of $H_\sigma(z) H_{\sigma'}({\bar{z}})$ that solve
$$\frac{\partial}{\partial z}\,{\mathcal{L}}_{0,\sigma}(z,{\bar{z}}) = \frac{{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{z}\,, \qquad \frac{\partial}{\partial z}\,{\mathcal{L}}_{1,\sigma}(z,{\bar{z}}) = \frac{{\mathcal{L}}_{\sigma}(z,{\bar{z}})}{1-z}\,, \label{eq:ddzL}$$
and obey the boundary conditions [@Pennington:2012zj]
$${\mathcal{L}}_{\varnothing}(z,{\bar{z}}) = 1\,, \qquad {\mathcal{L}}_{0_n}(z,{\bar{z}}) = \frac{\log^n (z {\bar{z}})}{n!}\,, \qquad \lim_{z \to 0} {\mathcal{L}}_{\sigma \neq 0_n}(z,{\bar{z}}) = 0\,.$$
For the explicit construction one typically defines two alphabets $\{x_0,x_1\}$ and $\{y_0,y_1\}$ and the corresponding sets of all words $X^*$ and $Y^*$ formed from the respective alphabet. The letters of the former alphabet directly translate to $\{0,1\}$ when they appear as the indices of a (SV)HPL. The letters $y_0,y_1$ are related to $x_0,x_1$ via $$\begin{aligned} y_0 &= x_0 \label{eq:y0x0} \\ \tilde Z(y_0,y_1) y_1 \tilde Z(y_0,y_1)^{-1} &= Z(x_0,x_1)^{-1} x_1 Z(x_0,x_1) \label{eq:y1x1}\end{aligned}$$ where $Z$ is the so-called Drinfeld associator. It is defined as the generating series
$$Z(x_0,x_1) = \sum_{\sigma \in X^*} H_{\sigma}(1)\, \sigma\,, \qquad \tilde Z(y_0,y_1) = \sum_{\sigma \in Y^*} H_{\phi(\tilde \sigma)}(1)\, \sigma\,, \label{eq:drinfeld}$$
where the “tilde” operation reverses words and $\phi$ maps $y_i \to x_i$. The values of the HPLs at $z = 1$ in the definition are regularised by the shuffle algebra. Eq.  can be solved iteratively for $y_1$. The SVHPLs can then be extracted from the product of another two generating series
$$\sum_{\sigma \in X^*} {\mathcal{L}}_{\sigma}(z,{\bar{z}})\, \sigma = L_X(z)\, L_Y({\bar{z}})\,, \qquad \text{where} \qquad L_X(z) = \sum_{\sigma \in X^*} H_{\sigma}(z)\, \sigma\,, \qquad L_Y({\bar{z}}) = \sum_{\sigma \in Y^*} H_{\phi(\tilde\sigma)}({\bar{z}})\, \sigma\,,$$
with “tilde” and $\phi$ defined below eq. . SVHPLs obey the same shuffle product as HPLs, namely
$${\mathcal{L}}_{\rho}(z,{\bar{z}})\, {\mathcal{L}}_{\sigma}(z,{\bar{z}}) = \sum_{\tau \in \rho \shuffle \sigma} {\mathcal{L}}_{\tau}(z,{\bar{z}})\,. \label{svhplshuffle}$$

Holomorphic part and single-value map {#app:holomorphicpart}
-------------------------------------

SVHPLs are uniquely fixed by their holomorphic part (i.e. their functional dependence on $z$) and the requirement of single-valuedness.
We define the holomorphic part of a function $\psi(z,{\bar{z}})$ as the limit
$$\psi^{(h)}(z) = \psi(z,{\bar{z}}) \big|_{{\bar{z}} \to 0}\,. \label{eq:holopartdef}$$
For a given linear combination of SVHPLs, taking this limit simply amounts to replacing ${\mathcal{L}}_\sigma(z,{\bar{z}}) \to H_\sigma(z)$. The dependence on ${\bar{z}}$ is reconstructed by the single-value map
$$\mathbf{s}\left( \psi^{(h)}(z) \right) = \psi(z,{\bar{z}})\,, \label{eq:sdef}$$
which is discussed in detail in refs. [@Brown:2013gia; @DelDuca:2016lad]. Again, we restrict ourselves here to stating the (obvious) replacement rule $H_\sigma(z) \to {\mathcal{L}}_\sigma(z,{\bar{z}})$, which generates the corresponding single-valued expression from a linear combination of HPLs of $z$. As the action of the Hamiltonian ${{\hat{H}}_{\mathrm{2d,i}}}$ removes constant terms from the wavefunction prior to integration, we shall not discuss this aspect in the context of eqs.  and  here. The interested reader is referred to the above references.

Variable transformations {#app:svhplvariables}
------------------------

SVHPLs obey relations under certain variable transformations. For the most part they are, in some sense, the *same* relations that apply to HPLs, due to the single-value map discussed above in appendix \[app:holomorphicpart\]. While the latter are much better documented (for an overview we recommend ref. [@Maitre:2005uu]), we struggled to find a comprehensive list for SVHPLs, which motivated this appendix. In section \[sec:asalphabet\] we transform $z \to 1/z$ and $z \leftrightarrow {\bar{z}}$ to account for the symmetries of the two-dimensional wavefunction. In addition, we consider $z \to 1-z$ in section \[sec:method2\] to facilitate the “last integration”. Let us discuss the latter transformation in detail. At the level of HPLs it is straightforward to find relations under $z \to 1-z$. Effectively, the transformation moves the lower limit of the integral definition from zero to one.
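A familiar weight-two instance of such a $z \to 1-z$ relation is Euler's reflection formula for the dilogarithm, $H_{0,1}(z) = {\mathrm{Li}}_2(z)$, which can be checked numerically (a sketch using a truncated series, not the paper's code):

```python
import math

def li2(z, nmax=2000):
    # truncated series Li_2(z) = sum_j z^j / j^2, valid for |z| < 1
    return sum(z**j / j**2 for j in range(1, nmax + 1))

# Euler's reflection formula, the weight-two z -> 1-z relation:
#   Li_2(z) + Li_2(1-z) + log(z) log(1-z) = zeta(2) = pi^2/6
z = 0.3
lhs = li2(z) + li2(1.0 - z) + math.log(z) * math.log(1.0 - z)
assert abs(lhs - math.pi**2 / 6.0) < 1e-6
```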
Consider the weight-$w$ HPL with argument $1-z$, $$\begin{aligned} H_{0,a_2,\dots,a_w}(1-z) &= \int_0^{1-z} \frac{{\mathrm{d}}t}{t} H_{a_2,\dots,a_w}(t) \nn \\ &= \int_0^1 \frac{{\mathrm{d}}t}{t} H_{a_2,\dots,a_w}(t) - \int_{1-z}^1 \frac{{\mathrm{d}}t}{t} H_{a_2,\dots,a_w}(t) \nn \\ &= H_{0,a_2,\dots,a_w}(1) - \int_0^z \frac{{\mathrm{d}}t}{1-t} H_{a_2,\dots,a_w}(1-t) \label{eq:hpl0zerotoone}\end{aligned}$$ and $$\begin{aligned} H_{1,a_2,\dots,a_w}(1-z) &= \int_0^{1-z} \frac{{\mathrm{d}}t}{1-t} H_{a_2,\dots,a_w}(t) \nn \\ &= \int_0^1 \frac{{\mathrm{d}}t}{1-t} H_{a_2,\dots,a_w}(t) - \int_{1-z}^1 \frac{{\mathrm{d}}t}{1-t} H_{a_2,\dots,a_w}(t) \nn \\ &= H_{1,a_2,\dots,a_w}(1) - \int_0^z \frac{{\mathrm{d}}t}{t} H_{a_2,\dots,a_w}(1-t) \label{eq:hpl1zerotoone}\end{aligned}$$ with
$$H_{0}(1-z) = -H_{1}(z)\,, \qquad H_{1}(1-z) = -H_{0}(z)\,.$$
Since the HPLs inside the integrals in eqs.  and  are of weight $w-1$, this defines a recursive prescription for writing any HPL of $1-z$ in terms of HPLs of $z$. By means of the holomorphic part and the single-value map, see appendix \[app:holomorphicpart\], these relations can be applied to SVHPLs. However, it is also possible to solve the recursion and write the answer directly as a sum. We find
$${\mathcal{L}}_{a_1,\dots,a_w}(1-z,1-{\bar{z}}) = \sum_{j=0}^{w} (-1)^j\, {\mathcal{L}}_{\tilde a_1,\dots,\tilde a_j}(z,{\bar{z}})\, {\mathcal{L}}_{a_{j+1},\dots,a_w}(1,1)\,,$$
with the “$\sim$” operation swapping the indices $0 \leftrightarrow 1$. Similarly, one can derive identities for the transformation $z \to 1/z$, ${\bar{z}}\to 1/{\bar{z}}$. Again, the recursion can be solved, and the resulting formula is simple yet slightly awkward to write out. To do so we define $n_0(\sigma)$ ($n_1(\sigma)$) to count the number of zeros (ones) in the indices $\sigma$, and $\hat s_{1 \to 0+1}$ to split ${\mathcal{L}}_\sigma(z,{\bar{z}})$ into a sum of $2^{n_1(\sigma)}$ SVHPLs according to the index rule $1 \to 0 + 1$.
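The splitting map just defined acts purely on index words, and can be sketched in code (the helper name `s_split` is ours):

```python
from itertools import product

def s_split(word, letter=1, repl=(0, 1)):
    """Index-splitting map s_{1 -> 0+1}: replace every occurrence of
    `letter` in the word independently by each letter in `repl`,
    yielding 2**(number of occurrences) words."""
    choices = [repl if a == letter else (a,) for a in word]
    return [tuple(w) for w in product(*choices)]

words = s_split((1, 0, 0, 1, 0))
assert len(words) == 4  # 2**n_1 with n_1 = 2
assert set(words) == {(0, 0, 0, 0, 0), (0, 0, 0, 1, 0),
                      (1, 0, 0, 0, 0), (1, 0, 0, 1, 0)}
```

The companion map $\hat s_{0 \to 0+1}$ used below is obtained with `letter=0`.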
For example, $$\begin{gathered} \hat s_{1 \to 0+1} \left[ {\mathcal{L}}_{1,0,0,1,0}(z,{\bar{z}}) \right] = \\ {\mathcal{L}}_{0,0,0,0,0}(z,{\bar{z}}) + {\mathcal{L}}_{0,0,0,1,0}(z,{\bar{z}}) + {\mathcal{L}}_{1,0,0,0,0}(z,{\bar{z}}) + {\mathcal{L}}_{1,0,0,1,0}(z,{\bar{z}}).\end{gathered}$$ Then $${\mathcal{L}}_{a_1,\dots,a_w} \left( \frac{1}{z},\frac{1}{{\bar{z}}}\right) = \sum_{j=0}^{w} (-1)^{n_0(a_1,\dots,a_j)}\, \hat s_{1 \to 0+1}\left[ {\mathcal{L}}_{a_1,\dots,a_j}(z,{\bar{z}}) \right] {\mathcal{L}}_{a_{j+1},\dots,a_w}(\infty,\infty).$$ The values of SVHPLs at $z,{\bar{z}}\to \infty$ are related to the values at $z,{\bar{z}}= 1$ by yet another transformation: $z \to z/(z-1)$, $${\mathcal{L}}_{a_1,\dots,a_w} \left( \infty, \infty \right) = (-1)^{n_1(a_1,\dots,a_w)}\, \hat s_{0 \to 0+1}\left[ {\mathcal{L}}_{a_1,\dots,a_w}(1,1) \right]$$ with $\hat s_{0 \to 0+1}$ defined like $\hat s_{1 \to 0+1}$ but based on the index rule $0 \to 0 + 1$. This last step is not strictly necessary, but it reduces the amount of data needed to apply these kinds of transformations to a list of SVHPLs at $z,{\bar{z}}= 1$. Lastly, let us examine the transformation $z \leftrightarrow {\bar{z}}$ and how to relate an ${\mathcal{L}}_\sigma({\bar{z}},z)$ to (a sum of) ${\mathcal{L}}_{\sigma_i'}(z,{\bar{z}})$. The easy yet computationally heavy way is to translate ${\mathcal{L}}_\sigma(z,{\bar{z}})$ to HPLs, swap $z \leftrightarrow {\bar{z}}$, extract the holomorphic part by means of eq.  and finally apply $\mathbf{s}$. For SVHPLs of weight less than or equal to five this might be adequate, but at higher weights it becomes inefficient due to the large size of expressions that the translation to HPLs causes. Like in the above examples, this step can be avoided altogether. The procedure relies on knowing the functional dependence of $y_1$ on the $x_i$, *cf.* eq. . Consider the weight-$n$ SVHPL ${\mathcal{L}}_\sigma(z,{\bar{z}})$ with $\sigma = \sigma_1,\dots,\sigma_n$ and swap $z \leftrightarrow {\bar{z}}$. Then $${\mathcal{L}}_\sigma({\bar{z}},z) = {\mathcal{L}}_\sigma(z,{\bar{z}}) + \sum_{i=4}^{|\sigma|} \sum_{j=0}^{|\sigma|-i} y_1(\sigma_j,\dots,\sigma_{i+j})\, {\mathcal{L}}_{\sigma_{(A)},\tilde{\sigma}_{(B)},1}(z,{\bar{z}}) \label{eq:ztozb}$$ where the “tilde” map was defined below eq. 
and $y_1(\sigma)$ is the coefficient of the product of $x_0$ and $x_1$ corresponding to $\sigma$, e.g. if $\sigma = 1,1,0,1,0$ then $y_1(\sigma)$ is the coefficient of $x_1 x_1 x_0 x_1 x_0$. The indices (A) in eq.  only appear if $j - 1 \geq 1$ and likewise (B) if $i + j + 1 \leq n$.

[^1]: Note that ultraviolet renormalization is irrelevant for the signature-even amplitude at the logarithmic accuracy considered.

[^2]: This is a direct consequence of the fact that we have removed the factor of the gluon Regge trajectory in defining the reduced amplitude in [eq. ]{}.

[^3]: For the most part of this section we will use the standard letters, 0 and 1. Only in section \[sec:asalphabet\] do we introduce a new alphabet to simplify the two-dimensional evolution.

[^4]: We use the collapsed notation, *cf.* eq.  in appendix \[app:hpls\].

[^5]: Brown [@Brown:2013gia] refers to it as $\zeta_{\rm sv}(3,5,3)$ while Schnetz [@Schnetz:2013hqa] calls it $g_{335}$.

[^6]: This step requires some amount of creativity but is greatly helped by The On-Line Encyclopedia of Integer Sequences (OEIS), `https://oeis.org`.

[^7]: The same result is provided explicitly in the main text in [eqs. –]{}, up to eight loops.

[^8]: There is of course some freedom in choosing the degrees of the polynomials in the numerator and the denominator. After some experimentation we found that Padé approximants with second-order denominators yield stable predictions for the position of the first pole already at relatively low orders, and hence we use this form as the default choice for the analysis presented here. Qualitatively, the results are the same using different Padé approximants.

[^9]: The hard function is provided in [eqs.  and ]{} in terms of $C_A$ and $C_2$, but we find it more convenient for this analysis to express it as a function of the color operators $C_1$ and $C_2$.

[^10]: The full alphabet of HPLs includes the letter $-1$.
In the present work however we only encounter integrals corresponding to the letters $0$ and $1$.
---
author:
- 'Ari Belenkiy$^a$, Steve Shnider$^a$, Lawrence Horwitz$^{b,c}$'
date: '[a) Department of Mathematics, Bar-Ilan University, Ramat Gan, Israel; b) School of Physics, Tel Aviv University, Ramat Aviv, Israel; c) Department of Physics, College of Judea and Samaria, Ariel, Israel]{}'
title: The Geometry of Stochastic Reduction of an Entangled System
---

[**PACS**]{}: 02.40.Dr, 04.60.Pp, 75.10.Dg

[**Keywords**]{}: stochastic reduction, disentanglement, geometric quantum mechanics, projective geometry of states.

Introduction
============

A pure quantum state of a system is a vector in a Hilbert space, which may be represented as a linear combination of a basis of eigenstates of an observable (self-adjoint operator) or of several commuting observables. Let us suppose that the eigenvalues corresponding to the eigenstates of the Hamiltonian operator of a system are the physical quantities measured in an experiment. If the action of the experiment is modelled by a dynamical interaction induced by a term in the Hamiltonian of the system, and its effect is computed by means of the standard evolution according to the Schrödinger equation, the final state would retain the structure of the original linear superposition. One observes, however, that the experiment provides a final state that is one of the basis eigenstates and the superposition has been destroyed. The resulting process is called reduction or collapse of the wave function. The history of attempts to find a systematic framework for the description of this process goes back very far in the development of quantum theory (e.g., the problem of Schrödinger’s cat [@sch]). In recent years significant progress has been made. Rather than invoking some random interaction with the environment and attributing the observed decoherence, i.e. 
collapsing of a linear superposition, to the onset of some uncontrollable phase relation, more rigorous methods have been developed, which add to the Schrödinger equation stochastic terms corresponding to Brownian fluctuations of the wave function. Since a pure quantum state of a system corresponds to an equivalence class of vectors modulo scaling by a non-zero complex number, corresponding to the norm and an overall phase factor [@wig; @mac], it is natural to develop models for collapse in the setting of a projective space [@k; @bh1]. Associated to an $n$-dimensional complex Hilbert space, we have the projective space ${\bf CP}^{n-1}$ equipped with the canonical Fubini-Study metric. In this paper, we shall apply some of these methods of state reduction to the phenomena considered in the famous paper by Einstein-Podolsky-Rosen [@epr] explored experimentally by Aspect [@asp], and analyzed by Bell for its profound implications in quantum theory [@be1; @be2]. The system to be studied consists of a two particle quantum state, where each particle has spin $\frac12$. The two body state of total spin zero has the special property known as “entangled" for which a determination of the state of one particle implies with certainty the state of the second. The problems recognized by EPR and studied extensively by Bell arise when the two entangled particles are very far apart. The states of the two particle system which we shall consider are the equivalence classes of vectors in the tensor product of two spin $\frac12$ representation spaces ${\cal H}\otimes {\cal H}$, where ${\cal H}$ corresponds to the states of one of the constituents. We shall describe the experimental detection of the entangled states in terms of mathematical models recently developed for describing the reduction, or collapse, of the wave function. 
One begins with an entangled state, corresponding to the $1$-dimensional spin $0$ representation with basis vector the linear superposition: $$\label{1} |s=0\rangle:=\frac1{\sqrt 2}(|\uparrow \rangle_1\otimes|\downarrow \rangle_2 -|\downarrow \rangle_1\otimes |\uparrow\rangle_2).$$ Here $1,2$ refer to the two spin $\frac12$ representations, each one with a basis $\{|\uparrow\rangle$, $|\downarrow\rangle\}$, corresponding to spin up and spin down, resp., relative to an arbitrary but fixed direction. The full tensor product representation is a sum of this spin $0$ representation and a complementary spin $1$ representation. The first stage of reduction, using the stochastic evolution model developed by Diosi, Ghirardi, Pearle, Rimini, Brody and Hughston [@bh1; @di; @hu; @gpr], and references therein, gives rise to a density matrix, a linear combination of projections on disentangled states with Born probability coefficients. The second stage of reduction is the detection of the configuration of disentangled states, which we will not discuss in detail here. Assume that one initially has an entangled spin $0$ state of a two particle system and then by some physical process the two particles become separated and far apart. Measurement of the first particle in the spin down state then implies with certainty that the second particle is in the spin up state, measured in the same direction. For the spin $0$ state this direction is arbitrary. The question is often raised as to how the state of the second particle can respond to the arbitrary choice of direction in the measurement of the first. This question is dealt with here by the addition of a term to the Hamiltonian, which we attribute to the presence of the measurement apparatus. On this basis, we shall attempt here to give a mathematical description of the process underlying such a measurement. The state $|s=0\rangle$ is represented in equation (\[1\]) as a linear superposition. 
As noted above, recently developed methods for describing state reduction can account for a reduction of this superposition to one or the other of the product states occurring on the right hand side of eq. (\[1\]) in a simple way if these states are eigenstates of the self-adjoint infinitesimal generator (Hamiltonian) of the evolution. Suppose, for example, that the Hamiltonian has the form, $$\label{2} H=H_0+H_1$$ where $H_0$ contains the spin-independent kinetic energy of the two particles, $$\label{3} H_0= p_1^2/2m_1 + p_2^2/2m_2,$$ describing the free motion, but $H_1$ has the special form $$\begin{aligned} \label{4} H_1&=&\sum \lambda_{i,j} P_{i,j}\\ &=&\sum \lambda_{i,j}(|v_i\rangle_1\otimes |v_j\rangle_2)\otimes (_1\langle v_i|\otimes _2\langle v_j|),\nonumber\end{aligned}$$ where the sum is over $i,j=1,2$ and $v_1=\uparrow, v_2=\downarrow$. We show in the next section that applying the method of adding a Brownian term to the Schrödinger equation, [@hu; @gpr; @di], causes the system to evolve into one or the other of the eigenstates $|v_i\rangle_1\otimes |v_j\rangle_2$ with the correct Born [*a priori*]{} probabilities [@ah; @gpr]. In the case of an initial state of the form (\[1\]), the resulting asymptotic state is either $|\uparrow\rangle_1\otimes |\downarrow\rangle_2$ or $|\downarrow\rangle_1\otimes |\uparrow\rangle_2$, each with probability $\frac12$. Such a configuration is called a mixed state. We should remark that if the two particles correspond to identical fermions, then indices $1,2$ are basically indistinguishable and the two states $|\uparrow\rangle_1\otimes |\downarrow\rangle_2$ and $|\downarrow\rangle_1\otimes |\uparrow\rangle_2$ should appear with equal weights. However, since the particles are located far apart when the measurement takes place, there is no overlap of the one particle wave functions, and the Fermi antisymmetry is not required. 
Thus the presence of two widely separated detectors can split the degeneracy into distinct states, which can, in fact, imply that $\lambda_{1,2}\neq \lambda_{2,1}$. The second stage of reduction, as pointed out above, corresponds to the destruction of the two body state by one-particle filters. The state actually measured is a “separated system" of two particles. We assume that the two filters, which we denote $M_u$ and $M_d$ have the property that if the state has the form $|\uparrow\rangle_1\otimes |\downarrow\rangle_2$, then $M_u$ applied to particle $1$ and $M_d$ applied to particle $2$ succeed with certainty. We shall not discuss the extensive literature dealing with the problem of representing separated systems [@ae; @pi]. We take as our primary task the description of the first stage of this reduction process. In the application of the technique of state reduction, it is usually assumed that the evolution is governed by the physical nature of the system before the measurement process. However, in an undisturbed quantum system the linear superposition of states evolves according to a one parameter group of unitary operators which preserves the superposition and for which there is no collapse. One may understand the Brownian fluctuations leading to collapse as induced by the presence of measurement apparatus. In the same way, the component $H_1$ of the Hamiltonian may be thought of as induced by the measurement apparatus, which, in our formulation of the problem, disentangles the states, even to the extent of defining the orientations for the states $|\uparrow\rangle, |\downarrow\rangle$. In terms of the projective geometry the disentangled states lie in a quadric which is naturally defined by the identification of the underlying $4$-dimensional complex vector space as the tensor product of two $2$-dimensional complex vector spaces. 
The entangled state $|s=0\rangle$ lies outside this quadric, and the stochastic evolution of the system moves the point $|s=0\rangle$ into the quadric in the first stage of reduction. In the next section we review the geometric approach to quantum mechanics in terms of projective space and describe the geometry of entanglement. In the following sections we show how the introduction of the modified Hamiltonian in Hughston’s model for stochastic evolution gives a theoretical framework for describing Aspect’s experiments.

Geometric quantum mechanics
===========================

We begin with a quick review of the geometric framework for quantum mechanics in terms of Hamiltonian symplectic dynamics on the quantum mechanical state space introduced by Kibble \[1\] and developed further by Brody, Hughston and others. For simplicity we will assume that for each time $t$, the wave function $\psi(x,t)$ belongs to a fixed finite dimensional complex Hilbert space and is represented as a linear superposition of a finite basis of states $\psi_j$ $$\psi = z^1 \psi_1 + z^2 \psi_2 +...+ z^n \psi_n.$$ The normalization condition demands that $$|z^1|^2 + |z^2|^2 +...+ |z^n|^2 = 1,$$ and since wave functions related by a phase factor $e^{i\alpha}$ represent the same physical state, the time evolution of the system is actually taking place in complex projective $n-1$-space $$S^{2n-1}/ S^1\equiv {\bf CP}^{n-1}.$$ The space ${\bf CP}^{n-1}$ is the set of equivalence classes of complex $n$-tuples modulo multiplication by a non-zero complex number. An equivalence class is represented by $(z^1: ...\ z^j \ ... :z^n),$ and the $z^i$ are called the [*homogeneous*]{} coordinates of that point. The eigenstate $\psi_j$ corresponds to the point $z^j= (0: ... \ 1_j ... \ :0)$. The time evolution of the quantum state is given by the Schrödinger equation on ${\bf C}^n$: $$i d z^j/dt = H_{k,j} z^k,$$ with $H_{k,j} = (\psi_k, {\bf H} \psi_j)$. 
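As a sanity check that this evolution is unitary, so that it projects to well-defined dynamics on ${\bf CP}^{n-1}$, one can integrate the equation numerically and watch the norm; a small RK4 sketch with an arbitrary $2\times 2$ Hermitian matrix of our choosing:

```python
import math

# Toy check that i dz/dt = H z preserves |z|^2 (hbar = 1).
# H is an arbitrary real-symmetric (hence Hermitian) matrix; entries are our choice.
H = [[1.0, 0.5],
     [0.5, 2.0]]

def deriv(z):
    # dz/dt = -i H z
    return [-1j * (H[j][0] * z[0] + H[j][1] * z[1]) for j in range(2)]

def rk4_step(z, dt):
    k1 = deriv(z)
    k2 = deriv([z[j] + 0.5 * dt * k1[j] for j in range(2)])
    k3 = deriv([z[j] + 0.5 * dt * k2[j] for j in range(2)])
    k4 = deriv([z[j] + dt * k3[j] for j in range(2)])
    return [z[j] + dt / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) for j in range(2)]

z = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # normalized initial state
dt, steps = 1e-3, 1000
for _ in range(steps):
    z = rk4_step(z, dt)
norm = sum(abs(c)**2 for c in z)            # stays 1 up to integrator error
```

The norm drift is only the RK4 truncation error, so the trajectory stays on the unit sphere and descends to ${\bf CP}^{1}$.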
In a coordinate patch of ${\bf CP}^{n-1}$, for example, $z^n\neq 0$, with coordinates $\{x^a| a=1,...,2(n-1)\}$, $ x^a + i x^{a+n}:=z^a/z^n$ the Schrödinger equation can be expressed in Hamiltonian form $$\hbar dx^a/dt = 2 \Omega^{ab} \nabla_b H(x) , \label{Schrod}$$ where $\nabla$ is a covariant derivative on $CP^{n-1}$ with the connection form associated with the Fubini-Study metric, $\Omega^{ab}$ is the symplectic structure and the real-valued function (observable) $H(x)$, is defined by $$H(x) = {\Sigma H_{j,k} z^j \ \bar z^k \over {\Sigma |z^j|^2}}. \label{perfectfunction}$$ If the operator [**H**]{} is diagonal in the representation provided by $\{\psi_j\}$, e.g., with eigenvalues $\lambda_j$, $H(x)$ takes the form $$H(x) = {\Sigma_j \lambda_j |z^j|^2 \over \Sigma |z^j|^2}$$ which is a function with critical points at $z^j=(0: \ 1_j: \ 0)$. The projective space geometry naturally lends itself to the computation of transition probabilities. The transition probability from state $X$ to state $Y$ is given by $$Prob(X,Y) = {\langle X|Y\rangle \langle Y|X\rangle \over \langle X|X\rangle \langle Y|Y\rangle}, \label{scalar}$$ which has a simple relation to the geodesic distance with respect to the Fubini-Study metric between $X$ and $Y$ considered as points in ${\bf CP}^{n-1}$. Calling this distance $\theta$, we have, [@hu], $$\cos^2({\theta\over 2})={\langle X|Y\rangle \langle Y|X\rangle \over \langle X|X\rangle \langle Y|Y\rangle}.$$ This, in particular, means that two [*conjugate*]{} or [*orthogonal*]{} points have geodesic distance $\pi$ between them. The state space for a pair of spin-${1 \over 2}$ particles is the projective space of ${\bf C}^2\otimes {\bf C}^2$, which we identify with ${\bf CP}^3$. 
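The link between transition probability and Fubini-Study distance is easy to verify for a pair of qubit states (the states and the angle below are our arbitrary choices):

```python
import math

def transition_prob(X, Y):
    """Prob(X,Y) = |<X|Y>|^2 / (<X|X><Y|Y>) for vectors in C^n (eq. [scalar])."""
    inner = sum(x.conjugate() * y for x, y in zip(X, Y))
    nx = sum(abs(x)**2 for x in X)
    ny = sum(abs(y)**2 for y in Y)
    return abs(inner)**2 / (nx * ny)

alpha = 1.1                                  # Bloch-sphere angle between the states
X = [1, 0]
Y = [math.cos(alpha / 2), math.sin(alpha / 2)]
p = transition_prob(X, Y)                    # = cos^2(alpha/2)
theta = 2 * math.acos(math.sqrt(p))          # Fubini-Study geodesic distance
```

Here $\theta$ comes out equal to $\alpha$, and orthogonal states ($\alpha=\pi$) indeed sit at distance $\pi$.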
We represent the basis of ${\bf C}^2$ as $\uparrow, \downarrow$, and the basis ${\bf C}^2\otimes {\bf C}^2$ as $ \uparrow\otimes \downarrow, \uparrow\otimes \uparrow, \downarrow\otimes \downarrow, \downarrow\otimes \uparrow.$ Let $(x:y:z:w)$ be the homogeneous coordinates corresponding to this basis. The [*singlet*]{} state (total spin-0 case) is represented in homogeneous coordinates as $$P_0 = (1:0:0:-1)$$ (it is also represented by the line in $C^4$ with $ x = -w, y = z = 0$). The [*triplet*]{} representation is the orthogonal hyperplane $L$, whose equation in homogeneous coordinates is $$L = \{x-w=0 \}\quad\mbox{\rm or, in parametric form}\quad L = \{(x:y:z:x)\}.$$ Let us describe the space of possible representations of the eigenstates of the [*spin-z*]{} operator. The directions of the $z$-axes of a system of two particles are parametrized by $CP^1 \times CP^1$. The manifold of such states is imbedded in our $CP^3$ as the decomposable $2$-tensors, $(a\uparrow +b\downarrow)\otimes (c\uparrow+ d\downarrow)$, which gives the [*Segre*]{} embedding $$((a:b),(c:d))\mapsto (ad: ac: bd: bc)$$ of $CP^1 \times CP^1$ onto the quadric represented by the equation $$\label{quadric} Q = \{xw=yz\}.$$ The quadric $Q$ intersects the plane $L$ in a conic: $$\label{conic} C = \{x^2=yz,x=w\}.$$ The point $$P_{\uparrow \uparrow} = (0:1:0:0)$$ on the conic corresponds to the initial spin axis. The point $$P_{\downarrow \downarrow} = (0:0:1:0)$$ is the unique point in the conic (\[conic\]) which is conjugate (orthogonal) to $P_{\uparrow \uparrow} = (0:1:0:0)$ relative to the standard Hermitian inner product. So far we have constructed only two eigenstates of the [*spin-z*]{} operator. 
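Membership in the quadric is a one-line test in homogeneous coordinates: every product state satisfies $xw = yz$, while the singlet $P_0 = (1:0:0:-1)$ gives $xw - yz = -1$. A short sketch (function names are ours):

```python
import random

def embed(a, b, c, d):
    # ((a:b),(c:d)) -> (x:y:z:w) = (ad:ac:bd:bc), the product-state embedding
    return (a * d, a * c, b * d, b * c)

def on_quadric(point, tol=1e-12):
    # Q = {xw = yz}: the locus of disentangled (decomposable) states
    x, y, z, w = point
    return abs(x * w - y * z) < tol

random.seed(0)
product_states = [embed(*(random.uniform(-1, 1) for _ in range(4)))
                  for _ in range(100)]
singlet = (1, 0, 0, -1)   # P_0 in the homogeneous coordinates above
```

Every randomly generated product state lands on $Q$; the singlet does not, which is exactly its entanglement.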
The third triplet state $P_1$ of the spin operator lies at the intersection of the tangents to the conic at $P_{\uparrow \uparrow}$ and $P_{\downarrow \downarrow}$, see [@hu], and is given by the equations $y=z=0, x=w$: $$P_1 = (1:0:0:1) .$$ A basis for the $0$ eigenstates of the [*spin-z*]{} operator in the full four dimensional representation is given by the intersection of the line $$\overline{P_0 P_1} = (\mu+\nu: 0: 0 :\mu - \nu)$$ with the quadric $\{xw = yz\}$ in the two distinct points with $\mu = \pm \nu$: $$P_{\uparrow \downarrow} = (0:0:0:1)$$ and $$P_{\downarrow \uparrow} = (1:0:0:0) .$$ In this framework, we have constructed the geometry of four spin states spanning ${\bf C}^2\otimes {\bf C}^2$. Moreover, we have explained that disentangled states form a quadric in the associated projective space, and that the spin $0$ entangled state, lying outside this quadric, is a distinguished point.

Collapse of the entangled state
===============================

We now describe the mechanism by which an initial entangled state, corresponding to this distinguished point, can evolve into a disentangled state in the quadric. To see how this occurs, we review briefly the mechanism of wave function collapse induced by stochastic fluctuations of the Schrödinger evolution. We follow closely the method of Hughston [@hu] (see also [@abbh; @ah]). In the stochastic reduction model of Hughston the system is governed by the following stochastic differential equation: $$\label{twiddle} d x^a = (2\Omega^{a,b} \nabla_b H - {1 \over 4} \sigma^2 \nabla^a V) dt + \sigma \nabla^a H d W_t$$ where $$V(x) = \nabla_a H(x) \nabla^a H(x)$$ is a so-called [*quantum uncertainty*]{}. (Where it is not mentioned explicitly, the indices are raised with the metric.) 
From Itô theory it immediately follows that the above process has two basic properties: 1\) Conservation of Energy $$H(x_t) = H(x_0) + \sigma \int_0^t V(t) dW_t$$ 2\) Stochastic reduction $$\label{star} dV = - \sigma^2 V(x_t)^2 dt + \sigma \nabla_x V(x_t) \nabla^x \beta(x_t) dW_t$$ where $$\beta(x) = \nabla_a H(x) \nabla^a V(x)$$ is the “third” moment. It follows from (\[star\]) that the expectation $E[V]$ of the stochastic process obeys the relation [@hu; @ah] $$E[V_t]= E[V_0] - \sigma^2 \int ^t_0 ds E[V_s^2],$$ and since $0\leq E[(V_s-E[V_s])^2]=E[V_s^2]-(E[V_s])^2,$ $$E[V_t]\leq E[V_0] - \sigma^2 \int ^t_0 ds E[V_s]^2.$$ Since $V_s$ is positive, this implies that $E[V_\infty]=0$, and (up to measure $0$ fluctuations) $V_t\rightarrow 0$ as $t$ tends to $\infty$. Since $$V=\langle \psi, (H -\langle H\rangle)^2 \psi\rangle/||\psi||^2,$$ where $\langle H\rangle=E[H]=\langle \psi,H\psi\rangle/||\psi||^2$, $$V=0 \quad\mbox{\rm implies}\quad ||(H-\langle H \rangle)\psi||=0,$$ and $H\psi=\langle H\rangle \psi$, so $\psi$ is an eigenvector of the Hamiltonian. Note that the process we have described brings the system to one or another of the eigenstates of the Hamiltonian $H$, with the Born probability given by the initial state, [@ah; @gpr]. Therefore, the final configuration corresponds to a mixed state, with each component an eigenstate of $H$. We now apply the mechanism to what we have called the first stage of the Aspect type experiment [@asp], the evolution from an entangled state to a disentangled state. Let us suppose that the system of two spin $\frac12$ particles is initially in the entangled spin $0$ state, and the two particles move away from each other, according to the motion generated by (\[3\]). 
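The collapse-with-Born-probabilities behaviour can be illustrated with a drastically simplified one-dimensional projection of such models: for a two-level system the Born probability $p_t$ of one eigenstate follows a martingale $dp = \kappa\, p(1-p)\, dW_t$, driven to the absorbing values $0$ or $1$ and hitting $1$ with probability $p_0$. This toy SDE and the constant $\kappa$ are our simplification, not the full ${\bf CP}^3$ dynamics of eq. (\[twiddle\]); a Euler-Maruyama simulation:

```python
import math, random

random.seed(42)
kappa, dt = 1.0, 0.01
steps, paths = 2000, 1000
p0 = 0.3                                # initial Born probability of eigenstate |1>
hits = 0
for _ in range(paths):
    p = p0
    for _ in range(steps):
        # dp = kappa * p * (1-p) * dW: a martingale, so E[p_t] = p0 for all t
        p += kappa * p * (1.0 - p) * random.gauss(0.0, math.sqrt(dt))
        p = min(1.0, max(0.0, p))       # 0 and 1 are absorbing endpoints
    hits += p > 0.5                     # this trajectory collapsed to |1>
fraction = hits / paths                 # ~ p0: collapse with the Born probability
```

Because $p_t$ is a bounded martingale, the fraction of trajectories absorbed at $1$ converges to $p_0$, i.e. the Born rule emerges from the dynamics rather than being imposed.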
As the particles approach some neighborhood of the detector, the Schrödinger evolution, $$i {{\partial \psi}\over{\partial t}}= H_0\psi$$ is modified by the Brownian fluctuations appearing in (\[twiddle\]), presumably induced by the detectors and their interactions, for example, with a set of quantum fields. We suppose, as well, that the filters of the apparatus induce a self-adjoint perturbation $H_1$ of the Hamiltonian itself, so that the system evolves, as in (\[2\]), according to a perturbed Hamiltonian $H=H_0+H_1$ in addition to the effect of the Brownian fluctuations. In order that the quantum state converge by stochastic reduction to one of the disentangled states, $ \uparrow\otimes \downarrow, \uparrow\otimes \uparrow, \downarrow\otimes \downarrow, \downarrow\otimes \uparrow$, we suppose the perturbation to be of the form (\[4\]). The component $H_0$ of the Hamiltonian induces an irremovable dispersion, but the residual dispersion can be as small as we wish. As we have pointed out, for identical particles there may be a degeneracy between the states $\uparrow\otimes \downarrow$ and $\downarrow\otimes \uparrow$; since the filters in the experiment are arranged in one or the other of these configurations, one expects this degeneracy to be broken. Since one particle has moved in one direction and the second in another (with eventually no overlap of the wave functions), the particles then become effectively distinguishable and the induced Hamiltonian is not required to be degenerate. Therefore the final state may become disentangled, as we noted in the Introduction. The evolution (\[twiddle\]) corresponds to the motion of the point in ${\bf CP}^3$, going from a singlet state to a limit point in the quadric, for example $\uparrow\otimes \downarrow$ occurring with the corresponding Born probability. 
As we have pointed out the second stage of the detection, due to the direct action of the detector, must destroy the two body state and create a state of a so-called “separated system" in which one particle is seen with spin up and the other with spin down in two separate (although essentially simultaneous) experiments. The mathematical framework for such separated systems is not completely clear [@ae; @pi]. As Aerts [@ae] has shown, the set of propositions of such a system is the direct sum of two lattices and does not correspond to a lattice of subspaces of a Hilbert space. We assume however that the outcome of two measurements corresponds to the configuration of the two body state just before the measurement, an assumption generally made in applications of the quantum theory. Furthermore, we may ask about a situation in which the two filters are not oriented in opposite directions, but at an angle to each other. In this case the bases of the two spin $\frac12$ representation would not correspond. One basis would be $\uparrow, \downarrow$ and the other would be $\nwarrow, \searrow$ where $$\nwarrow=\cos(\theta/2)\uparrow +\sin(\theta/2) \downarrow,\quad \searrow=-\sin(\theta/2)\uparrow +\cos(\theta/2)\downarrow$$ The computation of the Born probability from a singlet state to a final state determined by the filters, say of the form $\nwarrow\otimes \downarrow$, would be $\cos^2(\theta/2)$ (eq. (\[scalar\])), in agreement with experiment. In this way, arrangements of the filters can effect perturbations of the Hamiltonian that can cause the system to evolve to the appropriate point of the quadric of disentangled states.

Some concluding remarks
=======================

We have discussed a mechanism based on stochastic reduction, corresponding to a particular class of irreversible processes, which models the evolution of an entangled two-body system to a disentangled state. 
As an extension of this idea, one may consider a problem with a natural degeneracy of some initial state for which the presence of effective detectors of some type induces a perturbation in which stochastic reduction takes place, as in the asymptotic cluster decomposition of products of quantum fields reducing an $N$-body system to $M$ $k$-body systems, or the formation of local correlations in $N$-body systems such as liquids, or spontaneous symmetry breaking. In all these cases, due to the existence of continuous spectra, there will be some residual dispersion in the final state, although possibly very small. We are currently studying possible applications of the methods discussed here to such configurations. [ABCD]{} S. L. Adler, D. C. Brody, T. A. Brun, L. P. Hughston, [*J. Phys. A*]{} [**34**]{} (2001) 8795. S. L. Adler, L. P. Horwitz, [*J. Math. Phys.*]{} [**41**]{} (2000) 2485, errata [**42**]{} (2001) 976. A. Aspect, Proposed experiment to test the nonseparability of quantum mechanics, [*Phys. Rev. D*]{} [**14**]{} (1976) 1944-51, see Wheeler, Zurek, loc. cit. pp. 435-442. D. Aerts, [*Int. Jour. of Theor. Phys.*]{} [**38**]{} (1999) 289. J. Bell, On the problem of hidden variables in quantum mechanics, [*Rev. Mod. Phys.*]{} [**38**]{} (1966) 447-452. —, [*Speakable and Unspeakable in Quantum Mechanics*]{} Cambridge University Press 1987. D. Bohm [*Quantum Theory*]{}. Prentice Hall 1951. D. C. Brody and L. P. Hughston, Geometric quantum mechanics. [*J. Geom. Phys.*]{} [**38**]{} (2001), 19-53. L. Diosi, [*J. Phys. A*]{} [**21**]{} (1988) p. 2885, [*Phys. Lett.*]{} [**129A**]{} (1988) p. 419, [*Phys. Lett.*]{} [**132A**]{} (1988) p. 233. Dubrovin B., Novikov S. and Fomenko A. [*Modern Geometry*]{}. 3 vols. NY. Springer 1984. A. Einstein, B. Podolsky, N. Rosen, Can a quantum mechanical description of physical reality be considered complete, [*Phys. Rev.* ]{} [**47**]{} (1935) 777-780, see Wheeler, Zurek, loc. cit., 138-151. G.W. 
Gibbons, Typical states and density matrices. [*J. Geom. Phys.*]{} [**8**]{} (1992), 147-162. G. C. Ghirardi, P. Pearle, A. Rimini, [*Phys. Rev. A*]{} [**42**]{} (1990) 78. L. P. Hughston, Geometry of stochastic state vector reduction. [*Proc. Royal Soc. Lond A*]{} [**452**]{} (1996), 953-79. T.W.B. Kibble, Geometrization of quantum mechanics. [*Commun. Math. Phys.*]{} [**65**]{} (1979), 189-201. G. Mackey, Mathematical Foundations of Quantum Mechanics, Benjamin-Cummings Advanced Book Program, Reading, Mass., 1963. C. Piron, Mécanique Quantique, Bases et Applications, Presses Polytechniques et Universitaires, Lausanne, (1990). E. Schrödinger, The present situation in quantum mechanics, [*Proc. Am. Phil. Soc.*]{} [**124**]{} (1980) 323-38, translation of the original articles in [*Naturwissenschaften*]{} [**23**]{} (1935) 807-812, 823-828, 844-849, see Wheeler, Zurek, loc. cit., pp. 152-168. J. A. Wheeler and W. H. Zurek, eds. [*Quantum Theory and Measurement*]{} Princeton U. Press (1983). E. Wigner, Interpretation of quantum mechanics, p. 260-315 in Wheeler, Zurek, loc. cit.
--- abstract: 'Kinetic Inductance Detectors (KIDs) are superconductive low–temperature detectors useful for astrophysics and particle physics. We have developed arrays of lumped elements KIDs (LEKIDs) sensitive to microwave photons, optimized for the four horn–coupled focal planes of the OLIMPO balloon–borne telescope, working in the spectral bands centered at , , , and . This is aimed at measuring the spectrum of the Sunyaev–Zel’dovich effect for a number of galaxy clusters, and will validate LEKIDs technology in a space–like environment. Our detectors are optimized for an intermediate background level, due to the presence of residual atmosphere and room–temperature optical system and they operate at a temperature of . The LEKID planar superconducting circuits are designed to resonate between 100 and , and to match the impedance of the feeding waveguides; the measured quality factors of the resonators are in the $10^{4}-10^{5}$ range, and they have been tuned to obtain the needed dynamic range. The readout electronics is composed of a *cold part*, which includes a low noise amplifier, a dc–block, coaxial cables, and power attenuators; and a *room–temperature part*, FPGA–based, including up and down-conversion microwave components (IQ modulator, IQ demodulator, amplifiers, bias tees, attenuators). In this contribution, we describe the optimization, fabrication, characterization and validation of the OLIMPO detector system.' address: - '$^1$ Dipartimento di Fisica, *Sapienza* Università di Roma, P.le A. Moro 2, 00185 Roma, Italy' - '$^2$ Istituto Nazionale di Fisica Nucleare, Sezione di Roma, P.le A. 
Moro 2, 00185 Roma, Italy' - '$^3$ Istituto di Fotonica e Nanotecnologie - CNR, Via Cineto Romano 42, 00156 Roma, Italy' - '$^4$ School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA' - '$^5$ Department of Physics, Arizona State University, Tempe, AZ 85257, USA' author: - | A Paiella$^{1,2}$, E S Battistelli$^{1,2}$, M G Castellano$^3$, I Colantoni$^{3,}$,\ F Columbro$^{1,2}$, A Coppolecchia$^{1,2}$, G D’Alessandro$^{1,2}$,\ P de Bernardis$^{1,2}$, S Gordon$^4$, L Lamagna$^{1,2}$, H Mani$^4$, S Masi$^{1,2}$,\ P Mauskopf$\:^{4,5}$, G Pettinari$^3$, F Piacentini$^{1,2}$ and G Presta$^1$ bibliography: - 'bib\_abbr.bib' title: Kinetic Inductance Detectors and readout electronics for the OLIMPO experiment ---

Introduction
============

In the last thirty years, precision cosmology has achieved important goals through measurements of the Cosmic Microwave Background (CMB) radiation such as its spectrum [@0004-637X-473-2-576], the anisotropies [@refId01], the E–mode component of the polarization [@refId0], and the B–mode component of the polarization due to gravitational lensing from dark matter structure [@0004-637X-833-2-228]. Yet, the B-mode power spectrum from inflation and the spectral distortions still remain elusive, as does the spectroscopic measurement of the Sunyaev–Zel’dovich (SZ) effect. The OLIMPO experiment [@Coppolecchia2013] is aimed at measuring the SZ effect, which is a CMB anisotropy in the direction of galaxy clusters, due to the inverse Compton scattering of low energy CMB photons by the high energy electrons of the hot gas present in the intra–cluster medium. SZ effect measurements represent an interesting tool to study the morphological and dynamical state of clusters, to probe the CMB temperature evolution with the redshift, to constrain cosmological parameters, and to search for previously unknown clusters by looking at their SZ signature in the microwave sky [@1475-7516-2018-04-020; @1475-7516-2018-04-019]. 
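The division of the bands into negative, null, and positive regions follows the standard non-relativistic thermal SZ spectral function $g(x) = x\,(e^x+1)/(e^x-1) - 4$, with $x = h\nu/k_B T_{\rm CMB}$, whose null sits near $217\,$GHz. A quick computation of the null (standard physics, not a number taken from the paper):

```python
import math

def g(x):
    # Thermal SZ spectral function: negative below the null, positive above
    return x * (math.exp(x) + 1) / (math.exp(x) - 1) - 4

# g is monotonically increasing on [1, 10] with a sign change: bisect for the null
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)                  # ~ 3.83

kB, h, T_cmb = 1.380649e-23, 6.62607015e-34, 2.725
nu0 = x0 * kB * T_cmb / h             # ~ 217 GHz: the SZ null frequency
```

A 150 GHz band thus samples the SZ decrement and higher-frequency bands the increment, which is what makes a multi-band spectroscopic measurement discriminating.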
OLIMPO measures SZ signals with a technique so far unattempted in this kind of observations: it performs a spectroscopic map of the SZ effect with a differential interferometric instrument, working above the atmosphere, and provides efficient and unbiased decontamination of the SZ and CMB signals from all the foregrounds along the same line of sight [@deBernardis], thus increasing the accuracy on the estimate of the astrophysical quantities involved in the physics of the effect. The OLIMPO experiment has, therefore, been designed as a large balloon–borne mm–wave observatory, with a aperture telescope, equipped with a room–temperature differential Fourier transform spectrometer (DFTS) [@schillaci2014], and four low–temperature detector arrays, centered at 150, 250, 350, and , exploring the negative, zero, and positive regions of the SZ spectrum. The detector arrays, consisting of horn–coupled lumped element kinetic inductance detectors (LEKIDs), are cooled to about by a $^{3}$He fridge, accommodated inside a wet N$_2$ plus $^{4}$He cryostat. The detector arrays are fed and read out by means of two independent bias–readout lines and two FPGA–based electronics. Kinetic inductance detectors are superconductive photon detectors, where the radiation is detected by sensing changes in the kinetic inductance. A superconductor, cooled below its critical temperature $T_{c}$, presents two populations of electrons: quasiparticles and Cooper pairs, bound states of two electrons with binding energy $2\Delta_{0}=3.528\:k_{B}T_{c}$. If pair-breaking radiation ($h\nu>2\Delta_{0}$) is absorbed in a superconducting film, it breaks Cooper pairs, producing a change in the relative densities of the two populations, and thus in the kinetic inductance.
For these reasons, in the lumped element configuration, a superconducting strip is properly shaped and sized in order to act as a radiation absorber, and this structure, which is an inductor as well, is coupled to a capacitor to form a superconductive high quality factor resonator. In this way, the change in kinetic inductance, due to the incident radiation, produces a change in the resonant frequency and in the quality factor, which can be sensed by measuring the change in the amplitude and phase of the bias signal of the resonator, transmitted past the resonator through a feedline. The KID design and readout scheme are intrinsically multiplexable for large–format arrays, provided that the resonant frequencies of the individual resonators coupled to the same feedline are adjusted to unique values, for instance by changing the capacitor size. In this way, entire arrays can be fed and read out thanks to an electronics chain made of *cold components*, including low noise amplifiers (LNAs), dc–blocks, coaxial cables, and power attenuators; and a *room–temperature stage*, where an FPGA-based electronics, coupled to an ADC/DAC board, is used to generate one bias tone per resonator. This solution makes it possible to feed and monitor the amplitude and phase of the bias signals of all the resonators at the same time, while physically connecting the cold stage to the room–temperature stage with a single cable. KID technology has already been proven in ground–based experiments [@Ritacco], and, thanks to its features, seems to be the optimal solution for next–generation space–borne CMB experiments [@1475-7516-2018-04-014; @1475-7516-2018-04-015], but it still needs to be demonstrated in a representative environment for space applications. OLIMPO, which was operated from the stratosphere, is therefore a natural testbed for KIDs in space–like conditions.
Detectors and *cold electronics* ================================ The first constraint in the optimization process of a detector system is always the target science for which it will be built. In the OLIMPO case, moreover, it has to fit an already developed cryogenic and optical system. This implies that the first step is the choice of the material of the superconducting film and the dielectric substrate, the size of the detector arrays, the geometry and size of the absorbers, the geometry and size of the radiation couplers, the number of detectors per array, and the illumination configuration. These steps have been performed through optical simulations. The second step concerns the optimization of the readout scheme: the geometry and size of the feedline; the geometry and size of the capacitors, on which the resonant frequencies of the resonators depend; and the coupling between the resonators and the feedline. This optimization has been done through electrical simulations. The last step involves the optimization of the *cold electronics*: the choice of the material and size of the coaxial cables; the magnitude of the power attenuators; the gain, noise, and operation temperature of the cryogenic amplifier. KID optimization, fabrication and results ----------------------------------------- The detailed description of the optimization and fabrication of the OLIMPO detector systems and the measurement results can be found in [@Paiella2017; @Paiella2018]. All four arrays are fabricated in a thick aluminum film deposited on silicon substrates of different thickness depending on the observed radiation frequency. The substrate also acts as a backshort, since the face opposite to the detectors has been coated with aluminum. The properties of different aluminum film thicknesses have been measured as described in [@Paiella2016], and the results have been used for the optical simulations.
A compromise between optical simulation results and critical temperature (on which the optimal working temperature and the minimum detectable radiation frequency depend) forced us to choose for the aluminum film thickness. For this film, we measured the critical temperature, $T_{c}=\SI{1.31}{K}$; the residual resistance ratio, ${\rm RRR}=3.1$; the sheet resistance, $R_{s}=\SI{1.21}{\Omega/\Box}$, and the surface inductance, $L_{s}=\SI{1.38}{pH/\Box}$. The optimal absorber solution turns out to be a front–illuminated fourth–order Hilbert pattern, where the characteristic length scales with the observed radiation wavelength. The 150 and 250 GHz arrays are coupled to the radiation via a single–mode waveguide, while the 350 and 460 GHz arrays are coupled via a single–mode flared waveguide. The number of detectors per array is 23, 39, 25 and 43 for the 150, 250, 350 and array, respectively. The capacitors of the KIDs have been designed so that the lumped element condition is satisfied for all the resonators and the resonant frequencies fall into the range $\left[100;600\right]$ MHz. This is done by means of large capacitors, which also have the advantage of reducing the TLS (two–level system) noise [@Noroozian2009]. Moreover, the resonant frequencies are such that the and the arrays can be fed and read out with the same line as well as the and the arrays. In this way, each readout electronics manages about 65 detectors. Each detector is coupled to a $\SI{50}{\Omega}$–matched microstrip feedline (whose width differs from array to array) by means of capacitors, designed to constrain the coupling quality factor to about $1.5\times10^{4}$, thus guaranteeing a quite large detector dynamic range and ensuring that the total quality factor is dominated by the coupling one. The arrays have been fabricated at the IFN–CNR.
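As a quick illustration of how $T_{c}$ sets the minimum detectable radiation frequency mentioned above, the pair–breaking condition $h\nu>2\Delta_{0}$ can be evaluated with the measured critical temperature (the script below is our own sketch, not a computation from the paper):

```python
# Minimum pair-breaking frequency implied by the measured Tc of the Al film.
k_B = 1.380649e-23   # J/K, Boltzmann constant
h = 6.62607015e-34   # J*s, Planck constant

T_c = 1.31                        # K, measured critical temperature
two_Delta0 = 3.528 * k_B * T_c    # Cooper-pair binding energy 2*Delta_0, J
nu_min = two_Delta0 / h           # pair-breaking condition: h*nu > 2*Delta_0

print(f"minimum detectable frequency ~ {nu_min / 1e9:.0f} GHz")  # ~96 GHz
```

The result, about 96 GHz, is safely below the lowest (150 GHz) OLIMPO band, consistent with the use of aluminum for all four arrays.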
The detectors have been realized by electron–beam lithography, electron–gun evaporation and lift–off [@Colantoni2016] on high–quality, high–resistivity ($\rho>\SI{10}{k\Omega.cm}$) intrinsic Si(100) wafers, double–side polished. The sample holders of the detector arrays, as well as the horn arrays, are made of ergal alloy (aluminum 7075), in order to guarantee good thermalization and low power losses through the horns. The OLIMPO detectors have been fully characterized: the electrical properties, such as the quality factors and the resonant frequencies, have been measured in a dark laboratory cryostat and are in agreement with the simulations; the optical performance has been measured in the OLIMPO cryostat and results in a Rayleigh-Jeans noise equivalent temperature lower than $\SI{500}{\mu K/\sqrt{Hz}}$ under a blackbody load of about , for all the OLIMPO arrays, and a global optical efficiency of about 10%, averaged over the four channels. Cold electronics optimization ----------------------------- By *cold electronics* we mean the components placed inside the cryostat, necessary to feed and read out the detectors. These components include coaxial cables, power attenuators, low noise amplifiers and dc–blocks. They have to be chosen in such a way that the noise equivalent power (NEP) of the active components of the *cold electronics* is lower than the expected noise equivalent power of the detectors. Theoretically, the noise of a KID is mainly due to the generation–recombination noise (the TLS noise can be neglected by design), so its NEP can be evaluated as $${\rm NEP}_{g-r}=2\Delta\sqrt{\frac{N_{qp}}{\tau_{qp}}}\;,$$ where $\Delta=1.764\;k_{B}T_{c}$ is half the Cooper-pair binding energy, with $k_{B}$ the Boltzmann constant, $N_{qp}$ is the quasiparticle number, and $\tau_{qp}$ is the quasiparticle lifetime (measured in [@Paiella2018]). For the OLIMPO detectors, the generation–recombination NEPs are collected in table \[tab:NEP\_gr\].
[cc]{} Channel&${\rm NEP}_{g-r}$\ $\left[{\rm GHz}\right]$&$\left[{\rm W}/\sqrt{{\rm Hz}}\right]$\ 150&$3.1\times 10^{-17}$\ 250&$2.8\times 10^{-17}$\ 350&$2.1\times 10^{-17}$\ 460&$1.8\times 10^{-17}$\ As already said, a KID is a superconductive resonator, which is optimally sensitive when operated close to its resonant frequency. Since the quality factors are very high, due to the superconductor properties, the resonances are very deep, and the signal transmitted past the resonators has to be amplified. This is done by means of a cryogenic low noise amplifier, able to amplify the signal output from the detectors, with very low intrinsic noise. For the OLIMPO experiment, the two LNAs, necessary for the two readout lines, have been provided by Arizona State University (ASU) [@6881015]. These amplifiers dissipate about each ($V=\SI{1.6}{V}$ and $I=\SI{8}{mA}$ at ), and amplify , with a noise temperature of , and a gain compression point, referred to the input, of at , for a signal. The noise equivalent power associated with the cryogenic amplifier is given by $${\rm NEP}_{amp}=\frac{N_{qp}\Delta}{\tau_{qp}}\sqrt{\frac{k_{B}T_{amp}}{P_{read}}}\;; \label{eq:NEP_amp}$$ where $T_{amp}$ is the noise temperature, and $P_{read}$ is the total readout power at the LNA input. Since the OLIMPO cryostat does not feature a stage, and since the coldest stage where they can be mounted is the vapor $^{4}$He shield, at a temperature of about , we need to extrapolate the values of the LNA noise temperature at , in order to compare the NEP associated with the LNA and the generation–recombination one. This has been done by combining the measurements provided by ASU at 10, 20 and , with the measurements performed by us in a laboratory cryostat at 4 and . Since is the temperature closest to at which we were able to cool the amplifier quickly, the measurements at this temperature have been performed at different supply voltages. All these data are collected in table \[tab:LNA\].
The noise measurements have been done through an *Anritsu MS2717B* spectrum analyzer, set to a resolution bandwidth ${\rm RBW}=\SI{1}{Hz}$, at the LNA output, and have been scaled to the LNA input thanks to the gain measured with an *Anritsu M52034B* vector network analyzer (VNA). All these measurements refer to . The conversion between noise power and noise temperature is given by $${\rm Noise\;Temperature}=\frac{{\rm Noise\;Power}}{k_{B}\:{\rm RBW}}\;.$$ [cccccc]{} $T$ & $V$ & Gain & Noise power & Noise temperature & Notes\ $\left[{\rm K}\right]$ & $\left[{\rm V}\right]$ & $\left[{\rm dB}\right]$ & $\left[{\rm dBm/Hz}\right]$ & $\left[{\rm K}\right]$ &\ 4 & 1.6 & 33 & $-$192.0$\pm$0.2 &4.57$\pm$0.21&\ 44 & 1.5 & 32.1 & $-$189.1$\pm$0.2 &8.91$\pm$0.41&\ 44 & 1.6 & 32.4 & $-$189.4$\pm$0.2&8.31$\pm$0.38&\ 44 & 1.7 & 32.7 & $-$189.7$\pm$0.2&7.76$\pm$0.36&Measurements performed\ 44 & 1.8 & 32.9 & $-$189.9$\pm$0.2&7.41$\pm$0.34&by us in a laboratory cryostat\ 44 & 1.9 & 33.0& $-$190.0$\pm$0.2&7.24$\pm$0.33&\ 44 & 2.0 & 33.1 & $-$190.1$\pm$0.2&7.08$\pm$0.33&\ 44 & 2.1 & 33.2 & $-$190.2$\pm$0.2&6.92$\pm$0.32&\ 10 & 1.6 & 33 & & 5&\ 20 & 1.6 & & & 6&provided by ASU\ 300 & 2.1 &30& & 45&\ The value of the noise temperature of the LNA at has been extrapolated by fitting the data with the function $aT^{2}+bT+c$, see the *left panel* of figure \[fig:LNA\]. We obtained $a=\SI{1.67e-4}{K^{-1}}$, $b=8.59\times10^{-2}$, and $c=\SI{4.20}{K}$, and therefore $$T_{amp}\left(\SI{30}{K}\right)=\SI{6.93}{K}\;.$$ The only information still missing to calculate the LNA NEP via equation \[eq:NEP\_amp\] is $P_{read}$. The total readout power at the LNA input is the sum of all the bias powers of the tones, each attenuated by the corresponding resonator, on the same readout line. Moreover, in order to work in the linear regime of all the amplifier circuits of the *cold* and *room–temperature electronics*, the powers at the input of such amplifiers have to be lower than the 1 dB gain compression point of the amplifiers themselves.
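Both conversions quoted above are simple to reproduce; the sketch below (plain Python, with the Boltzmann constant supplied by us) converts the 4 K noise–power measurement into a noise temperature and evaluates the quadratic fit at 30 K:

```python
# Reproducing the noise-temperature conversion and the quadratic-fit
# extrapolation quoted in the text (values from table tab:LNA).
k_B = 1.380649e-23   # J/K
RBW = 1.0            # Hz, spectrum analyzer resolution bandwidth

# Noise power -> noise temperature for the 4 K measurement (-192.0 dBm/Hz)
noise_power_W = 10 ** (-192.0 / 10) * 1e-3        # dBm -> W (1 Hz band)
noise_temperature = noise_power_W / (k_B * RBW)   # -> ~4.57 K, as in the table

# Quadratic fit T_amp(T) = a*T^2 + b*T + c, evaluated at 30 K
a, b, c = 1.67e-4, 8.59e-2, 4.20
T_amp_30K = a * 30.0**2 + b * 30.0 + c            # -> 6.93 K, as quoted

print(round(noise_temperature, 2), round(T_amp_30K, 2))  # 4.57 6.93
```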
As we are going to see in section \[sec:room-temperature\], the room–temperature amplifier components have been chosen and ordered along the readout chain in such a way that the 1 dB gain compression points are matched, and therefore the only constraint on $P_{read}$ is given by the first amplifier: the LNA. OLIMPO KIDs have been designed to be fed with a bias power of about each, and with a resonance depth of about . This means that, considering an average of 65 detectors per readout line, the total readout power at the LNA input is $P_{read}=\SI{-72}{dBm}$, below the gain compression point of . Therefore, using equation \[eq:NEP\_amp\], we evaluated the LNA NEPs, collected in table \[tab:NEP\_amp\], which are lower than the generation–recombination ones of table \[tab:NEP\_gr\]. The two LNAs have thus been safely mounted on the shield of the OLIMPO cryostat. [cc]{} Channel&${\rm NEP}_{amp}$\ $\left[{\rm GHz}\right]$&$\left[{\rm W}/\sqrt{{\rm Hz}}\right]$\ 150&$9.3\times 10^{-18}$\ 250&$7.6\times 10^{-18}$\ 350&$4.1\times 10^{-18}$\ 460&$3.0\times 10^{-18}$\ The cryogenic coaxial cables complete the *cold electronics*. We equipped the OLIMPO cryostat with coaxial cables made of three different materials, for the different thermal jumps: stainless steel from to and *vice versa*, Cu–Ni from to and *vice versa*, and Nb–Ti from to and *vice versa*. In particular, the bias line from to is composed of a long stainless steel cable, a long Cu–Ni cable, and a long Nb–Ti cable; the readout line from to is composed of a long Nb–Ti cable, a long Cu–Ni cable, and a long stainless steel cable. The power losses, in terms of $S_{21}$ parameter, have been measured with the VNA at room–temperature and are shown in the *right panel* of figure \[fig:LNA\]. The maximum power loss of the bias line, in the frequency range of interest, at room–temperature is about 8 dB, which certainly decreases at cryogenic temperatures.
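Equation \[eq:NEP\_amp\] can be cross–checked for the 150 GHz channel: using ${\rm NEP}_{g-r}=2\Delta\sqrt{N_{qp}/\tau_{qp}}$ to eliminate the unmeasured ratio $N_{qp}/\tau_{qp}$, one gets ${\rm NEP}_{amp}={\rm NEP}_{g-r}^{2}/(4\Delta)\cdot\sqrt{k_{B}T_{amp}/P_{read}}$. A minimal sketch with our own SI constants:

```python
import math

# Cross-check of eq. NEP_amp for the 150 GHz channel, eliminating the
# unknown N_qp/tau_qp through the generation-recombination NEP.
k_B = 1.380649e-23                  # J/K
Delta = 1.764 * k_B * 1.31          # J, half Cooper-pair binding energy (Tc = 1.31 K)

NEP_gr = 3.1e-17                    # W/sqrt(Hz), 150 GHz channel (tab:NEP_gr)
T_amp = 6.93                        # K, extrapolated LNA noise temperature
P_read = 10 ** (-72.0 / 10) * 1e-3  # W, total readout power at the LNA input

NEP_amp = NEP_gr**2 / (4 * Delta) * math.sqrt(k_B * T_amp / P_read)
print(f"{NEP_amp:.2e}")             # ~9.3e-18 W/sqrt(Hz), as in tab:NEP_amp
```

The result reproduces the tabulated $9.3\times10^{-18}\,{\rm W}/\sqrt{{\rm Hz}}$ to within rounding.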
![\[fig:LNA\] *Left panel*: Measured data (*black* and *blue* dots) and fit (*red solid line*) of the noise temperature at the LNA input as function of the operation temperature. *Right panel*: $S_{21}$ parameter measured for the coaxial cables at room–temperature. *Dotted lines* are for stainless steel cables, *dot–dash lines* are for Cu–Ni cables, *dashed lines* are for Nb–Ti cables, and *solid lines* represent the whole lines. Different colors indicate the different lines where the cables are.](amplificatore.pdf "fig:") ![\[fig:LNA\] *Left panel*: Measured data (*black* and *blue* dots) and fit (*red solid line*) of the noise temperature at the LNA input as function of the operation temperature. *Right panel*: $S_{21}$ parameter measured for the coaxial cables at room–temperature. *Dotted lines* are for stainless steel cables, *dot–dash lines* are for Cu–Ni cables, *dashed lines* are for Nb–Ti cables, and *solid lines* represent the whole lines. Different colors indicate the different lines where the cables are.](1.pdf "fig:") Along the bias line, we have inserted three cryogenic power attenuators of magnitude each, thermalized at so that their contribution to the total noise is negligible. This is necessary to guarantee a large dynamic range for the bias power sweeps on the resonators, essential to find the optimal working point, while remaining in the linear regime of the LNA. Precisely, remembering that the LNA gain compression point is , which means per tone at the detector array input, and since our *room–temperature readout electronics* can send a maximum total power of , which means per tone, we need to attenuate . To first approximation, we have seen that the bias line attenuates at most , and the bias room–temperature coaxial cable (between the *room–temperature readout electronics* and the cryostat) attenuates 8 dB; therefore we have to attenuate at least .
\[sec:room-temperature\]*Room–temperature readout electronics* ============================================================== The detection system is completed by the *room–temperature electronics*. Our FPGA consists of a ROACH–2 board[^1], coupled to a MUSIC DAC/ADC board[^2], the firmware of which has been developed by ASU, and is able to generate up to 1000 tones over a bandwidth, with a demodulated output sampling rate up to about [@Gordon2016]. Since the resonant frequencies of the OLIMPO resonators include values higher than , the electronics has to be equipped with up–conversion and down–conversion microwave components. The block diagram of such electronics is shown in the *left panel* of figure \[fig:roach\]. The microwave components have been chosen so as to optimize the bias–readout electronics in terms of noise, bias and readout power, and to work in the linear regime of the amplifier components (amplifiers and demodulator). The IQ modulator requires as input the $I$ and $Q$ signals and their $\pi$–phase–shifted copies, offset to positive values through four bias tees. The maximum total power delivered at the IQ modulator output is , and the noise floor is . The room–temperature amplifiers have been selected and located along the readout line in such a way that the power budget allows them and the IQ demodulator to work in the linear regime (the power at the input of such components is lower than the 1 dB gain compression point at the input) and that the total noise figure at the demodulator output is as low as possible. Table \[tab:warm\_amp\] shows the specifications of the two room–temperature amplifiers and the IQ demodulator, which result in a total gain ${\rm G}=\SI{46.9}{dB}$ and a total noise figure ${\rm NF}=\SI{0.44}{dB}$, both estimated at the demodulator output.
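These totals can be checked numerically from the per–stage specifications of table \[tab:warm\_amp\], cascading the stages with the standard Friis formula; a minimal sketch:

```python
import math

# Friis cascade for the room-temperature chain, using the per-stage
# values of table tab:warm_amp, to check the quoted totals.
stages = [(20.0, 0.4),    # Amp. 1: gain [dB], noise figure [dB]
          (22.5, 2.7),    # Amp. 2
          (4.4, 13.2)]    # IQ demodulator

G_total_dB = sum(G for G, _ in stages)               # -> 46.9 dB

n_total, g_prod = 0.0, 1.0
for i, (G_dB, NF_dB) in enumerate(stages):
    n = 10.0 ** (NF_dB / 10.0)                       # linear noise factor
    n_total += n if i == 0 else (n - 1.0) / g_prod   # Friis cascade term
    g_prod *= 10.0 ** (G_dB / 10.0)                  # cumulative linear gain

NF_total_dB = 10.0 * math.log10(n_total)             # -> ~0.44 dB
expected_noise = -174.0 + G_total_dB + NF_total_dB   # dBm/Hz at the output

print(round(G_total_dB, 1), round(NF_total_dB, 2), round(expected_noise, 2))
# 46.9 0.44 -126.66
```

The first stage dominates the cascade, which is why the 13.2 dB noise figure of the IQ demodulator barely affects the total.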
The NF has been calculated as $${\rm NF}=10\log_{10}\left(n_{1}+\sum_{i=2}^{3}\frac{n_{i}-1}{\prod_{j=1}^{i-1}g_{j}}\right)\;,$$ where $n_{i}=10^{{\rm NF}_{i}/10}$, $g_{j}=10^{{\rm G}_{j}/10}$, $i,j=1$ is Amp. 1, $i,j=2$ is Amp. 2, and $i,j=3$ is the IQ demodulator. [ccccc]{} &Gain (G)& 1 dB compression point & 1 dB compression point &Noise Figure (NF)\ &$\left[{\rm dB}\right]$&@ input $\left[{\rm dBm}\right]$& @ output $\left[{\rm dBm}\right]$&$\left[{\rm dB}\right]$\ Amp. 1&20&2&22&0.4\ Amp. 2&22.5&$-$9.5&13&2.7\ IQ demodulator&4.4&13&17.4&13.2\ For this electronics, we measured the loopback noise by closing it on a power attenuator of , in such a way to have at the Amp. 1 input (total power of at the output of the IQ modulator), similar to that we expect at the output of the LNA (), attenuated by the cryogenic and room–temperature readout line (). In this way, the noise floor of the IQ modulator is reduced to the room–temperature thermal noise: . The measured loopback noise, shown in the *right panel* of figure \[fig:roach\] compared to the OLIMPO detector array noises, is $\left(-117\pm1\right)\SI{}{dBc/Hz}$, which means $\left(-126.1\pm1.0\right)\SI{}{dBm/Hz}$, for both the $I$ and the $Q$ channels. The expected noise can be estimated as $${\rm Noise}=\SI{-174}{dBm/Hz}+{\rm G}+{\rm NF}=\SI{-126.6}{dBm/Hz}\;,$$ which is compatible with the measured one. Therefore the microwave components have been correctly optimized, and the *readout electronics* noise is indeed lower than that measured on the four OLIMPO detector arrays, as shown in the *right panel* of figure \[fig:roach\]. ![\[fig:roach\] *Left panel*: Block diagram of the *room–temperature electronics*. The ROACH–2 and MUSIC DAC/ADC boards are completed by a set of microwave components to form the bias–readout electronics.
*Right panel*: Measured noises of the *room–temperature electronics* (*black lines*) closed on a attenuator (loopback mode) and of the OLIMPO detector arrays (*color lines*), for both the $I$ (*solid lines*) and $Q$ (*dashed lines*) channels.](block_ROACH.pdf "fig:") ![\[fig:roach\] *Left panel*: Block diagram of the *room–temperature electronics*. The ROACH–2 and MUSIC DAC/ADC boards are completed by a set of microwave components to form the bias–readout electronics. *Right panel*: Measured noises of the *room–temperature electronics* (*black lines*) closed on a attenuator (loopback mode) and of the OLIMPO detector arrays (*color lines*), for both the $I$ (*solid lines*) and $Q$ (*dashed lines*) channels.](noises.pdf "fig:") Conclusion ========== We have designed, optimized, fabricated and characterized four arrays of horn–coupled LEKIDs, able to work in the OLIMPO experiment. In this paper we have focused our attention on the optimization process of the *cold* and *room–temperature electronics*. The *cold electronics* has been optimized in terms of gain, noise, and operation temperature of the LNA; power attenuation and temperature thermalization of the coaxial wiring; and magnitude of the power attenuators. The *room–temperature electronics* has been optimized in such a way that the generated tones were at the resonant frequencies of the KIDs; the signal powers were within the dynamic range of each microwave component; and the loopback noise was lower than the one measured for the detector arrays. All the work has been done by extrapolating the LNA noise at 30 K from measurements at different temperatures; and by measuring the $S_{21}$ parameter of the coaxial cables, the loopback noise of the *room–temperature electronics*, and the noise of the KID arrays. We found that the LNA noise is lower than the expected KID noise, and that the loopback noise of the *room–temperature electronics* is lower than the measured KID noise.
[^1]: https://casper.berkeley.edu/wiki/ROACH2 [^2]: https://casper.berkeley.edu/wiki/MUSIC\_Readout
--- abstract: 'High-resolution simulations of supermassive black holes in isolated galaxies have suggested the importance of short ($\sim$10 Myr) episodes of rapid accretion caused by interactions between the black hole and massive dense clouds within the host. Accretion of such clouds could potentially provide the dominant source for black hole growth in high-z galaxies, but it remains unresolved in cosmological simulations. Using a stochastic subgrid model calibrated by high-resolution isolated galaxy simulations, we investigate the impact that variability in black hole accretion rates has on black hole growth and the evolution of the host galaxy. We find this clumpy accretion to more efficiently fuel high-redshift black hole growth. This increased mass allows for more rapid accretion even in the absence of high-density clumps, compounding the effect and resulting in substantially faster overall black hole growth. This increased growth allows the black hole to efficiently evacuate gas from the central region of the galaxy, driving strong winds up to $\sim$2500 km/s, producing outflows $\sim$10x stronger than the smooth accretion case, suppressing the inflow of gas onto the host galaxy, and suppressing the star formation within the galaxy by as much as a factor of two. This suggests that the proper incorporation of variability is a key factor in the co-evolution between black holes and their hosts.' author: - | C. DeGraf$^{1}$ A. Dekel$^{1}$, J. Gabor$^{2}$, F. 
Bournaud$^{2}$\ [1]{} [Center for Astrophysics and Planetary Science, Racah Institute of Physics, The Hebrew University, Jerusalem 91904 Israel]{}\ [2]{} [CEA-Saclay, 91190 Gif-sur-Yvette, France]{} bibliography: - 'astrobibl.bib' date: Submitted to MNRAS title: Black hole growth and AGN feedback under clumpy accretion --- quasars: general — galaxies: active — black hole physics — methods: numerical — galaxies: haloes Introduction {#sec:Introduction} ============ Observations suggest that supermassive black holes are to be found at the centers of most galaxies [@KormendyRichstone1995], and properties of the black hole and the host galaxies are strongly correlated [@Magorrian1998; @FerrareseMerritt2000; @Gebhardt2000; @Tremaine2002; @Novak2006; @GrahamDriver2007; @Cattaneo2009; @KormendyHo2013; @McConnellMa2012]. These correlations suggest that the growth of a black hole and the evolution of its host galaxy influence one another. As such, black holes provide a means to better understand the evolution of galaxies, and may provide a key aspect to this evolution. One of the most common explanations for this correlation is that quasar feedback from the central black hole may influence the host galaxy [e.g. @BurkertSilk2001; @Granato2004; @Sazonov2004; @Springel2005; @Churazov2005; @KawataGibson2005; @DiMatteo2005; @Bower2006; @Begelman2006; @Croton2006; @Malbon2007; @CiottiOstriker2007; @Sijacki2007; @Hopkins2007; @Sijacki2009; @DiMatteo2012; @DeGraf2012; @Dubois2013a; @Dubois2013b]. This feedback energy may be sufficient to unbind gas within the galaxy, driving strong outflows [@SilkRees1998; @WyitheLoeb2003]. Observations of galactic-scale outflows have been made [e.g. @Fabian2006; @Spoon2013; @Veilleux2013; @Cicone2014], showing that such outflows certainly exist. 
Furthermore, there is evidence that the strongest velocities are located in the central-most region of the galaxy [@Rupke2005; @RupkeVeilleux2011], possibly suggesting that the driving force behind them is indeed a centrally-located AGN rather than more widely-distributed feedback sources such as stars and supernovae. Driving these large-scale outflows necessarily requires a large energy output from the AGN, which in turn requires a significant source of gas which can reach the black hole at the galactic center. The angular momentum loss required for this infall can pose a challenge. One of the more commonly-posed explanations is that a gas-rich merger can drive gas toward the black hole. Theoretical work suggests that mergers should drive significant AGN activity [e.g. @Hernquist1989; @DiMatteo2005; @Hopkins2005d; @Hopkins2005b; @Hopkins2008; @Johansson2009; @Debuhr2010; @Debuhr2011] and some observations support this [@Ellison2011]. However, there have also been many studies which find that, although mergers may drive some AGN activity, the majority of AGN are found in isolated galaxies [@Schmitt2001; @ColdwellLambas2006; @Grogin2005; @Georgakakis2009; @Gabor2009; @Cisternas2011; @Kocevski2012], suggesting that an alternate, secular mechanism may be the primary driving force in AGN activity. Theoretical work has suggested that in high-z, gas-rich galaxies, violent disk instabilities can drive gas inflow and produce dense clumps of gas which can be driven in toward the galactic center [@Dekel2009b; @Ceverino2010; @Bournaud2011; @Mandelker2014], which may be a primary cause of AGN activity [@Bournaud2012]. In a companion paper, @GaborBournaud2013 used high (6 pc) resolution simulations to show that accretion onto black holes in gas-rich galaxies can be highly variable, with strong bursts of accretion caused by dense infalling gas clouds. 
These accretion events were found to generate strong outflows, but without significant effect on the host galaxy [@GaborBournaud2014], at least over short ($\sim 100$ Myr) timescales and in the absence of cosmological gas flows and mergers. In this paper we investigate the impact of periodic bursts of accretion on the growth of black holes and the corresponding effect they have on the host galaxy in a cosmological context, in which the black holes grow by several orders of magnitude (spanning both quiescent AGN phases and stronger quasar phases of extended Eddington growth). We use zoom-in simulations to achieve $\sim 100$ pc resolution for galaxies in a cosmological environment, utilizing a stochastic subgrid model to incorporate the accretion of unresolved high-density gas clouds. We investigate how, in the context of cosmological gas inflow and galaxy mergers, the inclusion of periodic, high-accretion events affects black hole growth, and the impact this has on the host galaxy morphology and star formation rate, and on galactic gas inflow and outflow. The paper is organized as follows: In Section \[sec:Method\] we describe the simulations used and detail the subgrid model for the periodic accretion bursts. In Section \[sec:bhgrowth\] we investigate the impact of these periodic accretion bursts on black hole growth. In Section \[sec:host\] we show how AGN feedback from these accretion bursts can affect the host, specifically host morphology (\[sec:hostmorphology\]), gas properties of the host (\[sec:gas\_impact\]), and gas inflows/outflows (\[sec:inflow\_outflow\]). In Section \[sec:earlytime\] we compare the impact at earlier times, providing a more direct comparison to the high-resolution isolated galaxy run. Finally, we summarize our results in Section \[sec:Conclusions\]. 
Method {#sec:Method} ====== RAMSES Code {#sec:RAMSES} ----------- For this work we ran cosmological zoom-in simulations using the Adaptive Mesh Refinement (AMR) code RAMSES [@Teyssier2002], which uses particles (acting as a collisionless fluid) to model dark matter and stars, while gas is modeled by solving the hydrodynamic equations on a cubic grid of cells which vary in size. This code incorporates cooling, star formation, stellar feedback, and black holes. Cooling is performed as a sink term in the thermal energy of the gas. We allow gas to cool to a minimum temperature floor of $10^4$ K, together with a density-dependent temperature floor requiring that the local Jeans length always be resolved by at least 4 grid cells [see, e.g., @Truelove1997]. Star formation is performed in gas cells above the critical density $n_H > 0.1 \: \rm{cm}^{-3}$. The star formation rate is $\dot{\rho} = \epsilon_* \rho_{\rm{gas}}/t_{ff}$, where $\rho_{\rm{gas}}$ is the gas density in the cell, $t_{ff} = (3 \pi/32G\rho_{\rm{gas}})^{1/2}$ is the local free-fall time of the gas, and $\epsilon_* = 0.01$ is the star formation efficiency [@Kennicutt1998; @KrumholzTan2007]. New star particles are then formed stochastically according to the star formation rate of the cell [@RaseraTeyssier2006], initially given the position and velocity of the host cell, but uncoupled from the cell. Supernova feedback is modeled by depositing $20\%$ of a star particle's initial mass into the local cell 10 Myr after formation. The energy released is $10^{50} \rm{erg}/M_\odot$, which is deposited thermally onto the gas. We use the same supermassive black hole prescription as @GaborBournaud2013 [see also @Dubois2012]. Black holes are represented as sink particles, seeded into cells whose densities surpass $n_H > 1 \rm{cm}^{-3}$, with an initial mass of $M_{\rm{seed}}=10^5 M_\odot$.
Rather than representing the initial formation of an unresolved seed, this mass is broadly consistent with multiple mechanisms for seed formation, e.g. collapse of PopIII stars [e.g. @BrommLarson2004; @Yoshida2006] or direct collapse of massive gas clouds [e.g. @BrommLoeb2003; @Begelman2006], followed by sufficient growth to reach $M_{\rm{seed}}$. We also prevent black holes from forming within 25 kpc of another BH, thereby preventing multiple BHs from forming within the same galaxy. Once seeded, the black hole grows through gas accretion and BH-BH mergers. Gas accretion is modeled as $$\dot{M}_{\rm{BH}}=(4 \alpha \pi G^2 M_{\rm{BH}}^2 \rho)/(c_s^2+v_{\rm{rel}}^2)^{3/2} \label{eqn:bondi}$$ [@HoyleLyttleton1939; @BondiHoyle1944; @Bondi1952], where $\rho$ is the gas density, $c_s$ is the sound speed of the gas, $v_{\rm{rel}}$ is the velocity of the black hole relative to the gas (calculated within a sphere of 4$r_{\rm{min}}$, where $r_{\rm{min}}$ is the minimum resolution element of the simulation), and $\alpha = (\rho/\rho_0)^2$ for $\rho > \rho_0$ and $\alpha=1$ for $\rho < \rho_0$ [@BoothSchaye2009]. To prevent unphysically high accretion rates, we cap $\dot{M}_{\rm{BH}}$ at the Eddington limit $$\dot{M}_{\rm{edd}} = (4 \pi G M_{\rm{BH}} m_p)/(\epsilon_r \sigma_T c) \label{eqn:edd}$$ [@Eddington1916], where $m_p$ is the mass of a proton, $\sigma_T$ is the Thomson scattering cross section, $c$ is the speed of light, and $\epsilon_r$ is the radiative efficiency for the accreting gas, assumed to be 0.1 [@ShakuraSunyaev1973]. Black hole feedback is accomplished using a thermal feedback model, depositing $\dot{E}_{\rm{BH}} = \epsilon_f \epsilon_r \dot{M}_{\rm{BH}} c^2$ [$\epsilon_f = 0.15$ is the feedback efficiency, selected to reproduce the scaling relations between the black hole and the host galaxy, see @Dubois2012] onto the gas within $4 r_{\rm{min}}$ of the BH.
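A minimal sketch of the capped accretion recipe of equations \[eqn:bondi\] and \[eqn:edd\] follows; the cgs constants and example values are our own choices, and this is an illustration rather than the RAMSES implementation:

```python
import math

# Sketch of the capped Bondi-Hoyle accretion rate (eqs. bondi and edd).
G = 6.674e-8          # cm^3 g^-1 s^-2, gravitational constant
c = 2.998e10          # cm/s, speed of light
m_p = 1.6726e-24      # g, proton mass
sigma_T = 6.652e-25   # cm^2, Thomson cross section
eps_r = 0.1           # radiative efficiency

def alpha_boost(rho, rho_0):
    """Density-dependent boost: (rho/rho_0)^2 above rho_0, else 1."""
    return (rho / rho_0) ** 2 if rho > rho_0 else 1.0

def bondi_rate(M_bh, rho, c_s, v_rel, alpha=1.0):
    return 4.0 * math.pi * alpha * G**2 * M_bh**2 * rho \
        / (c_s**2 + v_rel**2) ** 1.5

def eddington_rate(M_bh):
    return 4.0 * math.pi * G * M_bh * m_p / (eps_r * sigma_T * c)

def accretion_rate(M_bh, rho, c_s, v_rel, alpha=1.0):
    """Bondi rate capped at the Eddington limit."""
    return min(bondi_rate(M_bh, rho, c_s, v_rel, alpha),
               eddington_rate(M_bh))

M_seed = 1e5 * 1.989e33   # g, the seed mass used in the simulations
print(f"Eddington cap for the seed: {eddington_rate(M_seed):.1e} g/s")
```

For the $10^5\,M_\odot$ seed this cap is about $1.4\times10^{23}$ g/s, i.e. a few $10^{-3}\,M_\odot$/yr, so dense clumps mostly matter for pushing the Bondi term up toward this ceiling.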
To prevent instantaneous overcooling of the gas (which will tend to happen at low temperatures, where the cooling rate is high), we only deposit this energy if it is sufficient to heat the gas to at least $10^7$ K, otherwise the energy is stored until this threshold can be reached [@BoothSchaye2009]. To prevent unphysically high temperatures, if the thermal feedback is sufficient to heat the gas in excess of $5 \times 10^9$ K, the injection region is iteratively expanded until the resulting temperature will be below this level. Clumpy Accretion model {#sec:clumpymodel} ---------------------- The key modification to the black hole treatment in this paper is the incorporation of unresolved high-density gas clouds. @GaborBournaud2013 found that the accretion of high-density clouds of gas could be the dominant factor in black hole growth (at least among gas-rich, high-z galaxies), based on isolated galaxy simulations with 6 pc resolution. Because these clouds of gas are only $\sim 100-300$ pc in radius, they remain unresolved in the majority of cosmological simulations. To investigate the effect of using resolution more typical for cosmological runs, we re-ran the M4f50 run from @GaborBournaud2013 at the lower resolution of 100 pc. In Figure \[fig:resolutioncompare\] we show the comparison between the 6 pc resolution (black) and 100 pc resolution (red) runs. On few-timestep scales, the high-resolution simulation exhibits more variation, as may be expected. In addition to the general variability, we note two main differences. First, the high-resolution simulation has several periods of high accretion on the order of 5-10 Myr. Second, in the absence of these accretion events, the low-resolution simulation tends to accrete more rapidly, by a factor of $\sim 2.5-3$. In the bottom panel of Figure \[fig:resolutioncompare\] we show the black hole growth for both the high- (black) and low- (red) resolution runs. 
Here we see that the short accretion events contributing the majority of the black hole growth in the high-resolution simulation are missed in the low-resolution run, leaving the black hole at a much smaller final mass. Given the importance of these high-density gas clouds on the black hole growth, we incorporate a subgrid prescription to the accretion rate to boost the accretion as if a high-density gas cloud were able to be resolved. We use a simple stochastic prescription for our model. For any timestep in which a black hole is not already undergoing a burst of accretion, we allow for a new event to begin with a probability of $p_{\rm{burst}}$. Each such event causes the accretion rate of the black hole to increase following a Gaussian profile, with a characteristic timescale ($\sigma_{\rm{burst}}$) and amplitude ($A_{\rm{burst}}$). We use the high- and low- resolution runs (shown in Figure \[fig:resolutioncompare\]) to calibrate the values of these parameters. We do this by fitting Gaussians to the rate of increase in the ratio between the accretion rates of the two simulations, finding four events which occur during the comparison period. From this, we incorporate four possible clump accretion events to our simulation, which each occur once in the 85 Myr high-resolution run. These four events occur with amplitude $A_{\rm{burst}}=26.5, 6.22, 5.56, 4.66$, with timescales of $\sigma_{\rm{burst}}=1.83, 1.3, 0.4, 1.5$ Myr. To account for the slower accretion during the smooth period (i.e. in the absence of a dense clump), we decrease the smooth accretion rate by a factor of $\sim 2.6$ (matching the discrepancy in Figure \[fig:resolutioncompare\]). The model calibration is intended to give the lower resolution cosmological run a periodicity comparable to that of the high-resolution run that fully resolves high-density clouds. 
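The stochastic prescription above can be sketched as follows. This is a schematic reading of the model, assuming the Gaussian boost multiplies the (suppressed) smooth rate and that each burst draws one of the four calibrated events; the value of $p_{\rm{burst}}$ and the exact bookkeeping in the code may differ:

```python
import math
import random

# Calibrated burst events from the text: (A_burst, sigma_burst in Myr)
BURST_EVENTS = [(26.5, 1.83), (6.22, 1.3), (5.56, 0.4), (4.66, 1.5)]
SMOOTH_SUPPRESSION = 2.6  # smooth accretion reduced by this factor

def boost_factor(t, t_burst, amp, sigma):
    """Gaussian accretion boost centered on the burst time (Myr)."""
    return 1.0 + (amp - 1.0) * math.exp(-0.5 * ((t - t_burst) / sigma) ** 2)

def accretion_rate(mdot_bondi, t, active_burst):
    """Smooth (suppressed) rate, boosted while a clump event is active."""
    mdot = mdot_bondi / SMOOTH_SUPPRESSION
    if active_burst is not None:
        t_burst, amp, sigma = active_burst
        mdot *= boost_factor(t, t_burst, amp, sigma)
    return mdot

def maybe_start_burst(t, p_burst, rng=random):
    """With probability p_burst per step, draw one of the calibrated events."""
    if rng.random() < p_burst:
        amp, sigma = rng.choice(BURST_EVENTS)
        return (t, amp, sigma)
    return None
```

At the peak of the strongest event the rate is boosted by the full factor of 26.5, which is what pushes the black hole to the Eddington cap during bursts.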
Given the limited sample size of a single isolated galaxy over a relatively short timescale (for cosmological runs), this will not be a completely accurate parameterization, particularly since it does not depend on the various properties of the host. For this paper, we investigate the impact that periodic accretion can have on the black hole growth and on the host, for which this parameterization is sufficient. Having demonstrated the importance of including such periodicity here, we leave a full parameter study of how accretion of high-density clumps depends on host properties for a followup work. ![Resolution dependence of black hole growth. *Top:* Eddington fraction ($\dot{M}_{\rm{BH}}/\dot{M}_{\rm{edd}}$) for black hole in isolated galaxy simulation using 6 pc resolution (black) and 100 pc resolution (red). Dashed line shows the Eddington limit. *Bottom:* Black hole growth (as a percentage of its initial mass) over the course of the simulations. Lowering the resolution smooths out the highest peaks and leads to significantly lower BH growth.[]{data-label="fig:resolutioncompare"}](plots/resolution_compare.pdf){width="8cm"} Zoom-in simulations ------------------- For this paper we run a set of zoom-in simulations within a $10$ Mpc box. We assume a $\Lambda$CDM cosmology with cosmological parameters consistent with @PlanckCosmological2013: $\Omega_\Lambda = 0.68$, $\Omega_m=0.32$, $\Omega_b=0.05$, and $H_0=67$ km/s/Mpc. Although these results are not fully consistent with the WMAP results [@Komatsu2011], our investigation is based on a comparison of individual objects between two simulation runs, so our results should remain consistent regardless of the exact cosmology used. Within the base 10 Mpc box, we resolve a zoom region of $\sim$1 Mpc about the largest black hole (based on a low-resolution test run), which reaches $\sim 10^7 M_\odot$ by $z=6$. 
We resolve the zoom region to a maximum refinement level of 17, providing a maximum spatial resolution of $\sim 50$ pc. Given this base simulation setup, we run the same set of initial conditions using three versions of the code. The ClumpyAccretion run includes our full black hole treatment, including the subgrid model for accretion of high-density clumps described in Section \[sec:clumpymodel\]. The SmoothAccretion run includes black holes, but using the standard accretion model described in Section \[sec:RAMSES\]. Note that we refer to this model as the SmoothAccretion since it lacks the periodic bursts of accretion caused by unresolved gas clouds, but the black hole accretion rate nonetheless varies based on the resolved gas properties around it. Finally, the NoBH run is the base run which does not include black holes at all. The primary analysis of all runs was performed with the yt data analysis toolkit [@YT2011]. Black Hole Growth {#sec:bhgrowth} ================= ![Growth of our primary black hole in the clumpy-accretion model (black) and smooth-accretion model (red). Green arrows mark the onset of an extended Eddington regime. Black hole mass builds up earlier in the clumpy accretion case, but also leads to a lower final mass.[]{data-label="fig:bh_growth"}](plots/BH2_comparegrowthhistory.pdf){width="8cm"} ![The growth of our primary black hole in the clumpy-accretion model, showing the contribution to the accreted mass from accretion of dense clumps (red) and smooth infall between clump events (blue). The relative importance of the clumpy component of accretion is strongest just as the black hole reaches Eddington at z $\sim$11.[]{data-label="fig:clumpy_growth_representation"}](plots/BH2_growthhistory.pdf){width="8cm"} In Figure \[fig:bh\_growth\] we show the growth of our primary black hole in both the clumpy-accretion (black) and the smooth-accretion (red) runs, clearly showing a dramatically different growth history. 
In both simulations, the black hole follows the typical growth behavior found in cosmological simulations [e.g. @DiMatteo2008; @DeGrafBHgrowth2012]: it undergoes an initial sub-Eddington growth phase, followed by an extended period of Eddington growth, and upon reaching a high enough mass (relative to its host), self-regulation kicks in, dramatically slowing the growth of the black hole. The main difference between the runs is the onset time of the Eddington growth phase, which occurs much sooner in the clumpy-accretion model. In the smooth-accretion model, the sub-Eddington phase is very long-lasting. Without the added accretion from the dense clumps of gas, the black hole takes until $z \sim 8$ to grow massive enough to reach the Eddington regime. In contrast, the clumpy-accretion model reaches the Eddington regime around $z \sim 10-11$, and has already reached the self-regulation regime by $z \sim 8$. This substantial difference is due to the periods of clump accretion providing short time-scale bursts of Eddington accretion during the sub-Eddington regime. In Figure \[fig:clumpy\_growth\_representation\] we divide the total accreted matter (black) from the clumpy accretion run into two components: the accreted mass during clump-accretion events (blue) and in the absence of clumps (i.e. during smooth accretion; red). From these curves it appears that the accretion of clumps plays a relatively minor role. However, this conclusion neglects two important factors. First, the total mass gained during clump accretion is not a meaningful quantity, since the majority of growth occurs during the extended Eddington phase. During this phase, the accretion rate is capped at $\dot{M}_{\rm{edd}}$, and thus an incoming clump will not provide any increase in the accretion rate. For this reason, the meaningful quantity to consider is the mass gained via clump accretion prior to the onset of Eddington accretion. 
Based on this, we see that the black hole has gained approximately half of its mass through clump accretion near the onset of the Eddington phase (z $\sim$11), demonstrating a significant impact. Even this check underestimates the importance of the clumps, however, as it neglects the exponential nature of the black hole growth. Because the smooth accretion rate depends upon $M_{\rm{BH}}^2$, modest increases in mass at early times (such as those caused by early clump accretion events) have an exponential impact on the continued growth of the black hole, which is what causes the dramatic differences between the two simulations in Figure \[fig:bh\_growth\]. Thus we note that relatively minor differences at very early times can significantly affect the late-time behavior of the black hole. This ability to drive rapid growth at early times may be of significant importance to the seeding mechanisms for supermassive black holes. Using a standard Bondi-like accretion rate, a very low-mass black hole (e.g. a $10^2 M_\odot$ seed from a PopIII star) will tend to accrete relatively slowly. This can present a problem when attempting to reach the high masses seen in observations [such as the $10^9 M_\odot$ BH found at $z \sim 7$ by @Mortlock2011]. However, the bursts of accretion provided by high-density clouds can produce substantially more rapid growth among small, early BH seeds. Initial tests suggest that black holes seeded at masses of $\sim 10^3 M_\odot$ can still grow to $\sim 10^7 M_\odot$ by $z \sim 7$, which will provide more flexibility in the seeding prescriptions used in cosmological simulations. Furthermore, the early growth of a black hole can be highly sensitive to the seeding prescription, particularly the seed mass. Although the final mass (maintained via self-regulation) may be relatively insensitive to the seeding prescription, the evolution to that final mass may be significantly different. 
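This sensitivity can be made concrete with a toy integration of the two growth laws, assuming constant gas properties. Under $\dot{M} = kM^2$ (Bondi-like) the time to grow from $M_0$ to $M_1$ is $(1/M_0 - 1/M_1)/k$, while under Eddington-limited growth $\dot{M} = M/\tau$ it is $\tau \ln(M_1/M_0)$; the normalizations $k$ and $\tau$ are arbitrary and cancel when comparing seeds:

```python
import math

def t_bondi(m0, m1, k=1.0):
    """Time to grow from m0 to m1 under dM/dt = k*M**2 (constant gas)."""
    return (1.0 / m0 - 1.0 / m1) / k

def t_eddington(m0, m1, tau=1.0):
    """Time to grow from m0 to m1 under dM/dt = M/tau (Eddington-limited)."""
    return tau * math.log(m1 / m0)

# Compare a 5e4 Msun seed to a 1e5 Msun seed reaching 10^5.5 Msun
m_target = 10 ** 5.5
ratio_bondi = t_bondi(5e4, m_target) / t_bondi(1e5, m_target)
ratio_edd = t_eddington(5e4, m_target) / t_eddington(1e5, m_target)
# ratio_bondi ~ 2.5, ratio_edd ~ 1.6
```

The halved seed takes roughly 2.5 times longer under the $M^2$ law but only about 1.6 times longer at Eddington, which is why burst-dominated growth is far less sensitive to the seed mass.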
As Figure \[fig:bh\_growth\] shows, a larger mass early on can result in much faster overall growth. For example, using a seed of $5 \times 10^4 M_\odot$ will take $\sim 2.5$ times longer to reach $10^{5.5} M_\odot$ ($\sim$ when our BH reaches the Eddington regime) than a seed of $10^5 M_\odot$ if we assume Bondi accretion with constant gas properties. However, the clumpy-accretion events tend to occur at the Eddington rate, which depends on $M_{\rm{BH}}$ rather than $M_{\rm{BH}}^2$ (see Equations \[eqn:bondi\] and \[eqn:edd\]), making it much less sensitive to the seed mass. If we assume all the growth is from these bursts at Eddington, the $5 \times 10^4 M_\odot$ seed will only take 1.6 times longer than the $10^5 M_\odot$ seed. Although the actual result will be somewhere between these expectations (and also depend on the evolution of the gas properties), this clearly shows that the incorporation of clumpy accretion has the potential to make the black hole growth much less sensitive to the seed mass. A full study of the impact of clumpy accretion on black hole seeding prescriptions is beyond the scope of this paper, but may prove useful for studies attempting to isolate the formation mechanism for supermassive black hole seeds. Impact on host {#sec:host} ============== Morphology {#sec:hostmorphology} ---------- ![image](plots/sample_projection_67_x_labeled.png){width="17cm"} In Figure \[fig:projection\_67\] we show images of the gas density (top), gas temperature (middle), and stellar density (bottom) of our galaxy in both the clumpy-accretion model (left) and the smooth-accretion model (right). This figure shows the qualitative effect that the clumpy-accretion model has on the environment in terms of general morphology, AGN-driven outflows, and effect on inflowing gas. 
The redshift was selected to highlight an outflow process, but we note that other redshifts after the extended Eddington phase are qualitatively similar (see Section \[sec:earlytime\] for early time comparison). In the density projections, the smooth-accretion model shows that a well-defined disk of cold gas has formed without being disrupted. The clumpy-accretion model, however, shows a disk that is relatively puffed out in all directions, i.e. has a less well-defined disk plane. More striking than this, however, is the central region of the galaxy, which has been evacuated of dense gas, leaving a substantial void of low-density, high-temperature gas surrounding the black hole. This is more clearly seen in Figure \[fig:density\_profile\_comparison\], which shows the gas density profile (solid lines) for the galaxy in both simulations. These profiles show comparable densities above $\sim 1$ kpc (though slightly lower density in the clump-accretion model), but a dramatic difference (up to 2 dex) at smaller scales. Note that the highest resolution cells are $\sim 0.1$ kpc, so the results at the smallest scales are not well-resolved, but the decrease at sub-kpc scales is well within the resolution of the simulation. This clearly demonstrates the ability of the clump-fed AGN to evacuate the gas from the central region of the galaxy, which will necessarily lead to the suppression of the black hole growth (i.e. self-regulation) as well as quench star formation (investigated in more detail in Section \[sec:inflow\_outflow\]). ![Density profiles for the clumpy accretion model (blue) and smooth accretion model (green) at z=7.65. *Solid lines* - gas; *Dashed lines* - stars. Clumpy accretion triggers AGN feedback that lowers the nuclear gas density compared to the smooth accretion case. 
The stellar profile is minimally affected, with the smooth accretion model having slightly more stars than the clumpy accretion model.[]{data-label="fig:density_profile_comparison"}](plots/compare_densityprofile.png){width="9cm"} The temperature maps in Figure \[fig:projection\_67\] also show significant differences, with the clumpy-accretion model showing a hot region surrounding the black hole ($\sim 1$ kpc, corresponding to the evacuated region), outside of which there are regions of hot and cold gas. In contrast, the disk in the smooth-accretion model remains cool with fewer regions of temperature variation. Outside the galaxy, the clumpy-accretion model produces bubbles of hot gas inflating away from the black hole (similar to radio cavities observed in galaxies), showing clear evidence of AGN-driven outflows. These hot bubbles of outflowing gas are completely lacking in the smooth-accretion model. Consistent with the higher-resolution runs of @GaborBournaud2014, despite using a purely isotropic feedback model, the outflows are nearly entirely out-of-plane, though they are not necessarily axisymmetric (see Section \[sec:in\_vs\_out\_of\_plane\] for more details). This anisotropy is purely a result of the local environment, with dense in-plane gas shielding the rest of the disk from the feedback energy, while the relatively low-density out-of-plane gas is effectively driven out. Figure \[fig:projection\_67\] clearly shows the outflows driven almost exclusively in directions of low-density gas, and also shows that resolved cold, dense clumps effectively block the outflows. Unlike the gas density and temperature, the stellar morphology is only weakly affected by the clumpy accretion model. In the bottom panels of Figure \[fig:projection\_67\] we plot the stellar density maps, which show only minimal difference between the two runs. 
The stars in the smooth accretion case are slightly flatter/more elongated than in the clumpy case, which has a more rounded stellar component. This is consistent with the general gas distribution (top panels), and is a fairly small effect. More significantly, we see no evidence of the evacuated region at the center of the galaxy. This is confirmed in Figure \[fig:density\_profile\_comparison\], where the dashed lines show the stellar density profile. We find the smooth accretion model has slightly higher stellar densities, but otherwise the *distribution* of stars is largely unaffected by the AGN feedback, down to the smallest scales. Thus we find, as expected, that the AGN feedback can have a strong impact on the gas, but has no direct effect on the stellar distribution. It can *indirectly* affect the galaxy’s stellar mass by suppressing star formation (resulting in the slightly higher stellar densities in Figure \[fig:density\_profile\_comparison\]), which we investigate in more detail in Section \[sec:inflow\_outflow\]. ![image](plots/radial_properties_67_labeled.png){width="18cm"} Gas properties {#sec:gas_impact} -------------- In addition to the general morphology, we find notable differences in the gas properties within the host galaxy. In Figure \[fig:radial\_properties\_67\] we show the distribution of gas density (top), temperature (middle), and radial velocity (bottom) vs. distance from the galaxy center for all three simulation runs at z$\sim$7.65, matching Figure \[fig:projection\_67\]. Pixel color represents the mass of the gas at the given pixel. First, we note that the difference between the SmoothAccretion and NoBH runs is quite small. The smooth-accretion black hole heats some of the nearby ($\lesssim 3$ kpc) gas to higher temperatures than the NoBH case, and there is some outflowing gas driven at slightly higher velocities, but they are otherwise qualitatively similar. The clumpy-accretion model, however, is substantially different. 
In the density distribution, we see that in the vicinity of the black hole, the very low-density gas ($\sim 10^{-25}-10^{-26} \rm{g/cm}^3$ within $\sim$2 kpc, at the bottom left of the panel) has been completely removed in the clumpy-accretion run. At larger radii, this run has extremely low-density gas ($< 10^{-27} \: \rm{g/cm}^3$) which is completely missing in the smooth-accretion run. This suggests that the bulk of the low-density gas near the black hole was driven away as outflows, and is thus found at larger radii. The temperature distributions in Figure \[fig:radial\_properties\_67\] show a similar picture. Although the bulk of the very cold (and high-density) gas remains, the majority of inner ($< 2$ kpc) cool gas (between $3 \times 10^4$ and $10^6$ K) has been heated to higher temperatures, and there is significantly more hot gas ($> 10^7$ K) at larger radii. This is consistent with the general picture that the nearby gas has been heated to high temperature and driven out to larger radii. In the bottom panel we confirm this high-velocity gas outflow driven by the clumpy-accretion black hole, with high velocities (up to 3000 km/s) maintained out to radii of 8 kpc, compared to the smooth-accretion model where almost no gas exceeds 500 km/s. Figure \[fig:velocity\_properties\_67\] shows the distribution of gas densities and temperatures as a function of radial velocity. Here we can explicitly see that the strongly outflowing gas found in the clumpy-accretion simulation is low-density ($< 10^{-24} \: \rm{g/cm}^3$) and high temperature ($> 3 \times 10^6$ K, and mostly above $10^7$ K). This is consistent with the high-resolution isolated galaxies of @GaborBournaud2014, who similarly found outflows consisting of hot, diffuse gas. Since none of the strongly outflowing gas is at high densities, we deduce that the AGN driven outflows do not directly evacuate the starforming gas, which is dense. 
Nevertheless, there are other means by which the AGN can suppress star formation, which we investigate further in the next section. ![image](plots/velocity_properties_67_labeled.png){width="18cm"} Inflow and outflow rates {#sec:inflow_outflow} ------------------------ ![Gas inflow (blue) and outflow (red) rates as a function of radial distance from the black hole. Clumpy accretion prevents flow into the innermost kpc and drives much stronger outflows out to large scales. []{data-label="fig:radial_flow_rates_67"}](plots/flowrate_total.pdf){width="8cm"} In Figure \[fig:radial\_flow\_rates\_67\] we show the instantaneous gas inflow and outflow rates through spherical shells about the galaxy center. These flow rates are calculated by $\dot{M}= \frac{1}{\Delta x} \sum{m_i v_i}$, where $m_i$, $v_i$ are the mass and radial velocity for each cell $i$ in the spherical shell, and $\Delta x$ is the shell thickness. For thin shells, this is a reasonable approximation. We note that if a sufficiently thin shell is used, the small number of cells contained within it could lead to noisy results. However, despite using very thin shells (only 100 pc thick, comparable to the width of a single cell), the resulting profiles are qualitatively quite smooth, and the results do not depend upon shell thickness. In the smooth-accretion simulation (dashed lines) we find that the inflow rate is nearly an order of magnitude stronger than the outflow rate (except at $< 1$ kpc scales where inflow and outflow are comparable). The exception to this is when a galaxy merger occurs, which provides a localized spike in the inflow rate, often with a corresponding, though much weaker, spike in the outflow rate, due to a gaseous component of the infalling galaxy whose velocity dispersion or circular velocity exceeds the infall velocity. 
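The shell flow-rate estimator $\dot{M}= \frac{1}{\Delta x} \sum{m_i v_i}$ can be sketched as follows; this is a minimal illustration, with the input arrays and function name being our own rather than the analysis code's:

```python
def shell_flow_rates(masses, v_radial, dx):
    """Instantaneous inflow/outflow rates through a thin spherical shell.

    Implements Mdot = (1/dx) * sum(m_i * v_i) for cells in the shell,
    split by the sign of the radial velocity. Returns (inflow, outflow),
    both as positive rates.
    """
    inflow = outflow = 0.0
    for m, v in zip(masses, v_radial):
        if v < 0.0:
            inflow += m * v   # negative radial velocity: toward the center
        else:
            outflow += m * v
    return -inflow / dx, outflow / dx
```

Splitting by the sign of $v_i$ is what separates the blue and red curves in Figure \[fig:radial\_flow\_rates\_67\].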
Excluding the effect of incoming galaxy mergers, the inflow rate remains relatively constant outside $\sim 4$ kpc scales, below which there is often an increase in both the inflow and outflow rates. In contrast, the clumpy-accretion model can have outflow rates comparable to or higher than the inflow rates if the black hole is large enough (by $z \sim 8$ for this black hole), and the outflows extend out to large radii. The lack of decrease in outflow rate beyond $\sim 4$ kpc suggests two things. First, that the majority of the outflowing gas that reaches $\sim 4$ kpc tends to be at or above the escape velocity of the host galaxy (shown to be correct in Figure \[fig:radial\_properties\_67\]), and second, that the majority of outflowing gas that reaches $\sim 4$ kpc is able to continue outward without significant retardation by its environment. This is consistent with Figure \[fig:projection\_67\], which shows that the hot gas tends to expand out of the plane, thereby avoiding the dense in-plane gas that can impede the gas flow. We investigate this directional dependence of the outflows in Section \[sec:in\_vs\_out\_of\_plane\]. We also note that the incoming galaxy (seen as a spike in the inflow rate in each simulation) is notably delayed in the clumpy accretion run. This delay is likely due to the hotter gas environment through which it passes. Since the circumgalactic gas tends to have higher outward velocities, the increased ram pressure is able to more efficiently slow the incoming galactic gas. Although the clumpy accretion run has much stronger outflow rates, Figure \[fig:radial\_flow\_rates\_67\] shows that its inflow rate outside the innermost region is generally comparable to that of the smooth accretion run. 
To investigate the long-term gas inflow onto the galaxy, in Figure \[fig:cumulative\_gas\_flow\] we plot the cumulative gas inflow (blue) and outflow (red) through spherical shells surrounding the central galactic region for both accretion models. This cumulative flow rate is calculated using the instantaneous flow rate at each snapshot, and assuming this rate remains constant until the next snapshot is reached. To avoid having a single thin shell with an unusually high flow rate due to an infalling clump, we take the average flow rate through 10 shells, each 100 pc thick. We show these cumulative curves at radii of 2.5 kpc (top left), 5 kpc (top right), and 10 kpc (bottom left), and a thick-shell curve for flow averaged across all shells between 2 and 10 kpc (bottom right). Considering the outflowing gas in the clumpy accretion model (solid red lines), we see that there is significantly more outflow at 2.5 kpc than at 5 kpc, since some of that gas is slowed down by the gas in the galactic disk. At 5 and 10 kpc, however, we find similar outflow rates across cosmic time, confirming that the bulk of the outflowing gas beyond $\sim 3$ kpc continues to at least 10 kpc without significant deceleration, consistent with the instantaneous outflow rates in Figure \[fig:radial\_flow\_rates\_67\]. In contrast to this, the smooth-accretion model (dashed red line) shows a continued decrease in outflowing gas mass out to larger radii. This is expected, since the much lower outflow velocities (see Figure \[fig:radial\_flow\_rates\_67\]) mean that much less gas from the central region where AGN-driven outflows originate is capable of escaping the potential well, and thus we see the decrease in expelled gas at higher radii. We also show the cumulative gas infall onto the galaxy (blue), where we again find significant differences between the clumpy- and smooth-accretion models. 
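The cumulative flow calculation described above (holding each snapshot's shell-averaged instantaneous rate constant until the next snapshot) can be sketched as follows; the input structure is a hypothetical simplification:

```python
def cumulative_flow(snap_times, shell_rates):
    """Cumulative mass flow from snapshot-wise instantaneous rates.

    shell_rates[k] holds the rates through each of the thin shells
    (e.g. 10 shells, 100 pc thick) at snapshot k; the shell-averaged
    rate is held constant until snapshot k+1. Returns the running
    cumulative total after each interval.
    """
    total = 0.0
    history = []
    for k in range(len(snap_times) - 1):
        dt = snap_times[k + 1] - snap_times[k]
        avg_rate = sum(shell_rates[k]) / len(shell_rates[k])
        total += avg_rate * dt
        history.append(total)
    return history
```

Averaging over several adjacent shells is what suppresses the spurious spikes a single infalling clump would otherwise imprint on one thin shell.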
At early times (prior to z$\sim$8), we find the gas mass accreted onto the host galaxy is consistent between the two models. This is expected since at early times, the AGN feedback should be insufficient to affect the inflowing gas. Once the black hole is massive enough, however, we note that not only does the clumpy-accretion model provide much stronger outflows, it also substantially suppresses the inflow of gas onto the galaxy, which we see beginning at $z \sim 8$ for this galaxy. Note that at 5 kpc it appears to start much earlier, but this is due to a high inflow rate caused by an incoming galaxy in a single snapshot. Ignoring the jump caused by this incoming merger, we again see the increased inflow in the smooth accretion case start at $z \sim 8$, also at 5 kpc. This suppression of inflowing gas correlates directly with the onset of self-regulated growth (Figure \[fig:bh\_growth\]), suggesting that the regulation of black hole growth is correlated not only with expelling gas from the central region, but also limiting the replenishment of this reservoir through inflowing gas. ![image](plots/velocity_angle_67.png){width="17cm"} In addition to the flow rates through spherical shells, Figure \[fig:cumulative\_gas\_flow\] shows the cumulative SFR (green lines). Because of the localization of SFR to the high-density regions, we consider star formation within the spherical region interior to the given radius, rather than within a thin shell at the radius. From these curves we can see that although AGN-driven outflows consist of hot, diffuse gas that does not form stars, the clumpy-accretion AGN nonetheless significantly quenches star formation by nearly a factor of 2. This appears to be in contrast to the results of @GaborBournaud2014 based on isolated-galaxy simulations, who found that despite driving strong outflows, the star formation rate was minimally affected. 
However, this apparent discrepancy is due to a difference in the black hole growth phase being investigated, and accounting for this brings both results into agreement with one another. We find that the quenching of star formation occurs only after the black hole has undergone an extended phase of Eddington limited growth, while the @GaborBournaud2014 investigation used a black hole which is substantially sub-Eddington (except for the bursts due to clump accretion onto the black hole). Compared to their $\sim 100$ Myr simulation in which the black hole only grows by $\sim 15\%$ (with an averaged Eddington fraction of only a few percent), we begin seeing suppression of star formation only after the black hole grows by an order of magnitude at Eddington, and the effect becomes strong only after growing by a factor of $\sim40$. Prior to such extended growth, we are fully consistent with @GaborBournaud2014: our AGN drives strong outflows of hot, diffuse gas that entrain minimal high-density gas and are directed almost entirely out of the galactic plane, with no significant effect on star formation or host morphology (see Section \[sec:earlytime\] for more details). ### Geometry of inflows and outflows {#sec:in_vs_out_of_plane} In Figure \[fig:projection\_67\] we saw that the hot gas driven by the black hole seemed to be strongly directed out of the plane of the galaxy, and in Figure \[fig:radial\_flow\_rates\_67\] we saw that the outflowing material did not significantly slow beyond $\sim 3$ kpc, again suggesting expansion away from the dense galactic gas that could impede its progress. To investigate this directly, we compute the radial mass flow as a function of cos$(\theta)$, where $\theta$ is the angle relative to the polar axis of the galaxy. 
We define the polar axis to be the mass-weighted angular momentum vector of the gas in the central 1 kpc of the galaxy, but we find that these results are not sensitive to the size of the region used to calculate this vector. In Figure \[fig:velocity\_angle\_67\] we show the distribution of gas in terms of radial velocity and cos$(\theta)$, in shells of radius R=2, 4, 6, and 8 kpc and thicknesses of 0.2R, for both clumpy-accretion (top) and smooth-accretion (bottom). Each pixel in $V_R$-cos$(\theta)$ is color-coded by the total mass flux through the shell at the given velocity and angle. In the smooth-accretion model, we see that the strongest outflow velocities tend to be out of the plane, but not substantially so, peaking at $\sim 30$ degrees above/below the plane, while the strongest flow rates (rather than flow velocities) tend to be at low velocity and primarily inward. In the clumpy accretion model, however, we have a clear angular dependence on the outflowing velocity, with the strongest outflow rates being at the highest velocities, and strongly out of the plane. Furthermore, this clear correlation between outflow velocity and polar angle grows with shell radius, confirming that the more out-of-plane the gas flows, the less it gets impeded as it travels outward. In contrast to the out-of-plane flows which are relatively unimpeded, the gas moving into the galactic plane is rapidly slowed, with rapid inflow spread over a larger range of $\theta$ at large radii (8 and 6 kpc) than small radii (4 and 2 kpc). We also note that in the clumpy-accretion case, at 2 kpc there is outflowing gas directed into the plane (though not as strong as the out-of-plane flow), but this outflowing in-plane gas does not survive to 4 kpc. This is due to the void around the black hole (see Figure \[fig:projection\_67\]) which extends to $\sim 1-2$ kpc. Within the void, in-plane gas flows freely, but is rapidly stopped upon reaching the high-density region. 
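The angular decomposition described above can be sketched as follows. This is a minimal NumPy version; the array shapes and function name are our assumptions, with `pos` and `vel` taken relative to the galaxy center:

```python
import numpy as np

def flow_angles(pos, vel, mass):
    """cos(theta) and radial velocity of each cell relative to the
    mass-weighted angular-momentum axis of the gas.

    pos, vel: (N, 3) arrays relative to the galaxy center; mass: (N,).
    """
    # Polar axis: normalized mass-weighted angular momentum vector
    L = (mass[:, None] * np.cross(pos, vel)).sum(axis=0)
    axis = L / np.linalg.norm(L)
    r = np.linalg.norm(pos, axis=1)
    cos_theta = (pos @ axis) / r          # angle relative to the polar axis
    v_radial = (pos * vel).sum(axis=1) / r  # signed radial velocity
    return cos_theta, v_radial
```

Binning mass flux in the resulting ($V_R$, cos$(\theta)$) plane, restricted to a thin shell in $r$, reproduces the kind of distribution shown in Figure \[fig:velocity\_angle\_67\].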
Beyond the void, the only rapidly outflowing gas is that which was directed out of the galactic plane.

Inflow suppression
------------------

![image](plots/projection_large_67_z.png){width="16cm"}

In Figure \[fig:cumulative\_gas\_flow\] we showed that the inflowing gas is suppressed in the clumpy accretion model, showing that the AGN is able to not only drive out hot galactic gas, but affect the inflowing gas streams. In Figure \[fig:largescale\_projection\] we show larger-scale projections of the gas density (top), temperature (middle), and radial velocity (bottom) to show the means by which the inflow is affected. In the density projections, the smooth accretion model shows more well-defined streams which survive to small scales. In contrast, the inflowing gas streams in the clumpy accretion model are disrupted by collisions with outflowing gas, most clearly seen by the shock front to the upper-left of the black hole. In addition to the shocks from collisions between the inflowing and outflowing gas, the outer regions of the inflowing gas streams are stripped and blown away, and only the high-density rapidly infalling gas survives. This is seen in the velocity map in Figure \[fig:largescale\_projection\] (bottom panels). The colorscale only shows gas with speed below 700 km/s to more clearly show the variations among the inflowing gas. Here we see that in the smooth accretion model, the majority of gas is flowing in toward the galaxy (blue), with a gradual transition from inflowing to outflowing velocities. In contrast, the clumpy accretion (left) shows relatively small regions where inflowing streams survive. Furthermore, the inflowing streams completely lack the envelope of more slowly infalling gas seen in the smooth accretion model. Instead this envelope has been stripped away, leaving a sharp transition between the dense, rapidly infalling gas and the rapidly outflowing gas it penetrates.
This stripping effect can also be seen in Figure \[fig:skycoverage\], which shows the fraction of the sky needed to include a given fraction of the inflowing (blue) and outflowing (red) gas. At $\sim 10$ kpc, the fiducial result from the smooth accretion case shows that inflowing and outflowing gas take up comparable fractions of the sky. In the clumpy accretion model, the outflowing gas is much more widely distributed, with a corresponding compression of the inflowing gas due to the stripping effect described above. At $\sim 30$ kpc, outflow in the fiducial run is compressed to a much smaller fraction of the sky, though note the weaker outflow here means there is very little outflowing gas. Similarly, the clumpy accretion model again shows substantially expanded outflow comparable to the sky coverage at smaller radii, and compressed inflow. ![Fraction of sky needed to include a given fraction of the total inflow (blue) and outflow (red) of the gas through a shell at 10 kpc (top) and 30 kpc (bottom). Compared to the smooth run, outflows from the clumpy accretion run are more widely distributed on the sky, while the inflows are restricted to a smaller covering fraction. []{data-label="fig:skycoverage"}](plots/skycoverage_compare.pdf){width="8cm"} Early-time effects {#sec:earlytime} ================== Although @GaborBournaud2014 found similar outflows (see Section \[sec:inflow\_outflow\]), neither star formation nor host morphology were significantly affected, seemingly in conflict with the results presented here despite our model being calibrated using that simulation. However, we note that those findings were based upon a short-timescale ($\sim 100$ Myr) run in which the black hole only grew $\sim 15\%$ (as shown in Figure \[fig:bh\_growth\]), and without ever having undergone an extended period of Eddington growth (the only Eddington accretion is found during the 5-10 Myr accretion events). 
In contrast to this, our simulation predicts that the black hole can impact the host galaxy morphology and star formation rate after having undergone an extended Eddington phase, increasing the mass by more than an order of magnitude. To provide a more comparable case between the isolated galaxy run and our cosmological runs, we look at the host properties at an earlier time, when the black hole is smaller and has not yet approached the self-regulated regime. Self-regulation occurs at the end of the Eddington regime, where the feedback from the black hole is strong enough to suppress its own accretion. The onset of regulation is where we expect to find the strongest effects, which we showed in earlier sections. To compare with the isolated galaxy, we consider the black hole and its host at $z \sim 10$, when the black hole has reached $10^6 M_\odot$ but is not yet at the self-regulated regime. In the top panels of Figure \[fig:directcompare\] we show the density maps of the host galaxy, finding no significant morphological effects, contrary to Figure \[fig:projection\_67\] where significant morphological differences were found for the self-regulated regime. In the bottom panel of Figure \[fig:directcompare\] we show the distribution of gas velocity as a function of radius, finding that the clumpy-accretion model (left) does drive significantly more gas at much higher velocities than the smooth-accretion model (right). Thus we find that, consistent with @GaborBournaud2014, if the black hole has not yet undergone significant Eddington growth it is capable of driving strong outflows of hot, diffuse gas without having a significant effect on the rest of the host galaxy. This is further confirmed in Figure \[fig:cumulative\_gas\_flow\] which shows minimal difference in high-z gas inflow or SFR between the clumpy- and smooth-accretion runs. 
To quantitatively compare the morphologies, Figure \[fig:directcompare\_profile\] shows the density profile for both the clumpy- and smooth- accretion models at this early time. The density profiles are in complete agreement, lacking the clear central void seen in Figure \[fig:density\_profile\_comparison\] at the later, Eddington phase. The lack of any such void shows that at early times, comparable to the conditions of @GaborBournaud2014, the black hole has not evacuated the central region, which only occurs after longer-term growth and feedback have occurred. Thus we find that including periodic accretion of high-density gas clouds can have a strong effect on the host galaxy, but only after the black hole has grown significantly, more than an order of magnitude at $\sim$Eddington rates. Prior to this growth, the AGN can drive rapid outflows of hot, diffuse gas without suppressing star formation or impacting the overall gas distribution of the host. A further investigation into the impact of periodic accretion bursts should also be performed using a high-resolution isolated galaxy, but one in which a black hole has already undergone extended Eddington growth and is approaching the self-regulated regime. Since isolated galaxy simulations cannot be run for such extended times without running into physical limitations (e.g. exhaustion of gas supply in the absence of cosmological inflows), an alternative is to set up initial conditions in which the black hole starts in a very massive state compared to the host, but still in equilibrium. Such simulations are beyond the scope of this paper, so we leave this investigation for a future project. ![Gas density profile for clumpy accretion model (blue) and smooth accretion model (green), prior to reaching the self-regulated regime.
Clumpy accretion at early time does not affect the gas density of the galaxy.[]{data-label="fig:directcompare_profile"}](plots/compare_densityprofile_alt.png){width="9cm"}

Conclusions {#sec:Conclusions}
===========

We find that the increased periods of accretion caused by high-density, small-scale gas clumps are an important factor in the cosmological growth of black holes, affecting both the black hole growth and the impact upon the host evolution.

- Inclusion of clumpy-accretion allows for a significant boost to black hole growth starting at early times. Prior to the onset of Eddington-limited growth, although the total mass accreted during these clump phases is comparable to the total mass accreted during smooth phases, the *net effect is much larger*. Because sub-Eddington growth depends on $M_{\rm{BH}}^2$ (see Equation \[eqn:bondi\]), the increased mass due to growth from the clump accretion also serves to increase the accretion rate during the smooth periods, reaching high masses at much earlier times than in the absence of clumpy accretion.

- The increased feedback in the clumpy-accretion model has a significant impact on the host morphology: The central $\sim$1 kpc region about the black hole is mostly evacuated of gas, while at larger radii ($\sim 7-8$ kpc) the gas density is higher due to the increased feedback-driven outflows.

- In the absence of clumpy-accretion, the inflow is generally an order of magnitude stronger than the outflow beyond the innermost few kpc. In contrast, the clumpy-accretion model has outflows $\sim 10$x stronger, comparable to the inflow rates (excluding incoming galaxy mergers).

- The bulk of the feedback-driven outflows are out of the plane of the galaxy. The feedback energy is deposited isotropically, so the polar outflows are a purely environmental effect, caused by the high-density in-plane gas obstructing in-plane outflows.
This effect holds out to large radii, with a tendency for the larger-radius outflows to be even more highly collimated.

- In the clumpy accretion model, AGN feedback nearly entirely halts inflow of gas on the $\sim$kpc scale, and at larger scales can suppress gas inflow by nearly a factor of two. This suppression of inflow has two main causes: the outflows from the galaxy center directly interact with the inflowing streams and can even stop them; and more generally, the outflows strip the lower-density, lower-velocity envelope of gas around the high-density streams.

- As a result of the stronger outflows and suppressed inflows, the SFR in the clumpy accretion case can be suppressed by as much as a factor of $\sim$2. However, this difference only occurs after the black hole has undergone an extended period of Eddington growth, growing by at least an order of magnitude. Prior to this extended growth, the SFR remains unaffected.

- Most of the outflow driven by the strong AGN feedback is strong enough to exit the galaxy, without undergoing significant recycling.

Thus we have demonstrated the importance of incorporating the effects of high-density gas clouds in cosmological simulations, and that applying a stochastic subgrid model to include them can lead to significant changes in host evolution. Having shown the strength this periodicity can have, a more in-depth investigation is necessary to constrain the exact parameterization of the subgrid model. The parameters used here are based upon a single isolated galaxy simulation, and treated as if they hold universally. This is obviously an oversimplification, and further high-resolution simulations will be needed to explore the parameter space of potential hosts to determine how the frequency and strength of incoming gas clouds depends upon various properties, including, but not limited to, host mass, gas fraction, stellar mass, disc height, merger history, etc.
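The first conclusion above, that clump-driven growth compounds with the $M_{\rm{BH}}^2$ scaling of sub-Eddington Bondi accretion, can be illustrated with a toy integration; every rate, timescale, and clump schedule below is an illustrative assumption, not a calibrated value from the simulations:

```python
def grow(M0, t_end, dt, bondi_norm, clump_times=(), clump_mass=0.0, t_sal=45.0):
    """Integrate dM/dt = min(bondi_norm * M^2, M / t_sal): a Bondi-like
    M^2 rate capped by an Eddington-like rate proportional to M
    (t_sal plays the role of a Salpeter timescale).  Clump events add
    clump_mass instantaneously at the given times.  Units are illustrative."""
    M, t = float(M0), 0.0
    events = sorted(clump_times)
    i = 0
    while t < t_end:
        while i < len(events) and events[i] <= t:
            M += clump_mass        # burst: a high-density clump is swallowed
            i += 1
        M += min(bondi_norm * M * M, M / t_sal) * dt
        t += dt
    return M

# Same initial mass with and without three clump injections.
M_smooth = grow(1e4, 500.0, 0.01, bondi_norm=1e-9)
M_clumpy = grow(1e4, 500.0, 0.01, bondi_norm=1e-9,
                clump_times=(50.0, 100.0, 150.0), clump_mass=1e4)
```

The clumpy run ends heavier than "smooth growth plus the injected clump mass" alone, because each clump permanently raises the $M^2$-dependent smooth accretion rate for the rest of the integration.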
With a better-constrained set of host-dependent parameters for the bursts of accretion, a full statistical analysis must be done to determine the effect on statistical samples of black holes, including possible observable signatures in the quasar luminosity function and luminosity-dependent clustering behavior. This continuation goes beyond the scope of this paper, and will be addressed in a followup work. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by ISF grant 24/12, by GIF grant G-1052-104.7/2009, by a DIP grant, by the I-CORE Program of the PBC, by ISF grant 1829/12, by NSF grants AST-1010033 and AST-1405962, and supported from the E.C. through an ERC grant StG-257720. Some of the simulations used in this work were performed on GENCI resources at TGCC (project 04-2192).
--- abstract: 'We present a study of the Galactic Center region as a possible source of both secondary gamma-ray and neutrino fluxes from annihilating dark matter. We have studied the gamma-ray flux observed by the High Energy Stereoscopic System (HESS) from the J1745-290 Galactic Center source. The data are well fitted as annihilating dark matter in combination with an astrophysical background. The analysis was performed by means of simulated gamma spectra produced by Monte Carlo event generator packages. We analyze the differences in the spectra obtained by the various Monte Carlo codes developed so far in particle physics. We show that, within some uncertainty, the HESS data can be fitted as a signal from a heavy dark matter density distribution peaked at the Galactic Center, with a power-law for the background with a spectral index which is compatible with the Fermi-Large Area Telescope (LAT) data from the same region. If this kind of dark matter distribution generates the gamma-ray flux observed by HESS, we also expect to observe a neutrino flux. We show prospective results for the observation of secondary neutrinos with the Astronomy with a Neutrino Telescope and Abyss environmental RESearch project (ANTARES), the IceCube Neutrino Observatory (IceCube) and the Cubic Kilometer Neutrino Telescope (KM3NeT). Prospects solely depend on the device resolution angle when its effective area and the minimum energy threshold are fixed.' address: - 'Departamento de Física Teórica I, Universidad Complutense de Madrid, E-28040 Madrid, Spain' - 'Instituto de Física Corpuscular (CSIC-Universitat de València), Apdo. 22085, E-46071 Valencia, Spain' author: - 'V. Gammaldi[^1]' - 'J. A. R. Cembranos' - 'A. de la Cruz-Dombriz' - 'R. A. Lineros' - 'A. L.
Maroto' title: | Gamma-ray and neutrino fluxes from Heavy Dark Matter\ in the Galactic Center --- Dark Matter, Galactic Center, gamma rays, neutrinos, Monte Carlo phenomenology

Introduction {#1 .unnumbered}
============

Astrophysical evidence for Dark Matter (DM) exists from galactic to cosmological scales, but the interactions with ordinary matter have not been probed beyond gravitational effects. In this sense, both direct and indirect DM searches are fundamental to explore particle models of DM. If DM annihilates or decays into Standard Model (SM) particles, we may indirectly detect the secondary products of such processes in astrophysical sources where the DM density is dominant. In this context, the observation of secondary particles is highly affected by astrophysical uncertainties, such as the DM densities and distribution in the Galaxy and the astrophysical backgrounds. In particular, the Galactic Center (GC) represents an interesting source due to its closeness to the Earth, but also a complex region because of the large number of sources present. In this work, we review the analysis of the data collected by the HESS collaboration during the years 2004, 2005, and 2006 associated with the HESS J1745-290 GC gamma-ray source as a combination of a DM signal with an astrophysical power-law background. The best fits are obtained for the $u\bar u$ and $d\bar d$ quark channels and for the $W^+W^-$ and $ZZ$ gauge bosons with large astrophysical factors $\approx 10^3$ [@Cembranos:2012nj; @HESS]. Such a parameter is affected not only by the astrophysical uncertainty, but also by the error introduced by the use of differential fluxes simulated by means of Monte Carlo event generator software. The exact size of the latter effect depends on several factors, such as the annihilation channel, the energy of the process and the energy range of interest [@MC]. In this contribution we focus on the $W^+W^-$ annihilation channel.
In addition to the gamma-ray study, we present some predictions for the prospective neutrino flux that may originate from the same source.\ This work is organized as follows. In the first section we revisit the equations describing both the gamma-ray and neutrino fluxes from Galactic sources. The second section focuses on gamma-ray phenomenology. There we show the fit of the HESS data for the $W^+W^-$ annihilation channel. Although the analysis is model independent, such an annihilation channel is of some interest for heavy dark matter models [@WIMPs], such as branons among others [@branons]. In order to give an estimation of the error introduced by the Monte Carlo simulations, we analyze the case of photon spectra generated by both [[PYTHIA]{}]{}and [[HERWIG]{}]{}packages, both in Fortran and C++. In particular we show results for $2$ TeV center-of-mass events in the $W^+W^-$ channel (see [@MC] for more cases). In section 3, we consider the expected neutrino signal from the annihilation of the heavy DM required to produce the HESS gamma-ray signal.\

Astrophysical flux {#2}
==================

In general, both the gamma-ray and the neutrino flux for one particular annihilation channel can be described by the equation for uncharged particles that travel without deviation due to galactic magnetic fields: $$\left(\frac{{\rm d}\Phi}{{\rm d}E}\right)_j^i\,=\,\frac{\langle\sigma_i v \rangle}{8\pi M_{{\rm DM}_i}^2}\left( \frac{{\rm d}N}{{\rm d}E}\right)_j^i\times \langle J\rangle^i_{\Delta\Omega_j}\, \, {\rm GeV}^{-1}{\rm cm}^{-2}{\rm s}^{-1}{\rm sr}^{-1}\,, \label{nuflux}$$ where $j=\gamma,\nu_k$ is the secondary uncharged particle. When $j=\nu_k$, $k=\mu,\tau,e$ is the neutrino flavor. The DM annihilation channel is fixed by the $i$-th SM particle. Because we performed single-channel, model-independent fits, the astrophysical factor depends on the annihilation channel. Here, we present the results for the $i=W^{\pm}$ boson channel.
The differential number of particles ${\rm d}N/{\rm d}E$ is simulated by means of the Monte Carlo event generator software, as discussed in section $2.1$. Unlike gamma rays, the composition of the neutrino flux produced at the source can differ from that detected on the Earth because of the combination of different flavors produced by oscillations [@Neutrinos].\

Gamma-ray flux {#2}
==============

As introduced before, the gamma-ray signal observed by HESS between $200$ GeV and $10$ TeV from the GC direction may be a combination of a DM signal with a simple power-law background. The total fitting function for the observed differential gamma-ray flux is: $$\frac{{\rm d}\Phi_{\gamma-Tot}}{{\rm d}E}=\frac{{\rm d}\Phi_{\gamma-Bg}}{{\rm d}E}+\frac{{\rm d}\Phi_{\gamma-DM}}{{\rm d}E}=B^2\cdot \left(\frac{E}{\text{GeV}}\right)^{-\Gamma}+ A_i^2 \cdot \frac{{\rm d}N^i_{\gamma}}{{\rm d}E}\,, \label{gen}$$ where $$\label{A} A_i^2=\frac{\langle \sigma_i v \rangle\, \Delta\Omega_\gamma^{{\rm HESS}}\, \langle J \rangle_{\Delta\Omega_\gamma^{{\rm HESS}}}}{8\pi M_{\rm DM}^2}$$ needs to be fitted together with the DM particle mass $M_{\rm DM}$, the background amplitude $B$ and spectral index $\Gamma$. By means of the fit of the parameters $A_i$, the astrophysical factor $$\begin{aligned} {\langle J \rangle}^i_{\Delta\Omega}\,=\, \frac{1}{\Delta\Omega}\int_{\Delta\Omega}\text{d}\Omega\int_0^{l_{max}(\Psi)} \rho^2 [r(l)] \,{\rm d}l(\Psi)\,, \label{J}\end{aligned}$$ is also indirectly fitted. In the previous expression, $l$ stands for the distance from the Sun to any point in the halo. It is related to the radial distance $r$ from the GC as $r^2 = l^2 + D_\odot^2 -2D_\odot l \cos \Psi$, where $D_\odot \simeq 8.5$ kpc is the distance from the Sun to the center of the Galaxy. The maximum distance from the Sun to the edge of the halo in the direction $\Psi$ is $l_{max} = D_\odot \cos\Psi+ \sqrt{r^2-D_\odot^2 \sin^2 \Psi}$.
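The line-of-sight integral defining $\langle J \rangle$ can be evaluated numerically. The sketch below assumes the standard NFW form $\rho(r) \propto 1/[r(r+r_s)^2]$ with illustrative values for the normalization, scale radius, and halo edge:

```python
import numpy as np

D_SUN = 8.5           # kpc, Sun-GC distance, as in the text
KPC_TO_CM = 3.086e21  # cm per kpc

def rho_nfw(r, N=1.0, r_s=20.0):
    """NFW profile rho(r) = N / (r (r + r_s)^2); N and r_s are illustrative."""
    return N / (r * (r + r_s) ** 2)

def j_factor(psi, r_halo=100.0, n=100000):
    """J(Psi) = int_0^{l_max} rho^2(r(l)) dl, with
    r^2 = l^2 + D^2 - 2 D l cos(Psi) and
    l_max = D cos(Psi) + sqrt(r_halo^2 - D^2 sin^2(Psi)).
    Returned in (units of rho)^2 * cm."""
    l_max = D_SUN * np.cos(psi) + np.sqrt(r_halo**2 - D_SUN**2 * np.sin(psi)**2)
    l = np.linspace(1e-4, l_max, n)
    r = np.sqrt(l**2 + D_SUN**2 - 2.0 * D_SUN * l * np.cos(psi))
    f = rho_nfw(r) ** 2
    # trapezoidal rule, written out explicitly
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(l)) * KPC_TO_CM
```

Averaging $J(\Psi)$ over the detector solid angle then gives $\langle J\rangle_{\Delta\Omega}$ as in the text; note that for an NFW cusp the integral along the exact central line of sight diverges, which is why the solid-angle average matters.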
Moreover, the photon flux must be averaged over the solid angle of the detector. For the HESS telescope observing gamma rays in the TeV energy scale, it is of order $\Delta \Omega_\gamma^{\rm HESS} = 2 \pi ( 1 - \cos\theta) \simeq 10^{-5}$. The DM density distribution in the Galaxy halo is usually modeled by the NFW profile [@Navarro:1996gj]: $$\rho(r)\equiv\frac{N}{r(r+r_s)^2}\;, \label{NFW}$$ where $N$ is the overall normalization and $r_s$ the scale radius. This profile is in good agreement with N-body non-baryonic cold DM simulations of the GC. In this case and accounting for just annihilating DM, the astrophysical factor is: $\langle J^{\text{NFW}} \rangle\simeq 2.8 \cdot 10^{25}\; \text{GeV}^2 \text{cm}^{-5}$. We will use this value as a standard reference in order to define the boost factor as $b^i\equiv \langle J\rangle^i/\langle J^{\text{NFW}}\rangle$. The differential number of gamma photons ${\rm d}N^i_\gamma/{\rm d}E$ is simulated by [[PYTHIA]{}]{}6.4 and the analytical fitting functions of such simulations are used to perform the fit [@pythia6; @Ce10]. In Fig. \[fig:fig1\] and Table \[tab:tab1\] we report the results of the gamma-ray fit for the $W^+W^-$ channel. The fit reproduces well the spectral form and the energy cut-off for a DM mass of $\sim 50$ TeV and a boost factor of $\sim 10^3$. The background is also fitted and its amplitude and spectral index are in agreement with the FERMI-LAT data [@Fermi]. The uncertainty introduced by the choice of the simulator is discussed in the next section.

[![[]{data-label="fig:fig1"}](FigWRIS.pdf "fig:")]{}

  Channel             $W^+W^-$
  ------------------- ----------------
  $M$                 $48.8\pm 4.3$
  $A$                 $4.98\pm0.40$
  $B$                 $5.18\pm2.23$
  $\Gamma$            $2.80\pm 0.15$
  $\chi^2/\,$d.o.f.   $0.84$
  $\Delta\chi^2$      $2.6$
  $b$                 $1767 \pm 419$

  : []{data-label="tab:tab1"}

Monte Carlo uncertainty on photon flux
---------------------------------------

The differential number of photons simulated by Monte Carlo event generator software is the result of a particle shower schematization in three parts: the QCD Final-State Radiation, the hadronization model and the QED Final-State Radiation. The four codes differ in each of the aforementioned fundamental parts, and the way these intrinsic differences combine in the final spectra is complicated (see  [@MC; @Seymour:2013ega] for details). In any case, it seems clear that the parton shower evolution variable affects the QCD Final-State Radiation  [@Beringer:1900zz; @Altarelli:1977zs; @81; @83; @CMW; @Py6], while the hadronization model (String model in $\text{{{\ttfamily PYTHIA}\xspace}}\text s$  [@Py6; @Py8] and Cluster model in $\text{{{\ttfamily HERWIG}\xspace}}\text s$  [@Her; @Her++]) produces unstable hadrons which eventually decay. The resultant final states of such processes are mainly leptons, which lead the photon production that involves QED processes. Finally, the Bremsstrahlung component of the Final-State Radiation (FSR) represents the main difference between the codes, for photon spectra at least. In fact, the Bremsstrahlung of very high energy leptons is not implemented in [[HERWIG]{}]{}C++, it is just partially implemented in [[HERWIG]{}]{}Fortran, while it is implemented in both [[PYTHIA]{}]{}++ and [[PYTHIA]{}]{}Fortran. The electroweak (EW) $2\rightarrow2$ processes of the FSR, despite their different implementations for different codes, do not significantly affect the gamma-ray spectra.\ For annihilating DM, the photon spectra are better described in terms of the dimensionless variable: $x \equiv E_{\gamma}/E_{\rm CM} $ where $E_{\gamma}$ and $E_{\rm CM}$ correspond to the energy of the photon and center of mass (CM), respectively. This variable lies in the range between 0 and 1.
Because the standard Monte Carlo tuning uses data at a center-of-mass energy of $E_{\rm CM}=100$ GeV from colliders such as LEP and LHC, large differences in the spectra are usually present at very low or very high values of $x$. For this reason, we present the spectra in both linear and logarithmic scales for $x$, in order to show more clearly the behavior in the first and second case, respectively. Here, we focus on a DM particle mass of $1$ TeV (see [@MC] for the same analysis with $100\,\,\text{GeV}$ DM annihilating into $W^+W^-$, $b\bar{b}$, $\tau^{+}\tau^{-}$ or a $500$ GeV DM mass in the case of $t\bar t$, and further details on the $1$ TeV DM mass for more channels). The photon spectra are independent of the initial beams (details on the generation of the spectra can be found in [@MC]) and solely depend on the energy of the event, i.e. $E_{\rm CM} = 2 M_{\rm DM}$ for annihilating DM. In Fig. \[fig:w1000g\] we show that the simulated gamma-ray spectra for DM particles annihilating into $W^+W^-$ are very similar for $ x \gtrsim10^{-5}$ for a DM mass of $1$ TeV. The lower fluxes generated by [[HERWIG++]{}]{}at high energies (linear scale) are probably due to the missing implementation of Bremsstrahlung radiation from high-energy leptons.\ Although the low energy spectra are less important in the context of indirect searches due to the dominance of the astrophysical background, let us underline that the low energy cut-off strongly depends on the parameter settings of each code. In [[PYTHIA]{}]{}8 the cut-off at low energy exactly corresponds to the minimum value allowed for photons, set by the `pTminChgL` parameter, which is 0.005 by default. In [[HERWIG++]{}]{}, `QEDRadiationHandler` is set off by default, so that the cut-off appears at higher energy with respect to the other Monte Carlo generators. In the opposite case, when `QEDRadiationHandler` is enabled, the spectrum at low energy changes drastically.
In this case, the relevant parameter `IFDipole:MinimumEnergyRest` can be varied: small values of this parameter enhance the production of photons at low energies. Three different low energy cut-offs for the $W^+W^-$ channel are shown in Fig. \[fig:w1tevqed\].\ ![ The difference at low energy in C++ codes can be explained by the parameters that cut off the lower energy photons. High energies can be shown to be unaffected by this. $W^+W^-$ channel with [[HERWIG]{}]{}++ at $M_{\rm DM}=1$ TeV in logarithmic scale. The three cut-offs correspond to `QEDRadiationHandler` values of $k_T=10^{-8}\,, 10^{-4}\,, 1$.[]{data-label="fig:w1tevqed"}](w_1tev_qed_par_log.pdf) More interesting are the spectra at high energies. We present the Monte Carlo relative deviation ($\Delta {\rm MC}_s$) with respect to [[PYTHIA]{}]{}8 in Fig. \[ErrRel\], defined as $$\Delta {\rm MC}_s = \;\frac{{\rm MC}_s\, - {\text{ {{\ttfamily PYTHIA}\xspace}}} \, 8} {{\text{ {{\ttfamily PYTHIA}\xspace}}} \, 8}, \label{RelErr}$$ where ${\rm MC}_s$ stands for [[PYTHIA]{}]{}6.4, [[HERWIG]{}]{}and [[HERWIG++]{}]{}. For a DM mass of $1$ TeV and the $W$ boson annihilation channel, the relative deviations for $ x\gtrsim0.01$ are always less than $20\%$ up to $x=0.2$. At $x\gtrsim 0.2 $ the absence of Bremsstrahlung radiation generated by high energy leptons in [[HERWIG]{}]{}++ leads to a smaller number of high-energy photons when compared to the other codes. \[ER\_w\] The multiplicity, that is, the total number of photons produced by each event, also affects the constraints. Apart from the specific characteristics of the detector, the flux of photons depends upon the DM density distribution and the distance of the sources. Thus, two simulations should give different numbers of photons for the same number of events and this will affect the parameters $\langle J\rangle$ and $b$. In general, the multiplicity depends on the Monte Carlo event generator, the energy of the event and the annihilation channel.
For the $W$ boson channel, it does not depend on the mass above $300$ GeV, as we can see in Fig. \[ErrRel\]. In this study, the energy cut-off increases with the DM mass, because we set a lower photon energy cut-off around $x_C=10^{-5}$. This kind of mass-dependent cut-off allows us to reject photons of lower energies, where the simulations present important differences and the contribution to gamma rays is less important. This cut-off is also compatible with typical gamma-ray detector energy thresholds, which are around $1-10$ GeV depending on the particular experimental device [@branonsgamma]. In any case, our results do not seem to depend on the particular choice of this cut-off. The multiplicity behavior is well approximated by the following power-law relation with the DM mass: $$\frac{N_{\gamma}}{N_{\chi\chi \rightarrow SM}}\simeq a\cdot \left(\frac{M}{1\,\text{GeV}}\right)^b\;. \label{pl}$$ The $a$ and $b$ coefficients are given in [@MC]. They depend on both the Monte Carlo simulator and the annihilation channel. When the SM particle is fixed, cosmological constraints obtained by means of the total number of generated gamma photons might depend on the Monte Carlo simulation.\ In the case of the $W^+W^-$ channel and whenever the DM annihilation cross section is fixed, [[PYTHIA]{}]{}6.4 provides lower limit values for the astrophysical factor/boost factor in gamma rays, depending on the kind of fit. On the other hand, [[HERWIG]{}]{}Fortran gives the upper limit for similar analyses (see Fig. \[ErrRel\] and [@MC] for details). In any case, the difference between [[PYTHIA]{}]{}6.4 and [[PYTHIA]{}]{}8, the latter being the most complete Monte Carlo software for gamma rays, is less than $4\%$.
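The power-law coefficients $a$ and $b$ in the multiplicity relation can be extracted by a straight-line fit in log-log space; the sample points below are invented purely for illustration (the actual per-code, per-channel coefficients are tabulated in [@MC]):

```python
import numpy as np

def fit_multiplicity_powerlaw(mass_gev, multiplicity):
    """Fit N_gamma / N_events = a * (M / 1 GeV)^b by least squares
    in log-log space; returns (a, b)."""
    b, log_a = np.polyfit(np.log(mass_gev), np.log(multiplicity), 1)
    return np.exp(log_a), b

# Invented sample following an exact power law with a = 5, b = 0.3.
M = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0])
mult = 5.0 * M ** 0.3
a, b = fit_multiplicity_powerlaw(M, mult)
```

Fitting in log-log space turns the power law into a linear model, so ordinary least squares recovers the exponent directly as the slope.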
[![[]{data-label="Wnu"}](WFigRIS.pdf "fig:")]{}

Neutrino flux {#3}
=============

[![[]{data-label="Aeff"}](Aeff50t2linvsEmin.pdf "fig:")]{} As for gamma rays, the differential neutrino flux from annihilating DM in the Galaxy is described by equation (\[nuflux\]). In contrast to the gamma-ray case, the neutrino flux on the Earth is not the same as at the source, due to neutrino oscillations and observational limits. In fact, neutrino telescopes are able to discriminate between $\nu_\mu$ and $\nu_e$ or $\nu_\tau$, but they cannot distinguish either $\nu_e$ from $\nu_\tau$ or neutrinos from anti-neutrinos. So, the interpolation functions of the differential neutrino spectra simulated by [[PYTHIA]{}]{}8 [@Py8; @Cirelli] need to be slightly modified. In Fig. \[Wnu\] we show the expected neutrino flux on Earth when neutrino oscillations and observational limits are taken into account. Because all the parameters of the model are fitted by the observation in gamma rays, the observability of the neutrino flux depends only on a combination of the resolution angle and effective area of the telescope with the minimum energy threshold and the exposure time. Both the angle and the area depend on the neutrino flavor and the position of the source in the sky with respect to the detector [@ANTARES; @IC; @km3net]. In the case of the GC, it should be possible in principle to get a better resolution angle with ANTARES [@ANTARES] than with IceCube [@IC]. In the first case the Earth is used as a veto and the background is given by atmospheric neutrinos. On the other hand, when the background is dominated by atmospheric muons, as in the case of IceCube, the effective area of the detector is affected. The IceCube collaboration reports the $\nu_\mu$ and $\nu_e$ atmospheric neutrino fluxes [@nue; @numu]. As is shown in Fig. \[Wnu\], no flux is expected with a resolution lower than $\theta\approx1^\circ$ from high-density $48.8$ TeV DM annihilating into the $W^+W^-$ channel in the GC.
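The need for a sub-degree resolution angle follows from a simple signal-to-background estimate of the same form as the significance $\chi_{\nu_k}$ used in this section; the sketch below treats the source as point-like and uses purely illustrative fluxes and detector numbers:

```python
import math

def significance(F_point, phi_atm, A_eff, t_exp, theta_deg):
    """chi = Phi_nu * sqrt(A_eff * t_exp * dOmega) / sqrt(Phi_nu + Phi_atm)
    for a search cone of half-angle theta.  F_point is the integrated
    source flux (cm^-2 s^-1), phi_atm the atmospheric flux per steradian,
    A_eff in cm^2 and t_exp in s.  Treating the source as point-like
    (an assumption), the cone-averaged signal flux is Phi_nu = F / dOmega."""
    d_omega = 2.0 * math.pi * (1.0 - math.cos(math.radians(theta_deg)))
    phi_nu = F_point / d_omega
    return phi_nu * math.sqrt(A_eff * t_exp * d_omega) / math.sqrt(phi_nu + phi_atm)

# Illustrative inputs only: shrinking the cone from 1 degree to 0.4 degrees
# raises the significance of a point-like GC signal over a diffuse background.
chi_04 = significance(1e-11, 1e-7, 1e6, 3.15e8, 0.4)
chi_10 = significance(1e-11, 1e-7, 1e6, 3.15e8, 1.0)
```

In the background-dominated limit $\chi \propto 1/\sqrt{\Delta\Omega} \propto 1/\theta$, so halving the cone angle roughly doubles the significance, which is the sense in which a better resolution angle is needed.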
Thus we need better resolution angles in order to be able to get some signature above the background. In fact, the statistical significance of the signal above the background is given by $$\chi_{\nu_k}=\frac{\Phi_{\nu_k}\sqrt{A_{\text{eff}}\,t_{\text{exp}}\,\Delta\Omega}}{\sqrt{\Phi_{\nu_k}+\Phi^{\text{Atm}}_{\nu_k}}} = 5\, (3,\, 2)\,. \label{Chi1}$$ $\chi_{\nu_k}$ also depends on the minimum energy threshold, exposure time and effective area. In Fig. \[Aeff\] we show the statistical analysis for $\nu_\mu$ track events for a generic neutrino telescope with $A_{\rm eff}\times t_{\rm exp}=100\,\text{m}^2\,yr$. So, with $A_{\rm eff}=20\,\text{m}^2$, $t_{\rm exp}=5\, {\rm yrs.}$ and $E_{min}=1$ TeV, an angle $\theta\leq0.4^\circ$ is required to get a signal measurement with a confidence level better than $2\sigma$. A similar analysis was performed by fixing the resolution angle and solving for the $A_{\rm eff}\times t_{\rm exp}$ parameter with respect to the minimum energy threshold [@Neutrinos].

Conclusions and Prospects
=========================

We have analyzed the gamma-ray and neutrino fluxes that should be generated by a very peaked DM distribution in the GC and presented partial results for the $W^+W^-$ annihilation channel. The study is based on the fit of the HESS data in gamma rays, allowing us to constrain the DM mass and the astrophysical factor. Among other channels, the collection of data of the Cherenkov detector for the J1745-290 source is well fitted as $48.8$ TeV DM annihilating into $W^+W^-$ boson particles. The signal is superimposed on a gamma-ray background compatible with the Fermi-LAT observation. We have also analyzed the uncertainty that may be introduced by the simulation of the gamma-ray flux with different Monte Carlo particle physics codes.
The relative deviation between different codes turns out to be less than $20\%$ for the boson channel in the energy range of interest, whereas the number of photons produced per event introduces an error of less than $4\%$. These uncertainties may affect the $10^3$ enhancement of the astrophysical factor necessary to fit the data. The astrophysical factor may also be affected by the astrophysical uncertainty due to the choice of the DM density profile. In any case, its value is compatible with the baryonic enhancement found in Monte Carlo cosmological N-body simulations [@Blumenthal; @Prada:2004pi], although opposing opinions about this scenario remain [@Romano]. For the DM particle able to fit the gamma-ray data, we have also presented the prospects for the detection of the neutrino flux generated by that particle. It depends both on the resolution angle and effective area of the neutrino telescope, in addition to the minimum energy threshold and the observation time. We sketched a partial study of the combined resolution angle and energy threshold needed to detect a neutrino signal at a given confidence level, when the effective area and the exposure time are fixed. A resolution angle of $0.4^\circ$ is required to obtain a $2\sigma$ signal above the background for a neutrino telescope with an effective area comparable to ANTARES or IceCube, but more exposure time than is available with the current data sets [@nue; @numu] is required. Therefore, at the present stage we are able neither to accept nor to reject the DM origin of the gamma-ray data with the present generation of neutrino detectors. An improvement in the angular resolution of ANTARES or IceCube when looking at the GC may be fundamental in order to clarify the DM hypothesis. Moreover, the observation of this region with the forthcoming KM3NeT neutrino detector [@km3net], with an effective area of $1\,\text{km}^2$ and an improved resolution angle, will also be of great interest.
Finally, the observation of the antimatter flux and the matter-antimatter ratio, such as proton, antiproton and positron signals, may be useful to set additional constraints on the DM origin. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by UCM FPI grants G/640/400/8000 (2011 Program), the Spanish MINECO project numbers FIS2011-23000, FPA2011-27853-C02-01 and MULTIDARK CSD2009-00064 (Consolider-Ingenio 2010 Programme). A.d.l.C.D. also acknowledges the hospitality of IFAE-UAB in the final stages of preparation of this manuscript. [00]{} J. A. R. Cembranos, V. Gammaldi and A. L. Maroto, Phys. Rev. D [**86**]{}, 103506 (2012), arXiv:1204.0655v2 \[hep-ph\]; JCAP04(2013)051, arXiv:1302.6871v2 \[astro-ph.CO\]; F. Aharonian, A. G. Akhperjanian, K.M. Aye et al. A&A, 503, 817 (2009). J. A. R. Cembranos, A. de la Cruz-Dombriz, V. Gammaldi, R. A. Lineros, A. L. Maroto, JHEP 1309 (2013) 077, arXiv:1305.2124v3 \[hep-ph\]. H. Goldberg, Phys. Rev. Lett.  [**50**]{}, 1419 (1983); J. R. Ellis [*et al.*]{}, Nucl. Phys. B [**238**]{}, 453 (1984); K. Griest and M. Kamionkowski, Phys. Rep. **333**, 167 (2000); J. A. R. Cembranos, A. Dobado and A. L. Maroto, Phys. Rev. Lett.  [**90**]{}, 241301 (2003); Phys. Rev. D [**68**]{}, 103505 (2003); Phys. Rev. D [**73**]{}, 035008 (2006); Phys. Rev. D [**73**]{}, 057303 (2006); A. L. Maroto, Phys. Rev. D [**69**]{}, 043509 (2004); Phys. Rev. D [**69**]{}, 101304 (2004); A. Dobado and A. L. Maroto, Nucl. Phys. B **592**, 203 (2001); Int. J. Mod. Phys. [**D13**]{}, 2275 (2004) \[hep-ph/0405165\]; J. A. R. Cembranos [*et al.*]{}, JCAP [**0810**]{}, 039 (2008). A. Dobado and A. L. Maroto, Nucl. Phys. B [**592**]{}, 203 (2001); J. A. R. Cembranos, A. Dobado and A. L. Maroto, Phys. Rev. Lett.  [**90**]{}, 241301 (2003); Phys. Rev. D [**68**]{}, 103505 (2003); A. L. Maroto, Phys. Rev. D [**69**]{}, 043509 (2004); Phys. Rev. D [**69**]{}, 101304 (2004); Int. J. Mod. Phys. [**D13**]{}, 2275 (2004). J. A. R.
Cembranos [*et al.*]{}, JCAP [**0810**]{}, 039 (2008). J. A. R. Cembranos, V. Gammaldi, A. L. Maroto, arXiv:1403.6018 \[hep-ph\]. J. F. Navarro, C. S. Frenk, and S. D. White, ApJ [**490**]{}, 493 (1997). T. Sjostrand, S. Mrenna and P. Skands, JHEP05 (2006) 026 (LU TP 06-13, FERMILAB-PUB-06-052-CD-T) \[hep-ph/0603175\]. J. A. R. Cembranos, A. de la Cruz-Dombriz, A. Dobado, R. Lineros and A. L. Maroto, Phys. Rev.  D [**83**]{}, 083507 (2011); AIP Conf. Proc.  [**1343**]{}, 595-597 (2011); J. Phys. Conf. Ser.  [**314**]{}, 012063 (2011); A. de la Cruz-Dombriz and V. Gammaldi, arXiv:1109.5027 \[hep-ph\]; http://teorica.fis.ucm.es/PaginaWeb/photon\_spectra.html A. A. Abdo [*et al.*]{}, arXiv:1001.4531v1 \[astro-ph.CO\] (2010). M. Chernyakova [*et al.*]{}, ApJ [**726**]{}, 60 (2011); T. Linden, E. Lovegrove and S. Profumo, arXiv:1203.3539 \[astro-ph.HE\]. M. H. Seymour and M. Marx, arXiv:1304.6677 \[hep-ph\]. J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**86**]{}, 010001 (2012). G. Altarelli and G. Parisi, Nucl. Phys. B [**126**]{}, 298 (1977). G. Marchesini and B.R. Webber, Nucl. Phys. B [**238**]{} 1 (1984); Nucl. Phys. B [**310**]{} 461 (1988). T. Sjöstrand and P. Skands, Eur. Phys. J. C [**39**]{} 129 (2005). S. Catani, B. R. Webber and G. Marchesini, Nucl. Phys. B [**349**]{}, 635 (1991). T. Sjöstrand, S. Mrenna, P. Skands, hep-ph/0603175. T. Sjöstrand, S. Mrenna, P. Skands, arXiv:0710.3820v1 \[hep-ph\]; http://home.thep.lu.se/~torbjorn/pythia81.html G. Corcella [*et al.*]{}, hep-ph/0011363v3. M. Bähr [*et al.*]{}, arXiv:0803.0883v3 \[hep-ph\]. S. Gieseke et al., arXiv:1102.1672v1 \[hep-ph\], K. Arnold et al., arXiv:1205.4902v1 \[hep-ph\]. J. A. R. Cembranos, A. de la Cruz-Dombriz, V. Gammaldi, A.L. Maroto, Phys. Rev. D 85, 043505 (2012), arXiv:1111.4448 \[astro-ph.CO\]. M. Cirelli et al. arXiv:1012.4515v4 \[hep-ph\]. S. Adrian-Martinez et al. ANTARES Collaboration, Astrophys. J. [**760**]{}, 53 (2012), arXiv:1207.3105 (2012); S.
Schulte for the ANTARES Collaboration, icrc2013-0425; S. Adrián-Martínez et al., ANTARES Collaboration, arXiv:1207.3105v2 \[hep-ph\]. R. Abbasi et al. IceCube Collaboration, arXiv:1210.3557v1 \[hep-ex\]. M.G. Aartsen et al., IceCube Collaboration, arXiv:1212.4760v2 (2012). R. Abbasi et al., IceCube Collaboration, arXiv:1010.3980v2 (2010); M.G. Aartsen et al. arXiv:1307.6669. G.R. Blumenthal, S.M. Faber, R. Flores, J. R. Primack, ApJ [**301**]{}, 27 (1986); O. Y. Gnedin, A. V. Kravtsov, A. A. Klypin and D. Nagai, ApJ [**616**]{}, 16 (2004). F. Prada, A. Klypin, J. Flix Molina, M. Martínez, E. Simonneau, Phys. Rev. Lett.  [**93**]{}, 241301 (2004). E. [Romano-D[í]{}az]{}, I. [Shlosman]{}, Y. [Hoffman]{}, and C. [Heller]{}, ApJ [**685**]{}, L105 (2008); ApJ [**702**]{}, 1250 (2009); A. V. Macciò [*et al.*]{}, arXiv:1111.5620 \[astro-ph.CO\]. T. Seitz, R. Shanidze, KM3NeT Consortium, Nuclear Instruments and Methods in Physics Research A 626-627 (2011) S205-S207; http://www.km3net.org/. [^1]: gammaldi@pas.ucm.es
--- abstract: 'We study the geometrical conditions for stabilizing magnetic skyrmions in cylindrical nanostrips and nanotubes of ferromagnetic materials with Dzyaloshinskii-Moriya interactions. We obtain the ground state of the system implementing a simulated annealing technique for a classical spin Hamiltonian with competing isotropic exchange and chiral interactions, radial anisotropy and an external field. We address the impact of surface curvature on the formation, the shape and the size of magnetic skyrmions. We demonstrate that the evolution of the skyrmion phase with the curvature of the nanoshell is controlled by the competition between two characteristic lengths, namely the curvature radius, $R$ (geometrical length) and the skyrmion radius, $R_{Sk}$ (physical length). In narrow nanotubes ($R<R_{Sk}$) the skyrmion phase evolves to a stripe phase, while in wide nanotubes ($R>R_{Sk}$) a mixed skyrmion-stripe phase emerges. Most interestingly, the mixed phase is characterized by skyrmions spatially separated from stripes, owing to the direction of the applied field relative to the surface normal. Above the instability region ($R \gtrsim R_{Sk}$), skyrmions remain circular and preserve their size as a consequence of their topological protection. Zero-field skyrmions are shown to be stable on curved nanoelements with free boundaries within the same stability region ($R\gtrsim R_{Sk}$). The experimental and technological perspectives from the stability of skyrmions on cylindrical surfaces are discussed.' author: - 'D. Kechrakos' - 'A. Patsopoulos' - 'L. Tzannetou' title: Magnetic skyrmions in cylindrical ferromagnetic nanostructures with chiral interactions --- Introduction ============ Magnetic skyrmions are self-localized vortex-like spin structures with axial symmetry [@bog94a].
They have been mainly studied in noncentrosymmetric bulk crystals and their thin films[@muhl09; @pap09; @yux10], as well as in ultrathin ferromagnetic (FM) films on heavy metal (HM) substrates [@hein11; @rom13], in which a sizable Dzyaloshinskii-Moriya interaction (DMI) [@dzi58; @mor60] leads to their formation. From the technological point of view, two-dimensional magnetic skyrmions formed at ferromagnet-heavy metal interfaces have potential for a variety of innovative, robust and high-density spintronics applications due to their protected topology and nanoscale size [@fer13]. In particular, they can be driven by lateral spin currents [@fer13; @samp13; @nag13], produced by electrical currents with a current density five to six orders of magnitude smaller than that needed for domain wall motion [@rom13], thus pointing to energy-efficient [@fer13] skyrmion-based racetrack-type memory devices[@par08]. However, current-driven skyrmions will drift from the racetrack direction due to the presence of the Magnus force [@iwa13; @yux12], if the velocity is high enough. This phenomenon, known as the Skyrmion Hall effect (SkHE), leads to their annihilation at the racetrack edge and the loss of stored information. An approach for limiting the SkHE is through spin-wave driven skyrmion motion [@zha15; @sch15]. Skyrmions can be displaced by magnons induced by thermal gradients in insulating chiral ferromagnets [@kon13], while the SkHE deviation vanishes for high-energy magnons [@gar15]. However, compared with the current-driven skyrmion motion, it is difficult to generate spin waves in a nanometre-size nanotrack with appropriate spectral properties for driving the motion of a skyrmion. It is also difficult to realize a skyrmion nanocircuit based on thermal gradients. Consequently, the current-driven skyrmion motion is the most promising method and as such it attracts a great deal of research effort.
To this end, various potential barriers have been proposed to confine skyrmions in the central region of the racetrack so that annihilation at the racetrack edge is avoided [@zha16; @bar16; @pur16; @lai17; @foo15]. A suggested method is to tune the perpendicular [@foo15] or the crystalline [@lai17] magnetic anisotropy. As a result, a path of lower resistance is created at the racetrack center, allowing the skyrmions to pass along the racetrack without annihilation. Another approach is to tune the height of the ferromagnetic layers, creating a rectangular groove at the center of the racetrack. As a result, a curb structure is formed, which serves to confine the skyrmion within the groove [@pur16]. Furthermore, the damping constant of the racetrack can be tuned in either the transverse or the longitudinal direction in different regions of the racetrack [@liu16], so that the deviations of the skyrmions are in opposite directions and cancel each other out. Therefore, the skyrmions can be efficiently confined in the racetrack center and the SkHE is avoided. Another aspect hampering the use of magnetic skyrmions in racetrack memory applications is their uncontrollable excitation at the edges of magnetic nanostrips and thin films [@ran17], leading to erroneous writing events. This phenomenon is known as the edge effect. In addition, skyrmion motion, even including the oscillating motion and the gyration [@gar16], is affected by the edges in confined geometries due to their potential force [@nav16; @gar16] acting on skyrmions. From the aforementioned works, it appears that the possibility of generating and manipulating magnetic skyrmions on boundary-free samples is a desirable direction of research, and curved nanostructures, such as magnetic nanotubes, constitute a promising option.
The study of magnetic structure and solitonic excitations on curved surfaces has recently attracted intensive interest, as curvature was shown to control physical properties of the system [@streu16]. The curvilinear geometry of bent and curved ferromagnetic wires and surfaces [@pyl15; @gai15; @car15] introduces effective chiral interactions and curvature-induced anisotropy.[@streu16] As a consequence, curvature-driven effects emerge, such as magnetochiral effects [@kra12; @ota12] and topologically induced magnetization patterning,[@kra12; @pyl15] resulting in high domain wall velocities [@yan12] and chirality symmetry breaking [@pyl15]. Despite the fact that recent works have focused on the impact of surface curvature on the emerging chiral properties and related magnetic order of otherwise achiral ferromagnetic materials [@streu16], to the best of our knowledge, the conditions for skyrmion formation on chiral curved surfaces have not been addressed yet. We anticipate, on physical grounds, that the skyrmion phase supported on a planar nanostructure, such as a FM/HM interface, will be driven to instability under curving. It is the main aim of the present work to investigate the ground state properties of curved ferromagnetic nanostructures with chiral interactions (DMI) and to examine the conditions under which curvature-driven skyrmion instability occurs. Our structural model accounts for the direction modulation of the DMI vector induced by the curvature of the nanostructure under consideration, thus providing a more realistic description of the interplay between isotropic exchange (Heisenberg) and chiral interactions on curved surfaces. We focus on cylindrical nanoelements and nanotubes. Our results demonstrate the feasibility of skyrmion formation on the ridge of a nanotube, where the external field remains almost normal to the surface, provided that the radius of the nanotube remains at least comparable to the skyrmion radius $(R_{tube} \ge R_{Sk} )$.
The same geometrical criterion ensures the stability of skyrmions without an external magnetic field on curved nanoelements. Micromagnetic Model and Simulation Method ========================================= We consider a thin ferromagnetic cylindrical nanostrip along the z-axis with length $L_z$, width $L_y$, inner radius $R$ and thickness $t\ll R$ (Fig.\[fig:sketch\]). ![(Color online) Cylindrical nanostrip extended along the $z$-axis with width $L_y$, thickness $t$, curvature radius $R$ and curvature angle $\phi_0$, used as our model system. []{data-label="fig:sketch"}](fig1.jpg){width="0.40\linewidth"} The central angle of the curved nanostrip is defined as $\phi_0=L_y/R$. A planar nanostrip ($R\rightarrow\infty,\phi_0=0 $) and a cylindrical nanotube ($R\ne0,\phi_0=360^0$) naturally occur as limiting cases of the curved nanostrip. The micromagnetic energy of the system as a functional of the continuous magnetization field $\textbf{m}(\textbf{r})=\textbf{M}(\textbf{r})/M_s$ reads $$\begin{aligned} E[\textbf{m}]=\int d^3\textbf{r}~ \{ A |\nabla\textbf{m}|^2 -K_u (\textbf{m}\cdot \textbf{e}_\rho)^2 \nonumber \\ -M_s\textbf{m}\cdot\textbf{B} +w_{DM} \} \label{eq:microm}\end{aligned}$$ where the integral runs over the volume of the nanostructure, $A$ is the exchange constant and $K_u$ is the radial anisotropy density, which we adopt here as a generalization of the perpendicular anisotropy observed in thin ferromagnetic films on a heavy metal substrate[@fer13; @hag15]. $w_{DM}$ is the Dzyaloshinskii-Moriya energy density and $\textbf{e}_\rho(\textbf{r})$ is the radial unit vector. The DMI energy is considered here as arising from the interface coupling between the ferromagnetic nanostrip and an adjacent heavy metal layer attached to it.
Generalizing the expression for the interface DMI energy density[@cor18], we write $$\begin{aligned} w_{DM}=D[(\textbf{m}\cdot\nabla)m_\rho-m_\rho(\nabla\cdot\textbf{m})] \label{eq:dmi}\end{aligned}$$ where $\textbf{m}(\textbf{r})=(m_\rho,m_\phi,m_z)$ are the components of the magnetization field with respect to the local cylindrical coordinate system $(\textbf{e}_\rho,\textbf{e}_\phi,\textbf{z})$ (Fig.\[fig:sketch\]). Magnetostatic terms are neglected in Eq.(\[eq:microm\]), because in the limit of a very long cylinder $(L_z\gg R)$ and within a mean field approximation, they can be approximated by a uniaxial anisotropy term along the z-axis, leading to a reduction of the radial anisotropy term as[@roh13] $K_u^{'}=K_u-\frac{1}{2}\mu_0 M_s^2$. Upon discretization of Eq.(\[eq:microm\]) on a cylindrical grid, we obtain for the total energy $$\begin{aligned} E = -\frac{1}{2} J \sum_{<ij>} \textbf{m}_i \cdot \textbf{m}_j \nonumber \\ -\frac{1}{2} d \sum_{<ij>} \textbf{D}_{ij} \cdot (\textbf{m}_i \times \textbf{m}_{j}) \nonumber \\ -k \sum_i (\textbf{m}_i \cdot \textbf{e}_{\rho,i} )^2 -h \sum_i \textbf{m}_i \cdot \textbf{h}_i \label{eq:energy}\end{aligned}$$ with bold characters indicating unit vectors. $\textbf{m}_i$ is the unit vector (spin) along the magnetic moment of the $i$-th cell. The $1/2$ prefactor of the first and second terms accounts for the double counting of the energy contribution from pairs of nearest-neighboring sites. The DMI vector takes the form $\textbf{D}_{ij}=\textbf{e}_{\rho,i} \times \textbf{r}_{ij}$, which is a generalization of the expression $\textbf{D}_{ij}=\textbf{x} \times \textbf{r}_{ij}$ that describes the DMI coupling at planar interfaces in the $yz$-plane.[@hag15; @yin16] Note that a major physical difference compared to the flat interface is that for a curved interface the vector $\textbf{D}_{ij}$ becomes site-dependent, owing to the variation of the radial direction across the surface.
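A minimal sketch of the discrete energy of Eq.(\[eq:energy\]) on a cylindrical grid, with the site-dependent DMI vectors $\textbf{D}_{ij}=\textbf{e}_{\rho,i}\times\textbf{r}_{ij}$, could look as follows. This is an illustrative implementation, not the authors' code: the default couplings are the rationalized values quoted later in the text, each bond is counted once (so the $1/2$ prefactor is absorbed), and for an open nanostrip one would drop the periodic bond along $\phi$.

```python
import numpy as np

def cylinder_energy(m, R, J=1.0, d=0.4, k=0.1, h=0.1):
    """Discrete energy of Eq.(3) on an Nphi x Nz cylindrical grid (a = 1).

    m : (Nphi, Nz, 3) array of unit spins in Cartesian coordinates.
    Bonds along phi are periodic (closed tube); bonds along z are open.
    The applied field is uniform along x.
    """
    nphi, nz, _ = m.shape
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    e_rho = np.stack([np.cos(phi), np.sin(phi), np.zeros(nphi)], axis=-1)
    pos = np.stack(np.broadcast_arrays(R * np.cos(phi)[:, None],
                                       R * np.sin(phi)[:, None],
                                       1.0 * np.arange(nz)[None, :]), axis=-1)
    e_x = np.array([1.0, 0.0, 0.0])
    E = 0.0
    for i in range(nphi):
        for j in range(nz):
            mi = m[i, j]
            # on-site radial anisotropy and Zeeman terms
            E -= k * np.dot(mi, e_rho[i]) ** 2 + h * np.dot(mi, e_x)
            # each bond counted once (hence no 1/2 prefactor here):
            # +phi neighbor (periodic) and +z neighbor (open)
            nbrs = [((i + 1) % nphi, j)]
            if j + 1 < nz:
                nbrs.append((i, j + 1))
            for ii, jj in nbrs:
                mj = m[ii, jj]
                rij = pos[ii, jj] - pos[i, j]
                rij = rij / np.linalg.norm(rij)   # unit bond vector
                D_ij = np.cross(e_rho[i], rij)    # site-dependent DMI vector
                E -= J * np.dot(mi, mj) + d * np.dot(D_ij, np.cross(mi, mj))
    return E
```

For a uniform spin configuration the DMI term vanishes identically, since $\textbf{m}_i\times\textbf{m}_j=0$, leaving only the exchange, anisotropy and Zeeman contributions.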
The applied field is assumed either homogeneous along the $x$-axis ($\textbf{h}_i=\textbf{x}$) or radial ($\textbf{h}_i=\textbf{e}_{n,i}$), as explicitly mentioned below. Under the assumption of a grid cell with equal sizes along the azimuthal and $z$ axes ($a_\phi=a_z$) and in the limit of a very thin FM nanostrip ($t=a_z$), the energy parameters entering Eq.(\[eq:energy\]) are related to the micromagnetic material parameters of Eq.(\[eq:microm\]) through the relations $J\approx 2Aa$, $d \approx Da^2$, $k \approx K_ua^3$, $h\approx M_sBa^3$. We use material parameters typical of a transition metal thin film on a heavy metal substrate[@hag15; @lel19], namely, $ M_s=580kA/m, A=10pJ/m, D=4mJ/m^2, K_u=500kJ/m^3 $ and a cell size $a=2nm$, which is well below the exchange length $l_{ex}=\sqrt{2A/\mu_0 M_s^2}\sim 7nm$. An applied field $B=0.9~T$ is considered. Then the rationalized (dimensionless) parameters $d/J=0.4, k/J=0.1$ and $h/J=0.1$ constitute a complete set of parameters that determine the magnetic configuration at the ground state. Furthermore, the pitch length of the helical phase is determined by the rationalized parameter $d/J$ through the relation[@kee15; @sek16] $$\begin{aligned} p=\frac{2\pi a}{\tan^{-1}(d/J)}. \label{eq:pitch}\end{aligned}$$ For the material parameters mentioned above we obtain $p\approx 16.5a=33nm$. This is a characteristic length scale of the skyrmion phase, as it is approximately equal to the skyrmion radius.[@kee15; @sek16] To obtain the ground state we perform simulated annealing using the Metropolis Monte Carlo algorithm with single spin updates and a temperature-dependent spin aperture that accelerates the approach to equilibrium. In particular, a field-cooling procedure under a field $h/J=0.1$ is performed from a high temperature $k_BT/J=20$ ($k_BT_C/J\approx 1$) to a low temperature $k_BT/J=0.001$, with a variable step $dT/T=5\%$ that produces an exponential decrease of temperature.
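The parameter bookkeeping above can be checked with a few lines of arithmetic; the cooling schedule is the exponential one described in the text. This is a sketch using the quoted material constants, not the authors' simulation code.

```python
import math

# Rationalized parameters from the micromagnetic constants (a = 2 nm):
a  = 2e-9        # cell size [m]
A  = 10e-12      # exchange constant [J/m]
D  = 4e-3        # DMI constant [J/m^2]
Ku = 500e3       # radial anisotropy density [J/m^3]
Ms = 580e3       # saturation magnetization [A/m]
B  = 0.9         # applied field [T]

J = 2 * A * a    # J ~ 2 A a
d = D * a**2     # d ~ D a^2
k = Ku * a**3    # k ~ Ku a^3
h = Ms * B * a**3  # h ~ Ms B a^3

print(round(d / J, 2), round(k / J, 2), round(h / J, 3))  # -> 0.4 0.1 0.104

# Pitch length of the helical phase, Eq.(4): p ~ 16.5 a ~ 33 nm
p = 2 * math.pi * a / math.atan(d / J)

# Exponential cooling schedule of the simulated annealing (dT/T = 5%):
T, T_min = 20.0, 0.001       # temperatures in units of J/k_B
schedule = []
while T > T_min:
    schedule.append(T)
    T *= 0.95
```

Note that $h/J$ comes out as $0.104$, which the text rounds to $0.1$.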
At each temperature value we perform $5000$ Monte Carlo steps per spin (MCSS) for thermalization followed by $5000$ MCSS for calculations of thermal averages. The latter are calculated from sampling every $\tau=10$ MCSS, in order to minimize statistical correlations between sampling points. The thermodynamic quantities at each temperature are averaged over many ($\approx 20-40$) independent relaxation sequences to obtain an estimate of the statistical errors. Results and Discussion ====================== Skyrmion phase -------------- \[htb!\] ![ (Color online) Ground state configuration showing skyrmion formation in cylindrical nanostructures under application of a uniform magnetic field along the $x$-axis. Cylindrical surfaces are constructed by gradually wrapping an initial square sample ($L_y=L_z\equiv L$) around the $z$-axis. Spin configurations are color coded with the values of magnetization along the applied field direction ($x$-axis). (a), (d) planar surfaces, (b) $L=50a, \phi_0=150^0, R=19.1a$, (c) $L=50a, \phi_0=360^0, R=8.0a$, (e) $L=100a, \phi_0=150^0, R=38.2a$, and (f) $L=100a, \phi_0=360^0, R=15.9a$, with $a=2nm$. With increasing angle of curvature ($\phi_0$) the skyrmion phase transforms to either a spiral phase, as in (c), or to a mixed skyrmion-spiral phase, as in (f), depending on the value of the curvature radius ($R$). []{data-label="fig:Skphase"}](fig2.jpg "fig:"){width="0.95\linewidth"} We consider first the evolution of a skyrmion ground state as the curvature of the nanostructure increases. We start from a planar surface (PS) in the yz-plane and wrap it gradually along the z-axis to form an open cylindrical surface (CS) and eventually, a closed cylindrical surface corresponding to a nanotube (NT) (Fig.\[fig:Skphase\]). When we curve the 2D sample, we preserve the dimensions $(L_y,L_z)$ of the initial planar system in order to emphasize the role of curvature and exclude finite size effects. 
Periodic boundary conditions are used solely along the z-axis of our curved samples, except for nanotubes, where the lateral free boundaries naturally couple to each other. In planar systems, we observe the well-known skyrmion lattice[@yis09] consisting of a hexagonal arrangement of skyrmions. Obviously, the number of skyrmions increases with the area of the planar sample; however, their spatial density remains almost unchanged. As the angle of curvature increases, skyrmions close to the free edges of the curved surface become elongated and finally transform into spirals. This effect becomes more evident in smaller samples, which are characterized by smaller values of the curvature radius (Fig.\[fig:Skphase\]b,c). In a small nanotube with radius $R=8a$ (Fig.\[fig:Skphase\]c) stripes form almost all around the surface. On the contrary, in a larger nanotube with radius $R=15.9a$ isolated skyrmions are observed along the front and the back ridge of the cylinder, where the external field is almost normal to the surface, but spiral structures form along the left and right sides of the large tube (Fig.\[fig:Skphase\]f), where the applied field is almost tangential to the surface. Thus, skyrmion formation on nanotubes is strongly dependent on the nanotube radius, with large-radius nanotubes supporting the coexistence of both skyrmion and stripe phases. We underline the fact that the two phases are spatially separated, with skyrmions forming along the ridge and stripes forming on the sides of the nanotube. The width of the region supporting skyrmions is determined by the size of the skyrmion radius ($R_{Sk}$) relative to the curvature radius ($R$). This point is discussed further below. To quantify the evolution of the skyrmion phase with sample curvature, as depicted in Fig.\[fig:Skphase\], we calculate the topological charge ($Q$) of the ground state.
For a three-component spin field $\textbf{m}(\phi,z)$ on a cylindrical surface described by the coordinates ($\phi,z$), the topological charge is given as $$\begin{aligned} Q=\frac{1}{4\pi} \iint d\phi~dz~ \textbf{m}\cdot (\frac{\partial\textbf{m}}{\partial\phi} \times \frac{\partial\textbf{m}}{\partial z}). \label{eq:topol_charge}\end{aligned}$$ For the numerical computations we implement a lattice expression of the topological charge[@ber81], appropriate to a square lattice wrapped around a cylindrical surface. Skyrmions have a topological charge $Q=\pm1$, depending on the direction of the applied field. Thus the absolute value of $Q$ for a nanostrip in the skyrmion phase equals the number of skyrmions supported. The dependence of the topological charge on the curvature angle is shown in Fig.\[fig:Q\_vs\_a\] for nanostrips with different widths $L_y$. \[htb!\] ![ (Color online) Dependence of the ground state topological charge (Q) on the angle of curvature ($\phi_0$) for magnetic nanostructures with different sizes ($L\times L$) and $a=2nm$. The applied magnetic field is uniform along the x-axis (circles, closed triangles) or radial (open triangles). []{data-label="fig:Q_vs_a"}](fig3.jpg "fig:"){width="0.95\linewidth"} We notice that $Q$ remains almost constant up to an angle $\phi_0\approx 100^0$ and beyond that it decreases smoothly from the initial value for planar nanostrips (skyrmion phase) to a smaller value close to zero for nanotubes, indicating that only a small fraction of the initial number of skyrmions is stabilized (mixed skyrmion-stripe phase). In contrast to this trend, when the applied field is radial, the topological charge is only weakly dependent on the curvature angle. This weak decay of $Q$ with curvature angle is due to the gradual reduction of the co-planar condition between the DMI vectors on each lattice site.
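A simple discretization of Eq.(\[eq:topol\_charge\]) can be sketched with central differences; note that this is an illustrative scheme, not the lattice formula of [@ber81] used in the paper, and the test profile below (an axially symmetric Belavin-Polyakov-type skyrmion) is chosen for convenience, with sizes that are not taken from the text.

```python
import numpy as np

def topological_charge(m, dy=1.0, dz=1.0):
    """Q = (1/4pi) * integral of m . (dm/dy x dm/dz), Eq.(5) discretized
    with central differences on a (locally) flat patch.

    m : (Ny, Nz, 3) array of unit spins; for a cylinder, y stands for the
    arc-length coordinate R*phi.
    """
    dm_dy = np.gradient(m, dy, axis=0)
    dm_dz = np.gradient(m, dz, axis=1)
    density = np.einsum('...k,...k->...', m, np.cross(dm_dy, dm_dz))
    return density.sum() * dy * dz / (4.0 * np.pi)

# Axially symmetric test profile of core radius lam (illustrative values):
# m = (2*lam*y, 2*lam*z, y^2 + z^2 - lam^2) / (y^2 + z^2 + lam^2),
# which points down at the center and up far away, carrying |Q| = 1.
L, lam = 40, 5.0
y, z = np.meshgrid(np.arange(-L, L + 1, dtype=float),
                   np.arange(-L, L + 1, dtype=float), indexing='ij')
den = y**2 + z**2 + lam**2
m = np.stack([2 * lam * y / den,
              2 * lam * z / den,
              (y**2 + z**2 - lam**2) / den], axis=-1)

Q = topological_charge(m)   # close to -1 for this profile
```

The small deficit from $|Q|=1$ comes from truncating the slowly decaying tail at the boundary of the finite grid, an effect analogous to the non-integer values reported above for finite nanostrips.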
A radial field for a cylindrical nanostrip is geometrically analogous to the case of a uniform field normal to a planar nanostrip, and the conservation of the topological charge with curvature underlines the importance of the normality condition for the applied field in the stabilization of skyrmions on any surface. For the planar nanostrips shown in Fig.\[fig:Skphase\]a ($Q=7.2$) and Fig.\[fig:Skphase\]d ($Q=37.8$), the values of $Q$ deviate weakly from integer values due to the misalignment of the moments located on the free boundaries of the sample[@roh13] and the thermal fluctuations inherent to the Monte Carlo method. For curved surfaces, however, the shape distortion of the skyrmions and their evolution to stripe-like structures are not characterized by integer values of $Q$; thus the calculation of $Q$ based on Eq.(\[eq:topol\_charge\]) yields non-integer values, as in Fig.\[fig:Q\_vs\_a\], and is only indicative of the number of skyrmions observed in the mixed phase. \[htb!\] ![ Dependence of the ground state topological charge ($Q$) on the radius ($R$) of magnetic nanotubes ($\phi_0=360^0$) with length $L_z=220a$ and $a=2nm$. []{data-label="fig:Q_vs_R"}](fig4.jpg "fig:"){width="0.95\linewidth"} The magnetization landscape at the ground state of a curved nanostrip depends on both the curvature radius ($R$) and the curvature angle ($\phi_0$). Next, we fix the curvature angle by choosing to consider nanotubes ($\phi_0=360^0$) of various radii and fixed length. We show in Fig.\[fig:Q\_vs\_R\] the dependence of the topological charge on the nanotube radius. For radius $R\lesssim 15a$ the topological charge assumes low values, indicating stripe formation around the tube, while in nanotubes with larger radius ($R\gtrsim18a$) a sharp increase of $Q$ is observed, signifying skyrmion formation.
This behavior of $Q$ is in accordance with the magnetization configuration seen in Fig.\[fig:Skphase\]f ($R=15.9a$), which indicates skyrmion formation along the ridge of the nanotube. In conclusion, the numerical data so far demonstrate that the skyrmion phase of a planar nanostrip with material parameters typical of a ferromagnetic/heavy metal interface ($p\approx16a$) transforms to a stripe phase when the curvature angle exceeds, or the curvature radius falls below, some characteristic values ($\phi_0\gtrsim100^0$, $R/a \lesssim 15$). It is important to notice in Fig.\[fig:Q\_vs\_R\] that the skyrmion phase disappears when the curvature radius becomes comparable to the pitch length ($R\sim p\sim 16a$). We elaborate further on this point in the next section, by focusing on a nanostrip with a single skyrmion. Skyrmion shape and size ----------------------- The analysis of the skyrmion shape and size in the ground state is a numerically intricate task[@ziv19], especially when the system is in a mixed phase, as occurs in curved nanostrips (Fig.\[fig:Skphase\]). For this reason we increase the discretization level in order to stabilize a single skyrmion in the simulation cell and facilitate the analysis. In particular, we use further on a cell size $a=1nm$ and keep the material parameters $(A_{ex},D,K_u,M_s)$ and the applied field $B$ unchanged. This leads to new rationalized parameters $d/J=0.2$, $k/J=0.025$, $h/J=0.025$, and a pitch length $p\approx33.0a=33nm$. Notice that increasing the discretization level does not substantially affect the physical value of the pitch length, because the latter is a slowly varying function of the grid cell size for $D/A_{ex} \lesssim 1nm^{-1}$ (see Eq.(\[eq:pitch\])). \[htb!\] ![ (Color online) Snapshots of ground state magnetic configurations in curved nanostrips with size $50a\times50a$ $(a=1nm)$.
Curvature angles and topological charge are (a) $\phi_0= 0^0$, $Q=0.84$, (b) $\phi_0= 50^0$, $Q=0.83$, (c) $\phi_0=100^0$, $Q=0.84$, (d) $\phi_0=150^0$, $Q=0.99$, (e) $\phi_0=160^0$, $Q=0.52$, and (f) $\phi_0=200^0$, $Q=0.10$. The curved nanostrips (b)-(f) are unwrapped on the $yz$-plane for visual clarity. A uniform field ($h/J=0.025$) is applied in all cases along the $x$-axis. The color code indicates the values of magnetization along the field axis. A transformation from a purely skyrmion phase (a) to a mixed skyrmion-stripe phase (e,f), due to increasing curvature, is seen. []{data-label="fig:Single_Sk"}](fig5.jpg "fig:"){width="0.95\linewidth"} In Fig.\[fig:Single\_Sk\], we show the evolution of a single skyrmion that is stable on a planar nanostrip as the nanostrip is curved gradually. For small angles ($\phi\lesssim 100^0$), the skyrmion retains its basic geometrical features, such as its size and axially symmetric shape. The robustness of the skyrmion in this regime is consistent with the constant value of the topological charge at small curvature angles, seen in Fig.\[fig:Q\_vs\_a\]. As the curvature increases, the skyrmion acquires a more elliptical shape while its size decreases. Finally, for larger angles ($\phi\gtrsim160^0$) skyrmion formation is no longer stable. \[htb!\] ![ (Color online) Dependence of skyrmion circularity ($M_{circ}$) and linearity ($M_{lin}$) on curvature angle of a cylindrical nanostrip with size $50a\times50a~ (a=1nm)$. Error bars are obtained from an average over $30$ independent configurations of the ground state. []{data-label="fig:Sk_MM"}](fig6.jpg "fig:"){width="0.95\linewidth"} To quantify our observations on the magnetic configurations of Fig.\[fig:Single\_Sk\], we perform shape analysis of the skyrmion core ($S$), which is defined as the compact region of the nanostrip with negative local magnetization ($m_{i,x}<0$).
We compute two shape measures of $S$, namely the invariant moments $M_{circ}$ and $M_{lin}$, which measure the degree of circularity[@zun14] and linearity [@sto08], respectively. These are defined as $$\begin{aligned} M_{circ}=\frac{\mu_{00}}{\mu_{20}+\mu_{02}} \label{eq:hue1}\end{aligned}$$ and $$\begin{aligned} M_{lin}=\frac{ \sqrt{(\mu_{20}-\mu_{02})^2+4\mu_{11}^2} }{\mu_{20}+\mu_{02}}, \label{eq:hue2}\end{aligned}$$ where the second order geometric moments are $$\begin{aligned} \mu_{pq}=\frac{1}{N_S}\sum_{i\in S}(y_i-y_c)^p(z_i-z_c)^q \label{eq:moms}\end{aligned}$$ with $p,q$ positive integers satisfying $p+q\le 2$, $N_S$ the number of cells in $S$ and $(y_c,z_c)$ the centroid coordinates $y_c=\sum_i y_i/N_S$ and $z_c=\sum_i z_i /N_S$. In the limiting case of a circle $M_{circ}=1, M_{lin}=0$, and in the case of a straight line $M_{circ}=0, M_{lin}=1$. In Fig.\[fig:Sk\_MM\] we show the evolution of the shape measures of a skyrmion with curvature angle. The sudden drop of $M_{circ}$ above $\phi_0 \simeq 100^0$ signifies the skyrmion elongation and eventual annihilation. Below this characteristic angle, the skyrmion retains its circular shape ($M_{circ}\simeq 1$ and $M_{lin}\simeq 0$). Before annihilation, a weak hump in the curve of $M_{lin}$ indicates a weak elongation of the skyrmion shape. \[htb!\] ![ (Color online) Dependence of the effective skyrmion radius ($R_{sk}$) on curvature radius ($R$) for curved nanostrips with size $50a\times50a~(a=1nm)$. Dashed line is the $R_{sk}=R$ plot that serves as guide to the eye. Skyrmion annihilation is observed when $R_{sk} \simeq R$ ($\phi_0\simeq 150^o$). The critical curvature radius for skyrmion stability decreases with increasing value of the applied field. []{data-label="fig:Rsk"}](fig7.jpg "fig:"){width="0.95\linewidth"} To study the evolution of skyrmion size with curvature we compute the effective skyrmion radius, with $R_g=\sqrt{\mu_{20}+\mu_{02}}$ the radius of gyration of the skyrmion region $S'$.
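The moment-based shape analysis above can be sketched in a few lines. This is an illustrative implementation, not the authors' code; since the normalization of the circularity measure in Eq.(\[eq:hue1\]) depends on the moment convention, the sketch restricts itself to the scale-invariant linearity measure of Eq.(\[eq:hue2\]) and the radius of gyration $R_g$.

```python
import numpy as np

def shape_measures(mask):
    """Linearity measure M_lin (Eq. 8) and radius of gyration R_g of a
    binary region, from the normalized second-order central moments (Eq. 9)."""
    ys, zs = np.nonzero(mask)
    yc, zc = ys.mean(), zs.mean()
    mu20 = ((ys - yc) ** 2).mean()
    mu02 = ((zs - zc) ** 2).mean()
    mu11 = ((ys - yc) * (zs - zc)).mean()
    m_lin = np.hypot(mu20 - mu02, 2.0 * mu11) / (mu20 + mu02)
    r_g = np.sqrt(mu20 + mu02)
    return m_lin, r_g

# Illustrative limiting cases on a 41x41 grid (sizes are arbitrary):
Y, Z = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21), indexing='ij')
disk = Y**2 + Z**2 <= 15**2            # circular core: M_lin -> 0
line = (Y == 0) & (np.abs(Z) <= 15)    # straight stripe: M_lin -> 1

m_lin_disk, r_g_disk = shape_measures(disk)   # r_g ~ R/sqrt(2) ~ 10.6
m_lin_line, _ = shape_measures(line)
```

Applied to the skyrmion core, these measures distinguish the circular skyrmions on the ridge from the elongated, stripe-like structures on the sides of the tube.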
We define $S'$ as the compact region of the nanostrip with local magnetization less than the saturation value ($m_{i,x} < 0.98$ for $B_x > 0$) and topological charge $Q>0.5$. For a circular region (disk), obviously $R_{sk}$ equals the disk radius, while for an elliptical region $R_{min}<R_{sk}<R_{max}$. In Fig.\[fig:Rsk\] we show the dependence of skyrmion radius on curvature radius for the same nanostrips as in Fig.\[fig:Sk\_MM\]. Starting from the planar limit ($R\gg a$), we notice that $R_{sk}$ remains constant as $R$ decreases up to the point that the two radii become approximately equal. Then a sudden drop of $R_{sk}$ indicates the skyrmion instability and its annihilation. This behavior is also observed for higher field values ($h/J=0.030, 0.035$), where the skyrmion radius is slightly reduced. The stability of the skyrmion phase when $R \gtrsim R_{sk}$ is consistent with what is shown in Fig.\[fig:Q\_vs\_R\] regarding the evolution of the topological charge with nanotube radius. Again in that case, the sudden increase of the curve $Q(R)$, indicating the appearance of the skyrmion phase, occurs when $R \simeq p \simeq R_{sk}$. Seen from a general point of view, the curvature radius is a geometrical length scale and the skyrmion radius a physical length scale. Skyrmion stability is established in planar nanostrips where $R/R_{sk}\gg1$ and the stability condition is violated when the two length scales become comparable, in other words when $R/R_{sk} \simeq 1$. This geometrical argument summarizes the stability of skyrmions on curved surfaces as a matter of competition between length scales.
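The moment-based shape analysis above reduces to a few lines of arithmetic. The following Python sketch (illustrative only, not the code used in this work) computes the central moments $\mu_{pq}$, the two shape measures and the effective radius for a set of lattice cells, assuming unit lattice spacing ($a=1$). For the circularity we include the area and $2\pi$ normalization of the moment-based measure of the cited reference, so that an ideal disk yields $M_{circ}=1$; the linearity is implemented exactly as displayed.

```python
import math

def central_moment(points, p, q):
    # mu_pq = (1/N_S) * sum over cells of (y - y_c)^p (z - z_c)^q
    n = len(points)
    yc = sum(y for y, _ in points) / n
    zc = sum(z for _, z in points) / n
    return sum((y - yc) ** p * (z - zc) ** q for y, z in points) / n

def shape_measures(points):
    mu20 = central_moment(points, 2, 0)
    mu02 = central_moment(points, 0, 2)
    mu11 = central_moment(points, 1, 1)
    n = len(points)
    # Moment-based circularity, normalized so that a disk gives 1
    # (constants as in the measure of the cited reference).
    m_circ = n / (2.0 * math.pi * (mu20 + mu02))
    # Linearity as displayed in the text: 1 for a straight line.
    m_lin = math.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2) / (mu20 + mu02)
    return m_circ, m_lin

def effective_radius(points):
    # R_sk = sqrt(2) * R_g with R_g = sqrt(mu20 + mu02):
    # for a disk of radius R, mu20 + mu02 = R^2/2, so R_sk recovers R.
    rg = math.sqrt(central_moment(points, 2, 0) + central_moment(points, 0, 2))
    return math.sqrt(2.0) * rg

# Example regions: an ideal lattice disk (circular "skyrmion core")
# of radius 10, and a straight line of cells.
disk = [(y, z) for y in range(-10, 11) for z in range(-10, 11)
        if y * y + z * z <= 100]
line = [(y, 0) for y in range(50)]
```

Applied to the lattice disk, the sketch returns $M_{circ}\simeq 1$, $M_{lin}=0$ and an effective radius close to the disk radius, while the line of cells gives $M_{lin}=1$, matching the limiting cases quoted above.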
The color code indicates the values of magnetization along the field direction. (d) Time evolution of the topological charge ($Q$) after switching off the applied field ($t=0$). []{data-label="fig:Sk_t"}](fig8a.jpg "fig:"){width="0.95\linewidth"} ![ (Color online) (a)-(c) Time evolution of zero-field skyrmions in a nanoelement with size $50a\times50a~(a=1nm)$ and different curvature angles ($\phi_0=50^0,100^0,150^0$). The nanostrips are unwrapped on the $yz$-plane for visual clarity. A uniform field is applied in all cases along the $x$-axis. The color code indicates the values of magnetization along the field direction. (d) Time evolution of the topological charge ($Q$) after switching off the applied field ($t=0$). []{data-label="fig:Sk_t"}](fig8b.jpg "fig:"){width="0.95\linewidth"} It is well established that magnetic skyrmions can be stabilized in planar nanoelements of circular shape (dots) in the absence of an applied field. These are referred to as zero-field skyrmions. We examine here the possibility of stabilizing zero-field skyrmions in nanoelements that deviate from the planar shape. The size of the nanoelement and the skyrmion pitch are chosen, as in the previous section, such that a single skyrmion is stabilized in the nanoelement. We field cool the system to a very low temperature and at the end of the cooling process we switch off the magnetic field and study the time evolution of the system by recording the magnetization configuration and the topological charge values. Results of the zero-field relaxation of the topological charge are shown in Fig.\[fig:Sk\_t\], where the observation time after reaching the ground state and switching off the field has been about 10 times longer (MCSS=$10^5$) than the relaxation time used during the field-cooling process (MCSS=$10^4$). Distinct behaviors are recorded for systems with different degrees of curvature.
In the case of planar nanoelements the topological charge remains almost constant in time, indicating the stability of the skyrmion at zero field. In systems with small curvature angle ($\phi_0 \simeq 100^0$), the skyrmion is still stable, however, its size increases slightly in the absence of a magnetic field. An increase of the skyrmion radius at zero field relative to the non-zero field case is expected on physical grounds, because the Zeeman energy favors ferromagnetic order at the expense of moment misalignment within the skyrmion region. However, as seen in Fig.\[fig:Sk\_t\]a, the curvature of the nanoelement enhances this effect. The weak increase of the topological charge from $Q\simeq 0.8$ to $Q\simeq 1.2$ that accompanies the increase in size of the zero-field skyrmion ($\phi_0 \lesssim 100^0$) is understood as an outcome of thermal fluctuations and misalignment of the moments along the free boundaries.[@roh13] For larger curvature angles ($\phi_0 \simeq 150^0$) the skyrmion becomes unstable at zero field and it gradually transforms to a stripe-like structure. This behavior is characterized by decreasing values of the topological charge with time. Interestingly, in the case of planar nanoelements the stabilization of zero-field skyrmions is attributed to the presence of free boundaries that repel the skyrmion. It becomes clear from Fig.\[fig:Sk\_t\] that the same argument holds in the case of curved nanoelements provided the curvature angle remains below a characteristic angle ($\phi_0 \simeq 100^0$) that corresponds to a curvature radius ($R=L/\phi_0$) comparable to the skyrmion radius ($R\sim R_{sk}$). Conclusion and Discussion ========================= We have studied the influence of curvature on the stabilization of Néel skyrmions in thin nanostructures with cylindrical shape and competing exchange and Dzyaloshinskii-Moriya interactions. We showed that application of a uniform magnetic field normal to the cylinder axis is adequate to stabilize the skyrmions.
A geometrical criterion for the stabilization of skyrmions is shown: the curvature radius of the surface must be at least of the order of the skyrmion radius ($R\gtrsim R_{sk}$). Similarly, zero-field skyrmions can also be stabilized on cylindrical nanoelements, provided the above geometrical criterion is satisfied. With increasing curvature of the magnetic surface a transformation from a purely skyrmion phase to a mixed skyrmion-stripe phase occurs. The appealing fact is that the two phases are spatially separated. Skyrmions form on the ridge of the curved surface, namely a zone parallel to the cylinder axis where the external field is normal or almost normal to the surface, and stripes form on the lateral side of the surface, where the magnetic field is parallel or almost parallel to the surface. Our study showed the feasibility of stabilizing skyrmions on nanotubes. In particular, a core/shell magnetic nanowire with a heavy-metal core and a thin transition-metal shell could be a candidate physical system to support interface skyrmions in the shell layer. Alternatively, nanotubes of a B20 material are also expected to support Bloch skyrmions in the surface with a similar physical behavior to the Néel skyrmions studied here. The spatial separation of skyrmions from stripes in the thin ferromagnetic cylindrical shell layer is anticipated to bring new perspectives in current-driven dynamics of skyrmions in curved nanostructures, since the applied magnetic field provides the required confining energy barrier that keeps skyrmions along the ridge of the nanotube and prohibits boundary annihilation due to the skyrmion Hall effect. We hope that our results will stimulate further experimental work in the field of spintronics with magnetic skyrmions in nanowires and nanotubes. Acknowledgments {#acknowledgments .unnumbered} =============== The authors (DK and LT) acknowledge financial support by the Special Account for Research of ASPETE through project *NanoSky* (No 80146).
AP is co-financed by Greece and the European Union (European Social Fund- ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning» in the context of the project “Strengthening Human Resources Research Potential via Doctorate Research” (MIS-5000432), implemented by the State Scholarships Foundation (IKY).
--- abstract: 'We show that there exists a universal positive constant $\varepsilon_0 > 0$ with the following property: Let $g$ be a positive Einstein metric on $S^4$. If the Yamabe constant of the conformal class $[g]$ satisfies $$Y(S^4, [g]) >\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}]) - \varepsilon_0\,,$$ where $g_{\mathbb S}$ denotes the standard round metric on $S^4$, then, up to rescaling, $g$ is isometric to $g_{\mathbb S}$. This is an extension of Gursky’s gap theorem for positive Einstein metrics on the four-sphere.' address: - 'Department of Mathematics, Tokyo Institute of Technology, Tokyo 152-8551, Japan' - 'Department of Mathematics, Tokyo Institute of Technology, Tokyo 152-8551, Japan' - 'Mathematics Department, Indian Institute of Science, 560012 Bangalore, India' author: - 'Kazuo Akutagawa${}^*$' - 'Hisaaki Endo${}^{**}$' - Harish Seshadri date: 'January, 2018;  February, 2018 (revised version).' title: | A gap theorem for positive Einstein metrics\ on the four-sphere --- Introduction and main results ============================= A smooth Riemannian metric $g$ is said to be [*Einstein*]{} if its Ricci tensor ${\rm Ric}_g$ is a constant multiple $\lambda$ of $g$: $${\rm Ric}_g = \lambda\,g\,.$$ When such a metric exists, it is natural to ask whether it is unique. However, in dimension $n \geq 5$, there exist many examples of closed $n$-manifolds each of which has infinitely many non-homothetic Einstein metrics (cf.[@Besse]). In fact, there exist infinitely many non-homothetic Einstein metrics of positive scalar curvature ([*positive Einstein*]{} for brevity) on $S^n$ when $5 \le n \le 9$ [@Bohm] (cf.[@Jensen], [@B-K]). There are no non-existence or uniqueness results known when $n \geq 5$. When $n = 4$, there are necessary topological conditions for a closed $4$-manifold $M$ to admit an Einstein metric [@Thorpe], [@Hitchin-1], [@LeBrun-2].
Uniqueness is known in some special cases: when $M$ is a smooth compact quotient of real hyperbolic $4$-space ([resp.]{} complex-hyperbolic $4$-space), the standard negative Einstein metric is the unique Einstein metric (up to rescaling and isometry) [@BCG] ([resp.]{}[@LeBrun-1]). In the positive case, there are some partial rigidity results on the $4$-sphere $S^4$ and the complex projective plane $\mathbb{CP}^2$ [@GL], [@G], [@Y]. When $M = S^4$, the standard round metric $g_{\mathbb{S}}$ of constant curvature $1$ is, to date, the only known Einstein metric (up to rescaling and isometry). In this connection we have the following gap theorem due to M.Gursky (see [@ABKS] for the significance of the constant $\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$): \[Gursky\] Let $g$ be a positive Einstein metric on $S^4$. If its Yamabe constant $Y(S^4, [g])$ satisfies the following inequality $$Y(S^4, [g]) \geq \frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$$ then, up to rescaling, $g$ is isometric to $g_{\mathbb{S}}$. Here, $[g]$ denotes the conformal class of $g$. Note that $Y(S^4, [h]) \leq Y(S^4, [g_{\mathbb{S}}]) = 8\sqrt{6}\pi$ for any Riemannian metric $h$ and that $Y(S^4, [g]) = R_g \sqrt{V_g}$ for any Einstein metric $g$, where $R_g$ and $V_g = {\rm Vol}(S^4, g)$ denote respectively the scalar curvature of $g$ and the volume of $(S^4, g)$. Our main result in this paper is an extension of Theorem\[Gursky\]: \[MainThm1\] There exists a universal positive constant $\varepsilon_0 > 0$ with the following property$:$ If $g$ is a positive Einstein metric on $S^4$ with Yamabe constant $$Y(S^4, [g]) >\frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}]) - \varepsilon_0,$$ then, up to rescaling, $g$ is isometric to $g_{\mathbb{S}}$. This result can be restated in terms of the [*Weyl constant*]{} of $[g]$ (cf.[@ABKS]). 
Indeed, the Chern-Gauss-Bonnet theorem (see Remark\[ALE\]-(1)) implies that the lower bound on the Yamabe constant is equivalent to the following upper bound on the Weyl constant: $\int_M |W_g|^2 d \mu_g < \frac{32}{3} \pi^2 + \widetilde{\varepsilon}_0$, where $\widetilde{\varepsilon}_0 := \frac{\varepsilon_0}{24}(16\sqrt{2}\pi - \varepsilon_0) > 0$. More generally, we obtain the following (note that $8\sqrt{2}\pi = \frac{1}{\sqrt{3}} Y(S^4, [g_{\mathbb S}])$): \[MainThm2\] For $c > 0$, let $\mathcal{E}_{\geq c}(S^4)$ denote the space of all unit-volume positive Einstein metrics $g$ on $S^4$ with $c \leq Y(S^4, [g]) < 8\sqrt{2}\pi$. Then the number of connected components of the moduli space $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is finite. In particular, $\{ Y(S^4, [g]) \in [c, 8\sqrt{2}\pi)\ |\ g \in \mathcal{E}_{\geq c}(S^4) \}$ is a finite set $($possibly empty$)$. Here $ \mathcal{M}_1(S^4)/{\rm Diff}(S^4)$ has the $C^\infty$-topology and $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is endowed with the subspace topology. These theorems follow from the following crucial result: \[MainProp\] Let $\{g_i\}$ be a sequence in $ \mathcal{E}_{\geq c}(S^4)$ for some positive constant $c > 0$. Then there exists a subsequence $\{j\} \subset \{i\}$, $\{\phi_j\} \subset {\rm Diff}(S^4)$ and a unit-volume positive Einstein metric $g_{\infty}$ on $S^4$ such that $\phi_j^*g_j$ converges to $g_{\infty}$ with respect to the $C^{\infty}$-topology on $\mathcal{M}_1(S^4)$. [**Remark:**]{} Theorem D of [@Anderson] states that the same conclusion as the one in Proposition\[MainProp\] holds for any sequence $\{g_i\} \subset \mathcal{E}_{\geq c}(M)$ on any closed $4$-manifold $M$ with $1 \leq \chi(M) \leq 3$, where $\chi(M)$ denotes the Euler characteristic of $M$. Unfortunately, the proof appears to be incorrect.
Specifically, Theorem D is based on Lemma 6.3, which asserts that a Ricci-flat ALE 4-space $X$ with $\chi(X)=1$ is necessarily isometric to the Euclidean $4$-space $({\mathbb R}^4, g_{\mathbb{E}})$. This is not true: the Ricci-flat ALE 4-space $X_1$ constructed by Eguchi-Hanson [@EH] has a free, isometric ${\mathbb Z}_2$-action whose quotient $X_2 = X_1/{\mathbb Z}_2$ is a Ricci-flat ALE $4$-space with $\chi(X_2)=1$. Note that $X_2$ is nonorientable. Even if we assume that $X$ is orientable in Lemma6.3, the topological argument in the proof still contains some gaps. Proposition3.10 of [@Anderson-GAFA] corrects a minor inaccuracy of Lemma6.3. However, the proof also contains some gaps in the topological argument (see Remark\[Counter\] in $\S$4 for details). Gursky’s proof of Theorem\[Gursky\] involves a sophisticated Bochner technique, a modified scalar curvature and a conformal rescaling argument. The proof of Proposition1.4 is based on topological results about $S^3$-quotients embedded in $S^4$ and the convergence theory of Einstein metrics in four dimensions. Given this proposition, we invoke Gursky’s result to prove Theorems\[MainThm1\] and\[MainThm2\]. In $\S$2, we recall some background material and prove Theorems\[MainThm1\] and\[MainThm2\], assuming Proposition1.4. In $\S$3, we review two key results needed for the proof of Proposition1.4. Finally, in $\S$4, we prove Proposition1.4.\ \ [**Acknowledgements.**]{} The authors would like to thank Anda Degeratu and Rafe Mazzeo for valuable discussions on the eta invariant, and Shouhei Honda for helpful discussions on convergence results of Riemannian manifolds with bounded Ricci curvature. They would also like to thank Matthew Gursky and Claude LeBrun for useful advice, and Gilles Carron for crucial comments.\ Preliminaries and proofs of Theorems 1.2 and 1.3 ================================================ We first review the definitions of Yamabe constants and Yamabe metrics.
Let $M^n$ be a closed $n$-manifold with $n \geq 3$. It is well known that a Riemannian metric on $M$ is Einstein if and only if it is a critical point of the normalized Einstein-Hilbert functional $I$ on the space $\mathcal{M}(M)$ of all Riemannian metrics on $M$ $$I : \mathcal{M}(M) \rightarrow \mathbb{R},\quad g \mapsto I(g) := \frac{\int_MR_gd\mu_g}{{\rm Vol}(M, g)^{(n-2)/n}},$$ where $d\mu_g$ denotes the volume form of $g$. The restriction of $I$ to any conformal class $[g] := \{ e^{2f}\,g\ |\ f \in C^{\infty}(M) \}$ is always bounded from below. Hence, we can consider the following conformal invariant $$Y(M, [g]) := \inf_{\widetilde{g} \in [g]}I(\widetilde{g}),$$ which is called the [*Yamabe constant*]{} of $(M, [g])$. A remarkable theorem of H.Yamabe, N.Trudinger, T.Aubin and R.Schoen asserts that each conformal class $[g]$ contains metrics $\check{g}$, called [*Yamabe metrics*]{}, which realize the minimum (cf.[@LP], [@Sc-1]) $$Y(M, [g]) = I(\check{g}).$$ These metrics must have constant scalar curvature $$R_{\check{g}} = Y(M, [g])\cdot V_{\check{g}}^{-2/n},$$ where $V_{\check{g}} = {\rm Vol}(M, \check{g})$. Aubin proved that $$Y(M^n, C) \leq Y(S^n, [g_{\mathbb{S}}]) = n(n-1) V_{g_{\mathbb{S}}}^{2/n}$$ for any conformal class $C$ on $M$. Obata’s Theorem[@Obata] implies that [*any Einstein metric is a Yamabe metric*]{}. When $n = 4$, $$Y(M^4, [g]) = R_{\widehat{g}} \sqrt{V_{\widehat{g}}} \leq Y(S^4, [g_{\mathbb{S}}]) = 8\sqrt{6}\pi$$ for any Einstein metric $\widehat{g} \in [g]$. Assuming Proposition1.4, we can now prove Theorem\[MainThm1\].
Suppose that there exists a sequence $\{g_i\}$ of unit-volume Einstein metrics on $S^4$ satisfying $$Y(S^4, [g_i]) = R_{g_i} < 8\sqrt{2}\pi\ \ ({\rm for}\ \ \forall i),\quad Y(S^4, [g_i]) = R_{g_i} \nearrow 8\sqrt{2}\pi\ \ ({\rm as}\ \ i \to \infty).$$ By Proposition1.4, there exists a subsequence $\{j\} \subset \{i\}$, a sequence $\{\phi_j\} \subset {\rm Diff}(S^4)$ and a unit-volume positive Einstein metric $g_{\infty}$ on $S^4$ such that $\phi_j^*g_j$ converges to $g_{\infty}$ with respect to the $C^{\infty}$-topology on $S^4$. Then, we get $$\label{lll} Y(S^4, [g_{\infty}]) = R_{g_{\infty}} = 8\sqrt{2}\pi.$$ On the other hand, Theorem\[Gursky\] implies that $(S^4, g_{\infty})$ is isometric to $(S^4, g_{\mathbb{S}})$. Hence, $$Y(S^4, [g_{\infty}]) = Y(S^4, [g_{\mathbb{S}}]) = 8\sqrt{6}\pi.$$ This contradicts (\[lll\]). Therefore, there exists a positive constant $\varepsilon_0 > 0$ such that any unit-volume positive Einstein metric $g$ on $S^4$ satisfying $$Y(S^4, [g]) > 8\sqrt{2}\pi - \varepsilon_0$$ is isometric to $g_{\mathbb{S}}$. By the result of N. Koiso[@Koiso-2 Theorem3.1] and [@Besse Corollary12.52] (cf.[@Ebin Theorem7.1], [@Koiso-1 Theorem2.2]), we first remark that, for each $g \in \mathcal{E}_{\geq c}(S^4)$, the premoduli space $\mathcal{E}_{\geq c}(S^4)$ around $g$ is a real analytic subset of a finite dimensional real analytic submanifold in $\mathcal{M}_1(S^4) := \{ g \in \mathcal{M}(S^4)\ |\ V_g = 1 \}$, and the moduli space $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is arcwise connected. Moreover, the Yamabe constant $Y(S^4, [\bullet])$ is a locally constant function and it takes (at most) countably many values on $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$. Suppose that there exist infinitely many connected components of the moduli space $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ (see [@Besse Chapters4 and 12] for the topology on it).
Then, there exists a sequence $\{g_i\}$ in $\mathcal{E}_{\geq c}(S^4)$ such that the equivalence classes of any two $g_{i_1}$ and $g_{i_2}$ are contained in different connected components of $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ if $i_1 \ne i_2$. Similar to the proof of Theorem\[MainThm1\], there exists a subsequence $\{j\} \subset \{i\}$, a sequence $\{\phi_j\} \subset {\rm Diff}(S^4)$ and a unit-volume positive Einstein metric $g_{\infty}$ on $S^4$ such that $\phi_j^*g_j$ converges to $g_{\infty}$ with respect to the $C^{\infty}$-topology on $S^4$. We note here that the topology of the moduli space is induced from that of the space $\mathcal{M}_1(S^4)$. Then, there exists a large positive integer $j_0$ such that the set $\{\phi_j^*g_j\}_{j \geq j_0}$ is contained in a connected component. This contradicts the choice of $\{g_i\}$. Hence, the number of connected components of the moduli space $\mathcal{E}_{\geq c}(S^4)/{\rm Diff}(S^4)$ is finite (possibly zero). In particular, the set $\{ Y(S^4, [g]) \in [c, 8\sqrt{2}\pi)\ |\ g \in \mathcal{E}_{\geq c}(S^4) \}$ is a finite set $($possibly empty$)$. A review of two key results =========================== An embedding theorem: --------------------- It will be necessary to know which quotients $S^3/\Gamma$ of $S^3$ embed smoothly in $S^4$. The theorem below gives a complete answer, which is one of the two key results for the proof of Proposition1.4. \[Key-2\] Let $\Gamma \subset SO(4)$ be a finite subgroup such that $S^3/\Gamma$ is a smooth quotient of $S^3$. If $S^3/\Gamma$ can be smoothly embedded in $S^4$, then either $\Gamma = \{1\}$ or $\Gamma = Q_8$. Here, $Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$ denotes the quaternion group. Convergence of Einstein metrics: -------------------------------- We first review the definition of the energy of metrics on $4$-manifolds.
$(1)$  For a closed Riemannian $4$-manifold $(M, g)$, the [*energy*]{} $\mathscr{E}(g)$ of $g$ (or $(M, g)$) is defined by $$\mathscr{E}(g) := \frac{1}{8\pi^2}\int_M|\mathscr{R}_g|^2d\mu_g,$$ where $\mathscr{R}_g = (R^i_{\ jk\ell})$ denotes the curvature tensor of $g$ and $|\mathscr{R}_g|^2 = \frac{1}{4}R^i_{\ jk\ell}R_i^{\ jk\ell}$. $(2)$  If $(X,h)$ is an [*asymptotically locally Euclidean*]{} $4$-manifold of order $\tau > 0$ (ALE $4$-space for brevity, cf.[@BKN]), the energy $\mathscr{E}(h)$ of $h$ (or $(X, h)$) is again defined by $$\mathscr{E}(h) := \frac{1}{8\pi^2}\int_X|\mathscr{R}_h|^2d\mu_h < \infty.$$ \[ALE\] $(1)$  By the Chern-Gauss-Bonnet formula, $\mathscr{E}(g) = \chi(M)$ for any Einstein metric $g$ on a closed $4$-manifold $M$. Indeed, $$\begin{aligned} \mathscr{E}(g) &= \frac{1}{8\pi^2}\int_M|\mathscr{R}_g|^2d\mu_g = \frac{1}{8\pi^2}\int_M(|W_g|^2 + \frac{1}{24}R_g^2 + \frac{1}{2}|E_g|^2)d\mu_g\\ &= \frac{1}{8\pi^2}\int_M(|W_g|^2 + \frac{1}{24}R_g^2 - \frac{1}{2}|E_g|^2)d\mu_g = \chi(M),\end{aligned}$$ where $W_g = (W^i_{\ jk\ell})$ and $E_g = (E_{ij})$ denote respectively the Weyl tensor and the trace-free Ricci tensor ${\rm Ric}_g - \frac{R_g}{4}g$ of $g$, and $|W_g|^2 = \frac{1}{4}W^i_{\ jk\ell}W_i^{\ jk\ell}$. In particular, $\mathscr{E}(g) = 2$ if $M = S^4$.\ $(2)$  The Chern-Gauss-Bonnet formula for $4$-manifolds with boundary implies the following (cf.[@Hitchin-2 formula(7)]): any Ricci-flat ALE $4$-space $(X, h)$ with end $S^3/\Gamma$ satisfies $$\chi(X) = \mathscr{E}(h) + \frac{1}{|\Gamma|},$$ where $\Gamma$ is a finite subgroup of $O(4)$ acting freely on $\mathbb{R}^4 - \{0\}$ and $|\Gamma|$ is the order of $\Gamma$. If $\chi(X) = 1$, we get, in particular, the following: $$\mathscr{E}(h) = 1 - \frac{1}{|\Gamma|}.$$ $(3)$  Bando-Kasue-Nakajima[@BKN] proved that any Ricci-flat ALE $4$-space $(X, h)$ is an ALE $4$-space of order $4$.
Moreover, when $(X, h)$ is [*asymptotically flat*]{} (AF for brevity, cf.[@Bartnik]), that is, $\Gamma = \{1\}$, this, combined with a result of R. Bartnik[@Bartnik Theorem4.3], implies that the mass of $(X, h)$ is zero. The Positive Mass Theorem[@Sc-1 Theorem4.3] for AF manifolds then implies that $(X, h)$ is isometric to $(\mathbb{R}^4, g_{\mathbb{E}})$. Note that $\mathscr{E}(h) = \mathscr{E}(g_{\mathbb{E}}) = 0$. Recall again that any Einstein metric $g$ on a closed $4$-manifold $M$ satisfies that $Y(M, [g]) = R_g\sqrt{V_g}$. Moreover, if $g$ is a unit-volume Einstein metric with $Y(M, [g]) \geq c\ (c > 0)$, then ${\rm Ric}_g \geq \frac{c}{4}g$. Hence, Myers’ diameter estimate gives $${\rm diam}(M, g) \leq \frac{2\sqrt{3}\pi}{\sqrt{c}}.$$ Using this fact and Remark3.3-(1), we can now state a modified version of the convergence theorem for Einstein metrics due to M. Anderson[@Anderson], H. Nakajima[@Nakajima-1] and Bando-Kasue-Nakajima[@BKN], which is the other of the two key results for the proof of Proposition1.4. \[Key-1\] Let $M$ and $\{g_i\}$ be respectively a closed $4$-manifold and a sequence of unit-volume positive Einstein metrics on $M$ with $Y(M, [g_i]) \geq c$ for a fixed $c > 0$. Then, there exist a subsequence $\{j\} \subset \{i\}$ and a compact Einstein $4$-orbifold $(M_{\infty}, g_{\infty})$ with finitely many singular points $\mathcal{S} = \{p_1, p_2, \cdots, p_{\ell}\} \subset M_{\infty}$ $($possibly empty$)$ and an orbifold structure group $\Gamma_a \subset O(4)$ around $p_a$ for which the following assertions hold$:$\ $(1)$  $(M, g_j)$ converges to $(M_{\infty}, g_{\infty})$ in the Gromov-Hausdorff distance.\ $(2)$  There exists a smooth embedding $\phi_j : M_{\infty} - \mathcal{S} \rightarrow M$ for each $j$ such that $\phi_j^*g_j$ converges to $g_{\infty}$ in the $C^{\infty}$-topology on $M_{\infty} - \mathcal{S}$.
If $\mathcal{S}$ is empty, then each $\phi_j$ is a diffeomorphism from $M_{\infty}$ onto $M$.\ $(3)$  For each $p_a \in \mathcal{S}$ and $j$, there exists $p_{a, j} \in M$ and a positive number $r_j$ such that\ $(3.1)$  $B_{\delta}(p_{a, j}; g_j)$ converges to $B_{\delta}(p_a;g_{\infty})$ in the pointed Gromov-Hausdorff distance for all $\delta > 0$, where $B_{\delta}(p_{a, j}; g_j)$ denotes the geodesic ball of radius $\delta > 0$ centered at $p_{a, j}$ with respect to $g_j$.\ $(3.2)$  $\lim_{j \to \infty}r_j = 0$.\ $(3.3)$  $((M, r_j^{-2}g_j), p_{a, j})$ converges to $((X_a, h_a), x_{a, \infty})$ in the pointed Gromov-Hausdorff distance, where $(X_a, h_a)$ is a complete, non-compact, Ricci-flat, non-flat ALE $4$-space of order $4$ with $$0 < \int_{X_a}|\mathscr{R}_{h_a}|^2d\mu_{h_a} < \infty,$$ and $x_{a, \infty} \in X_a$.\ $(3.4)$  There exist smooth embeddings $\Phi_j : X_a \rightarrow M$ such that $\Phi_j^*(r_j^{-2}g_j)$ converges to $h_a$ in the $C^{\infty}$-topology on $X_a$.\ $(4)$  It holds that $$\lim_{j \to \infty}\int_M|\mathscr{R}_{g_j}|^2d\mu_{g_j} \geq \int_{M_{\infty}}|\mathscr{R}_{g_{\infty}}|^2d\mu_{g_{\infty}} + \sum_a\int_{X_a}|\mathscr{R}_{h_a}|^2d\mu_{h_a}.$$ \[Tree\] Since $S^3/\Gamma_a$ is smoothly embedded in $M_{\infty}$ around $p_a$ for each $a$, it is also smoothly embedded in $M$ and it separates $M$ into two components $V_a, W_a$, which are compact $4$-manifolds with boundary. More precisely, $M = V_a \cup W_a,\ S^3/\Gamma_a = \partial V_a = \partial W_a = V_a \cap W_a$. Here, we choose $V_a$ satisfying $V_a \subset M_{\infty}$. The infinity $X_a(\infty) \cong S^3/\widetilde{\Gamma}_a$ of $X_a$ is also smoothly embedded in $M$. By the existence of intermediate Ricci-flat ALE $4$-orbifolds in the bubbling tree arising from each singular point $p_a$, $\Gamma_a \ne \widetilde{\Gamma}_a$ in general (cf.[@Bando], [@Nakajima-3]).
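Before turning to the proof of Proposition1.4, we record the short computation behind the reformulation of Theorem\[MainThm1\] in terms of the Weyl constant mentioned in $\S$1; this is a sketch that only combines Remark\[ALE\]-(1) with $V_g = 1$ and $Y(S^4, [g]) = R_g$ for a unit-volume Einstein metric $g$ on $S^4$:

```latex
\chi(S^4) = 2 = \frac{1}{8\pi^2}\int_{S^4}\Big( |W_g|^2 + \frac{R_g^2}{24} \Big)d\mu_g
\quad\Longrightarrow\quad
\int_{S^4}|W_g|^2 d\mu_g = 16\pi^2 - \frac{Y(S^4, [g])^2}{24},
% hence the hypothesis of the gap theorem is equivalent to a Weyl bound:
Y(S^4, [g]) > 8\sqrt{2}\pi - \varepsilon_0
\quad\Longleftrightarrow\quad
\int_{S^4}|W_g|^2 d\mu_g
< 16\pi^2 - \frac{(8\sqrt{2}\pi - \varepsilon_0)^2}{24}
= \frac{32}{3}\pi^2 + \frac{\varepsilon_0}{24}\big(16\sqrt{2}\pi - \varepsilon_0\big)
= \frac{32}{3}\pi^2 + \widetilde{\varepsilon}_0.
```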
Proof of Proposition1.4 ======================= Let $\{g_i\}$ be a sequence of positive Einstein metrics on $S^4$ with $\{g_i\} \subset \mathcal{E}_{\geq c}(S^4)$ for some $c > 0$. We apply Theorem\[Key-1\], with $M=S^4$, for the sequence $\{g_i\}$. Then, in order to prove Proposition1.4, by Theorem\[Key-1\]-(2), it is enough to show that the singular set $\mathcal{S}$ is empty. From now on, suppose that $\mathcal{S}$ is non-empty, that is, $\ell \geq 1$. For a reason similar to that in Remark\[Tree\], the case $\Gamma_a = \{1\}$ for some $a\ (1 \leq a \leq \ell)$, and in particular $\Gamma_a = \{1\}$ for all $a = 1, 2, \cdots, \ell$, is logically possible for a general closed $4$-manifold. However, at least in the case of $M = S^4$, the following holds. We use the notation of Theorem\[Key-1\] and Remark\[Tree\]. \[Subkey\] $\Gamma_{a_0} \ne \{1\}$ for some $a_0\ (1 \leq a_0 \leq \ell)$. Suppose that $\Gamma_a = \{1\}$ for all $a = 1, 2, \cdots, \ell$. As mentioned in Remark\[Tree\], a smoothly embedded $S^3$ around $p_1 \in S^4$ separates $S^4$ into two compact $4$-manifolds with boundary $V_1, W_1$ satisfying $$S^4 = V_1 \cup W_1,\quad S^3 = \partial V_1 = \partial W_1 = V_1 \cap W_1,\quad V_1 \subset S^4_{\infty}.$$ By the Mayer-Vietoris exact sequence of homology groups for $(S^4; V_1, W_1)$, one can get $$H_i(V_1; \mathbb{R}) = H_i(W_1; \mathbb{R}) = 0\qquad {\rm for}\ \ i = 1, 2, 3,$$ and hence $\chi(V_1) = \chi(W_1) = 1$. Let $S^4_1 := V_1 \cup_{S^3} \overline{D^4}$ be a closed smooth $4$-manifold obtained by gluing along $S^3 = \partial V_1 = \partial\overline{D^4}$, where $\overline{D^4}$ denotes the closed $4$-ball in $\mathbb{R}^4$. Note that $\chi(S^4_1) = 2$. Similarly, a smoothly embedded $S^3$ around $p_2 \in S^4_1$ separates $S^4_1$ into two components $V'_2, W'_2$. Then, the closed smooth $4$-manifold $S^4_2 := V'_2 \cup_{S^3} \overline{D^4}$ also satisfies that $\chi(S^4_2) = 2$.
Repeating a similar procedure up to $a = \ell$, we finally get a closed smooth $4$-manifold $S^4_{\ell} := V'_{\ell} \cup_{S^3} \overline{D^4}$ with $\chi(S^4_{\ell}) = 2$. By construction, $S^4_{\ell}$ is homeomorphic to $S^4_{\infty}$, which implies that $\chi(S^4_{\infty}) = 2$. By the removable singularities theorem for Einstein metrics [@BKN Theorem5.1], we note that $(S^4_{\infty}, g_{\infty})$ is a closed [*smooth*]{} Einstein $4$-manifold. Combining this with $\chi(S^4_{\infty}) = 2$, we get that $\mathscr{E}(g_{\infty}) = 2$. However, each Ricci-flat ALE $4$-space $(X_a, h_a)$ bubbling out from $p_a$ has a positive energy $\mathscr{E}(h_a) > 0$. This, combined with Theorem\[Key-1\]-(4), leads to a contradiction: $$2 = \lim_{j \to \infty}\mathscr{E}(g_j) \geq \mathscr{E}(g_{\infty}) + \sum_a\mathscr{E}(h_a) > 2.$$ Therefore, $\Gamma_{a_0} \ne \{1\}$ for some $a_0\ (1 \leq a_0 \leq \ell)$. We can now prove Proposition1.4. For simplicity, we assume that $a_0 = 1$. It then follows from Theorem\[Key-2\] that $\Gamma_1 = Q_8$. By Remark\[ALE\]-(3), we also obtain that $\widetilde{\Gamma}_1 = Q_8$. Even if $\widetilde{\Gamma}_a = Q_8$ for some $a$, a similar Mayer-Vietoris argument to that in the proof of Lemma\[Subkey\] still holds, and so $\chi(X_1) = 1$. It then follows from Remark\[ALE\]-(2) that $$\mathscr{E}(h_1) = \chi(X_1) - \frac{1}{|Q_8|} = 1- \frac{1}{8} = \frac{7}{8}.$$ By the signature theorem for compact $4$-orbifolds (cf.[@Nakajima-2 (4.5)]) and the calculation of eta invariant $\eta_S(S^3/\Gamma)$ for the signature operator [@Hitchin-2 Section3], the compact Einstein $4$-orbifold $(S^4_{\infty}, g_{\infty})$ satisfies that $$\tau(S^4_{\infty}) = \frac{1}{12\pi^2}\int_{S^4_{\infty}}\Big{(} |W_{g_{\infty}}^+|^2 - |W_{g_{\infty}}^-|^2 \Big{)}d\mu_{g_{\infty}} - \sum_{a=1}^{\ell}\eta_S(S^3/\Gamma_a),\quad \eta_S(S^3/Q_8) = \frac{3}{4},$$ where $\tau(S^4_{\infty})$ denotes the signature of $S^4_{\infty}$.
Since $H_2(S^4_{\infty}; \mathbb{R}) = 0$, we have that $H^2(S^4_{\infty}; \mathbb{R}) = 0$, and so $\tau(S^4_{\infty}) = 0$. Combining that $R_{g_{\infty}} \geq c > 0$ and $\eta_S(S^3/\Gamma_a) \geq 0$ with the above and Theorem\[Key-1\]-(4), we then obtain that $$\frac{9}{8} = 2 - \frac{7}{8} \geq \mathscr{E}(g_{\infty}) = \frac{1}{8\pi^2}\int_{S^4_{\infty}}\Big{(} |W_{g_{\infty}}|^2 + \frac{R_{g_{\infty}}^2}{24} \Big{)}d\mu_{g_{\infty}} > \frac{1}{8\pi^2}\int_{S^4_{\infty}}|W_{g_{\infty}}|^2d\mu_{g_{\infty}}$$ $$\qquad \quad \geq \frac{1}{8\pi^2}\int_{S^4_{\infty}}|W_{g_{\infty}}^+|^2d\mu_{g_{\infty}} \geq \frac{3}{2}\Big{(} \frac{3}{4} + \frac{1}{12\pi^2}\int_{S^4_{\infty}}|W_{g_{\infty}}^-|^2d\mu_{g_{\infty}} \Big{)} \geq \frac{9}{8},$$ and hence we obtain a contradiction. Therefore, $\mathcal{S} = \emptyset$. As mentioned in the Remark in $\S$1, we now describe some details of the topological argument. \[Counter\] Let $N_2$ be the nonorientable disk bundle over the real projective plane $\mathbb{RP}^2$ with Euler number $2$. Let $T_4$ be the disk bundle of the complex line bundle over $S^2$ of degree $4$. Then, the natural double cover $T_4 \rightarrow N_2$ is the universal cover of $N_2$. Note that $S^4 = N_2 \cup_{\partial N_2}N_2$ and $\partial N_2 = S^3/Q_8$ (see [@Lawson] for details). Note also that $N_2$ is orientable since $N_2$ can be smoothly embedded in $S^4$ as a compact $4$-submanifold. Moreover, we have the following: $$\ H_1(N_2; \mathbb{Z}) = \mathbb{Z}_2,\quad H_i(N_2; \mathbb{Z}) = 0\ \ (i = 2, 3, 4),$$ $$H_2(T_4; \mathbb{Z}) = \mathbb{Z},\quad \ \ H_i(T_4; \mathbb{Z}) = 0\ \ (i = 1, 3, 4).$$ We do not know whether the orientable open $4$-manifold ${\rm Int}(N_2)$ admits a Ricci-flat ALE metric or not. (We have proved here only that such a metric never appears as a Ricci-flat ALE metric bubbling off from a sequence in $\mathcal{E}_{\geq c}(S^4)$.)
However, $N_2$ becomes an orientable counterexample to the topological arguments in the proofs of [@Anderson Lemma 6.3] and [@Anderson-GAFA Proposition 3.10]. [99]{} K. Akutagawa, B. Botvinnik, O. Kobayashi and H. Seshadri, [*The Weyl functional near the Yamabe invariant*]{}, [**J. Geom. Anal. 13**]{} (2003), 1–20. M. Anderson, [*Ricci curvature bounds and Einstein metrics on compact manifolds*]{}, [**J. Amer. Math. Soc. 2**]{} (1989), 455–490. M. Anderson, [*Einstein metrics with prescribed conformal infinity on $4$-manifolds*]{}, [**Geom. Funct. Anal. 18**]{} (2008), 305–366. S. Bando, [*Bubbling out of Einstein manifolds*]{}, [**Tohoku Math. J. 42**]{} (1990), 205–216; Correction and addition, [**Tohoku Math. J. 42**]{} (1990), 587–588. S. Bando, A. Kasue and H. Nakajima, [*On a construction of coordinates at infinity on manifolds with fast curvature decay and maximal volume growth*]{}, [**Invent. Math. 97**]{} (1989), 313–349. R. Bartnik, [*The mass of an asymptotically flat manifold*]{}, [**Comm. Pure Appl. Math. 39**]{} (1986), 661–693. A. Besse, [*Einstein Manifolds*]{}, Springer, 1987. G. Besson, G. Courtois and S. Gallot, [*Entropies et rigidités des espaces localement symétriques de courbure strictement négative*]{}, [**Geom. Funct. Anal. 5**]{} (1995), 731–799. C. Böhm, [*Inhomogeneous Einstein metrics on low-dimensional spheres and other low-dimensional spaces*]{}, [**Invent. Math. 134**]{} (1998), 145–176. J.-P. Bourguignon and H. Karcher, [*Curvature operators: pinching estimates and geometric examples*]{}, [**Ann. Sci. École Norm. Sup. 11**]{} (1978), 71–92. J. S. Crisp and J. A. Hillman, [*Embedding Seifert fibred $3$-manifolds and ${\rm Sol}^3$-manifolds in $4$-space*]{}, [**Proc. London Math. Soc. 76**]{} (1998), 685–710. D. Ebin, [*The manifold of Riemannian metrics*]{}, Global Analysis, [**Proc. Symp. Pure Math. 15**]{} (1968), 11–40. T. Eguchi and A. J. Hanson, [*Self-dual solutions to Euclidean gravity*]{}, [**Ann. of Phys. 120**]{} (1979), 82–106. M. Gursky, [*Four-manifolds with $\delta W^+ = 0$ and Einstein constants on the sphere*]{}, [**Math. Ann. 318**]{} (2000), 417–431. M. Gursky and C. LeBrun, [*On Einstein manifolds of positive sectional curvature*]{}, [**Ann. Global Anal. Geom. 17**]{} (1999), 315–328. N. Hitchin, [*Compact four-dimensional Einstein manifolds*]{}, [**J. Differential Geom. 9**]{} (1974), 435–441. N. Hitchin, [*Einstein metrics and the eta-invariant*]{}, [**Bollettino U. M. I. 11-B**]{} (1997), 92–105. G. R. Jensen, [*Einstein metrics on principal fibre bundles*]{}, [**J. Differential Geom. 8**]{} (1973), 599–614. N. Koiso, [*Non-deformability of Einstein metrics*]{}, [**Osaka J. Math. 15**]{} (1978), 419–433. N. Koiso, [*Einstein metrics and complex structures*]{}, [**Invent. Math. 73**]{} (1983), 71–106. T. Lawson, [*Splitting $S^4$ on $\mathbb{RP}^2$ via the branched cover of $\mathbb{CP}^2$ over $S^4$*]{}, [**Proc. Amer. Math. Soc. 86**]{} (1982), 328–330. C. LeBrun, [*Einstein metrics and Mostow rigidity*]{}, [**Math. Res. Lett. 2**]{} (1995), 1–8. C. LeBrun, [*Four-manifolds without Einstein metrics*]{}, [**Math. Res. Lett. 3**]{} (1996), 133–147. J. Lee and T. Parker, [*The Yamabe problem*]{}, [**Bull. Amer. Math. Soc. 17**]{} (1987), 37–81. H. Nakajima, [*Hausdorff convergence of Einstein $4$-manifolds*]{}, [**J. Fac. Sci. Univ. Tokyo 35**]{} (1988), 411–424. H. Nakajima, [*Self-duality of ALE Ricci-flat $4$-manifolds and positive mass theorem*]{}, in Recent Topics in Differential and Analytic Geometry, [**Advanced Studies in Pure Math. 18-I**]{} (1990), 313–349. H. Nakajima, [*A convergence theorem for Einstein metrics and the ALE spaces*]{}, [**Amer. Math. Soc. Transl. 160**]{} (1994), 79–94. M. Obata, [*The conjectures on conformal transformations of Riemannian manifolds*]{}, [**J. Differential Geom. 6**]{} (1972), 247–258. R. Schoen, [*Variational theory for the total scalar curvature functional for Riemannian metrics and related topics*]{}, Topics in Calculus of Variations, [**Lect. Notes in Math. 1365**]{}, 121–154, Springer, 1989. J. Thorpe, [*Some remarks on the Gauss-Bonnet integral*]{}, [**J. Math. Mech. 18**]{} (1969), 779–786. D. Yang, [*Rigidity of Einstein $4$-manifolds with positive curvature*]{}, [**Invent. Math. 142**]{} (2000), 435–450.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Estimation of 3D human pose from a monocular image has gained considerable attention, as a key step to several human-centric applications. However, the generalizability of human pose estimation models developed using supervision on large-scale in-studio datasets remains questionable, as these models often perform unsatisfactorily on unseen in-the-wild environments. Though weakly-supervised models have been proposed to address this shortcoming, performance of such models relies on the availability of paired supervision on some related tasks, such as 2D pose or multi-view image pairs. In contrast, we propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework[^1], which is not restrained by any paired or unpaired weak supervision. Our pose estimation framework relies on a minimal set of prior knowledge that defines the underlying kinematic 3D structure, such as skeletal joint connectivity information with bone-length ratios in a fixed canonical scale. The proposed model employs three consecutive differentiable transformations, named forward-kinematics, camera-projection and spatial-map transformation. This design not only acts as a suitable bottleneck stimulating effective pose disentanglement, but also yields interpretable latent pose representations, avoiding the training of an explicit latent-embedding-to-pose mapper. Furthermore, devoid of an unstable adversarial setup, we re-utilize the decoder to formalize an energy-based loss, which enables us to learn from in-the-wild videos, beyond laboratory settings. Comprehensive experiments demonstrate our state-of-the-art unsupervised and weakly-supervised pose estimation performance on both Human3.6M and MPI-INF-3DHP datasets. Qualitative results on unseen environments further establish our superior generalization ability.' author: - | Jogendra Nath Kundu[^2], Siddharth Seth, Rahul M V, Mugalodi Rakesh,\ **R. Venkatesh Babu, Anirban Chakraborty\ Indian Institute of Science, Bangalore, India\ [{jogendrak, siddharthseth}@iisc.ac.in, rmvenkat@andrew.cmu.edu, rakeshramesha@gmail.com, {venky, anirban}@iisc.ac.in]{}** bibliography: - 'ms.bib' title: | Kinematic-Structure-Preserved Representation for\ Unsupervised 3D Human Pose Estimation --- \[tab:char\] Introduction ============ Building general intelligent systems, capable of understanding the inherent 3D structure and pose of non-rigid humans from monocular RGB images, remains an elusive goal in the vision community. In recent years, researchers aim to solve this problem by leveraging the advances in two key aspects: a) improved architecture design [@newell2016stacked; @chu2017multi] and b) an increasing collection of diverse annotated samples to fuel the supervised learning paradigm [@VNect_SIGGRAPH2017]. However, obtaining 3D pose ground-truth for non-rigid human bodies is a highly inconvenient process. Available motion capture systems, such as body-worn sensors (IMUs) or multi-camera structure-from-motion (SFM), require careful pre-calibration, and hence are usually operated in a pre-arranged laboratory environment [@ionescu2013human3; @zhang2017martial]. This often restricts diversity in the collected dataset, which in turn hampers the generalization of supervised models trained on such data. For instance, the widely used Human3.6M [@ionescu2013human3] dataset captures 3D pose using 4 fixed cameras (only 4 background scenes), 11 actors (limited apparel variations), and 17 action categories (limited pose diversity). A model trained on this dataset delivers impressive results when tested on samples from the same dataset, but does not generalize to an unknown deployed environment, thereby yielding a non-transferability issue. To deal with this problem, researchers have started exploring innovative techniques to reduce the dependency on annotated real samples. 
Aiming to enhance appearance diversity on known 3D pose samples (CMU-MoCap), synthetic datasets have been proposed by compositing a diverse set of human template foregrounds with random backgrounds [@varol2017learning]. However, models trained on such samples do not generalize to a new motion (e.g. a particular dance form), apparel, or environment much different from the training samples, as a result of the large domain shift. Following a different direction, several recent works propose weakly-supervised approaches [@zhou2017towards], where they consider access to a large-scale dataset with paired supervision on some related tasks other than the task in focus (3D pose estimation). Particularly, they access multiple cues for weak supervision, such as: a) paired 2D ground-truth, b) unpaired 3D ground-truth (3D pose without the corresponding image), c) multi-view image pairs ([Rhodin et al. [-@rhodin2018unsupervised]]{}), d) camera parameters in a multi-view setup, etc. (see Table \[tab:char\] for a detailed analysis). While accessing such weak paired supervision, the general approach is to formalize a self-supervised consistency loop, such as 2D$\rightarrow$3D$\rightarrow$2D [@tung2017adversarial], view-1$\rightarrow$3D$\rightarrow$view-2 [@kocabas2019self], etc. However, the limitations of domain shift still persist as a result of using annotated data (2D ground-truth or multi-view camera extrinsics). To this end, without accessing such paired samples, [@jakab2019learning] proposed to leverage unpaired samples to model the natural distribution of the expected representations (2D or 3D pose) using adversarial learning. Obtaining such samples, however, requires access to a 2D or 3D pose dataset, and hence the learning process is still biased towards the action categories present in that dataset. One cannot expect to have access to any of the above discussed paired or unpaired weak supervisory signals for an unknown deployed environment (e.g. frames of a dance show where the actor is wearing a rare traditional costume). This motivates us to formalize a fully-unsupervised framework for monocular 3D pose estimation, where the pose representation can be adapted to the deployed environment by accessing only the RGB video frames, devoid of dependency on any explicit supervisory signal. **Our contributions.** We propose a novel unsupervised 3D pose estimation framework, relying on a carefully designed kinematic-structure-preservation pipeline. Here, we constrain the latent pose embedding to form an interpretable 3D pose representation, thus avoiding the need for an explicit latent-to-3D-pose mapper. Several recent approaches aim to learn a prior characterizing kinematically plausible 3D human poses using available MoCap datasets ([Kundu et al. ]{}[-@kundu2019bihmp]). In contrast, we plan to utilize minimal kinematic prior information, adhering to the restriction of not using any external unpaired supervision. This involves: a) access to the knowledge of hierarchical limb connectivity, b) a vector of allowed bone-length ratios, and c) a set of 20 synthetically rendered images with diverse background and pose (a minimal dataset with paired supervision to standardize the model towards the intended 2D or 3D pose conventions). The aforementioned prior information is very minimal in comparison to the pose-conditioned limits formalized by ([Akhter et al. ]{}[-@akhter2015pose]), in terms of both dataset size and the parameters associated to define the constraints. In the absence of multi-view or depth information, we infer 3D structure directly from the video samples for the unsupervised 3D pose estimation task. One can easily segment moving objects from a video in the absence of any background (BG) motion. However, this is only applicable to in-studio static camera feeds. Aiming to work on in-the-wild YouTube videos, we formalize separate unsupervised learning schemes for videos with both static and dynamic BG. 
In the absence of background motion, we form pairs of video frames with a rough estimate of the corresponding BG image, following a training scheme to disentangle foreground-apparel and the associated 3D pose. However, in the presence of BG motion, we cannot form such consistent pairs, and thus devise a novel energy-based loss on the disentangled pose and appearance representations. In summary, - We formalize a novel collection of three differentiable transformations, which not only acts as a bottleneck stimulating effective pose disentanglement but also yields interpretable latent pose representations, avoiding the training of an explicit latent-to-pose mapper. - The proposed energy-based loss not only enables us to learn from in-the-wild videos, but also improves generalizability of the model as a result of training on diverse scenarios, without ignoring any individual image sample. - We demonstrate *state-of-the-art* unsupervised and weakly-supervised 3D pose estimation performance on both Human3.6M and MPI-INF-3DHP datasets. Related Works {#sec:related-works} ============= **3D human pose estimation.** There is a plethora of fully-supervised 3D pose estimation works [@fang2018learning; @mehta2017monocular; @VNect_SIGGRAPH2017], where the performance is benchmarked on the same dataset which is used for training. Such approaches do not generalize under even minimal domain shifts beyond the laboratory environment. In the absence of large-scale diverse outdoor datasets with 3D pose annotations, datasets with 2D pose annotations are used as a weak supervisory signal for transfer learning using various 2D-to-3D lifting techniques ([Tung et al. [-@tung2017adversarial]]{}; [Chen et al. [-@chen20173d]]{}; [Ramakrishna et al. [-@ramakrishna2012]]{}). However, these approaches still rely on the availability of 2D pose annotations. Avoiding this, ([Kocabas et al. [-@kocabas2019self]]{}; [Rhodin et al. [-@rhodin2018unsupervised]]{}) proposed to use multi-view correspondence acquired by synchronized cameras. But in such approaches ([Rhodin et al. [-@rhodin2018unsupervised]]{}), the latent pose representation remains uninterpretable and abstract, thereby requiring a substantially large amount of 3D supervision to explicitly train a *latent-to-pose* mapper. We avoid training such an explicit mapping by casting the latent representation itself as the 3D pose coordinates. This is realized as a result of formalizing the geometry-aware bottleneck. **Geometry-aware representations.** To capture the intrinsic structure of objects, the general approach is to disentangle individual factors of variation, such as appearance, camera viewpoint and other pose-related cues, by leveraging inter-instance correspondence. In the literature, we find unsupervised landmark detection techniques [@zhang2018unsupervised] that aim to utilize a relative transformation between a pair of instances of the same object, targeting the 2D pose estimation task. To obtain such pairs, these approaches rely on either of the following two directions, viz. a) frames from a video with an acceptable time-difference [@jakab2018unsupervised], or b) synthetically simulated 2D transformations [@rocco2017convolutional]. However, such techniques fail to capture the 3D structure of the object in the absence of multi-view information. The problem becomes more challenging for deformable 3D skeletal structures as found in diverse human poses. Recently, [@jakab2018unsupervised] proposed an unsupervised 2D landmark estimation method to disentangle pose from appearance using a conditional image generation framework. However, the predicted 2D landmarks do not match the standard human pose key-points, and hence are highly uninterpretable, with some landmarks even lying on the background. Such outputs cannot be used for a consequent task requiring a structurally consistent 2D pose input. 
Defining structural constraints in 2D is highly ill-posed, considering images as projections of the actual 3D world. Acknowledging this, we plan to estimate 3D pose separately with camera parameters, followed by a camera projection to obtain the 2D landmarks. As a result of this inverse-graphics formalization, we have the liberty to impose structural constraints directly on the 3D skeletal representation, where the bone-length and other kinematic constraints can be imposed seamlessly using consistent rules as compared to the corresponding 2D representation. A careful realization of 3D structural constraints not only helps us to obtain interpretable 2D landmarks but also reduces the inherent uncertainty associated with the process of lifting a monocular 2D image to a 3D pose [@chen2019unsupervised], in the absence of any additional supervision such as multi-view or depth cues. ![image](image_final/aaai1_fig1_sid_final.pdf){width="1.0\linewidth"} Approach {#sec:approach} ======== Our aim is to learn a mapping function that can map an RGB image of a human to its 3D pose by accessing minimal kinematic prior information. Motivated by ([Rhodin et al. [-@rhodin2018unsupervised]]{}), we plan to cast it as an unsupervised disentanglement of three different factors: a) foreground (FG) appearance, b) background (BG) appearance, and c) kinematic pose. However, unlike ([Rhodin et al. [-@rhodin2018unsupervised]]{}), in the absence of multi-view pairs, we have access to simple monocular video streams of human actions consisting of both static and dynamic BG. Architecture ------------ As shown in Fig. \[fig:main1\]A, we employ two encoder networks, each with a different architecture, $E_P$ and $E_A$, to extract the local-kinematic parameters $v_k$ (see below) and the FG-appearance $f_a$, respectively, from a given RGB image. Additionally, $E_P$ also outputs 6 camera parameters, denoted by $c$, to obtain coordinates of the camera-projected 2D landmarks, $p_{2D}$. 
One of the major challenges in learning factorized representations [@denton2017unsupervised] is to realize purity among the representations. More concretely, the appearance representation should not embed any pose-related information and vice-versa. To achieve this, we enforce a bottleneck on the pose representation by imposing kinematic-structure-based constraints (in 3D), followed by an inverse-graphics formalization for 3D-to-2D re-projection. This introduces three pre-defined transformations: a) forward-kinematic transformation $\mathcal{T}_{fk}$, b) camera-projection transformation $\mathcal{T}_c$, and c) spatial-map transformation $\mathcal{T}_m$. ### a) Forward kinematic transformation, $\mathcal{T}_{fk}$ Most of the prior 3D pose estimation approaches ([Chen et al. [-@chen2019unsupervised]]{}; [Rhodin et al. [-@rhodin2018unsupervised]]{}) aim to either directly regress joint locations in 3D or the depth associated with the available 2D landmarks. Such approaches do not guarantee validity of the kinematic structure, thus requiring additional loss terms in the optimization pipeline to explicitly impose kinematic constraints such as bone-length and limb-connectivity information [@habibie2019wild]. In contrast, we formalize a view-invariant local-kinematic representation of the 3D skeleton based on the knowledge of skeleton joint connectivity. We define a canonical rule (see Fig. \[fig:main1\]B), by fixing the neck and pelvis joint (along the z-axis, with the pelvis at the origin) and restricting the [trunk to hip-line (line segment connecting the two hip joints) angle]{} to rotate only about the x-axis on the YZ-plane (1-DOF) in the canonical coordinate system $C$ (a Cartesian system defined with the pelvis as origin). Our network regresses one trunk to hip-line angle and 13 unit-vectors (all 3-DOF), which are defined in their respective parent-relative local coordinate systems, $L^{Pa(j)}$, where $Pa(j)$ denotes the parent joint of $j$ in the skeletal kinematic tree. 
Thus, $v_k\in \mathbb{R}^{40}$ ($1+13\times 3$). These predictions are then passed on to the forward-kinematic transformation to obtain the 3D joint coordinates $p_{3D}$ in $C$, $\mathcal{T}_{fk}:v_k\rightarrow p_{3D}$, where $p_{3D}\in \mathbb{R}^{3J}$, with $J$ being the total number of skeleton joints. First, positions of the 3 root joints, $p_{3D}^{(j)}$ for $j$ as left-hip, right-hip and neck, are obtained using the above defined canonical rule after applying the estimate of the [trunk to hip-line angle]{}, $v_k^{(0)}$. Let $\textit{len}^{(j)}$ store the length of the line-segment (in a fixed canonical unit) connecting a joint $j$ with $Pa(j)$. Then, $p_{3D}^{(j)}$ for the rest of the joints is realized using the following recursive equation, $p_{3D}^{(j)} = p_{3D}^{(Pa(j))}+\textit{len}^{(j)}v_k^{(j)}$. See Fig. \[fig:main1\]B (dotted box) for a clearer picture. ### b) Camera-projection transformation, $\mathcal{T}_{c}$ As $p_{3D}$ is designed to be view-invariant, we rely on estimates of the camera extrinsics $c$ (3 angles, each predicted as 2 parameters, the $\sin$ and $\cos$ components), which are used to rotate and translate the camera in the canonical coordinate system $C$, to obtain 2D landmarks of the skeleton (using the rotation and translation matrices, $R_c$ and $T_c$, respectively). Note that these 2D landmarks are expected to register with the corresponding joint locations in the input image. Thus, the 2D landmarks are obtained as $p_{2D}^{(j)} = P(R_c*p_{3D}^{(j)}+T_c)$, where $P$ denotes a fixed perspective camera transformation. ### c) Spatial-map transformation, $\mathcal{T}_{m}$ After obtaining coordinates of the 2D landmarks $p_{2D}\in\mathbb{R}^{2J}$, we aim to effectively aggregate them with the spatial appearance-embedding $f_a$. 
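Before turning to the spatial-map transformation, the recursive forward kinematics and the camera projection described above can be sketched in a few lines of numpy. The kinematic tree below is a toy example (joint count, parent indices and bone lengths are illustrative assumptions, not the paper's 17-joint convention):

```python
import numpy as np

# Toy kinematic tree (illustrative, NOT the paper's 17-joint layout):
# PARENT[j] gives Pa(j); joint 0 is the pelvis, fixed at the origin.
PARENT = [-1, 0, 1, 0, 3]             # pelvis -> spine -> head, pelvis -> hip -> knee
BONE_LEN = [0.0, 0.5, 0.3, 0.2, 0.4]  # canonical-scale lengths len^(j)

def forward_kinematics(unit_vecs):
    """T_fk: recover p3D via the recursion p3D[j] = p3D[Pa(j)] + len[j] * v_k[j]."""
    p3d = np.zeros((len(PARENT), 3))
    for j, pa in enumerate(PARENT):
        if pa < 0:
            continue  # root joint stays at the origin
        v = unit_vecs[j] / np.linalg.norm(unit_vecs[j])  # enforce unit norm
        p3d[j] = p3d[pa] + BONE_LEN[j] * v
    return p3d

def camera_project(p3d, R_c, T_c, f=1.0):
    """T_c: p2D = P(R_c * p3D + T_c) with a fixed perspective camera P."""
    cam = p3d @ R_c.T + T_c              # rotate/translate into the camera frame
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide
```

Because the bone lengths enter only through the recursion, the predicted skeleton satisfies the bone-length ratios by construction, regardless of the regressed unit vectors.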
Thus, we devise a transformation procedure $\mathcal{T}_{m}$ to transform the vectorized 2D coordinates into spatial maps denoted by $f_{2D}\in \mathbb{R}^{H\times W\times \textit{Ch}}$, which are of consistent resolution to $f_a$, $\mathcal{T}_m:p_{2D}\rightarrow f_{2D}$. To effectively encode both joint locations and their connectivity information, we propose to generate two sets of spatial maps, namely a) heat-map, $f_{hm}$ and b) affinity-map, $f_{am}$ (i.e., $f_{2D}:(f_{hm},f_{am})$). Note that the transformations to obtain these spatial maps must be fully differentiable to allow the disentanglement of pose using the cross-pose image-reconstruction loss, computed at the decoder output (discussed in Sec. [3.3a]{}). Keeping this in mind, we implement a novel computational pipeline by formalizing translated and rotated Gaussians to represent both joint positions ($f_{hm}$) and skeleton-limb connectivity ($f_{am}$). We use a constant variance $\sigma$ along both spatial directions to realize the heat-maps for each joint $j$, as $f_{hm}^{(j)}(u) = \exp(-0.5||u-p_{2D}^{(j)}||^2/\sigma^{2})$, where $u:[u_x,u_y]$ denotes the spatial index in a $H\times W$ lattice (see Fig. \[fig:main2\]A). We formalize the following steps to obtain the affinity maps based on the connectivity of joints in the skeletal kinematic tree (see Fig. \[fig:main2\]A). For each limb (line-segment) $l$ with endpoints $p_{2D}^{l(j_1)}$ and $p_{2D}^{l(j_2)}$, we first compute the location of its mid-point, $\mu^{(l)}:[\mu_x^{(l)},\mu_y^{(l)}]$, and slope $\theta^{(l)}$. Following this, we perform an affine transformation to obtain $u^\prime = R_{\theta^{(l)}}*(u-\mu^{(l)})$, where $R_{\theta^{(l)}}$ is the 2D rotation matrix. Let $\sigma_x^{(l)}$ and $\sigma_y^{(l)}$ denote the variance of a Gaussian along both spatial directions representing the limb $l$. We fix $\sigma_y^{(l)}$ from prior knowledge of the limb width. 
Meanwhile, $\sigma_x^{(l)}$ is computed as $\alpha*len(l)$ in the 2D Euclidean space (see Supplementary). Finally, the affinity map is obtained as, $$\begin{aligned} f_{am}^{(l)}(u) = \exp(-0.5||u_x^\prime/\sigma_x^{(l)}||^2-0.5||u_y^\prime/\sigma_y^{(l)}||^2) \end{aligned}$$ $\mathcal{T}_{fk}$, $\mathcal{T}_{c}$ and $\mathcal{T}_{m}$ (collectively denoted as $\mathcal{T}_k$) are designed using perfectly differentiable operations, thus allowing back-propagation of gradients from the loss functions defined at the decoder output. As shown in Fig. \[fig:main1\]A, the decoder takes in a tuple of the spatial-pose-map representation and appearance ($f_{2D}$ and $f_a$ respectively, concatenated along the channel dimension) to reconstruct an RGB image. To effectively disentangle BG information in $f_a$, we fuse the background image $B_t$ towards the end of the decoder architecture, in line with ([Rhodin et al. [-@rhodin2018unsupervised]]{}). Access to minimal prior knowledge --------------------------------- One of the key objectives of this work is to solve the unsupervised pose estimation problem with minimal access to prior knowledge whose acquisition often requires manual annotation or a data collection setup, such as CMU-MoCap. Adhering to this, we restrict the proposed framework from accessing any paired or unpaired data samples, as shown in Table \[tab:char\]. Here, we list the specific prior information that has been considered in the proposed framework: - Kinematic skeletal structure (the joint connectivity information) with bone-length ratios in a fixed canonical scale. Note that [we do not consider access to the kinematic angle limits]{} for the limb joints, as such angles are highly pose-dependent, particularly for diverse human skeleton structures [@akhter2015pose]. - A set of 20 synthetically rendered SMPL models with diverse 3D poses and FG appearance [@varol2017learning]. 
We apply a direct paired supervision loss (denoted by $\mathcal{L}_{prior}$) on these samples to standardize the model towards the intended 2D or 3D pose conventions (see Supplementary). ![image](image_final/aaai1_fig2_sid_final.pdf){width="1.0\linewidth"} Unsupervised training procedure ------------------------------- In contrast to [@jakab2018unsupervised], we aim to disentangle foreground (FG) and background (BG) appearances, along with the disentanglement of pose. In a generalized setup, we also aim to learn from in-the-wild YouTube videos, in contrast to in-studio datasets, avoiding dataset bias. ### Separating paired and unpaired samples. For an efficient disentanglement, we aim to form image tuples of the form $(I_s,I_t, B_t)$. Here, $I_s$ and $I_t$ are video frames which have identical FG-appearance with a nonidentical *kinematic-pose* (pairs formed between frames beyond a certain time-difference). As each video-clip captures the action of an individual in a certain apparel, the *FG-appearance* remains identical among frames from the same video. Here, $B_t$ denotes an estimate of the BG image without the human subject corresponding to the image $I_t$, which is obtained as the median of pixel intensities across a time-window including the frame $I_t$. However, such an estimate of $B_t$ is possible only for scenarios with no camera movement beyond a certain time window to capture enough background evidence (static background with a moving human subject). Given an in-the-wild dataset of videos, we classify temporal clips of a certain duration ($>$5 seconds) into two groups based on the amount of BG motion in that clip. This is obtained by measuring the pixel-wise L2 loss among the frames in a clip, considering that human action covers only 10–20% of the pixels in the full video frame (see Supplementary). 
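The median-based background estimate and the static/dynamic clip split described above can be sketched as follows. The motion threshold value is an illustrative assumption (the paper defers the exact criterion to its supplementary material); it is motivated by the observation that the human occupies only 10–20% of the frame:

```python
import numpy as np

def estimate_background(frames):
    """B_t: per-pixel temporal median over a window of frames (T, H, W, C).
    Valid only for a static camera, where the moving human is the per-pixel outlier."""
    return np.median(frames, axis=0)

def is_static_clip(frames, thresh=0.02):
    """Assign a clip to D_p (static BG) vs D_unp (dynamic BG) using the mean
    pixel-wise squared difference between consecutive frames: a moving person
    on a static BG changes few pixels, camera motion changes most of them."""
    diffs = [np.mean((frames[i + 1] - frames[i]) ** 2)
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs)) < thresh
```

With a static camera and a subject covering a small fraction of pixels, each pixel is background in most frames, so the temporal median recovers $B_t$ without any segmentation step.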
Following this, we realize two disjoint datasets denoted by $\mathcal{D}_{p}=\{(I_s^{(i)},I_t^{(i)},B_t^{(i)})\}_{i=1}^{N}$ and $\mathcal{D}_{unp}=\{(I_s^{(k)}, I_t^{(k)})\}_{k=1}^M$ as sets of tuples with an extractable BG pair (paired) and an un-extractable BG pair (unpaired), respectively. ### a) Training objective for paired samples, $\mathcal{D}_p$ As shown in Fig. \[fig:main1\]A, given a source and target image ($I_s$ and $I_t$), we aim to transfer the pose of $I_t$ ($f_{2D}$) to the FG-appearance extracted from $I_s$ ($f_a$), and the background from $B_t$, to reconstruct $\hat{I}_t$. Here, the FG and BG appearance information cannot leak through the pose representation because of the low-dimensional bottleneck $p_{2D}\in\mathbb{R}^{2J}$. Moreover, the consecutive predefined matrix and spatial-transformation operations further restrict the framework from leaking appearance information through the pose branch, even as low-magnitude signals. Note that the BG of $I_s$ may not register with the BG of $I_t$ when the person moves in the 3D world (even in a fixed camera scenario), as these images are outputs of an off-the-shelf person-detector. As a result of this BG disparity and the explicit presence of the clean spatially-registered background $B_t$, $D_I$ picks up the BG information directly from $B_t$, thereby forcing $f_a$ to solely model the FG-appearance from the apparel-consistent source, $I_s$. Besides this, we also expect to maintain perceptual consistency between $I_t$ and $\hat{I}_t$ through the encoder networks, keeping in mind the later energy-based formalization (next section). Thus, all the network parameters are optimized for the paired samples using the following loss function, $\mathcal{L}_P = |I_t - \hat{I}_t| + \lambda_1|p_{2D}-\hat{p}_{2D}|+\lambda_2|f_a - \hat{f}_a|$. Here, $\hat{p}_{2D} = \mathcal{T}_{k}\circ E_P(\hat{I}_t)$ and $\hat{f}_a = E_A(\hat{I}_t)$. 
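The paired objective $\mathcal{L}_P$ can be sketched as below. The network outputs are placeholders here, and the $\lambda$ values are illustrative assumptions, not the paper's reported settings:

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference, standing in for the |.| terms of L_P."""
    return float(np.mean(np.abs(a - b)))

def paired_loss(I_t, I_t_hat, p2d, p2d_hat, f_a, f_a_hat,
                lam1=10.0, lam2=1.0):  # lambda weights are assumed, not reported
    """L_P = |I_t - I^_t| + lam1*|p2D - p^2D| + lam2*|f_a - f^_a|.
    The pose and appearance terms (computed by re-encoding the reconstruction)
    keep the encoders perceptually consistent, which the later energy-based
    unpaired stage relies on."""
    return l1(I_t, I_t_hat) + lam1 * l1(p2d, p2d_hat) + lam2 * l1(f_a, f_a_hat)
```

In use, `p2d_hat` and `f_a_hat` come from re-encoding the reconstruction $\hat{I}_t$, so a perfect reconstruction with consistent encoders drives all three terms to zero.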
### b) Training objective for unpaired samples, $\mathcal{D}_{unp}$ Although we find a good amount of YouTube videos where human motion (e.g. dance videos) is captured by a tripod-mounted static camera, such videos are mostly limited to indoor environments. However, a diverse set of human actions is captured in outdoor settings (e.g. sports related activities), which usually involves camera motion or dynamic BG. Aiming to learn a general pose representation, instead of ignoring the frames from video-clips with dynamic BG, we plan to formalize a novel direction to adapt the parameters of $E_P$ and $E_A$ even for such diverse scenarios. We hypothesize that the decoder $D_I$ expects the pose and FG-appearance representations in a particular form, satisfying the corresponding input distributions, $P(f_{2D})$ and $P(f_a)$. Here, a reliable estimate of $P(f_{2D})$ and $P(f_a)$ can be achieved solely on samples from $\mathcal{D}_p$ in the presence of paired supervision, avoiding *mode-collapse*. More concretely, the parameters of $D_I$ should not be optimized on samples from $\mathcal{D}_{unp}$ (as shown in Fig. \[fig:main2\]B with a lock sign). Following this, one can treat $D_I$ as analogous to a *critic*, which outputs a reliable prediction (an image of a human with pose from $I_t$, FG-appearance from $I_s$ and BG from $B_t$) only when its inputs $f_{2D}$ and $f_{a}$ satisfy the expected distributions $P(f_{2D})$ and $P(f_a)$, respectively. We plan to leverage this analogy to effectively use the frozen $D_I$ network as an energy function to realize simultaneous adaptation of $E_P$ and $E_A$ for the unpaired samples from $\mathcal{D}_{unp}$. We use $B_r$ to denote a random background image. As shown in Fig. \[fig:main2\]B, here $\tilde{I}_t = D_I(f_{2D}, f_{a}, B_r)$, in the absence of access to a paired image to enforce a direct pixel-wise loss. 
Thus, the parameters of $E_P$ and $E_A$ are optimized for the unpaired samples using the following loss function, $\mathcal{L}_{\textit{UNP}} = |p_{2D} - \tilde{p}_{2D}| + \lambda_2|f_a - \tilde{f}_a|$, where $\tilde{p}_{2D}=\mathcal{T}^{-1} \circ\mathcal{T}_k\circ E_P\circ\mathcal{T}(\tilde{I}_t)$ and $\tilde{f}_a=E_A(\tilde{I}_t)$. Here, $\mathcal{T}$ and $\mathcal{T}^{-1}$ represent a differentiable spatial transformation (such as image flip or in-plane rotation) and its inverse, respectively. We employ this to maintain a consistent representation across spatial transformations. Note that for the flip operation of $p_{2D}$, we also exchange the indices of the joints associated with the left side with those of the right side, and vice-versa. We train on three different loss functions, viz. $\mathcal{L}_{prior}, \mathcal{L}_{P}$, and $\mathcal{L}_{\textit{UNP}}$, at separate iterations, each with a different optimizer. Here, $\mathcal{L}_{prior}$ denotes the supervised loss directly on $p_{3D}$ and $p_{2D}$ for the synthetically rendered images on randomly selected backgrounds, as discussed before. \[tab:protocol2results\] \[tab:mpiinf3dhp\] Experiments =========== In this section, we describe the experimental details, followed by a thorough analysis of the framework for benchmarking on two widely used datasets, Human3.6M and MPI-INF-3DHP. We use ResNet-50 (up to *res4f*) with ImageNet-pretrained parameters as the base pose encoder $E_P$, whereas the appearance encoder is designed separately using 10 convolutional layers. $E_P$ later divides into two parallel branches of fully-connected layers dedicated to $v_k$ and $c$, respectively. We use $J=17$ for all our experiments, as shown in Fig. \[fig:main1\]. The channel-wise aggregation of $f_{am}$ (16 channels) and $f_{hm}$ (17 channels) is passed through two convolutional layers to obtain $f_{2D}$ (128 maps), which is then concatenated with $f_a$ (512 maps) to form the input for $D_I$ (each with 14$\times$14 spatial dimension). 
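The flip case of the spatial-transform consistency used in $\mathcal{L}_{\textit{UNP}}$, including the left/right index exchange, can be sketched as below. The joint index pairs are illustrative assumptions, not the paper's 17-joint convention:

```python
import numpy as np

# Illustrative left/right joint index pairs to exchange under a horizontal flip.
LR_PAIRS = [(1, 2), (3, 4)]  # e.g. (l_hip, r_hip), (l_knee, r_knee) -- assumed layout

def flip_2d(p2d, width=1.0):
    """Inverse flip T^-1 applied to landmarks predicted on a flipped image:
    mirror the x-coordinate, then exchange left/right joint indices so that
    T^-1(T_k(E_P(T(I)))) is directly comparable with the original p2D."""
    out = p2d.copy()
    out[:, 0] = width - out[:, 0]      # mirror x across the image
    for l, r in LR_PAIRS:
        out[[l, r]] = out[[r, l]]      # swap left <-> right joints
    return out
```

Applying the flip twice returns the original landmarks, which is exactly the invertibility $\mathcal{T}^{-1}\circ\mathcal{T} = \mathrm{id}$ that the consistency term relies on.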
Our experiments use a separate AdaGrad optimizer (learning rate 0.001) for each individual loss component in alternate training iterations, thereby avoiding any hyper-parameter tuning. We perform several augmentations (color jittering, mirroring, and in-plane rotation) of the 20 synthetic samples, which are used to provide a direct supervised loss at the intermediate pose representations. **Datasets.** The *base-model* is trained on a mixture of two datasets, Human3.6M and an in-house collection of YouTube videos (also referred to as YTube). In contrast to the in-studio H3.6M dataset, YTube contains human subjects in diverse apparel and BG scenes performing varied forms of motion (usually dance forms such as western, modern, contemporary, etc.). Note that all samples from H3.6M contribute to the paired dataset $\mathcal{D}_p$, whereas $\sim$40% of the samples in YTube contribute to $\mathcal{D}_p$ and the rest to $\mathcal{D}_{unp}$, based on the associated BG-motion criterion. However, as we do not have ground-truth 3D poses for the samples from YTube (an in-the-wild dataset), we use MPI-INF-3DHP (also referred to as 3DHP) to quantitatively benchmark the generalization of the proposed pose estimation framework. ### a) Evaluation on Human3.6M. We evaluate our framework on protocol-II, after performing scaling and rigid alignment of the poses in line with the prior art ([Chen et al. [-@chen2019unsupervised]]{}; [Rhodin et al. [-@rhodin2018unsupervised]]{}). We train three different variants of the proposed framework: a) *Ours(unsup.)*, b) *Ours(semi-sup.)*, and c) *Ours(weakly-sup.)*, as reported in Table \[tab:protocol2results\]. After training the *base-model* on the mixed YTube+H3.6M dataset, we finetune it on the static H3.6M dataset by employing $\mathcal{L}_{prior}$ and $\mathcal{L}_{p}$ (without using any multi-view or pose supervision) and denote this model as *Ours(unsup.)*.
This model is further trained with full supervision on the 2D pose landmarks, simultaneously with $\mathcal{L}_{prior}$ and $\mathcal{L}_{p}$, to obtain *Ours(weakly-sup.)*. Finally, we also train *Ours(unsup.)* with 3D supervision on 5% of the entire trainset, simultaneously with $\mathcal{L}_{prior}$ and $\mathcal{L}_{p}$ (to avoid over-fitting), and denote it as *Ours(semi-sup.)*. As shown in Table \[tab:protocol2results\], *Ours(unsup.)* clearly outperforms the prior art ([Rhodin et al. [-@rhodin2018unsupervised]]{}) by a significant margin (89.4 vs. 98.2) even without leveraging multi-view supervision. Moreover, *Ours(weakly-sup.)* demonstrates state-of-the-art performance against prior weakly supervised approaches. ![image](image_final/aaai1_fig4_sid_final1.pdf){width="1.00\linewidth"} \[fig:viewsyn\] ![image](image_final/aaai1_fig3_sid_final.pdf){width="0.98\linewidth"} \[fig:qualitative\] ### b) Evaluation on MPI-INF-3DHP. We aim to realize a higher level of generalization as a consequence of leveraging rich kinematic prior information. The proposed framework outputs 3D poses that are bounded by the kinematic plausibility constraints even for unseen apparel, BG and action categories. This characteristic is clearly observed when evaluating the performance of our framework on the unseen 3DHP dataset. We take the *Ours(weakly-sup.)* model trained on the YTube+H3.6M dataset to obtain 3D pose predictions on the unseen 3DHP testset (9th row in Table \[tab:mpiinf3dhp\]). We clearly outperform the prior work [@chen2019unsupervised] by a significant margin in a fully-unseen setting (8th and 9th rows with -3DHP in Table \[tab:mpiinf3dhp\]). Furthermore, our weakly supervised model (with 100% 2D pose supervision) achieves state-of-the-art performance against prior approaches at an equal supervision level. \[tab:ablations\] ### c) Ablation study.
In the proposed framework, our major contribution is attributed to the design of differentiable transformations and an innovative way to facilitate the usage of unpaired samples even in the presence of BG motion. Though the effectiveness of camera projection has been studied in certain prior works [@chen2019unsupervised], the forward-kinematic transformation $\mathcal{T}_{fk}$ and the affinity map in the spatial-map transformation $\mathcal{T}_m$ are employed for the first time in such a learning framework. Therefore, we evaluate the importance of both $\mathcal{T}_{fk}$ and $\mathcal{T}_m$ by separately bypassing these modules through neural network transformations. Results in Table \[tab:ablations\] clearly highlight the effectiveness of these carefully designed transformations for the unsupervised 3D pose estimation task. ### d) Qualitative results. Fig. \[fig:viewsyn\] depicts qualitative results derived from *Ours(unsup.)* on the in-studio H3.6M and in-the-wild YTube datasets. It highlights the effectiveness of unsupervised disentanglement through separation or cross-transfer of apparel, pose, camera view and BG for novel image synthesis. Though our focus is to disentangle 3D pose information, separation of apparel and pose transfer are achieved as byproducts of the proposed learning framework. In Fig. \[fig:qualitative\] we show results on the 3D pose estimation task obtained from the *Ours(weakly-sup.)* model. Though we train our model on the H3.6M, 3DHP and YTube datasets, results on the LSP dataset [@johnson2010clustered] are obtained without training on the corresponding train-set, in a fully-unseen setting. Reliable pose estimation on such diverse unseen images highlights the generalization of the learned representations, thereby overcoming the problem of dataset bias. Conclusion ========== We present an unsupervised 3D human pose estimation framework, which relies on a minimal set of prior knowledge regarding the underlying kinematic 3D structure.
The proposed local-kinematic model indirectly enforces a kinematic-plausibility bound on the predicted poses, thereby preventing the model from delivering implausible pose outcomes. Furthermore, our framework is capable of leveraging knowledge from video frames even in the presence of background motion, thus yielding superior generalization to unseen environments. In the future, we would like to extend such frameworks to predicting 3D meshes, by characterizing prior knowledge of human shape alongside pose and appearance. [ **Acknowledgements.** This work was supported by a Wipro PhD Fellowship (Jogendra) and in part by DST, Govt. of India (DST/INT/UK/P-179/2017).]{} [^1]: [<https://sites.google.com/view/ksp-human/>]{} [^2]: equal contribution
--- abstract: 'Convergence of stochastic processes with jumps to diffusion processes is investigated in the case when the limit process has discontinuous coefficients. An example is given in which the diffusion approximation of a queueing model yields a diffusion process with discontinuous diffusion and drift coefficients.' address: - 'School of Mathematics, University of Minnesota, Minneapolis, MN, 55455, USA' - 'Department of Electrical Engineering-Systems, Tel Aviv University, 69978 Tel Aviv, Israel' author: - 'N. V. Krylov' - 'R. Liptser' title: On diffusion approximation with discontinuous coefficients --- Introduction {#sec-1} ============ Suppose that we are given a sequence of semimartingales $(x^n_t)_{t\ge 0}$, $n=1,2,...$, with paths in the Skorokhod space ${\mathcal{D}}= {\mathcal{D}}([0,\infty),{\mathbb{R}}^d)$ of ${\mathbb{R}}^{d}$-valued right-continuous functions on $[0,\infty)$ having left limits on $(0,\infty)$. If one can prove that the sequence of distributions ${\mathbb{Q}}^{n}$ of $x^{n}_{\cdot}$ on ${\mathcal{D}}$ converges weakly to the distribution ${\mathbb{Q}}$ of a diffusion process $(x _t)_{t\ge 0}$, then one says that the sequence $(x^n_t)_{t\ge 0}$ admits a diffusion approximation. In this article by diffusion processes we mean solutions of Itô equations of the form $$x_t=x_0+\int_0^tb(s,x_s)\,ds+\int_0^t\sqrt{a(s,x_s)}\,dw_s,$$ with $w_{t}$ being a vector-valued Wiener process. Usually, to investigate whether in a particular situation there is a diffusion approximation, one uses the general framework of convergence of semimartingales as developed, for instance, in §3, Ch. 8 of [@LS] (also see the references in this book). The problem of diffusion approximation has attracted the attention of many researchers, who have obtained many deep and important results.
The reason for this is that diffusion approximation is a quite efficient tool in stochastic systems theory (see [@Ku'84], [@Ku'90]), in the asymptotic analysis of queueing models under heavy traffic and bottleneck regimes (see [@KL]), in finding asymptotically optimal filters (see [@KuRu], [@LR]), in asymptotic optimization in stochastic control problems (see [@KuRu0], [@LRT]), and in many other issues. In all the above-mentioned references the coefficients $a(t,x)$ and $b(t,x)$ of the limit diffusion process are continuous in $x$. In part, this is dictated by the approach developed in §3, Ch. 8 of [@LS]. On the other hand, there are quite a few situations in which the limit process should have discontinuous coefficients. One such situation is presented in [@FS], where a queueing model is considered. It was not possible to apply standard results there, and the authors only conjectured that the diffusion approximation should be a process with natural coefficients. Later this conjecture was rigorously proved in [@Ch]. In [@Ch] and [@FS] only the drift term is discontinuous. Another example of a limit diffusion with both drift and diffusion coefficients discontinuous is given in the article [@KhasKryl] on the averaging principle for diffusion processes with a null-recurrent fast component. The idea to circumvent the discontinuity of $a$ and $b$ is to try to show that the time spent by $(t,x_{t})$ in the set $G$ of their discontinuity in $x$ is zero. This turns out to be enough if outside of $G$ the “coefficients” of $x^{n}_{t}$ converge “uniformly” to the coefficients of $x_{t}$. By the way, even if all of these hold, still the functionals $$\int_{0}^{t}a(s,y_{s})\,ds,\quad \int_{0}^{t}b(s,y_{s})\,ds,\quad y_{\cdot}\in{\mathcal{D}}$$ need not be continuous on the support of ${\mathbb{Q}}$. This closes the route of “trivially” generalizing the result from §3, Ch. 8 of [@LS].
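The model case of the phenomenon can be simulated directly (a toy illustration; all parameters are chosen for the sketch only): a pure-jump semimartingale with jumps of size $\pm n^{-1/2}$, $n$ of them per unit time, converges in law to a standard Wiener process, i.e. its diffusion approximation has $b=0$ and unit variance per unit time.

```python
import numpy as np

rng = np.random.default_rng(42)

# x^n jumps by +-1/sqrt(n) at n equally spaced times per unit time; as
# n grows, its law on the Skorokhod space approaches that of a
# standard Brownian motion -- the simplest diffusion approximation.
def endpoints(n, T, n_paths):
    jumps = rng.integers(0, 2, size=(n_paths, n * T)) * 2 - 1  # +-1
    return jumps.sum(axis=1) / np.sqrt(n)                      # rescaled

xT = endpoints(n=400, T=1, n_paths=5000)
# Endpoint statistics match the limit x_T ~ N(0, T) with T = 1:
print(round(xT.mean(), 3), round(xT.var(), 3))
```

The interesting situations treated in the paper are exactly those where such a limit exists but its coefficients $a$, $b$ fail to be continuous in $x$.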
To estimate the time spent by $x_{t}$ we use an inequality similar to the following one $$\label{*} E\int_{0}^{T}f(t,x_{t})\,dt \leq N\Bigg(\int_{0}^{T}\int_{\mathbb{R}^d} f^{d+1}(t,x)\,dxdt\Bigg)^{1/(d+1)},$$ which is obtained in [@Kr74] for nonnegative Borel $f$. Then, upon assuming that $G\subset(0,\infty)\times{\mathbb{R}}^{d}$ has $(d+1)$-dimensional Lebesgue measure zero and substituting $I_{G}$ in place of $f$ in (\[\*\]), we get that indeed the time spent by $(t,x_{t})$ in $G$ is zero. However, for (\[\*\]) to hold we need the process $x_{t}$ to be uniformly nondegenerate, which may not be convenient in some applications. Therefore, in Sec. \[section 3.14.1\] we prove a version of (\[\*\]) which allows us to reach the conclusion about the time spent in $G$ assuming that the process is nondegenerate only on $G$. In essence, our approach to diffusion approximation with discontinuous coefficients is close to the one from [@Ch]. However, the details are quite different and we get more general results under less restrictive assumptions. In particular, we do not impose the linear growth condition. Neither do we assume that the second moments of $x^{n}_{0}$ are bounded. The weak limits of processes with jumps appear in many other settings, in particular, in Markov chain approximations in the theory of controlled diffusion processes, where, generally, the coefficients of $x^{n}_{t}$ are not supposed to converge to anything in any sense and yet the processes converge weakly to a process of diffusion type. We mention here that Theorem 5.3 in Ch. 10 of [@KD] also bears on this matter in the particular case of Markov chain approximations in the theory of controlled diffusion processes. Clearly, there is no way to specify precisely the coefficients of all limit points in the general problem.
Still, one can obtain some nontrivial information, and one may wonder if one can get anything from general results when we are additionally given that the coefficients do converge on the major part of the space. In Remarks \[remark 10.13.1\] and \[remark 10.13.2\] we show that this is not the case in what concerns Theorem 5.3 in Ch. 10 of [@KD]. Above we alluded to the “coefficients” of $x^{n}_{t}$. By them we actually mean the local drift and the matrix of quadratic variation. We do not use any additional structure of $x^{n}_{t}$. In particular, the quadratic variation is just the sum of two terms: one coming from the diffusion and another from the jumps. Therefore, unlike [@KP], we do not use any stochastic equations for $x^{n}_{t}$. This allows us to neither introduce nor use any assumptions on the martingales driving these equations and their (usual) coefficients, thus making the presentation simpler and more general. On the other hand, it is worth noting that the methods of [@KP] may be more useful in other problems. Our intention was not to cover all aspects of diffusion approximation but rather to give a new method allowing us to treat discontinuous coefficients. In particular, we do not discuss uniqueness of solutions to the limit equation. This is a separate issue belonging to the theory of diffusion processes and we only mention the article [@KhasKryl], where the reader can find a discussion of it. The paper is organized as follows. In Section \[section 4.14.1\] we prove our main results, Theorems \[theorem 3.8.1\] and \[theorem 4.25.1\], about diffusion approximation. Their proofs rely on the estimate, discussed above, which is proved in Sec. \[section 3.14.1\]. But even if the set $G$ is empty, the results which we prove are the first ones of their kind.
In Theorems \[theorem 3.8.1\] and \[theorem 4.25.1\] there is no assumption about any control of $\sqrt{a(t,x)}$ and $b(t,x)$ as $|x|\to\infty$, but instead we assume that ${\mathbb{Q}}^{n}$ converge weakly to ${\mathbb{Q}}$. Therefore, in Sec. \[section 4.14.2\] we give a sufficient condition for precompactness of a sequence of distributions on the Skorokhod space. Interestingly enough, this condition is different from those which one gets from [@JS] and [@LS] and again does not involve the usual growth conditions. Sec. \[section 4.18.1\] contains an example of an application of our results to a queueing model close to the one from [@Ch], [@FS]. We slightly modify the model from [@Ch], [@FS] and get the diffusion approximation with discontinuous [*drift*]{} and [*diffusion*]{} coefficients. To the best of our knowledge this is the first example in which the diffusion approximation leads to discontinuous diffusion coefficients. The authors are sincerely grateful to the referees for many useful suggestions. The main results {#section 4.14.1} ================ We use notions and notation from [@LS]. For each $n=1,2,...$, let $$(\Omega^{n},{\mathcal{F}}^{n},{\mathcal{F}}^{n}_{t},t\geq0, P^{n})$$ be a stochastic basis satisfying the “usual” assumptions. Let ${\mathcal{D}}$ be the Skorokhod space of right-continuous ${\mathbb{R}}^{d}$-valued functions $x_{t}$ given on $[0,\infty)$ and having left limits on $(0,\infty)$. As usual, we endow ${\mathcal{D}}$ with the Skorokhod–Lindvall metric, in which ${\mathcal{D}}$ becomes a Polish space (see Theorem 2, §1, Ch. 6 of [@LS]). Suppose that for each $n$ on $\Omega^{n}$ we are given an ${\mathcal{F}}^{n}_{t}$-semimartingale $x^{n}_{t}$, $t\geq0$, with trajectories in ${\mathcal{D}}$. Let $(B^{n},C^{n},\nu^{n})$ be the triple of predictable characteristics of $(x^{n}_{t},{\mathcal{F}}^{n}_{t})$ and $\mu^{n}$ be its jump measure (see §1, Ch. 4 of [@LS]).
Then $$x^{n}_{t}=x^{n}_{0}+B^{n}_{t}+x^{nc}_{t}+\int_{0}^{t} \int_{|x|\leq1}x\,(\mu^{n}-\nu^{n})(dsdx)+\int_{0}^{t}\int_{|x|>1} x\,\mu^{n}(dsdx),$$ where $B^{n}_{t}$ is a predictable process of locally bounded variation with $B^{n}_{0}=0$, $x^{nc}_{t}$ is a continuous local martingale with ${\langle}x^{nc}{\rangle}_{t}=C^{n}_{t}$, and $\nu^{n}$ is the compensator of $\mu^{n}$. Define $$m^{n }_{t}=x^{nc}_{t}+\int_{0}^{t}\int_{|x|\leq1} x\,(\mu^{n}-\nu^{n})(dsdx),\quad j^{n}_{t}=\int_{0}^{t}\int_{|x|>1} x\,\mu^{n}(dsdx)$$ so that $m^{n}_{t}$ is a locally square-integrable martingale and $$\label{4.24.2} x^{n}_{t}=x^{n}_{0}+B^{n}_{t}+m^{n}_{t}+j^{n}_{t}.$$ [ \[assumption 3.8.2\] (i) For each $n$ on $(0,\infty)\times{\mathcal{D}}$ we are given an ${\mathbb{R}}^{d}$-valued function $b^{n}=b^{n}(t,y_{\cdot})$ and a $d\times d$ matrix-valued function $a^{n}=a^{n}(t,y_{\cdot})$ which is nonnegative and symmetric for any $t$ and $y_{\cdot}\in{\mathcal{D}}$. The functions $b^{n}$ and $a^{n}$ are Borel measurable. (ii) For each $r\in[0,\infty)$ there exists a locally integrable function $L(r,t)$ given on $[0,\infty)$ such that $L(r,t)$ increases in $r$ and $$\label{4.14.3} |b^{n}(t,y_{\cdot})|+{\mathop{\sf trace}}\,a^{n}(t,y_{\cdot})\leq L(r,t)$$ whenever $t>0$, $y_{\cdot}\in{\mathcal{D}}$, and $|y_{t}|\leq r$. (iii) We have $$B^{n}_{t}=\int_{0}^{t}b^{n}(s,x^{n}_{\cdot})\,ds,\quad {\langle}m^{n}{\rangle}_{t}=2\int_{0}^{t}a^{n}(s,x^{n}_{\cdot})\,ds.$$ ]{} [We have $${\langle}m^{n}{\rangle}^{ij}_{t}= {\langle}x^{nc}{\rangle}^{ij}_{t}+\int_{0}^{t} \int_{|x|\leq1}x^{i}x^{j} \nu^{n}(dsdx)$$ and it follows from Assumption \[assumption 3.8.2\] that both summands on the right are absolutely continuous in $t$. In particular, they are continuous, which along with the continuity of $B^{n}_{t}$ implies that $x^{n}_{t}$ is quasi left-continuous (see Theorem 1, §1, Ch. 4 of [@LS]).
]{} \[assumption 3.8.3\] (i) On $(0,\infty)\times{\mathbb{R}}^{d}$ we are given an ${\mathbb{R}}^{d}$-valued function $b=b(t,x)$ and a $d\times d$ matrix valued function $a=a(t,x)$ which is nonnegative and symmetric for any $t$ and $x$. The functions $b$ and $a$ are Borel measurable. \(ii) There exists a Borel set $G\subset(0,\infty)\times{\mathbb{R}}^{d}$ (perhaps empty) such that, for almost every $t\in(0,\infty)$, for every $x$ lying outside of the $t$-section $G_{t}:=\{x\in{\mathbb{R}}^{d}:(t,x)\in G\}$ of $G$ and any sequence $y^{n}_{\cdot}\in{\mathcal{D}}$, which converges to a continuous function $y_{\cdot}$ satisfying $y _{t}=x$, it holds that $$b^{n}(t,y^{n}_{\cdot})\to b(t,x),\quad a^{n}(t,y^{n}_{\cdot})\to a(t,x).$$ \[remark 3.13.2\] It is easy to see that Assumption \[assumption 3.8.3\] implies that for almost any $t$, the functions $a(t,x)$ and $b(t,x)$ are continuous on the set ${\mathbb{R}}^{d}\setminus G_{t}$ in the relative topology of this set. Also, Assumptions \[assumption 3.8.2\] and \[assumption 3.8.3\] obviously imply that $$|b (t,x)|+{\mathop{\sf trace}}\,a (t,x)\leq L(r,t)$$ for almost every $t\in(0,\infty)$ and all $x$ satisfying $ |x|\leq r$, $x\not\in G_{t}$. \[assumption 3.8.4\] If $G\ne\emptyset$, then for almost each $t$ \(i) the set $ G _{t} $ has Lebesgue measure zero, \(ii) for every $x\in G _{t} $ and each sequence $y^{n}_{\cdot}\in{\mathcal{D}}$, which converges to a continuous function $y_{\cdot}$ satisfying $ y _{t} =x$, we have $$\label{4.19.2} \varliminf_{n\to\infty}\det a^{n}(t,y^{n}_{\cdot}) \geq\delta(t,x)>0,$$ where $\delta$ is a Borel function. [Condition (\[4.19.2\]) is satisfied if, for instance, the processes $x^{n}_{t}$ are uniformly nondegenerate in a neighborhood of $G_{t}$. 
]{} [ \[assumption 3.8.5\] For any $T,\varepsilon\in(0,\infty)$, and any $\alpha\in(0,1]$, it holds that $$\lim_{n\to\infty}P^{n}\big( \nu^{n}\big((0,T]\times B^{c}_{\alpha}))\geq\varepsilon\big)=0,$$ where $B_{\alpha}=\{x\in{\mathbb{R}}^{d}:|x|<\alpha\}$, $B^{c}_{\alpha}=\{x\in{\mathbb{R}}^{d}:|x|\geq \alpha\}$. ]{} \[remark 3.13.1\] Notice that for each $\alpha\in(0,1]$ and $r,T\in[0,\infty)$ $$\theta^{n}_{rT}:=\int_{0}^{T} \int_{|x|\leq1}|x|^{3}I_{|x_{s}|\leq r } \,\nu^{n}(dsdx)\leq\int_{0}^{T}\int_{|x|<\alpha}+ \int_{0}^{T}\int_{|x|\geq\alpha}$$ $$\leq\alpha\int_{0}^{T}\int_{|x|\leq1}|x|^{2}I_{|x_{s}|\leq r } \,\nu^{n}(dsdx)+\nu^{n}\big((0,T]\times B^{c}_{\alpha})),$$ where according to Assumption \[assumption 3.8.2\] the first term on the right is less than $$2\alpha\int_{0}^{T}I_{|x_{s}|\leq r } {\mathop{\sf trace}}\,a^{n}(s,x^{n}_{\cdot}) \,ds\leq 2\alpha\int_{0}^{T}L(r,s)\,ds.$$ It follows easily that, owing to Assumptions \[assumption 3.8.2\] and \[assumption 3.8.5\], for each $\varepsilon>0$ and $r,T\in[0,\infty)$, we have $$\lim_{n\to\infty} P^{n}(\theta^{n}_{rT}\geq\varepsilon\big)=0$$ and since $\theta^{n}_{rT}\leq2\int_{0}^{T}L(r,s)\,ds$, we also have $E^{n}\theta^{n}_{rT}\to0$ as $n\to\infty$, where $E^{n}$ is the expectation sign relative to $P^{n}$. [ \[remark 4.17.1\] Define $$\label{4.17.2} \gamma^{n}=\inf\{t\geq0:|j^{n}_{t}|>1\}.$$ Then $\gamma^{n}$ is an ${\mathcal{F}}^{n}_{t}$-stopping time, and obviously $j^{n}_{t}=0$ for $0\leq t<\gamma^{n}$. Furthermore, by Lemma VI.4.22 of [@JS], Assumption \[assumption 3.8.5\] implies that $$P^{n}(\gamma^{n}\leq T)\to0$$ for each $T\in[0,\infty)$. ]{} \[theorem 3.8.1\] In addition to Assumptions \[assumption 3.8.2\]-\[assumption 3.8.5\], suppose that the sequence of distributions $({\mathbb{Q}}^{n})_{n\geq1}$ of $x^{n}_{\cdot}$ converges weakly on the Polish space ${\mathcal{D}}$ to a measure ${\mathbb{Q}}$. 
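Heuristically, the interplay in Assumption \[assumption 3.8.4\] — null $t$-sections $G_{t}$ together with nondegeneracy near $G$ — is what forces the limit process to spend zero time in $G$. A toy numerical illustration (a simple random walk standing in for a nondegenerate one-dimensional process, with $G=\{0\}$; the step counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Fraction of the first n steps that a simple random walk (a discrete
# stand-in for a nondegenerate diffusion) spends exactly at 0 -- the
# occupation of the null set G = {0}.  It decays like 1/sqrt(n).
def fraction_at_zero(n):
    walk = np.cumsum(rng.integers(0, 2, size=n) * 2 - 1)
    return float(np.mean(walk == 0))

f_coarse = fraction_at_zero(10_000)
f_fine = fraction_at_zero(160_000)
print(f_coarse, f_fine)
```

A degenerate process, by contrast, can sit inside such a set for a positive fraction of time, which is why nondegeneracy on $G$ enters the assumptions.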
Then ${\mathbb{Q}}$ is the distribution of a solution of the Itô equation $$\label{4.18.3} x_{t}=x_{0}+\int_{0}^{t}\sqrt{2a(s,x_{s})}\,dw_{s} +\int_{0}^{t}b(s,x_{s})\,ds$$ defined on a probability space with $w_{t}$ being a $d$-dimensional Wiener process. \[remark 10.13.1\] Notice that there are [*no conditions*]{} on the values of $ a(t,x)$ and $b(t,x)$ on the set $G$. Hence Theorem \[theorem 3.8.1\] holds if we replace $ a,b $ with any other Borel functions, which coincide with the original ones on the complement $\Gamma$ of $G$. Of course, this can only happen if $$\int_{0}^{t}I_{G}(s,x_{s})\, ds=0\quad\text{(a.s.)}.$$ This equality is proved in Lemma \[lemma 4.21.4\]. In particular, $x_{t}$ satisfies $$\label{4.18.30} x_{t}=x_{0}+\int_{0}^{t}I_{\Gamma}(s,x_{s})\sqrt{2a(s,x_{s})}\,dw_{s} +\int_{0}^{t}I_{\Gamma}(s,x_{s})b(s,x_{s})\,ds.$$ Thus, the limit process satisfies (\[4.18.30\]). A particular feature of this equation is that generally its solutions are not unique. Indeed, let $x'_{t}$ be a one-dimensional Wiener process $w_{t}$ and $x''_{t}$ the process identically equal to zero. They both satisfy $dx_{t}=\sqrt{2a(t,x_{t})}\,dw_{t}$, where $a(t,x)=1/2$ for $(t,x)\not\in G$, $a(t,x)=0$ for $(t,x)\in G$, and $G=[0,\infty)\times\{0\}$. Of course, there are many more different solutions which spend some time at zero then follow the trajectories of $w_{t}$ for a while and then again stay at zero. Therefore, the statement that $x_{t}$ has the form $$x_{t}=x_{0}+\int_{0}^{t}\sqrt{2a_{s}}\,dw_{s} +\int_{0}^{t}b_{s}\,ds,$$ where $a_{s}=a(s,x_{s})$ and $b_{s}=b(s,x_{s})$ whenever $(s,x_{s})\not\in G$ and $a$ and $b$ are not specified otherwise (cf. the first part of Theorem 5.3 in Ch. 10 of [@KD]), contains very little information on the process: in the above example both $x'_{t}$ and $x''_{t}$ have this form. 
In contrast with this, in the above example the fact that without changing $x_{t}$ one can change $a,b$ on $G$ in any way, and thus take $a\equiv1/2$, leaves only one possibility: $x_{t}=w_{t}$. \[remark 10.13.2\] From Remark \[remark 10.13.1\] we also see that the assumption that (\[4.18.3\]) has a unique (weak or strong) solution makes no sense unless the values of $ a(t,x)$ and $b(t,x)$ are [*specified everywhere*]{}. In Theorem 5.3 in Ch. 10 of [@KD] an attempt is presented to specify $ a(t,x)$ and $b(t,x)$ on $G$, consisting of requiring that they belong to the set of all possible diffusion and drift coefficients of $x_{t}$ when $x_{t}\in G_{t}$. Generally, the set $x_{t}\in G_{t}$ has zero probability (say, for the Wiener process) and the requirement seems to make little sense. Nevertheless, it is natural to assume that, if $x_{t}=w_{t}$ in the example from Remark \[remark 10.13.1\], then the only possibility for $a(t,0)$ is $1/2$, the same value as for all other $x$. In that case, the equation $dx_{t}=\sqrt{2a(t,x_{t})}\,dw_{t}$ ($=dw_{t}$) with zero initial condition has a unique solution, the distribution of which (by Theorem \[theorem 3.8.1\]) is the weak limit of the distributions of solutions to $dx^{n}_{t}= \sqrt{2a^{n}(x^{n}_{t})}\,dw_{t}$ with zero initial condition, where $a^{n}(x)=1/2$ for $|x|\geq1/n$ and $a^{n}(x)=1/3$ for $|x|<1/n$. However, this fact does not imply that the distributions of any other processes $z^{n}_{\cdot}$ converge to the Wiener measure, provided only that $z^{n}_{t}$ satisfy $z^{n}_{0}=0$ and $dz^{n}_{t}=\sqrt{2c^{n}(z^{n}_{t})}\,dw_{t}$ with $c^{n}(x)=a^{n}(x)$ for $|x|\geq1/n$, $c^{n}\geq0$, and $\sup_{n,x}c^{n}(x)<\infty$. To show this, it suffices to define $c^{n}(x)=n^{2}x^{2}$ for $|x|\leq1/n$ and notice that $z^{n}_{t}\equiv0$ for all $n$. This somewhat contradicts the second part of Theorem 5.3 in Ch. 10 of [@KD].
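The counterexample $c^{n}(x)=n^{2}x^{2}$ can be checked with a short Euler scheme (an illustrative discretization with an arbitrary step size, not part of the argument): since the diffusion coefficient vanishes at the starting point $0$, the scheme — like the true solution $z^{n}_{t}\equiv0$ — never leaves zero, whereas the nondegenerate coefficient $c\equiv1/2$ produces a genuine Brownian path.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler scheme for dz = sqrt(2 c(z)) dw (step size chosen arbitrarily).
def euler(c, z0, dt, n_steps):
    z = np.zeros(n_steps + 1)
    z[0] = z0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        z[k + 1] = z[k] + np.sqrt(2.0 * c(z[k])) * dw
    return z

n = 10
z_deg = euler(lambda x: n ** 2 * x ** 2, 0.0, 1e-3, 1000)  # c_n(x) = n^2 x^2
z_bm = euler(lambda x: 0.5, 0.0, 1e-3, 1000)               # c = 1/2: dz = dw
print(np.abs(z_deg).max(), z_bm.std())
```

The degenerate path is identically zero for every $n$, so the laws of $z^{n}_{\cdot}$ cannot converge to the Wiener measure, exactly as claimed above.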
The proof of Theorem \[theorem 3.8.1\] consists of several steps, throughout which we assume that the conditions of this theorem are satisfied. The idea is to rewrite (\[4.18.3\]) in terms of the martingale problem of Stroock–Varadhan. Then naturally we also want to write the information about $x^{n}_{t}$ in a martingale form not involving stochastic bases and convenient for passing to the limit. This is done in Lemma \[lemma 4.21.1\]. After that we pass to the limit and in Lemma \[lemma 4.21.2\] derive our theorem upon additionally assuming that the time spent by the limit process $(t,x_{t})$ in the set $G$ of possible discontinuities of the coefficients is zero. This additional assumption holds, for instance, if $G=\emptyset$. Lemma \[lemma 4.21.4\] concludes the proof of the theorem. After that, in Theorem \[theorem 4.25.1\] we extend Theorem \[theorem 3.8.1\] to cases in which uniform nondegeneracy of the diffusion on $G_{t}$ is not required. We show the usefulness of Theorem \[theorem 4.25.1\] in Remark \[remark 4.25.1\]. As any probability measure on ${\mathcal{D}}$, the measure ${\mathbb{Q}}$ is the distribution on ${\mathcal{D}}$ of a process $x_{\cdot}$ having trajectories in ${\mathcal{D}}$ and defined on a probability space. By $E$ we denote the expectation sign associated with that probability space. We will see that the theorem holds for this $x_{\cdot}$ up to a possible enlargement of the probability space on which $x_{\cdot}$ lives. In the following lemma Assumptions \[assumption 3.8.3\] and \[assumption 3.8.4\] are not used. By $C^{\infty}_{0}({\mathbb{R}}^{d+1})$ we denote the set of all infinitely differentiable real-valued functions $u=u(t,x)$ on ${\mathbb{R}}^{d+1}$ with compact support.
\[lemma 4.21.1\] For any $0\leq t_{1}\leq...\leq t_{q}\leq s\leq t<\infty$, continuous bounded function $f$ on ${\mathbb{R}}^{qd}$, and $u\in C^{\infty}_{0}({\mathbb{R}}^{d+1})$, we have $$\begin{aligned} &Ef(x _{t_{1}},...,x _{t_{q}}) \big[u(t,x _{t})-u(s,x _{s})\big]&\nonumber \\ &=\lim_{n\to\infty}E^{n}f(x^{n}_{t_{1}},... ,x^{n}_{t_{q}}) \int_{s}^{t}\big[u_{p}(p,x^{n}_{p})&+a^{nij}(p,x^{n}_{\cdot}) u_{x^{i}x^{j}}(p, x^{n}_{p})\nonumber \\ \label{4.21.6} & &+b^{ni}(p,x^{n}_{\cdot})u_{x^{i}}(p,x^{n}_{p})\big]\,dp.\end{aligned}$$ Furthermore, the integrand with respect to $p$ is less than $NL(r,p)$, where the constants $N$ and $r$ depend only on $u$ but not on $\omega$ and $n$. Proof. Denote $$z^{n}_{t}=x^{n}_{t}-j^{n}_{t},$$ and for any process $z_{t}$ on $\Omega^{n}$ denote (whenever it makes sense) $$\begin{aligned} \label{4.17.3} M^{n}_{t}(z_{\cdot}):=u(t,z _{t})-u(0,z_{0}) -\int_{0}^{t}u_{t}(s,z _{s})\,ds- \int_{0}^{t}u_{x^{i}}(s,z _{s})\,dB^{ni}_{s}\nonumber \\ -(1/2)\int_{0}^{t}u_{x^{i}x^{j}}(s,z _{s}) \,d{\langle}m^{n}{\rangle}^{ij}_{s} ,\end{aligned}$$ $$\rho^{n}_{s}(z_{\cdot},x)= u(s,z _{s}+x)- u(s,z _{s } )-x^{i}u_{x^{i}}(s,z _{s}) -(1/2)x^{i}x^{j}u_{x^{i}x^{j}}(s,z _{s}),$$ $$\label{4.17.4} R^{n}_{t}(z_{\cdot})=\int_{0}^{t} \int_{|x|\leq1}\rho^{n}_{s}(z_{\cdot},x)\,\nu^{n}(dsdx).$$ Notice that, by Itô’s formula (see Theorem 1, §3, Ch. 2 of [@LS]) the process $M^{n}_{t}(z^{n}_{\cdot})-R^{n}_{t}(z^{n}_{\cdot})$ is a local ${\mathcal{F}}^{n}_{t}$-martingale. To be more precise Theorem 1, §3, Ch. 
2 of [@LS] says that $$M^{n}_{t}(z^{n}_{\cdot})-R^{n}_{t}(z^{n}_{\cdot}) =\sum_{0<s\leq t} \big[u(s,z^{n}_{s})-u(s,z^{n}_{s-})-u_{x^{i}}(s,z^{n}_{s-}) \Delta z^{ni}_{s}\big]$$ $$- \int_{0}^{t}\int_{|x|\leq1}\big[u(s,z^{n}_{s}+x)- u(s,z^{n}_{s } )-x^{i}u_{x^{i}}(s,z^{n}_{s})\big]\, \nu^{n}(dsdx)$$ $$+\int_{0}^{t}u_{x^{i}}(s,x^{n}_{s-})\,dm^{ni}_{s}.$$ Here the last term is a local martingale, as is any stochastic integral with respect to a local martingale, and the sum of the remaining terms equals $$\int_{0}^{t}\int_{|x|\leq1}\big[u(s,z^{n}_{s-}+x)- u(s,z^{n}_{s-} )-x^{i}u_{x^{i}}(s,z^{n}_{s-})\big]\, \bar{\mu}(dsdx)$$ which is the stochastic integral with respect to the martingale measure $\bar{\mu}=\mu-\nu$ and thus also is a local martingale. Take the ${\mathcal{F}}^{n}_{t}$-stopping time $\gamma^{n}$ introduced in (\[4.17.2\]). Then $$M^{n}_{t \wedge\gamma^{n}}(z^{n}_{\cdot}) -R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})$$ is again a local martingale. It turns out that, for each $T\in[0,\infty)$, the trajectories of $ M^{n}_{t \wedge\gamma^{n}}(z^{n}_{\cdot})$, $t\in[0,T]$, are bounded, even uniformly in $n$. Indeed, let $r$ be such that $u(t,x)=0$ for $|x|\geq r$. Notice that $z^{n}_{t}=x^{n}_{t}$ for $0\leq t< \gamma^{n}$. Then we find $$\int_{0}^{t\wedge\gamma^{n}} u_{x^{i}}(s,z^{n}_{s })\,dB^{ni}_{s} = \int_{0}^{t\wedge\gamma^{n}} u_{x^{i}}(s,x^{n}_{s })b^{ni}(s,x^{n}_{\cdot}) \,ds,$$ where $$|u_{x^{i}}(s,x^{n}_{s })b^{ni}(s,x^{n}_{\cdot})|=0$$ if $|x_{s}|\geq r$ (since $u(t,x)=0$ for $|x|\geq r$) and $$|u_{x^{i}}(s,x^{n}_{s })b^{ni}(s,x^{n}_{\cdot})| \leq L(r,s) \sup_{s,x}|u_{x}(s,x)|$$ if $|x_{s}|\leq r$ (see Assumption \[assumption 3.8.2\]). Therefore, $$\big|\int_{0}^{t\wedge\gamma^{n}} u_{x^{i}}(s,z^{n}_{s })\,dB^{ni}_{s}\big| \leq\sup_{s,x}|u_{x}(s,x)|\int_{0}^{t}L(r,s)\,ds.$$ Similarly one treats the integrals with respect to ${\langle}m^{n}{\rangle}^{ij}_{s}$.
As long as $R^{n}_{t }(z^{n}_{\cdot})$ is concerned we notice that, for $|x|\leq1$ and $0\leq t< \gamma^{n}$, we have $$|\rho^{n}_{s}(z^{n}_{\cdot},x)|\leq N|x|^{3}I_{|z^{n}_{s}|\leq r+1} =N|x|^{3}I_{|x^{n}_{s}|\leq r+1},$$ where the constant $N$ can be expressed in terms of the third-order derivatives of $u$ only. Therefore, $$|R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})| \leq N\theta_{r+1,T}^{n},$$ where $\theta_{r ,T}^{n}$ is introduced in Remark \[remark 3.13.1\]. By this remark for any $t$ we have $E|R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})|\to 0$. It follows that $E^{n}|R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})|<\infty$, so that the local martingale $M^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot}) -R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})$ is in fact a martingale. Hence, $$E^{n}f(x^{n}_{t_{1}},...,x^{n}_{t_{m}}) \big[M^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot}) -R^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})- (M^{n}_{s\wedge\gamma^{n}}(z^{n}_{\cdot}) -R^{n}_{s\wedge\gamma^{n}}(z^{n}_{\cdot})) \big]=0.$$ Since $E^{n}|R^{n}_{t\wedge\gamma^{n}} (z^{n}_{\cdot})|\to0$, we also have $$\lim_{n\to\infty}E^{n}f(x^{n}_{t_{1}},...,x^{n}_{t_{q}}) \big[M^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot}) - M^{n}_{s\wedge\gamma^{n}}(z^{n}_{\cdot}) \big]=0 .$$ Furthermore, due to Remark \[remark 4.17.1\], $P(\gamma^{n}\leq T)\to0$ as $n\to\infty$ for each $T\in[0,\infty)$. 
In light of this fact and by virtue of the uniform boundedness of $M^{n}_{.\wedge\gamma^{n}}(z^{n}_{\cdot})$, we obtain $$\label{4.16.2} \lim_{n\to\infty}E^{n} \big|M^{n}_{t\wedge\gamma^{n}}(z^{n}_{\cdot})- M^{n}_{s\wedge\gamma^{n}}(z^{n}_{\cdot})\big| I_{\gamma_{n}\leq t}=0,$$ so that $$\lim_{n\to\infty}E^{n}f(x^{n}_{t_{1}},...,x^{n}_{t_{q}}) \big[M^{n}_{t }(z^{n}_{\cdot}) - M^{n}_{s }(z^{n}_{\cdot}) \big]I_{t<\gamma^{n}}=0.$$ In addition, obviously, $M^{n}_{t }(z^{n}_{\cdot})= M^{n}_{t }(x^{n}_{\cdot})$ for $t<\gamma^{n}$ and in the same way as above one can prove that the trajectories of $M^{n}_{t }(x^{n}_{\cdot})$, $t\in[0,T]$, are uniformly bounded in $n$ for each $T$. It follows that (\[4.16.2\]) holds with $t,s,x^{n}_{\cdot}$ in place of $t\wedge\gamma^{n}$, $s\wedge\gamma^{n}$, $z^{n}_{\cdot}$, respectively. Thus, $$\lim_{n\to\infty}E^{n}f(x^{n}_{t_{1}},...,x^{n}_{t_{q}}) \big[M^{n}_{t }(x^{n}_{\cdot}) - M^{n}_{s }(x^{n}_{\cdot}) \big] =0$$ which is rewritten as (\[4.21.6\]). The asserted boundedness of the integrand in (\[4.21.6\]) follows easily from the above argument. The lemma is proved. After we have exploited stochastic bases $(\Omega^{n},{\mathcal{F}}^{n}, {\mathcal{F}}^{n}_{t},t\geq0,P^{n})$, we will pass to processes defined on the same probability space. We are going to rely upon two facts. First we know from Theorem 1, §5, Ch. 6 of [@LS] that, owing to Assumption \[assumption 3.8.5\], ${\mathbb{Q}}$ is concentrated on the space of [*continuous*]{} ${\mathbb{R}}^{d}$-valued functions defined on $[0,\infty)$. Second, remember that if $y^{n}_{\cdot}\to y_{\cdot}$ in ${\mathcal{D}}$ and $y_{\cdot}$ is continuous, then $|y^{n}_{\cdot}-y_{\cdot}|^{*}_{t}\to0$ for any $t<\infty$, where $$y^{*}_{t}:=\sup_{r\leq t}|y_{r}|.$$ Owing to these facts and Skorokhod’s embedding theorem (see §6, Ch. 
1 of [@Sk]), we may assume that all the processes $x^{n}_{\cdot}$, $n =1,2,...$, are given on the same probability space and there is a continuous process $x_{t}$ such that (a.s.) $$\label{4.12.2} \lim_{n\to\infty}\sup_{t\leq T}|x^{n}_{t}-x_{t}|=0\quad \forall T\in[0,\infty).$$ \[lemma 4.21.2\] Assume that for any $T$ $$\label{3.13.1} E\int_{0}^{T}I_{G}(t,x_{t})\,dt=0,$$ which is certainly true if $G=\emptyset$. Then the assertion of Theorem \[theorem 3.8.1\] holds. Proof. As explained before the lemma we can write $E$ in place of $E^{n}$ in (\[4.21.6\]). Then we insert $I_{x_{p}\not\in G_{p}}$, which is harmless due to (\[3.13.1\]), in the integral in (\[4.21.6\]) (notice $x_{p}$ and not $x^{n}_{p}$). Furthermore, we remember the last assertion of Lemma \[lemma 4.21.1\] and use Assumption \[assumption 3.8.3\], (\[4.12.2\]), and the dominated convergence theorem to conclude that the limit in (\[4.21.6\]) equals $$\begin{aligned} \label{3.14.3} Ef(x_{t_{1}},...,x_{t_{q}}) \int_{s}^{t}I_{x_{p}\not\in G_{p}} \big[ a^{ ij}(p,x _{p})u_{x^{i}x^{j}}(p,x _{p})\nonumber \\ +b^{ i}(p,x _{p})u_{x^{i}}(p,x _{p})+u_{p}(p,x _{p})\big]\,dp.\end{aligned}$$ By using (\[3.13.1\]) again, we obtain that $$E f(x _{t_{1}},...,x _{t_{q}}) \big[u(t,x _{t})-u(s,x _{s})\big]$$ $$= E f(x _{t_{1}},...,x _{t_{q}}) \int_{s}^{t}\big[u_{p}(p,x_{p})+ a^{ ij}(p,x _{p}) u_{x^{i}x^{j}}(p, x _{p})$$ $$+b^{ i}(p,x _{p})u_{x^{i}}(p,x _{p})\big]\,dp,$$ for any bounded continuous $f$ and $t_{i}\leq s\leq t$. The latter just amounts to saying that the process $$u(t,x_{t})-\int_{0}^{t}\big[u_{s}(s,x_{s})+ a^{ij}(s,x_{s})u_{x^{i}x^{j}}(s,x_{s}) +b^{i}(s,x_{s})u_{x^{i}}(s,x_{s})\big]\,ds$$ is an ${\mathcal{F}}^{x}_{t}$-martingale, where ${\mathcal{F}}^{x}_{t}$ is the $\sigma$-field generated by $x_{s}$, $s\leq t$. It only remains to remember the Lévy-Doob-Stroock-Varadhan characterization theorem (see, for instance, Sec. 4.5 in [@SV] or Secs. 2.6 and 2.7 in [@IW]). The lemma is proved. 
[ \[remark 4.21.3\] In the general case the above proof and Fatou’s theorem show that, if $f$ [*is nonnegative*]{}, then $$E f(x _{t_{1}},...,x _{t_{q}}) \big[u(t,x _{t})-u(s,x _{s})\big]$$ $$\leq E f(x _{t_{1}},...,x _{t_{q}}) \int_{s}^{t}I_{x_{p}\not\in G_{p}}\big[u_{p} (p,x_{p})+ a^{ ij}(p,x _{p}) u_{x^{i}x^{j}}(p, x _{p})$$ $$\label{4.21.7} +b^{ i}(p,x _{p})u_{x^{i}}(p,x _{p})\big]\,dp +I,$$ where $$\begin{aligned} \label{4.20.1} I=Ef(x_{t_{1}},...,x_{t_{q}}) \int_{s}^{t}I_{x_{p}\in G_{p}}\varlimsup_{n\to\infty} \big[ a^{nij} (p,x^{n}_{\cdot})u_{x^{i}x^{j}}(p,x^{n}_{p})\nonumber \\ +b^{ni}(p,x^{n}_{\cdot})u_{x^{i}}(p,x^{n}_{p}) +u_{p}(p,x^{n}_{p})\big]\,dp.\end{aligned}$$ ]{} In the following lemma we finish proving Theorem \[theorem 3.8.1\]. At this moment we take Theorem \[theorem 3.14.1\] for granted. \[lemma 4.21.4\] Equation (\[3.13.1\]) holds and hence, by Lemma \[lemma 4.21.2\], Theorem \[theorem 3.8.1\] holds true as well. Proof. First, we estimate the $\varlimsup$ in (\[4.20.1\]). Fix $\omega$ and almost any $p$ for which (\[4.19.2\]) holds with $p$ in place of $t$ and $x_{p}(\omega)\in G_{p}$. Then we can replace $\varlimsup_{n\to\infty}$ with $\lim\limits_{n'\to\infty}$, where $n'$ is an appropriate sequence tending to infinity. By extracting further subsequences when necessary we may assume that $a^{n'}(p,x^{n'}_{\cdot})$ and $b^{n'}(p,x^{n'}_{\cdot})$ converge to some $\bar{a}$ and $\bar{b}$. Since $x_{p}\in G_{p}$ and $|x^{n}_{\cdot}-x_{\cdot}|^{*}_{p}\to0$, (\[4.19.2\]) implies that $\det\bar{a}\geq\delta(p,x_{p})$. In addition, $$|\bar{b}|+{\mathop{\sf trace}}\,\bar{a} \leq L(|x_{p}|+1,p)$$ due to Assumption \[assumption 3.8.2\]. Combined with $\det\bar{a}\geq\delta(p,x_{p})$ this yields $$\bar{a}^{ij}\lambda^{i}\lambda^{j}\geq\delta(p,x_{p}) L^{-(d-1)}(|x_{p}|+1,p) |\lambda|^{2}=: \bar{\delta}(p,x_{p})|\lambda|^{2}\geq \tilde{\delta}(p,x_{p})|\lambda|^{2}$$ for all $\lambda\in{\mathbb{R}}^{d}$, where $\tilde{\delta} =I_{G}\bar{\delta}$. 
Now by replacing $\delta$ with $\tilde{\delta}$ and both $K(r,t)$ and $L(r,t)$ with $L(r+1,t)$ in Sec. \[section 3.14.1\], we conclude that $$\varlimsup_{n\to\infty} \big[a^{nij}(p,x^{n}_{\cdot})u_{x^{i}x^{j}}(p,x^{n}_{p}) +b^{ni}(p,x^{n}_{\cdot})u_{x^{i}}(p,x^{n}_{p}) +u_{p}(p,x^{n}_{p})\big]$$ $$\leq u_{p}(p,x_{p})+F(p,x _{p},u_{x x }(p,x_{p})) + L(|x _{p}|+1,p)|u_{x}(p,x_{p})|.$$ Furthermore, Remark \[remark 3.13.2\] shows that the same estimate holds for the expression in brackets in (\[3.14.3\]), so that according to (\[4.21.7\]) $$Ef(x_{t_{1}},...,x_{t_{q}}) \big[u(t,x_{t})-u(s,x_{s})\big] \leq Ef(x_{t_{1}},...,x_{t_{q}})\int_{s}^{t}\big[ u_{p}(p,x_{p})$$ $$+F(p,x _{p},u_{x x }(p,x_{p})) + L(|x _{p}|+1,p)|u_{x}(p,x_{p})|\big]\,dp$$ if $f\geq0$. Hence the process $$u(t,x_{t})-\int_{0}^{t}\big[ u_{s}(s,x_{s}) +F(s,x _{s},u_{x x }(s,x_{s})) + L(|x _{s}|+1,s)|u_{x}(s,x_{s})|\big]\,ds$$ is a supermartingale and by Theorem \[theorem 3.14.1\] estimate (\[2.26.2\]) holds. If we take there $f=I_{G}$ and remember that the Lebesgue measure of $G$ is zero and $\bar{\delta}(t,x)>0$ on $G_{t}$ for almost all $t$, then we come to (\[3.13.1\]) with $T\wedge\tau_{r}$ in place of $T$. Upon letting $r\to\infty$ we finally obtain (\[3.13.1\]) as is. The lemma is proved. The following theorem is used in Remark \[remark 4.25.1\]. Its proof is obtained by changing variables. We introduce an assumption different from Assumption \[assumption 3.8.4\]. \[assumption 4.25.2\] If $G\ne\emptyset$, then $G=\bigcup_{m=1}^{\infty}G^{m}$, where $G^{m}$ are Borel sets. For each $m$, we are given an integer $d_{m}\geq1$, a nonnegative Borel function $\delta_{m} $ defined on $(0,\infty) \times{\mathbb{R}}^{d_{m}}$, and a continuous ${\mathbb{R}}^{d_{m}}$-valued function $v^{m}(t,x)=(v^{m1}(t,x), ...,v^{md_{m}}(t,x))$ defined on $[0,\infty)\times{\mathbb{R}}^{d}$ and having there continuous in $(t,x)$ derivatives $v^{mi}_{t},v^{mi}_{x},v^{mi}_{xx}$.
For each $m$ and almost every $t\in(0,\infty)$, \(i) the set $v^{m}(t,G^{m}_{t})$ has $d_{m}$-dimensional Lebesgue measure zero, \(ii) for every $x\in v^{m}(t,G^{m}_{t})$ and each sequence $y^{n}_{\cdot}\in{\mathcal{D}}$, which converges to a continuous function $y_{\cdot}$ satisfying $v^{m}(t,y _{t})=x$, we have $$\label{4.25.1} \varliminf_{n\to\infty}\det V^{mn}(t,y^{n}_{\cdot}) \geq\delta_{m}(t,x)>0,$$ where the matrix $V^{mn}(t,y _{\cdot})$ is defined according to $$V^{mn}_{ij}(t,y _{\cdot})=v^{mi}_{x^{k}}(t,y_{t})v^{mj}_{x^{r}} (t,y_{t})a^{nkr}(t,y_{\cdot}),\quad i,j=1,...,d_{m}.$$ [Assumption \[assumption 3.8.4\] is stronger than Assumption \[assumption 4.25.2\]. Indeed, if the former is satisfied, one can take $G^{m}=G$, $\delta_{m}(t,x)=\delta(t,x)$, $d_{m}=d$, and $v^{mi}=x^{i}$, $i=1,...,d$, in which case $\det\,V^{mn}=\det\,a^{n}$. ]{} Another case is when again everything is independent of $m$, but $d_{m}=1$ and $v(t,x)=x^{1}$. Then condition (\[4.25.1\]) becomes $$\varliminf_{n\to\infty} a^{n11}(t,y^{n}_{\cdot}) \geq\delta(t,x)>0,$$ which is much weaker than (\[4.19.2\]). However, in this case in order to satisfy requirement (i) of Assumption \[assumption 4.25.2\] we need to assume that $G_{t}$ lies in a hyperplane orthogonal to the first coordinate axis. \[remark 4.19.4\] Assume that $G=\bigcup_{m=1}^{\infty}G^{m}$, where $G^{m}_{t}$ are independent of $t$ and are hyperplanes $G^{m}_{t}=\{x:(x,\alpha_{m})=\beta_{m}\}$ with certain $\alpha_{m}\in{\mathbb{R}}^{d}$ and $\beta_{m}\in{\mathbb{R}}$ satisfying $|\alpha_{m}|=1$. Assume that we are given Borel nonnegative functions $\delta_{m}(t,x)$, $x\in{\mathbb{R}}$.
Finally, assume that for every $m\geq1,t>0$, $x\in{\mathbb{R}}^{d}$ such that $$(x,\alpha_{m})=\beta_{m},$$ and each sequence $y^{n}_{\cdot}\in{\mathcal{D}}$, which converges to a continuous function $y_{\cdot}$ satisfying $y _{t}=x$, we have $$\varliminf_{n\to\infty} a^{nij}(t,y^{n}_{\cdot}) \alpha^{i}_{m}\alpha^{j}_{m} \geq\delta_{m}(t,\beta_{m})>0.$$ Then it turns out that Assumption \[assumption 4.25.2\] is satisfied. To show this, it suffices to take $d_{m}=1$ and $v^{m}(t,x)= (x,\alpha_{m})$ and notice that the image of $G^{m}_{t}$ under the mapping $v^{m}(t,\cdot) :G^{m}_{t}\to{\mathbb{R}}$ is just one point $\beta_{m}$. We will use this fact in Sec. \[section 4.18.1\]. [Generally, condition (\[4.25.1\]) is aimed at situations in which $x^{n}_{t}$ in the limit may degenerate in some directions but not along all those which are transversal to $G_{t}$. ]{} \[theorem 4.25.1\] Suppose that Assumptions \[assumption 3.8.2\], \[assumption 3.8.3\], \[assumption 3.8.5\], and \[assumption 4.25.2\] are satisfied and the sequence of distributions $({\mathbb{Q}}^{n})_{n\geq1}$ of $x^{n}_{\cdot}$ converges weakly on ${\mathcal{D}}$ to a measure ${\mathbb{Q}}$. Then the assertion of Theorem \[theorem 3.8.1\] holds true again. Proof. We mimic the argument from the proof of Lemma \[lemma 4.21.4\] to show that (\[3.13.1\]) holds if Assumption \[assumption 4.25.2\] rather than Assumption \[assumption 3.8.4\] is satisfied. The main idea is to change variables according to the mappings $v^{m}$. It suffices to prove that, for each $m$, equation (\[3.13.1\]) holds with $G^{m}$ in place of $G$. Furthermore, without loss of generality we may assume that each set $G^{m}$ is bounded, since otherwise we could split each of them into the union of bounded sets and consider them as new $G^{m}$’s. We fix $m,T$, and $R$ and assume that $G^{m}\subset[0,T]\times B_{R}$.
Then the behavior of $v^{m}(t,x)$ for large $|x|$ becomes irrelevant and, changing $v^{m}$ outside of $[0,T]\times B_{R}$ if necessary, we assume that $$\label{4.21.2} v^{m}(t,x) =e_{1}|x|$$ for $(t,x)\not\in[0,2T]\times B_{2R}$, where $e_{1}$ is the first basis vector in ${\mathbb{R}}^{d_{m}}$. It follows that there is a constant $N_{0}<\infty$ such that $$\label{4.21.1} |v^{m}_{x}(t,x)|+|v^{m}_{x^{i}x^{j}}(t,x)| +|v^{m }_{t}(t,x)|\leq N_{0}\quad\forall t,x.$$ It also follows that, for any $r\geq0$, $$\label{4.21.4} |v^{m}(t,x)|\leq r\Longrightarrow |x|\leq 2R+r.$$ After that we go back to Lemma \[lemma 4.21.1\] and take there $$u(t,x)=w(t,v^{m}(t,x)),$$ with $w$ being a function of class $C^{\infty}_{0}({\mathbb{R}}^{d_{m}+1})$. By the way, our stipulation (\[4.21.2\]) about the behavior of $v^{m}$ for large $|x|$ yields that $u$, along with $u_{t},u_{x},u_{xx}$, is continuous with compact support in ${\mathbb{R}}^{d+1}$. We also take the function $f$ in the form $$f(y_{1},...,y_{q})= g(v^{m}(t_{1},y_{1}),...,v^{m}(t_{q},y_{q})),$$ where $y_{i}\in{\mathbb{R}}^{d}$ and $g$ is a continuous bounded [*nonnegative*]{} function on ${\mathbb{R}}^{qd_{m}}$.
Finally, we define $$\tilde{x}^{n}_{t}=v^{m}(t,x^{n}_{t}),\quad \tilde{x} _{t}=v^{m}(t,x _{t}).$$ Notice that $$a^{nij}(p,x^{n}_{\cdot})u_{x^{i}x^{j}}(p,x^{n}_{p}) +b^{ni}(p,x^{n}_{\cdot})u_{x^{i}}(p,x^{n}_{p}) +u_{p}(p,x^{n}_{p})$$ $$=\tilde{a}^{nkr}(p,x^{n}_{\cdot})w_{x^{k}x^{r}} (p,\tilde{x}^{n}_{p}) +\tilde{b}^{nk}(p,x^{n}_{\cdot})w_{x^{k}}(p, \tilde{x}^{n}_{p}) +w_{p}(p,\tilde{x}^{n}_{p}),$$ where, for $y_{\cdot}\in{\mathcal{D}}$, $$\tilde{a}^{nkr}(p,y_{\cdot})=a^{nij} (p,y_{\cdot})v^{mk}_{x^{i}} (p,y_{p})v^{mr}_{x^{j}}(p,y_{p}),$$ $$\tilde{b}^{nk}(p,y_{\cdot})=a^{nij}(p,y_{\cdot}) v^{mk}_{x^{i}x^{j}} (p,y_{p})+b^{ni}(p,y_{\cdot})v^{mk}_{x^{i}} (p,y_{p})+v^{mk}_{p}(p,y_{p}).$$ Then on the basis of Fatou’s theorem and Lemma \[lemma 4.21.1\] we get $$E f(x _{t_{1}},...,x _{t_{q}}) \big[u(t,x _{t})-u(s,x _{s})\big]$$ $$\leq E f(x _{t_{1}},...,x _{t_{q}}) \int_{s}^{t}\varlimsup_{n\to\infty} \big[u_{p}(p,x _{p})+a^{nij}(p,x^{n}_{\cdot}) u_{x^{i}x^{j}}(p,x^{n}_{p})$$ $$+b^{ni}(p,x^{n}_{\cdot})u_{x^{i}}(p,x^{n}_{p})\big]\,dp$$ $$=Eg(\tilde{x}_{t_{1}},...,\tilde{x}_{t_{q}}) \int_{s}^{t}\varlimsup_{n\to\infty} \big[w_{p}(p,\tilde{x}_{p})+\tilde{a}^{nkr}(p,x^{n}_{\cdot}) w_{x^{k}x^{r}}(p,\tilde{x}^{n}_{p})$$ $$+\tilde{b}^{nk}(p, x^{n}_{\cdot})w_{x^{k}}(p,\tilde{x}^{n}_{p})\big]\,dp.$$ Also notice that owing to (\[4.21.1\]), $\tilde{a}$ and $\tilde{b}$ satisfy (\[4.14.3\]) with $L(r,t)$ replaced with $N_{0}L(r,t)$.
In light of (\[4.21.4\]) this implies $$|\tilde{b}^{n}(t,x^{n}_{\cdot})| +{\mathop{\sf trace}}\,\tilde{a}^{n}(t,x^{n}_{\cdot})\leq N_{0}L(2R+|\tilde{x}^{n}_{t}|,t).$$ In addition, according to (\[4.25.1\]), for almost any $t$, for every $\tilde{x}\in v^{m}(t,G^{m}_{t})$ and each sequence $y^{n}_{\cdot}\in{\mathcal{D}}$, which converges to a continuous function $y_{\cdot}$ satisfying $v^{m}(t,y _{t})=\tilde{x}$, we have $$\varliminf_{n\to\infty}\det \tilde{a}^{n}(t,y^{n}_{\cdot}) \geq\delta_{m}(t,\tilde{x}) >0,$$ $$\varliminf_{n\to\infty} \tilde{a}^{nkr}(t,y^{n}_{\cdot}) \lambda^{k}\lambda^{r} \geq\tilde{\delta}_{m}(t,\tilde{x})|\lambda|^{2}$$ for all $\lambda\in{\mathbb{R}}^{d_{m}}$, where $$\tilde{\delta}_{m}(t,\tilde{x})= \delta_{m}(t,\tilde{x}) L^{-(d_{m}-1)}(2R+|\tilde{x}|+1,t)I_{ v^{m} (G^{m})}(t,\tilde{x}).$$ Then as in the proof of Lemma \[lemma 4.21.4\] we find that $$Eg(\tilde{x}_{t_{1}},...,\tilde{x}_{t_{q}}) \big[w(t,\tilde{x}_{t})-w(s,\tilde{x}_{s})\big] \leq Eg(\tilde{x}_{t_{1}},...,\tilde{x}_{t_{q}}) \int_{s}^{t}\big[ w_{p}(p,\tilde{x}_{p})$$ $$+F(p,\tilde{x} _{p},w_{x x }(p,\tilde{x}_{p})) + L(2R+|\tilde{x}_{p}|+1,p)|w_{x}(p, \tilde{x}_{p})|\big]\,dp,$$ where the operator $F$ is constructed on the basis of $\tilde{\delta}_{m}$ and $N_{0}L(2R+r,t)$ in place of $\delta$ and both $L,K$ from Sec. \[section 3.14.1\], respectively, on the space of functions on ${\mathbb{R}}^{d_{m}}$ in place of ${\mathbb{R}}^{d}$. Again as in the proof of Lemma \[lemma 4.21.4\] we conclude that, for any $S$ we have $$E\int_{0}^{S}I_{v^{m}(G^{m})}(t,\tilde{x}_{t})\,dt=0.$$ Since, obviously, $ I_{G^{m}}(t,x)\leq I_{v^{m}(G^{m})}(t,v^{m}(t,x))$, we get that (\[3.13.1\]) holds with $G^{m}$ in place of $G$. As we have pointed out at the beginning of the proof, this is exactly what we need. The theorem is proved.
A sufficient condition for precompactness {#section 4.14.2} ========================================= One of the conditions of Theorem \[theorem 3.8.1\] is that the sequence of distributions $({\mathbb{Q}}^{n})_{n\geq1}$ of $x^{n}_{\cdot}$ on ${\mathcal{D}}$ converges. One can always extract a convergent subsequence from a sequence which is precompact, and here we want to give a simple sufficient condition for precompactness to hold. The assumptions of this section are somewhat different from the ones of Sec. \[section 4.14.1\], and this is the reason for treating the issue in a separate section. We take the objects introduced in Sec. \[section 4.14.1\] before Assumption \[assumption 3.8.2\] and instead of that assumption we require the following. \[assumption 4.14.2\] Assumption \[assumption 3.8.2\] is satisfied with condition (ii) replaced by the following weaker condition: For each $r\in[0,\infty)$ there exists a locally integrable function $L(r,t)$ given on $[0,\infty)$ such that $L(r,t)$ increases in $r$ and $$|b^{n}(t,y_{\cdot})|+{\mathop{\sf trace}}\,a^{n}(t,y_{\cdot})\leq L(r,t)$$ whenever $t>0$, $y_{\cdot}\in{\mathcal{D}}$, and $\sup_{s\leq t}|y_{s}|\leq r$. \[lemma 4.15.1\] Under Assumptions \[assumption 3.8.5\] and \[assumption 4.14.2\] suppose that we are given ${\mathcal{F}}^{n}_{t}$-stopping times $\tau^{n}_{r}$, $n=1,2,...,r>0$, and a finite function $\alpha(r)$ defined on $(0,\infty)$ such that we have (i) for all $n$ and $r$, $$\label{4.17.6} |x^{n }_{t}|\leq\alpha(r)\quad\text{if} \quad 0\leq t<\tau^{n}_{r},$$ and (ii) $$\label{4.17.7} \lim_{r\to\infty}\varlimsup_{n\to\infty}P^{n}(\tau^{n}_{r} \leq T)=0\quad\forall T\in[0,\infty).$$ Then the sequence $({\mathbb{Q}}^{n})_{n\geq1}$ is precompact. Proof.
Define $$G^{n}_{t}=\int_{0}^{t } \big[|b^{n}(s,x^{n}_{\cdot})|+{\mathop{\sf trace}}\,a^{n} (s,x^{n}_{\cdot})\big]\,ds,$$ $$F^{n}_{t}=G^{n}_{t}+ \int_{0}^{t } \int_{|x|>1}\nu^{n}(dsdx).$$ Owing to Assumption \[assumption 3.8.5\], by Theorem VI.4.18 and Remark VI.4.20 of [@JS], to prove the lemma it suffices to check that the sequence of distributions on ${\mathcal{D}}$ of $F^{n}_{\cdot}$ is $C$-tight, that is, precompact and such that each limit point of this sequence is the distribution of a continuous process. In turn, due to Theorem VI.4.5 and Remark VI.4.6 (3) of [@JS], to prove the $C$-tightness it suffices to show that, for any $T\in[0,\infty)$ and $\varepsilon>0$, $$\begin{aligned} \label{4.17.1} &\lim_{N\to\infty}\varlimsup_n&P^n\Big(\sup_{t\le T}\big|F^{n}_t\big|\ge N\Big)=0,\nonumber \\ &\lim_{\delta\downarrow 0}\,\varlimsup_n \, & P^n\Big(\sup_{t+s\leq T,0\leq s\leq\delta} \big|F^{n}_{t+s}-F^{n}_t\big|\ge \varepsilon\Big)=0.\end{aligned}$$ In view of Assumption \[assumption 3.8.5\] we need only prove (\[4.17.1\]) for $G^{n}$ in place of $F^{n}$. We do this replacement and after that notice that, for any $r$, the left-hand side of the first equation in (\[4.17.1\]) is less than $$\lim_{N\to\infty}\varlimsup_nP^n\Big(\sup_{t\le T\wedge\tau^{n}_{r}}\big|G^{n}_t\big|\ge N\Big) +\varlimsup_{n\to\infty}P^{n}(\tau^{n}_{r} \leq T).$$ Here the first term is zero for each $r$ since $G^{n}_t$ is continuous in $t$ and $$|G^{n}_t|\leq\int_{0}^{t}L(\alpha(r),u)\,du$$ for $t<\tau^{n}_{r}$, when by our assumptions $|x^{n}_{t }|\leq \alpha(r)$. In addition, the second term can be made as small as we wish by choosing a sufficiently large $r$. This proves the first equation in (\[4.17.1\]).
Similarly, the left-hand side of the second equation in (\[4.17.1\]) with $G^{n}$ in place of $F^{n}$ is less than $$\lim_{\delta\downarrow 0}\varlimsup_nP^n\Big(\sup_{t+s\le T\wedge\tau^{n}_{r},0\le s\le\delta}\big|G^{n}_{t+s}-G^{n}_{t}\big|\ge\varepsilon\Big) +\varlimsup_{n\to\infty}P^{n}(\tau^{n}_{r} \leq T),$$ where again the first term vanishes since $$|G^{n}_{t+s}-G^{n}_{t}|\leq\int_{t}^{t+s}L(\alpha(r),u)\,du.$$ The lemma is proved. [It may be worth noticing that the combination of assumptions (i) and (ii) of Lemma \[lemma 4.15.1\] is equivalent to the following: for any $T\in(0,\infty)$, the sequence of distributions of $\sup_{t\le T}|x^n_t|$ is tight, or, put otherwise, $$\lim_{r\to\infty}\varlimsup_{n\to\infty}P^n(\sup_{t\le T}|x^n_t|\geq r)=0.$$ ]{} Lemma \[lemma 4.15.1\] reduces the investigation of precompactness to estimating $|x^{n}|^{*}_{t}$. Here the following coercivity assumption turns out to be useful. \[assumption 4.14.1\] For any $n$, there exists a nonnegative ${\mathcal{F}}^{n}_{t}$-predictable function $L_{n}(t)$ such that $$\label{4.14.5} b^{ni}(t,x^{n}_{\cdot})x^{ni}_{t} +{\mathop{\sf trace}}\, a^{n}(t,x^{n}_{\cdot})\leq L_{n}(t)(1+|x^{n}_{t}|^{2})$$ for almost all $(\omega,t)$. Furthermore, for any $T\in[0,\infty)$, $$\lim_{c\to\infty} \varlimsup_{n\to\infty}P^{n}\big(\int_{0}^{T}L_{n}(t)\,dt >c\big)=0.$$ \[remark 4.25.4\] Quite often one imposes a linear growth assumption on the coefficients $a^{n}$ and $b^{n}$, which of course implies (\[4.14.5\]). However, say in one dimension, if $a^{n}\equiv0$ and $b^{ni}(t,y_{\cdot})=b^{n}(t,y_{t})$ and $b^{n}(t,y_{t})\geq0$ for $y_{t}<0$ and $b^{n}(t,y_{t})\leq0$ for $y_{t}>0$, then (\[4.14.5\]) is satisfied with $L\equiv0$. Therefore, generally, (\[4.14.5\]) does not provide any control on the behavior of $|b^{n}(t,y_{t})|$ for large $|y_{t}|$. For that reason, Theorem \[theorem 4.17.1\] below does not follow from the results of [@JS] and [@LS].
\[theorem 4.17.1\] Let $$\label{4.17.9} \lim_{N\to\infty}\varlimsup_{n\to\infty} P^{n}(|x^{n}_{0}|\geq N)=0$$ and let Assumptions \[assumption 3.8.5\], \[assumption 4.14.2\], and \[assumption 4.14.1\] be satisfied. Then the sequence $({\mathbb{Q}}^{n})_{n\geq1}$ is precompact. Furthermore, let $k$ be an integer and $f^{n}(t,x)$ be Borel ${\mathbb{R}}^{k}$-valued functions defined on $(0,\infty)\times{\mathbb{R}}^{d}$ such that $|f^{n}(t,x)|\leq L(|x|,t)$ for all $t,x,n$. Define $$y^{n}_{t}=\int_{0}^{t}f^{n}(s,x^{n}_{s})\,ds.$$ Then the sequence of distributions of $(x^{n}_{\cdot},y^{n}_{\cdot})$ on ${\mathcal{D}}([0,\infty),{\mathbb{R}}^{d+k})$ is precompact as well. Proof. We are going to use a method introduced in Sec. 4, Ch. II of [@KR]. Define $$z^{n}_{t}=x^{n}_{t}-j^{n}_{t},\ \phi_n(t)=\exp \Big( -2\int_{0}^{t}L _n(s)\,ds \Big) , \ u_n(t,x)=(1+|x|^{2})\phi_n(t).$$ Also as in the proof of Lemma \[lemma 4.21.1\], use notation (\[4.17.3\]) and (\[4.17.4\]) and notice that due to the special choice of $u$, we have $R^{n}_{t}(z_{\cdot})\equiv0$. Then by using Itô’s formula, we get that the process $$M^{n}_{t}:=(1+|z^{n}_{t}|^{2})\phi_n(t)- (1+|x^{n}_{0}|^{2})$$ $$\label{4.17.5} - \int_{0}^{t}\big[2z^{ni}_{s}b^{ni}(s,x^{n}_{\cdot}) +2{\mathop{\sf trace}}\,a^{n}(s,x^{n}_{\cdot}) -2L_n(s)(1+|z^{n}_{s}|^{2})\big] \phi_n(s)\,ds$$ is a local martingale. Now take $\gamma^{n}$ again from (\[4.17.2\]) and remember that $z^{n}_{s}=x^{n}_{s}$ for $s<\gamma^{n}$, so that the expression in the brackets in (\[4.17.5\]) is nonpositive due to Assumption \[assumption 4.14.1\]. Then we see that $$H^{n}_{t}:=(1+|z^{n}_{t\wedge\gamma^{n}} |^{2})\phi_n(t\wedge\gamma^{n})- (1+|x^{n}_{0}|^{2})$$ is a local supermartingale. For any constant $N>0$, the process $H^{n}_{t}I_{|x^{n}_{0}|\leq N}$ is also a local supermartingale and, since it is bounded from below by the constant $-(1+N^{2})$, it is a supermartingale.
Therefore, upon defining $$\kappa^{n}_{r}=\inf\{t\geq0:\sup_{s\leq t}|x^{n}_{s}|>r \}, \quad\tau^{n}_{r}=\gamma^{n}\wedge\kappa^{n}_{r},$$ we get that, for any $T\in[0,\infty)$, $$E^{n}\big(1+|z^{n}_{T\wedge\tau^{n}_{r}}|^{2}\big) \phi_n(T\wedge\tau^{n}_{r})I_{|x^{n}_{0}|\leq N} \leq1+N^{2},$$ $$E^{n}\big(1+|z^{n}_{ \tau^{n}_{r}}|^{2}\big) \phi_n( \tau^{n}_{r})I_{|x^{n}_{0}|\leq N, \tau^{n}_{r}\leq T<\gamma^{n}} \leq1+N^{2}.$$ Then we notice that on the interval $[0,\gamma^{n})$ the process $j^{n}_{t}$ is identically zero. Hence, for $\tau^{n}_{r}\leq T<\gamma^{n}$ we have $$|z^{n}_{ \tau^{n}_{r}}|=|x^{n}_{\tau^{n}_{r}}| =|x^{n}_{\kappa^{n}_{r}}|\geq r$$ and we obtain $$e^{-c}(1+r^{2})P^{n}\bigg(\int_{0}^{T}L_{n}(t)\,dt \leq c,|x^{n}_{0}|\leq N, \tau^{n}_{r}\leq T<\gamma^{n}\bigg)\leq1+N^{2},$$ $$\lim_{r\to\infty}\varlimsup_{n\to\infty} P^{n}(|x^{n}_{0}|\leq N, \tau^{n}_{r}\leq T<\gamma^{n})=0.$$ This holds for any $N$ and along with assumption (\[4.17.9\]) and Remark \[remark 4.17.1\] leads first to $$\lim_{r\to\infty}\varlimsup_{n\to\infty} P^{n}( \tau^{n}_{r}\leq T<\gamma^{n})=0$$ and then to (\[4.17.7\]). Finally, observe that (\[4.17.6\]) is obviously satisfied even if $0\leq t<\kappa^{n}_{r}$ rather than $0\leq t<\tau^{n}_{r}$. Hence, by referring to Lemma \[lemma 4.15.1\] we finish proving the assertion of our theorem regarding the distributions of $x^{n}_{\cdot}$. Lemma \[lemma 4.15.1\] yields the result for $(x^{n}_{\cdot},y^{n}_{\cdot})$ as well since, obviously, for $0\leq t<r\wedge\tau^{n}_{r}$, we have $$|y^{n}_{t}|\leq\int_{0}^{r}L(r,s)\,ds.$$ The theorem is proved. An example of a queueing model {#section 4.18.1} ============================ We consider a particular queueing system with $d$ service stations and $d+1$ incoming streams of customers. We refer the reader to [@FS] for relations of this system to practical problems.
The first $d$ streams are composed of customers “having appointments”, meaning that the customers from the $i$th stream only go to the $i$th service station. The last stream, to which we assign number 0, is the one of “free” customers who, upon “checking in”, are routed to the service stations according to a certain rule to be described later. We assume that each service station consists of infinitely many servers, so that infinitely many customers can be served at each station simultaneously. Denote by $Q^{i}_{t}$ the number of customers being served at the $i$th station at time $t$. With station $i$, $i=1,...,d$, we associate a “cost” $\alpha_{i}>0$ and suppose that a “free” customer arriving at time $t$ is directed to the $i$th station if $i$ is the smallest integer satisfying $$\alpha_{i}Q^{i}_{t-}\leq\alpha_{j}Q^{j}_{t-}\quad \text{for all}\quad j\ne i.$$ Such a routing policy is called load-balancing in [@FS]. Here and below in this section the summation convention over repeated indices [*is not enforced*]{}. We take some numbers $\lambda_{0},...,\lambda_{d}>0$ and assume that the $i$th stream of customers forms a Poisson process with parameter $\lambda_{i}$. To describe the service times we fix some “thresholds” $N^{1},...,N^{d}$, which are positive integers, and assume that, given $0<Q^{i}_{t}< N^{i}$, each of $ Q^{i}_{t}$ customers at the $i$th station \(i) has its own server, \(ii) spends with its server a random time having exponential distribution with parameter 1, \(iii) after having been served leaves the system. However, given $Q^{i}_{t}\geq N^{i}$, the service is organized differently. All $Q^{i}_{t}$ customers are divided into disjoint groups, each consisting of two persons, apart from at most one group having only one member. Then each of those groups is supposed to get service according to the rules (i)-(iii) above.
By the way, it is not hard to understand that on average both disciplines of servicing yield the same number of customers having been served during one unit of time. Finally, we assume that all service times and arrival processes are as independent as they can be. Now we describe the model in rigorous terms. For any numbers $y^{1},...,y^{d}$ define $${\mathop{\sf argmin}}_{k=1,...,d}y^{k}=i$$ if $i$ is the least of $1,...,d$ such that $y^{i}\leq y^{k}$ for $k\ne i$. For $x\in{\mathbb{R}}^{d}$ and $i=1,...,d$, let $$\delta^{i}(x)=\left\{\begin{array}{ll}1&\quad\text{if}\quad i={\mathop{\sf argmin}}\limits_{k=1,...,d}\alpha_{k}x^{k}, \\ 0 &\quad\text{otherwise}. \end{array}\right.$$ Take independent Poisson processes $\Pi^0_t,..., \Pi^d_t$ with parameters $\lambda_{0},...,\lambda_{d}$, respectively. Then we think of the number of arrivals at the $i$th station as given by $$A^{i}_{t}=\int_{0}^{t}\delta^{i}(Q_{s-})\,d\Pi^{0}_{s} +\Pi^{i}_{t},$$ where $Q_{s}=(Q^{1}_{s},...,Q^{d}_{s})$ and $Q^{i}_{t}$ are some integer-valued right continuous processes having left limits. To model the number of departures $D^{i}_{t}$ from the $i$th station up to time $t$ we take Poisson processes $\Pi^{ij} _{t}$ and $\Lambda^{ij} _{t}$, $i=1,...,d$, $j=1,2,...$, having parameter 1 and mutually independent and independent of $(\Pi^{0}_{\cdot},...,\Pi^{d}_{\cdot})$. Then we define $$D^{i}_{t}=\int_{0}^{t}I_{N^{i}> Q^{i}_{s-}}\sum_{j\geq1} I_{ Q^{i}_{s-}\geq j}\,d\Pi^{ij}_{ s}$$ $$+ \int_{0}^{t}I_{N^{i}\leq Q^{i}_{s-}}\sum_{j\geq1} \big(I_{ Q^{i}_{s-}\geq2j}+I_{ Q^{i}_{s-}+1\geq2j}\big)\, d\Lambda^{ij}_{ s}.$$ To be consistent with the description, $Q_{t}$ should satisfy the balance equations $Q^{i}_{t}=Q^{i}_{0}+ A^{i}_{t}-D^{i}_{t}$.
Thus, we are going to investigate the system of equations $$dQ^{i}_{t}=\delta^{i}(Q_{t-})\,d\Pi^{0}_{t} +d\Pi^{i}_{t}- I_{N^{i}> Q^{i}_{t-}}\sum_{j\geq1} I_{ Q^{i}_{t-}\geq j}\,d\Pi^{ij}_{t}$$ $$\label{4.18.1} - I_{N^{i}\leq Q^{i}_{t-}}\sum_{j\geq1} \big(I_{ Q^{i}_{t-}\geq2j}+I_{ Q^{i}_{t-}+1\geq2j}\big)\, d\Lambda^{ij}_{t}\quad i=1,...,d.$$ Needless to say, we assume that all the Poisson processes we are dealing with are given on a probability basis satisfying the “usual” assumptions. We also assume that the initial condition $Q_{0}$ is independent of the Poisson processes. Notice that for any initial condition $Q_{0}$ there is a unique solution of (\[4.18.1\]). Indeed, obviously, for any solution we have $Q^{i}_{t}\leq Q_{0}^{i}+ \Pi^{0}_{t}+\Pi^{i}_{t}$, so that, while solving (\[4.18.1\]) for $t\in[0,T]$, one can safely replace the infinite sums in (\[4.18.1\]) with the sums over $j\leq Q_{0}^{i}+ \Pi^{0}_{T}+\Pi^{i}_{T}$. After that one solves (\[4.18.1\]) for each $\omega$, noticing that between the jumps of the Poisson processes $Q_{t}$ is constant and the jumps of $Q_{t}$ themselves are given by (\[4.18.1\]). For obvious reasons we rewrite (\[4.18.1\]) in terms of representation (\[4.24.2\]).
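For readers who wish to experiment, the finite system can be simulated directly: the solution of the system just described is a continuous-time Markov chain, and the following Python sketch runs it by the standard jump-chain (Gillespie) method. All function names and parameter values here are ours and purely illustrative; the sketch only encodes the routing and service rules stated above (dedicated arrivals at rate $\lambda_{i}$, free arrivals at rate $\lambda_{0}$ routed to the station with the smallest $\alpha_{i}Q^{i}$, single service below the threshold, paired service above it).

```python
import random

def argmin_cost(alpha, Q):
    # smallest index i minimizing alpha_i * Q^i (the load-balancing rule)
    best = 0
    for i in range(1, len(Q)):
        if alpha[i] * Q[i] < alpha[best] * Q[best]:
            best = i
    return best

def simulate(alpha, lam0, lam, N, Q0, T, seed=0):
    """Jump-chain simulation of the queueing system up to time T.

    Events: dedicated arrivals (rate lam[i], jump +1), free arrivals
    (rate lam0, routed by argmin_cost, jump +1), and departures: below
    the threshold N[i] each of Q[i] customers leaves at rate 1 (jump -1);
    at or above it, Q[i]//2 pairs each leave at rate 1 (jump -2), plus a
    possible lone customer (jump -1).  Returns the state at time T."""
    rng = random.Random(seed)
    Q, t = list(Q0), 0.0
    while True:
        events = [(lam0, argmin_cost(alpha, Q), +1)]   # (rate, station, jump)
        for i in range(len(Q)):
            events.append((lam[i], i, +1))
            if Q[i] < N[i]:
                events.append((Q[i], i, -1))
            else:
                events.append((Q[i] // 2, i, -2))      # pairs
                if Q[i] % 2:
                    events.append((1, i, -1))          # lone customer
        total = sum(r for r, _, _ in events)
        t += rng.expovariate(total)
        if t > T:
            return Q
        u, acc = rng.random() * total, 0.0
        for r, i, jump in events:
            acc += r
            if u <= acc:
                Q[i] = max(Q[i] + jump, 0)
                break
```

Note that the mean departure rate at a busy station is $2\lfloor Q/2\rfloor+(Q\bmod 2)=Q$ in both regimes, in agreement with the remark above that both service disciplines clear customers at the same average rate.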
First, for $k=0,...,d,i=1,...,d,j\geq1$, we define $$\bar{\Pi}^{k}_{t}=\Pi^{k}_{t}-\lambda_{k}t,\quad \bar{\Pi}^{ij}_{t}=\Pi^{ij}_{t}-t,\quad \bar{\Lambda}^{ij}_{t}=\Lambda^{ij}_{t}- t.$$ These processes are square integrable martingales with $${\langle}\bar{\Pi}^{k}{\rangle}_{t}=\lambda_{k}t,\quad {\langle}\bar{\Pi}^{ij}{\rangle}_{t}= t,\quad {\langle}\bar{\Lambda}^{ij}{\rangle}_{t}= t.$$ Next, for $i=1,...,d$, define $$M^{ i}_{t}=\int_{0}^{t} \delta^{i}(Q _{s-})\,d\bar{\Pi}^{0}_{s} + \bar{\Pi}^{i}_{t}-\int_{0}^{t} I_{N^{i}> Q^{i}_{s-}}\sum_{j\geq1} I_{ Q^{i}_{s-}\geq j}\,d\bar{\Pi}^{ij}_{s}$$ $$- \int_{0}^{t}I_{N^{i}\leq Q^{i}_{s-}}\sum_{j\geq1} \big(I_{ Q^{i}_{s-}\geq2j}+I_{ Q^{i}_{s-}+1\geq2j}\big)\, d\bar{\Lambda}^{ij}_{s},$$ which are at least locally square integrable martingales. Then after observing that, for any integer $q\geq0$, $$\sum_{j\geq1}I_{q\geq j}=q,\quad \sum_{j\geq1}(I_{q\geq2j}+I_{q+1\geq2j})=q,$$ we turn equation (\[4.18.1\]) into the equation $$\label{4.24.3} dQ^{i}_{t}=(\lambda_{0}\delta^{i}(Q_{t}) +\lambda_{i}-Q^{i}_{t})\,dt+dM^{i}_{t}.$$ In order to explain what follows (in no way is this explanation used in the proof of Theorem \[theorem 4.18.1\] below), notice that (\[4.24.3\]) seems to imply that $$\label{4.24.1} (EQ^{i}_{t} )'=\lambda_{0} E\delta^{i}(Q_{t})+\lambda_{i}-EQ^{i}_{t}.$$ We are interested in the behavior of $Q_{t}$ when $\lambda_{i}$’s are large but $\lambda_{0}$ is much smaller than $\lambda_{1},...,\lambda_{d}$. Then, on the one hand, $EQ^{i}_{t}$ should be large for moderate $t$ and, on the other hand, the first term on the right in (\[4.24.1\]) can be neglected. In that situation equation (\[4.24.1\]) turns out to have a stable point $EQ_{t}^{i}\equiv\lambda_{i}$. This means that, if for the initial condition we have $EQ_{0}^{i}=\lambda_{i}$, then $EQ_{t}^{i}=\lambda_{i}$ for all $t$. Notice that since $\lambda_{i}$’s are large, so should be $EQ_{0}^{i}$. 
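The two counting identities just used, $\sum_{j\geq1}I_{q\geq j}=q$ and $\sum_{j\geq1}(I_{q\geq2j}+I_{q+1\geq2j})=q$, can be confirmed mechanically; a quick sanity check (ours, purely illustrative):

```python
# Check, for nonnegative integers q:
#   sum_{j>=1} 1_{q>=j} = q   and   sum_{j>=1} (1_{q>=2j} + 1_{q+1>=2j}) = q.
# Indicators are summed as booleans (0 or 1); j beyond (q+1)/2 contributes 0.
for q in range(200):
    assert sum(q >= j for j in range(1, q + 2)) == q
    assert sum((q >= 2 * j) + (q + 1 >= 2 * j) for j in range(1, q + 2)) == q
```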
Therefore, we write $\lambda_{i}=\bar{\lambda}_{i}+\Delta\lambda_{i}$, where $\Delta\lambda_{i}$ will be assumed to be of order $\lambda_{0}$, denote $$\bar{Q}^{i}_{t}=Q^{i}_{t}-\bar{\lambda}_{i}$$ and rewrite (\[4.24.3\]) in terms of $\bar{Q}_{t}$. At this moment we introduce the assumption that $$\label{4.24.4} \bar{\lambda}_{i}\alpha_{i}=n,\quad i=1,...,d,$$ with $n$ being an integer (independent of $i$) to be sent to infinity. This is convenient due to the simple fact that then $$\delta^{i}(x)=\delta^{i}(x-\bar{\lambda}).$$ In this notation (\[4.24.3\]) becomes $$d\bar{Q}^{i}_{t}= (\lambda_{0}\delta^{i}(\bar{Q}_{t}) +\Delta\lambda_{i} -\bar{Q}^{i}_{t})\,dt+dM^{i}_{t}.$$ To understand what kind of normalization is natural we compute the quadratic characteristics of $M^{i}_{t}$. Notice that, for any integer $q\geq0$, we have $$\sum_{j\geq1}(I_{q\geq2j}+I_{q+1\geq2j})^{2}= \sum_{j\geq1}(I_{q\geq2j}+2I_{q\geq2j}+I_{q+1\geq2j})$$ $$=3[q/2]+[(q+1)/2]=:qf(q),$$ where $[a]$ is the integer part of $a$. By the way, the above formula defines $f(q)$ for all real $q>0$; if $q\leq0$, we let $f(q)=0$. Then $$\label{4.24.7} 0\leq f\leq2,\quad\lim_{q\to\infty}f(q)=2.$$ It follows that $$d{\langle}M{\rangle}^{ii}_{t}=[\lambda_{0}\delta^{i}(\bar{Q}_{t}) +\lambda_{i}+Q^{i}_{t}I_{Q^{i}_{t}<N^{i}} +Q^{i}_{t}f(Q^{i}_{t})I_{Q^{i}_{t}\geq N^{i}}]\,dt.$$ Also due to independence of our Poisson processes and the fact that $\delta^{i}\delta^{j}=0$ for $i\ne j$, we get $${\langle}M{\rangle}^{ij}_{t}=0\quad\text{for}\quad i\ne j.$$ If we believe that, in a sense, $Q^{i}_{t}\sim\lambda_{i}$, then $Q^{i}_{t}/\lambda_{i}$ should converge as well as $M^{i}_{t}/\sqrt{\lambda_{i}}$, and we see that it is natural to expect $\bar{Q}^{i}_{t} /\sqrt{\lambda_{i}}$ to converge to a certain limit. To make the model more meaningful we also assume that the thresholds $N^{i}$’s are large and, roughly speaking, proportional to $\lambda_{i}$.
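The function $f$ and the bounds (\[4.24.7\]) can likewise be checked numerically. The snippet below (an illustration under our own naming) implements $f$ by the defining formula and re-derives, for integer $q$, the identity $\sum_{j\geq1}(I_{q\geq2j}+I_{q+1\geq2j})^{2}=3[q/2]+[(q+1)/2]$.

```python
import math

def f(q):
    # q*f(q) = 3*[q/2] + [(q+1)/2] for real q > 0, and f(q) = 0 for q <= 0
    return 0.0 if q <= 0 else (3 * math.floor(q / 2) + math.floor((q + 1) / 2)) / q

# the variance identity: sum_j (1_{q>=2j} + 1_{q+1>=2j})^2 = 3[q/2] + [(q+1)/2]
for q in range(1, 100):
    s = sum(((q >= 2 * j) + (q + 1 >= 2 * j)) ** 2 for j in range(1, q + 2))
    assert s == 3 * (q // 2) + (q + 1) // 2

# (4.24.7): 0 <= f <= 2 and f(q) -> 2 as q -> infinity
assert all(0.0 <= f(q) <= 2.0 for q in [0.3, 1, 2.5, 7, 10 ** 6])
assert abs(f(10 ** 6) - 2.0) < 1e-5
```

For even integers $q$ one gets $f(q)=2$ exactly, and for odd $q$, $f(q)=2-1/q$, which makes the limit in (\[4.24.7\]) visible at a glance.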
In this way we convince ourselves that the following result seems natural. \[theorem 4.18.1\] Let $\alpha_{1},...,\alpha_{d}>0$ and $\mu_{0},...,\mu_{d}\geq0$ and $\nu_{1},...,\nu_{d}\in{\mathbb{R}}$ be fixed parameters. For $n=1,2,...$ define $$\lambda_{i}=n\alpha_{i}^{-1}+\mu_{i}\sqrt{n}, \quad i=1,...,d,\quad\lambda_{0}=\mu_{0}\sqrt{n},$$ $$N^{i}=n\alpha_{i}^{-1}+\nu_{i}\sqrt{n},\quad i=1,...,d.$$ Let $Q_{t}=Q_{t}^{n}$ be the solution of (\[4.18.1\]) with a certain initial condition independent of the Poisson processes and introduce $$x_{t}^{n}= n^{-1/2}(Q^{n1}_{t}-n\alpha_{1}^{-1},... ,Q^{nd}_{t}-n\alpha_{d}^{-1}).$$ Let ${\mathbb{Q}}^{n}$ be the distribution of $x_{\cdot}^{n}$ on ${\mathcal{D}}$. Finally, assume that the distribution of $ x^{n}_{0} $ weakly converges to a distribution $F_{0}$ as $n\to\infty$. Then, as $n\to\infty$, ${\mathbb{Q}}^{n}$ converges weakly to the distribution of a solution of the following system $$\label{4.24.6} dx^{i}_{t}=(\mu_{0}\delta^{i}(x_{t})+ \mu_{i}-x^{i}_{t})\,dt+ \alpha_{i}^{-1/2}(2 +I_{x^{i}_{t}\geq\nu_{i}})^{1/2}\,dw^{i}_{t}, \quad i=1,...,d$$ considered on some probability space with $w_{t}$ being a $d$-dimensional Wiener process and $x_{0}$ distributed according to $F_{0}$. Proof. First of all, notice that (\[4.24.6\]) has solutions on appropriate probability spaces and any solution has the same distribution on the space of ${\mathbb{R}}^{d}$-valued continuous functions. This follows from the fact that an obvious change of probability measure allows us to consider the case with no drift terms in (\[4.24.6\]). In that case (\[4.24.6\]) becomes just a collection of unrelated one-dimensional equations with uniformly nondegenerate and bounded diffusion. Weak unique solvability of such equations is a very well-known fact (see, for instance, Theorems 2 and 3 of [@Kr69]). In the proof of convergence we will be using Theorems \[theorem 4.17.1\] and \[theorem 3.8.1\].
Observe that Assumption \[assumption 3.8.5\] is satisfied since $x^{ni}_{t}$ has no jumps bigger than $2n^{-1/2}$ and $\nu^{n}((0,\infty) \times B^{c}_{a})=0$ if $n>4d/a^{2}$. Furthermore, if in the argument before the theorem we take $\bar{\lambda}_{i} =n\alpha_{i}^{-1}$, so that (\[4.24.4\]) holds, and let $ \Delta\lambda_{i}=\mu_{i}\sqrt{n}, $ then after noticing that, by definition, $$Q^{ni}_{t}=n^{1/2}x^{ni}_{t}+n\alpha_{i}^{-1},$$ we easily obtain $$\label{4.25.2} dx^{n}_{t}=b^{n}(x^{n}_{t})\,dt+dm^{n}_{t}, \quad{\langle}m^{n}{\rangle}_{t} =\int_{0}^{t}a^{n}(x^{n}_{s})\,ds,$$ where $$b^{ni}(x)=\mu_{0}\delta^{i}(x)+\mu_{i}-x^{i},\quad a^{nij}(x)=\delta^{ij}\big(n^{-1/2} \mu_{0}\delta^{i}(x)+ \alpha_{i}^{-1} +\mu_{i}n^{-1/2}$$ $$+(x^{i}n^{-1/2}+\alpha_{i}^{-1})_{+}\big[I_{x^{i}<\nu^{i}} +f(n^{1/2}\,x^{i}+n\alpha_{i}^{-1})I_{x^{i}\geq\nu^{i}}\big] \big).$$ Upon remembering (\[4.24.7\]) we see that, for a constant $N$ and all $n$ and $x$, we have $|b^{n}(x)|+{\mathop{\sf trace}}\,a^{n}(x)\leq N(1+|x|)$, which shows that Assumptions \[assumption 3.8.2\] and \[assumption 4.14.2\], equivalent in our present situation, and Assumption \[assumption 4.14.1\] are satisfied. By Theorem \[theorem 4.17.1\] the sequence $({\mathbb{Q}}^{n})$ is precompact. Next, obviously Assumption \[assumption 3.8.3\] is satisfied if we take $$G=\{(t,x):t>0,\prod_{i\ne j}(\alpha_{i}x^{i} -\alpha_{j}x^{j})\prod_{i=1}^{d}(x^{i}-\nu_{i})=0\},$$ $$b^{i}(x)=\mu_{0}\delta^{i}(x)+\mu_{i}-x^{i},\quad a^{ij}(x)=\delta^{ij} \alpha_{i}^{-1}\big( 1+ I_{x^{i}<\nu^{i}} +2I_{x^{i}\geq\nu^{i}}\big) .$$ Finally, Assumption \[assumption 3.8.4\] is satisfied since $\det\,a^{n}(x)\geq\alpha_{1}^{-1} \cdot...\cdot\alpha_{d}^{-1}$ everywhere. By Theorem \[theorem 3.8.1\] every convergent subsequence of $({\mathbb{Q}}^{n})$ converges to the distribution of a solution of (\[4.24.6\]) with the above specified initial distribution.
Since all such solutions have the same distribution, the whole sequence $({\mathbb{Q}}^{n})$ converges to the distribution of any solution of (\[4.24.6\]). The theorem is proved. \[remark 5.19.2\] In Theorem \[theorem 4.18.1\] we assume that $ Q^{ni}_{0}$ goes to infinity at a certain rate, namely $ Q^{ni}_{0}\sim n\alpha^{-1}_{i} $. Interestingly enough, if we change the rate, the diffusion approximation changes. Indeed, keep all the assumptions of Theorem \[theorem 4.18.1\] apart from the assumption that $x^{n}_{0}$ converges in distribution and instead assume that, for some $\gamma\in[0,\infty)$, say $\gamma=0$, $$n^{-1/2}(Q^{n1}_{0}-n\gamma\alpha^{-1}_{1},...,Q^{nd}_{0}-n \gamma\alpha^{-1}_{d})$$ converges in law to a random vector. Notice that the case $\gamma=1$ is covered by Theorem \[theorem 4.18.1\]. We claim that, for $\gamma>1$, the processes $$y^{n}_{t}=n^{-1/2}(Q^{n1}_{t}-nq_{t}\alpha^{-1}_{1}, ...,Q^{nd}_{t}-nq_{t}\alpha^{-1}_{d}),$$ where $q_{t}=1+(\gamma-1)e^{-t}$, weakly converge to a solution of the system $$dy^{i}_{t}=(\mu_{0}\delta^{i}(y_{t})+ \mu_{i}-y^{i}_{t})\,dt+ \alpha_{i}^{-1/2}(1+2q_{t})^{1/2}\,dw^{i}_{t}, \quad i=1,...,d,$$ and for $\gamma\in[0,1)$ weakly converge to a solution of $$dy^{i}_{t}=(\mu_{0}\delta^{i}(y_{t})+ \mu_{i}-y^{i}_{t})\,dt+ \alpha_{i}^{-1/2}(1+q_{t})^{1/2}\,dw^{i}_{t}, \quad i=1,...,d.$$ Indeed, we have $$Q^{ni}_{t}=n^{1/2}y^{ni}_{t}+nq_{t}\alpha_{i}^{-1}, \quad dq_{t}=(1-q_{t})\,dt,$$ $$dy^{n}_{t}=b^{n}(y^{n}_{t})\,dt+dm^{n}_{t}, \quad{\langle}m^{n}{\rangle}_{t} =\int_{0}^{t}a^{n}(y^{n}_{s})\,ds,$$ where $$b^{ni}(x)=\mu_{0}\delta^{i}(x)+\mu_{i}-x^{i},\quad a^{nij}(x)=\delta^{ij}\bigg(n^{-1/2} \mu_{0}\delta^{i}(x)+ \alpha_{i}^{-1}$$ $$+\mu_{i}n^{-1/2} +(x^{i}n^{-1/2}+q_{t}\alpha_{i}^{-1})_{+}\big[I_{ (\gamma-1)e^{-t}<\alpha_{i}(\nu^{i}- x^{i})n^{-1/2}}$$ $$+f(n^{1/2}\,x^{i}+nq_{t}\alpha_{i}^{-1}) I_{ (\gamma-1)e^{-t}\geq\alpha_{i}(\nu^{i}- x^{i})n^{-1/2}}\big] \bigg).$$ As in the proof of Theorem \[theorem 4.18.1\] one
checks that the sequence of distributions of $y^{n}_{\cdot}$ is precompact. Furthermore, obviously, for any $x$ $$a^{nij}(x)\to\left\{\begin{array}{ll} \delta^{ij}\alpha_{i}^{-1}(1+q_{t} )&\quad \text{if}\quad\gamma<1, \\ \delta^{ij}\alpha_{i}^{-1}(1+2q_{t} )&\quad \text{if}\quad\gamma>1, \end{array}\right.$$ and this yields our claim in the same way as in the proof of Theorem \[theorem 4.18.1\]. \[remark 5.19.3\] We tried to explain before the proof of Theorem \[theorem 4.18.1\] why its statement looks natural. Now we can also explain how the function $q_{t}$ from Remark \[remark 5.19.2\] was found. The explanation is based on a kind of law of large numbers which in queueing theory is associated with so-called “fluid approximations”. Generally, “fluid approximations” can also be derived from Theorems \[theorem 4.17.1\] and \[theorem 3.8.1\]. For instance, if $\lambda_{k}=\lambda_{k}(n)$ and $\lambda_{k}(n)/n \to\beta_{k}$ as $n\to\infty$, and $\beta_{0}=0$, then under the condition that $Q^{n}_{0}/n$ converges in probability to a constant vector, the processes $Q^{n}_{t}/n$ converge in probability uniformly on each finite time interval to the deterministic solution of the system $$dq^{i}_{t}=(\beta_{i}-q^{i})\,dt,\quad i=1,...,d.$$ This fact obviously follows from Theorems \[theorem 4.17.1\] and \[theorem 3.8.1\] applied to (\[4.24.3\]) written in terms of $z^{n}_{t}:=Q^{n}_{t}/n$: $$dz^{ni}_{t}=b^{ni}(z^{n}_{t})\,dt+dM^{ni}_{t},$$ with $d{\langle}M^{n}{\rangle}^{ij}_{t}=a^{n}_{t}(z^{n}_{t})\,dt$, $$b^{ni}(x)= \delta^{i}(x)\lambda_{0}/n +\lambda_{i}/n-x^{i} , \quad |a^{nij}_{t}(x)|\leq Nn^{-1}(1+|x|),$$ where the constant $N$ is independent of $x,n,t$. The following observation can be generalized so as to be used in various control problems in which optimal controls are discontinuous with respect to space variables. \[remark 4.25.1\] It turns out that many discontinuous functionals of $x^{n}_{\cdot}$ converge in law to corresponding functionals of $x_{\cdot}$.
For instance, take a Borel vector-valued function $f(x)$ on ${\mathbb{R}}^{d}$ such that the set of its discontinuities lies in a closed set $J\subset {\mathbb{R}}^{d}$ having Lebesgue measure zero. Also assume that $f$ is locally bounded, that is, bounded on any ball in ${\mathbb{R}}^{d}$ but allowed to behave in any way at infinity. As an example, one can take $f(x)=(\delta^{1}(x), ...,\delta^{d}(x))$. Then, for $$y^{n}_{t}:=\int_{0}^{t}f(x^{n}_{s})\,ds,\quad y_{t}:=\int_{0}^{t}f(x _{s})\,ds$$ we have that the distributions of $ (x^{n}_{\cdot} ,y^{n}_{\cdot})$ converge weakly to the distribution of $ (x _{\cdot} ,y _{\cdot})$. Indeed, append (\[4.25.2\]) with one more equation: $dy^{n}_{t}=f(x^{n}_{t})\,dt$ and consider the couple $z^{n}_{\cdot}=(x^{n}_{\cdot} ,y^{n}_{\cdot})$ as a process in ${\mathbb{R}}^{d+1}$. Obviously Assumptions \[assumption 3.8.2\] and \[assumption 3.8.5\] are satisfied for the couple thus obtained. Furthermore, define $$H=\{(t,x,y):t>0,y\in{\mathbb{R}},\quad x\in J\quad\text{or}\quad \prod_{i\ne j}(\alpha_{i}x^{i} -\alpha_{j}x^{j})\prod_{i=1}^{d}(x^{i}-\nu_{i})=0\}.$$ Since $J$ is closed, for any $t>0$ and $(x,y)\not\in H_{t}$, the function $f$ (independent of $y$) is continuous in a neighborhood of $x$, which along with the argument in the proof of Theorem \[theorem 4.18.1\] shows that Assumption \[assumption 3.8.3\] is satisfied for $z^{n}_{t}$. Finally, for $$H^{m}\equiv H,\quad d_{m}=d,\quad v^{mi}(t,x,y)=x^{i}, \quad i=1,...,d,$$ we have $$v^{m}(H_{t})=\{x: x\in J\quad\text{or}\quad \prod_{i\ne j}(\alpha_{i}x^{i} -\alpha_{j}x^{j})\prod_{i=1}^{d}(x^{i}-\nu_{i})=0\}$$ which has $d$-dimensional Lebesgue measure zero and $$\det\,V^{nm}(t,x^{n}_{\cdot},y^{n}_{\cdot})= \det\,a^{n}(x^{n}_{t})\geq \alpha_{1}^{-1}\cdot...\cdot\alpha_{d}^{-1}>0.$$ Hence Assumption \[assumption 4.25.2\] is satisfied as well.
This along with precompactness of distributions of $(x^{n}_{\cdot},y^{n}_{\cdot})$ guaranteed by Theorem \[theorem 4.17.1\] and along with Theorem \[theorem 4.25.1\] shows that any convergent subsequence of distributions of $(x^{n}_{\cdot},y^{n}_{\cdot})$ converges to the distribution of a process $(x_{\cdot},y_{\cdot})$, whose first component satisfies (\[4.24.6\]) and the second one obeys $dy_{t}=f(x_{t})\,dt$. Thus, we get our assertion for a subsequence instead of the whole sequence. However, as we have noticed above, solutions of (\[4.24.6\]) are weakly unique and this obviously implies that solutions of the system (\[4.24.6\]) appended with $dy_{t}=f(x_{t})\,dt$ are also weakly unique. Therefore, the whole sequence of distributions of $(x^{n}_{\cdot},y^{n}_{\cdot})$ converges. An $L_{p}$ estimate {#section 3.14.1} =================== Let $d\geq1$ be an integer, $(\Omega,{\mathcal{F}},P)$ be a complete probability space, and $({\mathcal{F}}_{t},t\geq0)$ be an increasing filtration of $\sigma$-fields ${\mathcal{F}}_{t}\subset{\mathcal{F}}$ with ${\mathcal{F}}_{0}$ being complete with respect to $P,{\mathcal{F}}$. Let $K(r,t)$ and $L(r,t)$ be two nonnegative deterministic functions defined for $r,t>0$. Assume that they increase in $r$ and are locally integrable in $t$, so that $$\int_{0}^{T}(K(r,t)+L(r,t))\,dt<\infty \quad\forall r,T\in(0,\infty).$$ Let $\delta(t,x)$ be a nonnegative deterministic function defined for $t\geq0$ and $x\in{\mathbb{R}}^{d}$ and satisfying $\delta(t,x)\leq K(|x|,t)$. Define $A(t,x)$ as the set of all symmetric nonnegative $d\times d$-matrices $a$ such that $$\delta(t,x)|\lambda|^{2}\leq a^{ij}\lambda^{i}\lambda^{j} \leq K(|x|,t)|\lambda|^{2}\quad\forall\lambda\in{\mathbb{R}}^{d}.$$ Here, as well as everywhere in the article apart from Section \[section 4.18.1\], we use the summation convention.
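Concretely, the two-sided bound on the quadratic form of $a$ is a two-sided bound on its spectrum, so for fixed $(t,x)$ the set $A(t,x)$ is just the set of symmetric matrices with all eigenvalues in $[\delta(t,x),K(|x|,t)]$; the supremum of $a^{ij}v_{ij}$ over this set (the function $F$ defined next) is attained by taking eigenvalue $K$ on the nonnegative eigenspaces of $v$ and $\delta$ on the rest. A numerical sanity check, with arbitrary sample values for the bounds (not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta, K = 4, 0.5, 3.0  # illustrative sample values for delta(t,x) and K(|x|,t)

def in_A(a, tol=1e-9):
    # membership in A(t,x): all eigenvalues of the symmetric matrix a lie in [delta, K]
    eigs = np.linalg.eigvalsh(a)
    return eigs.min() >= delta - tol and eigs.max() <= K + tol

# a random symmetric v and the extremal value F = sum_i chi(lambda_i(v))
v = rng.standard_normal((d, d)); v = (v + v.T) / 2
lv, U = np.linalg.eigh(v)
F = float(np.sum(np.where(lv >= 0, K * lv, delta * lv)))

# the maximizer: K on nonnegative eigenspaces of v, delta on the rest
a_star = U @ np.diag(np.where(lv >= 0, K, delta)) @ U.T
assert in_A(a_star) and abs(np.trace(a_star @ v) - F) < 1e-9

# no member of A beats F
for _ in range(200):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    a = q @ np.diag(rng.uniform(delta, K, d)) @ q.T
    assert in_A(a) and np.trace(a @ v) <= F + 1e-9
```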
For any symmetric $d\times d$-matrix $v=(v_{ij})$ define $$F(t,x,v)=\sup_{a\in A(t,x)}a^{ij}v_{ij}.$$ As is easy to see, if $\lambda_{i}(v)$, $i=1,...,d$, are the eigenvalues of $v$ numbered in any order, then $$F(t,x,v)=\sum_{i=1}^{d}\chi(t,x,\lambda_{i}(v)),$$ where $\chi(t,x,\lambda)=K(|x|,t)\lambda$ for $\lambda\geq0$ and $\chi(t,x,\lambda) =\delta(t,x)\lambda$ for $\lambda\leq0$. Remember that $C^{\infty}_{0}({\mathbb{R}}^{d+1})$ is the set of all infinitely differentiable real-valued functions $u=u(t,x)$ on ${\mathbb{R}}^{d+1}$ with compact support. \[theorem 3.14.1\] Let $x_{t}$, $t\geq0$, be an ${\mathbb{R}}^{d}$-valued ${\mathcal{F}}_{t}$-adapted continuous process such that, for any $u\in C^{\infty}_{0}({\mathbb{R}}^{d+1})$, the following process is a local ${\mathcal{F}}_{t}$-supermartingale: $$\label{2.26.1} u(t,x_{t})-\int_{0}^{t}\big[ u_{s}(s,x_{s})+F(s,x _{s},u_{x x }(s,x_{s})) + L(x^{*}_{t},s)|u_{x}(s,x_{s})|\big]\,ds,$$ where $u_{x}$ is the gradient of $u$ with respect to $x$, $u_{xx}$ is the matrix of second-order derivatives $u_{x^{i}x^{j}}$ of $u$, $$u_{s}=\partial u/\partial s,\quad u_{x^{i}x^{j}}= \partial^{2}u/\partial x^{i}\partial x^{j}.$$ Then for any $r,T\in(0,\infty)$ there exists a constant $N<\infty$, depending only on $r,L(r,T)$, and $d$ (but not on $K(r,t)$), such that, for any nonnegative Borel $f(t,x)$, we have $$\label{2.26.2} E\int_{0}^{T\wedge\tau_{r}}\delta^{d/(d+1)} (t,x_{t})f(t,x_{t})\,dt \leq N||f||_{L_{d+1}([0,T]\times B_{r})},$$ where $$||f||_{L_{d+1}([0,T]\times B_{r})} =\big(\int_{0}^{T}\int_{|x|\leq r} f^{d+1}(t,x)\,dxdt\big)^{1/(d+1)}.$$ $B_{r}$ is the open ball in ${\mathbb{R}}^{d}$ of radius $r$ centered at the origin, and $\tau_{r}$ is the first exit time of $x_{t}$ from $B_{r}$. Proof. First of all notice that for any $u\in C^{\infty}_{0}({\mathbb{R}}^{d+1})$ expression (\[2.26.1\]) makes sense.
Indeed, if $r$ is such that $u(t,x)=0$ for $|x|\geq r$ and all $t$, then the integrand is bounded by a constant times $$\int_{0}^{t}[1+K(r,s)+L(x^{*}_{t}+r,s)]\,ds,$$ which is finite since each trajectory of $x_{s}$ is bounded on $[0,t]$. Also observe that usual approximation techniques allow us to concentrate only on the case of infinitely differentiable functions $f\geq0$ vanishing for $|x|\geq r$ for some $r$. We fix $r$, such a function $f$, and a nonnegative function $\zeta\in C^{\infty}_{0}({\mathbb{R}}^{d+1})$ with unit integral and support in the unit ball of ${\mathbb{R}}^{d+1}$ centered at the origin. Below, for any locally bounded Borel function $g(t,y)$ and $\varepsilon>0$ we use the notation $$g^{(\varepsilon)}=g*\zeta_{\varepsilon},\quad \text{where}\quad\zeta_{\varepsilon}(t,x)= \varepsilon^{-d-1}\zeta(t/\varepsilon,x/\varepsilon).$$ Next, we need Theorem 2 of [@Kr76], which states the following. There exist constants $\alpha=\alpha(d)>0$ and $N_{r}=N(r,d)<\infty$ and there exists a bounded Borel nonpositive function $z$ on ${\mathbb{R}}^{d+1}$ which is convex on $B_{2r} $ for each fixed $t$ and is such that, for each nonnegative symmetric matrix $a$, $$\label{2.26.3} \alpha (\det a )^{1/(d+1)}f^{(\varepsilon)} \le z^{(\varepsilon)}_{t} + a^{ij} z^{(\varepsilon)}_{x^{i}x^{j}} \quad\text{for}\quad\varepsilon\leq r,t\in{\mathbb{R}},|x|\leq r,$$ $$\label{2.26.5} |z^{(\varepsilon)}_{x}|\leq2r^{-1}|z^{(\varepsilon)}| \quad\text{for}\quad\varepsilon\leq r/2,t\in{\mathbb{R}},|x|\leq r,$$ $$\label{2.26.4} |z|\le N_{r}||f||_{L_{d+1}({\mathbb{R}}\times B_{r})} \quad\text{in}\quad{\mathbb{R}}\times B_{2r} .$$ Notice that in Theorem 2 of [@Kr76] there is a minus sign in front of $z^{(\varepsilon)}_{t}$. However, (\[2.26.3\]) is true as is, since one can replace $t$ with $-t$ and this does not affect any other term.
Observe that (\[2.26.4\]) obviously implies that for $\varepsilon\leq r$, we have $$\label{2.26.6} |z^{(\varepsilon)}|\le N_{r}||f||_{L_{d+1}({\mathbb{R}}\times B_{r})} \quad\text{in}\quad{\mathbb{R}}\times B_{r} .$$ Fix an $\varepsilon>0$. We claim that the process $$\begin{aligned} \label{2.27.1} \xi_{t}:=-z^{(\varepsilon)}(t\wedge\tau_{r}, x_{t\wedge\tau_{r}}) -\int_{0}^{t\wedge\tau_{r}}\big[- z^{(\varepsilon)}_{s}(s,x_{s}) \\ + F(s,x_{s},-z^{(\varepsilon)}_{x x }(s,x_{s})) +L(r,s)|z^{(\varepsilon)}_{x}(s,x_{s})|\big]\,ds\nonumber\end{aligned}$$ is a local supermartingale. To prove the claim it suffices to prove that (\[2.27.1\]) is a local supermartingale on $[0,T]$ for every $T\in[0,\infty)$. Fix a $T\in[0,\infty)$ and concentrate on $t\in[0,T]$. Change $-z^{(\varepsilon)}$ outside of $[0,T]\times B_{r}$ in any way with the only requirement that the new function, say $u$, belongs to $C^{\infty}_{0}({\mathbb{R}}^{d+1})$. Then the process (\[2.26.1\]) is a local supermartingale. Replacing $t$ with $t\wedge\tau_{r}$ yields a local supermartingale again. Also observe that subtracting an increasing continuous process from a local supermartingale preserves the property of being a local supermartingale. After noticing that for $0<s\leq t \wedge\tau_{r}\leq T$, we have $|x_{s}|\leq r$ and $L( x^{*}_{s},s) \leq L(r,s)$, we conclude that $$\eta_{t}:=u(t\wedge\tau_{r},x_{t\wedge\tau_{r}}) -\int_{0}^{t\wedge\tau_{r}}\big[- z^{(\varepsilon)}_{s}(s,x_{s})$$ $$+F(s,x_{s},-z^{(\varepsilon)}_{xx}(s,x_{s})) +L(r,s)|z^{(\varepsilon)}_{x}(s,x_{s})|\big]\,ds$$ is a local supermartingale on $[0,T]$. Since $$\eta_{t}-\xi_{t}=[u(0,x_{0})+z^{(\varepsilon)}(0,x_{0})] I_{\tau_{r}=0}$$ is a bounded martingale, (\[2.27.1\]) is a local supermartingale indeed. After having proved our claim we notice that for each $T\in[0,\infty)$ the process (\[2.27.1\]) is obviously bounded on $[0,T]$.
Therefore (\[2.27.1\]) is a supermartingale and $$E\xi_{T}I_{\tau_{r}>0}\leq E\xi_{0}I_{\tau_{r}>0} \leq\sup_{|x|\leq r}|z^{(\varepsilon)}(0,x)|,$$ which along with (\[2.26.6\]), (\[2.26.5\]), and the fact that $z\leq0$, yields that for any $\varepsilon\leq r/2$ $$E\int_{0}^{T\wedge\tau_{r}}\big[ z^{(\varepsilon)}_{s}(s,x_{s})- F(s,x_{s},-z^{(\varepsilon)}_{xx}(s,x_{s}))\big]\,ds$$ $$\leq N_{r}||f||_{L_{d+1}([0,T]\times B_{r})}\big(1 +2r^{-1} E\int_{0}^{T\wedge\tau_{r}}L(r,s)\,ds\big).$$ Here, owing to (\[2.26.3\]), $$z^{(\varepsilon)}_{s} - F(s,x,-z^{(\varepsilon)}_{xx} ) =\inf_{a\in A(s,x)}\big[z^{(\varepsilon)}_{s} +a^{ij}z^{(\varepsilon)}_{x^{i}x^{j}}\big]$$ $$\geq f^{(\varepsilon)}\alpha\inf_{a\in A(s,x)} (\det a)^{1/(d+1)} =f^{(\varepsilon)}\alpha \delta^{d/(d+1)} .$$ Hence $$E\int_{0}^{T\wedge\tau_{r}}\delta^{d/(d+1)} f^{(\varepsilon)}(s,x_{s}) \,ds \leq N ||f||_{L_{d+1}([0,T]\times B_{r})}$$ with $$N=N_{r}\alpha^{-1}\big(1 +2r^{-1}\int_{0}^{T}L(r,s)\,ds\big).$$ Finally, we let $\varepsilon\downarrow0$ and use the continuity of $f$ which guarantees that $f^{(\varepsilon)}\to f$. Then upon remembering that $f\geq0$ and using Fatou’s theorem, we arrive at (\[2.26.2\]) with the above specified $N$. The theorem is proved. [Actually, we did not use the continuity of $x_{t}$. We could have only assumed that $x_{t}$ is a separable measurable process. However, then it turns out that the assumption about the processes (\[2.26.1\]) implies that $x_{t}$ is continuous anyway and moreover that $x_{t}$ is an Itô process (see [@Kr1]). ]{} Yi-Ju Chao, Diffusion approximation of a sequence of semimartingales and its application in exploring the asymptotic behavior of some queueing networks, PhD thesis, University of Minnesota, June 1999. P.J. Fleming and B. Simon, Heavy traffic approximations for a system of infinite servers with load balancing, Probab. Engrg. Inform. Sci., 13 (1999) 251-273. N. Ikeda and S.
Watanabe, Stochastic differential equations and diffusion processes, North-Holland, Amsterdam-Oxford-New York, 1981. J. Jacod and A. Shiryayev, Limit theorems for stochastic processes, Grundlehren der math. Wissenschaften, A series of comprehensive studies in math., Springer-Verlag, New York, Berlin, Heidelberg, 1987. Ya. Kogan and R.S. Liptser, Limit non-stationary behavior of large closed queueing network with bottleneck, Queueing Systems, 14 (1993) 33–55. R. Khasminskii and N. Krylov, On averaging principle for diffusion processes with null-recurrent fast component, Stoch. Proc. Appl., Vol. 93, no. 2 (2001), 229-240. N.V. Krylov, On Itô’s stochastic integral equations, Teoriya Veroyatnostei i eye Primeneniya, Vol. 14, No.2 (1969), 340-348 in Russian; English translation in Theor. Probability Appl., 14 (1969) 330–336. N.V. Krylov, Some estimates of the probability density of a stochastic integral, Izvestija Akademii Nauk SSSR, serija matematicheskaja, Vol. 38, No. 1 (1974), 228–248 in Russian; English translation in Math. USSR Izvestija, Vol. 8 (1974) 233–254. N.V. Krylov, Sequences of convex functions and estimates of the maximum of the solution of a parabolic equation, Sibirskii Mat. Zhurn., 17 (1976) 290–303 in Russian; English translation in Siberian J. Math., 17 (1976) 226–236. N.V. Krylov, A supermartingale characterization of a set of stochastic integrals, Ukrain. Mat. Zh. 41, No. 6 (1989), 757–762 in Russian; English translation in Ukrainian Math. J. 41 (1990) 650–654. N.V. Krylov and B.L. Rozovsky, Stochastic evolution equations, in Itogy nauki i tekhniki, Vol. 14, VINITI, Moscow, 1979, 71-146 in Russian; English translation in J. Soviet Math., Vol. 16 (1981) 1233–1277. H.J. Kushner, Approximation and Weak Convergence Methods for Random Processes, with Applications to Stochastic Systems Theory, MIT Press, Cambridge, 1984. H.J. Kushner, Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering Problems, Birkhäuser, 1990. H.J.
Kushner and P.G. Dupuis, Numerical methods for stochastic control problems in continuous time, second edition, Springer Verlag, 2001. H.J. Kushner and W. Runggaldier, Nearly optimal state feedback controls for stochastic systems with wideband noise disturbances, SIAM J. on Control and Optimization, 25 (1987) 289–315. H.J. Kushner and W.J. Runggaldier, Filtering and control for wide bandwidth noise driven systems, IEEE Transactions on Automatic Control, AC-23 (1987) 123–133. T.G. Kurtz and P.E. Protter, Weak convergence of stochastic integrals and differential equations. II. Infinite-dimensional case. Probabilistic models for nonlinear partial differential equations (Montecatini Terme, 1995), 197–285, Lecture Notes in Math., 1627, Springer, Berlin, 1996. R.S. Liptser and W.J. Runggaldier, Non-linear filters for linear models (A robust approach), IEEE Transactions on Information Theory, 41 (1995) 1001–1009. R.S. Liptser, W.J. Runggaldier, and M. Taksar, Diffusion approximation and optimal stochastic control, Teor. Veroyatnost. i Primenen., 44 (1999) 705–737 in Russian. R.Sh. Liptser and A.N. Shiryayev, Theory of Martingales, Nauka, Moscow, 1986 in Russian; English translation by Kluwer Acad. Publ., Dordrecht, 1989. A.V. Skorokhod, Issledovaniya po teorii sluchainykh processov, Izd-vo Kievskogo Universiteta, Kiev, 1961 in Russian; English translation: Studies in the theory of random processes, Dover Publ. Inc., New York, 1982. D.W. Stroock and S.R.S. Varadhan, Multidimensional diffusion processes, Springer Verlag, Berlin-New York, 1979.
--- abstract: 'The Infrared Space Observatory [*(ISO)*]{} is used to carry out mid-IR (7 and 15 $\mu$m) and far-IR (90 $\mu$m) observations of a sample of star-forming sub-mJy radio sources. By selecting the sample at radio wavelengths, one avoids biases due to dust obscuration. It is found that the mid-IR luminosities, covering the PAH features, measure the star formation rate for galaxies with $P_{1.4 GHz} < 10^{23}$ W Hz$^{-1}$. This is further confirmed using the H$\alpha$ luminosities. The far-IR emission is also found to trace the SFR over the whole range of radio and H$\alpha$ luminosities. The implications of the mid-IR measurements for estimating the SFRs from the future infrared space missions (SIRTF and ASTRO-F) are discussed.' author: - Bahram Mobasher - José Afonso - Lawrence Cram title: 'ISO Observations of Star-forming Galaxies' --- Introduction ============ There now exist several measurements of the star formation rate (SFR) at different redshifts, based on UV [@1; @2; @3] and Balmer-line [@4; @5; @6; @7] studies, with the latter yielding estimates a factor of 2–3 times higher than the former, presumably because of differential dust extinction. These disagreements impede progress in understanding the evolution with redshift of the rates of star formation and heavy element production [@8]. The problem becomes more serious at high redshifts due to changes in the dust content of galaxies with look-back time. In particular, optically selected samples are likely to be biased against actively star-forming and dusty galaxies, leading to an underestimation of the SFR from these samples. Indeed, it has been shown that a large fraction of the bolometric luminosity emerges at far-IR wavelengths, with recent observations with the Infrared Space Observatory (ISO) showing that the contribution to the cosmic infrared background is dominated by infrared luminous galaxies. This confirms that most of the star formation, especially at high redshifts, is hidden in dusty environments.
Also, it is shown that different star-formation diagnostics give different SFRs even for the same galaxy. Therefore, to accurately trace the SFR, one needs to use as many [*independent*]{} star-formation diagnostics as possible. In this study, the sensitivity of the mid-IR fluxes (7–15 $\mu$m), covering the PAH features, to the star-formation activity in galaxies will be studied, using an unbiased sample of star-forming galaxies. The potential of this technique in measuring the SFRs at $z\sim 2$ is then discussed. Sample Selection ================ The sample for this study consists of sub-mJy radio sources, selected at radio (1.4GHz) wavelengths [@9], and hence is free from dust-induced selection biases. A total of 400 of these galaxies are then spectroscopically observed, with their redshifts measured and spectral features (H$\alpha$, MgII, etc.) identified [@10]. A sample of 65 radio sources was then observed with ISOCAM (7 and 15 $\mu$m) and ISOPHOT (90 $\mu$m) (Afonso et al. 2001, [*in preparation*]{}). The objects adopted for [*ISO*]{} observations are chosen to be sub-mJy radio sources, showing evidence for star-formation activity in their spectra and sufficiently bright at mid- to far-IR wavelengths (as predicted from their SEDs) to allow detection at these wavelengths. The number of radio sources in the [*ISO*]{} survey region, together with the number of galaxies with detections at the three [*ISO*]{} wavelengths, are listed in Table \[tab1\]. The ISOCAM pointed survey also resulted in the serendipitous detection of 26 sources for which no radio counterpart was found. These objects will not be discussed here. Details about the [*ISO*]{} observations and data reduction will be presented in a future paper (Afonso et al. 2001, [*in preparation*]{}).

  ----------- ------- -------
  Band        $N_s$   $N_d$
  7 $\mu$m    146     16
  15 $\mu$m   146     15
  90 $\mu$m   44      9
  ----------- ------- -------

  : Number of sources in the areas covered by both the [*ISO*]{} and radio surveys.
$N_s$ and $N_d$ denote, respectively, the number of radio sources over the area covered by the [*ISO*]{} observations (65 pointings for ISOCAM, and 44 for ISOPHOT) and the number of [*ISO*]{}-detected sources. \[tab1\] Results ======= The intrinsic luminosities at the [*ISO*]{} and radio wavelengths are estimated assuming $H_0 =65$ km/sec/Mpc. The K-corrections are applied, assuming a flat spectrum at the 7 and 15 $\mu$m wavelengths. For the 90 $\mu$m and 1.4GHz fluxes, a power-law SED of the form $f_\nu \propto \nu^{n}$ is assumed with spectral indices of $n=-2$ and $-0.7$, respectively. The ratio of the [*ISO*]{} (7, 15, 90 $\mu$m) to radio power as a function of the radio power for galaxies in the present sample is shown in Fig. \[fig1\]. Both the detections and upper limits are included in this diagram. Figure \[fig1\] is significant in that the lack of a trend here indicates that the radio and the mid-IR or far-IR luminosities measure the [*same*]{} quantity (i.e. star formation), whereas the presence of a trend implies that they are sensitive to [*different*]{} physical processes. ![Ratio of the [*ISO*]{} luminosities to the radio power as a function of the radio (1.4GHz) power. $L_{7\mu m}$, $L_{15\mu m}$ and $L_{90\mu m}$ are defined as $\nu P_\nu$ at the respective rest-frame wavelength and are given in units of $L_\odot$.[]{data-label="fig1"}](PLOTratio7_15_90vs14all.eps){width=".9\textwidth"} The $L_{7\mu m}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ and $L_{15\mu m}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ relations both show a slight trend for $P_{1.4\,{\rm GHz}} < 10^{23}$ W/Hz, followed by a steep slope at $P_{1.4\,{\rm GHz}} > 10^{23}$ W/Hz.
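The K-corrections described at the start of this section can be written compactly: for a power-law SED $f_\nu \propto \nu^{n}$, the rest-frame monochromatic power obtained from an observed flux density $S_\nu$ is $P_\nu = 4\pi d_L^2 S_\nu/(1+z)^{1+n}$ under one common sign convention, which is assumed here (the flat-spectrum mid-IR case corresponds to $n=0$). The luminosity distance is left as an input, since the paper only states $H_0 = 65$ km/s/Mpc and not the full cosmology:

```python
import math

# spectral indices from the text: flat (n = 0) at 7 and 15 um,
# n = -2 at 90 um and n = -0.7 at 1.4 GHz
SPECTRAL_INDEX = {"7um": 0.0, "15um": 0.0, "90um": -2.0, "1.4GHz": -0.7}

def k_corrected_power(S_nu, z, d_L, band):
    """Rest-frame monochromatic power (W/Hz) from an observed flux density
    S_nu (W m^-2 Hz^-1) at redshift z and luminosity distance d_L (m)."""
    n = SPECTRAL_INDEX[band]
    return 4.0 * math.pi * d_L ** 2 * S_nu / (1.0 + z) ** (1.0 + n)
```

For the radio index $n=-0.7$ the correction factor is $(1+z)^{-0.3}$; for the flat mid-IR spectrum it is just the bandwidth factor $(1+z)^{-1}$.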
The value of $10^{23}$ W/Hz corresponds to the characteristic radio power of the sub-mJy sources, where a change of slope is also found in the 1.4GHz luminosity function of star-forming galaxies [@13]. Assuming that the radio emission from galaxies is a measure of the synchrotron radiation due to relativistic electrons, produced by supernova remnants, and hence of their SFR [@11; @12], one concludes that for $ P_{1.4\,{\rm GHz}} < 10^{23}$ W/Hz, the mid-IR (7 and 15 $\mu$m) luminosity is sensitive to the star-formation activity. However, for objects with $ P_{1.4\,{\rm GHz}} > 10^{23}$ W/Hz, the PAH molecules are destroyed due to the strength of the photon field, resulting in a decrease in the mid-IR flux from galaxies. At the far-IR 90 $\mu$m wavelength, there is no significant trend on the $L_{90\mu m}/P_{1.4\,{\rm GHz}} - P_{1.4\,{\rm GHz}}$ diagram, confirming that both the far-IR and radio luminosities measure the same quantity (i.e. the SFR). These results are obtained using both the detections and upper limits. Using only the detections, the trend in the relation disappears at 15 $\mu$m while remaining the same at 7 $\mu$m. The above results are confirmed using the H$\alpha$ line luminosity (Figure \[fig2\]), which is a more direct measure of the star formation in galaxies. While there is a small trend on the $L_{7\mu m}/L_{H\alpha} - L_{H\alpha}$ relation for $L_{H\alpha} > 10^{34.8}$ W, the trend almost disappears for $L_{15\mu m}/L_{H\alpha} - L_{H\alpha}$ and is entirely absent on the $L_{90\mu m}/L_{H\alpha} - L_{H\alpha}$ relation. This implies an increase in the sensitivity to the star formation from 7 to 15 and 90 $\mu$m wavelengths, in agreement with the results from Figure \[fig1\].
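The operational content of the argument above can be summarised in a few lines: radio power is taken as a linear SFR tracer, while the mid-IR (PAH) luminosity can be trusted as a tracer only below the $10^{23}$ W/Hz threshold. The normalisation constant below is a placeholder, not a value from the text (the actual radio–SFR calibrations are in [@11; @12]), so only the relative scaling and the threshold logic are meaningful:

```python
PAH_SAFE_LIMIT = 1e23   # W/Hz; above this the PAH carriers are destroyed
C_RADIO_SFR = 1.0e21    # W/Hz per (Msun/yr): PLACEHOLDER normalisation only

def sfr_from_radio(P_14GHz):
    # schematic linear scaling SFR ∝ P_1.4GHz (normalisation is illustrative)
    return P_14GHz / C_RADIO_SFR

def mid_ir_traces_sfr(P_14GHz):
    # mid-IR (7 and 15 um) luminosity tracks the SFR only below the threshold
    return P_14GHz < PAH_SAFE_LIMIT
```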
![Ratio of the [*ISO*]{} mid- and far-IR luminosities to the H$\alpha$ luminosity as a function of the H$\alpha$ luminosity.[]{data-label="fig2"}](PLOTratio7_15_90vsHall.eps){width=".9\textwidth"} The results in this study can be used to establish new star-formation diagnostics, based on the 7 and 15 $\mu$m luminosities, and to calibrate them. This has significant implications for future mid-IR surveys with the SIRTF and ASTRO-F. For example, a 24 $\mu$m deep survey, planned with the SIRTF, can detect the rest-frame 7 and 15 $\mu$m emissions at $z= 2.4$ and $z=0.6$ respectively, allowing measurement of the SFRs between these redshifts. By selecting a sample at radio wavelengths, with follow-up mid-IR observations, one could then avoid dust-induced selection biases in estimating the SFR. Lilly, S. J., Le Fevre, O., Hammer, F., & Crampton, D. (1996) The Canada-France Redshift Survey: the luminosity density and star formation history of the Universe to $z \sim 1$. ApJ, 460, L1 Treyer, M. A., Ellis, R. S., Milliard, B., Donas, J., & Bridges, T. J. (1998) An Ultraviolet-Selected Galaxy Redshift Survey: new estimates of the local star formation rate. MNRAS, 300, 303 Connolly, A. J., Szalay, A. S., Dickinson, M., Subbarao, M. U., & Brunner, R. J. (1997) The Evolution of the Global Star Formation History as Measured from the Hubble Deep Field. ApJ, 486, L11 Gallego, J., Zamorano, J., Aragon-Salamanca, A., & Rego, M. (1995) The Current Star Formation Rate of the Local Universe. ApJ, 455, L1 Pettini, M. et al. (1997) ASP conference series Tresse, L., & Maddox, S. J. (1998) The H$\alpha$ Luminosity Function and Star Formation Rate at $z \sim 0.2$. ApJ, 495, 691 Glazebrook, K., Blake, C., Economou, F., Lilly, S., & Colless, M. (1999) Measurement of the Star Formation Rate from H$\alpha$ in field Galaxies at $z=1$. MNRAS, 306, 843 Ellis, R. (1998) The formation and evolution of galaxies. Nature, 395, 3 Hopkins, A. M., Mobasher, B., Cram, L., & Rowan-Robinson, M.
(1998) The PHOENIX Deep Survey: 1.4-GHz source counts. MNRAS, 296, 839 Georgakakis, A., Mobasher, B., Cram, L., Hopkins, A., Lidman, C., & Rowan-Robinson, M. (1999) The PHOENIX Survey: optical and near-infrared observations of faint radio sources. MNRAS, 306, 708 Cram, L., Hopkins, A., Mobasher, B., & Rowan-Robinson, M. (1998) Star Formation Rates in Faint Radio Galaxies. ApJ, 507, 155 Condon, J. J. (1992) Radio emission from normal galaxies. ARA&A, 30, 575 Mobasher, B., Cram, L., Georgakakis, A., & Hopkins, A. (1999) The 1.4GHz and H$\alpha$ Luminosity Functions and Star Formation Rates from Faint Radio Galaxies. MNRAS, 308, 45
--- abstract: 'A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the $i$th bit of an $n$-bit database replicated among two servers (which do not communicate) while not revealing any information about $i$ to either server. In this work we construct a 1-round 2-server PIR with total communication cost $n^{O({\sqrt{\log\log n/\log n}})}$. This improves over the currently known 2-server protocols which require $O(n^{1/3})$ communication and matches the communication cost of known $3$-server PIR schemes. Our improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.' author: - 'Zeev Dvir[^1]' - 'Sivakanth Gopi[^2]' bibliography: - 'bibliography.bib' title: '2-Server PIR with sub-polynomial communication' --- Introduction ============ Private Information Retrieval (PIR) was first introduced by Chor, Goldreich, Kushilevitz and Sudan [@ChorKGS98]. In a $k$-server PIR scheme, a user can retrieve the $i$th bit $a_i$ of an $n$-bit database $\ba=({a_1,\cdots,a_n})\in {{\{0,1\}}}^n$ replicated among $k$ servers (which do not communicate) while giving no information about $i$ to any server. The goal is to design PIR schemes that minimize the communication cost, which is the worst-case number of bits transferred between the user and the servers in the protocol. The trivial solution, which works even with one server, is to ask a server to send the entire database $\ba$, at communication cost $n$. When $k=1$ the trivial solution cannot be improved [@ChorKGS98]. But when $k\ge 2$, the communication cost can be brought down significantly. In [@ChorKGS98], a 2-server PIR scheme with communication cost $O(n^{1/3})$ and a $k$-server PIR scheme with cost ${O\left(k^2\log k\, n^{1/k}\right)}$ were presented.
The $k$-server PIR schemes were improved further in subsequent papers [@Ambainis97; @BeimelI01; @BeimelIKR02]. In [@BeimelIKR02], a $k$-server PIR scheme with cost $n^{{O\left(\frac{\log\log k}{k\log k}\right)}}$ was obtained. This was the best for a long time until the breakthrough result of Yekhanin [@Yekhanin08], who gave the first $3$-server scheme with sub-polynomial communication (assuming a number theoretic conjecture). Later, Efremenko [@Efremenko09] gave an unconditional $k$-server PIR scheme with sub-polynomial cost for $k\ge 3$, which was slightly improved in [@ItohS10] and [@CheeFLWZ13]. These new PIR schemes follow from the constructions of constant-query smooth Locally Decodable Codes (LDCs) of sub-exponential length called Matching Vector Codes (MVCs) [@DvirGY10]. A $k$-query LDC [@KT00] is an error correcting code which allows the receiver of a corrupted encoding of a message to recover the $i$th bit of the message using only $k$ (random) queries. In a [*smooth*]{} LDC, each query of the reconstruction algorithm is uniformly distributed among the code word symbols. Given a $k$-query smooth LDC, one can construct a $k$-server PIR scheme by letting each server simulate one of the queries. Despite the advances in $3$-server PIR schemes, the 2-server PIR case is still stuck at $O(n^{1/3})$, since 2-query LDCs provably require exponential size encoding [@KerenidisW03] (which translates to $\Omega(n)$ communication cost in the corresponding PIR scheme). For more information on the relation between PIR and LDC and the constructions of sub-exponential LDCs and sub-polynomial cost PIR schemes with more than 2 servers we refer to the survey [@Yekhanin12]. On the lower bounds side, very little is known. The best known lower bound for the communication cost of a 2-server PIR is $5\log n$ [@WehnerW05], whereas the trivial lower bound is $\log n$. In [@ChorKGS98], a lower bound of $\Omega(n^{1/3})$ is conjectured.
In [@RazborovY06], an $\Omega(n^{1/3})$ lower bound was proved for a restricted model of 2-server PIR called bilinear group based PIR. This model encompasses all the previously known constructions which achieve $O(n^{1/3})$ cost for 2-server PIR. We elaborate more on the relation between this model and our construction after we present our results below. PIR is extensively studied and there are several variants of PIR in the literature. The most important variant with cryptographic applications is called Computationally Private Information Retrieval (CPIR). In CPIR, the privacy guarantee is based on the computational hardness of certain functions, i.e., a computationally bounded server cannot gain any information about the user’s query. In this case, non-trivial schemes exist even in the case of one server under some cryptographic hardness assumptions. For more information on these variants of PIR see [@Gasarch; @Gasarch04; @Lipmaa]. In this paper, we are only concerned with information theoretic privacy, i.e., even a computationally unbounded server cannot gain any information about the user’s query; this is the strongest form of privacy. Our Results ----------- We start with a formal definition of a 2-server PIR scheme. A 2-server PIR scheme involves two servers $\cS_1$ and $\cS_2$ and a user $\cU$. A database $\ba=({a_1,\cdots,a_n})\in {{\{0,1\}}}^n$ is replicated between the servers $\cS_1$ and $\cS_2$. We assume that the servers cannot communicate with each other. The user $\cU$ wants to retrieve the $i$th bit of the database $a_i$ without revealing any information about $i$ to either server. The following definition is from [@ChorKGS98]: A 2-server PIR protocol is a triplet of algorithms $\cP=(\cQ,\cA,\cR)$. At the beginning, the user $\cU$ obtains a random string $r$. Next $\cU$ invokes $\cQ(i,r)$ to generate a pair of queries $(q_1,q_2)$. $\cU$ sends $q_1$ to $\cS_1$ and $q_2$ to $\cS_2$. Each server $\cS_j$ responds with an answer $ans_j=\cA(j,\ba,q_j)$.
Finally, $\cU$ computes its output by applying the recovery algorithm $\cR(ans_1,ans_2,i,r)$. The protocol should satisfy the following conditions: - For any $n$, $\ba\in {{\{0,1\}}}^n$ and $i\in [n]$, the user outputs the correct value of $a_i$ with probability 1 (where the probability is over the random strings $r$), i.e. $\cR(ans_1,ans_2,i,r)=a_i$ - Each server individually learns no information about $i$, i.e. for any fixed database $\ba$ and for $ j=1,2$, the distributions of $q_j(i_1,r)$ and $q_j(i_2,r)$ are identical for all $i_1,i_2\in [n]$ when $r$ is randomly chosen. The communication cost of the protocol is the total number of bits exchanged between the user and the servers in the worst case. $k$-server PIR is similarly defined, with the database replicated among $k$ servers which cannot communicate between themselves. We only defined 1-round PIR, i.e. there is only one round of interaction between the user and the servers. All known constructions of PIR schemes are 1-round and it is an interesting open problem to find out whether interaction helps. We now state our main theorem: \[mainthm\] There exists a 2-server PIR scheme with communication cost $n^{{O\left(\sqrt{\frac{\log\log n}{\log n}}\right)}}$. The definition of a 2-server PIR scheme can be generalized in an obvious manner to any number of servers. In [@Efremenko09] a $2^r$-server PIR scheme was given with $n^{{O\left(({\log\log n}/{\log n})^{1-1/r}\right)}}$ communication cost for any $r\ge 2$. Using our techniques, we can reduce the number of servers in this scheme by a factor of two. That is, we prove the following stronger form of Theorem \[mainthm\]. \[THM-kserver\] For any $r \geq 2$, there exists a $2^{r-1}$-server PIR scheme with communication cost $n^{{O\left(({\log\log n}/{\log n})^{1-1/r}\right)}}$. We note that the proof of Theorem \[THM-kserver\] actually allows the database symbols to be in the larger alphabet $\Z_m$, where $m$ is the composite over which we construct the MV family.
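As a concrete (if inefficient) illustration of the definition above, the classical XOR-based 2-server scheme of [@ChorKGS98] can be simulated in a few lines. The sketch below is our own illustration, not part of the formal development: the user sends a random subset of $[n]$ to one server and the same subset with $i$ toggled to the other; each query alone is a uniformly random subset, so privacy holds, while the cost is $O(n)$.

```python
import random

def query(i, r):
    """Queries from index i (0-based) and a random subset r of [n]."""
    return r, r ^ {i}           # second query toggles i (symmetric difference)

def answer(db, q):
    """Server's 1-bit answer: XOR of the database bits indexed by q."""
    x = 0
    for j in q:
        x ^= db[j]
    return x

def reconstruct(ans1, ans2):
    # the two query sets differ exactly in position i, so the XOR is a_i
    return ans1 ^ ans2

n = 16
db = [random.randint(0, 1) for _ in range(n)]
for i in range(n):
    r = {j for j in range(n) if random.random() < 0.5}
    q1, q2 = query(i, r)
    assert reconstruct(answer(db, q1), answer(db, q2)) == db[i]
```

Each query is a subset of $[n]$ ($n$ bits) and each answer is a single bit, so the total cost is $n+O(1)$; the schemes in this paper reduce this dramatically.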
There was some work on decreasing the $2^r$ query complexity of the construction of Matching Vector Codes in [@Efremenko09]. A query complexity of $9\cdot 2^{r-4}$ for $r\ge 6$ was achieved in [@ItohS10] while keeping the encoding length the same. This was improved in [@CheeFLWZ13] to $3^{\ceil{r/2}}$ for $2\le r\le 103$ and $(\frac{3}{4})^{51}\cdot 2^r$ for $r\ge 104$. Using these LDCs directly to get a PIR scheme will do better than our scheme when the number of servers is more than 26, whereas our scheme will do better than these when the number of servers is less than 9. Related work ------------ #### Polynomial lower bounds for bilinear group based PIR: In [@RazborovY06], an $\Omega(n^{1/3})$ lower bound was shown for a restricted model of 2-server PIR schemes. This lower bound holds for schemes that are both [*bilinear*]{} and [*group based*]{}. Our scheme can be made into a bilinear scheme[^3] (see section \[sec-overZm\]) over the field $\F_3$ of three elements. However, it does not satisfy the property of being group based as defined in [@RazborovY06]. Our scheme does satisfy a weaker notion of [*employing a group-based secret sharing scheme*]{} (another technical term defined in [@RazborovY06]). The difference between these two notions (of being group based as opposed to employing a group based secret sharing scheme) is akin to the difference between LCCs and LDCs (LCCs being the stronger notion). In group based PIR, the database is represented by the values of a function over a subset of a group but the user should be able to recover the value of that function at [*every*]{} group element. Our scheme encodes the database as a function over a group and the user will only be able to recover the bits of the database from the function. #### 2-query LDCs over large alphabet: The reader familiar with the exponential lower bounds for 2-query LDCs [@KerenidisW03] may wonder why our construction does not violate these bounds.
The reason is that, when one translates a 2-server PIR scheme into an LDC, the resulting alphabet of the code can be quite large. Formally, a scheme with communication cost $s$ will translate into an LDC $C : \{0,1\}^n \mapsto (\{0,1\}^s)^{2^s}$ (with the blocks corresponding to all possible answers by the servers). Thus, each one of the two queries used by the decoder is a string of $s$ bits. The known lower bounds for such LDCs are exponential only as long as $s \ll \log n$ and so our construction does not violate them. Hence, our main theorem also gives the first construction of a sub-exponential 2-query LDC over an alphabet of size $2^{n^{o(1)}}$. Proof Overview -------------- On a very high level, the new protocol combines the existing 2-server scheme of [@WoodruffY05], which uses polynomial interpolation with derivatives, with Matching Vector Codes (MV Codes) [@Yekhanin08; @Efremenko09]. In particular, we make use of the view of MV codes as polynomial codes, developed in [@DvirGY10]. This short overview is meant as a guide to the ideas in the construction and assumes some familiarity with [@WoodruffY05] and [@DvirGY10] (a detailed description will follow in the next sections). The 2-server scheme of [@WoodruffY05] works by embedding the database $\ba = (a_1,\ldots,a_n)$ as evaluations of a degree 3 polynomial $F(x_1,\ldots,x_k)$ over a small finite field $\F_q$ with $k \sim n^{1/3}$. To recover the value $a_i = F(P_i)$ the user passes a random line through the point $P_i \in \F_q^k$, picks two random points $Q_1,Q_2$ on that line and sends the point $Q_j$ to the $j$th server for $j=1,2$. Each server responds with the value of $F$ at the point it received and the values of all partial derivatives $\partial F / \partial x_\ell, \ell=1,\ldots,k$ at that point. The restriction of $F$ to the line is a univariate degree 3 polynomial and the user can recover the values of this polynomial at two points as well as the values of its derivative at these points.
These four values (two evaluations plus two derivatives) are enough to recover the polynomial and so its value at $P_i$. The key is that each server’s response only depends on the point it received (which is completely random). The user can compute the derivatives of the restricted polynomial from these values (knowing the line equation). To see how MV codes come into the picture we have to describe them in some detail. An MV family is a pair of lists $\cU = (\bu_1,\ldots,\bu_n)$, $\cV = (\bv_1,\ldots,\bv_n)$ with each list element $\bu_i$ and $\bv_j$ belonging to $\Z_m^k$, where $m$ is a small integer. These lists must satisfy the condition that ${\langle \bu_i,\bv_j \rangle}$ (taken mod $m$) is zero iff $i=j$. When $m$ is a composite, one can construct such families of vectors of size $n = k^{\omega(1)}$ [@Grolmusz99] (this is impossible if $m$ is prime). From such a family we can construct an $m$-query LDC as follows: given a message $\ba = (a_1,\ldots,a_n) \in \{0,1\}^n$ define the polynomial $F(x_1,\ldots,x_k) = \sum_{i=1}^n a_i\bx^{\bu_i}$ (we denote $\bx^\bc = x_1^{c_1}\ldots x_k^{c_k}$). We think of $F$ as a polynomial with coefficients in some finite field $\F_q$ containing an element $\gamma \in \F_q$ of order $m$. The final encoding of $\ba$ is the evaluations of $F$ over all points in $\F_q^k$ of the form $\gamma^{\bc} = (\gamma^{c_1},\ldots,\gamma^{c_k})$ for all $\bc \in \Z_m^k$. To recover $a_i$ in a ‘smooth’ way, we pick a random $\bz \in \Z_m^k$ and consider the restriction of $F$ to the ‘multiplicative line’ given by $L = \{ \gamma^{\bz + t \bv_i}\,|\, t \in \Z_m\}$. That is, we denote $G(t) = F(\gamma^{\bz + t \bv_i})$. In [@DvirGY10] it was observed that this restriction can be seen as a polynomial $g(T)$ of degree at most $m-1$ in the new ‘variable’ $T = \gamma^t$ and so can be reconstructed from the $m$ values on the line $g(\gamma^t) = G(t), t = 0,1,\ldots,m-1$.
The final observation is that $g(0)$ is a nonzero multiple of $a_i$ (since the only contribution to the free coefficient comes from the monomial $a_i\bx^{\bu_i}$) and so we can recover it if we know $g(T)$. Our new protocol combines these two constructions by using the MV code construction and then asking each server for the evaluations of $F$ at a point, as well as the values of a certain differential operator (similar to first order derivatives) at these points. For this to work we need two ingredients. The first is to replace the field $\F_q$ with a certain ring which has characteristic $m$ and an element of order $m$ (we only use $m=6$ and can take the polynomial ring $\Z_m[\gamma]/(\gamma^6 - 1)$). The second is an observation that, in known MV family constructions [@Grolmusz99], the inner products ${\langle \bu_i,\bv_j \rangle}$ that are nonzero (that is, when $i \neq j$) can be made to fall in a small set. More precisely, over $\Z_6$, the inner products are either zero or in the set $\{1,3,4\}$. This means that the restricted polynomial only has nonzero coefficients corresponding to powers of $T$ coming from the set $\{0,1,3,4\}$. Such a polynomial has four degrees of freedom and can be recovered from two evaluations and two derivatives (of order one). We are also able to work with arbitrary MV families by using second order derivatives at two points (which are sufficient to recover a degree 5 polynomial). Organization ------------ In Section \[preliminaries\] we give some preliminary definitions and notations. In Section \[oldconstruction\], we review the construction of a 2-server PIR scheme with $O(n^{1/3})$ communication cost which is based on polynomial interpolation with partial derivatives [@WoodruffY05]. In Section \[newconstruction\], we present our new construction of sub-polynomial 2-server PIR schemes and some of its variants. Then, in Section \[sec-kserver\] we analyze the generalization to more servers.
We conclude in Section \[sec-conclude\] with some remarks on future directions. Preliminaries ============= #### Notations: We will use bold letters like $\bu,\bv,\bz$ etc. to denote vectors. The inner product between two vectors $\bu=({u_1,\cdots,u_k}),\bv=({v_1,\cdots,v_k})$ is denoted by ${\langle \bu,\bv \rangle}=\sum_{i=1}^k u_iv_i$. For a commutative ring $\cR$ we will denote by $\cR[{x_1,\cdots,x_k}]$ the ring of polynomials in formal variables $x_1,\ldots,x_k$ with coefficients in $\cR$. We will use the notation $\bx^\bz$ with $\bx=({x_1,\cdots,x_k}),\ \bz=({z_1,\cdots,z_k}) \in \Z^k$ to denote the monomial $\prod_{i=1}^k x_i^{z_i}$. So any polynomial $F(\bx)\in \cR[{x_1,\cdots,x_k}]$ can be written as $F(\bx)=\sum_{\bz} c_{\bz}\bx^{\bz}$. $\Z_m=\Z/m\Z$ is the ring of integers modulo $m$. When $\bu\in \Z_m^k$, $\bx^\bu$ denotes $\bx^{\tilde{\bu}}$ where $\tilde{\bu}\in {\{0,1,\cdots,m-1\}}^k$ is the unique vector such that $\bu\equiv \tilde{\bu} \mod m$. $\F_q$ denotes the finite field of size $q$. The rings $\cR_{m,r}$ --------------------- For our construction it will be convenient (although not absolutely necessary, see Section \[sec-overZm\]) to work over a ring which has characteristic $6$ and contains an element of order $6$. We now discuss how to construct such a ring in general. Let $m>1$ be an integer and let $\gamma$ be a formal variable. We denote by $$\cR_{m,r} = \Z_m[\gamma]/(\gamma^r-1)$$ the ring of univariate polynomials $\Z_m[\gamma]$ in $\gamma$ with coefficients in $\Z_m$ modulo the identity $\gamma^r = 1$.[^4] More formally, each element $f \in \cR_{m,r}$ is represented by a degree $\leq r-1$ polynomial $f(\gamma) = \sum_{\ell=0}^{r-1}c_\ell \gamma^\ell$ with coefficients $c_i \in \Z_m$. Addition is done as in $\Z_m[\gamma]$ (coordinate wise modulo $m$) and multiplication is done over $\Z_m[\gamma]$ but using the identity $\gamma^r=1$ to reduce higher order monomials to degree $\leq r-1$. 
It is easy to see that this reduction is uniquely defined: to obtain the coefficient of $\gamma^\ell$ we sum all the coefficients of powers of $\gamma$ that are of the form $\ell + kr$ for some integer $k \geq 0$. This implies the following lemma. \[nonzerolemma\] Let $f = \sum_{\ell=0}^{r-1}c_\ell \gamma^\ell$ be an element in $\cR_{m,r}$. Then, $f=0$ in the ring $\cR_{m,r}$ iff $c_\ell=0$ (in $\Z_m$) for all $0 \leq \ell \leq r-1$. \[nonzerodivisorgammapower\] For any $t\in{\{0,1,\cdots,r-1\}}$, $\gamma^t$ is not a zero divisor of the ring $\cR_{m,r}$. This holds since the coefficients of $\gamma^t \cdot f(\gamma)$ are the same as those of $f(\gamma)$ (shifted cyclically by $t$ positions). Matrices over Commutative Rings ------------------------------- Let $\cR$ be a commutative ring (with unity). Let $M\in \cR^{n\times n}$ be an $n\times n$ matrix with entries from $\cR$. Most of the classical theory of determinants can be derived in this setting in exactly the same way as over fields. One particularly useful piece of this theory is the Adjugate (or Classical Adjoint) matrix. For an $n \times n$ matrix $M \in \cR^{n \times n}$ the Adjugate matrix is denoted by ${\mathrm{adj}}(M) \in \cR^{n \times n}$ and has the $(j,i)$ cofactor of $M$ as its $(i,j)$th entry (recall that the cofactor is the determinant of the matrix obtained from $M$ after removing the $i$th row and $j$th column, multiplied by $(-1)^{i+j}$). A basic fact in matrix theory is the following identity. \[adjointlemma\] Let $M \in \cR^{n \times n}$ with $\cR$ a commutative ring with identity. Then $M\cdot {\mathrm{adj}}(M)={\mathrm{adj}}(M)\cdot M=\det(M)\cdot I_n$ where $I_n$ is the $n\times n$ identity matrix. The way we will use this fact is as follows: \[remark-adjugate\] Suppose $M\in \cR^{n \times n}$ has non-zero determinant and let $\ba = (a_1,\ldots,a_n)^t \in \cR^n$ be some column vector where $a_1=0$ or $a_1=c$, where $c$ is not a zero-divisor.
Then we can determine the value of $a_1$ (i.e., tell whether it is $0$ or $c$) from the product $M \cdot \ba $. The way to do it is to multiply $M \cdot \ba$ from the left by ${\mathrm{adj}}(M)$ and to look at the first entry. This will give us $\det(M) \cdot a_1$ which is zero iff $a_1$ is (since $\det(M) \cdot c$ is always nonzero). Matching Vector Families ------------------------ Let $S\subset\Z_m\setminus{\{0\}}$ and let $\cF=(\cU,\cV)$ where $\cU=({\bu_1,\cdots,\bu_n}),\cV=({\bv_1,\cdots,\bv_n})$ and $\forall i\ \bu_i,\bv_i \in \Z_m^k$. Then $\cF$ is called an $S$-matching vector family of size $n$ and dimension $k$ if $\forall\ i,j$, $$\begin{aligned} {\langle \bu_i,\bv_j \rangle}\begin{cases} = 0 & \mbox{if } i=j\\ \in S & \mbox{if } i\ne j \end{cases}\end{aligned}$$ If $S$ is omitted, it implies that $S=\Z_m\setminus{\{0\}}$. \[Grolmusz\] Let $m=p_1p_2\cdots p_r$ where $p_1,p_2,\cdots,p_r$ are distinct primes with $r\ge 2$; then there exists an explicitly constructible $S$-matching vector family $\cF$ in $\Z_m^k$ of size $n\ge {\exp\left(\Omega\left(\frac{(\log k)^r}{(\log\log k)^{r-1}}\right)\right)}$ where $S={\{a\in \Z_m: a\mod p_i \in {\{0,1\}}\ \forall\ i \in[r]\}}\setminus {\{0\}}$. \[CRT\] The size of $S$ in the above theorem is $2^r-1$ by the Chinese Remainder Theorem. Thus, there are matching vector families of size super-polynomial in the dimension of the space with inner products restricted to a set of size $2^r = |S \cup \{0\}|$. In the special case when $p_1=2,p_2=3$, we have $m=6$ and the following corollary: \[Grolmuszmod6\] There is an explicitly constructible $S$-matching vector family $\cF$ in $\Z_6^k$ of size $n\ge {\exp\left(\Omega\left(\frac{(\log k)^2}{\log\log k}\right)\right)}$ where $S={\{1,3,4\}}\subset \Z_6$. A number theoretic lemma ------------------------ We will need the following simple lemma. Recall that the [*order*]{} of an element $a$ in a finite multiplicative group $G$ is the smallest integer $w \geq 1$ so that $a^w=1$.
\[lem-order\] Let $\F_p$ be a field of prime order $p$ and let $k \geq 1$ be an integer co-prime to $p$. Then, the algebraic closure of $\F_p$ contains an element $\zeta$ of order $k$. Since $k,p$ are co-prime, $p\in \Z_k^*$, which is the multiplicative group of invertible elements in $\Z_k$. Let $w \geq 1$ be the order of $p$ in the group $\Z_k^{*}$, so $k$ divides $p^w-1$. Consider the extension field $\F_{p^w}$, which is a subfield of the algebraic closure of $\F_p$. The multiplicative group $\F_{p^w}^*$ of this field is a cyclic group of size $p^w - 1$. Since $k$ divides this size, there must be an element in $\F_{p^w}$ of order $k$. Review of $O(n^{1/3})$ cost 2-server PIR {#oldconstruction} ======================================== There are several known constructions of 2-server PIR with $O(n^{1/3})$ communication cost. We will recall here in detail a particular construction due to [@WoodruffY05] which uses polynomial interpolation with derivatives (over a field). In the next section we will replace the field with a ring and see how to use matching vector families to reduce the communication cost. Let $\ba=({a_1,\cdots,a_n})$ be the database and choose $k$ to be the smallest integer such that $n \le \binom{k}{3} $. Let $\F_q$ be a finite field with $q > 3$ elements and suppose for simplicity that $q$ is prime (so that partial derivatives behave nicely for polynomials of degree at most $3$). Let $\phi:[n]\mapsto {\{0,1\}}^k\subset \F_q^k$ be an embedding of the $n$ coordinates into points in ${\{0,1\}}^k$ of Hamming weight 3. Such an embedding exists since $n\le \binom{k}{3}$. Define $F({x_1,\cdots,x_k})=F(\bx) \in\F_q[x_1,\cdots,x_k]$ as $$F(\bx)=\sum_{i=1}^n a_i\left( \prod_{j: \phi(i)_j=1}x_j\right)$$ Note that $F(\bx)$ is a degree 3 polynomial satisfying $F(\phi(i))=a_i\ \forall\ i\in [n]$. Fix any two nonzero field elements $t_1 \neq t_2 \in \F_q \setminus \{0\}$. Suppose the user $\cU$ wants to recover the bit $a_\tau$.
The protocol is as follows: The user picks a uniformly random element $\bz\in \F_q^k$ and sends $\phi(\tau)+t_1\bz$ to $\cS_1$ and $\phi(\tau)+t_2\bz$ to $\cS_2$. Each server $\cS_i$ then replies with the value of $F$ at the point received $F(\phi(\tau)+t_i\bz)$ as well as the values of the $k$ partial derivatives of $F$ at the same point $$\nabla F(\phi(\tau)+t_i\bz)=\left(\frac{\partial F}{\partial z_1}(\phi(\tau)+t_i\bz),\cdots,\frac{\partial F}{\partial z_k}(\phi(\tau)+t_i\bz)\right)$$ The partial derivatives here are defined in the same way as for polynomials over the real numbers. $$\begin{aligned} \cU &: \text{picks a uniformly random } \bz\in \F_q^k\\ \cU \rightarrow \cS_i &: \phi(\tau)+t_i\bz\\ \cS_i \rightarrow \cU &: F(\phi(\tau)+t_i\bz),\ \nabla F(\phi(\tau)+t_i\bz)\end{aligned}$$ The protocol is private since $\phi(\tau)+t\bz$ is uniformly distributed in $\F_q^k$ for any $\tau$ and $t\ne 0$. Consider the univariate polynomial $$g(t)=F(\phi(\tau)+t\bz).$$ Observe that, by the chain rule, $$g'(t)={\langle \nabla F(\phi(\tau)+t\bz),\bz \rangle}.$$ Thus the user can recover the values $g(t),g'(t)$ for $t=t_1,t_2$ from the servers’ responses. From this information the user needs to find $g(0)=F(\phi(\tau))=a_\tau$. Since $F$ is a degree 3 polynomial, $g(t)$ is a univariate degree 3 polynomial; let $g(t)=\sum_{\ell=0}^3 c_\ell t^\ell$. Therefore we have the following matrix equation: $$\begin{aligned} \left[ \begin{matrix} g(t_1)\\ g'(t_1)\\ g(t_2)\\ g'(t_2)\\ \end{matrix} \right] = \left[ \begin{matrix} 1&t_1&t_1^2&t_1^3\\ 0&1&2t_1&3t_1^2\\ 1&t_2&t_2^2&t_2^3\\ 0&1&2t_2&3t_2^2\\ \end{matrix} \right] \left[ \begin{matrix} c_0\\ c_1\\ c_2\\ c_3 \end{matrix} \right] =M \left[ \begin{matrix} c_0\\ c_1\\ c_2\\ c_3 \end{matrix} \right]\end{aligned}$$ The matrix $M$ has determinant $\det(M)=(t_2-t_1)^4$ and so $M$ is invertible as long as $t_1 \neq t_2$. Thus the user can find $c_0=g(0)=F(\phi(\tau)) = a_\tau$ by multiplying by the inverse of $M$.
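The recovery step can be checked numerically. The sketch below is our own illustration (with the hypothetical choices $q=7$, $t_1=1$, $t_2=2$): it builds the matrix $M$ from the text and recovers the coefficients of a degree-3 polynomial from two values and two derivative values by Gaussian elimination modulo $q$.

```python
q = 7                      # a small prime field F_q with q > 3
t1, t2 = 1, 2              # two fixed nonzero evaluation points

def g(c, t):               # evaluate g(t) = sum_l c_l t^l mod q
    return sum(cl * pow(t, l, q) for l, cl in enumerate(c)) % q

def gprime(c, t):          # formal derivative g'(t) mod q
    return sum(l * cl * pow(t, l - 1, q) for l, cl in enumerate(c) if l > 0) % q

# The 4x4 interpolation matrix M from the text.
M = [[1, t1, t1**2, t1**3],
     [0, 1, 2 * t1, 3 * t1**2],
     [1, t2, t2**2, t2**3],
     [0, 1, 2 * t2, 3 * t2**2]]

def solve_mod(M, b, q):
    """Gaussian elimination over F_q (q prime); returns x with Mx = b."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % q != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, q)            # modular inverse (Python 3.8+)
        A[col] = [x * inv % q for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] % q:
                f = A[r][col]
                A[r] = [(x - f * y) % q for x, y in zip(A[r], A[col])]
    return [row[n] for row in A]

c = [5, 2, 0, 3]           # some degree-3 polynomial; c[0] plays the role of a_tau
b = [g(c, t1), gprime(c, t1), g(c, t2), gprime(c, t2)]
rec = solve_mod(M, b, q)
assert rec == [x % q for x in c]   # in particular rec[0] = g(0) = a_tau
```

Note that $\det(M)=(t_2-t_1)^4=1$ here, so the system is always solvable.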
The communication cost of this protocol is $O(k) = O(n^{1/3})$ since the user sends a vector in $\F_q^k$ to each server and each server sends an element in $\F_q$ and a vector in $\F_q^k$ to the user. The new 2-server scheme {#newconstruction} ======================= In this section we describe our main construction which proves Theorem \[mainthm\]. Before describing the construction we set up some of the required ingredients and notations. The first ingredient is a matching vector family over $\Z_6$ as in Corollary \[Grolmuszmod6\]. That is, we construct an $S = \{1,3,4\}$-matching vector family $\cF = (\cU,\cV)$ where $\cU=({\bu_1,\cdots,\bu_n}),\cV=({\bv_1,\cdots,\bv_n})$ have elements in $\Z_6^k$. Corollary \[Grolmuszmod6\] tells us that this can be done with $n=\exp(\Omega(\log^2 k/\log\log k))$ or $k=\exp({O\left(\sqrt{\log n\ \log\log n}\right)})$. We will work with polynomials over the ring $$\cR = \cR_{6,6}=\Z_6[\gamma]/(\gamma^6-1)$$ (see Section \[preliminaries\]). We will denote the vector $(\gamma^{z_1},\gamma^{z_2},\cdots,\gamma^{z_k})$ by $\gamma^\bz$ where $\bz=({z_1,\cdots,z_k}) \in \Z_6^k$. We will need to extend the notion of partial derivatives to polynomials in $\cR[x_1,\ldots,x_k]$. This will be a non-standard definition, but it will satisfy all the properties we will need. Instead of defining each partial derivative separately, we define one operator that will include all of them. Let $\cR$ be a commutative ring and let $F(\bx)=\sum c_{\bz}\bx^{\bz} \in \cR[x_1,\ldots,x_k]$.
We define $F^{(1)} \in (\cR^k)[x_1,\ldots,x_k]$ to be $$\begin{aligned} F^{(1)}(\bx)&:=\sum (c_{\bz}\cdot \bz) \bx^{\bz} \end{aligned}$$ For example, when $F(x_1,x_2)=x_1^2x_2+4x_1x_2+3x_2^2$ (with integer coefficients), $$F^{(1)}(x_1,x_2)={\left[\begin{matrix} 2\\ 1\\ \end{matrix}\right]}x_1^2x_2+{\left[\begin{matrix} 4\\ 4\\ \end{matrix}\right]}x_1x_2+{\left[\begin{matrix} 0\\ 6\\ \end{matrix}\right]}x_2^2$$ One can think of $F^{(1)}$ both as a polynomial with coefficients in $\cR^k$ as well as a $k$-tuple of polynomials in $\cR[x_1,\ldots,x_k]$. This will not matter much since the only operation we will perform on $F^{(1)}$ is to evaluate it at a point in $\cR^k$. #### The Protocol: Let $\ba=(a_1,a_2,\cdots,a_n)\in {{\{0,1\}}}^n$ be an $n$-bit database shared by two servers $\cS_1$ and $\cS_2$. The user $\cU$ wants to find the bit $a_\tau$ without revealing any information about $\tau$ to either server. For the rest of this section, $\cR = \cR_{6,6} = \Z_6[\gamma]/(\gamma^6-1)$. The servers represent the database as a polynomial $F(\bx)\in \mathcal{R}[\bx]=\mathcal{R}[x_1,\cdots,x_k]$ given by $$F(\bx)=F(x_1,\cdots,x_k)=\sum_{i=1}^n a_i \bx^{\bu_i},$$ where $\cU = (\bu_1,\ldots,\bu_n)$ are given by the matching vector family $\cF = (\cU,\cV)$. The user samples a uniformly random $\bz\in \Z_6^k$ and then sends $\bz+t_1\bv_\tau$ to $\cS_1$ and $\bz+t_2\bv_\tau$ to $\cS_2$ where we fix $t_1=0$ and $t_2=1$ (other choices of values would also work). $\cS_i$ then responds with the value of $F$ at the point $\bgam^{\bz+t_i\bv_\tau}$, that is with $F(\bgam^{\bz+t_i\bv_\tau})$, and the value of the ‘first order derivative’ at the same point $F^{(1)}(\bgam^{\bz+t_i\bv_\tau})$. Notice that the protocol is private since $\bz+t\bv_\tau$ is uniformly distributed over $\Z_6^k$ for any fixed $\tau$ and $t$.
$$\begin{aligned} \cU &: \text{picks a uniformly random } \bz\in \Z_6^k\\ \cU \rightarrow \cS_i &: \bz+t_i\bv_\tau\\ \cS_i \rightarrow \cU &: F(\bgam^{\bz+t_i\bv_\tau}),\ F^{(1)}(\bgam^{\bz+t_i\bv_\tau})\end{aligned}$$ #### Recovery: Define $$G(t):=F(\bgam^{\bz+t\bv_\tau}) =\sum_{i=1}^n a_i \gamma^{{\langle \bz,\bu_i \rangle}+t{\langle \bv_\tau,\bu_i \rangle}}$$ Using the fact that $\gamma^6=1$, we can rewrite $G(t)$ as: $$G(t)=\sum_{\ell=0}^{5} c_\ell \cdot \gamma^{t\ell},$$ with each $c_\ell \in \cR$ given by $$c_\ell=\sum_{i:{\langle \bu_i,\bv_\tau \rangle} =\ell\mod 6}a_i \gamma^{{\langle \bz,\bu_i \rangle}}.$$ Since $${\langle \bu_i,\bv_\tau \rangle}\mod 6 \,\, \begin{cases} = 0 &\mbox{if}\ i=\tau\\ \in S={\{1,3,4\}} &\mbox{if}\ i\ne \tau \end{cases}$$ we can conclude that $c_0=a_\tau\gamma^{{\langle \bu_\tau,\bz \rangle}}$ and $c_2=c_5=0$. Therefore $$G(t)=c_0+c_1\gamma^t+c_3\gamma^{3t}+c_4\gamma^{4t}.$$ Next, consider the polynomial $$g(T) = c_0+c_1T+c_3T^3+c_4T^4\in \mathcal{R}[T].$$ By definition we have $$\begin{aligned} g(\gamma^t)=G(t)=F(\bgam^{\bz+t\bv_\tau})\\ g^{(1)}(\gamma^t)=\sum_{\ell=0}^5 \ell c_\ell \gamma^{t\ell}={\langle F^{(1)}(\bgam^{\bz+t\bv_\tau}),\bv_\tau \rangle},\end{aligned}$$ where the last equality holds since $c_2=c_5=0$ and $$\begin{aligned} {\langle F^{(1)}(\bgam^{\bz+t\bv_\tau}),\bv_\tau \rangle}&=\left\langle\sum_{i=1}^n a_i \bu_i\gamma^{{\langle \bz,\bu_i \rangle}+t{\langle \bv_\tau,\bu_i \rangle}},\bv_\tau\right\rangle\\ &= \sum_{i=1}^n a_i{\langle \bu_i,\bv_\tau \rangle} \gamma^{{\langle \bz,\bu_i \rangle}+t{\langle \bv_\tau,\bu_i \rangle}}\\ &= \sum_{\ell=0}^{5} \ell \left(\sum_{i:{\langle \bu_i,\bv_\tau \rangle} =\ell \mod 6}a_i \gamma^{{\langle \bz,\bu_i \rangle}}\right)\gamma^{t\ell}=\sum_{\ell=0}^5 \ell c_\ell \gamma^{t\ell}\end{aligned}$$ So the user can find the values of $g(\gamma^t),g^{(1)}(\gamma^t)$ for $t=t_1,t_2$.
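These identities can be checked on a toy instance. The sketch below is our own illustration: it uses a hypothetical 2-element matching vector family over $\Z_6$ (with $S=\{1\}$ rather than Grolmusz's $\{1,3,4\}$; the identities checked do not depend on $S$) and represents elements of $\cR_{6,6}$ by their 6 coefficients.

```python
# Elements of R = Z_6[gamma]/(gamma^6 - 1) are lists of 6 coefficients mod 6.
def gpow(e):                       # the ring element gamma^e
    v = [0] * 6
    v[e % 6] = 1
    return v

def radd(f, g):
    return [(a + b) % 6 for a, b in zip(f, g)]

def rscale(s, f):                  # multiplication by an integer scalar
    return [(s * a) % 6 for a in f]

def rmulgpow(f, e):                # multiplication by gamma^e is a cyclic shift
    return [f[(l - e) % 6] for l in range(6)]

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

# A hypothetical 2-element matching vector family over Z_6 (ours, for illustration).
U = [(1, 0), (0, 1)]
V = [(0, 1), (1, 0)]
a = [1, 1]                         # a 2-bit database
tau, z = 0, (4, 2)                 # target index and the user's random shift

def G(t):                          # G(t) = F(gamma^(z + t*v_tau)), a ring element
    out = [0] * 6
    for i in range(len(a)):
        out = radd(out, rscale(a[i], gpow(ip(z, U[i]) + t * ip(V[tau], U[i]))))
    return out

# The coefficients c_l from the text.
c = [[0] * 6 for _ in range(6)]
for i in range(len(a)):
    l = ip(U[i], V[tau]) % 6
    c[l] = radd(c[l], rscale(a[i], gpow(ip(z, U[i]))))

# c_0 = a_tau * gamma^<u_tau, z>
assert c[0] == rscale(a[tau], gpow(ip(U[tau], z)))

for t in range(6):
    # G(t) = sum_l c_l * gamma^(t*l)
    lhs = [0] * 6
    for l in range(6):
        lhs = radd(lhs, rmulgpow(c[l], t * l))
    assert lhs == G(t)
    # sum_l l * c_l * gamma^(t*l) = <F^(1)(gamma^(z + t*v_tau)), v_tau>
    d1 = [0] * 6
    for l in range(6):
        d1 = radd(d1, rscale(l, rmulgpow(c[l], t * l)))
    w = tuple(z[j] + t * V[tau][j] for j in range(2))
    rhs = [0] * 6
    for i in range(len(a)):
        rhs = radd(rhs, rscale(a[i] * ip(U[i], V[tau]), gpow(ip(w, U[i]))))
    assert d1 == rhs
```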
Since $t_1=0,t_2=1$, we obtain the following matrix equation: $$\begin{aligned} \left[ \begin{matrix} g(1)\\ g^{(1)}(1)\\ g(\gamma)\\ g^{(1)}(\gamma)\\ \end{matrix} \right] = \left[ \begin{matrix} 1&1&1&1\\ 0&1&3&4\\ 1 &\gamma &\gamma^3& \gamma^4 \\ 0 &\gamma &3\gamma^3 &4\gamma^4 \end{matrix} \right] \left[ \begin{matrix} c_0\\ c_1\\ c_3\\ c_4 \end{matrix} \right] =M \left[ \begin{matrix} c_0\\ c_1\\ c_3\\ c_4 \end{matrix} \right]\end{aligned}$$ The determinant (over $\cR$) of the matrix $M$ is $$\label{determinant} \det(M)=\gamma(\gamma-1)^4(\gamma^2+4\gamma+1)=3\gamma^5+4\gamma^4+3\gamma^3+2\gamma$$ and so, by Lemma \[nonzerolemma\], is a non-zero element of the ring $\cR$. Since $c_0=a_\tau\gamma^{{\langle \bu_\tau,\bz \rangle}}$, either $c_0=0$ or $c_0=\gamma^{{\langle \bu_\tau,\bz \rangle}}$, which is not a zero-divisor by Remark \[nonzerodivisorgammapower\]. Hence, by Remark \[remark-adjugate\], the user can find whether $c_0=0$ from the vector $[g(1),g^{(1)}(1),g(\gamma),g^{(1)}(\gamma)]^t$ by multiplying it from the left by ${\mathrm{adj}}(M)$. Since $c_0=a_\tau\gamma^{{\langle \bu_\tau,\bz \rangle}}$, $a_\tau$ will be zero iff $c_0$ is, and so the user can recover $a_\tau \in \{0,1\}$. #### Communication Cost: The user sends a vector in $\Z_6^k$ to each server. Each server sends an element of $\cR$ and a vector in $\cR^k$ to the user. Since elements of $\cR$ have constant size description, the total communication cost is $O(k)=n^{o(1)}$. Working over $\Z_6$ or $\F_3$ {#sec-overZm} ----------------------------- Using the ring $\cR_{6,6}=\Z_6[\gamma]/(\gamma^6 - 1)$ in the above construction makes the presentation clearer but is not absolutely necessary. Observing the proof, we see that one can replace it with any ring $\cR$ as long as there is a homomorphism from $\cR_{6,6}$ to $\cR$ such that the determinant of the matrix $M$ (Eq. \[determinant\]) doesn’t vanish under this homomorphism.
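As a sanity check, the determinant identity in Eq. (\[determinant\]) can be verified by direct computation in $\cR_{6,6}$, where multiplication is cyclic convolution of coefficient vectors modulo 6. The sketch below is our own illustration.

```python
# Multiplication in Z_6[gamma]/(gamma^6 - 1): cyclic convolution mod 6.
def rmul(f, g):
    h = [0] * 6
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % 6] = (h[(i + j) % 6] + fi * gj) % 6
    return h

gamma = [0, 1, 0, 0, 0, 0]
gm1 = [5, 1, 0, 0, 0, 0]              # gamma - 1  (-1 is 5 mod 6)
quad = [1, 4, 1, 0, 0, 0]             # gamma^2 + 4*gamma + 1

det = gamma
for _ in range(4):                    # gamma * (gamma - 1)^4
    det = rmul(det, gm1)
det = rmul(det, quad)                 # ... * (gamma^2 + 4*gamma + 1)

# Matches the closed form in the text: 3g^5 + 4g^4 + 3g^3 + 2g (nonzero in R).
assert det == [0, 2, 0, 3, 4, 3]
# Its image under gamma -> -1 in Z_6 is 2 (nonzero), as used later in the text.
assert sum(co * (-1) ** l for l, co in enumerate(det)) % 6 == 2
```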
For example, we can work over the ring $\Z_6$ and use the element $-1$ as a substitute for $\gamma$. Since $(-1)^6 = 1$, all of the calculations we did with $\gamma$ carry through. In addition, the resulting determinant of $M$ is non-zero when setting $\gamma= -1$ and so we can complete the recovery process. More formally, define the homomorphism $\tau:\Z_6[\gamma]/(\gamma^6 - 1)\mapsto \Z_6$ by extending the identity homomorphism on $\Z_6$ using $\tau(\gamma)=-1$. Observe that the determinant of the matrix $M$ in Eq. (\[determinant\]) doesn’t vanish under this homomorphism: $\tau(\det(M))=-4=2$. A more interesting example is the ring of integers modulo $3$, which we denote by $\F_3$ to highlight that it is also a field. We can use the homomorphism $\phi: \Z_6[\gamma]/(\gamma^6-1)\mapsto \F_3$ obtained by extending the natural homomorphism from $\Z_6$ to $\F_3$ (given by reducing each element modulo $3$) using $\phi(\gamma)=-1$. Again the determinant in Eq. (\[determinant\]) doesn’t vanish. This also shows that our scheme can be made [*bilinear*]{}, as defined in [@RazborovY06], since the answers of each server become linear combinations of database entries over a field. An Alternative Construction --------------------------- In the construction above we used the special properties of Grolmusz’s construction, namely that the non-zero inner products are in the special set $S = {\{1,3,4\}}$. Here we show how to make the construction work with any matching vector family (over $\Z_6$). This construction also introduces higher order differential operators, which could be of use if one is to generalize this work further. Suppose we run our protocol (with $\cR = \cR_{6,6}$) using a matching vector family with $S=\Z_6\setminus{\{0\}}$. Then, we cannot claim that $c_2=c_5=0$, but we still have $c_0=a_\tau\gamma^{{\langle \bu_\tau,\bz \rangle}}$.
We can proceed by asking for the ‘second order’ derivative of $F(\bx)=\sum_{i=0}^n a_i\bx^{\bu_i}$ which we define as $$F^{(2)}(\bx):=\sum c_{\bz}\ (\bz\otimes\bz)\ \bx^{\bz}$$ where $\bz\otimes\bz$ is the $k\times k$ matrix defined by $(\bz\otimes\bz)_{ij}=z_iz_j$. For example, when $P(x_1,x_2)=x_1^2x_2+4x_1x_2+3x_2^2$, $$P^{(2)}(x_1,x_2)={\left[\begin{matrix} 4& 2\\ 2& 1\\ \end{matrix}\right]}x_1^2x_2+4{\left[\begin{matrix} 1& 1\\ 1& 1\\ \end{matrix}\right]}x_1x_2+3{\left[\begin{matrix} 0& 0\\ 0& 4\\ \end{matrix}\right]}x_2^2.$$ The final protocol is: $$\begin{aligned} \textup{User} &: \bz\in\Z_m^k\\ \textup{User} \rightarrow \cS_i &: \bz+t_i\bv_\tau\\ \cS_i \rightarrow \textup{User} &: F(\bgam^{\bz+t_i\bv_\tau}),\ F^{(1)}(\bgam^{\bz+t_i\bv_\tau}),\ F^{(2)}(\bgam^{\bz+t_i\bv_\tau})\end{aligned}$$ Notice that privacy is maintained and the communication is $O(k^2) = n^{o(1)}$ as before. For recovery, define $g(T)\in \cR[T]$ as before and notice that, in addition to the identities $$\begin{aligned} & g(\gamma^t)=\sum_{\ell=0}^5 c_\ell \gamma^{t\ell}=F(\bgam^{\bz+t\bv_\tau})\\ & g^{(1)}(\gamma^t)=\sum_{\ell=0}^5 \ell c_\ell \gamma^{t\ell}={\langle F^{(1)}(\bgam^{\bz+t\bv_\tau}),\bv_\tau \rangle},\end{aligned}$$ we also get the second order derivative of $g$ from $$g^{(2)}(\gamma^t)=\sum_{\ell=0}^5 \ell^2 c_\ell \gamma^{t\ell}={\langle F^{(2)}(\bgam^{\bz+t\bv_\tau}),\bv_\tau\otimes\bv_\tau \rangle},$$ where the inner product of matrices is taken entry-wise and using the identity ${\langle { \bf u} \otimes { \bf u}, \bv \otimes \bv \rangle} = {\langle { \bf u},\bv \rangle}^2$. 
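The matrix inner-product identity used above can be checked directly; the snippet below (ours, not from the paper) verifies it on sample integer vectors and reproduces the $\bz\otimes\bz$ matrices of the $P(x_1,x_2)$ example.

```python
def outer(u, v):  # the matrix u (x) v with entries u_i * v_j
    return [[a * b for b in v] for a in u]

def dot(u, v):  # ordinary inner product of vectors
    return sum(a * b for a, b in zip(u, v))

def dot_mat(A, B):  # entry-wise inner product of two matrices
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# <u (x) u, v (x) v> = <u, v>^2 on sample vectors
u, v = [2, 5, 1], [3, 1, 4]
assert dot_mat(outer(u, u), outer(v, v)) == dot(u, v) ** 2

# Exponent matrices z (x) z from the P(x1, x2) example above
print(outer([2, 1], [2, 1]))  # [[4, 2], [2, 1]]
print(outer([1, 1], [1, 1]))  # [[1, 1], [1, 1]]
print(outer([0, 2], [0, 2]))  # [[0, 0], [0, 4]]
```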
By choosing $t_1=0,t_2=1$, we have the following matrix equation: $$\begin{aligned} \left[ \begin{matrix} g(1)\\ g^{(1)}(1)\\ g^{(2)}(1)\\ g(\gamma)\\ g^{(1)}(\gamma)\\ g^{(2)}(\gamma) \end{matrix} \right] = \left[ \begin{matrix} 1&1&1&1&1&1\\ 0&1&2&3&4&5\\ 0&1&4&9&16&25\\ 1 &\gamma &\gamma^2 &\gamma^3& \gamma^4 &\gamma^5\\ 0& \gamma &2\gamma^2 &3\gamma^3 &4\gamma^4& 5\gamma^5\\ 0& \gamma &4 \gamma^2 &9\gamma^3 &16\gamma^4 &25\gamma^5\\ \end{matrix} \right] \left[ \begin{matrix} c_0\\ c_1\\ c_2\\ c_3\\ c_4\\ c_5 \end{matrix} \right] =M \left[ \begin{matrix} c_0\\ c_1\\ c_2\\ c_3\\ c_4\\ c_5 \end{matrix} \right]\end{aligned}$$ Now $\det(M)=4\gamma^3(\gamma-1)^9=4+2\gamma^3\ne 0$, and so we can recover $a_\tau$ as before. Generalization to more servers {#sec-kserver} ============================== In this section we prove Theorem \[THM-kserver\]. As was mentioned in the introduction, we will allow the database symbols to belong to a slightly larger alphabet $\Z_m$. Let $q=2^{r-1}$ denote the number of servers $\cS_1,\cdots,\cS_q$ for some $r\ge 2$. Let $m=p_1p_2\cdots p_r$ where $p_1,p_2,\cdots,p_r$ are distinct primes. By Theorem \[Grolmusz\], there is an explicit $S$-matching vector family $\cF=(\cU,\cV)$ of size $n$ and dimension $k=n^{{O\left((\log\log n/\log n)^{1-1/r}\right)}}$ where $S={\{a\in \Z_m: a\mod p_i \in {\{0,1\}}\ \forall\ i \in[r]\}}\setminus {\{0\}}$. By Remark \[CRT\], $|S\cup {\{0\}}|=2^r=2q$. #### The Protocol: We will work over the ring $\cR=\cR_{m,m}=\Z_m[\gamma]/(\gamma^m-1)$. The servers represent the database $\ba=({a_1,\cdots,a_n})\in \Z_m^n$ as a polynomial $F(\bx)\in \mathcal{R}[\bx]=\mathcal{R}[x_1,\cdots,x_k]$ given by $$F(\bx)=F(x_1,\cdots,x_k)=\sum_{i=1}^n a_i \bx^{\bu_i},$$ where $\cU = (\bu_1,\ldots,\bu_n)$ are given by the matching vector family $\cF = (\cU,\cV)$. The user samples a uniformly random $\bz\in \Z_m^k$ and then sends $\bz+t_i\bv_\tau$ to $\cS_i$ for $i\in [q]$ where $t_i=i-1$. 
$\cS_i$ then responds with the value of $F$ at the point $\bgam^{\bz+t_i\bv_\tau}$, that is with $F(\bgam^{\bz+t_i\bv_\tau})$, and the value of the ‘first order derivative’ at the same point, $F^{(1)}(\bgam^{\bz+t_i\bv_\tau})$. Notice that the protocol is private since $\bz+t\bv_\tau$ is uniformly distributed over $\Z_m^k$ for any fixed $\tau$ and $t$. $$\begin{aligned} \textup{User} &: \bz\in\Z_m^k\\ \textup{User} \rightarrow \cS_i &: \bz+t_i\bv_\tau\\ \cS_i \rightarrow \textup{User} &: F(\bgam^{\bz+t_i\bv_\tau}),\ F^{(1)}(\bgam^{\bz+t_i\bv_\tau})\end{aligned}$$ #### Recovery: Similarly to the 2-server analysis, we define $$G(t):=F(\bgam^{\bz+t\bv_\tau}) =\sum_{i=1}^n a_i \gamma^{{\langle \bz,\bu_i \rangle}+t{\langle \bv_\tau,\bu_i \rangle}}=c_0+\sum_{\ell\in S}c_\ell\gamma^{t\ell},$$ and $$g(T) = c_0+\sum_{\ell\in S}c_\ell T^\ell \in \mathcal{R}[T],$$ so that $c_0=a_\tau\gamma^{{\langle \bu_\tau,\bz \rangle}}$ and $$\begin{aligned} g(\gamma^t)&=G(t)=F(\bgam^{\bz+t\bv_\tau})\\ g^{(1)}(\gamma^t)&=\sum_{\ell=0}^{m-1} \ell c_\ell \gamma^{t\ell}={\langle F^{(1)}(\bgam^{\bz+t\bv_\tau}),\bv_\tau \rangle}.\end{aligned}$$ Hence, the user can calculate the values of $g(\gamma^t),g^{(1)}(\gamma^t)$ for $t=t_1,\cdots,t_q$ and we end up with the following (square) system of equations: $$\begin{aligned} \left[ \begin{matrix} g(\gamma^{t_1})\\ g^{(1)}(\gamma^{t_1})\\ \vdots\\ g(\gamma^{t_q})\\ g^{(1)}(\gamma^{t_q})\\ \end{matrix} \right] = \left[ \begin{matrix} 1&\cdots&\gamma^{t_1\ell}&\cdots\\ 0&\cdots&\ell\gamma^{t_1\ell}&\cdots\\ \vdots& &\vdots& &\vdots\\ 1&\cdots&\gamma^{t_q\ell}&\cdots\\ 0&\cdots&\ell\gamma^{t_q\ell}&\cdots\\ \end{matrix} \right] \left[ \begin{matrix} c_0\\ \vdots\\ c_\ell\\ \vdots \end{matrix} \right] =M \left[ \begin{matrix} c_0\\ \vdots\\ c_\ell\\ \vdots \end{matrix} \right]\end{aligned}$$ where the $2^r = 2q$ columns are indexed by $\ell\in {\{0\}}\cup S$. Instead of computing the determinant (and the adjugate matrix), we will use the following Lemma (proven below). 
\[lem-lambda\] There exists a row vector $$\blam=[\alpha_1,\beta_1, \cdots ,\alpha_q, \beta_q]\in \cR^{2q}$$ such that $\blam M=[\mu,0,\cdots,0]$ for some $\mu\in \cR$ where $\mu \mod p_i \ne 0\ \forall i\in[r]$. Using this Lemma, the user can recover $a_\tau$ as follows. We have $$\begin{aligned} \nu := \blam \left[ \begin{matrix} g(\gamma^{t_1})\\ g^{(1)}(\gamma^{t_1})\\ \vdots\\ g(\gamma^{t_q})\\ g^{(1)}(\gamma^{t_q})\\ \end{matrix} \right] = \blam M \left[ \begin{matrix} c_0\\ \vdots\\ c_\ell\\ \vdots \end{matrix} \right] =[\mu,0,\cdots,0] \left[ \begin{matrix} c_0\\ \vdots\\ c_\ell\\ \vdots \end{matrix} \right] =\mu c_0\end{aligned}$$ Taking this equation modulo $p_i$, we get $$(\nu \mod p_i)=(\mu c_0 \mod p_i) = (\mu \mod p_i)(a_\tau \mod p_i) \gamma^{{\langle \bu_\tau,\bz \rangle}}$$ Let $\mu=\sum_{j=0}^{m-1}\mu_j\gamma^j$ and $\nu=\sum_{j=0}^{m-1}\nu_j\gamma^j$. Since $\mu \mod p_i \ne 0$, there exists $j$ such that $\mu_j \mod p_i \ne 0$. So $(a_\tau\mod p_i)=(\mu_j \mod p_i)^{-1}(\nu_{j+{\langle \bu_\tau,\bz \rangle}} \mod p_i)$, and thus we can find $a_\tau \mod p_i$ for each $i\in[r]$. Finally, we use the Chinese Remainder Theorem to find $a_\tau \in \Z_m$. Proof of Lemma \[lem-lambda\] ----------------------------- For any $\blam=[\alpha_1,\beta_1, \cdots ,\alpha_q, \beta_q]\in \cR^{2q}$ we can define a function $h:S\cup{\{0\}}\mapsto \cR$ as: $$h(\ell) =(\blam M)_\ell = \left(\sum_{i=1}^q \alpha_i \gamma^{t_i\ell} \right)+ \ell \left(\sum_{i=1}^q \beta_i \gamma^{t_i \ell}\right).$$ Our goal is then to construct an $h$ of this form such that $$\begin{aligned} h(\ell) \begin{cases} = 0 &\mbox{if}\ \ell\in S\\ = \mu & \mbox{if}\ \ell=0 \end{cases}\end{aligned}$$ where $(\mu \mod p_i) \ne 0\ \forall i\in[r]$. Notice that, by Chinese Remaindering, $$\label{eq-isomorphism} \cR = \cR_{m,m} \cong \cR_{p_1,m} \times \ldots \times \cR_{p_r,m},$$ where we recall that $\cR_{p_i,m} = \Z_{p_i}[\gamma]/(\gamma^m-1)$. 
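The coordinate-wise isomorphism in Eq. (\[eq-isomorphism\]) is concrete enough to check by hand for the smallest case $m = 6$; the sketch below (ours, not from the paper) verifies that reducing coefficients mod $2$ and mod $3$ is a ring isomorphism whose inverse is coefficient-wise CRT.

```python
# Z_6[gamma]/(gamma^6-1)  ~  Z_2[gamma]/(gamma^6-1) x Z_3[gamma]/(gamma^6-1),
# with coefficient-wise CRT (x = 3*x2 + 4*x3 mod 6) as the inverse map.
def down(a):  # R_{6,6} -> (R_{2,6}, R_{3,6})
    return [x % 2 for x in a], [x % 3 for x in a]

def up(a2, a3):  # coefficient-wise CRT lift back to R_{6,6}
    return [(3 * x + 4 * y) % 6 for x, y in zip(a2, a3)]

def mul6(a, b):  # multiplication in Z_6[gamma]/(gamma^6 - 1)
    c = [0] * 6
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[(i + j) % 6] = (c[(i + j) % 6] + x * y) % 6
    return c

a = [1, 0, 5, 2, 0, 3]
b = [4, 1, 0, 0, 2, 5]
assert up(*down(a)) == a                    # round trip
a2, a3 = down(a)
b2, b3 = down(b)
p2, p3 = down(mul6(a, b))
assert p2 == [x % 2 for x in mul6(a2, b2)]  # mod-2 component multiplies
assert p3 == [x % 3 for x in mul6(a3, b3)]  # mod-3 component multiplies
print("Eq. (eq-isomorphism) checked for m = 6")
```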
Therefore, we also get that, for a formal variable $x$, the rings of univariate polynomials also satisfy $$\cR[x] \cong \cR_{p_1,m}[x] \times \ldots \times \cR_{p_r,m}[x].$$ In other words, any family of polynomials $f_i \in \cR_{p_i,m}[x]$, $i\in [r]$ can be ‘lifted’ to a single polynomial $f \in \cR[x]$ so that $ (f \mod p_i) = f_i$ for all $i$ (reducing $f$ mod $p_i$ is done coordinate-wise). Moreover, since this lift is done coefficient-wise (using Eq. (\[eq-isomorphism\])), we get that the degree of $f$ is equal to the maximum of the degrees of the $f_i$’s. We begin by constructing, for each $i \in [r]$, the following polynomial $f_i(x)\in \cR_{p_i,m}[x]$: $$f_i(x)=\prod_{\ell\in S,\ \ell=0\mod p_i}(x-\gamma^\ell)$$ The degree of $f_i$ is $2^{r-1}-1=q-1$ so, by the above comment, we can find a polynomial $f(x)\in \cR[x]$ of degree $q-1$ such that $f(x)\equiv f_i(x) \mod p_i$ for all $i\in [r]$. Define $\alpha_i, i\in[q]$ to be the coefficients of the polynomial $f$ so that $f(x)=\sum_{i=1}^q \alpha_{i}x^{i-1}$. Since we defined $t_i=i-1$, we have $f(x)=\sum_{i=1}^q \alpha_{i}x^{t_i}$. Define $\beta_i=-\alpha_i$ for all $i\in [q]$. Our final construction of $h$ is thus $$h(\ell)=f(\gamma^\ell)-\ell f(\gamma^\ell)$$ We first claim that $h(\ell)=0$ for all $\ell\in S$. Since $0\notin S$, we have $\ell\ne 0$, and we can look at $h(\ell)$ modulo each of the primes: $$\begin{aligned} h(\ell) \mod p_i = f_i(\gamma^\ell)-(\ell\mod p_i) f_i(\gamma^\ell)= \begin{cases} f_i(\gamma^\ell)= 0 & \mbox{if}\ \ell=0 \mod p_i\\ f_i(\gamma^\ell)-f_i(\gamma^\ell) =0 & \mbox{if}\ \ell=1 \mod p_i \end{cases}\end{aligned}$$ Therefore, using Chinese Remaindering, $h(\ell)=0$ for all $\ell\in S$. 
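For the smallest case $m = 6$ ($p_1 = 2$, $p_2 = 3$, $q = 2$, $S = \{1,3,4\}$) the construction can be carried out explicitly. The sketch below (ours, not from the paper) builds the lift $f(x) = \alpha_1 + x$ of $f_1(x) = x - \gamma^4$ and $f_2(x) = x - \gamma^3$ and checks that $h(\ell) = (1-\ell)f(\gamma^\ell)$ vanishes on $S$ while $h(0) = \mu$ survives modulo both primes.

```python
def mul6(a, b):  # multiplication in Z_6[gamma]/(gamma^6 - 1)
    c = [0] * 6
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[(i + j) % 6] = (c[(i + j) % 6] + x * y) % 6
    return c

def gpow(e):  # the element gamma^e
    r = [0] * 6
    r[e % 6] = 1
    return r

def scal(n, a):  # multiply by an integer scalar
    return [(n * x) % 6 for x in a]

# alpha_1 = 2*gamma^3 + 3*gamma^4 is the coefficient-wise CRT lift:
# alpha_1 = -gamma^4 mod 2 and alpha_1 = -gamma^3 mod 3; alpha_2 = 1.
alpha1 = [0, 0, 0, 2, 3, 0]
assert [x % 2 for x in alpha1] == [x % 2 for x in scal(-1, gpow(4))]
assert [x % 3 for x in alpha1] == [x % 3 for x in scal(-1, gpow(3))]

def h(l):  # h(l) = (1 - l) * f(gamma^l) with f(x) = alpha1 + x
    f_at = [(u + v) % 6 for u, v in zip(alpha1, gpow(l))]
    return scal(1 - l, f_at)

for l in (1, 3, 4):            # h vanishes on S = {1, 3, 4}
    assert h(l) == [0] * 6
mu = h(0)                      # mu must be nonzero mod 2 and mod 3
assert any(x % 2 for x in mu) and any(x % 3 for x in mu)
print("mu =", mu)
```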
Next, we claim that $(h(0) \mod p_j)\ne 0$ for all $j\in [r]$. Suppose in contradiction that $(h(0) \mod p_j)= 0$; then $$h(0) \mod p_j=f_j(1)=\prod_{\ell\in S,\ \ell=0\mod p_j}(1-\gamma^\ell)=0.$$ The above equation holds in the ring $\Z_{p_j}[\gamma]/(\gamma^m-1)$. Therefore, if we consider what happens in the ring $\Z_{p_j}[\gamma] \cong \F_{p_j}[x]$ (we replace the formal variable $\gamma$ with $x$ to highlight the fact that $x$ does not satisfy any relation) we get that $$\label{eq-identity} \prod_{\ell\in S,\ \ell=0\mod p_j}(1-x^\ell)=(x^m-1)\theta(x)$$ for some polynomial $\theta(x)\in \F_{p_j}[x]$. The above equation is an identity in the ring $\F_{p_j}[x]$, so we can check its validity by substituting values for $x$ from the algebraic closure of $\F_{p_j}$. Let $m' = m/p_j$ and let $\zeta$ be an element in the algebraic closure of $\F_{p_j}$ of order $m'$ (so $\zeta^\ell=1$ iff $m'$ divides $\ell$). Since $m'$ and $p_j$ are co-prime, such an element exists by Lemma \[lem-order\]. If we substitute $\zeta$ into Eq. (\[eq-identity\]), the RHS is zero (since $m'$ divides $m$). However, each term in the LHS product is nonzero, since if $\ell =0 \mod p_j$ and $m'$ divides $\ell$, then $\ell = 0 \mod m$, but we know that $0\notin S$. Since we are working over the algebraic closure of $\F_{p_j}$, which is a field, the product of nonzero elements is nonzero. This is a contradiction, and so Eq. (\[eq-identity\]) does not hold. Concluding remarks {#sec-conclude} ================== In this work we presented the first information-theoretic 2-server PIR scheme with sub-polynomial communication cost. It is unclear what the optimal communication cost of 2-server schemes is, and we conjecture that our protocol is far from optimal. 
One approach to decrease the communication cost is to take $m$ to be a product of $r>2$ prime factors in Theorem \[Grolmusz\] to get a larger $S$-matching vector family where $S={\{a\in \Z_m: a\mod p_i \in {\{0,1\}}\ \forall\ i \in[r]\}}\setminus {\{0\}}$, which is of size $2^{r}-1$. We then need $2^{r-1}$ independent equations from each server to find $c_0$, so we can ask the servers for derivatives of $F$ at $\bgam^{\bz+t\bv_\tau}$ up to order $2^{r-1}-1$. If these equations are ‘independent’, i.e., the determinant of the coefficient matrix doesn’t vanish, then we can find $c_0$. If we can do this, we can decrease the cost to $n^{{O\left(2^r(\log\log n/\log n)^{1-1/r}\right)}}$. But observe that for each $\ell\in S$, $\ell^2=\ell\mod m$ since $\ell\mod p_i \in {\{0,1\}}\ \forall i\in[r]$. So higher order derivatives of $g$ are equal to the first order derivative and we get repeated rows in the coefficient matrix $M$. One avenue for improvement could be to construct $S$ such that the elements of $S$ do not satisfy a low-degree monic polynomial. Acknowledgements ================ We would like to thank Klim Efremenko and Sergey Yekhanin for helpful comments. [^1]: Department of Computer Science and Department of Mathematics, Princeton University. Email: `zeev.dvir@gmail.com`. Research supported by NSF grants CCF-1217416 and CCF-0832797. [^2]: Department of Computer Science, Princeton University. Email: `sgopi@cs.princeton.edu`. [^3]: Our scheme can in fact be made linear and, using a simple transformation given in [@RazborovY06], any linear scheme can be converted to a bilinear scheme. [^4]: The rings $\cR_{m,r}$ are sometimes denoted by $\Z_m[C_r]$ and referred to as the [*group ring*]{} of the cyclic group $C_r$ with coefficients in $\Z_m$. See e.g., [@KKS13; @HH11] for some recent applications of these rings in cryptography.
--- abstract: 'The universal three-body dynamics in ultra-cold binary gases confined to one-dimensional motion is studied. The three-body binding energies and the (2 + 1)-scattering lengths are calculated for two identical particles of mass $m$ and a different one of mass $m_1$, whose interactions are described in the low-energy limit by zero-range potentials. The critical values of the mass ratio $m/m_1$, at which the three-body states arise and the (2 + 1)-scattering length equals zero, are determined both for zero and infinite interaction strength $\lambda_1$ of the identical particles. A number of exact results are listed and asymptotic dependences both for $m/m_1 \to \infty$ and $\lambda_1 \to -\infty$ are derived. Combining the numerical and analytical results, a schematic diagram showing the number of the three-body bound states and the sign of the (2 + 1)-scattering length in the plane of the mass ratio and interaction-strength ratio is deduced. The results provide a description of the homogeneous and mixed phases of atoms and molecules in dilute binary quantum gases.' author: - 'O. I. Kartavtsev' - 'A. V. Malykh' - 'S. A. Sofianos' bibliography: - 'onedim.bib' title: 'Bound states and scattering lengths of three two-component particles with zero-range interactions under one-dimensional confinement' --- Introduction {#Introduction} ============ Dynamics of few particles confined in low dimensions is of interest in connection with numerous investigations ranging from atoms in ultra-cold gases [@Gorlitz01; @Rychtarik04; @Petrov00; @Mora04; @Mora05; @Yurovsky06; @Rizzi08] to nanostructures [@Johnson04; @Slachmuylders07; @Olendski08]. Experiments with ultra-cold gases in one-dimensional (1D) and quasi-1D traps have recently been performed [@Gorlitz01; @Moritz05; @Sadler06; @Ospelkaus06], amid the rapidly growing interest in the investigation of mixtures of ultra-cold gases [@Karpiuk05; @Shin06; @Chevy06; @Deh08; @Taglieber08; @Capponi08; @Zollner08]. 
Different aspects of the three-body dynamics in 1D have been analyzed in a number of recent papers, e. g., the bound-state spectrum of a two-component compound in [@Cornean06], low-energy three-body recombination in [@Mehta07], application of the integral equations in [@Mehta05], and variants of the hyperradial expansion in [@Amaya-Tapia98; @Amaya-Tapia04; @Kartavtsev06]. It is necessary to emphasize that the exact solutions are known for an arbitrary number of identical particles in 1D with contact interactions [@McGuire64; @Lieb63]; in particular, it was found that the ground-state energy $E_N$ of $N$ attractive particles scales as $E_N/E_{N=2} = N (N^2 - 1)/6$. There is a vast literature in which the exact solution is used to analyze different properties of few- and many-body systems; a few examples of this approach can be found in Refs. [@Li03; @Girardeau07; @Zvonarev07; @Guan07]. The main parameters characterizing the multi-component ultracold gases, i. e., the masses and interaction strengths, can be easily tuned within wide ranges in modern experiments, which handle different compounds of ultracold atoms and adjust the two-body scattering lengths to arbitrary values by using the Feshbach-resonance and confinement-resonance technique [@Olshanii98]. Under properly chosen scales, all the properties of the system depend on two dimensionless parameters, viz., the mass ratio and the interaction-strength ratio, the most important characteristics being the bound-state energies and the (2 + 1)-scattering lengths. In particular, knowledge of these characteristics is essential for the description of the concentration dependence and phase transitions in dilute two-component mixtures of ultra-cold gases. In the present paper, the two-component three-body system consisting of a particle of mass $m_1$ and two identical particles of mass $m$ interacting via a contact ($\delta$-function) inter-particle potential is studied. 
In the low-energy limit, the contact potential is a good approximation for any short-range interaction and its usage provides a universal, i. e., independent of the potential form, description of the dynamics [@Demkov88; @Wodkiewicz91; @Mehta05; @Kartavtsev99; @Kartavtsev06; @Kartavtsev07]. More specifically, it is assumed that one particle interacts with the other two via an attractive contact interaction of strength $\lambda < 0$, while the sign of the interaction strength $\lambda_1$ for the identical particles is arbitrary. This choice of parameters is motivated by the intention to consider sufficiently rich three-body dynamics, since the three-body bound states exist only if $\lambda < 0$. Most of the numerical and analytical results can be obtained by solving a system of hyper-radial equations (HREs) [@Macek68]. It is of importance that all the terms in the HREs are derived analytically; the method of derivation and the analytical expressions are similar to those obtained for a number of problems with zero-range interactions [@Kartavtsev99; @Kartavtsev06; @Kartavtsev07]. To describe the dependence of the three-body binding energies and the (2 + 1)-scattering length on the mass ratio and interaction-strength ratio, the two limiting cases $\lambda_1 = 0$ and $\lambda_1 \to \infty$ are considered and the precise critical values of $m/m_1$ for which the three-body bound states arise and the (2 + 1)-scattering length becomes zero are determined. Combining the numerical calculations, exact analytical results, qualitative considerations, and deduced asymptotic dependencies, one produces a schematic “phase” diagram, which shows the number of the three-body bound states and the sign of the (2 + 1)-scattering lengths in the plane of the parameters $m/m_1$ and $\lambda_1/|\lambda|$. This sign is important in studying the stability of mixtures containing both atoms and two-atomic molecules. The paper is organized in the following way. In Sect. 
\[Outline\] the problem is formulated, the relevant notations are introduced, and the method of “surface” function is described; the analytical solutions, numerical results and asymptotic dependencies are presented and discussed in Sect. \[Results\]; the conclusions are summarized in Sect. \[Conclusion\]. General outline and method {#Outline} ========================== The Hamiltonian of three particles confined in 1D, interacting through the pairwise contact potentials with strengths $\lambda_i$, reads $$\label{ham} H = -\sum_{i} \frac{\hbar^2}{2m_i}\frac{\partial^2}{\partial x_i^2} + \sum_{i} \lambda_{i}\delta(x_{jk}) \ ,$$ where $x_i$ and $m_i$ are the coordinate and mass of the $i$th particle, $x_{jk} = x_j - x_k$, and $ \{ ijk \} $ is a permutation of $ \{ 123 \} $. In order to study the aforementioned two-component three-body systems, one assumes that particle 1 interacts with two identical particles 2 and 3 through attractive potentials and denotes for simplicity $m_2 = m_3 = m$ and $\lambda_{2} = \lambda_{3} \equiv \lambda<0$. The corresponding solutions are classified by their parity and are symmetrical or antisymmetrical under the permutation of identical particles, depending on whether these particles are bosons or fermions. The even (odd) parity solutions will be denoted by $P = 0$ ($P = 1$). In the following, the dependence of the three-body bound state energies and the (2 + 1)-scattering lengths on two dimensionless parameters $m/m_1$ and $\lambda_1/|\lambda|$ will be investigated. Hereafter, one lets $\hbar = |\lambda| = m = 1$ and thus $m \lambda^2/\hbar^2$ and $\hbar^2/(m |\lambda|)$ are the units of energy and length. Furthermore, one denotes by $A$ and $A_1$ the scattering lengths for the collision of the third particle off the bound pair of different and identical particles, respectively. 
The scattering length is considered at the lowest two-body threshold, which corresponds to the determination of $A$ if $\lambda_1/|\lambda| > -\sqrt{2/(1 + m/m_1)}$ and of $A_1$ otherwise. With the chosen units, $E_\mathrm{th} = -1/[2(1 + m/m_1)]$ and $E'_\mathrm{th} = -\lambda_1^2/4$ are the two-body thresholds, i.e., the bound-state energies of two different and two identical particles, respectively. The binding energy and the scattering length are monotonic functions of the interaction strength, and for this reason much attention is paid to calculations for the two limiting cases of zero ($\lambda_1 = 0$) and infinite ($\lambda_1 \to \infty$) interaction between the identical bosons. It is of interest to recall here that, due to the one-to-one correspondence of the solutions [@Girardeau60], all the results derived for systems in which the identical particles are bosons and $\lambda_1 \to \infty$ are applicable to those in which the identical particles are fermions and the s-wave interaction between them is zero ($\lambda_1 = 0$) by definition. The numerical and analytical results will be obtained mostly by solving a system of HREs [@Macek68] where the various terms are derived analytically [@Kartavtsev99; @Kartavtsev06; @Kartavtsev07]. The HREs are written by using the center-of-mass coordinates $\rho$ and $\alpha$, which are expressed via the scaled Jacobi variables as $ \rho\sin \alpha = x_2 - x_3$ and $\rho\cos\alpha = \cot\omega \left(2 x_1 - x_2 - x_3 \right)$ given the kinematic-rotation angle $\omega = \arctan\sqrt{1 + 2 m/m_1}$ so that $E_\mathrm{th} = -\cos^2\omega$. 
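The last identity follows from $\cos^2\omega = 1/(1+\tan^2\omega) = 1/[2(1+m/m_1)]$; a one-line numerical check (ours, not part of the paper) confirms it for several mass ratios:

```python
# Check E_th = -cos^2(omega) with omega = arctan(sqrt(1 + 2*m/m1)),
# i.e. cos^2(omega) = 1/[2*(1 + m/m1)].
import math

for mass_ratio in (0.1, 1.0, 7.3791, 100.0):
    omega = math.atan(math.sqrt(1.0 + 2.0 * mass_ratio))
    assert abs(math.cos(omega) ** 2 - 1.0 / (2.0 * (1.0 + mass_ratio))) < 1e-12
print("E_th = -cos^2(omega) checked")
```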
The total wave function is expanded, as in papers [@Amaya-Tapia98; @Amaya-Tapia04; @Kartavtsev06; @Kartavtsev07], $$\label{Psi1dim} \Psi = \rho^{-1/2} \sum_{n = 1}^{\infty} f_n(\rho)\Phi_n(\alpha , \rho) \,,$$ in terms of a set of functions $\Phi_n(\alpha , \rho)$ satisfying, at fixed $\rho$, the equation $$\label{eqPhi} \left(\frac{\partial^2}{\partial\alpha^2} + \xi^2 \right) \Phi_n(\alpha, \rho) = 0$$ complemented by the condition $$\label{bcomega} \frac{\partial\Phi_n(\alpha, \rho)}{\partial\alpha} \Bigg|_{\alpha = \omega - 0}^{\alpha = \omega + 0} + 2\rho\cos\omega\Phi_n(\omega, \rho) = 0 \ ,$$ which represents the contact interaction between different particles [@Wodkiewicz91; @Kartavtsev06; @Kartavtsev07; @Kartavtsev07a]. Taking into account the symmetry requirements, one can consider the variable $\alpha $ within the range $0 \leq \alpha \leq \pi/2$ and impose the boundary conditions $$\label{boundconda} \left[ \left(1 - P \right) \frac{\partial\Phi_n}{\partial\alpha} + P \Phi_n \right]_{\alpha = \pi/2} = 0 \,,$$ $$\label{boundcondb} \left[ \left(1 - T \right) \frac{\partial\Phi_n}{\partial\alpha} + T \Phi_n\right]_{\alpha = 0} = 0 \, ,$$ where $P = 0$ ($P = 1$) for even (odd) parity and $T = 0$ ($T = 1$) for $\lambda_1 = 0$ ($\lambda_1 \to \infty$). These boundary conditions are posed if the two identical particles are bosons; however, the case $T = 1$ is equally applicable if the two identical particles are noninteracting ($\lambda_1 = 0$) fermions. The solution to Eq. 
(\[eqPhi\]) satisfying the boundary conditions (\[boundconda\]) and  (\[boundcondb\]) can be written as $$\label{sol1} \Phi_n(\alpha, \rho) = B_n \cases{ \cos [\xi_n (\omega - \pi /2) - P\pi /2)] \cos (\xi_n \alpha - T\pi/2)\,, & $ \alpha \le \omega $\cr \cos (\xi_n \omega - T\pi/2) \cos [\xi_n (\alpha - \pi /2) - P\pi /2]\,, & $ \alpha \ge \omega$\cr }$$ where the normalization constant is given by $$\begin{aligned} \label{norm1} B_n^{2} = -\left[2\cos^2(\xi_n \{\omega - \pi /2\} - P\pi /2) \cos^2(\xi_n\omega-T\pi/2)\cos\omega\right]^{-1} \frac{{\mathrm d}\xi_n^2}{{\mathrm d} \rho}\ .\end{aligned}$$ In order to meet the condition (\[bcomega\]), the eigenvalues $\xi_n(\rho)$ should satisfy the equation $$\label{transeq} 2 \rho \cos\omega\cos [\xi_n \omega - (\xi_n + P)\pi /2] \cos (\xi_n \omega - T\pi/2) + \xi_n \sin [(\xi_n + P - T)\pi /2] = 0 \ .$$ Notice that the case $P = 1$ and $T = 0$ is formally equivalent to the case $P = 0$ and $T = 1$ under the substitution of $\omega $ for $\pi /2 - \omega $. The expansion of the total wave function (\[Psi1dim\]) leads to an infinite set of coupled HREs for the radial functions $f_n(\rho)$ $$\label{system1} \left[\frac{{\rm d}^2}{{\rm d} \rho^2} - \frac{\xi_n^2(\rho) - 1/4}{\rho^2} + E \right] f_n(\rho) - \sum_{m = 1}^{\infty}\left[P_{mn}(\rho) - Q_{mn}(\rho) \frac{{\rm d}}{{\rm d}\rho} - \frac{{\rm d}} {{\rm d}\rho}Q_{mn}(\rho) \right] f_m(\rho) = 0 \ .$$ Using the method described in [@Kartavtsev99; @Kartavtsev06; @Kartavtsev07], one can derive analytical expressions for all the terms in Eq. 
(\[system1\]), $$\begin{aligned} \label{Qnm0} Q_{nm}(\rho) &\equiv& \langle \Phi_n \bigm| \Phi_m'\rangle =\frac{\sqrt{\varepsilon_n'\varepsilon_m'}} {\varepsilon_m - \varepsilon_n}\,, \\ \label{Pnm0} P_{nm}(\rho) &\equiv& \langle \Phi_n' \bigm| \Phi_m' \rangle = \cases{ \displaystyle Q_{nm} \displaystyle \left[\frac{\varepsilon_n' + \varepsilon_m'} {\varepsilon_m - \varepsilon_n} + \frac{1}{2} \displaystyle \left(\frac{\varepsilon_n''} {\varepsilon_n'} - \frac{\varepsilon_m''} {\varepsilon_m'}\right)\right]\,, & $n\neq m$ \cr % \displaystyle - \frac{1}{6}\frac{\varepsilon_n'''}{\varepsilon_n'} + \frac{1}{4}\left(\frac{\varepsilon_n''}{\varepsilon_n'}\right)^2\,, & $n = m$\cr }\end{aligned}$$ where $\varepsilon_n = \xi_n^2$ and the prime indicates the derivative with respect to $\rho$. The obvious boundary conditions for the HREs (\[system1\]), $f_n(\rho) \to 0 $ as $\rho \to 0$ and $\rho \to \infty$, were used for the solution of the eigenvalue problem. For the calculation of the scattering length $A$, one should impose the asymptotic boundary condition for the first-channel function $$\label{f1as} f_1(\rho) \sim \rho \sin\omega - A \, ,$$ while all other boundary conditions remain the same as for the eigenvalue problem. The condition (\[f1as\]) follows from the asymptotic form of the threshold-energy wave function at $\rho \to \infty$, which tends to a product of the two-body bound-state wave function and the function describing the relative motion of the third particle and the bound pair. The linear dependence of the latter function at large distance between the third particle and the bound pair leads to the asymptotic expression (\[f1as\]) for the first-channel function in the expansion (\[Psi1dim\]). On the other hand, the expression (\[f1as\]) is consistent with the asymptotic solution of the first-channel equation in (\[system1\]), in which the long-range terms $P_{11}(\rho)$ and $- 1/(4\rho^2)$ cancel each other at large $\rho$. 
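In practice the channel eigenvalues $\xi_n(\rho)$ of Eq. (\[transeq\]) are obtained by root finding. The sketch below (ours, not part of the paper) locates a root of the transcendental equation by bisection for $P = T = 0$ and $m = m_1$, i.e. $\omega = \pi/3$; in this case the left-hand side of Eq. (\[transeq\]) is positive as $\xi \to 0^+$ and negative at $\xi = 2$, so a root lies in $(0, 2)$, tending to $\xi = 2$ as $\rho \to 0$.

```python
# Bisection on the transcendental equation (transeq) for the channel
# eigenvalue xi(rho); here P = T = 0 and omega = arctan(sqrt(3)) = pi/3.
import math

def transeq(xi, rho, omega, P=0, T=0):
    return (2 * rho * math.cos(omega)
            * math.cos(xi * omega - (xi + P) * math.pi / 2)
            * math.cos(xi * omega - T * math.pi / 2)
            + xi * math.sin((xi + P - T) * math.pi / 2))

def lowest_root(rho, omega, lo=1e-9, hi=2.0):
    # sign change between lo and hi is guaranteed for P = T = 0, omega = pi/3
    flo = transeq(lo, rho, omega)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if transeq(mid, rho, omega) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = math.atan(math.sqrt(3.0))  # m/m_1 = 1
for rho in (0.001, 0.5, 1.0, 5.0):
    print(rho, lowest_root(rho, omega))
```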
Results {#Results} ======= Exact solutions {#Exact} --------------- There are several examples where the analytical solution of the Schrödinger equation for the systems under consideration can be obtained. Firstly, for a system containing one heavy and two light particles (in the limit $m/m_1 \to 0$), using the separation of variables, the solutions can be straightforwardly written both for zero and infinite interaction strength between the light particles. In particular, for $\lambda_1 = 0$, there is a single bound state with binding energy $E_{3} = -1$ and the (unnormalized) wave function is $$\Psi_{\rm b} = {\rm e}^{-|x_{12}| - |x_{13}|}\,,$$ whereas the scattering wave function at the threshold energy $E_\mathrm{th} = -1/2$ is $$\Psi_{\rm sc} = (|x_{12}| - 1)\,{\rm e}^{-|x_{13}|} + (|x_{13}| - 1)\, {\rm e}^{-|x_{12}|} \, ,$$ which gives the (2 + 1)-scattering length $A = 1$. On the other hand, for $\lambda_1 \to \infty$, the three-body system is not bound, and the scattering wave function at the threshold energy $E_\mathrm{th} = -1/2$ is $$\Psi_{\rm sc} = |x_{12}\, {\rm e}^{-|x_{13}|} - x_{13}\, {\rm e}^{-|x_{12}|}| \ ,$$ which gives $A = 0$. Furthermore, as mentioned in the introduction, the exact solution is known for an arbitrary number $N$ of identical particles with contact interactions in 1D [@McGuire64; @Lieb63] and, if the interaction is attractive, there is a single bound state whose energy equals $E_N = -N(N^2 - 1)/24$. In particular, for three identical particles ($m = m_1$ and $\lambda_1 = \lambda $) there is only one bound state with energy $E_3 = -1$ and the (unnormalized) wave function is $$\Psi_{\rm b} = \exp\left(-\frac{1}{2}\,\sum_{i<j} |x_{ij}|\right)\ ,$$ whereas the exact scattering wave function at the two-body threshold $E_{\rm th} = E'_{\rm th} = -1/4$ is $$\Psi_{\rm sc} = \sum_{i<j}\exp(- \frac{1}{2}|x_{ij}|) - 4 \exp(-\frac{1}{4}\sum_{i<j}|x_{ij}|)\ ,$$ which implies that the (2 + 1)-scattering length is infinite, $|A| \to \infty$, i. 
e., there is a virtual state at the two-body threshold [@Amaya-Tapia98]. Further exact results can be obtained by using the abovementioned correspondence of the three-body solutions for the infinite interaction strength ($\lambda_1 \to \infty$) between two identical bosons and for two noninteracting fermions ($\lambda_1 = 0$). For example, for three equal-mass particles ($m = m_1$) the exact wave function at the two-body threshold ($E_\mathrm{th} = -1/4$) reads $$\label{exact1} \Psi_{\rm sc} = \cases{ {\rm e}^{-x_{13}/2} + {\rm e}^{x_{12}/2} - 2 {\rm e}^{-x_{23}/2}, & $x_{13}\ge 0$ \cr |{\rm e}^{x_{13}/2} - {\rm e}^{x_{12}/2}|, & $x_{13}\le 0 $ \, .\cr }$$ As follows from (\[exact1\]), the (2 + 1)-scattering length is infinite; as a matter of fact, this implies a rigorous proof of the conjecture [@Cornean06] that $m = m_1$ is the exact critical value for the emergence of the three-body bound state in the case of infinite repulsion ($\lambda_1 \to \infty$) between two identical bosons. It is worthwhile to recall here the exact solution for three equal-mass particles ($m = m_1$) if the interaction between two of them is turned off ($\lambda_1 = 0$) [@Gaudin75]. A transcendental equation was derived for the ground-state energy, whose approximate solution gives the ratio of three-body and two-body energies $E_3/E_\mathrm{th} \approx 2.08754$. Numerical calculations {#Numerical} ---------------------- For the even-parity states ($P = 0$) and the two limiting values of the interaction strength between identical bosons, $\lambda_1 = 0$ and $\lambda_1 \to \infty$, the HREs (\[system1\]) are solved to determine the mass-ratio dependence of the three-body binding energies and the (2 + 1)-scattering length $A$. The calculations show sufficiently fast convergence with an increasing number of channels; 15-channel results are presented in Fig. \[fig1\]. 
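As a numerical consistency check of the exact results quoted in Sect. \[Exact\] (ours, not part of the paper), one can apply a finite-difference Laplacian to the McGuire ground-state wave function $\Psi_{\rm b}$ for three identical particles at an interior point of an ordering region, away from the contact singularities, and recover $E_3 = -1$ in the units $\hbar = m = |\lambda| = 1$:

```python
# Psi_b = exp(-(|x12| + |x13| + |x23|)/2) should satisfy
# -(1/2) * sum_i d^2 Psi / dx_i^2 = E * Psi with E = -1 away from x_i = x_j.
import math

def psi(x1, x2, x3):
    return math.exp(-0.5 * (abs(x1 - x2) + abs(x1 - x3) + abs(x2 - x3)))

def local_energy(x1, x2, x3, h=1e-4):
    # central second differences at an interior point of x1 < x2 < x3
    p = psi(x1, x2, x3)
    lap = ((psi(x1 + h, x2, x3) - 2 * p + psi(x1 - h, x2, x3))
           + (psi(x1, x2 + h, x3) - 2 * p + psi(x1, x2 - h, x3))
           + (psi(x1, x2, x3 + h) - 2 * p + psi(x1, x2, x3 - h))) / h**2
    return -0.5 * lap / p

E = local_energy(0.0, 0.3, 1.0)
print(E)  # approximately -1.0
```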
![\[fig1\] Mass-ratio dependences for the even-parity states; shown are the ratio of the three-body bound-state energies to the two-body threshold energy (left) and the (2 + 1)-scattering length $A$ (right). Presented are the calculations for a system containing two identical bosons with zero (solid lines) and infinite (dash-dotted lines) interaction strength $\lambda_1$. The dash-dotted lines also represent the results for a system containing two identical noninteracting ($\lambda_1 = 0$) fermions. Encircled are the points at which the exact analytical solution is known.](fig1a.eps "fig:"){width="8.15cm" height="7.25cm"} ![](fig1b.eps "fig:"){width="8.15cm" height="7.25cm"} The precise critical values of the mass ratio for which the three-body bound states arise ($|A| \to \infty$) and the (2 + 1)-scattering length becomes zero ($A = 0$) are presented in Table \[table1\] and are marked by crosses in Fig. \[fig1\] and Fig. \[fig3\]. 
| $n$ | $m/m_1$ ($A = 0$), $\lambda_1 = 0$ | $m/m_1$ ($|A| \to \infty$), $\lambda_1 = 0$ | $m/m_1$ ($A = 0$), $\lambda_1 \to \infty$ | $m/m_1$ ($|A| \to \infty$), $\lambda_1 \to \infty$ |
|-----|-----------|------------|------------|------------|
| 1 | – | – | 0$^*$ | $1^*$ |
| 2 | 0.971 | 2.86954 | 5.2107 | 7.3791 |
| 3 | 9.365 | 11.9510 | 16.1197 | 19.0289 |
| 4 | 22.951 | 26.218 | 32.298 | 35.879 |
| 5 | 41.762 | 45.673 | 53.709 | 57.923 |
| 6 | 65.791 | 70.317 | 80.339 | 85.159 |
| 7 | 95.032 | 100.151 | 112.179 | 117.583 |
| 8 | 129.477 | 135.170 | 149.222 | 155.193 |
| 9 | 169.120 | 175.374 | 191.463 | 197.989 |
| 10 | 213.964 | 220.765 | 238.904 | 245.973 |

$^*$ Exact

: The even-parity critical values of the mass ratio $m/m_1$ at which the (2 + 1)-scattering length becomes zero (columns marked $A = 0$) and an $n$th three-body bound state arises (columns marked $|A| \to \infty$). The calculations are done for two values of the interaction strength between the identical particles, $\lambda_1 = 0$ and $\lambda_1 \to \infty$. \[table1\]

The condition that the ground-state energy is twice the threshold energy is important, as it determines whether production of triatomic molecules is possible in a gas of diatomic molecules. The mass ratio at which $E_3/E_\mathrm{th} = 2$ is determined to be $m/m_1 \approx 49.8335$ for $\lambda_1 \to \infty$, while for the excited states the condition $E_3/E_\mathrm{th} = 2$ is satisfied for $m/m_1 \approx 130.4516$ if $\lambda_1 = 0$ and $m/m_1 \approx 266.1805$ if $\lambda_1 \to \infty$. As shown in Fig. \[fig1\], the binding energies increase with increasing mass ratio, whereas the scattering length $A$ generally tends to decrease with increasing mass ratio on each interval between two consecutive critical mass ratios at which the bound states appear. Nevertheless, the calculations for $\lambda_1 = 0$ show that $A(m/m_1)$ becomes a non-monotonic function at small $m/m_1$. More precisely, the scattering length takes a maximum value $A \approx 1.124$ at $m/m_1 \approx 0.246$.
Again one has to note that the mass-ratio dependence of the energy and scattering length (plotted in Fig. \[fig1\]) and the critical values of the mass ratio (presented in Table \[table1\]) are the same both for the three-body system containing two identical bosons if $\lambda_1 \to \infty$ and for the three-body system containing two identical noninteracting ($\lambda_1 = 0$) fermions. It is of interest to note that the calculated binding energy $E_3/E_\mathrm{th} \approx 2.087719$ for three equal-mass particles ($m = m_1$), if two identical ones do not interact with each other ($\lambda_1 = 0$), is very close to the result [@Gaudin75] $E_3/E_\mathrm{th} \approx 2.08754$ obtained from the analytical transcendental equation (see Sect. \[Exact\]). A small discrepancy most probably stems from the approximations made in [@Gaudin75] in the numerical solution of the transcendental equation. The (2 + 1)-scattering length turns out to be small and negative, $A \approx -0.09567$, for $m = m_1$ and $\lambda_1 = 0$, and takes a zero value at the slightly smaller mass ratio $m/m_1 \approx 0.971$ (see Table \[table1\]). Analogously, the odd-parity ($P = 1$) solutions for the three-body system containing two identical noninteracting bosons ($\lambda_1 = 0$) were obtained. As follows from Eq. (\[transeq\]), the eigenvalues $\xi_n(\rho)$ entering the HREs (\[system1\]) are nonnegative, which implies that there are no three-body bound states. The calculated dependence of the scattering length $A$ is shown in Fig. \[fig2\]; $A$ increases monotonically with increasing mass ratio, following the asymptotic dependence discussed in Sect. \[Qualitative\]. ![\[fig2\] Mass-ratio dependence of the (2 + 1)-scattering length $A$ for odd-parity states ($P = 1$) of a system containing two identical noninteracting bosons ($\lambda_1 = 0$). The numerical calculation (solid lines) is compared with the large-mass-ratio asymptotic behaviour given by Eq. (\[sclP1asymp\]) (dash-dotted lines).
The dependence corresponding to large $A > 15$ is shown on a large scale in the inset. ](fig2.eps){width=".55\textwidth"} Asymptotic dependencies {#Qualitative} ----------------------- ### Large attractive interaction of two identical particles {#Linter} In the limit of large attractive interaction between the identical particles, $\lambda_1 \to - \infty$, the even-parity wave function takes, with good accuracy, the factorized form $\Psi \simeq \phi_0(x_{23}) u(y)$ \[$y =\cot\omega \left(2 x_1 - x_2 - x_3 \right)$\], where $\phi_0(x) = \sqrt{|\lambda_1|/2}\exp(-|\lambda_1 x|/2)$ is the wave function of the tightly bound pair of identical particles with energy $E'_{\rm th} = -\lambda_1^2/4$ and $u(y)$ describes the relative motion of the different particle 1 with respect to this pair. Within this approximation, $u(y)$ is a solution of the equation $$\label{eqrel} \left[\frac{{\rm d}^2}{{\rm d} y^2} + 2|\lambda_1| \exp{(- \sqrt{1 + 2m/m_1}|\lambda_1y|)} + \lambda_1^2/4 + E \right] u(y) = 0 \, ,$$ which gives the $\lambda_1$-independent leading-order terms in the asymptotic expansion for the three-body binding energy, $\varepsilon \approx 4/(1 + 2m/m_1)$, and the (2 + 1)-scattering length, $$\label{A1as} A_1 \approx (1 + 2m/m_1)/4 \, .$$ ### Two heavy and one light particles {#Asymptotic} For large mass ratio $m/m_1$, one can use the adiabatic and quasi-classical approximations, which provide, e. g., a universal description of the energy spectrum [@Kartavtsev07a]. To describe the three-body properties in the limit $m/m_1 \to \infty$ \[$\omega \to \pi/2 - \sqrt{m_1/(2m)}$\], one considers the first eigenvalue $\xi_1(\rho) \equiv i\kappa(\rho)$, whose large-$\rho$ asymptotic dependence is approximately given by $$\label{eigeneqminfty} \rho \cos\omega = \frac{\kappa}{1 + (-1)^P {\rm e}^{-\kappa(\pi - 2\omega)}} \ ,$$ as follows from Eq.
(\[transeq\]) on an equal footing for the system containing two identical bosons, both for $\lambda_1 = 0$ and $\lambda_1 = \infty$, and for the system containing two identical noninteracting fermions. The number $n$ of the three-body even-parity ($P = 0$) bound states can be determined for large $m/m_1$, using the one-channel approximation in (\[system1\]) and the effective potential $-\kappa^2(\rho)/\rho^2$, from (\[eigeneqminfty\]). Within the framework of the quasi-classical approximation and taking into account the large-$\rho$ asymptotic dependence (\[eigeneqminfty\]), one obtains the relation $m/m_1 \approx C (n + \delta)^2$ in the limit of large $n$ and $m/m_1$. The constant $C$ can be found as $$\label{Cquasicl} C = \frac{\pi^2}{2}\left[\int_0^1\sqrt{2t + t^2} \frac{1 + (1 - \ln t)t}{2t(1 + t)^2}dt\right]^{-2}\approx 2.59 \ ,$$ where the integral is obtained by letting $t = \exp[-\kappa(\pi - 2\omega )]$ in the leading term of the quasi-classical estimate, $$\label{intminfty} \cos\omega\int_0^\infty {\mathrm d}\rho \left\{ \left[1 + {\rm e}^{-\kappa(\rho)\left(\pi - 2\omega \right)}\right]^2 - 1\right\} ^{1/2} = \pi n \ .$$ Fitting the calculated mass-ratio dependence of the critical values, at which the bound states appear, to the $n$-dependence $C (n + \delta)^2$ (up to $n = 20$, see Table \[table1\] for the 10 lowest values), one obtains $C \approx 2.60$ both for $\lambda_1 \to \infty$ and $\lambda_1 = 0$, in good agreement with the quasi-classical estimate (\[Cquasicl\]). Simultaneously, one obtains $\delta = 0.73$ if $\lambda_1 \to \infty$ and $\delta = 0.22$ if $\lambda_1 = 0$ for the parameter which determines the next-to-leading-order term of the large-$n$ expansion.
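As a cross-check, the integral in (\[Cquasicl\]) can be evaluated with elementary numerics. The sketch below is not part of the original calculation; it substitutes $t = u^2$ so that the $1/\sqrt{2t}$ endpoint singularity of the integrand becomes finite, and then applies a composite Simpson rule.

```python
import math

# Evaluate C = (pi^2/2) * I^{-2}, where
#   I = int_0^1 sqrt(2t + t^2) * (1 + (1 - ln t) t) / (2 t (1 + t)^2) dt.
# Substituting t = u^2 (dt = 2u du) removes the 1/sqrt(2t) singularity:
# near u = 0 the transformed integrand tends to sqrt(2).

def integrand_u(u):
    t = u * u
    g = math.sqrt(2.0 * t + t * t) * (1.0 + (1.0 - math.log(t)) * t) \
        / (2.0 * t * (1.0 + t) ** 2)
    return 2.0 * u * g

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with an even number of subintervals n."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

I = simpson(integrand_u, 1e-8, 1.0)  # tiny offset avoids log(0)
C = math.pi ** 2 / 2.0 / I ** 2
print(C)  # close to the quoted value 2.59
```

The remaining error from the cut-off at $u = 10^{-8}$ is of order $10^{-8}$ and does not affect the quoted two-digit accuracy.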
The asymptotic dependence of the effective potential $-\kappa^2(\rho)/\rho^2$ obtained from (\[eigeneqminfty\]) allows one to find the leading-order mass-ratio dependence of the odd-parity ($P = 1$) scattering length, $$\label{sclP1asymp} A = \frac{m}{m_1} \sqrt{1 + \frac{m_1}{2m}} \left(\ln\frac{m}{m_1} + 2 \gamma \right) \ ,$$ where $\gamma \approx 0.5772$ is the Euler constant. The convergence of the calculated dependence $A(m/m_1)$ to the asymptotic dependence (\[sclP1asymp\]) is shown in Fig. \[fig2\] for the case of two identical noninteracting bosons ($\lambda_1 = 0$). Mass-ratio and interaction-strength ratio dependencies {#dependencies} ------------------------------------------------------ Collecting the numerical and exact analytical results, the asymptotic expressions, and qualitative arguments, one obtains a schematic “phase” diagram, which depicts the number of three-body bound states and the sign of the (2 + 1)-scattering lengths in the $m/m_1$ - $\lambda_1/|\lambda|$ plane (shown in Fig. \[fig3\]). ![\[fig3\] Schematic “phase” diagram for the even-parity states of two identical bosons and a third different particle. The dotted line marks the border between the two areas where the lowest two-body threshold is set by the energy of two different and of two identical particles. The number of the three-body bound states is marked by $n$ in the corresponding areas separated by solid lines. The sign of the (2 + 1)-scattering lengths $A$ and $A_1$ is marked by $\pm$ and the corresponding areas are separated by dashed lines. The crosses show the calculated critical values of the mass ratio (listed in Table \[table1\]).
Encircled are those points at which the exact analytical solution is known.](fig3.eps){width="70.00000%"} The plane of parameters is divided into two parts by a dotted line, $\lambda_1/|\lambda| = -\sqrt{2/(1 + m/m_1)}$, with the low-energy three-body properties being essentially different in the upper and lower part, where the two-body threshold is determined by the bound-state energy of two different and of two identical particles, respectively. The lines representing the condition $|A|= \infty$ or $|A_1| = \infty$ (arising of a three-body bound state) separate areas with different numbers of bound states, whereas the conditions $A = 0$ or $A_1=0$ split each area into two parts with different signs of the scattering lengths. It can be proven rigorously that in the upper part of the diagram (above the dotted line) the number of the three-body bound states $n$ increases and the (2 + 1)-scattering length $A$ decreases with decreasing interaction strength $\lambda_1$, while in the lower part (below the dotted line) $n$ increases and $A_1$ decreases with decreasing mass ratio $m/m_1$. The proof is based on the representation for which the lowest two-body threshold is independent of $\lambda_1$ and $m_1$ in the former and latter case, respectively. The required conclusion follows from the monotonic dependence of the Hamiltonian on $\lambda_1$ and $m_1$. The schematic “phase” diagram shown in Fig. \[fig3\] is drawn using a stronger assumption on the positive slope of the lines which show where the three-body bound states arise ($|A| \to \infty$) and where the (2 + 1)-scattering lengths ($A = 0$ and $A_1 = 0$ in the upper and lower part of the $\lambda_1/|\lambda|$ - $m/m_1$ plane, respectively) become zero. Tentatively, this assumption seems to reflect the general trend correctly; nevertheless, one should note that the slope of the isolines of constant scattering length is not generally positive.
In particular, $A$ is not a monotonic function of the mass ratio for $\lambda_1 = 0$, as shown in Fig. \[fig1\]; this implies a non-monotonic dependence of the constant-$A$ isolines in a region near the point ($m/m_1 = 0$, $\lambda_1/|\lambda| = 0$). For sufficiently large repulsion $\lambda_1$ and small mass ratio $m/m_1$ the three-body bound states are lacking. The limit $m/m_1 \to 0$ (a 1D analogue of the helium atom with contact interactions between particles) was discussed in paper [@Rosenthal71], where the binding energy as a function of the repulsion strength between the light particles was calculated and the critical value of the repulsion strength at which the three particles become unbound was determined. Recently, a very precise critical value $\lambda_1/|\lambda| \approx 2.66735$ was found in [@Cornean06]. The boundary of the $n = 0$ area (shown in the upper left corner in Fig. \[fig3\]) goes from the point ($m/m_1 = 0$, $\lambda_1/|\lambda| \approx 2.66735$) to the point ($m/m_1 = 1$, $\lambda_1 \to \infty$), as was conjectured in [@Cornean06] and proven in Sect. \[Exact\] by using the exact solution at the latter point. Taking into account this result, the above-discussed monotonic dependence on $\lambda_1$, and the exact solution for three identical particles, one comes to the interesting conclusion that there is exactly one bound state ($n = 1$) of three equal-mass particles independently of the interaction strength $\lambda_1$. There is exactly one bound state ($n = 1$) also for a sufficiently large attraction between the identical particles, whereas the second bound state appears for $m > m_1$ and $ |\lambda_1| < 1$ (as shown in Fig. \[fig3\]). Therefore, the scattering length $A_1$ changes from the positive value given by (\[A1as\]) at $\lambda_1 \to - \infty $ to a negative one as $\lambda_1$ increases.
The strip areas corresponding to $n > 1$ are located at higher values of the mass ratio, with the large-$n$ asymptotic dependence $n \propto \sqrt{m/m_1}$. In each parameter area corresponding to $n$ bound states, the scattering lengths run over all real values, tending to infinity at the boundary with the $n - 1$ area and to minus infinity at the boundary with the $n + 1$ area. Conclusion {#Conclusion} ========== The three-body dynamics of ultra-cold binary gases confined to one-dimensional motion is studied. In the low-energy limit, the description is universal, i. e., independent of the details of the short-range two-body interactions, which can be taken as a sum of contact $\delta$-function potentials. Thus, the three-body energies and the (2 + 1)-scattering lengths are expressed as universal functions of two parameters, the mass ratio $m/m_1$ and the interaction-strength ratio $\lambda_1/|\lambda|$. The mass-ratio dependences of the binding energies and the scattering length are numerically calculated for even and odd parity, and the accurate critical values of the mass ratio, at which the bound states arise and the scattering length becomes zero, are determined. It is rigorously proven that $m/m_1 = 1$ is the exact boundary, above which at least one bound state exists (as conjectured in [@Cornean06]); a related conclusion is the existence of exactly one bound state for three equal-mass particles independently of the interaction strength between the identical particles. Asymptotic dependences of the bound-state number and the scattering length $A$ in the limit $m/m_1 \to \infty$ and of the binding energy and the scattering length $A_1$ in the limit $\lambda_1 \to -\infty$ are determined.
Combining the numerical calculations, analytical results, and qualitative considerations, a schematic diagram is drawn, which shows the number of the three-body bound states and the sign of the (2 + 1)-scattering length as functions of the mass ratio and the interaction-strength ratio. The obtained qualitative and quantitative results on the three-body properties provide a firm base for the description of the equation of state and phase separation in dilute binary mixtures of ultra-cold gases. In particular, the sign of the (2 + 1)-scattering lengths essentially controls the transition between the homogeneous and mixed phases of atoms and diatomic molecules. The condition $E_3/E_\mathrm{th} > 2$ defines the parameter area where the production of triatomic molecules is energetically favorable in a gas of diatomic molecules. From the analysis of the “phase” diagram in Fig. \[fig3\] it follows that there are still interesting problems deserving further elucidation. These include the problem of the non-monotonic dependence of the constant-$A$ isolines in the $\lambda_1/|\lambda|$ - $m/m_1$ plane, the behaviour of the lines separating the positive and negative scattering lengths within the $n = 1$ area, and the description of the beak formed by the lines separating the $n = 1$ and $n = 2$ areas in the vicinity of the exact solution for three identical particles ($\lambda_1 = \lambda$ and $m = m_1$). One should also discuss the connection of the present results with those which take into account the finite interaction radius $R_e$ and the (quasi)-1D geometry. The determination of the corrections due to the finite interaction radius is not a trivial task; however, one expects that the corrections should be small for all calculated values provided $R_e/a$ and $R_e/a_1$ are small, where $a$ and $a_1$ are the two-body scattering lengths.
On the other hand, for sufficiently tight transverse confinement, one expects that the main ingredient is the relation between the 3D and quasi-1D two-body scattering lengths established in [@Olshanii98]. Moreover, the role of the transverse confinement does not simply reduce to a renormalization of the scattering lengths; full-scale three-body calculations are needed to determine the energy spectrum and the scattering data in the (quasi)-1D geometry. It is worthwhile to mention that other few-body problems are also of interest in binary mixtures. In particular, the low-energy three-body recombination plays an important role in the kinetic processes, while the elastic and inelastic cross sections for collisions either of diatomic molecules or of atoms off triatomic molecules are needed to describe the properties of the molecular compounds. Acknowledgements {#acknowledgements .unnumbered} ================ This work is based upon research supported by the National Research Foundation (NRF) of South Africa within the collaborating agreement between the Department of Science and Technology of South Africa and the Joint Institute for Nuclear Research, Russia.
--- abstract: 'This paper introduces the application of Reflexive Game Theory to multistage decision making processes. The idea is that each decision making session has certain parameters, such as "when the session takes place", "who the group members making the decision are", "how the group members influence each other", etc. This study illustrates a consecutive, or sequential, decision making process consisting of two stages. During stage 1, decisions about the parameters of the ultimate decision making are made. Stage 2 is the implementation of the ultimate decision making itself. During stage 1 there can be multiple decision sessions; in such a case it takes more than two sessions to make the ultimate (final) decision. The overall process of ultimate decision making thus becomes a multistage decision making process consisting of consecutive decision making sessions.' author: - Sergey Tarasenko title: Modeling multistage decision processes with Reflexive Game Theory --- Introduction ============ Reflexive Game Theory (RGT) [@lef2; @lef5] makes it possible to predict the choices of subjects in a group. To do so, information about the group structure and the mutual influences between subjects is needed. The formulation and development of RGT were made possible by fundamental psychological research in the field of reflexion conducted by Vladimir Lefebvre [@lef4]. The group structure is the set of pair-wise relationships between subjects in the group. These relationships can be either of alliance or conflict type. The mutual influences are formulated in terms of elements of a Boolean algebra built upon the set of universal actions. The elements of this Boolean algebra represent all possible choices. The mutual influences are presented in the form of an Influence matrix.
In general, RGT inference can be presented as a sequence of the following steps [@lef2; @lef5]: 1\) formalize the choices in terms of elements of the Boolean algebra of alternatives; 2\) present the group in the form of a fully connected relationship graph, where solid-line and dashed-line ribs (edges) represent alliance and conflict relationships, respectively; 3\) if the relationship graph is decomposable, represent it in the form of a polynomial: alliance and conflict are denoted by conjunction ($\cdot$) and disjunction (+) operations; 4\) perform the diagonal form transformation (build the diagonal form on the basis of the polynomial and fold this diagonal form); 5\) deduce the decision equations; 6\) input the influence values into the decision equations for each subject. Let us call the process of decision making in a group a session. Therefore, RGT models a single session. Model of two-stage decision making: formation of points of view =============================================================== This study is dedicated to the matter of setting mutual influences in a group by means of *reflexive control* [@lef1]. The influences that subjects exert on each other can be considered as the result of decision making sessions preceding the *ultimate decision making (final session)*. We call the influences obtained as a result of previous session(s) *set-up influences*. The set-up influences are an intermediate result of the overall decision making process; the term relates only to the influences used during the final session. Consequently, the overall decision making process can be segregated into two stages. Let the result of such a discussion (decision making) be a particular decision regarding the matter under consideration. We assume that the actual decision making regarding the matter (final session - Stage 2) is preceded by a preliminary session (Stage 1).
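Step 1 of this inference can be made concrete with a small sketch. The encoding below is a hypothetical illustration, not taken from the RGT literature: alternatives are subsets of the set of universal actions, alliance corresponds to conjunction (set intersection) and conflict to disjunction (set union).

```python
# Hypothetical encoding of the Boolean algebra of alternatives:
# elements are subsets of the universal action set {alpha, beta, gamma}.

ALPHA, BETA, GAMMA = "alpha", "beta", "gamma"
UNIVERSE = frozenset({ALPHA, BETA, GAMMA})  # the element 1 of the algebra

def conj(x, y):
    """Conjunction (alliance): x * y is set intersection."""
    return x & y

def disj(x, y):
    """Disjunction (conflict): x + y is set union."""
    return x | y

def neg(x):
    """Complement with respect to the universal set."""
    return UNIVERSE - x

a = frozenset({ALPHA})
b = frozenset({BETA})
print(conj(a, b))  # the empty set, i.e. the element 0 of the algebra
print(disj(a, b))  # the alternative {alpha, beta}
```

With this encoding the polynomial of a relationship graph becomes an ordinary expression over sets, e.g. `disj(conj(a, conj(b, d)), c)` for $abd + c$.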
Stage 1 is about decision making regarding the influences (points of view) that each subject will support during the final session. We call such an overall decision making process a *two-stage decision making process*. The general schema of two-stage decision making is presented in Fig.\[twostage\]. ![The general schema of the two-stage decision making.[]{data-label="twostage"}](twostages.png){height="2cm"} To illustrate this model we consider a simple example. *Example 1.* Let the director of some company hold a meeting with his advisors. The goal of the meeting is to make a decision about the marketing policy for the next half a year. The background analysis and predictions of experts suggest three distinct strategies: aggressive (action $\alpha$), moderate (action $\beta$) and soft (action $\gamma$) strategies. The points of view of the director and his advisors are formulated in terms of the Boolean algebra of alternatives. The term point of view implies that a subject makes the same influences on all the others. The director supports the moderate strategy ($\{\beta\}$), the 1st and the 2nd advisors support the aggressive strategy ($\{\alpha\}$), and the 3rd advisor defends the idea of the soft strategy ($\{\gamma\}$). The matrix of initial influences is presented in Table \[infMtx\]. [|c|c|c|c|c|]{} &a&b&c&d\ ------------------------------------------------------------------------ a&a&$\{\alpha\}$&$\{\alpha\}$&$\{\alpha\}$\ ------------------------------------------------------------------------ b&$\{\alpha\}$&b&$\{\alpha\}$&$\{\alpha\}$\ ------------------------------------------------------------------------ c&$\{\beta\}$&$\{\beta\}$&c&$\{\beta\}$\ ------------------------------------------------------------------------ d&$\{\gamma\}$&$\{\gamma\}$&$\{\gamma\}$&d\ \[infMtx\] The director is in conflict with all his advisors, while his advisors are in alliance with each other.
Variable $c$ represents the director, while variables $a$, $b$ and $d$ correspond to the 1st, the 2nd and the 3rd advisor, respectively. The relationship graph is presented in Fig.\[polyn1\]. The polynomial $abd+c$ corresponds to this graph. ![Relationship graph for a director-advisors group.[]{data-label="polyn1"}](polyn1.png){height="2cm"} After the diagonal form transformation the polynomial does not change: $$\begin{array}{*{20}{c}} {} & {} & {[a][b][d]} & {} & {} & {} & {} & {} \\ {} & {[abd] } & {} &{+[c]} & {} & {} & {1 + [c]} & {} \\ {[abd+c]} & {} & {} & {} & = & {[abd+c]} & {} & { = abd+c.} \\ \end{array}$$ Then we obtain four decision equations and their solutions (decision intervals) (Table \[decInt\]). [|c|c|c|]{} &[Decision Equations]{}&[Decision Intervals]{}\ ------------------------------------------------------------------------ a&$a=(bd+c)a+c\overline{a}$&$(bd+c)\supseteq a \supseteq c$\ ------------------------------------------------------------------------ b&$b=(ad+c)b+c\overline{b}$&$(ad+c)\supseteq b \supseteq c$\ ------------------------------------------------------------------------ c&$c=c+abd\overline{c}$&$1\supseteq c \supseteq abd$\ ------------------------------------------------------------------------ d&$d=(ab+c)d+c\overline{d}$&$(ab+c)\supseteq d \supseteq c$\ \[decInt\] Next we calculate the decision intervals by using the information from the influence matrix: subject a: $(bd+c)\supseteq a \supseteq c$ $\Rightarrow$ $(\{\alpha\} \{\gamma\}+\{\beta\})\supseteq a \supseteq \{\beta\}$ $\Rightarrow$ $a=\{\beta\}$; subject b: $(ad+c)\supseteq b \supseteq c$ $\Rightarrow$ $(\{\alpha\} \{\gamma\}+\{\beta\})\supseteq b \supseteq \{\beta\}$ $\Rightarrow$ $b=\{\beta\}$; subject c: $1\supseteq c \supseteq abd$$\Rightarrow$ $1\supseteq c \supseteq \{\alpha\}\{\alpha\}\{\gamma\} $ $\Rightarrow$ $1\supseteq c \supseteq 0 $ $\Rightarrow$ $c = c$; subject d: $(ab+c)\supseteq d \supseteq c$ $\Rightarrow$ $(\{\alpha\}\{\alpha\}+\{\beta\})\supseteq d \supseteq \{\beta\}$
$\Rightarrow$ $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$. Therefore, after the preliminary session, the points of view of the subjects have changed. The director has obtained freedom of choice, since he can choose any alternative: $1 \supseteq c \supseteq 0$ $\Rightarrow$ $c = c$. At the same time, the 1st and the 2nd advisors now support the moderate strategy ($a$ = $b$ = $\{\beta\}$). Finally, the 3rd advisor can now choose between the points of view $\{\alpha,\beta\}$ (aggressive or moderate strategy) and $\{\beta\}$ (moderate strategy): $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$. Thus, the points of view of the 1st and the 2nd advisors are strictly determined, while the point of view of the 3rd advisor is probabilistic. Next we calculate the choice of each subject during the final session, considering the influences resulting from the preliminary session. The matrix of set-up influences is presented in Table \[setupInf\]. The intervals in the matrix imply that a subject can choose any alternative from the given interval as a point of view.
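The interval computations above can be replayed mechanically. The following sketch is a hypothetical check (action names as plain strings, intersection for conjunction, union for disjunction) of the bounds of the decision intervals of Table \[decInt\] under the initial influences $a = b = \{\alpha\}$, $c = \{\beta\}$, $d = \{\gamma\}$.

```python
# Hypothetical check of the Example 1 preliminary session: evaluate the
# interval bounds of Table [decInt] under the initial influences.

A, B, G = "alpha", "beta", "gamma"

inf_a = inf_b = frozenset({A})  # influences made by advisors a and b
inf_c = frozenset({B})          # influence made by the director c
inf_d = frozenset({G})          # influence made by advisor d

upper_a = (inf_b & inf_d) | inf_c   # subject a: (bd + c) >= a >= c
upper_b = (inf_a & inf_d) | inf_c   # subject b: (ad + c) >= b >= c
lower_c = inf_a & inf_b & inf_d     # subject c: 1 >= c >= abd
upper_d = (inf_a & inf_b) | inf_c   # subject d: (ab + c) >= d >= c

print(upper_a, upper_b)  # both equal {beta}: a and b are forced to {beta}
print(lower_c)           # empty set: the director may choose any alternative
print(upper_d)           # {alpha, beta}: d chooses within [{beta}, {alpha, beta}]
```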
[|c|c|c|c|c|]{} &a&b&c&d\ ------------------------------------------------------------------------ a&a&$\{\beta\}$&$\{\beta\}$&$\{\beta\}$\ ------------------------------------------------------------------------ b&$\{\beta\}$&b&$\{\beta\}$&$\{\beta\}$\ ------------------------------------------------------------------------ c&$1 \supseteq c \supseteq 0$ &$1 \supseteq c \supseteq 0$ &c&$1 \supseteq c \supseteq 0$\ ------------------------------------------------------------------------ d& $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$ & $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$ & $\{\alpha, \beta\} \supseteq d \supseteq \{\beta\}$ &d\ \[setupInf\] Subject $a$: $d=\{\alpha,\beta\}$: $(bd+c)\supseteq a \supseteq c$ $\Rightarrow$$(\{\beta\}\{\alpha,\beta\}+c)\supseteq a \supseteq c$ $\Rightarrow$$(\{\beta\}+c)\supseteq a \supseteq c$; $d=\{\beta\}$: $(bd+c)\supseteq a \supseteq c$ $\Rightarrow$$(\{\beta\}\{\beta\}+c)\supseteq a \supseteq c$ $\Rightarrow$$(\{\beta\}+c)\supseteq a \supseteq c$; Subject $b$: $d=\{\alpha,\beta\}$: $(ad+c)\supseteq b \supseteq c$ $\Rightarrow$$(\{\beta\}\{\alpha,\beta\}+c)\supseteq b \supseteq c$ $\Rightarrow$$(\{\beta\}+c)\supseteq b \supseteq c$; $d=\{\beta\}$: $(ad+c)\supseteq b \supseteq c$ $\Rightarrow$$(\{\beta\}\{\beta\}+c)\supseteq b \supseteq c$ $\Rightarrow$$(\{\beta\}+c)\supseteq b \supseteq c$; Subject $c$: $d=\{\alpha,\beta\}$: $1 \supseteq c \supseteq abd$ $\Rightarrow$$1 \supseteq c \supseteq \{\beta\}\{\beta\}\{\alpha,\beta\}$ $\Rightarrow$$1 \supseteq c \supseteq \{\beta\}$; $d=\{\beta\}$: $1 \supseteq c \supseteq abd$ $\Rightarrow$$1 \supseteq c \supseteq \{\beta\}\{\beta\}\{\beta\}$ $\Rightarrow$$1 \supseteq c \supseteq \{\beta\}$; Subject $d$: $(ab+c)\supseteq d \supseteq c$ $\Rightarrow$$(\{\beta\}\{\beta\}+c)\supseteq d \supseteq c$ $\Rightarrow$$(\{\beta\}+c)\supseteq d \supseteq c$.
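Under these set-up influences each advisor's final-session interval reduces to $(\{\beta\}+c) \supseteq x \supseteq c$, so the outcome is governed entirely by the director's influence $c$. A short sketch (hypothetical, not from the paper) makes the two limiting cases explicit.

```python
# Sketch: in the final session of Example 1 every advisor's decision
# interval is ({beta} + c) >= x >= c, with c the director's influence.

beta = frozenset({"beta"})

def advisor_interval(c):
    """Upper and lower bounds of the interval ({beta} + c, c)."""
    return (beta | c, c)

# Director pushes {beta}: the interval collapses, every advisor takes {beta}.
up, lo = advisor_interval(beta)
print(up, lo)

# Director inactive (c = 0): an advisor may choose {beta} or the empty set.
up0, lo0 = advisor_interval(frozenset())
print(up0, lo0)
```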
The single-session case has been considered above. If the final decision had to be made after the single session, the 3rd advisor would be able to choose the alternative $\{\alpha,\beta\}$ and realize action $\alpha$. This option implies that each advisor is responsible for a particular part of the entire company and can take management decisions on his own. Next we consider the decision made after the two-stage decision making. In this case, regardless of the influence of the 3rd advisor (subject $d$), the choice of advisors $a$ and $b$ is defined by the interval $\{\beta\}+c \supseteq x \supseteq c$, where $x$ is either the $a$ or the $b$ variable. Thus, if the director is inactive ($c=0$), subjects $a$ and $b$ can choose either the moderate strategy ($\{\beta\}$) or make no decision ($0=\{\}$). The same is true for subject $d$. If the director makes influence $\{\beta\}$, then all advisors will choose alternative $\{\beta\}$. The director himself can choose from the interval $1 \supseteq c \supseteq \{\beta\}$ after the final session. This means that the director can choose any alternative containing action $\beta$. Thus, in the end the director can realize his initial point of view, the moderate strategy. This example illustrates how, using two-stage decision making, it is possible to make one’s opponents choose one’s point of view, while the person applying such reflexive control can still sustain his initial point of view. The obtained results are applicable in both cases: when 1) only the director makes a decision; or 2) the decisions are made individually by each subject. A model of multi-stage decision making: set-up parameters of the final session ================================================================================ Now we consider the two-stage model in more detail. In the considered example, during the preliminary session only the decision regarding the influences was under consideration.
In the general case, however, before the final session begins, decisions can be made regarding any parameters of the final session. Such parameters include but are not limited to: 1\) the group structure (the number of subjects and the relationships between subjects in a group); 2\) points of view; 3\) the decision to start the final session (the time when the final session should start), etc. We call a decision regarding a single parameter a *consecutive decision*, and decisions regarding distinct parameters *parallel decisions*. Therefore, during the first stage (before the final session) it is possible to make multiple decisions regarding various parameters of the final session. These decisions can be both parallel and consecutive ones. We call such a model of decision making a *multi-stage process of decision making* (Fig.\[multistage\]). ![Multi-stage decision making model.[]{data-label="multistage"}](multistage.png){height="3cm"} Modeling multi-stage decision making processes with RGT ======================================================= Next we consider the realization of multi-stage decision making with RGT. *Example 2: Change of the group structure.* Continuing Example 1, we analyze the case when the director wants to exclude the 3rd advisor from the group that will make the final decision. In this case there is a single action – 1 – to exclude subject $d$ from the group. The Boolean algebra of alternatives then includes only two elements: 1 and 0. Furthermore, it is enough for the director simply to raise the question of excluding subject $d$ from the group and to make influence 1 on each subject: if $c = 1$, then $a=1$, $b=1$ and $d=1$ (Table \[decInt\]). Thus the decision to exclude subject $d$ from the group is made automatically (Fig.\[polyn2\]).
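In the two-element algebra of Example 2 the same interval arithmetic can be checked directly. The sketch below (hypothetical, not from the paper) uses Python's bitwise operators on 0/1 and shows that the director's influence $c = 1$ pins every other subject's interval to $[1, 1]$, regardless of the remaining influences.

```python
# Decision intervals of Table [decInt] specialized to the two-element
# Boolean algebra {0, 1} of Example 2 (the single action: exclude d).

def interval_a(b, c, d):
    """(bd + c) >= a >= c."""
    return ((b & d) | c, c)

def interval_b(a, c, d):
    """(ad + c) >= b >= c."""
    return ((a & d) | c, c)

def interval_d(a, b, c):
    """(ab + c) >= d >= c."""
    return ((a & b) | c, c)

# With c = 1 each interval becomes (1, 1) whatever the others do:
for x in (0, 1):
    for y in (0, 1):
        assert interval_a(x, 1, y) == (1, 1)
        assert interval_b(x, 1, y) == (1, 1)
        assert interval_d(x, y, 1) == (1, 1)
print("c = 1 forces a = b = d = 1")
```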
![Exclusion of subject $d$ from the group.[]{data-label="polyn2"}](polyn2.png){height="2cm"}

*Example 3: Realization of multi-stage decision making.* Let the first decision discussed during the first stage be the decision regarding the influences (points of view). The next decision concerns the exclusion of subject $d$ from the group. Thus, during the first step the formation (setting up) of the points of view is implemented, and then the structure of the group is changed. Therefore the group that should make the final decision is described by the polynomial $ab+c$. The decision equations and their solutions are presented in Table \[decInt3\]. The overall multi-stage decision making process is presented in Fig.\[multiEx\].

[|c|c|c|]{} &[Decision Equations]{}&[Decision Intervals]{}\
------------------------------------------------------------------------
a&$a=(b+c)a+c\overline{a}$&$(b+c)\supseteq a \supseteq c$\
------------------------------------------------------------------------
b&$b=(a+c)b+c\overline{b}$&$(a+c)\supseteq b \supseteq c$\
------------------------------------------------------------------------
c&$c=c+ab\overline{c}$&$1\supseteq c \supseteq ab$\
\[decInt3\]

![Illustration of the multi-stage decision making process. The influences are indicated at the arrow-ends of the edges, with the actual influence shown near each arrow-end. []{data-label="multiEx"}](multistageEx.png){width="12cm"}

We assume that a point of view cannot change without a preliminary session regarding that parameter; therefore the points of view do not change after the change of the group structure. Hence, during the final session the subjects make the influences set up during the preliminary session: subjects $a$ and $b$ make influences $\{\beta\}$, and subject $c$ has a choice from the interval $1 \supseteq c \supseteq \{\beta\}$. This process is shown in Fig. \[multiEx\].
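The decision equations and intervals in Table \[decInt3\] can be checked mechanically. The following Python sketch (our illustration, not part of the RGT formalism itself) models the Boolean algebra of alternatives as the subsets of the action set $\{\alpha,\beta\}$, with $+$ as union, juxtaposition as intersection and the bar as complement, and verifies that the solutions of $a=(b+c)a+c\overline{a}$ are exactly the alternatives in the interval $(b+c)\supseteq a \supseteq c$:

```python
from itertools import combinations

# The action set; alternatives are its subsets (a Boolean algebra).
ACTIONS = frozenset({"alpha", "beta"})

def algebra(universe):
    """All subsets of the action set, i.e. the Boolean algebra of alternatives."""
    items = sorted(universe)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def solutions(b, c):
    """Alternatives a satisfying the decision equation a = (b+c)a + c*(not a)."""
    return {a for a in algebra(ACTIONS)
            if a == ((b | c) & a) | (c & (ACTIONS - a))}

def interval(upper, lower):
    """All alternatives x with upper >= x >= lower under set inclusion."""
    return {x for x in algebra(ACTIONS) if lower <= x <= upper}

# The solution set of the equation coincides with the decision interval
# for every pair of influences b and c:
for b in algebra(ACTIONS):
    for c in algebra(ACTIONS):
        assert solutions(b, c) == interval(b | c, c)

# Example: with influence b = {beta} and an inactive director (c = {}),
# subject a may choose the moderate strategy {beta} or make no decision {}.
assert solutions(frozenset({"beta"}), frozenset()) == {frozenset(),
                                                       frozenset({"beta"})}
```

The equations for $b$ and $c$ in the table can be checked the same way; for the director's equation $c=c+ab\overline{c}$ the condition reduces to $ab \subseteq c$, matching the interval $1 \supseteq c \supseteq ab$.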
During the 1st stage (first step), the points of view of the subjects were formed. In the 2nd stage (second step), the decision to exclude subject $d$ from the group was made. Finally, during the 3rd stage the final decision regarding the marketing strategy was made.

Discussion and conclusion
=========================

This study introduces two-stage and multi-stage decision making processes. During the last stage the final decision is made; during the earlier stages, decisions regarding the parameters of the final session are considered. The study shows how, before the final decision is made, intermediate decisions regarding the parameters of the final session can be taken, and how the overall process of decision making can be represented as a sequence of decision making sessions. This approach enables complex decision making involving numerous parameters. An important feature of multi-stage decision making is that during the preliminary sessions subjects can convince other subjects to accept their own point of view; other subjects can thereby be persuaded to make decisions beneficial to a particular one. The approach also allows the responsibility to be distributed among all the members of the group who make the final decision. The results presented in this study extend the scope of applications of RGT to the modeling of multi-stage decision making processes. It therefore becomes possible to perform scenario analysis of various variants of future trends and to apply reflexive control to the management of projects.

[5]{} Lefebvre, V.A.: Lectures on Reflexive Game Theory. Cogito-Centre, Moscow (2009) (in Russian)

Lefebvre, V.A.: Lectures on Reflexive Game Theory. Leaf & Oaks, Los Angeles (2010)

Lefebvre, V.A.: Algebra of Conscience. D. Reidel, Holland (1982)

Lefebvre, V.A.: The basic ideas of reflexive game’s logic. Problems of systems and structures research. 73–79 (1965) (in Russian)
--- abstract: 'We present an analysis of the photometry and spectroscopy of the host galaxy of [*Swift*]{}-detected GRB080517. From our optical spectroscopy, we identify a redshift of $z=0.089\pm0.003$, based on strong emission lines, making this a rare example of a very local, low luminosity, long gamma ray burst. The galaxy is detected in the radio with a flux density of $S_{4.5\,GHz}=$0.22$\pm$0.04mJy - one of relatively few known GRB hosts with a securely measured radio flux. Both optical emission lines and a strong detection at 22$\mu$m suggest that the host galaxy is forming stars rapidly, with an inferred star formation rate $\sim16$M$_\odot$yr$^{-1}$ and a high dust obscuration (E$(B-V)>1$, based on sight-lines to the nebular emission regions). The presence of a companion galaxy within a projected distance of 25kpc, and almost identical in redshift, suggests that star formation may have been triggered by galaxy-galaxy interaction. However, fitting of the remarkably flat spectral energy distribution from the ultraviolet through to the infrared suggests that an older, 500Myr post-starburst stellar population is present along with the ongoing star formation. We conclude that the host galaxy of GRB080517 is a valuable addition to the still very small sample of well-studied local gamma-ray burst hosts.' author: - | Elizabeth R. Stanway$^{1}$[^1], Andrew J. Levan$^{1}$, Nial Tanvir$^{2}$, Klaas Wiersema$^{2}$, Alexander van der Horst$^3$, Carole G. 
Mundell$^4$, Cristiano Guidorzi$^5$\ $^{1}$Department of Physics, University of Warwick, Gibbet Hill Road, Coventry, CV4 7AL, UK\ $^{2}$Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH, UK\ $^{3}$Anton Pannekoek Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands\ $^{4}$Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF, UK\ $^{5}$Department of Physics and Earth Sciences, University of Ferrara, via Saragat 1, I-44122, Ferrara, Italy\ date: 'Accepted 2014 October 28. Received 2014 October 23; in original form 2014 September 19' title: 'GRB 080517: A local, low luminosity GRB in a dusty galaxy at z=0.09' --- \[firstpage\] gamma-ray burst:individual:080517 – galaxies:star formation – galaxies:structure – galaxies: distances and redshifts Introduction ============ Long Gamma Ray Bursts (GRBs) are intense, relativistically beamed, bursts of radiation, likely emitted during the collapse of a massive star at the end of its life [@2006ApJ...637..914W]. As well as constraining the end stages of the evolution for massive stars, they also mark out star formation in the distant Universe, in galaxies often too small to observe directly through their stellar emission or molecular gas [e.g. @2012ApJ...754...46T]. However, extrapolating from the detection of a single stellar event (the burst) to their wider environment, and the contribution of their hosts to the volume averaged cosmic star formation rate [e.g. @2012ApJ...744...95R], is challenging. Doing so relies on a good understanding of the stellar populations and physical conditions that give rise to GRB events. This understanding has improved significantly over recent years. A number of studies now constrain the stellar properties of typical GRB hosts [e.g. @2009ApJ...691..182S; @2010MNRAS.tmp..479S; @2012ApJ...756..187H], their radio properties [e.g. 
@2012ApJ...755...85M; @2010MNRAS.409L..74S; @radiopaper; @2014arXiv1407.4456P] and behaviour in the far-infrared [@2014arXiv1402.4006H; @2014arXiv1406.2599S]. However these studies have also demonstrated diversity within the population. GRB host galaxies range from low mass, metal poor galaxies forming stars at a moderate rate [e.g. @2010AJ....140.1557L], to more massive moderately dusty but not extreme (SMG-like) starbursts such as the ‘dark’ burst population [@2013ApJ...778..128P; @2013ApJ...778..172P]. The challenge of understanding these sources has been complicated by the high redshifts at which they typically occur. The long GRB redshift distribution peaks beyond $z=1$ [@2012ApJ...752...62J], tracing both the rise in the volume-averaged star formation rate and the decrease in typical metallicity - which may favour the formation of GRB progenitors [see e.g. @2012ApJ...744...95R and references therein]; local examples which can be studied in detail are rare. Of long duration ($>$2s) bursts in the official [*Swift*]{} GRB catalogue table[^2], only three are listed as having $z<0.1$. A few other (pre-[*Swift*]{}) bursts are also known at low redshifts [e.g. GRB980425 at $z=0.009$ @1998Natur.395..670G], but were detected by instruments with quite different systematics and tend to be unusual systems. One of the most recent studies, which exploited ALMA data, identified the host of GRB980425 as a dwarf system with low dust content and suggested that this is typical of GRB hosts as a whole. However each low redshift host investigated in detail has informed our understanding of the population as a whole and proven to differ from the others [e.g. @2011ApJ...741...58W; @2011MNRAS.411.2792S].
Low redshift bursts include several which are sub-luminous, such as GRBs980425 and 031203 [@1998Natur.395..670G; @2004ApJ...609L...5M; @2004Natur.430..648S], and others such as GRBs060505 and 060614 that were long bursts without associated supernovae [@2006Natur.444.1047F; @2006Natur.444.1050D]. Cross-correlation with local galaxy surveys (at $z<0.037$) has suggested that some low redshift GRBs in the existing burst catalogues have yet to be identified as such [@2007MNRAS.382L..21C] and hence opportunities to study their properties in detail have been missed. Given the very small sample, and the variation within it, it is important that we continue to follow up the hosts of low redshift bursts and do not allow a few examples to skew our perception of the population. We have acquired new evidence suggesting that a previously overlooked burst, GRB080517, and its host galaxy might prove a valuable addition to the study of local gamma ray bursts. The WISE all-sky survey [@2010AJ....140.1868W], publicly released in 2012, maps the sky at 3-22$\mu$m. While the observations are relatively shallow and most GRB hosts remain undetected or confused, we have identified the host of GRB080517 as anomalous. Not only is an infrared-bright source clearly detected coincident with the burst location, but it has a sharply rising spectrum and is extremely luminous in the 22$\mu$m W4 band, suggesting that it is a rather dusty galaxy, and likely at low redshift. In this paper, we present new photometry and spectroscopy of the host of GRB080517, identifying its redshift as $z=0.09$. Compiling archival data, we consider the spectral energy distribution (SED) of the host galaxy, and also its larger scale environment, evaluating the source as a low redshift example of a dusty GRB host galaxy. In section \[sec:initial\] we discuss the initial identification of this GRB and its properties. In section \[sec:data\] we present new data on the host galaxy of this source.
We present our optical photometry and spectroscopy of the GRB host and a neighbouring companion in section \[sec:spec\] and report a detection of the GRB host at radio frequencies in section \[sec:radio\]. In section \[sec:reassess\] we reassess the initial burst properties and their early evolution in the light of our new redshift information. In section \[sec:sed\] we compile new and archival photometry to secure an analysis of the spectral energy distribution, and in section \[sec:sfr\] report constraints on the host galaxy’s star formation rate. In section \[sec:disc\] we discuss the properties of the host galaxy in the context of other galaxy populations before presenting our conclusions in section \[sec:conc\]. Throughout, magnitudes are presented in the AB system [@1983ApJ...266..713O] and fluxes in $\mu$Jy unless otherwise specified. Where necessary, we use a standard cosmology with $H_0=70$kms$^{-1}$Mpc$^{-1}$, $\Omega_M=0.3$ and $\Omega_\Lambda$=0.7.

Initial Observations {#sec:initial}
====================

GRB080517 triggered the [*Swift*]{} Burst Alert Telescope (BAT) at 21:22:51 UT on 17th May 2008 as a flare with a measured T$_{90}$ (i.e. period during which 90% of the burst energy was detected) of 65$\pm$27s, classifying the event as a long GRB. The X-ray Telescope (XRT) identified a fading, uncatalogued point source and the presence of a known optical source was noted within the X-ray error circle. The final enhanced XRT position, with uncertainty $1\farcs5$, was 06h 48m 58.03s +50° 44′ 07.7′′ (J2000), coincident with the optical source [@2008GCN..7742....1P]. The Galactic longitude and latitude ($l=$165.369, $b=$20.301) correspond to a sight-line with moderate dust extinction (A$_V$=0.25) from our own galaxy [@2011ApJ...737..103S].
Early observations with the Liverpool Telescope, starting 11 minutes after the BAT trigger, did not detect an optical transient outside of the known source [@2008GCN..7743....1S] and no further optical follow-up was undertaken - in part due to the difficult proximity (within 50$^\circ$) of the Sun at the time the burst triggered [*Swift*]{}. Both the lack of an optical afterglow and analysis of the BAT spectrum suggested that the source might lie at high redshifts [@2008GCN..7748....1M; @2011ApJ...731..103X], but constraints on the X-ray spectrum precluded a high redshift fit to the data [@2008GCN..7742....1P]. Association with the known, bright optical source would suggest a lower redshift for the burst, but it was not clear whether this was the host galaxy or a star in chance alignment. While the afterglow was not detected in the optical, the $\gamma$-ray and X-ray emission was also relatively weak, with an early time flux at 0.3-10keV of 2.52$^{+1.20}_{-0.75}\times10^{-10}$ergcm$^{-2}$s$^{-1}$, measured in a 10s exposure at a mean photon arrival time of T$_0$+133s [based on analysis from @2009MNRAS.397.1177E]. In the absence of a redshift for the host, the time-averaged X-ray analysis also suggested the presence of an excess neutral hydrogen column density of $3.0^{+2.1}_{-1.8} \times 10^{21}$ cm$^{-2}$ above the Galactic value of $1.09\times10^{21}$cm$^{-2}$ [where these are 90% confidence intervals in analysis from the UK Swift Science Data Centre, @2009MNRAS.397.1177E]. This represents an excess in the X-ray inferred hydrogen column at the $\sim$3$\sigma$ level. [*Swift*]{} observations ended approximately 20 hours after the initial trigger. 
Initial observations for this source were therefore ambiguous, with different elements of the data either suggesting a high redshift solution (non-detection of the optical transient, BAT spectrum) or appearing to preclude it (optical source association, X-ray spectrum), and the excess extinction seen in the afterglow implying the presence of dust either in the host galaxy or along the line of sight. However the burst’s location, within 50$^\circ$ of the Sun at the time the burst went off, precluded further early time studies, and the burst has largely been ignored since. [*Swift*]{} has not observed this location at any other time. Given the presence of a relatively bright, $r_{AB}=17.73$, catalogued source within the [*Swift*]{} XRT error circle, an obvious question arises: what is the probability that this is a chance alignment rather than a genuine host galaxy identification? Two main factors contribute to this determination. The surface density of galaxies observable at a given magnitude will depend both on the properties of the galaxy population with redshift, and with galactic latitude (which will govern the fraction of the sky affected by foregrounds and crowding). To evaluate this, we have studied the galaxy population in regions of the Sloan Digital Sky Survey Data Release 10 [SDSS DR10, @2014ApJS..211...17A] at comparable Galactic latitude (within $\sim$5$^\circ$) and offset by 30-50$^\circ$ in Galactic longitude. The population identified by the SDSS photometric pipeline as galaxies were selected in ten regions, each with a diameter of 1$^\circ$, and their surface density evaluated as a function of $r'$-band magnitude. As figure \[fig:coin\] illustrates, the surface density of galaxies comparable to the proposed host of GRB080517 is low, with $0.028\pm0.006$ galaxies typically found per square arcminute. 
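The surface density quoted above converts to a chance-coincidence probability through the area of the matching aperture. Below is a minimal Python sketch of that estimate; the effective matching radius is the one free assumption here (a radius of 1.7 arcsec, i.e. the 1.5 arcsec XRT error circle with a small astrometric margin, reproduces the figures quoted in the next paragraph, while the error circle alone gives values roughly 20 per cent lower):

```python
import math

def chance_alignment(density_per_sq_arcmin, radius_arcsec, n_bursts=1):
    """Chance-coincidence estimate for a Poisson background of galaxies.

    Returns the probability of >= 1 galaxy within the matching radius of a
    single position, and the expected number of alignments over n_bursts.
    """
    area_sq_arcmin = math.pi * radius_arcsec ** 2 / 3600.0
    rate = density_per_sq_arcmin * area_sq_arcmin   # expectation per burst
    p_single = 1.0 - math.exp(-rate)                # ~ rate for small rates
    return p_single, n_bursts * rate

# All galaxies at least as bright as the proposed host (0.028 per sq arcmin),
# over the 604 long bursts with X-ray localisations:
p, n_exp = chance_alignment(0.028, 1.7, n_bursts=604)
print(f"P(single) = {100 * p:.3f}%, expected alignments = {n_exp:.2f}")
# -> P(single) = 0.007%, expected alignments = 0.04

# Restricting to blue, plausibly star-forming galaxies (0.021 per sq arcmin):
p_blue, n_blue = chance_alignment(0.021, 1.7, n_bursts=604)
# -> expected alignments ~ 0.03
```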
Assuming that the SDSS photometric classification is accurate, the probability of finding a galaxy of this brightness within 3arcseconds (see figure \[fig:whtim\]) of a given X-ray location is just 0.007%. Taking into account the 604 long bursts with X-ray localisations in the [*Swift*]{} GRB catalogue table, we would expect 0.04 chance alignments amongst the entire long GRB population. A further constraint arises from the nature of the GRB itself. GRB080517 was a long burst, believed to be associated with a core collapse progenitor, and so likely to be associated with recent or ongoing star formation. If we consider only those galaxies in SDSS with the flat optical colours associated with ongoing star formation, i.e. $|r'-i'|<0.5$, the surface density of galaxies drops still further, to just $0.021\pm0.006$ galaxies per square arcminute, and a predicted 0.03 chance alignments in the entire GRB sample. As will be discussed below, the possible host galaxy of GRB080517 is strongly star forming, and lies within 3 arcseconds of the burst location. Thus we propose its identification as the burst host.

![The surface density of galaxies brighter than a given $r'$-band magnitude, at comparable galactic latitude to GRB080517, based on photometric classification in the Sloan Digital Sky Survey. The solid line shows the surface density of all galaxies, with the standard deviation measured across ten 1$^\circ$-diameter fields. The dashed line shows the lower surface density of relatively blue galaxies likely to be star-forming. The dotted vertical line indicates the magnitude of the proposed host of GRB080517. \[fig:coin\]](coincidence_prob.ps){width="0.99\columnwidth"}

Follow-Up Data {#sec:data}
==============

WHT Imaging {#sec:whtim}
-----------

We targeted the host of GRB080517 on the night of 2014 Feb 25 (i.e. 6 years post-burst) using the auxiliary-port camera, ACAM, on the William Herschel Telescope (WHT).
Photometric imaging was acquired in the Sloan $g$, $r$ and $i$ bands, with an integration time of 180s in each band. Observations were carried out as part of programme W/2014/9 (PI: Levan) and photometric data were reduced and calibrated using standard [IRAF]{} procedures. As figure \[fig:whtim\] demonstrates, the 1.2arcsecond seeing was sufficient to determine the morphology of both the host galaxy and a near neighbour, separated from it by 16 arcseconds. While the GRB host shows a relatively smooth, relaxed morphology, it is resolved in our imaging with a measured gaussian FWHM of 2.1arcseconds. Deconvolution with the seeing, as measured from unresolved sources in the image, yields an estimated intrinsic size of 1.6arcseconds, or 2.7kpc at $z=0.09$ (see next section). ![The structure of the compact GRB host galaxy (lower left) and its near-neighbour (upper right) in the Sloan-$i$ band from our WHT imaging. The neighbour clearly has two cores within a more diffuse galaxy, and is likely to be undergoing a major merger. Both sources are at the same redshift (see section \[sec:spec\]), the scale bar indicates physical distance at this redshift. The 1.5arcsecond 90% confidence error circle from the [*Swift*]{} XRT detection of the burst is indicated in red. \[fig:whtim\]](targets.ps){width="0.99\columnwidth"} ![The radial surface brightness profile of the GRB host galaxy in the Sloan-$i$ band from our WHT imaging. Sersic profiles have been convolved with the seeing and overplotted for comparison. Given the large error bars - due to compact morphology relative to the pixel scale and seeing - a range of Sersic parameters provide a reasonable fit to the data. Normalising the profiles close to the centre suggests a Sersic index of $n\sim1.0-2.0$ may provide the best description of this galaxy’s light profile. The gaussian seeing is shown as a solid line for comparison. 
\[fig:sersic\]](plot_sersic2.ps){width="0.99\columnwidth"} ![The allowed regions of parameter space for a Sersic light profile, quantified by $\chi^2$-fitting against the data. Small effective radii ($<2''$) and low Sersic indices ($n\sim1-2$) are favoured by the data, but there are substantial degeneracies between these parameters. Contours are shown at 1, 2 and 3$\sigma$ confidence levels. \[fig:sersic2\]](fit_sersic2.ps){width="0.99\columnwidth"} While we are unable to distinguish clumpiness on sub-kiloparsec scales, the host galaxy is sufficiently resolved in this new imaging to place constraints on its radial light profile, although such constraints are necessarily limited by the relatively large (0.253 arcsecond) pixels relative to the seeing. In figures \[fig:sersic\] and \[fig:sersic2\] we compare the radially averaged light profile of the galaxy with Sersic profiles [see @2005PASA...22..118G for definitions and discussion], which have been convolved with the seeing in the image. It is clear that a de Vaucouleurs law ($n=4$), such as describes local giant elliptical galaxies would predict far too steep a light profile. Allowing the effective radius and Sersic parameter to vary simultaneously, the best fit to the data is found for $n=1.5\pm1.0$ and $R_e=1.7\pm0.8''$. WHT Spectroscopy {#sec:spec} ---------------- We also obtained spectroscopic data from ACAM on the same night, using the V400 grating and a total integration time of $4\times600$s, producing a spectrum spanning 4000-9000Å with a spectral resolution measured from unblended arc lines of $\sim$18Å($\sim$1000kms$^{-1}$). Both photometric and spectroscopic data were reduced and calibrated using standard [IRAF]{} procedures. Absolute flux and wavelength calibration were achieved through observations of a standard star field and arc lamps immediately preceding the science data. 
The slit was oriented at a position angle of 50$^\circ$, so as to pass through the centres both of the GRB host and the bright neighbour, separated from it by 16$\arcsec$ measured along the 1$\farcs5$ slit. Both the GRB host galaxy and its neighbour are detected at high signal to noise in our spectroscopy. The latter clearly shows two components, A and B. Of these, component A is the stronger continuum source, while component B appears to show relatively stronger line emission (see figure \[fig:2dspec\]). In table \[tab:lines\] we provide the relative emission line strengths of each source (also shown graphically in figure \[fig:specall\]). Line equivalent widths are presented in the observed frame. We make no adjustment for slit losses since it is difficult to reconstruct precisely where on the object the 1.5 arcsec wide slit was placed, and harder still to estimate whether line ratios in the regions of the galaxy outside the slit are comparable to those in observed regions. The measured redshift for the host galaxy is $z=0.089\pm0.003$ and for the neighbour $z=0.091\pm0.003$ (for both components). The uncertainty, estimated by cross correlation against a template starburst spectrum, comprises instrumental resolution effects, the effects of blending on many of the lines and uncertainty due to small shifts in velocity offset between different emission lines. While we adopt the cross-correlation redshifts and conservative associated uncertainties for our analysis, we also consider the redshift derived from the observed wavelength of individual emission lines. Fitting gaussian profiles to the unblended H$\beta$ and \[OIII\] and the strong, but somewhat blended H$\alpha$ lines, we derive redshifts $z=0.0903\pm0.0006$, $0.0925\pm0.0006$ and $0.0930\pm0.0003$ for the GRB host and components A and B of the companion respectively, where the error now represents the scatter between individual line centroids on each source rather than including other uncertainties. 
These imply velocity offsets of $\Delta v = 150\pm155$kms$^{-1}$ (i.e. no significant offset) between the two components of the companion and $\Delta v = 576\pm155$kms$^{-1}$ between the host and the companion. This velocity offset places the galaxy pair just outside (although within one standard deviation of) the criteria used to select galaxy pairs in the SDSS by @2008AJ....135.1877E, who placed a cutoff for their sample at $\Delta v = 500$kms$^{-1}$. Those authors recognise however that this cutoff requires a trade-off between contamination and completeness, with genuine pairs observed out to $\Delta v \sim 600$kms$^{-1}$ separations [@2008AJ....135.1877E; @2000ApJ...536..153P]. @2008AJ....135.1877E identified an enhancement in star formation rate for pairs with projected separations $<30-40$kpc, a criterion easily satisfied by the companion in this case (16$''$ represents $\sim27$kpc at this redshift), suggesting that the star formation observed in both host and companion is likely influenced by their proximity. In figures \[fig:specha\] and \[fig:spechb\], we present the spectral regions in the GRB host galaxy associated with line ratios used to classify an ionising spectrum (see section \[sec:disc\]). At this spectral resolution, H$\alpha$ is blended with the \[N[II]{}\] doublet, and a fit to the three lines must be obtained simultaneously in order to measure their line strengths. With the exception of close doublets (i.e. \[O[II]{}\], \[S[II]{}\]) the other lines in the spectrum are all comparatively unblended, and all lines are consistent with being essentially unresolved at the instrumental resolution.
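A quick internal consistency check on the blended-line fits: the \[N[II]{}\]$\lambda6583/\lambda6548$ ratio is fixed by the transition probabilities at $\approx 3$, independent of the gas conditions. A short sketch testing the tabulated equivalent widths (values as given in Table \[tab:lines\]) against this expectation:

```python
import math

# [N II] 6583 and 6548 equivalent widths (value, error) from Table 1,
# for the host and the two components of the neighbouring galaxy.
nii = {
    "host":        ((31.5, 0.7), (10.3, 0.2)),
    "neighbour A": ((4.9, 0.2), (1.6, 0.1)),
    "neighbour B": ((12.5, 1.2), (4.1, 0.4)),
}

for name, ((f_6583, e_6583), (f_6548, e_6548)) in nii.items():
    ratio = f_6583 / f_6548
    # Propagate the fractional errors in quadrature.
    err = ratio * math.hypot(e_6583 / f_6583, e_6548 / f_6548)
    print(f"{name}: 6583/6548 = {ratio:.2f} +/- {err:.2f}")
# All three ratios come out at ~3.05, consistent with the theoretical value.
```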
![image](plot_2D_spec.ps){width="1.97\columnwidth"}

  Line            Host              Neighbour A      Neighbour B
  --------------  ----------------  ---------------  ---------------
  O[II]{}3726     61.9 $\pm$ 8.0
  O[II]{}3729     31.0 $\pm$ 4.0
  H$\gamma$       6.5 $\pm$ 0.4
  H$\beta$        16.8 $\pm$ 1.0    3.2 $\pm$ 1.3    22 $\pm$ 9
  O[III]{}4959    5.8 $\pm$ 0.3     4.1 $\pm$ 1.7    27 $\pm$ 11
  O[III]{}5007    17.1 $\pm$ 1.0    12.4 $\pm$ 5.0   77 $\pm$ 32
  He I 5875       2.8 $\pm$ 0.1
  O[I]{}6300      6.1 $\pm$ 0.2
  N[II]{}6548     10.3 $\pm$ 0.2    1.6 $\pm$ 0.1    4.1 $\pm$ 0.4
  H$\alpha$       103.1 $\pm$ 2.4   31.8 $\pm$ 1.5   116 $\pm$ 12
  N[II]{}6583     31.5 $\pm$ 0.7    4.9 $\pm$ 0.2    12.5 $\pm$ 1.2
  S[II]{}6716     13.3 $\pm$ 0.5
  S[II]{}6730     13.0 $\pm$ 0.5

  : Line strengths measured for the target objects. All measures are given as observed-frame equivalent widths in Angstroms. Measurement of weak lines is not attempted in the fainter neighbour, and it is impossible to isolate the two components in the \[O[II]{}\] line.\[tab:lines\]

![image](spec_080517.ps){width="1.97\columnwidth"}

![image](spec_neighbour.ps){width="1.97\columnwidth"}

![The spectral region containing H$\alpha$ and the \[N[II]{}\] doublet. All three lines are consistent with being unresolved at the instrumental FWHM. The relative strength of the doublet lines is consistent with that predicted from the electron transition probabilities. \[fig:specha\]](spec_080517_ha.ps){width="0.98\columnwidth"}

![The spectral region containing H$\beta$ and the \[O[III]{}\] doublet.\[fig:spechb\]](spec_080517_hb.ps){width="0.98\columnwidth"}

The relative intensity of the nebular emission in hydrogen Balmer lines allows us to make an estimate of the dust extinction in the actively star forming region of the GRB host galaxy. The flux in H$\alpha$ is expected to scale relative to H$\beta$ by a ratio determined by the temperature and electron density of the emitting region.
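Under a Calzetti-like attenuation law this decrement converts to a colour excess as $E(B-V) = 2.5\log_{10}\left[(H\alpha/H\beta)_{\rm obs}/(H\alpha/H\beta)_{\rm int}\right]/\left[k(H\beta)-k(H\alpha)\right]$. A minimal Python sketch of the conversion follows; the observed ratio fed in at the end is a placeholder chosen only to illustrate the scale of the effect, not a measurement from the spectrum:

```python
import math

R_V = 4.05  # Calzetti et al. (2000) starburst attenuation law

def k_calzetti(wavelength_um):
    """Calzetti (2000) attenuation curve k(lambda); wavelength in microns."""
    x = 1.0 / wavelength_um
    if 0.63 <= wavelength_um <= 2.20:
        return 2.659 * (-1.857 + 1.040 * x) + R_V
    if 0.12 <= wavelength_um < 0.63:
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x ** 2
                        + 0.011 * x ** 3) + R_V
    raise ValueError("wavelength outside the Calzetti law validity range")

def ebv_from_balmer(observed_ratio, intrinsic_ratio=2.87):
    """E(B-V) from the observed Halpha/Hbeta flux ratio, assuming the
    Case B intrinsic ratio of 2.87 (T = 10^4 K, low-density limit)."""
    k_hbeta = k_calzetti(0.4861)   # ~4.60
    k_halpha = k_calzetti(0.6563)  # ~3.33
    return 2.5 * math.log10(observed_ratio / intrinsic_ratio) / (k_hbeta - k_halpha)

# An observed decrement of ~11.7 (placeholder value) would correspond to
# E(B-V) ~ 1.2, the level of obscuration discussed in the text.
print(f"E(B-V) = {ebv_from_balmer(11.7):.2f}")
# -> E(B-V) = 1.20
```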
Standard assumptions for these parameters in HII regions (T=10,000K, low density limit) yield the widely applied expected ratio of 2.87 [see @2006agna.book.....O]. In figure \[fig:specratio\], we scale the line fluxes from the GRB host galaxy accordingly, such that, for Case B recombination, we would expect each line to have the same relative intensity as H$\beta$. It is clear that the Balmer series shows a decrement in the later lines, most likely attributable to dust. Given a Calzetti-like dust law [$R_V=4.05$, @2000ApJ...533..682C], the measured H$\alpha$/H$\beta$ ratio implies an extinction of flux from the nebular emission region of the GRB host galaxy of E($B-V$)=1.2. Of course, uncertainty arises in whether the HII region parameters and extinction law adopted are indeed appropriate for GRB host galaxies. @2011MNRAS.414.2793W explored a grid of temperatures and dust extinction laws for fitting the H[I]{} Balmer series in example spectra and suggested that in the case of the GRB060218 host a higher temperature and steeper extinction law (T=15,000K, $R_V=4.5$) might be appropriate, while the host of GRB100316D is best fit with T$\sim7500$K and $R_V=3.5$. The Calzetti dust law lies between that inferred from these two examples, as does our adopted temperature. More detailed spectroscopy (with fainter Balmer lines, and ideally also the He II recombination series) would be required to reach a tighter constraint.

![A comparison of the Balmer series emission lines in the GRB host galaxy, scaled by the appropriate line ratios for Case B H[I]{} recombination (T=$10^4$K, low density, Osterbrock & Ferland 2006), such that in the absence of dust, all peaks would be expected to match H$\beta$ in intensity. The line intensities are offset from zero for clarity. Even accounting for blending with the \[N[II]{}\] doublet, H$\alpha$ still shows a relative excess, consistent with the blue-wards lines being attenuated by a dusty line of sight.
\[fig:specratio\]](spec_080517_hratios2.ps){width="0.98\columnwidth"}

We also obtain a tentative spectroscopic redshift for an unrelated galaxy falling on the long slit. The galaxy, located at RA and Declination 06$^h$48$^m$59.756$^s$ +50$^\circ$44$'$23.50$''$ (J2000), lies at $z=0.56$, based on identification of an emission feature as the \[O[II]{}\] 3727Å doublet.

Radio Observations {#sec:radio}
------------------

The low redshift confirmed for this GRB host makes it an ideal candidate for radio observation. The majority of radio observations of GRB hosts to date have resulted in non-detections, implying star formation rates that do not significantly exceed their UV-optical estimates [e.g. @2012ApJ...755...85M; @2010MNRAS.409L..74S; @radiopaper]. However, some fraction of GRB hosts appear to be luminous in the submillimetre-radio [@2003ApJ...588...99B; @2004MNRAS.352.1073T], particularly amongst those that show evidence for strong dust extinction [i.e. dark bursts, @2013ApJ...778..172P; @2014arXiv1407.4456P]. Radio observations of the GRB080517 host galaxy were performed with the Westerbork Synthesis Radio Telescope (WSRT) at 4.8GHz on 2014 May 2 and May 3 UT, i.e. almost 6 years after the gamma-ray trigger. We used the Multi Frequency Front Ends [@tan1991] in combination with the IVC+DZB back end in continuum mode, with a bandwidth of 8x20 MHz. Gain and phase calibrations were performed with the calibrator 3C147. The data were analyzed using the Multichannel Image Reconstruction Image Analysis and Display [[MIRIAD]{}; @1995ASPC...77..433S] package. Both observations resulted in a detection of a source at the position of GRB080517, with consistent flux densities. We have measured the flux density in an image of the combined data set as $S_{4.8\,GHz}=$0.22$\pm$0.04mJy. The detection is consistent with a point source, in observations with a synthesised beam of $14.2\times5.3''$ as shown in figure \[fig:radim\].
No significant detection is made of the neighbour galaxy - somewhat surprisingly given its high inferred star formation rate (based on H$\alpha$ emission, see section \[sec:disc\]). ![The 4.8GHz radio flux measured at the WSRT (contours), overlaying the compact GRB host galaxy (lower left) and its near-neighbour (upper right) in the Sloan-$r$ band (greyscale). The contours indicate levels of zero flux (dotted) and +2, 3, 4 and 5$\sigma$. There are no signals below $-2$$\sigma$ in this region. The burst location is indicated with a cross. \[fig:radim\]](radio_targets.ps){width="0.95\columnwidth"} Reassessing GRB080517 {#sec:reassess} ===================== Given the identification of a redshift for the host galaxy of GRB080517, we are able to reassess the properties of the burst and its immediate afterglow in the context of an accurate distance (and thus luminosity) measurement, allowing for more meaningful comparison with the rest of the GRB population. Burst properties {#sec:grb} ---------------- At $z=0.09$ the inferred isotropic equivalent energy of GRB080517 is only $E_{iso} = (1.03 \pm 0.21) \times 10^{49}$ ergs, while its 10 hour X-ray luminosity is $L_X \sim 10^{42}$ ergs s$^{-1}$. Both of these values lie orders of magnitude below the expectations for most GRBs, which have characteristic values of $E_{iso} \sim 10^{52-54}$ ergs [@2008ApJ...680..531K; @2009ApJ...693.1484C] and $L_X \sim 10^{45-47}$ ergs s$^{-1}$ [@2006ApJ...642..389N]. These properties mark GRB080517 as a member of the observationally rare population of low luminosity GRBs (LLGRBs). Only a handful of such low luminosity events have been identified in the past decade, all of which have been relatively local (given the difficulty in observing low luminosity bursts beyond $z\sim0.1$). 
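The luminosity figures above follow directly from the redshift under the adopted cosmology ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$). A self-contained Python sketch of the conversion (plain numerical integration rather than a cosmology library; the early-time 0.3-10 keV flux from section \[sec:initial\] is used for illustration, so the result is a $T_0+133$ s luminosity rather than the 10 hour value):

```python
import math

C_KM_S = 299792.458   # speed of light [km/s]
MPC_CM = 3.0857e24    # one megaparsec [cm]
H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7  # cosmology adopted in the paper

def e_of_z(z):
    """Dimensionless Hubble parameter E(z) for a flat LCDM cosmology."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def luminosity_distance_mpc(z, steps=1000):
    """Luminosity distance via trapezoidal integration of dz / E(z)."""
    h = z / steps
    integral = sum(0.5 * h * (1.0 / e_of_z(i * h) + 1.0 / e_of_z((i + 1) * h))
                   for i in range(steps))
    return (1.0 + z) * (C_KM_S / H0) * integral

z = 0.09
d_l_mpc = luminosity_distance_mpc(z)            # ~411 Mpc
d_l_cm = d_l_mpc * MPC_CM

# Flux to luminosity: L = 4 pi d_L^2 F, with the early-time XRT flux.
flux = 2.52e-10                                 # erg cm^-2 s^-1 at T0 + 133 s
l_x_early = 4.0 * math.pi * d_l_cm ** 2 * flux  # ~5e45 erg/s

# Angular scale at this redshift (d_A = d_L / (1+z)^2):
kpc_per_arcsec = (d_l_mpc / (1.0 + z) ** 2) * 1000.0 * math.radians(1.0 / 3600.0)
# ~1.7 kpc/arcsec: a 1.6 arcsec galaxy is ~2.7 kpc across, and the
# companion's 16 arcsec offset corresponds to ~27 kpc in projection.
```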
These include the well studied GRB-supernova pairs GRB980425/SN 1998bw [@1998Natur.395..670G], GRB031203/SN 2003lw [@2004ApJ...609L...5M; @2004Natur.430..648S], GRB 060218/SN 2006aj [@2006Natur.442.1011P] and GRB100316D/SN 2010bh [@2011MNRAS.411.2792S; @2011ApJ...740...41C], and the enigmatic GRBs 060505 and 060614, where associated SNe have been ruled out to deep limits, and whose origin remains mysterious [@2006Natur.444.1047F; @2006Natur.444.1050D]. They may be low luminosity events akin to those above, but where the SN is absent [e.g. @2006Natur.444.1053G], or they could be GRBs with a similar physical origin to the short-GRBs [most likely NS-NS mergers based on recent observations, @2006Natur.444.1044G; @2013Natur.500..547T], in which case their luminosities would be more typical of their population. GRB 080517 adds a further example to these local, low luminosity events. Its highly star forming host galaxy (as discussed later) is perhaps most in keeping with the expectations of long-GRBs, although its high stellar mass and metallicity would be unusual at such low redshift [@2013ApJ...774..119G]. The prompt emission from GRB080517, as reported by the [*Swift*]{}/BAT instrument, shows a single ‘fast rise, exponential decay’ [FRED @1995PASP..107.1145F] lightcurve, albeit at low signal to noise. In this respect, its profile is similar to that of low luminosity GRBs 031203 and 980425 [@2007ApJ...654..385K], although the profile is not unusual amongst GRBs more generally [@1996ApJ...473..998F]. Interestingly, within the low luminosity population there appears to be a good deal of internal diversity. The [*Swift*]{}-identified low luminosity events to date – GRB060218 [@2006Natur.442.1008C] and GRB100316D [@2011MNRAS.411.2792S] – appear to be of extremely long duration (2000s in the case of GRB060218) with extremely smooth light curves. They are also very soft events in which the X-ray emission exceeds that in the $\gamma-$ray (so called X-ray Flashes).
In contrast, the pre-[*Swift*]{} events (GRB980425 and GRB031203) appear to be much closer in prompt properties to classical GRBs, exhibiting shorter durations (tens of seconds) and relatively hard $\gamma$-ray spectra. Although there have been suggestions that GRB 031203 was a softer X-ray event when integrated over longer time periods, these are based on inferences from its X-ray echo [@2004ApJ...603L...5V], and favour a soft component arising after the initial burst [@2004ApJ...605L.101W]. In retrospect this is most likely an X-ray flare, of the kind commonly seen in [*Swift*]{} X-ray afterglows [@2006ApJ...636..967W; @2006ApJ...642..389N]. Hence we consider the $E_p$ measured via [*INTEGRAL*]{} as likely indicative of the true burst $E_p$ [@2004Natur.430..646S]. In this case, both GRB 980425 and GRB 031203 lie well away from the correlation between the peak of the $\nu F_{\nu}$ spectrum ($E_p$) and $E_{iso}$. GRB080517 appears to have much more in common with these pre-[*Swift*]{} events, with a $T_{90} = 65\pm27$s and a hard photon index of $\Gamma \sim 1.5$. While $E_p$ is difficult to directly constrain with the limited BAT bandpass, the Bayesian method of @2007ApJ...663..407B suggests that $E_p > 55$ keV, making GRB080517 a significant outlier in the $E_p$ – $E_{iso}$ relation, with a similar location to GRB980425 and GRB031203 (see figure \[fig:amati\_rel\]). Its recovery, some 6 years after the initial detection, implies that other, similar, low luminosity events may be present within the [*Swift*]{} catalog, since a significant number of bursts have not been followed in depth due to observational constraints. However, GRB080517 was unusual in having a bright catalogued source within its error box, a relatively rare occurrence. In this context, it should be noted that the host of GRB080517 is relatively luminous for a GRB host.
By contrast, the hosts of GRB980425 and GRB060218 would have had observed magnitudes of $r\approx$20 and $r\approx$22.5 at $z=0.1$ and so would not be readily cataloged in DSS and similar survey observations, suggesting that the presence of a catalogued source is not necessarily a good indicator of event frequency. ![The location of GRB080517 on the $E_p$ – $E_{iso}$ (Amati) relation, given its redshift of $z=0.09$. Black points indicate long GRBs, while those in grey are the short GRB population. GRB080517 and previously identified low luminosity bursts are labelled. The burst lies in an unusual region of parameter space, well below the commonly seen relation for GRBs, placing it in the class of low redshift, low luminosity bursts. \[fig:amati\_rel\]](plot_amati.ps){width="1.05\columnwidth"} Afterglow reanalysis {#sec:afterglow} -------------------- Making use of data analysis tools available from the UK Swift Data Centre[^3], specifying the burst redshift as matching that determined for the host, we have reanalysed the Swift XRT afterglow spectrum and early time series data. Allowing for absorption at the host redshift only slightly modifies the hydrogen column density required to fit the burst X-ray spectrum observed by [*Swift*]{}. The effect on the late time spectrum (mean photon arrival time = T0+25863, where T0 is the [*Swift*]{} trigger time) is negligible, not modifying the required intrinsic absorption from the N$_H=6^{+4}_{-5}\times10^{21}$cm$^{-2}$ in excess of the estimated Galactic absorption estimated with the absorber at $z=0$ (where the errors on N$_H$ from Swift are 90% confidence rather than 1$\sigma$ intervals). The early time PC-mode data, with a mean photon arrival of T0+9559s, yields a lower (but consistent) estimated intrinsic absorption (N$_H=3.4^{+2.4}_{-2.0}\times10^{21}$cm$^{-2}$), and a photon index of 1.9$\pm$0.4. 
Optical/ultraviolet imaging was also obtained by the [*Swift*]{}/UVOT instrument, from first acquiring the field to the end of observations at T0+19hrs. Data were obtained in 6 bands ($V$, $B$, $U$, $UVW1$, $UVM2$ and $UVW2$), and photometric imaging was obtained in each band at intervals throughout this early period. As figure \[fig:uvotearly\] demonstrates, there is little evidence of temporal variation in five of the six bands. Each is consistent with a constant flux. There is a hint of declining flux in the last observations taken in the reddest, $V$, band, but the substantial uncertainties on these data preclude a firm identification of afterglow flux. No other band shows a comparable decline during the observation interval. ![Early time optical and ultraviolet photometry from [*Swift*]{}/UVOT. The flux density in the $B$ and $U$-band and the three ultraviolet wavebands shows little evidence of variation. There is a hint of declining flux in the $V$-band, but given the large photometric errors in this band, any decline is difficult to constrain with any accuracy. \[fig:uvotearly\]](lightcurve_plot2.ps){width="0.99\columnwidth"} ![Comparison of averaged early time optical and ultraviolet photometry from [*Swift*]{}/UVOT (red, crosses), with the late time observations of the host from other sources (see section \[sec:sed\]). The UVOT observations are averaged from T0+0hr to T0+19hr. The late time observations were taken at several years post-burst. Nonetheless, the UVOT observations are consistent (within the photometric errors) with the host galaxy data, suggesting that any afterglow was below the UVOT detection limit. \[fig:uvotcomp\]](comp_early_phot.ps){width="0.99\columnwidth"} In figure \[fig:uvotcomp\], we compare this early time UVOT data, now integrated across the 19 hour observation assuming no temporal variation, with the late time host galaxy data described in sections \[sec:whtim\] and \[sec:sed\].
While observations in $U$ and $B$ are not available at late times, the measured flux in the early time integrated $V$ band and near-ultraviolet bands are consistent with that in late time observations of the host galaxy. With the exception of possible variation in the $V$-band data, no afterglow is detected within the photometric errors, suggesting that any optical supernova was at or below the UVOT detection limit. Taking the 1$\sigma$ upper limit on the early time photometry, and subtracting off the late time galaxy flux (see below), we constrain the optical afterglow to $F_\lambda<2\times10^{-17}$ergs s$^{-1}$ Å$^{-1}$ at T0+4000s, measured at 5500Å. The [*Swift*]{}-detected X-ray flux at the same epoch (T0$\sim5227^{+1471}_{-1016}$s), was $(3.1\pm0.8)\times10^{-13}$ergs s$^{-1}$ cm$^{-2}$ in the 0.3-10keV band. Comparing these yields a limit on X-ray to optical ratio, $\beta_{OX}<1.0$. Finally, we have reexamined the early time data from the Liverpool Telescope (LT) observations. Smith et al. (2008, GCN7743) reported limits for non-detection of an optical transient, based on the assumption that it was not coincident with the known source. LT observations commenced at 21:34:05 UT, 674s after the burst trigger. The data comprised imaging of the field in SDSS-$r'$, $i'$ and $z'$ bands, with 120s individual exposures, the last of which ended at T0+2290s. We confirm that there is no evidence for an early-time excess due to afterglow flux in this source, either in subtractions against our late-time WHT imaging or in pairwise subtraction of early-time exposures. These data provide a relatively weak constraint, but at an early time, enabling us to limit the flux at T0+900s to $F_\lambda<2\times10^{-16}$ergs s$^{-1}$ Å$^{-1}$ at 7500Å. Comparing to the [*Swift*]{}-detected X-ray flux at the same epoch, we determine an identical limit to that from the [*Swift*]{} optical data alone – $\beta_{OX}<1.0$.
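The $\beta_{OX}$ limit can be reproduced from the numbers above. The sketch below adopts the common convention of comparing flux densities between the optical band and 3 keV, and assumes a photon index of 2 (i.e. $f_\nu\propto\nu^{-1}$) to convert the 0.3-10 keV band flux into a flux density, consistent with the measured 1.9$\pm$0.4; these conventions are assumptions of the sketch rather than values restated in the text.

```python
import math

# Optical limit F_lambda < 2e-17 erg/s/cm^2/A at 5500 A, as f_nu:
lam_cm = 5500e-8                  # 5500 A in cm
c = 2.998e10                      # cm/s
f_lam = 2e-17 * 1e8               # erg/s/cm^2/cm  (1 A = 1e-8 cm)
f_nu_opt = f_lam * lam_cm**2 / c  # erg/s/cm^2/Hz

# 0.3-10 keV band flux -> flux density at 3 keV, assuming f_nu = K/nu:
F_x = 3.1e-13                     # erg/s/cm^2
kev_hz = 2.418e17                 # Hz per keV
nu1, nu2, nu3 = 0.3 * kev_hz, 10 * kev_hz, 3 * kev_hz
K = F_x / math.log(nu2 / nu1)     # since the band flux is K*ln(nu2/nu1)
f_nu_x = K / nu3

nu_opt = c / lam_cm
beta_ox = math.log10(f_nu_opt / f_nu_x) / math.log10(nu3 / nu_opt)
print(f"beta_OX limit ~ {beta_ox:.2f}")
```

The result is $\approx1.0$, matching the limit quoted above to rounding.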
These limits are formally too weak to satisfy the $\beta_{\rm{OX}}$ criterion that is applied to select dark bursts [@2004ApJ...617L..21J; @2009ApJ...699.1087V]. Thus the non-detection of the afterglow is consistent with either a ‘dark’ or ‘normal’ interpretation. However, we note that this burst shows the high column and red, dusty host more common amongst the dark population [e.g. @2014arXiv1402.4006H; @2013ApJ...778..128P]. Additional $V$-band time series data for this target exists in the second data release of the Catalina Real-time Transient Survey [CRTS, @2009ApJ...696..870D]. Both the GRB host and neighbour are well detected. Unfortunately the burst itself occurred during a hiatus in CRTS observing, with no data available until 140 days after the GRB trigger (likely due to Sun avoidance). However, as figure \[fig:catalina\] demonstrates, the time series data of the GRB host shows no strong evidence for variability (although a few photometric points are outliers), and allows a precise measurement of the optical magnitude of the host galaxy, $V_{AB}=17.60\pm0.08$. We also investigate the late time optical afterglow, and find no statistically significant evidence for an excess over the host galaxy flux at T0+160 days. ![Optical time-series data from the Catalina Real-time Transient Survey for the host galaxy of GRB 080517. There is no evidence for strong variability in the host galaxy, allowing the time series data to be combined to determine a precise magnitude for the object. In the lower left panel we consider the data closest to the burst date (MJD=54603) in detail, averaging the data points on each night on which the host was observed.
While the first three data points lie above the mean measured magnitude, they are within one standard deviation of the time-series mean (shown with the shaded region).\[fig:catalina\]](plot_catalina2.ps){width="0.99\columnwidth"} The Spectral Energy Distribution {#sec:sed} ================================ Our WHT optical imaging in the SDSS $g$, $r$, and $i$ bands (described in section \[sec:whtim\]) is supplemented by extensive archival data on these relatively bright sources. In addition to the $V$-band data from the Catalina Real-time Transient Survey described in section \[sec:afterglow\], we compile archival data in the ultraviolet from the [*GALEX*]{} [GR6, @2003SPIE.4854..336M] survey and in the near-infrared from the Two Micron All-Sky Survey [2MASS, @2006AJ....131.1163S] as well as the [*Wide-Field Infrared Survey Explorer*]{} [WISE, @2010AJ....140.1868W]. Both the GRB host and its merging neighbour are detected in the majority of bands from the near-ultraviolet (NUV, 0.15$\mu$m) through to the mid-infrared (W4, 22$\mu$m). While flux from the neighbour is clearly dominated by two principal components in our WHT imaging, these are blended in the remaining data, and we do not attempt to gauge the relative contribution of the two components. Photometry was measured on the images at the source locations, and checked against catalog magnitudes where these were available. For the W3 and W4 bands (at 12 and 22$\mu$m) where the host and its neighbour are blended in the imaging data, we use magnitudes derived from the ‘ALLWISE’ catalogue values (corrected to AB magnitudes) rather than attempting an independent deblending of the two sources. As noted in the introduction, it is the exceptional brightness of this target in the W3 and W4 bands that initially motivated the follow-up observations described here.
While the sources are blended in the W4 band, the light distribution of the host galaxy is distorted, such that it is likely that both the GRB host galaxy and its near neighbour are luminous at 22$\mu$m. The multi-wavelength photometry of both the GRB host and its neighbour is given in table \[tab:phot\] and figure \[fig:allbands\] presents snapshots of the host and neighbour in imaging across the full wavelength range. At $z=0.09$, the $g$-band absolute magnitude is M$_g=-20.12\pm0.05$ (comparable to that of the Milky Way).

  Band    $\lambda_\mathrm{cen}$ / Å   Source      GRB Host           Neighbour
  ------- ---------------------------- ----------- ------------------ ------------------
  $FUV$   1540                         GALEX       20.84 $\pm$ 0.20   20.92 $\pm$ 0.22
  $NUV$   2316                         GALEX       20.42 $\pm$ 0.11   20.74 $\pm$ 0.16
  $g $    4660                         This work   18.03 $\pm$ 0.05   18.42 $\pm$ 0.06
  $V $    5500                         CRTS        17.60 $\pm$ 0.09   18.14 $\pm$ 0.12
  $r $    6140                         This work   17.73 $\pm$ 0.02   18.18 $\pm$ 0.05
  $i $    7565                         This work   17.46 $\pm$ 0.01   18.22 $\pm$ 0.02
  $J $    12400                        2MASS       17.37 $\pm$ 0.13   17.91 $\pm$ 0.21
  $H $    16600                        2MASS       17.22 $\pm$ 0.15   19.56 $\pm$ 0.95
  $Ks$    21600                        2MASS       17.50 $\pm$ 0.20   17.91 $\pm$ 0.26
  $W1$    33500                        WISE        17.33 $\pm$ 0.04   18.49 $\pm$ 0.05
  $W2$    46000                        WISE        17.57 $\pm$ 0.05   19.05 $\pm$ 0.12
  $W3$    120000                       WISE        15.09 $\pm$ 0.05   17.12 $\pm$ 0.32
  $W4$    220000                       WISE        13.68 $\pm$ 0.11   14.78 $\pm$ 0.23
  Radio   6cm                          WSRT        0.22$\pm$0.04mJy

  : Observed photometry measured from broadband observations of the GRB host and its neighbouring galaxy. All magnitudes are given in the AB system. Note that the neighbouring galaxy is undetected in the 2MASS $H$-band and barely detected in $Ks$. Radio fluxes are described in section \[sec:radio\].
In the 12 and 22$\mu$m bands we make use of WISE catalog data rather than attempting to deblend the two sources independently.\[tab:phot\] ![image](image_080715l.ps){width="1.5\columnwidth"} We fit the spectral energy distribution (SED) of the host galaxy using a template fitting approach, minimising the $\chi^2$ parameter to determine the best fit age, mass, star formation history and dust extinction. While we could constrain the dust parameter using the extinction derived from the hydrogen Balmer lines, we allow it to vary, recognising that regions contributing to the continuum flux at long wavelengths and those contributing nebular emission flux in the optical may well differ in their extinction properties. We use as templates the Binary Population and Spectral Synthesis ([BPASS]{}) stellar population models of @2012MNRAS.419..479E [@2009MNRAS.400.1019E] which include a prescription for the nebular emission excited by the stellar continuum. The [BPASS]{} models consider the instantaneous-burst and constant star formation rate cases. In both cases, we modify the templates using the @2000ApJ...533..682C dust extinction law. This was derived for local infrared-luminous galaxies with active star formation, and would appear appropriate in this case, given the bright infrared fluxes measured. The photometry shows a challenging combination of a very flat optical spectrum (which implies little extinction and potentially even a non-stellar continuum) and evidence for strong dust emission in the infrared (see figure \[fig:bpass\]). The [BPASS]{} population synthesis models include a treatment of stellar evolution pathways through binary evolution. For young stellar populations, this treatment results in a relatively blue UV-optical continuum at a given age (and hence larger energy budget for heating dust) compared to models which neglect such pathways.
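For each template, the fitting step described above reduces to an analytic minimisation of $\chi^2$ over the flux normalisation, followed by a comparison across templates. The sketch below illustrates this with invented fluxes in arbitrary units (not the paper's photometry); the template names echo the cases discussed in the text but the numbers are purely illustrative.

```python
def chi2_fit(obs, err, template):
    """Scale a template to photometry: the best-fit normalisation is the
    analytic minimum a* = sum(f*t/s^2) / sum(t^2/s^2); return (a*, chi^2)."""
    num = sum(f * t / s**2 for f, t, s in zip(obs, template, err))
    den = sum(t**2 / s**2 for t, s in zip(template, err))
    a = num / den
    chi2 = sum(((f - a * t) / s)**2 for f, t, s in zip(obs, template, err))
    return a, chi2

# Illustrative broadband fluxes and errors (arbitrary units):
obs = [1.0, 1.2, 1.5, 2.0]
err = [0.1, 0.1, 0.15, 0.2]
templates = {"young burst":    [0.5, 0.7, 1.0, 1.6],
             "post-starburst": [0.52, 0.62, 0.77, 1.02]}
best = min(templates, key=lambda k: chi2_fit(obs, err, templates[k])[1])
print("best template:", best)
```

In practice the same loop runs over a grid of template ages, star formation histories and extinctions, with the extinction applied to the template before scaling.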
To model the re-emission of thermal photons at longer wavelengths, we adopt the energy-balance prescription of @2008MNRAS.388.1595D, and re-emit the energy lost from the UV-optical as a combination of black body and PAH emission components. For the latter we scale the composite mid-infrared spectrum determined for star forming galaxies by @2007ApJ...656..770S. The W3 and W4 bands were excluded from the fitting procedure, in order to assess whether the fit to the observed UV-optical continuum was able to correctly predict the mid-infrared flux, or whether an additional, heavily dust-extincted component was required. The best fitting single SED model to the host photometry is not one dominated by nebular emission or continuous star formation, but rather a post-starburst template, observed 500Myr after the initial starburst, as shown in figure \[fig:bpass\]. The derived stellar mass is log$_{10}$(M$_\ast$/M$_\odot$)=9.58$^{+0.12}_{-0.16}$, and a relatively low extinction of A$_V$=0.16$\pm$0.02 is required for the dominant stellar component. This model reproduces the GRB host galaxy’s ultraviolet and infrared continua, while underestimating its optical flux by $\sim$25-50%. We note that fitting instead with the [@2005MNRAS.362..799M] stellar population synthesis models returns similar parameters, albeit with a relatively poor fit to the data. The mass, low extinction and age of the dominant stellar component imply that, for $z=0.09$, the GRB host represents a relatively young, slightly sub-L$^\ast$ galaxy. As section \[sec:spec\] demonstrated, there is a substantial contribution to the optical from line emission, which may account for some part of this discrepancy. In the $r$-band in particular, line emission likely contributes a minimum of 12% of the broadband flux.
To address this, we also allow an additional component of continuous ongoing star formation with moderate dust extinction (as seen in the Balmer series), while cautioning that this may be overfitting the limited data. We find that the star forming component required for the best fit combination contributes a mere 0.1% of the stellar mass of the system. Including such a component improves the fit in the optical and at 12 and 22$\mu$m, but causes the flux in the $K_S$ band and 4.6$\mu$m W2 band to be somewhat overestimated (again, by a factor of $\sim$50%). Since the latter lies at the transition between the stellar and dust components of the template, it is possible that this transition is not correctly addressed in the modelling, and potentially that a steeper spectral index is required in the infrared PAH emission region than is seen in the IR-luminous galaxy composite used [@2007ApJ...656..770S]. Explanations for the comparatively low flux measured in the 2MASS $K_S$ band, and the non-detection of the neighbour in the $H$-band (see figure \[fig:allbands\]), are less clear. It is likely that deeper near-infrared photometry is required to address this issue. We note that the stellar population templates fail to entirely reproduce the high fluxes seen in the 22$\mu$m W4 band, implying that at least some fraction of the stellar emission in this system is heavily extincted and undetectable in the UV-optical. Further observations at millimetre/submillimetre wavelengths will also be required to properly constrain this emission region. ![The best fitting model templates to the host galaxy photometry. The pale cyan line indicates the best fitting single template: a mature system observed 0.5Gyr after an initial starburst. 
In red we show the best fit derived when a component of ongoing star formation (at age 1Myr, contributing just 0.1% of the stellar mass) is allowed in addition to the dominant model.\[fig:bpass\]](plot_best_bpass.ps){width="1.05\columnwidth"} Star Formation in the host of GRB080517 {#sec:sfr} ======================================= The host of GRB080517 is an actively star forming galaxy at $z=0.09$. The evidence for ongoing star formation is overwhelming, based on the presence of i) strong H$\alpha$ (and other Balmer) emission lines, ii) GALEX FUV and NUV flux, iii) 22$\mu$m emission and iv) a 4.8GHz radio detection, not to mention the initial selection through detection of a core-collapse gamma-ray burst. In table \[tab:sfrs\] we compare the star formation rates (SFRs) derived from these different proxies. In all cases, the star formation rate conversion used is subject to significant systematic uncertainty, but those at 0-22$\mu$m are derived primarily for young ($<100$Myr) stellar populations with continuous star formation, while that for the radio continuum is based primarily on resolved measurements of nearby star forming galaxies. No attempt is made to correct for dust extinction, which may well be affecting different indicators differently, and so these values are effectively lower limits on the total star formation rate. Unsurprisingly, the near-ultraviolet continuum (which is most affected by the presence of dust extinction) gives the lowest estimate of 0.43$\pm$0.07M$_\odot$yr$^{-1}$. In the @2000ApJ...533..682C extinction paradigm, the continuum is subject to 0.44 times the extinction in the nebular component, or E($B-V$)=0.53. This corresponds to a measured 2300Å flux only 2% of its intrinsic value. In fact, the emission in the GALEX band appears to be associated with the mature stellar population (see section \[sec:sed\]) rather than the heavily-extincted star forming component. 
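The quoted 2% transmitted fraction at 2300Å follows directly from the @2000ApJ...533..682C starburst curve. The sketch below applies the published $k(\lambda)$ polynomial (valid for 0.12-0.63$\mu$m) together with the 0.44 continuum-to-nebular scaling used above; the only inputs are values already stated in the text.

```python
def calzetti_k(lam_um):
    """Calzetti et al. (2000) starburst attenuation curve k(lambda),
    valid for 0.12 <= lambda < 0.63 micron (lambda in microns)."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05

ebv_gas = 1.2                 # nebular E(B-V) from the Balmer decrement
ebv_star = 0.44 * ebv_gas     # the continuum sees 0.44x the nebular value
a_2300 = calzetti_k(0.23) * ebv_star       # attenuation in magnitudes
transmitted = 10 ** (-0.4 * a_2300)        # surviving flux fraction
print(f"E(B-V)_star = {ebv_star:.2f}, A(2300A) = {a_2300:.1f} mag, "
      f"transmitted = {transmitted:.1%}")
```

This gives E($B-V$)$_{\rm star}\approx0.53$ and a transmitted fraction of $\approx2$%, as stated above.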
The star formation rates derived from the optical H$\alpha$ emission line and the 22$\mu$m continuum (measured in the W4 band) are consistent at the 1$\sigma$ level, each estimating rates around 16M$_\odot$yr$^{-1}$. Interestingly, this suggests that the 1.5arcsecond wide slit used for spectroscopy may have captured the majority of the flux from active star formation in the GRB host galaxy, despite its relatively large ($\sim$2arcsecond full-width at half-maximum) light distribution. Curiously, the final star formation rate indicator, that derived from the radio continuum at 4.8GHz, produces a relatively low estimate at 7.6$\pm$1.4M$_\odot$yr$^{-1}$, only about half that determined from the previous two measures, using the conversion rate determined by @2011ApJ...737...67M, and extrapolating to 1.4GHz using a radio spectral slope $\alpha=0.75$. An alternate conversion factor [@2002ApJ...568...88Y] yields a similar but still lower estimate (4M$_\odot$yr$^{-1}$). The synthesised beam of the WSRT at 4.8GHz is insufficient to have resolved out a significant fraction of the flux in this source, and the flux density is measured at better than 5$\sigma$, making it likely that this is a genuine decrement. Gigahertz frequency radio continuum in star forming galaxies arises primarily from non-thermal synchrotron emission, generated by the electrons accelerated by supernovae and their remnants. This introduces a time delay between the onset of star formation and the establishment of a radio continuum, the length of which will depend on the mass distribution, metallicity and other properties of the stellar population. As a result, the radio continuum flux density associated with ongoing star formation rises rapidly with age of the star forming population before stabilising at $>100$Myr.
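The radio star formation rate above can be reproduced as follows. The luminosity distance is an assumed value for a flat $\Lambda$CDM cosmology at $z=0.09$ ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$), and the @2011ApJ...737...67M calibration is taken as SFR $=6.35\times10^{-29}\,L_{1.4\,\rm GHz}$ with $L$ in erg s$^{-1}$ Hz$^{-1}$, applied after extrapolating the measured flux density to 1.4GHz with $\alpha=0.75$ as in the text.

```python
import math

D_L_CM = 1.27e27                 # ~412 Mpc in cm (assumed cosmology)

s_48 = 0.22e-26                  # 0.22 mJy in erg/s/cm^2/Hz
alpha = 0.75                     # radio spectral slope, S ~ nu^-alpha
s_14 = s_48 * (4.8 / 1.4)**alpha # extrapolate 4.8 GHz -> 1.4 GHz
L_14 = 4 * math.pi * D_L_CM**2 * s_14   # erg/s/Hz (k-correction ~ few %, ignored)

# Murphy et al. (2011) calibration:
sfr = 6.35e-29 * L_14
print(f"SFR(radio) ~ {sfr:.1f} Msun/yr")
```

This returns $\approx7$M$_\odot$yr$^{-1}$, consistent with the quoted 7.6$\pm$1.4 given rounding, the neglected k-correction and the choice of cosmology.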
If, then, the young 1Myr starburst suggested by the BPASS models presents a true picture of the ongoing star formation in this source, it is possible that a strong radio continuum has yet to become established and a radio flux to SFR conversion factor up to an order of magnitude higher might be appropriate. Future observations at further radio/submillimetre frequencies, and a measurement of the radio spectral slope, may help to constrain the effect of star formation history on this estimate.

  Proxy                     SFR/M$_\odot$yr$^{-1}$   Conversion factor
  ------------------------- ------------------------ ----------------------
  NUV continuum             0.43$\pm$0.07            Hao et al. (2011)
  H$\alpha$ line emission   15.5$\pm$0.4             Hao et al. (2011)
  22$\mu$m continuum        16.5$\pm$1.5             Lee et al. (2013)
  4.8GHz continuum          7.6$\pm$1.4              Murphy et al. (2011)

  : Star formation rates derived from different proxies observed for the host of GRB080517. In the radio we apply the conversion factor derived at 1.4GHz, assuming a radio spectral slope of 0.75. \[tab:sfrs\]

The Host and Environment of GRB080517 {#sec:disc} ===================================== The host of GRB080517 appears to be a compact, smooth galaxy in the local Universe. Given its UV-optical photometry, there would be little reason to expect significant ongoing star formation. Nonetheless, as the previous section describes, there is substantial evidence for an ongoing, young and fairly dusty (based on H$\alpha$/H$\beta$) starburst, which likely dominates emission at $>10\mu$m. In this context, the presence of a near neighbour (separated by only $\sim$25kpc at $z=0.09$) is intriguing. The two galaxies have comparable masses (the neighbour is just $\sim$0.5 magnitudes fainter than the GRB host and similar in colour), and any interaction between them will constitute a major incident in the history of both galaxies.
The gravitational forces caused by a near fly-by in the recent past could well have been sufficient to trigger the starburst detected in both sources, a starburst somewhat obscured by dust. It is also notable that the cores of both galaxies (including both components of the neighbour) and the GRB X-ray error circle lie along a common axis. Further modelling of the dynamics of this system, supported by integral field spectroscopy, and a search for low surface-brightness distortions in the morphology of both galaxies, will be necessary to confirm this picture, but the existing evidence is suggestive. Long-duration GRBs are typically associated with the peak light in their host galaxies, and with recent star formation [e.g. @2010MNRAS.tmp..479S]. However, the broadband continuum emission of the host of GRB080517 is dominated by a much older stellar population. If, then, we hypothesise that the GRB is associated with the recent episode of star formation in this system, we are left with the conclusion that it occurred in a dusty region (E($B-V$)=1.2) undergoing an intense starburst (SFR$\sim$16M$_\odot$yr$^{-1}$). Such dust extinction is consistent with the excess neutral hydrogen column inferred from the X-ray afterglow, and potentially with the failure of early-time optical observations to identify an optical transient distinguishable from the host galaxy. As we have already discussed, it is impossible to determine whether GRB080517 would indeed have met the ‘dark’ burst criterion if deeper observations were available. It is, however, possible to consider whether its host lies in a similar region of parameter space to known dark bursts. @2013ApJ...778..128P recently presented a systematic analysis of the dark GRB host galaxy population, examining both their afterglow properties and those derived from fitting of the host galaxy spectral energy distribution.
As figures \[fig:perley1\] and \[fig:perley2\] demonstrate, the inferred characteristics of the host of GRB080517 lie comfortably within the distribution of ‘dark’ burst hosts in terms of afterglow spectral index, hydrogen column density, stellar mass and inferred star formation rate, although the last is higher than those in dark bursts at $z<0.5$, and more akin to those observed at higher redshifts [@2013ApJ...778..128P]. ![The X-ray afterglow properties of GRB080517 compared to those of ‘dark’ bursts as given by @2013ApJ...778..128P. GRB080517 (bold, red symbol) has an afterglow spectral index and an excess neutral hydrogen column density (above that in our own Galaxy) consistent with those of the dark GRB population, and follows the same correlation. \[fig:perley1\]](plot_perley1.ps){width="0.99\columnwidth"} ![The host mass of GRB080517, and star formation rate inferred from H$\alpha$ line emission, compared to those of ‘dark’ bursts as given by @2013ApJ...778..128P. GRB080517 (bold, red symbol) has an inferred stellar mass (based on SED fitting) comparable to those of the dark burst population, and follows the same mass-star formation rate trend.\[fig:perley2\]](plot_perley2.ps){width="0.99\columnwidth"} Whether or not GRB080517 is indeed a local example of a ‘dark’ host, one advantage it offers is the opportunity to study its optical spectrum in a detail challenging for higher redshift GRB hosts of either dark or normal types. In figures \[fig:bpt\] and \[fig:metal\] we compare its optical emission line ratios to those of local emission-line galaxies from the SDSS [@2004MNRAS.351.1151B]. The GRB host has line ratios consistent with a Solar or slightly super-Solar metallicity, and is within the range of scatter of the SDSS sample, although well above the relationship between R$_{23}$ and \[O[III]{}\]/\[O[II]{}\] at a given metallicity determined by .
This is consistent with the results from the less metal-sensitive SED fitting procedure described in section \[sec:sed\], in which 0.5-1.0 Solar metallicity templates were narrowly preferred over those with significantly lower metal enrichment. While far from unique, this places the host of GRB080517 towards the upper end of the metallicity distribution for GRB hosts. Interestingly, at least two other high metallicity bursts, GRB020819 [@2010ApJ...712L..26L] and GRB080607 [@2009ApJ...691L..27P], are dark bursts. The BPT diagram [@1981PASP...93....5B] is an established indicator of starburst versus AGN character, since the different ionisation parameters arising from the two classes have a strong effect on optical emission line ratios and particularly the ratio of \[N[II]{}\] to H$\alpha$. As figure \[fig:bpt\] demonstrates, the two components of the neighbouring source are broadly consistent with a star-formation driven spectrum, although both lie above the local mean in \[O[III]{}\]/H$\beta$ [a trait also often seen in high redshift star forming galaxies, e.g. @2014ApJ...785..153M; @2014arXiv1408.4122S]. While component A has measured line ratios consistent with a ‘composite’ spectrum, large errors on the measured values permit a purely star forming spectrum. The line ratios of the GRB host galaxy are intriguing, placing it, too, in the region of the parameter space usually described as ‘composite’, suggesting that there might plausibly be a contribution to the ionising spectrum from an AGN. If so, this would be a surprise, since gamma ray bursts have not previously been associated with active galaxies, but may help to explain the excess flux seen in the $22\mu$m band where a hot AGN would be expected to make a contribution to PAH emission. Unfortunately [*Swift*]{} did not track the burst beyond 20 hours after the trigger, at which point the X-ray afterglow was still fading.
However the measured flux at this late epoch provides a firm upper limit on possible X-ray flux from the host galaxy of 4.3$^{+2.8}_{-2.0}\times10^{-14}$ergs cm$^{-2}$ s$^{-1}$ in the [*Swift*]{} XRT 0.3-10keV band. The luminosity of any hypothetical AGN at $z=0.09$ is therefore constrained to $L_X<8.6\times10^{41}$ergs s$^{-1}$, placing it at the very low end of the AGN luminosity distribution [e.g. @2008ApJ...679..118S; @2010ApJ...716..348B]. As noted in section \[sec:afterglow\], there is also little evidence for any optical variability in the host galaxy, either before or after the gamma ray burst, as might be expected of a galaxy with a strong AGN contribution. While the centre of the host galaxy lies outside the 90% confidence interval on the X-ray location of the GRB (based on the refined XRT analysis), the two locations are consistent at the 2$\sigma$ uncertainty level. It is therefore not impossible that the gamma-ray burst resulted from activity in the galactic nucleus. A rare class of gamma ray flares is known to result from a sudden accretion event due to the tidal disruption of stars around supermassive black holes [@2011Sci...333..203B; @2012ApJ...753...77C Brown et al submitted]. The burst of accretion in such sources launches a relativistic jet, and would result in a short-lived burst of AGN activity from otherwise quiescent galactic nuclei. In this context, the plausible association of GRB080517 with a very low luminosity AGN at $\sim$6 years post burst merits further investigation and monitoring of this system. Further observations will be required to place firmer constraints on the presence or absence of an AGN at late times and any late time variability.
We note that if there is no AGN contribution, then the optical emission line ratios in the host imply star formation with a steep ultraviolet spectrum, causing a higher ionisation parameter than is typical at low redshifts, and perhaps strengthening the suggestion that this is a very young, intense starburst (as suggested by the [BPASS]{} stellar population models). ![The emission line strengths of the GRB host galaxy and its neighbour (components A and B) placed on the classic BPT diagram. All three sources lie above the locus of star forming galaxies measured in the SDSS, although the neighbour remains consistent with a star forming origin. The dashed lines indicate the classification criteria of @2003MNRAS.346.1055K. The region between the dashed lines is described as a ‘composite’ region and may indicate contributions from both star formation and AGN activity. The background density plot shows the distribution of galaxies in the SDSS [@2004MNRAS.351.1151B]. Interestingly, the GRB host lies in the ‘composite’ region of the parameter space, suggesting that it may have an AGN component in addition to ongoing star formation. \[fig:bpt\]](bpt.ps){width="0.99\columnwidth"} ![Metallicity-sensitive optical line ratios for the GRB host galaxy. The well-known R$_{23}$ index is plotted against the ratio of oxygen lines in order to break the degeneracy in metallicity in the former. The majority of SDSS sources (greyscale) are relatively local and high in metallicity [@2004MNRAS.351.1151B]. The solid line shows the metallicity parameterisation of . The GRB host galaxy lies well above the typical SDSS galaxy in R$_{23}$ but within the distribution of local sources. We note that the effect of correcting for differential dust extinction on the lines is to move it further from the SDSS relation. 
Its measured line ratios are consistent with a slightly super-Solar metallicity.\[fig:metal\]](metal.ps){width="0.99\columnwidth"} Conclusions {#sec:conc} =========== In this paper we have presented an analysis of new and archival data for the host galaxy of GRB080517. Our main conclusions can be summarised as follows: 1. GRB080517 is a rare, low luminosity, long gamma ray burst. 2. Our WHT spectroscopy reveals that the host galaxy of GRB080517 is a strong optical line emitter lying at $z=0.09$. 3. The morphology of the GRB host appears to be smooth and compact with a half light radius, deconvolved with the seeing, of 2.7kpc. Its light distribution is consistent with a Sersic index of n=$1.5\pm1.0$. 4. The strong optical emission line ratios in the GRB host are consistent with a composite AGN+starburst spectrum at Solar or super-Solar metallicity, and the ratio of Balmer lines suggests the nebular emission is subject to an extinction E($B-V$)=1.2. 5. The spectral energy distribution of the galaxy in the UV-optical is broadly reproduced by a post-starburst template at an age of 500Myr, with a relatively small component of ongoing star formation ($<$1% of the stellar mass). However, no template considered provides a good match to all features of the SED, and in particular to the high fluxes measured at $>10\mu$m, suggesting that multiple components with different spectral energy distributions may contribute to the broadband flux. 6. Star formation rate estimates for the GRB host range from 0.43M$_\odot$yr$^{-1}$ to 16.5M$_\odot$yr$^{-1}$, based on different indicators. The low rate estimated from the ultraviolet continuum likely arises due to strong dust extinction in the star forming regions. Estimates from the H$\alpha$ line and $22\mu$m are consistent at 15.5$\pm$0.4 and 16.5$\pm$1.5M$_\odot$yr$^{-1}$. 7. We detect radio emission from the host galaxy with a flux density of $S_{4.8\,GHz}=0.22\pm0.04$mJy. 
This corresponds to a star formation rate of 7.6$\pm$1.4M$_\odot$yr$^{-1}$. 8. The high ionisation parameter seen in the optical line ratios, low radio flux and SED fitting are all consistent with a very young ($<$100Myr) star formation episode. 9. The host galaxy has a close companion within 25kpc in projected distance and lying at the same redshift. The companion shows distorted morphology, including two cores which appear to be undergoing a merger. The proximity of these galaxies may indicate that the GRB progenitor formed in an ongoing starburst triggered by gravitational interaction. 10. While the burst afterglow was too faint to tightly constrain the X-ray to optical flux ratio, its properties and those of its host galaxy are consistent with those of the ‘dark’ GRB population. The host galaxy’s properties and wider environment suggest that the role of galaxy-galaxy interaction in triggering bursts in relatively massive, metal rich galaxies needs to be considered more carefully. We aim to investigate this field further, obtaining stronger X-ray constraints on the presence of AGN activity, high resolution imaging, and also further radio continuum measurements of the host’s dust-obscured star formation. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the anonymous referee of this paper for helpful comments and suggestions. ERS and AJL acknowledge funding from the UK Science and Technology Facilities Council under the Warwick Astrophysics consolidated grant ST/L000733/1. ERS thanks Dr Elmé Breedt for useful discussions and recommending the CRTS. AJvdH acknowledges the support of the European Research Council Advanced Investigator Grant no. 247295 (PI: R.A.M.J. Wijers). CGM acknowledges support from the Royal Society, the Wolfson Foundation and STFC. Optical data were obtained from the William Herschel Telescope. 
The WHT and its override programme are operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. We also make use of data from the Liverpool Telescope which is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. Radio data were obtained from WSRT. The WSRT is operated by ASTRON (Netherlands Institute for Radio Astronomy) with support from the Netherlands foundation for Scientific Research. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester [@2009MNRAS.397.1177E]. We also made use of Ned Wright’s very useful cosmology calculator [@2006PASP..118.1711W]. Based in part on public data from GALEX GR6. The Galaxy Evolution Explorer (GALEX) satellite is a NASA mission led by the California Institute of Technology. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This publication further makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Data is also derived from the Catalina Real-time Transient Survey. The CSS survey is funded by the National Aeronautics and Space Administration under Grant No. NNG05GF22G issued through the Science Mission Directorate Near-Earth Objects Observations Program. The CRTS survey is supported by the U.S. 
National Science Foundation under grants AST-0909182 and AST-1313422. We make use of SDSS-III data. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. [99]{} Ahn C. P., et al., 2014, ApJS, 211, 17 Amati L., et al., 2002, A&A, 390, 81 Amati L., 2006, MNRAS, 372, 233 Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5 Berger E., Cowie L. L., Kulkarni S. R., Frail D. A., Aussel H., Barger A. J., 2003, ApJ, 588, 99 Bloom J. S., et al., 2011, Sci, 333, 203 Bressan A., Silva L., Granato G. L., 2002, A&A, 392, 377 Brinchmann J., Charlot S., White S. D. M., Tremonti C., Kauffmann G., Heckman T., Brinkmann J., 2004, MNRAS, 351, 1151 Brusa M., et al., 2010, ApJ, 716, 348 Butler N. R., Kocevski D., 2007, ApJ, 663, 407 Calzetti D., Armus L., Bohlin R. C., Kinney A. 
L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 Campana S., et al., 2006, Nature, 442, 1008 Cano Z., et al., 2011, ApJ, 740, 41 Castro Cer[ó]{}n J. M., Micha[ł]{}owski M. J., Hjorth J., Malesani D., Gorosabel J., Watson D., Fynbo J. P. U., Morales Calder[ó]{}n M., 2010, ApJ, 721, 1919 Cenko S. B., et al., 2012, ApJ, 753, 77 Cenko S. B., et al., 2009, ApJ, 693, 1484 Chapman R., Tanvir N. R., Priddey R. S., Levan A. J., 2007, MNRAS, 382, L21 Chen H.-W., et al., 2009, ApJ, 691, 152 Christensen L., Hjorth J., Gorosabel J., 2004, A&A, 425, 913 da Cunha E., Charlot S., Elbaz D., 2008, MNRAS, 388, 1595 Della Valle M., et al., 2006, Nature, 444, 1050 Drake A. J., et al., 2009, ApJ, 696, 870 Eldridge J. J., Stanway E. R., 2012, MNRAS, 419, 479 Eldridge J. J., Stanway E. R., 2009, MNRAS, 400, 1019 Ellison S. L., Patton D. R., Simard L., McConnachie A. W., 2008, AJ, 135, 1877 Evans P. A., et al., 2009, MNRAS, 397, 1177 Evans P. A., Goad M. R., Osborne J. P., Beardmore A. P., 2008, GCN, 7744, 1 Fenimore E. E., Madras C. D., Nayakshin S., 1996, ApJ, 473, 998 Fishman G. J., 1995, PASP, 107, 1145 Fruchter A. S., et al., 2006, Nature, 441, 463 Fynbo J. P. U., et al., 2009, ApJS, 185, 526 Fynbo J. P. U., et al., 2006, Nature, 444, 1047 Galama T. J., et al., 1998, Nature, 395, 670 Gal-Yam A., et al., 2006, Nature, 444, 1053 Gehrels N., et al., 2006, Nature, 444, 1044 Gehrels N., et al., 2004, ApJ, 611, 1005 Giavalisco M., et al., 2004, ApJ, 600, L93 Graham A. W., Driver S. P., 2005, PASA, 22, 118 Graham J. F., Fruchter A. S., 2013, ApJ, 774, 119 Greiner J., et al., 2011, A&A, 526, A30 Greiner J., et al., 2013, A&A, 560, A70 Hatsukade B., Hashimoto T., Ohta K., Nakanishi K., Tamura Y., Kohno K., 2012, ApJ, 748, 108 Hao J.-M., Yuan Y.-F., 2013, ApJ, 772, 42 Hao C.-N., Kennicutt R. C., Johnson B. D., Calzetti D., Dale D. A., Moustakas J., 2011, ApJ, 741, 124 Hjorth J., et al., 2012, ApJ, 756, 187 Hunt L. 
K., et al., 2014, arXiv, arXiv:1402.4006 Jakobsson P., et al., 2012, ApJ, 752, 62 Jakobsson P., Hjorth J., Fynbo J. P. U., Watson D., Pedersen K., Bj[ö]{}rnsson G., Gorosabel J., 2004, ApJ, 617, L21 Kamble A., Soderberg A., Berger E., Zauderer A., Chakraborti S., Williams P., 2014, arXiv, arXiv:1401.1221 Kaneko Y., et al., 2007, ApJ, 654, 385 Kann D. A., et al., 2010, ApJ, 720, 1513 Kauffmann G., et al., 2003, MNRAS, 346, 1055 Kocevski D., Butler N., 2008, ApJ, 680, 531 Kohno K., et al., 2005, PASJ, 57, 147 Kouveliotou C., Meegan C. A., Fishman G. J., Bhat N. P., Briggs M. S., Koshut T. M., Paciesas W. S., Pendleton G. N., 1993, ApJ, 413, L101 Kr[ü]{}hler T., et al., 2011, A&A, 534, A108 Kr[ü]{}hler T., et al., 2012, arXiv, arXiv:1205.4036 Kr[ü]{}hler T., et al., 2012, A&A, 546, A8 Lee J. C., Hwang H. S., Ko J., 2013, ApJ, 774, 62 Levesque E. M., Kewley L. J., Berger E., Zahid H. J., 2010, AJ, 140, 1557 Levesque E. M., Kewley L. J., Graham J. F., Fruchter A. S., 2010, ApJ, 712, L26 Madau P., Haardt F., Rees M. J., 1999, ApJ, 514, 648 Mainzer A., et al., 2011, ApJ, 731, 53 Maiolino R., et al., 2008, A&A, 488, 463 Malesani D., et al., 2004, ApJ, 609, L5 Maraston C., 2005, MNRAS, 362, 799 Markwardt C., et al., 2008, GCN, 7748, 1 Martin C., et al., 2003, SPIE, 4854, 336 Masters D., et al., 2014, ApJ, 785, 153 Melandri A., et al., 2012, MNRAS, 421, 1265 Micha[ł]{}owski M. J., et al., 2014, A&A, 562, A70 Micha[ł]{}owski M. J., et al., 2012, ApJ, 755, 85 Micha[ł]{}owski M. J., et al., 2009, ApJ, 693, 347 Miller N. A., Fomalont E. B., Kellermann K. I., Mainieri V., Norman C., Padovani P., Rosati P., Tozzi P., 2008, ApJS, 179, 114 Moin A., et al., 2013, ApJ, 779, 105 Morrison G. E., Owen F. N., Dickinson M., Ivison R. J., Ibar E., 2010, ApJS, 188, 178 Murphy E. J., et al., 2011, ApJ, 737, 67 Nakagawa Y. E., et al., 2006, PASJ, 58, L35 Nousek J. A., et al., 2006, ApJ, 642, 389 Oke J. B., Gunn J. E., 1983, ApJ, 266, 713 Osterbrock D. E., Ferland G. 
J., 2006, ’The Astrophysics of Gaseous Nebulae and AGN’, 2nd Edition, University Science Books, Sausalito CA. Parsons A. M., et al., 2008, GCN, 7742, 1 Patton D. R., Carlberg R. G., Marzke R. O., Pritchet C. J., da Costa L. N., Pellegrini P. S., 2000, ApJ, 536, 153 Pellizza L. J., et al., 2006, A&A, 459, L5 Perley D. A., et al., 2014, arXiv, arXiv:1407.4456 Perley D. A., et al., 2013, ApJ, 778, 128 Perley D. A., Perley R. A., 2013, ApJ, 778, 172 Pian E., et al., 2006, Nature, 442, 1011 Priddey R. S., et al., 2006, MNRAS, 369, 1189 Prochaska J. X., et al., 2009, ApJ, 691, L27 Robertson B. E., Ellis R. S., 2012, ApJ, 744, 95 Rol E., Wijers R. A. M. J., Kouveliotou C., Kaper L., Kaneko Y., 2005, ApJ, 624, 868 Salvaterra R., et al., 2012, ApJ, 749, 68 Sault, R. J., Teuben, P. J., & Wright, M. C. H. 1995, Astronomical Data Analysis Software and Systems IV, 77, 433 Savaglio S., Glazebrook K., Le Borgne D., 2009, ApJ, 691, 182 Savaglio S., et al., 2012, MNRAS, 420, 627 Sazonov S. Y., Lutovinov A. A., Sunyaev R. A., 2004, Nature, 430, 646 Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103 Skrutskie M. F., et al., 2006, AJ, 131, 1163 Silverman J. D., et al., 2008, ApJ, 679, 118 Smith R. J., et al., 2008, GCN, 7743, 1 Smith J. D. T., et al., 2007, ApJ, 656, 770 Soderberg A. M., et al., 2004, Nature, 430, 648 Stanway E. R., Davies L. J. M., Levan A. J., 2010, MNRAS, 409, L74 Stanway E. R., Bremer M. N., Tanvir N. R., Levan A. J., Davies L. J. M., 2011, MNRAS, 410, 1496 Stanway E. R., Levan A. J., Davies L. J. M., 2014, MNRAS, 444, 2133 Stanway E. R., Eldridge J. J., Greis S. M. L., Davies L. J. M., Wilkins S. M., Bremer M. N., 2014, arXiv, arXiv:1408.4122 Starling R. L. C., et al., 2011, MNRAS, 411, 2792 Svensson K. M., Levan A. J., Tanvir N. R., Fruchter A. S., Strolger L.-G., 2010, MNRAS, 479 Symeonidis M., et al., 2014, arXiv, arXiv:1406.2599 Tanvir N. R., et al., 2004, MNRAS, 352, 1073 Tan, G. H. 1991, IAU Colloq. 131: Radio Interferometry. 
Theory, Techniques, and Applications, 19, 42 Tanvir N. R., Levan A. J., Fruchter A. S., Hjorth J., Hounsell R. A., Wiersema K., Tunnicliffe R. L., 2013, Nature, 500, 547 Tanvir N. R., et al., 2012, ApJ, 754, 46 Thoene C. C., Perley D. A., Bloom J. S., 2007, GCN, 6663, 1 van der Horst A. J., Kouveliotou C., Gehrels N., Rol E., Wijers R. A. M. J., Cannizzo J. K., Racusin J., Burrows D. N., 2009, ApJ, 699, 1087 Vaughan S., et al., 2004, ApJ, 603, L5 Weiler K. W., Panagia N., Montes M. J., Sramek R. A., 2002, ARA&A, 40, 387 Watson D., et al., 2011, ApJ, 741, 58 Watson D., et al., 2006, ApJ, 636, 967 Watson D., et al., 2004, ApJ, 605, L101 Wiersema K., 2011, MNRAS, 414, 2793 Woosley S. E., Heger A., 2006, ApJ, 637, 914 Wright E. L., 2006, PASP, 118, 1711 Wright E. L., et al., 2010, AJ, 140, 1868 Xiao L., Schaefer B. E., 2011, ApJ, 731, 103 Yun M. S., Carilli C. L., 2002, ApJ, 568, 88 Zauderer B. A., et al., 2013, ApJ, 767, 161 \[lastpage\] [^1]: E-mail: e.r.stanway@warwick.ac.uk [^2]: http://swift.gsfc.nasa.gov/archive/grb\_table/ [^3]: http://www.swift.ac.uk [@2009MNRAS.397.1177E]
--- abstract: 'This paper examines the problem of rate allocation for multicasting over slow Rayleigh fading channels using network coding. In the proposed model, the network is treated as a collection of Rayleigh fading multiple access channels. In this model, a rate allocation scheme that is based solely on the statistics of the channels is presented. The rate allocation scheme is aimed at minimizing the outage probability. An upper bound is presented for the probability of outage in the fading multiple access channel. A suboptimal solution based on this bound is given. A distributed primal-dual gradient algorithm is derived to solve the rate allocation problem.' author: - Avi Zanko - Amir Leshem - Ephraim Zehavi title: Topology management and outage optimization for multicasting over slowly fading multiple access networks --- Network coding for multicasting, wireless networks, outage capacity, Rayleigh fading, multiple access channels Introduction {#sec:introduction} ============ Network coding extends the functionality of intermediate nodes from storing/forwarding packets to performing algebraic operations on received data. If network coding is permitted, the multicast capacity of a network with a single source has been shown to be equal to the minimum of the min-cuts between the source and its destinations [@Ahlswede_Network_2000]. In the past decade, the concept of combining data by network coding has been extensively extended (see e.g. [@Li_Linear_2003; @Jaggi_Low_2003; @Barbero_Heuristic_2006]) and it is well known that in order to achieve the multicast rate, linear combinations over a finite field suffice if the field size is larger than the number of destinations. Moreover, centralized linear network coding can be designed in polynomial time [@Jaggi_Polynomial_2005]. Decentralized linear network coding can be implemented using a random code approach [@Ho_A_random_2006]. 
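To make the random code approach concrete, the following toy sketch (our illustration, not taken from the cited works, which require larger fields when there are multiple destinations) encodes source packets into random linear combinations over GF(2) and decodes them at the receiver by Gaussian elimination:

```python
import random

def gf2_solve(coeffs, payloads):
    """Gauss-Jordan elimination over GF(2).

    coeffs: k x k list of coefficient bits; payloads: k coded packets (bit lists).
    Returns the k decoded packets, or None if the coefficient matrix is singular.
    """
    k = len(coeffs[0])
    coeffs = [row[:] for row in coeffs]
    payloads = [pkt[:] for pkt in payloads]
    for col in range(k):
        pivot = next((r for r in range(col, len(coeffs)) if coeffs[r][col]), None)
        if pivot is None:
            return None                     # dependent combinations: cannot decode
        coeffs[col], coeffs[pivot] = coeffs[pivot], coeffs[col]
        payloads[col], payloads[pivot] = payloads[pivot], payloads[col]
        for r in range(k):
            if r != col and coeffs[r][col]:
                coeffs[r] = [a ^ b for a, b in zip(coeffs[r], coeffs[col])]
                payloads[r] = [a ^ b for a, b in zip(payloads[r], payloads[col])]
    return payloads

random.seed(0)
k, plen = 3, 8                              # three source packets of eight bits each
src = [[random.randint(0, 1) for _ in range(plen)] for _ in range(k)]

decoded = None
while decoded is None:                      # retry until the random matrix is invertible
    coeffs = [[random.randint(0, 1) for _ in range(k)] for _ in range(k)]
    coded = []
    for row in coeffs:                      # each coded packet is an XOR of sources
        pkt = [0] * plen
        for i, bit in enumerate(row):
            if bit:
                pkt = [a ^ b for a, b in zip(pkt, src[i])]
        coded.append(pkt)
    decoded = gf2_solve(coeffs, coded)

print(decoded == src)                       # True: the sources are recovered exactly
```

The retry loop mirrors the key property of random codes: a random coefficient matrix is invertible with probability bounded away from zero, and over larger fields the failure probability becomes negligible.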
A comprehensive survey of network coding can be found in e.g., [@Fragouli_Network_2007; @Ho_Network_2008]. Many network resource allocation problems can be formulated as a constrained maximization of a certain utility function. The problem of network utility maximization has been explored extensively in the past few decades [@Palomar_A_Tutorial_2006; @Chiang_Layering_2007]. We briefly introduce related work on topology management and rate allocation for network coding in multicast over wireless networks. The problem of finding a minimum-cost scheme (while maintaining a certain multicast rate) in coded networks was studied by Lun et al. [@Lun_Network_2004; @Lun_Minimum_2006]. They showed that there is no loss of optimality when the problem is decoupled into two subproblems: finding the optimal coding rate allocation vector (also known as subgraph selection) and designing the code that is applied over the optimal subgraph. Moreover, in many cases, optimal subgraphs can be found in polynomial time. If, in addition, the cost function is convex and separable, the solution can be found in a decentralized manner, where message passing is required solely between directly connected nodes. This decentralized solution, if coupled with random network coding (e.g. [@Lun_On_2008; @Chou_Practical_2003]), provides a fully distributed scheme for multicast in coded wireline networks. This has prompted many researchers to develop different algorithms that find minimum-cost rate allocation solutions distributively; e.g. [@Cui_Optimal_2004; @Bhadra_Min_2006; @Wu_Distributed_2006; @Xi_Distributed_2010]. When addressing the problem of rate allocation for multicast with network coding in wireless networks, Lun et al. [@Lun_Minimum_2006; @Lun_Achieving_2005] tackled the problem through the so-called *wireless multicast advantage* phenomenon. 
This phenomenon simply comes down to the fact that when interference is avoided in the network (e.g., by avoiding simultaneous transmissions), communication between any two nodes is overheard by their nearby nodes due to the broadcast nature of the wireless medium. In [@Lun_Achieving_2005], the wireless multicast advantage was used to reduce the transmission energy of the multicast scheme (since when two nodes communicate, some of their nearby nodes get the packet for “free”). Therefore, their wireline minimum-cost optimization problem was updated accordingly [see @Lun_Achieving_2005 eq.(1) and (40)]. In [@Xi_Distributed_2010] interference is allowed but is assumed to be limited. A joint optimal power control, network coding and congestion control scheme is presented for the case of very high SINR (signal to noise plus interference ratio). This interference assumption implies that there are some limitations on simultaneous transmissions and this is taken into account in the optimization problem. In [@Yuan_A_Cross_2006] the problem of joint power control, network coding and rate allocation was studied. They showed that the throughput maximization problem can be decomposed into two parts: subgraph selection at the network layer and power control at the physical layer. A primal-dual algorithm was given that converges to the optimal solution provided that the capacity region is convex with respect to the power control variables (i.e., when interference is ignored). On the other hand, to take interference into account, a game-theoretic method was derived to approximately characterize the capacity region. In wireless networks, it is reasonable to assume that there is no simultaneous packet transmission or reception by any transceiver. These properties of the wireless medium introduce new cross-layer interactions that may not exist in wired networks. Sagduyu et al. 
[@Sagduyu_On_Joint_2007] analyzed and designed wireless network codes in conjunction with conflict-free transmission schedules in wireless ad hoc networks. They studied the cross-layer design possibilities of joint medium access control and network coding. It was shown that when certain objectives such as throughput or delay efficiency are considered, network codes must be jointly designed with medium access control. The joint design of medium access control and network coding [@Sagduyu_On_Joint_2007] was formulated as a nonlinear optimization problem. In [@Niati_Throughput_2012] the work reported in [@Sagduyu_On_Joint_2007] was extended and a linear formulation was derived. However, there are certain other considerations that must be taken into account in the search for a rate allocation vector in wireless networks. The wireless medium varies over time and suffers from channel fading due to, for example, multipath or shadowing. In [@Ozarow_Information_1994] the block fading model was introduced. In this model the channel gain is assumed to be constant over each coherence time interval. Typically, fading models are classified as fast fading or slow fading. In fast fading, the coherence time of the channel is small relative to the code block length and, as a consequence, the channel is ergodic with a well-defined Shannon capacity (also known as the ergodic capacity [@Goldsmith_Capacity_1997]). In slow fading, the code block length and the coherence time of the channel are of the same order. Hence, the channel is not ergodic and the Shannon capacity is not usually a good measure of performance. The notion of outage capacity was introduced in [@Ozarow_Information_1994] for transmitting over fading channels when the channel gain is available only at the receiver. In this approach, transmission takes place at a certain rate and tolerates some information loss when an outage event occurs. 
An outage event occurs whenever the transmitted rate is not supported by the instantaneous channel gain; i.e., when the channel gain is too low for successful decoding of the transmitted message. It is assumed that outage events occur with sufficiently low probability that reliable communication is available most of the time. A different strategy to deal with slow fading is the broadcast channel approach [@Shamai_A_broadcast_1997]. In this approach, different states of the channel are treated as channels toward different receivers (a receiver for each state). Hence, the same strategy as used for sending common and private messages to different users on the Gaussian broadcast channel can be applied here. When the channel gain is also available at the encoder, the encoder can adapt the power and the transmission rate as a function of the instantaneous state of the channel and thus can achieve a higher rate on average. Moreover, as regards the outage capacity, the transmitter can use power control to conserve power by not transmitting at all during designated outage periods. When dealing with outage capacity for the fading MAC, the common outage has a similar definition to the outage event in the point-to-point case. A common outage event is declared whenever we transmit with rates that are not supported by the instantaneous channel gains. If the channel gains are available at both the decoder and the encoders, additional notions of capacities for the fading MAC need to be taken into account. The throughput capacity region for the Gaussian fading MAC was introduced in [@Tse_Multiaccess_1998]. In a nutshell, this is the Shannon capacity region where the codewords can be chosen as a function of the realization of the fading with arbitrarily long coding delays. However, as for the point-to-point case, this approach is not realistic in slow fading cases since it requires a very long delay to average out the fading effect. 
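For a single point-to-point slow Rayleigh fading link this outage probability has a simple closed form: writing $\bar\gamma$ for the average receive SNR and noting that the normalized channel power $|h|^2/\upsilon^2$ is exponentially distributed with unit mean, $\Pr\left[\log_2(1+\bar\gamma |h|^2/\upsilon^2) < R\right] = 1 - e^{-(2^R-1)/\bar\gamma}$. The quick sketch below (our illustration, not from the paper) verifies the closed form against simulation:

```python
import math
import random

def outage_closed_form(rate, avg_snr):
    """Outage probability of a point-to-point slow Rayleigh fading link.

    rate: target rate R [bits/channel use]; avg_snr: average receive SNR
    (linear scale). The normalized channel power is Exp(1) distributed.
    """
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / avg_snr)

def outage_monte_carlo(rate, avg_snr, trials=200_000, seed=7):
    """Empirical outage frequency over random Rayleigh channel draws."""
    rng = random.Random(seed)
    fails = sum(math.log2(1.0 + avg_snr * rng.expovariate(1.0)) < rate
                for _ in range(trials))
    return fails / trials

rate, avg_snr = 2.0, 10.0                  # 2 bits per channel use at 10 dB average SNR
print(outage_closed_form(rate, avg_snr))   # ≈ 0.259
print(outage_monte_carlo(rate, avg_snr))   # agrees to within Monte Carlo noise
```

The example illustrates the trade-off discussed above: even at a healthy average SNR, demanding a fixed rate on a slow fading link leaves a non-negligible outage probability.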
The delay limited capacity (also known as the zero outage capacity) for the Gaussian fading MAC was derived in [@Hanly_Multiaccess_1998]. For the delay limited capacity, unlike the throughput capacity, the chosen coding delay has to work uniformly for all fading processes with a given stationary distribution. However, the delay limited capacity is somewhat pessimistic due to the demand to maintain a constant rate under any fading condition. The outage capacity region and the optimal power allocation for a fading MAC were described in [@Li_Outage_2005]. As was pointed out in [@Li_Outage_2005], in a slow fading environment, the decoding delay depends solely on the code-length employed and not on the time variation of the channel. The demand for interference-free channels at all nodes means that some level of orthogonality is required between different transmissions in the network. Avoiding interference between all nodes comes at the cost of expensive bandwidth or, alternatively, leads to rate degradation in band-limited systems. The same argument can be applied to the limited interference model since some orthogonality at a certain radius is required. In [@Zanko_Network_2013Submeeted], the MAC network coding model was introduced. In the MAC network model, in contrast to the wireless broadcast advantage based models, the superposition property of the wireless medium is exploited. The network is treated as a collection of multi access channels, such that each receiver simultaneously receives data from all its in-neighbors. **Main contributions:** This paper explores the problem of rate allocation for multicasting over slow Rayleigh fading channels using network coding. The problem is examined in a model where the network is treated as a collection of Rayleigh fading multi access channels. In our network model, we assume that links on the network vary faster than the entire network can respond to the variations. 
Therefore, our goal is to find a rate allocation scheme that is based solely on the statistics of the channels and minimizes the outage probability. This paper differs from prior works in two major respects. Prior works’ models assume long time averaging of the instantaneous capacity (as in the ergodic capacity approach) or averaging of the packet arrival rate (see e.g., [@Ho_Network_2008]). These assumptions are suitable for the fast fading model, but are unrealistic for the slow fading model. Hence, in this paper we design a different rate allocation scheme which is better suited to the slow fading model. Moreover, in this paper the design of the rate allocation scheme is based solely on the statistics, which is desirable in many practical large-scale networks, as will be emphasized in section \[sec:Rate\_allocation\_for\_the\_outage\_MAC\_model\]. The communication model is described in detail in section \[sec:Communication\_model\]. In section \[sec:outage\_probability\_bounds\] we present lower and upper bounds for the outage probability of a fading MAC. In section \[sec:Rate\_allocation\_for\_the\_outage\_MAC\_model\] a suboptimal solution for the rate allocation problem is presented for the MAC network model. The solution is based on an upper bound on the probability of outage in the fading MAC. In section \[sec:distributed\] a distributed solution is derived for the rate allocation problem in the MAC network model. In section \[sec:simulation\] we report some simulation results. We end with concluding remarks. Communication model {#sec:Communication_model} =================== Let $\mathcal{G}=\left({\mathcal{V}},{\mathcal{E}}\right)$ be a directed graph with a set of nodes $\mathcal{V}$ and directed edges $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$, representing a wireless communication network where nodes are transceivers and edges are channels. In this paper, scalars and random variables are denoted by lower case letters. 
Vectors and matrices are denoted by boldface lower and upper case letters, respectively. We slightly abuse notation by using the same letters to refer to random variables and their realizations. The cardinality of any set $\mathcal{A}$ is denoted by $|\mathcal{A}|$. All vectors are columns and inequalities between vectors are defined element-wise; i.e., ${\bf v}\leq {\bf u}$ implies $v_i\leq u_i$ for all $i$. For any node $j\in\mathcal{V}$, we denote the in-neighborhood and out-neighborhood of $j$ by $\mathcal{I}(j)$ and $\mathcal{O}(j)$ respectively, i.e., $\mathcal{I}(j)=\{i:\left(i,j\right)\in\mathcal{E}\}$ and $\mathcal{O}(j)=\{i:\left(j,i\right)\in\mathcal{E}\}$. The network is treated as a collection of multi access channels, such that each receiver simultaneously receives data from all its in-neighbors. For simplicity, it is assumed that there is no interference between transmissions toward different receivers (see Fig. \[fig:MAC\_network\_model\](a)). ![The MAC network model: (a) An illustration of a wireless network with $12$ transceivers positioned as a directed graph $\mathcal{G}$. In the MAC network model each receiver receives data from all its in-neighborhood nodes. For example, the nodes in $\mathcal{I}(1)=\{2,3,6\}$ transmit toward node $1$ and the nodes in $\mathcal{I}(4)=\{1,6,7\}$ transmit toward $4$. However, it is assumed that (for example) there is no interference between the transmissions toward node $1$ and the transmissions toward node $4$. (b) The MAC of node $1$.[]{data-label="fig:MAC_network_model"}](MAC_network_model_and_MAC_received_signal.pdf){width="\textwidth"} This can be achieved by orthogonal transmissions, e.g., by using a certain frequency reuse pattern or directional antennas. Clearly, this is an improvement over a model where all transmissions are orthogonal. 
If we consider the MAC network model with deterministic channel gains, a joint power control and rate allocation solution for a (convex) network utility can be found distributively [@Yuan_A_Cross_2006]. This is due to the convexity of the capacity region of the multi access channels [@El_Gamal_Network_2011]. The deterministic model can be adapted to deal with fast fading channels in the case of a constant power allocation vector by using the ergodic MAC capacity region instead of the MAC capacity region. This ergodic capacity region is easily obtained by taking the expectations of the capacity constraints [@El_Gamal_Network_2011]. Here, we examine the MAC network model in the case of slow fading channels. We aim to find a rate allocation scheme that is based solely on the statistics of the channels and minimizes the outage probability. The channel gain of link $(i,j)$ is denoted by $h_{i,j}$. $h_{i,j}$ is a zero mean circular complex normal random variable with variance $\upsilon_{i,j}^2$. It is assumed that all $h_{i,j}$ are independent of each other. Denote by ${\bf h}_j:=\left[h_{i,j}:i\in\mathcal{I}(j)\right]$ and by $\boldsymbol{\eta_j}:=\left[|h_{i,j}|^2:i\in\mathcal{I}(j)\right]$. The transmission on link $(i,j)$ is denoted by $x_{i,j}$ and it is transmitted with an average power $p_{i,j}$. We assume that $\sigma^2_j$ is the variance of $\xi_j$, the zero-mean Gaussian noise at node $j$. Hence, the received signal at node $j$ is given by: $$y_{j}=\displaystyle\sum_{i\in\mathcal{I}(j)}{h_{i,j}x_{i,j}}+\xi_j.$$ Fig. \[fig:MAC\_network\_model\](b) illustrates the MAC of node $1$ in the network of Fig. \[fig:MAC\_network\_model\](a). The rate transmitted on a link $(i,j)$ is denoted by $r_{i,j}$, the rate allocation vector is denoted by ${\bf r}=\left[r_{i,j}:\;(i,j)\in\mathcal{E}\right]$ and the local rate allocation vector is denoted by ${\bf r}_j=\left[r_{i,j}:\;i\in\mathcal{I}(j)\right]$. 
When the instantaneous channel gains $h_{i,j}$ are deterministic and known, this is the well-known Gaussian multiple access channel [@El_Gamal_Network_2011]. Hence, the instantaneous MAC capacity region is given by: $$\label{ineq:MAC_capacity_Region} \mathcal{V}^{\rm{ins}}_j({\bf h}_j):=\left\{r_{i,j}:\begin{array}{l} \displaystyle\sum_{i\in\mathcal{M}(j)}{r_{i,j}} \leq\log_2\left(1+\frac{P_{\mathcal{M}(j),j}}{\sigma^2_j}\right)\\ \qquad\forall\mathcal{M}(j)\subseteq\mathcal{I}(j) \end{array} \right\},$$ where $P_{\mathcal{M}(j),j}=\displaystyle\sum_{i\in\mathcal{M}(j)}{p_{i,j}|h_{i,j}|^2}$. However, when dealing with Rayleigh channels, this capacity region may not be a good measure of performance, and the outage capacity is a better and more practical alternative. A common outage event is jointly declared for all links whenever we transmit toward a certain node with rates that are not supported by the instantaneous MAC capacity region. \[def:MAC\_outage\] For a rate vector ${\bf r}_j$ and for the MAC associated with node $j$, the common outage event is $$\label{eq:outage_event_def} {\bf r}_j\notin\mathcal{V}^{\rm{ins}}_j({\bf h}_j),$$ where the (random) capacity region $\mathcal{V}^{\rm{ins}}_j({\bf h}_j)$ is defined in (\[ineq:MAC\_capacity\_Region\]). \[def:MAC\_outage\_probability\] The probability of outage in the fading MAC of node $j$ is given by $$P_{j}^{\rm{out}}=\Pr\left({\bf r}_j\notin\mathcal{V}^{\rm{ins}}_j({\bf h}_j)\right).$$ In line with these definitions, we define an outage event and an outage probability for the MAC network model. The outage event for the MAC network model is the event in which there exists a node $j\in{\mathcal{V}}$ such that ${\bf r}_j\notin\mathcal{V}^{\rm{ins}}_j({\bf h}_j)$.
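For a small MAC, membership in $\mathcal{V}^{\rm{ins}}_j({\bf h}_j)$ can be tested by brute force over the $2^n-1$ subset constraints. The following sketch (an illustrative helper of ours, not taken from the paper) does exactly this:

```python
# A sketch: brute-force test of whether a local rate vector lies in the
# instantaneous MAC capacity region, by checking every nonempty subset M
# against  sum_{i in M} r_i <= log2(1 + sum_{i in M} p_i |h_i|^2 / sigma^2).
import math
from itertools import combinations

def in_capacity_region(r, h, p, sigma2):
    n = len(r)
    for k in range(1, n + 1):
        for M in combinations(range(n), k):
            P_M = sum(p[i] * abs(h[i]) ** 2 for i in M)
            if sum(r[i] for i in M) > math.log2(1 + P_M / sigma2):
                return False  # some subset constraint is violated: outage
    return True
```

For two unit-gain, unit-power links with unit noise, the sum rate is capped at $\log_2 3\approx 1.585$, so the rate pair $(0.5,0.5)$ lies inside the region while $(1,1)$ does not.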
Hence, the probability of outage in the MAC network model is given by $$\label{eq:probability_outage_MAC_network_model} P_{\rm{MAC}}^{\rm{out}}=\Pr\left(\displaystyle\bigcup_{j\in{\mathcal{V}}}{\left\{{\bf r}_j\notin\mathcal{V}^{\rm{ins}}_j({\bf h}_j)\right\}}\right).$$ To complete the description of the local communication model, we associate with every node in the network the codebooks, the encoders $\left(\mathcal{F}_{i,j}\;:\;i\in\mathcal{I}(j)\right)$ and the decoder $g_j$ that establish the connection between $\mathcal{I}(j)$ and $j$ at rates $\left(r_{i,j}: i\in\mathcal{I}(j) \right)$. Obviously, node $j$ shares the appropriate codebooks and encoders with its in-neighbors. The source node is denoted by $s\in\mathcal{V}$ and it is assumed that $\mathcal{I}(s)=\emptyset$. The set of all destinations (sinks) is denoted by $\mathcal{D}_s\subseteq\mathcal{V}\backslash\{s\}$. Intermediate nodes are allowed to send out packets that are a combination of their received information, so the rate on an outgoing link need not obey flow conservation. However, the main theorem of network coding for multicast is stated in terms of the max-flow (min-cut) between each source and its destinations. Therefore, we distinguish between the flow at an edge $(i,j)$ and the actual rate at that link. Let $f_{i,j}^{d}$ be the flow at edge $(i,j)$ destined for destination $d\in\mathcal{D}_s$, and let $r_{i,j}$ be the actual rate at edge $(i,j)$. The communication parameters are summarized in Table \[tab:communication\_model\_parameters\].
  ------------------------------------------- ----------------------------------------------- ---------------------------------------------------------------------
  $x_{i,j}$                                   Transmission on link $(i,j)$
  $h_{i,j}$                                   Channel gain of link $(i,j)$                    $h_{i,j}\sim\mathcal{CN}(0,\upsilon_{i,j}^2)$
  ${\bf h}_j$                                 Local channel gain vector                       ${\bf h}_j=\left[h_{i,j}:i\in\mathcal{I}(j)\right]$
  ${\boldsymbol{\eta}}_j$                     Local squared channel gains                     ${\boldsymbol{\eta}}_j=\left[|h_{i,j}|^2:i\in\mathcal{I}(j)\right]$
  $p_{i,j}$                                   Average power on link $(i,j)$
  $\xi_j$                                     Noise at node $j$                               $\xi_j\sim\mathcal{CN}(0,\sigma^2_j)$
  $y_j$                                       Received signal at node $j$
  $r_{i,j}$                                   Rate at link $(i,j)$
  ${\bf r}_j$                                 Local rate allocation vector                    ${\bf r}_j=\left[r_{i,j}:\;i\in\mathcal{I}(j)\right]$
  ${\bf r}$                                   Rate allocation vector                          ${\bf r}=\left[r_{i,j}:\;(i,j)\in\mathcal{E}\right]$
  $\mathcal{V}^{\rm{ins}}_j({\bf h}_j)$       Instantaneous MAC capacity region of node $j$
  $P_{j}^{\rm{out}}$                          Outage probability of the MAC of node $j$
  $P_{\rm{MAC}}^{\rm{out}}$                   Outage probability of the MAC network model
  $s$                                         Source node
  $\mathcal{D}_s$                             Set of all destinations                         $\mathcal{D}_s\subseteq\mathcal{V}\backslash\{s\}$
  $f^d_{i,j}$                                 Flow at edge $(i,j)$ destined for $d$
  ------------------------------------------- ----------------------------------------------- ---------------------------------------------------------------------

  : A summary of communication model parameters

\[tab:communication\_model\_parameters\]

As was mentioned in section \[sec:introduction\], there is no loss of optimality in first finding the optimal rate allocation solution and then designing the coding scheme that realizes the connection. In the following section, the rate allocation vector for the MAC network model is given as the solution to an optimization problem, and the coding scheme that realizes the connection is assumed to be given. For large scale networks, where global network information is not available, the random network coding scheme of [@Chou_Practical_2003; @Lun_On_2008] can be employed. In general, in random network coding, intermediate nodes store all their received packets in their memory, and when a packet injection occurs on an outgoing link, the node forms a packet that is a random linear combination of the packets in its memory. In order to enable decoding at the destinations, the random coefficients of the linear combinations are included in the header of the packet as side information.
These coefficients are called the global encoding vector of the packet. Decoding is possible if all destinations collect enough packets with linearly-independent global encoding vectors. The algorithm shown in [@Lun_On_2008] for random packet level network coding was adjusted to the MAC network model in [@Zanko_Network_2013Submeeted].

Bounds on the probability of outage of a MAC {#sec:outage_probability_bounds}
============================================

In this section we bound the outage probability of a fading MAC. To do so, we need the following notation and definitions. Consider a (slow) fading MAC with $n$ links, each of which is a Rayleigh channel, i.e., $h_i\sim\mathcal{CN}(0,\upsilon_i^2)$, $i=1,2,\cdots,n$. Denote the variance of the zero mean Gaussian noise at the receiver by $\sigma^2$. For any matrix ${\bf B}$, $b_{i,j}$ denotes the entry in the $i$’th row and $j$’th column of ${\bf B}$. Let ${\bf B}^{r*}$ be the submatrix of ${\bf B}$ constructed by deleting the $r$’th row of ${\bf B}$. For $r=1$ we denote ${\bf B}^{1*}={\bf B}^{*}$. For any $n\geq 1$ let ${\bf 1}_n$, ${\bf 0}_n$ be vectors of length $n$ of ones and zeros, respectively. For any $n\geq 1$, let ${\bf A}_n$ be a $(2^n-1)\times n$ matrix, such that ${\bf A}_1=1$ and, for $n\geq 1$, $$\label{eq:A_n:recursion} {\bf A}_{n+1}=\left[\begin{array}{lll} {\bf 0}_{2^n-1} &,& {\bf A}_{n}\\ 1 &,& {\bf 0}_n^T \\ {\bf 1}_{2^n-1} &,& {\bf A}_{n} \end{array} \right],$$ i.e., each row of ${\bf A}_n$ is the binary representation of the row index (for example ${\bf A}_2=\left[[0,1]^T,\right.$ $\left.[1,0]^T,[1,1]^T\right]^T$). For any scalar $a$ and vector ${\bf v}\in\mathds{R}^{K}$, ${\bf c}=a^{{\bf v}}-1$ is calculated point-wise; i.e., $c_i=a^{v_i}-1$. The probability of outage of a fading MAC is given in Definition \[def:MAC\_outage\_probability\].
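The recursion in (\[eq:A\_n:recursion\]) is straightforward to implement; the sketch below (ours, for illustration only) builds ${\bf A}_n$ as a list of $0/1$ rows and can be used to check that row $m$ is indeed the binary representation of $m$.

```python
# A sketch of the recursive construction of A_n: stack [0, A_k], the single
# row [1, 0...0], and [1, A_k] to obtain A_{k+1}, starting from A_1 = [1].
def build_A(n):
    A = [[1]]
    for k in range(1, n):
        A = ([[0] + row for row in A]     # [0_{2^k-1}, A_k]
             + [[1] + [0] * k]            # [1, 0_k^T]
             + [[1] + row for row in A])  # [1_{2^k-1}, A_k]
    return A

A3 = build_A(3)  # 7 x 3; row m is the binary representation of m
```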
Obviously, the probability of outage can be expressed as $$\begin{aligned} \label{eq:MAC_outage_by_success} \rm{Pr}^{\rm{out}}_{\rm{MAC}_n}=1-\Pr\left({\bf r}\in\mathcal{V}^{\rm{ins}}({\bf h})\right),\end{aligned}$$ where ${\bf h}:=\left[h_1,h_2,\cdots,h_n\right]$ and $\mathcal{V}^{\rm{ins}}({\bf h})$ is the instantaneous capacity region. As can be seen from (\[ineq:MAC\_capacity\_Region\]), the expression ${\bf r}\in\mathcal{V}^{\rm{ins}}({\bf h})$ stands for a conjunction of $(2^n-1)$ inequalities, each of the form $$\label{ineq:MAC_success_inequality} \displaystyle\sum_{i\in\mathcal{M}}{r_{i}} \leq\log_2\left(1+\frac{P_{\mathcal{M}}}{\sigma^2}\right),$$ where $P_{\mathcal{M}}=\displaystyle\sum_{i\in\mathcal{M}}{p_{i}|h_{i}|^2}$ and $\mathcal{M}$ is a subset of $\{1,2,\ldots,n\}$. Rewriting (\[ineq:MAC\_success\_inequality\]) in matrix form yields $$\label{ineq:MAC_success_inequality_matrix_form} {\bf a}_{\mathcal{M}}^T {\bf r}\leq \log_2\left(1 + {\bf a}_{\mathcal{M}}^T\frac{1}{\sigma^2}{\bf P}{\boldsymbol{\eta}}\right),$$ where ${\bf a}_{\mathcal{M}}$ is a vector of length $n$ such that $$a_i=\begin{cases} 1 & i\in\mathcal{M}\\ 0 & \rm{otherwise}, \end{cases} \nonumber$$ ${\bf P}$ is an $n\times n$ diagonal matrix with $p_1,p_2,\ldots,p_n$ on the main diagonal and $\boldsymbol{\eta}=\left[|h_1|^2,|h_2|^2,\ldots,|h_n|^2\right]^T$. A simple algebraic manipulation shows that (\[ineq:MAC\_success\_inequality\_matrix\_form\]) is equivalent to $$\label{ineq:MAC_success_inequality_exp_matrix_form} 2^{{\bf a}_{\mathcal{M}}^T {\bf r}}-1 \leq {\bf a}_{\mathcal{M}}^T\frac{1}{\sigma^2}{\bf P}\boldsymbol{\eta}.$$ Note that since $|h_i|^2$, $i=1,2,\ldots,n$, are independent exponential random variables with expectation $2\upsilon^2_i$, the random variables ${z}_i=\frac{1}{2\upsilon^2_i} |h_i|^2$, $i=1,2,\ldots,n$, are i.i.d. exponential random variables with expectation $1$.
Hence, the event in (\[ineq:MAC\_success\_inequality\_exp\_matrix\_form\]) is equivalent to the event $$\label{ineq:MAC_success_inequality_standard_exp_matrix_form} 2^{{\bf a}_{\mathcal{M}}^T {\bf r}}-1 \leq {\bf a}_{\mathcal{M}}^T\frac{1}{\sigma^2}{\bf P}{\boldsymbol \Upsilon}{\bf z}_n,$$ where ${\boldsymbol \Upsilon}$ is a diagonal matrix with $2\upsilon^2_1,2\upsilon^2_2,\ldots,2\upsilon^2_n$ on the main diagonal and ${\bf z}_n=[z_1,z_2,\ldots,z_n]^T$ is a vector of $n$ i.i.d. standard exponential random variables; $E\{z_i\}=1$. Therefore, from (\[eq:MAC\_outage\_by\_success\]) and (\[ineq:MAC\_success\_inequality\_standard\_exp\_matrix\_form\]), it follows that the outage probability in a MAC with $n$ links can be written as $$\label{eq:Probability_of_success:A_N} \rm{Pr}^{\rm{out}}_{\rm{MAC}_n}=1- \Pr\left({\bf A}_n{\bf D}_n{\bf z}_n \geq {\bf b}_n\right),$$ where ${\bf D}_n$ is a diagonal matrix with $\frac{1}{\lambda_1},\frac{1}{\lambda_2},\cdots,\frac{1}{\lambda_n}$ on the main diagonal, $\lambda_i=\frac{1}{2\upsilon_i^2}\frac{\sigma^2}{p_i}$ and ${\bf b}_n=2^{{\bf A}_n{\bf r}}-1$. For example, in a MAC with $n=3$ links we have that $$\begin{array}{ll} {\bf A}_3=\left[\begin{array}{lll} 0&0&1\\0&1&0\\0&1&1\\1&0&0\\1&0&1\\1&1&0\\1&1&1 \end{array}\right], & {\bf b}_3=\left[\begin{array}{l} 2^{r_3}-1\\2^{r_2}-1\\2^{r_2+r_3}-1\\2^{r_1}-1\\2^{r_1+r_3}-1\\2^{r_1+r_2}-1\\2^{r_1+r_2+r_3}-1 \end{array} \right]. \end{array} \nonumber$$ Note that when the MAC has i.i.d. links, ${\bf D}_n=\frac{1}{\lambda}{\bf I}_n$, where ${\bf I}_n$ is the $n\times n$ identity matrix. Hence, the probability of outage in a MAC is related to the joint distribution of linear combinations of exponential random variables.
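A quick Monte Carlo check of (\[eq:Probability\_of\_success:A\_N\]) is possible (a sketch of ours with assumed parameters; a bitmask enumeration of the nonempty subsets replaces an explicit ${\bf A}_n$): for $n=1$ the estimate should approach the exact single-link outage probability $1-e^{-\lambda(2^{r_1}-1)}$.

```python
# A Monte Carlo sketch of the identity above: estimate
# Pr_out = 1 - Pr(A_n D_n z_n >= b_n) by drawing i.i.d. standard exponentials
# z_i and checking, for every nonempty subset (a row of A_n, encoded as a
# bitmask), whether the corresponding constraint holds.
import random

def outage_mc(r, lam, trials=50_000, seed=1):
    n = len(r)
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        z = [rng.expovariate(1.0) for _ in range(n)]
        ok = True
        for m in range(1, 2 ** n):  # bitmask m encodes a row of A_n
            lhs = sum(z[i] / lam[i] for i in range(n) if m >> i & 1)
            rhs = 2 ** sum(r[i] for i in range(n) if m >> i & 1) - 1
            if lhs < rhs:           # one violated constraint => outage
                ok = False
                break
        fails += not ok
    return fails / trials
```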
Huffer and Lin [@Huffer_Computing_2001] presented an algorithm for the computation of the exact expression of the joint distribution of general linear combinations of spacings[^1] by repeated uses of two recursions that reduce the dimensionality of the problem. They also pointed out that the algorithm remains valid for linear combinations of exponential random variables. However, this is inaccurate, and in this paper we revise the result to handle exponential random variables. The new recursion is given in Lemma \[lemma:Huffer\_computing\_2001\] in Appendix \[appendix:recursion\]. By using the algorithm in [@Huffer_Computing_2001] together with Lemma \[lemma:Huffer\_computing\_2001\], an exact expression of the probability of common outage can be computed. However, the computation of a symbolic expression becomes extremely complicated in a MAC with more than $2$ links. Therefore, we present an upper and a lower bound on that outage probability. To that end, we need the following lemma: \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\] Let $\bf z$ be a vector of $n$ i.i.d. exponential random variables with expectation $E\{z_i\}=1$. If there exists an entry $a_{r,i}>0$ in $\bf A$ such that $a_{r,j}=0$ for all $j\neq i$ and $b_r\geq 0$, then the following holds: $$\Pr\left({\bf Az}>\lambda{\bf b}\right)=e^{-\lambda\frac{b_r}{a_{r,i}}} \Pr\left({\bf A}^{r*}{\bf z}>\lambda\left({\bf b}^{r*}-\frac{b_r}{a_{r,i}}{\bf a}^{r*}\right)\right), \nonumber$$ where ${\bf a}^{r*}$ denotes the $i$’th column of ${\bf A}^{r*}$. This lemma is an immediate consequence of Lemma \[lemma:Huffer\_computing\_2001\] (i.e., when $k=1$ and $\Psi=\{j\}$).
In [@Rao_Outage_2013] it was pointed out that the outage probability of a MAC with $n$ i.i.d. links is bounded from below by $$\label{ineq:Lower_MAC_Outage_Bound_gamma} \rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\geq 1-e^{-\lambda S_n}\frac{\Gamma\left(n,\lambda\left(2^{R_n}-1-S_n\right)\right)}{(n-1)!},$$ where $\Gamma(n,x)$ is the incomplete gamma function, $S_n=\displaystyle\sum_{k=1}^{n}{(2^{r_k}-1)}$ and $R_n=\displaystyle\sum_{i=1}^{n}{r_i}$. In the case of a Rayleigh fading MAC with independent links but with different variances, Theorem \[theorem:lower\_bound\_outage\_probability\] gives a lower bound on the probability of outage. \[theorem:lower\_bound\_outage\_probability\] Let $\lambda_i=\frac{1}{2\upsilon_i^2}\frac{\sigma^2}{p_i}$, $i=1,2,\cdots,n$, have distinct values, i.e., $\lambda_i\neq \lambda_j$ for all $i\neq j$. Then the outage probability of a MAC with $n$ independent $\rm{Rayleigh}(\upsilon_i)$ channels, $i=1,2,\cdots,n$, is bounded from below by $$\rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\geq 1- \displaystyle\sum_{i=1}^{n}{\gamma_i e^{-\beta_n-\lambda_i\left(2^{R_n}-S_n-1\right)}},$$ where $\gamma_i=\displaystyle\prod_{j\neq i}{\frac{\lambda_j}{\lambda_j-\lambda_i}}$, $\beta_n=\displaystyle\sum_{i=1}^{n}{\lambda_i(2^{r_i}-1)}$, $S_n=\displaystyle\sum_{i=1}^{n}{(2^{r_i}-1)}$ and $R_n=\displaystyle\sum_{i=1}^{n}{r_i}$. As was explained earlier, the expression ${\bf A}_n{\bf D}_n{\bf z}_n \geq {\bf b}_n$ in equation (\[eq:Probability\_of\_success:A\_N\]) stands for a conjunction of $(2^n-1)$ inequalities. The inequalities related to the rows of ${\bf A}_n$ indexed by $2^i$, $i=0,1,\ldots,(n-1)$, stand for the direct instantaneous capacity constraints $r_i\leq\log_2\left(1+\frac{|h_i|^2 p_i}{\sigma^2}\right)$, whereas all the other inequalities refer to constraints that involve more than one link of the MAC capacity region (see eq. (\[ineq:MAC\_capacity\_Region\])).
In particular, the $(2^n-1)^{\rm{th}}$ inequality refers to the constraint $$\displaystyle\sum_{i=1}^n{r_i}\leq\log_2\left(1+\displaystyle\sum_{i=1}^n\frac{p_i|h_i|^2}{\sigma^2}\right). \nonumber$$ Obviously, $$\label{ineq:submatrix_of_A_n} \Pr\left({\bf A}_n{\bf D}_n{\bf z}_n \geq {\bf b}_n\right)\leq\Pr\left(\tilde{\bf A}_n{\bf D}_n{\bf z}_n \geq \tilde{\bf b}_n\right),$$ where $\tilde{\bf A}_n$ is the submatrix of ${\bf A}_n$ constructed by taking the rows indexed by $\{2^i\;:\;i=0,1,\ldots,(n-1)\}\cup\{2^n-1\}$ of ${\bf A}_n$ and $\tilde{\bf b}_n$ is the sub-vector of ${\bf b}_n$ constructed by taking the appropriate entries of ${\bf b}_n$. Note that, up to a permutation of the rows, the first $n$ rows of $\tilde{\bf A}_n$ form the identity matrix. Therefore, we can eliminate these $n$ rows by $n$ uses of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\]. Hence, we have that $$\Pr\left(\tilde{\bf A}_n{\bf D}_n{\bf z}_n \geq \tilde{\bf b}_n\right)=e^{-\beta_n}\Pr\left(\displaystyle\sum_{i=1}^n{\frac{z_i}{\lambda_i}}\geq x\right),$$ where $x=2^{R_n}-S_n-1$. The tail probability of a linear combination of i.i.d. exponential variables with distinct coefficients is given by [@Huffer_Divided_1988] $$\label{eq:distinct_iid_exponential_CDF} \Pr\left(\displaystyle\sum_{i=1}^n{\frac{z_i}{\lambda_i}}\geq x\right)= \displaystyle\sum_{i=1}^{n}{\gamma_i e^{-\lambda_i x}}.$$ The claim now follows. Note that Theorem \[theorem:lower\_bound\_outage\_probability\] is valid only when $\lambda_1,\lambda_2,\ldots,\lambda_n$ are all distinct. When we have a MAC with a set of $K$ links with the same value of $\lambda_i$ and $n-K$ links with distinct values of $\lambda$, a similar bound can be computed by replacing the probability in (\[eq:distinct\_iid\_exponential\_CDF\]) with the integral of the pdf derived in [@Khuong_General_2006]. We omit the calculation of this probability here for the sake of brevity.
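The bound of Theorem \[theorem:lower\_bound\_outage\_probability\] is straightforward to evaluate numerically; the sketch below (ours, with assumed test values) implements it, and for $n=1$ it collapses to the exact single-link outage probability $1-e^{-\lambda_1(2^{r_1}-1)}$.

```python
# A sketch evaluating the distinct-lambda lower bound:
#   Pr_out >= 1 - sum_i gamma_i * exp(-beta_n - lambda_i (2^{R_n} - S_n - 1)),
# with gamma_i = prod_{j != i} lambda_j / (lambda_j - lambda_i).
import math

def outage_lower_bound(r, lam):
    n = len(r)
    R = sum(r)                                    # R_n: total sum rate
    S = sum(2 ** ri - 1 for ri in r)              # S_n
    beta = sum(l * (2 ** ri - 1) for l, ri in zip(lam, r))  # beta_n
    total = 0.0
    for i in range(n):
        gamma_i = 1.0
        for j in range(n):
            if j != i:
                gamma_i *= lam[j] / (lam[j] - lam[i])
        total += gamma_i * math.exp(-beta - lam[i] * (2 ** R - S - 1))
    return 1.0 - total
```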
For simplicity, in the derivation of the upper bound we only consider a MAC with $n$ i.i.d. links. Computing the bound for the case of independent links with nonidentical variances is much more complicated and is left for future work. The upper bound for a MAC with $3$ links is given in Lemma \[lemma:MAC\_3\_bound\] and the upper bound for the general case is given in Theorem \[theorem:MAC\_probability\_lower\_bound\]. \[lemma:MAC\_3\_bound\] The probability of common outage of a MAC with $3$ i.i.d. links is bounded by $$\rm{Pr}^{\rm{out}}_{\rm{MAC}_3}\leq 1-e^{-\lambda\left(2^{R_3}-1\right)}G(\lambda\alpha_3),$$ where $G(x)=\frac{1}{2}x^2+x+1$, $\lambda=\frac{1}{2\upsilon^2}\frac{\sigma^2}{p}$, $R_3=\displaystyle\sum_{i=1}^{3}{r_i}$ and $\alpha_3=(2^{r_1}-1)(2^{r_2}-1)(2^{r_3}-1)$. The probability of a successful (non-outage) transmission is given by $$1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_3}=\Pr\left({\bf A}_3{\bf z}_3\geq \lambda {\bf b}_3\right).$$ Define the constants $\beta_i=2^{r_i}-1$ and $\beta_{i,j}=2^{r_i+r_j}-1$. Note that the rows of ${\bf A}_3$ and ${\bf b}_3$ indexed by $\{2^i\;:\;i=0,1,2\}$ satisfy the conditions of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\]. Hence, by three uses of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\] we can eliminate these rows of ${\bf A}_3$ and ${\bf b}_3$. These three uses of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\] are legitimate, since after each use of the lemma the resulting matrix and vector still satisfy the conditions of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\] (since $\beta_i\geq 0$, $\beta_{i,j}-\displaystyle\sum_{k\in{\mathcal{B}}}{\beta_k}\geq 0$ and $(2^{R_3}-1)-\displaystyle\sum_{i=1}^{j}{\beta_i}\geq 0$, for all $\mathcal{B}\subseteq\{i,j\}$ and $i,j\in\{1,2,3\}$).
Therefore, the probability of successful transmission can be rewritten as $$\label{eq:probability_of_success_A(4)} 1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_3}=e^{-\lambda S_3}\Pr\left({\bf A}{\bf z}\geq\lambda\tilde{\bf b}\right),$$ where ${\bf A}$ is the submatrix of ${\bf A}_3$ constructed by deleting the rows indexed by $\{2^i:i=0,1,2\}$, $S_3=\displaystyle\sum_{i=1}^{3}{(2^{r_i}-1)}$ and $\tilde{\bf b}=\left[\beta_{2,3}-\beta_2-\beta_3\right.$, $\beta_{1,3}-\beta_1-\beta_3$, $\beta_{1,2}-\beta_1-\beta_2$, $\left.(2^{R_3}-1)-\beta_1-\beta_2-\beta_3\right]^T$. It is easy to see that $$\label{eq:A(4)_inequality} \Pr\left({\bf A}{\bf z}\geq\lambda\tilde{\bf b}\right)\geq\Pr\left(\tilde{\bf A}{\bf z}\geq\lambda\tilde{\bf b}\right),$$ where $$\begin{array}{ll} \bf A=\left[\begin{array}{lll} 0&1&1\\1&0&1\\1&1&0\\1&1&1 \end{array}\right], & \tilde{\bf A}=\left[\begin{array}{lll} 0&1&0\\0&0&1\\1&0&0\\1&1&1 \end{array}\right], \end{array} \nonumber$$ since each row of $\tilde{\bf A}$ is obtained by zeroing entries of the corresponding row of ${\bf A}$ and the eliminated $z_1,z_2$ and $z_3$ are non-negative random variables. Note that since $\tilde{b}_4-\displaystyle\sum_{i=1}^{3}{\tilde{b}_i}=\alpha_3\geq 0$ and $\tilde{\bf b}\geq 0$, we have that $\tilde{b}_4-\displaystyle\sum_{i=1}^{j}{\tilde{b}_i}\geq 0$ for all $j\in\{1,2,3\}$. Again, by three uses of Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\], we can eliminate the first three rows of $\tilde{\bf A}$ and write $$\label{eq:A_tilde_probability} \Pr\left(\tilde{\bf A}{\bf z}\geq\lambda\tilde{\bf b}\right)=e^{-\lambda\tilde{\gamma}}\Pr\left(z_1+z_2+z_3\geq\lambda\alpha_3\right),$$ where $\tilde{\gamma}=\tilde{b}_1+\tilde{b}_2+\tilde{b}_3$.
Note that $Z=z_1+z_2+z_3$ has an $\rm{Erlang}(3,1)$ distribution and therefore $$\Pr\left(Z>z\right)=e^{-z}G(z).$$ Hence, (\[eq:A\_tilde\_probability\]) can be rewritten as $$\label{eq:A_tilde_probability_rewritten} \Pr\left(\tilde{\bf A}{\bf z}\geq\lambda\tilde{\bf b}\right)=e^{-\lambda\tilde{\gamma}}e^{-\lambda\alpha_3}G(\lambda\alpha_3).$$ Combining (\[eq:probability\_of\_success\_A(4)\]), (\[eq:A(4)\_inequality\]) and (\[eq:A\_tilde\_probability\_rewritten\]) yields $$1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_3}\geq e^{-\lambda\left(\alpha_3+S_3+\tilde{\gamma}\right)}G(\lambda\alpha_3).$$ The claim now follows from the fact that $$\alpha_3+S_3+\tilde{\gamma}=2^{R_3}-1. \nonumber$$ \[theorem:MAC\_probability\_lower\_bound\] The probability of common outage of a MAC with $n\geq 3$ i.i.d. $\rm{Rayleigh}(\upsilon)$ channels is bounded by $$\label{ineq:Upper_MAC_Outage_Bound} \rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\leq 1-e^{-\lambda\left(2^{R_n}-1\right)}G(\lambda\alpha_n),$$ where $G(x)=\frac{1}{2}x^2+x+1$, $R_n=\displaystyle\sum_{i=1}^{n}{r_i}$ and $\alpha_n=\displaystyle\prod_{i=1}^{n}{(2^{r_i}-1)}$. The proof proceeds by induction on $n$. The claim is true for $n=3$ by Lemma \[lemma:MAC\_3\_bound\]. Let the statement be true for $n=k$; we now prove it for $n=k+1$. From (\[eq:Probability\_of\_success:A\_N\]) we have that $$\label{eq:Probability_of_success:A_k+1} 1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_{k+1}}= \Pr\left({\bf A}_{k+1}{\bf z}_{k+1}\geq \lambda {\bf b}_{k+1}\right).$$ Note that $\left[{\bf b}_{k+1}\right]_{2^k}=2^{r_1}-1$. Also note that by exploiting the structure of ${\bf A}_{k+1}$ (see equation (\[eq:A\_n:recursion\])), the $2^k$’th row of ${\bf A}_{k+1}$ can be eliminated by using Lemma \[lemma:A\_n\_has\_a\_row\_with\_one\_and\_zeros\].
Hence, $$\label{eq:Probability_of_success:A_k+1,reduced} 1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_{k+1}}= e^{-\lambda(2^{r_1}-1)}\Pr\left(\tilde{\bf A}_{k+1}{\bf z}_{k+1}\geq \lambda \tilde{\bf b}_{k+1}\right),$$ where $$\label{eq:A_k+1:reduced} \tilde{\bf A}_{k+1}=\left[\begin{array}{lll} {\bf 0}_{2^k-1} &,& {\bf A}_{k}\\ {\bf 1}_{2^k-1} &,& {\bf A}_{k} \end{array} \right]$$ and $\tilde{\bf b}_{k+1}=\left(2^{\tilde{\bf A}_{k+1}{\bf r}}-1\right)-(2^{r_1}-1)\left[{\bf 0}_{2^k-1}^T, {\bf 1}_{2^k-1}^T\right]^T$. It can easily be verified that $$\label{eq:Probability_A_check} \Pr\left(\tilde{\bf A}_{k+1}{\bf z}_{k+1}\geq \lambda \tilde{\bf b}_{k+1}\right)\geq \Pr\left(\check{\bf A}_{k+1}{\bf z}_{k+1}\geq \lambda \tilde{\bf b}_{k+1}\right),$$ where $$\check{\bf A}_{k+1}=\left[\begin{array}{lll} {\bf 0}_{2^k-1} &,& {\bf A}_{k}\\ {\bf 0}_{2^k-1} &,& {\bf A}_{k} \end{array} \right], \nonumber$$ since $z_1\geq 0$. Note that for any two vectors ${\bf 0}_{2^k-1}\leq{\bf x}_1\leq{\bf x}_2$ the following holds: $$\label{eq:A_check_reduced} \Pr\left(\check{\bf A}_{k+1}{\bf z}_{k+1}\geq \left[{\bf x}_1^T,{\bf x}_2^T\right]^T\right)=\Pr\left({\bf A}_k{\bf z}_k\geq {\bf x}_2\right).$$ Note that $$2^{\tilde{\bf A}_{k+1}{\bf r}}=\left[\begin{array}{l} 2^{{\bf A}_k\boldsymbol{\gamma}}\\ 2^{r_1}\cdot 2^{{\bf A}_k\boldsymbol{\gamma}} \end{array} \right],$$ where $\boldsymbol{\gamma}=[r_2,r_3,\ldots,r_{k+1}]^T$. Therefore, we have that $$\label{eq:b_k+1_tilde} \tilde{\bf b}_{k+1}=\left[\begin{array}{l} 2^{{\bf A}_k\boldsymbol{\gamma}}-1\\ 2^{r_1}\left(2^{{\bf A}_k\boldsymbol{\gamma}}-1\right) \end{array} \right].$$ Since $2^{r_1}\geq 1$, combining (\[eq:A\_check\_reduced\]) with (\[eq:b\_k+1\_tilde\]) yields $$\label{eq:A_check_reduced_even_more} \Pr\left(\check{\bf A}_{k+1}{\bf z}_{k+1}\geq \lambda\tilde{\bf b}_{k+1}\right)=\Pr\left({\bf A}_k{\bf z}_k\geq \tilde{\lambda}\left(2^{{\bf A}_k\boldsymbol{\gamma}}-1\right)\right),$$ where $\tilde{\lambda}=\lambda 2^{r_1}$.
By the induction hypothesis for $k$, we have that $$\label{eq:A_k:induction_hypothesis_lambda_tilde} \Pr\left({\bf A}_k{\bf z}_k\geq \tilde{\lambda}\left(2^{{\bf A}_k\boldsymbol{\gamma}}-1\right)\right)\geq e^{-\tilde{\lambda}\left(2^{\tilde{R}_2}-1\right)}G\left(\tilde{\lambda}\tilde{\alpha}_2\right),$$ where $\tilde{R}_2=\displaystyle\sum_{i=2}^{k+1}{r_i}$ and $\tilde{\alpha}_2=\displaystyle\prod_{i=2}^{k+1}{\left(2^{r_i}-1\right)}$. Combining (\[eq:Probability\_of\_success:A\_k+1,reduced\]), (\[eq:Probability\_A\_check\]), (\[eq:A\_check\_reduced\_even\_more\]) and (\[eq:A\_k:induction\_hypothesis\_lambda\_tilde\]) yields $$1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_{k+1}}\geq e^{-\lambda(2^{r_1}-1)}e^{-\tilde{\lambda}\left(2^{\tilde{R}_2}-1\right)}G\left(\tilde{\lambda}\tilde{\alpha}_2\right).$$ The claim now follows from the facts that $$\lambda(2^{r_1}-1)+\tilde{\lambda}\left(2^{\tilde{R}_2}-1\right)=\lambda\left(2^{R_{k+1}}-1\right), \nonumber$$ $\tilde{\lambda}\tilde{\alpha}_2\geq \lambda\alpha_{k+1}$ and that $G(x)$ is monotonically increasing in $x$. Note that the lower bound (\[ineq:Lower\_MAC\_Outage\_Bound\_gamma\]) in the i.i.d. case may also be expressed as $$\label{ineq:Lower_MAC_Outage_Bound} \rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\geq 1-e^{-\lambda\left(2^{R_n}-1\right)}\tilde{G}\left(\lambda\left(2^{R_n}-1-S_n\right)\right),$$ where $\tilde{G}\left(x\right)=\displaystyle\sum_{k=0}^{n-1}{\frac{1}{k!}x^k}$. Hence, $$\tilde{G}(\lambda\beta_n)\geq e^{\lambda\left(2^{R_n}-1\right)}\left(1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\right)\geq G(\lambda\alpha_n),$$ where $\beta_n=2^{R_n}-1-S_n$.

Rate allocation for the fading MAC network model {#sec:Rate_allocation_for_the_outage_MAC_model}
================================================

In this section we study the problem of finding the rate allocation vector for the fading MAC network model discussed in the previous sections. Our wireless model assumes slow, independent Rayleigh fading channels.
For simplicity, we only consider a network in which $\lambda_{i,j}=\lambda_j$ for all $j\in\mathcal{V}\backslash\{s\}$. In other words, the network is assumed to be a collection of multiple access channels with i.i.d. links (note that the $\lambda_j$, $j\in\mathcal{V}\backslash\{s\}$, may be distinct). This assumption amounts to normalizing the transmission powers of the nodes connected to the same receiver according to the statistics of the best channel. As was mentioned earlier, the case of independent links with nonidentical variances is much more complicated and is left for future work. While the instantaneous channel gains may be available at both the encoders and the decoders, we assume that the rate allocation vector is determined a priori, based solely on the statistics. The rationale for this assumption is that the rate allocation vector is determined based on network considerations, whereas the instantaneous state of each component of the network varies faster than the entire network can respond to the variations. This assumption is also practical when the power constraints must be satisfied in each encoding block (e.g., when operating under Federal Communications Commission (FCC) regulations). Consider the rate allocation graph $\tilde{\mathcal{G}}=\left(\mathcal{V},\mathcal{E},{\bf r}\right)$ (the graph obtained by assigning a rate $r_{i,j}$ to each link $(i,j)$ in $\mathcal{G}$).
The optimal rate allocation vector that minimizes the probability of outage in the MAC network model while maintaining a multicast rate of $R_s$ is a solution of the following optimization problem: \[opt:optimal\_MAC\_convex\] $$\begin{aligned} \tag{\ref{opt:optimal_MAC_convex}} &\displaystyle\min_{{\bf f},{\bf r}} {\;\;\Pr\left(\displaystyle\bigcup_{j\in{\mathcal{V}}}{\left\{{\bf r}_j\notin\mathcal{V}^{\rm{ins}}_j({\bf h}_j)\right\}}\right)}\\ &\qquad\rm{subject\;\;to}\nonumber\\ \label{con:flow_rate_positivity_MAC_convex_optimal} &0\leq f_{i,j}^{d} \leq r_{i,j} \;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ \label{con:flow_conservation_MAC_convex_optimal} &\displaystyle\sum_{i\in\mathcal{I}(j)}{f_{i,j}^{d}}- \displaystyle\sum_{i\in\mathcal{O}(j)}{f_{j,i}^{d}}=\begin{cases} 0 & j\notin \{s,d\}\\ R_s & j=d\\ \end{cases}\\ &\qquad\qquad\qquad\qquad\qquad\forall j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s,\nonumber\end{aligned}$$ where the flow constraints (\[con:flow\_rate\_positivity\_MAC\_convex\_optimal\])-(\[con:flow\_conservation\_MAC\_convex\_optimal\]) guarantee that any feasible solution of (\[opt:optimal\_MAC\_convex\]) provides a min-cut of at least $R_s$ between the source and each destination. Therefore, a multicast rate of $R_s$ is achievable by network coding; see Theorem 1 in [@Lun_Minimum_2006]. Unfortunately, as was mentioned earlier, the computation of the probability of a common outage becomes extremely complicated in a MAC with more than $2$ links. Therefore, we present a suboptimal solution to the rate allocation problem in the fading MAC network model. We relax the problem: instead of using the exact expression of the probability of common outage, we minimize an upper bound on the outage probability of a multiple access channel. To that end, consider the following lemma.
\[lemma:weaker\_bound\] The probability of common outage of a MAC with $n$ i.i.d. $\rm{Rayleigh}(\upsilon)$ channels is bounded by $\rm{Pr}^{\rm{out}}_{\rm{MAC}_n}\leq 1-e^{-\lambda\left(2^{R_n}-1\right)}$. For $n=1$, we have a Rayleigh fading Gaussian channel with outage probability given by: $$\rm{Pr}^{\rm{out}}_{\rm{MAC}_1}=1-e^{-\lambda \left(2^{r_1}-1\right)}.$$ For $n=2$, a simple computation yields that $$\begin{aligned} 1-\rm{Pr}^{\rm{out}}_{\rm{MAC}_2}&=e^{-\lambda\left(2^{r_1+r_2}-1\right)}\left(1+\lambda\left(2^{r_1}-1\right)\left(2^{r_2}-1\right)\right)\nonumber\\ &\geq e^{-\lambda\left(2^{r_1+r_2}-1\right)}.\end{aligned}$$ For $n\geq 3$, the claim follows from Theorem \[theorem:MAC\_probability\_lower\_bound\] and the fact that $G(x)\geq 1$ for any non-negative $x$. The probability of an outage in the MAC network model is given in equation (\[eq:probability\_outage\_MAC\_network\_model\]). By assumption, all $h_{i,j}$ are independent of each other. Therefore, the probability of an outage in the MAC network model can be rewritten as $$\label{eq:probability_outage_MAC_network_model_simplified} P_{\rm{MAC}}^{\rm{out}}=1-\displaystyle\prod_{j\in\mathcal{V}\backslash\{s\}}{\left(1-P_j^{\rm{out}}\right)}.$$ Obviously, if $P_j^{\rm{out}}$ is bounded from above by $\tilde{P}_j^{\rm{out}}$, the following holds: $$P_{\rm{MAC}}^{\rm{out}}\leq 1-\displaystyle\prod_{j\in\mathcal{V}\backslash\{s\}}{\left(1-\tilde{P}_j^{\rm{out}}\right)}.$$ Although the bound in Lemma \[lemma:weaker\_bound\] is weaker than the one obtained from Theorem \[theorem:MAC\_probability\_lower\_bound\], we use the weaker bound to find a rate allocation vector for the outage MAC model.
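The $n=2$ closed form used above is easy to validate by simulation (a sketch of ours with assumed parameter values): sample the normalized exponentials, check the two single-link constraints and the sum-rate constraint, and compare with $e^{-\lambda(2^{r_1+r_2}-1)}\left(1+\lambda(2^{r_1}-1)(2^{r_2}-1)\right)$; by Lemma \[lemma:weaker\_bound\] this success probability is at least $e^{-\lambda(2^{r_1+r_2}-1)}$.

```python
# A Monte Carlo sketch checking the n = 2 closed-form success probability and
# the weaker bound of the lemma above; r1, r2, lam are assumed test values.
import math
import random

def success_mc(r1, r2, lam, trials=100_000, seed=7):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        z1, z2 = rng.expovariate(1.0), rng.expovariate(1.0)
        ok += (z1 / lam >= 2 ** r1 - 1 and          # single-link constraints
               z2 / lam >= 2 ** r2 - 1 and
               (z1 + z2) / lam >= 2 ** (r1 + r2) - 1)  # sum-rate constraint
    return ok / trials

r1, r2, lam = 0.5, 0.5, 1.0
closed = math.exp(-lam * (2 ** (r1 + r2) - 1)) * \
         (1 + lam * (2 ** r1 - 1) * (2 ** r2 - 1))
```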
For every $j\in\mathcal{V}\backslash\{s\}$ denote $$\label{eq:r_j_tilde} \tilde{R}_j:=\displaystyle\sum_{i\in\mathcal{I}(j)}{r_{i,j}}.$$ Hence, from Lemma \[lemma:weaker\_bound\] we have $$P_{\rm{MAC}}^{\rm{out}}\leq 1-\displaystyle\prod_{j\in\mathcal{V}\backslash\{s\}}{e^{-\lambda_j\left(2^{\tilde{R}_j}-1\right)}}.$$ Finally, since $e^{-\lambda_j\left(2^{\tilde{R}_j}-1\right)}$ is log-concave in $\tilde{R}_j$, the problem becomes computationally tractable: \[opt:suboptimal\_MAC\_convex\] $$\begin{aligned} \tag{\ref{opt:suboptimal_MAC_convex}} &\displaystyle\min_{{\bf f},{\bf r}} {\displaystyle\sum_{j\in\mathcal{V}\backslash \{s\}} {\lambda_j 2^{\tilde{R}_j}}}\\ &\qquad\rm{subject\;\;to}\nonumber\\ \label{con:flow_rate_positivity_MAC_convex} &0\leq f_{i,j}^{d} \leq r_{i,j} \;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ \label{con:flow_conservation_MAC_convex} &\displaystyle\sum_{i\in\mathcal{I}(j)}{f_{i,j}^{d}}- \displaystyle\sum_{i\in\mathcal{O}(j)}{f_{j,i}^{d}}=\begin{cases} 0 & j\notin \{s,d\}\\ R_s & j=d\\ \end{cases}\\ &\qquad\qquad\qquad\qquad\qquad\forall j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s.\nonumber\end{aligned}$$

Distributed solution for MAC network model {#sec:distributed}
==========================================

In the previous section the rate allocation vector for the MAC network model was given as the solution to the convex optimization problem (\[opt:suboptimal\_MAC\_convex\]). This problem can easily be solved by standard convex optimization techniques in a centralized fashion. However, the centralized solution requires full knowledge of the network topology and statistics. In this section we discuss how to solve (\[opt:suboptimal\_MAC\_convex\]) distributively. As pointed out in, e.g., [@Lun_Minimum_2006; @Strikant_the_mathematics_2004; @Feijer_stability_2010], if certain conditions are satisfied, convex optimization problems may be solved distributively by a continuous time primal-dual method. This method can be described as follows.
The optimization is studied through its Lagrangian, where the primal and dual variables are updated simultaneously by a set of gradient laws (a dynamic system). These laws define a trajectory in the direction of the respective partial gradients, starting from some initial point. The dynamic system is constructed so that the saddle points of the Lagrangian are equilibrium points. Hence, if strong duality holds for the original convex optimization problem, the algorithm stops updating the variables when it reaches the optimal solution. It is worth mentioning that, in contrast to the gradient method, in which convergence is guaranteed for convex problems from any initial point (see e.g., §9 in [@Boyd_Convex_2004]), the asymptotic behavior of dynamic systems is not immediate in the general case (even though the problem is convex). In other words, convergence to an equilibrium point is not guaranteed in the general case. In this type of problem the existence of Lyapunov functions is used to prove the stability of the equilibrium points. There is no general technique for the construction of these functions. However, in some specific cases the construction of Lyapunov functions is known (see e.g., [@Khalil_Nonlinear_2002]). As can be easily verified, the cost function in (\[opt:suboptimal\_MAC\_convex\]) is not strictly convex (and also is not separable in the decision variables ${\bf f}$ and ${\bf r}$). In problems with a non-strictly convex cost function, it is possible to have more than one optimum point. Hence, in this case the standard primal-dual solution may not converge. In [@Feijer_stability_2010] a modified primal-dual gradient method was derived for non-strictly convex problems. In that method the solution converges to one of the optimal points after a modification of the constraint set of the convex optimization problem.
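As a concrete toy illustration of this primal-dual mechanism (our own example, unrelated to the network problem): minimize $x^2$ subject to $1-x\le 0$, whose saddle point is $x^\star=1$, $\mu^\star=2$. The multiplier update uses the positive projection $[x]_p^+$ that also appears later in the gradient laws:

```python
def proj(x, p):
    # positive projection [x]_p^+ : returns 0 when both arguments are
    # negative, and x otherwise (keeps the multiplier from running away)
    return 0.0 if (x < 0.0 and p < 0.0) else x

def primal_dual(step=0.05, iters=4000):
    # minimize x^2 s.t. 1 - x <= 0; Lagrangian L(x, mu) = x^2 + mu * (1 - x)
    x, mu = 0.0, 0.0
    for _ in range(iters):
        x = x - step * (2.0 * x - mu)        # primal descent along -dL/dx
        mu = mu + step * proj(1.0 - x, mu)   # projected dual ascent along +dL/dmu
    return x, mu
```

The only equilibrium of these two coupled laws is the KKT point $(1,2)$; the network dynamics given later follow the same pattern, with one gradient law per primal and dual variable.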
Following [@Feijer_stability_2010], we suggest the following modified convex optimization problem: \[opt:MAC\_equivalent\] $$\begin{aligned} \tag{\ref{opt:MAC_equivalent}} &\displaystyle\min_{{\bf f},{\bf r}} {\displaystyle\sum_{j\in\mathcal{V}\backslash \{s\}} {\lambda_j 2^{\tilde{R}_j}}}\\ &\qquad\rm{subject\;\;to}\nonumber\\ &\phi\left(-f_{i,j}^{d}\right)\leq 0\;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ &\phi\left(f_{i,j}^d-r_{i,j}\right)\leq 0\;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ \label{con:modified_flow_conservation_upper_side} &\phi\left(q_j^d\right)\leq 0\qquad\forall j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s\\ \label{con:modified_flow_conservation_lower_side} &\phi\left(-q_j^d\right)\leq 0\qquad\forall j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s,\end{aligned}$$ where $\tilde{R}_j$ was defined in (\[eq:r\_j\_tilde\]), $\phi(x)=e^x-1$ and for all $j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s$ $$\begin{aligned} &q_j^d:=\displaystyle\sum_{i\in\mathcal{I}(j)}{f_{i,j}^{d}}- \displaystyle\sum_{i\in\mathcal{O}(j)}{f_{j,i}^{d}}-\psi_j^d\\ &\psi_j^d:=\begin{cases} 0 & j\notin \{s,d\}\\ R_s & j=d \end{cases}.\end{aligned}$$ Note that theorem 11 in [@Feijer_stability_2010] that guarantees convergence for the corresponding dynamic system was proved under the assumption that Slater’s condition holds for the modified convex optimization problem. It is easy to see that Slater’s condition does not hold for (\[opt:MAC\_equivalent\]). However, it can be verified that their proofs remain valid as they are under any other constraint qualification (i.e., whenever strong duality for the modified optimization holds). In Appendix \[appendix:qualification\], we show that strong duality holds for (\[opt:MAC\_equivalent\]). 
Denote by $\rho_{i,j}^d,w_{i,j}^d,\varphi_j^d$ and $\mu_j^d$ the dual variables of (\[opt:MAC\_equivalent\]) and define $$\Delta_{i,j}^d:=-\varphi_j^d e^{q_j^d}+I_{i\neq s}\varphi_i^d e^{q_i^d}+\mu_j^d e^{-q_j^d}-I_{i\neq s}\mu_i^d e^{-q_i^d},$$ where $I_{i\neq s}=1$ if $i\neq s$ and zero otherwise. The primal-dual gradient laws for (\[opt:MAC\_equivalent\]) are given by $$\begin{aligned} \label{dynamic_begin} &\dot{r}_{i,j}=\tau_{i,j} \left(-\lambda_j e^{\tilde{R}_j}+\displaystyle\sum_{d\in\mathcal{D}_s}{w_{i,j}^d e^{f_{i,j}^d-r_{i,j}}}\right)\\ &\dot{f}_{i,j}^d=k_{i,j}^d\left(\rho_{i,j}^d e^{-f_{i,j}^d}-w_{i,j}^d e^{f_{i,j}^d-r_{i,j}}+\Delta_{i,j}^d\right)\\ &\dot{\rho}_{i,j}^d=\alpha_{i,j}^d\left[e^{-f_{i,j}^d}-1\right]_{\rho_{i,j}^d}^{+}\\ &\dot{w}_{i,j}^d=\theta_{i,j}^d \left[e^{f_{i,j}^d-r_{i,j}}-1\right]_{w_{i,j}^d}^{+}\\ &\dot{\varphi}_j^d=\beta_j^d\left[e^{q_{j}^d}-1\right]_{\varphi_j^d}^{+}\\ \label{dynamic_end} &\dot{\mu}_j^d=\gamma_j^d\left[e^{-q_{j}^d}-1\right]_{\mu_j^d}^{+},\end{aligned}$$ where $\tau_{i,j},k_{i,j}^d,\theta_{i,j}^d,\alpha_{i,j}^d,\beta_j^d$ and $\gamma_j^d$ are some positive scalars and for any two scalars $x$ and $p$ $$[x]_{p}^{+}=\begin{cases} 0 & x<0,p<0 \\ x & \rm{otherwise}. \end{cases} \nonumber$$ The dynamic (\[dynamic\_begin\])-(\[dynamic\_end\]) can be distributively implemented by associating a processor for each node in the network, excluding the source node. Each node $j$’s processor keeps track of the variables $\varphi_j^d$ and $\mu_j^d$ as well as the variables associated with node $j$’s ingoing links (i.e., the links in $\{(i,j): i\in\mathcal{I}(j)\}$). Note that message passing is required only between direct neighbors. Simulation results {#sec:simulation} ================== In this section the probability of outage of the suboptimal algorithm for the MAC network model is presented. In the simulation we consider the networks shown in Fig. 
\[fig:MAC\_network\_model\] where it was assumed that all links are i.i.d $\rm{Rayleigh}(\upsilon)$ channels, with $\upsilon=1$. We solved (\[opt:suboptimal\_MAC\_convex\]) for various values of ${\ensuremath{\hbox{SNR}}}=\frac{P}{\sigma^2}$ and the results are shown in Fig. \[fig:resq\_example\_OutageProbability\_VS\_MulticastRate\_fullPaperNetwork\]. The lower and upper bounds for the outage probability were obtained by calculating (\[ineq:Lower\_MAC\_Outage\_Bound\_gamma\]) and (\[ineq:Upper\_MAC\_Outage\_Bound\]) (respectively) for each MAC associated with a node $j$ with $|\mathcal{I}(j)|\geq 3$, and the exact expression of the outage probability for each receiver $j$ with $|\mathcal{I}(j)|\leq 2$. As can be seen, up to $6$ bits/sec/Hz the bounds are quite tight. We compared the performance of the fading MAC network model to the performance of the non-naive TDMA model. The optimal rate allocation scheme that minimizes the probability of outage for the TDMA based model was derived in [@Zanko_Network_2013Submeeted]. The results for the non-naive TDMA model are shown in Fig. \[fig:NonNaiveOutageProbability\_VS\_MulticastRate\_fullPaperNetwork\]. Although for low multicast rate demands there is no significant gain in preferring the MAC network model over the non-naive TDMA model, when we have a demand for a high multicast rate the MAC network model outperforms the TDMA based model. To emphasize this result, note that in the non-naive TDMA model (see Fig. \[fig:NonNaiveOutageProbability\_VS\_MulticastRate\_fullPaperNetwork\]) we achieved a multicast rate of $R_s\approx 5.5$ bits/sec/Hz with probability of outage $P^{out}\approx 0.1$ at ${\ensuremath{\hbox{SNR}}}=30$ dB, whereas we obtained the same results (i.e., $R_s\approx 5.5$, $P^{out}\approx 0.1$) in the MAC network model at ${\ensuremath{\hbox{SNR}}}=25$ dB. Finally, we simulated a discrete time version of the distributed algorithm shown in section \[sec:distributed\].
In this version we consider time steps $m=1,2,\ldots$ and replace the derivatives by differences. The scalars $\tau_{i,j},k_{i,j}^d,\theta_{i,j}^d,\alpha_{i,j}^d,\beta_j^d$ and $\gamma_j^d$ can be thought of as step sizes. We did not optimize these step sizes; they were randomly chosen at the initialization of the simulation ($\tau_{i,j}$ and $k_{i,j}^d$ were about $10$ times larger than the other step sizes). During the simulation, we considered the network shown in Fig. \[fig:MAC\_network\_model\] and it was assumed that node $5$ was the source node and that the destinations were $\mathcal{D}_s=\{1,4,8,10\}$. The convergence of the algorithm is shown in Fig. \[fig:Convergence\_of\_distributed\]. ![The probability of outage of the fading MAC network model for various values of SNR for the network shown in Fig. \[fig:MAC\_network\_model\]. It was assumed that node $5$ was the source node and that the destinations were $\mathcal{D}_s=\{1,4,8,10\}$.[]{data-label="fig:resq_example_OutageProbability_VS_MulticastRate_fullPaperNetwork"}](OutageProb_VS_MulticastRate_bits_20_25_30dB_FullPapernetwork_iid_sigma1.pdf){width="\textwidth"} ![The probability of outage in the non-naive TDMA for various values of SNR for the network shown in Fig. \[fig:MAC\_network\_model\]. It was assumed that node $5$ was the source node and that the destinations were $\mathcal{D}_s=\{1,4,8,10\}$.[]{data-label="fig:NonNaiveOutageProbability_VS_MulticastRate_fullPaperNetwork"}](NonNaive_TDMA_fig1_full.pdf){width="\textwidth"} ![The convergence of the distributed algorithm for the network shown in Fig. \[fig:MAC\_network\_model\]. It was assumed that node $5$ was the source node and that the destinations were $\mathcal{D}_s=\{1,4,8,10\}$.
The dashed line represents the optimal solution of (\[opt:suboptimal\_MAC\_convex\]) and the solid line represents the value of the cost function in (\[opt:suboptimal\_MAC\_convex\]) over time.[]{data-label="fig:Convergence_of_distributed"}](distributed_convergence_short_network.pdf){width="\textwidth"} Conclusions {#sec:conclusions} =========== In this paper we studied the rate allocation problem for multicasting over slow Rayleigh fading channels using network coding. A rate allocation scheme based solely on the statistics of the channels was presented. In the MAC network model, where the network is treated as a collection of slow Rayleigh fading multiple access channels, we proposed a suboptimal scheme as the solution to a convex optimization problem. This suboptimal solution is based on an upper bound on the probability of outage of a fading multiple access channel. A primal-dual gradient algorithm was derived to solve the problem distributively. In the simulation results, it is shown that the MAC network model outperforms the TDMA based model at high multicast rates. The paper provides a practical solution for networks with slow fading channels in which long delays are unacceptable (e.g., in video streaming), with the objective of minimizing outage events throughout the network. As potential future work, one could derive a bound on the outage probability in the non-i.i.d case and extend the statistics-based rate allocation scheme to fading models other than Rayleigh (e.g., the Rician or Nakagami models). A recursion for computing the joint distribution of linear combinations of exponential random variables {#appendix:recursion} ======================================================================================================= In this section we present a new version of a recursion that appeared as equation (17) in [@Huffer_Computing_2001].
The lemma in [@Huffer_Computing_2001] gives a recursion for the computation of the joint distribution of linear combinations of spacings of the uniform distribution. The authors in [@Huffer_Computing_2001] remarked that the recursion remains valid as well for computing the joint distribution of linear combinations of exponential random variables. This is inaccurate, and we revise the result to handle exponential random variables. To that end, we need the following notations. Let $\Psi=\{i_1,i_2,\ldots,i_k\}$, $k\geq 1$, be a set of indices of columns of a matrix $\bf A$ such that $i_{\ell}<i_{\ell+1}$ for all $\ell$, and let ${\bf A}_{-\Psi(m)}$ denote the submatrix of $\bf A$ constructed by deleting the columns of $\bf A$ indexed by $\{i_1,i_2,\ldots,i_m\}$. \[lemma:Huffer\_computing\_2001\] Let $z_1,z_2,\cdots,z_{_{N+1}}$ be $(N+1)$ i.i.d exponential random variables with expectation $E\{z_i\}=1$. Let $\Psi=\{i_1,i_2,\ldots,i_k\}$, $k\geq 1$, be a set of indices of identical columns of the matrix $\bf A$ (without loss of generality $i_{\ell}<i_{\ell+1}$ for all $\ell$). If there exists a row $r$ in $\bf A$ such that a) $a_{r,i}>0$ for $i\in\Psi$, b) $a_{r,i}=0$ for $i\notin\Psi$ and c) $b_r\geq 0$, the following recursion holds $$\label{eq:Huffer_computing_2001} \Pr\left({\bf Az}>\lambda{\bf b}\right)=\displaystyle\sum_{m=0}^{k-1}{ \frac{1}{m!}(\lambda\delta )^m e^{-\lambda\delta}\Pr\left({\bf A}^{r*}_{-\Psi(m)}{\bf z}>\lambda{\bf c}\right)},$$ where $\delta=\frac{b_r}{a_{r,i_1}}$ and ${\bf c}={\bf b}^{r*}-\delta{\bf a}_{i_1}^{r*}$.
As pointed out in [@Huffer_Computing_2001], since the expression ${\bf Az}>\lambda{\bf b}$ stands for a conjunction of inequalities involving i.i.d random variables, $\Pr\left({\bf Az}>\lambda{\bf b}\right)=\Pr\left({\boldsymbol \pi}{\bf Az}>\lambda{\boldsymbol \pi}{\bf b}\right)$ and $\Pr\left({\bf Az}>\lambda{\bf b}\right)=\Pr\left({\bf A}{\boldsymbol \pi}{\bf z}>\lambda{\bf b}\right)$ hold for any permutation matrix ${\boldsymbol \pi}$ with the appropriate dimensions. Therefore, without loss of generality, we assume that $r=1$ and $\Psi=\{1,2,\ldots,k\}$ (see the illustration of such a matrix in Fig. \[fig:R1\_R3\_A\_matrix\]). ![An illustration of a matrix $\bf A$ that satisfies the assumptions in Lemma \[lemma:Huffer\_computing\_2001\], where $\Psi=\{1,2,\ldots,k\}$ and $r=1$.[]{data-label="fig:R1_R3_A_matrix"}](R1_R3_A_matrix.pdf){width="80.00000%"} Under these assumptions, the first inequality in ${\bf Az}>\lambda{\bf b}$ is $$D=\{a_{1,1}\displaystyle\sum_{\ell=1}^{k}{z_{\ell}}>\lambda b_1\}.$$ Clearly, $$\Pr\left({\bf Az}>\lambda{\bf b}\right)=\Pr\left(D\cap\{{\bf A}^*{\bf z}> \lambda{\bf b}^*\}\right).$$ The event $D$ can be written as the union of disjoint events $D=\displaystyle\cup_{m=0}^{k-1}{D_m}$, where $$D_m=\{\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq\lambda\delta\leq \displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}\}.$$ Therefore, $$\label{eq:Law_of_total_probability_for_MAC_events} \Pr\left({\bf Az}>\lambda{\bf b}\right)=\displaystyle\sum_{m=0}^{k-1} {\Pr\left(D_m\right)\Pr\left({\bf A}^*{\bf z}>\lambda {\bf b}^*|D_m\right)}.$$ For $m<k$, the $r$th inequality in ${\bf Az}>\lambda{\bf b}$ can be rewritten as $$\begin{aligned} \label{ineq:R1_R3_inequalities} &\{\displaystyle\sum_{\ell=1}^{N+1}{a_{r,\ell}z_{\ell}>\lambda b_r}\}= \{\displaystyle\sum_{\ell=1}^{m+1}{a_{r,\ell}z_{\ell}}+ \displaystyle\sum_{\ell=m+2}^{N+1}{a_{r,\ell}z_{\ell}>\lambda b_r}\}=\nonumber\\ &\{a_{r,1}\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}+
\displaystyle\sum_{\ell=m+2}^{N+1}{a_{r,\ell}z_{\ell} - a_{r,1}\lambda\delta >\lambda b_r - a_{r,1}\lambda\delta }\}=\nonumber\\ & \{a_{r,1}\left(\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta\right)+ \displaystyle\sum_{\ell=m+2}^{N+1}{a_{r,\ell}z_{\ell}} >\lambda\left(b_r - \delta a_{r,1}\right)\}=\nonumber\\ &\{\left[a_{r,1},a_{r,m+2},\cdots,a_{_{r,N+1}}\right] {\bf T}^m>\lambda\left(b_r - \delta a_{r,1}\right)\},\end{aligned}$$ where $${\bf T}^m=\left[\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta, z_{m+2},\cdots,z_{N+1}\right]^T.$$ Therefore, $$\label{eq:lower_dimensionality_of_A} \Pr\left({\bf A}^*{\bf z}>\lambda{\bf b}^*|D_m\right)= \Pr\left({\bf A}_{(-m)}^*{\bf T}^m>\lambda\left({\bf b}^*-\delta{\bf a}^*\right)|D_m\right).$$ In the following we show that the event ${\bf T}^m|D_m$ has the same distribution as $(N+1-m)$ i.i.d exponential random variables. Obviously, $D_m$ is independent of $\left({\bf T}^{m}\right)^*$. Therefore, in order to show that ${\bf T}^m|D_m$ has the same distribution as $(N+1-m)$ i.i.d exponential random variables, it suffices to show that $\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta |D_m$ has the same distribution as $z_{m+1}$.
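Before completing the argument, this distributional claim can be sanity-checked by simulation (a sketch with illustrative parameters $m=2$, $\lambda\delta=1.5$; the helper name is ours and this is not part of the proof):

```python
import math
import random

def residual_check(m=2, t=1.5, n_samples=200000, seed=1):
    # draw z_1..z_{m+1} ~ Exp(1) and keep the runs falling in the event D_m,
    # i.e. sum_{l<=m} z_l <= t <= sum_{l<=m+1} z_l; on those runs the residual
    # sum_{l<=m+1} z_l - t should again be a unit-mean exponential variable
    rng = random.Random(seed)
    residuals = []
    for _ in range(n_samples):
        z = [rng.expovariate(1.0) for _ in range(m + 1)]
        if sum(z[:m]) <= t <= sum(z):
            residuals.append(sum(z) - t)
    mean = sum(residuals) / len(residuals)
    tail = sum(1 for r in residuals if r > 1.0) / len(residuals)
    accept = len(residuals) / n_samples
    return mean, tail, accept
```

With these parameters the conditional mean and the tail $\Pr\{\text{residual}>1\}$ come out close to $1$ and $e^{-1}$, as the claim predicts, and the accepted fraction approaches the expression for $\Pr(D_m)$ derived below.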
Note that $$\begin{aligned} &\left\{\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta |D_m\right\}= \left\{\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta|0\leq \lambda\delta -\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq z_{m+1}\right\}.\end{aligned}$$ Therefore, $$\begin{aligned} &\Pr\left(\displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}-\lambda\delta >x|0\leq \lambda\delta -\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq z_{m+1}\right)=\\ \label{eq:before_memory_property} &\Pr\left(z_{m+1}>x-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}+\lambda\delta|0\leq \lambda\delta-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq z_{m+1}\right).\end{aligned}$$ Due to the memoryless property[^2] of the exponential random variable we can write $$\begin{aligned} &\Pr\left(z_{m+1}>x-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}+\lambda\delta|z_{m+1}\geq \lambda\delta-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\geq 0\right)=\\ &\Pr\left(z_{m+1}>x-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}+\lambda\delta -\left(\lambda\delta-\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\right)\right)=\\ &\Pr\left(z_{m+1}>x\right).\end{aligned}$$ Therefore, (\[eq:lower\_dimensionality\_of\_A\]) becomes $$\label{eq:lower_dimensionality_A} \Pr\left({\bf A}^*{\bf z}>\lambda{\bf b}^*|D_m\right)= \Pr\left({\bf A}^*_{(-m)}{\bf z}>\lambda\left({\bf b}^*-\delta{\bf a}^*\right)\right).$$ Combining (\[eq:Law\_of\_total\_probability\_for\_MAC\_events\]) and (\[eq:lower\_dimensionality\_A\]) yields $$\Pr\left({\bf Az}>\lambda{\bf b}\right)=\displaystyle\sum_{m=0}^{k-1} {\Pr\left(D_m\right)\Pr\left({\bf A}^*_{(-m)}{\bf z}>\lambda\left({\bf b}^*-\delta{\bf a}^*\right)\right)}.$$ In order to complete the recursion we need an explicit expression of $\Pr(D_m)$. 
$$\begin{aligned} \Pr(D_m)&=\Pr\left(\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq\lambda\delta\leq \displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}\right)\\ &=1-\Pr\left(\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}>\lambda\delta \right)-\Pr\left(\lambda\delta> \displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}} \right)\\ &=\Pr\left(\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}\leq\lambda\delta \right)-\Pr\left( \displaystyle\sum_{\ell=1}^{m+1}{z_{\ell}}\leq \lambda\delta \right).\end{aligned}$$ Since $\displaystyle\sum_{\ell=1}^{m}{z_{\ell}}$ follows the $\rm{Erlang}(m,1)$ distribution[^3], we have that $$\Pr(D_m)=\frac{1}{m!}(\lambda\delta)^m e^{-\lambda\delta}.$$ The claim now follows. Strong duality of (\[opt:MAC\_equivalent\]) {#appendix:qualification} =========================================== \[lemma:modified\_slater\] Strong duality holds for (\[opt:MAC\_equivalent\]). Consider the following optimization problem \[opt:MAC\_equivalent\_modified\] $$\begin{aligned} \tag{\ref{opt:MAC_equivalent_modified}} &\displaystyle\min_{{\bf f},{\bf r}} {\displaystyle\sum_{j\in\mathcal{V}\backslash \{s\}} {\lambda_j 2^{\tilde{R}_j}}}\\ &\qquad\rm{subject\;\;to}\nonumber\\ &\phi\left(-f_{i,j}^{d}\right)\leq 0\;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ &\phi\left(f_{i,j}^d-r_{i,j}\right)\leq 0\;\;\;\forall(i,j)\in\mathcal{E},d\in\mathcal{D}_s\\ &\displaystyle\sum_{i\in\mathcal{I}(j)}{f_{i,j}^{d}}- \displaystyle\sum_{i\in\mathcal{O}(j)}{f_{j,i}^{d}}=\begin{cases} 0 & j\notin \{s,d\}\\ R_s & j=d\\ \end{cases}\\ &\qquad\qquad\qquad\qquad\qquad\forall j\in\mathcal{V}\backslash \{s\},d\in\mathcal{D}_s.\nonumber\end{aligned}$$ Note that the refined Slater’s condition holds for (\[opt:MAC\_equivalent\_modified\]) and therefore (\[opt:MAC\_equivalent\_modified\]) has zero duality gap [@Boyd_Convex_2004], but it does not hold for (\[opt:MAC\_equivalent\]). 
Obviously, the feasible sets of (\[opt:suboptimal\_MAC\_convex\]), (\[opt:MAC\_equivalent\]) and (\[opt:MAC\_equivalent\_modified\]) are all the same, and therefore they have identical optimal solutions. We need to show that solving (\[opt:MAC\_equivalent\]) through its Lagrangian yields the same solution. Denote the primal variables by ${\bf x}=({\bf f},{\bf r})$ and the dual variables by $\boldsymbol{\zeta}$. Denote the Lagrangians of (\[opt:MAC\_equivalent\]) and (\[opt:MAC\_equivalent\_modified\]) by $L({\bf x},\boldsymbol{\zeta})$ and $L_M({\bf x},\boldsymbol{\zeta})$, respectively. The dual function of (\[opt:MAC\_equivalent\]) is given by $q(\boldsymbol{\zeta}) = \displaystyle\min_{{\bf x}}{L({\bf x},\boldsymbol{\zeta})}$. Assume that there exists ${\bf \tilde{x}}(\boldsymbol{\zeta})=({\bf \tilde{f}},{\bf \tilde{r}})$ that is a minimizer of $L({\bf x},\boldsymbol{\zeta})$ that does not obey the flow conservation constraint (\[con:flow\_conservation\_MAC\_convex\]). Therefore, there exists a node $j\in\mathcal{V}\backslash \{s\}$ such that $q_j^d\neq 0$. Hence, we have either $\phi(q_j^d)>0$ or $\phi(-q_j^d)>0$. Without loss of generality assume that $\phi(q_j^d)>0$. In that case, since the cost function in (\[opt:MAC\_equivalent\]) is bounded below by $0$, we can always choose $\boldsymbol{\zeta}$ such that the dual solution $q(\boldsymbol{\zeta})$ is infinity (by setting all dual variables to zero except the one multiplying the positive term $\phi(q_j^d)>0$). This contradicts the feasibility of the primal (\[opt:MAC\_equivalent\]). We conclude by noting that $L({\bf x},\boldsymbol{\zeta})=L_M({\bf x},\boldsymbol{\zeta})$ for any ${\bf x}$ that obeys the flow conservation constraint (\[con:flow\_conservation\_MAC\_convex\]).
[^1]: Suppose $u_i\;\;i=1,2,\cdots,n$ are independently and uniformly distributed on the interval (0,1), and let $u_{(1)}\leq u_{(2)}\leq\cdots\leq u_{(n)}$ be the corresponding order statistics. The spacings $s_1,s_2,\cdots,s_{n+1}$ are defined by the successive differences between the order statistics: $s_i=u_{(i)}-u_{(i-1)}$, where $u_{(0)}:=0$ and $u_{(n+1)}:=1$. [^2]: The memoryless property of an exponential variable means that for any $a,b\geq 0$, we have that $\Pr\left(Z>a+b|Z>a\right)=\Pr\left(Z>b\right)$, where $Z$ is an exponential random variable. [^3]: The CDF of an Erlang$(m,\lambda)$ distributed random variable $Y$ is given by $F_Y(y)=1-\displaystyle\sum_{\ell=0}^{m-1}{\frac{1}{\ell!}(\lambda y)^{\ell} e^{-\lambda y}}$.
--- abstract: 'We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.' author: - | David P. Helmbold\ Department of Computer Science\ University of California, Santa Cruz\ Santa Cruz, CA 95064, USA\ `dph@soe.ucsc.edu`\ - | Philip M. Long\ Google\ `plong@google.com`\ bibliography: - 'general.bib' title: | **Surprising properties of dropout\ in deep networks\ ** --- Properties of the dropout penalty {#s:dropout.penalty} ================================= Acknowledgments {#acknowledgments .unnumbered} =============== We are very grateful to Peter Bartlett, Seshadhri Comandur, and anonymous reviewers for valuable communications.
--- abstract: '[Considering the strong field approximation we compute the hard thermal loop pressure at finite temperature and chemical potential of hot and dense deconfined QCD matter in the lowest Landau level at one-loop order. We consider the anisotropic pressure in presence of strong magnetic field, [*[i.e.]{}*]{}, the longitudinal and transverse pressure along the directions parallel and perpendicular to the magnetic field. As a first effort we compute and discuss the anisotropic quark number susceptibility of deconfined QCD matter in the lowest Landau level. The longitudinal quark number susceptibility is found to increase with temperature whereas the transverse one decreases with temperature. We also compute the quark number susceptibility in the weak field approximation, where the thermomagnetic correction turns out to be very marginal. ]{}' author: - Bithika Karmakar - Najmul Haque - Munshi G Mustafa bibliography: - 'ref.bib' title: 'Second-order quark number susceptibility of deconfined QCD matter in presence of magnetic field' --- Introduction ============ Fluctuations of conserved quantum numbers like baryon number, electric charge and strangeness have been proposed as probes of the hot and dense matter created in high energy heavy-ion collisions. However, if one collects all the charged particles in a heavy-ion collision, then the net charge is conserved and there is no fluctuation. But all the particles cannot be collected by any detector [@2000PhRvL..85.2076J]. One should consider a grand canonical ensemble for the case of a real detector. An isolated system does not fluctuate because it is in the thermodynamic limit. But if we consider a portion of the system which is small enough to treat the rest of the system as a bath, and is large enough to ignore the quantum fluctuations, then one can calculate the fluctuation of conserved quantities like baryon number using the grand canonical ensemble [@Asakawa:2000wh].
These fluctuations can be measured experimentally [@2000PhRvL..85.2076J; @Asakawa:2000wh; @Koch:2001zn]. Several lattice calculations compute fluctuations and correlations of the conserved quantities [@PhysRevD.92.114505; @PhysRevLett.111.062005; @BORSANYI2013270c; @Ding:2015fca; @PhysRevD.73.014004]. The fluctuations of the conserved quantum numbers can be used to determine the degrees of freedom of the system [@Asakawa:2000wh]. Second- and fourth-order quark number susceptibilities in a thermal medium have been calculated using the Hard Thermal Loop (HTL) approximation [@Haque:2014rua; @Haque:2013sja; @Haque:2013qta; @Chakraborty:2003uw; @Chakraborty:2001kx; @BLAIZOT2001143] and pQCD [@Vuorinen:2002ue; @Toimela:1984xy; @PhysRevD.68.054017]. Ref. [@Haque:2018eph] calculates the second-order quark number susceptibility (QNS) considering different quark masses for the $u$, $d$ and $s$ quarks. On the other hand, recent findings show that a magnetic field of the order of $10^{18}$ Gauss can be created at the center of the fireball by the charged spectator particles in non-central heavy-ion collisions [@SKOKOV_2009; @Kharzeev_2008]. The time varying magnetic field is created in a direction perpendicular to the reaction plane [@Shovkovy:2012zn; @DElia:2012ems; @Fukushima:2012vr; @Mueller:2014tea; @Miransky:2015ava] and its strength depends on the impact parameter. The strength of the magnetic field decreases after a few fm$/c$ of the collision [@SKOKOV_2009]. Several activities are under way to study the properties of strongly interacting matter in presence of magnetic field. Effects like magnetic catalysis  [@Shovkovy:2012zn; @Gusynin:1994xp; @Gusynin:1995gt], inverse magnetic catalysis [@Bali:2011qj; @AYALA201699; @PhysRevD.90.036001; @PhysRevD.91.016002] and the chiral magnetic effect [@Fukushima:2008xe; @Kharzeev:2013ffa] in presence of magnetic field in non-central heavy-ion collisions have been reported.
Furthermore, various thermodynamic quantities [@Karmakar:2019tdp; @Bandyopadhyay:2017cle], transport coefficients [@Kurian:2018dbn; @Kurian:2017yxj], the dilepton production rate [@Das:2019nzv; @Bandyopadhyay:2016fyd; @Bandyopadhyay_2017; @Chyi_2000; @PhysRevC.88.024910; @Ghosh:2018xhh], the photon production rate [@PhysRevLett.110.192301; @PhysRevLett.109.202303] and the damping of photons [@Ghosh:2019kmf] in magnetised QCD matter have been obtained. Here, for simplicity, we consider strong ($gT<T<\sqrt{|eB|}$) and weak ($\sqrt{|q_fB|}<m_{th}\sim gT<T $) magnetic fields with two different scale hierarchies. As a first effort in this article, using the one-loop HTL pressure of quarks and gluons at finite quark chemical potential in presence of magnetic field, we calculate the second-order QNS of deconfined QCD matter in these two scale hierarchies. The paper is organized as follows: in Sec. \[setup\] we present the setup to calculate the second-order QNS. In Subsec. \[quark\_f\], the one-loop HTL free-energy of quarks in presence of strong magnetic field at finite temperature and chemical potential is calculated. The gauge boson free-energy in presence of strong magnetic field is obtained in Subsec. \[gauge\_boson\]. We discuss in Subsec. \[pressure\] the anisotropic pressure and second-order QNS of QCD matter in the strong field approximation. Considering the one-loop HTL pressure of the quark-gluon plasma in the weak field approximation [@Bandyopadhyay:2017cle], we also calculate and discuss the second-order QNS in the presence of weak magnetic field in Sec. \[wfa\]. We conclude in Sec. \[conclusion\]. Setup ===== Here we consider the deconfined QCD matter as a grand canonical ensemble. The free-energy of the system can be written as $$F(T,V,\mu)=u-Ts-\mu n,$$ where $\mu$ is the quark chemical potential, $n$ the number density and $s$ the entropy density. The pressure of the system is given as $$P=-F.$$
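Since $P=-F$ and, at fixed volume, $dF=-s\,dT-n\,d\mu$, the number density and its fluctuation are successive $\mu$-derivatives of the pressure. This standard thermodynamic step, written out explicitly, is:

```latex
% Gibbs-Duhem relation for the grand canonical pressure:
dP = s\,dT + n\,d\mu
\quad\Longrightarrow\quad
n = \left.\frac{\partial P}{\partial \mu}\right|_{T},
\qquad
\left.\frac{\partial n}{\partial \mu}\right|_{T}
  = \left.\frac{\partial^{2} P}{\partial \mu^{2}}\right|_{T},
```

which is the chain of equalities underlying the definition of the susceptibility in what follows.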
However, we consider the system to be anisotropic in presence of strong magnetic field and the free-energy of the system is defined in Eq. . The second-order QNS is defined as $$\chi=-\frac{\partial^2 F}{\partial \mu^2}\bigg|_{\mu=0}=\frac{\partial^2 P}{\partial \mu^2}\bigg|_{\mu=0}=\frac{\partial n}{\partial \mu}\bigg|_{\mu=0},\label{chi_def}$$ which is a measure of the variance, or fluctuation, of the net quark number. One can find the covariance of two conserved quantities when the quark flavors have different chemical potentials. Alternatively, one can work with another basis according to the system, [*[e.g.]{}*]{}, net baryon number $\mathcal B$, net charge $\mathcal Q$ and strangeness number $\mathcal S$, or $\mathcal B$, $\mathcal Q$ and the third component of isospin $\mathcal I_3$. In our case we take the strangeness and charge chemical potentials to be zero. Moreover, we have considered the same chemical potential for all flavors, which results in zero off-diagonal quark number susceptibilities. Thus the net second-order baryon number susceptibility is related to the second-order QNS as $\chi_B=\frac{1}{3}\chi$. The strength of the magnetic field produced in non-central heavy-ion collisions can be up to $(10-20)m_\pi^2$ at the time of collision [@Bzdak:2011yy]. However, it decreases very fast, being inversely proportional to the square of time [@PhysRevLett.110.192301; @McLerran:2013hla]. But if one considers the finite electric conductivity of the medium, then the magnetic field strength will not die out very fast [@Tuchin:2013bda; @Tuchin:2012mf; @Tuchin:2013ie]. We consider two different cases with strong and weak magnetic field in this article. Strong magnetic field {#sfa} ===================== In this section we consider the strong field scale hierarchy $gT < T < \sqrt{eB}$. In presence of magnetic field, the energy of a charged fermion becomes $E_n=\sqrt{k_3^2+m_f^2+2n q_fB}$, where $k_3$ is the momentum of the fermion along the magnetic field direction, $m_f$ is the mass of the fermion and the Landau level $n$ can vary from 0 to $\infty$. The transverse momentum of the fermion becomes quantised.
It can be shown that at very high magnetic field, the contribution from all the Landau levels except the lowest Landau level can be ignored [@Bandyopadhyay:2016fyd]. Consequently, the dynamics becomes $(1+1)$ dimensional when one considers only the lowest Landau level (LLL). The general structures of the quark and gluon self-energies in presence of magnetic field have been formulated in Ref. [@Karmakar:2019tdp] at finite temperature but for zero quark chemical potential. Here we extend them to the case of non-zero quark chemical potential. In the presence of strong magnetic field, the general structure of the quark self-energy can be written as [@Karmakar:2019tdp] $$\Sigma(p_0,p_3)= a\,\slashed{u} + b\,\slashed{n} + c\,\gamma_5\slashed{u} + d\,\gamma_5\slashed{n},$$ where the rest frame of the heat bath has velocity $u_\mu=(1,0,0,0)$ and the direction of the magnetic field is $n_\mu=(0,0,0,1)$. Now, the various form factors can be obtained as $$\begin{aligned} a&=\frac{1}{4}{\rm Tr}\left[\Sigma\,\slashed{u}\right], \label{a_def}\\ b&=-\frac{1}{4}{\rm Tr}\left[\Sigma\,\slashed{n}\right], \label{b_def}\\ c&=\frac{1}{4}{\rm Tr}\left[\gamma_5\,\Sigma\,\slashed{u}\right], \label{c_def}\\ d&=-\frac{1}{4}{\rm Tr}\left[\gamma_5\,\Sigma\,\slashed{n}\right]. \label{d_def}\end{aligned}$$ The form factors are calculated up to $\mathcal O[\mu^4]$ in Appendix \[quark\_ff\] as $$a=-d=c_1\,\cdots, \qquad b=-c=-c_1\,\cdots,$$ where $c_1,c_2$ and $c_3$ are defined in Eqs. . One-loop quark free-energy in the presence of a strongly magnetized medium {#quark_f} -------------------------------------------------------------------------- Here we calculate the quark free-energy within the HTL approximation using the form factors of the quark self-energy obtained above. The quark free-energy can be written as $$F_q=- d_F \sum\!\!\!\!\!\!\!\int_{\{p_0\}}\;\ln\det\left[S^{-1}(p_0,p_3)\right], \label{fe}$$ where $d_F=N_c N_f$. The inverse of the effective fermion propagator can be written as $$\begin{aligned} S^{-1}&=\slashed{P}_\shortparallel+\Sigma\\ &=(p_0+a)\gamma^0+(b-p_3)\gamma^3+c\,\gamma_5\gamma^0+d\,\gamma_5\gamma^3.\end{aligned}$$ Now we evaluate the determinant as $$\begin{aligned} \det\left[S^{-1}\right]&=\left((b+c-p_3)^2-(a+d+p_0)^2\right)\left((-b+c+p_3)^2-(a-d+p_0)^2\right)\\ &=(p_0^2-p_3^2)\left((p_0+2a)^2-(p_3-2b)^2\right)\\ &= P_\shortparallel^2\left(P_\shortparallel^2+4 a p_0+4 b p_3+4 a^2- 4b^2\right)\\ &= P_\shortparallel^4 \left(1+\frac{4 a p_0+4 b p_3+4 a^2- 4b^2}{P_\shortparallel^2}\right),\end{aligned}$$ where we have used $d=-a$ and $c=-b$. So Eq.
(\[fe\]) becomes $$\begin{aligned} F_q&=&- d_F \sum\!\!\!\!\!\!\!\int_{\{p_0\}}\, \ln\,\det\, S^{-1}_{\rm eff}\\ &=& -2 d_F \sum\!\!\!\!\!\!\!\int_{\{p_0\}}\, \ln P_\shortparallel^2 - d_F \sum\!\!\!\!\!\!\!\int_{\{p_0\}}\, \ln\left(1+\frac{4 a p_0+4 b p_3+4 a^2- 4b^2}{P_\shortparallel^2}\right)\\ &=& F^{\rm f}_q+ F'_q,\end{aligned}$$ where the free-energy of free quarks in the presence of a magnetic field is F\^\_q&=& -2 d\_F \_[{p\_0}]{}  = -2 d\_F \_f \_[{p\_0}]{} dp\_3\ &=&- d\_F \_f ( 1+12\^2 ), where $\hat \mu=\mu/2\pi T$. F’\_q &=&- d\_F \_[{p\_0}]{} \ &=&- d\_F \_[{p\_0}]{} \[F\_2\_exp\], where we have kept terms up to $\mathcal{O}(g^4)$ to obtain the analytic expression of the free-energy. The expansion made above is valid for $g^2 (q_fB/T^2) < 1$, which can be realized as $(q_fB)/T^2 \gtrsim 1$ and $g \ll 1$. As in the strong field approximation, the fermion is considered to be in the LLL. So Eq. (\[F\_2\_exp\]) becomes, F’\_q&=& - d\_F \_f\_[{p\_0}]{} dp\_3 . \[Fq’\_ini\] The sum-integrals are calculated in Appendix \[quark\_free\_energy\] and the expression for the quark free-energy up to $\mathcal{O}(g^4)$ is obtained by adding the individual contributions as F\_q&=&F\_q\^+F\_q’=- d\_F \_f (1+12\^2 )\ &+&4 d\_F \_f $\frac{\Lambda}{4\pi T}$\^[2]{}+O\[\^6\]. The renormalized quark free-energy is given as F\_q\^r &=&- d\_F \_f(1+12\^2 ) +4 d\_F \_f where $\hat \Lambda=\Lambda/2\pi T$ and $\hat \mu=\mu/2\pi T$.

Gauge boson free-energy in a strongly magnetized medium {#gauge_boson}
-------------------------------------------------------

The general structure of the gauge boson self-energy can be written from Ref.
[@Karmakar:2018aig] as $$\Pi^{\mu\nu}=\alpha\, B^{\mu\nu}+ \beta\, R^{\mu\nu}+\gamma\, Q^{\mu\nu}+\delta\, N^{\mu\nu},$$ where the form factors can be calculated for non-zero quark chemical potential as $$\begin{aligned} \alpha &=\frac{m_D^2}{\bar u^2}\left[1-\mathcal{T}_P(p_0,p)\right]-\sum_f \frac{(\delta m_{D,f}^2)_s}{\bar u^2}e^{{-p_\perp^2}/{2q_fB}}~\frac{p_3^2}{p_0^2-p_3^2}, \label{b_sf} \\ \beta&=\frac{m_D^2}{2}\left[\frac{p_0^2}{p^2}-\frac{P^2}{p^2}\mathcal{T}_P(p_0,p)\right] , \label{c_sf} \\ \gamma&= \frac{m_D^2}{2}\left[\frac{p_0^2}{p^2}-\frac{P^2}{p^2}\mathcal{T}_P(p_0,p)\right]+\sum_f \frac{(\delta m_{D,f}^2)_s}{\bar u^2} e^{{-p_\perp^2}/{2q_fB}}~ \frac{p_3^2}{p_0^2-p_3^2}, \label{d_sf}\\ \delta&=\sum_f (\delta m_{D,f}^2)_s\frac{\sqrt{\bar n^2}}{\sqrt{\bar u^2}}~ e^{{-p_\perp^2}/{2 eB}}\frac{p_0p_3}{p_0^2-p_3^2}, \label{a_sf}\end{aligned}$$ where ${\bar u}^2 = - p^2/P^2$, ${\bar n}^2 = -p_{\perp}^2/p^2$ and $$\mathcal{T}_P(p_0,p)=\frac{p_0}{2p}\ln\frac{p_0+p}{p_0-p}.$$ The thermal and magnetic corrections of the Debye screening mass are given as m\_D\^2&=&,\ (m\_[D,f]{}\^2)\_s&=&\_[-]{}\^ \ &=& ,\ (m\^s\_D)\^2&=&m\_D\^2 +\_f (m\_[D,f]{}\^2)\_s=m\_D\^2+(m\_D\^2)\_s. The total gluon free-energy expanded up to ${\mathcal O}[g^4]$ is given by F\_g &&d\_A\[Fsg\_expan\] where $d_A=N_c^2-1$. The renormalized total gluon free-energy, containing both hard and soft contributions, is given as F\_g\^r&=& -.\[fg\]

Longitudinal and transverse pressures and the corresponding susceptibilities {#pressure}
----------------------------------------------------------------------------

The free-energy density of the quark-gluon plasma is given by $$F=u-Ts-n\mu -eBM,\label{F_sfa}$$ where $u$ is the total energy density, and the magnetization per unit volume is given by $$M=-\frac{\partial F}{\partial (eB)}. \label{magnetization}$$ The pressure becomes anisotropic [@Karmakar:2019tdp; @PerezMartinez:2007kw] due to the magnetization acquired by the system in the presence of a strong magnetic field, which results in two different pressures, parallel and perpendicular to the magnetic field direction. The longitudinal pressure is given as $$P_z=-F =-(F_q^r+F_g^r),$$
and the transverse pressure is given as $$P_\perp=-F-eB M.$$ In the presence of the strong magnetic field one thus gets two different second-order QNS, along the longitudinal ($\chi_z$) and transverse ($\chi_{\perp}$) directions. The longitudinal second-order QNS can be obtained as $$\chi_z= \left.\frac{\partial^2 P_z}{\partial \mu^2}\right|_{\mu=0},$$ whereas the transverse one can be obtained as $$\chi_\perp= \left.\frac{\partial^2 P_\perp}{\partial \mu^2}\right|_{\mu=0}.$$ The pressure of a non-interacting quark-gluon gas in the presence of a strong magnetic field is given as P\_[sf]{}= \_f N\_c N\_f q\_fB (1+12\^2)+(N\_c\^2-1). The second-order diagonal QNS for the ideal quark-gluon plasma is given as \_[sf]{}=\_f N\_c N\_f . ![Variation of the longitudinal part of the second-order QNS, scaled with its free field value, with temperature (left panel) and magnetic field strength (right panel) in the presence of a strong magnetic field, for $N_f=3$.[]{data-label="QNS_sfa_long_T"}](chi2_sfa_long.pdf "fig:") ![Variation of the longitudinal part of the second-order QNS, scaled with its free field value, with temperature (left panel) and magnetic field strength (right panel) in the presence of a strong magnetic field, for $N_f=3$.[]{data-label="QNS_sfa_long_T"}](chi2_sfa_long_eB.pdf "fig:") In the left panel of Fig. \[QNS\_sfa\_long\_T\] the variation of the longitudinal second-order QNS with temperature is displayed for two values of the magnetic field strength. For a given magnetic field strength the longitudinal second-order QNS is found to increase with temperature and to approach the free field value at high temperature. On the other hand, for a given temperature the longitudinal second-order QNS decreases with increasing magnetic field strength, as shown in the right panel of Fig. \[QNS\_sfa\_long\_T\] for two different temperatures.
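The chain from pressure to susceptibility defined in this subsection, a second $\mu$-derivative evaluated at $\mu=0$, can be checked numerically for the non-interacting gas. The sketch below assumes the standard free LLL quark pressure $\sum_f N_c\, q_fB\, T^2(1+12\hat\mu^2)/12$ plus the free gluon pressure $(N_c^2-1)\pi^2T^4/45$; these prefactors are an assumption of this sketch, as is the flavor content.

```python
import math

Nc = 3
charges = [2/3, 1/3, 1/3]        # |q_f| for u, d, s in units of e (assumed)

def pressure_sf(T, mu, eB):
    """Ideal LLL quark pressure plus free gluon pressure (assumed free-field forms)."""
    quark = sum(Nc * q * eB * T**2 / 12.0
                * (1.0 + 12.0 * (mu / (2.0 * math.pi * T))**2) for q in charges)
    gluon = (Nc**2 - 1) * math.pi**2 * T**4 / 45.0
    return quark + gluon

def chi_fd(T, eB, h=1e-3):
    """Second-order QNS via a central finite difference: chi = d^2P/dmu^2 at mu = 0."""
    return (pressure_sf(T, h, eB) - 2.0 * pressure_sf(T, 0.0, eB)
            + pressure_sf(T, -h, eB)) / h**2

T, eB = 0.3, 15.0 * 0.140**2     # GeV units; eB of order 15 m_pi^2 (illustrative)
chi_analytic = sum(Nc * q * eB / (2.0 * math.pi**2) for q in charges)
```

Since the $\mu$-dependence of this free pressure is exactly quadratic, the central difference reproduces the analytic $\sum_f N_c q_fB/(2\pi^2)$ to machine precision; for the interacting $F_q^r+F_g^r$ one would differentiate numerically in the same way.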
![Variation of the transverse part of the second-order QNS, scaled with its free field value, with temperature (left panel) and magnetic field strength (right panel) in the presence of a strong magnetic field, for $N_f=3$.[]{data-label="QNS_sfa_trans_T"}](chi2_sfa_trans.pdf "fig:") ![Variation of the transverse part of the second-order QNS, scaled with its free field value, with temperature (left panel) and magnetic field strength (right panel) in the presence of a strong magnetic field, for $N_f=3$.[]{data-label="QNS_sfa_trans_T"}](chi2_sfa_trans_eB.pdf "fig:") In the left panel of Fig. \[QNS\_sfa\_trans\_T\] the variation of the transverse second-order QNS with temperature is displayed for two values of the magnetic field strength. It is found that the transverse second-order QNS decreases with temperature. This is an indication that the system may shrink in the transverse direction. For a given temperature the transverse second-order QNS is found to increase with increasing magnetic field strength, as shown in the right panel of Fig. \[QNS\_sfa\_trans\_T\] for two different temperatures. This behaviour is contrary to that of the longitudinal one.

Weak magnetic field {#wfa}
===================

In this section we consider the magnetic field strength to be the lowest among the scales $T$ and $m_{th}$, i.e., $\sqrt{q_fB} < m_{th}\sim gT <T$. The HTL one-loop free energy for the deconfined QCD matter has been calculated up to $\mathcal O[g^4]$ in Ref. [@Bandyopadhyay:2017cle]. The total renormalized free-energy in the presence of a weak magnetic field is the sum of the renormalized quark and gluon free-energies and can be written [@Bandyopadhyay:2017cle] as F=F\_q\^r + F\_g\^r, where the renormalized quark free-energy is F\_q\^r &=& N\_c N\_f. \[Eq:Fqr\] $M_{B,f}$ is the thermomagnetic mass for quark flavor $f$ in the presence of a weak magnetic field, and $M_B$ represents the flavor-summed thermomagnetic quark mass, M\_B\^2=\_f M\_[B,f]{}\^2 &=& \_f . \[mgmass\] $\aleph(z)$ in Eq. 
is abbreviated as $$\aleph(z) \equiv \Psi(z)+\Psi(z^{*}),$$ where $\Psi(z)$ is the digamma function, $$\Psi(z) \equiv \frac{\Gamma'(z)}{\Gamma(z)},$$ and $z=1/2 -i\hat{\mu}$. At small chemical potential, $\aleph(z)$ can be expanded as $$\aleph(z)=-2\gamma_E-4\ln 2+14\zeta(3)\hat\mu^2-62\zeta(5)\hat\mu^4+254\zeta(7)\hat\mu^6+{\cal O}(\hat\mu^8).\label{aleph}$$ In addition to the renormalized quark free-energy in Eq. , the renormalized gluon free-energy is given as &=&-\ &-&\^2T\^4\_D\^2\_D\^2(\_E+)+\_f\ &-& \_f, \[Eq:Fgr\] where $\hat m_D^w=m_D^w/2\pi T$, $\hat m_D=m_D/2\pi T$, $ \delta \hat m_D=\delta m_D/2\pi T$ and $m_D^w$ represents the Debye mass in the weak magnetic field approximation, obtained as $m_D^w$\^2 &&\ &+&\_f \_[l=1]{}\^ (-1)\^[l+1]{}l\^2 (2l) K\_0() + \[(q\_fB)\^4\]\ &=& m\_D\^2 + m\_D\^2. \[md\_wfa\] Using the expression of the free energy, vis-à-vis the pressure, we calculate the second-order QNS in the weak field limit from Eq. . The second-order QNS of free quarks and gluons in a thermal medium is given as $$\chi_f=N_c N_f \frac{T^2}{3}.$$ ![Variation of the second-order QNS scaled with the thermal free field value, with temperature (left panel) and magnetic field strength (right panel), for $m_f=5$ MeV and $N_f=3$.[]{data-label="QNS_wfa"}](chi2_wfa_Nf3.pdf "fig:") ![Variation of the second-order QNS scaled with the thermal free field value, with temperature (left panel) and magnetic field strength (right panel), for $m_f=5$ MeV and $N_f=3$.[]{data-label="QNS_wfa"}](chi2_wfa_Nf3_eB.pdf "fig:") The left panel of Fig. \[QNS\_wfa\] shows the variation of the scaled second-order QNS with the temperature at different values of the magnetic field strength. Since the weak field effect appears only as a correction to the thermal medium, the weak-field second-order QNS is not very different from that of the thermal medium. It is found to increase with temperature and to approach the free field value at high enough temperature. The magnetic field effect on the second-order QNS is visible at low temperature.
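The expansion in Eq. \[aleph\] can be checked numerically. The sketch below uses a hand-rolled complex digamma (recurrence plus the standard asymptotic series; this helper is our own and not part of the calculation above) and compares $\aleph(z)=\Psi(z)+\Psi(z^{*})$ at $z=1/2-i\hat\mu$ against the series.

```python
import cmath, math

def digamma(z):
    """Complex digamma via the recurrence psi(z+1) = psi(z) + 1/z and the
    asymptotic series psi(z) ~ ln z - 1/(2z) - sum_n B_{2n}/(2n z^{2n})."""
    z = complex(z)
    s = 0j
    while abs(z) < 10.0:          # shift the argument into the asymptotic region
        s -= 1.0 / z
        z += 1.0
    coeffs = [1/12, -1/120, 1/252, -1/240, 1/132]   # B_{2n}/(2n), n = 1..5
    r = cmath.log(z) - 0.5 / z
    zp = z * z
    for c in coeffs:
        r -= c / zp
        zp *= z * z
    return r + s

def aleph(muhat):
    # Psi(z*) = Psi(z)*, so aleph(z) is twice the real part of Psi(1/2 - i muhat).
    return 2.0 * digamma(0.5 - 1j * muhat).real

gammaE = 0.5772156649015329
zeta3, zeta5, zeta7 = 1.2020569031595943, 1.0369277551433699, 1.0083492773819228

def aleph_series(muhat):
    return (-2.0 * gammaE - 4.0 * math.log(2.0) + 14.0 * zeta3 * muhat**2
            - 62.0 * zeta5 * muhat**4 + 254.0 * zeta7 * muhat**6)
```

At $\hat\mu=0.05$ the two expressions agree to better than $10^{-6}$, which is a useful sanity check on the signs and coefficients in Eq. \[aleph\].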
The value of the second-order QNS slowly decreases as one increases the magnetic field strength, as shown in the right panel of Fig. \[QNS\_wfa\].

Conclusion
==========

We consider hot and dense deconfined QCD matter in the presence of a background magnetic field, both strong and weak, within the HTL approximation. The quarks are directly affected by the magnetic field, whereas the gluons are affected via the quark loop in the gluon self-energy. In the strong field approximation we assume the quarks to be in the lowest Landau level. We compute the one-loop HTL pressure at finite temperature and chemical potential in the lowest Landau level within the strong field approximation. Various divergent terms are eliminated by choosing appropriate counterterms in the ${\overline {\mbox{MS}}}$ renormalization scheme. The presence of magnetization causes the system to be anisotropic, and one obtains two different pressures in the directions parallel and perpendicular to the magnetic field. Both the longitudinal and transverse pressures are computed analytically by calculating the magnetization of the system. We then compute both the longitudinal and transverse second-order QNS in the strong field approximation. For a given magnetic field strength, the longitudinal second-order QNS increases with temperature and approaches the non-interacting value at high enough temperature. For a given temperature, the longitudinal second-order QNS is found to decrease with increasing magnetic field strength. In contrast, the transverse second-order QNS is found to decrease with temperature and to increase with increasing magnetic field. Further, in the weak field approximation we take the one-loop HTL pressure of hot and dense QCD matter from Ref. [@Bandyopadhyay:2017cle] and compute the second-order QNS. The thermomagnetic correction is found to be marginal and varies slowly with the magnetic field. Our calculation can be compared with future lattice QCD calculations.
Acknowledgement
===============

BK and MGM were funded by the Department of Atomic Energy (DAE), India, via the project TPAES. NH was funded by DAE, India.

Calculation of the quark self-energy form factors {#quark_ff}
=================================================

Calculation of the form factors $a$ and $d$
-------------------------------------------

We calculate the form factor $a$ from Eq.  as a=\[ u \]&=&- 2g\^2 C\_F \_[{K}]{} e\^[-]{}\ &=& -2g\^2 C\_F e\^[-]{}\ &=& -2g\^2 C\_F \_[-]{}\^\ &=&- \_[-]{}\^ dk\_3 ,\[a\] where T\_2&=& \_[{k\_0}]{} ,\ T\_4&=& \_[{k\_0}]{} = -. Here we also note that in the LLL, $p_{\perp}=0$. Now $T_2$ can be calculated as T\_2&=& -\ && -\ &=& -. \_[-]{}\^ dk\_3 T\_2 &=&\ &&-(+ )- (+)\ && -\_0\^ (+ )\ &&-\_0\^ ( + )\ &=&-\_0\^ ( + )\ &&-\_0\^ ( + )\ &=& +\^2 ’\[-2\]\ &&+\^3 \^4 ’\[-4\]-( \^2 \^2 -\^4 \^4 ’\[-4\] )+\[\]\ &=& 2+ ’\[-2\]+ ’\[-4\]-( - ’\[-4\] ). Similarly we get, \_[-]{}\^ dk\_3 T\_4 &=&- \_[-]{}\^ T\_2\ &=&-\ &=&- -( ’\[-2\]+’\[-4\]). T\_2&=& \_[{k\_0}]{} ,\ &=&-\ We note that \_[-]{}\^ n\_F\^(k\_3)&=&\_[0]{}\^ (n\_F\^++n\_F\^-),\ \_[-]{}\^ n\_F\^(k\_3)&=&\_[0]{}\^ ( n\_F\^+- n\_F\^-). Hence, \_[-]{}\^ dk\_3 T\_2&&- \_0\^\ &=&\_0\^( 2n\_B(k\_3)+n\_F(k\_3+)+n\_F(k\_3-))\ &=& 2-( (3)- ’(-4) )+O\[\^5\]. and \_[-]{}\^ dk\_3 q\_fB T\_4&=& -\_[-]{}\^ T\_2\ &=& \_[-]{}\^\ && \_[0]{}\^ ( n\_F(k\_3+)-n\_F(k\_3-))\ &=&-( ’(-2)+’(-4))+O\[\^5\]. So the form factor $a=-d$ up to $\mathcal O[\mu^4]$ can be written as a=-d&=&- . \[ad\_final\]

Calculation of the quark form factors $b$ and $c$
-------------------------------------------------

Similarly one can calculate $b$ from Eq.  as b=-\[ n \]&=& 2g\^2 C\_F \_[{K}]{} e\^[-]{}\ &=& 2g\^2 C\_F e\^[-]{}k\_3\ &=& 2g\^2 C\_F \_[-]{}\^ k\_3\ &=& \_[-]{}\^ dk\_3 k\_3 , \[b\] where T\_1&=& \_[{k\_0}]{} ,\[T1\_def\]\ T\_3&=& \_[{k\_0}]{} =- . After doing the Matsubara sum, Eq.  becomes T\_1&=&\ &&\ &=& . \_[-]{}\^ dk\_3 k\_3 T\_1 &=& I\_1 I\_1&=& \_[-]{}\^ dk\_3 .
The fermion part of $I_1$ can be written as &&-\_[-]{}\^ dk\_3\ && -\_[0]{}\^ dk\_3\ &=& -\_[0]{}\^ dk\_3\ &=& +\[\]\[fermion\_I1\]. The bosonic part of $I_1$ is given as \_[-]{}\^ dk\_3(n\_B(k\_3)-p\_3 )(-)\ &&\ &=&\ &=& (-- +)-+\[\] . \[bosonic\_I1\] After combining and , $I_1$ can be written as I\_1&=&-, \_[-]{}\^ dk\_3 k\_3 T\_3 &=&- dk\_3T\_1\ &&-\ &=&--( 7\^2 ’\[-2\]+\^3 \^4 ’\[-4\]). T\_1&=& ,\ Hence, \_[-]{}\^ dk\_3 k\_3 T\_1&& \_0\^\ &=&\_0\^( 2n\_B(k\_3)+n\_F(k\_3+)+n\_F(k\_3-))\ &=&+O\[\^6\]. and \_[-]{}\^ dk\_3 k\_3q\_fB T\_3&=& -\_[-]{}\^ dk\_3T\_1\ &=& -\_[-]{}\^\ && \_[0]{}\^ ( n\_F(k\_3+)-n\_F(k\_3-))\ &=&-( ’(-2)+ ’(-4))+O\[\^5\]. The form factor $b=-c$ is obtained up to $\mathcal O[\mu^4]$ as b=-c&=&. \[bc\_final\] Eqs.  and  can also be rewritten in compact form as a=-d &=&c\_1 ,\ b=-c &=&-c\_1, \[ab\_final\] with c\_1&=&-\ c\_2&=& (2 - + ’\[-4\])\ c\_3 &=& - q\_fB( ’\[-2\] + ’\[-4\]). \[c1c2c3\]

One-loop sum-integrals for the quark free-energy {#quark_free_energy}
================================================

Eq.  can be rewritten as F’\_q&=& -4 d\_F \_f \_[{p\_0}]{}dp\_3 \[Fq’\_2\] The various sum-integrals in Eq. \[Fq’\_2\] can be written using Eq.  as \_[{p\_0}]{}  &=& c\_1 \_[{p\_0}]{} ,\ \_[{p\_0}]{}  &=&-c\_1 \_[{p\_0}]{} ,\ \_[{p\_0}]{} &=&c\_1\^2 \_[{p\_0}]{} ,\ \_[{p\_0}]{} &=&c\_1\^2\_[{p\_0}]{},\ \_[{p\_0}]{}  &=&c\_1\^2\_[{p\_0}]{} ,\ \_[{p\_0}]{}  &=&c\_1\^2\_[{p\_0}]{},\ \_[{p\_0}]{}  &=&-c\_1\^2\_[{p\_0}]{},\ which leads to F’\_q&=& -4 d\_F \_f \_[{p\_0}]{}\ &=&-4 d\_F \_f \_[{p\_0}]{} \[quark\_fe\] One can calculate \_[{p\_0}]{} &=& -(1-2 n\_F(p\_3))\[P2\_sum\_F\], \_[{p\_0}]{} &=& ( \_[{p\_0}]{} ) = . Now we perform the sum-integrals in Eq. 
as \_[{p\_0}]{} &=&$\frac{e^{\gamma_E}\Lambda^2}{4\pi }$\^\_[-]{}\^ d\^[1-2]{}p\_3\ &&$\frac{\Lambda}{4\pi T}$\^[2]{},\ \_[{p\_0}]{} &=& $\frac{e^{\gamma_E}\Lambda^2}{4\pi }$\^\_[-]{}\^ d\^[1-2]{}p\_3( -1)\ && 2 $\frac{e^{\gamma_E}\Lambda^2}{4\pi }$\^\_[0]{}\^ d\^[1-2]{}p\_3( -1)\ && $\frac{\Lambda}{4\pi T}$\^[2]{}$$-\frac{7}{2} \frac{\zeta^{'} (-2)}{T^2}+O(\epsilon)$$ ,\ \_[{p\_0}]{} &=& $\frac{e^{\gamma_E}\Lambda^2}{4\pi }$\^\_[-]{}\^ d\^[1-2]{}p\_3 $\beta^2\frac{\partial^2}{\partial \beta^2}-3\beta \frac{\partial}{\partial \beta}+3$\ && $\frac{\Lambda}{4\pi T}$\^[2]{},\ \_[{p\_0}]{} &=& $\frac{e^{\gamma_E}\Lambda^2}{4\pi }$\^\_[-]{}\^ d\^[1-2]{}p\_3 $\beta^3\frac{\partial^3}{\partial \beta^3}-6\beta^2\frac{\partial^2}{\partial \beta^2}+15\beta\frac{\partial}{\partial \beta}-15$\ && $\frac{\Lambda}{4\pi T}$\^[2]{}. Using the above sum-integrals in Eq. (\[quark\_fe\]) $F'_q$ up to $\mathcal{O}(g^4)$ becomes, F’\_q &=& -4 d\_F \_f \_[{p\_0}]{}\ &=&4 d\_F \_f $\frac{\Lambda}{4\pi T}$\^[2]{}\ &=&4 d\_F \_f $\frac{\Lambda}{4\pi T}$\^[2]{}+O\[\^5\].
--- abstract: 'We calculate the most massive object in the Universe, finding it to be a cluster of galaxies with total mass $M_{200}=3.8\times10^{15}\,M_{\odot}$ at $z=0.22$, with the $1\sigma$ marginalized regions being $3.3\times10^{15}\,M_{\odot}<M_{200}<4.4\times10^{15}\,M_{\odot}$ and $0.12<z<0.36$. We restrict ourselves to self-gravitating bound objects, and base our results on halo mass functions derived from N-body simulations. Since we consider the very highest mass objects, the number of candidates is expected to be small, and therefore each candidate can be extensively observed and characterized. If objects are found with excessively large masses, or insufficient objects are found near the maximum expected mass, this would be a strong indication of the failure of $\Lambda$CDM. The expected range of the highest masses is very sensitive to redshift, providing an additional evolutionary probe of $\Lambda$CDM. We find that the three most massive clusters in the recent SPT $178\,\mbox{deg}^2$ catalog match predictions, while XMMU J2235.3–2557 is roughly $3\sigma$ inconsistent with $\Lambda$CDM. We discuss Abell 2163 and Abell 370 as candidates for the most massive cluster in the Universe, although uncertainties in their masses preclude definitive comparisons with theory. Our findings motivate further observations of the highest mass end of the mass function. Future surveys will explore larger volumes, and the most massive object in the Universe may be identified within the next decade. The mass distribution of the largest objects in the Universe is a potentially powerful test of $\Lambda$CDM, probing non-Gaussianity and the behavior of gravity on large scales.' author: - 'Daniel E. Holz$^1$ and Saul Perlmutter$^{2,3}$' bibliography: - 'references.bib' title: The most massive objects in the Universe --- [*Introduction*]{}—Our Universe has a finite observable volume, and therefore within our Universe there is a unique most massive object. 
This object will be a supercluster of galaxies. Theoretical studies of the growth of structure have now matured, and the mass of the most massive objects can be robustly predicted to the level of a few percent. Furthermore, we are in the midst of a revolution in our ability to compile volume-limited samples of high-mass clusters, with Sunyaev-Zel’dovich (SZ) and X-ray surveys able to provide complete samples at mass $>5\times10^{14}\,\msun$ out to $z>1$. The masses of the most massive clusters in the Universe are therefore a robust prediction of $\Lambda$CDM models, as well as a direct observable of our Universe. The cluster mass function is already being utilized as a probe of cosmology, and in particular, of the dark energy equation-of-state [@2001ApJ...560L.111H; @2001ApJ...553..545H; @2002PhRvL..88w1301W; @2003ApJ...585..603M; @2006PhRvD..74b3512K; @2006astro.ph..9591A; @2008MNRAS.385.2025C]. What additional value is there in singling out the very tail end of the mass function, representing the most massive clusters in the Universe, for special treatment? First, we note that these systems are in many ways the easiest to find, as they are among the largest and brightest objects. They thus avoid many selection effects which might plague lower mass cuts. In addition, these systems constitute a very small sample (ideally, just one compelling candidate), and it is possible to devote significant observational resources to studying them. One might imagine coupled SZ, X-ray, and weak lensing measurements, and thus the masses of these systems will be among the best constrained of any systems.
The mass-observable relation for clusters is an essential component in using the cluster mass function to measure properties of the dark energy, and therefore there is a tremendous amount of ongoing work to characterize the masses of these objects [@2003ApJ...585..603M; @2005PhRvD..72d3006L; @2005ApJ...623L..63M; @2006ApJ...650..538N; @2006ApJ...650..128K; @2008ApJ...672...19R; @2009arXiv0910.3668W]. Finally, because we are probing far down the exponential tail of the mass function, these objects offer an unusually powerful constraint. If the most massive object is found to have too large a mass (or especially, as explained below, too small a mass), this [*single object*]{} will provide a strong indication of non-Gaussianity or modified gravity [@1998ApJ...494..479C]. An excellent example of this is the high-redshift cluster XMMU J2235.3–2557 (hereafter XMM2235) [@2005ApJ...623L..85M], which has been argued to be a few sigma inconsistent with $\Lambda$CDM [@2009ApJ...704..672J; @2009PhRvD..80l7302J; @2010arXiv1003.0841S]. A similar approach based on strong lensing has been presented in [@2009MNRAS.392..930O], which considers the distribution of the largest Einstein radii in the Universe as a probe of $\Lambda$CDM. Although much work has focused on using halo statistics as a probe of cosmology, here we focus on using the high-mass tails of precision mass functions to make explicit predictions for current and future observations. A critical question in one’s attempt to determine the most massive object is to define precisely what is meant by “object”. The largest structure in the Universe detected to date is the Sloan Great Wall [@2005ApJ...624..463G], but the identification of this wall as a unique object is sensitive to a (completely arbitrary) density threshold. For our purposes we define an object as a gravitationally self-bound, virialized mass aggregation. These objects have decoupled from the Hubble flow, and represent large local matter overdensities. 
This definition has the convenience of robustly identifying objects (both in theory and observation). [*Mass function*]{}—Recent years have shown tremendous progress in characterizing the mass function of dark matter halos in cosmological N-body simulations. We have now established, to better than 5%, the expected number density of dark matter halos as a function of mass and redshift [@2006ApJ...646..881W; @2007MNRAS.374....2R; @2008ApJ...688..709T]. In the simulations underlying these precise mass function expressions, the halos at the high-mass end are resolved by millions of particles, lending particular confidence and robustness to the mass function in this regime. The simulations are pure dark matter, and neglect the influence of baryons. At smaller scales baryons could play a major role in the density profile of the dark matter halos, and could potentially impact the mass function of the objects themselves. At the large scales being considered in this paper, the effects of baryons are expected to be negligible. This is particularly true as our interest is in the mass function, and hence the number density of these halos, not their density profiles. An important issue is the process by which a dark matter halo is identified and characterized in a dark matter simulation . There are two dominant approaches: friends-of-friends (FOF) and spherical overdensity (SO). FOF defines a halo by contours of constant density, while SO defines halos by the overdensity (compared to the mean or critical density) within a spherical region. It has been argued that the mass associated with SO can be most closely tied to observations of clusters [@2008ApJ...688..709T]. On the other hand, using an FOF with a linking length of 0.2 corresponds closely to contours of density 200 times the background density, which from spherical collapse models is a natural proxy for the virial mass. 
Because of the steep exponential in the mass function, our results are essentially independent of these differences (see Fig. \[fig:fig3\]). The halo mass function depends sensitively on cosmological parameters, including $\Omega_m$, $\Omega_\Lambda$, and the equation-of-state of the dark energy. For our purposes, one of the most important cosmological parameters is the amplitude of the initial density fluctuations, characterized by $\sigma_8$, the RMS variance of the linear density field, smoothed on scales of $8\,\mbox{Mpc}$. Uncertainty in this quantity translates directly into uncertainty in the amplitude of the mass function. We utilize the latest value from [*WMAP*]{}, which provides a $\sim4\%$ measurement of $\sigma_8$ [@2010arXiv1001.4538K]. For reference, a 5% error on $\sigma_8$ shifts the contours in Figure \[fig:fig2\] by less than $1\sigma$ in mass for a full-sky survey, and considerably less for smaller surveys. Since the value of $\sigma_8$ is a major source of uncertainty in the use of the cluster mass function to constrain cosmology, there is great interest in improving its measurement. In addition, the mass function also depends implicitly on the Hubble constant, $h$, which can be seen by expressing it in units of $\mbox{\# of halos}/(\mbox{Mpc}/h)^3$ (observations naturally measure volume in these units). For simplicity we have explicitly put in the [*WMAP*]{}7 value ($h=0.710$), but it is straightforward to re-express all of our results explicitly in terms of $h$ (see Eqs. \[eq:fit1\] and \[eq:fit2\], and the text immediately beneath). ![\[fig:fig2\] Contour plot of the most massive object in the Universe. Three sets of contours are provided, for three different surveys: full sky, $178\,\mbox{deg}^2$ (corresponding to SPT), and $11\,\mbox{deg}^2$ (corresponding to XMM2235). The shaded contours represent the $1\sigma$ and $2\sigma$ (and for the $11\,\mbox{deg}^2$ case, $3\sigma$) regions of the most massive halo in a $\Lambda$CDM Universe. 
The solid line contours are for the 2nd most massive halo, while the dashed line contours are for the 3rd most massive halo. The (blue) plus signs are Abell 2163 (double point) and Abell 370, the three (green) diamonds are the three most massive clusters in the SPT $178\,\mbox{deg}^2$ survey, and the (red) square is XMM2235. Note that the mass values for Abell 2163 span the predicted region, while Abell 370 is slightly high. The SPT masses fit within their respective contours, while XMM2235 is well outside its $2\sigma$ contour. All masses are $M_{200}$: spherical overdensity halos with $\Delta=200$ (measured with respect to $\rho_{\rm matter}$). For data measured using different overdensities, we have converted to the $M_{200}$ value which gives the equivalent probability.](./fig2.eps){width="0.98\columnwidth"} The mass function predicts the number density of massive dark matter halos in the Universe. For the purposes of this paper we are also interested in the scatter in this relation. At the high-mass end of the mass function, where the number density satisfies roughly one per volume of interest, we assume that the distribution of halos is given by Poisson statistics. This is valid as the largest objects are spatially independent on these scales ($>\mbox{Gpc}$), and are dominated by shot noise [@2003ApJ...584..702H; @2006PhRvD..73f7301H]. We use the mass function presented in Tinker et al. [@2008ApJ...688..709T], which gives the expected number density of dark matter halos, $dn/dM$, with distances measured in units of $\mbox{Mpc}/h$, where $h$ is the Hubble constant and volume is measured in comoving $\mbox{Mpc}^3$. This mass function describes the abundance of spherical-overdensity dark matter halos, and is accurate to $\lesssim5\%$ over the redshift range of interest ($0<z<2$), and for overdensity values (compared to the mean matter density at $z$) in the range $200<\Delta<2300$.
This mass function has been calibrated for $M_{200}\lesssim4\times10^{15}\,\msun$, and therefore the extreme high-end of our calculations relies on extrapolation. In what follows we assume the [*WMAP*]{}7 cosmological parameters, namely, $h=0.710$, $\Omega_m=0.264$, $\Omega_\Lambda=0.734$, and $\sigma_8=0.801$ [@2010arXiv1001.4538K]. [*The most massive object (Theory)*]{}—We are interested in determining the mass of the most massive object in our Universe. We calculate the expected distribution of masses at the high mass end, assuming Poisson statistics; the results are shown in Figure \[fig:fig2\]. The most massive object in the Universe is expected to be found at $z=0.22$, with a mass $M_{200}=3.8\times10^{15}\,M_{\odot}$. The marginalized $1\sigma$ range in mass is $3.3\times10^{15}<M_{200}<4.4\times10^{15}$, while in redshift it is $0.12<z<0.36$. If the most massive object in the Universe falls outside the range $2\times10^{15}\,\msun<M_{200}<10^{16}\,\msun$, we can conclude with high confidence that either the initial conditions are non-Gaussian, or the growth of structure deviates from the predictions of general relativity. Figure \[fig:fig2\] includes contours of the 2nd and 3rd most massive halos in the Universe. Going from the most massive to the 2nd most massive results in a noticeable shift, demonstrating the power of just a few halos to constrain cosmology. As we go further down (e.g., from the 2nd to the 3rd most massive), the contours rapidly converge due to the exponential steepening in expected number at lower mass. Note that the most massive halo occurs at low redshift. Furthermore, the contours are not centered on the most likely point; there is much larger scatter to high mass, with a sharp lower mass limit, due to the exponential steepening. Note that these likelihoods are not independent, since if the most massive object has an unusually low mass, it is assured that the subsequent few most massive objects will also be unusually low. 
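The Poisson logic underlying these contours can be made concrete: if the number of halos above mass $M$ is Poisson-distributed with mean $\left<{N}\right>(>M)$, then $P(M_{\rm max}\le M)=e^{-\left<{N}\right>(>M)}$. The toy Monte Carlo below (the exponential tail and all numbers are purely illustrative, not the calibrated mass function) verifies this at the mass where $\left<{N}\right>=1$.

```python
import math, random

M0, s = 3.8, 0.5          # toy tail: <N>(>M) = exp(-(M - M0)/s); masses in 1e15 Msun

def nbar(M):
    return math.exp(-(M - M0) / s)

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for modest means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_max(rng):
    """Maximum mass among all halos above a cutoff, drawn from the toy Poisson process."""
    Mlow = M0 - 3.0 * s                       # cutoff well below the region of interest
    n = poisson(nbar(Mlow), rng)
    if n == 0:
        return Mlow
    # Above Mlow the masses are iid with P(M > m) = nbar(m)/nbar(Mlow) = exp(-(m - Mlow)/s)
    return max(Mlow - s * math.log(rng.random()) for _ in range(n))

rng = random.Random(1)
draws = [sample_max(rng) for _ in range(20000)]
frac_below = sum(m <= M0 for m in draws) / len(draws)
# Prediction: P(M_max <= M0) = exp(-nbar(M0)) = exp(-1) ~ 0.368
```

The same relation, with the calibrated $\left<{N}\right>(>M,>z)$ in place of the toy tail, is what turns the expected-number contours of Fig. \[fig:fig3\] into the confidence regions of Fig. \[fig:fig2\].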
We have performed Monte-Carlo studies which show that the correlations are weak, however, and the distribution of separations is well approximated by assuming the likelihoods are drawn independently. Figure \[fig:fig2\] also shows the contours for the 1st and 2nd most massive objects from the recent SPT $178\,\mbox{deg}^2$ survey [@2010arXiv1003.0003V], as well as the contours for the archival [*XMM-Newton*]{} survey which discovered XMM2235. Figure \[fig:fig3\] shows contours of the expected number of halos greater than a given mass, and found beyond a minimum redshift: $\left<{N}\right>(>M_{200},>z)$. The contours are roughly linear in the redshift range $0.2\lesssim z\lesssim2$, and are well approximated (to better than 5%) by the family of lines: $\log_{10}(M({\cal N},z))=a({\cal N})+b({\cal N})z$ with $$\begin{aligned} \label{eq:fit1} a({\cal N})&=&15.72-0.136{\cal N}-0.014{\cal N}^2-0.0012{\cal N}^3\\ b({\cal N})&=&-0.5375+0.00581{\cal N}+0.0024{\cal N}^2+0.00027{\cal N}^3, \nonumber \nonumber\end{aligned}$$ where ${\cal N}=\log_{10}\left<{N}\right>$. For the redshift range $z<0.2$, the results are well represented by the values at $z=0$, which are given (to better than $2\%$) by: $$\log_{10}(M({\cal N}))=15.6-0.142{\cal N}-0.014{\cal N}^2. \label{eq:fit2}$$ These expressions can be utilized to calculate the expected number of objects above a given minimum mass and redshift in the mass range $10^{14}\,\msun<M_{200}<10^{16}\,\msun$ and redshift range $0<z<2$, for any survey size. For a volume-limited sample, we are interested in $\left<{N}\right>(>M_{200},<z)$. These contours start at 0 at $z=0$ (since there is no volume), and rapidly rise to their maximum values, flattening by $z\sim0.2$ at the values given by Eq. \[eq:fit2\]. Note that Eqs. \[eq:fit1\] and \[eq:fit2\] assume the [*WMAP*]{}7 value of the Hubble constant, $h=0.710$. 
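At low redshift, Eq. \[eq:fit2\] can be inverted in closed form to give the expected all-sky count above a given mass. A minimal sketch (valid only in the quoted mass range and for the $z\lesssim0.2$ plateau, ignoring the $z$-dependent fit of Eq. \[eq:fit1\]):

```python
import math

def expected_count(M200, sky_fraction=1.0):
    """Expected number of halos with mass > M200 (in Msun) at z < 0.2, from
    log10 M = 15.6 - 0.142 N - 0.014 N^2 with N = log10<N>  (Eq. fit2)."""
    c = math.log10(M200) - 15.6
    # Solve 0.014 N^2 + 0.142 N + c = 0; take the root that is continuous at c = 0.
    N = (-0.142 + math.sqrt(0.142**2 - 4.0 * 0.014 * c)) / (2.0 * 0.014)
    return sky_fraction * 10.0**N

def prob_at_least_one(M200, sky_fraction=1.0):
    """Poisson probability that a survey contains at least one halo above M200."""
    return 1.0 - math.exp(-expected_count(M200, sky_fraction))
```

By construction $\left<{N}\right>=1$ at $M_{200}=10^{15.6}\,\msun\simeq4\times10^{15}\,\msun$, the scale of the most massive halo quoted above; for a survey covering a fraction $f$ of the sky the counts simply scale by $f$.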
To explicitly put in the $h$ dependence, $M_{200}$ and $\left<N\right>$ can be rescaled by $(h/0.71)$ and $(0.71/h)^3$, respectively. ![\[fig:fig3\] Expected number of halos at redshift $\ge z_{\rm min}$ with mass $\ge M_{200,\rm min}$, for a full sky survey. Each contour line represents a value of $\log_{10}\left<{N}\right>$. For a survey with fraction, $f$, of the full sky, the expected numbers of halos are diminished by the factor $f$. The dashed (red) line shows the result for $\left<{N}\right>=0.01$ using the fit from [@2006ApJ...652...71W], based on an FOF halo finder with $b=0.2$. It is virtually indistinguishable from the corresponding SO ($\Delta=200$) contour. The dotted (red) line represents the $\left<{N}\right>=1$ contour for a $\Delta=200c$ mass function, with overdensity compared to $\rho_{\rm crit}$, instead of the average matter density, $\rho_{\rm matter}$. Note that this agrees with the fiducial “0” line ($\Delta=200$) at high redshift, as the Universe becomes matter dominated. The data points are the same as in Fig. \[fig:fig2\]. Fitting forms for the curves in this figure are provided in the text.](./fig3.eps){width="\columnwidth"} [*The most massive object (Observations)*]{}—The most massive object in the Universe is likely to have already been detected by [*ROSAT*]{} (potentially even if it is behind the galactic plane [@2007ApJ...662..224K]). Reliably measuring the masses of candidate [*ROSAT*]{} sources remains challenging, however, and therefore the specific identity and mass of the most massive object is unknown at present. Perhaps the most compelling candidate is Abell 2163 at $z=0.203$, which has an X-ray mass measurement of $M_{500\rm c}=3.4\pm0.8\times10^{15}\,\msun$ [@2009arXiv0909.3099M; @2010Mantz_private] (where “500c” indicates $\Delta$ with respect to $\rho_{\rm crit}$ rather than $\rho_{\rm matter}$). 
We expect 0.02 (0.002/0.2) clusters with at least this mass and redshift in the entire Universe, where the numbers in parentheses are the $1\sigma$ lower and upper bounds on $\left<{N}\right>$. An alternative, weak lensing measurement of the mass yields a lower value of $M_{500\rm c}=2.0\pm0.3\times10^{15}\,\msun$ , which has expectation 1.4 (0.5/4) (precisely agreeing with predictions). Furthermore, [@2009ApJ...692.1033V] find an X-ray mass of $M_{500\rm c}=2.3\pm0.07\times10^{15}\,\msun$, which agrees well with the lensing value. Abell 370 is another compelling candidate, with a weak lensing mass of $M_{vir}=2.93^{+0.36}_{-0.32}\times10^{15}\,h^{-1}\msun$ at $z=0.375$ [@2008ApJ...685L...9B; @2010MNRAS.402L..44R], and an expectation of 0.02 (0.005/0.05). These data points are shown in Figures \[fig:fig2\] and \[fig:fig3\], where we have converted the masses to the $M_{200}$ values which give the equivalent probabilities. The figures also show the three most massive clusters from the SPT $178\,\mbox{deg}^2$ survey [@2010arXiv1003.0003V], where we have added the statistical and systematic errors in quadrature. For the most massive cluster ($M_{200}=(8.3\pm1.7)\times10^{14}\,\msun/h$ at $z=0.8$), we would expect 0.14 (0.04/0.5) clusters in the given sky area with a mass and redshift at least as large. For the 2nd most massive cluster ($M_{200}=(8.2\pm1.9)\times10^{14}\,\msun/h$ at $z=0.3$), the expected number goes up to 2 (0.8/6), while for the 3rd most massive ($M_{200}=(6.56\pm1.54)\times10^{14}\,\msun/h$ at $z=0.32$) we expect 5 (2/14). These masses are fully consistent with theory. We also plot XMM2235, with a mass of $M_{200\rm c}=(7.3\pm1.3)\times10^{14}\,\msun$ at $z=1.4$ [@2009ApJ...704..672J]. This cluster was found in an $11\,\mbox{deg}^2$ survey ($f=0.0003$). From Figure \[fig:fig3\] we would expect to find a few thousand objects with at least this mass in the entire Universe ($z>0$), and only 10 such objects at $z\ge1.4$ on the entire sky. 
The expected number of clusters in an $11\,\mbox{deg}^2$ survey, with this minimum mass and redshift, is $1\times10^{-3}$ ($3\times10^{-4}/4\times10^{-3}$). A conservative lower limit of $M_{324}=5\times10^{14}\,\msun$ is quoted in [@2009ApJ...704..672J], which leads to an expectation of $6\times10^{-3}$ in the survey area (see also [@2009PhRvD..80l7302J; @2010arXiv1003.0841S]). From Figure \[fig:fig2\] we see that XMM2235 is a $3\sigma$ outlier. Alternatively, the cluster’s true mass would have to be reduced by $4\sigma$ to achieve $\left<N\right>=1$ (see Figure \[fig:fig3\]). We note that these results are relatively insensitive to errors in the mass determination; 15% errors do not qualitatively alter our conclusions. Current data argues for further exploration of the highest-mass end of the mass function, both at low and high redshift. It would be particularly difficult, theoretically, to account for excessively massive clusters at $z>1$, while having agreement at lower redshift (e.g., non-Gaussianity would not suffice). We expect to have dramatically improved complete high-redshift cluster surveys with which to test $\Lambda$CDM in the near future, including the full SPT survey ($2000\,\mbox{deg}^2$), the Dark Energy Survey ($5000\,\mbox{deg}^2$), [*Planck*]{} (all-sky), and eventually LSST ($20,000\,\mbox{deg}^2$). In particular, [*Planck*]{} is expected to provide a relatively complete, all-sky survey of all massive clusters out to high redshift in the near future [@2003ApJ...597..650W]. If the results from these cluster surveys disagree with the predictions outlined above, the $\Lambda$CDM paradigm for the growth of structure will need to be revisited. We acknowledge valuable discussions with Mark Bautz, Joanne Cohn, Bill Holzapfel, Adam Mantz, Herman Marshall, Elena Pierpaoli, Paul Schechter, Jeremy Tinker, Risa Wechsler, Martin White, and especially Jerry Jungman and Michael Warren.
--- abstract: 'The majority of the highest energy cosmic rays are thought to be electrically charged: protons or nuclei. Charged particles experience angular deflections as they pass through galactic and extra-galactic magnetic fields. As a consequence, correlation of cosmic ray arrival directions with potential sources has proved to be difficult. This situation is not helped by current data samples, where the number of cosmic rays/source is typically $\leq O(1)$. Progress will be made when there are significantly larger data samples, and perhaps with better catalogs of candidate sources. This paper reports a search for correlations between the RXTE catalog of nearby active galactic nuclei, AGNs, and the published list of ultra-high energy cosmic rays from the AGASA experiment. Although no statistically significant correlations were found, two correlations were observed between AGASA events and the most inclusive category of RXTE AGNs.' address: 'University of New Mexico, Department of Physics and Astronomy, Albuquerque, New Mexico, USA' author: - 'J. D. Hague' - 'J.A.J. Matthews' - 'B. R. Becker' - 'M. S. Gold' title: 'Search for Correlations between Nearby AGNs and Ultra-high Energy Cosmic Rays' --- highest energy cosmic rays – AGNs as sources – search for correlations

Introduction
============

Perhaps the primary goal of all experiments studying the highest energy cosmic rays is to find the source of these particles. While circumstantial evidence may favor one type of source over another, demonstration of a clear correlation between the direction of cosmic rays and their sources is arguably essential. Unfortunately for electrically charged cosmic rays, galactic magnetic fields (and, for the highest energy cosmic rays, extra-galactic magnetic fields) cause angular deflections that can blur the correlation between cosmic ray arrival direction and source direction. If the sources as viewed from the earth are extended[@waxman; @cuoco], the problem is even more difficult. 
Unless otherwise noted, for this paper we assume compact (point-like) sources for the highest energy cosmic rays. If the angular blurring from magnetic fields is small[@dolag] ([*i.e.*]{} not significantly greater than the experimental angular resolution) and/or for neutral primaries, then experiments should observe cosmic rays that cluster in arrival direction[@agasa_cluster; @tinyakov], and/or that correlate with potential astronomical ([*e.g.*]{} BL Lac) sources[@tinyakov_BLLac; @gorbunov_BLLac0; @gorbunov_BLLac1; @gorbunov_BLLac2; @hires_BLLac]. For nearby sources, where experiments should detect multiple cosmic rays/source, event clusters provide bounds on the cosmic ray source density[@dubovsky; @blasi; @kachelriess] potentially favoring one type of source for the highest energy cosmic rays over another. However at this time the situation is less than clear as some results[@finley; @hires_cluster] question the significance of the reported clusters and/or some of the BL Lac correlations[@hires_BLLac; @not_BLLacs]. If deflections of charged cosmic rays by extra-galactic magnetic fields are not small[@sigl], then lower energy, $E$, cosmic rays should experience the greatest angular deflections. Unfortunately small experiment data samples and a cosmic ray flux $\propto E^{-3}$ have often caused studies to retain cosmic rays to energies, $E_{thresh}$, well below GZK[@gzk] energies[@agasa_cluster]. Furthermore deflections of the highest energy cosmic rays even by our galactic magnetic field can be substantial[@tinyakov_Bfield; @kachelriess_Bfield; @tanco]. As magnetic deflections scale proportional to the charge of the primary cosmic ray, nuclei in the cosmic rays may have significant deflections. Although most searches have looked for clustering and/or source correlations on small angular scales, studies at larger angular scales have also found evidence for clustering and/or source correlations[@kachelriess_clusters; @smialkowski; @singh]. 
Certainly the angular scale of cosmic ray clusters and the magnitude, and thus relevance, of the deflections of ultra-high energy cosmic rays by magnetic fields are not universally agreed upon at this time. In the future, significantly larger data samples will allow analyses to increase $E_{thresh}$ while retaining the number of observed cosmic rays/source (for nearby sources) $\geq O(1)$. However, another possibility is to exploit catalogs of candidate sources. With a catalog of source directions, cosmic rays can be effectively correlated with sources even when magnetic field deflections are “not small” and/or when the number of observed cosmic rays per source is $<1$, allowing searches with existing data samples. That said, catalog based studies are limited by the completeness of the source catalog and the relevance (or not) of that class of astronomical source to the production of the highest energy cosmic rays. Often conjectured astrophysical sources include gamma ray bursts, GRBs, and/or active galactic nuclei, AGNs[@BGG2002]. This paper reports a search for correlations between a catalog of nearby AGNs[@rxte_catalog] and the published list of ultra-high energy cosmic rays from AGASA[@agasa_cluster]. The components of our analysis are listed in Section \[section:components\]. Issues that relate to data and AGN selection are given in Section \[section:selection\]. The cosmic ray–AGN comparison results are given in Section \[section:comparison\]. Section \[section:summary\] summarizes this study.

Analysis Components {#section:components}
===================

Our comparison of ultra-high energy cosmic rays and a catalog of AGNs includes three components: the RXTE catalog of AGNs, the AGASA list of cosmic rays, and a Monte Carlo sample of uniformly distributed cosmic rays generated to match the experimental acceptance of AGASA. 
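As an illustration of the third component, an isotropic arrival direction with the AGASA-like acceptance specified below (zenith-angle pdf $\propto\cos(\theta)\sin(\theta)$ for $\theta\leq45^{\circ}$, uniform azimuth) can be drawn by inverse-transform sampling, since the cumulative distribution $\sin^2(\theta)/\sin^2(\theta_{max})$ inverts analytically. This sketch is ours, not the collaboration's actual code:

```python
import math
import random

def sample_local_direction(theta_max_deg=45.0, rng=random):
    """Draw (zenith, azimuth) in radians with pdf(theta) ~ cos(theta)sin(theta)
    for theta <= theta_max and uniform azimuth.
    CDF(theta) = sin^2(theta)/sin^2(theta_max), inverted analytically."""
    theta_max = math.radians(theta_max_deg)
    u = rng.random()
    theta = math.asin(math.sqrt(u) * math.sin(theta_max))
    phi = 2.0 * math.pi * rng.random()
    return theta, phi

# A seeded sample; every zenith angle respects the 45 degree cut.
rng = random.Random(0)
sample = [sample_local_direction(rng=rng) for _ in range(10000)]
assert all(t <= math.radians(45.0) for t, _ in sample)
```

In the full analysis each local direction would then be rotated into right ascension and declination, assuming a constant detector aperture with time, as the paper describes.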
The catalog of nearby AGNs[@rxte_catalog] results from the Rossi X-ray Timing Explorer, RXTE, all-sky slew survey[@rxte] sensitive to sources of hard X-rays (3-20keV). The survey excluded the galactic plane ($|b|>10^{\circ}$) but covered $\sim 90$% of the remaining sky. X-ray sources were located to better than $1^{\circ}$ and then correlated with known astronomical objects. The efficiency for AGN identification was estimated to be $\sim 70\%$, with somewhat higher efficiency for northern AGNs ($\sim 87\%$) and somewhat lower efficiency for southern AGNs ($\sim 60\%$)[@rxte_catalog]. The resulting catalog provides source directions, probable source distances and intrinsic X-ray luminosities, $L_{3-20}$. The catalog is best for nearby AGNs, as RXTE signal thresholds significantly reduced the efficiency for detecting distant sources; additional details are given below. The list of ultra-high energy cosmic rays comes from published AGASA data [@agasa_cluster]. The Monte Carlo sample of uniformly distributed cosmic rays was generated according to a $\cos(\theta)\sin(\theta)$ distribution in local zenith angle, $\theta\leq45^{\circ}$, and uniform in local azimuth. Events were then transformed to celestial right ascension and declination assuming constant detector aperture with time. Correlations between the AGASA events and the catalog of AGNs from RXTE would appear as an excess at small angular separations in comparison to the Monte Carlo sample of simulated cosmic rays. To be clear, define unit vectors in the directions of cosmic rays, $\hat{u}_i$, AGNs, $\hat{v}_j$, and Monte Carlo simulated cosmic rays, $\hat{w}_k$. A correlation [*signal*]{} should then appear [*near 1.0*]{} in the distribution of [*dot*]{}-products $\hat{u}_i \cdot \hat{v}_j$ (if magnetic field deflections are modest). The index “$i$” runs over the cosmic rays in the data sample. For each value of “$i$”, only the AGN catalog source (index “$j$”) giving the maximum value of $\hat{u}_i \cdot \hat{v}_j$ contributes to the distribution[^1]. The simulated distribution of [*random background*]{} comes from the analogous distribution of $\hat{w}_k \cdot \hat{v}_j$, where index “$k$” now runs over the sample of Monte Carlo simulated cosmic rays. As with the cosmic ray events, only the AGN catalog source (index “$j$”) giving the maximum value of $\hat{w}_k \cdot \hat{v}_j$ contributes to the distribution.

Cosmic Ray and AGN Selection {#section:selection}
============================

A few choices have been made in the comparison of the AGASA data and the catalog of AGNs from RXTE. These are described here. The AGASA data have energies $E > 40$EeV and populate declinations $-10^{\circ} \leq Dec \leq 80^{\circ}$. As noted above, the steep cosmic-ray spectrum, $\propto E^{-3}$, and the modest number of events (57 with $E>40$EeV and 29, just over half, with $E>53$EeV) led us to consider three (overlapping) bins in energy: $E\geq40$EeV, $E\geq53$EeV and $E\geq100$EeV. The last was to see if there are any correlations with the AGASA super-GZK events. Except for the $E\geq100$EeV selection, most of the cosmic rays are predicted, at least under the assumption of proton primaries[@berezinski], to originate at redshifts $z>0.01$[^2]; see Fig.\[fig:berezinski\]. To match the AGASA acceptance, we selected AGNs with $-10^{\circ} \leq Dec \leq 80^{\circ}$. We have also made selections on the redshift of the AGNs to consider only sources with RXTE source detection efficiency $^>_{\sim}50$%. The estimate of the RXTE source detection efficiency involves two issues:

1. the RXTE instrument source detection threshold ([*i.e.*]{} the selection bias) [*VS*]{} redshift from Fig.1 of Ref.[@rxte_catalog],
2. the number density of AGNs [*VS*]{} redshift and intrinsic X-ray luminosity from Table 2 of Ref.[@barger]. 
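The nearest-AGN [*dot*]{}-product statistic defined in Section \[section:components\] amounts to keeping, for each cosmic ray, only its largest dot-product against the AGN catalog. A minimal sketch (the helper names are ours; the example directions are the quartet cluster and the nearby RXTE AGN quoted later in the text):

```python
import math

def unit_vector(ra_deg, dec_deg):
    """Unit vector on the celestial sphere from right ascension / declination."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def max_dot_products(events, agns):
    """For each event direction u_i, return max_j u_i . v_j over the AGN catalog.
    A correlation signal appears as an excess of values near 1.0."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [max(dot(u, v) for v in agns) for u in events]

# e.g. the AGASA/HiRes quartet cluster direction against the RXTE AGN near it:
events = [unit_vector(169.1, 56.3)]
agns = [unit_vector(179.3, 55.23)]
print(max_dot_products(events, agns))  # single entry near 0.995 (~5.8 deg apart)
```

The same function applied to the Monte Carlo directions $\hat{w}_k$ yields the [*random background*]{} distribution to which the data are compared.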
Motivated by Ref.[@barger], we divide the AGNs into the two categories: [*all*]{}-AGNs and [*broadline*]{}-AGNs. For the [*all*]{}-AGN category we require that the X-ray 3-20keV intrinsic luminosity[^3] satisfy $L_{3-20} \geq 10^{41}$ ergs/s, to match the RXTE data. With this intrinsic luminosity threshold the estimated [*all*]{}-AGN number density is $4.2 \times 10^{-4}$ Mpc$^{-3}$, consistent with the RXTE source density determination of $\sim 5 \times 10^{-4}$ Mpc$^{-3}$[@rxte_catalog]. For the [*broadline*]{}-AGN category we require that the X-ray 2-8keV intrinsic luminosity satisfy $L_{2-8} \geq 10^{42}$ ergs/s, as this selects X-ray sources that are likely to be AGNs based purely on energetic grounds[@steffen]. With this intrinsic luminosity threshold the estimated [*broadline*]{}-AGN number density is $\sim 2 \times 10^{-5}$ Mpc$^{-3}$. Combining the RXTE detection threshold with our definition of two categories of AGN (above), we obtain the fraction of each AGN category [*VS*]{} redshift. This is shown in Fig.\[fig:AGNeffic\]. Based on this result we restrict the redshifts for the [*all*]{}-AGN category to $z\leq0.005$ and the redshifts for the [*broadline*]{}-AGN category to $z\leq0.03$.

Cosmic Ray–AGN Comparisons {#section:comparison}
==========================

Plots of the distribution of [*dot*]{}-products (see definition in text) for the [*all*]{}-AGN selection are shown in Fig.\[fig:allAGNplot\]. A plot of the AGASA cosmic ray and RXTE AGN directions is given in Fig.\[fig:allAGNdisplay\]. The analogous plots for the [*broadline*]{}-AGN selection are shown in Figs.\[fig:broadAGNplot\] and \[fig:broadAGNdisplay\]. A comparison of Figs.\[fig:allAGNdisplay\] and \[fig:broadAGNdisplay\] shows two events shared between the two selections. Independently we have verified that all RXTE AGNs with redshift $z\leq0.03$ satisfy at least one of our two AGN categories. 
![[*\[fig:allAGNplot\]*]{} The plots show the distribution of [*dot*]{}-products (see definition in the text) for the [*all*]{}-AGN selection: (top left) with cosmic ray energies $E \geq 40$EeV, (top right) with cosmic ray energies $E \geq 53$EeV, and (bottom) with cosmic ray energies $E \geq 100$EeV. The curve on each figure shows the Monte Carlo [*random background*]{} normalized to the number of entries in each plot.](all_agn40.eps "fig:"){width="0.5\linewidth"} ![](all_agn53.eps "fig:"){width="0.5\linewidth"} ![](all_agn100.eps "fig:"){width="0.5\linewidth"}

![[*\[fig:allAGNdisplay\]*]{} The figure shows the map of $RA-Dec$ for the AGASA data and the AGNs from the [*all*]{}-AGN selection. The AGASA data are plotted in [*blue*]{}($*$) for 40EeV$\leq E <$53EeV, [*green*]{}($\blacktriangledown$) for 53EeV$\leq E <$100EeV, and [*red*]{}($\blacksquare$) for 100EeV$\leq E$. The RXTE AGNs are plotted as [*black*]{}([**o**]{}). The galactic plane is drawn as a dotted line.](all_AGN_display.eps){width="14cm"}

![[*\[fig:broadAGNplot\]*]{} The plots show the distribution of [*dot*]{}-products for the [*broadline*]{}-AGN selection: (top left) with cosmic ray energies $E \geq 40$EeV, (top right) with cosmic ray energies $E \geq 53$EeV, and (bottom) with cosmic ray energies $E \geq 100$EeV. The curve on each figure shows the Monte Carlo [*random background*]{} normalized to the number of entries in each plot.](broad_agn40.eps "fig:"){width="0.5\linewidth"} ![](broad_agn53.eps "fig:"){width="0.5\linewidth"} ![](broad_agn100.eps "fig:"){width="0.5\linewidth"}

![[*\[fig:broadAGNdisplay\]*]{} The figure shows the map of $RA-Dec$ for the AGASA data and the AGNs from the [*broadline*]{}-AGN selection. The AGASA data are plotted in [*blue*]{}($*$) for 40EeV$\leq E <$53EeV, [*green*]{}($\blacktriangledown$) for 53EeV$\leq E <$100EeV, and [*red*]{}($\blacksquare$) for 100EeV$\leq E$. The RXTE AGNs are plotted as [*black*]{}([**o**]{}). The galactic plane is drawn as a dotted line. 
](broad_AGN_display.eps){width="14cm"}

The plots of the [*dot*]{}-products for the [*all*]{}-AGN selection, Fig.\[fig:allAGNplot\], show a small excess in the bin nearest to 1 for the AGASA event selections $E \geq 40$EeV and $E \geq 53$EeV. For this bin ([*i.e.*]{} [*dot*]{}-product $\geq0.975$) the excesses are $\sim 1.1$ and $\sim 1.7$ standard deviations for the AGASA event selections $E \geq 40$EeV and $E \geq 53$EeV respectively. If this correlation is valid, then it could provide experimental information to bound the magnetic deflections of extra-galactic cosmic rays. To see if the [*all*]{}-AGN category excesses are consistent with typical GZK models ([*e.g.*]{} Fig.\[fig:berezinski\]), we estimate the RXTE AGN catalog efficiency as follows:

1. 90% sky coverage of the $\sim 83$% of the sky surveyed[^4];
2. 87% estimated completeness factor;
3. $\sim 77$% estimated average [*all*]{}-AGN source detection efficiency (from Fig.\[fig:AGNeffic\]).

This yields a lower bound estimate for the [*all*]{}-AGN RXTE efficiency of $\sim 50$%. However, for the [*all*]{}-AGN category (only 5 sources, see Fig.\[fig:allAGNdisplay\]) it is likely that the global (redshift independent) RXTE efficiency factors are not appropriate[^5]. To obtain an upper bound estimate for the [*all*]{}-AGN RXTE efficiency we assume the global RXTE efficiency is $\sim 100$%. Then the estimated (upper bound) [*all*]{}-AGN RXTE efficiency is $\sim 77$%. The estimated number of cosmic ray:[*all*]{}-AGN coincidences is the number of cosmic rays (with sources in a given redshift region) times the average [*all*]{}-AGN source detection efficiency (for the same redshift region). Thus the estimated number of cosmic rays from the $z<0.005$ region is obtained by dividing the excess counts, Fig.\[fig:allAGNplot\], by the [*all*]{}-AGN efficiencies to obtain: 2.8/0.77 $\sim$ 2.8/0.50 or $3.6 \sim 5.6$ events and 4.0/0.77 $\sim$ 4.0/0.50 or $5.2 \sim 8.0$ events respectively. 
As fractions of all the observed cosmic rays these are: $3.6 \sim 5.6$/57 (or $6.3 \sim 9.8$%) and $5.2 \sim 8.0$/29 (or $18 \sim 28$%) respectively. These fractions are somewhat, to significantly (depending on the [*all*]{}-AGN RXTE efficiency), in excess of typical GZK models assuming proton primaries, see Fig.\[fig:berezinski\]. Finally we note that the AGASA/HiRes cosmic ray [*quartet*]{} (or possibly [*quintet*]{}[@farrar]) cluster, at RA $\approx 169.1^{\circ}$, Dec $\approx 56.3^{\circ}$[@hires_cluster], is near one of the RXTE AGNs at RA $179.3^{\circ}$, Dec $55.23^{\circ}$ and redshift $z=0.0035$. In contrast, there is no close correlation in the [*all*]{}-AGN selection with any of the AGASA super-GZK events[@agasa_cluster] plotted in Fig.\[fig:allAGNdisplay\] (more below). The plots of the [*dot*]{}-products for the [*broadline*]{}-AGN selection, Fig.\[fig:broadAGNplot\], are consistent with [*random background*]{}. If we assume the cosmic rays are primarily protons, then we can use a model such as Fig.\[fig:berezinski\] to estimate the number of cosmic rays expected from sources with redshifts $z\leq0.03$. Then [*e.g.*]{} for the AGASA selection $E \geq 53$EeV, we expect $\sim 60$% to originate from sources with redshifts $z\leq0.03$, or $\sim 17.4$ events. However the number that should appear in [*dot*]{}-product bins near 1 depends on the RXTE AGN catalog efficiency. Similar to the evaluation above, we obtain an overall [*broadline*]{}-AGN RXTE efficiency of $\sim 54$%. Thus we should observe a [*signal*]{} as an excess of $\sim 9.4$ events. Unfortunately, in the absence of a signal signature ([*i.e.*]{} [*dot*]{}-product bins in excess of [*random background*]{}) or of a bound on magnetic field deflections, any statement on the lack of an excess depends on the assumed [*dot*]{}-product range. That said, assuming any [*signal*]{} would appear at [*dot*]{}-products $\geq0.95$, then $\sim 9.4$ events should result in a $\sim 1.9$ standard deviation excess. 
With the AGASA statistics and our current knowledge of cosmic ray deflections by magnetic fields, no strong conclusion can be drawn. ![ [*\[fig:allLTp03AGNdisplay\]*]{} The figure shows the map of $RA-Dec$ for the updated list[@AGASAwebsite] AGASA events with energies $E\geq100$EeV and the RXTE catalog of AGNs with redshift $z\leq0.03$ without restriction on AGN declination. The AGASA events are plotted in [*red*]{}($\blacksquare$). The RXTE AGNs are plotted as [*black*]{}([**o**]{}). The galactic plane is drawn as a dotted line.](allLTp03_AGN_display.eps){width="14cm"} 0.2 cm The final issue is the evidence for, or against, correlations between the RXTE catalog of AGNs and the most energetic AGASA events. To investigate this, we show in Fig.\[fig:allLTp03AGNdisplay\] all of the AGNs from the RXTE catalog with $z\leq0.03$ and all of the AGASA events above 100EeV as updated on the AGASA web site[@AGASAwebsite]. Now with 11 super-GZK events: 3 have [*dot*]{}-products $>0.975$, $\sim3$ are close to the galactic plane (region unobserved by RXTE) and the remaining 5 do not correlate well ([*e.g.*]{} [*dot*]{}-product $^<_{\sim} 0.95$) with the RXTE catalog of AGNs. While the 3 AGASA super-GZK events that are close to RXTE catalog AGNs are consistent with [*random background*]{}, on inspection these events all have [*dot*]{}-products $>0.99$. In this case the expected [*random background*]{} is $\sim 1.6$. Furthermore one of these events, with energy $E=122$EeV and RA $176.0^{\circ}$, Dec $36.3^{\circ}$, is close to the group of very nearby ($z<0.005$ in Fig.\[fig:allAGNdisplay\]) AGNs. The closest correlation is with the RXTE AGN at: RA $182.7^{\circ}$, Dec $39.45^{\circ}$ and redshift $z=0.0033$. Of the 5 AGASA super-GZK events that do not correlate well with the RXTE AGNs, 4 lie far from the galactic plane and have energies well above 100EeV. Thus for proton primaries, based on Fig.\[fig:berezinski\] these should originate at redshifts $z^<_{\sim}0.01$. 
For the [*broadline*]{}-AGN category the RXTE average source detection efficiency is then $\sim 100$% based on Fig.\[fig:AGNeffic\]. Thus for these AGASA events we expect $\sim 4 \times 0.9 \times 0.87 = 3.1$ correlations with the RXTE catalog of AGNs, whereas we observe zero. Furthermore, the Poisson probability of observing zero is small: 4.4%. In contrast, if the [*all*]{}-AGN category is the more appropriate source of super-GZK events, then the RXTE average source detection efficiency based on Fig.\[fig:AGNeffic\] is significantly less than 100%, particularly for source redshifts to $z^<_{\sim}0.01$. In this case, for the [*all*]{}-AGN category the RXTE average source detection efficiency is estimated at $\sim 53$%, resulting in an overall RXTE catalog efficiency of $\sim 34$%. Thus we expect approximately $11 \times 0.34 = 3.7$ correlations (with sources to $z^<_{\sim}0.01$). The Poisson probability to observe $\leq 1$ correlation (one correlation was observed with the RXTE AGNs to $z^<_{\sim}0.01$) is 11.6%[^6]. However, if the AGASA energies are overestimated (with respect to the energy scale of Fig.\[fig:berezinski\]) then some of the AGASA events with energies closest to 100EeV could originate at redshifts $z>0.01$. If we extend the possible RXTE AGNs to redshifts of $z^<_{\sim}0.02$, then the RXTE average source detection efficiency decreases to $\sim 33$% (because most of the lower X-ray luminosity AGNs are unobserved by RXTE), resulting in an overall RXTE catalog efficiency of $\sim 21$%. Thus we expect $11 \times 0.21 = 2.3$ correlations, and we observe two. The (new) additional correlation is between an AGASA event with $E=120$EeV and a RXTE AGN at redshift $z=0.016$. The Poisson probability to observe two correlations is $\sim 27$%.

Summary {#section:summary}
=======

We have searched for correlations between the published list of the highest energy events from the AGASA experiment[@agasa_cluster; @AGASAwebsite] and the RXTE catalog of AGNs[@rxte_catalog]. 
Two categories of RXTE AGNs were considered: [*all*]{}-AGNs with RXTE 3-20keV intrinsic luminosities, $L_{3-20} \geq 10^{41}$ ergs/s, and [*broadline*]{}-AGNs with 2-8keV intrinsic luminosities, $L_{2-8} \geq 10^{42}$ ergs/s motivated by the analysis of AGN evolution in Ref.[@barger]. To retain RXTE source detection efficiencies $^>_{\sim}50$%, source redshifts of $z\leq0.005$ and $z\leq0.03$ were required for the [*all*]{}-AGN and [*broadline*]{}-AGN categories respectively. No correlations were observed between the AGASA events and the [*broadline*]{}-AGN category of RXTE AGNs even though this category of AGN is most luminous in X-rays and even though the source density for this category of AGN is favored by some analyses[@blasi; @kachelriess] of the highest energy cosmic rays. In contrast, possible correlations were observed between AGASA events and the most inclusive, [*all*]{}-AGN, category of RXTE AGNs. We note that while not statistically conclusive, one of the nearby RXTE AGNs correlates with the AGASA/HiRes [*quartet*]{} event cluster[@hires_cluster] and one correlates with one of the AGASA super-GZK events[@AGASAwebsite]. Additional data would help confirm, or refute, the interesting possibility of highest energy cosmic ray–AGN correlations. Acknowledgements ================ We wish to acknowledge useful communications with Francesc Ferrer on possible ultra-high energy cosmic rays : BL Lac correlations. [99]{} E. Waxman, K. B. Fisher and T. Piran, Astrophys. J. [**483**]{}, 1 (1997) \[astro-ph/9604005\] A. Cuoco, R. D’ Abrusco, G. Longo, G. Miele and P. D. Serpico, JCAP 0601 (2006) 009, \[astro-ph/0510765\] K. Dolag, D. Grasso, V. Springel and I. Tkachev, JCAP 0501 (2005) 009, \[astro-ph/0410419\] N. Hayashida [*et al.*]{}, Astron. J. [**120**]{}, 2190 (2000), \[astro-ph/0008102\] P. G. Tinyakov and I. I. Tkachev, JETP Lett. [**74**]{}, 1 (2001), \[astro-ph/0102101\] P. G. Tinyakov and I. I. 
Tkachev, JETP Lett., [**74**]{}, 445 (2001), \[astro-ph/0102476\]; Astropart. Phys., [**18**]{}, 165 (2002), \[astro-ph/0111305\] D. S. Gorbunov, P. G. Tinyakov, I. I. Tkachev and S. V. Troitsky, Astrophys. J. [**577**]{}, L93 (2002) D. S. Gorbunov, P. G. Tinyakov, I. I. Tkachev and S. V. Troitsky, JETP Lett., [**80**]{}, 145 (2004) \[astro-ph/0406654\] D. S. Gorbunov and S. V. Troitsky, Astropart. Phys. [**23**]{}, 175 (2005) \[astro-ph/0410741\] R. U. Abbasi [*et al.*]{}, Astrophys.J. [**636**]{} 680 (2006), \[astro-ph/0507120\] S. L. Dubovsky, P. G. Tinyakov and I. I. Tkachev, Phys. Rev. Lett. [**85**]{} 1154 (2000), \[astro-ph/0001317\] P. Blasi and D. De Marco, Astropart. Phys. [**20**]{} 559 (2004), \[astro-ph/0307067\] M. Kachelriess and D. Semikoz, Astropart. Phys. [**23**]{} 486 (2005) \[astro-ph/0405258\] C. B. Finley and S. Westerhoff, Astropart. Phys. [**21**]{}, 359 (2004) \[astro-ph/0309159\] S. Westerhoff [*et al.*]{}, Nucl. Phys. B (Proceedings Suppl.) [**136C**]{} 46 (2004), \[astro-ph/0408343\]; R. U. Abbasi [*et al.*]{}, Astrophys.J. [**623**]{} 164 (2005), \[astro-ph/0412617\]; S. Westerhoff [*et al.*]{}, Proc. 29th International Cosmic Ray Conference, [**7**]{} 397 (2005), \[astro-ph/0507574\] N. W. Evans, F. Ferrer and S. Sarkar, Phys. Rev. [**D67**]{} 103005 (2003) \[astro-ph/0212533\]; Phys. Rev. [**D69**]{} 128302 (2004) \[astro-ph/0403527\]\ B. Stern and J. Poutanen, Astrophys.J. [**623**]{} L33 (2005) \[astro-ph/0501677\] G. Sigl, F. Miniati and T. E. Ensslin, \[astro-ph/0409098\] K. Greisen, Phys. Rev. Lett. [**16**]{}, 748 (1966); G. T. Zatsepin and V. A. Kuzmin, Pisma Zh. Eksp. Teor. Fiz. [**4**]{}, 144 (1966) P. G. Tinyakov and I. I. Tkachev, Astropart. Phys. [**24**]{} 32 (2005), \[astro-ph/0411669\] M. Kachelriess, P.D. Serpico and M. Teshima, \[astro-ph/0510444\] G. A. Medina Tanco, E. M. de Gouveia Dal Pino and J. E. Horvath, \[astro-ph/9707041\] M. Kachelriess and D. V. Semikoz, \[astro-ph/0512498\] A. Smialkowski, M. 
Giller and W. Michalak, J. Phys. [**G28**]{} 1359 (2002), \[astro-ph/0203337\] S. Singh, C-P. Ma, and J. Arons, Phys. Rev. [**D69**]{} 063003 (2004), \[astro-ph/0308257\] V. Berezinsky, A.Z. Gazizov and S.I. Grigorieva, \[hep-ph/0107306\] and \[hep-ph/0204357\] S. Yu. Sazonov and M. G. Revnivtsev, Astronomy & Astrophysics [**423**]{} 469 (2004), \[astro-ph/0402415\]; M. Revnivtsev, S. Sazonov, E. Churazov and S. Trudolyubov, \[astro-ph/0511444\] M. Revnivtsev, S. Sazonov, K. Jahoda and M. Gilfanov, Astronomy & Astrophysics [**418**]{}, 927 (2004) V. Berezinski, A. Gazizov and S. Grigorieva, \[astro-ph/0302483\] A. T. Steffen [*et al.*]{}, Astrophys. J. [**596**]{}, L23 (2003) \[astro-ph/0308238\] A. J. Barger [*et al.*]{}, Astron. J. [**129**]{} 578 (2005), \[astro-ph/0410527\] G. R. Farrar, \[astro-ph/0501388\] www-akeno.icrr.u-tokyo.ac.jp/AGASA/results.html\#100EeV D. J. Bird [*et al.*]{}, Astrophys. J. [**441**]{} 144 (1995) [^1]: Thus each cosmic ray has one entry in the [*dot*]{}-product distribution. This choice is consistent with each cosmic ray having one source. As only the AGN source [*nearest in angle*]{} to the cosmic ray is chosen this can result in possible misidentification in the case of large source density. [^2]: This corresponds to a distance $r \approx 42$ Mpc. [^3]: For our study we relate the RXTE 3-20keV intrinsic luminosities, $L_{3-20}$ in ergs/s, to 2-8keV intrinsic luminosities, $L_{2-8}$ in ergs/s using: $L_{2-8} \approx L_{3-20}/2$; private communication from Sergey Sazonov. [^4]: This corresponds to the sky fraction outside a $10^{\circ}$ avoidance zone about the galactic plane. [^5]: In particular assuming an average AGN source density of $4.2 \times 10^{-4}$ Mpc$^{-3}$ (see above) and the RXTE global efficiency[@rxte_catalog] of $0.9 \times 0.83 \times 0.7 \approx 52$%, the predicted number of nearby ($z\leq0.005$) RXTE AGNs is approximately half of those observed. 
While the small number of AGNs makes this statistically weak, it is nevertheless consistent with a RXTE global efficiency of $\sim 100$% for nearby ($z\leq0.005$) AGNs and/or with a local over-density of AGNs. Although these estimates were based on the AGN number density [*vs.*]{} X-ray luminosity from Ref.[@barger], the AGN number density [*vs.*]{} X-ray luminosity deduced by the RXTE experiment[@rxte_catalog] gave a similar result. [^6]: If we also include the 320EeV event from the Fly’s Eye[@FlysEye] then we expect $12 \times 0.34 = 4.08$ correlations (with RXTE AGNs to $z^<_{\sim}0.01$) and we observe one. Now the Poisson probability to observe $\leq 1$ is 8.6%. Anecdotally the Fly’s Eye event is very close, $\sim 3.0^{\circ}$, to one of the RXTE sources at RA $88.8^{\circ}$ Dec $46.3^{\circ}$ and redshift z=0.02. If this is a true correlation, then the proton nature of the cosmic ray and/or the measured energy of the cosmic ray are in question.
--- abstract: 'We study octet-octet baryon ($J^P = {\textstyle \frac12}^+$) contact interactions in SU(3) chiral effective field theory by using large-$N_c$ operator analysis. Applying the $1/N_c$ expansion to the Hartree Hamiltonian, we find 15 operators in the octet-octet baryon potential: 4 operators at leading order (LO) and 11 at next-to-next-to-leading order (NNLO). The large-$N_c$ operator analysis of octet-octet baryon matrix elements reduces the number of free parameters from 15 to 6 at LO of the $1/N_c$ expansion. The application of large-$N_c$ sum rules to the Jülich model of hyperon-nucleon (YN) interactions at the LO of the chiral expansion reduces the number of model parameters from 5 to 3 at the LO of the $1/N_c$ expansion. We find that the values of the LECs fitted to YN scattering data in Ref. [@Li:2016paq] in the relativistic covariant ChEFT (EG) approach are more consistent with the predictions of large-$N_c$ than those in the heavy baryon (HB) formalism approach.' author: - Xuyang Liu - Viroj Limkaisang - Daris Samart - Yupeng Yan title: | Large-$N_c$ operator analysis of hyperon-nucleon interactions\ in SU(3) chiral effective field theory --- Introduction ============ Chiral effective field theory (ChEFT) [@Weinberg:1978kz; @Gasser:1983yg], based on the approximately and spontaneously broken chiral symmetry of QCD, allows for a systematic way of calculating low-energy hadronic observables. In the ChEFT it is very efficient and convenient to use hadrons, rather than quarks and gluons, as the basic degrees of freedom. The chiral Lagrangian is required to include all possible interactions between hadrons which are allowed by the relevant symmetries of QCD [@Scherer:2012xha]. A number of low-energy properties of the strong interaction are very successfully described by using the ChEFT. The ChEFT is also utilized to shed light on the study of nuclear forces (see [@Epelbaum:2008ga; @Machleidt:2011zz] for reviews). 
It was demonstrated by Weinberg’s seminal works [@Weinberg:1990rz; @Weinberg:1991um] that one can calculate the nuclear forces systematically by using an appropriate power counting scheme. Therefore, loop corrections and higher order terms can be included to improve the accuracy of the calculations. Nucleon-nucleon (NN) forces derived in the ChEFT successfully describe a large body of NN experimental data. The NN potentials are composed of long and short range interactions, where the long range NN force is mainly contributed by the pion exchange while the short range part is encoded by contact term NN interactions with unknown low-energy constants (LECs) to be fitted to experimental data. The higher order contact terms of the NN potentials have been constructed in Refs. [@Ordonez:1993tn; @Ordonez:1996] at next-to-leading order (NLO) and in Refs. [@Epelbaum:2004fk; @Entem:2003ft] at next-to-next-to-next-to-leading order (N$^3$LO) in terms of chiral expansions. On the other hand, hyperon-nucleon (YN) and hyperon-hyperon (YY) forces have been studied less than the NN forces. YN interactions are key to understanding hyper-nuclei and neutron stars [@Nogga:2001ef; @Lonardoni:2014bwa]. The contact and meson exchange terms of the YN interactions in the ChEFT were constructed by using the SU(3) flavor symmetry in Ref. [@Polinder:2006zh] at leading order (LO) and extended to NLO in Ref. [@Haidenbauer:2013oca]. The most general SU(3) chiral Lagrangians of the octet-octet baryon contact term interactions have been worked out in Ref. [@Petschauer:2013uua]. The study of the YY interactions was performed in Refs. [@Polinder:2007mp; @Haidenbauer:2015zqb; @Haidenbauer:2009qn]. At the LO of the YN interactions [@Polinder:2006zh; @Li:2016paq], the SU(3) chiral Lagrangian has 15 free parameters (LECs) and the partial-wave expansion analysis leads to 5 LECs which are fixed with YN data. 
In this work, we will use the large-$N_c$ operator analysis to explore the $N_c$ scales and reduce the number of unknown LECs in the SU(3) chiral Lagrangians and in the LO YN potential [@Polinder:2006zh; @Li:2016paq]. Large-$N_c$ is an approximate framework of QCD and very useful in the study of hadrons at low energies. The basic idea is that one can consider the number of colors ($N_c$) to be large and expand in powers of $1/N_c$ [@'tHooft:1973jz; @Witten:1979kh]. In this framework, a number of simplifications of QCD occur in the large-$N_c$ limit (see Refs. [@Jenkins:1998wy; @Matagne:2014lla] for reviews). The $1/N_c$ expansion of QCD for the baryon [@Dashen:1993jt; @Dashen:1994qi; @Luty:1993fu] has been applied to the NN potential in [@Kaplan:1995yg; @Kaplan:1996rk; @Banerjee:2001js] and to the three-nucleon potential in [@Phillips:2013rsa]. Moreover, the $1/N_c$ expansion has been used to study parity-violating NN potentials in [@Phillips:2014kna; @Schindler:2015nga] as well as time-reversal violating NN potentials [@Samart:2016ufg]. The large-$N_c$ analysis of the NN system provides an understanding of the $N_c$ scales of the LECs in the NN forces. In addition, the $1/N_c$ expansion also helps us to reduce the number of independent LECs [@Schindler:2015nga]. However, the octet-octet baryon interactions in SU(3) flavor symmetry have not been investigated in the large-$N_c$ approach. In this work, we will extend the large-$N_c$ operator analysis of Refs. [@Kaplan:1996rk; @Phillips:2013rsa] to the SU(3) chiral Lagrangian of Refs. [@Polinder:2006zh; @Li:2016paq]. The large-$N_c$ octet-octet baryon potential is constructed up to NNLO in terms of the $1/N_c$ expansion. We will apply large-$N_c$ sum rules to the YN interactions at LO, which have recently been investigated in Ref. [@Li:2016paq]. Moreover, the results can be applied to the YN interactions at NLO and to the YY sector. 
We outline this work as follows: In section 2 we set up the matrix elements of the octet-octet baryon potential from the SU(3) chiral Lagrangian. In the next section, the potential of the $1/N_c$ expansion is constructed up to NNLO and large-$N_c$ sum rules for the LECs are derived. In section 4, we apply the results of the large-$N_c$ sum rules to the LO YN potential. In the last section, we give the conclusions of this work. The potential of the SU(3) octet-octet baryon contact term interactions ======================================================================= We start with the SU(3) chiral Lagrangian of the octet-octet baryon interactions, which was proposed in Ref. [@Polinder:2006zh]. The SU(3)-flavor symmetry is imposed, and the chiral Lagrangian is Hermitian, invariant under Lorentz transformations and respects the discrete CPT symmetry. The minimal SU(3) invariant chiral Lagrangian with non-derivative terms is given by, $$\begin{aligned} \label{chi-L} {\mathcal L}^{(1)} &=& C^{(1)}_i \left<\bar{B}_1\bar{B}_2\left(\Gamma_i B\right)_2\left(\Gamma_i B\right)_1\right>\ , \nonumber \\ {\mathcal L}^{(2)} &=& C^{(2)}_i \left<\bar{B}_1\left(\Gamma_i B\right)_1\bar{B}_2\left(\Gamma_i B\right)_2\right>\ , \nonumber \\ {\mathcal L}^{(3)} &=& C^{(3)}_i \left<\bar{B}_1\left(\Gamma_i B\right)_1\right>\left<\bar{B}_2\left(\Gamma_i B\right)_2\right>\ .\end{aligned}$$ Here $1$ and $2$ denote the labels of the particles in the scattering process, $B$ is the usual irreducible octet representation of SU(3) given by $$\begin{aligned} B&=& \frac{1}{\sqrt 2}\sum_{a=1}^8 \lambda^a B^a = \left( \begin{array}{ccc} \frac{\Sigma^0}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} & \Sigma^+ & p \\ \Sigma^- & \frac{-\Sigma^0}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}} & n \\ -\Xi^- & \Xi^0 & -\frac{2\Lambda}{\sqrt{6}} \end{array} \right) \ , \label{eq:7}\end{aligned}$$ where the $\langle \cdots \rangle$ brackets denote taking the trace in the three-dimensional flavor space and the normalization of 
Gell-Mann matrices $\langle \lambda^a\,\lambda^b \rangle = 2\,\delta^{ab}$ is used. The $\Gamma_i$ are the usual elements of the Clifford algebra $$\Gamma_1=1 \, , \,\, \Gamma_2=\gamma^\mu \, , \,\, \Gamma_3=\sigma^{\mu\nu} \, , \,\, \Gamma_4=\gamma^\mu\gamma_5 \, , \,\, \Gamma_5= i\,\gamma_5 \,\, . \label{eq:2.2}$$ By using the chiral power counting in Ref. [@Polinder:2006zh], it has been shown that there are 15 LO non-derivative terms of the chiral Lagrangian. It has also been demonstrated in Ref. [@Polinder:2006zh] that the above Lagrangians are the minimal set of the contact interaction terms with respect to flavor and spin structures, by using the Cayley-Hamilton identity and Fierz transformations. To obtain the potentials, we follow the approach of Refs. [@Girlanda:2010ya; @Girlanda:2010zz] by imposing relativistic covariant constraints. Letting ${\mathcal H} = -\,{\mathcal L}$ and taking the relativistic constraints of [@Girlanda:2010ya; @Girlanda:2010zz] into account, one obtains the potential of the octet-octet baryon contact interactions up to second order in the small baryon momenta; it reads, $$\begin{aligned} \label{pot-1} V^{(1)}&=& \langle\bar\chi_2, d\,;\, \bar\chi_1, c\,|\, {\mathcal H}^{(1)} |\, a,\chi_1\,;\, b,\chi_2 \rangle \nonumber\\ &=& \left\{ \frac13\,\delta^{cd}\delta^{ba} + \frac12\,\big(\,d^{cde} +if^{cde} \big)\big( d^{eba} + if^{eba}\big) \right\} \nonumber\\ &&\quad \times\,\Big\{\, c_S^{(1)}\tilde O_S + c_T^{(1)}\tilde O_T + \left(c_1^{(1)}p_-^2 + c_2^{(1)}p_+^2 \right)\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} + \left( c_3^{(1)}p_-^2 + c_4^{(1)}p_+^2 \right)\vec\sigma_1 \cdot \vec\sigma_2 \nonumber\\ &&\qquad\qquad\qquad +\, c_5^{(1)}\frac{i}{2} (\vec\sigma_1 + \vec\sigma_2) \cdot \left(\vec p_+\times\vec p_- \right) + c_6^{(1)}(\vec p_-\cdot\vec\sigma_1)(\vec p_-\cdot\vec\sigma_2) + c_7^{(1)}(\vec p_+\cdot\vec\sigma_1)(\vec p_+\cdot\vec\sigma_2) \,\Big\},\end{aligned}$$ where $$\begin{aligned} \tilde O_S &=& 
\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} + \frac{i}{2M^2} \left(\vec p_+\times\vec p_- \right) \cdot (\vec\sigma_1 - \vec\sigma_2)\,, \nonumber\\ \tilde O_T &=& \vec\sigma_1 \cdot \vec\sigma_2 - \frac{i}{2M^2} \left(\vec p_+\times\vec p_- \right) \cdot (\vec\sigma_1 - \vec\sigma_2) \,,\end{aligned}$$ and $\vec \sigma_{i}\equiv \vec\sigma_{\bar\chi_i\chi_i}$ with $i=1,2$. The indices $a\,(c)$, $b\,(d)$, $\chi_1\,(\bar\chi_{1})$ and $\chi_2\,(\bar\chi_{2})$ are flavor and spin indices of the incoming (outgoing) baryons 1 and 2, respectively, and $M$ is the octet baryon mass in the SU(3) flavor symmetry limit. We note that the octet-octet baryon potentials agree with the heavy baryon formulation of ChEFT in [@Pastore:2009is; @Epelbaum:1998ka] for the spin structures. By using partial integration and the baryon equation of motion to eliminate time derivatives, as shown in Refs. [@Girlanda:2010ya; @Girlanda:2010zz], the potential in Eq. (\[pot-1\]) is the minimal set of linearly independent operators and it consists of 2 LO and 7 NLO operators (see appendix \[appA\] for the detailed derivation of the potential). 
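The spin structures entering these contact potentials act on the two-baryon spin space; for instance, $\vec\sigma_1\cdot\vec\sigma_2$ takes the value $+1$ on spin-triplet and $-3$ on spin-singlet states. A small pure-Python check of this standard identity (the matrix helpers are ours and purely illustrative):

```python
# Pauli matrices and a pure-Python Kronecker product, used to verify the
# familiar eigenvalues of sigma_1 . sigma_2: +1 on triplet, -3 on singlet.
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# sigma_1 . sigma_2 = sum_i sigma_i (x) sigma_i on the two-spin space
S1S2 = mat_add(mat_add(kron(SX, SX), kron(SY, SY)), kron(SZ, SZ))

singlet = [0, 2**-0.5, -(2**-0.5), 0]   # (|ud> - |du>)/sqrt(2)
triplet = [1, 0, 0, 0]                  # |uu>
```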
The LECs $c_i^{(1)}$ are linear combinations of the couplings $C_i^{(1)}$, $$\begin{aligned} \label{new-LECs-1} c_S^{(1)} &=& C_1^{(1)} + C_2^{(1)} \,, \qquad c_T^{(1)} = C_3^{(1)} - C_4^{(1)} \,, \qquad c_1^{(1)} = -\frac{1}{4M^2}\left( C_2^{(1)} + C_3^{(1)} \right),\qquad c_2^{(1)} = -\frac{1}{2M^2}\left( C_1^{(1)} - C_2^{(1)} \right), \nonumber\\ c_3^{(1)} &=& -\frac{1}{4M^2}\left( C_2^{(1)} + C_3^{(1)} \right),\qquad c_4^{(1)} = \frac{1}{4M^2}\left( C_3^{(1)} - C_4^{(1)} \right), \qquad c_5^{(1)} = -\frac{1}{2M^2}\left( C_1^{(1)} - 3 C_2^{(1)} - 3 C_3^{(1)} - C_4^{(1)} \right), \nonumber\\ c_6^{(1)} &=& \frac{1}{4M^2}\left( C_2^{(1)} + C_3^{(1)} + C_4^{(1)} + C_5^{(1)} \right), \qquad c_7^{(1)} = -\frac{1}{4M^2}\left( C_3^{(1)} + C_4^{(1)} \right).\end{aligned}$$ In addition, it is worth discussing the chiral power counting ($Q/M$), where $Q$ is a typical three-momentum of the baryon. We impose $M \sim \Lambda$, where $\Lambda$ is the chiral symmetry breaking scale; therefore, the power counting rule adopted in this work is $Q/M \sim \left(Q/\Lambda\right)^2$, which has been used in Refs. [@Epelbaum:2004fk; @Ordonez:1996] for the NN potentials. The momentum notations used in this work are defined below $$\begin{aligned} \label{momentum-pm} \vec p_+ = \frac12(\vec p\,' + \vec p)\,,\quad p_+^2 = \vec p_+\cdot\vec p_+\,,\qquad \vec p_- = \vec p\,' - \vec p\,,\quad p_-^2 = \vec p_-\cdot\vec p_-\,, \qquad \vec n = \vec p\times\vec p\,' = \vec p_+\times\vec p_-\,,\end{aligned}$$ where $\vec p\,(\vec p\,'\,)$ is the incoming (outgoing) three-momentum in the c.m. 
frame and the on-shell condition of the external momenta is given by $$\begin{aligned} \vec p_+\cdot\vec p_- = 0\,.\end{aligned}$$ In the same manner, the octet-octet baryon potentials for $C_i^{(2)}$ and $C_i^{(3)}$ are written as $$\begin{aligned} \label{pot-2} V^{(2)} &=& \langle\bar\chi_2, d\,;\, \bar\chi_1, c \,|\, {\mathcal H}^{(2)} \,|\; a,\chi_1\,;\, b,\chi_2 \rangle \nonumber\\ &=& \left\{ \frac13\,\delta^{ca}\delta^{bd} + \frac12\,\big(\,d^{cae} +if^{cae} \big)\big( d^{edb} + if^{edb}\big) \right\} \nonumber\\ &&\quad \times\,\Big\{\, c_S^{(2)}\tilde O_S + c_T^{(2)}\tilde O_T + \left(c_1^{(2)}p_-^2 + c_2^{(2)}p_+^2 \right)\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} + \left( c_3^{(2)}p_-^2 + c_4^{(2)}p_+^2 \right)\vec\sigma_1 \cdot \vec\sigma_2 \nonumber\\ &&\qquad\qquad\qquad +\, c_5^{(2)}\frac{i}{2} (\vec\sigma_1 + \vec\sigma_2) \cdot \left(\vec p_+\times\vec p_- \right) + c_6^{(2)}(\vec p_-\cdot\vec\sigma_1)(\vec p_-\cdot\vec\sigma_2) + c_7^{(2)}(\vec p_+\cdot\vec\sigma_1)(\vec p_+\cdot\vec\sigma_2) \,\Big\}\,,\end{aligned}$$ and $$\begin{aligned} \label{pot-3} && V^{(3)} = \langle\bar\chi_2, d\,;\, \bar\chi_1, c \,|\, {\mathcal H}^{(3)} \,|\; a,\chi_1\,;\, b,\chi_2 \rangle \nonumber\\ &&\qquad\, = \delta^{ca}\delta^{bd}\Big\{\, c_S^{(3)}\tilde O_S + c_T^{(3)}\tilde O_T + \left(c_1^{(3)}p_-^2 + c_2^{(3)}p_+^2 \right)\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} + \left( c_3^{(3)}p_-^2 + c_4^{(3)}p_+^2 \right)\vec\sigma_1 \cdot \vec\sigma_2 \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad +\, c_5^{(3)}\frac{i}{2} (\vec\sigma_1 + \vec\sigma_2) \cdot \left(\vec p_+\times\vec p_- \right) + c_6^{(3)}(\vec p_-\cdot\vec\sigma_1)(\vec p_-\cdot\vec\sigma_2) + c_7^{(3)}(\vec p_+\cdot\vec\sigma_1)(\vec p_+\cdot\vec\sigma_2) \,\Big\}\,,\end{aligned}$$ where the LECs in Eqs. (\[pot-2\]) and (\[pot-3\]) are the linear combinations of the couplings as in Eq. 
(\[new-LECs-1\]) by replacing $c_i^{(1)}\rightarrow c_i^{(2,3)}$ and $C_i^{(1)}\rightarrow C_i^{(2,3)}$. By using relativistic reductions as in [@Girlanda:2010ya; @Girlanda:2010zz], we obtain the minimal set of the SU(3) octet-octet baryon potentials, with 27 operators in total. Moreover, Fierz identities for the Gell-Mann matrices ($\lambda^a$) are also taken into account in the calculations of the potentials in Eqs. (\[pot-1\]), (\[pot-2\]) and (\[pot-3\]). We find that there are no redundant terms in the SU(3) flavor structures. We obtain 6 and 21 operators at LO and NLO of the small momentum scale expansion ($Q/M$), respectively. At the LO, the operators from the couplings $C_{1,2,3,4}^{(1,2,3)}$ contribute to the potential, while the couplings $C_5^{(1,2,3)}$ start at NLO. We will reduce the number of independent LECs of the SU(3) octet-octet baryon interactions in the ChEFT by using the large-$N_c$ operator analysis in the next section. The $1/N_c$ operator product expansion analysis of the two-baryon matrix elements ================================================================================= The $1/N_c$ expansion octet-octet baryon ansatz ----------------------------------------------- In this section, we study the $1/N_c$ expansion for the octet-octet baryon matrix elements. According to Witten’s conjecture [@Witten:1979kh], the matrix elements of baryon-baryon scattering should scale like $N_c$, i.e. [@Kaplan:1995yg; @Kaplan:1996rk], $$\begin{aligned} N_c\big\langle B_1\,|\,\mathcal{\hat O}_1^{i}\,|\, B_{1} \big\rangle \big\langle B_2\,|\,\mathcal{\hat O}_2^{i'}\,|\, B_{2} \big\rangle\,,\end{aligned}$$ where the $\mathcal{\hat O}_1^{i}$ and $\mathcal{\hat O}_2^{i'}$ operators are the $i$- and $i'$-quark current operators on the first and the second baryon. It has been proven in Ref. 
[@Luty:1993fu] that the matrix elements for one baryon in SU(3) flavor symmetry have the $N_c$ scaling $$\begin{aligned} \big\langle B_j\,|\,\mathcal{\hat O}_j^{i}\,|\, B_j \big\rangle \lesssim N_c^0\,,\end{aligned}$$ with $j=1,~2$. One can expand the matrix elements in terms of effective quark operators and effective spin-flavor baryon states in the $1/N_c$ expansion as [@Dashen:1994qi; @Luty:1993fu], $$\begin{aligned} \big\langle B\,|\,\mathcal{\hat O}^{i}\,|\, B \big\rangle = \big( B\,|\,\sum_r c_r^{(i)}\left(\frac{\mathcal{O}}{N_c}\right)^r\,|\, B \big)\,,\end{aligned}$$ where $c_r^{(i)}$ is a function which contains the dynamical properties of the system and $|\, B \big)$ is an effective baryon state composed of spin and flavor structures only [@Dashen:1994qi; @Luty:1993fu]. The $\mathcal{O}^r$ are the $r$-body operators, which are composed of the effective quark operators [@Kaplan:1995yg; @Kaplan:1996rk], $$\begin{aligned} \left(\frac{\mathcal{O}}{N_c}\right)^r = \left(\frac{J}{N_c}\right)^l\,\left(\frac{T}{N_c}\right)^m\,\left(\frac{G}{N_c}\right)^n\,,\quad {\rm with}\quad \,l+m+n = r\,.\end{aligned}$$ The operators $J$, $T$ and $G$ are spin, flavor and spin-flavor operators, respectively, and they are defined by [@Dashen:1994qi; @Lutz:2010se], $$\begin{aligned} && \mathbb{1} = q^\dagger ( \mathbf{1} \otimes \mathbf{1} )\,q \,, \qquad \qquad \;\; J_i = q^\dagger \Big(\frac{\sigma_i }{2} \otimes \mathbf{1}\Big)\, q \,, \nonumber\\ && T^a = q^\dagger \Big(\mathbf{1} \otimes \frac{\lambda_a}{2} \Big)\, q\, ,\qquad \quad \;\; G^a_i = q^\dagger \Big( \frac{\sigma_i}{2} \otimes \frac{\lambda_a}{2} \Big)\, q\,, \label{def:one-body-operators}\end{aligned}$$ where $q$ and $q^\dagger$ are quark annihilation and creation operators, respectively. 
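The flavor generators $\lambda_a/2$ entering $T^a$ and $G^a_i$ obey the normalization $\langle \lambda^a\,\lambda^b \rangle = 2\,\delta^{ab}$ quoted earlier; this can be verified numerically. A small pure-Python sketch (the matrix helpers are ours):

```python
from itertools import product

s3 = 3 ** -0.5
# The eight Gell-Mann matrices lambda^a as nested lists (complex entries).
LAM = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s3, 0, 0], [0, s3, 0], [0, 0, -2 * s3]],
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

# verify the normalization <lambda^a lambda^b> = 2 delta^{ab}
ok = all(abs(trace(matmul(LAM[a], LAM[b])) - (2 if a == b else 0)) < 1e-12
         for a, b in product(range(8), repeat=2))
```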
According to the full antisymmetry of the SU($N_c$) color group and Fermi statistics, the spin-flavor part of the baryonic ground state of the $N_c$ quarks has to be in the completely symmetric representation. Therefore one can treat the quark operators $q$ and $q^\dagger$ as bosonic operators with the commutation relation $\left [ q\,,\, q^\dagger\right] = 1$. The $r$-body operator $\mathcal{O}^r$ and the coefficient $c_r^{(i)}$ scale like [@Kaplan:1995yg; @Kaplan:1996rk], $$\begin{aligned} \big( B\,|\,\mathcal{O}^r \,|\, B \big) \lesssim N_c^r\,,\qquad c_r^{(i)} \sim N_c^0\,.\end{aligned}$$ In addition, the one-baryon matrix elements of the operators $J$, $T$ and $G$ in the SU(3) flavor symmetry framework have the following $N_c$ scaling [@Dashen:1994qi] $$\begin{aligned} \label{Nc-scale-operators} \big(B\,|\,J^i\,|\,B\big) \sim N_c^0 \,,\quad \big(B\,|\,\mathbb{1}\,|\,B\big) \sim N_c\,,\quad \big(B\,|\,T^a\,|\,B\big) \lesssim N_c\,,\quad \big(B\,|\,G^{i\,a}\,|\,B\big) \lesssim N_c\,.\end{aligned}$$ In contrast to the SU(2) flavor symmetry, there is only one operator whose one-baryon matrix elements do not grow with $N_c$, namely $J$, whereas all the other effective operators raise the $N_c$ factor. The symbol $\lesssim$ indicates that the maximum of the $N_c$ scaling is saturated for $\big(B\,|\,T^a\,|\,B\big)$ and $\big(B\,|\,G^{i\,a}\,|\,B\big)$: the matrix elements of the $T^a$ operator scale like $N_c^0$ for $a=1,2,3$, as $\sqrt{N_c}$ when $a=4,5,6,7$ and as $N_c$ when $a=8$. On the other hand, the matrix elements of the $G^{i\,a}$ scale like $N_c$ for $a=1,2,3$, as $\sqrt{N_c}$ when $a=4,5,6,7$ and as $N_c^0$ when $a=8$ [@Dashen:1994qi]. These are the differences of the effective operators between the SU(2) and SU(3) flavor symmetries. Moreover, it is worth discussing the $N_c$ scaling of the external momentum variables. Here we consider all momenta in the c.m. 
frame, as we discussed in the previous section. Recalling the momentum variables in Eq. (\[momentum-pm\]), their $N_c$ scaling reads [@Kaplan:1996rk], $$\begin{aligned} \label{momentum-Nc} \vec p_+ \sim 1/N_c\,,\qquad \vec p_- \sim N_c^0\,.\end{aligned}$$ In a meson exchange picture, the $\vec p_+$ can only appear in the baryon-baryon potential as a relativistic correction (i.e., a velocity dependent term). Therefore, the $\vec p_+$ always comes with a factor $1/M$. Since $M\,\sim\,N_c$, this gives $\vec p_+ \sim 1/N_c$ (for more detailed discussions see [@Kaplan:1996rk; @Phillips:2013rsa; @Schindler:2015nga]). The baryon-baryon potential in terms of the $1/N_c$ expansion can be written as a Hartree Hamiltonian [@Kaplan:1996rk; @Phillips:2013rsa; @Schindler:2015nga]. It takes the following form, $$\begin{aligned} \label{hartee} \hat H = N_c\sum_{r}\sum_{lm} c_{r,lm}\, \left(\frac{J}{N_c}\right)^l \left(\frac{T}{N_c}\right)^m \left(\frac{G}{N_c}\right)^{r-l-m}\,,\end{aligned}$$ where again the coefficient function $c_{r,lm}$ scales as $N_c^0$. It is well known that, in the large-$N_c$ limit, the spin-$1/2$ and spin-$3/2$ baryons are degenerate. In this work, we project the Hamiltonian $\hat H$ onto the octet (spin-$1/2$) baryon sector only. This has been discussed extensively in [@Kaplan:1996rk]. We construct the Hamiltonian order by order in the $1/N_c$ expansion. The leading order (LO) is given by $$\begin{aligned} \label{LO} \hat H_{\rm LO} &=& U_1^{\rm LO}(p_-^2)\, \mathbb{1}_1\cdot\mathbb{1}_2 + U_2^{\rm LO}(p_-^2)\, T_1\cdot T_2 + U_3^{\rm LO}(p_-^2)\, G_1\cdot G_2 + U_4^{\rm LO}(p_-^2)\, (p_-^i p_-^j)_{(2)}\cdot (G_{1}^{i,a} G_{2}^{j,a})_{(2)} \,,\end{aligned}$$ where $T_1\cdot T_2 = T_1^a T_2^a$ and $G_1\cdot G_2 = G_1^{i,a} G_2^{i,a}$. $U_i^{\rm LO}(p_-^2)$ is an arbitrary function of $p_-^2$ and it scales as $N_c^0$. 
Here we also introduce the notation, $$\begin{aligned} (A^i B^j)_{(2)} \equiv \frac12\left( A^iB^j + A^jB^i - \frac23\delta_{ij}A\cdot B\right),\end{aligned}$$ and then $$\begin{aligned} (p_\pm^i p_\pm^j)_{(2)}\cdot(\sigma_1^i\sigma_2^j)_{(2)} = (\vec p_\pm\cdot\vec\sigma_1)( \vec p_\pm\cdot\vec\sigma_2) - \frac13\,p_\pm^2\sigma_1\cdot\sigma_2 \,.\end{aligned}$$ In this work, we terminate the $1/N_c$ expansion at order $1/N_c^2$. Then, the octet-octet baryon Hamiltonian at NNLO takes the following form, $$\begin{aligned} \label{NNLO} \hat H_{\rm NNLO} &=& U_1^{\rm NNLO}(p_-^2)\, p_+^2 \mathbb{1}_1\cdot\mathbb{1}_2 + U_2^{\rm NNLO}(p_-^2)\, \vec J_1\cdot\vec J_2 + U_3^{\rm NNLO}(p_-^2)\,\vec J_1\cdot\vec J_2\,T_1\cdot T_2 + U_4^{\rm NNLO}(p_-^2)\, p_+^2 T_1\cdot T_2 \nonumber\\ &+& U_5^{\rm NNLO}(p_-^2)\, p_+^2 G_1\cdot G_2 + U_6^{\rm NNLO}(p_-^2)\, i\,(\vec p_+ \times \vec p_-)\cdot(\vec J_1 + \vec J_2) + U_7^{\rm NNLO}(p_-^2)\,i\, (\vec p_+\times\vec p_-)\cdot(T_1^a \vec G_2^a + \vec G_1^a T_2^a) \nonumber\\ &+& U_8^{\rm NNLO}(p_-^2)\, i\,(\vec p_+ \times \vec p_-)\cdot(\vec J_1 + \vec J_2)\, T_1\cdot T_2 + U_9^{\rm NNLO}(p_-^2)\, (p_-^ip_-^j)_{(2)}\cdot(J_1^{i}J_2^{j})_{(2)} \nonumber\\ &+& U_{10}^{\rm NNLO}(p_-^2)\,(p_-^ip_-^j)_{(2)}\cdot(J_1^{i}J_2^{j})_{(2)}\, T_1\cdot T_2 + U_{11}^{\rm NNLO}(p_-^2)\, (p_+^ip_+^j)_{(2)}\cdot(G_1^{i,a}G_2^{j,a})_{(2)}\,.\end{aligned}$$ Here the $1/N_c$ factor accompanying each of the effective operators $\mathbb{1}$, $J$, $T$ and $G$ is implied. The functions $U_{i}^{\rm LO}(p_-^2)$ and $U_{i}^{\rm NNLO}(p_-^2)$ scale as $N_c^0$. Note that there are no $p_+^2 J_1\cdot J_2$ and $(p_+^ip_+^j)_{(2)}\cdot(J_1^{i}J_2^{j})_{(2)}$ structures, because these operators are further suppressed, entering at order $1/N_c^4$. Let us now compare the octet-octet baryon potential and the nucleon-nucleon potential in the $1/N_c$ expansion. 
In the case of the SU(3) flavor symmetry, we find the additional operator $T_1\cdot T_2$ at LO instead of NNLO because $T^8\,T^8/N_c \,\sim\,N_c$, while there is no such operator in the nucleon-nucleon potential. Superficially, the two-body operator $T^a G^{i\,a}/N_c$ should scale like $N_c$ according to the $N_c$ counting rules in Eq. (\[Nc-scale-operators\]). But if we consider the operator more carefully, we find $T^a G^{i\,a}/N_c \sim N_c^0$ because $T^{1,2,3} G^{i\,1,2,3}/N_c \,\sim$ $T^{4,5,6,7} G^{i\,4,5,6,7}/N_c \,\sim$ $T^{8} G^{i\,8}/N_c \,\sim\,N_c^0$. Surprisingly, the SU(3) octet-octet potential has the same structure as the nucleon-nucleon potential in SU(2) flavor symmetry, i.e. there is no NLO term in the $1/N_c$ expansion. The extension of the flavor symmetry from SU(2) to SU(3) does not change the profile of the $1/N_c$ potential. Before closing this section, we summarize the $1/N_c$ expansion of the octet-octet baryon Hamiltonian. There are 4 LO operators. At the NNLO of the $1/N_c$ expansion, we obtain 11 operators. In total we have 15 operators in the $1/N_c$ expansion of the octet-octet baryon potential. Matching the octet-octet baryon potential of the SU(3) chiral Lagrangian with the $1/N_c$ operator product expansion -------------------------------------------------------------------------------------------------------------------- In this section we evaluate the octet-octet baryon potential from the Hartree Hamiltonian in Eqs. (\[LO\]) and (\[NNLO\]). The $1/N_c$ potential is given by $$\begin{aligned} V = \big(\bar\chi_{2},d\,;\,\bar\chi_{1},c\,|\,\hat H\,|\,a,\chi_1\,;\,b,\chi_2\big),\end{aligned}$$ where $a\,(c)$, $b\,(d)$, $\chi_1\,(\bar\chi_{1})$ and $\chi_2\,(\bar\chi_{2})$ are flavor and spin indices of the incoming (outgoing) baryons 1 and 2, respectively. We then match the octet-octet baryon potential with the $1/N_c$ operator product expansion to constrain the LECs of the chiral Lagrangian in Eq. (\[chi-L\]). 
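The $N_c$ bookkeeping summarized above can be mechanized. The following toy counter encodes the saturated pairwise scalings discussed in the text ($\mathbb{1}_1\cdot\mathbb{1}_2$, $T_1\cdot T_2$, $G_1\cdot G_2 \sim N_c^2$; $T\cdot G \sim N_c$; a single $J \sim N_c^0$; $\vec p_+ \sim 1/N_c$) together with the overall Hartree prefactor $N_c$ and a $1/N_c$ per operator insertion, and reproduces the stated orders: LO structures scale as $N_c$, NNLO structures as $1/N_c$. The rule set and labels are our own simplification:

```python
# Matrix-element exponents of N_c for the saturated operator pairings
# described in the text; "J" denotes a single spin operator insertion.
PAIR = {"11": 2, "TT": 2, "GG": 2, "TG": 1, "JJ": 0, "J": 0}

def nc_power(pairs, n_pplus):
    """Exponent of N_c: Hartree prefactor N_c, 1/N_c per operator
    insertion, saturated matrix elements, 1/N_c per power of p_+."""
    n_ops = sum(1 if p == "J" else 2 for p in pairs)
    return 1 - n_ops + sum(PAIR[p] for p in pairs) - n_pplus

# the four LO structures of Eq. (LO)
lo = [nc_power(["11"], 0), nc_power(["TT"], 0),
      nc_power(["GG"], 0), nc_power(["GG"], 0)]

# a sample of NNLO structures of Eq. (NNLO)
nnlo = [nc_power(["11"], 2), nc_power(["JJ"], 0),
        nc_power(["JJ", "TT"], 0), nc_power(["J"], 1),
        nc_power(["TG"], 1), nc_power(["GG"], 2)]
```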
First of all, we recall the action of the effective operators on the effective baryon states at $N_c=3$ [@Lutz:2010se], $$\begin{aligned} \label{one-body} \mathbb{1} \,{|a,\chi)}&=& 3\, {|a,{\bar \chi})}\,, \nonumber \\ J_i \,{|a,\chi)}&=&\frac{1}{2}\, \sigma^{(i)}_{{\bar \chi} \chi}\, {|a,{\bar \chi})}\,, \nonumber \\ T^a\, {|b,\chi)}&=& i\,f^{bca}\, {|c,\chi)}\,, \nonumber \\ G^{a}_i\, {|b,\chi)}&=& \sigma^{(i)}_{{\bar \chi} \chi}\, \Big(\frac12\,d^{bca} + \frac{i}{3}\, f^{bca}\Big)\, {|c,{\bar \chi})} + \,\cdots\,,\end{aligned}$$ where $\cdots$ stands for the corresponding spin-${\textstyle \frac32}$ baryon structure [@Lutz:2010se]; we do not consider the spin-${\textstyle \frac32}$ baryon degrees of freedom in this work. Before matching operators, we make the ansatz that the arbitrary functions $U_i^{\rm LO}$ and $U_i^{\rm NNLO}$ are constants, $$\begin{aligned} U_i^{\rm LO}(p_-^2) = g_i\,,\qquad U_i^{\rm NNLO}(p_-^2) = h_i\,.\end{aligned}$$ Using Eq. (\[one-body\]) in Eqs. (\[LO\]) and (\[NNLO\]), the potential in terms of the large-$N_c$ operators at LO is given by, $$\begin{aligned} \label{LO-pot} V_{\rm LO} &=& 9\,g_1\,\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2}\,\delta^{cd}\delta^{bd} + g_2\,i^2\,f^{ace}\,f^{bde}\,\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} + g_3\,\vec\sigma_1\cdot\vec\sigma_2\,\big( {\textstyle \frac12}\,d^{ace} + {\textstyle \frac{i}{3}}\,f^{ace}\big)\big( {\textstyle \frac12}\,d^{bde} + {\textstyle \frac{i}{3}}\,f^{bde}\big) \nonumber\\ &+& g_4\,(p_-^ip_-^j)_{(2)}\cdot(\sigma_1^i\sigma_2^j)_{(2)}\,\big( {\textstyle \frac12}\,d^{ace} + {\textstyle \frac{i}{3}}\,f^{ace}\big)\big( {\textstyle \frac12}\,d^{bde} + {\textstyle \frac{i}{3}}\,f^{bde}\big)\,,\end{aligned}$$ and at NNLO of the $1/N_c$ expansion it takes the form, $$\begin{aligned} \label{NNLO-pot} V_{\rm NNLO} &=& 9\,h_1\,p_+^2\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2}\,\delta^{cd}\delta^{bd} + \frac14\,h_2\,\vec\sigma_1\cdot\vec\sigma_2\,\delta^{cd}\delta^{bd} +
\frac14\,h_3\,\vec\sigma_1\cdot\vec\sigma_2\,i^2\,f^{ace}\,f^{bde} + h_4\,p_+^2\,i^2\,f^{ace}\,f^{bde}\,\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} \nonumber\\ &+& h_5\,p_+^2\,\vec\sigma_1\cdot\vec\sigma_2\,\big( {\textstyle \frac12}\,d^{ace} + {\textstyle \frac{i}{3}}\,f^{ace}\big)\big( {\textstyle \frac12}\,d^{bde} + {\textstyle \frac{i}{3}}\,f^{bde}\big) + \frac32\,i\,h_6\,(\vec p_+\times\vec p_-)\cdot(\vec\sigma_1 + \vec\sigma_2)\,\delta^{cd}\delta^{bd} \nonumber\\ &+& i\,h_7\, (\vec p_+\times\vec p_-)\cdot \Big[ \vec\sigma_1\,\big( {\textstyle \frac12}\,d^{ace} + {\textstyle \frac{i}{3}}\,f^{ace}\big)\,i\,f^{bde} + \vec\sigma_2\,i\,f^{ace}\,\big( {\textstyle \frac12}\,d^{bde} + {\textstyle \frac{i}{3}}\,f^{bde}\big) \Big] \nonumber\\ &+& \frac32\,i\,h_8\,(\vec p_+\times\vec p_-)\cdot(\vec\sigma_1 + \vec\sigma_2)\,i^2\,f^{ace}\,f^{bde} + \frac14\,h_9\,(p_-^ip_-^j)_{(2)}\cdot(\sigma_1^i\sigma_2^j)_{(2)}\,\delta^{cd}\delta^{bd} \nonumber\\ &+& \frac14\,h_{10}\,(p_-^ip_-^j)_{(2)}\cdot(\sigma_1^i\sigma_2^j)_{(2)}\,i^2\,f^{ace}\,f^{bde} + h_{11}\,(p_+^ip_+^j)_{(2)}\cdot(\sigma_1^i\sigma_2^j)_{(2)}\,\big( {\textstyle \frac12}\,d^{ace} + {\textstyle \frac{i}{3}}\,f^{ace}\big)\big( {\textstyle \frac12}\,d^{bde} + {\textstyle \frac{i}{3}}\,f^{bde}\big).\end{aligned}$$ We note that the $N_c$ scales of the above potentials are $V_{\rm LO} \sim N_c$ and $V_{\rm NNLO} \sim N_c^{-1}$. By using Eqs. (\[pot-1\]), (\[pot-2\]), (\[pot-3\]), (\[LO-pot\]) and (\[NNLO-pot\]), the $N_c$ scaling relations of the LECs can be extracted, $$\begin{aligned} \label{LECs-Nc} C_{1,2}^{(1)}\,\sim C_{1,2}^{(2)}\,\sim C_{1,2}^{(3)}\,\sim N_c\,,\quad\qquad C_{3,4,5}^{(1)}\,\sim C_{3,4,5}^{(2)}\,\sim C_{3,4,5}^{(3)}\,\sim N_c^{-1}\,,\end{aligned}$$ where $\Lambda \sim N_c^0$ [@Kaplan:1996rk; @Phillips:2014kna; @Schindler:2015nga] is implied.
Note that the couplings $C_{1,2}^{(1)}$, $C_{1,2}^{(2)}$ and $C_{1,2}^{(3)}$ are LO, of order $N_c$, while the $N_c$ scaling of $C_{3,4,5}^{(1)}$, $C_{3,4,5}^{(2)}$ and $C_{3,4,5}^{(3)}$ is further suppressed, by order $1/N_c^2$. We find that there is no NLO contribution to the LECs in the $1/N_c$ expansion. Matching the spin and flavor structures between the octet-octet baryon potential of the SU(3) chiral Lagrangian and the $1/N_c$ expansion up to NNLO, the large-$N_c$ operator analysis leads to the following relations between the LECs of the SU(3) baryon contact interactions, $$\begin{aligned} C_1^{(2)} &=& C_1^{(1)} + g_2 - 4\, h_4 \,\Lambda^2 \,, \nonumber\\ C_2^{(2)} &=& C_2^{(1)} + g_2 + 4\, h_4\,\Lambda^2\,, \nonumber\\ C_3^{(2)} &=& C_3^{(1)}-\frac{1}{2}\,g_2 +\frac{1}{8}\,h_3 - 4\,h_4 \,\Lambda^2 + 2\,h_+\,\Lambda^2\,, \nonumber\\ C_4^{(2)} &=& C_4^{(1)}-\frac{1}{2}\,g_2 - \frac{3}{8}\,h_3 - 4\,h_4\,\Lambda^2 + 2\,h_+\,\Lambda^2 \,, \nonumber\\ C_5^{(2)} &=& C_5^{(1)} + \frac{1}{4}\,h_3 + 4\,h_4\,\Lambda^2 - 4\,h_+\,\Lambda^2 + 2\,h_{10}\,\Lambda^2\,, \nonumber\\ C_1^{(3)}&=& -\frac{1}{3}\,C_1^{(1)} +\frac{9}{2}\,g_1 - \frac{1}{3}\,g_2 - 18\,h_1\,\Lambda^2 + \frac{4}{3}\,h_4\,\Lambda^2\,, \nonumber\\ C_2^{(3)}&=& -\frac{1}{3}\,C_2^{(1)} +\frac{9}{2}\,g_1 -\frac{1}{3}\,g_2 + 18\,h_1\,\Lambda^2 -\frac{4}{3}\,h_4\,\Lambda^2\,, \nonumber\\ C_3^{(3)}&=&-\frac{1}{3}\,C_3^{(1)} -\frac{9}{4}\,g_1 +\frac{1}{6}\,g_2 - 18\,h_1\,\Lambda^2 + \frac{1}{16}\,h_2 - \frac{1}{24}\,h_3 +\frac{4}{3}\,h_4\,\Lambda^2 + \frac{3}{2}\,h_6\,\Lambda^2 -\frac{2}{3}\,h_+\,\Lambda^2\,, \nonumber\\ C_4^{(3)}&=&-\frac{1}{3}\,C_4^{(1)} -\frac{9}{4}\,g_1 +\frac{1}{6}\,g_2 - 18\,h_1\,\Lambda^2 - \frac{3}{16}\,h_2 +\frac{1}{8}\,h_3 +\frac{4}{3}\,h_4\,\Lambda^2 + \frac{3}{2}\,h_6\,\Lambda^2-\frac{2}{3}\,h_+\,\Lambda^2\,, \nonumber\\ C_5^{(3)}&=&-\frac{1}{3}\,C_5^{(1)} + 18\,h_1\,\Lambda^2 +\frac{1}{8}\,h_2 -\frac{1}{12}\,h_3 -\frac{4}{3}\,h_4\,\Lambda^2 -3\,h_6\,\Lambda^2 + h_9\,\Lambda^2 + \frac{4}{3}\,h_+\,\Lambda^2 - \frac{2}{3}\,h_{10}\,\Lambda^2\,,\end{aligned}$$ where $h_+ = 2\,h_7/3 + 3\,h_8$. Note that the Jacobi identities for the $f$ and $d$ symbols, $$\begin{aligned} && f^{abe}\,f^{ecd} + f^{bce}\,f^{ead} + f^{cae}\,f^{ebd} = 0\,, \nonumber\\ &&\, d^{abe}\,f^{ecd} + d^{bce}\,f^{ead} + d^{cae}\,f^{ebd} = 0\,\end{aligned}$$ have been used in the matching procedure. Keeping only the LO contributions of the $1/N_c$ expansion, one can reduce the number of free parameters up to corrections of $\mathcal{O}\big( 1/N_c^2 \big)$, i.e. the $h_i$ terms. Nine sum rules for the LECs of the SU(3) octet-octet baryon contact interactions in the ChEFT are derived: $$\begin{aligned} \label{LO-sumrules} && C_1^{(1)}= C_1^{(2)} = -3\,C_1^{(3)} - 2\,C_4^{(2)} - 6\,C_4^{(3)} \,,\qquad C_2^{(1)}= C_2^{(2)} = -3\,C_2^{(3)} - 2\,C_4^{(2)} - 6\,C_4^{(3)} \,, \nonumber\\ &&C_3^{(1)}= C_3^{(2)} = -3\,C_3^{(3)} + \,C_4^{(2)} + 3\,C_4^{(3)}\,,\qquad C_4^{(1)}= C_4^{(2)}\,,\qquad C_5^{(1)} = C_5^{(2)}=-3\,C_5^{(3)}\,.\end{aligned}$$ We find that there are 6 free parameters of the SU(3) octet-octet baryon contact interactions in the ChEFT from the large-$N_c$ operator analysis. At $N_c=3$, these sum rules hold up to corrections of order $1/N_c^2\approx 10\%$. In order to illustrate the application of the 9 large-$N_c$ sum rules, we apply our results to YN interactions in the next section.

Application of the large-$N_c$ sum rules to the Jülich hyperon-nucleon contact interactions at the LO
=====================================================================================================

In this section, we apply the large-$N_c$ sum rules to the Jülich hyperon-nucleon contact interactions at LO [@Polinder:2006zh]. The LO contact terms of the chiral Lagrangian in Eq. (\[chi-L\]), written with the large components of the baryon spinors, have 6 free parameters.
They read [@Polinder:2006zh], $$\begin{aligned} && C_S^{(1)}\,,\quad C_S^{(2)}\,,\quad C_S^{(3)}\,,\quad C_T^{(1)}\,,\quad C_T^{(2)}\,,\quad C_T^{(3)}\,.\end{aligned}$$ The $C_{S,T}^{(1,2,3)}$ are linear combinations of the coupling constants in Eq. (\[chi-L\]), $$\begin{aligned} C_S^{(1,2,3)} = C_1^{(1,2,3)} + C_2^{(1,2,3)}\,,\qquad\quad C_T^{(1,2,3)} = C_3^{(1,2,3)} - C_4^{(1,2,3)}\,.\end{aligned}$$ The operators with couplings $C_5^{(1,2,3)}$ do not contribute to the YN potentials at LO of the chiral expansion. Applying the large-$N_c$ sum rules in Eq. (\[LO-sumrules\]), we find 3 sum rules, i.e., $$\begin{aligned} \label{LO-PW-sumrules} C_S^{(1)} = C_S^{(2)}\,,\qquad C_T^{(1)} = C_T^{(2)} = -3\,C_T^{(3)}\,.\end{aligned}$$ The above sum rules leave only 3 free parameters, and the $N_c$ scalings of those parameters are given by $$\begin{aligned} \label{C_ST-Nc} C_S^{(1,2,3)}\sim N_c\,,\qquad C_T^{(1,2,3)}\sim N_c^{-1}\,.\end{aligned}$$ It is interesting to note that the $N_c$ scalings of the $C_{S,T}^{(1,2,3)}$ in Eq. (\[C\_ST-Nc\]) agree with the NN case [@Kaplan:1996rk; @Phillips:2013rsa]. The sum rules in Eq. (\[LO-PW-sumrules\]) are useful for calculating the partial-wave potentials at LO in the chiral expansion of hyperon-nucleon scattering. The hyperon-nucleon partial-wave potentials at LO have been constructed and studied in Ref. [@Polinder:2006zh] and re-investigated in Ref. [@Li:2016paq]. Imposing SU(3) flavor symmetry, the authors of Ref. [@Polinder:2006zh] find that only 5 parameters (potentials) are needed to fit the experimental data of hyperon-nucleon scattering.
The parameters read $$\begin{aligned} \label{PW-free} C_{1S0}^{\Lambda\Lambda} \equiv V_{1S0}^{\Lambda\Lambda}\,,\qquad C_{3S1}^{\Lambda\Lambda} \equiv V_{3S1}^{\Lambda\Lambda}\,,\qquad C_{1S0}^{\Sigma\Sigma} \equiv V_{1S0}^{\Sigma\Sigma}\,,\qquad C_{3S1}^{\Sigma\Sigma} \equiv V_{3S1}^{\Sigma\Sigma}\,,\qquad C_{3S1}^{\Lambda\Sigma} \equiv V_{3S1}^{\Lambda\Sigma}\,,\end{aligned}$$ where the Jülich-model LO hyperon-nucleon potentials are written in terms of the couplings $C_{S,T}^{(1,2,3)}$ in the following form [@Polinder:2006zh], $$\begin{aligned} \label{YN-LO} V^{\Lambda\Lambda}_{1S0} &=& 4\pi\left[\frac{1}{6}\left(C^{(1)}_S-3C^{(1)}_T\right)+\frac{5}{3}\left(C^{(2)}_S-3C^{(2)}_T\right)+2\left(C^{(3)}_S-3C^{(3)}_T\right)\right], \nonumber \\ V^{\Lambda\Lambda}_{3S1} &=& 4\pi\left[\frac{3}{2}\left(C^{(1)}_S+C^{(1)}_T\right)+\left(C^{(2)}_S+C^{(2)}_T\right)+2\left(C^{(3)}_S+C^{(3)}_T\right)\right], \nonumber \\ V^{\Sigma\Sigma}_{1S0} &=& 4\pi\left[2\left(C^{(2)}_S-3C^{(2)}_T\right)+2\left(C^{(3)}_S-3C^{(3)}_T\right)\right] , \nonumber \\ V^{\Sigma\Sigma}_{3S1} &=& 4\pi\left[-2\left(C^{(2)}_S+C^{(2)}_T\right)+2\left(C^{(3)}_S+C^{(3)}_T\right)\right] , \nonumber \\ V^{\Lambda\Sigma}_{3S1} &=& 4\pi\left[-\frac{3}{2}\left(C^{(1)}_S+C^{(1)}_T\right)+\left(C^{(2)}_S+C^{(2)}_T\right)\right]. \label{eq:2.16}\end{aligned}$$ Applying the sum rules in Eq. (\[LO-PW-sumrules\]) to the 5 free parameters in Eq. (\[PW-free\]), one finds at LO of the $1/N_c$ expansion, $$\begin{aligned} \label{s-wave-sumrules} C_{1S0}^{\Sigma\Sigma} = \frac87\,C_{1S0}^{\Lambda\Lambda} - \frac17\,C_{3S1}^{\Lambda\Lambda} - \frac{11}{21}\,C_{3S1}^{\Lambda\Sigma} \,,\qquad C_{3S1}^{\Sigma\Sigma} = C_{3S1}^{\Lambda\Lambda} + 9\,C_{3S1}^{\Lambda\Sigma}\,.\end{aligned}$$ Note that all of these LECs have the same $N_c$ scaling, namely $N_c$.
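Both Eq. (\[LO-PW-sumrules\]) and Eq. (\[s-wave-sumrules\]) are linear-algebra consequences of the nine sum rules in Eq. (\[LO-sumrules\]) and the potentials in Eq. (\[YN-LO\]). A symbolic check with sympy, parametrizing the 15 LECs by the 6 surviving free parameters:

```python
import sympy as sp

# Free parameters left by the nine LO sum rules, Eq. (LO-sumrules):
# C_i^(3) for i = 1..5 and C_4^(2).
C13, C23, C33, C43, C53, C42 = sp.symbols('C13 C23 C33 C43 C53 C42')
C = {(1, 1): -3*C13 - 2*C42 - 6*C43, (1, 2): -3*C13 - 2*C42 - 6*C43, (1, 3): C13,
     (2, 1): -3*C23 - 2*C42 - 6*C43, (2, 2): -3*C23 - 2*C42 - 6*C43, (2, 3): C23,
     (3, 1): -3*C33 + C42 + 3*C43,   (3, 2): -3*C33 + C42 + 3*C43,   (3, 3): C33,
     (4, 1): C42,                    (4, 2): C42,                    (4, 3): C43,
     (5, 1): -3*C53,                 (5, 2): -3*C53,                 (5, 3): C53}

CS = {n: C[(1, n)] + C[(2, n)] for n in (1, 2, 3)}   # C_S = C_1 + C_2
CT = {n: C[(3, n)] - C[(4, n)] for n in (1, 2, 3)}   # C_T = C_3 - C_4

# The three partial-wave sum rules, Eq. (LO-PW-sumrules):
assert sp.simplify(CS[1] - CS[2]) == 0
assert sp.simplify(CT[1] - CT[2]) == 0 and sp.simplify(CT[1] + 3*CT[3]) == 0

# Juelich LO potentials, Eq. (YN-LO), divided by the common factor 4*pi:
LL_1S0 = (sp.Rational(1, 6)*(CS[1] - 3*CT[1])
          + sp.Rational(5, 3)*(CS[2] - 3*CT[2]) + 2*(CS[3] - 3*CT[3]))
LL_3S1 = sp.Rational(3, 2)*(CS[1] + CT[1]) + (CS[2] + CT[2]) + 2*(CS[3] + CT[3])
SS_1S0 = 2*(CS[2] - 3*CT[2]) + 2*(CS[3] - 3*CT[3])
SS_3S1 = -2*(CS[2] + CT[2]) + 2*(CS[3] + CT[3])
LS_3S1 = -sp.Rational(3, 2)*(CS[1] + CT[1]) + (CS[2] + CT[2])

# The s-wave sum rules, Eq. (s-wave-sumrules):
assert sp.expand(SS_1S0 - sp.Rational(8, 7)*LL_1S0
                 + sp.Rational(1, 7)*LL_3S1 + sp.Rational(11, 21)*LS_3S1) == 0
assert sp.expand(SS_3S1 - LL_3S1 - 9*LS_3S1) == 0
```

All assertions hold identically in the free parameters, confirming that the two s-wave relations carry no information beyond the nine LO sum rules and the LO potential structure.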
The large-$N_c$ analysis of the LO YN potentials thus predicts that there are 3 free parameters at LO of the $1/N_c$ expansion, with $\mathcal{O}\big( 1/N_c^2\big)$ corrections. In the same manner, one can apply the sum rules in Eq. (\[LO-sumrules\]) to the partial-wave analysis of the YN potentials at NLO in Ref. [@Haidenbauer:2013oca], as well as to the YY sector in Refs. [@Polinder:2007mp; @Haidenbauer:2015zqb; @Haidenbauer:2013oca].

|      | $C^{\Lambda \Lambda}_{1S0}$ | $C^{\Sigma \Sigma}_{1S0}$ | $C^{\Lambda \Lambda}_{3S1}$ | $C^{\Sigma \Sigma}_{3S1}$ | $C^{\Lambda \Sigma}_{3S1}$ |
|------|-----------------------------|---------------------------|-----------------------------|---------------------------|----------------------------|
| EG   | $-0.04795(151)$             | $-0.07546(81)$            | $-0.01727(124)$             | $0.36367(30310)$          | $0.01271(471)$             |
| HB   | $-0.03894(1)$               | $-0.07657(1)$             | $-0.01629(13)$              | $0.20029(14050)$          | $-0.00176(304)$            |

: Best-fitted values of the $YN$ s-wave LECs (in units of $10^4$ GeV$^{-2}$) for cut-off $\Lambda=600$ MeV in the EG and HB approaches [@Li:2016paq].[]{data-label="tab_LECs600"}

Next we compare the predictions of the large-$N_c$ sum rules in Eq. (\[s-wave-sumrules\]) with the best-fitted values of the LECs from YN scattering data in Ref. [@Li:2016paq]. That reference performed a partial-wave analysis of YN s-wave scattering using the same chiral Lagrangian as in our work. The authors of Ref. [@Li:2016paq] used two approaches to solve for the scattering amplitudes: the Kadyshevsky equation with the relativistic covariant ChEFT (referred to as EG) and the Lippmann-Schwinger equation with the heavy-baryon formalism (referred to as HB). The relativistic covariant ChEFT (EG) approach has also been used to study NN interactions in [@Ren:2016jna]. The best-fitted values of the LECs are shown in Tab. \[tab\_LECs600\]. We will use the LECs $C_{1S0}^{\Lambda\Lambda}$, $C_{3S1}^{\Lambda\Lambda}$ and $C_{3S1}^{\Lambda\Sigma}$ as input values in Eq.
(\[s-wave-sumrules\]), and the large-$N_c$ sum rules predict that $$\begin{aligned} \label{LECs-sumrules} && C_{1S0,{\rm EG}}^{\Sigma\Sigma} = -0.06327\,,\qquad C_{3S1,{\rm EG}}^{\Sigma\Sigma} = 0.1271\,, \nonumber\\ && C_{1S0,{\rm HB}}^{\Sigma\Sigma} = -0.04333\,,\qquad C_{3S1,{\rm HB}}^{\Sigma\Sigma} = -0.0176\,.\end{aligned}$$ Comparing the LECs $C_{1S0}^{\Sigma\Sigma}$ and $C_{3S1}^{\Sigma\Sigma}$ from the large-$N_c$ predictions with the best-fitted values in Tab. \[tab\_LECs600\], we find that in the EG approach $C_{1S0}^{\Sigma\Sigma}$ and $C_{3S1}^{\Sigma\Sigma}$ from large $N_c$ are of the same order as the best-fitted values and carry the same relative sign. For the HB formalism, $C_{1S0}^{\Sigma\Sigma}$ is likewise of the same order as the large-$N_c$ value and has the same relative sign, but the $C_{3S1}^{\Sigma\Sigma}$ value in the HB approach differs from the large-$N_c$ prediction by one order of magnitude and has the opposite relative sign. Note that the best-fitted LEC values from the EG and HB approaches carry statistical uncertainties at the 68% (one-sigma) level. While Ref. [@Li:2016paq] concluded that there is not much difference between the two approaches, the large-$N_c$ sum rules of this work show that the LECs from the EG approach are more consistent with the large-$N_c$ predictions than those from the HB formalism.

Conclusions
===========

In this work, we studied the large-$N_c$ operator analysis of the octet-octet baryon potential from the SU(3) ChEFT. The minimal set of operators for the octet-octet baryon potential is derived by using the relativistic constraints suggested in Refs. [@Girlanda:2010ya; @Girlanda:2010zz], as well as the Cayley-Hamilton identity and Fierz rearrangement to eliminate redundant operators, as shown in Ref. [@Polinder:2006zh]. Up to NLO in the $Q/\Lambda$ expansion, we found 27 operators for the octet-octet baryon potential in SU(3) flavor symmetry: 6 at LO and 21 at NLO in the small momentum scale.
The octet-octet baryon potential at LO in the $1/N_c$ expansion is of order $N_c$ and contains 4 operators, while the NNLO potential is of order $1/N_c$ and contains 11 operators. The LECs of the ChEFT have two $N_c$ scalings, namely of order $N_c$ and $1/N_c$, as shown in Eq. (\[LECs-Nc\]). Interestingly, the extension of the flavor symmetry from SU(2) to SU(3) in the large-$N_c$ operator analysis does not change the profile of the potential in terms of the $1/N_c$ expansion. There is no NLO term for the SU(3) octet-octet baryon potential, just as for the NN potential [@Kaplan:1996rk; @Phillips:2013rsa]. The matching between the octet-octet baryon potential and the $1/N_c$ operator expansion leaves 6 free parameters among the LECs of the SU(3) chiral Lagrangian at LO of the $1/N_c$ expansion, with $\mathcal{O}\big( 1/N_c^2\big)\approx 10\%$ corrections. The application of the sum rules in Eq. (\[LO-sumrules\]) from the large-$N_c$ constraint to the partial-wave potential of the YN interactions at LO of the chiral expansion reduces the number of LECs of the YN potential from 5 to 3. The comparison of the large-$N_c$ predictions for the LECs with the best-fitted values from YN s-wave scattering reveals that the large-$N_c$ predictions are more consistent with the EG results than with the HB formalism. Note that the theoretical results from the EG and HB approaches in Ref. [@Li:2016paq] are quantitatively similar in describing the YN scattering experimental data. The large-$N_c$ sum rules in this work can also be applied to the YN interactions at NLO and extended to the ChEFT potential of the YY sector. In addition, we expect that future lattice QCD calculations may check the hierarchy of the $N_c$ scalings of the LECs and the large-$N_c$ sum rules predicted in this work.

Acknowledgments {#acknowledgments .unnumbered}
===============

We would like to thank Daniel Phillips and Carlos Schat for carefully reading the manuscript and for useful comments.
We also thank Li-Shen Geng for explaining the details of the best-fitted values of the LECs from YN scattering data. XL acknowledges support by the National Natural Science Foundation of China (Project No. 11547182) and the Doctoral Scientific Research Foundation of Liaoning Province (Project No. 201501197). This work is partly supported by the Thailand Research Fund (TRF) under contract No. MRG5980255 (DS). YY and DS acknowledge support from Suranaree University of Technology (SUT) and the Office of the Higher Education Commission under the NRU project of Thailand (SUT-COE: High Energy Physics & Astrophysics). DS thanks Chamaipawn Jaipang for providing useful references.

The non-relativistic reductions of the chiral Lagrangian {#appA}
========================================================

In this appendix, we derive the non-relativistic reductions of the chiral Lagrangian in Eq. (\[chi-L\]). Here we follow the derivation of Refs. [@Girlanda:2010ya; @Girlanda:2010zz] and focus on the spin (Dirac) structures of the chiral Lagrangian only.
The chiral Lagrangian can be rewritten in terms of the operators $$\begin{aligned} \label{op-chi-L} \widetilde{O}_1 &\equiv& (\bar{B} B) (\bar{B} B)\,, \nonumber\\ \widetilde{O}_2 &\equiv& (\bar{B} \gamma_\mu B) ( \bar{B}\gamma^\mu B )\,, \nonumber\\ \widetilde{O}_3 &\equiv& ( \bar{B} \sigma_{\mu\nu} B)( \bar{B} \sigma^{\mu\nu}B)\,, \nonumber\\ \widetilde{O}_4 &\equiv& (\bar{B} \gamma_\mu \gamma_5 B)( \bar{B} \gamma^\mu \gamma_5 B )\,, \nonumber\\ \widetilde{O}_5 &\equiv& (\bar{B} \gamma_5 B)(\bar{B} \gamma_5 B )\,.\end{aligned}$$

| Operator | Definition |
|----------|------------|
| $O_S$    | $(\varphi_B^\dagger \varphi_B)(\varphi_B^\dagger \varphi_B)$ |
| $O_T$    | $(\varphi_B^\dagger {\bm {\sigma}} \varphi_B)\cdot (\varphi_B^\dagger{\bm {\sigma}}\varphi_B)$ |
| $O_1$    | $(\varphi_B^\dagger \overrightarrow{\bm \nabla} \varphi_B)^2 +{\rm h.c.}$ |
| $O_2$    | $(\varphi_B^\dagger \overrightarrow{\bm \nabla} \varphi_B )\cdot ( \varphi_B^\dagger \overleftarrow{\bm \nabla} \varphi_B)$ |
| $O_3$    | $(\varphi_B^\dagger \varphi_B) ( \varphi_B^\dagger \overrightarrow{\bm \nabla}^2 \varphi_B)+{\rm h.c.}$ |
| $O_4$    | $i \,( \varphi_B^\dagger \overrightarrow{\bm \nabla} \varphi_B) \cdot (\varphi_B^\dagger \overleftarrow{\bm \nabla} \times {\bm \sigma} \varphi_B )+ {\rm h.c.}$ |
| $O_5$    | $i \, (\varphi_B^\dagger \varphi_B)(\varphi_B^\dagger \overleftarrow{\bm \nabla} \cdot {\bm \sigma} \times \overrightarrow{\bm \nabla} \varphi_B)$ |
| $O_6$    | $i \, (\varphi_B^\dagger {\bm \sigma} \varphi_B) \cdot (\varphi_B^\dagger \overleftarrow{\bm \nabla} \times \overrightarrow{\bm \nabla} \varphi_B)$ |
| $O_7$    | $( \varphi_B^\dagger {\bm \sigma} \cdot \overrightarrow{\bm \nabla} \varphi_B) (\varphi_B^\dagger {\bm \sigma}\cdot \overrightarrow{\bm \nabla} \varphi_B) +{\rm h.c.}$ |
| $O_8$    | $(\varphi_B^\dagger \sigma^j \overrightarrow{\nabla^k} \varphi_B)(\varphi_B^\dagger \sigma^k \overrightarrow{\nabla^j} \varphi_B) + {\rm h.c.}$ |
| $O_9$    | $(\varphi_B^\dagger \sigma^j \overrightarrow{\nabla^k} \varphi_B)(\varphi_B^\dagger \sigma^j \overrightarrow{\nabla^k} \varphi_B) + {\rm h.c.}$ |
| $O_{10}$ | $(\varphi_B^\dagger {\bm \sigma} \cdot \overrightarrow{\bm \nabla}\varphi_B) (\varphi_B^\dagger \overleftarrow{\bm \nabla}\cdot {\bm \sigma} \varphi_B)$ |
| $O_{11}$ | $(\varphi_B^\dagger \sigma^j \overrightarrow{\nabla^k}\varphi_B) (\varphi_B^\dagger \overleftarrow{\nabla^j} \sigma^k \varphi_B)$ |
| $O_{12}$ | $(\varphi_B^\dagger \sigma^j \overrightarrow{\nabla^k} \varphi_B) (\varphi_B^\dagger \overleftarrow{\nabla^k} \sigma^j \varphi_B)$ |
| $O_{13}$ | $(\varphi_B^\dagger \overleftarrow{\bm \nabla}\cdot{\bm \sigma} \,\overrightarrow{\nabla^j} \varphi_B) (\varphi_B^\dagger \sigma^j \varphi_B) +{\rm h.c.}$ |
| $O_{14}$ | $2\, (\varphi_B^\dagger \overleftarrow{\bm \nabla} \sigma^j \cdot \overrightarrow{\bm \nabla} \varphi_B) (\varphi_B^\dagger \sigma^j \varphi_B)$ |

: Operators of the LO and NLO contact-term interactions [@Ordonez:1996]; the left (right) arrow on $\nabla$ indicates that the gradient operates on the left (right) field. Normal-ordering of the field operator products is implied.[]{data-label="NR-op"}

The relativistic fermion field $B(x)$ can be expanded in its positive-energy components $\varphi_B(x)$ in the following form [@Girlanda:2010ya; @Girlanda:2010zz], $$\begin{aligned} B(x) = \left[ \left(\begin{array}{c} 1 \\ 0 \end{array} \right) - \frac{i}{2M} \left(\begin{array}{c} 0 \\ {\bm {\sigma}}\cdot {\bm {\nabla}} \end{array} \right) + \frac{1}{8M^2}\left(\begin{array}{c} {\bm {\nabla}}^2 \\ 0 \end{array} \right)\right]\varphi_B(x) + \mathcal{O}\big( Q^3\big)\,,\end{aligned}$$ where $M$ and $Q$ are the baryon mass in the SU(3) flavor-symmetry limit and the small momentum scale, respectively. Up to order $Q^2$, the non-relativistic reductions of the operators in Eq.
(\[op-chi-L\]) are given by $$\begin{aligned} \label{NR-reduce} \widetilde{O}_1 &\stackrel{{\rm NR}}{\simeq}& O_S+\frac{1}{4 M^2} \left( O_1 + 2\, O_2 + 2\, O_3 + 2\, O_5 \right), \nonumber\\ \widetilde{O}_2 &\stackrel{{\rm NR}}{\simeq}& O_S +\frac{1}{4 M^2} \left(-4\, O_2 -2\, O_5 +4\, O_6 +O_7 - O_9 + 2\, O_{10} - 2\, O_{12} \right), \nonumber\\ \widetilde{O}_3 &\stackrel{{\rm NR}}{\simeq}& O_T+ \frac{1}{4 M^2} \left( -O_1 - 2\, O_2 - 4\, O_5 + 2\, O_6 + O_7 - 2\, O_8 +2\, O_{10} -4\, O_{12} - 2\, O_{13} \right), \nonumber\\ \widetilde{O}_4 &\stackrel{{\rm NR}}{\simeq}& -O_T -\frac{1}{4 M^2} \left(- 2 \, O_6 + O_7 - O_9 - 2 \, O_{10} - 2\, O_{12} + 2\,O_{13} - 2\, O_{14} \right), \nonumber\\ \widetilde{O}_5 &\stackrel{{\rm NR}}{\simeq}& \frac{1}{4 M^2}\left(O_7 +2\, O_{10}\right),\end{aligned}$$ where we have taken the above results from Refs. [@Girlanda:2010ya; @Girlanda:2010zz], and the operators $O_i$ ($i=1,...,14$) are listed in Tab. \[NR-op\]. Using partial integrations, it has been shown in Ref. [@Pastore:2009is] that only 12 operators are independent, with the following constraints, $$\begin{aligned} O_7 + 2\, O_{10} = O_8 + 2\, O_{11} \quad {\rm and} \quad O_4 + O_5 = O_6 \,.\end{aligned}$$ As a next step, one rewrites the non-relativistic reductions in Eq. (\[NR-reduce\]) in terms of the basis in Eqs.
(\[pot-1\],\[pot-2\],\[pot-3\]) as [@Girlanda:2010ya], $$\begin{aligned} A_S &\equiv& \tilde O_S = O_S + \frac{1}{4M^2}\left( O_1 + O_3 + O_5 + O_6 \right), \nonumber\\ A_T &\equiv& \tilde O_T = O_T - \frac{1}{4M^2}\left( O_5 + O_6 - O_7 + O_8 + 2\,O_{12} + O_{14} \right), \nonumber\\ A_1 &\equiv& p_-^2\,\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} = O_1 + 2\,O_2 \,, \nonumber\\ A_2 &\equiv& p_+^2\,\delta_{\bar\chi_1\chi_1}\delta_{\bar\chi_2\chi_2} = 2\,O_2 + O_3 \,, \nonumber\\ A_3 &\equiv& p_-^2\,\vec\sigma_1 \cdot \vec\sigma_2 = O_9 + 2\,O_{12} \,, \nonumber\\ A_4 &\equiv& p_+^2\,\vec\sigma_1 \cdot \vec\sigma_2 = O_9 + O_{14} \,, \nonumber\\ A_5 &\equiv& i \left(\vec p_+\times\vec p_- \right) \cdot (\vec\sigma_1 + \vec\sigma_2)/2 = O_5 - O_6 \,, \nonumber\\ A_6 &\equiv& (\vec p_-\cdot\vec\sigma_1)(\vec p_-\cdot\vec\sigma_2) = O_7 + 2\,O_{10}\,, \nonumber\\ A_7 &\equiv& (\vec p_+\cdot\vec\sigma_1)(\vec p_+\cdot\vec\sigma_2) = O_7 + O_8 + 2\,O_{13}\,.\end{aligned}$$ Using the above relations, we obtain the non-relativistic reductions of the chiral Lagrangian in Eq. (\[chi-L\]) in terms of the operators $A_i$ as, $$\begin{aligned} \widetilde{O}_1 &\simeq& A_S + \frac{1}{4 M^2}\left(A_2 - A_5\right), \nonumber\\ \widetilde{O}_2 &\simeq& A_S -\frac{1}{4M^2}\left(A_1 + A_2 + A_3 - 3\,A_5 - A_6 \right), \nonumber\\ \widetilde{O}_3 &\simeq& A_T -\frac{1}{4M^2}\left(A_1 + A_2 + A_3 - A_4 - 3\,A_5 - A_6 + A_7 \right), \nonumber\\ \widetilde{O}_4 &\simeq& -A_T +\frac{1}{4M^2}\left(A_4 + A_5 + A_6 - A_7 \right), \nonumber\\ \widetilde{O}_5 &\simeq& \frac{1}{4M^2}\,A_6 \,.\end{aligned}$$ [99]{} S. Weinberg, Physica A [**96**]{}, 327 (1979). J. Gasser and H. Leutwyler, Annals Phys.  [**158**]{}, 142 (1984). J. Gasser and H. Leutwyler, Nucl. Phys. B [**250**]{}, 465 (1985). S. Scherer and M. R. Schindler, Lect. Notes Phys.  [**830**]{}, pp.1 (2012). S. Scherer, Adv. Nucl. Phys.  [**27**]{}, 277 (2003) \[hep-ph/0210398\]. E. Epelbaum, H. W. Hammer and U. G. Meissner, Rev.
Mod. Phys.  [**81**]{}, 1773 (2009) \[arXiv:0811.1338 \[nucl-th\]\]. R. Machleidt and D. R. Entem, Phys. Rept.  [**503**]{} (2011) 1 \[arXiv:1105.2919 \[nucl-th\]\]. S. Weinberg, Phys. Lett. B [**251**]{}, 288 (1990). S. Weinberg, Nucl. Phys. B [**363**]{}, 3 (1991). C. Ordonez, L. Ray and U. van Kolck, Phys. Rev. Lett.  [**72**]{}, 1982 (1994). C. Ordonez, L. Ray and U. van Kolck, Phys. Rev. C [**53**]{}, 2086 (1996) \[hep-ph/9511380\]. E. Epelbaum, W. Glockle and U. G. Meissner, Nucl. Phys. A [**747**]{}, 362 (2005) \[nucl-th/0405048\]. D. R. Entem and R. Machleidt, Phys. Rev. C [**68**]{}, 041001 (2003) \[nucl-th/0304018\]. A. Nogga, H. Kamada and W. Gloeckle, Phys. Rev. Lett.  [**88**]{}, 172501 (2002) \[nucl-th/0112060\]. D. Lonardoni, A. Lovato, S. Gandolfi and F. Pederiva, Phys. Rev. Lett.  [**114**]{}, no. 9, 092301 (2015) \[arXiv:1407.4448 \[nucl-th\]\]. H. Polinder, J. Haidenbauer and U. G. Meissner, Nucl. Phys. A [**779**]{}, 244 (2006) \[nucl-th/0605050\]. J. Haidenbauer, S. Petschauer, N. Kaiser, U.-G. Meissner, A. Nogga and W. Weise, Nucl. Phys. A [**915**]{}, 24 (2013) \[arXiv:1304.5339 \[nucl-th\]\]. S. Petschauer and N. Kaiser, Nucl. Phys. A [**916**]{}, 1 (2013) \[arXiv:1305.3427 \[nucl-th\]\]. H. Polinder, J. Haidenbauer and U.-G. Meissner, Phys. Lett. B [**653**]{}, 29 (2007) \[arXiv:0705.3753 \[nucl-th\]\]. J. Haidenbauer, U. G. Meißner and S. Petschauer, Nucl. Phys. A [**954**]{}, 273 (2016) \[arXiv:1511.05859 \[nucl-th\]\]. J. Haidenbauer and U.-G. Meissner, Phys. Lett. B [**684**]{}, 275 (2010) \[arXiv:0907.1395 \[nucl-th\]\]. K. W. Li, X. L. Ren, L. S. Geng and B. Long, Phys. Rev. D [**94**]{}, no. 1, 014029 (2016) \[arXiv:1603.07802 \[hep-ph\]\]. G. ’t Hooft, Nucl. Phys. B [**72**]{}, 461 (1974). E. Witten, Nucl. Phys. B [**160**]{}, 57 (1979). E. E. Jenkins, Ann. Rev. Nucl. Part. Sci.  [**48**]{}, 81 (1998) N. Matagne and F. Stancu, Rev. Mod. Phys.  [**87**]{}, 211 (2015) R. F. Dashen, E. E. Jenkins and A. V. Manohar, Phys. Rev. 
D [**49**]{}, 4713 (1994) Erratum: \[Phys. Rev. D [**51**]{}, 2489 (1995)\] \[hep-ph/9310379\]. R. F. Dashen, E. E. Jenkins and A. V. Manohar, Phys. Rev. D [**51**]{}, 3697 (1995) \[hep-ph/9411234\]. M. A. Luty and J. March-Russell, Nucl. Phys. B [**426**]{}, 71 (1994) \[hep-ph/9310369\]. D. B. Kaplan and M. J. Savage, Phys. Lett. B [**365**]{}, 244 (1996) \[hep-ph/9509371\]. D. B. Kaplan and A. V. Manohar, Phys. Rev. C [**56**]{}, 76 (1997) \[nucl-th/9612021\]. M. K. Banerjee, T. D. Cohen and B. A. Gelman, Phys. Rev. C [**65**]{}, 034011 (2002) \[hep-ph/0109274\]. D. R. Phillips and C. Schat, Phys. Rev. C [**88**]{}, no. 3, 034002 (2013) \[arXiv:1307.6274 \[nucl-th\]\]. D. R. Phillips, D. Samart and C. Schat, Phys. Rev. Lett.  [**114**]{}, no. 6, 062301 (2015) \[arXiv:1410.1157 \[nucl-th\]\]. M. R. Schindler, R. P. Springer and J. Vanasse, Phys. Rev. C [**93**]{}, no. 2, 025502 (2016) \[arXiv:1510.07598 \[nucl-th\]\]. D. Samart, C. Schat, M. R. Schindler and D. R. Phillips, Phys. Rev. C [**94**]{}, no. 2, 024001 (2016) \[arXiv:1604.01437 \[nucl-th\]\]. M. F. M. Lutz and A. Semke, Phys. Rev. D [**83**]{}, 034008 (2011) \[arXiv:1012.4365 \[hep-ph\]\]; M. F. M. Lutz, D. Samart and A. Semke, Phys. Rev. D [**84**]{}, 096015 (2011) \[arXiv:1107.1324 \[hep-ph\]\]. L. Girlanda, S. Pastore, R. Schiavilla and M. Viviani, Phys. Rev. C [**81**]{}, 034005 (2010) \[arXiv:1001.3676 \[nucl-th\]\]. L. Girlanda and M. Viviani, Few Body Syst.  [**49**]{}, 51 (2011). S. Pastore, L. Girlanda, R. Schiavilla, M. Viviani and R. B. Wiringa, Phys. Rev. C [**80**]{}, 034004 (2009) \[arXiv:0906.1800 \[nucl-th\]\]. E. Epelbaum, W. Gloeckle and U. G. Meissner, Nucl. Phys. A [**637**]{}, 107 (1998) \[nucl-th/9801064\]. X. L. Ren, K. W. Li, L. S. Geng, B. W. Long, P. Ring and J. Meng,   arXiv:1611.08475 \[nucl-th\].
---
abstract: 'Correct quantization of the free electromagnetic field is proposed.'
author:
- 'D. Yearchuck'
- 'Y. Yerchak'
- 'A. Alexandrov'
title: To Quantization of Free Electromagnetic Field
---

In 1873 “A Treatise on Electricity and Magnetism” by Maxwell [@Maxwell] was published, in which the discovery of the system of electrodynamics equations was reported. The equations are in fact symmetry expressions for the experimental laws established by Faraday, and consequently they are a mathematical mapping of the experimentally founded symmetry of the EM-field. This means in turn that if new experimental data indicate that the symmetry of the EM-field is higher, then the Maxwell equations have to be generalized. That is the reason why the symmetry study of the Maxwell equations has been the subject of much research in field theory up to now. Heaviside [@Heaviside], twenty years after Maxwell's discovery, was the first to pay attention to the symmetry between the electrical and magnetic quantities in the Maxwell equations. The mathematical formulation of this symmetry, consisting in the invariance of the Maxwell equations for the free EM-field under the duality transformations $$\label{eq1d} \vec {E} \rightarrow \pm\vec {H}, \qquad \vec {H} \rightarrow \mp\vec {E},$$ was given by Larmor [@Larmor]. The duality transformations (\[eq1d\]) are a particular case of the more general dual transformations established by Rainich [@Rainich]. The dual transformations produce a one-parametric abelian group $U_1$ of chiral transformations and read $$\label{eq2d} \begin{split} \raisetag{40pt} \vec {E} \rightarrow \vec {E} \cos\theta + \vec {H} \sin\theta\\ \vec {H} \rightarrow \vec {H} \cos\theta - \vec {E} \sin\theta. \end{split}$$ This symmetry indicates that both constituents $\vec {E}$ and $\vec {H}$ of the EM-field enter on an equal footing; in particular, they both have to consist of components with different parity. A subsequent extension of the dual symmetry to the EM-field with sources leads to the requirement of two types of charges.
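A quick numeric illustration that the dual rotation (\[eq2d\]) preserves the energy density $\vec E^2 + \vec H^2$, while the two Lorentz invariants $\vec E^2 - \vec H^2$ and $\vec E\cdot\vec H$ rotate into each other with angle $2\theta$ (a minimal sketch with arbitrary field values):

```python
import numpy as np

rng = np.random.default_rng(1)
E, H = rng.normal(size=(2, 3))  # arbitrary field vectors at one spacetime point
theta = 0.7                     # arbitrary chiral rotation angle

# Dual (chiral) rotation of the pair (E, H).
Ep = E * np.cos(theta) + H * np.sin(theta)
Hp = H * np.cos(theta) - E * np.sin(theta)

# The energy density E^2 + H^2 is invariant ...
assert np.isclose(Ep @ Ep + Hp @ Hp, E @ E + H @ H)

# ... while the complex combination of the Lorentz invariants
# (E^2 - H^2) + 2i E.H picks up a phase exp(-2i theta).
inv = (E @ E - H @ H) + 2j * (E @ H)
invp = (Ep @ Ep - Hp @ Hp) + 2j * (Ep @ Hp)
assert np.isclose(invp, inv * np.exp(-2j * theta))
```

The second assertion makes explicit in what sense the two invariants form a doublet under the chiral group.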
Examples of the display of dual symmetry are, for instance, the equality of the magnetic and electric energy values in an LC-tank or in a free electromagnetic wave. Recently, concrete experimental results have been obtained concerning the dual symmetry of the EM-field in matter. Two new physical phenomena, ferroelectric [@Yearchuck_Yerchak] and antiferroelectric [@Yearchuck_PL] spin wave resonances, have been observed. They were predicted on the basis of the model [@Yearchuck_Doklady] for a chain of electrical “spin” moments, that is, intrinsic electrical moments of (quasi)particles. It is especially interesting that in [@Yearchuck_PL] it was experimentally proved that a purely imaginary electrical “spin” moment, in full correspondence with Dirac's prediction [@Dirac], is responsible for the phenomenon observed. Earlier, ferromagnetic spin wave resonance had been registered on the same samples [@Ertchak_J_Physics_Condensed_Matter]. The values of the splitting parameters $\mathfrak{A}^E$ and $\mathfrak{A}^H$ in the ferroelectric and ferromagnetic spin wave resonance spectra allowed us to find the ratio $J_{E }/J_{H}$ of exchange constants in the range of $(1.2 - 1.6)10^{4}$. This result seems to be direct proof that the charge, that is, the function which is invariant under gauge transformations, is a two-component function. The ratio of the imaginary $e_{H} \equiv g$ to the real $e_{E}\equiv e $ components of the complex charge is $\frac{g}{e} \sim \sqrt{J_{E }/J_{H}} \approx (1.1 - 1.3)10^{2}$. At the same time, in classical and in quantum theory the dual symmetry of the Maxwell equations is not taken into consideration. Moreover, the known solutions of the Maxwell equations do not reveal this symmetry even for the free EM-field, see for instance [@Scully], although it is understandable that the general solutions have to possess the same symmetry. The aim of this work is to find the cause of the symmetry difference between the Maxwell equations and their solutions, and to propose correct field functions for the classical and quantized EM-field.
Consider the EM-field in a rectangular cavity. Suppose also that the field is linearly polarized. Then the electric component can be represented in the form $$E_x(z,t) = \sum_{\alpha=1}^{\infty}A_{\alpha}q_{\alpha}(t)\sin(k_{\alpha}z),$$ where $q_{\alpha}(t)$ is the amplitude of the $\alpha$-th normal mode of the cavity, $\alpha \in N$, $k_{\alpha} = \alpha\pi/L$, $A_{\alpha}=\sqrt{2 \nu_{\alpha}^2m_{\alpha}/(V\epsilon_0)}$, $\nu_{\alpha} = \alpha\pi c/L$, $L$ is the cavity length along the z-axis, $V$ is the cavity volume, and $m_{\alpha}$ is a parameter introduced to obtain the analogy with the mechanical harmonic oscillator. Using the equation $$\epsilon_0\partial_t \vec{E}(z,t) = \left[ \nabla\times\vec{H}(z,t)\right]$$ we obtain, under the assumption of a transversal EM-field, the expression for the magnetic field $${H}_y(z,t) = \sum_{\alpha=1}^{\infty}\epsilon_0\frac{A_{\alpha}}{k_{\alpha}}\frac{dq_{\alpha}}{dt}\cos(k_{\alpha}z) + H_{y0}(t),$$ where $H_{y0} = \sum_{\alpha=1}^{\infty} f_{\alpha}(t)$ and $\{f_{\alpha}(t)\}$, $\alpha \in N$, is a set of arbitrary functions of time. Usually the partial solution is used in which the function $H_{y0}(t)$ is identically zero. The field Hamiltonian $\mathcal{H}^{[1]}(t)$ corresponding to this partial solution is $$\begin{split} &\mathcal{H}^{[1]}(t) = \frac{1}{2}\iiint\limits_{(V)}\left[\epsilon_0E_x^2(z,t)+\mu_0H_y^2(z,t)\right]dxdydz\\ &= \frac{1}{2}\sum_{\alpha=1}^{\infty}\left[m_{\alpha}\nu_{\alpha}^2q_{\alpha}^2(t) + \frac{p_{\alpha}^2(t)}{m_{\alpha}} \right], \end{split}$$ where $$p_{\alpha} = m_{\alpha} \frac{dq_{\alpha}(t)}{dt}.$$ Then, using the equation $$\left[ \nabla\times\vec{E}\right] = -\frac{\partial \vec{B}}{\partial t} = -\mu_0 \frac{\partial \vec{H}}{\partial t},$$ it is easy to find the field functions $\{q_{\alpha}(t)\}$.
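As a consistency aid (not part of the original derivation), the single-mode form of this expansion can be checked symbolically: with $q_\alpha(t)=\cos(\nu_\alpha t)$, an illustrative real solution of the mode equation obtained below, the pair $E_x$, $H_y$ satisfies both curl equations exactly. The single-mode restriction and the symbol names are assumptions made only for this check.

```python
import sympy as sp

# Single-mode symbolic check of the cavity expansion (illustrative):
# E_x = A q(t) sin(k z), H_y = (eps0 A / k) q'(t) cos(k z),
# with q(t) = cos(nu t) and nu = k / sqrt(mu0 eps0).
z, t = sp.symbols('z t')
A, k, eps0, mu0 = sp.symbols('A k epsilon_0 mu_0', positive=True)
nu = k / sp.sqrt(mu0 * eps0)
q = sp.cos(nu * t)                     # one real solution of the mode equation

E_x = A * q * sp.sin(k * z)
H_y = eps0 * A / k * sp.diff(q, t) * sp.cos(k * z)

# Ampere-Maxwell law: eps0 dE_x/dt = (curl H)_x = -dH_y/dz
ampere = sp.simplify(eps0 * sp.diff(E_x, t) + sp.diff(H_y, z))
# Faraday law: (curl E)_y = dE_x/dz = -mu0 dH_y/dt
faraday = sp.simplify(sp.diff(E_x, z) + mu0 * sp.diff(H_y, t))
# Both residuals vanish identically.
```

Both residuals reduce to zero only because $\nu^2 = k^2/(\mu_0\epsilon_0)$, which is exactly the mode equation derived next.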
They satisfy the differential equation $$\frac{d^2q_{\alpha}(t)}{dt^2}+\frac{k_{\alpha}^2}{\mu_0\epsilon_0}q_{\alpha}(t)=0.$$ Consequently, taking into account $\mu_0\epsilon_0 = 1/c^2$, we have $$q_{\alpha}(t) = C_1e^{i\nu_{\alpha}t}+C_2e^{-i\nu_{\alpha}t}.$$ Thus, the real free Maxwell field equations result in a situation well known in the theory of differential equations: the solutions are complex-valued functions. This means that in general the field functions for the free Maxwell field span a complex space. From the general expression for the field $\vec{H}(\vec{r},t)$ $$\vec{H}(\vec{r},t) = \sum_{\alpha=1}^{\infty}\left[A_{\alpha}\frac{\epsilon_0}{k_{\alpha}}\frac{dq_{\alpha}(t)}{dt}\cos(k_{\alpha}z) + f_{\alpha}(t)\right]\vec{e}_y$$ it is easy to obtain the differential equation for $f_{\alpha}(t)$ $$\begin{split} &\frac{d f_{\alpha}(t)}{dt} + A_{\alpha}\frac{\epsilon_0}{k_{\alpha}}\frac{\partial^2q_{\alpha}(t)}{\partial t^2}\cos(k_{\alpha}z) \\ &- \frac {1}{\mu_0} A_{\alpha}k_{\alpha}q_{\alpha}(t)\cos(k_{\alpha}z) = 0. 
\end{split}$$ In the general case its solution is $$f_{\alpha}(t) = \int A_{\alpha} \cos(k_{\alpha}z)\left[q_{\alpha}(t)\frac{k_{\alpha}}{\mu_0}-\frac{d^2q_{\alpha}(t)}{dt^2}\frac{\epsilon_0}{k_{\alpha}}\right] dt +C_{\alpha}.$$ Then we have another solution of the Maxwell equations $$\vec{H}(\vec{r},t) = \frac{1}{\mu_0}\left\{\sum_{\alpha=1}^{\infty}k_{\alpha}A_{\alpha} \cos(k_{\alpha}z) q_{\alpha}'(t)\right\}\vec{e}_y,$$ $$\vec{E}(\vec{r},t) = \left\{\sum_{\alpha=1}^{\infty}A_{\alpha}\frac{dq_{\alpha}'(t)}{dt}\sin(k_{\alpha}z)\right\}\vec{e}_x,$$ where $q_{\alpha}'(t) = \int q_{\alpha}(t) dt + C_{\alpha}'$. Then the Hamiltonian $\mathcal{H}^{[2]}(t)$ is $$\mathcal{H}^{[2]}(t) = \frac{1}{2}\sum_{\alpha=1}^{\infty}\left[m_{\alpha} \nu_{\alpha}^4 q{'}_{\alpha}^2(t) + {m_{\alpha}\nu_{\alpha}^2 \left(\frac{dq_{\alpha}'(t)}{dt}\right)^2} \right].$$ Let us introduce the new variables $$\begin{split} &q{''}_{\alpha}(t) = \nu_{\alpha}q{'}_{\alpha}(t), \\ &p{''}_{\alpha}(t) = m_{\alpha}\nu_{\alpha}\frac{dq_{\alpha}'(t)}{dt}. \end{split}$$ Then $$\mathcal{H}^{[2]}(t) = \frac{1}{2}\sum_{\alpha=1}^{\infty}\left[m_{\alpha}\nu_{\alpha}^2q{''}_{\alpha}^2(t) + \frac{p{''}_{\alpha}^2(t)}{m_{\alpha}} \right].$$ We further use the standard procedure of field quantization. For the first partial solution we have $$\begin{split} &\left[\hat {p}_{\alpha}(t) , \hat {q}_{\beta}(t)\right] = i\hbar\delta_{{\alpha}\beta}\\ &\left[\hat {q}_{\alpha}(t) , \hat {q}_{\beta}(t)\right] = \left[\hat {p}_{\alpha}(t) , \hat {p}_{\beta}(t)\right] = 0, \end{split}$$ where $\alpha, \beta \in N$. 
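The complex general solution above can also be checked numerically. The sketch below (illustrative units with $c=1$ and a hypothetical wavenumber, not the actual cavity parameters) integrates the mode equation and compares the result with the real combination $\cos(\nu_\alpha t)$ selected by the initial conditions $q(0)=1$, $\dot q(0)=0$, i.e. $C_1=C_2=1/2$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check of the mode equation q'' + nu^2 q = 0 (units with c = 1,
# hypothetical wavenumber; not the actual cavity parameters).
c = 1.0
k = 2.0
nu = c * k

def rhs(t, y):
    q, qdot = y
    return [qdot, -nu**2 * q]

# q(0) = 1, q'(0) = 0 selects C1 = C2 = 1/2, i.e. q(t) = cos(nu t).
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 10.0, 200)
max_err = np.max(np.abs(sol.sol(t)[0] - np.cos(nu * t)))
```

The numerical trajectory agrees with the closed-form solution to within the integration tolerance.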
Introducing the operators $\hat{a}_{\alpha}(t)$ and $\hat{a}^{+}_{\alpha}(t)$ $$\begin{split} &\hat{a}_{\alpha}(t) = \frac{1}{ \sqrt{ 2 \hbar m_{\alpha} \nu_{\alpha}}} \left[ m_{\alpha} \nu_{\alpha}\hat {q}_{\alpha}(t) + i \hat {p}_{\alpha}(t)\right]\\ &\hat{a}^{+}_{\alpha}(t) = \frac{1}{ \sqrt{ 2 \hbar m_{\alpha} \nu_{\alpha}}} \left[ m_{\alpha} \nu_{\alpha}\hat {q}_{\alpha}(t) - i \hat {p}_{\alpha}(t)\right], \end{split}$$ we have for the operators of the canonical variables $$\begin{split} &\hat {q}_{\alpha}(t) = \sqrt{\frac{\hbar}{2 m_{\alpha} \nu_{\alpha}}} \left[\hat{a}^{+}_{\alpha}(t) + \hat{a}_{\alpha}(t)\right]\\ &\hat {p}_{\alpha}(t) = i \sqrt{\frac{\hbar m_{\alpha} \nu_{\alpha}}{2}} \left[\hat{a}^{+}_{\alpha}(t) - \hat{a}_{\alpha}(t)\right]. \end{split}$$ Then the field function operators are $$\hat{\vec{E}}(\vec{r},t) = \{\sum_{\alpha=1}^{\infty} \sqrt{\frac{\hbar \nu_{\alpha}}{V\epsilon_0}} \left[\hat{a}^{+}_{\alpha}(t) + \hat{a}_{\alpha}(t)\right] \sin(k_{\alpha} z)\} \vec{e}_x,$$ $$\hat{\vec{H}}(\vec{r},t) = ic\epsilon_0 \{\sum_{\alpha=1}^{\infty} \sqrt{\frac{\hbar \nu_{\alpha}}{V\epsilon_0}} \left[\hat{a}^{+}_{\alpha}(t) - \hat{a}_{\alpha}(t)\right] \cos(k_{\alpha} z)\} \vec{e}_y.$$ For the second partial solution, corresponding to the Hamiltonian $\mathcal{H}^{[2]}(t)$, we have $$\begin{split} &\left[\hat{p}{''}_{\alpha}(t) , \hat {q}{''}_{\beta}(t)\right] = i\hbar\delta_{{\alpha}\beta}\\ &\left[\hat {q}{''}_{\alpha}(t) , \hat {q}{''}_{\beta}(t)\right] = \left[\hat {p}{''}_{\alpha}(t) , \hat {p}{''}_{\beta}(t)\right] = 0, \end{split}$$ $\alpha, \beta \in N$. 
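The ladder-operator construction can be illustrated in a truncated Fock basis. The sketch below (with the illustrative choice $\hbar=m_\alpha=\nu_\alpha=1$; it verifies the canonical relation in the convention $[\hat q, \hat p]=i\hbar$) rebuilds $\hat q$ and $\hat p$ from finite ladder matrices; truncation spoils the commutator only in the highest Fock state, so only the upper block is tested.

```python
import numpy as np

# Truncated Fock-space check of the ladder-operator construction
# (hbar = m_alpha = nu_alpha = 1, illustrative values).
N = 20
hbar, m, nu = 1.0, 1.0, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T                              # creation operator

# q and p built exactly as in the text's inverse relations.
q = np.sqrt(hbar / (2.0 * m * nu)) * (ad + a)
p = 1j * np.sqrt(hbar * m * nu / 2.0) * (ad - a)

# Canonical commutator [q, p] = i*hbar holds exactly except in the
# highest Fock state, which the truncation corrupts.
comm = q @ p - p @ q
err = np.max(np.abs(comm[:N - 1, :N - 1] - 1j * hbar * np.eye(N - 1)))
```

The residual `err` is at the level of floating-point round-off, confirming that the stated inverse relations are consistent with the $1/\sqrt{2\hbar m_\alpha\nu_\alpha}$ normalization.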
The operators $\hat{a}{''}_{\alpha}(t)$, $\hat{a}{''}^{+}_{\alpha}(t)$ are introduced analogously $$\begin{split} &\hat{a}{''}_{\alpha}(t) = \frac{1}{ \sqrt{ 2 \hbar m_{\alpha} \nu_{\alpha}}} \left[ m_{\alpha} \nu_{\alpha}\hat {q}{''}_{\alpha}(t) + i \hat {p}{''}_{\alpha}(t)\right]\\ &\hat{a}{''}^{+}_{\alpha}(t) = \frac{1}{ \sqrt{ 2 \hbar m_{\alpha} \nu_{\alpha}}} \left[ m_{\alpha} \nu_{\alpha}\hat {q}{''}_{\alpha}(t) - i \hat {p}{''}_{\alpha}(t)\right]. \end{split}$$ The relationships for the canonical variables are $$\begin{split} &\hat {q}{''}_{\alpha}(t) = \sqrt{\frac{\hbar}{2 m_{\alpha} \nu_{\alpha}}} \left[\hat{a}{''}^{+}_{\alpha}(t) + \hat{a}{''}_{\alpha}(t)\right]\\ &\hat {p}{''}_{\alpha}(t) = i \sqrt{\frac{\hbar m_{\alpha} \nu_{\alpha}}{2}} \left[\hat{a}{''}^{+}_{\alpha}(t) - \hat{a}{''}_{\alpha}(t)\right]. \end{split}$$ For the field function operators we obtain $$\hat{\vec{E}}^{[2]}(\vec{r},t) = i \{\sum_{\alpha=1}^{\infty} \sqrt{\frac{\hbar \nu_{\alpha}}{V\epsilon_0}} \left[\hat{a}{''}^{+}_{\alpha}(t) - \hat{a}{''}_{\alpha}(t)\right] \sin(k_{\alpha} z)\} \vec{e}_x,$$ $$\begin{split} \hat{\vec{H}}^{[2]}(\vec{r},t) = \frac{1}{\mu_0 c} \{\sum_{\alpha=1}^{\infty} \sqrt{\frac{\hbar \nu_{\alpha}}{V\epsilon_0}} \left[\hat{a}{''}^{+}_{\alpha}(t) + \hat{a}{''}_{\alpha}(t)\right] \cos(k_{\alpha} z)\} \vec{e}_y. \end{split}$$ Let us denote $\sqrt{\frac{\hbar \nu_{\alpha}}{V\epsilon_0}} = E_0$. 
In accordance with the definition of complex quantities we have $$(\vec{E}(\vec{r},t), \vec{E}^{[2]}(\vec{r},t)) \rightarrow \vec{E}(\vec{r},t) + i \vec{E}^{[2]}(\vec{r},t) = \vec{E}(\vec{r},t).$$ Consequently, the correct field operators for the quantized EM-field are $$\begin{split} \hat{\vec{E}}(\vec{r},t) = \{\sum_{\alpha=1}^{\infty} E_0 \{\left[\hat{a}^{+}_{\alpha}(t) + \hat{a}_{\alpha}(t)\right]\\ + \left[\hat{a}{''}_{\alpha}(t) - \hat{a}{''}^{+}_{\alpha}(t)\right]\} \sin(k_{\alpha} z)\} \vec{e}_x, \end{split}$$ and, since $$(\vec{H}^{[2]}(\vec{r},t), \vec{H}(\vec{r},t)) \rightarrow \vec{H}^{[2]}(\vec{r},t) + i \vec{H}(\vec{r},t) = \vec{H}(\vec{r},t),$$ $$\begin{split} \hat{\vec{H}}(\vec{r},t) = \{\sum_{\alpha=1}^{\infty} E_0 \{\frac{1}{\mu_0 c} \left[\hat{a}{''}_{\alpha}(t) + \hat{a}{''}^{+}_{\alpha}(t)\right]\\ + c \epsilon_0 \left[\hat{a}_{\alpha}(t) - \hat{a}^{+}_{\alpha}(t)\right]\} \cos(k_{\alpha} z) + C_{\alpha}\hat{e}\} \vec{e}_y. \end{split}$$ Maxwell J C, A Treatise on Electricity and Magnetism, Oxford, Clarendon Press, V.1, 1873, 438, V.2, 1873, 464 Heaviside O, Phil.Trans.Roy.Soc.A, **183** (1893) 423-430 Larmor J, Collected Papers, London, 1928 Rainich G Y, Trans.Am.Math.Soc., **27** (1925) 106 Dirac P A M, Proceedings of the Royal Society **117A** (1928) 610-624 Yearchuck D, Yerchak Y, Red’kov V, Doklady NANB **51**, N 5 (2007) 57-64 Yearchuck D, Yerchak Y, Alexandrov A, Phys.Lett.A **373**, N 4 (2009) 489-495 Yearchuck D, Yerchak Y, Kirilenko A, Popechits V, Doklady NANB **52**, N 1 (2008) 48-53 Ertchak D P, Kudryavtsev Yu P, Guseva M B, Alexandrov A F et al, J.Physics: Condensed Matter **11**, N3 (1999) 855-870 Scully M O, Zubairy M S, Quantum Optics, Cambridge University Press, 1997, 650
--- abstract: 'Most directly imaged giant exoplanets are fainter than brown dwarfs with similar spectra. To explain their relative underluminosity, unusually cloudy atmospheres have been proposed. However, with multiple parameters varying between any two objects, it remained difficult to observationally test this idea. We present a new method, sensitive time-resolved Hubble Space Telescope near-infrared spectroscopy, to study two rotating L/T transition brown dwarfs (2M2139 and SIMP0136). The observations provide spatially and spectrally resolved mapping of the cloud decks of the brown dwarfs. The data allow the study of cloud structure variations while other parameters are unchanged. We find that both brown dwarfs display variations of identical nature: J- and H-band brightness variations with minimal color and spectral changes. Our light curve models show that even the simplest surface brightness distributions require at least three elliptical spots. We show that for each source the spectral changes can be reproduced with a linear combination of only two different spectra, i.e. the entire surface is covered by two distinct types of regions. Modeling the color changes and spectral variations together reveals patchy cloud covers consisting of a spatially heterogeneous mix of low-brightness, low-temperature thick clouds and brighter, thin and warm clouds. We show that the same thick cloud patches seen in our varying brown dwarf targets, if extended to the entire photosphere, predict near-infrared colors/magnitudes matching the range occupied by the directly imaged exoplanets that are cooler and less luminous than brown dwarfs with similar spectral types. This supports the models in which thick clouds are responsible for the near-infrared properties of these “underluminous” exoplanets.' 
author: - Dániel Apai - Jacqueline Radigan - Esther Buenzli - Adam Burrows - Iain Neill Reid - Ray Jayawardhana bibliography: - 'bdrefs.bib' title: 'HST Spectral Mapping of L/T Transition Brown Dwarfs Reveals Cloud Thickness Variations' --- Introduction ============ With masses between cool stars and giant exoplanets and effective temperatures comparable to those of directly imaged exoplanets [e.g. @Chauvin2005; @Marois2008; @Lafreniere2008; @Marois2010; @Lagrange2010; @Skemer2011], L and T-type brown dwarfs provide the critical reference points for understanding the atmospheres of exoplanets [e.g. @Burrows2001; @Kirkpatrick2005; @Marley2007]. Because the observations of brown dwarfs are not limited by the extreme star-to-planet contrasts exoplanet observations pose, much more detailed studies can be carried out. In particular, brown dwarfs provide an opportunity to solve the puzzling observation that most directly imaged giant planets appear to be redder and up to 4–10 times fainter than typical brown dwarfs with the same spectral type [e.g. @Barman2011_HR8799; @Skemer2012], often referred to as the [[*under-luminosity problem*]{}. Particularly interesting well-studied examples are seen in Ross 458C [@Burgasser2010; @Burningham2011; @Morley2012] and 2M1207b [@Chauvin2005; @Mohanty2007; @Patience2010; @Barman2011_2M1207; @Skemer2011]. Although an obscuring edge-on disk with grey extinction has been proposed as a solution, in the light of additional observations and analysis this solution appears very unlikely [@Skemer2011]. More likely is that the fainter and redder near-infrared emission is due to a property intrinsic to the atmospheres of these exoplanets. This possibility is further supported by the fact that similar underluminosity has also been reported for a handful of field brown dwarfs (e.g. @Metchev2006, @Luhman2007, @Looper2008) and young brown dwarfs in clusters [@Lucas2001; @Allers2006]. 
]{} The different models proposed to explain the lower near-infrared luminosity of exoplanets and [brown dwarfs]{} invoke differences in elemental abundances, surface gravity differences, evolutionary state, chemical equilibrium/non-equilibrium, or cloud structure, or some combination of these. However, because several of these parameters may change between any two brown [dwarfs or exoplanets]{}, it remained difficult to isolate the effect of these variables. Possible differences in the structure of condensate clouds have, in particular, received much attention in models (e.g. @AckermanMarley2001 [@Burgasser2002; @Skemer2012; @Barman2011_HR8799; @Barman2011_2M1207; @Burrows2006; @Madhu2011; @Marley2010]) and progress has been made in spectroscopic modeling to separate or constrain the impact of cloud structure from other parameters [e.g. @Cruz2007; @Folkes2007; @Looper2008; @Burgasser2008; @Radigan2008; @Cushing2010]. Yet, this problem remains a challenging aspect of ultracool atmospheres and one which will benefit from observational data probing cloud properties more directly. We present here high-cadence, high-precision time-resolved HST spectroscopy of two rotating early T-type brown dwarfs that reveals highly heterogeneous cloud covers across their [photospheres]{}. These observations allow us to separate the effects of different cloud structures from variations in surface gravity, elemental abundances, age and evolutionary state. We show that the observed variations are well reproduced by models with large cloud scale height variations (thin and thick clouds) across the surfaces. When thick clouds rotate into the visible hemispheres, both targets fade in the near-infrared and display changes consistent with the colors and brightness of “underluminous” directly imaged exoplanets. The similarity of the changes observed provides strong support to models that invoke atmospheres with high dust scale heights to explain the photometry of directly imaged exoplanets. 
Observations and Data Reduction =============================== Observations and Targets ------------------------ We used the Hubble Space Telescope (HST) to obtain near-infrared grism spectroscopy of two L/T transition brown dwarfs as part of a larger campaign (Programs 12314, 12551, PI: Apai). The data were acquired with the sensitive Wide Field Camera 3 instrument [@MacKenty2010] by obtaining 256$\times$256 pixel images of the targets’ spectra dispersed by the G141 grism in six consecutive HST orbits. [Table \[ObsLog\] provides a log of the observations. In short, we obtained 660 spectra for 2M2139, each with 22.34 s integration time, and 495 spectra for SIMP0136, each with 22.34 s integration time. In the analysis that follows we averaged sets of 10 spectra for 2M2139 and sets of 5 spectra for SIMP0136, giving us an effective temporal resolution of 223 s for 2M2139 and 112 s for SIMP0136. Our targets are relatively bright (J=13.5 mag for SIMP0136 and J=15.3 mag for 2M2139), resulting in very high signal-to-noise spectra (see Sect. \[Uncertainties\] for a detailed assessment).]{} At the beginning of each orbit a direct image was obtained to accurately determine the position of the source on the detector, required for precise wavelength calibration. Cross-correlation of the images and centroid positions on-source revealed positional differences less than 0.1 pixel (0.01") between images taken at different orbits. No dithering was applied, in order to stabilize source positions and improve the accuracy of relative measurements.

[lccccccc]{} Target Name & Date & Time Per Int. & \# of Int. / Orbit & \#Orbits & Total \#Spectra & Bin Size & Noise per Int.\
2M2139 & 2010/10/21 & 22.34 s & 11 & 6 & 660 & 10 & 0.27%\
SIMP0136 & 2011/10/10 & 22.34 s & 16 or 17 & 6 & 495 & 5 & 0.11%\

![Systematic effects observed and corrected in the WFC3 data: flux loss (a) and ramp (b). Both effects are well fitted and removed by simple analytical functions. 
In (a) sources are shown with counts $<$0.6 (blue) or $>$0.6 (red) of the maximum count in the spectrum. In (b) blue symbols are the ramp in orbits 2-6 and red symbols are the ramp in orbit 1. Here the source is a non-variable field star. \[FigCorrections\]](Fig_S1_Corrections.pdf) The observations presented here focus on two L/T transition brown dwarfs. Target 2M2139 (or 2MASS J21392676+0220226) has been classified [as a T0 dwarf based on its red optical spectrum [@Reid2008] and as a peculiar T2.5$\pm$1 dwarf based on a 0.8–2.5 $\mu$m spectrum [@Burgasser2006]. More recently [@Burgasser2010] found that the spectrum of 2M2139 is better fit by a composite spectrum of an earlier (L8.5) and a later type (T3.5) dwarf than by any single template brown dwarf. It was recently found to show impressive periodic photometric variability with a peak-to-peak amplitude of $\simeq$27% [@Radigan2012]. Ground-based photometry of the variable 2M2139 argues for a period of $7.721\pm0.005$ hr, but also leaves open the possibility of a two times longer period [@Radigan2012]. Recent observations by @Khandrika2013 confirm the variability and argue against a double-peak period. Based on JHK light curves and a fit to the spectrum by [@Burgasser2006], these authors argue that cloud thickness variations are likely responsible for the photometric variations seen in 2M2139.]{} Target SIMP0136 (2MASS J0136565+093347), another T2 brown dwarf, has also been reported to be variable [@Artigau2009]. These targets are the first L/T transition sources observed in our two ongoing HST surveys; further results, including coordinated HST/Spitzer observations and sources with later spectral types, are discussed in @Buenzli2012 and other upcoming papers. Data Reduction -------------- Our reduction pipeline combined the standard aXe pipeline with a custom-made IDL script, which included corrections for different low-level detector systematics, critical for highly precise relative spectroscopy. 
We used two-dimensional spectral images from the standard WFC3 pipeline, which were bias and dark current-subtracted, and corrected for non-linearity and gain. Bad pixels were marked by a corresponding flag in the data quality plane. In order to correct for detector systematics we started with the [.ima]{} files, which contain all non-destructive sub-reads of each exposure, rather than the combined [.flt]{} images. Because our observations were not taken with a dithering pattern, we did not use the standard pipeline’s [MultiDrizzle]{} routine. We first extracted all sub-reads, discarded the first two zero-reads (0 s and 0.27 s), and compared the count rates of the individual pixels over the sub-reads of an exposure to identify and remove outliers by replacing them with the median value. We also corrected flagged bad pixels by interpolating over adjacent good pixels in the same row. We identified a systematic nearly linear flux loss from the first to last subread of an exposure, with a steeper slope for the brighter pixels of a spectrum. We empirically determined a slope of $-$0.2% per 22.34 s (one subread) for pixels of brightness $>$60% of the maximum in the spectrum, and only $-$0.001% for pixels below that level (Fig. \[FigCorrections\]). [The uncertainties of the slopes are negligible, but a significant ‘zig-zag’ pattern is present with a scatter of $\sim0.1\%$. The standard deviation of the normalized fluxes measured at the same exposure but in different orbits is $\leq$0.2%. ]{} This relation held for all of our objects, regardless of the absolute brightness of the spectra. Only the first subread had systematically higher flux than expected from the linear relation and had to be corrected individually for each source. The spectral extraction was executed with aXe [@axe2011]. First, the sub-array images were embedded in larger, full frame-sized images to allow aXe to use standard instrument calibration frames, which are full-frame sized. 
The data quality flag was used to exclude the extra surrounding pixels of the extended frame from the actual data analysis. As a first step, the aXe pipeline subtracted a scaled master sky frame, with the scale factor determined individually for each image’s background level (i.e. excluding the observed spectra). Then, the location of the target spectrum and the wavelength calibration were determined from the direct (non-grism) images. For the source 2M2139, only one direct image was obtained at the beginning of the first orbit and this image was used for all subsequent spectra. For SIMP0136, a separate direct image was taken at the beginning of each orbit and used for all spectra in the given orbit. We fixed the spectral extraction width within each orbit, but allowed it to vary between subsequent orbits. The extraction width was determined by summing up all spectra in a single orbit and then collapsing the sum into a one-dimensional vertical profile. The extraction width was then chosen as three times the full-width-half-maximum (FWHM) of a Gaussian fit to that profile. This resulted in an extraction width with mean and standard deviation for the six orbits of 6.50$\pm$0.02 px for SIMP0136 and 6.56$\pm$0.01 px for 2M2139. In the final step the spectra from each image were flat fielded, extracted, and collapsed using the standard pixel extraction tables of aXe and flux-calibrated with the latest instrument sensitivity curves. This led to one-dimensional spectra with a spectral resolution of R=$\lambda/\Delta\lambda\simeq130$ and highly reliable data over the wavelength range from 1.1 to 1.7 $\mu$m. We calculated the uncertainties as composed of photon shot noise, read-noise, and sky noise. A second systematic detector effect became evident at this point. During each orbit, there was a small increase in flux in the form of an exponential ramp (Fig. \[FigCorrections\]). 
Because the intrinsic variability of the sources prevented a direct quantification of this effect, we used the partial spectrum of a bright star visible in one of our target fields. The ramp was found to be independent of object brightness. We integrated the stellar spectrum over the full wavelength range and removed exposures where saturation had occurred. We fitted the exponential ramp $C\times(1-Ae^{-t/T})$ to the light curve, where $t$ is the time since the beginning of an orbit. The ramp was very similar for all the orbits between the second and sixth, which we therefore averaged before fitting, but it was a different, stronger effect in the first orbit of a visit, which was fitted and corrected individually. In the final step we corrected the spectra of all our objects by dividing the time-dependent data by the value of the analytical ramp function sampled at the times of the sub-reads. Fig. \[FigCorrections\] shows the ramp and its [best-fit parameters. The latter are the following: for orbit 1 $A=0.0185\pm0.0013$, $T=5.95\pm0.88$, and $C=1.0005\pm0.0005$, while for orbits 2-6: $A=0.0077\pm0.0009$, $T=16.65\pm6.02$, $C=1.0025\pm0.0009$. Note that these uncertainties include the propagated uncertainties from the slope correction described above. The combined uncertainties for the ramp correction lead to a 1$\sigma$ uncertainty of $\simeq$0.15%. ]{} Uncertainties {#Uncertainties} ------------- In the following we briefly discuss the uncertainties of our measurements. We distinguish three different uncertainties that affect our data: random (white) noise emerging from photon noise and readout noise, systematic wavelength-dependent trends, and systematic time-dependent trends. As explained below, because our targets are very bright by HST’s standards, the photon noise is very small (typically well below 0.1%) and the systematic wavelength-dependent trends are negligible; the residual time-dependent trends therefore dominate the noise in our data. 
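The ramp removal described above amounts to a three-parameter fit of $C\times(1-Ae^{-t/T})$ followed by division. A minimal sketch on synthetic data, generated with the orbit-1 parameters quoted in the text plus Gaussian noise (the noise level and time sampling are assumptions, not the actual WFC3 measurements), is:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic orbit light curve generated with the orbit-1 ramp parameters
# quoted in the text (A=0.0185, T=5.95, C=1.0005); noise level illustrative.
rng = np.random.default_rng(0)

def ramp(t, A, T, C):
    return C * (1.0 - A * np.exp(-t / T))

t = np.linspace(0.0, 45.0, 100)              # minutes since the orbit start
truth = (0.0185, 5.95, 1.0005)
flux = ramp(t, *truth) + rng.normal(0.0, 2e-4, t.size)

popt, pcov = curve_fit(ramp, t, flux, p0=(0.01, 5.0, 1.0))
corrected = flux / ramp(t, *popt)            # ramp-corrected light curve
```

The fitted parameters recover the input values, and the divided-out light curve is flat at the injected noise level.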
We characterize the amplitude of each of these three components based on our data. ### White noise Random (white) noise is present in our data due to the combination of photon noise, residual dark noise, and read-out noise. While all three components are present at very low levels, often negligible for practical purposes, we use our data to measure their combined amplitude. To do this we extract pixel-to-pixel variations, helped by the fact that our temporal resolution ($<$1 minute) significantly exceeds the timescale on which the astrophysical changes occur. We started from the binned spectral cubes containing 66 spectra (each with 10 binned readouts) for 2M2139 and 99 spectra (each with 5 binned samples) for SIMP0136. To measure the white noise we removed the correlated components by first subtracting a 2-pixel-smoothed version of the data, leaving only variations smaller than 2 resolution elements. In the low-resolution HST data most of the changes even on this small scale are correlated physical changes (i.e. high-frequency residuals of actual spectral features). These features are the same in every spectrum and can be removed via the subtraction of the median spectrum of the spectral cube. This procedure removed correlated changes in wavelength and in time, leaving us with white noise-dominated data. To measure the white noise amplitude we calculated the standard deviation of the data in each spectrum and took the median of these values. We find that for 2M2139 the 1$\sigma$ noise per resolution element is 0.27%, while for the brighter SIMP0136 the 1$\sigma$ noise per resolution element is 0.11%. We point out that most of the conclusions in this paper are drawn from brightness variations measured in broad photometric bands, further decreasing the importance of white noise. For example, the J-band light curves in our spectra typically contain 55 data points, leading to a white noise contribution of less than 0.04%. 
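The white-noise measurement can be sketched on a synthetic spectral cube (the template spectrum, cube dimensions, and injected 0.27% noise are all illustrative assumptions). A simple neighbor average stands in for the 2-pixel smoothing; since the resulting high-pass filter rescales white noise by $1/\sqrt{2}$, that factor is undone at the end:

```python
import numpy as np

# Synthetic spectral cube: fixed template spectrum plus 0.27% white noise
# (all values illustrative, not the actual HST data).
rng = np.random.default_rng(1)
n_t, n_w = 66, 200
sigma = 0.0027
template = 1.0 + 0.3 * np.sin(np.linspace(0.0, 6.0, n_w))
cube = template[None, :] + rng.normal(0.0, sigma, (n_t, n_w))

# High-pass filter: subtract the average of each pixel and its neighbor
# (stand-in for the 2-pixel smoothing); this rescales white noise by 1/sqrt(2).
smooth = 0.5 * (cube + np.roll(cube, 1, axis=1))
resid = cube - smooth
resid -= np.median(resid, axis=0)      # remove the shared (template) residue

# Undo the 1/sqrt(2) rescaling to recover the per-pixel noise amplitude.
noise_est = np.sqrt(2.0) * np.median(np.std(resid, axis=1))
```

The recovered `noise_est` is close to the injected 0.27% per resolution element.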
### Time-dependent trends Because we study temporal changes in our targets it is important to assess the level of time-dependent trends (red noise) in our data. Any red noise present in the data would come from time-dependent sensitivity variations, potentially introduced either by drifts in the positions of the sources or by sensitivity changes in the instrument. Our measurements of the positions of the sources show that the targets have remained in precisely the same positions, thus the contribution from the former noise factor cannot be significant. The second noise source, changes in the instrumental sensitivity, however, must be characterized through our data. Due to observing efficiency considerations our observations were taken in a subarray mode, which has a relatively small field of view and thus does not contain any other sources of comparable brightness to the targets. Therefore, we do not have other non-varying sources in the same datasets that could be used to measure time-dependent trends in our data. Instead, we use a third brown dwarf target observed identically to our targets to assess time-dependent trends. This third target (in the following 2M0915) did not show variability above the 0.6% level and thus provides us with a good reference for measuring the temporal stability of the observations. 2M0915 is a resolved binary brown dwarf [L7+L7, @Reid2006], allowing us to measure flux levels for the two sources simultaneously. We point out that although the wings of the two spectra show some overlap, this overlap should not affect the photometric stability of the measurements and the following assessment of the systematic uncertainties. We reduced the 2M0915 data set in the same way as the data from the other two sources, with the exception that a larger aperture was adopted to include both of the slightly overlapping sources. The data reduction was repeated twice, with the aperture once centered on component A and once centered on component B. 
For the two components we measured a mean standard deviation of points [*within*]{} the same orbit of 0.16% and 0.13%, fully consistent with the combined uncertainties of the correction of the systematic effects and the white noise components. The fact that this standard deviation is not larger demonstrates that there is no measurable systematic trend left uncorrected on timescales of an orbit or less. To assess the photometric stability of HST over timescales of multiple orbits we determined the scatter of the mean values in each orbit. Thus, we calculated the standard deviation of the mean values of the fluxes measured in each of the six orbits. We found that these values were 0.25% and 0.13% for the two components; these values are our estimates for the uncertainties of the photometric stability of HST over six orbits. Thus, based on the measurements of the noise properties we conclude the following: 1) the random (white) noise level is 0.3% per resolution element and integration; 2) the random noise of band-integrated light curves is less than 0.04% (practically negligible); 3) the photometric stability between orbits is about 0.25% or less; 4) photometric trends within an orbit ($<$50 minutes) amount to 0.16% or less. Therefore, although the differences discussed in our paper are small (typically at the few percent level), they are all detections at high significance levels. Results ======= ![Spectra at the faintest and brightest stages of the two brown dwarfs show prominent water, potassium, and methane absorption features with similar depths. The ratio of the minimum over maximum spectra (minor panels on left) shows variations with weak wavelength-dependence in the continuum and in the potassium, sodium, and methane features, but demonstrates lower-amplitude variations in the 1.4 $\mu$m water band. The period-folded J-band light curves (right) reveal variations in the [surface brightness]{} distributions of these two targets. 
Red and black colors show data from the first and sixth orbit for 2M2139, which perfectly overlap if a 0.5% flux scaling is allowed, [consistent with the photometric stability on a 2$\sigma$ level]{}. In contrast, SIMP0136 displays light curve evolution over 5 hours [present both in the absolute levels and the light curve shape at levels well above our uncertainties]{}. \[FigSpectraLC\]](Fig1_SpecLC.pdf) Our observations provided a series of very high signal-to-noise (SNR$>$300) spectra (see Fig. \[FigSpectraLC\]) of the targets covering the 1.1 to 1.7 $\mu$m wavelength range. These spectra probe the J and H broad-band photometric bands, prominent molecular absorption bands (water, methane), as well as atomic resonance lines (K [I]{}, Na [I]{}). The spectra of both targets are dominated by deep and broad water vapor absorption at 1.1 $\mu$m, 1.4 $\mu$m, and 1.7 $\mu$m. Characteristic of L/T transition dwarfs, they both display narrower neutral atomic lines and weaker CH$_4$ absorption. Both sources showed dramatic brightness changes during the observations. Synthetic photometry derived in the core of the standard J- and H-bands (centered at 1.2 and 1.6 $\mu$m) display variations with peak-to-peak amplitudes of 27% (2M2139) and 4.5% (SIMP0136). The variations are periodic and we estimate the periods to be 7.83$\pm$0.1 hr and 2.39 $\pm$0.05 hr for 2M2139 and SIMP0136, respectively. The period for 2M2139 was derived by least square minimization of the light curve segments overlapping in phase (see inset in Fig. \[FigSpectraLC\]) and leads to a 0.35% standard deviation in the overlapping region, but a slightly imperfect match in light curve shape. In contrast, a somewhat shorter period (7.76 h) provides a near-perfect match for the light curve shape, but leads to a slightly higher standard deviation (0.55%). Given the information at hand we conservatively assume that the period is 7.83$\pm$0.1 h. 
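The period determination by matching phase-folded light curve segments can be sketched with a simple phase-dispersion minimization. The data below are a synthetic sinusoid with the 7.83 h period quoted above; the amplitude, the noise level, and, for a sharper minimum, a multi-rotation time baseline are assumptions, not the actual observations:

```python
import numpy as np

# Synthetic light curve with the 7.83 h period quoted for 2M2139
# (amplitude, noise, and the multi-rotation baseline are illustrative).
rng = np.random.default_rng(2)
true_period = 7.83                                  # hours
t = np.linspace(0.0, 40.0, 800)
flux = (1.0 + 0.13 * np.sin(2.0 * np.pi * t / true_period)
        + rng.normal(0.0, 0.003, t.size))

def phase_dispersion(period, t, flux, nbins=20):
    """Mean in-bin scatter of the light curve folded at a trial period."""
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    stds = [flux[idx == b].std() for b in range(nbins) if np.any(idx == b)]
    return np.mean(stds)

# Grid search: the best-fitting period minimizes the phase dispersion.
trials = np.linspace(7.0, 8.6, 321)
best = trials[np.argmin([phase_dispersion(p, t, flux) for p in trials])]
```

A wrong trial period misaligns the folded cycles and inflates the in-bin scatter, which is the same principle as the least-squares matching of overlapping light curve segments used in the text.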
Determining the period for SIMP0136 poses a different challenge: although the overlap in phase is much larger than it is for 2M2139, the light curve shows a clear evolution during the extent of our observations. We find that assuming a period of 2.39 hr aligns the troughs in the light curve, but leads to a shift between consecutive peaks; in turn, a period of 2.42 hr aligns the peaks well but leads to a mismatch in the troughs. This behavior is fully consistent with the light curve evolution observed in this source; in the following we conservatively assume that the period of SIMP0136 is 2.39$\pm$0.07 hr. These rotational periods were determined by minimizing the differences in the phase-folded light curves. The periods are consistent with those reported for these sources from ground-based photometry [@Radigan2012; @Artigau2009]. For 2M2139 our first and sixth HST orbits cover the same phase for an assumed period of 7.83 hr. During this overlap the spectra and the flux levels provide a very close match [in the shape of the light curve. The flux levels differ by only 0.5%, which is twice the 1$\sigma$ uncertainty we estimated for the photometric stability of our measurements over multi-orbit timescales (see Section \[Uncertainties\]). The fact that the overlapping light curve segments are so similar both in flux and in shape argues]{} against a period twice as long, a possibility ground-based data left open [@Radigan2012]. The 2.39 hr and the 7.83 hr rotation periods are both shorter than Jupiter’s rotation period ($\sim$9.92 hr). The left panel of Fig. \[FigSpectraLC\] displays the maximum and minimum spectra observed for both targets as well as their ratio (for clarity we do not plot the entire spectral series). The data are of superb quality (S/N$>$300) and allow a detailed analysis of the changes. Both targets show a strikingly similar pattern: only weakly wavelength-dependent [broadband]{} variations.
[For the precise shape of the variations we refer the reader to the [*Ratio*]{} panels in Fig. \[FigSpectraLC\] and here only highlight the peaks of the ratios in the J and H bands. For 2M2139 the flux density change $\Delta F$ in the observed spectra peaks in the J-band at $33\%$ at 1.20 $\mu$m and in the H-band at a level of 28% at $1.58~\mu$m. For SIMP0136 $\Delta F$ peaks in the J-band at 5.9% at $1.15~\mu$m and in the H-band at a level of 5.8% at $1.56~\mu$m. While the changes in the J and H bands are smooth and similar, in both sources the water absorption bands between $\sim1.32-1.50~\mu$m vary at much lower levels: for example $\Delta F_{1.4\mu m}=14\%$ for 2M2139 and $\Delta F_{1.4\mu m}=3.2\%$ for SIMP0136. Note that, given our uncertainties of 0.25% for multi-orbit photometric stability (see Sect. \[Uncertainties\]), these differences are all highly significant.]{} Surprisingly, with the exception of water, all other gas-phase absorption bands (CH$_{4}$, Na [I]{}, K [I]{}) change together with the continuum. The light curve of 2M2139 shows nearly sinusoidal variations, but SIMP0136 displays sharper, more structured variations. Spectral Variations and PCA Analysis {#PCA} ------------------------------------ Both light curves contain distinct and prominent higher-frequency [components]{}. We interpret these variations as [spots]{} with different spectra rotating in and out of the visible hemispheres of the targets. We use the detailed data sets to identify the spectra and spatial distribution of these spots and contrast this information with predictions of state-of-the-art atmosphere models. We identify the smallest set of independent spectra, in addition to the mean spectrum, that accounts for the majority of the observed variance by applying a principal component analysis (PCA). We computed the covariance matrix of the spectral time series over wavelengths of 1.1$-$1.7 $\mu$m, cutting off lower signal-to-noise regions outside this range.
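The covariance-based decomposition described here can be sketched in a few lines of numpy. This is an illustrative stand-in for the IDL/LAPACK analysis actually used, with a hypothetical input array `S` of shape (n_times, n_wavelengths).

```python
import numpy as np

def pca_spectra(S):
    """PCA of a spectral time series S with shape (n_times, n_wavelengths).

    Returns the mean spectrum, the principal components (eigenvectors of
    the wavelength-wavelength covariance matrix, sorted by decreasing
    eigenvalue), the fractional variance carried by each component, and
    the time-dependent projection coefficients c_i(t)."""
    mean = S.mean(axis=0)
    resid = S - mean
    cov = np.cov(resid, rowvar=False)      # (n_wav, n_wav) covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    frac = evals / evals.sum()             # fractional contribution per component
    coeffs = resid @ evecs                 # projections c_i(t)
    return mean, evecs, frac, coeffs
```

For data dominated by two surface types, the first fractional eigenvalue approaches unity, which is the diagnostic used in the text.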
Eigenvectors ([**E$_i$**]{}) and eigenvalues ($\Lambda_i$) of the covariance matrix were determined using the [LAPACK]{} routine [LA\_EIGENQL]{} in IDL. Components were sorted by eigenvalue, and the fractional contribution of each component to the overall variability was determined as $\Lambda_i/(\sum_j \Lambda_j)$, where the denominator is a sum over all eigenvalues. Every observed spectrum at a given time, ${\bf S}(t)$, can then be approximated by a linear combination of the principal components, $${\bf S}(t) \approx \langle {\bf S}\rangle + c_0(t){\bf E}_0 + c_1(t){\bf E}_1 + ...$$ where the series is truncated to include only components that contribute significantly above the noise level. The coefficients $c_i(t)$ are given by the projections of the observed ${\bf S}(t) - \langle {\bf S}\rangle$ onto the principal components. Perhaps surprisingly, variations of a [*single*]{} principal component account for 99.6% and 99.7% of the observed variability for 2M2139 and SIMP0136, respectively, with the second components contributing at the 0.1% and 0.4% levels. In Fig. \[FigJackiePCAcomps\] the mean spectrum and the first two principal components ($\langle {\bf S}\rangle$, ${\bf E}_0$, and ${\bf E}_1$), as well as the time variability of the principal components ($c_i(t)$), are shown for both targets. In both cases, the variations are given by ${\bf S}(t) \approx \langle {\bf S}\rangle + c_0(t){\bf E}_0$, i.e. only a single principal component is required to account for most of the observed variability. This implies that only [*two*]{} dominant spectra contribute to the observed variations (e.g., take [**E**]{}$_0$ to represent the difference between two types of time-independent spectra, ${\bf S}_2-{\bf S}_1$), with the appearance of one “surface type” completely correlated with the disappearance of the other. Thus, our major conclusion is that [*only two types*]{} of “surface” patches (e.g.
cloudy and clear, or thick and thin clouds) are required to explain the observations in both of these sources. This finding validates a simple light curve model, applied below, in which the photosphere is described by a linear combination of two 1D model atmospheres differing in cloud thickness and/or temperature. ![Principal Component Analysis of the time series spectra for 2M2139 ([*top*]{}) and SIMP0136. [*Top left:*]{} The mean spectrum (black line) and the first two principal components of the variability (red and blue lines, respectively). All components have been normalized as unit vectors. The contributions of each component to the total variability are indicated. [*Bottom left:*]{} The principal components plotted relative to the mean spectrum, multiplied by the maximum difference in their time-projections, $\Delta c_i$. In other words, this panel shows the variability amplitude as a function of wavelength for isolated components. The relative error in flux densities is shown as a grey line for comparison. [*Right*]{}: Projections of the principal components onto the data spectra as a function of time. The first component is dominant, producing a light curve that mirrors the broadband variations, while the second component appears to cycle with the HST orbits and may reflect low-level uncorrected systematic errors. \[FigJackiePCAcomps\]](pca_2139.pdf "fig:"){width="6.5in"} ![Principal Component Analysis of the time series spectra for 2M2139 ([*top*]{}) and SIMP0136. [*Top left:*]{} The mean spectrum (black line) and the first two principal components of the variability (red and blue lines, respectively). All components have been normalized as unit vectors. The contributions of each component to the total variability are indicated. [*Bottom left:*]{} The principal components plotted relative to the mean spectrum, multiplied by the maximum difference in their time-projections, $\Delta c_i$.
In other words, this panel shows the variability amplitude as a function of wavelength for isolated components. The relative error in flux densities is shown as a grey line for comparison. [*Right*]{}: Projections of the principal components onto the data spectra as a function of time. The first component is dominant, producing a light curve that mirrors the broadband variations, while the second component appears to cycle with the HST orbits and may reflect low-level uncorrected systematic errors. \[FigJackiePCAcomps\]](pca_0136.pdf "fig:"){width="6.5in"} Light Curve Analysis {#Mapping} -------------------- Next we search for the simplest physically plausible surface brightness model that explains the observed light curves. We model the surface brightness distributions using the genetic-algorithm-optimized mapping routine [*Stratos*]{} that we developed (described in detail in Appendix \[Stratos\]). The only assumption of the model is that it describes surface features as elliptical spots with their major axes parallel to the rotational direction, [an assumption that is motivated by the common outcome of simulations of hydrodynamical turbulent flows in the shallow-water approximation [e.g. @ChoPolvani1996]. (We note here that our two targets, just like all Solar System giant planets, will be rotationally dominated, with a Rossby number $R\ll1$, @ShowmanKaspi2012).]{} The input parameters of the model are the number of spots ($i$) and the number of different surface types allowed (2 in our case, as given by the PCA); the optimized parameters are the inclination and limb darkening of the BD, the surface brightness of each surface type, as well as five values for each spot (longitude, latitude, aspect ratio, area, and surface type). Possible solutions are ranked and optimized on the basis of their fitness, which we define as the reduced chi-square difference between the observed and predicted light curves.
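The fitness used to rank candidate spot configurations is a reduced chi-square; a minimal sketch follows (array and parameter names are hypothetical, and the real routine couples this measure to the genetic optimizer rather than calling it in isolation).

```python
import numpy as np

def fitness(observed, model, sigma, n_params):
    """Reduced chi-square between observed and model light curves.

    Lower is better; candidate spot configurations are ranked by this
    value. `sigma` is the per-point photometric uncertainty and
    `n_params` the number of optimized model parameters."""
    dof = observed.size - n_params          # degrees of freedom
    return np.sum(((observed - model) / sigma) ** 2) / dof
```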
Interestingly, for both 2M2139 and SIMP0136 we find that models with [*at least*]{} three spots are required to explain the structured light curves: models with only one or two elliptical spots failed to reproduce the observed light curves. In Fig. \[FigBestModel\] we show the best-fit model, a non-unique but representative solution for 2M2139. Additional models with somewhat different surface distributions are shown in Fig. \[FigDiversity\]. Although the solutions are somewhat degenerate (as discussed in Section \[Stratos\]), the best solutions for 2M2139 all agree in the following: 1) the overall longitudinal spot covering fraction distribution is similar (between 20% and 30%); 2) at least three spots are required; 3) the surface brightnesses of the spots typically differ by a factor of two to three (either brighter [*or*]{} fainter) from the rest of the surface, corresponding to a $\simeq$300 K difference in brightness temperature; 4) the largest spot extends about 60$^\circ$ in diameter. We note that more complex solutions with a higher number of spots are also possible, but these replace the larger spots with groups of smaller spots, without changing the general properties identified above. An important property of all solutions is the presence of very extended spots or spot groups [in the photosphere]{}, which raises the question of whether such large structures can exist in these fast-rotating, warm brown dwarfs. As on Jupiter, the gradient of the Coriolis force on our rotating targets is expected to break the atmospheric circulation into parallel jet systems (belts), which will limit the maximum size of continuous atmospheric structures. Here, we use the Rhines scale [e.g. @Showman2010] to estimate the relative number of jet systems between our fast-rotating and slowly rotating sources: $N_{jet}\simeq \left(\frac{2 \Omega a}{U} \right)^{1/2}$, where $\Omega$ is the angular velocity, $a$ is the brown dwarf radius, and $U$ is the wind speed.
If the maximum size of a feature ($s_i$) is limited by the jet width, the relative maximum spot sizes for two sources will be given by: $$\label{RelativeSpotSizes} \frac{s_1}{s_2} = \frac{ \pi / N_{jet,1}}{ \pi / N_{jet,2} } = \left( \frac{\Omega_2 a_2 U_1}{\Omega_1 a_1 U_2} \right)^{1/2}.$$ Further, for both sources $a_1$ and $a_2$ should closely approach 1 $R_{Jup}$. If we assume that the wind speeds ($U_1$ and $U_2$) are similar in these two T2 dwarfs, the relative spot sizes in Eq. \[RelativeSpotSizes\] will be simply approximated by $s_1/s_2=(P_1/P_2)^{1/2}$, arguing for $\sim$1.8$\times$ wider and therefore $\sim$3.2$\times$ larger [*maximum*]{} spot surfaces on 2M2139 than on SIMP0136, in line with the larger amplitudes observed. [ Based on the above considerations we can also estimate the physical spot sizes. Although wind speeds are not known for brown dwarfs, in the following we explore the possible range. In estimating wind speeds only a few reference points are available: while the highest wind speeds observed to date ($U\simeq$2 km/s) occur in the upper atmospheres of heavily irradiated hot Jupiters [e.g. @Snellen2010], much lower wind speeds are typical of the cooler and only weakly irradiated atmospheres of Solar System planets (typically 40$-$100 m/s, but up to $\simeq$300 m/s on Saturn). For a discussion of how brown dwarf circulation may fit in this picture we refer the reader to @ShowmanKaspi2012. We will now assume two bracketing cases: U=100 m/s (Jupiter-like) and U=1,000 m/s (hot-Jupiter-like). Our simple approximation with the low and high wind speeds would suggest $N_{jet}\simeq17$ and $N_{jet}\simeq5$, respectively, for the slowly rotating 2M2139 and $N_{jet}\simeq32$ and $N_{jet}\simeq10$, respectively, for the rapidly rotating SIMP0136. These values correspond to maximum latitudinal spot diameters of $\simeq10^\circ$ and $\simeq36^\circ$ for 2M2139, and $\simeq6^\circ$ and $\simeq18^\circ$ for SIMP0136.
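The jet-count estimates above follow directly from the Rhines-scale expression. The sketch below assumes a radius of exactly 1 $R_{Jup}$ and the two bracketing wind speeds; it recovers the quoted $N_{jet}$ values to within about one jet, the small residuals reflecting rounding and the exact radius assumed.

```python
import numpy as np

R_JUP = 7.1492e7  # equatorial radius of Jupiter [m]

def n_jets(period_hr, wind_ms, radius_m=R_JUP):
    """Approximate number of zonal jets, N_jet ~ (2*Omega*a/U)**(1/2)."""
    omega = 2.0 * np.pi / (period_hr * 3600.0)   # angular velocity [1/s]
    return np.sqrt(2.0 * omega * radius_m / wind_ms)

def max_spot_diameter_deg(period_hr, wind_ms):
    """Maximum latitudinal feature size if spots are confined to one jet."""
    return 180.0 / n_jets(period_hr, wind_ms)
```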
Thus, slower, Jupiter-like wind speeds would lead to small maximum feature sizes (6–10$^\circ$), while high wind speeds would allow larger features (18–36$^\circ$). While we cannot accurately determine the wind speeds in our targets, the fact that these sources show large-amplitude variations emerging from large regions across their photospheres argues for wind speeds that are higher than those typical of Jupiter. ]{} The simple predictions described above are consistent with the much larger variation seen in the slowly rotating 2M2139 than in its faster-rotating sibling, and the predicted maximum spot sizes are similar to the size of the largest spots in the best-fit light curve models of 2M2139 (Fig. \[FigBestModel\]). With more data on varying brown dwarfs a realistic treatment of the atmospheric circulation will become possible, replacing the simple argument introduced above. Atmospheric Model Comparisons {#ModelComparison} ----------------------------- ![The near-infrared color-magnitude variations of our targets (blue and red) in the context of warmer cloudy L-dwarfs and cooler cloud-free T-dwarfs (in gray, from @DupuyLiu2012). Brightness variations in 2M2139 and SIMP0136 occur without strong color changes (top right and middle right panels). Lower right: Changes predicted by varying single model parameters (black dashed lines) are inconsistent with the observations. The green dashed lines show that simultaneously changing cloud structure (thin to thick) and temperature provides a perfect match. The direction of the modeled changes in 2M2139 and SIMP0136 (green dashed line) is compatible with the poorly understood underluminosity of several directly imaged giant planets (shown in magenta).
The percentages show the covering fractions of thin and thick clouds, respectively.\[FigCMD\]](Fig2_CMD.pdf) Even the most capable state-of-the-art ultracool model atmospheres provide only imperfect fits to the fine structure of brown dwarf spectra; therefore, the full interpretation of our very high signal-to-noise spectral series is limited by the fact that no existing model can describe these atmospheres at sub-percent accuracy. Nevertheless, the existing models can be used to explore the types of changes required to account for the observed color/magnitude variations. We proceed by identifying the best-fit atmosphere models for our two targets, with the assumption that these models describe well the dominant surface type on the targets. Then we explore what secondary surface type has to be added, as a second model atmosphere, to account for the observed color-magnitude variations. We base our analysis on the state-of-the-art radiative-convective model atmospheres in and out of chemical equilibrium described in @Burrows2006 and @Madhu2011, but we also used an independent set of models by [@Allard2011] to verify that our conclusions are model-independent. The Burrows models include different empirical descriptions of the vertical structures of clouds of different condensates [@Burrows2006]. For each condensate the vertical particle distribution is approximated by a combination of a flat cloud shape function and exponential fall-offs at the high- and low-pressure ends. The cloud altitudes are defined by the intersections of the temperature-pressure profile and the condensation curves. The tested models included cloud shape functions with a constant vertical distribution of particles above the cloud base (B-clouds, qualitatively consistent with the DUSTY models described in @Allard2001), and different parameterizations of the generic cloud shape function (A, C, D, E in @Burrows2006).
Of particular interest is cloud type E, a generic cloud with very steep exponential fall-offs, corresponding to thin clouds thought to be typical of clouds composed of the large grains of a single refractory condensate. The spectra we explored ranged in temperature from 600 K to 1,800 K with steps of 100 K, included solar and 10$\times$ sub-solar metallicities, and log g=4.0 and 5.0 for the @Burrows2006 models and log g=4.0, 4.5, and 5.0 for the @Allard2001 models. Fig. \[FigSpectra\] shows the spectra of our two targets together with the best-fit model spectra and templates. The upper models are from the BT-SETTL series of @Allard2011, which we plot for comparison, while the lower curves are model calculations based on models described in @Madhu2011. The field brown dwarf spectral templates are from @Leggett2000 [@Chiu2006]. We find that for 2M2139 the best-matching spectral template is a T2 template, while for SIMP0136 a T2.5 template provides a good match. Both sources can be fit well with a BT-SETTL model in local thermodynamic equilibrium and an effective temperature of 1,200 K, but fitting the spectrum of 2M2139 [requires a lower surface gravity (log g=4.5) than that of SIMP0136, which is better matched by a higher surface gravity model (log g=5.0). Fits with the Burrows models suggest slightly lower temperatures. Given the coarser spacing of the surface gravity grid of the Burrows models we used log g=4.0 for 2M2139 and log g=5.0 for SIMP0136. Although the lower surface gravity fits our spectra better, due to the limited wavelength coverage we take these log g values as comparative and do not argue that they represent an accurate characterization of the surface gravity (see also @Cushing2008 on the difficulty of determining precise atmospheric parameters from single-band spectroscopy).]{} The modal grain size is an adjustable parameter in the Burrows models, and models with large grains provided the best match for 2M2139.
We note that this model comparison is based on [*peak-to-valley normalized*]{} spectra, i.e. it focuses on the spectral shape rather than the absolute J and H brightnesses – we do so because our sources show strong J- and H-band variations but only weak variations in spectral shape. After establishing the best-fit starting model for the two sources, we explore what secondary surface type is required to explain the observed color-magnitude variations. Our tool for this step is the near-infrared color-magnitude diagram (CMD) shown in Fig. \[FigCMD\]. Here we plotted a full sequence of L/T dwarfs using the parallax and near-infrared photometric database of @DupuyLiu2012. Fully cloudy L-type brown dwarfs typically appear bright and red, while the dust-free atmospheres of T-type brown dwarfs are blue and faint. Transition objects are seen to show brighter J-band magnitudes with later spectral types before an eventual turn to the clear T-dwarf sequence [@Dahn2002; @Tinney2003; @Vrba2004]. Next, we varied each model parameter separately to compare its effect with the observed variations. We found that [*no*]{} combination of models differing in only a single parameter (temperature, cloud scale height, presence or absence of a cloud layer) can reproduce the observed changes (see the dashed black lines in the lower right panel of Fig. \[FigCMD\]): the tracks along which the source would move on the CMD if such a secondary surface type were added are clearly inconsistent with the actual variations. The same conclusion was reached by @Artigau2009 for SIMP0136, based on a comparison of models by @Allard2003 and @Tsuji2005 to their ground-based $\Delta$J/$\Delta$K observations, and by @Radigan2012 for 2M2139 through a comparison to models by @SaumonMarley2008. We next explored [*correlated*]{} changes in parameter pairs. We found that correlated changes in cloud scale height and temperature are required (green dashed lines in Fig. \[FigCMD\]).
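The color-magnitude tracks produced by adding a secondary surface type follow from a simple linear flux mix of two model atmospheres. The sketch below uses hypothetical band-integrated fluxes (arbitrary units), not actual Burrows or BT-SETTL outputs; only the mixing arithmetic is illustrated.

```python
import numpy as np

def mixed_magnitude(f_thick, flux_thin, flux_thick):
    """Magnitude of a photosphere that is a linear combination of two
    surface types with thick-cloud covering fraction f_thick."""
    flux = (1.0 - f_thick) * flux_thin + f_thick * flux_thick
    return -2.5 * np.log10(flux)

def cmd_track(flux_thin_J, flux_thick_J, flux_thin_H, flux_thick_H, n=11):
    """J magnitude and J-H color as the covering fraction goes 0 -> 1,
    tracing the track a patchy atmosphere follows on the CMD."""
    f = np.linspace(0.0, 1.0, n)
    J = mixed_magnitude(f, flux_thin_J, flux_thick_J)
    H = mixed_magnitude(f, flux_thin_H, flux_thick_H)
    return J, J - H
```

Sweeping the covering fraction from 0 to 1 traces model tracks like the dashed lines of Fig. \[FigCMD\]; a secondary surface that is both fainter and relatively redder moves the source down and to the red.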
By allowing these two model parameters to vary together we find that thin clouds in combination with large patches of cold and thick clouds (i.e. T$_{eff}=1,100$ K models with E-type clouds and 800 K B-type clouds, green dashed lines) can explain the observed color-magnitude variations well (blue and red crosses). This solution requires an about 300 K temperature difference between the spots and the surface, fully consistent with the factor-of-three surface brightness change predicted by our light curve shape model. Our models also predict the relative surface covering fractions of the thin and thick clouds (given as percentages in Fig. \[FigCMD\]), which are also consistent with the surface model shown in Fig. \[FigBestModel\]. We note that qualitatively similar results were obtained for 2M2139 by @Radigan2012, using a ground-based photometric dataset and the models of @SaumonMarley2008. ![Sketch illustrating a possible cloud structure consistent with the observations, which argue for large-scale variations in dust cloud scale height, correlated with a change in temperature. Higher clouds will limit the observed column to the cooler upper atmosphere, explaining the correlated changes in temperature and cloud scale height. More complex configurations, such as multi-layer clouds, are also possible. \[FigSketch\]](NewCloudSketch_A.png) ![Comparison of the observed peak spectra (green), normalized theoretical model atmospheres (blue), and field brown dwarf spectral templates (red). The top model curves are from @Allard2011, while the lower models are based on @Burrows2006. \[FigSpectra\]](Fig_S4_SpectralFits2139.pdf "fig:") ![Comparison of the observed peak spectra (green), normalized theoretical model atmospheres (blue), and field brown dwarf spectral templates (red). The top model curves are from @Allard2011, while the lower models are based on @Burrows2006.
\[FigSpectra\]](Fig_S4_SpectralFits0136.pdf "fig:") Discussion ========== A Single Spot Type ------------------ Because of the richness of possible condensates in brown dwarf atmospheres one may expect a complex mix of cloud properties, composition, and [cloud]{} scale height to be represented even within a single brown dwarf atmosphere. The observational fingerprint of such surface complexity in a rotating brown dwarf would be multi-component, complex changes in the spectra, brightness, and color. In stark contrast, the two sources observed show a single distinct type of simple change. This is reflected, for example, in the single-direction track in the color-magnitude diagram (Fig. \[FigCMD\]). The PCA analysis (see Section \[PCA\]) reveals that all the observed variance in the two sources can be reproduced with the combination of [*only two*]{} different spectra. This surprising result shows that all major features in the visible photosphere, although distributed across both hemispheres of each target, share the same spectra (i.e. the same deviation from the mean spectrum). We note that, based on spectral fits of composite near-infrared spectra, @Burgasser2010 identified 2M2139 as a strong candidate for being an unresolved binary brown dwarf. These authors argue that 2M2139 could be matched better, although imperfectly, by a blended L7–L9.5 and T3.5–T4.0 binary template than by any single brown dwarf template. It is tempting to consider the possibility that the fit by two blended spectra is in this case not the sign of an unresolved binary, but instead emerges from the blend of two different surface types on the source (as also proposed by @Khandrika2013 and @Radigan2012): one with the spectrum of an L8.5$\pm$0.7 dwarf, the other with the spectrum of a T3.5$\pm$1.0 dwarf. However, this explanation is unlikely to be correct. A composite spectrum of two such dwarfs with time-varying weights due to the rotation would introduce J vs.
J–H color variations in the direction of the L–T transition, very different from what we observed (see Fig. \[FigCMD\]). The fact that the flux density in the observed molecular bands – with the exception of the less variable water band – varies together with, and at the same rate as, the continuum shows that the opacity variations in the targets [*cannot*]{} originate from changes in the abundances of the common gas-phase absorbers, such as methane. Based on comparison to atmosphere models we argue that these spots are patches of very thick clouds. The fact that any two such cloud patches share the same spectrum, different from the dominant spectrum of the sources, argues for a [*single mechanism*]{} forming these thick cloud patches. Possibilities, as explored below, include circulation and large-scale vertical mixing. Complex Surfaces ---------------- With our surface mapping tools (see Section \[Mapping\]) we find that both sources have relatively complex surface brightness distributions: assuming elliptical spots, no one- or two-spot model can reproduce the structure of the light curves. While our model is not able to deduce the [precise]{} appearance of the [photosphere]{}, it provides useful insights into the complexity and overall distribution of the thick cloud patches observed. Specifically, the light curves and our models reveal that a large fraction of the surface of both targets is covered by cloud patches. These patches may be very large single structures (corresponding to our simplest model) or they may be super-structures consisting of dozens or even hundreds of smaller cloud patches with the same overall surface covering fractions. Whether single or complex structures, however, any mechanism that explains the spectral appearance and high cloud scale height needs to account for the concentration of these patches on the surfaces of the targeted brown dwarfs.
[We note here that the light curve modeling is only sensitive to the varying surface components, because homogeneously distributed features will not introduce brightness variations. Thus, it is likely that the large patches deduced from the light curve are not the only features present in the photosphere. Reinforcing this possibility is the fact that the color-magnitude modeling based on model atmospheres (Section \[ModelComparison\]) suggests a similar level of asymmetry, but on top of a more symmetric component. According to the best-fit model atmosphere combination, the fraction of the surface covered by thick clouds varies from about 50% to 63% on the visible hemisphere of 2M2139, while it occupies only 25% to 29% of the surface of SIMP0136 (see Fig. \[FigCMD\]). Thus, the modeling suggests that most of 2M2139’s surface is covered by thick clouds with large thin patches (that are brighter and contribute most of the observed emission), while SIMP0136 has an overall thin cloud layer with large patches of thick clouds. Although the covering fractions are different, based on the overall similarity of the spectra and the spectral variations these two sources have very similar thin and thick cloud layers.]{} The complex distribution of otherwise similar or identical thick cloud patches also offers an opportunity for further exploration of the dynamics of brown dwarf atmospheres. Cloud structures are subject to multiple [dynamical]{} processes and are likely to evolve on a broad range of timescales. For example, the structure of the light curves may evolve due to differential rotation, an effect that should be observable with high-precision datasets covering sufficiently long baselines. Other processes, such as thick cloud formation (i.e. a rapid increase of the cloud scale height) or the reverse process, the rain-out of condensate grains, may also result in changes over relatively short timescales.
Thin and Thick Clouds, but No Deep Holes ---------------------------------------- A leading hypothesis to explain the dramatic spectral changes observed at the L/T transition invokes a breaking apart of the cloud cover [e.g. @AckermanMarley2001; @Burgasser2002]. This idea provided the motivation for our initial observations, and it predicts surfaces consisting of clouds and holes that act as windows into the deeper photosphere. The simplest picture, and the one that has been explored by models [@Burgasser2002; @Marley2010], is one where holes in the cloud layer represent pure clearings, 100% free of condensate opacity. In contrast to this assumption, our observations of two of the most variable brown dwarfs suggest that the dark and bright regions of the photosphere represent thick and thin cloud regions rather than cloudy and cloud-free regions (Section \[ModelComparison\]). This may reflect holes in a thick cloud layer that look down into a thinner cloud layer. More generally, our observations argue for a more complex picture of cloud heterogeneities than envisioned by simple cloudy/cloud-free models. A similar conclusion was reached by @Radigan2012, who found that JHK photometric monitoring of 2M2139 was inconsistent with the presence of cloud-free regions, based on models of @SaumonMarley2008. Thus both photometric observations out to the K band and spectroscopic observations from 1 to 1.7 $\mu$m argue against the existence of cloud-free regions. We also found that the cloud thickness variations are correlated with changes in the effective temperature of the secondary model. This correlation is not surprising: the higher the dust scale height, the shallower the pressures and the lower the temperatures that are visible to the external observer (see Fig. \[FigSketch\]). This correlation thus argues for patches of thick clouds towering over an otherwise thinner cloud layer that covers most of the hemispheres of the targets.
The vertical structure and composition of these thick clouds is an [exciting]{} question, albeit one that our current data do not constrain well. [Recently @Buenzli2012 have shown that by combining data over a broad wavelength range – where different wavelengths probe different atmospheric depths – the vertical structure of the clouds can be explored. In their study five atmospheric layers of a T6.5 dwarf were sampled by obtaining five complete light curves at depths ranging from 0.1 to $\sim$10 bar. An important and surprising result of their study was a significant and pressure-dependent phase difference in the atmosphere, with the largest phase shift, observed at the deepest level, exceeding 180$^\circ$. While a full analysis like that carried out by @Buenzli2012 is beyond the scope of this paper, we show in Fig. \[Nophaseshift\] that the same narrow-band light curves as used by those authors, when extracted from our spectral series, show no significant phase shift. Thus, while the results for the T6.5 brown dwarf suggest correlated large-scale vertical-horizontal structures, the two early-T dwarfs studied here show similar cloud structures at different layers, without phase differences.]{} ![Five narrow-band light curves extracted from the spectral series for the two targets show that the light curve changes all occur at the same phase. The spectral bands have been selected to probe specific atmospheric depths and match those adopted by @Buenzli2012. In contrast to the absence of a phase shift in our two T2 dwarfs, the T6.5 dwarf analyzed by @Buenzli2012 showed a very prominent pressure-dependent phase shift in the same narrow-band light curves, revealing a large-scale horizontal-vertical structure. \[Nophaseshift\]](2M2139_lcs.pdf "fig:") ![Five narrow-band light curves extracted from the spectral series for the two targets show that the light curve changes all occur at the same phase.
The spectral bands have been selected to probe specific atmospheric depths and match those adopted by @Buenzli2012. In contrast to the absence of a phase shift in our two T2 dwarfs, the T6.5 dwarf analyzed by @Buenzli2012 showed a very prominent pressure-dependent phase shift in the same narrow-band light curves, revealing a large-scale horizontal-vertical structure. \[Nophaseshift\]](S0136_lcs.pdf "fig:") Apparent Underluminosity due to Thick Clouds -------------------------------------------- The remarkable cloud scale height variations in our targets demonstrate how this parameter affects the brightness of ultracool atmospheres. Thick clouds persisting at temperatures lower than typical for brown dwarfs have been proposed as one of the processes that may explain the apparent underluminosity of directly imaged giant exoplanets compared to brown dwarfs with similar spectral morphology (magenta crosses in Fig. \[FigCMD\]; see also @Skemer2012 [@Barman2011_HR8799; @Barman2011_2M1207; @Currie2011; @Madhu2011; @Marley2012]). Several authors propose that such unusually thick clouds would be present in directly imaged exoplanets due to the low surface gravities of these sources, which provides an attractive and self-consistent explanation for the appearance and low occurrence rate of these sources. However, the effects of thick clouds have remained difficult to verify, as multiple parameters (metallicity, surface gravity, age, mass, chemistry, cloud structure) vary simultaneously between any two ultracool atmospheres. The novelty of our observations is that they compare [*different cloud structures*]{} within the [*same atmospheres*]{}, i.e. keeping metallicity, surface gravity, age, mass, [and bulk composition]{} constant. This allows isolating the effects of cloud structure. 
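The two-component picture invoked here can be sketched as a flux mixture of thin- and thick-cloud surfaces. In the sketch below the band fluxes are invented placeholders (the actual components come from the fitted atmosphere models); only the mixing of covering fractions and the conversion to magnitudes and colors is meant to illustrate the procedure.

```python
import math

# Placeholder band fluxes (arbitrary units) for the two cloud components;
# the thick-cloud component is fainter and redder, qualitatively as in the text.
F_THIN  = {"J": 1.00, "H": 0.90}
F_THICK = {"J": 0.40, "H": 0.55}

def mixed_mag(band, f_thick):
    """Magnitude of a photosphere with a fraction f_thick covered by thick clouds."""
    flux = (1.0 - f_thick) * F_THIN[band] + f_thick * F_THICK[band]
    return -2.5 * math.log10(flux)

# Track in the (J-H, J) plane as the thick-cloud coverage grows toward 100%.
track = [(mixed_mag("J", f) - mixed_mag("H", f), mixed_mag("J", f))
         for f in (0.25, 0.5, 0.75, 1.0)]
# With these placeholder fluxes the source gets fainter in J (larger magnitude)
# and redder in J-H as the thick-cloud coverage increases.
```

Any pair of component fluxes with the thick component fainter and relatively brighter in H than in J reproduces this qualitative fainter-and-redder trend.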
[We note that a minor caveat here is that the pressure-temperature profile of atmospheres with large spots may differ from those with only thick [*or*]{} thin clouds; thus, the impact of the thick clouds must be evaluated keeping this possible difference in mind. ]{} We can explore the effect the thick clouds would have by extending their surface covering fraction in our model beyond that observed in our targets, i.e. by increasing the surface covering fractions of the thick cloud patches to values approaching 100%. The resulting green dashed line in Fig. \[FigCMD\] shows that increasing thick cloud coverage produces color-magnitude tracks crossing the positions of the directly imaged planets (magenta crosses). Thus, if the thick cloud patches we observed in the atmospheres of our T2 targets were to cover their atmospheres completely or nearly completely, their near-infrared brightness and colors would provide a good match to the directly imaged exoplanets. These results lend support to models in which the peculiar colors and brightness of exoplanets are produced by thick clouds, but we note that in this work no attempt was made to match the spectra predicted by our simple models to those observed for exoplanets. While our observations cannot identify the cause of the thick clouds in exoplanets, we expect that any successful model for the thick clouds in faint and red exoplanet atmospheres will also provide an explanation for the coexistence of thin and thick clouds in L/T transition brown dwarfs. The Path to Next Generation Atmosphere Models --------------------------------------------- Our observations provide exceptionally high signal-to-noise spectra that probe spectral variations within the photospheres of brown dwarfs. The accuracy of this dataset is high enough that finding a perfect match ($<1\%$) with existing atmosphere models was not possible, demonstrating the limitations of the existing models. 
Although the best-fit spectra match the mean spectrum typically to within 5–10% over the spectral range studied, a level usually considered a good fit for brown dwarf atmospheres, these differences are comparable to or larger than the amplitude of the changes we observe in our spectral series. Therefore, the accuracy of the existing models used in this paper did not yet allow meaningful modeling of the entire spectral series. Instead, in our modeling procedure we started from atmosphere models that provided the best fits to the mean spectra of the targets and then explored their [*color variations*]{}. Our approach of modeling a patchy cloud cover with a linear combination of independent 1D models is imperfect and likely not physically self-consistent, due to the somewhat different pressure-temperature profiles of these models [e.g. @Marley2010; @Marley2012]. It is worthwhile to briefly explore the limitations of current models and potential pathways to improve them. Arguably, the key limitations are the incomplete molecular opacity databases and the fundamentally one-dimensional nature of most atmosphere models. Expanding and refining the opacity databases relies on the continuation of ongoing laboratory and theoretical efforts and is not limited by the observations of ultracool atmospheres. In contrast, constraining and further developing two- or three-dimensional models (e.g. @Freytag2010) will require more accurate datasets and, in particular, datasets that provide spectrally and spatially resolved information. We anticipate that the brown dwarf spectral mapping technique introduced in this paper will lead to a major step in testing and refining physically realistic models of cloud formation and cloud structure [e.g. 
@Helling2008a; @Helling2008b] and atmospheric dynamics [@Freytag2010; @Showman2013]. Spectral Mapping of Ultracool Atmospheres ----------------------------------------- In this paper we also applied a method, spectral mapping, to a new class of objects: ultracool atmospheres. As this method is new to the field, we now briefly discuss its potential and future uses. Photometric phase mapping was proposed early on as a way to reach spatial information beyond the diffraction limit [e.g. @Russell1906]. Since then, different variants of this idea have been used successfully to derive asteroid shapes from photometry [e.g. @Kaasalainen2001; @Kaasalainen2012], to map their surface composition from spectral mapping [e.g. @Binzel1995], to map starspots via photometry and Doppler imaging [e.g. @Budding1977; @Vogt1987; @Luftinger2010], and to translate precision Spitzer photometry into one-dimensional and two-dimensional brightness distributions for hot jupiters [e.g. @Knutson2007; @Cowan2012a; @Majeau2012]. Similarly to the hot jupiter studies, in an upcoming study Heinze et al. (ApJ, submitted) use precision Spitzer photometry complemented by ground-based near-infrared light curves to explore cloud properties of an early L dwarf. Most recently, @Buenzli2012 have found pressure-dependent phase shifts in multi-band [light curves]{} extracted from an HST spectral series and a Spitzer 4.5 $\mu$m light curve of a T6.5 brown dwarf, revealing vertical structure in an ultracool atmosphere for the first time. The method used here is a logical next step in these brown dwarf studies, in which a large time-resolved spectral set is used to identify the diversity, spatial distribution, and spectra of the key photospheric features. Further similar studies with space-based instruments on HST and Spitzer, complemented by sensitive ground-based photometric observations, will allow obtaining data similar to those presented here for brown dwarfs covering a much broader range of atmospheric parameters. 
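The multi-band extraction used in such studies can be sketched as follows: each epoch's spectrum is integrated over a narrow band to produce one light curve per band, and bands are compared via cross-correlation to search for phase shifts. The band edges, array layout, and synthetic signal below are illustrative assumptions, not an actual reduction pipeline.

```python
import numpy as np

def band_light_curve(spectra, wavelengths, band):
    """Integrate each spectrum over a narrow band -> one light-curve point per epoch.
    spectra: (n_epochs, n_wavelengths) flux array; band: (lo, hi) in wavelength units."""
    lo, hi = band
    sel = (wavelengths >= lo) & (wavelengths < hi)
    return spectra[:, sel].sum(axis=1)

def phase_lag(lc1, lc2):
    """Lag (in samples) maximizing the cross-correlation of two mean-subtracted light curves."""
    a = lc1 - lc1.mean()
    b = lc2 - lc2.mean()
    cc = np.correlate(a, b, mode="full")
    return int(np.argmax(cc)) - (len(a) - 1)

# Synthetic demo: a 2% sinusoidal modulation applied identically at all wavelengths,
# so the two narrow-band light curves should show zero relative phase shift.
t = np.linspace(0.0, 1.0, 50)                  # one rotation, arbitrary phase units
wl = np.linspace(1.1, 1.7, 200)                # microns (illustrative grid)
base = 1.0 + 0.02 * np.sin(2 * np.pi * t)
spectra = base[:, None] * np.ones_like(wl)[None, :]
lc_blue = band_light_curve(spectra, wl, (1.20, 1.32))
lc_red = band_light_curve(spectra, wl, (1.50, 1.62))
print(phase_lag(lc_blue, lc_red))              # 0 -> no phase shift between bands
```

A pressure-dependent phase shift of the kind reported by Buenzli et al. would appear here as a nonzero lag between bands probing different depths.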
Such a dataset will allow exploring the properties of cloud cover as a function of spectral type, surface gravity, and rotation period, an important step toward establishing a physically consistent picture of condensate clouds. Next-generation adaptive optics systems will also be capable of measuring relatively small photometric variations in directly imaged giant exoplanets, allowing comparative studies of brown dwarfs and extrasolar giant planets [@KostovApai2012]. The changes observed in SIMP0136, first reported in @Artigau2009 and also seen in our Fig. \[FigSpectraLC\], highlight another exciting question. Cloud covers in some brown dwarfs clearly change on very short timescales and at very significant levels, offering an opportunity to study the atmospheric dynamics of ultracool atmospheres via multi-epoch, multi-timescale, multi-wavelength observations. Spectral mapping is set to be a powerful new method to characterize not only brown dwarf atmospheres but also those of extrasolar planets. Summary ======= In summary, we applied spectral mapping for the first time to ultracool atmospheres and showed that two L/T transition brown dwarfs have patchy cloud covers with multiple ($>$3) large spots/structures. Analysis of the spectral variations shows that a linear combination of only two types of spectra can explain the variance observed in both sources, demonstrating the presence of a single type of photospheric feature in an otherwise homogeneous cloud cover. We find that light curves derived from narrow wavelength sections of the spectra all change in phase. The observed variations show that the near-infrared brightness of dusty brown dwarfs can decrease significantly (3–27%) with only a modest reddening in the J-H color. These changes, and an extrapolation of the atmospheric models fitting them, closely resemble the properties of the red and “underluminous” directly imaged exoplanets, arguing for thick clouds as the cause of the underluminosity of giant planets. 
Our models with large cloud thickness variations and correlated temperature variations ($\simeq$300 K) explain the observed light curves (amplitudes, color-magnitude changes) as well as the light curve structures. Our findings reinforce models that explain the underluminosity of directly imaged super-jupiters with large-scale-height dusty atmospheres. The technique applied here, rotational phase mapping, provides a powerful tool to study the atmospheres of ultracool objects, brown dwarfs and exoplanets. Cooler objects may harbor clouds of particles with various compositions (NH$_3$, CH$_4$, H$_2$O) and phases (solid or liquid), the presence of which can be inferred with this technique. Its full potential, however, can be achieved with a telescope like the James Webb Space Telescope, which will couple high sensitivity and broad wavelength coverage with high contrast, enabling spectrally and spatially resolved mapping of directly imaged exoplanets. We acknowledge the anonymous referee, whose comments improved the manuscript. Support for Program number GO-12314 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. The work of JR and RJ was supported by grants from the Natural Sciences and Engineering Research Council of Canada. We are grateful to the dedicated staff at the Space Telescope Science Institute for their outstanding support of the observations and the instrumentation. Calibrated data and reference files used in this work are available indefinitely at the B. A. Mikulski Archive for Space Telescopes (http://archive.stsci.edu). We acknowledge an STScI Director's Discretionary Grant that helped start this project. [*Facilities:*]{} . 
Surface Mapping: The Stratos Package {#Stratos} ==================================== Translating a light curve into the surface map of a rotating sphere is an under-constrained deconvolution problem. Although, due to the nature of the problem, not all information can be retrieved, with a few priors on physically plausible geometries and very high signal-to-noise data it is possible to derive meaningful maps with robust characteristics. To model our data we developed a new IDL package ([*Stratos*]{}). In the following we briefly describe the principles and organization of this package and then discuss the solutions and degeneracies in the derived parameters. In [*Stratos*]{} we apply an optimized forward-modeling procedure to identify the best-fitting two-dimensional map for each spectral series given a small set of a priori assumptions. We start by using principal components analysis to identify $i$, the number of independent spectral components required to explain at least 96% of the observed spectral variance (see Sect. \[PCA\]; for both of our sources $i=2$). This determines the number of types of surface features we include in our model, including the ambient spectrum. Our actual surface model consists of a sphere with $j$ ellipses [in its photosphere]{}. The surface brightness level of each of these ellipses is fixed to one of the $i$ distinct levels. Thus, the basic free parameters of the model are the latitude, longitude, axis ratio, and area of each ellipse, i.e. four parameters per ellipse. The relative brightnesses of the [*i*]{} different surface types are also free parameters (in our case only a single parameter). In addition to these parameters we add the inclination of the target's spin axis and the limb darkening as free parameters. 
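The first step of this procedure, choosing the number of independent spectral components $i$, can be sketched with a principal components analysis; in the minimal sketch below only the 96% variance threshold comes from the text, while the synthetic two-component spectra are purely illustrative.

```python
import numpy as np

def n_components_for_variance(spectra, threshold=0.96):
    """Smallest number of principal components explaining >= threshold of the variance.
    spectra: (n_epochs, n_wavelengths) array of observed spectra."""
    centered = spectra - spectra.mean(axis=0)
    # Singular values of the centered data give the per-component variance.
    s = np.linalg.svd(centered, compute_uv=False)
    explained = s**2 / np.sum(s**2)
    cumulative = np.cumsum(explained)
    return int(np.searchsorted(cumulative, threshold) + 1)

# Synthetic series built from two spectral shapes plus small noise,
# mimicking the i = 2 result found for both targets.
rng = np.random.default_rng(0)
wl = np.linspace(1.1, 1.7, 120)
comp1, comp2 = np.sin(4 * wl), np.cos(7 * wl)
weights = rng.random((40, 2))
spectra = weights @ np.vstack([comp1, comp2]) + 0.001 * rng.standard_normal((40, 120))
print(n_components_for_variance(spectra))   # -> 2
```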
We express limb darkening in the commonly used form $I(\phi)=I_0 \times (1-c\times(1-\cos(\phi)))$, where $\phi$ is the angle between the line of sight and the observed surface element and $c$ is the fitted limb darkening coefficient. We allowed $c$ to vary between 0 and 0.8, the former representing no limb darkening and the latter corresponding to the strongest limb darkening predicted at near-infrared wavelengths for low-mass stars [@Claret2011]. We optimize the above model using a genetic algorithm, a commonly applied heuristic optimization method (see @Charbonneau1995). We define the fitness of each solution by the sum of the squared differences between the predicted and observed spectral variations. Multiple parallel optimization runs were executed on a 12-core Intel Xeon-based Mac Pro; typical runtimes are about two days per target. ![[*Left:*]{} The Stratos package provides multiple, broadly similar models that reproduce well the observed light curve of 2M2139. All models require at least three distinct spots. Models are numbered by increasing chi-square and shown in Fig. \[FigDiversity\]. [*Right:*]{} Example best-fit model for 2M2139. Although some parameters are degenerate, all models require multiple large spots distributed across the surface of the source in a broadly similar pattern. \[FigBestModel\]](Fig_S3_LC_Fit.pdf "fig:") ![[*Left:*]{} The Stratos package provides multiple, broadly similar models that reproduce well the observed light curve of 2M2139. All models require at least three distinct spots. Models are numbered by increasing chi-square and shown in Fig. \[FigDiversity\]. [*Right:*]{} Example best-fit model for 2M2139. Although some parameters are degenerate, all models require multiple large spots distributed across the surface of the source in a broadly similar pattern. 
\[FigBestModel\]](Fig_S5_bestmodel.png "fig:") ![Although light curve decomposition yields degenerate solutions, several key properties of the solutions are similar. No solution with fewer than three spots can fit the data; the relative spot sizes and total covering fractions of the models are very similar. When the inclinations are considered, the spot distributions are also similar. While no unique solution exists, the modeling provides insight into the similarities of the simplest best-fitting models. \[FigDiversity\]](Fig_S2_Modeldiversity.png) The maps derived with phase mapping and Stratos have three natural limitations. First, our data are sensitive to variations in the surface brightness distribution and insensitive to time-invariant features. Second, the observations can only probe the visible fraction of the [photosphere]{}: for a rotation axis inclined with respect to the plane of the sky, part of the [photosphere]{} will not be visible at any rotation phase. Third, a variation in the latitude of any feature only slightly changes the light curve. The above limitations lead to solutions that are degenerate in some parameters, while robust in others. In the following we discuss the similarities and degeneracies in the best-fit solutions. During a typical fitting procedure Stratos evaluates $\simeq10^5$ different solutions. Figure \[FigDiversity\] shows the light curves for the six best-fit models for 2M2139, the target with the highest signal-to-noise levels. Note that the light curves predicted by the six models differ the most in the inter-orbit gaps, where HST could not obtain data. These models provide excellent fits, and the amplitudes of the residuals are less than 0.5% (Figure \[FigBestModel\]). Figure \[FigDiversity\] provides an overview of the six best-fit surface maps. At first glance these solutions may appear different, but even cursory inspection reveals that all solutions share several key characteristics. 
First, three spots provide excellent fits to the observations, whereas no one- or two-spot solution was acceptable. Second, the longitudinal distributions of the visible surface area occupied by lighter and darker features are very similar. Third, the projected sizes and relative positions of the two larger spots are also similar when the inclination of the model is considered. Clearly, there are also several parameter pairs that are degenerate and not tightly constrained: the inclination and the limb darkening and, to a lesser degree, the spot size and the spot surface brightness. [*Bright spot/dark spot degeneracy:*]{} Our surface modeling procedure can fit the light curves equally well with large bright spots on a darker surface or with large darker spots on a lighter surface. These solutions – when the inclination and the latitudinal integration are considered – appear to be inverses of each other and, ultimately, lead to the same longitudinal one-dimensional surface brightness distribution. At the level of accuracy of our measurements these two families of solutions are degenerate, although they may be distinguishable in the future with more precise datasets and a shape model for the features. In the context of our atmospheric modeling the darker regions can be interpreted as covered by thick clouds, while the brighter regions are covered by thinner clouds with a higher-pressure (i.e. warmer) upper boundary. Whether the photosphere should be interpreted as a thin cloud layer with towering thick clouds or a thick cloud layer with depressions or cavities (but not deep holes) depends on the covering fraction of the two surface types. The atmospheric modeling (see Section \[ModelComparison\] and Fig. \[FigCMD\]) provides guidance on the probable relative covering fractions: the models suggest a thin cloud cover varying between 71% and 75% for SIMP0136 (i.e. dominantly darker spots in a lighter photosphere) and between 37% and 50% thin cloud cover for 2M2139. 
However, this comparison is imperfect: our light curves are sensitive to changes in the surface covering fractions and insensitive to azimuthally symmetrically distributed photospheric features. Such features – bands or evenly distributed small spots – would not influence the light curves (i.e. the distribution of the darker/lighter surface features), but would influence the relative photospheric covering fractions of the atmospheric models used. We foresee that our dataset and similar datasets will in the near future be modeled by [*simultaneously*]{} fitting the mean spectra, the spectral changes, and the light curve shapes in a self-consistent manner, instead of the three-step procedure we followed here.
--- abstract: 'Expansion dynamics of single-species, non-neutral clouds, such as electron bunches used in ultrafast electron microscopy, show novel behavior due to high acceleration of particles in the cloud interior. This often leads to electron bunching and dynamical formation of a density shock in the outer regions of the bunch. We develop analytic fluid models to capture these effects, and the analytic predictions are validated by PIC and N-particle simulations. In the space-charge dominated regime, two and three dimensional systems with Gaussian initial densities show bunching and a strong shock response, while one dimensional systems do not; moreover these effects can be tuned using the initial particle density profile and velocity chirp.' author: - 'B. S. Zerbe' - 'X. Xiang' - 'C.-Y. Ruan' - 'S. M. Lund' - 'P. M. Duxbury' bibliography: - 'CoulombDynamics.bib' title: Dynamical bunching and density peaks in expanding Coulomb clouds --- Introduction ============ Non-neutral plasma systems arise in a variety of physical contexts ranging from astrophysics[@Arbanil:2014_charged_star_review; @Maurya:2015_charged_sphere; @Yousefi:2014_dust_aggregates]; accelerator technologies [@Bacci:2014_plasma_acceleration; @Boine:2015_intense_beams; @Whelan:2016_MRI; @Bernal:2016_recirculator]; ion and neutron production [@Bulanov:2002_charged_beam_generation; @Fukuda:2009_species_generation; @Esirkepov:2004_highly_efficient_ion_generation; @Kaplan:2015_preprint; @Parks:2001_neutron_production; @Bychenkov_2015_review]; sources for electron and ion microscopy[@Murphy:2014_cold_ions; @Gahlmann:2008_ultrashort]; to high power vacuum electronics[@Booske:2011_vacuum_review; @Liu:2015_maximal_charge; @Zhang:2016_review]. Understanding of the dynamics of spreading of such systems is critical to the design of next generation technologies, and simple analytic models are particularly helpful for instrument design. 
As a result, substantial theoretical efforts have already been made in this vein[@Jansen:1988_book; @Reiser:1994_book; @Batygin:2001_self; @Bychenkov:2005_coulomb_explosion; @Grech:2011_coulomb_explosion; @Kaplan:2003_shock; @Kovalev:2005_kinetic_spherically_coulomb_explosion; @Last:1997_analytic_coulomb_explosion; @Eloy:2001_coulomb_explosion; @Krainov:2001_ce_dynamics; @Morrison:2015_slow_down_dynamics; @Boella:2016_multiple_species]. Specifically, free expansion of clouds of charged single-species particles starting from rest has been well studied both analytically and computationally[@Last:1997_analytic_coulomb_explosion; @Eloy:2001_coulomb_explosion; @Grech:2011_coulomb_explosion; @Batygin:2001_self; @Degtyareva:1998_gaussian_pileup; @Siwick:2002_mean_field; @Qian:2002_fluid_flow; @Reed:2006_short_pulse_theory; @Collin:2005_broadening; @Gahlmann:2008_ultrashort; @Tao:2012_space_charge; @Portman:2013_computational_characterization; @Portman:2014_image_charge; @Michalik:2006_analytic_gaussian], and a number of studies have found evidence of the formation of a region of high density, often termed a “shock”, on the periphery of the clouds under certain conditions[@Grech:2011_coulomb_explosion; @Kaplan:2003_shock; @Kovalev:2005_kinetic_spherically_coulomb_explosion; @Last:1997_analytic_coulomb_explosion; @Murphy:2014_cold_ions; @Reed:2006_short_pulse_theory; @Degtyareva:1998_gaussian_pileup]. One application of these theories that is of particular current interest is to high-density electron clouds used in next-generation ultrafast electron microscopy (UEM) development[@King:2005_review; @Hall:2014_report; @Williams:2017_longitudinal_emittance]. 
The researchers in the UEM and the ultrafast electron diffraction (UED) communities have conducted substantial theoretical treatment of initially extremely short bunches of thousands to ultimately hundreds of millions of electrons that operate in a regime dominated by a virtual cathode (VC) limit[@Valfells:2002_vc_limit; @Luiten:2004_uniform_ellipsoidal; @King:2005_review; @Miller:2014_science_review; @Tao:2012_space_charge], which is akin to the Child-Langmuir current limit for beams generated under steady-state conditions[@Zhang:2016_review]. These short bunches are often generated by photoemission, and such bunches inherit an initial profile similar to that of the driving laser pulse. Typically, the laser pulse has an in-plane, “transverse” extent of order one hundred microns and a duration on the order of fifty femtoseconds, and these parameters translate into an initial electron bunch with similar transverse extents and sub-micron widths[@King:2005_review]. After photoemission, the electrons are extracted longitudinally using either a DC or an AC field, typically in the 1-10 MV/m[@Srinivasan:2003_UED; @Ruan:2009_nanocrystallography; @van_Oudheusden:2010_rf_compression_experiment; @Sciaini:2011_review] through tens of MV/m[@Musumeci:2010_single_shot; @Weathersby:2015_slac; @Murooka:2011_TED] ranges, respectively. However, the theoretical treatments of such “pancake-like” electron bunch evolution have largely focused on the longitudinal dimension[@Luiten:2004_uniform_ellipsoidal; @Siwick:2002_mean_field; @Qian:2002_fluid_flow; @Reed:2006_short_pulse_theory; @Collin:2005_broadening], and the few studies looking at transverse dynamics have either assumed a uniform transverse distribution[@Collin:2005_broadening] or have looked at the effect of a smooth Gaussian-to-uniform evolution of the transverse profile on the evolution of the pulse in the longitudinal direction[@Reed:2006_short_pulse_theory; @Portman:2013_computational_characterization]. 
Of specific note, only one analytic study found any indication, a weak longitudinal signal, of a shock[@Reed:2006_short_pulse_theory]. On the other hand, an attractive theoretical observation is that an ellipsoidal cloud of cool, uniformly distributed charged particles has a linear electric field within the ellipsoid, which results in maintenance of the uniform charge density as the cloud spreads [@Grech:2011_coulomb_explosion]. In the accelerator community, such a uniform distribution is a prerequisite for employing techniques such as emittance compensation[@Rosenzweig:2006_emittance_compensation] as well as forming the basis of other theoretical analyses. It has long been proposed that such a uniform ellipsoid may be generated through proper control of the transverse profile of a short charged-particle bunch emitted from a source into vacuum[@Luiten:2004_uniform_ellipsoidal], and experimental results have shown that an electron cloud emitted from a photocathode and rapidly accelerated into the highly-relativistic regime can develop into a final ellipsoidal profile characteristic of a uniform charge distribution[@Musucemi:2008_generate_uniform_ellipsoid]. Contrary to expectations from the free expansion work, but consistent with the longitudinal analyses, this projected profile (shadow) lacks any indication of a peripheral high-density shock region. However, recent work has indicated that a substantial high-density region may indeed form in the transverse direction[@Williams:2017_transverse_emittance], and N-particle simulation results, as demonstrated in Fig. (\[fig:distribution substructure\]), show a rapidly-developed substantial ring-like shock circumscribing the median of the bunch when the bunch starts from sufficient density. 
Moreover, this shock corresponds to a region of exceedingly low brightness or, conversely, high local temperature, and experiments show that removal of this region results in a dramatic increase in the bunch brightness[@Williams:2017_transverse_emittance]. We term this “Coulomb cooling”, as it is similar to evaporative cooling in that the “hottest” charged particles are removed from the distribution’s edge, leaving behind a higher-quality, cooler bunch. To understand Coulomb cooling, we first investigate this transverse shock. Here we demonstrate the formation of a ring-like shock within N-particle simulations [@Berz:1987_cosy; @Zhang:2015_fmm_cosy] of electron bunches with an initial transverse Gaussian profile and offer an explanation of why this phenomenon has not been noted previously within the UED literature. We then utilize a Poisson fluid approach to derive analytic predictions for the expansion dynamics in planar (1D), cylindrical, and spherical geometries, and we derive conditions for the emergence of density peaks distinct from any initial density maximum. We show that peak formation has a strong dependence on dimension, with one-dimensional systems less likely to form shocks, while in cylindrical and spherical geometries bunching is more typical. Particle-in-cell (PIC) methods, utilizing WARP[@Friedman:2014_warp], and N-particle simulations are then used to validate the analytical predictions for peak emergence. Observation of Transverse Shock =============================== One reason that a transverse shock has not been seen previously in N-particle simulations is apparent in Fig. (\[fig:distribution substructure\]). We consider pancake electron bunches typical of 100 keV ultrafast electron microscopy, and we take the thin direction of the bunch to be the z-axis. 
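A minimal sketch, under illustrative assumptions, of how such a ring-like substructure can be quantified: electrons in a thin slice around the z-median are binned into cylindrical shells of equal volume, so that equal counts correspond to equal density. The shell count, slice length, and the synthetic uniform disk below are placeholders rather than the actual analysis code.

```python
import math
import random

def shell_density(radii, r_max, n_shells, slice_length):
    """Radial density from equal-volume cylindrical shells of a thin z-slice.
    radii: transverse radii sqrt(x^2 + y^2) of electrons in the slice (meters)."""
    # Equal-volume shells: outer radius of shell k is r_max * sqrt(k / n_shells).
    edges = [r_max * math.sqrt(k / n_shells) for k in range(n_shells + 1)]
    vol = math.pi * r_max**2 * slice_length / n_shells   # identical for every shell
    counts = [0] * n_shells
    for r in radii:
        if r < r_max:
            # Shell index obtained by inverting the edge formula.
            counts[int(n_shells * (r / r_max) ** 2)] += 1
    mid = [0.5 * (edges[k] + edges[k + 1]) for k in range(n_shells)]
    return mid, [c / vol for c in counts]

# Illustrative slice: a uniform-density disk, so all shells should report
# statistically similar densities (a shock would appear as an outer peak).
random.seed(1)
radii = [0.6e-3 * math.sqrt(random.random()) for _ in range(30000)]
mid, dens = shell_density(radii, r_max=0.6e-3, n_shells=30, slice_length=0.4e-6)
```

Because the shells have equal volume, a ring-like shock shows up directly as a shell whose count significantly exceeds those of its inner neighbors.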
Previous studies of the expansion dynamics of these bunches, including our own work, have looked at the projection of the particle density distribution to the x-z plane[@Luiten:2004_uniform_ellipsoidal; @Musucemi:2008_generate_uniform_ellipsoid; @Portman:2014_image_charge; @Morrison:2013_measurement; @Li:2008_quasiellipsoidal]. Fig. (\[fig:distribution substructure\]) shows that projecting the distribution in this manner, with the ability to statistically discern density fluctuations only at about the 10% level, results in what appears to be a uniform distribution; however, restricting the projection to only electrons near the median of the bunch, a restriction that at present can only be done computationally, reveals evidence of a transverse ring-like density substructure near the median longitudinal (z) position. ![\[fig:splined\_grid\] Density near the z-median of simulated pancake bunches with transverse Gaussian profiles ($\sigma_r = 100 ~ \mu$m) in an extraction field of 10 MV/m. Each figure is the transverse radial density of a section of width $\sigma_z \approx 0.4~\mu$m for different initial conditions and different numbers of electrons, at time $10 ~ \tau_p$, where $\tau_p = 2 \pi \sqrt{\frac{m \epsilon_0}{n e^2}}$ is the plasma period and $n = \frac{N}{\pi \sigma_r^2 \sigma_z}$. The number of electrons in each horizontal panel is different and equal to $N = 1,000$ (top), $N=10,000$ (middle), and $N = 100,000$ (bottom). For the density at $10 ~ \tau_p$, 30 cylindrical shells of equal volume and length $\sigma_z$ partitioned the distribution out to $0.6$ mm, and the numbers of electrons in each of these shells were used to calculate a density at the shell’s average radius. 
Due partially to the different numbers of electrons, and partially to the fact that in the longer simulations (namely the simulation with $N = 1000$) significantly more electrons migrated out of the analysis region as a result of the initial velocity spread, the density scales are different for the three rows in the figure: $\frac{1}{(0.1 mm)^3}$ for the top row, $\frac{0.1}{(0.01 mm)^3}$ for the middle row, and $\frac{1}{(0.01 mm)^3}$ for the bottom row. Red dashed lines represent splines of order 3 with 10 knots. Notice the clear presence of a shock for the case $N=100,000$, an ambiguous shock at $N=10,000$, and essentially noise at $N=1,000$.](grid_of_splined_2.png){width="45.00000%"} ![\[fig:emergence time\] The emergence time divided by the plasma period as a function of the number of electrons in the initial Gaussian profile with $\sigma_r \approx 100 ~\mu$m and $\sigma_z \approx 0.4 \mu$m. The emergence time was determined as the first time the density away from the innermost value exceeded the innermost value by 2%. Notice that the emergence time converges to about $5~\tau_p$ for high densities, but at low densities the emergence time has high variability, with a median shifted to higher multiples of the plasma period. 
](emergence_time.png "fig:"){width="45.00000%"} To better understand when this shock emerges, simulations with the same distribution parameters ($\sigma_r \approx 100 ~\mu$m and $\Delta z \approx 0.4~\mu$m) but various numbers of electrons were run. The average radial density was calculated for 30 instances of bunches with 1 thousand, 10 thousand, and 100 thousand electrons. As can be seen in Fig. (\[fig:average distribution\]), the shock emergence is present for bunches with 100 thousand electrons but not for those with 1 thousand electrons. The case of bunches with 10 thousand electrons suggests the emergence of the shock, but the shock becomes less defined at later times. Fig. (\[fig:splined\_grid\]) shows nine density profiles at time $10 ~ \tau_p$, where $\tau_p$ represents the plasma period, with $\tau_p = 2 \pi \sqrt{\frac{m \epsilon_0}{n e^2}}$ and $n = \frac{N}{\pi \sigma_r^2 \sigma_z}$. The dots in each figure indicate average densities in cylindrical rings, originating from three randomly chosen initial conditions (figures in each row) and for three values of the total number of electrons $N$: 1 thousand, 10 thousand, and 100 thousand electrons for the top, middle, and bottom rows, respectively.
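As an order-of-magnitude check, the plasma period defined above is easy to evaluate directly. The sketch below (plain Python, SI units) is illustrative rather than part of the published analysis; it assumes the bunch geometry quoted above ($\sigma_r = 100~\mu$m, $\sigma_z \approx 0.4~\mu$m) and evaluates $\tau_p$ for several values of $N$.

```python
import math

# SI constants
M_E  = 9.109e-31    # electron mass [kg]
Q_E  = 1.602e-19    # elementary charge [C]
EPS0 = 8.854e-12    # vacuum permittivity [F/m]

def plasma_period(N, sigma_r, sigma_z):
    """tau_p = 2*pi*sqrt(m*eps0 / (n*e^2)) with n = N / (pi*sigma_r^2*sigma_z)."""
    n = N / (math.pi * sigma_r**2 * sigma_z)
    return 2.0 * math.pi * math.sqrt(M_E * EPS0 / (n * Q_E**2))

# Bunch geometry from the simulations: sigma_r = 100 um, sigma_z ~ 0.4 um
for N in (1e3, 1e4, 1e5, 1e6):
    tau_p = plasma_period(N, 100e-6, 0.4e-6)
    print(f"N = {N:8.0f}:  tau_p = {tau_p*1e12:7.1f} ps")
```

Since $\tau_p \propto N^{-1/2}$, each tenfold increase in $N$ shortens the plasma period by about a factor of 3.2; for $N = 10^6$ this puts $\tau_p$ in the low tens of picoseconds, so an emergence time of roughly $5\,\tau_p$ corresponds to a few tens of picoseconds.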
These representative density plots support the conclusion that the shock is only present in the case of bunches with 100 thousand electrons, where the spline fit to the data indicates a significant peak removed from the center of the bunch in all instances examined. As expected, the density of bunches with one thousand electrons is noisy due to low statistics, both from the small number of electrons in the simulation and from the large proportion of electrons that spread beyond the analysis region due to the initial velocity spread. The density profile of bunches with 10 thousand electrons has a consistent general shape that fits well to the spline but lacks significant emergent peaks, which are the indicators of shock formation. We define the emergence time as the time at which peaks indicative of a shock emerge in the dynamics of Coulomb clouds. Fig. (\[fig:emergence time\]) shows the dependence of the emergence time, and its variability, on the number of electrons in a bunch. It also shows very clearly that the emergence time is proportional to the plasma period, $\tau_p$. As can be seen in this figure, the spread in the emergence time is large for a bunch with 10 thousand electrons, but this spread decreases as the density of the bunch increases. For bunches with $N\ge 100,000$ the spread in the emergence time is small; moreover, the emergence time appears to converge toward approximately $5~\tau_p$ at large $N$ (for Gaussian initial distributions). We note here that for Gaussian pulses with similar spatial and temporal extents, simulations at and above $10$ million electrons, a goal of the community[@BES_report:2016_electron_sources], result in relativistic velocities due to the stronger space-charge effects. As the discussion here focuses on non-relativistic physics, we present data for up to 1 million electrons, where the velocity obtained from the self-field remains non-relativistic. The results presented in Figs.
2-4 are a second reason that shock formation has not been seen previously in studies of electron bunches. Specifically, most work has been conducted using $\le 10,000$ electrons with a transverse standard deviation of 100 $\mu$m, which is in the regime where there is no consistent emergence of a shock. Moreover, the fact that the non-relativistic evolution of the bunch profile has a time scale proportional to the plasma period, a fact that we derive under special geometries later in this manuscript, means that higher density bunches result in faster, more consistent evolution of the transverse profile. In other words, the shock emerges earlier as the density of the bunch is increased. Specifically, a transverse shock emerges on a time scale of order $50$ ps for an initially Gaussian profile ($\sigma_r = 100 ~\mu$m with sub-micron length) with $10^6$ electrons, the number of electrons that is the current goal for the diffraction community[@BES_report:2016_electron_sources]. This implies that for modern bunches, this transverse shock happens well within the photoemission gun, before the onset of the relativistic regime. The goal of $10^8$ electrons for the imaging community needs to be further examined, as the transverse velocity spread will be relativistic; we expect to find this effect there as well, but occurring at short times, of order a few picoseconds. 1D model ======== As noted in the introduction, formation of a shock in the longitudinal direction of an expanding pancake pulse has not been observed, and the analysis of Reed [@Reed:2006_short_pulse_theory] demonstrates that this is true for cold initial conditions. Here we re-derive this result using an elementary method, which enables extension to include the possibility of an initial chirp, and we find chirp conditions at which shock formation in the longitudinal direction can occur.
Consider the non-relativistic spreading of an electron bunch in a one dimensional model, which is a good early time approximation to the longitudinal spreading of a pancake-shaped electron cloud generated at a photocathode. In one dimensional models, the density, $\rho$, only depends on one coordinate, which we take to be $z$. We also take $\rho$ to be normalized so that its integral is one. For the sake of readability, denote the position of a particle from the Lagrangian perspective to be $z = z(t)$ and $z_0 = z(0)$. The acceleration of a Lagrangian particle is $$\begin{aligned} a(z;t) = \frac{qQ_\text{tot}}{2 m \epsilon_0} \delta \sigma\end{aligned}$$ where $q$ is the charge of the particle (e.g. electron), $m$ is its mass, $Q_{\text{tot}}$ is the total charge in the bunch, and $$\begin{aligned} \delta \sigma = \int_{-z}^{z} \rho(\tilde{z};t) d\tilde{z}\end{aligned}$$ The key observation enabling analytic analysis is that if the flow of electrons is lamellar, so that there is no crossing of particle trajectories, then these integrals and the acceleration calculated from them are time independent and hence may be determined from the initial distribution. Therefore, we denote $a(z;t) = a_0$, $\rho_0 = \rho(z;0)$, and $\delta \sigma = \int_{-z_0}^{z_0} \rho_0(\tilde{z})d\tilde{z}$. Moreover, due to the fact that for any particle trajectory, the acceleration is constant and given by $a_0$, the Lagrangian particle dynamics reduces to the elementary constant acceleration kinematic equation $$\label{eq:non-relativistic position} z(t) = z_0 + v_0 t + {1\over 2} a_0 t^2$$ where $v_0$ is the initial velocity of the charged particle that has initial position $z_0$. Notice that both $v_0$ and $a_0$ are functions of the initial position, $z_0$, and we shall see later that the derivatives of these parameters, $v' = \frac{dv_0}{dz_0}$ and $a' = \frac{da_0}{dz_0}$, are important in describing the relative dynamics of Lagrangian particles starting at different initial positions.
Moreover, the special case of $v_0 = 0$ everywhere, which we will call the cold-case, is commonly assumed in the literature, and we now examine this case in detail. First we consider the spreading charge distribution within the Eulerian perspective, where $z$ is an independent variable rather than describing the trajectory of a particle. We denote the charge distribution at all times to be $Q_\text{tot} \rho(z; t)$ with $\rho(z; t)$ a unitless, probability-like density and $Q_\text{tot}$ the total charge per unit area in the bunch. Since particle number is conserved, we have $$\begin{aligned} \rho(z; t)dz = \rho_0 dz_0 \end{aligned}$$ so that in the non-relativistic case derived above $$\begin{aligned} \rho(z,t) &= \rho_0 \left({dz\over dz_0}\right)^{-1} =\frac{ \rho_0}{1 + v_0' t +\frac{1}{2} a_0' t^2} \label{eq:1D density evolution}\end{aligned}$$ Notice that the derivative of the acceleration with respect to the initial position is directly proportional to the initial distribution, so that for initial distributions that are symmetric about the origin, $$\begin{aligned} \label{eq:first order acceleration} a_0' &= \frac{q Q_\text{tot}}{2 m \epsilon_0} \frac{d{\delta \sigma}}{dz_0}\nonumber\\ &= \frac{q Q_\text{tot}}{m \epsilon_0}\rho_0\end{aligned}$$ Plugging Eq. (\[eq:first order acceleration\]) into Eq. (\[eq:1D density evolution\]), we get $$\begin{aligned} \rho(z,t) &= { \rho_0 \over 1 + v_0' t + \frac{q Q_\text{tot}}{2 m \epsilon_0} \rho_0t^2}\label{eq:1D density evolution subbed}\\ \frac{d}{dz} \rho(z,t) &= \frac{ \rho_0'\left( 1 + v_0' t\right) - \rho_0v_0''t }{\left(1 + v_0' t + \frac{q Q_\text{tot}}{2 m \epsilon_0} \rho_0t^2\right)^3}\label{eq:1D rho derivative} \end{aligned}$$ where $\rho_0' = \frac{d\rho_0}{dz_0}$ and $v_0'' = \frac{d^2v_0}{dz_0^2}$. A detailed derivation of the second expression is in Appendix \[ap:1D appendix\]. For the cold-case, Eq.
(\[eq:1D density evolution subbed\]) reduces to the density evolution equation derived by Reed[@Reed:2006_short_pulse_theory] using different methods. Also, in the cold-case, Eq. (\[eq:1D rho derivative\]) simplifies into a proportionality between the initial slope of the distribution and the slope of the distribution at any later time. Therefore, a charge distribution that is initially at rest and unimodal, i.e. only a single initial location has $\rho_0' = 0$, never develops a dynamically generated second maximum. This explains why we should not expect to see an emergent shock in the longitudinal direction, provided the 1D model is applicable and cold initial conditions are valid. However, if particles in the initial state have an initial velocity that depends on initial position, i.e. $v_0(z_0)$, then density peaks will emerge at $z$ when $t = \frac{\rho_0'}{v_0''\rho_0 - v_0' \rho_0'}$, which occurs at positive time wherever $\rho_0'$ and $v_0'' \rho_0 - v_0' \rho_0'$ have the same sign (for example, $v_0'' \rho_0 > v_0' \rho_0'$ where $\rho_0' > 0$). In the special case $v_0'' \rho_0 = v_0' \rho_0'$, the distribution may be reframed as a cold-case distribution starting from $t = -\frac{m c_1 \epsilon_0}{q Q_\text{tot}}$ for some $z_0$-independent constant $c_1$ with velocity units when $\frac{m c_1^2 \epsilon_0}{q Q_\text{tot}} < 1$ or a distribution starting from a singularity with velocity distribution ${\tilde{v}}_0 = c_1 \left( \rho_0 - \frac{1}{2} \delta\sigma\right)$. As noted earlier, the function $a_0(z_0)$ is monotonically increasing as a function of distance from the center of the pulse, which means that electrons at the edges of the bunch always have larger accelerations away from the center of the pulse than electrons nearer the pulse center. Thus crossover, where an inner electron moves past an outer electron, cannot occur unless the initial velocities of inner electrons overcome this relative acceleration. A practical case where crossover may be designed is where the initial distribution has an initial velocity chirp, i.e.
$v_0 = c z_0$ where $c$ has units of inverse time. Intuitively we can expect that the velocity chirp needs to be negative in order for crossover to occur. To find the crossover time, we consider the time at which two electrons that were initially separated are at the same position at the same time. In this case it is straightforward to find the time at which crossover occurs by considering an electron at initial position $z_0$, and a second electron at initial position $z_0 + \delta z_0$. Before either of these electrons experiences a crossover, Eq. (\[eq:non-relativistic position\]) is valid, and setting $z(z_0, t_x) = z(z_0 + \delta z_0, t_x)$ reduces, to first order in $\delta z_0$, to $$\begin{aligned} At^2_x + Bt_x + 1 = 0\end{aligned}$$ where $A = \frac{q Q_\text{tot}}{2m \epsilon_0} \rho_0(z_0)$ and $B = \frac{dv_0}{dz_0}$. Solving the quadratic equation leads to the crossover time given by $$\begin{aligned} t_x &= \frac{-B \pm \sqrt{B^2 - 4A}}{2A}\end{aligned}$$ Since $A$ is always positive, the square root is real only if $B^2$ is larger than $4A$. Moreover, the time is only positive if $B$ is negative. Therefore crossover only occurs if the chirp has a negative slope, as expected on physical grounds. The conditions for tuning the chirp to produce crossover in 1D are then $$\begin{aligned} \frac{dv_0}{dz_0} &< 0\label{eq:negative chirp}\\ \left| \frac{dv_0}{dz_0}\right| &\ge \sqrt{\frac{2qQ_\text{tot}}{m\epsilon_0}\rho_0(z_0)}\label{eq:sufficient chirp}\end{aligned}$$ The results above are applicable to the spreading in the longitudinal direction of non-relativistic pancake bunches, because the expression Eq. (\[eq:non-relativistic position\]) is linear in acceleration. In that case, the position of a charged particle at any time can be calculated from a superposition of the contribution from the space-charge field and any external constant field such as a constant and uniform extraction field.
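The 1D results above lend themselves to a direct numerical check. The sketch below works in scaled units where $\frac{q Q_\text{tot}}{2 m \epsilon_0} = 1$ (so $a_0(z_0) = \delta\sigma(z_0)$) and uses a Gaussian initial profile; it verifies the cold-case Jacobian $1 + \rho_0 t^2$ of Eq. (\[eq:1D density evolution subbed\]) against a finite difference of the trajectories, and then exercises the crossover quadratic with illustrative (not experimentally motivated) chirp values.

```python
import math

# Scaled units: q*Q_tot/(2*m*eps0) = 1, so a0(z0) = delta_sigma(z0).
SIGMA = 1.0
rho0 = lambda z: math.exp(-z*z/(2*SIGMA**2)) / (SIGMA*math.sqrt(2*math.pi))
dsig = lambda z: math.erf(z / (SIGMA*math.sqrt(2)))    # integral of rho0 over [-z, z]

def z_of(z0, t):
    """Cold-case (v0 = 0) constant-acceleration trajectory."""
    return z0 + 0.5 * dsig(z0) * t*t

# Jacobian check: dz/dz0 should equal 1 + rho0(z0)*t^2 in these units.
t, h = 2.0, 1e-5
for z0 in (0.3, 1.0, 2.0):
    jac_fd = (z_of(z0 + h, t) - z_of(z0 - h, t)) / (2*h)
    jac_an = 1.0 + rho0(z0) * t*t
    assert abs(jac_fd - jac_an) < 1e-6       # cold case: Jacobian > 0, no crossover

# Crossover quadratic A*t^2 + B*t + 1 = 0, with A = rho0(z0) in these units
# and B the chirp slope dv0/dz0.
def crossover_time(A, B):
    disc = B*B - 4.0*A
    if B >= 0.0 or disc < 0.0:
        return None                          # no positive real crossover time
    return (-B - math.sqrt(disc)) / (2.0*A)  # earliest positive root

A = rho0(0.0)                                # density at the bunch center
assert crossover_time(A, +1.0) is None       # positive chirp: no crossover
assert crossover_time(A, -1.0) is None       # negative but below threshold 2*sqrt(A)
t_x = crossover_time(A, -2.0)                # beyond the threshold: crossover occurs
assert t_x is not None and abs(A*t_x*t_x - 2.0*t_x + 1.0) < 1e-12
```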
The space-charge field then leads to spreading of the pulse, while the extraction field leads solely to an acceleration of the center of mass of the entire bunch; the center-of-mass and spreading dynamics are therefore independent and can be decoupled. The extension of the description above to asymmetric charge density functions is also straightforward, as is the inclusion of an image field at the photocathode. Moreover, inclusion of these effects does not change Eq. (\[eq:1D density evolution subbed\]), Eq. (\[eq:1D rho derivative\]), nor the conclusions we have drawn from them. These results apply generally to all times before the initial crossover event within the evolution of the bunch, and once crossover occurs, the distribution can be reset with a new Eq. (\[eq:1D density evolution subbed\]) to follow further density evolution. As we show in the next section, the one-dimensional results do not apply, even qualitatively, to higher dimensions, as the constant acceleration situation is not valid and crossover can occur even with cold initial conditions, as demonstrated in the simulations presented in the previous section. In the next section we present fluid models in higher dimensions where the origin of these new effects is evident. Cylindrical and Spherical Models ================================ The methodology for the cylindrical and spherical systems is similar, so we develop the analysis concurrently. Consider a non-relativistic evolving distribution $Q_\text{tot} \rho(r,t)$, where $\rho(r,t)$ is again taken to be the unitless particle distribution and $Q_\text{tot}$ is again the total charge in the bunch.
In a system with cylindrical symmetry, the mean field equation of motion for a charge at $r \equiv r(t)$ is given by $$\label{eq:2D second derivative} \frac{d \vec{p}_\text{r}}{dt} = { q Q_\text{tot} \lambda (r,t)\over 2 \pi \epsilon_0 r}\hat{r},$$ where $\vec{p}_\text{r}$ is the momentum of a Lagrangian particle, $\lambda(r,t)$ is the cumulative distribution function (cdf) $$\lambda(r,t) = \int_0^{r} 2\pi \tilde{r} \rho(\tilde{r},t) d\tilde{r}$$ and $Q_\text{tot} \lambda(r,t)$ is the charge inside radius $r$. Analogously, in a system with spherical symmetry, the equation of motion for an electron at position $r$ is given by $$\label{eq:3D second derivative} \frac{d \vec{p}_\text{r}}{dt} = { q Q_\text{tot} P(r,t)\over4\pi \epsilon_0 r^2}\hat{r},$$ where $P(r,t) = \int_0^{r} 4\pi \tilde{r}^2 \rho(\tilde{r},t) d\tilde{r}$ is the cdf and $Q_\text{tot} P(r,t)$ is the charge within the spherical shell of radius $r$. Notice, $r$ in Eq. (\[eq:2D second derivative\]) denotes the cylindrical radius while $r$ in Eq. (\[eq:3D second derivative\]) represents the spherical radius. In both cases, before any crossover occurs, the cdf of a Lagrangian particle is constant in time. For simplicity we write $\lambda(r,t) = \lambda(r_0,0) \equiv \lambda_0$ and $P(r,t) = P(r_0,0) \equiv P_0$ for a particle starting at $r_0 \equiv r(0)$. In other words, since $Q_\text{tot} \lambda_0$ and $Q_\text{tot} P_0$ can be interpreted as the charge contained in the appropriate Gaussian surface, if we track the particle that starts at $r_0$, these contained charges should remain constant before crossover occurs. It is convenient to also define the average particle density to be $\bar{\rho}_0 = \frac{\lambda_0}{\pi r_0^2 }$ in the cylindrically symmetric case and $\bar{\rho}_0 = \frac{3 P_0}{4\pi r_0^3 }$ in the spherically symmetric case. Notice that these average particle densities are functions solely of $r_0$, and we will use these parameters shortly. Eq. (\[eq:2D second derivative\]) and Eq.
(\[eq:3D second derivative\]) may now be rewritten as, for the cylindrical and spherical cases respectively, $$\begin{aligned} \frac{d p_r}{dt} &= { q Q_\text{tot} \lambda_0\over 2 \pi \epsilon_0 r}\label{eq:2D equation of motion}, \\ \frac{d p_r}{dt} &= { q Q_\text{tot} P_0\over 4 \pi \epsilon_0 r^2}\label{eq:3D equation of motion}, \end{aligned}$$ which apply for the period of time before particle crossover. Note that unlike the one dimensional case, in two and three dimensional systems the acceleration of a Lagrangian particle is not constant, and has a time dependence through the time dependent position $r=r(r_0;t)$ term in the denominator. Since Eq. (\[eq:2D equation of motion\]) and Eq. (\[eq:3D equation of motion\]) represent the force on the particle in the cylindrical and spherical contexts, respectively, we can integrate over the particle’s trajectory to calculate the change in the particle’s energy. Integrating from $r_0, 0$ to $r, t$ gives for the cylindrical and spherical cases respectively $$\begin{aligned} E(r,t) - E(r_0,0) &= \frac{q Q_\text{tot} \lambda_0}{2 \pi \epsilon_0} \ln\left(\frac{r}{r_0}\right)\label{eq:2D energy}\\ E(r,t) - E(r_0,0) &= \frac{q Q_\text{tot} P_0}{4 \pi \epsilon_0} \left(\frac{1}{r_0} - \frac{1}{r}\right)\label{eq:3D energy}\end{aligned}$$ where the term on the right side of the equality can be interpreted as the change in the potential energy within the self-field of the bunch. These expressions are fully relativistic, and in the non-relativistic limit, we can derive implicit position-time relations for the particle by setting the energy difference equal to the non-relativistic kinetic energy $mv^2/2$, and integrating.
The details of this derivation have been placed in Appendix \[ap:time-location appendix\], and the resulting expressions in the cold-case for the cylindrical and spherical systems are respectively $$\begin{aligned} t &= \frac{\bar{\tau}_{p,0}}{\pi} \frac{r}{r_0} F\left(\sqrt{\ln\left(\frac{r}{r_0}\right)}\right)\label{eq:2D time}\\ t &= \sqrt{\frac{3}{2}}\frac{\bar{\tau}_{p,0}}{2\pi} \left( \tanh^{-1} \left( \sqrt{1 - \frac{r_0}{r}} \right) + \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\label{eq:3D time}\end{aligned}$$ where $F(\cdot)$ represents the Dawson function and $\bar{\tau}_{p,0}$ represents the plasma period determined from the initial conditions: $\bar{\tau}_{p,0} = 2\pi \sqrt{\frac{m\epsilon_0}{q Q_\text{tot} \bar{\rho}_0}} = \frac{2\pi}{{\bar{\omega}}_0}$, indicating that the appropriate time scale is the scaled plasma period, as seen in Figs. (\[fig:average distribution\]) and (\[fig:emergence time\]) for the case of pancake bunches used in ultrafast electron diffraction systems. Eq. (\[eq:3D time\]) and its derivation are equivalent to previous time-position relations reported in the literature[@Boyer:1989_kinetic_energy; @Last:1997_analytic_coulomb_explosion], although the previous work did not identify the plasma period as the key time-scale of Coulomb spreading processes and did not discuss cylindrical symmetry (Eq. (\[eq:2D time\])). The time-position relations detailed in the equations above depend solely on the amount of charge nearer to the origin than the point in question, i.e. on $Q_\text{tot} \bar{\rho}_0$, and not on the details of the distribution. Notice, however, that it is the difference between the time-position relationships of different locations where the details of the distribution become important and may cause neighboring particles to have interesting relative dynamics, leading to the possibility of shock formation in the density.
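The time-position relations can be checked against direct integration of the equations of motion. In scaled variables $u = r/r_0$ and $\tau = {\bar{\omega}}_0 t$, the cold equations of motion reduce to $u'' = 1/(2u)$ (cylindrical) and $u'' = 1/(3u^2)$ (spherical), and Eq. (\[eq:2D time\]) and Eq. (\[eq:3D time\]) become $\tau = 2 u F(\sqrt{\ln u})$ and $\tau = \sqrt{3/2}\,(\tanh^{-1}\sqrt{1 - 1/u} + u\sqrt{1-1/u})$. The sketch below (plain Python, with the Dawson function evaluated by a simple trapezoid rule) confirms that the closed forms and a brute-force integration agree to better than 1%; it is illustrative, not part of the published analysis.

```python
import math

def dawson(x, n=2000):
    """Dawson function F(x) = exp(-x^2) * integral_0^x exp(t^2) dt (trapezoid rule)."""
    if x == 0.0:
        return 0.0
    h = x / n
    s = 0.5 * (1.0 + math.exp(x*x)) + sum(math.exp((i*h)**2) for i in range(1, n))
    return math.exp(-x*x) * s * h

def tau_cyl(u):   # Eq. (2D time) in units of 1/omega_bar_0
    return 2.0 * u * dawson(math.sqrt(math.log(u)))

def tau_sph(u):   # Eq. (3D time) in units of 1/omega_bar_0
    w = math.sqrt(1.0 - 1.0/u)
    return math.sqrt(1.5) * (math.atanh(w) + u*w)

def tau_ode(accel, u_target, dt=1e-4):
    """Semi-implicit Euler integration of u'' = accel(u) from u = 1, u' = 0."""
    u, v, tau = 1.0, 0.0, 0.0
    while u < u_target:
        v += accel(u) * dt
        u += v * dt
        tau += dt
    return tau

for u in (2.0, 3.0, 5.0):
    assert abs(tau_ode(lambda x: 0.5/x, u) - tau_cyl(u)) < 0.01 * tau_cyl(u)
    assert abs(tau_ode(lambda x: 1.0/(3.0*x*x), u) - tau_sph(u)) < 0.01 * tau_sph(u)
```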
To translate the Lagrangian particle evolution equations above to an understanding of the dynamics of the charge density distribution, we generalize Eq. (\[eq:1D density evolution\]) to $$\begin{aligned} \rho(r,t) = \rho_0 \left( \left(\frac{r}{r_0}\right)^{d-1} {\frac{dr}{dr_0}}\right)^{-1}\label{eq:density evolution general}\end{aligned}$$ where $d$ is the dimensionality of the problem, i.e. 1 (planar symmetry), 2 (cylindrical symmetry), or 3 (spherical symmetry). The factor in the denominator, ${\frac{dr}{dr_0}}$, may be determined implicitly from the time-position relations above, and the details are presented in Appendix \[ap:spatial derivatives\]. The resulting expressions for the density dynamics, in the cold case, for the $d=2$ (cylindrical) and $d=3$ (spherical) cases are $$\begin{aligned} \frac{d r}{dr_0} &= \frac{r}{r_0} \left(1 + D_d(r_0) f_\text{d}\left(\frac{r}{r_0}\right)\right)\label{eq:dr over dr_0 no v_0}\end{aligned}$$ where $$D_d = D_d(r_0) = \frac{d}{2} \left(\frac{\rho_0}{\bar{\rho}_0} - 1\right),$$ is a function only of the initial position. The function $f_2$ for cylindrical systems is given by $$f_\text{2}\left(\frac{r}{r_0}\right) = 2 \sqrt{\ln\left(\frac{r}{r_0}\right)}F\left(\sqrt{\ln\left(\frac{r}{r_0}\right)}\right),$$ while for systems with spherical symmetry we find $$f_\text{3}\left(\frac{r}{r_0}\right) = \frac{r_0}{r} \sqrt{1 - \frac{r_0}{r}} \tanh^{-1}(\sqrt{1 - \frac{r_0}{r}}) + 1 - \frac{r_0}{r}.$$ Note that these are functions of the ratio $r/r_0$. The functions $f_d$ can also be written as mixed functions of $r$ and $t$, specifically $f_2\left(\frac{r}{r_0}\right) = \frac{r_0}{r}\sqrt{\ln\left(\frac{r}{r_0}\right)}{\bar{\omega}}_0t$ and $f_3\left(\frac{r}{r_0}\right) = \sqrt{\frac{2}{3}}\frac{r_0}{r}\sqrt{1 - \frac{r_0}{r}}{\bar{\omega}}_0t$. However, care must be taken when using these mixed forms, as $r$ is implicitly dependent on $t$. Here we work with these functions in terms of the relative position, $\frac{r}{r_0}$. Substituting Eq.
(\[eq:dr over dr\_0 no v\_0\]) into Eq. (\[eq:density evolution general\]), we find that the density evolution in systems with cylindrical ($d=2$) and spherical ($d=3$) symmetry can be compactly written as $$\begin{aligned} \rho(r;t) &= \left(\frac{r_0}{r}\right)^d \frac{\rho_0}{1 + D_d(r_0) f_\text{d}\left(\frac{r}{r_0}\right)}\label{eq:general density evolution}\end{aligned}$$ This expression is general and can be applied to arbitrary, spherically symmetric or cylindrically symmetric initial conditions. Analogously to the 1D case, the condition $\frac{dr}{dr_0} < 0$ results in particle crossover. However, as detailed in Eq. (\[eq:dr over dr\_0 no v\_0\]), the sign of $\frac{dr}{dr_0}$ depends on the sign of $1 + D_d(r_0) f_\text{d}\left(\frac{r}{r_0}\right)$. It is very interesting to note that $D_\text{d}(r_0)$ measures the deviation from a uniform distribution, so that the $D$ functions are solely functions of the initial conditions and are positive at locations where the local density is larger than the average density at $r_0$, and negative where the local density is smaller than the average density at $r_0$. On the other hand, the functions $f_d$ are functions of the evolution of the Lagrangian particle. One immediate consequence of Eq. (\[eq:general density evolution\]) is that for a uniform initial density distribution, for either cylindrical or spherical systems, the corresponding $D$ function is zero at every location where the original density is defined. Thus, the uniform density evolution in Eq. (\[eq:density evolution general\]) reduces to the generally recognized expressions: $\rho(r,t) \pi r^2= \rho_0 \pi r_0^2$ for the cylindrical case; and $\rho(r,t) \frac{4}{3} \pi r^3= \rho_0 \frac{4}{3} \pi r_0^3$ in the spherical case. We provide additional details for the uniform distribution in the next section. However, Eq. (\[eq:dr over dr\_0 no v\_0\]) is general for any distribution before particle crossover, not just the uniform distribution.
For a particle starting at position $r_0$ and having a deviation-from-uniform function $D_\text{d}(r_0)$, crossover occurs when the particle is at a position, $r$, that satisfies $f_\text{d}\left(\frac{r}{r_0}\right) = -1/D_d(r_0)$. Since every particle moves toward larger $r$, every particle will, at some time, attain each value of the function $f_\text{d}\left(\frac{r}{r_0}\right)$ along its trajectory. The character of the two and three dimensional $f$’s is similar, as can be seen in Fig. (\[fig:classifying D\]), where the value of the function is plotted against $\frac{r}{r_0}$. Specifically, both functions increase to a maximum and then asymptote towards 1 from above. This means that all density positions eventually experience uniform-like scaling, since $\lim_{r \to \infty} f_{d}(\frac{r}{r_0}) = 1$ results in Eq. (\[eq:density evolution general\]) simplifying to $\rho \pi r^2 = \rho_0 \frac{\pi r_0^2}{1 + D_\text{d}(r_0)}$ and $\rho \frac{4}{3} \pi r^3= \rho_0 \frac{\frac{4}{3} \pi r_0^3}{1 + D_\text{d}(r_0)}$ in the cylindrical and spherical cases, respectively, for large enough $r$. Notice, this uniform-like scaling does not mean that the distribution goes to the uniform distribution, which is what happens in 1D but need not happen under cylindrical and spherical geometries. The main difference between the cylindrical and spherical symmetries is that the cylindrical function’s maximum is larger than the spherical function’s maximum; we find $\max(f_\text{2}) \approx 1.28$ while $\max(f_\text{3}) \approx 1.07$. Moreover, the maximum of the cylindrical function occurs at a larger value of $\frac{r}{r_0}$ than that of the spherical function; specifically $r \approx 9.54 r_0$ instead of $r \approx 8.27 r_0$, respectively.
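The quoted maxima of $f_2$ and $f_3$ are easy to reproduce numerically. The sketch below scans $u = r/r_0$ on a grid (evaluating the Dawson function with a simple trapezoid rule) and recovers $\max(f_2) \approx 1.28$ near $r \approx 9.5\,r_0$ and $\max(f_3) \approx 1.07$ near $r \approx 8.3\,r_0$; grid step and cutoff are arbitrary choices.

```python
import math

def dawson(x, n=1000):
    # F(x) = exp(-x^2) * integral_0^x exp(t^2) dt, trapezoid rule
    if x == 0.0:
        return 0.0
    h = x / n
    s = 0.5 * (1.0 + math.exp(x*x)) + sum(math.exp((i*h)**2) for i in range(1, n))
    return math.exp(-x*x) * s * h

def f2(u):   # cylindrical: 2*sqrt(ln u)*F(sqrt(ln u))
    x = math.sqrt(math.log(u))
    return 2.0 * x * dawson(x)

def f3(u):   # spherical
    w = math.sqrt(1.0 - 1.0/u)
    return (w/u) * math.atanh(w) + 1.0 - 1.0/u

grid = [1.0 + 0.02*i for i in range(1, 1250)]          # u = r/r0 in (1, 26)
u2, m2 = max(((u, f2(u)) for u in grid), key=lambda p: p[1])
u3, m3 = max(((u, f3(u)) for u in grid), key=lambda p: p[1])

assert 1.27 < m2 < 1.30 and 9.0 < u2 < 10.0            # max(f2) ~ 1.28 near u ~ 9.54
assert 1.06 < m3 < 1.09 and 7.5 < u3 < 9.0             # max(f3) ~ 1.07 near u ~ 8.27
assert abs(f3(500.0) - 1.0) < 0.1                      # asymptotes toward 1 from above
```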
The first observation means cylindrical symmetry is more sensitive to the distribution than the spherical case, while the second observation indicates that if crossover is going to occur for a specific particle, it will occur before the $r$ value at which the corresponding $f$ function is maximum (i.e. $r \approx 9.54 r_0$ or $r \approx 8.27 r_0$); otherwise the particle will never experience crossover. From this reasoning, we obtain the earliest time for crossover by minimizing the time taken for a trajectory to reach the maximum of the function $f_d$, with the crossover constraint $ {\frac{dr}{dr_0}} = 0$. This may be achieved by using Lagrange multipliers or by running calculations for a series of values of $r/r_0$ to find the position at which crossover happens first. The mean field theory is valid before the minimum crossover time, and the results presented below are well below this time. Uniform and Gaussian Evolutions: Theory and Simulation ====================================================== In this section, the mean field predictions are compared to the N-particle and PIC simulations. First we present the evolution of the initially-at-rest cylindrically- and spherically-symmetric uniform distributions of $1.875 \times 10^7$ and $2 \times 10^4$ electrons within radii of 1 mm (see Fig. (\[fig:density evolution\])(a,b)). Note that in this fairly trivial case, crossover should not occur and the analytic results should be valid mean-field results for all time. Since $\rho_0 = {\bar{\rho}}_0$ in this case, $D_d(r_0) = 0$ and Eq. (\[eq:general density evolution\]) reduces to $$\begin{aligned} \rho(r;t) = \left(\frac{r_0}{r}\right)^d \rho_0(r_0)\label{eq:uniform evolution}\end{aligned}$$ Notice that $r$ can be solved for a specific time using Eq. (\[eq:2D time\]) or Eq.
(\[eq:3D time\]), depending on whether we are examining the cylindrically- or spherically-symmetric case, respectively, and due to ${\bar{\rho}}_0$’s independence from $r_0$, these equations need only be solved once for a given time to describe all $r$. Therefore, we may write $r = \alpha(t) r_0 \equiv \alpha r_0$, where $\alpha$ is independent of $r_0$, and we immediately see that Eq. (\[eq:uniform evolution\]) can be written as $\rho(r;t) = \alpha^{-d} \rho_0(r_0)$, suggesting that the density simply scales with time, as generally recognized by the community. We solve for $\alpha$ at 6 times, and present a comparison with both PIC and N-particle cylindrically-symmetric and spherically-symmetric simulations in Fig. (\[fig:density evolution\])(a,b). As can be seen, despite the presence of initial density fluctuations arising from sampling, the simulated results follow the analytic results exceedingly well. Specifically, the distributions simply expand while remaining essentially uniform, and the analytic mean field formulation correctly calculates the rate of this expansion. While this comparison is arguably trivial, it is reassuring to see that our general equation reduces to a form that captures these dynamics. Less trivial is the evolution of Gaussian distributions. We simulated $3.75 \times 10^7$ and $10^5$ electrons for the cylindrical and spherical cases, respectively, using $\sigma_r = 1$ mm. Solving for the minimum crossover time, we get approximately 44 ns for each distribution. Therefore, we simulate for 37.5 ns, which is well before any crossover events. For the Gaussian distributions we introduce the scaled radius variables $s = \frac{r}{\sqrt{2} \sigma_r}$ and $s_0 = \frac{r_0}{\sqrt{2}\sigma_r}$, so that from Eq.
(\[eq:dr over dr\_0 no v\_0\]), using the definition of $D_d$, for the cylindrical and spherical cases we have, $$\begin{aligned} D_2(s_0) &= \frac{(1 + s_0^2) e^{-s_0^2} - 1}{1 - e^{-s_0^2}}\\ D_3(s_0) &= \frac{(2 s_0^3 + 3 s_0)e^{-s_0^2} - \frac{3 \sqrt{\pi}}{2} \text{erf}(s_0)}{\sqrt{\pi}\text{erf}(s_0) - 2 s_0 e^{-s_0^2}}\end{aligned}$$ where erf is the well-known error function. Putting these expressions into Eq. (\[eq:general density evolution\]) we find for the cylindrical and spherical cases respectively $$\begin{aligned} \rho(s;t) &= \frac{\frac{s_0^2}{\pi s^2} e^{-s_0^2}}{1 + 2 \frac{(1 + s_0^2) e^{-s_0^2} - 1}{1 - e^{-s_0^2}} \sqrt{\ln\left(\frac{s}{s_0}\right)} F\left(\sqrt{\ln\left(\frac{s}{s_0}\right)}\right)}\\ \rho(s;t) &= \frac{\frac{s_0^3}{\pi^{\frac{3}{2}} s^3} e^{-s_0^2}}{1 + \frac{(2 s_0^3 + 3 s_0)e^{-s_0^2} - \frac{3 \sqrt{\pi}}{2} \text{erf}(s_0)}{\sqrt{\pi}\text{erf}(s_0) - 2 s_0 e^{-s_0^2}} \left(\frac{s_0}{s}\sqrt{1 - \frac{s_0}{s}} \tanh^{-1}\left(\sqrt{1 - \frac{s_0}{s}}\right) + 1 - \frac{s_0}{s}\right)}\end{aligned}$$ To find $r(t)/r_0$ we solve Eq. (\[eq:2D time\]) or Eq. (\[eq:3D time\]) for $\frac{r}{r_0}$, depending on whether we are examining the cylindrically- or spherically-symmetric case; for every time step, we calculate the predicted distribution at 5000 positions, $r$, corresponding to 5000 initial positions, $r_0$, evolved to time $t$. As can be seen in Fig. (\[fig:density evolution\]), both the cylindrically- and spherically-symmetric Gaussian distributions develop peaks similar to those seen in the simulations of expanding pancake bunches described in the first section of the paper. The figure also shows that both the PIC and the N-particle results match the analytic results very well. Notice, the primary differences between the cylindrically- and spherically-symmetric evolutions are in their rate of width expansion and the sharpness of the peak that forms, and both of these facets are captured by the analytic models.
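As a consistency check on the algebra, the closed forms for $D_2$ and $D_3$ above can be compared numerically against the defining relation $D_d = \frac{d}{2}\left(\frac{\rho_0}{\bar{\rho}_0} - 1\right)$ evaluated for the scaled Gaussian profiles ($\rho_0 = e^{-s_0^2}/\pi$ with weight $2\pi s$ in 2D, and $\rho_0 = e^{-s_0^2}/\pi^{3/2}$ with weight $4\pi s^2$ in 3D; the normalizations here are our own bookkeeping choice). A minimal sketch:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def D2_closed(s0):
    e = math.exp(-s0*s0)
    return ((1.0 + s0*s0)*e - 1.0) / (1.0 - e)

def D3_closed(s0):
    e = math.exp(-s0*s0)
    num = (2.0*s0**3 + 3.0*s0)*e - 1.5*SQRT_PI*math.erf(s0)
    return num / (SQRT_PI*math.erf(s0) - 2.0*s0*e)

def D_def(d, s0):
    """D_d = (d/2)*(rho0/rho_bar0 - 1) for the scaled Gaussian profiles."""
    e = math.exp(-s0*s0)
    if d == 2:
        rho0, lam0 = e/math.pi, 1.0 - e          # cdf lambda0 = 1 - exp(-s0^2)
        rho_bar = lam0 / (math.pi * s0*s0)
    else:
        rho0 = e / math.pi**1.5
        P0 = math.erf(s0) - 2.0*s0*e/SQRT_PI     # 3D cdf
        rho_bar = 3.0*P0 / (4.0*math.pi*s0**3)
    return 0.5*d*(rho0/rho_bar - 1.0)

for s0 in (0.25, 0.5, 1.0, 2.0):
    assert abs(D2_closed(s0) - D_def(2, s0)) < 1e-12
    assert abs(D3_closed(s0) - D_def(3, s0)) < 1e-12
# For a Gaussian the local density always lies below the enclosed average,
# so D < 0 everywhere away from the origin.
assert D2_closed(0.1) < 0.0 and D3_closed(0.1) < 0.0
```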
Conclusions =========== In this work, we have shown that a shock occurs in the transverse, but not longitudinal, direction during expansion of pancake-like charged particle distributions typical of those used in ultrafast electron microscope (UEM) systems. Fluid models for arbitrary initial distributions, Eq. (\[eq:1D density evolution subbed\]), a generalization of a model already in the literature, showed that the formation of such a shock should not occur for any cold initial distribution in one dimension. This result is consistent with the finding that typically no shock is visible in the longitudinal direction dynamics of UEM bunches; however, by tuning the initial velocity distribution it should be possible to generate a dynamic shock. We generalized the fluid theory to cylindrical and spherical symmetries, deriving implicit evolution equations for the charge density distributions, Eq. (\[eq:general density evolution\]). We analyzed these models for the advent of particle crossover, which occurs for some distributions even when the initial distribution is cold due to the behavior of the Coulomb force in higher dimensions, and we found that the time scales associated with the space charge expansion are proportional to the plasma period. One interesting detailed observation is that in the case of cylindrical symmetry, the pre-factor $\frac{\tau_p}{\pi}$ of Eq. (\[eq:2D time\]) is roughly 0.3 plasma periods, while for the spherically symmetric case the corresponding prefactor in Eq. (\[eq:3D time\]), $\sqrt{\frac{3}{2}}\frac{\tau_p}{2\pi}$, is roughly 0.2 plasma periods. Interestingly, beam relaxation has been independently found to occur at roughly 0.25 of the plasma period[@Wangler:1985_emittance_relaxation], which falls directly between our cylindrically and spherically symmetric models. The analytic theory predicts that the emergence of a shock is distribution dependent, and as expected, a uniform initial distribution does not produce a shock.
However, we showed that electron bunches that are initially Gaussian distributed produce a shock well before the advent of particle crossover, indicating that the emergence of a shock is well described by the fluid models presented here. This is consistent with the observation of a shock in N-particle simulations of the transverse expansion of UEM pancake bunches (see Figs. 1-4). To our knowledge, we have presented the first analytic derivation of the cold, single-species, non-neutral density evolution equations for cylindrical and spherical symmetries. These equations are general enough to handle any distribution under these symmetries, and can be used across specialties from accelerator technology, to electronics, to astrophysics. While simulation methods, like the N-particle and PIC codes used here, are general tools, the insights provided by these simple analytical equations should provide fast and easy first approximations for a number of calculations, while offering physical insights and parameter dependences that are more difficult to extract from purely computational studies. The analysis presented here has been carried out in the non-relativistic regime, which is valid only for cases of sufficiently low density where the shock occurs before the electrons reach relativistic velocities. For higher densities or other physical situations where the bunch becomes relativistic more quickly than the shock forms, a relativistic analysis is needed. In particular, for sufficiently high densities, i.e. approaching $10^7$ or more electrons in the pancake geometry used in this manuscript and typical in the ultrafast electron field, relativistic effects in the transverse direction become important and need to be considered. The extension to fully relativistic cases will be addressed in future work. We point out that a Child-Langmuir current should not exhibit these dynamic shocks except at the onset of the current, before the steady-state condition sets in.
Previous studies note the “hollowing” of a steady-state beam due to fringe field effects[@Luginsland:1996_child_langmuir_2d], but a steady-state Child-Langmuir current is largely independent of emission parameters, so this hollowing effect is not dynamical but part of the continuous emission process itself, and is therefore a very different mechanism from the dynamic shocks we see here. It would be interesting to study the combined effects of steady-state beam hollowing and dynamic shock formation in pancake bunches to determine if the combination of these processes provides new opportunities for optimization of beam properties. The analytic models presented here treat free expansion, whereas most applications have lattice elements to confine the bunches. Substantial work, in particular the particle-core model, has been very successful at predicting transverse particle halos of beams[@Gluckstern:1994_analytic_halo; @Wangler:1998_particle_core_review]. This model assumes a uniform-in-space beam-core density, the Kapchinsky-Vladimirsky (KV) distribution, chosen for its ease of theoretical treatment. Such an assumption is supported by the analysis presented here, as we find that the distribution within the shock is nearly uniform. However, the particle-core models do not treat the initial distribution as having a large density on the periphery. It would be interesting to revisit such treatments with this new perspective, although we would like to point out that the main effect the particle-core model attempts to capture, halos, occurs even after aperturing the beam[@Gluckstern:1994_analytic_halo]. Specifically, it should be possible to examine the effect of radial-focusing fields on the evolution of the three-dimensional distributions we have investigated here. The experimental work that motivated this analysis, [@Williams:2017_transverse_emittance], not only predicted a shock but also a correlated decrease in brightness near the periphery.
We emphasize that the mean-field equations used here explain the density shock only and do not provide a quantitative theory of the emittance and the Coulomb cooling achieved by removing the electrons in the shock. Specifically, the true emittance in the analytic models presented here remains zero for all time, as all particles at a radius $r$ have velocity $\sqrt{\frac{2}{3}}\, r_0\, {\bar{\omega}}_{p,0} \sqrt{1 - \frac{r_0}{r}}$, resulting in zero local spread in velocity space. This perfect relationship between velocity and position means that the true emittance is zero even if the relation is non-linear; however, in such a non-linear chirp case, the rms emittance will not remain zero despite the true emittance being zero. Moreover, the analytic model does capture some of the rms emittance growth, as a change in the distribution has a corresponding change on the variance measures used to determine the rms emittance. Specifically, a Gaussian distribution should have especially large emittance growth due to its evolution to a bimodal distribution, a distribution that is specifically problematic for variance measures. Such a large change in the emittance of the transverse Gaussian profile has been seen experimentally by Luiten[@Luiten:2004_uniform_ellipsoidal] and computationally by us[@Portman:2013_computational_characterization; @Portman:2014_image_charge]. On the other hand, the perfectly uniform distribution does not change its distribution throughout its evolution and therefore should have zero rms emittance growth, as the chirp exactly cancels out the expansion of the pulse at all times. Moreover, Luiten et al. found experimentally that the uniform distribution does have an increase in emittance, although less than in the Gaussian case[@Luiten:2004_uniform_ellipsoidal], an observation that is corroborated by our own work with PIC and N-particle calculations[@Portman:2013_computational_characterization; @Portman:2014_image_charge].
The analytic formulation of mean-field theory presented here provides new avenues to treating emittance growth, by treating fluctuations to these equations in a systematic manner. This analysis will be presented elsewhere. ***Acknowledgment*** This work was supported by NSF Grant 1625181, the College of Natural Science, the College of Communication Arts and Sciences, and the Provost’s office at Michigan State University. Computational resources were provided by the High Performance Computer Center at MSU. We thank Martin Berz and Kyoko Makino for their help with employing COSY as an N-particle code. 1D Density Derivative {#ap:1D appendix} ===================== In the main text, we argued $$\begin{aligned} \rho(z)= \rho_0(z_0) \left({dz\over dz_0}\right)^{-1}\end{aligned}$$ To determine the slope of the density, we take the derivative with respect to the z coordinate $$\begin{aligned} \frac{d}{dz} \rho(z) &= \frac{d}{dz_0} \left(\rho_0(z_0) \left(\frac{dz}{dz_0}\right)^{-1}\right)\left(\frac{dz}{dz_0}\right)^{-1}\nonumber\\ &= \frac{d}{dz_0} \left(\rho_0(z_0)\right) \left(\frac{dz}{dz_0}\right)^{-2} - \rho_0(z_0) \left(\frac{dz}{dz_0}\right)^{-3} \frac{d^2z}{dz_0^2}\nonumber\\ &= \frac{\frac{d}{dz_0} \left(\rho_0(z_0)\right) \frac{dz}{dz_0} - \rho_0(z_0)\frac{d^2z}{dz_0^2}}{\left(\frac{dz}{dz_0}\right)^{3}}\label{eq:generic z derivative}\end{aligned}$$ For the sake of conciseness, denote $\rho_0 = \rho_0(z_0)$, $\rho' = \frac{d}{dz} \rho(z)$, $\rho_0' = \frac{d}{dz_0} \rho_0$, $v_0' = \frac{d v_0}{d z_0}$, and $v_0'' = \frac{d^2 v_0}{d z_0^2}$. From the main text, we have $$\begin{aligned} \frac{dz}{dz_0} &= 1 + v_0' t + \frac{q}{2 m \epsilon_0} \rho_0(z_0)t^2\end{aligned}$$ and from this it is straightforward to show $$\begin{aligned} \frac{d^2z}{dz_0^2} &= v_0'' t + \frac{q}{2 m \epsilon_0} \rho_0't^2\end{aligned}$$ Subbing this back into Eq. 
(\[eq:generic z derivative\]), we get $$\begin{aligned} \rho' &= \frac{\rho_0'\left(1 + v_0' t + \frac{q}{2 m \epsilon_0} \rho_0t^2\right) - \rho_0\left(v_0'' t + \frac{q}{2 m \epsilon_0} \rho_0't^2\right)}{\left(1 + v_0' t + \frac{q}{2 m \epsilon_0} \rho_0t^2\right)^3}\nonumber\\ &= \frac{ \rho_0'\left( 1 + v_0' t\right) - \rho_0v_0''t }{\left(1 + v_0' t + \frac{q}{2 m \epsilon_0} \rho_0t^2\right)^3}\label{eq:1D rho derivative appendix} \end{aligned}$$ Derivation of Time-location Relations {#ap:time-location appendix} ===================================== Integral form ------------- Starting with the relativistic expression for the change in particle energy derived in the main text $$\begin{aligned} \text{cyl: }E(t) - E(0) &= \frac{q Q_{tot} \lambda_0}{2 \pi \epsilon_0} ln\left(\frac{r}{r_0}\right)\label{eq:2D energy}\\ \text{sph: }E(t) - E(0) &= \frac{q Q_{tot} P_0}{4 \pi \epsilon_0} \left(\frac{1}{r_0} - \frac{1}{r}\right)\label{eq:3D energy}\end{aligned}$$ we approximate the energy change with a change in non-relativistic kinetic energy starting from rest $$\begin{aligned} \text{cyl: }\frac{1}{2} m v^2 &= \frac{q Q_{tot} \lambda_0}{2 \pi \epsilon_0} ln\left(\frac{r}{r_0}\right)\label{eq:2D kinetic energy}\\ \text{sph: }\frac{1}{2} m v^2 &= \frac{q Q_{tot} P_0}{4 \pi \epsilon_0} \left(\frac{1}{r_0} - \frac{1}{r}\right)\label{eq:3D kinetic energy}\end{aligned}$$ where $v = \frac{d r}{dt}$ is the velocity of the particle at time $t$ in the two- or three-dimensional model, respectively, with the appropriate definition of $r$.
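As a quick consistency check of the cylindrical energy relation, one can integrate the equivalent radial equation of motion $\ddot r = a\lambda_0/(2r)$ (obtained from Eq. (\[eq:2D kinetic energy\]) by differentiation) and verify that $v^2 = a\lambda_0 \ln(r/r_0)$ holds along the trajectory; a minimal leapfrog sketch in scaled units with $a\lambda_0 = 1$ and $r_0 = 1$:

```python
import math

# Leapfrog (kick-drift-kick) integration of the cylindrical radial motion
# r'' = a*lam0/(2 r), in scaled units a*lam0 = 1, starting from rest at r0 = 1.
# Eq. (2D kinetic energy) then predicts v^2 = ln(r/r0) along the trajectory.
r, v, dt = 1.0, 0.0, 1e-4
for _ in range(20000):               # integrate up to t = 2
    v += 0.5 * dt / (2.0 * r)        # half kick
    r += dt * v                      # drift
    v += 0.5 * dt / (2.0 * r)        # half kick
print(abs(v * v - math.log(r)) < 1e-6)  # True
```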
Solving these equations for the velocity at time $t$, we get $$\begin{aligned} \text{cyl: } \frac{d r}{d t} &= \sqrt{\frac{q Q_{tot} \lambda_0}{\pi m \epsilon_0} ln\left(\frac{r}{r_0}\right)}\label{eq:2D velocity}\\ \text{sph: } \frac{d r}{d t} &= \sqrt{\frac{q Q_{tot} P_0}{2 \pi m \epsilon_0} \left(\frac{1}{r_0} - \frac{1}{r }\right)}\label{eq:3D velocity}\end{aligned}$$ Separating the variables and integrating, we obtain $$\begin{aligned} \text{cyl: } t &= \int_{r_0}^{r} \frac{d\tilde{r}}{ \sqrt{\frac{q Q_{tot} \lambda_0}{\pi m \epsilon_0} ln\left(\frac{\tilde{r}}{r_0}\right)}}\label{eq:2D time integral}\\ \text{sph: } t &= \int_{r_0}^{r} \frac{d\tilde{r}}{\sqrt{\frac{q Q_{tot} P_0}{2 \pi m \epsilon_0} \left(\frac{1}{r_0} - \frac{1}{\tilde{r}}\right)}}\label{eq:3D time integral}\end{aligned}$$ Defining $a = \frac{q Q_\text{tot}}{\pi m \epsilon_0}$, we rewrite Eq. (\[eq:2D time integral\]) and Eq. (\[eq:3D time integral\]) as $$\begin{aligned} \text{cyl: }t &= \int_{r_0}^{r} \frac{d\tilde{r}}{ \sqrt{a \lambda_0 ln\left(\frac{\tilde{r}}{r_0}\right)}}\label{eq:2D time integral subbed}\\ \text{sph: }t &= \int_{r_0}^{r} \frac{d\tilde{r}}{\sqrt{\frac{aP_0}{2r_0}}\sqrt{1 - \frac{r_0}{\tilde{r} }}}\label{eq:3D time integral subbed}\end{aligned}$$ Cylindrically-symmetric integral solution ----------------------------------------- We solve the cylindrically-symmetric integral first. Define $\tilde{u} = \sqrt{a \lambda_0 ln\left(\frac{\tilde{r}}{r_0}\right)}$. Solving this equation for $\tilde{r}$ in terms of $\tilde{u}$, we see that $\tilde{r} = r_0 e^{\frac{\tilde{u}^2}{a \lambda_0}}$.
It is also straightforward to see that $$\begin{aligned} d \tilde{u} &= \frac{1}{2} \frac{1}{ \sqrt{a \lambda_0 ln\left(\frac{\tilde{r}}{r_0}\right)}} \frac{a \lambda_0}{\tilde{r}} d \tilde{r}\nonumber\\ &= \frac{1}{ \sqrt{a \lambda_0 ln\left(\frac{\tilde{r}}{r_0}\right)}} \frac{a \lambda_0}{2 r_0} e^{\frac{-\tilde{u}^2}{a \lambda_0}} d \tilde{r} \nonumber\end{aligned}$$ Applying this change of coordinates to Eq. (\[eq:2D time integral subbed\]), we get $$\begin{aligned} \text{cyl: } t &= \int_{0}^{u} \frac{2 r_0}{a \lambda_0}e^{\frac{\tilde{u}^2}{a \lambda_0}} d \tilde{u}\nonumber\\ &= \frac{2 r_0}{a \lambda_0} \int_{0}^{u} e^{\frac{\tilde{u}^2}{a \lambda_0}} d \tilde{u}\nonumber\\ &= \frac{2 r_0}{\sqrt{a \lambda_0}} \int_{0}^{w} e^{\tilde{w}^2} d \tilde{w}\label{eq:2D time integral in w}\end{aligned}$$ where $u = \sqrt{a \lambda_0 ln\left(\frac{r}{r_0}\right)}$, $\tilde{w} = \frac{\tilde{u}}{\sqrt{a \lambda_0}}$, and $w = \sqrt{ln\left(\frac{r}{r_0}\right)}$. The remaining integral, $\int_{0}^{w} e^{\tilde{w}^2} d \tilde{w}$, can be written in terms of the well-studied Dawson function, $F(\cdot)$: $$\begin{aligned} \int_{0}^{w} e^{\tilde{w}^2} d \tilde{w} &= e^{w^2}F(w) \nonumber\\ &= \frac{r}{r_0} F\left(\sqrt{ln\left(\frac{r}{r_0}\right)}\right)\label{eq:integral as Dawson}\end{aligned}$$ Subbing Eq. (\[eq:integral as Dawson\]) back into Eq.
(\[eq:2D time integral in w\]) gives us our time-position relation $$\begin{aligned} \label{eq:2D time vs r} \text{cyl: } t &= \frac{2 r}{\sqrt{a \lambda_0}} F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\nonumber\\ &= 2 \frac{r}{r_0} \sqrt{\frac{\pi r_0^2 m \epsilon_0}{q Q_\text{tot}\lambda_0}} F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\nonumber\\ &= 2 \frac{r}{r_0} \sqrt{\frac{m \epsilon_0}{q Q_\text{tot}\frac{\lambda_0}{\pi r_0^2}}}F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\nonumber\\ &= 2 \frac{r}{r_0} \sqrt{\frac{m \epsilon_0}{q Q_\text{tot}{\bar{\rho}}_0}}F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\nonumber\\ &= \frac{2}{{\bar{\omega}}_{p,0}} \frac{r}{r_0} F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\nonumber\\ &= \frac{{\bar{\tau}}_{p,0}}{\pi} \frac{r}{r_0} F\left(\sqrt{ln\left(\frac{r}{r_0}\right)} \right)\end{aligned}$$ where ${\bar{\rho}}_0 = \frac{\lambda_0}{\pi r_0^2}$ and ${\bar{\omega}}_{p,0} = \sqrt{\frac{q Q_\text{tot}{\bar{\rho}}_0}{m \epsilon_0}} = \frac{2 \pi}{{\bar{\tau}}_{p,0}}$. Spherically-symmetric Integral Solution --------------------------------------- We solve the spherically-symmetric integral with an analogous approach. Define $\tilde{u} = \sqrt{1 - \frac{r_0}{\tilde{r} }}$ and solving for $\tilde{r}$ gives $\tilde{r} = \frac{r_0}{1-\tilde{u}^2}$. Thus $$\begin{aligned} d \tilde{u} &= \frac{1}{2} \frac{1}{\sqrt{1 - \frac{r_0}{\tilde{r} }}}\frac{r_0}{\tilde{r}^2 } d \tilde{r}\nonumber\\ &= \frac{1}{\sqrt{1 - \frac{r_0}{\tilde{r} }}} \frac{(1 - \tilde{u}^2)^2}{2 r_0} d\tilde{r}\nonumber\end{aligned}$$ Applying this change of coordinates to Eq. 
(\[eq:3D time integral subbed\]) with $u = \sqrt{1 - \frac{r_0}{r}}$, we get $$\begin{aligned} \text{sph: }t &= \sqrt{\frac{2 r_0}{aP_0}}\int_{0}^{u} \frac{2 r_0}{(1 - \tilde{u}^2)^2} d \tilde{u}\nonumber\\ &= 2 \sqrt{\frac{2 r_0^3}{aP_0}} \int_{0}^{u} \frac{1}{(1 - \tilde{u}^2)^2} d \tilde{u} \nonumber\\ &= 2 \sqrt{\frac{2 r_0^3}{aP_0}} \left( \frac{1}{2} \tanh^{-1} \left(\tilde{u}\right) + \frac{1}{2 } \frac{\tilde{u}}{1 - \tilde{u}^2}\right) \bigg \rvert_{\tilde{u} = 0}^{\tilde{u} = \sqrt{1 - \frac{r_0}{r}}}\nonumber\\ &= \sqrt{\frac{2 r_0^3}{a P_0}} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right) + \frac{\sqrt{1 - \frac{r_0}{r}}}{1 - 1 + \frac{r_0}{r}}\right)\nonumber\\ &= \sqrt{\frac{2 \pi r_0^3 m \epsilon_0}{q Q_\text{tot} P_0}} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right) + \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\nonumber\\ &= \sqrt{\frac{3}{2}} \sqrt{\frac{m \epsilon_0}{q Q_\text{tot} \frac{P_0}{\frac{4}{3}\pi r_0^3}}} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right)\right.\nonumber\\ &\quad\quad \left.+ \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\nonumber\\ &= \sqrt{\frac{3}{2}} \sqrt{\frac{m \epsilon_0}{q Q_\text{tot} {\bar{\rho}}_0}} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right)\right.\nonumber\\ &\quad\quad \left. + \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\nonumber\\ &= \sqrt{\frac{3}{2}} \frac{1}{{\bar{\omega}}_{p,0}} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right) + \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\nonumber\\ &= \sqrt{\frac{3}{2}} \frac{{\bar{\tau}}_{p,0}}{2 \pi} \left( \tanh^{-1} \left(\sqrt{1 - \frac{r_0}{r}}\right) + \frac{r}{r_0}\sqrt{1 - \frac{r_0}{r}}\right)\label{eq:3D time vs r}\end{aligned}$$ where the solution to the integral was obtained with Mathematica’s online tool[@Mathematica] and where ${\bar{\rho}}_0 = \frac{P_0}{\frac{4}{3}\pi r_0^3}$ and ${\bar{\omega}}_{p,0} = \sqrt{\frac{q Q_\text{tot}{\bar{\rho}}_0}{m \epsilon_0}} = \frac{2 \pi}{{\bar{\tau}}_{p,0}}$.
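Both closed-form antiderivatives used above can be spot-checked numerically. A small sketch (helper names are ours) verifying Eq. (\[eq:integral as Dawson\]) with a series evaluation of the Dawson function, and the $\int (1-\tilde u^2)^{-2}\, d\tilde u$ primitive quoted from Mathematica:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson quadrature on [a, b]
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

def dawson(x):
    # Maclaurin series F(x) = sum_k (-1)^k 2^k x^(2k+1) / (2k+1)!!,
    # adequate for the moderate arguments w = sqrt(ln(r/r0)) used here.
    total, term = 0.0, x
    for k in range(60):
        total += term
        term *= -2.0 * x * x / (2 * k + 3)
    return total

# Cylindrical: integral_0^w e^{t^2} dt = e^{w^2} F(w)
w = 1.3
assert abs(simpson(lambda t: math.exp(t * t), 0.0, w)
           - math.exp(w * w) * dawson(w)) < 1e-8

# Spherical: integral_0^u (1 - t^2)^{-2} dt = (1/2) atanh(u) + u / (2 (1 - u^2))
u = 0.8
assert abs(simpson(lambda t: (1.0 - t * t) ** -2, 0.0, u)
           - (0.5 * math.atanh(u) + 0.5 * u / (1.0 - u * u))) < 1e-9
print("both antiderivatives check out")
```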
Derivation of Derivatives with Respect to Initial Position {#ap:spatial derivatives} ========================================================== As noted in the main text, much of the physics of the distribution evolution in our models is captured in the term $\frac{dr}{dr_0}$. The procedure to derive the expressions for this derivative is to differentiate the time-position relations, Eq. (\[eq:2D time vs r\]) and Eq. (\[eq:3D time vs r\]), implicitly with respect to $r_0$ at fixed $t$. We carry out this calculation here. The Cylindrically-symmetric Derivative -------------------------------------- We begin by re-writing $t$ from Eq. (\[eq:2D time vs r\]) as $$\begin{aligned} t &= 2 r \sqrt{\frac{1}{a \lambda_0}} F\left(y \right)\end{aligned}$$ where $y = \sqrt{ln\left(\frac{r}{r_0}\right)} = \sqrt{\ln \left(r \right) - \ln\left({r_0}\right)}$. So $$\begin{aligned} \frac{dy}{dr_0} &= \frac{1}{2y} \left(\frac{1}{r} \frac{d r}{d r_0} - \frac{1}{r_0} \right)\label{eq:derivative 2D sqrt term} \end{aligned}$$ The Dawson function has the property $\frac{d}{dy} F(y) = 1 - 2 y F(y) = \left(\frac{1}{F(y)} - 2 y\right) F(y)$, and with the chain rule this becomes $\frac{d}{dr_0} F(y) = \left(\frac{1}{F(y)} - 2 y\right) \frac{dy}{dr_0} F(y)$. Using Eq.
(\[eq:derivative 2D sqrt term\]), this becomes $$\begin{aligned} \frac{d}{dr_0} F\left(y\right) &= \left(\frac{1}{F(y)} - 2 y\right)\frac{F(y)}{2y} \left(\frac{1}{r} \frac{d r}{d r_0} - \frac{1}{r_0} \right)\nonumber\\ &=F(y) \left(\frac{1}{2 y F(y)} - 1\right)\left(\frac{1}{r} \frac{d r}{d r_0} - \frac{1}{r_0} \right)\label{eq:derivative our Dawson} \end{aligned}$$ Also, note $$\begin{aligned} \frac{d}{dr_0}\frac{1}{\sqrt{\lambda_0}} &= -\frac{1}{2}\frac{1}{(\lambda_0)^{3/2}} \frac{d\lambda_0}{d r_0}\nonumber\\ &= -\frac{1}{2 \sqrt{\lambda_0}} \frac{d \ln(\lambda_0)}{d r_0}\label{eq:derivative sqrt a lambda_0} \end{aligned}$$ So $$\begin{aligned} 0 &= \frac{dt}{dr_0}\nonumber\\ &= \frac{t}{r} \frac{dr}{dr_0} - \frac{1}{2} t \frac{d \ln(\lambda_0)}{d r_0} + t \left(\frac{1}{2 y F(y)} - 1\right)\left(\frac{1}{r} \frac{d r}{d r_0} - \frac{1}{r_0} \right)\nonumber\\ &= \frac{t}{r} \frac{1}{2 y F(y)}\frac{dr}{dr_0} - t \left(\frac{1}{2} \frac{d \ln(\lambda_0)}{d r_0} + \left(\frac{1}{2 y F(y)} - 1\right) \frac{1}{r_0}\right)\end{aligned}$$ which gives $$\begin{aligned} \frac{dr}{dr_0} &= 2 y F(y) r \left(\frac{1}{2} \frac{d \ln(\lambda_0)}{d r_0} + \left(\frac{1}{2 y F(y)} - 1\right) \frac{1}{r_0}\right)\nonumber\\ &= \frac{r}{r_0}\left( 1 + \left(\frac{r_0}{2} \frac{d \ln(\lambda_0)}{d r_0} - 1 \right) 2 y F(y) \right)\nonumber\\ &= \frac{r}{r_0}\left( 1 + \left(\frac{r_0}{2} \frac{2 \pi r_0 \rho_0}{\lambda_0} - 1 \right) 2 y F(y) \right)\nonumber\\ &= \frac{r}{r_0}\left( 1 + \left(\frac{\rho_0}{\frac{\lambda_0}{\pi r_0^2}} - 1 \right) 2 y F(y) \right)\nonumber\\ &= \frac{r}{r_0} \left(1 + \left(\frac{\rho_0}{\bar{\rho}_0} - 1\right) 2 \sqrt{\ln\left(\frac{r}{r_0}\right)}F\left(\sqrt{\ln\left(\frac{r}{r_0}\right)}\right)\right) \end{aligned}$$ The Spherically-symmetric Derivatives ------------------------------------- We begin by re-writing $t$ from Eq. 
(\[eq:3D time vs r\]) as $$\begin{aligned} t &= \sqrt{\frac{2 r_0^3}{a P_0}} \left(\tanh^{-1} y + \frac{r}{r_0} y \right)\end{aligned}$$ where $y = \sqrt{1 - \frac{r_0}{r}}$. So $$\begin{aligned} \frac{dy}{dr_0} &= \frac{1}{2y}\left(\frac{r_0}{r^2} \frac{dr}{dr_0} - \frac{1}{r}\right)\nonumber\\ &= - \frac{1}{2yr}\left(1- \frac{r_0}{r} \frac{dr}{dr_0}\right) \end{aligned}$$ Hence $$\begin{aligned} \frac{d\tanh^{-1} y}{dr_0} &= \frac{1}{1 - y^2} \frac{dy}{dr_0}\nonumber\\ &= \frac{r}{r_0} \left(- \frac{1}{2yr}\left(1- \frac{r_0}{r} \frac{dr}{dr_0}\right)\right)\nonumber\\ &= - \frac{1}{2yr_0}\left(1- \frac{r_0}{r} \frac{dr}{dr_0}\right)\end{aligned}$$ and $$\begin{aligned} \frac{d\left(\frac{r}{r_0} y \right)}{dr_0} &= \frac{y}{r_0} \frac{dr}{dr_0} - \frac{r}{r_0^2}y + \frac{r}{r_0} \frac{dy}{dr_0}\nonumber\\ &= \frac{1}{r_0}\left( \left(\frac{dr}{dr_0} - \frac{r}{r_0}\right)y -\frac{1}{2y}\left(1- \frac{r_0}{r} \frac{dr}{dr_0}\right) \right)\nonumber\\ &= \frac{1}{2 y r_0}\left( 2 \left(\frac{dr}{dr_0} - \frac{r}{r_0}\right)\left(1 - \frac{r_0}{r}\right) - 1+ \frac{r_0}{r} \frac{dr}{dr_0}\right)\nonumber\\ \end{aligned}$$ Therefore $$\begin{aligned} \frac{d\left(\tanh^{-1} y + \frac{r}{r_0} y \right)}{dr_0} &= \frac{1}{y r_0}\left( \left(\frac{dr}{dr_0} - \frac{r}{r_0}\right)\left(1 - \frac{r_0}{r}\right)\right.\nonumber\\ &\quad\quad \left. - 1+ \frac{r_0}{r} \frac{dr}{dr_0}\right)\nonumber\\ &= \frac{1}{y r_0}\left( \frac{dr}{dr_0} - \frac{r}{r_0}\right)\nonumber\\ \end{aligned}$$ Also, similar to Eq. (\[eq:derivative sqrt a lambda\_0\]), $\frac{d}{dr_0}\frac{1}{\sqrt{P_0}} = -\frac{1}{2 \sqrt{P_0}} \frac{d \ln(P_0)}{d r_0}$. 
Putting this together we have $$\begin{aligned} 0 &= \frac{dt}{dr_0}\nonumber\\ &= \frac{3}{2} \frac{t}{r_0} - \frac{t}{2} \frac{d \ln(P_0)}{d r_0} + \sqrt{\frac{2 r_0^3}{a P_0}} \frac{d\left(\tanh^{-1} y + \frac{r}{r_0} y \right)}{dr_0}\nonumber\\ &= \frac{3}{2} \frac{t}{r_0} - \frac{t}{2} \frac{d \ln(P_0)}{d r_0} + \sqrt{\frac{2 r_0^3}{a P_0}} \frac{1}{y r_0}\left( \frac{dr}{dr_0} - \frac{r}{r_0} \right)\end{aligned}$$ Solving for $\frac{dr}{dr_0}$ we get $$\begin{aligned} \frac{dr}{dr_0} &= \frac{r}{r_0} - \frac{3y}{2} \left(\tanh^{-1} y + \frac{r}{r_0} y \right) \nonumber\\ &\quad\quad + \frac{y r_0}{2}\left(\tanh^{-1} y + \frac{r}{r_0} y \right)\frac{d \ln(P_0)}{d r_0}\nonumber\\ &= \frac{r}{r_0} + \frac{3y}{2}\left( \frac{r_0}{3}\frac{d \ln(P_0)}{d r_0} - 1\right) \left(\tanh^{-1} y + \frac{r}{r_0} y \right)\nonumber\\ &= \frac{r}{r_0} \left(1 + \frac{3}{2}\left(\frac{r_0}{3}\frac{4 \pi r_0^2 \rho_0}{P_0} - 1\right) \left(\frac{r_0}{r} y \tanh^{-1} y + y^2 \right)\right)\nonumber\\ &= \frac{r}{r_0} \left(1 + \frac{3}{2}\left(\frac{\rho_0}{\frac{P_0}{\frac{4}{3}\pi r_0^3}} - 1\right)\left(\frac{r_0}{r} \sqrt{1 - \frac{r_0}{r}} \tanh^{-1}\left(\sqrt{1 - \frac{r_0}{r}}\right)\right.\right.\nonumber\\ &\quad\quad \left.\left. + 1 - \frac{r_0}{r} \right)\right)\nonumber\\ &= \frac{r}{r_0} \left(1 + \frac{3}{2}\left(\frac{\rho_0}{{\bar{\rho}}_0} - 1\right)\left(\frac{r_0}{r} \sqrt{1 - \frac{r_0}{r}} \tanh^{-1}\left(\sqrt{1 - \frac{r_0}{r}}\right) \right.\right. \nonumber\\ &\quad\quad \left.\left. + 1 - \frac{r_0}{r} \right)\right)\end{aligned}$$
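As a numerical sanity check of the cylindrically-symmetric derivative above, one can fix $t$, invert the time-position relation Eq. (\[eq:2D time vs r\]) for $r(r_0)$ by bisection, and compare a central finite difference against the closed form. The sketch below uses scaled units $a = 1$ with a model Gaussian profile $\rho_0(r) = e^{-r^2}$, so that $\lambda_0(r_0) = \pi(1 - e^{-r_0^2})$ and $\rho_0/{\bar{\rho}}_0 = r_0^2 e^{-r_0^2}/(1 - e^{-r_0^2})$ (all function names are ours):

```python
import math

def dawson(x):
    # Maclaurin series for the Dawson function F(x)
    total, term = 0.0, x
    for k in range(60):
        total += term
        term *= -2.0 * x * x / (2 * k + 3)
    return total

def lam0(r0):
    # enclosed line density of the model Gaussian profile rho_0(r) = e^{-r^2}
    return math.pi * (1.0 - math.exp(-r0 * r0))

def time_of(r, r0):
    # cylindrical relation t = (2 r / sqrt(a lam0)) F(sqrt(ln(r/r0))), a = 1
    w = math.sqrt(math.log(r / r0))
    return 2.0 * r * dawson(w) / math.sqrt(lam0(r0))

def r_of(r0, t, hi=100.0):
    # invert time_of for r by bisection (t increases with r at fixed r0)
    lo = r0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if time_of(mid, r0) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star, r0 = 3.0, 1.0
r = r_of(r0, t_star)
w = math.sqrt(math.log(r / r0))
ratio = r0 * r0 * math.exp(-r0 * r0) / (1.0 - math.exp(-r0 * r0))  # rho_0 / rho_bar_0
analytic = (r / r0) * (1.0 + (ratio - 1.0) * 2.0 * w * dawson(w))
h = 1e-4
numeric = (r_of(r0 + h, t_star) - r_of(r0 - h, t_star)) / (2.0 * h)
print(abs(numeric - analytic) < 1e-3)  # True
```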
--- abstract: 'One studies Cremona monomial maps by combinatorial means. Among the results is a simple integer matrix theoretic proof that the inverse of a Cremona monomial map is also defined by monomials of fixed degree, and moreover, the set of monomials defining the inverse can be obtained explicitly in terms of the initial data. A neat consequence is drawn for the plane Cremona monomial group, in particular the known result saying that a plane Cremona (monomial) map and its inverse have the same degree. Included is a discussion about the computational side and/or implementation of the combinatorial invariants stemming from these questions.' address: - | Departamento de Matemática\ Universidade Federal de Pernambuco\ 50740-540 Recife\ Pe\ Brazil - | Departamento de Matemáticas\ Centro de Investigación y de Estudios Avanzados del IPN\ Apartado Postal 14–740\ 07000 Mexico City, D.F. author: - Aron Simis - 'Rafael H. Villarreal' title: Combinatorics of Cremona monomial maps --- [^1] Introduction ============ The expression “birational combinatorics” has been introduced in [@birational-linear] to mean the combinatorial theory of rational maps ${\mathbb{P}}^{n-1}\dasharrow {\mathbb{P}}^{m-1}$ defined by monomials, along with natural integer arithmetic criteria for such maps to be birational onto their image varieties. As claimed there, both the theory and the criteria were intended to be a simple transcription of the initial geometric data. Yet another goal is to write characteristic-free results. Thus, here too one works over an arbitrary field in order that the theory be essentially independent of the nature of the field of coefficients, especially when dealing with squarefree monomials. In this paper, we stick to the case where $m=n$ and deal with Cremona maps. An important step has been silently taken for granted in the background of [@birational-linear Section 5.1.2], namely, that the inverse of a Cremona monomial map is also defined by monomials.
To be fair, this result can be obtained via the method of [@birational-linear Section 3] together with the criterion of [@bir2003]; however, the latter gives no hint on how to derive explicit data from the given ones. Here we add a few steps to the theory by setting up a direct way to convert geometric results into numeric or combinatorial data regardless of the nature of the ground field. The conversion allows for an incursion into some of the details of the theory of plane Cremona maps defined by monomials. In particular, it is shown that the group of such maps under composition is completely understood without recourse to the known results about general plane Cremona maps. Thus, one shows that this group is generated by two basic monomial quadratic maps, up to reordering of variables in the source and the target. The result is not a trivial consequence of Noether’s theorem since the latter requires composing with projective transformations, which is out of the picture here. Moreover, the known proofs of Noether’s theorem (see, e.g., [@alberich]) reduce to various special situations, passing through the celebrated de Jonquières maps, which are rarely monomial. The well-known result that a plane Cremona map and its inverse have the same degree is shown here for such monomial maps by an easy numerical counting. The argument for general plane Cremona maps is not difficult but requires quite a bit of geometric insight and preparation (see, e.g., [@alberich Proposition 2.1.12]). Monomial Cremona maps have been dealt with in [@pan] and in [@Kor], but the methods and some of the goals are different and have not been drawn upon here. Tools of integer linear algebra =============================== Recall that if $a=(a_1,\ldots,a_n)\in {\mathbb R}^n$, its [*support*]{} is defined as ${\rm supp}(a)=\{i\, |\, a_i\neq 0\}$. Note that we can write $a=a^+-a^-$, where $a^+$ and $a^-$ are two non-negative vectors with disjoint support.
The vectors $a^+$ and $a^-$ are called the positive and negative parts of $a$, respectively. Following a familiar notation we write $|a|=a_1+\cdots+a_n$. The following result is essentially embodied in the treatment given in the preliminaries of [@birational-linear]. For easy reference we chose to isolate it along with its complete proof. \[starbucks-upstairs\] Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then $A^{-1}(e_i-e_j)\in\mathbb{Z}^n$ for all $i,j$. Fixing indices $i,j$, there are $\lambda_1,\ldots,\lambda_n$ in $\mathbb{Q}$ such that $A^{-1}(e_i-e_j)=\sum_{k=1}^n\lambda_ke_k$. Notice that $A^{-1}(e_i)$ is the $i$[*th*]{} column of $A^{-1}$. Set $\mathbf{1}=(1,\ldots,1)$. Since $\mathbf{1}A=d\mathbf{1}$, we get $\mathbf{1}/d=\mathbf{1}A^{-1}$. Therefore $|A^{-1}(e_i)|=|A^{-1}(e_j)|=1/d$ and $\sum_k\lambda_k=0$. Then we can write $$A^{-1}(e_i-e_j)=\sum_{k=2}^n\lambda_k(e_k-e_1)\ \Longrightarrow\ e_i-e_j=\sum_{k=2}^n\lambda_k(v_k-v_1).$$ Thus there is $0\neq s\in\mathbb{N}$ such that $s(e_i-e_j)$ belongs to $\mathbb{Z}\{v_1-v_k\}_{k=2}^n$, the subgroup of $\mathbb{Z}^n$ generated by $\{v_1-v_k\}_{k=2}^n$. By [@birational-linear Lemma 2.2 and Theorem 2.6], the quotient group $\mathbb{Z}^n/\mathbb{Z}\{v_1-v_k\}_{k=2}^n$ is free, in particular it has no nonzero torsion elements. Then we can write $$e_i-e_j=\eta_2(v_2-v_1)+\cdots+\eta_n(v_n-v_1),$$ for some $\eta_i$’s in $\mathbb{Z}$. Since $\mathbb{Z}\{v_1-v_k\}_{k=2}^n$ is also free (of rank $n-1$), the vectors $v_2-v_1,\ldots,v_n-v_1$ are linearly independent. Thus $\lambda_k=\eta_k\in \mathbb{Z}$ for all $k\geq 2$, hence ultimately $A^{-1}(e_i-e_j)\in\mathbb{Z}^n$. Next we state our main result of integer linear algebra nature. Its geometric translation and applications will be given in Section \[cremona\].
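The lemma is easy to test on examples with exact rational arithmetic. A minimal sketch (the helper `solve` is ours), run on the exponent matrix of the standard quadratic map $(yz, xz, xy)$, for which $d = 2$ and $\det(A) = 2$:

```python
from fractions import Fraction

def solve(A, b):
    # exact Gauss-Jordan elimination over the rationals: returns A^{-1} b
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = 1 / M[col][col]
        M[col] = [x * inv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# Columns of A are the exponent vectors of (yz, xz, xy): |v_i| = d = 2, det(A) = 2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
x = solve(A, [1, -1, 0])                   # A^{-1}(e_1 - e_2)
assert all(v.denominator == 1 for v in x)  # integral, as the lemma predicts
print([int(v) for v in x])                 # [-1, 1, 0]
```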
\[ipn-ufpe\] Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then there are unique vectors $\beta_1,\ldots,\beta_n,\gamma \in \mathbb{N}^n$ such that the following two conditions hold[:]{} 1. $A\beta_i=\gamma+e_i$ for all $i$, where $\beta_i,\gamma$ and $e_i$ are regarded as column vectors$\,$[;]{} 2. The matrix $B$ whose columns are $\beta_1,\ldots,\beta_n$ has at least one zero entry in every row. Moreover, $\det(B)=\pm (|\gamma|+1)/d=\pm |\beta_i|$ for all $i$. First we show the uniqueness. Assume that $\beta_1',\ldots,\beta_n',\gamma'$ is a set of vectors in $\mathbb{N}^n$ such that: (a’) $A\beta_i'=\gamma'+e_i$ for all $i$, and (b’) The matrix $B'$ whose column vectors are $\beta_1',\ldots,\beta_n'$ has at least one zero entry in every row. Let $\Delta=(\Delta_i)$ and $\Delta'=(\Delta_i')$ be non-negative vectors such that $A^{-1}(\gamma-\gamma')=\Delta'-\Delta$. Then from (a) and (a’) we get $$\label{starbucks} \beta_i-\beta_i'=A^{-1}(\gamma-\gamma')=\Delta'-\Delta,\ \forall\, i\ \Longrightarrow\ \beta_{ik}-\beta_{ik}'=\Delta_k'-\Delta_k,\ \forall\, i,k,$$ where $\beta_i=(\beta_{i1},\ldots,\beta_{in})$ and $\beta_i'=(\beta_{i1}',\ldots,\beta_{in}')$. It suffices to show that $\Delta=\Delta'$. If $\Delta_k'>\Delta_k$ for some $k$, then, by Eq. (\[starbucks\]), we obtain $\beta_{ik}>0$ for $i=1,\ldots,n$, which contradicts (b). Similarly if $\Delta_k'<\Delta_k$ for some $k$, then, by Eq. (\[starbucks\]), we obtain $\beta_{ik}'>0$ for $i=1,\ldots,n$, which contradicts (b’). Thus $\Delta_k=\Delta_k'$ for all $k$, i.e., $\Delta=\Delta'$. Next we prove the existence of $\beta_1,\ldots,\beta_n$ and $\gamma$. By Lemma \[starbucks-upstairs\], for $i\geq 2$ we can write $$0\neq\alpha_i=A^{-1}(e_1-e_i)=\alpha_i^+-\alpha_i^-$$ where $\alpha_i^+$ and $\alpha_i^-$ are in $\mathbb{N}^n$. 
Notice that $\alpha_i^+\neq 0$ and $\alpha_i^-\neq 0$. Indeed the sum of the entries of $A^{-1}(e_i)$ is equal to $1/d$. Thus $|\alpha_i|=|\alpha_i^+|-|\alpha_i^-|=0$, and consequently the positive and negative parts of $\alpha_i$ are both nonzero for $i\geq 2$. The vector $\alpha_i^+$ can be written as $\alpha_i^+=(\alpha_{i1}^+,\ldots,\alpha_{in}^+)$ for $i\geq 2$. For $1\leq k\leq n$ consider the integers given by $$m_k=\max_{2\leq i\leq n}\{\alpha_{ik}^+\}$$ and set $\beta_1=(m_1,\ldots,m_n)$. Since $\beta_1\geq\alpha_i^+$, for each $i\geq 2$ there is $\theta_i\in\mathbb{N}^n$ such that $\beta_1=\theta_i+\alpha_i^+$. Therefore $$\alpha_i=A^{-1}(e_1-e_i)=\alpha_i^+-\alpha_i^-=\beta_1-(\theta_i+\alpha_i^-).$$ We set $\beta_i=\theta_i+\alpha_i^-$ for $i\geq 2$. Since we have $A\beta_1-e_1=A\beta_i-e_i$ for $i\geq 2$, it follows readily that $A\beta_1-e_1\geq 0$ (make $i=2$ in the last equality and compare entries). Thus, setting $\gamma:=A\beta_1-e_1$, it follows that $\beta_1,\ldots,\beta_n$ and $\gamma$ satisfy (a). If each row of $B$ has some zero entry the proof of the existence is complete. If every entry of a row of $B$ is positive we subtract the vector $\mathbf{1}=(1,\ldots,1)$ from that row and change $\gamma$ accordingly so that (a) is still satisfied. Applying this argument repeatedly we get a sequence $\beta_1,\ldots,\beta_n,\gamma$ satisfying (a) and (b). To see the last part of the assertion, note that condition (a) is equivalent to the equality $AB=\Gamma+I$, where $\Gamma$ is the matrix all of whose columns are equal to $\gamma$. Since $A\beta_i=\gamma+e_i$ for all $i$ and $\det(B)=\pm \det (\Gamma+I)/d$ it suffices to show that $\det(\Gamma+I)=|\gamma|+1$. The latter is a classical calculation that can be performed in various ways. One manner is to express this determinant as the sum of the determinants of two simpler matrices of the same size and proceed by recurrence.
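This special case is easily spot-checked with exact arithmetic; a small sketch (the helper `det` is ours) building $\Gamma$ with all columns equal to a random non-negative $\gamma$:

```python
from fractions import Fraction
import random

def det(M):
    # exact determinant via Gaussian elimination over the rationals
    n = len(M)
    M = [[Fraction(x) for x in row] for row in M]
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    out = Fraction(sign)
    for i in range(n):
        out *= M[i][i]
    return out

random.seed(1)
for _ in range(25):
    n = random.randint(2, 5)
    gamma = [random.randint(0, 6) for _ in range(n)]
    # Gamma has every column equal to gamma, so trace(Gamma) = |gamma|
    GI = [[gamma[i] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    assert det(GI) == sum(gamma) + 1
print("det(Gamma + I) = |gamma| + 1")
```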
One has a more general statement which is given below for lack of an appropriate reference with a clear proof: \[Gordan\] Let $\Gamma=(\gamma_{i,j})$ be an $n\times n$ square matrix over an arbitrary commutative ring, and let $D={\rm diag}(d_1,\ldots, d_n)$ be a diagonal matrix over the same ring. Then $$\begin{aligned} \det (\Gamma+D)&=&\det(\Gamma) \\ &+&\sum_id_i \Delta _{[n]\setminus \{i\}}+ \sum_{1\leq i_1<i_2\leq n}d_{i_1}d_{i_2} \Delta_{[n]\setminus\{i_1,i_2\}} \\ &+&\cdots + \sum_{1\leq i_1<\cdots <i_{n-1}\leq n}d_{i_1}\cdots d_{i_{n-1}} \Delta_{[n]\setminus\{i_1,\ldots,i_{n-1}\}} \\ & +& \det D,\end{aligned}$$ where $[n]=\{1,\ldots, n\}$ and $\Delta_{[n]\setminus\{i_1,\ldots,i_{k}\}}$ denotes the principal $(n-k)\times (n-k)$-minor of $\Gamma$ with rows and columns $[n]\setminus \{i_1,\ldots,i_{k}\}$. In particular, if $\Gamma$ has rank at most one and $D$ is the identity matrix, one gets $\det(\Gamma+I)={\rm trace}(\Gamma)+1$, as required above. For the proof of the lemma, one notes that, due to the multilinearity of the determinant, the following equality of determinants holds: [$$\begin{aligned} \det \left(\begin{matrix} \gamma_{1,1}+d_1&\gamma_{1,2}&\cdots & \gamma_{1,n}\cr \gamma_{2,1}&\gamma_{2,2}+d_2&\cdots & \gamma_{2,n}\cr \vdots &\vdots& &\vdots\cr \gamma_{n,1}&\gamma_{n,2}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right)&=& \det \left(\begin{matrix} \gamma_{1,1}&\gamma_{1,2}&\cdots & \gamma_{1,n}\cr \gamma_{2,1}&\gamma_{2,2}+d_2&\cdots & \gamma_{2,n}\cr \vdots &\vdots& &\vdots\cr \gamma_{n,1}&\gamma_{n,2}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right) \\[10pt] &+& \det \left(\begin{matrix} d_1&\gamma_{1,2}&\cdots & \gamma_{1,n}\cr 0&\gamma_{2,2}+d_2&\cdots & \gamma_{2,n}\cr \vdots &\vdots& &\vdots\cr 0&\gamma_{n,2}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right)\end{aligned}$$ ]{} The recurrence for the second determinant on the right-hand side is obvious, while for the first determinant we repeat the procedure, thus writing [$$\begin{aligned} \det 
\left(\begin{matrix} \gamma_{1,1}&\gamma_{1,2}&\gamma_{1,3}&\cdots & \gamma_{1,n}\cr \gamma_{2,1}&\gamma_{2,2}+d_2& \gamma_{2,3} &\cdots & \gamma_{2,n}\cr \gamma_{3,1}&\gamma_{3,2}&\gamma_{3,3}+d_3 &\cdots & \gamma_{3,n}\cr \vdots &\vdots& \vdots &&\vdots\cr \gamma_{n,1}&\gamma_{n,2}&\gamma_{n,3}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right)&=& \det \left(\begin{matrix} \gamma_{1,1}&\gamma_{1,2}&\gamma_{1,3}&\cdots & \gamma_{1,n}\cr \gamma_{2,1}&\gamma_{2,2}&\gamma_{2,3}&\cdots & \gamma_{2,n}\cr \gamma_{3,1}&\gamma_{3,2}&\gamma_{3,3}+d_3 &\cdots & \gamma_{3,n}\cr \vdots &\vdots &\vdots &&\vdots\cr \gamma_{n,1}&\gamma_{n,2}& \gamma_{n,3}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right) \\[10pt] &+& \det \left(\begin{matrix} \gamma_{1,1}&0 &\gamma_{1,3}&\cdots & \gamma_{1,n}\cr \gamma_{2,1}&d_2 &\gamma_{2,3}&\cdots & \gamma_{2,n}\cr \gamma_{3,1}&0&\gamma_{3,3}+d_3 &\cdots & \gamma_{3,n}\cr \vdots &\vdots &\vdots & &\vdots\cr \gamma_{n,1}&0& \gamma_{n,3}&\cdots & \gamma_{n,n}+d_n \end{matrix}\right)\end{aligned}$$ ]{} and so forth. This gives the proof of the lemma and completes the proof of Theorem \[ipn-ufpe\]. Bridges and Ryser [@ryser] (cf. [@cornu-book Theorem 4.4]) studied the equation $AB=\Gamma+I$, when $A,B$ are $\{0,1\}$ matrices and $\Gamma$ is the matrix with all its entries equal to $1$. They show that if equality occurs, then each row and column of $A$ has the same number $r$ of ones, each row and column of $B$ has the same number $s$ of ones with $rs=n+1$, and $AB=BA$. For the applications we have in mind here (see Section \[cremona\]) this case is uninteresting; furthermore, the commutation $AB=BA$ will seldom be fulfilled. The proof of Theorem \[ipn-ufpe\] provides an algorithm to compute the vectors $\beta_1,\ldots,\beta_n$ and $\gamma$ (see Example \[propedeutico\] and the proof of Proposition \[degree\_of\_inverse\] for specific illustrations of the algorithm). Other means to compute these vectors will be discussed in Section \[computing\]. 
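The determinant lemma can also be checked numerically for sample data (the helper names and the sample entries below are our own choice):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            sign = -sign
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    out = Fraction(sign)
    for c in range(n):
        out *= M[c][c]
    return out

def gordan_rhs(G, d):
    """Sum over index subsets S of prod(d_i, i in S) times the
    complementary principal minor of G (the empty minor has det 1)."""
    n = len(G)
    total = Fraction(0)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            keep = [i for i in range(n) if i not in S]
            minor = [[G[i][j] for j in keep] for i in keep]
            prod = Fraction(1)
            for i in S:
                prod *= d[i]
            total += prod * det(minor)
    return total

G = [[2, 5, 1], [0, 3, 4], [7, 1, 6]]     # arbitrary sample matrix
d = [3, -2, 5]                            # arbitrary sample diagonal
GD = [[G[i][j] + (d[i] if i == j else 0) for j in range(3)] for i in range(3)]
assert det(GD) == gordan_rhs(G, d)

# rank-one special case used above: det(Gamma + I) = trace(Gamma) + 1
gam = [3, 1, 0]
Gamma = [[g] * 3 for g in gam]            # every column equals gam
GI = [[Gamma[i][j] + (i == j) for j in range(3)] for i in range(3)]
assert det(GI) == sum(gam) + 1
```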
\[propedeutico\] Consider the following matrix $A$ and its inverse: $$A=\left(\begin{matrix} d&d-1&0\cr 0&1&d-1\cr 0&0&1 \end{matrix}\right);\ \ \ \ A^{-1}=\frac{1}{d}\left(\begin{matrix} 1&1-d&(d-1)^2\cr 0&d&d(1-d)\cr 0&0&d \end{matrix}\right).$$ To compute the $\beta_i$’s and $\gamma$ we follow the proof of Theorem \[ipn-ufpe\]. Then $\beta_1=(2,d,0)$, $\beta_2=(1,d+1,0)$, $\beta_3=(d,1,1)$, $\gamma=(d^2+d-1,d,0)$, and $$B=\left(\begin{matrix} 2&1&d\cr d&d+1&1\cr 0&0&1 \end{matrix}\right).$$ By subtracting the vector $(1,1,1)$ from rows $1$ and $2$, we get $$B'=\left(\begin{matrix} 1&0&d-1\cr d-1&d&0\cr 0&0&1 \end{matrix}\right).$$ The column vectors $\beta_1'=(1,d-1,0)$, $\beta_2'=(0,d,0)$, $\beta_3'=(d-1,0,1)$, $\gamma'=(d^2-d,d-1,0)$ satisfy (a) and (b). Application to Cremona maps {#cremona} =========================== Let $R=k[x_1,\ldots,x_n]$ be a polynomial ring over a field $k$ and set ${x}^{\alpha}:= x_1^{a_1}\cdots x_{n}^{a_{n}}$ for $\alpha=(a_1, \ldots,a_n)\in {\mathbb N}^n$. In the sequel we consider a finite set of distinct monomials ${F}=\{{x}^{v_1},\ldots, {x}^{v_q}\}\subset R$ of the same degree $d\geq 1$ and having no non-trivial common factor. We also assume throughout that every $x_i$ divides at least one member of $F$, a harmless condition. The set $F$ defines a rational (monomial) map ${\mathbb{P}}^{n-1}\dasharrow {\mathbb{P}}^{n-1}$ which will also be denoted $F$ and written as a tuple $F=({x}^{v_1},\ldots, {x}^{v_q})$. This map is said to be a [*Cremona map*]{} (or a [*Cremona transformation*]{}) if it admits an inverse rational map with source ${\mathbb{P}}^{n-1}$. Note that a rational monomial map is defined everywhere if and only if the defining monomials are pure powers of the variables, in which case it is a Cremona map if and only if $d=1$ (the identity map). Finally, the integer $d$ is often called [*degree*]{} of $F$ (not to be confused with its degree as a map). 
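Composites of monomial maps can be tracked on log-matrices (columns = exponent vectors): substituting $G$ into $F$ multiplies the log-matrices, and identifying maps that differ by a common monomial factor amounts to subtracting each row's minimum exponent. A small sketch (the helper `compose` is ours):

```python
def compose(F, G):
    """Log-matrix (columns = exponent vectors) of the composite
    'first G, then F': multiply log-matrices, then cancel the gcd
    of the resulting monomials by subtracting each row's minimum."""
    n = len(F)
    C = [[sum(G[i][k] * F[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    for row in C:
        m = min(row)
        for j in range(n):
            row[j] -= m
    return C

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# The quadratic map (x1x2, x1x3, x2x3) is an involution:
S = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
assert compose(S, S) == I3

# The d = 2 instance of the example above: (x1^2, x1x2, x2x3) and
# (x1x2, x2^2, x1x3) compose to the identity in both orders.
A2 = [[2, 1, 0], [0, 1, 1], [0, 0, 1]]
B2 = [[1, 0, 1], [1, 2, 0], [0, 0, 1]]
assert compose(A2, B2) == I3 and compose(B2, A2) == I3
```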
\[transcription\] Let $F:{\mathbb{P}}^{n-1}\dasharrow {\mathbb{P}}^{n-1}$ stand for a rational map defined by monomials of fixed degree. If $F$ is a Cremona map then its inverse is also defined by monomials of fixed degree. Moreover, the degree as well as a set of monomials defining the inverse can be obtained explicitly in terms of the given set of monomials defining $F$. Let $F$ be a Cremona map defined by a set of monomials $f_1,\ldots, f_n$ of the same degree $d\geq 1$. By [@SiVi Theorem 1.2] the matrix of exponents of these monomials (i.e., their “log-matrix”) $A$ has determinant $\pm d$. Therefore Theorem \[ipn-ufpe\] implies the existence of an $n\times n$ matrix $B$ such that $AB=\Gamma+I$, where $\Gamma$ is a matrix with repeated column $\gamma$ throughout. Let $g_1,\ldots, g_n$ denote the monomials whose log-matrix is $B$ and call $G$ the corresponding rational monomial map. Letting $x^{\gamma}$ denote the monomial whose exponents are the coordinates of $\gamma$, the above matrix equality translates into the equality $$(f_1(g_1,\ldots, g_n),\ldots ,f_n(g_1,\ldots, g_n))= ({x}^{\gamma}\cdot x_1,\ldots, {x}^{\gamma}\cdot x_n).$$ Thus the left hand side is proportional to the vector $(x_1,\ldots,x_n)$ which means that the composite map $F\circ G$ is the identity map wherever the two are defined (see [@bir2003 proof of Proposition 2.1]). On the other hand, since $B$ is the log-matrix of $g_1,\ldots, g_n$, Theorem \[ipn-ufpe\] applied in the opposite direction says that $G$ is also a Cremona map. Therefore $G$ has to be the inverse of $F$, as required. Finally, notice that the proof of Theorem \[ipn-ufpe\] provides an algorithm to compute $B$ and $\gamma$. The input for this algorithm is the log-matrix $A$ of $f_1,\ldots, f_n$. We will call a Cremona map as above a [*Cremona monomial map*]{}. The theorem allows us to introduce the following group. 
\[monomialgroup\] The [*Cremona monomial group*]{} of order $n-1$ is the subgroup of the Cremona group of ${\mathbb{P}}^{n-1}$ whose elements are Cremona monomial maps. Here we will not distinguish between rational maps $F,G:{\mathbb{P}}^{n-1}\dasharrow {\mathbb{P}}^{m-1}$ defined, respectively, by forms $f_1,\ldots, f_m$ of the same degree and their multiples $g_1=gf_1,\ldots, g_m=gf_m$ by a fixed form $g$ of arbitrary degree. There is a potential confusion in this terminology. For instance, in [@Kor] one allows for more general maps by considering the free product with certain Klein groups. Since our goal is combinatorial, we will stick to the above definition of the Cremona monomial group, but will allow for source and target permutation of variables. As a matter of notation, the composite of two Cremona maps $F,G$ will be indicated by $FG$ (first $G$, then $F$). Likewise, the power composite $FF\cdots F$ of $m$ factors will be denoted $F^m$. We shift our attention to [*plane*]{} Cremona monomial maps, i.e., we will study the structure of the Cremona monomial group for $n=3$. Consider the maps $H=(x_1^2,x_1x_2,x_2x_3)$ and $S=(x_1x_2,x_1x_3,x_2x_3)$. $\bullet$ For any $d\geq 1$, one has $H^{d-1}=(x_1^d, x_1^{d-1}x_2, x_2^{d-1}x_3)$. This is a straightforward composite calculation: by induction, we assume that $H^{d-2}$ is defined by $x_1^{d-1}, x_1^{d-2}x_2, x_2^{d-2}x_3$, hence $H^{d-1}=H^{d-2}H$ is defined by $x_1^{2(d-1)}, x_1^{2d-3}x_2, x_1^{d-2}x_2^{d-1}x_3$, which is the same rational map as the one defined by $x_1^d, x_1^{d-1}x_2, x_2^{d-1}x_3$ (by canceling the $\gcd$ $x_1^{d-2}$). $\bullet$ For any $d\geq 1$, up to permutation of the variables both in the source and the target, $H^{d-1}$ is involutive, i.e., coincides with its own inverse – in [@birational-linear] this was called “$p$-involutive”. Namely, consider the Cremona map $G=(x_1x_2^{d-1},x_2^d,x_1^{d-1}x_3)$. 
Again, one finds straightforwardly that $H^{d-1}G$ is defined by $x_1^dx_2^{d(d-1)}, x_1^{d-1}x_2^{(d-1)^2+d}, x_1^{d-1}x_2^{d(d-1)}x_3$; canceling the $\gcd$ $x_1^{d-1}x_2^{d(d-1)}$ yields the identity map $(x_1,x_2,x_3)$. $\bullet$ The monomial maps $S$ and $H$ “nearly” commute; actually they are conjugate by a transposition. This is once more straightforward: the general composites $SH^{d-1}$ and $H^{d-1}S$ are defined, respectively, by $x_1^d, x_1x_2^{d-2}x_3, x_2^{d-1}x_3$ and $x_1x_2^{d-1}, x_1x_2^{d-2}x_3, x_3^d$ and these are the same up to transposing $x_1$ and $x_3$ and the extreme terms. This means that they are conjugate by a transposition. As a consequence, the subgroup of the Cremona group of ${\mathbb{P}}^2$ generated by $S$ and $H$ is Abelian up to free product with suitable permutation subgroups. \[pillo-boda-28march\] Let $F$ be a plane Cremona map of degree $d$ of the form $F=(x_1^{a_1}x_2^{a_2},x_2^{b_2}x_3^{b_3},x_1^{c_1}x_3^{c_3})$. Then, up to permutation of the source $($variables$)$ and the target $($monomials$)$, $F$ is one of the following two kinds: $$F=(x_1^d,x_2x_3^{d-1},x_1^{d-1}x_3)\ \mbox{ or }\ F=(x_1x_2,x_2x_3,x_1x_3).$$ By hypothesis $d=a_1+a_2=b_2+b_3=c_1+c_3=a_1b_2c_3+c_1a_2b_3$. Then one has $$\label{march23-09} a_1(b_2c_3-1)=a_2(1-c_1b_3).$$ The cases below can be readily verified using these equations. Case (I): $a_1\geq 1$, $a_2=0$. Then $a_1=d$, $b_2=c_3=1$, and $F=(x_1^d,x_2x_3^{d-1},x_1^{d-1}x_3)$. Case (II): $a_1=0$, $a_2\geq 1$. Then $a_2=d$, $b_3=c_1=1$, and $F=(x_2^d,x_2^{d-1}x_3,x_1x_3^{d-1})$. Case (III)(a): $a_1\geq 1$, $a_2\geq 1$, $b_2c_3=0$, $b_2=0$. Then $F=(x_1^{d-1}x_2,x_3^d,x_1x_3^{d-1})$. Case (III)(b): $a_1\geq 1$, $a_2\geq 1$, $b_2c_3=0$, $c_3=0$. Then $F=(x_1^{d-1}x_2,x_2^{d-1}x_3,x_1^d)$. Case (III)(c): $a_1\geq 1$, $a_2\geq 1$, $c_1b_3=0$, $c_1=0$. Then $F=(x_1x_2^{d-1},x_2x_3^{d-1},x_3^d)$. Case (III)(d): $a_1\geq 1$, $a_2\geq 1$, $c_1b_3=0$, $b_3=0$. 
Then $F=(x_1x_2^{d-1},x_2^{d},x_1^{d-1}x_3)$. Case (III)(e): $a_1\geq 1$, $a_2\geq 1$, $b_2c_3-1\geq 0$, $1-c_1b_3\leq 0$. From Eq. (\[march23-09\]) we get $b_2c_3=1$ and $c_1b_3=1$. Then $F=(x_1x_2,x_2x_3,x_1x_3)$. We next give a purely integer matrix theoretic proof of the following result (see [@Kor] for similar dealings). \[group\_structure\]If $n=3$, then up to permutation of the variables [(]{}both in the source and the target[)]{}, the plane Cremona monomial group is generated by the maps $S, H$, where $S$ is the quadratic map of first kind defined by $x_1x_2, x_1x_3, x_2x_3$ [(]{}Steiner involution[)]{} and $H$ is the quadratic map of second kind defined by $x_1^2, x_1x_2, x_2x_3$ [(]{}“hyperbolism”[)]{}. Let $F=(x_1^{a_1}x_2^{a_2}x_3^{a_3},x_1^{b_1}x_2^{b_2}x_3^{b_3},x_1^{c_1}x_2^{c_2}x_3^{c_3})$ be a plane Cremona map of degree $d$. We will prove that, up to a permutation of the variables and the monomials, every plane Cremona monomial map is of the form $F\cdots SH^{d_{i_k}}SH^{d_{i_{k+1}}}\cdots G$, where $F,G$ are in $\{S, H^{d}\,|\, d\geq 1\}$. The proof is by induction on the degree. Consider the corresponding log-matrix $$A=\left(\begin{matrix} a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3 \end{matrix} \right).$$ Up to permutation of variables and monomials, there are essentially two cases to consider: $$\begin{array}{ccc} A=\left(\begin{matrix} a_1&b_1&0\\ a_2&b_2&0\\ a_3&0&d \end{matrix} \right) & \ \ \ \mbox{ or }\ \ \ & A=\left(\begin{matrix} a_1&0&c_1\\ a_2&b_2&0\\ 0&b_3&c_3 \end{matrix} \right). \end{array}$$ By Lemma \[pillo-boda-28march\] we may assume that $A$ is the matrix on the left, i.e., $F=(x_1^{a_1}x_2^{a_2}x_3^{a_3},x_1^{b_1}x_2^{b_2},x_3^d)$. Case (I): $a_1\geq b_1$. From $a_1+a_2+a_3=b_1+b_2=d$, we get that $b_2\geq a_2$. Consider the Steiner involution given by $S=(x_1x_2,x_2x_3,x_1x_3)$. 
Then $$SF=(x_1^{a_1+b_1}x_2^{a_2+b_2}x_3^{a_3}, x_1^{b_1}x_2^{b_2}x_3^{d}, x_1^{a_1}x_2^{a_2}x_3^{d+a_3})= x_1^{b_1}x_2^{a_2}x_3^{a_3}(x_1^{a_1}x_2^{b_2},x_2^{b_2-a_2}x_3^{d-a_3}, x_1^{a_1-b_1}x_3^{d}).$$ Thus by Lemma \[pillo-boda-28march\] we get that $SF=H^{a_1+b_2-1}$ for some hyperbolism $H$ or $SF$ is a Steiner involution. Thus multiplying by the inverse of $S$, we obtain that $F$ has the required form. Case (II)(a): $b_1>a_1$ and $\det(A)=d$. By Lemma \[pillo-boda-28march\] we may assume $a_1\geq 1$. From the equality $d=\det(A)=d(a_1b_2-a_2b_1)$, we get that $b_2\geq a_2$. Consider the hyperbolism $H=(x_1^2,x_1x_2,x_2x_3)$. In this case we have $$\begin{aligned} HF&=&(x_1^{2a_1}x_2^{2a_2}x_3^{2a_3}, x_1^{a_1+b_1}x_2^{a_2+b_2}x_3^{a_3}, x_1^{b_1}x_2^{b_2}x_3^{d})\\ &=& x_1^{a_1+1}x_2^{a_2}x_3^{a_3} (x_1^{a_1-1}x_2^{a_2}x_3^{a_3},x_1^{b_1-1}x_2^{b_2}, x_1^{b_1-a_1-1}x_2^{b_2-a_2}x_3^{d-a_3})\\ &=&x_1^{a_1+1}x_2^{a_2}x_3^{a_3}F_1.\end{aligned}$$ Since $F_1$ has degree at most $d-1$, we have lowered the degree of $F$. Thus by induction $F$ has the required form. Case (II)(b): $b_1>a_1$ and $\det(A)=-d$. We may assume that $a_2\geq b_2$, otherwise we may proceed as in Case (II)(a). By Lemma \[pillo-boda-28march\] we may also assume that $a_1\geq 1$. We claim that $F$ has the form $F=(x_1^{d-2}x_2x_3,x_1^{d-1}x_2,x_3^d)$. The condition on the determinant is equivalent to the equality $a_2b_1-a_1b_2=1$. Thus $a_2\geq 1$. Hence using that $b_1\geq a_1+1$ one has $$a_2b_1\geq a_2a_1+a_2\geq b_2a_1+1=a_2b_1.$$ Consequently $a_2=b_2=1$. The condition $\det(A)=-d$, becomes $a_1=b_1-1=d-2$. Therefore $F$ has the asserted form. Consider the hyperbolism $H=(x_1^2,x_1x_2,x_2x_3)$. It is easy to see that $HF$ is equal to $x^{\gamma}(x_1^{d-3}x_2x_3,x_1^{d-2}x_2,x_3^{d-1})$ for some monomial $x^{\gamma}$. Thus we have lowered the degree of $F$ and we may apply induction. The next result is classically well-known. 
One has a simple direct proof in the case of Cremona monomial maps. The proof shows moreover explicit simple formulae for the Cremona inverse in the case of $3$ variables. \[degree\_of\_inverse\] If $n=3$, then a Cremona monomial map and its inverse have the same degree. The proof is based on the method employed in the proof of Theorem \[ipn-ufpe\]. Let $A$ be the matrix defining a Cremona map of degree $d$: $$A=\left(\begin{matrix} a_1&b_1&c_1\\ a_2&b_2&c_2\\ a_3&b_3&c_3 \end{matrix} \right)$$ For simplicity we assume that $\det(A)=d$, the case $\det(A)=-d$ can be shown similarly. Up to permutation of variables, as in the proof of Proposition \[group\_structure\], there are essentially two cases to consider: $$\begin{array}{ccc} A=\left(\begin{matrix} a_1&b_1&0\\ a_2&b_2&0\\ a_3&0&d \end{matrix} \right) & \ \ \ \mbox{ or }\ \ \ & A=\left(\begin{matrix} a_1&0&c_1\\ a_2&b_2&0\\ 0&b_3&c_3 \end{matrix} \right). \end{array}$$ It is readily seen that the inverse of $A$ is given by $$\begin{array}{ccc} A^{-1}=\left(\begin{matrix} b_2&-b_1&0\\ -a_2&a_1&0\\ -a_3b_2/d&b_1a_3/d&1/d \end{matrix} \right) & \ \ \ \mbox{ or }\ \ \ & A^{-1}=\displaystyle\frac{1}{d}\left(\begin{matrix} b_2 c_3& b_3 c_1& -b_2 c_1\\ -a_2 c_3& a_1 c_3& a_2 c_1\\ a_2 b_3& -a_1 b_3& a_1 b_2 \end{matrix} \right). \end{array}$$ respectively. By the argument given in the proof of Theorem \[ipn-ufpe\] we get that $$\beta_1=(d,0,0),\ \gamma=(da_1-1,da_2,da_3)\ \ \mbox{ or }\ \ \beta_1=(b_2,0,b_3),\ \gamma=(a_1 b_2 + b_3 c_1-1, a_2 b_2, b_3 c_3)$$ respectively. Therefore $$\begin{array}{ccc} B=\left(\begin{matrix} d&0&b_1\\ 0&a_1+a_2&a_2\\ 0&a_3&(a_3b_2+1)/d \end{matrix} \right) & \ \ \ \mbox{ or }\ \ \ & B=\left(\begin{matrix} b_2&c_1&0\\ 0&c_3&a_2\\ b_3&0&a_1 \end{matrix} \right). \end{array}$$ respectively. Using that the sum of the entries in every column of $A$ is equal to $d$ and $\det(A)=d$, it is seen that the sum of the entries in each column of $B$ is equal to $d$ and $\det(B)=d$. 
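The first-case formulas in the proof can be verified numerically; the sample values below ($d=3$) are our own choice, subject to the constraints $a_1+a_2+a_3=b_1+b_2=d$ and $a_1b_2-a_2b_1=1$ used in the proof:

```python
a1, a2, a3 = 1, 1, 1
b1, b2 = 1, 2
d = 3
assert a1 + a2 + a3 == d and b1 + b2 == d and a1 * b2 - a2 * b1 == 1

A = [[a1, b1, 0], [a2, b2, 0], [a3, 0, d]]
# B and gamma as displayed in the proof; (a3*b2 + 1) is divisible by d
B = [[d, 0, b1], [0, a1 + a2, a2], [0, a3, (a3 * b2 + 1) // d]]
gamma = [d * a1 - 1, d * a2, d * a3]

AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert AB == [[gamma[i] + (i == j) for j in range(3)] for i in range(3)]
assert all(sum(B[i][j] for i in range(3)) == d for j in range(3))

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

assert det3(A) == d and det3(B) == d
```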
Further combinatorial and computational aspects {#computing} =============================================== Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then, by Theorem \[ipn-ufpe\], there are unique vectors $\beta_1,\ldots,\beta_n,\gamma \in \mathbb{N}^n$ such that the following two conditions hold: (a) $A\beta_i=\gamma+e_i$ for all $i$, and (b) the matrix $B$ whose columns are $\beta_1,\ldots,\beta_n$ has at least one zero entry in every row. The proof of Theorem \[ipn-ufpe\] provides an algorithm to compute $\beta_1,\ldots,\beta_n,\gamma$. Next we discuss two other methods, one of them based on integer programming techniques and the notion of a Hilbert basis. To compute the sequence $\beta_1,\ldots,\beta_n,\gamma$ using linear programming we regard the $\beta_i$’s and $\gamma$ as vectors of indeterminates and introduce a new variable $\tau$. Consider the homogeneous system of linear inequalities $$\begin{aligned} \label{jan5-09} A\beta_i&=&\gamma+\tau e_i,\ \ i=1,\ldots,n\\ \beta_i&\geq& 0,\ \ \ i=1,\ldots,n\nonumber\\ \gamma&\geq &0,\ \ \tau\geq 0.\nonumber\end{aligned}$$ This linear system has $n^2$ equality constraints and $\ell=n^2+n+1$ indeterminates. The set $C$ of solutions forms a rational pointed polyhedral cone. By [@Schr Theorem 16.4], there is a unique minimal integral Hilbert basis $$\mathcal{H}=\{h_1,\ldots,h_r\}$$ of $C$ such that $\mathbb{Z}^\ell\cap \mathbb{R}_+\mathcal{H}=\mathbb{N}\mathcal{H}$ and $C=\mathbb{R}_+\mathcal{H}$ (minimal relative to taking subsets), where $\mathbb{R}_+\mathcal{H}$ denotes the cone generated by $\mathcal{H}$ consisting of all linear combinations of $\mathcal{H}$ with non-negative real coefficients and $\mathbb{N}\mathcal{H}$ denotes the semigroup generated by $\mathcal{H}$ consisting of all linear combinations of $\mathcal{H}$ with coefficients in $\mathbb{N}$. 
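Assembling the equality constraints of the system (\[jan5-09\]) is mechanical. A sketch (function name ours) building the $n^2\times(n^2+n+1)$ coefficient matrix, with the unknowns ordered $(\beta_1,\ldots,\beta_n,\gamma,\tau)$:

```python
def constraint_matrix(A):
    """Rows encode A*beta_i - gamma - tau*e_i = 0 for i = 1..n."""
    n = len(A)
    rows = []
    for i in range(n):
        for r in range(n):
            row = [0] * (n * n + n + 1)
            for c in range(n):
                row[i * n + c] = A[r][c]   # A acting on beta_i
            row[n * n + r] = -1            # -gamma_r
            if r == i:
                row[-1] = -1               # -tau on the e_i component
            rows.append(row)
    return rows

# For the 3x3 example treated below, this reproduces a 9x13 system.
M = constraint_matrix([[2, 1, 0], [0, 1, 1], [0, 0, 1]])
assert len(M) == 9 and len(M[0]) == 13
assert M[0] == [2, 1, 0, 0, 0, 0, 0, 0, 0, -1, 0, 0, -1]
assert M[4] == [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, -1, 0, -1]
```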
The Hilbert basis of $C$ has the following useful description. [[@Schr p. 233]]{}\[hb-description\] $\mathcal{H}$ is the set of all integral vectors $0\neq h\in C$ such that $h$ is not the sum of two other non-zero integral vectors in $C$. There is a unique element $h$ of $\mathcal{H}$ with $\tau=1$, and this element gives the unique sequence $\beta_1,\ldots,\beta_n,\gamma$ that satisfies $(a)$ and $(b)$. We set $x_0=(\beta_1,\ldots,\beta_n,\gamma,1)\in\mathbb{N}^{n^2+n+1}$, where $\beta_1,\ldots,\beta_n,\gamma$ is the unique sequence that satisfies $(a)$ and $(b)$. First we show that $x_0$ is in $\mathcal{H}$. Clearly $x_0$ is in $C$; since $\mathbb{Z}^\ell\cap C=\mathbb{N}\mathcal{H}$, we may write $x_0$ as $$x_0=\eta_1h_1+\cdots+\eta_kh_k,\ \ \ \ \ 0\neq \eta_i\in\mathbb{N}\, \mbox{ for }i=1,\ldots,k,$$ where $h_k$ has its last entry equal to $1$ and the last entry of $h_i$ is equal to $0$ for $i<k$. The vector $h_k$ has the form $$h_k=(\beta_{1}^{(k)},\ldots,\beta_n^{(k)},\gamma^{(k)},1),$$ where the $\beta_{i}^{(k)}$’s and $\gamma^{(k)}$ are in $\mathbb{N}^n$ and satisfy Eq. (\[jan5-09\]), i.e., they satisfy (a). Notice that from the first equality one has $x_0\geq h_k$. Then $\beta_i\geq \beta_i^{(k)}$ for all $i$ and $\gamma\geq \gamma^{(k)}$. Therefore the $\beta_{i}^{(k)}$’s and $\gamma^{(k)}$ also satisfy (b). Consequently, by the uniqueness part of Theorem \[ipn-ufpe\], $x_0=h_k$ and $x_0\in\mathcal{H}$, as claimed. Let $h_i$ be any other element in $\mathcal{H}$ whose last entry is equal to $1$. Next we show that $h_i$ must be equal to $x_0$. The vector $h_i$ has the form $$h_i=(\beta_{1}^{(i)},\ldots,\beta_n^{(i)},\gamma^{(i)},1),$$ where the $\beta_{j}^{(i)}$’s and $\gamma^{(i)}$ are in $\mathbb{N}^n$ and satisfy Eq. (\[jan5-09\]). Since the $\beta_{j}^{(i)}$’s and $\gamma^{(i)}$ satisfy (a), it suffices to show that they also satisfy (b). We proceed by contradiction. Assume that the $m$th entry of $\beta_{j}^{(i)}$ is nonzero for $j=1,\ldots,n$. For simplicity of notation assume that $m=1$. 
Then the vectors $$h'=(e_1,\ldots,e_1,Ae_1,0)\ \mbox{and }\ h''=(\beta_{1}^{(i)}-e_1,\ldots,\beta_n^{(i)}-e_1,\gamma^{(i)}-Ae_1,1)$$ are integral vectors that satisfy Eq. (\[jan5-09\]), i.e., $h'$ and $h''$ are integral vectors in $C$ and $h_i=h'+h''$, a contradiction to Theorem \[hb-description\]. There are computer programs that can be used to find integral Hilbert bases of polyhedral cones defined by linear systems of the form $x\geq 0,\ A'x=0$, where $A'$ is an integral matrix. We have used [@normaliz2] to compute some specific examples with this procedure: \[propedeutico-d=2\] Consider the following matrix $$A=\left(\begin{matrix} 2&1&0\cr 0&1&1\cr 0&0&1 \end{matrix}\right).$$ To compute $\beta_1,\beta_2,\beta_3,\gamma$ we use the following input file for [*Normaliz*]{} [@normaliz2]:

    9 13
    2 1 0 0 0 0 0 0 0 -1 0 0 -1
    0 1 1 0 0 0 0 0 0 0 -1 0 0
    0 0 1 0 0 0 0 0 0 0 0 -1 0
    0 0 0 2 1 0 0 0 0 -1 0 0 0
    0 0 0 0 1 1 0 0 0 0 -1 0 -1
    0 0 0 0 0 1 0 0 0 0 0 -1 0
    0 0 0 0 0 0 2 1 0 -1 0 0 0
    0 0 0 0 0 0 0 1 1 0 -1 0 0
    0 0 0 0 0 0 0 0 1 0 0 -1 -1
    5

Part of the output file produced by [*Normaliz*]{} is:

    4 generators of integral closure:
    0 1 0 0 1 0 0 1 0 1 1 0 0
    0 0 1 0 0 1 0 0 1 0 1 1 0
    1 0 0 1 0 0 1 0 0 2 0 0 0
    1 1 0 0 2 0 1 0 1 2 1 0 1

From the last row we get: $\beta_1=(1,1,0),\beta_2=(0,2,0),\beta_3=(1,0,1),\gamma=(2,1,0)$. Another method to compute $\beta_1,\ldots,\beta_n$ is based on the following result, whose proof we omit. Of course, to use this result effectively one must seek methods to compute $\delta$. Let $v_1,\ldots,v_n$ be a set of vectors in $\mathbb{N}^n$ such that $|v_i|=d\geq 1$ for all $i$ and $\det(A)=\pm d$, where $A$ is the $n\times n$ matrix with column vectors $v_1,\ldots,v_n$. Then there is $0\neq\delta=(\delta_i)\in\mathbb{Q}_+^n$ such that $\delta+A^{-1}e_i\in\mathbb{N}^n$ for all $i$. Set $$\beta_i=A^{-1}(\delta_1v_1+\cdots+\delta_nv_n+e_i),\ \ i=1,\ldots,n$$ and $\gamma=\delta_1v_1+\cdots+\delta_nv_n$. 
Then $\beta_1,\ldots,\beta_n,\gamma$ are vectors in $\mathbb{N}^n$ that satisfy condition (a). [99]{} W. G. Bridges and H. J. Ryser, Combinatorial designs and related systems, J. Algebra [**13**]{} (1969), 432–446. W. Bruns and B. Ichim, Normaliz 2.0, Computing normalizations of affine semigroups, 2008. Available from . G. Gonzalez-Sprinberg and I. Pan, On the monomial birational maps of the projective space, An. Acad. Brasil. Ciênc. [**75**]{} (2003), no. 2, 129–134. A. Simis and R. H. Villarreal, Linear syzygies and birational combinatorics, Results Math. [**48**]{} (2005), no. 3-4, 326–343. [^1]: The first author was partially supported by a grant of CNPq. He warmly thanks CINVESTAV for support during a visit. The second author was partially supported by CONACyT grant 49251-F and SNI.
--- abstract: 'Solid-state qubits hold the promise to achieve an unmatched combination of sensitivity and spatial resolution. To achieve their potential, the qubits need however to be shielded from the deleterious effects of the environment. While dynamical decoupling techniques can improve the coherence time, they impose a compromise between sensitivity and bandwidth, since to higher decoupling power correspond higher frequencies of the field to be measured. Moreover, the performance of pulse sequences is ultimately limited by control bounds and errors. Here we analyze a versatile alternative based on continuous driving. We find that continuous dynamical decoupling schemes can be used for AC magnetometry, providing similar frequency constraints on the AC field and improved sensitivity for some noise regimes. In addition, the flexibility of phase and amplitude modulation could yield superior robustness to driving errors and a better adaptability to external experimental scenarios.' author: - Masashi Hirose - 'Clarice D. Aiello' - Paola Cappellaro bibliography: - '../../Biblio.bib' title: Continuous dynamical decoupling magnetometry --- Solid-state qubits have emerged as promising quantum sensors, as they can be fabricated in small volumes and brought close to the field to be detected. Notably, Nitrogen-Vacancy (NV) centers in nano-crystals of diamond [@Jelezko02] have been applied for high sensitivity detection of magnetic [@Taylor08; @Maze08; @Balasubramanian08] and electric fields [@Dolde11] and could be used either as nano-scale scanning tips [@Maletinsky12] or even in-vivo due to their small dimensions and low cytotoxicity [@McGuinness11]. Unfortunately, solid-state qubits are also sensitive probes of their environment [@Bar-Gill12; @Bylander11] and this leads to rapid signal decay, which limits the sensor interrogation time and thus its sensitivity. 
Dynamical decoupling (DD) methods [@Carr54; @Viola99b; @Uhrig07; @Khodjasteh07; @Biercuk11] have been adopted to prolong the coherence time of the sensor qubits [@Taylor08; @deLange11; @Bar-Gill12; @Pham12]. Although DD techniques prevent measuring constant (DC) fields, they provide superior sensitivity to oscillating AC fields, as they can increase the sensor coherence time by orders of magnitude. The sensitivity is maximized by carefully matching the decoupling period to the AC field; conversely, one can study the response of a decoupling scheme to fields of various frequencies, thus mapping out its bandwidth. Still, the refocusing power of pulsed DD techniques is ultimately limited by pulse errors and bounds in the driving power. Here we investigate an alternative strategy, based on continuous dynamical decoupling (CoDD), that has the potential to overcome these limitations. We consider the problem of measuring a small external field, coupled to the sensor by a Hamiltonian: ${{\mathcal{H}}}_{b}=\gamma b(t) S_{z}$, where $S_{z}$ is the spin operator of the quantum sensor. For example, $b(t)$ can be an external magnetic field and $\gamma$ the spin’s gyromagnetic ratio. The figure of merit for a quantum sensor is the smallest field $\delta b_{min}$ that can be read out during a total time $\mathbf{t}$, that is, the sensitivity $\eta=\delta b_{min}\sqrt{\mathbf{t}}$. We use this metric to compare pulsed and continuous DD schemes and show how CoDD can offer an advantage for some noise regimes. The principle of DD schemes rests on the spin echo sequence, which refocuses unwanted phase accumulation due to a slow bath by reversing the system evolution with control pulses. More complex DD sequences can in principle extend the coherence time indefinitely, by increasing the number of pulses. In practice, however, a large number of imperfect, finite-width pulses provokes the accumulation of errors and degrades DD performance [@Khodjasteh07; @Khodjasteh05; @Wang12]. 
CoDD was first introduced in the context of NMR to mitigate pulse errors [@Burum81; @Boutis03] and has since led to many schemes, such as composite pulses [@Shaka83b; @Levitt86], dynamically corrected gates [@Khodjasteh09] and optimized modulations [@Jones12]. In general, phase and amplitude modulation of the continuous driving allows great flexibility and CoDD can achieve high decoupling power. Here we consider only two schemes, constant continuous driving (C) and Rotary Echo (RE) [@Solomon57; @Aiello12; @Laraoui11], as their periodicity allows an easier use for AC magnetometry (see Fig. \[fig:Sequence\]); we will compare these schemes to the simplest pulsed DD scheme, periodic dynamical decoupling (PDD). ![Pulse sequences for four AC magnetometry schemes: PDD (P), constant driving (C), RE with optimal frequency (R$_k^{\text{opt}}$) and spin-locking (S). Blue boxes represent microwave driving, with phase (x and y) as indicated.[]{data-label="fig:Sequence"}](Sequence2){width="45.00000%"} As an example, we compute the signal and sensitivity of AC magnetometry under RE, but similar derivations apply for the other schemes. The RE sequence consists of a continuous on-resonance driving field of constant amplitude $\Omega$ and phase inverted at periodic intervals (see Fig. \[fig:Sequence\]). RE is parametrized by the angle $\theta=\Omega T/2$, where $T$ is the sequence period. While RE is usually employed to refocus errors in the driving field, for $\theta=2\pi k$ the sequence also refocuses dephasing noise, with performance depending on both $k$ and the Rabi frequency. We consider the evolution of a sensor qubit under a sequence of $2\pi k$-RE and in the presence of an external AC magnetic field of frequency $\omega$ whose magnitude $b$ is to be sensed: $$\mathcal{H}(t) = \Omega \mathbb{SW}(t)S_x + \gamma b\cos(\omega t + \phi)S_z,$$ where $\mathbb{SW}(t)$ is the square wave of period $T = {4\pi k}/{\Omega}$. 
In the toggling frame of the driving field, the Hamiltonian becomes $$\widetilde{\mathcal{H}}(t)\!=\!\frac{\gamma b\cos(\omega t+\phi)}{2}[ \cos(\Omega t)S_z-\mathbb{SW}(t)\sin(\Omega t)S_y ].$$ We consider only the cases where $\phi=0$ and $\omega T=2m\pi$, with $m$ an odd integer, since as we show below this yields good sensitivities. Under this assumption $\tilde{\mathcal{H}}(t)$ is periodic and for small fields $b$ the evolution operator can be well approximated from a first order average Hamiltonian over the period $T$, $\overline{\mathcal{H}} \approx \frac{1}{T}\int_{0}^{T}\tilde{\mathcal{H}}(t)dt=\gamma\overline b\,S_y$. If $m = 1$, we define $\omega_{low} = \frac{\Omega}{2 k}$, which, for a fixed $\Omega$, is easily adjustable by changing the echo angle $2\pi k$. Setting instead $m = (2k-1)$, we define $\omega_{opt} = \frac{\Omega (2k-1)}{2k}$, which yields $\overline b=4bk/[\pi(4k-1)]$ and attains the best sensitivity of the method. The sensitivity, obtained as $\eta(t) = \displaystyle\lim_{b \rightarrow 0}\textstyle\frac{\Delta\mathcal{S}}{|\frac{\partial \mathcal{S}}{\partial b}|}\sqrt{t}$, where $\mathcal{S}$ is the signal and $\Delta\mathcal{S}$ its shot-noise limited uncertainty, depends on $\overline b$, that is, on the averaging of the AC field over the sequence period due to the DD modulation. We compare the performance of both $2\pi k$-RE schemes to PDD (optimum $\omega = {2\pi}/{t}$, $\phi = {\pi}/{2}$) and a constant modulation with $\omega=\Omega$ (see Fig. \[fig:Sequence\]). 
We obtain for the schemes considered: $$\renewcommand{\arraystretch}{2} \begin{array}{lclc} \eta^{opt}_{R_k}=\eta\frac{4k-1}{2k} &\quad (\theequation.a) &\qquad \eta_P=\eta&\quad (\theequation.b)\\ \eta^{low}_{R_k}=\eta\frac{4k^2-1}{2k} &\quad (\theequation.c) &\qquad \eta_C=\frac4\pi\eta&\quad (\theequation.d), \end{array}\nonumber \label{eq:sensitivity} \addtocounter{equation}{1}$$ where $\eta=\frac{\pi }{2\gamma C \sqrt{t}}$, with $C$ a parameter capturing inefficiencies in the sensor readout [@Taylor08]. Here $R_k$ labels a $2k\pi$-RE scheme, $P$ the PDD scheme and $C$ the constant modulation (see Figure \[fig:Sequence\]). A fourth operating scheme can be obtained by a “spin-locking” sequence [@Redfield55], where the spin is first rotated to the transverse plane before applying a driving field in the same direction; choosing $\phi=0$ and $\omega=\Omega$ yields the same sensitivity as for the constant modulation, $\eta_S\!=\!\eta_C$, even when the driving phase is inverted periodically. We note that if the phase $\phi$ of the AC field is not optimized, the sensitivities are reduced by a factor $\Phi(\phi)$, with $\Phi_P=\Phi_C=\csc(\phi)$ and $\Phi_{R_k}=\sec\phi$. If in addition the phase of the AC field cannot be fixed, $\Phi(\phi)=\sqrt{2}$ when considering the average signal over many realizations. These ideal sensitivities are degraded in the presence of noise and whenever the frequency of the AC field is not matched to the DD period. In the following we analyze these two contributions, showing that they lead to a sensitivity $\eta\to\eta\mathcal D(t)/W(\omega)$, where $\mathcal D(t)$ describes the decay under DD sequences and $W(\omega)$ is the reduction in the accumulated phase when the field frequency $\omega$ is suboptimal. ![Bandwidth for AC magnetometry. We plot the weight functions $W(\omega)$ that scale the phase acquired during DD magnetometry for AC fields of frequency $\omega$. 
Left: we plot $W(\omega)$ and the envelope of its passband decay for PDD (blue dotted), RE ($k=1$, red, thick) and constant driving (green, thin line) for $n=2$ cycles, expressing the frequency in terms of the sequence period. In the inset: we compare the main peak for $n=1$ (red, thick) and $n=10$ (gray) for RE ($k=1$) showing the reduction in bandwidth. Right: we compare $W(\omega)$ for continuous driving (green) and for RE with $k=1$ (red, thick) and $k=4$ (gray, dashed), plotting as a function of $\omega$ in units of the Rabi frequency $\Omega$.](WeightT "fig:"){width="22.00000%"}![](WeightRabi "fig:"){width="22.00000%"} \[fig:bandwidth\] Optimal sensitivities are obtained by carefully matching the period of the DD schemes to the oscillating field. In practice, however, when field frequencies are either unknown or known only to finite precision, it is important to determine the bandwidth of the scheme and the deviation from the optimum sensitivities. We estimate the bandwidth by calculating the phase accumulated by the sensor over the total interrogation time $t = nT$, $\overline B t= \int_{0}^{t}b(t) f(t)\mathrm{d}t$, and examining the frequency dependence of its absolute value. 
For PDD, the filter function is $f_{P}(t) =\mathbb{SW}_{P}(t)$, the square wave with the period of the modulation. For continuous driving schemes such as RE and Rabi, $f(t)$ is the strength of the toggling frame Hamiltonian. In particular, $f_{R_k}(t) = \mathbb{SW}(t)\sin(\Omega t)$, yielding the weight function $W_{R_k}(\omega)=|\overline B_{R_k}(\omega)|/|\overline B_{R_k}(\omega^{opt})|$: $$W_{R_k}(\omega)\!=\!\frac{(4 k-1)/n}{\left|(4 k)^2-\left(\frac{T \omega }{\pi }\right)^2\right|}\left|\sin (n T \omega ) \tan \left(\frac{T \omega }{4}\right)\right| .$$ $W_{R_k}$ has peaks (*pass-bands*) at $\omega = {2\pi(2(k+p)-1)}/{T}$, where $p$ is an integer satisfying $p\geq1-k$. The lowest pass-band occurs for $p = 1-k$, corresponding to $\omega_{low} = {\Omega}/{2k}$. The strongest peak is for $p=0$ at $\omega_{opt}$. Subsequent periodic peaks are attenuated away from the symmetry point $\omega = \Omega$ as $\sim \frac{\Omega^2}{|\omega^2 - \Omega^2|}$. The FWHM of the optimum peak in $W_{R_k}(\omega)$ is $\approx \frac{7.58}{2nT}$, where $7.58$ is approximately the FWHM of the squared sinc function; this narrowing with the number of cycles $n$ is common to the other DD schemes. A similar calculation for the accumulated phase during a PDD sequence indicates the existence of peaks at $\omega = {m\pi}/{T}$, with $m$ odd, whose intensity decays as $1/m$. This slower decay than for the RE pass-bands could be beneficial if the goal is to detect fields of unknown frequencies. On the other hand, AC magnetometry under continuous driving or spin locking could be used for frequency-selective detection because $W_{C}(\omega)$ has a unique peak at $\omega = {2\pi}/{T}$ with FWHM of the same order as that for RE. A comparison of the different weight functions is depicted in Fig. \[fig:bandwidth\]. 
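The location and relative height of the two lowest RE pass-bands can be checked directly from the phase integral $\overline B t=\int_0^{nT} b\cos(\omega t)f_{R_k}(t)\,dt$. The sketch below (illustrative names; $\Omega$ and $b$ set to 1) compares the numerical integral at $\omega_{opt}$ and $\omega_{low}$ with the closed-form averages quoted in the text.

```python
import numpy as np

def phase_integral(k, omega, n=3, pts_per_period=200000):
    """Phase integral  int_0^{nT} cos(omega t) * SW(t) * sin(Omega t) dt
    for a 2*pi*k rotary echo (Omega = 1, field amplitude b = 1, T = 4*pi*k)."""
    T = 4 * np.pi * k
    N = n * pts_per_period
    dt = n * T / N
    t = (np.arange(N) + 0.5) * dt               # midpoint rule
    sw = np.where((t % T) < T / 2, 1.0, -1.0)   # rotary-echo phase flips
    return np.sum(np.cos(omega * t) * sw * np.sin(t)) * dt

k, n = 2, 3
T = 4 * np.pi * k
w_opt = (2 * k - 1) / (2 * k)
w_low = 1 / (2 * k)
print(phase_integral(k, w_opt, n), n * T * 4 * k / (np.pi * (4 * k - 1)))
print(phase_integral(k, w_low, n), n * T * 4 * k / (np.pi * (4 * k**2 - 1)))
```

The two printed pairs agree, and their ratio reproduces $\eta^{low}_{R_k}/\eta^{opt}_{R_k}=(4k^2-1)/(4k-1)$ from Eq. (\[eq:sensitivity\]).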
We note that while $W(\omega)$ describes the poor performance of DD schemes at detecting AC fields with unmatched frequencies, this property could in turn be used for frequency-selective measurements and even spectroscopy, by scanning the sequence period. While constant driving provides the best selectivity (canceling out higher octaves), RE provides more flexibility by changing both the period time and the angle $2\pi k$, which allows more uniform noise cancellation. ![Sensitivity for AC magnetometry. We compare the magnetic field sensitivity of a single NV center for PDD (left) and RE ($k\!=\!1$ center; $k\!=\!4$ right). We assumed $T_2\!=\!500\mu$s under OU noise (comparable to a $^{13}$C bath), yielding a decay $\propto\!e^{-T^3/(n^2 T_2^3)}$, and a single readout with $C=0.03$. A larger number of refocusing cycles (with shorter periods) achieves better sensitivity but can only detect higher frequencies, as shown by the color of the curves (right bar, MHz).](SensitivityACall2){width="42.00000%"} \[fig:SensitivityACtime\] The refocusing power of RE can surpass that of pulsed schemes. Consider for example a noise with long correlation time $\tau_c$: in this limit, the signal decays as ${\left\langle \mathcal{S}_{R_k}(t)\right\rangle}=e^{-(\Gamma_{2R}t)^3/n^2}$, with $\Gamma^3_{2R}=\frac{3\sigma^2}{8 k^2 \pi ^2 \tau_c}$. Using a similar derivation [@Kubo62; @Cappellaro06], the decay under a PDD sequence is instead ${\left\langle \mathcal{S}_P(t)\right\rangle}=e^{-(\Gamma_{2P}t)^3/n^2}$, with $\Gamma^3_{2P}=\frac{2\sigma^2}{3 \tau_c}$. The sensitivities in Eq. (\[eq:sensitivity\]) are further limited by the signal decay $\mathcal D(t)$ under the DD sequences. The achievable sensitivity is then a compromise between the refocusing power of the sequence used and the frequencies it allows detecting (Fig. \[fig:SensitivityACtime\]). While the decay for pulsed DD has been widely studied, evolution under continuous DD is more complex [@Dobrovitski09]. 
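The gain in refocusing power in this long-$\tau_c$ limit follows directly from the two decay rates just quoted; evaluating their ratio (a direct transcription of the formulas above, with $\sigma$ and $\tau_c$ canceling) quantifies how much longer the RE coherence time is:

```python
import numpy as np

def gamma_ratio(k):
    """Ratio Gamma_2P / Gamma_2R of the PDD and RE decay rates quoted in
    the text (long-correlation-time limit); sigma and tau_c drop out."""
    g_re3 = 3 / (8 * k**2 * np.pi**2)   # Gamma_2R^3 * tau_c / sigma^2
    g_pdd3 = 2 / 3                      # Gamma_2P^3 * tau_c / sigma^2
    return (g_pdd3 / g_re3) ** (1 / 3)

for k in (1, 2, 4):
    print(k, gamma_ratio(k))   # ~2.6 for k=1, growing as k^(2/3)
```

The ratio equals $(16\pi^2 k^2/9)^{1/3}$, i.e., already a $2\pi$-RE ($k=1$) extends the coherence time by roughly a factor 2.6 in this limit.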
We can estimate the RE decay to leading order using a cumulant expansion [@Kubo62; @Cappellaro06]. We assume a stochastic Hamiltonian, ${{\mathcal{H}}}(t)=\Omega {\,\mathbb{SW}}_k(t){\sigma_x}+\delta(t){\sigma_z}$, where $\delta(t)$ is an Ornstein-Uhlenbeck noise with zero mean and autocorrelation function $G(\tau)=\sigma^2 e^{-\frac{\tau}{\tau_c}}$, with $\sigma$ the dispersion and $\tau_c$ the correlation time. The signal decay can be calculated from the average of the superoperator ${\left\langle \mathbb S(t)\right\rangle}={\left\langle \mathcal T e^{-i\int_0^t\widehat Hdt'}\right\rangle}$, where we indicate by a hat the superoperators $\widehat A=A\otimes{\leavevmode\hbox{\small1\normalsize\kern-.33em1}}-{\leavevmode\hbox{\small1\normalsize\kern-.33em1}}\otimes A$ and $\mathcal T$ is the time ordering operator. In turn, this can be approximated by the cumulants, ${\left\langle \mathbb S(t)\right\rangle}\approx \exp{[-(K_1+K_2+\dots)t]}$, with the first cumulant $K_1=0$ and the second given by $$K_2=\frac1{2t}\int_0^tdt_1\,\int_0^{t_1}dt_2\,\langle{\widehat H(t_1)\widehat H(t_2)}\rangle_c,$$ where the cumulant average is $$\langle{\widehat H(t_1)\widehat H(t_2)}\rangle_c=\mathcal T\langle\widehat H(t_1)\widehat H(t_2)\rangle-\langle{\widehat H(t_1)}\rangle\langle{\widehat H(t_2)}\rangle.$$ In the toggling frame of the driving field, the stochastic Hamiltonian is $\tilde {{\mathcal{H}}}(t)=\delta(t)N(t)\equiv\delta(t)\left[\cos(\Omega t) {\sigma_z}+ {\,\mathbb{SW}}\sin(\Omega t){\sigma_y}\right]$. 
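The OU noise model above can be generated exactly in discrete time from its transition density; this is a minimal sketch (function name, parameters and seed are ours, not from the paper) verifying the assumed autocorrelation $G(\tau)=\sigma^2 e^{-\tau/\tau_c}$.

```python
import numpy as np

def ou_trajectories(sigma, tau_c, dt, nsteps, ntraj, seed=0):
    """Exact discrete-time update of a stationary Ornstein-Uhlenbeck
    process delta(t) with zero mean, variance sigma^2 and correlation
    time tau_c:  x' = a*x + sigma*sqrt(1 - a^2)*N(0,1),  a = exp(-dt/tau_c)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_c)
    x = sigma * rng.standard_normal(ntraj)       # start in equilibrium
    out = np.empty((nsteps, ntraj))
    for i in range(nsteps):
        out[i] = x
        x = a * x + sigma * np.sqrt(1 - a * a) * rng.standard_normal(ntraj)
    return out

sigma, tau_c, dt = 1.0, 5.0, 0.1
d = ou_trajectories(sigma, tau_c, dt, nsteps=200, ntraj=20000)
lag = 10                                         # tau = lag*dt = 1.0
G = np.mean(d[:-lag] * d[lag:])                  # ensemble/time average
print(G, sigma**2 * np.exp(-lag * dt / tau_c))   # both close to e^{-0.2}
```

Trajectories generated this way can be fed into a numerical integration of $\mathcal{H}(t)$ to cross-check the cumulant results below.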
Then the second cumulant for $n$ cycles is $K_2\!=\!n\triangle+\square\sum_{j=1}^n(n-j) G_j$ [@Cappellaro06], with $G_j=e^{-\frac{4k\pi j}{\Omega\tau_c}}$ and $$\triangle=\int_0^{4k\pi/\Omega}dt_1\,\int_0^{t_1}dt_2 {\widehat N}(t_1){\widehat N}(t_2) G(t_1-t_2),$$ $$\square=\int_0^{4k\pi/\Omega}dt_1\,\int_0^{2k\pi/\Omega}dt_2 {\widehat N}(t_1){\widehat N}(t_2) G(t_1-t_2).$$ The cumulant can be written as $$K_2=\frac{\alpha+\beta}2\hat S_z^2+\frac{\alpha-\beta}2\hat S_y^2+\frac{\sqrt{\gamma^2-\beta^2}}2(\hat S_y\hat S_z+\hat S_z\hat S_y) \label{eq:K2}$$ (see appendix for explicit expressions), yielding the signal $${\left\langle \mathcal{S}_{R_k}\right\rangle} ={\frac{1}{2}}[1+\mathcal D_R]={\frac{1}{2}}\left[1+e^{-\alpha} ( \cosh(\gamma)+\frac{\beta}{\gamma} \sinh(\gamma))\right].$$ Numerical simulations match these approximate analytical results well. The longer coherence time under the RE sequence can be exploited either to reach a better sensitivity at a given frequency or to measure lower-frequency fields at a given sensitivity, as shown in Fig. \[fig:SensitivityACFreq\]. The achievable improvement depends on the effective coherence time ratio, $\tau=T_{2R}/T_{2E}$, obtained from the two schemes. Because of the improved refocusing of RE with respect to PDD, the sensitivity can be improved in some noise regimes. In addition, RE-AC magnetometry provides the flexibility of using larger angles (larger $k$) to allow for longer interrogation times (Fig. \[fig:SensitivityACtime\]) at lower frequencies, which could be beneficial in practical cases in combination with repeated readout schemes [@Neumann10b; @Aiello12]. We remark that besides the decay functions obtained above in the presence of dephasing noise, other sources of decay can arise from imperfect pulses or fluctuations in the driving power. 
In this respect, RE provides good protection against slow fluctuations in the driving power [@Solomon57; @Aiello12] and is thus expected to achieve much better overall sensitivities than continuous driving. ![Sensitivity for AC magnetometry. We compare the achievable sensitivity for constant driving (green, dash-dotted), PDD of $n=50$ echoes (blue, dotted) and RE ($2\pi$-RE, red, achieving the same sensitivity as PDD at a lower frequency, and $8\pi$-RE, black, achieving better sensitivity than PDD at the same frequency). We assumed $T_2=500\mu$s under OU noise, yielding a super-exponential decay $\propto e^{-T^3/(n^2 T_2^3)}$, and a single readout with $C=0.03$. The decay of the constant (Rabi) driving was calculated following Ref. [@Dobrovitski09] for long $\tau_c$. The dashed, thin lines correspond to the ideal limit with no driving or pulse errors.[]{data-label="fig:SensitivityACFreq"}](SensitivityACFreq1){width="40.00000%"} In conclusion, we analyzed a novel scheme for AC magnetometry based on continuous dynamical decoupling and compared its performance to pulsed DD schemes. While we focused on the simplest DD sequences, we note that more complex driving, such as composite pulses [@Levitt86; @Aiello12], could achieve even better refocusing of driving-field instability and inhomogeneity while still providing comparable sensitivity. We further analyzed the response of AC magnetometry to fields of unknown frequencies, finding that some CoDD schemes (such as continuous driving or spin locking with alternating phases) are advantageous for spectroscopy. The sensitivity is ultimately limited not only by the theoretically achievable coherence time, but also by pulse errors or fluctuations in the driving field. 
While a full comparison of the limits due to imperfections in the control fields is beyond the scope of this work, the flexibility of CoDD schemes in modulating both phase and amplitude of the driving field can provide practical advantages, yielding a better compromise between the DD refocusing power and the frequencies of the field to be measured.\ **Acknowledgments** This work was supported in part by the ARO through grant No. W911NF-11-1-0400 and by DARPA. C. D. A. acknowledges support from the Schlumberger Foundation. Cumulant ======== We can calculate the time (ensemble) average of a time-ordered exponential operator by means of a cumulant expansion. The first cumulant is zero since the noise is assumed to have zero average. The second cumulant for the RE sequence is given by Eq. \[eq:K2\] with $$\begin{array}{ll} \alpha=&\frac{\sigma ^2 T^2 \tau _c e^{-\frac{n T}{\tau _c}}} {\left(e^{\frac{T}{2 \tau _c}}+1\right)^2 \left(16 \pi ^2 k^2 \tau _c^2+T^2\right)^2} \left[2 n \left(e^{\frac{T}{2 \tau _c}}+1\right){}^2 e^{\frac{n T}{\tau _c}} \left(16 \pi ^2 k^2 T \tau _c^2+64 \pi ^2 k^2 \tau _c^3 \tanh \left(\frac{T}{4 \tau _c}\right)+T^3\right)\right.\\ &\left.-8 \tau _c e^{\frac{(n+1) T}{2 \tau _c}} \left(\left(T^2-16 \pi ^2 k^2 \tau _c^2\right)+\left(16 \pi ^2 k^2 \tau _c^2+T^2\right) \cosh \left(\frac{T}{2 \tau _c}\right)\right) \sinh \left(\frac{n T}{2 \tau _c}\right)\right]\end{array}$$ $$\begin{array}{ll} \beta=&-\frac{2 \sigma ^2 T^2 \tau _c^2 e^{-\frac{n T}{\tau _c}}} {\left(e^{\frac{T}{2 \tau _c}}+1\right)^2 \left(16 \pi ^2 k^2 \tau _c^2+T^2\right)^2}\times \\&\left[16 \pi ^2 k^2 \tau _c^2 \left(e^{\frac{T}{2 \tau _c}}-1\right) \left(e^{\frac{n T}{\tau _c}} \left((4 n-1) e^{\frac{T}{2 \tau _c}}+4 n+1\right)+e^{\frac{T}{2 \tau _c}}-1\right)+T^2 \left(e^{\frac{T}{2 \tau _c}}+1\right){}^2 \left(e^{\frac{n T}{\tau _c}}-1\right)\right]\end{array}$$ $$\begin{array}{ll} \gamma=&-\frac{2 \sigma ^2 T^2 \tau _c^2 e^{-\frac{n T}{\tau _c}} } {\left(e^{\frac{T}{2 \tau _c}}+1\right)^2 \left(16 \pi ^2 k^2 \tau _c^2+T^2\right)^2} \left[64 \pi ^2 k^2 T^2 \tau _c^2 \left(e^{\frac{T}{2 \tau _c}}+1\right)^4 \left(e^{\frac{n T}{\tau _c}}-1\right)^2 \tanh\left(\frac{T}{4 \tau _c}\right)^2+\right.\\ &\left.\left(16 \pi ^2 k^2 \tau _c^2 \left(e^{\frac{T}{2 \tau _c}}-1\right) \left(e^{\frac{n T}{\tau _c}} \left((4 n-1) e^{\frac{T}{2 \tau _c}}+4 n+1\right)+e^{\frac{T}{2 \tau _c}}-1\right) + T^2 \left(e^{\frac{T}{2 \tau _c}}+1\right)^2 \left(e^{\frac{n T}{\tau _c}}-1\right)\right)^2\right]^{1/2}\end{array}$$

References
==========

- doi:10.1063/1.1507838
- doi:10.1038/nphys1075
- doi:10.1038/nature07279
- doi:10.1038/nature07278
- doi:10.1038/nphys1969
- doi:10.1038/nnano.2012.50
- doi:10.1038/nnano.2011.64
- doi:10.1038/ncomms1856
- doi:10.1038/NPHYS1994
- doi:10.1103/PhysRev.94.630
- doi:10.1103/PhysRevLett.82.2417
- doi:10.1103/PhysRevLett.98.100504
- doi:10.1103/PhysRevA.75.062310
- doi:10.1088/0953-4075/44/15/154002
- doi:10.1103/PhysRevLett.106.080802
- doi:10.1103/PhysRevLett.95.180501
- doi:10.1103/PhysRevB.85.085206
- doi:10.1016/0022-2364(81)90200-6
- doi:10.1016/S1090-7807(03)00010-7
- doi:10.1016/0022-2364(83)90035-5
- doi:10.1016/0079-6565(86)80005-X
- doi:10.1103/PhysRevA.80.032314
- arXiv:1205.2402
- doi:10.1103/PhysRev.110.61
- doi:10.1103/PhysRevB.84.161403
- doi:10.1103/PhysRev.98.1787
- doi:10.1143/JPSJ.17.1100
- doi:10.1063/1.2216702
- doi:10.1103/PhysRevLett.102.237601
- doi:10.1126/science.1189075
YUMS 97-018, DO-TH 97-15, SNUTP 97-089\ hep-ph/9706451 (modified 27 June 1997)\ **FLAVOR DEMOCRACY AND QUARK MASS MATRICES [^1]** C. S. Kim *Department of Physics, Yonsei University, Seoul 120-749, Korea* E-mail: kim@cskim.yonsei.ac.kr and G. Cvetič *Department of Physics, University of Dortmund, Dortmund, Germany* E-mail: cvetic@doom.physik.uni-dortmund.de Flavor Democracy at Low Energy ============================== In the standard electroweak theory, the hierarchical pattern of the quark masses and their mixing remains an outstanding issue. While a gauge interaction is characterized by its universal coupling constant, the Yukawa interactions have as many coupling constants as there are fields coupled to the Higgs boson. There is no apparent underlying principle which governs the hierarchy of the various Yukawa couplings, and as a result, the Standard Model of strong and electroweak interactions can predict neither the quark (or lepton) masses nor their mixing. This situation can be improved by assuming a universal Yukawa interaction – the resulting spectrum then consists of one massive and two massless quarks in each (up and down) sector in the three-generation Standard Model. Flavor–democratic (FD) quark mass matrices, and a perturbed form of such FD matrices, were already introduced in 1978 by Harari, Haut and Weyers[@0)] in a left-right symmetric framework. 
Flavor democracy has recently been suggested by Koide, Fritzsch and Plankl[@1)], as well as by Nambu[@[3]] and many other authors[@[3]], as an analogy with the BCS theory of superconductivity. In this Section we will discuss how this flavor symmetry can be broken by a slight perturbation at low energies, in order to reproduce the quark masses and the CKM matrix[@3)]. As a result, predictions for the top quark mass and for the CP violation parameter $J_{CP}$ are obtained. This Section is based on a work by Cuypers and Kim[@11)]. Considering only quark fields, the gauge invariant Yukawa Lagrangian is $${\cal L}_{\rm Y} = - \sum_{i,j} (\bar Q'_{iL}~\Gamma^D_{ij}~d'_{jR}~\phi~+~ \bar Q'_{iL}~\Gamma^U_{ij}~u'_{jR}~\tilde \phi~+~\mbox{h.c.}) \ . \label{eq1}$$ Here, the primed quark fields are in a flavor \[$SU(2)$\] basis of the $SU(2) \times U(1)$ electroweak gauge group – the left-handed quarks form doublets under the $SU(2)$ transformation, $\bar Q'_L=(\bar u'_L,~\bar d'_L)$, and the right-handed quarks are singlets. The indices $i$ and $j$ run over the number of fermion generations. The Yukawa coupling matrices $\Gamma^{U,D}$ are arbitrary and not necessarily diagonal. After spontaneous symmetry breaking, the Higgs field $\phi$ acquires a nonvanishing vacuum expectation value (VEV) $v$, which yields quark mass terms in the original Lagrangian $${\cal L}_{\rm mass} = - \sum_{i,j} (\bar d'_{iL}~M^D_{ij}~ d'_{jR}~+~\bar u'_{iL}~M^U_{ij}~u'_{jR}~+~\mbox{h.c.}) \ , \label{eq2}$$ and the quark mass matrices are defined as $$M^{U,D}_{ij} \equiv {v \over \sqrt{2}}~\Gamma^{U,D}_{ij} \ . \label{eq3}$$ The mass matrices $M^{U,D}$ are diagonalized by biunitary transformations involving unitary matrices $U^{U,D}_L$ and $U^{U,D}_R$, and the flavor eigenstates are transformed to physical mass eigenstates by the same unitary transformations, $$U^{U,D}_L~M^{U,D}~(U^{U,D}_R)^{\dagger} = M^{U,D}_{\rm diag}~~{\rm and}~~ U^U_{L,R}~u'_{L,R} = u_{L,R},~~U^D_{L,R}~d'_{L,R} = d_{L,R}~~. 
\label{eq4}$$ Using the recent CDF data[@4)] of the physical top mass $m_t^{\rm phys.} \approx 175$ GeV, the diagonalized mass matrices $M^{U,D}_{\rm diag}$ at a mass scale of 1 GeV are $$M_{\rm diag}^U \approx m_t \left[ \begin{array}{ccc} 2.5\times10^{-5} & & \\ & 0.006 & \\ & & 1 \end{array} \right] \quad {\rm and} \quad M_{\rm diag}^D \approx m_b \left[ \begin{array}{ccc} 1.7\times10^{-3} & & \\ & 0.03 & \\ & & 1 \end{array} \right]. \label{eq5}$$ The first two eigenvalues in both matrices are almost zero (almost degenerate) when compared to the eigenvalue of the third generation. In order to account for this large mass gap, one can use mass matrices which have in a flavor basis the flavor–democratic (FD) form $$M^U_0 = \frac{m_t}{3} \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right] ~~{\rm and}~~ M^D_0 = \frac{m_b}{3} \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right] ~~. \label{eq6}$$ Diagonalization leads to a pattern similar to the experimental spectrum (5) $$M_{\rm diag}^U = m_t \left[ \begin{array}{ccc} 0 & & \\ & 0 & \\ & & 1 \end{array} \right] \qquad {\rm and} \qquad M_{\rm diag}^D = m_b \left[ \begin{array}{ccc} 0 & & \\ & 0 & \\ & & 1 \end{array} \right] \ . \label{eq7}$$ Arbitrariness in the choice of the Yukawa Lagrangian has been substantially reduced with this symmetric choice. Each (up or down) quark sector is determined in this pure FD approximation by a single universal Yukawa coupling. To induce nonzero masses for the lighter quarks and to reproduce the experimental CKM matrix, small perturbations have to be added to the universal Yukawa interactions. 
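Numerically, the biunitary diagonalization in (4) is just a singular value decomposition: writing $M = U\Sigma V^\dagger$, one has $U_L = U^\dagger$ and $U_R = V^\dagger$. A short illustration (generic random mass matrix, NumPy conventions assumed; the singular values play the role of the quark masses, up to ordering and phase conventions):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # generic M^U

U, s, Vh = np.linalg.svd(M)        # M = U diag(s) Vh
UL, UR = U.conj().T, Vh            # so that  U_L M U_R^dagger = M_diag
M_diag = UL @ M @ UR.conj().T
print(np.round(M_diag, 10))        # diagonal; entries are the singular values
```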
One possibility is to analyze effects of the following two kinds of independent perturbation matrices $$P_1 = \left[ \begin{array}{ccc} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & 0 \end{array} \right] \qquad {\rm and} \qquad P_2 = \left[ \begin{array}{ccc} 0 & a & 0 \\ a & 0 & b \\ 0 & b & 0 \end{array} \right] \ , \label{eq8}$$ $\alpha,~\beta,~a$ and $b$ being real parameters to be determined from the quark masses. For simplicity, these perturbations can be applied separately. Quark mass matrices (in a flavor basis) are then sums of the dominant universal FD matrices (6) plus one kind of the perturbation matrices (8). One then has to solve the eigenvalue problem $$\det~|M^{U,D}~-~\lambda|=0,~~{\rm where}~~M^{U,D}=M^{U,D}_0~+~P_i~~ {\rm and}~~\lambda = m_1,~-m_2,~m_3~~ \ , \label{eq9}$$ where $m_1=m_d~{\rm or}~m_u,~m_2=m_s~{\rm or}~m_c$ and $m_3=m_b$ or $m_t$. The six parameters of the perturbed matrices $M^{U,D}$ (e.g., $m_t,~{\alpha}^{(u)},~{\beta}^{(u)};~m_b,~a^{(d)},~b^{(d)}$) are uniquely determined from the experimental input of the five light (current) quark masses and the choice of a particular mass for the top quark. The CKM matrix is then constructed as $$V = U^U_L~ \left[ \begin{array}{ccc} 1 & & \\ & e^{i\sigma} & \\ & & e^{i\tau} \end{array} \right]~ U^{D\dagger}_L \ , \label{eq10}$$ where the phase angles $\sigma$ and $\tau$ are introduced phenomenologically to generate possible CP violation in the framework of the three-generation standard CKM model. The CKM matrix is then uniquely determined by the arbitrary input of the two angles $\sigma$ and $\tau$ in (10). To determine these eight perturbation parameters, a $\chi^2$ analysis was used. For the first five quarks, the masses obtained by Gasser and Leutwyler[@5)] can be used. No constraints on the top quark mass were imposed. Additional constraints, coming from two sources, were used for the four degrees of freedom of the CKM matrix. 
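Returning to the eigenvalue problem (9): the pure FD matrix (6) has eigenvalues $(0,0,1)\,m_t$, and a small perturbation lifts the two-fold degeneracy, producing light masses. A numerical illustration with the diagonal perturbation $P_1$ (the values of $\alpha,\beta$ here are illustrative, not the fitted ones):

```python
import numpy as np

m_t = 1.0
M0 = m_t * np.ones((3, 3)) / 3          # flavor-democratic matrix, Eq. (6)
alpha, beta = 0.01, 0.03                # small diagonal perturbation P_1
P1 = np.diag([alpha, beta, 0.0])

print(np.linalg.eigvalsh(M0))           # two zeros and m_t
print(np.linalg.eigvalsh(M0 + P1))      # two small masses, one close to m_t
```

The two light eigenvalues are of order $\alpha,\beta$, while the heavy one stays near $m_t$, mirroring the hierarchy in (5).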
Information on the quark mixing angles comes from the measurements of the three absolute values[@6)]: $$|V_{us}| = \sin \theta_{C} = 0.221 \pm 0.002,~~ |V_{cb}| = 0.040 \pm 0.004,~~ \left| V_{ub}/V_{cb} \right| = 0.08 \pm 0.02~~. \label{eq11}$$ Information on the CP violating phase was taken from the experimental value of the $\varepsilon$ parameter of $K$ decay $$\varepsilon = (2.26 \pm 0.02)~10^{-3} = B_K\cdot f(m_c,m_t,V) \ , \label{eq12}$$ where $f$ is a complicated function of the charmed and top quark masses and of the CKM matrix elements, and $B_K$ is the parameter connecting a free quark estimate to the actual value of the $\Delta S =2$ matrix element describing $K - \bar K$ mixing. Following Ref. [@8)], we used the value $B_K \approx 2/3~\pm~1/3~~.$ The analysis showed that only the combination of perturbations $P_U=P_1$ and $P_D=P_2$ resulted in an acceptable value of $\chi^2/d.o.f. \approx 0.6/1$. The best fit was obtained for $$m_s = 183~{\rm MeV},~~ m_t = 100~{\rm GeV},~~ \sigma = 0.6^{\circ},~~{\rm and}~~ \tau = 5.7^{\circ}~~, \label{eq13}$$ the other quark masses being close to their central values. The three other combinations gave much larger values $\chi^2 > 4$. It thus appears that the prediction for the top quark mass from the low energy FD mass matrices cannot satisfy the TEVATRON[@4)] value of $m_t^{\rm phys.} \approx 175$ GeV. This model’s prediction for $$J_{CP} = {{\rm Im}}(V_{ub}V_{td}V^*_{ud}V^*_{tb}) \ , \label{eq14}$$ as a function of $m_t$ can also be obtained – the approximate value $J_{CP} = (0.3 \pm 0.2)~10^{-4}$ is predicted, which corresponds to $\sin \delta_{13} \approx (0.56 \pm 0.37)$. This result is used to predict $${\varepsilon'/\varepsilon} = (290)\cdot J_{CP}\cdot H(m_t) \ , \label{eq15}$$ where $H(m_t)$ is a decreasing function of the top quark mass[@8)]. The predicted value in the model is ${\varepsilon'/\varepsilon} = (0.6 \pm 0.5)~10^{-3}$, with a weak dependence on the top quark mass. 
This prediction seems to favor the data from E731[@9)] over the data from NA31[@10)]. To conclude this Section, we described a new set of quark mass matrices based on a perturbation of a universal (FD) Yukawa interaction at [*low energy*]{}. The model contains eight parameters, which have been fitted to reproduce the five known quark masses (except $m_t$), moduli of three known elements of the CKM matrix, and the $K$-physics parameter ${\varepsilon}$. As a result, the physical top quark mass is predicted to be not much heavier than $\approx 100$ GeV, and the direct CP violation parameters are predicted to be $J_{CP} = (0.3 \pm 0.2)~10^{-4}$ and ${\varepsilon'/\varepsilon} = (0.6 \pm 0.5)~10^{-3}$. The analysis will be improved substantially with a better theoretical knowledge of $B_K$, a more precise determination of the light quark masses as well as by taking into account the more accurate measurement of $|V_{cb}|$ and the ratio $|V_{ub}/V_{cb}|$. This [*low energy*]{} model, based on a simple perturbation of a universal FD Yukawa interaction at low energies, has been invalidated by the discovery of the top quark much heavier than 100 GeV. Flavor Democracy at High Energies ================================= Many attempts to unify the gauge interactions of the Standard Model (SM) have been made in the past – within the framework of the Grand Unified Theories (GUT’s). These theories give a unification energy $E_{\rm GUT} \stackrel{>}{\sim} 10^{16}$ GeV, i.e., the energy where the SM gauge couplings would coincide: $5 {\alpha}_1/3 =$ $\alpha_2 =$ $\alpha_3$. Here, $\alpha_j = g_j^2 / 4 \pi$ ($j=1,2,3$) are the gauge couplings of $U(1)_Y,~SU(2)_L$, $SU(3)_C$, respectively. For the unification condition to be satisfied at a single point $\mu (=E_{\rm GUT})$ exactly, supersymmetric theories (SUSY) were used,[@[1]] replacing the SM above the energies $\mu \approx M_{\rm SUSY} \approx 1$ TeV. 
This changed the slopes of $\alpha_j=\alpha_j (\mu)$ at $\mu \geq M_{\rm SUSY}$, and for certain values of the SUSY parameters the three lines met at a single point. There are several deficiencies in such an approach. The unification energy is exceedingly large ($E_{\rm GUT} \stackrel{>}{\sim} 10^{16}$ GeV) since the proton decay time is large ($\tau_{\rm proton} \geq 5.5 \times 10^{32}$ yr). This implies a large desert between $M_{\rm SUSY}$ and $E_{\rm GUT}$. While eliminating several of the previously free parameters of the SM, SUSY introduces several new parameters and new elementary particles which have not been observed. It is our belief that it is more reasonable to attempt first to reduce the number of degrees of freedom (d.o.f.’s) in the Yukawa sector, since this sector seems to be at least as problematic as the gauge boson sector. Any such attempt should be required to lead to an overall reduction of the seemingly independent d.o.f.’s, unlike the GUT–SUSY approach. The symmetry responsible for this reduction of the number of parameters can be “flavor democracy” (FD), valid possibly in certain separate sectors of fermions (e.g., up-type sector, down-type sector). This symmetry could be realized in a flavor gauge theory (FGT)[@[2]] – a theory blind to fermionic flavors at high energies $E > {\Lambda}_{\rm FGT}$ and leading at “lower” energies $E \sim {\Lambda}_{\rm FGT}$ to flavor–democratic (FD) Yukawa interactions. The requirement of reducing as many d.o.f.’s as possible would make it natural for FGT’s to be without an elementary Higgs. The scalars of the SM are then tightly bound states of fermion pairs ${\bar f} f$, with ${\bar f}f$ condensation taking place at energies ${\Lambda}$: $E_{\rm ew} \ll {\Lambda} \stackrel{<}{\sim} {\Lambda}_{\rm FGT}$. The idea of FD, and deviations from exact FD, at [*low energies*]{} ($E \sim 1-10^2$ GeV) have been investigated by several authors[@1); @[3]; @11)]. 
On the other hand, in this Section we discuss FD and deviations from it at [*higher energies*]{} $E \gg E_{\rm ew}$, and a possible connection with FGT’s. This discussion is motivated and partly based on works of Ref. [@[2]]. Let us first illustrate these concepts with a simple scheme of an FGT. Assume that at energies $E \stackrel{>}{\sim} \Lambda_{\rm FGT}$ we have no SM scalars, but new gauge bosons $B_\mu$, i.e., the symmetry group of the gauge theory is extended to a group $G_{\rm SM} \times G_{\rm FGT}$. Furthermore, we assume that the new gauge bosons obtain a heavy mass $M_B~( \sim \Lambda_{\rm FGT})$ by an unspecified mechanism (e.g., dynamically, or via a mechanism mediated by an elementary Higgs). At such high energies, the SM part $G_{\rm SM} \equiv$ $SU(3)_c \times SU(2)_L \times U(1)_Y$ is without Higgses, and hence with (as yet) massless gauge bosons and fermions. The FGT part of the Lagrangian in the fermionic sector is written schematically as $${\cal{L}}^{\rm FGT}_{g.b.-f} = -g \bar\Psi \gamma^\mu B_\mu \Psi ~~~({\rm for}~~E \stackrel{>}{\sim} \Lambda_{\rm FGT})~, \label{eqq2}$$ where $\Psi$ is the column of all fermions and $B_\mu=B_\mu^j T_j$. $T_j$’s are the generator matrices of the new symmetry group $G_{\rm FGT}$. Furthermore, we assume that the $T_j$’s corresponding to the electrically neutral $B_\mu^j$’s do not mix flavors (i.e., no FCNC’s at tree level) and are proportional to identity matrices in the flavor space (“flavor blindness”). We will argue in the following that the FGT Lagrangian (16) can imply creation of composite Higgs particles through condensation of fermion pairs, and can subsequently lead at lower energies to Yukawa couplings with flavor democracy. 
The effective current–current interaction, corresponding to exchanges of neutral gauge bosons $B$ at “low” cutoff energies $E$ ($E \sim \Lambda_{\rm FGT} \sim M_B$), is $${\cal{L}}^{\rm FGT}_{4f} \approx -{g^2 \over 2 M_B^2} \sum_{i,j} (\bar f_i \gamma^\mu f_i)(\bar f_j \gamma_\mu f_j)~~~({\rm for}~~ E \sim \Lambda_{\rm FGT} \sim M_B)~. \label{eqq3}$$ Since we are interested in the possibility of Yukawa interactions of SM originating from (17), and since such interactions connect left–handed to right–handed fermions, we have to deal only with the left–to–right (and right–to–left) part of (17). Applying a Fierz transformation[@[4]] to this part, we obtain four-fermion interactions without $\gamma_\mu$’s $${\cal{L}}^{\rm FGT}_{4f} \approx {2 g^2 \over M_B^2} \sum_{i,j} (\bar f_{iL} f_{jR})(\bar f_{jR} f_{iL})~~~({\rm for}~~E \sim \Lambda_{\rm FGT} \sim M_B)~. \label{eqq4}$$ These interactions can be rewritten in a formally equivalent (Yukawa) form with auxiliary (i.e., as yet nondynamical) scalar fields. One possibility is to introduce only one $SU(2)$ doublet auxiliary scalar $H$ with (as yet arbitrary) bare mass $M_H$, by employing a familiar mathematical trick[@[5]] $$\begin{aligned} {\cal L}^{(E)}_{\rm Y}& \approx & - M_H {\sqrt{2} g \over M_B} \sum_{i,j=1}^{3} {\Bigg \{} \left[ (\bar\psi^q_{iL} \tilde H) u^q_{jR} + (\bar\psi^l_{iL} \tilde H) u^l_{jR} + \mbox{h.c.} \right] \nonumber\\ &&+ \left[ (\bar\psi^q_{iL} H) d^q_{jR} + (\bar\psi^l_{iL} H) d^l_{jR} + \mbox{h.c.} \right] {\Bigg \}} - M_H^2 H^{\dagger} H \ , \label{eqq5a}\end{aligned}$$ where $M_H$ is an unspecified bare mass of the auxiliary $H$, and we use the notations $$H = {H^{+} \choose H^0} \ , \qquad \tilde H = i \tau_2 H^{\ast} \ ; \qquad \psi^q_i = {u^q_i \choose d^q_i} \ , \psi^l_i = {u^l_i \choose d^l_i} \ ,$$ where $u^q_1 = u$, $u_1^l = \nu_e$, $u^q_2=c$, etc. 
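The equivalence between the four-fermion form (18) and the Yukawa form (19) is the usual auxiliary-field (Hubbard–Stratonovich-type) rewriting. Schematically, collapsing the flavor structure into a single channel $A \equiv \sum_{i,j}\bar f_{jR} f_{iL}$ and writing $\kappa \equiv M_H \sqrt{2}\,g/M_B$ (our shorthand, not the paper's notation), the equation of motion of the nondynamical $H$ reproduces the four-fermion coefficient:

```latex
$$\mathcal{L}_H = -\kappa\,(H^\dagger A + A^\dagger H) - M_H^2\, H^\dagger H
\ \Longrightarrow\
H = -\frac{\kappa}{M_H^2}\,A
\ \Longrightarrow\
\mathcal{L}_H\Big|_{\rm e.o.m.} = \frac{\kappa^2}{M_H^2}\,A^\dagger A
= \frac{2g^2}{M_B^2}\,A^\dagger A \ .$$
```

Note that the bare mass $M_H$ drops out of the final four-fermion coefficient, which is why it can be left unspecified in (19) and (20).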
Another possibility is to introduce two auxiliary scalar isodoublets $H^{(U)},~H^{(D)}$, with (as yet) arbitrary bare masses $M_H^{(U)},~ M_H^{(D)}$, and express (18) in the two-Higgs ‘Yukawa’ form $$\begin{aligned} {\cal{L}}^{(E)}_{\rm Y} \approx &- M_H^{(U)} {\sqrt{2} g \over M_B} \sum_{i,j=1}^{3}\left[ (\bar\psi^q_{iL} \tilde H^{(U)}) u^q_{jR} + (\bar\psi^l_{iL} \tilde H^{(U)}) u^l_{jR} + \mbox{h.c.} \right] \nonumber\\ &- M_H^{(D)} {\sqrt{2} g \over M_B} \sum_{i,j=1}^{3} \left[ (\bar\psi^q_{iL} H^{(D)}) d^q_{jR} + (\bar\psi^l_{iL} H^{(D)}) d^l_{jR} + \mbox{h.c.} \right] \\ &- {M_H^{(U)}}^2 ({H^{(U)}}^\dagger H^{(U)}) - {M_H^{(D)}}^2 ({H^{(D)}}^\dagger H^{(D)})~~. \nonumber \label{eqq6}\end{aligned}$$ The cutoff superscript $E$ ($\sim \Lambda_{\rm FGT}$) at the “bare” parameters and fields in (19) and (20) is suppressed for simplicity of notation. Yukawa terms there involve nondynamical scalar fields and are formally equivalent to (18). Equations of motion show that the (yet) nondynamical scalars $H$, $H^{(U)}$, $H^{(D)}$ are proportional to condensates involving fermions and antifermions – i.e., they are composite. When further decreasing the energy cutoff $E$ in the sense of the renormalization group, the composite scalars in (19) and (20) obtain kinetic energy terms and vacuum expectation values (VEV’s) through quantum effects if the FGT gauge coupling $g$ is strong enough – i.e., they become dynamical in an effective SM (or: two-Higgs-doublet SM) framework and they induce dynamically electroweak symmetry breaking (DEWSB). The neutral physical components of these composite Higgs doublets are scalar condensates[@[6]] of fermion pairs $H^0 \sim {\bar f} f$. The low energy effective theory is the minimal SM (MSM) in the case (19) and the SM with two Higgs doublets – type II \[2HDM(II)\] in the case (20). Hence, although (19) and (20) are formally equivalent to four-fermion interactions (18), they lead to two physically different low energy theories[@[2]]. 
The condensation scenario with the smaller vacuum energy density would physically materialize. We emphasize that the central ingredient distinguishing the described scheme from most other scenarios of DEWSB is the flavor democracy in the Yukawa sector near the transition energies, as expressed in (19) and (20). We note that (19) implies that the MSM, if it is to be replaced by an FGT at high energies, should exhibit a trend of the Yukawa coupling matrix (equivalently, of the mass matrix) in a flavor basis toward complete flavor democracy for [**all**]{} fermions, with a common overall factor, as the cutoff energy is increased within the effective MSM toward a transition energy $E_0 (\sim {\Lambda}_{\rm FGT})$ $$M^{(U)}~~{\rm and}~~M^{(D)} \rightarrow {1 \over 3} m_t^0 \pmatrix{N_{FD}^q & 0 \cr 0 & N_{FD}^l \cr}~~~{\rm as}~~ E \uparrow E_0~~, \label{eqq7a}$$ where $m_t^0=m_t(\mu=E_0)$ and $N_{FD}$ is the $3 \times 3$ flavor–democratic matrix $$N_{FD}^f = \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right]~~, \label{eqq7b}$$ with the superscript $f=q$ for the quark sector and $f=l$ for the leptonic sector. On the other hand, if the SM with two Higgses (type II) is to experience such a transition, then (20) implies [**separate**]{} trends toward FD for the up–type and down–type fermions $$M^{(U)}~(M^{(D)}) \rightarrow {1 \over 3}~ m_t^0~(m_b^0)~ \left[ \begin{array}{cc} N_{FD}^q & 0 \\ 0 & N_{FD}^l \end{array} \right]~~~ {\rm as}~~ E \uparrow E_0~~, \label{eqq8}$$ where $m_t^0$ and $m_b^0$ can in general be different. Note that $N_{FD}$, when written in the diagonal form in the mass basis, has the form $$N_{FD}^{\rm mass~basis} = 3 \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right]~~.
\label{eqq9}$$ Hence, in the mass basis, FD (and FGT) implies, as $E$ increases to $E_0 \sim {\Lambda}_{\rm FGT}$: $$\begin{aligned} {m_u \over m_t},~{m_c \over m_t},~{m_{\nu_e} \over m_{\nu_\tau}},~ {m_{\nu_\mu} \over m_{\nu_\tau}} &\rightarrow 0~~, \nonumber\\ {m_d \over m_b},~{m_s \over m_b},~~{m_e \over m_\tau},~~ {m_\mu \over m_\tau} &\rightarrow 0~~, \\ {m_{\nu_\tau} \over m_t},~~{m_\tau \over m_b} &\rightarrow 1~~, \nonumber \label{eqq10}\end{aligned}$$ and in the case of the minimal SM [**in addition**]{} $${m_b \over m_t},~{m_\tau \over m_{\nu_\tau}} \rightarrow 1~~. \label{eqq11}$$ In our previous papers[@[2]] we showed, by considering the quark sector, that the minimal SM does not have the required trend toward FD, but that the SM with two Higgs doublets (type II) does. We also checked that these conclusions remain true when the leptonic sector is included. When leptons are also included (Ref. [@[2]], first entry), we neglect for simplicity the masses of the first two fermion families, i.e., we deal only with $(t,~b)$ and $(\nu_\tau,~\tau)$ (here $\nu_\tau$ is the Dirac tau–neutrino), and investigate the evolution of their Yukawa coupling parameters (or: their masses) with energy. In the case of the effective 2HDM(II) with only the third fermion family, the FD conditions read as (25) (last line). The one–loop renormalization group equations (RGE’s) for the Yukawa coupling parameters $g_t,~g_b,~g_\nu,~g_\tau$ of the third-family fermions in any fixed flavor basis, for various Standard Models with two Higgs doublets, are available, for example, in Ref.[@[7]].
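The mass-basis statement above can be checked numerically: the flavor-democratic matrix is rank one with eigenvalues $(3,0,0)$, so only the third-family mass survives with the overall factor intact. A minimal sketch (numpy assumed; the 167 GeV value is purely illustrative):

```python
import numpy as np

# Flavor-democratic 3x3 matrix: all entries equal to 1.
N_FD = np.ones((3, 3))

# Its eigenvalues are (3, 0, 0): N_FD is rank one, so in the mass
# basis it becomes 3*diag(0, 0, 1), as stated in the text.
eigvals = np.sort(np.linalg.eigvalsh(N_FD))[::-1]

# A mass matrix M = (m3/3) * N_FD then has the single nonzero
# eigenvalue m3: the ratios m1/m3 and m2/m3 vanish exactly.
m3 = 167.0  # illustrative third-family mass in GeV
heaviest = np.max(np.linalg.eigvalsh((m3 / 3.0) * N_FD))
print(eigvals, heaviest)
```

This is exactly why the FD limit forces the first two families to be massless relative to the third, as in the mass-ratio limits above.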
The running masses (at evolution, or cutoff, energies $E$) are proportional to these parameters and to the (running) VEV’s of the two Higgs doublets: $$\left[ \begin{array}{c} m_t(E) \\ m_{\nu_\tau}(E) \end{array} \right] = \frac{ v_{_U}(E) }{\sqrt{2}} \left[ \begin{array}{c} g_t(E) \\ g_{\nu_\tau}(E) \end{array} \right] \ , \qquad \left[ \begin{array}{c} m_b(E) \\ m_{\tau}(E) \end{array} \right] = \frac{ v_{_D}(E) }{\sqrt{2}} \left[ \begin{array}{c} g_b(E) \\ g_{\tau}(E) \end{array} \right] \ , \label{eqq12a}$$ where $$\begin{aligned} \langle H^{(U)(E)} \rangle_0 & = & {1 \over \sqrt{2}} {0 \choose v_{_U}(E) }~,~~ \langle H^{(D)(E)} \rangle_0 = {1 \over \sqrt{2}} {0 \choose v_{_D}(E) }~ \nonumber\\ {\rm and}~~ v_{_U}^2(E) + v_{_D}^2(E)& = & v^2(E) \ (\approx 246^2~{\rm GeV}^2 \ \mbox{ for} \ E \sim E_{\rm ew}) \ . \label{eqq12b}\end{aligned}$$ We recall that the transition energy $E_0$, appearing in the FD conditions (25) and (26), is the energy above which the SM starts being replaced by an FGT and the composite scalars start “de-condensing.” In Ref.[@[2]], we argued that this $E_0$ lies near the pole of the running fermion masses ($E_0 \stackrel{<}{\approx} \Lambda_{\rm pole}$). We therefore simply approximate $E_0 = \Lambda_{\rm FGT} = \Lambda_{\rm pole}$. Hence, the high energy boundary conditions (25) become $${g_{\nu_\tau} \over g_t}=1,~~{g_\tau \over g_b}=1~~~ {\rm at}~~E \approx \Lambda_{\rm pole}~. \label{eqq13}$$ These conditions are taken into account in the numerical calculations, together with the low energy boundary conditions $$\begin{aligned} &m_\tau = 1.78~{\rm GeV},~~m_b(\mu=1~{\rm GeV}) = 5.3~{\rm GeV}, \nonumber\\ &m_t(\mu=m_t) \approx 167~{\rm GeV}~, \label{eqq14}\end{aligned}$$ where $m_\tau$ and $m_b$ are based on the available measured-mass data[@[8]; @[9]].
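The running-mass relations above tie the Yukawa couplings to the two VEV’s once the VEV ratio is fixed. A minimal numerical sketch (the ratio $v_{_U}/v_{_D}=1$ is an illustrative assumption, not a fitted value):

```python
import math

# Total electroweak VEV and an illustrative VEV ratio v_U/v_D = 1.
v = 246.0            # GeV
ratio = 1.0          # v_U / v_D (assumption for this sketch)
v_D = v / math.sqrt(1.0 + ratio**2)
v_U = ratio * v_D    # so that v_U**2 + v_D**2 == v**2

# Running masses from the low-energy boundary conditions in the text.
m_t = 167.0   # GeV, m_t(mu = m_t)
m_b = 5.3     # GeV, m_b(mu = 1 GeV), used here only for illustration

# m = v_{U,D} * g / sqrt(2)  =>  g = sqrt(2) * m / v_{U,D}
g_t = math.sqrt(2.0) * m_t / v_U
g_b = math.sqrt(2.0) * m_b / v_D
print(g_t, g_b)   # g_t ~ 1.36, g_b ~ 0.04 for this choice of ratio
```

For larger $v_{_U}/v_{_D}$ the coupling $g_t$ decreases while $g_b$ grows, which is one reason the RGE evolution toward $\Lambda_{\rm pole}$ depends on the chosen VEV ratio.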
The above value of the mass, $m_t(m_t) \approx m_t^{\rm phys.} [1 + 4 {\alpha}_3(m_t)/(3 \pi) ]^{-1}$ $\approx m_t^{\rm phys.}/1.047$, is based on the experimental value $m_t^{\rm phys.} \approx 175$ GeV measured at Tevatron[@4)]. For chosen values of the VEV ratio $v_{_U}/v_{_D}$, we found the masses of the Dirac tau–neutrino $m_{\nu_\tau}$ that satisfy the above boundary conditions (29,30), by numerical integration of the RGE’s from $\mu=1$ GeV to $\Lambda_{\rm pole}$. The calculated Dirac neutrino masses are too large to be compatible with the available experimental bounds[@[11]]. Therefore, we invoke the usual “see–saw mechanism”[@[12]] of mixing of the Dirac neutrino masses with the much larger right–handed Majorana neutrino masses $M_R$, in order to obtain a small physical neutrino mass $$m_\nu^{\rm phys.} \approx {(m_\nu^{\rm Dirac})^2 \over 4~ M_R}~~. \label{eqq15a}$$ The Majorana mass term breaks lepton-number conservation; therefore, the Majorana masses $M_R$ are expected to be of the order of some new (unification) scale $\Lambda~\gg~E_{\rm ew}$, and we assume $M_R \approx \Lambda$. Within our context, the simplest choice of this new unification scale is the energy $\Lambda_{\rm FGT} = \Lambda_{\rm pole}$ where the SM is replaced by the FGT, so that $$m_\nu^{\rm phys.} \approx {(m_\nu^{\rm Dirac})^2 \over 4~ \Lambda_{\rm FGT}}~~. \label{eqq15b}$$ The physical tau–neutrino masses $m_{\nu_\tau}^{\rm phys.}$ predicted in this way are very small for most of the chosen values of $v_{_U}/v_{_D}$ and $m_t^{\rm phys.}$, i.e., in most cases they are acceptable since they lie below the experimental upper bounds[@[11]]. The see–saw scenario leading to our predictions of $m_{\nu_\tau}^{\rm phys.}$ implicitly assumes that: (a) the FGT contains in addition Majorana neutrinos, and its energy range of validity also provides the scale for the heavy Majorana masses \[i.e., $M_R \sim \Lambda_{\rm FGT}$\].
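The two numerical steps in this paragraph can be reproduced directly. Note that on dimensional grounds the see-saw estimate must involve the square of the Dirac mass, $m_\nu^{\rm phys.} \approx (m_\nu^{\rm Dirac})^2/(4 M_R)$. The Dirac mass and FGT scale below are hypothetical placeholders chosen only to illustrate the suppression, not fitted values from the text:

```python
# Running top mass from the physical (pole) mass, as quoted in the text:
# m_t(m_t) ~= m_t_phys * [1 + 4*alpha_3(m_t)/(3*pi)]**(-1) ~= m_t_phys/1.047
m_t_phys = 175.0                 # GeV, Tevatron value
m_t_run = m_t_phys / 1.047
print(round(m_t_run, 1))         # -> 167.1 (GeV), matching m_t(m_t) ~ 167 GeV

# See-saw suppression with M_R ~ Lambda_FGT; both inputs are hypothetical
# placeholders chosen only to illustrate the order of magnitude.
m_D = 100.0                      # GeV, hypothetical Dirac tau-neutrino mass
Lambda_FGT = 1.0e15              # GeV, hypothetical FGT scale
m_phys_eV = (m_D**2 / (4.0 * Lambda_FGT)) * 1.0e9   # convert GeV -> eV
print(m_phys_eV)                 # ~ 2.5e-3 eV, far below the 31 MeV bound
```

The quadratic dependence on $m_D$ and the inverse dependence on $\Lambda_{\rm FGT}$ are what drive the upper and lower bounds on $m_t^{\rm phys.}$ discussed next.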
(b) At low (SM) energies, the Majorana neutrinos remain decoupled from (or very weakly coupled to) the Dirac neutrinos, which is a very plausible assumption in view of assumption (a). In general, one could assume $M_R \sim \Lambda_{\rm new-scale} \geq \Lambda_{\rm FGT}$, leading to even smaller $m_{\nu_\tau}^{\rm phys.}$ than those in (32). When increasing $m_t^{\rm phys.}$ at a fixed $v_{_U}/v_{_D}$, $m_{\nu_\tau}^{\rm Dirac}$ increases and $\Lambda_{\rm FGT}$ decreases, and hence $m_{\nu_\tau}^{\rm phys.}$ increases. This provides us, at a given ratio $v_{_U}/v_{_D}$, with: (a) [*upper*]{} bounds on $m_t^{\rm phys.}$ for (various) specific upper bounds imposed on $m_{\nu_\tau}^{\rm phys.}$ (e.g., $\leq 31$ MeV[@[11]], or $\leq 1$ MeV, or $\leq 17$ keV[@[13]]); (b) [*lower*]{} bounds on $m_t^{\rm phys.}$ for (various) specific upper bounds imposed on $\Lambda_{\rm FGT}$ (e.g., $\leq \Lambda_{\rm Planck}$, or $\leq 10^{10}$ GeV, or $\leq 10^5$ GeV). Even with the largest possible upper bounds $m_{\nu_\tau}^{\rm phys.} \leq 31$ MeV and $\Lambda_{\rm FGT} \leq \Lambda_{\rm Planck}$, we still obtain rather narrow bands for the values of $m_t^{\rm phys.}$ at any given $v_{_U}/v_{_D}$. E.g., if $v_{_U}/v_{_D}=1$, then 155 GeV $\stackrel{<}{\approx} m_t^{\rm phys.} \stackrel{<}{\approx} 225$ GeV. Inversely, if $m_t^{\rm phys.} = 175$ GeV \[$m_t(m_t) = 167$ GeV\], $m_{\nu_\tau}^{\rm phys.} \leq 31$ MeV and $\Lambda_{\rm FGT} \leq \Lambda_{\rm Planck}$, then we obtain rather stringent bounds on the VEV ratio: $0.64 \stackrel{<}{\approx} v_{_U}/v_{_D} \stackrel{<}{\approx} 1.35$. To conclude this Section, we stress that we can estimate the masses of the top quark and the tau–neutrino within the SM with two Higgs doublets, assuming solely that complete flavor democracy sets in at energies where the SM starts breaking down. The gauge theories (FGT’s) which presumably replace the SM at such energies remain to be investigated further. For related detailed information, see Ref.[@[2]].
Discussions and Conclusion
==========================

We discussed on the one hand flavor–democratic (FD) mass matrices at [*low energies*]{}, and on the other hand conditions under which mass matrices show a trend toward flavor–democratic forms at [*high energies*]{} (in a flavor basis) – a behavior possibly related to flavor gauge theories (FGT’s) at high energies. We found that the model based on our simple perturbation of a universal FD Yukawa interaction at [*low energies*]{} has been invalidated by the discovery of a top quark much heavier than 100 GeV. At [*high energies*]{}, on the contrary, assuming solely that complete flavor democracy sets in at energies where an effective perturbative two-Higgs-doublet SM (type II) starts breaking down, we can estimate the masses of the top quark and the tau–neutrino, which are compatible with the present experimental results. The gauge theories (FGT’s) which presumably replace the SM at such energies remain to be investigated further. In our forthcoming work[@chk], we intend to investigate further the simple FD mass-matrix ansatz which had been applied earlier[@11)] at low energies and had given an experimentally unacceptable $m_t$. We intend to apply this ansatz at a high energy scale $E \sim \Lambda_{\rm pole}$, employing RGE evolution within a two-Higgs-doublet SM (type II). Furthermore, the composite nature of the scalars in this framework should be investigated further, particularly in view of the fact that, for VEV ratios $v_{_U}/v_{_D} \sim 1$, the usual RGE compositeness conditions at ${\Lambda}_{\rm pole}$ suggest that only $H^{(U)}$ can be fully composite, but not $H^{(D)}$ (cf. Ref. [@rev]).

Acknowledgements
================

CSK would like to thank Prof. Y. Koide for his kind invitation to the Workshop of MMQL97.
The work of CSK was supported in part by the CTP, Seoul National University, in part by Yonsei University Faculty Research Fund of 1997, in part by the BSRI Program, Ministry of Education, Project No. BSRI-97-2425, and in part by the KOSEF-DFG large collaboration project, Project No. 96-0702-01-01-2. The work of GC was supported in part by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie, Project No. 057DO93P(7).

References
==========

1. H. Harari, H. Haut and J. Weyers, Phys. Lett. [**B78**]{} (1978) 459.
2. Y. Koide, Phys. Rev. [**D39**]{} (1989) 3500; H. Fritzsch and J. Plankl, Phys. Lett. [**B237**]{} (1990) 451.
3. Y. Nambu, Proceedings of the XI Warsaw Symposium on High Energy Physics (1988); P. Kaus and S. Meshkov, Phys. Rev. [**D42**]{} (1990) 1863; F. Cuypers and C.S. Kim, Phys. Lett. [**B254**]{} (1991) 462; H. Fusaoka and Y. Koide, Mod. Phys. Lett. [**A10**]{} (1995) 289.
4. M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{} (1973) 652.
5. CDF Collaboration: F. Abe [*et al.*]{}, Phys. Rev. Lett. [**74**]{} (1995) 2626; Phys. Rev. Lett. [**77**]{} (1996) 438; D0 Collaboration: S. Abachi [*et al.*]{}, Phys. Rev. Lett. [**74**]{} (1995) 2632.
6. J. Gasser and H. Leutwyler, Phys. Rep. [**87**]{} (1982) 77.
7. Particle Data Book: Phys. Rev. [**D54**]{} (1996) 1; A. Ali and D. London, Nucl. Phys. Proc. Suppl. [**54A**]{} (1997) 297.
8. C.S. Kim, J.L. Rosner and C.-P. Yuan, Phys. Rev. [**D42**]{} (1990) 96.
9. G. Buchalla, A.J. Buras and M.K. Harlander, Nucl. Phys. [**B337**]{} (1990) 313.
10. E731 Collaboration: E.J. Ramberg [*et al.*]{}, Phys. Rev. Lett. [**70**]{} (1993) 2529.
11. NA31 Collaboration: G.D. Barr [*et al.*]{}, Phys. Lett. [**B317**]{} (1993) 233.
12. F. Cuypers and C.S. Kim, in Ref. 3.
13. U. Amaldi, W. de Boer and H. Furstenau, Phys. Lett. [**B260**]{} (1991) 447; P. Langacker and M. Luo, Phys. Rev. [**D44**]{} (1991) 817.
14. G. Cvetič and C.S. Kim, Mod. Phys. Lett. [**A9**]{} (1994) 289; Int. J. Mod. Phys. [**A9**]{} (1994) 1495; Nucl. Phys. [**B407**]{} (1993) 290; Phys. Rev. [**D51**]{} (1995) 201.
15. M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. [**B120**]{} (1977) 316.
16. T. Kugo, Prog. Theor. Phys. [**55**]{} (1976) 2032; K. Kikkawa, Prog. Theor. Phys. [**56**]{} (1976) 947; T. Eguchi, Phys. Rev. [**D14**]{} (1976) 2755; see also Ref. 14 (third entry), App. A.
17. V.G. Vaks and A.I. Larkin, Zh. Eksp. Teor. Fiz. [**40**]{}, No. 1 (1961) \[Sov. Phys. JETP [**13**]{} (1961) 192\]; Y. Nambu and G. Jona–Lasinio, Phys. Rev. [**122**]{} (1961) 345; [**124**]{} (1961) 246.
18. C.T. Hill, C.N. Leung and S. Rao, Nucl. Phys. [**B262**]{} (1985) 517; G. Cvetič, S.S. Hwang and C.S. Kim, hep-ph/9706323 (June 1997).
19. BES Collaboration: Jing-Zhi Bai [*et al.*]{}, Phys. Rev. Lett. [**69**]{} (1992) 3021.
20. J. Gasser and H. Leutwyler, Phys. Rep. [**87**]{} (1982) 77; S. Narison, Phys. Lett. [**B197**]{} (1987) 405.
21. ALEPH Collaboration: D. Decamp [*et al.*]{}, Z. Phys. [**C53**]{} (1992) 1; L3 Collaboration: B. Adeva [*et al.*]{}, Z. Phys. [**C51**]{} (1991) 179.
22. ARGUS Collaboration: H. Albrecht [*et al.*]{}, Phys. Lett. [**B202**]{} (1988) 149; [**B292**]{} (1992) 221.
23. M. Gell–Mann, P. Ramond and R. Slansky, in [*Supergravity*]{}, edited by P. Van Nieuwenhuizen and D.Z. Freedman (North–Holland, Amsterdam, 1979); T. Yanagida, Proceedings of the Workshop on Unified Theory and Baryon Number of the Universe (KEK, Japan, 1979).
24. A. Hime, R.J.N. Phillips, G.G. Ross and S. Sankar, Phys. Lett. [**B260**]{} (1991) 381.
25. G. Cvetič, S.S. Hwang and C.S. Kim, work in progress (1997).
26. G. Cvetič, “Top quark condensation – a review” (Subsec. VI.A.3), hep-ph/9702381, to appear in Rev. Mod. Phys.

[^1]: Talk given by C.S. Kim at the Workshop on Masses and Mixings of Quarks and Leptons, Shizuoka, Japan, March 19-21, 1997. Proceedings will be published.