---
abstract: 'In this article we characterize the complex hyperbolic groups that leave invariant a copy of the Veronese curve in $\Bbb{P}^2_{\Bbb{C}}$. As a corollary we get that every discrete compact surface group in $\PO^+(2,1)$ admits a deformation in $\PSL(3,\Bbb{C})$ with a non-empty region of discontinuity which is not conjugate to a complex hyperbolic subgroup. This provides a way to construct new examples of Kleinian groups acting on $\Bbb{P}^2_\Bbb{C}$, see [@CNS; @CS1; @SV3; @SV1; @SV2].'
address:
- ' UCIM UNAM, Unidad Cuernavaca, Av. Universidad s/n. Col. Lomas de Chamilpa, C.P. 62210, Cuernavaca, Morelos, México.'
- 'IIT UACJ, Av. del Charro 610 Norte, Partido Romero, C.P. 32310, Ciudad Juárez, Chihuahua, México'
author:
- Angel Cano
- Luis Loeza
title: Two dimensional Veronese groups with an invariant ball
---
[^1]
Introduction {#introduction .unnumbered}
============
Back in the 1990s, Seade and Verjovsky began the study of discrete groups acting on projective spaces, see [@SV3; @SV1; @SV2]. Over the years, new results have been discovered, see [@CNS]. However, it has been hard to construct groups acting on $\Bbb{P}^2_\Bbb{C}$ which are neither virtually affine nor complex hyperbolic. In this article we use the irreducible representation $\iota$ of $\PSL(2,\Bbb{C})$ into $\PSL(3,\Bbb{C})$ to produce such groups, more precisely, we show:
Let $\Gamma\subset \PSL(2,\Bbb{C})$ be a discrete group of the first kind with non-empty discontinuity region in the Riemann sphere. Then the following claims are equivalent:
1. The group $\Gamma$ is Fuchsian.
2. The group $\iota\Gamma$ is complex hyperbolic.
3. The group $\iota\Gamma$ is $\Bbb{R}$-Fuchsian.
Before we present our next result, we recall the following definition, see [@lab], page 30. A group $G$ is called a [*compact surface group*]{} if it is isomorphic to the fundamental group of a compact orientable topological surface $\Sigma_g$ of genus $g \geq 2$.
\[t:main2\] Let $\Sigma_g$ be a compact orientable topological surface of genus $g \geq 2$ and let $\rho_0: \Pi_{1}(\Sigma_g)\rightarrow \PO^+(2,1)$ be a faithful discrete representation, where $\PO^+(2,1)$ denotes the projectivization of the identity component of $\O(2,1)$. Then we can find a sequence of discrete faithful representations $\rho_n: \Pi_{1}(\Sigma_g)\rightarrow \PSL(3,\Bbb{C})$ such that:
1. For each $n\in \Bbb{N}$ the group $\rho_n(\Pi_1(\Sigma_g))=\Gamma_n$ is a complex Kleinian group whose action on $\Bbb{P}^2_{\Bbb{C}}$ is irreducible.
2. For each $n\in \Bbb{N}$ the group $\Gamma_n$ is not conjugate to a subgroup of $\PU(2,1)$ or $\PSL(3,\Bbb{R})$.
3. The sequence of representations $(\rho_n)$ converges algebraically to $\Gamma_0$, [*i. e.*]{} $\lim_n \rho_n(\gamma)=h$ exists as a projective transformation for all $\gamma\in \Pi_1(\Sigma_g)$ and $\Gamma_0=\{h:\lim_n\rho_n(\gamma)=h, \gamma\in \Pi_1(\Sigma_g) \}$; compare with the corresponding definition in [@JM].
4. The sequence $(\Gamma_n)$ of compact surface groups converges geometrically to $\Gamma_0$, [*i. e.*]{} for every subsequence $(j_n)$ of $(n)$ we get $$\Gamma_0=\{g\in \PSL(3,\Bbb{C}):g=\lim_{n}\rho_{j_n}(\gamma_n), \gamma_n\in \Pi_1(\Sigma_g)\},$$ compare with the corresponding definition in [@JM].
There are complex Kleinian groups acting on $\Bbb{P}^2_{\Bbb{C}}$ which are neither conjugate to complex hyperbolic groups nor virtually affine groups.
This paper is organized as follows: in Section \[s:recall\] we review some general facts and introduce the notation used throughout the text. In Section \[s:gever\] we describe some properties of the Veronese curve which are useful for our purposes. In Section \[s:chvh\] we characterize the complex hyperbolic subgroups that leave invariant a Veronese curve. In Section \[s:riv\] we describe those real hyperbolic subgroups leaving invariant a Veronese curve. Finally, in Section \[s:rep\] we show that every discrete compact surface group in $\PO^+(2,1)$ admits a deformation in $\PSL(3,\Bbb{C})$ which is not conjugate to a complex hyperbolic subgroup and has non-empty Kulkarni region of discontinuity.
Preliminaries {#s:recall}
=============
Projective geometry
-------------------
The complex projective space $\mathbb{P}^2_{\mathbb {C}}$ is defined as $$\mathbb{P}^{2}_{\mathbb {C}}=(\mathbb {C}^{3}\setminus \{0\})/\Bbb{C}^*,$$ where $\Bbb{C}^*$ acts by the usual scalar multiplication. This is a compact connected complex $2$-dimensional manifold. If $[\mbox{}]:\mathbb{C}^{3}\setminus\{0\}\rightarrow\mathbb{P}^{2}_{\mathbb{C}}$ is the quotient map, then a non-empty set $H\subset\mathbb{P}^2_{\mathbb{C}}$ is said to be a line if there is a $\mathbb{C}$-linear subspace $\widetilde{H}$ in $\mathbb{C}^{3}$ of dimension $2$ such that $[\widetilde{H}\setminus \{0\}]=H$. If $p,q$ are distinct points then $\overleftrightarrow{p,q}$ is the unique complex line passing through them. In this article, $e_1,e_2,e_{3}$ will denote the standard basis for $\Bbb{C}^{3}$.
Projective transformations
----------------------------
The group of projective automorphisms of $\mathbb{P}^{2}_{\mathbb{C}}$ is defined as $$\PSL(3, \mathbb {C}) \,:=\, \GL({3}, \Bbb{C})/\Bbb{C}^*,$$ where $\Bbb{C}^*$ acts by the usual scalar multiplication. Then $\PSL(3, \mathbb{C})$ is a Lie group acting by biholomorphisms on $\Bbb{P}^2_{\Bbb{C}}$; its elements are called projective transformations. We denote by $[[\mbox{ }]]: \GL(3,\mathbb{C})\rightarrow \PSL(3,\mathbb{C})$ the quotient map. Given $ \gamma\in\PSL(3, \mathbb{C})$, we say that $\widetilde\gamma\in\GL(3,\mathbb {C})$ is a [*lift*]{} of $ \gamma$ if $[[\widetilde\gamma]]=\gamma$.\
Complex hyperbolic groups
-------------------------
In the rest of this paper, we will be interested in studying those subgroups of $\PSL(3,\Bbb{C})$ that preserve the unitary complex ball. We start by considering the following Hermitian matrix: $$H=
\left (
\begin{array}{lll}
&&1\\
&1&\\
1&&
\end{array}
\right ).$$ We will set $$\U(2,1)=\{g\in \GL(3,\Bbb{C}):g^*Hg=H\}$$ $$\O(2,1)=\{g\in \GL(3,\Bbb{R}):g^t Hg=H\}$$
and denote by $\langle,\rangle:\Bbb{C}^{3}\times\Bbb{C}^{3}\rightarrow \Bbb{C}$ the Hermitian form induced by $H$. Clearly, $\langle,\rangle$ has signature $(2,1)$ and $\U(2,1)$ is the group that preserves $\langle,\rangle$, see [@goldman]. The projectivization $\PU(2,1)$ preserves the unitary complex ball: $$\Bbb{H}^2_\Bbb{C}=\{[w]\in \Bbb{P}^2_{\Bbb{C}}\mid \langle w,w\rangle <0\}.$$ Given a subgroup $\Gamma\subset\PU(2,1)$, we define the following notion of limit set, as in [@CG].
Let $\Gamma\subset \PU(2,1)$, then its Chen–Greenberg limit set is $\Lambda_{\CG}(\Gamma):= \bigcup \overline{\Gamma x}\cap \partial \Bbb{H}^2_\Bbb{C}$ where the union on the right runs over all points $x\in \Bbb{H}^2_\Bbb{C}$.
As in the Fuchsian groups case, $\Lambda_{\CG}(\Gamma)$ has either 1, 2 or infinitely many points. A group is said to be non-elementary if $\Lambda_{\CG}(\Gamma)$ has infinitely many points, and in that case it does not depend on the choice of orbit, [*i.e.*]{} $\Lambda_{\CG}(\Gamma):= \overline{\Gamma x}\cap \partial \Bbb{H}^2_\Bbb{C}$ where $x\in \Bbb{H}^2_\Bbb{C}$ is any point.\
Pseudo-projective transformations
----------------------------------
The space of linear transformations from $\Bbb{C}^{3}$ to $\Bbb{C}^{3}$, denoted by $M(3,\Bbb{C})$, is a complex linear space of dimension $9$, where $\GL(3,\Bbb{C})$ is an open dense set in $ M(3,\Bbb{C})$. Then $\PSL(3,\Bbb{C})$ is an open dense set in $QP(3,\Bbb{C})=(M(3,\Bbb{C})\setminus\{0 \})/\Bbb{C}^*$ called in [@CS] the space of pseudo-projective maps. Let $\widetilde{M}:\mathbb{C}^{3}\rightarrow\mathbb{C}^{3}$ be a non-zero linear transformation. Let $Ker(\widetilde M)$ be its kernel and $Ker([[\widetilde M]])$ denote its projectivization. Then $\widetilde{M}$ induces a well defined map $[[\widetilde M]]:\mathbb {P}^{2}_\mathbb {C}\setminus Ker([[\widetilde M]]) \rightarrow\mathbb {P}^{2}_\mathbb {C}$ by $$[[\widetilde M]]([v])=[\widetilde M(v)].$$ The following result provides a relation between convergence in $QP(3,\Bbb{C})$ viewed as points in a projective space and the convergence viewed as functions.
\[See [@CS]\] \[p:completes\] Let $(\gamma_m)_{m\in \mathbb {N}}\subset \PSL(3,\mathbb {C})$ be a sequence of distinct elements. Then:
1. There is a subsequence $(\tau_m)_{m\in \mathbb {N}}\subset(\gamma_m)_{m\in\mathbb{N}}$ and a $\tau_0\in M(3,\Bbb{C})\setminus\{0\}$ such that $\tau_m\xymatrix{\ar[r]_{m\rightarrow\infty}&}\tau_0$ as points in $QP(3,\Bbb{C})$.
2. If $(\tau_m)_{m\in \mathbb {N}}$ is the sequence given by the previous part of this lemma, then $\tau_m\xymatrix{\ar[r]_{m\rightarrow\infty}&}\tau_0$, as functions, uniformly on compact sets of $\mathbb{P}^2_\mathbb{C}\setminus Ker(\tau_0)$. Moreover, the equicontinuity set of $\{\tau_m\vert m\in \Bbb{N} \}$ is $\Bbb{P}^2_\Bbb{C}\setminus Ker(\tau_0)$.
Kulkarni’s limit set
--------------------
When we look at the action of a group on a general topological space, there is in general no natural notion of limit set. A nice starting point is Kulkarni’s limit set.
\[d:lim\] Let $\Gamma\subset\PSL(n+1,\mathbb{C})$ be a subgroup. We define
1. the set $\Lambda(\Gamma)$ to be the closure of the set of cluster points of $\Gamma z$ as $z$ runs over $\mathbb{P}^n_{\mathbb{C}}$,
2. the set $L_2(\Gamma)$ to be the closure of cluster points of $\Gamma K$ as $K$ runs over all the compact sets in $\mathbb{P}^n_{\mathbb{C}}\setminus \Lambda(\Gamma)$,
3. and *Kulkarni’s limit set* of $\Gamma$ to be $$\Lambda_{\Kul}(\Gamma)=\Lambda(\Gamma)\cup L_2(\Gamma),$$
4. *Kulkarni’s discontinuity region* of $\Gamma$ to be $$\Omega_{\Kul}(\Gamma)=\mathbb{P}^n_{\mathbb{C}}\setminus\Lambda_{\Kul}(\Gamma).$$
Kulkarni’s limit set has the following properties. For a more detailed discussion of this in the two-dimensional setting, see [@CNS].
\[p:pkg\] Let $\Gamma\subset \PSL(3,\Bbb{C})$ be a complex Kleinian group. Then:
1. The sets\[i:pk2\] $\Lambda_{\Kul}(\Gamma),\,\Lambda(\Gamma),\,L_2(\Gamma)$ are $\Gamma$-invariant closed sets.
2. \[i:pk3\] The group $\Gamma$ acts properly discontinuously on $\Omega_{\Kul}(\Gamma)$.
3. \[i:pk4\] If $\Gamma$ does not have any projective invariant subspaces, then $$\Omega_{\Kul}(\Gamma)=Eq(\Gamma).$$ Moreover, $\Omega_{\Kul}(\Gamma)$ is complete Kobayashi hyperbolic and is the largest open set on which the group acts properly discontinuously.
The Geometry of the Veronese Curve {#s:gever}
===================================
Now let us define the Veronese embedding. Set $$\begin{array}{l}
\psi:\Bbb{P}^1_\Bbb{C}\rightarrow \Bbb{P}^2_\Bbb{C}\\
\psi([z,w])=[z^2,2zw, w^2].
\end{array}$$
Let us consider $\iota: \PSL(2,\Bbb{C})\rightarrow \PSL(3,\Bbb{C})$ given by $$\iota\left(\frac{az+b}{cz+d}\right )=\left [\left [
\begin{array}{lll}
a^2&ab&b^2\\
2ac&ad+bc&2bd\\
c^2&dc&d^2\\
\end{array}
\right ]\right ].$$ Trivially, $\iota$ is well defined. Note that this map is induced by the canonical action of $\SL(2,\Bbb{C})$ on the space of homogeneous polynomials of degree two in two complex variables.
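As a quick symbolic sanity check (not part of the original argument; the sketch below assumes the Python library sympy, and the helper names are ours), one can verify both the homomorphism property of $\iota$ on lifts and the equivariance $\iota(A)\psi([z,w])=\psi(A[z,w])$ that underlies the commutative diagrams used later.

```python
import sympy as sp

a, b, c, d, e, f, g, h, z, w = sp.symbols('a b c d e f g h z w')

def iota(M):
    # Matrix of the irreducible representation iota evaluated on a lift in SL(2,C)
    (m11, m12), (m21, m22) = M.tolist()
    return sp.Matrix([[m11**2,    m11*m12,           m12**2],
                      [2*m11*m21, m11*m22 + m12*m21, 2*m12*m22],
                      [m21**2,    m21*m22,           m22**2]])

def psi(v):
    # Lift of the Veronese embedding psi([z,w]) = [z^2, 2zw, w^2]
    z0, w0 = v
    return sp.Matrix([z0**2, 2*z0*w0, w0**2])

A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[e, f], [g, h]])

# Homomorphism property on lifts: iota(AB) = iota(A) iota(B)
print(sp.simplify(iota(A*B) - iota(A)*iota(B)))   # zero matrix

# Equivariance: iota(A) psi([z,w]) = psi(A.[z,w])
lhs = iota(A) * psi((z, w))
rhs = psi(tuple(A * sp.Matrix([z, w])))
print(sp.simplify(lhs - rhs))                     # zero vector
```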
\[l:mor\] The map $\iota$ is an injective group morphism.
Let $$A=\left [\left [
\begin{array}{ll}
a&b\\
c&d\\
\end{array}
\right ]\right ],\,B=\left [\left [
\begin{array}{ll}
e&f\\
g&h\\
\end{array}
\right ]\right ]\in\PSL(2,\Bbb{C}).$$ Then
$$\begin{array}{ll}
\iota (AB)
&=\iota\left [\left [
\begin{array}{ll}
ae+bg&af+bh\\
ce+dg&cf+dh\\
\end{array}
\right ]\right ]\\
&=\left [\left [
\begin{array}{lll}
(ae+bg)^2&(ae+bg)(af+bh)&( af+bh)^2\\
2(ae+bg)(ce+dg)&(cf+dh)(ae+bg)+(af+bh)(ce+dg)&2(af+bh)(cf+dh)\\
(ce+dg)^2&2(ce+dg)(cf+dh)&(cf+dh)^2\\
\end{array}
\right ]\right ]\\
&=\left [\left [
\begin{array}{lll}
a^2&ab&b^2\\
2ac&ad+bc&2bd\\
c^2&cd&d^2\\
\end{array}
\right ]\right ]\left [\left [
\begin{array}{lll}
e^2&ef&f^2\\
2eg&eh+fg&2fh\\
g^2&gh&h^2\\
\end{array}
\right ]\right ]\\
&=\iota(A)\iota(B).
\end{array}$$
Therefore $\iota$ is a group morphism. Now suppose $A=[[a_{ij}]]\in \PSL(2,\Bbb{C})$ is such that $\iota(A)=Id$. Then $$\left [\left [
\begin{array}{lll}
a^2_{11}&a_{11}a_{12}&a^2_{12}\\
2a_{11}a_{21}&a_{11}a_{22}+a_{12}a_{21} &2a_{12}a_{22}\\
a_{21}^2&a_{21}a_{22}&a_{22}^2\\
\end{array}
\right ]\right ]
=\left [\left [
\begin{array}{lll}
1&0&0\\
0&1&0\\
0&0&1\\
\end{array}
\right ]\right ]$$ and so we conclude $a_{12}=a_{21}=0$. Since $a_{11}a_{22}-a_{12}a_{21}=1$, we deduce $a_{11}^2=a_{22}^2=1$, [*i.e.*]{} $A=Id$, which concludes the proof.
The morphism $\iota$ is type preserving. In particular, if $\Gamma\subset\PSL(2,\Bbb{C})$ is a discrete subgroup, then $\iota(\Gamma)$ is a discrete group in which every loxodromic element is strongly loxodromic.
Here, by type preserving, we mean that $\iota$ carries elliptic elements into elliptic elements, and similarly for loxodromic and parabolic elements.
Consider $$A=
\left [\left [
\begin{array}{ll}
a&0\\
0&a^{-1}
\end{array}
\right ]\right ],\,B=
\left [ \left [
\begin{array}{ll}
1&1\\
0&1
\end{array}
\right ]\right ]\in\PSL(2,\Bbb{C}).$$ A straightforward calculation shows $$\iota(A)=
\left [\left [
\begin{array}{lll}
a^2&0&0\\
0&1&0\\
0&0&a^{-2}
\end{array}
\right ]\right ],\,\iota(B)=
\left [ \left [
\begin{array}{lll}
1&1&1\\
0&1&0\\
0&0&1\\
\end{array}
\right ]\right ].$$ This shows that $\iota$ is type preserving. Now let $$A_n=
\left [\left [
\begin{array}{ll}
a_n&b_n\\
c_n&d_n\\
\end{array}
\right ]\right ]\in\Gamma$$ be a sequence of distinct elements such that $\iota(A_n)\xymatrix{\ar[r]_{n\rightarrow\infty}&}Id$; such a sequence exists whenever $\iota\Gamma$ fails to be discrete. Then
$$\left [\left [
\begin{array}{lll}
a^2_n&a_nb_n&b^2_n\\
2a_nc_n& a_nd_n+b_nc_n&2b_nd_n\\
c_n^2&d_nc_n&d^2_n\\
\end{array}
\right ] \right ]
\xymatrix{\ar[r]_{n\rightarrow\infty}&}Id.$$ Therefore $a^2_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}1$, $d^2_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}1$, $b_n^2\xymatrix{\ar[r]_{n\rightarrow\infty}&}0$ and $c_n^2\xymatrix{\ar[r]_{n\rightarrow\infty}&}0$, so $A_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}Id$ in $\PSL(2,\Bbb{C})$, which contradicts the discreteness of $\Gamma$. Hence $\iota\Gamma$ is discrete.
Let $g\in\PSL(3,\Bbb{C}) $ be such that $g$ fixes four points in general position. Then $g=Id$.
We can assume that the four points in general position fixed by $g$ are $\{e_1,e_2,e_3,p\}$. Then $$g=
\left [\left [
\begin{array}{lll}
a_1&0&0\\
0&a_2&0\\
0&0& a_3
\end{array}
\right ]\right ].$$
Since $p,e_1,e_2,e_3$ are in general position, we conclude $p=[b_1,b_2,b_3]$ where $b_1b_2b_3\neq 0$. On the other hand, since $p$ is fixed we deduce $$[b_1,b_2,b_3]=[a_1b_1,a_2b_2,a_3b_3],$$ therefore there is an $r\in\Bbb{C}^*$ such that $b_i=ra_ib_i$. In consequence $a_1=a_2=a_3$, which concludes the proof.
The Veronese curve has four points in general position.
A straightforward calculation shows that $[1,0,0], [0,0,1],[1,2,1],[1,2i,-1]$ are points on the Veronese curve. In order to conclude the proof, it is enough to observe $$\left \vert
\begin{array}{lll}
1&1&1 \\
0&2&2i\\
0&1& -1
\end{array}\right\vert=-2-2i,\,\hbox{and}\,\left\vert
\begin{array}{lll}
0&1&1\\
0&2&2i\\
1&1&-1
\end{array}\right\vert=-2+2i,$$ while the two remaining triples, each of which contains both $[1,0,0]$ and $[0,0,1]$, are clearly linearly independent.
The subgroup of $\PSL(3,\Bbb{C})$ leaving $\psi(\Bbb{P}_\Bbb{C}^1)$ invariant is $\iota(\PSL(2,\Bbb{C}))$.
First, let us prove that $Ver=\psi(\Bbb{P}_\Bbb{C}^1)$ is invariant under $\iota(\PSL(2,\Bbb{C}))$. Let $A=[[a_{ij}]]\in \PSL(2,\Bbb{C})$. Then $$\iota
\left [\left [
\begin{array}{ll}
a_{11}&a_{12}\\
a_{21}&a_{22}\\
\end{array}
\right ]\right ]
\left [
\begin{array}{l}
x^2\\
2xy\\
y^2\\
\end{array}
\right ]=
\left [
\begin{array}{l}
(a_{11}x+a_{12}y)^2 \\
2(a_{21}x+a_{22}y)(a_{11}x+a_{12}y)\\
(a_{21}x+a_{22}y)^2\\
\end{array}
\right ],$$ and so $Ver$ is invariant under $\iota \PSL(2,\Bbb{C})$ and the following diagram commutes. $$\label{e:aut}
\xymatrix{
\Bbb{P}_\Bbb{C}^1 \ar[r]^\gamma \ar[d]^\psi & \Bbb{P}_\Bbb{C}^1 \ar[d]^\psi \\
Ver \ar[r]^{\iota \gamma} & Ver
}$$ Now let $\tau\in\PSL(3,\Bbb{C})$ be an element which leaves $Ver$ invariant. Define $$\begin{array}{l}
\widetilde{\tau}:\Bbb{P}_\Bbb{C}^1\rightarrow \Bbb{P}_\Bbb{C}^1\\
\widetilde{\tau}(z)=\psi^{-1}(\tau(\psi(z))).
\end{array}$$ Clearly $\widetilde{\tau}$ is well defined and biholomorphic, thus $\widetilde{\tau}\in \PSL(2,\Bbb{C})$ and the following diagram commutes. $$\xymatrix{
\Bbb{P}_\Bbb{C}^1 \ar[r]^{\widetilde \tau}\ar[d]^\psi & \Bbb{P}_\Bbb{C}^1 \ar[d]^\psi \\
Ver \ar[r]^{\tau} & Ver
}$$ From diagram \[e:aut\], we conclude that $\tau\mid_{Ver}=\iota\widetilde\tau\mid_{Ver}$. Since the Veronese curve has four points in general position, we conclude $\tau=\iota\widetilde \tau$ in $\Bbb{P}_\Bbb{C}^2$, which concludes the proof.
\[l:ltanver\] Given $[1,k]\in \Bbb{P}^1_\Bbb{C}$, the tangent line to $Ver$ at $\psi[1,k]$, denoted $T_{\psi[1,k]}Ver$, is given by $$T_{\psi[1,k]}Ver=\{[x,y,z]\in \Bbb{P}^2_\Bbb{C}\vert z=ky-k^2x\}.$$
Let us consider the chart $(W_1=\{[x,y,z]\in\Bbb{P}^2_\Bbb{C}\vert x\neq 0\},\phi_1:W_1\rightarrow\Bbb{C}^2) $ of $\Bbb{P}^2_\Bbb{C}$ where $\phi_1[x,y,z]=(yx^{-1},zx^{-1})$ and $(W_2=\{[x,y]\in\Bbb{P}^1_\Bbb{C}\vert x\neq 0\},\phi_2:W_2\rightarrow\Bbb{C}^1)$ of $\Bbb{P}^1_\Bbb{C}$ where $\phi_2[x,y]=yx^{-1}$. Let us define $$\begin{array}{l}
\phi:\Bbb{C}\rightarrow \Bbb{C}^2\\
\phi(z)=\phi_1(\psi(\phi_2^{-1}( z)))
\end{array}.$$
A straightforward calculation shows that $\phi(z)=(2z,z^2)$, thus the tangent space to the curve $\phi$ at $\phi(k)$ is $\Bbb{C}(1, k)+(2k,k^2)$. Therefore the tangent line to $Ver$ at $[1,2k,k^2]$ is $\overleftrightarrow{[1,2k,k^2], [1,2k+1,k+k^2]}$. A simple verification shows
$$T_{\psi[1,k]}Ver=\{[x,y,z]\in \Bbb{P}^2_\Bbb{C} \vert z=ky-k^2x\}.$$
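The tangent-line formula above can also be double-checked symbolically. The short sketch below (again assuming sympy; it is not part of the proof) verifies that $\psi[1,k]$ and the whole affine tangent line through it in the direction $(0,1,k)$ satisfy $z=ky-k^2x$.

```python
import sympy as sp

k, s = sp.symbols('k s')

# In the chart x != 0 the curve is phi(t) = (2t, t^2), i.e. psi[1,t] = [1, 2t, t^2];
# its tangent direction at t = k is (1, k), i.e. (0, 1, k) in homogeneous coordinates.
point = (1, 2*k, k**2)                    # psi[1,k]
on_tangent = (1, 2*k + s, k**2 + s*k)     # point + s*(0, 1, k)

line = lambda x, y, z: sp.simplify(k**2*x - k*y + z)   # the line z = k*y - k^2*x

print(line(*point))       # 0: psi[1,k] lies on the line
print(line(*on_tangent))  # 0: the whole affine tangent line lies on it
```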
\[l:3gen\] Let $\Gamma\subset\PSL(2,\Bbb{C})$ be a non-elementary subgroup and let $x,y,z\in \Lambda(\Gamma)$ be distinct points. Then the lines $T_{\psi(x)}Ver,T_{\psi(y)}Ver,T_{\psi(z)}Ver$ are in general position.
After conjugating $\Gamma$ in $\PSL(2,\Bbb{C})$ if necessary, we may assume that $[0,1]\notin \{x,y,z\}$. Then there are pairwise distinct $k,r,s\in \Bbb{C}$ such that
$$\begin{array}{l}
\psi(x)=[1,2k,k^2] \\
\psi(y)=[1,2r,r^2] \\
\psi(z)=[1,2s,s^2].
\end{array}$$ From Lemma \[l:ltanver\] we know $$\begin{array}{l}
T_{\psi(x)} Ver=\{
[x,y,z]
\in \Bbb{P}^2_\Bbb{C} \vert z=ky-k^2x\} \\
T_{\psi(y)} Ver=\{
[x,y,z]
\in \Bbb{P}^2_\Bbb{C} \vert z=ry-r^2x\} \\
T_{\psi(z)} Ver =\{
[x,y,z]
\in \Bbb{P}^2_\Bbb{C} \vert z=sy-s^2x\}.
\end{array}$$ Since $$\left \vert
\begin{array}{lll}
k^2&-k&1\\
r^2&-r&1\\
s^2&-s&1\\
\end{array}\right\vert=(s-r)(k-s)(k-r)\neq 0$$ we conclude the proof.
\[l:pseudo\] Let $(\gamma_n)\subset \PSL(2,\Bbb{C})$ be a sequence of distinct elements such that $\gamma_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}x$ uniformly on compact sets of $\Bbb{P}^1_\Bbb{C}\setminus\{y\}$. Then $\iota\gamma_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}\psi(x)$ uniformly on compact sets of $\Bbb{P}^2_\Bbb{C}\setminus T_{\psi(y)}Ver$.
Let us assume that $\gamma_n=\left [\left [a_{ij}^{(n)}\right ]\right ]$. Note that we can assume $a_{ij}^{(n)}\xymatrix{\ar[r]_{n\rightarrow\infty}&}a_{ij}$ and $\sum_{i,j=1}^2\mid a_{ij} \mid\neq 0$. Then $\gamma_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}\gamma=\left [\left [a_{ij}\right ]\right ]$ uniformly on compact sets of $\Bbb{P}^1_{\Bbb{C}}\setminus Ker(\gamma)$, thus $Ker(\gamma)=\{y\}$ and $Im(\gamma)=\{x\}$. Writing $y=[1,k]$ with $k\in \Bbb{C}$ (the case $y=[0,1]$ is analogous), the condition $Ker(\gamma)=\{y\}$ gives $a_{11}=-ka_{12}$ and $a_{21}=-ka_{22}$. In consequence $$\iota\gamma_n
\xymatrix{ \ar[r]_{n \rightarrow\infty}&}
B=
\left [ \left [
\begin{array}{lll}
k^2a_{12}^2&-ka_{12}^2& a_{12}^2\\
2k^2a_{12}a_{22}&-2ka_{12}a_{22}&2a_{12}a_{22}\\
k^2a_{22}^{2}&-ka_{22}^2&a_{22}^2\\
\end{array}
\right ]\right ].$$
A simple calculation shows that $Ker(B)$ is the line $\ell=\overleftrightarrow{ [e_1-k^2e_3],[e_2+ke_3]}$. Also it is not hard to check that $$\ell=\{[x,y,z]\vert k^2x-ky+z=0 \},$$ that is, $\ell=T_{\psi(y)}Ver$ by Lemma \[l:ltanver\]. Moreover, since $x=Im(\gamma)=[a_{12},a_{22}]$, the columns of $B$ are proportional to $\psi([a_{12},a_{22}])=\psi(x)$, so $Im(B)=\{\psi(x)\}$, which concludes the proof.
Let $\Gamma\subset\PSL(2,\Bbb{C})$ be a non-elementary group. Then $\iota(\Gamma)$ does not leave invariant any proper projective subspace (a point or a complex line) of $\Bbb{P}^2_\Bbb{C}$.
Let us assume that there is a complex line $\ell$ invariant under $\iota(\Gamma)$. By Bézout’s theorem $Ver\cap \ell$ has either one or two points. From the following commutative diagram $$\xymatrix{
\Bbb{P}_\Bbb{C}^1 \ar[r]^{ \tau}\ar[d]^\psi & \Bbb{P}_\Bbb{C}^1 \ar[d]^\psi \\
Ver \ar[r]^{\iota\tau} & Ver
}$$ where $\tau\in\Gamma$, we deduce that $\Gamma$ leaves $\psi^{-1}(Ver\cap\ell)$ invariant. Therefore $\Gamma$ is an elementary group, which is a contradiction, thus $\iota\Gamma$ does not have invariant lines in $\Bbb{P}^2_\Bbb{C}$. Finally, if there is a point $p\in\Bbb{P}_\Bbb{C}^2$ fixed by $\iota\Gamma$, then by Lemmas \[l:3gen\], \[l:eq\] and \[l:pseudo\], there is a sequence of distinct elements $(\gamma_m)_{m\in\Bbb{N}}\subset\Gamma$ and a pseudo-projective transformation $\gamma\in QP(3,\Bbb{C})$ such that $\iota\gamma_m\xymatrix{\ar[r]_{m\rightarrow\infty}&}\gamma $ and $Ker(\gamma)$ is a complex line not containing $p$. Since $p$ is invariant and outside $Ker(\gamma)$ we conclude $\{p\}=Im(\gamma)$. On the other hand, by Lemma \[l:pseudo\] we deduce $p\in Ver$. Therefore $\Gamma$ is elementary, which is a contradiction.
The following theorem follows easily from the previous discussion.
\[l:eq\] Let $\Gamma$ be a discrete subgroup of $\PSL(2,\Bbb{C})$. Then $$\Bbb{P}_\Bbb{C}^2\setminus Eq(\iota(\Gamma))=\bigcup_{z\in\Lambda(\Gamma)}T_{\psi(z)}(\psi(\Bbb{P}_\Bbb{C}^1)).$$ Moreover $\Omega_{\Kul}(\iota\Gamma)=Eq(\iota(\Gamma))$ is Kobayashi hyperbolic, pseudo-convex, and is the largest open set on which $\iota\Gamma$ acts properly discontinuously.
Complex Hyperbolic Groups Leaving $Ver$ Invariant {#s:chvh}
=================================================
In this section we characterize the subgroups of $\PU(2,1)$ that leave invariant a projective copy of the Veronese curve $Ver$. We need some preliminary lemmas.
\[l:semialg\] Let $B$ be a complex ball. Then $$Aut(BV)=\{g\in\PSL(3,\Bbb{C})\vert g\in \iota\PSL(2,\Bbb{C}),gB=B\}$$ is a semi-algebraic group.
Since $\iota(\PSL(2,\Bbb{C}))$ and $\PU(2,1)$ are simple Lie groups with trivial centers, we deduce that they are semi-algebraic groups (see [@semi]). Thus the sets $$\begin{array}{l}
\{(g,h,gh): g,h\in Aut(BV)\}\\
\{(g,g^{-1}): g\in Aut(BV)\}
\end{array}$$ are semi-algebraic sets. Therefore $Aut(BV)$ is a semi-algebraic group.
\[c:liedim\] Let $\Gamma\subset\PSL(2,\Bbb{C})$ be a discrete non-elementary group such that $\iota\Gamma$ leaves invariant a complex ball $B$. Then:
1. \[l:1\] The group $Aut(BV)$ is a Lie group of positive dimension.
2. \[l:2\] We have $\psi\Lambda(\Gamma)\subset Ver\cap\partial B$.
3. \[l:3\] Set $C=\partial B\cap Ver$. Then the set $\psi^{-1}(C)$ is an algebraic curve of degree at most four.
4. \[l:4\] The group $\iota^{-1}Aut(BV)$ can be conjugated to a subgroup of $Mob(\hat{\Bbb{R}})$, where $Mob(\hat{\Bbb{R}})=\{\gamma\in\PSL(2,\Bbb{C}):\gamma(\Bbb{R}\cup\{\infty\})=\Bbb{R}\cup\{\infty\}\}$.
5. \[l:5\] The set $\psi^{-1}(C)$ is a circle in the Riemann sphere.
6. \[l:6\] The set $C$ is an $\Bbb{R}$-circle, [*i.e.*]{} $C=\gamma(\partial\Bbb{H}^2_{\Bbb{C}}\cap\Bbb{P}^2_{\Bbb{R}})$, where $\gamma\in\PSL(3,\Bbb{C})$ is some element satisfying $\gamma(\Bbb{H}^2_{\Bbb{C}})=B$.
7. \[l:7\] The set $Ver\cap (\Bbb{P}^2_{\Bbb{C}}\setminus\overline{B})$ is non-empty.
8. \[l:8\] The set $Ver\cap B$ is non-empty.
Let us start by showing (\[l:1\]). Since $Aut(BV)$ is semi-algebraic, we deduce that it is a Lie group with a finite number of connected components (see [@semi]). On the other hand, since $Aut(BV)$ contains a discrete subgroup, we conclude $Aut(BV)$ has positive dimension.\
Now let us prove part (\[l:2\]). Let $x\in \Lambda(\Gamma)$. Then there is a sequence $(\gamma_n)\subset \Gamma$ of distinct elements such that $\gamma_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}x$ uniformly on compact sets of $\widehat{\Bbb{C}}\setminus \{x\}$. From Lemma \[l:pseudo\] we know that $\iota\gamma_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}\psi(x)$ uniformly on compact sets of $\Bbb{P}_\Bbb{C}^2\setminus T_{\psi(x)}Ver$, thus $\psi (x)\in \partial B$ and $T_{\psi(x)}Ver$ is tangent to $\partial B$ at $\psi(x)$. This concludes the proof.\
Now let us prove part (\[l:3\]). Since $\iota\Gamma$ preserves the ball $B$, there is a Hermitian matrix $A=(a_{ij})$ with signature $(2,1)$ such that $B=\{[x]\in\Bbb{P}^2_{\Bbb{C}}:\overline{x}^t Ax<0\}$. Without loss of generality, we may assume that $[0,0,1]\notin C=\partial(B)\cap Ver$. Thus for each $x\in C$, there is a unique $z\in \Bbb{C}$ such that $x=[1,2z,z^2]=\psi [1,z]$ and $(1,2\bar z,\bar{z}^2)A(1, 2z,z^2)^t=0$. A straightforward calculation shows that $(1,2\bar z,\bar{z}^2)A(1, 2z,z^2)^t=0$ is equivalent to $$\label{e:cuadrica}
a_{11}+4Re(a_{12}z)+2Re(a_{13}z^2)+a_{33}\vert z\vert^4+4\vert z\vert^2 Re(a_{23}z)+4\vert z \vert^2a_{22}=0.$$ Taking $z=x+iy$ and $a_{ij}=b_{ij}+ic_{ij}$, Equation (\[e:cuadrica\]) can be written as $$\begin{array}{l}
a_{11}+4(b_{12}x-c_{12}y)+2(b_{13}(x^2-y^2)-2c_{13}xy)+a_{33}(x^2+y^2)^2+\\
+4(x^2+y^2)( b_{23}x-c_{23}y)+4(x^2+y^2)a_{22}=0,
\end{array}$$ which proves the assertion.\
Let us prove part (\[l:4\]). Since $\iota^{-1}Aut(BV)$ is a Lie group with positive dimension containing a non-elementary discrete subgroup, we deduce (see [@CS1]) that $\iota^{-1}Aut(BV)$ can be conjugated either to $\PSL(2,\Bbb{C})$ or to a subgroup of $Mob(\hat{\Bbb{R}})$. On the other hand, $\PSL(2,\Bbb{C})$ acts transitively on the Riemann sphere, while $\iota^{-1}Aut(BV)$ leaves invariant the algebraic curve $\psi^{-1}(C)$, which is a proper subset of the sphere; therefore $\iota^{-1}Aut(BV)$ is conjugate to a subgroup of $Mob(\hat{\Bbb{R}})$, which concludes the proof.\
Let us prove part (\[l:5\]). We know that $C$ is $Aut(BV)$-invariant and by part (\[l:3\]) of the present lemma $\psi^{-1}C$ is an algebraic curve. Thus by Montel’s Lemma we conclude that $\Lambda_{Gr}\iota^{-1}Aut(BV)\subset \psi^{-1}C$, where $\Lambda_{Gr}\iota^{-1}Aut(BV)$ is the Greenberg limit set of $\iota^{-1}Aut(BV)$, see [@CS1]. Finally, by part (\[l:4\]), we know that $ \iota^{-1}Aut(BV)$ is conjugate to a subgroup of $Mob(\hat{\Bbb{R}})$, therefore $ \Lambda_{Gr}\iota^{-1}Aut(BV)$ is a circle in the Riemann sphere and $\Lambda_{Gr}\iota^{-1}Aut(BV) = \psi^{-1}C$.\
In order to prove part (\[l:6\]), observe that after a projective change of coordinates we can assume that $\psi^{-1}C=\hat{\Bbb{R}}$. Thus $C=\psi \hat{\Bbb{R}}=\{[z^2,2zw,w^2]:z,w\in \Bbb{R}, \vert z\vert +\vert w\vert \neq 0\}$. The following claim concludes the proof.\
Claim. The sets $C$ and $\partial \Bbb{H}^1_{\Bbb{R}}=\{[x,y,z]\in\Bbb{P}^2_{\Bbb{R}}:x^2+y^2=z^2\}$ are projectively equivalent. Let $\gamma\in \PSL(3,\Bbb{R})$ be the projective transformation induced by $$\widetilde \gamma=
\begin{pmatrix}
1 & 0 & -1\\
0 & 1 &0\\
1 & 0 &1
\end{pmatrix}.$$ Given $[p]=[x^2,2xy, y^2]\in C$, we get $\gamma[p]=[x^2 - y^2, 2 xy, x^2 + y^2]$ and $$(x^2 - y^2)^2+ (2 xy)^2=( x^2 + y^2)^2.$$ Thus $\gamma C\subset \partial \Bbb{H}^1_{\Bbb{R}}$. Since $C$ is compact, connected and contains more than two points, we conclude that $\gamma$ is a projective equivalence between $C$ and $\partial \Bbb{H}^1_{\Bbb{R}}$.\
Now we prove part (\[l:7\]). Let $x\in B$. Then $x^{\bot}$ is a complex line in $\Bbb{P}_\Bbb{C}^2\setminus \bar{B}$; by Bézout’s theorem we know $Ver\cap x^\bot$ is non-empty, thus $Ver\cap(\Bbb{P}^2_\Bbb{C}\setminus\bar{B})\neq\emptyset$.\
Finally, let us prove part (\[l:8\]). After conjugating by an element in $\iota\PSL(2,\Bbb{C})$ we can assume that $[0,0,1]\notin\partial B$. Let $A=(a_{ij})$ be the Hermitian matrix introduced in part (\[l:3\]) of the present lemma. Clearly $a_{33}\neq 0$. Now let $F:\Bbb{R}^2\rightarrow \Bbb{R}$ be given by
$$F(x,y)=a_{11}+4(b_{12}x-c_{12}y)+2(b_{13}(x^2-y^2)-2c_{13}xy)+a_{33}(x^2+y^2)^2
+4(x^2+y^2) ( b_{23}x-c_{23}y+a_{22}).$$
Thus by part (\[l:5\]) of this lemma we know $\psi^{-1}C=F^{-1}0$ is a circle. Moreover $$\begin{array}{l}
\psi F^{-1}\Bbb{R}^+=Ver\cap(\Bbb{P}_\Bbb{C}^2\setminus \bar{B}).\\
\psi F^{-1}\Bbb{R}^-=Ver\cap B.\\
\psi F^{-1}0=Ver\cap\partial B.\\
\end{array}$$ If $Ver\cap B=\emptyset$, then $F(x,y)\geq 0$. A straightforward calculation shows $$\bigtriangleup F(x,y)=16(a_{33}(x^2+y^2)+a_{22}+2b_{23}x-2c_{23}y).$$ Thus $E=\{(x,y)\in \Bbb{R}^2:\bigtriangleup F(x,y)=0\}$ is an ellipse.\
Claim: We have $ \psi^{-1}C\cap Int(E)=\emptyset$. On the contrary, let us assume that there is an $x\in \psi^{-1}C\cap Int(E)$. Then there is an open neighbourhood $U$ of $x$ contained in $Int(E)$. Thus $\bigtriangleup F$ is negative on $U$, [*i.e.*]{} $F$ is super-harmonic on $U$. However, $F\geq 0$ and $F(x)=0$, so the non-constant function $F$ attains its minimum at an interior point of $U$, which contradicts the minimum principle for super-harmonic functions.\
From the previous claim we conclude that $\psi^{-1}C$ is contained in the closure of $Ext(E)$, therefore $\bigtriangleup F(x,y)\geq 0$ in $Int(\psi^{-1}C)$. As a consequence, $F$ is subharmonic in $Int(\psi^{-1}C)$. Let $c$ be the centre of $\psi^{-1}C$ and $r$ its radius. Let $(r_n)$ be a strictly increasing sequence of positive numbers such that $r_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}r$. Let $x_n\in\overline{B_{r_n}(c)}$ be such that $$F(x_n)=\max\{F(x):x\in\overline{B_{r_n}(c)}\}.$$
Since $F$ is subharmonic in $B_{r_n}(c)$ we conclude $x_n\in \partial B_{r_n}(c)$ and $(F(x_{n}))$ is a strictly increasing sequence of positive numbers. Since $Int(\psi^{-1}C)\cup \psi^{-1}C$ is a compact set, we can assume $x_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}x$, and clearly $x\in \psi^{-1}C$. On the other hand, since $F$ is continuous we conclude $F(x_n)\rightarrow F(x)=0$, which is a contradiction.
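The elementary but lengthy algebra used in parts (\[l:3\]) and (\[l:8\]) above, namely the expansion leading to Equation (\[e:cuadrica\]) and the formula for $\bigtriangleup F$, can be reproduced symbolically. The following sketch is only a consistency check: it assumes the Python library sympy and a generic Hermitian matrix, and it plays no role in the proof.

```python
import sympy as sp

# Real and imaginary parts of the entries of the Hermitian matrix A = (a_ij)
a11, a22, a33 = sp.symbols('a11 a22 a33', real=True)
b12, c12, b13, c13, b23, c23 = sp.symbols('b12 c12 b13 c13 b23 c23', real=True)
x, y = sp.symbols('x y', real=True)

a12, a13, a23 = b12 + sp.I*c12, b13 + sp.I*c13, b23 + sp.I*c23
A = sp.Matrix([[a11, a12, a13],
               [sp.conjugate(a12), a22, a23],
               [sp.conjugate(a13), sp.conjugate(a23), a33]])

z = x + sp.I*y
v = sp.Matrix([1, 2*z, z**2])                         # a lift of psi[1,z]
form = sp.expand(sp.re(sp.expand((v.conjugate().T * A * v)[0])))

F = (a11 + 4*(b12*x - c12*y) + 2*(b13*(x**2 - y**2) - 2*c13*x*y)
     + a33*(x**2 + y**2)**2 + 4*(x**2 + y**2)*(b23*x - c23*y + a22))

print(sp.simplify(form - sp.expand(F)))               # 0: the real form of Eq. (e:cuadrica)

laplacian = sp.diff(F, x, 2) + sp.diff(F, y, 2)
print(sp.simplify(laplacian
                  - 16*(a33*(x**2 + y**2) + a22 + 2*b23*x - 2*c23*y)))   # 0
```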
\[l:conpo\] There is a $\gamma_0\in \PSL(3,\Bbb{R})$ such that
1. \[l:con1\] $\gamma_0\iota \PSL(2,\Bbb{R})\gamma^{-1}_0=\PO^+(2,1)$, where $\PO^+(2,1)$ is the principal connected component of $\PO(2,1)$,
2. \[l:con2\] $\gamma_0 Ver \cap \Bbb{H}^2_\Bbb{C} $ is non-empty and $\PO^+(2,1)$-invariant.
Let us prove (\[l:con1\]). By Lemma \[c:liedim\] we have that $\iota\PSL(2,\Bbb{R})$ is a Lie group of dimension three and preserves the quadric in $\Bbb{P}^2_{\Bbb{R}}$ given by $$\{[w^2,2wz,z^2]: z,w\in \Bbb{R}\}.$$ Thus there is a $\gamma_0$ in $\PSL(3,\Bbb{R})$ such that $\gamma_0\iota Mob(\hat{\Bbb{R}})\gamma^{-1}_0$ preserves $$\{[x,y,z]\in\Bbb{P}^2_{\Bbb{R}}: x^2+ y^2< z^2\}.$$ Hence $\gamma_0\iota\PSL(2,\Bbb{R})\gamma^{-1}_0=\PO^+(2,1)$. Part (\[l:con2\]) is now trivial.
\[t:liedim\] Let $\Gamma\subset\PSL(2,\Bbb{C})$ be a discrete non-elementary group. The group $\iota\Gamma$ is complex hyperbolic if and only if $ \Gamma$ is Fuchsian, [*i.e.*]{} a subgroup of $\PSL(2,\Bbb{R})$.
Assume that $\iota\Gamma$ preserves a complex ball $B$. Then by Lemma \[c:liedim\] we deduce that $\Gamma$ preserves a circle $C$ in the Riemann sphere. Let $B^+$ and $B^-$ be the connected components of $\Bbb{P}_\Bbb{C}^1\setminus C$ and assume that there is a $\tau\in\Gamma$ such that $\tau(B^+)=B^-$. Let $x\in Ver\cap B$ and denote by $Aut^+(BV)$ the principal connected component of $Aut(BV)$ which contains the identity. Then by Lemma \[c:liedim\] we deduce $$\begin{array}{l}
Aut^+(BV)x= \psi\iota^{-1}Aut(BV)\psi^{-1}x=
\psi (B^{+}) \;\;\hbox {and}
\\
Aut^+(BV) \iota\tau(x)= \psi\iota^{-1}Aut(BV)\tau\psi^{-1}x=\psi(B^{-}).\\
\end{array}$$ Therefore $$Ver=Aut^+(BV)x\cup Aut^+(BV)\iota\tau(x)\cup \psi(C)\subset\overline{B},$$ which contradicts part (\[l:7\]) of Lemma \[c:liedim\]. Clearly, this concludes the proof.
We arrive at the following theorem:
Let $\Gamma\subset\PSL(2,\Bbb{C})$. Then the following claims are equivalent:
1. The group $\Gamma$ is Fuchsian.
2. The group $\iota\Gamma$ is complex hyperbolic.
3. The group $\iota\Gamma$ is $\Bbb{R}$-Fuchsian.
Subgroups of $\PSL(3,\Bbb{R})$ that Leave Invariant a Veronese Curve {#s:riv}
====================================================================
In this section we characterize those subgroups of $\PSL(3,\Bbb{R})$ which leave invariant a projective copy of $Ver$.
Let $\Gamma\subset \PSL(2,\Bbb{C})$ be a discrete subgroup. Then the following facts are equivalent:
1. The group $\Gamma $ is conjugate to a subgroup of $Mob(\hat{\Bbb{R}})$.
2. The group $\iota\Gamma$ is conjugate to a subgroup of $\PSL(3,\Bbb{R})$.
Let $\Gamma$ be a subgroup of $Mob(\hat{\Bbb{R}})$ and $\gamma\in \Gamma$. If $\gamma\in\PSL(2,\Bbb{R})$, then $\iota\gamma$ clearly has a real lift; otherwise $$\gamma=\left [\left [
\begin{array}{ll}
i & 0\\
0 & -i\\
\end{array}
\right ]\right ]
\left [\left [
\begin{array}{ll}
a & b\\
c & d\\
\end{array}
\right ]\right ]$$ where $a,b,c,d\in \Bbb{R}$ and $ad-bc=1$. A straightforward calculation shows that $$\iota\gamma=
\left [\left [
\begin{array}{lll}
-1 & 0 &0\\
0 & 1&0\\
0 &0 &-1
\end{array}
\right ]\right ]
\left [
\left [
\begin{array}{lll}
a^2 & ab & b^2\\
2ac & ad+bc &2bd\\
c^2 & cd& d^2\\
\end{array}
\right ]\right ],$$ therefore $\iota\Gamma\subset\PSL(3,\Bbb{R})$.\
Let us assume that there is a copy $\Bbb{P}$ of the real projective plane which is $\iota\Gamma$-invariant. Thus, as in Lemma \[l:semialg\], we conclude that $$Aut(PV)=\iota\PSL(2,\Bbb{C})\cap\{g\in \PSL(3,\Bbb{C})\vert g\Bbb{P}=\Bbb{P}\}$$ is a semi-algebraic group. Since $\Gamma\subset\iota^{-1}Aut(PV)$, we conclude that $\iota^{-1}Aut(PV)$ is a Lie group with positive dimension. From the classification of Lie subgroups of $\PSL(2,\Bbb{C})$ (see [@CS1]), we deduce that $\iota^{-1}Aut(PV)$ is conjugate either to $Mob(\hat{\Bbb{C}})$ or to a subgroup of $Mob(\hat{\Bbb{R}})$. In order to conclude the proof, observe that the group $\iota^{-1}Aut(PV)$ cannot be conjugated to $Mob(\hat{\Bbb{C}})$. In fact, assume on the contrary that $\iota^{-1}Aut(PV)$ is conjugate to $Mob(\hat{ \Bbb{C}})$. Since $Mob(\hat{ \Bbb{C}})$ acts transitively on $\hat{\Bbb{C}}$, we deduce that $Aut(PV)$ acts transitively on $Ver$. Finally, since $\psi(\Lambda(\Gamma))\subset Ver\cap\Bbb{P}$, we deduce $Ver\subset\Bbb{P}$, which is a contradiction.
Examples of Kleinian Groups with Infinitely Many Lines in General Position {#s:rep}
===================================================================
Let us introduce the following projection, see [@goldman]. For each $z\in\Bbb{C}^3$ let $\eta$ be a function satisfying $\eta(z)^2=-\langle z,z\rangle$ and consider the projection $\Pi:\Bbb{H}^2_{\Bbb{C}}\rightarrow\Bbb{H}^2_{\Bbb{R}}$ given by $$\Pi([z_1,z_2,z_3])=[\overline{\eta(z_1,z_2,z_3)}(z_1,z_2,z_3)+\eta(z_1,z_2,z_3)(\overline{z_1},\overline{z_2},\overline{z_3})].$$
The projection $\Pi$ is $\PO(2,1)$-equivariant.
Let $A\in \O(2,1)$ and $[z]\in \Bbb{H}_{\Bbb{C}}^2$. Then $$\begin{array}{ll}
\Pi [Az] &=[\overline{ \eta(Az)}Az+\eta(Az)\overline{Az}]\\
&=[\overline{\sqrt{-\langle A z, Az\rangle}}Az+\sqrt {-\langle Az,Az\rangle}A\bar{z}]\\
&= [\overline{\sqrt{-\langle z,z\rangle}}Az+\sqrt{-\langle z,z\rangle}A\bar{z}]\\
&= [A][\overline{\eta(z)}z+\eta(z)\overline{z}]\\
&=[A]\Pi[z].
\end{array}$$
For simplicity in the notation, in the rest of this article we will write $Ver$ instead of $\gamma_0(Ver)$, $\psi$ instead of $\gamma_0\circ \psi$, and $\gamma_0\iota(\cdot)\gamma_0^{-1}$ instead of $\iota(\cdot)$, where $\gamma_0$ is the element given in Corollary \[l:conpo\].
\[l:prv\] The map $\Pi:Ver \cap \Bbb{H}^2_{\Bbb{C}}\rightarrow \Bbb{H}^2_{\Bbb{R}}$ is a homeomorphism.
Let us prove that the map is onto. Let $x\in\Bbb{H}^+\cup\Bbb{H}^-$ be such that $\psi(x)\in Ver\cap\Bbb{H}^2_{\Bbb{C}}$. Then $$\begin{array}{ll}
\Bbb{H}^2_{\Bbb{R}}&=\PO^+(2,1)\Pi(\psi x)\\
&=\Pi(\PO^+(2,1)\psi x)\\
&=\Pi(\iota\PSL(2,\Bbb{R}))(\psi(x))\\
&=\Pi(Ver\cap\Bbb{H}^2_{\Bbb{C}})
\end{array}.$$ Finally, let us prove that our map is injective. On the contrary, let us assume that there are $x,y\in Ver\cap\Bbb{H}^2_\Bbb{C}$ such that $\Pi(x)=\Pi(y)$. Now define $$\begin{array}{ll}
H_x=Isot(\PSL(2,\Bbb{R}),\psi^{-1}x),\\
H_y=Isot(\PSL(2,\Bbb{R}),\psi^{-1}y).
\end{array}$$ Clearly $H_y$ and $H_x$ are groups where each element is elliptic. On the other hand, observe that $$\begin{array}{l}
\iota H_x\Pi(x)=\Pi\iota H_x(x)=\Pi(x) \;\; \hbox{and}\\
\iota H_y\Pi(y)=\Pi\iota H_y(y)=\Pi(y).
\end{array}$$ Therefore $$\iota H_x\cup\iota H_y\subset Isot(\PO^+(2,1),\Pi x).$$ Since $\Pi(x)\in \Bbb{H}^2_{\Bbb{R}}$, we deduce that $Isot (\PO^+(2,1),\Pi x)$ is a Lie group where each element is elliptic. Therefore $H=\iota^{-1}Isot (\PO^+(2,1),\Pi x)$ is a Lie subgroup of $\PSL(2,\Bbb{R})$ where each element is elliptic and $H_y\cup H_x\subset H$. From the classification of Lie subgroups of $\PSL(2,\Bbb{C})$, we deduce that $H$ is conjugate to a subgroup of $Rot_\infty$. Hence $H_y=H_x$ and so $x=y$.
\[t:rf\] Let $\Gamma\subset \PSL(2,\Bbb{C})$ be a discrete group. Then $\Gamma$ is conjugate to a subgroup $\Sigma$ of $\PSL(2,\Bbb{R})$ such that $ \Bbb{H}/\Sigma$ is a compact Riemann surface if and only if $\iota\Gamma$ is conjugate to a discrete compact surface group of $\PO^+(2,1)$.
Let $\Gamma\subset \PSL(2,\Bbb{R})$ be a subgroup acting properly discontinuously, freely, and with compact quotient on $\Bbb{H}^+$. Let $R$ be a fundamental region for the action of $\Gamma$ on $\Bbb{H}^+$. We may assume without loss of generality that $\psi(R)\subset\Bbb{H}^2_{\Bbb{C}}$. Thus $\Pi\psi\overline{R} $ is a compact subset of $\Bbb{H}^2_\Bbb{R}$ satisfying $\iota\Gamma\Pi\psi\overline{R}=\Bbb{H}^2_{\Bbb{R}}$, which shows that $\iota\Gamma$ is a discrete compact surface group of $\PO^+(2,1)$.\
Now let us assume that $\iota\Gamma$ is a discrete compact surface group of $\PO^+(2,1)$. Then $\Gamma\subset\PSL(2,\Bbb{R})$. Thus $\iota\Gamma\subset\PO^+(2,1)$ and $\Bbb{H}^2_{\Bbb{R}}/\iota\Gamma$ is a compact surface, see [@tengren]. Now, consider the following commutative diagram
$$\xymatrix{
\Bbb{H}_\Bbb{R}^2 \ar[r]^{\Pi^{-1}} \ar
[d]^{q_1} & Ver\cap \Bbb{H}^2_\Bbb{C} \ar[d]^{q_2} \ar[r]^{\psi^{-1}} & \Bbb{H}^+\ar[d]^{q_3}\\
\Bbb{H}^2_{\Bbb{R}}/\iota
\Gamma
\ar[r]^{\widetilde {\Pi}} &
(Ver\cap \Bbb{H}^2_\Bbb{C})/ \iota\Gamma\ar[r]^{\widetilde \psi} &
\Bbb{H}^+/\Gamma
}$$
where $q_1,q_2,q_3$ are the quotient maps, $\widetilde {\Pi}(x)=q_2\Pi^{-1}q_1^{-1}x$, and $\widetilde\psi(x)=q_3\psi^{-1}q_2^{-1}(x)$. By Lemma \[l:prv\], we conclude that $\Bbb{H}^2_{\Bbb{R}}/\iota\Gamma,(Ver\cap \Bbb{H}^2_\Bbb{C})/\iota\Gamma,\Bbb{H}^+/\Gamma$ are homeomorphic compact surfaces, which concludes the proof.
Proof of theorem \[t:main2\] {#proof-of-theorem-tmain2 .unnumbered}
============================
If $\Gamma\subset\PO^+(2,1)$ is a discrete compact surface group, then by Lemma \[l:conpo\] we can assume that there is a $\Sigma\subset\PSL(2,\Bbb{R})$ such that $\iota\Sigma=\Gamma$. By Theorem \[t:rf\] we know that $\Bbb{H}/\Sigma$ is a compact Riemann surface. From the classic theory of quasi-conformal maps, see [@lipa1; @lipa2], it is known that there is a sequence of quasi-conformal maps $(q_n:\widehat{\Bbb{C}}\rightarrow\widehat{\Bbb{C}})$ such that $q_n\xymatrix{\ar[r]_{n\rightarrow\infty}&}Id$ and $\Sigma_n=q_n\Sigma q_n^{-1}$ is a quasi-Fuchsian group which cannot be conjugated to a Fuchsian one. In consequence, $\Gamma_n=\gamma_0\iota\Sigma_n\gamma_0^{-1}$ is complex Kleinian and is conjugate neither to a subgroup of $\PU(2,1)$ nor to a subgroup of $\PSL(3,\Bbb{R})$, which concludes the proof.
Now the following result is trivial.
There are complex Kleinian groups acting on $\Bbb{P}^2_{\Bbb{C}}$ which are not conjugate to either a complex hyperbolic group or a virtually affine group.
The authors would like to thank J. Seade for fruitful conversations. Also we would like to thank the staff of UCIM at UNAM for their kindness and help.
[10]{}
W. Barrera, A. Cano, and J. P. Navarrete, The limit set of discrete subgroups of PSL(3,C), Math. Proc. Cambridge Philos. Soc. [**150**]{} (2011), no. 1, pp. 129-146.
L. Bers, Several Complex Variables I (Maryland 1970), Lecture Notes in Mathematics, ch. Spaces of Kleinian groups, pp. 9–34, Springer-Verlag, Berlin, 1970.
L. Bers, On moduli of Kleinian groups, Russian Mathematical Surveys [**29**]{} (1974), no. 2, pp. 88-102.
A. Cano, J. P. Navarrete, and J. Seade, Complex Kleinian Groups, Progress in Mathematics, no. 303, Birkhäuser/Springer, Basel, 2013.
A. Cano and J. Seade, On the equicontinuity region of discrete subgroups of PU(1,n), J. Geom. Anal. [**20**]{} (2010), no. 2, pp. 291-305.
A. Cano and J. Seade, On discrete groups of automorphism of PSL(3,C), Geometriae Dedicata [**168**]{} (2014), no. 1, pp. 9-60.
S. S. Chen and L. Greenberg, Hyperbolic spaces, Contributions to Analysis (A Collection of Papers Dedicated to Lipman Bers), Academic Press, New York, 1974, pp. 49-87.
Myung-Jun Choi and Dong Youp Suh, Comparison of semialgebraic groups with Lie groups and algebraic groups, RIMS Kôkyûroku [**1449**]{} (2005), pp. 12-20.
W. M. Goldman, Complex Hyperbolic Geometry, Oxford University Press, New York, 1999.
T. Jorgensen, A. Marden, Algebraic and geometric convergence of Kleinian groups, Math. Scand. [**66**]{} (1990), pp. 47-72.
R. S. Kulkarni, Groups with domains of discontinuity, Math. Ann. [**237**]{} (1978), no. 3, pp. 253-272.
F. Labourie, Lectures on Representations of Surface Groups, Zurich lectures in advanced mathematics, European Mathematical Society, 2013.
J. Seade and A. Verjovsky, Actions of discrete groups on complex projective spaces, in M. Lyubich, J. W. Milnor, and Y. N. Minsky, (eds.), Laminations and Foliations in Dynamics, Geometry and Topology, Contemporary Mathematics, vol. 269, AMS, Providence, RI, 2001, pp. 155-178.
J. Seade and A. Verjovsky, Higher dimensional complex Kleinian groups, Math. Ann. [**322**]{} (2002), no. 2, pp. 279-300.
J. Seade and A. Verjovsky, Complex Schottky Groups, Asterisque, vol. 287, SMF, Paris, 2003, pp. 251-272.
T. Zhang, Geometry of the Hitchin component (2015), Ph. D. Thesis, University of Michigan, https://deepblue.lib.umich.edu/handle/2027.42/113605
[^1]: Partially supported by grants of the PAPIIT project IA100112
---
abstract: 'Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true when dealing with music databases. In this context, it is essential to have techniques and tools able to discriminate properties from these massive sets. In this work, we report on a statistical analysis of more than ten thousand songs aiming to obtain a complexity hierarchy. Our approach is based on the estimation of the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results obtained indicate that this representation space is very promising to discriminate songs as well as to allow a relative quantitative comparison among songs. Additionally, we believe that the here-reported method may be applied in practical situations since it is simple, robust and has a fast numerical implementation.'
address:
- 'Departamento de Física and National Institute of Science and Technology for Complex Systems, Universidade Estadual de Maringá, Av. Colombo 5790, 87020-900, Maringá, PR, Brazil'
- 'Department of Chemical and Biological Engineering, Northwestern University, Evanston, IL 60208, USA'
- 'Centro de Investigaciones Ópticas (CONICET La Plata - CIC), C.C. 3, 1897 Gonnet, Argentina'
- 'Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata, Argentina'
author:
- 'Haroldo V. Ribeiro'
- Luciano Zunino
- 'Renio S. Mendes'
- 'Ervin K. Lenzi'
title: 'Complexity-entropy causality plane: a useful approach for distinguishing songs'
---
permutation entropy, music, complexity measure, time series analysis
Introduction
============
Nowadays we are experiencing a rapid development of technologies related to data storage. As an immediate consequence, we are often faced with huge databases hindering access to information. Thus, it is necessary to have techniques and tools able to discriminate elements from these massive databases. Text categorization [@Sebastiani], scene classification [@Radke] and protein classification [@Enright] are just a few examples where this problem emerges. In a parallel direction, statistical physicists are increasingly interested in studying the so-called complex systems [@Auyang; @Jensen; @Barabasi; @Sornette; @Boccara]. These investigations employ established methods of statistical mechanics as well as recent developments of this field aiming to extract the hidden patterns that govern the system’s dynamics. In a similar way, this framework may help in distinguishing elements within these databases, with the benefit of the simplicity often attributed to statistical physics methods.
A very interesting case corresponds to the music databases, not only because of the incredible amount of data (for instance, the iTunes Store has more than 14 million songs), but also due to the ubiquity of music in our society as well as its deep connection with cognitive habits and historical developments [@DeNora]. In this direction, there are investigations focused on collective listening habits [@Lambiotte; @Lambiotte2; @Buldu], collaboration networks among artists [@Teitelbaum], music sales [@Lambiotte3], success of musicians [@Davies; @Borges; @Hu], among others. On the other hand, the sounds that compose the songs present several complex structures and emergent features which, in some cases, resemble very closely the patterns of out-of-equilibrium physics, such as scale-free statistics and universality. For instance, the seminal work of Voss and Clarke [@Voss] showed that the power spectrum associated to the loudness variations and pitch fluctuations of radio stations (including songs and human voice) is characterized by a $1/f$ noise-like pattern in the low frequency domain ($f \leq 10$ Hz). Klimontovich and Boon [@Klimontovich] argue that this low-frequency behavior follows from a natural flicker noise theory. However, this finding has been questioned by Nettheim [@Nettheim] and according to him the power spectrum may be better described by $1/f^2$. Fractal structures were also reported by Hsü and Hsü [@Hsu2; @Hsu] when studying classical pieces concerning frequency intervals. It was also found that the distribution of sound amplitudes may be fitted by a one-parameter stretched Gaussian and that this non-Gaussian feature is related to correlation aspects present in the songs [@Mendes].
These features and others have attracted the attention of statistical physicists, who have attempted to obtain some quantifiers able to distinguish songs and genres. One of these efforts was made by Jennings et al. [@Jennings] who found that the Hurst exponent estimated from the volatility of the sound intensity depends on the music genre. Correa et al. [@Correa] investigated four music genres employing a complex network representation for rhythmic features of the songs. There are still other investigations [@Boon; @Bigerelle; @Diodati; @Gunduz; @Su2; @Scaringella; @Jafari; @Dagdug; @Su; @Rio; @Ro; @Serra; @Mostafa; @Boon2], most of which are based on fractal dimensions, entropies, power spectrum analysis or correlation analysis. It is worth noting that there are several methods of automatic genre classification emerging from engineering disciplines (see, for instance, Ref. [@Tzanetakis]). In particular, there exists a very active community working on music classification problems, and several important results have been published at the ISMIR [@ISMIR] conferences (just to mention a few, please see Refs. [@ISMIR1; @ISMIR2; @ISMIR3; @ISMIR4; @ISMIR5; @ISMIR6; @ISMIR7; @ISMIR8; @ISMIR9; @ISMIR10; @ISMIR11; @ISMIR12; @ISMIR13]).
However, music genre is not a well-defined concept [@Scaringella] and, especially, the boundaries between genres remain fuzzy. Thus, any taxonomy may be controversial, representing a challenging and open problem of pattern recognition. In addition, some of the proposed quantifiers require specific algorithms or recipes for processing the sound of the songs, which may depend on tuning parameters.
Here, we follow an Information Theory approach trying to quantify aspects of songs. More specifically, the Bandt and Pompe approach [@Bandt] is applied in order to obtain a complexity hierarchy for songs. This method defines a “natural” complexity measure for time series based on ordinal patterns. Although this concept has not been explored yet within the context of music, it has been successfully applied in other areas, such as medical [@Li; @Nicolaou], financial [@Zunino; @Zunino3] and climatological time series [@Saco; @Barreiro]. In this direction, our main goal is to fill this gap by employing the Bandt and Pompe approach together with a non-trivial entropic measure [@LopezRuiz; @Martin; @Lamberti], constructing the so-called complexity-entropy causality plane [@Zunino; @Zunino3; @Rosso; @Zunino2]. As will be discussed in detail below, we have found that this representation space is very promising for distinguishing songs in huge databases. Moreover, thanks to its simple and fast implementation, its use in practical situations can be envisaged. In the following, we review some aspects related to the Bandt and Pompe approach as well as the complexity-entropy causality plane (Section 2). Next, we describe our database and the results (Section 3). Finally, we end this work with some concluding comments (Section 4).
Methods
=======
The essence of the permutation entropy proposed by Bandt and Pompe [@Bandt] is to associate a symbolic sequence to the time series under analysis. This is done by employing a suitable partition based on ordinal patterns obtained by comparing neighboring values of the original series. To be more specific, consider a given time series $\{x_t\}_{t=1,\dots,N}$ and the following partitions represented by a $d$-dimensional vector ($d>1$, $d \in \mathbb{N}$) $$(s)\mapsto (x_{s-(d-1)},x_{s-(d-2)},\dots,x_{s-1},x_{s})\;,$$ with $s=d,d+1,\dots,N$. For each one of these $(N-d+1)$ vectors, we investigate the permutation $\pi$ of $(0,1,\dots,d-1)$ defined by $x_{s-r_{d-1}}\leq x_{s-r_{d-2}}\leq \dots \leq x_{s-r_{1}} \leq x_{s-r_{0}}$, and, for all $d\, !$ possible permutations $\pi$, we evaluate the probability distribution $P=\{p(\pi)\}$ given by $$p(\pi) = \frac{\#\{s\,|\,d\leq s\leq N;~ (s) ~\text{has type}~ \pi \}}{N-d+1}\;,$$ where the symbol $\#$ stands for the number (frequency) of occurrences of the permutation $\pi$. Thus, we define the normalized permutation entropy of order $d$ by $$H_s[P]=\frac{S[P]}{\log d\,!}\;,$$ with $S[P]$ being the standard Shannon entropy [@Shannon]. Naturally, $0 \leq H_s[P] \leq 1$, where the upper bound occurs for a completely random system, i.e., a system for which all $d\,!$ possible permutations are equiprobable. If the time series exhibits some kind of ordering dynamics, $H_s[P]$ will be smaller than one. As pointed out by Bandt and Pompe [@Bandt], the advantages in using this method lie in its simplicity, robustness and very fast computational evaluation. Clearly, the parameter $d$ (known as embedding dimension) plays an important role in the estimation of the permutation probability distribution $P$, since it determines the number of accessible states. In fact, the choice of $d$ depends on the length $N$ of the time series in such a way that the condition $d\,!\ll N$ must be satisfied in order to obtain reliable statistics. For practical purposes, Bandt and Pompe recommend $d=3,\dots,7$. Here, we have fixed $d=5$ because the time series under analysis are large enough (they have more than one million data values). We have verified that the results are robust concerning the choice of the embedding dimension $d$.
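To illustrate the procedure just described, a minimal implementation of the Bandt and Pompe symbolization and of the normalized permutation entropy could look as follows. This is only a sketch: it assumes numpy, uses one common ordinal-pattern convention based on sorting indices, and the function names are ours. The embedding delay $\tau$ discussed below is included as an optional parameter, with the consecutive case $\tau=1$ as default.

```python
import numpy as np
from math import factorial
from itertools import permutations

def ordinal_distribution(x, d=5, tau=1):
    """Estimate the Bandt-Pompe ordinal pattern distribution P = {p(pi)}
    of a one-dimensional series x (embedding dimension d, embedding delay tau)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (d - 1) * tau                    # number of d-dimensional vectors
    counts = {pi: 0 for pi in permutations(range(d))}
    for s in range(n):
        window = x[s:s + d * tau:tau]             # (x_s, x_{s+tau}, ..., x_{s+(d-1)tau})
        counts[tuple(int(i) for i in np.argsort(window, kind='stable'))] += 1
    return np.array(list(counts.values()), dtype=float) / n

def permutation_entropy(p, d=5):
    """Normalized permutation entropy H_s[P] = S[P] / log(d!)."""
    p = p[p > 0]                                  # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log(p)) / np.log(factorial(d)))
```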
To complete this brief review, we now introduce another statistical complexity measure able to quantify the degree of physical structure present in a time series [@LopezRuiz; @Martin; @Lamberti]. Given a probability distribution $P$, this quantifier is defined by the product of the normalized entropy $H_s$ and a suitable distance between $P$ and the uniform distribution $P_e=\{1/d\,!\}$. Mathematically, we may write $$C_{js}[P]=Q_j[P,P_e]\,H_s[P]\,,$$ where $$Q_j[P,P_e] = \frac{S[(P+P_e)/2] - S[P]/2 - S[P_e]/2}{Q_{\text{max}}}\,$$ and $Q_{\text{max}}$ is the maximum possible value of $Q_j[P,P_e]$, obtained when one of the components of $P$ is equal to one and all the others vanish, i.e., $$Q_{\text{max}}=-\frac{1}{2}\left[ \frac{d\,!+1}{d\,!} \log(d\,!+1) - 2 \log(2 d\,!) + \log(d\,!) \right]\,.$$ The quantity $Q_j$, usually known as disequilibrium, will be different from zero if there are more likely states among the accessible ones. It is worth noting that the complexity measure $C_{js}$ is not a trivial function of the entropy [@LopezRuiz] because it depends on two different probability distributions, the one associated to the system under analysis, $P$, and the uniform distribution, $P_e$. It quantifies the existence of correlational structures, providing important additional information that may not be carried only by the permutation entropy. Furthermore, it was shown that for a given $H_s$ value, there exists a range of possible $C_{js}$ values [@Martin2]. Motivated by the previous discussion, Rosso et al. [@Rosso] proposed to employ a diagram of $C_{js}$ versus $H_s$ for distinguishing between stochasticity and chaoticity. This representation space, called the complexity-entropy causality plane [@Rosso; @Zunino; @Zunino3], will hereafter be our approach for distinguishing songs.
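With the same ordinal distribution, the disequilibrium and the statistical complexity defined above can be evaluated as in the following sketch, which reuses the hypothetical helpers of the previous snippet and is not a reference implementation.

```python
def shannon(p):
    """Shannon entropy S[P] (natural logarithm), with the convention 0 log 0 = 0."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def statistical_complexity(p, d=5):
    """Permutation statistical complexity C_js[P] = Q_j[P, P_e] * H_s[P]."""
    n = factorial(d)
    pe = np.full(n, 1.0 / n)                      # uniform distribution P_e
    q_max = -0.5 * ((n + 1) / n * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    q_j = (shannon((p + pe) / 2) - shannon(p) / 2 - shannon(pe) / 2) / q_max
    return q_j * permutation_entropy(p, d)

# Usage sketch: locate a series in the complexity-entropy causality plane.
# p = ordinal_distribution(series, d=5, tau=1)
# print(permutation_entropy(p), statistical_complexity(p))
```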
The concept of ordinal patterns can be straightforwardly generalized to non-consecutive samples, introducing a lag of $\tau$ (usually known as embedding delay) sampling times. With $\tau=1$ the consecutive case is recovered, and the analysis focuses on the highest frequency contained within the time series. It is clear that different time scales are taken into account by changing the embedding delays of the symbolic reconstruction. The importance of selecting an appropriate embedding delay in the estimation of the permutation quantifiers has been recently confirmed for different purposes, like identifying intrinsic time scales of delayed systems [@zunino2010; @soriano2011], quantifying the degree of unpredictability of the high-dimensional chaotic fluctuations of a semiconductor laser subject to optical feedback [@zunino2011], and classifying cardiac biosignals [@parlitz2011]. We have found that an embedding delay $\tau=1$ is the optimal one for our music categorization goal since, when this parameter is increased, the permutation entropy increases and the permutation statistical complexity decreases. Thus, the range of variation of both quantifiers is smaller and, consequently, it is more difficult to distinguish songs and genres.
Data Presentation and Results
=============================
It is clear that a music piece can be naturally considered as the time evolution of an acoustic signal and time irreversibility is inherent to musical expression [@Boon; @Boon2]. From the physical point of view, songs may be considered as pressure fluctuations traveling through the air. These waves are perceived by the auditory system, leading to the sense of hearing. In the case of recordings, these fluctuations are converted into a voltage signal by a recording system and then stored, for instance, on a compact disc (CD). The perception of sound is usually limited to a certain range of frequencies - for human beings the full audible range is approximately between 20 Hz and 20 kHz. Because of this limitation, recording systems often employ a sampling rate of 44.1 kHz, encompassing this entire spectrum. All the songs analyzed here have this sampling rate.
Our database consists of 10124 songs distributed into ten different music genres: blues (1020), classical (997), flamenco (679), hiphop (1000), jazz (700), metal (1638), Brazilian popular music - mpb (580), pop (1000), tango (1016) and techno (1494). The songs were chosen aiming to cover a large number of composers and singers. To achieve this, and also to determine the music genre via an external judgment, we tried to select CDs that are compilations of a given genre or from representative musical groups of a given genre.
![A graphical representation of 4 songs from 4 different genres. In the left panel we show the amplitude series and in the right panel the intensity series. The music genres are blues, classic, metal and techno, respectively.[]{data-label="fig:sample"}](fig1.pdf)
By using the previous database, we focus our analysis on two time series directly obtained from the digitized files that represent each song - the sound amplitude series and the sound intensity series, i.e., the square of the amplitude. Figure \[fig:sample\] shows these two time series for several songs. We evaluate the normalized entropy $H_s$ and the statistical complexity measure $C_{js}$ for the amplitude and intensity series associated to each song as shown in Figs. \[fig:plane\]a and \[fig:plane\]b. Notice that both series, amplitude and intensity, lead to similar behavior, contrary to what happens with other quantifiers. For instance, when dealing with the Hurst exponent it is preferable to work with the intensities [@Mendes] or volatilities [@Jennings], since the amplitudes are intrinsically anti-correlated due to the oscillatory nature of the sound. Moreover, we have found that there is a large range of possible $H_s$ and $C_{js}$ values. This wide variation allows a relative comparison among songs, and one may, for instance, ask for songs restricted to some interval of $H_s$ and/or $C_{js}$ values. We also evaluate the mean values of $C_{js}$ and $H_s$ over all songs grouped by genre as shown by Figs. \[fig:plane\]c and \[fig:plane\]d. These mean values enable us to quantify the complexity of each music genre. In particular, we can observe that high art music genres (e.g., classical, jazz and tango) are located in the central part of the complexity plane, being equally distant from the fully aleatory limit ($H_s\to1$ and $C_{js}\to0$) and also from the completely regular case ($H_s\to0$ and $C_{js}\to0$). On the other hand, light/dance music genres (e.g., pop and techno) are located closer to the fully aleatory limit (white noise). In this context, our approach agrees with other works [@Mendes; @Jennings; @Diodati].
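In practice, the amplitude series is read directly from the digitized audio and the intensity series is its square; a minimal pipeline of this kind is sketched below. It assumes an uncompressed WAV file read with scipy and the hypothetical helpers defined in Section 2, and the file name is purely illustrative.

```python
from scipy.io import wavfile

def song_quantifiers(path, d=5):
    """Return {series_name: (H_s, C_js)} for the amplitude and intensity series of a song."""
    rate, data = wavfile.read(path)               # e.g. a 44.1 kHz recording
    if data.ndim > 1:                             # stereo recording: keep one channel
        data = data[:, 0]
    amplitude = data.astype(float)
    intensity = amplitude ** 2
    out = {}
    for name, series in (('amplitude', amplitude), ('intensity', intensity)):
        p = ordinal_distribution(series, d=d)
        out[name] = (permutation_entropy(p, d), statistical_complexity(p, d))
    return out

# print(song_quantifiers('some_song.wav'))        # illustrative file name
```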
![(color online) Complexity-entropy causality plane, i.e., $C_{js}$ versus $H_s$ for all the songs when considering the (a) amplitude series and (b) the intensity series. In (c) and (d), we show the mean value of $C_{js}$ and $H_s$ for each genre. The upper (bottom) dashed line represents the maximum (minimum) value of $C_{js}$ as a function of $H_s$ for $d=5$ and the different symbols refer to the 10 different genres. For a better visualization of the different genres see also Figs. \[fig:ampgenre\] and \[fig:intgenre\].[]{data-label="fig:plane"}](fig2.pdf)
![Complexity-entropy causality plane for the amplitude series by music genres when considering the original and shuffled series. The upper (bottom) dashed line represents the maximum (minimum) value of $C_{js}$ as a function of $H_s$ for $d=5$, and the arrows indicate the shuffled analysis.[]{data-label="fig:ampgenre"}](fig3.pdf)
![Complexity-entropy causality plane for the intensity series by music genres when considering the original and shuffled series. The upper (bottom) dashed line represents the maximum (minimum) value of $C_{js}$ as a function of $H_s$ for $d=5$, and the arrows indicate the shuffled analysis.[]{data-label="fig:intgenre"}](fig4.pdf)
Therefore, we have verified that the ordinal pattern distribution that exists among the sound amplitude values, and also among the sound intensities, is capable of spreading our database songs throughout the complexity-entropy causality plane. It is interesting to remark that the embedding dimension employed here ($d=5$) corresponds to approximately $10^{-4}$ seconds. Thus, it is surprising how this very short time dynamics retains so much information about the songs. We also investigated shuffled versions of each song series, aiming to verify whether the localization of the songs in the complexity-entropy causality plane is directly related to the presence of correlations in the music time series. This analysis is shown in Figs. \[fig:ampgenre\] and \[fig:intgenre\] for each song and for all genres. We have obtained $H_s\approx 1$ and $C_{js}\approx 0$ for all shuffled series, confirming that the correlations inherently present in the original songs are the main source of the different locations in this plane.
Although our approach is not focused on determining which music genre is related to a particular given song, this novel physical method may help to understand the complex situation that emerges in the problem of automatic genre classification. For instance, we can take a glance at the fuzzy boundaries existing in the music genre definitions by evaluating the distribution of $H_s$ and $C_{js}$ values. Figure \[fig:pdfs\] shows these distributions for both time series employed here. There are several overlapping regions among the distributions of $H_s$ and $C_{js}$ for the different genres. This overlapping is an illustration of how fuzzy the boundaries between genres, and consequently the very concept of music genre, can be. It is also interesting to observe that some genres have more localized PDFs; for instance, the techno genre is practically bounded to the interval $(0.85,0.95)$ of $H_s$ values for the intensity series, while the flamenco or mpb genres have a wider distribution. To go beyond the previous analysis, we try to quantify the efficiency of the permutation indexes $H_s$ and $C_{js}$ in a practical scenario of automatic genre classification. In order to do this, we use an implementation [@SVM1] of a support vector machine (SVM) [@SVM2] where we have considered the values of $H_s$ and $C_{js}$ for the amplitude and intensity series as features of the SVM. We run the analysis for each genre, training the SVM with 90$\%$ of the dataset and performing an automatic detection over the remaining 10$\%$. It is a simplified version of the SVM, where the system has to make a binary choice, i.e., to choose between a given genre and all the others. The accuracy rates of automatic detection are shown in Table \[tab:SVM\]. Note that the accuracy values are around 90$\%$ within this simplified implementation; however, we have to remark that in a multiple choice system these values would be much smaller. On the other hand, this analysis indicates that the entropic indexes employed here may be used in practical situations.
  Genre      Accuracy        Genre     Accuracy
  Blues      87.87%          Metal     89.89%
  Classic    92.03%          MPB       97.15%
  Flamenco   95.12%          Pop       88.11%
  Hiphop     88.11%          Tango     87.87%
  Jazz       91.68%          Techno    87.14%

  : Accuracy rates of the SVM automatic genre detection.[]{data-label="tab:SVM"}
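To make the procedure concrete, the sketch below reproduces the one-genre-versus-rest scheme using scikit-learn's `SVC` instead of the SVMlight implementation [@SVM1] actually used here; `features` (an array with the $H_s$ and $C_{js}$ values of the amplitude and intensity series of each song) and `genres` (the corresponding labels) are assumed to be available.

```python
# Schematic one-vs-rest SVM classification using the entropic indexes as
# features; an illustration only, not the SVMlight setup used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def one_vs_rest_accuracy(features, genres, target_genre, seed=0):
    """Train on 90% of the songs and report the accuracy on the other 10%."""
    y = (np.asarray(genres) == target_genre).astype(int)   # binary choice
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, y, test_size=0.10, random_state=seed, stratify=y)
    return SVC(kernel="rbf").fit(x_tr, y_tr).score(x_te, y_te)

# Hypothetical usage, producing one accuracy value per genre as in the table:
# for g in sorted(set(genres)):
#     print(g, one_vs_rest_accuracy(features, genres, g))
```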
![(color online) Probability distribution functions (PDF) for the values of (a) $H_s$ and (b) $C_{js}$ when considering the amplitude series grouped by music genre. Figs. (c) and (d) show the same PDFs for the intensity series.[]{data-label="fig:pdfs"}](fig5.pdf)
Summary and Conclusions
=======================
Summing up, in this work we applied the permutation entropy [@Bandt], $H_s$, and an intensive statistical complexity measure [@LopezRuiz; @Martin; @Lamberti], $C_{js}$, to differentiate songs. Specifically, we analyzed the location of the songs in the complexity-entropy causality plane. This permutation information theory approach enabled us to quantitatively classify songs in a kind of complexity hierarchy.
We believe that the findings presented here may be applied in practical situations as well as in technological applications related to the distinction of songs in massive databases. In this aspect, the Bandt and Pompe approach has some advantageous technical features, such as its simplicity, robustness, and principally a very fast numerical evaluation.
Acknowledgements {#acknowledgements .unnumbered}
================
[The authors would like to thank an anonymous reviewer for his very helpful comments. Dr. Osvaldo A. Rosso is also acknowledged for useful discussions and valuable comments.]{} HVR, RSM and EKL are grateful to CNPq and CAPES (Brazilian agencies) for the financial support. HVR also thanks Angel A. Tateishi for the help with the music database and CAPES for financial support under the process No 5678-11-0. LZ was supported by Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina.
[99]{} F. Sebastiani, Acm. Comput. Surv. **34** (2002) 1. R. J. Radke, S. Andra, O. Al-Kofahi, B. Roysam, IEEE T. Image Process **14** (2005) 294. A. J. Enright, S. Van Dongen, C. A. Ouzounis, Nucleic. Acids Res. **30** (2002) 1575. S. Y. Auyang, *Foundations of complex-systems* (Cambridge University Press, Cambridge, 1998). H. J. Jensen, *Self-organized criticality* (Cambridge University Press, Cambridge, 1998). R. Albert, A.-L. Barabási, Rev. Mod. Phys. **74** (2002) 47. D. Sornette, *Critical phenomena in natural sciences* (Springer-Verlag, Berlin, 2006). N. Boccara, *Modeling complex systems* (Springer-Verlag, Berlin, 2010). T. DeNora, *The music of everyday life* (Cambridge University Press, Cambridge, 2000). R. Lambiotte, M. Ausloos, Phys. Rev. E **72** (2005) 066107. R. Lambiotte, M. Ausloos, Eur. Phys. J. B **50** (2006) 183. J. M. Buldú, P. Cano, M. Koppenberger, J. A. Almendral, S. Boccaletti, New J. Phys. **9** (2007) 172. T. Teitelbaum, P. Balenzuela, P. Cano, J. M. Buldú, Chaos **18** (2008) 043105. R. Lambiotte, M. Ausloos, Physica A **362** (2006) 485. J. A. Davies, Eur. Phys. J. B **27** (2002) 445. E. P. Borges, Eur. Phys. J. B **30** (2002) 593. H.-B. Hu, D.-Y. Han, Physica A **387** (2008) 5916. R. F. Voss, J. Clarke, Nature **258** (1975) 317. Y. Klimontovich, J. P. Boon, Europhys. Lett. **3** (1987) 395. N. Nettheim, Journal of New Music Research **21** (1992) 135. K. J. Hsü, A. Hsü, Proc. Natl. Acad. Sci. USA **87** (1990) 938. K. J. Hsü, A. Hsü, Proc. Natl. Acad. Sci. USA **88** (1991) 3507. R. S. Mendes, H. V. Ribeiro, F. C. M. Freire, A. A. Tateishi, E. K. Lenzi, Phys. Rev. E **83** (2011) 017101. H. D. Jennings, P. Ch. Ivanov, A. M. Martins, P. C. da Silva, G. M. Viswanathan, Physica A **336** (2004) 585. D. C. Correa, J. H. Saito, L. F. Costa, New J. Phys. **12** (2010) 053030. J. P. Boon, O. Decroly, Chaos **5** (1995) 501. M. Bigerelle, A. Iost, Chaos Solitons Fractals **11** (2000) 2179. P. Diodati, S. Piazza, Eur. Phys. J. B **17** (2000) 143. G. Gündüz, U. Gündüs, Physica A **357** (2005) 565. Z.-Y. Su, T Wu, Physica D **221** (2006) 188. N. Scaringella, G. Zoia, D. Mlynek, IEEE Signal Process. Mag. **23** (2006) 133. G. R. Jafari, P. Pedram, L. Hedayatifar, J. Stat. Mech. (2007) P04012. L. Dagdug, J. Alvarez-Ramirez, C. Lopez, R. Moreno, E. Hernandez-Lemus, Physica A **383** (2007) 570. Z.-Y. Su, T Wu, Physica A **380** (2007) 418. M. Beltrán del Río, G. Cocho, G. G. Naumis, Physica A **387** (2008) 5552. W. Ro, Y. Kwon, Chaos Solitons Fractals **42** (2009) 2305. J. Serrà, X. Serra, R. G. Andrzejak, New J. Phys. **11** (2009) 093017. M. M. Mostafa, N. Billor, Expert Syst. Appl. **36** (2009) 11378. J. P. Boon, Adv. Complex. Syst. **13** (2010) 155. G. Tzanetakis, P. Cook, IEEE Trans. Speech Audio Process. **20** (2002) 293. ISMIR - The International Society for Music Information Retrieval (http://www.ismir.net). T. Lidy, A. Rauber, In Proc. ISMIR, 2005. A. S. Lampropoulos, P. S. Lampropoulou, G. A. Tsihrintzis, In Proc. ISMIR, 2005. A. Meng, J. Shawe-Taylor, In Proc. ISMIR, 2005. E. Pampalk, A. Flexer, G. Widmer, In Proc. ISMIR, 2005. J. Reed, C.-H. Lee, In Proc. ISMIR, 2006. C. McKay, I. Fujinaga, In Proc. ISMIR, 2006. M. Dehghani, A. M. Lovett, In Proc. ISMIR, 2006. T. Lidy, A. Rauber, A. Pertusa, J. M. Iñesta, In Proc. ISMIR, 2007. A. J. D. Craft, G. A. Wiggins, T. Crawford, In Proc. ISMIR, 2007. I. Panagakis, E. Benetos, C. Kotropoulos, In Proc. ISMIR, 2008. R. Mayer, R. Neumayer, A. Rauber, In Proc. ISMIR, 2008. R. Mayer, R. Neumayer, A. 
Rauber, In Proc. ISMIR, 2008. S. Doraisamy, S. Golzari, N. M. Norowi, Md. N. B. Sulaiman, N. I. Udzir, In Proc. ISMIR, 2008. C. Bandt, B. Pompe, Phys. Rev. Lett. **88** (2002) 174102. X. Li, G. Ouyang, D. A. Richards, Epilepsy Research **77** (2007) 70. N. Nicolaou, J. Georgiou, Clin. EEG Neurosci. **42** (2011) 24. L. Zunino, M. Zanin, B. M. Tabak, D. G. Pérez, O. A. Rosso, Physica A **389** (2010) 1891. L. Zunino, B. M. Tabak, F. Serinaldi, M. Zanin, D. G. Pérez, O. A. Rosso, Physica A **390** (2011) 876. P. M. Saco, L. C. Carpi, A. Figliola, E. Serrano, O. A. Rosso, Physica A **389** (2010) 5022. M. Barreiro, A. C. Marti, C. Masoller, Chaos **21** (2011) 013101. R. López-Ruiz, H. L. Mancini, X. Calbet, Phys. Lett. A **209** (1995) 321. M. T. Martin, A. Plastino, O. A. Rosso, Phys. Lett. A **311** (2003) 126. P. W. Lamberti, M. T. Martin, A. Plastino, O. A. Rosso, Physica A **334** (2004) 119. O. A. Rosso, H. A. Larrondo, M. T. Martin, A. Plastino, M. A. Fuentes, Phys. Rev. Lett. **99** (2007) 154102. O. A. Rosso, L. Zunino, D. G. Pérez, A. Figliola, H. A. Larrondo, M. Garavaglia, M. T. Martín, A. Plastino Phys. Rev. E **76** (2007) 061114. C. E. Shannon, Bell. Syst. Tech. J. **27** (1948) 623. M. T. Martin, A. Plastino, O. A. Rosso, Physica A **369** (2006) 439. L. Zunino, M. C. Soriano, I. Fischer, O. A. Rosso, C. R. Mirasso, Phys. Rev. E **82** (2010) 046212. M. C. Soriano, L. Zunino, O. A. Rosso, I. Fischer, C. R. Mirasso, IEEE J. Quantum Electron. **47** (2011) 252. L. Zunino, O. A. Rosso, M. C. Soriano, IEEE J. Sel. Top. Quantum Electron. **17** (2011) 1250. U. Parlitz, S. Berg, S. Luther, A. Schirdewan, J. Kurths, N. Wessel, Comput. Biol. Med. (2011), doi:10.1016/j.compbiomed.2011.03.017 (in press). T. Joachims, http://svmlight.joachims.org (accessed in November 2011). V. N. Vapnik, *The Nature of Statistical Learning Theory*. (Springer, New York, 1995).
---
abstract: '[Starting from a Skyrme interaction with tensor terms, the $\beta$-decay rates of $^{52}$Ca have been studied within a microscopic model including the $2p-2h$ configuration effects. We observe a redistribution of the strength of Gamow-Teller transitions due to the $2p-2h$ fragmentation. Taking into account this effect results in a satisfactory description of the neutron emission probability of the $\beta$-decay in $^{52}$Ca.]{}'
author:
- ' $^{1),2)}$'
title: 'Strength fragmentation of Gamow-Teller transitions and delayed neutron emission of atomic nuclei'
---
The multi-neutron emission is basically a multistep process consisting of (a) the $\beta$-decay of the parent nucleus (N, Z) which results in feeding the excited states of the daughter nucleus (N - 1, Z + 1), followed by (b) $\gamma$-deexcitation to the ground state or (c) multi-neutron emission to the ground state of the final nucleus (N - 1 - X, Z + 1), see e.g., Ref. [@b05]. Predictions of the multi-neutron emission are needed for the analysis of radioactive beam experiments and for modeling of the astrophysical r-process. Recent experiments gave evidence for strong shell effects in exotic calcium isotopes [@w13; @s13]. For this reason, the $\beta$-decay properties of the neutron-rich isotope $^{52}$Ca provide valuable information [@h85] and important tests of theoretical calculations.
  $\lambda_i^{\pi}=1_i^+$    Energy (MeV)                 $\log ft$
                             Expt.    QRPA   2PH          Expt.          QRPA   2PH
  $1_1^+$                    1.64     1.5    1.3          4.2$\pm$0.1    4.3    4.3
  $1_2^+$                    2.75     --     3.9          4.5$\pm$0.2    --     6.4
  $1_3^+$                    3.46     --     4.2          5.3$\pm$0.5    --     9.2
  $1_4^+$                    4.27     5.0    4.9          4.0$\pm$0.5    3.2    3.3

  : Energies and $\log ft$ values of the four low-energy $1^+$ states of $^{52}$Sc (experiment, QRPA and QRPA plus two-phonon (2PH) calculations).
One of the successful tools for studying charge-exchange nuclear modes is the quasiparticle random phase approximation (QRPA) with the self-consistent mean-field derived from a Skyrme energy-density functional (EDF), since these QRPA calculations enable one to describe the properties of the parent ground state and Gamow-Teller (GT) transitions using the same EDF. Making use of the finite rank separable approximation (FRSA) [@gsv98] for the residual interaction, the approach has been generalized to include the coupling between one- and two-phonon components of the wave functions [@svg04]. The FRSA in the cases of the charge-exchange excitations and the $\beta$-decay was already introduced in Refs. [@svg12; @ss13] and in Refs. [@svbag14; @e15], respectively. In the case of the $\beta$ decay of $^{52}$Ca, we use the EDF T45, which takes into account the tensor force and includes a refit of the parameters of the central interaction [@TIJ]. The pairing correlations are generated by a zero-range volume force with a strength of -315 MeV fm$^{3}$ and a smooth cut-off at 10 MeV above the Fermi energies [@svbag14]. This value of the pairing strength has been fitted to reproduce the experimental neutron pairing energy of $^{52}$Ca obtained from binding energies of neighbouring Ca isotopes.
Taking into account the basic ideas of the quasiparticle-phonon model (QPM) [@solo; @ks84], the Hamiltonian is then diagonalized in a space spanned by states composed of one and two QRPA phonons [@svbag14], $$\Psi_\nu (JM) = \left(\sum_i R_i(J\nu)\, Q_{JMi}^{+} + \sum_{\lambda_1 i_1 \lambda_2 i_2} P_{\lambda_2 i_2}^{\lambda_1 i_1}(J\nu) \left[ Q_{\lambda_1 \mu_1 i_1}^{+} \bar{Q}_{\lambda_2 \mu_2 i_2}^{+} \right]_{JM} \right) |0\rangle, \label{wf}$$ where $Q_{\lambda \mu i}^{+}|0\rangle$ are the wave functions of the one-phonon states of the daughter nucleus (N - 1, Z + 1) and $\bar{Q}_{\lambda\mu i}^{+} |0\rangle$ is the one-phonon excitation of the parent nucleus (N, Z). We use only the two-phonon configurations $[1^{+}_{i}\otimes 2^{+}_{i'}]_{QRPA}$. In the allowed GT approximation, the $\beta^{-}$-decay rate is expressed by summing the probabilities (in units of $G_{A}^{2}/4\pi$) of the energetically allowed transitions ($E_{k}^{\mathrm{GT}}\leq Q_{\beta}$) weighted with the integrated Fermi function, $$T_{1/2}^{-1}=D^{-1}\left(\frac{G_{A}}{G_{V}}\right)^{2} \sum\limits_{k}f_{0}(Z+1,A,E_{k}^{\mathrm{GT}})\,B(GT)_{k},$$ $$E_{k}^{\mathrm{GT}}=Q_{\beta}-E_{1^+_k},$$ where $G_A/G_V=1.25$ and $D=6147$ s. $E_{1_k^+}$ denotes the excitation energy of the daughter nucleus. As proposed in Ref. [@ebnds99], this energy can be estimated by the following expression: $$E_{1^{+}_{k}}\approx E_{k}-E_{\textrm{2QP},\textrm{lowest}},$$ where $E_{k}$ are the eigenvalues of the wave functions (\[wf\]) and $E_{\textrm{2QP},\textrm{lowest}}$ corresponds to the lowest two-quasiparticle energy. The difference in the characteristic time scales of the $\beta$ decay and of the subsequent particle emission processes justifies the assumption of their statistical independence (see Ref. [@b05] for more details). The delayed neutron emission probability $P_{n}$ is defined as the fraction of the integral $\beta$-strength that feeds excited states above the neutron separation energy of the daughter nucleus.
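As a purely numerical illustration (not part of the calculations reported here), the sketch below combines the half-life formula above with this definition of $P_n$: the excitation energies, $B(GT)$ values and integrated Fermi functions $f_0$ are supplied by hand, and $P_n$ is evaluated as the fraction of the total decay rate that feeds states above the neutron separation energy $S_n$. All input numbers are illustrative.

```python
# Minimal sketch: half-life and delayed-neutron emission probability from a
# list of allowed GT transitions.  All input values below are illustrative.
D = 6147.0            # s
GA_OVER_GV = 1.25

def half_life_and_pn(e1plus, bgt, f0, q_beta, s_n):
    """e1plus: excitation energies of the 1+ states (MeV); bgt: B(GT) values;
    f0: integrated Fermi functions; q_beta and s_n in MeV."""
    inv_t12 = 0.0
    inv_t12_above_sn = 0.0
    for e_k, b_k, f_k in zip(e1plus, bgt, f0):
        if q_beta - e_k <= 0.0:        # keep only E_k^GT = Q_beta - E(1+_k) > 0
            continue
        term = (GA_OVER_GV ** 2 / D) * f_k * b_k
        inv_t12 += term
        if e_k > s_n:                  # this transition feeds a neutron-unbound state
            inv_t12_above_sn += term
    return 1.0 / inv_t12, inv_t12_above_sn / inv_t12

print(half_life_and_pn(e1plus=[1.3, 3.9, 4.2, 4.9],
                       bgt=[0.5, 0.05, 0.01, 0.6],
                       f0=[2.0e4, 8.0e3, 6.0e3, 4.0e3],
                       q_beta=7.0, s_n=4.5))
```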
The spectrum of the four low-energy $1^+$ states of $^{52}$Sc is shown in Table 1. The structural peculiarities are reflected in the $\log ft$ values. We find that the dominant contribution to the wave function of the first (fourth) $1^+$ state comes from the configuration $\{\pi1f_{7/2}\nu1f_{5/2}\}$ ($\{\pi1f_{7/2}\nu1f_{7/2}\}$). The inclusion of the four-quasiparticle configurations $\{\pi1f_{7/2}\nu1f_{5/2} \nu2p_{3/2}\nu2p_{1/2}\}$ and $\{\pi1f_{7/2}\nu1f_{5/2} \nu2p_{3/2}\nu2p_{3/2}\}$ plays the key role in our calculations of the states $1_{2}^+$ and $1_{3}^+$, respectively. The inclusion of the two-phonon configurations results in a $P_{n}$ value of 5%, and the quantitative agreement with the experimental data [@h85] is satisfactory. Note that this value is almost three times smaller than that obtained within the one-phonon approximation.
In summary, starting from Skyrme mean-field calculations, the GT strength in the $Q_{\beta}$-window has been studied within the model including the $2p-2h$ fragmentation. We analyze this effect on the $\beta$-transition rates in the case of $^{52}$Ca. Including the $2p-2h$ configurations leads to qualitative agreement with the existence of four low-energy $1^+$ states of $^{52}$Sc. As a result, the probability of the delayed neutron emission is decreased.
I would like to thank I.N. Borzov, Yu.E. Penionzhkevich, and D. Verney for fruitful collaboration, N.N. Arsenyev and E.O. Sushenok for help. This work is partly supported by CNRS-RFBR Agreement No. 16-52-150003, the IN2P3-JINR agreement, and RFBR Grant No. 16-02-00228.
[99]{} $\beta$-delayed neutron emission in the $^{78}$Ni region // Phys. Rev. C. 2005. V. 71. P. 065801. Masses of exotic calcium isotopes pin down nuclear forces // Nature. 2013. V. 498. P. 346–349. Evidence for a new nuclear ‘magic number’ from the level structure of $^{54}$Ca // Nature. 2013. V. 502. P. 207–210. Beta decay of the new isotopes $^{52}$K, $^{52}$Ca, and $^{52}$Sc; a test of the shell model far from stability // Phys. Rev. C. 1985. V. 31. P. 2226–2237. Finite rank approximation for random phase approximation calculations with Skyrme interactions: an application to Ar isotopes // Phys. Rev. C. 1998. V. 57. P. 1204–1209. Effects of phonon-phonon coupling on low-lying states in neutron-rich Sn isotopes // Eur. Phys. J. A. 2004. V. 22. P. 397–403. Charge-exchange excitations with Skyrme interactions in a separable approximation// Prog. Theor. Phys. 2012. V. 128. P. 489–506. Tensor correlation effects on Gamow-Teller resonances in $^{120}$Sn and $N=80,82$ isotones// Prog. Theor. Exp. Phys. 2013. V. 2013. P. 103D03. Influence of 2p-2h configurations on $\beta$-decay rates// Phys. Rev. C. 2014. V. 90. P. 044320. Low-lying intruder and tensor-driven structures in $^{82}$As revealed by $\beta$-decay at a new movable-tape-based experimental setup// Phys. Rev. C. 2015. V. 91. P. 064317. Tensor part of the Skyrme energy density functional: Spherical nuclei// Phys. Rev. C. 2007. V. 76. P. 014312. Theory of atomic nuclei: quasiparticles and phonons. Bristol and Philadelphia, Institute of Physics, 1992. Fragmentation of the Gamow-Teller resonance in spherical nuclei// J. Phys. G. 1984. V. 10. P. 1507-1522. $\beta$-decay rates of r-process waiting-point nuclei in a self-consistent approach// Phys. Rev. C. 1999. V. 60. P. 014302.
---
abstract: 'It was shown by Gruslys, Leader and Tan that any finite subset of $\mathbb{Z}^n$ tiles $\mathbb{Z}^d$ for some $d$. The first non-trivial case is the punctured interval, which consists of the interval $\{-k,\ldots,k\} \subset \mathbb{Z}$ with its middle point removed: they showed that this tiles $\mathbb{Z}^d$ for $d = 2k^2$, and they asked if the dimension needed tends to infinity with $k$. In this note we answer this question: we show that, perhaps surprisingly, every punctured interval tiles $\mathbb{Z}^4$.'
author:
- Harry Metrebian
title: Tiling with punctured intervals
---
Introduction
============
A *tile* is a finite non-empty subset of $\mathbb{Z}^n$ for some $n$. We say that a tile $T$ *tiles* $\mathbb{Z}^d$ if $\mathbb{Z}^d$ can be partitioned into copies of $T$, that is, subsets that are translations, rotations or reflections, or any combination of these, of $T$.
For example, the tile $\texttt{X.X} = \{-1,1\} \subset \mathbb{Z}$ tiles $\mathbb{Z}$. The tile $\texttt{XX.XX} = \{-2,-1,1,2\} \subset \mathbb{Z}$ does not tile $\mathbb{Z}$, but we can also regard it as a tile in $\mathbb{Z}^2$, and indeed it tiles $\mathbb{Z}^2$, as shown, for example, in [@gltan16].
Chalcraft [@chalcraft1; @chalcraft2] conjectured that, for any tile $T \subset \mathbb{Z}^n$, there is some dimension $d$ for which $T$ tiles $\mathbb{Z}^d$. This was proved by Gruslys, Leader and Tan [@gltan16]. The first non-trivial case is the *punctured interval* $T = \underbrace{\texttt{XXXXX}}_{k}\!\texttt{.}\!\underbrace{\texttt{XXXXX}}_{k}$. The authors of [@gltan16] showed that $T$ tiles $\mathbb{Z}^d$ for $d = 2k^2$, but they were unable to prove that the smallest required dimension $d$ was quadratic in $k$, or even that $d \to \infty$ as $k \to \infty$. They therefore asked the following question:
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, and let $d$ be the least number such that $T$ tiles $\mathbb{Z}^d$. Does $d \to \infty$ as $k \to \infty$?
In this paper we will show that, rather unexpectedly, $d$ does not tend to $\infty$:
\[mainthm\] Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$. Then $T$ tiles $\mathbb{Z}^4$. Furthermore, if $k$ is odd or congruent to $4 \pmod 8$, then $T$ tiles $\mathbb{Z}^3$.
We have already noted that `X.X` tiles $\mathbb{Z}$, and `XX.XX` tiles $\mathbb{Z}^2$ but not $\mathbb{Z}$. It can be shown via case analysis that, for $k \geq 3$, the tile $T$ does not tile $\mathbb{Z}^2$. However, this proof is tedious and provides little insight, and since it is not the focus of this paper, we omit it. For odd $k \geq 3$ and for $k \equiv 4 \pmod 8$, 3 is therefore the least $d$ such that $T$ tiles $\mathbb{Z}^d$. For the remaining cases, namely $k \equiv 0, 2, 6 \pmod 8$, $k \geq 6$, it is unknown whether the least possible $d$ is 3 or 4.
In this paper, we will first prove the result for odd $k$. This will introduce some key ideas, which we will develop to prove the result for general $k$, and then to improve the dimension from 4 to 3 for $k \equiv 4 \pmod 8$.
Finally, we give some background. Tilings of $\mathbb{Z}^2$ by polyominoes (edge-connected tiles in $\mathbb{Z}^2$) have been thoroughly investigated. For example, Golomb [@golomb70] showed that results of Berger [@berger66] implied that there is no algorithm which decides whether copies of a given finite set of polyominoes tile $\mathbb{Z}^2$. It is unknown whether the same is true for tilings by a single polyomino. For tilings of $\mathbb{Z}$ by sets of general one-dimensional tiles, such an algorithm does exist, as demonstrated by Adler and Holroyd [@ah81]. Kisisel [@kisisel01] introduced an ingenious technique for proving that certain tiles do not tile $\mathbb{Z}^2$ without having to resort to case analysis.
A similar problem is to consider whether a tile $T$ tiles certain finite regions, such as cuboids. There is a significant body of research, sometimes involving computer searches, on tilings of rectangles in $\mathbb{Z}^2$ by polyominoes (see, for example, Conway and Lagarias [@cl90] and Dahlke [@dahlke]). Friedman [@friedman] has collected some results on tilings of rectangles by small one-dimensional tiles. More recently, Gruslys, Leader and Tomon [@gltomon16] and Tomon [@tomon16] considered the related problem of partitioning the Boolean lattice into copies of a poset, and similarly Gruslys [@gruslys16] and Gruslys and Letzter [@gl16] have worked on the problem of partitioning the hypercube into copies of a graph.
Preliminaries and the odd case
==============================
We begin with the case of $k$ odd. This is technically much simpler than the general case, and allows us to demonstrate some of the main ideas in the proof of Theorem \[mainthm\] in a less complicated setting.
\[kodd\] Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, with $k$ odd. Then $T$ tiles $\mathbb{Z}^3$.
Throughout this section, $T$ is fixed, and $k \geq 3$. We will not yet assume that $k$ is odd, because the tools that we are about to develop will be relevant to the general case too.
We start with an important definition from [@gltan16]: a *string* is a one-dimensional infinite line in $\mathbb{Z}^d$ with every $(k+1)$th point removed. Crucially, a string is a disjoint union of copies of $T$.
We cannot tile $\mathbb{Z}^d$ with strings, as each string intersects $[k+1]^d$ in either 0 or $k$ points, and $(k+1)^d$ is not divisible by $k$. However, we could try to tile $\mathbb{Z}^d$ by using strings in $d-1$ of the $d$ possible directions, leaving holes that can be filled with copies of $T$ in the final direction. We therefore consider $\mathbb{Z}^d$ as consisting of slices equivalent to $\mathbb{Z}^{d-1}$, each of which will be partially tiled by strings.
Any partial tiling of the discrete torus $\mathbb{Z}_{k+1}^{d-1} = (\mathbb{Z}/(k+1)\mathbb{Z})^{d-1}$ by lines with one point removed corresponds to a partial tiling of $\mathbb{Z}^{d-1}$ by strings. We will restrict our attention to these tilings at first, as they are easy to work with.
We will call a set $X \subset \mathbb{Z}_{k+1}^{d-1}$ a *hole* in $\mathbb{Z}_{k+1}^{d-1}$ if $\mathbb{Z}_{k+1}^{d-1} \setminus X$ can be tiled with strings. One particularly useful case of this is when $d = 3$ and $X$ either has exactly one point in each row of $\mathbb{Z}_{k+1}^2$ or exactly one point in each column of $\mathbb{Z}_{k+1}^2$. Then $X$ is clearly a hole, since a string in $\mathbb{Z}_{k+1}^2$ is just a row or column minus a point.
The following result will allow us to fill the gaps in the final direction, assuming we have chosen the partial tilings of the $\mathbb{Z}^{d-1}$ slices carefully:
\[biglemma\] Let $S \subset \mathbb{Z}^d$, $|S| = 3$. Then there exists $Y \subset S \times \mathbb{Z}$ such that $T$ tiles $Y$, and for every $n \in \mathbb{Z}$, $|Y \cap (S \times \{n\})| = 2$.
Let $S = \{x_1, x_2, x_3\}$. For $i = 1,2,3$, place a copy of $T$ beginning at $\{x_i\} \times \{n\}$ for every $n \equiv ik \pmod {3k}$. The union $Y$ of these tiles has the required property:\
For $n \equiv 0, k+1, \ldots, 2k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_1, x_3\} \times \{n\}$.\
For $n \equiv k, 2k+1, \ldots, 3k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_1, x_2\} \times \{n\}$.\
For $n \equiv 2k, 1, \ldots, k-1 \pmod{3k}$, $Y \cap (S \times \{n\}) = \{x_2, x_3\} \times \{n\}$.\
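This construction is easy to check by computer. In the sketch below (not part of the proof), a copy of $T$ "beginning at" height $n$ is taken to occupy $n,\ldots,n+2k$ with the point $n+k$ removed, and the check confirms that every slice $S \times \{n\}$ contains exactly two points of $Y$.

```python
# Sanity check of the construction above (assuming a copy of T beginning at
# height n occupies n, ..., n+2k with the point n+k removed).
def tile_heights(start, k):
    return {start + j for j in range(2 * k + 1) if j != k}

def check_biglemma(k, n_periods=4):
    period = 3 * k
    covered = {i: set() for i in (1, 2, 3)}     # heights covered above x_1, x_2, x_3
    for i in covered:
        for m in range(-1, n_periods + 1):      # tiles start at n = i*k (mod 3k)
            covered[i] |= tile_heights(i * k + m * period, k)
    # inspect one full period well inside the window
    return all(sum(n in covered[i] for i in covered) == 2
               for n in range(period, 2 * period))

print([check_biglemma(k) for k in range(3, 9)])   # expected: all True
```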
We will now prove Theorem \[kodd\]. We know that if $X \subset \mathbb{Z}_{k+1}^2$ has one point in each row or column then $X$ is a hole of size $k+1$. Since $k+1$ is even, we can try to choose $X_n$ in each slice $\mathbb{Z}_{k+1}^2 \times \{n\}$ so that $\bigcup_{n\in\mathbb{Z}}X_n$ is the disjoint union of $\frac{k+1}{2}$ sets $Y_i$ of the form in Lemma \[biglemma\].
We can do this as follows:\
For $n \equiv 0, k+1, \ldots, 2k-1 \pmod{3k}$, let $X_n = \{(0,0),(1,1),\ldots,(k-1,k-1),(k,k)\}$.\
For $n \equiv k, 2k+1, \ldots, 3k-1 \pmod{3k}$, let $X_n = \{(0,0),(0,1),(2,2),(2,3),\ldots,(k-1,k-1),\newline(k-1,k)\}$.\
For $n \equiv 2k, 1, \ldots, k-1 \pmod{3k}$, let $X_n = \{(0,1),(1,1),(2,3),(3,3),\ldots,(k-1,k),(k,k)\}$.\
Then let $X = \bigcup\limits_{n\in\mathbb{Z}} (X_n \times \{n\}) \subset \mathbb{Z}_{k+1}^2 \times \mathbb{Z}$.
Each $X_n$ is a hole, so we can tile $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z})\setminus X$ with strings. Also, $X$ is the disjoint union of sets of the form $Y$ from Lemma \[biglemma\]: for $0 \leq i \leq \frac{k-1}{2}$, let $S_i = \{(2i,2i),(2i,2i+1),(2i+1,2i+1)\}$. Then $X \cap (S_i \times \mathbb{Z})$ is precisely the set $Y$ generated from $S_i$ in the proof of Lemma \[biglemma\]. Hence $T$ tiles $X$.
Since $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z})\setminus X$ can be tiled with strings, we can partially tile $\mathbb{Z}^3$ with strings, leaving a copy of $X$ empty in each copy of $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}$. We can tile all of these copies of $X$ with $T$, so $T$ tiles $\mathbb{Z}^3$, completing the proof of Theorem \[kodd\].
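The two facts used in this proof -- that each $X_n$ has exactly one point in every row or every column of $\mathbb{Z}_{k+1}^2$, and that $X$ decomposes into the triples $S_i$ -- can also be verified directly; the following sketch (not part of the argument) does so for small odd $k$, with the three distinct patterns denoted `x0`, `x1`, `x2`.

```python
# Check, for odd k, that the three patterns X_n are holes (one point per row
# or per column) and that their union is the union of the triples S_i.
from collections import Counter

def check_odd_case(k):
    assert k % 2 == 1
    half = (k + 1) // 2
    x0 = {(i, i) for i in range(k + 1)}
    x1 = {(2*i, 2*i) for i in range(half)} | {(2*i, 2*i + 1) for i in range(half)}
    x2 = {(2*i, 2*i + 1) for i in range(half)} | {(2*i + 1, 2*i + 1) for i in range(half)}

    def one_per_row_or_col(pts):
        rows = Counter(q for _, q in pts)
        cols = Counter(p for p, _ in pts)
        return all(rows[j] == 1 for j in range(k + 1)) or \
               all(cols[j] == 1 for j in range(k + 1))

    triples = set()
    for i in range(half):
        triples |= {(2*i, 2*i), (2*i, 2*i + 1), (2*i + 1, 2*i + 1)}
    return all(map(one_per_row_or_col, (x0, x1, x2))) and (x0 | x1 | x2) == triples

print([check_odd_case(k) for k in (3, 5, 7, 9)])   # expected: all True
```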
The general case
================
We now move on to general $k$:
\[generalk\] Let $T$ be the tile $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$. Then $T$ tiles $\mathbb{Z}^4$.
We will assume throughout that $T$ is fixed and $k \geq 3$.
For even $k$, the construction used to prove Theorem \[kodd\] does not work, as all holes in $\mathbb{Z}_{k+1}^2$ have size $(k+1)^2-mk$ for some $m$, and this is always odd, so we cannot use Lemma \[biglemma\]. The same is true if we replace 2 with a larger dimension, or if, as in [@gltan16], we use strings in which every $(2k+1)$th point, rather than every $(k+1)$th point, is removed. We will therefore need a new idea.
Instead of using strings in $d-1$ out of $d$ directions, we could only use them in $d-2$ directions and fill the gaps with copies of $T$ in the 2 remaining directions. We will show that this approach works in the case $d = 2$, giving a tiling of $\mathbb{Z}^4$. The strategy will be to produce a partial tiling of each $\mathbb{Z}^3$ slice and use the construction from Lemma \[biglemma\] to fill the gaps with tiles in the fourth direction.
We will again build partial tilings of $\mathbb{Z}^{2}$, and therefore of higher dimensions, from partial tilings of the discrete torus $\mathbb{Z}_{k+1}^{2}$. The following result is a special case of one proved in [@gltan16]:
\[onepoint\] If $x \in \mathbb{Z}_{k+1}^{2}$, then $\mathbb{Z}_{k+1}^{2}\setminus\{x\}$ can be tiled with strings.
Let $x = (x_1,x_2)$, where the first coordinate is horizontal and the second vertical. Since a string is a row or column minus one point, we can place a string $(\{n\} \times \mathbb{Z}_{k+1})\setminus\{(n,x_2)\}$ in each column, leaving only the row $\mathbb{Z}_{k+1} \times \{x_2\}$ empty. Placing the string $(\mathbb{Z}_{k+1} \times \{x_2\})\setminus \{x\}$ in this row completes the tiling of $\mathbb{Z}_{k+1}^{2}\setminus\{x\}$.
The sets $S$ of size 3 that we will use in Lemma \[biglemma\] will have 2 points, say $x_1$ and $x_2$, in one $\mathbb{Z}_{k+1}^{2}$ layer and one point, say $x_3$, in another layer. Every layer will contain points from exactly one such set $S$. Let $Y$ be the set constructed from $S$ in the proof of Lemma \[biglemma\]. In a given slice $\mathbb{Z}^3 \times \{n\}$, there are therefore two cases:
1. $Y \cap (S \times \{n\}) = \{x_1, x_3\} \times \{n\}$ or $\{x_2, x_3\} \times \{n\}$.
2. $Y \cap (S \times \{n\}) = \{x_1, x_2\} \times \{n\}$.
In Case 1, each $\mathbb{Z}_{k+1}^{2}$ layer contains exactly one point of $Y$. $T$ then tiles the rest of the layer by Proposition \[onepoint\].
In Case 2, some of the layers contain two points of $Y$, and some of the layers contain no points. Holes of size 0 and 2 do not exist, so we will need copies of $T$ in the third direction to fill some gaps (where $Y$ consists of copies of $T$ in the fourth direction). The following lemma provides us with a way to do this:
\[otherlemma\] Let $A \subset \mathbb{Z}^d$, $|A| = 3k$. Then there exists $B \subset A \times \mathbb{Z}$ such that $T$ tiles $B$, and $$|B \cap (A \times \{n\})| =
\begin{cases}
k+1 & \text{\emph{if} } n \equiv 1, \ldots, k \pmod{2k}\\
k-1 & \text{\emph{if} } n \equiv k+1, \ldots, 2k \pmod{2k}
\end{cases}$$
Let $A = \{a_1, \ldots, a_{3k}\}$. Then:\
For $i = 1, \ldots, k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i \pmod{6k}$.\
For $i = k+1, \ldots, 2k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i+k \pmod{6k}$.\
For $i = 2k+1, \ldots, 3k$, place a copy of $T$ beginning at $\{a_i\} \times \{n\}$ for every $n \equiv i+2k \pmod{6k}$.\
We now observe that the union $B$ of these tiles has the required property.\
For $n \equiv 1, \ldots, k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{2k+n}, \ldots, a_{3k}, a_1, \ldots, a_n\}$ (size $k+1$).\
For $n \equiv k+1, \ldots, 2k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_1, \ldots, a_k\}\setminus\{a_{n-k}\}$ (size $k-1$).\
For $n \equiv 2k+1, \ldots, 3k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{n-2k}, \ldots, a_{n-k}\}$ (size $k+1$).\
For $n \equiv 3k+1, \ldots, 4k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{k+1}, \ldots, a_{2k}\}\setminus\{a_{n-2k}\}$ (size $k-1$).\
For $n \equiv 4k+1, \ldots, 5k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{n-3k}, \ldots, a_{n-2k}\}$ (size $k+1$).\
For $n \equiv 5k+1, \ldots, 6k \pmod{6k}$, $B \cap (A \times \{n\}) = \{a_{2k+1}, \ldots, a_{3k}\}\setminus\{a_{n-3k}\}$ (size $k-1$).
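As for Lemma \[biglemma\], the layer counts claimed here can be confirmed mechanically; the sketch below (not part of the proof) uses the same convention that a copy of $T$ beginning at height $n$ occupies $n,\ldots,n+2k$ with $n+k$ removed.

```python
# Check that the union B of the lemma above covers k+1 of the points
# a_1, ..., a_{3k} in layers n = 1, ..., k (mod 2k) and k-1 of them in
# layers n = k+1, ..., 2k (mod 2k).
def tile_heights(start, k):
    return {start + j for j in range(2 * k + 1) if j != k}

def check_otherlemma(k, n_periods=3):
    period = 6 * k
    def start_of(i):              # starting height (mod 6k) of the tiles above a_i
        return i if i <= k else (i + k if i <= 2 * k else i + 2 * k)
    covered = {i: set() for i in range(1, 3 * k + 1)}
    for i in covered:
        for m in range(-1, n_periods + 1):
            covered[i] |= tile_heights(start_of(i) + m * period, k)
    for n in range(period, 2 * period):   # one full period well inside the window
        size = sum(n in covered[i] for i in covered)
        expected = k + 1 if (n - 1) % (2 * k) < k else k - 1
        if size != expected:
            return False
    return True

print([check_otherlemma(k) for k in range(3, 8)])   # expected: all True
```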
The reasoning behind this lemma is that there exist sets $X \subset \mathbb{Z}_{k+1}^{2} \times \mathbb{Z}$ that are missing exactly $k+1$ points in every $\mathbb{Z}_{k+1}^{2}$ layer and can be tiled with strings. If we take $d = 2$ in Lemma \[otherlemma\], we would like to choose such a set $X$ and a set $A \subset \mathbb{Z}_{k+1}^{2}$ (abusing notation slightly, as $\mathbb{Z}_{k+1}^{2}$ is not actually a subset of $\mathbb{Z}^2$) such that the resulting $B$ in Lemma \[otherlemma\] is disjoint from $X$. Then $(\mathbb{Z}_{k+1}^{2} \times \mathbb{Z})\setminus(B \cup X)$ contains either 2 or 0 points in each $\mathbb{Z}_{k+1}^{2}$ layer, which is what we wanted.
In order for this construction to work, we need the set $B \cap (A \times \{n\})$ to be a hole whenever it has size $k+1$, and to be a subset of a hole of size $k+1$ whenever it has size $k-1$, so that we actually can tile the required points with strings. By observing the forms of the sets $B \cap (A \times \{n\})$ in the proof of Lemma \[otherlemma\], we see that it is sufficient to choose the $a_n$ such that for all $n$, $\{a_n, \ldots, a_{n+k}\}$ is a hole. Here we regard the indices $n$ of the points $a_n$ of $A$ as integers mod $3k$, so $a_{3k+1} = a_1$ and so on. The following proposition says that we can do this.
\[anprop\] There exists a set $A = \{a_1, \ldots, a_{3k}\} \subset \mathbb{Z}_{k+1}^{2}$ such that for all $n$, $\{a_n, \ldots, a_{n+k}\}$ contains either one point in every row or one point in every column. Here the indices are regarded as integers *mod* $3k$.
For $n = 1, \ldots, k+1$, let $a_n = (n-1,n-1)$.\
For $n = k+2, \ldots, 2k-1$, let $a_n = (n-k-2,n-k-1)$.\
For $n = 2k, 2k+1, 2k+2$, let $a_n = (n-k-2,n-2k)$.\
For $n = 2k+3, \ldots, 3k$, let $a_n = (n-2k-3,n-2k)$.\
Note that all the $a_n$ are distinct. Let us regard the first coordinate as horizontal and the second as vertical.\
Then, for $n = 1, \ldots, 2k$, $\{a_n, \ldots, a_{n+k}\}$ contains one point in every column.\
For $n = 2k+1, \ldots, 3k$, $\{a_n, \ldots, a_{n+k}\}$ contains one point in every row.
From now on, $a_n$ refers to the points defined in the above proof. This proposition is the motivation for choosing the value $6k$ in the proof of Lemma \[otherlemma\].
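Both properties -- the distinctness of the $a_n$ and the row/column property of every window of $k+1$ consecutive points -- are easy to confirm numerically, as in the following sketch (not part of the proof).

```python
# Check the proposition above: the points a_n are distinct and every window
# {a_n, ..., a_{n+k}} (indices mod 3k) meets each row or each column of
# Z_{k+1}^2 exactly once.
def a_points(k):
    a = {}
    for n in range(1, 3 * k + 1):
        if n <= k + 1:
            a[n] = (n - 1, n - 1)
        elif n <= 2 * k - 1:
            a[n] = (n - k - 2, n - k - 1)
        elif n <= 2 * k + 2:
            a[n] = (n - k - 2, n - 2 * k)
        else:
            a[n] = (n - 2 * k - 3, n - 2 * k)
    return a

def check_anprop(k):
    a = a_points(k)
    if len(set(a.values())) != 3 * k:                    # all a_n distinct
        return False
    for n in range(1, 3 * k + 1):
        window = [a[(n + j - 1) % (3 * k) + 1] for j in range(k + 1)]
        cols = {p for p, _ in window}                    # first coordinates
        rows = {q for _, q in window}                    # second coordinates
        if len(cols) != k + 1 and len(rows) != k + 1:
            return False
    return True

print([check_anprop(k) for k in range(3, 10)])           # expected: all True
```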
We can now prove Theorem \[generalk\]. We will need 3 distinct partial tilings of $\mathbb{Z}^3$ slices, corresponding to the 3 cases in the proof of Lemma \[biglemma\] with $d = 3$. The repeating unit in each of these partial tilings will have size $(k+1) \times (k+1) \times 6k$, so we will work in $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k}$.
We start by choosing the sets $S$ as in Lemma \[biglemma\]. These will be as follows:\
For $n = 1, \ldots, k$, $S_n = \{(0,0,n),(a_n,n+k),(a_{k+1},n+k)\}$.\
For $n = k+1, \ldots, 2k$, $S_n = \{(0,0,n+k),(a_n,n+2k),(a_{2k+1},n+2k)\}$.\
For $n = 2k+1, \ldots, 3k$, $S_n = \{(0,0,n+2k),(a_n,n+3k),(a_1,n+3k)\}$.\
We will refer to the points in $S_n$ as $x_{n,1},x_{n,2},x_{n,3}$ in the order given.
We can construct a set $Y_n \subset \mathbb{Z}^4$ from each $S_n$ using the construction in the proof of Lemma \[biglemma\]. Let $Y = \bigcup_{1 \leq n \leq 3k} Y_n$. For a given $m \in \mathbb{Z}$, there are two possibilities for the structure of $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$:
1. $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,1},x_{n,2}\}$ or $\{x_{n,1},x_{n,3}\}$. Then it contains exactly one point in each $\mathbb{Z}_{k+1}^2$ layer. We can therefore tile $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$ entirely with strings, by Proposition \[onepoint\].
2. $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ consists of pairs of the form $\{x_{n,2},x_{n,3}\}$. Then it contains either 2 or 0 points in each $\mathbb{Z}_{k+1}^2$ layer.\
If $A = \{a_1, \ldots, a_{3k}\}$, and $B$ is the set constructed from $A$ in the proof of Lemma \[otherlemma\], then, by the choice of the $S_n$, the sets $B$ and $Y \cap (\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\})$ are disjoint. Furthermore, if $C$ is the union of these two sets, then, for every $n$, $C \cap (\mathbb{Z}_{k+1}^2 \times \{n\} \times \{m\}) = \{a_r, \ldots, a_{r+k}\}$ for some $r$, and by Proposition \[anprop\], this contains either one point in every row or one point in every column and is therefore a hole.\
Since $T$ tiles $B$, it also tiles $(\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \{m\}) \setminus Y$.
$T$ tiles $Y$ by Lemma \[biglemma\]. Hence $T$ tiles $\mathbb{Z}_{k+1}^2 \times \mathbb{Z}_{6k} \times \mathbb{Z}$, and therefore also $\mathbb{Z}^4$, completing the proof of Theorem \[generalk\].
The 4 mod 8 case
================
To finish the proof of Theorem \[mainthm\], all that remains is to prove the following:
\[4mod8\] Let $T$ be the tile $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, with $k \equiv 4 \pmod 8$. Then $T$ tiles $\mathbb{Z}^3$.
We will prove this by constructing partial tilings of each $\mathbb{Z}^2$ slice and filling in the gaps using the construction from the proof of Lemma \[biglemma\]. We will define 3 subsets $X_1$, $X_2$, $X_3$ of $\mathbb{Z}^2$ and show that $T$ tiles each of them. However, two of these tilings will not make use of strings.
Let $S_1 = \{(x,x+n(k+1)) \; | \; n \in \mathbb{Z}, x \equiv 2n,2n+1,2n+2,2n+3 \pmod 8\}$.
Let $S_2 = \{(x,x+n(k+1)) \; | \; n\in \mathbb{Z}, x \equiv 2n+4,2n+5,2n+6,2n+7 \pmod 8\}$.
Let $S_3 = \{(x,x+n(k+1)+1) \; | \; n \in \mathbb{Z}, x \equiv 2n+2,2n+3,2n+4,2n+5 \pmod 8\}$.
Let $X_1 = \mathbb{Z}^2 \setminus (S_2 \cup S_3)$, $X_2 = \mathbb{Z}^2 \setminus (S_1 \cup S_3)$, $X_3 = \mathbb{Z}^2 \setminus (S_1 \cup S_2)$.
Let the first coordinate be horizontal and the second vertical.
$X_3$ is $\mathbb{Z}^2$ with every $(k+1)$th diagonal removed, so each row (or column) is $\mathbb{Z}$ with every $(k+1)$th point removed, that is, a string. Hence $T$ tiles $X_3$.
We will show that $X_1$ can be tiled with vertical copies of $T$ and $X_2$ can be tiled with horizontal copies of $T$.
Note that $(x,x+n(k+1))+(2,k+3) = (x+2,(x+2)+(n+1)(k+1))$. Also, if $x \equiv 2n+r \pmod 8$, then $x+2 \equiv 2(n+1)+r \pmod 8$. Hence, by the definitions of $S_2$ and $S_3$, we see that $X_1$ is invariant under translation by $(2,k+3)$. To show that vertical copies of $T$ tile $X_1$, it therefore suffices to show that $T$ tiles the columns $X_1 \cap (\{0\} \times \mathbb{Z})$ and $X_1 \cap (\{1\} \times \mathbb{Z})$.
But in fact, if $(0,y) \in S_2$, then $0 \equiv 2n+4$ or $2n+6 \pmod 8$, so $1 \equiv 2n+5$ or $2n+7 \pmod 8$, so also $(1,y+1) \in S_2$. The converse also holds, and the same is true for $S_3$. Thus we only need to check the case $x = 0$.
$(0,n(k+1)) \in S_2$ for $n \equiv 1,2,5,6 \pmod 8$, that is, $n \equiv 1,2 \pmod 4$.
$(0,n(k+1)+1) \in S_3$ for $n \equiv 2,3,6,7 \pmod 8$, that is, $n \equiv 2,3 \pmod 4$.
Therefore $(0,y) \notin X_1$ for $y \equiv k+1, 2(k+1), 2(k+1)+1, 3(k+1)+1 \pmod{4(k+1)}$, so copies of $T$ beginning at positions $1$ and $2(k+1)+2 \pmod{4(k+1)}$ tile $X_1 \cap (\{0\} \times \mathbb{Z})$.
Hence $T$ tiles $X_1$.
Note that $(x,x+n(k+1))+(k+2,1) = (x+k+2,(x+k+2)+(n-1)(k+1))$.\
Since $k \equiv 4 \pmod 8$, if $x \equiv 2n+r \pmod 8$ then $x+k+2 \equiv 2(n-1)+r \pmod 8$. Hence $X_2$ is invariant under translation by $(k+2,1)$, by the definitions of $S_1$ and $S_3$. To show that horizontal copies of $T$ tile $X_2$, it is therefore enough to show that $T$ tiles the row $X_2 \cap (\mathbb{Z} \times \{0\})$.
We can express $S_1$ as $\{(y-n(k+1),y) \; | \; y \equiv -n,1-n,2-n,3-n \pmod 8\}$.
Similarly $S_3 = \{(y-n(k+1)-1,y) \; | \; y \equiv 3-n,4-n,5-n,6-n \pmod 8\}$.
Therefore $(-n(k+1),0) \in S_1$ for $n \equiv 0,1,2,3 \pmod 8$, and $(-n(k+1)-1,0) \in S_3$ for $n \equiv 3,4,5,6 \pmod 8$.
Hence $(x,0) \notin X_2$ for $x \equiv 0, 2(k+1)-1, 3(k+1)-1, 4(k+1)-1, 5(k+1)-1, 5(k+1), 6(k+1), \newline 7(k+1) \pmod{8(k+1)}$, so copies of $T$ beginning at positions $k+1, 3(k+1), 5(k+1)+1, 7(k+1)+1 \pmod{8(k+1)}$ tile $X_2 \cap (\mathbb{Z} \times \{0\})$.
Hence $T$ tiles $X_2$.
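The gap patterns established in the two proofs above can also be confirmed numerically. In the sketch below (not part of the argument), membership in $S_1$, $S_2$ and $S_3$ is tested directly from the definitions, and the missing points of $X_1$ along the column $x=0$ and of $X_2$ along the row $y=0$ are compared with the positions stated in the proofs.

```python
# Check, for k = 4 (mod 8), the positions of the gaps of X_1 along the column
# x = 0 (one period 4(k+1)) and of X_2 along the row y = 0 (one period 8(k+1)).
def in_s1(x, y, k):
    n, r = divmod(y - x, k + 1)
    return r == 0 and (x - 2 * n) % 8 in (0, 1, 2, 3)

def in_s2(x, y, k):
    n, r = divmod(y - x, k + 1)
    return r == 0 and (x - 2 * n) % 8 in (4, 5, 6, 7)

def in_s3(x, y, k):
    n, r = divmod(y - x - 1, k + 1)
    return r == 0 and (x - 2 * n) % 8 in (2, 3, 4, 5)

def check_4mod8(k):
    assert k % 8 == 4
    col_gaps = {y for y in range(4 * (k + 1)) if in_s2(0, y, k) or in_s3(0, y, k)}
    ok_col = col_gaps == {k + 1, 2 * (k + 1), 2 * (k + 1) + 1, 3 * (k + 1) + 1}
    row_gaps = {x % (8 * (k + 1)) for x in range(-8 * (k + 1), 0)
                if in_s1(x, 0, k) or in_s3(x, 0, k)}
    expected = ({0} | {j * (k + 1) - 1 for j in (2, 3, 4, 5)}
                | {j * (k + 1) for j in (5, 6, 7)})
    return ok_col and row_gaps == expected

print([check_4mod8(k) for k in (4, 12, 20)])   # expected: all True
```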
$S_1 \cup S_2 \cup S_3$ can be partitioned into sets of the form $S = \{x_1, x_2, x_3\}$, where $x_1 = (x,y) \in S_1$, $x_2 = (x+4,y+4) \in S_2$, $x_3 = (x+2,y+3) \in S_3$. Then $|S| = 3$, so we can construct the corresponding set $Y \subset \mathbb{Z}^3$ as in Lemma \[biglemma\]. Now, given $n \in \mathbb{Z}$, $(S \times \{n\}) \setminus Y = \{x_i\}$ for some $i \in \{1,2,3\}$. Then $Y \cap (X_i \times \{n\}) = \emptyset$. If we do this for all such sets $S$, and let $U$ be the (disjoint) union of the resulting sets $Y$, then $U \cap (X_i \times \{n\}) = \emptyset$, and $\mathbb{Z}^2 \times \{n\} \subset U \cup (X_i \times \{n\})$. Recall that $T$ tiles each $Y$ and therefore $U$.
We can do this for every $n$, choosing a partial tiling $X_i$ for the corresponding $\mathbb{Z}^2$ layer. Together with $U$, these form a tiling of $\mathbb{Z}^3$ by $T$. This completes the proof of Theorem \[4mod8\], and therefore also the proof of Theorem \[mainthm\].
Open problems
=============
Theorem \[mainthm\], together with the result that a punctured interval $T = \underbrace{\texttt{XXXXX}}_{k}\!\texttt{.}\!\underbrace{\texttt{XXXXX}}_{k}$ does not tile $\mathbb{Z}^2$ for $k \geq 3$, determines the smallest dimension $d$ such that $T$ tiles $\mathbb{Z}^d$ in the cases $k$ odd and $k \equiv 4 \pmod 8$. However, for other values of $k$, it is still unknown whether the smallest such dimension $d$ is 3 or 4:
Let $T$ be the punctured interval $\underbrace{\texttt{\emph{XXXXX}}}_{k}\!\texttt{.}\!\underbrace{\texttt{\emph{XXXXX}}}_{k}$, where $k \equiv 0, 2, 6 \pmod 8$, $k \geq 6$. Does $T$ tile $\mathbb{Z}^3$?
It is also natural to consider more general tiles. The next non-trivial case is that of an interval with a non-central point removed. One might wonder if there is an analogue of Theorem \[mainthm\] for these tiles:
Does there exist a number $d$ such that, for any tile $T$ consisting of an interval in $\mathbb{Z}$ with one point removed, $T$ tiles $\mathbb{Z}^d$?
For general one-dimensional tiles, Gruslys, Leader and Tan [@gltan16] conjectured that there is a bound on the dimension in terms of the size of the tile:
For any positive integer $t$, there exists a number $d$ such that any tile $T \subset \mathbb{Z}$ with $|T| \leq t$ tiles $\mathbb{Z}^d$.
This conjecture remains unresolved. The authors of [@gltan16] showed that if $d$ always exists then $d \to \infty$ as $t \to \infty$, by exhibiting a tile of size $3d-1$ that does not tile $\mathbb{Z}^d$. This gives a simple lower bound on $d$; better bounds would be of great interest.
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Vytautas Gruslys for suggesting this problem and for many helpful discussions, and Imre Leader for his encouragement and useful comments.
[99]{}
A. Adler and F. C. Holroyd, ‘Some results on one-dimensional tilings’, *Geom. Dedicata* 10 (1981) 49–58.
R. Berger, ‘The undecidability of the domino problem’, *Mem. Amer. Math. Soc.* 66 (1966) 1–72.
J. H. Conway and J. C. Lagarias, ‘Tiling with polyominoes and combinatorial group theory’, *J. Combin. Theory Ser. A* 53 (1990) 183–208.
K. Dahlke, ‘Tiling rectangles with polyominoes’, http://eklhad.net/polyomino/index.html (retrieved 7 May 2018)
E. Friedman, ‘Problem of the Month (February 1999)’,\
https://www2.stetson.edu/\~efriedma/mathmagic/0299.html (retrieved 7 May 2018)
S. W. Golomb, ‘Tiling with sets of polyominoes’, *J. Combin. Theory* 9 (1970) 60–71.
V. Gruslys, ‘Decomposing the vertex set of a hypercube into isomorphic subgraphs’, arXiv:1611.02021.
V. Gruslys, I. Leader, T. S. Tan, ‘Tiling with arbitrary tiles’, *Proc. London Math. Soc.* (3) 112 (2016) 1019–1039.
V. Gruslys, I. Leader, I. Tomon, ‘Partitioning the Boolean lattice into copies of a poset’, arXiv:1609.02520.
V. Gruslys, S. Letzter, ‘Almost partitioning the hypercube into copies of a graph’, arXiv:1612.04603.
A. U. O. Kisisel, ‘Polyomino convolutions and tiling problems’, *J. Combin. Theory Ser. A* 95 (2001) 373–380.
The Math Forum, ‘Two tiling problems’,\
http://mathforum.org/kb/message.jspa?messageID=6223965 (retrieved 7 May 2018)
MathOverflow, ‘Does every polyomino tile $\mathbb{R}^n$ for some $n$?’\
https://mathoverflow.net/questions/49915/does-every-polyomino-tile-rn-for-some-n (retrieved 7 May 2018)
I. Tomon, ‘Almost tiling of the Boolean lattice with copies of a poset’, arXiv:1611.06842.
Harry Metrebian\
Trinity College\
Cambridge\
CB2 1TQ\
United Kingdom
rhkbm2@cam.ac.uk
---
abstract: 'We investigate mode coupling in a two dimensional compressible disc with radial stratification and differential rotation. We employ the global radial scaling of linear perturbations and study the linear modes in the local shearing sheet approximation. We employ a three-mode formalism and study the vorticity (W), entropy (S) and compressional (P) modes and their coupling properties. The system exhibits asymmetric three-mode coupling: these include mutual coupling of S and P-modes, S and W-modes, and asymmetric coupling between the W and P-modes. P-mode perturbations are able to generate potential vorticity through indirect three-mode coupling. This process indicates that compressional perturbations can lead to the development of vortical structures and influence the dynamics of radially stratified hydrodynamic accretion and protoplanetary discs.'
author:
- |
A. G. Tevzadze$^1$, G. D. Chagelishvili$^{1,2}$, G. Bodo$^3$ and P. Rossi$^3$\
$^1$ Georgian National Astrophysical Observatory, Chavchavadze State University, Tbilisi, Georgia\
$^2$ Nodia Institute of Geophysics, Georgian Academy of Sciences, Tbilisi, Georgia\
$^3$ INAF – Osservatorio Astronomico di Torino, strada dell’Osservatorio 20, I-10025 Pino Torinese, Italy
title: Linear coupling of modes in 2D radially stratified astrophysical discs
---
accretion, accretion discs – hydrodynamics – instabilities
Introduction
============
The recent increased interest in the analysis of hydrodynamic disc flows is motivated, on one hand, by the study of turbulent processes, and, on the other, by the investigation of regular structure formation in protoplanetary discs. Indeed, many astrophysical discs are thought to be neutral or to have ionization rates too low to couple effectively with the magnetic field. Among these are cool and dense areas of protoplanetary discs, discs around young stars, X-ray transient and dwarf nova systems in quiescence (see e.g. Gammie and Menou 1998, Sano et al. 2000, Fromang, Terquem and Balbus 2002). Observational data show that astrophysical discs often exhibit radial gradients of thermodynamic variables (see e.g. Sandin et al. 2008, Issela et al. 2007). To what extent these inhomogeneities affect the processes occurring in the disc is still a subject open to investigation. It has been found that strong local entropy gradients in the radial direction may drive the Rossby wave instability (Lovelace et al. 1999, Li et al. 2000), which transfers thermal to kinetic energy and leads to vortex formation. However, in astrophysical discs, radial stratification is more likely to be weak. In this case, the radial entropy (temperature) variation on the global scale leads to the existence of baroclinic perturbations over the barotropic equilibrium state. This more realistic situation has recently become a subject of extensive study.
Klahr and Bodenheimer (2003) pointed out that the radial stratification in the disc can lead to the global baroclinic instability. Numerical results show that the resulting state is highly chaotic and transports angular momentum outwards. Later Klahr (2004) performed a local 2D linear stability analysis of a radially stratified flow with constant surface density and showed that baroclinic perturbations can grow transiently during a limited time interval. Johnson and Gammie (2005) derived analytic solutions for 3D linear perturbations in a radially stratified discs in the Boussinesq approximation. They find that leading and trailing waves are characterized by positive and negative angular momentum flux, respectively. Later Johnson and Gammie (2006) performed numerical simulations, in the local shearing sheet model, to test the radial convective stability and the effects of baroclinic perturbations. They found no substantial instability due to the radial stratification. This result reveals a controversy over the issue of baroclinic instability. Presently, it seems that nonlinear baroclinic instability is an unlikely development in the local dynamics of sub-Keplerian discs with weak radial stratification.
Potential vorticity production, and the formation and development of vortices in radially stratified discs have been studied by Petersen et al. (2007a,b) by using pseudospectral simulations in the anelastic approximation. They show that the existence of thermal perturbations in the radially stratified disc flows leads to the formation of vortices. Moreover, stronger vortices appear in discs with higher temperature perturbations or in simulations with higher Reynolds numbers, and the transport of angular momentum may be both outward and inward.
Keplerian differential rotation in the disc is characterized by a strong velocity shear in the radial direction. It is known that shear flows are non-normal and exhibit a number of transient phenomena due to the non-orthogonal nature of the operators (see e.g. Trefethen et al. 1993). In fact, the studies described above did not take into account the possibility of mode coupling and energy transfer between different modes due to the shear flow induced mode conversion. Mode coupling is inherent to shear flows (cf. Chagelishvili et al. 1995) and often, in many respects, defines the role of perturbation modes in the system dynamics and the further development of nonlinear processes. Thus, a correct understanding of the energy exchange channels between different modes in the linear regime is vital for a correct understanding of the nonlinear phenomena.
Indications of shear induced mode conversion can be found in a number of previous studies. Barranco and Marcus (2005) report that vortices are able to excite inertial gravity waves during 3D spectral simulations. Brandenburg and Dintrans (2006) have studied the linear dynamics of perturbation SFH to analyze non-axisymmetric stability in the shearing sheet approximation. The temporal evolution of the perturbation gain factors reveals a wave nature after the radial wavenumber changes sign. Compressible waves are present, along with vortical perturbations, in the simulations by Johnson & Gammie (2005b), but their origin is not specifically discussed.
In parallel, there are a number of papers that focus on the investigation of shear induced mode coupling phenomena. The study of the linear coupling of modes in Keplerian flows has been conducted in the local shearing sheet approximation (Tevzadze et al. 2003, 2008) as well as in 2D global numerical simulations (Bodo et al. 2005, hereafter B05). Tevzadze et al. (2003) studied the linear dynamics of three-dimensional small scale perturbations (with characteristic scales much less than the disc thickness) in vertically (stably) stratified Keplerian discs. They show that vortex and internal gravity wave modes are coupled efficiently. B05 performed global numerical simulations of the linear dynamics of initially imposed two-dimensional pure vortex mode perturbations in compressible Keplerian discs with constant background pressure and density. The two modes possible in this system are effectively coupled: vortex mode perturbations are able to generate density-spiral waves. The coupling is, however, strongly asymmetric: it is effective for wave generation by vortices, but not vice versa. The resulting dynamical picture points out the importance of mode coupling and the necessity of considering compressibility effects for processes with characteristic scales of the order of or larger than the disc thickness. Bodo et al. (2007) extended this work to nonlinear amplitudes and found that mode coupling is an efficient channel for energy exchange and is not an artifact of the linear analysis. B05 is particularly relevant to the present study, since it studies the dynamics of mode coupling in 2D unstratified flows and is a good starting point for a further extension to radially stratified flows. Later, Heinemann & Papaloizou (2009a) derived WKBJ solutions of the generated waves and performed numerical simulations of the wave excitation by turbulent fluctuations (Heinemann & Papaloizou 2009b).
In the present paper we study the linear dynamics of perturbations and analyze shear flow induced mode coupling in the local shearing sheet approximation. We investigate the properties of mode coupling using qualitative analysis within the three-mode approximation. Within this approximation we tentatively distinguish vorticity, entropy and pressure modes. Quantitative results on mode conversion are derived numerically. It seems that a weak radial stratification, while being a weak factor for the disc stability, still provides an additional degree of freedom (an active entropy mode), opening new options for velocity shear induced mode conversion that may be important for the system behavior. One of the direct results of mode conversion is the possibility of linear generation of the vortex mode (i.e., potential vorticity) by compressible perturbations. We want to stress the possibility of coupling between high and low frequency perturbations, considering that high frequency oscillations have often been neglected in previous investigations, in particular for protoplanetary discs.
Conventionally there are two distinct viewpoints commonly employed in the investigation of hydrodynamic astrophysical discs. In one case (self-gravitating galactic discs) the emphasis is placed on the dynamics of spiral-density waves, while vortices, although normally present in numerical simulations, are thought to play a minor role in the overall dynamics. In the other case (non-self-gravitating hydrodynamic discs) the focus is on potential vorticity perturbations, and density-spiral waves are often thought to play a minor role. Here, discussing the possible (multi) mode couplings, we want to draw attention to the possible flaws of these simplified views (see e.g. Mamatsashvili & Chagelishvili 2007). In many cases, mode coupling makes different perturbations participate equally in the dynamical processes despite a significant difference in their temporal scales.
In the next section we present the mathematical formalism of our study. We describe the three-mode formalism and give a schematic picture of the linear mode coupling in the radially sheared and stratified flow. Numerical analysis of the mode coupling is presented in Sec. 3, where we evaluate mode coupling efficiencies for different radial stratification scales of the equilibrium pressure and entropy. The paper is summarized in Sec. 4.
Basic equations
===============
The governing ideal hydrodynamic equations of a two-dimensional, compressible disc flows in polar coordinates are: $${\partial \Sigma \over \partial t} + {1 \over r} {\partial \left( r
\Sigma V_r \right) \over
\partial r} + {1 \over r} {\partial \left( \Sigma V_\phi \right)\over
\partial \phi} = 0~,~~~~~~~~~~~~~~~~~~~~~~$$ $${\partial V_r \over \partial t} + V_r{\partial V_r \over
\partial r}+ {V_\phi \over r}{\partial V_r \over
\partial \phi} - {V_\phi^2 \over r} = -{1 \over \Sigma}
{\partial P \over \partial r} - {\partial \psi_g \over \partial r}
~,~~~~~~$$ $${\partial V_\phi \over \partial t} + V_r{\partial V_\phi \over
\partial r}+ {V_\phi \over r}{\partial V_\phi \over
\partial \phi} +
{V_r V_\phi \over r} = -{1 \over \Sigma r}{\partial P\over
\partial \phi}~,~~~~~~~~~$$ $${\partial P \over \partial t} + V_r{\partial P \over
\partial r}+ {V_\phi \over r}{\partial P \over
\partial \phi}
= - {\gamma P} \left( {1 \over r} {\partial (r V_r) \over
\partial r} + {1 \over r} {\partial V_\phi \over \partial
\phi} \right)~,$$ where $V_r$ and $V_\phi$ are the flow radial and azimuthal velocities respectively. $P(r,\phi)$, $\Sigma(r,\phi)$ and $\gamma~$ are respectively the pressure, the surface density and the adiabatic index. $\psi_g$ is the gravitational potential of the central mass, in the absence of self-gravitation $~(\psi_g \sim -{1 / r})$. This potential determines the Keplerian angular velocity: $${\partial \psi_g \over \partial r} = \Omega_{Kep}^2 r ~,~~~~
\Omega_{Kep} \sim r^{-3/2};$$
Equilibrium state
-----------------
We consider an axisymmetric $(\partial / \partial \phi \equiv 0),~$ azimuthal $(\bar {V}_{r} = 0)~$ and differentially rotating basic flow: $\bar {V}_{\phi}= \Omega(r)r$. In the 2D radially stratified equilibrium (see Klahr 2004), all variables are assumed to follow a simple power law behavior: $$\bar {\Sigma}(r) = \Sigma_0 \left( {r \over r_0}
\right)^{-\beta_\Sigma},~~~~\bar {P}(r) = P_0\left( {r \over r_0}
\right)^{-\beta_P} ~,$$ where overbars denote equilibrium and $\Sigma_0$ and $P_0$ are the values of the equilibrium surface density and pressure at some fiducial radius $r = r_0$. The entropy can be calculated as: $$\bar S = \bar P \bar \Sigma^{-\gamma} = P_0 \Sigma_0^{-\gamma} \left(r \over r_0
\right)^{-\beta_S} ~,$$ where $$\beta_S \equiv \beta_P - \gamma \beta_\Sigma ~.$$ $S$ is sometimes called the potential temperature, while the physical entropy can be recovered as $C_V \log S$ up to an additive constant.
This equilibrium shows a deviation from the Keplerian profile due to the radial stratification: $$\Delta \Omega^2(r) = \Omega^2(r) - \Omega^2_{Kep}= {1 \over r {\bar
{\Sigma}(r)}} {\partial {\bar{P}(r)}\over \partial r} =
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ = - {P_0
\over \Sigma_0} {\beta_P \over r_0^2}\left( {r \over r_0}
\right)^{\beta_\Sigma-\beta_P-2} ~.$$ Hence, the described state is sub-Keplerian or super-Keplerian when the radial gradient of pressure is negative ($\beta_P>0$) or positive ($\beta_P<0$), respectively. Although these discs are non-Keplerian, they are still rotationally supported, since the deviation from the Keplerian profile is small: $~\Delta
\Omega^2(r)\ll \Omega^2_{Kep}$.
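To make this smallness explicit, note that $c_s^2(r)=\gamma\bar{P}(r)/\bar{\Sigma}(r)$, so that with the usual thin-disc estimate $H\sim c_s/\Omega_{Kep}$ (an order-of-magnitude argument only, not used elsewhere in the derivation) $$\left|{\Delta \Omega^2(r) \over \Omega^2_{Kep}}\right| \simeq {|\beta_P| \over \gamma}\,{c_s^2(r) \over \Omega^2_{Kep}\, r^2} \simeq {|\beta_P| \over \gamma}\left({H \over r}\right)^2 \ll 1$$ for moderate values of $\beta_P$ in a thin disc.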
Linear perturbations
--------------------
We split the physical variables into mean and perturbed parts: $$\Sigma(r,\phi) = {\bar {\Sigma}(r)} + {{\Sigma}^\prime(r,\phi)} ~,$$ $$P(r,\phi) = {\bar{P}(r)} + P^\prime(r,\phi) ~,$$ $$V_r(r,\phi) = V_r^\prime(r,\phi) ~,$$ $$V_\phi(r,\phi) = \Omega(r) r + V_\phi^\prime(r,\phi) ~.$$ In order to remove background trends from the perturbations we employ the global radial power law scaling for perturbed quantities: $$\hat \Sigma(r) \equiv \left({r \over r_0}\right)^{-\delta_\Sigma}
\Sigma^\prime(r) ~,$$ $$\hat P(r) \equiv \left({r \over r_0}\right)^{-\delta_P} P^\prime(r)
~,$$ $$\hat {\bf V}(r) \equiv \left({r \over r_0}\right)^{-\delta_V} {\bf
V}^\prime(r) ~.$$
With these definitions one obtains the following dynamical equations for the scaled perturbed variables: $$\left\{ {\partial \over \partial t} + \Omega(r) {\partial \over
\partial \phi} \right\} {\hat \Sigma \over \Sigma_0} +
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ $$\left( {r \over r_0} \right)^{-\beta_\Sigma-\delta_\Sigma+\delta_V}
\left[ {\partial \hat V_r \over \partial r} + {1 \over r} {\partial
\hat V_\phi \over
\partial \phi} + {1+\delta_V-\beta_\Sigma \over r} \hat V_r \right]
= 0 ~,$$ $$\left\{ {\partial \over \partial t} + \Omega(r) {\partial \over
\partial \phi} \right\} \hat V_r - 2\Omega(r) \hat V_\phi +$$ $${c_s^2 \over \gamma} \left({r \over r_0}
\right)^{\beta_\Sigma+\delta_P-\delta_V} {\partial \over \partial r}
{\hat P \over P_0} + c_s^2 {\delta_P \over \gamma r_0} \left( {r
\over r_0} \right)^{\beta_\Sigma+\delta_P-\delta_V-1} {\hat P \over
P_0} +$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ c_s^2 {\beta_P \over \gamma r_0}
\left( {r \over r_0}
\right)^{2\beta_\Sigma+\delta_\Sigma-\beta_P-\delta_V-1} {\hat
\Sigma \over \Sigma_0} = 0 ~,$$ $$\left\{ {\partial \over \partial t} + \Omega(r) {\partial \over
\partial \phi} \right\} \hat V_\phi + \left( 2 \Omega(r) +
r {\partial \Omega(r) \over \partial r} \right) \hat V_r +
~~~~~~~~~~~~~~~$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {c_s^2 \over \gamma r_0}
\left( {r \over r_0} \right)^{\beta_\Sigma+\delta_P-\delta_V-1}
{\partial \over \partial \phi} {\hat P \over P_0} = 0 ~,$$ $$\left\{ {\partial \over \partial t} + \Omega(r) {\partial \over
\partial \phi} \right\} {\hat P \over P_0} +$$ $$\gamma \left( {r \over r_0} \right)^{-\beta_P+\delta_V-\delta_P}
\left[ {\partial \hat V_r \over
\partial r} + {1 \over r} {\partial \hat V_\phi \over \partial
\phi} + {1+\delta_V-\beta_P/\gamma \over r} \hat V_r \right] = 0 ~,$$ where $c_s^2 = \gamma P_0/\Sigma_0$ is the squared sound speed at $r=r_0$.
Local approximation
-------------------
The linear dynamics of perturbations in differentially rotating flows can be effectively analyzed in the local co-rotating shearing sheet frame (e.g., Goldreich & Lynden-Bell 1965; Goldreich & Tremaine 1978). This approximation simplifies the mathematical description of flows with inhomogeneous velocity. In radially stratified flows the spatial inhomogeneity of the governing equations comes not only from the equilibrium velocity, but from the pressure, density and entropy profiles as well. In this case we first re-scale the perturbations in the global frame in order to remove background trends from the linear perturbations, rather than use the complete form of the perturbations to the equilibrium (see Eqs. 14-16). Hence, using the re-scaled linear perturbations ($\hat P$, $\hat
\Sigma$, $\hat {\rm \bf V}$) we may simplify the local shearing sheet description as follows. The introduction of a local Cartesian co-ordinate system: $$x \equiv r - r_0~,~~~~ y \equiv r_0 (\phi - \Omega_0 t)~,~~~~{x
\over r_0} ,~ {y \over r_0} \ll 1~,$$ $${\partial \over \partial x} = {\partial \over \partial r}~,~~~
{\partial \over \partial y} = {1 \over r_0}{\partial \over
\partial \phi}~,~~~ {\partial \over \partial t} =
{\partial \over \partial t} - r_0 \Omega_0 {\partial \over
\partial y},$$ where $\Omega_0$ is the local rotation angular velocity at $r=r_0$, transforms global differential rotation into a local radial shear flow and the two Oort constants define the local shear rate: $$A \equiv {1 \over 2} r_0 \left[ {\partial \Omega(r) \over \partial
r}\right]_{r=r_0}~,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ $$B \equiv - {1 \over 2} \left[ r{\partial \Omega(r)\over \partial r}
+ 2\Omega(r) \right]_{r=r_0}= -A - \Omega_0~.$$ Hence, the equations describing the linear dynamics of perturbations in local approximation read as follows: $$\left\{ {\partial \over \partial t} + 2Ax {\partial \over
\partial y} \right\} {\hat P \over \gamma P_0} +$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left[ {\partial \hat V_x \over
\partial x} + {\partial \hat V_y \over \partial y} +
{1+\delta_V-\beta_P/\gamma \over r_0} \hat V_x \right] = 0 ~,$$ $$\left\{ {\partial \over \partial t} + 2Ax {\partial \over
\partial y} \right\} {\hat V_x} - 2\Omega_0 \hat V_y +$$ $$~~~~~~~~~~~~~~~~~~~~~ c_s^2 \left[ {\partial \over \partial x} {\hat
P \over \gamma P_0} + {\delta_P + \beta_P/\gamma \over r_0} {\hat P
\over \gamma P_0} - {\beta_P \over \gamma r_0} {\hat S \over \gamma
P_0}\right] = 0 ~,$$ $$\left\{ {\partial \over \partial t} + 2Ax {\partial \over
\partial y} \right\} {\hat V_y} - 2B \hat V_x + c_s^2
{\partial \over \partial y} {\hat P \over \gamma P_0} =0 ~,$$ $$\left\{ {\partial \over \partial t} + 2Ax {\partial \over
\partial y} \right\} {\hat S \over \gamma P_0} - {\beta_S
\over \gamma r_0} \hat V_x = 0 ~,$$ where $\hat S $ is the entropy perturbation: $$\hat S \equiv \hat P - c_s^2 \hat \Sigma ~.$$ Now we may adjust the global scaling law of perturbations in order to simplify the local shearing sheet description (see Eqs. 25,26): $$1 + \delta_V - \beta_P/\gamma = 0 ~,$$ $$\delta_P + \beta_P/\gamma = 0 ~.$$
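As a quick worked example (a special case recorded only for orientation, not an additional assumption of the model), for strictly Keplerian rotation $\Omega(r)\propto r^{-3/2}$ the Oort constants reduce to $$A = {1 \over 2}\, r_0 \left[{\partial \Omega \over \partial r}\right]_{r=r_0} = -{3 \over 4}\,\Omega_0~, ~~~~ B = -A-\Omega_0 = -{1 \over 4}\,\Omega_0~,$$ so that $-4B\Omega_0=\Omega_0^2$ recovers the Keplerian epicyclic frequency, while the scaling conditions above fix the exponents as $\delta_V = \beta_P/\gamma - 1$ and $\delta_P = -\beta_P/\gamma$.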
Let us introduce spatial Fourier harmonics (SFHs) of perturbations with time dependent phases: $$\left( \begin{array}{c} {\hat V}_x({\bf r},t) \\ {\hat V}_y({\bf r},t) \\
{{\hat P}({\bf r},t) / \gamma P_0} \\ {{\hat S}({\bf r},t) / \gamma
P_0} \end{array} \right) =
\left( \begin{array}{r} u_x({\bf k}(t),t) \\ u_y({\bf k}(t),t) \\
-{\rm i} p({\bf k}(t),t) \\ s({\bf k}(t),t)
\end{array} \right) \times$$ $$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\exp \left( {\rm i} k_x(t) x + {\rm i} k_y y \right) ~,$$ with $$k_x(t) = k_x(0) - 2Ak_y t~.$$
Using the above expansion and Eqs. (27-30), we obtain a compact ODE system that governs the local dynamics of SFHs of perturbations: $${{\rm d} \over {\rm d} t} p - k_x(t) u_x - k_y u_y = 0 ~,$$ $${{\rm d} \over {\rm d} t} u_x - 2 \Omega_0 u_y + c_s^2 k_x(t) p -
c_s^2 k_P s = 0 ~,$$ $${{\rm d} \over {\rm d} t} u_y - 2 B u_x + c_s^2 k_y p = 0 ~,$$ $${{\rm d} \over {\rm d} t} s - k_S u_x = 0 ~.$$ where $$k_P = {\beta_P \over \gamma r_0} ~~~ k_S = {\beta_S \over \gamma
r_0} ~.$$ The potential vorticity: $$W \equiv k_x(t)u_y - k_y u_x - 2B p ~,$$ is a conserved quantity in barotropic flows: $W = {\rm const.}$ when $k_P=0$.
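The system above is compact enough to be integrated directly. A minimal numerical sketch (in Python) is given below; the code units $c_s=\Omega_0=1$, the Keplerian Oort constants and the wavenumber values are illustrative assumptions, not the parameters of the runs presented later. It verifies that $W$ stays constant in the barotropic case $k_P=0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not from the paper): code units with c_s = Omega_0 = 1
# and Keplerian Oort constants A = -3/4, B = -1/4.
cs, Omega0 = 1.0, 1.0
A, B = -0.75 * Omega0, -0.25 * Omega0
ky, kx0 = 2.0, -30.0        # azimuthal and initial radial wavenumbers
kP, kS = 0.0, 0.0           # barotropic case; set kP, kS > 0 to activate baroclinic coupling

def kx(t):
    """Drifting radial wavenumber k_x(t) = k_x(0) - 2 A k_y t."""
    return kx0 - 2.0 * A * ky * t

def rhs(t, y):
    """Right-hand side of Eqs. (34)-(37) for y = (p, u_x, u_y, s)."""
    p, ux, uy, s = y
    dp  = kx(t) * ux + ky * uy
    dux = 2.0 * Omega0 * uy - cs**2 * kx(t) * p + cs**2 * kP * s
    duy = 2.0 * B * ux - cs**2 * ky * p
    ds  = kS * ux
    return [dp, dux, duy, ds]

def potential_vorticity(t, y):
    """W = k_x(t) u_y - k_y u_x - 2 B p, conserved when k_P = 0."""
    p, ux, uy, s = y
    return kx(t) * uy - ky * ux - 2.0 * B * p

y0 = [0.0, 0.0, 1.0, 0.0]   # an arbitrary kinematic initial perturbation
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-9, atol=1e-12)
W = potential_vorticity(sol.t, sol.y)
print("relative drift of W:", abs(W[-1] - W[0]) / abs(W[0]))
```

Replacing the initial state with the mode-specific conditions of Appendix A turns this sketch into the setup used for the numerical experiments of Sec. 3.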
Perturbations at rigid rotation
-------------------------------
The dispersion equation of our system can be obtained in the shearless limit ($A=0$, $B=-\Omega$). Hence, using Fourier expansion of perturbations in time $\propto \exp({\rm i} \omega t)$, in the shearless limit, we obtain: $$\omega^4 - \left( c_s^2 k^2 + 4 \Omega_0^2 - c_s^2 \eta \right)
\omega^2 - c_s^4 \eta k_y^2 = 0 ~,$$ where $$\eta \equiv k_P k_S = {\beta_P \beta_S \over \gamma^2 r_0^2} ~.$$ Solutions of the Eq. (40) describe a compressible density-spiral mode and a convective mode that involves perturbations of entropy and potential vorticity. For weakly stratified discs $(\eta \ll
k^2)$, we find the frequencies are: $$\bar \omega_{p}^2 = c_s^2 k^2 + 4 \Omega_0^2 ~,$$ $$\bar \omega_{c}^2 = - {c_s^4 \eta k_y^2 \over c_s^2 k^2 + 4
\Omega_0^2} ~.$$ High frequency solutions ($\bar \omega_{p}^2$) describe the density-spiral waves and will be referred to later as the P-modes. Low frequency solutions ($\bar \omega_{c}^2$), instead, describe a radial buoyancy mode due to the stratification. In barotropic flows ($\eta=0$) this mode degenerates into a stationary zero-frequency vortical solution. Therefore, we may refer to it as a baroclinic mode. The mode describes instability when $\eta>0$. In this case the equilibrium pressure and entropy gradients point in the same direction. Klahr (2004) anticipated this result, although he worked in the constant surface density limit ($\beta_\Sigma=0$). The same behavior has been obtained for axisymmetric perturbations in Johnson and Gammie (2005). In contrast, in our model baroclinic perturbations are intrinsically non-axisymmetric. Hence, our result obtained in the rigidly rotating limit shows that the local exponential instability of the radial baroclinic mode is governed by the Schwarzschild-Ledoux criterion: $${{\rm d} \bar P \over {\rm d} r} {{\rm d} \bar S \over {\rm d} r}
> 0 ~.$$
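Both branches follow directly from Eq. (40): viewed as a quadratic in $\omega^2$, its roots satisfy $$\bar \omega_{p}^2 + \bar \omega_{c}^2 = c_s^2 k^2 + 4 \Omega_0^2 - c_s^2 \eta ~, ~~~~ \bar \omega_{p}^2\, \bar \omega_{c}^2 = - c_s^4 \eta k_y^2 ~,$$ so for $\eta \ll k^2$ the large root is given, to leading order, by the sum of the roots and the small root by the product divided by the large root, which reproduces the two expressions above.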
The dynamics of linear modes can be described using the modal equations for the eigenfunctions: $$\left\{ {{\rm d}^2 \over {\rm d} t^2} + \bar \omega_{p,c}^2 \right\}
\Phi_{p,c}(t) = 0 ~,$$ where $\Phi_p(t)$ and $\Phi_c(t)$ are the eigenfunctions of the pressure and convective (baroclinic) modes, respectively. The form of these functions can be derived from Eqs. (34-41) in the shearless limit: $$\Phi_{p,c}(t) = (\bar \omega_{p,c}^2+c_s^2 \eta) p(t) - 2 \Omega_0
W(t) - c_s^2 k_P k_x s(t) ~.$$ All physical variables in our system ($p$, $u_x$, $u_y$, $s$) can be expressed by the two modal eigenfunctions and their first time derivatives ($\Phi_{p,c}$, $\Phi_{p,c}^\prime$). Hence, we can fully derive the perturbation field of a specific mode individually by setting the eigenfunction of the other mode equal to zero.
As we will see later, the Keplerian shear leads to the degeneracy of the convective buoyancy mode. In this case only the shear modified density-spiral wave mode eigenfunction can be employed in the analysis.
Perturbations in shear flow: mode coupling
------------------------------------------
It is well known that velocity shear introduces non-normality into the governing equations that significantly affects the dynamics of different perturbations. In this case we benefit from the shearing sheet transformation and seek the solutions in the form of the so-called Kelvin modes. These originate from the vortical solutions derived in the seminal paper by Kelvin (1887). In fact, as has been argued more recently (see, e.g., Volponi and Yoshida 2002), the shearing sheet transformation leads to some sort of generalized modal approach. Shear modes arising in such a description differ from linear modes with exponential time dependence in many respects. Primarily, the phases of these continuous spectrum shear modes vary in time through the shearing wavenumber; their amplitudes can be time dependent; and, most importantly, they can couple in limited time intervals. On the other hand, shear modes can be well separated asymptotically, where the analytic WKBJ solution for each mode becomes increasingly accurate. In the following, we will simply refer to these shearing sheet solutions as “modes”.
The character of the shear flow effects depends significantly on the value of the velocity shear parameter. To estimate the time-scales of the processes we compare the characteristic frequencies of the linear modes $|\bar \omega_p|$, $|\bar \omega_c|$ and the velocity shear $|A|$. In order to speak about the modification of a linear mode by the velocity shear, the basic frequency of the mode should be higher than the one set by the shear itself: $\omega^2 > A^2$. Otherwise the modal solution cannot be used to calculate the perturbation dynamics, since the perturbations will obey the shear induced variations on shorter timescales.
In quasi-Keplerian differentially rotating discs with weak radial stratification: $$\bar \omega_p^2 \gg A^2 ~~~~ {\rm and} ~~~ \bar \omega_{c}^2 \ll
A^2 ~, {~~~ \rm when ~~~} {\beta_P \beta_S \over \gamma^2} \ll 1 ~.$$ In this case the convective mode diverges from its modal behavior and is strongly affected by the velocity shear: the thermal and kinematic parts obey shear driven dynamics individually. Therefore, we tentatively distinguish shear driven vorticity (W) and entropy (S) modes. In contrast, the high frequency pressure mode is only modified by the action of the background shear. Hence, we adopt the above described three-mode (S, W and P) formalism as the framework for our further study.
For the description of the P mode in differential rotation we define the function: $$\Psi_p(t) = \omega_p^2(t) p(t) - 2\Omega_0 W(t) - c_s^2 k_P k_x(t)
s(t) ~,$$ where $$\omega_p^2(t) = c_s^2 k^2(t) - 4B \Omega_0 ~.$$ This can be considered a generalization of the $\Phi_p(t)$ eigenfunction to the case of shear flow, obtained by accounting for the temporal variation of the radial wavenumber.
In order to analyze the mode coupling in the considered limit, we rewrite Eqs. (34-39) as follows: $$\left\{ {{\rm d}^2 \over {\rm d} t^2} + f_p {{\rm d} \over {\rm d}
t} + \omega_p^2 - \Delta \omega_p^2 \right\} \Psi_p = \chi_{pw} W +
\chi_{ps} s ~,$$ $$\left\{ {{\rm d} \over {\rm d} t} + f_s \right\} s = \chi_{s p 1}
{{\rm d} \Psi_p \over {\rm d} t} + \chi_{s p 2} \Psi_p + \chi_{s w}
W ~,$$ $${{\rm d} W \over {\rm d} t} = \chi_{ws} s ~,$$ where $f_p$ and $\Delta \omega_p^2$ describe the shear flow induced modification to the P-mode $$f_p = 4 A { k_x k_y \over k^2} - 2 {(\omega_p^2)^\prime \over
\omega_p^2 } ~,$$ $$\Delta \omega_p^2 = {(\omega_p^2)^{\prime \prime} \over \omega_p^2}
+ f_p {(\omega_p^2)^\prime \over \omega_p^2 } + 8AB{k_y^2 \over k^2}
~,$$ parameter $f_s$ describes the modification to the entropy mode $$f_s = c_s^2 \eta {k_x^2 (\omega_p^2)^\prime \over k^2 \omega_p^4} ~,$$ and $\chi$ parameters describe the coupling between the different modes: $$\chi_{pw} = 2 \Omega_0 \Delta \omega_p^2(t) + 4A {k_y^2 \over k^2}
\omega_p^2 ~,$$ $$\chi_{ps} = c_s^2 k_P k_x \left( \Delta \omega_p^2 + 4B {k_y \over
k_x} {(\omega_p^2)^\prime \over \omega_p^2} - 8AB {k_y^2 \over k^2}
\right) ~,$$ $$\chi_{s p 1} = {k_S k_x \over k^2 \omega_p^2 }~,$$ $$\chi_{s p 2} = -{k_S k_x \over k^2 \omega_p^2 } \left(
{(\omega_p^2)^\prime \over \omega_p^2} + 2B {k_y \over k_x} \right)
~,$$ $$\chi_{s w} = -{2 \Omega k_S k_x \over k^2 \omega_p^2} \left(
{(\omega_p^2)^\prime \over \omega_p^2} + 2B {k_y \over k_x} + {k_y
\omega_p^2 \over 2 \Omega k_x } \right) ~,$$ $$\chi_{w s} = -c_s^2 k_P k_y ~.$$ Here prime denotes temporal derivative.
Equations (50-52) describe the linear dynamics of the modes and their coupling in the considered three-mode model. In this limit, our interpretation is that the homogeneous parts of the equations describe the individual dynamics of the modes, while the right hand side terms act as source terms and describe the mode coupling. This tentative separation is already fruitful for a qualitative description of the mode coupling.
The dynamics of the density-spiral wave mode in differential rotation is described by the homogeneous part of Eq. (50). The homogeneous part of Eq. (51) describes the modifications to the entropy dynamics. The inhomogeneous parts of Eqs. (50-52) reveal the coupling terms between the three linear modes that originate from the background velocity shear and radial stratification. We analyze the mode coupling dynamics numerically, but use the coupling $\chi$ coefficients for a qualitative description.
A sketch of the mode coupling in the above described three-mode approximation can be seen in Fig. \[coupling\]. The figure reveals a complex picture of three-mode coupling that originates from the combined action of velocity shear and radial stratification.
The temporal variation of the coupling coefficients during the swing of the perturbation SFHs from leading to trailing phases is shown in Fig. \[chi\]. The relative amplitudes of the $\chi_{pw}$ and $\chi_{ps}$ parameters reveal that potential vorticity is a somewhat more effective source of P mode perturbations when compared to the entropy mode. On the other hand, it seems that S mode excitation sources due to potential vorticity ($\chi_{sw}$) can be stronger when compared with the P-mode sources ($\chi_{sp1}$, $\chi_{sp2}$).
The effect of the stratification parameters on the mode coupling is somewhat more apparent. First, we may conclude that the excitation of the entropy mode, which depends on the parameters $\chi_{sp1}$, $\chi_{sp2}$ and $\chi_{sw}$, is generally a stronger process for higher entropy stratification scales $k_S$ (see Eqs. 58-60). Second, we see that the generation of the potential vorticity, which depends on the $\chi_{ws}$ parameter, proceeds more effectively at high pressure stratification scales $k_P$. And third, we see a profound asymmetry in the three-mode coupling: the P-mode is not coupled with the W-mode [*directly*]{}.
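The qualitative statements above can be checked directly from the expressions for $\omega_p^2(t)$, $f_p$, $\Delta \omega_p^2$ and the $\chi$ parameters. The following sketch (Python, with the illustrative assumptions $c_s=\Omega_0=1$, hence $H=c_s/\Omega_0=1$, and Keplerian Oort constants) evaluates the coupling coefficients over a swing of $k_x(t)/k_y$ through zero, as in Fig. \[chi\], and prints their peak magnitudes.

```python
import numpy as np

# Assumed illustrative units: c_s = Omega_0 = 1, so H = c_s/Omega_0 = 1; Keplerian A, B.
cs, Omega0 = 1.0, 1.0
A, B = -0.75 * Omega0, -0.25 * Omega0
ky, kP, kS = 1.0, 0.5, 0.5                  # k_y = H^-1, k_P = k_S = 0.5 H^-1 (as in Fig. 2)

t = np.linspace(0.0, 4.0, 2000) / Omega0    # interval Delta t = 4 / Omega_0
kx = -2.0 * A * ky * (t - 2.0 / Omega0)     # k_x(t) crosses zero in mid-interval
k2 = kx**2 + ky**2
wp2 = cs**2 * k2 - 4.0 * B * Omega0         # omega_p^2(t)
dwp2 = -4.0 * A * ky * cs**2 * kx           # (omega_p^2)'
ddwp2 = 8.0 * A**2 * ky**2 * cs**2          # (omega_p^2)''

f_p = 4.0 * A * kx * ky / k2 - 2.0 * dwp2 / wp2
Dwp2 = ddwp2 / wp2 + f_p * dwp2 / wp2 + 8.0 * A * B * ky**2 / k2

chi = {
    "pw": 2.0 * Omega0 * Dwp2 + 4.0 * A * ky**2 / k2 * wp2,
    "ps": cs**2 * kP * kx * (Dwp2 + 4.0 * B * (ky / kx) * dwp2 / wp2
                             - 8.0 * A * B * ky**2 / k2),
    "sp1": kS * kx / (k2 * wp2),
    "sp2": -kS * kx / (k2 * wp2) * (dwp2 / wp2 + 2.0 * B * ky / kx),
    "sw": -2.0 * Omega0 * kS * kx / (k2 * wp2) * (dwp2 / wp2 + 2.0 * B * ky / kx
                                                  + ky * wp2 / (2.0 * Omega0 * kx)),
    "ws": -cs**2 * kP * ky * np.ones_like(t),
}
for name, values in chi.items():
    print("max |chi_%s| = %.3g" % (name, np.max(np.abs(values))))
```

Only the relative magnitudes of the $\chi$ parameters are meaningful in this sketch; the grid is chosen so that $k_x(t)$ never vanishes exactly, avoiding the removable $1/k_x$ singularities in the coded expressions.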
A quantitative estimate of the mode excitation parameters can be obtained numerically. In this case, the amplitudes of the generated W and S modes can be estimated through the values of potential vorticity or entropy outside the coupling area. In order to quantify the second order P-mode dynamics we define its modal energy as follows: $$E_P(t) \equiv |\Psi_p(t)^\prime|^2 + \omega_p(t)^2 |\Psi_p(t)|^2 ~.$$ This quadratic form is a good approximation to the P-mode energy in the areas where the mode obeys adiabatic dynamics: $k_x(t)/k_y \gg 1$.
The presented qualitative analysis suggests that perturbations of the density-spiral waves can generate entropy perturbations not only due to the flow viscosity (not included in our formalism), but also kinematically, due to the velocity shear induced mode coupling. The generated entropy perturbations should further excite potential vorticity through baroclinic coupling. Hence, it seems that in baroclinic flows, contrary to the barotropic case, P-mode perturbations are able to generate potential vorticity through a three-mode coupling mechanism: P $\to$ S $\to$ W. We believe that traces of the described mode coupling can also be seen in Klahr (2004), where the process was not fully resolved due to the numerical filters used to remove higher frequency oscillations.
![ Mode coupling scheme. In the zero shear limit the two second order modes, the P-mode and the buoyancy mode with eigenfunctions $\Phi_p$ and $\Phi_c$, are uncoupled. In the shear flow, when the characteristic time of shearing is shorter than the buoyancy mode temporal variation scale ($A^2 > \bar \omega_c^2$), we use the three-mode formalism. In this limit we consider the coupling of the P, W, and S modes. The $\chi$ parameters describe the strength of the coupling channels. The asymmetry of the mode coupling is revealed in the fact that compressible oscillations of the pressure mode are not able to directly generate potential vorticity, but still do so via interaction with the S-mode and its further baroclinic ties with the W-mode.[]{data-label="coupling"}](Coupling.eps){width="80mm"}
![The coupling $\chi$ parameters vs. the ratio of radial to azimuthal wavenumbers $k_x(t)/k_y$ as this ratio passes through zero during the time interval $\Delta t = 4 \Omega_0^{-1}$. Here $k_y = H^{-1}$, $k_P = k_S = 0.5 H^{-1}$. []{data-label="chi"}](chi.eps){width="80mm"}
Numerical Results
=================
In order to study the mode coupling dynamics in more detail we employ numerical solutions of Eqs. (34-37). We impose initial conditions that correspond to one of the three modes and use a standard Runge-Kutta scheme for the numerical integration (MATLAB ode34 RK implementation). Perturbations corresponding to the individual modes at the initial point in time are derived in Appendix A.
W-mode: direct coupling with S and P-modes
------------------------------------------
In this subsection we consider the dynamics of the SFH when only perturbations of the potential vorticity are imposed initially. As is known from previous studies (see Chagelishvili et al. 1997, Bodo et al. 2005), vorticity perturbations are able to excite acoustic modes nonadiabatically in the vicinity of the area where $k_x(t)=0$. Here we observe a similar, but more complex, behavior of mode coupling. The W-mode is able to generate P and S-modes simultaneously. Fig. \[SFH\_w1\] shows the evolution of the W-mode perturbations in a flow with growing baroclinic perturbations ($\eta>0$). The results show the excitation of both S and P-mode perturbations due to mode coupling that occurs in a short period of time in the vicinity of $t=10\,\Omega_0^{-1}$. The subsequent growth of the negative potential vorticity is due to the baroclinic coupling of entropy and potential vorticity perturbations.
Fig. \[SFH\_w2\] shows the evolution of the potential vorticity SFH in flows with negative $\eta$. After the mode coupling and the generation of P and S-modes, we observe a decrease of the potential vorticity. This represents the well known fact that stable stratification (positive Richardson number) can play the role of a “baroclinic viscosity” for the vorticity perturbations.
Numerical calculations show that the efficiency of the mode coupling generally decreases as we increase the azimuthal wavenumber $k_y$ corresponding to an increase of the density-spiral wave frequency: lower frequency waves couple more efficiently.
To test the effect of background stratification parameters on the mode coupling, we calculate the amplitude of the entropy and the energy of the P-mode perturbations generated in flows with different pressure and entropy stratification scales. The amplitudes are calculated after a $10 \Omega_0^{-1}$ time interval from the change in sign of the radial wave-number. In this case, modes are well isolated and the energy of the P mode can be well defined.
Fig. \[surf\_w\] shows the results of these calculations. It seems that the mode coupling efficiency is higher for stronger radial gradients. In particular, the numerical results generally verify our qualitative conclusion that the S-mode generation predominantly depends on the entropy stratification scale $k_S$, while the P-mode excitation is stronger for higher values of $\eta$.
![Evolution of the W-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Mode coupling occurs in the vicinity of $t=10 \Omega_0^{-1}$, where $k_x(t)=0$. Excitation of the P and S-modes is clearly seen in the panels for pressure ($P$) and entropy ($S$) perturbations. Perturbations of the potential vorticity start to grow due to the baroclinic coupling with entropy perturbations. []{data-label="SFH_w1"}](SFH_w1.eps){width="84mm"}
![Evolution of the W-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with negative $\eta$: $k_P=-0.2H^{-1}$, $k_S=0.2H^{-1}$. Interestingly, the SFH dynamics shows the decay of the potential vorticity after the mode coupling and the excitation of S- and P-modes at $t=10 \Omega_0^{-1}$. Such a decay is the normally anticipated behavior in flows that are baroclinically stable. []{data-label="SFH_w2"}](SFH_w2.eps){width="84mm"}
![Surface graph of the generated S and P-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$. Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the entropy perturbations show a stronger dependence on $k_S$ (left panel), while both entropy and pressure scales are important (approximately a $k_S k_P$ dependence) for the generation of P-modes (right panel). See the electronic edition of the journal for color images.[]{data-label="surf_w"}](surf_w.eps){width="84mm"}
S-mode: direct coupling with W and P-modes
------------------------------------------
Fig. \[SFH\_s1\] shows the evolution of the S-mode SFH in a flow with growing baroclinic perturbations. Here we observe two shear flow phenomena: mode coupling and transient amplification. Entering the nonadiabatic area (around $t = 10\,\Omega_0^{-1}$) the entropy SFH is able to generate the P-mode, while undergoing transient amplification itself. The transient growth of entropy is insubstantial and the growth rate decreases with increasing $k_y$. The W-mode is instead constantly coupled to the entropy perturbations through baroclinic forces, although higher entropy perturbations at later times give a higher rate of growth of the potential vorticity. The total energy of the perturbations is, however, dominated at the end by the P-mode.
Fig. \[surf\_s\] shows the dependence of the W and P-mode generation on the pressure and entropy stratification scales. As expected from qualitative estimates, P-mode excitation depends almost solely on the pressure stratification scale $k_P$, while the generation of potential vorticity generally grows with $\eta$.
![Evolution of the S-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Perturbations of the potential vorticity grow from the beginning due to the baroclinic coupling with entropy perturbations. Excitation of the P-mode is clearly seen in the panel for pressure ($P$), while the panel for entropy perturbations ($S$) shows swing amplification in the nonadiabatic area around $k_x(t)=0$. The change of the amplitude of the entropy SFH affects the growth rate of the potential vorticity SFH.[]{data-label="SFH_s1"}](SFH_s1.eps){width="84mm"}
![Surface graph of the generated W and P-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$. Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the potential vorticity perturbations generally grow with the baroclinic index $\eta$ (left panel), while only the pressure stratification scale $k_P$ is important for the generation of P-modes (right panel). See the electronic edition of the journal for color images.[]{data-label="surf_s"}](surf_s.eps){width="84mm"}
P-mode: direct coupling with S-mode and indirect coupling with W-mode
---------------------------------------------------------------------
Fig. \[SFH\_p1\] shows the evolution of an initially imposed P-mode SFH in a flow with growing baroclinic perturbations. The [*oscillating*]{} behavior of the entropy perturbation for $t < 10\,\Omega_0^{-1}$ is given by the P-mode. This oscillating component has a zero mean value when averaged over time-scales longer than the wave period. The existence of the [*aperiodic*]{} S-mode is instead characterized by a nonzero mean value. When the radial wavenumber $k_x(t)$ changes sign at $t = 10\,\Omega_0^{-1}$, we can observe the appearance of a nonzero mean value (marked on the plot by the horizontal dashed line), indicating that the high frequency oscillations of the P-mode are able to generate the aperiodic perturbations of the S-mode. The aperiodic part of the entropy perturbation is then able to generate potential vorticity perturbations. However, as we see from Eq. (54) and Fig. \[coupling\], there is no direct coupling between the P and W-modes. Therefore, the P-mode generates the S-mode by shear flow induced mode conversion, while the W-mode is further generated because of its baroclinic ties with the entropy SFH. We describe this situation as three-mode coupling or, in other words, indirect coupling of the P-mode to the W-mode. Note that, although the S and W-mode generation is apparent from the dynamics of the entropy and potential vorticity SFHs, energetically it plays a minor role compared to the compressible energy carried by the P-mode.
Fig. \[SFH\_p2\] shows that the P-mode generates potential vorticity with a positive sign. However, the sign of the generated potential vorticity depends on the initial phase of the P-mode. Hence, our numerical results show generation of the W-mode with either positive or negative sign.
It is also interesting to look at the P-mode dynamics in flows stable to baroclinic perturbations (see Fig. \[SFH\_p2\]). The initially imposed P-mode is able to generate the S-mode and consequently the W-mode, which gives a growth of the potential vorticity with time. Apart from the intrinsic limitations (the dependence of the sign of the generated potential vorticity on the initial phase of the P-mode and the low efficiency of the W-mode generation), this process demonstrates that potential vorticity can actually be generated in flows with positive radial buoyancy ($\eta<0$) and positive Richardson number.
Fig. \[surf\_p\] shows the dependence of the S and W-mode generation on the pressure and entropy stratification scales. In good agreement with qualitative estimates, the S-mode excitation depends strongly on the entropy stratification scale $k_S$, while the generation of the potential vorticity generally grows with $\eta$.
![Evolution of the P-mode SFH in the flow with $k_x(0)=-30H^{-1}$, $k_y=2H^{-1}$ and an equilibrium with growing baroclinic perturbations, $k_P=k_S=0.2H^{-1}$. Mode coupling occurs in the vicinity of $t=10\Omega_0^{-1}$, where the W and S-modes are excited. The amplitude of the generated aperiodic contribution to the entropy perturbation is marked by the red dotted line. Further, this component leads to the baroclinic production of potential vorticity with negative sign.[]{data-label="SFH_p1"}](SFH_p1.eps){width="84mm"}
![Same as in previous figure but for $k_P=-0.2H^{-1}$ and $
k_S=0.2H^{-1}$. Perturbations are stable to baroclinic forces. However, production of the potential vorticity with positive sign is still observed.[]{data-label="SFH_p2"}](SFH_p2.eps){width="84mm"}
![Surface graph of the generated S and W-mode amplitudes at $k_y=2H^{-1}$, $k_x(0)=-60H^{-1}$, and different values of $k_P$ and $k_S$. Initial perturbations are normalized to set $E(0)=1$. Excitation amplitudes of the entropy perturbations mainly depend on $k_S$ (left panel), while both the pressure and entropy stratification scales are important for the generation of W-mode perturbations (right panel). See the electronic edition of the journal for color images.[]{data-label="surf_p"}](surf_p.eps){width="84mm"}
Conclusion and Discussion
=========================
We have studied the dynamics of linear perturbations in a 2D, radially stratified, compressible, differentially rotating flow with different radial density, pressure and entropy gradients. We employed global radial scaling of linear perturbations and removed the algebraic modulation due to the background stratification. We derived a local dispersion equation for nonaxisymmetric perturbations and the corresponding eigenfunctions in the zero shear limit. We showed that the local stability of baroclinic perturbations of the equilibrium state is governed by the Schwarzschild-Ledoux criterion.
We studied the shear flow induced linear coupling and the related possibility of energy transfer between the different modes of perturbation using a qualitative analysis and a more detailed numerical one. We employed a three-mode formalism and described the behavior of the S, W and P-modes under the action of the baroclinic and velocity shear forces in the local approximation.
We find that the system exhibits an asymmetric coupling pattern with five energy exchange channels between the three different modes. The W-mode is coupled to the S and P-modes: perturbations of the potential vorticity are able to excite entropy and compressible modes. The amplitude of the generated S-mode grows with the increase of the entropy stratification scale of the background ($k_S$), while the amplitude of the generated P-mode perturbations grows with the increase of the background baroclinic index ($\eta$). The S-mode is coupled to the W and P-modes: the amplitude of the generated P-mode perturbations grows with the increase of the background pressure stratification scale ($k_P$), while the amplitude of the W-mode grows with the increase of the baroclinic index. The P-mode is coupled to the S-mode: the amplitude of the generated entropy perturbations grows with the increase of the background entropy stratification scale. On the other hand, there is no direct energy exchange channel from the P to the W-mode and, therefore, no direct conversion is possible. Our results, however, show that the P-mode is still able to generate the W-mode through the indirect three-mode P-S-W coupling scheme. This linear inviscid mechanism indicates that compressible perturbations are able to generate potential vorticity via aperiodic entropy perturbations.
The dynamics of radially stratified discs has already been studied by both the linear shearing sheet formalism and direct numerical simulations. However, previous studies focused on the baroclinic stability and vortex production by entropy perturbations, neglecting the coupling with higher frequency density waves.
The most vivid signature of density wave excitation in radially stratified disc flows can be seen in Klahr (2004). The numerical results presented on the linear dynamics of perturbation SFH show high frequency oscillations after the radial wavenumber changes sign. However, focusing on the energy dynamics, the author filters out high frequency oscillations from the analysis.
The purpose of numerical simulations by Johnson and Gammie (2006) was the investigation of the velocity shear effects on the radial convective stability and the possibility of the development of baroclinic instability. Therefore, no significant amount of compressible perturbations is present initially, and it is hard to judge if high frequency oscillations appear later in simulations. Petersen et al. (2007a), (2007b) employed the anelastic approximation that does not resolve the coupling of potential vorticity and entropy with density waves. Moreover, if produced, high frequency density waves soon develop into spiral shocks (see e.g., Bodo et al 2007). The anelastic gas approximation does intentionally neglect this complication and simplifies the description down to low frequency dynamics.
Numerical simulations of hydrodynamic turbulence in unstratified disc flows showed that the dominant part of turbulent energy is accumulated into the high frequency compressional waves (see, e.g., Shen et al. 2006). On the other hand, it is vortices that are thought to play a key role in hydrodynamic turbulence in accretion discs, as well as planet formation in protoplanetary disc dynamics. Therefore, any link and possible energy exchange between high frequency compressible oscillations and aperiodic vortices can be an important factor in the above described astrophysical situations.
Based on the present findings we speculate that density waves can participate in the process of the development of regular vortical structures in discs with negative radial entropy gradients. Numerical simulations have shown that thermal (entropy) perturbations can generate vortices in baroclinic disc flows (see e.g., Petersen et al. 2007a, 2007b). Hence, vortex development through this mechanism depends on the existence of initial regular entropy perturbations, i.e., thermal plumes, in differentially rotating baroclinic disc flows.
It seems that compressional waves with linear amplitudes can heat the flow through two different channels: viscous dissipation and shear flow induced mode conversion. However, there is a clear difference between the entropy production by the kinematic shear mechanism and that by viscous dissipation. In the latter case, compressional waves first need to be tightly stretched down to the dissipation length-scales by the background differential rotation in order to be subject to viscous damping. As a result, the entropy produced by viscous dissipation of compressional waves takes the shape of narrow stretched lines. These thermal perturbations can baroclinically produce potential vorticity of a similar configuration. However, this is clearly not an optimal form of potential vorticity for the development of long-lived vortical structures. On the contrary, entropy perturbations produced through the mode conversion channel can take the form of localized thermal plumes. These can be very similar to those used in the numerical simulations by Petersen et al. (2007a,b). In this case compressional waves can eventually lead to the development of persistent vortical structures of different polarity. Hence, high frequency oscillations of the P-mode can participate in the generation of anticyclonic vortices that further accelerate dust trapping and planetesimal formation in protoplanetary discs with equilibrium entropy decreasing radially outwards.
Using the local linear approximation we have shown the possibility of potential vorticity generation in flows with both positive and negative radial entropy gradients (Richardson numbers). In fact, the standard alpha description of accretion discs implies a *positive* radial stratification of entropy and, hence, a weak baroclinic decay of existing vortices. In this case there will be a competition between the “baroclinic viscosity” and the potential vorticity generation due to mode conversion. Hence, it is not strictly ruled out that a significant amount of compressional perturbations can lead to the development of anticyclonic vortices even in flows with positive entropy gradients. In this case, radial stratification opens an additional degree of freedom for velocity shear induced mode conversion to operate. However, the viability of this scenario needs further investigation.
This paper presents the results obtained within the linear shearing sheet approximation. At nonlinear amplitudes, the P mode leads to the development of shock waves. These shocks induce local heating in the flow. Therefore, a realistic picture of entropy production and vortex development in radially stratified discs with significant amount of compressible perturbations needs to be analyzed by direct numerical simulations.
Acknowledgments {#acknowledgments .unnumbered}
===============
A.G.T. was supported by GNSF/PRES-07/153. A.G.T. would like to acknowledge the hospitality of Osservatorio Astronomico di Torino. This work is supported in part by ISTC grant G-1217.
Barranco, J. A., and Marcus, P. S., 2005, ApJ [**623**]{}, 1157 Bodo G., Chagelishvili G. D., Murante G., Tevzadze A. G., Rossi P. and Ferrari A., 2005, A&A [**437**]{}, 9 Bodo G., Tevzadze A. G., Chagelishvili G. D., Mignone A., Rossi, P. and Ferrari A., 2007, A&A [**475**]{}, 51 Brandenburg, A., and Dintrans, B., 2006, A&A [**450**]{}, 437 Chagelishvili G. D., Tevzadze A. G., Bodo G. and Moiseev, S. S., 1997, Phys. Rev. Letters [**79**]{}, 3178. Fromang, S., Terquem, C., and Balbus, S. 2002 MNRAS [**329**]{} 18 Gammie, C. F. and Menou, K., 1998, ApJ [**492**]{}, 75 Goldreich P. and Lynden-Bell D., 1965, MNRAS [**130**]{}, 125 Goldreich P. and Tremaine S., 1978, ApJ [**222**]{}, 850 Heinemann, T. and Papaloizou, J. C. B. 2009a, MNRAS [**397**]{}, 52 Heinemann, T. and Papaloizou, J. C. B. 2009b, MNRAS [**397**]{}, 64 Johnson B. M. and Gammie C. F., 2005a, ApJ [**626**]{}, 978 Johnson B. M. and Gammie C. F., 2005b, ApJ [**635**]{}, 149 Johnson B. M. and Gammie C. F., 2006, ApJ [**636**]{}, 63 Isella, A., Testi, L., Natta, A., Neri, R., Wilner, D., and Qi, C., 2007, A&A [**469**]{}, 213 Klahr H. H. and Bodenheimer P., 2003, ApJ [**582**]{}, 869 Klahr H., 2004, ApJ [**606**]{}, 1070 Lerche, I. and Parker, E. N., 1967, ApJ [**149**]{}, 559 Li H., Finn J. M., Lovelace R. V. E. and Colgate S. A, 2000, ApJ [**533**]{}, 1023 Lord Kelvin (W. Tompson), 1887, Philos. Mag. [**24**]{}, 188 Lovelace R. V. E., Li H., Colgate S. A. and Nelson A. F., 1999, ApJ [**513**]{}, 805 Mamatsashvili, G. M., and Chagelishvili, G. D., 2007, MNRAS [**381**]{}, 809 Petersen M. K., Julien K. and Stewart G. R., 2007a, ApJ [**658**]{}, 1236 Petersen M. K., Stewart G. R. and Julien K., 2007b, ApJ [**658**]{}, 1252 Sandin, C., Schönberner, D., Roth, M., Steffen, M., Böhm, P., and Monreal-Ibero, A. 2008, A&A [**486**]{}, 545 Sano, T., Miyama, S., Umebayashi, T., Nakano, T., 2000, ApJ [**543**]{}, 486 Shen, Y., Stone, J. M., and Gardiner, T. A., 2006, ApJ [**653**]{}, 513 Tevzadze A. G., Chagelishvili G. D., Zahn J.-P., Chanishvili R. G. and Lominadze J. G., 2003, A&A [**407**]{}, 779 Tevzadze A. G., Chagelishvili G. D. and Zahn J.-P., 2008, A&A [**478**]{}, 9 Trefethen, L. N., Trefethen, A. E., Reddy, S. C., and Driscoll, T. A., 1993, Science [**261**]{}, 578. Volponi, F., and Yoshida, Z., 2002, J. Phys. Soc. Japan [**71**]{}, 1870
Initial conditions
==================
Here we present the approximations used to derive the analytic form of the initial conditions corresponding to individual modes in radially stratified shear flows. These conditions are used to construct the initial values of perturbations in the numerical integration of the ODEs governing the linear dynamics of perturbations in these flows. We employ different methods for high and low frequency modes.
P-mode
------
P-mode perturbations are intrinsically high frequency and well separated from the low frequency modes everywhere outside the coupling region $k_x/k_y < 1$. In order to construct P-mode perturbations we use the convective eigenfunction derived in the shearless limit and account for shear flow effects only in the adiabatic limit: $$\Psi_{c}(t) = (\omega_{c}^2(t) + c_s^2 \eta) p(t) - 2 \Omega_0 W(t)
- c_s^2 k_P k_x(t) s(t) ~,$$ where $$\omega_c^2(t) = -{c_s^2 \eta k_y^2 \over c_s^2 k^2(t) - 4B\Omega_0}
~.$$ Although this form of the eigenfunction is not a valid eigenfunction for describing the W and S modes individually in a sheared medium, it has proved to be a good tool for excluding both modes from the initial spectrum: $$\Psi_{c}(0) = 0 ~.$$ Assuming that we are looking for P-mode perturbations with wave-numbers satisfying the condition $ k_x(0)/k_y \gg 1 $, we may use the zero potential vorticity condition: $$W(0) = 0 ~.$$ Hence, Eqs. (A3,A4) yield the full set of initial conditions for the high frequency P-mode SFH of perturbations: $$p(0) = P_0 ~, ~~~~~ u_x(0) = U_0 ~,$$ $$u_y(0) = {1 \over k_x(0)} \left( k_y U_0 + 2BP_0 \right) ~,$$ $$s(0) = {\omega_c^2(0)+c_s^2 \eta \over c_s^2 k_P k_x(0) } P_0 ~,$$ where $P_0$ and $U_0$ are free parameters corresponding to the two P-modes in the system. Specific values of these two parameters define whether the potential or kinetic part of the wave harmonic is present initially.
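A direct transcription of these P-mode initial conditions reads as follows (Python; the numerical values are illustrative assumptions in code units $c_s=\Omega_0=1$ with Keplerian Oort constants, and are not taken from the runs of Sec. 3):

```python
import numpy as np

# Illustrative code units (assumed): c_s = Omega_0 = 1, Keplerian Oort constants.
cs, Omega0, A, B = 1.0, 1.0, -0.75, -0.25
ky, kx0 = 2.0, -30.0
kP, kS = 0.2, 0.2
eta = kP * kS

def p_mode_ic(P0, U0):
    """Initial SFH state (p, u_x, u_y, s) containing only P-mode perturbations."""
    wc2 = -cs**2 * eta * ky**2 / (cs**2 * (kx0**2 + ky**2) - 4.0 * B * Omega0)
    p0, ux0 = P0, U0
    uy0 = (ky * U0 + 2.0 * B * P0) / kx0                 # from the condition W(0) = 0
    s0 = (wc2 + cs**2 * eta) * P0 / (cs**2 * kP * kx0)   # from the condition Psi_c(0) = 0
    return np.array([p0, ux0, uy0, s0])
```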
Low frequency modes
-------------------
In order to derive the initial conditions for the S and W modes individually we employ the second order equations for the velocity perturbations that can be derived from Eqs. (34-37): $$\left\{ {{\rm d}^2 \over {\rm d} t^2} + c_s^2 k^2 - 4B\Omega_0 -
c_s^2 \eta \right\} u_x = -c_s^2 k_y W + 4Ac_s^2 k_y p ~,$$ $$\left\{ {{\rm d}^2 \over {\rm d} t^2} + c_s^2 k^2 - 4B\Omega_0
\right\} u_y = c_s^2 k_x(t) W + 2 B c_s^2 k_P s ~.$$ For low frequency perturbations $${{\rm d}^2 \over {\rm d} t^2} \left( \begin{array}{c} u_x \\ u_y
\end{array} \right) \sim \omega^2_c \left( \begin{array}{c} u_x \\ u_y
\end{array} \right) ~.$$ Assuming that $\omega_c^2(0) \ll c_s^2 k^2(0)$ and neglecting the corresponding terms in Eqs. (A6-A7) leads to the following algebraic system: $$\left[c_s^2 k^2 - 4B\Omega_0 \right] u_x = -c_s^2 k_y W + 4Ac_s^2
k_y p ~.$$ $$\left[c_s^2 k^2 - 4B\Omega_0 \right] u_y = c_s^2 k_x(t) W + 2 B
c_s^2 k_P s ~.$$ Hence, we can derive the initial conditions for the low frequency modes as follows: $$p(0) = {B \over 2A c_s^2 k_y^2 + B\omega_p^2(0)} \left( 2 \Omega_0
W_0 + c_s^2 k_p k_x(0) S_0 \right) ~,$$ $$u_x(0) = {1 \over \omega_p^2(0)} \left( -c_s^2 k_y W_0 + 4A c_s^2
k_y p(0) \right) ~,$$ $$u_y(0) = {1 \over \omega_p^2(0)} \left( c_s^2 k_x(0) W_0 + 2B c_s^2
k_p S_0 \right) ~,$$ where $$\omega_p^2(0) = c_s^2 (k_x^2(0) + k_y^2) - 4B\Omega_0 ~.$$ Eqs. (A11-A14) give the initial values of perturbation SFHs for S-mode when $$W_0 = 0 ~,~~~ S_0 \not= 0 ~,$$ and W-mode when $$W_0 \not= 0 ~,~~~ S_0 = 0 ~.$$
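The corresponding low frequency initial states can be assembled in the same way (same illustrative code units as in the previous sketch; identifying $S_0$ with the initial entropy SFH $s(0)$ is our reading of the notation):

```python
import numpy as np

# Illustrative code units (assumed): c_s = Omega_0 = 1, Keplerian Oort constants.
cs, Omega0, A, B = 1.0, 1.0, -0.75, -0.25
ky, kx0, kP = 2.0, -30.0, 0.2

def low_freq_ic(W0, S0):
    """Initial state (p, u_x, u_y, s): W-mode for (W0 != 0, S0 = 0), S-mode for (W0 = 0, S0 != 0)."""
    wp2 = cs**2 * (kx0**2 + ky**2) - 4.0 * B * Omega0        # omega_p^2(0), Eq. (A14)
    p0 = B * (2.0 * Omega0 * W0 + cs**2 * kP * kx0 * S0) \
         / (2.0 * A * cs**2 * ky**2 + B * wp2)               # Eq. (A11)
    ux0 = (-cs**2 * ky * W0 + 4.0 * A * cs**2 * ky * p0) / wp2   # Eq. (A12)
    uy0 = (cs**2 * kx0 * W0 + 2.0 * B * cs**2 * kP * S0) / wp2   # Eq. (A13)
    s0 = S0                  # assumed: S_0 is the initial entropy perturbation
    return np.array([p0, ux0, uy0, s0])

# e.g. a pure W-mode harmonic:  y0 = low_freq_ic(W0=1.0, S0=0.0)
```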
---
abstract: 'We discuss bilinear estimates of tempered distributions in the Fourier restriction spaces for the two-dimensional Schrödinger equation whose principal part is the d’Alembertian. We prove that the bilinear estimates hold if and only if the tempered distributions are functions.'
address: 'Mathematical Institute, Tohoku University, Sendai 980-8578, Japan'
author:
- Eiji ONODERA
title: |
Bilinear Estimates\
Associated to the Schrödinger Equation\
with a Nonelliptic Principal Part
---
[^1]
Introduction {#section:introduction}
============
This paper is devoted to studying bilinear estimates of tempered distributions in the Fourier restriction spaces related with the two-dimensional Schrödinger equation whose principal part is the d’Alembertian. The Fourier restriction spaces were originated by Bourgain in his celebrated papers [@BOURGAIN1] and [@BOURGAIN2] to establish time-local or time-global well-posedness of the initial value problem for one-dimensional nonlinear Schrödinger equations and the Korteweg-de Vries equation in $L^2(\mathbb{R})$ respectively. Generally speaking, to solve the initial value problem for nonlinear dispersive partial differential equations which can be treated by the classical energy method, one usually analyzes the interactions of propagation of singularities in nonlinearity in detail, and applies the regularity properties of free propagators to the resolution of singularities. It is well-known that propagators of some classes of linear dispersive equations with constant coefficients have local smoothing effects (see, e.g., [@CHIHARA]), and dispersion properties (see, e.g., [@GV] and [@STRICHARTZ]). Surprisingly, the Fourier restriction spaces automatically work for both of the analysis of the interactions of propagation of singularities in the frequency space and the application of the regularity properties of free propagators. For this reason, many applications and refinements of the method of the Fourier restriction spaces have been investigated in the last decade; see, e.g., [@CDKS], [@KPV1]–[@NTT], [@TAO1; @TAO2] and references therein.
Here we state the definition of the Fourier restriction spaces. The Fourier transform of a function $f(x,t)$ of $(x,t)=(x_1,\dotsc,x_n,t)\in\mathbb{R}^{n+1}$ is defined by $$\tilde{f}(\xi,\tau)
=
(2\pi)^{-\frac{n+1}{2}}
\iint_{\mathbb{R}^{n+1}}
e^{-it\tau-ix\cdot\xi}
f(x,t)
dxdt,$$ where $i=\sqrt{-1}$, $(\xi,\tau)=(\xi_1,\dotsc,\xi_n,\tau)\in\mathbb{R}^{n+1}$ and $x\cdot\xi=x_1\xi_1+\dotsb+x_n\xi_n$. Let $a(\xi)$ be a real polynomial of $\xi=(\xi_1,\dotsc,\xi_n)\in\mathbb{R}^n$. Set $\partial_t=\frac{\partial}{\partial{t}}$, $\partial_j=\frac{\partial}{\partial{x_j}}$, $D_t=-i\partial_t$, $D_j=-i\partial_j$, $D=(D_1,\dotsc,D_n)$, $\lvert{\xi}\rvert=\sqrt{\xi\cdot\xi}$, $\langle{\tau}\rangle=\sqrt{1+\tau^2}$, and $\langle\xi\rangle=\sqrt{1+\lvert\xi\rvert^2}$. For $s,b\in\mathbb{R}$, the Fourier restriction space $X^{s,b}=X^{s,b}(\mathbb{R}^{n+1})$ associated to the differential operator $D_t-a(D)$ is the set of all tempered distributions $f$ on $\mathbb{R}^{n+1}$ satisfying $$\lVert{f}\rVert_{s,b}
=
\left(
\iint_{\mathbb{R}^{n+1}}
\big\lvert
\langle{\tau-a(\xi)}\rangle^b
\langle{\xi}\rangle^s
\tilde{f}(\xi,\tau)
\big\rvert^2
\,d\xi
\,d\tau
\right)^{\frac{1}{2}}
<+\infty.$$ The free propagator $e^{ita(D)}$ of a differential equation $(D_t-a(D))u=0$ is defined by $$e^{ita(D)}\phi(x)
=
(2\pi)^{-\frac{n}{2}}
\int_{\mathbb{R}^n}
e^{ix\cdot\xi+ita(\xi)}
\hat{\phi}(\xi)
\,d\xi,$$ where $\hat{\phi}$ is the Fourier transform of $\phi$ in $x\in\mathbb{R}^n$, that is, $$\hat{\phi}(\xi)
=
(2\pi)^{-\frac{n}{2}}
\int_{\mathbb{R}^n}
e^{-ix\cdot\xi}
\phi(x)
\,dx.$$ In one-dimensional case, bilinear estimates in the Fourier restriction spaces associated to $D_t-D^2$ and $D_t-D^3$ were completed. More precisely, in [@KPV1] and [@KPV2], Kenig, Ponce and Vega refined the bilinear estimates in the Fourier restriction spaces with some negative indices $s<0$. Nakanishi, Takaoka and Tsutsumi in [@NTT] constructed sequences of tempered distributions breaking the bilinear estimates to show the optimality of the indices $s<0$ used in [@KPV1] and [@KPV2].
In [@TAO1] Tao investigated the bilinear estimates associated to $a(\xi)=\lvert\xi\rvert^2$ with $n\geqslant2$. He dealt with some equivalent estimates of the integral of trilinear form, and pointed out that the worst singularity occurs when an orthogonal relationship of three phases in that integral holds. Particularly in case $n=2$, Colliander, Delort, Kenig and Staffilani succeeded in overcoming this difficulty by the dyadic decomposition in not only the sizes of phases but also the angles among them. See [@CDKS] for the detail. Combining the above results for $a(\xi)=\lvert\xi\rvert^2$ with $n=1,2$, we have the following.
\[theorem:CDKPSV\] Let $n=1,2$, and let $a(\xi)=\lvert\xi\rvert^2$.
- [ For any $s\in (-\frac{3}{4},0]$, there exist $b\in(\frac{1}{2},1)$ and $C>0$ such that $$\begin{aligned}
\lVert{uv}\rVert_{s,b-1}
& \leqslant
C
\lVert{u}\rVert_{s,b}\,
\lVert{v}\rVert_{s,b},
\label{equation:b1}
\\
\lVert{\bar{u}\bar{v}}\rVert_{s,b-1}
& \leqslant
C
\lVert{u}\rVert_{s,b}\,
\lVert{v}\rVert_{s,b}.
\label{equation:b2}\end{aligned}$$ ]{}
- [ For any $s\in (-\frac{1}{4},0]$, there exist $b\in(\frac{1}{2},1)$ and $C>0$ such that $$\lVert{\bar{u}v}\rVert_{s,b-1}
\leqslant
C
\lVert{u}\rVert_{s,b}\,
\lVert{v}\rVert_{s,b}.
\label{equation:b3}$$ ]{}
- [ For any $s<-\frac{3}{4}$ and for any $b\in\mathbb{R}$, the estimates \[equation:b1\] and \[equation:b2\] fail to hold, and for any $s<-\frac{1}{4}$ and for any $b\in\mathbb{R}$, \[equation:b3\] fails to hold.]{}
Here we mention a few remarks. First, the difference between (i) and (ii) is basically due to the structure of the products. In view of Hörmander’s theorem concerning the microlocal condition on the multiplication of distributions (see [@SOGGE Theorem 0.4.5] for instance), $u\bar{u}$ needs more smoothness of $u$ than $u^2$ and $\bar{u}^2$ to make sense. Secondly, the local smoothing effect and the dispersion property of the fundamental solution $e^{it\lvert{D}\rvert^2}$ are strongly reflected in these bilinear estimates. These are applied to solving the initial value problem for some nonlinear Schrödinger equations in a class of tempered distributions which are not necessarily functions. Indeed, by using the technique developed in [@KPV1] together with the estimates \[equation:b1\], \[equation:b2\] and \[equation:b3\], one can prove time-local well-posedness of the initial value problem for quadratic nonlinear Schrödinger equations of the form $$\begin{aligned}
{2}
D_tu
-
\lvert{D}\rvert^2u
& =
N_j(u,u)
&
\quad\text{in}\
& \mathbb{R}^n\times\mathbb{R},
\label{equation:pde1}
\\
u(x,0)
& =
u_0(x)
&
\quad\text{in}\
& \mathbb{R}^n,
\label{equation:data1}\end{aligned}$$ in Sobolev space $H^s(\mathbb{R}^n)$ with $s\in (-\frac{3}{4},0]$ for $j=1,2$ and $s\in (-\frac{1}{4},0]$ for $j=3$, respectively. Here $n=1,2$, $u(x,t)$ is a complex-valued unknown function of $(x,t)$, $u_0$ is a given initial data, $N_1(u,v)=uv$, $N_2(u,v)=\bar{u}\bar{v}$, $N_3(u,v)=\bar{u}v$, $H^s(\mathbb{R}^n)=\langle{D}\rangle^{-s}L^2(\mathbb{R}^n)$, and $L^2(\mathbb{R}^n)$ is the set of all square-integrable functions on $\mathbb{R}^n$.
Some two-dimensional nonlinear dispersive equations with a nonelliptic principal part arise in classical mechanics. For example, the Ishimori equation ([@ISHIMORI]) $$\begin{aligned}
D_tu-(D_1^2-D_2^2)u
&=
\frac{-2\bar{u}}{1+\lvert{u}\rvert^2}
\Bigl((D_1u)^2-(D_2u)^2\Bigr)
+
i(D_2\phi{D_1u}+D_1\phi{D_2u}),\\
\phi
&=
-4i\lvert{D}\rvert^{-2}
\left(
\frac{D_1\bar{u}D_2u-D_1uD_2\bar{u}}{1+\lvert{u}\rvert^2}
\right),\end{aligned}$$ and the hyperbolic–elliptic Davey-Stewartson equation ([@DS]) $$D_tu-(D_1^2-D_2^2)u
=
-\lvert{u}\rvert^2u
-uD_1^2\lvert{D}\rvert^{-2}(\lvert{u}\rvert^2)$$ are well-known two-dimensional nonlinear dispersive equations. It is easy to see that $e^{it(D_1^2-D_2^2)}$ has exactly the same local smoothing and dispersion properties as $e^{it(D_1^2+D_2^2)}$ since $a(\xi)=\xi_1^2\pm\xi_2^2$ are two-dimensional nondegenerate quadratic forms. If the gradient $a^\prime(\xi)$ of a quadratic form $a(\xi)$ does not vanish for $\xi\ne0$, then $e^{ita(D)}$ gains $\frac{1}{2}\,$-spatial differentiation globally in time and locally in space. If the Hessian $a^{\prime\prime}(\xi)$ of an $n$-dimensional quadratic form $a(\xi)$ is a nonsingular matrix, then the distribution kernel of $e^{ita(D)}$ in $\mathbb{R}^n\times\mathbb{R}^n$ is estimated by $O(\lvert{t}\rvert^{-\frac{n}{2}})$ for all $t\in\mathbb{R}$ (see, e.g., [@KPV0]). Then, we expect that the bilinear estimates for $a(\xi)=\xi_1^2-\xi_2^2$ are the same as those for $a(\xi)=\xi_1^2+\xi_2^2$. The purpose of this paper is to examine this expectation. However, our answer is negative. More precisely, our results are the following.
\[theorem:main\] Let $n=2$, and let $a(\xi)=\xi_1^2-\xi_2^2$.
- [ For $s \geq 0$, there exist $b\in(\frac{1}{2},1)$ and $C>0$ such that the estimates \[equation:b1\], \[equation:b2\] and \[equation:b3\] hold. ]{}
- [ For any $s<0$ and for any $b\in\mathbb{R}$, the estimates \[equation:b1\], \[equation:b2\] and \[equation:b3\] fail to hold. ]{}
Note that our results are independent of the structure of products. In other words, our results depend only on the properties of $a(\xi)$, in particular, on the noncompactness of the zeros of $a(\xi)$.
We shall prove Theorem \[theorem:main\] in the next section. On one hand, we directly compute trilinear forms in the phase space to show (i) of Theorem \[theorem:main\]. We see that the Strichartz estimates allow us to exploit the regularity properties of the free propagator $e^{it(D_1^2-D_2^2)}$ in the proof of (i).
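Indeed, although $a(\xi)=\xi_1^2-\xi_2^2$ is not elliptic, it is nondegenerate; we record the elementary computation behind this for the reader’s convenience: $$a^\prime(\xi)=(2\xi_1,-2\xi_2)\ne0
\quad\text{for}\ \xi\ne0,
\qquad
a^{\prime\prime}(\xi)
=
\left(
\begin{array}{cc}
2 & 0 \\
0 & -2
\end{array}
\right),
\qquad
\det a^{\prime\prime}(\xi)=-4\ne0.$$ Hence $e^{it(D_1^2-D_2^2)}$ gains $\frac{1}{2}\,$-spatial differentiation locally in space, its kernel decays like $O(\lvert{t}\rvert^{-1})$, and the $L^4(\mathbb{R}^3)$-type estimates used in the next section hold exactly as in the elliptic case.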
On the other hand, to prove (ii) of Theorem \[theorem:main\], we construct two sequences of real-analytic functions for which the bilinear estimates break down. We observe that one cannot make full use of the regularity properties of $e^{it(D_1^2-D_2^2)}$ for the negative index $s$. More precisely, if $s<0$, then these properties cannot work effectively near the set of zeros of $a(\xi)$, that is, the hyperbola in $\mathbb{R}^2$.
Finally, we remark that our results seem to be strongly related to the recent results on bilinear estimates for two-dimensional Fourier restriction problems by Tao and Vargas in [@TV] and [@VARGAS]. They obtained bilinear estimates of two functions restricted to the unit paraboloid in the phase space. Their method of proof does not work for the restriction to the hyperbolic paraboloid.
Proof of Theorem \[theorem:main\] {#section:proof}
=================================
Fix $a(\xi)=\xi_1^2-\xi_2^2$. Note that $a(\xi)=a(-\xi)$ for any $\xi\in\mathbb{R}^2$. First, we prove (i) of Theorem \[theorem:main\]. Secondly, we prove a lemma needed in the proof of (i). Lastly, we conclude this paper by proving (ii) of Theorem \[theorem:main\].
Let $s \geq 0$ and $\frac{1}{2}<b<1$. We employ the idea of trilinear estimates developed in [@TAO1]. In view of the duality argument, we have only to show that there exists a positive constant $C$ depending only on $s$ and $b$ such that $$\lvert{I}\rvert
\leqslant
C
\lVert{f}\rVert_{L^2(\mathbb{R}^3)}\,
\lVert{g}\rVert_{L^2(\mathbb{R}^3)}\,
\lVert{h}\rVert_{L^2(\mathbb{R}^3)},$$ where $$I
=
\int_{A_1}
\int_{A_2}
\frac{\langle\mu_{0}\rangle^{s}\,
\langle\mu_{1}\rangle^{-s}\,
\langle\mu_{2}\rangle^{-s}\,
f(\mu_0,\tau_0)g(\mu_1,\tau_1)h(\mu_2,\tau_2)}
{\langle\tau_0+a(\mu_0)\rangle^{1-b}\,
\langle\tau_1{\pm}a(\mu_1)\rangle^b\,
\langle\tau_2{\pm}a(\mu_2)\rangle^b}\,
d\tau_0d\tau_1d\tau_2
d\mu_0d\mu_1d\mu_2$$ and the sets $A_1$ and $A_2$ are defined by $$\begin{aligned}
A_1
& =
\{
(\mu_0,\mu_1,\mu_2)\in\mathbb{R}^{6}
\ \vert\
\mu_0+\mu_1+\mu_2=0
\}
\\
A_2
& =
\{
(\tau_0,\tau_1,\tau_2)\in\mathbb{R}^3
\ \vert\
\tau_0+\tau_1+\tau_2=0
\}.\end{aligned}$$ By using the pairs of signatures ${\pm}a(\mu_1)$ and ${\pm}a(\mu_2)$ in $I$, we can prove , and together. More precisely, the pairs $(-,-)$, $(+,+)$ and $(+,-)$ correspond to , and respectively. Since $\langle\mu_{1}+\mu_{2}\rangle^s
\leqslant
2^{s}\,
\langle\mu_{1}\rangle^s\,
\langle\mu_{2}\rangle^s
$ for $s \geq 0$, a simple computation gives $$\begin{aligned}
\lvert{I}\rvert
& =
\biggl\lvert
\int_{\mathbb{R}^{4}}
\int_{\mathbb{R}^2}
\frac{ f(-\mu_1-\mu_2,-\tau_1-\tau_2)
g(\mu_1,\tau_1)h(\mu_2,\tau_2) }
{ \langle\tau_1{\pm}a(\mu_1)\rangle^b\,
\langle\tau_2{\pm}a(\mu_2)\rangle^b }
\\
&\phantom{=\ } \times
\frac{ \langle\mu_{1}+\mu_{2}\rangle^{s}\,
\langle\mu_{1}\rangle^{-s}\,
\langle\mu_{2}\rangle^{-s} }
{ \langle-\tau_1-\tau_2+a(\mu_1+\mu_2)\rangle^{1-b}
}\,
d\tau_1\,d\tau_2
\,d\mu_1\,d\mu_2
\biggr\rvert
\\*[0.2cm]
& \leqslant
2^s
\!\int_{\mathbb{R}^{4}}
\int_{\mathbb{R}^2}
\frac{\lvert{f(-\mu_1-\mu_2,-\tau_1-\tau_2)}\rvert
\lvert{g(\mu_1,\tau_1)}\rvert
\lvert{h(\mu_2,\tau_2)}\rvert}
{\langle\tau_1{\pm}a(\mu_1)\rangle^b\,
\langle\tau_2{\pm}a(\mu_2)\rangle^b}\,
d\tau_1\,d\tau_2
\,d\mu_1\,d\mu_2
\\*[0.2cm]
& =
(2\pi)^{-\frac{3}{2}}2^s
\int_{\mathbb{R}^2}
\int_{\mathbb{R}}
\mathscr{F}_{\xi,\tau}^{-1}[\lvert f \rvert](x,t)
G(x,t)H(x,t)
\,dt\,dx,\end{aligned}$$ where $$\begin{aligned}
G(x,t)
& =
\int_{\mathbb{R}^2}
\int_{\mathbb{R}}
e^{i(x\cdot\mu+t\tau)}
\frac{\lvert{g(\mu,\tau)}\rvert}
{\langle\tau{\pm}a(\mu)\rangle^b}\,
\,d\tau
\,d\mu
\\
H(x,t)
& =
\int_{\mathbb{R}^2}
\int_{\mathbb{R}}
e^{i(x\cdot\mu+t\tau)}
\frac{\lvert{h(\mu,\tau)}\rvert}
{\langle\tau{\pm}a(\mu)\rangle^b}\,
\,d\tau
\,d\mu,\end{aligned}$$ and $\mathscr{F}_{\xi,\tau}^{-1}$ denotes the inverse Fourier transform on $\xi$ and $\tau$, that is, $$\mathscr{F}_{\xi,\tau}^{-1}[\tilde{f}](x,t)
=
(2\pi)^{-\frac{3}{2}}
\iint_{\mathbb{R}^3}
e^{it\tau+ix\cdot\xi}
\tilde{f}(\xi,\tau)
d\xi
d\tau.$$ The estimates of $G$ and $H$ are the following:
\[theorem:ste\] For $b>\frac{1}{2}$, there exists $C_1=C_1(b)>0$ such that for any $g$, $h\in L^2(\mathbb{R}^3)$ $$\lVert
G
\rVert_{L^4(\mathbb{R}^3)}
\leqslant
C_1
\lVert
g
\rVert_{L^2(\mathbb{R}^3)},
\quad
\lVert
H
\rVert_{L^4(\mathbb{R}^3)}
\leqslant
C_1
\lVert
h
\rVert_{L^2(\mathbb{R}^3)},$$ where $L^4(\mathbb{R}^3)$ is the set of all Lebesgue measurable functions of $(x,t)\in\mathbb{R}^2\times\mathbb{R}$ satisfying $$\lVert{F}\rVert_{L^4(\mathbb{R}^3)}
=
\left(
\int_{\mathbb{R}^2}
\int_{\mathbb{R}}
\left\lvert{F(x,t)}\right\rvert^4
dtdx
\right)^{\frac{1}{4}}
<+\infty.$$
By using Lemma \[theorem:ste\], the Hölder inequality and the Plancherel formula, we deduce $$\lvert{I}\rvert
\leqslant
\lVert{f}\rVert_{L^2(\mathbb{R}^3)}
\lVert{G}\rVert_{L^4(\mathbb{R}^3)}
\lVert{H}\rVert_{L^4(\mathbb{R}^3)}
\leqslant
C
\lVert{f}\rVert_{L^2(\mathbb{R}^3)}
\lVert{g}\rVert_{L^2(\mathbb{R}^3)}
\lVert{h}\rVert_{L^2(\mathbb{R}^3)},$$ which was to be established.
We prove the estimate for $G$; the estimate for $H$ is obtained in exactly the same way. Changing variables by $\tau=\lambda{\mp}a(\xi)$, we deduce $$\begin{aligned}
G(x,t)
& =
\int_{\mathbb{R}^2}
\int_{\mathbb{R}}
e^{i(x\cdot\xi+t\tau)}
\frac{\lvert{g(\xi,\tau)}\rvert}
{\langle\tau{\pm}a(\xi)\rangle^b}
\,d\tau
\,d\xi
\\
& =
\int_{\mathbb{R}}
e^{it\lambda}
\langle\lambda\rangle^{-b}
\left(
\int_{\mathbb{R}^2}
e^{ix\cdot\xi}
e^{{\mp}ita(\xi)}
\lvert{g(\xi,\lambda{\mp}a(\xi))}\rvert
\,d\xi
\right)
d\lambda
\\
& =
\int_{\mathbb{R}}
e^{it\lambda}
\langle\lambda\rangle^{-b}
e^{{\mp}ita(D)}
\psi_{\lambda}(x)
\,d\lambda,\end{aligned}$$ where $
(\psi_{\lambda})^{\wedge}(\xi)
=
2\pi
\lvert
g(\xi,\lambda \mp a(\xi))
\rvert.
$ Applying the Minkowski inequality, we get $$\begin{aligned}
\lVert{G}\rVert_{L^4(\mathbb{R}^3)}
& =
\left(
\iint_{\mathbb{R}^3}
\left\lvert
\int_{\mathbb{R}}
e^{it\lambda}
\langle\lambda\rangle^{-b}
e^{{\mp}ita(D)}
\psi_{\lambda}(x)
d\lambda
\right\rvert^4
dt\,dx
\right)^{\!\frac{1}{4}}
\nonumber
\\
& \leqslant
\int_{\mathbb{R}}
\left(
\iint_{\mathbb{R}^3}
\left\lvert
e^{it\lambda}
\langle\lambda\rangle^{-b}
e^{{\mp}ita(D)}
\psi_{\lambda}(x)
\right\rvert^4
dt\,dx
\right)^{\!\frac{1}{4}}
\!d\lambda
\nonumber
\\
& =
\int_{\mathbb{R}}
\langle\lambda\rangle^{-b}
\left(
\iint_{\mathbb{R}^3}
\left\lvert
e^{{\mp}ita(D)}
\psi_{\lambda}(x)
\right\rvert^4
dtdx
\right)^{\!\frac{1}{4}}
\!d\lambda.
\label{equation:onnagurui}\end{aligned}$$
Since $a(\xi)$ is a two-dimensional nondegenerate quadratic form of $\xi$, the so-called Strichartz estimate $$\lVert
e^{\pm ita(D)}u
\rVert_{L^4(\mathbb{R}^3)}
\leqslant
C
\lVert
u
\rVert_{L^2(\mathbb{R}^2)}$$ holds (see, e.g., [@GS Appendix]). Using this, the Schwarz inequality with $b>\frac{1}{2}$ and the Plancherel formula, we obtain $$\begin{aligned}
\lVert{G}\rVert_{L^4(\mathbb{R}^3)}
& \leqslant
C
\int_{\mathbb{R}}
\langle\lambda\rangle^{-b}
\lVert\psi_\lambda\rVert_{L^2(\mathbb{R}^2)}
\,d\lambda
\\
& \leqslant
C(b)
\left(
\int_{\mathbb{R}}
\lVert
\psi_\lambda
\rVert_{L^2(\mathbb{R}^2)}^2
\,d\lambda
\right)^{\!\frac{1}{2}}
\\
& =
2\pi
C(b)
\left(
\int_{\mathbb{R}}
\int_{\mathbb{R}^2}
\lvert{g(\xi,\lambda{\mp}a(\xi))}\rvert^2
\,d\xi
\,d\lambda
\right)^{\!\frac{1}{2}}
\\
& =
2\pi
C(b)
\left(
\int_{\mathbb{R}}
\int_{\mathbb{R}^2}
\lvert{g(\xi,\lambda)}\rvert^2
\,d\xi
\,d\lambda
\right)^{\!\frac{1}{2}}
\\
& =
2\pi
C(b)
\lVert{g}\rVert_{L^2(\mathbb{R}^3)}.\end{aligned}$$ This completes the proof of Lemma \[theorem:ste\].
We prove part (ii) of Theorem \[theorem:main\], that is, the optimality of the range $s\geq 0$ in the bilinear estimates, by constructing suitable Knapp-type counterexamples as in [@KPV1].
First, we prove the case $j=1$. Fix $s<0$ and $b\in\mathbb{R}$. Set $B=\max\{1,\lvert{b}\rvert\}$ for short. Suppose that there exists a positive constant $C>0$ such that the bilinear estimate holds for any $u$, $v\in L^2(\mathbb{R}^3)$. For $N=1,2,3,\ldots$, set $$\widetilde{u_N}(\xi_1,\xi_2,\tau)
=
\chi_{Q_N}(\xi_1,\xi_2,\tau),
\quad
\widetilde{v_N}(\xi_1,\xi_2,\tau)
=
\chi_{Q_N}(-\xi_1,-\xi_2,-\tau),$$ where $\chi_A$ is the characteristic function of a set $A$, and $$Q_N
=
\left\{
(\xi_1,\xi_2,\tau)\in\mathbb{R}^3
\ \bigg\vert\
N\leqslant \xi_1+\xi_2\leqslant 2N,
\lvert\xi_1-\xi_2\rvert\leqslant\frac{1}{4N},
\lvert\tau\rvert\leqslant\frac{1}{2}
\right\}.$$ Note that $$Q_N
\subset
\left\{
(\xi_1,\xi_2,\tau)\in \mathbb{R}^3
\ \bigg\vert\
\lvert\tau{\pm}a(\xi)\rvert\leqslant1,
\frac{N}{2}\leqslant\lvert\xi\rvert\leqslant{2N}
\right\},
\label{equation:aya2}$$ since $$\begin{gathered}
-\frac{1}{2}
\leqslant
a(\xi)
=(\xi_1+\xi_2)(\xi_1-\xi_2)
\leqslant
\frac{1}{2} \\
\frac{N^2}{2}
\leqslant
\lvert\xi\rvert^2
=
\frac{(\xi_1+\xi_2)^2}{2}
+
\frac{(\xi_1-\xi_2)^2}{2}
\leqslant
2N^2+\frac{1}{32N^2}.\end{gathered}$$ By using , we deduce $$\begin{aligned}
\lVert{u_N}\rVert_{s,b}
& =
\left(
\iint_{Q_N}
\langle\tau-a(\xi)\rangle^{2b}\,
\langle\xi\rangle^{2s}
d\tau
d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& \leqslant
2^{B-s}N^s
\left(
\iint_{Q_{N}}d\tau d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& =
2^{B-s-1}N^s,
\label{equation:ine1}
\intertext{and}
\lVert{v_N}\rVert_{s,b}
& =
\left(
\iint_{Q_N}
\langle\tau+a(\xi)\rangle^{2b}\,
\langle\xi\rangle^{2s}
d\tau
d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& \leqslant
2^{B-s}N^s
\left(
\iint_{Q_N}d\tau d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& =
2^{B-s-1}N^s.
\label{equation:ine2}\end{aligned}$$ A simple computation shows that for large $N\in\mathbb{N}$, $$\begin{aligned}
\widetilde{u_Nv_N}(\xi,\tau)
& =
(2\pi)^{-\frac{3}{2}}
\iint_{\mathbb{R}^3}
\chi_{Q_N}(\xi-\eta,\tau-\lambda)
\chi_{Q_N}(-\eta,-\lambda)
d\eta
d\lambda
\\
& =
(2\pi)^{-\frac{3}{2}}
\iint_{Q_N}
\chi_{Q_N}(\xi+\eta,\tau+\lambda)
d\eta
d\lambda
\\
& \geqslant
\frac{1}{2^{6+\frac{1}{2}}\,\pi^{\frac{3}{2}}}
\chi_{R_N}(\xi,\tau),\end{aligned}$$ where $$R_N
=
\left\{
(\xi_1,\xi_2,\tau)\in \mathbb{R}^3
\ \bigg\vert\
\lvert\xi_1+\xi_2\rvert\leqslant\frac{N}{2},
\lvert\xi_1-\xi_2\rvert\leqslant\frac{1}{8N},
\lvert\tau\rvert\leqslant\frac{1}{4}
\right\}.$$ Since $
R_N
\subset
\big\{
(\xi_1,\xi_2,\tau)\in\mathbb{R}^3
\ \big\vert\
\lvert\tau{\pm}a(\xi)\rvert\leqslant1,
\lvert\xi\rvert\leqslant{\frac{N}{2}}
\big\},
$ we get $$\begin{aligned}
\lVert{u_Nv_N}\rVert_{s,b-1}
& =
\left(
\iint_{\mathbb{R}^3}
\lvert\widetilde{u_{N}v_{N}}(\xi,\tau)\rvert^2
\langle\tau-a(\xi)\rangle^{2(b-1)}\,
\langle\xi\rangle^{2s}
d\tau
d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& \geqslant
\frac{1}{2^{6+\frac{1}{2}}\pi^{\frac{3}{2}}}
\left(
\iint_{R_N}
\langle\tau-a(\xi)\rangle^{2(b-1)}\,
\langle\xi\rangle^{2s}
d\tau
d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& \geqslant
2^{B-6-\frac{1}{2}}\pi^{-\frac{3}{2}}
N^s
\left(
\iint_{R_N}
d\tau
d\xi
\right)^{\frac{1}{2}}
\nonumber
\\
& =
2^{B-8-\frac{1}{2}}\pi^{-\frac{3}{2}}
N^s.
\label{equation:ine3}\end{aligned}$$ Substitute , and into . Then we have $
2^{B-8-\frac{1}{2}}\pi^{-\frac{3}{2}}
N^s
\!\leqslant
2^{2B-2s-2}N^{2s}\!,
$ which becomes $
2^{-B+2s-6-\frac{1}{2}}\pi^{-\frac{3}{2}}
\leqslant
N^s.
$ Since $s<0$, the right-hand side of the above tends to zero as $N\rightarrow\infty$, while the left-hand side is a strictly positive constant depending only on $s$ and $b$. This is a contradiction, which completes the proof of the case $j=1$.
The cases $j=2,3$ are proved in the same way. Let $Q_N$ be the same as above. For $j=2$, set $$\widetilde{u_N}(\xi,\tau)=\chi_{Q_N}(-\xi,-\tau),
\quad
\widetilde{v_N}(\xi,\tau)=\chi_{Q_N}(\xi,\tau),$$ and for $j=3$, set $$\widetilde{u_N}(\xi,\tau)=\chi_{Q_N}(-\xi,-\tau),
\quad
\widetilde{v_N}(\xi,\tau)=\chi_{Q_N}(-\xi,-\tau).$$ We omit the details of the cases $j=2,3$.
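For readers who wish to see the scaling behind the counterexample numerically, the following Python sketch (ours, not part of the original argument) Monte-Carlo estimates the weighted integrals over $Q_N$ and $R_N$ that enter the bounds above, for the illustrative choice $s=-1/4$, $b=3/4$; the sample size and the values of $N$ are arbitrary. The resulting lower bound on $\lVert u_Nv_N\rVert_{s,b-1}/(\lVert u_N\rVert_{s,b}\lVert v_N\rVert_{s,b})$ grows like $N^{-s}$, which is precisely the divergence exploited in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
bracket = lambda x: np.sqrt(1.0 + x**2)            # Japanese bracket <x>

def weighted_integral(N, s, exponent, region, n_samples=200_000):
    """Monte-Carlo estimate of  iint <tau - a(xi)>^exponent <xi>^{2s} dtau dxi  over Q_N or R_N,
    in the rotated coordinates u = xi1 + xi2, w = xi1 - xi2 (Jacobian 1/2)."""
    if region == "Q":
        u = rng.uniform(N, 2 * N, n_samples)
        w = rng.uniform(-1 / (4 * N), 1 / (4 * N), n_samples)
        tau = rng.uniform(-0.5, 0.5, n_samples)
        volume = N * (1 / (2 * N)) * 1.0 * 0.5       # Lebesgue measure of Q_N = 1/4
    else:
        u = rng.uniform(-N / 2, N / 2, n_samples)
        w = rng.uniform(-1 / (8 * N), 1 / (8 * N), n_samples)
        tau = rng.uniform(-0.25, 0.25, n_samples)
        volume = N * (1 / (4 * N)) * 0.5 * 0.5       # Lebesgue measure of R_N = 1/16
    a = u * w                                        # a(xi) = xi1^2 - xi2^2
    xi = np.sqrt((u**2 + w**2) / 2.0)                # |xi|
    return volume * np.mean(bracket(tau - a) ** exponent * bracket(xi) ** (2 * s))

s, b = -0.25, 0.75                                   # illustrative choice with s < 0
c = 2.0 ** (-6.5) * np.pi ** (-1.5)                  # constant in the pointwise bound on (u_N v_N)^~
for N in (10, 100, 1000, 10000):
    norm_uN = np.sqrt(weighted_integral(N, s, 2 * b, "Q"))               # = ||u_N||_{s,b} = ||v_N||_{s,b}
    conv_lower = c * np.sqrt(weighted_integral(N, s, 2 * (b - 1), "R"))  # <= ||u_N v_N||_{s,b-1}
    print(f"N = {N:5d}:  ratio bound {conv_lower / norm_uN**2:8.3f},  N^(-s) = {N**(-s):8.3f}")
```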
The author would like to thank Hiroyuki Chihara for a number of valuable suggestions and for his encouragement.
[10]{}
Bourgain, J., Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations. [*Geom. Funct. Anal.*]{} 3 (1993), 107 – 156.
Bourgain, J., Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. II. The KdV-equation. [*Geom. Funct. Anal.*]{} 3 (1993), 209 – 262.
Chihara, H., Smoothing effects of dispersive pseudodifferential equations. [*Comm. Partial Diff. Eqs.*]{} 27 (2002), 1953 – 2005.
Colliander, J. E., Delort, J.-M., Kenig, C. E. and Staffilani, G., Bilinear estimates and applications to 2D NLS. [*Trans. Amer. Math. Soc.*]{} 353 (2001), 3307 – 3325.
Davey, A. and Stewartson, K., On three dimensional packets of surface waves. [*Proc. R. Soc. London Ser. A*]{} 338 (1974), 101 – 110.
Ghidaglia, J.-M. and Saut, J.-C., Nonelliptic Schrödinger equations. [*J. Nonlinear Sci.*]{} 3 (1993), 169 – 195.
Ginibre, J. and Velo, G., On the global Cauchy problem for some nonlinear Schrödinger equations. [*Ann. Inst. H. Poincaré Anal. Non Linéaire*]{} 1 (1984), 309 – 329.
Ishimori, Y., Multivortex solutions of a two-dimensional nonlinear wave equation. [*Progr. Theoret. Phys.*]{} 72 (1984), 33 – 37.
Kenig, C. E., Ponce, G. and Vega, L., Oscillatory integrals and regularity of dispersive equations. [*Indiana Univ. Math. J.*]{} 40 (1991), 33 – 69.
Kenig, C. E., Ponce, G. and Vega, L., Quadratic forms for the $1$-D semilinear Schrödinger equation. [*Trans. Amer. Math. Soc.*]{} 348 (1996), 3323 – 3353.
Kenig, C. E., Ponce, G. and Vega, L., A bilinear estimate with applications to the KdV equation. [*J. Amer. Math. Soc.*]{} 9 (1996), 573 – 603.
Nakanishi, K., Takaoka, H. and Tsutsumi, Y., Counterexamples to bilinear estimates related with the KdV equation and the nonlinear Schrödinger equation. [*Methods Appl. Anal.*]{} 8 (2001), 569 – 578.
Sogge, C. E., [*Fourier Integrals in Classical Analysis*]{}. Cambridge: Cambridge Univ. Press 1993.
Strichartz, R. S., Restrictions of Fourier transforms to quadratic surfaces and decay of solutions of wave equations. [*Duke Math. J.*]{} 44 (1977), 705 – 714.
Tao, T., Multilinear weighted convolution of $L^2$ functions, and applications to non-linear dispersive equations. [*Amer. J. Math.*]{} 123 (2001), 839 – 908.
Tao, T., Local and global well-posedness for nonlinear dispersive equations. [*Proc. Centre Math. Appl. Austral. Nat. Univ.*]{} 40 (2002), 19 – 48.
Tao, T. and Vargas, A., A bilinear approach to cone multipliers. I. Restriction estimates. [*Geom. Funct. Anal.*]{} 10 (2000), 185 – 215.
Vargas, A., Restriction theorems for a surface with negative curvature. [*Math. Z.*]{} 249 (2005), 97 – 111.
[^1]: The author is supported by the JSPS Research Fellowships for Young Scientists and the JSPS Grant-in-Aid for Scientific Research No.19$\cdot$3304.
---
abstract: '[The formation of singularities on a free surface of a conducting ideal fluid in a strong electric field is considered. It is found that the nonlinear equations of two-dimensional fluid motion can be solved in the small-angle approximation. This enables us to show that for almost arbitrary initial conditions the surface curvature becomes infinite in a finite time. ]{}'
---
[**Formation of Root Singularities on the Free Surface\
of a Conducting Fluid in an Electric Field**]{}
[**N. M. Zubarev**]{}
Electrohydrodynamic instability of a free surface of a conducting fluid in an external electric field \[1,2\] plays an essential role in the general problem of electric strength. The interaction of a strong electric field with the induced charges at the surface of the fluid (a liquid metal in applications) leads to avalanche-like growth of surface perturbations and, as a consequence, to the formation of regions of high energy concentration, whose destruction can be accompanied by intense emissive processes.
In this Letter we will show that the nonlinear equations of motion of a conducting fluid can be effectively solved in the approximation of small perturbations of the boundary. This allows us to study the nonlinear dynamics of the electrohydrodynamic instability and, in particular, the most physically meaningful singular solutions.
Let us consider an irrotational motion of a conducting ideal fluid with a free surface, $z=\eta(x,y,t)$, that occupies the region $-\infty<z\leq\eta(x,y,t)$, in an external uniform electric field $E$. We will assume the influence of gravitational and capillary forces to be negligibly small, which corresponds to the condition $$E^2\gg 8\pi\sqrt{g\alpha\rho},$$ where $g$ is the acceleration of gravity, $\alpha$ is the surface tension coefficient, and $\rho$ is the mass density.
The potential of the electric field $\varphi$ satisfies the Laplace equation, $$\Delta\varphi=0,$$ with the following boundary conditions, $$\varphi\to -Ez, \qquad z\to\infty,$$ $$\varphi=0, \qquad z=\eta.$$ The velocity potential $\Phi$ satisfies the incompressibility equation $$\Delta\Phi=0,$$ which one should solve together with the dynamic and kinematic relations on the free surface, $$\frac{\partial\Phi}{\partial t}+\frac{(\nabla\Phi)^2}{2}=
\frac{(\nabla\varphi)^2}{8\pi\rho}+F(t), \qquad z=\eta,$$ $$\frac{\partial\eta}{\partial t}=\frac{\partial\Phi}{\partial z}
-\nabla\eta\cdot\nabla\Phi,
\qquad z=\eta,$$ where $F$ is some function of the variable $t$, and the boundary condition $$\Phi\to 0, \qquad z\to-\infty.$$ The quantities $\eta(x,y,t)$ and $\psi(x,y,t)=\Phi|_{z=\eta}$ are canonically conjugate, so that the equations of motion take the Hamiltonian form \[3\], $$\frac{\partial\psi}{\partial t}=-\frac{\delta H}{\delta\eta},
\qquad
\frac{\partial\eta}{\partial t}=\frac{\delta H}{\delta\psi},$$ where the Hamiltonian $$H=\int\limits_{z\leq\eta}\frac{(\nabla\Phi)^2}{2} d^3 r
-\int\limits_{z\geq\eta}\frac{(\nabla\varphi)^2}{8\pi\rho} d^3 r$$ coincides with the total energy of a system. With the help of the Green formula it can be rewritten as the surface integral, $$H=\int\limits_{s}\left[\frac{\psi}{2}\,\frac{\partial\Phi}{\partial n}+
\frac{E\eta}{8\pi\rho}\,\frac{\partial\tilde\varphi}{\partial n}\right]ds,$$ where $\tilde\varphi=\varphi+Ez$ is the perturbation of the electric field potential; $ds$ is the surface differential.
Let us assume $|\nabla\eta|\ll 1$, which corresponds to the approximation of small surface angles. In such a case we can expand the integrand in a power series of canonical variables $\eta$ and $\psi$. Restricting ourselves to quadratic and cubic terms we find after scale transformations $$t\to t E^{-1}(4\pi\rho)^{1/2},
\quad
\psi\to\psi E/(4\pi\rho)^{1/2},
\quad
H\to HE^2/(4\pi\rho)$$ the following expression for the Hamiltonian, $$H=\frac{1}{2}\int\left[\psi\hat k\psi+
\eta\left((\nabla\psi)^2-(\hat k\psi)^2\right)\right] d^2 r$$ $$-\frac{1}{2}\int\left[\eta\hat k\eta-\eta\left((\nabla\eta)^2-
(\hat k\eta)^2\right)\right] d^2 r.$$ Here $\hat k$ is the integral operator with the difference kernel, whose Fourier transform is the modulus of the wave vector, $$\hat{k}f=-\frac{1}{2\pi}\!\int\limits_{-\infty}^{+\infty}
\int\limits_{-\infty}^{+\infty}
\frac{f(x',y')}{\left[(x'-x)^2+(y'-y)^2\right]^{3/2}}\,dx'dy'.$$ The equations of motion, corresponding to this Hamiltonian, take the following form, $$\psi_t-\hat k\eta=\frac{1}{2}\left[(\hat k\psi)^2-(\nabla\psi)^2+
(\hat k\eta)^2-(\nabla\eta)^2\right]+
\hat k(\eta\hat k\eta)+\nabla(\eta\nabla\eta),$$ $$\eta_t-\hat k\psi=-\hat k(\eta\hat k\psi)-\nabla(\eta\nabla\psi).$$ Subtraction of Eqs. (2) and (1) gives in the linear approximation the relaxation equation $$(\psi-\eta)_t=-\hat k(\psi-\eta),$$ whence it follows that we can set $\psi=\eta$ in the nonlinear terms of Eqs. (1) and (2), which allows us to simplify the equations of motion. Actually, adding Eqs. (1) and (2) we obtain an equation for a new function $f=(\psi+\eta)/2$, $$f_t-\hat k f=\frac{1}{2}\,(\hat k f)^2-\frac{1}{2}\,(\nabla f)^2,$$ which corresponds to the consideration of the growing branch of the solutions. As $f=\eta$ in the linear approximation, Eq. (3) governs the behavior of the elevation $\eta$.
First we consider the one-dimensional case when function $f$ depends only on $x$ (and $t$) and the integral operator $\hat k$ can be expressed in terms of the Hilbert transform $\hat H$, $$\hat k=-\frac{\partial}{\partial x}\,\hat H,
\qquad
\hat{H}f=\frac{1}{\pi}\,\mbox{P}\!\!\int\limits_{-\infty}^{+\infty}
\frac{f(x')}{x'-x}\,dx',$$ where P denotes the principal value of the integral. As a result, Eq. (3) can be rewritten as $$f_t+\hat H f_x=\frac{1}{2}\,(\hat H f_x)^2-\frac{1}{2}\,(f_x)^2.$$ It should be noted that if one introduces a new function $\tilde f=\hat H f$, then Eq. (4) transforms into the equation proposed in Ref. \[4\] for the description of the nonlinear stages of the Kelvin-Helmholtz instability.
For further consideration it is convenient to introduce a function, analytically extendable into the upper half-plane of the complex variable $x$, $$v=\frac{1}{2}\,(1-i\hat H)f_x.$$ Then Eq. (4) takes the form $$\mbox{Re}\left(v_t+iv_x+2vv_x\right)=0,$$ that is, the investigation of the integro-differential equation (4) amounts to the analysis of the partial differential equation $$v_t+iv_x+2vv_x=0,$$ which describes wave breaking in the complex plane. Let us study this process in analogy with \[5,6\], where a similar problem was considered. Eq. (5) can be solved by the standard method of characteristics, $$v=Q(x'),$$ $$x=x'+it+2Q(x')t,$$ where the function $Q$ is determined by the initial conditions. It is clear that in order to obtain an explicit form of the solution we must resolve Eq. (7) with respect to $x'$. The mapping $x\to x'$ defined by Eq. (7) becomes ambiguous if $\partial x/\partial x'=0$ at some point, i.e. $$1+2Q_{x'}t=0.$$ Solution of (8) gives a trajectory $x'=x'(t)$ in the complex plane of $x'$. The motion of the branch points of the function $v$ is then given by $$x(t)=x'(t)+it+2Q(x'(t))t.$$ At the moment $t_0$ when a branch point touches the real axis, the analyticity of $v(x,t)$ in the upper half-plane of the variable $x$ breaks down, and a singularity appears in the solution of Eq. (4).
Let us consider the solution behavior close to the singularity. Expansion of (6) and (7) in a small vicinity of $x=x(t_0)$ up to the leading orders gives $$v=Q_0-\delta x'/(2t_0),$$ $$\delta x=i\delta t+2Q_0\delta t+Q''t_0(\delta x')^2,$$ where $Q_0=Q(x'(t_0))$, $Q''=Q_{x'x'}(x'(t_0))$, $\delta x\!=\!x\!-\!x(t_0)$, $\delta x'\!=\!x'\!-\!x'(t_0)$, and $\delta t\!=\!t\!-\!t_0$. Eliminating $\delta x'$ from these equations, we find that close to the singularity $v_x$ can be represented in the self-similar form ($\delta x\sim\delta t$), $$v_x=-\left[16Q''t_0^3
(\delta x-i\delta t-2Q_0\delta t)\right]^{-1/2}.$$ As $\mbox{Re}(v)=\eta/2$ in the linear approximation, we have at $t=t_0$ $$\eta_{xx}\sim|\delta x|^{-1/2},$$ that is, the surface curvature becomes infinite in a finite time. It should be mentioned that such behavior of the charged surface is similar to the behavior of a free surface of an ideal fluid in the absence of external forces \[5,6\], though the singularities are of a different nature (in the latter case the singularity formation is connected with inertial forces).
Let us show that the solutions corresponding to the root singularity regime are consistent with the applicability condition of the truncated equation (3). Let $Q(x')$ be a rational function with one pole in the lower half-plane, $$Q(x')=-\frac{is}{2(x'+iA)^2},$$ which corresponds to a spatially localized one-dimensional perturbation of the surface ($s>0$ and $A>0$). The characteristic surface angles are assumed to be small, $\gamma\approx s/A^2\ll 1$.
It is clear from the symmetries of (9) that the most rapid branch point touches the real axis at $x=0$. Then the critical moment $t_0$ can be found directly from Eqs. (7) and (8). Expansion of $t_0$ with respect to the small parameter $\gamma$ gives $$t_0\approx A\left[1-3(\gamma/4)^{1/3}\right].$$ Taking into account that the evolution of the surface perturbation can be described by the approximate formula $$\eta(x,t)=\frac{s(A-t)}{(A-t)^2+x^2},$$ we have for the dynamics of the characteristic angles $$\gamma(t)\approx\frac{s}{(A-t)^2}.$$ Then, substituting the expression for $t_0$ (10) into this formula, we find that, to the required accuracy, at the moment of singularity formation $$\gamma(t_0)\sim\gamma^{1/3},$$ that is, the angles remain small and the root singularities are consistent with our assumption about small surface angles.
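As a numerical illustration (ours, not part of the original Letter), the branch-point construction of Eqs. (7)-(9) can be followed directly on a computer: for the pole ansatz (9), condition (8) is a cubic equation for $x'+iA$, and the touching time $t_0$ is found by bisection on the largest imaginary part of the mapped branch points. The Python sketch below uses $A=1$ and a few values of $\gamma=s/A^2$ (our choices); its output approaches the asymptotic expression for $t_0$ above as $\gamma$ decreases.

```python
import numpy as np

def branch_points(t, s, A):
    """Roots of 1 + 2 Q'(x') t = 0 for Q(x') = -i s / (2 (x'+iA)^2), i.e. (x'+iA)^3 = -2ist,
    mapped to the x-plane through x = x' + it + 2 Q(x') t."""
    w = np.roots([1.0, 0.0, 0.0, 2j * s * t])      # w = x' + iA
    xp = w - 1j * A
    Q = -1j * s / (2.0 * w**2)
    return xp + 1j * t + 2.0 * Q * t

def touching_time(s, A):
    """Bisect for the first time at which a branch point reaches the real x-axis."""
    f = lambda t: max(z.imag for z in branch_points(t, s, A))
    lo, hi = 1e-9, A                               # f(lo) < 0 < f(A) for the small angles used here
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

A = 1.0
for gamma in (1e-2, 1e-3, 1e-4):
    s = gamma * A**2
    t0 = touching_time(s, A)
    t0_asymptotic = A * (1.0 - 3.0 * (gamma / 4.0) ** (1.0 / 3.0))
    print(f"gamma = {gamma:.0e}:  t0 = {t0:.5f},  asymptotic {t0_asymptotic:.5f}")
```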
In conclusion, we would like to consider the more general case where the weak dependence of all quantities from the spatial variable $y$ is taken into account. One can find that if the condition $|k_x|\ll|k_y|$ holds for the characteristic wave numbers, then the evolution of the fluid surface is described by an equation $$\left[v_t+iv_x+2vv_x\right]_x=-iv_{yy}/2,$$ which extends Eq. (5) to the two-dimensional case.
An interesting group of particular solutions of this equation can be found with the help of substitution $v(x,y,t)=w(z,t)$, where $$z=x-\frac{i}{2}\,\frac{(y-y_0)^2}{t}.$$ The equation for $w$ looks like $$w_t+iw_z+2ww_z=-w/(2t).$$ It is integrable by the method of characteristics, so that we can study the analyticity violation similarly to the one-dimensional case. Considering a motion of branch points in the complex plane of the variable $z$ we find that a singularity arises at some moment $t_0<0$ at the point $y_0$ along the $y$-axis. Close to the singular point at the critical moment $t=t_0$ we get $$\left.\eta_{xx}\right|_{\delta y=0}\sim|\delta x|^{-1/2},
\qquad
\left.\eta_{xx}\right|_{\delta x=0}\sim|\delta y|^{-1}.$$ This means that in the examined quasi-two-dimensional case the second derivative of the surface profile becomes infinite at a single isolated point.
Thus, the consideration of the behavior of a conducting fluid surface in a strong electric field shows that the nonlinearity determines the tendency for the formation of singularities of the root character, corresponding to the surface points with infinite curvature. We can assume that such weak singularities serve as the origin of the more powerful singularities observed in the experiments \[7,8\].
I would like to thank A.M. Iskoldsky and N.B. Volkov for helpful discussions, and E.A. Kuznetsov for attracting my attention to Refs. \[5,6\]. This work was supported by Russian Foundation for Basic Research, Grant No. 97–02–16177.
[**References**]{}
1. L. Tonks, Phys. Rev. 48 (1935) 562.
2. Ya.I. Frenkel, Zh. Teh. Fiz. 6 (1936) 347.
3. V.E. Zakharov, J. Appl. Mech. Tech. Phys. 2 (1968) 190.
4. S.K. Zhdanov and B.A. Trubnikov, Sov. Phys. JETP 67 (1988) 1575.
5. E.A. Kuznetsov, M.D. Spector, and V.E. Zakharov, Phys. Lett. A 182 (1993) 387.
6. E.A. Kuznetsov, M.D. Spector, and V.E. Zakharov, Phys. Rev. E 49 (1994) 1283.
7. M.D. Gabovich and V.Ya. Poritsky, JETP Lett. 33, (1981) 304.
8. A.V. Batrakov, S.A. Popov, and D.I. Proskurovsky, Tech. Phys. Lett. 19 (1993) 627.
---
abstract: 'Stochastic resetting is prevalent in natural and man-made systems giving rise to a long series of non-equilibrium phenomena. Diffusion with stochastic resetting serves as a paradigmatic model to study these phenomena, but the lack of a well-controlled platform by which this process can be studied experimentally has been a major impediment to research in the field. Here, we report the experimental realization of colloidal particle diffusion and resetting via holographic optical tweezers. This setup serves as a proof-of-concept which opens the door to experimental study of resetting phenomena. It also vividly illustrates why existing theoretical models must be improved and revised to better capture the real-world physics of stochastic resetting.'
author:
- 'Ofir Tal-Friedman$^{1}$'
- 'Arnab Pal$^{2,3}$'
- 'Amandeep Sekhon$^{2}$'
- 'Shlomi Reuveni$^{2,3}$'
- 'Yael Roichman$^{1,2}$'
title: Experimental realization of diffusion with stochastic resetting
---
=1
Stochastic resetting is ubiquitous in nature, and has recently been the subject of vigorous studies in physics [@Evans2011_1; @Evans2011_2; @Evans2011_3], chemistry [@Restart-Biophysics1; @Restart-Biophysics2; @Restart-Biophysics6], biological physics [@Restart-Biophysics3; @Restart-Biophysics8], computer science [@restart-CS1; @restart-CS2], queuing theory [@queue1; @queue2] and other cross-disciplinary fields (see [@review] for an extensive account of recent developments). A stylized model to study resetting phenomena was proposed by Evans and Majumdar in 2011 [@Evans2011_1]. The model, which considers a diffusing particle subject to stochastic resetting, exhibits many rich properties, e.g., the emergence of a non-equilibrium steady state and interesting relaxation dynamics [@Evans2011_1; @Evans2011_2; @Evans2011_3; @Evans2014_3; @Pal2016_1; @relaxation1; @relaxation2; @local] which were also observed in other systems subject to stochastic resetting [@restart_conc3; @SEP; @return1; @return2; @return3; @return4; @return5; @Bod1; @Bod2]. The model is also pertinent to the study of search and first-passage time (FPT) questions [@RednerBook; @Schehr-review]. In particular, it was used to show that resetting can significantly reduce the mean FPT of a diffusing particle to a target by mitigating the deleterious effect of large FPT fluctuations that are intrinsic to diffusion in the absence of resetting [@Evans2011_1; @Evans2011_2; @Evans2011_3; @review; @Pal2016_1; @Ray; @interval]. Interestingly, this beneficial effect of resetting also extends beyond free diffusion and applies to many other stochastic processes [@review; @return5; @Bod1; @Bod2; @return3; @return4; @ReuveniPRL; @PalReuveniPRL; @branching_II; @Restart-Search1; @Restart-Search2; @Chechkin; @Landau; @HRS]; further studies have moreover revealed a genre of universality relations associated with optimally restarted processes as well as the existence of a globally optimal resetting strategy [@Restart-Biophysics1; @Restart-Biophysics2; @ReuveniPRL; @PalReuveniPRL; @Chechkin; @branching_II; @Landau; @HRS].
![Experimental realization of diffusion with stochastic resetting. a) A sample trajectory of a silica particle diffusing (blue) near the bottom of a sample cell. The particle sets off from the origin and is stochastically reset at a rate $r=0.05 s^{-1}$. Following a resetting epoch, the particle is driven back to the origin at a constant radial velocity $v=0.8 \mu m/s$ using HOTs (red). After the particle arrives at the origin it remains trapped there for a short period of time to improve localization (green). Inset shows a schematic illustration of the experiment. b) Projection of the particle’s trajectory onto the $x$-axis.[]{data-label="Fig:expt"}](Fig1_slow.pdf)
Despite a long catalogue of theoretical studies dedicated to stochastic resetting, no attempt to experimentally study resetting in a controlled environment has been made to date. This is needed as resetting in the real world is never ‘clean’ as in theoretical models which glance over physical complications for the sake of analytical tractability and elegance. In this letter, we report the experimental realization of diffusion with stochastic resetting ([Fig. \[Fig:expt\]]{}). Our setup comprises of a colloidal particle suspended in fluid (in quasi-two dimensions) and resetting is implemented via holographic optical tweezers (HOTs) [@Dufresne2001; @Polin2005; @Grier06; @Crocker1996]. We study two, physically amenable, resetting protocols in which the particle is returned to the origin: (i) at a constant velocity, and (ii) within a constant time. In both cases, resetting is stochastic and time intervals between resetting events come from an exponential distribution with mean $1/r$.
In what follows, we utilize the setup in [Fig. \[Fig:expt\]]{} to study two different statistical measures of diffusion with stochastic resetting. First, we study the long time position distribution of a tagged particle and how it depends on the resetting protocol. Then, we study the mean FPT of a tagged particle to a region in space. Finally, we also consider the work and energy required to implement resetting in our system. In all cases, we discover that existing theoretical models must be extended and revised to better capture the physics of stochastic resetting in the real world. We conclude with discussion and outlook on the future of experimental studies of stochastic resetting.
Our experimental setup is based on a home-built holographic optical tweezers (HOTs) system. It uses a spatial light modulator (Hamamatsu, X10468-04) to imprint a computer-generated phase pattern on an expanded laser beam (Coherent, Verdi $\lambda=532$nm). The beam is then projected onto the back aperture of a 100x objective (oil immersion, NA = 1.4) mounted on an Olympus microscope (IX71). Samples consist of a dilute colloidal suspension of spherical silica particles with a diameter of $d=1.5\pm 0.02\mu m$ and a refractive index of $n_p=1.46$ (Kisker Biotech, lot\# GK0611140 02) in double-distilled water sealed between a glass slide and a coverslip with a sample thickness of approximately $20\mu$m. Motion of the particle (confined to a quasi-two-dimensional geometry) is recorded by a CMOS camera (Grasshopper 3, Point Gray) at a rate of 20 fps. Particle position is extracted using conventional video microscopy algorithms [@Crocker1996] with an accuracy of approximately 30nm. A laser power of $1$W was used to ensure sufficient trapping of the particle. We utilize in-house developed programs for hardware control and data analysis.
We start our experiments by realizing diffusion with stochastic resetting in the following manner. Every experiment starts by drawing a series of random resetting times $\{t_1,t_2,t_3,...\}$ taken from an exponential distribution with mean $1/r$. At time zero, the particle is trapped at the origin and the experiment, which consists of a series of statistically identical steps, begins. At the $i$-th step of the experimental protocol, the particle is allowed to diffuse for a time $t_i$ eventually arriving at a position $(x_i,y_i)$. At this time, an optical trap is projected onto the particle and the particle is dragged by the trap to its initial position. A typical trajectory of a colloidal particle performing diffusion under stochastic resetting with $r=0.05s^{-1}$ is shown in [Fig. \[Fig:expt\]]{}a (see also Supplementary movie 1). Note that the trajectory is composed of three phases of motion: diffusion, return, and a short waiting time to allow for optimal localization at the origin ([Fig. \[Fig:expt\]]{}b). To collect sufficient statistics, we perform approximately 450 resetting events for each constant velocity experiment, and 305 events for the constant return time experiment.
*Stochastic resetting with instantaneous returns.—* We first utilize our setup to study a canonical (yet non-physical) case in which upon resetting the particle is teleported back to the origin in zero time. This case was the first to be analysed theoretically [@Evans2011_1], thus providing a benchmark for experimental results. To obtain trajectories of diffusion with stochastic resetting and instantaneous returns we digitally remove the return (red) and wait (green) phases of motion from the experimentally measured trajectories ([Fig. \[Fig:expt\]]{}b). A sample trajectory obtained via this procedure is shown in Fig. S1.
![Steady-state distribution of diffusion with stochastic resetting and instantaneous returns. a) Distribution of the position along the $x$-axis. Markers come from experiments and the dashed line is the theoretical prediction of Eq. (1). b) The radial position distribution. Markers come from experiments and the dashed line is the theoretical prediction $\rho(R)=\alpha_0^2RK_0(\alpha_0R)$ [@SM] with $K_n(z)$ standing for the modified Bessel function of the second kind [@Stegun]. In both panels no fitting procedure was applied: $D=0.18\pm 0.02 \mu m^2/s$ was measured independently and $r=0.05s^{-1}$ was set by the operator.[]{data-label="Fig:ssDist"}](1DinstantPDFv2.pdf "fig:"){width="4.25cm" height="3.25cm"} ![Steady-state distribution of diffusion with stochastic resetting and instantaneous returns. a) Distribution of the position along the $x$-axis. Markers come from experiments and the dashed line is the theoretical prediction of Eq. (1). b) The radial position distribution. Markers come from experiments and the dashed line is the theoretical prediction $\rho(R)=\alpha_0^2RK_0(\alpha_0R)$ [@SM] with $K_n(z)$ standing for the modified Bessel function of the second kind [@Stegun]. In both panels no fitting procedure was applied: $D=0.18\pm 0.02 \mu m^2/s$ was measured independently and $r=0.05s^{-1}$ was set by the operator.[]{data-label="Fig:ssDist"}](1DinstantPDFradialv2.pdf "fig:"){width="4.25cm" height="3.25cm"}
A particle undergoing free Brownian motion is not bound in space. It has a Gaussian position distribution with a variance that grows linearly with time. Repeated resetting of the particle to its initial position will, however, result in effective confinement and in a non-Gaussian steady state distribution [@Evans2011_1; @Evans2011_2]. Estimating the steady state distribution of the particle’s position along the $x$-axis from recorded trajectories (see SI for details), we find (Fig. 2a) that the experimentally measured results conform with the theoretical result derived by Evans and Majumdar [@Evans2011_1; @Evans2011_2] $$\rho(x)=\frac{\alpha_0}{2}e^{-\alpha_0|x|}~,
\label{Eq:SS}$$ where $\alpha_0=\sqrt{r/D}$ is an inverse length scale corresponding to the typical distance diffused by the particle in the time between two resetting events, and $D$ is the diffusion constant. The steady state radial density of the particle can also be extracted from the experimental trajectories by looking at the steady-state distribution of the distance $R=\sqrt{x^2+y^2}$ from the origin. Here too, we find excellent agreement with the theoretical result (Fig. 2b).
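The Laplace form of the steady state can also be reproduced in silico. The following Python sketch (ours, not part of the paper) simulates two-dimensional Brownian motion with Poissonian resetting and instantaneous returns, using the experimental values $D=0.18\,\mu m^2/s$ and $r=0.05\,s^{-1}$; the time step, particle number and run length are arbitrary choices, and the comparison is only meant as a qualitative check.

```python
import numpy as np

rng = np.random.default_rng(1)
D, r, dt = 0.18, 0.05, 0.05                    # um^2/s, 1/s, s
n_particles, n_steps = 20_000, 4_000           # ~200 s of dynamics per particle

pos = np.zeros((n_particles, 2))
for _ in range(n_steps):
    pos += np.sqrt(2 * D * dt) * rng.standard_normal(pos.shape)   # free diffusion step
    pos[rng.random(n_particles) < r * dt] = 0.0                   # instantaneous resetting to the origin

alpha0 = np.sqrt(r / D)
hist, edges = np.histogram(pos[:, 0], bins=80, density=True)      # empirical x-marginal
centers = 0.5 * (edges[1:] + edges[:-1])
laplace = 0.5 * alpha0 * np.exp(-alpha0 * np.abs(centers))        # steady-state prediction
print("max |empirical - Laplace| density:", np.max(np.abs(hist - laplace)))
```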
*Stochastic resetting with non-instantaneous returns.—* We now turn our attention to more realistic pictures of diffusion with stochastic resetting. These have just recently been considered theoretically in an attempt to account for the non-instantaneous returns and waiting times that are seen in all physical systems that include resetting [@Restart-Biophysics1; @Restart-Biophysics2; @Restart-Biophysics6; @HRS; @return1; @return2; @return3; @return4; @return5]. First, we consider a case where upon resetting HOTs are used to return the particle to the origin at a constant radial velocity $v=\sqrt{v_x^2+v_y^2}$ ([Fig. \[Fig:expt\]]{}). This case naturally arises for resetting by a constant force in the over-damped limit. We find that the radial steady-state density is then given by [@SM] $$\rho(R)=p_D^{c.v.}\,\rho_{\text{diff}}(R)+(1-p_D^{c.v.})\,\rho_{\text{ret}}(R),
\label{radial-constant-velocity}$$ where $p_D^{c.v.}=\left(1+\frac{\pi r}{2 \alpha_0 v} \right)^{-1}$ is the steady-state probability to find the particle in the diffusive phase, while $\rho_{\text{diff}}(R)=\alpha_0^2 R K_0(\alpha_0R)$ and $\rho_{\text{ret}}(R)=\frac{2\alpha_0^2}{\pi}RK_1(\alpha_0R)$ stand for the conditional probability densities of the particle’s position when in the diffusive and return phases, respectively. Here $K_{n}(z)$ is once again the modified Bessel function of the second kind [@Stegun]. The result in [Eq. (\[radial-constant-velocity\])]{} is in very good agreement with experimental data as shown in [Fig. \[non-inst\]]{}a and Fig. S3.
![Steady-state distributions of diffusion with stochastic resetting and non-instantaneous returns. a) The radial position distribution, $\rho(R)$, as a function of the distance $R$ and the radial return velocity $v$ as given by [Eq. (\[radial-constant-velocity\])]{}. Experimental results of a realization with $v=0.8\mu m/s$ are superimposed on the theoretical prediction (black spheres). b) The radial position distribution as a function of $R$ and the return time $\tau_0$ as given by [Eq. (\[radial-constant-time\])]{}. Experimental results of a realization with $\tau_0=3.79 s$ are superimposed on the theoretical prediction (black spheres).[]{data-label="non-inst"}](3Dradialjoint1.pdf){width="8.5cm"}
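A similar, equally non-authoritative, simulation can be used to check Eq. (\[radial-constant-velocity\]): the Python sketch below alternates Exp($r$)-distributed diffusive excursions with deterministic radial returns at speed $v$, and histograms the time-averaged radial distance. $D$, $r$ and $v$ follow the experiment; the time step and the total simulated time are arbitrary choices, and `scipy` is assumed to provide the modified Bessel functions $K_0$ and $K_1$.

```python
import numpy as np
from scipy.special import k0, k1

rng = np.random.default_rng(3)
D, r, v, dt, T_total = 0.18, 0.05, 0.8, 0.01, 5.0e4    # um^2/s, 1/s, um/s, s, s
alpha0 = np.sqrt(r / D)

t, pos, samples = 0.0, np.zeros(2), []
while t < T_total:
    # diffusive phase: free diffusion for an Exp(r)-distributed time
    n = max(int(rng.exponential(1.0 / r) / dt), 1)
    path = pos + np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal((n, 2)), axis=0)
    samples.append(np.hypot(path[:, 0], path[:, 1]))
    # return phase: the radial distance shrinks linearly at speed v until the origin is reached
    R0 = np.hypot(path[-1, 0], path[-1, 1])
    samples.append(R0 - v * dt * np.arange(1, int(R0 / (v * dt)) + 1))
    t += n * dt + R0 / v
    pos = np.zeros(2)

R = np.concatenate(samples)
hist, edges = np.histogram(R, bins=80, density=True)
c = 0.5 * (edges[1:] + edges[:-1])
pD = 1.0 / (1.0 + np.pi * r / (2.0 * alpha0 * v))       # steady-state weight of the diffusive phase
rho = pD * alpha0**2 * c * k0(alpha0 * c) + (1.0 - pD) * 2.0 * alpha0**2 / np.pi * c * k1(alpha0 * c)
print("max |simulated - predicted| radial density:", np.max(np.abs(hist - rho)))
```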
Next, we consider a case where upon resetting HOTs are used to return the particle to the origin within a constant time $\tau_0$, irrespective of the particle’s position at the resetting epoch. This case is appealing due to its simplicity and ease of experimental implementation. Here too, we find that the radial steady-state position distribution can be put in a closed form which reads [@SM] $$\rho(R)=p_D^{c.t.}\,\rho_{\text{diff}}(R)+(1-p_D^{c.t.})\,\rho_{\text{ret}}(R),
\label{radial-constant-time}$$ where $ p_D^{c.t.}=(1+r \tau_0)^{-1}$ is the steady-state probability to find the particle in the diffusive phase, and with $\rho_{\text{diff}}(R)=\alpha_0^2 R K_0(\alpha_0R)$ and $\rho_{\text{ret}}(R)=\frac{\pi \alpha_0^2}{2}\left[ \frac{1}{\alpha _0}-R \left[K_0\left( \alpha _0 R \right) \pmb{L}_{-1}\left( \alpha _0 R \right)
+K_1\left( \alpha _0 R \right) \pmb{L}_0\left( \alpha _0 R \right)\right] \right]$, standing for the conditional probability densities of the particle’s radial position when in the diffusive and return phases, respectively. Here, $\pmb{L}_n$ is the modified Struve function of order $n$ [@Stegun]. The result in [Eq. (\[radial-constant-time\])]{} is in very good agreement with experimental data as shown in [Fig. \[non-inst\]]{}b and Fig. S5.
Comparing the steady-state distributions for the constant time and constant velocity cases, we find that they are almost identical for short return times and high return speeds. Indeed, in these limits the two protocols are virtually indistinguishable as returns are effectively instantaneous. On the other extreme, i.e., for long return times and slow return speeds, marked differences are found between the distributions (Fig. S4 and S6).
*First Passage under stochastic resetting.—* Having realized diffusion with stochastic resetting and analyzed its stationary properties, we now turn to study how resetting affects the first-passage statistics of a Brownian particle. First-passage processes have numerous applications in natural sciences as they are used to describe anything from chemical reactions to single‐cell growth and division, and everything from transport dynamics to search and animal foraging [@Evans2011_3; @Restart-Biophysics1; @Restart-Biophysics2; @Restart-Biophysics6; @Restart-Biophysics3; @Restart-Biophysics8; @RednerBook; @Schehr-review; @ReuveniPRL; @PalReuveniPRL; @branching_II; @Restart-Search1; @Restart-Search2; @Chechkin; @Landau; @HRS; @Frinkes2010; @Branton2010; @Tu2013; @Bezrukov2000; @Grunwald2010; @Ghale2014; @Ma2013; @Iyer2014; @Ingraham1983; @Amir2014; @Osella2014; @cooper1991; @MetzlerBook]. To this end, it is known that while the mean first-passage time (MFPT) of a Brownian particle to a stationary target diverges [@RednerBook; @Schehr-review], resetting will render it finite [@Evans2011_1] even if the returns are non-instantaneous [@Restart-Biophysics1; @Restart-Biophysics2; @Restart-Biophysics6; @HRS; @return3; @return5].
To experimentally study first-passage under stochastic resetting, we consider the setup illustrated in [Fig. \[Fig:MFPT\]]{}a. In this set of experiments, similarly to the previous set, the experiment starts at time zero with the particle positioned at the origin. Resetting is conducted stochastically with rate $r$, and HOTs are used to return the particle to the origin at a constant return time $\tau_0$. However, we now also define a target, set to be a virtual infinite absorbing wall located at $x=L$, i.e., parallel to the $y$-axis. The particle is allowed to diffuse with stochastic resetting until it hits the target, and the hitting times (first-passage times) are recorded. Experiments were performed at 5 different resetting rates: $r=0.05s^{-1}$, $0.0667s^{-1}$, $0.125s^{-1}$, $0.5s^{-1}$, and $1s^{-1}$ with a constant return time of $\tau_0= 3.79s$. A typical trajectory extracted from such an experiment with $L = 1 \mu$m and $r=0.05s^{-1}$ is shown in Fig. \[Fig:MFPT\]b, Fig. S7, and Supplementary movie 2. We extract the FPTs from this and other trajectories from the duration of paths that start at the origin and end at the first crossing of the virtual wall (Fig. \[Fig:MFPT\]b). Several hundreds of resetting events were performed to gather enough FPT statistics (see SI for details).
To check agreement between data coming from FPT experiments and theory, we derived a formula for the mean FPT of diffusion with stochastic resetting and constant-time returns to the origin. This is given by [@SM] $$\langle T_r \rangle =\left( \frac{1}{r}+\tau_0 \right)\left(e^{\alpha_0 L}-1\right).
\label{Eq:MFPT_return}$$ Equation \[Eq:MFPT\_return\] is in excellent agreement with the experimental data as shown in [Fig. \[Fig:MFPT\]]{}c, including accurate prediction of the optimal resetting rate which minimizes the mean FPT of the particle to the target.
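As a consistency check of Eq. (\[Eq:MFPT\_return\]), the following event-driven Python sketch (ours, not from the paper) estimates the mean FPT by alternating diffusive excursions with constant-time returns during which absorption is assumed impossible; the $x$-projection of the two-dimensional motion is itself one-dimensional diffusion with the same $D$, so a one-dimensional simulation suffices. The time step, trial number and selected rates are arbitrary choices, and the finite time step introduces a small systematic bias.

```python
import numpy as np

rng = np.random.default_rng(4)
D, L, tau0, dt, n_trials = 0.18, 1.0, 3.79, 0.005, 3000   # um^2/s, um, s, s, trials

def first_passage_time(r):
    """One first-passage time to the wall at x = L under resetting at rate r."""
    t = 0.0
    while True:
        n = max(int(rng.exponential(1.0 / r) / dt), 1)     # diffusion steps until the next reset
        x = np.cumsum(np.sqrt(2 * D * dt) * rng.standard_normal(n))
        hit = np.flatnonzero(x >= L)
        if hit.size:                                        # the wall is reached before the reset
            return t + (hit[0] + 1) * dt
        t += n * dt + tau0                                  # reset epoch plus the return time tau0

for r in (0.05, 0.125, 0.5):
    sim = np.mean([first_passage_time(r) for _ in range(n_trials)])
    prediction = (1.0 / r + tau0) * (np.exp(np.sqrt(r / D) * L) - 1.0)
    print(f"r = {r:5.3f} 1/s:  simulated {sim:5.1f} s,  predicted {prediction:5.1f} s")
```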
*Work and energy.—*A central, and previously unexplored, aspect of stochastic resetting in physical systems concerns the energetic cost associated with the resetting process itself. As discussed above, stochastic resetting prevents a diffusing particle from spreading over the entire available space as it normally would. Instead, a localized, non-equilibrium, steady-state is formed; but the latter can only be maintained by working on the system continuously.
![Energetic cost of resetting. a) The radial distance from the origin vs. time for a particle diffusing with stochastic resetting at rate $r=0.05s^{-1}$ and constant radial return velocity $v=0.8\mu m/s$. b) The cumulative energy expenditure for the trajectory in panel a) (neglecting the cost of the wait period). c) The distribution of energy spent per resetting event. Red disks come from experiments and the theoretical prediction of [Eq. (\[Eq:Energy\])]{} is plotted as a solid blue line. d) Normalized energy spent per resetting event at constant power vs. the normalized radial return velocity as given by [Eq. (\[minmax\])]{}. The minimal energy is attained at a maximal velocity for which the trap is just barely strong enough to overcome the fluid drag force and prevent the particle from escaping the trap.[]{data-label="Fig:work"}](work_const_v.pdf)
In our experiments, work is done by the laser to capture the particle in an optical trap and drag it back to the origin. The total energy spent per resetting event is then simply given by $E=\mathcal{P}\tau(R)$, where $\mathcal{P}$ is the laser power fixed at 1W and $\tau(R)$ is the time required for the laser to trap the particle at a distance $R$ and bring it back to the origin. As the particle’s distance at the resetting epoch fluctuates randomly from one resetting event to another ([Fig. \[Fig:work\]]{}a), the energy spent per resetting event is also random ([Fig. \[Fig:work\]]{}b). To compute its distribution, we note that $E$ is proportional to the return time, whose probability density function is in turn given by [@SM] $$\varphi(t)=\int_{-\pi}^{\pi} d\theta \int_0^{\infty} dR\, R \int_0^{\infty}dt'\, G_0(R,t')f(t')\,\delta\big(t-\tau(R)\big),$$ where $f(t)=re^{-rt}$ is the resetting time density and $G_0(R,t)=\frac{1}{4\pi Dt}e^{-R^2/4Dt}$ is the diffusion propagator in polar coordinates. For the case of constant radial return velocity, $v$, we have $\tau(R)=R/v$. A simple derivation then yields the probability density of the energy spent per resetting event [@SM] $$\rho(E)=\frac{E}{E_0^2}\, K_0(E/E_0),
\label{Eq:Energy}$$ with $E_0=\alpha_0^{-1}v^{-1}\mathcal{P}$; and note that this is a special case of the K-distribution [@Redding; @Long]. The mean energy spent per resetting event can be computed directly from [Eq. (\[Eq:Energy\])]{} and is given by $\langle E \rangle=\pi E_0/2$. Equation (\[Eq:Energy\]) demonstrates good agreement with experimental data ([Fig. \[Fig:work\]]{}c).
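The K-distribution of Eq. (\[Eq:Energy\]) can be sampled directly, since the position at a resetting epoch is Gaussian with a variance set by the Exp($r$)-distributed reset time. The short Python sketch below (ours, with an arbitrary sample size) draws such positions, converts them to energies via $E=\mathcal{P}R/v$, and compares the sample mean and histogram with $\pi E_0/2$ and $\rho(E)$; `scipy` is assumed for $K_0$.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
D, r, v, P = 0.18, 0.05, 0.8, 1.0                                # um^2/s, 1/s, um/s, W
alpha0 = np.sqrt(r / D)
E0 = P / (alpha0 * v)                                            # energy scale in joules

T = rng.exponential(1.0 / r, size=500_000)                       # resetting epochs
xy = np.sqrt(2.0 * D * T)[:, None] * rng.standard_normal((T.size, 2))
E = P * np.hypot(xy[:, 0], xy[:, 1]) / v                         # energy per resetting event

print("sample <E> =", E.mean(), "   pi*E0/2 =", np.pi * E0 / 2)
hist, edges = np.histogram(E, bins=150, density=True)
c = 0.5 * (edges[1:] + edges[:-1])
print("max |sampled - K-distribution| density:",
      np.max(np.abs(hist - c / E0**2 * k0(c / E0))))
```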
As $\langle E \rangle \propto v^{-1}$, the average energy spent per resetting event in our experiment can be made smaller by working at higher return velocities. Note, however, that the stiffness of the optical trap should be strong enough to oppose the drag force acting on the particle so as to keep it in the trap. Assuming that the maximum allowed displacement of a particle in the trap is $\approx0.5\mu m$ [@Roichman07], we find that working conditions must obey $k\ge2\gamma v$. As the stiffness is proportional to the laser power, $k=\mathcal{C} \mathcal{P}$ (where $\mathcal{C}$ is the conversion factor), the maximal working velocity is given by $v_{\text{max}}\approx \frac{1}{2} \mathcal{C} \mathcal{P}/\gamma$ which, independent of laser power, minimizes the energy expenditure to $E_{\text{min}} \approx \pi \gamma \mathcal{C}^{-1} \alpha_0^{-1}$. Going to dimensionless variables we find $$\langle E \rangle/E_{\text{min}}=v_{\text{max}}/v
\label{minmax}$$ for $v<v_{\text{max}}$ ([Fig. \[Fig:work\]]{}d).
*Discussion and future outlook.—*In this study, we have demonstrated a unique and versatile method to experimentally realize a resetting process in which many parameters can be easily controlled. We have used this technique to verify an array of theoretical predictions, and to further motivate the derivation of new results that address novel considerations arising from the experimental realization of diffusion with stochastic resetting. Of prime importance in this regard is the energetic cost of resetting [@thermo1; @thermo2; @thermo3].
The optical trapping method used herein is far from being the most efficient way to apply force to a colloidal particle. In fact, in our experiments we used $1W$ of power at the laser output to create a trap of $k=30pN/\mu m$ for a silica bead of radius $a=0.75\mu m$. For experiments with a constant return velocity $v=0.8\mu m/s$ and resetting rate $r=0.05s^{-1}$, the average return time was $ \langle \tau(R) \rangle=3.68s$. This translates to an average energy expenditure of $\langle E\rangle=\mathcal{P}\langle \tau(R) \rangle=3.68\pm0.05J$ per resetting event. In contrast, the work done against friction to drag the particle at a constant velocity $v$ for a distance $R$ is given by $W_{\text{drag}}=\gamma v R$, where $\gamma=6\pi\eta a$ is the Stokes drag coefficient. Taking averages, we find that the work required per resetting event is given by $\langle W_{\text{drag}} \rangle= \gamma v \langle R \rangle = \pi \alpha_0^{-1} \gamma v/2$, which translates into $3.4\cdot10^{-20}J$ or $8.3 k_BT$ per resetting event. We thus see that $\langle W_{\text{drag}} \rangle \ll \langle E \rangle$, i.e., that the work required to reset the particle’s position is orders of magnitude smaller than the actual amount of energy spent when resetting is done using HOTs.
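The numbers quoted above can be reproduced with a few lines of arithmetic; the sketch below assumes the viscosity of water, $\eta\approx10^{-3}\,$Pa$\,$s, and a temperature of $\approx 295\,$K, neither of which is quoted in the text.

```python
import numpy as np

# Back-of-envelope check of the drag-work estimate (assumed values: water viscosity, room temperature).
eta, a = 1.0e-3, 0.75e-6                    # Pa*s, m (particle radius)
gamma = 6.0 * np.pi * eta * a               # Stokes drag coefficient, kg/s
D, r, v = 0.18e-12, 0.05, 0.8e-6            # m^2/s, 1/s, m/s
alpha0 = np.sqrt(r / D)                     # inverse length scale, 1/m
W_drag = np.pi * gamma * v / (2.0 * alpha0) # mean work against friction per resetting event, J
kT = 1.380649e-23 * 295.0
print(f"<W_drag> = {W_drag:.2e} J = {W_drag / kT:.1f} k_B T")   # ~3.4e-20 J, ~8 k_B T
```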
Concluding, we note that experimental research of stochastic resetting is still in its infancy with many open questions left to be answered by consecutive studies. To this end, the setup described above along with its future extensions provide a promising platform.
**Acknowledgments** The authors acknowledge Gilad Pollack for his help in coding the resetting protocol of the HOTs. Arnab Pal acknowledges support from the Raymond and Beverly Sackler Post-Doctoral Scholarship at Tel-Aviv University; and Somrita Ray for many fruitful discussions. Amandeep Sekhon acknowledges support from the Ratner center for single molecule studies. Shlomi Reuveni acknowledges support from the Azrieli Foundation, from the Raymond and Beverly Sackler Center for Computational Molecular and Materials Science at Tel Aviv University, and from the Israel Science Foundation (grant No. 394/19). Yael Roichman acknowledges support from the Israel Science Foundation (grant No. 988/17).
[99]{}
Evans, M.R. and Majumdar, S.N., 2011. Diffusion with stochastic resetting. Physical review letters, 106(16), p.160601.
Evans, M.R. and Majumdar, S.N., 2011. Diffusion with optimal resetting. Journal of Physics A: Mathematical and Theoretical, 44(43), p.435001.
Evans, M.R., Majumdar, S.N. and Mallick, K., 2013. Optimal diffusive search: nonequilibrium resetting versus equilibrium dynamics. Journal of Physics A: Mathematical and Theoretical, 46(18), p.185001.
Reuveni, S., Urbakh, M. and Klafter, J., 2014. Role of substrate unbinding in Michaelis-Menten enzymatic reactions. Proceedings of the National Academy of Sciences, 111(12), pp.4391-4396.
Rotbart, T., Reuveni, S. and Urbakh, M., 2015. Michaelis-Menten reaction scheme as a unified approach towards the optimal restart problem. Physical Review E, 92(6), p.060101.
Robin, T., Reuveni, S. and Urbakh, M., 2018. Single-molecule theory of enzymatic inhibition. Nature communications, 9(1), p.779.
Roldan, E., Lisica, A., Sanchez-Taltavull, D. and Grill, S.W., 2016. Stochastic resetting in backtrack recovery by RNA polymerases. Physical Review E, 93(6), p.062411.
Budnar, S., Husain, K.B., Gomez, G.A., Naghibosadat, M., Varma, A., Verma, S., Hamilton, N.A., Morris, R.G. and Yap, A.S., 2019. Anillin promotes cell contractility by cyclic resetting of RhoA residence kinetics. Developmental cell, 49(6), pp.894-906.
Luby, M., Sinclair, A. and Zuckerman, D., 1993. Optimal speedup of Las Vegas algorithms. Information Processing Letters, 47(4), pp.173-180.
Gomes, C.P., Selman, B. and Kautz, H., 1998. Boosting combinatorial search through randomization. AAAI/IAAI, 98, pp.431-437.
Di Crescenzo, A., Giorno, V., Nobile, A.G. and Ricciardi, L.M., 2003. On the M/M/1 queue with catastrophes and its continuous approximation. Queueing Systems, 43(4), pp.329-347.
Kumar, B.K. and Arivudainambi, D., 2000. Transient solution of an M/M/1 queue with catastrophes. Computers & Mathematics with applications, 40(10-11), pp.1233-1240.
Evans, M.R., Majumdar, S.N. and Schehr, G., 2020. Stochastic resetting and applications. Journal of Physics A: Mathematical and Theoretical.
Evans, M.R. and Majumdar, S.N., 2014. Diffusion with resetting in arbitrary spatial dimension. Journal of Physics A: Mathematical and Theoretical, 47(28), p.285001.
Majumdar, S.N., Sabhapandit, S. and Schehr, G., 2015. Dynamical transition in the temporal relaxation of stochastic processes under resetting. Physical Review E, 91(5), p.052131.
Pal, A., 2015. Diffusion in a potential landscape with stochastic resetting. Physical Review E, 91(1), p.012113.
Pal, A., Kundu, A. and Evans, M.R., 2016. Diffusion under time-dependent resetting. Journal of Physics A: Mathematical and Theoretical, 49(22), p.225001.
---
author:
- 'K. Nandra[^1]'
- 'P.M. O’Neill'
- 'I.M. George'
- 'J.N. Reeves'
- 'T.J. Turner'
title: 'An XMM-Newton survey of broad iron lines in AGN'
---
Introduction
============
Observations with [*ASCA*]{} showed complex, broad emission from iron to be very common in Seyfert galaxies (Nandra et al. 1997). These lines can be interpreted as emission from a relativistic accretion disk, in which case they represent a powerful probe of the strong gravity regime around black holes (Fabian et al. 1989; Stella 1990). The most celebrated case is MCG-6-30-15, where the broad, skewed line seen with [*ASCA*]{} is of high signal-to-noise ratio, and the disk line interpretation is apparently robust (Tanaka et al. 1995; Fabian et al. 1995). Several other high quality profiles from [*ASCA*]{} also showed broad, relativistic lines (e.g. George et al. 1998; Nandra et al. 1999; Done et al. 2000).
Since the launch of [*XMM-Newton*]{}, it has been possible to obtain even higher quality data on these broad emission lines. Early results confirmed relativistic emission in some cases, including MCG-6-30-15 (e.g. Wilms et al. 2001; Fabian et al. 2002; Vaughan & Fabian 2004), but in others no broad line was detected (e.g. Gondoin et al. 2001; Pounds et al. 2003; Bianchi et al. 2004). In yet others, complexity has been observed around iron-$K$, but the interpretation as relativistic disk emission has been challenged. One specific suggestion is that absorption by a high column, high ionization warm absorber can mimic the “red wing” characteristic of an accretion disk line (Reeves et al. 2004).
The absence of a comprehensive and systematic survey of the X-ray spectra of Seyferts observed by [*XMM-Newton*]{} prevents firm conclusions being drawn as to the prevalence of broad iron lines in AGN and the robustness of their interpretation. Here we present preliminary results from such a study, the full results of which will be presented in a forthcoming paper (Nandra et al. 2006, in preparation).
The sample and data analysis
============================
Our sample is culled from pointed AGN observations in the [*XMM-Newton*]{} archive. We examine only local AGN ($z<0.05$) and exclude Seyfert 2 galaxies and radio loud objects. Furthermore we choose only the objects with the highest number of counts in the 2-10 keV band, to maximise the signal-to-noise ratio around the iron line. The sample reported here consists of 41 observations of 30 objects.
An important feature of our work is that we have performed a well-defined, uniform analysis, with conservative selection criteria and using the latest available calibrations. The techniques are fully described in O’Neill et al. (2006, in preparation), but compared to much of the previous work the improvements include: a) consistent definition of source and background regions for each observation; b) well defined and conservative background rejection; c) precise definition of good-time intervals; d) standardised spectral grouping related to the instrumental resolution. We restrict our analysis to the pn instrument. For observations with significant pileup we use only the pattern 0 events. Spectral fits are undertaken in the 2.5-10 keV range only, to minimize complications due to absorption and soft excess emission, and avoid the instrumental calibration feature around 2.2 keV.
------------ ---------- ---------------- ----------------- --------------
              Fraction   Energy           Width             EW
                         (keV)            (keV)             (eV)
              (1)        (2)              (3)               (4)
[*ASCA*]{}    77%        $6.34\pm 0.04$   $0.43 \pm 0.12$   $160 \pm 30$
[*XMM*]{}     73%        $6.32\pm 0.05$   $0.36\pm 0.04$    $108\pm 12$
------------ ---------- ---------------- ----------------- --------------
: Comparison between mean parameters for broad lines determined by [*ASCA*]{} (Nandra et al. 1997) and [*XMM-Newton*]{} (this work). Note that the [*ASCA*]{} fits did not account for a distant narrow component of the Fe K$\alpha$, nor did they include a warm absorber. The fraction of objects in which the F-test indicates a 99% improvement is given, along with the mean Energy, Gaussian $\sigma$ and equivalent width. \[tab:mpars\]
Results
=======
Base model
----------
While we have excluded the most heavily obscured objects (Seyfert 2s) from our sample, there remains a possibility that absorption can have a significant effect even on the spectra above 2.5 keV. We account for this by fitting an XSTAR (Kallman et al. 2004) photoionization model to the spectra, excluding the iron band (4.5-7.5 keV). Where the fit improves significantly at 95% confidence, according to the F-test, this XSTAR component is included in all subsequent fits, with free $N_{\rm H}$ and ionization parameter. It is now also known that many AGN exhibit narrow cores to their iron K$\alpha$ lines (e.g. Yaqoob & Padmanabhan 2004). These are thought to arise from very distant material such as the torus (e.g. Krolik & Kallman 1987; Awaki et al. 1991). If so they will be accompanied by continuum Compton reflection. We therefore include in all fits below a neutral Compton reflection component appropriate for a slab geometry (Magdziarz & Zdziarski 1995), with accompanying Fe K$\alpha$, Fe K$\beta$ and Ni K$\alpha$ line emission (George & Fabian 1991) and a Compton shoulder (Matt 2002). The emission lines and reflection are all incorporated in a single model with solar abundances. We assume an inclination of $60^{\circ}$ for the slab and hence the reflection is characterized by a single parameter, $R = \Omega/2\pi$, where $\Omega$ is the solid angle subtended by the slab at the illuminating source.
![Characteristic emission radius for the relativistic iron K$\alpha$ lines versus disk inclination. []{data-label="fig:rbreak"}](rbreak_inc.ps){width="80mm"}
Simple parameterization of the broad emission
---------------------------------------------
To provide a simple, model-independent characterization of further complexity in the iron band, we have added a broad Gaussian to the fits described above. A significant improvement to the fit was found in 30 of the 41 observations, and 22 of the 30 objects. Clearly, complexity at iron K$\alpha$ is extremely common in Seyferts. A comparison between the mean parameters of the broad Gaussian fits to the [*ASCA*]{} data (Nandra et al. 1997) and to our new sample is given in Table \[tab:mpars\]. There is remarkable agreement in all cases, with the exception that the line equivalent widths in the [*ASCA*]{} sample are about $50$% higher. This difference can be attributed to the fact that the narrow line cores were deconvolved in the [*XMM-Newton*]{} fits, but not with [*ASCA*]{}.
The energies of the broad lines seen with [*XMM-Newton*]{} clearly indicate that they are associated with iron, as they are very close to the expected energy, but there is some evidence that the typical energy is redshifted compared to the neutral value. The lines are usually quite broad, with $<\sigma>=0.36$ keV or 40,000 km s$^{-1}$ FWHM. Significant dispersion is seen in all the measured quantities, however, which confirms the result from [*ASCA*]{} that there is a wide variety of line profiles, and takes this further in that the variation from object-to-object cannot be attributed solely to varying relative contributions of the narrow core and broader emission. It should also be noted that in 5 of the fits, the width of the “broad” gaussian component is $<10,000$ km s$^{-1}$. These lines could plausibly arise from the optical broad line region (BLR), rather than the inner disk.
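As a quick arithmetic cross-check of the quoted width-to-velocity conversion (this is illustrative only and not part of the original analysis), one can assume a Gaussian line centred at the mean fitted energy of 6.32 keV:

```python
# Convert the mean Gaussian width to a FWHM velocity (illustrative check only).
import math

sigma_keV = 0.36        # mean Gaussian sigma from Table 1
E_keV = 6.32            # mean line energy from Table 1
c_kms = 2.99792458e5    # speed of light in km/s

fwhm_keV = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_keV   # ~0.85 keV
print(fwhm_keV / E_keV * c_kms)   # ~4.0e4 km/s, consistent with the quoted 40,000 km/s
```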
Disk line models
----------------
We have tested explicitly whether the complex line shapes seen in the spectra can be accounted for with a relativistic accretion disk. We do this by adding an additional, neutral reflection component with Fe and Ni line emission as above, but this time apply relativistic blurring (Laor 1991; Fabian et al. 2002). Rather than leave all the parameters free, we initially chose to fix the inner and outer radii at $R_{\rm i}=6 R_{\rm g}$ and $R_{\rm o}=400 R_{\rm g}$. We adopt an emissivity law appropriate for a point source above a slab in a Newtonian geometry, which can be approximated as a broken power law. The adopted emissivity depends on $R^{-q}$, with $q=0$ within and $q=3$ outside some break radius $R_{\rm br}$. This represents the characteristic radius where the majority of the line emission originates, so can be used to assess whether relativistic effects are important. The inclination and reflection fraction are left as free parameters too. The relativistically blurred model improves the fits significantly in $\sim 75$% of the observations and indeed gives markedly better fits than a Gaussian in several cases.
The characteristic emission radius ($R_{\rm br}$) is plotted against the inclination in Fig. \[fig:rbreak\]. The bottom left part of this diagram is where we expect “classic” disk lines to occur. Here the emission is concentrated in the innermost regions ($<50$ R$_{\rm g}$) and the inclination is relatively low, such that much of the emission is redshifted. The upper left portion is where we expect weak and very broad lines from highly inclined disks. It is sparsely populated, which is expected as such lines are difficult to detect. The upper right portion shows several strong disk lines with apparently high inclinations but at relatively large radii. This indicates that the lines are broad but predominantly towards the blue rather than the red. These are likely candidates for a highly ionized disk, which is in reality at lower inclination than inferred in fits which assume the disk is neutral. Finally, at the bottom right of the diagram we see emission at low inclination and large radius. In these objects the lines will be relatively narrow and not strongly shifted. For these, there is no requirement for the line to arise in the inner accretion disk and they may come from more distant material, such as the optical BLR.
Using this model, we can also assess the evidence for black hole spin. A simple test is to repeat the fits using an inner radius of $1.235 R_{\rm g}$, appropriate for a Kerr Black hole with $a/M=0.998$, as opposed to the Schwarzschild value of $6 R_{\rm g}$. Only two of the spectra showed an appreciable improvement with such a model. In both, NGC 3783 and NGC 4151, there is complex absorption which strongly affects the spectrum above 2.5 keV (Reeves et al. 2004; Schurch et al. 2004). We therefore consider the evidence for maximal Kerr black holes to be tentative, leaving it an open question as to whether black holes in AGN are generally rotating.
Comparison with alternative models
----------------------------------
Some recent studies have suggested alternatives to the relativistic disk model for broad iron lines in AGN. A number of objects show no evidence for broad emission at all, including some in our sample. In others, it may be possible to model the “red wing” as a high ionization warm absorber (Reeves et al. 2004), and the “blue wing” with blends of narrow lines. To test this, we have fitted a model comprising a high ionization warm absorber (in [*addition*]{} to the lower ionization gas already included), with three narrow emission lines, two fixed at the energies appropriate to helium- and hydrogen-like iron and another intermediate (6.4-6.7 keV) line with free energy. Once again neutral, unblurred reflection is also included to account for any narrow emission at 6.4 keV.
![Difference in $\chi^{2}$ between the relativistic disk line model and an alternative model comprising a high ionization warm absorber, and a blend of narrow lines. All fits include both line and continuum from a distant neutral reflector and a soft X-ray warm absorber where needed. The disk line model has 3 fewer free parameters than the alternative, but provides a dramatically better fit in a large number of cases (see Fig. \[fig:dream\])[]{data-label="fig:dchi"}](xmm_delchi_hist.ps){width="80mm"}
A comparison of the $\chi^{2}$ values is given in Fig. \[fig:dchi\]. The alternative model has 3 more free parameters than the disk line model, but provides a substantially worse fit in a large number of objects. In a few cases the alternative model fits a little better, but not substantially so considering the larger number of free parameters.
From the consideration of this alternative model, and the results from Fig \[fig:rbreak\], we can define a sample of robust relativistic lines for which disk models both indicate a small characteristic radius, and fit much better than the alternative. There are 11 spectra of 9 objects satisfying the criteria that $R_{br}<20$ $R_{\rm g}$ and $\Delta\chi^{2}>10$ for the relativistic model compared to the alternative model (despite having 3 [*fewer*]{} free parameters). The line profiles are shown in Fig. \[fig:dream\].
Discussion and conclusions
==========================
Our systematic survey should serve to clear up some of the controversy about how often broad emission lines from an accretion disk can be claimed robustly in AGN. For Seyferts, at least, complexity at iron K$\alpha$ is seen in about 3/4 of objects and this complexity is always interpretable in terms of an accretion disk model. In about 1/3 of our sample, that interpretation is clearly preferred over competing models. In a few cases with high signal-to-noise ratio the relativistic emission appears to be absent, but great caution needs to be exercised before this can be concluded definitively, as very good statistics are required even with [*XMM-Newton*]{} (Guainazzi et al., this volume). In cases where broad emission appears to be absent, the disk may simply be highly inclined, such that the line is very broad and weak. Alternatively, the inner disk may be hot and/or highly ionized (Nayakshin 2000), which can also account for cases where broad emission is present, but indicative of a relatively large characteristic radius ($\sim 100$ $R_{\rm g}$). Alternatively, the lack of broad emission seen in a given observation may be due to line profile variability (Longinotti et al. 2004).
Perhaps surprisingly, we have not yet found any strong evidence for black hole spin. This contrasts with some previous studies indicating maximally rotating holes in MCG-6-30-15 (Wilms et al. 2001) and some black hole binaries (e.g. Miller et al. 2002). This is probably due to our conservative approach in consideration of distant reflection and complex absorption. On the other hand, our observations provide no evidence [*against*]{} rapidly rotating black holes in AGN and while it has been pointed out in several previous studies that complex absorption can mimic very broad lines (e.g. Done & Gierlinski 2006), it is important to bear in mind that the converse is also true.
Our main conclusion, however, is that the accretion disk interpretation for broad iron K$\alpha$ lines in AGN appears to be robust. The implication is that the potential for X-ray observations, particularly with [*XEUS*]{} and [*Con–X*]{}, to reveal new information about the innermost regions of accreting black holes may well be realised.
We thank Tim Kallman for help with XSTAR; PPARC and the Leverhulme Trust for financial support and gratefully acknowledge those who built and operate the satellite.
Awaki, H., Koyama, K., Inoue, H., Halpern, J.P., 1991, PASJ, 43, 195
Bianchi S., Matt G., Balestra I., Guainazzi M., Perola G. C., 2004, A&A, 422, 65
Done C., Madejski G.M., Zycki P.T., 2000, ApJ, 536, 213
Done C., Gierlinski M., 2006, MNRAS, 367, 659
Fabian A. C., Rees M. J., Stella L., White N. E., 1989, MNRAS, 238, 729
Fabian A.C., Nandra K., Reynolds C.S., et al., 1995, MNRAS, 277, L11
Fabian A. C., Vaughan S., Nandra K., 2002, MNRAS, 335, L1
George I. M., Fabian A. C., 1991, MNRAS, 249, 352
George I.M., Turner T.J., Mushotzky R.F., Nandra K., Netzer H., 1998, ApJ, 503, 174
Gondoin, P., Barr, P., Lumb, D., Oosterboek, T., Orr, A., Parmar, A.N., 2001, A&A, 378, 806
Kallman T.R., Palmeri P., Bautista M.A., Mendoza C., Krolik J.H., 2004, ApJS, 155, 675
Krolik J.H., Kallman T.R., 1987, ApJ, 320, L5
Laor A., 1991, ApJ, 376, 90
Longinotti A.L., Nandra K., Petrucci P.O., O’Neill P.M., 2004, MNRAS, 355, 929
Magdziarz P., Zdziarski A.A., 1995, MNRAS, 273, 837
Matt G., 2002, MNRAS, 337, 147
Miller J.M., Fabian A.C., Reynolds C.S., et al., 2002, ApJ, 570, L69
Nayakshin S., 2000, ApJ, 534, 718
Nandra K., George I. M., Mushotzky R. F., Turner T. J., Yaqoob T., 1997, ApJ, 477, 602
Nandra K., George I. M., Mushotzky R. F., Turner T. J., Yaqoob T., 1999, ApJ, 523, L17
Pounds K.A., Reeves J.N., Page K.L., Wynn G.A., O’Brien P.T., 2003, MNRAS, 345, 705
Reeves J. N., Nandra K., George I. M., Pounds K. A., Turner T. J., Yaqoob T., 2004, ApJ, 602, 648
Schurch N.J., Warwick R.S., Griffiths R.E., Kahn S.M., 2004, MNRAS, 350, 1
Stella L., 1990, Nature, 344, 747
Tanaka Y., Nandra K., Fabian A.C., et al., 1995, Nature, 375, 659
Vaughan S., Fabian A. C., 2004, MNRAS, 348, 1415
Wilms J., Reynolds C.S., Begelman M.C., Reeves J., Molendi S., Staubert R., Kendziorra E., 2001, MNRAS, 328, L27
Yaqoob T., Padmanabhan U., 2004, ApJ, 604, 63 (YP04)
[^1]: Corresponding author:
---
author:
- |
J. P. B. C. de Melo$^{(a)}$, A. E. A. Amorim$^{(b)}$, Lauro Tomio$^{(c)}$ and T.Frederico$^{(d)}$\
$^{(a)}$ Division de Physique Théorique, Institut de Physique Nucléaire, 91406 Orsay CEDEX, and Laboratoire de Physique Théorique et Particules Elémentaires,\
Université Pierre et Marie Curie, 4 Place Jussieu, 75252 Paris CEDEX 05, France\
$^{(b)}$ Faculdade de Tecnologia de Jahu, CEETPS, Jahu, Brasil.\
$^{(c)}$ Instituto de Física Teórica, UNESP\
01405 São Paulo, SP, Brasil\
$^{(d)}$ Dept. de Física, Instituto Tecnológico da Aeronáutica,CTA\
12.228-900 S. José dos Campos, SP, Brasil
title: '**Relativistic Bound States in 2+1 and 1+1 Dimensions in the Null-Plane [^1]**'
---
The Faddeev-like equation for the component of the three-boson vertex for a relativistic contact interaction is [@fred92] $$\begin{aligned}
v(q^\mu)=2\tau(M_2) \int \frac{d^nk}{(2\pi)^n}
\frac{i}{k^2-M^2+i{\varepsilon}} \frac{i}{(P_3-q-k)^2-M^2+i{\varepsilon}}v(k^\mu),
\label{1}\end{aligned}$$ where $P^\mu_3=(M_{3B},0,0,0)$ is the three-boson four-momentum in the center of mass system, $n$ is the dimension of the space-time, the mass of the two-boson subsystem is given by $M_2^2=(P_3-q)^2$ and the single boson mass is $M$. The factor 2 comes from symmetrization of the total vertex.
The total three-boson vertex is the sum of three Faddeev components in which each boson is spectator once [@fred92]. The two-boson scattering amplitude, $\tau(M_2)$, enters in the kernel of the integral equation for the vertex in Eq.(\[1\]). It is easily obtained as: $$\begin{aligned}
\tau^{(n)}(M_2)= \left\{i\lambda^{-1} - B^{(n)}(M_2)\right\}^{-1},
\label{2}\end{aligned}$$ where $\lambda$ is the coupling constant of the zero-range interaction, $M_2$ is the mass of the two boson system and $B(M_2)$ is the kernel of the integral equation for the scattering amplitude $$\begin{aligned}
B^{(n)}(M_2)=-\int \frac{d^nk}{(2\pi)^n}
\left\{\left(k^2-M^2+i{\varepsilon}\right)\left((P-k)^2-M^2+i{\varepsilon}\right)
\right\}^{-1},
\label{3}\end{aligned}$$ where $M$ is the boson mass and $P^\mu=(M_2,0,0,0)$ .
The value of $\lambda$ is chosen such that the two-boson system has one bound-state. The scattering amplitude, Eq.(\[2\]), has a pole at the bound-state mass $(M_{2B})$, which yields $$\begin{aligned}
i\lambda^{-1}= B^{(n)}(M_{2B}).
\label{4}\end{aligned}$$
In two dimensions the two-boson scattering amplitude for $M_2 \ < \ 2M$ is $$\begin{aligned}
\tau^{(2)}(M_2) = - 2\pi i \left\{
\frac{atan\left(2\beta(M_{2B})\right)^{-1}}
{M^{2}_{2B}\beta(M_{2B})}
- \frac{atan\left(2\beta(M_{2})\right)^{-1}}
{M^{2}_{2}\beta(M_{2})}
\right\}^{-1} \ ,
\label{5}\end{aligned}$$ where $ \beta(M_2)=\sqrt{\frac{M^2}{M_{2}^2}-\frac14}$ .
In three dimensions the scattering amplitude is $$\begin{aligned}
\tau^{(3)}(M_2)=- 8\pi i \left\{ M^{-1}_{2B}
ln\left(\frac{2M+M_{2B}}{2M-M_{2B}}
\right)
-M^{-1}_{2} ln\left(\frac{2M+M_{2}}{2M-M_{2}}
\right)
\right\}^{-1} \ .
\label{6} \end{aligned}$$ For our purpose of the bound-state calculation it is enough to know $\tau^{(n)}(M_2)$ for $M_2 < 2M$.
The momentum variables in the integral equation are the momenta in the null-plane for an on-mass-shell particle, $q^+$ and $q_\perp$ [@fred92]. The transversal momentum is needed in three space-time dimensions.
Let us discuss the limits of the variables $y=\frac{q^+}{M_{3B}}$ and $q_\perp$. In 1+1 space-time dimensions only the momentum fraction is enough to describe the spectator boson. The mass of the two-boson subsystem must be real and in 1+1 dimensions it implies $$\begin{aligned}
(M_2)^2\ = \ (M_{3B}-q^+)\left(M_{3B}-\frac{
M^2}{q^+}\right) \ > \ 0 \ .
\label{7}\end{aligned}$$ From the above inequality follows $ 1\ > \ y \ > \ \frac{M^2}{M^2_{3B}} \ . $
In 2+1 dimensions, we deduce the range of values of the perpendicular momentum allowed by the reality of the mass of the two-boson subsystem. Then $$\begin{aligned}
(M_2)^2\ = \ (M_{3B}-q^+)\left(M_{3B}-\frac{q^2_\perp
+M^2}{q^+}\right)-q^2_\perp \ > \ 0 \ .
\label{8}\end{aligned}$$ Solving the inequality for $q^2_\perp$ , we obtain $ q^2_\perp\ < \ (1-y)(M_{3B}^2y-M^2) \ . $ The limits for $y$ are $ 1 \ > \ y \ > \ \frac{M^2}{M^2_{3B}} \ , $ and the lower bound comes from $q^2_\perp \ > \ 0$.
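The bound on $q^2_\perp$ quoted above follows from rearranging Eq.(\[8\]) with $q^+=yM_{3B}$. A small symbolic check of this step (illustrative only, not part of the original text) can be done with SymPy:

```python
# Symbolic check that M_2^2 > 0 in Eq. (8) is equivalent to
# qperp^2 < (1 - y)(M_{3B}^2 y - M^2), using q^+ = y M_{3B}.
import sympy as sp

y, q2, M, M3B = sp.symbols('y qperp2 M M3B', positive=True)
qplus = y * M3B
M2sq = (M3B - qplus) * (M3B - (q2 + M**2) / qplus) - q2   # Eq. (8)

# M2sq is linear in qperp^2; its zero gives the upper bound quoted in the text.
bound = sp.solve(sp.Eq(M2sq, 0), q2)[0]
print(sp.simplify(bound - (1 - y) * (M3B**2 * y - M**2)))  # -> 0
```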
The equation for the Faddeev component of the vertex in 1+1 space-time dimensions is obtained as the result of the $k^-$ integration in the momentum loop of Eq.(\[1\]). We also use Eq.(\[5\]) and the limit in the internal momentum fraction $x$ $$\begin{aligned}
v(y)=\frac{i}{2\pi}\tau^{(2)}(M_2) \int^{1-y}_\frac{M^2}{M^2_{3B}}\frac{dx}{x(1-y-x)}
\frac{v(x)}{M^2_{3B}-M^2_{03}},
\label{9}\end{aligned}$$ where $(M_2)^2$ is given by Eq.(\[7\]) and the free mass of the virtual three-boson state in 1+1 dimensions is: $$\begin{aligned}
M^2_{03}=
\frac{M^2}{x}+
\frac{M^2}{y} + \frac{M^2}{1-y-x} \ .
\label{10}\end{aligned}$$
The equation for the Faddeev component of the vertex in 2+1 space-time dimensions is found after the $k^-$ integration in the momentum loop of Eq.(\[1\]), $$\begin{aligned}
v(y,\vec q_\perp)=\pi^{-2}
\tau^{(3)}(M_2)
\int^{1-y}_\frac{M^2}{M^2_{3B}}\frac{dx}{x(1-y-x)}
\int^{k_\perp^{max}}_{-k_\perp^{max}}d^2k_\perp
\frac{v(x,\vec k_\perp)}{M^2_{3B}-M^2_{03}},
\label{11}\end{aligned}$$ where $M_2$ is given by Eq.(\[8\]), as well as $ k_\perp^{max}=\sqrt{(1-x)(M_{3B}^2x-M^2)}$ . The mass of the virtual three-boson state is: $$\begin{aligned}
M^2_{03}=
\frac{k^2_\perp+M^2}{x}+
\frac{q^2_\perp+M^2}{y} + \frac{(q+k)^2_\perp +M^2}{1-y-x} \ .
\label{12}\end{aligned}$$
The dependence of $v$ on $q^-$ is not specified because the spectator boson is on mass-shell. $q^+$ and $\vec q_\perp$ describe the spectator boson propagation. The relativistic equations in 1+1 and 2+1 dimensions have a lower bound for the mass of the three-boson system, which comes from the limits on the $x$ integration and the condition $y>\frac{M^2}{M^2_{3B}}$, which implies $ M_{3B}> \sqrt{2}M \ . $ The same limit was obtained in 3+1 dimensions in [@fred92].
The Faddeev component of the ground-state vertex in 2+1 dimensions is rotationally symmetric in the x-y plane in Eq.(\[11\]). The mass of the boson gives the scale of the system. Here, the solution is presented for $M=1$. In Fig.(1), the numerical results for the ground-state binding energies $(E_{3B}=M_{2B}+M-M_{3B})$ of the three-boson system are shown in two and three dimensions. In the nonrelativistic limit, for $E_{2B}=0$, the results approach the well-known values [@dodd].
In summary, we give an example of how null-plane dynamics can be elaborated. We develop a zero-range model of the three-boson bound state in the null-plane and solve numerically the dynamical equation for the ground-state in 1+1 and 2+1 space-time dimensions.
This work was supported by Conselho Nacional de Desenvolvimento e Pesquisa - CNPq and Fundação de Amparo a pesquisa do Estado de São Paulo - FAPESP. J. P. B. C. de Melo is a FAPESP-Brazil fellow (contract 97/13902-8).
[99]{} T.Frederico, Phys. Lett. [**B282**]{}, (1992) 409; W.R.B. de Araújo, J.P.B.C. de Melo and T.Frederico, Phys.Rev. [**C52**]{}, (1995) 2733. L.R. Dodd, J. Math. [**11**]{}, (1970) 207.
[^1]: To appear in “Proceedings VI Hadrons 1998”, Florianópolis, Santa Catarina, Brazil
---
abstract: |
This paper describes TARDIS (Traffic Assignment and Retiming Dynamics with Inherent Stability) which is an algorithmic procedure designed to reallocate traffic within Internet Service Provider (ISP) networks. Recent work has investigated the idea of shifting traffic in time (from peak to off-peak) or in space (by using different links). This work gives a unified scheme for both time and space shifting to reduce costs. Particular attention is given to the commonly used 95th percentile pricing scheme.
The work has three main innovations: firstly, introducing the Shapley Gradient, a way of comparing traffic pricing between different links at different times of day; secondly, a unified way of reallocating traffic in time and/or in space; thirdly, a continuous approximation to this system is proved to be stable. A trace-driven investigation using data from two service providers shows that the algorithm can create large savings in transit costs even when only small proportions of the traffic can be shifted.
author:
- |
Richard G. Clegg,\
Dept of Elec. Eng.\
University College London\
\
Raul Landa\
Dept of Elec. Eng.\
University College London\
\
João Taveira Araújo\
Dept of Elec. Eng.\
University College London\
\
- |
Eleni Mykoniati\
Dept of Elec. Eng.\
University College London\
\
David Griffin\
Dept of Elec. Eng.\
University College London\
\
Miguel Rio\
Dept of Elec. Eng.\
University College London\
\
bibliography:
- 'sigmetrics\_tardis\_2014.bib'
title: 'TARDIS: Stably shifting traffic in space and time'
---
Introduction {#sec:intro}
============
Background {#sec:background}
==========
Definitions {#sec:definitions}
===========
Pricing {#sec:pricing}
=======
Dynamical systems approach {#sec:dynamics}
==========================
Modelling framework {#sec:modelling}
===================
Analysis of user data {#sec:results}
=====================
Conclusions {#sec:conclusions}
===========
*This research has received funding from the Seventh Framework Programme (FP7/2007-2013) of the European Union, through the FUSION project (grant agreement 318205).*
The Shapley gradient price {#sec:shap_indep}
==========================
A stability proof for multiple choice sets {#sec:smith_extension}
==========================================
---
abstract: 'A two-photon transition in laser-cooled and trapped calcium atoms is proposed as the atomic reference in an optical frequency standard. An efficient scheme for interrogation of the frequency standard is described, and the sensitivity of the clock transition to systematic effects is estimated. Frequency standards based on this transition could lead to compact and portable devices that are capable of rapidly averaging down to $< 10^{-16}$.'
author:
- 'Amar C. Vutha'
bibliography:
- 'calcium\_clock.bib'
title: 'Optical frequency standard based on a two-photon transition in calcium'
---
Introduction
============
With their extremely high quality factors, narrow optical resonances in atoms are ideal candidates for realizing a highly stable atomic frequency reference. Optical frequency standards (OFS), consisting of narrow linewidth lasers stabilized to atomic transitions, have improved their performance significantly over the last decade. They are soon likely to lead to a more accurate re-definition of the SI second [@Gill2005; @Poli2014]. The atoms in these frequency standards must be isolated from external perturbations and frequency shifts due to atomic motion, and therefore OFS typically use trapped atoms or atomic ions. The best performance to date has been obtained by interrogating the narrow $^1S_0 \to {}^3P_0$ transition in Sr and Yb atoms trapped in optical lattices, to eliminate Doppler and photon recoil shifts, with the lattice lasers tuned to a “magic” wavelength to minimize perturbations on the atoms [@Ludlow2014; @Margolis2014].
Aside from the primary optical clocks that will form the basis of the new SI second, there is a need for ensembles of secondary frequency standards to provide optical flywheels for generating timescales [@Parker2012]. A compact and transportable standard could also address the important problem of comparing the performance of widely separated primary frequency standards. In addition to improving the accuracy of atomic timekeeping, secondary OFS will also find applications in low-noise microwave synthesis (using frequency combs to transfer the phase stability of optical waves to microwaves [@McFerran2005]) and precise geodetic surveys (using the high sensitivity of optical clocks to gravitational red-shifts [@Chou2010]). An array of high-performance frequency standards aboard satellites could also be sensitive to gravitational waves [@Smarr1983; @Armstrong2006]. However, all these applications need a compact OFS that can provide robust long-term performance. A simple design with low system complexity will be an important step towards this goal.
In this paper, the $E1^2$ two-photon transition between the $4s^2 \ {}^1S_0 \to 4s 3d \ {}^1D_2$ states in calcium atoms is offered as a means to realize a compact optical frequency standard. The clock transition between the $m=0$ sublevels of the $^1S_0$ and $^{1}D_2$ states is insensitive to magnetic fields. The two-photon clock transition, driven with identical counter-propagating photons, is free from 1st-order Doppler shifts and photon recoil. This transition, and its analogs in the other alkaline earth atoms, were examined in [@Hall1989] as a possible means of realizing optical fountain clocks. However, the realization of fountain clocks is complicated by large 2nd-order Doppler shifts in the beam of atoms, and the large angular divergence of the atomic beams after single-stage laser-cooling. In this work, we show that magneto-optical trapping of calcium atoms circumvents the woes associated with fountain clocks, and leads to a simple scheme to realize an optical frequency standard. Among the alkaline earth atoms, the feasibility of this scheme is special to Ca, as the ${}^1D_2$ state in Mg is not metastable, whereas the lifetimes of the $^1D_2$ states in Sr and Ba are significantly shorter than in Ca. Compared to optical lattice clocks, this simplified scheme eliminates the second-stage cooling lasers and MOTs, the lattice laser, and the associated vacuum and optical hardware from the apparatus. This scheme is therefore well-suited to the realization of portable and low-maintenance secondary optical standards capable of high stability and accuracy. In the following section, the design of such a frequency standard, and an efficient interrogation scheme for the clock transition, are described. This is followed by estimates of the susceptibility of the clock transition to undesired frequency shifts.
We note in passing that an even narrower two-photon transition is generally available in the alkaline earth atoms between the $^1S_0 \to {}^3D_2$ states. However, using this as the basis for an optical frequency standard requires a higher-power clock laser (which leads in turn to larger light shifts), as well as more lasers to repump out of the $^3D$ states. In addition, the proximity of the $^3D_1$ and $^3D_{0,2}$ states leads to 2nd order Zeeman shifts that are larger than for the [$^1S_0 \to {}^1D_2$ ]{}transition. For these reasons, an analysis of the $^1S_0 \to {}^3D_2$ transition is not included here.
Interrogation scheme
====================
Alkaline earth atoms are loaded from an oven or a getter [@Bridge2009] into a compact ultra-high-vacuum chamber pumped by an ion pump. A standard MOT configuration is used for laser cooling and trapping on the strong $4s^2 \ ^1S_0 \to 4s 4p \ {}^1P_1$ transition ($\gamma_1/2\pi \approx$ 34 MHz) (using a laser L1, 423 nm). There is a very small rate of shelving into the $^1D_2$ state ($10^{-5}$/cycle), out of which atoms can be repumped back into the cooling cycle using a laser (L2, 672 nm) tuned to the strongly allowed $4s 3d \ ^1D_2 \to 4s 5p {}^1P_1$ transition ($\gamma_2/2\pi \approx$ 2 MHz). After loading the trap, the cooling lasers and MOT magnetic fields are switched off to avoid light shifts, optical pumping and Zeeman shifts during the interrogation phase. Two counter-propagating beams, derived from the same narrow linewidth laser oscillator (LO) (L3, 916 nm), are then used to drive the [$^1S_0 \to {}^1D_2$ ]{}clock transition ($\gamma_3/2\pi \approx$ 40 Hz) in a Rabi or Ramsey scheme. We assume that a sufficiently long interrogation sequence is used ($\geq$6.3 ms for a Ramsey sequence) so that the measured linewidth is limited by the natural linewidth of the transition.
The excitation probability is measured using laser-induced cycling fluorescence on the $^1D_2 \leftrightarrow 4s 4f \ {}^1F_3$ cycling transition (L4, 488 nm) [^1], or by measuring the drop in the MOT’s fluorescence when the laser L1 (but not L2) is switched back on. Collection of fluorescence on these cycling transitions means that small-solid-angle detectors can be conveniently used while still obtaining near-unity detection efficiency, at a noise level limited by quantum projection noise in the detection process. During a 10 ms-long interrogation + detection sequence, the atoms move $\sim$ 7 mm. Therefore a large fraction of them can be recaptured by the MOT and re-used for the next cycle.
Assuming that $N=10^6$ atoms can be interrogated in the MOT and detected with shot-noise-limited sensitivity, the frequency resolution of the clock, with a natural lifetime $T$ and an integration time $\tau$, is $\delta \nu = 1/2\pi \sqrt{N T \tau}$. Using $T =$ 2 ms, this evaluates to a fractional frequency resolution $\delta \nu/\nu = 10^{-17}/\sqrt{\tau(s)}$. Even allowing that this might be reduced due to the duty cycle of the interrogation, this is an extremely attractive sensitivity for a secondary standard, which must be capable of quick comparisons with primary standards and other frequency references. Further, the fast cycle time means that Dick effect noise is less important – the flywheel for the laser’s frequency only needs to carry it over for $\sim$ 10 ms, until the next interrogation – leading to relaxed requirements on the LO. Calculations using realistic parameters for a MOT indicate that a two-photon Rabi frequency $\Omega_{\mathrm{eff}}$ = $2\pi \times$ 300 Hz can be achieved with 1 W of LO power, commensurate with a modest power build-up cavity around the MOT that is fed by a 916 nm diode laser. All the lasers can be derived from laser diodes, and the scheme is compatible with a low-mass, low-power apparatus. We also note that this scheme lends itself quite naturally to methods that attempt to push beyond the standard quantum limit, using atom-cavity interactions [@Schleier-Smith2010] and/or non-classical light sources.
![a) Energy levels of neutral calcium, showing the atomic transitions involved in the magneto-optical trapping and interrogation scheme. b) The timing sequence of the MOT lasers (L1 & L2), B-field, clock laser (L3) and detection laser (L4). Exciting the clock transition with two counter-propagating L3 photons leads to Doppler- and recoil-free excitation, which is probed with fluorescence induced by L4.](CaMOT.pdf){width="\columnwidth"}
\[fig:energyLevels\]
Systematic effects
==================
2nd order Doppler shift
-----------------------
Using $\sim$ 2 mK for the temperature of a single stage 423 nm MOT, the root-mean-square velocity of the calcium atoms is $v_{\mathrm{rms}} \approx$ 70 cm/s. This leads to a 2nd-order Doppler shift whose fractional size is $$\frac{\delta \nu}{\nu} = -\frac{1}{2} \frac{v_{\mathrm{rms}}^2}{c^2} = -2.7 \times 10^{-18}.$$ This rather small number implies that the stability of the temperature of the atoms in the MOT will not affect the operation of the frequency standard to any relevant degree.
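As an illustrative numerical cross-check (not part of the original text), the thermal velocity scale and the resulting shift can be recomputed as follows; the one-dimensional $\sqrt{k_{B}T/m}$ estimate used for the velocity is an assumption made here:

```python
# Rough check of v_rms for 40Ca at ~2 mK and of the 2nd-order Doppler shift.
import math

k_B = 1.380649e-23      # J/K
u = 1.66053907e-27      # kg, atomic mass unit
c = 2.99792458e8        # m/s
m_Ca, T = 40 * u, 2e-3  # 40Ca mass and MOT temperature

v_1d = math.sqrt(k_B * T / m_Ca)     # ~0.64 m/s, consistent with the quoted ~70 cm/s
shift = -0.5 * 0.70**2 / c**2        # using v_rms = 70 cm/s as quoted above
print(v_1d, shift)                   # ~0.64 m/s, ~ -2.7e-18
```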
Collisions
----------
Assuming a MOT density $n_{\mathrm{MOT}} = 10^9$/cm$^3$ and a collision cross section $\sigma \simeq 10^{-14}$ cm$^2$, the estimated mean free time between collisions is $\tau_{\mathrm{coll}} \simeq$ 1500 s. This is significantly larger than the expected cycle time of the interrogation. Conservatively assuming $\sim \pi$ rad phase shift per collision, the (fractional) collisional frequency shift evaluates to $\sim 3 \times 10^{-18}$. Therefore we consider it likely that collisional effects can be controlled or calibrated at the level of $\leq 10^{-16}$.
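A one-line check (illustrative only) of the quoted mean free time, using $\tau_{\mathrm{coll}}=1/(n\sigma v)$ with the parameters stated above:

```python
# Mean free time between collisions, tau = 1/(n * sigma * v), in CGS units.
n_MOT, sigma, v = 1e9, 1e-14, 70   # cm^-3, cm^2, cm/s
print(1.0 / (n_MOT * sigma * v))   # ~1.4e3 s, roughly the quoted 1500 s
```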
Electric shifts
---------------
The DC electric and light shifts were evaluated numerically, using the known transition rates (and dipole matrix elements derived from them) for the singlet states up to $4s 5p$ [@Hansen1999]. The calculated DC polarizability of the $4s^2 \ ^1S_0$ state is 75 $a_0^3$ and that of the $4s 3d \ ^1D_2$ state is 32 $a_0^3$. The resulting DC electric shift of the transition is $\delta \nu_{\mathrm{E}} \approx -\Delta \alpha_{\mathrm{DC}} {\mathcal{E}}^2 \approx$ +13 mHz/(V/cm)$^2$, $(\delta \nu/\nu)_{\mathrm{E}} \approx 2 \times 10^{-17}$/(V/cm)$^2$.
![Calculated DC electric-field-induced energy shifts of the singlet $m=0$ levels in calcium. The calculated energy shifts are fit to a quadratic curve to obtain the DC polarizability $\alpha_{\mathrm{DC}}$.](dc_shifts){width="\columnwidth"}
\[fig:electric-shifts\]
The light shift of the clock transition due to photons at 916 nm was evaluated using dressed states (using both the co- and counter-rotating components). It is equal to $\Delta E_{\mathrm{LS}} \simeq 12$ Hz/(W/cm$^2$) for $\hat{z}$-polarized light.
Note that for many of the applications of a secondary standard, it is sufficient that the frequency (shifts) be stable and repeatable. However, there are also ways to improve the absolute accuracy of the frequency standard by cancelling the light shift: a) the light shift and the (spatial) excitation profile scale in the same way with the laser intensity. Once the light shift is calibrated, it can be applied as a correction that is proportional to the (measured) excitation probability. cf. [@Huntemann2012a] for an example of a highly forbidden transition, where the light shift is cancelled by extrapolating the laser power. b) There are variants of the Ramsey pulse sequence that have been applied to forbidden clock transitions (“hyper-Ramsey” pulse sequences) [@Yudin2010], where the light shift can be cancelled at the expense of some complexity in the interrogation pulse sequence.
Black-body radiation shift
--------------------------
The BBR shift can be approximately estimated from the DC electric polarizabilities, since the relevant atomic transitions are well to the blue of the thermal photon distribution: $$\delta \nu_{\mathrm{BBR}} \approx -\frac{2}{15}(\alpha \pi)^3 T^4 \ [\alpha_{\mathrm{DC}}({}^1D_2) - \alpha_{\mathrm{s}}({}^1S_0)].$$ (Here $\alpha$ is the fine structure constant, $T$ is the temperature in atomic units, and the value for $\delta \nu_{\mathrm{BBR}}$ is also in atomic units.) Using the above-calculated DC polarizabilities, this yields $(\delta \nu/\nu)_{\mathrm{BBR}} \approx 0.6 \times 10^{-15}$. This BBR shift only needs to be evaluated to $\sim$ 10% accuracy, to obtain $\leq 10^{-16}$ fractional accuracy of the frequency standard at room temperature.
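As an illustrative check (not part of the original text), evaluating the formula above with the calculated polarizabilities at $T=300$ K, and taking the clock frequency to be that of two 916 nm photons, reproduces the quoted fractional shift:

```python
# Evaluate the BBR shift formula in atomic units and convert to Hz.
import math

alpha_fs = 1.0 / 137.035999   # fine-structure constant
k_B = 1.380649e-23            # J/K
E_h = 4.3597447e-18           # J, Hartree energy
h = 6.62607015e-34            # J s
c = 2.99792458e8              # m/s

T_au = k_B * 300.0 / E_h                           # temperature in atomic units
d_alpha = 32.0 - 75.0                              # alpha_DC(1D2) - alpha_s(1S0), in a.u.
dnu_au = -(2.0 / 15.0) * (alpha_fs * math.pi)**3 * T_au**4 * d_alpha
dnu_Hz = dnu_au * E_h / h                          # ~0.37 Hz
nu_clock = 2.0 * c / 916e-9                        # two-photon clock frequency, ~6.5e14 Hz
print(dnu_Hz, dnu_Hz / nu_clock)                   # ~0.4 Hz, ~6e-16
```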
Magnetic shifts
---------------
The clock states are both $m=0$ states, and the most abundant calcium isotope has zero nuclear magnetic moment. The dominant source of magnetic shifts of the clock transition is a 2nd-order Zeeman shift due to the ${\mathcal{B}}$-field-induced mixing of the $^1D_2$ and $^3D_{1,3}$ states. Assuming a $\sim$1 $\mu_B$ matrix element for this spin flip transition, and using the energy difference $\Delta \approx$ 50 THz between these states, the magnetic shift coefficient for the clock transition is estimated to be $\delta \nu_{\mathrm{B}} \approx$ 50 mHz/G$^2$. The fractional shift is $\Big(\frac{\delta \nu}{\nu}\Big)_B \approx 2 \times 10^{-16}$/G$^2$. This bodes well for achieving a fractional frequency accuracy better than $10^{-16}$ using field cancellation coils and/or simple magnetic shielding.
---------------------------------- --------------------------------------------------------------------- ----------------------------------------------
Systematic                         Parameter                                                              Fractional frequency
effect                             control range                                                          shift, $\frac{\delta \nu}{\nu}$ ($10^{-17}$)
2$^{\mathrm{nd}}$ order Doppler    $v_{\mathrm{rms}} \lesssim$ 1 m/s                                      0.5
Electric shift                     ${\mathcal{E}}\lesssim$ 0.1 V/cm                                       0.02
Light shift                        $\frac{\delta P}{P} \lesssim 10^{-3}$                                  1.8
BBR shift                          $\frac{\delta \alpha_{\mathrm{s}}}{\alpha_{\mathrm{s}}} \lesssim$ 5%   3
Magnetic shift                     ${\mathcal{B}}\lesssim$ 0.1 G                                          0.2
---------------------------------- --------------------------------------------------------------------- ----------------------------------------------
: *Estimated contributions to the systematic shifts in a calcium-MOT-based optical frequency reference, using realistic ranges for parameters that can be controlled or calculated.*
Summary
=======
A scheme to construct an optical frequency standard has been described, based on the [$^1S_0 \to {}^1D_2$ ]{}two-photon transition in calcium atoms trapped in a magneto-optical trap. Using a simple apparatus, it is capable of achieving statistical sensitivity and systematic immunity at the level of $\leq$ 1 part in $10^{16}$. The dominant systematic effects that are likely to affect the operation of the frequency standard are listed in Table I. An implementation of this scheme in a compact calcium MOT could lead to robust and portable frequency standards, with a multitude of potential applications in time & frequency transfer, geophysics and precision measurements. We have begun the construction of a prototype device.
Acknowledgments {#acknowledgments .unnumbered}
===============
I am grateful to Eric Hessels and Dave DeMille for their encouragement and helpful suggestions. I have benefited greatly from conversations with Stephan Falke and Uwe Sterr. I thank Eric Hudson, Wes Campbell and the ACME collaboration for the loan of equipment during a preliminary experiment. This work is supported by a Society in Science Branco Weiss Fellowship, administered by the ETH Zurich.
[^1]: D. DeMille, private communication (2014)
---
abstract: 'This paper has four main parts. In the first part, we construct a noncommutative residue for the hypoelliptic calculus on Heisenberg manifolds, that is, for the class of [$\Psi_{H}$DO]{} operators introduced by Beals-Greiner and Taylor. This noncommutative residue appears as the residual trace on integer order [$\Psi_{H}$DOs]{} induced by the analytic extension of the usual trace to non-integer order [$\Psi_{H}$DOs]{}. Moreover, it agrees with the integral of the density defined by the logarithmic singularity of the Schwartz kernel of the corresponding [$\Psi_{H}$DO]{}. In addition, we show that this noncommutative residue provides us with the unique trace up to constant multiple on the algebra of integer order [$\Psi_{H}$DOs]{}. In the second part, we give some analytic applications of this construction concerning zeta functions of hypoelliptic operators, logarithmic metric estimates for Green kernels of hypoelliptic operators, and the extension of the Dixmier trace to the whole algebra of integer order [$\Psi_{H}$DOs]{}. In the third part, we present examples of computations of noncommutative residues of some powers of the horizontal sublaplacian and the contact Laplacian on contact manifolds. In the fourth part, we present two applications in CR geometry. First, we give some examples of geometric computations of noncommutative residues of some powers of the horizontal sublaplacian and of the Kohn Laplacian. Second, we make use of the framework of noncommutative geometry and of our noncommutative residue to define lower dimensional volumes in pseudohermitian geometry, e.g., we can give sense to the area of any 3-dimensional CR manifold. On the way we obtain a spectral interpretation of the Einstein-Hilbert action in pseudohermitian geometry.'
address: 'Department of Mathematics, University of Toronto, Canada.'
author:
- Raphaël Ponge
title: 'Noncommutative residue for Heisenberg manifolds. Applications in CR and contact geometry'
---
Introduction
============
The aim of this paper is to construct a noncommutative residue trace for the Heisenberg calculus and to present several of its applications, in particular in CR and contact geometry. The Heisenberg calculus was built independently by Beals-Greiner [@BG:CHM] and Taylor [@Ta:NCMA] as the relevant pseudodifferential tool to study the main geometric operators on contact and CR manifolds, which fail to be elliptic, but may be hypoelliptic (see also [@BdM:HODCRPDO], [@EM:HAITH], [@FS:EDdbarbCAHG], [@Po:MAMS1]). This calculus holds in the general setting of a Heisenberg manifold, that is, a manifold $M$ together with a distinguished hyperplane bundle $H\subset TM$, and we construct a noncommutative residue trace in this general context.
The noncommutative residue trace of Wodzicki ([@Wo:LISA], [@Wo:NCRF]) and Guillemin [@Gu:NPWF] was originally constructed for classical [$\Psi$DOs]{} and it appears as the residual trace on integer order [$\Psi$DOs]{} induced by analytic extension of the operator trace to [$\Psi$DOs]{} of non-integer order. It has numerous applications and generalizations (see, e.g., [@Co:AFNG], [@Co:GCMFNCG], [@CM:LIFNCG], [@FGLS:NRMB], [@Gu:RTCAFIO], [@Ka:RNC], [@Le:NCRPDOLPS], [@MMS:FIT], [@MN:HPDO1], [@PR:CDBFCF], [@Po:IJM1], [@Sc:NCRMCS], [@Vas:PhD]). In particular, the existence of a residual trace is an essential ingredient in the framework for the local index formula in noncommutative geometry of Connes-Moscovici [@CM:LIFNCG].
Accordingly, the noncommutative residue for the Heisenberg calculus has various applications and several of them are presented in this paper. Further geometric applications can be found in [@Po:Crelle2].
Noncommutative residue for Heisenberg manifolds
-----------------------------------------------
Our construction of a noncommutative residue trace for [$\Psi_{H}$DOs]{}, i.e., for the pseudodifferential operators in the Heisenberg calculus, follows the approach of [@CM:LIFNCG]. It has two main ingredients:
\(i) The observation that the coefficient of the logarithmic singularity of the Schwartz kernel of a [$\Psi_{H}$DO]{} operator $P$ can be defined globally as a density $c_{P}(x)$ functorial with respect to the action of Heisenberg diffeomorphisms, i.e., diffeomorphisms preserving the Heisenberg structure (see Proposition \[thm:NCR.log-singularity\]).
\(ii) The analytic extension of the operator trace to [$\Psi_{H}$DOs]{} of complex non-integer order (Proposition \[thm:NCR.TR.global\]).
The analytic extension of the trace from (ii) is obtained by working directly at the level of densities and induces on [$\Psi_{H}$DOs]{} of integer order a residual trace given by (minus) the integral of the density from (i) (Proposition \[thm:NCR.TR.local\]). This residual trace is our noncommutative residue for the Heisenberg calculus.
In particular, as an immediate byproduct of this construction the noncommutative residue is invariant under the action of Heisenberg diffeomorphisms. Moreover, in the foliated case our noncommutative residue agrees with that of [@CM:LIFNCG], and on the algebra of Toeplitz pseudodifferential operators on a contact manifold of Boutet de Monvel-Guillemin [@BG:STTO] we recover the noncommutative residue of Guillemin [@Gu:RTCAFIO]. As a first application of this construction we show that when the Heisenberg manifold is connected the noncommutative residue is the unique trace up to constant multiple on the algebra of integer order [$\Psi_{H}$DOs]{} (Theorem \[thm:Traces.traces\]). As a consequence we get a characterization sums of [$\Psi_{H}$DO]{} commutators and we obtain that any smoothing operator can be written as a sum of [$\Psi_{H}$DO]{} commutators.
These results are the analogues for [$\Psi_{H}$DOs]{} of well known results of Wodzicki ([@Wo:PhD]; see also [@Gu:RTCAFIO]) for classical [$\Psi$DOs]{}. Our arguments are somewhat elementary and partly rely on the characterization of the Schwartz kernels of [$\Psi_{H}$DOs]{} that was used in the analysis of their logarithmic singularities near the diagonal.
Analytic applications on general Heisenberg manifolds
-----------------------------------------------------
The analytic extension of the trace allows us to directly define the zeta function $\zeta_{\theta}(P;s)$ of a hypoelliptic [$\Psi_{H}$DO]{} operator $P$ as a meromorphic functions on ${\ensuremath{\mathbb{C}}}$. The definition depends on the choice of a ray $L_{\theta}=\{\arg
\lambda =\theta\}$, $0\leq \theta <2\pi$, which is a ray of principal values for the principal symbol of $P$ in the sense of [@Po:CPDE1] and is not through an eigenvalue of $P$, so that $L_{\theta}$ is a ray of minimal growth for $P$. Moreover, the residues at the potential singularity points of $\zeta_{\theta}(P;s)$ can be expressed as noncommutative residues.
When the set of principal values of the principal symbol of $P$ contains the left half-plane $\Re \lambda\leq 0$ we further can relate the residues and regular values of $\zeta_{\theta}(P;s)$ to the coefficients in the heat kernel asymptotics for $P$ (see Proposition \[prop:Zeta.heat-zeta-global\] for the precise statement). We then use this to derive a local formula for the index of a hypoelliptic [$\Psi_{H}$DO]{} and to rephrase in terms of noncommutative residues the Weyl asymptotics for hypoelliptic [$\Psi$DOs]{} from [@Po:MAMS1] and [@Po:CPDE1]. An interesting application concerns logarithmic metric estimates for Green kernels of hypoelliptic [$\Psi_{H}$DOs]{}. It is not true that a positive hypoelliptic [$\Psi_{H}$DO]{} has a Green kernel positive near the diagonal. Nevertheless, by making use of the spectral interpretation of the noncommutative residue as a residual trace, we show that the positivity still pertains when the order is equal to the critical dimension $\dim M+1$ (Proposition \[prop:Metric.positivity-cP\]).
When the bracket condition $H+[H,H]=TM$ holds, i.e., $H$ is a Carnot-Carathéodory distribution, this allows us to get metric estimates in terms of the Carnot-Carathéodory metric associated to any given subriemannian metric on $H$ (Theorem \[thm:Metric.metric-estimate\]). This result connects nicely with the work of Fefferman, Stein and their collaborators on metric estimates for Green kernels of subelliptic sublaplacians on general Carnot-Carathéodory manifolds (see, e.g., [@FS:FSSOSO], [@Ma:EPKLCD], [@NSW:BMDVF1], [@Sa:FSGSSVF]).
Finally, we show that on a Heisenberg manifold $(M,H)$ the Dixmier trace is defined for [$\Psi_{H}$DOs]{} of order less than or equal to the critical order $-(\dim M+1)$ and on such operators agrees with the noncommutative residue (Theorem \[thm:NCG.Dixmier\]). Therefore, the noncommutative residue allows us to extend the Dixmier trace to the whole algebra of [$\Psi_{H}$DOs]{} of integer order. In noncommutative geometry the Dixmier trace plays the role of the integral on infinitesimal operator of order $\leq 1$. Therefore, our result allows us to integrate any [$\Psi_{H}$DO]{} even though it is not an infinitesimal operator of order $\leq 1$. This is the analogue of a well known result of Connes [@Co:AFNG] for classical [$\Psi$DOs]{}.
Noncommutative residue and contact geometry
-------------------------------------------
Let $(M^{2n+1},H)$ be a compact orientable contact manifold, so that the hyperplane bundle $H\subset TM$ can be realized as the kernel of a contact form $\theta$ on $M$. The additional datum of a *calibrated* almost complex structure on $H$ defines a Riemannian metric on $M$ whose volume ${{\operatorname{Vol}}}_{\theta}M$ depends only on $\theta$.
Let $\Delta_{b;k}$ be the horizontal sublaplacian associated to the above Riemannian metric acting on horizontal forms of degree $k$, $k\neq n$. This operator is hypoelliptic for $k\neq n$ and by making use of the results of [@Po:MAMS1] we can explicitly express the noncommutative residue of $\Delta_{b;k}^{-(n+1)}$ as a constant multiple of ${{\operatorname{Vol}}}_{\theta}M$ (see Proposition \[prop:Contact.residue-Deltab\]).
Next, the contact complex of Rumin [@Ru:FDVC] is a complex of horizontal forms on a contact manifold whose Laplacians are hypoelliptic in every bidegree. Let $\Delta_{R;k}$ denote the contact Laplacian acting on forms degree $k$, $k=0,\ldots,n$. Unlike the horizontal sublaplacian $\Delta_{R}$ does not act on all horizontal forms, but on the sections of a subbundle of horizontal forms. Moreover, it is not a sublaplacian and it even has order 4 on forms of degree $n$. Nevertheless, by making use of the results of [@Po:MAMS1] we can show that the noncommutative residues of $\Delta_{R;k}^{-(n+1)}$ for $k\neq n$ and of $\Delta_{R;n}^{-\frac{n+1}{2}}$ are universal constant multiples of the contact volume ${{\operatorname{Vol}}}_{\theta}M$ (see Proposition \[prop:Contact.residue-DeltaR\]).
Applications in CR geometry
---------------------------
Let $(M^{2n+1},H)$ be a compact orientable $\kappa$-strictly pseudoconvex CR manifold equipped with a pseudohermitian contact form $\theta$, i.e., the hyperplane bundle $H\subset TM$ has an (integrable) complex structure and the Levi form associated to $\theta$ has at every point $n-\kappa$ positive eigenvalues and $\kappa$ negative eigenvalues. If $h$ is a Levi metric on $M$ then the volume with respect to this metric depends only on $\theta$ and is denoted ${{\operatorname{Vol}}}_{\theta}M$.
As in the general contact case we can explicitly relate the pseudohermitian volume ${{\operatorname{Vol}}}_{\theta}M$ to the noncommutative residues of the following operators:
- ${\square}_{b;pq}^{-(n+1)}$, where ${\square}_{b;pq}$ denotes the Kohn Laplacian acting on $(p,q)$-forms with $q\neq \kappa$ and $q\neq n-\kappa$ (see Proposition \[prop:CR.residue-Boxb1\]);
- $\Delta_{b;pq}^{-(n+1)}$, where $\Delta_{b;pq}$ denotes the horizontal sublaplacian acting on $(p,q)$-forms with $(p,q)\neq (n-\kappa,\kappa)$ and $(p,q)\neq (\kappa,n-\kappa)$ (see Proposition \[prop:CR.residue-Deltab1\]).
From now on we assume $M$ strictly pseudoconvex (i.e. we have $\kappa=0$) and consider the following operators:
- ${\square}_{b;pq}^{-n}$, with $q\neq 0$ and $q\neq n$;
- $\Delta_{b;pq}^{-n}$, with $(p,q)\neq (n,0)$ and $(p,q)\neq (0,n)$.
Then we can make use of the results of [@BGS:HECRM] to express the noncommutative residues of these operators as universal constant multiples of the integral $\int_{M}R_{n}d\theta^{n}\wedge \theta$, where $R_{n}$ denotes the scalar curvature of the connection of Tanaka [@Ta:DGSSPCM] and Webster [@We:PHSRH] (see Propositions \[prop:CR.residue-Boxb2\] and \[prop:CR.residue-Deltab2\]). These results provide us with a spectral interpretation of the Einstein-Hilbert action in pseudohermitian geometry, which is analogous to that of Connes ([@Co:GCMFNCG], [@KW:GNGWR], [@Ka:DOG]) in the Riemannian case. Finally, by using an idea of Connes [@Co:GCMFNCG] we can make use of the noncommutative residue for classical [$\Psi$DOs]{} to define the $k$-dimensional volumes of a Riemannian manifold of dimension $m$ for $k=1,\ldots,m-1$, e.g. we can give sense to the area in any dimension (see [@Po:LMP07]). Similarly, we can make use of the noncommutative residue for the Heisenberg calculus to define the $k$-dimensional pseudohermitian volume ${{\operatorname{Vol}}}^{(k)}_{\theta}M$ for any $k=1,\ldots,2n+2$. The argument involves noncommutative geometry, but we can give a purely differential geometric expression of these lower dimensional volumes (see Proposition \[prop:CR.lower-dim.-volumes\]). Furthermore, in dimension 3 the area (i.e. the 2-dimensional volume) is a constant multiple of the integral of the Tanaka-Webster scalar curvature (Theorem \[thm:spectral.area\]). In particular, we find that the sphere $S^{3}\subset {\ensuremath{\mathbb{C}}}^{2}$ endowed with its standard pseudohermitian structure has area $\frac{\pi^{2}}{8\sqrt{2}}$.
Potential geometric applications
--------------------------------
The boundary of a strictly pseudoconvex domain of ${\ensuremath{\mathbb{C}}}^{n+1}$ naturally carries a strictly pseudoconvex CR structure, so one can expect the above results to be useful for studying, from the point of view of noncommutative geometry, strictly pseudoconvex boundaries and, more generally, Stein manifolds with boundary and the asymptotically complex hyperbolic manifolds of [@EMM:RLSPD]. Similarly, the boundary of a symplectic manifold naturally inherits a contact structure, so we could also use the results of this paper to give a noncommutative geometric study of symplectic manifolds with boundary.
Another interesting potential application concerns a special class of Lorentzian manifolds, the Fefferman spaces ([@Fe:MAEBKGPCD], [@Le:FMPHI]). A Fefferman Lorentzian space ${\ensuremath{\mathcal{F}}}$ can be realized as the total space of a circle bundle over a strictly pseudoconvex CR manifold $M$ and it carries a Lorentzian metric naturally associated to any pseudohermitian contact form on $M$. For instance, the curvature tensor of ${\ensuremath{\mathcal{F}}}$ can be explicitly expressed in terms of the curvature and torsion tensors of the Tanaka-Webster connection of $M$, and the d'Alembertian of ${\ensuremath{\mathcal{F}}}$ pushes down to the horizontal sublaplacian on $M$. This strongly suggests that one could deduce a noncommutative geometric study of Fefferman spaces from a noncommutative geometric study of strictly pseudoconvex CR manifolds. An item of special interest would be to get a spectral interpretation of the Einstein-Hilbert action in this setting.
Finally, it would be interesting to extend the results of this paper to other subriemannian geometries such as the quaternionic contact manifolds of Biquard [@Bi:MEAS].
Organization of the paper
-------------------------
The rest of the paper is organized as follows.
In Section \[sec:Heisenberg-calculus\], we recall the main facts about Heisenberg manifolds and the Heisenberg calculus.
In Section \[sec:NCR\], we study the logarithmic singularity of the Schwartz kernel of a [$\Psi_{H}$DO]{} and show that it gives rise to a well defined density. We then construct the noncommutative residue for the Heisenberg calculus as the residual trace induced on integer order [$\Psi_{H}$DOs]{} by the analytic extension of the usual trace to non-integer order [$\Psi_{H}$DOs]{}. Moreover, we show that the noncommutative residue of an integer order [$\Psi_{H}$DO]{} agrees with the integral of the density defined by the logarithmic singularity of its Schwartz kernel. We end the section by proving that, when the Heisenberg manifold is connected, the noncommutative residue is the only trace up to constant multiple.
In Section \[sec:Analytic-Applications\], we give some analytic applications of the construction of the noncommutative residue. First, we deal with zeta functions of hypoelliptic [$\Psi_{H}$DOs]{} and relate their singularities to the heat kernel asymptotics of the corresponding operators. Second, we prove logarithmic metric estimates for Green kernels of hypoelliptic [$\Psi_{H}$DOs]{}. Finally, we show that the noncommutative residue allows us to extend the Dixmier trace to *all* integer order [$\Psi_{H}$DOs]{}.
In Section \[sec:Contact\], we present examples of computations of noncommutative residues of some powers of the horizontal sublaplacian and of the contact Laplacian of Rumin on contact manifolds.
In Section \[sec:CR\], we present some applications in CR geometry. First, we give some examples of geometric computations of noncommutative residues of some powers of the horizontal sublaplacian and of the Kohn Laplacian. Second, we make use of the framework of noncommutative geometry and of the noncommutative residue for the Heisenberg calculus to define lower dimensional volumes in pseudohermitian geometry.
Finally, in the Appendix, for the reader's convenience, we present a detailed proof of Lemma \[lem:Heisenberg.extension-symbol\] about the extension of a homogeneous symbol into a homogeneous distribution. This is needed for the analysis of the logarithmic singularity of the Schwartz kernel of a [$\Psi_{H}$DO]{} in Section \[sec:NCR\].
Part of the results of this paper were announced in [@Po:CRAS1] and [@Po:CRAS2] and were presented as part of my PhD thesis at the University of Paris-Sud (Orsay, France) in December 2000. I am grateful to my advisor, Alain Connes, and to Charlie Epstein, Henri Moscovici and Michel Rumin for stimulating and helpful discussions related to the subject matter of this paper. In addition, I would like to thank Olivier Biquard, Richard Melrose and Pierre Pansu for their interest in the results of this paper.
Heisenberg calculus {#sec:Heisenberg-calculus}
===================
The Heisenberg calculus is the relevant pseudodifferential calculus to study hypoelliptic operators on Heisenberg manifolds. It was independently introduced by Beals-Greiner [@BG:CHM] and Taylor [@Ta:NCMA] (see also [@BdM:HODCRPDO], [@Dy:POHG], [@Dy:APOHSC], [@EM:HAITH], [@FS:EDdbarbCAHG], [@Po:MAMS1], [@RS:HDONG]). In this section we recall the main facts about the Heisenberg calculus following the point of view of [@BG:CHM] and [@Po:MAMS1].
Heisenberg manifolds
--------------------
In this subsection we gather the main definitions and examples concerning Heisenberg manifolds and their tangent Lie group bundles.
1\) A Heisenberg manifold is a pair $(M,H)$ consisting of a manifold $M$ together with a distinguished hyperplane bundle $H
\subset TM$.
2\) Given Heisenberg manifolds $(M,H)$ and $(M',H')$ a diffeomorphism $\phi:M\rightarrow M'$ is said to be a Heisenberg diffeomorphism when $\phi_{*}H=H'$.
Following are the main examples of Heisenberg manifolds:
*- Heisenberg group.* The $(2n+1)$-dimensional Heisenberg group ${\ensuremath{\mathbb{H}}}^{2n+1}$ is the 2-step nilpotent group consisting of ${\ensuremath{\mathbb{R}}}^{2n+1}={\ensuremath{\mathbb{R}}}\times {\ensuremath{\mathbb{R}}}^{2n}$ equipped with the group law, $$x.y=(x_{0}+y_{0}+\sum_{1\leq j\leq n}(x_{n+j}y_{j}-x_{j}y_{n+j}),x_{1}+y_{1},\ldots,x_{2n}+y_{2n}).
$$ A left-invariant basis for its Lie algebra ${\ensuremath{\mathfrak{h}}}^{2n+1}$ is then provided by the vector fields, $$X_{0}=\frac{\partial}{\partial x_{0}}, \quad X_{j}=\frac{\partial}{\partial x_{j}}+x_{n+j}\frac{\partial}{\partial
x_{0}}, \quad X_{n+j}=\frac{\partial}{\partial x_{n+j}}-x_{j}\frac{\partial}{\partial
x_{0}}, \quad 1\leq j\leq n.
$$ For $j,k=1,\ldots,n$ we have the Heisenberg relations $[X_{j},X_{n+k}]=-2\delta_{jk}X_{0}$ and $[X_{0},X_{j}]=[X_{j},X_{k}]=[X_{n+j},X_{n+k}]=0$. In particular, the subbundle spanned by the vector fields $X_{1},\ldots,X_{2n}$ yields a left-invariant Heisenberg structure on ${\ensuremath{\mathbb{H}}}^{2n+1}$.
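For instance, when $n=1$ we have $X_{1}=\frac{\partial}{\partial x_{1}}+x_{2}\frac{\partial}{\partial x_{0}}$ and $X_{2}=\frac{\partial}{\partial x_{2}}-x_{1}\frac{\partial}{\partial x_{0}}$, and a direct computation gives $$[X_{1},X_{2}]=X_{1}(-x_{1})\frac{\partial}{\partial x_{0}}-X_{2}(x_{2})\frac{\partial}{\partial x_{0}}=-2\frac{\partial}{\partial x_{0}}=-2X_{0},
$$ in accordance with the Heisenberg relations above.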
*- Foliations.* A (smooth) foliation is a manifold $M$ together with a subbundle ${\ensuremath{\mathcal{F}}}\subset TM$ integrable in Frobenius’ sense, i.e., the space of sections of ${\ensuremath{\mathcal{F}}}$ is closed under the Lie bracket of vector fields. Therefore, any codimension 1 foliation is a Heisenberg manifold.
*- Contact manifolds.* Opposite to foliations are contact manifolds. A contact manifold is a Heisenberg manifold $(M^{2n+1}, H)$ such that $H$ can be locally realized as the kernel of a contact form, that is, a $1$-form $\theta$ such that $d\theta_{|H}$ is nondegenerate. When $M$ is orientable it is equivalent to require $H$ to be globally the kernel of a contact form. Furthermore, by Darboux’s theorem any contact manifold is locally Heisenberg-diffeomorphic to the Heisenberg group ${\ensuremath{\mathbb{H}}}^{2n+1}$ equipped with the standard contact form $\theta^{0}= dx_{0}+\sum_{j=1}^{n}(x_{j}dx_{n+j}-x_{n+j}dx_{j})$.
*- Confoliations.* According to Eliashberg-Thurston [@ET:C] a *confoliation structure* on an oriented manifold $M^{2n+1}$ is given by a global non-vanishing $1$-form $\theta$ on $M$ such that $(d\theta)^{n}\wedge \theta\geq 0$. In particular, if we let $H=\ker \theta$ then $(M,H)$ is a Heisenberg manifold which is a foliation when $d\theta\wedge \theta=0$ and a contact manifold when $(d\theta)^{n}\wedge \theta>0$.
*- CR manifolds.* A CR structure on an orientable manifold $M^{2n+1}$ is given by a rank $n$ complex subbundle $T_{1,0}\subset T_{{\ensuremath{\mathbb{C}}}}M$ such that $T_{1,0}$ is integrable in Frobenius’ sense and we have $T_{1,0}\cap T_{0,1}=\{0\}$, where we have set $T_{0,1}=\overline{T_{1,0}}$. Equivalently, the subbundle $H=\Re (T_{1,0}\oplus T_{0,1})$ has the structure of a complex bundle of (real) dimension $2n$. In particular, $(M,H)$ is a Heisenberg manifold. The main example of a CR manifold is that of the (smooth) boundary $M=\partial D$ of a bounded complex domain $D \subset {\ensuremath{\mathbb{C}}}^{n+1}$. In particular, when $D$ is strongly pseudoconvex with defining function $\rho$ the 1-form $\theta=i(\partial
-\bar{\partial})\rho$ is a contact form on $M$.
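For instance, for the unit ball $D=\{|z|<1\}$ in ${\ensuremath{\mathbb{C}}}^{n+1}$ with defining function $\rho(z)=|z|^{2}-1$ this yields on $M=S^{2n+1}$ the contact form $$\theta=i(\partial-\bar{\partial})\rho= i\sum_{j=1}^{n+1}(\bar{z}_{j}dz_{j}-z_{j}d\bar{z}_{j}),
$$ which, up to normalization, is the standard pseudohermitian contact form on the sphere.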
Next, the terminology Heisenberg manifold stems from the fact that the relevant tangent structure in this setting is that of a bundle $GM$ of graded nilpotent Lie groups (see [@BG:CHM], [@Be:TSSRG], [@EMM:RLSPD], [@FS:EDdbarbCAHG], [@Gr:CCSSW], [@Po:Pacific1], [@Ro:INA], [@Va:PhD]). This tangent Lie group bundle can be described as follows.
First, there is an intrinsic Levi form ${\ensuremath{\mathcal{L}}}:H\times H\rightarrow TM/H$ such that, for any point $a
\in M$ and any sections $X$ and $Y$ of $H$ near $a$, we have $${\ensuremath{\mathcal{L}}}_{a}(X(a),Y(a))=[X,Y](a) \qquad \bmod H_{a}.
\label{eq:Heisenberg.Levi-form}$$ In other words the class of $[X,Y](a)$ modulo $H_{a}$ depends only on the values $X(a)$ and $Y(a)$, not on the germs of $X$ and $Y$ near $a$ (see [@Po:Pacific1]). This allows us to define the tangent Lie algebra bundle ${\ensuremath{\mathfrak{g}}}M$ as the vector bundle $(TM/H)\oplus H$ together with the grading and field of Lie brackets such that, for sections $X_{0}$, $Y_{0}$ of $TM/H$ and $X'$, $Y'$ of $H$, we have $$\begin{gathered}
t.(X_{0}+X')=t^{2}X_{0}+t X', \qquad t\in {\ensuremath{\mathbb{R}}},
\label{eq:Heisenberg.Heisenberg-dilations}\\
[X_{0}+X',Y_{0}+Y']_{{\ensuremath{\mathfrak{g}}}M}={\ensuremath{\mathcal{L}}}(X',Y').
\end{gathered}$$
Since each fiber ${\ensuremath{\mathfrak{g}}}_{a}M$ is 2-step nilpotent, ${\ensuremath{\mathfrak{g}}}M$ is the Lie algebra bundle of a Lie group bundle $GM$ which can be realized as $(TM/H)\oplus H$ together with the field of group law such that, for sections $X_{0}$, $Y_{0}$ of $TM/H$ and $X'$, $Y'$ of $H$, we have $$(X_{0}+X').(Y_{0}+Y')=X_{0}+Y_{0}+\frac{1}{2}{\ensuremath{\mathcal{L}}}(X',Y')+X'+Y'.
\label{eq:Heisenberg.group-law}$$ We call $GM$ the *tangent Lie group bundle* of $M$.
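For instance, when $(M,H)$ is the Heisenberg group ${\ensuremath{\mathbb{H}}}^{2n+1}$ with $H$ spanned by $X_{1},\ldots,X_{2n}$ as above, the Levi form is given by ${\ensuremath{\mathcal{L}}}(X_{j},X_{n+k})=-2\delta_{jk}X_{0} \bmod H$ and ${\ensuremath{\mathcal{L}}}(X_{j},X_{k})={\ensuremath{\mathcal{L}}}(X_{n+j},X_{n+k})=0$ for $j,k=1,\ldots,n$, so every tangent group $G_{a}{\ensuremath{\mathbb{H}}}^{2n+1}$ is isomorphic to ${\ensuremath{\mathbb{H}}}^{2n+1}$ itself. More generally, since the Levi form of a contact manifold is nondegenerate, the tangent groups of any contact manifold $(M^{2n+1},H)$ are all isomorphic to ${\ensuremath{\mathbb{H}}}^{2n+1}$.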
Let $\phi$ be a Heisenberg diffeomorphism from $(M,H)$ onto a Heisenberg manifold $(M',H')$. Since we have $\phi_{*}H=H'$ the linear differential $\phi'$ induces linear vector bundle isomorphisms $\phi':H\rightarrow H'$ and $\overline{\phi'}:TM/H\rightarrow TM'/H'$, so that we get a linear vector bundle isomorphism $\phi_{H}':(TM/H)\oplus H\rightarrow (TM'/H')\oplus H'$ by letting $$\phi_{H}'(a).(X_{0}+X')= \overline{\phi'}(a)X_{0}+\phi'(a)X',
\label{eq:Heisenberg.tangent-map}$$ for any $a \in M$ and any $X_{0}$ in $(T_{a}M/H_{a})$ and $X'$ in $H_{a}$. This isomorphism commutes with the dilations in (\[eq:Heisenberg.Heisenberg-dilations\]) and it can be further shown that it gives rise to a Lie group isomorphism from $GM$ onto $GM'$ (see [@Po:Pacific1]).
The above description of $GM$ can be related to the extrinsic approach of [@BG:CHM] as follows.
A local frame $X_{0},X_{1},\ldots,X_{d}$ of $TM$ such that $X_{1},\ldots,X_{d}$ span $H$ is called a $H$-frame.
Let $U \subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open of local coordinates equipped with a $H$-frame $X_{0},\ldots,X_{d}$.
\[def:Heisenberg-privileged-coordinates\] For $a\in U$ we let $\psi_{a}:{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\rightarrow {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ denote the unique affine change of variable such that $\psi_{a}(a)=0$ and $(\psi_{a})_{*}X_{j}(0)=\frac{\partial}{\partial x_{j}}$ for $j=0,\ldots,d$. The coordinates provided by the map $\psi_{a}$ are called privileged coordinates centered at $a$.
In addition, on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ we consider the dilations, $$t.x=(t^{2}x_{0},tx_{1},\ldots,tx_{d}), \qquad t \in {\ensuremath{\mathbb{R}}}.
\label{eq:Heisenberg.Heisenberg-dilations-Rd}$$
In privileged coordinates centered at $a$ we can write $X_{j}=\frac{\partial}{\partial x_{j}}+\sum_{k=0}^{d}a_{jk}(x)\frac{\partial}{\partial x_{k}}$ with $a_{jk}(0)=0$. Let $X_{0}^{(a)}=\frac{\partial}{\partial x_{0}}$ and for $j=1,\ldots,d$ let $X_{j}^{(a)}=\frac{\partial}{\partial x_{j}}+\sum_{k=1}^{d}b_{jk}x_{k}
\frac{\partial}{\partial x_{0}}$, where $b_{jk}= \partial_{x_{k}}a_{j0}(0)$. With respect to the dilations (\[eq:Heisenberg.Heisenberg-dilations-Rd\]) the vector field $X_{j}^{(a)}$ is homogeneous of degree $w_{0}=-2$ for $j=0$ and of degree $w_{j}=-1$ for $j=1,\ldots,d$. In fact, using Taylor expansions at $x=0$ we get a formal expansion $X_{j} \sim X_{j}^{(a)}+X_{j,w_{j}-1}+\ldots$, with $X_{j,l}$ a homogeneous vector field of degree $l$.
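Concretely, if we agree that a vector field $X$ is homogeneous of degree $w$ with respect to the dilations $\delta_{t}(x)=t.x$ when $(\delta_{t})_{*}X=t^{-w}X$, then a direct computation gives $$(\delta_{t})_{*}\frac{\partial}{\partial x_{0}}=t^{2}\frac{\partial}{\partial x_{0}}, \qquad (\delta_{t})_{*}X_{j}^{(a)}=t\Big(\frac{\partial}{\partial x_{j}}+\sum_{k=1}^{d}b_{jk}x_{k}\frac{\partial}{\partial x_{0}}\Big)=t\,X_{j}^{(a)}, \quad j=1,\ldots,d,
$$ which recovers the degrees $w_{0}=-2$ and $w_{j}=-1$ above.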
The vector fields $X_{0}^{(a)},\ldots,X_{d}^{(a)}$ span a 2-step nilpotent Lie algebra under the Lie bracket of vector fields. Its associated Lie group $G^{(a)}$ can be realized as ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ equipped with the group law, $$x.y=(x_{0}+y_{0}+\sum_{j,k=1}^{d}b_{kj}x_{j}y_{k},x_{1}+y_{1},\ldots,x_{d}+y_{d}).
$$
On the other hand, the vectors $X_{0}(a),\ldots,X_{d}(a)$ provide us with a linear basis of the space $(T_{a}M/H_{a})\oplus H_{a}$. This allows us to identify $G_{a}M$ with ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ equipped with the group law, $$x.y=(x_{0}+y_{0}+\frac{1}{2}\sum_{j,k=1}^{d}L_{jk}(a)x_{j}y_{k},x_{1}+y_{1},\ldots,x_{d}+y_{d}).
\label{eq:Heisenberg.group-law-tangent-group-coordinates}$$ Here the functions $L_{jk}$ denote the coefficients of the Levi form (\[eq:Heisenberg.Levi-form\]) with respect to the $H$-frame $X_{0},\ldots,X_{d}$, i.e., we have ${\ensuremath{\mathcal{L}}}(X_{j},X_{k})=[X_{j},X_{k}]=L_{jk}X_{0}
\bmod H$.
The Lie group $G^{(a)}$ is isomorphic to $G_{a}M$ since one can check that $L_{jk}=b_{jk}-b_{kj}$. An explicit isomorphism is given by $$\phi_{a}(x_{0},\ldots,x_{d})= (x_{0}-\frac{1}{4}\sum_{j,k=1}^{d}(b_{jk}+b_{kj})x_{j}x_{k},x_{1},\ldots,x_{d}).
$$
\[def:Heisenberg-Heisenberg-coordinates\] The local coordinates provided by the map $\varepsilon_{a}:=\phi_{a}\circ\psi_{a}$ are called Heisenberg coordinates centered at $a$.
The Heisenberg coordinates refine the privileged coordinates in such a way that the above realizations of $G^{(a)}$ and $G_{a}M$ agree. In particular, the vector fields $X_{j}^{(a)}$ and $X_{j}^{a}$ agree in these coordinates. This allows us to see $X_{j}^{a}$ as a first order approximation of $X_{j}$. For this reason $X_{j}^{a}$ is called the *model vector field of $X_{j}$* at $a$.
Left-invariant pseudodifferential operators
-------------------------------------------
Let $(M^{d+1},H)$ be a Heisenberg manifold and let $G$ be the tangent group $G_{a}M$ of $M$ at a given point $a\in M$. We briefly recall the calculus for homogeneous left-invariant [$\Psi$DOs]{} on the nilpotent group $G$.
Recall that if $E$ is a finite dimensional vector space the Schwartz class ${\ensuremath{\mathcal{S}}}(E)$ carries a natural Fréchet space topology and the Fourier transform of a function $f\in {\ensuremath{\mathcal{S}}}(E)$ is the element $\hat{f}\in {\ensuremath{\mathcal{S}}}(E^{*})$ such that $\hat{f}(\xi)=\int_{E}e^{i{\ensuremath{\langle \xi , x \rangle}}}f(x)dx$ for any $\xi \in E^{*}$, where $dx$ denotes the Lebesgue measure of $E$. In the case where $E=(T_{a}M/H_{a})\oplus H_{a}$ the Lebesgue measure actually agrees with the Haar measure of $G$, so ${\ensuremath{\mathcal{S}}}(E)$ and ${\ensuremath{\mathcal{S}}}(G)$ agree. Furthermore, as $E^{*}=(T_{a}M/H_{a})^{*}\oplus H_{a}^{*}$ is just the linear dual ${\ensuremath{\mathfrak{g}}}^{*}$ of the Lie algebra of $G$, we also see that ${\ensuremath{\mathcal{S}}}(E^{*})$ agrees with ${\ensuremath{\mathcal{S}}}({\ensuremath{\mathfrak{g}}}^{*})$.
Let ${\ensuremath{\mathcal{S}}}_{0}(G)$ denote the closed subspace of ${\ensuremath{\mathcal{S}}}(G)$ consisting of functions $f \in {\ensuremath{\mathcal{S}}}(G)$ such that for any differential operator $P$ on ${\ensuremath{\mathfrak{g}}}^{*}$ we have $(P\hat{f})(0)=0$. Notice that the image $\hat{{\ensuremath{\mathcal{S}}}}_{0}(G)$ of ${\ensuremath{\mathcal{S}}}_{0}(G)$ under the Fourier transform consists of functions $v\in {\ensuremath{\mathcal{S}}}({\ensuremath{\mathfrak{g}}}^{*})$ such that, given any norm $|.|$ on ${\ensuremath{\mathfrak{g}}}^{*}$, near $\xi=0$ we have $|v(\xi)|={\operatorname{O}}(|\xi|^{N})$ for any $N\in {\ensuremath{\mathbb{N}}}$.
We endow ${\ensuremath{\mathfrak{g}}}^{*}$ with the dilations $\lambda.\xi=(\lambda^{2}\xi_{0},\lambda\xi')$ coming from (\[eq:Heisenberg.Heisenberg-dilations\]). For $m\in {\ensuremath{\mathbb{C}}}$ we let $S_{m}({\ensuremath{\mathfrak{g}}}^{*})$ denote the closed subspace of $C^{\infty}({\ensuremath{\mathfrak{g}}}^{*}\setminus 0)$ consisting of functions $p(\xi)\in C^{\infty}({\ensuremath{\mathfrak{g}}}^{*}\setminus 0)$ such that $p(\lambda.\xi)=\lambda^{m}p(\xi)$ for any $\lambda>0$.
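For instance, the functions $\xi_{0}$ and $\xi_{1}^{2}+\ldots+\xi_{d}^{2}$ both belong to $S_{2}({\ensuremath{\mathfrak{g}}}^{*})$, since $(\lambda.\xi)_{0}=\lambda^{2}\xi_{0}$ and $(\lambda.\xi)_{j}=\lambda\xi_{j}$ for $j=1,\ldots,d$; this reflects the fact that in the Heisenberg calculus the direction transverse to $H$ carries twice the weight of the directions along $H$.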
If $p(\xi)\in S_{m}({\ensuremath{\mathfrak{g}}}^{*})$ then it defines an element of $\hat{{\ensuremath{\mathcal{S}}}}_{0}({\ensuremath{\mathfrak{g}}}^{*})'$ by letting $${\ensuremath{\langle p , g \rangle}}= \int_{{\ensuremath{\mathfrak{g}}}^{*}}p(\xi)g(\xi)d\xi, \qquad g \in \hat{{\ensuremath{\mathcal{S}}}}_{0}({\ensuremath{\mathfrak{g}}}^{*}).$$ This allows us to define the inverse Fourier transform of $p$ as the element $\check{p}\in {\ensuremath{\mathcal{S}}}_{0}(G)'$ such that ${\ensuremath{\langle \check{p} , f \rangle}}={\ensuremath{\langle p , \check{f} \rangle}}$ for any $f \in {\ensuremath{\mathcal{S}}}_{0}(G)$. It then can be shown (see, e.g., [@BG:CHM], [@CGGP:POGD]) that the left-convolution with $p$ defines a continuous endomorphism of ${\ensuremath{\mathcal{S}}}_{0}(G)$ via the formula, $${\operatorname{Op}}(p)f(x)=\check{p}*f(x)={\ensuremath{\langle \check{p}(y) , f(xy) \rangle}}, \qquad f\in {\ensuremath{\mathcal{S}}}_{0}(G).
\label{Heisenberg.left-invariant-PDO}$$ Moreover, we have a bilinear product, $$*:S_{m_{1}}({\ensuremath{\mathfrak{g}}}^{*})\times S_{m_{2}}({\ensuremath{\mathfrak{g}}}^{*})
\longrightarrow S_{m_{1}+m_{2}}({\ensuremath{\mathfrak{g}}}^{*}),
\label{eq:Heisenberg.product-symbols}$$ in such a way that, for any $p_{1}\in S_{m_{1}}({\ensuremath{\mathfrak{g}}}^{*})$ and any $p_{2}\in S_{m_{2}}({\ensuremath{\mathfrak{g}}}^{*})$, we have $${\operatorname{Op}}(p_{1})\circ {\operatorname{Op}}(p_{2})={\operatorname{Op}}(p_{1}*p_{2}).$$
In addition, if $p \in S_{m}({\ensuremath{\mathfrak{g}}}^{*})$ then ${\operatorname{Op}}(p)$ really is a pseudodifferential operator. Indeed, let $X_{0}(a),\ldots,X_{d}(a)$ be a (linear) basis of ${\ensuremath{\mathfrak{g}}}$ so that $X_{0}(a)$ is in $T_{a}M/H_{a}$ and $X_{1}(a),\ldots,X_{d}(a)$ span $H_{a}$. For $j=0,\ldots,d$ let $X_{j}^{a}$ be the left-invariant vector field on $G$ such that $X^{a}_{j|_{x=0}}=X_{j}(a)$. The basis $X_{0}(a),\ldots,X_{d}(a)$ yields a linear isomorphism ${\ensuremath{\mathfrak{g}}}\simeq {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$, hence a global chart of $G$. In the corresponding local coordinates $p(\xi)$ is a homogeneous symbol on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\setminus 0$ with respect to the dilations (\[eq:Heisenberg.Heisenberg-dilations-Rd\]). Similarly, each vector field $\frac{1}{i}X_{j}^{a}$, $j=0,\ldots,d$, corresponds to a vector field on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ with symbol $\sigma_{j}^{a}(x,\xi)$. If we set $\sigma^{a}(x,\xi)=(\sigma_{0}^{a}(x,\xi),\ldots,\sigma_{d}^{a}(x,\xi))$, then it can be shown that in these local coordinates we have $${\operatorname{Op}}(p)f(x)= (2\pi)^{-(d+1)}\int_{{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}} e^{i{\ensuremath{\langle x , \xi \rangle}}}p(\sigma^{a}(x,\xi))\hat{f}(\xi)d\xi, \qquad f \in {\ensuremath{\mathcal{S}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}).
\label{eq:PsiHDO.PsiDO-convolution}$$ In other words ${\operatorname{Op}}(p)$ is the pseudodifferential operator $p(-iX^{a}):=p(\sigma^{a}(x,D))$ acting on ${\ensuremath{\mathcal{S}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
The [$\Psi_{H}$DO]{} operators
------------------------------
The original idea in the Heisenberg calculus, which goes back to Elias Stein, is to construct a class of operators on a given Heisenberg manifold $(M^{d+1},H)$, called [$\Psi_{H}$DOs]{}, which at any point $a \in M$ are modeled in a suitable sense on left-invariant pseudodifferential operators on the tangent group $G_{a}M$.
Let $U \subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open of local coordinates equipped with a $H$-frame $X_{0},\ldots,X_{d}$.
$S_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $m\in{\ensuremath{\mathbb{C}}}$, consists of functions $p(x,\xi)$ in $C^{\infty}(U\times{{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0})$ which are homogeneous of degree $m$ in the $\xi$-variable with respect to the dilations (\[eq:Heisenberg.Heisenberg-dilations-Rd\]), i.e., we have $p(x,t.\xi)=t^m p(x,\xi)$ for any $t>0$.
In the sequel we endow ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ with the pseudo-norm, $$\|\xi\|=(\xi_{0}^{2}+\xi_{1}^{4}+\ldots+\xi_{d}^{4})^{1/4}, \qquad \xi\in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}.
$$ In addition, for any multi-order $\beta\in {\ensuremath{\mathbb{N}}}^{d+1}_{0}$ we set ${\ensuremath{\langle\! \beta\!\rangle}}=2\beta_{0}+\beta_{1}+\ldots+\beta_{d}$.
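Note that the pseudo-norm and the weight ${\ensuremath{\langle\! \beta\!\rangle}}$ are tailored to the dilations (\[eq:Heisenberg.Heisenberg-dilations-Rd\]): for any $t>0$ we have $$\|t.\xi\|=\big(t^{4}\xi_{0}^{2}+t^{4}\xi_{1}^{4}+\ldots+t^{4}\xi_{d}^{4}\big)^{1/4}=t\|\xi\|, \qquad (t.\xi)^{\beta}=t^{{\ensuremath{\langle\! \beta\!\rangle}}}\xi^{\beta},
$$ so the monomial $\xi^{\beta}$ is homogeneous of degree ${\ensuremath{\langle\! \beta\!\rangle}}$ in the above sense.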
$S^m({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $m\in{\ensuremath{\mathbb{C}}}$, consists of functions $p(x,\xi)$ in $C^{\infty}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ with an asymptotic expansion $ p \sim \sum_{j\geq 0} p_{m-j}$, $p_{k}\in S_{k}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, in the sense that, for any integer $N$, any compact $K \subset U$ and any multi-orders $\alpha$, $\beta$, there exists $C_{NK\alpha\beta}>0$ such that, for any $x\in K$ and any $\xi\in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ so that $\|\xi \| \geq 1$, we have $$| \partial^\alpha_{x}\partial^\beta_{\xi}(p-\sum_{j<N}p_{m-j})(x,\xi)| \leq
C_{NK\alpha\beta }\|\xi\|^{\Re m-{\ensuremath{\langle\! \beta\!\rangle}} -N}.
\label{eq:Heisenberg.asymptotic-expansion-symbols}$$
Next, for $j=0,\ldots,d$ let $\sigma_{j}(x,\xi)$ denote the symbol (in the classical sense) of the vector field $\frac{1}{i}X_{j}$ and set $\sigma=(\sigma_{0},\ldots,\sigma_{d})$. Then for $p \in S^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ we let $p(x,-iX)$ be the continuous linear operator from $C^{\infty}_{c}(U)$ to $C^{\infty}(U)$ such that $$p(x,-iX)f(x)= (2\pi)^{-(d+1)} \int e^{ix.\xi} p(x,\sigma(x,\xi))\hat{f}(\xi)d\xi,
\qquad f\in C^{\infty}_{c}(U).$$
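For instance, if $p(x,\xi)=\xi_{j}$ then $p(x,\sigma(x,\xi))=\sigma_{j}(x,\xi)$ and the above formula simply recovers $p(x,-iX)=\frac{1}{i}X_{j}$; more generally, when $p(x,\xi)$ is a polynomial in $\xi$ the operator $p(x,-iX)$ is a differential operator in the vector fields $X_{0},\ldots,X_{d}$.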
In the sequel we let ${\ensuremath{\Psi^{-\infty}}}(U)$ denote the space of smoothing operators on $U$, that is, the space of continuous operators $P:{\ensuremath{\mathcal{E}}}'(U)\rightarrow {\ensuremath{\mathcal{D}}}'(U)$ with a smooth Schwartz kernel.
${\ensuremath{\Psi_{H}}}^{m}(U)$, $m\in {\ensuremath{\mathbb{C}}}$, consists of operators $P:C^{\infty}_{c}(U)\rightarrow C^{\infty}(U)$ of the form $$P= p(x,-iX)+R,
$$ with $p$ in $S^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ (called the symbol of $P$) and $R$ a smoothing operator.
The class of [$\Psi_{H}$DOs]{} is invariant under changes of $H$-framed charts (see [@BG:CHM Sect. 16], [@Po:MAMS1 Appendix A]). Therefore, we can extend the definition of [$\Psi_{H}$DOs]{} to the Heisenberg manifold $(M^{d+1},H)$ and let them act on sections of a vector bundle ${\ensuremath{\mathcal{E}}}^{r}$ over $M$ as follows.
${\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m\in {\ensuremath{\mathbb{C}}}$, consists of continuous operators $P$ from $C^{\infty}_{c}(M,{\ensuremath{\mathcal{E}}})$ to $C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ such that:
\(i) The Schwartz kernel of $P$ is smooth off the diagonal;
\(ii) For any $H$-framed local chart $\kappa:U\rightarrow V\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ over which there is a trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|U}\rightarrow U\times
{\ensuremath{\mathbb{C}}}^{r}$ the operator $\kappa_{*}\tau_{*}(P_{|U})$ belongs to ${\ensuremath{\Psi_{H}}}^{m}(V,{\ensuremath{\mathbb{C}}}^{r}):={\ensuremath{\Psi_{H}}}^{m}(V)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathbb{C}}}^{r}$.
Let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m \in {\ensuremath{\mathbb{C}}}$.
\(1) Let $Q\in {\ensuremath{\Psi_{H}}}^{m'}(M,{\ensuremath{\mathcal{E}}})$, $m'\in {\ensuremath{\mathbb{C}}}$, and suppose that $P$ or $Q$ is uniformly properly supported. Then the operator $PQ$ belongs to ${\ensuremath{\Psi_{H}}}^{m+m'}(M,{\ensuremath{\mathcal{E}}})$.
\(2) The transpose operator $P^{t}$ belongs to ${\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}}^{*})$.
\(3) Suppose that $M$ is endowed with a density $>0$ and ${\ensuremath{\mathcal{E}}}$ is endowed with a Hermitian metric. Then the adjoint $P^{*}$ of $P$ belongs to ${\ensuremath{\Psi_{H}}}^{\overline{m}}(M,{\ensuremath{\mathcal{E}}})$.
In this setting the principal symbol of a [$\Psi_{H}$DO]{} can be defined intrinsically as follows.
Let ${\ensuremath{\mathfrak{g}}}^{*}M=(TM/H)^{*}\oplus H^{*}$ denote the (linear) dual of the Lie algebra bundle ${\ensuremath{\mathfrak{g}}}M$ of $GM$ with canonical projection $\text{pr}: {\ensuremath{\mathfrak{g}}}^{*}M\rightarrow M$. For $m \in {\ensuremath{\mathbb{C}}}$ we let $S_{m}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}})$ be the space of sections $p\in C^{\infty}({\ensuremath{\mathfrak{g}}}^{*}M\setminus 0,{\ensuremath{{\operatorname{End}}}}\text{pr}^{*}{\ensuremath{\mathcal{E}}})$ such that $p(x,t.\xi)=t^{m}p(x,\xi)$ for any $t>0$.
\[def:Heisenberg.principal-symbol\] The principal symbol of an operator $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ is the unique symbol $\sigma_{m}(P)$ in $S_{m}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}})$ such that, for any $a\in M$ and for any trivializing $H$-framed local coordinates near $a$, in Heisenberg coordinates centered at $a$ we have $\sigma_{m}(P)(0,\xi)=p_{m}(0,\xi)$, where $p_{m}(x,\xi)$ is the principal symbol of $P$ in the sense of (\[eq:Heisenberg.asymptotic-expansion-symbols\]).
Given a point $a\in M$ the principal symbol $\sigma_{m}(P)$ allows us to define the model operator of $P$ at $a$ as the left-invariant [$\Psi$DO]{} on ${\ensuremath{\mathcal{S}}}_{0}(G_{a}M,{\ensuremath{\mathcal{E}}}_{a})$ with symbol $p_{m}^{a}(\xi):=\sigma_{m}(P)(a,\xi)$ so that, in the notation of (\[Heisenberg.left-invariant-PDO\]), the operator $P^{a}$ is just ${\operatorname{Op}}(p_{m}^{a})$.
For $m \in {\ensuremath{\mathbb{C}}}$ let $S_{m}({\ensuremath{\mathfrak{g}}}^{*}_{a}M,{\ensuremath{\mathcal{E}}}_{a})$ be the space of functions $p\in C^{\infty}({\ensuremath{\mathfrak{g}}}^{*}_{a}M\setminus 0,{\ensuremath{\mathcal{E}}}_{a})$ which are homogeneous of degree $m$. Then the product (\[eq:Heisenberg.product-symbols\]) yields a bilinear product, $$*^{a}:S_{m_{1}}({\ensuremath{\mathfrak{g}}}^{*}_{a}M,{\ensuremath{\mathcal{E}}}_{a})\times S_{m_{2}}({\ensuremath{\mathfrak{g}}}^{*}_{a}M,{\ensuremath{\mathcal{E}}}_{a})\rightarrow S_{m_{1}+m_{2}}({\ensuremath{\mathfrak{g}}}^{*}_{a}M,{\ensuremath{\mathcal{E}}}_{a}).
$$ This product depends smoothly on $a$, so it gives rise to a bilinear product, $$\begin{gathered}
*:S_{m_{1}}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}})\times S_{m_{2}}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}}) \longrightarrow S_{m_{1}+m_{2}}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}}),
\label{eq:CPCL.product-symbols}\\
p_{m_{1}}*p_{m_{2}}(a,\xi)=(p_{m_{1}}(a,.)*^{a}p_{m_{2}}(a,.))(\xi), \qquad p_{m_{j}}\in S_{m_{j}}({\ensuremath{\mathfrak{g}}}^{*}M).
\end{gathered}$$
\[prop:Heisenberg.operations-principal-symbols\] Let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m \in {\ensuremath{\mathbb{C}}}$.
1\) Let $Q\in {\ensuremath{\Psi_{H}}}^{m'}(M,{\ensuremath{\mathcal{E}}})$, $m'\in {\ensuremath{\mathbb{C}}}$, and suppose that $P$ or $Q$ is uniformly properly supported. Then we have $\sigma_{m+m'}(PQ)=\sigma_{m}(P)*\sigma_{m'}(Q)$, and for any $a \in M$ the model operator of $PQ$ at $a$ is $P^{a}Q^{a}$.
2\) We have $\sigma_{m}(P^{t})(x,\xi)=\sigma_{m}(P)(x,-\xi)^{t}$, and for any $a \in M$ the model operator of $P^{t}$ at $a$ is $(P^{a})^{t}$.
3\) Suppose that $M$ is endowed with a density $>0$ and ${\ensuremath{\mathcal{E}}}$ is endowed with a Hermitian metric. Then we have $\sigma_{\overline{m}}(P^{*})(x,\xi)=\sigma_{m}(P)(x,\xi)^{*}$, and for any $a \in M$ the model operator of $P^{*}$ at $a$ is $(P^{a})^{*}$.
In addition, there is a complete symbolic calculus for [$\Psi_{H}$DOs]{} which allows us to carry out the classical parametrix construction for an operator $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ whenever its principal symbol $\sigma_{m}(P)$ is invertible with respect to the product $*$ (see [@BG:CHM]). In general, it may be difficult to determine whether $\sigma_{m}(P)$ is invertible with respect to that product. Nevertheless, given a point $a\in M$ we have an invertibility criterion for $P^{a}$ in terms of the representation theory of $G_{a}M$; this is the so-called Rockland condition (see, e.g., [@Ro:HHGRTC], [@CGGP:POGD]). We then can completely determine the invertibility of the principal symbol of $P$ in terms of the Rockland conditions for its model operators and those of its transpose (see [@Po:MAMS1 Thm. 3.3.19]).
Finally, the [$\Psi_{H}$DOs]{} enjoy nice Sobolev regularity properties. These properties are best stated in terms of the weighted Sobolev spaces of [@FS:EDdbarbCAHG] and [@Po:MAMS1]. These weighted Sobolev spaces can be explicitly related to the usual Sobolev spaces and allow us to show that if $P\in
{\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $\Re m>0$, has an invertible principal symbol, then $P$ is maximal hypoelliptic, which implies that $P$ is hypoelliptic with gain of $\frac{m}{2}$-derivatives. We refer to [@BG:CHM] and [@Po:MAMS1] for the precise statements. In the sequel we will only need the following.
\[prop:Heisenberg.L2-boundedness\] Assume $M$ compact and let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $\Re m\geq 0$. Then $P$ extends to a bounded operator from $L^{2}(M,{\ensuremath{\mathcal{E}}})$ to itself and this operator is compact if we further have $\Re m<0$.
Holomorphic families of [$\Psi_{H}$DOs]{}
-----------------------------------------
In this subsection we recall the main definitions and properties of holomorphic families of [$\Psi_{H}$DOs]{}. Throughout the subsection we let $(M^{d+1},H)$ be a Heisenberg manifold, we let ${\ensuremath{\mathcal{E}}}^{r}$ be a vector bundle over $M$ and we let $\Omega$ be an open subset of ${\ensuremath{\mathbb{C}}}$.
Let $U\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open of local coordinates equipped with a $H$-frame $X_{0},\ldots,X_{d}$. We define holomorphic families of symbols on ${U\times{\ensuremath{\mathbb{R}}}^{d+1}}$ as follows.
\[def:Heisenberg.hol-family-symbols\] A family $(p(z))_{z\in\Omega}\subset S^*({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ is holomorphic when:
\(i) The order $w(z)$ of $p(z)$ depends analytically on $z$;
\(ii) For any $(x,\xi)\in {U\times{\ensuremath{\mathbb{R}}}^{d+1}}$ the function $z\rightarrow p(z)(x,\xi)$ is holomorphic on $\Omega$;
\(iii) The bounds of the asymptotic expansion (\[eq:Heisenberg.asymptotic-expansion-symbols\]) for $p(z)$ are locally uniform with respect to $z$, i.e., we have $p(z) \sim \sum_{j\geq 0} p(z)_{ w(z)-j}$, $p(z)_{w(z)-j}\in S_{w(z)-j}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, and, for any integer $N$, any compacts $K\subset U$ and $L\subset \Omega$ and any multi-orders $\alpha$ and $\beta$, there exists a constant $ C_{NKL\alpha\beta}>0$ such that, for any $(x,z)\in K\times L$ and any $\xi \in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ so that $\|\xi\|\geq 1$, we have $$| \partial_{x}^\alpha\partial_{\xi}^\beta (p(z)-\sum_{j<N}
p(z)_{w(z)-j})(x,\xi)| \leq C_{NKL\alpha\beta} \|\xi\|^{\Re w(z)-N-{\ensuremath{\langle\! \beta\!\rangle}}}.
\label{eq:Heisenberg.symbols.asymptotic-expansion-hol-families}$$
In the sequel we let ${{\operatorname{Hol}}}(\Omega,S^*({U\times{\ensuremath{\mathbb{R}}}^{d+1}}))$ denote the class of holomorphic families with values in $S^*({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$. Notice also that the properties (i)–(iii) imply that each homogeneous symbol $p(z)_{w(z)-j}(x,\xi)$ depends analytically on $z$, that is, it gives rise to a holomorphic family with values in $C^{\infty}({U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)})$ (see [@Po:MAMS1 Rem. 4.2.2]).
Since ${\ensuremath{\Psi^{-\infty}}}(U)={\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{E}}}'(U),C^{\infty}(U))$ is a Fréchet space which is isomorphic to $C^{\infty}(U\times U)$ by Schwartz’s Kernel Theorem, we can define holomorphic families of smoothing operators as families of operators given by holomorphic families of smooth Schwartz kernels. We let ${{\operatorname{Hol}}}(\Omega,{\ensuremath{\Psi^{-\infty}}}(U))$ denote the class of such families.
\[def:Heisenberg.hol-family-PsiHDO’s\] A family $(P(z))_{z\in \Omega}\subset {\ensuremath{\Psi_{H}}}^{m}(U)$ is holomorphic when it can be put in the form, $$P(z) = p(z)(x,-iX) + R(z), \qquad z \in \Omega,
$$ with $(p(z))_{z\in \Omega}\in {{\operatorname{Hol}}}(\Omega, S^{*}({U\times{\ensuremath{\mathbb{R}}}^{d+1}}))$ and $(R(z))_{z\in \Omega} \in {{\operatorname{Hol}}}(\Omega,{\ensuremath{\Psi^{-\infty}}}(U))$.
The above notion of holomorphic families of [$\Psi_{H}$DOs]{} is invariant under changes of $H$-framed charts (see [@Po:MAMS1]). Therefore, it makes sense to define holomorphic families of [$\Psi_{H}$DOs]{} on the Heisenberg manifold $(M^{d+1},H)$ acting on sections of the vector bundle ${\ensuremath{\mathcal{E}}}^{r}$ as follows.
A family $(P(z))_{z\in \Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ is holomorphic when:
\(i) The order $w(z)$ of $P(z)$ is a holomorphic function of $z$;
\(ii) For $\varphi$ and $\psi$ in $C^\infty_{c}(M)$ with disjoint supports $(\varphi P(z)\psi)_{z\in \Omega}$ is a holomorphic family of smoothing operators;

\(iii) For any trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|_{U}}\rightarrow U\times {\ensuremath{\mathbb{C}}}^{r}$ over a local $H$-framed chart $\kappa:U \rightarrow V\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ the family $(\kappa_{*}\tau_{*}(P_{z|_{U}}))_{z\in\Omega}$ belongs to ${{\operatorname{Hol}}}(\Omega, {\ensuremath{\Psi_{H}}}^{*}(V,{\ensuremath{\mathbb{C}}}^{r})):={{\operatorname{Hol}}}(\Omega, {\ensuremath{\Psi_{H}}}^{*}(V))\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathbb{C}}}^{r}$.
We let ${{\operatorname{Hol}}}(\Omega,{\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}}))$ denote the class of holomorphic families of [$\Psi_{H}$DOs]{} on $M$ and acting on the sections of ${\ensuremath{\mathcal{E}}}$.
Let $(P(z))_{z\in\Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic family of [$\Psi_{H}$DOs]{}.
1\) Let $(Q(z))_{z\in\Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic family of [$\Psi_{H}$DOs]{} and assume that $(P(z))_{z\in\Omega}$ or $(Q(z))_{z\in\Omega}$ is uniformly properly supported with respect to $z$. Then the family $(P(z)Q(z))_{z \in \Omega}$ belongs to ${{\operatorname{Hol}}}(\Omega,{\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}}))$.

2\) Let $\phi:(M,H)\rightarrow (M',H')$ be a Heisenberg diffeomorphism. Then the family $(\phi_{*}P(z))_{z\in \Omega}$ belongs to ${{\operatorname{Hol}}}(\Omega, \Psi_{H'}^{*}(M',\phi_{*}{\ensuremath{\mathcal{E}}}))$.
Complex powers of hypoelliptic [$\Psi_{H}$DOs]{}
------------------------------------------------
In this subsection we recall the constructions in [@Po:MAMS1] and [@Po:CPDE1] of complex powers of hypoelliptic [$\Psi_{H}$DOs]{} as holomorphic families of [$\Psi_{H}$DOs]{}.
Throughout this subsection we let $(M^{d+1},H)$ be a compact Heisenberg manifold equipped with a density $>0$ and we let ${\ensuremath{\mathcal{E}}}$ be a Hermitian vector bundle over $M$.
Let $P:C^{\infty}(M,{\ensuremath{\mathcal{E}}})\rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a differential operator of Heisenberg order $m$ which is positive, i.e., we have ${\ensuremath{\langle Pu , u \rangle}}\geq
0$ for any $u \in C^{\infty}(M,{\ensuremath{\mathcal{E}}})$, and assume that the principal symbol of $P$ is invertible, that is, $P$ satisfies the Rockland condition at every point.
By standard functional calculus for any $s\in {\ensuremath{\mathbb{C}}}$ we can define the power $P^{s}$ as an unbounded operator on $L^{2}(M,{\ensuremath{\mathcal{E}}})$ whose domain contains $C^{\infty}(M,{\ensuremath{\mathcal{E}}})$. In particular $P^{-1}$ is the partial inverse of $P$ and we have $P^{0}=1-\Pi_{0}(P)$, where $\Pi_{0}(P)$ denotes the orthogonal projection onto the kernel of $P$. Furthermore, we have:
\[prop:Heisenberg.complex-powers.positive\] Assume that $H$ satisfies the bracket condition $H+[H,H]=TM$. Then the complex powers $(P^{s})_{s \in {\ensuremath{\mathbb{C}}}}$ form a holomorphic 1-parameter group of [$\Psi_{H}$DOs]{} such that ${{{\operatorname{ord}}}}P^{s}=ms\ \forall s\in {\ensuremath{\mathbb{C}}}$.
This construction has been generalized to more general hypoelliptic [$\Psi_{H}$DOs]{} in [@Po:CPDE1]. Let $P:C^{\infty}(M,{\ensuremath{\mathcal{E}}})\rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a [$\Psi_{H}$DO]{} of order $m>0$. In [@Po:CPDE1] there is a notion of *principal cut* for the principal symbol $\sigma_{m}(P)$ of $P$ as a ray $L\subset {\ensuremath{\mathbb{C}}}\setminus 0$ such that $P-\lambda$ admits a parametrix in a version of the Heisenberg calculus with parameter in a conical neighborhood $\Theta \subset {\ensuremath{\mathbb{C}}}\setminus 0$ of $L$.
Let $\Theta(P)$ be the union set of all principal cuts of $\sigma_{m}(P)$. Then $\Theta(P)$ is an open conical subset of ${\ensuremath{\mathbb{C}}}\setminus 0$ and for any conical subset $\Theta$ of $\Theta(P)$ such that $\overline{\Theta}\setminus 0\subset \Theta(P)$ there are at most finitely many eigenvalues of $P$ in $\Theta$ (see [@Po:CPDE1]).
Let $L_{\theta}=\{\arg \lambda=\theta\}$, $0\leq \theta<2\pi$, be a principal cut for $\sigma_{m}(P)$ such that no eigenvalue of $P$ lies in $L_{\theta}$. Then $L_{\theta}$ is a ray of minimal growth for $P$, so for $\Re s<0$ we define a bounded operator on $L^{2}(M,{\ensuremath{\mathcal{E}}})$ by letting $$\begin{gathered}
P_{\theta}^{s}= \frac{-1}{2i\pi} \int_{\Gamma_{\theta}} \lambda^{s}_{\theta}(P-\lambda)^{-1}d\lambda,
\label{eq:Heisenberg.complex-powers-definition}\\
\Gamma_{\theta}=\{ \rho e^{i\theta}; \infty >\rho\geq r\}\cup\{ r e^{it};
\theta\geq t\geq \theta-2\pi \}\cup\{ \rho e^{i(\theta-2\pi)}; r\leq \rho\leq \infty\},
\label{eq:Heisenberg.complex-powers-definition-Gammat}\end{gathered}$$ where $r>0$ is such that no nonzero eigenvalue of $P$ lies in the disc $|\lambda|<r$.
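For instance, if $u$ is an eigenvector of $P$ associated to an eigenvalue $\mu\neq 0$, so that $\mu$ does not lie on $L_{\theta}$, then $(P-\lambda)^{-1}u=(\mu-\lambda)^{-1}u$ along $\Gamma_{\theta}$ and the residue theorem gives $$P_{\theta}^{s}u= \frac{-1}{2i\pi} \int_{\Gamma_{\theta}} \lambda^{s}_{\theta}(\mu-\lambda)^{-1}d\lambda\, u=\mu_{\theta}^{s}u, \qquad \Re s<0,
$$ where $\mu_{\theta}^{s}$ is computed with the branch of the argument used to define $\lambda_{\theta}^{s}$, i.e., with $\arg\lambda$ taking values in $(\theta-2\pi,\theta)$.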
\[prop:Heisenberg.powers2\] The family (\[eq:Heisenberg.complex-powers-definition\]) gives rise to a unique holomorphic family $(P_{\theta}^{s})_{s\in {\ensuremath{\mathbb{C}}}}$ of [$\Psi_{H}$DOs]{} such that:
\(i) We have ${{{\operatorname{ord}}}}P_{\theta}^{s}=ms$ for any $s \in {\ensuremath{\mathbb{C}}}$;
\(ii) We have the 1-parameter group property $P_{\theta}^{s_{1}+s_{2}}=P_{\theta}^{s_{1}} P_{\theta}^{s_{2}}$ $\forall s_{j}\in {\ensuremath{\mathbb{C}}}$;
\(iii) We have $P_{\theta}^{k+s}=P^{k} P_{\theta}^{s}$ for any $k\in {\ensuremath{\mathbb{N}}}$ and any $s \in {\ensuremath{\mathbb{C}}}$.
Let $E_{0}(P)=\cup_{j \geq 0} \ker P^{j}$ be the characteristic subspace of $P$ associated to $\lambda=0$. This is a finite dimensional subspace of $C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ and so the projection $\Pi_{0}(P)$ onto $E_{0}(P)$ and along $E_{0}(P^{*})^{\perp}$ is a smoothing operator (see [@Po:CPDE1]). Then we have: $$P_{\theta}^{0}=1-\Pi_{0}(P), \qquad P_{\theta}^{-k}=P^{-k}, \quad k=1,2,\ldots,
\label{eq:PsiDO.complex-powers-integers}$$ where $P^{-k}$ denotes the partial inverse of $P^{k}$, i.e., the operator that inverts $P^{k}$ on $E_{0}(P^{*})^{\perp}$ and is zero on $E_{0}(P)$.
Assume further that $0$ is not in the spectrum of $P$. Let $Q\in {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ and for $z \in {\ensuremath{\mathbb{C}}}$ set $Q(z)=QP_{\theta}^{z/m}$. Then $(Q(z))_{z \in {\ensuremath{\mathbb{C}}}}$ is a holomorphic family of [$\Psi_{H}$DOs]{} such that $Q(0)=Q$ and ${{{\operatorname{ord}}}}Q(z)=z+{{{\operatorname{ord}}}}Q$. Following the terminology of [@Gu:GLD] a holomorphic family of [$\Psi_{H}$DOs]{} with these properties is called a *holomorphic gauging* for $Q$.
Noncommutative residue trace for the Heisenberg calculus {#sec:NCR}
========================================================
In this section we construct a noncommutative residue trace for the algebra of integer order [$\Psi_{H}$DOs]{} on a Heisenberg manifold. We start by describing the logarithmic singularity near the diagonal of the Schwartz kernel of a [$\Psi_{H}$DO]{} of integer order and we show that it gives rise to a well-defined density. We then construct the noncommutative residue for the Heisenberg calculus as the residual trace induced by the analytic continuation of the usual trace to [$\Psi_{H}$DOs]{} of non-integer orders. Moreover, we show that it agrees with the integral of the density defined by the logarithmic singularity of the Schwartz kernel of the corresponding [$\Psi_{H}$DO]{}. Finally, we prove that when the manifold is connected then every other trace on the algebra of integer order [$\Psi_{H}$DOs]{} is a constant multiple of our noncommutative residue. This is the analogue of a well-known result of Wodzicki and Guillemin.
Logarithmic singularity of the kernel of a [$\Psi_{H}$DO]{}
-----------------------------------------------------------
In this subsection we show that the logarithmic singularity of the Schwartz kernel of any integer order [$\Psi_{H}$DO]{} gives rise to a density which makes sense intrinsically. This uses the characterization of [$\Psi_{H}$DOs]{} in terms of their Schwartz kernels, which we shall now recall.
First, we extend the notion of homogeneity of functions to distributions. For $K$ in ${\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and for $\lambda >0$ we let $K_{\lambda}$ denote the element of ${\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $${\ensuremath{\langle K_{\lambda} , f \rangle}}=\lambda^{-(d+2)} {\ensuremath{\langle K(x) , f(\lambda^{-1}.x) \rangle}} \quad \forall f\in{\ensuremath{\mathcal{S}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}).
\label{eq:PsiHDO.homogeneity-K-m}$$ It will be convenient to also use the notation $K(\lambda.x)$ for denoting $K_{\lambda}(x)$. We say that $K$ is homogeneous of degree $m$, $m\in{\ensuremath{\mathbb{C}}}$, when $K_{\lambda}=\lambda^m K$ for any $\lambda>0$.
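For instance, for any multi-order $\alpha$ we have $${\ensuremath{\langle (\delta^{(\alpha)})_{\lambda} , f \rangle}}=\lambda^{-(d+2)}(-1)^{|\alpha|}\partial^{\alpha}_{x}\big[f(\lambda^{-1}.x)\big]_{|x=0}=\lambda^{-(d+2)-{\ensuremath{\langle\! \alpha\!\rangle}}}{\ensuremath{\langle \delta^{(\alpha)} , f \rangle}},
$$ so the distribution $\delta^{(\alpha)}$ is homogeneous of degree $-(d+2)-{\ensuremath{\langle\! \alpha\!\rangle}}$; in particular, the delta terms occurring in (\[eq:NCR.log-homogeneity\]) below are homogeneous of the same degree $m$ as the symbol they correct.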
In the sequel we let $E$ be the anisotropic radial vector field $2x_{0}\partial_{x_{0}}+x_{1}\partial_{x_{1}}+\ldots+x_{d}\partial_{x_{d}}$, i.e., $E$ is the infinitesimal generator of the flow $\phi_{s}(\xi)=e^{s}.\xi$.
\[lem:Heisenberg.extension-symbol\] Let $p(\xi) \in S_{m}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, $m \in {\ensuremath{\mathbb{C}}}$.
1\) If $m$ is not an integer $\leq -(d+2)$, then $p(\xi)$ can be uniquely extended into a homogeneous distribution $\tau \in {\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
2\) If $m$ is an integer $\leq -(d+2)$, then at best we can extend $p(\xi)$ into a distribution $\tau \in {\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $$\tau_{\lambda}=\lambda^{m}\tau +\lambda^{m}\log \lambda\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}=-(m+d+2)} c_{\alpha}(p)\delta^{(\alpha)} \quad \text{for any $\lambda
>0$},
\label{eq:NCR.log-homogeneity}$$ where we have let $c_{\alpha}(p) = \frac{(-1)^{|\alpha|}}{\alpha!}\int_{\|\xi\|=1}\xi^\alpha p(\xi)i_{E}d\xi$. In particular, $p(\xi)$ admits a homogeneous extension if and only if all the coefficients $c_{\alpha}(p)$ vanish.
For the reader’s convenience a detailed proof of this lemma is given in the Appendix.
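For instance, the symbol $p(\xi)=\|\xi\|^{-(d+2)}$ belongs to $S_{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and satisfies $c_{0}(p)=\int_{\|\xi\|=1}i_{E}d\xi>0$, so by Lemma \[lem:Heisenberg.extension-symbol\] it admits no homogeneous extension; the best we can get is an extension $\tau$ with $\tau_{\lambda}=\lambda^{-(d+2)}\tau+\lambda^{-(d+2)}(\log \lambda)\, c_{0}(p)\delta$. This is the anisotropic analogue of the familiar fact that $|\xi|^{-n}$ on ${\ensuremath{\mathbb{R}}}^{n}$ cannot be extended into a homogeneous distribution, and it is the source of the logarithmic singularities studied in this section.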
Let $\tau\in {\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and let $\lambda>0$. Then for any $f \in {\ensuremath{\mathcal{S}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ we have $${\ensuremath{\langle (\check{\tau})_{\lambda} , f \rangle}}= \lambda^{-(d+2)}{\ensuremath{\langle \tau , (f_{\lambda^{-1}})^{\vee} \rangle}}=
{\ensuremath{\langle \tau , (\check{f})_{\lambda} \rangle}}= \lambda^{-(d+2)}{\ensuremath{\langle (\tau_{\lambda^{-1}})^{\vee} , f \rangle}}.
\label{eq:NCR.Fourier-transform-scaling}$$ Hence $(\check{\tau})_{\lambda} = \lambda^{-(d+2)}(\tau_{\lambda^{-1}})^{\vee}$. Therefore, if we set $\hat{m}=-(m+d+2)$ then we see that:
- $\tau$ is homogeneous of degree $m$ if and only if $\check{\tau}$ is homogeneous of degree $\hat{m}$;
- $\tau$ satisfies (\[eq:NCR.log-homogeneity\]) if and only if for any $\lambda>0$ we have $$\check{\tau}(\lambda.y)= \lambda^{\hat{m}} \check{\tau}(y) - \lambda^{\hat{m}}\log \lambda
\sum_{{\ensuremath{\langle\! \alpha\!\rangle}} =\hat{m}} (2\pi)^{-(d+1)}c_{\alpha}(p) (-iy)^{\alpha} .
\label{eq:NCR.log-homogeneity-kernel}$$
Let $U\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open of local coordinates equipped with a $H$-frame $X_{0},\ldots,X_{d}$. In the sequel we set ${\ensuremath{\mathbb{N}}}_{0}={\ensuremath{\mathbb{N}}}\cup\{0\}$ and we let ${\ensuremath{\mathcal{S}}}'_{{{\text{reg}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be the space of tempered distributions on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ which are smooth outside the origin. We endow ${\ensuremath{\mathcal{S}}}'_{{{\text{reg}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ with the weakest locally convex topology that makes continuous the embeddings of ${\ensuremath{\mathcal{S}}}'_{{{\text{reg}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ into ${\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and $C^{\infty}({{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0})$. In addition, recall also that if $E$ is a topological vector space contained in ${\ensuremath{\mathcal{D}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ then $C^{\infty}(U){\hat\otimes}E$ can be identified as the space $C^{\infty}(U,E)$ seen as a subspace of ${\ensuremath{\mathcal{D}}}'({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$.
The discussion above about the homogeneity of the (inverse) Fourier transform leads us to consider the classes of distributions below.
${\ensuremath{\mathcal{K}}}_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $m\in{\ensuremath{\mathbb{C}}}$, consists of distributions $K(x,y)$ in $C^\infty(U){\hat\otimes}{\ensuremath{\mathcal{S}}}'_{{{\text{reg}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that, for any $\lambda>0$, we have: $$K(x,\lambda y)= \left\{
\begin{array}{ll}
\lambda^m K(x,y) & \text{if $m\not \in {\ensuremath{\mathbb{N}}}_{0}$}, \\
\lambda^m K(x,y) + \lambda^m\log\lambda
\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}=m}c_{K,\alpha}(x)y^\alpha & \text{if $m\in {\ensuremath{\mathbb{N}}}_{0}$},
\end{array}\right.
\label{eq:NCR.log-homogeneity-Km}$$ where the functions $c_{K,\alpha}(x)$, ${\ensuremath{\langle\! \alpha\!\rangle}}=m$, are in $C^{\infty}(U)$ when $m\in {\ensuremath{\mathbb{N}}}_{0}$.
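A basic example is provided by logarithmic kernels: if $c(x)\in C^{\infty}(U)$ then $K(x,y)=c(x)\log\|y\|$ lies in ${\ensuremath{\mathcal{K}}}_{0}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, since $K(x,\lambda y)=K(x,y)+(\log\lambda)\, c(x)$, so that (\[eq:NCR.log-homogeneity-Km\]) holds with $m=0$ and $c_{K,0}(x)=c(x)$. Similarly, any $K(x,y)$ in $C^\infty(U){\hat\otimes}{\ensuremath{\mathcal{S}}}'_{{{\text{reg}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ which is homogeneous of degree $m$ with respect to the $y$-variable belongs to ${\ensuremath{\mathcal{K}}}_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$.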
\[rem:NCR.regularity-cKm1\] For $\Re m>0$ we have ${\ensuremath{\mathcal{K}}}_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})\subset C^{\infty}(U){\hat\otimes}C^{[\frac{\Re m}{2}]'}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, where $[\frac{\Re m}{2}]'$ denotes the greatest integer $< \Re m$ (see [@Po:MAMS1 Lemma A.1]).
${\ensuremath{\mathcal{K}}}^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $m\in {\ensuremath{\mathbb{C}}}$, consists of distributions $K(x,y)$ in ${\ensuremath{\mathcal{D}}}'({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ with an asymptotic expansion $K\sim \sum_{j\geq0}K_{m+j}$, $K_{l}\in {\ensuremath{\mathcal{K}}}_{l}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, in the sense that, for any integer $N$, as soon as $J$ is large enough $K-\sum_{j\leq J}K_{m+j}$ is in $C^{N}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$.
\[rem:NCR.regularity-cKm2\] The definition implies that any distribution $K\in{\ensuremath{\mathcal{K}}}^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ is smooth on ${U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)}$. Furthermore, using Remark \[rem:NCR.regularity-cKm1\] we see that for $\Re m>0$ we have ${\ensuremath{\mathcal{K}}}^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})\subset C^{\infty}(U){\hat\otimes}C^{[\frac{\Re m}{2}]'}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
Using Lemma \[lem:Heisenberg.extension-symbol\] we can characterize homogeneous symbols on ${U\times{\ensuremath{\mathbb{R}}}^{d+1}}$ as follows.
\[lem:NCR.extension-symbolU\] Let $m \in {\ensuremath{\mathbb{C}}}$ and set $\hat{m}=-(m+d+2)$.
1\) If $p(x,\xi)\in S_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ then $p(x,\xi)$ can be extended into a distribution $\tau(x,\xi)\in C^{\infty}(U){\hat\otimes}{\ensuremath{\mathcal{S}}}_{{{\text{reg}}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $K(x,y):=\check{\tau}_{{{\xi\rightarrow y}}}(x,y)$ belongs to ${\ensuremath{\mathcal{K}}}_{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$. Furthermore, if $m$ is an integer $\leq -(d+2)$ then, using the notation of (\[eq:NCR.log-homogeneity-Km\]), we have $c_{K,\alpha}(x)= (2\pi)^{-(d+1)}\int_{\|\xi\|=1}\frac{(i\xi)^{\alpha}}{\alpha!}p(x,\xi)\iota_{E}d\xi$.
2\) If $K(x,y)\in {\ensuremath{\mathcal{K}}}_{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ then the restriction of $\hat{K}_{{{y\rightarrow\xi}}}(x,\xi)$ to ${U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)}$ is a symbol in $S_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$.
Next, for any $x \in U$ we let $\psi_{x}$ (resp. $\varepsilon_{x}$) denote the change of variable to the privileged (resp. Heisenberg) coordinates centered at $x$ (cf. Definitions \[def:Heisenberg-privileged-coordinates\] and \[def:Heisenberg-Heisenberg-coordinates\]).
Let $p \in S_{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ and let $k(x,y)\in C^{\infty}(U){\hat\otimes}{\ensuremath{\mathcal{D}}}'(U)$ denote the Schwartz kernel of $p(x,-iX)$, so that $[p(x,-iX)u](x)={\ensuremath{\langle k(x,y) , u(y) \rangle}}$ for any $u\in C^{\infty}_{c}(U)$. Then one can check (see, e.g., [@Po:MAMS1 p. 54]) that we have: $$k(x,y)=|\psi_{x}'| \check{p}_{{{\xi\rightarrow y}}}(x,-\psi_{x}(y))=|\varepsilon_{x}'|\check{p}_{{{\xi\rightarrow y}}}(x,\phi_{x}(-\varepsilon_{x}(y))).
\label{eq:Heisenberg.kernel-quantization-symbol-psiy}$$ Combining this with Lemma \[lem:NCR.extension-symbolU\] leads us to the characterization of [$\Psi_{H}$DOs]{} below.
\[prop:PsiVDO.characterisation-kernel1\] Consider a continuous operator $P:C_{c}^\infty(U)\rightarrow C^\infty(U)$ with Schwartz kernel $k_{P}(x,y)$. Let $m \in
{\ensuremath{\mathbb{C}}}$ and set $\hat{m}=-(m+d+2)$. Then the following are equivalent:
\(i) $P$ is a [$\Psi_{H}$DO]{} of order $m$.
\(ii) We can put $k_{P}(x,y)$ in the form, $$k_{P}(x,y)=|\psi_{x}'|K(x,-\psi_{x}(y)) +R(x,y) ,
\label{eq:PsiHDO.characterization-kernel.privileged}$$ for some $K\in{\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $K\sim \sum K_{\hat{m}+j}$, and some $R \in C^{\infty}(U\times U)$.
\(iii) We can put $k_{P}(x,y)$ in the form, $$k_{P}(x,y)=|\varepsilon_{x}'|K_{P}(x,-\varepsilon_{x}(y)) +R_{P}(x,y) ,
\label{eq:PsiHDO.characterization-kernel.Heisenberg}$$ for some $K_{P}\in {\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $K_{P}\sim \sum K_{P,\hat{m}+j}$, and some $R_{P} \in C^{\infty}(U\times U)$.
Furthermore, if (i)–(iii) hold then we have $K_{P,l}(x,y)=K_{l}(x,\phi_{x}(y))$ and $P$ has symbol $p\sim \sum_{j\geq 0} p_{m-j}$, where $p_{m-j}(x,\xi)$ is the restriction to ${U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)}$ of $(K_{m+j})^{\wedge}_{{{y\rightarrow\xi}}}(x,\xi)$.
Now, let $U\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open of local coordinates equipped with a $H$-frame $X_{0},X_{1},\ldots,X_{d}$. Let $m\in{\ensuremath{\mathbb{Z}}}$ and let $K\in {\ensuremath{\mathcal{K}}}^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $K\sim \sum_{j\geq m} K_{j}$. Then:

- For $j \leq -1$ the distribution $K_{j}(x,y)$ is homogeneous of degree $j$ with respect to $y$ and is smooth for $y\neq 0$;
- For $j=0$ and $\lambda>0$ we have $K_{0}(x,\lambda.y)=K_{0}(x,y)-c_{K_{0},0}(x)\log \lambda$, which by setting $\lambda=\|y\|^{-1}$ with $y\neq 0$ gives $$K_{0}(x,y)=K_{0}(x,\|y\|^{-1}.y)-c_{K_{0},0}\log \|y\|.
\label{eq:Log.behavior-K0}$$
- The remainder term $K-\sum_{m\leq j\leq 0}K_{j}$ is in $C^{0}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ (cf. Remarks \[rem:NCR.regularity-cKm1\] and \[rem:NCR.regularity-cKm2\]).
It follows that $K(x,y)$ has a behavior near $y=0$ of the form, $$K(x,y)=\sum_{m\leq j\leq -1} K_{j}(x,y)-c_{K}(x)\log \|y\| +{\operatorname{O}}(1), \qquad c_{K}(x)=c_{K_{0},0}(x).
\label{eq:Log.behavior-K}$$
Let $P\in {\ensuremath{\Psi_{H}}}^{m}(U)$ have kernel $k_{P}(x,y)$ and set $\hat{m}=-(m+d+2)$.
1\) Near the diagonal $k_{P}(x,y)$ has a behavior of the form, $$k_{P}(x,y)=\sum_{\hat{m}\leq j \leq -1}a_{j}(x,-\psi_{x}(y)) -c_{P}(x) \log \|\psi_{x}(y)\| +{\operatorname{O}}(1),
\label{eq:Log.behavior-kP}$$ with $a_{j}(x,y)\in C^{\infty}({U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)})$ homogeneous of degree $j$ in $y$ and $c_{P}(x)\in C^{\infty}(U)$.
2\) If we write $k_{P}(x,y)$ in the forms (\[eq:PsiHDO.characterization-kernel.privileged\]) and (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K(x,y)$ and $K_{P}(x,y)$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, then we have $$c_{P}(x)=|\psi_{x}'|c_{K}(x)=|\varepsilon_{x}'|c_{K_{P}}(x)=\frac{|\psi_{x}'|}{(2\pi)^{d+1}}\int_{\|\xi\|=1}p_{-(d+2)}(x,\xi)\imath_{E}d\xi,
\label{eq:NCR.formula-cP}$$ where $p_{-(d+2)}$ denotes the symbol of degree $-(d+2)$ of $P$.
If we put $k_{P}(x,y)$ in the form (\[eq:PsiHDO.characterization-kernel.privileged\]) with $K\in {\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $K\sim \sum K_{\hat{m}+j}$, then it follows from (\[eq:Log.behavior-K\]) that $k_{P}(x,y)$ has a behavior near the diagonal of the form (\[eq:Log.behavior-kP\]) with $c_{P}(x)=|\psi_{x}'|c_{K}(x)=|\psi_{x}'|c_{K_{0},0}(x)$. Furthermore, by Proposition \[prop:PsiVDO.characterisation-kernel1\] the symbol $p_{-(d+2)}(x,\xi)$ of degree $-(d+2)$ of $P$ is the restriction to ${U\times({\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0)}$ of $(K_{0})^{\wedge}_{{{y\rightarrow\xi}}}(x,\xi)$, so by Lemma \[lem:NCR.extension-symbolU\] we have $c_{K}(x)=c_{K_{0},0}(x)=(2\pi)^{-(d+1)}\int_{\|\xi\|=1}p_{-(d+2)}(x,\xi)\imath_{E}d\xi$.
Next, if we put $k_{P}(x,y)$ in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{P}\in {\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $K_{P}\sim \sum K_{P,\hat{m}+j}$, then by Proposition \[prop:PsiVDO.characterisation-kernel1\] we have $K_{P,0}(x,y)=K_{0}(x,\phi_{x}(y))$. Let $\lambda>0$. Since $\phi_{x}(\lambda.y)=\lambda.\phi_{x}(y)$, using (\[eq:NCR.log-homogeneity-Km\]) we get $$K_{P,0}(x,\lambda.y)-K_{P,0}(x,y)= K_{0}(x,\lambda.\phi_{x}(y))-K_{0}(x,\phi_{x}(y))=-c_{K_{0},0}(x) \log \lambda .
$$ Hence $c_{K_{P,0},0}(x)=c_{K_{0},0}(x)$, that is, $c_{K_{P}}(x)=c_{K}(x)$. As $|\varepsilon_{x}'|=|\phi_{x}'|.|\psi_{x}'|=|\psi_{x}'|$ we see that $|\psi_{x}'|c_{K}(x)=|\varepsilon_{x}'|c_{K_{P}}(x)$. This completes the proof.
\[lem:log-sing.invariance\] Let $\phi : U\rightarrow \tilde{U}$ be a change of $H$-framed local coordinates. Then for any $\tilde{P}\in
{\ensuremath{\Psi_{H}}}^{m}(\tilde{U})$ we have $c_{\phi^{*}\tilde{P}}(x)=|\phi'(x)|c_{\tilde{P}}(\phi(x))$.
Let $P=\phi^{*}\tilde{P}$. Then $P$ is a [$\Psi_{H}$DO]{} of order $m$ on $U$ (see [@BG:CHM]). Moreover, by [@Po:MAMS1 Prop. 3.1.18] if we write the Schwartz kernel $k_{\tilde{P}}(\tilde{x},\tilde{y})$ in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{\tilde{P}}(\tilde{x},\tilde{y})$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}(\tilde{U}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, then the Schwartz kernel $k_{P}(x,y)$ of $P$ can be put in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{P}(x,y)$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}(U\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $$K_{P}(x,y) \sim \sum_{{\ensuremath{\langle\! \beta\!\rangle}}\geq \frac{3}{2}{\ensuremath{\langle\! \alpha\!\rangle}}} \frac{1}{\alpha!\beta!}
a_{\alpha\beta}(x)y^{\beta}(\partial_{\tilde{y}}^{\alpha}K_{\tilde{P}})(\phi(x),\phi_{H}'(x).y),
\label{eq:PsiHDO.asymptotic-expansion-KP}$$ where we have let $a_{\alpha\beta}(x)=\partial^{\beta}_{y}[|\partial_{y}(\tilde{\varepsilon}_{\phi(x)}\circ \phi\circ \varepsilon_{x}^{-1})(y)|
(\tilde{\varepsilon}_{\phi(x)}\circ \phi\circ \varepsilon_{x}^{-1}(y)-\phi_{H}'(x)y)^{\alpha}]_{|_{y=0}}$, the map $\phi_{H}'(x)$ is the tangent map (\[eq:Heisenberg.tangent-map\]), and $\tilde{\varepsilon}_{\tilde{x}}$ denotes the change to the Heisenberg coordinates at $\tilde{x}\in \tilde{U}$. In particular, we have $$K_{P}(x,y)= a_{00}(x) K_{\tilde{P}}(\phi(x),\phi_{H}'(x).y) \quad
\bmod y_{j}{\ensuremath{\mathcal{K}}}^{\hat{m}+1}({U\times{\ensuremath{\mathbb{R}}}^{d+1}}),
\label{eq:PsiHDO.asymptotic-expansion-KP2}$$ where $a_{00}(x)=|\varepsilon_{\phi(x)}'||\phi'(x)||\varepsilon_{x}'|^{-1}$.
Notice that $\tilde{K}(x,y):= K_{\tilde{P}}(\phi(x),\phi_{H}'(x).y)$ is an element of ${\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, since we have $\phi'_{H}(x).(\lambda.y)=\lambda.(\phi'_{H}(x).y)$ for any $\lambda>0$. Moreover, the distributions in $y_{j}{\ensuremath{\mathcal{K}}}^{*}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $j=0,..,d$, cannot have a logarithmic singularity near $y=0$. To see this it is enough to look at a distribution $H(x,y)\in {\ensuremath{\mathcal{K}}}^{-l}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, $l\in {\ensuremath{\mathbb{N}}}_{0}$. Then $H(x,y)$ has a behavior near $y=0$ of the form: $$H(x,y)=\sum_{-l\leq k \leq -1}b_{k}(x,y)-c_{H}(x)\log \|y\|+{\operatorname{O}}(1),
$$ with $b_{k}(x,y)$ homogeneous of degree $k$ with respect to the $y$-variable. Thus, $$y_{j}H(x,y)=\sum_{-l\leq k \leq -1}y_{j}b_{k}(x,y)-c_{H}(x)y_{j}\log \|y\|+{\operatorname{O}}(1).
$$ Observe that each term $y_{j}b_{k}(x,y)$ is homogeneous of degree $k+1$ with respect to $y$ and the term $y_{j}\log \|y\|$ converges to $0$ as $y\rightarrow 0$. Therefore, we see that the singularity of $y_{j}H(x,y)$ near $y=0$ cannot contain a logarithmic term.
Combining the above observations with (\[eq:PsiHDO.asymptotic-expansion-KP\]) shows that the coefficients of the logarithmic singularities of $K_{P}(x,y)$ and $a_{00}(x)\tilde{K}(x,y)$ must agree, i.e., we have $c_{K_{P}}(x)=c_{a_{00}\tilde{K}}(x)=a_{00}(x)c_{\tilde{K}}(x)=|\varepsilon_{\phi(x)}'||\phi'(x)||\varepsilon_{x}'|^{-1}c_{\tilde{K}}(x)$. Furthermore, the only contribution to the logarithmic singularity of $\tilde{K}(x,y)$ comes from $$\begin{gathered}
c_{K_{\tilde{P}}}(\phi(x))\log\|\phi_{H}'(x).y\|= c_{K_{\tilde{P}}}(\phi(x))\log\big[\|y\|\,
\|\phi_{H}'(x).(\|y\|^{-1}.y)\|\big] \\ =c_{K_{\tilde{P}}}(\phi(x))\log\|y\| +{\operatorname{O}}(1).
\end{gathered}$$ Hence $c_{\tilde{K}}(x)=c_{K_{\tilde{P}}}(\phi(x))$. Therefore, we get $c_{K_{P}}(x)=|\varepsilon_{\phi(x)}'||\phi'(x)||\varepsilon_{x}'|^{-1}c_{K_{\tilde{P}}}(\phi(x))$, which by combining with (\[eq:NCR.formula-cP\]) shows that $c_{P}(x)=|\phi'(x)|c_{\tilde{P}}(\phi(x))$ as desired.
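Let us spell out the elementary step used in the last display. Since $\phi_{H}'(x)$ commutes with the anisotropic dilations, $\phi_{H}'(x).(\lambda.y)=\lambda.(\phi_{H}'(x).y)$, and since the pseudo-norm is homogeneous of degree $1$, $\|\lambda.y\|=\lambda\|y\|$, we have $$\|\phi_{H}'(x).y\|=\big\|\phi_{H}'(x).\big(\|y\|.(\|y\|^{-1}.y)\big)\big\|=\|y\|\,\big\|\phi_{H}'(x).(\|y\|^{-1}.y)\big\|.$$ As $\|y\|^{-1}.y$ ranges over the unit sphere $\{\|y\|=1\}$ and $\phi_{H}'(x)$ is invertible, the last factor stays bounded away from $0$ and $\infty$, which accounts for the ${\operatorname{O}}(1)$ term above.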
Let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ and let $\kappa:U\rightarrow V$ be a $H$-framed chart over which there is a trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|_{U}}\rightarrow U\times {\ensuremath{\mathbb{C}}}^{r}$. Then the Schwartz kernel of $P_{\kappa,\tau}:=\kappa_{*}\tau_{*}(P_{|_{U}})$ has a singularity near the diagonal of the form (\[eq:Log.behavior-kP\]). Moreover, if $\tilde{\kappa}:\tilde{U}\rightarrow \tilde{V}$ is a $H$-framed chart over which there is a trivialization $\tilde{\tau}:{\ensuremath{\mathcal{E}}}_{|_{\tilde{U}}}\rightarrow
\tilde{U}\times {\ensuremath{\mathbb{C}}}^{r}$ and if we let $\phi$ denote the Heisenberg diffeomorphism $\tilde{\kappa}\circ \kappa^{-1}:\kappa(U\cap \tilde{U})\rightarrow
\tilde{\kappa}(U\cap \tilde{U})$, then by Lemma \[lem:log-sing.invariance\] we have $c_{P_{\kappa,\tau}}(x)=|\phi'(x)|c_{P_{\tilde{\kappa},\tilde{\tau}}}(\phi(x))$ for any $x \in \kappa(U\cap \tilde{U})$. Therefore, on $U\cap \tilde{U}$ we have the equality of densities, $$\tau^{*}\kappa^{*}(c_{P_{\kappa,\tau}}(x)dx)= \tilde{\tau}^{*}\tilde{\kappa}^{*}(c_{P_{\tilde{\kappa},\tilde{\tau}}}(x)dx).$$
Now, the space $C^{\infty}(M,|\Lambda|(M)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}})$ of ${\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}$-valued densities is a sheaf, so there exists a unique density $c_{P}(x) \in C^{\infty}(M,|\Lambda|(M)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}})$ such that, for any local $H$-framed chart $\kappa:U\rightarrow V$ and any trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|_{U}}\rightarrow U\times {\ensuremath{\mathbb{C}}}^{r}$, we have $$c_{P}(x)|_{U}=\tau^{*}\kappa^{*}(c_{\kappa_{*}\tau_{*}(P_{|_{U}})}(x)dx).
$$ Moreover, this density is functorial with respect to Heisenberg diffeomorphisms, i.e., for any Heisenberg diffeomorphism $\phi:(M,H)\rightarrow
(M',H')$ we have $$c_{\phi_{*}P}(x)=\phi_{*}(c_{P}(x)).
\label{eq:Log.functoriality-cP}$$
Summarizing all this we have proved:
\[thm:NCR.log-singularity\] Let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m \in {\ensuremath{\mathbb{Z}}}$. Then:
1\) On any trivializing $H$-framed local coordinates the Schwartz kernel $k_{P}(x,y)$ of $P$ has a behavior near the diagonal of the form, $$k_{P}(x,y)=\sum_{-(m+d+2)\leq j\leq -1}a_{j}(x,-\psi_{x}(y)) - c_{P}(x)\log \|\psi_{x}(y)\| +
{\operatorname{O}}(1),$$ where $c_{P}(x)$ is given by (\[eq:NCR.formula-cP\]) and each function $a_{j}(x,y)$ is smooth for $y\neq 0$ and homogeneous of degree $j$ with respect to $y$.
2\) The coefficient $c_{P}(x)$ makes sense globally on $M$ as a smooth ${\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}$-valued density which is functorial with respect to Heisenberg diffeomorphisms.
Finally, the following holds.
\[prop.Sing.transpose-adjoint\] Let $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m \in {\ensuremath{\mathbb{Z}}}$. 1) Let $P^{t}\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}}^{*})$ be the transpose of $P$. Then we have $c_{P^{t}}(x)=c_{P}(x)^{t}$.
2\) Suppose that $M$ is endowed with a density $\rho>0$ and ${\ensuremath{\mathcal{E}}}$ is endowed with a Hermitian metric. Let $P^{*}\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ be the adjoint of $P$. Then we have $c_{P^{*}}(x)=c_{P}(x)^{*}$.
Let us first assume that ${\ensuremath{\mathcal{E}}}$ is the trivial line bundle. Then it is enough to prove the result in $H$-framed local coordinates $U\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$, so that the Schwartz kernel $k_{P}(x,y)$ can be put in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{P}(x,y)$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$.
We know that $P^{t}$ is a [$\Psi_{H}$DO]{} of order $m$ (see [@BG:CHM Thm. 17.4]). Moreover, by [@Po:MAMS1 Prop. 3.1.21] we can put its Schwartz kernel $k_{P^{t}}(x,y)$ in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{P^{t}}(x,y)$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ such that $$K_{P^{t}}(x,y) \sim \sum_{\frac{3}{2}{\ensuremath{\langle\! \alpha\!\rangle}} \leq {\ensuremath{\langle\! \beta\!\rangle}}} \sum_{|\gamma|\leq |\delta| \leq 2|\gamma|}
a_{\alpha\beta\gamma\delta}(x) y^{\beta+\delta}
(\partial^{\gamma}_{x}\partial_{y}^{\alpha}K_{P})(x,-y),$$ where $a_{\alpha\beta\gamma\delta}(x)=\frac{|\varepsilon_{x}^{-1}|}{\alpha!\beta!\gamma!\delta!}
[\partial_{y}^{\beta}(|\varepsilon_{\varepsilon_{x}^{-1}(-y)}'|(y-\varepsilon_{\varepsilon_{x}^{-1}(y)}(x))^{\alpha})
\partial_{y}^{\delta}(\varepsilon_{x}^{-1}(-y)-x)^{\gamma}](x,0)$. In particular, we have $K_{P^{t}}(x,y)=K_{P}(x,-y) \bmod y_{j}{\ensuremath{\mathcal{K}}}^{\hat{m}+1}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$. Therefore, in the same way as in the proof of Lemma \[lem:log-sing.invariance\], we see that the logarithmic singularity near $y=0$ of $K_{P}(x,y)$ agrees with that of $K_{P^{t}}(x,-y)$, hence with that of $K_{P^{t}}(x,y)$. Therefore, we have $c_{K_{P^{t}}}(x)=c_{K_{P}}(x)$. Combining this with (\[eq:NCR.formula-cP\]) then shows that $c_{P^{t}}(x)=c_{P}(x)$.
Next, suppose that $U$ is endowed with a smooth density $\rho(x)>0$. Then the corresponding adjoint $P^{*}$ is a [$\Psi_{H}$DO]{} of order $m$ on $U$ with Schwartz kernel $k_{P^{*}}(x,y)=\rho(x)^{-1}\overline{k_{P^{t}}(x,y)}\rho(y)$. Thus $k_{P^{*}}(x,y)$ can be put in the form (\[eq:PsiHDO.characterization-kernel.Heisenberg\]) with $K_{P^{*}}(x,y)$ in ${\ensuremath{\mathcal{K}}}^{\hat{m}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ such that $$\begin{gathered}
K_{P^{*}}(x,y)=[\rho(x)^{-1}\rho(\varepsilon_{x}^{-1}(-y))]\overline{K_{P^{t}}(x,y)}\\
=\overline{K_{P^{t}}(x,y)} \ \bmod y_{j}{\ensuremath{\mathcal{K}}}^{\hat{m}+1}({U\times{\ensuremath{\mathbb{R}}}^{d+1}}).
\end{gathered}$$ Therefore, $K_{P^{*}}(x,y)$ and $\overline{K_{P^{t}}(x,y)}$ have the same logarithmic singularity near $y=0$, so that we have $c_{K_{P^{*}}}(x)=\overline{c_{K_{P^{t}}}(x)}=\overline{c_{K_{P}}(x)}$. Hence $c_{P^{*}}(x)=\overline{c_{P}(x)}$.
Finally, when ${\ensuremath{\mathcal{E}}}$ is a general vector bundle, we can argue as above to show that we still have $c_{P^{t}}(x)=c_{P}(x)^{t}$, and if $P^{*}$ is the adjoint of $P$ with respect to the density $\rho$ and some Hermitian metric on ${\ensuremath{\mathcal{E}}}$, then we have $c_{P^{*}}(x)=c_{P}(x)^{*}$.
Noncommutative residue
----------------------
Let $(M^{d+1},H)$ be a Heisenberg manifold and let ${\ensuremath{\mathcal{E}}}$ be a vector bundle over $M$. We shall now construct a noncommutative residue trace on the algebra ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ as the residual trace induced by the analytic extension of the operator trace to [$\Psi_{H}$DOs]{} of non-integer order.
Let ${\ensuremath{\Psi_{H}^{\text{int}}}}(M,{\ensuremath{\mathcal{E}}}) := \cup_{\Re m < -(d+2)}{\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ denote the class of [$\Psi_{H}$DOs]{} whose symbols are integrable with respect to the $\xi$-variable (this notation is borrowed from [@CM:LIFNCG]). If $P$ belongs to this class, then it follows from Remark \[rem:NCR.regularity-cKm2\] that the restriction to the diagonal of $M\times M$ of its Schwartz kernel defines a smooth density $k_{P}(x,x)$ with values in ${\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}$. Therefore, when $M$ is compact, $P$ is a trace-class operator on $L^{2}(M,{\ensuremath{\mathcal{E}}})$ and we have $${\ensuremath{{\operatorname{Trace}}}}(P) = \int_{M} {{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}k_{P}(x,x).
$$
We shall now construct an analytic extension of the operator trace to the class ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ of [$\Psi_{H}$DOs]{} of non-integer order. As in [@Gu:GLD] (see also [@KV:GDEO], [@CM:LIFNCG]) the approach consists in working directly at the level of densities by constructing an analytic extension of the map $P\rightarrow k_{P}(x,x)$ to ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$. Here analyticity is meant with respect to holomorphic families of [$\Psi_{H}$DOs]{}, e.g., the map $P \rightarrow k_{P}(x,x)$ is analytic since for any holomorphic family $(P(z))_{z\in \Omega}$ with values in ${\ensuremath{\Psi_{H}^{\text{int}}}}(M,{\ensuremath{\mathcal{E}}})$ the output densities $k_{P(z)}(x,x)$ depend analytically on $z$ in the Fréchet space $C^{\infty}(M,|\Lambda|(M)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}})$.
Let $U \subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be an open set of local coordinates equipped with a $H$-frame $X_{0}, \ldots, X_{d}$, and for any $x\in U$ let $\psi_{x}$ denote the affine change of variables to the privileged coordinates at $x$. Any $P \in {\ensuremath{\Psi_{H}}}^{m}(U)$ can be written as $P=p(x,-iX)+R$ with $p\in S^{m}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ and $R\in \Psi^{-\infty}(U)$. Therefore, if $\Re m <-(d+2)$ then using (\[eq:Heisenberg.kernel-quantization-symbol-psiy\]) we get $$k_{P}(x,x)=(2\pi)^{-(d+1)}|\psi_{x}'| \int p(x,\xi)d\xi +k_{R}(x,x).
\label{eq:NCR.kP(x,x)}$$ This leads us to consider the functional, $$L(p):=\int p(\xi) d\xi, \qquad p\in{\ensuremath{S^{\text{int}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}).
$$
In the sequel, as in Section \[sec:Heisenberg-calculus\] for [$\Psi_{H}$DOs]{}, we say that a holomorphic family of symbols $(p(z))_{z\in{\ensuremath{\mathbb{C}}}}\subset S^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is a *gauging* for a given symbol $p\in
S^{*}({\ensuremath{\mathbb{R}}}^{d+1})$ when we have $p(0)=p$ and ${{{\operatorname{ord}}}}p(z)=z+{{{\operatorname{ord}}}}p$ for any $z\in {\ensuremath{\mathbb{C}}}$.
\[lem:Spectral.tildeL\] 1) The functional $L$ has a unique analytic continuation $\tilde{L}$ to $S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. The value of $\tilde{L}$ on a symbol $p\sim \sum_{j\geq 0} p_{m-j}$ of order $m \in{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}$ is given by $$\tilde{L}(p)= (p- \sum_{j\leq N}\tau_{m-j})^{\wedge}(0), \qquad N\geq \Re{m}+d+2,
\label{eq:NCR.L-tilda}$$ where the value of the integer $N$ is irrelevant and the distribution $\tau_{m-j}\in {\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is the unique homogeneous extension of $p_{m-j}(\xi)$ provided by Lemma \[lem:Heisenberg.extension-symbol\].
2\) Let $ p\in S^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, $p\sim \sum p_{m-j}$, and let $(p(z))_{z \in {\ensuremath{\mathbb{C}}}}\subset S^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be a holomorphic gauging for $p$. Then $\tilde{L}(p(z))$ has at worst a simple pole singularity at $z=0$ in such a way that $${\ensuremath{{\operatorname{Res}}}}_{z=0}\tilde{L}(p(z)) = -\int_{\|\xi\|=1} p_{-(d+2)}(\xi) \imath_{E}d\xi,
$$ where $p_{-(d+2)}(\xi)$ is the symbol of degree $-(d+2)$ of $p(\xi)$ and $E$ is the anisotropic radial vector field $2\xi_{0}\partial_{\xi_{0}}+\xi_{1}\partial_{\xi_{1}}+\ldots+\xi_{d}\partial_{\xi_{d}}$.
First, the extension is necessarily unique since the functional $L$ is holomorphic on ${\ensuremath{S^{\text{int}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and each symbol $p\in S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ can be connected to ${\ensuremath{S^{\text{int}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ by means of a holomorphic family with values in $S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
Let $p\in S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, $p\sim \sum_{j\geq 0} p_{m-j}$, and for $j=0,1,\ldots$ let $\tau_{m-j} \in {\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ denote the unique homogeneous extension of $p_{m-j}$ provided by Lemma \[lem:Heisenberg.extension-symbol\]. For $N\geq \Re{m}+d+2$ the distribution $p-\sum_{j\leq N}\tau_{m-j}$ agrees with an integrable function near $\infty$, so its Fourier transform is continuous and we may define $$\tilde{L}(p)= (p-\sum_{j\leq N}\tau_{m-j})^\wedge (0).
\label{eq:Spectral.tildeL}$$ Notice that if $j>\Re m +d+2$ then $\tau_{m-j}$ is also integrable near $\infty$, so $\hat\tau_{m-j}(0)$ is well defined. However, its value must be $0$ for homogeneity reasons. This shows that the value of $N$ in (\[eq:Spectral.tildeL\]) is irrelevant, so this formula defines a linear functional on $S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. In particular, if $\Re m<-(d+2)$ then we can take $N=0$ to get $\tilde{L}(p)=\hat{p}(0)=\int p(\xi)d\xi=L(p)$. Hence $\tilde{L}$ agrees with $L$ on ${\ensuremath{S^{\text{int}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})\cap S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
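To make the homogeneity argument explicit: $\tau_{m-j}$ is homogeneous of degree $m-j$ with respect to the anisotropic dilations, whose Jacobian is $\lambda^{d+2}$, so its Fourier transform is homogeneous of degree $j-m-(d+2)$, that is, $$\hat{\tau}_{m-j}(\lambda.y)=\lambda^{j-m-(d+2)}\,\hat{\tau}_{m-j}(y), \qquad \lambda>0.$$ Evaluating this relation at $y=0$ gives $\hat{\tau}_{m-j}(0)=\lambda^{j-m-(d+2)}\hat{\tau}_{m-j}(0)$ for every $\lambda>0$; since $m\notin {\ensuremath{\mathbb{Z}}}$ the exponent is nonzero, so $\hat{\tau}_{m-j}(0)=0$.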
Let $(p(z))_{z \in \Omega}$ be a holomorphic family of symbols such that $w(z)={{{\operatorname{ord}}}}p(z)$ is never an integer and let us study the analyticity of $\tilde{L}(p(z))$. As the functional $L$ is holomorphic on ${\ensuremath{S^{\text{int}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ we may assume that we have $|\Re w(z)-m|<1$ for some integer $m\geq -(d+2)$. In this case in (\[eq:Spectral.tildeL\]) we can set $N=m+d+2$ and for $j=0,\ldots,m+d+1$ we can represent $\tau(z)_{w(z)-j}$ by $p(z)_{w(z)-j}$. Then, picking $\varphi \in C_{c}^\infty({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $\varphi=1$ near the origin, we see that $ \tilde{L}(p(z))$ is equal to $$\begin{gathered}
\int [p(z)(\xi)- (1-\varphi(\xi) )\sum_{j\leq m+d+2} p(z)_{w(z)-j}(\xi)] d\xi
- \sum_{j\leq m+d+2} {\ensuremath{\langle \tau (z)_{w(z)-j} , \varphi \rangle}} \\
= L(\tilde{p}(z)) -{\ensuremath{\langle \tau(z) , \varphi \rangle}}- \sum_{j\leq m+d+1} \int p(z)_{ w(z)-j}(\xi)\varphi(\xi)d\xi ,
\label{eq:Spectral.tildeLbis} \end{gathered}$$ where we have let $\tau(z)= \tau(z)_{w(z)-m-(d+2)}$ and $\tilde{p}(z)=p(z)-(1-\varphi)\sum_{j\leq m+d+2} p(z)_{w(z)-j}$.
In the r.h.s. of (\[eq:Spectral.tildeLbis\]) the only term that may fail to be analytic is $- {\ensuremath{\langle \tau(z) , \varphi \rangle}}$. Notice that by the formulas (\[eq:Appendix.almosthomogeneous-extension\]) and (\[eq:Appendix1.h’\]) in the Appendix we have $${\ensuremath{\langle \tau(z) , \varphi \rangle}}= \int p(z)_{w(z)-m-(d+2)}(\varphi(\xi)-\psi_{z}(\xi))d\xi,
$$ with $\psi_{z}\in C^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ of the form $\psi_{z}(\xi)= \int^{\infty}_{\log\|\xi\|}
[(\frac 1{w(z)-m}\frac{d}{dt} +1)g](t)dt$, where $g(t)$ can be any function in $C^{\infty}_{c}({\ensuremath{\mathbb{R}}})$ such that $\int g(t)dt=1$. Without any loss of generality we may suppose that $\varphi(\xi)= \int^{\infty}_{\log\|\xi\|}
g(t)dt$ with $g\in C^{\infty}_{c}({\ensuremath{\mathbb{R}}})$ as above. Then we have $\psi_{z}(\xi)= -\frac 1{w(z)-m} g(\log \|\xi\|) + \varphi(\xi)$, which gives $$\begin{gathered}
{\ensuremath{\langle \tau(z) , \varphi \rangle}} =\frac 1{w(z)-m} \int p(z)_{w(z)-m-(d+2)}(\xi) g(\log \|\xi\|)d\xi \\
=\frac1{w(z)-m}
\int \mu^{w(z)-m}g(\log \mu)\frac{d\mu}{\mu}\ \int_{\|\xi\|=1} p(z)_{w(z)-m-(d+2)}(\xi) \imath_{E}d\xi.
\label{eq:NCR.tau-z}\end{gathered}$$ Together with (\[eq:Spectral.tildeLbis\]) this shows that $\tilde{L}(p(z))$ is an analytic function, so the first part of the lemma is proved.
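Here we have also used that, by the substitution $\mu=e^{t}$, $$\int_{0}^{\infty}\mu^{w(z)-m}g(\log \mu)\frac{d\mu}{\mu}=\int_{{\ensuremath{\mathbb{R}}}}e^{(w(z)-m)t}g(t)\,dt,$$ which is an entire function of $w(z)-m$ and equals $\int g(t)dt=1$ when $w(z)=m$; this observation is also what allows us to evaluate the residue below.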
Finally, let $p\sim \sum p_{m-j}$ be a symbol in $S^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and let $(p(z))_{|\Re z|<1}$ be a holomorphic family which is a gauging for $p$. Since $p(z)$ has order $w(z)=m+z$ it follows from (\[eq:Spectral.tildeLbis\]) and (\[eq:NCR.tau-z\]) that $\tilde{L}(p(z))$ has at worst a simple pole singularity at $z=0$ such that $$\begin{gathered}
{\ensuremath{{\operatorname{Res}}}}_{z=0}\tilde{L}(p(z))= {\ensuremath{{\operatorname{Res}}}}_{z=0} \frac{-1}{z} \int \mu^{z}g(\log \mu)\frac{d\mu}{\mu}\ \int_{\|\xi\|=1}
p(z)_{z-(d+2)}(\xi) \imath_{E}d\xi \\ = - \int_{\|\xi\|=1} p_{-(d+2)}(\xi) \imath_{E}d\xi .
\end{gathered}$$ This proves the second part of the lemma.
Now, for $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(U)$ we let $$t_{P}(x) = (2\pi)^{-(d+1)}|\psi_{x}'| \tilde{L}(p(x,.)) + k_{R}(x,x),
\label{eq:NCR.tP-definition}$$ where the pair $(p,R) \in S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})\times \Psi^{-\infty}(U)$ is such that $P=p(x,-iX)+R$. This definition does not depend on the choice of $(p,R)$. Indeed, if $(p',R')$ is another such pair then $p-p'$ is in $S^{-\infty}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$, so using (\[eq:NCR.kP(x,x)\]) we see that $k_{R'}(x,x)-k_{R}(x,x)$ is equal to $$\begin{gathered}
k_{(p-p')(x,-iX)}(x,x) = (2\pi)^{-(d+1)}|\psi_{x}'| L((p-p')(x,.))\\ =
(2\pi)^{-(d+1)}|\psi_{x}'| (\tilde{L}(p(x,.)) -\tilde{L}(p'(x,.))),
\end{gathered}$$ which shows that the r.h.s. of (\[eq:NCR.tP-definition\]) is the same for both pairs.
On the other hand, observe that (\[eq:Spectral.tildeLbis\]) and (\[eq:NCR.tau-z\]) show that $\tilde{L}(p(x,.))$ depends smoothly on $x$ and that for any holomorphic family $(p(z))(z)\in \Omega \subset S^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$ the map $z\rightarrow \tilde{L}(p(x,.))$ is holomorphic from $\Omega$ to $C^{\infty}(U)$. Therefore, the map $P\rightarrow t_{P}(x)$ is an analytic extension to ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(U)$ of the map $P\rightarrow k_{P}(x,x)$.
Let $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(U)$ and let $(P(z))_{z \in \Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(U)$ be a holomorphic gauging for $P$. Then it follows from (\[eq:Spectral.tildeLbis\]) and (\[eq:NCR.tau-z\]) that with respect to the topology of $C^{\infty}(U)$ the map $z\rightarrow t_{P(z)}(x)$ has at worst a simple pole singularity at $z=0$ with residue $${\ensuremath{{\operatorname{Res}}}}_{z=0} t_{P(z)}(x)=-(2\pi)^{-(d+1)}|\psi_{x}'| \int_{\|\xi\|=1} p_{-(d+2)}(x,\xi) \imath_{E}d\xi =-c_{P}(x),
\label{eq:NCR.residue.t-P.U}$$ where $p_{-(d+2)}(x,\xi)$ denotes the symbol of degree $-(d+2)$ of $P$.
Next, let $\phi:\tilde{U}\rightarrow U$ be a change of $H$-framed local coordinates. Let $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(U)$ and let $(P(z))_{z \in{\ensuremath{\mathbb{C}}}}$ be a holomorphic family which is a gauging for $P$. As shown in [@Po:CPDE1] the [$\Psi_{H}$DO]{} family $(\phi^{*}P(z))_{z\in {\ensuremath{\mathbb{C}}}}$ is holomorphic and is a gauging for $\phi^{*}P$. Moreover, as for $\Re z$ negatively large enough we have $k_{\phi^{*}P(z)}(x,x)=|\phi'(x)|k_{P(z)}(\phi(x),\phi(x))$, by analytic continuation we get $$t_{\phi^{*}P}(x)=|\phi'(x)|t_{P}(\phi(x)).
\label{eq:NCR.functoriality-tP}$$
Now, in the same way as in the construction of the density $c_{P}(x)$ in the proof of Proposition \[thm:NCR.log-singularity\], it follows from all this that if $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$, $m\in {\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}$, then there exists a unique ${\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}$-valued density $t_{P}(x)$ such that, for any local $H$-framed chart $\kappa:U\rightarrow V$ and any trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|_{U}}\rightarrow U\times {\ensuremath{\mathbb{C}}}^{r}$, we have $$t_{P}(x)|_{U}=\tau^{*}\kappa^{*}(t_{\kappa_{*}\tau_{*}(P_{|_{U}})}(x)dx).
$$ On every trivializing $H$-framed chart the map $P\rightarrow t_{P}(x)$ is analytic and satisfies (\[eq:NCR.residue.t-P.U\]). Therefore, we obtain:
\[thm:NCR.TR.local\] 1) The map $P \rightarrow t_{P}(x)$ is the unique analytic continuation of the map $P \rightarrow k_{P}(x,x)$ to ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$.
2\) Let $P \in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ and let $(P(z))_{z\in\Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic family which is a gauging for $P$. Then, in $C^{\infty}(M,|\Lambda|(M)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}})$, the map $z\rightarrow t_{P(z)}(x)$ has at worst a simple pole singularity at $z=0$ with residue given by $${\ensuremath{{\operatorname{Res}}}}_{z=0} t_{P(z)}(x)=- c_{P}(x),
$$ where $c_{P}(x)$ denotes the ${\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}$-valued density on $M$ given by Theorem \[thm:NCR.log-singularity\].
3\) The map $P\rightarrow t_{P}(x)$ is functorial with respect to Heisenberg diffeomorphisms as in (\[eq:Log.functoriality-cP\]).
Taking residues at $z=0$ in (\[eq:NCR.functoriality-tP\]) allows us to recover (\[eq:Log.functoriality-cP\]).
From now on we assume $M$ compact. We then define the *canonical trace* for the Heisenberg calculus as the functional ${\ensuremath{{\operatorname{TR}}}}$ on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ given by the formula, $${\ensuremath{{\operatorname{TR}}}}P := \int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}} t_{P}(x) \qquad \forall P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}}).$$
\[thm:NCR.TR.global\] The canonical trace ${\ensuremath{{\operatorname{TR}}}}$ has the following properties:
1\) ${\ensuremath{{\operatorname{TR}}}}$ is the unique analytic continuation to ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ of the usual trace.
2\) We have $ {\ensuremath{{\operatorname{TR}}}}P_{1}P_{2}={\ensuremath{{\operatorname{TR}}}}P_{2}P_{1}$ whenever ${{{\operatorname{ord}}}}P_{1}+{{{\operatorname{ord}}}}P_{2}\not\in{\ensuremath{\mathbb{Z}}}$.
3\) ${\ensuremath{{\operatorname{TR}}}}$ is invariant by Heisenberg diffeomorphisms, i.e., for any Heisenberg diffeomorphism $\phi:(M,H)\rightarrow (M',H')$ we have $ {\ensuremath{{\operatorname{TR}}}}\phi_{*}P={\ensuremath{{\operatorname{TR}}}}P$ $\forall P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$.
The first and third properties are immediate consequences of Theorem \[thm:NCR.TR.local\], so we only have to prove the second one.
For $j=1,2$ let $P_{j}\in {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ and let $(P_{j}(z))_{z\in {\ensuremath{\mathbb{C}}}}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic gauging for $P_{j}$. We further assume that ${{{\operatorname{ord}}}}P_{1}+{{{\operatorname{ord}}}}P_{2}\not\in{\ensuremath{\mathbb{Z}}}$. Then $P_{1}(z)P_{2}(z)$ and $P_{2}(z)P_{1}(z)$ have non-integer order for $z$ in ${\ensuremath{\mathbb{C}}}\setminus \Sigma$, where $\Sigma:=-({{{\operatorname{ord}}}}P_{1}+{{{\operatorname{ord}}}}P_{2})+{\ensuremath{\mathbb{Z}}}$. For $\Re z$ negatively large enough we have ${\ensuremath{{\operatorname{Trace}}}}P_{1}(z)P_{2}(z)={\ensuremath{{\operatorname{Trace}}}}P_{2}(z)P_{1}(z)$, so by analytic continuation we see that ${\ensuremath{{\operatorname{TR}}}}P_{1}(z)P_{2}(z)={\ensuremath{{\operatorname{TR}}}}P_{2}(z)P_{1}(z)$ for any $z\in{\ensuremath{\mathbb{C}}}\setminus \Sigma$. Setting $z =0$ then shows that we have $ {\ensuremath{{\operatorname{TR}}}}P_{1}P_{2}={\ensuremath{{\operatorname{TR}}}}P_{2}P_{1}$ as desired.
Next, we define the *noncommutative residue* for the Heisenberg calculus as the linear functional ${\ensuremath{{\operatorname{Res}}}}$ on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ given by the formula, $${\ensuremath{{\operatorname{Res}}}}P :=\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}} c_{P}(x) \qquad \forall P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}}).
$$ This functional provides us with the analogue for the Heisenberg calculus of the noncommutative residue trace of Wodzicki ([@Wo:LISA], [@Wo:NCRF]) and Guillemin [@Gu:NPWF], for we have:
\[thm:NCR.NCR\] The noncommutative residue ${\ensuremath{{\operatorname{Res}}}}$ has the following properties:
1\) Let $P \in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ and let $(P(z))_{z\in \Omega}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic gauging for $P$. Then at $z=0$ the function ${\ensuremath{{\operatorname{TR}}}}P(z)$ has at worst a simple pole singularity in such way that we have $${\ensuremath{{\operatorname{Res}}}}_{z=0} {\ensuremath{{\operatorname{TR}}}}P(z)= - {\ensuremath{{\operatorname{Res}}}}P.
\label{eq:NCR.residueTR}$$
2\) We have ${\ensuremath{{\operatorname{Res}}}}P_{1}P_{2}={\ensuremath{{\operatorname{Res}}}}P_{2}P_{1}$ whenever ${{{\operatorname{ord}}}}P_{1}+{{{\operatorname{ord}}}}P_{2}\in {\ensuremath{\mathbb{Z}}}$. Hence ${\ensuremath{{\operatorname{Res}}}}$ is a trace on the algebra ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$.
3\) ${\ensuremath{{\operatorname{Res}}}}$ is invariant by Heisenberg diffeomorphisms.
4\) We have ${\ensuremath{{\operatorname{Res}}}}P^{t}={\ensuremath{{\operatorname{Res}}}}P$ and ${\ensuremath{{\operatorname{Res}}}}P^{*}=\overline{{\ensuremath{{\operatorname{Res}}}}P}$ for any $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$.
The first property follows from Proposition \[thm:NCR.TR.local\]. The third and fourth properties are immediate consequences of Propositions \[thm:NCR.log-singularity\] and \[prop.Sing.transpose-adjoint\].
It remains to prove the second property. Let $P_{1}$ and $P_{2}$ be operators in ${\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ such that ${{{\operatorname{ord}}}}P_{1}+{{{\operatorname{ord}}}}P_{2}\in {\ensuremath{\mathbb{Z}}}$. For $j=1,2$ let $(P_{j}(z))_{z\in {\ensuremath{\mathbb{C}}}}\subset {\ensuremath{\Psi_{H}}}^{*}(M,{\ensuremath{\mathcal{E}}})$ be a holomorphic gauging for $P_{j}$. Then the family $(P_{1}(\frac{z}{2})P_{2}(\frac{z}{2}))_{z\in {\ensuremath{\mathbb{C}}}}$ (resp. $(P_{2}(\frac{z}{2})P_{1}(\frac{z}{2}))_{z\in {\ensuremath{\mathbb{C}}}}$) is a holomorphic gauging for $P_{1}P_{2}$ (resp. $P_{2}P_{1}$). Moreover, by Proposition \[thm:NCR.TR.global\] for any $z \in{\ensuremath{\mathbb{C}\!\setminus\!\mathbb{Z}}}$ we have ${\ensuremath{{\operatorname{TR}}}}P_{1}(\frac{z}{2})P_{2}(\frac{z}{2})={\ensuremath{{\operatorname{TR}}}}P_{2}(\frac{z}{2})P_{1}(\frac{z}{2})$. Therefore, by taking residues at $z=0$ and using (\[eq:NCR.residueTR\]) we get ${\ensuremath{{\operatorname{Res}}}}P_{1}P_{2}={\ensuremath{{\operatorname{Res}}}}P_{2}P_{1}$ as desired.
Traces and sum of commutators {#sec:traces}
-----------------------------
Let $(M^{d+1},H)$ be a compact Heisenberg manifold and let ${\ensuremath{\mathcal{E}}}$ be a vector bundle over $M$. In this subsection, we shall prove that when $M$ is connected the noncommutative residue spans the space of traces on the algebra ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$. As a consequence this will allow us to characterize the sums of commutators in ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$.
Let $H\subset T{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be a hyperplane bundle such that there exists a global $H$-frame $X_{0},X_{1},\ldots,X_{d}$ of $T{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$. We will now give a series of criteria for an operator $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ to be a sum of commutators of the form, $$P= [ x_{0}, P_{0}]+\ldots+ [x_{d}, P_{d}], \qquad P_{j}\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}).
\label{eq:Traces.sum-commutators}$$
In the sequel for any $x\in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ we let $\psi_{x}$ denote the affine change of variables to the privileged coordinates at $x$ with respect to the $H$-frame $X_{0},\ldots,X_{d}$.
\[lem:Traces.criterion-logarithmic-kernel\] Let $P\in {\ensuremath{\Psi_{H}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ have a kernel of the form, $$k_{P}(x,y)=|\psi_{x}'|K_{0}(x,-\psi_{x}(y)),
\label{eq:Traces.logarithmic-kernel}$$ where $K_{0}(x,y)\in {\ensuremath{\mathcal{K}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is homogeneous of degree $0$ with respect to $y$. Then $P$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]).
Set $\psi_{x}(y)=A(x).(y-x)$ with $A\in C^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}, GL_{d+1}({\ensuremath{\mathbb{R}}}))$ and for $j,k=0,\ldots, d$ define $$K_{jk}(x,y):= A_{jk}(x) y_{j}^{\beta_{j}}\|y\|^{-4}K_{0}(x,y), \qquad (x,y)\in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0},
$$ where $\beta_{0}=1$ and $\beta_{1}=\ldots=\beta_{d}=3$. As $K_{jk}(x,y)$ is smooth for $y\neq 0$ and is homogeneous with respect to $y$ of degree $-2$ if $j=0$ and of degree $-1$ otherwise, we see that it belongs to ${\ensuremath{\mathcal{K}}}_{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Therefore, the operator $Q_{jk}$ with Schwartz kernel $k_{Q_{jk}}=|\psi_{x}'| K_{jk}(x,-\psi_{x}(y))$ is a [$\Psi_{H}$DO]{}.
Next, set $A^{-1}(x)=(A^{jk}(x))_{1\leq j,k\leq d}$. Since $x_{k}-y_{k}=-\sum_{l=0}^{d}A^{kl}(x)\psi_{x}(y)_{l}$ we deduce that the Schwartz kernel of $\sum_{j,k=0}^{d}[x_{k},Q_{jk}]$ is $|\psi_{x}'|K(x,-\psi_{x}(y))$, where $$\begin{gathered}
K(x,y)= \sum_{0\leq j,k,l \leq d} A^{kl}(x)y_{l}A_{jk}(x)y_{j}^{\beta_{j}}\|y\|^{-4} K_{0}(x,y)\\
=\sum_{0\leq j\leq d} y_{j}^{\beta_{j}+1}\|y\|^{-4}K_{0}(x,y)=K_{0}(x,y).
\end{gathered}$$ Hence $P=\sum_{j,k=0}^{d}[x_{k},Q_{jk}]$. The lemma is thus proved.
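In the last display we have used that, with the pseudo-norm $\|y\|=(y_{0}^{2}+y_{1}^{4}+\cdots+y_{d}^{4})^{1/4}$ (the choice consistent with the computations of this section), $$\sum_{j=0}^{d}y_{j}^{\beta_{j}+1}=y_{0}^{2}+y_{1}^{4}+\cdots+y_{d}^{4}=\|y\|^{4}.$$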
\[lem:Traces.smoothing-operators1\] Any $R\in {\ensuremath{\Psi^{-\infty}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ can be written as a sum of commutators of the form (\[eq:Traces.sum-commutators\]).
Let $k_{R}(x,y)$ denote the Schwartz kernel of $R$. Since $k_{R}(x,y)$ is smooth we can write $$k_{R}(x,y)=k_{R}(x,x)+(x_{0}-y_{0})k_{R_{0}}(x,y)+\ldots +(x_{d}-y_{d})k_{R_{d}}(x,y),
\label{eq:Traces.Taylor}$$ for some smooth functions $k_{R_{0}}(x,y),\ldots,k_{R_{d}}(x,y)$. For $j=0,\ldots,d$ let $R_{j}$ be the smoothing operator with Schwartz kernel $k_{R_{j}}(x,y)$, and let $Q$ be the operator with Schwartz kernel $k_{Q}(x,y)=k_{R}(x,x)$. Then by (\[eq:Traces.Taylor\]) we have $$R=Q+[x_{0},R_{0}]+\ldots + [x_{d},R_{d}].
\label{eq:Traces.commutators.smoothing-RQ}$$
Observe that the kernel of $Q$ is of the form (\[eq:Traces.logarithmic-kernel\]) with $K_{0}(x,y)=|\psi_{x}'|^{-1}k_{R}(x,x)$. Here $K_{0}(x,y)$ belongs to ${\ensuremath{\mathcal{K}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and is homogeneous of degree $0$ with respect to $y$, so by Lemma \[lem:Traces.criterion-logarithmic-kernel\] the operator $Q$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]). Combining this with (\[eq:Traces.commutators.smoothing-RQ\]) then shows that $R$ is of that form too.
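For instance, in (\[eq:Traces.Taylor\]) one may take $$k_{R_{j}}(x,y)=-\int_{0}^{1}(\partial_{y_{j}}k_{R})\big(x,x+t(y-x)\big)\,dt, \qquad j=0,\ldots,d,$$ which are indeed smooth functions of $(x,y)$.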
\[lem:Traces.sum-commutators\] Any $P\in {\ensuremath{\Psi_{H}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $c_{P}(x)=0$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]).
For $j=0,\ldots,d$ we let $\sigma_{j}(x,\xi)=\sum_{k=0}^{d}\sigma_{jk}(x)\xi_{k}$ denote the classical symbol of $-iX_{j}$. Notice that $\sigma(x):=(\sigma_{jk}(x))$ belongs to $C^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}, GL_{d+1}({\ensuremath{\mathbb{C}}}))$.
\(i) Let us first assume that $P= (\partial_{\xi_{j}}q)(x,-iX)$ for some $q\in S^{{\ensuremath{\mathbb{Z}}}}({U\times{\ensuremath{\mathbb{R}}}^{d+1}})$. Set $q_{\sigma}(x,\xi)=q(x,\sigma(x,\xi))$. Then we have $$[q(x,-iX),x_{k}]= [q_{\sigma}(x,D),x_{k}]= (\partial_{\xi_{k}}q_{\sigma})(x,D)=\sum_{l} \sigma_{lk}(x)(\partial_{\xi_{l}}q)(x,-iX).
$$ Therefore, if we let $(\sigma^{kl}(x))$ be the inverse matrix of $\sigma(x)$, then we see that $$\sum_{k}[\sigma^{jk}(x)q(x,-iX),x_{k}]= \sum_{k,l} \sigma^{jk}(x)\sigma_{lk}(x)(\partial_{\xi_{l}}q)(x,-iX)=(\partial_{\xi_{j}}q)(x,-iX)=P.
$$ Hence $P$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]).
\(ii) Suppose now that $P$ has symbol $p \sim \sum_{j\leq m}p_{j}$ with $p_{-(d+2)}=0$. Since $p_{l}(x,\xi)$ is homogeneous of degree $l$ with respect to $\xi$, the Euler identity, $$2\xi_{0}\partial_{\xi_{0}}p_{l} + \xi_{1}\partial_{\xi_{1}}p_{l}+\ldots+ \xi_{d}\partial_{\xi_{d}}p_{l}
= l p_{l},$$ implies that we have $$2\partial_{\xi_{0}}(\xi_{0}p_{l}) + \partial_{\xi_{1}}(\xi_{1}p_{l})+\ldots+
\partial_{\xi_{d}}(\xi_{d}p_{l})= (l+d+2)p_{l}.$$
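Indeed, by the product rule $\partial_{\xi_{j}}(\xi_{j}p_{l})=p_{l}+\xi_{j}\partial_{\xi_{j}}p_{l}$ for each $j$, so the left-hand side above equals $$(d+2)p_{l}+\big(2\xi_{0}\partial_{\xi_{0}}+\xi_{1}\partial_{\xi_{1}}+\ldots+\xi_{d}\partial_{\xi_{d}}\big)p_{l}=(d+2)p_{l}+lp_{l}.$$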
For $j=0, \ldots, d$ let $q^{(j)}$ be a symbol so that $q^{(j)}
\sim \sum_{l\neq -(d+2)} (l+d+2)^{-1} \xi_{j}p_{l}$. Then for $l\neq -(d+2)$ the symbol of degree $l$ of $2\partial_{\xi_{0}}q^{(0)}+
\partial_{\xi_{1}} q^{(1)}+\ldots+ \partial_{\xi_{d}} q^{(d)}$ is equal to $$(l+d+2)^{-1}\big(2\partial_{\xi_{0}}(\xi_{0}p_{l}) + \partial_{\xi_{1}}(\xi_{1}p_{l})+\ldots+
\partial_{\xi_{d}}(\xi_{d}p_{l})\big)= p_{l}.
$$ Since $p_{-(d+2)}=0$ this shows that $p- \big(2\partial_{\xi_{0}}q^{(0)} +
\partial_{\xi_{1}} q^{(1)}+\ldots+ \partial_{\xi_{d}} q^{(d)}\big)$ is in $S^{-\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Thus, there exists $R$ in ${\ensuremath{\Psi^{-\infty}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $$P=2(\partial_{\xi_{0}}q^{(0)})(x,-iX) +
(\partial_{\xi_{1}} q^{(1)})(x,-iX)+\ldots+ (\partial_{\xi_{d}} q^{(d)})(x,-iX)+R.
$$
Thanks to the part (i) and to Lemma \[lem:Traces.smoothing-operators1\] the operators $(\partial_{\xi_{j}}q^{(j)})(x,-iX)$ and $R$ are sums of commutators of the form (\[eq:Traces.sum-commutators\]), so $P$ is of that form as well.
\(iii) The general case is obtained as follows. Let $p_{-(d+2)}(x,\xi)$ be the symbol of degree $-(d+2)$ of $P$. Then by Lemma \[lem:NCR.extension-symbolU\] we can extend $p_{-(d+2)}(x,\xi)$ into a distribution $\tau(x,\xi)$ in $C^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}){\hat\otimes}{\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ in such a way that $K_{0}(x,y):=\check{\tau}_{{{\xi\rightarrow y}}}(x,y)$ belongs to ${\ensuremath{\mathcal{K}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Furthermore, with the notation of (\[eq:NCR.log-homogeneity-Km\]) we have $c_{K_{0},0}(x)=(2\pi)^{-(d+1)}\int_{\|\xi\|=1}p_{-(d+2)}(x,\xi)\iota_{E}d\xi$. Therefore, by using (\[eq:NCR.formula-cP\]) and the fact that $c_{P}(x)$ is zero, we see that $c_{K_{0},0}(x)=|\psi'_{x}|^{-1}c_{P}(x)=0$. In view of (\[eq:NCR.log-homogeneity-Km\]) this shows that $K_{0}(x,y)$ is homogeneous of degree $0$ with respect to $y$.
Let $Q\in {\ensuremath{\Psi_{H}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be the [$\Psi_{H}$DO]{} with Schwartz kernel $|\psi_{x}'|K_{0}(x,-\psi_{x}(y))$. Then by Lemma \[lem:Traces.criterion-logarithmic-kernel\] the operator $Q$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]). Moreover, observe that by Proposition \[prop:PsiVDO.characterisation-kernel1\] the operator $Q$ has symbol $q\sim
q_{-(d+2)}$, where for $\xi \neq 0$ we have $q_{-(d+2)}(x,\xi)=(K_{0})^{\wedge}_{{{y\rightarrow\xi}}}(x,\xi)=p_{-(d+2)}(x,\xi)$. Therefore $P-Q$ is a [$\Psi_{H}$DO]{} whose symbol of degree $-(d+2)$ is zero. It then follows from the part (ii) of the proof that $P-Q$ is a sum of commutators of the form (\[eq:Traces.sum-commutators\]). All this shows that $P$ is the sum of two operators of the form (\[eq:Traces.sum-commutators\]), so $P$ is of that form too.
In the sequel we let ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and ${\ensuremath{\Psi^{-\infty}_{c}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ respectively denote the classes of [$\Psi_{H}$DOs]{} and smoothing operators on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ with compactly supported Schwartz kernels.
\[lem:Traces.sum-commutators.compact\] There exists $\Gamma \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that, for any $P\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, we have $$P= ({\ensuremath{{\operatorname{Res}}}}P)\Gamma \quad \bmod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}), {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})].
\label{eq:Traces.sum-commutators-compact}$$
Let $P\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. We will put $P$ into the form (\[eq:Traces.sum-commutators-compact\]) in 3 steps.
\(i) Assume first that $c_{P}(x)=0$. Then by Lemma \[lem:Traces.sum-commutators\] we can write $P$ in the form, $$P= [ x_{0}, P_{0}]+\ldots+ [x_{d}, P_{d}], \qquad P_{j}\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}).
$$ Let $\chi$ and $\psi$ be functions in $C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $\psi(x)\psi(y)=1$ near the support of the kernel of $P$ and $\chi=1$ near $ {{\operatorname{supp}}}\psi$. Since $\psi P \psi=P$ we obtain $$P=\sum_{j=0}^{d} \psi [x_{j}, P_{j}]\psi= \sum_{j=0}^{d} [x_{j}, \psi P_{j}\psi] =\sum_{j=0}^{d}[\chi x_{j}, \psi P_{j}\psi].
\label{eq:Traces.commutators.vanishing-cP}$$ In particular $P$ is a sum of commutators in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
\(ii) Let $\Gamma_{0}\in {\ensuremath{\Psi_{H}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ have kernel $k_{\Gamma_{0}}(x,y)=-\log\|\psi_{x}(y)\|$ and suppose that $P=c \Gamma_{0} \psi$ where $c\in
C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is such that $\int c(x)dx=0$ and $\psi \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is such that $\psi=1$ near ${{\operatorname{supp}}}c$. First, we have:
If $c\in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is such that $\int c(x)dx=0$, then there exist $c_{0}, \ldots,c_{d}$ in $C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $c=\partial_{x_{0}}c_{0}+\ldots+\partial_{x_{d}}c_{d}$.
We proceed by induction on the dimension $d+1$. In dimension $1$ the proof follows from the fact that if $c\in
C^{\infty}_{c}({\ensuremath{\mathbb{R}}})$ is such that $\int_{-\infty}^{\infty}c(x_{0})dx_{0}=0$, then $\tilde{c}(x_{0})=\int_{-\infty}^{x_{0}}c(t)dt$ is an antiderivative of $c$ with compact support.
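To spell out why $\tilde{c}$ has compact support: $\tilde{c}\,'=c$, and $$\tilde{c}(x_{0})=0 \ \text{ for } x_{0}<\inf {{\operatorname{supp}}}c, \qquad \tilde{c}(x_{0})=\int_{-\infty}^{\infty}c(t)\,dt=0 \ \text{ for } x_{0}>\sup {{\operatorname{supp}}}c,$$ so $\tilde{c}$ is indeed a compactly supported antiderivative of $c$.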
Assume now that the claim is true in dimension $d$ and under this assumption let us prove it in dimension $d+1$. Let $c\in C^{\infty}_{c}({\ensuremath{\mathbb{R}}}^{d+1})$ be such that $\int_{{\ensuremath{\mathbb{R}}}^{d+1}} c(x)dx=0$. For any $(x_{0},\ldots,x_{d-1})$ in ${\ensuremath{\mathbb{R}}}^{d}$ we let $\tilde{c}(x_{0},\ldots,x_{d-1})=\int_{{\ensuremath{\mathbb{R}}}}c(x_{0},\ldots,x_{d-1},x_{d})dx_{d}$. This defines a function in $C^{\infty}_{c}({\ensuremath{\mathbb{R}}}^{d})$ such that $$\int_{{\ensuremath{\mathbb{R}}}^{d}}\tilde{c}(x_{0},\ldots,x_{d-1})dx_{0}\ldots dx_{d-1}= \int_{{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}}c(x_{0},\ldots,x_{d})dx_{0}\ldots dx_{d}=0.
$$ Since the claim is assumed to hold in dimension $d$, it follows that there exist $\tilde{c}_{0},
\ldots,\tilde{c}_{d-1}$ in $C_{c}^{\infty}({\ensuremath{\mathbb{R}}}^{d})$ such that $ \tilde{c}=\sum_{j=0}^{d-1}\partial_{x_{j}}\tilde{c}_{j}$.
Next, let $\varphi\in C^{\infty}_{c}({\ensuremath{\mathbb{R}}})$ be such that $\int\varphi(x_{d})dx_{d}=1$. For any $(x_{0},\ldots,x_{d})$ in ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ we let $$b(x_{0},\ldots,x_{d})=c(x_{0},\ldots,x_{d})-\varphi(x_{d})\tilde{c}(x_{0},\ldots,x_{d-1}).
$$ This defines a function in $C^{\infty}_{c}({\ensuremath{\mathbb{R}}}^{d+1})$ such that $$\int_{-\infty}^{\infty}b(x_{0},\ldots,x_{d})dx_{d}=\int_{-\infty}^{\infty}c(x_{0},\ldots,x_{d})dx_{d}-\tilde{c}(x_{0},\ldots,x_{d-1})=0.
$$ Therefore, we have $b=\partial_{x_{d}}c_{d}$, where $c_{d}(x_{0},\ldots,x_{d}):=\int_{-\infty}^{x_{d}}b(x_{0},\ldots,x_{d-1},t)dt$ is a function in $C^{\infty}_{c}({\ensuremath{\mathbb{R}}}^{d+1})$.
In addition, for $j=0,\ldots,d-1$ and for $(x_{0},\ldots,x_{d})$ in ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ we let $c_{j}(x_{0},\ldots,x_{d})=\varphi(x_{d})\tilde{c}_{j}(x_{0},\ldots,x_{d-1})$. Then $c_{0},\ldots,c_{d-1}$ belong to $C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and we have $$\begin{gathered}
c(x_{0},\ldots,x_{d})= b(x_{0},\ldots, x_{d})+\varphi(x_{d})\tilde{c}(x_{0},\ldots,x_{d-1}) \\ = \partial_{x_{d}} c_{d}(x_{0},\ldots,x_{d})
+\varphi(x_{d})\sum_{j=0}^{d-1}\partial_{x_{j}}\tilde{c}_{j}(x_{0},\ldots,x_{d-1})
= \sum_{j=0}^{d}\partial_{x_{j}}{c}_{j}.
\end{gathered}$$ This shows that the claim is true in dimension $d+1$. The proof is now complete.
Let us now go back to the proof of the lemma. Since we have $\int c(x)dx=0$ the above claim tells us that $c$ can be written in the form $c=\sum_{j=0}^{d}\partial_{j}c_{j}$ with $c_{0},\ldots,c_{d}$ in $C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Observe also that the Schwartz kernel of $[\partial_{x_{j}},\Gamma_{0}]$ is equal to $$\begin{gathered}
( \partial_{x_{j}}- \partial_{y_{j}})[- \log \|\psi_{x}(y)\|] \\ = \sum_{k,l} ( \partial_{x_{j}}- \partial_{y_{j}})
[\varepsilon_{kl}(x)(x_{l}-y_{l})][\partial_{z_{k}}\log \|z\|]_{z=-\psi_{x}(y)}\\
= \sum_{k,l} (x_{k}-y_{k}) (\partial_{x_{j}}\varepsilon_{kl})(x)
\gamma_{k}(-\psi_{x}(y))\|\psi_{x}(y)\|^{-4},
\end{gathered}$$ where we have let $\gamma_{0}(y)=\frac{1}{2} y_{0}$ and $\gamma_{k}(y)=y_{k}^{3}$, $k=1,\ldots,d$. In particular $k_{[\partial_{x_{j}},\Gamma_{0}]}(x,y)$ has no logarithmic singularity near the diagonal, that is, we have $c_{[\partial_{x_{j}},\Gamma_{0}]}(x)=0$.
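For instance, assuming as before that the pseudo-norm is $\|z\|=(z_{0}^{2}+z_{1}^{4}+\cdots+z_{d}^{4})^{1/4}$, the functions $\gamma_{k}$ arise from the elementary computation $$\partial_{z_{0}}\log\|z\|=\tfrac{1}{2}z_{0}\,\|z\|^{-4}, \qquad \partial_{z_{k}}\log\|z\|=z_{k}^{3}\,\|z\|^{-4}, \quad k=1,\ldots,d.$$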
Next, let $\psi \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\psi=1$ near ${{\operatorname{supp}}}c\cup{{\operatorname{supp}}}c_{0}\cup \cdots \cup {{\operatorname{supp}}}c_{d}$ and let $\chi \in
C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\chi=1$ near ${{\operatorname{supp}}}\psi$. Then we have $$\begin{gathered}
[\chi \partial_{x_{j}}, c_{j}\Gamma_{0}\psi]= [\partial_{x_{j}}, c_{j}\Gamma_{0}\psi] =
[\partial_{x_{j}},c_{j}]\Gamma_{0} \psi + c_{j}
[\partial_{x_{j}}, \Gamma_{0}]\psi+ c_{j}\Gamma_{0} [\partial_{x_{j}},\psi ]\\
= \partial_{x_{j}}c_{j} \Gamma_{0} \psi + c_{j} [\partial_{x_{j}}, \Gamma_{0}]\psi + c_{j}\Gamma_{0} \partial_{x_{j}}\psi.\end{gathered}$$ Since $ c_{j}\Gamma_{0} \partial_{x_{j}}\psi$ is smoothing and $c_{c_{j} [\partial_{x_{j}}, \Gamma_{0}]\psi}(x)=c_{j}c_{[\partial_{x_{j}},
\Gamma_{0}]}(x)=0$ we deduce from this that $P$ is of the form $P= \sum_{j=0}^{d}[\chi \partial_{x_{j}}, c_{j}\Gamma_{0}\psi] +Q$ with $Q\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $c_{Q}(x)=0$. It then follows from the part (i) that $P$ belongs to the commutator space of ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
\(iii) Let $\rho \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\int \rho(x)dx=1$, let $\psi \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\psi=1$ near ${{\operatorname{supp}}}\rho$, and set $\Gamma=\rho \Gamma_{0}\psi$. Let $P\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and let $\tilde{\psi}\in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\tilde{\psi}=1$ near ${{\operatorname{supp}}}c_{P}\cup {{\operatorname{supp}}}\psi$. Then we have $$P=({\ensuremath{{\operatorname{Res}}}}P) \Gamma + ({\ensuremath{{\operatorname{Res}}}}P)\rho \Gamma_{0}(\tilde{\psi}-\psi)+ (c_{P}-({\ensuremath{{\operatorname{Res}}}}P)\rho)\Gamma_{0}\tilde{\psi}+ P-c_{P}\Gamma_{0}\tilde{\psi}.
\label{eq:Traces.decomposition-P-compact-support}$$
Notice that $({\ensuremath{{\operatorname{Res}}}}P)\rho \Gamma_{0}(\tilde{\psi}-\psi)$ belongs to ${\ensuremath{\Psi^{-\infty}_{c}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Observe also that the logarithmic singularity of $P-c_{P}\Gamma_{0}\tilde{\psi}$ is equal to $c_{P}(x)-\tilde{\psi}(x)c_{P}(x)=0$. Therefore, it follows from (i) that these operators belong to commutator space of ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. In addition, as $\int (c_{P}(x)-({\ensuremath{{\operatorname{Res}}}}P)\rho(x))dx=0$ we see that $(c_{P}-({\ensuremath{{\operatorname{Res}}}}P)\rho)\Gamma_{0}\tilde{\psi}$ is as in (ii), so it also belongs to the commutator space of ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Combining all this with (\[eq:Traces.decomposition-P-compact-support\]) then shows that $P$ agrees with $({\ensuremath{{\operatorname{Res}}}}P) \Gamma $ modulo a sum of commutators in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. The lemma is thus proved.
Next, we quote the well known lemma below.
\[lem:Traces.smoothing-operators2\] Any $R \in {\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$ such that ${\ensuremath{{\operatorname{Tr}}}}R=0$ is the sum of two commutators in ${\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$.
We are now ready to prove the main result of this section.
\[thm:Traces.traces\] Assume that $M$ is connected. Then any trace on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ is a constant multiple of the noncommutative residue.
Let $\tau$ be a trace on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$. By Lemma \[lem:Traces.sum-commutators.compact\] there exists $\Gamma \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that any $P=(P_{ij})$ in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}},{\ensuremath{\mathbb{C}}}^{r})$ can be written as $$P=\Gamma \otimes R \bmod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}), {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})]\otimes M_{r}({\ensuremath{\mathbb{C}}}),
$$ where we have let $R=({\ensuremath{{\operatorname{Res}}}}P_{ij})\in M_{r}({\ensuremath{\mathbb{C}}})$. Notice that ${\ensuremath{{\operatorname{Tr}}}}R= \sum
{\ensuremath{{\operatorname{Res}}}}P_{ii}={\ensuremath{{\operatorname{Res}}}}P$. Thus $R-\frac{1}{r}({\ensuremath{{\operatorname{Res}}}}P)I_{r}$ has a vanishing trace, hence belongs to the commutator space of $M_{r}({\ensuremath{\mathbb{C}}})$. Therefore, we have $$P=({\ensuremath{{\operatorname{Res}}}}P) \Gamma\otimes (\frac{1}{r}I_{r}) \quad \bmod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}, {\ensuremath{\mathbb{C}}}^{r}), {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}},{\ensuremath{\mathbb{C}}}^{r})].
\label{eq:Traces.decomposition-P-Rd-Cr}$$
Let $\kappa:U\rightarrow {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be a local $H$-framed chart mapping onto ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ and such that ${\ensuremath{\mathcal{E}}}$ is trivializable over its domain. For the sake of brevity we shall call such a chart a *nice $H$-framed chart*. As $U$ is $H$-framed and is Heisenberg diffeomorphic to ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$, and as ${\ensuremath{\mathcal{E}}}$ is trivializable over $U$, it follows from (\[eq:Traces.decomposition-P-Rd-Cr\]) that there exists $\Gamma_{U}\in
{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|_{U}})$ such that, for any $P \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(U,{\ensuremath{\mathcal{E}}}_{|_{U}})$, we have $$P=({\ensuremath{{\operatorname{Res}}}}P)\Gamma_{U} \quad \bmod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(U,{\ensuremath{\mathcal{E}}}_{|_{U}}), {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(U,{\ensuremath{\mathcal{E}}}_{|_{U}})].
\label{eq:Traces.local-form-Psivdos}$$ If we apply the trace $\tau$, then we see that, for any $P \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(U,{\ensuremath{\mathcal{E}}}_{|_{U}})$, we have $$\tau(P) = \Lambda_{U}{\ensuremath{{\operatorname{Res}}}}P, \qquad \Lambda_{U}:=\tau(\Gamma_{U}).
$$
Next, let ${\ensuremath{\mathcal{U}}}$ be the set of points $x \in M$ near which there is a domain $V$ of a nice $H$-framed chart such that $\Lambda_{V}=\Lambda_{U}$. Clearly ${\ensuremath{\mathcal{U}}}$ is a non-empty open subset of $M$. Let us prove that ${\ensuremath{\mathcal{U}}}$ is closed. Let $x \in \overline{{\ensuremath{\mathcal{U}}}}$ and let $V$ be an open neighborhood of $x$ which is the domain of a nice $H$-framed chart (such a neighborhood always exists). Since $x$ belongs to the closure of ${\ensuremath{\mathcal{U}}}$ the set ${\ensuremath{\mathcal{U}}}\cap V$ is non-empty. Let $y \in {\ensuremath{\mathcal{U}}}\cap V$. As $y$ belongs to ${\ensuremath{\mathcal{U}}}$ there exists an open neighborhood $W$ of $y$ which is the domain of a nice $H$-framed chart such that $\Lambda_{W}=\Lambda_{U}$. Then for any $P$ in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(V\cap W, {\ensuremath{\mathcal{E}}}_{|V\cap W})$ we have $\tau(P)=\Lambda_{V}{\ensuremath{{\operatorname{Res}}}}P =\Lambda_{W}{\ensuremath{{\operatorname{Res}}}}P$. Choosing $P$ so that ${\ensuremath{{\operatorname{Res}}}}P\neq 0$ then shows that $\Lambda_{V}=\Lambda_{W}=\Lambda_{U}$. Since $V$ contains $x$ and is a domain of a nice $H$-framed chart we deduce that $x$ belongs to ${\ensuremath{\mathcal{U}}}$. Hence ${\ensuremath{\mathcal{U}}}$ is both closed and open. As $M$ is connected it follows that ${\ensuremath{\mathcal{U}}}$ agrees with $M$. Therefore, if we set $\Lambda=\Lambda_{U}$ then, for any domain $V$ of a nice $H$-framed chart, we have $$\tau(P)=\Lambda {\ensuremath{{\operatorname{Res}}}}P \qquad \forall P \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(V,{\ensuremath{\mathcal{E}}}_{|_{V}}).
\label{eq:Traces.mutiple-local}$$
Now, let $(\varphi_{i})$ be a finite partition of unity subordinate to an open covering $(U_{i})$ of $M$ by domains of nice $H$-framed charts. For each index $i$ let $\psi_{i}\in C^{\infty}_{c}(U_{i})$ be such that $\psi_{i}=1$ near ${{\operatorname{supp}}}\varphi_{i}$. Then any $P \in
{\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ can be written as $P=\sum \varphi_{i} P\psi_{i}+R$, where $R$ is a smoothing operator whose kernel vanishes near the diagonal of $M\times M$. In particular we have ${\ensuremath{{\operatorname{Trace}}}}R=0$, so by Lemma \[lem:Traces.smoothing-operators2\] the commutator space of ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ contains $R$. Since each operator $\varphi_{i} P\psi_{i}$ can be seen as an element of ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}(U_{i},{\ensuremath{\mathcal{E}}}_{|_{U_{i}}})$, using (\[eq:Traces.mutiple-local\]) we get $$\tau(P)=\sum \tau(\varphi_{i}P\psi_{i}) = \sum \Lambda {\ensuremath{{\operatorname{Res}}}}\varphi_{i}P\psi_{i}=\Lambda {\ensuremath{{\operatorname{Res}}}}P.
$$ Hence we have $\tau =\Lambda {\ensuremath{{\operatorname{Res}}}}$. This shows that any trace on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ is proportional to the noncommutative residue.
Since the dual of ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})/[{\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}}),{\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})]$ is isomorphic to the space of traces on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$, as a consequence of Theorem \[thm:Traces.traces\] we get:
Assume $M$ connected. Then an operator $P \in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ is a sum of commutators in ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ if and only if its noncommutative residue vanishes.
In [@EM:HAITH] Epstein and Melrose computed the Hochschild homology of the algebra of symbols ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})/{\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$ when $(M,H)$ is a contact manifold. In fact, as the algebra ${\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$ is $H$-unital and its Hochschild homology is known, the long exact sequence of [@Wo:LESCHAEA] holds and allows us to relate the Hochschild homology of ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ to that of ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})/{\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$. In particular, we can recover from this that the space of traces on ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ is one-dimensional when the manifold is connected.
Analytic Applications on general Heisenberg manifolds {#sec:Analytic-Applications}
=====================================================
In this section we derive several analytic applications of the construction of the noncommutative residue trace for the Heisenberg calculus. First, we deal with zeta functions of hypoelliptic [$\Psi_{H}$DOs]{} and relate their singularities to the heat kernel asymptotics of the corresponding operators. Second, we give logarithmic metric estimates for Green kernels of hypoelliptic [$\Psi_{H}$DOs]{} whose order is equal to the Hausdorff dimension $\dim M +1$. This connects nicely with previous results of Fefferman, Stein and their students and collaborators. Finally, we show that the noncommutative residue for the Heisenberg calculus allows us to extend the Dixmier trace to the whole algebra of integer order [$\Psi_{H}$DOs]{}. This is the analogue for the Heisenberg calculus of a well-known result of Alain Connes.
Zeta functions of hypoelliptic [$\Psi_{H}$DOs]{}
------------------------------------------------
Let $(M^{d+1}, H)$ be a compact Heisenberg manifold equipped with a smooth density $>0$, let ${\ensuremath{\mathcal{E}}}$ be a Hermitian vector bundle over $M$ of rank $r$, and let $P:C^{\infty}(M,{\ensuremath{\mathcal{E}}})\rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a [$\Psi_{H}$DO]{} of integer order $m\geq 1$ with an invertible principal symbol. In addition, assume that there is a ray $L_{\theta}=\{\arg \lambda =\theta\}$ which does not go through an eigenvalue of $P$ and is a principal cut for the principal symbol $\sigma_{m}(P)$ as in Section \[sec:Heisenberg-calculus\].
Let $(P_{\theta}^{s})_{s\in {\ensuremath{\mathbb{C}}}}$ be the family of complex powers associated to $\theta$ as in Proposition \[prop:Heisenberg.powers2\]. Since $(P_{\theta}^{s})_{s \in {\ensuremath{\mathbb{C}}}}$ is a holomorphic family of [$\Psi_{H}$DOs]{}, Proposition \[thm:NCR.TR.global\] allows us to directly define the zeta function $\zeta_{\theta}(P;s)$ as the meromorphic function, $$\zeta_{\theta}(P;s):={\ensuremath{{\operatorname{TR}}}}P_{\theta}^{-s}, \qquad s \in {\ensuremath{\mathbb{C}}}.
$$
\[prop:Zeta.zeta-function\] Let $\Sigma=\{\frac{d+2}{m}, \frac{d+1}{m},\ldots, \frac{1}{m},-\frac{1}{m},
-\frac{2}{m}, \ldots\}$. Then the function $\zeta_{\theta}(P;s)$ is analytic outside $\Sigma$, and on $\Sigma$ it has at worst simple pole singularities such that $${\ensuremath{{\operatorname{Res}}}}_{s=\sigma}\zeta_{\theta}(P;s)=m{\ensuremath{{\operatorname{Res}}}}P^{-\sigma}_{\theta}, \qquad \sigma\in \Sigma.
\label{eq:Zeta.residue-zeta}$$ In particular, $\zeta_{\theta}(P;s)$ is always regular at $s=0$.
Since ${{{\operatorname{ord}}}}P_{\theta}^{-s}=-ms$ it follows from Proposition \[thm:NCR.NCR\] that $\zeta_{\theta}(P;s)$ is analytic outside $\Sigma':=\Sigma\cup\{0\}$ and on $\Sigma'$ has at worst simple pole singularities satisfying (\[eq:Zeta.residue-zeta\]). At $s=0$ we have $ {\ensuremath{{\operatorname{Res}}}}_{s=0}\zeta_{\theta}(P;s)=m{\ensuremath{{\operatorname{Res}}}}P^{0}_{\theta}=m{\ensuremath{{\operatorname{Res}}}}[1-\Pi_{0}(P)]$, but as $\Pi_{0}(P)$ is a smoothing operator we have ${\ensuremath{{\operatorname{Res}}}}[1-\Pi_{0}(P)]=-{\ensuremath{{\operatorname{Res}}}}\Pi_{0}(P)=0$. Thus $\zeta_{\theta}(P;s)$ is regular at $s=0$.
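For orientation, note that when $P$ is in addition positive (an assumption not needed for the proposition itself), the operator $P_{\theta}^{-s}$ is trace-class for $\Re s>\frac{d+2}{m}$, so that on this half-plane $$\zeta_{\theta}(P;s)={\ensuremath{{\operatorname{Trace}}}}\, P_{\theta}^{-s}=\sum_{\lambda_{k}(P)\neq 0}\lambda_{k}(P)^{-s},
$$ where $\lambda_{k}(P)$ runs over the nonzero eigenvalues of $P$ counted with multiplicity. The proposition thus provides the meromorphic continuation of the usual spectral zeta function of $P$ to all of ${\ensuremath{\mathbb{C}}}$.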
Assume now that $P$ is selfadjoint and the union set of its principal cuts is $\Theta(P)={\ensuremath{\mathbb{C}}}\setminus [0,\infty)$. This implies that $P$ is bounded from below (see [@Po:CPDE1]), so its spectrum is real and contains at most finitely many negative eigenvalues. We will use the subscript $\uparrow$ (resp. $\downarrow$) to refer to a spectral cutting in the upper halfplane $\Im \lambda>0$ (resp. lower halfplane $\Im
\lambda<0$).
Since $P$ is bounded from below it defines a heat semigroup $e^{-tP}$, $t\geq 0$, and, as the principal symbol of $P$ is invertible, for $t>0$ the operator $e^{-tP}$ is smoothing, hence has a smooth Schwartz kernel $k_{t}(x,y)$ in $C^{\infty}(M,{\ensuremath{\mathcal{E}}}){\hat\otimes}C^{\infty}(M,{\ensuremath{\mathcal{E}}}^{*}\otimes |\Lambda|(M))$. Moreover, as $t\rightarrow 0^{+}$ we have the heat kernel asymptotics, $$k_{t}(x,x)\sim t^{-\frac{d+2}{m}}\sum_{j\geq 0} t^{\frac{j}{m}}a_{j}(P)(x) + \log t\sum_{k\geq 0}t^{k}b_{k}(P)(x),
\label{eq:Zeta.heat-kernel-asymptotics}$$ where the asymptotics takes place in $C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}\otimes |\Lambda|(M))$, and when $P$ is a differential operator we have $a_{2j-1}(P)(x)=b_{j}(P)(x)=0$ for all $j\in {\ensuremath{\mathbb{N}}}$ (see [@BGS:HECRM], [@Po:MAMS1] when $P$ is a differential operator and see [@Po:CPDE1] for the general case).
\[prop:Zeta.heat-zeta-local\] For $j=0,1,\ldots$ set $\sigma_{j}=\frac{d+2-j}{m}$. Then:
1\) When $\sigma_{j}\not \in {\ensuremath{\mathbb{Z}}}_{-}$ we have $${\ensuremath{{\operatorname{Res}}}}_{s=\sigma_{j}}t_{P_{{\uparrow \downarrow}}^{-s}}(x)=m c_{P^{-\sigma_{j}}}(x) = \Gamma(\sigma_{j})^{-1}a_{j}(P)(x).
\label{eq:Zeta.tPs-heat1}
$$ 2) For $k=1,2,\ldots$ we have $$\begin{gathered}
{\ensuremath{{\operatorname{Res}}}}_{s=-k}t_{P_{{\uparrow \downarrow}}^{-s}}(x)=m c_{P^{k}}(x) = (-1)^{k+1}k!b_{k}(P)(x),
\label{eq:Zeta.tPs-heat2}\\
\lim_{s\rightarrow -k}[t_{P_{{\uparrow \downarrow}}^{-s}}(x)-m (s+k)^{-1}c_{P^{k}}(x)] = (-1)^{k}k! a_{d+2+mk}(P)(x).\end{gathered}$$ 3) For $k=0$ we have $$\lim_{s\rightarrow 0} t_{P_{{\uparrow \downarrow}}^{-s}}(x) =a_{d+2}(P)(x)-t_{\Pi_{0}}(x).
\label{eq:Zeta.tPs-heat4}$$
When $P$ is positive and invertible the result is a standard consequence of the Mellin formula (see, e.g., [@Gi:ITHEASIT]). Here it is slightly more complicated because we don’t assume that $P$ is positive or invertible.
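For the reader's convenience, let us recall the Mellin formula alluded to: for a selfadjoint operator $A\geq c>0$ and $\Re s>0$ one has $$A^{-s}=\Gamma(s)^{-1}\int_{0}^{\infty}t^{s-1}e^{-tA}\,dt,
$$ as follows from the scalar identity $\lambda^{-s}=\Gamma(s)^{-1}\int_{0}^{\infty}t^{s-1}e^{-t\lambda}dt$ ($\lambda>0$) and the spectral theorem.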
For $\Re s>0$ set $Q_{s}= \Gamma(s)^{-1}\int_{0}^{1}t^{s-1}e^{-tP}dt$. Then we have:
The family $(Q_{s})_{\Re s>0}$ can be uniquely extended to a holomorphic family of [$\Psi_{H}$DOs]{} parametrized by ${\ensuremath{\mathbb{C}}}$ in such a way that:
\(i) The families $(Q_{s})_{s \in {\ensuremath{\mathbb{C}}}}$ and $(P_{{\uparrow \downarrow}}^{-s})_{s \in {\ensuremath{\mathbb{C}}}}$ agree up to a holomorphic family of smoothing operators;
\(ii) We have $Q_{0}=1$ and $Q_{-k}=P^{k}$ for any integer $k\geq 1$.
First, let $\Pi_{+}(P)$ and $\Pi_{-}(P)$ denote the orthogonal projections onto the positive and negative eigenspaces of $P$. Notice that $\Pi_{-}(P)$ is a smoothing operator because $P$ has at most only finitely many negative eigenvalues. For $\Re s>0$ the Mellin formula allows us to write $$P_{{\uparrow \downarrow}}^{-s}=\Pi_{-}(P) P_{{\uparrow \downarrow}}^{-s}+\Gamma(s)^{-1}\int_{0}^{\infty}t^{s}\Pi_{+}(P)e^{-tP}\frac{dt}{t}=Q_{s}+R_{{\uparrow \downarrow}}(s),
\label{eq:Zeta.claim-heat.PQs}$$ where $R_{{\uparrow \downarrow}}(s)$ is equal to $$\Pi_{-}(P) P_{{\uparrow \downarrow}}^{-s}-s^{-1}\Gamma(s)^{-1}\Pi_{0}(P)-\Gamma(s)^{-1}\Pi_{-}(P)\int_{0}^{1}t^{s}e^{-tP}\frac{dt}{t}+
\Gamma(s)^{-1}\int_{1}^{\infty} t^{s}\Pi_{+}(P)e^{-tP}\frac{dt}{t}.
$$ Notice that $ (\Pi_{-}(P) P_{{\uparrow \downarrow}}^{-s})_{s \in {\ensuremath{\mathbb{C}}}}$ and $(s^{-1}\Gamma(s)^{-1}\Pi_{0}(P))_{s \in {\ensuremath{\mathbb{C}}}}$ are holomorphic families of smoothing operators because $\Pi_{-}(P)$ and $\Pi_{0}(P)$ are smoothing operators. Moreover, upon writing $$\begin{gathered}
\Pi_{-}(P)\int_{0}^{1}t^{s}e^{-tP}\frac{dt}{t}=\Pi_{-}(P)(\int_{0}^{1} t^{s}e^{-tP}\frac{dt}{t})\Pi_{-}(P),\\
\int_{1}^{\infty} t^{s}\Pi_{+}(P)e^{-tP}\frac{dt}{t}= e^{-\frac{1}{4}P}(\int_{1/2}^{\infty} t^{s}\Pi_{+}(P)e^{-tP}\frac{dt}{t})e^{-\frac{1}{4}P},
$$ we see that $( \Pi_{-}(P)\int_{0}^{1}t^{s}e^{-tP}\frac{dt}{t})_{\Re s>0}$ and $( \int_{1}^{\infty} t^{s}\Pi_{+}(P)e^{-tP}\frac{dt}{t})_{\Re s>0}$ are holomorphic families of smoothing operators. Therefore $(R_{{\uparrow \downarrow}}(s))_{\Re s>0}$ is a holomorphic family of smoothing operators and using (\[eq:Zeta.claim-heat.PQs\]) we see that $(Q_{s})_{\Re s>0}$ is a holomorphic family of [$\Psi_{H}$DOs]{}.
Next, an integration by parts gives $$\Gamma(s+1)P Q_{s+1}= -\int_{0}^{1}
t^{s}\frac{d}{dt}(e^{-tP})\,dt= -e^{-P} + s \int_{0}^{1}
t^{s-1} e^{-tP}dt.
$$ Since $\Gamma(s+1)=s\Gamma(s)$ we get $$Q_{s}= P Q_{s+1} + \Gamma(s+1)^{-1}e^{-P}, \qquad \Re s>0.
\label{eq:Zeta.Qs-extension1}$$ An easy induction then shows that for $k=1,2,\ldots$ we have $$Q_{s}= P^{k} Q_{s+k} + \Gamma(s+k)^{-1}P^{k-1}e^{-P}+\ldots +\Gamma(s+1)^{-1}e^{-P}.
\label{eq:Zeta.Qs-extension2}$$ It follows that the family $(Q_{s})_{\Re s>0}$ has a unique analytic continuation to each half-space $\Re s>-k$ for $k=1,2,\ldots$, so it admits a unique analytic continuation to ${\ensuremath{\mathbb{C}}}$. Furthermore, as for $\Re s>-k$ we have $P^{-s}_{{\uparrow \downarrow}}=P^{k}P^{-(s+k)}_{{\uparrow \downarrow}}$ we get $$Q_{s}-P^{-s}_{{\uparrow \downarrow}}=-P^{k} R_{{\uparrow \downarrow}}(s+k) +\Gamma(s+k)^{-1}P^{k-1}e^{-P}+\ldots +\Gamma(s+1)^{-1}e^{-P},
$$ from which we deduce that $(Q_{s}-P^{-s}_{{\uparrow \downarrow}})_{\Re s >-k}$ is a holomorphic family of smoothing operators. Hence the families $(Q_{s})_{s \in {\ensuremath{\mathbb{C}}}}$ and $(P^{-s}_{{\uparrow \downarrow}})_{s \in {\ensuremath{\mathbb{C}}}}$ agree up to a holomorphic family of smoothing operators.
Finally, we have $$Q_{1}=\Pi_{0}(P)+\int_{0}^{1}(1-\Pi_{0}(P))e^{-tP}dt=\Pi_{0}(P)-P^{-1}(e^{-P}-1).
$$ Thus setting $s=0$ in (\[eq:Zeta.Qs-extension1\]) gives $$\begin{gathered}
Q_{0}=P[\Pi_{0}(P)-P^{-1}(e^{-P}-1)]+e^{-P} = -(1-\Pi_{0}(P))(e^{-P}-1)+e^{-P}\\ = 1-\Pi_{0}(P)+\Pi_{0}(P)e^{-P}=1.
\label{eq:Zeta.Q0}\end{gathered}$$ Furthermore, as $\Gamma(s)^{-1}$ vanishes at every non-positive integer, from (\[eq:Zeta.Qs-extension2\]) and (\[eq:Zeta.Q0\]) we see that we have $Q_{-k}=P^{k}Q_{0}=P^{k}$ for any integer $k\geq 1$. The proof of the claim is thus achieved.
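As a quick sanity check of (\[eq:Zeta.Qs-extension1\]), consider the scalar case $P=\lambda>0$, for which $Q_{1}=\lambda^{-1}(1-e^{-\lambda})$ and $Q_{2}=\lambda^{-2}(1-e^{-\lambda})-\lambda^{-1}e^{-\lambda}$; then $$PQ_{2}+\Gamma(2)^{-1}e^{-P}=\lambda^{-1}(1-e^{-\lambda})-e^{-\lambda}+e^{-\lambda}=Q_{1},
$$ as it should be.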
Now, for $j=0,1,\ldots$ we set $\sigma_{j}=\frac{d+2-j}{m}$. As $(R_{{\uparrow \downarrow}}(s))_{s \in {\ensuremath{\mathbb{C}}}}:=(P_{{\uparrow \downarrow}}^{-s}-Q_{s})_{s \in {\ensuremath{\mathbb{C}}}}$ is a holomorphic family of smoothing operators, the map $s \rightarrow
t_{R_{{\uparrow \downarrow}}(s)}(x)$ is holomorphic from ${\ensuremath{\mathbb{C}}}$ to $C^{\infty}(M, |\Lambda|(M)\otimes {\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}})$. By combining this with Proposition \[prop:Zeta.zeta-function\] we deduce that for $j=0,1,\ldots$ we have $${\ensuremath{{\operatorname{Res}}}}_{s=\sigma_{j}}t_{P_{{\uparrow \downarrow}}^{-s}}(x)=m c_{P^{-\sigma_{j}}}(x) = {\ensuremath{{\operatorname{Res}}}}_{s=\sigma_{j}}t_{Q_{s}}(x),
\label{eq:Zeta.tPs-TQs'1}$$ Moreover, as for $k=1,2,\ldots$ we have $R_{{\uparrow \downarrow}}(-k)=0$ we also see that $$\begin{gathered}
\lim_{s\rightarrow -k}[t_{P_{{\uparrow \downarrow}}^{-s}}(x)-m (s+k)^{-1}c_{P^{k}}(x)] \\ = \lim_{s\rightarrow -k}[t_{Q_{s}}(x)-(s+k)^{-1}{\ensuremath{{\operatorname{Res}}}}_{s=-k}t_{Q_{s}}(x)].\end{gathered}$$ Similarly, as $P_{{\uparrow \downarrow}}^{0}=1-\Pi_{0}(P)=Q_{0}-\Pi_{0}(P)$ we get $$\lim_{s\rightarrow 0} t_{P_{{\uparrow \downarrow}}^{-s}}(x) =\lim_{s\rightarrow 0}t_{Q_{s}}(x)-t_{\Pi_{0}}(x).
\label{eq:Zeta.tPs-TQs'3}$$
Next, let $k_{Q_{s}}(x,y)$ denote the kernel of $Q_{s}$. As $Q_{s}$ has order $-m s$, for $\Re s>\frac{d+2}{m}$ this is a trace-class operator and thanks to (\[eq:Zeta.heat-kernel-asymptotics\]) we have $$\Gamma(s) k_{Q_{s}}(x,x)= \int_{0}^{1}t^{s-1}k_{t}(x,x) dt.
$$ Moreover (\[eq:Zeta.heat-kernel-asymptotics\]) implies that, for any integer $N\geq 0$, in $C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}\otimes |\Lambda|(M))$ we have $$k_{t}(x,x)=\sum_{-\sigma_{j}<N}t^{-\sigma_{j}}a_{j}(P)(x)+\sum_{k<N}(t^{k}\log t)b_{k}(P)(x) +{\operatorname{O}}(t^{N}).
$$ Therefore, for $\Re s >\frac{d+2}{m}$ the density $\Gamma(s)k_{Q_{s}}(x,x)$ is of the form $$\sum_{-\sigma_{j}<N}(\int_{0}^{1}t^{s-\sigma_{j}}\frac{dt}{t})a_{j}(P)(x)+
\sum_{k<N}(\int_{0}^{1}t^{k+s}\log t \frac{dt}{t})b_{k}(P)(x) + \Gamma(s) h_{N,s}(x),
$$ with $h_{N,s}(x) \in {{\operatorname{Hol}}}(\Re s>-N, C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}{\ensuremath{\mathcal{E}}}\otimes |\Lambda|(M)))$. Since for $\alpha >0$ we have $$\int_{0}^{1}t^{\alpha}\log t \frac{dt}{t}=-\frac{1}{\alpha}\int_{0}^{1}t^{\alpha-1}dt = -\frac{1}{\alpha^{2}},
$$ we see that $k_{Q_{s}}(x,x)$ is equal to $$\Gamma(s)^{-1} \sum_{-\sigma_{j}<N}\frac{1}{s-\sigma_{j}}a_{j}(P)(x)-
\Gamma(s)^{-1} \sum_{k<N}\frac{1}{(s+k)^{2}}b_{k}(P)(x) + h_{N,s}(x).
$$ Since $\Gamma(s)$ is analytic on ${\ensuremath{\mathbb{C}}}\setminus ({\ensuremath{\mathbb{Z}}}_{-}\cup\{0\})$ and for $k=0,1,\ldots$ near $s=-k$ we have $\Gamma(s)^{-1}\sim (-1)^{k}k!(s+k)$, we deduce that:
- when $\sigma_{j}\not \in {\ensuremath{\mathbb{Z}}}_{-}$ we have $ {\ensuremath{{\operatorname{Res}}}}_{s=\sigma_{j}}t_{Q_{s}}(x)= \Gamma(\sigma_{j})^{-1}a_{j}(P)(x)$.
- for $k=1,2,\ldots$ we have $$\begin{gathered}
{\ensuremath{{\operatorname{Res}}}}_{s=-k}t_{Q_{s}}(x)= (-1)^{k+1}k!b_{k}(P)(x),\\
\lim_{s\rightarrow -k}[t_{Q_{s}}(x)-(s+k)^{-1}{\ensuremath{{\operatorname{Res}}}}_{s=-k}t_{Q_{s}}(x)] = (-1)^{k}k! a_{d+2+mk}(P)(x).
\end{gathered}$$ - for $k=0$ we have $\lim_{s\rightarrow 0} t_{Q_{s}}(x) =a_{d+2}(P)(x)$.
Combining this with (\[eq:Zeta.tPs-TQs'1\])–(\[eq:Zeta.tPs-TQs'3\]) then proves the equalities (\[eq:Zeta.tPs-heat1\])–(\[eq:Zeta.tPs-heat4\]).
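Alternatively, the elementary integral used above can be obtained by differentiating under the integral sign: $$\int_{0}^{1}t^{\alpha-1}\log t\, dt=\frac{d}{d\alpha}\int_{0}^{1}t^{\alpha-1}dt=\frac{d}{d\alpha}\Bigl(\frac{1}{\alpha}\Bigr)=-\frac{1}{\alpha^{2}}, \qquad \alpha>0.
$$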
From Proposition \[prop:Zeta.heat-zeta-local\] we immediately get:
\[prop:Zeta.heat-zeta-global\] 1) For $j=0,1,\ldots$ let $\sigma_{j}=\frac{d+2-j}{m}$. When $\sigma_{j}\not \in {\ensuremath{\mathbb{Z}}}_{-}$ we have: $${\ensuremath{{\operatorname{Res}}}}_{s=\sigma_{j}}\zeta_{{\uparrow \downarrow}}(P;s) =m {\ensuremath{{\operatorname{Res}}}}P^{-\sigma_{j}}= \Gamma(\sigma_{j})^{-1}\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}a_{j}(P)(x).
\label{eq:Zeta.heat-zeta-global1}$$ 2) For $k=1,2,\ldots$ we have $$\begin{gathered}
{\ensuremath{{\operatorname{Res}}}}_{s=-k}\zeta_{{\uparrow \downarrow}}(P;s) =m {\ensuremath{{\operatorname{Res}}}}P^{k} = (-1)^{k+1}k!\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}b_{k}(P)(x), \\
\lim_{s\rightarrow -k}[\zeta_{{\uparrow \downarrow}}(P;s)-m (s+k)^{-1} {\ensuremath{{\operatorname{Res}}}}P^{k}] = (-1)^{k}k! \int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}} a_{d+2+mk}(P)(x).\end{gathered}$$ 3) For $k=0$ we have $$\zeta_{{\uparrow \downarrow}}(P;0)=\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}} a_{d+2}(P)(x)-\dim \ker P.
$$
Next, for $k=0,1,\ldots$ let $\lambda_{k}(P)$ denote the $(k+1)$’th eigenvalue of $P$ counted with multiplicity. Then by [@Po:MAMS1] and [@Po:CPDE1] as $k\rightarrow \infty$ we have the Weyl asymptotics, $$\lambda_{k}(P)\sim \left(\frac{k}{\nu_{0}(P)}\right)^{\frac{m}{d+2}}, \qquad \nu_{0}(P)=\Gamma(1+\frac{d+2}{m})^{-1} \int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}a_{0}(P)(x).
\label{eq:Zeta.Weyl-asymptotics1}$$
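Equivalently, in terms of the counting function $N(\lambda):=\#\{k;\ \lambda_{k}(P)\leq \lambda\}$, the asymptotics (\[eq:Zeta.Weyl-asymptotics1\]) can be restated as $$N(\lambda)\sim \nu_{0}(P)\,\lambda^{\frac{d+2}{m}} \qquad \text{as } \lambda \rightarrow \infty.
$$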
Now, by Proposition \[prop:Zeta.heat-zeta-global\] we have $$\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}a_{0}(P)(x)=m\Gamma(\frac{d+2}{m}){\ensuremath{{\operatorname{Res}}}}P^{-\frac{d+2}{m}}=\frac{1}{d+2} \Gamma(1+\frac{d+2}{m}){\ensuremath{{\operatorname{Res}}}}P^{-\frac{d+2}{m}},
$$ Therefore, we obtain:
\[prop:Zeta.Weyl-asymptotics\] As $k\rightarrow \infty$ we have $$\lambda_{k}(P)\sim \left(\frac{k}{\nu_{0}(P)}\right)^{\frac{m}{d+2}}, \qquad \nu_{0}(P)= (d+2)^{-1}{\ensuremath{{\operatorname{Res}}}}P^{-\frac{d+2}{m}}.
$$
Finally, we can make use of Proposition \[prop:Zeta.heat-zeta-global\] to prove a local index formula for hypoelliptic [$\Psi_{H}$DOs]{} in the following setting. Assume that ${\ensuremath{\mathcal{E}}}$ admits a ${\ensuremath{\mathbb{Z}}}_{2}$-grading ${\ensuremath{\mathcal{E}}}={\ensuremath{\mathcal{E}}}^{+}\oplus {\ensuremath{\mathcal{E}}}_{-}$ and let $D:C^{\infty}(M,{\ensuremath{\mathcal{E}}})\rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a selfadjoint [$\Psi_{H}$DO]{} of integer order $m\geq 1$ with an invertible principal symbol and of the form, $$D= \left(
\begin{array}{cc}
0& D_{-} \\
D_{+}& 0
\end{array}
\right), \qquad D_{\pm}:C^{\infty}(M,{\ensuremath{\mathcal{E}}}_{\pm}) \rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}}_{\mp}).$$ Notice that the selfadjointness of $D$ means that $D_{+}^{*}=D_{-}$.
Since $D$ has an invertible principal symbol and $M$ is compact we see that $D$ is invertible modulo finite rank operators, hence is Fredholm. Then we let $${{\operatorname{ind}}}D:= {{\operatorname{ind}}}D_{+}=\dim \ker D_{+}-\dim \ker D_{-}.
$$
Under the above assumptions we have $${{\operatorname{ind}}}D=\int_{M} {{\operatorname{str}}}_{{\ensuremath{\mathcal{E}}}} a_{d+2}(D^{2})(x),
$$ where ${{\operatorname{str}}}_{{\ensuremath{\mathcal{E}}}}:={{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}^{+}}-{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}^{-}}$ denotes the supertrace on the fibers of ${\ensuremath{\mathcal{E}}}$.
We have $D^{2} = \left(
\begin{array}{cc}
D_{-}D_{+}& 0 \\
0 & D_{+}D_{-}
\end{array}
\right)$ and $D_{\mp} D_{\pm}=D_{\pm}^{*}D_{\pm}$. In particular, $D_{\mp} D_{\pm}$ is a positive operator with an invertible principal symbol. Moreover, for $\Re s >\frac{d+2}{2m}$ the difference $ \zeta(D_{-}D_{+};s) -\zeta(D_{+}D_{-};s) $ is equal to $$\sum_{\lambda>0} \lambda^{-s} (\dim\ker (D_{-}D_{+} -\lambda) -
\dim\ker (D_{+}D_{-} -\lambda)) = 0,
$$ for $D$ induces for any $\lambda>0$ a bijection between $\ker (D_{-}D_{+}-\lambda)$ and $\ker
(D_{+}D_{-} -\lambda)$ (see, e.g., [@BGV:HKDO]). By analytic continuation this yields $ \zeta(D_{-}D_{+};0) -\zeta(D_{+}D_{-};0)=0$. On the other hand, by Proposition \[prop:Zeta.heat-zeta-global\] we have $$\zeta(D_{\mp} D_{\pm};0) = \int_{M} {{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}_{\pm}}a_{d+2}(D_{\mp} D_{\pm})(x)-\dim \ker D_{\mp} D_{\pm}.
$$ Since $\dim \ker D_{\mp} D_{\pm}=\dim \ker D_{\pm}$ we deduce that ${{\operatorname{ind}}}D$ is equal to $$\int_{M} {{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}_{+}}a_{d+2}(D_{-}D_{+})(x)- \int_{M} {{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}_{-}}a_{d+2}(D_{+}D_{-})(x)= \int_{M} {{\operatorname{str}}}_{{\ensuremath{\mathcal{E}}}}a_{d+2}(D^{2})(x).
$$ The proof is thus achieved.
Metric estimates for Green kernels of hypoelliptic [$\Psi_{H}$DOs]{}
--------------------------------------------------------------------
Consider a compact Heisenberg manifold $(M^{d+1},H)$ endowed with a positive density and let ${\ensuremath{\mathcal{E}}}$ be a Hermitian vector bundle over $M$. In this subsection we shall prove that the positivity of a hypoelliptic [$\Psi_{H}$DO]{} is inherited by its logarithmic singularity when it has order $-(\dim M+1)$. As a consequence this will allow us to derive some metric estimates for Green kernels of hypoelliptic [$\Psi_{H}$DOs]{}.
Let $P:C^{\infty}(M,{\ensuremath{\mathcal{E}}})\rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a [$\Psi_{H}$DO]{} of order $m>0$ whose principal symbol is invertible and is positive in the sense of [@Po:MAMS1], i.e., we can write $\sigma_{m}(P)=q*q^{*}$ with $q\in S_{\frac{m}{2}}({\ensuremath{\mathfrak{g}}}^{*}M,{\ensuremath{\mathcal{E}}})$. The main technical result of this section is the following.
\[prop:Metric.positivity-cP\] The density ${{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P^{-\frac{d+2}{m}}}(x)$ is $>0$.
We will prove Proposition \[prop:Metric.positivity-cP\] later on in the section. As a first consequence, by combining with Proposition \[prop:Zeta.heat-zeta-local\] we get:
Let $a_{0}(P)(x)$ be the leading coefficient in the small time heat kernel asymptotics (\[eq:Zeta.heat-kernel-asymptotics\]) for $P$. Then the density ${{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}a_{0}(P)(x)$ is $>0$.
Assume now that the bracket condition $H+[H,H]=TM$ holds, i.e., $H$ is a Carnot-Carathéodory distribution in the sense of [@Gr:CCSSW]. Let $g$ be a Riemannian metric on $H$ and let $d_{H}(x,y)$ be the associated Carnot-Carathéodory metric on $M$. Recall that for two points $x$ and $y$ of $M$ the value of $d_{H}(x,y)$ is the infimum of the lengths of all paths joining $x$ to $y$ that are tangent to $H$ at each point (such a path always exists by the Chow Lemma). Moreover, the Hausdorff dimension of $M$ with respect to $d_{H}$ is equal to $\dim M+1$.
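To fix ideas, here is the simplest example (included only as an illustration and not used in the sequel). On ${\ensuremath{\mathbb{R}}}^{3}$ with coordinates $(x,y,t)$ let $H$ be spanned by the left-invariant Heisenberg vector fields $X=\partial_{x}-\frac{y}{2}\partial_{t}$ and $Y=\partial_{y}+\frac{x}{2}\partial_{t}$. Then $[X,Y]=\partial_{t}$, so that $H+[H,H]=T{\ensuremath{\mathbb{R}}}^{3}$, and by the ball-box theorem the Carnot-Carathéodory metric associated to the standard metric on $H$ satisfies $$d_{H}\bigl(0,(x,y,t)\bigr)\asymp |x|+|y|+|t|^{\frac{1}{2}} \qquad \text{near the origin},
$$ which is consistent with the fact that the Hausdorff dimension here is $4=\dim M+1$.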
In the setting of general Carnot-Carathéodory distributions there has been a lot of interest, by Fefferman, Stein and their collaborators, in giving metric estimates for the singularities of the Green kernels of hypoelliptic sublaplacians (see, e.g., [@FS:FSSOSO], [@Ma:EPKLCD], [@NSW:BMDVF1], [@Sa:FSGSSVF]). This allows us to relate the analysis of the hypoelliptic sublaplacian to the metric geometry of the underlying manifold.
An important result is that it follows from the maximum principle of Bony [@Bo:PMIHUPCOED] that the Green kernel of a selfadjoint hypoelliptic sublaplacian is positive near the diagonal. In general the positivity of the principal symbol need not carry over to the Green kernel. However, by making use of Proposition \[prop:Metric.positivity-cP\] we shall prove:
\[thm:Metric.metric-estimate\] Assume that $H+[H,H]=TM$ and let $P:C^{\infty}(M)\rightarrow C^{\infty}(M)$ be a [$\Psi_{H}$DO]{} of order $m>0$ whose principal symbol is invertible and is positive. Let $k_{P^{-\frac{d+2}{m}}}(x,y)$ be the Schwartz kernel of $P^{-\frac{d+2}{m}}$. Then near the diagonal we have $$k_{P^{-\frac{d+2}{m}}}(x,y)\sim -c_{P^{-\frac{d+2}{m}}}(x)\log d_{H}(x,y).
\label{eq:Metric.metric-estimate}$$ In particular $k_{P^{-\frac{d+2}{m}}}(x,y)$ is $>0$ near the diagonal.
It is enough to proceed in an open set of $H$-framed local coordinates $U\subset {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$. For $x \in U$ let $\psi_{x}$ be the affine change of variables to the privileged coordinates at $x$. Since by Proposition \[prop:Metric.positivity-cP\] we have $c_{P^{-\frac{d+2}{m}}}(x)>0$, using Proposition \[thm:NCR.log-singularity\] we see that near the diagonal we have $ k_{P^{-\frac{d+2}{m}}}(x,y) \sim -c_{P^{-\frac{d+2}{m}}}(x)\log\|\psi_{x}(y)\|$. Incidentally, we see that $k_{P^{-\frac{d+2}{m}}}(x,y)$ is positive near the diagonal.
On the other hand, since $H$ has codimension one our definition of the privileged coordinates agrees with that of [@Be:TSSRG]. Therefore, it follows from [@Be:TSSRG Thm. 7.34] that the ratio $\frac{d_{H}(x,y)}{\|\psi_{x}(y)\|}$ remains bounded in $(0, \infty)$ near the diagonal, that is, we have $\log d_{H}(x,y)\sim \log \|\psi_{x}(y)\|$. It then follows that near the diagonal we have $ k_{P^{-\frac{d+2}{m}}}(x,y)\sim -c_{P^{-\frac{d+2}{m}}}(x)\log d_{H}(x,y)$. The theorem is thus proved.
It remains now to prove Proposition \[prop:Metric.positivity-cP\]. To this end recall that for an operator $Q\in {\ensuremath{\Psi_{H}}}^{l}(M,{\ensuremath{\mathcal{E}}})$, $l\in {\ensuremath{\mathbb{C}}}$, the model operator $Q^{a}$ at a given point $a\in M$ is defined as the left-invariant [$\Psi_{H}$DO]{} on ${\ensuremath{\mathcal{S}}}_{0}(G_{a}M,{\ensuremath{\mathcal{E}}})$ with symbol $q^{a}(\xi)=\sigma_{l}(Q)(a,\xi)$. Bearing this in mind we have:
\[lem:Metric.cP-Heisenberg-coordinates\] Let $Q\in {\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}})$ and let $Q^{a}$ be its model operator at a point $a \in M$.
1\) We have $c_{Q^{a}}(x)=c_{Q^{a}}dx$, where $c_{Q^{a}}$ is a constant and $dx$ denotes the Haar measure of $G_{a}M$.
2\) In Heisenberg coordinates centered at $a$ we have $c_{Q}(0)=c_{Q^{a}}$.
Let $X_{0},X_{1},\ldots,X_{d}$ be an $H$-frame near $a$. Since $G_{a}M$ has underlying set $(T_{a}M/H_{a})\oplus H_{a}$ the vectors $X_{0}(a),\ldots,X_{d}(a)$ define global coordinates for $G_{a}M$, so that we can identify it with ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ equipped with the group law (\[eq:Heisenberg.group-law-tangent-group-coordinates\]). In these coordinates set $q^{a}(\xi):=\sigma_{-(d+2)}(Q)(a,\xi)$. Then (\[eq:PsiHDO.PsiDO-convolution\]) tells us that $Q^{a}$ corresponds to the operator $q^{a}(-iX^{a})$ acting on ${\ensuremath{\mathcal{S}}}_{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, where $X_{0}^{a},\ldots,X_{d}^{a}$ is the left-invariant tangent frame coming from the model vector fields at $a$ of $X_{0},\ldots,X_{d}$.
Notice that the left-invariance of the frame $X_{0}^{a},\ldots,X_{d}^{a}$ implies that, with respect to this frame, the affine change of variables to the privileged coordinates centered at any given point $x\in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ is just $\psi^{a}_{x}(y)=y.x^{-1}$. In view of (\[eq:Heisenberg.group-law-tangent-group-coordinates\]) this implies that $|{\psi_{x}^{a}}'|=1$. Therefore, from (\[eq:NCR.formula-cP\]) we get $$c_{Q^{a}}(x)=(2\pi)^{-(d+1)}\int_{\|\xi\|=1}q^{a}(\xi)\iota_{E}d\xi.
\label{eq:Metric.cQa}$$ Since the Haar measure of $G_{a}M$ corresponds to the Lebesgue measure of ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ this proves the 1st part of the lemma.
Next, by Definition \[def:Heisenberg.principal-symbol\] in Heisenberg coordinates centered at $a$ the principal symbol $\sigma_{-(d+2)}(Q)(x,\xi)$ agrees at $x=0$ with the principal symbol $q_{-(d+2)}(x,\xi)$ of $Q$ in the sense of (\[eq:Heisenberg.asymptotic-expansion-symbols\]), so we have $q^{a}(\xi)=q_{-(d+2)}(0,\xi)$. Furthermore, as we already are in Heisenberg coordinates, hence in privileged coordinates, we see that, with respect to the $H$-frame $X_{0},\ldots,X_{d}$, the affine change of variables $\psi_{0}$ to the privileged coordinates centered at the origin is just the identity. Therefore, by using (\[eq:NCR.formula-cP\]) and (\[eq:Metric.cQa\]) we see that $c_{Q}(0)$ is equal to $$(2\pi)^{-(d+1)}\int_{\|\xi\|=1}q(0,\xi)\iota_{E}d\xi=(2\pi)^{-(d+1)}\int_{\|\xi\|=1}q^{a}(\xi)\iota_{E}d\xi=c_{Q^{a}}.
$$ The 2nd part of the lemma is thus proved.
We are now ready to prove Proposition \[prop:Metric.positivity-cP\].
For sake of simplicity we may assume that ${\ensuremath{\mathcal{E}}}$ is the trivial line bundle, since in the general case the proof follows along similar lines. Moreover, for any $a \in M$ by Lemma \[lem:Metric.cP-Heisenberg-coordinates\] in Heisenberg coordinates centered at $a$ we have $c_{P^{-\frac{d+2}{m}}}(0)=c_{(P^{-\frac{d+2}{m}})^{a}}$. Therefore, it is enough to prove that $c_{(P^{-\frac{d+2}{m}})^{a}}$ is $>0$ for any $a\in M$.
Let $a \in M$ and let $X_{0},\ldots,X_{d}$ be a $H$-frame near $a$. By using the coordinates provided by the vectors $X_{0}(a),\ldots,X_{d}(a)$ we can identify $G_{a}M$ with ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ equipped with the group law (\[eq:Heisenberg.group-law-tangent-group-coordinates\]). We then let $H^{a}\subset T{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be the hyperplane bundle spanned by the model vector fields $X_{1}^{a},\ldots,X_{d}^{a}$ seen as left-invariant vector fields on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$. In addition, for any $z\in {\ensuremath{\mathbb{C}}}$ we let $p(z)(\xi):=\sigma_{z}(P^{\frac{z}{m}})(a,\xi)$ be the principal symbol at $a$ of $P^{\frac{z}{m}}$, seen as a homogeneous symbol on ${{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0}$. Notice that by [@Po:MAMS1 Rem. 4.2.2] the family $(p(z))_{z \in {\ensuremath{\mathbb{C}}}}$ is a holomorphic family with values in $C^{\infty}({{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0})$.
Let $\chi\in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\chi(\xi)=1$ near $\xi=0$. For any $z \in {\ensuremath{\mathbb{C}}}$ and for any pair $\varphi$ and $\psi$ of functions in $C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ we set $$\tilde{p}(z)(\xi):=(1-\chi)p(z) \qquad \text{and} \qquad P_{\varphi,\psi}(z):=\varphi \tilde{p}(z)(-iX^{a}) \psi.
$$ Then $(\tilde{p}(z))_{z\in {\ensuremath{\mathbb{C}}}}$ and $(P_{\varphi,\psi}(z))_{z\in {\ensuremath{\mathbb{C}}}}$ are holomorphic families with values in $S^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and $\Psi^{*}_{H^{a}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ respectively.
Notice that $P_{\varphi,\psi}(z)$ has order $z$ and the support of its Schwartz kernel is contained in the fixed compact set ${{\operatorname{supp}}}\varphi \times {{\operatorname{supp}}}\psi$, so by Proposition \[prop:Heisenberg.L2-boundedness\] the operator $P_{\varphi,\psi}(z)$ is bounded on $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ for $\Re z\leq 0$. In fact, by arguing as in the proof of [@Po:MAMS1 Prop. 4.6.2] we can show that $(P_{\varphi,\psi}(z))_{\Re z\leq 0}$ actually is a holomorphic family with values in ${\ensuremath{\mathcal{L}}}(L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}))$.
Moreover, by [@Po:MAMS1 Prop. 4.6.2] the family $(P_{\varphi,\psi}(\overline{z})^{*})_{z\in {\ensuremath{\mathbb{C}}}}$ is a holomorphic family with values in $\Psi^{*}_{H^{a}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that ${{{\operatorname{ord}}}}P_{\varphi,\psi}(\overline{z})^{*}=z$ for any $z \in {\ensuremath{\mathbb{C}}}$. Therefore $(P_{\varphi,\psi}(z)P_{\varphi,\psi}(\overline{z})^{*})_{\Re z <-\frac{d+2}{2}}$ is a holomorphic family with values in $\Psi_{H^{a}}^{\text{int}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. For any $z \in {\ensuremath{\mathbb{C}}}$ let $k(z)(x,y)$ denote the Schwartz kernel of $P_{\varphi,\psi}(z)P_{\varphi,\psi}(\overline{z})^{*}$. Then the support of $k(z)(x,y)$ is contained in the fixed compact set ${\operatorname{supp}}\varphi \times
{\operatorname{supp}}\varphi$, and by using \[eq:NCR.kP(x,x)\] we can check that $(k(z)(x,y))_{\Re z <-\frac{d+2}{2}}$ is a holomorphic family of continuous Schwartz kernels. It then follows that $(P_{\varphi,\psi}(z)P_{\varphi,\psi}(\overline{z})^{*})_{\Re z <-\frac{d+2}{2}}$ is a holomorphic family with values in the Banach ideal ${\ensuremath{\mathcal{L}}}^{1}(L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}))$ of trace-class operators on $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
Let us now choose $\psi$ so that $\psi =1$ near ${{\operatorname{supp}}}\varphi$. For any $t \in {\ensuremath{\mathbb{R}}}$ the operator $P^{\frac{t}{m}}$ is selfadjoint, so by Proposition \[prop:Heisenberg.operations-principal-symbols\] its principal symbol is real-valued. Therefore, by Proposition \[prop:Heisenberg.operations-principal-symbols\] the principal symbol of $(P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*})$ is equal to $$[ \varphi p(t)\psi]*^{a}[\overline{\psi}\overline{p(t)}\overline{\varphi}] =|\varphi|^{2}p(t)*p(t)=|\varphi|^{2}p(2t).
\label{eq:Metric.principal-symbol}$$ In particular, the principal symbols of $P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}$ and $P_{|\varphi|^{2},\psi}(-(d+2))$ agree. By combining this with Lemma \[lem:Metric.cP-Heisenberg-coordinates\] we see that $$c_{P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}}(x)=
c_{P_{|\varphi|^{2},\psi}(-(d+2))}(x)=|\varphi(x)|^{2}c_{(P^{-\frac{(d+2)}{m}})^{a}}.
\label{eq:Metric.cPphipsi-cPa}$$ It then follows from Proposition \[thm:NCR.TR.local\] that we have: $$\begin{gathered}
c_{(P^{-\frac{(d+2)}{m}})^{a}}(\int |\varphi(x)|^{2}dx) =\int c_{P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}}(x) dx \\
=\lim_{t \rightarrow \frac{-(d+2)}{2}}\frac{-1}{t+\frac{d+2}{2}}\int t_{P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}}(x) dx\\ =
\lim_{t \rightarrow [\frac{-(d+2)}{2}]^{-}}\frac{-1}{t+\frac{d+2}{2}} {\ensuremath{{\operatorname{Trace}}}}[P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}] \geq 0.
\end{gathered}$$ Thus, by choosing $\varphi$ so that $\int |\varphi|^{2}>0$ we obtain that $c_{(P^{-\frac{(d+2)}{m}})^{a}}$ is $\geq 0$.
Assume now that $c_{(P^{-\frac{(d+2)}{m}})^{a}}$ vanishes, and let us show that this assumption leads us to a contradiction. Observe that $(P_{\varphi,\psi}(\frac{z-(d+2)}{2})P_{\varphi,\psi}(\frac{\overline{z-(d+2)}}{2})^{*})_{z\in {\ensuremath{\mathbb{C}}}}$ is holomorphic gauging for $P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}$. Moreover, by (\[eq:Metric.cPphipsi-cPa\]) we have $c_{P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}}(x)=|\varphi(x)|^{2}c_{(P^{-\frac{(d+2)}{m}})^{a}}=0$. Therefore, it follows from Proposition \[prop:Heisenberg.operations-principal-symbols\] that ${\ensuremath{{\operatorname{TR}}}}P_{\varphi,\psi}(z)P_{\varphi,\psi}(\overline{z})^{*}$ is analytic near $z=-\frac{d+2}{2}$. In particular, the limit $\lim_{t\rightarrow {\frac{-(d+2)}{2}}^{-}} {\ensuremath{{\operatorname{Trace}}}}P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}$ exists and is finite.
Let $(\xi_{k})_{k\geq 0}$ be an orthonormal basis of $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and let $N\in {\ensuremath{\mathbb{N}}}$. For any $t<-\frac{d+2}{2}$ the operator $P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}$ is trace-class and we have $$\sum_{0\leq k \leq N}{\ensuremath{\langle P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}\xi_{k} , \xi_{k} \rangle}} \leq {\ensuremath{{\operatorname{Trace}}}}[P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}].
\label{eq:Metric.partial-trace}$$ As $t \rightarrow {-\frac{d+2}{2}}^{-}$ the operator $P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}$ converges to $P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}$ in ${\ensuremath{\mathcal{L}}}(L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Therefore, letting $t$ go to ${-\frac{d+2}{2}}^{-}$ in (\[eq:Metric.partial-trace\]) shows that, for any $N\in {\ensuremath{\mathbb{N}}}$, we have $$\sum_{0\leq k \leq N}{\ensuremath{\langle P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}\xi_{k} , \xi_{k} \rangle}}
\leq\lim_{t\rightarrow [\frac{-(d+2)}{2}]^{-}} {\ensuremath{{\operatorname{Trace}}}}[P_{\varphi,\psi}(t)P_{\varphi,\psi}(t)^{*}] <\infty.$$ This proves that $P_{\varphi,\psi}(-\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})^{*}$ is a trace-class operator. Incidentally, we see that $P_{\varphi,\psi}(-\frac{d+2}{2})$ is a Hilbert-Schmidt operator on $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
Next, let $Q \in \Psi_{H^{a}}^{-\frac{d+2}{2}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and let $q(x,\xi)\in S_{-\frac{d+2}{2}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be the principal symbol of $Q$. The principal symbol of $\varphi Q\psi$ is $\varphi(x) q(x,\xi)$. Moreover, since for any $z \in {\ensuremath{\mathbb{C}}}$ we have $p(z)*p(-z)=p(0)=1$, we see that the principal symbol of $\psi Q\psi P_{\psi,\psi}(\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})$ is equal to $$(\psi q \psi)*(\psi p(\frac{d+2}{2})\psi)*(\varphi p(-\frac{d+2}{2})\psi)= \varphi q*p(\frac{d+2}{2})*p(-\frac{d+2}{2})=\varphi q.
$$ Thus $\varphi Q\psi$ and $\psi Q\psi P_{\psi,\psi}(\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2})$ have the same principal symbol. Since they both have a compactly supported Schwartz kernel it follows that we can write $$\varphi Q\psi = \psi Q\psi P_{\psi,\psi}(\frac{d+2}{2})P_{\varphi,\psi}(-\frac{d+2}{2}) +Q_{1},
\label{eq:Metric.Hilbert-Schmidt-decomposition}$$ for some operator $Q_{1} \in \Psi_{H^{a}}^{-\frac{d+2}{2}-1}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ with a compactly supported Schwartz kernel. Observe that:
- the operator $ \psi Q\psi P_{\psi,\psi}(\frac{d+2}{2})$ is a zero’th order [$\Psi_{H}$DO]{} with a compactly supported Schwartz kernel, so this is a bounded operator on $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$;
- as above-mentioned $P_{\varphi,\psi}(-\frac{d+2}{2})$ is a Hilbert-Schmidt operator;
- as $Q_{1}^{*}Q_{1}$ belongs to $\Psi_{H,c}^{\text{int}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ this is a trace-class operator, and so $Q_{1}$ is a Hilbert-Schmidt operator.
Since the space ${\ensuremath{\mathcal{L}}}^{2}(L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}))$ of Hilbert-Schmidt operators is a two-sided ideal, it follows from (\[eq:Metric.Hilbert-Schmidt-decomposition\]) and the above observations that $\varphi Q\psi $ is a Hilbert-Schmidt operator. In particular, by [@GK:ITLNSO p. 109] the Schwartz kernel of $\varphi Q\psi $ lies in $L^{2}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$.
We now get a contradiction as follows. Let $Q\in \Psi_{H^{a}}^{-\frac{d+2}{2}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ have Schwartz kernel, $$k_{Q}(x,y)=|{\psi_{x}^{a}}'| \|\psi_{x}^{a}(y)\|^{-\frac{d+2}{2}},
$$ where $\psi_{x}^{a}$ is the change to the privileged coordinates at $a$ with respect to the $H^{a}$-frame $X_{0}^{a},\ldots,X_{d}^{a}$ (this makes sense since $\|y\|^{-\frac{d+2}{2}}$ is in ${\ensuremath{\mathcal{K}}}_{-\frac{d+2}{2}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}\times
{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$). As alluded to in the proof of Lemma \[lem:Metric.cP-Heisenberg-coordinates\] the left-invariance of the frame $X_{0}^{a},\ldots,X_{d}^{a}$ implies that $\psi_{x}^{a}(y)=y.x^{-1}$. Therefore, the Schwartz kernel of $\varphi Q\psi $ is equal to $$k_{\varphi Q\psi}(x,y)=\varphi(x) \|y.x^{-1}\|^{-\frac{d+2}{2}} \psi(y).
$$ However, this is not an $L^{2}$-integrable kernel, since $\|y.x^{-1}\|^{-(d+2)}$ is not locally integrable near the diagonal.
We have obtained a contradiction, so $c_{(P^{-\frac{d+2}{m}})^{a}}$ cannot be zero. Since we know that $c_{(P^{-\frac{d+2}{m}})^{a}}$ is $\geq 0$, we see that $c_{(P^{-\frac{d+2}{m}})^{a}}$ is $>0$. The proof of Proposition \[prop:Metric.positivity-cP\] is thus complete.
The Dixmier trace of [$\Psi_{H}$DOs]{} {#sec:Dixmier}
--------------------------------------
The quantized calculus of Connes [@Co:NCG] allows us to translate into the language of quantum mechanics the main tools of the classical infinitesimal calculus. In particular, an important device is the Dixmier trace ([@Di:ETNN], [@CM:LIFNCG Appendix A]), which is the noncommutative analogue of the standard integral. We shall now show that, as in the case of classical [$\Psi$DOs]{} (see [@Co:AFNG]), the noncommutative residue allows us to extend the Dixmier trace to the whole algebra of integer order [$\Psi_{H}$DOs]{}.
Let us first recall the main facts about Connes’ quantized calculus and the Dixmier trace. The general setting is that of bounded operators on a separable Hilbert space ${\ensuremath{\mathcal{H}}}$. Extending the well known correspondence in quantum mechanics between variables and operators, we get the following dictionary between classical notions of infinitesimal calculus and their operator theoretic analogues.
Classical Quantum
----------------------------------- ------------------------------------------------------
Real variable Selfadjoint operator on ${\ensuremath{\mathcal{H}}}$
Complex variable Operator on ${\ensuremath{\mathcal{H}}}$
Infinitesimal variable Compact operator on ${\ensuremath{\mathcal{H}}}$
Infinitesimal of order $\alpha>0$ Compact operator $T$ such that
$\mu_{n}(T)={\operatorname{O}}(n^{-\alpha})$
The third line can be explained as follows. We cannot say that an operator $T$ is an infinitesimal by requiring that $\|T\| \leq \epsilon$ for any $\epsilon >0$, for this would give $T=0$. Nevertheless, we can relax this condition by requiring that for any $\epsilon>0$ we have $\|T\|<\epsilon$ outside a finite dimensional space. This means that $T$ is in the closure of finite rank operators, i.e., $T$ belongs to the ideal ${\ensuremath{\mathcal{K}}}$ of compact operators on ${\ensuremath{\mathcal{H}}}$.
In the last line $\mu_{n}(T)$ denotes the $(n+1)$’th characteristic value of $T$, i.e., the $(n+1)$’th eigenvalue of $|T|=(T^{*}T)^{\frac12}$. In particular, by the min-max principle we have $$\begin{aligned}
\mu_{n}(T) & = \inf\{ \|T_{|E^{\perp}}\|; \dim E=n\} \nonumber \\
& = {\operatorname{dist}}(T,\mathcal{R}_{n}), \qquad \mathcal{R}_{n}=\{\text{operators of rank}\leq n\},
\label{eq:NCG.min-max}\end{aligned}$$ so the decay of $\mu_{n}(T)$ controls the accuracy of the approximation of $T$ by finite rank operators. Moreover, by using (\[eq:NCG.min-max\]) we can also check that, for $S$, $T$ in ${\ensuremath{\mathcal{K}}}$ and $A$, $B$ in ${\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})$, we have $$\mu_{n+n'}(T+S)\leq \mu_{n}(T)+\mu_{n'}(S) \qquad \text{and} \qquad \mu_{n}(ATB)\leq \|A\| \mu_{n}(T) \|B\|,
\label{eq:NCG.inequalities-2sided-ideals}$$ This implies that the set of infinitesimal operators of order $\alpha$ is a two-sided ideal of ${\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})$.
Next, in this setting the analogue of the integral is provided by the Dixmier trace ([@Di:ETNN], [@CM:LIFNCG Appendix A]). The latter arises in the study of the logarithmic divergency of the partial traces, $${\ensuremath{{\operatorname{Trace}}}}_{N}(T) = \sum_{n=0}^{N- 1} \mu_{n}(T), \qquad T \in{\ensuremath{\mathcal{K}}}, \quad T\geq 0.$$ The domain of the Dixmier trace is the Schatten ideal, $${\ensuremath{\mathcal{L}^{(1,\infty)}}}=\{T\in {\ensuremath{\mathcal{K}}}; \|T\|_{1,\infty} :=\sup \frac{\sigma_{N}(T)}{\log N} < \infty\}.
$$
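For instance (a standard example, independent of the Heisenberg setting), if $T$ is the positive diagonal operator on $\ell^{2}({\ensuremath{\mathbb{N}}})$ with $\mu_{n}(T)=(n+1)^{-1}$, then $${\ensuremath{{\operatorname{Trace}}}}_{N}(T)=\sum_{n=0}^{N-1}\frac{1}{n+1}=\log N +{\operatorname{O}}(1),
$$ so that $T$ belongs to ${\ensuremath{\mathcal{L}^{(1,\infty)}}}$ although it is not trace-class.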
We extend the definition of $ {\ensuremath{{\operatorname{Trace}}}}_{N}(T)$ by means of the interpolation formula, $$\sigma_{\lambda}(T) =\inf \{\|x\|_{1}+ \lambda \|y\| ; x+y=T\}, \qquad \lambda>0,$$ where $\|x\|_{1}:={\ensuremath{{\operatorname{Trace}}}}|x|$ denotes the Banach norm of the ideal ${\ensuremath{\mathcal{L}}}^{1}$ of trace-class operators. For any integer $N$ we have $\sigma_{N}(T)={\ensuremath{{\operatorname{Trace}}}}_{N}(T)$. In addition, the Cesàro mean of $\sigma_{\lambda}(T)$ with respect to the Haar measure $\frac{d\lambda}{\lambda}$ of ${\ensuremath{\mathbb{R}}}_{+}^{*}$ is $$\tau_{\Lambda}(T) = \frac{1}{\log\Lambda}\int_{e}^\Lambda \frac{\sigma_{\lambda}(T)}
{\log \lambda}\frac{d\lambda}{\lambda}, \qquad \Lambda\geq e.$$
Let ${\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+}=\{T \in {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}}); \ T\geq 0\}$. Then by [@CM:LIFNCG Appendix A] for $T_{1}$ and $T_{2}$ in ${\ensuremath{\mathcal{L}^{(1,\infty)}}}\cap {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+}$ we have $$|\tau_{\Lambda}(T_{1}+T_{2}) -\tau_{\Lambda}(T_{1}) -\tau_{\Lambda}(T_{2}) | \leq
3({\ensuremath{\|{T_{1}}\|_{(1,\infty)}}}+{\ensuremath{\|{T_{2}}\|_{(1,\infty)}}}) \frac{\log\log\Lambda}{\log\Lambda}.$$ Therefore, the functionals $\tau_{\Lambda}$, $\Lambda \geq e$, give rise to an additive homogeneous map, $$\tau: {\ensuremath{\mathcal{L}}}^{(1,\infty)}\cap {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+} \longrightarrow C_{b}[e,\infty)/C_{0}[e,\infty).
$$ It follows from this that for any state $\omega$ on the $C^{*}$-algebra $C_{b}[e,\infty)/C_{0}[e,\infty)$, i.e., for any positive linear form such that $\omega(1)=1$, there is a unique linear functional ${\ensuremath{{\operatorname{Tr}}_{\omega}}}:{\ensuremath{\mathcal{L}}}^{(1,\infty)}\rightarrow {\ensuremath{\mathbb{C}}}$ such that $${\ensuremath{{\operatorname{Tr}}_{\omega}}}T = \omega(\tau(T)) \qquad \forall T \in {\ensuremath{\mathcal{L}}}^{(1,\infty)}\cap {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+}.
$$
We gather the main properties of this functional in the following.
\[prop:NCG.properties-Dixmier-trace\] For any state $\omega$ on $C_{b}[e,\infty)/C_{0}[e,\infty)$ the Dixmier trace ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ has the following properties:
1\) If $T$ is trace-class, then ${\ensuremath{{\operatorname{Tr}}_{\omega}}}T=0$.
2\) We have ${\ensuremath{{\operatorname{Tr}}_{\omega}}}(T)\geq 0$ for any $T\in {\ensuremath{\mathcal{L}^{(1,\infty)}}}\cap {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+}$.
3\) If $S:{\ensuremath{\mathcal{H}}}'\rightarrow {\ensuremath{\mathcal{H}}}$ is a topological isomorphism, then we have ${\ensuremath{{\operatorname{Tr}}}}_{\omega,{\ensuremath{\mathcal{H}}}'}(T)={\ensuremath{{\operatorname{Tr}}}}_{\omega,{\ensuremath{\mathcal{H}}}}(STS^{-1})$ for any $T\in {\ensuremath{\mathcal{L}^{(1,\infty)}}}({\ensuremath{\mathcal{H}}}')$. In particular, ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ does not depend on choice of the inner product on ${\ensuremath{\mathcal{H}}}$.
4\) We have ${\ensuremath{{\operatorname{Tr}}_{\omega}}}AT={\ensuremath{{\operatorname{Tr}}_{\omega}}}TA$ for any $A \in {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})$ and any $T\in {\ensuremath{\mathcal{L}^{(1,\infty)}}}$, that is, ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ is a trace on the ideal ${\ensuremath{\mathcal{L}^{(1,\infty)}}}$.
The functional ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ is called the *Dixmier trace* associated to $\omega$. We also say that an operator $T \in {\ensuremath{\mathcal{L}}}^{(1,\infty)}$ is *measurable* when the value of ${\ensuremath{{\operatorname{Tr}}_{\omega}}}T$ is independent of the choice of the state $\omega$. We then call *the Dixmier trace* of $T$ the common value, $${\ensuremath{-\hspace{-2,4ex}\int}}T :={\ensuremath{{\operatorname{Tr}}_{\omega}}}T.
$$ In addition, we let ${\ensuremath{\mathcal{M}}}$ denote the space of measurable operators. For instance, if $T\in {\ensuremath{\mathcal{K}}}\cap {\ensuremath{\mathcal{L}}}({\ensuremath{\mathcal{H}}})_{+}$ is such that $\lim_{N\rightarrow \infty}\frac{1}{\log N} \sum_{n=0}^{N-1} \mu_{n}(T) = L$, then it can be shown that $T$ is measurable and we have ${\ensuremath{-\hspace{-2,4ex}\int}}T=L$.
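For the diagonal operator with $\mu_{n}(T)=(n+1)^{-1}$ considered above this criterion applies directly, since $$\frac{1}{\log N}\sum_{n=0}^{N-1} \mu_{n}(T)=\frac{\log N+{\operatorname{O}}(1)}{\log N}\longrightarrow 1,
$$ so that this operator is measurable with ${\ensuremath{-\hspace{-2,4ex}\int}}T=1$.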
An important example of measurable operator is due to Connes [@Co:AFNG]. Let ${\ensuremath{\mathcal{H}}}$ be the Hilbert space $L^{2}(M,{\ensuremath{\mathcal{E}}})$ of $L^{2}$-sections of a Hermitian vector bundle over a compact manifold $M$ equipped with a smooth positive density and let $P:L^{2}(M,{\ensuremath{\mathcal{E}}})\rightarrow
L^{2}(M,{\ensuremath{\mathcal{E}}})$ be a classical [$\Psi$DO]{} of order $-\dim M$. Then $P$ is measurable for the Dixmier trace and we have $${\ensuremath{-\hspace{-2,4ex}\int}}P =\frac{1}{\dim M} {\ensuremath{{\operatorname{Res}}}}P,
\label{eq:NCG.Trw-NCR-PsiDOs}$$ where ${\ensuremath{{\operatorname{Res}}}}P$ denotes the noncommutative residue trace for classical [$\Psi$DOs]{} of Wodzicki ([@Wo:LISA], [@Wo:NCRF]) and Guillemin [@Gu:NPWF]. This allows us to extend the Dixmier trace to all [$\Psi$DOs]{} of integer order, hence to integrate any such [$\Psi$DO]{} even though it is not an infinitesimal of order $\leq 1$.
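As a sanity check of (\[eq:NCG.Trw-NCR-PsiDOs\]) in the simplest case (a standard computation, included only for illustration), take $M={\ensuremath{\mathbb{R}}}^{n}/(2\pi{\ensuremath{\mathbb{Z}}})^{n}$ the flat torus and $P=(1+\Delta)^{-\frac{n}{2}}$ with $\Delta$ the flat Laplacian. The eigenvalues of $P$ are $(1+|k|^{2})^{-\frac{n}{2}}$, $k\in {\ensuremath{\mathbb{Z}}}^{n}$, so that $\sum_{j<N}\mu_{j}(P)\sim \omega_{n}\log N$, where $\omega_{n}$ is the volume of the unit ball of ${\ensuremath{\mathbb{R}}}^{n}$; hence ${\ensuremath{-\hspace{-2,4ex}\int}}P=\omega_{n}$. On the other hand, $${\ensuremath{{\operatorname{Res}}}}P=(2\pi)^{-n}\int_{M}\int_{S^{n-1}}|\xi|^{-n}\,d\sigma(\xi)\,dx=(2\pi)^{-n}(2\pi)^{n}|S^{n-1}|=n\,\omega_{n},
$$ in agreement with ${\ensuremath{-\hspace{-2,4ex}\int}}P=\frac{1}{\dim M}{\ensuremath{{\operatorname{Res}}}}P$.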
From now on we let $(M^{d+1},H)$ be a compact Heisenberg manifold equipped with a smooth positive density and we let ${\ensuremath{\mathcal{E}}}$ be a Hermitian vector bundle over $M$. In addition, we recall that by Proposition \[prop:Heisenberg.L2-boundedness\] any $P\in {\ensuremath{\Psi_{H}}}^{m}(M,{\ensuremath{\mathcal{E}}})$ with $\Re m\leq 0$ extends to a bounded operator from $L^{2}(M,{\ensuremath{\mathcal{E}}})$ to itself and this operator is compact if we further have $\Re m<0$.
Let $P:C^{\infty}(M,{\ensuremath{\mathcal{E}}}) \rightarrow C^{\infty}(M,{\ensuremath{\mathcal{E}}})$ be a positive [$\Psi_{H}$DO]{} with an invertible principal symbol of order $m>0$, and for $k=0,1,..$ let $\lambda_{k}(P)$ denote the $(k+1)$’ th eigenvalue of $P$ counted with multiplicity. By Proposition \[prop:Zeta.Weyl-asymptotics\] when $k \rightarrow \infty$ we have $$\lambda_{k}(P) \sim (\frac{k}{\nu_{0}(P)})^{\frac{m}{d+2}}, \qquad \nu_{0}(P)= \frac{1}{d+2}{\ensuremath{{\operatorname{Res}}}}P^{-\frac{d+2}{m}}.
$$ It follows that for any $\sigma \in {\ensuremath{\mathbb{C}}}$ with $\Re \sigma <0$ the operator $P^{\sigma}$ is an infinitesimal operator of order $\frac{m |\Re \sigma|}{d+2}$. Furthermore, for $\sigma=-\frac{d+2}{m}$ using (\[eq:NCG.inequalities-2sided-ideals\]) we see that $P^{-\frac{d+2}{m}}$ is measurable and we have $${\ensuremath{-\hspace{-2,4ex}\int}}P^{-\frac{d+2}{m}}=\nu_{0}(P)=\frac{1}{d+2}{\ensuremath{{\operatorname{Res}}}}P^{-\frac{d+2}{m}}.
\label{eq:NCG.Dixmier-trace-NCR.hypoelliptic}$$
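Indeed, by the Weyl asymptotics of Proposition \[prop:Zeta.Weyl-asymptotics\] the characteristic values of $P^{-\frac{d+2}{m}}$ satisfy $\mu_{k}(P^{-\frac{d+2}{m}})=\lambda_{k}(P)^{-\frac{d+2}{m}}\sim \nu_{0}(P)k^{-1}$, up to the finitely many terms coming from $\ker P$, so that $$\frac{1}{\log N}\sum_{k=0}^{N-1}\mu_{k}\bigl(P^{-\frac{d+2}{m}}\bigr)\longrightarrow \nu_{0}(P) \qquad \text{as } N\rightarrow \infty,
$$ and the criterion recalled after Proposition \[prop:NCG.properties-Dixmier-trace\] gives (\[eq:NCG.Dixmier-trace-NCR.hypoelliptic\]).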
These results are actually true for general [$\Psi_{H}$DOs]{}, for we have:
\[thm:NCG.Dixmier\] Let $P: L^{2}(M,{\ensuremath{\mathcal{E}}}) \rightarrow L^{2}(M,{\ensuremath{\mathcal{E}}})$ be a [$\Psi_{H}$DO]{} of order $m$ with $\Re m<0$.
1\) $P$ is an infinitesimal operator of order $(\dim M+1)^{-1}|\Re m|$.
2\) If ${{{\operatorname{ord}}}}P=-(\dim M+1)$, then $P$ is measurable and we have $${\ensuremath{-\hspace{-2,4ex}\int}}P = \frac1{\dim M+1} {\ensuremath{{\operatorname{Res}}}}P.
\label{eq:NCG.bint-NCR}$$
First, let $P_{0}\in {\ensuremath{\Psi_{H}}}^{1}(M,{\ensuremath{\mathcal{E}}})$ be a positive and invertible [$\Psi_{H}$DO]{} with an invertible principal symbol (e.g. $P_{0}=(1+\Delta^{*}\Delta)^{\frac{1}{4}}$, where $\Delta$ is a hypoelliptic sublaplacian). Then $PP_{0}^{-m}$ is a zeroth order [$\Psi_{H}$DO]{}. By Proposition \[prop:Heisenberg.L2-boundedness\] any zeroth order [$\Psi_{H}$DO]{} is bounded on $L^{2}(M,{\ensuremath{\mathcal{E}}})$ and as above-mentioned $P^{m}_{0}$ is an infinitesimal of order $\alpha:=(\dim M+1)^{-1}|\Re m|$. Since we have $P=PP_{0}^{-m}.P_{0}^{m}$ we see that $P$ is the product of a bounded operator and of an infinitesimal operator of order $\alpha$. As (\[eq:NCG.inequalities-2sided-ideals\]) shows that the space of infinitesimal operators of order $\alpha$ is a two-sided ideal, it follows that $P$ is an infinitesimal of order $\alpha$. In particular, if ${{{\operatorname{ord}}}}P=-(d+2)$ then $P$ is an infinitesimal of order $1$, hence is contained in ${\ensuremath{\mathcal{L}}}^{(1,\infty)}$.
Next, let ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ be the Dixmier trace associated to a state $\omega$ on $C_{b}[e,\infty)/C_{0}[e,\infty)$, and let us prove that for any $P \in
{\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}})$ we have ${\ensuremath{{\operatorname{Tr}}_{\omega}}}P=\frac{1}{d+2}{\ensuremath{{\operatorname{Res}}}}P$.
Let $\kappa:U\rightarrow {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ be a $H$-framed chart mapping onto ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ such that there is a trivialization $\tau:{\ensuremath{\mathcal{E}}}_{|U} \rightarrow U\times
{\ensuremath{\mathbb{C}}}^{r}$ of ${\ensuremath{\mathcal{E}}}$ over $U$ (as in the proof of Theorem \[thm:Traces.traces\] we shall call such a chart a *nice $H$-framed chart*). As in Subsection \[sec:traces\] we shall use the subscript $c$ to denote [$\Psi_{H}$DOs]{} with a compactly supported Schwartz kernel (e.g. ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ denote the class of integer order [$\Psi_{H}$DOs]{} on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ whose Schwartz kernels have compact supports). Notice that if $P\in{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}},{\ensuremath{\mathbb{C}}}^{r})$ then the operator $\tau^{*}\kappa^{*}P$ belongs to ${\ensuremath{\Psi_{H}}}^{{\ensuremath{\mathbb{Z}}}}(M,{\ensuremath{\mathcal{E}}})$ and the support of its Schwartz kernel is a compact subset of $U\times U$.
Since $P_{0}$ is a positive [$\Psi_{H}$DO]{} with an invertible principal symbol, Proposition \[prop:Metric.positivity-cP\] tells us that the density ${{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P_{0}^{-(d+2)}}(x)$ is $>0$, so we can write $\kappa_{*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P_{0}^{-(d+2)}}(x)_{|U}]=c_{0}(x)dx$ for some positive function $c_{0}\in C^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Then for any $c \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and any $\psi \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ such that $\psi=1$ near ${{\operatorname{supp}}}c$ we let $$P_{c,\psi}:=(\frac{c\circ \kappa}{c_{0}\circ \kappa})P_{0}^{-(d+2)} (\psi\circ \kappa).
\label{eq:Dixmier.Pcpsi}$$ Notice that $P_{c,\psi}$ belongs to ${\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}})$ and it depends on the choice $\psi$ only modulo operators in ${\ensuremath{\Psi^{-\infty}}}(M,{\ensuremath{\mathcal{E}}})$. Since the latter are trace-class operators and the Dixmier trace ${\ensuremath{{\operatorname{Tr}}}}_{\omega}$ vanishes on such operators (cf. Proposition \[prop:NCG.properties-Dixmier-trace\]), we see that the value of ${\ensuremath{{\operatorname{Tr}}_{\omega}}}P_{c,\psi}$ does not depend on the choice of $\psi$. Therefore, we define a linear functional $L:C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}) \rightarrow {\ensuremath{\mathbb{C}}}$ by assigning to any $c \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ the value $$L(c):={\ensuremath{{\operatorname{Tr}}_{\omega}}}P_{c,\psi},
$$ where $\psi \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is such that $\psi=1$ near ${{\operatorname{supp}}}c$.
On the other hand, let $P\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})$. Then $\tau_{*}P$ belongs to ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathbb{C}}}^{r}):={\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U)\otimes M_{r}({\ensuremath{\mathbb{C}}})$. Set $\tau_{*}P=(P_{ij})$ and define ${{\operatorname{tr}}}P:=\sum P_{ii}$. In addition, for $i,j=1,\ldots,r$ let $E_{ij}\in M_{r}({\ensuremath{\mathbb{C}}})$ be the elementary matrix whose all entries are zero except that on the $i$th row and $j$th column which is equal to $1$. Then we have $$\tau_{*}P=\frac{1}{r}({{\operatorname{tr}}}P)\otimes I_{r} + \sum_{i} P_{ii}\otimes (E_{ii}-\frac{1}{r}I_{r}) +\sum_{i\neq j} P_{ij}\otimes E_{ij}.
$$ Any matrix $A \in M_{r}({\ensuremath{\mathbb{C}}})$ with vanishing trace is contained in the commutator space $[M_{r}({\ensuremath{\mathbb{C}}}),M_{r}({\ensuremath{\mathbb{C}}})]$. Notice also that the space ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U)\otimes [M_{r}({\ensuremath{\mathbb{C}}}),M_{r}({\ensuremath{\mathbb{C}}})]$ is contained in $[{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}(U,{\ensuremath{\mathbb{C}}}^{r}),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathbb{C}}}^{r})]$. Therefore, we see that $$P= \frac{1}{r}({{\operatorname{tr}}}P)\otimes {\operatorname{id}}_{{\ensuremath{\mathcal{E}}}} \mod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}(U,{\ensuremath{\mathcal{E}}}_{|U}),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})].
\label{eq:Dixmier.decomposition-P}$$
Let us write $\kappa_{*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P}(x)]=a_{P}(x)dx$ with $a_{P} \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, and let $\psi \in C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\psi=1$ near ${{\operatorname{supp}}}a_{P}$. Then we have $$\kappa_{*}[c_{{{\operatorname{tr}}}P_{a_{P},\psi}}(x)]=(\frac{a_{P}(x)}{c_{0}(x)})\psi(x)\kappa_{*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P_{0}^{-(d+2)}}(x)]=a_{P}(x)dx=\kappa_{*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}
c_{P}(x)]=\kappa_{*}[c_{{{\operatorname{tr}}}P}(x)].$$ In other words $Q:={{\operatorname{tr}}}P-{{\operatorname{tr}}}P_{a_{P},\psi}$ is an element of ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U)$ such that $c_{Q}(x)=0$. By step (i) of the proof of Lemma \[lem:Traces.sum-commutators.compact\] we can then write $\kappa_{*}Q$ in the form $\kappa_{*}Q=[\chi_{0},Q_{0}]+\ldots+[\chi_{d},Q_{d}]$ for some functions $\chi_{0},\ldots,\chi_{d}$ in $C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and some operators $Q_{0},\ldots,Q_{d}$ in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{{\ensuremath{\mathbb{Z}}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. In fact, it follows from the proof of Lemmas \[lem:Traces.sum-commutators\] and \[lem:Traces.sum-commutators.compact\] that $Q_{0},\ldots,Q_{d}$ can be chosen to have order $\leq -(d+2)$. This ensures that $\kappa_{*}Q$ is contained in $[{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})]$. Thus, $${{\operatorname{tr}}}P= {{\operatorname{tr}}}P_{a_{P},\psi} \mod [{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}(U),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U)].$$ By combining this with (\[eq:Dixmier.decomposition-P\]) we obtain $$P= \frac{1}{r}({{\operatorname{tr}}}P)\otimes {\operatorname{id}}_{{\ensuremath{\mathcal{E}}}} = \frac{1}{r}({{\operatorname{tr}}}P_{a_{P},\psi})\otimes {\operatorname{id}}_{{\ensuremath{\mathcal{E}}}} =P_{a_{P},\psi} \mod
[{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}(U,{\ensuremath{\mathcal{E}}}_{|U}),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})].
$$ Notice that $[{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{0}(U,{\ensuremath{\mathcal{E}}}_{|U}),{\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})]$ is contained in $[{\ensuremath{\Psi_{H}}}^{0}(M,{\ensuremath{\mathcal{E}}}),{\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}})]$, which is itself contained in the commutator space $[{\ensuremath{\mathcal{L}}}(L^{2}(M)),{\ensuremath{\mathcal{L}}}^{(1,\infty)}(M)]$ of ${\ensuremath{\mathcal{L}^{(1,\infty)}}}$. As the Dixmier trace ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ vanishes on the latter space (cf. Proposition \[prop:NCG.properties-Dixmier-trace\]) we deduce that $${\ensuremath{{\operatorname{Tr}}_{\omega}}}P={\ensuremath{{\operatorname{Tr}}_{\omega}}}P_{a_{P},\psi}=L(a_{P}).
\label{eq:NCG.Trw-tau-cP}$$
Now, let $c \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ and set $c_{1}=
\frac{c}{\sqrt{c_{0}}}$. In addition, let $\psi \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ be such that $\psi\geq 0$ and $\psi=1$ near ${{\operatorname{supp}}}c$, and set $\tilde{c}_{1}=c_{1}\circ \kappa$ and $\tilde{\psi}=\psi\circ\kappa$. Notice that with the notation of (\[eq:Dixmier.Pcpsi\]) we have $\overline{\tilde{c}_{1}}\tilde{c}_{1}P_{0}^{-(d+2)}\tilde{\psi}=P_{|c|^{2},\psi}$. Observe also that we have $$(\tilde{c}_{1}P_{0}^{-\frac{d+2}{2}}\tilde{\psi})(\tilde{c}_{1}P_{0}^{-\frac{d+2}{2}}\tilde{\psi})^{*}
=\tilde{c}_{1}P_{0}^{-\frac{d+2}{2}}\tilde{\psi}^{2}P_{0}^{-\frac{d+2}{2}}\overline{\tilde{c}_{1}}
= \tilde{c}_{1}P_{0}^{-(d+2)}\tilde{\psi} \overline{\tilde{c}_{1}} \mod \Psi^{-\infty}(M,{\ensuremath{\mathcal{E}}}).
$$ As alluded to earlier the trace ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ vanishes on smoothing operators, so we get $$\begin{gathered}
{\ensuremath{{\operatorname{Tr}}_{\omega}}}[ (\tilde{c}_{1}P_{0}^{-\frac{d+2}{2}}\tilde{\psi})(\tilde{c}_{1}P_{0}^{-\frac{d+2}{2}}\tilde{\psi})^{*} ]=
{\ensuremath{{\operatorname{Tr}}_{\omega}}}[\tilde{c}_{1}P_{0}^{-(d+2)}\tilde{\psi} \overline{\tilde{c}_{1}}]\\ ={\ensuremath{{\operatorname{Tr}}_{\omega}}}[\overline{\tilde{c}_{1}}\tilde{c}_{1}P_{0}^{-(d+2)}\tilde{\psi} ] = {\ensuremath{{\operatorname{Tr}}_{\omega}}}P_{|c|^{2},\psi}=L(|c|^{2}).
\end{gathered}$$ Since ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ is a positive trace (cf. Proposition \[prop:NCG.properties-Dixmier-trace\]) it follows that we have $L(|c|^{2})\geq 0$ for any $c \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, i.e., $L$ is a positive linear functional on $C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. Since any positive linear functional on $C_{c}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ uniquely extends to a positive Radon measure on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$, this shows that $L$ defines a positive Radon measure.
Next, let $a \in {\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ and let $\phi(x)=x+a$ be the translation by $a$ on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$. Since $\phi'(x)=1$ we see that $\phi$ is a Heisenberg diffeomorphism, so for any $P \in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ the operator $\phi_{*}P$ is in ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{*}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ too. Set $\phi_{\kappa}=\kappa^{-1} \circ \phi \circ \kappa$. Then by (\[eq:Log.functoriality-cP\]) we have $$\kappa_{*}[ {{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}} c_{\phi_{\kappa*}P_{c,\psi}}(x)]=\kappa_{*}\phi_{\kappa*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P_{c,\psi}}(x)]=
\phi_{*}[c(x)dx]=c(\phi^{-1}(x))dx.
This shows that $a_{\phi_{\kappa*}P_{c,\psi}}(x)= c(\phi^{-1}(x))$, so from (\[eq:NCG.Trw-tau-cP\]) we get $${\ensuremath{{\operatorname{Tr}}_{\omega}}}\phi_{\kappa*}P_{c,\psi}=L[c\circ \phi^{-1}].
\label{eq:Dixmier.Lcphi}$$
Let $K$ be a compact subset of ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$. Then $\phi_{\kappa}$ gives rise to a continuous linear isomorphism $\phi_{\kappa*}:L^{2}_{\kappa^{-1}(K)}(M,{\ensuremath{\mathcal{E}}})\rightarrow L^{2}_{\kappa^{-1}(K+a)}(M,{\ensuremath{\mathcal{E}}})$. By combining it with a continuous linear isomorphism $L^{2}_{\kappa^{-1}(K)}(M,{\ensuremath{\mathcal{E}}})^{\perp}\rightarrow L^{2}_{\kappa^{-1}(K+a)}(M,{\ensuremath{\mathcal{E}}})^{\perp}$ we obtain a continuous linear isomorphism $S:L^{2}(M,{\ensuremath{\mathcal{E}}})\rightarrow L^{2}(M,{\ensuremath{\mathcal{E}}})$ which agrees with $\phi_{\kappa*}$ on $L^{2}_{\kappa^{-1}(K)}(M,{\ensuremath{\mathcal{E}}})$. In particular, we have $\phi_{\kappa*}P_{c,\psi} = S P_{c,\psi}S^{-1}$. Therefore, by using Proposition \[prop:NCG.properties-Dixmier-trace\] and (\[eq:Dixmier.Lcphi\]) we see that, for any $c \in C_{K}^{\infty}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, we have $$L[c]={\ensuremath{{\operatorname{Tr}}_{\omega}}}P_{c,\psi}={\ensuremath{{\operatorname{Tr}}_{\omega}}}S P_{c,\psi}S^{-1}= {\ensuremath{{\operatorname{Tr}}_{\omega}}}\phi_{\kappa*}P_{c,\psi}=L[c\circ \phi^{-1}].
$$ This proves that $L$ is translation-invariant. Since any translation-invariant Radon measure on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$ is a constant multiple of the Lebesgue measure, it follows that there exists a constant $\Lambda_{U}\in {\ensuremath{\mathbb{C}}}$ such that, for any $c \in C^{\infty}_{c}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$, we have $$L(c)=\Lambda_{U} \int c(x)dx.
\label{eq:Dixmier.L-Lebesgue}$$
Now, combining (\[eq:NCG.Trw-tau-cP\]) and (\[eq:Dixmier.L-Lebesgue\]) shows that, for any $P\in {\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})$, we have $$\begin{gathered}
{\ensuremath{{\operatorname{Tr}}_{\omega}}}P= \Lambda_{U}\int_{{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}} a_{P}(x)dx= \Lambda_{U}\int_{{\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}} \kappa_{*}[{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P}(x)] \\
=\Lambda_{U}\int_{M}{{\operatorname{tr}}}_{{\ensuremath{\mathcal{E}}}}c_{P}(x)=(2\pi)^{d+1}\Lambda_{U}{\ensuremath{{\operatorname{Res}}}}P.
\label{eq:Dixmier-Trw-Res}\end{gathered}$$ This shows that, for any domain $U$ of a nice $H$-framed chart, on ${\ensuremath{\Psi_{H,{\operatorname{c}}}}}^{-(d+2)}(U,{\ensuremath{\mathcal{E}}}_{|U})$ the Dixmier trace ${\ensuremath{{\operatorname{Tr}}_{\omega}}}$ is a constant multiple of the noncommutative residue. Therefore, if we let $M_{1}, \ldots, M_{N}$ be the connected components of $M$, then by arguing as in the proof of Theorem \[thm:Traces.traces\] we can prove that on each connected component $M_{j}$ there exists a constant $\Lambda_{j}\geq 0$ such that $${\ensuremath{{\operatorname{Tr}}_{\omega}}}P =\Lambda_{j} {\ensuremath{{\operatorname{Res}}}}P \qquad \forall P\in {\ensuremath{\Psi_{H}}}^{-(d+2)}(M_{j},{\ensuremath{\mathcal{E}}}_{|M_{j}}).
$$ In fact, if we take $P=P_{0|_{M_{j}}}^{-(d+2)}$ then from (\[eq:NCG.Dixmier-trace-NCR.hypoelliptic\]) we get $\Lambda_{j}=(d+2)^{-1}$. Thus, $${\ensuremath{{\operatorname{Tr}}_{\omega}}}P =\frac{1}{d+2}{\ensuremath{{\operatorname{Res}}}}P \qquad \forall P \in {\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}}).
$$ This proves that any operator $P \in {\ensuremath{\Psi_{H}}}^{-(d+2)}(M,{\ensuremath{\mathcal{E}}})$ is measurable and its Dixmier trace then is equal to $(d+2)^{-1}{\ensuremath{{\operatorname{Res}}}}P$. The theorem is thus proved.
As a consequence of Theorem \[thm:NCG.Dixmier\] we can extend the Dixmier trace to the whole algebra ${\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$ by letting $${\ensuremath{-\hspace{-2,4ex}\int}}P :=\frac{1}{d+2}{\ensuremath{{\operatorname{Res}}}}P \quad \text{for any $P\in {\ensuremath{\Psi_{H}^{{\ensuremath{\mathbb{Z}}}}}}(M,{\ensuremath{\mathcal{E}}})$}.
$$
In the language of the quantized calculus this means that we can integrate any [$\Psi_{H}$DO]{} of integer order, even though it is not an infinitesimal operator of order $\geq 1$. This property will be used in Section \[sec:CR\] to define lower dimensional volumes in pseudohermitian geometry.
Noncommutative residue and contact geometry {#sec:Contact}
===========================================
In this section we make use of the results of [@Po:MAMS1] to compute the noncommutative residues of some geometric operators on contact manifolds.
Throughout this section we let $(M^{2n+1},H)$ be a compact orientable contact manifold, i.e., $(M^{2n+1},H)$ is a Heisenberg manifold and there exists a contact 1-form $\theta$ on $M$ such that $H=\ker \theta$ (cf. Section \[sec:Heisenberg-calculus\]).
Since $M$ is orientable the hyperplane $H$ admits an almost complex structure $J\in C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}H)$, $J^{2}=-1$, which is calibrated with respect to $\theta$, i.e., $d\theta(.,J.)$ is positive definite on $H$. We then can endow $M$ with the Riemannian metric, $$g_{\theta,J}=\theta^{2}+d\theta(.,J.).
\label{eq:Contact.Riem-metric}$$ The volume of $M$ with respect to $g_{\theta,J}$ depends only on $\theta$ and is equal to $${{\operatorname{Vol}}}_{\theta}M:=\frac{1}{n!}\int_{M}d\theta^{n}\wedge \theta.
$$
In addition, we let $X_{0}$ be the *Reeb field* associated to $\theta$, that is, the unique vector field on $M$ such that $\iota_{X_{0}}\theta=1$ and $\iota_{X_{0}}d\theta=0$.
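As a simple illustration (on a non-compact local model, so merely to fix ideas): for the standard contact form $\theta=dz-y\,dx$ on ${\ensuremath{\mathbb{R}}}^{3}$ we have $d\theta=dx\wedge dy$, and the Reeb field is $X_{0}=\partial_{z}$, since $\iota_{\partial_{z}}\theta=1$ and $\iota_{\partial_{z}}d\theta=0$.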
Noncommutative residue and the horizontal sublaplacian (contact case)
---------------------------------------------------------------------
In the sequel we shall identify $H^{*}$ with the subbundle of $T^{*}M$ annihilating the orthogonal complement $H^{\perp}\subset TM$. This yields the orthogonal splitting, $$\Lambda_{{\ensuremath{\mathbb{C}}}}T^{*}M=(\bigoplus_{0\leq k \leq 2n} \Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*})\oplus (\theta\wedge \Lambda^{*}T_{{\ensuremath{\mathbb{C}}}}^{*}M).
$$ The horizontal differential $d_{b;k}:C^{\infty}(M,\Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*})\rightarrow C^{\infty}(M,\Lambda^{k+1}_{{\ensuremath{\mathbb{C}}}}H^{*})$ is $$d_{b;k}=\pi_{b;k+1}\circ d,
$$ where $\pi_{b;k}\in C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}\Lambda_{{\ensuremath{\mathbb{C}}}}T^{*}M)$ denotes the orthogonal projection onto $\Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*}$. This is not the differential of a chain complex, for we have $$d_{b}^{2}=-{\ensuremath{\mathcal{L}}}_{X_{0}}\varepsilon(d\theta)=-\varepsilon (d\theta){\ensuremath{\mathcal{L}}}_{X_{0}},
\label{eq:CR.square-db}$$ where $\varepsilon (d\theta)$ denotes the exterior multiplication by $d\theta$.
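For instance, on a function $f\in C^{\infty}(M)$ formula (\[eq:CR.square-db\]) gives $$d_{b}^{2}f=-\varepsilon(d\theta){\ensuremath{\mathcal{L}}}_{X_{0}}f=-(X_{0}f)\,d\theta,$$ so $d_{b}^{2}$ vanishes on functions that are constant along the Reeb flow, but not in general.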
The horizontal sublaplacian $\Delta_{b;k}:C^{\infty}(M,\Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*})\rightarrow C^{\infty}(M,\Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*})$ is $$\Delta_{b;k}=d_{b;k}^{*}d_{b;k}+d_{b;k-1}d_{b;k-1}^{*}.
\label{eq:CR.horizontal-sublaplacian}$$ Notice that the definition of $\Delta_{b}$ makes sense on any Heisenberg manifold equipped with a Riemannian metric. This operator was first introduced by Tanaka [@Ta:DGSSPCM], but versions of this operator acting on functions were independently defined by Greenleaf [@Gr:FESPM] and Lee [@Le:FMPHI]. Since $(M,H)$ is a contact manifold, the Levi form (\[eq:Heisenberg.Levi-form\]) is nondegenerate, so from [@Po:MAMS1 Prop. 3.5.4] we get:
The principal symbol of $\Delta_{b;k}$ is invertible if and only if we have $k\neq n$.
Next, for $\mu\in (-n,n)$ we let $$\rho(\mu)= \frac{\pi^{-(n+1)}}{2^{n}n!} \int_{-\infty}^{\infty}e^{-\mu\xi_{0}}(\frac{\xi_{0}}{\sinh \xi_{0}})^{n}d\xi_{0}.$$ Notice that with the notation of [@Po:MAMS1 Eq. (6.2.29)] we have $\rho(\mu)=(2n+2)\nu(\mu)$. For $k \neq n$ let $\nu_{0}(\Delta_{b;k})$ be the coefficient $\nu_{0}(P)$ in the Weyl asymptotics (\[eq:Zeta.Weyl-asymptotics1\]) for $\Delta_{b;k}$, i.e., we have ${\ensuremath{{\operatorname{Res}}}}\Delta_{b;k}^{-(n+1)}=(2n+2)\nu_{0}(\Delta_{b;k})$. By [@Po:MAMS1 Prop. 6.3.3] we have $\nu_{0}(\Delta_{b;k})=\tilde{\gamma}_{nk}{{\operatorname{Vol}}}_{\theta}M$, where $\tilde{\gamma}_{nk}:=\sum_{p+q=k}2^{n}\binom{n}{p} \binom{n}{q}\nu(p-q)$. Therefore, we get:
\[prop:Contact.residue-Deltab\] For $k \neq n$ we have $${\ensuremath{{\operatorname{Res}}}}\Delta_{b;k}^{-(n+1)}= \gamma_{nk} {{\operatorname{Vol}}}_{\theta}M ,\quad
\gamma_{nk}=\sum_{p+q=k}2^{n}\binom{n}{p} \binom{n}{q}\rho(p-q).
\label{eq:Contact.residue-Deltab}$$ In particular $\gamma_{nk}$ is a universal constant depending only on $n$ and $k$.
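For instance, since $\dim M=2n+1$, the operator $\Delta_{b;k}^{-(n+1)}$ has Heisenberg order $-(2n+2)=-(d+2)$, so Theorem \[thm:NCG.Dixmier\] applies to it and, combined with (\[eq:Contact.residue-Deltab\]), yields $${\ensuremath{{\operatorname{Tr}}_{\omega}}}\Delta_{b;k}^{-(n+1)}={\ensuremath{-\hspace{-2,4ex}\int}}\Delta_{b;k}^{-(n+1)}=\frac{\gamma_{nk}}{2n+2}{{\operatorname{Vol}}}_{\theta}M \qquad \text{for $k\neq n$}.$$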
Noncommutative residue and the contact Laplacian
------------------------------------------------
The contact complex of Rumin [@Ru:FDVC] can be seen as an attempt to get a complex of horizontal forms by forcing the equalities $d_{b}^{2}=0$ and $(d_{b}^{*})^{2}=0$. Because of (\[eq:CR.square-db\]) there are two natural ways to modify $d_{b}$ to get a chain complex. The first one is to force the equality $d_{b}^{2}=0$ by restricting $d_{b}$ to the subbundle $\Lambda^{*}_{2}:=\ker \varepsilon(d\theta) \cap \Lambda^{*}_{{\ensuremath{\mathbb{C}}}}H^{*}$, since the latter is closed under $d_{b}$ and is annihilated by $d_{b}^{2}$. Similarly, we get the equality $(d_{b}^{*})^{2}=0$ by restricting $d^{*}_{b}$ to the subbundle $\Lambda^{*}_{1}:=\ker \iota(d\theta)\cap \Lambda^{*}_{{\ensuremath{\mathbb{C}}}}H^{*}=({{\operatorname{im}}}\varepsilon(d\theta))^{\perp}\cap \Lambda^{*}_{{\ensuremath{\mathbb{C}}}}H^{*}$, where $\iota(d\theta)$ denotes the interior product with $d\theta$. This amounts to replacing $d_{b}$ by $\pi_{1}\circ d_{b}$, where $\pi_{1}$ is the orthogonal projection onto $\Lambda^{*}_{1}$.
In fact, since $d\theta$ is nondegenerate on $H$ the operator $\varepsilon(d\theta):\Lambda^{k}_{{\ensuremath{\mathbb{C}}}}H^{*}\rightarrow \Lambda^{k+2}_{{\ensuremath{\mathbb{C}}}}H^{*}$ is injective for $k\leq n-1$ and surjective for $k\geq n+1$. This implies that $\Lambda_{2}^{k}=0$ for $k\leq n$ and $\Lambda_{1}^{k}=0$ for $k\geq n+1$. Therefore, we only have two halves of complexes. As observed by Rumin [@Ru:FDVC] we get a full complex by connecting the two halves by means of the differential operator, $$B_{R}:C^{\infty}(M,\Lambda_{{\ensuremath{\mathbb{C}}}}^{n}H^{*})\rightarrow C^{\infty}(M,\Lambda_{{\ensuremath{\mathbb{C}}}}^{n}H^{*}), \qquad
B_{R}={\ensuremath{\mathcal{L}}}_{X_{0}}+d_{b,n-1}\varepsilon(d\theta)^{-1}d_{b,n},$$ where $\varepsilon(d\theta)^{-1}$ is the inverse of $\varepsilon(d\theta):\Lambda^{n-1}_{{\ensuremath{\mathbb{C}}}}H^{*}\rightarrow \Lambda^{n+1}_{{\ensuremath{\mathbb{C}}}}H^{*}$. Notice that $B_{R}$ is a second order differential operator. Thus, if we let $\Lambda^{k}=\Lambda_{1}^{k}$ for $k=0,\ldots,n-1$ and we let $\Lambda^{k}=\Lambda_{2}^{k}$ for $k=n+1,\ldots,2n$, then we get the chain complex, $$\begin{gathered}
C^{\infty}(M)\stackrel{d_{R;0}}{\rightarrow}C^{\infty}(M,\Lambda^{1})\stackrel{d_{R;1}}{\rightarrow}
\ldots
C^{\infty}(M,\Lambda^{n-1})\stackrel{d_{R;n-1}}{\rightarrow}C^{\infty}(M,\Lambda^{n}_{1})\stackrel{B_{R}}{\rightarrow}\\
C^{\infty}(M,\Lambda^{n}_{2}) \stackrel{d_{R;n}}{\rightarrow}C^{\infty}(M,\Lambda^{n+1})
\ldots \stackrel{d_{R;2n-1}}{\longrightarrow} C^{\infty}(M,\Lambda^{2n}),
\label{eq:contact-complex}\end{gathered}$$ where $d_{R;k}:=\pi_{1}\circ d_{b;k}$ for $k=0,\ldots,n-1$ and $d_{R;k}:=d_{b;k}$ for $k=n,\ldots,2n-1$. This complex is called the *contact complex*.
The contact Laplacian is defined as follows. In degree $k\neq n$ it consists of the differential operator $\Delta_{R;k}:C^{\infty}(M,\Lambda^{k})\rightarrow C^{\infty}(M,\Lambda^{k})$ given by $$\Delta_{R;k}=\left\{
\begin{array}{ll}
(n-k)d_{R;k-1}d^{*}_{R;k}+(n-k+1) d^{*}_{R;k+1}d_{R;k}& \text{$k=0,\ldots,n-1$},\\
(k-n-1)d_{R;k-1}d^{*}_{R;k}+(k-n) d^{*}_{R;k+1}d_{R;k}& \text{$k=n+1,\ldots,2n$}.
\label{eq:contact-Laplacian1}
\end{array}\right.$$ In degree $k=n$ it consists of the differential operators $\Delta_{R;nj}:C^{\infty}(M,\Lambda_{j}^{n})\rightarrow C^{\infty}(M,\Lambda^{n}_{j})$, $j=1,2$, defined by the formulas, $$\Delta_{R;n1}= (d_{R;n-1}d^{*}_{R;n})^{2}+B_{R}^{*}B_{R}, \quad \Delta_{R;n2}=B_{R}B_{R}^{*}+ (d^{*}_{R;n+1}d_{R;n})^{2}.
\label{eq:contact-Laplacian2}$$
Observe that $\Delta_{R;k}$, $k\neq n$, is a differential operator of order $2$, whereas $\Delta_{R;n1}$ and $\Delta_{R;n2}$ are differential operators of order $4$. Moreover, Rumin [@Ru:FDVC] proved that in every degree the contact Laplacian is maximal hypoelliptic in the sense of [@HN:HMOPCV]. In fact, in every degree the contact Laplacian has an invertible principal symbol, hence admits a parametrix in the Heisenberg calculus (see [@JK:OKTGSU], [@Po:MAMS1 Sect. 3.5]).
For $k\neq n$ (resp. $j=1,2$) we let $\nu_{0}(\Delta_{R;k})$ (resp. $\nu_{0}(\Delta_{R;nj})$) be the coefficient $\nu_{0}(P)$ in the Weyl asymptotics (\[eq:Zeta.Weyl-asymptotics1\]) for $\Delta_{R;k}$ (resp. $\Delta_{R;nj}$). By Proposition \[prop:Zeta.Weyl-asymptotics\] we have ${\ensuremath{{\operatorname{Res}}}}\Delta_{R;k}^{-(n+1)}=(2n+2)\nu_{0}(\Delta_{R;k})$ and $ {\ensuremath{{\operatorname{Res}}}}\Delta_{R;nj}^{-\frac{n+1}{2}}=(2n+2)\nu_{0}(\Delta_{R;nj})$. Moreover, by [@Po:MAMS1 Thm. 6.3.4] there exist universal positive constants $\nu_{nk}$ and $\nu_{n,j}$ depending only on $n$, $k$ and $j$ such that $\nu_{0}(\Delta_{R;k})=\nu_{nk}{{\operatorname{Vol}}}_{\theta}M$ and $\nu_{0}(\Delta_{R;nj})= \nu_{n,j}{{\operatorname{Vol}}}_{\theta}M$. Therefore, we obtain:
\[prop:Contact.residue-DeltaR\] 1) For $k \neq n$ there exists a universal constant $\rho_{nk}>0$ depending only on $n$ and $k$ such that $${\ensuremath{{\operatorname{Res}}}}\Delta_{R;k}^{-(n+1)}=\rho_{nk} {{\operatorname{Vol}}}_{\theta}M.$$
2\) For $j=1,2$ there exists a universal constant $\rho_{n,j}>0$ depending only on $n$ and $j$ such that $${\ensuremath{{\operatorname{Res}}}}\Delta_{R;nj}^{-\frac{n+1}{2}}= \rho_{n,j} {{\operatorname{Vol}}}_{\theta}M.$$
We have $\rho_{nk}=(2n+2)\nu_{nk}$ and $\rho_{n,j}=(2n+2)\nu_{n,j}$, so it follows from the proof of [@Po:MAMS1 Thm. 6.3.4] that we can explicitly relate the universal constants $\rho_{nk}$ and $\rho_{n,j}$ to the fundamental solutions of the heat operators $\Delta_{R;k}+\partial_{t}$ and $\Delta_{R;nj}+\partial_{t}$ associated to the contact Laplacian on the Heisenberg group ${\ensuremath{\mathbb{H}}}^{2n+1}$ (cf. [@Po:MAMS1 Eq. (6.3.18)]). For instance, if $K_{0;0}(x,t)$ denotes the fundamental solution of $\Delta_{R;0}+\partial_{t}$ on ${\ensuremath{\mathbb{H}}}^{2n+1}$ then we have $\rho_{n0}=
\frac{2^{n}}{n!}K_{0;0}(0,1)$.
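In the same vein, note that $\Delta_{R;k}$ has Heisenberg order $2$ while $\Delta_{R;n1}$ and $\Delta_{R;n2}$ have Heisenberg order $4$, which is why the relevant powers above are $-(n+1)$ and $-\frac{n+1}{2}$ respectively: in both cases the resulting operator has order $-(2n+2)$. Therefore Theorem \[thm:NCG.Dixmier\] applies and, combined with Proposition \[prop:Contact.residue-DeltaR\], it gives $${\ensuremath{{\operatorname{Tr}}_{\omega}}}\Delta_{R;k}^{-(n+1)}=\frac{\rho_{nk}}{2n+2}{{\operatorname{Vol}}}_{\theta}M \quad (k\neq n), \qquad {\ensuremath{{\operatorname{Tr}}_{\omega}}}\Delta_{R;nj}^{-\frac{n+1}{2}}=\frac{\rho_{n,j}}{2n+2}{{\operatorname{Vol}}}_{\theta}M \quad (j=1,2).$$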
Applications in CR geometry {#sec:CR}
===========================
In this section we present some applications in CR geometry of the noncommutative residue for the Heisenberg calculus. After recalling the geometric set-up, we shall compute the noncommutative residues of some powers of the horizontal sublaplacian and of the Kohn Laplacian on CR manifolds endowed with a pseudohermitian structure. After this we will make use of the framework of noncommutative geometry to define lower dimensional volumes in pseudohermitian geometry. For instance, we will give sense to the area of any 3-dimensional pseudohermitian manifold as a constant multiple of the integral of the Tanaka-Webster scalar curvature. As a by-product this will allow us to get a spectral interpretation of the Einstein-Hilbert action in pseudohermitian geometry.
The geometric set-up
--------------------
Let $(M^{2n+1},H)$ be a compact orientable CR manifold. Thus $(M^{2n+1},H)$ is a Heisenberg manifold such that $H$ admits a complex structure $J\in C^{\infty}(M,{\ensuremath{{\operatorname{End}}}}H)$, $J^{2}=-1$, in such a way that $T_{1,0}:=\ker (J+i)\subset T_{{\ensuremath{\mathbb{C}}}}M$ is a complex rank $n$ subbundle which is integrable in Frobenius’ sense (cf. Section \[sec:Heisenberg-calculus\]). In addition, we set $T_{0,1}=\overline{T_{1,0}}=\ker(J-i)$.
Since $M$ is orientable and $H$ is orientable by means of its complex structure, there exists a global non-vanishing real 1-form $\theta$ such that $H=\ker \theta$. Associated to $\theta$ is its Levi form, i.e., the Hermitian form on $T_{1,0}$ such that $$L_{\theta}(Z,W)=-id\theta(Z,\overline{W}) \qquad \forall Z,W \in T_{1,0}.
$$
We say that $M$ is strictly pseudoconvex (resp. $\kappa$-strictly pseudoconvex) when we can choose $\theta$ so that $L_{\theta}$ is positive definite (resp. has signature $(n-\kappa,\kappa,0)$) at every point.
If $(M, H)$ is $\kappa$-strictly pseudoconvex, then $\theta$ is a contact form on $M$, and in the terminology of [@We:PHSRH] the datum of the contact form $\theta$ annihilating $H$ defines a *pseudohermitian structure* on $M$.
From now on we assume that $M$ is $\kappa$-strictly pseudoconvex, and we let $\theta$ be a pseudohermitian contact form such that $L_{\theta}$ has signature $(n-\kappa,\kappa,0)$ everywhere. We let $X_{0}$ be the Reeb vector field associated to $\theta$, so that $\iota_{X_{0}}\theta=1$ and $\iota_{X_{0}}d\theta=0$ (cf. Section \[sec:Contact\]), and we let ${\ensuremath{\mathcal{N}}}\subset T_{{\ensuremath{\mathbb{C}}}}M$ be the complex line bundle spanned by $X_{0}$.
We endow $M$ with a *Levi metric* as follows. First, we can always construct a splitting $T_{1,0}=T_{1,0}^{+}\oplus T_{1,0}^{-}$ with subbundles $T_{1,0}^{+}$ and $T_{1,0}^{-}$ which are orthogonal with respect to $L_{\theta}$ and such that $L_{\theta}$ is positive definite on $T_{1,0}^{+}$ and negative definite on $T_{1,0}^{-}$ (see, e.g., [@FS:EDdbarbCAHG], [@Po:MAMS1]). Set $T_{0,1}^{\pm}=\overline{T_{1,0}^{\pm}}$. Then we have the splittings, $$T_{{\ensuremath{\mathbb{C}}}}M={\ensuremath{\mathcal{N}}}\oplus T_{1,0}\oplus T_{0,1}={\ensuremath{\mathcal{N}}}\oplus T_{1,0}^{+}\oplus T_{1,0}^{-}\oplus T_{0,1}^{+}\oplus T_{0,1}^{-}.
\label{eq:CR.splitting-TcM}$$ Associated to these splittings is the unique Hermitian metric $h$ on $T_{{\ensuremath{\mathbb{C}}}}M$ such that:
- The splittings (\[eq:CR.splitting-TcM\]) are orthogonal with respect to $h$;
- $h$ commutes with complex conjugation;
- We have $h(X_{0},X_{0})=1$ and $h$ agrees with $\pm L_{\theta}$ on $T_{1,0}^{\pm}$.
In particular, the matrix of $L_{\theta}$ with respect to $h$ is ${\operatorname{diag}}(1,\ldots,1,-1,\ldots,-1)$, where $1$ has multiplicity $n-\kappa$ and $-1$ has multiplicity $\kappa$.
Notice that when $M$ is strictly pseudoconvex $h$ is uniquely determined by $\theta$, since in this case $T_{1,0}^{+}=T_{1,0}$ and one can check that we have $h=\theta^{2}+d\theta(.,J.)$, that is, $h$ agrees on $TM$ with the Riemannian metric $g_{\theta,J}$ in (\[eq:Contact.Riem-metric\]). In general, we can check that the volume form of $M$ with respect to $h$ depends only on $\theta$ and is equal to $$v_{\theta}(x):=\frac{(-1)^{\kappa}}{n!}d\theta^{n}\wedge\theta.
$$ In particular, the volume of $M$ with respect to $h$ is $${{\operatorname{Vol}}}_{\theta}M:=\frac{(-1)^{\kappa}}{n!}\int_{M} d\theta^{n}\wedge\theta.
$$
Finally, as proved by Tanaka [@Ta:DGSSPCM] and Webster [@We:PHSRH] the datum of the pseudohermitian contact form $\theta$ defines a natural connection, the *Tanaka-Webster connection*, which preserves the pseudohermitian structure of $M$, i.e., it preserves both $\theta$ and $J$. It can be defined as follows.
Let $\{Z_{j}\}$ be a local frame of $T_{1,0}$. Then $\{X_{0},Z_{j},Z_{\overline{j}}\}$ forms a frame of $T_{{\ensuremath{\mathbb{C}}}}M$ with dual coframe $\{\theta,\theta^{j}, \theta^{\overline{j}}\}$, with respect to which we can write $d\theta =ih_{j\overline{k}}\theta^{j}\wedge \theta^{\overline{k}}$. Using the matrix $(h_{j\overline{k}})$ and its inverse $(h^{j\overline{k}})$ to lower and raise indices, the connection 1-form $\omega=(\omega_{j}^{~k})$ and the torsion form $\tau_{k}=A_{jk}\theta^{j}$ of the Tanaka-Webster connection are uniquely determined by the relations, $$d\theta^{k}=\theta^{j}\wedge \omega_{j}^{~k}+\theta \wedge \tau^{k}, \qquad
\omega_{j\bar{k}} + \omega_{\bar{k}j}
=dh_{j\bar{k}}, \qquad A_{jk}=A_{k j}.
$$
The curvature tensor $\Pi_{j}^{~k}:=d\omega_{j}^{~k}-\omega_{j}^{~l}\wedge \omega_{l}^{~k}$ satisfies the structure equations, $$\Pi_{j}^{~k}=R_{j\bar{k} l\bar{m}} \theta^{l}\wedge \theta^{\bar{m}} +
W_{j\bar{k}l}\theta^{l}\wedge \theta - W_{\bar{k}j\bar{l}}\theta^{\bar{l}}\wedge \theta
+i\theta_{j}\wedge \tau_{\bar{k}}-i\tau_{j}\wedge \theta_{\bar{k}}.
\label{eq:CR.TW-curvature}$$ The *Ricci tensor* of the Tanaka-Webster connection is $ \rho_{j \bar{k}}:=R_{l~j \bar{k}}^{~l}$, and its *scalar curvature* is $R_{n}: =\rho_{j}^{~j}$.
Noncommutative residue and the Kohn Laplacian {#subsec:CR.NCR-pseudohermitian}
---------------------------------------------
The ${\overline{\partial}_{b}}$-complex of Kohn-Rossi ([@KR:EHFBCM], [@Ko:BCM]) is defined as follows.
Let $\Lambda^{1,0}$ (resp. $\Lambda^{0,1}$) be the annihilator of $T_{0,1}\oplus {\ensuremath{\mathcal{N}}}$ (resp. $T_{1,0}\oplus {\ensuremath{\mathcal{N}}}$) in $T^{*}_{{\ensuremath{\mathbb{C}}}}M$. For $p,q=0,\ldots,n$ let $\Lambda^{p,q}:=(\Lambda^{1,0})^{p}\wedge (\Lambda^{0,1})^{q}$ be the bundle of $(p,q)$-covectors on $M$, so that we have the orthogonal decomposition, $$\Lambda^{*}T_{{\ensuremath{\mathbb{C}}}}^{*}M=(\bigoplus_{p,q=0}^{n}\Lambda^{p,q})\oplus (\theta\wedge \Lambda^{*}T_{{\ensuremath{\mathbb{C}}}}^{*}M).
\label{eq:CR-Lambda-pq-decomposition}$$ Moreover, thanks to the integrability of $T_{1,0}$, given any local section $\eta$ of $\Lambda^{p,q}$, its differential $d\eta$ can be uniquely decomposed as $$d\eta ={\overline{\partial}_{b;p,q}}\eta + \partial_{b;p,q}\eta + \theta \wedge {\ensuremath{\mathcal{L}}}_{X_{0}}\eta,
\label{eq:CR.dbarb}$$ where ${\overline{\partial}_{b;p,q}}\eta $ (resp. $\partial_{b;p,q}\eta$) is a section of $\Lambda^{p,q+1}$ (resp. $\Lambda^{p+1,q}$).
The integrability of $T_{1,0}$ further implies that $\overline{\partial}_{b}^{2}=0$ on $(0,q)$-forms, so that we get the cochain complex $\overline{\partial}_{b;0,*}:C^{\infty}(M,\Lambda^{0,*})\rightarrow C^{\infty}(M,\Lambda^{0,*+1})$. On $(p,q)$-forms with $p\geq 1$ the operator ${\overline{\partial}_{b}}^{2}$ is a tensor which vanishes when the complex structure $J$ is invariant under the Reeb flow (i.e., when we have $[X_{0},JX]=J[X_{0},X]$ for any local section $X$ of $H$). Let ${\overline{\partial}_{b;p,q}}^{*}$ be the formal adjoint of ${\overline{\partial}_{b;p,q}}$ with respect to the Levi metric of $M$. Then the *Kohn Laplacian* ${{\square}_{b;p,q}}:C^{\infty}(M,\Lambda^{p,q})\rightarrow C^{\infty}(M,\Lambda^{p,q})$ is defined to be $${{\square}_{b;p,q}}={\overline{\partial}_{b;p,q}}^{*}{\overline{\partial}_{b;p,q}}+ \overline{\partial}_{b;p,q-1}\overline{\partial}_{b;p,q-1}^{*}.$$ This is a differential operator which has order 2 in the Heisenberg calculus sense. Furthermore, we have:
The principal symbol of ${{\square}_{b;p,q}}$ is invertible if and only if we have $q \neq \kappa$ and $q\neq n-\kappa$.
Next, for $q \not\in\{\kappa,n-\kappa\}$ let $\nu_{0}({{\square}_{b;p,q}})$ be the coefficient $\nu_{0}(P)$ in the Weyl asymptotics (\[eq:Zeta.Weyl-asymptotics1\]) for ${{\square}_{b;p,q}}$. By [@Po:MAMS1 Thm. 6.2.4] we have $ \nu_{0}({{\square}_{b;p,q}})=\tilde{\alpha}_{n\kappa pq}{{\operatorname{Vol}}}_{\theta}M$, where $\tilde{\alpha}_{n\kappa pq}$ is equal to $$\sum_{\max(0,q-\kappa)\leq k\leq \min(q,n-\kappa)} \frac{1}{2} \binom{n}{p} \binom{n-\kappa}{k}\binom{\kappa}{q-k}
\nu(n-2(\kappa-q+2k)).
\label{eq:CR.talphankpq}$$ Therefore, by arguing as in the proof of Proposition \[prop:Contact.residue-Deltab\] we get:
\[prop:CR.residue-Boxb1\] For $q \neq \kappa$ and $q\neq n-\kappa$ we have $${\ensuremath{{\operatorname{Res}}}}{{\square}_{b;p,q}}^{-(n+1)}=\alpha_{n\kappa pq}{{\operatorname{Vol}}}_{\theta}M,
\label{eq:CR.residue-Boxb1}$$ where $\alpha_{n\kappa pq}$ is equal to $$\sum_{\max(0,q-\kappa)\leq k\leq \min(q,n-\kappa)} \frac{1}{2} \binom{n}{p} \binom{n-\kappa}{k}\binom{\kappa}{q-k}
\rho(n-2(\kappa-q+2k)).
\label{eq:CR.alphapq}
$$ In particular $\alpha_{n\kappa pq}$ is a universal constant depending only on $n$, $\kappa$, $p$ and $q$.
\[rem:CR.residue-Boxb1-local\] Let $a_{0}({{\square}_{b;p,q}})(x)$ be the leading coefficient in the heat kernel asymptotics (\[eq:Zeta.heat-kernel-asymptotics\]) for ${{\square}_{b;p,q}}$. By (\[eq:Zeta.tPs-heat1\]) we have $ \nu_{0}({{\square}_{b;p,q}})= \frac{1}{ (n+1)!} \int_{M}{{\operatorname{tr}}}_{\Lambda^{p,q}}a_{0}({{\square}_{b;p,q}})(x)$. Moreover, a careful look at the proof of [@Po:MAMS1 Thm. 6.2.4] shows that we have $${{\operatorname{tr}}}_{\Lambda^{p,q}}a_{0}({{\square}_{b;p,q}})(x)=(n+1)!\tilde{\alpha}_{n\kappa pq}v_{\theta}(x).
$$ Since by (\[eq:Zeta.tPs-heat1\]) we have $2c_{{{\square}_{b;p,q}}^{-(n+1)}}(x)=(n!)^{-1}a_{0}({{\square}_{b;p,q}})(x)$, it follows that the equality (\[eq:CR.residue-Boxb1\]) ultimately holds at the level of densities, that is, we have $$c_{{{\square}_{b;p,q}}^{-(n+1)}}(x)=\alpha_{n\kappa pq}v_{\theta}(x).
$$
Finally, when $M$ is strictly pseudoconvex, i.e., when $\kappa=0$, we have:
\[prop:CR.residue-Boxb2\] Assume $M$ strictly pseudoconvex. Then for $q =1,\ldots, n-1$ there exists a universal constant $\alpha_{npq}'$ depending only on $n$, $p$ and $q$ such that $${\ensuremath{{\operatorname{Res}}}}{{\square}_{b;p,q}}^{-n}=\alpha_{npq}'\int_{M}R_{n}d\theta^{n}\wedge \theta,
$$ where $R_{n}$ denotes the Tanaka-Webster scalar curvature of $M$.
For $q=1,\ldots,n-1$ let $a_{2}({{\square}_{b;p,q}})(x)$ be the coefficient of $t^{-n}$ in the heat kernel asymptotics (\[eq:Zeta.heat-kernel-asymptotics\]) for ${{\square}_{b;p,q}}$. By (\[eq:Zeta.tPs-heat1\]) we have $2c_{{{\square}_{b;p,q}}^{-n}}(x)=\Gamma(n)^{-1}a_{2}({{\square}_{b;p,q}})(x)$. Moreover, by [@BGS:HECRM Thm. 8.31] there exists a universal constant $\alpha_{npq}'$ depending only on $n$, $p$ and $q$ such that ${{\operatorname{tr}}}_{\Lambda^{p,q}}a_{2}({{\square}_{b;p,q}})(x)=\alpha_{npq}'R_{n}d\theta^{n}\wedge \theta$. Thus, $${\ensuremath{{\operatorname{Res}}}}{{\square}_{b;p,q}}^{-n}= \int_{M}{{\operatorname{tr}}}_{\Lambda^{p,q}}c_{{{\square}_{b;p,q}}^{-n}}(x)= \alpha_{npq}'\int_{M}R_{n}d\theta^{n}\wedge \theta,
$$ where $\alpha_{npq}'$ is a universal constant depending only on $n$, $p$ and $q$.
Noncommutative residue and the horizontal sublaplacian (CR case)
----------------------------------------------------------------
Let us identify $H^{*}$ with the subbundle of $T^{*}M$ annihilating the orthogonal complement $H^{\perp}$, and let $\Delta_{b}:C^{\infty}(M,\Lambda^{*}_{{\ensuremath{\mathbb{C}}}}H^{*})\rightarrow C^{\infty}(M,\Lambda^{*}_{{\ensuremath{\mathbb{C}}}}H^{*})$ be the horizontal sublaplacian on $M$ as defined in (\[eq:CR.horizontal-sublaplacian\]).
Notice that with the notation of (\[eq:CR.dbarb\]) we have $d_{b}={\overline{\partial}_{b}}+\partial_{b}$. Moreover, we can check that ${\overline{\partial}_{b}}\partial_{b}^{*}+\partial_{b}^{*} {\overline{\partial}_{b}}= {\overline{\partial}_{b}}^{*} \partial_{b}+\partial_{b}
{\overline{\partial}_{b}}^{*}=0$. Therefore, we have $$\Delta_{b}={\square}_{b}+ \overline{{\square}}_{b}, \qquad \overline{{\square}}_{b}:=\partial_{b}^{*}\partial_{b}+\partial_{b}\partial_{b}^{*}.
$$ In particular, this shows that the horizontal sublaplacian $\Delta_{b}$ preserves the bidegree, so it induces a differential operator $\Delta_{b;p,q}:C^{\infty}(M,\Lambda^{p,q})\rightarrow C^{\infty}(M,\Lambda^{p,q})$. Then the following holds.
The principal symbol of $\Delta_{b;p,q}$ is invertible if and only if we have $(p,q)\neq (\kappa,n-\kappa)$ and $(p,q)\neq (n-\kappa,\kappa)$.
Bearing this in mind we have:
\[prop:CR.residue-Deltab1\] For $(p,q)\neq (\kappa,n-\kappa)$ and $(p,q)\neq (n-\kappa,\kappa)$ we have $${\ensuremath{{\operatorname{Res}}}}\Delta_{b;p,q}^{-(n+1)}= \beta_{n\kappa pq}{{\operatorname{Vol}}}_{\theta}M,
\label{eq:CR.residue-Deltab1a}$$ where $\beta_{n\kappa pq}$ is equal to $$\! \! \! \! \sum_{\substack{\max(0,q-\kappa)\leq k\leq \min(q,n-\kappa)\\ \max(0,p-\kappa)\leq l\leq \min(p,n-\kappa)}} \! \! \! \!
2^{n}\binom{n-\kappa}{l}\binom{\kappa}{p-l} \binom{n-\kappa}{k}\binom{\kappa}{q-k}
\rho(2(q-p)+4(l-k)).
\label{eq:CR.residue-Deltab1b}$$ In particular $\beta_{n\kappa pq}$ is a universal constant depending only on $n$, $\kappa$, $p$ and $q$.
Let $\nu_{0}(\Delta_{b;p,q})$ be the coefficient $\nu_{0}(P)$ in the Weyl asymptotics (\[eq:Zeta.Weyl-asymptotics1\]) for $\Delta_{b;p,q}$. By [@Po:MAMS1 Thm. 6.2.5] we have $\nu_{0}(\Delta_{b;p,q})=\frac{1}{2n+2}\beta_{n\kappa pq}{{\operatorname{Vol}}}_{\theta}M$, where $\beta_{n\kappa pq}$ is given by (\[eq:CR.residue-Deltab1b\]). We then can show that $ {\ensuremath{{\operatorname{Res}}}}\Delta_{b;p,q}^{-(n+1)}= \beta_{n\kappa pq}{{\operatorname{Vol}}}_{\theta}M$ by arguing as in the proof of Proposition \[prop:Contact.residue-Deltab\].
\[rem:CR.residue-Deltab1-local\] In the same way as (\[eq:CR.residue-Boxb1\]) (cf. Remark \[rem:CR.residue-Boxb1-local\]) the equality (\[eq:CR.residue-Deltab1a\]) holds at the level of densities, i.e., we have $c_{\Delta_{b;p,q}^{-(n+1)}}(x)=\beta_{n\kappa pq}v_{\theta}(x)$.
\[prop:CR.residue-Deltab2\] Assume that $M$ is strictly pseudoconvex. For $(p,q)\neq (0,n)$ and $(p,q)\neq (n,0)$ there exists a universal constant $\beta_{npq}'$ depending only on $n$, $p$ and $q$ such that $${\ensuremath{{\operatorname{Res}}}}\Delta_{b;p,q}^{-n}=\beta_{npq}' \int_{M}R_{n}d\theta^{n}\wedge \theta.
\label{eq:CR.residue-Deltab2}$$
The same analysis as that of [@BGS:HECRM Sect. 8] for the coefficients in the heat kernel asymptotics (\[eq:Zeta.heat-kernel-asymptotics\]) for the Kohn Laplacian can be carried out for the coefficients of the heat kernel asymptotics for $\Delta_{b;p,q}$ (see [@St:SICRM]). In particular, if we let $a_{2}(\Delta_{b;p,q})(x)$ be the coefficient of $t^{-n}$ in the heat kernel asymptotics for $\Delta_{b;p,q}$, then there exists a universal constant $\tilde{\beta}_{npq}$ depending only on $n$, $p$ and $q$ such that ${{\operatorname{tr}}}_{\Lambda^{p,q}}a_{2}(\Delta_{b;p,q})(x)=\tilde{\beta}_{npq}R_{n}d\theta^{n}\wedge \theta$. Arguing as in the proof of Proposition \[prop:CR.residue-Boxb2\] then shows that ${\ensuremath{{\operatorname{Res}}}}\Delta_{b;p,q}^{-n}=\beta_{npq}' \int_{M}R_{n}d\theta^{n}\wedge \theta$, where $\beta_{npq}'$ is a universal constant depending only on $n$, $p$ and $q$.
Lower dimensional volumes in pseudohermitian geometry {#subsec.CR.area}
-----------------------------------------------------
Following an idea of Connes [@Co:GCMFNCG] we can make use of the noncommutative residue for classical [$\Psi$DOs]{} to define lower dimensional volumes in Riemannian geometry, e.g., we can give sense to the area and the length of a Riemannian manifold even when the dimension is not 1 or 2 (see [@Po:LMP07]). We shall now make use of the noncommutative residue for the Heisenberg calculus to define lower dimensional volumes in pseudohermitian geometry.
In this subsection we assume that $M$ is strictly pseudoconvex. In particular, the Levi metric $h$ is uniquely determined by $\theta$. In addition, we let $\Delta_{b;0}$ be the horizontal sublaplacian acting on functions. Then, as explained in Remark \[rem:CR.residue-Deltab1-local\], we have $c_{\Delta_{b;0}^{-(n+1)}}(x)=\beta_{n}v_{\theta}(x)$, where $\beta_{n}=\beta_{n000}=2^{n}\rho(0)$. In particular, for any $f \in C^{\infty}(M)$ we get $c_{f\Delta_{b;0}^{-(n+1)}}(x)=\beta_{n}f(x)v_{\theta}(x)$. Combining this with Theorem \[thm:NCG.Dixmier\] then gives $${\ensuremath{-\hspace{-2,4ex}\int}}f\Delta_{b;0}^{-(n+1)}=\frac{1}{2n+2}\int_{M}c_{f\Delta_{b;0}^{-(n+1)}}(x)=\frac{\beta_{n}}{2n+2}\int_{M}f(x)v_{\theta}(x).
$$ Thus the operator $\frac{2n+2}{\beta_{n}}\Delta_{b;0}^{-(n+1)}$ allows us to recapture the volume form $v_{\theta}(x)$.
Since $-(2n+2)$ is the critical order for a [$\Psi_{H}$DO]{} to be trace-class and $M$ has Hausdorff dimension $2n+2$ with respect to the Carnot-Carathéodory metric defined by the Levi metric on $H$, it stands to reason to define the *length element* of $(M,\theta)$ as the positive selfadjoint operator $ds$ such that $(ds)^{2n+2}=\frac{2n+2}{\beta_{n}}\Delta_{b;0}^{-(n+1)}$, that is, $$ds:= c_{n}\Delta_{b;0}^{-1/2}, \qquad c_{n}=(\frac{2n+2}{\beta_{n}})^{\frac{1}{2n+2}}.
$$
For $k=1,2,\ldots,2n+2$ the $k$-dimensional volume of $(M,\theta)$ is $${\operatorname{Vol}}_{\theta}^{(k)}M:={\ensuremath{-\hspace{-2,4ex}\int}}ds^{k}.
$$ In particular, for $k=2$ the area of $(M,\theta)$ is $ {\operatorname{Area}}_{\theta}M:={\ensuremath{-\hspace{-2,4ex}\int}}ds^{2}$.
We have ${\ensuremath{-\hspace{-2,4ex}\int}}ds^{k}=\frac{(c_{n})^{k}}{2n+2}\int_{M}c_{\Delta_{b;0}^{-\frac{k}{2}}}(x)$ and thanks to (\[eq:Zeta.tPs-heat1\]) we know that $2
c_{\Delta_{b;0}^{-\frac{k}{2}}}(x)$ agrees with $\Gamma(\frac{k}{2})^{-1}a_{2n+2-k}(\Delta_{b;0})(x)$, where $a_{j}(\Delta_{b;0})(x)$ denotes the coefficient of $t^{-\frac{2n+2-j}{2}}$ in the heat kernel asymptotics (\[eq:Zeta.heat-kernel-asymptotics\]) for $\Delta_{b;0}$. Thus, $${\operatorname{Vol}}_{\theta}^{(k)}M=\frac{(c_{n})^{k}}{4(n+1)}\Gamma(\frac{k}{2})^{-1}\int_{M}a_{2n+2-k}(\Delta_{b;0})(x).
$$ Since $\Delta_{b;0}$ is a differential operator we have $a_{2j-1}(\Delta_{b;0})(x)=0$ for any $j\in {\ensuremath{\mathbb{N}}}$, so $ {\operatorname{Vol}}_{\theta}^{(k)}M$ vanishes when $k$ is odd. Furthermore, as alluded to in the proof of Proposition \[prop:CR.residue-Deltab2\] the analysis in [@BGS:HECRM Sect. 8] of the coefficients of the heat kernel asymptotics for the Kohn Laplacian applies *verbatim* to the heat kernel asymptotics for the horizontal sublaplacian. Thus, we can write $$a_{2j}(\Delta_{b;0})(x)=\gamma_{nj}(x)d\theta^{n}\wedge \theta(x),
\label{eq:CR-volumes-gamma-nj}$$ where $\gamma_{nj}(x)$ is a universal linear combination, depending only on $n$ and $j$, of complete contractions of covariant derivatives of the curvature and torsion tensors of the Tanaka-Webster connection (i.e. $\gamma_{nj}(x)$ is a local pseudohermitian invariant). In particular, we have $\gamma_{n0}(x)=\gamma_{n0}$ and $\gamma_{n1}(x)=\gamma_{n1}'R_{n}(x)$, where $\gamma_{n0}$ and $\gamma_{n1}'$ are universal constants and $R_{n}(x)$ is the Tanaka-Webster scalar curvature (in fact the constants $\gamma_{n0}$ and $\gamma_{n1}'$ can be explicitly related to the constants $\beta_{n000}$ and $\beta_{n00}'$). Therefore, we obtain:
\[prop:CR.lower-dim.-volumes\] 1) ${{\operatorname{Vol}}}^{(k)}_{\theta}M$ vanishes when $k$ is odd.
2\) When $k$ is even we have $${\operatorname{Vol}}_{\theta}^{(k)}M=\frac{(c_{n})^{k}}{4(n+1)}\Gamma(\frac{k}{2})^{-1}\int_{M}\tilde{\gamma}_{nk}(x)d\theta^{n}\wedge \theta(x),
\label{eq:CR.volumes-even}$$ where $\tilde{\gamma}_{nk}(x):=\gamma_{n,n+1-\frac{k}{2}}(x)$ is a universal linear combination, depending only on $n$ and $k$, of complete contractions of weight $n+1-\frac{k}{2}$ of covariant derivatives of the curvature and torsion tensors of the Tanaka-Webster connection.
In particular, thanks to (\[eq:CR.volumes-even\]) we have a purely differential-geometric formulation of the $k$-dimensional volume $ {\operatorname{Vol}}_{\theta}^{(k)}M$. Moreover, for $k=2n+2$ we get: $${\operatorname{Vol}}_{\theta}^{(2n+2)}M=\frac{(c_{n})^{2n+2}}{4(n+1)}\frac{\gamma_{n0}}{n!}\int_{M}d\theta^{n}\wedge \theta.
$$ Since ${\operatorname{Vol}}_{\theta}^{(2n+2)}M={\operatorname{Vol}}_{\theta}M=\frac{1}{n!}\int_{M}d\theta^{n}\wedge \theta$ we see that $(c_{n})^{2n+2}=\frac{4(n+1)}{\gamma_{n0}}$, where $\gamma_{n0}$ is as above.
On the other hand, when $n=1$ (i.e. $\dim M=3$) and $k=2$ we get $${\operatorname{Area}}_{\theta}M=\gamma_{1}''\int_{M}R_{1}d\theta\wedge \theta, \qquad \gamma_{1}'':=\frac{(c_{1})^{2}}{8}\gamma_{11}'
=\frac{\gamma_{11}'}{\sqrt{8\gamma_{10}}},
\label{eq:CR.area-universal}$$ where $\gamma_{11}'$ is as above. To compute $\gamma_{1}''$ it is enough to compute $\gamma_{10}$ and $\gamma_{11}'$ in the special case of the unit sphere $S^{3}\subset {\ensuremath{\mathbb{C}}}^{2}$ equipped with its standard pseudohermitian structure, i.e., for $S^{3}$ equipped with the CR structure induced by the complex structure of ${\ensuremath{\mathbb{C}}}^{2}$ and with the pseudohermitian contact form $\theta:= \frac{i}2 (z_{1} d\bar{z}_{1}+ z_{2} d\bar{z}_{2})$.
First, the volume ${{\operatorname{Vol}}}_{\theta}S^{3}$ is equal to $$\int_{S^{3}}d\theta\wedge \theta = \frac{-1}{4}\int_{S^{3}}(z_{2}dz_{1}\wedge d\bar{z_{1}} \wedge d\bar{z_{2}} +
z_{1} d\bar{z_{1}}\wedge dz_{2}\wedge d\bar{z_{2}})=\pi^{2}.
\label{eq:CR.volumeS3}$$ Moreover, by [@We:PHSRH] the Tanaka-Webster scalar here is $R_{1}=4$, so we get $$\int_{S^{3}}R_{1}d\theta\wedge \theta =4{{\operatorname{Vol}}}_{\theta}S^{3}=4\pi^{2}.
$$
Next, for $j=0,1$ set $A_{2j}(\Delta_{b;0})=\int_{S^{3}}a_{2j}(\Delta_{b;0})(x)$. In view of the definition of the constants $\gamma_{10}$ and $\gamma_{11}'$ we have $$A_{0}(\Delta_{b;0})=\gamma_{10}\int_{S^{3}}d\theta\wedge \theta =\pi^{2}\gamma_{10}, \quad
A_{2}(\Delta_{b;0})=\gamma_{11}'\int_{S^{3}}R_{1}d\theta\wedge \theta =4\pi^{2}\gamma_{11}'.
\label{eq:CR.gamma-A2j}$$ Notice that $A_{0}(\Delta_{b;0})$ and $A_{2}(\Delta_{b;0})$ are the coefficients of $t^{-2}$ and $t^{-1}$ in the asymptotics of $ {\ensuremath{{\operatorname{Tr}}}}e^{-t\Delta_{b;0}}$ as $t\rightarrow 0^{+}$. Moreover, we have $\Delta_{b;0}
=\boxdot_{\theta}-\frac{1}{4}R_{1}=\boxdot_{\theta}-1$, where $\boxdot_{\theta}$ denotes the CR invariant sublaplacian of Jerison-Lee [@JL:YPCRM], and by [@St:SICRM Thm. 4.34] we have ${\ensuremath{{\operatorname{Tr}}}}e^{-t\boxdot_{\theta}} =\frac{\pi^{2}}{16t^{2}}+
{\operatorname{O}}(t^\infty)$ as $t \rightarrow 0^{+}$. Therefore, as $t\rightarrow 0^{+}$ we have $${\ensuremath{{\operatorname{Tr}}}}e^{-t\Delta_{b;0}}=e^{t}{\ensuremath{{\operatorname{Tr}}}}e^{-t\boxdot_{\theta}}\sim \frac{\pi^{2}}{16t^{2}}(1+t+\frac{t^{2}}{2}+\ldots).
$$ Hence $A_{0}(\Delta_{b;0})=A_{2}(\Delta_{b;0})=\frac{\pi^{2}}{16}$. Combining this with (\[eq:CR.gamma-A2j\]) then shows that $\gamma_{10}=\frac{1}{16}$ and $\gamma_{11}'=\frac{1}{64}$, from which we get $\gamma_{1}''=\frac{1/64}{\sqrt{8\cdot\frac{1}{16}}}=\frac1{32\sqrt2}$. Therefore, we get:
\[thm:spectral.area\] If $\dim M=3$, then we have $${\operatorname{Area}}_{\theta}M = \frac1{32\sqrt2}\int_{M}R_{1}d\theta \wedge\theta.
\label{eq:CR.area-dimension3}$$
For instance, for $S^{3}$ equipped with its standard pseudohermitian structure we obtain ${\operatorname{Area}}_{\theta}S^{3}=\frac{\pi^{2}}{8\sqrt{2}}$.
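The elementary bookkeeping behind these constants is easy to double-check by computer. The short script below is only a convenience check of the arithmetic above (the variable names are ours, not part of any established package): it expands $e^{t}\pi^{2}/(16t^{2})$ in powers of $t$, reads off $A_{0}(\Delta_{b;0})$ and $A_{2}(\Delta_{b;0})$, and recovers $\gamma_{10}$, $\gamma_{11}'$, $\gamma_{1}''$ and ${\operatorname{Area}}_{\theta}S^{3}$ from (\[eq:CR.gamma-A2j\]) and (\[eq:CR.area-universal\]).

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Heat trace on S^3: Tr e^{-t box_theta} = pi^2/(16 t^2) + O(t^oo) and
# Delta_{b;0} = box_theta - 1, so Tr e^{-t Delta_{b;0}} ~ e^t * pi^2/(16 t^2).
heat_trace = sp.exp(t) * sp.pi**2 / (16 * t**2)

# Expand t^2 * heat_trace = e^t * pi^2/16 and read off the coefficients of
# t^0 and t^1, i.e. the coefficients of t^{-2} and t^{-1} in the heat trace.
poly = sp.expand(sp.series(t**2 * heat_trace, t, 0, 2).removeO())
A0 = poly.coeff(t, 0)   # = pi^2/16
A2 = poly.coeff(t, 1)   # = pi^2/16

# (eq:CR.gamma-A2j): A0 = pi^2 * gamma_10 and A2 = 4 pi^2 * gamma_11'.
gamma10 = sp.simplify(A0 / sp.pi**2)          # -> 1/16
gamma11p = sp.simplify(A2 / (4 * sp.pi**2))   # -> 1/64

# (eq:CR.area-universal): gamma_1'' = gamma_11' / sqrt(8 * gamma_10).
gamma1pp = sp.simplify(gamma11p / sp.sqrt(8 * gamma10))  # -> sqrt(2)/64 = 1/(32*sqrt(2))

# Area_theta S^3 = gamma_1'' * int_{S^3} R_1 dtheta ^ theta = gamma_1'' * 4 pi^2.
area_S3 = sp.simplify(4 * sp.pi**2 * gamma1pp)           # -> sqrt(2)*pi^2/16 = pi^2/(8*sqrt(2))

print(A0, A2, gamma10, gamma11p, gamma1pp, area_S3)
```

In particular the script returns $\gamma_{1}''=\frac{\sqrt{2}}{64}=\frac{1}{32\sqrt{2}}$ and ${\operatorname{Area}}_{\theta}S^{3}=\frac{\pi^{2}}{8\sqrt{2}}$, in agreement with Theorem \[thm:spectral.area\].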
Appendix. Proof of Lemma \[lem:Heisenberg.extension-symbol\] {#appendix.-proof-of-lemmalemheisenberg.extension-symbol .unnumbered}
============================================================
In this appendix, for the reader’s convenience we give a detailed proof of Lemma \[lem:Heisenberg.extension-symbol\] about the extension of a homogeneous symbol on ${{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0}$ into a homogeneous distribution on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$.
Let $p\in C^\infty({{\ensuremath{\mathbb{R}}}^{d+1}\!\setminus\! 0})$ be homogeneous of degree $m$, $m\in {\ensuremath{\mathbb{C}}}$, so that $p(\lambda.\xi)=\lambda^{m}p(\xi)$ for any $\lambda>0$. If $\Re m>-(d+2)$, then $p$ is integrable near the origin, so it defines a tempered distribution which is its unique homogeneous extension.
If $\Re m \leq -(d+2)$, then we can extend $p$ into the distribution $\tau\in{\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ defined by the formula, $${\ensuremath{\langle \tau , u \rangle}}= \int [u(\xi)-\psi(\|\xi\|)\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\leq k}
\frac{\xi^\alpha}{\alpha!} u^{(\alpha)}(0)] p(\xi)d\xi \qquad \forall u\in{\ensuremath{\mathcal{S}}}({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}),
\label{eq:Appendix.almosthomogeneous-extension}$$ where $k$ is an integer $\geq -(\Re m +d+2)$ and $\psi$ is a function in $C_{c}^\infty({\ensuremath{\mathbb{R}}}_{+})$ such that $\psi=1$ near $0$. Then in view of (\[eq:PsiHDO.homogeneity-K-m\]) for any $\lambda>0$ we have $$\begin{split}
{\ensuremath{\langle \tau_{\lambda} , u \rangle}}-\lambda^{m} {\ensuremath{\langle \tau , u \rangle}} &= \lambda^{-(d+2)}\int [u(\lambda^{-1}.\xi)-\psi(\|\xi\|)\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\leq k}
\frac{\xi^\alpha\lambda^{-{\ensuremath{\langle\! \alpha\!\rangle}}}}{\alpha!} u^{(\alpha)}(0)]p(\xi)d\xi \\
& -\lambda^{m}\int [u(\xi)-\psi(\|\xi\|)\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\leq k}
\frac{\xi^\alpha}{\alpha!} u^{(\alpha)}(0)] p(\xi)d\xi,\\
&= \lambda^{m} \sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\leq k} \frac{u^{(\alpha)}(0)}{\alpha!} \int [\psi(\|\xi\|)-\psi(\lambda\|\xi\|)] \xi^{\alpha}p(\xi)d\xi,\\
&= \lambda^{m} \sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\leq k} \rho_{\alpha}(\lambda) c_{\alpha}(p) {\ensuremath{\langle \delta^{(\alpha)} , u \rangle}},
\end{split}$$ where we have let $$c_{\alpha}(p) = \frac{(-1)^{|\alpha|}}{\alpha!}\int_{\|\xi\|=1}\xi^\alpha p(\xi)i_{E}d\xi, \qquad
\rho_{\alpha}(\lambda)=\int_{0}^\infty \mu^{{\ensuremath{\langle\! \alpha\!\rangle}}+m+d+2}
(\psi(\mu)-\psi(\lambda\mu)) \frac{d\mu}{\mu},$$ and, as in the statement of Lemma \[lem:Heisenberg.extension-symbol\], $E$ is the vector field $2\xi_{0}\partial_{\xi_{0}}+\xi_{1}\partial_{\xi_{1}}+\ldots+\xi_{d}\partial_{\xi_{d}}$.
Set $\lambda=e^s$ and assume that $\psi$ is of the form $ \psi(\mu)=h(\log
\mu) $ with $h\in C^\infty({\ensuremath{\mathbb{R}}})$ such that $h=1$ near $-\infty$ and $h=0$ near $+\infty$. Then, setting $a_{\alpha}={\ensuremath{\langle\! \alpha\!\rangle}}+m+d+2$, we have $$\frac{d}{ds}\rho_{\alpha}(e^s)= \frac{d}{ds} \int_{-\infty}^{\infty}(h(t)-h(s+t))e^{a_{\alpha}t}dt=- e^{-a_{\alpha}s}\int_{-\infty}^{\infty}
e^{a_{\alpha}t} h'(t)dt.
\label{eq:Appendix.differentiation-rhoalpha}$$ As $\rho_{\alpha}(1)=0$ it follows that $\tau$ is homogeneous of degree $m$ provided that $$\int_{-\infty}^{\infty} e^{at} h'(t) dt = 0 \qquad \text{for $a=m+d+2, \ldots, m+d+2+k$}.
\label{eq:Appendix.homogeneous-extension-condition}$$
Next, if $g\in C_{c}^\infty({\ensuremath{\mathbb{R}}})$ is such that $\int g(t)dt=1$, then for any $a \in {\ensuremath{\mathbb{C}}}\setminus 0$ we have $$\int _{-\infty}^{\infty} e^{at}(\frac{1}{a}\frac{d}{dt}+1)g(t)dt=0.
\label{eq:Appendix1.integral-a-g}$$ Therefore, if $m \not\in{\ensuremath{\mathbb{Z}}}$ then we can check that the conditions (\[eq:Appendix.homogeneous-extension-condition\]) are satisfied by $$h'(t)=\prod_{a=m+d+2}^{m+d+2+k} (\frac{1}{a} \frac{d}{dt} +1)g(t).
\label{eq:Appendix1.h'}$$ As $\int_{-\infty}^{\infty}h'(t)dt=1$ we then see that the distribution $\tau$ defined by (\[eq:Appendix.almosthomogeneous-extension\]) with $\psi(\mu)=\int_{\log \mu}^\infty h'(t)dt$ is a homogeneous extension of $p(\xi)$.
On the other hand, if $\tilde{\tau}\in{\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$ is another homogeneous extension of $p(\xi)$ then $\tau-\tilde{\tau}$ is supported at the origin, so we have $\tau=\tilde{\tau} + \sum b_{\alpha}
\delta^{(\alpha)}$ for some constants $b_{\alpha}\in{\ensuremath{\mathbb{C}}}$. Then, for any $\lambda>0$, we have $$\tau_{\lambda}-\lambda^{m}\tau=\tilde{\tau}_{\lambda}-\lambda^{m}\tilde{\tau} +
\sum (\lambda^{-({\ensuremath{\langle\! \alpha\!\rangle}}+d+2)} -\lambda^m) b_{\alpha}\delta^{(\alpha)}.
\label{eq:Appendix.tau-l-tau-tilde}$$ As both $\tau$ and $\tilde{\tau}$ are homogeneous of degree $m$, we deduce that $\sum (\lambda^{-({\ensuremath{\langle\! \alpha\!\rangle}}+d+2)} -\lambda^m) b_{\alpha}\delta^{(\alpha)}=0$. The linear independence of the family $\{\delta^{(\alpha)}\}$ then implies that all the constants $b_{\alpha}$ vanish, that is, we have $\tilde{\tau}=\tau$. Thus $\tau$ is the unique homogeneous extension of $p(\xi)$ on ${\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}}$.
Now, assume that $m$ is an integer $\leq -(d+2)$. Then in the formula (\[eq:Appendix.almosthomogeneous-extension\]) for $\tau$ we can take $k=-(m+d+2)$ and let $\psi$ be of the form, $$\psi(\mu)=\int_{\log \mu}^\infty h'(t) dt, \qquad h'(t)=\prod_{a=m+d+2}^{-1} (\frac{1}{a} \frac{d}{dt} +1) g(t),
$$ with $g\in C_{c}^\infty({\ensuremath{\mathbb{R}}})$ such that $\int g(t)dt=1$, the product being over the nonzero values of $a$. Then thanks to (\[eq:Appendix.differentiation-rhoalpha\]) and (\[eq:Appendix1.integral-a-g\]) we have $\rho_{\alpha}(\lambda)=0$ for ${\ensuremath{\langle\! \alpha\!\rangle}}<-(m+d+2)$, while for ${\ensuremath{\langle\! \alpha\!\rangle}}=-(m+d+2)$ we get $$\frac{d}{ds}\rho_{\alpha}(e^s)=\int h'(t)dt= \int g(t)dt= 1.
$$ Since $\rho_{\alpha}(1)=0$ it follows that $\rho_{\alpha}(e^{s})=s$, that is, we have $\rho_{\alpha}(\lambda)=\log \lambda$. Thus, $$\tau_{\lambda}=\lambda^{m}\tau +\lambda^{m}\log \lambda\sum_{{\ensuremath{\langle\! \alpha\!\rangle}}=-(m+d+2)} c_{\alpha}(p)\delta^{(\alpha)} \qquad \forall \lambda>0.
\label{eq:Appendix.taulambda-tau}$$ In particular, we see that if all the coefficients $c_{\alpha}(p)$ with ${\ensuremath{\langle\! \alpha\!\rangle}}=-(m+d+2)$ vanish then $\tau$ is homogeneous of degree $m$.
Conversely, suppose that $p(\xi)$ admits a homogeneous extension $\tilde{\tau}\in{\ensuremath{\mathcal{S}}}'({\ensuremath{{\ensuremath{\mathbb{R}}}^{d+1}}})$. As $\tau-\tilde{\tau}$ is supported at $0$, we can write $\tau=\tilde{\tau} + \sum b_{\alpha}
\delta^{(\alpha)}$ with $b_{\alpha}\in{\ensuremath{\mathbb{C}}}$. For any $\lambda>0$ we have $ \tilde{\tau}_{\lambda}=\lambda^{m}\tilde{\tau}$, so by combining this with (\[eq:Appendix.tau-l-tau-tilde\]) we get $$\tau_{\lambda}-\lambda^{m}\tau = \sum_{{\ensuremath{\langle\! \alpha\!\rangle}}\neq -(m+d+2)}
b_{\alpha}(\lambda^{-({\ensuremath{\langle\! \alpha\!\rangle}}+d+2)}-\lambda^{m})\delta^{(\alpha)}.
$$ By comparing this with (\[eq:Appendix.taulambda-tau\]) and by using linear independence of the family $\{\delta^{(\alpha)}\}$ we then deduce that we have $c_{\alpha}(p)=0$ for ${\ensuremath{\langle\! \alpha\!\rangle}}= -(m+d+2)$. Therefore $p(\xi)$ admits a homogeneous extension if and only if all the coefficients $c_{\alpha}(p)$ with ${\ensuremath{\langle\! \alpha\!\rangle}}=-(m+d+2)$ vanish. The proof of Lemma \[lem:Heisenberg.extension-symbol\] is thus achieved.
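As an illustration of this dichotomy, in the borderline case $m=-(d+2)$ the only multi-index that can contribute is $\alpha=0$, and (\[eq:Appendix.taulambda-tau\]) reduces to $$\tau_{\lambda}=\lambda^{-(d+2)}\tau+\lambda^{-(d+2)}\log \lambda\, c_{0}(p)\,\delta, \qquad c_{0}(p)=\int_{\|\xi\|=1}p(\xi)\, i_{E}d\xi.$$ Thus a symbol of degree $-(d+2)$ admits a homogeneous extension if and only if $c_{0}(p)=0$; this is the mechanism responsible for the logarithmic singularities, and hence for the residue densities, encountered in the main text.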
[GJMS]{} Beals, R.; Greiner, P.C.: *Calculus on Heisenberg manifolds*. Annals of Mathematics Studies, vol. 119. Princeton University Press, Princeton, NJ, 1988.
Beals, R.; Greiner, P.C.; Stanton, N.K.: *The heat equation on a CR manifold*. J. Differential Geom. **20** (1984), no. 2, 343–387.
Bella[ï]{}che, A.: *The tangent space in sub-Riemannian geometry*. *Sub-Riemannian geometry*, 1–78, Progr. Math., 144, Birkhäuser, Basel, 1996.
Berline, N.; Getzler, E.; Vergne, M.: *Heat kernels and Dirac operators*. Grundlehren der Mathematischen Wissenschaften, vol. 298. Springer-Verlag, Berlin, 1992.
Biquard, O.: *Métriques d’Einstein asymptotiquement symétriques.* Astérisque No. 265 (2000).
Bony, J.M: *Principe du maximum, inégalité de Harnack et unicité du problème de Cauchy pour les opérateurs elliptiques dégénérés*. Ann. Inst. Fourier **19** (1969) 277–304.
Boutet de Monvel, L.: *Hypoelliptic operators with double characteristics and related pseudo-differential operators.* Comm. Pure Appl. Math. **27** (1974), 585–639.
Boutet de Monvel, L.; Guillemin, V. *The spectral theory of Toeplitz operators*. Annals of Mathematics Studies, 99. Princeton University Press, Princeton, NJ, 1981.
Christ, M.; Geller, D.; Głowacki, P.; Polin, L.: *Pseudodifferential operators on groups with dilations.* Duke Math. J. **68** (1992) 31–65.
Connes, A.: *The action functional in noncommutative geometry*. Comm. Math. Phys. **117** (1988), no. 4, 673–683.
Connes, A.: *Noncommutative geometry*. Academic Press, Inc., San Diego, CA, 1994.
Connes, A.: *Gravity coupled with matter and the foundation of non-commutative geometry*. Comm. Math. Phys. **182** (1996), no. 1, 155–176.
Connes, A.; Moscovici, H.: *The local index formula in noncommutative geometry*. Geom. Funct. Anal. **5** (1995), no. 2, 174–243.
Dixmier, J.: *Existence de traces non normales.* C. R. Acad. Sci. Paris Sér. A-B **262** (1966) A1107–A1108.
Dynin, A.: *Pseudodifferential operators on the Heisenberg group.* Dokl. Akad. Nauk SSSR **225** (1975) 1245–1248.
Dynin, A.: *An algebra of pseudodifferential operators on the Heisenberg groups. Symbolic calculus.* Dokl. Akad. Nauk SSSR **227** (1976), 792–795.
Eliashberg, Y.; Thurston, W.: *Confoliations*. University Lecture Series, 13, AMS, Providence, RI, 1998.
Epstein, C.L.; Melrose, R.B.: *The Heisenberg algebra, index theory and homology*. Preprint, 1998. Available at `http://www-math.mit.edu/~rbm`.
Epstein, C.L.; Melrose, R.B.; Mendoza, G.: *Resolvent of the Laplacian on strictly pseudoconvex domains*. Acta Math. **167** (1991), no. 1-2, 1–106.
Fedosov, B.V.; Golse, F.; Leichtnam, E.; Schrohe, E.: *The noncommutative residue for manifolds with boundary*. J. Funct. Anal. **142** (1996), no. 1, 1–31.
Fefferman, C.: *Monge-Ampère equations, the Bergman kernel, and geometry of pseudoconvex domains*. Ann. of Math. (2) **103** (1976), no. 2, 395–416.
Fefferman, C.L.; Sánchez-Calle, A.: *Fundamental solutions for second order subelliptic operators*. Ann. of Math. (2) **124** (1986), no. 2, 247–272.
Folland, G.; Stein, E.M.: *Estimates for the $\bar \partial\sb{b}$ complex and analysis on the Heisenberg group.* Comm. Pure Appl. Math. **27** (1974) 429–522.
Gilkey, P.B.: *Invariance theory, the heat equation, and the Atiyah-Singer index theorem*. Mathematics Lecture Series, 11. Publish or Perish, Inc., Wilmington, Del., 1984.
Gohberg, I.C.; Kreĭn, M.G.: *Introduction to the theory of linear nonselfadjoint operators*. Trans. of Math. Monographs 18, AMS, Providence, 1969.
Gover, A.R.; Graham, C.R.: *CR invariant powers of the sub-Laplacian.* J. Reine Angew. Math. **583** (2005), 1–27.
Greenleaf, A.: *The first eigenvalue of a sub-Laplacian on a pseudo-Hermitian manifold*. Comm. Partial Differential Equations **10** (1985), no. 2, 191–217.
Gromov, M.: *Carnot-Carathéodory spaces seen from within*. *Sub-Riemannian geometry*, 79–323, Progr. Math., 144, Birkhäuser, Basel, 1996.
Guillemin, V.: *A new proof of Weyl’s formula on the asymptotic distribution of eigenvalues*. Adv. in Math. **55** (1985), no. 2, 131–160.
Guillemin, V.W.: *Gauged Lagrangian distributions.* Adv. Math. **102** (1993), no. 2, 184–201.
Guillemin, V.: *Residue traces for certain algebras of Fourier integral operators*. J. Funct. Anal. **115** (1993), no. 2, 391–417.
Helffer, B.; Nourrigat, J.: *Hypoellipticité maximale pour des opérateurs polynômes de champs de vecteurs.* Prog. Math., No. 58, Birkhäuser, Boston, 1986.
Jerison, D.; Lee, J.M.: *The Yamabe problem on CR manifolds*. J. Differential Geom. **25** (1987), no. 2, 167–197.
Julg, P.; Kasparov, G.: *Operator $K$-theory for the group ${\rm SU}(n,1)$*. J. Reine Angew. Math. **463** (1995), 99–152.
Kalau, W.; Walze, M.: *Gravity, non-commutative geometry and the Wodzicki residue*. J. Geom. Phys. **16** (1995), no. 4, 327–344.
Kassel, C.: *Le résidu non commutatif (d’après M. Wodzicki)*. Séminaire Bourbaki, Vol. 1988/89. Astérisque No. 177-178, (1989), Exp. No. 708, 199–229.
Kastler, D.: *The Dirac operator and gravitation*. Comm. Math. Phys. **166** (1995), no. 3, 633–643.
Kohn, J.J.: *Boundaries of complex manifolds*. 1965 Proc. Conf. Complex Analysis (Minneapolis, 1964) pp. 81–94 Springer, Berlin
Kohn, J.J.; Rossi, H.: *On the extension of holomorphic functions from the boundary of a complex manifold*. Ann. of Math. **81** (1965) 451–472.
Kontsevich, M.; Vishik, S.: *Geometry of determinants of elliptic operators.* *Functional analysis on the eve of the 21st century*, Vol. 1 (New Brunswick, NJ, 1993), 173–197, Progr. Math., 131, Birkhäuser Boston, Boston, MA, 1995.
Lee, J.M.: *The Fefferman metric and pseudo-Hermitian invariants*. Trans. Amer. Math. Soc. **296** (1986), no. 1, 411–429.
Lesch, M.: *On the noncommutative residue for pseudodifferential operators with log-polyhomogeneous symbols*. Ann. Global Anal. Geom. **17** (1999), no. 2, 151–187.
Machedon, M.: *Estimates for the parametrix of the Kohn Laplacian on certain domains*. Invent. Math. **91** (1988), no. 2, 339–364.
Mathai, V.; Melrose, R.; Singer, I.: *Fractional index theory*. J. Differential Geom. **74** (2006) 265–292.
Melrose, R.; Nistor, V.: *Homology of pseudodifferential operators I. Manifolds with boundary*. Preprint, arXiv, June ‘96.
Nagel, A.; Stein, E.M.; Wainger, S.: *Balls and metrics defined by vector fields. I. Basic properties*. Acta Math. **155** (1985), no. 1-2, 103–147.
Paycha, S.; Rosenberg, S.: *Curvature on determinant bundles and first Chern forms*. J. Geom. Phys. **45** (2003), no. 3-4, 393–429.
Ponge, R.: *Calcul fonctionnel sous-elliptique et résidu non commutatif sur les variétés de Heisenberg*. C. R. Acad. Sci. Paris, Série I, **332** (2001) 611–614.
Ponge, R.: *Géométrie spectrale et formules d’indices locales pour les variétés CR et contact*. C. R. Acad. Sci. Paris, Série I, **332** (2001) 735–738.
Ponge, R.: *Spectral asymmetry, zeta functions and the noncommutative residue*. Int. J. Math. **17** (2006), 1065-1090.
Ponge, R.: *The tangent groupoid of a Heisenberg manifold.* Pacific Math. J. **227** (2006) 151–175.
Ponge, R.: *Heisenberg calculus and spectral theory of hypoelliptic operators on Heisenberg manifolds.* E-print, arXiv, Sep. 05, 140 pages. To appear in Mem. Amer. Math. Soc..
Ponge, R.: *Noncommutative residue invariants for CR and contact manifolds.* E-print, arXiv, Oct. 05, 30 pages. To appear in J. Reine Angew. Math.. Ponge, R.: *Noncommutative geometry and lower dimensional volumes in Riemannian geometry*. E-print, arXiv, July 07.
Ponge, R.: *Hypoelliptic functional calculus on Heisenberg manifolds. A resolvent approach.* E-print, arXiv, Sep. 07.
Rockland, C.: *Hypoellipticity on the Heisenberg group-representation-theoretic criteria.* Trans. Amer. Math. Soc. **240** (1978) 1–52.
ÊRockland, C.: *Intrinsic nilpotent approximation.* Acta Appl. Math. **8** (1987), no. 3, 213–270.
Rothschild, L.; Stein, E.: *Hypoelliptic differential operators and nilpotent groups.* Acta Math. **137** (1976) 247–320.
Rumin, M.: *Formes différentielles sur les variétés de contact*. J. Differential Geom. **39** (1994), no.2, 281–330.
Sánchez-Calle, A.: *Fundamental solutions and geometry of the sum of squares of vector fields*. Invent. Math. **78** (1984), no. 1, 143–160.
Schrohe, E.: *Noncommutative residues and manifolds with conical singularities*. J. Funct. Anal. **150** (1997), no. 1, 146–174.
Stanton, N.K.: *Spectral invariants of CR manifolds*. Michigan Math. J. **36** (1989), no. 2, 267–288.
Tanaka, N.: *A differential geometric study on strongly pseudo-convex manifolds*. Lectures in Mathematics, Department of Mathematics, Kyoto University, No. 9. Kinokuniya Book-Store Co., Ltd., Tokyo, 1975.
Taylor, M.E.: *Noncommutative microlocal analysis. I.* Mem. Amer. Math. Soc. 52 (1984), no. 313,
Webster, S.: *Pseudo-Hermitian structures on a real hypersurface*. J. Differential Geom. **13** (1978), no. 1, 25–41.
Van Erp, E.: PhD thesis, Pennsylvania State University, 2005.
Vassout, S.: *Feuilletages et résidu non commutatif longitudinal*. PhD thesis, University of Paris 7, 2001.
Wodzicki, M.: *Local invariants of spectral asymmetry*. Invent. Math. **75** (1984), no. 1, 143–177.
Wodzicki, M.: *Spectral asymmetry and noncommutative residue* (in Russian), Habilitation Thesis, Steklov Institute, (former) Soviet Academy of Sciences, Moscow, 1984.
Wodzicki, M.: *Noncommutative residue. I. Fundamentals*. *$K$-theory, arithmetic and geometry* (Moscow, 1984–1986), 320–399, Lecture Notes in Math., 1289, Springer, Berlin-New York, 1987.
Wodzicki, M.: *The long exact sequence in cyclic homology associated with extensions of algebras*. C. R. Acad. Sci. Paris Sér. I Math. **306** (1988), no. 9, 399–403.
---
author:
- 'Steven A. Balbus'
date: 'Received ; accepted '
title: Nonlinear Scale Invariance in Local Disk Flows
---
Introduction
============
Disks with Keplerian rotation profiles are linearly stable by the Rayleigh criterion of outwardly increasing specific angular momentum, but are extremely sensitive to the presence of magnetic fields. A weakly magnetized disk is linearly unstable if its angular velocity decreases outward, a condition met by Keplerian and almost all other astrophysical rotation profiles (Balbus & Hawley 1991). The underlying physics behind this magnetorotational instability (MRI) is well-understood, and the breakdown of the flow into fully developed turbulence has been convincingly demonstrated in a large series of numerical simulations (Balbus 2003 for a review).
Not all astrophysical disks need have the requisite minimum ionization level to sustain magnetic coupling, however. Protostellar disks, for example, may have an extended “dead zone” near the midplane on radial scales from $\sim 0.1$ to $\sim 10$ AU (Gammie 1996; Fromang, Terquem, & Balbus 2001). This, along with other similar cases (e.g. CV disks, cf. Gammie & Menou 1998), has led to speculation that there are also hydrodynamical mechanisms by which Keplerian flow is destabilized (Gammie 1996).
Before the advent of the MRI, such reasoning was orthodox. The pioneering work of Shakura & Sunyaev (1973), for example, invoked nonlinear, high Reynolds number shear instabilities as a likely destabilizing mechanism that would lead to turbulence (see also Crawford & Kraft 1956). Since, for nonaxisymmetric disturbances, there is still no proof either of linear or nonlinear stability, this mechanism continues to attract adherents (Dubrulle 1993, Richard & Zahn 1999, Richard 2003).
Theoretical analysis may have hit an impasse, but the intervening years have in fact seen a stunning rise in the capabilities of numerical simulation. These simulations have shown no indication of local nonlinear rotational instabilities in Keplerian disks (Hawley, Balbus, & Winters 1999). They do, however, reveal nonlinear shear instabilities when Coriolis forces are absent, or when the disk is marginally stable (constant specific angular momentum). Indeed, even linear instability is possible in some Rayleigh-stable disks, provided that global physics is introduced (Goldreich, Goodman, & Narayan 1986; Blaes 1987), a result that has been numerically confirmed (Hawley 1991).
The numerical stability findings have been criticized on the grounds that the effective Reynolds number of the codes is too low, and that this damps the nonlinear instabilities: the latter require yet-to-be resolved spatial scales in order to reveal themselves (Richard & Zahn 1999). In this paper, we show that the local disk equations possess a scale invariance that implies that any solution to the governing equations must be present on all scales. In other words, for every small scale velocity flow, there is an exact large scale counterpart with the same long term stability behavior. The absence of instability at large scale therefore implies the absence at small scales as well. Conversely, any true small scale instabilities (those present in a shear layer, for example) must also have large scale counterparts, and therefore instability should be found even at crude numerical resolutions. This is indeed the case. Our findings suggest that if nonlinear hydrodynamical instabilities were present in Keplerian disks, such unstable disturbances would have to involve dynamics beyond the local approximation; they are not an inevitable nonlinear outcome of differential rotation.
The Local Approximation
=======================
In cylindrical coordinates $(R, \phi, z)$, the fundamental equations of motion for a flow in which viscous effects are negligible are mass conservation
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho { \mbox{\boldmath{$v$}} }) = 0, \label{fun0}$$
and the dynamical equations,
$$\frac{\partial v_R}{\partial t} + ({ \mbox{\boldmath{$v$}} }\cdot\nabla)\, v_R - \frac{v_\phi^2}{R} = -\frac{1}{\rho}\frac{\partial P}{\partial R} - \frac{\partial \Phi}{\partial R}, \label{fun1}$$
$$\frac{\partial v_\phi}{\partial t} + ({ \mbox{\boldmath{$v$}} }\cdot\nabla)\, v_\phi + \frac{v_\phi v_R}{R} = -\frac{1}{\rho R}\frac{\partial P}{\partial \phi}, \label{fun2}$$
$$\frac{\partial v_z}{\partial t} + ({ \mbox{\boldmath{$v$}} }\cdot\nabla)\, v_z = -\frac{1}{\rho}\frac{\partial P}{\partial z} - \frac{\partial \Phi}{\partial z}. \label{fun3}$$
Our notation is standard: ${ \mbox{\boldmath{$v$}} }$ is the velocity field, $\rho$ the mass density, $P$ the gas pressure, and $\Phi$ is the Newtonian point mass potential for central mass $M$:
$$\Phi = -\frac{GM}{(R^2 + z^2)^{1/2}}.$$
$G$ is the gravitational constant.
The local limit consists of the following series of approximations. First, we assume that $R$ is large and $z \ll R$, so that
$$\Phi \simeq -\frac{GM}{R}\left(1 - \frac{z^2}{2R^2}\right). \label{phis}$$
Choose a fiducial value of $R$, say $R_0$. Denote the angular velocity as $\Omega(R)$ (we assume a dependence only upon $R$), and let $\Omega_0 = \Omega(R_0)$. We next erect local Cartesian coordinates
$$x = R - R_0, \qquad y = R_0(\phi - \Omega_0 t),$$
which corotate with the disk at $R=R_0$. Let
$${ \mbox{\boldmath{$w$}} } \equiv { \mbox{\boldmath{$v$}} } - R\Omega_0\, { \mbox{\boldmath{$\hat e$}} }_\phi \label{w}$$
be the velocity relative to uniform rotation at $\Omega= \Omega_0$. In the local approximation, the magnitude of ${ \mbox{\boldmath{$w$}} }$ is assumed to be small compared with $R\Omega_0$.
The undisturbed angular velocity is Keplerian,
$$\Omega^2 = \frac{GM}{R^3}. \label{kep}$$
Substituting equations (\[phis\])–(\[kep\]) into equations (\[fun0\])–(\[fun3\]) and retaining leading order yields the so-called [*local*]{} or [*Hill*]{} equations (e.g., Balbus & Hawley 1998):
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho { \mbox{\boldmath{$w$}} }) = 0, \label{hill0}$$
$$\left( \frac{\partial}{\partial t} + { \mbox{\boldmath{$w$}} }\cdot\nabla\right) w_R - 2\Omega w_\phi = - x\frac{d\Omega^2}{d\ln R} - \frac{1}{\rho} \frac{\partial P}{\partial x}, \label{hill1}$$
$$\left( \frac{\partial}{\partial t} + { \mbox{\boldmath{$w$}} }\cdot\nabla\right) w_\phi + 2\Omega w_R = - \frac{1}{\rho} \frac{\partial P}{\partial y}, \label{hill2}$$
$$\left( \frac{\partial}{\partial t} + { \mbox{\boldmath{$w$}} }\cdot\nabla\right) w_z = - \Omega^2 z - \frac{1}{\rho} \frac{\partial P}{\partial z}. \label{hill3}$$
The “0” subscript has been dropped in the $2\Omega$ terms in equations (\[hill1\]) and (\[hill2\]), and in the derivative term on the right of equation (\[hill1\]). The time derivative is taken in the corotating frame, viz.:
$$\frac{\partial}{\partial t} \rightarrow \frac{\partial}{\partial t} + \Omega_0 \frac{\partial}{\partial \phi}.$$
Equations (\[hill0\]–\[hill3\]) are well known, and have been used extensively in both numerical and analytical studies. The fundamental approach dates from nineteenth century treatments of the Earth-moon system (Hill 1878).
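For orientation, the Coriolis and tidal terms in equation (\[hill1\]) can be traced with a one-line expansion: writing $v_\phi = R\Omega_0 + w_\phi$ in equation (\[fun1\]), working near the midplane, and keeping only terms linear in $x$ and ${ \mbox{\boldmath{$w$}} }$,
$$\frac{v_\phi^2}{R} - \frac{\partial \Phi}{\partial R} \simeq 2\Omega_0 w_\phi + R\left[\Omega_0^2 - \Omega^2(R)\right] \simeq 2\Omega_0 w_\phi - x\,\frac{d\Omega^2}{d\ln R},$$
since $R[\Omega_0^2 - \Omega^2(R)]$ vanishes at $R_0$ and has radial derivative $-(d\Omega^2/d\ln R)_0$ there. For the Keplerian profile (\[kep\]), the tidal term is simply $3\Omega^2 x$.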
Scale symmetry in the Hill Equations
====================================
The local equations of motion incorporate an important symmetry in their structure. Let ${ \mbox{\boldmath{$w$}} }({ \mbox{\boldmath{$r$}} }, t)$, $\rho({ \mbox{\boldmath{$r$}} }, t)$, $P({ \mbox{\boldmath{$r$}} }, t)$, where ${ \mbox{\boldmath{$r$}} }=(x, y, z)$, be an exact solution to the Hill equations (\[hill0\]–\[hill3\]). Then, if $\alpha$ is an arbitrary constant,
$$(1/\alpha)\, { \mbox{\boldmath{$w$}} }(\alpha { \mbox{\boldmath{$r$}} }, t), \qquad \rho(\alpha { \mbox{\boldmath{$r$}} }, t), \qquad (1/\alpha^2)\, P(\alpha { \mbox{\boldmath{$r$}} }, t)$$
is also an exact solution to the same equations. The proof is a simple matter of direct substitution.
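For example, write ${ \mbox{\boldmath{$r$}} }' = \alpha { \mbox{\boldmath{$r$}} }$ and denote the rescaled fields by ${ \mbox{\boldmath{$W$}} }({ \mbox{\boldmath{$r$}} },t) = (1/\alpha)\,{ \mbox{\boldmath{$w$}} }({ \mbox{\boldmath{$r$}} }',t)$, $\rho'({ \mbox{\boldmath{$r$}} },t) = \rho({ \mbox{\boldmath{$r$}} }',t)$, $P'({ \mbox{\boldmath{$r$}} },t) = (1/\alpha^2)\,P({ \mbox{\boldmath{$r$}} }',t)$. Since $\nabla$ acting on any function of ${ \mbox{\boldmath{$r$}} }'$ brings down a factor of $\alpha$, every term of equation (\[hill1\]) evaluated with the rescaled fields is $1/\alpha$ times the corresponding term of the original equation evaluated at ${ \mbox{\boldmath{$r$}} }'$ (with $\nabla'$ denoting differentiation with respect to ${ \mbox{\boldmath{$r$}} }'$):
$$\left(\frac{\partial}{\partial t} + { \mbox{\boldmath{$W$}} }\cdot\nabla\right) W_R - 2\Omega W_\phi + x\,\frac{d\Omega^2}{d\ln R} + \frac{1}{\rho'}\frac{\partial P'}{\partial x} = \frac{1}{\alpha}\left[\left(\frac{\partial}{\partial t} + { \mbox{\boldmath{$w$}} }\cdot\nabla'\right) w_R - 2\Omega w_\phi + x'\,\frac{d\Omega^2}{d\ln R} + \frac{1}{\rho}\frac{\partial P}{\partial x'}\right] = 0,$$
where the bracketed combination vanishes because the original fields satisfy equation (\[hill1\]) at ${ \mbox{\boldmath{$r$}} }'$. The tidal term scales correctly because $x = x'/\alpha$, and the pressure term because the factor of $\alpha$ from the gradient is offset by one power of $1/\alpha^2$ in $P'$; equations (\[hill0\]), (\[hill2\]) and (\[hill3\]) work out in the same way.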
An equivalent formulation of the scaling symmetry is
$${ \mbox{\boldmath{$w$}} }({ \mbox{\boldmath{$r$}} }/\epsilon, t) \rightarrow (1/\epsilon)\, { \mbox{\boldmath{$w$}} }({ \mbox{\boldmath{$r$}} }, t), \qquad \rho({ \mbox{\boldmath{$r$}} }/\epsilon, t) \rightarrow \rho({ \mbox{\boldmath{$r$}} }, t), \qquad P({ \mbox{\boldmath{$r$}} }/\epsilon, t) \rightarrow (1/\epsilon^2)\, P({ \mbox{\boldmath{$r$}} }, t).$$
In this form, with $\epsilon \ll 1$, we see that any solution of the Hill equations that involves very small length scales has a rescaled counterpart solution with exactly the same time dependence. In particular, any solution corresponding to a breakdown into turbulence must be present on both large and small scales.
The implications of this scaling symmetry are of particular importance for understanding and testing the possible existence of local nonlinear instabilities in Keplerian disks. The key point is that any such instability would have to exist not just at small scales, but at all scales. Finite difference numerical codes would find such instabilities, if they existed. Indeed, a constant specific angular momentum profile is nonlinearly unstable, and is found to be so even at resolutions as crude as $32^3$. By way of contrast, local Keplerian profiles show no evidence of nonlinear instability at resolutions up to $256^3$, instead converging to the same stable solution in codes with completely different numerical diffusion properties (Hawley, Balbus, & Winters 1999). The argument that small scale flow structure is somehow being suppressed is simply untenable.
To see how the Reynolds number changes with scale, assume that a flow is characterized by an effective kinematic viscosity $\nu$. The scaling argument we have just given applies to inviscid equations, so we should not expect it to hold in the presence of viscosity. The Reynolds number associated with the small scale solution is
$$Re_{s} = \frac{w l}{\nu}.$$
The Reynolds number associated with the large scale solution is
$$Re_{l} = \frac{w l}{\epsilon^2 \nu},$$
where $w$ here means $w(l, t)$, the value of the velocity function evaluated at a fiducial length $l$ and time $t$. $Re_l = Re_s/\epsilon^2 \gg Re_{s}$ because at larger scales both the velocity and the length scales increase by a factor of $1/\epsilon$. In a numerical simulation, strict scaling invariance is not obeyed. Instead, the large scale solutions approach the inviscid limit, while their sufficiently small scale counterparts are damped. But by behaving nearly inviscidly, the large scale solutions capture the behavior of the Hill system at all scales.
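Spelled out, the factor of $\epsilon^{-2}$ is just the product of the two rescalings: the large scale counterpart of a flow feature of velocity $w$ and size $l$ has velocity $w/\epsilon$ and size $l/\epsilon$, so that
$$Re_l = \frac{(w/\epsilon)\,(l/\epsilon)}{\nu} = \frac{1}{\epsilon^2}\,\frac{wl}{\nu} = \frac{Re_s}{\epsilon^2}.$$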
What this Result Does Not Show
==============================
Obviously, scale invariance does not constitute a proof of nonlinear stability in any Keplerian flow. There are several points we have not covered.
First, the local approximation ignores boundary conditions. In laboratory flows, the fluid is always bounded by hard walls, and boundary layers form. A recent laboratory confirmation of the MRI, for example, also finds finite amplitude velocity fluctuations in a magnetically stable flow. But the sources of such disturbances are boundary layers (Sisan et al. 2004).
The Hill equations emerge in the limit $R\rightarrow\infty$, and therefore curvature terms drop out of the analysis. Instabilities that depend, for example, upon inflection points or vorticity maxima in the background rotation profile would not appear in this limit. Nothing precludes them from forming in the $w$ velocity profile, however, and if such instabilities were present they should manifest on large scales as well as small. In any case, the criticism of the numerical simulations is that extremely small structure is being lost, and that high Reynolds number differential rotation is supposedly intrinsically unstable. It is very difficult to see how large scale curvature could play an essential destabilizing role here. In these equations, the curvature terms are nonsingular perturbations. Planar Couette and Poiseuille flows break down into turbulence without assistance from geometrical curvature.
Our Hill analysis together with numerical simulations would also suggest that a non-Keplerian disk with, say, $\Omega \propto R^{-1.8}$ is nonlinearly stable. But an annulus supporting such a profile is in fact [*linearly*]{} unstable (Goldreich, Goodman, & Narayan 1986), transporting angular momentum outward even in its linear phase. The point is that the annulus supports edge modes that become unstable, and these global modes do not exist in the local approximation. The existence of similar instabilities in disks found in nature cannot be ruled out, though to date none afflicting Keplerian disks have been found.
The disk thermal structure could also be unstable, at least in principle. Nothing presented in this work bears on these types of instabilities.
Finally, there are technical loopholes to the argument presented in this paper. What if the unstable solution required not just some small scales to be resolved, but very disparate scales? Why this should be so is far from clear, but this possibility cannot be ruled out [*a priori.*]{} Indeed, one could imagine that a fractal structure is required down to infinitesimal scales. Rescaling would not bring such a solution to larger characteristic length scales, by definition. This solution is obviously not characterized by a finite critical Reynolds number above which it must appear; the critical Reynolds number would be infinite! This is not the argument made by proponents of nonlinear high Reynolds number instability. Such a solution may remain a mathematical possibility, but not one that can be realized in nature.
Conclusion
==========
The local dynamics of Keplerian or other astrophysical disk profiles can be captured by an established formalism known as the local, or Hill, approximation. The resulting system of equations has an exact scale invariance, so that any flow characterized by very small scales has an exact large scale counterpart with the same stability properties. This feature of the Hill equations implies that finite difference codes at available resolutions are sufficient to explore the possibility of [*local*]{} nonlinear shear instabilities in astrophysical disks. If simulations accurately describe the large scale behavior of the Hill system, there is nothing more to uncover at small scales; it is simply renormalized large scale behavior. The absence of any observed instabilities in Keplerian numerical studies, coupled with the ready manifestation of such instabilities in local shear layers and constant specific angular momentum systems, suggests that any putative nonmagnetic disk instability would have to incorporate physics beyond simple differential rotation.
I thank C. Gammie, J. Hawley, K. Menou, and C. Terquem for useful comments. This work is supported by NASA grants NAG5-13288 and NNG04GK77G.
Balbus, S. A. 2003, ARA&A, 41, 555
Balbus, S. A. & Hawley, J. F. 1991, ApJ, 376, 214
Balbus, S. A. & Hawley, J. F. 1998, Rev. Mod. Phys., 70, 1
Blaes, O. M. 1987, MNRAS, 227, 975
Crawford, J. A., & Kraft, R. P. 1956, ApJ, 123, 44
Dubrulle, B. 1993, Icarus, 106, 59
Fromang, S., Terquem, C., & Balbus, S. A. 2001, MNRAS, 329, 18
Gammie, C. F. 1996, ApJ, 457, 355
Gammie, C. F., & Menou, K. 1998, ApJ, 492, L75
Goldreich, P., Goodman, J., & Narayan, R. 1986, MNRAS, 221, 339
Hawley, J. F. 1991, ApJ, 381, 496
Hawley, J. F., Balbus, S. A., & Winters, W. F. 1999, ApJ, 518, 394
Hill, G. W. 1878, Am. J. Math., 1, 5
Richard, D. 2003, A&A, 408, 409
Richard, D., & Zahn, J.-P. 1999, A&A, 347, 734
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Sisan, D. R., Mujica, N., Tillotson, W. A., Huang, Y.-M., Dorland, W., Hassam, A., Antonsen, T. M., & Lathrop, D. P. 2004, Phys. Rev., in press (physics/0401125).
[SLAC–PUB–10812\
October 2004\
]{}
[[**New Results in Light-Front\
Phenomenology**]{}[^1]]{}
[*Presented at\
LightCone 2004\
Amsterdam, The Netherlands\
16–20 August 2004*]{}\
[**Abstract** ]{}
The light-front quantization of gauge theories in light-cone gauge provides a frame-independent wavefunction representation of relativistic bound states, simple forms for current matrix elements, explicit unitarity, and a trivial vacuum. In this talk I review the theoretical methods and constraints which can be used to determine these central elements of QCD phenomenology. The freedom to choose the light-like quantization four-vector provides an explicitly covariant formulation of light-front quantization and can be used to determine the analytic structure of light-front wave functions and define a kinematical definition of angular momentum. The AdS/CFT correspondence of large $N_C$ supergravity theory in higher-dimensional anti-de Sitter space with supersymmetric QCD in 4-dimensional space-time has interesting implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for exclusive processes. String/gauge duality also predicts the QCD power-law behavior of light-front Fock-state hadronic wavefunctions with arbitrary orbital angular momentum at high momentum transfer. The form of these near-conformal wavefunctions can be used as an initial ansatz for a variational treatment of the light-front QCD Hamiltonian. The light-front Fock-state wavefunctions encode the bound state properties of hadrons in terms of their quark and gluon degrees of freedom at the amplitude level. The nonperturbative Fock state wavefunctions contain intrinsic gluons, and sea quarks at any scale $Q$ with asymmetries such as $ s(x) \ne \bar s(x)$, $\bar u(x) \ne
\bar d(x).$ Intrinsic charm and bottom quarks appear at large $x$ in the light-front wavefunctions since this minimizes the invariant mass and off-shellness of the higher Fock state. In the case of nuclei, the Fock state expansion contains “hidden color" states which cannot be classified in terms of nucleonic degrees of freedom. I also briefly review recent analyses which show that some leading-twist phenomena such as the diffractive component of deep inelastic scattering, single-spin asymmetries, nuclear shadowing and antishadowing cannot be computed from the LFWFs of hadrons in isolation.
Introduction
============
A central problem in nonperturbative quantum chromodynamics is to determine not only the masses but also the wavefunctions of hadronic bound states. Relativity and quantum mechanics require that a hadron fluctuates not only in coordinate space, spin, and color, but also in the number of quanta. The light-front Hamiltonian formulation of quantum chromodynamics provides a comprehensive framework for determining not only the spectrum of the theory, but also the complete set of light-front Fock state wavefunctions $\psi_{n/H}(x_i,\vec k_{\perp i},\lambda_i)$ which encode the bound state properties of hadrons in terms of their fundamental quark and gluon degrees of freedom at the amplitude level.
Formally, the light-front expansion is constructed by quantizing QCD at fixed light-cone time [@Dirac:1949cp] $\tau = t + z/c$ and forming the invariant light-front Hamiltonian: $ H^{QCD}_{LF} = P^+
P^- - {\vec P_\perp}^2$ where $P^\pm = P^0 \pm
P^z$ [@Brodsky:1997de]. The momentum generators $P^+$ and $\vec
P_\perp$ are kinematical; [*i.e.*]{}, they are independent of the interactions. The generator $P^- = i {d\over d\tau}$ generates light-cone time translations, and the eigen-spectrum of the Lorentz scalar $ H^{QCD}_{LF}$ gives the mass spectrum of the color-singlet hadron states in QCD together with their respective light-front wavefunctions. For example, the proton state satisfies: $
H^{QCD}_{LF} {\,\left|\,{\psi_p}\right\rangle} = M^2_p {\,\left|\,{\psi_p}\right\rangle}$.
The light-front (LF) quantization of QCD in light-cone gauge $A^+=0$ has a number of remarkable advantages, including explicit unitarity, a physical Fock expansion, the absence of ghost degrees of freedom, and the decoupling properties needed to prove factorization theorems in high momentum transfer inclusive and exclusive reactions. Prem Srivastava and I have given a systematic derivation [@Srivastava:2000cf] of LF-quantized gauge theory using the Dirac method of constraints. The free theory gauge field is shown to satisfy the Lorentz condition as an operator equation as well as the light-cone gauge condition. Its propagator is found to be transverse with respect to both its four-momentum and the gauge direction. The interaction Hamiltonian of QCD has a form resembling that of covariant theory, except for additional instantaneous interactions which can be treated systematically. The QCD $\beta$ function computed in the light-cone gauge agrees with that known in the conventional framework. In the case of the electroweak theory, spontaneous symmetry breaking is realized in LF quantization by the appearance of zero modes of the Higgs field. Light-front quantization leads to an elegant ghost-free theory of massive gauge particles, automatically incorporating the Lorentz and ’t Hooft conditions, as well as the Goldstone boson equivalence theorem [@Srivastava:2002mw].
The expansion of the proton eigensolution ${\,\left|\,{\psi_p}\right\rangle}$ on the color-singlet $B = 1$, $Q = 1$ eigenstates $\{{\,\left|\,{n}\right\rangle} \}$ of the free Hamiltonian $ H^{QCD}_{LF}(g = 0)$ gives the light-front Fock expansion: $$\begin{aligned}
{\,\left|\,{ \psi_p(P^+, {\vec P_\perp} )}\right\rangle} &=& \sum_{n}\ \prod_{i=1}^{n} {{\rm
d}x_i\, {\rm d}^2
{\vec k_{\perp i}} \over \sqrt{x_i}\, 16\pi^3} \, \ 16\pi^3 \
\delta\left(1-\sum_{i=1}^{n} x_i\right)\, \delta^{(2)}\left(\sum_{i=1}^{n}
{\vec k_{\perp
i}}\right) \label{a318}
\\
&& \rule{0pt}{4.5ex} \times \psi_{n/H}(x_i,{\vec k_{\perp i}},
\lambda_i) {\,\left|\,{ n;\, x_i P^+, x_i {\vec P_\perp} + {\vec k_{\perp
i}}, \lambda_i}\right\rangle}. \nonumber\end{aligned}$$ The light-cone momentum fractions $x_i = k^+_i/P^+$ and ${\vec k_{\perp i}}$ represent the relative momentum coordinates of the QCD constituents. The physical transverse momenta are ${\vec p_{\perp i}} = x_i {\vec P_\perp} + {\vec k_{\perp i}}.$ The $\lambda_i$ label the light-cone spin projections $S^z$ of the quarks and gluons along the quantization direction $z$. Each Fock component has the invariant mass squared $$\mathcal{M}^2_n = (\sum^n_{i=1} k_i^\mu)^2
= \sum^n_{i=1}{k^2_{\perp i} + m^2_i\over x_i}.$$ The physical gluon polarization vectors $\epsilon^\mu(k,\ \lambda = \pm 1)$ are specified in light-cone gauge by the conditions $k \cdot \epsilon = 0,\ \eta \cdot \epsilon =
\epsilon^+ = 0.$ The gluonic quanta which appear in the Fock states thus have physical polarization $\lambda = \pm 1$ and positive metric. Since each Fock particle is on its mass shell in a Hamiltonian framework, $k^- = k^0-k^z= {k^2_\perp + m^2\over k^+}$. One cannot truncate the LF expansion; the expansion sum runs over all $n,$ beginning with the lowest valence state. The probability of massive Fock states with invariant mass $\mathcal{M}$ falls-off at least as fast as $1/\mathcal{M}^2.$
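The second equality in the expression for $\mathcal{M}^2_n$ above can be checked directly: working for convenience in the frame $\vec P_\perp = \vec 0$, where $k^+_i = x_i P^+$ and the on-shell condition gives $k^-_i = (k^2_{\perp i} + m^2_i)/x_i P^+$, one has
$$\mathcal{M}^2_n = \Big(\sum^n_{i=1} k^+_i\Big)\Big(\sum^n_{i=1} k^-_i\Big) - \Big(\sum^n_{i=1} \vec k_{\perp i}\Big)^2 = P^+ \sum^n_{i=1} \frac{k^2_{\perp i}+m^2_i}{x_i P^+} = \sum^n_{i=1}\frac{k^2_{\perp i}+m^2_i}{x_i},$$
using the constraints $\sum_i x_i = 1$ and $\sum_i \vec k_{\perp i} = \vec 0$ enforced by the delta functions in the Fock expansion.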
Because they are defined at fixed light-front time $\tau = t + z/c$ (Dirac’s “Front Form"), LFWFs have the remarkable property of being independent of the hadron’s four-momentum. In contrast, in equal-time quantization, a Lorentz boost mixes dynamically with the interactions, so that computing a wavefunction in a new frame at fixed $t$ requires solving a nonperturbative problem as complicated as the Hamiltonian eigenvalue problem itself. The LFWFs are properties of the hadron itself; they are thus universal and process independent.
The light-front Fock state expansion provides important perspectives on the quark and gluon distributions of hadrons. For example, there is no scale $Q_0$ where one can limit the quark content of a hadron to valence quarks. The nonperturbative Fock state wavefunctions contain intrinsic gluons, strange quarks, charm quarks, etc., at any scale. The internal QCD interactions lead to asymmetries such as $
s(x) \ne \bar s(x)$, $\bar u(x) \ne \bar d(x)$ and intrinsic charm and bottom distributions at large $x$ since this minimizes the invariant mass and off-shellness of the higher Fock state. In the case of nuclei, the Fock state expansion contains hidden color states which cannot be classified in terms of nucleonic degrees of freedom. However, some leading-twist phenomena such as the diffractive component of deep inelastic scattering, single-spin asymmetries, nuclear shadowing and antishadowing cannot be computed from the LFWFs of hadrons in isolation. These issues are reviewed in Section 5 below.
One of the important aspects of fundamental hadron structure is the presence of non-zero orbital angular momentum in the bound-state wave functions. The evidence for a “spin crisis" in the Ellis-Jaffe sum rule signals a significant orbital contribution in the proton wave function [@Jaffe:1989jz; @Ji:2002qa]. The Pauli form factor of nucleons is computed from the overlap of LFWFs differing by one unit of orbital angular momentum $\Delta L_z= \pm
1$. Thus the fact that the anomalous moment of the proton is non-zero requires nonzero orbital angular momentum in the proton wavefunction [@BD80]. In the light-front method, orbital angular momentum is treated explicitly; it includes the orbital contributions induced by relativistic effects, such as the spin-orbit effects normally associated with the conventional Dirac spinors. Angular momentum conservation for each Fock state implies $$J^z= \sum_i^{n} S^z_i + \sum_i^{n-1} L^z_i$$ where $L^z_i$ is one of the $n-1$ relative orbital angular momenta.
One can also define the light-front Fock expansion using a covariant generalization of light-front time: $\tau=x {\makebox[0.08cm]{$\cdot$}}\omega$. The four-vector $\omega$, with $\omega^2 = 0$, determines the orientation of the light-front plane; the freedom to choose $\omega$ provides an explicitly covariant formulation of light-front quantization [@cdkm]: all observables such as matrix elements of local current operators, form factors, and cross sections are light-front invariants – they must be independent of $\omega_\mu.$ In recent work, Dae Sung Hwang, John Hiller, Volodya Karmanov [@Brodsky:2003pw], and I have studied the analytic structure of LFWFs using the explicitly Lorentz-invariant formulation of the front form. Eigensolutions of the Bethe-Salpeter equation have specific angular momentum as specified by the Pauli-Lubanski vector. The corresponding LFWF for an $n$-particle Fock state evaluated at equal light-front time $\tau =
\omega\cdot x$ can be obtained by integrating the Bethe-Salpeter solutions over the corresponding relative light-front energies. The resulting LFWFs $\psi^I_n(x_i, k_{\perp
i})$ are functions of the light-cone momentum fractions $x_i= {k_i\cdot \omega / p \cdot
\omega}$ and the invariant mass of the constituents $\mathcal{M}_n,$ each multiplying spin-vector and polarization tensor invariants which can involve $\omega^\mu.$ They are eigenstates of the Karmanov–Smirnov kinematic angular momentum operator [@ks92; @cdkm]. $$\label{ac1}
\vec{J} = -i[\vec{k}\times
\partial/\partial\vec{k}\,]-i[\vec{n}\times
\partial/\partial\vec{n}] +\frac{1}{2}\vec{\sigma},$$ where $\vec n$ is the spatial component of $\omega$ in the constituent rest frame ($\vec{\mathcal{P}}=\vec 0$). Although this form is written specifically in the constituent rest frame, it can be generalized to an arbitrary frame by a Lorentz boost.
Normally the generators of angular rotations in the LF formalism contain interactions, as in the Pauli–Lubanski formulation; however, the LF angular momentum operator can also be represented in the kinematical form (\[ac1\]) without interactions. The key term is the generator of rotations of the LF plane $-i[\vec{n}\times\partial/\partial\vec{n}]$ which replaces the interaction term; it appears only in the explicitly covariant formulation, where the dependence on $\vec{n}$ is present. Thus LFWFs satisfy all Lorentz symmetries of the front form, including boost invariance, and they are proper eigenstates of angular momentum.
In principle, one can solve for the LFWFs directly from the fundamental theory using methods such as discretized light-front quantization (DLCQ) [@Pauli:1985ps], the transverse lattice [@Bardeen:1979xx; @Dalley:2004rq; @Burkardt:2001jg], lattice gauge theory moments [@DelDebbio:1999mq], Dyson-Schwinger techniques [@Maris:2003vk], and Bethe–Salpeter techniques [@Brodsky:2003pw]. DLCQ has been remarkably successful in determining the entire spectrum and corresponding LFWFs in one space-one time field theories [@Gross:1997mx], including QCD(1+1) [@Hornbostel:1988fb] and SQCD(1+1) [@Harada:2004ck]. There are also DLCQ solutions for low sectors of Yukawa theory in physical space-time dimensions [@Brodsky:2002tp]. The DLCQ boundary conditions allow a truncation of the Fock space to finite dimensions while retaining the kinematic boost and Lorentz invariance of light-front quantization.
The transverse lattice method combines DLCQ for one-space and the light-front time dimensions with lattice theory in transverse space. It has recently provided the first computation of the generalized parton distributions of the pion [@Dalley:2004rq]. Dyson-Schwinger methods account well for running quark mass effects, and in principle can give important hadronic wavefunction information. One can also project known solutions of the Bethe–Salpeter equation to equal light-front time, thus producing hadronic light-front Fock wave functions [@Brodsky:2003pw]. Bakker and van Iersel have developed new methods to find solutions to bound-state light-front equations in ladder approximation [@vanIersel:2004gf]. Pauli has shown how one can construct an effective light-front Hamiltonian which acts within the valence Fock state sector alone [@Pauli:2003tb]. Another possible method is to construct the $q\bar q$ Green’s function using light-front Hamiltonian theory, DLCQ boundary conditions and Lippmann-Schwinger resummation. The zeros of the resulting resolvent projected on states of specific angular momentum $J_z$ can then generate the meson spectrum and their light-front Fock wavefunctions. As emphasized by Weinstein and Vary, new effective operator methods [@Weinstein:2004nr; @Zhan:2004ct] which have been developed for Hamiltonian theories in condensed matter and nuclear physics, could also be applied advantageously to light-front Hamiltonian. Reviews of nonperturbative light-front methods may be found in references [@Brodsky:1997de; @cdkm; @Dalley:ug; @Brodsky:2003gk].
Even without explicit solutions, much is known about the explicit form and structure of LFWFs. They can be matched to nonrelativistic Schrodinger wavefunctions at soft scales. At high momenta, the LFWFs at large $k_\perp$ and $x_i \to 1$ are constrained by arguments based on conformal symmetry, the operator product expansion, or perturbative QCD. The pattern of higher Fock states with extra gluons is given by ladder relations [@Antonuccio:1997tw]. The structure of Fock states with nonzero orbital angular momentum is also constrained by the Karmanov-Smirnov operator [@ks92].
AdS/CFT and Its Consequences for Near-Conformal Field Theory
============================================================
As shown by Maldacena [@Maldacena:1997re], there is a remarkable correspondence between large $N_C$ supergravity theory in a higher dimensional anti-de Sitter space and supersymmetric QCD in 4-dimensional space-time. String/gauge duality provides a framework for predicting QCD phenomena based on the conformal properties of the AdS/CFT correspondence. For example, Polchinski and Strassler [@Polchinski:2001tt] have shown that the power-law fall-off of hard exclusive hadron-hadron scattering amplitudes at large momentum transfer can be derived without the use of perturbation theory by using the scaling properties of the hadronic interpolating fields in the large-$r$ region of AdS space. Thus one can use the Maldacena correspondence to compute the leading power-law falloff of exclusive processes such as high-energy fixed-angle gluonium-gluonium scattering in supersymmetric QCD. The resulting predictions for hadron physics effectively coincide [@Polchinski:2001tt; @Brower:2002er; @Andreev:2002aw] with QCD dimensional counting rules [@Brodsky:1973kr; @Matveev:ra; @Brodsky:1974vy; @Brodsky:2002st]. Polchinski and Strassler [@Polchinski:2001tt] have also derived counting rules for deep inelastic structure functions at $x \to 1$ in agreement with perturbative QCD predictions [@Brodsky:1994kg] as well as Bloom-Gilman exclusive-inclusive duality. An interesting point is that the hard scattering amplitudes which are normally of order $\alpha_s^p$ in PQCD appear as order $\alpha_s^{p/2}$ in the supergravity predictions. This can be understood as an all-orders resummation of the effective potential [@Maldacena:1997re; @Rey:1998ik]. The near-conformal scaling properties of light-front wavefunctions thus lead to a number of important predictions for QCD which are normally discussed in the context of perturbation theory.
De Teramond and I [@Brodsky:2003px] have shown how one can use the scaling properties of the hadronic interpolating operator in the extended AdS/CFT space-time theory to determine the form of QCD wavefunctions at large transverse momentum $k^2_\perp \to \infty$ and at $x \to 1$ [@Brodsky:2003px]. The angular momentum dependence of the light-front wavefunctions also follow from the conformal properties of the AdS/CFT correspondence. The scaling and conformal properties of the correspondence leads to a hard component of the light-front Fock state wavefunctions of the form: $$\begin{aligned}
\psi_{n/h} (x_i, \vec k_{\perp i} , \lambda_i, l_{z i})
&\sim& \frac{(g_s~N_C)^{\frac{1}{2} (n-1)}}{\sqrt {N_C}}
~\prod_{i =1}^{n - 1} (k_{i \perp}^\pm)^{\vert l_{z i}\vert}\\[1ex]
&&\times \left[\frac{ \Lambda_o}{ {M}^2 - \sum _i\frac{\vec k_{\perp i}^2 +
m_i^2}{x_i} +
\Lambda_o^2} \right] ^{n +\sum_i \vert l_{z i} \vert -1}\ ,\nonumber
\label{eq:lfwfR}\end{aligned}$$ where $g_s$ is the string scale and $\Lambda_o$ represents the basic QCD mass scale. The scaling predictions agree with the perturbative QCD analysis given in the references [@Ji:2003fw], but the AdS/CFT analysis is performed at strong coupling without the use of perturbation theory. The form of these near-conformal wavefunctions can be used as an initial ansatz for a variational treatment of the light-front QCD Hamiltonian.
The recent investigations using the AdS/CFT correspondence has reawakened interest in the conformal features of QCD [@Brodsky:2003dn]. QCD becomes scale free and conformally symmetric in the analytic limit of zero quark mass and zero $\beta$ function [@Parisi:zy]. This correspondence principle provides a new tool, the conformal template, which is very useful for theory analyses, such as the expansion polynomials for distribution amplitudes [@Brodsky:1980ny; @Brodsky:1984xk; @Brodsky:1985ve; @Braun:2003rp], the non-perturbative wavefunctions which control exclusive processes at leading twist [@Lepage:1979zb; @Brodsky:2000dr]. The near-conformal behavior of QCD is also the basis for commensurate scale relations [@Brodsky:1994eh] which relate observables to each other without renormalization scale or scheme ambiguities [@Brodsky:2000cr]. An important example is the generalized Crewther relation [@Brodsky:1995tb]. In this method the effective charges of observables are related to each other in conformal gauge theory; the effects of the nonzero QCD $\beta-$ function are then taken into account using the BLM method [@Brodsky:1982gc] to set the scales of the respective couplings. The magnitude of the corresponding effective charge [@Brodsky:1997dh] $\alpha^{\rm exclusive}_s(Q^2) =
{F_\pi(Q^2)/ 4\pi Q^2 F^2_{\gamma \pi^0}(Q^2)}$ for exclusive amplitudes is connected to the effective charge $\alpha_\tau$ defined from $\tau$ hadronic decays [@Brodsky:2002nb] by a commensurate scale relation. Its magnitude: $\alpha^{\rm
exclusive}_s(Q^2) \sim 0.8$ at small $Q^2,$ is sufficiently large as to explain the observed magnitude of exclusive amplitudes such as the pion form factor using the asymptotic distribution amplitude [@Lepage:1980fj].
Theoretical [@vonSmekal:1997is; @Zwanziger:2003cf; @Howe:2002rb; @Howe:2003mp; @Furui:2003mz] and phenomenological [@Mattingly:ej; @Brodsky:2002nb; @Baldicchi:2002qm] evidence is now accumulating that the QCD coupling becomes constant at small virtuality; [*i.e.*]{}, $\alpha_s(Q^2)$ develops an infrared fixed point in contradiction to the usual assumption of singular growth in the infrared. If QCD running couplings are bounded, the integration over the running coupling is finite and renormalon resummations are not required. If the QCD coupling becomes scale-invariant in the infrared, then elements of conformal theory [@Braun:2003rp] become relevant even at relatively small momentum transfers.
Menke, Merino, and Rathsman [@Brodsky:2002nb] and I have presented a definition of a physical coupling for QCD which has a direct relation to high precision measurements of the hadronic decay channels of the $\tau^- \to \nu_\tau {\rm H}^-$. Let $R_{\tau}$ be the ratio of the hadronic decay rate to the leptonic one. Then $R_{\tau}\equiv R_{\tau}^0\left[1+\frac{\alpha_\tau}{\pi}\right]$, where $R_{\tau}^0$ is the zeroth order QCD prediction, defines the effective charge $\alpha_\tau$. The data for $\tau$ decays is well-understood channel by channel, thus allowing the calculation of the hadronic decay rate and the effective charge as a function of the $\tau$ mass below the physical mass. The vector and axial-vector decay modes can be studied separately. Using an analysis of the $\tau$ data from the OPAL collaboration [@Ackerstaff:1998yj], we have found that the experimental value of the coupling $\alpha_{\tau}(s)=0.621 \pm
0.008$ at $s = m^2_\tau$ corresponds to a value of $\alpha_{{\hbox{$\overline{\hbox{\tiny MS}}$}}}(M^2_Z) = (0.117$-$0.122) \pm 0.002$, where the range corresponds to three different perturbative methods used in analyzing the data. This result is in good agreement with the world average $\alpha_{{\hbox{$\overline{\hbox{\tiny MS}}$}}}(M^2_Z) = 0.117 \pm 0.002$. However, one also finds that the effective charge only reaches $\alpha_{\tau}(s)
\sim 0.9 \pm 0.1$ at $s=1\,{\rm GeV}^2$, and it even stays within the same range down to $s\sim0.5\,{\rm GeV}^2$. The effective coupling is close to constant at low scales, suggesting that physical QCD couplings become constant or “frozen" at low scales.
The near constancy of the effective QCD coupling at small scales helps explain the empirical success of dimensional counting rules for the power law fall-off of form factors and fixed angle scaling. As shown in the references [@Brodsky:1997dh; @Melic:2001wb], one can calculate the hard scattering amplitude $T_H$ for such processes [@Lepage:1980fj] without scale ambiguity in terms of the effective charge $\alpha_\tau$ or $\alpha_R$ using commensurate scale relations. The effective coupling is evaluated in the regime where the coupling is approximately constant, in contrast to the rapidly varying behavior from powers of $\alpha_{\rm s}$ predicted by perturbation theory (the universal two-loop coupling). For example, the nucleon form factors are proportional at leading order to two powers of $\alpha_{\rm s}$ evaluated at low scales in addition to two powers of $1/q^2$; The pion photoproduction amplitude at fixed angles is proportional at leading order to three powers of the QCD coupling. The essential variation from leading-twist counting-rule behavior then only arises from the anomalous dimensions of the hadron distribution amplitudes.
Light-Front Phenomenology
=========================
Light-front Fock state wavefunctions $\psi_{n/H}(x_i,\vec k_{\perp
i},\lambda_i)$ play an essential role in QCD phenomenology, generalizing Schrödinger wavefunctions $\psi_H(\vec k)$ of atomic physics to relativistic quantum field theory. Given the $\psi^{(\Lambda)}_{n/H},$ one can construct any spacelike electromagnetic, electroweak, or gravitational form factor or local operator product matrix element of a composite or elementary system from the diagonal overlap of the LFWFs [@BD80]. Exclusive semi-leptonic $B$-decay amplitudes involving timelike currents such as $B\rightarrow A \ell \bar{\nu}$ can also be evaluated exactly in the light-front formalism [@Brodsky:1998hn]. In this case, the timelike decay matrix elements require the computation of both the diagonal matrix element $n \rightarrow n$ where parton number is conserved and the off-diagonal $n+1\rightarrow n-1$ convolution such that the current operator annihilates a $q{\bar{q'}}$ pair in the initial $B$ wavefunction. This term is a consequence of the fact that the time-like decay $q^2 = (p_\ell + p_{\bar{\nu}} )^2 > 0$ requires a positive light-cone momentum fraction $q^+ > 0$. Conversely for space-like currents, one can choose $q^+=0$, as in the Drell-Yan-West representation of the space-like electromagnetic form factors. The light-front Fock representation thus provides an exact formulation of current matrix elements of local operators. In contrast, in equal-time Hamiltonian theory, one must evaluate connected time-ordered diagrams where the gauge particle or graviton couples to particles associated with vacuum fluctuations. Thus even if one knows the equal-time wavefunction for the initial and final hadron, one cannot determine the current matrix elements. In the case of the covariant Bethe-Salpeter formalism, the evaluation of the matrix element of the current requires the calculation of an infinite number of irreducible diagram contributions.
One can also prove directly from the LFWF overlap representation that the anomalous gravitomagnetic moment $B(0)$ vanishes for any composite system [@Brodsky:2000ii]. This property follows directly from the Lorentz boost properties of the light-front Fock representation and holds separately for each Fock state component.
Given the LFWFs, one can also compute the hadronic distribution amplitudes $\phi_H(x_i,Q)$ which control hard exclusive processes as an integral over the transverse momenta of the valence Fock state LFWFs [@Lepage:1980fj]. In addition one can compute the unintegrated parton distributions in $x$ and $k_\perp$ which underlie generalized parton distributions for nonzero skewness. As shown by Diehl, Hwang, and myself [@Brodsky:2000xy], one can give a complete representation of virtual Compton scattering $\gamma^* p \to \gamma
p$ at large initial photon virtuality $Q^2$ and small momentum transfer squared $t$ in terms of the light-cone wavefunctions of the target proton. One can then verify the identities between the skewed parton distributions $H(x,\zeta,t)$ and $E(x,\zeta,t)$ which appear in deeply virtual Compton scattering and the corresponding integrands of the Dirac and Pauli form factors $F_1(t)$ and $F_2(t)$ and the gravitational form factors $A_{q}(t)$ and $B_{q}(t)$ for each quark and anti-quark constituent. We have illustrated the general formalism for the case of deeply virtual Compton scattering on the quantum fluctuations of a fermion in quantum electrodynamics at one loop.
The integrals of the unintegrated parton distributions over transverse momentum at zero skewness provide the helicity and transversity distributions measurable in polarized deep inelastic experiments [@Lepage:1980fj]. For example, the polarized quark distributions at resolution $\Lambda$ correspond to $$\begin{aligned}
q_{\lambda_q/\Lambda_p}(x, \Lambda) &=& \sum_{n,q_a}
\int\prod^n_{j=1} dx_j d^2 k_{\perp j}\sum_{\lambda_i} \vert
\psi^{(\Lambda)}_{n/H}(x_i,\vec k_{\perp i},\lambda_i)\vert^2
\\
&& \times\ \delta\left(1- \sum^n_i x_i\right) \delta^{(2)}
\left(\sum^n_i \vec k_{\perp i}\right) \delta(x - x_q)\nonumber \\
&& \times\ \delta_{\lambda_a, \lambda_q} \Theta(\Lambda^2 -
\mathcal{M}^2_n)\ ,\nonumber\end{aligned}$$ where the sum is over all quarks $q_a$ which match the quantum numbers, light-cone momentum fraction $x,$ and helicity of the struck quark.
Hadronization phenomena such as the coalescence mechanism for leading heavy hadron production are computed from LFWF overlaps. Diffractive jet production provides another phenomenological window into the structure of LFWFs. However, as shown recently [@Brodsky:2002ue], some leading-twist phenomena such as the diffractive component of deep inelastic scattering, single spin asymmetries, nuclear shadowing and antishadowing cannot be computed from the LFWFs of hadrons in isolation.
As shown by Raufeisen and myself [@Raufeisen:2004dg], one can construct a “light-front density matrix" from the complete set of light-front wavefunctions which is a Lorentz scalar. This form can be used at finite temperature to give a boost invariant formulation of thermodynamics. At zero temperature the light-front density matrix is directly connected to the Green’s function for quark propagation in the hadron as well as deeply virtual Compton scattering. One can also define a light-front partition function $Z_{LF}$ as an outer product of light-front wavefunctions. The deeply virtual Compton amplitude and generalized parton distributions can then be computed as the trace $Tr[Z_{LF}
\mathcal{O}],$ where $\mathcal{O}$ is the appropriate local operator [@Raufeisen:2004dg]. This partition function formalism can be extended to multi-hadronic systems and systems in statistical equilibrium to provide a Lorentz-invariant description of relativistic thermodynamics [@Raufeisen:2004dg].
Complications from Final-State Interactions
===========================================
Although it has been more than 35 years since the discovery of Bjorken scaling [@Bjorken:1968dy] in electroproduction [@Bloom:1969kc], there are still many issues in deep-inelastic lepton scattering and Drell-Yan reactions which are only now being understood from a fundamental basis in QCD. In contrast to the parton model, final-state interactions in deep inelastic scattering and initial state interactions in hard inclusive reactions cannot be neglected—leading to $T-$odd single spin asymmetries [@Brodsky:2002cx; @Belitsky:2002sm; @Collins:2002kn] and diffractive contributions [@Brodsky:2002ue; @Brodsky:2004hi]. This in turn implies that the structure functions measured in deep inelastic scattering are not probability distributions computed from the square of the LFWFs computed in isolation [@Brodsky:2002ue].
It is usually assumed—following the parton model—that the leading-twist structure functions measured in deep inelastic lepton-proton scattering are simply the probability distributions for finding quarks and gluons in the target nucleon. In fact, gluon exchange between the fast, outgoing quarks and the target spectators affects the leading-twist structure functions in a profound way, leading to diffractive leptoproduction processes, shadowing of nuclear structure functions, and target spin asymmetries. In particular, the final-state interactions from gluon exchange between the outgoing quark and the target spectator system lead to single-spin asymmetries in semi-inclusive deep inelastic lepton-proton scattering at leading twist in perturbative QCD; [*i.e.*]{}, the rescattering corrections of the struck quark with the target spectators are not power-law suppressed at large photon virtuality $Q^2$ at fixed $x_{bj}$ [@Brodsky:2002cx]. The final-state interaction from gluon exchange occurring immediately after the interaction of the current also produces a leading-twist diffractive component to deep inelastic scattering $\ell p \to
\ell^\prime p^\prime X$ corresponding to color-singlet exchange with the target system; this in turn produces shadowing and anti-shadowing of the nuclear structure functions [@Brodsky:2002ue; @Brodsky:1989qz]. In addition, one can show that the pomeron structure function derived from diffractive DIS has the same form as the quark contribution of the gluon structure function [@Brodsky:2004hi]. The final-state interactions occur at a short light-cone time $\Delta\tau \simeq
1/\nu$ after the virtual photon interacts with the struck quark, producing a nontrivial phase. Here $\nu = p \cdot q/M$ is the laboratory energy of the virtual photon. Thus none of the above phenomena is contained in the target light-front wave functions computed in isolation. In particular, the shadowing of nuclear structure functions is due to destructive interference effects from leading-twist diffraction of the virtual photon, physics not included in the nuclear light-front wave functions. Thus the structure functions measured in deep inelastic lepton scattering are affected by final-state rescattering, modifying their connection to light-front probability distributions. Some of these results can be understood by augmenting the light-front wave functions with a gauge link, but with a gauge potential created by an external field created by the virtual photon $q \bar q$ pair current [@Belitsky:2002sm]. The gauge link is also process dependent [@Collins:2002kn], so the resulting augmented LFWFs are not universal.
Single-spin asymmetries in hadronic reactions provide a remarkable window to QCD mechanisms at the amplitude level. In general, single-spin asymmetries measure the correlation of the spin projection of a hadron with a production or scattering plane [@Sivers:1990fh]. Such correlations are odd under time reversal, and thus they can arise in a time-reversal invariant theory only when there is a phase difference between different spin amplitudes. Specifically, a nonzero correlation of the proton spin normal to a production plane measures the phase difference between two amplitudes coupling the proton target with $J^z_p = \pm {1\over
2}$ to the same final-state. The calculation requires the overlap of target light-front wavefunctions with different orbital angular momentum: $\Delta L^z = 1;$ thus a single-spin asymmetry (SSA) provides a direct measure of orbital angular momentum in the QCD bound state.
The observation that $\simeq 10\%$ of the positron-proton deep inelastic cross section at HERA is diffractive [@Derrick:1993xh; @Ahmed:1994nw] points to the importance of final-state gauge interactions as well as a new perspective to the nature of the hard pomeron. The same interactions are responsible for nuclear shadowing and Sivers-type single-spin asymmetries in semi-inclusive deep inelastic scattering and in Drell-Yan reactions. These new observations are in contradiction to parton model and light-cone gauge based arguments that final state interactions can be ignored at leading twist. The modifications of the deep inelastic lepton-proton cross section due to final state interactions are consistent with color-dipole based scattering models and imply that the traditional identification of structure functions with the quark probability distributions computed from the wavefunctions of the target hadron computed in isolation must be modified.
The shadowing and antishadowing of nuclear structure functions in the Gribov-Glauber picture is due to the destructive and constructive coherence, respectively, of amplitudes arising from the multiple-scattering of quarks in the nucleus. The effective quark-nucleon scattering amplitude includes Pomeron and Odderon contributions from multi-gluon exchange as well as Reggeon quark exchange contributions [@Brodsky:1989qz]. The multiscattering nuclear processes from Pomeron, Odderon and pseudoscalar Reggeon exchange leads to shadowing and antishadowing of the electromagnetic nuclear structure functions in agreement with measurements. An important conclusion is that antishadowing is nonuniversal—different for quarks and antiquarks and different for strange quarks versus light quarks. This picture thus leads to substantially different nuclear effects for charged and neutral currents, particularly in anti-neutrino reactions, thus affecting the extraction of the weak-mixing angle $\sin^2\theta_W$ and the constant $\rho_o$ which are determined from the ratios of charged and neutral current contributions in deep inelastic neutrino and anti-neutrino scattering. In recent work, Schmidt, Yang, and I [@Brodsky:2004qa] find that a substantial part of the difference between the standard model prediction and the anomalous NuTeV result [@Zeller:2001hh] for $\sin^2\theta_W$ could be due to the different behavior of nuclear antishadowing for charged and neutral currents. Detailed measurements of the nuclear dependence of charged, neutral and electromagnetic DIS processes are needed to establish the distinctive phenomenology of shadowing and antishadowing and to make the NuTeV results definitive.
Other QCD Phenomenology Related to Light-Front Wavefunctions
============================================================
A number of important phenomenological properties follow directly from the structure of light-front wavefunctions in gauge theory.
(1). [*Intrinsic Glue and Sea.*]{} Even though QCD was motivated by the successes of the parton model, QCD predicts many new features which go well beyond the simple three-quark description of the proton. Since the number of Fock components cannot be limited in relativity and quantum mechanics, the nonperturbative wavefunction of a proton contains gluons and sea quarks, including heavy quarks at any resolution scale. Thus there is no scale $Q_0$ in deep inelastic lepton-proton scattering where the proton can be approximated by its valence quarks. Empirical evidence also continues to accumulate that the strange-antistrange quark distributions are not symmetric in the proton [@Brodsky:1996hc; @Kretzer:2004bg].
\(2) [*Intrinsic Charm.*]{} [@Brodsky:1980pb] The probability for Fock states of a light hadron such as the proton to have an extra heavy quark pair decreases as $1/m^2_Q$ in non-Abelian gauge theory [@Franz:2000ee; @Brodsky:1984nx]. The relevant matrix element is the cube of the QCD field strength $G^3_{\mu \nu}.$ This is in contrast to abelian gauge theory where the relevant operator is $F^4_{\mu \nu}$ and the probability of intrinsic heavy leptons in QED bound state is suppressed as $1/m^4_\ell.$ The intrinsic Fock state probability is maximized at minimal off-shellness. It is useful to define the transverse mass $m_{\perp i}= \sqrt{k^2_{\perp
i} + m^2_i}.$ The maximum probability then occurs at $x_i = {
m^i_\perp /\sum^n_{j = 1} m^j_\perp}$; [*i.e.*]{}, when the constituents have minimal invariant mass and equal rapidity. Thus the heaviest constituents have the highest momentum fractions and the highest $x_i$. Intrinsic charm thus predicts that the charm structure function has support at large $x_{bj}$ in excess of DGLAP extrapolations [@Brodsky:1980pb]; this is in agreement with the EMC measurements [@Harris:1995jx]. It predicts leading charm hadron production and fast charmonium production in agreement with measurements [@Anjos:2001jr]. In fact even double $J/\psi's$ are produced at large $x_F$, consistent with the dissociation and coalescence of double intrinsic Fock states of the projectile LFWF [@Vogt:1995tf].
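As a rough numerical illustration of this kinematics (the constituent transverse masses used here are illustrative assumptions chosen only for the estimate), taking $m_{\perp q} \simeq 0.45$ GeV for the light quarks and $m_{\perp c} \simeq 1.8$ GeV for the charm quarks in a $|uud c \bar c\rangle$ Fock state gives $$x_c \simeq \frac{m_{\perp c}}{3 m_{\perp q} + 2 m_{\perp c}} = \frac{1.8}{3(0.45)+2(1.8)} \approx 0.36,$$ so that in this Fock component the $c$ and $\bar c$ together carry roughly $70\%$ of the proton's light-cone momentum.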
The proton wavefunction thus contains charm quarks with large light-cone momentum fractions $x$. The recent observation by the SELEX experiment [@Ocherashvili:2004hi; @Mattson:2002vu] showing that doubly-charmed baryons such as the $\Xi_{cc}^+$ and hence two charmed quarks are produced at large $x_F$ and small $p_T$ in hadron-nucleus collisions provides additional and compelling evidence for the diffractive dissociation of complex off-shell Fock states of the projectile. These observations contradict the traditional view that sea quarks and gluons are always produced perturbatively via DGLAP evolution. Intrinsic charm can also explain the $J/\psi \to \rho \pi$ puzzle [@Brodsky:1997fj]. It also affects the extraction of suppressed CKM matrix elements in $B$ decays [@Brodsky:2001yt].
\(3) [*Hidden Color.*]{} A rigorous prediction of QCD is the “hidden color” of nuclear wavefunctions at short distances. QCD predicts that nuclear wavefunctions contain “hidden color” [@Brodsky:1983vf] components: color configurations not dual to the usual nucleonic degrees of freedom. In general, the six-quark wavefunction of a deuteron is a mixture of five different color-singlet states [@Brodsky:1983vf]. The dominant color configuration at large distances corresponds to the usual proton-neutron bound state where transverse momenta are of order ${\vec k}^2 \sim 2 M_d \epsilon_{BE}.$ However, at small impact space separation, all five Fock color-singlet components eventually acquire equal weight, [*i.e.*]{}, the deuteron wavefunction evolves to 80% hidden color.
At high $Q^2$ the deuteron form factor is sensitive to wavefunction configurations where all six quarks overlap within an impact separation $b_{\perp i} < \mathcal{O} (1/Q).$ Since the deuteron form factor contains the probability amplitudes for the proton and neutron to scatter from $p/2$ to $p/2+q/2$, it is natural to define the reduced deuteron form factor[@Brodsky:1976rz; @Brodsky:1983vf] $$f_d(Q^2) \equiv {F_d(Q^2)\over
F_{1N} \left(Q^2\over 4\right)\, F_{1N}\,\left(Q^2\over
4\right)}.$$ The effect of nucleon compositeness is removed from the reduced form factor. QCD then predicts the scaling $$f_d(Q^2) \sim {1\over Q^2} ;$$ [*i.e.*]{}, the same scaling law as a meson form factor. This scaling is consistent with experiment for $Q^2 > 1~{\rm GeV}^2.$ In fact as seen in Fig. \[reduced\], the deuteron reduced form factor contains two components: (1) a fast-falling component characteristic of nuclear binding with probability $85\%$, and (2) a hard contribution falling as a monopole with a scale of order $0.5~{\rm
GeV}$ with probability $15\%.$ The normalization of the deuteron form factor observed at large $Q^2$ [@Arnold:1975dd], as well as the presence of two mass scales in the scaling behavior of the reduced deuteron form factor [@Brodsky:1976rz] thus suggests sizable hidden-color Fock state contributions such as ${\,\left|\,{(uud)_{8_C} (ddu)_{8_C}}\right\rangle}$ with probability of order $15\%$ in the deuteron wavefunction [@Farrar:1991qi].
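As a simple numerical illustration of this scaling (the $Q^2$ values here are chosen arbitrarily), the leading-twist prediction $f_d \sim 1/Q^2$ implies $$\frac{f_d(Q^2 = 4~{\rm GeV}^2)}{f_d(Q^2 = 2~{\rm GeV}^2)} \simeq \frac{2}{4} = 0.5,$$ up to the slowly varying logarithmic corrections from the running coupling and the evolution of the distribution amplitudes.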
\(4) [*Color transparency.*]{} The small transverse size fluctuations of a hadron wavefunction with a small color dipole moment will have minimal interactions in a nucleus [@Bertsch:1981py; @Brodsky:1988xz].
This has been verified in the case of diffractive dissociation of a high energy pion into dijets $\pi A \to q \bar q A^\prime$ in which the nucleus is left in its ground state [@Ashery:2002jx]. When the hadronic jets have balancing but high transverse momentum, one studies the small size fluctuation of the incident pion. The diffractive dissociation cross section is found to be proportional to $A^2$ in agreement with the color transparency prediction. Color transparency has also been observed in diffractive electroproduction of $\rho$ mesons [@Borisov:2002rd] and in quasi-elastic $p A \to
p p (A-1)$ scattering [@Aclander:2004zm] where only the small size fluctuations of the hadron wavefunction enters the hard exclusive scattering amplitude. In the latter case an anomaly occurs at $\sqrt s \simeq 5 $ GeV, most likely signaling a resonance effect at the charm threshold [@Brodsky:1987xw].
Color transparency, as evidenced by the Fermilab measurements of diffractive dijet production, implies that a pion can interact coherently throughout a nucleus with minimal absorption, in dramatic contrast to traditional Glauber theory based on a fixed $\sigma_{\pi
n}$ cross section. Color transparency gives direct validation of the gauge interactions of QCD.
Hard Exclusive Processes and Form Factors at High $Q^2$
=========================================================
Leading-twist PQCD predictions for hard exclusive amplitudes [@Lepage:1980fj] are written in a factorized form as the product of hadron distribution amplitudes $\phi_I(x_i,Q)$ for each hadron $I$ convoluted with the hard scattering amplitude $T_H$ obtained by replacing each hadron with collinear on-shell quarks with light-front momentum fractions $x_i = k^+_i/P^+.$ The hadron distribution amplitudes are obtained by integrating the $n-$parton valence light-front wavefunctions: $$\phi(x_i,Q) =
\int^Q \Pi^{n-1}_{i=1} d^2 k_{\perp i} ~ \psi_{\rm
val}(x_i,k_\perp).$$ Thus the distribution amplitudes are $L_z=0$ projections of the LF wavefunction, and the sum of the spin projections of the valence quarks must equal the $J_z$ of the parent hadron. Higher orbital angular momentum components lead to power-law suppressed exclusive amplitudes [@Lepage:1980fj; @Ji:2003fw]. Since quark masses can be neglected at leading twist in $T_H$, one has quark helicity conservation, and thus, finally, hadron-helicity conservation: the sum of initial hadron helicities equals the sum of final helicities. In particular, since the hadron-helicity violating Pauli form factor is computed from states with $\Delta L_z = \pm 1,$ PQCD predicts $F_2(Q^2)/F_1(Q^2) \sim 1/Q^2 $ \[modulo logarithms\]. A detailed analysis shows that the asymptotic fall-off takes the form $F_2(Q^2)/F_1(Q^2) \sim \log^2 Q^2/Q^2$ [@Belitsky:2002kj]. One can also construct other models [@Brodsky:2003pw] incorporating the leading-twist perturbative QCD prediction which are consistent with the JLab polarization transfer data [@Jones:1999rz] for the ratio of proton Pauli and Dirac form factors. This analysis can also be extended to study the spin structure of scattering amplitudes at large transverse momentum and other processes which are dependent on the scaling and orbital angular momentum structure of light-front wavefunctions. Recently, Afanasev, Carlson, Chen, Vanderhaeghen, and I [@Chen:2004tw] have shown that the interfering two-photon exchange contribution to elastic electron-proton scattering, including inelastic intermediate states, can account for the discrepancy between Rosenbluth and Jefferson Lab spin transfer polarization data [@Jones:1999rz].
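For orientation, a standard benchmark implied by this definition is the asymptotic limit of the ERBL evolution for the pion: in the convention of Ref. [@Lepage:1980fj], where $\int_0^1 dx\, \phi_\pi(x,Q) = f_\pi/(2\sqrt{3})$ with $f_\pi \simeq 93$ MeV, the distribution amplitude evolves to $$\phi_\pi(x, Q \to \infty) = \sqrt{3}\, f_\pi\, x(1-x),$$ so that any nonperturbative model of the valence LFWF must reduce to this form at very large $Q$.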
A crucial prediction of models for proton form factors is the relative phase of the timelike form factors, since this can be measured from the proton single spin symmetries in $e^+ e^- \to p
\bar p$ or $p \bar p \to \ell \bar \ell$ [@Brodsky:2003gs]. Carl Carlson, John Hiller, Dae Sung Hwang and I [@Brodsky:2003gs] have shown that measurements of the proton’s polarization strongly discriminate between the analytic forms of models which fit the proton form factors in the spacelike region. In particular, the single-spin asymmetry normal to the scattering plane measures the relative phase difference between the timelike $G_E$ and $G_M$ form factors. The dependence on proton polarization in the timelike region is expected to be large in most models, of the order of several tens of percent. The continuation of the spacelike form factors to the timelike domain $t = s > 4 M^2_p$ is very sensitive to the analytic form of the form factors; in particular it is very sensitive to the form of the PQCD predictions including the corrections to conformal scaling. The forward-backward $\ell^+
\ell^-$ asymmetry can measure the interference of one-photon and two-photon contributions to $\bar p p \to \ell^+ \ell^-.$
As discussed in section 2, dimensional counting rules for hard exclusive processes have now been derived in the context of nonperturbative QCD using the AdS/CFT correspondence. The data for virtually all measured hard scattering processes appear to be consistent with the conformal predictions of QCD. For example, recent measurements of the deuteron photodisintegration cross section $\gamma d \to p n$ follow the leading-twist $s^{11}$ scaling behavior at large momentum transfers in the few GeV region [@Holt:1990ze; @Bochna:1998ca; @Rossi:2004qm]. This adds further evidence for the dominance of leading-twist quark-gluon subprocesses and the near conformal behavior of the QCD coupling. As discussed above, the evidence that the running coupling has constant fixed-point behavior, together with BLM scale fixing, could help explain the near conformal scaling behavior of the fixed-CM angle cross sections. The angular distribution of hard exclusive processes is generally consistent with quark interchange, as predicted from large $N_C$ considerations.
New Directions
==============
As I have emphasized in this talk, the light-front wavefunctions of hadrons are the central elements of QCD phenomenology, describing bound states in terms of their fundamental quark and gluon degrees of freedom at the amplitude level. Given the light-front wavefunctions one can compute quark and gluon distributions, distribution amplitudes, generalized parton distributions, form factors, and matrix elements of local currents such as semileptonic $B$ decays. The diffractive dissociation of hadrons on nucleons or nuclei into jets or leading hadrons provides new measures of the LFWFs of the projectile as well as tests of color transparency and intrinsic charm.
It is thus imperative to compute the light-front wavefunctions from first principles in QCD. Lattice gauge theory can provide moments of the distribution amplitudes by evaluating vacuum-to-hadron matrix elements of local operators [@DelDebbio:1999mq]. The transverse lattice is also providing new nonperturbative information [@Dalley:2004rq; @Burkardt:2001jg].
The DLCQ method is also a first-principles method for solving nonperturbative QCD; at finite harmonic resolution $K$ the DLCQ Hamiltonian acts in physical Minkowski space as a finite-dimensional Hermitian matrix in Fock space. The DLCQ Heisenberg equation is Lorentz-frame independent and has the advantage of providing not only the spectrum of hadrons, but also the complete set of LFWFs for each hadron eigenstate.
An important feature of the light-front formalism is that $J_z$ is conserved; thus one can simplify the DLCQ method by projecting the full Fock space onto states with specific angular momentum. As shown in ref. [@Brodsky:2003pw], the Karmanov-Smirnov operator uniquely specifies the form of the angular dependence of the light-front wavefunctions, allowing one to transform the light-front Hamiltonian equations to differential equations acting on scalar forms. A complementary method would be to construct the $T$-matrix for asymptotic $q \bar q$ or $qqq$ or gluonium states using the light-front analog of the Lippmann-Schwinger method. This allows one to focus on states with the specific global quantum numbers and spin of a given hadron. The zeros of the resulting resolvent then provide the hadron spectrum and the respective light-front Fock state projections.
The AdS/CFT correspondence has now provided important new information on the short-distance structure of hadronic LFWFs; one obtains conformal constraints which are not dependent on perturbation theory. The large $k_\perp$ fall-off of the valence LFWFs is also rigorously determined by consistency with the evolution equations for the hadron distribution amplitudes [@Lepage:1980fj]. Similarly, one can also use the structure of the evolution equations to constrain the $x \to 1$ endpoint behavior of the LFWFs. One can use these strong constraints on the large $k_\perp$ and $x \to 1$ behavior to model the LFWFs. Such forms can also be used as the initial approximations to the wavefunctions needed for variational methods which minimize the expectation value of the light-front Hamiltonian.
Acknowledgments {#acknowledgments .unnumbered}
===============
I wish to thank Professors Ben Bakker, Piet Mulders, and their colleagues at the Vrije Universiteit in Amsterdam for hosting this outstanding meeting. This talk is based on collaborations with Guy de Teramond, Markus Diehl, Rikard Enberg, John Hiller, Paul Hoyer, Dae Sung Hwang, Gunnar Ingelman, Volodya Karmanov, Gary McCartor, Sven Menke, Carlos Merino, Joerg Raufeisen, and Johan Rathsman.
[99]{}
P. A. M. Dirac, Rev. Mod. Phys. [**21**]{}, 392 (1949). S. J. Brodsky, H. C. Pauli and S. S. Pinsky, Phys. Rept. [**301**]{}, 299 (1998) \[arXiv:hep-ph/9705477\]. P. P. Srivastava and S. J. Brodsky, Phys. Rev. D [**64**]{}, 045006 (2001) \[arXiv:hep-ph/0011372\]. P. P. Srivastava and S. J. Brodsky, Phys. Rev. D [**66**]{}, 045019 (2002) \[arXiv:hep-ph/0202141\]. R. L. Jaffe and A. Manohar, Nucl. Phys. B [**337**]{}, 509 (1990). X. D. Ji, Nucl. Phys. Proc. Suppl. [**119**]{}, 41 (2003) \[arXiv:hep-lat/0211016\]. S. J. Brodsky and S. D. Drell, Phys. Rev. D [**22**]{}, 2236 (1980). J. Carbonell, B. Desplanques, V. A. Karmanov, and J. F. Mathiot, Phys. Rep. [**300**]{}, 215 (1998) \[arXiv:nucl-th/9804029\]. S. J. Brodsky, J. R. Hiller, D. S. Hwang and V. A. Karmanov, Phys. Rev. D [**69**]{}, 076001 (2004) \[arXiv:hep-ph/0311218\]. V. A. Karmanov and A.V. Smirnov, Nucl. Phys. A [**546**]{}, 691 (1992).
H. C. Pauli and S. J. Brodsky, Phys. Rev. D [**32**]{}, 2001 (1985). W. A. Bardeen, R. B. Pearson and E. Rabinovici, Phys. Rev. D [**21**]{}, 1037 (1980). S. Dalley, arXiv:hep-ph/0409139. M. Burkardt and S. Dalley, Prog. Part. Nucl. Phys. [**48**]{}, 317 (2002) \[arXiv:hep-ph/0112007\]. L. Del Debbio, M. Di Pierro, A. Dougall and C. T. Sachrajda \[UKQCD collaboration\], Nucl. Phys. Proc. Suppl. [**83**]{}, 235 (2000) \[arXiv:hep-lat/9909147\]. P. Maris and C. D. Roberts, Int. J. Mod. Phys. E [**12**]{}, 297 (2003) \[arXiv:nucl-th/0301049\]. D. J. Gross, A. Hashimoto and I. R. Klebanov, Phys. Rev. D [**57**]{}, 6420 (1998) \[arXiv:hep-th/9710240\]. K. Hornbostel, S. J. Brodsky and H. C. Pauli, Phys. Rev. D [**41**]{}, 3814 (1990). M. Harada, J. R. Hiller, S. Pinsky and N. Salwen, Phys. Rev. D [**70**]{}, 045015 (2004) \[arXiv:hep-th/0404123\]. S. J. Brodsky, J. R. Hiller and G. McCartor, Annals Phys. [**305**]{}, 266 (2003) \[arXiv:hep-th/0209028\]. M. van Iersel and B. L. G. Bakker, arXiv:hep-ph/0407318. B. L. G. Bakker, M. van Iersel, and F. Pijlman, Few-Body Systems [**33**]{}, 27 (2003). H. C. Pauli, arXiv:hep-ph/0312300. M. Weinstein, arXiv:hep-th/0410113. H. Zhan, A. Nogga, B. R. Barrett, J. P. Vary and P. Navratil, Phys. Rev. C [**69**]{}, 034302 (2004) \[arXiv:nucl-th/0401047\].
S. Dalley, Nucl. Phys. B (Proc. Suppl.) [**108**]{}, 145 (2002). S. J. Brodsky, Published in [*Nagoya 2002, Strong coupling gauge theories and effective field theories, 1-18.*]{} \[arXiv:hep-th/0304106\]. F. Antonuccio, S. J. Brodsky and S. Dalley, Phys. Lett. B [**412**]{}, 104 (1997) \[arXiv:hep-ph/9705413\]. J. M. Maldacena, Adv. Theor. Math. Phys. [**2**]{}, 231 (1998) \[Int. J. Theor.Phys. [**38**]{}, 1113 (1999)\] \[arXiv:hep-th/9711200\]. J. Polchinski and M. J. Strassler, Phys. Rev. Lett. [**88**]{}, 031601 (2002) \[arXiv:hep-th/0109174\]. R. C. Brower and C. I. Tan, Nucl. Phys. B [**662**]{}, 393 (2003) \[arXiv:hep-th/0207144\]. O. Andreev, Phys. Rev. D [**67**]{}, 046001 (2003) \[arXiv:hep-th/0209256\]. S. J. Brodsky and G. R. Farrar, Phys. Rev. Lett. [**31**]{}, 1153 (1973). V. A. Matveev, R. M. Muradian and A. N. Tavkhelidze, Lett. Nuovo Cim. [**7**]{}, 719 (1973). S. J. Brodsky and G. R. Farrar, Phys. Rev. D [**11**]{}, 1309 (1975). S. J. Brodsky, Published in [*Newport News 2002, Exclusive processes at high momentum transfer 1-33.*]{} \[arXiv:hep-ph/0208158.\] S. J. Brodsky, M. Burkardt and I. Schmidt, Nucl. Phys. B [**441**]{}, 197 (1995) \[arXiv:hep-ph/9401328\]. S. J. Rey and J. T. Yee, Eur. Phys. J. C [**22**]{}, 379 (2001) \[arXiv:hep-th/9803001\]. S. J. Brodsky and G. F. de Teramond, Phys. Lett. B [**582**]{}, 211 (2004) \[arXiv:hep-th/0310227\].
X. d. Ji, J. P. Ma and F. Yuan, Phys. Rev. Lett. [**90**]{}, 241601 (2003) \[arXiv:hep-ph/0301141\]. S. J. Brodsky, SLAC-PUB-10206 [*Invited talk at International Conference on Color Confinement and Hadrons in Quantum Chromodynamics - Confinement 2003, Wako, Japan, 21-24 Jul 2003*]{}
G. Parisi, Phys. Lett. B [**39**]{}, 643 (1972). S. J. Brodsky, Y. Frishman, G. P. Lepage and C. Sachrajda, [Phys. Lett.]{} [**91B**]{}, 239 (1980).
S. J. Brodsky, P. Damgaard, Y. Frishman and G. P. Lepage, Phys. Rev. D [**33**]{}, 1881 (1986). S. J. Brodsky, Y. Frishman and G. P. Lepage, Phys. Lett. B [**167**]{}, 347 (1986). V. M. Braun, G. P. Korchemsky and D. Muller, Prog. Part. Nucl. Phys. [**51**]{}, 311 (2003) \[arXiv:hep-ph/0306057\]. G. P. Lepage and S. J. Brodsky, Phys. Lett. B [**87**]{}, 359 (1979). S. J. Brodsky and G. P. Lepage, SLAC-PUB-4947 [*In \*A.H. Mueller, (ed): Perturbative Quantum Chromodynamics, 1989, p. 93-240,*]{} and S. J. Brodsky, SLAC-PUB-8649 [*In \*Shifman, M. (ed.): At the frontier of particle physics, Handbook of QCD: Boris Ioffe Festschrift, vol. 2\*, 2001, p 1343-1444.*]{}
S. J. Brodsky and H. J. Lu, Phys. Rev. D [**51**]{}, 3652 (1995) \[arXiv:hep-ph/9405218\]. S. J. Brodsky, E. Gardi, G. Grunberg and J. Rathsman, Phys. Rev. D [**63**]{}, 094017 (2001) \[arXiv:hep-ph/0002065\]. S. J. Brodsky, G. T. Gabadadze, A. L. Kataev and H. J. Lu, Phys. Lett. [**B372**]{}, 133 (1996) \[arXiv:hep-ph/9512367\]. S. J. Brodsky, G. P. Lepage and P. B. Mackenzie, Phys. Rev. D [**28**]{}, 228 (1983). S. J. Brodsky, C. R. Ji, A. Pang and D. G. Robertson, Phys. Rev. D [**57**]{}, 245 (1998) \[arXiv:hep-ph/9705221\]. S. J. Brodsky, S. Menke, C. Merino and J. Rathsman, Phys. Rev. D [**67**]{}, 055008 (2003) \[arXiv:hep-ph/0212078\]. G. P. Lepage and S. J. Brodsky, Phys. Rev. D [**22**]{}, 2157 (1980). L. von Smekal, R. Alkofer and A. Hauck, Phys. Rev. Lett. [**79**]{}, 3591 (1997) \[arXiv:hep-ph/9705242\]. D. Zwanziger, Phys. Rev. D [**69**]{}, 016002 (2004) \[arXiv:hep-ph/0303028\]. D. M. Howe and C. J. Maxwell, Phys. Lett. B [**541**]{}, 129 (2002) \[arXiv:hep-ph/0204036\]. D. M. Howe and C. J. Maxwell, Phys. Rev. D [**70**]{}, 014002 (2004) \[arXiv:hep-ph/0303163\]. S. Furui and H. Nakajima, arXiv:hep-lat/0309166. A. C. Mattingly and P. M. Stevenson, Phys. Rev. D [**49**]{}, 437 (1994) \[arXiv:hep-ph/9307266\].
M. Baldicchi and G. M. Prosperi, Phys. Rev. D [**66**]{}, 074008 (2002) \[arXiv:hep-ph/0202172\]. K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J. C [**7**]{}, 571 (1999) \[arXiv:hep-ex/9808019\]. B. Melic, B. Nizic and K. Passek, Phys. Rev. D [**65**]{}, 053020 (2002) \[arXiv:hep-ph/0107295\].
S. J. Brodsky and D. S. Hwang, Nucl. Phys. B [**543**]{}, 239 (1999) \[arXiv:hep-ph/9806358\]. S. J. Brodsky, D. S. Hwang, B. Q. Ma and I. Schmidt, Nucl. Phys. B [**593**]{}, 311 (2001) \[arXiv:hep-th/0003082\]. S. J. Brodsky, M. Diehl and D. S. Hwang, Nucl. Phys. B [**596**]{}, 99 (2001) \[arXiv:hep-ph/0009254\]. S. J. Brodsky, P. Hoyer, N. Marchal, S. Peigne and F. Sannino, Phys. Rev. D [**65**]{}, 114025 (2002) \[arXiv:hep-ph/0104291\].
J. Raufeisen and S. J. Brodsky, arXiv:hep-th/0408108. J. D. Bjorken, Phys. Rev. [**179**]{}, 1547 (1969). E. D. Bloom [*et al.*]{}, Phys. Rev. Lett. [**23**]{}, 930 (1969). S. J. Brodsky, D. S. Hwang and I. Schmidt, Phys. Lett. B [**530**]{}, 99 (2002) \[arXiv:hep-ph/0201296\]. A. V. Belitsky, X. Ji and F. Yuan, Nucl. Phys. B [**656**]{}, 165 (2003) \[arXiv:hep-ph/0208038\]. J. C. Collins, Phys. Lett. B [**536**]{}, 43 (2002) \[arXiv:hep-ph/0204004\].
S. J. Brodsky, R. Enberg, P. Hoyer and G. Ingelman, arXiv:hep-ph/0409119. S. J. Brodsky and H. J. Lu, Phys. Rev. Lett. [**64**]{}, 1342 (1990). D. W. Sivers, Phys. Rev. D [**43**]{}, 261 (1991). M. Derrick [*et al.*]{} \[ZEUS Collaboration\], Phys. Lett. B [**315**]{}, 481 (1993). T. Ahmed [*et al.*]{} \[H1 Collaboration\], Nucl. Phys. B [**429**]{}, 477 (1994).
S. J. Brodsky, I. Schmidt and J. J. Yang, arXiv:hep-ph/0409279. G. P. Zeller [*et al.*]{} \[NuTeV Collaboration\], Phys. Rev. Lett. [**88**]{}, 091802 (2002) \[Erratum-ibid. [**90**]{}, 239902 (2003)\] \[arXiv:hep-ex/0110059\]. S. J. Brodsky and B. Q. Ma, Phys. Lett. B [**381**]{}, 317 (1996) \[arXiv:hep-ph/9604393\]. S. Kretzer, arXiv:hep-ph/0408287.
S. J. Brodsky, P. Hoyer, C. Peterson and N. Sakai, Phys. Lett. B [**93**]{}, 451 (1980). M. Franz, V. Polyakov and K. Goeke, Phys. Rev. D [**62**]{}, 074024 (2000) \[arXiv:hep-ph/0002240\]. S. J. Brodsky, J. C. Collins, S. D. Ellis, J. F. Gunion and A. H. Mueller, DOE/ER/40048-21 P4 [*Proc. of 1984 Summer Study on the SSC, Snowmass, CO, Jun 23 - Jul 13, 1984*]{}
B. W. Harris, J. Smith and R. Vogt, Nucl. Phys. B [**461**]{}, 181 (1996) \[arXiv:hep-ph/9508403\]. J. C. Anjos, J. Magnin and G. Herrera, Phys. Lett. B [**523**]{}, 29 (2001) \[arXiv:hep-ph/0109185\]. R. Vogt and S. J. Brodsky, Phys. Lett. B [**349**]{}, 569 (1995) \[arXiv:hep-ph/9503206\].
A. Ocherashvili [*et al.*]{} \[SELEX Collaboration\], arXiv:hep-ex/0406033.
M. Mattson [*et al.*]{} \[SELEX Collaboration\], Phys. Rev. Lett. [**89**]{}, 112001 (2002) \[arXiv:hep-ex/0208014\]. S. J. Brodsky and M. Karliner, Phys. Rev. Lett. [**78**]{}, 4682 (1997) \[arXiv:hep-ph/9704379\]. S. J. Brodsky and S. Gardner, Phys. Rev. D [**65**]{}, 054016 (2002) \[arXiv:hep-ph/0108121\].
S. J. Brodsky, C. R. Ji and G. P. Lepage, Phys. Rev. Lett. [**51**]{}, 83 (1983). S. J. Brodsky and B. T. Chertok, Phys. Rev. D [**14**]{}, 3003 (1976). R. G. Arnold [*et al.*]{}, Phys. Rev. Lett. [**35**]{}, 776 (1975). G. R. Farrar, K. Huleihel and H. y. Zhang, Phys. Rev. Lett. [**74**]{}, 650 (1995). G. Bertsch, S. J. Brodsky, A. S. Goldhaber and J. F. Gunion, Phys. Rev. Lett. [**47**]{}, 297 (1981). S. J. Brodsky and A. H. Mueller, Phys. Lett. B [**206**]{}, 685 (1988). D. Ashery, Comments Nucl. Part. Phys. [**2**]{}, A235 (2002). A. B. Borisov \[HERMES Collaboration\], Nucl. Phys. A [**711**]{}, 269 (2002). J. Aclander [*et al.*]{}, arXiv:nucl-ex/0405025. S. J. Brodsky and G. F. de Teramond, Phys. Rev. Lett. [**60**]{}, 1924 (1988).
A. V. Belitsky, X. D. Ji and F. Yuan, Phys. Rev. Lett. [**91**]{}, 092003 (2003) \[arXiv:hep-ph/0212351\].
M. K. Jones [*et al.*]{} \[Jefferson Lab Hall A Collaboration\], Phys. Rev. Lett. [**84**]{}, 1398 (2000) \[arXiv:nucl-ex/9910005\]. Y. C. Chen, A. Afanasev, S. J. Brodsky, C. E. Carlson and M. Vanderhaeghen, arXiv:hep-ph/0403058. S. J. Brodsky, C. E. Carlson, J. R. Hiller and D. S. Hwang, Phys. Rev. D [**69**]{}, 054022 (2004) \[arXiv:hep-ph/0310277\]. R. J. Holt, Phys. Rev. [**C41**]{}, 2400 (1990). C. Bochna [*et al.*]{} \[E89-012 Collaboration\], Phys. Rev. Lett. [**81**]{}, 4576 (1998) \[arXiv:nucl-ex/9808001\]. P. Rossi [*et al.*]{} \[CLAS Collaboration\], arXiv:hep-ph/0405207.
[^1]: Work supported by Department of Energy contract DE–AC02–76SF00515.
---
abstract: 'Motivated by experiments on Josephson junction arrays, and cold atoms in an optical lattice in a synthetic magnetic field, we study the “fully frustrated” Bose-Hubbard (FFBH) model with half a magnetic flux quantum per plaquette. We obtain the phase diagram of this model on a $2$-leg ladder at integer filling via the density matrix renormalization group approach, complemented by Monte Carlo simulations on an effective classical XY model. The ground state at intermediate correlations is consistently shown to be a chiral Mott insulator (CMI) with a gap to all excitations and staggered loop currents which spontaneously break time reversal symmetry. We characterize the CMI state as a vortex supersolid or an indirect exciton condensate, and discuss various experimental implications.'
author:
- 'Arya Dhar$^1$, Maheswar Maji$^2$, Tapan Mishra$^{1,7}$, R. V. Pai$^3$, Subroto Mukerjee$^{2,4}$ and Arun Paramekanti$^{2,5,6,7}$'
title: 'Bose Hubbard Model in a Strong Effective Magnetic Field: Emergence of a Chiral Mott Insulator Ground State'
---
The simplest model to understand strongly correlated bosons is the Bose-Hubbard (BH) model [@fisher.prb1989] which describes bosons hopping on a lattice and interacting via a local repulsive interaction. With increasing repulsion, at integer filling, its ground state undergoes a superfluid to Mott insulator quantum phase transition which has been studied using ultracold atoms in an optical lattice [@greiner.nature2002].
Remarkably, recent experiments have used two-photon Raman transitions to create a uniform or staggered “synthetic magnetic field” for neutral atoms [@spielman.nature2009], permitting one to access large magnetic fields for lattice bosons. The multiple degenerate minima in the resulting Hofstadter spectrum can be populated by non-interacting bosons in many ways. Repulsive interactions quench this “kinetic frustration”, leading to unconventional superfluids [@hemmerich.naturephys2011; @ffxy.theory; @lim.2008; @sengupta.epl2011], or quantum Hall liquids [@demlerfqhe.prl2005]. Tuning the sign of the atom hopping amplitude or populating higher bands also leads to such frustrated bosonic fluids [@hemmerich.naturephys2011]. These developments motivate us to study the interplay of [*strong correlations and frustration*]{} in the fully frustrated Bose-Hubbard (FFBH), with half a “magnetic flux” quantum per plaquette [@ffxy.theory; @lim.2008; @sengupta.epl2011]. At large integer filling, the FFBH is also the simplest quantum variant of the classical fully frustrated XY (FFXY) model [@jayaprakash.prb1983; @olsson.prl1995] of Josephson junction arrays (JJAs) [@mooij].
Here, we obtain the phase diagram shown in Fig. \[Fig:classical\_phased\] of the FFBH model at integer filling on a $2$-leg ladder using the density matrix renormalization group (DMRG) method [@whitedmrg.prl1992] and Monte Carlo (MC) simulations. Our key result is that the ground state of the FFBH and quantum FFXY models at intermediate Hubbard repulsion is a [*chiral Mott Insulator*]{} (CMI). The CMI has a nonzero charge gap, and simultaneously supports staggered loop currents that [*spontaneously*]{} break time reversal symmetry. With increasing repulsion, the CMI undergoes an Ising transition into an ordinary Mott insulator (MI) where the loop currents vanish. Weakening the repulsion leads to a Berezinskii-Kosterlitz-Thouless (BKT) [@kosterlitz.jpc1973] transition out of the CMI into a previously studied chiral superfluid (CSF) phase [@old.csf] which retains current order. We show that the CMI may be viewed as a vortex supersolid or an exciton condensate, and discuss the loop current, the charge gap, and the momentum distribution across the phase diagram.
![(Color online) (A) Phase diagram of the effective classical model $H_{\rm XY}$, with $J_\tau=J_\parallel$, obtained via MC simulations (see text for details). (B) Phase diagram of the FFBH model in Eqn. \[LadderHam\] obtained using DMRG. Both models exhibit a chiral Mott insulator (CMI) state sandwiched between a chiral superfluid (CSF) and an ordinary Mott insulator (MI). ($1/J_\tau$ in the XY model $\sim \sqrt{U/t}$ in the FFBH model.[@supp])[]{data-label="Fig:classical_phased"}](phase_diagram.eps){width="2.8in"}
[*Fully Frustrated Bose-Hubbard Ladder. —*]{} The Hamiltonian of the FFBH model on a 2-leg ladder is $$\begin{aligned}
H \!&=&\!\! -t \sum_x (a^\dagger_x a^{\phantom \dagger}_{x+1} \!+\! a^\dagger_{x+1} a^{\phantom \dagger}_{x})
\!+\! t \sum_x (b^\dagger_x b^{\phantom \dagger}_{x+1} \!+\! b^\dagger_{x+1} b^{\phantom \dagger}_{x}) \nonumber \\
\!&-&\!\! t_\perp
\sum_x (a^\dagger_x b^{\phantom \dagger}_{x} + b^\dagger_{x} a^{\phantom \dagger}_{x}) + \frac{U}{2} \sum_x (n^2_{a,x} + n^2_{b,x}),
\label{LadderHam}\end{aligned}$$ where $a$ and $b$ label the two legs of the ladder (see Fig. \[Fig:dispersion\]), $t_\perp$ couples the two legs, and $U$ is the local boson repulsion. The opposite signs of the hopping amplitude ($\pm t$) on the two legs lead to an Aharonov-Bohm phase of $\pi$ for a boson hopping around an elementary plaquette [@foot1].
For $U\!=\!0$, the boson dispersion (in Fig. \[Fig:dispersion\] (A)) exhibits two bands with the lowest ($\alpha$) band having degenerate minima at momenta $k\!=\!0,\pi$. This leads to a large degeneracy of many-body ground states — the ground state for $N$ bosons corresponds to having $N_1$ bosons in one minimum and $(N\!-\!N_1)$ in the other for any $N_1 \leq N$ — which is broken by the repulsion. The minimum at $k\!=\!0$ ($k\!=\!\pi$) has a wavefunction that mainly resides on leg-$a$ (leg-$b$). Since the Hubbard repulsion favors a uniform density, it prefers an [*equal*]{} number of bosons at $k=0,\pi$. A mean field Bose condensed state thus takes the form $$|\psi\rangle = \frac{1}{\sqrt{N!}}\left[{\rm e}^{i \varphi} (\alpha_{0}^\dagger +
{\rm e}^{i\theta} \alpha_{\pi}^\dagger)\right]^N |0\rangle.
\label{Eq:sfwavefn}$$ Here $\varphi$ is the $U(1)$ condensate phase, $\theta$ is the relative phase between the two modes, and $\alpha^\dagger_{0,\pi}$ creates quasiparticles at $k\!=\! 0,\pi$.
![(Color online) (A) Dispersion of the FFBH model at $U=0$, with two degenerate minima in the low energy $\alpha$-band. Interactions force an equal number of bosons (on average) to condense into each of the two minima. (B) Alternating pattern of plaquette currents in the presence of chiral order.[]{data-label="Fig:dispersion"}](dispersion.eps){width="3in" height="1.25in"}
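The two degenerate minima shown in Fig. \[Fig:dispersion\](A) can be checked directly from the $U=0$ limit of Eq. (\[LadderHam\]): in our gauge the Bloch Hamiltonian of the $\pi$-flux ladder is a $2\times 2$ matrix with diagonal entries $\mp 2t\cos k$ (legs $a$ and $b$) and off-diagonal entry $-t_\perp$, whose lower band $-\sqrt{4t^2\cos^2 k + t_\perp^2}$ is minimized at $k=0,\pi$. A minimal numerical check (with illustrative values of $t$ and $t_\perp$) is

```python
import numpy as np

# U = 0 band structure of the pi-flux two-leg ladder (illustrative t, t_perp)
t, t_perp = 1.0, 1.0
k = np.linspace(-np.pi, np.pi, 401)
E_lower = -np.sqrt(4.0*t*t*np.cos(k)**2 + t_perp**2)   # alpha band
E_upper = +np.sqrt(4.0*t*t*np.cos(k)**2 + t_perp**2)   # beta band

# momenta where the lower band attains its minimum: the degenerate set k = 0, +/-pi
k_min = k[np.isclose(E_lower, E_lower.min())]
print(k_min/np.pi)    # -> [-1.  0.  1.]
```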
For small $U$, Hartree theory [@lim.2008; @supp] shows $\theta=\pm \pi/2$, while $\varphi$ has (nonuniversal) power law order. This Luttinger liquid is the CSF - it supports the long-range staggered current pattern in Fig.1(B). The two signs of $\theta$ correspond to patterns related by time-reversal or unit lattice translation. For very large $U$, both $\theta$ and $\varphi$ are disordered, leading to an ordinary MI which respects all the symmetries of $H$. Remarkably, for intermediate $U$, we find that $\varphi$ is disordered leading to loss of superfluidity, while $\theta$ is pinned at $\pm \pi/2$, spontaneously breaking (Ising) time reversal symmetry. This [*fully gapped*]{} intermediate state is the CMI. This goes beyond mean field theory [@lim.2008] which predicts a direct CSF-MI transition [@supp].
[*Physical pictures for the CMI. —*]{} The CSF, with staggered currents depicted in Fig. \[Fig:dispersion\] (B), is best viewed as a vortex crystal where vortices and antivortices are nucleated by the presence of frustration, and locked into an ‘antiferromagnetic’ pattern due to the intervortex repulsion. At large $U$, this crystal melts and the vortices completely delocalize - this vortex superfluid is well known to be simply a dual description of the ordinary MI [@fisherlee.prb1989]. However if a [*small*]{} number of defect vortices in the vortex crystal delocalize and condense, they kill superfluidity but preserve the background vortex crystallinity. This [*vortex supersolid*]{} is the dual description of the CMI.
A different but equivalent picture emerges if we start from the usual MI at large $U$ which supports [*charge gapped*]{} particle and hole excitations (adding or removing bosons). These excitations have degenerate dispersion minima at $k=0,\pi$ as in Fig. 1(A), similar to the original noninteracting bosons. Decreasing $U$ decreases the MI charge gap. If the charge gap vanishes, the resulting gapless particles and holes at $k=0,\pi$ could yield a Bose condensed (or power-law) superfluid. However, a precursor phase emerges from first condensing a [*neutral indirect exciton*]{}, composed of a particle and a hole at different momenta ($k\!=\!0$ and $k\!=\!\pi$), while the particles and holes are still gapped. The CMI is precisely this intervening ‘exciton condensate’ [@supp].
[*Effective bilayer XY model. —*]{} To quantitatively flesh out the phase diagram described above, we first study the FFBH model at large fillings, where it is equivalent to a quantum FFXY model used to describe JJAs of charge $2e$ Cooper pairs with an Aharonov-Bohm flux of $h c / 4 e$ per plaquette. The quantum FFXY Hamiltonian in turn maps on to an effective classical model on a ‘space-time lattice’ leading to a classical 2D bilayer square lattice model [@supp] $
H_{\rm XY} = - \sum_{i, \delta} J_\delta \cos \left( \varphi_i - \varphi_{i+\delta}\right),
\label{Eq:classham}
$ where $\varphi_i$ are the boson phases, and $(i,i\! +\! \delta)$ denote nearest neighbour sites along $\delta$. The couplings $J_\delta$ take on values $\pm J_\parallel$ on the two legs, $J_\perp$ on the rungs linking the two layers, and $J_\tau$ in the imaginary time direction [@supp]. (We choose the ‘time step’ in the imaginary time direction to set $J_\parallel = J_\tau$ [@supp].) Phase ordering leads to a superfluid, while the fully paramagnetic phase of $H_{\rm XY}$ is the ordinary MI. Based on small system studies of $H_{\rm XY}$ [@granato.prb1993], it has been argued that the isotropic case $J_\perp \! =\! J_\parallel$ exhibits a single phase transition with novel exponents, while the highly anisotropic case harbors two separate transitions [@granato.prb1993]. Here we use extensive MC simulations, on $L \! \times\! L \! \times\! 2$ bilayers with $L\!=\! 16$-$64$, to obtain the phase diagram shown in Fig. 1(A). We find three phases: the CSF, the regular MI, and an intervening CMI for a wide range of $J_\perp$ [*including*]{} the isotropic point $J_\perp\! =\! J_\parallel$. We show that CSF-CMI and CMI-MI phase transitions are BKT and 2D Ising transitions respectively.
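A minimal single-site Metropolis sketch for $H_{\rm XY}$ on the $L\times L\times 2$ space-time bilayer is given below, only to illustrate the structure of the simulation (couplings $\pm J_\parallel$ along $x$ on the two layers, $J_\tau$ along imaginary time, and $J_\perp$ between the layers); the parameter values, the trial-move width and the measurement strategy are illustrative assumptions rather than the production settings used for Fig. \[Fig:classical\_phased\](A).

```python
import numpy as np

rng = np.random.default_rng(0)
L, J_par, J_perp, J_tau = 16, 1.0, 1.0, 1.0
phi = 2.0*np.pi*rng.random((L, L, 2))          # phi[x, tau, layer]

def local_energy(phi, x, t, l):
    """Energy of all bonds touching site (x, t, l)."""
    e = 0.0
    Jx = J_par if l == 0 else -J_par           # opposite-sign couplings on the two legs
    for dx in (1, -1):                         # spatial bonds
        e -= Jx*np.cos(phi[x, t, l] - phi[(x + dx) % L, t, l])
    for dt in (1, -1):                         # imaginary-time bonds
        e -= J_tau*np.cos(phi[x, t, l] - phi[x, (t + dt) % L, l])
    e -= J_perp*np.cos(phi[x, t, l] - phi[x, t, 1 - l])   # rung (interlayer) bond
    return e

def sweep(phi, width=1.0):
    """One Metropolis sweep; the 'temperature' is absorbed into the couplings."""
    for _ in range(phi.size):
        x, t, l = rng.integers(L), rng.integers(L), rng.integers(2)
        old = phi[x, t, l]
        e_old = local_energy(phi, x, t, l)
        phi[x, t, l] = old + width*(2.0*rng.random() - 1.0)   # trial rotation
        if rng.random() >= np.exp(-(local_energy(phi, x, t, l) - e_old)):
            phi[x, t, l] = old                                 # reject the move

# measuring the helicity modulus and the staggered rung current on configurations
# generated by repeated sweeps gives the kind of data analysed in Fig. 3.
```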
Fig. 3 shows the MC data for $J_\perp\! =\! 1$. Similar data was also obtained for various $J_\perp/J_\tau$. Fig. 3(A) shows that the helicity modulus $\Gamma$ (related to the superfluid density) has an increasingly abrupt change with $1/J_\tau$ for increasing $L$, indicative of a jump as at a BKT transition. If the transition out of the CSF is indeed a BKT transition, $\Gamma$ can be fit to the finite size scaling form $\Gamma (L) \! =\! A \left( 1\! + \! \frac{1}{2 (\log L + C)} \right)$ (with fit parameters $A$,$C$) right at the transition point, with $A$ taking on the universal value of $2/\pi$, while $C$ is a non-universal constant [@weber_minnhagen; @olsson.prl1995]. Fitting $\Gamma(L)$ to this form, we find that the error to this fit shows a sharp minimum [@weber_minnhagen; @olsson.prl1995] at a certain $1/J_\tau$ (Fig.\[Fig:classical\_trans\] inset), with $A\! \approx\! 2/\pi$ at this dip. This not only allows us to precisely locate the transition out of the CSF state, but also [*confirms*]{} its BKT nature.
![(Color online) (A) Helicity modulus $\Gamma$ versus $1/J_\tau$ for different system sizes for $J_\perp\!=\!1$. (A-Inset) RMS error of fit to the BKT finite size scaling form of $\Gamma$ shows a deep minimum [@supp] at the transition, at $1/J_\tau \!=\! 0.887(1)$, and yields a jump $\Delta \Gamma\! \approx\! 0.637$, close to the BKT value $2/\pi$. (B) Binder cumulants for the staggered current versus $1/J_\tau$ (for different $L$ for $J_\perp\!=\!1$) intersecting at a continuous transition at $1/J_\tau\! =\! 0.981(4)$. (B-inset) Critical susceptibility versus $L$ gives the ratio of critical exponents $\gamma/\nu \! \approx\! 1.72$, very close to 2D Ising value $\gamma/\nu\!=\!7/4$. Error bars are smaller than the symbol sizes. []{data-label="Fig:classical_trans"}](xy.eps){width="2.8in" height="1.7in"}
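The finite-size analysis just described amounts to a one-parameter scan; in the sketch below `gamma_of_L` is a placeholder for the measured helicity modulus at a fixed coupling, and only the fit form and the universal value $2/\pi$ are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

Ls = np.array([16.0, 24.0, 32.0, 48.0, 64.0])

def bkt_form(L, A, C):
    # Weber-Minnhagen finite-size form of the helicity modulus at the BKT point
    return A*(1.0 + 1.0/(2.0*(np.log(L) + C)))

def bkt_fit(gamma_of_L):
    (A, C), _ = curve_fit(bkt_form, Ls, gamma_of_L, p0=(2.0/np.pi, 1.0))
    rms = np.sqrt(np.mean((gamma_of_L - bkt_form(Ls, A, C))**2))
    return A, rms

# scanning bkt_fit over a grid of couplings 1/J_tau and locating the minimum of rms
# reproduces the inset of Fig. 3(A); right at the transition A is close to 2/pi.
```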
To check for staggered loop currents, we compute the Binder cumulant $B_L = \left(1 - \langle m^4 \rangle_L/3 \langle m^2 \rangle^2_L \right)$, for the order parameter $
%\begin{equation}
m = \frac{1}{L^2}\sum_{i\tau} \left( -1 \right)^i J_{i\tau},
$ where $J_{i,\tau}$ is the current around a spatial plaquette. For small $1/J_\tau$, we find $B_L \to 2/3$ indicating long range current order, while $B_L \to 0$ for large $1/J_\tau$ indicating absence of loop currents. Fig. \[Fig:classical\_trans\](B) shows the transition point where the current order vanishes as seen from the crossing of $B_L$ curves [@binder] for different $L$. Remarkably, we find that loop current order persists into the regime where the superfluid order is absent, revealing an intermediate insulating phase with staggered loop currents - this is the CMI.
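The corresponding Binder-cumulant analysis is equally compact; in the sketch below `m_samples` is a placeholder for the Monte Carlo time series of the staggered-current order parameter $m$ at one system size and coupling.

```python
import numpy as np

def binder_cumulant(m_samples):
    # B_L = 1 - <m^4> / (3 <m^2>^2)
    m2 = np.mean(m_samples**2)
    m4 = np.mean(m_samples**4)
    return 1.0 - m4/(3.0*m2*m2)

# plotted against 1/J_tau for several L, the cumulant tends to 2/3 in the
# current-ordered phase, to 0 in the disordered phase, and the curves for
# different L cross at the Ising transition.
```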
For $J_\perp/J=1$, we find the BKT transition occurring at $1/J_\tau=0.887(1)$ while the current order vanishes at the Ising transition which is located at $1/J_\tau=0.981(4)$, where the error bars on the transition point are estimated from the error in the location of the dip in the inset of Fig.\[Fig:classical\_trans\](A) and the error in the crossing point in Fig.\[Fig:classical\_trans\](B), both of which yield the limiting thermodynamic values for the transition points. This establishes that the phase diagram supports [*three*]{} phases: CSF, CMI, and MI. A similar analysis for different values of $J_\perp$ allows us to obtain the phase diagram in Fig. \[Fig:classical\_phased\](A).
We have already seen that the transition out of the CSF, i.e., the CSF-CMI transition, is of the BKT type. The scaling of the divergent susceptibility peak $\chi_{\rm crit}(L)$ for current order (Fig. \[Fig:classical\_trans\](B) inset) shows that the CMI-MI critical point is a 2D Ising transition. Such consecutive, closely spaced, BKT-Ising thermal transitions are also observed in the classical 2D FFXY model [@olsson.prl1995], although its Hamiltonian is quite distinct from $H_{\rm XY}$, and the chiral order in the classical model corresponds to having in-plane currents rather than interlayer currents as in our bilayer model. Such consecutive transitions are also found in spinor condensates [@mukerjee.prb2009].
[*DMRG study. —*]{} We next study the FFBH ladder model in Eq. (\[LadderHam\]) at a filling of one boson per site using the finite size DMRG (FS-DMRG) method [@whitedmrg.prl1992]. (We set $t=1$ here.) As noted previously [@sengupta.epl2011; @cha.pra2011], the boson momentum distribution $n(k)$ in the presence of $\pi$-flux exhibits two peaks; for our gauge choice, these peaks are located at $k=0,\pi$. In the CSF state, which is a Luttinger liquid [@giamarchi] on the ladder, we have a singular momentum distribution $n(k \!\to\! 0) \sim |k|^{-(1-K/2)}$, with $K > 0$ being an interaction dependent Luttinger parameter [@supp]. Similarly, $n(k\to\pi) \sim
|k-\pi|^{-(1-K/2)}$. Let $U_{c 1}$ denote the location of the transition out of the CSF into an insulator. If this transition is of the BKT type, as shown from our XY model study, the exponent $K$ should take on a [*universal*]{} value $K_{c}\!\! =\!\! 1/2$ at $U_{c 1}$. A plot of $n(k\!=\!0) L^{-3/4}$ for different $L$ should thus show a crossing point at the transition out of the CSF, as seen at $U_{c1} \approx 3.98(1)$ in Fig. \[Fig:dmrg\](A) for $t_\perp=1$. Remarkably, Fig. 4(A) (inset) shows that the charge gap also becomes nonzero for $U > U_{c1}$, coinciding with the point where $K\!=\! 1/2$, confirming that the CSF-to-insulator transition is a BKT transition. This leads to the phase boundary of the CSF state shown in Fig. 2(B).
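Locating $U_{c1}$ from the crossing of the scaled curves can be automated in a few lines; in the sketch below `nk0_A` and `nk0_B` are placeholders for the DMRG values of $n(k=0)$ for two system sizes on a common grid of $U/t$ values, and the exponent $-3/4$ is the one quoted above for $K_c=1/2$.

```python
import numpy as np

U = np.linspace(3.5, 4.5, 101)          # common grid of U/t values (placeholder)

def scaled(nk0, L):
    return nk0*L**(-0.75)               # n(k=0) L^{-3/4}, size-independent at K = 1/2

def crossing(U, yA, yB):
    """Linear interpolation of the first sign change of yA - yB."""
    d = yA - yB
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    return U[i] - d[i]*(U[i+1] - U[i])/(d[i+1] - d[i])

# applying `crossing` to scaled(nk0_A, L_A) and scaled(nk0_B, L_B) for pairs of sizes,
# and extrapolating the pairwise estimates in 1/L, gives the thermodynamic U_c1.
```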
![(Color online) (A) DMRG results for $n(k=0)L^{-3/4}$ versus $U/t$, for the FFBH Hamiltonian in Eqn. \[LadderHam\] with $t_\perp\! =\! t$ and various $L$. The crossing of these curves at $U_{c1}/t \approx 3.98(1)$ yields the CSF-to-insulator (BKT) transition (see text). Inset shows the onset of the charge gap at $U_{c1}$. (B) Rung current structure factor $S_j(\pi)L^{2\beta/\nu}$ versus $U/t$ at $t_\perp=1$. The intersection point yields the CMI-MI Ising transition at $U_{c2} \approx 4.08(1) t$. Inset shows $S_j(\pi)L^{2\beta/\nu}$ versus $\delta L^{1/\nu}$ with $\delta \equiv
(U - U_{c 2})/t$, for different $U/t$, leading to a scaling collapse for 2D Ising exponents $\nu=1$ and $\beta=1/8$.[]{data-label="Fig:dmrg"}](dmrg.eps){width="2.8in" height="1.7in"}
The staggered current order parameter can be obtained from the rung-current structure factor $S_j(k)=\frac{1}{L^2}\sum_{x,x'}{e^{ik(x-x')}\langle{j_x j_{x'}}\rangle}$, with $j_x=i \left( a_x^\dagger b_x - b_x^\dagger a_x \right)$. $S_j(k=\pi) \sim L$ indicates long range staggered current order. Our XY model study informs us that the current order disappears at an MI-CMI transition which is in the Ising universality class. We thus expect $S_j(\pi)$ to obey the critical scaling form $S_j (\pi)L^{2\beta/\nu}= f \left(\left(U-U_{c 2} \right)L^{1/\nu}\right)$, where $U_{c 2}$ is the CMI-MI critical point, $f(.)$ is a universal scaling function, and $\beta = 1/8$ and $\nu=1$ are the Ising critical exponents. As a result, curves of $S_j (\pi)L^{2\beta/\nu}$ for different $L$ are expected to intersect at the MI-CMI critical point $U_{c 2}$. This crossing, as seen at $U_{c 2} \approx 4.08(1)$ for $t_\perp=1$ from Fig. \[Fig:dmrg\], allows us to carefully locate the CMI-MI phase transition. As seen in Fig.\[Fig:dmrg\] (inset), plotting $S_j (\pi)L^{2\beta/\nu}$ as a function of $(U-U_{c 2}) L^{1/\nu}$ shows a complete data collapse for $U_{c2} = 4.08$. Similar to our discussion for the computations on the XY model, our analysis of these crossing points in the FFBH model yields the limiting thermodynamic values of the transition points, and the error bars are estimated from examining the errors in these crossing points. Such an analysis, carried out for a range of values of $t_\perp/t$, allows us to map out the MI-CMI phase boundary in Fig. \[Fig:classical\_phased\](B); we find $U_{c2} > U_{c1}$, again consistent with an intermediate CMI state.

[*Discussion. —*]{} Our computations on the FFBH model at unit filling and the XY model (which describes the FFBH model at large integer filling) suggest that the CMI appears near the tip of the Mott lobes at all boson fillings on the ladder. We have generalized the work of Ref. [@sorella.prl2007] to obtain a long-range Jastrow correlated wavefunction which captures all the essential correlations of this CMI state on the ladder [@supp]. Since the CMI is [*completely*]{} gapped, with not just a charge gap but also an “Ising” gap to charge-neutral excitations, it will be stable in a 2D system of weakly coupled FFBH ladders. The CSF and CMI states are bosonic analogs of staggered current metallic [@ddw] and insulating [@marston.prb2002] states of fermions in models of cuprate superconductors. The CSF and CMI also find analogs in insulating magnets: paramagnetic gapless [@chiralgapless] or spin-gapped [@chiral] phases with long range vector chiral order.
The CMI may be realized in a Josephson junction ladder at a magnetic field of $hc/4e$ flux per plaquette [@mooij], where it would appear as an insulator in transport measurements. With a Josephson coupling $\sim 1$ K, we estimate that the spontaneous loop currents could produce staggered magnetic fields $\sim 1$nT for arrays with lattice parameter $10 \mu$m, which could be measured using SQUID microscopy [@fong.revsci2005]. Ultracold bosonic atoms in the presence of a (uniform or staggered) synthetic $\pi$-flux [@spielman.nature2009] are candidates to realize the CMI. The signature of the flux would appear as twin peaks in the atom momentum distribution: the peaks would be sharp in the CSF but broad in the CMI and MI. Re-interfering the $k=0$ and $k=\pi$ peaks obtained in time of flight via Bragg pulses [@bragg.epjd2005] could test for the persistence of intermode coherence (the phase $\theta=\pm\pi/2$) in the CMI, and distinguish it from the MI. Jaynes-Cummings lattices in a “magnetic field” [@JClattice] could also be used to simulate a polariton FFBH model.

[*Acknowledgments:*]{} We thank B. P. Das, M. P. A. Fisher, D. A. Huse, and J. H. Thywissen for discussions. We acknowledge support from DST, Govt. of India (SM and RVP), CSIR (RVP), and NSERC of Canada (AP).
[51]{}
, [*et al*]{}, ****, (); D. Jaksch, [*et al*]{}, , 3108 (1998).
, [*et al*]{}, ****, ().
, [*et al*]{}, ****, (); , [*et al*]{}, ****, (); K. Jimenéz-Garcia, [*et al*]{}, arXiv:1201.6630 (unpublished).
, , , ****, ().
, [*et al*]{}, ****, (); , [*et al*]{}, ****, ().
, , , ****, ().
, ****, ().
, , , ****, ().
, ****, ().
, ****, ().
B. J. van Wees, H. S. J. van der Zant, and J. E. Mooij, Phys. Rev. B [**35**]{}, 7291 (1987).
, ****, ().
, ****, ().
, ****, (); , ****, (); , ****, ().
, ****, ().
H. Weber and P. Minnhagen, , 5986 (1988).
, [*et al*]{}, ****, (); , [*et al*]{}, ****, (); , [*et al*]{}, ****, (). , [*et al*]{}, ****, ().
, ****, ().
, , , ****, (); , , , ****, (); , , , ****, ().
, ****, ().
, ** (, , ).
, , , ****, ().
, ** (, , ).
, ****, ().
,[*et al*]{}, ****, ().
, [*et al*]{}, ****, (). , [*et al*]{}, ****, (). , [*et al*]{}, ****, (); , [*et al*]{}, ****, ().
---
abstract: |
In this paper, we investigate so-called spherical photon orbits around a deformed Kerr black hole with an extra deformation parameter. The change in the azimuth $\Delta\varphi$ and the angle of dragging of the nodes per revolution $\Delta\Omega$ of a complete latitudinal oscillation are calculated analytically. Finally, six representative orbits are plotted to illustrate how spherical photon orbits look and to exhibit some of their interesting behavior. In particular, these features are absent in circular orbits and differ from the Kerr case.
author:
- 'Changqing Liu$^{1}$[^1] Chikun Ding$^{1}$[^2], Jiliang Jing$^{2}$[^3]'
title: '**Selected spherical photon orbits around a deformed Kerr black hole** '
---
Introduction
============
The investigation of null geodesic motion can reveal significant features of a curved spacetime. In particular, there are unstable and stable photon orbits around compact objects. The unstable photon orbits define the boundary between capture and non-capture of light rays by a black hole, such as the shadow in lensing images [@sha2; @sha3; @sha4]; on the other hand, the stable photon orbits are directly linked to the optical appearance [@sha5] of the thin accretion disk [@Luminet] and to chaotic scattering in lensing around hairy black holes and spacetime instabilities [@sw; @fpos2; @fpos3]. These fundamental photon orbits have interesting invariant structures around dynamical systems and compact objects [@BI; @mingzhi0]. More specifically, the so-called spherical photon orbits [@Wilkins; @teo], i.e. orbits with constant coordinate radii that are not confined to the equatorial plane, have a rich orbital structure, such as periodic orbits of the longitudinal motion of particles. These orbits can further reveal the features of black holes.
Spherical timelike orbits around the Kerr black hole were first proposed by Wilkins [@Wilkins]. An explicit example of spherical timelike orbits was plotted with numerical integration in the paper [@Goldstein]. The extension to the case of the charged Kerr-Newman black hole was considered in Ref. [@Johnston]. Furthermore, an example of a non-spherical timelike orbit around the Kerr black hole was obtained by numerical integration in the work of Stoghianidis [@Stoghianidis]. However, there has been less work done on spherical photon orbits. Early examples of spherical photon orbits in the hyper-extreme Kerr space-time were illustrated in [@Schastok] and offered a tantalizing hint as to how spherical photon orbits might look. In Teo's paper [@teo], several representative latitudinally oscillating photon orbits, including a zero-angular-momentum photon orbit and one with non-fixed azimuthal direction, were plotted to illustrate how spherical photon orbits look. These orbits exhibit a variety of interesting behaviors that are absent in circular orbits.
Recently the LIGO [@gw1; @gw2; @gw3; @gw4] and VIRGO collaborations reported the observation of gravitational-wave signals corresponding to the inspiral and merger of two black holes. However, at the current precision of these experiments there remains some room for alternative theories of gravity. Konoplya and Zhidenko [@kz; @RLs] have proposed a deformed Kerr black hole metric beyond general relativity by adding a static deformation, which can be regarded as an axisymmetric vacuum solution of an unknown alternative theory of gravity [@RLs]. This deformed Kerr black hole has three parameters, i.e., the mass $M$, the rotation parameter $a$, and the deformation parameter $\eta$. The parameter $\eta$ describes the deviation from the usual Kerr metric and sharply modifies the structure of spacetime in the strong-field region. Moreover, studies of the shadow [@mingzhi], energy extraction [@fen], the strong gravitational lensing effect [@sy], the iron line [@GKt02] and the quasi-periodic oscillations [@GKt01] support the idea that the geometry of a real astrophysical black hole could be described by such a deformed Kerr metric.
In this paper, we shall focus on spherical photon orbits (with positive energy) outside the event horizon of the deformed Kerr [@kz; @RLs] black hole. We shall find it quite remarkable that photons can actually trace out such orbits around the deformed Kerr black hole, and we also compare our results with the Kerr case [@teo]. This may provide a way to test, from the point of view of orbital motion, how astronomical black holes with a deformation parameter deviate from the Kerr black hole.
The paper is organized as follows. In Sec. II, we derive the relevant geodesic equations of the deformed Kerr black hole. In Sec. III, the conditions for the existence of spherical photon orbits are considered. In Sec. IV, we obtain the expression for the change in the orbit's azimuth for every oscillation in latitude; the 3D orbits and the corresponding projective planes of selected spherical photon orbits are plotted by numerical integration to illustrate how they differ from the Kerr case. Finally, we end the paper with a summary.
The null geodesic equations in the deformed Kerr black hole
============================================================
The deformed Kerr metric obtained in Refs. [@kz; @RLs] describes the geometry of a rotating black hole that deviates from the Kerr one through an extra deformation. In the standard Boyer-Lindquist coordinates the deformed Kerr metric can be expressed as $$\begin{aligned}
\label{xy}
ds^{2} &=& -\bigg(1-\frac{2Mr^2+\eta}{r\rho^{2}}\bigg)dt^{2}
+\frac{\rho^{2}}{\Delta}dr^{2}+\rho^{2} d\theta^{2}+\sin^{2}\theta\bigg[r^{2}+a^{2}
+\frac{(2Mr^{2}+\eta)a^2\sin^{2}\theta}{r\rho^{2}}\bigg]d\varphi^{2}\\ \nonumber
&-&\frac{2(2Mr^2+\eta)a\sin^{2}\theta}{r\rho^{2}}dtd\varphi,\end{aligned}$$ with $$\Delta=a^{2}+r^{2}-2Mr-\frac{\eta}{r},\;\;\;\;\;\;\;\;\;\; \rho^{2}=r^{2}+a^{2}\cos^{2}\theta,$$ where $M$, $a$ and $\eta$ denote the mass, the angular momentum and the deformation parameter of the black hole, respectively. The deformation parameter $\eta$ describes the deviation from the Kerr metric; as $\eta$ vanishes, the metric reduces to the Kerr case. In Refs. [@kz; @sy], the condition for the existence of a black hole horizon is analyzed in detail. In the case $a<M$, the existence of a black hole horizon requires [@sy] $$\label{hcz}
\eta\geq \eta_{c1}\equiv-\frac{2}{27}(\sqrt{4M^{2}-3a^{2}}+2M)^{2}(\sqrt{4M^{2}-3a^{2}}-M),$$ while in the case $a>M$, it becomes $\eta>0$. When $\eta$ and $a$ lie in other regions, there is no horizon and then the spacetime (\[xy\]) becomes a naked singularity. Thus, the value of $\eta$ determines the number and positions of the black hole horizons. These spacetime properties affect the propagation of photons and further change the shadow of the deformed Kerr black hole.
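Since $\Delta=0$ is equivalent to the cubic $r^{3}-2Mr^{2}+a^{2}r-\eta=0$, the horizon structure for given $(a,\eta)$ can be checked directly. The short numerical sketch below does this for illustrative parameter values; the chosen $M$, $a$ and $\eta$ are assumptions for the example and are not the values used later in the paper.

```python
import numpy as np

M, a = 1.0, 0.8   # illustrative mass and spin with a < M

def horizons(eta):
    """Real positive roots of r^3 - 2 M r^2 + a^2 r - eta = 0, i.e. Delta = 0."""
    r = np.roots([1.0, -2.0*M, a*a, -eta])
    r = r[np.abs(r.imag) < 1e-8].real
    return np.sort(r[r > 0.0])

eta_c1 = -(2.0/27.0)*(np.sqrt(4*M*M - 3*a*a) + 2*M)**2*(np.sqrt(4*M*M - 3*a*a) - M)
print(eta_c1)          # threshold of Eq. (\ref{hcz}); horizons exist for eta >= eta_c1 when a < M
print(horizons(-0.2))  # eta_c1 < eta < 0 here: an inner and an outer horizon
print(horizons(0.2))   # for this positive eta the cubic has a single positive root
```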
The Hamiltonian of a photon propagating along null geodesics in the deformed Kerr black hole can be expressed as $$H(x_i, p_i) =\frac{1}{2}g^{\mu\nu}(x)p_{\mu}p_{\nu}=
\frac{\Delta}{2\rho^2}p_r^2+\frac{1}{2\rho^2}p_\theta^2+f(r,\theta,p_t,p_\varphi)=0 ,$$ Since there exist two ignorable coordinates $t$ and $\phi$ in the above Hamiltonian, it is easy to obtain two conserved quantities $E$ and $L_{z}$ with the following forms $$\begin{aligned}
\label{EL}
E=-p_{t}=-g_{tt}\dot{t}-g_{t\varphi}\dot{\varphi},\;\;\;\;\;\;\;\;\;\;\;\;\;\;
L_{z}=p_{\varphi}=g_{\varphi\varphi}\dot{\varphi}+g_{\varphi t}\dot{t},\end{aligned}$$ which correspond to the energy and angular momentum of a photon moving in the deformed Kerr black hole. With these two conserved quantities, after a tedious calculation, we obtain the null geodesic equations [@mingzhi0]: $$\begin{aligned}
\label{tfc}
\dot{t}&=&E+\frac{(a^{2}E-aL_{z}+Er^{2})(2Mr^{2}+\eta)}{\Delta\rho^{2}r},
\\
\label{jfc}
\dot{\varphi}&=&\frac{aE\sin^{2}\theta(2Mr^{2}+\eta)
+a^{2}L_{z}r\cos^{2}\theta-L_{z}(2Mr^{2}
-r^{3}+\eta)}{\Delta\rho^{2}r\sin^{2}\theta},\\
\label{rfc}
\rho^{4}\dot{r}^{2}&=&R(r)=-\Delta[Q+(aE-L_{z})^{2}]+[aL_{z}-(r^{2}+a^{2})E]^{2},
\\
\label{thfc}
\rho^{4}\dot{\theta}^{2}&=&p_{\theta}^{2}=\Theta(\theta)=Q-\cos^{2}\theta
\bigg(\frac{L_{z}^{2}}{\sin^{2}\theta}-a^{2}E^{2}\bigg),\end{aligned}$$ where the quantity $Q$ is the generalized Carter constant related to the constant of separation $K$ by $Q=K-(aE-L_{z})^2$.
We note that while these equations are concise and appealing in some ways, during numerical integration they tend to accumulate error at the turning points due to the explicit square roots in the $r$ and $\theta$ equations, not to mention the nuisance of having to change the signs of the $r$ and $\theta$ velocities by hand at every turning point. Following the procedure in Ref. [@Levin], we will convert these equations into a Hamiltonian formulation to avoid these numerical difficulties and to plot the selected spherical photon orbits smoothly.
We thus rewrite the Hamiltonian as $$H(x_i, p_i) =
\frac{\Delta}{2\rho^2}p_r^2+\frac{1}{2\rho^2}p_\theta^2
-\frac{R + \Delta\Theta}{2\Delta \rho^2}
\quad\quad ,
\label{niceham}$$ with the help of Hamilton’s equations $$\label{D-Eq} \frac{dx^i}{d \lambda} = \frac{\partial H}{\partial
p_i} \, , \; \; \frac{dp_i}{d \lambda} = - \frac{\partial
H}{\partial x^i} \, ,$$ the Hamiltonian equations of motion for the photon become $$\begin{aligned}
\label{eom}
\dot{r} & =& \frac{\Delta}{\rho^2}p_{r}\label{geeoss1} , \\
\dot{p}_{r} & = &
-\left (\frac{\Delta}{2\rho^2}\right )'p_{r}^{2} -
\left (\frac{1}{2\rho^2}\right )'p_{\theta}^{2} + \left (\frac{R +
\Delta\Theta}{2\Delta\rho^2}\right )' \label{geeoss2}, \\
\dot{\theta}& = & \frac{1}{\rho^2}p_{\theta}
,\\
\dot{p}_{\theta} & = &
-\left (\frac{\Delta}{2\rho^2}\right )^{\theta}p_{r}^{2} -
\left (\frac{1}{2\rho^2}\right )^{\theta}p_{\theta}^{2} + \left (\frac{R +
\Delta\Theta}{2\Delta\rho^2}\right )^{\theta} \label{geeoss3},\\
\dot{t} & = & \frac{1}{2\Delta\rho^2} \frac{\partial (R+\Delta\Theta)}{\partial
E}\label{geeoss4}, \\
\dot{\varphi} & = & -\frac{1}{2\Delta\rho^2}
\frac{\partial(R +\Delta\Theta)}{\partial
L}, \label{geeoss5}\end{aligned}$$ where the superscripts $'$ and $\theta$ denote differentiation with respect to $r$ and $\theta$, respectively.
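As a concrete illustration of how Eqs. (\[geeoss1\])-(\[geeoss5\]) can be integrated, the minimal Python sketch below evolves a generic null geodesic with `scipy.integrate.solve_ivp`; the momentum equations are evaluated by central differences of the Hamiltonian (\[niceham\]) instead of the lengthy analytic derivatives, and all parameter values and initial data are illustrative assumptions rather than the values used for the orbits plotted later. For the spherical photon orbits of the next section one would instead start with $p_r=0$ at a radius satisfying $R(r)=R'(r)=0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters: mass, spin, deformation, and conserved quantities
M, a, eta = 1.0, 0.9, 0.2
E, Lz, Q  = 1.0, 2.0, 3.0

def Delta(r):    return r*r - 2.0*M*r + a*a - eta/r
def rho2(r, th): return r*r + a*a*np.cos(th)**2
def Rpot(r):     return (a*Lz - (r*r + a*a)*E)**2 - Delta(r)*(Q + (a*E - Lz)**2)
def Theta(th):   return Q - np.cos(th)**2*(Lz*Lz/np.sin(th)**2 - a*a*E*E)

def H(r, th, pr, pth):
    # the Hamiltonian used in the text; it vanishes along the photon trajectory
    return (Delta(r)*pr**2 + pth**2)/(2.0*rho2(r, th)) \
           - (Rpot(r) + Delta(r)*Theta(th))/(2.0*Delta(r)*rho2(r, th))

def rhs(lam, y, h=1e-6):
    r, th, phi, pr, pth = y
    drdl  = Delta(r)*pr/rho2(r, th)
    dthdl = pth/rho2(r, th)
    # momentum equations from central differences of H in r and theta
    dprdl  = -(H(r + h, th, pr, pth) - H(r - h, th, pr, pth))/(2.0*h)
    dpthdl = -(H(r, th + h, pr, pth) - H(r, th - h, pr, pth))/(2.0*h)
    # azimuthal equation for dphi/dlambda quoted in the text
    num = a*E*np.sin(th)**2*(2*M*r*r + eta) + a*a*Lz*r*np.cos(th)**2 \
          - Lz*(2*M*r*r - r**3 + eta)
    dphidl = num/(Delta(r)*rho2(r, th)*r*np.sin(th)**2)
    return [drdl, dthdl, dphidl, dprdl, dpthdl]

# start in the equatorial plane; p_r is fixed by the null condition H = 0
r0, th0 = 5.0, 0.5*np.pi
pth0 = np.sqrt(Theta(th0))
pr0  = np.sqrt(max(Rpot(r0), 0.0))/Delta(r0)
sol = solve_ivp(rhs, (0.0, 100.0), [r0, th0, 0.0, pr0, pth0],
                max_step=0.05, rtol=1e-9, atol=1e-12)
r, th, phi = sol.y[0], sol.y[1], sol.y[2]
x, y, z = r*np.sin(th)*np.cos(phi), r*np.sin(th)*np.sin(phi), r*np.cos(th)  # 3D orbit
```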
Spherical Photon Orbits around a deformed Kerr Black Hole
==========================================================
In this section, we shall study in detail the properties of spherical photon orbits, i.e. orbits with constant coordinate radius that are not confined to the equatorial plane, along which the photon oscillates in latitude within the permitted range of the $\theta$ angle [@Wilkins; @teo]. For a spherical photon orbit the radial motion satisfies $$\begin{aligned}
\dot{r}=0,\;\;\;\;\; \text{and} \;\;\;\; \ddot{r}=0,\end{aligned}$$ which yield $$\begin{aligned}
\label{r}
R(r)&=&-\Delta[Q+(aE-L_{z})^{2}]+[aL_{z}-(r^{2}+a^{2})E]^{2}=0,
\\
\label{r1}
R'(r)&=&-4Er[aL_{z}-(r^{2}+a^{2})E]-[Q+(aE-L_{z})^{2}]
(-2M+2r+\frac{\eta}{r^{2}})=0.\end{aligned}$$ For the unstable spherical orbits, we have $$\begin{aligned}
\label{r2}
R''(r)=8E^{2}r^{2}-4E[aL_{z}-(r^{2}+a^{2})E]-2[Q+(aE-L_{z})^{2}]
(1-\frac{\eta}{r^{3}})>0.\end{aligned}$$ Solving the two equations (\[r\]) and (\[r1\]), we find that for the spherical photon orbits the reduced constants $\xi$ and $\sigma$ take the form $$\begin{aligned}
\label{pj}
\xi&\equiv&\frac{L_{z}}{E}=\frac{2a^{2}Mr^{2}-a^{2}\eta+2\Delta r^{3}-2Mr^{4}-3\eta r^{2}}{a(2Mr^{2}-2r^{3}-\eta)},\\
\label{q}
\sigma&\equiv&\frac{Q}{E^{2}}=\frac{-r^{4}[(6Mr^{2}-2r^{3}+5\eta)^{2}-8a^{2}(2Mr^{3}+3\eta r)]}{a^{2}(2r^{3}-2Mr^{2}+\eta)^{2}}.\end{aligned}$$ From Eq. (\[thfc\]), we find that $\xi$ and $\sigma$ obey $$\begin{aligned}
\label{fw}
\sigma-\xi^{2}\cot^2\theta+a^2\cos^{2}\theta\geq0.\end{aligned}$$ If we set $u = cos\theta$, when $\sigma$ is non-negative, The physically allowed ranges for $u_0$ is given as $$\begin{aligned}
\label{fw11}
u_0^2=\frac{(a^2-\xi^2-\sigma)+\sqrt{(a^2-\xi^2-\sigma)^2+4a^2\sigma}}{2a^2}.\end{aligned}$$ Once the initial radius $r_0$ of the spherical photon orbit is given, the physically allowed angular momentum $L$, Carter constant $Q$ and $\theta$ range are also determined. We take the extreme Kerr black hole as an example to illustrate the relationship between the $\theta$ range and $r_0$, $L$, $Q$ in Fig. \[figure1\]. The photon can oscillate between $\arccos(u_0)$ and $\arccos(-u_0)$, so such orbits cross the equatorial plane repeatedly; all orbits either remain in the equatorial plane or cross it repeatedly. For example, for Carter constant $Q=0$ the photon orbit lies entirely in the equatorial plane, while for Carter constant $Q=27$ the orbit is entirely in the $\theta$ direction. In particular, the zero angular momentum photon orbit reaches the maximum $\theta$ value.
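For a given orbit radius these quantities are straightforward to evaluate numerically. The sketch below (reusing `Delta` and the assumed metric parameters from the earlier sketches) simply transcribes Eqs. (\[pj\]), (\[q\]) and (\[fw11\]); the printed example is the zero-angular-momentum orbit of the extreme Kerr black hole at $r_0=1+\sqrt{2}$ mentioned in the next section.

```python
def xi_sigma(r, M=1.0, a=1.0, eta=0.0):
    """Reduced constants xi = L_z/E and sigma = Q/E^2, Eqs. (pj) and (q)."""
    D = Delta(r, M, a, eta)
    xi = (2*a**2*M*r**2 - a**2*eta + 2*D*r**3 - 2*M*r**4 - 3*eta*r**2) \
         / (a*(2*M*r**2 - 2*r**3 - eta))
    sigma = -r**4*((6*M*r**2 - 2*r**3 + 5*eta)**2
                   - 8*a**2*(2*M*r**3 + 3*eta*r)) \
            / (a**2*(2*r**3 - 2*M*r**2 + eta)**2)
    return xi, sigma

def u0(xi, sigma, a=1.0):
    """Latitudinal turning point u_0, Eq. (fw11)."""
    b = a**2 - xi**2 - sigma
    return np.sqrt((b + np.sqrt(b**2 + 4*a**2*sigma))/(2*a**2))

# zero-angular-momentum orbit of the extreme Kerr black hole (M = a = 1):
xi_, sig_ = xi_sigma(1.0 + np.sqrt(2.0))
print(xi_, sig_, u0(xi_, sig_))   # xi ~ 0 and u_0 ~ 1: the orbit reaches the poles
```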
![Variation of the $\theta$ range of latitudinal oscillations with the initial radius $r_0$, $L$ and $Q$ in the extreme Kerr black hole. Here we set $M=1$ and $E=1$. \[figure1\]](sob.eps)
Since the spherical photon orbit oscillates periodically in latitude, it is useful to have a measure of this periodicity. One possibility is to consider the change in azimuth $\Delta\varphi$ for a complete latitudinal oscillation of the orbit. It turns out to be possible to obtain an exact expression for $\Delta\varphi$ for the photon orbit [@Wilkins; @teo; @Goldstein; @Johnston]; we now briefly derive it by finding the connection between the $\theta$ and $\varphi$ motions.
If we set $w=u^2$, using Eqs. (\[jfc\]) and (\[thfc\]), we have $$\begin{aligned}
\label{thphi}
\frac{d\varphi}{dw}=\left(\frac{a(2Mr^2+\eta-a\xi~r)}{2\Delta r}+\frac{\xi}{2(1-w)}\right)\frac{1}{Y(w)},\end{aligned}$$ where $$\begin{aligned}
\label{thphis}
Y(w)^2=\sigma~w-(\sigma+\xi^2-a^2)w^2-a^2w^3.\end{aligned}$$ It would be useful to write the latter in the form $-a^2w(w-w_+)(w-w_-)$, where $w_\pm$ are the positive and negative roots of $Y(w)^2$, respectively. Then the change in azimuth for one complete oscillation in latitude is $$\begin{aligned}
\label{latitude}
\Delta\varphi=\frac{2a(2Mr^2+\eta-a\xi~r)}{\Delta r}\int_0^{w_+}\frac{dw}{Y(w)}+2\xi\int_0^{w_+}\frac{dw}{(1-w)Y(w)}.\end{aligned}$$ These integrals can be evaluated using standard techniques to give $$\begin{aligned}
\label{latitudeaas}
\Delta\varphi=\frac{4}{\sqrt{w_+-w_-}}\left(\frac{2Mr^2+\eta-a\xi~r}{\Delta r}K(\sqrt{\frac{w_+}{w_+-w_-}})+\frac{\xi}{a}\frac{1}{1-w_+}\Pi(\frac{-w_+}{1-w_+},
\sqrt{\frac{w_+}{w_+-w_-}}~)\right),\end{aligned}$$ where $K(x)$ and $\Pi(\upsilon,x)$ are the complete elliptic integrals of the first and third kind, respectively.
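As a cross-check of Eq. (\[latitudeaas\]), the two integrals in Eq. (\[latitude\]) can also be evaluated by direct quadrature after the substitution $w=w_+\sin^2\psi$, which removes the square-root singularities at the turning points. A minimal sketch, reusing `Delta` and `xi_sigma` from the earlier sketches, is given below; the small-$|\xi|$ guard reflects the fact that for polar orbits only the $K$-term of Eq. (\[latitudeaas\]) survives.

```python
from scipy.integrate import quad

def delta_phi(r, M=1.0, a=1.0, eta=0.0):
    """Change in azimuth per latitudinal oscillation, Eq. (latitude), evaluated
    by quadrature with the substitution w = w_plus * sin(psi)^2."""
    xi, sigma = xi_sigma(r, M, a, eta)
    b = a**2 - xi**2 - sigma
    wp = (b + np.sqrt(b**2 + 4*a**2*sigma))/(2*a**2)   # positive root of Y(w)^2
    wm = (b - np.sqrt(b**2 + 4*a**2*sigma))/(2*a**2)   # negative root of Y(w)^2
    g = lambda psi: a*np.sqrt(wp*np.sin(psi)**2 - wm)
    I1 = quad(lambda psi: 2.0/g(psi), 0.0, np.pi/2)[0]
    term1 = 2*a*(2*M*r**2 + eta - a*xi*r)/(Delta(r, M, a, eta)*r)*I1
    if abs(xi) < 1e-10:   # polar orbit: the second term of Eq. (latitude) is absent
        return term1
    I2 = quad(lambda psi: 2.0/((1.0 - wp*np.sin(psi)**2)*g(psi)), 0.0, np.pi/2)[0]
    return term1 + 2*xi*I2

print(delta_phi(1.0 + np.sqrt(2.0)))
# roughly 3.18, consistent with Delta-phi at xi = 0, eta = 0 in Table [table1]
```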
The specific dependence of the azimuth $\Delta\varphi$ on $r$ is shown in the left panel of Fig. \[figure2\]. When $r_1<r<r_3$ the orbits are prograde and $\Delta\varphi$ is positive; on the other hand, when $r_3<r<r_2$ the orbits are retrograde and $\Delta\varphi$ is negative ($r_1$ and $r_2$ are the solutions of the Carter constant condition $\sigma=0$, and $r_3$ is the solution of zero angular momentum $\xi=0$). Notice that there is a discontinuity at $r=r_3$ (zero angular momentum $\xi=0$); the value of $\Delta\varphi$ at exactly $r=r_3$, given by the point $A$, is exactly half-way between the upper limit $\lim_{r\rightarrow~r^+_3}\Delta\varphi$ and the lower limit $\lim_{r\rightarrow~r^-_3}\Delta\varphi$. It turns out that there is a satisfying explanation for this behavior in the 3D orbit of the zero angular momentum photon in Fig. \[figure2\].
![Variation of the azimuth $\Delta\varphi$ and the angle of advance of the nodes $\Delta\Omega$ with $r$ in the deformed Kerr black hole. $r_1$ and $r_2$ are the solutions of the Carter constant condition $\sigma=0$, $r_3$ is the solution of zero angular momentum $\xi=0$, and the point $A$ marks the exact values $\Delta\varphi(r_3)$ and $\Delta\Omega(r_3)$. Here we set $M=1$, $\eta=0.1$ and $E=1$. \[figure2\]](good.eps)
Following Wilkins [@Wilkins], we can define the ratio between the frequencies in the $\varphi$ and $\theta$ directions by $$\begin{aligned}
\label{feqs}
f=\frac{\upsilon_\varphi}{\upsilon_\theta}=\frac{|\Delta\varphi|}{2\pi}.\end{aligned}$$ The angle of advance of the nodes (a node being a point where a non-equatorial orbit intersects the equatorial plane) per nodal period, $\Delta\Omega$ [@Wilkins], is $$\begin{aligned}
\label{feqssa}
\Delta\Omega=2\pi|f-1|.\end{aligned}$$ We can see in the right panel of Fig. \[figure2\] that, in contrast to the azimuth $\Delta\varphi$, the angle of advance of the nodes $\Delta\Omega$ varies continuously with $r$.
Although each orbit that we are considering has a definite non-zero value for $\Delta\varphi$, it is not guaranteed that the photon is moving in a fixed azimuthal direction at every point of its orbit. In fact, it follows from Eq. (\[jfc\]) that $\dot{\varphi}$ changes sign whenever $u^2$ reaches the value $$\begin{aligned}
\label{usb}
u_1^2&=&\frac{-2aMr^2+2\xi~Mr^2-Lr^3-a\eta+\xi\eta}{a(a\xi~r-2Mr^2-\eta)}\\\nonumber
&=&\frac{(6Mr^2-2r^3+5\eta)r^2}{a^2(2r^2(M+r)-\eta)}.\end{aligned}$$ We define the value of $r$ satisfying $\dot{\varphi}=0$ as $r_{\dot{\varphi}}$ and the corresponding angular momentum as $\xi_{\dot{\varphi}}$ [@teo]. Note that when $r_3<r<r_{\dot{\varphi}}$ (corresponding to $\xi_{\dot{\varphi}}<\xi<0$), orbits with these parameters are therefore not moving in a fixed azimuthal direction. An example of such an orbit will also be given in the following section. To compare with the results for the azimuth $\Delta\varphi$ and the angle of advance of the nodes $\Delta\Omega$ in the extreme Kerr black hole [@teo], we list the values of $\Delta\varphi$ and $\Delta\Omega$ for given initial angular momentum $\xi$ in the deformed Kerr black hole in Table \[table1\]. We find that for a given initial angular momentum $\xi$, the presence of the deformation parameter $\eta$ decreases the values of the azimuth $\Delta\varphi$ and the angle of advance of the nodes $\Delta\Omega$.
$\eta$=0 $\eta$=0.5 $\eta$=1 $\eta$=2 $\eta$=3 $\eta$=4
------------------------------------------- ---------- ------------ ---------- ---------- ---------- ---------- --
$\Delta\varphi_{\xi=0}$ 3.1761 2.4213 2.0420 1.6253 1.3869 1.2267
$\Delta\Omega_{\xi=0}$ 3.1071 3.8619 1.9160 4.6579 4.8963 5.0565
$\Delta\varphi_{\xi=-1}$ -3.7128 -4.1351 -4.3962 -4.7203 -4.9924 -5.0647
$\Delta\Omega_{\xi=-1}$ 2.5694 2.1481 1.8870 1.5629 1.3607 1.2185
$\Delta\varphi_{\xi=\xi_{\dot{\varphi}}}$ -4.0728 -4.3250 -4.5101 -4.7688 -4.9449 -5.0747
$\Delta\Omega_{\xi=\xi_{\dot{\varphi}}}$ 2.2104 1.9582 1.7731 1.5144 1.3382 1.2084
$\Delta\varphi_{\xi=-6}$ -4.7450 -4.8296 -4.9017 -5.0191 -5.1117 -5.1874
$\Delta\Omega_{\xi=-6}$ 1.5382 1.4535 1.3815 1.2641 1.1714 1.0957
$\Delta\varphi_{\xi=1}$ 10.8428 9.0649 8.4874 7.9519 7.6760 7.4999
$\Delta\Omega_{\xi=1}$ 4.5596 2.7817 2.20426 1.6688 1.3928 1.2167
$\Delta\varphi_{\xi=1.999}$ 159.418 9.4422 8.5860 7.9392 7.6396 7.4570
$\Delta\Omega_{\xi=1.999}$ 153.135 3.1591 2.3028 1.6560 1.3564 1.1738
\[table1\]
: The values of the azimuth $\Delta\varphi$ and the angle of advance of the nodes $\Delta\Omega$ for given initial angular momentum $\xi$ in the deformed Kerr black hole. Here we set $M=1$.
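The special radii used in Fig. \[figure2\] and in the orbit examples below can be located numerically by scanning for sign changes of the corresponding conditions. The sketch below (reusing `xi_sigma` from the earlier sketches) does this for the deformed case $M=a=1$, $\eta=0.1$ of Fig. \[figure2\]; the scanning window is a rough choice just outside the horizon, and the condition used for $r_{\dot{\varphi}}$ (vanishing of the numerator of $u_1^2$, i.e. $\dot{\varphi}=0$ at the equator, which reproduces $r_{\dot{\varphi}}=3$ for $\eta=0$) is our reading of the definition above.

```python
from scipy.optimize import brentq

def find_roots(f, r_min, r_max, n=4000):
    """Refine every sign change of f on [r_min, r_max] with brentq."""
    rs = np.linspace(r_min, r_max, n)
    vals = np.array([f(r) for r in rs])
    return [brentq(f, rs[i], rs[i + 1])
            for i in range(n - 1)
            if np.isfinite(vals[i]) and np.isfinite(vals[i + 1])
            and vals[i]*vals[i + 1] < 0]

M, a, eta = 1.0, 1.0, 0.1        # parameters of Fig. [figure2]
r_lo, r_hi = 1.30, 8.0           # rough window just outside the horizon

sigma_roots = find_roots(lambda r: xi_sigma(r, M, a, eta)[1], r_lo, r_hi)  # r1, r2
xi_roots    = find_roots(lambda r: xi_sigma(r, M, a, eta)[0], r_lo, r_hi)  # r3
# r_phidot: phi-dot vanishes at the equator, i.e. 6*M*r^2 - 2*r^3 + 5*eta = 0
rphi_roots  = find_roots(lambda r: 6*M*r**2 - 2*r**3 + 5*eta, r_lo, r_hi)

print(sigma_roots, xi_roots, rphi_roots)   # roughly [1.3, 4.0], [2.5], [3.0]
```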
![The three-dimensional ($x-y-z$) view, the projections onto the $x-y$ and $x-z$ planes, and the $\theta-\varphi$ plane of the spherical photon orbits around the extreme Kerr black hole with the given initial angular momenta $\xi=0, -2, -1$, respectively. Here we set $M=1$. \[figure3\]](figone.eps)
![The three-dimensional ($x-y-z$) view, the projections onto the $x-y$ and $x-z$ planes, and the $\theta-\varphi$ plane of the spherical photon orbits around the extreme Kerr black hole with the given initial angular momenta $\xi=-6, 1, 1.999$, respectively. Here we set $M=1$. \[figure4\]](figtwo.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with zero angular momentum $\xi=0$ for different deformation parameters $\eta$. Here we set $M=1$. \[fig1\]](fzero.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with the given initial angular momentum $\xi=\xi_{\dot{\varphi}}$, satisfying $\dot{\varphi}=0$, for different deformation parameters $\eta$. Here we set $M=1$. \[fig2\]](fltwo.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with the given initial angular momentum $\xi=-1$ for different deformation parameters $\eta$. Here we set $M=1$. \[fig3\]](flone.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with the given initial angular momentum $\xi=-6$ for different deformation parameters $\eta$. Here we set $M=1$. \[fig4\]](flsix.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with the given initial angular momentum $\xi=1$ for different deformation parameters $\eta$. Here we set $M=1$. \[fig5\]](fone.eps)
![The projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole with the given initial angular momentum $\xi=1.999$ for different deformation parameters $\eta$. Here we set $M=1$. \[fig6\]](fnine.eps)
Examples of Selected Spherical Photon Orbits around the Deformed Kerr Black Hole
================================================================================
In this section, we shall present explicit examples of spherical photon orbits around the deformed Kerr black hole. In Ref. [@teo], two prograde ($\xi>0$ and $r_1<r<r_3$) and four retrograde ($\xi<0$ and $r_3\leq r<r_2$) spherical photon orbits around the extreme Kerr black hole were plotted, including two special cases: a zero-angular-momentum ($r=r_3$) photon orbit and one orbit with initial radius $r=r_{\dot{\varphi}}$ where $\dot{\varphi}=0$. To compare with the results for the extreme Kerr black hole, we also plot the spherical photon orbits around the deformed Kerr black hole ($a=M$). These orbits can only be obtained numerically, by integrating the first-order Hamiltonian equations of motion.
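A minimal sketch of how such orbits can be produced from the Hamiltonian integrator given earlier is shown below: the azimuth is obtained by integrating Eq. (\[jfc\]) alongside the other variables, and the orbit is then mapped to Cartesian-like coordinates $x=r\sin\theta\cos\varphi$, $y=r\sin\theta\sin\varphi$, $z=r\cos\theta$. The plotting library and the flat-space embedding are our own choices for visualisation only; the initial data reproduce the extreme-Kerr $r=3$, $\xi=-2$ orbit.

```python
import matplotlib.pyplot as plt

def eom_with_phi(lam, y, E, Lz, Q, M=1.0, a=1.0, eta=0.0):
    """State (r, theta, p_r, p_theta, phi): the first four follow the Hamiltonian
    equations above; phi is advanced with Eq. (jfc)."""
    r, theta = y[0], y[1]
    rdot, thdot, prdot, pthdot = eom(lam, y[:4], E, Lz, Q, M, a, eta)
    phidot = (a*E*np.sin(theta)**2*(2*M*r**2 + eta)
              + a**2*Lz*r*np.cos(theta)**2
              - Lz*(2*M*r**2 - r**3 + eta)) \
             / (Delta(r, M, a, eta)*rho2(r, theta, a)*r*np.sin(theta)**2)
    return [rdot, thdot, prdot, pthdot, phidot]

E, Lz, Q, r0 = 1.0, -2.0, 27.0, 3.0        # extreme-Kerr r = 3, xi = -2 orbit
y0 = [r0, np.pi/2, 0.0, np.sqrt(Theta_pot(np.pi/2, E, Lz, Q)), 0.0]
sol = solve_ivp(eom_with_phi, (0.0, 300.0), y0, args=(E, Lz, Q),
                rtol=1e-10, atol=1e-12, max_step=0.1)
r, th, phi = sol.y[0], sol.y[1], sol.y[4]
xs, ys = r*np.sin(th)*np.cos(phi), r*np.sin(th)*np.sin(phi)
plt.plot(xs, ys)                           # x-y projection of the cuspy orbit
plt.gca().set_aspect("equal")
plt.show()
```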
Firstly, we plot the same examples as Teo [@teo] of the three-dimensional ($x-y-z$) spherical photon orbits around the extreme Kerr black hole for the initial angular momenta $\xi=0, -1, -2, -6, 1, 1.999$, respectively. Besides the three-dimensional spherical photon orbit in Cartesian coordinates, we also plot the projections onto the $x-y$ and $x-z$ planes, as well as the $\theta-\varphi$ plane, of the spherical photon orbit around the extreme Kerr black hole to illustrate more vividly the properties of the periodic latitudinal oscillations. These six examples of spherical photon orbits are shown in Figs. \[figure3\] and \[figure4\]. Notice that the periodic motion is mainly reflected in the $x-y$ projective plane. From these pictures, we obtain the following interesting properties of the orbits: (1) The $x-y$ plane orbit of the zero-angular-momentum photon ($r=r_3=1+\sqrt{2}$ and $\xi=0$) looks like a four-leaf pattern colliding in the center. (2) The $x-y$ plane orbit with initial values ($r=r_{\dot{\varphi}}=3$ and $\xi=-2$), where $\dot{\varphi}$ changes sign, is a periodic cuspy orbit. The photon is moving vertically whenever it is at the equator (see the corresponding $x-z$ plane). This behavior can be understood from the Lense-Thirring effect: the dragging of inertial frames is strongest at the equator, and in this case, it precisely cancels out the retrograde motion of the photon. Away from the equator, the dragging becomes weaker and so the orbit regains its retrograde character [@teo]. (3) The $x-y$ plane orbit with the initial values ($r=1+\sqrt{3}$ and $\xi=-1$) is a trochoid-like trajectory. These photon orbits do not have a fixed azimuthal direction. (4) The $x-y$ plane orbit with the initial values ($r=1+2\sqrt{2}$ and $\xi=-6$) looks like a pancake surrounded by sixteen stripes. (5) The $x-y$ plane orbit with the initial values ($r=2$ and $\xi=1$) looks like a precession of circles. (6) The orbit with the initial values $r=1.0316$ and $\xi=1.999$ undergoes one latitudinal oscillation, and its $x-y$ plane orbit is a spiralling circle with a helical pattern. (7) The value of $\theta$ oscillates periodically with $\varphi$ for all of these spherical photon orbits.
To compare the spherical photon orbits around the deformed Kerr black hole with the case of the extreme Kerr black hole [@teo], we also plot six examples of the projective $x-y$ plane of the spherical photon orbits around the deformed Kerr black hole in Figs. \[fig1\], \[fig2\], \[fig3\], \[fig4\], \[fig5\] and \[fig6\]. In particular, when the deformation parameter takes the values $\eta=0.5,1,2,3,4$, the shape and periodicity of the latitudinally oscillating orbits differ greatly from the case of the Kerr black hole and exhibit qualitatively different behavior for the given initial angular momenta $\xi=0, \xi_{\dot{\varphi}}, -1, -6, 1, 1.999$, respectively. The illustrations of these examples show that spherical photon orbits around the deformed Kerr black hole have a variety of interesting behaviors that are absent in circular orbits. These examples, in principle, may provide a possibility to test how astronomical black holes with the deformation parameter $\eta$ deviate from the Kerr black hole.
Summary
=======
In this paper, we have studied the spherical photon orbits around the deformed Kerr black hole. The change in the azimuth $\Delta\varphi$ and the angle of dragging of the nodes per revolution $\Delta\Omega$ of a latitudinally oscillating orbit are calculated analytically. Besides the three-dimensional spherical photon orbit in Cartesian coordinates, we plot the projections of the three-dimensional orbit and the $\theta-\varphi$ plane of the spherical photon orbit around the extreme Kerr black hole to illustrate more vividly the properties of the periodic latitudinal oscillations. The spherical photon orbits shown in Figs. \[figure3\] and \[figure4\] oscillate periodically in latitude. Finally, we also illustrate the different behavior with six representative examples of such orbits around the deformed Kerr black hole, compared with the case of the Kerr black hole, in Figs. \[fig1\], \[fig2\], \[fig3\], \[fig4\], \[fig5\] and \[fig6\], including two special cases: a zero-angular-momentum ($r=r_3$) photon orbit and one orbit with initial radius $r=r_{\dot{\varphi}}$ where $\dot{\varphi}=0$. As we have seen, these orbits exhibit a variety of interesting behaviors that are absent in circular orbits.
**Acknowledgments**
===================
Changqing’s work was supported by the National Natural Science Foundation of China under Grant No. 11447168. Chikun’s work was supported by the National Natural Science Foundation of China under Grant No. 11247013, and by the Hunan Provincial Natural Science Foundation of China under Grant Nos. 12JJ4007 and 2015JJ2085. I would like to thank Carlos A. R. Herdeiro and Pedro V. P. Cunha for reading the manuscript and for their useful comments.
[99]{}
J. M. Bardeen, in *Black Holes (Les Astres Occlus)*, edited by C. DeWitt and B. DeWitt (Gordon and Breach, New York, 1973), p. 215-239. S. Chandrasekhar, *The Mathematical Theory of Black Holes* (Oxford University Press, New York, 1992).
H. Falcke, F. Melia, and E. Agol, [**528**]{}, L13 (2000), arXiv:astro-ph/9912263.
N. I. Shakura and R. A. Sunyaev, Astron. Astrophys. [**24**]{}, 33 (1973).
J. P. Luminet, Astron. Astrophys. [**75**]{}, 228 (1979).
P. V. P. Cunha, C. Herdeiro, E. Radu and H. F. Runarsson, Phys. Rev. Lett. [**115**]{}, 211102 (2015), arXiv:1509.00021.
P. V. P. Cunha, C. Herdeiro, and E. Radu, Phys. Rev. D [**96**]{}, 024039 (2017).
P. V. P. Cunha, E. Berti and C. Herdeiro, arXiv:gr-qc 1708.04211.
J. Grover, A. Wittig, Phys. Rev. D [**96**]{}, 024045 (2017).
M. Wang, S. Chen and J. Jing, arXiv:gr-qc 1707.07172.
D. C. Wilkins, Phys. Rev. D [**52**]{}, 814 (1972).
E. Teo, General Relativity and Gravitation [**35**]{}, 1909 (2003).
H. Goldstein, Z. Phys. [**271**]{}, 275 (1974). M. Johnston and R. Ruffini, Phys. Rev. D [**10**]{}, 2324 (1974).
E. Stoghianidis and D. Tsoubelis, General Relativity and Gravitation [**19**]{}, 1235 (1987).
J.Schastok, M.Soffel, H. Ruder, and M. Schneider. Am. J. Phys. [**55**]{}, 336 (1987).
B. Abbott et al. , Phys. Rev. Lett. [**116**]{}, 241103 (2016).
B. Abbott et al, Phys. Rev. Lett. [**118**]{}, 221101 (2017).
B. Abbott et al, Phys. Rev. Lett. [**119**]{}, 141101 (2017).
B. Abbott et al, arXiv:1711.05578 (2017).
R. Konoplya, A. Zhidenko, Phys. Lett. B [**756**]{},350 (2016).
R. Konoplya, L. Rezzolla and A. Zhidenko, Phys. Rev. D [**93**]{}, 064015 (2016).
S. Wang, S. Chen, J. Jing, J. Cosmol. Astropart. Phys. [**11**]{}, 020 (2016).
M. Wang, S. Chen, J. Jing, J. Cosmol. Astropart. Phys. [**10**]{}, 051 (2017), arXiv:gr-qc 1707.09451.
F.Long, S. Chen, S. Wang and J. Jing, Nucl. Phys. B [**926**]{}, 83 (2018), arXiv:gr-qc 1707.03175.
Y. Ni, J. Jiang and C. Bambi, J. Cosmol. Astropart. Phys. [**09**]{}, 014 (2016).
C. Bambi and S. Nampalliwar, Europhys. Lett. [**116**]{}, 30006 (2016).
J. Levin and G. Perez-Giz, Phys. Rev. D [**77**]{}, 103005 (2008).
[^1]: Electronic address: changqingliu@ua.pt
[^2]: Electronic address: Chikun Ding@huhst.edu.cn
[^3]: Electronic address: jljing@hunnu.edu.cn
| {
"pile_set_name": "ArXiv"
} |
---
address: |
Institut d’Astrophysique de Paris, 98bis Bd Arago,\
75014 Paris, France
author:
- 'N. PRANTZOS'
title: 'STELLAR RADIOACTIVITIES AND DIFFUSE GAMMA-RAY LINE EMISSION IN THE MILKY WAY'
---
Introduction
============
Shortly after the discovery of the phenomenon of radioactivity, radionuclides turned out to be unique “probes” in our study of the cosmos and important agents in its evolution (radioactive dating of the Earth, meteorites and stars; radioactive heating of planetary and supernova interiors; radioactive origin of abundant stable nuclei, like , and of isotopic anomalies in meteorites, etc).
As most other stable nuclei, radionuclides are produced in stellar interiors and ejected into the interstellar medium through stellar winds and explosions (novae or supernovae). In a few cases, concerning extra-solar objects, the characteristic -ray line signature of their radioactive decay has been detected and used as a probe of a large variety of astrophysical sites; indeed, -ray line astronomy with cosmic radioactivities has grown into a mature astrophysical discipline in the last decade. See, e.g., Diehl and Timmes 1998, Arnould and Prantzos 1999, Knödlseder and Vedrenne 2001, for recent reviews; also, the proceedings of the [*Astronomy with Radioactivities*]{} Conference, organised every two years, nicely reflect the status of that discipline (web site: [ http://www.mpe.mpg.de/gamma/science/lines/workshops/radioactivity.htm ]{} ).
In this review I shall focus on radioactivities produced by massive stars (SNII and WR stars); radioactivities produced by exploding white dwarfs (novae and SNIa) are reviewed by Hernanz (this volume).
A short history of stellar radioactivities and $\gamma$-ray line astronomy
==========================================================================
The main theoretical ideas underlying $\gamma$-ray line astronomy emerged slowly in the 60ies, while observational evidence came only about 20 years later. This history is largely dominated by two rather independent “programmes” of research: an astronomical one, seeking for the explanation of the late lightcurves of supernovae, and a nucleosynthetic one, seeking for the origin of the most abundant heavy nucleus, . An exceptionally clear and vivid account of that history is given in the text of Clayton (1999), on which much of this section is based.
In the early 50ies, the exponential decline of the late lightcurves of SNIa was attributed to the radioactive decay of $^7$Be (Borst 1950) or $^{59}$Fe (Anders 1959) or $^{254}$Cf (Anders 1959, Burbidge et al. 1956), all those nuclei having half-lives of $\sim$45-55 days. In his PhD thesis (1962), the mineralogist T. Pankey Jr suggested that $^{56}$Fe is produced as unstable $^{56}$Ni, and that the radioactive chain $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe can explain the lightcurves of supernovae; however, his suggestion went completely unnoticed by astronomers and nuclear physicists alike. Indeed, up to the mid-sixties it was thought that $^{56}$Fe is produced as such in stellar interiors (Hoyle 1946; Burbidge et al. 1957; Fowler and Hoyle 1964), through the so-called [*“e-process”*]{}, despite the fact that the issue of its [*ejection*]{} into the interstellar medium (which might modify its abundance) was far from being clear. The role of [*explosive Si-burning*]{}, leading to the production (and natural ejection from supernovae) of doubly-magic $^{56}$Ni, was clarified through semi-analytical calculations of Bodansky et al. (1968), after hints from pioneering numerical nucleosynthesis calculations of Truran et al. (1966). Based on those results, Colgate and McKee (1969) convincingly argued that the radioactive chain $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe powers the lightcurves of supernovae; as time goes on, an increasing percentage of that power escapes the SN ejecta (which become progressively more transparent to $\gamma$-rays) and as a result the optical light curve declines more rapidly (by a factor of 2 every $\sim$55 days) than the amount of $^{56}$Co (half-life: 77 days).
The implications of those ideas for $\gamma$-ray line astronomy were studied in the 60ies at the Rice University, where Clayton and Craddock (1965) first calculated the expected $\gamma$-ray flux and spectrum from the Crab remnant, on the assumption that the $^{254}$Cf hypothesis was correct; finding that extremely large overabundances of other heavy elements (Os, Ir, Pt) should be obtained in that case, they expressed doubts on the correctness of that hypothesis. After this “false-start”, the implications of production in Si-burning were fully clarified in the landmark paper of Clayton, Colgate and Fishman (1969), which opened exciting perspectives to the field by suggesting that any supernova within the local group of galaxies should be detectable in $\gamma$-ray lines.
In the 70ies D. Clayton identified most of the radionuclides of astrophysical interest (i.e. giving a detectable $\gamma$-ray line signal); for that purpose he evaluated their average SN yields, by assuming that the corresponding daughter stable nuclei are produced in their solar system abundances. Amazingly enough (or naturally enough, depending on one’s point of view) his predictions of average SNII radionuclide yields (Table 2 in Clayton 1982) are in excellent agreement with modern yield calculations, based on full stellar models and detailed nuclear physics (see Fig. 1). Only the importance of escaped Clayton’s (1982) attention, perhaps because its daughter nucleus $^{26}$Mg is produced in its stable form, making the evaluation of the parent’s yield quite uncertain. That uncertainty did not prevent Arnett (1977) and Ramaty and Lingenfelter (1977) from arguing (on the basis of Arnett’s (1969) explosive nucleosynthesis calculations) that, even if only 10$^{-3}$ of solar $^{26}$Mg is produced as , the resulting Galactic flux from tens of thousands of supernovae (during the $\sim$1 Myr lifetime of ) would be of the order of 10$^{-4}$ .
In the case of nature appeared quite generous, providing a -ray flux even larger than the optimistic estimates of Ramaty and Lingenfelter (1977): the HEAO-3 satellite detected the corresponding 1.8 MeV line from the Galactic center direction at a level of 4 10$^{-4}$ (Mahoney et al. 1984). That detection, the first ever of a cosmic radioactivity, showed that nucleosynthesis is still active in the Milky Way; however, the implied large amount of galactic ($\sim$3 per Myr, assuming steady state) was difficult to accomodate in conventional models of galactic chemical evolution if SNII were the main source (Clayton 1984), since $^{27}$Al would be overproduced in that case; however, if the “closed box model” assumption is dropped and [*infall*]{} is assumed in the chemical evolution model, that difficulty is removed, as subsequently shown by Clayton and Leising (1987).
Another welcome mini-surprise came a few years later, when the -ray lines were detected in the supernova SN1987A, a $\sim$20 star that exploded in the Large Magellanic Cloud. On theoretical grounds, it was expected that a SNIa (exploding white dwarf of $\sim$1.4 that produces $\sim$0.7 of ) would be the first to be detected in -ray lines; indeed, the large envelope mass of SNII ($\sim$10 ) allows only small amounts of -rays to leak out, making the detectability of such objects problematic (Woosley et al. 1981, Gehrels et al. 1987). Despite the intrinsically weak -ray line emissivity of SN1987A, the proximity of the LMC allowed the first detection of the tell-tale -ray line signature from the famous radioactive chain $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe, thus confirming a 25-year old conjecture (namely, that the abundant $^{56}$Fe is produced in the form of radioactive $^{56}$Ni).
Those discoveries laid the observational foundations of the field of -ray line astronomy with radioactivities. The next steps were made in the 90ies, thanks to the performances of the Compton Gamma-Ray Observatory (CGRO). First, the [*OSSE*]{} instrument aboard CGRO detected the -ray lines from SN1987A (Kurfess et al. 1992); the determination of the abundance ratio of the isotopes with mass numbers 56 and 57 offered a unique probe of the physical conditions in the innermost layers of the supernova, where those isotopes are synthesized (Clayton et al. 1992). On the other hand, the [*COMPTEL*]{} instrument mapped the Miky Way in the light of the 1.8 MeV line and found irregular emission along the plane of the Milky Way and prominent “hot-spots” in directions tangent to the spiral arms (Diehl et al. 1995); that map implies that massive stars (SNII and/or WR) are at the origin of galactic (as suggested by Prantzos 1991, 1993) and not an old stellar population like novae or AGB stars. Furthermore, [*COMPTEL*]{} detected the 1.16 MeV line of radioactive in the Cas-A supernova remnant (Iyudin et al 1994); that discovery offered another valuable estimate of the yield of a radioactive isotope produced in a massive star explosion (although, in that case the progenitor star mass is not known, contrary to the case of SN1987A).
After that short historical introduction to the field of -ray line astronomy, we turn in the next section into a discussion of the theoretically predicted yields of radioactivities from massive stars, the associated uncertainties and the relevant observational constraints.
Stellar Radioactivities: Yields, constraints, detectability
===========================================================
Overview
--------
All nuclei (except for the primordial isotopes of H and He and those of Li, Be and B) are thermonuclearly synthesized in the hot and dense stellar interiors, which are opaque to -rays. Released -ray photons interact with the surrounding material and are Compton-scattered down to X-ray energies, until they are photoelectrically absorbed and their energy is emitted at longer wavelengths. To become detectable, radioactive nuclei have to be brought to the surface (through vigorous convection) and/or ejected in the interstellar medium, either through stellar winds (AGB and WR stars) or an explosion (novae or supernovae). Their detection provides then unique information on their production sites.
The intensity of the escaping -ray lines gives important information on the yields of the corresponding isotopes and the physical conditions (temperature, density, neutron excess etc.) in the stellar zones of their production, as well as on other features of the production sites (extent of convection, mass loss, hydrodynamic instabilities, position of the “mass-cut” in SNII, etc.). The shape of the -ray lines reflects the velocity distribution of the ejecta, modified by the opacity along the line of sight and can give information on the structure of the ejecta (see e.g. Burrows 1991 for the potential of -ray lines as a tool of supernova diagnostics). Up to now, only the 0.847 MeV line from SN1987A and the 1.8 MeV line from the inner Galaxy have been resolved (both with the same instrument, the balloon borne GRIS spectrometer), but their “message” is not quite understood yet.
Obviously, radionuclides of interest for -ray line astronomy are those with high enough yields and short enough lifetimes for the emerging -ray lines to be detectable. On the basis of those criteria, Table 1 gives the most important radionuclides (or radioactive chains) for -ray line astronomy, along with the corresponding lifetimes, line energies and branching ratios, production sites and nucleosynthetic processes.[^1]
When the lifetime of a radioactive nucleus is not very large w.r.t. the timescale between two nucleosynthetic events in the Galaxy, those events are expected to be seen as point-sources in the light of that radioactivity. In the opposite case a diffuse emission along the Galaxy is expected from the cumulated emission of hundreds or thousands of sources. Characteristic timescales between two explosions are $\sim$1-2 weeks for novae (from their estimated Galactic frequency of $\sim$30 yr$^{-1}$, Della Vale and Livio 1995), $\sim$50-100 yr for SNII+SNIb and $\sim$200-400 yr for SNIa (from the corresponding Galactic frequencies of $\sim$2 SNII+SNIb century$^{-1}$ and $\sim$0.25-0.5 SNIa century$^{-1}$, Tammann et al. 1994, Cappellaro et al. 1997). Comparing those timescales to the decay lifetimes of Table 1 one sees that in the case of the long-lived and a diffuse emission is expected; the spatial profile of that emission should reflect the Galactic distribution of the underlying sources. All the other radioactivities of Table 1 should be seen as point sources in the Galaxy except, perhaps, $^{22}$Na from Galactic novae; indeed, the most prolific producers, O-Ne-Mg rich novae, have a frequency $\sim$1/3 of the total (i.e. $\sim$10 yr$^{-1}$), resulting in $\sim$40 sources active in the Galaxy during the 3.8 yr lifetime of $^{22}$Na.
Yields
------
Yields of radioactive isotopes produced in SNII are displayed in Fig. 1. On the left part of the diagram, Clayton’s (1982) “educated guess” of those yields is presented for illustration purposes; as discussed in Sec. 2, it is in excellent agreement with modern yield calculations.
In Fig. 1 it appears that the stellar mass does not affect substantially those yields; at least in the 15-25 mass range, yields do not vary by more than a factor of $\sim$2-3 (notice, however, that they do not always behave monotonically with mass). Unfortunately, the uncertainties in those yields are difficult to quantify at present, because of the many factors involved: nuclear physics (for instance, the $^{12}$C($\alpha,\gamma$) rate or n-capture and n-production cross sections), convection and mass loss prescriptions, position of the mass-cut, neutrino spectra (for some nuclei that may receive conribution from neutrino-induced nucleosynthesis) etc. Taking all those uncertainties into account, it is safe to assume that theoretical yields at present are uncertain by at least a factor of 2 (and, quite probably, by much larger factors). In particular, the yield of all Fe-peak radioactivities (including ) are quite sensitive to the position of the mass-cut; some discussion on relevant constraints is given in Sec. 3.3. Here we proceed to a comparison between results of 2 recent calculations, by Rauscher et al. (2002 or RHHW2002) and Chieffi and Limongi (2002 or CL2002), performed with state-of-the-art stellar evolution models (including mass loss and a simulation of the explosion) and extended nuclear reaction networks with updated physics. These results illustrate well current uncertainties for and , two radioactivities produced outside the stellar Fe-core.
- In the case of , the overall agreement is rather good: the RHHW2002 yields are larger by a factor of 2.5 on average than those of CL2002, the difference being more pronounced in the 15 star than in the 25 case. The two calculations converge in the more massive stars, where production is dominated by pre-explosive nucleosynthesis in the Ne and H shells. In lower mass stars production is dominated by explosive Ne-burning; several factors may then explain the differences between the two calculations: the detailed pre-supernova structure through which the shock-wave runs; the amount of seed nuclei ($^{23}$Na, $^{25}$Mg etc) which are products of C-burning and therefore depend on the carbon abundance left over from He-burning, that is, on the $^{12}$C($\alpha,\gamma$) reaction rate; the $\nu$-induced nucleosynthesis (included in RHHW2002 but not in CL2002), etc.
- In the case of the situation is not as satisfactory as for . is mainly produced by explosive Ne-burning, through neutron captures on stable and $^{58}$Fe; its yield depends on the available amount of $^{22}$Ne, which releases those neutrons through $^{22}$Ne($\alpha$,n), as well as on available $^{58}$Fe. There is a factor of $\sim$10 difference between the two calculations, for both the 15 and the 25 stars. An explanation of such a large difference appears difficult, especially when the non-monotonic behaviour of the RHHW2002 yields of with stellar mass is taken into account: according to RHHW2002, the $\sim$20 region marks the transition from exoergic convective carbon burning (for M$<$20 ) to stars where energy production from central C-burning just compensates for neutrino losses (M$>$20 ); the effect of that transition on the yields has not been investigated yet. Notice that the yields of RHHW2002 are much larger than those of the previous calculations of that same group (Woosley and Weaver 1995). Notice also that the yields of RHHW2002 are larger than the corresponding ones of , a situation that is not encountered either in CL2002 or in Woosley and Weaver (1995).
Constraints
-----------
The issue of the and yields in massive stars is of importance, in view of current observational constraints and forthcoming [*INTEGRAL*]{} measurements (see Sec. 4.2). Fig. 1 displays some other observational constraints on SNII radioactivities, obtained for SN1987A (paralellograms for , and , for a 18-20 star) and for other supernovae (on the right of the figure; the corresponding stellar mass is irrelevant in the latter case).
In the case of SN1987A, the yield (0.07 ) is obtained through extrapolation of the supernova lightcurve, assumed to be powered by decay, to the day of the explosion (e.g. Arnett et al. 1989). The yield of is obtained in three different ways: a) through the measured intensity of the 0.122 MeV line of and assuming a low optical depth for those photons; b) through the study of the late bolometric lightcurve of SN1987A and assuming that it is dominated by decay at days 1100-2000 (this analysis is far less straightforward than in the case of ); c) through an analysis of the infrared emission lines of the ejecta. All those methods converge to a value of mass of $\sim$ 3 10$^{-3}$ (see Fransson and Kozma 2002). Finally, the yield of is evaluated through methods (b) and (c), albeit with substantial difficulties, due to the complex physics of supernova heating and cooling involved and the role of positrons; current estimates give values in the 0.5-2 10$^{-4}$ range (Fransson and Kozma 2002), while Sollerman (2002) suggests an upper limit of 1.1 10$^{-4}$ .
These observational constraints compare rather well with theoretical predictions for 18-20 stars (the estimated progenitor mass of SN1987A, on the basis of its optical luminosity, e.g. Arnett et al. (1989)). Notice, however, that model results in Fig. 1 correspond to stars calculated with initial metallicity Z=, while the progenitor of SN1987A presumably had LMC metallicity, namely Z$\sim$0.3 . Notice also that Thielemann et al. (1996) obtain a larger yield for the 20 star (1.7 10$^{-4}$ ), due to a difference in the way of simulating the explosion: the “thermal bomb” they use leads to a larger entropy and more important $\alpha$-rich freeze-out than in the case of the piston-driven explosion adopted by RHHW2002. Such a high yield is marginally detectable by [*INTEGRAL*]{} (see next section).
Data on the right of Fig. 1 concern yield estimates for extragalactic SNII. Based on a sample of 8 SNIIP (the “standard” SNII, with a “plateau” in the optical lightcurve) and assuming a bolometric correction similar to the one of SN1987A, Sollerman (2002) finds a mean value of 0.075 with a standard deviation of 0.03 . He notices, however, that SNII with much lower and higher yields than the “canonical” one have also been found. To the former case belongs SN1994W: the extremely rapid fading of its lightcurve suggests a yield lower than 0.015 . On the other hand, SN1998bw is the most -rich supernova today: detailed modelling of its late emission requires yields of 0.5-0.9 , and simple arguments lead to a lower limit of 0.3 (Sollerman et al. 2002). Thus, it appears that the yield of massive stars is far from being a “universal constant” of $\sim$0.075 , a fact that may have interesting implications for stellar models as well as galactic chemical evolution, especially concerning the observed scatter of abundance ratios in halo stars (Ishimaru et al. 2002).
Finally, the yield of CasA is inferred from the 1.16 MeV line flux of $^{44}$Sc decay detected by [*COMPTEL*]{} (3.3$\pm$0.6 10$^{-5}$ ) and the CasA distance (3.4 kpc) and age (320 yr) and amounts to $\sim$1.7 10$^{-4}$ (Iyudin et al. 1999). An independent evaluation of the yield in CasA came recently, through detection of the low energy decay lines of by [*Beppo-SAX*]{}: the detected flux at 68 and 78 keV implies a mass of 1-2 10$^{-4}$ , depending on the modelisation of the underlying continuum spectrum (Vink et al. 2001; Vink and Laming this volume). These yields are larger than the average yields of RHHW2002 (see Fig. 1), typically by a factor of $\sim$3, but compatible with those of Thielemann et al. (1996). Notice, however, that these estimates suffer from uncertainties related to the ionisation state of the SN remnant; an ionised medium could slow down the electron-capture decay of that radionuclide and explain the observed flux with a smaller yield (see Mochizuki et al. 1999).
Detectability
-------------
For tutorial purposes, we present in Fig. 2 a schematic view of the -ray line emissivity of a “typical” SNII, over three different timescales: 10 years, 10 centuries and a few Myrs. The figure is based on the yields of Fig. 1 and is calculated by assuming a SN1987A-like opacity for the ejecta.
Notice that, if the RHHW2002 yields of $^{60}$Co are correct, the lines might dominate the -ray line emission of the SN for a couple of years, between 5 and 8 years after the explosion; that possibility was suggested by Clayton (1982) for very young SN remnants in the Milky Way. Unfortunately, the expected flux from SN1987A was below the sensitivity limits of instruments aboard CGRO and it will also be below the detection threshold of [*INTEGRAL*]{} (which is launched $\sim$15 years after the explosion, while has a mean life of 7.6 yr). The role of for the late lightcurve of SN1987A was studied in Timmes et al. (1996). It may well be that the current difficulties in modelling the late bolometric lightcurve of that supernova and its infrared line emissivity (see previous section) may be, at least partially, due to an inadequate account of the energy input from that isotope.
The expected 1.16 MeV -ray line flux from in SN1987A ($\sim$10$^{-5}$ ) lies at the detection limit of [*INTEGRAL*]{} and will be one of the prime targets of the SPI instrument aboard that satellite. Even a 3-$\sigma$ upper limit would bring important information on the position of the mass-cut and the explosion mechanism of that supernova, since yield is more sensitive to the mass-cut than other isotopes (e.g. Timmes et al. 1996). On the other hand, Fig. 2 reveals also that from centuries-old SN remnants in the Milky Way should be detectable by [*INTEGRAL*]{}; here again, a positive detection will reveal hitherto unknown Galactic SN remnants, while a negative result is expected to place interesting constraints on the frequency of the production sites of that isotope and on the corresponding yields. Indeed, on the basis of Woosley and Weaver (1995) yields Timmes et al. (1996) estimate that, in order to explain the solar abundance of $^{44}$Ca, one has to invoke either a higher SN frequency in the Galaxy or high yields or production of $^{44}$Ca in rare events, like sub-Chandrasekhar mass SNIa. An analysis of [*COMPTEL*]{} map of the inner Galaxy in the light of 1.16 MeV suggests that the first two possibilities should be excluded, otherwise more and/or brighter “hot-spots” than actually observed should be found by [*COMPTEL*]{} (The et al. 2000). In that respect, it is interesting to notice that tantalizing hints for emission from the nearby source GRO J0852-4642, a previously unknown supernova remnant, were recently reported (Iyudin et al. 1998, Aschenbach et al. 1999; but, see also Schönfelder et al. 2000).
Long -lived radioactivities are difficult to detect from individual sources, even with next generation instruments. For instance, in the case of , an exceptionally close site (closer than $\sim$0.3 kpc) is required for its 1.8 MeV line to be detectable by [*INTEGRAL*]{}; the Vela region might offer just such a chance, in view of some intriguing hints from [*COMPTEL*]{} observations (see Sec. 4.4). In the following we shall focus on the long-lived radioactivities and . During their $\sim$Myr lifetimes the collective emission from tens of thousands of sources gives rise to a diffuse emission along the plane of the Milky Way; only the emission has been detected up to now.
Diffuse -ray line emission from long-lived and in the Milky Way
=================================================================
Overview
--------
[*COMPTEL*]{} is the only instrument with imaging capabilities that detected the Galactic 1.8 MeV line emission (Fig. 3). The data shows clearly a diffuse, irregular, emission along the Galactic plane, allowing to eliminate: i) a unique point source in the Galactic centre and/or a nearby local bubble in that direction; ii) an important contribution of the Galactic bulge, signature of an old population and iii) any class of sources involving a large number of sites with low individual yields (like nova or low mass AGB stars), since a smooth flux distribution is expected in that case (Diehl et al. 1995). Identification of some of the observed features (“hot-spots”) with tangents to spiral arms seems quite plausible and suggests that massive stars are at the origin of (Prantzos and Diehl 1996).
Estimates of the galactic mass of rely on assumptions about the spatial distribution of the underlying sources. All plausible disk models tested by the [*COMPTEL*]{} team yield a mass of $\sim$2 . Introducing a spiral structure to the axisymmetric disk models improves the fit to the data and implies that between 60 and 100 $\%$ of the may lie on the spiral arms (Diehl et al. 1998). It should be noticed that the derived spatial distribution of depends on the method of analysis. As shown by Knödlseder et al. (1999) some imaging analysis methods lead to all-sky maps with more pronounced localised features than some others; still, the irregular nature of the 1.8 MeV emission along the Galactic plane and the localised “hot-spots” are revealed by all imaging methods, in a statistically significant way (Plüschke et al. 2001a).
Sources of and the role of
----------------------------
The yields presented in Fig. 1 concern massive stars exploding as SNII. Even more massive stars ($>$30 ) may produce substantial amounts of during central ([*hydrostatic*]{}) H-burning and eject them through their powerful stellar winds, in the WR stage (Prantzos and Cassé 1986); the WR yields are relatively well determined (e.g. Meynet et al. 1997), but the [*explosive*]{} yields of those stars (which ultimately explode as SNIb) are very poorly known at present.
Under the most favorable conditions (highest possible yields for SNII allowed by current uncertainties; accounting for the strong metallicity dependence of WR yields, which favours sources in the inner Galaxy; adopting a mildly steep IMF, i.e. with the Salpeter slope of -1.35 instead of the Scalo slope of -1.7), it turns out that both SNII and WR can account for $\sim$2 /Myr of (e.g. Prantzos and Diehl 1996). It may well be that both classes of sources contribute equally to the Galactic (a coincidence not “stranger” than the quasi-equality between the solar abundances of s- and r- elements, or between the contributions of the dark matter and dark energy to the density of the Universe). However, it is interesting to see whether independent constraints can be used to distinguish between the SNII and WR contributions and identify a dominant component (assuming that there is one).
One such constraint is the flux ratio of the -ray lines of (1.17 and 1.33 MeV) and (1.8 MeV). Indeed, is predicted to be co-produced with in SNII (in almost the same zones and in similar amounts, Fig. 1), but not in WR stars. If SNII dominate galactic production, an important emission is then expected (flux ratio: ${^{60}Fe}\over{^{26}Al}$ = ${Y_{60}/60/\tau_{60}}\over{Y_{26}/26/\tau_{26}}$, where Y represent yields averaged over the IMF and $\tau$ the corresponding decay lifetimes); if WR stars are dominant, the -ray line flux ratio of / is expected to be extremely low.
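As a purely numerical illustration of this flux-ratio formula, the snippet below simply evaluates the expression quoted above; the yields and lifetimes entered here are placeholder values, not numbers taken from any of the yield calculations discussed in this section.

```python
def flux_ratio_60fe_26al(Y60, Y26, tau60, tau26):
    """Gamma-ray line flux ratio (Y60/60/tau60)/(Y26/26/tau26), as defined above.
    Y60, Y26: IMF-averaged yields (same mass units); tau60, tau26: decay
    lifetimes (same time units)."""
    return (Y60/60.0/tau60)/(Y26/26.0/tau26)

# illustrative placeholder inputs: equal IMF-averaged yields, Myr-scale lifetimes
print(flux_ratio_60fe_26al(Y60=1.0e-4, Y26=1.0e-4, tau60=2.0e6, tau26=1.0e6))
```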
The ${^{60}Fe}\over{^{26}Al}$ ratio of SNII depends on stellar models (and slightly on the IMF). The Woosley and Weaver (1995) yields lead to a flux ratio of 0.16 (Timmes et al. 1995), and so do the recent ones of CL2002 (note that the absolute and yields of CL2002 are $\sim$ 4 times lower than those of WW1995); however, the most recent results of the Santa Cruz group (RHHW2002) lead to a surprisingly large flux ratio ${^{60}Fe}\over{^{26}Al}$$\sim$0.4. On the other hand, current observational upper limits, obtained by GRIS (Naya et al. 1998) and [*COMPTEL*]{} (Diehl 2000) are close to 0.15. It appears then that: a) the RHHW2002 yields can produce $\sim$1 of Galactic , but should be excluded by the non-detection of ; b) the CL2002 yields produce a Galactic mass of $\sim$0.4 , too low to explain the detected 1.8 MeV flux. Taken at face value, the most recent SNII yields apparently exclude SNII as dominant sources of Galactic . Does this mean that WR stars constitute a viable alternative?
WR stars can indeed provide $\sim$2 /Myr of (Meynet et al. 1997), provided that the strong metallicity dependence of the yields is taken into account. Moreover, Knödlseder (1999) showed that a map of the ionising power from massive stars (derived from the COBE data, after correction for synchrotron contribution) corresponds to the 1.809 MeV map of galactic in all significant detail; assuming a standard stellar initial mass function, his calculation reproduces consistently the current galactic supernova rate and massive star population from both maps, and suggests that most of is produced by WR stars of high metallicity in the inner Galaxy. Finally, Knödlseder et al. (2001) point out that one of the prominent “hot-spots” in the [*COMPTEL*]{} 1.8 MeV sky-map, the Cygnus region, is an association of massive stars with no sign of recent supernova activity. All these observational and theoretical indices favour WR stars as dominant contributors. However, in that case, the 1.8 MeV longitude profile (or, equivalently, the radial profile) should be steeper than observed (see Fig. 5).
In summary, there is no satisfactory explanation at present for the flux of the 1.8 MeV line and its spatial distribution in the Milky Way. [*INTEGRAL*]{} is expected to provide a more detailed spatial profile than COMPTEL and to put more stringent limits (or, perhaps, to detect) emission from . Only when the nature of the major sources is clarified it will become possible to tackle the question of their Galactic distribution (i.e. with any yield dependence on metallicity - or other factors - properly taken into account).
The line width: a hint for mixing of SN ejecta in the ISM?
-----------------------------------------------------------
The width of the line was already discussed by Ramaty and Lingenfelter (1977) who pointed out that ejected from SN should decelerate in the ISM on a timescale short compared with its decay timescale; as a consequence, the emitted -ray line should be quite narrow (narrower than the $\sim$2 keV width imposed by Galactic rotation), making its detection relatively easy.
The HEAO-3 Ge detectors found the line to be narrow indeed: FWHM$<$3 keV (Mahoney et al. 1984); however, the GRIS instrument measured a FWHM=5.4$\pm$1.4 keV, $\sim$3 times larger than HEAO-3 and much larger than allowed by Galactic rotation (Naya et al. 1996). If real, that large width can be interpreted either as kinematic (with the bulk of moving with velocities $\sim$540 km/s) or thermal (with most atoms brought to temperatures T$\sim$4.5 10$^8$ K). The thermal origin seems improbable, since it would imply that all is produced in $\sim$200 mini-starburst regions in the inner Galaxy (Chen et al. 1997). A non-thermal origin could be understood if nuclei are incorporated in dust grains, which are launched by the SN explosion (Chen et al. 1997), or accelerated by the SN shock wave (Ellison et al. 1997) or repeatedly accelerated by SN shocks (Sturner and Naya 1999).
The SPI instrument of [*INTEGRAL*]{} will clarify that issue, by measuring the line width and also the latitude distribution of the line emission. Already, [*COMPTEL*]{} measurements imply a vertical scaleheight of $<$220 pc for the distribution and suggest that the velocity of the bulk of has not as large a component perpendicularly to the Galactic plane as suggested by the kinematic interpretation of the GRIS measurements (Oberlack 1997).
“hot-spots”: monitoring stars, superbubbles and young stellar associations
---------------------------------------------------------------------------
The study of individual “hot-spots” revealed by [*COMPTEL*]{} bears on our understanding of the evolution of young stellar associations (in the cases of Cygnus, Carina and Centaurus-Circinus) and even individual stars (in the case of Vela).
The Cygnus region was studied with population synthesis models by two groups (Cervinho et al. 2001, Plüschke et al. 2001b). The resulting morphology of the 1.8 MeV emission compares well with the [*COMPTEL*]{} data. However, in the case of Carina, the predicted absolute flux is smaller (by a factor of 5-20) than detected by [*COMPTEL*]{} (Knödlseder et al. 2001). That discrepancy may imply something interesting, either for the (in)completeness of the stellar census of that association or for the yields. [*INTEGRAL*]{} will establish more accurately the morphology of those “hot-spots” and further test the “massive star group” origin of .
Another target of importance for future 1.8 MeV studies is the Orion/Eridanus region. [*COMPTEL*]{} surveys of the anticenter region show significant (5 $\sigma$) extended emission towards the south of the Orion molecular clouds. That emission could be attributed (Diehl 2002) to ejected by the prominent Orion OB1 association and expanded into the low density cavity of the Eridanus bubble. The expansion of supernova ejecta into a previously formed cavity of peculiar shape (and not into a medium with radial symmetry) is a novel and interesting field of study, opened by [*COMPTEL*]{} and left for [*INTEGRAL*]{} to explore.
Finally, the Vela region offers the opportunity to measure (or put upper limits on) yields from individual sources. The morphology of the rather extended 1.8 MeV emission detected by [*COMPTEL*]{} does not allow identification with any of the three known objects in the field (the Vela SNR, the closest WR star $\gamma^2$ Vel and SNR RX-J0852-4622); all three objects lie closer than 260 pc, according to recent estimates. [*COMPTEL*]{} measurements are compatible with current yields of SNII (in the case of Vela SNR) and marginally compatible with current yields of $\gamma^2$ Vel (Oberlack et al. 2000). [*INTEGRAL*]{} measurements in the Vela region are then expected to place more stringent constraints on stellar models.
Summary
=======
The aim of [*Gamma-Ray Astronomy with Radioactivities*]{}, as explicitly defined by the “founding fathers” of the field in the 60ies (see Sec. 2) was to probe stellar nucleosynthesis as well as supernova structure and energetics. This original aim was reached in a spectacular way in the case of SN1987A (which, however, remains today - and, probably, for sometime in the future - a unique object in that respect).
On the other hand, the legacy of HEAO-3 and [*COMPTEL*]{} set new aims to the field of [*Gamma-Ray Astronomy with long-lived Radioactivities*]{}: to probe the large-scale distribution of active nucleosynthesis sites in the Galaxy and the properties/history of any clusterings in that distribution (young stellar associations, individual objects). [*INTEGRAL*]{} is expected to perform this next step.
Acknowledgements {#acknowledgements .unnumbered}
================
I am grateful to Roland Diehl for his critical suggestions and comments.
References {#references .unnumbered}
==========
[99]{}
E. Anders . D. Arnett . D. Arnett . D. Arnett, J. Bachall, R. Kirchner and S. Woosley . B. Aschenbach, A. Iyudin and V. Schönfelder . D. Bodansky, D. Clayton and W. Fowler . L. Borst . G. Burbidge, F. Hoyle, M. Burbidge, R. Christy and W. Fowler . G. Burbidge, M. Burbidge, W. Fowler and F. Hoyle . A. Burrows, in [*Gamma-Ray Line Astrophysics*]{}, Eds. Ph. Durouchoux and N. Prantzos, (New York: AIP, Vol 232), p. 297 (1991). E. Cappellaro, M. Turatto, D. Tsvetkov . M. Cervinho, J. Knödlseder, D. Schaerer, P. von Ballmoos and G. Meynet . W. Chen et al. in [*The Transparent Universe*]{}, eds. C. Winkler et al., (ESA SP-382), p. 105, (1997) A. Chieffi and M. Limongi, in press, . D. Clayton in [*Essays in Nuclear Astrophysics*]{}, eds C. Barnes et al., (Cambridge University Press), p. 401, (1982). D. Clayton . D. Clayton . D. Clayton and W. Craddock . D. Clayton, S. Colgate and G. Fishman . D. Clayton and M. Leising . D. Clayton, M. Leising, L.-S. The, W. Johnson and J. Kurfess . S. Colgate and C. McKee . M. Della Vale and M. Livio . R. Diehl in [*INTEGRAL School*]{}, unpublished [2000]{}. R. Diehl, in press, . R. Diehl and F. Timmes . R. Diehl et al. . R. Diehl et al. in [*AIP Conf. Proc. 410*]{}, eds. C. Dermer et al., p. 1109, (1998). D. Ellison, L. Drury and J.-P. Meyer . W. Fowler and F. Hoyle . C. Fransson and C. Kozma, astro-ph/0112405 and in press, . N. Gehrels, M. Leventhal and C. MacCallum . F. Hoyle . Y. Ishimaru, N. Prantzos and S. Wanajo, in preparation. A. Iyudin et al. . A. Iyudin et al. . A. Iyudin et al. [*Astro. Let. and Communications*]{}, [**38**]{}, 313, (1999). W. Johnson, F. Harnden and R. Haymes . R. Kinzer, P. Milne, J. Kurfess, M. Strickman, W. Johnson, W. Purcell . J. Knödlseder [*PhD Thesis*]{}, Univ. Paul Sabatier, Toulouse (unpublished),(1997). J. Knödlseder . J. Knödlseder, K. Bennett, H. Bloemen, et al. . J. Knödlseder et al. astro-ph/0104074 and [*Proceedings of 4th INTEGRAL Workshop*]{}, Eds. A. Gimenez, V. Reglero & C. Winkler, [*ESA SP-459*]{}, p.47 (2001). J. Knödlseder and G. Vedrenne, astro-ph/0101018 and [*Proceedings of 4th INTEGRAL Workshop*]{}, Eds. A. Gimenez, V. Reglero & C. Winkler, [*ESA SP-459*]{}, p.23 (2001). J. Kurfess et al. . W. Mahoney, J. Ling, A. Wheaton, A. Jacobson . G. Meynet, M. Arnould, G. Paulus and N. Prantzos Y. Mochizuki, K. Takahashi,H-Th. Janka, W. Hillebrandt and R. Diehl, . J. Naya et al. . J. Naya et al. . U. Oberlack, [*PhD Thesis*]{}, (unpublished), (1997). U. Oberlack, U. Wessolowski, R. Diehl et al. . T. Pankey Jr, [*PhD Thesis*]{}, (unpublished), (1962). S. Plüschke, R. Diehl, V. Schönfelder et al., in [*Proceedings of 4th INTEGRAL Workshop*]{}, Eds. A. Gimenez, V. Reglero & C. Winkler, [*ESA SP-459*]{}, p.55 (2001a). S. Plüschke, K. Kretschmer, R. Diehl, D. Hartmann and U. Oberlack, in [*Proceedings of 4th INTEGRAL Workshop*]{}, Eds. A. Gimenez, V. Reglero & C. Winkler, [*ESA SP-459*]{}, p.91 (2001b). N. Prantzos, in [*Gamma-Ray Line Astrophysics*]{}, Eds. Ph. Durouchoux and N. Prantzos (New York: AIP, Vol. 232), p. 129 (1991) N. Prantzos . N. Prantzos and M. Cassé . N. Prantzos and R. Diehl . R. Ramaty and R. Lingenfelter . T. Rauscher, A. Heger, R. Hoffman and S. Woosley, astro-ph/0112478 and ApJ (2002). V. Schönfelder, H. Bloemen, W. Collmar et al. in [*Proceedings of 5th Compton Symp.*]{}, Eds. M. McConnel and J. Ryan (New York: AIP, Vol. 510), p. 54 (2000). J. Sollerman, astro-ph/0204469, and in press, . J. Sollerman et al., astro-ph/0204498 and in press, [*A&A*]{} (2002). S. Sturner and J. Naya . G. Tammann, W. 
Loffler and A. Schroeder, . L.S. The, R. Diehl, D. Hartmann et al., in [*Proceedings of 5th Compton Symp.*]{}, Eds. M. McConnel and J. Ryan(New York: AIP, Vol. 510) , p. 64 (2000) K.-F. Thielemann, K. Nomoto and M. Hashimoto, . F. Timmes et al. . F. Timmes et al. . J. Truran, A. G. W. Cameron and A. Gilbert . J. Vink et al. . S. Woosley and T. Weaver . S. Woosley, T. Axelrod and T. Weaver .
[^1]: The 511 keV line of e$^+$-e$^-$ annihilation is, in fact, the first $\gamma$-ray line ever detected (Johnson et al. 1972), although its origin (probably related to the radionuclides of Table 1) and spatial distribution in the Galaxy are not well understood yet (see Kinzer et al. 2001, and references therein).
---
author:
- |
Predrag Prester\
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)\
Am Mühlenberg 1, D-14476 Golm, Germany\
\
Theoretical Physics Department, Faculty of Natural Sciences and Mathematics\
p.p. 331, HR-10002 Zagreb, Croatia\
\
E-mail:
title: Lovelock type gravity and small black holes in heterotic string theory
---
Introduction
============
Recently black holes in heterotic string theory have attracted a lot of attention[^1]. A special class are the 2-charge small black holes. On the string side these black holes should correspond to perturbative half-BPS states of the heterotic string compactified on $T^{9-D}\times S^1$, with momentum and winding on $S^1$ equal to $n$ and $w$, respectively, for which one can easily calculate the asymptotic expression ($n,w\gg1$) for the number of states [@DabHar89; @DaGiHaRR90]. Its logarithm (which is the entropy in the microcanonical ensemble) is in the leading order given by $$\label{entropy}
S=4\pi\sqrt{nw}$$ This result, obtained for a free string, remains valid, due to supersymmetry, after switching on the string coupling $g_s$. Now, as the string coupling is increased, at one point the de Broglie-Compton wavelength $1/M$ becomes smaller than the corresponding Schwarzschild radius $\ell_P^2 M \sim g_s^2 \alpha' M$, which should lead to the formation of an (extremal) black hole. This is one way to argue that elementary string states with mass large enough should describe black holes [@thoft90; @sussk93; @SusUgl94; @HorPol96].
Indeed, exact black hole solutions of the low energy effective action of heterotic string theory in the leading order in $\alpha'$ were found which describe $D$-dimensional extremal black holes with “correct” quantum numbers (e.g., they have two electric charges proportional to $n$ and $w$) [@sen9411; @peet95]. They are in some sense pathological, having null singularities and zero horizon area[^2]. This implies vanishing Bekenstein-Hawking entropy, which is obviously in disagreement with the string result (\[entropy\]).
To understand what is happening, one should go back to the derivation of (\[entropy\]) and see that although it is perturbative in the string coupling, it is [*nonperturbative*]{} in $\alpha'$. This means that on the gravity side one should start from the complete tree-level (in string coupling) effective action which contains all $\alpha'$ higher-derivative corrections. This is also visible from the structure of the solution in the leading order – the singularity of the horizon implies that one cannot neglect higher curvature terms (or treat them as a perturbation) in the effective action near the horizon, as is usually done for large black holes. In fact, a priori all terms should be of the same importance. The remarkable property of small black holes is that they give us some information on the [*complete*]{} tree-level (in string coupling) effective action.
In [@dabh0409; @DakaMa04; @sen0411; @HuMaRa04; @sen0502; @sen0504] it was shown in $D=4$ that by adding to the action just one type of higher-derivative terms, obtained by supersymmetrizing the square of the Weyl tensor [@CaWiMo98; @CaWiKaMo99; @CaWiKaMo00], one obtains that the corrected black holes have a regular horizon of $AdS_2\times S^2$ type, for which the generalised Wald entropy formula[^3] [@wald93; @IyWa94; @JaKaMy94] gives the desired result (\[entropy\]). This result is at the same time exciting and mysterious, because there is no apparent reason why only terms quadratic in the curvature should contribute to the entropy, with all higher-order terms somehow cancelling.[^4] It is important to note that for the entropy one only needs the behaviour of the solution near the horizon, so this cancellation could just appear there (as a consequence of the $AdS_2\times S^2$ geometry). Indeed, numerical extrapolations to the far-away region show that the solution does not approach the Schwarzschild solution but has oscillating behaviour connected with spurious degrees of freedom typically present in higher order gravity theories [@sen0411; @HuMaRa04]. This could suggest that other higher order terms become important away from the horizon.
A natural question is what is happening in $D>4$? Unfortunately, it is impossible to perform the same analysis, as it is not known how to supersymmetrize $R^2$-terms in the action. In the absence of this, Sen [@sen0505] took as a “toy-model” just the gravitational part, which is proportional to the Gauss-Bonnet density[^5], and analysed the near-horizon behaviour of the solution (for which he assumed $AdS_2\times S^{D-2}$ geometry). Although this action is not supersymmetric, surprisingly, the Wald entropy formula again gave (\[entropy\]), now in $D=4$ and $D=5$ (but not for $D\ge6$). Even more surprisingly, in the recent paper [@sen0508], it was shown that for the same type of action, applied to the large class of 8-charge black holes in $D=4$, the entropy, near horizon metric, gauge field strengths and the axion-dilaton field are identical to those obtained in [@CaWiMo9906; @CaWiKaMo04] from a supersymmetric version of the theory based on the squared Weyl tensor.
In this paper we extend Sen’s analysis of two-charge black holes to any number of dimensions $D\ge4$. For the effective action near the horizon we take the obvious generalisation, i.e., we use extended Gauss-Bonnet densities as higher-order terms in the curvature [@love71; @love72]. These “Lovelock type” actions have several appealing properties, e.g., they are of first order (no ghosts or spurious states [@zwibach85; @zumino86]), have a good boundary value problem, and contain only a finite number of terms. We perform the near horizon analysis assuming $AdS_2\times S^{D-2}$ geometry and, using the Wald formula, calculate the entropy, which has a complicated dependence on $D$ and[^6] the $[D/2]$ coupling constants[^7] $\lambda_m$. We show that there is a unique choice for $\lambda_m$ (independent of $D$) which gives exactly the expression (\[entropy\]) in [*any*]{} $D$. It should be emphasized that this is a nontrivial result, in the sense that to fix the entropy for black holes in $D$ dimensions one has only $[D/2]$ free parameters to play with (or, in other words, for each couple of dimensions only one parameter enters). This result trivially extends to black holes with more electric charges, connected with heterotic string compactifications on $M_D\times T^{10-D-k}\times (S^1)^k$.
Effective action with extended Gauss-Bonnet terms
=================================================
We are interested in heterotic string compactified on $T^{9-D}\times S^1$, for which effective low energy action in the leading order in string coupling can be written in the form $$\label{treeea}
S = \frac{1}{16\pi G_N} \int d^Dx \sqrt{-g} \,S \sum_{m=1}
\alpha'^{m-1} \mathcal{L}_m$$ where $S$ is the dilaton field, which is connected to the effective closed string coupling constant $g$ by $S=1/g^2$.
The leading order term in $\alpha'$ is given by [@sen0505] $$\label{alpha0}
\mathcal{L}_1 = R + S^{-2}(\nabla S)^2 - T^{-2}(\nabla T)^2
- T^2 \left(F_{\mu\nu}^{(1)}\right)^2
- T^{-2} \left(F_{\mu\nu}^{(2)}\right)^2$$ where we assumed that all other fields are vanishing. In this order exact half-BPS electrically charged extremal black hole solutions in any $D$ were found [@peet95] which have the same quantum numbers as perturbative half-BPS string states (where the two electric charges are proportional to the momentum and winding of the string along $S^1$). These solutions have a singular horizon (null singularity) with vanishing area, on which the effective string coupling also vanishes. These properties are in contrast with what one expects from string theory, which for example gives the nonvanishing result (\[entropy\]) for the entropy.
It is obvious what is wrong in the above analysis. As the horizon is singular, the curvature invariants (and some other fields like $S$) are also singular, which means that in the effective action (\[treeea\]) one cannot neglect higher-order terms, which typically contain higher powers and/or derivatives of the Riemann tensor. In $D=4$ dimensions it was shown in [@dabh0409; @DakaMa04; @sen0411; @HuMaRa04] that if one adds a particular class of higher-derivative terms (obtained by supersymmetrization of the square of the Weyl tensor), the corrections completely change the nature of the singularity - one gets a timelike singularity hidden behind a horizon with finite area. Also, the dilaton field $S$ becomes finite on the horizon, which means that the effective string coupling is nonvanishing. Using the Wald formula it was shown that the entropy is equal to the string result (\[entropy\]). Now, the mystery is why other terms, which are known to be present in the effective action (especially ones containing higher powers of the Riemann tensor), appear to be irrelevant for the entropy calculation.
One way to understand what is happening would be to make the same analysis in higher dimensions. Unfortunately, for $D>4$ a supersymmetric version of the action containing curvature squared terms is not known. In the absence of this, in [@sen0505] Sen took as a toy model an action obtained by adding just the Gauss-Bonnet term. Although this action is not supersymmetric, from the near horizon analysis he obtained that the entropy is again given by (\[entropy\]), but only in $D=4,5$. Now, the interesting thing is that in $D=6$ the next extended Gauss-Bonnet term is present, so the natural question to ask is what happens if we include in the action all extended Gauss-Bonnet terms. That is the main subject of this paper.
We propose to analyse the actions of the Lovelock type where higher order terms in $\alpha'$ in (\[treeea\]) are given by the extended Gauss-Bonnet densities [@love71; @love72] $$\label{lgbm}
\mathcal{L}_m = \lambda_m \mathcal{L}^{GB}_m = \frac{\lambda_m}{2^{m}}
\, \delta_{\mu_1\nu_1\ldots\mu_m\nu_m}^{\rho_1\sigma_1\ldots
\rho_m\sigma_m} \, {R^{\mu_1\nu_1}}_{\rho_1\sigma_1}\cdots
{R^{\mu_m\nu_m}}_{\rho_m\sigma_m}\;, \qquad m=2,\ldots,[D/2]$$ where $\lambda_m$ are some (at the moment free) dimensionless parameters, $\delta_{\alpha_1\ldots\alpha_k}^{\beta_1\ldots\beta_k}$ is the totally antisymmetric product of $k$ Kronecker deltas, normalized to take values 0 and $\pm 1$, $[x]$ denotes the integer part of $x$, and all Greek indices run from 0 to $D-1$. The extended Gauss-Bonnet densities $\mathcal{L}^{GB}_m$ are in many respects generalisations of the Einstein term (note that $\mathcal{L}^{GB}_1=R$). In particular, the $m$-th term is topological in $D=2m$ dimensions. Also note that they identically vanish for $m>[D/2]$, so for any $D$ there is a finite number of terms in the action.
Near horizon analysis
=====================
We want to study solutions of the action given by (\[treeea\]–\[lgbm\]) which should be deformations of the exact small black hole solutions obtained in the lowest order in $\alpha'$. We do not know how to solve the equations of motion exactly, but we are primarily interested in the entropy, which is given by the Wald formula [@wald93; @IyWa94; @JaKaMy94] $$\label{wald}
S = 2\pi \int_\mathcal{H} \hat{\epsilon} \,
\frac{\partial\mathcal{L}}{\partial R_{\mu\nu\rho\sigma}}
\eta_{\mu\nu}\eta_{\rho\sigma}$$ It is important to notice here that the integration is done over a cross section $\mathcal{H}$ of the horizon, so to calculate the entropy one only needs to know the solution near the horizon.
Now, in [@sen0506] it was shown that symmetries of the horizon can enormously simplify the calculation of the entropy. In the $D=4$ case it was shown that the near horizon geometry is of $AdS_2 \times S^2$ type, where the effect of the $\alpha'$ corrections was to make the radius of the horizon nonvanishing. Following [@sen0505] we conjecture that the same happens in $D>4$, so the near horizon geometry should be $AdS_2 \times S^{D-2}$. This implies that near the horizon the fields have the following form $$\begin{aligned}
&& ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu = v_1 \left( -x^2 dt^2 +
\frac{dx^2}{x^2} \right) + v_2\,d\Omega_{D-2}^2 \nonumber \\
&& S = u_S \nonumber \\
&& T = u_T \nonumber \\
&& F_{rt}^{(i)} = e_i \;, \qquad i=1,2 \label{horfie}\end{aligned}$$ where $v_i$, $u_S$, $u_T$, $e_i$ are constants, and moreover that the covariant derivatives of the scalar fields $S$ and $T$, the gauge fields $F_{\mu\nu}^{(i)}$ and the Riemann tensor $R_{\mu\nu\rho\sigma}$ vanish on the horizon $x=0$. This makes solving the equations of motion (EOM’s) near the horizon (i.e., finding $v_i$, $u_S$, $u_T$ and $e_i$) very easy. One first defines $$\label{fuve}
f(\vec{u},\vec{v},\vec{e}) = \int_{S^{D-2}} \sqrt{-g}
\, \mathcal{L}$$ where the integration is over $S^{D-2}$, and one uses (\[horfie\]). The equations of motion near the horizon are given by $$\label{eom}
\frac{\partial f}{\partial u_S}=0 \;,\qquad
\frac{\partial f}{\partial u_T}=0 \;,\qquad
\frac{\partial f}{\partial v_1}=0 \;, \qquad
\frac{\partial f}{\partial v_2}=0$$ Notice that the configuration (\[horfie\]) solves the EOM’s for the gauge fields identically on the horizon for any $e_i$. We also need to know the electric charges $q_i$. In [@sen0506] it was shown that they are given by $$\label{charge}
q_i = \frac{\partial f}{\partial e_i} \;,\qquad i=1,2$$ We would also like to connect conserved charges (\[charge\]) with corresponding quantum numbers of half-BPS states of heterotic string, which are momentum $n$ and winding $w$ around $S^1$. This is given by [@sen0508] $$\label{q-nw}
q_1 = \frac{2\,n}{\sqrt{\alpha'}} \;,\qquad
q_2 = \frac{2\,w}{\sqrt{\alpha'}}$$ It was shown in [@sen0506] that the entropy for the configuration (\[horfie\]) is given by $$S = 2\pi \left( \sum_{i=1}^2 e_i \, q_i - f \right)$$ For actions of the type (\[treeea\]) the EOM for the dilaton $S$ implies that $f$ vanishes on-shell near the horizon, so we have just $$\label{ent}
S = 2\pi \sum_{i=1}^2 e_i \, q_i$$
Entropy of small black holes
============================
We now apply the procedure from the previous section to analyse extremal small black hole solutions in $D$ dimensions, with the $AdS_2\times
S^{D-2}$ horizon geometry, when the action is given by (\[treeea\]–\[lgbm\]). First we need to calculate the function $f$ of (\[fuve\]) using (\[horfie\]). It was shown [@PP02] that for metrics of the type $$\label{gsph}
ds^2 = \gamma_{ab}(x) dx^a dx^b + r(x)^2 d\Omega_{D-2}\;,\qquad
a,b=1,2$$ the Gauss-Bonnet densities, integrated over the unit sphere $S^{D-2}$, give $$\begin{aligned}
\label{lmint}
\int_{S^{D-2}} \sqrt{-g}\,\mathcal{L}_m &=& - \Omega_{D-2} \lambda_m
\frac{(D-2)!}{(D-2m)!} \sqrt{-\gamma}\,r^{D-2m-2}
\left[1-(\nabla r)^2\right]^{m-2} \nonumber \\ &&\times \bigg\{
2m(m-1)r^2\left[(\nabla_a\nabla_br)^2-(\nabla^2r)^2\right] \nonumber\\
&&\quad+2m(D-2m)r\nabla^2r\left[1-(\nabla r)^2\right]
-m\mathcal{R}r^2\left[1-(\nabla r)^2\right] \nonumber \\
&&\quad\left. -(D-2m)(D-2m-1)\left[1-(\nabla r)^2\right]^2\right\}\;.\end{aligned}$$ where $\mathcal{R}$ is the two-dimensional Ricci scalar calculated from $\gamma_{ab}$. Specializing further to the $AdS_2\times S^{D-2}$ metric (\[horfie\]), all terms containing covariant derivatives vanish on the horizon, and one obtains the following expression for the function $f$ $$\begin{aligned}
\label{fgb}
f &=& \frac{\Omega_{D-2}}{16\pi G_N}\,u_S\,v_1\,v_2^{(D-2)/2} \left\{
\frac{2\,u_T^2\,e_1^2}{v_1^2} + \frac{2\,e_2^2}{u_T^2\,v_1^2} \right.
\\ \nonumber && + \sum_{m=1}^{[D/2]}
\alpha'^{m-1} \lambda_m \frac{(D-2)!}{(D-2m)!} \left.
v_2^{-m} \left[(D-2m)(D-2m-1) - 2m\frac{v_2}{v_1} \right] \right\}\end{aligned}$$ where $\lambda_1=1$.
Now we can use (\[eom\]–\[ent\]) to calculate the entropy. For better understanding we first specialize to $D\le7$ and then treat the general case.
$D=4,5$
-------
In this case we have only $m=1,2$ terms in (\[fgb\]). Although the analysis was already done in [@sen0505], for completeness we shall repeat it here. From (\[fgb\]) we get $$\label{fgb45}
f = \frac{\Omega_{D-2}}{16\pi G_N}\,u_S\,v_1\,v_2^{(D-2)/2} \left[
\frac{2\,u_T^2\,e_1^2}{v_1^2} + \frac{2\,e_2^2}{u_T^2\,v_1^2}
- \frac{2}{v_1} + \frac{(D-2)(D-3)}{v_2} \left( 1 -
\frac{4\,\alpha' \lambda_2}{v_1} \right) \right]$$ Now we impose EOM’s (\[eom\]), and use (\[charge\],\[q-nw\]) to express results in terms of $n$ and $w$. One obtains a unique solution $$\begin{aligned}
v_1 &=& 4\,\alpha' \lambda_2 \label{v145} \\
v_2 &=& 2(D-2)(D-3) \alpha' \lambda_2 \\
u_T &=& \sqrt{\frac{n}{w}} \label{uT45} \\
u_S &=& \frac{4\pi G_N}{\Omega_{D-2}} \frac{v_1}{v_2^{(D-2)/2}}
\frac{q_1}{e_2} = \frac{4\pi G_N}{\Omega_{D-2}}
\frac{v_1}{v_2^{(D-2)/2}}
\frac{\sqrt{2nw}}{\alpha'\sqrt{\lambda_2}} \label{uS45} \\
e_1 &=& \sqrt{2\,\alpha' \lambda_2 \frac{w}{n}} \;,\qquad\qquad
e_2 = \sqrt{2\,\alpha' \lambda_2 \frac{n}{w}} \label{e1245}\end{aligned}$$ Using (\[v145\]-\[e1245\]) and (\[q-nw\]) in (\[ent\]) we obtain the entropy $$\label{ent45l}
S = 4\pi \sqrt{8\,\lambda_2} \sqrt{nw}$$ We now see that to match the statistical entropy of string states (\[entropy\]) one has to take $$\label{lam2}
\lambda_2 = \frac{1}{8}$$ As noticed in [@sen0505] this is exactly the value which appears in front of the Gauss-Bonnet term in the low energy effective action of heterotic strings. Observe also that by fixing only one parameter $\lambda_2$ one obtains (\[entropy\]) for both $D=4$ and $D=5$.
Notice here some aspects of the solution which we shall show to be common to all $D$. First, the dilaton field $u_S\propto\sqrt{nw}$, so for the effective string coupling on the horizon $g^2=1/u_S\propto 1/\sqrt{nw}\ll 1$ for $n,w\gg1$. So, tree level in the string coupling is a good approximation. Second, $v_i\propto\alpha'$, which means that all terms in our effective action are of the same order in $\alpha'$. All higher curvature terms are a priori important.
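As a consistency check of the formulae above, the following short script (an added illustration, not part of the original derivation; it assumes SymPy is available and writes `ap` for $\alpha'$) verifies symbolically that the data (\[v145\]-\[e1245\]) with $\lambda_2=1/8$ solve the equations (\[eom\]) derived from (\[fgb45\]) and, together with the charges (\[q-nw\]), reproduce the entropy (\[ent45l\]). The overall factor $u_S\,\Omega_{D-2}/(16\pi G_N)$ is dropped, since it cancels in all of these conditions.

```python
# Added illustration: symbolic check that (v145)-(e1245) with lambda_2 = 1/8
# solve the near-horizon equations (eom) for the truncated f of (fgb45) and,
# with the charges (q-nw), give S = 4*pi*sqrt(n*w).
import sympy as sp

D, ap, n, w = sp.symbols('D alphap n w', positive=True)      # ap stands for alpha'
uT, v1, v2, e1, e2 = sp.symbols('u_T v_1 v_2 e_1 e_2', positive=True)
lam2 = sp.Rational(1, 8)

bracket = (2*uT**2*e1**2/v1**2 + 2*e2**2/(uT**2*v1**2) - 2/v1
           + (D - 2)*(D - 3)/v2*(1 - 4*ap*lam2/v1))
F = v1*v2**((D - 2)/2)*bracket          # f of (fgb45), with u_S*Omega/(16 pi G_N) dropped

sol = {v1: 4*ap*lam2, v2: 2*(D - 2)*(D - 3)*ap*lam2, uT: sp.sqrt(n/w),
       e1: sp.sqrt(2*ap*lam2*w/n), e2: sp.sqrt(2*ap*lam2*n/w)}
eoms = [F, sp.diff(F, uT), sp.diff(F, v1), sp.diff(F, v2)]   # (eom), f = 0 for the dilaton

for Dval in (4, 5):
    rep = [(D, Dval)] + [(k, v.subs(D, Dval)) for k, v in sol.items()]
    print(Dval, [sp.simplify(eq.subs(rep)) for eq in eoms])  # -> [0, 0, 0, 0]

q1, q2 = 2*n/sp.sqrt(ap), 2*w/sp.sqrt(ap)                    # charges from (q-nw)
print(sp.simplify(2*sp.pi*(e1*q1 + e2*q2).subs(sol)))        # equals 4*pi*sqrt(n*w)
```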
$D=6,7$
-------
When we go up to $D=6$ and $D=7$, we see from (\[fgb\]) that the function $f$ receives an additional contribution (compared to (\[fgb45\])), given by $$\label{fgb67}
\Delta f_{6,7} = \frac{\Omega_{D-2}}{16\pi G_N}\,u_S\,v_1\,v_2^{(D-2)/2}
(D-2)(D-3)(D-4)(D-5) \frac{\alpha'}{v_2^2} \left( \lambda_2 -
\frac{6\,\alpha' \lambda_3}{v_1} \right)$$ We saw in the previous subsection that $\lambda_2=1/8$.
Now we solve the EOM’s. It is obvious that we again obtain (\[uT45\]) and the first equality in (\[uS45\]). Solving EOM’s for $v_1$ and $v_2$ we obtain $$t_1 = \frac{t_2^2 + a(t_2+48b\lambda_3)}{a (t_2-8b)}$$ where $t_2$ is a solution of the cubic equation $$\label{cubic}
t_2^3 - (a-b) t_2^2 - 144ab\lambda_3 t_2 - 48ab^2\lambda_3 = 0$$ In the above formulae we have used the notation $$t_i \equiv \frac{4v_i}{\alpha'}\;,\qquad a \equiv (D-2)(D-3)\;,\qquad
b \equiv (D-4)(D-5)$$ For any given $\lambda_3$ we generally have three solutions for $v_{1,2}$, but it can be shown that there is only one physically interesting one, for which both $v_1,v_2$ are real and positive. Using this solution one can proceed further and, as in $D=4,5$, solve all EOM’s and calculate the entropy. As the corresponding expressions are cumbersome and non-illuminating functions of $\lambda_3$, we shall not write them explicitly.
The entropy (\[ent\]) has the form $$\label{entD}
S = \omega(\lambda_3,D) \sqrt{nw}$$ where $\omega$ is some complicated function of $\lambda_3$ and $D$. Now, we search for a $\lambda_3$ for which in $D=6$ and $D=7$ we obtain (\[entropy\]). One way to fix $\lambda_3$ is to demand[^8] that the entropy is the same in both dimensions $$\omega(\lambda_3,D=6) = \omega(\lambda_3,D=7)$$ It is easy to show that the only solution is $$\label{lam3}
\lambda_3 = \frac{1}{96}$$ Now we use this value for $\lambda_3$ in (\[entD\]) and obtain that the entropy is given by $$S = 4\pi\sqrt{nw}$$ which is again exactly the string result (\[entropy\]). For the choice (\[lam3\]) solution is given by $$\begin{aligned}
v_1 &=& \frac{\alpha'}{2} \label{v167} \\
v_2 &=& \frac{\alpha'}{8} (D-2)(D-3) \left[ 1 +
\sqrt{1+\frac{2(D-4)(D-5)}{(D-2)(D-3)}} \,\right] \label{v267} \\
u_T &=& \sqrt{\frac{n}{w}} \label{uT67} \\
u_S &=& \frac{4\pi G_N}{\Omega_{D-2}} \frac{v_1}{v_2^{(D-2)/2}}
\frac{q_1}{e_2} = \frac{8\pi G_N}{\Omega_{D-2}}
\frac{\sqrt{nw}}{v_2^{(D-2)/2}} \label{uS67} \\
e_1 &=& \sqrt{\frac{\alpha'}{4} \frac{w}{n}} \;,\qquad\qquad
e_2 = \sqrt{\frac{\alpha'}{4} \frac{n}{w}} \label{e1267}\end{aligned}$$
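A quick numerical cross-check of this solution is easy to set up; the snippet below (an added illustration, assuming NumPy is available) verifies that for $\lambda_3=1/96$ the positive real root of the cubic (\[cubic\]) equals $4v_2/\alpha'$ as read off from (\[v267\]), for both $D=6$ and $D=7$.

```python
# Added illustration: for lambda_3 = 1/96 the positive real root of the cubic
# (cubic) reproduces t_2 = 4 v_2/alpha' given in (v267).
import numpy as np

lam3 = 1.0/96.0
for D in (6, 7):
    a = (D - 2)*(D - 3)
    b = (D - 4)*(D - 5)
    roots = np.roots([1.0, -(a - b), -144.0*a*b*lam3, -48.0*a*b**2*lam3])
    t2 = max(r.real for r in roots if abs(r.imag) < 1e-7 and r.real > 0)
    t2_from_v267 = (a/2.0)*(1.0 + np.sqrt(1.0 + 2.0*b/a))    # 4 v_2/alpha' from (v267)
    print(D, t2, t2_from_v267, np.isclose(t2, t2_from_v267))  # -> True
```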
General dimensions
------------------
We now pass to a general number of dimensions $D$ recursively. From (\[fgb\]) we see that in passing from the (odd) dimension $D=2m-1$ to $D=2m$ and $D=2m+1$ the function $f$ gets an additional contribution $$\Delta f = \frac{\Omega_{D-2}}{16\pi G_N}\,u_S\,v_1\,v_2^{(D-2)/2}
\alpha'^{m-2} \frac{(D-2)!}{(D-2m)!} v_2^{-m+1}
\left( \lambda_{m-1} - \frac{2m\alpha'}{v_1} \lambda_m \right)$$ We assume that all $\lambda_{k}$, $k=1,\ldots,m-1$ are determined from lower-dimensional analyses, so the only free parameter at the moment is $\lambda_m$.
In principle we could apply the same analysis as in the previous subsections, i.e., solve the EOM’s, calculate the entropy for general $\lambda_m$ and then check whether there is a value of $\lambda_m$ for which the entropy is equal to (\[entropy\]). The problem is that for this one has to solve a polynomial equation, like (\[cubic\]), which is now of order $(2m-3)$ and so for $m\ge4$ cannot be solved analytically for general $\lambda_m$.
However, a closer inspection of the solution (\[v167\]-\[e1267\]) for $D\le7$ reveals a shortcut. We notice that only $v_2$ depends on $D$, and that $v_1$, $u_T$, $e_i$ depend just on $n$ and $w$. From (\[q-nw\]) and (\[ent\]) we see that to obtain the string result (\[entropy\]) for the entropy it is necessary that $e_i$ are given by (\[e1267\]). One obvious way to have this is to fix $m\lambda_m/\lambda_{m-1}$ to be the same for all $m$. Then $$\label{v1gen}
v_1 = 2m\alpha'\frac{\lambda_m}{\lambda_{m-1}}$$ is one solution of EOM. Then, to have (\[e1267\]) we see that $v_1$ has to be given by (\[v167\]), which combined with (\[v1gen\]) gives the coupling constants $$\label{coup}
\lambda_m = \frac{\lambda_{m-1}}{4m} = \frac{4}{4^m m!}$$ where we have used $\lambda_1=1$.
To summarise, for the choice of coupling constants given in (\[coup\]) there is a solution[^9] of EOM for any $D$ given by (\[v167\]), (\[uT67\]-\[e1267\]), and with $v_2=\alpha'y(D)$, where $y(D)$ is some complicated function of $D$ (which is a real and positive root of $(m-1)$-th order polynomial), for which the Wald entropy formula gives $$S=4\pi\sqrt{nw} \;.$$ And this is exactly the statistical entropy of half-BPS states of heterotic string given in (\[entropy\]).
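The closed form in (\[coup\]) is immediate to verify against the recursion; the following lines (an added illustration) list the first few values, in particular $\lambda_4=1/1536=1/(3\cdot 2^9)$, which is quoted again in the discussion below.

```python
# Added illustration: the recursion lambda_m = lambda_{m-1}/(4m) with lambda_1 = 1
# indeed gives lambda_m = 4/(4**m * m!), e.g. 1, 1/8, 1/96, 1/1536, ...
from fractions import Fraction
from math import factorial

lam = Fraction(1)
for m in range(2, 7):
    lam = lam / (4*m)                          # recursion (coup)
    closed = Fraction(4, 4**m * factorial(m))  # closed form (coup)
    assert lam == closed
    print(m, lam)
```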
Some remarks
============
Before discussing our results, let us make two remarks. First, we would like to note that the gravitational part of the Lovelock type action with coefficients given by (\[coup\]) apparently can be written in the exponential form $$\label{llexp}
S_{grav} = \frac{1}{4\pi G_N \alpha'} \int d^Dx \sqrt{-g} \,S
\left[ \exp \left( \sum_{m=1} \frac{\alpha'^m}{4} \lambda_m
\tilde{\mathcal{L}}^{GB}_m \right) - 1 \right]$$ where $\tilde{\mathcal{L}}^{GB}_m$ are obtained from the extended Gauss-Bonnet densities $\mathcal{L}^{GB}_m$ given in (\[lgbm\]) by throwing away all terms which are products of two or more scalars (like, e.g., $R^2$, $R(R_{\mu\nu})^2$, etc.). We do not have a proof of this, but we have checked it explicitly for terms up to order $\alpha'^3$, and also confirmed that terms of the type $R^k X$ are in agreement with the known recursion relation $$\frac{\partial \mathcal{L}^{GB}_m}{\partial R} =
m \mathcal{L}^{GB}_{m-1} \;.$$ This makes us believe that (\[llexp\]) is correct. As far as we know, the Lovelock action with the particular choice of parameters given in (\[coup\]) was not mentioned in the literature before.
For the second remark, notice that from (\[fgb\]) and (\[coup\]) it follows that the function $f$ can be put in the form $$\label{actgen}
f = \frac{\Omega_{D-2}}{16\pi G_N}\,u_S\,v_1\,v_2^{(D-2)/2} \left[
\frac{2\,u_T^2\,e_1^2}{v_1^2} + \frac{2\,e_2^2}{u_T^2\,v_1^2}
- \frac{2}{v_1}
- \left(\frac{1}{v_1}-\frac{2}{\alpha'}\right) A \right]$$ where the function $A$ is given by $$A = A(v_2) = \sum_{m=1}^{[D/2]} \alpha'^m \lambda_{m+1}
\frac{2m(D-2)!}{(D-2m-2)!} \frac{1}{v_2^m}$$ The equation for $v_2$ ($\partial f/\partial v_2=0$) directly gives the solution $v_1=\alpha'/2$, which substituted back into $f$ leaves just the term $$-\frac{2}{v_1} = R_{AdS_2}$$ plus the terms with gauge fields. In the equation for the dilaton $u_S$ (equivalent to $f=0$) all dependence on $v_2$ vanishes and we obtain $$e_1 e_2 = \frac{\alpha'}{4}$$ from which, using (\[ent\]), we obtain the result (\[entropy\]) for the entropy without ever needing to solve for $v_2$.
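This step is easy to trace explicitly. The following SymPy lines (an added illustration; the overall factor multiplying the square bracket in (\[actgen\]) is dropped and $A$ is kept as an arbitrary symbol) confirm that once $v_1=\alpha'/2$, the $u_T$ equation together with $f=0$ forces $e_1e_2=\alpha'/4$, independently of $A$.

```python
# Added illustration: with v_1 = alpha'/2 the A-dependent term in (actgen) drops
# out, and the u_T equation together with f = 0 forces e_1 e_2 = alpha'/4.
import sympy as sp

ap, uT, v1, e1, e2, A = sp.symbols('alphap u_T v_1 e_1 e_2 A', positive=True)

bracket = (2*uT**2*e1**2/v1**2 + 2*e2**2/(uT**2*v1**2) - 2/v1
           - (1/v1 - 2/ap)*A).subs(v1, ap/2)       # the A-term vanishes here
uT_sol = sp.solve(sp.diff(bracket, uT), uT)        # u_T equation
uT_pos = [s for s in uT_sol if s.is_positive][0]   # positive branch, sqrt(e_2/e_1)
rel = sp.simplify(bracket.subs(uT, uT_pos))        # dilaton equation f = 0
print(sp.simplify(sp.solve(rel, e1)[0]*e2))        # -> alphap/4, i.e. e_1 e_2 = alpha'/4
```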
It is obvious that in the arguments above the precise form of the function $A$ was completely arbitrary; moreover, it could also depend on $v_1$ and $e_i$. One always gets (\[v167\],\[uT67\]-\[e1267\],\[entropy\]), where the exact form of $A(v_1,v_2)$ only affects the solution for $v_2$ (which also affects the dilaton $u_S$ through (\[uS67\])). As a consequence, any action which for the $AdS_2\times S^{D-2}$ near horizon geometry has the form (\[actgen\]) will give the same result for the entropy of 2-charged black holes, i.e., (\[entropy\]).
The same conclusion does not hold for 4-charged and 8-charged black holes in $D=4$. In these cases there is an additional term inside the square brackets in (\[actgen\]) proportional to $v_2^{-2}$ [@sen0508], and only for some special choices of the function $A$ would one get the entropy equal to the statistical entropy of string states.
Discussion
==========
We have analysed solutions with $AdS_2\times S^{D-2}$ geometry in theories with actions of the Lovelock type which contain all extended Gauss-Bonnet densities. We expect that these solutions describe $D$-dimensional asymptotically flat two-charge black holes near the horizon. The idea was to check whether Sen’s results for $D=4,5$ [@sen0505] can be generalized to all dimensions.
In the lowest order in $\alpha'$ these actions are equal to the truncated tree level (in string coupling [*and*]{} tension $\alpha'$) low energy effective action of the heterotic string compactified on $T^{9-D}\times S^1$, for which analytic black hole solutions having a singular horizon with vanishing area, and thus also vanishing entropy, were found [@sen9411; @peet95]. They are believed to correspond to perturbative half-BPS states of the heterotic string, for which the statistical entropy (i.e., the logarithm of the number of states) is asymptotically given by (\[entropy\]) [@DabHar89; @DaGiHaRR90]. A reason for the discrepancy in the results for the entropy is that these black holes are small, in fact singular, with the curvature diverging on the horizon. This suggests that higher curvature terms in the action are important. On the other hand, the dilaton field near the horizon is large, which means that the string coupling is small. One concludes that it is necessary to consider an effective action which is tree level in the string coupling, but [*not*]{} in $\alpha'$.
Now, the small black holes we have analysed in this paper are obviously some deformations of these singular black hole solutions, but of course the question is whether they have anything at all to do with the black holes of heterotic string theory. We have shown that the parameters which appear in the Lovelock type action can be uniquely chosen such that the black hole entropy matches the statistical entropy of heterotic string states for [*all*]{} $D$. Moreover, this choice is nontrivial, in the sense that there is “one parameter for every couple of dimensions”. Certainly, this matching could be just a coincidence. But recently it was shown [@sen0508] that the same type of action applied to 4-charge and 8-charge black holes in $D=4$ produced the same results for the entropy, gauge field strengths and the axion-dilaton field as the analyses based on the supersymmetric action obtained by supersymmetrizing the square of the Weyl tensor [@CaWiMo9906; @CaWiKaMo04]. Unfortunately, as the corresponding supersymmetric formulations in $D>4$ are unknown, it is impossible to make a similar comparison in our case. In spite of this, these results hint that there could be some connection between the Lovelock type action we used and the heterotic string at tree level in the string coupling. If true, then our analysis shows how increasing the dimension $D$ naturally introduces terms of higher and higher order in the curvature ($[D/2]$-th order in $D$ dimensions).
Obviously, the action we used differs from the low energy effective action of the heterotic string on the $M_D\times T^{9-D}\times S^1$ background. Although we do not know the exact form of the latter, we do know that it should be supersymmetric and contain additional higher curvature terms besides the extended Gauss-Bonnet ones, and also higher derivative terms including gauge fields. Moreover, it is known that the $\mathcal{L}^{GB}_3$ term is not present in the low energy effective action, and that some of the terms on the $m=4$ level are proportional to the transcendental number $\zeta(3)$. This is in contrast to our results $\lambda_3=1/96$ and $\lambda_4=1/(3\cdot 2^9)$. On the other hand, as noted in [@sen0505], the result $\lambda_2=1/8$ is exactly the value which appears in the low energy effective action of the heterotic string [@MetTse87; @GroSlo87]. Curiously, $\lambda_3=1/96$ is exactly the value which appears in the case of the bosonic string. Here the following observation is important. Any term which is obtained by multiplying and contracting $m$ Riemann and field strength tensors evaluated on the $AdS_2\times S^{D-2}$ background (\[horfie\]) gives just a linear combination of terms $v_1^{-k}v_2^{k-m}$, $k=1,\ldots,m$, with some coefficients generally depending on $D$. Now, there is an infinite set of actions which are equivalent to ours when evaluated on this background, and an even bigger one consisting of actions which lead to the more general form (\[actgen\]). It can be explicitly shown that one can use this freedom to avoid disagreement with the cubic and quartic higher curvature terms mentioned above. The question whether supersymmetry can be accommodated remains open. We shall present details elsewhere ([@PP2]).
It is clear that the results from this paper and from [@sen0505; @sen0508] alone are insufficient for making any strong claims. One can construct other actions leading to the same results. As an illustration, let us consider an action obtained by adding the higher curvature correction $$\mathcal{L}_2 = \frac{1}{8}
\left[ (R_{\mu\nu\rho\sigma})^2 - (R_{\mu\nu})^2 \right]$$ to the leading term given by (\[alpha0\]). This action does not belong to the type (\[actgen\]). It can be shown that it gives the same result for the entropy (\[entropy\]) as the Lovelock type action for 2-charged black holes in $D=4$ and $D=5$, and for 4-charge and 8-charge black holes in $D=4$. In fact, we could repeat with this action the analysis in [@sen0508] and obtain exactly the same solutions, including the attractor equations (4.11). Adding appropriate higher derivative terms (with coefficients not depending on $D$) it is possible to match the entropy of 2-charge black holes in any dimension $D$.
To conclude, the results in this paper support and extend to all dimensions Sen’s suggestion of a possible role of Gauss-Bonnet densities in the description of black holes in heterotic string theory. It would be interesting to relate our results to the anomaly cancellation arguments of [@KraLar0506; @KraLar0508], especially considering the topological origin of the extended Gauss-Bonnet densities. In any case, further analyses, including more examples, could either clarify this role or show that the obtained agreement is accidental.
I would like to thank J. Kappeli, A. Kleinschmidt, K. Peeters and S. Theisen for valuable discussions. This work was supported by the Alexander von Humboldt Foundation and by the Ministry of Science, Education and Sport of the Republic of Croatia (contract No. 0119261).
[999]{}
B. de Witt, .
G. ’t Hooft, .
L. Susskind, in [*The black hole*]{}, Teitelboim, C. (ed.), 118-131, .
L. Susskind and J. Uglum, .
G. T. Horowitz and J. Polchinski, .
A. Dabholkar and J. A. Harvey, .
A. Dabholkar, G. W. Gibbons, J. A. Harvey and F. Ruiz Ruiz, .
A. Sen, .
A. Peet, .
A. Dabholkar, .
A. Dabholkar, R. Kallosh and A. Maloney, .
A. Sen, .
V. Hubeny, A. Maloney and M. Rangamani, .
A. Sen, .
A. Sen, .
G. Lopes Cardoso, B. de Wit and T. Mohaupt, .
G. Lopes Cardoso, B. de Wit, J. Kappeli and T. Mohaupt, .
G. Lopes Cardoso, B. de Wit, J. Kappeli and T. Mohaupt, .
R. M. Wald, .
V. Iyer and R. M. Wald, .
T. Jacobson, G. Kang and R. C. Myers, .
P. Kraus, F. Larsen, .
P. Kraus, F. Larsen, .
A. Sen, .
A. Sen, .
A. Sen, .
G. Lopes Cardoso, B. de Wit and T. Mohaupt, .
G. Lopes Cardoso, B. de Wit, J. Kappeli and T. Mohaupt, .
D. Lovelock, .
D. Lovelock, .
B. Zwiebach, .
B. Zumino, .
M. Cvitan, S. Pallua, P. Prester, .
R.R. Metsaev and A.A. Tseytlin, .
D.J. Gross, J.H. Sloan,
P. Prester, in preparation.
[^1]: An overview of recent results for black holes in string theory is given in [@witt0511].
[^2]: This is the reason why they are called small or microscopic.
[^3]: Note that although Wald’s derivation demands the existence of a bifurcate Killing horizon, and so does not apply to extremal black holes, one can formally take the limit of extremality in the final formula.
[^4]: In [@KraLar0506; @KraLar0508] an explanation was presented based on anomalies induced by particular Chern-Simons terms. However, it is not clear to us why only those terms should contribute.
[^5]: There is also a term proportional to the Pontryagin density, but it vanishes identically in the $AdS_2\times S^n$ background.
[^6]: $[x]$ denote integer part of $x$.
[^7]: $[D/2]$ is the number of extended Gauss-Bonnet terms in $D$ dimensions, including the Einstein term.
[^8]: Equivalently, we could ask that $\omega=4\pi$ for $D=6$, and then check whether we obtain the same result for $D=7$.
[^9]: We have checked that for $D\le9$ this is the unique solution with both $v_1$, $v_2$ real and positive.
---
abstract: |
We consider the two dimensional shrinking target problem in the beta dynamical system for general $\beta>1$ and with a general error of approximation. Let $f, g$ be two positive continuous functions such that $f(x)\geq g(y)$ for all $x,y\in~[0,1]$. For any $x_0,y_0\in[0,1]$, define the shrinking target set $$\begin{aligned}
E(T_\beta, f,g)=\Big\{(x,y)\in [0,1]^2:|T_{\beta}^{n}x-&x_{0}|<e^{-S_nf(x)},\\ |T_{\beta}^{n}y-&y_{0}|< e^{-S_ng(y)}~~~\text{for infinitely many}~n\in \mathbb{N}\Big\},\end{aligned}$$ where $S_nf(x)=\sum_{j=0}^{n-1}f(T_\beta^jx)$. We calculate the Hausdorff dimension of this set and prove that it is given by the solution of a pressure equation. This represents the first result of its kind for higher dimensional beta dynamical systems.
address:
- 'Mumtaz Hussain, Department of Mathematics and Statistics, La Trobe University, PoBox199, Bendigo 3552, Australia. '
- 'Weiliang Wang, School of Mathematics and Statistics, Huazhong University of Science and Technology, 430074 Wuhan, China'
author:
- Mumtaz Hussain
- Weiliang Wang
title: Higher dimensional shrinking target problem in beta dynamical systems
---
introduction
============
The study of the Diophantine properties of the distribution of orbits for a measure preserving dynamical system has received much attention recently. Let $T:X\to X$ be a measure preserving transformation of the system $(X,\mathcal{B},\mu)$ with a compatible metric $d$. If the transformation $T$ is ergodic with respect to the measure $\mu$, Poincare’s recurrence theorem implies that, for almost every $x\in X$, the orbit $\{T^nx\}_{n=0}^\infty$ returns to any ball of positive measure infinitely often. In other words, for any $x_0\in X,$ almost surely $$\liminf\limits_{n\rightarrow\infty} d(T^nx,x_0)=0.$$ Poincare’s recurrence theorem is qualitative in nature but it does motivate the study of the distribution of $T$-orbits of points in $X$ quantitatively. In other words, a natural motivation is to investigate *how fast the above liminf tends to zero?* To this end, the spotlight is on the size of the set $$D(T, \varphi):=\{x\in X: d(T^n x, x_0)<\varphi(n)~~\text{for infinitely many}~n\in \mathbb{N}\},$$ where $\varphi:\N\rightarrow \R_{\geq 0}$ is a positive function such that $\varphi(n)\rightarrow 0$ as $n\rightarrow \infty.$ The set $D(T,\varphi)$ can be viewed as the collection of points in $X$ whose $T$-orbit hits a shrinking target infinitely many times. The set $D(T,\varphi)$ is the dynamical analogue of the classical inhomogeneous well-approximable set $$W(\varphi):=\{x\in [0, 1): |x-p/q-x_0|<\varphi(q)~~\text{for infinitely many}~p/q\in \mathbb Q\}.$$ As one would expect, the ‘size’ of both of these sets depends upon the nature of the function $\varphi$, i.e. how fast it approaches zero. The typical notion of size is in terms of Lebesgue measure, but if the speed of approximation is rapid then, irrespective of the approximating function, the Lebesgue measure of the corresponding sets is zero (null-sets). For instance, if $\varphi(q)=|q|^{-\eta}$ then it follows from Schmidt’s theorem (1964) that the Lebesgue measure of the set $W(\varphi)$ is zero for any $\eta>2$. To distinguish between null-sets, the notions of Hausdorff measure and dimension are the appropriate tools in this study. Note that both of the sets $D(T, \varphi)$ and $W(\varphi)$ are limsup sets and estimation of the size of such sets, in general, is a difficult task. However, in the last two decades, a lot of work has been done in developing measure theoretic frameworks to estimate the size of limsup sets; for example, the ubiquity framework [@BDV] and the mass transference principle [@BeresnevichVelani; @WWX; @HussainSimmons2] are two such powerful tools. As a consequence of these tools a complete metrical theory, in all dimensions, has been established for the set $W(\varphi)$. However, not much is known for the higher dimensional version of the set $D(T, \varphi)$.
Following the work of Hill and Velani [@HillVelani1; @HillVelani2], the Hausdorff dimension of the set $D(T, \varphi)$ has been determined for many dynamical systems, from the system of rational expanding maps on their Julia sets to conformal iterated function systems [@Urbanski11]. We refer the reader to [@CHW] for a comprehensive discussion regarding the Hausdorff dimension of various dynamical systems. In this paper, we confine ourselves to the two dimensional shrinking target problem in the beta dynamical system with a general error of approximation.
For a real number $\beta>1$, define the transformation $T_\beta:[0,1]\to[0,1]$ by $$T_\beta: x\mapsto \beta x\bmod 1.$$ This map generates the $\beta$-dynamical system $([0,1], T_\beta)$. It is well known that the $\beta$-expansion is a typical example of an expanding non-finite Markov system whose properties are reflected by the orbit of some critical point; in other words, it is not a subshift of finite type with mixing properties. This causes difficulties in studying the metrical questions related to $\beta$-expansions. General $\beta$-expansions have been widely studied in the literature, see for instance [@TanWang; @SeuretWang; @LB; @HussainWeiliang] and references therein.
We are interested in the Hausdorff dimension of the following higher dimensional dynamically defined limsup set. For any function $h$, let $S_nh$ denote the ergodic sum of $h$ defined as $$S_nh(r)=h(r)+h(Tr)+\ldots+h(T^{n-1}r).$$ Let $f,g$ be two positive continuous functions on $[0,1]$ with $f(x)\geq g(y)$ for all $x,y\in [0,1]$. Let $x_0, y_0$ be two fixed real numbers in the unit interval $(0, 1]$. Define $$\begin{aligned}
E(T_\beta, f,g)=\Big\{(x,y)\in [0,1]^2:|T_{\beta}^{n}x-&x_{0}|<e^{-S_nf(x)},\\ |T_{\beta}^{n}y-&y_{0}|< e^{-S_ng(y)}
\text{~~~for infinitely many}\ n\in \N\Big\}.\end{aligned}$$
The set $E(T_\beta, f,g)$ is the set of all points $(x, y)$ in the unit square such that the pair $\{T^{n}x, T^{n} y\}$ is in the shrinking ball $B\left ( (x_0, y_0); (e^{-S_nf(x)}, e^{-S_ng(y)}) \right)$ for infinitely many $n$. The rectangular balls shrink to zero at a rate governed by the ergodic sums $e^{-S_nf(x)}, e^{-S_ng(y)}$. The shrinking rates depend upon the points to be approximated and hence naturally provide better approximation properties than the conventional positive error function $\varphi(n)$. Dependence of the error functions on the points to be approximated significantly increases the level of difficulty.
The set $E(T_\beta, f,g)$ is the dynamical analogue of the following two dimensional classical inhomogeneous simultaneous Diophantine approximation set;
$$\begin{aligned}
W(\varphi_1, \varphi_2)=\Big\{(x,y)\in [0,1]^2:|x-&p_1/q-x_0|<\varphi_1(q),|y-p_2/q-y_0|< \varphi_2(q)\\
& \text{~~~for infinitely many}\ (p_1, p_2, q)\in \mathbb Z^2\times\N\Big\}.\end{aligned}$$
where both $\varphi_1, \varphi_2$ are positive functions tending to zero as $q$ tends to infinity. A complete metric theory for this set was established some time ago. In particular, the Lebesgue measure of the set $W(\varphi_1, \varphi_2)$ has been established in [@HussainYusupova2], the Hausdorff measure for $W(\varphi, \varphi)$ in [@Bugeaud_Glasgow], and the Hausdorff measure for $W(\varphi_1, \varphi_2)$ follows from [@HussainYusupova1]. However, hardly anything is known for the set $E(T_\beta, f,g)$. We remedy this situation and prove the following theorem.
\[t2\] Let $f,g$ be two continuous functions on $[0,1]$ with $f(x)\geq g(y)$ for all $x,y\in [0,1]$. Then $${\dim_{\mathrm H}}E(T_\beta, f,g)=\min\{s_1,s_2\},$$ where $$\begin{aligned}
s_1&=\inf\{s\geq 0: P(f-s(\log\beta+f))+P(-g)\leq 0\}, \\ s_2&=\inf\{s\geq 0: P(-s(\log\beta+g))+\log\beta\leq 0\}.\end{aligned}$$
Here the notation $P(\cdot)$ stands for the pressure function for the $\beta$-dynamical system associated to the continuous potentials $f$ and $g$. To keep the introductory section short, we formally give the definition of the pressure function in section \[pre\]. The reason that the Hausdorff dimension is expressed in terms of the pressure function is the dynamical nature of the set $E(T_\beta, f, g)$. For a detailed analysis of the properties of the pressure function and of ergodic sums for general dynamical systems we refer the reader to Chapter 9 of the book [@Walterbook].
The proof of this theorem splits into two parts: establishing the upper bound and then the lower bound. Proving the upper bound is reasonably straightforward, simply using the natural cover of the set. However, establishing the lower bound is challenging and is the main substance of this paper. Indeed, the main obstacle in determining the metrical properties of general $\beta$-expansions lies in the difficulty of estimating the length of general cylinders and, since we are dealing with a two dimensional setting, as a consequence the area of the cross product of general cylinders. As far as the Hausdorff dimension is concerned, one does not need to take all points into consideration; instead, one may choose a subset of points with regular properties to approximate the set in question. This argument, in turn, requires some continuity of the dimensional number when the system is approximated by its subsystems.
The paper is organised as follows. Section \[pre\] is devoted to recalling some elementary properties of $\beta$-expansions. Short proofs are also given when we could not find any reference. Definitions and some properties of the pressure function are stated in this section as well. In section \[upperbound\], we prove the upper bound of the Theorem \[t2\]. In section \[lowerbound\], we prove the lower bound of Theorem \[t2\] and since this carries the main weightage we subdivide this section into several subsections.
Preliminaries {#pre}
=============
We begin with a brief account of some basic properties of $\beta$-expansions and fix some notation. We then state and prove two propositions which will give the covering and packing properties.
The $\beta$-expansion of real numbers was first introduced by Rényi [@Renyi]; it is given by the following algorithm. For any $\beta>1$, let $$\label{e1}
T_{\beta}(0):=0,~~ T_{\beta}(x)=\beta x-\lfloor\beta x\rfloor, x\in[0,1),$$ where $\lfloor\xi\rfloor$ is the integer part of $\xi\in \mathbb{R}$. By taking $$\epsilon_{n}(x,\beta)=\lfloor\beta T_{\beta}^{n-1}x\rfloor\in \mathbb{N}$$ recursively for each $n\geq 1,$ every $x\in[0,1)$ can be uniquely expanded into a finite or an infinite sequence $$\label{e2}
x=\frac{\epsilon_{1}(x,\beta)}{\beta}+\frac{\epsilon_{2}(x,\beta)}{\beta^2}+\cdots+\frac{\epsilon_{n}(x,\beta)}{\beta^n}+
\frac{T_{\beta}^n x}{\beta^n},$$ which is called the $\beta$-expansion of $x$ and the sequence $\{\epsilon_{n}(x,\beta)\}_{n\geq1}$ is called the digit sequence of $x.$ We also write the $\beta$-expansion of $x$ as $$\epsilon(x,\beta)=\big(\epsilon_{1}(x,\beta),\cdots,\epsilon_{n}(x,\beta),\cdots\big).$$ The system $([0,1],T_{\beta})$ is called the *$\beta$-dynamical system* or just the *$\beta$-system.*
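To make the algorithm concrete, here is a small numerical illustration (not part of the paper; the values of $\beta$ and $x$ are arbitrary) which computes the first digits $\epsilon_n(x,\beta)$ by iterating $T_\beta$ and checks that the partial sums of (\[e2\]) approximate $x$ to within $\beta^{-n}$.

```python
# Added illustration: greedy computation of the beta-expansion digits via (e1),
# together with the error bound coming from (e2): 0 <= x - partial sum < beta^{-n}.
def beta_digits(x, beta, n):
    digits = []
    for _ in range(n):
        d = int(beta * x)        # epsilon_{k+1}(x, beta) = floor(beta * T_beta^k x)
        digits.append(d)
        x = beta * x - d         # apply T_beta
    return digits

beta, x, n = 2.5, 0.41, 12       # arbitrary sample values
eps = beta_digits(x, beta, n)
partial = sum(d * beta**-(k + 1) for k, d in enumerate(eps))
print(eps, x - partial, beta**-n)    # the error lies in [0, beta^-n)
```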
A finite or an infinite sequence $(w_{1},w_{2},\cdots)$ is said to be admissible $(\text{with respect to the base}~\beta),$ if there exists an $x\in[0,1)$ such that the digit sequence of $x$ equals $(w_{1},w_{2},\cdots).$
Denote by $\Sigma_{\beta}^n$ the collection of all admissible sequences of length $n$ and by $\Sigma_{\beta}$ that of all infinite admissible sequences.
Let us now turn to the infinite $\beta$-expansion of $1$, which plays an important role in the study of $\beta$-expansion. Applying algorithm $(\ref{e1})$ to the number $x=1$, then the number $1$ can be expanded into a series, denoted by $$1=\frac{\epsilon_{1}(1,\beta)}{\beta}+\frac{\epsilon_{2}(1,\beta)}{\beta^2}+\cdots+\frac{\epsilon_{n}(1,\beta)}{\beta^n}+
\cdots.$$
If the above series is finite, i.e. there exists $m\geq1$ such that $\epsilon_{m}(1,\beta)\neq 0$ but $\epsilon_{n}(1,\beta)=0$ for $n>m$, then $\beta$ is called a simple Parry number. In this case, we write $$\epsilon^*(1,\beta):=(\epsilon_{1}^{*}(\beta),\epsilon_{2}^{*}(\beta),\cdots)=(\epsilon_{1}(1,\beta),\cdots,\epsilon_{m-1}(1,\beta),
\epsilon_{m}(1,\beta)-1)^{\infty},$$ where $(w)^\infty$ denotes the periodic sequence $(w,w,w,\cdots).$ If $\beta$ is not a simple Parry number, we write $$\epsilon^*(1,\beta):=(\epsilon_{1}^{*}(\beta),\epsilon_{2}^{*}(\beta),\cdots)=(\epsilon_{1}(1,\beta),\epsilon_{2}(1,\beta),
\cdots).$$
In both cases, the sequence $(\epsilon_{1}^{*}(\beta),\epsilon_{2}^{*}(\beta),\cdots)$ is called the infinite $\beta$-expansion of $1$ and we always have that $$\label{e3}
1=\frac{\epsilon_{1}^*(\beta)}{\beta}+\frac{\epsilon_{2}^*(\beta)}{\beta^2}+\cdots+\frac{\epsilon_{n}^*(\beta)}{\beta^n}+
\cdots.$$
The lexicographical order $\prec$ between the infinite sequences is defined as follows: $$w=(w_{1},w_{2},\cdots,w_{n},\cdots)\prec w'=(w_{1}',w_{2}',\cdots,w_{n}',\cdots)$$ if there exists $k\geq1$ such that $w_{j}=w_{j}'$ for $1\leq j<k$, while $w_{k}<w_{k}'.$ The notation $w\preceq w'$ means that $w\prec w'$ or $w=w'.$ This ordering can be extended to finite blocks by identifying a finite block $(w_{1},w_2,\cdots,w_n)$ with the sequence $(w_{1},w_2,\cdots,w_n,0,0,\cdots)$.
The following result due to Parry [@Parry] is a criterion for the admissibility of a sequence.
\[le1\] Let $\beta>1$ be a real number. Then a non-negative integer sequence $\epsilon=(\epsilon_1,\epsilon_2,\cdots)$ is admissible if and only if, for any $k\geq 1$, $$(\epsilon_k,\epsilon_{k+1},\cdots)\prec(\epsilon_1^*(\beta),\epsilon_2^*(\beta),\cdots).$$
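The criterion is easy to use in computations. The snippet below (an added illustration) works with the sample value $\beta=5/2$, for which the expansion of $1$ does not terminate, so that $\epsilon^*(1,\beta)=\epsilon(1,\beta)$; for finite words we use the customary finite-word form of the criterion, namely that every suffix is lexicographically no larger than the prefix of $\epsilon^*(1,\beta)$ of the same length. As a sanity check, the resulting count of admissible words is compared with the bounds of the lemma below.

```python
# Added illustration: Parry's criterion for beta = 5/2.  We compute a prefix of
# epsilon*(1, beta) and count the admissible words of length n, checking
# Renyi's bounds beta^n <= #Sigma_beta^n <= beta^(n+1)/(beta-1).
from fractions import Fraction
from itertools import product

beta = Fraction(5, 2)

def expansion_of_one(n):
    x, digits = Fraction(1), []
    for _ in range(n):
        d = int(beta * x)
        digits.append(d)
        x = beta * x - d
    return digits

def admissible(word, eps_star):
    # every suffix lexicographically <= the equal-length prefix of epsilon*(1, beta)
    return all(list(word[k:]) <= eps_star[:len(word) - k] for k in range(len(word)))

n = 8
eps_star = expansion_of_one(n)            # [2, 1, 0, 1, 1, 1, 0, 0]
count = sum(admissible(w, eps_star)
            for w in product(range(int(beta) + 1), repeat=n))
print(eps_star, count,
      float(beta)**n <= count <= float(beta)**(n + 1)/(float(beta) - 1))  # -> True
```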
The following result of Rényi implies that the dynamical system $([0,1],T_\beta)$ admits $\log\beta$ as its topological entropy.
\[le2\] Let $\beta>1.$ For any $n\geq 1,$ $$\beta^{n}\leq\#\Sigma_{\beta}^n\leq\frac{\beta^{n+1}}{\beta-1},$$ where $\#$ denotes the cardinality of a finite set.
It is clear from this lemma that $$\lim_{n\to\infty}\frac{\log\left(\#\Sigma_{\beta}^n\right)}{n}=\log\beta.$$ For any $(\epsilon_1,\cdots,\epsilon_n)\in \Sigma_{\beta}^n,$ call $$I_{n}(\epsilon_1,\cdots,\epsilon_n):=\{x\in[0,1),\epsilon_j(x,\beta)=\epsilon_j,1\leq j\leq n\}$$ an $n$-th order cylinder $(\text{with respect to the base}~\beta)$. It is a left-closed and right-open interval with the left endpoint $$\frac{\epsilon_1}{\beta}+\frac{\epsilon_2}{\beta^2}+\cdots+\frac{\epsilon_n}{\beta^n}$$ and of length $$|I_n(\epsilon_1,\cdots,\epsilon_n)|\leq\frac{1}{\beta^n}.$$ Here and throughout the paper, we use $|\cdot|$ to denote the length of an interval. Note that the unit interval can be naturally partitioned into a disjoint union of cylinders; that is for any $n\geq 1$, $$\label{e4}[0,1]=\bigcup\limits_{(\epsilon_1,\cdots,\epsilon_n)\in\Sigma_{\beta}^n}
I_{n}(\epsilon_1,\cdots,\epsilon_n).$$ One difficulty in studying the metric properties of $\beta$-expansion is that the length of a cylinder is not regular. It may happen that $|I_n(\epsilon_1,\cdots, \epsilon_n)|\ll \beta^{-n}$. The following notation plays an important role to bypass this difficulty.
A cylinder $I_n(\epsilon_1,\cdots,\epsilon_n)$ is called full if it has maximal length, i.e. if $$|I_n(\epsilon_1,\cdots,\epsilon_n)|=\frac{1}{\beta^n}.$$ Correspondingly, we also call the word $(\epsilon_1,\cdots,\epsilon_n)$, defining the full cylinder $I_n(\epsilon_1,\cdots,\epsilon_n)$, a full word.
Next, we collect some properties about the distribution of full cylinders.
\[le4\] An $n$-th order cylinder $I_{n}(\epsilon_{1}\cdots\epsilon_{n})$ is full, if and only if for any admissible sequence $(\epsilon_{1}',\epsilon_{2}',\cdots,\epsilon_{m}')\in\Sigma_{\beta}^m$ with $m\geq 1$, $$(\epsilon_{1}\cdots\epsilon_{n}, \epsilon_{1}',\epsilon_{2}',\cdots,\epsilon_{m}')\in\Sigma_{\beta}^{n+m}.$$ Moreover $$|I_{n+m}(\epsilon_{1},\cdots,\epsilon_{n},\epsilon_{1}',\cdots,\epsilon_{m}')|=
|I_{n}(\epsilon_{1},\cdots,\epsilon_{n})|\cdot|I_{m}(\epsilon_{1}',\cdots,\epsilon_{m}')|.$$ So, for any two full cylinders $I_{n}(\epsilon_{1}\cdots\epsilon_{n}),~ I_{m}(\epsilon_{1}',\epsilon_{2}',\cdots,\epsilon_{m}')$, the cylinder $$I_{n+m}(\epsilon_{1},\cdots,\epsilon_{n},\epsilon_{1}',\cdots,\epsilon_{m}')$$ is also full.
\[le3\] For $n\geq 1$, among every $n+1$ consecutive cylinders of order $n$, there exists at least one full cylinder.
As a consequence, one has the following relationship between balls and cylinders.
\[p1\]Let $J$ be an interval of length ${\beta}^{-l}$ with $l\geq 1$. Then it can be covered by at most $2(l+1)$ cylinders of order $l$.
By Lemma \[le3\], among any $2(l+1)$ consecutive cylinders of order $l$, there are at least $2$ full cylinders. So the total length of these intervals is larger than $2{\beta}^{-l}$. Thus $J$ can be covered by at most $2(l+1)$ cylinders of order $l$.
The following result may be of independent interest.
\[p2\] Fix $0<\epsilon<1$. Let $~n_0$ be an integer such that $2n^2\beta<{\beta}^{(n-1)\epsilon}$ for all $n\ge n_0$. Let $J\subset [0,1]$ be an interval of length $r$ with $0<r<2n_{0}\beta^{-n_0}$. Then inside $J$, there exists a full cylinder $I_n$ satisfying $$r\geq|I_n|>r^{1+\epsilon}.$$
Let $n>n_0$ be the integer such that $$2n\beta^{-n}\leq r<2(n-1)\beta^{-n+1}.$$ Since every cylinder of order $n$ is of length at most $\beta^{-n}$, the interval $J$ contains at least $2n-2\geq n+1$ consecutive cylinders of order $n$. Thus, by Lemma \[le3\], it contains a full cylinder of order $n$ and we denote such a cylinder by $I_n$. By the choice of $n_0$, we have $$r\geq|I_n|=\beta^{-n}>\Big(2(n-1)(\beta^{-n+1})\Big)^{1+\epsilon}>r^{1+\epsilon}.$$ This completes the proof.
Now we define a sequence of numbers $\beta_N$ approximating $\beta$ from below. For any $N$ with $\epsilon_N^*(\beta)\geq1,$ define $\beta_N$ to be the unique real solution to the algebraic equation $$\label{g1}
1=\frac{\epsilon_{1}^*(\beta)}{\beta_N}+\frac{\epsilon_{2}^*(\beta)}{\beta_N^2}+\cdots+\frac{\epsilon_{N}^*(\beta)}{\beta_N^N}.$$ Then $\beta_N$ approximates $\beta$ from below and the $\beta_N$-expansion of the unity is $$(\epsilon_{1}^*(\beta),\cdots,\epsilon_{N-1}^*(\beta),\epsilon_{N}^*(\beta)-1)^{\infty}.$$
More importantly, by the criterion of admissible sequence, we have, for any $(\epsilon_1,\cdots,\epsilon_n)\in \Sigma_{\beta_N}^n$ and $(\epsilon_1',\cdots,\epsilon_m')\in \Sigma_{\beta_N}^m$, that $$\label{g2}
(\epsilon_1,\cdots,\epsilon_n,0^N,\epsilon_1',\cdots,\epsilon_m')\in\Sigma_{\beta_N}^{n+N+m},$$ where $0^N$ means a zero word of length $N$.
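The numbers $\beta_N$ are easily computed; the sketch below (an added illustration, again for the sample value $\beta=5/2$) finds $\beta_N$ by bisection, using that the left hand side of (\[g1\]) is decreasing in $\beta_N$, and displays the monotone approach of $\beta_N$ to $\beta$ (only those $N$ with $\epsilon_N^*(\beta)\geq 1$ are used).

```python
# Added illustration: solving (g1) for beta_N by bisection, for beta = 5/2.
from fractions import Fraction

beta = Fraction(5, 2)

def eps_star(n):                      # digits of the (non-terminating) expansion of 1
    x, digits = Fraction(1), []
    for _ in range(n):
        d = int(beta * x)
        digits.append(d)
        x = beta * x - d
    return digits

def beta_N(N, tol=1e-12):
    eps = eps_star(N)
    lo, hi = 1.0, float(beta)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(e * mid**-(i + 1) for i, e in enumerate(eps)) > 1:
            lo = mid                  # sum > 1 means mid is below the root
        else:
            hi = mid
    return (lo + hi) / 2

# N restricted to values with eps_N^*(beta) >= 1 for this beta
print([round(beta_N(N), 8) for N in (1, 2, 4, 6, 11)])   # increases towards 2.5
```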
From the assertion $(\ref{g2})$, we get the following proposition.
For any $(\epsilon_1,\cdots,\epsilon_n)\in \Sigma_{\beta_N}^n$, $I_{n+N}(\epsilon_1,\cdots,\epsilon_n,0^N)$ is a full cylinder. So, $$\frac{1}{\beta^{n+N}}\leq|I_{n}(\epsilon_1,\cdots,\epsilon_n)|\leq\frac{1}{\beta^{n}}.$$
We end this section with the definition of the pressure function for the $\beta$-dynamical system associated to a continuous potential $g$. The reader is referred to [@Walters] for more details. $$\label{g3}
P(g, T_\beta):=\lim\limits_{n\rightarrow \infty}\frac{1}{n}\log\sum\limits_{(\epsilon_1,\cdots,\epsilon_n)\in\Sigma_\beta^n}
\sup\limits_{y\in I_n(\epsilon_1,\cdots,\epsilon_n)}e^{S_ng(y)},$$ where $S_ng(y)$ denotes the ergodic sum $\sum_{j=0}^{n-1}g(T^j_\beta y)$. Since $g$ is continuous, the limit does not depend upon the choice of $y$. The existence of the limit follows from the subadditivity: $$\log\sum\limits_{(\epsilon_1,\cdots,\epsilon_n, \epsilon_1^\prime,\cdots,\epsilon_m^\prime)\in\Sigma_\beta^{n+m}}
\sup\limits_{y\in I_{n+m}(\epsilon_1,\cdots,\epsilon_n,\epsilon_1^\prime,\cdots,\epsilon_m^\prime)}e^{S_{n+m}g(y)}\leq \log\sum\limits_{(\epsilon_1,\cdots,\epsilon_n)\in\Sigma_\beta^{n}}
\sup\limits_{y\in I_{n}(\epsilon_1,\cdots,\epsilon_n)}e^{S_{n}g(y)}+\log\sum\limits_{(\epsilon_1^\prime,\cdots,\epsilon_m^\prime)\in\Sigma_\beta^{m}}
\sup\limits_{y\in I_{m}(\epsilon_1^\prime,\cdots,\epsilon_m^\prime)}e^{S_{m}g(y)}.$$
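For intuition, the limit in (\[g3\]) can be approximated by brute force for small $n$. The snippet below (an added illustration for $\beta=5/2$) replaces the supremum over a cylinder by the value of the potential at its left endpoint, which is harmless for continuous $g$ as $n$ grows, and evaluates the case $g\equiv 0$, for which $P(0,T_\beta)=\log\beta$ is just the topological entropy mentioned above; any other continuous $g$ can be plugged in the same way.

```python
# Added illustration: brute-force approximation of the pressure (g3) for beta = 5/2,
# evaluating the potential at the left endpoint of each cylinder.
import math
from fractions import Fraction
from itertools import product

beta = Fraction(5, 2)

def expansion_of_one(n):
    x, digits = Fraction(1), []
    for _ in range(n):
        d = int(beta * x)
        digits.append(d)
        x = beta * x - d
    return digits

def pressure(g, n):
    eps_star = expansion_of_one(n)
    total = 0.0
    for word in product(range(int(beta) + 1), repeat=n):
        if not all(list(word[k:]) <= eps_star[:n - k] for k in range(n)):
            continue                                   # not admissible
        orbit = [sum(d / float(beta)**(i - j) for i, d in enumerate(word, 1) if i > j)
                 for j in range(n)]                    # T^j of the left endpoint
        total += math.exp(sum(g(x) for x in orbit))
    return math.log(total) / n

for n in (6, 8, 10):
    print(n, pressure(lambda x: 0.0, n), math.log(float(beta)))  # tends to log(5/2)
```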
Proof of Theorem \[t2\]: the upper bound {#upperbound}
========================================
As is typical in determining the Hausdorff dimension of a set, we split the proof of Theorem $\ref{t2}$ into two parts: the upper bound and the lower bound.
For any $U=(\epsilon_1,\cdots\epsilon_n)\in\Sigma_\beta^n$ and $W=(\omega_1,\cdots,\omega_n)\in\Sigma_\beta^n,$ we always take $$x^*=\frac{\epsilon_{1}}{\beta}+\frac{\epsilon_{2}}{\beta^{2}}+\cdots+\frac{\epsilon_{n}}{\beta^{n}}$$ to be the left endpoint of $I_n(U)$ and $$y^*=\frac{\omega_{1}}{\beta}+\frac{\omega_{2}}{\beta^{2}}+\cdots+\frac{\omega_{n}}{\beta^{n}}$$ to be the left endpoint of $I_n(W)$.
Instead of directly considering the set $E(T_\beta,f,g)$, we will consider a closely related lim sup set $$\overline{E}(T_\beta,f,g)=\bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}\bigcup_{U,W\in \Sigma_{\beta}^{n}}J_n(U)\times J_n(W),$$ where $$\begin{aligned}
J_n(U)&=\{x\in I_n(U):|T_\beta^nx-x_0|<e^{-S_nf(x^*)}\},\\
J_n(W)&=\{y\in I_n(W):|T_\beta^ny-y_0|<e^{-S_ng(y^*)}\}.
\end{aligned}$$ In the sequel it will be clear that the set $\overline{E}(T_\beta,f,g)$ is easier to handle. Since $f$ and $g$ are continuous functions, for any $\delta>0$ and $n$ large enough, we have, for every $x\in I_n(U)$ and $y\in I_n(W)$, $$|S_nf(x)-S_nf(x^*)|<n\delta,\quad |S_ng(y)-S_ng(y^*)|<n\delta.$$ Thus we have $$\overline{E}(T_\beta, f+\delta,g+\delta)\subset E(T_\beta,f,g)\subset \overline{E}(T_\beta, f-\delta,g-\delta).$$ Therefore, to calculate the Hausdorff dimension of the set $E(T_\beta,f,g)$, it is sufficient to determine the Hausdorff dimension of $\overline{E}(T_\beta,f,g)$.
The length of $J_n(U)$ satisfies $$|J_n(U)|\leq2\beta^{-n}e^{-S_nf(x^*)},$$ since, for every $x\in J_n(U)$, we have $$|x-(\frac{\epsilon_1}{\beta}+\cdots+\frac{\epsilon_n+x_0}{\beta^n})|=\frac{|T_\beta^nx-x_0|}{\beta^n}<\beta^{-n}e^{-S_nf(x^*)}.$$ Similarly, $$|J_n(W)|\leq2\beta^{-n}e^{-S_ng(y^*)}.$$ So, $\overline{E}(T_\beta,f,g)$ is a $\limsup$ set defined by a collection of rectangles. There are two ways to cover a single rectangle $J_n(U)\times J_n(W)$ as follows.
Covering by shorter side length {#sectionshorter}
-------------------------------
Recall that $f(x)\geq g(y)$ for all $x, y\in [0, 1]$. This implies that the length of $J_n(U)$ is shorter than the length of $J_n(W)$. Then the rectangle $J_n(U)\times J_n(W)$ can be covered by $$\frac{\beta^{-n}e^{-S_ng(y^*)}}{\beta^{-n}e^{-S_nf(x^*)}}=\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}$$ many balls of side length $\beta^{-n}e^{-S_nf(x^*)}.$
Since for each $N$, $$\overline{E}(T_\beta,f,g)\subseteq\bigcup_{n=N}^{\infty}\bigcup_{U,W\in \Sigma_{\beta}^{n}}J_n(U)\times J_n(W),$$ therefore, the $s$-dimensional Hausdorff measure $\H^s$ of $\overline{E}(T_\beta,f,g)$ can be estimated as $$\H^s\Big(\overline{E}(T_\beta,f,g)\Big)\le \liminf_{N\to\infty}\sum_{n=N}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}
\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(\frac{1}{\beta^ne^{S_nf(x^*)}}\Big)^s.$$
Define $$s_1=\inf\{s\geq 0: P(f-s(\log\beta+f))+P(-g)\leq 0\}.$$ Then, from the definition of the pressure function, it is clear that
$$P(f-s(\log\beta+f))+P(-g)\leq 0 \quad \iff \quad \sum_{n=1}^{\infty} \sum_{U,W\in \Sigma_{\beta}^n}
\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(\frac{1}{\beta^ne^{S_nf(x^*)}}\Big)^s<\infty.$$
Hence, for any $s>s_1$,
$$\H^s\Big(\overline{E}(T_\beta,f,g)\Big)= 0.$$ It follows that ${\dim_{\mathrm H}}(\overline{E}(T_\beta,f,g))\leq s_1.$
Covering by longer side length {#sectionlonger}
------------------------------
Alternatively, since the longer side of the rectangle $J_n(U)\times J_n(W)$ has length at most $2\beta^{-n}e^{-S_ng(y^*)}$ (see §\[sectionshorter\]), only one ball of side length comparable to $\beta^{-n}e^{-S_ng(y^*)}$ is needed to cover the rectangle $J_n(U)\times J_n(W)$. Hence, in this case, the $s$-dimensional Hausdorff measure $\H^s$ of $\overline{E}(T_\beta,f,g)$ can be estimated as
$$\mathcal{H}^s(\overline{E}(T_\beta,f,g))\leq\liminf\limits_{N\rightarrow\infty}\sum_{n=N}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}\Big(
\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s.$$ Define $$s_2=\inf\{s\geq 0: P(-s(\log\beta+g))+\log\beta\leq 0\}.$$ Then, from the definition of the pressure function and of Hausdorff measure, it follows that, for any $s>s_2$, $\H^s\Big(\overline{E}(T_\beta,f,g)\Big)= 0.$ Hence, $${\dim_{\mathrm H}}(\overline{E}(T_\beta,f,g))\leq s_2.$$
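To illustrate the two exponents (an illustration only, relying on the constant-potential computation $P(c,T_\beta)=\log\beta+c$ above), take $f\equiv a$ and $g\equiv b$ with $a\geq b\geq0$. Then $$P(f-s(\log\beta+f))+P(-g)=2\log\beta+a-b-s(\log\beta+a),\qquad P(-s(\log\beta+g))+\log\beta=2\log\beta-s(\log\beta+b),$$ so that $$s_1=\frac{2\log\beta+a-b}{\log\beta+a},\qquad s_2=\frac{2\log\beta}{\log\beta+b}.$$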
Completing the upper bound proof
--------------------------------
Finally, to complete the proof of the upper bound, we need to show that, with $s_0=\min\{s_1,s_2\}$, $${\dim_{\mathrm H}}\overline{E}(T_\beta,f,g)\leq s_0.$$
One may argue that, for different $n$, the most appropriate cover of $J_n(U)\times J_n(W)$ may be different, so it may be better to consider the minimum of the two covers for every $n$. This leads to another estimate of the $s$-dimensional Hausdorff measure of $\overline{E}(T_\beta,f,g)$: $$\mathcal{H}^s( \overline{E}(T_\beta,f,g))\leq\liminf\limits_{N\rightarrow\infty}\sum_{n=N}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}\min\left\{\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s,\Big
(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s\right\}.$$ Then an upper bound of the dimension of $ \overline{E}(T_\beta,f,g)$ is related to the convergence of the series $$\label{f7}
\sum_{n=1}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}\min\left\{\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s,\Big
(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s\right\}.$$ So, we can define $$s_0^{'}=\inf\left\{s\geq0:\sum_{n=1}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}\min\left\{\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s,\Big
(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s\right\}<\infty\right\},$$ and it turns out that $s_0^{'}$ is in fact equal to $s_0$, as the following proposition demonstrates.
$s_0=s_0^{'}$
It can be readily verified that for any $s<s_0^{'}$, both the series
$$\label{sum1}
\sum_{n=N}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}
\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(\frac{1}{\beta^ne^{S_nf(x^*)}}\Big)^s,$$
$$\label{sum2}
\sum_{n=1}^{\infty}\sum_{U,W\in \Sigma_{\beta}^n}\Big(
\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s$$
diverge. Hence $s_0^{'}\leq s_0.$
To prove the reverse inequality, we split the proof into two cases: $s_0^{'}<1$ and $s_0^{'}\geq1.$
If $s_0^{'}<1$, then for any $s_0^{'}<s<1$ the series (\[f7\]) converges. Moreover, in this case, it is clear that $$\min\left\{\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s,\Big
(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s\right\}=\Big(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s.$$ So, the series (\[sum2\]) converges. Thus $s_2\leq s.$ This shows that $\min\{s_1,s_2\}\leq s_0^{'}.$
Now if $s_0^{'}\geq1$, then for any $s>s_0^{'}\geq1,$ the series (\[f7\]) converges. Moreover, in this case, it is clear that $$\min\left\{\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s,\Big
(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s\right\}=\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(
\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s.$$ So, the series (\[sum1\]) converges. Thus $s_1\leq s.$ This shows that $\min\{s_1,s_2\}\leq s_0^{'}.$
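Both evaluations of the minimum used above follow from the direct computation (using only that $f(x)\geq g(y)$ for all $x,y\in[0,1]$): $$\frac{e^{S_nf(x^*)}}{e^{S_ng(y^*)}}\Big(\frac{1}{\beta^{n}e^{S_nf(x^*)}}\Big)^s\Big/\Big(\frac{1}{\beta^{n}e^{S_ng(y^*)}}\Big)^s=e^{(1-s)\big(S_nf(x^*)-S_ng(y^*)\big)},$$ which is at least $1$ when $s\leq1$ and at most $1$ when $s\geq1$.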
Proof of Theorem \[t2\]: the lower bound {#lowerbound}
================================
It should be clear from the previous section that proving the upper bound requires only a suitable covering of the set $\overline{E}(T_\beta,f,g)$. In contrast, proving the lower bound is a challenging task, since all possible coverings must be taken into account; this is typically the main difficulty in metric Diophantine approximation (in various settings). The following principle, commonly known as the Mass Distribution Principle [@Falconer_book], has been used frequently for this purpose.
\[mdp\] Let $E$ be a Borel measurable set in $\mathbb{R}^d$ and let $\mu$ be a Borel measure with $\mu(E)>0$. If there exist two positive constants $c,\delta$ such that $\mu(U)\leq c |U|^s$ for any set $U$ with diameter $|U|$ less than $\delta$, then ${\dim_{\mathrm H}}E\geq s$.
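As a standard illustration of how the principle is applied (not needed for the proof below), consider the middle-third Cantor set $C$ with the natural measure $\mu$ assigning mass $2^{-n}$ to each of the $2^n$ basic intervals of length $3^{-n}$. Any set $U$ with $3^{-(n+1)}\leq|U|<3^{-n}$ meets at most two basic intervals of level $n$, so $$\mu(U)\leq 2\cdot2^{-n}=2\,(3^{-n})^{\log2/\log3}\leq 6\,|U|^{\log2/\log3},$$ and the principle yields ${\dim_{\mathrm H}}C\geq\log2/\log3$.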
Specifically, the mass distribution principle replaces the consideration of all coverings by the construction of a particular measure $\mu$ and it is typically deployed in two steps:
- construct a suitable Cantor subset $\mathcal F_\infty$ of $\overline{E}(T_\beta,f,g)$ and a probability measure $\mu$ supported on $\mathcal F_\infty$,
- show that for any fixed $c>0$, $\mu$ satisfies the condition that for any measurable set $U$ of sufficiently small diameter, $\mu (U)\leq c |U|^s$.
If this can be done, then by the mass distribution principle, it follows that $${\dim_{\mathrm H}}(\overline{E}(T_\beta,f,g))\geq {\dim_{\mathrm H}}(\mathcal F_\infty)\geq s.$$
The most intricate and substantive part of this process is the construction of a suitable Cantor-type subset $\mathcal F_\infty$ supporting a probability measure $\mu$. In the remainder of this paper, we construct such a Cantor-type subset of $\overline{E}(T_\beta,f,g)$ and demonstrate that it satisfies the mass distribution principle.
Construction of the Cantor subset. {#construction-of-the-cantor-subset. .unnumbered}
----------------------------------
We construct the Cantor subset $\mathcal F_\infty$ iteratively. Start by fixing an $\epsilon>0$ and assume that $f(x)\geq(1+\epsilon)g(y)\geq g(y)$ for all $x,y\in[0,1].$ We construct a Cantor subset level by level and note that each level depends on its predecessor. Choose a rapidly increasing subsequence $\{m_k\}_{k\geq 1}$ of positive integers with $m_1$ large enough.
Level 1 of the Cantor set.
--------------------------
Let $n_1=m_1.$ For any $U_1,W_1\in \Sigma_\beta^{n_1}$ ending with the zero word of order $N$, i.e. $0^N$, let $x_1^*\in I_{n_1}(U_1), ~y_1^*\in I_{n_1}(W_1).$ From Proposition \[p2\], it follows that there are two full cylinders $I_{k_1}(K_1), I_{l_1}(L_1)$ such that $$\begin{aligned}
I_{k_1}(K_1)&\subset B\Big(x_0,e^{-S_{n_1}f(x_1^*)}\Big), \\ I_{l_1}(L_1)& \subset B\Big(y_0,e^{-S_{n_1}g(y_1^*)}\Big), \end{aligned}$$ and $$e^{-S_{n_1}f(x_1^*)}>\beta^{-k_1}>\Big(e^{-S_{n_1}f(x_1^*)}\Big)^{1+\epsilon},$$ $$e^{-S_{n_1}g(y_1^*)}>\beta^{-l_1}>\Big(e^{-S_{n_1}g(y_1^*)}\Big)^{1+\epsilon}=e^{-S_{n_1}(1+\epsilon)g(y_1^*)}.$$
So, we get a subset $I_{n_1+k_1}(U_1,K_1)\times I_{n_1+l_1}(W_1,L_1)$ of $J_{n_1}(U_1)\times J_{n_1}(W_1)$. Since $f(x)\geq (1+\epsilon)g(y)$ for all $x,y\in [0,1]$, we have $k_1\geq l_1.$ It should be noted that $K_1$ and $L_1$ depend on $U_1$ and $W_1$ respectively. Consequently, for different $U_1$ and $W_1,$ the choice of $K_1$ and $L_1$ may be different.
The first level of the Cantor set is defined as $$\mathcal{F}_1=\Big\{I_{n_1+k_1}(U_1,K_1)\times I_{n_1+l_1}(W_1,L_1):U_1,W_1\in \Sigma_\beta^{n_1} ~~\text{ending with}~~ 0^N\Big\},$$ which is composed of a collection of rectangles. Next, we cut each rectangle into balls with the radius as the shorter side length of the rectangle: $$\begin{aligned}
I_{n_1+k_1}(U_1,K_1)\times I_{n_1+l_1}(W_1,L_1)\rightarrow\Big\{I_{n_1+k_1}&(U_1,K_1)\\&\times I_{n_1+k_1}(W_1,L_1,H_1):H_1\in \Sigma_\beta^{k_1-l_1}\Big\}.\end{aligned}$$ Then we get a collection of balls $$\begin{aligned}
\mathcal{G}_1=\Big\{I_{n_1+k_1}(U_1,K_1)\times I_{n_1+k_1}(W_1,L_1,H_1):U_1,&W_1\in \Sigma_\beta^{n_1}\\& \text{ending with}~~ 0^N,
H_1\in \Sigma_\beta^{k_1-l_1}\Big\}.\end{aligned}$$
Level 2 of the Cantor set.
--------------------------
Fix a $J_1=I_{n_1+k_1}(\Gamma_1)\times I_{n_1+k_1}(\Upsilon_1)$ in $\mathcal{G}_1$. We define the local sublevel $\mathcal{F}_2(J_1)$ as follows.
Choose a large integer $m_2$ such that $$\frac{\epsilon}{1+\epsilon}\cdot m_2\log\beta\geq\Big(n_1+\sup\{k_1:I_{n_1+k_1}(\Gamma_1)\}\Big)||f||,$$ where $||f||=\sup\Big\{|f(x)|:x\in[0,1]\Big\}.$
Write $n_2=n_1+k_1+m_2.$ Just as for the first level of the Cantor set, for any $U_2,W_2\in \Sigma_\beta^{m_2}$ ending with $0^N$, applying Proposition \[p2\] to $J_{n_2}(\Gamma_1,U_2)\times J_{n_2}(\Upsilon_1,W_2)$ we obtain two full cylinders $I_{k_2}(K_2)$, $I_{l_2}(L_2)$ such that
$$I_{k_2}(K_2)\subset B\Big(x_0,e^{-S_{n_2}f(x_2^*)}\Big),~ I_{l_2}(L_2)\subset B\Big(y_0,e^{-S_{n_2}g(y_2^*)}\Big)$$ and $$e^{-S_{n_2}f(x_2^*)}>\beta^{-k_2}>\Big(e^{-S_{n_2}f(x_2^*)}\Big)^{1+\epsilon},$$ $$e^{-S_{n_2}g(y_2^*)}>\beta^{-l_2}>\Big(e^{-S_{n_2}g(y_2^*)}\Big)^{1+\epsilon}=e^{-S_{n_2}(1+\epsilon)g(y_2^*)},$$ where $x_2^*\in I_{n_2}(\Gamma_1,U_2),~y_2^*\in I_{n_2}(\Upsilon_1,W_2).$
Obviously, we get a subset $$I_{n_2+k_2}(\Gamma_1,U_2,K_2)\times I_{n_2+l_2}(\Upsilon_1,W_2,L_2)$$ of $$J_{n_2}(\Gamma_1,U_2)\times J_{n_2}(\Upsilon_1,W_2)$$ and $k_2\geq l_2$.
Then, the second level of the Cantor set is defined as
$$\begin{aligned}
\mathcal{F}_2(J_1)=\Big\{I_{n_2+k_2}(\Gamma_1,U_2,K_2)\times &I_{n_2+l_2}(\Upsilon_1,W_2,L_2)\\&:U_2,W_2\in \Sigma_\beta^{m_2} ~~\text{ending with}~~ 0^N\Big\},\end{aligned}$$
which is composed of a collection of rectangles.
Next, we cut each rectangle into balls with the radius as the shorter side length of the rectangle: $$\begin{aligned}
I_{n_2+k_2}(\Gamma_1,U_2,K_2)\times &I_{n_2+l_2}(\Upsilon_1,W_2,L_2)\rightarrow\Big\{I_{n_2+k_2}(\Gamma_1,U_2,K_2)\\&\times I_{n_2+k_2}(\Upsilon_1,W_2,L_2,H_2):H_2\in \Sigma_\beta^{k_2-l_2}\Big\}:=\mathcal{G}_2(J_1).\end{aligned}$$
Therefore, the second level is defined as $$\mathcal{F}_2=\bigcup
\limits_{J\in \mathcal{G}_1}\mathcal{F}_2(J),~\mathcal{G}_2=\bigcup\limits_{J\in \mathcal{G}_1}\mathcal{G}_2(J).$$
From Level $(i-1)$ to Level $i$.
--------------------------------
Assume that the $(i-1)$th level of the Cantor set $\mathcal{G}_{i-1}$ has been defined. Let $J_{i-1}=I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})$ be a generic element in $\mathcal{G}_{i-1}.$ We define the local sublevel $\mathcal{F}_i(J_{i-1})$ as follows.
Choose a large integer $m_{i}$ such that $$\label{espil}
\frac{\epsilon}{1+\epsilon}\cdot m_i\log\beta\geq\Big(n_{i-1}+\sup\big\{k_{i-1}:I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\big\}\Big)||f||.$$
Write $n_i=n_{i-1}+k_{i-1}+m_i.$ For each $U_i,W_i\in \Sigma_\beta^{m_i}$ ending with $0^N$, applying Proposition \[p2\] to $$J_{n_i}(\Gamma_{i-1},U_{i})\times J_{n_i}(\Upsilon_{i-1},W_i),$$ we obtain two full cylinders $I_{k_i}(K_i)$, $I_{l_i}(L_i)$ such that
$$I_{k_i}(K_i)\subset B\Big(x_0,e^{-S_{n_i}f(x_i^*)}\Big), I_{l_i}(L_i)\subset B\Big(y_0,e^{-S_{n_i}g(y_i^*)}\Big)$$ and $$e^{-S_{n_i}f(x_i^*)}>\beta^{-k_i}>\Big(e^{-S_{n_i}f(x_i^*)}\Big)^{1+\epsilon},$$ $$e^{-S_{n_i}g(y_i^*)}>\beta^{-l_i}>\Big(e^{-S_{n_i}g(y_i^*)}\Big)^{1+\epsilon}=e^{-S_{n_i}(1+\epsilon)g(y_i^*)},$$ where $x_i^*\in I_{n_i}(\Gamma_{i-1},U_{i}), ~y_i^*\in I_{n_i}(\Upsilon_{i-1},W_{i}).$
Obviously, we get a subset $$I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)$$ of $$J_{n_i}(\Gamma_{i-1},U_i)\times J_{n_i}(\Upsilon_{i-1},W_i)$$ and $k_i\geq l_i$. Then, the $i$-th level of the Cantor set is defined as
$$\begin{aligned}
\mathcal{F}_i(J_{i-1})=\Big\{I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times &I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)\\&:U_i,W_i\in \Sigma_\beta^{m_i} ~~\text{ending with}~~ 0^N\Big\},\end{aligned}$$
which is composed of a collection of rectangles. As before, we cut each rectangle into balls with the radius as the shorter side length of the rectangle: $$\begin{aligned}
I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}&(\Upsilon_{i-1},W_i,L_i)\rightarrow\Big\{I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times\\ &I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i):H_i\in \Sigma_\beta^{k_i-l_i}\Big\}:=\mathcal{G}_i(J_{i-1}).\end{aligned}$$
Therefore, the $i$-th level is defined as $$\mathcal{F}_i=\bigcup
\limits_{J\in \mathcal{G}_{i-1}}\mathcal{F}_i(J),~\mathcal{G}_i=\bigcup\limits_{J\in \mathcal{G}_{i-1}}\mathcal{G}_i(J).$$
Finally, the Cantor set is defined as $$\mathcal{F}_\infty=\bigcap\limits_{i=1}^\infty\bigcup\limits_{J\in \mathcal{F}_i}J=\bigcap\limits_{i=1}^\infty\bigcup\limits_{I\in \mathcal{G}_i}I.$$ It is straightforward to see that $\mathcal{F}_\infty\subset \overline{E}(T_\beta,f,g)$, since every point of $\mathcal{F}_\infty$ lies in $I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)\subset J_{n_i}(\Gamma_{i-1},U_i)\times J_{n_i}(\Upsilon_{i-1},W_i)$ for every $i\geq1$.
\[rem1\]It should be noted that the integer $k_i$ depends upon $\Gamma_{i-1}$ and $U_i$. However (assume that $f$ is strictly positive; otherwise replace $f$ by $f+\epsilon$), since $m_i$ can be chosen such that $m_i\gg n_{i-1}+k_{i-1}$, we have $$\beta^{-k_i}\approx e^{-S_{n_i}f(x_i^*)}\approx e^{-S_{m_i}f(T_\beta^{n_{i-1}+k_{i-1}}x_i^*)},$$ where $x_i^*\in I_{n_{i-1}+k_{i-1}+m_i}(\Gamma_{i-1},U_i).$ In other words, $k_i$ depends almost only on $U_i$ and $$\label{eqki}
\beta^{-k_i}\approx e^{-S_{m_i}f(x_i')}, {x_i'}\in I_{m_i}(U_i).$$
The same is true for $l_i$, $$\label{eqli}
\beta^{-l_i}\approx e^{-S_{m_i}g(y_i')}, {y_i'}\in I_{m_i}(W_i).$$
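Consequently, combining (\[eqki\]) and (\[eqli\]) with the classical counting estimate $\beta^{j}\leq\#\Sigma_\beta^{j}\leq\beta^{j+1}/(\beta-1)$, we record for later use that $$\#\Sigma_\beta^{k_i-l_i}\approx\beta^{k_i-l_i}\approx\frac{e^{S_{m_i}f(x_i')}}{e^{S_{m_i}g(y_i')}}.$$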
Supporting measure
-----------------------
Now we construct a probability measure $\mu$ supported on $\mathcal{F}_\infty$, which is defined by distributing masses among the cylinders with non-empty intersection with $\mathcal{F}_\infty$. The process splits into two cases: when $s_0>1$ and $ 0\leq s_0\leq 1$.
### **Case I: $s_0>1$** {#case-i-s_01 .unnumbered}
In this case, for any $1<s<s_0,$ notice that $$\frac{e^{S_nf(x')}}{e^{S_ng(y')}}\left(\frac{1}{\beta^ne^{S_nf(x')}}\right)^s\leq\left(\frac{1}{\beta^ne^{S_ng(y')}}\right)^s.$$ This means that covering the rectangle $J_n(U)\times J_n(W)$ by balls of the shorter side length is preferable, and therefore it is reasonable to define the probability measure on the smaller balls. To this end, let $s_i$ be the solution to the equation $$\sum\limits_{U,W\in \Sigma_{\beta_N}^{m_i}}
\frac{e^{S_{m_i}f(x_i')}}{e^{S_{m_i}g(y_i')}}\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}f(x_i')}}\Big)^s=1,$$ where $x_i'\in I_{m_i}(U_i),~ y_i'\in I_{m_i}(W_i).$
By the continuity of the pressure function $P(T_\beta,f)$ with respect to $\beta$ [@TanWang Theorem 4.1], it can be shown that $s_i\rightarrow s_0$ when $m_i\rightarrow \infty.$ Thus, without loss of generality, we may choose all $m_i$ large enough that $s_i>1$ for all $i$ and $|s_i-s_0|=o(1).$
We define the measure $\mu$ on the Cantor set systematically, by defining it on the basic cylinders first. Recall that for level 1 of the Cantor set construction we assumed that $n_1=m_1.$ For the sub-levels of the Cantor set, roughly speaking, $m_k$ records the number of positions at which the digits can be chosen (almost) freely, while $n_k$ denotes the length of a word in level $\F_k$ before shrinking.
- Let $I_{n_1+k_1}(U_1,K_1)\times I_{n_1+k_1}(W_1,L_1,H_1)$ be a generic cylinder in $\mathcal{G}_1.$ Then define $$\mu\Big(I_{n_1+k_1}(U_1,K_1)\times I_{n_1+k_1}(W_1,L_1,H_1)\Big)=\Big(\frac{1}{\beta^{m_1}e^{S_{m_1}f(x_1')}}\Big)^{s_1},$$ where $ x_1'\in I_{m_1}(U_1)$.
Assume that the measure on the cylinders of order $(i-1)$ has been well defined. To define the measure on the cylinders of order $i$:
- Let $I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)$ be a generic $i$th cylinder in $\mathcal{G}_i$. Define the probability measure $\mu$ as $$\begin{aligned}
\mu\Big(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)&\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)\Big)= \\&\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}f(x_i')}}\Big)^{s_i},\end{aligned}$$
where $ x_i'\in I_{m_i}(U_i)$.
The measure of a rectangle in $\mathcal{F}_i$ is then given as $$\begin{aligned}
&\mu\Big(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)\Big)\\
&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times{\#\Sigma_\beta^{k_i-l_i}}\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}f(x_i')}}\Big)^{s_i}\\
&\approx \mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\frac{e^{S_{m_i}f(x_i')}}{e^{S_{m_i}g(y_i')}}\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}f(x_i')}}\Big)^{s_i},\end{aligned}$$ where the last step follows from the estimates (\[eqki\]) and (\[eqli\]).
### Estimation of the $\mu$-measure of cylinders.
For any $i\geq 1$ consider the generic cylinder, $$I:=I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i).$$ We would like to show by induction that, for any $1<s< s_0$, $$\mu(I)\leq|I|^{s/(1+\epsilon)}.$$
When $i=1$, the length of $I$ is given as $$|I|=\beta^{-m_1-k_1}\geq\beta^{-m_1}\cdot\Big(e^{-S_{n_1}f(x_1^*)}\Big)^{1+\epsilon}=\beta^{-m_1}\cdot\Big(e^{-S_{m_1}f(x_1^*)}\Big)^{1+\epsilon}.$$ On the other hand, by the definition of the measure $\mu$, it is clear that $$\mu(I)\leq|I|^{s_1}\leq|I|^{s/(1+\epsilon)}.$$
Now we consider the inductive process. Assume that $$\begin{aligned}
\mu(I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times &I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1}))\\&
\leq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|^{s/(1+\epsilon)}.\end{aligned}$$
Let $$I=I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)$$ be a generic cylinder in $\mathcal{G}_i$. On the one hand, its length satisfies $$\begin{aligned}
\label{f9}
|I|&=\beta^{-n_i-k_i}=|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|\times\beta^{-m_i}\times\beta^{-k_i}\nonumber\\
&\geq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|\times\beta^{-m_i}\Big(e^{-S_{n_i}f(x_i^*)}\Big)^{1+\epsilon},\end{aligned}$$ where $x_i^*\in I_{n_i}(\Gamma_i,U_i).$
We now compare $S_{n_i}f(x_i^*)$ and $S_{m_i}f(x_i')$: by (\[espil\]) we have $$\begin{aligned}
|S_{n_i}f(x_i^*)-S_{m_i}f(x_i')|&=|S_{n_{i-1}+k_{i-1}}f(x_i^*)|\\ &\leq({n_{i-1}+k_{i-1}})\|f\| \\ &\leq \frac{\epsilon}{1+\epsilon}m_i\log\beta, \end{aligned}$$ where $x_i'\in I_{m_i}(U_i).$ So, we get
$$\label{geq}
|I|\geq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|\times\Big(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\Big)^{1+\epsilon}.$$
On the other hand, by the definition of the measure $\mu$ and the induction, we have that $$\begin{aligned}
\mu(I)&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\Big(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\Big)^{s_i}\\
&\leq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|^{s/(1+\epsilon)}
\Big((\beta^{-m_i}e^{-S_{m_i}f(x_i')})^{1+\epsilon}\Big)^{s/(1+\epsilon)}\\
&\leq |I|^{s/(1+\epsilon)}.\end{aligned}$$
In the following steps, for any $(x,y)\in \mathcal{F}_\infty,$ we will estimate the measure of $I_n(x)\times I_n(y)$ compared with its length $\beta^{-n}.$ By the construction of $\mathcal{F}_\infty,$ there exists $\{k_i,l_i\}_{i\geq1}$ such that for all $i\geq 1,$ $$(x,y)\in I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i).$$
We remark that, although the integers $\{k_i,l_i\}$ are different for the different cylinders composing $\mathcal{F}_\infty$, once $(x,y)\in \mathcal{F}_\infty$ is given the corresponding integers $\{k_i,l_i\}$ are fixed.
For any $n\geq 1$, let $i\geq1$ be the integer such that $$n_{i-1}+k_{i-1}<n\leq n_i+k_i=n_{i-1}+k_{i-1}+m_i+k_i.$$
[**Step 1.**]{} When $n_{i-1}+k_{i-1}+m_i+l_i\leq n \leq n_{i}+k_{i}=n_{i-1}+k_{i-1}+m_i+k_i.$
Then the cylinder $I_n(x)\times I_n(y)$ contains $\beta^{n_{i}+k_{i}-n}$ cylinders in $\mathcal{G}_i$ with order $n_{i}+k_{i}$. Note that, by the definition of $\{k_j,l_j\}_{1\leq j\leq i}$, the first $i$ pairs $\{k_j,l_j\}_{1\leq j\leq i}$ depend only on the first $n_i$ digits of $(x,y)$. So the measures of the sub-cylinders of order $n_{i}+k_{i}$ are the same, and the measure of $I_n(x)\times I_n(y)$ can be estimated as $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)
=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})&\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big) \\&\times \Big(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\Big)^{s_i}\times \beta^{n_{i}+k_{i}-n}.\end{aligned}$$ Thus by the measure estimation of cylinders of order $n_{i-1}+k_{i-1}$ and the choice of $k_i$, one has that $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)&\leq \Big(\beta^{-n_{i-1}-k_{i-1}}\Big)^{s/(1+\epsilon)}\Big(\beta^{-m_{i}-k_{i}}\Big)^{s/(1+\epsilon)}\times\beta^{n_{i}+k_{i}-n}\\
&=\Big(\beta^{-n_{i}-k_{i}}\Big)^{s/(1+\epsilon)}\times\beta^{n_{i}+k_{i}-n}
\\ &\leq \Big(\beta^{-n}\Big)^{s/(1+\epsilon)},\end{aligned}$$by noting that $n \leq n_{i}+k_{i}$ and ${s/(1+\epsilon)}>1.$
[**Step 2.**]{} When $n_{i-1}+k_{i-1}+m_i\leq n \leq n_{i}+l_{i}=n_{i-1}+k_{i-1}+m_i+l_i.$
Recalling the definition of $n_{i}+k_{i}$, the first $i$ pairs $\{k_j,l_j\}_{1\leq j\leq i}$ depend only on the first $n_i$ digits of $(x,y)$, so the measures of the sub-cylinders in $\mathcal{G}_i$ of order $n_{i}+k_{i}$ are the same. It is clear that the cylinder $I_n(x)\times I_n(y)$ contains $\beta^{k_{i}-l_i}$ cylinders of order $n_{i}+k_{i}$. So, the measure of $I_n(x)\times I_n(y)$ can be estimated as $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)
=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})&\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\\& \times \Big(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\Big)^{s_i}\times \beta^{k_{i}-l_i}.\end{aligned}$$ Thus by the measure estimation of cylinders of order $n_{i-1}+k_{i-1}$ and the choice of $k_i$, one has that $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)&\leq \Big(\beta^{-n_{i-1}-k_{i-1}}\Big)^{s/(1+\epsilon)}\Big(\beta^{-m_{i}-k_{i}}\Big)^{s/(1+\epsilon)}\times\beta^{k_{i}-l_i}\\
&=\Big(\beta^{-n_{i}-k_{i}}\Big)^{s/(1+\epsilon)}\times\beta^{k_{i}-l_i}\\
&\leq\Big(\beta^{-n_i-l_i}\Big)^{s/(1+\epsilon)}
\\ &\leq \Big(\beta^{-n}\Big)^{s/(1+\epsilon)},\end{aligned}$$ by noting that $n \leq n_{i}+l_{i}$ and ${s/(1+\epsilon)}>1.$
[**Step 3.**]{} When $n_{i-1}+k_{i-1}\leq n \leq n_{i-1}+k_{i-1}+m_i.$
Assume that $U_i=(\epsilon_1,\epsilon_2,\ldots,\epsilon_{m_i}), W_i=(\omega_1,\omega_2,\ldots,\omega_{m_i}).$ Denote $l=n-(n_{i-1}+k_{i-1})$ and $h=m_i-l$. Then $$\begin{aligned}
&\mu\left(I_n(x)\times I_n(y)\right)\\ &=\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}
\mu\left(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)
\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)\right)\times\beta^{k_{i}-l_i}\\
&=\mu\left(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\right)\times
\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h \\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}\left(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\right)^{s_i}
\times\beta^{k_{i}-l_i}\\
&=\mu\left(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\right)\times
\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}\frac{e^{S_{m_i}f(x_i')}}{e^{S_{m_i}g(y_i')}}
\left(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\right)^{s_i}.\end{aligned}$$ Then, by the estimate on the measure of cylinders of order $n_{i-1}+k_{i-1}$, and letting $(\widetilde{x_i'},\widetilde{y_i'})=(T_\beta^lx_i',T_\beta^ly_i')$, we get $$\begin{aligned}
\mu\left(I_n(x)\times I_n(y)\right)&\leq(\beta^{-n_{i-1}-k_{i-1}})^{s/(1+\epsilon)}\cdot\frac{e^{S_{l}f(x_i')}}{e^{S_{l}g(y_i')}}\cdot
\left(\beta^{-l}e^{-S_{l}f(x_i')}\right)^{s_i}\times\\&
\quad\quad\quad\quad
\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}\frac{e^{S_{h}f(\widetilde{x_i'})}}{e^{S_{h}g(\widetilde{y_i'})}}
\cdot\left(\beta^{-h}e^{-S_{h}f(\widetilde{x_i'})}\right)^{s_i}.\end{aligned}$$
The first part can be estimated as $$\begin{aligned}
\left(\beta^{-n_{i-1}-k_{i-1}}\right)^{s/(1+\epsilon)}\cdot\frac{e^{S_{l}f(x_i')}}{e^{S_{l}g(y_i')}}\cdot
\Big(\beta^{-l}e^{-S_{l}f(x_i')}\Big)^{s_i}&\leq\Big(\beta^{-(n_{i-1}+k_{i-1}+l)}\Big)^{s/(1+\epsilon)}\\ &=\Big(\beta^{-n}\Big)^{ s/(1+\epsilon)},\end{aligned}$$
since $$\frac{e^{S_{l}f(x_i')}}{e^{S_{l}g(y_i')}}\cdot\Big(e^{-S_{l}f(x_i')}\Big)^{s_i}\leq 1, \text{~for~} s_i\geq1.$$
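For completeness, the left-hand side of this last inequality equals $e^{(1-s_i)S_{l}f(x_i')-S_{l}g(y_i')}$, which is indeed at most $1$ because $s_i\geq1$ and both ergodic sums are non-negative (recall that in this construction $f$ is taken strictly positive, and the standing assumption $f\geq(1+\epsilon)g\geq g$ forces $g\geq0$).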
To estimate the second part, we first recall that we defined $s_i$ to be the solution of the equation $$\sum\limits_{U,W\in \Sigma_{\beta_N}^{m_i}}
\frac{e^{S_{m_i}f(x_i')}}{e^{S_{m_i}g(y_i')}}\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}f(x_i')}}\Big)^s=1.$$ Therefore, $$\begin{aligned}
1=\sum\limits_{U_1,W_1\in \Sigma_{\beta_N}^{l}}
\frac{e^{S_lf(x_i')}}{e^{S_lg(y_i')}}&\Big(\frac{1}{\beta^le^{S_lf(x_i')}}\Big)^{s_i}\times
\sum\limits_{U_2,W_2\in \Sigma_{\beta_N}^{h}}
\frac{e^{S_hf(\widetilde{x_i'})}}{e^{S_hg(\widetilde{y_i'})}}\Big(\frac{1}{\beta^he^{S_hf(\widetilde{x_i'})}}\Big)^{s_i}.\end{aligned}$$ So, by arguments similar to those in [@TanWang pp. 2095-2097] and [@WW pp. 1331-1332], we derive that $$\sum\limits_{U_2,W_2\in \Sigma_{\beta_N}^{h}}
\frac{e^{S_hf(\widetilde{x_i'})}}{e^{S_hg(\widetilde{y_i'})}}
\Big(\frac{1}{\beta^he^{S_hf(\widetilde{x_i'})}}\Big)^{s_i}\leq\beta^{l\epsilon}.$$
Therefore,$$\mu\Big(I_n(x)\times I_n(y)\Big)\leq\beta^{-n\cdot s/(1+\epsilon)}\cdot\beta^{l\epsilon}\leq(\beta^{-n})^{s/(1+\epsilon)-\epsilon}.$$
As far as the measure of a general ball $B(x,r)$ with $\beta^{-n-1}\leq r<\beta^{-n}$ is concerned, we notice that it can intersect at most $3$ cylinders of order $n$. Thus,$$\mu\Big(B(x,r)\Big)\leq3(\beta^{-n})^{s/(1+\epsilon)-\epsilon}\leq3\beta^sr^{s/(1+\epsilon)-\epsilon}\leq3\beta^2r^{s/(1+\epsilon)-\epsilon}.$$
So, finally, an application of the mass distribution principle (Proposition \[mdp\]) yields that $${\dim_{\mathrm H}}\overline{E}(T_\beta, f,g)\geq s_0.$$
### **Case II: $0\leq s_0\leq1$** {#case-ii-0leq-s_0leq1 .unnumbered}
The arguments are similar to Case I but the calculations are different. In this case, for any $s<s_0\leq 1,$ it is trivial that $$\frac{e^{S_nf(x')}}{e^{S_ng(y')}}\Big(\frac{1}{\beta^ne^{S_nf(x')}}\Big)^s\geq\big(\frac{1}{\beta^ne^{S_ng(y')}}\big)^s.$$ This means that covering the rectangle $J_n(U)\times J_n(W)$ by balls of the larger side length is preferable, and therefore it is reasonable to define the probability measure of the rectangle to be the same as the measure of the cylinder of order $n_i+l_i$.
Just like Case I, let $s_i$ be the solution to the equation $$\sum\limits_{U,W\in \Sigma_{\beta_N}^{m_i}~~\text{ending with}~ 0^N}
\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^s=1,$$ where $y_i'\in I_{m_i}(W_i).$ By the continuity of the pressure function $P(T_\beta,f)$ with respect to $\beta$, we may assume that all $m_i$ are large enough that $s_i<1$ for all $i$ and $|s_i-s_0|=o(1).$
We first define the measure $\mu$ on the basic cylinders.
- Let $I_{n_1+k_1}(U_1,K_1)\times I_{n_1+l_1}(W_1,L_1)$ be a generic cylinder in $\mathcal{F}_1.$ Then define $$\mu\Big(I_{n_1+k_1}(U_1,K_1)\times I_{n_1+l_1}(W_1,L_1)\Big)=\Big(\frac{1}{\beta^{m_1}e^{S_{m_1}g(y_1')}}\Big)^{s_1},$$ where $ y_1'\in I_{m_1}(W_1)$.
<!-- -->
- The measure of this rectangle is then distributed evenly among its sub-cylinders in $\mathcal{G}_1$. So, for a generic cylinder $I_{n_1+k_1}(U_1,K_1)\times I_{n_1+k_1}(W_1,L_1,H_1)$ in $\mathcal{G}_1$, define
$$\begin{aligned}
\mu\Big(I_{n_1+k_1}(U_1,K_1)\times I_{n_1+k_1}(W_1,L_1,H_1)\Big)&=\frac{1}{\#\Sigma_\beta^{k_1-l_1}}\Big(\frac{1}{\beta^{m_1}e^{S_{m_1}g(y_1')}}\Big)^{s_1}\\
&\approx\frac{1}{\beta^{k_1-l_1}}\Big(\frac{1}{\beta^{m_1}e^{S_{m_1}g(y_1')}}\Big)^{s_1}.\end{aligned}$$
Assume that the measure on the cylinders of order $(i-1)$ has been well defined. Then to define the measure on the $i$th cylinder we proceed as follows.
- Let $I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)$ be a generic cylinder in $\mathcal{F}_i$.
Then define $$\begin{aligned}
\mu\Big(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},&W_i,L_i)\Big)
= \mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\\&\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^{s_i},\end{aligned}$$ where $ y_i'\in I_{m_i}(W_i)$.
<!-- -->
- By the definition of $k_i,l_i$, the measure of a cylinder in $\mathcal{G}_i$ is then given as $$\begin{aligned}
&\mu\Big(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)\Big)\\
&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\frac{1}{\#\Sigma_\beta^{k_i-l_i}}\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^{s_i}\\
&\approx \mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\frac{e^{S_{m_i}g(y_i')}}{e^{S_{m_i}f(x_i')}}\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^{s_i}.\end{aligned}$$
### Estimation of the $\mu$-measure of cylinders.
We first show by induction that for any $i\geq 1$ and a generic cylinder $$I:=I_{{n_{i-1}+k_{i-1}}+m_{i}+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{{n_{i-1}+k_{i-1}}+m_{i}+k_i}(\Upsilon_{i-1},W_i,L_i,H_i),$$ we have $$\mu(I)\leq|I|^{s/(1+\epsilon)}.$$
When $i=1$: on the one hand, the length of $I$ is given as $$|I|=\beta^{-m_1-k_1}\geq\beta^{-m_1}\cdot\Big(e^{-S_{n_1}f(x_1')}\Big)^{1+\epsilon}=\beta^{-m_1}\cdot\Big(e^{-S_{m_1}f(x_1')}\Big)^{1+\epsilon}.$$ On the other hand, by the definition of the measure $\mu$, it is clear that $$\begin{aligned}
\mu(I)&\leq\frac{e^{S_{m_1}g(y_1')}}{e^{S_{m_1}f(x_1')}}\cdot\Big(\frac{1}{\beta^{m_1}e^{S_{m_1}g(y_1')}}\Big)^{s_1}\\
&\leq\Big(\beta^{-m_1}e^{-S_{m_1}f(x_1')}\Big)^{s_1}\\
&\leq|I|^{s/(1+\epsilon)},\end{aligned}$$ by noting that $s_1<1.$
Just like Case I, we consider the inductive process. Assume that $$\begin{aligned}
\mu(I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times &I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1}))\leq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|^{s/(1+\epsilon)}.\end{aligned}$$ Let $$I=I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+k_i}(\Upsilon_{i-1},W_i,L_i,H_i)$$ be a generic cylinder in $\mathcal{G}_i$. By (\[geq\]) we get $$|I|\geq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|\times\Big(\beta^{-m_i}e^{-S_{m_i}f(x_i')}\Big)^{1+\epsilon}.$$
From the definition of the measure $\mu$, the induction and that $s_i<1$, it follows that $$\begin{aligned}
\mu(I)&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\frac{e^{S_{m_i}g(y_i')}}{e^{S_{m_i}f(x_i')}}
\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^{s_i}\\
&\leq|I_{n_{i-1}+k_{i-1}}(\Gamma_{i-1})\times I_{n_{i-1}+k_{i-1}}(\Upsilon_{i-1})|^{s/(1+\epsilon)}
\Big((\beta^{-m_i}e^{-S_{m_i}f(x_i')})^{1+\epsilon}\Big)^{s/(1+\epsilon)}\\
&\leq |I|^{s/(1+\epsilon)}\\ &=\Big(\beta^{-n_i-k_i}\Big)^{s/(1+\epsilon)}\\ &\approx\Big(\beta^{-m_i-k_i}\Big)^{s/(1+\epsilon)}.\end{aligned}$$
So, for a rectangle $$J=I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)$$ in $\mathcal{F}_i,$ we have that $$\begin{aligned}
\mu(J)&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^{s_i}\\
&\leq \Big(\beta^{-n_{i-1}-k_{i-1}}\Big)^{s/(1+\epsilon)}\Big(\beta^{-m_i}\beta^{-l_i}\Big)^{s_i}\\
&\leq\Big(\beta^{-n_{i}-l_{i}}\Big)^{s/(1+\epsilon)}.\end{aligned}$$
For any $(x,y)\in \mathcal{F}_\infty,$ we will estimate the measure of $I_n(x)\times I_n(y)$ compared with its length $\beta^{-n}.$ By the construction of $\mathcal{F}_\infty,$ there exists $\{k_i,l_i\}_{i\geq1}$ such that for all $i\geq 1,$ $$(x,y)\in I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i).$$
For any $n\geq 1$, let $i\geq1$ be the integer such that $$n_{i-1}+k_{i-1}<n\leq n_i+k_i=n_{i-1}+k_{i-1}+m_i+k_i.$$
[**Step I.**]{} When $n_{i-1}+k_{i-1}+m_i+l_i\leq n \leq n_{i}+k_{i}=n_{i-1}+k_{i-1}+m_i+k_i.$
In this case, the cylinder intersects only one rectangle in $\mathcal{F}_i$, and it contains (approximately) a proportion $\beta^{-(n-n_i-l_i)}$ of the equal-measure balls in $\mathcal{G}_i$ into which that rectangle was cut, so $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)
&=\mu\Big(I_{{n_{i}+k_{i}}}(\Gamma_{i-1},U_i,K_i)\times I_{{n_{i}+l_{i}}}(\Upsilon_{i-1},W_i,L_i)\Big)\times \frac{1}{\beta^{n-n_{i}-l_{i}}}\\
&\leq\Big(\beta^{-n_{i}-l_{i}}\Big)^{s/(1+\epsilon)}\times \frac{1}{\beta^{n-n_{i}-l_{i}}}
\\ &\leq \Big(\beta^{-n}\Big)^{s/(1+\epsilon)},\end{aligned}$$ by noting that $n\geq n_{i}+l_{i}$ and $s/(1+\epsilon)<1.$
[**Step II.**]{} When $n_{i-1}+k_{i-1}+m_i\leq n \leq n_{i}+l_{i}=n_{i-1}+k_{i-1}+m_i+l_i.$
Note that, by the definition of $\{k_j,l_j\}_{1\leq j\leq i}$, the first $i$ pairs $\{k_j,l_j\}_{1\leq j\leq i}$ depend only on the first $n_i$ digits of $(x,y)$. Since $n\geq n_{i-1}+k_{i-1}+m_i=n_i$, these digits already determine $U_i,W_i$ and hence $K_i,L_i$, so the cylinder $I_n(x)\times I_n(y)$ intersects only one rectangle in $\mathcal{F}_i$. So, the measure of $I_n(x)\times I_n(y)$ can be estimated as $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)
&=\mu\Big(I_{{n_{i}+k_{i}}}(\Gamma_{i-1},U_i,K_i)\times I_{{n_{i}+l_{i}}}(\Upsilon_{i-1},W_i,L_i)\Big)\\
&\leq \Big(\beta^{-n_{i}-l_{i}}\Big)^{s/(1+\epsilon)}
\\ &\leq\Big(\beta^{-n}\Big)^{s/(1+\epsilon)},\end{aligned}$$ by noting that $n\leq n_{i}+l_{i}.$
[**Step III.**]{} When $n_{i-1}+k_{i-1}\leq n \leq n_{i-1}+k_{i-1}+m_i.$
Assume $U_i=(\epsilon_1,\epsilon_2,\ldots,\epsilon_{m_i}), W_i=(\omega_1,\omega_2,\ldots,\omega_{m_i}).$ Write $l=n-(n_{i-1}+k_{i-1})$ and $h=m_i-l$. Then $$\begin{aligned}
&\mu\Big(I_n(x)\times I_n(y)\Big)\\ &=\sum\limits_{\substack{ (\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}
\mu\Big(I_{n_i+k_i}(\Gamma_{i-1},U_i,K_i)
\times I_{n_i+l_i}(\Upsilon_{i-1},W_i,L_i)\Big)\\
&=\mu\Big(I_{{n_{i-1}+k_{i-1}}}(\Gamma_{i-1})\times I_{{n_{i-1}+k_{i-1}}}(\Upsilon_{i-1})\Big)\times
\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}
\Big(\beta^{-m_i}e^{-S_{m_i}g(y_i')}\Big)^{s_i}.\end{aligned}$$ Then, by the estimate on the measure of cylinders of order $n_{i-1}+k_{i-1}$, and letting $\widetilde{y_i'}=T_\beta^ly_i'$, we get $$\begin{aligned}
\mu\Big(I_n(x)\times I_n(y)\Big)&\leq\Big(\beta^{-n_{i-1}-k_{i-1}}\Big)^{s/(1+\epsilon)}\cdot\Big(\beta^{-l}e^{-S_{l}g(y_i')}\Big)^{s_i}\times
\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}\Big(\beta^{-h}e^{-S_{h}g(\widetilde{y_i'})}\Big)^{s_i}\\
&\leq\Big(\beta^{-n}\Big)^{s/(1+\epsilon)}\cdot\sum\limits_{\substack{(\epsilon_{l+1},\ldots,\epsilon_{m_i})\in \Sigma_\beta^h\\ (\omega_{l+1},\ldots,\omega_{m_i})\in \Sigma_\beta^h}}
\Big(\beta^{-h}e^{-S_{h}g(\widetilde{y_i'})}\Big)^{s_i}.\end{aligned}$$
Recall the definition of $s_i:$ $$\sum\limits_{U,W\in \Sigma_{\beta_N}^{m_i}}
\Big(\frac{1}{\beta^{m_i}e^{S_{m_i}g(y_i')}}\Big)^s=1.$$ Then $$\begin{aligned}
1=\sum\limits_{U_1,W_1\in \Sigma_{\beta_N}^{l}}
\Big(\frac{1}{\beta^le^{S_lg(y_1')}}\Big)^{s_i}\cdot
\sum\limits_{U_2,W_2\in \Sigma_{\beta_N}^{h}}
\Big(\frac{1}{\beta^he^{S_hg(\widetilde{y_1'})}}\Big)^{s_i},\end{aligned}$$ where $y_1'\in I_{l}(W_1),\widetilde{y_1'}\in I_{h}(W_2).$\
So, by an argument similar to the one used in Case I, we have that $$\sum\limits_{U_2,W_2\in \Sigma_{\beta_N}^{h}}
\Big(\frac{1}{\beta^he^{S_hg(\widetilde{y_1'})}}\Big)^{s_i}\leq\beta^{l\epsilon}.$$
Therefore,$$\mu\Big(I_n(x)\times I_n(y)\Big)\leq\Big(\beta^{-n}\Big)^ {s/(1+\epsilon)}\cdot\beta^{l\epsilon}\leq\Big(\beta^{-n}\Big)^{s/(1+\epsilon)-\epsilon}.$$
Notice that a general ball $B(x,r)$ with $\beta^{-n-1}\leq r<\beta^{-n}$ can intersect at most $3$ cylinders of order $n$. Therefore the measure of the general ball can be estimated as, $$\mu\Big(B(x,r)\Big)\leq3\Big(\beta^{-n}\Big)^{s/(1+\epsilon)-\epsilon}\leq3\beta^sr^{s/(1+\epsilon)-\epsilon}\leq3\beta^2r^{s/(1+\epsilon)-\epsilon}.$$
So, finally, by using the mass distribution principle we obtain the lower bound for the Hausdorff dimension in this case: $${\dim_{\mathrm H}}\overline{E}(T_\beta, f,g)\geq s_0.$$ Hence, combining both cases, we have the desired conclusion.\
[**Acknowledgments.**]{} The first-named author was supported by La Trobe University’s Asia Award and the start-up grant. We would like to thank Professor Baowei Wang for useful discussions on this project.
[99]{}
V. Beresnevich, D. Dickinson, and Sanju Velani, *Measure theoretic laws for lim sup sets*, Mem. Amer. Math. Soc. 179 (2006), no. 846, x+91 pp.
V. Beresnevich and S. Velani, *A mass transference principle and the [D]{}uffin-[S]{}chaeffer conjecture for [H]{}ausdorff measures*, Ann. of Math. (2) 164 (2006), no. 3, 971–992.

Y. Bugeaud and B. Wang, *Distribution of full cylinders and the Diophantine properties of the orbits in $\beta$-expansions*, J. Fractal Geom. 1 (2014), no. 2, 221–241.
Y. Bugeaud, *A note on inhomogeneous [D]{}iophantine approximation*, Glasg. Math. J. 45 (2003), no. 1, 105–110.

M. Coons, M. Hussain, and B.-W. Wang, *A dichotomy law for the Diophantine properties in [$\beta$]{}-dynamical systems*, Mathematika 62 (2016), no. 3, 884–897.
K. Falconer, *Fractal geometry: [M]{}athematical foundations and applications*, John Wiley & Sons, Ltd., Chichester, 1990.
A. Fan and B. Wang, *On the lengths of basic intervals in beta expansions*, Nonlinearity **25** (2012), no. 5, 1329–1343.
R. Hill and S. Velani, *The ergodic theory of shrinking targets*, Invent. Math. 119 (1995), no. 1, 175–198.
R. Hill and S. Velani, *Metric [D]{}iophantine approximation in [J]{}ulia sets of expanding rational maps*, Inst. Hautes Études Sci. Publ. Math. (1997), no. 85, 193–216.
M. Hussain and D. Simmons, *A general principle for [H]{}ausdorff measure*, Proc. Amer. Math. Soc., Volume 147, Number 9, 3897–3904.
M. Hussain and W. Wang, *Two-dimensional shrinking target problem in beta-dynamical systems*, Bull. Aust. Math. Soc., 97(2018), no. 1, 33–42.
M. Hussain and T. Yusupova, *On weighted inhomogeneous Diophantine approximation on planar curves.* Math. Proc. Cambridge Philos. Soc. 154 (2013), no. 2, 225–241.
M. Hussain and T. Yusupova, *A note on the weighted Khintchine-Groshev theorem.* J. Théor. Nombres Bordeaux 26 (2014), no. 2, 385–397.
W. Parry, *On the [$\beta $]{}-expansions of real numbers*, Acta Math. Acad. Sci. Hungar. 11 (1960), 401–416.
A. Rényi, *Representations for real numbers and their ergodic properties*, Acta Math. Acad. Sci. Hungar 8 (1957), 477–493.
S. Seuret and B.-W. Wang, *Quantitative recurrence properties in conformal iterated function systems*, Adv. Math. 280 (2015), 472–505.
L. Shen and B. Wang, *Shrinking target problems for beta-dynamical system*, Sci. China Math. 56 (2013), 91–104.
B. Tan and B. Wang, *Quantitative recurrence properties for beta-dynamical system*, Adv. Math. 228 (2011), no. 4, 2071–2097.
M. Urbański, *Diophantine analysis of conformal iterated function systems*, Monatsh. Math. 137 (2002), no. 4, 325–340.
P. Walters, *Equilibrium states for [$\beta $]{}-transformations and related transformations*, Math. Z. 159 (1978), no. 1, 65–88.
P. Walters, An Introduction to Ergodic Theory, Grad. Texts in Math., vol. 79, Springer-Verlag, New York/Berlin,1982.
B. Wang and J. Wu, *Hausdorff dimension of certain sets arising in continued fraction expansions*, Adv. Math. 218 (2008), no. 5, 1319–1339.
B. Wang, J. Wu, and J. Xu, *Mass transference principle for limsup sets generated by rectangles*, Math. Proc. Cambridge Philos. Soc. 158 (2015), no. 3, 419–437.
---
abstract: 'Extended BRS symmetry is used to prove gauge independence of the fermion renormalization constant $Z_2$ in on-shell QED renormalization schemes. A necessary condition for gauge independence of $Z_2$ in on-shell QCD renormalization schemes is formulated. Satisfying this necessary condition appears to be problematic at the three-loop level in QCD.'
author:
- |
S. Alavian and T.G. Steele[^1]\
[*Department of Physics and Engineering Physics* ]{}\
[*University of Saskatchewan*]{}\
[*Saskatoon, Saskatchewan S7N 5E2, Canada.*]{}
title: ' [Extended BRS Symmetry and Gauge Independence in On-Shell Renormalization Schemes]{} '
---
In on-shell schemes, the fermion mass renormalization $Z_m$ and wave function renormalization $Z_2$ have been observed to be gauge parameter independent in explicit two-loop QED and QCD calculations [@bgs]. Gauge parameter independence of $Z_2$ is phenomenologically significant because it implies that the difference between the (fermion) anomalous dimension of heavy quark effective theories [@iw] and QCD is gauge independent.
An extension of BRS symmetry, which allows variations of the gauge parameter to be included as part of the symmetry transformations [@ps], will be applied to the gauge parameter dependence of $Z_2$. This approach results in an extension of Slavnov-Taylor identities, allowing gauge dependence to be formulated algebraically. Previous application of these techniques resulted in a proof of the gauge independence of the mass renormalization $Z_m$ to all orders in on-shell QED and QCD renormalization schemes [@bls]. We will prove the gauge parameter independence of $Z_2$ in on-shell schemes for QED and formulate a necessary condition for gauge independence in QCD which appears problematic beyond the two-loop level. This complements earlier work on gauge independence of $Z_2$ in QED resulting in the (dimensionally-regularized) relation [@z2] $$\frac{\partial Z_2}{\partial \xi}\sim \int d^Dk\frac{1}{k^4}=0
\label{jz}$$ where $\xi$ is the gauge parameter and the massless tadpole is zero in dimensional regularization. Since the QED result (\[jz\]) cannot be extended to QCD, our extended BRS symmetry proof for QED provides a new approach to formulating questions of gauge independence of $Z_2$ in QCD.
The QED Lagrangian in the auxiliary field formalism [@nl] for covariant gauges is $${\cal L}=-\frac{1}{4}F^2+\bar \psi\left(i\dsl{D}-m\right)\psi +\frac{\xi}{2} B^2 +B\partial
\cdot A-\bar c\partial^2 c
\label{l_qed}$$ where $F$ is the field strength and $B$ is the auxiliary gauge field. This Lagrangian is invariant under the BRS symmetry $$\begin{aligned}
& & \delta A_\mu=\epsilon \partial_\mu c\quad ,\quad \delta \bar\psi=i\epsilon g c\bar
\psi\quad ,\quad \delta c=0\nonumber\\
& &\delta B=0\quad ,\quad \delta \psi=-i\epsilon g c\psi \quad ,\quad \delta \bar c=0
\label{qed_brs}\end{aligned}$$ where $\epsilon$ is a global grassmann quantity. The auxiliary field formalism guarantees nilpotence of the BRS transformations without invoking equations of motion.
An extension of BRS symmetry that includes gauge parameter variations introduces a new term in the Lagrangian $${\cal L}\rightarrow {\cal L}+\frac{\chi}{2}\bar c B
\label{ex_l_qed}$$ where $\chi$ is a global grassmann variable. Although $\chi$ will be set to zero after functional differentiation, it is still important to recognize that since $\chi$ is a global Grassmann quantity, it does not change the dynamics of any process with zero ghost number. The modified Lagrangian (\[ex\_l\_qed\]) is invariant under the following extended BRS symmetry [@ps] $$\begin{aligned}
& &\delta^+ A_\mu=\epsilon \partial_\mu c\quad ,\quad \delta^+ \bar\psi=i\epsilon g c\bar
\psi\quad ,\quad \delta^+ c=0\nonumber\\
& &\delta^+ B=0\quad ,\quad \delta^+ \psi=-i\epsilon g c\psi \quad ,\quad \delta^+ \bar c=B
\label{qed_x_brs}\\
& &\delta^+\xi=\epsilon\chi\quad ,\quad \delta^+\chi=0\end{aligned}$$
As for BRS symmetry, the extended BRS symmetry (\[qed\_x\_brs\]) implies the following relation for the effective action $\Gamma$. $$0=\partial_\mu c \frac{\delta \Gamma}{\delta A_\mu}
+ \frac{\delta\Gamma}{\delta \bar K} \frac{\delta \Gamma}{\delta \psi}
+ \frac{\delta\Gamma}{\delta K}
\frac{\delta \Gamma}{\delta \bar\psi}
+ B\frac{\delta \Gamma}{\delta \bar c}
+\chi\frac{\partial\Gamma}{\partial\xi}
\label{qed_gamma_x_brs}$$ where $K$ is a current coupled to the composite operator $\delta^+\bar \psi$ and $\bar K$ is coupled to $\delta^+ \psi$. Differentiating (\[qed\_gamma\_x\_brs\]) with respect to $\chi$, $\bar\psi(x)$, $\psi(y)$, setting $\chi=0$ and imposing ghost number conservation leads to the following identity for the proper fermion two-point function [@bls]. $$\frac{\partial}{\partial\xi}\frac{\delta^2\Gamma}{\delta\psi(y)\delta
\bar\psi(x)} =
+
\frac{\delta^3\Gamma}{\delta\psi(y)\delta\bar K\delta \chi}
\frac{\delta^2\Gamma}{\delta\bar\psi(x)\delta\psi}
+
\frac{\delta^2\Gamma}{\delta\psi(y)\delta\bar\psi}
\frac{\delta^3\Gamma}{\delta\bar\psi(x)\delta K\delta\chi}
\label{gamma_ident}$$ Transforming to momentum space and defining [^2] $$\begin{aligned}
& &\frac{\delta^2\Gamma}{\delta\chi\delta {\bar K}(w)\delta\psi (y)}
=\int \frac{d^4q}{(2\pi)^4}\frac{d^4\ell}{(2\pi)^4}\,e^{-iq\cdot(y-z)
-i\ell\cdot(w-z)}
F(q,\ell,-q-\ell)\label{F}\\
& &
\frac{\delta^2\Gamma}{\delta\chi\delta K(w)\delta{\bar\psi} (y)}
=\int \frac{d^4q}{(2\pi)^4}\frac{d^4\ell}{(2\pi)^4}\,e^{-iq\cdot(x-z)
-i\ell\cdot(w-z)}
{\bar F}(q,\ell,-q-\ell) \label{Fbar}\end{aligned}$$ results in the final form needed for studying the gauge dependence of the fermion propagator $S_F$ in QED [@bls]. $$\frac{\partial}{\partial\xi}S_F^{-1}(p)=S_F^{-1}(p)\left[ F(p,-p,0)+
\bar F(-p,p,0) \right]
\label{prop_ident}$$ Note that the Green functions $F(p, -p, 0)$ and $\bar F(p, -p, 0)$ cannot have single particle poles.
In on-shell renormalization schemes the bare mass $m_0$ and the renormalized mass $M$ are related through the condition $$\biggl. S_F^{-1}(p)\biggr|_{\dsl{p}=M}=0
\label{mass_shell}$$ This results in the definition of the mass renormalization constant. $$\frac{m_0}{M}=Z_m
\label{Z_m}$$ The wave function renormalization constant $Z_2$ is the residue of $S_F$ at the $\dsl{p}=M$ pole. $$Z_2=\lim_{\dsl{p}=M} \left(\dsl{p} -M\right)S_F(p)
\label{Z_2}$$ Perturbative expansions of $Z_m$ and $Z_2$ have been calculated to two-loop order in a scheme which dimensionally regulates both the infrared and ultraviolet divergences, resulting in explicitly gauge independent expressions for QED and QCD [@bgs].
The mass renormalization $Z_m$, and hence $M$, has been proven to be gauge independent to all orders of perturbation theory [@bls; @kron]. Thus when both sides of (\[prop\_ident\]) are divided by $\dsl{p}-M$ the quantity $\dsl{p}-M$ commutes with the $\xi$ derivative. $$\frac{\partial}{\partial\xi}\left(\frac{S_F^{-1}(p)}{\dsl{p}-M}\right)
=\frac{S_F^{-1}(p)}{\dsl{p}-M}
\left[ F(p,-p,0)+\bar F(-p,p,0) \right]
\label{Z_2F1}$$ Using the property that $$S_F^{-1}(p)=\frac{\dsl{p}-M}{Z_2}+{\cal O}
\left[\,\left(\dsl{p}-M\right)^2\,\right]
\label{S_F_property}$$ along with the gauge independence of $M$ leads to the following result when (\[Z\_2F1\]) is evaluated on-shell. $$\frac{\partial}{\partial\xi}\left(\frac{1}{Z_2}\right)=\frac{1}{Z_2}
\lim_{\dsl{p}=M}
\left[
F(p,-p,0)+\bar F(-p,p,0)\right]
\label{Z_2_rsult}$$ This is our central result for QED: the gauge dependence of the wave function renormalization constant is related to the on-shell properties of the Green function $F(p,-p,0)+\bar F(p,-p,0)$. In particular, if this Green function is zero on-shell, then $Z_2$ is gauge independent.
Before studying the on-shell behaviour of $F(p,-p,0)+\bar F(p,-p,0)$ we review some aspects of the auxiliary field formalism. Since the $B$ field and $\partial\cdot A$ are mixed in the Lagrangian (\[l\_qed\]) the quadratic part of the Lagrangian must be diagonalized, leading to the free field propagators $$\begin{aligned}
& &
\int d^4x\,e^{ip\cdot x}\langle O\vert T\left(B(x) B(0)\right)\vert O\rangle
=0 \label{bb_prop}\\
& &
\int d^4x\,e^{ip\cdot x}\langle O\vert T\left(B(x) A_\mu(0)\right)\vert O
\rangle = \frac{p_\mu}{p^2}\equiv G_\mu(p) \label{ba_prop}\\
& &
\int d^4x\,e^{ip\cdot x}\langle O\vert T\left(A_\mu(x) A_\nu(0)\right)
\vert O
\rangle =
i\left[-\frac{g^{\mu\nu}}{p^2} +(1-\xi)\frac{p^\mu p^\nu}{p^4}\right]
\equiv D^{\mu\nu}(p)
\label{aa_prop}\end{aligned}$$ BRS symmetry implies that (\[bb\_prop\]) and (\[ba\_prop\]) are valid to all orders in perturbation theory [@bls].
As illustrated in Figure \[f\_fig\], the (QED) Green function $F(p,-p,0)$ is easily written in terms of one-particle irreducible Green functions $$F(p, -p, 0)=\int d^Dk \,\Gamma_\mu(k, p) G_\mu(k) S_F(p+k) \tilde D(k^2)
\label{on-shell_F_1}$$ where $\tilde D(k^2)$ is the ghost propagator (which for QED corresponds to the free field result) and the fermion-photon vertex function $\Gamma_\mu$ is defined by $$S_F(p)\Gamma_\nu (p, k) S_F(p+k) D^{\mu \nu}(k)
=\int d^Dx \int d^Dy \,\,e^{i k\cdot x+i p\cdot y}\langle O\vert T\left[
\psi(0) A_\mu(x) \bar \psi (y)
\right]\vert O\rangle
\label{vertex}$$ Substituting (\[ba\_prop\]) and the (free-field) ghost propagator into (\[on-shell\_F\_1\]) and using the Ward identity for the vertex function $$k^\mu \Gamma_\mu(p,k)=S_F^{-1}(p+k)-S_F^{-1}(p)
\label{ward}$$ simplifies the expression for $F(p, -p, 0)$. $$F(p,-p,0)=iS_F^{-1}(p) \int d^Dk \frac{1}{k^4} S_F(p+k)-i\int d^Dk\frac{1}{k^4}
\label{on-shell_F_2}$$ The second term in the above equation is a massless tadpole which is zero in dimensional regularization, leading to the final expression for $F(p, -p, 0)$ in QED. $$F(p,-p,0)=iS_F^{-1}(p) \int d^Dk \frac{1}{k^4} S_F(p+k)
\label{on-shell_F_3}$$ In the on-shell scheme [@bgs] infrared and ultraviolet divergences are dimensionally regulated, so the integral in (\[on-shell\_F\_3\]) is finite on-shell. Thus the $S_F^{-1}(p)$ prefactor in (\[on-shell\_F\_3\]) implies that $F(p, -p, 0)$ is zero at the $\dsl{p}=M$ mass-shell. This argument can be trivially extended to $\bar F(p, -p, 0)$, and we conclude that to all orders in QED $$\biggl. F(p, -p, 0)+\bar F(p, -p, 0) \biggr|_{\dsl{p}=M}=0$$ and hence from the result (\[Z\_2\_rsult\]) we have proven the gauge independence of the QED renormalization constant $Z_2$ in mass-shell schemes.
![Feynman diagram expressing $F(p,-p,0)$ in terms of one-particle irreducible functions represented by the solid circles. Dashed lines represent the ghost field, and the dotted line represents the auxiliary field $B$. Composite operators coupled to the currents are represented by the partially-filled circles.[]{data-label="f_fig"}](z2_f1.eps)
An explicit illustration of the on-shell behaviour of $F(p, -p,0)+\bar F(p, -p, 0)$ in the regularization scheme [@bgs] to one-loop order requires evaluation of the diagram in Figure \[f\_1l\_fig\]. In terms of the integrals (with the convention $D=4+2\epsilon$) $$\begin{aligned}
& &\int \frac{d^Dk}{(2\pi)^D}
\frac{1}{\left[(k-p)^2-m_0^2\right]^\alpha \,k^{2\beta}}
=I\left[\alpha, \beta\right]
\label{I_alpha_beta}\\
& &\int \frac{d^Dk}{(2\pi)^D}
\frac{k^\mu}{\left[(k-p)^2-m_0^2\right]^\alpha \,k^{2\beta}}=p^\mu
J\left[\alpha , \beta\right]
\label{J_alpha_beta}\end{aligned}$$ we find the one-loop expression for $F(p,-p,0)+\bar F(p,-p,0)$. $$F(p, -p, 0)+\bar F(p, -p, 0)=2i g^2\left[ m_0 \left(\dsl{p}-m_0\right)
J(1,2)+\left(p^2+m_0^2\right) J(1,2) -I(1,1)\right]
\label{F+bar_F}$$ and hence the on-shell behavior of $F+\bar F$ to one-loop order is given by $$\lim_{\dsl{p}=M} \left[F(p,-p,0)+\bar F(p,-p,0)\right]=2ig^2
\lim_{\dsl{p}=M=m_0} \left[
2m_0^2 J(1,2)-I(1,1)\right]
\label{on_shell_F}$$ The desired on-shell values for the integrals in (\[on\_shell\_F\]) can be reduced to evaluation of a single class of scalar integrals. $$\Lambda\left[\alpha, \beta\right] =
\int \frac{d^Dk}{(2\pi)^D}
\frac{1}{\left[k^2+2 p\cdot k\right]^\alpha \,k^{2\beta}}$$ a particular example being a relation between $J(\alpha, \beta)$ and $\Lambda(\alpha, \beta)$ $$\lim_{\dsl{p}=m_0}J(\alpha, \beta)=\frac{1}{2 m_0^2}\left[
\Lambda(\alpha, \beta-1) -\Lambda(\alpha-1, \beta)\right]$$ The integration by parts technique [@ct] for these on-shell integrals leads to recursion relations among the $\Lambda(\alpha, \beta)$. The identities $$\begin{aligned}
& &0=\int d^Dk \frac{\partial}{\partial k^\mu}\left(
\frac{p^\mu}{\left[k^2+2 p\cdot k\right]^\alpha \,k^{2\beta}}
\right)\label{int_by_parts_1}\\
& &0=\int d^Dk \frac{\partial}{\partial k^\mu}\left(
\frac{k^\mu}{\left[k^2+2 p\cdot k\right]^\alpha \,k^{2\beta}}
\right)\label{int_by_parts_2}\end{aligned}$$ lead to the recursion relations $$\begin{aligned}
& &0= -\beta \Lambda(\alpha-1, \beta+1)+(\beta-\alpha)\Lambda(\alpha, \beta)-2\alpha m_0^2
\Lambda(\alpha+1, \beta)+\alpha \Lambda(\alpha+1, \beta-1)
\label{rec_1}\\
& & 0=(D-2\beta-\alpha)\Lambda(\alpha, \beta)-\alpha\Lambda(\alpha+1, \beta-1 )
\label{rec_2}\end{aligned}$$ The recursion relation (\[rec\_2\]) can also be obtained from dimensional analysis. These recursion relations allow the on-shell behaviour of the one-loop integrals, after setting massless tadpoles to zero, to be reduced to the fundamental dimensional regularization result $$\Lambda(\alpha, 0)=\int\frac{d^D k}{(2\pi)^D} \frac{1}{\left[ k^2-m_0^2\right]^\alpha}
=\frac{i}{(4\pi)^{D/2}}\left(-m_0^2\right)^{2-\alpha} m_0^{2\epsilon}
\frac{\Gamma(\alpha-2-\epsilon)}{\Gamma(\alpha)}
\label{fund_dim_reg}$$ Using the above techniques it is simple to find the on-shell integrals required in (\[on\_shell\_F\]). $$\begin{aligned}
& & J(1,2)=\frac{i}{(4\pi)^{D/2}}m_0^{2\epsilon} \frac{\Gamma(-\epsilon)}{2 m_0^2(D-3)}
\label{J(1,2)}\\
& & I(1,1)=\frac{i}{(4\pi)^{D/2}}m_0^{2\epsilon} \frac{\Gamma(-\epsilon)}{(D-3)}\end{aligned}$$ and hence in the on-shell regularization scheme [@bgs], the Green function $F+\bar F$ is zero on-shell to one-loop order, providing a specific example of our general result.
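For clarity, the cancellation responsible for this can be displayed explicitly using the values just quoted: $$2m_0^2\,J(1,2)-I(1,1)=\frac{i}{(4\pi)^{D/2}}m_0^{2\epsilon}\,\frac{\Gamma(-\epsilon)}{D-3}-\frac{i}{(4\pi)^{D/2}}m_0^{2\epsilon}\,\frac{\Gamma(-\epsilon)}{D-3}=0.$$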
![Feynman diagram for one-loop contributions to $F(p,-p,0)$. Dashed lines represent the ghost field, and the dotted line represents the auxiliary field $B$. Composite operators coupled to the currents are represented by the partially-filled circles.[]{data-label="f_1l_fig"}](z2_f2.eps)
The gauge dependence of $Z_2$ in QCD can be formulated in a similar fashion. Analogous to (\[ex\_l\_qed\]) the Lagrangian for QCD becomes $${\cal L}=-\frac{1}{4}F^2+\bar \psi\left(i\dsl{D}-m\right)\psi +\frac{\xi}{2} B^2 +B\partial
\cdot A-\bar c\partial^\mu D_\mu c +\frac{\chi}{2}\bar c B
\label{ex_l_qcd}$$ which is invariant under an extended BRS symmetry $$\begin{aligned}
& &\delta^+ A_\mu=\epsilon D_\mu c\quad ,\quad \delta^+ \bar\psi=i\epsilon g c\bar
\psi\quad ,\quad \delta^+ c=-\frac{1}{2} \epsilon g\left[ c,c\right]\nonumber\\
& &\delta^+ B=0\quad ,\quad \delta^+ \psi=-i\epsilon g c\psi \quad ,\quad \delta^+ \bar c=B
\label{qcd_x_brs}\\
& &\delta^+\xi=\epsilon\chi\quad ,\quad \delta^+\chi=0\end{aligned}$$ The extended BRS symmetry (\[qcd\_x\_brs\]) implies the following identity for the effective action nearly identical in form to the QED identity (\[qed\_gamma\_x\_brs\]) $$0=\frac{\delta\Gamma}{\delta K_\mu}\frac{\delta \Gamma}{\delta A_\mu}
+ \frac{\delta\Gamma}{\delta \bar K} \frac{\delta \Gamma}{\delta \psi}
+ \frac{\delta\Gamma}{\delta K}
\frac{\delta \Gamma}{\delta \bar\psi}
+ B\frac{\delta \Gamma}{\delta \bar c}
+\frac{\delta\Gamma}{\delta \bar K_c}\frac{\delta \Gamma}{\delta c}
+\chi\frac{\partial\Gamma}{\partial\xi}
\label{qcd_gamma_x_brs}$$ where $K_\mu$ and $\bar K_c$ are currents coupled to the composite operators given by the extended BRS variations of $A^\mu$ and $c$, respectively. Following the procedure used to develop (\[gamma\_ident\]) leads to a QCD expression in a similar form. $$\frac{\partial}{\partial\xi}\frac{\delta^2\Gamma}{\delta\psi(y)\delta
\bar\psi(x)} =
+
\frac{\delta^3\Gamma}{\delta\psi(y)\delta\bar K\delta \chi}
\frac{\delta^2\Gamma}{\delta\bar\psi(x)\delta\psi}
+
\frac{\delta^2\Gamma}{\delta\psi(y)\delta\bar\psi}
\frac{\delta^3\Gamma}{\delta\bar\psi(x)\delta K\delta\chi}
\label{qcd_gamma_ident}$$ After transforming to momentum space we find a result identical in form to (\[prop\_ident\]). $$\frac{\partial}{\partial\xi}S_F^{-1}(p)=S_F^{-1}(p)\left[ F(p,-p,0)+
\bar F(-p,p,0) \right]
\label{qcd_prop_ident}$$ As in the QED case, we see that the necessary condition for gauge independence of $Z_2$ in QCD is for the Green function $F+\bar F$ to be zero on shell. The distinction between QED and QCD occurs in the interactions, particularly the ghost-gluon interaction, which will contribute to $F(p,-p,0)$. This is particularly evident at three loop level where diagrams (such as those in Figure \[non\_ab\_fig\]) occur that cannot be related to the fundamental two- or three-point Green functions. Thus at three-loop level there is no simple extension of the result (\[on-shell\_F\_1\]) from QED to QCD, and hence gauge independence of $Z_2$ in on-shell schemes seems problematic at the three-loop level and beyond in QCD.
![A three-loop QCD diagram contributing to $F(p,-p,0)$ which cannot be reduced to the form (\[on-shell\_F\_1\]) composed of fundamental one-particle irreducible Green functions. []{data-label="non_ab_fig"}](z2_f3.eps)
[**Acknowledgements:**]{} TGS is grateful for the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC). TGS thanks Martin Lavelle and Emilio Bagan for discussions at early stages of this work.
[99]{} D.J. Broadhurst, N. Gray, K. Schilcher: Z. Phys. [**C52**]{} (1991) 111.
N. Isgur, M. Wise: Phys. Lett. [**B232**]{} (1989) 113.
O. Piguet, K. Sibold: Nucl. Phys. [**B253**]{} (1985) 517.
J.C. Breckenridge, M.J. Lavelle, T.G. Steele: Z. Phys. [**C65**]{} (1995) 155.
K. Johnson, B. Zumino: Phys. Rev. Lett. [**3**]{} (1959) 351; T. Fukuda, R. Kubo, K. Yokoyama: Prog. Theor. Phys. [**63**]{} (1980) 1384.
N. Nakanishi: Prog. Theor. Phys. [**35**]{} (1966) 1111; B. Lautrup: Mat. Fys. Medd. Dan. Vid. Selsk. [**35**]{} (1967) 29.
A.S. Kronfeld, Phys. Rev. [**D58**]{} (1998) 051501
K.G. Chetyrkin, F.V. Tkachov: Nucl. Phys. [**B192**]{} (1981) 159.
[^1]: email: Tom.Steele@usask.ca
[^2]: An implicit coordinate integration is associated with the $\chi$ derivative.
---
abstract: |
This paper describes SEPIA, a tool for automated proof generation in Coq. SEPIA combines model inference with interactive theorem proving. Existing proof corpora are modelled using state-based models inferred from tactic sequences. These can then be traversed automatically to identify proofs. The SEPIA system is described and its performance evaluated on three Coq datasets. Our results show that SEPIA provides a useful complement to existing automated tactics in Coq.
Interactive Theorem Proving; Model Inference; Proof Automation
author:
- 'Thomas Gransden, Neil Walkinshaw and Rajeev Raman'
bibliography:
- 'cade-bib.bib'
title: 'SEPIA: Search for Proofs Using Inferred Automata[^1]'
---
Introduction
============
Interactive theorem provers (ITPs) such as Coq [@Coq:manual] and Isabelle [@Isabelle02] are systems that enable the manual development of proofs for a variety of domains. These range from mathematics through to complex software and hardware verification. Thanks to the expressive logics that are used, they provide a very rich programming environment.
Nevertheless, constructing proofs can be a challenging and time-consuming process. A proof development will typically contain many routine lemmas, as well as more complex ones. The ITP system will take care of the bookkeeping and perform simple reasoning steps; however much time is spent manually entering the requisite tactics (even for the most trivial lemmas). In 2008, Wiedijk stated that it takes up to one week to formalize a page of an undergraduate mathematics textbook [@Freek08].
To help combat this problem, we present SEPIA (Search for Proofs Using Inferred Automata) – an automated approach designed to assist users of Coq. SEPIA automatically generates proofs by inferring state-based models from previously compiled libraries of successful proofs, and using the inferred models as a basis for automated proof search.
Background {#sec:background}
==========
This section presents the necessary background required for this paper. We briefly introduce the underlying model inference technique (called MINT), followed by a motivating example.
Inferring EFSMs with MINT {#sub:inferringEfsms}
-------------------------
MINT [@WalkinshawWCRE] is a technique designed to infer state machine models from sequences, where the sequencing of events may depend on some underlying data state. Such systems are modelled as extended finite state machines (see Definition \[def:efsm\]). EFSMs can be conceptually thought of as conventional finite state machines with an added memory. The transitions in an EFSM not only contain a label, but may also contain guards that must hold with respect to variables contained in the memory.
***Extended Finite State Machine*** \[def:efsm\] An Extended Finite State Machine (EFSM) $M$ is a tuple $(S,s_0,F,L,V,\Delta,T)$. $S$ is a set of states, $s_0 \in S$ is the initial state, and $F \subseteq S$ is the set of final states. $L$ is defined as the set of labels. $V$ represents the set of data states, where a single instance $v$ represents a set of concrete variable assignments. $\Delta:V \rightarrow \{True,False\}$ is the set of *data guards*. Transitions $t \in T$ take the form $(a,l,\delta,b)$, where $a,b \in S$, $l \in L$, and $\delta \in \Delta$.
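Purely as an illustration (the class and field names below are ours, not MINT's), the ingredients of Definition \[def:efsm\] translate into a few lines of Python:

    from dataclasses import dataclass, field
    from typing import Callable, List, Set

    # A data guard: maps the parameter string of an event to True/False
    # (the role played by delta in Definition 1).
    Guard = Callable[[str], bool]

    @dataclass
    class Transition:
        source: str   # a in S
        label: str    # l in L, e.g. a tactic name such as "intros"
        guard: Guard  # delta in the set of data guards
        target: str   # b in S

    @dataclass
    class EFSM:
        states: Set[str]
        initial: str                 # s_0
        final: Set[str]              # F
        transitions: List[Transition] = field(default_factory=list)

        def step(self, state: str, label: str, params: str) -> List[str]:
            """All states reachable from `state` by the event (label, params)."""
            return [t.target for t in self.transitions
                    if t.source == state and t.label == label and t.guard(params)]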
MINT infers EFSMs from sets of *traces*. These can be defined formally as follows:
\[def:traces\] A *trace* $T=\langle e_0,\ldots,e_n\rangle$ is a sequence of $n$ trace elements. Each element $e$ is a tuple $(l,v)$, where $l$ is a label representing the names of function calls or input / output events, and $v$ is a string containing the parameters (this may be empty).
The inference approach adopted by MINT [@WalkinshawWCRE] is an extension of a traditional state-merging approach [@Lang1998] that has been proven to be successful for conventional (non-extended) finite state machines [@WalkinshawStamina]. Briefly, the model inference starts by arranging the traces into a *prefix-tree*, a tree-shaped state machine that exactly represents the set of given traces. The inference then proceeds by a process of *state-merging*; pairs of states in the tree that are roughly deemed to be equivalent (based on their outgoing sequences) are merged. This merging process yields an EFSM that can accept a broader range of sequences than the initial given set.
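The first stage of this process — arranging the traces into a prefix tree — is easy to sketch; the fragment below is illustrative only and omits the merging heuristics and guard inference themselves:

    def build_prefix_tree(traces):
        """Tree-shaped machine accepting exactly the given traces.

        Each trace is a list of (label, params) events.  States are integers,
        0 is the initial state; `params_seen` keeps the parameter strings per
        transition as training data for the inferred data guards.
        """
        transitions = {}   # (state, label) -> next state
        params_seen = {}   # (state, label) -> list of parameter strings
        finals, next_state = set(), 1
        for trace in traces:
            state = 0
            for label, params in trace:
                key = (state, label)
                if key not in transitions:
                    transitions[key] = next_state
                    next_state += 1
                params_seen.setdefault(key, []).append(params)
                state = transitions[key]
            finals.add(state)
        return transitions, params_seen, finals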
The transitions in an EFSM not only imply the sequence in which events can occur, but also place constraints on which parameters are valid. This is done by inferring data-classifiers from the training data – each data guard takes the following form $(l,v,possible)$ where $l \in L$, $v \in V$ and $possible \in$ {$true, false$}. When states are merged, the resulting machine is checked to make sure it remains consistent with the data guards.
Motivating Example
------------------
To motivate this work, we consider a typical scenario that arises during interactive proof. Suppose that we are trying to prove the following conjecture: `forall n m p:nat, p + n <= p + m -> n <= m`. The automated Coq tactics [@BC04] have only been able to perform routine reasoning (namely calling the `intros` tactic) to advance the proof to the following:
    n : nat
    m : nat
    p : nat
    H : p + n <= p + m
    ============================
    n <= m
There are 2 theories from the Coq Standard Library called `Le.v` and `Lt.v`, that contain proofs about similar properties. The built-in tactics fail to prove the goal. The question we are faced with is this: Given the examples of successful proofs, can we use these to automatically find a proof for the above conjecture?
In previous work [@GransdenCICM] we showed how to use MINT to infer EFSM models of Coq proofs. The resulting EFSMs were simply presented and used manually to derive proofs. This work extends our previous approach by automating the search process, allowing proofs to be completed automatically.
SEPIA System Description {#sec:implementation}
========================
In this section we describe the SEPIA approach. We present the key stages of the technique. It is available[^2] as a ProofGeneral extension that works with Coq. An overview of SEPIA is shown in Figure \[fig:system\]. It contains three main stages:
1. [Generate proof traces from a selection of existing Coq theories.]{}
2. [Use MINT to infer a model from these proof traces.]{}
3. [Systematically search the model, formulating and attempting possible proofs from paths through the model.]{}
![SEPIA overview[]{data-label="fig:system"}](System)
Before describing these three steps in more detail, we look at three properties of the approach that are particularly appealing:
#### Adaptivity
For every iteration, as more valid proofs are discovered they can be incorporated into future cycles to infer more accurate models, forming a ‘virtuous loop’. This is a major benefit over the existing built-in automated tactics, which are typically limited to attempting a fixed set of tactics.
#### Automation
Aside from providing the initial set of theories from which to infer a model, the user is not prompted for any other input. In addition, as will be elaborated later, the overall process typically completes in less than a minute (at least in the context of our experiments).
#### Ability to identify new proofs
The state-merging process [@WalkinshawWCRE] can result in models that accept sequences of tactics which aren’t present in the initial set of proofs. These wouldn’t necessarily be intuitive, or be spotted from manual scrutiny of the proof library. These can however contain valuable steps that lead to a successful proof.
Generating traces from existing proofs
--------------------------------------
To begin a proof attempt we must provide one or more Coq theories from which we wish to generate a model. The proofs within the theories must be converted into their corresponding *proof traces* (see Definition \[def:traces\]). This step is identical to the process used in our previous work [@GransdenCICM].
Figure \[fig:traces\] shows the proof script from the lemma `le_antisym` from `Le.v` and the corresponding proof trace. An important concept in Coq proofs is the semicolon operator. If two (or more) tactics are separated by a semicolon, for example `t1;t2`, this means apply `t1` to the current goal and then apply `t2` to *all* generated subgoals. We record the usage of the semicolon in our traces, so that this information can be reused during proof search.
(a) Proof Script:

    intros n m H;
    destruct H as [|m' H];
    auto with arith.
    intros H1.
    absurd (S m' <= m');
    auto with arith.
    apply le_trans with n;
    auto with arith.

(b) Trace:

  Event $e$   Label $l$   Params $v$
  ----------- ----------- -----------------------
  $e_0$       intros      “n m H;”
  $e_1$       destruct    “H as \[$|$m’ H\];”
  $e_2$       auto        “with arith”
  $e_3$       intros      “H1”
  $e_4$       absurd      “(S m’ $<=$ m’);”
  $e_5$       auto        “with arith”
  $e_6$       apply       “le\_trans with n;”
  $e_7$       auto        “with arith”
  ----------- ----------- -----------------------
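A very rough sketch of this conversion — splitting a script into events while keeping the semicolons attached to the parameters — is given below. It is our own simplification for illustration: the real extraction works through ProofGeneral and a proper parser, and this string splitting would already fail on qualified names containing dots.

    def script_to_trace(script):
        """Very rough conversion of a Coq proof script into a trace.

        Splits on '.' into sentences and on ';' into tactic applications,
        keeping the trailing ';' with the parameters so that compound
        tactics can be replayed later.
        """
        trace = []
        for sentence in (s.strip() for s in script.split(".")):
            if not sentence:
                continue
            parts = [p.strip() for p in sentence.split(";")]
            for i, part in enumerate(parts):
                if not part:
                    continue
                label, _, params = part.partition(" ")
                if i < len(parts) - 1:          # keep the ';' as in the trace above
                    params += ";"
                trace.append((label, params.strip()))
        return trace

    script = ("intros n m H; destruct H as [|m' H]; auto with arith. "
              "intros H1. absurd (S m' <= m'); auto with arith. "
              "apply le_trans with n; auto with arith.")
    for i, event in enumerate(script_to_trace(script)):
        print(f"e_{i}", event)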
Inferring the model
-------------------
Once the proof traces have been generated, MINT is invoked to infer a model. There are two main parameters associated with MINT. The inference strategy dictates how states are merged during the inference process. A value called $k$ represents the minimum score before a pair of states can be deemed to be equivalent. An in-depth discussion of these variables is outside the scope of this paper.
A preliminary study (with results online) found that using the state merging strategy `redblue` and $k=1$ performed reasonably well for the task of interactive proving. These settings are based on the number of proofs discovered, the time taken and the presence of shorter/novel proofs. For the rest of this paper we refer to these as the default settings for MINT. A portion of the EFSM inferred from [Le.v]{} and [Lt.v]{} is shown in Figure \[fig:efsmexample\].
![Portion of inferred EFSM from [Le.v]{} and [Lt.v]{}[]{data-label="fig:efsmexample"}](model-initial)
Searching for a proof
---------------------
Once a model has been inferred it can be used to search for candidate proofs. We adopt a breadth-first search as this ensures that if a proof is contained in the model, the shortest one will be returned. An instance of Coq is loaded, and the lemma is stated. The proof search moves through the model and applies the tactics and arguments suggested on each transition.
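Schematically, with `coq_attempt` standing in as a placeholder for the call that replays a candidate sequence in the running Coq instance (it is not part of SEPIA's actual interface), the search loop looks as follows:

    from collections import deque

    def bfs_proof_search(transitions, initial, coq_attempt, max_tactics=10_000):
        """Breadth-first search for a tactic sequence accepted by the model.

        `transitions` is a list of (source, label, params, target) tuples taken
        from the inferred EFSM.  `coq_attempt(seq)` is a placeholder for the
        call that replays the sequence `seq` in the running Coq instance and
        returns True when the goal is closed.  Breadth-first order guarantees
        that the first proof returned is a shortest one contained in the model.
        """
        queue = deque([(initial, [])])
        tried = 0
        while queue and tried < max_tactics:
            state, seq = queue.popleft()
            for source, label, params, target in transitions:
                if source != state:
                    continue
                candidate = seq + [f"{label} {params}".strip()]
                tried += 1
                if coq_attempt(candidate):
                    return candidate, tried          # proof found
                queue.append((target, candidate))
        return None, tried                           # failure / limit reached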
A timeout or a limit on the number of tactics applied can be provided to control the search. If we reach a point where a proof is found, SEPIA outputs the proof (and some proof search statistics). When running SEPIA on our motivating example we obtain the following result:
    Proof was: intros m n diff. elim diff; auto with arith.
    5611 tactics evaluated.
    Inference and search took 0 min, 1 sec
The above proof is particularly interesting for two reasons. Firstly, we have managed to prove something completely automatically that Coq’s automated tools could not. Secondly, the sequence of tactics (and parameters) was not found anywhere else within `Le.v` or `Lt.v`.
Evaluation {#sec:eval}
==========
In this section we provide an experimental evaluation of our approach. We consider the following research questions:
- [[**RQ1:** ]{}Can proofs be derived automatically using our approach?]{}
- (a): How many proofs can be found?
- (b): How long does it take to find a proof?
- [[**RQ2:** ]{}Are there “interesting" characteristics of the proofs?]{}
- (a): Do the proofs contain new sequences of tactics?
- (b): Are the proofs shorter?
- [[**RQ3:** ]{}How does our results compare to Coq’s built-in automated tactics?]{}
Methodology
-----------
The aim of this evaluation is to assess the practicalities of using our approach in real proof developments. We evaluate SEPIA on three distinct Coq contributions as our datasets. We use a method inspired by $k$-folds cross-validation [@KohaviIJCAI] in order to study proof attempts made by our approach.
### Datasets
The datasets used in this evaluation consist of theories selected from three Coq proof developments. The datasets were chosen mainly for their domain, complexity and size. All theories were selected before the experiments took place. SSreflect[^3] contains seven core theories. We select all of these theories as our first dataset. Secondly, MSets[^4] is an implementation of finite sets using lists/trees. All eleven theories are selected to form our second dataset. Finally, we use some theories from CompCert[^5]. Owing to the size of the development, we select four theories containing both general-purpose proofs and some more specialized ones. Due to the exploratory nature of this evaluation, there are some threats to validity associated with the selection of data. We have only used three Coq datasets, so the results cannot be taken to represent performance on all Coq proofs.
### Evaluating Proof Attempts
To provide some answers to RQ1, we want to model the following situation: given some existing proofs, can we use these to prove new properties that are not part of the initial collection. To do this, we use an approach inspired by *k*-folds cross-validation [@KohaviIJCAI].
Each Coq theory file is taken individually and the proofs are randomly partitioned into $k$ non-overlapping sets. We then infer a model from $k-1$ of the sets, and try and prove the lemmas in the remaining set. This process repeats until each set has been used exactly once as the collection of lemmas to be proved.
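In code, the splitting scheme amounts to something like the following sketch (illustrative only; the fold assignment in our experiments is random, as described above):

    import random

    def kfold_rounds(proofs, k=10, seed=0):
        """Yield (training, held_out) pairs; each proof is held out exactly once."""
        shuffled = proofs[:]
        random.Random(seed).shuffle(shuffled)
        folds = [shuffled[i::k] for i in range(k)]
        for i in range(k):
            training = [p for j, fold in enumerate(folds) if j != i for p in fold]
            yield training, folds[i]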
For each proof attempt, we allow 10,000 tactics to be applied before reporting a failure. The results presented in this paper are from using $k=10$, a standard value for $k$-folds cross-validation [@KohaviIJCAI]. Other values of $k$ have been investigated and the full set of results are online.
As well as capturing whether a proof attempt was successful or not, when a proof is found we analyse how “interesting" the proof is. First, we check and see whether a proof is shorter than the corresponding hand-curated proof. We also check whether the sequence of tactics was new (i.e. not present in the examples the model was inferred from). These provide us with answers to RQ2.
To investigate RQ3 we also run the Coq automated tools to try and prove each lemma. The following command is issued to Coq: `auto with * || eauto with * || tauto || firstorder || trivial`. This simply attempts to prove a goal by trying all of the automated tactics. The default search depth is used in all cases. Where we can specify lemma databases, we allow any available database to be used during proof search.
Results
-------
The full results from our experiments are shown in Table \[tab:res\]. The results are presented for each theory, grouped by library. The remainder of this section provides some answers to the research questions defined earlier.
  **Library**   **Theory**     **Size**   **SEPIA**   **New**   **Shorter**   **Coq-Tacs**
  ------------- -------------- ---------- ----------- --------- ------------- --------------
  SSreflect     ssrnat         341        135 (39%)   14        9             59 (17%)
                ssrbool        240        120 (50%)   17        10            60 (25%)
                seq            394        94 (24%)    14        6             18 (4%)
                fintype        243        42 (17%)    15        1             0 (0%)
                eqtype         82         36 (44%)    18        2             10 (12%)
                choice         30         6 (20%)     0         0             1 (3%)
                ssrfun         30         5 (16%)     1         0             7 (23%)
  MSets         avl            26         0 (0%)      0         0             0 (0%)
                decide         22         18 (81%)    0         3             4 (18%)
                eqproperties   106        43 (40%)    1         5             47 (44%)
                facts          65         17 (26%)    4         8             10 (15%)
                gentree        61         9 (15%)     3         3             3 (5%)
                list           42         8 (19%)     3         3             3 (7%)
                positive       67         13 (19%)    5         4             1 (1%)
                properties     137        78 (57%)    9         3             15 (11%)
                rbt            89         12 (13%)    10        6             2 (2%)
                tofiniteset    14         5 (35%)     2         2             4 (28%)
                weaklist       27         8 (30%)     4         5             6 (22%)
  CompCert      cshmgenproof   65         15 (23%)    14        14            0 (0%)
                amsgenproof0   57         12 (21%)    9         9             6 (10%)
                coqlib         114        36 (31%)    24        23            16 (14%)
                values         99         20 (20%)    17        13            5 (5%)
  ------------- -------------- ---------- ----------- --------- ------------- --------------
: Results Summary[]{data-label="tab:res"}
### RQ1(a): A significant proportion of the lemmas were proved automatically using our approach
In Table \[tab:res\], the column headed SEPIA shows the total number of lemmas proved in each theory using our approach. The results suggest that EFSM-based methods are useful at finding proofs automatically. Looking at each dataset as a whole, 32% (438 out of 1360) of the SSreflect dataset were proved. In MSets, 30% (211 out of 687) were successfully proved using our approach. In our selection of CompCert theories, there were 25% (83 out of 335) proved.
### RQ1(b): Many proofs were discovered in under 30 seconds
We measured the time required to derive a proof using our approach. These times take into account both the time required to infer the model and the search time. Over 90% of the proofs were found within 30 seconds. These results show that when a user invokes the process, a proof will usually be delivered quickly. Overall, a proof can be discovered in a relatively small period of time. Of course, this is encouraging for the user involved in the proof development.
### RQ2(a): A quarter of the proofs found were new sequences of tactics
The number of new proofs discovered using our approach is listed under the ‘New’ column in Table \[tab:res\]. We compare the discovered proof with the ones used to infer the model. If the sequence is not contained in an existing proof, then it is considered new and only found as a result of inferring an EFSM. Our results show that a significant number of new proofs were discovered, further supporting the claim that EFSMs are useful for automated proof generation. In SSreflect, a total of 79 proofs were new. In the MSets theories, 41 new proofs were found, and 64 were discovered in CompCert.
### RQ2(b): Many proofs discovered were shorter than their original ones
We have listed the number of shorter proofs found in Table \[tab:res\] under the Shorter column. When a proof is found, we compare the discovered proof with the original hand-curated one. The length (in terms of tactics used) of both proofs are then compared, to see if we managed to derive a shorter one. In SSreflect, 28 of the proofs found were shorter than their original counterparts. For MSets, 42 of the proofs were shorter, whilst in CompCert 59 of proofs were shorter. The combination of the state merging algorithms and a breadth-first search means we were able to identify these shorter proofs.
### RQ3: SEPIA provides an alternative to existing Coq tactics
The column headed Coq-Tacs in Table \[tab:res\] provides the number of lemmas that were proved using Coq’s automated tactics. Despite being relatively limited in the steps that they try, they manage to prove 155 SSreflect lemmas, 95 MSets lemmas and 27 of the CompCert lemmas. On the whole, we see that our approach significantly outperforms the automated tactics in terms of number of lemmas proved. This is to be expected, as they only provide modest automation. Nevertheless, there are occasions where the automated tactics prove more lemmas (in `msetproperties` and `ssrfun` for instance).
Related Work {#sec:related}
============
There have been many projects aimed at improving the automation of proofs in ITPs. As we have shown in this work, machine learning can be applied in the context of interactive theorem proving. Specifically, we have shown that the tactics used in proofs can serve as useful features for machine learning algorithms. This is an area that has received moderate attention previously.
Jamnik *et al.* have previously applied an Inductive Logic Programming technique to examples of proofs in the $\Omega$mega system [@Jamnik03]. Given a collection of well chosen proof method sequences, Jamnik *et al.* perform a method of least generalisation to infer what are ultimately regular grammars. The value of even basic models is intuitive. Proofs could be derived automatically using the technique. However, the proof steps learned do not contain any parameters. The parameters required are reconstructed after running the learning technique.
Another approach that concentrated on Isabelle proofs was implemented by Duncan [@Duncan07]. Duncan’s approach was to identify commonly occurring sequences of tactics from a given corpora. After eliciting these tactic sequences, evolutionary algorithms were used to automatically formulate new tactics. The evaluation showed that simple properties could be derived automatically using the technique; however the parameter information was left out of the learning approach.
Conclusion and Future Work {#sec:conclusion}
==========================
This paper has presented SEPIA, an approach to automatically generate proofs in Coq. This has been achieved by applying model inference techniques to interactive proof scripts. We have shown that even learning from tactic sequences, which is admittedly a simplistic view of interactive proofs, can provide effective proof automation. It would be interesting to see what can be achieved by using more sophisticated views such as the proof goal view [@Grov12].
The overall process is fully automated, and our evaluation shows that SEPIA performs well on a range of proofs from three varied Coq datasets. It succeeds in proving a number of lemmas that were out of reach for Coq’s automated tactics. Additionally, when SEPIA finds a proof it usually does so in seconds.
As well as reusing existing proofs, SEPIA can construct proofs using new tactic sequences. These new sequences might not have been identified by manual analysis of the proof libraries. In our evaluation, we also identified a number of shorter proofs (by comparing the proofs found using SEPIA to the original proofs). This follows the trend of other comparisons of automated and human proofs [@Alama12].
We plan to investigate automatic identification of appropriate theories or lemmas that could be used to infer models. Currently, we use whole theories; however it may be the case that only a handful of these proofs are actually useful. By using methods such as ML4PG [@ML4PG13] it may be possible to discover the most useful lemmas from a large collection of theories.
[^1]: The final publication is available at http://link.springer.com.
[^2]: https://bitbucket.org/tomgransden/efsminferencetool
[^3]: http://ssr.msr-inria.inria.fr/doc/ssreflect-1.4/
[^4]: https://coq.inria.fr/library/
[^5]: http://compcert.inria.fr/doc/index.html
---
abstract: 'The dissipative dynamics of a two-qubit system is studied theoretically. We make use of the Bloch-Redfield formalism which explicitly includes the parameter-dependent relaxation rates. We consider the case of two flux qubits, when the controlling parameters are the partial magnetic fluxes through the qubits’ loops. The strong dependence of the inter-level relaxation rates on the controlling magnetic fluxes is demonstrated for the realistic system. This allows us to propose several mechanisms for lasing in this four-level system.'
author:
- 'E. A. Temchenko'
- 'S. N. Shevchenko'
- 'A. N. Omelyanchouk'
title: 'Dissipative dynamics of two-qubit system: four-level lasing'
---
Introduction
============
Recently considerable progress has been made in studying Josephson-junctions-based superconducting circuits, which can behave as effectively few-level quantum systems. [@Korotkov] When the dynamics of the system can be described in terms of two levels only, this circuit is called a qubit. Demonstrations of the energy level quantization and the quantum coherence provide the basis for both possible practical applications and for studying fundamental quantum phenomena in systems involving qubits. Important distinctions of these multi-level artificial quantum systems from their microscopic counterparts are high level of controllability and unavoidable coupling to the dissipative environment.
Multi-level systems with solid-state qubits may be realized in different ways. First, the devices used for qubits in reality are themselves multi-level systems with the lowest two levels used to form a qubit. For some recent study of multi-level superconducting devices see Ref. . Then, a qubit can be coupled to another quantum system, e.g. a quantum resonator.[@qb-oscillator] Such a composite system is also described by a multi-level structure. As a particular case of coupling with other systems, the multi-qubit system is of particular interest (see e.g. Ref. ).
Operations with the multi-level systems can be described with level-population dynamics. In particular, population inversion was proposed for cooling and lasing with superconducting qubits.[@Astafiev07; @Grajcar08-i-drugie] However, most of the previous propositions were related to three-level systems, while for practical purposes four-level systems are often more advantageous.[@Svelto]
The natural candidate for the solid-state four-level system is the system of two coupled qubits. The purpose of this paper is the theoretical study of mechanisms of population inversion and lasing, as a result of the pumping and relaxation processes in the system. We will start in the next Section by demonstrating the controllable energy level structure of the system. Our calculations are done for the parameters of the realistic two-flux-qubit system studied in Ref. . To describe the dynamics of the system we will present the Bloch-Redfield formalism in Sec. III. The key feature of the system is the strong dependence of the relaxation rates on the controlling parameters. Then solving the master equation in Sec. IV we will demonstrate several mechanisms for creating the population inversion in our four-level system. We will demonstrate further that applying additional driving induces transitions between the operating states resulting in stimulated emission. We summarize our theoretical results in Sec. V. and, based on our calculations, we then discuss the experimental feasibility of the two-qubit lasing.
Model Hamiltonian and Eigenstates of the two-qubit system
=========================================================
The main object of our study is a system of two coupled qubits. And altough our analysis bears general character, for concreteness we consider superconducting flux qubits, see Fig. \[Fig:scheme\]. A flux qubit, which is a superconducting ring with three Josephson junctions, can be controlled by constant ($\Phi _{\mathrm{dc}}$) and alternating ($\Phi _{\mathrm{ac}}\sin \omega t$) external magnetic fluxes. Each of the two qubits can be considered as a two-level system with the Hamiltonian in the pseudospin notation [@vanderWal03; @Korotkov] $$\widehat{H}_{\mathrm{1q}}^{(i)}=-\frac{1}{2}\epsilon _{i}(t)\widehat{\sigma }_{z}^{(i)}-\frac{1}{2}\Delta _{i}\widehat{\sigma }_{x}^{(i)}, \label{H1q}$$where $\Delta _{i}$ is the tunnelling amplitude, $\widehat{\sigma }_{x,z}^{(i)}$ are the Pauli matrices in the basis $\left\{ \lvert {\downarrow }\rangle ,\lvert {\uparrow }\rangle \right\} $ of the current operator in the $i$-th qubit: $\widehat{I}_{i}=-I_{\mathrm{p}}^{(i)}\widehat{\sigma }_{z}^{(i)},$ with $I_{\mathrm{p}}^{(i)}$ being the absolute value of the persistent current in the $i$-th qubit; then the eigenstates of $\widehat{\sigma }_{z}$ correspond to the clockwise ($\widehat{\sigma }_{z}\left\vert \downarrow \right\rangle =-\left\vert \downarrow
\right\rangle $) and counterclockwise ($\widehat{\sigma }_{z}\left\vert
\uparrow \right\rangle =\left\vert \uparrow \right\rangle $) current in the $i$-th qubit. The energy bias $\epsilon _{i}(t)$ is controlled by constant and alternating magnetic fluxes
\[epsilon\] $$\begin{aligned}
\epsilon _{i}(t) &=&2I_{\mathrm{p}}^{(i)}\left( \Phi _{i}(t)-\frac{1}{2}\Phi
_{0}\right) =\epsilon _{i}^{(0)}+\tilde{\epsilon}_{i}(t), \\
\epsilon _{i}^{(0)} &=&2I_{\mathrm{p}}^{(i)}\Phi _{0}f_{i}\text{, \ \ }f_{i}=\frac{\Phi _{\mathrm{dc}}^{(i)}}{\Phi _{0}}-\frac{1}{2}, \\
\tilde{\epsilon}_{i}(t) &=&2I_{\mathrm{p}}^{(i)}\Phi _{0}f_{\mathrm{ac}}\sin
\omega t\text{, \ \ }f_{\mathrm{ac}}=\frac{\Phi _{\mathrm{ac}}}{\Phi _{0}}.\end{aligned}$$
![(Color online). **Schematic diagram of the two-qubit system**. Two different flux qubits are biased by independent constant magnetic fluxes, $\Phi _{\mathrm{dc}}^{(1)}$ and $\Phi _{\mathrm{dc}}^{(2)}$, and by the same alternating magnetic flux $\Phi _{\mathrm{ac}}\sin \protect\omega t$. The former controls the energy levels structure, while the latter changes the populations of the levels. The dissipation processes are described by coupling the system to the bath of harmonic oscillators.[]{data-label="Fig:scheme"}](Fig1.eps){width="8.6cm"}
The basis state vectors for the two-qubit system $\left\{ \lvert {\downarrow
\downarrow }\rangle ,\lvert {\downarrow \uparrow }\rangle ,\lvert {\uparrow
\downarrow }\rangle ,\lvert {\uparrow \uparrow }\rangle \right\} $ are composed from the single-qubit states: $\lvert {\downarrow \uparrow }\rangle
=\lvert {\downarrow }\rangle _{(1)}\lvert {\uparrow }\rangle _{(2)}$, etc. For identification of the level structure and understanding different transition rates, we will start the consideration from the case of two non-interacting qubits. Then, the energy levels of two qubits consist of the pair-wise summation of single-qubit levels,
$$E_{i}^{\pm }=\pm \frac{\Delta E_{i}}{2}=\pm \frac{1}{2}\sqrt{\epsilon
_{i}^{(0)2}+\Delta _{i}^{2}}, \label{DE}$$
which are the eigenstates of the single-qubit time-independent Hamiltonian (\[H1q\]) at $f_{\mathrm{ac}}=0$. We demonstrate this in Fig. \[Fig:levels\](a), where we plot the energy levels, fixing the bias in the first qubit $f_{1}$, as a function of the partial bias in the second qubit $f_{2}$. Then the single-qubit energy levels appear as (dashed) horizontal lines at $E_{1}^{\pm }=\pm \frac{1}{2}\sqrt{\epsilon _{1}^{(0)2}+\Delta
_{1}^{2}}$ for the first qubit and as the parabolas at $E_{2}^{\pm
}(f_{2})=\pm \frac{1}{2}\sqrt{\epsilon _{2}^{(0)}(f_{2})^{2}+\Delta _{2}^{2}}
$.
After showing the two-qubit energy levels in Fig. \[Fig:levels\](a), we assume that the relaxation in the first qubit is much faster than in the second (this will be studied in the next Section), which is shown with the arrows in the figure. And now our problem, with four levels and with fast relaxation between certain levels, becomes similar to the one with lasers. [@Svelto] This allows us to propose three- and four-level lasing schemes in Fig. \[Fig:levels\](b,c). This is the subject of our further detailed study.
![(Color online). **Energy level structure of two uncoupled qubits** ($J=0$). (a) One-qubit and two-qubits energy levels are shown by dashed and solid lines as a function of partial flux $f_{\mathrm{2}}$ at fixed flux $f_{\mathrm{1}}$. We mark the energy levels by the current operator eigenstates, $\lvert {\downarrow \downarrow }\rangle $ *etc.* Particularly, we will consider the energy levels and dynamical behaviour of the system for the flux biases $f_{\mathrm{2}}=f_{2\mathrm{L}}$ (marked by the square) and $f_{\mathrm{2}}=f_{2\mathrm{R}}$ (marked by the circle). By the arrows we show the fastest relaxation - for qubit $1$. (b) Scheme for *three-level lasing* at $f_{\mathrm{2}}=f_{2\mathrm{L}}$. The driving magnetic flux pumps (P) the upper level $\left\vert 3\right\rangle $. Fast relaxation (R) creates the population inversion of the first excited level $\left\vert 1\right\rangle $ in respect to the ground state $\left\vert
0\right\rangle $; these two operating levels can be used for lasing (L). (c) Scheme for *four-level lasing* at $f_{\mathrm{2}}=f_{2\mathrm{R}}$. Pumping (P) and fast relaxations (R$_{1}$ and R$_{2}$) create the population inversion of the level $\left\vert 2\right\rangle $ with respect to level $\left\vert 1\right\rangle $.[]{data-label="Fig:levels"}](Fig2){width="8.6cm"}
We have analyzed the relaxation in the system of two uncoupled qubits. However this system can not be used for lasing, since this requires pumping from the ground state to the upper excited state (see Fig. \[Fig:levels\](b,c)). Such excitation of the two-qubit system requires simultaneously changing the state of both qubits and can be done provided the two qubits are interacting. That is why in what follows we consider in detail the system of two *coupled* qubits. The coupling between the two qubits we assume to be determined by an Ising-type (inductive interaction) term $\frac{J}{2}\widehat{\sigma }_{z}^{(1)}\widehat{\sigma }_{z}^{(2)}$, where $J$ is the coupling energy between the qubits. Then the Hamiltonian of the two driven flux qubits can be represented as the sum of time-independent and perturbation Hamiltonians $$\begin{gathered}
\widehat{H}_{\mathrm{2q}}=\widehat{H}_{0}+\widehat{V}(t), \label{H2q} \\
\widehat{H}_{0}=\sum_{i=1,2}\left( -\frac{1}{2}\Delta _{i}\widehat{\sigma }_{x}^{(i)}-\frac{1}{2}\epsilon _{i}^{(0)}\widehat{\sigma }_{z}^{(i)}\right) +\frac{J}{2}\widehat{\sigma }_{z}^{(1)}\widehat{\sigma }_{z}^{(2)},
\label{H0} \\
\widehat{V}(t)=\sum_{i=1,2}-\frac{1}{2}\tilde{\epsilon}_{i}(t)\widehat{\sigma }_{z}^{(i)},\end{gathered}$$where $\widehat{\sigma }_{x,z}^{(1)}=\widehat{\sigma }_{x,z}\otimes \widehat{\sigma }_{0}$, $\widehat{\sigma }_{x,z}^{(2)}=\widehat{\sigma }_{0}\otimes
\widehat{\sigma }_{x,z}$, and $\widehat{\sigma }_{0}$ is the unit matrix. When presenting concrete results we will use the parameters of Ref. : $\Delta _{\mathrm{1}}/h=15.8$ GHz, $\Delta _{\mathrm{2}}/h=3.5$ GHz, $I_{\mathrm{p}}^{(\mathrm{1})}\Phi _{0}/h=375$ GHz, $I_{\mathrm{p}}^{(\mathrm{2})}\Phi _{0}/h=700$ GHz, $J/h=3.8$ GHz.
For further analysis of the system, we have to convert to the basis of eigenstates of the unperturbed Hamiltonian . Eigenstates $\left\{
\lvert {0}\rangle ,\lvert {1}\rangle ,\lvert {2}\rangle ,\lvert {3}\rangle
\right\} $ of the unperturbed Hamiltonian are connected with the initial basis $$\left[
\begin{matrix}
\lvert {0}\rangle \\
\lvert {1}\rangle \\
\lvert {2}\rangle \\
\lvert {3}\rangle\end{matrix}\right] =\widehat{S}\left[
\begin{matrix}
\lvert {\downarrow \downarrow }\rangle \\
\lvert {\downarrow \uparrow }\rangle \\
\lvert {\uparrow \downarrow }\rangle \\
\lvert {\uparrow \uparrow }\rangle\end{matrix}\right] , \label{conversion}$$where $\widehat{S}$ is the unitary matrix consisting of eigenvectors of the unperturbed Hamiltonian . Making use of the transformation $\widehat{H}_{0}^{\prime }=\widehat{S}^{-1}\widehat{H}_{0}\widehat{S}$, we obtain the Hamiltonian $\widehat{H}_{0}^{\prime }$ in the energy representation: $\widehat{H}_{0}^{\prime }=$diag$(E_{0},E_{1},E_{2},E_{3})$. These eigenvalues of the Hamiltonian $H_{0}$ are computed numerically and plotted in Fig. \[Fig:W1\](a) as functions of the bias flux in the second qubit $f_{2}$. The distinction from Fig. \[Fig:levels\](a), calculated with $J=0$, is in that, first, the crossing at $f_{2}=f_{2}^{\ast }$ becomes an avoided crossing, and second, the distance between the \[previously single-qubit\] energy levels is not equal, e.g. now $E_{3}-E_{2}\neq
E_{1}-E_{0}$.
![(Color online). (a) **Energy levels** of the system of two coupled qubits. Arrows show the pumping and dominant relaxation, as in Fig. \[Fig:levels\]. (b) **The relaxation rates** $W_{mn}$, which give the probability of the transition from level $n$ to level $m$, induced by the interaction with the dissipative bath. Dominant relaxations are $W_{13}$ and $W_{02}$ to the left from the avoided crossing at $f_{\mathrm{2}}=f_{\mathrm{2}}^{\ast }$ and $W_{23}$ and $W_{01}$ to the right. (The small relaxation rates $W_{03}$ and $W_{12}$ are not shown.)[]{data-label="Fig:W1"}](Fig3){width="8.6cm"}
Likewise, we could also convert the excitation operator $\widehat{V}(t)$ to the energy representation $$\widehat{V}^{\prime }(t)=\widehat{S}^{-1}\widehat{V}(t)\widehat{S}=\sum_{i=1,2}-\frac{1}{2}\tilde{\epsilon}_{i}(t)\widehat{\tau }_{z}^{(i)},
\label{V'}$$$$\widehat{\tau }_{z}^{(i)}=\widehat{S}^{-1}\widehat{\sigma }_{z}^{(i)}\widehat{S}.$$
Master equation and relaxation
==============================
Bloch-Redfield formalism
------------------------
Following Ref. , we will describe the dissipation in the open system of two qubits, assuming that it is interacting with the thermostat (bath), see Fig. \[Fig:scheme\]. Within the Bloch-Redfield formalism, the Liouville equation for the quantum system interacting with the bath is transformed into the master equation for the reduced system’s density matrix. This transformation is made with several reasonable assumptions: the interaction with the bath is weak (Born approximation); the bath is so large that the effect of the system on its state is ignored; the dynamics of the system depends on its state only at present (Markov approximation). Then the master equation for the reduced density matrix $\rho (t)$ of our driven system in the energy representation can be written in the form of the following differential equations [@MasterEqn] $$\dot{\rho}_{ij}=-i\omega _{ij}\rho _{ij}-\frac{i}{\hbar }\left[ \widehat{V}^{\prime },\widehat{\rho }\right] _{ij}+\delta _{ij}\sum_{n\neq j}\rho
_{nn}W_{jn}-\gamma _{ij}\rho _{ij}. \label{M_eqn}$$Here $\omega _{ij}=(E_{i}-E_{j})/\hbar $, and the relaxation rates $$W_{mn}=2\text{Re}\Gamma _{nmmn}, \label{Ws}$$$$\gamma _{mn}=\sum_{r}\left( \Gamma _{mrrm}+\Gamma _{nrrn}^{\ast }\right)
-\Gamma _{nnmm}-\Gamma _{mmnn}^{\ast } \label{gmn}$$are defined by the relaxation tensor $\Gamma _{lmnk}$, which is given by the Golden Rule$$\Gamma _{lmnk}=\frac{1}{\hbar ^{2}}\int\limits_{0}^{\infty }dte^{-i\omega
_{nk}t}\left\langle H_{\mathrm{I},lm}(t)H_{\mathrm{I},nk}(0)\right\rangle .$$Here $\widehat{H}_{\mathrm{I}}(t)$ is the Hamiltonian of the interaction of our system with the bath in the interaction representation; the angular brackets denote the thermal averaging of the bath degrees of freedom.
It was shown [@vanderWal03; @Governale01-i-drugie] that the noise from the electromagnetic circuitry can be described in terms of the impedance $Z(\omega )$ from a bath of $LC$ oscillators. For simplicity one assumes that both qubits are coupled to a common bath of oscillators, then the Hamiltonian of interaction is written as$$\widehat{H}_{\mathrm{I}}=\frac{1}{2}\left( \widehat{\sigma }_{z}^{(1)}+\widehat{\sigma }_{z}^{(2)}\right) \widehat{X} \label{HI}$$in terms of the collective bath coordinate $\widehat{X}=\sum\nolimits_{k}c_{k}\widehat{\Phi }_{k}$. Here $\widehat{\Phi }_{k}$ stands for the magnetic flux (generalized coordinate) in the $k$-th oscillator, which is coupled with the strength $c_{k}$ to the qubits. We note that the coupling to the environment in the form of Eq. (\[HI\]) applies only to correlated noise, or both qubits interacting with the same environment. One could argue that it would be more realistic to use two separate terms, one for each qubit coupled to its own environment. However, since this term leads to different relaxation rates in our qubits $1$ and $2$ (see below), then the form in Eq. (\[HI\]) should give essentially the same results as two separate coupling terms.
Then it follows that the relaxation tensor $\Gamma _{lmnk}$ is defined by the noise correlation function $S(\omega )$$$\Gamma _{lmnk}=\frac{1}{\hbar ^{2}}\Lambda _{lmnk}S(\omega _{nk}),$$$$\Lambda _{lmnk}=\left( \widehat{\tau }_{z}^{(1)}+\widehat{\tau }_{z}^{(2)}\right) _{lm}\left( \widehat{\tau }_{z}^{(1)}+\widehat{\tau }_{z}^{(2)}\right) _{nk},$$$$S(\omega )=\int\limits_{0}^{\infty }dte^{-i\omega t}\left\langle
X(t)X(0)\right\rangle .$$The noise correlator $S(\omega )$ was calculated in Ref. within the spin-boson model and it was shown that its imaginary part results only in a small renormalization of the energy levels and can be neglected. The relevant real part of the relaxation tensor [@Governale01-i-drugie] $$\text{Re}\Gamma _{lmnk}=\frac{1}{8\hbar }\Lambda _{lmnk}J(\omega _{nk})\left[
\coth \frac{\hbar \omega _{nk}}{2T}-1\right] \label{ReG}$$is defined by the environmental spectral density $J(\omega )$. Here $T$ is the bath temperature ($k_{B}$ is assumed $1$); for the numerical calculations we take $T/h=1$ GHz ($T=50$ mK). The electromagnetic environment can be described as an Ohmic resistive shunt across the junctions of the qubits, $Z(\omega )=R$.[@vanderWal03] Then the low frequency spectral density is linear $J(\omega )\propto \omega Z(\omega
)\propto \omega $ and should be cut off at some large value $\omega _{\mathrm{c}}$; the realistic experimental situation is described by [@Governale01-i-drugie] $$J(\omega )=\alpha \frac{\hbar \omega }{1+\omega ^{2}/\omega _{\mathrm{c}}^{2}}, \label{J(w)}$$where $\alpha $ is a dimensionless parameter that describes the strength of
Relaxation rates
----------------
From the above equations the expression for the relaxation rates from level $\left\vert n\right\rangle $ to level $\left\vert m\right\rangle $ follows$$W_{mn}=\frac{1}{4\hbar }\Lambda _{nmmn}J(\omega _{mn})\left[ \coth \frac{\hbar \omega _{mn}}{2T}-1\right] . \label{Wmn}$$These relaxation rates are plotted in Fig. \[Fig:W1\](b) as functions of the partial flux bias $f_{2}$. This figure demonstrates that the fastest transitions are those between the energy levels corresponding to changing the state of the first qubit and leaving the same state of the second qubit, cf. Fig. \[Fig:W1\](a). Namely, the fastest transitions are those with the rates $W_{13}$ and $W_{02}$ to the left from the avoided crossing and $W_{23} $ and $W_{01}$ to the right, which correspond to the transitions $\lvert {\uparrow \uparrow }\rangle \rightarrow \lvert {\downarrow \uparrow }\rangle $ and $\lvert {\uparrow \downarrow }\rangle \rightarrow \lvert {\downarrow \downarrow }\rangle $. Note that we do not show in the figure the rates $W_{03}$ and $W_{12}$; they correspond to the transitions with simultaneously changing the states of the two qubits and they are much smaller than the rates shown.
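Continuing the NumPy sketch above (it reuses `h0`, `sz` and `s0` defined there), Eq. (\[Wmn\]) can be evaluated directly; in this fragment $\hbar$ is set to one and energies are kept in $h\cdot$GHz, so only the relative size of the rates — which is all that matters for the hierarchy discussed here — should be read off:

    ALPHA, TEMP, W_CUT = 0.01, 1.0, 1.0e4   # alpha, T/h and cutoff, in h*GHz

    def relaxation_rates(f1, f2):
        """Rates W_{mn} of Eq. (Wmn) with hbar = 1; absolute scale depends on units."""
        energies, S = np.linalg.eigh(h0(f1, f2))
        # tau_z^(1) + tau_z^(2) in the energy basis (H_0 is real symmetric,
        # so S is orthogonal and S^{-1} = S.T)
        tau = S.T @ (np.kron(sz, s0) + np.kron(s0, sz)) @ S
        W = np.zeros((4, 4))
        for m in range(4):
            for n in range(4):
                if m == n:
                    continue
                w = energies[m] - energies[n]                    # omega_mn
                spectral = ALPHA * w / (1.0 + (w / W_CUT) ** 2)  # J(omega_mn)
                W[m, n] = 0.25 * tau[n, m] ** 2 * spectral * (
                    1.0 / np.tanh(w / (2.0 * TEMP)) - 1.0)
        return W

    # Rates at the two bias points used later, to the left and to the right
    # of the avoided crossing (cf. Fig. [Fig:W1](b)):
    print(np.round(relaxation_rates(14e-3, 11e-3), 3))
    print(np.round(relaxation_rates(14e-3, 20e-3), 3))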
The relaxation rates $W_{ij}$ are shown in Fig. \[Fig:W2\] as functions of the two partial bias fluxes, $f_{1}$ and $f_{2}$. Again, one can see the regions where certain relaxation rates are dominant. Such a difference in the relaxation rates creates a sort of artificial selection rules for the transitions similar to the selection rules studied in Refs. . In our case the transitions are induced by the interaction with the environment and the difference is due to the different parameters of the two qubits.[@Paladino09] To further understand this issue, we consider the single-qubit relaxation rates.
![(Color online). **Relaxation rates** $W_{mn}$ versus partial biases of the two qubits, $f_{\mathrm{1}}$ and $f_{\mathrm{2}}$. The square and the circle show the parameters $f_{\mathrm{1}}$ and $f_{\mathrm{2}}=f_{\mathrm{2L(R)}}$, at which the calculations of other figures are done.[]{data-label="Fig:W2"}](Fig4){width="8.6cm"}
From the above equations we can obtain the energy relaxation time $T_{1}$ and the decoherence time $T_{2}$ for single qubit. For the two-level system with two states $\lvert {0}\rangle $ and $\lvert {1}\rangle $ the relaxation time is given by [@MasterEqn] $T_{1}^{-1}=W_{01}+W_{10}$. The Boltzmann distribution, $W_{10}/W_{01}=\exp (-\Delta E/T)$, means that at low temperature the major effect of the bath is the relaxation from the upper level to the lower one. Now, from Eq. (\[Wmn\]) it follows that$$T_{1}^{-1}=\frac{\alpha \Delta ^{2}}{2\hbar \Delta E}\coth \frac{\Delta E}{2T}. \label{T1}$$Also from Eq. (\[gmn\]) we obtain the dephasing rate [@MasterEqn]$$T_{2}^{-1}=\text{Re}\gamma _{01}=\frac{1}{2}T_{1}^{-1}+\frac{\alpha T}{\hbar
}\frac{\epsilon ^{(0)2}}{\Delta E^{2}}. \label{T2}$$For the calculation presented in Fig. \[Fig:levels\](a) for two qubits with $J=0$ in the vicinity of the point $f_{2}=f_{2}^{\ast }$, where $\Delta
E^{(1)}=\Delta E^{(2)}$, we obtain$$\frac{T_{1}^{(1)}}{T_{1}^{(2)}}\simeq \left( \frac{\Delta _{2}}{\Delta _{1}}\right) ^{2}. \label{relation}$$
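For the parameters used here this ratio is $(3.5/15.8)^{2}\approx 0.05$, i.e. the first qubit relaxes roughly twenty times faster than the second.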
As we explained above, the lasing in the four-level system requires the hierarchy of the relaxation times. In particular, we assumed $T_{1}^{(1)}\ll
T_{1}^{(2)}$. So, in our calculations we have taken $\Delta _{1}\gg \Delta
_{2}$ and consequently the first qubit relaxed faster. This qualitatively explains the dominant relaxations in Fig. \[Fig:W1\](b).
Equations for numerical calculations
------------------------------------
If we use the Hermiticity and normalization of the density matrix, then the $16$ complex equations can be reduced to $15$ real equations. After the straightforward parametrization of the density matrix, $\rho
_{ij}=x_{ij}+iy_{ij}$, we get [@ShT08]
\[Eqs\] $$\begin{gathered}
\dot{x}_{ii}=-\frac{1}{\hbar }\left[ V^{\prime },y\right] _{ii}+\sum_{r\neq
i}W_{ir}x_{rr}-x_{ii}\sum_{r\neq i}W_{ri},\text{ }i=1,2,3; \\
\dot{x}_{ij}=\omega _{ij}y_{ij}-\frac{1}{\hbar }\left[ V^{\prime },y\right]
_{ij}-\gamma _{ij}x_{ij},\text{ }i>j; \\
\dot{y}_{ij}=-\omega _{ij}x_{ij}+\frac{1}{\hbar }\left[ V^{\prime },y\right]
_{ij}-\gamma _{ij}y_{ij},\text{ }i>j;\end{gathered}$$$y_{ii}=0$, $x_{00}=1-(x_{11}+x_{22}+x_{33})$; $x_{ji}=x_{ij}$, $y_{ji}=-y_{ij}$.
This system of equations can be simplified if the relaxation rates are taken at zero temperature, $T=0$, and the impact of the inter-qubit interaction on relaxation is neglected, $J=0$. Then, among all the $W_{ij}$ and $\gamma _{ij}$, only the elements corresponding to single-qubit relaxations are non-trivial (see Eqs. (\[T1\]-\[T2\])). For example, consider $f_{2}<f_{2}^{\ast }$ (see Fig. \[Fig:levels\](a) for the notation of the levels); then the non-trivial elements are
$$\begin{aligned}
W_{13} &=&W_{02}=\left( T_{1}^{(1)}\right) ^{-1}=\frac{\alpha \Delta _{1}^{2}}{2\hbar \Delta E_{1}}, \\
W_{23} &=&W_{01}=\left( T_{1}^{(2)}\right) ^{-1}=\frac{\alpha \Delta _{2}^{2}}{2\hbar \Delta E_{2}},\end{aligned}$$
$$\begin{aligned}
\gamma _{13} &=&\gamma _{31}=\gamma _{02}=\gamma _{20}=(T_{2}^{(1)})^{-1}=\frac{1}{2}(T_{1}^{(1)})^{-1}, \\
\gamma _{23} &=&\gamma _{32}=\gamma _{01}=\gamma _{10}=(T_{2}^{(2)})^{-1}=\frac{1}{2}(T_{1}^{(2)})^{-1}.\end{aligned}$$
In our numerical calculations we did not ignore the influence of the coupling on relaxation, i.e. we did not assume $J=0$. However, we have checked numerically that this simplification, $J=0$, which results in the relaxation rates (25-26), sometimes suffices to describe the dynamics of the system qualitatively.
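Equivalently to the real parametrization above, the complex density matrix can be propagated directly. The fragment below is a minimal SciPy sketch of Eq. (\[M\_eqn\]) for a single harmonic drive; all input matrices (the eigenenergies, $\tau _{z}^{(i)}$ in the energy basis, $W$ and $\gamma $) are assumed to be supplied in consistent units with $\hbar =1$, and the population loss is carried by the diagonal of $\gamma $ ($\gamma _{jj}=\sum_{r\neq j}W_{rj}$):

    import numpy as np
    from scipy.integrate import solve_ivp

    def evolve(rho0, energies, tau1, tau2, W, gamma, f_ac, omega_d, t_span):
        """Minimal sketch of Eq. (M_eqn) for a single harmonic drive (hbar = 1).

        `f_ac = (eps1, eps2)` are the driving amplitudes entering V'(t) and
        `omega_d` the driving frequency.  Returns the times and the level
        populations P_i(t) = Re rho_ii(t).
        """
        n = len(energies)
        omega = np.subtract.outer(np.asarray(energies), np.asarray(energies))

        def rhs(t, y):
            rho = y.reshape(n, n)
            v = -0.5 * np.sin(omega_d * t) * (f_ac[0] * tau1 + f_ac[1] * tau2)
            drho = -1j * omega * rho - 1j * (v @ rho - rho @ v) - gamma * rho
            drho[np.diag_indices(n)] += W @ rho.diagonal().real   # gain from other levels
            return drho.ravel()

        sol = solve_ivp(rhs, t_span, np.asarray(rho0, dtype=complex).ravel(),
                        max_step=2 * np.pi / omega_d / 20)
        rho_t = sol.y.reshape(n, n, -1)
        return sol.t, np.real(rho_t[np.arange(n), np.arange(n), :])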
Several schemes for lasing
==========================
In Sec. II and in Fig. \[Fig:levels\] we pointed out that in the system of two coupled qubits there are two ways to realize lasing, making use of three or four levels to create the population inversion between the operating levels. In this Section we will demonstrate lasing in the two-qubit system by numerically solving the Bloch-type equations (\[Eqs\]) with the relaxation rates given by Eqs. (\[Ws\], \[gmn\], \[ReG\]). Besides demonstrating the population inversion between the operating levels, we apply an additional signal with the frequency matching the distance between the operating levels, to stimulate the transition from the upper operating level to the lower one. So, we will first consider the system driven by one monochromatic signal $f(t)=f_{\mathrm{ac}}\sin \omega t$ to pump the system to the upper level and to demonstrate the population inversion. Then we will apply another signal stimulating transitions between the operating laser levels:
$$f(t)=f_{\mathrm{ac}}\sin \omega t+f_{\mathrm{L}}\sin \omega _{\mathrm{L}}t.$$
Solving the system of equations (\[Eqs\]), we obtain the population of $i$-th level of our two-qubit system, $P_{i}=x_{ii}$. The results of the calculations are plotted in Figs. \[Fig:creation\] and \[Fig:stimulated\], where the temporal dynamics of the level populations is presented for different situations.
![(Color online). **Three-level lasing and stimulated transition**. Time evolution of the numerically calculated occupation probabilities at biases $f_{1}=14\times 10^{-3}$ and $f_{2}=11\times 10^{-3}$ is plotted for (a) one-photon driving and (b) two-photon driving. As shown in the inset schemes, the driving and fast relaxation create the inverse population between the levels $\lvert {1}\rangle $ and $\lvert {0}\rangle $. So, these levels can be used for lasing, which we schematically mark by the double arrow. After some time delay (when the population inversion is reached) an additional periodic signal (S) $f_{\mathrm{L}}\cos \protect\omega _{\mathrm{L}}t$ is turned on matching the operating levels, $\hbar \protect\omega _{\mathrm{L}}=E_{1}-E_{0}$. This leads to the stimulated transition $\lvert {1}\rangle \rightarrow \lvert {0}\rangle $.[]{data-label="Fig:creation"}](Fig5){width="8.6cm"}
![(Color online). **Four-level lasing and stimulated transition**. Time evolution of the occupation probabilities at biases $f_{1}=14\times
10^{-3}$ and $f_{2}=20\times 10^{-3}$ is plotted for (a) one-photon driving and (b) two-photon driving. The driving and fast relaxation create the inverse population between the levels $\lvert {2}\rangle $ and $\lvert {1}\rangle $. After a time delay an additional periodic signal $f_{\mathrm{L}}\cos \protect\omega _{\mathrm{L}}t$ is turned on matching the operating levels, $\hbar \protect\omega _{\mathrm{L}}=E_{2}-E_{1}$. This leads to the stimulated transition $\lvert {2}\rangle \rightarrow \lvert {1}\rangle $.[]{data-label="Fig:stimulated"}](Fig6){width="8.6cm"}
In Fig. \[Fig:creation\] we consider the situation where the relevant dynamics includes three levels (for definiteness, we take $f_{1}=14\times
10^{-3}$, $f_{2}=11\times 10^{-3}$, which is marked as the square in Fig. \[Fig:W2\]). Pumping ($\lvert {0}\rangle \rightarrow \lvert {3}\rangle $) and relaxation ($\lvert {3}\rangle \rightarrow \lvert {1}\rangle $) create the population inversion between the levels $\lvert {1}\rangle \ $and $\lvert {0}\rangle $. For pumping we consider two possibilities: one-photon driving, Fig. \[Fig:creation\](a), when $\hbar \omega =E_{3}-E_{0}$, and two-photon driving, Fig. \[Fig:creation\](b), when $2\hbar \omega
=E_{3}-E_{0}$. In the latter case we have chosen the parameters (namely $f_{1}$ and $f_{2}$) so, that the two-photon excitation goes via an intermediate level $\lvert {2}\rangle $. We note here that, as was demonstrated in Ref. , the multi-photon excitation in our multi-level system can be direct, as below in Fig. \[Fig:stimulated\](b), or ladder-type, via an intermediate level, as in Fig. \[Fig:creation\](b). Figure \[Fig:creation\] was calculated for the following parameters: $\omega _{\mathrm{L}}/2\pi =13.7$ GHz ($\hbar \omega _{\mathrm{L}}=E_{1}-E_{0} $) and also (a) $\omega /2\pi =35.2$ GHz, $f_{\mathrm{ac}}=7\times 10^{-3}$, $f_{\mathrm{L}}=5\times 10^{-3}$; (b) $\omega /2\pi
=17.6 $ GHz, $f_{\mathrm{ac}}=2\times 10^{-3}$, $f_{\mathrm{L}}=5\times
10^{-3}$.
Next, we consider the scheme for the four-level lasing, which occurs in a similar scenario, except the changing of the levels. Then, the main relaxation transitions are $\lvert {3}\rangle \rightarrow \lvert {2}\rangle $ and $\lvert {1}\rangle \rightarrow \lvert {0}\rangle $, and now the population inversion should be created between levels $\lvert {2}\rangle $ and $\lvert {1}\rangle $. For this we take the partial biases $f_{1}=14\times 10^{-3}$, $f_{2}=20\times 10^{-3}$ (marked by the circle in Fig. \[Fig:W2\]). First, the system is pumped only with one signal either with $\hbar \omega =E_{3}-E_{0}$, Fig. \[Fig:stimulated\](a), or with $2\hbar \omega =E_{3}-E_{0}$, Fig. \[Fig:stimulated\](b). Such pumping together with fast relaxation ($\lvert {3}\rangle \rightarrow \lvert {2}\rangle $) creates the population inversion between the levels $\lvert {2}\rangle \ $and $\lvert {1}\rangle $. Fast relaxation from lower laser level $\lvert {1}\rangle $ into the ground state $\lvert {0}\rangle $ helps creating the population inversion between the laser levels $\lvert {2}\rangle \ $and $\lvert {1}\rangle $, which is the advantage of the four-level scheme.[@Svelto] Then the second signal is applied with a frequency matching the laser operating levels ($\hbar \omega _{\mathrm{L}}=E_{2}-E_{1}$). This stimulates the transition $\lvert {2}\rangle
\rightarrow \lvert {1}\rangle $, which provides the scheme for the four-level lasing. Figure \[Fig:stimulated\] was calculated for the following parameters: $\omega _{\mathrm{L}}/2\pi =9$ GHz ($\hbar \omega _{\mathrm{L}}=E_{2}-E_{1}$) and also (a) $\omega /2\pi =47.4$ GHz, $f_{\mathrm{ac}}=5\times 10^{-3}$, $f_{\mathrm{L}}=3\times 10^{-3}$; (b) $\omega /2\pi
=23.7$ GHz, $f_{\mathrm{ac}}=5\times 10^{-3}$, $f_{\mathrm{L}}=5\times
10^{-3}$.
In the experimental realization of the lasing schemes proposed here, the system of two qubits should be put in a quantum resonator, e.g. by coupling to a transmission line resonator, as in Ref. . Then the stimulated transition between the operating states, which we have demonstrated here, will result in transmitting the energy from the qubits to the resonator as photons. For this, the energy difference between the operating levels should be adjusted to the resonator’s frequency.
Conclusions and Discussion
==========================
We have considered the dissipative dynamics of a system of two qubits. Assuming *different* qubits makes some of the relaxation rates dominant. With these fast relaxation rates, population inversion can be created involving three or four levels. The four-level situation is more advantageous for lasing since the population inversion between the operating levels can be created more easily. We demonstrated that the upper level can be pumped by one- or multi-photon excitations. We also have shown that after applying additional driving, the transition between the operating levels is stimulated.
When presenting concrete results, we have considered the system of two flux superconducting qubits with the realistic parameters of Ref. . For lasing in a generic two-qubit (four-level) system, our recipe is the following. The hierarchy of the relaxation times in the system is obtained by making it asymmetric, with different parameters for individual qubits. This makes transitions between the levels corresponding to a qubit with smaller tunneling amplitude $\Delta $ negligible, which creates a sort of artificial selection rule. Based on our numerical analysis, we conclude that the optimal combination of pumping and relaxation is realized for $\Delta _{1}\gg \Delta _{2}\sim J$.
Creation of *the population inversion* and *the stimulated transitions* between the laser operating levels, demonstrated here theoretically, can be the basis for the respective experiments similar to Ref. . In that work, a three-level qubit (artificial atom) was coupled to a quantum (transmission line) resonator. First, spontaneous emission from the upper operating level was demonstrated. In this way the qubit system can be used as a microwave photon source.[@Houck09] Then, the operating levels were driven with an additional frequency and the microwave amplification due to the stimulated emission was demonstrated. We believe that similar experiments can be done with the two-qubit system (which forms an *artificial four-level molecule* from two atoms/qubits). To summarize, we propose to put the two-qubit system in a quantum resonator with the frequency adjusted to the operating levels and to measure the spontaneous and stimulated emission as the increase of the transmission coefficient. Such lasing in a two-qubit system may become a new useful tool in the qubit toolbox.
We thank E. Il’ichev for fruitful discussions and S. Ashhab for critically reading the manuscript. This work was partly supported by Fundamental Researches State Fund (grant F28.2/019) and NAS of Ukraine (project 04/10-N).
[99]{} For reviews see Special issue on quantum computing with superconducting qubits, Quant. Inf. Process., Vol. **8**, Nos. 2-3 (2009). S.K. Dutta *et al.*, Phys. Rev. B **78**, 104510 (2008); D.M. Berns *et al.*, Nature **455**, 51 (2008); M. Neeley *et al.*, Science **325**, 722 (2009); H. Jirari *et al.*, Eur. Phys. Lett. **87**, 28004 (2009); M.A. Sillanpää *et al.*, Phys. Rev. Lett. **103**, 193601 (2009); G. Sun *et al.*, Appl. Phys. Lett. **94**, 102502 (2009); J. Joo *et al.*, Phys. Rev. Lett. **105**, 073601 (2010); L. Du and Y. Yu, Phys. Rev. B **82**, 144524 (2010). A. Wallraff *et al.*, Nature **431**, 162 (2004); J. Hauss *et al.*, Phys. Rev. Lett. **100**, 037003 (2008); A.A. Abdumalikov *et al.*, Phys. Rev. Lett. **104**, 193601 (2010); S. Ashhab and F. Nori, Phys. Rev. A **81**, 042311 (2010).
Yu.A. Pashkin *et al.*, Nature **421**, 823 (2003); J.B. Majer *et al.*, Phys. Rev. Lett. **94**, 090501 (2005); M. Grajcar *et al.*, Phys. Rev. B **72**, 020503 (2005); M. Steffen *et al.*, Science **313**, 423 (2006); A. Fay *et al.*, Phys. Rev. Lett. **100**, 187003 (2008); A. Izmalkov *et al.*, Phys. Rev. Lett. **101**, 017003 (2008); J. Li *et al.*, Phys. Rev. B **78**, 064503 (2008); L. DiCarlo *et al.*, Nature **460**, 240 (2009); F. Altomare *et al.*, Nature Phys. **6**, 777 (2010). O. Astafiev *et al.*, Nature **449**, 588 (2007).
M. Grajcar *et al.*, Nature Phys. **4**, 612 (2008); S. André *et al.*, Phys. Scr. **T137**, 014016 (2009); S. Ashhab *et al.*, New J. Phys. **11**, 023030 (2009); O.V. Zhirov and D.L. Shepelyansky, Phys. Rev. B **80**, 014519 (2009); M.A. Macovei, Phys. Rev. A **81**, 043411 (2010).
O. Svelto, *Principles of Lasers,* Plenum Press, New York (1989).
E. Il’ichev *et al.*, Phys. Rev. B **81**, 012506 (2010).
C.H. van der Wal *et al.*, Eur. Phys. J. B **31**, 111 (2003).
K. Blum, *Density Matrix Theory and Applications*, Plenum Press, New York–London (1981); U. Weiss, *Quantum Dissipative Systems*, 2nd ed., World Scientific, Singapore (1999).
Yu. Makhlin, G. Schön and A. Shnirman, Rev. Mod. Phys. **73**, 357 (2001); M. Governale *et al.*, Chem. Phys. **268**, 273 (2001); M.J. Storcz and F.K. Wilhelm, Phys. Rev. A **67**, 042319 (2003); M.J. Storcz, *PhD thesis* (2002); L. Chirolli and G. Burkard, Advances in Physics **57**, 225 (2008); Y. Dubi and M. Di Ventra, Phys. Rev. A **79**, 012328 (2009).
Yu-xi Liu *et al.*, Phys. Rev. Lett. **95**, 087001 (2005); J.Q. You *et al.*, Phys. Rev. B **71**, 024532 (2005); J.Q. You *et al.*, Phys. Rev. B **75**, 104516 (2007).
P.C. de Groot *et al.*, Nature Phys. **6**, 763 (2010).
E. Paladino *et al.*, Phys. Scr. **T137**, 014017 (2009).
S.N. Shevchenko and E.A. Temchenko, J. Phys.: Conf. Ser. **129**, 012035 (2008).
A.A. Houck *et al.*, Nature **449**, 328 (2007).
---
abstract: 'In this article we study the homology of spaces ${{\rm Hom}}({{\mathbb Z}}^n,G)$ of ordered pairwise commuting $n$-tuples in a Lie group $G$. We give an explicit formula for the Poincaré series of these spaces in terms of invariants of the Weyl group of $G$. By work of Bergeron and Silberman, our results also apply to ${{\rm Hom}}(F_n/\Gamma_n^m,G)$, where the subgroups $\Gamma_n^m$ are the terms in the descending central series of the free group $F_n$. Finally, we show that there is a stable equivalence between the space ${{\rm Comm}}(G)$ studied by Cohen–Stafa and its nilpotent analogues.'
address:
- 'Indiana University - Purdue University Indianapolis, Indianapolis, IN 46202'
- 'Tulane University, New Orleans, LA 70118'
author:
- 'Daniel A. Ramras'
- Mentor Stafa
title: 'Hilbert–Poincaré series for spaces of commuting elements in Lie groups'
---
Introduction
============
Let $G$ be a compact and connected Lie group and let $\pi $ be a discrete group generated by $n$ elements. In this article we study the rational homology of the space of group homomorphisms ${{\rm Hom}}(\pi,G)\subseteq G^n$, endowed with the subspace topology from $G^n$. In particular, when $\pi$ is free abelian or nilpotent we give an explicit formula for the Poincaré series of ${{\rm Hom}}(\pi,G)_1$, the connected component of the trivial representation, in terms of invariants of the Weyl group $W$ of $G$.
The topology of the spaces ${{\rm Hom}}(\pi,G)$ has been studied extensively in recent years, in particular when $\pi$ is a free abelian group [@adem2007commuting; @bairdcohomology; @BJS; @gomez.pettet.souto; @pettet.souto; @stafa.comm; @stafa.comm.2]; in this case ${{\rm Hom}}({\mathbb{Z}}^n, G)$ is known as *the space of ordered commuting $n$-tuples in $G$*. The case in which $\pi$ is a finitely generated nilpotent group was recently analyzed by Bergeron and Silberman [@bergeron; @bergeron2016note]. These spaces and variations thereon, such as the space of almost commuting elements [@borel2002almost], have been studied in various settings, including work of Witten and Kac–Smilga on supersymmetric Yang-Mills theory [@witten1; @witten2; @kac.smilga].
Our formula for the Poincaré series of the identity component ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ builds on work of Baird [@bairdcohomology] and Cohen–Reiner–Stafa [@stafa.comm]. In fact, we give a formula for a more refined *Hilbert–Poincaré series*, which is a tri-graded version of the standard Poincaré series that arises from a certain cohomological description of these spaces due to Baird. Work of Bergeron and Silberman [@bergeron2016note] then leads immediately to results for nilpotent groups. The formula we produce is obtained by comparing stable splittings of ${{\rm Hom}}({{\mathbb Z}}^n,G)$ and of the space ${{\rm Comm}}(G)$ introduced in [@stafa.comm]. The latter space is an analogue of the James reduced product construction for commuting elements in $G$; see Section \[Comm-sec\].
Main results
------------
The main purpose of this paper is to give an explicit formula for the Poincaré series of the component ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$.
\[thm: Poincare series of Hom INTRO\] The Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ is given by $$P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q)=\frac{\prod_{i=1}^r (1-q^{2d_i})}{|W|}
\left(\sum_{w\in W} \frac{ \det(1+qw)^n}{\det(1-q^2w)} \right),$$ where the integers $d_1,\dots,d_r$ are the characteristic degrees of the Weyl group $W$.
A similar formula for the homology of the character variety ${{\rm Hom}}({{\mathbb Z}}^n, G)/G$ appears in Stafa [@Stafa-char-var].
Some comments are in order regarding the above formula. Let $T \subset G$ be a maximal torus with Lie algebra $\mathfrak{t}$. Then the Weyl group $W$ acts on the dual space $\mathfrak{t}^*$ as a finite reflection group, and the determinants in the formula are defined in terms of this linear representation of $W$. The characteristic degrees of $W$ arise by considering the induced action of $W$ on the polynomial algebra ${{\mathbb R}}[x_1, \ldots, x_r]$, where the $x_i$ form a basis for $\mathfrak{t}^*$ (so $r ={{\rm rank\,}}(G)$). It is a theorem of Shephard–Todd [@shephard1954finite] and Chevalley [@chevalley1955invariants] that the $W$–invariants ${{\mathbb R}}[x_1, \ldots, x_r]^W$ form a polynomial ring with $r$ homogeneous generators. The characteristic degrees of $W$ are then the degrees of the homogeneous generators for ${{\mathbb R}}[x_1, \ldots, x_r]^W$. These degrees are well-known, and are displayed in Table \[table: characteristic degrees\]. For further discussion of these ideas, see Section \[FRG\].
The spaces ${{\rm Hom}}(\pi,G)$ are not path-connected in general; see for instance [@giese.sjerve] where the path components of ${{\rm Hom}}({{\mathbb Z}}^n,SO(3))$ are described. However, if there is only one conjugacy class of maximal abelian subgroups in $G$, namely the conjugacy class of maximal tori, then ${{\rm Hom}}({{\mathbb Z}}^n,G)$ and ${{\rm Comm}}(G)$ are both path-connected. This is true, for instance, if $G=U(n)$, $SU(n)$, or $Sp(n)$; on the other hand, ${{\rm Hom}}({{\mathbb Z}}^n,SO(2n+1))$ is disconnected for $n{\geqslant}2$ and ${{\rm Hom}}({{\mathbb Z}}^n,G_2)$ is disconnected for $n{\geqslant}3$. In fact, Kac and Smilga have classified those compact, simple Lie groups for which ${{\rm Hom}}({{\mathbb Z}}^n, G)$ is path connected [@kac.smilga]. Moreover, when $G$ is semisimple and simply connected, it is a theorem of Richardson that ${{\rm Hom}}({{\mathbb Z}}^2, G)$ is an irreducible algebraic variety, and hence is connected [@richardson1979commuting].
Theorem \[thm: Poincare series of Hom INTRO\] can also be applied to nilpotent groups. Let $F_n \unrhd \Gamma^2_n \unrhd \Gamma^3_n \cdots$ be the descending central series of the free group $F_n$. Bergeron and Silberman [@bergeron2016note] show that for each $m{\geqslant}2$, the identity component ${{\rm Hom}}(F_n/\Gamma^m_n,G)_1$ consists entirely of abelian representations. In fact, they show that if $N$ is a finitely generated nilpotent group, then the natural map $${{\rm Hom}}(N/[N,N], G)\longrightarrow {{\rm Hom}}(N, G)$$ restricts to a homeomorphism between the identity components. Since $G$ admits a neighborhood $U$ of the identity that contains no subgroup other than the trivial subgroup, every representation in the identity component of ${{\rm Hom}}(N/[N,N], G)$ kills the torsion subgroup of $N/[N,N]$. Thus our main result also yields the homology of the identity component in ${{\rm Hom}}(N, G)$.
The assumption that $G$ is compact is not in fact very restrictive, since if $G$ is the complex points of a connected reductive linear algebraic group over ${{\mathbb C}}$ (or the real points if $G$ is defined over ${{\mathbb R}}$) and $K{\leqslant}G$ is a maximal compact subgroup, then Pettet and Souto [@pettet.souto] showed that ${{\rm Hom}}({{\mathbb Z}}^n,G)$ deformation retracts onto ${{\rm Hom}}({{\mathbb Z}}^n,K)$. For simplicity, we refer to such groups $G$ simply as [reductive]{} Lie groups. Bergeron [@bergeron] generalized this result to finitely generated nilpotent groups (in fact, Bergeron’s result also allows $G$ to be disconnected). It should be emphasized, however, that for other discrete groups $\pi$ it is known that the homotopy types of ${{\rm Hom}}(\pi, G)$ and ${{\rm Hom}}(\pi, K)$ can differ; examples appear in [@adem2007commuting].
The above descending central series can be used to define a filtration of the James reduced product of $G$, denoted $J(G)$, which is also known as the free monoid generated by the based space $G$. The filtration is given by the spaces $${{\rm Comm}}(G)={{\rm X}}(2,G)\subset {{\rm X}}(3,G) \subset \cdots \subset {{\rm X}}(\infty,G)=J(G)$$ defined in Section \[sec: topology Hom\], and was studied by Cohen and Stafa [@stafa.comm]. Here we show that all the terms in the filtration have the same Poincaré series.
\[thm: Poincare series of X(q,G) Intro\] The inclusion $${{\rm Comm}}(G)_1{\hookrightarrow}{{\rm X}}(m, G)_1$$ induces an isomorphism in homology for every $m\geq 2$.
Structure of the paper
----------------------
We start in Section \[sec: topology Hom\] by giving some basic topological properties of the spaces of homomorphisms ${{\rm Hom}}({{\mathbb Z}}^n,G)$ and we define the spaces ${{\rm X}}(m, G) \subset J(G)$ considered above. In particular, we explain how all these spaces decompose into wedge sums after a single suspension. We prove Theorems \[thm: Poincare series of Hom INTRO\] and \[thm: Poincare series of X(q,G) Intro\] in Sections \[sec: Poincare series of Hom(Zn,G)\] and \[sec: Poincare series of X(q,G)\], respectively. In Section \[ungraded-sec\], we consider the ungraded cohomology and the rational complex $K$–theory of ${{\rm Hom}}({{\mathbb Z}}^n, G)_1$. Finally, we give examples of Hilbert–Poincaré series in Section \[sec: examples Poincare\], most notably for the exceptional Lie group $G_2$.
[**Acknowledgements:**]{} We thank Alejandro Adem and Fred Cohen for helpful comments, and Mark Ramras for pointing out the Binomial Theorem, which simplified our formulas.
Topology of commuting elements in Lie groups {#sec: topology Hom}
============================================
Let $G$ be a compact and connected Lie group. Fix a maximal torus $T{\leqslant}G$ and let $W = N_G(T)/T$ be the Weyl group of $G$. The map $$\label{c}G \times T \to G$$ conjugating elements of the maximal torus by elements of $G$ has been studied as far back as Weyl’s work, and can be used to show that the rational cohomology of $G$ is the ring of invariants $[H^\ast(G/T) \otimes H^\ast(T)]^W$. To study the rational cohomology of ${{\rm Hom}}({{\mathbb Z}}^n,G)$ we can proceed as follows. The action by conjugation of $T$ on itself is trivial, so (\[c\]) descends to a map $G/T\times T \to G$, which is invariant under the $W$–action $([g],t)\cdot [n] = ([gn],n^{-1}tn)$, where $n\in N_G (T)$. In [@bairdcohomology] Baird showed that the induced map $$\begin{aligned}
\label{theta}
\begin{split}
\theta_n: G/T\times_{W} T^n &\to {{\rm Hom}}({{\mathbb Z}}^n,G)\\
[g,t_1,\dots,t_n] &\mapsto (gt_1g^{-1},\dots, gt_ng^{-1})
\end{split}\end{aligned}$$ surjects onto ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ and induces an isomorphism of rational cohomology groups $$\label{Baird}
H^\ast({{\rm Hom}}({{\mathbb Z}}^n,G)_1;{{\mathbb Q}}) {\cong}[H^\ast(G/T;{{\mathbb Q}})\otimes H^\ast(T^n;{{\mathbb Q}})]^W.$$ This recovers the above fact about the cohomology of $G$ when $n=1$. Baird in fact shows that all torsion in $H^\ast({{\rm Hom}}({{\mathbb Z}}^n,G)_1;{{\mathbb Z}})$ has order dividing $|W|$, but little else is known about the torsion in these spaces, beyond the case of $SU(2)$ [@BJS] and the fact that $H_1 ({{\rm Hom}}({{\mathbb Z}}^n,G)_1; {{\mathbb Z}})$ is torsion-free [@gomez.pettet.souto].
We note that as an ungraded ${{\mathbb Q}}W$-module, the ring $H^\ast(G/T;{{\mathbb Q}})$ is simply the regular representation ${{\mathbb Q}}W$, a well-known fact that dates back to Borel [@borel1953cohomologie] – a proof can be found for instance in the exposition by M. Reeder [@reeder1995cohomology]. This fact implies that the ungraded cohomology of the homomorphism space is just a regraded version of the cohomology of $T^n$. As we will see, various topological constructions related to the maps $\theta_n$ enjoy a similar structure in their cohomology.
Adem and Cohen [@adem2007commuting] showed that there is a homotopy decomposition of the suspension of ${{\rm Hom}}({{\mathbb Z}}^n,G)$ into a wedge sum of *smaller* spaces as follows $$\label{eqn: stable decomp Hom}
\Sigma {{\rm Hom}}({{\mathbb Z}}^n,G) \simeq \Sigma \bigvee_{1\leq k \leq n}
\bigvee_{n \choose k} \widehat{{{\rm Hom}}}({{\mathbb Z}}^k,G),$$ where $\widehat{{{\rm Hom}}}({{\mathbb Z}}^k,G)$ is the quotient of ${{\rm Hom}}({{\mathbb Z}}^k,G)$ by the subspace consisting of all commuting $k$–tuples $(g_1, \ldots, g_k)$ such that $g_i=1$ for at least one coordinate $i$. This decomposition, along with the analogous decomposition of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ given in Lemma \[dec\], will play a key role in our study of homology.
Recall that the descending central series of a group $\pi$ is the sequence of subgroups of $\pi$ given by $
\pi=\Gamma^1 \unrhd \Gamma^2=[\pi,\pi] \unrhd
\cdots \unrhd \Gamma^{k+1} \unrhd \cdots,
$ where inductively $\Gamma^{k+1}=[\pi,\Gamma^{k}]$. Let $\Gamma^k_n$ be the $k$-th stage in the descending central series of the free group $F_n$, and note that $\Gamma_n^\infty = \bigcap_{k=1}^\infty \Gamma^k_n = 1$. Then we obtain a filtration $$\label{eq: filtration of G^n}
{{\rm Hom}}(F_n/\Gamma^2_n,G)\subset {{\rm Hom}}(F_n/\Gamma^3_n,G)
\subset \cdots \subset {{\rm Hom}}(F_n/\Gamma^\infty_n,G)=G^n$$ of the space $G^n$ by subspaces of nilpotent $n$-tuples, where the first term of the filtration is the space of commuting $n$-tuples, since $F_n/\Gamma^2_n=F_n/[F_n,F_n]={{\mathbb Z}}^n$. It should be noted that this filtration need not be exhaustive; that is, $\bigcup_{k=1}^\infty {{\rm Hom}}(F_n/\Gamma^k_n,G)$ is in general a proper subset of $G^n$. Also note that $T^n \subset {{\rm Hom}}(F_n/\Gamma^2_n,G) = {{\rm Hom}}({{\mathbb Z}}^n,G)$, a fact that will be used later. We obtain the following stable decompositions of the connected components of the trivial representations for nilpotent $n$-tuples.
\[dec\] Let $G$ be either a compact connected Lie group or a reductive connected Lie group. For each $m\geq 2$ there is a homotopy equivalence $$\begin{aligned}
\Sigma {{\rm Hom}}(F_n/\Gamma^m_n,G)_1 \simeq \Sigma
\bigvee_{1\leq k \leq n} \bigvee_{n \choose k}
\widehat{{{\rm Hom}}}(F_k/\Gamma^m_k,G)_1.\end{aligned}$$ In particular there is a homotopy equivalence $$\begin{aligned}
\Sigma {{\rm Hom}}({{\mathbb Z}}^n,G)_1 \simeq \Sigma
\bigvee_{1\leq k \leq n} \bigvee_{n \choose k}
\widehat{{{\rm Hom}}}({{\mathbb Z}}^k,G)_1.\end{aligned}$$
This is a minor modification to the arguments in [@villarreal2016cosimplicial Corollary 2.21], where the corresponding decompositions for the full representation spaces are obtained. The spaces $\{{{\rm Hom}}(F_n/\Gamma^m_n,G)\}_n$ form a simplicial space, which Villarreal shows is simplicially NDR in the sense defined in [@adem.cohen.gitler.bahri.bendersky]. The face and degeneracy maps in these simplicial spaces preserve the identity components, so $\{{{\rm Hom}}(F_n/\Gamma^m_n,G)_1\}_n$ is also a simplicial space, and is again simplicially NDR. The decompositions now follow from the main result of [@adem.cohen.gitler.bahri.bendersky Theorem 1.6].
The James reduced product
-------------------------
The *James reduced product* $J(Y)$ can be defined for any CW-complex $Y$ with basepoint $*$. In our discussion $Y$ is usually a compact Lie group with basepoint the identity element. Define $J(Y)$ as the quotient space $$J(Y):= \bigg( \bigsqcup_{n\geq 0} Y^n \bigg)/\sim$$ where $\sim$ is the relation $(\dots,*,\dots) \sim (\dots,\widehat{*},\dots)$ omitting the coordinates equal to the basepoint. This can also be seen as the free monoid generated by the elements of $Y$ with the basepoint acting as the identity element. It is a classical result that $J(Y)$ is weakly homotopy equivalent to $\Omega\Sigma Y$, the loops on the suspension of $Y$. Moreover, the suspension of $J(Y)$ is given by $$\Sigma J(Y) \simeq \Sigma \bigvee_{n \geq 1} \widehat{Y^n},$$ where $\widehat{Y^n}$ is the $n$-fold smash product. It was first observed by Bott and Samelson [@bott1953pontryagin] that the homology of $J(Y)$ is isomorphic as an algebra to the tensor algebra ${{\mathcal T}}[\widetilde{H}_\ast(Y;R)]$ generated by the reduced homology of $Y$, given that the homology of $Y$ is a free $R$-module. This is a central result used in our calculation.
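For field coefficients the Bott–Samelson description gives a purely formal recipe for the Poincaré series of $J(Y)$: summing over tensor length, $P(J(Y);q)=1/(1-(P(Y;q)-1))$. The following sketch (our own illustration in SymPy; the rank $r$ and truncation degree $N$ are arbitrary choices) checks this bookkeeping for $Y=T^r$, where $P(T^r;q)=(1+q)^r$.

```python
# P(J(Y); q) = 1/(1 - (P(Y; q) - 1)) as a formal power series, illustrated for Y = T^r.
import sympy as sp

q = sp.symbols('q')
r, N = 2, 8                          # torus rank and truncation degree (arbitrary)
p_red = sp.expand((1 + q)**r - 1)    # reduced Poincare polynomial of T^r

# sum over tensor length m; terms with m > N only affect degrees > N
tensor_sum = sum(p_red**m for m in range(N + 1))
lhs = sp.series(tensor_sum, q, 0, N + 1).removeO()
rhs = sp.series(1 / (1 - p_red), q, 0, N + 1).removeO()
print(sp.expand(lhs - rhs) == 0)     # True: both descriptions agree up to degree N
```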
The spaces ${{\rm X}}(m,G)$ {#Comm-sec}
---------------------------
Now consider the case in which $Y = G$, a connected Lie group with basepoint the identity element $1\in G$. A filtration of the free monoid $J(G)$ is given by $$\label{eq: filtration of J(G)}
{{\rm X}}(2,G)\subset {{\rm X}}(3,G) \subset
{{\rm X}}(4,G) \subset \cdots \subset {{\rm X}}(\infty,G)=J(G),$$ where each space is defined by $${{\rm X}}(m,G):=\bigg(\bigsqcup_{n\geq 0} {{\rm Hom}}(F_n/\Gamma^m_n,G) \bigg)/\sim$$ where $\sim$ is the same relation as in $J(G)$. The spaces ${{\rm X}}(m,G)$ and ${{\rm Comm}}(G)={{\rm X}}(2,G)$ were studied in [@stafa.comm], where it was shown that ${{\rm Comm}}(G)$ carries important information about the spaces of commuting $n$-tuples ${{\rm Hom}}({{\mathbb Z}}^n,G)$. However, note that in general the spaces ${{\rm X}}(m,G)$ do not have the structure of a monoid for any $m$. As in the case of spaces of homomorphisms, the spaces ${{\rm X}}(m,G)$ need not be path connected. For instance, the space ${{\rm X}}(2,SO(3))$ has infinitely many path components, as shown in [@stafa.thesis]. We can define the connected component of the trivial representation for each space ${{\rm X}}(m,G)$ by $${{\rm X}}(m,G)_1:=\bigg(\bigsqcup_{n\geq 0}
{{\rm Hom}}(F_n/\Gamma^m_n,G)_1 \bigg)/\sim$$ with ${{\rm X}}(2,G)_1={{\rm Comm}}(G)_1.$ Cohen and Stafa [@stafa.comm Theorem 5.2] show that there is a stable decomposition of this space as follows: $$\label{eqn: stable decomp Comm}
\Sigma {{\rm X}}(m,G) \simeq \Sigma
\bigvee_{k \geq 1} \widehat{{{\rm Hom}}}(F_k/\Gamma^m_k,G).$$
We have an analogous result for the identity components.
\[prop: X(q,G) decomposition\] Let $G$ be either a compact connected Lie group or a reductive connected Lie group. For each $m\geq 2$ there is a homotopy equivalence $$\Sigma {{\rm X}}(m,G)_1 \simeq \Sigma
\bigvee_{k \geq 1} \widehat{{{\rm Hom}}}(F_k/\Gamma^m_k,G)_1.$$ This is true in particular for ${{\rm Comm}}(G)_1$.
This follows from the proof of [@stafa.comm Theorem 5.2].
[Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$]{} {#sec: Poincare series of Hom(Zn,G)}
=========================================================
For a topological space $X$ the (rational) *Poincaré series* is the series $$P(X; q):=\sum_{k\geq 0} {{\rm rank\,}}_{{\mathbb Q}}(H_k(X;{{\mathbb Q}})) q^k.$$ In this section we describe the Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$. Following [@stafa.comm], we will refine the usual grading of cohomology and introduce tri-graded *Hilbert–Poincaré series* for $X = {{\rm Hom}}({{\mathbb Z}}^n,G)$, ${{\rm Comm}}(G)$, or ${{\rm X}}(m,G)$. These additional gradings will facilitate the computation of the Poincaré series itself.
For the remainder of this section, we will drop the coefficient group ${{\mathbb Q}}$ from our notation for (co)homology. The statements are true for any field of characteristic 0, or of characteristic relatively prime to $|W|$.
The maps $\theta_{n}: G/T\times_{W} T^n \to {{\rm Hom}}({{\mathbb Z}}^n,G)$ can be assembled to give a map $$\label{Theta}
\Theta : G/T\times_{W} J(T) \to {{\rm Comm}}(G)$$ which surjects onto the connected component ${{\rm Comm}}(G)_1.$ It was shown in [@stafa.comm] that $\Theta$ induces isomorphisms on the level of rational (co)homology, so rationally we obtain $$\label{Comm}
H^\ast({{\rm Comm}}(G)_1) \cong [H^\ast(G/T)\otimes H^\ast(J(T))]^W
\cong [H^\ast(G/T)\otimes {{\mathcal T}}^*[\widetilde{H}_*(T)] ]^W,$$ where ${{\mathcal T}}^*$ denotes the dual of the tensor algebra. This interpretation of the cohomology in terms of Weyl group invariants allows us to make the following definition. Define the Hilbert–Poincaré series of ${{\rm Comm}}(G)_1$ as the tri-graded series $$P({{\rm Comm}}(G)_1;q,s,t)=\sum_{i,j,m \geq 0} {{\rm rank\,}}A(i,j,m)^W\,\, q^i s^j t^m,$$ where $$A(i,j,m) := H^i (G/T) \otimes {{\mathcal T}}^*[\widetilde{H}_*(T)]_{j,m}$$ and ${{\mathcal T}}^*[\widetilde{H}_*(T)]_{j,m}$ is the dual of the submodule of ${{\mathcal T}}[\widetilde{H}_*(T)]$ generated by the $m$–fold tensors of total cohomological degree $j$.
To recover the ordinary Poincaré series we can set $s$ equal to $q$ and $t$ equal to 1 since the tensor degree does not affect the (co)homological degree. In order to understand this tri-graded version of the Poincaré series, we take a short diversion to discuss the characteristic degrees of a finite reflection group.
Finite reflection groups {#FRG}
------------------------
A finite reflection group is a finite subgroup $W\subset GL_k(\mathbf{k})$, with $\mathbf{k}$ a field of characteristic 0, such that $W$ is generated by reflections. Equivalently, consider an $n$-dimensional vector space $V$ over $\mathbf{k}$ equipped with the action of a finite subgroup $W \subset GL (V)$. There is a corresponding action on the symmetric algebra $R$ of $V$, which is isomorphic to the polynomial algebra $R:=\mathbf{k}[x_1,\dots,x_n]$[^1]. It is a classical result of Chevalley [@chevalley1955invariants] and Shephard–Todd [@shephard1954finite] that when $W$ is generated by reflections, the invariant elements of the $W$-action also form an algebra generated by $n$ elements, and these generators can be chosen to be (algebraically independent) homogeneous polynomials $f_1,\dots,f_n$. Hence the $W$-invariant subalgebra is given by $R^W=\mathbf{k}[f_1,\dots,f_n]$. The degrees of the $f_i$ are independent of the choice of the homogeneous generators. The degrees $d_i={\rm deg}(f_i)$ are called the *characteristic degrees* of the reflection group $W$. See [@springer1974regular; @broue.reflextion.gps] for a thorough exposition.
Let $W$ be the Weyl group of a compact and connected Lie group $G$, which is finite. Then $W$ is a unitary reflection group: $W$ acts on the maximal torus $T$ of $G$, and there is an induced action of $W$ on the Cartan subalgebra $\mathfrak{t}$ of the Lie algebra $\mathfrak{g}$. The actions of $W$ on $\mathfrak{t}$ and its dual $\mathfrak{t}^\ast$ are faithful, so $W$ can be considered as a subgroup of $GL(\mathfrak{t}^\ast)$. Moreover, $W$ is generated by reflections. This action of $W$ has associated characteristic degrees $d_1,\dots,d_r$, where $r$ is the rank of the maximal torus $T$, and it is a well-known fact that $|W|=\prod_i d_i$. Characteristic degrees of reflection groups have many other remarkable properties, outside the scope of this paper.
| Type  | Lie group  | Rank     | $W$                                          | $\lvert W\rvert$  | Characteristic degrees    |
|-------|------------|----------|----------------------------------------------|-------------------|---------------------------|
| $A_n$ | $SU(n+1)$  | $n\geq1$ | $\Sigma_{n+1}$                               | $(n+1)!$          | $2,3,\dots,n+1$           |
| $B_n$ | $SO(2n+1)$ | $n$      | ${{\mathbb Z}}_2^n\rtimes\Sigma_n$           | $n!2^n$           | $2,4,\dots,2n$            |
| $C_n$ | $Sp(n)$    | $n$      | ${{\mathbb Z}}_2^n\rtimes\Sigma_n$           | $n!2^n$           | $2,4,\dots,2n$            |
| $D_n$ | $SO(2n)$   | $n$      | $H_n \rtimes\Sigma_n$                        | $n!2^{n-1}$       | $2,4,\dots,2n-2,n$        |
| $G_2$ | $G_2$      | 2        | $D_{2^2\cdot 3}$                             | 12                | $2,6$                     |
| $F_4$ | $F_4$      | 4        | $D_{2^7\cdot 3^2}$                           | 1,152             | $2,6,8,12$                |
| $E_6$ | $E_6$      | 6        | $O(6,{{\mathbb F}}_2)$                       | 51,840            | $2,5,6,8,9,12$            |
| $E_7$ | $E_7$      | 7        | $O(7,{{\mathbb F}}_2)\times {{\mathbb Z}}_2$ | 2,903,040         | $2,6,8,10,12,14,18$       |
| $E_8$ | $E_8$      | 8        | $\widehat{O(8, {{\mathbb F}}_2)}$            | $2^{14}3^5 5^2 7$ | $2,8,12,14,18,20,24,30$   |
: Characteristic degrees of Weyl groups $W$[]{data-label="table: characteristic degrees"}
As an example consider the unitary group $U(n)$ with Weyl group the symmetric group $\Sigma_n$ on $n$ letters. The rank of $U(n)$ is $n$ and the $\Sigma_n$ acts on the maximal torus $T=(S^1)^n$ by permuting the coordinates, so it acts on $\mathfrak{t}$ by permuting the basis vectors. Therefore, as a subgroup of $GL(\mathfrak{t}^\ast)$ the Weyl group $\Sigma_n$ consists of permutation matrices. The invariant subalgebra is then generated by the elementary symmetric polynomials $\epsilon_1,\dots,\epsilon_n$, with degrees $d_i={\rm deg}(\epsilon_i)=i$ for $i=1,\dots,n.$
Table \[table: characteristic degrees\] summarizes the Weyl groups and their associated characteristic degrees for families of simple Lie groups, including exceptional Lie groups. In the column for $W$, the group $H_n$ is the kernel of the multiplication map ${{\mathbb Z}}_2^n = \{\pm 1\}^n\to {{\mathbb Z}}_2$ (so $H_n$ consists of $n$–tuples containing an even number of $-1$’s), $D_n$ denotes the dihedral group of order $n$, and $\widehat{O(8, {{\mathbb F}}_2)}$ is a double cover of ${O(8, {{\mathbb F}}_2)}$. Similar information about characteristic degrees can also be found in [@springer1974regular p. 175] and [@humphreys1992reflection p. 59].
As shown in [@stafa.comm], the information in Table \[table: characteristic degrees\] and the realization of the Weyl group $W$ as a subgroup of $GL(\mathfrak{t}^\ast)$ suffice to describe the rational cohomology of ${{\rm Comm}}(G)_1.$ This information will be used below to describe the corresponding Hilbert–Poincaré series for ${{\rm Hom}}({{\mathbb Z}}^k,G)_1$.
Hilbert–Poincaré series
-----------------------
Suppose $G$ has rank $r$. Let us denote by $A_W (q)$ the quantity $$A_W (q):=\frac{\prod_{i=1}^r (1-q^{2d_i})}{|W|},$$ where $d_1,\dots,d_r$ are the characteristic degrees of $W$. It was shown by Cohen, Reiner and Stafa [@stafa.comm] that the Hilbert–Poincaré series of ${{\rm Comm}}(G)_1$ is given by the following infinite series.
Let $G$ be a compact and connected Lie group with maximal torus $T$ and Weyl group $W$. Then the Hilbert–Poincaré series of the connected component of the trivial representation in ${{\rm Comm}}(G)$ is given by $$\label{eqn: Poincere series Comm 3}
\ds P({{\rm Comm}}(G)_1;q,s,t) =
A_W (q) \sum_{w\in W}
\frac{1}{\det(1-q^2w)(1-t(\det(1+sw)-1))}.$$
Using this theorem and stable decompositions of ${{\rm Comm}}(G)_1$ given above, we will now describe the Hilbert–Poincaré polynomial of the space of ordered pairwise commuting $n$-tuples. We begin with the following result, which is the fundamental step in our calculation of Poincaré polynomials for homomorphism spaces.
\[prop: homology of hom hat\] For $m{\geqslant}1$, the reduced Hilbert–Poincaré series of $\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1$ is given by $$\label{Phat}
P(\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1;q,s) =
A_W (q) \sum_{w\in W} \frac{(\det(1+sw)-1)^m}{\det(1-q^2 w)}.$$ In particular, setting $s=q$ gives the reduced Poincaré series of $\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1$.
When $m=0$, the same formulas yield the (unreduced) Hilbert–Poincaré and Poincaré series of the one-point space $\widehat{{{\rm Hom}}}({{\mathbb Z}}^0,G)_1$.
When $m=0$, this result asserts that the above series reduces to the constant series 1. An algebraic explanation of this fact, in terms of Molien’s Theorem, is given at the end of this section.
The bigrading in this Hilbert–Poincaré series arises from applying the homology isomorphism (\[hom-hat-iso\]) described in the proof, together with the Künneth Theorem. More specifically, let $\widehat{T^m}$ denote the $m$–fold smash product of the maximal torus $T{\leqslant}G$ with itself. Then the coefficient of $q^i s^j$ in the above Hilbert–Poincaré series is the rank of the subspace $[H^i (G/T) \otimes H^j( \widehat{T^m})]^W$ of $W$–invariant elements.
First rearrange the terms in the Hilbert–Poincaré series of ${{\rm Comm}}(G)_1$: $$\begin{aligned}
\ds P({{\rm Comm}}(G)_1;q,s,t)
&= A_W (q) \sum_{w\in W}\frac{1}{\det(1-q^2w)
(1-t(\det(1+sw)-1))}\\
&= A_W (q) \sum_{w\in W}\frac{\sum_{m=0}^\infty
\big(t(\det(1+sw)-1)\big)^m}{\det(1-q^2w)}\\
&= A_W (q) \sum_{w\in W} \sum_{m=0}^\infty \frac{
(\det(1+sw)-1)^m t^m}{\det(1-q^2w)}\\
&= A_W (q) \sum_{m=0}^\infty \sum_{w\in W} \frac{
(\det(1+sw)-1)^m t^m}{\det(1-q^2w)}\\
&= \sum_{m=0}^\infty \left( A_W (q) \sum_{w\in W} \frac{
(\det(1+sw)-1)^m}{\det(1-q^2w)}\right) t^m.\\\end{aligned}$$ We claim that after setting $s=q$, the coefficient of $t^m$ in $P({{\rm Comm}}(G)_1;q,s,t)$ is the Poincaré series of the stable wedge summand $\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1$ appearing in the decomposition of ${{\rm Comm}}(G)_1$ given by Proposition \[prop: X(q,G) decomposition\].
Recall that our tri-grading of the (co)homology of ${{\rm Comm}}(G)_1$ comes from the natural map $$\Theta\co G/T \times_W J(T){\longrightarrow}{{\rm Comm}}(G)_1,$$ (see (\[Theta\])) which induces isomorphisms in (rational) cohomology. On the left-hand side, we have $$H^*(G/T \times_W J(T)) {\cong}\big(H^*(G/T)\otimes H^*(J(T))\big)^W
{\cong}\big(H^*(G/T)\otimes {{\mathcal T}}^*[{\widetilde{H}_*(T)}]\big)^W.$$ Let ${{\mathcal T}}^*_m[{\widetilde{H}_*(T)]}$ denote the dual of the submodule $${{\mathcal T}}_m[{\widetilde{H}_*(T)}]\subset {{\mathcal T}}[{\widetilde{H}_*(T)}]$$ of $m$–fold tensors. The action of $W$ preserves these submodules, so we obtain a decomposition $$H^*(G/T \times_W J(T)) {\cong}\bigoplus_m \big(H^*(G/T)\otimes {{\mathcal T}}^*_m[{\widetilde{H}_*(T)}]\big)^W.$$ Note that for $m>0$, the terms in this decomposition are in fact the reduced cohomology of $G/T{\times}_W \widehat{T^m}$, where $\widehat{T^m}$ denotes the $m$–fold smash product of $T$ with itself, so the coefficient of $t^m$ in $P({{\rm Comm}}(G)_1;q,s,t)$ is the (bigraded, reduced) Hilbert–Poincaré series of $G/T{\times}_W \widehat{T^m}$. Similarly, the $m=0$ term in this decomposition is unreduced cohomology of $G/T{\times}_W \widehat{T^0} = (G/T)/W$. Note that the rational cohomology of $G/T{\times}_W 1{\cong}(G/T)/W$ is trivial, since the action of $W$ on $H^* (G/T)$ is the regular representation.
To complete the proof, it will suffice to show that the map $$\label{hom-hat-iso}G/T{\times}_W \widehat{T^m} {\longrightarrow}\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1$$ is an isomorphism in (rational) cohomology. As shown in the proof of [@stafa.comm Theorem 6.3], the induced map $$\big(G/T{\times}_W \widehat{T^m} \big)/(G/T{\times}_W 1) {\longrightarrow}\widehat{{{\rm Hom}}}({{\mathbb Z}}^m,G)_1$$ induces an equivalence in rational cohomology. But the map $$G/T{\times}_W \widehat{T^m} {\longrightarrow}\big(G/T{\times}_W \widehat{T^m} \big)/(G/T{\times}_W 1)$$ is also an equivalence in rational cohomology, because the rational cohomology of $G/T{\times}_W 1{\cong}(G/T)/W$ is trivial, as noted above.
Baird’s formula (\[Baird\]), together with the Künneth Theorem, provides a bigraded Hilbert–Poincaré series for ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$, in which the coefficient of $q^i s^j$ records the rank of $[H^i (G/T)\otimes H^j(T^n)]^W$. We now compute this series.
\[thm: Poincare series of Hom\] The homology of the component of the trivial representation in the space of commuting $n$-tuples in $G$ is given by the following Hilbert–Poincaré series: $$\label{eqn: Hilb-Poincare series of Hom(Zn,G)1}
P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q,s)=A_W (q)\sum_{w\in W}
\left(\sum_{k=0}^n {n \choose k} \frac{(\det(1+sw)-1)^k}{\det(1-q^2w)} \right).$$ The Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ is given by $$\label{eqn: Poincare series of Hom(Zn,G)1}
P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q)=A_W (q) \sum_{w\in W} \frac{\det(1+qw)^n}{\det(1-q^2w)}.$$
Consider the bigraded series $$\label{PHomqs}
P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q,s)= \sum_{k=0}^n {n \choose k} A_W (q)
\left(\sum_{w\in W} \frac{(\det(1+sw)-1)^k}{\det(1-q^2w)} \right),$$ in which the summands are the Hilbert–Poincaré series from Proposition \[prop: homology of hom hat\]. Since the terms in this sum match the terms from the stable decomposition of the space ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ in equation (\[eqn: stable decomp Hom\]), we see that setting $q=s$ in (\[PHomqs\]) yields the Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$: $$\begin{aligned}
P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q)= &A_W (q)\sum_{k=0}^n {n \choose k}
\left(\sum_{w\in W} \frac{(\det(1+qw)-1)^k}{\det(1-q^2w)} \right)\\
=& A_W (q)
\sum_{w\in W} \frac{\sum_{k=0}^n {n \choose k} (\det(1+qw)-1)^k}{\det(1-q^2w)}.\end{aligned}$$ Setting $x = \det(1+qw)-1$ in the binomial expansion $$(1+x)^n = \sum_{k=0}^n {n \choose k} x^k$$ gives $$\sum_{k=0}^n {n \choose k} (\det(1+qw)-1)^k = \det(1+qw)^n,$$ yielding the simplified form (\[eqn: Poincare series of Hom(Zn,G)1\]).
Finally, we check that the bigrading in $P({{\rm Hom}}({{\mathbb Z}}^n,G)_1;q,s)$ agrees with the bigrading arising from the Künneth Theorem applied to (\[Baird\]). More precisely, we want to show that for each $i, j{\geqslant}0$, $$\label{qisj} {{\rm rank\,}}[H^i (G/T) \otimes H^j (T^n)]^W
= \sum_{k=0}^n {n\choose k} {{\rm rank\,}}[H^i (G/T) \otimes H^j (\widehat{T^k})]^W.$$ The spaces $G/T {\times}T^n$ form a simplicial space $G/T {\times}T^\bullet$ as $n$ varies, where the simplicial structure arises from the bar construction on $T$ (so the face and degeneracy maps are the identity on the $G/T$ factors). The main result of [@adem.cohen.gitler.bahri.bendersky] provides a stable splitting of the spaces $G/T{\times}T^n$: $$\gamma\co {\Sigma}(G/T {\times}T^n)
{\stackrel{{\simeq}}{{\longrightarrow}}}
\bigvee_{k=0}^n \bigvee_{{n\choose k}} {\Sigma}(G/T {\times}\widehat{T^k}).$$ In fact, the stable splittings from [@adem.cohen.gitler.bahri.bendersky] apply to any (sufficiently nice) simplicial space, and are natural with respect to simplicial maps. In particular, we can consider the simplicial maps $$G/T \longleftarrow G/T {\times}T^\bullet {\longrightarrow}T^\bullet,$$ where $G/T$ is viewed as a constant simplicial space, and $ T^\bullet$ is the bar construction on $T$. Naturality of the splittings yields a commutative diagram
$$\begin{array}{ccccc}
\Sigma (G/T) & \longleftarrow & \displaystyle\bigvee_{k=0}^n \bigvee_{{n\choose k}} {\Sigma}(G/T {\times}\widehat{T^k}) & \longrightarrow & \displaystyle\bigvee_{k=0}^n \bigvee_{{n\choose k}} {\Sigma}\widehat{T^k}\\
\Big\uparrow & & \Big\uparrow{\scriptstyle\gamma} & & \Big\uparrow\\
\Sigma (G/T) & \longleftarrow & {\Sigma}(G/T {\times}T^n) & \longrightarrow & {\Sigma}T^n,
\end{array}$$
in which the vertical maps are weak equivalences. Commutativity implies that the map on cohomology induced by $\gamma$ respects the Künneth decompositions of $H^p ({\Sigma}(G/T {\times}T^n))$ and $H^p ({\Sigma}(G/T {\times}\widehat{T^k}))$ ($p{\geqslant}1$), so that $\gamma$ induces isomorphisms $$\label{gamma*}\bigoplus_{k=0}^n \bigoplus_{{n\choose k}} (H^i(G/T)\otimes H^j( \widehat{T^k}))
{\stackrel{{\cong}}{{\longrightarrow}}} H^i (G/T)\otimes H^j (T^n)$$ for each $i,j{\geqslant}0$. Moreover, $W$ acts simplicially on $G/T{\times}T^\bullet$, so naturality implies that $\gamma$ is $W$–equivariant, and hence the maps (\[gamma\*\]) induce isomorphisms when restricted to $W$–invariants. This establishes the desired equality (\[qisj\]).
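The formula of Theorem \[thm: Poincare series of Hom\] is straightforward to evaluate symbolically once the Weyl group is given as a list of matrices acting on $\mathfrak{t}^\ast$, together with the characteristic degrees. The helper below is a SymPy sketch of ours (the function name is arbitrary, not taken from the references); it reproduces the rank-one case of $SU(2)$, and evaluating the result at $q=\pm 1$ illustrates the total rank $2^{rn}$ and the vanishing Euler characteristic discussed below.

```python
# Evaluate P(Hom(Z^n, G)_1; q) = A_W(q) * sum_w det(1+qw)^n / det(1-q^2 w).
import sympy as sp

q = sp.symbols('q')

def poincare_hom(weyl_matrices, degrees, n):
    """Poincare series of Hom(Z^n, G)_1 from Weyl group matrices and degrees."""
    A_W = sp.Mul(*[1 - q**(2 * d) for d in degrees]) / len(weyl_matrices)
    total = sum((sp.eye(w.shape[0]) + q * w).det()**n /
                (sp.eye(w.shape[0]) - q**2 * w).det() for w in weyl_matrices)
    return sp.expand(sp.cancel(A_W * total))

# SU(2): rank-one torus, W = Z/2 acting on t* by +1 and -1, characteristic degree 2.
W_su2 = [sp.Matrix([[1]]), sp.Matrix([[-1]])]
p = poincare_hom(W_su2, [2], 2)
print(p)                # 2*q**3 + q**2 + 1
print(p.subs(q, 1))     # 4 = 2^(r*n) with r = 1, n = 2
print(p.subs(q, -1))    # 0, i.e. vanishing Euler characteristic
```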
Since ${{\rm Hom}}({{\mathbb Z}}^n,G)_{1}$ is path connected, the constant term in its Poincaré series must be 1. This can be understood in terms of a classical theorem of Molien [@molien1897] (also see [@shephard1954finite p. 289]). Let $R=\mathbf{k}[x_1,\dots,x_r]$ and $W$ be as above, with $x_1,\dots,x_r$ in degree 1. Molien’s Theorem states that the number of linearly independent elements in degree $m$ in the invariant ring $R^W=\mathbf{k}[x_1,\dots,x_r]^W$ is given by the coefficients of the generating function $$\sum_{m=0}^{\infty} l_m q^m = \frac{1}{|W|}\sum_{w\in W} \frac{1}{\det(1-qw)} .$$ Moreover, Chevalley [@chevalley1955invariants] and Shephard–Todd [@shephard1954finite] give the following generating function for $R^W=\mathbf{k}[f_1,\dots,f_r]$ $$\sum_{m=0}^{\infty} l_m q^m =\prod_{i=1}^r \frac{1}{(1-q^{d_i})},$$ Therefore, after doubling the degree of $q$ one obtains the equation $$1 =\frac{\prod_{i=1}^r (1-q^{2d_i})}{|W|} \sum_{w\in W} \frac{1}{\det(1-q^2w)},$$ which corresponds to the constant term in the Hilbert–Poincaré series of the spaces of homomorphisms ${{\rm Hom}}({{\mathbb Z}}^n,G)_{1}$ in Theorem \[thm: Poincare series of Hom\].
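Molien’s formula itself is just as easy to test symbolically. The snippet below (again our own SymPy illustration) verifies it for $W=\Sigma_3$ acting by permutation matrices on ${{\mathbb R}}^3$, the $U(3)$ row of Table \[table: characteristic degrees\], whose characteristic degrees are $1,2,3$.

```python
# Molien series of Sigma_3 acting by permutation matrices on R^3:
# (1/|W|) sum_w 1/det(1 - q w)  should equal  1/((1-q)(1-q^2)(1-q^3)).
import sympy as sp
from itertools import permutations

q = sp.symbols('q')
mats = [sp.Matrix(3, 3, lambda i, j: 1 if j == s[i] else 0)
        for s in permutations(range(3))]

molien = sp.Rational(1, len(mats)) * sum(1 / (sp.eye(3) - q * w).det() for w in mats)
closed = 1 / ((1 - q) * (1 - q**2) * (1 - q**3))
print(sp.cancel(molien - closed) == 0)   # True
```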
By work of Gomez–Pettet–Souto [@gomez.pettet.souto], $$\pi_1 ({{\rm Hom}}({{\mathbb Z}}^n, G)_1) \cong (\pi_1 G)^n.$$ It follows that $$\label{GPS}{{\rm rank\,}}(H^1 ({{\rm Hom}}({{\mathbb Z}}^n, G)_1)) = n \cdot {{\rm rank\,}}(H^1 G).$$ This can in fact be seen directly from the formula in Theorem \[thm: Poincare series of Hom\] by analyzing the coefficient of $q$. Indeed, any non-zero coefficient of $q$ must come from one of the terms $\det (1+qw)^n$. We have $$\det (1+qw) = \prod (1+ \lambda(w)q),$$ where $\lambda(w)$ ranges over the eigenvalues of $w$ (counted with multiplicity). Hence the constant term of $\det (1+qw)$ is 1, and the coefficient of $q$ is the trace of $w$ (acting on $\mathfrak{t}^*$). It follows that the coefficient of $q$ in $P({{\rm Hom}}({{\mathbb Z}}^n, G)_1; q)$ is precisely $$\frac{n}{|W|} \sum_{w\in W} {{\rm trace\,}}(w) = n \langle \chi, 1\rangle = n\cdot {{\rm rank\,}}((\mathfrak{t}^*)^W),$$ where $\chi$ is the character of the representation of $W$ on $\mathfrak{t}^*$ and $\langle \chi, 1\rangle$ is the inner product of this character with the trivial 1-dimensional character. Since this representation is isomorphic to the natural representation of $W$ on $H^1 (T; {{\mathbb C}})$, we find that ${{\rm rank\,}}(H^1 ({{\rm Hom}}({{\mathbb Z}}^n, G)_1)) = n\cdot {{\rm rank\,}}(H^1 (T; {{\mathbb C}})^W)$. As discussed above, $H^*(G) {\cong}[H^\ast(G/T) \otimes H^\ast(T)]^W$, and the action of $W$ on $H^*(G/T)$ is the regular representation. Hence $$H^1 (G) {\cong}(H^1(T))^W,$$ and combining the previous two formulas yields (\[GPS\]).
Poincaré series of ${{\rm X}}(m,G)_1$ {#sec: Poincare series of X(q,G)}
=====================================
The following theorem describes the Poincaré series of ${{\rm X}}(m,G)_1$ for all $m \geq 2.$ Note that when $G$ is a compact connected Lie group, it follows from Bergeron–Silberman [@bergeron2016note] that ${{\rm X}}(m,G)_1 = {{\rm Comm}}(G)_1$.
Let $G$ be a reductive connected Lie group. Then the natural inclusion maps $$X(2,G)_1 \hookrightarrow X(3,G)_1 \hookrightarrow
\cdots \hookrightarrow X(m,G)_1 \hookrightarrow \cdots$$ all induce homotopy equivalences after one suspension. In particular, the Hilbert–Poincaré series of ${{\rm X}}(m,G)_1$, for all $m\geq 2$, is given by $$\label{eqn: Poincere series Comm 4}
P({{\rm X}}(m,G)_1;q,s,t) =
A_W (q) \sum_{w\in W}
\frac{1}{\det(1-q^2w)(1-t(\det(1+sw)-1))},$$ where $W$ is the Weyl group of a maximal compact subgroup $K{\leqslant}G$.
By Proposition \[prop: X(q,G) decomposition\], there is a stable decomposition of ${{\rm X}}(m,G)_1$ into a wedge sum $$\label{q-s}
\Sigma {{\rm X}}(m,G)_1 \simeq \Sigma
\bigvee_{k \geq 1} \widehat{{{\rm Hom}}}(F_k/\Gamma^m_k,G)_1.$$ Consider the commutative diagram of cofibrations
$$\begin{array}{ccccc}
S_{n,2}(G) & \longrightarrow & {{\rm Hom}}({{\mathbb Z}}^n,G)_{1} & \longrightarrow & \widehat{{{\rm Hom}}}({{\mathbb Z}}^n,G)_{1}\\
\Big\downarrow & & \Big\downarrow & & \Big\downarrow\\
S_{n,m}(G) & \longrightarrow & {{\rm Hom}}(F_n/\Gamma^m_n,G)_{1} & \longrightarrow & \widehat{{{\rm Hom}}}(F_n/\Gamma^m_n,G)_{1},
\end{array}$$
where $S_{n,m}(G)$ is the subspace of ${{\rm Hom}}(F_n/\Gamma^m_n,G)_{1}$ consisting of $n$-tuples with at least one coordinate the identity, and $m \geq 2$. The middle vertical map $$i\co {{\rm Hom}}({{\mathbb Z}}^n,G)_{1} \hookrightarrow {{\rm Hom}}(F_n/\Gamma^m_n,G)_{1}$$ is a homotopy equivalence: by [@bergeron], up to homotopy we can replace $G$ by a maximal compact subgroup, and by [@bergeron2016note], the map $i$ is a homeomorphism in the compact case. The first vertical map $$S_{n,2}(G) \hookrightarrow S_{n,m}(G)$$ is a homotopy equivalence by the Gluing Lemma [@RBrown], since these spaces can be built up inductively as pushouts of subspaces of the form $$\{(g_1, \ldots, g_n) \,:\, g_i =1 \textrm{ for all } i\in I\}$$ for various $I \subset \{1, \ldots n\}$, and on these subspaces the results from [@bergeron] and [@bergeron2016note] apply. Applying the Gluing Lemma again, the third vertical map $$\widehat{{{\rm Hom}}}({{\mathbb Z}}^n,G)_{1} \to \widehat{{{\rm Hom}}}(F_n/\Gamma^m_n,G)_{1}$$ is a homotopy equivalence as well, and the theorem follows from the decompositions (\[q-s\]).
Ungraded cohomology and $K$–theory {#ungraded-sec}
==================================
The *ungraded cohomology* $H^u (X;R)$ of a space $X$ with coefficients in $R$ refers to the ungraded direct sum of all the cohomology groups of $X$ as an $R$–module. It is a classical result [@borel1953cohomologie] that the ungraded cohomology of $G/T$ with rational coefficients, viewed as a $W$–module, is the regular representation ${{\mathbb Q}}W$. This alone yields some interesting consequences. Let $M$ be a graded ${{\mathbb Q}}W$-module. Then it follows that $(H^u(G/T;{{\mathbb Q}})\otimes M)^W {\cong}M.$ Applying this principle to formulas (\[Baird\]) and (\[Comm\]) yields the following result.
Let $G$ be a compact and connected Lie group. Then
1. the ungraded rational cohomology of the compact and connected Lie group $G$ is the same as the ungraded rational cohomology of its maximal torus: $$(H^u(G/T;{{\mathbb Q}})\otimes H^u(T))^W{\cong}H^u(T);$$
2. the ungraded cohomology of ${{\rm Hom}}(F_n/\Gamma^m_n,G)_1$ is given by $$H^u({{\rm Hom}}(F_n/\Gamma^m_n,G)_1;{{\mathbb Q}}) {\cong}H^u(T^n;{{\mathbb Q}})$$ for all integers $m\geq 2.$
It is quite interesting, although not a surprise, that the maximal torus $T\subset G$ plays a fundamental role in the topology of nilpotent representations into $G$, similar to the role it plays in the topology of $G$ from the classical theory of Lie groups. Recall that the rational cohomology of Lie groups can be described by the cohomology of a product of as many spheres of odd dimension as the rank of $G$ [@reeder1995cohomology]. It would however be very compelling to understand, in a topological manner, the regrading process that produces the cohomology of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ and ${{\rm Comm}}(G)_1$ from the cohomology of $T^n$ and $J(T)$, respectively.
\[tot-rk\] Let $G$ be a compact and connected Lie group of rank $r$. Then $$\begin{aligned}
\sum_{k\geq 0} {{\rm rank\,}}( H^k({{\rm Hom}}({{\mathbb Z}}^n,G)_1;{{\mathbb Q}}))=
\sum_{k\geq 0} {{\rm rank\,}}( H^k(T^n;{{\mathbb Q}}) )= 2^{nr}.\end{aligned}$$
Having identified the total rank of the cohomology of these spaces, one can ask if they satisfy Halperin’s Toral Rank Conjecture, which states that if a topological space $X$ has an almost free action of a torus of rank $k$, then the rank of the total cohomology of $X$ is at least $2^k$; see [@halperin1985 Problem 1.4]. In this setting, the conjecture predicts that if a torus $T'$ acts almost freely on the space of homomorphisms ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$, then the rank of $T'$ must be at least $nr.$ Hence it would be interesting to understand almost-free torus actions on these spaces.
Having identified the total rank of the cohomology of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$, we can also identify its rational complex $K$–theory.
\[K-cor\] Let $G$ be a compact and connected Lie group of rank $r$. Then $$\begin{aligned}
{{\rm rank\,}}( K^i({{\rm Hom}}({{\mathbb Z}}^n,G)_1))= 2^{nr-1}.\end{aligned}$$ for every $i$.
Since ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ is a finite CW complex, the Chern character provides an isomorphism from $K^i({{\rm Hom}}({{\mathbb Z}}^n,G)_1)\otimes {\mathbb{Q}}$ to the sum of the rational cohomology groups of ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ in dimensions congruent to $i$ (mod 2). The fibration sequence $$G/T \times T^n {\longrightarrow}G/T \times_W T^n {\longrightarrow}BW$$ implies that the Euler characteristic of $G/T \times_W T^n$ is zero, and the same follows for ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ by Baird’s result (\[Baird\]). Hence $$\begin{aligned}
\sum_{k \textrm{ even}} {{\rm rank\,}}( H^k({{\rm Hom}}({{\mathbb Z}}^n,G)_1) )=
\sum_{k \textrm{ odd}} {{\rm rank\,}}( H^k({{\rm Hom}}({{\mathbb Z}}^n,G)_1)), \end{aligned}$$ and the result follows from Corollary \[tot-rk\].
When $G$ is a product of groups of the form $SU(r)$, $U(q)$, and $Sp(k)$, Adem and Gomez [@AG-equivK Corollary 6.8] showed that the $G$–equivariant $K$–theory ring $K^*_G ({{\rm Hom}}({{\mathbb Z}}^n, G))$ is free as module of rank $2^{nr}$ over the representation ring $R(G)$ (for such $G$ we have ${{\rm Hom}}({{\mathbb Z}}^n, G)_1 = {{\rm Hom}}(Z^n, G)$). In light of Corollary \[K-cor\], it is natural to ask whether the map $R(G) \otimes K^*({{\rm Hom}}({{\mathbb Z}}^n,G)) \to K_G^* ({{\rm Hom}}({{\mathbb Z}}^n,G))$ is an isomorphism for these groups.
We now explain how to compute ${{\rm rank\,}}(H^u({{\rm Hom}}({{\mathbb Z}}^n,G)_1))$, and the Euler characteristic $\chi({{\rm Hom}}({{\mathbb Z}}^n,G)_1)$, directly from Theorem \[thm: Poincare series of Hom\], by setting $q=1$ or $-1$ in the formula $$\label{p}P({{\rm Hom}}({{\mathbb Z}}^n,G)_1; q) =
A_W (q) \sum_{w\in W} \frac{\det(1+qw)^n}{\det(1-q^2w)}.$$ To do so, we must compute the multiplicities of $\pm 1$ as roots of $A_W (q)$ and of $\det (1-q^2 w)$ ($w\in W)$. We have $$A_W (q) = \frac{1}{|W|}\prod_{i=1}^r (1-q^{d_i})(1+q^{d_i}),$$ so the multiplicity of $\pm 1$ as a root of $A_W (q)$ is $r = {{\rm rank\,}}(G)$. On the other hand, $$\det (1-q^2 w) = \prod_i (1-q^2 \lambda_i (w))^{n_i},$$ where the numbers $\lambda_i (w)$ are the eigenvalues of $w$ (acting on $\mathfrak{t}^\ast$) and $n_i$ is the dimension of the corresponding eigenspace. So the multiplicity of $\pm 1$ is the dimension of the eigenspace for $\lambda_i (w) = 1$, which is strictly less than ${{\rm rank\,}}(G)$ unless $w = 1$, in which case it is exactly ${{\rm rank\,}}(G)$. Canceling factors of $1\pm q$ in $$\frac{\prod_{i=1}^r (1-q^{d_i})(1+q^{d_i})}{\det(1-q^2w)}\det(1+qw)^n$$ and plugging in $q=\pm 1$, we see that all terms for $w\neq 1$ are zero.
Now consider what happens when we plug in $q=-1$ into (\[p\]). The term for $w=1$ contains the determinant of $I +qI= I-I = 0$ as a factor, so it too vanishes. This gives another proof that $\chi ({{\rm Hom}}({\mathbb{Z}}^n, G)_1) = 0$.
To calculate ${{\rm rank\,}}(H^u ({{\rm Hom}}({\mathbb{Z}}^n, G)_1))$, we must analyze the $w=1$ term of (\[p\]) more closely. This term has the form $$\begin{aligned}
\frac{\prod_{i=1}^r (1+q^{d_i})(1-q^{d_i})}{|W|} \cdot & \frac{\det((1+q)I)^n}{\det((1-q^2)I)}\\
= &\frac{\prod_{i=1}^r (1+q^{d_i})(1-q^{d_i})}{|W|} \cdot \frac{(1+q)^{rn}}{(1-q^2)^r}\\
= &\frac{\prod_{i=1}^r (1+q^{d_i})(1+q+q^2 +\cdots +q^{d_i-1})}{|W|} \cdot \frac{(1+q)^{rn}}{(1+q)^r}.\end{aligned}$$ Plugging in $q=1$, we find that $${{\rm rank\,}}(H^u ({{\rm Hom}}({\mathbb{Z}}^n, G)_1)) =
\frac{\left(\prod_{i=1}^r 2d_i \right) 2^{rn}}{|W| \cdot 2^r}
=2^{rn},$$ where we have used the equation $\prod_{i=1}^r d_i = |W|$.
Examples of Hilbert–Poincaré series {#sec: examples Poincare}
===================================
Using Theorem \[thm: Poincare series of Hom\] and Table \[table: characteristic degrees\], one can obtain explicit formulas for the Hilbert–Poincaré and Poincaré series described in this article. We demonstrate this for some low-dimensional Lie groups and for the exceptional Lie group $G_2$. We give only the formulas for the Poincaré series. The Hilbert–Poincaré series can then be deduced similarly from Theorem \[thm: Poincare series of Hom\] and are left to the reader.
The maximal torus of $SU(2)$ has rank 1 and the Weyl group is isomorphic to $W={{\mathbb Z}}_2$. The dual space $\mathfrak{t}^\ast$ is 1-dimensional, and $W$ is represented as $\{1,-1\} \subset GL(\mathfrak{t}^\ast)$. The only characteristic degree of $W$ is $d_1=2$. Therefore, we have $A_W (q)=(1-q^4)/2$ and $\det(1+qw)$ equals $1+q$ and $1-q$, for $w$ equal to 1 and -1, respectively, and $$\frac{\det(1+qw)^n}{\det(1-q^2w)}=
\begin{cases}
\ds\frac{(1+q)^n}{1-q^2} & \mbox{if $w=1$}, \\
\\
\ds\frac{(1-q)^n}{1+q^2} & \mbox{if $w=-1$}. \\
\end{cases}$$ We know the space of commuting $n$-tuples in $SU(n)$ is path connected, so it equals the component of the trivial representation. $$\begin{aligned}
P({{\rm Hom}}({{\mathbb Z}}^n,SU(2));q)&= A_W (q) \sum_{w\in {{\mathbb Z}}_2} \frac{\det(1+qw)^n}{\det(1-q^2w)}\\
&= \frac{1}{2}\bigg((1+q)^n (1+q^2) + (1-q)^n(1-q^2)\bigg), \end{aligned}$$ which agrees with calculations in [@bairdcohomology].
The maximal torus of $U(2)$ has rank 2, the Weyl group $W {\cong}{{\mathbb Z}}_2$ acts on $\mathfrak{t}^*$ via the matrices $$\left\{\left(
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right), \left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)\right\},$$ and the characteristic degrees of $W$ are $d_1=1,d_2=2$. We know the space of commuting $n$-tuples in $U(n)$ is path connected, so again it equals the component of the trivial representation. We have $A_W (q)=(1-q^2)(1-q^4)/2$ and $$\frac{\det(1+qw)^n}{\det(1-q^2w)}=
\begin{cases}
\ds\frac{(1+q)^{2n}}{(1-q^2)^2} & \mbox{if $w=1$}, \\
\\
\ds\frac{(1-q^2)^n}{1-q^4} & \mbox{if $w\neq 1$}. \\
\end{cases}$$ Therefore, we get the following Poincaré series $$P({{\rm Hom}}({{\mathbb Z}}^n,U(2));q)=\dfrac{1}{2}\bigg((1+q)^{2n} (1+q^2) + (1-q^2)^{n+1}\bigg).$$ For example, we get $$\begin{aligned}
P({{\rm Hom}}({{\mathbb Z}}^2,U(2));q)&= 1+2q+2q^2+4q^3+5q^4+2q^5,\\
P({{\rm Hom}}({{\mathbb Z}}^3,U(2));q)&= 1 + 3q + 6q^2 + 13q^3 + 18q^4 +13q^5 + 6q^6 + 3q^7 + q^8,\\
P({{\rm Hom}}({{\mathbb Z}}^4,U(2));q)&= 1 + 4q + 12q^2 + 32q^3 + 54q^4 +56q^5 + 44q^6 + 32q^7\\
&\,\,\,\,\,\,\,\,\,\,\, + 17q^8+4q^9.\end{aligned}$$
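For convenience, the displayed coefficient lists can be regenerated directly from the closed form above; the short SymPy loop below (ours) prints them for $n=2,3,4$.

```python
# Coefficients of P(Hom(Z^n, U(2)); q) = ((1+q)^{2n}(1+q^2) + (1-q^2)^{n+1})/2.
import sympy as sp

q = sp.symbols('q')
for n in (2, 3, 4):
    p = sp.expand(((1 + q)**(2 * n) * (1 + q**2) + (1 - q**2)**(n + 1)) / 2)
    print(n, sp.Poly(p, q).all_coeffs()[::-1])   # coefficients in ascending order
```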
The maximal torus has rank 3 and the Weyl group is the symmetric group on 3 letters $$W=\Sigma_3=\{e,(12),(13),(23),(123),(132)\}.$$ The characteristic degrees of $W$ are $1$, $2$, and $3$, so $$A_W (q) = \frac{1}{6}(1-q^2)(1-q^4)(1-q^6).$$ The matrix representations $W \leqslant GL(\mathfrak{t}^\ast)$ can be obtained by applying each permutation in $\Sigma_3$ to the rows of the $3\times 3$ identity matrix $I_{3\times 3}$. This can be done in general for the Weyl group $\Sigma_n$ of $U(n).$ For the transpositions $w=(12), (13), (23)\in W$ and for the 3-cycles we obtain the same determinants, respectively, since they are in the same conjugacy class. Hence we get $$\ds
\frac{\det(1+qw)^n}{\det(1-q^2w)}=
\begin{cases}
\ds\frac{(1+q)^{3n}}{(1-q^2)^3} & \mbox{if $w=e$}, \\
\\
\ds\frac{(1+q)^n(1-q^2)^n}{(1-q^2)(1-q^4)} & \mbox{if $w=(12), (13), (23)$}, \\
\\
\ds\frac{(1+q^3)^n}{1-q^6} & \mbox{if $w=(123),(132)$}. \\
\end{cases}$$ Therefore, the Poincaré series is given by $$\begin{aligned}
P({{\rm Hom}}&({{\mathbb Z}}^n,U(3)),q)
=\frac{1}{6} \bigg(
(1+q^2)(1+q^2+q^4)(1+q)^{3n} \\ &+
3 (1-q^6)(1+q)^n(1-q^2)^n +
2 (1-q^2)(1-q^4)(1+q^3)^n
\bigg).\end{aligned}$$ In particular, the following are the Poincaré series for pairwise commuting pairs, triples, and quadruples in $U(3)$, respectively: $$\begin{aligned}
P({{\rm Hom}}({{\mathbb Z}}^2,U(3)),q) = & 1+2q+2q^2+ 4q^3+7q^4+10q^5+11q^6+8q^7+8q^8\\
&+8q^9+3q^{10}\\
P({{\rm Hom}}({{\mathbb Z}}^3,U(3)),q)= & \,1 +3q + 6q^2 + 14q^3 + 30q^4 + 54q^5 + 73q^6 + 75q^7 + 75q^8\\
& \,\,\,\,+ 73q^9 + 54q^{10} + 30q^{11} + 14q^{12} + 6q^{13} + 3q^{14} + q^{15}, \\
P({{\rm Hom}}({{\mathbb Z}}^4,U(3)),q)= & \,1 +4q + 12q^2 + 36q^3 + 96q^4 + 212q^5 + 357q^6 + 472q^7 \\
& \,\,\,\,+555q^8+ 604q^9 + 574q^{10} + 468q^{11} + 330q^{12} + 204q^{13} \\
& \,\,\,\,+ 113q^{14} + 48q^{15} + 10q^{16}.\end{aligned}$$
Now consider the exceptional Lie group $G_2$, a 14 dimensional submanifold of $SO(7)$, which has rank 2 and Weyl group the dihedral group $W=D_{12}$ of order 12 with presentation $\langle s,t | s^2,t^6,(st)^2 \rangle.$ We can write $W=\{1,t,t^2,t^3,t^4,t^5,s,st,st^2,st^3,st^4,st^5\}$ as a subgroup of $GL(\mathfrak{t}^\ast)$ by setting $$s=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right),
\text{ and }
t=\frac{1}{2}\left(
\begin{array}{cc}
1 & \sqrt{3} \\
-\sqrt{3} & 1 \\
\end{array}
\right).$$ The characteristic degrees of $W$ are 2 and 6 as given in Table \[table: characteristic degrees\]. The space ${{\rm Hom}}({{\mathbb Z}}^n,G_2)$ is not path-connected for $n\geq 3$, since $G_2$ contains a non-toral elementary abelian 2–subgroup of rank 3. Setting $t=1$ and $s=q$, the Poincaré series of ${{\rm Hom}}({{\mathbb Z}}^n,G_2)_1$ is calculated using Equation \[eqn: Poincare series of Hom(Zn,G)1\]: $$\begin{aligned}
P(& {{\rm Hom}}({{\mathbb Z}}^n,G_2)_1;q)= 1 + \frac{1}{12}
\big[(2{q}^{14}-2{q}^{12}-2{q}^{2}+2)(-{q}^{2}+1)^{n-1}\\
&+ (2{q}^{12}-2{q}^{10}-2{q}^{8}+4{q}^{6}-2{q}^{4}-2{q}^{2}+2)({q}^{2}-q+1)^{n}\\
&+ (2{q}^{12}+2{q}^{10}-2{q}^{8}-4{q}^{6}-2{q}^{4}+2{q}^{2}+2)( {q}^{2}+q+1)^{n}\\
&+ ({q}^{12}-2{q}^{10}+2{q}^{8}-2{q}^{6}+2{q}^{4}-2{q}^{2}+1)(-1+q)^{2n}
+(-4{q}^{12}+4)(-{q}^{2}+1)^{n} \\
&+ ({q}^{12}+2{q}^{10}+2{q}^{8}+2{q}^{6}+2{q}^{4}+2{q}^{2}+1)(q+1)^{2n}\big].\end{aligned}$$ For example, for $n=1,2,3$ we obtain: $$\begin{aligned}
P({{\rm Hom}}({{\mathbb Z}}^1,G_2)_1;q)=&1+q^3+q^{11}+q^{14}=P(G_2;q),\\
P({{\rm Hom}}({{\mathbb Z}}^2,G_2)_1;q)=&1+q^2+2q^3+q^4+2q^5+q^6+q^{10}+2q^{11}+2q^{13}+3q^{14},\\
P({{\rm Hom}}({{\mathbb Z}}^3,G_2)_1;q)=&1+3q^2+3q^3+6q^4+9q^5+3q^6+3q^7+3q^8+2q^9+3q^{10}\\
& \,\,\,+3q^{11}+3q^{12}+9q^{13}+6q^{14}+3q^{15}+3q^{16}+q^{18}.\end{aligned}$$
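As a cross-check, the $n=1$ and $n=2$ series above can be regenerated from the matrices $s$ and $t$ displayed earlier. The SymPy sketch below is only such a verification; it assumes nothing beyond Equation \[eqn: Poincare series of Hom(Zn,G)1\] and the characteristic degrees 2 and 6.

```python
# G_2: Weyl group = dihedral group of order 12 generated by the matrices s and t above.
import sympy as sp

q = sp.symbols('q')
s = sp.Matrix([[1, 0], [0, -1]])
t = sp.Rational(1, 2) * sp.Matrix([[1, sp.sqrt(3)], [-sp.sqrt(3), 1]])
W = [t**k for k in range(6)] + [s * t**k for k in range(6)]   # 12 elements

A_W = (1 - q**4) * (1 - q**12) / len(W)
for n in (1, 2):
    total = sum((sp.eye(2) + q * w).det()**n / (sp.eye(2) - q**2 * w).det() for w in W)
    print(n, sp.expand(sp.cancel(A_W * total)))
# n = 1 returns q**14 + q**11 + q**3 + 1, the Poincare polynomial of G_2 itself.
```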
It can be observed from the above formula for the Poincaré series that the *rational homological dimension* of the spaces of commuting $(2k-1)$-tuples in $G_2$ is the same as that for commuting $2k$-tuples, namely $10+4k$. However, it is not clear if there is a topological reason for this phenomenon.
The above formulas suggest that for odd $n$, ${{\rm Hom}}({{\mathbb Z}}^n,G)_1$ is a [rational Poincaré duality space]{}; in particular, the coefficients of the above Poincaré series are palindromic for $n$ odd. A geometric proof of this fact was provided to us by Antolín, Gritschacher, and Villarreal (private communication). Briefly, Baird’s theorem (as discussed in Section \[sec: topology Hom\]) reduces us to showing that for $n$ odd, the action of $W$ on the manifold $G/T {\times}T^n$ is [orientation preserving]{}. From the case $n=1$, where $H^*(G)$ is known, we can see that for each $w\in W$ the action of $w$ on $G/T$ is orientation-preserving if and only if the action of $w$ on $T$ is orientation preserving. Since $W$ acts diagonally on $T^n$, the result follows.
[10]{}
A. Adem, A. Bahri, M. Bendersky, F. R. Cohen, and S. Gitler. On decomposing suspensions of simplicial spaces. , 15(1):91–102, 2009.
A. Adem and F. R. Cohen. Commuting elements and spaces of homomorphisms. , 338(3):587–626, 2007.
Alejandro Adem and José Manuel Gómez. Equivariant [$K$]{}-theory of compact [L]{}ie group actions with maximal rank isotropy. , 5(2):431–457, 2012.
T. Baird. Cohomology of the space of commuting [$n$]{}-tuples in a compact [L]{}ie group. , 7:737–754, 2007.
T. Baird, L. C Jeffrey, and P. Selick. The space of commuting $n$-tuples in [[[SU]{}]{}]{}$(2)$. , 55(3):805–813, 2011.
M. Bergeron. The topology of nilpotent representations in reductive groups and their maximal compact subgroups. , 19:1383––1407, 2015.
M. Bergeron and L. Silberman. A note on nilpotent representations. , 19(1):125–135, 2016.
A. Borel. . , 57(1):115–207, 1953.
A Borel, R. Friedman, and J. Morgan. . Number 747 in Mem. Amer. Math. Soc. AMS, 2002.
R. Bott and H. Samelson. . , 27(1):320–337, 1953.
M. Brou[é]{}. , volume 1988 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 2010.
Ronald Brown. . BookSurge, LLC, Charleston, SC, 2006.
C. Chevalley. Invariants of finite groups generated by reflections. , 77(4):778–782, 1955.
F. R. Cohen and M. Stafa. . In [*[Configurations Spaces: Geometry, Topology and Representation Theory]{}*]{}, volume [14]{} of [*[Springer INdAM series]{}*]{}, pages [ 361–379]{}. [Springer]{}, [2016]{}.
F. R. Cohen and M. Stafa. . , 161(3):381–407, 2016.
J. G[ó]{}mez, A. Pettet, and J. Souto. On the fundamental group of [${\rm Hom}({\Bbb Z}^k,G)$]{}. , 271(1-2):33–44, 2012.
L. C. Grove and C. T. Benson. , volume 99 of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, second edition, 1985.
S. Halperin. , pages 293–306. London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge 1985.
J. E. Humphreys. , volume 29. , 1992.
V. Kac and A. Smilga. . , pages 185–234, 2000.
Th. [Molien]{}. , 1897:1152–1156, 1897.
A. Pettet and J. Souto. Commuting tuples in reductive groups and their maximal compact subgroups. , 17(5):2513–2593, 2013.
M. Reeder. . , 41:181–200, 1995.
R. W. Richardson. Commuting varieties of semisimple [L]{}ie algebras and algebraic groups. , 38(3):311–327, 1979.
G. C. Shephard and J. A. Todd. Finite unitary reflection groups. , 6(2):274–301, 1954.
D. Sjerve and E. Torres-Giese. Fundamental groups of commuting elements in [L]{}ie groups. , 40(1):65–76, 2008.
T. A. Springer. Regular elements of finite reflection groups. , 25(2):159–198, 1974.
M. Stafa. . PhD thesis, University of Rochester, 2013.
Mentor Stafa. Poincaré series of character varieties for nilpotent groups. , 2017.
Bernardo Villarreal. Cosimplicial groups and spaces of homomorphisms. , 17(6):3519–3545, 2017.
E. [Witten]{}. . , 202:253–316, July 1982.
Edward Witten. Toroidal compactification without vector structure. , (2):Paper 6, 43 pp., 1998.
[^1]: Some authors (e.g. Grove and Benson [@Grove-Benson Chapter 7], or Humphreys [@humphreys1992reflection Chapter 3]) replace $V$ by its dual $V^*$ in this discussion.
---
abstract: 'Motivated by Gentzen’s disjunction elimination rule in his Natural Deduction calculus and reading inequalities with meet in a natural way, we conceive a notion of distributivity for join-semilattices. We prove that it is equivalent to a notion present in the literature. Along the way, we prove that those notions are linearly ordered. We finally consider the notion of distributivity in join-semilattices with arrow, that is, the algebraic structure corresponding to the disjunction-conditional fragment of intuitionistic logic.'
author:
- |
Rodolfo C. [E]{}rtola-Biraben$^1$, Francesc Esteva$^2$, and Lluís Godo$^2$\
$^1$ CLE - State University of Campinas\
13083-859 Campinas, São Paulo, Brazil\
$^2$ IIIA - CSIC, 08193 Bellaterra, Spain
title: 'On distributive join-semilattices'
---
Introduction
============
Different notions of distributivity for semilattices have been proposed in the literature as a generalization of the usual distributive property in lattices. As far as we know, notions of distributivity for semilattices have been given, in chronological order, by Grätzer and Schmidt [@GS] in 1962, by Katriňák [@K] in 1968, by Balbes [@B] in 1969, by Schein [@S] in 1972, by Hickman [@H] in 1984, and by Larmerová and Rachnek [@LR] in 1988. Following the names of their authors, we will use the terminology GS-, K-, B-, S$_n$-, H-, and LR-distributivity, respectively.
In this paper, motivated by Gentzen’s disjunction elimination rule in his Natural Deduction calculus, and reading inequalities with meet in a natural way, we conceive another notion of distributivity for join-semilattices, that we call ND-distributivity. We aim to find out whether it is equivalent to any of the notions already present in the literature. In doing so, we also compare the different notions of distributivity for join-semilattices we have found. Namely, we see that the given notions imply each other in the following linear order:
GS $\Rightarrow$ K $\Rightarrow$ (H $\Leftrightarrow$ LR $\Leftrightarrow$ ND) $\Rightarrow$ B $\Rightarrow \cdots$ S$_n
\Rightarrow$ S$_{n-1} \Rightarrow \cdots$ S$_3 \Rightarrow$ S$_2$,
and we also provide countermodels for the converses.
Additionally, we show that H-distributivity may be seen as a very natural translation of a way to define distributivity for lattices, a fact that provides further motivation for the use of that notion. Note that Hickman used the term mild distributivity for H-distributivity.
The paper is structured as follows. After this introduction, in Section 2 we provide some notions and notations that will be used in the paper. In Section 3 we show how to arrive to our notion of ND-distributivity for join-semilattices. In Section 4 we compare the different notions of distributivity for join-semilattices that appear in the literature. We prove that one of those is equivalent to the notion of ND-distributivity found in Section 3. Finally, in Section 5 we consider what happens with the different notions of distributivity considered in Section 4 when join-semilattices are expanded with a natural version of the relative meet-complement.
Preliminaries
=============
In this section we provide the basic notions and notations that will be used in the paper.
Let ${\bf{J}} = (J; \leq)$ be a poset. For any $S \subseteq J$, we will use the notations $S^l$ and $S^u$ to denote the set of lower and upper bounds of $S$, respectively. That is,
- $S^l = \{x \in J: x \leq s$, for all $s \in S \}$ and
$S^u = \{x \in J: s \leq x$, for all $s \in S \}$.
\[BL\] Let ${\bf{J}} = (J; \leq)$ be a poset. For all $a, b, c \in J$ the following statements are equivalent:
- for all $x \in J$, if $x \leq a$ and $x \leq b$, then $x \leq c$,
- $\{a, b\}^l \subseteq \{c\}^l$,
- $c \in \{a, b\}^{lu}$.
A poset ${\bf{J}} = (J; \leq)$ is a [*join-semilattice*]{} (resp. meet-semilattice) if $\sup\{a, b\}$ (resp. $\inf\{a, b\}$) exists for every $a, b \in J$. A poset ${\bf{J}} = (J; \leq)$ is a lattice if it is both a join- and a meet-semilattice. As usual, the notations $a \vee b$ (resp. $a \wedge b$) shall stand for $\sup\{ a, b\}$ (resp. $\inf\{a, b \}$).
Given a join-semilattice ${\bf{J}} = (J; \leq)$, we will use the following notions:
- ${\bf{J}}$ is *downwards directed* iff for any $a, b \in J$, there exists $c \in J$ such that $c \leq a$ and $c \leq b$.
- A non empty subset $I \subseteq J$ is said to be an [*ideal*]{} iff\
(1) if $x,y \in I$, then $x\vee y \in I$ and\
(2) If $x \in I$ and $y \leq x$, then $y \in I$.
- The principal ideal generated by an element $a \in J$, denoted $(a]$, is defined by $(a] = \{x \in J : x \leq a\}$.
- $Id({\bf{J}})$ will denote the set of all ideals of ${\bf{J}}$.
- $Id_{fp}({\bf{J}})$ will denote the subset of ideals that are intersection of a finite set of principal ideals, that is, $Id_{fp}({\bf{J}}) = \{(a_1] \cap \dots \cap (a_k] : a_1,...a_k \in J\}$.
In this paper we are concerned with various notions of distributivity for join-semilattices, all of them generalizing the usual notion of distributive lattice, that is, a lattice ${\bf{J}} = (J; \leq)$ is distributive if the following equation holds true for any elements $a, b, c \in J$:
- $a \land (b \lor c) = (a \land b) \lor (a \land c)$ (equivalently, $a \lor (b \land c) = (a \lor b) \land (a \lor c)$).
There are several equivalent formulations of this property, in particular we mention the following ones that are relevant for this paper:
- for all $a, b, c \in J$, if $a \lor b = a \lor c$ and $a \land b = a \land c$ then $ b = c$.
- for any two ideals $I_1, I_2$ of $\bf J$, the ideal $I_1 \lor I_2$ generated by their union is defined by $I_1 \lor I_2 = \{ a \lor b : a \in I_1, b \in I_2\}.$
- the set $Id({\bf{J}})$ of ideals of $\bf J$ is a distributive lattice.
For the case of semilattices, several non-equivalent generalizations of these conditions can be found in the literature, already mentioned in the introduction. However, as expected, all of them turn out to be equivalent to the usual distributivity in the case of lattices.
The class of distributive lattices forms a variety (that is, an equational class). In contrast, for any sense of distributivity for join-semilattices that coincides with usual distributivity in the case of a lattice, the class of distributive join-semilattices is not even a quasi-variety. Indeed, consider the distributive lattice in Figure \[Fdjns\]. Taken as a join-semilattice, the set of black-filled nodes is a sub join-semilattice, which is clearly a non-distributive lattice (a diamond). Thus, it is not distributive as a join-semilattice either. This proves that the class of distributive (in any sense that coincides with usual distributivity in the case of a lattice) join-semilattices is not closed under subalgebras, and hence it is not a quasi-variety.
*Figure \[Fdjns\]: an eight-element distributive lattice with a bottom element, three atoms, three coatoms $a$, $b$, $c$, and a top element; the black-filled nodes (the bottom, $a$, $b$, $c$, and the top) form a sub join-semilattice isomorphic to the diamond.*
Distributivity and Natural Deduction {#DaND}
====================================
Let us consider the disjunction fragment of intuitionistic logic in the context of Gentzen’s Natural Deduction calculus (see [@Ge p. 186]). It has the following introduction rule for $\vee$, from $\mathfrak{A}$ infer $\mathfrak{A} \vee \mathfrak{B}$, and an analogous one with $\mathfrak{B}$ as only premiss,
and the following disjunction elimination rule: from $\mathfrak{A} \vee \mathfrak{B}$, together with a derivation of $\mathfrak{C}$ from $\mathfrak{A}$ and a derivation of $\mathfrak{C}$ from $\mathfrak{B}$, infer $\mathfrak{C}$.
The last rule may be read as saying that if $\mathfrak C$ follows from $\mathfrak A$ and $\mathfrak C$ follows from $\mathfrak B$, then $\mathfrak C$ follows from $\mathfrak A \vee \mathfrak B$, thus reflecting what is usually called “proof by cases”. It is possible to give an algebraic translation in the context of a join-semilattice ${\bf{J}} = (J; \leq)$:
for all $a, b, c \in J$, if $a \leq c$ and $b \leq c$, then $a \vee b \leq c$,
which is easily seen to be one of the conditions stating that $\vee$ is the supremum of $a$ and $b$. Now, the last rule is usually employed in a context with a fourth formula $\mathfrak H$:
In the context of a lattice ${\bf{L}} = (L; \leq)$, we would give the following algebraic translation:
- for all $h, a, b, c \in L$,
if $h \wedge a \leq c$ and $h \wedge b \leq c$, then $h \wedge (a \vee b) \leq c$.
It is easily seen that [**(D$_{\wedge \vee}$)**]{} is equivalent to the usual notion of distributivity for lattices. Now, the natural question arises how to give an algebraic translation of [**($\vee$E)**]{} if only $\vee$ is available, for example, if we are in the context of a join-semilattice. Considering that an inequality $u \land v \leq w$ in a lattice ${\bf{L}} = (L; \leq)$ is equivalently expressed as the first order statement
for all $x \in L$, if $x \leq u$ and $x \leq v$ then $x \leq w$,
we may write [**[(D$_{\wedge \vee}$)]{}**]{} in the context of a join-semilattice ${\bf{J}} = (J; \leq)$ as follows:
- for all $h, a, b, c \in J$,
IF for all $x \in J$ (if $x \leq h$ and $x \leq a$, then $x \leq c)$ and
for all $x \in J$ (if $x \leq h$ and $x \leq b$, then $x \leq c)$,
THEN for all $x \in J$ (if $x \leq h$ and $x \leq a \vee b$, then $x \leq c$).
Alternatively, using the equivalence between parts (i) and (ii) in Lemma \[BL\], we may write
- for all $h, a, b, c \in J$,
if $\{h,a\}^l \cup \{h,b\}^l \subseteq \{c\}^l$, then $\{h,a \vee b\}^l \subseteq \{c\}^l$.
Yet, using the equivalence between parts (ii) and (iii) in Lemma \[BL\], we may also alternatively write
- for all $h, a, b, c \in J$,
if $ c \in \{h,a\}^{lu} \cup \{h,b\}^{lu}$, then $c \in \{h,a \vee b\}^{lu}$.
Accordingly, given the above logical motivation, it is natural to consider the following notion of distributivity for join-semilattices.
A join-semilattice ${\bf{J}} = (J; \leq)$ is called ND-distributive (ND for Natural Deduction) if it satisfies [**[(D$_{\vee}$)]{}**]{}.
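For finite examples, condition [**(D$_\vee$)**]{} can be tested mechanically. The following brute-force sketch (the helper names are our own) confirms, for instance, that the diamond formed by the black-filled nodes of Figure \[Fdjns\] is not ND-distributive.

```python
from itertools import product

def is_ND_distributive(elems, leq, join):
    """Test (D_v): for all h, a, b, c, if {h,a}^l ∪ {h,b}^l ⊆ {c}^l
    then {h, a∨b}^l ⊆ {c}^l."""
    def lower(*ys):
        return {x for x in elems if all(leq(x, y) for y in ys)}
    return all(not (lower(h, a) | lower(h, b) <= lower(c))
               or lower(h, join(a, b)) <= lower(c)
               for h, a, b, c in product(elems, repeat=4))

# The diamond: a bottom 0, three pairwise incomparable elements a, b, c, and a top 1.
elems = ['0', 'a', 'b', 'c', '1']
leq = lambda x, y: x == y or x == '0' or y == '1'

def join(x, y):
    ubs = [z for z in elems if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))  # least upper bound

print(is_ND_distributive(elems, leq, join))   # False
```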
Now, it happens that there are many different (and non-equivalent) notions of distributivity for semilattices. This is not new:
> “The concept of distributivity permits different non-equivalent generalizations from lattices to semilattices.” (see [@S])
So, it is natural to inquire whether the given notion of ND-distributivity for join-semilattices is equivalent to any of the notions already present in the literature and, if so, to which. In what follows we will solve that question. In doing so, we will also compare the different notions of distributivity for join-semilattices that we have found.
In this paper, given our logical motivation, we restrict ourselves to study the distributivity property in join-semilattices, but an analogous path could be followed for meet-semilattices or even for posets.
*Let us note that the following rule, reflecting proof by three cases, is equivalent to [**($\vee$E)**]{}: from $\mathfrak A \vee \mathfrak B \vee \mathfrak C$, together with derivations of $\mathfrak D$ from each of $\mathfrak A$, $\mathfrak B$, and $\mathfrak C$, infer $\mathfrak D$.*
Indeed, it implies [**($\vee$E)**]{} taking $\mathfrak C = \mathfrak B$. Conversely, it may be derived using [**($\vee$E)**]{} twice: first eliminating the disjunction $\mathfrak A \vee (\mathfrak B \vee \mathfrak C)$, and then eliminating $\mathfrak B \vee \mathfrak C$ inside the corresponding case.
Different notions of distributivity for join-semilattices {#SDN}
=========================================================
In the following subsections we consider and compare the notions of distributivity for semilattices we have found in the literature. Some authors have presented their notion for the case of meet-semilattices and others for join-semilattices. We will make things uniform and, motivated by the logical considerations in the previous section, we will choose to consider join-semilattices.
We emphasize that all the distributivity notions for semilattices (and posets) proposed in the literature are generalizations of the distributivity property for lattices, in fact, when restricted to lattices all these notions coincide.
GS-distributivity
-----------------
The following seems to be the most popular definition of distributivity for join-semilattices.
\[GSd\] A join-semilattice ${\bf{J}} = (J; \leq)$ is GS-distributive iff
- for all $a, b, x \in J$, if $x \leq a \vee b$, then there exist $a', b' \in J$ such that $a' \leq a$, $b' \leq b$, and $x = a' \vee b'$.
In order to visualize it, see Figure \[Dmsl\]. The given definition seems to have appeared for the first time in [@GS p. 180, footnote 4]. It also appears in many other places, e.g., in [@Gr Sect. II.5.1, pp. 167-168].
*Figure \[Dmsl\]: diagram illustrating condition (GS): an element $x \leq a \vee b$ is written as $x = a' \vee b'$ with $a' \leq a$ and $b' \leq b$.*
Next, note that [**(GS)**]{} implies that every pair of elements has a lower bound. In fact, we have the following equivalence.
\[E1\] Let ${\bf{J}} = (J; \leq)$ be a join-semilattice. Then, the following two statements are equivalent:
\(i) Every pair of elements has a lower bound.
\(ii) for all $a, b, x \in J$, if $x \leq a \vee b$, then there exist $a', b' \in J$ such that $a' \leq a$, $b' \leq b$, and $a' \vee b' \leq x$.
\(i) $\Rightarrow$ (ii) Suppose $x \leq a \vee b$. Let $a'$ be a lower bound of $\{ a, x\}$ and $b'$ be a lower bound of $\{ b, x\}$. Then, $a' \leq a$ and $b' \leq b$. Also, $a' \leq x$ and $b' \leq x$, which implies that $a' \vee b' \leq x$.
\(ii) $\Rightarrow$ (i) Let $a, b \in J$. We have $a \leq a \vee b$. Then, by hypothesis, there exist $a' \leq a$, $b' \leq b$ such that $a' \vee b' \leq a$. As $b' \leq a' \vee b'$, it follows that $b' \leq a$. Then, $b' \leq a, b$. That is, $b'$ is a lower bound of $\{ a, b\}$.
This proposition shows that every GS-distributive join-semilattice is downward directed. This implies, as is shown in [@Gr], that the ideal $I \lor J$ generated by the union of two ideals $I, J$ is given by the same formula as in the case of distributive lattices, namely $$I \lor J = \{ a \lor b : a \in I, b \in J\}.$$
As a consequence, it follows that the ideals of a (GS)-distributive join-semilattice ${\bf{J}}$ form a lattice that will be denoted by $Id({\bf{J}})$, and Grätzer proves in [@Gr p. 168] the following characterization result.
\[dJedi\] Let ${\bf{J}}$ be a join-semilattice. Then, ${\bf{J}}$ is (GS)-distributive iff $Id({\bf{J}})$ is distributive.
K-distributivity
----------------
The concept given in the following definition is similar to the one in [**(GS)**]{}.
\[Kd\] A join-semilattice ${\bf{J}} = (J; \leq)$ is K-distributive iff
- for all $a, b, x \in J$, if $x \leq a \vee b$, $x \nleq a$ and $x \nleq b$, then there exist $a', b' \in J$ such that $a' \leq a$, $b' \leq b$, and $x = a' \vee b'$.
In order to visualize, see again Figure \[Dmsl\]. The given definition seems to have appeared for the first time in [@K Definition 4, p. 122]. It also appears, for example, in [@H p. 167].
It turns out that, from the very definition, GS-distributivity implies K-distributivity. In fact, as noted in [@K 1.5, p. 122-123], it is the case that GS-distributivity is equivalent to K-distributivity plus the condition that every pair of elements has a lower bound (that is, downward directedness). Therefore, the following proposition makes clear the relationship between GS- and K-distributivity.
GS-distributivity implies K-distributivity, but not conversely.
The simplest counterexample showing that the converse does not hold is the join-semilattice in Figure \[KGS\], which is not downward directed. Indeed, the given join-semilattice is K-distributive, as the only way to satisfy the antecedent of [**(K)**]{} is to take $1 \leq a \vee b$, and then the consequent is also true. On the other hand, it is not GS-distributive: we have $a \leq a \vee b$, yet there are no $a' \leq a$, $b' \leq b$ such that $a' \vee b' = a$.
*Figure \[KGS\]: the three-element join-semilattice consisting of two incomparable elements $a$, $b$ and their join $1$.*
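Both conditions can also be confirmed mechanically on this three-element example; the following brute-force sketch (with helper names of our own) reports that [**(K)**]{} holds while [**(GS)**]{} fails.

```python
from itertools import product

elems = ['a', 'b', '1']
leq  = lambda x, y: x == y or y == '1'
join = lambda x, y: x if x == y else '1'

def decomposes(x, a, b):
    # the consequent shared by (GS) and (K): some a' <= a, b' <= b with x = a' v b'
    return any(leq(ap, a) and leq(bp, b) and join(ap, bp) == x
               for ap, bp in product(elems, repeat=2))

GS = all(decomposes(x, a, b)
         for x, a, b in product(elems, repeat=3) if leq(x, join(a, b)))
K  = all(decomposes(x, a, b)
         for x, a, b in product(elems, repeat=3)
         if leq(x, join(a, b)) and not leq(x, a) and not leq(x, b))
print(GS, K)   # False True
```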
Finally, analogously to Proposition \[dJedi\], we have the following characterisation of K-distributivity via ideals, a proof of which may be found in [@K p. 123].
\[Kiff\] Let ${\bf{J}}$ be a join-semilattice. Then, ${\bf{J}}$ is (K)-distributive iff $Id({\bf{J}}) \cup \{\emptyset\}$ is distributive.
H-distributivity
----------------
In [@H] Hickman introduces the concept of [*mildly distributive*]{} meet-semilattices as those meet-semilattices whose lattice of strong ideals is distributive. In [@H Theorem 2.5, p. 290] this is stated to be equivalent to the following condition: [^1]
- for all $n$ and $x, a_1, \cdots, a_n$,
IF for all $b$ (if $a_1 \leq b, \cdots, a_n \leq b$, then $x \leq b)$,
THEN there exists $(x \wedge a_1) \vee \cdots \vee (x \wedge a_n)$ and $x \leq (x \wedge a_1) \vee \cdots \vee (x \wedge a_n)$.
The given conditional may be seen as a translation of the following version of distributivity for lattices:
IF $x \leq a_1 \vee \cdots \vee a_n$, THEN $x \leq (x \wedge a_1) \vee \cdots \vee (x \wedge a_n)$.
In the case of a join-semilattice ${\bf{J}} = (J; \leq)$ and using quantifiers, [**(H)**]{} may be rendered as follows:
- for all $n$ and $x, a_1, \dots, a_n \in J$,
IF $x \leq a_1 \vee \cdots \vee a_n$,
THEN for all $y$, if for all $i=1, \ldots, n$ (for all $z$, IF $z \leq x$ and $z \leq a_i$, THEN $z \leq y$) then $x \leq y$
that is in turn equivalent to:
- for all $n$ and $x, a_1, \dots, a_n \in J$,
IF $x \leq a_1 \vee \cdots \vee a_n$,
THEN for all $y$, if (for all $z$, IF $z \leq x$ and ($z \leq a_1$ or …or $z \leq a_n$), THEN $z \leq y$) then $x \leq y$.
Using set-theoretic notation, [**(H)**]{} may also be rendered as follows:
- [**(C)**]{} for all $n$ and $x, a_1, \dots, a_n \in J$,
if $x \leq a_1 \vee \cdots \vee a_n$, then $x \in (\{x, a_1 \}^l \cup \cdots \cup \{x, a_n \}^l)^{ul}$.
At this point, the reader may wonder whether the number $n$ of arguments is relevant or whether two arguments are enough. Let us settle this question. Firstly, with that in mind, consider
- [**(D$_{\vee_n}$)**]{} for all $x, a_1, \dots , a_n, c$,
if $\{x,a_1\}^l \cup \cdots \cup \{x,a_n\}^l \subseteq \{c \}^l$, then $\{x,a_1 \vee \cdots \vee a_n \}^l \subseteq \{c\}^l$.
Now, let us state the following fact.
[**(D$_{\vee_n}$)**]{} is equivalent to [**(C)**]{}.
$\Rightarrow$) Suppose $x \leq a_1 \vee \cdots \vee a_n$ and $y \in (\{x,a_1\}^l \cup \cdots \cup \{x,a_n\}^l)^u$. Our goal is to see that $x \leq y$. Take $c=y$ and apply [**(D$_{\vee_n}$)**]{}. Then we have $\{x\}^l = \{x,a_1 \vee \cdots \vee a_n \}^l \subseteq \{y\}^l$, and hence $x \leq y$. $\Leftarrow$) Suppose $\{x,a_1\}^l \cup \{x,a_2\}^l \cup \cdots \cup \{x,a_n\}^l \subseteq \{c \}^l$. We have to prove that, if $y \leq x$ and $y \leq a_1 \vee \cdots \vee a_n$ then $y \leq c$. Now, using [**(C)**]{}, and the assumptions $y \leq x$ and $y \leq a_1 \vee \cdots \vee a_n$ it follows that $y \in (\{x,a_1\}^l \cup \cdots \cup \{x,a_n\}^l)^{ul}$. But since $\{x,a_1\}^l \cup \{x,a_2\}^l \cup \cdots \cup \{x,a_n\}^l \subseteq \{c \}^l$, we also have $y \in (\{x,a_1\}^l \cup \cdots \cup \{x,a_n\}^l)^{ul} \subseteq \{c\}^{lul} = \{c\}^l$. Hence $y \leq c$.
In turn, let us see that [**(D$_{\vee_n}$)**]{} is equivalent to [**(D$_\vee$)**]{}, which proves that having more than two arguments does not make any difference.
[**(D$_{\vee_n}$)**]{} is equivalent to [**(D$_\vee$)**]{}.
We just prove that [**(D$_\vee$)**]{} implies [**(D$_{\vee_3}$)**]{}, the converse implication being immediate and the case of a general $n$ being analogous. Let us suppose $\{h,a_1\}^l \cup \{h, a_2\}^l \cup \{h, a_3\}^l \subseteq \{c \}^l$. Then, we get both $\{h,a_1\}^l \subseteq \{c \}^l$ and $\{h, a_2\}^l \cup \{h,a_3\}^l \subseteq \{c \}^l$, the last of which, using [**(D$_\vee$)**]{}, implies that $\{h, a_2 \vee a_3 \}^l \subseteq \{c \}^l$, which, together with the first, using [**(D$_\vee$)**]{} again, finally implies that $\{h, a_1 \vee a_2 \vee a_3 \}^l \subseteq \{c \}^l$.
As a consequence, H-distributivity coincides with the notion of ND-distributivity for join-semilattices introduced in Section \[DaND\]. Accordingly, we have the following proposition.
\[Hd\] A join-semilattice ${\bf{J}} = (J; \leq)$ is H-distributive iff it is ND-distributive.
Analogously to Propositions \[dJedi\] and \[Kiff\], we also have a characterization of H-distributivity for join-semilattices in terms of distributivity of a sublattice of their ideals. This appears as Corollary 2.4 in [@H p. 290], where $Id_{fp}({\bf{J}})$ denotes the set $\{(a_1] \cap \dots \cap (a_k] : a_1, \dots, a_k \in J \}$, that is, the set of ideals that are intersections of a finite set of principal ideals of the join-semilattice ${\bf{J}}=(J; \leq)$.
\[Hiff\] Let ${\bf{J}}$ be a join-semilattice. Then, ${\bf{J}}$ is H-distributive iff $Id_{fp}({\bf{J}})$ is distributive.
Let us now compare H- with K-distributivity.
\[KH\] Let ${\bf{J}} = (J; \leq)$ be a join-semilattice. Then, K-distributivity implies H-distributivity.
Suppose
- (x1) for all $x \in J$, if $x \leq h$ and $x \leq a$, then $x \leq c$, and
- (x2) for all $x \in J$, if $x \leq h$ and $x \leq b$, then $x \leq c$.
Further, suppose both (S1) $x \leq h$ and (S2) $x \leq a \vee b$. The goal is to prove $x \leq c$. Let us suppose that $x \leq a$. Then, using (x1) and (S1), it follows that $x \leq c$. The case $x \leq b$ is analogous using (x2). Finally, suppose both $x \nleq a$ and $x \nleq b$. Using (K) and (S2) it follows that there exist $a', b' \in J$ such that $a' \leq a$, $b' \leq b$, and (F) $x = a' \vee b'$, which implies $a' \leq x$, which using (S1) gives $a' \leq h$. As we also have $a' \leq a$, using (x1) we get $a' \leq c$. Reasoning analogously, we get $b' \leq c$. So, using (F) it follows that $x \leq c$.
The converse of Proposition \[KH\] does not hold, as witnessed by the model in Figure \[HK\] (with the understanding that there is no element at the white node). The given model appears as a poset in [@Go Figure 2.7, p. 37].[^2] We provide a proof using the characterization of K- and H-distributivity by their ideals (Propositions \[Kiff\] and \[Hiff\]).
*Figure \[HK\]: an infinite join-semilattice with an increasing chain $x_1 < x_2 < \cdots$ below $d$ and $f$, an increasing chain $y_1 < y_2 < \cdots$ below $c$ (with $x_i \leq y_i$), and further elements $f, d, e, c, b, a, 1$ on top; the white node marks the missing infimum of $d$ and $f$.*
H-distributivity does not imply K-distributivity.
Let us characterize the sets $Id_{fp}(J)$ and $Id(J)$, where $(J, \leq)$ is the join-semilattice of Figure \[HK\]. An easy computation proves, on the one hand, that $Id_{fp}(J)$ is isomorphic to the ordered set of Figure \[HK\] plus the ideal $I_{\overline{x}} = (f] \wedge (d]$, whose elements are $\{x_i : i \in \omega\}$ and which does not correspond to any element of the original join-semilattice. On the other hand, $Id(J)$ is the set of ideals in $Id_{fp}(J)$ plus the ideal ${I}_{\overline{y}}$ generated by the set $\{y_i : i \in \omega\}$, that is, the ideal with elements ${I}_{\overline{y}} = \{y_i : i \in \omega\} \cup \{x_i : i \in \omega\}$. Clearly, this ideal is not a finite intersection of principal ideals. Both $Id_{fp}(J)$ and $Id(J)$ are lattices. Moreover, it is obvious that $Id_{fp}(J)$ is a distributive lattice and thus the join-semilattice of the example is H-distributive. But this is not the case for $Id(J)$, since it has a sublattice isomorphic to the pentagon formed by the elements $(a]$, $(d]$, $(c]$, ${I}_{\overline{y}}$, and ${I}_{\overline{x}}$. Thus, the join-semilattice of the example is not K-distributive.
It is natural to ask whether it is possible to find a finite example refuting the converse of Proposition \[KH\]. Let us see that the answer is negative.
For finite join-semilattices, H-distributivity and K-distributivity coincide.
Consider a finite H-distributive join-semilattice. We want to see that it is K-distributive. Accordingly, suppose $x \leq a \vee b$, $x \nleq a$, and $x \nleq b$. It is natural to consider $\bigvee \{ a, x\}^l$ and $\bigvee \{ b, x\}^l$ as candidates for $a'$ and $b'$ in the definition of K-distributivity. Now, in order to do that, we first need to prove that the sets $\{ a, x\}^l$ and $\{ b, x\}^l$ are not empty. Suppose, say, $\{ a, x\}^l = \emptyset$. Then, we have :
- - for all $y$, if $y \leq x$ and $y \leq a$, then $y \leq b$ (as $\{ a, x\}^l = \emptyset$),
- for all $y$, if $y \leq x$ and $y \leq b$, then $y \leq b$,
- $x \leq x$, and
- $x \leq a \vee b$.
So, using H-distributivity, it follows that $x \leq b$, a contradiction.
Having proved that both $\{ a, x\}^l \neq \emptyset$ and $\{ b, x\}^l \neq \emptyset$, let us note that both $\bigvee \{ a, x\}^l$ and $\bigvee \{ b, x\}^l$ exist, due to having a finite structure. Next, let us see that $\bigvee \{ a, x\}^l =$ inf $\{ a, x \}$ (analogously, $\bigvee \{ b, x\}^l =$ inf $\{ b, x \}$). It is clear that both $\bigvee \{ a, x\}^l \leq a$ and $\bigvee \{ a, x\}^l \leq x$. Now, suppose $y \leq a, x$. Then, $y \in \{ a, x\}^l$, and so, $y \leq \bigvee \{ a, x\}^l$, as desired.
It remains to be seen that 1) inf $\{ a, x \} \leq a$, 2) inf $\{ b, x \} \leq b$, and 3) $x =$ inf$\{ a, x \} \vee$ inf$\{ b, x \}$. Now, 1) and 2) are easy to see. Regarding 3), as we have both that inf$\{ a, x \} \leq x$ and inf$\{ b, x \} \leq x$, it follows that inf$\{ a, x \} \ \vee$ inf$\{ b, x \} \leq x$. Finally, observe that the inequality $x \leq$ inf$\{ a, x \} \ \vee$ inf$\{ b, x \}$ follows from
- - for all $y$, if $y \leq x$ and $y \leq a$, then $y \leq$ inf$\{ a, x \} \ \vee$ inf$\{ b, x \}$,
- for all $y$, if $y \leq x$ and $y \leq b$, then $y \leq$ inf$\{ a, x \} \ \vee$ inf$\{ b, x \}$,
- $x \leq x$, and
- $x \leq a \vee b$,
using H-distributivity.
In fact, it is easy to observe that in the case of a finite join-semilattice $J$, the sets of ideals $Id(J)$ and $Id_{fp}(J)$ coincide since, for any two elements $a, b$, either there is no lower bound, that is, $\{a, b\}^l = \emptyset$, or there exists their meet $a \land b = \bigvee \{ a, b\}^l$.
LR-distributivity
-----------------
Larmerová-Rachnek version of distributivity (see [@LR]) was given for posets, as we next see.
\[LRPd\] A poset ${\bf{P}} = (P; \leq)$ is *LR-distributive* iff
- for all $a, b, c \in P$, $(\{ c, a \}^l \cup \{ c, b \}^l)^{ul} = (\{ c \} \cup \{ a, b \}^u)^l$.
In the given definition, it is enough to take one inclusion. Indeed, given a poset ${\bf{P}} = (P; \leq)$ and $a, b, c \in P$, it is always the case that $(\{ c, a \}^l \cup \{ c, b \}^l)^{ul} \subseteq (\{ c \} \cup \{ a, b \}^u)^l$.
It is natural to ask for LR-distributivity in the case of a join-semilattice. The following definition follows from the fact that in a join-semilattice ${\bf{J}} = (J, \leq)$ it holds that $(\{ c \} \cup \{ a, b \}^u)^l = \{ c, a \vee b \}^l$.
\[LRJd\] A join-semilattice ${\bf{J}} = (J, \leq)$ is *LR-distributive* iff
- for all $a, b, c \in J$, $\{ c, a \vee b \}^l \subseteq (\{ c, a \}^l \cup \{ c, b \}^l)^{ul}$.
Now, it can be seen that LR-distributivity is equivalent to H-distributivity, and hence to the condition [**[(D$_{\vee}$)]{}**]{} as well.
Let ${\bf{J}} = (J; \leq)$ be a join-semilattice. Then the following conditions are equivalent:
- ${\bf{J}}$ satisfies [**(LR)**]{},
- ${\bf{J}}$ satisfies [**(H)**]{},
- ${\bf{J}}$ satifies [**[(D$_{\vee}$)]{}.**]{}
The equivalence between (ii) and (iii) is Prop. \[Hd\]. Let us prove that [**(LR)**]{} implies [**(H)**]{}. Suppose
- (x1) for all $x \in J$, if $x \leq h$ and $x \leq a$, then $x \leq c$,
(x2) for all $x \in J$, if $x \leq h$ and $x \leq b$, then $x \leq c$, and suppose further that $x \leq h$ and $x \leq a \vee b$.

Then, the last two inequalities imply $x \in \{ h, a \vee b \}^l$. So, using [**(LR)**]{} we get that $x \in (\{ h, a \}^l \cup \{ h, b \}^l)^{ul}$. That is, for all $y \in J$, if $y \in (\{ h, a \}^l \cup \{ h, b \}^l)^u$, then $x \leq y$. Now, it should be clear that (x1) and (x2) imply that $c \in (\{ h, a \}^l \cup \{ h, b \}^l)^u$. So, $x \leq c$, as desired.
Now, let us see that [**(H)**]{} implies [**(LR)**]{}. Suppose $x \in \{ h, a \vee b \}^l$, that is, (H1) $x \leq h$ and (H2) $x \leq a \vee b$. In order to get our goal, that is, $x \in (\{ h, a \}^l \cup \{ h, b \}^l)^{ul}$, let us suppose that (S) $y \in (\{ h, a \}^l \cup \{ h, b \}^l)^u$ and try to derive $x \leq y$. Now, (S) means that for all $z \in J$, if $z \in \{ h, a \}^l \cup \{ h, b \}^l$, then $z \leq y$, that is,
- (y1) for all $z \in J$, if $z \leq h$ and $z \leq a$, then $z \leq y$ and
(y2) for all $z \in J$, if $z \leq h$ and $z \leq b$, then $z \leq y$.
Now, using [**(H)**]{}, (y1), (y2), (H1), and (H2), we get our goal, that is, $x \leq y$.
B-distributivity
----------------
The following definition seems to have appeared for the first time in [@B Theorem 2.2. (i), p. 261].
\[Bd\] A join-semilattice ${\bf J} = (J; \leq)$ is *B-distributive* iff
- for all $n, a_1, a_2, \dots, a_n, x \in J$, if $a_1 \wedge a_2 \wedge \cdots \wedge a_n$ exists, then also $(x \vee a_1) \wedge (x \vee a_2) \wedge \cdots \wedge (x \vee a_n)$ exists and equals $x \vee (a_1 \wedge a_2 \wedge \cdots \wedge a_n)$.
We have the following fact.
\[HB\] Let ${\bf{J}} = (J; \leq)$ be a join-semilattice. Then, H-distributivity implies B-distributivity.
Let $J$ be an H-distributive join-semilattice and let us take $a, b, x \in J$ (the general case follows by induction). Let us suppose that $a \wedge b$ exists in $J$. Then $x \vee (a \wedge b)$ also exists in $J$. Our goal is to see that $x \vee (a \wedge b) =$ inf $\{x \vee a, x \vee b \}$. It is clear that $x \vee (a \wedge b) \leq x \vee a, x \vee b$. Now, suppose both (F1) $y \leq x \vee b$ and (F2) $y \leq x \vee a$. We have to see that $y \leq x \vee (a \wedge b)$. It immediately follows that
- (x1) for all $w \in J$, if $w \leq x \vee b$ and $w \leq x$, then $w \leq x \vee (a \wedge b)$.
Now, suppose (F3) $w \leq x \vee b$ and (F4) $w \leq a$. Then, we have both
- (x1’) for all $y \in J$, if $y \leq a$ and $y \leq x$, then $y \leq x \vee (a \wedge b)$, and
(x2’) for all $y \in J$, if $y \leq a$ and $y \leq b$, then $y \leq x \vee (a \wedge b)$.
So, applying H-distributivity to (F3), (F4), (x1’), and (x2’), we have $w \leq x \vee (a \wedge b)$. That is, we have proved
- (x2) for all $w \in J$, if $w \leq x \vee b$ and $w \leq a$, then $w \leq x \vee (a \wedge b)$.
Using H-distributivity, (F1), (F2), (x1) and (x2), it finally follows that $y \leq x \vee (a \wedge b)$, as desired.
The converse of Proposition \[HB\] does not hold, as may be seen in Figure \[BH\].
*Figure \[BH\]: the four-element join-semilattice consisting of three pairwise incomparable elements $a$, $b$, $c$ below a top element $1$.*
Observe also that the lattice $Id_{fp}(J)$, for $J$ being the join-semilattice of Figure \[BH\], is not distributive since it is a diamond.
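For this finite example the separation can be checked mechanically as well; the sketch below (with ad hoc helper names) confirms that [**(B)**]{} holds, since the only existing finite meets involve comparable elements, while [**(D$_\vee$)**]{} fails.

```python
from itertools import combinations, product

elems = ['a', 'b', 'c', '1']                 # Figure [BH]: a, b, c incomparable, below 1
leq   = lambda x, y: x == y or y == '1'
join  = lambda x, y: x if x == y else '1'
lower = lambda *ys: {x for x in elems if all(leq(x, y) for y in ys)}

def meet(xs):                                # greatest lower bound of xs, or None
    lbs = [z for z in elems if all(leq(z, x) for x in xs)]
    return next((z for z in lbs if all(leq(w, z) for w in lbs)), None)

# (B): whenever a finite meet exists, x v (meet of the a_i) = meet of the (x v a_i).
B_holds = all(join(x, meet(xs)) == meet([join(x, a) for a in xs])
              for r in (2, 3, 4)
              for xs in combinations(elems, r) if meet(xs) is not None
              for x in elems)

# (D_v) fails: with h = c the hypothesis is vacuous, yet c is not below a.
D_holds = all(not (lower(h, a) | lower(h, b) <= lower(c))
              or lower(h, join(a, b)) <= lower(c)
              for h, a, b, c in product(elems, repeat=4))

print(B_holds, D_holds)   # True False
```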
S$_n$-distributivity
--------------------
The following definition seems to have appeared for the first time in [@S].
\[Sd\] A join-semilattice $(J; \leq)$ is said to be *S$_n$-distributive* for $n$ a natural number, $2 \leq n$, iff
- for all $a_1, a_2, \dots, a_n, x \in J$, if $a_1 \wedge a_2 \wedge \cdots \wedge a_n$ exists, then also $(x \vee a_1) \wedge (x \vee a_2) \wedge \cdots \wedge (x \vee a_n)$ exists and equals $x \vee (a_1 \wedge a_2 \wedge \cdots \wedge a_n)$.
It is easy to see that B-distributivity implies $S_n$-distributivity for any $n \geq 2$. It is also clear that for any $n \geq 2$, $S_{n+1}$ implies $S_n$. On the other hand, for no natural $n \geq 2$ does $S_n$-distributivity imply B-distributivity. In fact, it was proved that for any $n \geq 2$, $S_n$ does not imply $S_{n+1}$ (see [@Ke], where infinite models using the real numbers are provided). As in the case of K- and H-distributivity, it is natural to ask whether, for example, finite models are possible. As in the cases just mentioned, the answer is negative, as already proved in [@SCLS Theorem 7.1, p. 1071]. In [@SA Theorem, p. 26] it is also proved that it is not possible to find infinite wellfounded models.
Therefore, so far we have seen that, in the case of a join-semilattice, we have the following chain of implications:
[**(GS)**]{} $\Rightarrow$ [**(K)**]{} $\Rightarrow$ [**(H)**]{} $\Leftrightarrow$ [**(LR)**]{} $\Leftrightarrow$ [**(ND)**]{} $\Rightarrow$ [**(B)**]{} $\Rightarrow \cdots$ [**(S$_n$)**]{} $\Rightarrow$ [**(S$_{n-1}$)**]{} $\Rightarrow \cdots$ [**(S$_2$)**]{}.
Join-semilattices with arrow
============================
The expansion of semilattices with an arrow operation has been well studied in the literature in the case of meet-semilattices under the name of relatively pseudo-complemented semilattices (see, for example, [@Gr]). However, as far as we know, the expansion of join-semilattices with an arrow has not received much attention, see, for instance, [@Chajda; @Chajda2]. In this section we deal with distributivity of join-semilattices expanded with an arrow operation.
A join-semilattice with arrow is a structure $(J; \leq, \to)$ where $(J; \leq)$ is a join-semilattice and the arrow $\to$ is a binary operation such that for all $a, b \in J$:
$a \to b = \max \{c \in J:$ for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b \}$.
The existence of the $\to$ operation is clearly equivalent to the requirement that $\to$ satisfies the following two conditions:
($\to$E) for all $x \in J$, if $x \leq a$ and $x \leq a \to b $, then $x \leq b$,
($\to$I) for all $c \in J$, IF for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b$, THEN $c \leq a \to b$.
The idea of defining arrow in a poset was already present in [@Ha] (see Definition 4, where the author uses the terminology of Brouwer poset and also proves that a poset with arrow is LR-distributive). Moreover, the author, using LR-notation, defines $a \to b =$ max $\{c \in J: \{a,c\}^l \subseteq \{b\}^l\}$.
In a lattice, or even in a meet-semilattice, arrow coincides with the usual relative meet-complement. This follows from the fact that, as previously mentioned, the inequality $a \wedge x \leq b$ is equivalent to the following universal quantification: for all $y$, if $y \leq a$ and $y \leq x$, then $y \leq b$. By the way, we prefer to use “arrow” instead of “relative meet-complement”, because the meet is not present.
As is well known, a lattice with a relative meet-complement (that is in fact a Heyting algebra) is distributive (see [@S1] or [@S2]). The natural question arises whether a join-semilattice with arrow is distributive in any of the senses considered in Section \[SDN\]. The answer is negative in the case of (GS)-distributivity, as the join-semilattice in Figure \[jsl1\] has arrow and is not GS-distributive.
*Figure \[jsl1\]: the three-element join-semilattice of Figure \[KGS\] (two incomparable elements $a$, $b$ with top $1$), whose arrow operation is given by the following table.*
$\to$ a b 1
------- --- --- ---
a 1 b 1
b a 1 1
1 a b 1
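The table can be recomputed directly from the definition of $\to$; a brute-force sketch (helper names ours):

```python
elems = ['a', 'b', '1']
leq = lambda x, y: x == y or y == '1'

def arrow(a, b):
    # a -> b = max{ c : for all x, x <= a and x <= c imply x <= b }
    cands = [c for c in elems
             if all(leq(x, b) for x in elems if leq(x, a) and leq(x, c))]
    return next(c for c in cands if all(leq(d, c) for d in cands))

for a in elems:
    print(a, [arrow(a, b) for b in elems])
# a ['1', 'b', '1']
# b ['a', '1', '1']
# 1 ['a', 'b', '1']
```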
A similar question in the case of K-distributivity also has a negative answer, as the join-semilattice in Figure \[jsl2\], already given in Figure \[HK\], has arrow and is not K-distributive.
*Figure \[jsl2\]: the join-semilattice of Figure \[HK\], whose arrow operation is given by the following table.*
$\to$ $x_1$ $x_2$ $x_n$ $y_1$ $y_2$ $y_n$ f d e c b a 1
------- ------- ------- ------- ------- ------- ------- --- --- --- --- --- --- ---
$x_1$ 1 1 1 1 1 1 1 1 1 1 1 1 1
$x_2$ $y_1$ 1 1 $y_1$ 1 1 1 1 1 1 1 1 1
$x_n$ $y_1$ $y_2$ 1 $y_1$ $y_2$ 1 1 1 1 1 1 1 1
$y_1$ e e e 1 1 1 e e e 1 1 1 1
$y_2$ e e e $y_1$ 1 1 e e e 1 1 1 1
$y_n$ $x_1$ $x_2$ e $y_1$ $y_2$ 1 1 1 1 1 1 1 1
f $y_1$ $y_2$ $y_n$ $y_1$ $y_2$ $y_n$ 1 a 1 c 1 a 1
d $y_1$ $y_2$ $y_n$ $y_1$ $y_2$ $y_n$ b 1 1 b b 1 1
e $y_1$ $y_2$ $y_n$ $y_1$ $y_2$ $y_n$ b a 1 c b a 1
c $x_1$ $x_2$ $x_n$ $y_1$ $y_2$ $y_n$ e e e 1 1 1 1
b $x_1$ $x_2$ $x_n$ $y_1$ $y_2$ $y_n$ e d e a 1 a 1
a $x_1$ $x_2$ $x_n$ $y_1$ $y_2$ $y_n$ f e e b b 1 1
1 $x_1$ $x_2$ $x_n$ $y_1$ $y_2$ $y_n$ f d e c b a 1
The case of H-distributivity is different, as we see next.
Every join-semilattice expanded with arrow is H-distributive.
Let ${\bf J} = (J; \leq)$ be a join-semilattice with arrow. Take $a, b, c, h \in J$. Suppose
- (x1) for all $x \in J$, if $x \leq h$ and $x \leq a$, then $x \leq c$, and
(x2) for all $x \in J$, if $x \leq h$ and $x \leq b$, then $x \leq c$.
Take $y \in J$ and suppose
- (F1) $y \leq h$ and
(F2) $y \leq a \vee b$.
Now, using ($\to$I), (x1) implies $a \leq h \to c$ and (x2) implies $b \leq h \to c$. These inequalities together with (F2) imply $y \leq h \to c$, which, using (F1) and ($\to$E), gives $y \leq c$.
Analogously to what happens when considering lattices, in the finite case we have the following fact.
\[fHd\] Every finite H-distributive join-semilattice has arrow.
Let ${\bf J} = (J; \leq)$ be a finite H-distributive join-semilattice. Due to finiteness, $c_1 \vee c_2 \vee \cdots \vee c_n = \bigvee \{c \in J:$ for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b \}$ exists, for any $a, b\in J$. It is clear that for any $c_i, 1 \leq i \leq n$, it holds that
- \(F) for all $x$, if $x \leq a$ and $x \leq c_i$, then $x \leq b$.
Now, let us see that $c_1 \vee c_2 \vee \cdots \vee c_n$ is in fact $a \to b$.
First, let us see that $c_1 \vee c_2 \vee \cdots \vee c_n \in \{c \in J:$ for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b \}$. That is, we have to see that
- \(T) for all $x \in J$, if $x \leq a$ and $x \leq c_1 \vee c_2 \vee \cdots \vee c_n$, then $x \leq b$.
Now, (T) clearly follows from (F) by H-distributivity.
Secondly, let us take $c \in J$ such that for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b$. Then, obviously, $c \in \{c \in J:$ for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b \}$. Then, $c \leq c_1 \vee c_2 \vee \cdots \vee c_n$, as $c_1 \vee c_2 \vee \cdots \vee c_n = \bigvee \{c \in J:$ for all $x \in J$, if $x \leq a$ and $x \leq c$, then $x \leq b \}$.
Finally, the natural question arises whether the class of join-semilattices expanded with arrow forms a variety or at least a quasi-variety. The following example proves that the answer is negative. Indeed, consider the distributive lattice in Figure \[B9\], which is the direct product ${\bf J} = (L \times L; \leq)$ where $L = \{0, \tfrac{1}{2}, 1\}$. It is clear that we can define in ${\bf J}$ an arrow $\to$; in fact, ${\bf J}^* = (L \times L; \leq, \to)$ becomes a Heyting algebra. Now, consider ${\bf J}^*$ as a join-semilattice with arrow, and observe that the set $B$ of elements represented by black nodes in the figure is the domain of a subalgebra $(B; \leq, \to)$ of ${\bf J}^*$, since $B$ is closed under both $\lor$ and $\to$. However, the join-semilattice $(B, \leq)$ is not distributive (it contains a pentagon), and moreover the arrow operation is not defined for all pairs of elements. In particular, $(\tfrac{1}{2}, \tfrac{1}{2}) \to (0, 0)$ is not defined since the set $$\{ (c, d) \in B : \forall (x,y) \in B, \mbox{if } (x, y) \leq (c, d) \mbox{ and }(x, y) \leq (\tfrac{1}{2}, \tfrac{1}{2}) \mbox{, then } (x, y) \leq (0, 0) \}$$ has no maximum.
*Figure \[B9\]: the lattice $L \times L$ for $L = \{0, \tfrac{1}{2}, 1\}$; the black nodes, namely all elements except $(0,\tfrac{1}{2})$ and $(\tfrac{1}{2},0)$, form the subset $B$.*
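The failure can be verified directly; the following sketch (a brute-force computation over the nine-element lattice, with variable names of our own) lists the candidate set for $(\tfrac{1}{2},\tfrac{1}{2}) \to (0,0)$ computed inside $B$ and confirms that it has no maximum.

```python
from itertools import product

L = [0, 0.5, 1]
B = [p for p in product(L, L) if p not in {(0, 0.5), (0.5, 0)}]
leq = lambda p, q: p[0] <= q[0] and p[1] <= q[1]

half, zero = (0.5, 0.5), (0, 0)
cands = [c for c in B
         if all(leq(x, zero) for x in B if leq(x, half) and leq(x, c))]
print(cands)                                              # [(0, 0), (0, 1), (1, 0)]
print(any(all(leq(d, c) for d in cands) for c in cands))  # False: no maximum
```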
Conclusions
===========
In this paper we have proposed a notion of distributivity for join-semilattices with logical motivations related to Gentzen’s disjunction elimination rule in the $\{\lor, \to\}$-fragment of intuitionistic logic, and we have compared it to other notions of distributivity for join-semilattices proposed in the literature.
There are a number of open problems that we plan to address as future research. In particular we can mention the following ones:
- As for the logical motivation, similar to the $\bf (\lor E)$ rule in Section 3, one can consider the following rule with two contexts: if $\mathfrak C$ follows from $\mathfrak H_1$ together with $\mathfrak A$, and $\mathfrak C$ follows from $\mathfrak H_2$ together with $\mathfrak B$, then $\mathfrak C$ follows from $\mathfrak H_1$, $\mathfrak H_2$ and $\mathfrak A \vee \mathfrak B$.
This rule also has a natural algebraic translation in the case of join-semilattices. The question arises whether it is equivalent to the condition [**[(D$_{\vee}$)]{}**]{} or if it leads to a different one.
- Distributive lattices are characterized by their lattice of ideals. In the case of join-semilattices, there are similar characterizations for GS-, K- and H-distributivity, but not for B- and S$_n$-distributivity. The question is whether B- and S$_n$-distributive join-semilattices can be characterized by means of their ideals.
- In [@Chajda3] the authors generalize the well-known characterisation of distributive lattices in terms of forbidden sublattices (diamond and pentagon) to distributive posets, also identifying the set of forbidden subposets. A similar study for distributive join-semilattices is an open question.
Acknowledgments {#acknowledgments .unnumbered}
---------------
The authors acknowledge partial support by the H2020 MSCA-RISE-2015 project SYSMICS. Esteva and Godo also acknowledge the FEDER/MINECO project TIN2015-71799-C2-1-P.
[9]{}
Balbes, R. A representation theory for prime and implicative semilattices. Trans. Amer. Math. Soc. [**[136]{}**]{} (1969), 261-267.
I. Chajda, J. Rachnek. Forbidden Configurations for Distributive and Modular Ordered Sets. [*Order*]{} 5, 407-423, 1989.
I. Chajda, R. Halaš, and J. Kühr. [*Semilattice structures*]{}. Research and Exposition in Mathematics, 30. Heldermann Verlag, Lemgo, 2007.
I. Chajda, H. Länger. Relatively pseudocomplemented posets. [*Mathematica Bohemia*]{}, 2017 (Doi: 10.21136/MB.2017.0037-16).
Gentzen, G. Untersuchungen über das logische Schließen I. *Mathematische Zeitschrift*, [**[39]{}**]{} (1934), 176-210.
González, Luciano J. Topological dualities and completions for (distributive) partially ordered sets. PhD Thesis.
Grätzer, G. Lattice Theory: Foundation. Springer/Birkhäuser (2011).
Grätzer, G., Schmidt, E. On congruence lattices of lattices. *Acta Math. Acad. Sci. Hungar.* [**[13]{}**]{} (1962), 179-185.
Halaš, R. Pseudocomplemented ordered sets. *Archivum Mathematicum (Brno)* [**[29]{}**]{} (1993), 153-160.
Hickman, R. Mildly distributive semilattices. *J. Austral. Math Soc. (Series A)* [**[36]{}**]{} (1984), 287-315.
Katriňák, T. Pseudokomplementäre Halbverbände. *Mat. Časopis* [**[18]{}**]{} (1968), 121-143.
Kearns, K. The Class of Prime Semilattices is Not Finitely Axiomatizable. *Semigroup Forum* [**[55]{}**]{} (1997), 133-134.
Larmerová, Jana and Rachnek, Jirí. Translations of distributive and modular ordered sets. *Acta Universitatis Palackianae Olomucensis Facultas Rerum Naturalium Mathematica XXVII*, [**[91]{}**]{} (1988), 13-23.
Schein, B. On the definition of distributive semilattices. *Algebra universalis* [**[2]{}**]{} (1972), 1-2.
Serra Alves, C. Distributivity and wellfounded semilattices. *Portugaliae Mathematica* [**[52]{}**]{}(1) (1995), 25-27.
Shum, K. P., Chan, M. W., Lai, C. K., and So, K. Y. Characterizations for prime semilattices. *Can. J. Math.*, [**[37]{}**]{}(6) (1985), 1059-1073.
Skolem, T. Untersuchungen über die Axiome des Klassenkalküls und über Produktations- und Summationsprobleme, welche gewisse Klassen von Aussagen betreffen, *Skrifter uitgit av Videnskapsselskapet i Kristiania, I*, Matematisk-naturvidenskabelig klasse, No. 3, 1-37, 1919.
Skolem, T. *Selected Works in Logic*. Edited by Jens Erik Fenstad, Universitetforlaget, Oslo, 1970.
[^1]: Note that the original Hickman’s statement can be misleading since the condition “there exists $(x \wedge a_1) \vee (x \wedge a_2) \vee \cdots \vee (x \wedge a_n)$” is missing.
[^2]: We thank the author of that paper for communicating this example.
---
author:
- 'Utkarsh Mall$^{1, 2}$'
- 'G. Roshan Lal$^{1, 3}$'
- 'Siddhartha Chaudhuri$^{1, 4}$'
- Parag Chaudhuri$^1$
- |
\
$^1$IIT Bombay $^2$Cornell University $^3$University of Wisconsin-Madison $^4$Adobe Research
bibliography:
- 'ebf.bib'
title: A Deep Recurrent Framework for Cleaning Motion Capture Data
---
![image](figures/teaser.jpg){width="100.00000%"}
---
abstract: 'We prove that the algebraic Witten’s “top Chern class" constructed in [@PV] satisfies the axioms for the spin virtual class formulated in [@JKV].'
address: 'Department of Mathematics, Boston University, Boston, MA 02215'
author:
- Alexander Polishchuk
title: 'Witten’s top Chern class on the moduli space of higher spin curves'
---
[^1]
This paper is a sequel to [@PV]. Its goal is to verify that the [*virtual top Chern class*]{} ${c^{1/r}}$ in the Chow group of the moduli space of higher spin curves ${{\overline{{{{\mathcal}M}}}_{g,n}}^{1/r}}$, constructed in [@PV], satisfies all the axioms of [*spin virtual class*]{} formulated in [@JKV]. Hence, according to [@JKV], it gives rise to a cohomological field theory in the sense of Kontsevich-Manin [@KM]. As was observed in [@PV], the only non-trivial axioms that have to be checked for the class ${c^{1/r}}$ are two axioms that we call [*Vanishing axiom*]{} and [*Ramond factorization axiom*]{}. The first of them requires ${c^{1/r}}$ to vanish on all the components of the moduli space ${{\overline{{{{\mathcal}M}}}_{g,n}}^{1/r}}$, where one of the markings is equal to $r-1$. The second demands vanishing of the push-forward of ${c^{1/r}}$ restricted to the components of the moduli space corresponding to the so called Ramond sector, under some natural finite maps.
Recall that the virtual top Chern class is a crucial ingredient in the generalized Witten’s conjecture formulated in [@W1], [@W2]. The original index-theoretic construction of this class sketched by Witten was recently extended to the compactified moduli space by T. Mochizuki [@Mo] who also showed that the obtained class satisfies the axioms of [@JKV]. The algebraic construction of [@PV] gives a class in the Chow group with rational coefficients (and axioms are satisfied on the level of Chow groups). Presumably, the algebraic construction induces the same class in cohomology of ${{\overline{{{{\mathcal}M}}}_{g,n}}^{1/r}}$ as the analytic construction.
It is interesting to note that the class ${c^{1/r}}$ is constructed as a characteristic class of certain supercommutative ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded dg-algebra over ${{\overline{{{{\mathcal}M}}}_{g,n}}^{1/r}}$ equipped with an odd closed section (where the entire data is defined up to quasi-isomorphism). This resembles Kontsevich’s approach to the construction of the virtual fundamental class (see [@Kon]). One may hope that both constructions can be embedded into a more general framework involving dg-spaces. This would be in agreement with the philosophy of derived moduli spaces promoted in [@Kon], [@CK1], [@CK2].
The paper is organized as follows. In section \[homsec\] we prove two identities for localized Chern characters of specific ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded complexes. In section \[vanishsec\] we deduce Vanishing axiom from the first identity and in section \[Ramondsec\] we deduce Ramond factorization axiom from the second identity.
Some ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded homological algebra {#homsec}
================================================================
Recall (see [@PV]) that for every ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded complex $(V^{\bullet}=V^+\oplus V^-,d)$ of vector bundles on a scheme $X$, which is strictly exact off a closed subset $Z\subset X$, the graph-construction associates the [*localized Chern character*]{} ${\operatorname{ch}}_Z^X(V^{\bullet})\in A^*(Z{\rightarrow}X)_{{{{\mathbb}Q}}},$ where $A^*(Z{\rightarrow}X)_{{{{\mathbb}Q}}}$ is the bivariant Chow group with rational coefficients. (the original construction given in [@BFM] or [@F Ch. 18] deals with ${{{\mathbb}Z}}$-graded complexes). Here we use the following terminology from [@PV]: $(V^{\bullet},d)$ is [*strictly exact*]{} if it is exact and ${\operatorname{im}}(d)$ is a subbundle of $V^{\bullet}$. Note that if a ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded complex is homotopic to zero then it is strictly exact (since in this case ${\operatorname{im}}(d)$ is a direct summand of $V^{\bullet}$). Witten’s top Chern class is constructed in [@PV] by slightly modifying the localized Chern class of the complex corresponding to the action of an isotropic section of an orthogonal bundle on a spinor bundle, where the relevant orthogonal data is constructed using the higher spin structure on the universal curve over ${{\overline{{{{\mathcal}M}}}_{g,n}}^{1/r}}$ (we will recall this construction in section \[vanishsec\]).
We are going to prove two identities for the localized Chern character that will be main ingredients for the proof of Vanishing axiom and Ramond factorization axiom respectively. In both cases the identities hold on the level of $K$-theory (of complexes that are strictly exact off a closed subset).
\[mainlem\] Let $d({\lambda}):V^{\bullet}{\rightarrow}V^{\bullet}[{\lambda}]$ be an odd endomorphism of a ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded bundle $V^{\bullet}$ on $X$ of the form $d({\lambda})=d_0+d_1{\lambda}+\ldots+d_{r-1}{\lambda}^{r-1}$ for some $r\ge 2$, depending on a formal parameter ${\lambda}$ (commuting with everything). Assume that $d({\lambda})^2={\lambda}^r$. Then ${\operatorname{ch}}_Z^X(V^{\bullet},d_0)=0$ for every closed subset $Z{\subset}X$ such that $(V^{\bullet},d_0)$ is strictly exact off $Z$.
[[*Proof*]{}]{}. We can extend $d({\lambda})$ to an ${{{\mathcal}O}}_X[{\lambda}]$-linear endomorphism of $V^{\bullet}[{\lambda}]$. Let $W^{\bullet}=V^{\bullet}[{\lambda}]/({\lambda}^r)$ with the odd endomorphism $d_W:W^{\bullet}{\rightarrow}W^{\bullet}$ induced by $d({\lambda})$. Then $d_W^2=0$ and there is a natural $r$-step filtration on the complex $(W^{\bullet},d_W)$ with all consecutive quotient-complexes isomorphic to $(V^{\bullet},d_0)$. Thus, it is enough to prove that $(W^{\bullet},d_W)$ is strictly exact (everywhere). We claim that in fact this complex is homotopic to zero. Indeed, for every interval of integers $[a,b]$ let us denote $V^{\bullet}[{\lambda}]_{[a,b]}=\oplus_{i=a}^b V^{\bullet}{\lambda}^i$. Let us denote by $d'$ and $d''$ the following components of the restriction of $d({\lambda})$ to $V^{\bullet}[{\lambda}]_{[0,r-1]}$: $$d({\lambda}):V^{\bullet}[{\lambda}]_{[0,r-1]}\rTo^{(d',d'')}
V^{\bullet}[{\lambda}]_{[0,r-1]}\oplus V^{\bullet}[{\lambda}]_{[r,2r-1]}.$$ Note that $d'=d_W$ upon the natural identification of $V^{\bullet}[{\lambda}]_{[0,r-1]}$ with $W^{\bullet}$. Also, the image of $d''$ is contained in $V^{\bullet}[{\lambda}]_{[r,2r-2]}$. Extending $d'$ and $d''$ to ${{{\mathcal}O}}_X[{\lambda}^r]$-linear endomorphisms of $V^{\bullet}[{\lambda}]$ we can write $d=d'+d''$. Then the condition $d({\lambda})^2={\lambda}^r$ implies that $d'd''+d''d'={\lambda}^r$ on $V^{\bullet}[{\lambda}]_{[0,r-1]}$. Hence $$h=d''/{\lambda}^r:V^{\bullet}[{\lambda}]_{[0,r-1]}{\rightarrow}V^{\bullet}[{\lambda}]_{[0,r-1]}$$ gives a homotopy between the identity and zero endomorphisms of the complex $(V^{\bullet}[{\lambda}]_{[0,r-1]},d')\simeq (W^{\bullet},d_W)$.
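It may help to write out the simplest case $r=2$ (a worked special case, not taken from [@PV]): here the assumption $d({\lambda})^2={\lambda}^2$ amounts to $d_0^2=0$, $d_0d_1+d_1d_0=0$ and $d_1^2={\operatorname{id}}$, the complex $W^{\bullet}=V^{\bullet}\oplus V^{\bullet}{\lambda}$ has differential $d_W(x+y{\lambda})=d_0x+(d_1x+d_0y){\lambda}$, and the homotopy constructed above is $h(x+y{\lambda})=d_1y$; indeed $$(d_Wh+hd_W)(x+y{\lambda})=\big(d_0d_1y+y{\lambda}\big)+\big(x+d_1d_0y\big)=x+y{\lambda}.$$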
The above lemma admits the following generalization: if the differential $d({\lambda})=d_0+d_1{\lambda}+\ldots+d_{r-1}{\lambda}^{r-1}$ as above satisfies $d({\lambda})^2=f({\lambda})$ for some polynomial $f$ of degree $r$ then $$\sum_{z: f(z)=0}m_z\cdot{\operatorname{ch}}_Z^X(V^{\bullet},d(z))=0$$ where $m_z$ is the multiplicity of a root $z$, $Z\subset X$ is a closed subset such that all the complexes $(V^{\bullet},d(z))$ (for $f(z)=0$) are strictly exact off $Z$.
\[mainlem2\] Let $d:V^{\bullet}{\rightarrow}V^{\bullet}$ be an odd endomorphism of a ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded bundle $V^{\bullet}$ on $X$ such that $d^2=-(f_1\ldots f_r)\cdot{\operatorname{id}}_{V^{\bullet}}$, where $f_1,\ldots,f_r$ are functions on $X$. For every $i=1,\ldots,r$ let us introduce the differential $d_i$ on $V^{\bullet}\oplus V^{\bullet}[1]$ by the formula $$d_i(x,x')=
(d(x)+(\prod_{j\neq i}f_j)\cdot x', -d(x')+f_i\cdot x)$$ where $x\in V^{\bullet}$, $x'\in V^{\bullet}[1]$. Then $$\sum_{i=1}^r{\operatorname{ch}}_Z^X(V^{\bullet}\oplus V^{\bullet}[1],d_i)=0$$ for every closed subset $Z{\subset}X$ such that all $(V^{\bullet}\oplus V^{\bullet}[1],d_i)$ are strictly exact off $Z$.
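Note that each $d_i$ indeed squares to zero; a direct expansion, recorded here for convenience, gives $$d_i^2(x,x')=\Big(d^2x+\big(\textstyle\prod_{j=1}^rf_j\big)x,\ d^2x'+\big(\textstyle\prod_{j=1}^rf_j\big)x'\Big)=0,$$ since in each component the mixed terms involving $\big(\prod_{j\neq i}f_j\big)dx'$ and $f_i\,dx$ cancel and $d^2=-(f_1\ldots f_r)\cdot{\operatorname{id}}$.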
[[*Proof*]{}]{}. Let us introduce the differential $D$ on $W^{\bullet}:=(V^{\bullet}\oplus V^{\bullet}[1])^{\oplus r}$ by the formula $$D(x_i,x'_i)_{i=1,\ldots r}=(y_i,y'_i)_{i=1,\ldots,r}$$ where $x_i,y_i\in V^{\bullet}$, $x'_i,y'_i\in V^{\bullet}[1]$, $$\begin{aligned}
&y_i=dx_i+f_{i+1}f_{i+2}\ldots f_r\cdot[x'_1+f_1x'_2+f_1f_2x'_3+\ldots+
f_1\ldots f_{i-1}x'_i]\ \text{for}\ i<r,\\
&y_r=dx_r+x'_1+f_1x'_2+f_1f_2x'_3+\ldots+f_1\ldots f_{r-1}x'_r,\\
&y'_i=-dx'_i+f_ix_i-x_{i-1}\ \text{for}\ i\ge 2,\\
&y'_1=-dx'_1+f_1x_1.\end{aligned}$$ One can easily check that $D^2=0$. There is a natural decreasing filtration of $W^{\bullet}$ by subcomplexes $W^{\bullet}=F^1W^{\bullet}\supset\ldots\supset F^rW^{\bullet}\supset
F^{r+1}W^{\bullet}=0$, where $$F^jW^{\bullet}=\{(x_i,x'_i)_{i=1,\ldots,r}: x_1=\ldots=x_{j-1}=0,
x'_1=\ldots=x'_{j-1}=0\}.$$ The associated graded quotients are $$F^jW^{\bullet}/F^{j+1}W^{\bullet}\simeq (V^{\bullet}\oplus V^{\bullet}[1],d_j),$$ $j=1,\ldots,r$. Therefore, $$\sum_{i=1}^r{\operatorname{ch}}_Z^X(V^{\bullet}\oplus V^{\bullet}[1],d_i)={\operatorname{ch}}_Z^X(W^{\bullet},D).$$ It remains to prove that the complex $(W^{\bullet}, D)$ is strictly exact on $X$. For this we construct a homotopy $h$ between the identity and zero endomorphisms of $W^{\bullet}$. Namely, we set $$h(x_i,x'_i)_{i=1,\ldots r}=(y_i,y'_i)_{i=1,\ldots,r},$$ where $$\begin{aligned}
&y_i=-[x'_{i+1}+f_{i+1}x'_{i+2}+f_{i+1}f_{i+2}x'_{i+3}+\ldots+
f_{i+1}\ldots f_{r-1}x'_r]\ \text{for}\ i<r-1,\\
&y_{r-1}=-x'_r,\ y_r=0,\\
&y'_1=x_r,\ y'_i=0\ \text{for}\ i\ge 2.\end{aligned}$$ It is easy to check that $Dh+hD={\operatorname{id}}_{W^{\bullet}}$.
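For concreteness, in the case $r=2$ (components written in the order $(y_1,y'_1,y_2,y'_2)$) the above formulas read $$D(x_1,x'_1,x_2,x'_2)=(dx_1+f_2x'_1,\ -dx'_1+f_1x_1,\ dx_2+x'_1+f_1x'_2,\ -dx'_2+f_2x_2-x_1),$$ $$h(x_1,x'_1,x_2,x'_2)=(-x'_2,\ x_2,\ 0,\ 0),$$ and a direct substitution gives $Dh+hD={\operatorname{id}}$, while the verification of $D^2=0$ uses $d^2=-f_1f_2\cdot{\operatorname{id}}$.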
Vanishing axiom {#vanishsec}
===============
Henceforward all our schemes are assumed to be quasiprojective over a field $k$. We assume that $\operatorname{char} k>r$ and that $k$ contains all $r$-th roots of unity.
Let $\pi:{{{\mathcal}C}}{\rightarrow}X$ be a family of prestable curves over a scheme $X$, and let ${{{\mathcal}T}}$ be a family of rank-one torsion-free sheaves on ${{{\mathcal}C}}$ equipped with a non-zero homomorphism $b:{{{\mathcal}T}}^r{\rightarrow}{\omega}_{{{{\mathcal}C}}/X}$, where ${\omega}_{{{{\mathcal}C}}/X}$ is the dualizing sheaf of $\pi$. In this situation we defined in section 5.1 of [@PV] the class $c({{{\mathcal}T}},b)\in A^{-\chi}(X)_{{{{\mathbb}Q}}}$, where $\chi$ is the Euler-Poincaré characteristic of members of the family ${{{\mathcal}T}}$. To construct this class we consider the map $\tau:S^rR\pi_*{{{\mathcal}T}}{\rightarrow}{{{\mathcal}O}}_X[-1]$ induced by $b$ and by the trace map ${\operatorname{Tr}}:R\pi_*{\omega}_{{{{\mathcal}C}}/X}{\rightarrow}{{{\mathcal}O}}_X[-1]$. As was proved in Proposition 4.7 of [@PV] there exists a complex $C_0{\rightarrow}C_1$ of vector bundles on $X$ representing $R\pi_*{{{\mathcal}T}}$ such that the map $\tau$ is represented by the chain map of complexes $S^r[C_0{\rightarrow}C_1]{\rightarrow}{{{\mathcal}O}}_X[-1]$. This chain map corresponds to a morphism of vector bundles $\nu:S^{r-1}C_0{\rightarrow}C_1^{\vee}$. We can consider the differential $d:C_0{\rightarrow}C_1$ and the map $\nu$ as sections of the pull-backs of $C_1$ and $C_1^{\vee}$ to the total space of $C_0$. Then $s=(d,\nu)$ will be an isotropic section of the orthogonal vector bundle $p^*C_1\oplus p^*C_1^{\vee}$ on $C_0$, where $p:C_0{\rightarrow}X$ is the projection. Moreover, $s$ vanishes exactly on $X$ embedded into $C_0$ by the zero section. Then we consider the action of $s$ on the spinor bundle ${\Lambda}^*p^*C_1^{\vee}$. The obtained ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded complex $({\Lambda}^*p^*C_1^{\vee},s)$ is exact outside $X{\subset}C_0$. Therefore, the localized Chern character of this complex is an element of the bivariant Chow group $A^*(X{\rightarrow}C_0)_{{{{\mathbb}Q}}}\simeq A^*(X)_{{{{\mathbb}Q}}}$. The class $c({{{\mathcal}T}},b)$ is obtained by multiplying this localized Chern character with the Todd class of $C_1$. Theorem 4.3 of [@PV] assures that $c({{{\mathcal}T}},b)$ does not depend on the choices made.
We can consider $({{{\mathcal}A}}:=p_*({\Lambda}^*p^*C_1^{\vee}),{\delta})$ as a sheaf of ${{{\mathbb}Z}}/2{{{\mathbb}Z}}$-graded dg-algebras over $X$, where the differential ${\delta}$ is induced by $d:C_0{\rightarrow}C_1$. The action of the isotropic section $s$ on ${{{\mathcal}A}}$ has the form ${\delta}+\epsilon(e)$, where $\epsilon(e)$ is the operator of multiplication with the ${\delta}$-closed odd section $e\in{{{\mathcal}A}}$ corresponding to $\nu:S^{r-1}C_0{\rightarrow}C_1^{\vee}$. The proof of Theorem 4.3 in [@PV] can be converted into the proof of the fact that the quasi-isomorphism class of the data $({{{\mathcal}A}},e)$ is uniquely determined by $({{{\mathcal}T}},b)$.
Let ${\sigma}:X{\rightarrow}{{{\mathcal}C}}$ be a section of $\pi$ such that $\pi$ is smooth near ${\sigma}(X)$ (a marked point). By abuse of notation we will denote by $F\mapsto F({\sigma})$ the operation of tensoring with the line bundle ${{{\mathcal}O}}_{{{{\mathcal}C}}}({\sigma}(X))$. The following theorem immediately implies Vanishing axiom for ${c^{1/r}}$ (Axiom 4 of [@JKV]).
\[axiom1thm\] Assume that $b$ factors as a composition $${{{\mathcal}T}}^r\stackrel{b_0}{{\rightarrow}}{\omega}_{{{{\mathcal}C}}/X}(-(r-1){\sigma}){\rightarrow}{\omega}_{{{{\mathcal}C}}/X},$$ where ${\sigma}^*b_0$ is an isomorphism. Then $c({{{\mathcal}T}},b)=0$.
Let us set $L={{{\mathcal}T}}({\sigma})|_{{\sigma}}$. The map ${\sigma}^*b_0$ gives an isomorphism $$L^r{\widetilde}{{\rightarrow}}{\omega}_{{{{\mathcal}C}}/X}({\sigma})|_{\sigma}\simeq{{{\mathcal}O}}_X.$$ Since we are working with rational coefficients, we can replace $X$ by its finite étale covering over which $L$ is trivial (right now we do not need to choose a specific trivialization of $L$).
Let $\pi^{(r)}:{{{\mathcal}C}}^{(r)}{\rightarrow}X$ denote the relative $r$-th symmetric power of ${{{\mathcal}C}}$ over $X$. We denote by ${\sigma}^r\in{{{\mathcal}C}}^{(r)}$ the $X$-point corresponding to the relative divisor $r{\sigma}(X)$ and by ${{{\mathcal}I}}_{{\sigma}^r}{\subset}{{{\mathcal}O}}_{{{{\mathcal}C}}^{(r)}}$ the ideal sheaf of ${\sigma}^r(X){\subset}{{{\mathcal}C}}^{(r)}$. For every coherent sheaf ${{{\mathcal}F}}$ on ${{{\mathcal}C}}$, let ${{{\mathcal}F}}^{(r)}$ denote the $r$-th symmetric power of ${{{\mathcal}F}}$, which is a sheaf on ${{{\mathcal}C}}^{(r)}$. We claim that $b_0$ induces a morphism $$\label{idealsheafmap}
R\pi^{(r)}_*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)}){\rightarrow}R\pi_*({\omega}_{{{{\mathcal}C}}/X}).$$ Indeed, let ${\Delta}:{{{\mathcal}C}}{\rightarrow}{{{\mathcal}C}}^{(r)}$ be the diagonal map. Then we have a natural morphism $${{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)}{\rightarrow}{\Delta}_*{\Delta}^*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)}){\rightarrow}{\Delta}_*{{{\mathcal}T}}^r((r-1){\sigma}){\rightarrow}{\Delta}_*{\omega}_{{{{\mathcal}C}}/X},$$ where the last arrow is induced by $b_0$. Now the morphism (\[idealsheafmap\]) is obtained by applying the functor $R\pi^{(r)}_*$. Composing (\[idealsheafmap\]) with the trace map $R\pi_*({\omega}_{{{{\mathcal}C}}/X}){\rightarrow}{{{\mathcal}O}}_X[-1]$, we get a morphism $${\widetilde}{\tau}:R\pi^{(r)}_*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)}){\rightarrow}{{{\mathcal}O}}_X[-1]$$ that will play a major role in the proof of Theorem \[axiom1thm\]. Note that we also have a natural map $$\iota:R\pi^{(r)}_*({{{\mathcal}T}}^{(r)}){\rightarrow}R\pi^{(r)}_*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)})$$ and an isomorphism $S^r R\pi_*{{{\mathcal}T}}\simeq R\pi^{(r)}_*({{{\mathcal}T}}^{(r)})$, such that $\tau={\widetilde}{\tau}\circ\iota$ can be identified with the map $S^r R\pi_*{{{\mathcal}T}}{\rightarrow}{{{\mathcal}O}}_X[-1]$ used in the definition of $c({{{\mathcal}T}},b)$.
Note that there is an exact triangle $$\label{symextri}
{{{\mathcal}O}}_X[-1]\stackrel{{\delta}}{{\rightarrow}}
R\pi^{(r)}_*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)}){\rightarrow}R\pi^{(r)}_*({{{\mathcal}T}}({\sigma}))^{(r)}{\rightarrow}{{{\mathcal}O}}_X$$ where we use the canonical trivialization of $L^r$.
\[composlem\] The composition ${\widetilde}{\tau}\circ{\delta}$ is the identity map.
[[*Proof*]{}]{}. This follows immediately from the existence of a natural morphism of exact triangles $$\begin{diagram}
{\Delta}^*{{{\mathcal}I}}_{{\sigma}^r}\otimes{{{\mathcal}T}}^r(r{\sigma}) &\rTo & {{{\mathcal}T}}^r(r{\sigma}) &\rTo & {\Delta}^* ({\sigma}^r)_*{{{\mathcal}O}}_X &\rTo\ldots\\
\dTo & & \dTo & & \dTo &\\
{\omega}_{{{{\mathcal}C}}/X} &\rTo &{\omega}_{{{{\mathcal}C}}/X}({\sigma}) &\rTo & {\sigma}_*{{{\mathcal}O}}_X &\rTo\ldots
\end{diagram}$$ and from the fact that the composition $${{{\mathcal}O}}_X[-1]\simeq R\pi_*{\sigma}_*{{{\mathcal}O}}_X[-1]{\rightarrow}R\pi_*{\omega}_{{{{\mathcal}C}}/X}{\rightarrow}{{{\mathcal}O}}_X[-1]$$ is the identity map.
We want to realize the maps ${\widetilde}{\tau}$ and $\iota$ on the level of complexes in a compatible way. We start by realizing the canonical distinguished triangle in $D^b(X)$: $$R\pi_*{{{\mathcal}T}}\stackrel{{\alpha}}{{\rightarrow}} R\pi_*({{{\mathcal}T}}({\sigma}))\stackrel{{\beta}}{{\rightarrow}} L
\stackrel{{\gamma}}{{\rightarrow}} R\pi_*{{{\mathcal}T}}[1]$$ by an exact triple of complexes of vector bundles on $X$.
\[extensionlem\] Let $[C_0\stackrel{d}{{\rightarrow}} C_1]$ be a complex of vector bundles (concentrated in degrees $[0,1]$) representing $R\pi_*{{{\mathcal}T}}$. Then there exists an extension of vector bundles $$\label{mainexseq}
0{\rightarrow}C_0{\rightarrow}{\widetilde}{C}_0{\rightarrow}L{\rightarrow}0,$$ and a morphism ${\widetilde}{d}:{\widetilde}{C}_0{\rightarrow}C_1$ extending $d$, such that the morphism ${\gamma}: L{\rightarrow}R\pi_*{{{\mathcal}T}}[1]$ is represented by the chain map of complexes $$\label{chainext}
[C_0{\rightarrow}{\widetilde}{C}_0]\stackrel{({\operatorname{id}},{\widetilde}{d})}{{\rightarrow}} [C_0{\rightarrow}C_1],$$ hence, the complex $[{\widetilde}{C}_0\stackrel{{\widetilde}{d}}{\rightarrow}C_1]$ represents $R\pi_*({{{\mathcal}T}}({\sigma}))$ and the morphisms ${\alpha}$ and ${\beta}$ are represented by the natural chain maps $$[C_0{\rightarrow}C_1]{\rightarrow}[{\widetilde}{C}_0{\rightarrow}C_1]{\rightarrow}L;$$
[[*Proof*]{}]{}. Applying the second arrow in the exact sequence $$\label{exseq}
{\operatorname{Hom}}(L,C_1){\rightarrow}{\operatorname{Hom}}(L,R\pi_*{{{\mathcal}T}}[1]){\rightarrow}{\operatorname{Ext}}^1(L,C_0){\rightarrow}{\operatorname{Ext}}^1(L,C_1)$$ to the element ${\gamma}$ we get an extension class $e\in{\operatorname{Ext}}^1(L,C_0)$ which becomes trivial in ${\operatorname{Ext}}^1(L,C_1)$. Let $$0{\rightarrow}C_0{\rightarrow}{\widetilde}{C}_0{\rightarrow}L{\rightarrow}0$$ be an extension with the class $e$, ${\widetilde}{d}:{\widetilde}{C}_0{\rightarrow}C_1$ be a splitting of its push-out by $d:C_0{\rightarrow}C_1$. The element in ${\operatorname{Hom}}(L,R\pi_*{{{\mathcal}T}}[1])$ represented by the chain map (\[chainext\]) induces the same class $e$ in ${\operatorname{Ext}}^1(L,C_0)$. Now the sequence (\[exseq\]) shows that after changing a splitting ${\widetilde}{d}$ by an appropriate element of ${\operatorname{Hom}}(L,C_1)$ the chain map (\[chainext\]) will represent ${\gamma}$.
Let $C=[C_0{\rightarrow}C_1]$ be a complex representing $R\pi_*{{{\mathcal}T}}$ and let ${\widetilde}{C}=[{\widetilde}{C}_0{\rightarrow}C_1]$ be the complex representing $R\pi_*({{{\mathcal}T}}({\sigma}))$ obtained by applying the above lemma. Then the complex $S^r C$ (resp. $S^r {\widetilde}{C}$) represents $S^r R\pi_*{{{\mathcal}T}}\simeq R\pi^{(r)}_*({{{\mathcal}T}}^{(r)})$ (resp. $R\pi^{(r)}_*({{{\mathcal}T}}({\sigma}))^{(r)}$) and we have a natural surjective map of complexes $S^r{\widetilde}{C}{\rightarrow}L^r\simeq{{{\mathcal}O}}_X$ induced by the map ${\widetilde}{C}_0{\rightarrow}L$. Then the kernel complex ${\operatorname{ker}}(S^r{\widetilde}{C}{\rightarrow}{{{\mathcal}O}}_X)$ represents $R\pi^{(r)}_*({{{\mathcal}I}}_{{\sigma}^r}\otimes({{{\mathcal}T}}({\sigma}))^{(r)})$ in a way compatible with the exact triangle (\[symextri\]). Moreover, the map $\iota$ is represented by the natural chain map $S^r C{\rightarrow}{\operatorname{ker}}(S^r{\widetilde}{C}{\rightarrow}{{{\mathcal}O}}_X)$. It remains to choose our data in such a way that ${\widetilde}{\tau}$ would be represented by a chain map ${\widetilde}{\tau}:{\operatorname{ker}}(S^r{\widetilde}{C}{\rightarrow}{{{\mathcal}O}}_X){\rightarrow}{{{\mathcal}O}}_X[-1]$. For this we use the following lemma analogous to Proposition 4.7 from [@PV].
\[complexlem\] There exists a complex of vector bundles $C_0{\rightarrow}C_1$ representing $R\pi_*{{{\mathcal}T}}$, such that one has $${\operatorname{Hom}}_{K^b(X)}(E,{{{\mathcal}O}}_X[n])\simeq{\operatorname{Hom}}_{D^b(X)}(E,{{{\mathcal}O}}_X[n])$$ for $n\le 0$ and $E={\operatorname{ker}}(S^r[{\widetilde}{C}_0{\rightarrow}C_1]{\rightarrow}L)$.
[[*Proof*]{}]{}. We start with an arbitrary complex of vector bundles $C'_0{\rightarrow}C'_1$ representing $R\pi_*{{{\mathcal}T}}$ and then replace it by the quasi-isomorphic complex $C_0{\rightarrow}C_1$, where $C_1={{{\mathcal}O}}_X(-m)^{\oplus N}$ is chosen together with a surjection $C_1{\rightarrow}C'_1$ (see [@PV], Lemma 4.6), ${{{\mathcal}O}}_X(1)$ is an ample line bundle on $X$, and $m$ is an integer (later we will need to choose $m$ sufficiently large). The spectral sequence computing ${\operatorname{Hom}}_{D^b(X)}(E,{{{\mathcal}O}}_X[n])$ shows that to prove the lemma it suffices to check the vanishing $$H^i(S^j{\widetilde}{C}_0^{\vee}(m'))=0$$ for $i>0$, $j<r$, $m'\ge m$. Since ${\widetilde}{C}_0$ is an extension of the trivial bundle by $C_0$, this would follow from the vanishing of $H^i(S^jC_0^{\vee}(m'))$ under the same conditions on $i,j,m'$. We know that for sufficiently large $m$ one has $$H^{>0}(S^j(C'_0)^{\vee}\otimes
S^{j_1}(C'_1)^{\vee}\otimes\ldots\otimes S^{j_k}(C'_1)^{\vee}(m'))=0$$ for $j+j_1+\ldots+j_k<r$ and $m'\ge m$. As was shown in Proposition 4.7 of [@PV], this implies that $H^{>0}(S^jC_0^{\vee}(m'))=0$ for $j<r$, $m'\ge m$.
[*Proof of Theorem \[axiom1thm\]*]{}. Let us choose the data $(C_0,C_1,{\widetilde}{C}_0,d,{\widetilde}{d})$ as in Lemmas \[complexlem\] and \[extensionlem\]. Let us set $K:={\operatorname{ker}}(S^r[{\widetilde}{C}_0{\rightarrow}C_1]{\rightarrow}{{{\mathcal}O}}_X)$. Then the morphism ${\widetilde}{\tau}$ is represented by the chain map $K{\rightarrow}{{{\mathcal}O}}_X[-1]$ that corresponds to a morphism $${\widetilde}{\tau}:S^{r-1}{\widetilde}{C}_0\otimes C_1{\rightarrow}{{{\mathcal}O}}_X$$ such that the composition $$\label{zerocomp}
{\operatorname{ker}}(S^r{\widetilde}{C}_0{\rightarrow}{{{\mathcal}O}}_X) {\rightarrow}S^{r-1}{\widetilde}{C}_0\otimes C_1{\rightarrow}{{{\mathcal}O}}_X$$ is zero.
Let $X'{\rightarrow}X$ be the affine bundle classifying splittings of the exact sequence (\[mainexseq\]). Since the pull-back induces an isomorphism of Chow groups of $X$ and $X'$ we can make a base change of our data by the morphism $X'{\rightarrow}X$. Thus, we can assume that the extension (\[mainexseq\]) splits. Let $1\in{\widetilde}{C}_0$ be a section projecting to a trivialization of $L$. It is easy to see that the morphism ${{{\mathcal}O}}_X[-1]{\rightarrow}K$ corresponding to the section $1^{r-1}\otimes {\widetilde}{d}(1)$ of $S^{r-1}{\widetilde}{C}_0\otimes C_1$ represents the map ${\delta}$ from (\[symextri\]). So from Lemma \[composlem\] we derive that ${\widetilde}{\tau}(1^{r-1}\otimes{\widetilde}{d}(1))=1$. Together with the condition that the composition (\[zerocomp\]) vanishes this is equivalent to the equation $${\langle}\nu((x+{\lambda}\cdot 1)^{r-1}),{\widetilde}{d}(x+{\lambda}\cdot 1){\rangle}={\lambda}^r$$ where $x\in C_0{\subset}{\widetilde}{C}_0$, the morphism $\nu:S^{r-1}{\widetilde}{C}_0{\rightarrow}C_1^{\vee}$ is induced by ${\widetilde}{\tau}$. It follows that the section $$s_{{\lambda}}(x)=({\widetilde}{d}(x+{\lambda}),\nu(x+{\lambda}\cdot 1))$$ of the orthogonal bundle $p^*C_1\oplus p^*C_1^{\vee}$ on $C_0$ satisfies $s_{{\lambda}}(x)\cdot s_{{\lambda}}(x)={\lambda}^r$. Applying Lemma \[mainlem\] to the action of $s_{{\lambda}}$ on the spinor bundle ${\Lambda}^*p^*C_1^{\vee}$ we derive the vanishing of localized Chern class corresponding to the isotropic section $s_0$ obtained from $s_{{\lambda}}$ by setting ${\lambda}=0$. But the latter class is precisely $c({{{\mathcal}T}},b)$.
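To see what the last displayed identity amounts to in the simplest case $r=2$, note that it says precisely that the coefficients of $1$, ${\lambda}$ and ${\lambda}^2$ in ${\langle}\nu(x+{\lambda}\cdot 1),{\widetilde}{d}(x+{\lambda}\cdot 1){\rangle}$ are $0$, $0$ and $1$: $${\langle}\nu(x),{\widetilde}{d}(x){\rangle}=0,\qquad {\langle}\nu(x),{\widetilde}{d}(1){\rangle}+{\langle}\nu(1),{\widetilde}{d}(x){\rangle}=0,\qquad {\langle}\nu(1),{\widetilde}{d}(1){\rangle}=1.$$ The last equality is ${\widetilde}{\tau}(1^{r-1}\otimes{\widetilde}{d}(1))=1$ obtained from Lemma \[composlem\], while the first two encode the vanishing of the composition (\[zerocomp\]).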
Ramond factorization axiom {#Ramondsec}
==========================
Let the data $(\pi:{{{\mathcal}C}}{\rightarrow}X,{{{\mathcal}T}}, b:{{{\mathcal}T}}^r{\rightarrow}{\omega}_{{{{\mathcal}C}}/X})$ be as in section \[vanishsec\]. Assume in addition that we have an $X$-point ${\sigma}:X{\rightarrow}{{{\mathcal}C}}$ which is a nodal point of every fiber and that ${\widetilde}{\pi}:{\widetilde}{{{{\mathcal}C}}}{\rightarrow}X$ is a fiberwise normalization of this point. We denote by $n:{\widetilde}{{{{\mathcal}C}}}{\rightarrow}{{{\mathcal}C}}$ the corresponding morphism and by ${\sigma}_1,{\sigma}_2:X{\rightarrow}{\widetilde}{{{{\mathcal}C}}}$ the two disjoint points that project to ${\sigma}$. Finally, let us assume that ${{{\mathcal}T}}$ is locally free at ${\sigma}$ and that the map $b$ is an isomorphism at ${\sigma}$ (in [@JKV] this situation is referred to as the “Ramond case”).
For every ${\lambda}\in k^*$ there is a natural line bundle ${{{\mathcal}L}}_{{\lambda}}$ on ${{{\mathcal}C}}$ such that $n^*{{{\mathcal}L}}_{{\lambda}}\simeq{{{\mathcal}O}}_{{\widetilde}{{{{\mathcal}C}}}}$ and the isomorphism $n^*{{{\mathcal}L}}_{{\lambda}}|_{{\sigma}_1}{\rightarrow}n^*{{{\mathcal}L}}_{{\lambda}}|_{{\sigma}_2}$ corresponds to the multiplication by ${\lambda}$. It is clear that ${{{\mathcal}L}}_{{\lambda}{\lambda}'}\simeq{{{\mathcal}L}}_{{\lambda}}\otimes{{{\mathcal}L}}_{{\lambda}'}$. In particular, if $\xi$ is an $r$-th root of unity then ${{{\mathcal}L}}_{\xi}^r\simeq{{{\mathcal}O}}_{{{{\mathcal}C}}}$. Therefore, we can twist the data $({{{\mathcal}T}},b)$ by considering ${{{\mathcal}T}}\otimes{{{\mathcal}L}}_{\xi}$ and the map $b_{\xi}:({{{\mathcal}T}}\otimes{{{\mathcal}L}}_{\xi})^r{\rightarrow}{\omega}_{{{{\mathcal}C}}/X}$ induced by $b$ and the trivialization of ${{{\mathcal}L}}_{\xi}^r$.
Now the Ramond case of Axiom 3 in [@JKV] is implied easily by Theorem \[axiom1thm\] together with the following result.
\[Ramondthm\] One has $$\sum_{\xi:\xi^r=1} c({{{\mathcal}T}}\otimes{{{\mathcal}L}}_{\xi},b_{\xi})=0.$$
Recall that the relative dualizing sheaves on ${{{\mathcal}C}}/X$ and ${\widetilde}{{{{\mathcal}C}}}/X$ are related by the isomorphism $n^*{\omega}_{{{{\mathcal}C}}/X}\simeq{\omega}_{{\widetilde}{{{{\mathcal}C}}}/X}({\sigma}_1+{\sigma}_2)$ such that the following diagram is commutative $$\begin{diagram}
n^*{\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}_1} &&\rTo && n^*{\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}_2}\\
\dTo &&&&\dTo\\
{\omega}_{{\widetilde}{{{{\mathcal}C}}}/X}({\sigma}_1+{\sigma}_2)|_{{\sigma}_1}&\rTo^{{\operatorname{Res}}}&{{{\mathcal}O}}_X&
\lTo^{-{\operatorname{Res}}}&{\omega}_{{\widetilde}{{{{\mathcal}C}}}/X}({\sigma}_1+{\sigma}_2)|_{{\sigma}_2}
\end{diagram}$$ where the top arrow is the canonical isomorphism (the sign comes from the relation $dx/x=-dy/y$ near the node $xy=0$). In particular, there is a canonical trivialization of ${\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}}$ such that the boundary map ${\delta}:{{{\mathcal}O}}_X\simeq{\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}}{\rightarrow}R\pi_*{\omega}_{{{{\mathcal}C}}/X}[1]$ from the exact triangle $$R\pi_*{\omega}_{{{{\mathcal}C}}/X}\rTo R\pi_*n_*n^*{\omega}_{{{{\mathcal}C}}/X}\rTo{\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}}\rTo^{{\delta}}
R\pi_*{\omega}_{{{{\mathcal}C}}/X}[1]$$ satisfies ${\operatorname{Tr}}\circ{\delta}={\operatorname{id}}$, where ${\operatorname{Tr}}:R\pi_*{\omega}_{{{{\mathcal}C}}/X}[1]{\rightarrow}{{{\mathcal}O}}_X$ is the trace map.
We can recover $R\pi_*{{{\mathcal}T}}$ from $R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}$, where ${\widetilde}{{{{\mathcal}T}}}=n^*{{{\mathcal}T}}$, together with the evaluation maps at ${\sigma}_1$ and ${\sigma}_2$. Namely, if we denote $L={\widetilde}{{{{\mathcal}T}}}|_{{\sigma}_1}\simeq{\widetilde}{{{{\mathcal}T}}}|_{{\sigma}_2}$ then there is an exact triangle $$\label{Ramondtriangle}
R\pi_*{{{\mathcal}T}}{\rightarrow}R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}\rTo^{{\operatorname{ev}}_{1}-{\operatorname{ev}}_{2}} L{\rightarrow}R\pi_*{{{\mathcal}T}}[1]$$ where ${\operatorname{ev}}_{i}:R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}{\rightarrow}L$ is the evaluation map at ${\sigma}_i$ ($i=1,2$). Note that the morphism $b:{{{\mathcal}T}}^r{\rightarrow}{\omega}_{{{{\mathcal}C}}/X}$ induces a morphism $${\widetilde}{b}:{\widetilde}{{{{\mathcal}T}}}^r{\rightarrow}{\omega}_{{\widetilde}{{{{\mathcal}C}}}/X}({\sigma}_1+{\sigma}_2).$$ Moreover, ${\widetilde}{b}$ is an isomorphism at ${\sigma}_1$ and ${\sigma}_2$, so restricting to either of these points we get a trivialization of $L^r$ (the two trivializations are the same). Passing to an étale cover of $X$ we can assume that $L$ itself is trivial.
Let $[C_0\stackrel{d}{{\rightarrow}} C_1]$ be a complex of vector bundles representing $R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}$ with $C_1$ a direct sum of sufficiently negative powers of an ample line bundle on $X$. Then the evaluation maps ${\operatorname{ev}}_{1},{\operatorname{ev}}_{2}:R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}{\rightarrow}L$ can be realized by morphisms $[C_0{\rightarrow}C_1]{\rightarrow}L$ in the homotopy category of complexes. Let $e_1,e_2:C_0{\rightarrow}L$ be the corresponding morphisms (unique up to adding morphisms that factor through $C_1$). Then we can choose a quasiisomorphism of $R\pi_*{{{\mathcal}T}}$ with the complex $${\operatorname{Cone}}([C_0{\rightarrow}C_1]\rTo^{e_1-e_2} L)[-1]=[C_0\rTo^{(d,e_1-e_2)}C_1\oplus L]$$ compatible with the triangle (\[Ramondtriangle\]), where ${\operatorname{Cone}}(C{\rightarrow}C')$ denotes the cone of a morphism of complexes $C{\rightarrow}C'$.
The triangle (\[Ramondtriangle\]) is obtained by applying the functor $R\pi_*$ to the triangle $${{{\mathcal}T}}{\rightarrow}n_*{\widetilde}{{{{\mathcal}T}}}\rTo^{{\operatorname{ev}}_{1}-{\operatorname{ev}}_{2}} {\sigma}_*L{\rightarrow}{{{\mathcal}T}}[1]$$ on ${{{\mathcal}C}}$. To understand the map $S^rR\pi_*{{{\mathcal}T}}{\rightarrow}R\pi_*{\omega}_{{{{\mathcal}C}}/X}$ we can use the symmetric Künneth isomorphism $S^rR\pi_*{{{\mathcal}T}}\simeq R\pi^{(r)}_*({{{\mathcal}T}}^{(r)})$, where ${{{\mathcal}T}}^{(r)}$ is the $r$-th symmetric power of ${{{\mathcal}T}}$ on ${{{\mathcal}C}}^{(r)}$. The maps ${\operatorname{ev}}_1,{\operatorname{ev}}_2:n_*{\widetilde}{{{{\mathcal}T}}}{\rightarrow}{\sigma}_*L$ induce naturally the maps $${\operatorname{ev}}_1^r,{\operatorname{ev}}_2^r:(n_*{\widetilde}{{{{\mathcal}T}}})^{(r)}{\rightarrow}{\sigma}^r_*L^r,$$ where ${\sigma}^r:X{\rightarrow}{{{\mathcal}C}}^{(r)}$ is the $r$-tuple point of ${{{\mathcal}C}}^{(r)}$ corresponding to ${\sigma}$. Let us define a coherent sheaf on ${{{\mathcal}C}}^{(r)}$ as follows: $$K:={\operatorname{ker}}((n_*{\widetilde}{{{{\mathcal}T}}})^{(r)}\rTo^{{\operatorname{ev}}_1^r-{\operatorname{ev}}_2^r}{\sigma}^r_*L^r).$$ Then we have a natural embedding ${{{\mathcal}T}}^{(r)}{\rightarrow}K$ which induces a morphism $$\iota:S^rR\pi_*{{{\mathcal}T}}\simeq R\pi^{(r)}_*{{{\mathcal}T}}^{(r)}{\rightarrow}R\pi^{(r)}_*K.$$ Let ${\Delta}:{{{\mathcal}C}}{\rightarrow}{{{\mathcal}C}}^{(r)}$ be the diagonal embedding. We claim that there is a natural morphism $K{\rightarrow}{\Delta}_*{\omega}_{{{{\mathcal}C}}/X}$, such that the composition of the induced map ${\widetilde}{\eta}:R\pi^{(r)}_*K{\rightarrow}R\pi_*{\omega}_{{{{\mathcal}C}}/X}$ with $\iota$ coincides with the map $\eta:S^rR\pi_*{{{\mathcal}T}}{\rightarrow}R\pi_*{\omega}_{{{{\mathcal}C}}/X}$ induced by $b$. Indeed, ${\Delta}^*K$ maps to the kernel of the upper horizontal arrow in the commutative diagram $$\begin{diagram}
(n_*{\widetilde}{{{{\mathcal}T}}})^{\otimes r}&\rTo^{{\operatorname{ev}}_1^r-{\operatorname{ev}}_2^r}&{\sigma}_*L^r\\
\dTo &&\dTo\\
n_*n^*{\omega}_{{{{\mathcal}C}}/X} &\rTo& {\sigma}_*({\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}})
\end{diagram}$$ Therefore, we obtain the natural map from ${\Delta}^*K$ to the kernel of the lower horizontal arrow in this diagram, i.e., a map ${\Delta}^*K{\rightarrow}{\omega}_{{{{\mathcal}C}}/X}$. By adjunction we get a morphism $K{\rightarrow}{\Delta}_*{\omega}_{{{{\mathcal}C}}/X}$. The restriction of this map to the subsheaf ${{{\mathcal}T}}^{(r)}{\subset}K$ is the map induced by $b$ which implies our claim. We also have a morphism of exact sequences $$\begin{diagram}
0 &\rTo& K&\rTo& (n_*{\widetilde}{{{{\mathcal}T}}})^{(r)}&\rTo^{{\operatorname{ev}}_1^r-{\operatorname{ev}}_2^r}&{\sigma}^r_*L^r &\rTo& 0\\
&&\dTo&&\dTo &&\dTo\\
0 &\rTo&{\Delta}_*{\omega}_{{{{\mathcal}C}}/X}&\rTo&{\Delta}_*n_*n^*{\omega}_{{{{\mathcal}C}}/X} &\rTo&{\sigma}^r_*({\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}})&\rTo&0
\end{diagram}$$ This implies the commutativity of the following diagram: $$\begin{diagram}
{{{\mathcal}O}}_X\simeq &L^r &\rTo& R\pi^{(r)}_*K[1]\\
&\dTo&&\dTo^{{\widetilde}{\eta}[1]} \\
&{\omega}_{{{{\mathcal}C}}/X}|_{{\sigma}} &\rTo&R\pi_*{\omega}_{{{{\mathcal}C}}/X}[1]
\end{diagram}$$ Therefore, the composition of the map ${\widetilde}{\tau}:={\operatorname{Tr}}\circ{\widetilde}{\eta}[1]:R\pi^{(r)}_*K[1]{\rightarrow}{{{\mathcal}O}}_X$ with the natural map ${{{\mathcal}O}}_X\simeq L^r{\rightarrow}R\pi^{(r)}_*K[1]$ is equal to the identity. On the other hand, since ${\widetilde}{\eta}\circ\iota=\eta$, it follows that the composition ${\widetilde}{\tau}\circ\iota=\tau:S^rR\pi_*{{{\mathcal}T}}[1]{\rightarrow}{{{\mathcal}O}}_X$ is exactly the map induced by $b$ (which is used in the definition of the class $c({{{\mathcal}T}},b)$).
Note that $R\pi^{(r)}_*n_*{\widetilde}{{{{\mathcal}T}}}\simeq S^rR{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}$, so the object $R\pi^{(r)}_*K$ fits into the distinguished triangle $$R\pi^{(r)}_*K{\rightarrow}S^r R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}\rTo^{{\operatorname{ev}}_1^r-{\operatorname{ev}}_2^r} L^r{\rightarrow}R\pi^{(r)}_*K[1].$$ Therefore, it can be represented by the complex ${\operatorname{Cone}}(S^r[C_0{\rightarrow}C_1]\rTo^{e_1^r-e_2^r} L^r)[-1]$ in a way compatible with this triangle. Furthermore, the natural morphism $S^rR\pi_*{{{\mathcal}T}}{\rightarrow}R\pi^{(r)}_*K$ is realized by the natural map of complexes $$\label{complexmap}
S^r[C_0\rTo^{(d,e_1-e_2)}C_1\oplus L]{\rightarrow}{\operatorname{Cone}}(S^r[C_0{\rightarrow}C_1]\rTo^{e_1^r-e_2^r} L^r)[-1]$$ with the components ${\operatorname{id}}:S^r C_0{\rightarrow}S^r C_0$, $$S^{r-1}C_0\otimes (C_1\oplus L){\rightarrow}(S^{r-1}C_0\otimes C_1)\oplus L^r:
x^{r-1}\otimes (y,z)\mapsto (x^{r-1}\otimes y, \sum_{i=0}^{r-1}e_1^i(x)e_2^{r-1-i}(x)z),$$ etc.; the compatibility with the differentials reduces to the identity $(e_1-e_2)\sum_{i=0}^{r-1}e_1^ie_2^{r-1-i}=e_1^r-e_2^r$. Finally, we claim that for a suitable choice of the complex $C_0{\rightarrow}C_1$ (as in Proposition 4.7 of [@PV]) the map ${\widetilde}{\tau}:R\pi^{(r)}_*K[1]{\rightarrow}{{{\mathcal}O}}_X$ is represented by the chain map of complexes ${\operatorname{Cone}}(S^r[C_0{\rightarrow}C_1]\rTo^{e_1^r-e_2^r} L^r){\rightarrow}{{{\mathcal}O}}_X$. This is a consequence of the following general result.
\[Ramondlem\] Let $g:A{\rightarrow}B$, $f:B{\rightarrow}C$ be a pair of maps in the homotopy category ${{{\mathcal}K}}$ of some abelian category and let ${{{\mathcal}D}}$ be the corresponding derived category. Consider the subsets $H_{{{{\mathcal}K}}}(f){\subset}{\operatorname{Hom}}_{{{{\mathcal}K}}}({\operatorname{Cone}}(g),C)$ and $H_{{{{\mathcal}D}}}(f){\subset}{\operatorname{Hom}}_{{{{\mathcal}D}}}({\operatorname{Cone}}(g),C)$ consisting of morphisms ${\operatorname{Cone}}(g){\rightarrow}C$ such that their composition with the canonical morphism $i:B{\rightarrow}{\operatorname{Cone}}(g)$ is equal to $f$ (in ${{{\mathcal}K}}$ and ${{{\mathcal}D}}$ respectively). Assume that the map $${\operatorname{Hom}}_{{{{\mathcal}K}}}(A,C){\rightarrow}{\operatorname{Hom}}_{{{{\mathcal}D}}}(A,C)$$ is injective and the map $${\operatorname{Hom}}_{{{{\mathcal}K}}}(A[1],C){\rightarrow}{\operatorname{Hom}}_{{{{\mathcal}D}}}(A[1],C)$$ is surjective. Then the natural map $$\kappa:H_{{{{\mathcal}K}}}(f){\rightarrow}H_{{{{\mathcal}D}}}(f)$$ is surjective.
[[*Proof*]{}]{}. Let us denote by $\pi:{\operatorname{Cone}}(g){\rightarrow}A[1]$ the canonical chain map. If the set $H_{{{{\mathcal}D}}}(f)$ is empty then the assertion is clear, so we can assume that $H_{{{{\mathcal}D}}}(f)\neq\emptyset$. Then the composition $f\circ g:A{\rightarrow}C$ becomes zero in the derived category. By our assumption the natural map ${\operatorname{Hom}}_{{{{\mathcal}K}}}(A,C){\rightarrow}{\operatorname{Hom}}_{{{{\mathcal}D}}}(A,C)$ is injective, hence $f\circ g$ is homotopic to zero. Every homotopy $h$ from $f\circ g$ to $0$ induces naturally a chain map ${\operatorname{Cone}}(h):{\operatorname{Cone}}(g){\rightarrow}C$ which coincides with $f$ on the subcomplex $i(B){\subset}{\operatorname{Cone}}(g)$. In fact, it is easy to see that the map $h\mapsto{\operatorname{Cone}}(h)$ is a bijection between homotopies from $f\circ g$ to $0$ and chain maps ${\operatorname{Cone}}(g){\rightarrow}C$ extending $f$ on $B$. If we have two homotopies $h_1,h_2$ from $f\circ g$ to $0$ then the difference $h_1-h_2$ gives a chain map from $A[1]$ to $C$. It is easy to see that $${\operatorname{Cone}}(h_1)-{\operatorname{Cone}}(h_2)=(h_1-h_2)\circ\pi.$$
Now let ${\gamma}\in H_{{{{\mathcal}D}}}(f)$ be any element. Let us pick a homotopy $h_0$ from $f\circ g$ to $0$. Then the homotopy class $[{\operatorname{Cone}}(h_0)]$ is an element of $H_{{{{\mathcal}K}}}(f)$. The composition of $\kappa([{\operatorname{Cone}}(h_0)])-{\gamma}$ with $i$ vanishes in the derived category, hence we have $\kappa([{\operatorname{Cone}}(h_0)])-{\gamma}={\beta}\circ\pi$ for some ${\beta}\in{\operatorname{Hom}}_{{{{\mathcal}D}}}(A[1],C)$. By our assumption there exists a chain map ${\widetilde}{{\beta}}:A[1]{\rightarrow}C$ representing ${\beta}$. Then $h=h_0-{\widetilde}{{\beta}}$ is another homotopy from $f\circ g$ to $0$. We have $$\kappa([{\operatorname{Cone}}(h)])=\kappa([{\operatorname{Cone}}(h_0)-{\widetilde}{{\beta}}\circ\pi])=\kappa([{\operatorname{Cone}}(h_0)])-{\beta}\circ\pi={\gamma}.$$
We apply the above lemma to $A=S^r[C_0{\rightarrow}C_1]$, $B=L^r$ and $C={{{\mathcal}O}}_X$, where $f:L^r{\rightarrow}{{{\mathcal}O}}_X$ is the canonical isomorphism. To satisfy the assumptions of the lemma we choose the complex $C_0{\rightarrow}C_1$ representing $R{\widetilde}{\pi}_*{\widetilde}{{{{\mathcal}T}}}$ with $C_1$ a direct sum of sufficiently negative powers of an ample line bundle (one has to argue as in Proposition 4.7 of [@PV]). Hence, the map ${\widetilde}{\tau}$ is represented by a morphism in the homotopic category ${\operatorname{Cone}}(S^r[C_0{\rightarrow}C_1]\rTo^{e_1^r-e_2^r} L^r){\rightarrow}{{{\mathcal}O}}_X$ that we still denote by ${\widetilde}{\tau}$. The restriction of ${\widetilde}{\tau}$ to the subcomplex $L^r$ is equal to the canonical isomorphism $L^r{\rightarrow}{{{\mathcal}O}}_X$, while its composition with the map (\[complexmap\]) is the morphism $$\tau:S^r[C_0\rTo^{(d,e_1-e_2)}C_1\oplus L]{\rightarrow}{{{\mathcal}O}}_X[-1]$$ that should be used for the computation of $c({{{\mathcal}T}},b)$. It follows that the restriction of the corresponding morphism $$\tau:S^{r-1}C_0\otimes (C_1\oplus L){\rightarrow}{{{\mathcal}O}}_X$$ to $S^{r-1}C_0\otimes L$ has form $$\tau(x^{r-1}\otimes y)=\sum_{i=0}^{r-1} e_{1}(x)^i e_{2}(x)^{r-1-i}y\in L^r\simeq{{{\mathcal}O}}_X$$ where $x\in C_0$, $y\in L$. Hence, the corresponding isotropic section of $p^*(C_1\oplus L\oplus C_1^{\vee}\oplus L^{-1})$ (where $p:C_0{\rightarrow}X$ is the projection) has form $$s(x)=(d(x),(e_1-e_2)(x),\nu(x),\sum_{i=0}^{r-1} e_{1}(x)^i e_{2}(x)^{r-1-i}),$$ where the last component belongs to $L^{r-1}\simeq L^{-1}$, $\nu$ is given by some morphism $S^{r-1}C_0{\rightarrow}C_1^{\vee}$.
To compute the class corresponding to the twisted data $({{{\mathcal}T}}\otimes{{{\mathcal}L}}_{\xi},b_{\xi})$ for some $r$-th root of unity $\xi$ we simply have to replace the pair $(e_1,e_2)$ by $(e_1,\xi e_2)$. Note that this will not affect the definition of $K$ and of the morphism ${\widetilde}{\tau}$. Hence the corresponding isotropic section of $p^*(C_1\oplus L\oplus C_1^{\vee}\oplus L^{-1})$ will take form $$s_{\xi}(x)=
(d(x),(e_1-\xi e_2)(x),\nu(x),\sum_{i=0}^{r-1} \xi^{r-1-i}e_{1}(x)^i e_{2}(x)^{r-1-i}),$$ for some $\nu:S^{r-1}C_0{\rightarrow}C_1^{\vee}$.
Now we can finish the proof of Theorem \[Ramondthm\]. For every $\xi$ the class $c({{{\mathcal}T}}\otimes{{{\mathcal}L}}_{\xi},b_{\xi})$ is equal to $${\operatorname{td}}(C_1\oplus L)\cdot{\operatorname{ch}}^{C_0}_X({\Lambda}^*p^*(C_1^{\vee}\oplus L^{-1}),s_{\xi}),$$ where $s_{\xi}$ is the isotropic section of $p^*(C_1\oplus L\oplus C_1^{\vee}\oplus L^{-1})$ constructed above. Let us set $f_{\xi}=e_1-\xi e_2$. We consider $(f_{\xi})$ as a collection of sections of $p^*L$ on $C_0$. We have an orthogonal decomposition $$p^*(C_1\oplus L\oplus C_1^{\vee}\oplus L^{-1})\simeq p^*(C_1\oplus C_1^{\vee})\oplus
p^*(L\oplus L^{-1}),$$ so that the section $s_{\xi}$ has components $s_0=(d,\nu)\in p^*(C_1\oplus C_1^{\vee})$ and $(f_{\xi},\prod_{\xi'\neq\xi} f_{\xi'})$; here we use the identity $\sum_{i=0}^{r-1}\xi^{r-1-i}e_1^ie_2^{r-1-i}=\prod_{\xi'\neq\xi}(e_1-\xi'e_2)$, which holds since $\xi^r=1$. Recall that we can trivialize $L$, so the spinor bundle ${\Lambda}^*p^*(C_1^{\vee}\oplus L^{-1})$ can be identified with ${\Lambda}^*p^*C_1^{\vee}\oplus {\Lambda}^*p^*C_1^{\vee}[1]$. Under this identification the action of the sections $s_{\xi}$ will take the form of the differentials $(d_i)$ in Lemma \[mainlem2\], where the odd endomorphism $d$ of ${\Lambda}^*p^*C_1^{\vee}$ is given by the action of $s_0$. Now the assertion of the theorem follows from Lemma \[mainlem2\].
[99]{} P.Baum, W.Fulton, R.MacPherson, [*Riemann-Roch for singular varieties*]{}, Inst. Hautes Études Sci. Publ. Math. **45** (1975), 101–145.
I.Ciocan-Fontanine, M.Kapranov, *Derived Quot schemes*, preprint math.AG/9905174. I.Ciocan-Fontanine, M.Kapranov, *Derived Hilbert schemes*, preprint math.AG/0005155.
W.Fulton, [*Intersection theory*]{}, Springer, 1998.
T.J.Jarvis, T.Kimura, A.Vaintrob, *Moduli spaces of higher spin curves and integrable hierarchies*, Compositio Math., [math.AG/9905034]{}.
M.Kontsevich, *Enumeration of rational curves via torus action*, in *Moduli Space of Curves (R.Dijkgraaf, C.Faber, G.van der Geer, Eds.)*, 335–368, Birkhauser, Boston, 1995.
M.Kontsevich, Yu.I.Manin, *Gromov-Witten classes, quantum cohomology, and enumerative geometry*, Commun. Math. Phys.**164** (1994), 525–562.
T. Mochizuki, [*The virtual class of the moduli stack of $r$-spin curves*]{}, preprint, see http://www.math.ias.edu/$\tilde{\phantom{x}}$takuro/list.html
A. Polishchuk, A. Vaintrob, [*Algebraic construction of Witten’s top Chern class*]{}, in [*Advances in Algebraic Geometry motivated by Physics*]{}, E. Previato, ed., 229–250. AMS, 2001.
E.Witten, *The $N$-matrix model and gauged WZW models,* Nucl. Phys. B **371** (1992), no. 1-2, 191-245.
E.Witten, *Algebraic geometry associated with matrix models of two dimensional gravity*, Topological methods in modern mathematics (Stony Brook, NY, 1991), Publish or Perish, Houston, 1993, 235-269.
[^1]: This work was partially supported by NSF grant DMS-0070967.
---
abstract: 'The properties of $\sim 1000$ high-excitation and low-excitation radio galaxies (HERGs and LERGs) selected from the [@2016MNRAS.460.4433H] $1 - 2$ GHz VLA survey of Stripe 82 are investigated. The HERGs in this sample are generally found in host galaxies with younger stellar populations than LERGs, consistent with other work. The HERGs tend to accrete at a faster rate than the LERGs, but there is more overlap in the accretion rates of the two classes than has been found previously. We find evidence that mechanical feedback may be significantly underestimated in hydrodynamical simulations of galaxy evolution; 84 % of this sample release more than 10 % of their energy in mechanical form. Mechanical feedback is significant for many of the HERGs in this sample as well as the LERGs; nearly 50 % of the HERGs release more than 10 % of their energy in their radio jets.'
---
Introduction
============
One of the key unknowns in galaxy evolution is how star-formation in galaxies becomes quenched; it is widely thought that feedback from active galactic nuclei (AGN) is responsible for this, but the mechanisms are not well understood. Observational evidence (e.g. [@2012MNRAS.421.1569B]) suggests that AGN can be split into two distinct classes: high-excitation radio galaxies (HERGs; also known as cold mode, quasar mode or radiative mode sources) which radiate efficiently across the electromagnetic spectrum and possess the typical AGN accretion-related structures such as an accretion disk and a dusty torus, and low-excitation radio galaxies (LERGs; also known as hot mode, radio mode or jet mode sources) which radiate inefficiently and emit the bulk of their energy in mechanical form as powerful radio jets (see e.g. ).
It is thought that these two AGN accretion modes have different feedback effects on the host galaxy (see review by ) and lead to the two different feedback paths in semi-analytic and hydrodynamic simulations, but despite being widely studied over the last decade (e.g. [@2007MNRAS.376.1849H]; [@2009Natur.460..213C]; ) these processes are not well understood. In these proceedings I use a sample of $\sim 1000$ HERGs and LERGs to investigate the host galaxy properties and accretion rates of the two classes, and explore the implications of these results for AGN feedback.
Data used and source classification
===================================
This work is based on a $1-2$ GHz Karl G. Jansky Very Large Array (VLA) survey covering 100 deg$^2$ in SDSS Stripe 82 which has a $1 \sigma$ rms noise of 88 $\mu$Jy beam$^{-1}$ and a resolution of $16 \times 10$ arcsec; full details of the radio survey are given in [@2016MNRAS.460.4433H]. This radio catalogue was matched to the SDSS DR14 optical catalogue ([@2018ApJS..235...42A]) by eye; details of the matching process are described in [@2018MNRAS.480..707P]. We restrict our analysis to sources with a counterpart in the spectroscopic catalogue with $z < 0.7$; our sample therefore has 1501 sources which cover the range $0.01 < z < 0.7$ and $10^{21} < L_{1.4~\rm GHz} / \textrm{W Hz}^{-1} < 10^{27}$. We use the value-added spectroscopic catalogues described in [@2013MNRAS.431.1383T].
Sources are classified as either AGN or star-forming galaxies using information from their optical spectra; full details of this process are given in [@2018MNRAS.480..707P]. The AGN in the sample are then classified as HERGs or LERGs using the criteria given in [@2012MNRAS.421.1569B], which use a combination of emission line ratios and \[OIII\] equivalent width. This is explained in detail in [@2018MNRAS.480..358W]. In addition to the Best and Heckman classification scheme, we introduce a ‘probable LERG’ class for sources which cannot be classified according to the full criteria but which have an \[OIII\] equivalent width $<5$ Å. The total number of sources in each category is as follows: HERGs = 60, LERGs = 149, probable LERGs = 600, QSOs = 81 and unclassified sources = 271, with 340 star-forming galaxies.
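Purely as an illustration of this decision flow (the full emission-line criteria of [@2012MNRAS.421.1569B] are not reproduced here and are represented by a placeholder argument), the extra ‘probable LERG’ rule can be sketched as follows; the function below is a toy summary, not the actual classification code used for this sample.

```python
def classify_agn(ew_oiii, full_criteria_result=None):
    """Toy sketch of the classification flow described in the text.

    full_criteria_result : 'HERG' or 'LERG' if the full Best & Heckman (2012)
        criteria could be applied to this source, otherwise None.
    ew_oiii : [OIII] equivalent width in Angstrom.
    """
    if full_criteria_result in ("HERG", "LERG"):
        return full_criteria_result
    if ew_oiii < 5.0:          # additional cut introduced in this work
        return "probable LERG"
    return "unclassified"
```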
Host galaxy properties
======================
![4000 Å break strength as a function of redshift with the HERGs, LERGs, probable LERGs and unclassified sources shown separately. The filled shapes show the mean values in each luminosity bin for the different samples. From [@2018MNRAS.480..358W][]{data-label="fig:Dn4000"}](fig1_whittam.pdf){width="7cm"}
Using the wealth of multi-wavelength data available in the field, we can compare the properties of the host galaxies of the HERGs and LERGs. 4000 Å break strength, which traces stellar age, is shown as a function of redshift in Fig. \[fig:Dn4000\]. This shows that HERGs tend to be found in host galaxies with younger stellar populations than LERGs across the redshift range probed here. This agrees with other results in the literature (e.g. [@2012MNRAS.421.1569B]) and is consistent with the idea that HERGs have a supply of cold gas which provides the fuel for both star-formation and AGN activity. We refer the reader to [@2018MNRAS.480..358W] for further discussion of this and other host galaxy properties.
Accretion rates
===============
![Left panel shows the distribution of Eddington-scaled accretion rates for the different source classifications. Right panel shows the fraction of the accreted energy released in the jets for the different source types as a function of redshift. Triangles represent sources with an upper limit on their radiative accretion rate, so the fraction of energy released in the jet is a lower limit. The dashed line is the radio mode feedback model used in Horizon-AGN from [@2014MNRAS.444.1453D]. The uncertainties in the scaling relations used to estimate $L_\textrm{bol}$ and $L_\textrm{mech}$ are 0.4 and 0.7 dex respectively. From [@2018MNRAS.480..358W]. []{data-label="fig:accretion"}](fig2a_whittam.pdf "fig:"){width="6.5cm"} ![Left panel shows the distribution of Eddington-scaled accretion rates for the different source classifications. Right panel shows the fraction of the accreted energy released in the jets for the different source types as a function of redshift. Triangles represent sources with an upper limit on their radiative accretion rate, so the fraction of energy released in the jet is a lower limit. The dashed line is the radio mode feedback model used in Horizon-AGN from [@2014MNRAS.444.1453D]. The uncertainties in the scaling relations used to estimate $L_\textrm{bol}$ and $L_\textrm{mech}$ are 0.4 and 0.7 dex respectively. From [@2018MNRAS.480..358W]. []{data-label="fig:accretion"}](fig2b_whittam.pdf "fig:"){width="6.5cm"}
There is a scenario building up in the literature that there are two distinct accretion modes which are responsible for HERGs and LERGs respectively; in this scenario there is a dichotomy in accretion rates between the two classes, relating to the two different modes. The radiative accretion rates of the AGN in this sample are estimated from their \[OIII\] 5007 line luminosity and the mechanical accretion rates are estimated from the 1.4-GHz radio luminosity using the [@2010ApJ...720.1066C] relationship. Black hole masses are estimated from the local black hole mass - bulge mass relation, allowing Eddington-scaled accretion rates to be calculated as follows: $\lambda = (L_{\rm bol} + L_{\rm mech}) / L_{\rm Edd}$. The left panel of Fig. \[fig:accretion\] shows the distribution of Eddington-scaled accretion rates for the HERGs and LERGs in this sample. It is clear from this figure that the HERGs generally accrete at a faster Eddington-scaled rate than the LERGs, with a distribution that peaks just below 0.1 compared to 0.01. However, there is a significant overlap in accretion rates between the two classes, with HERGs found across nearly the full range of accretion rates.
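To make the bookkeeping explicit, the quantities entering this comparison (and the mechanical fraction discussed in the next section) combine as in the minimal sketch below; the luminosities, black hole mass and the Eddington constant per solar mass are illustrative inputs rather than values taken from the catalogue, and the scaling relations used to obtain $L_\textrm{bol}$, $L_\textrm{mech}$ and the black hole masses themselves are not reproduced here.

```python
# Illustrative only: combine an assumed bolometric luminosity, mechanical (jet)
# luminosity and black hole mass into an Eddington-scaled accretion rate
# lambda = (L_bol + L_mech) / L_Edd and a mechanical fraction.
L_EDD_PER_MSUN = 1.26e31  # Eddington luminosity per solar mass, in W

def eddington_ratio(L_bol, L_mech, M_bh_msun):
    L_edd = L_EDD_PER_MSUN * M_bh_msun
    lam = (L_bol + L_mech) / L_edd
    f_mech = L_mech / (L_bol + L_mech)
    return lam, f_mech

# placeholder numbers for a hypothetical LERG-like source
lam, f_mech = eddington_ratio(L_bol=1e36, L_mech=3e36, M_bh_msun=3e8)
print(f"lambda = {lam:.1e}, mechanical fraction = {f_mech:.2f}")
```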
The dichotomy in accretion rates between HERGs and LERGs is therefore less clear in this study than it is in other studies in the literature; for example [@2012MNRAS.421.1569B] and [@2014MNRAS.440..269M] both find almost no overlap in accretion rates between the two classes. In contrast, our sample seems to suggest a more continuous range of accretion rates. Note that our sample probes fainter radio luminosities ($10^{21} < L_{1.4~\rm GHz} / \textrm{W Hz}^{-1} < 10^{27}$) than other results in the literature; this could be part of the reason for the difference in our results, although we see some overlap in the accretion rates of the HERGs and LERGs across the luminosity range sampled here. We also do not observe any dichotomy in the \[OIII\] equivalent width or Excitation Index distributions, the two main parameters used to classify the HERGs and LERGs, suggesting that any dividing value chosen in these parameters is perhaps arbitrary for our sample.
Implications for AGN feedback
=============================
AGN feedback is required in all leading hydrodynamical simulations of galaxy evolution to quench star-formation. Some simulations implement mechanical and radiative feedback (assumed to relate to LERGs and HERGs respectively) separately (e.g. Horizon-AGN; [@2014MNRAS.444.1453D]) while others do not (e.g. MUFASA; [@2016MNRAS.462.3265D]).
The right panel of Fig. \[fig:accretion\] shows $L_\textrm{mech} / (L_\textrm{bol} + L_\textrm{mech})$, which provides an estimate of the fraction of the total accreted energy deposited back into the interstellar medium in mechanical form. The dashed line shows the mechanical feedback efficiency of 10 % assumed in Horizon-AGN; it is clear that this is a significant underestimate for the sources in this sample, with 84 % of the sample depositing more than 10 % of their energy in mechanical form. This plot also demonstrates that mechanical feedback can be significant for HERGs as well as for LERGs; nearly 50 % (29/60) of the HERGs in this sample release more than 10 % of their accreted energy in mechanical form.
There is a scatter of $\sim 2$ dex in $L_\textrm{mech} / (L_\textrm{bol} + L_\textrm{mech})$, which shows that the direct scaling between accretion rate and mechanical feedback assumed in most hydrodynamical simulations does not necessarily hold. This may be because environment plays a significant role.
Conclusions and future perspectives
===================================
We have used the [@2016MNRAS.460.4433H] VLA 1-2 GHz radio survey covering 100 deg$^2$ in Stripe 82 along with optical spectroscopy to probe the properties of $\sim 1000$ high- and low-excitation radio galaxies. The key results of this work are:
- HERGs tend to be found in host galaxies with younger stellar populations than LERGs, consistent with other results in the literature.
- While the HERGs in our sample tend to have higher accretion rates than the LERGs, we find considerable overlap in the accretion rates of the two samples.
- Mechanical feedback can be significant for HERGs as well as for LERGs, and may be underestimated for both populations in hydrodynamical simulations.
The advent of new radio telescopes, such as MeerKAT, LOFAR and ASKAP, means there is potential to make a large step forward in our understanding of radio galaxies and their mechanical feedback effects in the next few years. One example of a survey planned with a new instrument is the MeerKAT MIGHTEE survey ([@2016mks..confE...6J]) which has just started to collect data and will survey 10 deg$^2$ to a depth of 1 $\mu$Jy at 800 - 1600 MHz in four different fields. The unique combination of deep radio images over a significant cosmological volume along with excellent multi-wavelength coverage means we will be able to, amongst other things, extend the study described in this proceedings to significantly fainter luminosities and probe whether or not there is an accretion mode dichotomy, particularly at lower luminosities.
*Acknowledgements* The author thanks Matthew Prescott, Matt Jarvis, Kim McAlpine and Ian Heywood for their significant contributions to this work. This research was supported by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Technology.
Abolfathi B., et al., 2018, *ApJS*, 235, 42
Best P. N., Heckman T. M., 2012, *MNRAS*, 421, 1569
Cattaneo A., et al., 2009, *Nature*, 460, 213
Cavagnolo K. W., et al., 2010, *ApJ*, 720, 1066
Dav[é]{} R., Thompson R., Hopkins P. F., 2016, *MNRAS*, 462, 3265
Dubois Y., et al., 2014, *MNRAS*, 444, 1453
Fabian A. C., 2012, *ARA&A*, 50, 455
Hardcastle M. J., Evans D. A., Croston J. H., 2007, *MNRAS*, 376, 1849
Heckman T. M., Best P. N., 2014, *ARA&A*, 52, 589
Heywood I., et al., 2016, *MNRAS*, 460, 4433
Jarvis M., et al., 2016, *Proceedings of MeerKAT Science: On the Pathway to the SKA. 25-27 May, 2016 Stellenbosch, South Africa*, 6
Mingo B., et al., 2014, *MNRAS*, 440, 269
Prescott M., et al., 2018, *MNRAS*, 480, 707
Thomas D., et al., 2013, *MNRAS*, 431, 1383
Whittam I. H., Prescott M., McAlpine K., Jarvis M. J., Heywood I., 2018, *MNRAS*, 480, 358
Introduction
============
The experimental detection [@1; @1a] of Bose-Einstein condensation (BEC) at ultralow temperature in dilute trapped bosons (alkali metal and hydrogen atoms and the recent possibility in molecules) has spurred intense theoretical activities on various aspects of the condensate [@2; @3; @4; @5; @6]. Many properties of the condensate are usually described by the mean-field time-dependent Gross-Pitaevskii (GP) equation [@5; @8]. One of the most interesting features of BEC has been observed in the case of attractive interatomic interaction [@1a; @2]. In that case the condensate is stable for a maximum critical number of atoms, beyond which the condensate experiences a collapse. When the number of atoms increases beyond the critical number, due to interatomic attraction the radius tends to zero and the central density of the condensate tends to infinity. Consequently, the condensate collapses emitting particles until the number of atoms is reduced below the critical number and a stable configuration is reached. The condensate may experience a series of collapses [@1a; @2]. This phenomenon was observed in the BEC of $^7$Li atoms with negative scattering length denoting attractive interaction where the critical number of atoms was about 1400 [@1a; @2]. Theoretical analyses based on the GP equation in the case of $^7$Li atoms also confirmed this collapse [@1a; @2; @5; @6].
More recently, there has been experimental realization of BEC involving atoms in two different quantum states [@excpl1; @excpl2]. In one experiment $^{87}$Rb atoms formed in the $F=1$, $m=-1$ and $F=2$, $m=1$ states by the use of a laser served as two different species, where $F$ and $m$ are the total angular momentum and its projection [@excpl1]. In another experiment a coupled BEC was formed with the $^{87}$Rb atoms in the $F=1, m=-1$ and $F=2, m=2$ states [@5; @excpl2]. It is possible to use the same magnetic trap to confine atoms in two magnetic states and this makes these experimental investigations technically simpler compared to a realization of BEC with two different types of atoms requiring two different trapping mechanisms. This is why so far it has not been possible to prepare a coupled BEC with two different types of atoms. In addition to coupled atomic condensates, there has been consideration of a hybrid BEC where one type of bosons are atoms and the other molecules [@cpl2]. These initiated theoretical activities in BEC involving more than one types of bosons using the coupled GP equation [@cpl2; @cpl1].
In addition to just forming a coupled BEC with two quantum states of the same atom, these studies also yielded crucial information about the interaction among component atoms and measured the percentage of each quantum states in the condensate [@excpl1; @excpl2]. It has been found that $^{87}$Rb atoms have repulsive interaction in all three quantum states. Also, the strength of repulsive interactions in $F=1, m=-1$ and $F=2,m=2$ states are essentially identical. The interaction between an atom in the $F=1, m=-1$ state and another in the $F=2, m=2$ state is repulsive. As the change in the $m$ value of a atomic quantum state does not correspond to a substantial structural change, it is likely that such change would not correspond to a large change in the atomic interaction.
Here we study theoretically the collapse in a coupled BEC composed of two quantum states 1 and 2 of a bosonic atom using the coupled time-dependent GP equation. We motivate this study by considering two possible atomic states of $^7$Li whenever possible. An experiment of collapse in a coupled BEC has not yet been realized but could be possible in the future. In the case of $^7$Li the interaction in state 1 is taken to be attractive which is responsible for collapse. Here there are three types of interactions denoted by the scattering lengths $a_{ij}$, $i,j=1,2$, between states $i$ and $j$. A negative (positive) scattering length denotes an attractive (repulsive) interaction. We study the collapse with different possibilities of attraction and repulsion between atoms in state 1 and 2. If one of the scattering lengths is negative, at least one component of the condensate may experience collapse. If two of the scattering lengths are negative one can have collapse in both components. Specifically, one can also have collapse of both components if $a_{12}$ is negative and $a_{ii}$, $i=1,2$ are positive.
The usual GP equation conserves the number of atoms. The dynamics of the collapse (growth and decay of number of atoms) is best studied by introducing an absorptive contact interaction in the GP equation which allows for a growth in the particle number from an external source. One has also to introduce an imaginary quartic three-body interaction term responsible for recombination loss from the condensate [@2]. If the strengths of these two terms are properly chosen, the solution of the time-dependent GP equation could produce a growth of the condensate with time when the number of atoms is less than the critical number. Once it increases past the critical number, the three-body interaction takes control and the number of atoms suddenly drops below the critical level by recombination loss signaling a collapse [@2]. Then the absorptive term takes over and the number of atoms starts to increase again. This continues indefinitely showing an infinite sequence of collapse.
Coupled Gross-Pitaevskii Equation with absorption
=================================================
We consider the following spherically symmetric coupled GP equation with two components at time $\tau$ for the condensate wave function $\psi_i(r,\tau)$ [@cpl1] $$\begin{aligned}
\label{cc} \biggr[
-\frac{\hbar^2}{2m}\frac{1}{r}\frac{\partial^2 }{\partial
r^2}r &+& \frac{1}{2}c_im\omega^2 r^2 +\sum_{j=1}^2
g_{ij}N_j|\psi_j({
r},\tau)|^2 \nonumber
\\
&-& \mbox{i}\hbar\frac{\partial}{\partial \tau}\biggr]
\psi_i(r,\tau)=0,\end{aligned}$$ $i=1,2$, where $m$ is the atomic mass. Here $g_{ij}=4\pi\hbar^2a_{ij}/m$ is the coupling constant for atomic interaction, $N_j$ the number of condensed atoms in state $j$, and $\omega$ the frequency of the harmonic oscillator trap. The parameter $c_i$ has been introduced to modify the frequency of the trap for the atoms in each quantum state.
As in Refs. [@4] it is convenient to use dimensionless variables defined by $x = \sqrt 2 r/a_{\mbox{ho}}$ , and $t=\tau \omega, $ where $a_{\mbox{ho}}\equiv \sqrt {\hbar/(m\omega)}$, and $
\phi_i(x,t) = x\psi_i(r,\tau ) (\sqrt 2\pi a_{\mbox{ho}}^3)^{1/2}$ . In terms of these variables Eq. (\[cc\]) becomes [@4] $$\begin{aligned}
\label{e}
\biggr[ -\frac{\partial^2 }{\partial
x^2} &+& \frac{ c_ix^2}{4} +\sum_{j=1}^2 n_{ij}
\frac{|\phi_j({x},t)|^2}{x^2}
-\mbox{i}\xi_i\frac{|\phi_i({x},t)|^4}{x^4}\nonumber \\
&+&\mbox{i}\gamma_i
- \mbox{i}\frac{\partial
}{\partial t} \biggr]\phi_i({ x},t)=0, \end{aligned}$$ where $n_{ij}\equiv 2\sqrt 2 N_j a_{ij}/a_{\mbox{ho}}$ could be negative (positive) when the corresponding interaction is attractive (repulsive). In Eq. (\[e\]) we have introduced a diagonal absorptive i$\gamma_i$ and a quartic three-body term $
-\mbox{i}\xi_i{|\phi_i({x},t)|^4}/{x^4}$ appropriate to study collapse [@2]. For $\gamma_i=\xi_i=0, i=1,2$, the normalization condition of the wave function is $$\label{5} \int_0 ^\infty |\phi_i(x,t)|
^2 dx = 1.$$ The root-mean-square (rms) radius of the component $i$ $x^{(i)}_{\mbox{rms}}(t)$ at time $t$ is defined by $$\label{7}
x^{(i)}_{\mbox{rms}}(t)=
\left[\frac {\int_0
^\infty x^2 |\phi_i(x,t)| ^2 dx} {\int_0
^\infty |\phi_i(x,t)| ^2 dx
}\right]^{1/2}.$$
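Note that multiplying Eq. (\[e\]) by $\phi_i^*$, subtracting the complex conjugate and integrating over $x$ gives $$\frac{d}{dt}\int_0 ^\infty |\phi_i(x,t)|^2 dx = 2\int_0 ^\infty \left(\gamma_i-\xi_i\frac{|\phi_i(x,t)|^4}{x^4}\right)|\phi_i(x,t)|^2 dx,$$ so that the term $\mbox{i}\gamma_i$ feeds particles into component $i$ at a constant relative rate, while the three-body term removes them preferentially from the high-density central region; this is the mechanism behind the growth-collapse cycles discussed in the Introduction.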
Numerical results
=================
To solve Eq. (\[e\]) we discretize it in both space (using step 0.0001) and time (using step 0.05) employing a Crank-Nicolson-type rule and reduce it to a set of algebraic equations which is then solved by iteration using the known boundary conditions, e.g., $|\phi_i(0,t)|=0,$ and $\lim_{x
\to \infty} |\phi_i(x,t)|\sim \exp(-x^2/4). $ The iteration is started with the known normalized (harmonic oscillator) solution of Eq. (\[e\]) obtained with $n_{ij}=0$ at $t=0$. The nonlinear constants $n_{ij}$ in this equation are increased by equal amounts over 500 to 1000 time iterations starting from zero until the desired final values are reached. This iterative method is similar to one in the uncoupled case [@2; @4]. A detailed account of the numerical procedure for the coupled case will appear elsewhere.
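To make the time propagation concrete, a minimal single-component sketch of one Crank-Nicolson step for Eq. (\[e\]) with $\gamma=\xi=0$ is given below; the nonlinear term is treated semi-implicitly (evaluated with the current density), the grid, time step and target nonlinearity are placeholders, and this is not the production code referred to above. Including the absorptive and three-body terms would simply amount to adding $\mbox{i}\gamma-\mbox{i}\xi|\phi|^4/x^4$ to the effective potential.

```python
# Minimal single-component Crank-Nicolson sketch for the dimensionless GP
# equation above (gamma = xi = 0).  All numbers are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_banded

def cn_step(phi, x, dt, n_nl, c=1.0):
    """One step of i dphi/dt = [-d^2/dx^2 + c x^2/4 + n_nl |phi|^2/x^2] phi,
    with the nonlinearity evaluated at the current time (semi-implicit)."""
    h = x[1] - x[0]
    V = c * x**2 / 4.0 + n_nl * np.abs(phi)**2 / x**2
    main = 2.0 / h**2 + V                    # diagonal of the tridiagonal H
    off = -np.ones(len(x) - 1) / h**2        # off-diagonals of H
    # right-hand side (1 - i dt/2 H) phi
    rhs = (1.0 - 0.5j * dt * main) * phi
    rhs[1:] -= 0.5j * dt * off * phi[:-1]
    rhs[:-1] -= 0.5j * dt * off * phi[1:]
    # left-hand side (1 + i dt/2 H) in banded form; truncating the matrix
    # enforces phi = 0 just outside both ends of the grid
    ab = np.zeros((3, len(x)), dtype=complex)
    ab[0, 1:] = 0.5j * dt * off              # superdiagonal
    ab[1, :] = 1.0 + 0.5j * dt * main        # diagonal
    ab[2, :-1] = 0.5j * dt * off             # subdiagonal
    return solve_banded((1, 1), ab, rhs)

# start from the harmonic-oscillator solution and ramp the nonlinearity to an
# illustrative final value over 500 time iterations, as described in the text
x = np.linspace(1e-4, 15.0, 3000)
phi = x * np.exp(-x**2 / 4.0)
phi = phi / np.sqrt(np.trapz(np.abs(phi)**2, x))
for k in range(500):
    phi = cn_step(phi, x, dt=0.05, n_nl=-2.0 * (k + 1) / 500)
```

For the coupled problem the two components are advanced in the same way, each seeing the current density of the other through the off-diagonal terms $n_{ij}|\phi_j|^2/x^2$.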
Stationary Problem
------------------
First we consider the stationary solution of Eq. (\[e\]) with $\gamma_i=\xi_i=0$, which illustrates the collapse. As the three scattering lengths $a_{ij}$ and two numbers $N_i$ are all independent, the four parameters $n_{ij}$ are also independent, with one restriction: the signs of $n_{12}$ and $n_{21}$ are identical.
Now we study the simplest case of collapse by taking only the interaction between the atoms in state 1 to be attractive corresponding to a negative $a_{11}$. All other scattering lengths $-$ $a_{22}$ and $a_{12}$ (= $a_{21}$) $-$ are taken to be positive. Quite expectedly, here the first component of the condensate could experience collapse. Although the present formulation is generally valid, one has to choose numerical values of the parameters before an actual calculation.
The collapse of the first component is illustrated in Fig. 1 (a) for $n_{11}=-3.814$, $n_{22}=4$, $n_{12}=n_{21}=1$, $c_1=0.25$, $c_2=4$. These parameters are in dimensionless units, and one can associate them with an actual physical problem of experimental interest. For this we consider state 1 to be the state of $^7$Li with attractive interaction, as in the actual collapse experiment, with $|a_{11}|/a_{\mbox{ho}}\simeq 0.0005$ [@1a]. As $n_{11}=2\sqrt 2 N_1 |a_{11}|/a_{\mbox{ho}}$, this corresponds to a boson number $N_1 \simeq 2700$. This number is larger than the maximum number of atoms permitted in the BEC of single-component $^7$Li, which is about 1400 [@1a]. The presence of the second component with repulsive interaction allows for the formation of a stable BEC with more $^7$Li atoms in quantum state 1 than allowed in the single-component BEC. A similar conclusion was reached by Esry [@esry] in a study of a coupled BEC in a different context. We find from Fig. 1 (a) that $\phi_1$ is much more centrally peaked than $\phi_2$. This corresponds to a small rms radius and a large central density for $\phi_1$, indicating the approach to collapse. If the number $N_{1}$ is increased slightly beyond 2700, the first component of the condensate wave function becomes singular at the origin and no stable stationary solution of Eq. (\[e\]) can be obtained.
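The quoted atom number follows directly from the definition $n_{11}=2\sqrt 2 N_1 |a_{11}|/a_{\mbox{ho}}$; a two-line check (using only numbers quoted in the text):

```python
import math
n11, a_ratio = 3.814, 0.0005                            # |n_11| and |a_11|/a_ho from the text
print(round(n11 / (2.0 * math.sqrt(2.0) * a_ratio)))    # -> 2697, i.e. N_1 ~ 2700
```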
Next we discuss the collapse by taking only the interaction among atoms in two different states to be attractive corresponding to a negative $a_{12}$ ($=
a_{21}$). The atomic interaction in both quantum states 1 and 2 is taken to be repulsive, corresponding to positive $a_{11}$ and $a_{22}$. Although it is a problem of theoretical interest for the study of collapse, it has no experimental analogue in terms of $^7$Li. We illustrate the approach to collapse in this case in Fig. 1 (b) for parameters $n_{11}=1, n_{22}=1.5, $ $n_{12}=-5.95,n_{21}=-2, $ $c_1=1$, $c_2=0.25$. Both wave-function components are peaked near $x=0$ and have small rms radii. The system would collapse with a small increase of $|n_{12}|$ and/or $|n_{21}|$. Here the interactions among atoms in states 1 and 2 are both repulsive; the collapse is a consequence of the attraction between an atom in state 1 and one in state 2, which leads to a dominance of the nonlinear off-diagonal coupling terms in the coupled GP equation.
Finally, in Fig. 1 (c) we illustrate the approach to collapse of both components when all scattering lengths are negative. This corresponds to taking all possible interactions to be attractive. The parameters in this case are $n_{11}=n_{22}=-1, $ $n_{12}=n_{21}=-0.552, $ $c_1=4$, $c_2=0.25$. This has an experimental analogue in terms of two states of $^7$Li. We assume the atomic interaction in both states to be equally attractive, corresponding to a negative scattering length with $a_{11}=a_{22}$. For $|a_{11}|/a_{\mbox{ho}}\simeq 0.0005$, as in the actual experiment [@1a], one has $N_1 = N_2 \simeq 700$. The total number of particles in this case is roughly 1400, which is equal to the critical number observed in the actual collapse experiment with $^7$Li. Both wave-function components could become singular in this case, as all possible interactions are attractive.
Time-dependent Problem
----------------------
Although the collapse of the coupled condensates could be inferred from the shape of the stationary wave functions of Fig. 1 (sharply peaked centrally with small rms radii), we also study the dynamics of collapse from the time evolution of the full GP equation (\[e\]) in the presence of absorption and three-body recombination, i.e., for $\gamma_i \ne 0$ and $\xi_i \ne 0$, as in the uncoupled case [@2]. For this purpose we consider the solution of Eq. (\[e\]) normalized according to Eq. (\[5\]) at $t=0$ obtained with $\gamma_i=\xi_i=0$ and allow this solution to evolve in time with $\gamma_i
\ne 0$ and $\xi_i \ne 0$ by iterating the GP equation (\[e\]). The fractional change in the number of atoms due to the combined effect of absorption and three-body recombination is given by $$\frac{N_i(t)}{N_i(0)} = \frac{\int_0 ^\infty
|\phi_i(x,t)|^2 dx}{\int_0 ^\infty |\phi_i(x,0)| ^2 dx} \quad ,$$ and the rms radii by Eq. (\[7\]). The continued growth and decay of the number of particles in the condensate would signal the possible collapse in a particular case. The oscillation of the rms radius would demonstrate the consequent radial vibration of the condensate.
Now we study the time evolution of the number of atoms of the two components and the corresponding rms radii. The general nature of time evolution is independent of the actual values of $\gamma_i$ and $\xi_i$ employed provided that a very small value for $\xi_i (\sim
0.001)$ and a relatively larger one for $\gamma_i (\sim 0.01$ to 0.1) are chosen [@2]. The following parameters were chosen in case of models (a), (b), and (c) of Fig. 1: (a) and (b) $\gamma_1 =\gamma_2= 0.03,
\xi_1=\xi_2=0.001$, (c) $\gamma_1 =0.15, \gamma_2= 0.03, \xi_1=0.002,
\xi_2=0.003$. The fractional change in the number of atoms for the two components are shown in Figs. 2 (a), (b), and (c). The results for $0<t<100$ in Fig. 2 are calculated with 2000 iterations of the GP equation (\[e\]) using a time step 0.05.
Since the quadratic nonlinear terms in model (a) are all repulsive in channel 2, the corresponding wave function ($\phi_2$) of Fig. 1 (a) shows no sign of approaching collapse, unlike channel 1, where the diagonal nonlinear term is attractive. The results reported in Fig. 2 (a) are consistent with this. The number of particles $N_1$ of the first component undergoes successive growth and decay, whereas that of the second component keeps growing indefinitely, as is typical of a repulsive interaction.
For model (b) the effective nonlinear terms in channels 1 and 2 are both repulsive, and it should be possible to have collapse in both channels by decreasing the dominant off-diagonal quadratic nonlinear terms $n_{12}$ and $n_{21}$, corresponding to an increase in the attraction between an atom in state 1 and one in state 2. However, for the actual parameters of this model only component 1 exhibits collapse. This is consistent with the more singular nature of $\phi_1$ reported in Fig. 1 (b), compared to $\phi_2$. Consequently, in Fig. 2 (b) only component 1 experiences collapse; the number of particles $N_2$ keeps growing with time.
In model (c) all the quadratic nonlinear terms are attractive. Consequently, in Fig. 2 (c) we find a series of collapses in both channels. Collapse is most favored in model (c), with attractive diagonal and nondiagonal nonlinear terms. This corresponds to attraction between two atoms in state 1, between two atoms in state 2, and between an atom in state 1 and another in state 2. The next most favored case is model (a), where the diagonal nonlinear term is negative in channel 1. Here only the atomic interaction in state 1 is attractive; all other atomic interactions are repulsive. The least favored case is model (b), where only the off-diagonal nonlinear terms are negative. This corresponds to repulsion between two atoms in state 1 and between two atoms in state 2, and attraction between an atom in state 1 and another in state 2. In the last case, collapse takes place due to the dominance of the attractive nondiagonal nonlinear term over the repulsive diagonal one in channel 1. This is explicit in Fig. 2, where the frequency of collapse decreases from model (c) to (a) and then to (b).
Finally, in Figs. 3 (a), (b), and (c) the rms radii of the two components are shown for the models of Figs. 2 (a), (b), and (c), respectively. In the case of models (a) and (b) we find from Figs. 2 (a) and (b) that the number $N_2$ grows with time. This is reflected in the growth of the corresponding rms radii in Figs. 3 (a) and (b). In the case of model (c) there is collapse in both channels and both rms radii oscillate with time. This radial vibration of the collapsing condensate(s) also takes place in the uncoupled case [@2]. However, from Figs. 3 (a) and (b) we find that, due to a collapse in one of the channels, both rms radii can execute oscillations. In one of the channels this is a direct consequence of collapse; in the other it is due to the coupling to the channel experiencing collapse.
Conclusion
==========
To conclude, we studied the collapse in a trapped BEC of atoms in states 1 and 2 using the GP equation when some of the atomic interactions are attractive. We motivate parts of this study with two atomic states of $^7$Li. The component $i$ of the condensate could experience collapse when the interaction among atoms in state $i$ is attractive. Both components could experience collapse when at least the interaction between an atom in state $1$ and one in state $2$ is attractive. The collapse is predicted from a stationary solution of the GP equation. The time evolution of collapse is studied via the time-dependent GP equation with absorption and three-body recombination. The number of particles of the component(s) of BEC experiencing collapse alternately grows and decays with time. With the possibility of observation of coupled BEC, the results of this study could be verified experimentally in the future.
The work is supported in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico and Fundação de Amparo à Pesquisa do Estado de São Paulo of Brazil.
J. R. Ensher, D. S. Jin, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. [**77**]{}, 4984 (1996); K. B. Davis, M. O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle, [*ibid.*]{} [**75**]{}, 3969 (1995); D. G. Fried, T. C. Killian, L. Willmann, D. Landhuis, S. C. Moss, D. Kleppner, T. J. Greytak, [*ibid.*]{} [**81**]{}, 3811 (1998); R. Wynar, R. S. Freeland, D. J. Han, C. Ryu, D. J. Heinzen, Science [**287**]{}, 1016 (2000).
C. C. Bradley, C. A. Sackett, J. J. Tolett, and R. G. Hulet, Phys. Rev. Lett. [**75**]{}, 1687 (1995); C. Sackett, H. T. C. Stoof, and R. G. Hulet, [*ibid.*]{} [**80**]{}, 2031 (1998).
Yu. Kagan, A. E. Muryshev, and G. V. Shlyapnikov, Phys. Rev. Lett. [**81**]{}, 933 (1998); A. Gammal, T. Frederico, L. Tomio, and Ph. Chornaz, Phys. Rev. A [**61**]{}, 051602 (2000).
S. Giorgini, L. P. Pitaevskii, and S. Stringari, Phys. Rev. A [**54**]{}, R4633 (1996); M. Edwards, P. A. Ruprecht, K. Burnett, R. J. Dodd, and C. W. Clark, Phys. Rev. Lett. [**77**]{}, 1671 (1996); S. K. Adhikari and A. Gammal, Physica A [**286**]{}, 299 (2000).
A. Gammal, T. Frederico, and L. Tomio, Phys. Rev. E [ **60**]{}, 2421 (1999); S. K. Adhikari, Phys. Lett. A [**265**]{}, 91 (2000); Phys. Rev. E [**62**]{}, 8671 (2000).
F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. [**71**]{}, 463 (1999).
R. J. Dodd, M. Edwards, C. J. Williams, C. W. Clark, M. J. Holland, P. A. Ruprecht, and K. Burnett, Phys. Rev. A [**54**]{}, 661 (1996); M. Houbiers and H. T. C. Stoof, [*ibid.*]{} [**54**]{}, 5055 (1996); S. K. Adhikari, Physica A [**284**]{}, 97 (2000); N. Akhmediev, M. P. Das, and A. V. Vagov, Aust. J. Phys. [**53**]{}, 157 (2000).
E. P. Gross, Nuovo Cimento [**20**]{}, 454 (1961); L. P. Pitaevskii, Zh. Eksp. Teor. Fiz. [**40**]{}, 646 (1961) \[Sov. Phys. JETP [**13**]{}, 451 (1961)\].
M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. [**83**]{}, 2498 (1999); M. R. Matthews, D. S. Hall, D. S. Jin, J. R. Ensher, C. E. Wieman, E. A. Cornell, F. Dalfovo, C. Minniti, and S. Stringari, [*ibid.*]{} [**81**]{}, 243 (1998).
C. J. Myatt, E. A. Burt, R. W. Ghrist, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. [**78**]{}, 586 (1997).
D. J. Heinzen, R. Wynar, P. D. Drummond, and K. V. Kheruntsyan, Phys. Rev. Lett. [**84**]{}, 5029 (2000). A. Sinatra, P. O. Fedichev, Y. Castin, J. Dalibard, and G. V. Shlyapnikov, Phys. Rev. Lett. [**82**]{}, 251 (1999); B. D. Esry, C. H. Greene, J. P. Burke, Jr., and J. L. Bohn, [*ibid.*]{} [**78**]{}, 3594 (1997); T.-L. Ho and V. B. Shenoy, [*ibid.*]{} [**77**]{}, 3276 (1996).
B. D. Esry, Phys. Rev. A [**58**]{}, R3399 (1998).
[**Figure Captions:**]{}
1\. Wave function components $\phi_1(x)$ (full line) and $\phi_2(x)$ (dashed line) vs. $x$ for two coupled GP equations with (a) $
n_{11}=-3.814, n_{22}=4$, $n_{12}=n_{21}=1, $ $c_1=0.25$, $c_2=4$; (b) $n_{11}=1,
n_{22}=1.5, $ $n_{12}=-5.95,n_{21}=-2, $ $c_1=1$, $c_2=0.25$; and (c) $n_{11}=n_{22}=-1, $ $n_{12}=n_{21}=-0.552, $ $c_1=4$, $c_2=0.25$.
2\. The fractional change in the number of atoms $N_i(t)/N_i(0)$ vs. $t$ for component 1 (full line) and 2 (dashed line) for models (a) and (b) with $\gamma_1=\gamma_2=0.03 $ and $\xi_1=\xi_2=0.001$, and for (c) with $\gamma_1= 0.15,
\gamma_2=0.03$, $\xi_1= 0.002$, and $\xi_2=0.003$. The parameters are as in Fig. 1.
3\. The time dependence of rms radii $x_{\mbox{rms}}^{(i)}(t)$ of models (a), (b), and (c) for component 1 (full line) and 2 (dashed line). The parameters are as in Figs. 1 and 2.
---
abstract: 'We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting “preferred directions” on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common “fully-thermal” (energy-dump) or “fully-kinetic” (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution $\gtrsim 100\,{M_{\sun}}$, they diverge by orders-of-magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that [*all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution*]{} ($<100\,{M_{\sun}}$). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models [*without*]{} re-tuning parameters.'
author:
- |
\
[$^{1}$]{}[TAPIR, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125, USA]{}\
[$^{2}$]{}[The Observatories of the Carnegie Institution for Science, Pasadena, CA 91101, USA]{}\
[$^{3}$]{}[Department of Physics, University of California, Davis, CA 95616, USA]{}\
[$^{4}$]{}[Department of Physics, Center for Astrophysics and Space Science, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093]{}\
[$^{5}$]{}[Department of Physics and Astronomy and CIERA, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA]{}\
[$^{6}$]{}[Department of Astronomy and Theoretical Astrophysics Center, University of California Berkeley, Berkeley, CA 94720]{}\
[$^{7}$]{}[Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA]{}\
[$^{8}$]{}[Canadian Institute for Theoretical Astrophysics, 60 St. George Street, University of Toronto, ON M5S 3H8, Canada]{}\
[$^{9}$]{}[Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA]{}\
[$^{10}$]{}[Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA]{}
bibliography:
- '/Users/phopkins/Dropbox/Public/ms.bib'
date: 'Submitted to MNRAS, July 2017'
title: How To Model Supernovae in Simulations of Star and Galaxy Formation
---
\[firstpage\]
galaxies: formation — galaxies: evolution — galaxies: active — stars: formation — cosmology: theory
Introduction {#sec:intro}
============
Stellar feedback is critical in understanding galaxy formation. Without it, gas accretes into dark matter halos and galaxies, cools rapidly on a timescale much faster than the dynamical time, collapses, fragments, and forms stars on a free-fall time [@bournaud:2010.grav.turbulence.lmc; @hopkins:rad.pressure.sf.fb; @tasker:2011.photoion.heating.gmc.evol; @dobbs:2011.why.gmcs.unbound; @harper-clark:2011.gmc.sims], inevitably turning most of the baryons into stars on cosmological timescales [@katz:treesph; @somerville99:sam; @cole:durham.sam.initial; @springel:lcdm.sfh; @keres:fb.constraints.from.cosmo.sims]. But observations imply that, on galactic scales, only a few percent of gas turns into stars per free-fall time [@kennicutt98], while individual giant molecular clouds (GMCs) disrupt after forming just a few percent of their mass in stars [@zuckerman:1974.gmc.constraints; @williams:1997.gmc.prop; @evans:1999.sf.gmc.review; @evans:2009.sf.efficiencies.lifetimes]. Similarly, galaxies retain and turn into stars just a few percent of the universal baryon fraction [@conroy:monotonic.hod; @behroozi:mgal.mhalo.uncertainties; @moster:stellar.vs.halo.mass.to.z1], and both direct observations of galactic winds [@martin99:outflow.vs.m; @heckman:superwind.abs.kinematics; @sato:2009.ulirg.outflows; @steidel:2010.outflow.kinematics; @coil:2011.postsb.winds] and indirect constraints on the inter-galactic and circum-galactic medium [IGM/CGM; @aguirre:2001.igm.metal.evol.sims; @pettini:2003.igm.metal.evol; @songaila:2005.igm.metal.evol; @oppenheimer:outflow.enrichment; @martin:2010.metal.enriched.regions] require that a large fraction of the baryons have been “processed” in galaxies via their accretion, enrichment, and expulsion in super-galactic outflows.
Many different feedback processes contribute to these galactic winds and ultimately the self-regulation of galactic star formation, including protostellar jets, photo-heating, stellar mass loss (O/B and AGB-star winds), radiation pressure, and supernovae (SNe) Types Ia & II [see @evans:2009.sf.efficiencies.lifetimes; @lopez:2010.stellar.fb.30.dor and references therein]. Older galaxy-formation simulations could not resolve the effects of these different processes (even on relatively large scales within the galactic disk), so they used simplified prescriptions to model galactic winds. However, a new generation of high-resolution simulations has emerged with the ability to resolve multi-phase structure in the ISM and so begin to directly incorporate these distinct feedback processes [@hopkins:rad.pressure.sf.fb; @hopkins:fb.ism.prop; @tasker:2011.photoion.heating.gmc.evol; @kannan:2013.early.fb.gives.good.highz.mgal.mhalo; @agertz:2013.new.stellar.fb.model]. One example is the Feedback In Realistic Environments (FIRE)[^1] project [@hopkins:2013.fire]. These and similar simulations have demonstrated predictions in reasonable agreement with observations for a wide variety of galaxy properties [e.g. @ma:2015.fire.mass.metallicity; @sparre.2015:bursty.star.formation.main.sequence.fire; @wetzel.2016:latte; @feldmann.2016:quiescent.massive.highz.galaxies.fire]. In a companion paper, @hopkins:fire2.methods [hereafter [Paper [I]{}]{}], we presented an updated version of the FIRE code. We refer to this updated FIRE version as “FIRE-2” and the older FIRE implementation as “FIRE-1”. We explored how a wide range of numerical effects (resolution, hydrodynamic solver, details of the cooling and star formation algorithm) influence the results of galaxy-formation simulations. We compared these to the effects of feedback and concluded that mechanical feedback, particularly from Type-II SNe, has much larger effects on galaxy formation (specifically properties such as galaxy masses, star formation histories, metallicities, rotation curves, sizes and morphologies) compared to the various numerical details studied. This is consistent with a number of previous studies [@abadi03:disk.structure; @governato04:resolution.fx; @robertson:cosmological.disk.formation; @stinson:2006.sne.fb.recipe; @zavala:cosmo.disk.vs.fb; @scannapieco:2012.aquila.cosmo.sim.compare]. However, in galaxy-formation simulations, the actual implementation of SNe feedback, and the physical assumptions associated with it, often differ significantly between different codes. This can have significant effects on the predictions for galaxy formation [see @scannapieco:2012.aquila.cosmo.sim.compare; @rosdahl:2016.sne.method.isolated.gal.sims; @kim:agora.isolated.disk.test].
In this paper, we present a detailed study of the algorithmic implementation of SNe feedback and its effects, in the context of the FIRE-2 simulations. We emphasize that there are two [*separate*]{} aspects of mechanical feedback that must be explored.
First, the [*numerical*]{} aspects of the algorithmic coupling. Given some feedback “products” (mass, metals, energy, momentum) from a star, these must be deposited in the surrounding gas. [*Any*]{} good algorithm should respect certain basic considerations: conservation (of mass, energy, and momentum), statistical isotropy[^2] (avoiding imprinting preferred directions that either depend on the numerical grid axes or the arbitrary gas configuration around the feedback source), and convergence. We will show that accomplishing these is non-trivial, and that many algorithms in common use (including the older algorithm that we used in FIRE-1[^3]) do not respect all of them.
Second, the [*physics*]{} of the coupling must be explored. At any finite resolution, there is a “sub-grid scale” – the space or mass between a star particle and the center of the nearest gas resolution element, for example. An ideal implementation of the feedback coupling should exactly reproduce the converged solution, if we were to populate that space with infinite resolution – in other words, our coupling should be equivalent to “downgrading” the resolution of a high-resolution case, given the [*same*]{} physical assumptions used in the larger-scale simulation. We use a suite of simulations of isolated SNe (with otherwise identical physics to our galaxy-scale simulations) to show that a well-posed algorithm of this nature must account for [*both*]{} thermal [*and*]{} kinetic energy of the ejecta as they couple in a specific manner. This forms the basis for the default treatment of SNe in the FIRE simulations (introduced in @hopkins:2013.fire), and is similar to subsequent implementations in simulations by e.g. @kimm.cen:escape.fraction [@rosdahl:2016.sne.method.isolated.gal.sims]. In contrast, we show that coupling only thermal or kinetic energy leads to strongly resolution-dependent errors, which in turn can produce order-of-magnitude too-large or too-small galaxy masses. To predict reasonable masses, such models must be modified (a.k.a. “re-tuned”) at each resolution level. This is even more severe in “delayed cooling” or “target temperature” models which are explicitly intended for low-resolution applications, and are not designed to converge to the exact solution at high resolution. This explains many seemingly contradictory conclusions in the literature regarding the implementation of feedback. In contrast, we will show that the mechanical feedback models proposed here reproduce the high-resolution solution in idealized problems at [*all*]{} resolution levels that we explore, converge much more rapidly in cosmological galaxy-formation simulations, and (perhaps most importantly) represent the solution towards which other less-accurate “sub-grid” SNe treatments (at least those which do not artificially modify the cooling physics) converge at very high resolution.
Our study here is relevant for simulations of the ISM and galaxy formation with mass resolution in the range $\sim 10-10^{6}\,{M_{\sun}}$; we will show that at resolution higher than this, the numerical details have weak effects because early SN blastwave evolution is explicitly well-resolved. Conversely, at lower resolution than this, treating individual SN events becomes meaningless (necessitating a different sort of “sub-grid” approach).
In § \[sec:methods\] we provide a summary of the FIRE-2 simulations (§ \[sec:methods:all\]), a detailed description of the numerical algorithm for mechanical feedback coupling (§ \[sec:feedback:mechanical\]), and a detailed motivation and description of the physical breakdown between kinetic and thermal energy (§ \[sec:feedback:mechanical:sedov\]). We note that [Paper [I]{}]{} includes complete details of all aspects of the simulations here, necessary to fully reproduce our results. In § \[sec:feedback:mechanical:ideal.tests\] we validate the numerical coupling algorithm (conservation, statistical isotropy, and convergence) and explore the effects of alternative coupling schemes on full galaxy formation simulations. In § \[sec:feedback:mechanical:tests\] we validate the physical breakdown of coupled kinetic/thermal energy, compare this to simulations of individual SN explosions at extremely high resolution, and explore how different choices which neglect these physics alter the predictions of full galaxy formation simulations. We briefly discuss non-convergent alternative models (e.g. “delayed cooling” and “target temperature” models) but provide more detailed tests of these in the Appendices. In § \[sec:discussion\] we summarize our conclusions. Additional tests are discussed in the Appendices.
Methods & Physical Motivation {#sec:methods}
=============================
Overview & Methods other than Mechanical Feedback {#sec:methods:all}
-------------------------------------------------
The simulations in this paper were run as part of the Feedback in Realistic Environments (FIRE) project, using the FIRE-2 version of the code detailed in [Paper [I]{}]{}. Our default simulations are exactly those in [Paper [I]{}]{}; we will vary the SNe algorithm to explore how this alters galaxy formation, but all other simulation properties, physics, and numerical choices are held fixed. For detailed exploration of how those numerical details alter galaxy formation, we refer to [Paper [I]{}]{}. The simulations were run using [GIZMO]{}[^4] [@hopkins:gizmo], in its meshless finite-mass MFM mode. This is a mesh-free, finite-volume Lagrangian Godunov method which provides adaptive spatial resolution together with conservation of mass, energy, momentum, and angular momentum, and the ability to accurately capture shocks and fluid mixing instabilities (combining advantages of both grid-based and smoothed-particle hydrodynamics methods). For extensive test problems see @hopkins:gizmo [@hopkins:mhd.gizmo; @hopkins:cg.mhd.gizmo; @hopkins:gizmo.diffusion]; for tests of the methods specific to these simulations see [Paper [I]{}]{}.
These simulations are cosmological “zoom-in” runs that follow the Lagrangian region that surrounds a galaxy at $z=0$ (out to several virial radii) from seed perturbations at $z=100$. Gravity is solved for collisional (gas) and collisionless (stars and dark matter) species with adaptive gravitational softening so hydrodynamic and force softening are always matched. Gas cooling is followed self-consistently from $T=10-10^{10}\,$K including free-free, Compton, metal-line, molecular, fine-structure, dust collisional, and cosmic ray processes, photo-electric and photo-ionization heating by both local sources and a uniform but redshift-dependent meta-galactic background, and self-shielding. Gas is turned into stars using a sink-particle prescription (gas which is locally self-gravitating at the resolution scale following @hopkins:virial.sf, self-shielding/molecular following @krumholz:2011.molecular.prescription, Jeans unstable, and denser than $n_{\rm crit}>1000\,{\rm cm^{-3}}$ is converted into star particles on a free-fall time). Star particles are then treated as single-age stellar populations with all IMF-averaged feedback properties calculated from [STARBURST99]{} [@starburst99] assuming a @kroupa:2001.imf.var IMF. We then explicitly treat feedback from SNe (both Types Ia and II), stellar mass loss (O/B and AGB winds), and radiation (photo-ionization and photo-electric heating and UV/optical/IR radiation pressure), with implementations at the resolution-scale described in [Paper [I]{}]{} and here.
[Paper [I]{}]{} provides a complete description of all aspects of the numerical methods. In this paper, we study the mechanical feedback algorithm, used for SNe and stellar mass loss. In a companion paper (henceforth [Paper [III]{}]{}), we study the radiation feedback algorithm.
For simplicity, we focus our study here on two example galaxies: [**m10q**]{} is a dwarf galaxy and [**m12i**]{} is a Milky Way (MW)-mass galaxy. Table \[tbl:sims\] lists their properties. Both were studied extensively in [Paper [I]{}]{}. The star formation history, stellar mass, and mean stellar-mass weighted metallicity of each galaxy as a function of cosmic time, as well as the $z=0$ baryonic and dark matter mass profiles and rotation curves, will be discussed below. We have explicitly verified that the conclusions drawn here regarding mechanical feedback from our [**m10q**]{} and [**m12i**]{} simulations are robust across simulations of several different galaxies/halos at dwarf and MW mass scales, respectively.
Mechanical Feedback Coupling Algorithm {#sec:feedback:mechanical}
--------------------------------------
### Determining When Events Occur {#sec:feedback:mechanical:event.detection}
Once a star particle forms, the SNe rate is taken from stellar evolution models, assuming the particle represents an IMF-averaged population of a given age (since it formed) and abundances (inherited from its progenitor gas element). Given the particle masses and timesteps ($\Delta t \sim 100-1000\,$yr) for young star particles, the expected number of SNe per particle per timestep is always $\ll 1$. To determine if an event occurs, we therefore draw from a binomial distribution at each timestep given the expected rate $\langle N \rangle = (dN/dM_{\ast}\,dt)\,m_{i}\,\Delta t$, where $(dN/dM_{\ast}\,dt)$ is the IMF-averaged SNe rate per unit mass for a single stellar population of the age and metallicity of the star particle and $m_{i}$ is the star particle mass. For continuous mass-loss processes such as O/B or AGB winds, an “event” occurs every timestep, with mass loss $\Delta M_{\ast} = \Delta t\,\dot{M}_{\ast}$ and the associated kinetic luminosity. See [Paper [I]{}]{} for details and tabulations of the relevant rates.
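Schematically, since $\langle N \rangle \ll 1$ per timestep, the draw described above amounts to a Bernoulli trial per star particle per timestep. The sketch below is ours (not extracted from [GIZMO]{}), and the example rate is a rough illustrative number only:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_sne_events(rate_per_mass_per_time, m_star, dt, rng=rng):
    """Stochastically determine the number of SNe for one star particle in one
    timestep, given the IMF-averaged rate dN/(dM* dt), particle mass, and dt."""
    expected_N = rate_per_mass_per_time * m_star * dt   # <N> << 1 by construction
    # a binomial draw with a single trial (Bernoulli) suffices when <N> << 1
    return rng.binomial(n=1, p=min(expected_N, 1.0))

# e.g. ~1 SN II per 100 Msun formed, spread over ~30 Myr, for a 7000 Msun particle
# (illustrative numbers only, not the tabulated STARBURST99 rates):
rate = 1.0 / (100.0 * 3.0e7)     # SNe per Msun per yr
print(draw_sne_events(rate, m_star=7000.0, dt=300.0))   # 0 or 1
```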
Consider a time $t_{a}$ (timestep $\Delta t$), during which a mechanical feedback “event” occurs sourced at some location ${\bf x}_{a}$ (for example, the location of a star particle “$a$” in which a SN explodes). Our focus in this paper is how to treat this event. Fig. \[fig:mechanical.fb.cartoon\] provides an illustration of our algorithm. We first define a set of conserved quantities: mass $m_{\rm ej}$, metals $m_{Z,\,{\rm ej}}$, momentum $p_{\rm ej}=m_{\rm ej}\,v_{\rm ej}$, and energy $E_{\rm ej}$, which must be “injected” into the neighboring gas via some numerical fluxes.
### Finding Neighbors to Couple {#sec:feedback:mechanical:neighbor.finding}
We define an effective neighbor number $N_{\ast}$ the same as for the hydrodynamics, $N_{\ast} = (4\pi/3)\,H_{a}^{3}\,\bar{n}_{a}(H_{a})$, where $\bar{n}_{a}=\sum W({\bf x}_{ba}\equiv {\bf x}_{b}-{\bf x}_{a},\,H_{a})$, $W$ is the kernel function, and $H_{a}$ is the search radius around the star (set by $N_{\ast}$, which is the “fixed” parameter).[^5] Thus we obtain all gas elements $b$ within a radius $|{\bf x}_{ba}| < H_{a}$.
However, severe pathologies can occur if feedback is coupled [*only*]{} to the nearest neighboring gas to the star. For example, in an infinitely thin, dense disk of gas surrounding the star particle, with a tenuous atmosphere in the vertical direction above/below the disk, the closest $N_{\ast}$ elements to ${\bf x}_{a}$ likely will be in the disk – so searching only within $H_{a}$ will fail to “see” the vertical directions, thus coupling all feedback within the disk, despite the fact that the disk subtends a vanishingly small portion of the sky as seen from the star. Our solution to this is to use the same approach used in the hydrodynamic solver (in all mesh-free methods; SPH and MFM/MFV): we include [*both*]{} elements with $|{\bf x}_{ba}| < H_{a}$ and $|{\bf x}_{ba}| < H_{b}$. That is, we additionally include any gas elements whose kernel encompasses the star. In the disk example, the closest “atmosphere elements” above/below the disk necessarily have their own kernel radii, $H_{b}$, that overlap the disk, so this guarantees “covering” by elements in the vertical direction. This is demonstrated in Fig. \[fig:mechanical.fb.cartoon\]. The importance of including these elements is validated in our tests below, where we show that failure to include these neighbors artificially biases the feedback deposition.
We impose a maximum cutoff radius, $r_{\rm max}$, on the search, to prevent pathological situations for which there is no nearby gas so feedback would be deposited at unphysically large distances. Specifically, we impose $r_{\rm max}=2\,$kpc. This corresponds to where the ram pressure of free-expanding ejecta falls below the thermal pressure in even low-density circum-galactic conditions ($T \sim 10^{4}$K at $n\gtrsim 0.001\,{\rm cm^{-3}}$). However, our results are not sensitive to this choice, because it affects a vanishingly small number of events.
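A brute-force version of this bi-directional neighbor search (conditions $|{\bf x}_{ba}|<H_{a}$ [*or*]{} $|{\bf x}_{ba}|<H_{b}$, plus the $r_{\rm max}$ cut) is sketched below; the toy thin-disk-plus-atmosphere configuration and all names are illustrative, and a real implementation would use the code's tree structures rather than an $O(N)$ scan:

```python
import numpy as np

def find_coupling_neighbors(x_star, x_gas, H_star, H_gas, r_max=2.0):
    """Return indices of gas elements that receive feedback from a star at
    x_star: those inside the star's kernel radius H_star OR whose own kernel
    radius H_gas overlaps the star, subject to the overall cutoff r_max (kpc)."""
    r = np.linalg.norm(x_gas - x_star, axis=1)
    inside_star_kernel = r < H_star          # |x_ba| < H_a
    star_inside_gas_kernel = r < H_gas       # |x_ba| < H_b
    return np.where((inside_star_kernel | star_inside_gas_kernel) & (r < r_max))[0]

# toy configuration: a thin disk of gas plus two tenuous "atmosphere" elements
rng = np.random.default_rng(0)
disk = np.column_stack([rng.normal(0, 0.05, 64), rng.normal(0, 0.05, 64), np.zeros(64)])
atmo = np.array([[0.0, 0.0, 0.3], [0.0, 0.0, -0.3]])
x_gas = np.vstack([disk, atmo])
H_gas = np.r_[np.full(64, 0.1), 0.5, 0.5]    # atmosphere elements have large kernels
idx = find_coupling_neighbors(np.zeros(3), x_gas, H_star=0.1, H_gas=H_gas, r_max=2.0)
print(len(idx), 64 in idx and 65 in idx)     # the atmosphere elements are included
```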
### Weighting the Deposition: The Correct “Effective Area” {#sec:feedback:mechanical:weighting}
Having identified interacting neighbors, $b$, we must deposit the injected quantities according to some weighting scheme. Each neighbor resolution element gets a weight $\tilde{\omega}_{b}$ that determines the fraction of the injected quantity it receives. Of course, this must be normalized to properly conserve quantities, so we first calculate an un-corrected weight, $\omega_{b}$, and then assign $$\begin{aligned}
\label{eqn:weight.renorm.scalar} \tilde{\omega}_{b} \equiv \frac{\omega_{b}}{\sum_{c}\,\omega_{c}}\end{aligned}$$ so that $\sum_{b} \tilde{\omega}_{b}=1$, exactly.
Naively, a simple weight scheme might use $\omega_{b}=1$, or $\omega_{b}=W({\bf x}_{ba},\,H_{a})$. However, for quasi-Lagrangian schemes for which the different gas elements have approximately equal masses ($m_{b}\sim$constant), this is effectively mass-weighting the feedback deposition, which is not physical. In the example of the infinitely thin disk, because most of the neighbor elements lie within the disk, the disk-centered elements would again receive most of the feedback, despite the fact that they cover a vanishingly small portion of the sky from the source.
If the feedback is emitted statistically isotropically from the source ${\bf x}_{a}$, the correct solution is to integrate the injection into each solid angle and determine the total solid angle $\Delta \Omega_{b}$ subtended by a given gas resolution element, i.e. adopt $\omega_{b} = \Delta \Omega_{b}/4\pi$. This is shown in Fig. \[fig:mechanical.fb.cartoon\]. Given a source at ${\bf x}_{a}$ and neighbors at ${\bf x}_{b}$, we can construct a set of faces that enclose ${\bf x}_{a}$ with some convex hull. Each face has a vector oriented area ${\bf A}_{b}$; if the face is symmetric it subtends a solid angle on the sky as seen by ${\bf x}_{a}$ of $$\begin{aligned}
\label{eqn:solidangle}\omega_{b} & \equiv \frac{1}{2}\,\left(1-\frac{1}{\sqrt{1+({\bf A}_{b}\cdot \hat{\bf x}_{ba})/(\pi\,|{\bf x}_{ba}|^{2})}}\right) \approx \frac{\Delta\Omega_{b}}{4\pi}\end{aligned}$$ (This simply interpolates between $\sim A_{b}/4\pi\,r_{b}^{2}$ for $r_{b}^{2} = |{\bf x}_{ba}|^{2} \gg A_{b} \equiv |{\bf A}_{b}\cdot \hat{\bf x}_{ba}|$, and $1/2$ for $r_{b}^{2} \ll A_{b}$.)[^6]
No unique convex hull exists. One solution, for example, would be to construct a Voronoi tesselation around ${\bf x}_{a}$, with both the star particle ${\bf x}_{a}$ and the locations of all neighbors ${\bf x}_{b}$ as mesh-generating points. However, we already have an internally consistent value of ${\bf A}_{b}$, namely, the definition ${\bf A}_{b}^{\rm hydro}$ of the “effective faces” used in the hydrodynamic equations (the faces that appear in the discretized Euler equations: e.g. $d{\bf U}_{a}/dt = - \sum_{b}\,{\bf F}_{ab}({\bf U}) \cdot {\bf A}_{b}$, where ${\bf U}$ is a conserved quantity and ${\bf F}$ is its flux). For a Voronoi moving-mesh code (e.g. [AREPO]{}), this is the Voronoi tesselation. For SPH as implemented in [GIZMO]{}, this is ${\bf A}_{b}^{\rm hydro} = [\bar{n}_{a}^{-2}\,\partial W(r_{b},\,H_{a})/\partial r_{b} + \bar{n}_{b}^{-2}\,\partial W(r_{b},\,H_{b})/\partial r_{b}]\,\hat{\bf x}_{ba}$. For MFM/MFV the expression is more complicated but is given in Eq. 18 in @hopkins:gizmo.[^7] We therefore adopt ${\bf A}_{b} = {\bf A}_{b}^{\rm hydro}$ – the “effective face area” that the neighbor gas elements would share with ${\bf x}_{a}$ in the hydrodynamic equations if the source (star particle) were a gas element. Fig. \[fig:sne.fb.coupling.tests\] demonstrates that this is sufficient to ensure the coupling into each solid angle is statistically isotropic in the frame of the SN.
While we find that weighting by solid angle is important, at the level of accuracy here, the exact values of ${\bf A}_{b}^{\rm hydro}$ given by SPH, MFM, or Voronoi formalisms differ negligibly, and we can use them interchangeably with no detectable effects on our results. This is not surprising: @hopkins:gizmo showed that the Voronoi tesselation is simply the limit for a sharply-peaked kernel of the MFM faces.
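In code, Eq. (\[eqn:solidangle\]) together with the scalar normalization of Eq. (\[eqn:weight.renorm.scalar\]) amounts to only a few lines, given the separations ${\bf x}_{ba}$ and the oriented faces ${\bf A}_{b}$ (which, as discussed above, can be taken from the hydrodynamic solver). This sketch and its isotropic test configuration are ours:

```python
import numpy as np

def solid_angle_weights(dx, A_faces):
    """Fractional solid-angle weights (Eq. eqn:solidangle), normalized as in
    Eq. (eqn:weight.renorm.scalar): dx[b] = x_b - x_a are separations and
    A_faces[b] the oriented effective face areas shared with the source."""
    r = np.linalg.norm(dx, axis=1)
    rhat = dx / r[:, None]
    A_dot_rhat = np.abs(np.einsum('ij,ij->i', A_faces, rhat))
    omega = 0.5 * (1.0 - 1.0 / np.sqrt(1.0 + A_dot_rhat / (np.pi * r**2)))
    return omega / omega.sum()               # tilde-omega_b, summing to unity

# sanity check: six identical faces arranged isotropically get equal weights
dx = np.array([[1., 0, 0], [-1., 0, 0], [0, 1., 0], [0, -1., 0], [0, 0, 1.], [0, 0, -1.]])
A = 2.0 * dx                                  # faces oriented radially
print(solid_angle_weights(dx, A))            # -> six equal weights of 1/6
```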
### Dealing With Vector Fluxes (Momentum Deposition) {#sec:feedback:mechanical:vector}
If we were only considering sources of scalar conserved quantities (e.g. mass $m_{\rm ej}$ or metals $m_{Z,\,{\rm ej}}$), we would be done. We simply define a numerical flux $\Delta m_{b} = \tilde{\omega}_{b}\,m_{\rm ej}$ into each neighbor element (subtracting the same from our “source” star particle), and we are guaranteed both machine-accurate conservation ($\sum_{b}\,\Delta m_{b} = m_{\rm ej}$) and the correct spatial distribution of ejecta.
However, the situation is more complex for a vector flux, specifically here, momentum deposition. If the ejecta have some uniform radial velocity, ${\bf v}_{\rm ej} = v_{\rm ej}\,\hat{r}$, away from the source, ${\bf x}_{a}$, then one might naively define the corresponding momentum flux $\Delta {\bf p}_{b} = \tilde{\omega}_{b}\,m_{\rm ej}\,v_{\rm ej}\,\hat{\bf x}_{ba} = p_{\rm ej}\,\tilde{\omega}_{b}\,\hat{\bf x}_{ba} $. However, then $\sum_{b} \Delta {\bf p}_{b} = p_{\rm ej}\,\sum_{b}\,\tilde{\omega}_{b}\,\hat{\bf x}_{ba}$. But this is [*not*]{} guaranteed to vanish: the deposition can violate linear momentum conservation, if $\boldsymbol{\psi}_{a} \equiv \sum_{b}\,\tilde{\omega}_{b}\,\hat{\bf x}_{ba} \ne {\bf 0}$. The correct $\boldsymbol{\psi}_{a}={\bf 0}$ is only guaranteed if (1) the coupled momentum $\Delta {\bf p}_{b}$ is the [*exact*]{} solution of the integral of $p_{\rm ej}\,(4\pi\,|{\bf r}|^{2})^{-1}\hat{\bf r}\cdot d{\bf A}_{b}(\theta,\,\phi)$ (where ${\bf r}$ is the vector from ${\bf x}_{a}$ to a location ${\bf x}$ on the surface ${\bf A}_{b}$), and (2) the faces of the convex hull close exactly ($\sum_{b}\,{\bf A}_{b}={\bf 0}$). Even in a Cartesian grid (which trivially satisfies (2)), condition (1) can only be easily evaluated if we assume (incorrectly) that the feedback event occurs exactly at the center or corner of a cell; in Voronoi meshes and mesh-free methods (1) is only possible to satisfy with an expensive numerical quadrature, and (2) is only satisfied up to some integration accuracy.
In practice, $\Delta {\bf p}_{b} = p_{\rm ej}\, \tilde{\omega}_{b}\,\hat{\bf x}_{ba}$ is a good approximation to the integral in condition (1), and is again exact for faces symmetric about $\hat{\bf x}_{ba}$, and (2) is satisfied up to second-order integration errors in our MFM/MFV methods, so the dimensionless $|\boldsymbol{\psi}_{a}|\ll 1$ is small. However, we wish to ensure machine-accurate conservation, so we must impose a [*tensor*]{} re-normalization condition, not simply the scalar re-normalization in Eq. \[eqn:weight.renorm.scalar\]: we therefore define the six-dimensional vector weights $\hat{\bf x}_{ba}^{\pm}$: $$\begin{aligned}
\label{eqn:vector.weight.def} \hat{\bf x}_{ba} &\equiv \frac{{\bf x}_{ba}}{|{\bf x}_{ba}|} = \sum_{+,\,-}\,\hat{\bf x}_{ba}^{\pm} \\
\label{eqn:vector.weight.def.sub1} (\hat{\bf x}^{+}_{ba})^{\alpha} &\equiv {|{\bf x}_{ba}|^{-1}}\,{\rm MAX}({\bf x}_{ba}^{\alpha},\,0)\,{\Bigr|}_{\alpha=x,\,y,\,z}\\
\label{eqn:vector.weight.def.sub2} (\hat{\bf x}^{-}_{ba})^{\alpha} &\equiv {|{\bf x}_{ba}|^{-1}}\,{\rm MIN}({\bf x}_{ba}^{\alpha},\,0)\,{\Bigr|}_{\alpha=x,\,y,\,z}\end{aligned}$$ i.e. the unit vector component in the plus (or minus) $x,\,y,\,z$ directions ($\alpha$ refers to these components), for each neighbor. We can then define a vector weight $\tilde{\bf w}_{b}$: $$\begin{aligned}
\label{eqn:vector.weight.normalized} \bar{\bf w}_{b} &\equiv \frac{{\bf w}_{b}}{\sum_{c}\,|{\bf w}_{c}|} \\
\label{eqn:vector.weight.normalized.sub1} {\bf w}_{b} &\equiv \omega_{b}\, \sum_{+,\,-}\,\sum_{\alpha}\,(\hat{\bf x}_{ba}^{\pm})^{\alpha}\,\left( f_{\pm}^{\alpha} \right)_{a} \\
\label{eqn:vectornorm} \left( f_{\pm}^{\alpha} \right)_{a} &\equiv \left\{ \frac{1}{2}\,\left[1 + \left( \frac{\sum_{c}\,\omega_{c}\,|\hat{\bf x}_{ca}^{\mp}|^{\alpha}}{\sum_{c}\,\omega_{c}\,|\hat{{\bf x}}_{ca}^{\pm}|^{\alpha}} \right)^{2}\right]\right\}^{1/2} \end{aligned}$$ This is evaluated in two passes over the neighbor list.[^8]
It is straightforward to verify (and we show explicitly in tests below) that the approach above guarantees momentum conservation to machine accuracy. Ignoring these correction terms can (if the neighbors are “badly ordered,” e.g. all lie in the same direction) lead to order-unity errors in momentum conservation, and the fractional error $|\sum_{b} \Delta {\bf p}_{b}| / p_{\rm ej} = |\boldsymbol{\psi}_{a}|$ depends only on the spatial distribution of neighbors in the kernel, not on the resolution.
Physically, we should think of the vector weights $\bar{\bf w}$ as accounting for asymmetries about the vector $\hat{\bf x}_{ab}$ in the faces ${\bf A}_{b}$. If the faces were all exactly symmetric (e.g. the neighbor elements were perfectly isotropically distributed), then the net momentum integrated into each face would indeed point exactly along $\hat{\bf x}_{ab}$. But, typically, they are not, so we must account for this in order to properly retain momentum conservation.
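The two-pass construction of the vector weights (Eqs. \[eqn:vector.weight.def\]-\[eqn:vectornorm\]) can be written compactly. The sketch below is ours and omits the guard needed for the pathological case in which [*all*]{} neighbors lie on one side of the source (where one of the directional sums vanishes):

```python
import numpy as np

def vector_weights(dx, omega):
    """Vector weights bar-w_b of Eqs. (eqn:vector.weight.def)-(eqn:vectornorm):
    given separations dx[b] = x_b - x_a and scalar (solid-angle) weights
    omega[b], return weights with sum_b |bar_w_b| = 1 and sum_b bar_w_b = 0
    to machine precision."""
    rhat = dx / np.linalg.norm(dx, axis=1)[:, None]
    xplus = np.maximum(rhat, 0.0)                 # (hat-x^+)^alpha
    xminus = np.minimum(rhat, 0.0)                # (hat-x^-)^alpha
    # first pass: directional sums per coordinate alpha
    Splus = np.sum(omega[:, None] * np.abs(xplus), axis=0)
    Sminus = np.sum(omega[:, None] * np.abs(xminus), axis=0)
    fplus = np.sqrt(0.5 * (1.0 + (Sminus / Splus) ** 2))     # (f_+)^alpha
    fminus = np.sqrt(0.5 * (1.0 + (Splus / Sminus) ** 2))    # (f_-)^alpha
    # second pass: un-normalized vector weights w_b, then normalize
    w = omega[:, None] * (xplus * fplus + xminus * fminus)
    return w / np.sum(np.linalg.norm(w, axis=1))

# deliberately "badly ordered" neighbors (three in one octant, one counterpart)
dx = np.array([[1., 0.2, 0.1], [0.8, 1., 0.3], [0.3, 0.2, 1.], [-0.5, -0.4, -0.6]])
omega = np.array([0.4, 0.3, 0.2, 0.1])
wbar = vector_weights(dx, omega / omega.sum())
print(np.sum(np.linalg.norm(wbar, axis=1)), np.sum(wbar, axis=0))  # 1.0 and ~[0, 0, 0]
```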
### Assigning Fluxes and Including Gas-Star Motion {#sec:feedback:mechanical:assignment}
Finally, we can assign fluxes: $$\begin{aligned}
\label{eqn:flux.m} \Delta m_{b} &= |\bar{\bf w}_{b}|\,m_{\rm ej} \\
\label{eqn:flux.z} \Delta m_{Z,\,b} &= |\bar{\bf w}_{b}|\,m_{Z,\,{\rm ej}} \\
\label{eqn:flux.e} \Delta E_{b} &= |\bar{\bf w}_{b}|\,E_{\rm ej} \\
\label{eqn:flux.p} \Delta {\bf p}_{b} &= \bar{\bf w}_{b}\,p_{\rm ej}\end{aligned}$$ which the definitions above guarantee will [*exactly*]{} satisfy: $$\begin{aligned}
\label{eqn:flux.m.conservation} \sum\,\Delta m_{b} &= m_{\rm ej} \\
\label{eqn:flux.z.conservation} \sum\,\Delta m_{Z,\,b} &= m_{Z,\,{\rm ej}} \\
\label{eqn:flux.e.conservation} \sum\,\Delta E_{b} &= E_{\rm ej} \\
\label{eqn:flux.p.conservation1} \sum\,|\Delta {\bf p}_{b}| &= p_{\rm ej} \\
\label{eqn:flux.p.conservation2} \sum\,\Delta {\bf p}_{b} &= {\bf 0} \end{aligned}$$ Our definitions also ensure that the fraction of ejecta entering a gas element is as close as possible (as much as allowed by the strict conservation conditions above) to the fraction of solid angle subtended by the element, as would be calculated self-consistently by the hydrodynamic method in the code, i.e. $$\begin{aligned}
\label{eqn:weight.area.equivalence} |\bar{\bf w}_{b}| \approx \frac{\Delta {\mathbf \Omega}_{b}^{\rm hydro}}{4\pi}\end{aligned}$$ Moreover, in the limit where Eq. \[eqn:solidangle\] is exact (the faces ${\bf A}_{b}$ are symmetric about $\hat{\bf x}_{ba}$), and they close exactly ($\sum_{b}{\bf A}_{b}={\bf 0}$; i.e. good element order), then $(f_{\pm})=1$ and $\sum_{c}\,|{\bf w}_{c}|=1$, i.e. $\bar{\bf w}_{b}\rightarrow \omega_{b}\,\hat{\bf x}_{ba}$ and our naive estimate is both exact and conservative, and no normalization of the weights is necessary. In practice, as noted above, we find that the deviations (in the sum) from this perfectly-ordered case are usually small (percent-level), but there are always pathological element configurations where they can be large, and maintaining good conservation requires the corrected terms above.
Implicitly, we have been working in the frame moving with the feedback “source” (${\bf x}_{a}={\bf 0}$, ${\bf v}_{a} \equiv d{\bf x}_{a}/dt = {\bf 0}$), in which the source is statistically isotropic. However, in coupling the fluxes to surrounding gas elements, we also must account for the frame motion. Boosting back to the lab/simulation frame, the total ejecta velocity entering an element is of course $\Delta m_{b}^{-1}\,\Delta {\bf p}_{b} + {\bf v}_{a}$. This change of frame has no effect on the mass fluxes, but it does modify the momentum and energy fluxes: to be properly conservative, we must take: $$\begin{aligned}
\label{eqn:flux.mz.framecorr} \Delta m_{b}^{\prime} &\equiv \Delta m_{b}\ \ \ , \ \ \ \Delta m_{Z,\,b}^{\prime} \equiv \Delta m_{Z,\,b} \\
\label{eqn:flux.p.framecorr} \Delta {\bf p}_{b}^{\prime} &\equiv \Delta {\bf p}_{b} + \Delta m_{b}\,{\bf v}_{a} \\
\label{eqn:flux.e.framecorr} \Delta E_{b}^{\prime} &\equiv \Delta {E}_{b} + \frac{1}{2\,\Delta m_{b}}\,\left( |\Delta {\bf p}_{b}^{\prime}|^{2} - |\Delta {\bf p}_{b}|^{2} \right)\end{aligned}$$ where the prime (e.g. “$ \Delta m_{b}^{\prime}$”) notation denotes the lab frame. Note that the extra momentum added to the neighbors ($\sum_{b}\,\Delta m_{b}\,{\bf v}_{a} = m_{\rm ej}\,{\bf v}_{a}$) is exactly the momentum lost by the feedback source $a$, by virtue of its losing $m_{\rm ej}$ in mass.[^9]
These fluxes are simply added to each neighbor in a fully-conservative manner: $$\begin{aligned}
\label{eqn:flux.m.coupling} m_{b}^{\rm new} &= m_{b} + \Delta m_{b}^{\prime} \\
\label{eqn:flux.z.coupling} (Z\,m_{b})^{\rm new} &= Z^{\rm new}\,m_{b}^{\rm new} = (Z\,m_{b}) + \Delta m_{Z,\,b}^{\prime} \\
\label{eqn:flux.p.coupling} {\bf p}_{b}^{\rm new} &= m_{b}^{\rm new}\,{\bf v}_{b}^{\rm new} = {\bf p}_{b} + \Delta {\bf p}_{b}^{\prime} \\
\label{eqn:flux.e.coupling} E_{b}^{\rm new} &= E_{\rm kinetic}^{\rm new} + U_{\rm internal}^{\rm new} = E_{b} + \Delta E_{b}^{\prime}\end{aligned}$$ So the updated vector velocity ${\bf v}$ of the element follows from its updated momentum and mass (and its metallicity follows from its updated metal mass and total mass); the energy $E$ here is a [*total*]{} energy, so the updated internal energy $U$ of the element follows from its updated total energy ($E$), kinetic energy (from ${\bf v}$), and mass (this is the usual procedure in finite-volume updates with conservative hydrodynamic schemes).
The terms accounting for the relative gas-star motion are necessary to ensure exact conservation. For SNe, they have essentially no effect. However, for slow stellar winds (e.g. AGB winds with $v_{\rm wind}\sim 10\,{\rm km\,s^{-1}}$), the relative star-gas velocity can be much larger than the wind velocity ($|{\bf v}_{b} - {\bf v}_{a}| \gg v_{\rm wind}$), which means the shock energy and post-shock temperature of the winds colliding with the ISM is much higher than would be calculated ignoring these terms, which may significantly change their role as a feedback agent [@conroy:2014.agb.heating.quenching].
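Putting the pieces together, the flux assignment of Eqs. \[eqn:flux.m\]-\[eqn:flux.p\] and the boost to the lab frame in Eqs. \[eqn:flux.mz.framecorr\]-\[eqn:flux.e.framecorr\] look as follows (our sketch, with dimensionless toy numbers; `wbar` plays the role of the vector weights constructed above):

```python
import numpy as np

def assign_fluxes(wbar, m_ej, mZ_ej, E_ej, v_source):
    """Conservative flux assignment (Eqs. flux.m-flux.p) followed by the boost
    to the lab frame (Eqs. flux.mz.framecorr-flux.e.framecorr)."""
    absw = np.linalg.norm(wbar, axis=1)
    p_ej = np.sqrt(2.0 * m_ej * E_ej)            # ejecta momentum in the source frame
    dm = absw * m_ej
    dmZ = absw * mZ_ej
    dE = absw * E_ej
    dp = wbar * p_ej                             # source-frame momentum fluxes
    # boost to the lab frame: add the source motion and the associated kinetic energy
    dp_lab = dp + dm[:, None] * v_source
    dE_lab = dE + 0.5 * (np.sum(dp_lab**2, axis=1) - np.sum(dp**2, axis=1)) / dm
    return dm, dmZ, dE_lab, dp_lab

# conservation check with hand-built weights (any valid wbar works the same way)
wbar = np.array([[0.3, 0.1, 0.0], [-0.3, -0.1, 0.0], [0.0, 0.0, 0.3], [0.0, 0.0, -0.3]])
wbar /= np.sum(np.linalg.norm(wbar, axis=1))
dm, dmZ, dE, dp = assign_fluxes(wbar, m_ej=10.0, mZ_ej=0.2, E_ej=1.0,
                                v_source=np.array([0.5, 0.0, 0.0]))
print(dm.sum(), dp.sum(axis=0))   # -> 10.0 and m_ej * v_source = [5, 0, 0]
```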
Sub-Grid Physics: Unresolved Sedov-Taylor Phases {#sec:feedback:mechanical:sedov}
------------------------------------------------
A potential concern if naively applying the above prescription for SNe is that low-resolution simulations are unable to resolve the Sedov-Taylor (S-T) phase, during which the expanding shocked bubble is energy-conserving (the cooling time is long compared to the expansion time) and does $P\,dV$ work on the gas, converting energy into momentum, until it reaches some terminal radius where the residual thermal energy has been lost and the blastwave becomes a cold, momentum-conserving shell. This would, if properly resolved, modify the input momentum ($\Delta p_{b}$) and energy ($\Delta E_{b}$) felt by the gas element $b$.
### Motivation: Individual SN Remnant Evolution {#sec:feedback:mechanical:sedov:background}
Idealized, high-resolution simulations (with element mass $m_{i}\ll {M_{\sun}}$) have shown that there is a robust radial terminal momentum, $p_{\rm t}$, of the swept-up gas in the momentum-conserving phase, from a single explosion, given by: $$\begin{aligned}
\label{eqn:terminal.p}\frac{p_{\rm t}}{{M_{\sun}}\,{\rm km\,s^{-1}}} &\approx 4.8\times10^{5}
\left(\frac{E_{\rm ej}}{10^{51}\,{\rm erg}}\right)^{\frac{13}{14}}
\left(\frac{n}{{\rm cm^{-3}}}\right)^{-\frac{1}{7}}
f(Z)^{\frac{3}{2}}\\
\label{eqn:terminal.p.zdep} f(Z) &\approx
\begin{cases}
{\displaystyle 2 \, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hfill { (Z/Z_{\sun}<0.01)}} \\
{\displaystyle (Z/Z_{\sun})^{-0.14}\ \ \ \ \ \hfill { (0.01 \le Z/Z_{\sun})}}
\end{cases}\end{aligned}$$ where $n$ and $Z$ are the gas number density and metallicity surrounding the explosion. The expression above is from @cioffi:1988.sne.remnant.evolution (where we restrict $f(Z)$ to the minimum metallicity they consider), but similar expressions have been found in a wide range of other studies [for discussion see @draine:1991.snr.with.xrays; @slavin:snr.expansion; @thornton98; @martizzi:sne.momentum.sims; @walch.naab:sne.momentum; @kim.ostriker:sne.momentum.injection.sims; @haid:snr.in.clumpy.ism; @iffrig:sne.momentum.magnetic.no.effects; @hu:photoelectric.heating; @li:multi.sne.sims; @gentry:clustered.sne.momentum.enhancement], with variations up to a factor $\sim 2$, which we explore below.[^10]
We validate this expression in simulations below. But physically, this follows from simple cooling physics: taking $E_{0}\sim E_{51}\,10^{51}\,{\rm erg}$ and converting an order-unity fraction to thermal energy within a swept-up mass $M$ gives a temperature $T\sim 10^{6}\,K\,(M/3000\,E_{51}\,M_{\sun})^{-1}$, so when $M \gtrsim M_{\rm cool} \sim 3000\,E_{51}\,{M_{\sun}}$, $T$ drops to $< 10^{6}$K and the gas moves into the peak of the cooling curve where radiative losses are efficient [@rees:1977.tcool.tdyn.vs.mhalo]. While energy-conserving, the shell momentum scales as $p\sim M\,v \sim \sqrt{2\,M\,E_{0}}$, so the terminal momentum is $p_{\rm t} \sim \sqrt{2\,M_{\rm cool}\,E_{0}} \sim 5\times10^{5}\,E_{51}\,{M_{\sun}}\,{\rm km\,s^{-1}}$.
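For reference, Eq. \[eqn:terminal.p\] is trivial to tabulate; the helper below (ours) returns $p_{\rm t}$ in ${M_{\sun}}\,{\rm km\,s^{-1}}$:

```python
def terminal_momentum(E_ej_erg=1.0e51, n_cm3=1.0, Z_over_Zsun=1.0):
    """Terminal radial momentum of a single SN remnant, Eq. (eqn:terminal.p),
    in units of Msun km/s."""
    fZ = 2.0 if Z_over_Zsun < 0.01 else Z_over_Zsun ** -0.14
    return (4.8e5 * (E_ej_erg / 1.0e51) ** (13.0 / 14.0)
            * n_cm3 ** (-1.0 / 7.0) * fZ ** 1.5)

# e.g. a 10^51 erg SN in n = 1 cm^-3, solar-metallicity gas:
print(terminal_momentum())          # -> 4.8e5 Msun km/s
```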
One important caveat: these scalings (and our implementations below) are developed for single “events” (e.g. explosions), as opposed to continuous events (e.g. approximately constant rates of stellar mass-loss over long time periods). “Continuous” feedback can, in principle, produce different scalings [see e.g. the discussion in @weaver:1977.wind.bubble.expansion; @mckee:bubble.expansion; @freyer:2006.massive.star.wind.egy; @cafg:2012.egy.cons.bal.winds; @gentry:clustered.sne.momentum.enhancement]. It is still the case that winds must either expand in some energy-conserving fashion (doing $P\,dV$ work) or cool, and so a scaling qualitatively like those here must still apply – however, details of when cooling occurs (which set the exact “terminal momentum”), in continuous cases, are much less robust to the environment, density profile, ability of the surrounding medium to confine the wind, and temperature range of the reverse shock (see references above and e.g. @harper.clark:stellar.bubbles.energy.missing [@rosen:2014.xray.energy.wind.clusters]). Moreover, there is growing evidence that stellar mass loss is highly “bursty” or “clumpy”, with most of the kinetic luminosity associated with ejection events and/or clumps on smaller temporal or spatial scales. In those cases, treating each “event” with the scalings above is appropriate. Because the kinetic luminosity in stellar mass-loss is an order of magnitude lower than that associated with SNe, even relatively large changes in our treatment of stellar mass-loss (e.g. assuming the ejecta are entirely radiative, so the terminal momentum is the initial momentum) have little effect on galaxy scales (if SNe are also present). We therefore, for simplicity, apply the same scalings to all mechanical feedback. But this certainly merits more detailed study in future work.
### Numerical Treatment {#sec:subgrid.terminal.momentum}
To account for potentially unresolved energy-conserving phases, we first calculate the momentum that [*would*]{} be coupled to the gas element, assuming the blastwave were energy conserving throughout that [*single*]{} element, which is simply $\Delta p_{b}^{\prime} \rightarrow \Delta p_{b}^{\prime} (1 + m_{b}/\Delta m_{b})^{1/2}$. We then compare this to the terminal momentum $p_{\rm t}$ (assume each neighbor $b$ sees the appropriate “share” of the terminal momentum according to its share of the ejecta mass), and assign the actual coupled momentum to be the smaller of the two.[^11] In other words: $$\begin{aligned}
\label{eqn:dp.subgrid.sub1} \Delta {\bf p}_{b}^{\rm new} &\equiv {\rm MIN}\left[ \Delta {\bf p}_{b}^{\rm energy-conserving}\ ,\
\Delta {\bf p}_{b}^{\rm terminal} \right] \\
\label{eqn:dp.subgrid} &= \Delta {\bf p}_{b}\ {\rm MIN}\left[ \sqrt{1 + \frac{m_{b}}{\Delta m_{b}} }\ ,
\ \frac{p_{\rm t}}{p_{\rm ej}} \right]\end{aligned}$$ (where recall $p_{\rm ej} = \sqrt{2\,m_{\rm ej}\,E_{\rm ej}}$). Because the coupled $\Delta E$ is the [*total*]{} energy and is not changed, this remains manifestly energy-conserving (the energy that implicitly goes into the $PdV$ work increasing ${\bf p}$ is automatically moved from thermal to kinetic energy). This is done in the rest frame (before boosting back to the lab frame).
Consider the two limits: (1) when $p_{t}/p_{\rm ej} < (1 + m_{b}/\Delta m_{b})^{1/2}$, the physical statement is that the cooling radius is unresolved. Because $\Delta {\bf p}_{b} = p_{\rm ej}\,\bar{\bf w}_{b}$, multiplying by $p_{\rm t}/p_{\rm ej}$ simply replaces the “at explosion” initial $p_{\rm ej}$ with the terminal $p_{\rm t}$ – in other words, exactly the momentum that the element $b$ [*should*]{} see, if we had properly resolved the S-T phase between ${\bf x}_{a}$ and ${\bf x}_{b}$. On the other hand: (2) when $p_{t}/p_{\rm ej} > (1 + m_{b}/\Delta m_{b})^{1/2}$, the cooling radius is resolved, so we simply assume the blastwave is energy-conserving at the location of coupling. Because, by definition, the coupled momentum will be less than $p_{\rm t}$, the actual momentum coupling is, in this limit, largely irrelevant – we essentially couple thermal energy and rely on the hydrodynamic code to actually [*solve*]{} for the correct $PdV$ work as the blastwave expands.[^12]
[Strictly speaking, the expressions in Eq. \[eqn:dp.subgrid.sub1\]-\[eqn:dp.subgrid\] are expected if the relative gas-star velocities (${\bf v}_{b}-{\bf v}_{a}$) surrounding the explosion are either (a) small or (b) uniform. In Appendix \[sec:energy.cons.w.motion\] we present the more exact scalings, as well as the appropriate boost/de-boost corrections for momentum and energy, accounting for arbitrary gas-star motions.]{}
We show in § \[sec:feedback:mechanical:tests\], Figs. \[fig:sne.convergence.momentum\]-\[fig:sne.convergence.energy\] that this algorithm reproduces the exact results of much higher-resolution, converged simulations of SN blastwaves even when the coupling is applied in lower-resolution simulations – just as intended.
To be fully consistent, we also need to account for the loss of thermal energy (via radiation) in limit (1), when the cooling radius is un-resolved. The effective cooling radius $R_{\rm cool}$ is exactly determined by the expression for $p_{\rm t}$, because at the end of the energy-conserving phase ($R_{\rm shock} = R_{\rm cool}$), $(1/2)\,(m_{\rm ej} + m_{\rm swept}[R_{\rm cool}])\,v_{f}^{2} = (1/2)\,m_{\rm ej}\,v_{\rm ej}^{2}$ and $p_{\rm t} = m_{\rm swept}[R_{\rm cool}]\,v_{f}$, giving $R_{\rm cool} \approx 28.4\,{\rm pc}\,(n/{\rm cm^{-3}})^{-3/7}\,(E_{\rm ej}/10^{51}\,{\rm erg})^{2/7}\,f(Z)$ for $p_{\rm t}$ in Eq. \[eqn:terminal.p\]. Following @thornton98, the post-shock thermal energy outside $R_{\rm cool}$ decays $\propto (r/R_{\rm cool})^{-6.5}$, so we first calculate the post-shock thermal energy of element $b$ that would be added by the ejecta, $\Delta U_{b} = E_{\rm ej} - \Delta {\rm KE}$ (where $\Delta {\rm KE}$ is the change in kinetic energy, i.e. based on the coupled energy and momentum) in our usual fully-conservative manner, then if $r_{b} \equiv |{\bf x}_{ba}| > R_{\rm cool}$ we reduce this accordingly: $\Delta U_{b} \rightarrow \Delta U_{b}\,(|{\bf x}_{ba}|/R_{\rm cool})^{-6.5}$. In practice, because [*by definition*]{} this correction to $\Delta U_{b}$ only appears outside the cooling radius (where the post-shock cooling time is short compared to the expansion time), we find that the inclusion/exclusion of this correction term has no detectable effects on our simulations (see § \[sec:feedback:mechanical:tests:effects\]); if we do not include it, the thermal energy is simply radiated away in the next timestep, as it should be. Still, we include the term for consistency.
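A per-neighbor sketch of this limiter (Eq. \[eqn:dp.subgrid\]) together with the cooling-radius reduction of the residual thermal energy is given below. It is our own simplified version: it assumes the receiving element is initially at rest in the frame of the explosion, and the demo uses toy numbers in ${M_{\sun}}$, km s$^{-1}$, and pc:

```python
import numpy as np

def limit_coupled_momentum(dp, dm, m_b, m_ej, E_ej, p_t, r_b, R_cool):
    """Momentum limiter of Eq. (eqn:dp.subgrid) plus the reduction of residual
    thermal energy outside R_cool, for a single neighbor b (consistent units)."""
    p_ej = np.sqrt(2.0 * m_ej * E_ej)
    boost = min(np.sqrt(1.0 + m_b / dm), p_t / p_ej)   # energy-conserving vs. terminal
    dp_new = dp * boost
    dE = (np.linalg.norm(dp) / p_ej) * E_ej            # this neighbor's share of E_ej
    dKE = 0.5 * np.sum(dp_new**2) / (dm + m_b)         # kinetic energy after coupling,
                                                       # element assumed initially at rest
    dU = max(dE - dKE, 0.0)                            # residual thermal energy
    if r_b > R_cool:                                   # unresolved cooling radius:
        dU *= (r_b / R_cool) ** -6.5                   # radiate the excess away
    return dp_new, dU

# toy example: a 10 Msun, 10^51 erg event coupling 10% of its ejecta to a 7000 Msun element
E_ej = 1.0e51 / 1.989e43                 # 10^51 erg expressed in Msun km^2/s^2
m_ej, m_b = 10.0, 7000.0
p_ej = np.sqrt(2.0 * m_ej * E_ej)
dp = 0.1 * p_ej * np.array([1.0, 0.0, 0.0])
dp_new, dU = limit_coupled_momentum(dp, dm=0.1 * m_ej, m_b=m_b, m_ej=m_ej,
                                    E_ej=E_ej, p_t=4.8e5, r_b=50.0, R_cool=28.4)
print(np.linalg.norm(dp_new) / np.linalg.norm(dp))   # -> ~15, i.e. the terminal branch
```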
We can (and do, for the sake of consistency) apply the full treatment described above to continuous stellar mass loss as well as SNe, using the differential $E_{\rm ej}$ (and enforcing $p_{\rm t} \ge p_{\rm ej}$), but the “multiplier” is small because the winds are injected continuously so the $E_{\rm ej}$ in a single timestep is small.
Finally, the calculations of $p_{t}$ and Eqs. \[eqn:dp.subgrid.sub1\]-\[eqn:dp.subgrid\] are done independently for each neighbor $b$. In effect, we are considering each solid angle face $\Delta\Omega_{b}$ to be an independent “cone” with its own density and metallicity, in which an independent energy-conserving solution is considered. @haid:snr.in.clumpy.ism have performed a detailed simulation study of SNe in inhomogeneous environments and showed explicitly that almost all of the (already weak) effect of different inhomogeneous initial conditions (in e.g. turbulent, clumpy, multi-phase media) in their study and others is properly captured by considering each element surrounding the SN as an independent cone, which is assigned its own density-dependent solution according to the single homogeneous scaling above. In fact, once the density and metallicity dependence are accounted for as we do, residual systematic uncertainties in Eq. \[eqn:terminal.p\] are remarkably small ($\sim 10-50\%$) – much smaller than uncertainties in the SNe rate itself!
![Numerical tests of the mechanical feedback coupling algorithm in § \[sec:feedback:mechanical:tests\]. [*Top:*]{} Statistical Isotropy Test: We detonate a single SN in the center of a thin disk, generated by randomly sampling a vertical Gaussian profile with gas particles, and we measure the resulting momentum/mass/metal flux deposited to neighbor gas elements as a function of polar angle $\theta$ ($\cos{\theta}=\pm1$ is midplane, $\cos{\theta}=0$ polar). We repeat $100$ times and show the median ([*lines*]{}) and $95\%$ interval ([*shaded*]{}). The result [*should*]{} be statistically isotropic (“Exact Solution”). Our “Default FIRE-2 Coupling” method (Fig. \[fig:mechanical.fb.cartoon\]) recovers this with noise owing to the finite number of particles coupled. The “Naive Kernel Coupling” model only includes neighbors within the search radius $H_{\ast}$ of the SN (not those where $H_{\ast} < |{\bf x}_{\ast}-{\bf x}_{\rm gas}| < H_{\rm gas}$; § \[sec:feedback:mechanical:neighbor.finding\]) and weights deposition by a simple kernel function (effectively mass-weighting) instead of the solid angle subtended by the element (§ \[sec:feedback:mechanical:weighting\]). This naive coupling biases ejecta to couple into the midplane and suppresses coupling in the polar direction. [*Bottom:*]{} Momentum Conservation Test: We detonate SNe at random locations in the same system and measure the total fractional error in the linear momentum of the box (error $L_{1}(t) \equiv | \sum_{a} m_{a}\,{\bf v}_{a}(t) | / \sum p_{\rm coupled}$, where $p_{\rm coupled} = N(t)\,p_{\rm ej}$ is the total magnitude of the momentum injected by all events). This is the net deviation from exact momentum conservation, relative to the total coupled. Our “Default FIRE-2 Coupling” uses a tensor re-normalization scheme to keep these errors at machine accuracy (see § \[sec:feedback:mechanical:vector\]). The “Non-Conservative Coupling” scheme removes this re-normalization (but is otherwise identical); fractional conservation errors for a single event can then be order-unity! The fractional error declines with SNe number as $\sim N_{\rm SNe}^{-1/2}$ because of cancellations; increasing the coupled neighbor number $N_{\ast}$ reduces the errors but inefficiently. \[fig:sne.fb.coupling.tests\]](figs_fb_sne_algorithm/sne_isotropy_test.pdf "fig:"){width="0.98\columnwidth"}
![Numerical tests of the mechanical feedback coupling algorithm in § \[sec:feedback:mechanical:tests\]. [*Top:*]{} Statistical Isotropy Test: We detonate a single SN in the center of a thin disk, generated by randomly sampling a vertical Gaussian profile with gas particles, and we measure the resulting momentum/mass/metal flux deposited to neighbor gas elements as a function of polar angle $\theta$ ($\cos{\theta}=\pm1$ is midplane, $\cos{\theta}=0$ polar). We repeat $100$ times and show the median ([*lines*]{}) and $95\%$ interval ([*shaded*]{}). The result [*should*]{} be statistically isotropic (“Exact Solution”). Our “Default FIRE-2 Coupling” method (Fig. \[fig:mechanical.fb.cartoon\]) recovers this with noise owing to the finite number of particles coupled. The “Naive Kernel Coupling” model only includes neighbors within the search radius $H_{\ast}$ of the SN (not those where $H_{\ast} < |{\bf x}_{\ast}-{\bf x}_{\rm gas}| < H_{\rm gas}$; § \[sec:feedback:mechanical:neighbor.finding\]) and weights deposition by a simple kernel function (effectively mass-weighting) instead of the solid angle subtended by the element (§ \[sec:feedback:mechanical:weighting\]). This naive coupling biases ejecta to couple into the midplane and suppresses coupling in the polar direction. [*Bottom:*]{} Momentum Conservation Test: We detonate SNe at random locations in the same system and measure the total fractional error in the linear momentum of the box (error $L_{1}(t) \equiv | \sum_{a} m_{a}\,{\bf v}_{a}(t) | / \sum p_{\rm coupled}$, where $p_{\rm coupled} = N(t)\,p_{\rm ej}$ is the total magnitude of the momentum injected by all events). This is the net deviation from exact momentum conservation, relative to the total coupled. Our “Default FIRE-2 Coupling” uses a tensor re-normalization scheme to keep these errors at machine accuracy (see § \[sec:feedback:mechanical:vector\]). The “Non-Conservative Coupling” scheme removes this re-normalization (but is otherwise identical); fractional conservation errors for a single event can then be order-unity! The fractional error declines with SNe number as $\sim N_{\rm SNe}^{-1/2}$ because of cancellations; increasing the coupled neighbor number $N_{\ast}$ reduces the errors but inefficiently. \[fig:sne.fb.coupling.tests\]](figs_fb_sne_algorithm/sne_conservation_test.pdf "fig:"){width="0.99\columnwidth"}
![image](figs_images/m12i_ref13_s0600_t000_star_N3c_alt.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref13_s0600_t090_star_N3c_res.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref13_fb_aniso_s0600_t000_star_N3c.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref13_fb_aniso_s0600_t090_star_N3c.jpg){width="0.49\columnwidth"}
![image](figs_images/m12i_ref12_s0600_t000_star_N3c_res.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref12_s0600_t090_star_N3c_res.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref12_fb_aniso_s0600_t000_star_N3c.jpg){width="0.49\columnwidth"} ![image](figs_images/m12i_ref12_fb_aniso_s0600_t090_star_N3c.jpg){width="0.49\columnwidth"}
### Implied Resolution Requirements {#sec:feedback:mechanical:sedov:resolution}
Eq. \[eqn:dp.subgrid\] demonstrates that with sufficiently small element mass ($m_{b}$ below some critical $m_{\rm crit}$), the cooling radius is resolved – i.e. we are in limit (2) above: $(1 + m_{b}/\Delta m_{b})^{1/2} < p_{t}/p_{\rm ej}$. This corresponds to $m_{b} \ll m_{\rm crit} = 2500\,{M_{\sun}}\,|\bar{\bf w}|\,E_{51}^{6/7}\,(n/{\rm cm^{-3}})^{-2/7}\,f(Z)^{3}$. Because the kernel function is strongly peaked, most of the ejecta energy/momentum/mass is deposited in the nearest few elements, so $|\bar{\bf w}|\sim 1/$few and hence $m_{\rm crit} \sim 500\,{M_{\sun}}\,(n/{\rm cm^{-3}})^{-2/7}$. This is a [*mass*]{} resolution criterion: as noted above, the cooling [*radius*]{} scales with density approximately as $R_{\rm cool} \propto n^{-3/7}$, such that a nearly invariant mass $M_{\rm cool} \equiv m_{\rm swept}(R_{\rm cool}) \sim ({\rm a\ few})\,m_{\rm crit}$ is enclosed inside $R_{\rm cool}$.
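A minimal numerical evaluation of this criterion (our sketch, taking $|\bar{\bf w}|\sim 0.2$ for the dominant nearest neighbors):

```python
def m_crit_msun(n_cm3, E51=1.0, fZ=1.0, w_bar=0.2):
    """Critical element mass (Msun) below which the cooling radius is resolved."""
    return 2500.0 * w_bar * E51**(6.0 / 7.0) * n_cm3**(-2.0 / 7.0) * fZ**3

for n in (0.1, 1.0, 100.0):
    print(n, m_crit_msun(n))   # ~ 500 Msun at n = 1 cm^-3, only weakly density-dependent
```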
Similar results are found in @hu:photoelectric.heating (their Appendix B): they show, for example, that with $\sim 100\,{M_{\sun}}$ resolution, the blastwave momentum is almost perfectly recovered (within $<10\%$ of simulations with element/particle masses $\sim 0.01\,{M_{\sun}}$). Even higher-order effects such as the blastwave mass-loading, velocity structure, shell position, etc, are recovered almost perfectly once the shell has propagated into the momentum-conserving phase.
In our cosmological simulations of isolated dwarf galaxies, we begin to satisfy $m_{b} < m_{\rm crit}$. However, in MW-mass simulations, this remains unattainable for now. Therefore, ignoring the correction for an unresolved S-T phase in massive galaxies can significantly under-estimate the effects of feedback. We consider explicit resolution tests below which validate these approximate scalings.
Numerical Tests: The Coupling Algorithm {#sec:feedback:mechanical:ideal.tests}
=======================================
We now present detailed numerical tests of the SNe coupling scheme, beginning with tests of the pure algorithm used to deposit feedback from § \[sec:feedback:mechanical:neighbor.finding\]-\[sec:feedback:mechanical:assignment\], independent of the feedback physics (energy, momentum, rates, etc.).
Validation: Ensuring Correct Coupling Isotropy, Weights, and Exact Conservation {#sec:feedback:mechanical:ideal.tests:validation}
-------------------------------------------------------------------------------
Fig. \[fig:sne.fb.coupling.tests\] considers two simple validation tests (for conservation and statistical isotropy) of our algorithm in a pure hydrodynamic test problem. We initialize a periodic box of arbitrarily large size centered on ${\bf x}=\mathbf{0}$, filled with particles of equal mass, $m$, meant to represent a patch of a vertically-stratified disk. There is no gravity and the gas is forced to obey an exactly isothermal equation of state with vanishingly small pressure. The particles are laid down randomly with a uniform probability distribution in the $x$ and $y$ dimensions and probability $dp$ along the $z$ dimension such that $dp \propto m^{-1}\langle\rho(z)\,\rangle \,dx\,dy\,dz$, where $\langle \rho(z) \rangle = \exp{(-z^{2}/2h^{2})}$. Initial velocities are zero. We define $m$ and code units such that $h$ is equal to the mean inter-particle separation in the midplane. The desired density distribution is therefore obeyed on average but with a noisy particle distribution, as in a real simulation.
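A minimal sketch of these initial conditions (function and parameter names are ours, in code units where $h$ is the midplane inter-particle separation):

```python
import numpy as np

def stratified_disk_particles(N, h=1.0, L=100.0, seed=0):
    """Equal-mass particles: uniform in (x, y), Gaussian in z with scale height h,
    zero initial velocities.  L is the (arbitrarily large) box size."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-L / 2, L / 2, N)
    y = rng.uniform(-L / 2, L / 2, N)
    z = rng.normal(0.0, h, N)          # dp proportional to exp(-z^2 / 2 h^2) dz
    pos = np.column_stack([x, y, z])
    vel = np.zeros((N, 3))
    return pos, vel
```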
The top panel of Fig. \[fig:sne.fb.coupling.tests\] shows the results of a single SN detonated at the center of the box, using the standard FIRE-2 coupling scheme to deposit its ejecta. Because of the enforced equation of state, the coupled thermal energy is instantly dissipated – all that is retained is momentum, mass, and metals. We measure the amount deposited in each direction – each unit solid angle “as seen by” the SN. By construction, our algorithm is [*supposed*]{} to couple the ejecta statistically isotropically. But because the ejecta must be deposited discretely in a finite number of neighbors, in any single explosion the deposition is noisy: it occurs only along the directions where there are neighbors. We therefore re-generate the box and repeat $100$ times, and plot the resulting mean distribution and scatter. We confirm that our default algorithm correctly deposits ejecta statistically isotropically, on average. However, if we instead consider a simpler algorithm where the search for neighbors to couple the SN (§ \[sec:feedback:mechanical:neighbor.finding\]) is done only using particles within a nearest-neighbor radius $H_{a}$ of the SN (excluding particles outside $H_{a}$ but for which the SN is inside [*their*]{} nearest-neighbor radius $H_{b}$), [*or*]{} if we weight the deposition “per neighbor” by a simple kernel weight (§ \[sec:feedback:mechanical:weighting\]) – in this case the cubic spline kernel ($\omega_{b} = W({\bf x}_{ba},\,H_{a})$) – then we obtain a biased ejecta distribution. The bias is as expected: most of the ejecta go into the disk midplane direction, because on average there are more particles in this direction and they are closer, as opposed to the vertical direction, where the density decreases. In a real simulation, this is a serious concern: momentum and energy would be preferentially coupled in the plane of the galaxy disk, rather than “venting” in the vertical direction as they should, simply because more particles are in the disk!
In the bottom panel of Fig. \[fig:sne.fb.coupling.tests\], we repeat our setup, but now we repeatedly detonate SNe throughout the box at fixed time intervals, each in a random position. After each SN we measure the total momentum of all gas elements, $|{\bf p}| \equiv | \sum m_{a}\,{\bf v}_{a} |$, and define the dimensionless, fractional linear momentum error as the ratio of this to the total ejecta momentum that has been injected, $L_{1}=|{\bf p}| / \sum p_{\rm ej}$. Linear momentum conservation demands ${\bf p} = {\bf 0}$. In our standard FIRE-2 algorithm, we confirm momentum is conserved to machine accuracy. However, re-running without the tensor renormalization in § \[sec:feedback:mechanical:vector\], we see quite large errors, with $L_{1}\sim 0.1-1$ for a single SN, decreasing slowly with the number of SNe in the box only because the errors add incoherently (so $L_{1}$ gradually decreases $\propto N_{\rm SNe}^{-1/2}$ as a Poisson process). We can decrease $L_{1}$ in the non-conservative algorithm by increasing the number of gas neighbors used for the SN deposition, but this is inefficient and reduces the spatial resolution.
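For reference, the error metric plotted here reduces to a one-line measurement (a sketch with our naming):

```python
import numpy as np

def momentum_error_L1(m, v, p_ej, n_sne):
    """Fractional momentum conservation error |sum_a m_a v_a| / (N_SNe * p_ej).

    m : (N,) element masses;  v : (N, 3) velocities (zero before the first SN)."""
    p_net = np.linalg.norm((m[:, None] * v).sum(axis=0))
    return p_net / (n_sne * p_ej)
```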
Tests in FIRE Simulations: Effects of Algorithmic SNe Coupling {#sec:feedback:mechanical:ideal.tests:firesims}
--------------------------------------------------------------
In Figs. \[fig:sf.history.sne.algorithm\]-\[fig:images.resolution.nonsymmetric\],[^13] we examine how the algorithmic choices discussed above alter the formation history of galaxies in cosmological simulations. We compare:
1. [[*Default*]{}: Our default FIRE-2 coupling. This manifestly conserves mass, energy, and momentum; correctly deposits the ejecta in an unbiased (statistically isotropic) manner; and accounts for the Lagrangian distribution of particles in all directions.]{}
2. [[*Non-conservative:*]{} Coupling that neglects the tensor correction from § \[sec:feedback:mechanical:vector\], which Fig. \[fig:sne.fb.coupling.tests\] showed was necessary to maintain exact momentum conservation. We stress that the scalar mass and energy from SNe are still manifestly conserved here; only vector momentum is imperfectly added.]{}
3. [[*FIRE-1 Coupling:*]{} Our older scheme from FIRE-1, which used the non-conservative formulation, conducted the SNe neighbor search only “one-directionally” (ignoring neighbors at distances $>H_{a}$), as defined in § \[sec:feedback:mechanical:neighbor.finding\], and scaled the deposition “weights” $\omega_{b}$ defined in § \[sec:feedback:mechanical:weighting\] with volume ($\omega_{b} \propto m_{b}/\rho_{b}$; the “SPH-like” weighting; see @price:2012.sph.review), as opposed to solid angle. Fig. \[fig:sne.fb.coupling.tests\] shows this leads to unphysically anisotropic momentum deposition.]{}
Fig. \[fig:sf.history.sne.algorithm\] ([*left*]{}) shows that the detailed choice of coupling algorithm has essentially no effect in dwarf galaxies, because of their stochastic, bursty star formation and outflows and irregular/spheroidal morphologies. That is, a “galaxy-wide explosion” remains such regardless of exactly how individual SNe are deposited. Indeed, we find that this independence from the coupling algorithm persists at any resolution that we test. We do not show visual morphologies of dwarf galaxies in Fig. \[fig:images.resolution.nonsymmetric\], because they are essentially the same in all cases (see also [Paper [I]{}]{}). For MW-mass halos, we find only a weak dependence of galaxy properties in Fig. \[fig:sf.history.sne.algorithm\] on the SNe algorithm (see Appendix \[sec:resolution\] for demonstration of this at various resolution levels). The non-conservative implementations generally show a lower central stellar density at $<1\,$kpc, owing to burstier intermediate-redshift star formation, because the momentum conservation errors allow more “kicking out” of material in dense regions, as discussed further below.
At low and intermediate resolution, the MW-mass simulations all exhibit “normal” disky visual morphologies, without strong dependence on the SNe algorithm. However, at high resolution the “non-conservative” run essentially destroys its disk! This is in striking contrast to the “default” run, where the disk continues to become thinner and more extended at higher resolution (a trend seen in several MW-mass halos studied in [Paper [I]{}]{}). Note that the formation history and mass profile are not dramatically different in the two runs, so what has “gone wrong” in the non-conservative case? The problem is, as noted in § \[sec:feedback:mechanical:vector\], the momentum conservation error in the non-conservative algorithm is zeroth-order – it depends only on the number and spatial distribution of neighbor gas elements within the kernel, not on the absolute mass/spatial scale of that kernel. Because we keep the number of neighbors seen by the SN fixed with changing mass resolution, this means that the fractional errors (i.e. the net linear momentum error deposited per SN) do not converge away. Meanwhile, the individual gas element masses get smaller at high resolution – so the net linear velocity “kick” becomes larger. The “worst-case” error for a single SN would be an order-unity fractional violation of momentum conservation, implying a kick $|\Delta {\bf v}_{\rm err}| \sim p_{t}/m_{a} \sim 100\,{\rm km\,s^{-1}}\,(7.0/m_{i,\,1000})$; at low and intermediate resolution even this worst-case gives $|\Delta {\bf v}_{\rm err}|\lesssim 10\,{\rm km\,s^{-1}}$ (comparable to the thin-disk velocity dispersion), so this is not a serious issue. But at our highest resolution, the non-conservative “worst-case scenario” does occur: in some star-forming regions, net momentum is coherently deposited in one direction owing to a pathological local particle distribution, and the cloud then coherently “self-ejects” or “bootstraps” itself out of the disk. The thin disk is destroyed in the process, and the most extreme examples of this are visibly evident as “streaks” of stars from self-ejected clumps flying out of the galaxy center!
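The resolution scaling of this worst-case kick, normalized to the numbers quoted above, is simple to tabulate (a sketch; the $m_{i,\,1000}=7.0$ and $56$ values are the resolution levels discussed here):

```python
def worst_case_kick_kms(m_i_1000):
    """Worst-case velocity error per SN, |dv| ~ p_t / m_a, normalized so that
    an order-unity momentum error gives ~100 km/s at m_i,1000 = 7."""
    return 100.0 * (7.0 / m_i_1000)

for m in (7.0, 56.0):                      # resolution levels quoted in the text
    print(m, worst_case_kick_kms(m))       # ~100 km/s and ~12.5 km/s
```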
We also re-ran a “non-conservative” simulation of [**m12i**]{} at high resolution ($m_{i,\,1000}=7.0$) with a crude “cap” (upper limit) arbitrarily imposed on the fraction of the momentum allowed to couple to any one particle, and on the maximum velocity change per event ($50\,{\rm km\,s^{-1}}$). This is presented in Appendix \[sec:non.con.extreme\]. In that case, the system does indeed form a thin, extended disk, similar to our default coupling. This confirms that the “self-destruction” of the disk is driven by rare cases with large momentum errors, rather than small errors in “typical” cases.
As noted above, our older FIRE-1 algorithm used the “non-conservative” formulation. The MW-mass simulations published with that algorithm were all lower-resolution, where $|\Delta {\bf v}_{\rm err}|\lesssim 10\,{\rm km\,s^{-1}}$, so these errors were not obvious (at dwarf masses, the lower metallicities and densities meant the cooling radii of blastwaves were explicitly resolved, so as Fig. \[fig:sf.history.sne.algorithm\] shows, the effects were even smaller, and their irregular morphologies meant perturbations to thin disks were not possible). However, running that algorithm in MW-mass halos at higher resolution led to similar errors as shown in Fig. \[fig:images.resolution.nonsymmetric\]. This, in fact, motivated the development of the new FIRE-2 algorithm.
We have confirmed that all of the conclusions above are not unique to the two halos above: we have re-run halos [**m09**]{} ($\sim 10^{9}\,{M_{\sun}}$), [**m10v**]{} ($\sim 10^{10}\,{M_{\sun}}$), [**m11q**]{}, [**m11v**]{} ($\sim 10^{11}\,{M_{\sun}}$), [**m12f**]{} and [**m12m**]{} ($\sim 10^{12}\,{M_{\sun}}$) from [Paper [I]{}]{} with “Default” and “Non-conservative” implementations. All halos $\sim 10^{9}-10^{11}\,{M_{\sun}}$ show the same [*lack*]{} of effect from the coupling scheme as our [**m10q**]{} run here; the $\sim 10^{12}\,{M_{\sun}}$ halos all show the same systematic dependencies as our [**m12i**]{} run.
In Appendix \[sec:grid\] we briefly discuss algorithms that ensure manifest momentum conservation by simply coupling a pre-determined momentum in the Cartesian $\pm x$, $\pm y$, $\pm z$ directions (independent of the local mesh or particle geometry). We do not adopt such a method because (a) it ignores the physically correct geometry of the mesh in irregular-mesh or mesh-free methods, and (b) it imprints preferred directions onto the simulation, which forces disks to align with the simulation coordinate axes, introducing spurious numerical torques that can significantly reduce disk angular momentum (as often seen in grid-based codes).
![image](figs_fb_sne_subgrid/sne_subgrid_convergence_zoom_a.pdf){width="33.00000%"} ![image](figs_fb_sne_subgrid/sne_subgrid_convergence_zoom_b.pdf){width="33.00000%"}
![image](figs_fb_sne_subgrid/sne_subgrid_convergence_zoom_c.pdf){width="33.00000%"} ![image](figs_fb_sne_subgrid/sne_subgrid_convergence_zoom_d.pdf){width="33.00000%"} ![image](figs_fb_sne_subgrid/sne_subgrid_convergence_zoom_e.pdf){width="33.00000%"}
Numerical Tests: Subgrid Physics and the Need to Account For Thermal and Kinetic Energy {#sec:feedback:mechanical:tests}
=======================================================================================
Having tested the algorithmic aspect of SNe coupling above, we now consider tests of the physical scalings in the feedback coupling, specifically how it assigns momentum versus thermal energy as described in § \[sec:feedback:mechanical:sedov\].
Validation: Ensuring “Subgrid” Scalings Reproduce High-Resolution Simulations in Resolution-Independent Fashion {#sec:feedback:mechanical:tests:validation}
---------------------------------------------------------------------------------------------------------------
In Figs. \[fig:sne.convergence.momentum\]-\[fig:sne.convergence.energy\], we consider an idealized test problem that validates the sub-grid SNe treatment used in FIRE. We initialize a periodic box of arbitrarily large size with uniform density $n= 1\,{\rm cm^{-3}}$ and metallicity $Z=Z_{\sun}$, with constant gas particle mass $m_{i}$ (so the inter-particle separation is given by $\rho = m_{i}/h_{i}^{3}$, i.e. $h_{i}\sim 16\,{\rm pc}\,(m_{i}/100\,{M_{\sun}})^{1/3}$), and with our full FIRE-2 cooling physics (with the $z=0$ meta-galactic background) and hydrodynamics, but no self-gravity. We then detonate a single SN explosion at the center of the box, using [*exactly*]{} our default FIRE-2 algorithm (same SN energy $=10^{51}\,{\rm erg}$, ejecta mass $=10.4\,{M_{\sun}}$, metal content, ejecta momentum, and algorithmic coupling scheme from Fig. \[fig:mechanical.fb.cartoon\] and § \[sec:feedback:mechanical:neighbor.finding\]-\[sec:feedback:mechanical:assignment\]). We also test several additional schemes for how to deal with the thermal versus kinetic (energy/momentum) component of the SN.
1. [[*FIRE Sub-grid:*]{} This is our default FIRE-2 treatment from § \[sec:feedback:mechanical:sedov\] (Eq. \[eqn:dp.subgrid.sub1\]), where we account for the $PdV$ work done by the expanding blastwave out to the minimum of either the coupling radius or cooling radius (where the resulting momentum reaches the terminal momentum $p_{t}$ in Eq. \[eqn:terminal.p\], and we assume any remaining thermal energy is dissipated outside the cooling radius). The coupled momentum ranges, therefore, between $p_{\rm ejecta} \le p_{\rm coupled} \le p_{\rm terminal}$ and [*total*]{} (kinetic+thermal) energy coupled ranges from $0 < E_{\rm coupled} \le E_{\rm ejecta} = 10^{51}\,{\rm erg}$, according to the total mass enclosed within the single gas particle (the smallest possible “coupling radius”). Recall, at small particle mass, this becomes identical to coupling exactly the SN ejecta energy and momentum. At large particle mass, this reduces to coupling the terminal momentum and radiating (instantly) all residual (post-shock) thermal energy.]{}
2. [[*Thermal (+Ejecta):*]{} This couples only the ejecta momentum ($p_{\rm coupled} = p_{\rm ejecta} < p_{\rm terminal}$): any additional energy is coupled as thermal energy (not radiated away in the coupling step; $E_{\rm coupled} = E_{\rm ejecta}$). This ignores any accounting for whether the coupling is inside/outside the cooling radius, or any $PdV$ work done by the un-resolved blastwave expansion. It is equivalent to dropping the terms from § \[sec:feedback:mechanical:sedov\] completely. A method like this was used in some previous work with non-cosmological simulations [@hopkins:fb.ism.prop].]{}
3. [[*Fully-Kinetic:*]{} We assume that $100\%$ of the ejecta energy is converted into kinetic energy, i.e. coupled in “pure momentum” form ($p_{\rm coupled} = \sqrt{2\,E_{\rm ejecta}\,m_{\rm coupled}} \ge p_{\rm ejecta}$, $E_{\rm coupled} = E_{\rm kinetic} = E_{\rm ejecta}$). This ignores any un-resolved cooling. This is similar (algorithmically) to many common implementations in e.g. @aguirre:2001.igm.metal.evol.sims [@springel:multiphase; @cen:2005.kinetic.sne.fb; @dalla-vecchia:2008.iso.gal.wwo.freestream.winds; @vogelsberger:2013.illustris.model] (although most of these authors alter the fraction of energy coupled).]{}
4. [[*Fully-Thermal:*]{} We assume $100\%$ of the ejecta energy is converted into thermal energy, with zero momentum (i.e. $E_{\rm coupled} = E_{\rm thermal} = E_{\rm ejecta}$, $p_{\rm coupled} = 0$). This also ignores any un-resolved cooling. This is a common implementation used in e.g. @katz:1992.overcooling.problem.standard.sne [@ceverino:cosmo.sne.fb; @ceverino:2013.rad.fb; @kim:agora.isolated.disk.test].]{}
We evolve the explosion until well after it reaches an asymptotic terminal momentum: when the momentum changes by $<1\%$ over a factor of $>2$ increase in the shock radius, or – if this occurs before the shock reaches $>10$ inter-particle spacings – when the shock radius moves by $<1\%$ over a factor of $2$ increase in time.
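One way to encode this termination criterion (our reading of it; function and argument names are hypothetical):

```python
def blastwave_converged(p_now, p_at_half_radius, r_now, r_at_half_time, h_i):
    """Termination test for the idealized blastwave runs.

    p_at_half_radius : radial momentum when the shock radius was half its current value
    r_at_half_time   : shock radius when the time since explosion was half its current value
    h_i              : mean inter-particle spacing of the box
    """
    momentum_flat = abs(p_now - p_at_half_radius) < 0.01 * abs(p_at_half_radius)
    radius_flat = abs(r_now - r_at_half_time) < 0.01 * abs(r_at_half_time)
    if r_now >= 10.0 * h_i:      # shock spans >10 inter-particle spacings: momentum test
        return momentum_flat
    return radius_flat           # otherwise fall back to the shock-radius test
```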
In Fig. \[fig:sne.convergence.momentum\] we plot the terminal momentum in each simulation and compare to our analytic scaling from Eq. \[eqn:terminal.p\]. In Fig. \[fig:sne.convergence.energy\], we plot the radial profile of the shock properties as the shock radius expands: the total radial momentum, kinetic, and thermal energy (these depend on the time since explosion, so we plot each resolution at different times). We consider particle masses ranging from $m_{i}<0.1\,{M_{\sun}}$, sufficient to resolve even the free-expansion phase of the explosion (let alone the cooling radius), to $m_{i}> 10^{6}\,{M_{\sun}}$.
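The inter-particle spacings corresponding to these particle masses follow from $h_{i} = (m_{i}/\rho)^{1/3}$; a quick check (a sketch assuming $\rho = n\,m_{p}$, our simplification):

```python
PC_CM, MSUN_G, M_P = 3.086e18, 1.989e33, 1.673e-24   # cgs constants

def interparticle_spacing_pc(m_i_msun, n_cm3=1.0):
    """Mean inter-particle spacing h_i = (m_i / rho)^(1/3), assuming rho = n * m_p."""
    rho = n_cm3 * M_P
    return (m_i_msun * MSUN_G / rho) ** (1.0 / 3.0) / PC_CM

print(interparticle_spacing_pc(100.0))   # ~16 pc at n = 1 cm^-3, as quoted
```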
At sufficiently high resolution, all of the schemes above give identical, well-converged solutions – as they should, since in all cases (at high enough resolution) they generate a shock with the same initial energy, which undergoes an energy-conserving Sedov-Taylor type expansion (in which case the asymptotic solution is fully-determined by the ambient density and total blastwave energy). In this limit, the shock formation, Sedov-Taylor phase, conversion of energy into momentum, cooling radius, snowplow phase, and ultimate effective conversion of energy into momentum via $PdV$ work are explicitly resolved, so it does not matter how we initially input the energy. Reassuringly, Eq. \[eqn:terminal.p\] agrees well with the terminal momentum [*measured*]{} in the highest-resolution simulations – in other words, given the cooling physics in FIRE-2, we are using the correct $p_{t}$.
At poor resolution, the different treatments diverge, as predicted in § \[sec:feedback:mechanical:sedov:resolution\]. For “Thermal (+Ejecta)” and “Fully-Thermal” couplings, when the particle mass $m_{i} \gtrsim 100\,{M_{\sun}}$, the predicted momentum and kinetic energy drop rapidly compared to the converged, exact solutions. Physically, the cooling radius – which is roughly the radius enclosing a fixed [*mass*]{} $m_{\rm cool}\sim 1000\,{M_{\sun}}$ (see § \[sec:feedback:mechanical:sedov:resolution\]) – becomes unresolved. Spreading only thermal energy among this large a gas mass leads to post-shock temperatures below the peak in the cooling curve, so the energy is immediately radiated before much work can be done to accelerate gas (increase the momentum). The terminal momentum and kinetic energy are under-estimated by constant factors of $\sim 60$ and $\sim 3600$, respectively. With the “Thermal (+Ejecta)” case, the same problem occurs, but the initial ejecta momentum remains present, so the terminal momentum and kinetic energy are under-estimated by factors of $\sim 14$ and $\sim 200$.
The “Fully-Kinetic” coupling errs in the opposite direction at poor resolution: assuming perfect conversion of energy to momentum and ignoring cooling losses gives $p_{\rm coupled} = \sqrt{2\,E_{\rm ejecta}\,m_{\rm coupled}}$, so $p_{\rm coupled}/p_{\rm terminal} \propto m_{\rm coupled}^{1/2}$, and the terminal momentum is over-estimated by a factor $\sim 20\,(m_{i}/10^{4}\,{M_{\sun}})^{1/2}$. The kinetic energy is over-estimated by a corresponding factor $\sim 400\,(m_{i}/10^{4}\,{M_{\sun}})$.
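The four treatments amount to different assignments of the coupled momentum for a given coupled gas mass; the sketch below (our naming, cgs units) makes the resolution dependence explicit:

```python
import numpy as np

def coupled_momentum_by_scheme(scheme, m_coupled, m_ej, E_ej, p_t):
    """Magnitude of the momentum coupled by each treatment, for coupled gas mass m_coupled."""
    p_ej = np.sqrt(2.0 * m_ej * E_ej)
    if scheme == "fully_thermal":
        return 0.0                                       # all energy deposited as heat
    if scheme == "thermal_plus_ejecta":
        return p_ej                                      # only the raw ejecta momentum
    if scheme == "fully_kinetic":
        return np.sqrt(2.0 * E_ej * m_coupled)           # no cooling losses: grows as m^(1/2)
    if scheme == "fire_subgrid":
        return min(np.sqrt(2.0 * E_ej * (m_ej + m_coupled)), p_t)   # capped at p_t
    raise ValueError(scheme)
```

Here the “Fully-Kinetic” momentum grows as $m_{\rm coupled}^{1/2}$ while the FIRE treatment saturates at $p_{\rm t}$, which is the behavior seen in Fig. \[fig:sne.convergence.momentum\].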
In contrast, the FIRE sub-grid model reproduces the high-resolution exact solutions correctly, [*independent*]{} of resolution (within $<10\%$ in momentum, kinetic and thermal energy, even at $m_{i}\sim 10^{6}\,{M_{\sun}}$). This is the desired behavior in a “good” sub-grid model. Of course, at poor resolution, the cooling radius is un-resolved, so the simulation cannot capture the early phases where gas is shock-heated to large temperatures. However the sub-grid treatment captures the correct behavior of the high-resolution blastwave once it has expanded to a mass or spatial resolution scale which [*is*]{} resolved in the low-resolution run. (For similar experiments which reach the same conclusions, see e.g. Fig. 6 in @kim:tigress.ism.model.sims).
Effects In FIRE Simulations: Correctly Dealing With Energy & Momentum Matters {#sec:feedback:mechanical:tests:effects}
-----------------------------------------------------------------------------
Having seen in § \[sec:feedback:mechanical:tests:validation\] that correctly accounting for unresolved $PdV$ work in expanding SNe is critical to resolution-independent solutions, we now apply the different treatments therein to cosmological simulations.
Figs. \[fig:sf.history.sne.subgrid\]-\[fig:sf.z0.sne.subgrid\] show the results, for both dwarf- and MW-mass galaxies in cosmological simulations as well as controlled re-starts of the same MW-mass galaxy at $z\sim0$ (to ensure identical late-time ICs).
At the mass resolution scales in Figs. \[fig:sf.history.sne.subgrid\]-\[fig:sf.z0.sne.subgrid\], our default FIRE-2 coupling scheme accurately reproduces the SN momentum, kinetic and thermal energies from much higher-resolution idealized simulations. In contrast, the “Thermal (+Ejecta)” and “Fully-Kinetic” models severely under- and over-estimate, respectively, the kinetic energy imparted by SNe (relative to high-resolution simulations and/or analytic solutions). Not surprisingly, then, this is immediately evident in the galaxy evolution. The “Fully-Thermal” and “Thermal (+Ejecta)” cases resemble a “no SNe” case – because the cooling radii are unresolved, the energy is radiated away immediately, and the terminal momentum that [*should have been resolved*]{} is not properly accounted for – so SNe do far less work than they should and far more stars form. The “Fully-Kinetic” case, on the other hand, wildly over-estimates the conversion of thermal energy to kinetic (and ignores cooling losses), so star formation is radically suppressed.
Given this strong dependence, one might wonder whether the exact details of our FIRE treatment might change the results. However these are not so important. In Appendix \[sec:appendix:implicit.cooling\], we consider a “no implicit cooling” model: here we take our standard FIRE-2 coupling (the coupled momentum, mass, and metals are unchanged), but even if the cooling radius is un-resolved, we still couple the full ejecta energy (i.e. we do not assume, implicitly, that the ejecta thermal energy has radiated away if we do not resolve the cooling radius, so couple a total thermal plus kinetic energy $=E_{\rm ejecta}$). This produces no detectable difference from our default model, which is completely expected. If the cooling radius is resolved, our default model does not radiate the energy away; if it is unresolved, “keeping” the thermal energy in the SNe coupling step simply leads to its being radiated away explicitly in the simulation cooling step on the subsequent timestep.
Fig. \[fig:sf.z0.sne.subgrid\] considers the effects of changing the analytic terminal momentum $p_{t}$ in Eq. \[eqn:terminal.p\], by a factor $\sim 4$. As discussed in § \[sec:feedback:mechanical:sedov\], while there are physical uncertainties in this scaling owing to the uncertain microphysics of blastwave expansion, they are generally smaller than this factor. But in any case, the effect on our galaxy-scale simulations is relatively small, even at low resolution. As expected, smaller $p_{t}$ leads to higher SFRs, because the momentum coupled per SN is smaller, so more stellar mass is needed to self-regulate. In a simple picture where momentum input self-regulates SF and wind generation [see e.g. @ostriker.shetty:2011.turb.disk.selfreg.ks; @cafg:sf.fb.reg.kslaw; @hayward.2015:stellar.feedback.analytic.model.winds], we would expect the SFR to be inversely proportional to $p_{t}$ at low resolution. However, because of non-linear effects, and the fact that even at low resolution the simulations resolve massive super-bubbles (where $p_{t}$ does not matter because the cooling radius for overlapping explosions is resolved), the actual dependence is sub-linear, $\dot{M}_{\ast} \propto p_{t}^{-0.3}$. So given the (small) physical uncertainties, this is not a dominant source of error.[^14]
Recently, @rosdahl:2016.sne.method.isolated.gal.sims performed a similar experiment, exploring different SNe implementations in the AMR code [RAMSES]{}. They used a different treatment of cooling and star formation, non-cosmological simulations, and no other feedback. However, their conclusions are similar, regarding the relative efficiencies of the “Fully-Thermal,” “FIRE-sub grid” (in their paper, the “mechanical” model), and “Fully-Kinetic” treatments of SNe. Our conclusions appear to be robust across a wide range of conditions and detailed numerical treatments.
Again, we have repeated these tests in other halos to ensure our conclusions are not unique to a single galaxy. Specifically, we have compared “Fully-Thermal” and “Fully-Kinetic” runs in halos [**m10v**]{} and [**m12f**]{} from [Paper [I]{}]{}, and compared re-starts from $z=0.07$ of an [**m12f**]{} run with $m_{i,\,1000}=56$ using the same set of parameter variations as Fig. \[fig:sf.z0.sne.subgrid\]. The results are nearly identical to our studies with [**m10q**]{} and [**m12i**]{}.
Convergence: Incorrect Sub-Grid Treatments Converge to the Resolution-Independent FIRE Scaling {#sec:feedback:mechanical:tests:convergence}
----------------------------------------------------------------------------------------------
In Fig. \[fig:sf.fb.sne.subgrid.convergence\], we consider another convergence test of the SNe coupling scheme, but this time in cosmological simulations. We re-run our [**m10q**]{} simulation with standard FIRE-2 physics, considering our default SNe treatment as well as the “Thermal (+Ejecta)” and “Fully-Kinetic” models, with mass resolution varied from $ 30 - 1.3\times10^{5}\,{M_{\sun}}$.
Not only does our default FIRE treatment of SNe produce excellent convergence in the star formation history across this entire resolution range, but [*both*]{} the “Thermal (+Ejecta)” model (which suffers from over-cooling, hence excessive SF, at low resolution because the SNe energy is almost all coupled thermally) and the “Fully-Kinetic” model (which over-estimates the kinetic energy of SNe, hence over-suppresses SF, at low resolution) converge [*to our FIRE solution*]{} at higher resolution, especially at $m_{i}\lesssim 100\,{M_{\sun}}$. Of course, even at our highest resolution, details of SNe shells and venting can differ in the early stages of ejecta expansion, so convergence is not perfect – but the trends clearly approach the “default” model.
On “Delayed-Cooling” and “Target-Temperature” Models {#sec:delayed.cooling.discussion}
----------------------------------------------------
Given the failure of “Fully-Thermal” models at low resolution, a popular “fix” in the galaxy formation literature is to artificially suppress gas cooling at large scales, either explicitly or implicitly. This is done via (a) “delayed cooling” prescriptions, for which energy injected by SNe is not allowed to cool for some large timescale $\Delta t_{\rm delay}\gtrsim t_{\rm dynamical} \sim 10^{7-8}$yr [as in @thackercouchman00; @thackercouchman01; @stinson:2006.sne.fb.recipe; @stinson:2013.new.early.stellar.fb.models; @dubois:delayed.cooling.sne.models], or (b) “target temperature” prescriptions, where SNe energy is “stored” until sufficient energy is accumulated to heat (in a single “event”) a large resolved gas mass to some high temperature $T_{\rm target} \gg 10^{7}\,$K [as in @gerritsen:target.temperature.models; @mori:1997.target.temperature.sne.models; @dalla.vecchia:target.temperature.sne.delayed.cooling.feedback; @crain:eagle.sims].
Although these approximations may be useful in low-resolution simulations with $m_{i}\gtrsim 10^{6}\,{M_{\sun}}$ (typical of large-volume cosmological simulations), where ISM structure and the clustering of star formation cannot be resolved, they are fundamentally ill-posed for simulations with resolved ISM structure, for at least three reasons. [**(1)**]{} Most importantly, they are [*non-convergent*]{} (at least as defined here). This is easy to show rigorously, but simply consider a case with arbitrarily good resolution: then either (a) turning off cooling for longer than the actual shock-cooling time, or (b) enforcing a “target temperature” that does not exactly match the initial reverse-shock temperature will produce un-physical results. Strictly speaking there is no define-able convergence criterion for these models: they do not interpolate to the correct solution as resolution increases, but to some other (non-physical) system. [**(2)**]{} They do not represent the converged solution in Fig. \[fig:sne.convergence.energy\] at any low-resolution radius/mass. Once a SN has swept through, say, $\sim 10^{6}\,{M_{\sun}}$ of gas, it should, correctly, be a cold shell, not a hot bubble. Thus we are not reproducing the higher-resolution solutions correctly, at some finite practical resolution. [**(3)**]{} They introduce an additional set of parameters: $\Delta t_{\rm delay}$ or $T_{\rm target}$, and the “size” (or mass) of the region that is influenced. Both of these strongly influence the results. For example, by increasing the region size, one does not simply “spread” the same energy among neighbors differently, but rather, because the models are binary, one either (a) increases the mass that cannot cool or (b) must change the number of SNe “stored up” (hence the implicit cooling-delay-time) to reach $T_{\rm target}$.
In Appendix \[sec:delayed.cooling\] we consider some implementations of these models, at the resolutions studied here. As expected, we show that they do not converge as we approach resolution $\sim 100\,{M_{\sun}}$, and that certain galaxy properties (metallicities, star formation histories) exhibit biases that are clear artifacts of the un-physical nature of these coupling schemes at high resolution. We therefore do not focus on them further.
Discussion & Conclusions {#sec:discussion}
========================
We have presented an extensive study of both numerical and physical aspects of the coupling of mechanical feedback in galaxy formation simulations (most importantly, SNe, but the methods are relevant to stellar mass-loss and black hole feedback). We explored this in both idealized calculations of individual SN remnants and in the FIRE-2 cosmological simulations at both dwarf and MW mass scales. We conclude that there are two critical components to an optimal algorithm, summarized below.
Ensuring Conservation & Statistical Isotropy {#sec:discussion:conservation}
--------------------------------------------
It is important to design an algorithm that is statistically isotropic (i.e. does not numerically bias the feedback to prefer certain directions), and manifestly conserves mass, metals, momentum, and energy. This is particularly non-trivial in mesh-free numerical methods. In particular, naively distributing ejecta with a simple kernel or area weight to “neighbor” cells or particles – as is common practice in most numerical treatments – can easily produce violations of linear momentum conservation and bias the ejecta so that in, for example, a thin disk, feedback preferentially acts (incorrectly) in the disk plane instead of venting out. This is especially important for [*any*]{} numerical method for which the gas resolution elements might be irregularly distributed around a star (e.g. moving-mesh codes, SPH, or AMR if the star is not at the exact cell center). If these constraints are not met, we show that spurious numerical torques or outflow geometries can artificially remove disk angular momentum and bias predicted morphologies. Worse yet, the momentum conservation errors may not converge and can become more important at high resolution.
In fact, as discussed in detail in § \[sec:feedback:mechanical:ideal.tests:firesims\], our older published “FIRE-1” simulations suffered from some of these errors, but (owing to lower resolution) they were relatively small. Higher-resolution tests, however, demonstrated their importance, motivating the development of the new FIRE-2 algorithm.
In § \[sec:feedback:mechanical\] we present a general algorithm (used in FIRE-2) that resolves all of these issues (as well as accounting for relative star-gas motions), and can trivially be applied in any numerical galaxy formation code (regardless of hydrodynamic method), for any mechanical feedback mechanism.
Accounting for Energy & Momentum from Un-Resolved “PdV Work” {#sec:discussion:pdv}
------------------------------------------------------------
At the mass ($m_{i}$) or spatial resolution ($h_{i}$) of current cosmological simulations, it is [*physically incorrect*]{} to couple SNe to the gas either as entirely thermal energy (heating-only) or entirely kinetic energy (momentum transfer only), or the initial ejecta mix of momentum and energy. Because the SN blastwave has implicitly propagated through a region containing mass $\sim m_{i}$, it [*must*]{} have either (a) done some mechanical (“$PdV$”) work, increasing the momentum of the blastwave, and/or (b) radiated its energy away. In @hopkins:2013.fire we proposed a simple way to account for this in simulations, which we provide in detail in § \[sec:feedback:mechanical:sedov\]. This method is used in all FIRE simulations, was further tested in idealized simulations by @martizzi:sne.momentum.sims, and similar methods have been developed and used in galaxy formation simulations by e.g. @kimm.cen:escape.fraction [@rosdahl:2016.sne.method.isolated.gal.sims]. Essentially, we account for the $PdV$ work by imposing energy conservation up to a terminal momentum (Eq. \[eqn:terminal.p\]), beyond which the energy is radiated, with the transition occurring at the cooling radius of the blastwave.
In this paper, we use high-resolution (reaching $<0.1\,{M_{\sun}}$) simulations of individual SNe to show that this implementation, independent of the resolution at which it is applied, reproduces the exact, converged high-resolution simulation of a single SN blastwave, given the [*same*]{} physics. In other words, taking a high-resolution simulation of a SN in a homogeneous medium and smoothing it at the resolved coupling radius produces the same result as what is directly applied to the large-scale simulations. Perhaps most importantly, we show that this method of partitioning thermal and kinetic energy leads to relatively rapid convergence in predicted stellar masses and star formation histories in galaxy-formation simulations.
In contrast, coupling only thermal or kinetic energy (or the initial ejecta partitioning of the two) will over- or under-predict the coupled momentum by orders of magnitude, in a strongly resolution-dependent fashion (Fig. \[fig:sne.convergence.momentum\]). Briefly, at poor resolution, coupling $\sim 10^{51}\,$erg as thermal energy (e.g. including no momentum or only the initial ejecta momentum) spreads the energy over an artificially-large mass, so the gas is barely heated and efficiently radiates the energy away without resolving the $PdV$ work. But simply converting all (or any resolution-independent fraction) of this energy into kinetic energy, on the other hand, ignores the cooling that should have occurred and will always, at sufficiently poor resolution, over-estimate the correct momentum generated in a resolution-dependent manner (since for fixed kinetic energy input, the [*momentum*]{} generated is a function of the mass resolution). This in turn leads to strongly resolution-dependent predictions for galaxy masses (Fig. \[fig:sf.fb.sne.subgrid.convergence\]). In principle, one could compensate for this by introducing explicitly resolution-dependent “efficiency factors” that are re-tuned at each resolution level to produce some “desired” result, but this severely limits the predictive power of the simulations and will still fail to produce the correct mix of phases in the ISM and outflows (because the correct thermal-kinetic energy mix is not present). Using cosmological simulations reaching $\sim 30\,{M_{\sun}}$ resolution, we show that [*all*]{} of these studied coupling methods do converge to the same solution when applied at sufficiently high resolution. The difference is that the proposed method in § \[sec:feedback:mechanical:sedov\] from the FIRE simulations converges much more quickly (at a factor $\sim1000$ lower resolution), while the unphysical “Fully-Thermal” or “Fully-Kinetic” approaches require mass resolution $\ll 100\,{M_{\sun}}$.
Caveats and Future Work {#sec:discussion:future}
-----------------------
While the SNe coupling algorithm studied here reproduces the converged, high-resolution solution at any practical resolution, it is of course possible that the actual conditions under which the SNe explode (the local resolved density, let alone density sub-structure) continue to change as simulation resolution increases. The small-scale density structure of the ISM might in turn depend on other physics (e.g. HII regions, radiation pressure), which could have different convergence properties from the SNe alone.
We stress that our conclusions are relevant for simulations of the ISM or galaxies with mass resolution in the range $10\,{M_{\sun}}\lesssim m_{i} \lesssim 10^{6}\,{M_{\sun}}$. Below $\ll 100\,{M_{\sun}}$, simulations directly resolve early stages of SNe remnant evolution, and it is less important that the coupling is done accurately because the relevant dynamics will be explicitly resolved. Above $\gg 10^{6}\,{M_{\sun}}$, it quickly becomes impossible to resolve even the largest scales of fragmentation and multi-phase structure in the ISM. Such star formation cannot cluster and SNe are not individually time-resolved (i.e. a resolution element has many SNe per timestep), so there is no possibility of explicitly resolving overlap of many SNe into super-bubbles, regardless of how the SNe are treated. In that limit, it is necessary to implement a [*galaxy-wide*]{} sub-grid model for SNe feedback (e.g. a model that directly implements a mass-loading of galactic winds as presented in e.g. @vogelsberger:2013.illustris.model [@dave:mufasa.followup.gas.metal.sfr.props.vs.time; @dave.2016:mufasa.fire.inspired.cosmo.boxes]).
The scalings above for un-resolved “PdV work” are well-studied for SNe, but much less well-constrained for quasi-continuous processes such as stellar mass-loss (OB & AGB winds) and AGN accretion-disk winds. In both cases, the problem is complicated by the fact that the structure and time-variability (e.g. “burstiness”) of the mass-loss processes themselves is poorly understood. Especially for energetic AGN-driven winds, more work is needed to better understand these regimes.
Finally, new physics not included here could alter our conclusions. For example, magnetic fields, or anisotropic thermal conduction, or plasma instabilities altering fluid mixing, or cosmic rays, could all influence the SNe cooling and expansion. Different stellar evolution models could change the predicted SNe rates and/or energetics. It is not our intent to say that the solution here includes all possible physics. However, [*independent*]{} of these physics, the two key points (§ \[sec:discussion:conservation\]-\[sec:discussion:pdv\]) must still hold! And the goal of any “sub-grid” representation of SNe should be to represent the converged solution [*given the same physics*]{} as the large-scale simulation – otherwise convergence cannot even be defined in any meaningful sense. So in future work it would be valuable to repeat the exercises in this paper for modified physical assumptions. However, the extensive literature studying the effect of different physical conditions on SNe remnant evolution (see references in § \[sec:feedback:mechanical:sedov:background\]) has shown that the terminal momentum is weakly sensitive to these additional physics.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank our referee, Joakim Rosdahl, for a number of insightful comments. Support for PFH and co-authors was provided by an Alfred P. Sloan Research Fellowship, NASA ATP Grant NNX14AH35G, and NSF Collaborative Research Grant \#1411920 and CAREER grant \#1455342. AW was supported by a Caltech-Carnegie Fellowship, in part through the Moore Center for Theoretical Cosmology and Physics at Caltech, and by NASA through grant HST-GO-14734 from STScI. CAFG was supported by NSF through grants AST-1412836 and AST-1517491, and by NASA through grant NNX15AB22G. DK was supported by NSF Grant AST1412153 and a Cottrell Scholar Award from the Research Corporation for Science Advancement. The Flatiron Institute is supported by the Simons Foundation. Numerical calculations were run on the Caltech compute cluster “Wheeler,” allocations TG-AST120025, TG-AST130039 & TG-AST150080 granted by the Extreme Science and Engineering Discovery Environment (XSEDE) supported by the NSF, and the NASA HEC Program through the NAS Division at Ames Research Center and the NCCS at Goddard Space Flight Center.\
Additional Resolution Tests {#sec:resolution}
===========================
Extensive resolution tests of our “default” algorithm, at both dwarf and MW mass scales (and considering both mass and spatial resolution, and additional halos) are presented in [Paper [I]{}]{}. The main text here also directly compares the different sub-grid treatments of un-resolved cooling (“Fully-Thermal,” “Fully-Kinetic,” etc. models) as a function of resolution. Here we simply note that we have re-run tests of the different purely numerical SNe coupling schemes from Figs. \[fig:sf.history.sne.algorithm\]-\[fig:images.resolution.nonsymmetric\], at both dwarf and MW mass scales, at several resolution levels. In both cases we find our conclusions from the main text are not sensitive to resolution. In the dwarf case this is unsurprising, since there was no significant effect from the coupling algorithm. For MW-mass halos, we demonstrate this explicitly in Fig. \[fig:sf.sne.algorithm.hires.models\].
Confirmation that the Errors in the Non-Conservative Algorithm Are Dominated By Extreme Events {#sec:non.con.extreme}
==============================================================================================
![Mock images in HST bands of our [**m12i**]{} run at $z=0$ at our highest resolution ($m_{i,\,1000}=7.0$), for the alternative SNe coupling tests in Fig. \[fig:sf.sne.algorithm.hires.models\]. With the “caps” added to the non-conservative method, the catastrophic errors in Fig. \[fig:images.resolution.nonsymmetric\] are suppressed and the morphology agrees with our “default” run reasonably well. In the “force grid-aligned coupling” run, the spurious torques from numerically forcing the winds along the coordinate axes (incorrectly) drive the disk into alignment with these axes, removing angular momentum from recycling material and producing a more compact disk. \[fig:images.resolution.diff.coupling\]](figs_images/m12i_ref13_s0600_t000_star_N3c.jpg "fig:"){width="0.49\columnwidth"}
In § \[sec:feedback:mechanical:ideal.tests:firesims\], we showed that at sufficiently high resolution, the non-conservative algorithm can produce momentum errors which destroy the thin-disk morphology of a simulated MW-mass galaxy. Here we confirm that the dramatic effects seen there are dominated by a small number of “extreme” or “worst case” events, rather than by the smaller errors that occur more ubiquitously in a non-conservative algorithm.
Specifically, in Figs. \[fig:sf.sne.algorithm.hires.models\] & \[fig:images.resolution.diff.coupling\], we conduct the same tests as Figs. \[fig:sf.history.sne.algorithm\]-\[fig:images.resolution.nonsymmetric\], but with a modified non-conservative algorithm (“non-conservative + caps”). Here we take the non-conservative formulation from § \[sec:feedback:mechanical:ideal.tests:firesims\] and – for testing purposes only – limit the most serious errors by enforcing an upper limit to the fraction of SNe momentum coupled to any one particle ($=h_{b}^{2}/4\,[{\rm MAX}(h_{a},\,h_{b})^{2} + |{\bf x}_{ba}|^{2}]$; where $h_{a}^{3}\equiv m_{a}/\rho_{a}$) and an upper limit to the maximum velocity change of $\sim 50\,{\rm km\,s^{-1}}$ (per “event”).
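For concreteness, a minimal sketch of how such test-only “caps” might be applied to a proposed momentum kick is given below. The function and variable names are illustrative (this is not the actual [GIZMO]{} implementation), the quoted fraction limit is read here as $h_{b}^{2}/(4\,[{\rm MAX}(h_{a},h_{b})^{2} + |{\bf x}_{ba}|^{2}])$, and the $50\,{\rm km\,s^{-1}}$ velocity cap is the value quoted above.

```python
import numpy as np

def cap_sne_kick(dp_b, m_b, h_a, h_b, x_ba, p_total, dv_max=50.0):
    """Apply the test-only 'caps' described above to one neighbor's momentum kick.

    dp_b    : proposed momentum kick vector for neighbor b [Msun km/s]
    m_b     : neighbor gas mass [Msun]
    h_a,h_b : star / gas kernel lengths (same units as x_ba)
    x_ba    : separation vector from star a to gas b
    p_total : total SNe momentum injected in this event [Msun km/s]
    dv_max  : maximum allowed velocity change per coupling event [km/s]
    """
    # Cap 1: limit the fraction of the total SNe momentum any one particle receives
    frac_max = h_b**2 / (4.0 * (max(h_a, h_b)**2 + np.dot(x_ba, x_ba)))
    dp_mag = np.linalg.norm(dp_b)
    if dp_mag > frac_max * p_total:
        dp_b = dp_b * (frac_max * p_total / dp_mag)
        dp_mag = frac_max * p_total
    # Cap 2: limit the velocity change of the neighbor in this single event
    if dp_mag > dv_max * m_b:
        dp_b = dp_b * (dv_max * m_b / dp_mag)
    return dp_b
```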
Figs. \[fig:sf.sne.algorithm.hires.models\] & \[fig:images.resolution.diff.coupling\] clearly demonstrate that it is only the most severe, pathological local coupling cases in the “non-conservative” algorithm which generate the “disk destruction” (as opposed to an integrated sum of small errors). Running this “capped” model at the same resolution, we see a reasonable, clearly thin-disk morphology emerge, in good agreement with our default run. So long as we control (or better yet, eliminate) these errors at a reasonable level, they do not corrupt our solutions. This is why at lower resolution (where the “worst case” kick magnitude was much smaller, $< 10\,{\rm km\,s^{-1}}$ for a single gas element), as we studied in FIRE-1, we do not see problematic behavior.
Problems with Explicitly-Grid-Aligned Feedback Coupling {#sec:grid}
=======================================================
In § \[sec:feedback:mechanical:ideal.tests:firesims\], we discussed the effects of the purely numerical mechanical feedback coupling algorithm, and in particular the importance of algorithms which respect statistical isotropy. Here we compare another algorithm which is not statistically isotropic, for a different reason.
In Fig. \[fig:sf.sne.algorithm.hires.models\], we conduct the same tests as Figs. \[fig:sf.history.sne.algorithm\]-\[fig:images.resolution.nonsymmetric\], but we consider a “Force grid-aligned coupling” model. The coupling follows our default algorithm, except we treat the particles around the SN [*as if*]{} they were distributed in a perfect Cartesian lattice with the SN at the center (as if the SN exploded at the exact center of a cell in a Cartesian grid code), and so enforce the exact same coupling in the $\pm x$, $\pm y$, $\pm z$ coordinate directions. This trivially ensures momentum conservation but is not the correct solution given the actual non-grid distribution of particles. Moreover it imprints the coordinate axes of the simulation directly onto the galaxy – it is a fundamentally non-statistically-isotropic algorithm. But this is useful for comparison, because such “preferred directions” are generic to Cartesian grid-based simulations (e.g. AMR) and their SNe coupling schemes.
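As a concrete illustration of what “forcing the winds along the coordinate axes” means in practice, the sketch below distributes the ejecta momentum equally among the six $\pm x$, $\pm y$, $\pm z$ directions and assigns each neighbor the axis it is closest to in angle. This is a simplified reconstruction for illustration only, not the exact scheme used in the test.

```python
import numpy as np

AXES = np.array([[ 1., 0., 0.], [-1., 0., 0.],
                 [ 0., 1., 0.], [ 0.,-1., 0.],
                 [ 0., 0., 1.], [ 0., 0.,-1.]])

def grid_aligned_kicks(x_ba, p_ej):
    """Distribute ejecta momentum as if the neighbors sat on a Cartesian lattice.

    x_ba : (N,3) separation vectors (gas b minus star a)
    p_ej : scalar ejecta momentum to distribute
    Each of the six axis directions receives an identical share p_ej/6, so the
    vector sum vanishes by construction whenever every axis cone is populated.
    """
    x = np.asarray(x_ba, dtype=float)
    unit = x / np.linalg.norm(x, axis=1, keepdims=True)
    axis_id = np.argmax(unit @ AXES.T, axis=1)    # nearest coordinate axis
    kicks = np.zeros_like(x)
    for k in range(6):
        members = np.flatnonzero(axis_id == k)
        if members.size:                          # empty cones must be handled in practice
            kicks[members] = (p_ej / 6.0 / members.size) * AXES[k]
    return kicks
```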
The “grid-aligned” implementation shows a higher central $V_{c}$, especially at our highest resolution ($m_{i,\,1000}=7$), owing to a more compact disk. This is evident in Fig. \[fig:images.resolution.diff.coupling\], where we compare the $z=0$ visual morphologies of the MW-mass simulations run at our highest resolution. In the “grid-aligned” implementation (uniquely), the disk is nearly perfectly-aligned with the simulation coordinate axes – not surprising given that feedback is forcibly aligned in this case. This artificial alignment generates strong torques on outflowing/recycling material, as well as material within the disk (it must be torqued from its “natural” orientation); as winds recycle and the disk first forms, this in turn produces a significant loss of angular momentum. As a result, the late-time inflowing/recycled material (which forms the disk) has lower angular momentum in this run, and produces a more compact disk, with a much higher central $V_{c}$. Note the error is essentially independent of resolution (whereas the central $V_{c}$ decreases with resolution, in all other algorithms tested), because the grid alignment is resolution-independent.
We show this to emphasize that this can be a serious worry for fixed-grid or adaptive mesh refinement (AMR) codes, where grid-alignment of disks is a ubiquitous and well-known problem [@davis:1984.rotating.upwind.eulerian.scheme], even at extremely high resolution and independent of feedback [because the hydrodynamics themselves are grid-aligned; see e.g. @de-val-borro:2006.disk.planet.interaction.comparison; @byerly:2014.hybrid.cartesian.scheme.for.ang.mom; @hopkins:gizmo], especially in simulations of cosmological disk formation [see @hahn:2010.disk.gal.orientations.ramses]. This may bias these simulations to smaller, more compact galaxies.
Given the highly-irregular dSph morphology of [**m10q**]{}, there is not an obvious difference in that galaxy with this algorithm (there is no thin disk to torque); we therefore do not show a detailed comparison.
We have re-run halos [**m09**]{} and [**m10v**]{} (both dwarfs), and [**m12f**]{} and [**m12m**]{} (MW-mass) from [Paper [I]{}]{} with this algorithm to confirm the results are robust across halos at both dwarf and MW mass scales.
Delayed-Cooling and Target-Temperature Models: Tests {#sec:delayed.cooling}
====================================================
We briefly discussed “delayed-cooling” and “target-temperature” models in the text in § \[sec:delayed.cooling.discussion\]. There we emphasized that such models are fundamentally ill-posed at high resolution. Here we demonstrate this explicitly in zoom-in cosmological simulations at dwarf and MW mass scales.
We compare four simple models, which resemble common implementations in the literature.
1. [*Delayed-Cooling (Physical):*]{} Here we take the “Fully-Thermal” model from the text (injecting the full $10^{51}\,{\rm erg}$ per SNe as thermal energy), but particles which are heated by the SNe are not allowed to cool for a time $\Delta t_{\rm delay}$. Physically, the cooling time of an explosion is (by definition) the time it takes to reach $R_{\rm cool}$: since it is in an energy-conserving Sedov-phase before this, the time is $t^{\rm shock}_{\rm cool} = (2/5)\,R_{\rm cool}/v(R_{\rm cool})$ where $v(R_{\rm cool}) \equiv p_{\rm t}/M_{\rm cool}$ is the velocity at this stage. Given the expression for $p_{\rm t}$ in the text (Eq. \[eqn:terminal.p\]), this is $t^{\rm shock}_{\rm cool} = 5\times10^{4}\,{\rm yr}\,(n/{\rm cm^{-3}})^{-4/7}\,(Z/Z_{\sun})^{-5/14}$ (this is, as it should be, approximately the physical cooling time for metal-enriched gas with the post-shock temperature appropriate for a shock velocity $v(R_{\rm cool})\sim 200\,{\rm km\,s^{-1}}$). So we adopt $\Delta t_{\rm delay} = t^{\rm shock}_{\rm cool}$. Note, though, that $t^{\rm shock}_{\rm cool}$ is $\sim 1000$ times shorter than the gas dynamical time ($1/\sqrt{G\,\rho}$) – so unless the cooling radius (stage of blastwave expansion where the expansion time is shorter than $t^{\rm shock}_{\rm cool}$) is resolved, this will do little work.[^15] (These timescales, and the parameters of the other variants below, are evaluated numerically in the sketch after this list.)
2. [*Delayed-Cooling (300xPhysical):*]{} We take the “Delayed-Cooling (Physical)” model, but arbitrarily multiply the delay timescale by a factor of $300$. This brings it to $\gtrsim 10^{7}\,$yr, comparable to the galaxy dynamical time.
3. [*Target-Temperature (Physical):*]{} We take the “Fully-Thermal” model from the text but wish to heat the targeted gas particles as close as possible to some desired target temperature, $T_{\rm target}\sim 10^{7.5}\,$K, without artificially changing the physics. Therefore we adjust the number of neighbor particles on-the-fly as needed to get as close as possible to this goal. However, even putting $10^{51}\,{\rm erg}$ into a single particle can only increase the temperature by $\Delta T \approx 2.4\times10^{6}\,\,(m_{i}/1000\,{M_{\sun}})^{-1}\,$K. So typically this amounts to putting $100\%$ of the energy of each SN into the neighbor particle which is closest to (but still below) $T_{\rm target}$.
4. [*Target-Temperature (Store SNe):*]{} We [*require*]{} that all gas particles heated by a SN receive sufficient energy such that their temperature rises by $T_{\rm target}=10^{7.5}\,$K. We follow @dalla.vecchia:target.temperature.sne.delayed.cooling.feedback[^16] and achieve this by implicitly turning off cooling – we “store” SNe until a sufficient number have accumulated in order to heat a target gas mass by the desired $T_{\rm target}$. Then all the SNe energy is deposited “at once” in that gas in a thermal-energy dump. To minimize the number of SNe which must be “stored,” we set a target gas mass (for each “heating event”) of just $10$ gas particles. Given this, the number of SNe which must be “stored” and then injected simultaneously is $N_{\rm SNe} \sim 10^{5}\,(m_{i}/10^{6}\,{M_{\sun}})$; this is physically similar to delaying cooling for $\sim 30\,$Myr (while the SNe accumulate) for a gas particle surrounded by $\sim 10$ star particles.
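To make the resolution dependence of these models explicit, the short sketch below evaluates the physical cooling-delay time of variant (1), the single-particle temperature jump of variant (3), and the number of “stored” SNe of variant (4), using only the scalings quoted above; the example particle masses are drawn from the resolutions discussed in this paper.

```python
def delay_time_yr(n_cm3=1.0, Z_over_Zsun=1.0):
    """Physical cooling-delay time for variant (1), in yr, from the scaling quoted above."""
    return 5.0e4 * n_cm3**(-4.0 / 7.0) * Z_over_Zsun**(-5.0 / 14.0)

def dT_single_particle_K(m_i_msun):
    """Temperature jump from depositing 1e51 erg in one particle of mass m_i (variant 3)."""
    return 2.4e6 / (m_i_msun / 1000.0)

def n_sne_stored(m_i_msun):
    """Number of SNe 'stored' before a single heating event (variant 4)."""
    return 1.0e5 * (m_i_msun / 1.0e6)

print(f"delay time (n=1, Z=Zsun) = {delay_time_yr():.1e} yr;"
      f" 300x variant = {300 * delay_time_yr():.1e} yr")
for m_i in (250.0, 2.0e3, 7.0e3, 1.0e5, 1.0e6):
    print(f"m_i = {m_i:8.1e} Msun : dT(one particle) = {dT_single_particle_K(m_i):8.1e} K,"
          f" N_SNe stored ~ {n_sne_stored(m_i):10.3f}")
# By this scaling, fewer than one SN is 'stored' once m_i drops below ~1e4 Msun,
# approaching the regime where the prescription becomes ill-defined (see below).
```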
Figs. \[fig:delaycool.physics\]-\[fig:delaycool.resolution\] repeat the experiments from § \[sec:delayed.cooling.discussion\] in the main text, for these models. Not surprisingly, at the resolution shown ($m_{i}\gg 100\,{M_{\sun}}$), the “physically-motivated” models (either delayed-cooling or target-temperature) resemble the “Fully-Thermal” model from the text, which itself resembled the “no SNe” result. Turning off cooling only for the real cooling time, or heating gas only to the correct physical temperature, ignoring momentum, leads to over-cooling at low resolution.
Of course, in this class of models we can simply adjust the model parameters until a reasonable stellar mass is obtained. The “Delayed-cooling (300xPhysical)” and “Target-temperature (Store SNe)” models manage to produce order-of-magnitude similar galaxy masses to our converged default model at low resolution. However there are serious issues.
1. The actual explicit or implicit “cooling turnoff times” are wildly unphysical ($\gtrsim 10\,$Myr) – many orders of magnitude larger than physical in both cases [see @martizzi:sne.momentum.sims; @agertz:sf.feedback.multiple.mechanisms]. Thus the solutions we “insert” on large scales do [*not*]{} in any way resemble a “down-sampled” high-resolution simulation; nor can the relevant parameters be predicted [*a priori*]{} from higher-resolution simulations. Note that such unphysically-long delayed cooling times are what are actually used in most simulations with these “delayed cooling” models [e.g. @stinson:2006.sne.fb.recipe; @shen:2014.seven.dwarfs; @crain:eagle.sims].
It has been suggested that these models, while obviously unphysical for a single SN explosion, could represent the result of SNe which are strongly clustered in both space and time. However, all the simulations here, by allowing resolved cooling into GMCs, explicitly resolve stellar clustering (and if anything, we show in [Paper [I]{}]{} that low resolution tends to over-estimate clustering, owing to discrete star-particle sampling). Therefore if such clustering were to occur, one would not need to artificially turn off cooling or store SNe (one could simply allow the explosions to occur rapidly and create a super-bubble, as occurs in our default models). In contrast, these models [*impose*]{}, rather than predict, a strong and [*explicitly resolution-dependent*]{} assumption about clustering: for e.g. the target temperature model it is that SNe explode in “units” in both time and space of $\sim 10^{5}\,{\rm SNe}\,(m_{i}/10^{6}\,{M_{\sun}})$.
@walch.naab:sne.momentum [@martizzi:sne.momentum.sims; @kim:tigress.ism.model.sims] and most explicitly @kim:superbubble.mass.loading have demonstrated this in greater detail, in studies of idealized single-SN explosions or clustered SNe in a sub-volume of the ISM. There, these authors demonstrate more explicitly that delayed cooling or target-temperature models are not a good approximation to the “down-sampled” results of high-resolution simulations.
2. As also noted by @agertz:sf.feedback.multiple.mechanisms, this un-physical feedback coupling produces several artifacts in the galaxy properties. [**(1)**]{} Shapes of the star formation histories are biased: in dwarfs the star formation in both cases is much more concentrated at early times, compared to our converged solutions in Fig. \[fig:sf.fb.sne.subgrid.convergence\]. [**(2)**]{} In massive galaxies, the “delayed cooling” model accumulates a massive reservoir of gas (with its cooling turned off by successive generations of SNe) at the galaxy center, which finally (because of the dependence of $t_{\rm delay}$ on density and metallicity) achieves a short cooling time even with the imposed “delay,” then forms a strong starburst (at time $\sim 10\,$Gyr) and leaves an extremely compact bulge (the $\sim 500\,{\rm km\,s^{-1}}$ rotation-curve peak). [**(3)**]{} The metal abundances are highly sensitive to the “delayed cooling” and “target temperature” model implementations, and vary by several orders of magnitude in the variants explored here. The metallicities for dwarfs are extremely high in the “target temperature” models, because the “stored” SNe inject a huge metal mass simultaneously,[^17] which is ejected from the galaxy but is so metal-rich that it re-cools and preferentially forms the next generation of stars. We have verified this feature remains regardless of whether we include or exclude explicit “turbulent metal mixing” (numerical metal diffusion) terms as described in [Paper [I]{}]{}. [**(4)**]{} The gas phase structure is quite different from our converged solutions in the text. Since these models rely only on hot gas, there is little or no cool ($\sim 10^{4}-10^{5}\,$K) or cold ($\lesssim 10^{4}\,$K) gas in the outflows here, unlike our default simulations [see @muratov:2015.fire.winds; @muratov:2016.fire.metal.outflow.loading; @faucher-giguere:2014.fire.neutral.hydrogen.absorption; @faucher.2016:high.mass.qso.halo.covering.fraction.neutral.gas.fire; @angles.alcazar:particle.tracking.fire.baryon.cycle.intergalactic.transfer], although @rosdahl:2016.sne.method.isolated.gal.sims show that alternative “delayed cooling” implementations err in the opposite manner and produce far too much cold, dense gas in outflows.
3. The solutions are non-convergent. Fig. \[fig:delaycool.resolution\] shows this explicitly, re-running [**m10q**]{} at resolution from $250-10^{5}\,{M_{\sun}}$, with the “Delayed-cooling (300xPhysical)” and “Target-temperature (Store SNe)” models. In both, the stellar masses, metallicities, central galaxy (and dark matter) densities, rotation curves in the central $\sim$kpc, and late-time star formation rates all increase systematically as the resolution increases.
Of course, this owes to the explicit resolution-dependence of the assumed clustering and blastwave structure of SNe. In target-temperature models, the SNe cluster and are synchronized in time and space in an explicitly particle-mass dependent manner. In delayed-cooling models, the “cooling mass” $M_{\rm cool}$ is essentially defined to be the mass of the kernel over which the SNe are distributed (some multiple of the particle mass): since the terminal momentum for an energy-conserving blast (which this is forced to be, by not allowing cooling) is $p_{t} \sim \sqrt{E_{\rm SNe}\,M_{\rm cool}}$, the momentum injected increases $\propto M_{\rm cool}^{1/2} \propto m_{i}^{1/2}$, so feedback becomes more efficient at lower resolution (analogous to the “Fully-Kinetic” models discussed in the text).
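To see how steep this resolution dependence is, the snippet below evaluates $p_{t}\sim\sqrt{2\,E_{\rm SNe}\,M_{\rm cool}}$ for a kernel (“cooling”) mass of $\sim64$ particles; the factor of $2$ and the $64$-neighbor kernel are assumptions made here for an order-of-magnitude illustration, and the result should be compared to the physical terminal momentum of a single SN, which is of order a few $\times10^{5}\,{M_{\sun}}\,{\rm km\,s^{-1}}$.

```python
import numpy as np

MSUN_G, KM_CM, E_SN_ERG = 1.989e33, 1.0e5, 1.0e51

def p_energy_conserving(m_i_msun, n_ngb=64):
    """Momentum [Msun km/s] of a blast kept energy-conserving out to the kernel mass."""
    M_cool = n_ngb * m_i_msun * MSUN_G           # kernel ('cooling') mass in grams
    p_cgs = np.sqrt(2.0 * E_SN_ERG * M_cool)     # from p^2 / (2 M_cool) = E_SNe
    return p_cgs / (MSUN_G * KM_CM)

for m_i in (1.0e2, 1.0e3, 1.0e4, 1.0e5, 1.0e6):
    print(f"m_i = {m_i:8.1e} Msun  ->  p ~ {p_energy_conserving(m_i):9.2e} Msun km/s")
# The injected momentum grows as m_i^(1/2); at low resolution it exceeds the
# physical terminal momentum of a single SN remnant by a large factor.
```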
Interestingly, while the lack of convergence for delayed-cooling models is “smooth,” the “target temperature” models exhibit false convergence in some properties (such as stellar mass) at low resolution, then “jump” in the predicted values once a critical mass resolution (here $\sim 2000\,{M_{\sun}}$) is reached. That is of course the mass resolution where the [*physical*]{} cooling radii of SNe begin to be resolved: so the fundamental meaning and behavior of the sub-grid model changes. At even higher resolution, the “target temperature” of $\sim 10^{7.5}$K would actually become [*lower*]{} than the correct, resolved blastwave temperatures: this would lead one to “store” $<1$ SN at a time. Clearly, in this limit the “delayed cooling” and “target temperature” models simply become ill-defined.
Energy-Conserving Solutions Accounting for Arbitrary Star-Gas Motions {#sec:energy.cons.w.motion}
=====================================================================
In the text, we noted that, for a spherically symmetric blastwave propagating into a medium initially at rest, converting an energy $E_{\rm ej} = (1/2)\,m_{\rm ej}\,{v}_{\rm ej}^{2}$ into kinetic energy (pure radial momentum), after coupling to a total mass $m_{b}$, simply implies a final kinetic energy $p_{\rm final}^{2}/(2\,(m_{b} + m_{\rm ej})) = E_{\rm ej}$, giving $p_{\rm final} = (1+m_{b}/m_{\rm ej})^{1/2}\,m_{\rm ej}\,(2\,E_{\rm ej}/m_{\rm ej})^{1/2} = (1+m_{b}/m_{\rm ej})^{1/2}\,p_{\rm ej}$, where $p_{\rm ej}=m_{\rm ej}\,v_{\rm ej}$ is the initial ejecta momentum. The situation is more complex if we allow for arbitrary initial gas and stellar velocities.
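A direct numerical transcription of this stationary-medium statement is given below; units are arbitrary but must be consistent, and the example values are placeholders.

```python
import numpy as np

def p_final(m_b_tot, m_ej, E_ej):
    """Final radial momentum when E_ej is converted entirely into kinetic energy of
    the combined swept-up + ejecta mass, for an ambient medium initially at rest."""
    p_ej = np.sqrt(2.0 * E_ej * m_ej)              # initial ejecta momentum
    return np.sqrt(1.0 + m_b_tot / m_ej) * p_ej    # equals sqrt(2 E_ej (m_b + m_ej))

m_ej, E_ej, m_b = 1.0, 50.0, 99.0                  # placeholder values
p = p_final(m_b, m_ej, E_ej)
assert np.isclose(p**2 / (2.0 * (m_b + m_ej)), E_ej)   # kinetic energy is conserved
```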
First recall the mass conservation condition $\sum \Delta m_{b} = m_{\rm ej}$ is un-altered by star or gas motion. The momentum condition is, in the rest frame of the star, $\sum \Delta {\bf p}_{b} = \mathbf{0}$, which in the lab/simulation frame becomes $\sum \Delta {\bf p}^{\prime}_{b} = m_{\rm ej}\,{\bf v}_{a}$, trivially satisfied by the boost $\Delta {\bf p}_{b}^{\prime} = \Delta {\bf p}_{b} + \Delta m_{b}\,{\bf v}_{a}$. In these two cases, no net mass or linear momentum is created/destroyed. For energy, we must account for the energy injected. Consider a hypothetical instant “just after” explosion, but “before” coupling. Then the mass of the star particle is $m_{a}-m_{\rm ej}$, moving at ${\bf v}_{a}$. Gas neighbors $b$ have their “unperturbed” mass and velocities, etc. In the rest-frame of the star, the ejecta contain the energy $E_{\rm ej}=(1/2)\,m_{\rm ej}\,{v}_{\rm ej}^{2}$. Assume the ejecta have negligible initial internal energy, then $v_{\rm ej}$ is the real radial velocity. If the ejecta are isotropic in the rest frame, each parcel in some solid angle $d\Omega$ carries mass $dm = m_{\rm ej}/(4\pi)\,d\Omega$, with velocity ${\bf v}_{\rm ej} = v_{\rm ej}\,\hat{r}$ (where $\hat{r}$ points from the star radially outward). If the star is moving initially at velocity ${\bf v}_{a}$, the whole system is boosted, and ${\bf v}^{\prime}_{\rm ej} = v_{\rm ej}\,\hat{r} + {\bf v}_{a}$. To calculate $E_{\rm ej}^{\prime} = (1/2)\,\int |{\bf v}^{\prime}_{\rm ej}|^{2}\,dm$ in the lab frame, note $|{\bf v}^{\prime}_{\rm ej}|^{2} = v_{\rm ej}^{2} + 2\,{\bf v}_{\rm ej}\cdot {\bf v}_{a} + v_{a}^{2} = v_{\rm ej}^{2} + v_{a}^{2} + 2\,v_{\rm ej}\,v_{a}\,\cos{\theta}_{ea}$ (where we can define standard spherical coordinates such that $\hat{r}\cdot \hat{\bf v}_{a} \equiv \cos{\theta}_{ea}$). Using $dm = m_{\rm ej}/(4\pi)\,d\Omega = m_{\rm ej}/(4\pi)\,d\phi\,d\cos{\theta}_{ea}$, we trivially obtain that the “cross-term” ${\bf v}_{\rm ej}\cdot {\bf v}_{a}$ vanishes (integrating over all ejecta), so we have $E_{\rm ej}^{\prime} = (m_{\rm ej}/2)\,(v_{\rm ej}^{2} + v_{a}^{2})$.
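The vanishing of the cross-term can be verified directly with a small Monte-Carlo integration over the ejecta solid angle, as sketched below (arbitrary velocity units; isotropic ejecta in the stellar rest frame, as assumed in the text).

```python
import numpy as np

rng = np.random.default_rng(0)
m_ej, v_ej, v_a = 1.0, 10.0, 3.0        # arbitrary units

# sample isotropic ejecta directions r_hat in the rest frame of the star
n = 200_000
mu = rng.uniform(-1.0, 1.0, n)          # cos(theta) measured from the direction of v_a
phi = rng.uniform(0.0, 2.0 * np.pi, n)
s = np.sqrt(1.0 - mu**2)
r_hat = np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])

v_lab = v_ej * r_hat + np.array([0.0, 0.0, v_a])   # boost each parcel by the stellar velocity
E_lab = 0.5 * (m_ej / n) * np.sum(v_lab**2)

# agrees with (m_ej/2)(v_ej^2 + v_a^2) to Monte-Carlo noise: the cross term averages to zero
print(E_lab, 0.5 * m_ej * (v_ej**2 + v_a**2))
```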
Now assume we couple some energy and momentum to all the neighbors $b$ – the exact [*discrete*]{} energy conservation condition must be satisfied, summed over all elements which receive some ejecta mass/energy/momentum. In the simulation/lab frame, the energy conservation condition can be written: $$\begin{aligned}
E_{\rm initial}& + E_{\rm ej}^{\prime} = \\
\nonumber & \frac{(m_{a}-m_{\rm ej})}{2}\,{\bf v}_{a}^{2} + \sum_{b}\,\frac{{\bf p}_{b}^{2}}{2\,m_{b}}+
\frac{m_{\rm ej}}{2}\,({\bf v}_{a}^{2} + v_{\rm ej}^{2}) + U_{0} \\
\nonumber &= \frac{m_{a}}{2}\,{\bf v}_{a}^{2} + \sum_{b}\,\frac{{\bf p}_{b}^{2}}{2\,m_{b}} + \frac{m_{ej}}{2}\,v_{\rm ej}^{2} + U_{0} \\
\nonumber &= E_{\rm final} = \frac{(m_{a}-m_{\rm ej})}{2}\,{\bf v}_{a}^{2} + \sum_{b}\,\frac{|{\bf p}_{b} + \Delta {\bf p}^{\prime}_{b}|^{2}}{2\,(m_{b}+\Delta m_{b})} + U_{f}\end{aligned}$$ where (as in the main text) we use $x_{b}$ to denote the pre-coupling value of $x$. Here $U_{0}$ and $U_{f} \equiv U_{0}+\Delta U$ collect the non-kinetic energy terms (e.g. thermal energy, discussed below).
Now using the fact that $\Delta {\bf p}^{\prime}_{b} \equiv \Delta {\bf p}_{b} + \Delta m_{b}\,{\bf v}_{a}$, we can write this in terms of the coupled momenta $\Delta {\bf p}_{b}$. Use the identities ${\bf v}_{b} = {\bf p}_{b}/m_{b}$, $\sum \Delta m_{b} = m_{\rm ej}$, $\sum \Delta {\bf p}_{b} = \mathbf{0}$, $\mu_{b} \equiv \Delta m_{b}/m_{b}$, $1/(1+x) = 1-x/(1+x)$, and following through with some tedious algebra, we can re-arrange terms and write the energy conservation condition as: $$\begin{aligned}
\label{eqn:egycon.reduced} \epsilon &= \sum_{b}\,\left[ \frac{|\Delta {\bf p}_{b}|^{2}}{2\,m_{b}\,(1+\mu_{b})}
+ \frac{{\bf v}_{ba} \cdot \Delta {\bf p}_{b}}{(1+\mu_{b})}
\right] \end{aligned}$$ where ${\bf v}_{ba} \equiv {\bf v}_{b}-{\bf v}_{a}$ and the energy $\epsilon$ is defined by: $$\begin{aligned}
\epsilon &\equiv \frac{1}{2}\,m_{\rm ej}\,\left( v_{\rm ej}^{2} + \sum_{b} w^{\prime}_{b}\,|{\bf v}_{ba}|^{2} \right) - \Delta U \\
w^{\prime}_{b} &\equiv \frac{1}{1+\mu_{b}}\,\left(\frac{\Delta m_{b}}{m_{\rm ej}}\right)\ .\end{aligned}$$ This makes it clear that the dynamics depend only on the [*relative*]{} velocity ${\bf v}_{ba}$ of gas relative to the star (i.e. a uniform boost will not change the dynamics, as it should not). In $\epsilon$, the term in ${\bf v}_{ba}^{2}$ reflects the additional energy generated by relative gas-star motion – since $\sum w^{\prime}_{b}\approx1$, this is negligible for SNe where ${\bf v}_{ba}^{2} \ll v_{\rm ej}^{2}$, but potentially important for slow winds.
Now without loss of generality, define the coupled momentum $\Delta {\bf p}_{b}$ as the value we used in the text (for the case where the gas is not moving relative to the star) multiplied by an arbitrary function $\psi_{b}$: $$\begin{aligned}
\label{eqn:delta.p.defn}\Delta {\bf p}_{b} &\equiv \psi_{b}\,\Delta m_{b}\,\left(1 + \frac{m_{b}}{\Delta m_{b}} \right)^{1/2}\,\left(\frac{2\,\epsilon}{m_{\rm ej}} \right)^{1/2} \Delta\hat{\bf p}_{b}\end{aligned}$$ where $\Delta\hat{\bf p}_{b}\equiv \Delta{\bf p}_{b}/|\Delta{\bf p}_{b}|$. Inserting this into the energy conservation condition in Eq. \[eqn:egycon.reduced\], we obtain the constraint equation in terms of $\psi$: $$\begin{aligned}
\label{eqn:psi.simplified.b} 1 &= \sum_{b}\,\psi_{b}^{2}\,\frac{\Delta m_{b}}{m_{\rm ej}} + 2\,\sum_{b}\,\psi_{b}\,\cos{\theta}_{ba}\,\left({\frac{w^{\prime}_{b}\,m_{b}\,|{\bf v}_{ba}|^{2}}{2\,\epsilon}}\right)^{1/2} \\
&\cos{\theta}_{ba} \equiv \hat{\bf v}_{ba} \cdot \Delta\hat{\bf p}_{b} = \frac{{\bf v}_{ba}\cdot \Delta {\bf p}_{b}}{|{\bf v}_{ba}|\,|\Delta {\bf p}_{b}|}\end{aligned}$$ If ${\bf v}_{ba}=0$ (no initial gas-star motion), then the term in $\cos{\theta}_{ba}$ vanishes, and this is trivially solved for $\psi_{b}=1$, and Eq. \[eqn:delta.p.defn\] reduces to our solution for a spherically symmetric explosion in a stationary medium (as it should). More generally, any $\psi_{b}$ and $\Delta\hat{\bf p}_{b}$ must still produce $\sum \Delta{\bf p}_{b} = {\bf 0}$. If we have defined a set of vector weights, as in the main text, such that this is true for the ${\bf v}_{ba}=0$ (stationary) case, then the simplest choice which guarantees $\sum \Delta{\bf p}_{b} = {\bf 0}$ is preserved is to take $\psi_{b}=\psi$, such that $\sum \Delta {\bf p}_{b} \rightarrow \psi\,\sum \Delta {\bf p}_{b}^{\rm stationary} = \mathbf{0}$. Of course in detail the true solution for a blastwave in an inhomogeneous medium with locally varying velocities could feature variable $\psi_{b}$ or changes in the direction $\Delta\hat{\bf p}_{b}$ (i.e. work being done in different directions from the initial ejecta expansion); but by definition this sub-structure is un-resolved at the coupling radius so we should think of $\psi$ as an average over the un-resolved structure. The solution to Eq. \[eqn:psi.simplified.b\] is then simply: $$\begin{aligned}
\psi_{b} &= \psi \equiv \sqrt{1+\beta_{\psi}^{2}} - \beta_{\psi} \\
\beta_{\psi} &\equiv \sum_{b}\,\cos{\theta}_{ba}\,\left({\frac{w^{\prime}_{b}\,m_{b}\,|{\bf v}_{ba}|^{2}}{2\,\epsilon}}\right)^{1/2} \end{aligned}$$
This gives us the desired expression in the strictly energy-conserving limit. But at low resolution (large particle masses) the solution is not energy-conserving (the blastwave has reached the terminal momentum and radiated energy away). Following the text, return to Eq. \[eqn:egycon.reduced\] and insert the terminal momentum $\Delta {\bf p}_{b} = \Delta {\bf p}_{b}^{\rm terminal} = \phi_{b}\,\Delta {\bf p}_{b}^{\rm terminal}({\bf v}_{ba}=0) = \phi_{b}\,(p_{\rm t}/p_{\rm ej})\,\Delta {\bf p}_{b}^{\rm initial} = \phi_{b}\,(p_{\rm t}/p_{\rm ej})\,\Delta m_{b}\,(2\,\epsilon/m_{\rm ej})^{1/2}\,\Delta \hat{\bf p}_{b}$, where $\phi_{b}$ is an arbitrary constant analogous to $\psi_{b}$. Following the solution above we will take $\phi_{b}\rightarrow\phi$. Since some energy has been radiated, the right hand side of Eq. \[eqn:egycon.reduced\] must be $< \epsilon$ (the constraint is an inequality). This is solved by: $$\begin{aligned}
\phi &= {\rm MIN}\left[1,\,{\alpha_{\phi}^{-1}}\,\left(\sqrt{\beta_{\phi}^{2}+{\alpha_{\phi}}} -\beta_{\phi} \right) \right] \\
\alpha_{\phi} &\equiv \sum_{b}\,w_{b}^{\prime}\,\frac{\Delta m_{b}}{m_{\rm ej}}\,\frac{p_{t}}{m_{b}\,v_{t}} \\
\beta_{\phi} &\equiv \sum_{b}\,w_{b}^{\prime}\,\cos{\theta}_{ba}\,\frac{|{\bf v}_{ba}|}{v_{t}} \end{aligned}$$ where $v_{t} \equiv 2\,\epsilon/p_{t}$ (approximately the velocity at which the blastwave becomes radiative). In the limit where the terminal momentum is reached, $p_{t} \ll m_{b}\,v_{t}$ by definition, so $\alpha_{\phi}$ is vanishingly small and $\phi \approx {\rm MIN}[1,\,1/2\beta_{\phi}]$ (for $\beta_{\phi}>0$). This has a simple interpretation then: $\beta_{\phi}$ is just the (kernel-averaged) ratio of the net outward gas velocity from the SN to the velocity where the blastwave becomes radiative. If the recession velocity exceeds $v_{t}\sim 200\,{\rm km\,s^{-1}}$ (the velocity at which the terminal momentum is reached, for a stationary surrounding medium), the SN must reach terminal momentum earlier (at a higher velocity therefore lower terminal momentum) before the ambient medium “outruns” the blastwave: mathematically $\beta_{\phi} \gtrsim 1$ and $\phi \lesssim 1$, accordingly.
Having computed $\psi$ and $\phi$, we can then decide which limit (the energy-conserving or terminal-momentum limit) a gas element should be in, as in Eqs. \[eqn:dp.subgrid.sub1\]-\[eqn:dp.subgrid\], by comparing the corrected $\Delta {\bf p}_{b}^{\rm energy-conserving}$ (Eq. \[eqn:delta.p.defn\]) and corrected $\Delta {\bf p}_{b}^{\rm terminal}$ above. If $|\Delta {\bf p}_{b}^{\rm energy-conserving}| > |\Delta {\bf p}_{b}^{\rm terminal}|$ (the momentum implied by the energy-conserving limit exceeds the terminal momentum), [or]{} $m_{b} > m^{b}_{\rm cool} \equiv |\Delta {\bf p}_{b}^{\rm terminal}| / v_{t}$ (the mass of particle $b$ exceeds the swept-up-mass at which the energy-conserving solution would de-celerate to below $v_{t} = 2\,\epsilon/p_{t}$), then the terminal solution is applied (otherwise the energy-conserving solution is applied). If the gas-particle motion is negligible ($\psi\approx\phi\approx1$), these two conditions are [*exactly*]{} equivalent; more generally we need to check both.
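A compact numerical transcription of the above bookkeeping is sketched below. It assumes an idealized coupling kernel: the ejecta mass assigned to each neighbor and the unit coupling directions $\Delta\hat{\bf p}_{b}$ are supplied by the caller (with the directions already chosen so that the stationary-case kicks sum to zero), and the terminal momentum $p_{\rm t}$ is passed in rather than evaluated from the local density and metallicity. It is a sketch of the logic, not the actual [GIZMO]{} implementation.

```python
import numpy as np

def sne_momentum_kicks(m_b, v_ba, dm_b, p_hat, m_ej, E_ej, p_t, dU=0.0):
    """Per-neighbor momentum kicks, including relative star-gas motion.

    m_b   : (N,)  neighbor gas masses
    v_ba  : (N,3) gas velocities relative to the star
    dm_b  : (N,)  ejecta mass assigned to each neighbor (sums to m_ej)
    p_hat : (N,3) unit coupling directions (stationary-case weights)
    """
    v_ej = np.sqrt(2.0 * E_ej / m_ej)
    mu = dm_b / m_b
    w = (dm_b / m_ej) / (1.0 + mu)                                  # w'_b
    vmag = np.linalg.norm(v_ba, axis=1)
    cos_th = np.einsum('ij,ij->i', v_ba, p_hat) / np.where(vmag > 0, vmag, 1.0)

    # effective coupled energy, including the relative-motion term
    eps = 0.5 * m_ej * (v_ej**2 + np.sum(w * vmag**2)) - dU

    # energy-conserving limit (psi correction)
    beta_psi = np.sum(cos_th * np.sqrt(w * m_b * vmag**2 / (2.0 * eps)))
    psi = np.sqrt(1.0 + beta_psi**2) - beta_psi
    dp_ec = (psi * dm_b * np.sqrt(1.0 + m_b / dm_b)
             * np.sqrt(2.0 * eps / m_ej))[:, None] * p_hat

    # terminal-momentum limit (phi correction)
    v_t = 2.0 * eps / p_t
    alpha = np.sum(w * (dm_b / m_ej) * p_t / (m_b * v_t))
    beta_phi = np.sum(w * cos_th * vmag / v_t)
    phi = min(1.0, (np.sqrt(beta_phi**2 + alpha) - beta_phi) / alpha)
    dp_term = (phi * (p_t / (m_ej * v_ej)) * dm_b
               * np.sqrt(2.0 * eps / m_ej))[:, None] * p_hat

    # apply the terminal solution where either criterion in the text is met
    use_term = (np.linalg.norm(dp_ec, axis=1) > np.linalg.norm(dp_term, axis=1)) \
               | (m_b > np.linalg.norm(dp_term, axis=1) / v_t)
    return np.where(use_term[:, None], dp_term, dp_ec)
```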
Physically, note that for ${\bf v}_{ba}=0$ (non-moving cases or uniformly boosting the whole simulation), $\beta_{\psi}=\beta_{\phi}=0$ so $\psi=\phi=1$ and we obtain the stationary case as expected. For ${\bf v}_{ba}=$constant (assuming $\Delta m_{b} \ll m_{b}$), the condition $\sum \Delta{\bf p}_{b}={\bf 0}$ becomes mathematically identical to $\beta_{\psi}=\beta_{\phi}=0$, so there is no change in the coupled momentum or energy relative to what would occur in the stationary case (this is easiest to see by returning to Eq. \[eqn:egycon.reduced\] and simply taking the ${\bf v}_{ba}$ term outside the sum). In a medium moving with uniform velocity relative to the star, the blastwave produces a stronger, slower-moving shock in the “upwind” direction and a weaker, faster-moving shock in the “downwind” direction, but it is easy to verify that the differences in the energies produced in the two directions cancel one another (and of course, the momentum imparted in both directions must, by conservation, be equal). In a turbulent medium, different velocities will tend to cancel, so $\beta_{\psi,\,\phi}$ will both be small.
However, when there is a large net inflow/outflow motion around the star, $\beta_{\psi,\,\phi}$ can be non-negligible. Consider a spherically symmetric case with ${\bf v}_{ba}=v_{r}\,\hat{r}$ so $\cos{\theta}_{ba}=1$ if $v_{r}>0$, or $\cos{\theta}_{ba}=-1$ if $v_{r}<0$, and assume the total mass to which the ejecta is coupled is distributed in a shell with mass $M_{\rm coupled}$. Then $|\beta_{\psi}|\sim (M_{\rm coupled}\,v_{r}^{2}/2\,\epsilon)^{1/2} \sim (KE_{\rm initial}/KE_{\rm ejecta})^{1/2}$, i.e. $\beta_{\psi}^{2}$ scales with the ratio of the initial (pre-coupling) kinetic energy of the surrounding gas elements (across which the ejecta are distributed) to the ejecta energy. Although $|{\bf v}_{ba}| = |v_{r}| \ll v_{\rm ej}$ for SNe, the kinetic energy is weighted by the particle mass, so it is [*not*]{} necessarily negligible: $|\beta_{\psi}|\gtrsim 1$ if the typical $ |v_{r}| \gtrsim (m_{\rm ejecta}/M_{\rm coupled})^{1/2}\,v_{\rm ej} \sim 350\,{\rm km\,s^{-1}}\,(m_{i}/100\,{M_{\sun}})^{-1/2}$ for typical core-collapse SNe. At sufficiently high resolution, then, this term becomes negligible (the gas velocities are never so coherently large), but at low resolution it can be important. Of course, at low resolution, the cooling radius is un-resolved and we should use the terminal momentum expression, where $\beta_{\phi} \gtrsim 1$ requires $|v_{r}|\gtrsim v_{t} \sim 200\,{\rm km\,s^{-1}}$ – the expression becomes resolution-independent in this limit (once $m_{i}$ exceeds a few hundred solar masses). In either case, if such large $v_{r}$ is reached, $\psi\approx 1/2\beta_{\psi}$ (or $\phi\approx1/2\beta_{\phi}$) becomes $<1$. This comes from the ${\bf p}_{b}\cdot \Delta {\bf p}_{b}$ term, which dominates over $\Delta {\bf p}_{b}^{2}$ in this limit – physically, it requires more energy to accelerate a shell which is already moving rapidly away from the origin. Conversely, when $|v_{r}|$ is large and $\beta_{\psi}<0$, $\psi\approx2\,|\beta_{\psi}|\gg1$, i.e. this implies a larger momentum injection, with the energy for the additional $PdV$ work coming from the shocked external medium falling onto the shock.
If we choose to keep our simple momentum scaling from the main text (setting $\psi=\phi=1$ always) – i.e. assume that the momentum scaling of SNe is robust across variations in the surrounding gas velocity field – then this necessarily means the kinetic energy coupled by SNe varies, with larger kinetic energy coupled in cases with a net “outflow” motion around the star, and smaller kinetic energy change in cases with net “inflow” motion around the star. This is not necessarily unphysical – it depends, to some extent, on whether the more robust property of SNe blastwaves in a non-uniform flow is their kinetic energy or their momentum. Similarly, it is quite possible that the general scaling for the terminal momentum $p_{t}$ from the text could have a complicated dependence on the detailed structure of the velocity field, although simulations in turbulent media discussed in § \[sec:feedback:mechanical:sedov:background\] suggest that, on average, broadly similar results are obtained as in simulations where the background is stationary. Clearly, future work is warranted to explore these conditions in more detail.
In practice we find that whether we include this more detailed correction, or set $\psi=\phi=1$, almost always has a small effect on galaxy properties at all resolution levels: some examples are shown in Fig. \[fig:sneenergy.method.tests\]. Galaxy masses, star formation histories, mass profiles, visual morphologies, metal abundance distribution functions, rotation curves, CGM gas content, and mean outflow rates are essentially unchanged (with at most a systematic $\sim 0.1$dex shift in the masses of very low-mass dwarfs, and smaller effects in higher-mass galaxies). We have specifically tested this in the [**m10q**]{} and [**m12i**]{} galaxies in this paper as well as galaxies [**m10v**]{}, [**m11q**]{}, [**m12f**]{} and [**m12m**]{} from [Paper [I]{}]{}; we have compared all properties discussed in this paper and in [Paper [I]{}]{}. The fact that these corrections produce such small effects owes to the fact that coherent, large inflow/outflow velocities around star particles are rare and, even when they occur, tend to average out over time and space. Even in the worst-possible-case (maximal $\beta_{\psi}$) scenario, namely violent post-starburst outflow episodes around dwarf galaxies at low resolution, where most of the ISM of the galaxy is evacuated, the net change in kinetic energy of the gas setting $\psi=\phi=1$ only differs from the kinetic energy coupled with the exact formulation here by a factor $\sim 2$. And, critically, the difference between methods vanishes ($\beta_{\psi},\,\beta_{\phi}\rightarrow0$) at sufficiently high resolution.
Details of Unresolved Cooling Do Not Influence Predictions of Our Default Model {#sec:appendix:implicit.cooling}
===============================================================================
As noted in § \[sec:feedback:mechanical:tests:effects\], we have verified in a number of tests that, within the context of our default FIRE sub-grid model, the details of how we treat the “unresolved cooling phase” when the simulation does not resolve the local cooling radius are secondary, so long as the correct momentum is coupled to the gas. Fig. \[fig:implicit.cooling\] shows this explicitly for both [**m10q**]{} and [**m12i**]{} simulations. In this figure we compare a model where we take our standard sub-grid coupling (the momentum, mass, and metals are unchanged) but always couple the “full” total energy – we do not assume (as in our default model) that the residual thermal energy has been radiated away when the cooling radius is unresolved. As expected, this produces nearly identical results to our default model – in this limit, [*by definition*]{}, the cooling time is shorter than the dynamical time at the radius where the energy is deposited. So the code simply radiates away the energy in the next few timesteps, without doing significant work. This is a non-trivial statement, however, in that it clearly shows that in this regime, the [*momentum*]{} coupled, not the thermal energy, is the important physical ingredient.
[^1]: \[foot:movie\]See the [FIRE]{} project website:\
[[<http://fire.northwestern.edu>](http://fire.northwestern.edu)]{}\
For additional movies and images of FIRE simulations, see:\
[[<http://www.tapir.caltech.edu/~phopkins/Site/animations>](http://www.tapir.caltech.edu/~phopkins/Site/animations)]{}
[^2]: Throughout the text, we use the term “statistical isotropy” to refer to a specific, desirable property of the numerical feedback-coupling algorithm. Namely, that the algorithm does not un-physically systematically bias the ejecta into certain directions (or otherwise “imprint” preferred directions) for numerical reasons. Of course, ejecta may be intrinsically anisotropic in the SN frame, and there can be global anisotropies sourced by e.g. pressure gradients and galaxy morphology, but these can only be captured properly if the ejecta-coupling [*algorithm*]{} is statistically isotropic.
[^3]: To be specific (this will be discussed below): the FIRE-1 algorithm used the “non-conservative method” defined in § \[sec:feedback:mechanical:vector\], with a less-accurate SPH approximation of the solid angle subtended by neighbors ($\omega_{b}$ defined in § \[sec:feedback:mechanical:weighting\] set $\propto m_{b}/\rho_{b}$), and only coupled to the nearest neighbors for each SN instead of using the bi-directional search defined in § \[sec:feedback:mechanical:neighbor.finding\] and needed to ensure statistical isotropy.
[^4]: A public version of [GIZMO]{} is available at [[<http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html>](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html)]{}
[^5]: In this paper we will use a cubic spline for $W$, but other choices have weak effects on our conclusions (because $W$ will be re-normalized anyways in the assignment of “weights” for feedback). We adopt $N_{\ast}=64$ for reasons discussed below. The equation for $N_{\ast}(H_{a})$ is non-linear, so it is solved iteratively in the neighbor search; see @springel:entropy.
[^6]: Eq. \[eqn:solidangle\] is exact for a face ${\bf A}_{b}$ which is rotationally symmetric about the axis $\hat{\bf x}_{ba}$; for asymmetric ${\bf A}_{b}$, evaluating $\Delta\Omega_{b}$ exactly requires an expensive numerical quadrature. If this is done exactly, Eq. \[eqn:weight.renorm.scalar\] is unnecessary: $\sum_{b}\omega_{b}=1$ is guaranteed. We have experimented with an exact numerical quadrature; but it is extremely expensive and has no measurable effect on our results compared to simply using Eq. \[eqn:weight.renorm.scalar\] & \[eqn:solidangle\] for all ${\bf A}_{b}$ (Eq. \[eqn:solidangle\] is usually accurate to $<1\%$, and the most severe discrepancies do not exceed $\sim 10\%$, and these are normalized out by Eq. \[eqn:weight.renorm.scalar\]).
[^7]: In MFM/MFV, the effective face ${\bf A}_{b}$ is given by: $$\begin{aligned}
\label{eqn:mfm.face.area.def} {\bf A}_{b} &\equiv \bar{n}_{a}^{-1}\,\bar{\bf q}_{b}({\bf x}_{a}) + \bar{n}_{b}^{-1}\,\bar{\bf q}_{a}({\bf x}_{b})\\
\label{eqn:mfm.face.area.def.sub1} \bar{\bf q}_{b}({\bf x}_{a}) &\equiv {\bf E}_{a}^{-1} \cdot {\bf x}_{ba}\, W({\bf x}_{ba},\,H_{a}) \\
\label{eqn:mfm.face.area.def.sub2} {\bf E}_{a} &\equiv \sum_{c}\,({\bf x}_{ca} \otimes {\bf x}_{ca}) \,W({\bf x}_{ca},\,H_{a}) \end{aligned}$$ where “$\cdot$” and “$\otimes$” denote the inner and outer product, respectively.
[^8]: The function $(f_{\pm})$ in Eq. \[eqn:vectornorm\] is derived by requiring ${\bf 0} = \sum \Delta {\bf p}_{b}$. Component-wise, this becomes $0 = \sum (\Delta {\bf p}_{b})^{\alpha} = p_{\rm ej}/(\sum_{c}\,|{\bf w}_{c}|)\,\left[\left( f_{+}^{\alpha}\,\sum_{b} \omega_{b}\,(\hat{\bf x}_{ba}^{+})^{\alpha} +
f_{-}^{\alpha}\,\sum_{b} \omega_{b}\,(\hat{\bf x}_{ba}^{-})^{\alpha} \right) \right]$. Since $p_{\rm ej}$ and $\sum_{c}\,|{\bf w}_{c}|$ are positive-definite, the term in brackets must vanish ($f_{+}^{\alpha}\,\boldsymbol{\psi}_{+}^{\alpha}=f_{-}^{\alpha}\,\boldsymbol{\psi}_{-}^{\alpha}$, if we define $\boldsymbol{\psi}_{\pm}^{\alpha} \equiv \sum_{b}\,\omega_{b}\,|\hat{\bf x}_{ba}^{\pm}|^{\alpha}$). But we also wish to minimize the effect of the correction factor $f_{\pm}$ on the total momentum coupled (ensuring $f_{\pm}\approx 1$), so we minimize the least-squares penalty function $\Delta^{2}_{\boldsymbol{\psi}} =\| [(f_{+}^{\alpha}\boldsymbol{\psi}_{+}^{\alpha})^{2} + (f_{-}^{\alpha}\boldsymbol{\psi}_{-}^{\alpha})^{2} ] - [ (\boldsymbol{\psi}_{+}^{\alpha})^{2} + (\boldsymbol{\psi}_{-}^{\alpha})^{2} ] \|$. The $f_{\pm}$ in Eq. \[eqn:vectornorm\] is the unique function which simultaneously guarantees ${\bf 0} = \sum \Delta {\bf p}_{b}$ (i.e. $f_{+}^{\alpha}\,\boldsymbol{\psi}_{+}^{\alpha}=f_{-}^{\alpha}\,\boldsymbol{\psi}_{-}^{\alpha}$) and $\Delta^{2}_{\boldsymbol{\psi}}=0$. It is easy to see that $f_{\pm}\rightarrow 1$, as it should, if $\boldsymbol{\psi}_{+}=\boldsymbol{\psi}_{-}$, i.e. when $\sum \Delta {\bf p}_{b} = {\bf 0}$ without the need for an additional correction.
[^9]: The de-boosted energy equation, Eq. \[eqn:flux.e.framecorr\], assumes that the gas surrounding the star has initial gas-star relative velocities small compared to the ejecta velocity. A more general expression is presented in Appendix \[sec:energy.cons.w.motion\].
[^10]: We adopt the specific expression from @cioffi:1988.sne.remnant.evolution, as opposed to that from more recent work, for consistency with the previous FIRE-1 simulations.
[^11]: @kimm.cen:escape.fraction introduce a smooth interpolation function rather than a simple threshold in Eq. \[eqn:dp.subgrid.sub1\]; we have experimented with variations of this and find no detectable effects.
[^12]: Note that we do not need to make any distinction between the free-expansion radius, post-shock (reverse shock) radius, etc, in our formalism, because the fully-conservative coupling – which [*exactly*]{} solves the elastic two-body gas collision between ejecta and gas resolution element – automatically assigns the correct values in either limit. For example, if $m_{b} \ll m_{\rm ej}$, our coupling will automatically determine that element $b$ should simply be “swept up” with velocity ${\bf v}_{b} \approx {\bf v}_{\rm ej}$ (free-expansion); if $m_{b} \gg m_{\rm ej}$, the gas is automatically assigned the appropriate post-shock temperature.
[^13]: Mock images in Fig. \[fig:images.resolution.nonsymmetric\] are computed as $ugr$ composites, ray-tracing from each star after using its age and metallicity to determine the intrinsic spectrum from @starburst99 and accounting for line-of-sight dust extinction with a MW-like extinction curve and dust-to-metals ratio following @hopkins:lifetimes.letter.
[^14]: To be clear, in Fig. \[fig:sf.z0.sne.subgrid\] we alter [*only*]{} the terminal momentum, so e.g. if the cooling radius of super-bubbles is resolved the change has no effect whatsoever, and other feedback mechanisms (e.g. radiative feedback) are also un-altered. In contrast, in @orr:ks.law (Appendix A) we show the results of multiplying/dividing [*all*]{} feedback mechanisms and strengths (total energy and momentum) by a uniform factor $=3$. Not surprisingly this produces a stronger effect closer to the expected inverse-linear dependence; however non-linear effects still reduce the dependence to somewhat sub-linear.
[^15]: In our delayed-cooling experiments, we have considered both turning off [*all*]{} cooling for a particle, and tracking a separate reservoir of SNe-injected energy, which is not allowed to cool (while other energy can cool). Both give similar results for our comparison here. We also “reset” the delay time $\Delta t_{\rm delay}$ whenever a new SNe injects energy into a gas element.
[^16]: The @dalla.vecchia:target.temperature.sne.delayed.cooling.feedback “target-temperature” implementation released the SNe energy stochastically rather than deterministically after a fixed time – we have implemented this as well and the results are identical to the “target-temperature (store SNe)” implementation shown.
[^17]: If we “store up” SNe each with $\sim 10^{51}\,{\rm erg}$ until we can heat a discrete mass $\Delta m$ to a temperature $\sim 10^{7.5}\,$K, then if each SN deposits $\sim 2\,{M_{\sun}}$ worth of metals, the mass $\Delta m$ will be immediately enriched to metallicity $Z\approx 2\,Z_{\sun}$!
---
abstract: |
A Belyĭ pair is a holomorphic map from a Riemann surface to $S^2$ with additional properties. A dessin d’enfants is a bipartite graph with additional structure. It is well known that there is a bijection between Belyĭ pairs and dessins d’enfants.
Vassiliev has defined a filtration on formal sums of isotopy classes of knots. Motivated by this, we define a filtration on formal sums of Belyĭ pairs, and another on dessins d’enfants. We ask whether the two definitions give the same filtration.
author:
- |
Jonathan Fine\
Milton Keynes\
England\
`jfine@pytex.org`
date: 28 September 2009
title: A filtration question on Belyĭ pairs and dessins
---
Introduction
============
First, we recall some definitions [@Belyui; @Dessins]. A *Belyĭ pair* is a Riemann surface $C$ together with a holomorphic map $f:C \to S^2 = \C \cup \{\infty\}$ to the Riemann sphere, such that $f'(p)$ is non-zero provided $f(p)$ is not $0$, $1$ or $\infty$. (Belyĭ proved that given $C$ such an $f$ can be found iff $C$ can be defined as an algebraic curve over the algebraic numbers.)
A *dessin d’enfants*, or *dessin* for short, is a graph $G$ together with a cyclic order of the edges at each vertex, and also a partition of the vertices $V$ into two sets $V_0$ and $V_1$ such that every edge joins $V_0$ to $V_1$. Necessarily, $G$ must be a bipartite graph. Traditionally, the vertices in $V_0$ and $V_1$ are coloured black and white respectively.
It is easy to see that a Belyĭ pair gives rise to a dessin, where $V_0=f^{-1}(0)$, $V_1 = f^{-1}(1)$, and the edges are the components of the inverse image $f^{-1}([0,1])$ of the unit interval in $\C$. The cyclic orders arise from local monodromy around the vertices.
A much harder result, upon which our definitions rely, is that up to isomorphism every dessin arises from exactly one Belyĭ pair, or in other words that there is a bijection between isomorphism classes of Belyĭ pairs and dessins.
Definitions
===========
A *Belyĭ object* $B$ consists of $((B_C, B_f), B_D)$ where $(B_C, B_f)$ is a Belyĭ pair and $B_D$ is the associated dessin (or vice versa for the dessin and the pair).
The *Vassiliev space* $V=V_\C$ (for Belyĭ objects) is the vector space over $\C$ which has as basis the isomorphism classes of Belyĭ objects.
Clearly, when an edge is removed from a dessin, the result is still a dessin. Suppose $D$ is a dessin, and $T$ is a subset of its edges. We will use $D \setminus T$ to denote the dessin obtained by removing the edges in $T$. This same operation can also be applied to a Belyĭ object $B$, even though computing the associated curve $(B\setminus T)_C$ from $B_D$ and $T$ might be hard.
We will now define one or two filtrations of $V$.
Let $D$ be a dessin and $S$ a $d$-element subset of its edges. Each subset $T$ of $S$ determines a dessin $D\setminus T$ and hence an object $B_{D \setminus T}$. Let $|T|$ denote the number of edges in $T$. Use $$B_S = \sum\nolimits _{T\subseteq S} (-1)^{|T|}B_{D \setminus T}$$ to define a vector $B_S$ in $V$, which we call *the expansion of a dessin with $d$ optional edges*.
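As an elementary illustration of the combinatorics of this expansion, the sketch below represents a dessin simply by its set of edges (the cyclic orders at the vertices are carried along unchanged and are omitted here) and accumulates the signed coefficient of each resulting dessin; it only illustrates the inclusion-exclusion structure, and makes no attempt to compute the associated Belyĭ pairs.

```python
from itertools import chain, combinations
from collections import Counter

def subsets(edges):
    """All subsets of an edge set, as tuples."""
    e = tuple(edges)
    return chain.from_iterable(combinations(e, k) for k in range(len(e) + 1))

def expansion(D_edges, S_optional):
    """Signed expansion: B_S = sum over subsets T of S of (-1)**|T| times B_{D minus T}.

    Dessins are represented here only by their edge sets; returns a Counter
    mapping each resulting edge set to its coefficient in B_S.
    """
    D = frozenset(D_edges)
    coeffs = Counter()
    for T in subsets(S_optional):
        coeffs[D - frozenset(T)] += (-1) ** len(T)
    return coeffs

# Example: the path dessin with black vertices a, c, white vertex b, and edges
# ab, bc, with both edges optional, expands into four signed terms.
print(expansion({("a", "b"), ("b", "c")}, {("a", "b"), ("b", "c")}))
```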
Let $V_{D,d}$ be the span of the expansions of all dessins with $d$ optional edges. The sequence $$V =
V_{D, 0} \supseteq
V_{D, 1} \supseteq
V_{D, 2} \supseteq
V_{D, 3} \ldots$$ is the *dessin filtration* of $V$.
We can also think of a Belyĭ object as a map $f:C\to S^2$ (with special properties). Let $(C_1, f_1)$ and $(C_2, f_2)$ be Belyĭ pairs. Then there is of course a map $$g: C_1 \times C_2 \to S^2 \times S^2 \>.$$
Let $\Delta \subset S^2 \times S^2$ denote the diagonal, and let $C$ denote $g^{-1}(\Delta)$, and $f$ the restriction of $g$ to $C$. In general $$f: C \to \Delta \cong S^2$$ will not be a Belyĭ pair. There are two possible problems. The first is that $C\subset C_1\times C_2$ might have self intersections or be otherwise singular. If this happens, we replace $C$ by its resolution, which is unique.
The second problem is more interesting. It might be that $f$ has critical points not lying above the special points $0$, $1$ and $\infty$. This problem cannot be avoided. However, the above discussion does show that there is a product, which we will denote by ‘$\circ$’, on holomorphic branched covers of $S^2$.
Let $W$ be the vector space with basis isomorphism classes of branched covers of $S^2$. We set $W_n$ to be the span of all products of the form $$(A_1 - B_1) \circ
(A_2 - B_2) \circ
\ldots \circ
(A_n - B_n)$$ for $A_i$ and $B_i$ basis vectors of $W$. Clearly, the $W_n$ provide a filtration of $W$.
The induced filtration of $V$ defined by $V_{B,n} = W_n \cap V$ is called the *Belyĭ filtration* of $V$.
Questions
=========
Are the two filtrations $V_D$ and $V_B$ equal?
If so, then we have also answered the next two questions.
The absolute Galois group acts on Belyĭ pairs, and preserves the filtration $V_B$. Does this action also preserve the dessin filtration?
Because the expansions of dessins with $d$ edges, all of which are optional, span $V_{D,d}/V_{D,d+1}$, the dessin filtration has finite-dimensional quotients. Does the filtration $V_B$ have finite-dimensional quotients?
Investigating the last two questions might help us answer the first. They might also be of interest in their own right.
[9]{}
D. Bar-Natan, On the Vassiliev knot invariants, Topology 34 (1995), 423–472
G. V. Belyĭ, Another proof of the three points theorem, Sbornik: Mathematics 193 (2002), 329–32.
Leila Schneps, ed, The Grothendieck Theory of Dessins d’Enfants, London Math. Soc. Lecture Note Ser., vol 200, Cambridge Univ. Press 1994.
---
abstract: |
Collisionless shocks are loosely defined as shocks where the transition between pre- and post-shock states happens on a length scale much shorter than the collisional mean free path. In the absence of collisions to enforce thermal equilibrium post-shock, electrons and ions need not have the same temperatures. While the acceleration of electrons for injection into shock acceleration processes to produce cosmic rays has received considerable attention, the related problem of the shock heating of quasi-thermal electrons has been relatively neglected.
In this paper we review the state of our knowledge of electron heating in astrophysical shocks, mainly associated with supernova remnants (SNRs), shocks in the solar wind associated with the terrestrial and Saturnian bow shocks, and galaxy cluster shocks. The solar wind and SNR samples indicate that the ratio of electron temperature ($T_e$) to ion temperature ($T_p$) declines with increasing shock speed or Alfvén Mach number. We discuss the extent to which such behavior can be understood via cosmic ray-generated waves in a shock precursor, which subsequently damp by heating electrons. Finally, we speculate that a similar mechanism may be at work for both solar wind and SNR shocks.
author:
- Parviz Ghavamian
- 'Steven J. Schwartz'
- Jeremy Mitchell
- Adam Masters
- 'J. Martin Laming'
date: 'Received: date / Accepted: date'
title: 'Electron-Ion Temperature Equilibration in Collisionless Shocks: the Supernova Remnant-Solar Wind Connection '
---
Introduction {#intro}
============
Shock waves have been observed in a wide range of environments outside the Earth, from the solar wind to the hot gas in galaxy clusters. However, the mechanism whereby the gas in these environments is shocked has been poorly understood. While shock transitions in the Earth’s atmosphere are mediated by molecular viscosity (and hence direct particle collisions), those in interstellar space and the solar wind are too dilute to form in this way. In non-relativistic shocks, the role of collisions is effectively played by collective interactions of the plasma with the magnetic field. This results in a multi-scale shock transition having sub-structure at ion kinetic length scales (Larmor radius or inertial length) and potentially electron kinetic scales (inertial lengths or whistler mode) (e.g., Schwartz et al. (this volume), Treumann 2009). Such plasmas are termed collisionless. The magnetic fields threading through the charged particle plasmas in space endow the plasmas with elastic properties, much like a fluid. The kinetic energy of the inflowing gas is dissipated within this fluid via collective interactions between the particles and magnetic field, transferring energy from the magnetic field to the particles. The collective processes are the result of the DC electromagnetic fields present in the shock transition layer, kinematic phase mixing, and also plasma instabilities; the last give rise to a rich range of plasma waves and turbulent interactions.
It has long been known that these processes may heat the electrons beyond the mass-proportional value predicted by the Rankine-Hugoniot jump conditions; ample evidence is found in spacecraft studies of solar wind shocks (Schwartz et al. 1988), multi-wavelength spectroscopy of supernova remnants (Ghavamian et al. 2001, 2002, 2003, 2007; Laming et al. 1996; Rakowski et al. 2008) and galaxy cluster gas (Markevitch et al. 2005; Markevitch & Vikhlinin 2007; Russell et al. 2012). However, understanding how this process depends on such shock parameters as shock speed, preshock magnetic field orientation and plasma beta has been slow.
In collisionless plasmas, the downstream state of the plasma cannot be uniquely determined from the upstream parameters because the Rankine-Hugoniot jump conditions only predict the [*total*]{} pressure downstream, not the individual contributions from the electrons and ions: $P\,=\,n_i\,k\,T_i\,+\,n_e\,k\,T_e$. At the limit of a strong shock, $n_e$ and $n_i$ are each 4 times their preshock values, so the relative values of [$T_e$]{}and [$T_i$]{}immediately behind the shock are wholly dependent upon the nature of the collisionless heating processes occurring at the shock transition. Although an MHD description can be used to describe the behavior of the gas far upstream and far downstream of the shock, a more detailed kinetic approach is required for understanding how the dissipation at the shock front transfers energy from plasma waves and turbulence to the electrons and ions.
Non-relativistic collisionless shocks can be broadly sorted into three categories: slow, intermediate and fast. The three types are defined according to the angle between the shock velocity and upstream magnetic field, as well as the relative value of the shock speed compared to the upstream sound speed ($c_s\,\equiv\,\sqrt{\gamma\,P/\rho}$) and Alfvén speed ($v_A\,\equiv\,B/\sqrt{4\pi\,\rho_i}$). Most astrophysical shocks are quasi-perpendicular (i.e., they propagate at a nearly right angle to the preshock magnetic field), allowing only for fast-mode propagation. In that case, the relevant quantity is the magnetosonic Mach number, $M_{ms}$ ($\equiv\,v_{sh} / \sqrt{v_A^2\,+\,c_s^2}$). Collisionless shocks have also been classified according to whether the flow speed exceeds the sound speed in the downstream plasma (Kennel 1985). Above the critical Mach number, at which the downstream flow speed falls to the downstream sound speed, the dissipation of flow energy into thermal energy can no longer be maintained by electrical resistivity, and plasma wave turbulence (caused by instabilities generated when the electron and ion distribution functions are distorted at the shock transition) is required (Kennel 1985). Shocks above the critical Mach number are termed supercritical, while those below are termed subcritical. Note that even for subcritical shocks, observations suggest that kinetic processes other than resistivity and turbulence contribute to the shock dissipation (Greenstadt and Mellott 1987). There are also other, higher critical Mach numbers related to the formation of subshocks (Kennel et al. 1985) and non-steady cyclic shock reformation beyond the whistler critical Mach number (Krasnoselskikh et al. 2002). The critical Mach number for quasi-perpendicular shocks is estimated to be rather low, $\sim$2.8 (Edmiston and Kennel 1984). The Mach numbers of most SNR shocks are expected to be well in excess of this value, meaning that they are both fast-mode and supercritical.
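For orientation, these definitions are easy to evaluate numerically. The minimal sketch below uses illustrative warm-ISM values that are our assumption rather than numbers taken from the text ($B=3\,\mu$G, $n=1$ cm$^{-3}$, $T=10^4$ K, fully ionized hydrogen), and shows that a 1000 [kms$^{-1}$]{} shock lies far above the $\sim$2.8 critical Mach number quoted above.

```python
import math

# Alfvenic and fast magnetosonic Mach numbers from the definitions above (cgs).
# Assumed, illustrative preshock conditions (not from the text):
# B = 3 uG, n = 1 cm^-3, T = 1e4 K, fully ionized hydrogen (P = 2 n k T).
k_B, m_p, gamma = 1.381e-16, 1.673e-24, 5.0 / 3.0
B, n, T = 3.0e-6, 1.0, 1.0e4

rho = n * m_p
P   = 2.0 * n * k_B * T                      # electron + proton pressure

v_A = B / math.sqrt(4.0 * math.pi * rho)     # Alfven speed
c_s = math.sqrt(gamma * P / rho)             # sound speed

V_sh = 1000.0e5                              # 1000 km/s shock
print("v_A  = %.1f km/s" % (v_A / 1e5))      # ~6.5 km/s
print("c_s  = %.1f km/s" % (c_s / 1e5))      # ~17 km/s
print("M_A  = %.0f" % (V_sh / v_A))
print("M_ms = %.0f" % (V_sh / math.sqrt(v_A**2 + c_s**2)))
```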
As one approaches the high Mach numbers expected for astrophysical shocks ($M_A\,\sim\,$20-100 for an ambient magnetic field of 3 $\mu G$), a greater and greater fraction of the incoming ions become reflected back upstream from the shock front, corresponding to an increasingly turbulent and disordered shock transition. Physical parameters such as $B$, $T$ and $n$ no longer jump in an ordered manner (i.e., the transitions are no longer laminar). In addition to the reflected particles, the hot ions from downstream become hot enough to escape upstream, further enhancing the population of ions in front of the shock. These ions, which form a precursor, are now believed to play an essential role in the dissipation of high Mach number collisionless shocks. Aside from providing the seed population for the acceleration of cosmic rays, the precursor ions are likely to generate a variety of plasma waves capable of selectively heating the electrons over the ions, thereby providing an important mechanism for raising [$T_e/T_p$]{} above the mass-proportional value of $m_e/m_p$.
Formalism: Equilibration Timescales
===================================
Taken at face value, the Rankine-Hugoniot jump conditions predict that electrons and ions will be heated in proportion to their masses:
$$k \,T_{e,i}\,=\,\frac{3}{16}{m_{e,i}}\,V^2_{sh}
\label{massprop}$$
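As a quick numerical illustration of Equation \[massprop\] (a sketch we add here; the 1000 [kms$^{-1}$]{} shock speed is an assumed, representative value):

```python
# Mass-proportional post-shock temperatures, kT = (3/16) m V_sh^2, in cgs units.
k_B = 1.381e-16    # erg/K
m_p = 1.673e-24    # g
m_e = 9.109e-28    # g

V_sh = 1000.0e5    # assumed shock speed: 1000 km/s in cm/s

T_p = 3.0 / 16.0 * m_p * V_sh**2 / k_B
T_e = 3.0 / 16.0 * m_e * V_sh**2 / k_B
print("T_p = %.2e K, T_e = %.2e K, T_e/T_p = %.2e" % (T_p, T_e, T_e / T_p))
# roughly T_p ~ 2e7 K and T_e ~ 1e4 K, i.e. T_e/T_p = m_e/m_p ~ 1/1836
```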
In collisionless shocks, there are three relevant timescales to consider: the time required for Coulomb collisions to isotropize a distribution of electrons, $t_{ee}$, the time required for Coulomb collisions to isotropize a distribution of ions, $t_{ii}$, and finally the time required for the electrons and ions to equilibrate to a common temperature, $t_{ei}$ (Spitzer 1962). After a time scale $t_{ee}$ or $t_{ii}$, the particles in question attain a Maxwellian velocity distribution. The self-collision time, $t_{c,ee}$, for electrons of density $n_e$ and temperature $T_e$ is (Spitzer 1962):
$$t_{c,ee}\,=\,\frac{0.266\,T_e^{3/2}}{n_e\,\ln\Lambda_e}\,\,{\rm s}\,\approx\,\frac{0.0116\,V_{s}(1000)^3}{n_e\,\ln\Lambda_e}\,\,\,{\rm yr}
\label{relax}$$
where $\ln\Lambda$ is the Coulomb logarithm, Equation \[massprop\] has been used to write the relaxation time in terms of the shock speed, and $V_s(1000)$ is the shock speed in units of 1000 [kms$^{-1}$]{}. For a young SNR having $V_s\,\sim\,$1000 [kms$^{-1}$]{}, postshock density n$\,\sim\,$1 cm$^{-3}$ and $\ln\Lambda\,\sim$30, the time required to establish a Maxwellian distribution at the electron temperature given by Equation \[massprop\] is $t_{c,ee}\,\sim\,$0.0004 yr. The electrons are isotropized by self-collisions first, then the protons, and finally, over a longer timescale, the electrons and protons equilibrate to a common temperature. For this reason, Coulomb collisions alone are unable to establish equilibrated electron and proton distributions at the shock front. This equilibration is described by the relation
$$\frac{d T_e}{dt}\,=\,\frac{T_p\,-\,T_e}{t_{c,ep}}
\label{tetp}$$
where $$t_{c,ep}\,=\sqrt{m_p\over m_e}t_{c,pp}\,={m_p\over m_e}t_{c,ee}
\label{tep}$$
The temperatures $T_e$ and $T_i$ equilibrate to a common density-weighted average temperature $T_{av}$, given by $k\,T_{av}\,=\,\frac{3}{16}\mu m_p V_s^2$, where $\mu$ is the mean mass per particle ($=\,(1.4/2.3)\,=\,0.6$ for cosmic abundances). For mass-proportional heating, the timescale given by Equation \[tep\] is $\sim $2000 yrs for [$V_s$]{} $ \geq $1000 [kms$^{-1}$]{}, of similar order but longer than the proton-proton isotropization timescale, $t_{c,pp}$. These are longer than the age of the SNR, substantially so at higher [$V_s$]{}, indicating that for minimal heating (i.e., [$T_e/T_p$]{}$=\,\frac{m_e}{m_p}\,\sim\,$1/1836) the electrons and ions will not equilibrate to $T_{av}$ during the lifetime of the SNR (Itoh 1978; Draine & McKee 1993).
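To make these timescales concrete, the following minimal sketch (ours, assuming pure hydrogen so that $n_e=n_p$ and the sum $T_e+T_p$ is conserved as the two species relax toward $T_{av}$) integrates Equation \[tetp\] with the Spitzer timescales of Equations \[relax\] and \[tep\], starting from mass-proportional postshock temperatures:

```python
# Coulomb equilibration of T_e and T_p behind a collisionless shock.
# A sketch assuming pure hydrogen (n_e = n_p), so T_e + T_p stays constant.
M_P_OVER_M_E = 1836.15
K_B, M_P = 1.381e-16, 1.673e-24      # cgs
YEAR = 3.156e7                       # s

def t_c_ee(T_e, n_e, lnLambda=30.0):
    """Spitzer electron self-collision time in seconds."""
    return 0.266 * T_e**1.5 / (n_e * lnLambda)

def Te_over_Tp_after(V_s_kms, age_yr, n_e=1.0, n_steps=200000):
    """Euler-integrate dT_e/dt = (T_p - T_e)/t_ep from mass-proportional
    initial temperatures, kT = (3/16) m V_sh^2."""
    V = V_s_kms * 1.0e5
    T_p = 3.0 / 16.0 * M_P * V**2 / K_B
    T_e = T_p / M_P_OVER_M_E
    dt = age_yr * YEAR / n_steps
    for _ in range(n_steps):
        t_ep = M_P_OVER_M_E * t_c_ee(T_e, n_e)
        dT = (T_p - T_e) / t_ep * dt
        T_e += dT
        T_p -= dT                     # n_e = n_p: energy moves from p to e
    return T_e / T_p

print(Te_over_Tp_after(1000.0, 1000.0))   # partial equilibration after 1 kyr
print(Te_over_Tp_after(2000.0, 1000.0))   # much less equilibrated at higher V_s
```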
The arguments above indicate that Coulomb collisions are ineffective at both isotropizing the heavy ion particle distributions and equilibrating the electron and ion temperatures at the transitions of collisionless shocks. The emission spectra of non-radiative SNRs (dominated mostly by X-ray and ultraviolet emission) should therefore be sensitive probes of the collisionless heating processes at the shock transition. We consider the observational constraints on these processes below.
Observational Constraints From SNRs {#sec:1}
===================================
The most useful shocks for studying collisionless equilibration processes are those exhibiting detectable emission from the immediate postshock gas. To be diagnostically useful, the emission should arise from the region close to the shock front, where temperature disequilibrium between electrons, protons and heavy ions is substantial enough to affect both the relative fluxes and relative velocity widths of emission lines. The shape and extent of the spatial profile for the different emission species behind the shock is also a useful diagnostic, especially for the UV resonance lines of He II $\lambda$1640, C IV $\lambda\lambda$1548, 1550, N V $\lambda\lambda$1238,1243, and O VI $\lambda$1032, 1038. For a given shock speed, the distance behind the shock where the emission peaks depends strongly on the initial electron temperature, i.e., the electron temperature immediately behind the shock.
The ubiquity of non-radiative SNRs, as well as their relatively simple geometry and very high shock speeds, makes these objects the most important laboratories for investigating the efficiency and nature of electron-ion and ion-ion equilibration. Other non-radiative shocks available for study are those occurring in stellar wind bubbles (for example, Wolf-Rayet bubbles) (Gosset et al. 2011) and in galaxy cluster shocks (e.g., the ’Bullet Cluster’ (Markevitch et al. 2005) and Abell 2146 (Russell et al. 2012)). However, these shocks are far less frequently observed than those in SNRs, leaving the latter as the most important objects providing both a broad range of shock speeds and corresponding diagnostic information (such as proper motions and spatially resolved structure). Shocks in star forming regions, such as HH objects and their associated bow shocks, tend to be clumpy and complicated, and are usually located in dense environments ($n\,\sim\,$100-1000 cm$^{-3}$) with low enough shock speeds ([$V_s$]{}$\sim\,$100-200 [kms$^{-1}$]{}) to be radiative. In those cases, the optical and UV emission (where the most valuable shock diagnostic line emission arises) is dominated by emission from the cooling and recombination zones far downstream from the shock. In those regions the cooling of the gas below 100,000 K and the accompanying compression of the gas result in a collisional plasma with $T_e\,=\,T_p\,=\,T_i$. This erases any ’memory’ of the initial electron-ion equilibration. Furthermore, a significant fraction of the Lyman continuum produced in the recombination zone is expected to pass upstream and ionize the preshock gas. In strong shocks ([$V_s$]{}$>$80 [kms$^{-1}$]{}), this results in complete ionization of hydrogen (Shull & McKee 1979; Cox & Raymond 1985; Dopita et al. 1993), thus precluding the use of collisionally excited Balmer line emission from neutral H (described in the next section) as a temperature equilibration diagnostic.
Optical Spectroscopy of Balmer-Dominated Shocks: The [$T_e/T_p$]{}$\propto$V$_{sh}^{-2}$ Relation {#sec:2}
-------------------------------------------------------------------------------------------------
In the late 1970s it was discovered that very fast ($\sim$2000 [kms$^{-1}$]{}) shocks in young SNRs could generate detectable optical emission very close to the shock transition (Chevalier & Raymond 1978; Chevalier, Kirshner & Raymond 1980), providing a valuable diagnostic of physical conditions at the shock before Coulomb collisions or cooling could alter them. This emission is produced by collisional excitation of H I as it flows into the shock front. The cold neutral component does not interact directly with the plasma waves and turbulence at the shock, while the ionized component is strongly heated and compressed by a factor of four (when the shock is strong). Some of the cold H is destroyed by collisional ionization; however, the rest of the cold H undergoes charge exchange with hot ions behind the shock, generating a separate population of hot H. Approximately 1 in every 5 collisions results in collisional excitation to the n=3 level of H, producing H$\alpha$ and Ly $\beta$ emission. The H$\alpha$ line from the cold neutrals is narrow and reflects the preshock temperature ($\leq$30,000 K), while that from the hot neutrals is broad (typically $\geq$500 [kms$^{-1}$]{}), and reflects the postshock temperature (and to a large extent, the velocity distribution) of the protons (Chevalier & Raymond 1978; Chevalier, Kirshner & Raymond 1980; Smith et al. 1991; Ghavamian et al. 2001) (Figure \[fig:balmerprofile\]). In Balmer-dominated shocks, the broad to narrow H$\alpha$ flux ratio is proportional to the ratio of the charge exchange rate to the ionization rate, with the latter being highly sensitive to the electron and proton temperatures. This makes the broad to narrow ratio, [$I_b/I_n$]{}, very sensitive to [$T_e/T_p$]{}. There is only weak dependence of [$I_b/I_n$]{} on the preshock H I fraction and preshock temperature, mainly due to differences in the amount of Ly $\beta$ converted into H$\alpha$ in the narrow component (Ghavamian et al. 2001; van Adelsberg et al. 2008).
![image](balmerprofile.eps)
The first systematic attempt to use Balmer-dominated SNRs to infer [$T_e/T_p$]{} for collisionless shocks was made by Ghavamian et al. (2001). Using the broad H$\alpha$ line widths measured from a sample of Balmer-dominated shocks, they constrained the range of plausible shock speeds between the limits of minimal equilibration and full equilibration. They then predicted the broad-to-narrow ratios for a grid of shock models over this range of [$V_s$]{} and [$T_e/T_p$]{}, allowing [$T_e/T_p$]{} to be constrained. The range of broad component H$\alpha$ widths observed in SNRs ranges from $\sim$250 [kms$^{-1}$]{} for the slowest Balmer-dominated shocks (Cygnus Loop), to $\sim$ 500 [kms$^{-1}$]{} for intermediate-velocity shocks (RCW 86) and finally $\sim$2600 [kms$^{-1}$]{} for the fastest shocks (SNR 0509$-$67.5). This corresponds to a well-sampled range of shock speeds: nearly a factor of 10. The primary uncertainty in measurement of the broad component width at low shock speeds ($\lesssim$200 [kms$^{-1}$]{}, as seen in the Northeastern Cygnus Loop; Hester et al. 1994) is disentangling the broad and narrow components when they are of comparable width. At high shock speeds ($\gtrsim$2000 [kms$^{-1}$]{}) the main difficulty is the baseline uncertainty of the surrounding continuum: if the peak of the broad line is low and the width very large, errors in ascertaining where the broad line merges into the background can lead to underestimates of the broad component width. The range of [$I_b/I_n$]{} for this sample of Balmer-dominated shocks lies between 0.4 and 1.2 (Kirshner, Winkler & Chevalier 1987; Smith et al. 1991; Ghavamian et al. 2001; 2003, Rakowski, Ghavamian & Laming 2009). However, it does not vary with broad component width in a monotonic fashion.
Recently there has been substantial improvement in the modeling of Balmer-dominated shocks. Earlier calculations of the broad H$\alpha$ line profile assumed that it formed from a single charge exchange (Chevalier, Kirshner & Raymond 1980; Smith et al. 1991; Ghavamian et al. 2001), and it treated the hot and cold neutrals as two separate, distinct populations, with a given neutral belonging to either one or the other. However, in reality an interaction ’tree’ is required to track the number of photons emitted by each neutral over multiple excitations and charge exchanges. These effects were first incorporated in the Balmer-dominated shock models of Heng & McCray (2007), who also found that charge exchange results in a third population of neutrals having velocity widths intermediate between the hot and cold neutrals. Further improvements in modeling of Balmer-dominated shocks were included by van Adelsberg et al. (2008), who included the momentum transferred by charge exchange between the hot neutrals and protons. This allowed the bulk velocity of the postshock neutrals to be calculated separately from those of the protons. Inclusion of the momentum transfer showed that for shock speeds $\lesssim$1000 [kms$^{-1}$]{}, charge exchange effectively couples the fast neutral and thermal proton distributions, while for high shock speeds ($\gtrsim$5000 [kms$^{-1}$]{}), it does so far less effectively. This results in a fast neutral distribution that is skewed relative to the protons in velocity space and an average velocity that is much higher for the fast neutrals than the protons (van Adelsberg et al. 2008). Together, the inclusion of all these effects has enhanced the ability of the models to match the observed broad-to-narrow ratios (and hence predict [$T_e/T_p$]{}). In particular, the newer models can now match the low broad-to-narrow ratio ([$I_b/I_n$]{}$\approx$0.67) observed in Knot g of Tycho’s SNR, yielding [$T_e/T_p$]{}$\approx$0.05, [$V_s$]{}$\approx$1600 [kms$^{-1}$]{}. However, even after all the additional physics is included, the Balmer-dominated shock models are still unable to reach the low broad-to-narrow ratios measured along the rims of DEM L71 ([$I_b/I_n$]{}$\approx$0.2-0.7; Ghavamian et al. 2003; Rakowski, Ghavamian & Laming 2009). The electron-ion equilibration in DEM L 71 was instead determined via comparison of broad H$\alpha$ line FWHM with postshock electron temperatures measured from [[*Chandra*]{}]{} observations (Rakowski, Ghavamian & Hughes 2003). The most promising explanation advanced for the [$I_b/I_n$]{} discrepancy has been added narrow component flux from the shock precursor (Raymond et al. 2011; Morlino et al. 2012), hitherto not included in the earlier shock models (Ghavamian et al. 2003, Rakowski, Ghavamian & Laming 2009). These developments are described in more detail in the next section.
The plot of [$T_e/T_p$]{} versus [$V_s$]{}for the available sample of Balmer-dominated shocks shows a declining trend of equilibration with shock speed (Ghavamian et al. 2007; Heng et al. 2007; van Adelsberg et al. 2008). The trend is described by Ghavamian et al. (2007) as full equilibration for shock speeds up to and including 400 [kms$^{-1}$]{}, and a declining equilibration proportional to the inverse square of the shock speed above 400 [kms$^{-1}$]{}. This description can be characterized in the following way: $$\frac{T_e}{T_p}\,=\,\left\lbrace\begin{array}{c c}{1} & {\rm if\,V_s\,< 400\,km\,s^{-1}} \\ { \frac{m_e}{m_p}\,+\,\left(1\,-\,\frac{m_e}{m_p}\right)\left(\frac{V_s}{400}\right)^{-2}} & {\rm\,\, if\, V_s\,\geq\,400\,km\,s^{-1} }\end{array}\right\rbrace
\label{tetpeq}$$ where the functional form of the [$T_e/T_p$]{} relation is designed to asymptotically transition to [$T_e/T_p$]{}=[$m_e/m_p$]{} at very high shock velocities.
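Equation \[tetpeq\] is simple to evaluate; a minimal sketch (ours) of the relation is:

```python
def Te_over_Tp(V_s_kms, m_ratio=1.0 / 1836.15):
    """Empirical equilibration-shock speed relation (Ghavamian et al. 2007):
    full equilibration below 400 km/s, an inverse-square decline above it,
    asymptoting to m_e/m_p at very high shock speed."""
    if V_s_kms < 400.0:
        return 1.0
    return m_ratio + (1.0 - m_ratio) * (V_s_kms / 400.0) ** -2

for v in (300.0, 400.0, 1000.0, 2000.0, 3000.0):
    print(v, round(Te_over_Tp(v), 4))   # 1.0, 1.0, ~0.16, ~0.04, ~0.018
```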
The most up-to-date plot of [$T_e/T_p$]{} versus shock speed, reproduced from van Adelsberg et al. (2008), is shown in Figure \[fig:plotequil\] (note that the shock models used in producing these plots do not include the contribution from the shock precursor). Although the inverse correlation between [$T_e/T_p$]{} and [$V_s$]{} was largely confirmed by van Adelsberg et al. (2008), there may be some evidence of departure from the [$T_e/T_p$]{}$\propto$ $V_s^{-2}$ relation at shock speeds exceeding 2000 [kms$^{-1}$]{}. van Adelsberg et al. find that when all three measured broad component widths and broad-to-narrow ratios from SN 1006 are included in the plot ([$V_s$]{}$\sim$2200-2500 [kms$^{-1}$]{}), a slight upturn in the [$T_e/T_p$]{}-[$V_s$]{} relation appears. The [$T_e/T_p$]{} ratios for those cases are found to be $\sim$0.03, superficially similar to [$\sqrt{m_e/m_p}$]{}, rather than ${\rm m_e/m_p}$. The reason for the discrepancy is not clear, nor whether this indicates a breakdown in the $V_s^{-2}$ dependence at shock speeds exceeding 2000 [kms$^{-1}$]{}. Some caveats to consider when interpreting the upturn in [$T_e/T_p$]{} seen in Figure \[fig:plotequil\] are as follows: For shock speeds $\gtrsim$2000 [kms$^{-1}$]{}, collisional ionization and excitation of H are primarily caused by proton (and to a lesser degree, alpha particle) impact (Laming et al. 1996; Ghavamian 1999; Ghavamian et al. 2001; Tseliakhovich et al. 2012). Experimentally measured cross sections for these interactions are still not available to high precision (uncertainties $\sim$20%-30% still exist), although more sophisticated theoretical calculations are now becoming available (see, for example, Tseliakhovich et al. 2012). In addition, as the broad component width increases, the H$\alpha$ profiles are spread out over an increasing number of pixels, resulting in noisier spectra and greater measurement uncertainty in the broad component width. These larger error bars in turn result in a larger uncertainty in [$V_s$]{}, especially at shock speeds of 2000 [kms$^{-1}$]{} and higher.
![image](plotequil.ps){width="55.00000%"}
The best way to further constrain the equilibration-shock speed relation is to add new data points to the curve shown in Figure \[fig:plotequil\]. To this end, we present Balmer-dominated H$\alpha$ profiles for two additional positions in Tycho’s SNR (different from those of Knot g presented by Kirshner, Winkler & Chevalier (1987), Smith et al. (1991) and Ghavamian et al. (2001)). These profiles, shown in Figure \[fig:tychocennwspec\], were acquired with a moderate resolution spectrograph in 1998 (for details on the observational setup, see Ghavamian 1999 and Ghavamian et al. 2001). The profile marked ’NE’ was obtained from a clump of H$\alpha$ emission located along the northeastern edge of Tycho’s SNR, approximately 1$^{\prime}$ northward of Knot g. The clump appears behind the main body of the Balmer filaments and exhibits a broad component that is substantially Doppler shifted to the red (10.7$\pm$1.2 Å, or about 490 [kms$^{-1}$]{}). The Doppler shift reflects the bulk velocity of the hot postshock proton distribution, so the significant velocity shift of the broad component centroid indicates that the shock in the NE has a substantial velocity component along the line of sight, i.e., that the NE shock is located on the far side of the blast wave shell.
The broad H$\alpha$ width of the NE shock is 1300$\pm$65 [kms$^{-1}$]{} (the smallest broad component width measured in Tycho’s SNR so far), with [$I_b/I_n$]{}=0.85$\pm$0.04. The NW shock, on the other hand, has a broad H$\alpha$ width of 2040$\pm$55 [kms$^{-1}$]{} (the largest broad component width measured in this SNR so far), with [$I_b/I_n$]{}=0.45$\pm$0.15. Neither of these broad-to-narrow ratios is strictly reproduced by the latest models of van Adelsberg et al. (2008), with the lowest predicted ratios being for Case B (the assumption of optically thick conditions for Ly $\beta$ photons in the narrow component) and for low equilibrations ([$T_e/T_p$]{}$\lesssim$0.1). For these low equilibrations, the corresponding shock speeds for the NE and NW shocks in Tycho are approximately 1400 [kms$^{-1}$]{} and 2250 [kms$^{-1}$]{} (using Figures 5 and 10 of van Adelsberg et al. 2008).
![image](tychocennwspec.eps){width="75.00000%"}
In Figure \[fig:plotequil\] we have added data from the NE and NW shocks in Tycho’s SNR to the [$T_e/T_p$]{}-[$V_s$]{} plot. The two data points help fill in a portion of the plot where the data are sparse: the region between approximately 1200 [kms$^{-1}$]{} and 1500 [kms$^{-1}$]{}, as well as the region beyond 2000 [kms$^{-1}$]{}, where the existing data are taken entirely from SN 1006. The added point near 1400 [kms$^{-1}$]{} is fully consistent with the $V_s^{-2}$ relation, while [$T_e/T_p$]{} is not well constrained enough for the point at 2250 [kms$^{-1}$]{} to contradict the appearance of an upturn at the highest shock velocities. It is clear that proper characterization of the equilibration-shock velocity curve above 2000 [kms$^{-1}$]{} will require both higher signal-to-noise spectra on existing Balmer-dominated shocks, as well as new data points beyond a broad component width of 2500 [kms$^{-1}$]{}.
Exceptions to [$T_e/T_p$]{}$\propto$V$_{sh}^{-2}$
-------------------------------------------------
Although the inverse squared relation between equilibration and shock velocity has been the most salient result of the study of Balmer-dominated shocks, there have been discrepant results reported in a small subset of cases. Recently the broad H$\alpha$ component in the LMC SNR 0509$-$67.5 was detected for the first time (Helder, Kosenko & Vink 2010). Broad emission was observed along both the northeastern rim (FWHM 3900$\pm$800 [kms$^{-1}$]{}) and southwestern rim (FWHM 2680$\pm$70 [kms$^{-1}$]{}), with the former being the fastest Balmer-dominated shock detected to date having the characteristic broad and narrow component H$\alpha$ emission. Interestingly, the broad-to-narrow ratios for both shocks are exceptionally low, [$I_b/I_n$]{}=0.08$\pm$0.02 and 0.29$\pm$0.01 along the NE and SW rims, respectively. As noted by Helder, Kosenko & Vink (2010), these ratios fall nearly a factor of two below the smallest ones predicted by the models of van Adelsberg et al. (2008), precluding measurement of [$T_e/T_p$]{} from the Balmer-dominated spectra and implicating excess narrow component H$\alpha$ emission in a cosmic ray precursor once again. Using RGS spectra of SNR 0509$-$67.5 acquired with [*XMM-Newton*]{}, they obtained a forward shock speed of approximately 5000 [kms$^{-1}$]{} in the SW. This implies a broad H$\alpha$ width of 3600 [kms$^{-1}$]{}, substantially larger than the measured width of 2680 [kms$^{-1}$]{}. Helder, Kosenko & Vink (2010) suggest that this indicates some thermal energy loss ($\sim$20%) to cosmic ray acceleration. This picture is supported by the presence of nonthermal X-ray emission in their fitted RGS spectra of SNR 0509$-$67.5.
The result described above relies upon an accurate disentangling of the bulk Doppler broadening from the thermal Doppler broadening in the X-ray lines. The disentangling depends on parameters such as the ratio of reverse shock to forward shock velocities, as well as the ratio of the gradients in the reverse and forward shock velocities, which in turn had to be assumed from evolutionary SNR models. The result, while intriguing, is still significantly uncertain. On the other hand, Helder, Kosenko & Vink (2010) found that the X-ray shock velocities from the NE could only be reconciled with the observed broad H$\alpha$ width there if [$T_e/T_p$]{}$\approx\,$0.2, clearly at odds with the value [$T_e/T_p$]{}$\sim\,m_e/m_p$ predicted from Equation \[tetpeq\] at such high shock speeds. If this result were to be confirmed by future observations, it would present a new challenge in understanding how electron-ion equilibration occurs in fast collisionless shocks. One possibility, given the presence of nonthermal X-ray emission in the spectra of 0509$-$67.5 (Warren & Hughes 2004; Helder, Kosenko & Vink 2010), is that the moderate loss of thermal energy to cosmic ray acceleration may have slightly increased the compression and reduced the temperature at the shock front compared to the case with no acceleration (Decourchelle, Ellison & Ballet 2000; Ellison et al. 2007). Both of these effects would tend to render the plasma more collisional, possibly explaining the [$T_e/T_p$]{}$\approx\,$0.2 result. However, it is also worth noting that the shock velocity used for obtaining [$T_e/T_p$]{} in the NE is subject to the same model dependence and uncertainties as the SW measurements, so similar caution is required in its interpretation.
A similar combined optical and X-ray study of RCW 86 was performed by Helder et al. (2009, 2011). There, the broad component H$\alpha$ widths were supplemented with electron temperatures measured from [*XMM-Newton*]{} RGS spectra from the same projected locations along the rim. One of the main results of this study was that the slower shocks (broad H$\alpha$ FWHM $\sim$500-600 [kms$^{-1}$]{}) showed [$T_e/T_p$]{}$\sim$1, agreeing with earlier results from similar shocks observed both in RCW 86 and elsewhere (Ghavamian et al. 1999, 2001, 2007). However, Helder et al. (2011) found that while the results were indeed consistent with low equilibration at the shock front for fast shocks ([$T_e/T_p$]{}$\approx$0.02 for broad FWHM $\sim$1100 [kms$^{-1}$]{}) and higher equilibration for the slower shocks ([$T_e/T_p$]{}$\approx$1 for broad FWHM $\sim$650 [kms$^{-1}$]{}), their X-ray derived electron temperatures were inconsistent with $T_e\,=\,$0.3 keV at the shock front, contradicting the suggestion of Ghavamian et al. (2007) that shocks above 400 [kms$^{-1}$]{} may all heat electrons to roughly 0.3 keV. However, a major caveat of these results is that the forward shock in RCW 86 is believed to be impacting the walls of a wind-blown bubble (Williams et al. 2011), resulting in substantial localized variations in shock speed around the rim. These variations occur as different parts of the forward shock impact the cavity wall at different times. While the broad component H$\alpha$ widths closely trace the current position of the shock front, the X-ray emission behind that shock arises over a much more extended spatial scale, and is sensitive to the history of the forward shock interaction with the cavity wall. Furthermore, narrowband H$\alpha$ imagery of RCW 86 with the ESO Very Large Telescope (Helder et al. 2009, 2011) shows a complex morphology of filaments, especially along the eastern side of the SNR. The broad H$\alpha$ components of these filaments exhibit substantial, localized variations in line width, ranging from $\sim$600 [kms$^{-1}$]{} FWHM to 1100 [kms$^{-1}$]{} (Ghavamian 1999; Helder et al. 2009, 2011). These variations reflect localized changes in density and viewing geometry along the line of sight. As such, uniquely mapping the observed Balmer filaments to their corresponding X-ray emission in [*XMM-Newton*]{} data (especially given the somewhat coarse 10$^{\prime\prime}$ spatial resolution of that instrument) is fraught with uncertainty. Additional corroboration for these results would be desirable.
Imprint of the Shock Precursor on the H$\alpha$ Line Profile
------------------------------------------------------------
Perhaps the most important and exciting recent development in our understanding of Balmer-dominated shocks has come with the introduction of new kinetic-based shock models. Blasi et al. (2012) have introduced a kinetic model for following the momentum and energy exchange between neutrals and ions, along with the back-reaction of those neutrals when they pass back upstream and form a fast neutral precursor. Rather than assume a Maxwellian velocity distribution for the neutrals (as had been done in previous models, despite the lack of thermal contact between neutrals needed to justify such an assumption), both the ion and neutral distributions are computed from their appropriate Boltzmann equations. Building on these models, Morlino et al. (2012a,b) have confirmed what had been suspected earlier (Smith et al. 1994; Hester et al. 1994; Sollerman et al. 2003), namely that the broadening of the narrow component beyond the expected ISM value ($\sim$25-30 [kms$^{-1}$]{} instead of 10 [kms$^{-1}$]{}) is most likely due to heating in a cosmic ray precursor. In particular, Morlino et al. (2012) found that the characteristic charge exchange length of the incoming neutrals exceeds that of the neutrals crossing back upstream, so that the narrow component width is impacted not by the neutral return flux, but rather by heating in the cosmic ray precursor. Raymond et al. (2011) predicted a similar broadening of the narrow component, though they focused mainly on the contribution of collisional excitation in the precursor to the flux in the narrow H$\alpha$ component. Spatially resolved line broadening of the narrow H$\alpha$ component was detected in ground-based longslit spectroscopy of Knot g in Tycho (Lee et al. 2007). In addition, a small ramp-up in H$\alpha$ emission was observed ahead of Knot g in HST imagery of Tycho’s SNR (Lee et al. 2010). The results, taken together, are strong evidence for the presence of cosmic ray precursors in Balmer-dominated SNRs.
One prediction of the new Balmer-dominated shock models is the existence of a third component of the H$\alpha$ emission (Morlino et al. 2012a,b). When the hot neutrals escape upstream, they undergo charge exchange with the colder preshock protons. This results in fast protons and cold neutrals, with the former rapidly equilibrating with preshock protons and pre-heating the gas. The temperature of the equilibrated protons in the precursor lies between the temperature of the far upstream protons ($\sim$5000 K) and the far downstream protons ($\sim$ 10$^6$ - 10$^8$ K), typically $\sim$10$^5$ K. Further charge exchange between these warm protons and the preshock neutrals gives rise to a third, ’warm’ neutral component (neither fast nor slow) having velocity widths of hundreds of [kms$^{-1}$]{} (Morlino et al. 2012a). Interestingly, the presence of a third H$\alpha$ component was first observationally reported by Smith et al. (1994) in their high resolution echelle spectroscopy of Balmer-dominated SNRs in the Large Magellanic Cloud (an example of one of the spectra from Smith et al. (1994) is reproduced in Figure \[fig:0509\_3rdHalphacomp\]). A third H$\alpha$ component was also reported in high resolution spectra of Knot g in Tycho’s SNR (Ghavamian et al. 2000). In their models of Balmer-dominated shock emission, Morlino et al. (2012a) found that the importance of the third component relative to that of the broad and narrow components depends strongly on the preshock neutral fraction and [$T_e/T_p$]{}, in line with earlier theoretical predictions on properties of a fast neutral precursor (Smith et al. 1994). The fact that the third component has been detected in Tycho’s SNR (width $\sim$150 [kms$^{-1}$]{}) is consistent with the high preshock neutral fraction ($f_{H~I}\,\sim\,$0.9) inferred from the broad-to-narrow ratio of Knot g by Ghavamian et al. (2001). A similar third component may have been detected in high resolution spectra of SNR 0509$-$67.5, where measurement of the narrow component width required the inclusion of an additional component of width 75 [kms$^{-1}$]{} (Smith et al. 1994).
![image](0509_3rdHalphacomp.eps){width="50.00000%"}
The new fast neutral precursor models predict that a substantial fraction of the H$\alpha$ excitation in Balmer-dominated shocks can arise ahead of the shock, where warm neutrals are excited by electron impact. Interestingly, the relative contribution of the preshock H$\alpha$ to the total (upstream + downstream) is sensitive to [$T_e/T_p$]{} behind the shock. Morlino et al. (2012a) found that up to 40% of the total H$\alpha$ flux from a Balmer-dominated shock can arise from the fast neutral precursor when [$V_s$]{}$\sim$2500 [kms$^{-1}$]{} and [$T_e/T_p$]{}=1 both upstream and downstream. In these models the preshock contribution to the total flux drops substantially for lower downstream equilibrations for [$V_s$]{}$\gtrsim$1000 [kms$^{-1}$]{} (the slowest shocks considered by Morlino et al. 2012a). This is generally consistent with the fact that in most cases, shock models not including the precursor H$\alpha$ emission have been able to match the observed broad-to-narrow ratios. In other words, if the postshock temperature equilibration were not low for such fast shocks, the agreement between the observed and predicted [$I_b/I_n$]{}would have been substantially worse for such remnants as Tycho’s SNR and SN 1006.
Ultraviolet and X-ray Studies of Balmer-Dominated Shocks {#sec:3}
--------------------------------------------------------
SN 1006 is an example of an SNR accessible to UV spectroscopy due to its galactic location, 450 pc above the galactic plane, and therefore with relatively low extinction due to intervening dust and gas. The Hopkins Ultraviolet Telescope (HUT) observed the UV resonance lines of He II $\lambda$1640, C IV $\lambda\lambda$1548, 1550, N V $\lambda\lambda$1238,1243, and O VI $\lambda$1032, 1038 (Raymond et al. 1995) emitted from the Balmer dominated filament in the NW quadrant. The lines showed Doppler broadening consistent with that of the H $\alpha$ broad component observed in the optical, indicating insignificant ion-ion equilibration. Laming et al. (1996) were also able to infer the degree of electron-ion equilibration at the shock.
![image](ovi_spatial.ps){width="75.00000%"}
He II $\lambda$1640, by virtue of its relatively high excitation potential ($\sim$48 eV), is excited only by electrons, and its intensity is therefore directly related to the electron temperature. C IV $\lambda\lambda$1548, 1550, N V $\lambda\lambda$1238,1243, and O VI $\lambda$1032, 1038, by contrast, have much lower excitation potentials of $\sim 8, 10$ and 12 eV, so although these ionization states are established by electrons, the line emission in these transitions can also be excited by impacts with hot protons and $\alpha$ particles, and the intensity ratio of He II $\lambda$1640 to C IV $\lambda\lambda$1548, 1550, N V $\lambda\lambda$1238,1243, and O VI $\lambda$1032, 1038 can be sensitive to the post-shock electron-proton temperature equilibration. The spatial distribution of the UV resonance line emission, when spatially resolved, provides additional constraints on the degree of electron-ion equilibration at the shock front. For shocks slower than 1500 [kms$^{-1}$]{}, $T_e\,=\,T_p$ at the shock front results in both a more rapid rise and higher maximum in emissivity of the UV resonance lines with distance behind the shock (see Figure \[fig:ovispatial\]).
Laming et al. (1996) calculated impact excitation cross sections for protons and $\alpha$ particles colliding with Li-like ions, using a partial wave expansion with the Coulomb-Bethe approximation, and applying a unitarization procedure following Seaton (1964). They found a degree of equilibration of order $T_e/T_p\sim 0.05$ or less, which implied for a 2250 km s$^{-1}$ shock an electron temperature immediately postshock of $< 5\times 10^6$ K, in very good agreement with (and actually predating) the optical results discussed above in subsection 4.1.
The ion-ion equilibration in SN 1006 was revisited by Korreck et al. (2004), using higher spectral resolution FUSE data comprising O VI $\lambda$1032, 1038 and the broad Ly $\beta \lambda$1025 emission lines. They found a slightly broader line profile in Ly $\beta$, implying less than mass-proportional heating and possibly a small degree of ion-ion equilibration.
SN 1987A represents another SN/SNR in a region of the sky accessible to UV observations. HST COS observed the He II $\lambda$1640, C IV $\lambda\lambda$1548, 1550, N V $\lambda\lambda$1238,1243 and N IV $\lambda$1486 lines emitted from the reverse shock (France et al. 2011). When combined with optical spectroscopy of H $\alpha$, the [$T_e/T_p$]{} ratio at the shock is determined to be in the range 0.14–0.35, significantly higher than similar ratios coming from Balmer dominated forward shocks. France et al. (2011) argued that a different equilibration mechanism is likely at work. Considering the relative youth of SN 1987A, and the fact that the reverse shock is the origin of the emission, significant populations of cosmic rays and associated magnetic field amplification are unlikely. In fact, in the expanding ejecta the magnetic field is likely to be very weak, leading to a very high Alfvén Mach number shock. As will be discussed below in connection with shocks in galaxy clusters, electron heating in such a case is likely to be due to acceleration in the cross-shock potential. The cross-shock potential is effective at heating electrons, and so may explain the higher [$T_e/T_p$]{} in SN 1987A.
The forward shock of SN 1987A has also been observed in X-rays with the grating instruments on Chandra (e.g. Zhekov et al. 2009). In general electron heating well below complete equilibration is seen, though precise interpretation is difficult because one observation sees emission from shocks at a variety of different velocities, due to irregularities in the density of the surrounding medium.
Do Results from the Balmer-Dominated Shocks Apply to Fully Ionized Shocks? {#sec:4}
---------------------------------------------------------------------------
The inverse relationship between the temperature equilibration and shock speed is an interesting result from studies of Balmer-dominated SNRs. However, the applicability of this result to both fully ionized shocks and shocks undergoing efficient CR acceleration ($\gtrsim$50% of their energy transferred to CRs) remains unsettled. Recently, Vink et al. (2010) used a two-fluid model for cosmic rays and thermal gas to simulate the effect of cosmic ray acceleration on the temperature and ionization structure of fast, non-relativistic shocks. They found that if 5% of the shock energy were to be channeled into cosmic rays (the minimum needed if SNRs are the dominant source of cosmic rays), then approximately 30% of the postshock pressure must reside in cosmic rays (corresponding to a ratio of cosmic ray to total postshock pressure, w, of 0.3). For w=0.3, Vink et al. (2010) predicted a lowering of the average temperature of the postshock gas to $\sim$70% of the value when the cosmic ray contribution is ignored. This is a significant alteration of the postshock temperature profile, and should result in much more rapid equilibration of electrons and protons close to the shock.
However, do the effects described above actually occur in SNR shocks? One of the principal lines of evidence cited by Vink et al. (2010) in support of this picture was the result found in RCW 86 by Helder et al. (2009). In that SNR, the broad H$\alpha$ widths of Balmer-dominated filaments were found to be nearly 50% smaller than the minimum allowed given their X-ray proper motions. Filaments in the NE of the SNR exhibited broad H$\alpha$ widths of 1000 [kms$^{-1}$]{}, but their apparent X-ray counterparts, which showed strong X-ray synchrotron (nonthermal) emission, exhibited proper motions indicating shock speeds of 3000-6000 [kms$^{-1}$]{}. This result, along with the theoretical prediction that X-ray synchrotron emission requires shock speeds of at least 2000 [kms$^{-1}$]{} (Aharonian et al. 1999), was taken by Helder et al. (2009) and Vink et al. (2010) as evidence for substantial energy loss (w$\sim$0.5) from the Balmer-dominated shocks to cosmic rays. However, this association has now been refuted by subsequent multi-epoch optical imagery of the H$\alpha$ filaments, which has failed to show the kind of high proper motions seen in the nonthermal X-ray filaments (Helder et al. 2012, in preparation). Instead, they show proper motions consistent with shock speeds predicted by the broad H$\alpha$ widths without energy loss to cosmic rays ($\sim$600-1200 [kms$^{-1}$]{}), implying that w$<$0.2. The association between the Balmer-dominated shocks studied spectroscopically by Helder et al. (2009) and the X-ray filaments was due either to coincidental spatial alignment, or due to sudden deceleration of the outer shock in RCW 86 during its encounter with the surrounding cavity wall (Williams et al. 2011; Helder et al. 2012, in preparation).
![image](tycho_balmer.ps){width="110.00000%"}
The lack of association between the Balmer-dominated filaments and the non-thermal X-ray filaments in RCW 86 raises some important questions about the feasibility of using Balmer-dominated shocks to study electron-proton equilibration in cases where 50% or more of the thermal energy is diverted to cosmic rays. In SNRs such as SN 1006 (Koyama et al. 1996; Katsuda et al. 2010a), Tycho’s SNR (Warren et al. 2005; Katsuda et al. 2010b), Cas A (Vink & Laming 2003) and RX J1713.7$-$3946 (Koyama et al. 1997; Slane et al. 1999; Tanaka et al. 2008) the presence of strong synchrotron X-ray filaments has been interpreted as evidence for highly efficient cosmic ray acceleration. The narrowness of the synchrotron filaments most likely reflects the short emitting lifetimes of the ultra high energy electrons (energies $\sim$10-100 TeV) as they spiral in the postshock magnetic field (Vink & Laming 2003). The detection of $\gamma$-ray emission from the shells of SN 1006 (Acero et al. 2010) and RX J1713.7$-$3946 (Aharonian et al. 2006; Abdo et al. 2011) has shown that cosmic rays are accelerated to energies as high as 100 TeV in these SNRs. In all cases thermal X-ray emission has been exceptionally faint due to the very low inferred preshock densities ($\lesssim$0.1 cm$^{-3}$), making it more likely that the overall X-ray emission will be dominated by synchrotron radiation from the most energetic cosmic rays. SNRs expanding into such low density media can propagate at the high shock speeds required for cosmic ray acceleration ($\gtrsim$2000 [kms$^{-1}$]{}; Aharonian et al. 1999) for a longer time, allowing their structure to become modified by the back pressure from the cosmic rays. Investigating the temperature and ionization structure of such shocks with Balmer line spectroscopy requires finding Balmer-dominated shocks exhibiting X-ray synchrotron radiation.
All of the known SNRs exhibiting Balmer-dominated shocks have also been imaged at X-ray wavelengths with [[*Chandra*]{}]{} or XMM, allowing reasonably detailed searches for shocks emitting both H$\alpha$ and synchrotron X-ray emission (the latter producing hard continuum that is dominant at energies of 2 keV and higher). A detailed comparison for all Balmer-dominated SNRs has not yet been published. However, even a cursory comparison between the narrowband H$\alpha$ and hard X-ray images of these SNRs shows a distinct [*anticorrelation*]{} between shocks emitting in these two bands. For example, overlaying H$\alpha$ and [[*Chandra*]{}]{} images of Tycho’s SNR acquired during the same epoch (2007) (Figure \[fig:tycho\_balmer\]) shows little or no correlation between the prominent Balmer-dominated filaments on the eastern and northeastern edges and the non-thermal X-ray filaments (E$\geq$3 keV) circling the remnant. The Balmer-dominated filaments (shown in red in Figure \[fig:tycho\_balmer\]) on the eastern side of Tycho’s SNR are seen projected 30$^{\prime\prime}$-1$^{\prime}$ inside the edge of the nonthermal X-ray filaments (marked in green), an indication that this portion of the shell along the line of sight has significantly decelerated. The Balmer filaments are seen at the outermost edge of the thermally emitting X-ray ejecta (marked in blue), but only at locations where little or no nonthermal X-ray emission is present. The bright optical filament known as Knot g (at the far left edge of Figure \[fig:tycho\_balmer\]) is the only location where Balmer filaments and X-ray synchrotron emission appear coincident. However, upon closer inspection the anticorrelation between the Balmer line and synchrotron emission can be seen in Knot g as well: the upper half of the filament, where Balmer line emission is strongest, exhibits minimal synchrotron emission, while the opposite is true in the lower half of the filament. The enhanced nonthermal emission inside of Knot g may be due to the strong recent deceleration of Knot g, where the SNR is currently propagating into a strong density gradient at the outermost edge of an H I cloud (Ghavamian et al. 2000). The lack of optical/X-ray synchrotron correlation is especially striking given that the Balmer-dominated filaments in Tycho’s SNR have a high enough shock velocity ($\sim$1800-2100 [kms$^{-1}$]{}) to accelerate particles to TeV energies.
The anticorrelation between the optical and nonthermal X-ray emission can be observed in other SNRs as well, including SN 1006, where recently X-ray proper motions have been measured along the entire rim by Katsuda et al. (2012). As with Tycho’s SNR, the locations of the Balmer-dominated filaments and the nonthermal X-ray filaments along the NW rim of SN 1006 are mutually exclusive. Instead, the Balmer-dominated shocks are closely associated with thermal X-ray filaments having a proper motion consistent with a shock velocity of 3300$\pm$200$\pm$300 [kms$^{-1}$]{} (statistical and registrational uncertainties, respectively) for a distance of 2.2 kpc. This result is in excellent agreement with the shock velocity of 2890$\pm$100 [kms$^{-1}$]{} determined from the broad H$\alpha$ width and broad-to-narrow ratio of the NW filament by Ghavamian et al. (2002). Such close agreement is a strong indication that little substantial energy has been lost from the thermal plasma to cosmic ray acceleration, similar to optical proper motion studies from RCW 86 (Helder et al., in preparation).
From the above discussion it appears that Balmer-dominated SNRs, while offering powerful diagnostics of [$T_e/T_p$]{} and [$V_s$]{}, are not useful for investigating equilibration in the extreme cases of strongly cosmic-ray modified shocks. In fact, the very condition allowing for the detection of the Balmer line emission (the presence of neutral gas ahead of the shock) is also responsible for limiting the fraction of shock energy lost to cosmic ray acceleration. Quantitative evaluations of this effect by Drury et al. (1996) and Reville et al. (2007) show that when the preshock gas is significantly neutral, Alfvén waves driven by the cosmic rays ahead of the shock are dissipated by ion-neutral damping. As long as the charge exchange frequency, $\omega_{cx}$ ($\equiv\,n_{HI} \langle\sigma_{cx} v\rangle$) is larger than the Alfvén wave frequency, $\omega_A$ ($\equiv\,k\,v_A$), the ions and neutrals oscillate coherently, and ion-neutral damping is not important. However, when $\omega_{cx}\,<\,\omega_A$ the neutrals are left behind by the ions in the Alfvén wave motion, and during the incoherent oscillation between the two, charge exchange exerts a drag on the Alfvén waves, damping them. The condition required for Alfvén waves to not be strongly damped in the precursor can be written out as $$n_{HI}\langle \sigma_{cx}v\rangle\,\,>\,\,k\,v_A$$ where $v_A\,\equiv\,\frac{B}{(4\pi\,m_i\,n_i)^{1/2}}$ is the Alfvén speed of the ions ahead of the shock and $n_{HI}$ is the preshock neutral density. Given that cosmic rays resonantly scatter off Alfvén waves having Doppler shifted frequencies comparable to their gyrofrequency, and that the cosmic ray gyrofrequency is related to its energy via $\omega_{cr}\,\equiv\,e\,c\,B / E$, the inequality above can be cast in terms of the energy, $E_{crit}$, below which a significant fraction of the cosmic ray flux out of the shock is reduced by ion-neutral damping: $$E_{crit}(TeV)\,=\,0.07\,\frac{B^2_3\,T_4^{\,-0.4}}{x_{HI}\,(1 - x_{HI})^{1/2}\, n^{3/2}}$$ where we have set $\langle\sigma_{cx}v\rangle\,\approx$8.4$\times$10$^{-9}\, T_4^{\,0.4}$ cm$^{3}$ s$^{-1}$ (Kulsrud & Cesarsky 1971), $B_3$ is the preshock magnetic field strength in units of 3 $\mu G$, $n$ and $x_{HI}$ are the total preshock density and neutral fraction, and $T_4$ is the preshock temperature in units of 10$^4$ K. For Balmer-dominated SNRs, where recent models have required moderate amplification of the preshock magnetic field ($\Delta B/B\,\sim\,$3-5; Ghavamian et al. 2007) and where the preshock temperature may exceed 20,000 K (Raymond et al. 2011), $E_{crit}\,\sim\,$4 TeV for the typical case where $x_{HI}\,=\,$0.5. SNRs exhibiting nonthermal X-ray emission are believed to contain cosmic rays with energies of tens of TeV, so $E_{crit}\,\sim$4 TeV is certainly high enough to reduce the effectiveness of Balmer-dominated shocks in producing nonthermal X-ray emission.
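The scaling above is straightforward to evaluate; in the sketch below (ours), the parameter choices $B_3=5$, $T_4=2$, $x_{HI}=0.5$ and $n=1$ are meant to mimic the moderately amplified field and warm preshock gas just described, and reproduce the quoted $E_{crit}\,\sim\,$4 TeV:

```python
def E_crit_TeV(B_3=1.0, T_4=1.0, x_HI=0.5, n=1.0):
    """Cosmic-ray energy below which ion-neutral damping suppresses the
    Alfven-wave flux in the precursor, using the scaling in the text:
    B_3 in units of 3 uG, T_4 in units of 1e4 K, n in cm^-3."""
    return 0.07 * B_3**2 * T_4**-0.4 / (x_HI * (1.0 - x_HI)**0.5 * n**1.5)

print(E_crit_TeV(B_3=5.0, T_4=2.0, x_HI=0.5, n=1.0))   # ~3.8 TeV
```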
However, as noted earlier, a modest back pressure from cosmic rays is required to explain the width of the H$\alpha$ narrow component line, as well as the low broad-to-narrow ratios seen in some SNRs (Rakowski et al. 2009; Raymond et al. 2011). In fact, one model for electron heating in fast collisionless shocks requires at least some feedback from the cosmic rays in order to explain the moderate heating of electrons in SNRs, as well as the inverse squared relationship between [$T_e/T_p$]{} and [$V_s$]{} (Ghavamian et al. 2007, described in the next section). Furthermore, as pointed out by Drury et al. (1996), the ion-neutral damping of Alfvén waves in the precursor is unimportant for cosmic rays which have already exceeded $E_{crit}$. Since the acceleration time for cosmic rays shortens considerably with shock speed ($\tau_{acc}\,\approx\,\kappa_{CR}/V_s^2$; Malkov & Drury 2001), the fastest Balmer-dominated shocks are more likely to have accelerated particles beyond $E_{crit}$ and hence will begin to exhibit nonthermal X-ray emission and cosmic-ray modified shock structure. A good example is the aforementioned SNR 0509$-$67.5, where the shock speeds exceed 5000 [kms$^{-1}$]{} (Helder, Kosenko & Vink 2010) and nonthermal X-ray emission from cosmic ray accelerated electrons is detected from the forward shock. The forward shocks in more evolved Balmer-dominated SNRs (such as SN 1006 and Tycho’s SNR) will have swept up more mass and slowed down to speeds $\lesssim$2000 [kms$^{-1}$]{}, by which point $\tau_{acc}$ will have lengthened and the shocks will be less cosmic-ray dominated.
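For a rough feel for the acceleration timescale quoted above, one needs a diffusion coefficient; the sketch below assumes Bohm diffusion, $\kappa_{CR}\,\approx\,r_g c/3\,=\,E c/(3 e B)$, which is our assumption rather than anything specified in the text, and ignores order-unity prefactors in $\tau_{acc}\,\approx\,\kappa_{CR}/V_s^2$:

```python
def tau_acc_yr(E_TeV, B_uG, V_s_kms):
    """Rough diffusive shock acceleration time, tau ~ kappa/V_s^2, with an
    assumed Bohm diffusion coefficient kappa = E c / (3 e B), in cgs."""
    E = E_TeV * 1.602                   # erg
    B = B_uG * 1.0e-6                   # G
    e, c = 4.803e-10, 2.998e10          # esu, cm/s
    kappa = E * c / (3.0 * e * B)       # cm^2/s
    V = V_s_kms * 1.0e5                 # cm/s
    return kappa / V**2 / 3.156e7       # yr

print(tau_acc_yr(4.0, 3.0, 2000.0))     # a few tens of years to reach ~E_crit
```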
Models for Electron Heating in SNRs {#sec:5}
===================================
Given the lack of in situ measurements of the particle distributions in SNRs, the electron heating mechanisms in these shocks have been studied primarily via numerical methods. On one hand, a number of studies have focused on electron heating in relativistic shocks, with the aim of modeling high energy emission from gamma-ray bursts (e.g., Gedalin et al. 2008; Sironi & Spitkovsky 2011). These shocks are in a different area of parameter space than the SNR shocks discussed here, and the physics governing the electron heating in relativistic shocks is substantially different. At the very high Alfvénic Mach numbers characteristic of gamma-ray bursts, the shock transition becomes very thin (less than an electron gyroradius). Electrons in this case may be once again accelerated by the cross shock potential, similar to the very low Mach number case. On the other hand, a number of other studies consider non-relativistic shocks relevant to SNRs ($\lesssim$0.01c), where accelerated particles such as cosmic rays or solar energetic particles (SEPs) may play an important role in establishing their shock structure. These studies have sought to identify plasma waves capable of boosting electrons to mildly relativistic energies (e.g., Amano & Hoshino 2010; Riquelme & Spitkovsky 2011), with the objective of understanding how electrons are injected into the cosmic ray acceleration process. This is a different (though related) question from what we consider here, namely how electrons are promptly heated to temperatures $\sim$5$\times$10$^6$ K at the shock front (Ghavamian et al. 2007; Rakowski et al. 2008). This limits our consideration of the work done so far to two broad scenarios of electron heating in fast, non-relativistic collisionless shocks. One scenario is based on lower hybrid wave heating in the cosmic ray precursor (Laming 2000; Ghavamian et al. 2007; Rakowski et al. 2008), while the other is based on counterstreaming instabilities ahead of the shock (e.g., the Buneman instability; Cargill & Papadopoulos 1988; Matsukiyo 2010; Dieckmann et al. 2012). We discuss these mechanisms below in turn.
Lower Hybrid Wave Heating
-------------------------
The most significant result of the Balmer-dominated shock studies, the inverse squared relation between [$T_e/T_p$]{} and [$V_s$]{}, places a useful constraint on the range of plausible equilibration mechanisms at the shock front. The simplest way to obtain [$T_e/T_p$]{}$\propto\,V^{-2}_{s}$ is to set $\Delta\,T_e\,\approx\,const.$ at shock speeds of 400 [kms$^{-1}$]{} and higher, while allowing $T_p$ to rise according to the Rankine-Hugoniot jump conditions, $k \Delta\,T_p\,\approx\,\frac{3}{16}m_p V^2_{s}$. The requirement that [$T_e/T_p$]{}=1 at [$V_s$]{}=400 [kms$^{-1}$]{} gives $\Delta\,T_e\,\approx\,$0.3 keV for [$V_s$]{}$\geq$400 [kms$^{-1}$]{}, independent of shock velocity (Ghavamian et al. 2007). Although there may be marginal evidence of a departure from this relation at shock speeds exceeding 2000 [kms$^{-1}$]{} (van Adelsberg et al. 2008), a velocity-independent heating of electrons in SNR shocks is an important clue to the nature of plasma heating processes in fast collisionless shocks. It suggests that plasma processes ahead of the shock front are an important (if not dominant) source of electron heating in SNRs (Ghavamian et al. 2007; Rakowski et al. 2009).
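The arithmetic behind this argument is compact enough to be checked directly. The sketch below (an illustration, not a fit to data) fixes the electron heating at the value implied by full equilibration at 400 [kms$^{-1}$]{} and lets the proton temperature follow the strong-shock Rankine-Hugoniot relation; the function name and the chosen shock speeds are arbitrary.

```python
M_P_KEV = 938272.0        # proton rest mass in keV/c^2
C_KMS   = 2.998e5         # speed of light in km/s

def delta_T_p_keV(v_s_kms):
    """Proton heating k*dT_p = (3/16) m_p V_s^2 for a strong shock, in keV."""
    return (3.0 / 16.0) * M_P_KEV * (v_s_kms / C_KMS)**2

DT_E_KEV = delta_T_p_keV(400.0)   # ~0.3 keV: electron heating held fixed at this value

for v in (400.0, 1000.0, 2000.0, 5000.0):
    ratio = DT_E_KEV / delta_T_p_keV(v)      # = (400 km/s / V_s)^2
    print(v, round(DT_E_KEV, 2), round(ratio, 4))
# The ratio falls off as (400/V_s)^2, i.e. Te/Tp proportional to Vs^-2.
```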
As mentioned earlier, strong interstellar shocks are expected to form a precursor where cosmic rays crossing upstream give rise to Alfvén waves and turbulence (Blandford & Eichler 1987; Jones & Ellison 1991), compressing and pre-heating the gas before it enters the shock. As long as the shock is strong ($v_{downstream}/V_{s}\,\approx\,$1/4) and cosmic ray pressure does not dominate the postshock pressure ($\Delta$B/B does not greatly exceed unity, with $\lesssim$20% of the postshock pressure provided by cosmic rays) the thermal heating within the precursor does not depend strongly on shock velocity. The limited range of narrow component H$\alpha$ widths observed in Balmer-dominated SNRs over a wide range in shock speeds (Sollerman et al. 2003; Raymond et al. 2011) is consistent with the relative insensitivity of the preshock heating to shock speed. Since the widening of the H$\alpha$ narrow component line is now believed to arise in a precursor where the gas is heated by the damping of cosmic-ray driven waves (Wagner et al. 2008; Raymond et al. 2011; Morlino et al. 2012), it stands to reason that perhaps the physical processes generating a constant electron heating with shock speed ($\Delta\,T_e\,\approx\,$0.3 keV) also originate within the cosmic ray precursor.
The above argument was used by Ghavamian et al. (2007) and Rakowski et al. (2008) to advocate for a heating model where lower hybrid waves within the cosmic ray precursor preheat electrons to a constant temperature before they enter the shock front. This model was based on the work of McClements et al. (1997), who suggested that the reflected population of nonthermal ions could generate lower hybrid waves ahead of the shock, pre-heating electrons and injecting them into the cosmic ray acceleration process. The condition for generating such waves is that the shock be quasi-perpendicular, and that the reflected ions form a beam-like (gyrotropic) distribution. A similar scenario was suggested by Ghavamian et al. (2007) and Rakowski et al. (2008), but with one crucial difference: the reflected particles considered are ultra-relativistic cosmic rays rather than suprathermal ions. The lower hybrid waves are electrostatic ion waves which propagate perpendicular to the magnetic field and whose frequency is the geometric mean of the electron and ion gyrofrequencies, $\omega_{LH}\,=\,(\Omega_e\,\Omega_i)^{1/2}$. The group velocity of these waves is directed primarily along the magnetic field lines ($k^2_{||}/k^2_{\perp}\,=\,m_e/m_p$; Laming 2001) and the waves can simultaneously resonate with ions moving across the field lines and electrons moving along the field lines. Although the growth rate of lower hybrid waves is generally small (Rakowski et al. 2008), their group velocity perpendicular to the magnetic field (and hence the shock front) can be on the order of the shock velocity ($\partial \omega/\partial k_{\perp}\,\approx\,$[$V_s$]{}). This allows the lower hybrid waves to remain in contact with the shock for long periods of time, attaining high intensities capable of effectively heating the electrons (McClements et al. 1997; Ghavamian et al. 2007).
In the case of cosmic rays, the time spent by the electrons in the precursor is $t\,\sim\,\kappa_{CR}/v^2_{sh}$. The kinetic energy acquired by the electrons in the precursor is $\Delta\,E_e\,\propto\,D_{||\,||}\,t$, where $D_{||\,||}$ is the momentum diffusion coefficient of electrons (Ghavamian et al. 2007). For lower hybrid wave turbulence, $D_{||\,||}\,\propto\,V_{s}^2$ (Karney 1978; Ghavamian et al. 2007; Rakowski et al. 2008), so that $\Delta\,E_e\,\approx\,\frac{1}{16}\,\left(\frac{m_e}{m_p}\right)^{1/2}\,m_e \Omega_e\,\kappa_{CR}\,\propto\,B\,\kappa_{CR}\,\sim\,const$, as needed to account for the inverse squared relationship between equilibration and shock speed. Note that under the assumption that nonlinear amplification of the preshock magnetic field is not too strong ($\Delta\,B/B\,\sim\,1$), $\kappa_{CR}$ is that of Bohm diffusion, which scales as 1/B, so that $\Delta\,E_e$ is also approximately independent of B.
During the past decade more refined models of cosmic ray acceleration have shown that a non-resonant mode of Alfvén waves, having a higher growth rate than the previously considered resonant mode (Skilling 1975), can be excited by cosmic rays in the precursor (Bell & Lucek 2001, Bell 2004, 2005). Unlike for the resonant case, the non-resonant amplification allows for $\Delta\,B/B\,\gg\,1$, driving preshock magnetic fields to values as high as 1 mG (Vink & Laming 2003; Berezhko et al. 2003; Bamba et al. 2005; Ballet 2006). Such magnetic fields are hundreds of times stronger than the canonical preshock magnetic field of 3 $\mu G$ and high enough to account for the observed narrowness of X-ray synchrotron-emitting rims in such SNRs as SN 1006 (assuming the narrowness is due to rapid cooling of high energy electrons behind the shock; see Ballet 2006 and Morlino et al. 2012). Additional studies have suggested that non-resonant amplification may dominate early in the life of the SNR, while resonant amplification may take over during the Sedov-Taylor stage of evolution (Amato & Blasi 2009; Schure et al. 2012), though in either case, $\Delta\,B/B\,>$10 is readily attained. Such a strong magnetic field effectively reduces the acceleration time for particles, and is very well suited for explaining how cosmic rays can reach the knee in the cosmic ray spectrum near 10$^{15}$ eV (Bell & Lucek 2001; Eriksen et al. 2011).
An important factor influencing the effectiveness of lower hybrid wave heating of electrons is the orientation of the preshock magnetic field relative to the shock front. Lower hybrid wave heating is only effective in perpendicular shocks (Vink & Laming 2003, Ghavamian et al. 2007; Rakowski et al. 2008). Given their spherical global geometry, SNR blast waves generally propagate at a range of angles to the interstellar magnetic field. X-ray observations and models of such SNRs as SN 1006 (Orlando et al. 2007; Petruk et al. 2008) have indicated that perpendicular shocks are far more effective at accelerating cosmic rays than parallel shocks. Although the detailed implications of such differences have not yet been worked out for the lower hybrid wave heating model, Rakowski et al. (2008) argue that even for quasi-parallel shock geometries the cosmic ray current driving the nonresonant Alfvén waves will generate a significant perpendicular magnetic field ahead of the shock (such a possibility has also been inferred from numerical simulations; Riquelme & Spitkovsky 2011). This would allow lower hybrid wave growth to overtake modified Alfvén wave growth for arbitrary orientations of the far upstream magnetic field, and allow for a more ubiquitous role for lower hybrid wave heating of electrons.
The amplification of the preshock magnetic field well beyond its far upstream value introduces an interesting possibility: effective lowering of the Alfvénic (and hence magnetosonic) Mach number of the shock. For the Balmer-dominated shocks, where analysis of the optical spectra has shown that at best only a moderate fraction ($\lesssim$20%) of the shock energy has likely been channeled into cosmic rays, the widening of the H$\alpha$ narrow component has been interpreted as nonthermal broadening caused by the lowest frequency waves in the precursor (Ghavamian et al. 2007; though see Raymond et al. (2011) for a thermal interpretation). To explain the 30-50 [kms$^{-1}$]{} widths of the H$\alpha$ narrow component, the preshock magnetic field must be enhanced by a factor of a few. For the non-resonant Alfvén waves in the Bell (2004) mechanism, the magnetic field energy density immediately behind the shock is given by (Schure et al. 2012) $$\frac{B^2}{4\pi}\,\approx\,\frac{1}{4}\,\phi^2\,\rho\,v^2_{sh}$$ where $\phi\,\equiv\,P_{CR}/\rho\,v^2_{sh}$ is the fraction of the shock ram pressure channeled into cosmic rays. Solving this expression for B gives $B(\mu G)\,\approx\,228.7\,\phi\,n^{1/2}\,V_{1000}$, where $V_{1000}$ is the shock speed in units of 1000 [kms$^{-1}$]{}. For $\phi\,\sim\,$0.1-0.2, $n\,\sim\,$1 cm$^{-3}$, a postshock compression factor of 4 and Balmer-dominated shock speeds $\sim$2000 [kms$^{-1}$]{}, this gives $\Delta\,B/B\,\sim$4-10 ahead of the shock. Correspondingly, $v_A$ can increase by nearly the same factor, so that $M_A$ can be reduced by as much as an order of magnitude. Treumann & Jaroschek (2008) describe the physical picture in this case as one in which, at higher and higher Mach numbers, the shock must deflect an increasing number of incoming ions to prevent them from crossing the shock jump. This deflection is necessary so that the ability of the shock to dissipate the inflowing energy is not overwhelmed. By deflecting these ions back upstream into a precursor, the net inflow of momentum and energy density into the shock is reduced, reducing the net difference in velocity between the inflowing and outflowing ions. This effectively reduces the Mach number in the frame of the upstream medium.
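To give a sense of the numbers, the Bell-mechanism field estimate above can be turned into an approximate preshock amplification factor. The following sketch uses the parameter values quoted in the text ($\phi\,\sim\,$0.1-0.2, $n\,=\,$1 cm$^{-3}$, $V_s\,=\,$2000 [kms$^{-1}$]{}, a compression factor of 4, and an assumed far-upstream field of 3 $\mu$G); the helper function name is arbitrary.

```python
import math

M_P  = 1.6726e-24   # proton mass, g
MU_G = 1.0e-6       # Gauss per microgauss

def B_postshock_uG(phi, n_cm3, v1000):
    """Postshock field from B^2/(4 pi) ~ (1/4) phi^2 rho V_s^2 with rho = n m_p,
    which reproduces B(uG) ~ 229 phi n^{1/2} V_1000 (228.7 in the text)."""
    v = v1000 * 1.0e8                   # shock speed in cm/s
    return phi * v * math.sqrt(math.pi * n_cm3 * M_P) / MU_G

B0 = 3.0                                # assumed far-upstream field, microgauss
for phi in (0.1, 0.2):
    B_post = B_postshock_uG(phi, n_cm3=1.0, v1000=2.0)
    B_pre  = B_post / 4.0               # undo the assumed compression factor of 4
    print(phi, round(B_post, 1), round(B_pre / B0, 1))
# -> preshock amplification dB/B of roughly 4-8; since v_A scales linearly
#    with B, the Alfvenic Mach number drops by nearly the same factor.
```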
Note that the ions in the precursor are only mildly compressed (Wagner et al. 2008; Morlino et al. 2012), which only weakly counteracts the rise in B. In addition, $v_A$ only scales as $n^{-1/2}$, but scales directly as B. The result is that [*given the compelling evidence for enhanced preshock magnetic fields in SNR shocks, the Mach numbers of these shocks may be overestimated by as much as an order of magnitude.*]{} As we describe in Section \[sec:7\], a unified description of solar wind and SNR shocks, where the physics of electron-ion temperature equilibration occurs over a similar range in Mach numbers and involves a similar range of physical processes, may be possible.
Plasma Wave Heating from the Buneman Instability
------------------------------------------------
Similar to the lower hybrid wave model, the Buneman instability-driven wave model focuses on the region immediately ahead of a quasi-perpendicular shock. However, unlike the lower hybrid wave model, the Buneman instability models consider the reflected nonthermal ion distribution, rather than ultrarelativistic cosmic rays. In this model, $\sim$20% of the ions are reflected backstream against the incoming electron and ion plasma (Papadopoulos 1988; Cargill & Papadopoulos 1988). Here the upstream plasma is not electrically neutral due to the positive charge of the reflected ion distribution. In such cases, a drift is induced between the electrons and ions. The microinstabilities excited by this configuration depend upon the size of the electron thermal speed relative to the electron-ion drift velocity. The Buneman instability occurs when the drift velocity of the reflected ions relative to the upstream electrons exceeds the thermal speed of the upstream electrons ($2 v_{s}\,>\,(kT_e/m_e)^{1/2}$) (Cargill & Papadopoulos 1988), a condition which occurs for very high Mach number ($M_A\,\gtrsim$50) shocks. If the reflected ion current upstream is strong enough, the electron current generated to counteract it may produce a large enough drift between the preshock ions and electrons to cause a secondary Buneman instability when the ion speed exceeds the electron thermal speed (Dieckmann et al. 2012). The Buneman instability generates electrostatic plasma waves which damp by rapidly heating the preshock electrons to $k\Delta\,T_e\,\approx\,2 m_e v^2_{s}\,\approx\,0.01\,v^2_{1000}$ keV, where $v_{1000}$ is the shock speed in units of 1000 [kms$^{-1}$]{}, until their thermal speed matches the electron-ion drift speed, at which point the instability saturates. The rapid heating of the electrons perpendicular to the magnetic field results in $T_e/T_i\,\gg\,$1 and makes it possible for an ion acoustic instability to occur between the preshock electrons and either the reflected ions or preshock ions (Cargill & Papadopoulos 1988). The waves generated by the ion acoustic instability can then transfer a substantial fraction of the shock energy (tens of percent) into electron thermal energy. This process occurs over a length scale of $v_{s}/\Omega_i$ (as opposed to $\kappa_{CR}/v_{s}$ for the cosmic ray precursor), resulting in a [$T_e/T_p$]{}$\approx$0.2, independent of shock speed. [*This is in strong disagreement with the equilibrations obtained for the Balmer-dominated shocks.*]{}
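For orientation, the prompt Buneman heating quoted above can be evaluated for a few representative speeds. The snippet below simply evaluates $k\Delta T_e\,\approx\,2 m_e v_s^2$; the chosen speeds are arbitrary examples.

```python
M_E_KEV = 511.0      # electron rest mass in keV/c^2
C_KMS   = 2.998e5    # speed of light in km/s

def buneman_prompt_heating_keV(v_s_kms):
    """Prompt electron heating from Buneman-driven waves,
    k*dT_e ~ 2 m_e V_s^2 (i.e. ~0.01 v_1000^2 keV)."""
    return 2.0 * M_E_KEV * (v_s_kms / C_KMS)**2

for v in (1000.0, 2000.0, 5000.0):
    print(v, round(buneman_prompt_heating_keV(v), 3))
# -> ~0.011, 0.045, 0.28 keV: this first stage is modest; in this picture the
#    bulk of the heating comes from the subsequent ion acoustic stage.
```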
A number of other electron heating mechanisms, such as the modified two-stream instability and electron-cyclotron drift instability, have been proposed for collisionless shocks based on results from particle-in-cell (PIC) simulations (Umeda et al. 2012a, 2012b). A unified picture proposed by Matsukiyo (2010) suggests that electrons can be strongly energized at low Mach numbers ($M_A\,\leq\,$10) via a modified two-stream instability, where the velocities of the reflected/incoming ions and the electrons are lower than the thermal speed of the electrons and the electrons are able to damp out the Buneman instability. In this case, obliquely propagating whistler mode waves are excited, having frequencies between the electron cyclotron frequency and the lower hybrid wave frequency. When the electron-ion drift velocity and electron thermal speed become nearly equal, the electron-cyclotron drift instability becomes important (Umeda et al. 2012a), exciting waves with frequencies that are multiples of the electron cyclotron frequency. At higher Mach numbers the electron thermal speed is lower than the drift speed of the ions, and the Buneman instability/ion acoustic wave process described earlier is predicted to take over.
The amount of electron heating predicted by the Buneman instability/ion acoustic wave model scales as $M^2_A$ (Cargill & Papadopoulos 1988; Matsukiyo 2010), so that for shocks in the 2000 [kms$^{-1}$]{}-10,000 [kms$^{-1}$]{} range, $\Delta\,E_e\,\sim\,$2-50 keV. This is clearly at odds with $\Delta\,E_e\,=\,$0.3 keV observed between 400 [kms$^{-1}$]{} and 2000 [kms$^{-1}$]{} for Balmer-dominated shocks. One explanation for this discrepancy is that growth of the Buneman-like and two-stream instabilities described above requires that the reflected ions form a distribution function with a positive gradient at some velocity (Laming 2000). This distribution forms when specularly reflected ions have a mostly monoenergetic, beamlike configuration. At the low Mach numbers in the solar wind ($\lesssim$10), where the shock structure is laminar, the reflected ions closely resemble a monoenergetic beam. However, at the higher Mach numbers, where the shock front is more turbulent and disordered, the reflected ions are likely to have a greater spread in energy and are probably less beamlike (Laming 2000). This would lead to suppression of Buneman-like instabilities. However, this line of reasoning is still speculative, and the real explanation for the lack of agreement between the observed [$T_e/T_p$]{} and those predicted by models in this section remains to be explored. Cosmic-ray driven processes may ultimately provide a better explanation for electron heating at SNR shocks than those involving reflected suprathermal ions.
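One way to see the size of this discrepancy is to evaluate the predicted heating under the assumption, stated above, that the downstream electrons saturate near [$T_e/T_p$]{}$\,\approx\,$0.2 of the Rankine-Hugoniot proton temperature. The sketch below does only that; the two shock speeds are the endpoints quoted in the text.

```python
M_P_KEV = 938272.0   # proton rest mass in keV/c^2
C_KMS   = 2.998e5    # speed of light in km/s

def predicted_dE_e_keV(v_s_kms, te_tp=0.2):
    """Electron heating if Te/Tp saturates at ~0.2 of the Rankine-Hugoniot
    proton temperature, as in the Buneman / ion acoustic picture."""
    return te_tp * (3.0 / 16.0) * M_P_KEV * (v_s_kms / C_KMS)**2

for v in (2000.0, 10000.0):
    print(v, round(predicted_dE_e_keV(v), 1))
# -> ~1.6 keV at 2000 km/s and ~39 keV at 10,000 km/s (of order 2-50 keV),
#    versus the ~0.3 keV inferred from Balmer-dominated shocks.
```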
The cross shock potential arises from the charge separation produced by the different gyroradii of ions and electrons as they cross the shock transition. It accelerates electrons into the postshock layer, and can be a means of electron heating at subcritical shocks with an approximately laminar structure. At supercritical shocks, the time dependence and non-locality introduced reduce the degree of electron heating. However, at sufficiently high Alfvén Mach numbers, where the shock transition becomes thin (on the order of the electron convective gyroradius or inertial length), significant electron heating may again occur. In the absence of magnetic field amplification by cosmic rays, this might be expected to happen at SNR shocks. However, it is much more likely in environments where the plasma beta is high, such as galaxy clusters. It may also occur in cases where a significant population of cosmic rays is unlikely due to the young age of the shock, as in gamma-ray bursts.
Constraints From Solar Wind Studies {#sec:6}
===================================
From the beginning, a detailed study of the fastest collisionless shocks has been hampered by one inherent limitation: they occur in objects which are too remote for in situ study. Although some collisionless shocks in our solar system reach Alfvénic Mach numbers as high as 30 (such as those around Saturn; Achilleos et al. 2006; Masters et al. 2011), there are no physical phenomena in our solar system energetic enough to produce the type of shocks seen in SNRs (Mach numbers $\sim$100 or more if no enhancement of the preshock magnetic field in the cosmic ray precursor is assumed). In addition, the range of plasma betas attainable in the solar system is larger than the range attained in the interstellar medium.
Another fundamental difference between solar wind and SNR shocks is the fact that the former are short-lived phenomena confined to small spatial scales (millions of km) in a curved (bow shock) geometry, whereas the latter are sustained for thousands of years, on spatial scales of parsecs, often well-described by a planar geometry. This results in an irreducible difference between the two types of shocks: particles crossing back and forth between upstream and downstream can remain in contact with SNR shocks for long periods of time, allowing accelerated CRs to acquire much more energy in SNR shocks than in solar wind shocks. This potentially allows the CRs to create shock precursors with properties needed to heat electrons and influence [$T_e/T_p$]{}.
The heating of electrons at the Earth’s and other planetary bow shocks has been the subject of much theoretical and observational work. Typical features of the electron temperature change, $\Delta T_e$, observed at the bow shock by @schwartz88 include (a) an approximate relationship between heating and the incident solar wind energy, $\Delta T_e \propto U^2$, where $U$ is the component of the solar wind velocity incident upon the shock, and (b) a relationship between the change in temperature normalized by the incident energy and the fast magnetosonic Mach number, $\Delta T_e / (m_p U^2/2) \propto M_{ms}^{-1}$. A similar approximate relationship holds between the normalized electron temperature change and the Alfvén Mach number $M_A$, especially for shocks with a low plasma $\beta$, which is the ratio between thermal and magnetic pressures. Recent work by @masters11 shows that this relationship with $M_A$ holds well at Saturn’s bow shock. This is particularly interesting as $M_A$ at Saturn is often much larger than at Earth.
In Figure \[fig:combined\] we plot the ratio of electron and ion temperatures downstream of the shock against the magnetosonic and Alfvén Mach numbers, as well as the upstream flow velocity relative to the shock. The data in these figures are taken from bow shock crossings of the ISEE spacecraft and are a subset of those listed in @schwartz88, consisting of 61 crossings for which all the necessary data are available.
It is well known that quantities other than those displayed here may be more appropriate; specifically, the change in electron temperature over the change in total temperature, $\Delta T_e / \Delta (T_e + T_i)$, or even $\Delta T_e / \Delta T_i$, are better correlated with inverse Mach numbers than $T_e / T_i$ [@schwartz88]. Nevertheless, we use the latter quantity here as it enables a comparison with data from extra-solar system and outer planetary shocks, where fewer data are available. Furthermore, the approximate inverse dependence upon $M_{ms}$, $M_A$, and $V_s$ is still quite apparent in these data. It is interesting to note that the relationship with the Mach numbers is much tighter than the dependence upon $V_s$, indicating that the Mach number is the more relevant quantity for organizing the relationship between [$T_e/T_p$]{} and shock strength.
![Collected electron-ion equilibration data from both the solar wind bow shocks and supernova remnant shocks. [$T_e/T_p$]{} is plotted versus shock speed (left), Alfvénic Mach number (center) and magnetosonic Mach number (right). Green symbols show data from crossings of Earth’s bow shock (@schwartz88), while the black symbols show data from crossings of Saturn’s bow shock (@masters11). Shock speeds for the Saturnian bow shock are based on a solar wind model and an assumed shock speed with respect to the spacecraft of 100 [kms$^{-1}$]{}, and ion temperatures are based on electron distribution measurements and the application of the Rankine-Hugoniot conditions (see Masters et al. (2011) for a full discussion of shock parameter derivations at Saturn’s bow shock). Red symbols show data acquired from Balmer-dominated SNR shocks (van Adelsberg et al. 2008), and assume $v_A$=9 [kms$^{-1}$]{}, $c_s$=11 [kms$^{-1}$]{}. []{data-label="fig:combined"}](combined_vs_ma_mms.ps "fig:"){width="5in"}
Many mechanisms may be involved in the heating of electrons in solar system shocks. Proposed mechanisms involve acceleration of electrons by a cross-shock potential [@goodrichScudder84; @scudder1_86; @scudder3_86], wave turbulence [@galeev76], microinstabilities [@wu84], and electron trajectory scattering [@Balikhin93]. The existence of a cross-shock potential may be deduced from the generalized Ohm’s law, in which a gradient in electron thermal pressure gives rise to an electric field. Examining the energetics of electrons crossing the shock may be simplified by working in the de Hoffmann-Teller frame of reference [@dehoffmannTeller50], defined as the frame in which the shock is at rest, and in which the magnetic field and plasma flow velocity are (anti-)parallel. In this case the electric field is dominated by that generated by the electron pressure gradient, and the work done on electrons crossing the shock is determined by the cross-shock potential. Additional mechanisms are required to scatter electrons to pitch angles that are more perpendicular, and to flatten the distribution so that empty regions of phase space are filled. This results in a distribution whose temperature is controlled to a large extent by the de Hoffmann-Teller frame cross-shock potential and the downstream density. In addition to direct measurements of the electric fields within the shock [@baleMozer07; @dimmock11], comparisons of upstream and downstream electron phase space distributions have shown that these are consistent with electron acceleration by a cross-shock potential in the de Hoffmann-Teller frame [@lefebvre07].
Connecting the Solar Wind Results to those in SNRs {#sec:7}
==================================================
Figure \[fig:combined\] may indicate that similar mechanism(s) heat the electrons in solar wind and in SNR shocks. [*This is especially appealing when we remember that $M_A$ in SNRs may be overestimated due to preshock amplification of magnetic field by cosmic rays.*]{} In their study of the terrestrial bow shock and interplanetary shocks, Schwartz et al. (1988) found that the electron-ion temperature equilibration organizes best by $T_e/T_i\propto 1/M_A$. Given the difficulty in determining the Mach numbers of SNR shocks, the equilibration dependence on shock strength has been characterized via the shock speed instead, and found to obey $T_e/T_i\propto$ 1/[$V_s$]{}$^{2}$ ([$V_s$]{} is much more accurately known than the Mach numbers). The lower hybrid wave heating model outlined in Section \[sec:5\] assumes that the cosmic ray diffusion coefficient $\kappa _{CR}$ is independent of [$V_s$]{}. From quasi-linear theory (Blandford & Eichler 1987), $$\kappa _{CR} = {p^2c^2v\over 3\pi e^2U}$$ where $U$ is the energy density of turbulence ($\equiv\,\langle\Delta\,B^2\rangle/8\pi$) and $v$ is the cosmic ray velocity. For resonant amplification, we evaluate $U$ at $k_{\Vert} = \Omega /v_{\Vert}$, where $\Omega$ is the gyrofrequency and $v_{\Vert}$ the parallel component of the cosmic ray velocity. For relativistic cosmic rays, where $v=c$, this results in $\kappa _{CR}\propto p^2/U$, assumed constant with $V_s$. However, for nonrelativistic suprathermal particles, $v$ will most likely be proportional to the shock velocity [$V_s$]{}, which with the same assumptions leads to $T_e/T_i\,\propto$1/[$V_s$]{} for solar wind shocks (as opposed to $1/V_s^2$ in SNRs). This argument is admittedly loose, and should not be viewed as much more than a hypothesis to motivate further work.
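Since the argument is purely a chain of proportionalities, it can be written down symbolically. The sketch below (a schematic rendering of the scaling, not a calculation of $\kappa_{CR}$ itself) propagates $\Delta E_e\,\propto\,D_{||\,||}\,t$ with $D_{||\,||}\,\propto\,V_s^2$, $t\,\propto\,\kappa_{CR}/V_s^2$ and $\kappa_{CR}\,\propto\,p^2 c^2 v/U$ for the two limits of the cosmic ray velocity $v$.

```python
import sympy as sp

Vs, p, U, c = sp.symbols('V_s p U c', positive=True)

def te_tp_scaling(v_cr):
    """Propagate dE_e ~ D_par * t with D_par ~ V_s^2, t ~ kappa_CR / V_s^2 and
    kappa_CR ~ p^2 c^2 v_cr / U; the equilibration then scales as dE_e / V_s^2."""
    kappa = p**2 * c**2 * v_cr / U
    dE_e = Vs**2 * (kappa / Vs**2)      # diffusion coefficient times residence time
    return sp.simplify(dE_e / Vs**2)

print(te_tp_scaling(c))    # relativistic CRs (v = c): result scales as 1/V_s**2
print(te_tp_scaling(Vs))   # suprathermal ions (v ~ V_s): result scales as 1/V_s
```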
In our arguments above we have made considerable assumptions about $\kappa_{CR}$. The most obvious one is that $\kappa _{CR}$ as written above applies to parallel shocks, whereas we are most likely dealing with quasi-perpendicular cases. This may reduce the difference anticipated between solar wind and SNR shocks, depending on the turbulence spectrum (summarized in Appendix A of Rakowski et al. 2008).
The degree of cosmic ray magnetic field amplification at SNR shocks required to bring the SNR data points in Figure \[fig:combined\] into alignment with the solar wind data points is approximately an order of magnitude or less at the highest velocities considered. Given the degree of magnetic field amplification expected from cosmic ray acceleration, this appears to be highly plausible. In the case of the nonresonant instability (Bell 2004, 2005) saturated by resonant scattering (Luo & Melrose 2009), $\langle\Delta B\rangle^2/B^2\sim 10 - 100$ is expected. In the case of nonresonant saturation, higher values, though strongly dependent on $k$, are predicted. Saturation by electron heating (i.e. the $M_A$ where the growth rate of lower hybrid waves becomes greater than the growth rate of the magnetic field) leads to similar magnetic field enhancement, with $\Delta B^2/B^2\sim 200$ (Rakowski et al. 2008). Such magnetic field amplification is less likely at solar wind shocks. The suprathermal particle densities are lower in solar wind shocks, and the ambient magnetic fields are higher, much closer to where the nonresonant instability would saturate (if not already beyond it).
Observational Constraints from Galaxy Cluster Shocks {#sec:8}
====================================================
Collisionless shocks occur over a vast range of length scales, with those in galaxy clusters being among the largest. While the shock speeds in the galaxy cluster shocks are similar to those in supernova remnants (up to 4000 [kms$^{-1}$]{}; Markevitch et al. 2005; Markevitch & Vikhlinin 2007; Russell et al. 2012), they occur in environments that are substantially different from both the solar wind and the ISM. These differences can be encapsulated via the plasma beta, defined as the ratio of the thermal pressure to the magnetic pressure of a plasma ($\beta\,\equiv\,n\,k\,T / (B^2 / 8\pi)$). Utilizing the solar wind parameters listed by Bruno & Carbone (2005), this ratio ranges from around unity at 1 AU under fast solar wind conditions (wind velocity $\sim$900 [kms$^{-1}$]{}) to around 20 for the quiescent wind (wind velocity $\sim$300 [kms$^{-1}$]{}). The plasma $\beta$ of the ISM is close to that of the fast solar wind, $\beta_{ISM}\,\sim$1-4 (assuming B=3 $\mu$G, n=1 cm$^{-3}$ and T=10$^4$ K). On the other hand, the electron temperature of the intracluster medium (ICM) ranges from $\sim$10$^7$ K to 10$^8$ K (1-10 keV), with number densities steeply declining from $\sim$10$^{-2}$ cm$^{-3}$ near the cluster centers to $\sim$10$^{-4}$ cm$^{-3}$ at the outer edges. The corresponding sound speed is close to 1000 [kms$^{-1}$]{}, nearly two orders of magnitude higher than in the general ISM. The magnetic fields measured in galaxy clusters are actually close to those of the ISM, typically on the order of a microGauss (Carilli & Taylor 2002). Therefore, $v_A\,\sim\,$50 [kms$^{-1}$]{} in galaxy clusters, so that $\beta_e\,\gg\,$1 and the magnetic field pressure makes a negligible contribution to the dynamics of shocks in galaxy clusters. This puts galaxy cluster shocks in a different region of parameter space than solar wind and SNR shocks.
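The parameter contrast described here is easy to quantify. The sketch below evaluates the plasma beta and Alfvén speed for the ISM and ICM values quoted in the text (B = 3 $\mu$G, n = 1 cm$^{-3}$, T = 10$^4$ K for the ISM; B = 1 $\mu$G with representative ICM values of n = 10$^{-3}$ cm$^{-3}$ and T = 5$\times$10$^7$ K, both assumptions of this illustration); only a single-species thermal pressure is used in $\beta$, matching the definition above.

```python
import math

K_B = 1.3807e-16   # Boltzmann constant, erg/K
M_P = 1.6726e-24   # proton mass, g

def plasma_beta(n_cm3, T_K, B_gauss):
    """beta = n k T / (B^2 / 8 pi), as defined in the text."""
    return n_cm3 * K_B * T_K / (B_gauss**2 / (8.0 * math.pi))

def v_alfven_kms(n_cm3, B_gauss):
    """Alfven speed B / sqrt(4 pi rho) with rho = n m_p, in km/s."""
    return B_gauss / math.sqrt(4.0 * math.pi * n_cm3 * M_P) / 1.0e5

environments = {
    'ISM': dict(n_cm3=1.0,    T_K=1.0e4, B_gauss=3.0e-6),
    'ICM': dict(n_cm3=1.0e-3, T_K=5.0e7, B_gauss=1.0e-6),
}
for name, pars in environments.items():
    print(name, round(plasma_beta(**pars), 1),
          round(v_alfven_kms(pars['n_cm3'], pars['B_gauss']), 1))
# -> ISM: beta of a few and v_A ~ 7 km/s; ICM: beta >> 1 and v_A ~ 70 km/s.
```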
Clusters such as 1E0657$-$56 (the ‘Bullet Cluster’) and A520 show strongly enhanced X-ray emission from collisionless shocks, formed during major mergers when gas from one cluster plunges through gas from the other (Markevitch et al. 2005; Markevitch & Vikhlinin 2007; Russell et al. 2012). Shocks moving mostly along the plane of the sky have a favorable viewing geometry and appear as giant curved structures hundreds of kiloparsecs in length. The large clumps of infalling gas drive bow shocks into the cluster gas, which has already been heated to at least 1 keV and produces thermal bremsstrahlung emission peaking close to that energy. This is another major difference between collisionless shocks in the ISM and those in the ICM. While the Alfvénic and magnetosonic Mach numbers of SNR shocks are very difficult to determine due to the lack of available observational constraints on magnetohydrodynamic quantities upstream (such as $T$, $B$ and $n$), those in galaxy clusters can readily be measured by spectral analysis of the X-ray emission upstream. Comparison of this emission to that of the enhanced postshock region gives the density contrast between the downstream and upstream gas (i.e., $n_2/n_1$). This density contrast yields the sonic Mach number, $M$, via the Rankine-Hugoniot jump conditions (Russell et al. 2012): $$M\,=\,\left( \frac{2\,n_2/n_1}{\gamma\,+\,1\,-\,\frac{n_2}{n_1}(\gamma-1)} \right)^{1/2}$$ where $\gamma$ is the ratio of specific heats of the cluster gas. Measurements of these jumps from X-ray observations have yielded $M\,\sim\,$1.5-3 for the Bullet Cluster (Markevitch et al. 2005), Abell 520 and Abell 2146 (Russell et al. 2012). Using these estimated Mach numbers, the shock velocity itself can be calculated via [$V_s$]{}=$M\,c_s$, where $c_s$ is the upstream sound speed as inferred from the X-ray spectra. This yields shock speeds ranging between 2500 [kms$^{-1}$]{} and 4000 [kms$^{-1}$]{}, similar to the fastest known Balmer-dominated shocks. However, fits to the X-ray spectra behind these shocks show prompt electron-ion equilibration at the shock front, consistent with [$T_e/T_p$]{}=1 (Markevitch et al. 2005; Markevitch & Vikhlinin 2007), despite the extremely high shock speeds involved. This result can only be reconciled with equilibration measurements from the solar wind and SNRs if the equilibration depends on the Mach number, rather than [$V_s$]{}. Furthermore, given the low Mach numbers found in the galaxy clusters, it is plausible that the shock transitions in these cases are laminar (as opposed to turbulent, like the SNR and fastest solar wind shocks), with electron heating occurring efficiently at the shock front via the same type of cross-shock potential as seen in the slowest solar wind and slowest SNR ([$V_s$]{}$\leq$ 400 [kms$^{-1}$]{}) shocks. This is, of course, speculative; further insight into collisionless cluster shocks may be obtained via numerical simulations for the appropriate conditions.
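The Mach number estimate from the density jump is a one-line application of the expression above. The sketch below evaluates it for a few jumps in the observed range, together with the implied shock speed for an assumed upstream sound speed of 1500 [kms$^{-1}$]{} (an illustrative value, not taken from any specific cluster).

```python
import math

def sonic_mach(density_jump, gamma=5.0 / 3.0):
    """Sonic Mach number from the Rankine-Hugoniot compression r = n2/n1."""
    r = density_jump
    return math.sqrt(2.0 * r / (gamma + 1.0 - r * (gamma - 1.0)))

C_S_KMS = 1500.0    # assumed upstream ICM sound speed, km/s
for r in (2.0, 2.5, 3.0):
    M = sonic_mach(r)
    print(r, round(M, 2), round(M * C_S_KMS))
# -> M ~ 1.7-3 and shock speeds of a few thousand km/s for these jumps.
```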
Summary and Future Work {#sec:9}
=======================
There have been exciting advances in the study of electron-ion temperature equilibration in collisionless shocks during the past few years. Perhaps most notable has been the growing realization that temperature equilibration and cosmic ray acceleration may be intertwined processes. Optical studies of collisionless shocks in partially neutral gas (Balmer-dominated shocks) have shown that the electron-ion temperature equilibration is a declining function of shock speed, well characterized as [$T_e/T_p$]{}$\propto\,V_s^{-2}$. This relationship most likely arises due to electron heating ahead of the shock that is nearly independent of shock speed above 400 [kms$^{-1}$]{}. Cosmic ray precursors, with moderately amplified preshock magnetic field and density, are the most logical sites for electron heating in SNR collisionless shocks. The transition to fully equilibrated SNR shocks at speeds below 400 [kms$^{-1}$]{} may be due to a less turbulent and more laminar structure at low shock speeds and Mach numbers. This allows the electrons to experience a more uniform cross-shock potential, and hence higher energization, compared to higher shock speeds and Mach numbers, where the shock jump is more turbulent and disordered. The magnetosonic Mach numbers of SNR shocks may match those in solar wind shocks if there is approximately an order of magnitude increase in the Alfvén speed of the preshock gas in SNRs compared to the average ISM value. This is possible if there is a moderately amplified preshock magnetic field ($\Delta B/B\,\sim\,$10), which is readily provided by compression and heating in a cosmic ray precursor. In solar wind shocks, the precursor is instead due to suprathermal, non-relativistic ions, resulting in the [$T_e/T_p$]{}$\propto\,1/V_s$ relation seen in the solar wind.
While Balmer-dominated shocks have allowed us to elucidate some of the physics of electron-ion temperature equilibration, ion-neutral damping limits most of those observed to cases where the shock structure has not become strongly modified by cosmic ray acceleration. Given this limitation, [*electron-ion equilibration studies of fast, collisionless shocks in fully pre-ionized gas would be highly desirable*]{}. Such a sample would allow the equilibration to be studied over a range of speeds where shocks are increasingly affected by feedback from the accelerated cosmic rays. In such circumstances it is unclear what would happen to the [$T_e/T_p$]{} versus [$V_s$]{} relation. If, as predicted by Amato & Blasi (2009), Bell’s non-resonant cosmic ray instability takes over from the resonant instability at the highest shock speeds, then additional electron heating may occur in the fastest shocks ([$V_s$]{}$\gtrsim$5000 [kms$^{-1}$]{}), resulting in substantial deviation from the [$T_e/T_p$]{}$\propto\,V_s^{-2}$ relation. Such deviations may already have been seen in the fastest ([$V_s$]{}$\gtrsim$2000 [kms$^{-1}$]{}) shocks, where there is evidence that [$T_e/T_p$]{} does not settle down to [$m_e/m_p$]{}, but rather $\sim$0.03. Other deviations may have been detected in SNR 0509$-$67.5, where [$T_e/T_p$]{} for a 5000 [kms$^{-1}$]{} shock has been estimated to be $\sim$0.2, substantially higher than predicted by the inverse squared relation. However, the study of such shocks would be challenging. Without an H$\alpha$ broad component to constrain the range of [$V_s$]{}, shock speeds would have to be determined by other means, such as proper motion studies. That would require X-ray and/or UV imagery of SNRs with well-constrained distances (such as those in the LMC or SMC), over multiple epochs. It would also require spectroscopy of the forward shocks in these SNRs, in order to constrain both the electron temperature (via X-ray continuum fits and UV emission line ratios) and the ion temperature (e.g., via He II, C IV, N V and O VI resonance lines).
An important test of our ideas concerning electron heating by cosmic ray generated waves in a shock precursor would be to measure electron temperatures at SNR shocks exhibiting strong cosmic ray modification and substantial magnetic field amplification ($\Delta\,B/B\,\gtrsim$100). Several SNRs show X-ray filaments produced by synchrotron radiation from cosmic ray electrons, and these are generally distinct from those shocks with strong Balmer emission. The absence of neutral material ahead of these shocks means that optical and UV emission is weak, and electron temperatures will have to be measured from X-ray spectra. Difficulties arise in distinguishing thermal electron bremsstrahlung from cosmic ray electron synchrotron emission, requiring data of high signal to noise. Further complications in some SNRs (e.g., Cas A) stem from scattering of bright X-ray emission from the ejecta such that it coincides spatially with emission from the forward shock. Such scattering may either be local, due to SNR dust, or instrumental, due to telescope imperfections. This leaves SN 1006 as the most promising target for such an observation, since, due to its evolutionary state, only the outer layers of ejecta have encountered the reverse shock.
The development of missions like Solar Orbiter and Solar Probe Plus will allow [*in situ*]{} measurements of shocks in the solar wind, most likely associated with coronal mass ejections, much closer to the Sun. These will probe a different parameter regime, where the magnetic field pressure dominates over the gas pressure (low $\beta$, similar to ISM shocks). As such, measurements here might yield insights into the properties of similar plasma in the precursors of SNR shocks where the magnetic field has been amplified by cosmic rays.
P. G. acknowledges support by HST grant HST-GO-11184.07-A to Towson University. JML acknowledges support by grant NNH10A009I from the NASA Astrophysics Data Analysis Program, and by basic research funds of the Office of Naval Research.
A. A. [Abdo]{}, M. [Ackermann]{}, M. [Ajello]{} et al.. . *Astrophysical Journal*, 734:0 28, June 2011. [doi: ]{}[10.1088/0004-637X/734/1/28]{}.
F. [Acero]{}, F. [Aharonian]{}, A. G. [Akhperjanian]{} et al.. . *Astronomy and Astrophysics*, 516:0 A62, June 2010. [doi: ]{}[10.1051/0004-6361/200913916]{}.
N. [Achilleos]{}, C. [Bertucci]{}, C. T. [Russell]{} et al.. . *Journal of Geophysical Research*, 111:0 A03201, 2006. [doi: ]{}[10.1029/2005JA011297]{}.
F. [Aharonian]{} et al.. . *Astronomy and Astrophysics*, 351:0 330, Nov 1999. [doi: ]{}[ ]{}.
F. [Aharonian]{}, A. G. [Akhperjanian]{}, A. R. [Bazer-Bachi]{} et al.. . *Astronomy and Astrophysics*, 449:0 223–242, April 2006. [doi: ]{}[10.1051/0004-6361:20054279]{}.
T. [Amano]{} and M. [Hoshino]{}. . *Physical Review Letters*, 1040 (20):0 181102, May 2010. [doi: ]{}[10.1103/PhysRevLett.104.181102]{}.
E. [Amato]{} and P. [Blasi]{}. . *Monthly Notices of the Royal Astronomical Society*, 392:0 1591–1600, February 2009. [doi: ]{}[10.1111/j.1365-2966.2008.14200.x]{}.
S. D. [Bale]{} and F. S. [Mozer]{}. . *Physical Review Letters*, 980 (20):0 205001, May 2007. [doi: ]{}[10.1103/PhysRevLett.98.205001]{}.
M. [Balikhin]{}, M. [Gedalin]{}, and A. [Petrukovich]{}. . *Physical Review Letters*, 70:0 1259–1262, March 1993. [doi: ]{}[10.1103/PhysRevLett.70.1259]{}.
J. [Ballet]{}. . *Advances in Space Research*, 37:0 1902–1908, 2006. [doi: ]{}[10.1016/j.asr.2005.03.047]{}.
A. [Bamba]{}, R. [Yamazaki]{}, T. [Yoshida]{}, T. [Terasawa]{}, and K. [Koyama]{}. . *Astrophysical Journal*, 621:0 793–802, March 2005. [doi: ]{}[10.1086/427620]{}.
A. R. [Bell]{} and S. G. [Lucek]{}. . *Monthly Notices of the Royal Astronomical Society*, 321:0 433–438, March 2001. [doi: ]{}[10.1046/j.1365-8711.2001.04063.x]{}.
A. R. [Bell]{}. . *Monthly Notices of the Royal Astronomical Society*, 353:0 550–558, September 2004. [doi: ]{}[10.1111/j.1365-2966.2004.08097.x]{}.
A. R. [Bell]{}. . *Monthly Notices of the Royal Astronomical Society*, 358:0 181–187, March 2005. [doi: ]{}[10.1111/j.1365-2966.2005.08774.x]{}.
R. [Blandford]{} and D. [Eichler]{}. . *Physics Reports*, 154:0 1–75, October 1987. [doi: ]{}[10.1016/0370-1573(87)90134-7]{}.
P. [Blasi]{}, G. [Morlino]{}, R. [Bandiera]{}, E. [Amato]{}, and D. [Caprioli]{}. . *Astrophysical Journal*, 755:0 121, August 2012. [doi: ]{}[10.1088/0004-637X/755/2/121]{}.
E. G. [Berezhko]{}, L. T. [Ksenofontov]{}, and H. J. [Völk]{}. . *Astronomy and Astrophysics*, 412:0 L11–L14, December 2003. [doi: ]{}[10.1051/0004-6361:20031667]{}.
R. [Bruno]{} and V. [Carbone]{}. . *Living Reviews in Solar Physics*, 2:0 4, September 2005. [doi: ]{}.
P. J. [Cargill]{} and K. [Papadopoulos]{}. . *Astrophysical Journal*, 329:0 L29–L32, June 1988. [doi: ]{}[10.1086/185170]{}.
C. L. [Carilli]{} and G. B. [Taylor]{}. . *Annual Review of Astronomy and Astrophysics*, 40:0 319–348, 2002. [doi: ]{}[10.1146/annurev.astro.40.060401.093852]{}.
R. A. [Chevalier]{} and J. C. [Raymond]{}. . *Astrophysical Journal*, 225:0 L27–L30, October 1978. [doi: ]{}[10.1086/182785]{}.
R. A. [Chevalier]{}, R. P. [Kirshner]{}, and J. C. [Raymond]{}. . *Astrophysical Journal*, 235:0 186–195, January 1980. [doi: ]{}[10.1086/157623]{}.
D. P. [Cox]{} and J. C. [Raymond]{}. . *Astrophysical Journal*, 298:0 651–659, November 1985. [doi: ]{}[10.1086/163649]{}.
F. [de Hoffmann]{} and E. [Teller]{}. . *Physical Review*, 80:0 692–703, November 1950. [doi: ]{}[10.1103/PhysRev.80.692]{}.
A. [Decourchelle]{} and D. [Ellison]{}. . *Astrophysical Journal Letters*, 543:0 L57-L60, November 2000. [doi: ]{}[10.1086/318167]{}.
M. E. [Dieckmann]{}, A. [Bret]{}, G. [Sari]{}, E. [Perez Alvaro]{}, I. [Kourakis]{}, and M. [Borghesi]{}. . *Plasma Physics and Controlled Fusion*, 54:0 085015, August 2012. [doi: ]{}[10.1088/0741-3335/54/8/085015]{}.
A. P. [Dimmock]{}, M. A. [Balikhin]{}, and Y. [Hobara]{}. . *Annales Geophysicae*, 29:0 815–822, May 2011. [doi: ]{}[10.5194/angeo-29-815-2011]{}.
B. T. [Draine]{} and C. F. [McKee]{}. . *Annual Review of Astronomy and Astrophysics*, 31:0 373–432, 1993. [doi: ]{}[10.1146/annurev.aa.31.090193.002105]{}.
L. O’C. [Drury]{}, P. [Duffy]{}, and J. G. [Kirk]{}. . *Astronomy and Astrophysics*, 309:0 1002–1010, May 1996. [doi: ]{}
J. P. [Edmiston]{}, and C. F. [Kennel]{}. . *Journal of Plasma Physics*, 32:0 429–441, December 1984. [doi: ]{}[10.1017/S002237780000218X]{}.
D. C. [Ellison]{}, D. J. [Patnaude]{}, P. [Slane]{}, P. [Blasi]{} and S. [Gabici]{}. . *Astrophysical Journal*, 661:0 879-891, June 2007. [doi: ]{}[10.1086/517518]{}.
K. A. [Eriksen]{}, J. P. [Hughes]{}, C. [Badenes]{}, R. [Fesen]{}, P. [Ghavamian]{}, D. [Moffett]{}, P. P. [Plucinsky]{}, C. E. [Rakowski]{}, E. M. [Reynoso]{}, and P. [Slane]{}. . *Astrophysical Journal*, 728:0 L28, February 2011. [doi: ]{}[10.1088/2041-8205/728/2/L28]{}.
K. [France]{}, R. [McCray]{}, S. V. [Penton]{}, R. P. [Kirshner]{}, P. [Challis]{}, J. M. [Laming]{}, P. [Bouchet]{}, R. [Chevalier]{}, P. M. [Garnavich]{}, C. [Fransson]{}, K. [Heng]{}, J. [Larsson]{}, S. [Lawrence]{}, P. [Lundqvist]{}, N. [Panagia]{}, C. S. J. [Pun]{}, N. [Smith]{}, J. [Sollerman]{}, G. [Sonneborn]{}, B. [Sugerman]{}, and J. C. [Wheeler]{} . *Astrophysical Journal*, 743:0 186, December 2011. [doi: ]{}[10.1088/0004-637X/743/2/186]{}.
A. A. [Galeev]{}. . In [D. J. Williams]{}, editor, *Physics of Solar Planetary Environments*, pages 464–490, 1976.
M. [Gedalin]{}, M. A. [Balikhin]{}, and D. [Eichler]{}. . *Physical Review E*, 77:0 026403, February 2008. [doi: ]{}[10.1103/PhysRevE.77.026403]{}.
P. [Ghavamian]{}. . *PhD Thesis, Rice University* December 1999.
P. [Ghavamian]{}, J. C. [Raymond]{}, P. [Hartigan]{}, and W. P. [Blair]{}. . *Astrophysical Journal*, 535:0 266–2274, May 2000. [doi: ]{}[10.1086/308811]{}.
P. [Ghavamian]{}, J. C. [Raymond]{}, R. C. [Smith]{}, and P. [Hartigan]{} . *Astrophysical Journal*, 547:0 995–1009, February 2001. [doi: ]{}[10.1086/318408]{}.
P. [Ghavamian]{}, P. F. [Winkler]{}, J. C. [Raymond]{}, and K. S. [Long]{} . *Astrophysical Journal*, 572:0 888–896, June 2002. [doi: ]{}[10.1086/340437]{}.
P. [Ghavamian]{}, C. E. [Rakowski]{}, J. P. [Hughes]{}, and T. B. [Williams]{} . *Astrophysical Journal*, 590:0 833–845, June 2003. [doi: ]{}[10.1086/375161]{}.
P. [Ghavamian]{}, J. M. [Laming]{}, and C. E. [Rakowski]{}. . *Astrophysical Journal*, 654:0 L69–L72, January 2007. [doi: ]{}[10.1086/510740]{}.
C. C. [Goodrich]{} and J. D. [Scudder]{}. . *Journal of Geophysical Research*, 89:0 6654–6662, August 1984. [doi: ]{}[10.1029/JA089iA08p06654]{}.
E. [Gosset]{}, M. [De Becker]{}, Y. [Nazé]{}, S. [Carpano]{}, G. [Rauw]{}, I. I. [Antokhin]{}, J.-M. [Vreuz]{}, and A. M. T. [Pollock]{}. . *Astronomy and Astrophysics*, 527:0 A66, March 2011. [doi: ]{}[10.1051/0004-6361/200912510]{}.
E. W. [Greenstadt]{} and M. M. [Mellott]{}. . *Journal of Geophysical Research* , 92:0 4730–4734, May 1987. [doi: ]{}[10.1029/JA092iA05p04730]{}.
E. A. [Helder]{}, J. [Vink]{}, C. GH. [Bassa]{}, A. [Bamba]{}, J. A. M. [Bleeker]{}, S. [Funk]{}, P. [Ghavamian]{}, K. J. [van der Heyden]{}, F. [Verbunt]{}, and R. [Yamazaki]{}. . *Science*, 325:0 719, August 2009. [doi: ]{}[10.1126/science.1173383]{}.
E. A. [Helder]{}, D. [Kosenko]{}, and J. [Vink]{}. . *Astrophysical Journal Letters*, 719:0 L140, August 2010 [doi: ]{}[10.1088/2041-8205/719/2/L140]{}.
E. A. [Helder]{}, J. [Vink]{} and C. G. [Bassa]{}.. . *Astrophysical Journal*, 737:0 85, August 2011 [doi: ]{}[10.1088/0004-637X/737/2/85]{}.
K. [Heng]{} and R. [McCray]{}. . *Astrophysical Journal*, 654:0 923–937, January 2007. [doi: ]{}[ 10.1086/509601]{}.
K. [Heng]{}, M. [van Adelsberg]{}, R. [McCray]{}, and J. C. [Raymond]{}. . *Astrophysical Journal*, 668:0 275–284, October 2007. [doi: ]{}[10.1086/521298]{}.
J. J. [Hester]{}, J. C. [Raymond]{}, and W. P. [Blair]{}. . *Astrophysical Journal*, 420:0 721–745, January 1994. [doi: ]{}[10.1086/173598]{}.
H. [Itoh]{}. . *Publications of the Astronomical Society of Japan*, 30:0 489–498, 1978. [doi: ]{}.
F. C. [Jones]{} and D. C. [Ellison]{}. . *Space Science Reviews*, 58:0 259–346, December 1991. [doi: ]{}[10.1007/BF01206003]{}.
C. F. F. [Karney]{}. . *Physics of Fluids*, 21:0 1584–1599, September 1978. [doi: ]{}[10.1063/1.862406]{}.
S. [Katsuda]{}, R. [Petre]{}, K. [Mori]{}, S. P. [Reynolds]{}, K. S. [Long]{}, P. F. [Winkler]{}, and H. [Tsunemi]{}. . *Astrophysical Journal*, 723:0 383–392, November 2010. [doi: ]{}[10.1088/0004-637X/723/1/383]{}.
S. [Katsuda]{}, R. [Petre]{}, J. P. [Hughes]{}, U. [Hwang]{}, H. [Yamagauchi]{}, A. [Hayato]{}, K. [Mori]{}, and H. [Tsunemi]{}. . *Astrophysical Journal*, 709:0 1387–1395, February 2010. [doi: ]{}[10.1088/0004-637X/709/2/1387]{}.
S. [Katsuda]{}, K. S. [Long]{}, R. [Petre]{}, S. P. [Reynolds]{}, B. J. [Williams]{}, and P. F. [Winkler]{}. . *arXiv:1211.6443* 2012. [doi: ]{}.
C. F. [Kennel]{}, J. P. [Edmiston]{} and T. [Hada]{}. . *Washington DC American Geophysical Union Monograph Series*, 34:0 1–36 [doi: ]{}[ ]{}.
R. [Kirshner]{}, P. F. [Winkler]{}, and R. A. [Chevalier]{}. . *Astrophysical Journal*, 315:0 L135–L139, April 1987. [doi: ]{}[10.1086/184875]{}.
K. E. [Korreck]{}, J. C. [Raymond]{}, T. H. [Zurbuchen]{} and P. [Ghavamian]{}. . *Astrophysical Journal*, 615:0 280–285, November 2004. [doi: ]{}[10.1086/424481]{}.
K. [Koyama]{}, R. [Petre]{}, E. V. [Gotthelf]{}, U. [Hwang]{}, M. [Matsuura]{}, M. [Ozaki]{}, and S. S. [Holt]{}. . *Nature*, 378:0 255–258, November 1995. [doi: ]{}[10.1038/378255a0]{}.
K. [Koyama]{}, K. [Kinugasa]{}, K. [Matsuzaki]{}, M. [Nishiuchi]{}, M. [Sugizaki]{}, K. [Torii]{}, S. [Yamauchi]{}, and B. [Aschenbach]{}. . *Publications of the Astronomical Society of Japan*, 49:0 L7–L11, June 1997. [doi: ]{}.
V. V. [Krasnoselskikh]{}, B. [Lemb[è]{}ge]{}, P. [Savoini]{} and V. V. [Lobzin]{}. . *Physics of Plasmas*, 9:0 1192–1209, April 2002. [doi: ]{}[10.1063/1.1457465]{}.
R. M. [Kulsrud]{} and C. J. [Cesarsky]{}. . *Astrophysical Letters*, 8:0 189, March 1971. [doi: ]{}.
J. M. [Laming]{}, J. C. [Raymond]{}, B. M. [McLaughlin]{} and W. P. [Blair]{}. . *Astrophysical Journal*, 472:0 267–274, November 1996. [doi: ]{}[10.1086/178061]{}.
J. M. [Laming]{}. . *Astrophysical Journal Supplement Series*, 127:0 409–413, April 2000. [doi: ]{}[10.1086/313325]{}.
J. M. [Laming]{}. . *Astrophysical Journal*, 546:0 1149–1158, January 2001. [doi: ]{}[10.1086/318317]{}.
J. J. [Lee]{}, B.-C. [Koo]{}, J. C. [Raymond]{}, P. [Ghavamian]{}, T.-S. [Pyo]{}, A. [Tajitsu]{} and M. [Hayashi]{}. . *Astrophysical Journal Letters*, 659:0 L133-L136, April 2007. [doi: ]{}[10.1086/517520]{}.
J. J. [Lee]{}, J. C. [Raymond]{}, S. [Park]{}, W. P. [Blair]{}, P. [Ghavamian]{}, P. F. [Winkler]{}, K. [Korreck]{}. . *Astrophysical Journal Letters*, 715:0 L146-L149, June 2010. [doi: ]{}[10.1088/2041-8205/715/2/L146]{}.
B. [Lefebvre]{}, S. J. [Schwartz]{}, A. F. [Fazakerley]{}, and P. [D[é]{}cr[é]{}au]{}. . *Journal of Geophysical Research (Space Physics)*, 112:0 A09212, September 2007. [doi: ]{}[10.1029/2007JA012277]{}.
Q. [Luo]{} and D. [Melrose]{}. . *Monthly Notices of the Royal Astronomical Society*, 397:0 1402–1409, August 2009. [doi: ]{}[10.1111/j.1365-2966.2009.14872.x]{}.
M. [Markevitch]{}, F. [Govoni]{}, G. [Brunetti]{}, and D. [Jerius]{}. . *Astrophysical Journal*, 627:0 733–738, July 2005. [doi: ]{}[10.1086/430695]{}.
M. [Markevitch]{} and A. [Vikhlinin]{}. . *Physics Reports*, 443:0 1–53, May 2007. [doi: ]{}[10.1016/j.physrep.2007.01.001]{}.
A. [Masters]{}, S. J. [Schwartz]{}, E. M. [Henley]{}, M. F. [Thomsen]{}, B. [Zieger]{}, A. J. [Coates]{}, N. [Achilleos]{}, J. [Mitchell]{}, K. C. [Hansen]{}, and M. K. [Dougherty]{}. . *Journal of Geophysical Research (Space Physics)*, 116:0 A10107, October 2011. [doi: ]{}[10.1029/2011JA016941]{}.
S. [Matsukiyo]{}. . *Physics of Plasmas*, 17:0 042901, April 2010. [doi: ]{}[10.1063/1.3372137]{}.
K. G. [McClements]{}, R. O. [Dendy]{}, R. [Bingham]{}, J. G. [Kirk]{}, and L. O’C. [Drury]{}. . *Monthly Notices of the Royal Astronomical Society*, 291:0 241–249, October 1997. [doi: ]{}.
G. [Morlino]{}, E. [Amato]{}, P. [Blasi]{}, and D. [Caprioli]{}. . *Monthly Notices of the Royal Astronomical Society*, 405:0 L21–L25, June 2010. [doi: ]{}[10.1111/j.1745-3933.2010.00851.x]{}.
G. [Morlino]{}, R. [Bandiera]{}, P. [Blasi]{}, and E. [Amato]{}. . *Astrophysical Journal*, 760:0 137, December 2012. [doi: ]{}[ 10.1088/0004-637X/760/2/137]{}.
G. [Morlino]{}, P. [Blasi]{}, R. [Bandiera]{}, E. [Amato]{}, and D. [Caprioli]{}. . *arXiv:1211.6148*, November 2012. [doi: ]{}.
S. [Orlando]{}, F. [Bocchino]{}, F. [Reale]{}, F. [Peres]{}, and O. [Petruk]{}. . *Astronomy and Astrophysics*, 470:0 927–939, August 2007. [doi: ]{}[10.1051/0004-6361:20066045]{}.
K. [Papadopoulos]{}. . *ESASP*, 161:0 409, November 1981. [doi: ]{}.
K. [Papadopoulos]{}. . *Astrophysics and Space Science*, 144:0 535–547, May 1988. [doi: ]{}[10.1007/BF00793203]{}.
O. [Petruk]{}, F. [Bocchino]{}, G. [Castelletti]{}, G. [Dubner]{}, D. [Lakubovskyi]{}, M. [Kirsch]{}, M. [Miceli]{} and I. [Telezhinsky]{}. . *Proc. “The X-ray Universe 2008”, Granada, Spain*, 109, July 2008. [doi: ]{}.
C. E. [Rakowski]{}, P. [Ghavamian]{} and J. P. [Hughes]{}. . *Astrophysical Journal*, 590:0 846-857, June 2003 [doi: ]{}[10.1086/375162]{}
C. E. [Rakowski]{}, J. M. [Laming]{}, and P. [Ghavamian]{}. . *Astrophysical Journal*, 684:0 348–357, September 2008. [doi: ]{}[10.1086/590245]{}.
C. E. [Rakowski]{}, P. [Ghavamian]{}, and J. M. [Laming]{}. . *Astrophysical Journal*, 696:0 2195–2205, May 2009. [doi: ]{}[ 10.1088/0004-637X/696/2/2195]{}.
J. C. [Raymond]{}, W. P. [Blair]{}, and K. S. [Long]{}. . *Astrophysical Journal*, 454:0 L31–L34, November 1995. [doi: ]{}[10.1086/309772]{}.
J. C. [Raymond]{}, J. [Vink]{}, E. A. [Helder]{}, and A. [de Laat]{} . *Astrophysical Journal*, 731:0 L14, April 2011. [doi: ]{}[10.1088/2041-8205/731/1/L14]{}.
B. [Reville]{}, J. G. [Kirk]{}, P. [Duffy]{}, and S. [O’Sullivan]{}. . *Astronomy and Astrophysics*, 475:0 435–439, November 2007. [doi: ]{}[10.1051/0004-6361:20078336]{}.
M. A. [Riquelme]{} and A. [Spitkovsky]{}. . *Astrophysical Journal*, 733:0 63, May 2011. [doi: ]{}[10.1088/0004-637X/733/1/63]{}.
H. R. [Russell]{}, B. R. [McNamara]{}, J. S. [Sanders]{}, A. C. [Fabian]{}, P. E. J. [Nulsen]{}, R. E. A. [Canning]{}, S. A. [Baum]{}, M. [Donahue]{}, A. [Edge]{}, L. J. [King]{} and C. P. [O’Dea]{}. . *Monthly Notices of the Royal Astronomical Society*, 423:0 236–255, June 2012. [doi: ]{}[10.1111/j.1365-2966.2012.20808.x]{}.
K. M. [Schure]{}, A. R. [Bell]{}, L. O’C. [Drury]{}, and A. M. [Bykov]{}. . *Space Science Reviews*, 173:0 491–519, November 2012. [doi: ]{}[10.1007/s11214-012-9871-7]{}.
S. J. [Schwartz]{}, M. F. [Thomsen]{}, S. J. [Bame]{}, and J. [Stansberry]{}. . *Journal of Geophysical Research*, 93:0 12923–12931, November 1988. [doi: ]{}[10.1029/JA093iA11p12923]{}.
S. J. [Schwartz]{}, E. G. [Zweibel]{}, and M. [Goldman]{}. . *Space Science Reviews*, 2013 [doi: ]{}[TBD]{}
J. D. [Scudder]{}, T. L. [Aggson]{}, A. [Mangeney]{}, C. [Lacombe]{}, and C. C. [Harvey]{}. . *Journal of Geophysical Research*, 91:0 11019–11052, October 1986. [doi: ]{}[10.1029/JA091iA10p11019]{}.
J. D. [Scudder]{}, A. [Mangeney]{}, C. [Lacombe]{}, C. C. [Harvey]{}, and C. S. [Wu]{}. . *Journal of Geophysical Research*, 91:0 11075–11097, October 1986. [doi: ]{}[10.1029/JA091iA10p11075]{}.
M. J. [Seaton]{}. . *Monthly Notices of the Royal Astronomical Society*, 127:0 191–194, December 1964. [doi: ]{}.
J. M. [Shull]{} and C. F. [McKee]{}. . *Astrophysical Journal*, 227:0 131–149, January 1979. [doi: ]{}[10.1086/156712]{}.
L. [Sironi]{} and A. [Spitkovsky]{}. . *Astrophysical Journal* , 726:0 75, January 2011. [doi: ]{}[10.1088/0004-637X/726/2/75]{}.
J. [Skilling]{}. . *Nature*, 258:0 687–688, December 1975. [doi: ]{}[10.1038/258687a0]{}.
P. [Slane]{}, B. M. [Gaensler]{}, T. M. [Dame]{}, J. P. [Hughes]{}, P. [Plucinsky]{}, and A. [Green]{}. . *Astrophysical Journal*, 525:0 357–367, November 1999. [doi: ]{}[10.1086/307893]{}.
R. C. [Smith]{}, R. P. [Kirshner]{}, W. P. [Blair]{}, and P. F. [Winkler]{}. . *Astrophysical Journal*, 375:0 652–662, July 1991. [doi: ]{}[10.1086/170228]{}.
R. C. [Smith]{}, J. C. [Raymond]{}, and J. M. [Laming]{}. . *Astrophysical Journal*, 420:0 286–293, January 1994. [doi: ]{}[10.1086/1735581]{}.
J. [Sollerman]{}, P. [Ghavamian]{}, P. [Lundqvist]{}, and R. C. [Smith]{}. . *Astronomy and Astrophysics*, 407:0 249–257, August 2003. [doi: ]{}[10.1051/0004-6361:20030839]{}.
L. [Spitzer]{}. . *New York: Interscience*, 1964. [doi: ]{}.
T. [Tanaka]{}, Y. [Uchiyama]{}, F. A. [Aharonian]{}, T. [Takahashi]{}, A. [Bamba]{}, J. S. [Hiraka]{}, J. [Kataoka]{}, T. [Kishishita]{}, M. [Kokubun]{}, K. [Mori]{}, K. [Nakazawa]{}, R. [Petre]{}, H. [Tajima]{}, and S. [Watanabe]{}. . *Astrophysical Journal*, 685:0 988–1004, October 2008. [doi: ]{}[10.1086/591020]{}.
R. A. [Treumann]{}. , 17:0 409–535, December 2009 [doi: ]{}[10.1007/s00159-009-0024-2]{}.
D. [Tseliakhovich]{}, C. M. [Hirata]{}, and K. [Heng]{}. . *Monthly Notices of the Royal Astronomical Society*, 422:0 2357–2371, May 2012. [doi: ]{}[10.1111/j.1365-2966.2012.20787.x]{}.
T. [Umeda]{}, Y. [Kidani]{}, S. [Matsukiyo]{}, and R. [Yamazaki]{}. . *Journal of Geophysical Research*, 117:0 A03206, March 2012. [doi: ]{}[10.1029/2011JA017182]{}.
T. [Umeda]{}, Y. [Kidani]{}, S. [Matsukiyo]{}, and R. [Yamazaki]{}. . *Physics of Plasmas*, 19:0 042109, April 2012. [doi: ]{}[10.1063/1.3703319]{}.
M. [van Adelsberg]{}, K. [Heng]{}, R. [McCray]{}, and J. C. [Raymond]{}. . *Astrophysical Journal*, 689:0 1089–1104, December 2008. [doi: ]{}[10.1086/592680]{}.
J. [Vink]{} and J. M. [Laming]{}. . *Astrophysical Journal*, 584:0 758–769, February 2003. [doi: ]{}[10.1086/345832]{}.
J. [Vink]{}, R. [Yamazaki]{}, E. A. [Helder]{}, and K. M. [Schure]{}. . *Astrophysical Journal*, 722:0 1727–1734, October 2010. [doi: ]{}[10.1088/0004-637X/722/2/1727]{}.
A. Y. [Wagner]{}, J.-J. [Lee]{}, J. C. [Raymond]{}, T. W. [Hartquist]{}, and S. A. E. G. [Falle]{}. . *Astrophysical Journal*, 690:0 1412–1423, January 2009. [doi: ]{}[10.1088/0004-637X/690/2/1412]{}.
J. [Warren]{}, J. P. [Hughes]{}. . *Astrophysical Journal*, 608:0 261–273, June 2004. [doi: ]{}[10.1086/392528]{}.
J. S. [Warren]{}, J. P. [Hughes]{}, C. [Badenes]{}, P. [Ghavamian]{}, C. F. [McKee]{}, D. [Moffett]{}, P. [Plucinsky]{}, C. E. [Rakowski]{}, E. [Reynoso]{}, and P. [Slane]{}. . *Astrophysical Journal*, 634:0 376–389, November 2005. [doi: ]{}[10.1086/496941]{}.
B. J. [Williams]{}, W. P. [Blair]{}, J. M. [Blondin]{}, K. J. [Borkowski]{}, P. [Ghavamian]{}, K. S. [Long]{}, J. C. [Raymond]{}, S. P. [Reynolds]{}, J. [Rho]{}, and P. F. [Winkler]{}. . *Astrophysical Journal*, 741:0 96, November 2011. [doi: ]{}[10.1088/0004-637X/741/2/96]{}.
C. S. [Wu]{}, D. [Winske]{}, M. [Tanaka]{}, K. [Papadopoulos]{}, K. [Akimoto]{}, C. C. [Goodrich]{}, Y. M. [Zhou]{}, S. T. [Tsai]{}, P. [Rodriguez]{}, and C. S. [Lin]{}. . *Space Science Reviews*, 37:0 63–109, January 1984. [doi: ]{}[10.1007/BF00213958]{}.
S. A. [Zhekov]{}, R. [McCray]{}, D. [Dewey]{}, C. R. [Canizares]{}, K. J. [Borkowski]{}, D. N. [Burrows]{}, and S. [Park]{} . *Astrophysical Journal*, 692:0 1190–1204, February 2009. [doi: ]{}[10.1088/0004-637X/692/2/1190]{}.
---
abstract: 'The early epoch in which the first stars and galaxies formed is among the most exciting unexplored eras of the Universe. A major research effort focuses on probing this era with the 21-cm spectral line of hydrogen. While most research focused on statistics like the 21-cm power spectrum or the sky-averaged global signal, there are other ways to analyze tomographic 21-cm maps, which may lead to novel insights. We suggest statistics based on quantiles as a method to probe non-Gaussianities of the 21-cm signal. We show that they can be used in particular to probe the variance, skewness, and kurtosis of the temperature distribution, but are more flexible and robust than these standard statistics. We test these statistics on a range of possible astrophysical models, including different galactic halo masses, star-formation efficiencies, and spectra of the X-ray heating sources, plus an exotic model with an excess early radio background. Simulating data with angular resolution and thermal noise as expected for the Square Kilometre Array (SKA), we conclude that these statistics can be measured out to redshifts above 20 and offer a promising statistical method for probing early cosmic history.'
author:
- |
Alon Banet$^{1}$[^1], Rennan Barkana$^{1}$, Anastasia Fialkov$^{2}$, Or Guttman$^{1}$,\
$^{1}$ School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel\
$^{2}$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
title: 'Quantiles as Robust Probes of Non-Gaussianity in 21-cm Images'
---
\[firstpage\]
dark ages, reionization, first stars – cosmology: theory – galaxies: high redshift
Introduction {#Sec:Intro}
============
Ever since Penzias and Wilson discovered the Cosmic Microwave Background (CMB) in 1964, cosmologists have had a good understanding of the Universe in its early stages. Meanwhile, modern telescopes have allowed astronomers to study astronomical objects in the more recent Universe, reaching times as early as a billion years after the Big Bang. Despite tremendous progress in recent decades, the exciting period in between, in which stars and galaxies first formed and evolved, remains largely unobserved to this day.
While a few bright galaxies that date back to 400 Myr after the Big Bang have been detected directly via telescopes, it is thought that most of the early stars are distributed in a large number of very small galaxies, making them difficult to observe directly. The most promising probe of these early times is the spin-flip transition of neutral hydrogen. Since redshift acts as the line-of-sight dimension, it can be used to produce a 3-D tomography map of the cosmic gas.
The brightness temperature of the 21-cm signal (which is measured relative to the CMB temperature) is determined by the spin temperature, which denotes the abundance of the excited level of the hyperfine split of hydrogen relative to the ground level. The spin temperature is affected by astrophysical and cosmological events, and so it may allow us to study star and galaxy formation within dark matter halos, as well as phenomena like cosmic reionization and early cosmic heating.
Previous studies have shown that the 21-cm signal should have large spatial fluctuations, which stem not only from reionization at low redshifts, but also from fluctuations in the Ly$\alpha$ intensity during the Ly$\alpha$ coupling era [@Barkana:2005] and fluctuations in the X-ray background during the era of cosmic heating [@Pritchard:2007]. Research over this past decade has focused on the statistics of these fluctuations and in particular, on the 21-cm power spectrum, a highly promising measure of the 21-cm signal. Since the foregrounds are expected to have a smooth spectrum, the power spectrum may be measured in the near future. Another approach, pursued by both theorists and experimentalists is the sky-averaged global 21-cm signal, which could prove useful in independently constraining the parameters of the early universe. The first claimed detection of a cosmological 21-cm signal is of the global signal at cosmic dawn, by the EDGES experiment [@Bowman:2018]. The surprisingly-deep absorption, if confirmed, would require an exotic explanation, such as an interaction with dark matter that cools the baryons [@Barkana:2018], or an enhanced early radio background (discussed further below).
These methods span only part of the richness the 21-cm signal holds. In particular, they are not able to probe non-Gaussianities in the signal caused by the non-linear processes described above. In the near future the Square Kilometre Array (SKA) should provide a detailed 3-D map of the 21-cm fluctuation signal. However, the topic of analyzing such images has received only limited attention [@Koopmans:2015]. The aim of this study is to explore new ways of studying these maps, in order to gain new insights about the 21-cm signal. In the near future, maps will likely have a fairly low signal-to-noise ratio, so use of averaging through statistics will be necessary, but it is possible to go beyond the power spectrum. Use of the 21-cm bispectrum has been explored [e.g., @Bharadwaj; @Majumdar; @Trott]. A number of papers have explored use of the probability distribution function of 21-cm brightness temperature [e.g., @Ciardi; @Fur04; @Mellema; @Ichikawa; @Mondal], and in particular the skewness and kurtosis statistics, mostly during the reionization era [@Wyithe; @LOFAR; @Watkinson; @Kubota; @Kitt] and out to cosmic dawn [@Shimabukuro; @WatkinsonCD]. We explore these standard statistics within a wide range of possible models, and also suggest new statistics that serve as a more robust and flexible measure of non-Gaussian characteristics and can help us explore the evolution of the signal and understand the processes affecting it.
This paper is structured as follows. In section 2 we present the details of how we simulated 21-cm images, laying out the assumed models and their main parameters (2.1), and how we imitated observational aspects corresponding to resolution and noise (2.2). In section 3 we present our statistical methods and results, laying out measures based on quantiles (3.1), finding average radial profiles (3.2), showing how we corrected for thermal noise (3.3), plotting the variance and our alternative (the quantile average) (3.4), exploring the extra flexibility of the quantile average (3.5), plotting the skewness and our alternative (the quantile difference) (3.6), as well as the kurtosis and our alternative (the normalized quantile average) (3.7). Finally, we summarize and conclude in section 4.
Simulated 21-cm tomography maps {#Sec:Simulation}
===============================
We obtain the 21-cm image boxes using a semi-numerical simulation [e.g., @21cmfast] in a box that is 384 Mpc on a side, with 3 Mpc resolution (comoving units), as described by @Cohen:2017. The observed brightness temperature (relative to the CMB) depends on the spin temperature ${T_{\rm s}}$, the neutral fraction ${x_{\rm HI}}$, and the baryonic overdensity $\delta$, as follows [e.g., @Madau; @Barkana:2016]:
$$\label{Eq:Tb}
{T_{\rm{b}}}\propto {x_{\rm HI}}(1+\delta) \left(1-\frac{{T_{\rm CMB}}}{{T_{\rm s}}}\right)\ .$$
The spin temperature plays an important role in the evolution of the signal, and ${T_{\rm s}}$ can be expressed as a weighted mean [@Barkana:2016]: $$\label{Eq:Ts}
{T_{\rm s}}^{-1}=\frac{{T_{\rm CMB}}^{-1}+x_{\rm c}{T_{\rm k}}^{-1}+x_{\rm \alpha} {T_{\rm c}}^{-1}}{1+x_{\rm c}+x_{\rm \alpha}}\ ,$$ where ${T_{\rm k}}$ is the kinetic temperature of the gas, ${T_{\rm c}}$ is the effective (color) Ly$\alpha$ temperature (which is very close to ${T_{\rm k}}$), and $x_{\rm c}$, $x_{\rm\alpha}$ are the coupling coefficients for collisions and Ly$\alpha$ scattering, respectively.
At redshifts above $z \sim 200$, ${T_{\rm k}}$ was close to ${T_{\rm CMB}}$, causing the signal to vanish. As the universe expanded the gas cooled adiabatically, faster than the CMB, while atomic collisions kept the spin temperature coupled to ${T_{\rm k}}$, leading to an absorption signal. Eventually, the gas density decreased enough to make collisional coupling ineffective, the radiative coupling of ${T_{\rm s}}$ to ${T_{\rm CMB}}$ dominated and the signal diminished. As star formation began, Ly$\alpha$ photons were emitted and coupled ${T_{\rm s}}$ to ${T_{\rm k}}$ via the Wouthuysen-Field [@Wouthuysen:1952; @Field:1958] effect. Meanwhile, X-ray sources started heating the cosmic gas and UV photons ionized the gas around galaxies, creating ionized bubbles and initiating the process of cosmic reionization. It is useful to define three milestone redshifts. A typical theoretical set of definitions would be: Ly$\alpha$ coupling, defined as when the mean $x_{\rm
\alpha}=1$; the heating transition, defined as when the mean ${T_{\rm k}}={T_{\rm CMB}}$; and the mid-point of reionization, at which the mean ${x_{\rm HI}}=0.5$. However, in plots below we adopt a modified set of milestone redshifts, defined phenomenologically using peak redshifts of our main measure of the signal (the quantile average, discussed below in section \[s:qave\] and shown for our various models in Figure \[fig:AveAll\]).
Models and parameters
---------------------
To illustrate our method of exploring the characteristics of 21-cm intensity maps, we used several models that differ in their input astrophysical parameters. Given the early state of 21-cm observations, the details of astrophysics at high redshift are still highly uncertain, and it is important to consider a wide range of possible models. The following are the main parameters of our models [@Cohen:2017]:
1. Star formation efficiency (SFE) - the fraction of gas that is converted into stars, out of the gas that falls into star-forming dark matter halos. The overall SFE depends on the details of the process of star formation as well as the dominant feedback mechanisms. It strongly affects the 21-cm signal by influencing the amount of radiation produced by stars. For otherwise identical astrophysical parameters, a higher SFE implies an earlier onset of Ly$\alpha$ coupling, and a faster build-up of X-ray and ionizing radiation backgrounds; hence, a high SFE value shifts the cosmological 21-cm signal milestones to higher redshifts.
2. Cooling mass - the minimum halo mass in which there is significant gas cooling (and thus star formation). It depends on the cooling channels of the gas in halos, and is best described in terms of a minimum circular velocity $V_{\rm c}$. In atomic cooling halos, stars form with masses down to the cooling threshold of atomic hydrogen, given by $V_{\rm c}=16.5$ km s$^{-1}$. As an example of strong feedback, we consider a model of “Massive” halos in which stars only form in halos with masses of at least 100 times the mass required for atomic cooling, which corresponds to $V_{\rm
c}=76.5$ km s$^{-1}$. In this model star formation is delayed, so that the 21-cm milestones are shifted to lower redshift values.
3. The spectrum of early X-ray sources. The mean free path of an X-ray photon is proportional to $E_{\rm photon}^3$, and thus soft X-rays have relatively short mean free paths and therefore they are absorbed soon after emission, heating the local gas before suffering significant energy loss due to redshift effects. Thus, soft X-ray sources cause large spatial fluctuations in the gas temperature during cosmic heating. However, the most plausible sources of cosmic heating are X-ray binaries, which are expected to have a relatively hard spectrum [@Mirabel:2011; @Fragos]. Due to their long mean free path, the photons emitted from such sources will be absorbed late, after having lost a significant part of their energy as a result of cosmological redshift [@Fialkov:2014b]. Hence, a hard X-ray spectrum leads to cosmic heating at a later time and reduces the fluctuations in ${T_{\rm k}}$. Our standard assumption is a hard X-ray spectrum, but given the current uncertainty in the properties of high-redshift sources, we also consider a model with a soft X-ray spectrum.
4. X-ray radiation efficiency - proportional to the ratio of the X-ray luminosity to the star formation rate (SFR). It is normalized so that unity corresponds to low metallicity, low redshift starburst galaxies [@Mineo:2012]. Higher X-ray efficiency leads to earlier cosmic heating.
5. Excess radio background radiation. In order to explain the EDGES measurement of the global 21-cm signal at $z=17.2$ (which corresponds to $\nu=78.2 \rm{MHz}$) [@Bowman:2018], we consider an example of an exotic model with a greatly enhanced early radio background [@Bowman:2018; @Feng:2018; @Fialkov:2019]. In this model the background temperature at redshift $z$ is modified to:
$$\label{Eq:Trad}
T_{\rm rad}={T_{\rm CMB}}(1+z)\left[1+A_{\rm r}\left(\frac{\nu_{\rm obs}}{\rm{78~MHz}}\right)^\beta\right] ,$$
where $\nu_{\rm obs}$ is the observed frequency, $A_{\rm r}$ is the amplitude defined relative to the CMB temperature, and $\beta=-2.6$ is the spectral index, assumed to follow the shape of the observed radio background. The radio background enhances the 21-cm signal when there is absorption, i.e., when ${T_{\rm s}}\ll T_{\rm rad}$.
For our study we chose four models from [@Cohen:2017] plus an exotic model with an excess radio background, chosen to be generally consistent with the EDGES measurement [@Fialkov:2019]. The full parameters are listed in Table \[table:casesparam\].
  **Model**                   **$f_*$**   **$f_X$**   **SED**   **Halo type**
  --------------------------- ----------- ----------- --------- ----------------------------------------
  Standard ($\#$53)           0.05        1           Hard      Atomic cooling ($V_{\rm c}=16.5$ km/s)
  Low-Efficiency ($\#$37)     0.005       0.1         Hard      Atomic cooling
  Soft ($\#$55)               0.05        1           Soft      Atomic cooling
  Massive ($\#$186)           0.5         0.1         Hard      Massive ($V_{\rm c}=76.5$ km/s)
  Radio                       0.05        1           Hard      Atomic cooling
: Parameters of the models that we consider: star formation efficiency $f_*$, X-ray efficiency of X-ray sources $f_X$, spectral energy distribution (SED) of X-ray sources, and minimum circular velocity $V_{\rm c}$. The first four models are taken from @Cohen:2017 \[case numbers from there are indicated\]; these all have a total CMB optical depth $\tau$ = 0.066. The Radio model has a radio background amplitude $A_{\rm r}=4.2$ (measured at the central EDGES frequency of 78 MHz, and corresponding to $0.22\%$ of the CMB at 1.42 GHz) and $\tau$ = 0.0737.[]{data-label="table:casesparam"}
Angular resolution, thermal noise, and smoothing {#Sec:thermal}
------------------------------------------------
We generated mock signals that correspond to observations with the SKA (i.e., the low-frequency instrument of the phase-one SKA), in terms of various resolutions and the expected thermal noise for each. It is interesting to consider various resolutions (not only the highest achievable SKA resolution) since low resolution images have significantly lower noise. To create these mock 21-cm maps, we used the following procedure \[@Koopmans:2015; also L. Koopmans, personal communication\].
We adopted the reasonable approximation of a Gaussian point-spread function (PSF). Thus we used the 3 Mpc voxels (i.e., 3-D pixels) in our simulation box but for each resolution we smoothed the signal map with a two-dimensional Gaussian with full-width at half max (FWHM) of $2R$, where $R$ is the smoothing radius. We illustrate our results with three values of $R$, 10, 20, and 40 Mpc. In terms of the telescope array, the FWHM corresponds to $\sim 0.6\lambda/D$, where $\lambda$ is the wavelength and $D$ is the diameter within which baselines are included. Different resolutions correspond to using different values of $D$, so the dependence of the noise on the resolution depends on the distribution of baselines. In the frequency direction, the voxel size was always fixed at 3 Mpc. Now, the PSF also indicates how the thermal noise is correlated in the image. To produce a realistic noise map, we first generated a map of independent Gaussian random variables in each voxel with $\sigma=1$. We then smoothed (each slice of) the map using the same two-dimensional Gaussian with FWHM $2R$, which gave the correct angular correlations. The map was then rescaled so that each slice has the expected root mean square (RMS) value of the noise for the SKA, which depends on the redshift and the smoothing radius $R$ approximately as \[@Koopmans:2015; also L. Koopmans, personal communication\]: $$\label{Eq:noise}
\rm \sigma_{\rm thermal}=\begin{cases} a \left( \frac{1+z}{17}
\right)^b &\mbox{if } z \leq 16\ , \\a \left( \frac{1+z}{17} \right)^c
&{\rm otherwise}\ , \end{cases}$$ where $a$, $b$ and $c$ are the numerical coefficients for each smoothing radius given in Table \[table:noisecoef\] (assuming a 1000 hr integration by the SKA). Finally, the resulting noise map was added to the signal.
$R\, \rm[Mpc]$ 10 20 40
---------------- ----- ----- -----
$a\, \rm[mK]$ 15 4.0 1.8
$b$ 3.1 2.7 2.8
$c$ 4.7 5.1 4.2
: The numerical coefficients for each smoothing radius, for thermal noise of the SKA as given by Equation \[Eq:noise\].[]{data-label="table:noisecoef"}
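
For concreteness, the noise scaling of Equation \[Eq:noise\] with the Table \[table:noisecoef\] coefficients can be encoded in a few lines. The following helper is an illustrative sketch of our own (the function name and the use of Python are assumptions, not part of the actual analysis pipeline):

```python
# (a [mK], b, c) from Table [table:noisecoef], for each smoothing radius R in Mpc
NOISE_COEFFS = {10: (15.0, 3.1, 4.7), 20: (4.0, 2.7, 5.1), 40: (1.8, 2.8, 4.2)}

def sigma_thermal(z, R):
    """RMS thermal noise per slice [mK] for a 1000 hr SKA integration (Eq. [Eq:noise])."""
    a, b, c = NOISE_COEFFS[R]
    exponent = b if z <= 16 else c
    return a * ((1.0 + z) / 17.0) ** exponent
```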
A given resolution corresponds to 2-D Gaussian smoothing with a radius $R$, but it is also useful to consider applying additional 3-D smoothing as a step in the data analysis. The idea is to produce a more isotropic image, which is more conducive for measuring statistics that are designed to probe spherically-averaged structure. Now, while any smoothing removes some information in the map, it also smooths out and thus lowers the thermal noise. In our results below, we have found that the differences are small between using the images with or without 3-D smoothing, but in most cases the 3-D smoothing increases the signal-to-noise ratio, i.e., the noise is smoothed-out more than the signal. This makes sense since the typical coherence/correlation scale of the noise is $R$ (due to the PSF), while the typical scales of the 21-cm features (due to reionization, heating, or Ly$\alpha$ coupling) are usually significantly larger. Thus as our default procedure we did include 3-D smoothing, using a spherical top-hat with the same smoothing radius $R$ as in the corresponding 2-D Gaussian.
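
The full mock-map procedure of this section can be summarized by the short sketch below. It is our own schematic implementation under stated assumptions (a cubic periodic box with 3 Mpc voxels, SciPy for the smoothing, and a per-slice noise RMS such as the one returned by the helper above); it is not the actual simulation code.

```python
import numpy as np
from scipy import ndimage

VOXEL = 3.0  # Mpc per voxel; the frequency direction is taken as the last axis

def smooth_2d(cube, R):
    """Smooth each frequency slice with a 2-D Gaussian PSF of FWHM = 2R (Mpc)."""
    sigma_pix = (2.0 * R / 2.355) / VOXEL          # FWHM -> Gaussian sigma, in voxels
    return ndimage.gaussian_filter(cube, sigma=(sigma_pix, sigma_pix, 0.0), mode="wrap")

def tophat_3d(cube, R):
    """Additional 3-D spherical top-hat smoothing of radius R (Mpc)."""
    r_pix = int(round(R / VOXEL))
    grid = np.indices((2 * r_pix + 1,) * 3) - r_pix
    kernel = (np.sum(grid ** 2, axis=0) <= r_pix ** 2).astype(float)
    return ndimage.convolve(cube, kernel / kernel.sum(), mode="wrap")

def mock_map(signal, noise_rms, R, seed=0):
    """signal: (Nx, Ny, Nz) brightness-temperature box [mK];
    noise_rms: per-slice thermal-noise RMS [mK], e.g. sigma_thermal(z, R) for each slice."""
    rng = np.random.default_rng(seed)
    noise = smooth_2d(rng.standard_normal(signal.shape), R)   # PSF-correlated noise
    for k, rms in enumerate(noise_rms):                       # rescale slice by slice
        noise[:, :, k] *= rms / noise[:, :, k].std()
    return tophat_3d(smooth_2d(signal, R) + noise, R)
```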
Statistical Methods and Results {#Sec:Results}
===============================
We first show the sky-averaged (global) signal for all five models from Table \[table:casesparam\] as predicted from the simulation. All five curves show the same general behavior of a deep absorption dip, as is the case for all reasonable models [@Cohen:2017]. Three models are especially similar in their timing: the Standard, Soft, and Radio models have relatively early Ly$\alpha$ coupling and X-ray heating, resulting in a peak absorption at $z \sim 18-19$, followed by a rise to emission (${T_{\rm{b}}}> 0$) before the drop to zero due to reionization. On the other hand, the Low-Efficiency and Massive models have much later star formation, so that Ly$\alpha$ coupling is delayed and X-ray heating overlaps with reionization and does not manage to lead to emission. Comparing the Soft model to the Standard one, the heating phase starts earlier in the Soft model, leading to an earlier rise from the absorption trough. The Radio model has a very deep $\rm{Ly}\alpha$ minimum due to the excess radio background.
![The global 21-cm signal as a function of redshift for our five models, Standard (blue), Low-Efficiency (green), Soft (orange), Massive (black), and Radio (red).[]{data-label="fig:global"}](global){width="3.2in"}
Histograms and quantiles {#s:thresh}
------------------------
Our statistical tools are mostly based on histograms of the 21-cm signal map, i.e., the probability distribution function $p(T_b)$ of the 21-cm intensity (brightness temperature $T_b$) in voxels, normalized to a total area of unity. As the variable we use $\Delta
T_b$, which is $T_b$ measured relative to the mean temperature at the same redshift, since interferometers do not measure the zero point. Figure \[fig:Distributions\] shows two examples of such histograms for separate models and cosmic times. The distributions are clearly non-Gaussian, and one of the main features we focus on is the obvious asymmetry. The shape of the asymmetry depends in a complex way on the astrophysical processes and parameters; these examples illustrate opposite signs of the skewness.
![image](Hist53){width="3.1in"} ![image](Hist37quan){width="3.1in"}
In what follows, we use the cumulative distribution function (CDF) of the signal, either the upper portion: $$F_+(\Delta T_b) \equiv \int_{\Delta T_b}^\infty p(\Delta \hat{T}_b)\, d\Delta \hat{T}_b\ ,$$ or the lower portion: $$F_-(\Delta T_b) \equiv \int_{-\infty}^{\Delta T_b} p(\Delta \hat{T}_b)\, d\Delta \hat{T}_b\ .$$ Note that $F_+(-\infty)=F_-(\infty)=1$. We measure characteristic brightness temperatures as thresholds at certain values of the CDF. This is the inverse function of the CDF (also called the quantile function $Q$, which here has units of mK). For a given fraction $f$ of the total probability, we have an upper threshold $Q_+(f)$ so that a fraction $f$ of the probability lies at temperatures above $Q_+(f)$, and similarly a lower threshold $Q_-(f)$. They are defined so that $$F_+(Q_+(f)) = f\ ,$$ and $$F_-(Q_-(f)) = f\ .$$ For the probability fractions we use characteristic thresholds $t$ based on the cumulative probability of a normal distribution, measured in units of the standard deviation $\sigma$. For instance, we define $Q(t=1\sigma) \equiv Q(f=15.9\%)$, where this holds for both $Q_+$ and $Q_-$. Note that $Q_+$ and $Q_-$ are defined to be one-sided so we use the corresponding one-sided fractions of a Gaussian (e.g., $f=15.9\%$ for $t=1\sigma$, not $f=31.7\%$). More generally, the relation between $f$ and $t$ is given by $$\label{Eq:Threshold}
f(t)=\frac{1}{2} {\rm erfc} \left(\frac{\textit{t}}{\sqrt{2}}\right)\ ,$$ where $t$ is measured in units of $\sigma$. Table \[table:Thresholds\] lists the values of various thresholds that we use below along with their corresponding percentiles, according to eq. \[Eq:Threshold\]. Note that for a Gaussian distribution, $Q_+(t) = -Q_-(t) = t \sigma$.
Quantiles for one case are shown in the right panel of Figure \[fig:Distributions\]. In this case, $Q_+$ and $Q_-$ have nearly the same magnitude at $1\sigma$; while $Q_-$ is closer than $Q_+$ to the peak of the PDF as well as to its median, we have defined $Q_+$ and $Q_-$ as they are measured in 21-cm images, i.e., relative to the cosmic mean brightness temperature. At the $2\sigma$ threshold the difference becomes clear, with the higher $|Q_+|$ reflecting the broader tail at high brightness temperature. In the Low-Efficiency model shown here during reionization, the intergalactic medium is still cold, so that the high $T_b$ tail corresponds to regions that are mostly reionized (though not completely so, due to the smoothing of the map, which mixes ionized bubbles with nearby pixels that are still partly neutral).
$t$ $f(t)$ $1-f(t)$ $\rm{N_{vx}}$
------------- -------- ---------- ---------------
0.5$\sigma$ 30.9% 69.1% 647,000
1$\sigma$ 15.9% 84.1% 333,000
1.5$\sigma$ 6.7% 93.3% 140,000
2$\sigma$ 2.28% 97.72% 47,700
2.5$\sigma$ 0.62% 99.38% 13,000
3$\sigma$ 0.135% 99.865% 2,830
: List of thresholds used in this paper along with the corresponding percentiles of the normal distribution. $\rm{N_{vx}}$ denotes the actual number of voxels corresponding to the fraction $f(t)$, for a 128$^3$ voxel simulation box as used here.[]{data-label="table:Thresholds"}
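
As an illustration of how these thresholds translate into measured quantiles, the sketch below (our own, assuming a NumPy array `dT` holding the brightness-temperature fluctuations of all voxels) evaluates Equation \[Eq:Threshold\] and the corresponding $Q_+(t)$ and $Q_-(t)$:

```python
import numpy as np
from scipy.special import erfc

def fraction(t):
    """One-sided Gaussian tail fraction f(t) of Eq. [Eq:Threshold] (t in units of sigma)."""
    return 0.5 * erfc(t / np.sqrt(2.0))

def quantiles(dT, t):
    """Upper and lower quantiles (Q_plus, Q_minus) at threshold t, relative to the mean."""
    f = fraction(t)
    q_plus = np.quantile(dT, 1.0 - f)   # a fraction f of the voxels lies above Q_+
    q_minus = np.quantile(dT, f)        # a fraction f of the voxels lies below Q_-
    return q_plus, q_minus
```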
Radial profiles
---------------
In most of our analysis below, we focus on the PDF of $T_b$ values and various derived statistics as laid out in the previous subsection. This approach brings out non-Gaussianity most clearly, and makes thermal noise especially easy to deal with. However, there is additional spatial information that can be derived from the 21-cm map. We briefly give an example of this here.
We can use the thresholds to explore what roughly corresponds to radial profiles around temperature peaks. Specifically, we found the average profiles around the voxels with the highest or lowest values of $\Delta {T_{\rm{b}}}$. From this we can examine the contribution of various spatial scales to the fluctuations and also look for asymmetry (and thus non-Gaussianity) by comparing the highest and lowest voxels. Since we wanted average spherical profiles, in order to select the voxels we used as before the 3-D spherically averaged ${T_{\rm{b}}}$ around each voxel. As an example, we chose $R=20$ Mpc and used the 15.9% highest and lowest voxels (corresponding to $t=1\sigma$ in the previous subsection). To find the profiles, at each distance $r$ we found the volume-averaged smoothed signal in the shell that includes points at distances between $r-R$ and $r+R$ from the central voxel. For $r=0$ we simply used the spherical average out to radius $R$. Finally, the profiles of each group (highest or lowest pixels) were stacked to produce an average profile for each group. Figure \[fig:profiles\] illustrates the resulting profiles (shown normalized, relative to $r=0$) for all five models at the $\rm{Ly}\alpha$ peak. Differences between the profiles of the highest and lowest pixels are visible for all models, i.e., there is clear asymmetry. Also, different models show different characteristic scales for the drop of the profile. For example, the profile that declines most slowly (i.e., shows the strongest large-scale correlations) corresponds to the Massive model, where the halos are massive, rare, and more highly biased than in the other models.
![image](profilesLy){width="7in"}
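
A rough sketch of the stacked-profile construction is given below. It is our own simplified (and unoptimized) version, assuming a cubic periodic box with 3 Mpc voxels and a list `centers` of the already-selected highest or lowest (spherically smoothed) voxels; it is not the code used for Figure \[fig:profiles\].

```python
import numpy as np

def stacked_profile(cube, centers, R, r_max, voxel=3.0):
    """Average shell profile of `cube` around the given central voxels, normalized to r=0."""
    n = cube.shape[0]                      # cubic box assumed
    pos = np.arange(n) * voxel
    radii = np.arange(0.0, r_max, voxel)
    profile = np.zeros_like(radii)
    for (i, j, k) in centers:
        # periodic distances along each axis
        d = [np.minimum(np.abs(pos - pos[c]), n * voxel - np.abs(pos - pos[c]))
             for c in (i, j, k)]
        dist = np.sqrt(d[0][:, None, None] ** 2 + d[1][None, :, None] ** 2
                       + d[2][None, None, :] ** 2)
        for m, r in enumerate(radii):      # shell between r-R and r+R (sphere for r=0)
            shell = (dist >= max(r - R, 0.0)) & (dist <= r + R)
            profile[m] += cube[shell].mean()
    profile /= len(centers)
    return radii, profile / profile[0]
```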
Quantiles and noisy maps
------------------------
From here on, we consider the quantiles at various thresholds as defined in section \[s:thresh\]. At each threshold level $t$, we find $Q_+(t)$ and $Q_-(t)$, which measure the brightness temperature above or below the mean that encloses the fraction of the map corresponding to that threshold. These quantities probe the magnitude of the positive and negative fluctuations, and the choice of $t$ gives us control: a higher threshold $t$ corresponds to probing rarer fluctuations, while a lower threshold is more robust and less sensitive to noise, especially to outliers in the data. Standard statistical measures average over the entire distribution and do not offer such flexibility. As we show, we can reconstruct the standard non-Gaussian statistics with quantiles, and also define additional measures.
For a Gaussian distribution, a quantile at a given threshold value would give a brightness temperature that is a fixed multiple of the standard deviation $\sigma$ of the distribution. Thus, in general, what a quantile measures is roughly (a multiple of) the standard deviation. Now, in general, the total variance of the noisy signal equals the sum of the signal variance and noise variance (assuming that they are independent). This leads us to use a simple procedure for correcting the measured quantiles from our mock data for the effect of noise. The estimated signal is taken as $$\label{e:est}
S_{\rm{est}}=\sqrt{({S+N_1})^2-{N_2}^2}\ ,$$ where $S_{\rm{est}}$ refers to the estimated signal (either $Q_+$ or $Q_-$ at some threshold $t$), $S+N_1$ is the measured signal from a 21-cm image with signal plus thermal noise, and $N_2$ is the same quantity measured from a noise-only 21-cm image, using noise $N_2$ generated independently from $N_1$. Thus, we assume that in the data analysis the statistical properties of the thermal noise are known (but not the particular instance that is included in the measured data). We note that it is not obvious that this noise-correction procedure, which is based on variances, applies exactly to quantiles even for non-Gaussian signals. In practice, though, we find that it works very well, and we thus conclude that this simple noise-correction property is an important advantage of working with quantiles.
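
A minimal sketch of this noise correction (our own helper, assuming NumPy quantiles and the Gaussian tail fraction of section \[s:thresh\]) might look as follows; the name is hypothetical, and the sign handling simply preserves the sign of the measured quantile.

```python
import numpy as np
from scipy.special import erfc

def corrected_quantile(noisy_map, noise_map, t, upper=True):
    """Noise-corrected quantile estimate of Eq. [e:est] at threshold t (in sigma units)."""
    f = 0.5 * erfc(t / np.sqrt(2.0))
    p = 1.0 - f if upper else f
    s_plus_n1 = np.quantile(noisy_map, p)   # quantile of signal + noise (N_1)
    n2 = np.quantile(noise_map, p)          # same quantile of an independent noise map (N_2)
    return np.sign(s_plus_n1) * np.sqrt(max(s_plus_n1 ** 2 - n2 ** 2, 0.0))
```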
The estimation in all plots was made up to redshift 27, which approximately corresponds to the SKA’s lowest measured frequency of 50 MHz. Note that the signal maps were generated with a redshift resolution of $\Delta z=0.1$ up to redshift 15 and $\Delta z=1$ above this, for all models except the Radio model, where we used a resolution of $\Delta z=1$ for all redshifts.
Quantile average compared to variance {#s:qave}
-------------------------------------
The first quantile measure we looked at is the average (in absolute value) of the high- and low-end quantiles, i.e., $$\label{e:ave}
Q_{\rm ave}(t) \equiv \frac{|Q_+(t)| + |Q_-(t)|}{2}\ .$$ Note that, by their definitions, $Q_+$ is positive and $Q_-$ is negative (not necessarily for all possible distributions, but this is the case for all realistic ones). This quantity would equal $t$ times $\sigma$ for a Gaussian distribution, and more generally it corresponds to estimating the distribution’s standard deviation (except for the factor of $t$). By averaging the two ends we ignore any asymmetry and get an accurate estimate of the symmetric part. As our main configuration we use a 2$\sigma$ threshold and $R=20\, \rm
Mpc$. We could get a similar result here with the more natural 1$\sigma$, but we prefer to keep the same choice later when we look at the difference, and that signal happens to nearly vanish for a 1$\sigma$ threshold (see Figure \[fig:DifCases\], below). Figure \[fig:AveAll\] shows the average for all five models as a function of redshift with the above main configuration parameters, with the regular standard deviation of the PDF shown for comparison. As with the quantiles, the variance estimation from the noisy map was corrected for noise by subtracting the variance of an independent noise map: $$\sigma_{\rm{est}}=\sqrt{\rm{Var}(\textit{S+N}_1)-\rm{Var}(\textit{N}_2)}\ .$$
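
The quantile average and the noise-corrected standard deviation can then be written, for example, as the short helpers below (our own illustration; the quantile inputs are assumed to be the noise-corrected values from the sketch in the previous subsection).

```python
import numpy as np

def q_ave(q_plus, q_minus):
    """Symmetric quantile average of Eq. [e:ave]."""
    return 0.5 * (abs(q_plus) + abs(q_minus))

def sigma_est(noisy_map, noise_map):
    """Noise-corrected standard deviation used for the comparison curves."""
    return np.sqrt(max(np.var(noisy_map) - np.var(noise_map), 0.0))
```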
![image](AveAll){width="7in"} ![image](VarAll){width="7in"}
From the plot, the quantile average accurately measures the standard deviation (times a factor of 2 in this case, i.e., our main configuration). Compared to the noise-less image, the noise-corrected estimation from the noisy map performs very well, nearly up to the highest redshifts considered, for both the quantile average and the standard deviation statistic, and for all models. The exceptions are redshifts at which the signal drops near zero for some models.
As noted above, our ability to control the threshold and smoothing radius allows us to look at different parts of the temperature distribution and at various scales (similar to what we do when using the power spectrum), and to manipulate the magnitudes of the signal and noise since smoothing affects them differently. Figure \[fig:AveCases\] illustrates the effect of using different parameter configurations for the Standard and Radio models. For high threshold and low $R$ we get the biggest magnitude, but as can be seen for the Standard model (left panel), with this choice the estimation fails for $z > 22$ and also becomes inaccurate below 8. The Radio model has a particularly strong signal and thus yields more accurate estimates at the highest redshifts. We conclude that having the option to control the two parameters that are varied here has the potential to yield more information from the analysis of a real dataset.
![image](Ave53){width="3.1in"} ![image](AveRad){width="3.1in"}
Most of our plots in this paper are presented as functions of redshift. However, as noted above, when we wish to select particular milestone redshifts, we define them phenomenologically using the (mock) estimated signal. Specifically, we use the redshifts where our main measure of the signal, the quantile average, achieves a peak value (i.e., a local maximum). As seen from Figure \[fig:AveAll\], from high to low redshift, in each model we have a Ly$\alpha$ peak, a Heating peak, and a Reionization peak (except that there is no Heating peak in the Massive and Low-Efficiency models).
Threshold dependence {#sec:thresh}
--------------------
The quantiles that we have defined can be used to directly compare the measured PDF to a Gaussian distribution, by varying the threshold and normalizing to a Gaussian. As the first step, we calculated the quantile-average curves (defined as in Figure \[fig:AveCases\] but for a fixed $R=20\, \rm Mpc$) and normalized them according to the threshold (e.g., the 2$\sigma$ curve was divided by 2). The resulting curves, shown in the top panel of Figure \[fig:Threshold\], would lie exactly on top of each other for a pure Gaussian distribution. For the simulated (noise-less) 21-cm signal there are differences, an indication of non-Gaussianity. Note that the estimated signal from noisy maps is not plotted here since the points would be very crowded; their errors were illustrated in the previous two figures, and the normalization by a constant does not change the relative errors of the estimation.
![image](NormAve53){width="3.1in"} ![image](NormAveRad){width="3.1in"}
![image](Threshold53){width="3.1in"} ![image](ThresholdRad){width="3.1in"}
![image](Threshold53Norm){width="3.1in"} ![image](ThresholdRadNorm){width="3.1in"}
The differences between the normalized curves are largest mostly near the cosmological milestone redshifts. We focus on these special redshifts in the other two panels. The middle panel shows the normalized quantile average at each redshift, as a function of the threshold level $t$. We bring out the variation more clearly in the bottom panel, where we have applied yet another normalization according to the value of each curve at the 2$\sigma$ threshold. In these two panels, a Gaussian distribution would give a flat horizontal line. The non-Gaussian signature is strongest during reionization, but all the curves exhibit interesting behavior. The symbols, which represent the same estimated statistics from the noisy signal, show that the SKA thermal noise usually does not prevent this non-Gaussianity from being measured; at the Ly$\alpha$ peak, the measurement is rather noisy in the Standard model, but the stronger signal in the Radio model allows an accurate measurement also at $z=20$. Another interesting feature is that in the Radio model the curves are not monotonic as they are in the Standard model. We relate these measures of non-Gaussianity from the symmetric quantile-average to the kurtosis in section \[sec:kur\]; but first we move on to the asymmetry of the positive and negative brightness temperature fluctuations.
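
The threshold-dependence curves of Figure \[fig:Threshold\] amount to the following normalization (a sketch of our own; for a Gaussian field the returned value is independent of $t$, and setting `by_sigma=True` corresponds to the additional normalization used in section \[sec:kur\]).

```python
import numpy as np
from scipy.special import erfc

def normalized_q_ave(dT, t, by_sigma=False):
    """Quantile average divided by t (and optionally by the standard deviation)."""
    f = 0.5 * erfc(t / np.sqrt(2.0))
    q_plus, q_minus = np.quantile(dT, 1.0 - f), np.quantile(dT, f)
    norm = dT.std() if by_sigma else 1.0
    return 0.5 * (abs(q_plus) + abs(q_minus)) / (t * norm)
```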
Quantile difference and skewness
--------------------------------
We now probe the asymmetry of the PDF using the difference between the high- and low-end quantiles, i.e., $$\label{e:diff}
Q_{\rm diff}(t) \equiv |Q_+(t)| - |Q_-(t)|\ .$$ This quantity can be compared with the standard measure of non-Gaussian asymmetry, namely the distribution’s skewness given by $$\rm{Ske}(\textit{X})=E[(\textit{X}-\mu)^3]/\sigma^3\ ,$$ where $\mu$ refers to the mean value of $X$ which in our case equals zero. Both of these measures of asymmetry would equal zero for a Gaussian PDF, and cannot be probed using the 21-cm power spectrum (which measures the contribution of $k$-modes to the variance). Figure \[fig:DifAll\] shows our quantile difference statistic, as well as the skewness (multiplied by the measured $\sigma(z)$ to make it have dimensions of brightness temperature), for all five astrophysical models, with the main configuration parameters (2$\sigma$ threshold with $R=20\, \rm Mpc$). We see that the two statistics are quite similar (though not identical), and can be estimated accurately from a noisy map except when the signal is low at $z>20$. The skewness estimation from the noisy map was done using the formula: $$\rm{Ske}_{[\rm{est}]}=\frac{\rm{Ske}(\textit{S+N}_1)\rm{Var}^{3/2}
(\textit{S+N}_1)}{(\rm{Var}(\textit{S+N}_1)-\rm{Var}
(\textit{N}_2))^{3/2}}\ .$$ This is easily derived from the fact that the Gaussian noise has zero skewness, and the skewness of the signal is defined with respect to the variance of the (noise-less) signal.
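
The asymmetry measure and the skewness correction above can be sketched as follows (our own helpers; `scipy.stats.skew` is an assumed convenience choice, and the quantile inputs are again the noise-corrected values).

```python
import numpy as np
from scipy.stats import skew

def q_diff(q_plus, q_minus):
    """Quantile difference of Eq. [e:diff], probing the asymmetry of the PDF."""
    return abs(q_plus) - abs(q_minus)

def skew_est(noisy_map, noise_map):
    """Noise-corrected skewness, following the formula above."""
    var_sn, var_n = np.var(noisy_map), np.var(noise_map)
    return skew(noisy_map.ravel()) * var_sn ** 1.5 / (var_sn - var_n) ** 1.5
```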
![image](DifAll){width="7in"} ![image](SkeAll){width="7in"}
Figure \[fig:DifCases\] shows the quantile difference for the Standard and Massive models, with various choices of threshold $t$ and comoving radius $R$. Here the signal can change sign, and is lower in absolute value than the quantile average shown earlier. Also, the shape depends more strongly on the choice of $t$ and $R$. In particular, the low-threshold curves change sign compared to the high-threshold ones, and for 1$\sigma$ the signal almost vanishes. This is the reason we chose a 2$\sigma$ threshold (and $R=20\, \rm Mpc$) as our main configuration throughout this paper. Figure \[fig:DifCases\] also shows an example of the results obtained when we do not add 3-D smoothing at radius $R$ (as discussed in section \[Sec:thermal\]). The results for the statistic measured from the noise-less 21-cm images (the curves in the figure) are qualitatively similar to the case with 3-D smoothing, but higher in absolute value (as there is less smoothing of the 21-cm signal). However, the reconstructed signal from noisy data is significantly worse in tracing the correct signal-only result. This shows that 3-D smoothing removes thermal noise more effectively than it reduces the 21-cm signal, and justifies our inclusion of 3-D smoothing throughout this work.
![image](Dif53){width="3.1in"} ![image](Dif186){width="3.1in"}
![image](Dif53noSmoothing){width="3.1in"} ![image](Dif186noSmoothing){width="3.1in"}
Normalized quantile average and kurtosis {#sec:kur}
----------------------------------------
In section \[sec:thresh\] we explored the threshold dependence of the quantile average. Taking the average removes the asymmetry and with it any sensitivity to the skewness of the distribution. Comparing the threshold dependence of the quantile average to a Gaussian is thus most sensitive to the kurtosis. Specifically, we take the normalized averages from the top panels of Figure \[fig:Threshold\] and divide by $\sigma(z)$. This quantity, $Q_{\rm ave}(t)/(t \sigma)$, which for a Gaussian would equal unity (independent of $t$), corresponds roughly to the distribution’s kurtosis. The kurtosis is defined as $$\rm{Kur}(\textit{X})=\frac{E[(\textit{X}-\mu)^4]}{\sigma^4}\ ,$$ and equals 3 for a Gaussian. Figure \[fig:Kurtosis\] shows these two quantities for all five models as a function of redshift with the main configuration parameters (2$\sigma$ threshold with $R=20\, \rm
Mpc$). We chose $t=3$ because it gave results qualitatively somewhat more similar to the kurtosis than using $t=2$. The kurtosis estimation from the noisy map was done using the formula: $$\begin{aligned}
\rm{Kur}_{[\rm{est}]}= \{ \rm{Kur}(\textit{S+N}_1)
\rm{Var}^2(\textit{S+N}_1)-\rm{Kur}(\textit{N}_2)
Var^2(\textit{N}_2) \nonumber \\
- 6[\rm{Var}(\textit{S+N}_1)-\rm{Var}(\textit{N}_2)]
\rm{Var}(\textit{N}_2) \} \\ \nonumber
/ [\rm{Var}(\textit{S+N}_1)-\rm{Var}(\textit{N}_2)]^2\ ,\end{aligned}$$ which is easily derived from assuming that the thermal noise is Gaussian and independent of the signal. As before, here $N_1$ is the thermal noise added to the signal and $N_2$ is an independently-generated thermal noise map.
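
The kurtosis correction can be written analogously; the sketch below is our own (with `scipy.stats.kurtosis` called with `fisher=False` so that a Gaussian gives 3, matching the convention used here).

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_est(noisy_map, noise_map):
    """Noise-corrected kurtosis, following the formula above."""
    var_sn, var_n = np.var(noisy_map), np.var(noise_map)
    kur_sn = kurtosis(noisy_map.ravel(), fisher=False)
    kur_n = kurtosis(noise_map.ravel(), fisher=False)
    numerator = (kur_sn * var_sn ** 2 - kur_n * var_n ** 2
                 - 6.0 * (var_sn - var_n) * var_n)
    return numerator / (var_sn - var_n) ** 2
```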
![image](sigmanormAveAll){width="7in"} ![image](KurAll){width="7in"}
Both the kurtosis and our alternate measure can be measured accurately from noisy data up to $z \sim 18$. The definitions (which involve division by $\sigma$) make the kurtosis (and to a lesser degree the skewness) especially sensitive to redshifts at which the variance of the signal is particularly low (i.e., approaches zero, and becomes difficult to measure accurately). These are the points where the magnitude of the kurtosis (and of the alternate kurtosis) peaks. Examples of this can be seen at $z=10$ for the Standard and Soft models, where the kurtosis estimation deviates from the real signal-only curve, and at $z > 20$ for the Massive model and $z > 22$ for the Low-Efficiency model. These are redshifts where $\sigma$ approaches zero according to Figure \[fig:AveAll\].
Summary and Conclusion
======================
We have suggested quantile-based statistics as a new method for measuring non-Gaussianities in the 21-cm signal via tomography maps. This method is complementary to the global signal and power spectrum which are commonly used and not sensitive to non-Gaussian aspects such as the asymmetry of the temperature fluctuation distribution. Quantiles offer a simple, robust and flexible statistic that is easy to correct for thermal noise. Also, quantiles can be used to probe the variance, skewness, and kurtosis of the temperature distribution. The flexibility comes through the ability to choose different thresholds in the quantile measures. The robustness comes from being less sensitive to outliers than common statistics that integrate over the entire distribution function. The simplicity comes in the noise-correction, which for each quantile measure is done simply like correcting the variance, i.e., by subtracting the squares using an independent noise-only map (eq. \[e:est\]).
We used mock signals from five possible astrophysical models, covering the full redshift range of the SKA and exploring a much wider range of possible signals than previous investigations of non-Gaussian statistics. This included models with different spectra of the X-ray heating sources (Soft vs. Standard model), different characteristic masses of galactic halos (Massive vs. Standard), different star-formation and X-ray efficiencies (Low-Efficiency vs. Standard), as well as an exotic model with an excess early radio background motivated by the EDGES global 21-cm detection. To the single images we added mock thermal noise according to the expected level for upcoming observations with the SKA. We tried various smoothing/resolution radii $R$ of the signal. Varying $R$ allows us to explore various distance scales, similar to looking at $k$ modes of the power spectrum. Together with the profile analysis shown in Figure \[fig:profiles\], this can yield a broad picture of the spatial behavior of the signal and illuminate the physical processes involved. For our quantile statistics, we found it advantageous to add, as an initial analysis step, 3-D smoothing at the same radius $R$, as this smoothed out the noise more effectively than the signal.
We based our main statistical measures on upper and lower quantiles, $Q_+$ and $Q_-$, at threshold $t$ defined as containing a cumulative probability corresponding to a normal distribution, with $t$ in units of $\sigma$. We then took the symmetric average $Q_{\rm ave}$ (eq. \[e:ave\]), which approximately corresponds to measuring the standard deviation, and the difference $Q_{\rm diff}$ (eq. \[e:diff\]), which approximately measures the skewness. We also showed that the normalized average $Q_{\rm ave}(t)/(t \sigma)$ approximately measures the kurtosis. The threshold dependence of $Q_{\rm ave}$ (Figure \[fig:Threshold\]) can hold more information that might be explored. For example, we noticed a peak threshold value in the Radio model (at some redshifts) that does not appear in the Standard model.
We found that both our statistical measures and the corresponding standard measures of non-Gaussianity can be measured out to high redshift with the SKA, often out to $z>20$ and including the redshift of the Ly$\alpha$ peak. This was the case after accounting for the expected angular resolution and thermal noise of the SKA (i.e., SKA1-Low). This is especially true if the EDGES measurement by @Bowman:2018 is confirmed, as it implies a stronger amplitude of 21-cm fluctuations (as exemplified by our Radio model). Generally, each of our five different astrophysical models has a substantially different cosmic history, as measured by each statistic (all five models are shown in Figures \[fig:global\], \[fig:profiles\], \[fig:AveAll\], \[fig:DifAll\], and \[fig:Kurtosis\]). Thus, the variation of parameters among the models shows that the minimum galactic halo mass, the star-formation and X-ray efficiencies, and the X-ray spectrum, can all be constrained if these statistics are measured.
With the SKA we will be able to directly image cosmic dawn for the first time in history. It is necessary to have a variety of methods and tools that can be applied to the collected data in order to fully extract the potential it holds. Of course, we have only taken a first step here, and the next step is to consider more realistic SKA data with foreground residuals. We expect that the flexibility and robustness of the quantile statistics will help to deal with that as well. On the optimistic side, we note that we have used here a simulation box with volume approximately equal to that of a single SKA field, while SKA observations will create large surveys covering multiple fields. Thus, 21-cm cosmology with the SKA holds great promise.
Acknowledgments
===============
This project/publication was made possible for AB and RB through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. AB and RB were also supported by the ISF-NSFC joint research program (grant No. 2580/17). AF was supported by the Royal Society University Research Fellowship.
[20]{}
Barkana, R., 2016, PhysRep, 645, 1
Barkana, R., 2018, Nature, 555, 71
Barkana, R., Loeb, A., 2005, ApJ, 626, 1
Bharadwaj, S., & Pandey, S. K., 2005, MNRAS, 358, 968
Bowman, J. D., Rogers, A. E. E., Monsalve, R. A., Mozdzen, T. J., Mahesh, N., 2018, Nature, 555, 67
Ciardi B., Madau P., 2003, ApJ, 596, 1
Cohen, A., Fialkov, A., Barkana, R., Lotem, M., 2017, MNRAS, 472, 1915
Feng, C., Holder, G., 2018, ApJ, 858, 17
Fialkov, A., Barkana, R., 2019, MNRAS, 486, 2, 1763
Fialkov, A., Barkana, R., $\&$ Visbal, E., 2014, Nature, 506, 197
Field, G. B. 1958, PIRE, 46, 240
Fragos T., Lehmer B. D., Naoz S., Zezas A., Basu-Zych A., 2013, ApJL, 776, L31
Furlanetto S. R., Zaldarriaga M., Hernquist L., 2004, ApJ, 613, 16
Harker G. J. A., et al., 2009, MNRAS, 393, 1449
Ichikawa K., Barkana R., Iliev I. T., Mellema G., Shapiro P. R., 2010, MNRAS, 406, 2521
Kittiwisit P., Bowman J. D., Jacobs D. C., Thyagarajan N., Beardsley A. P., 2016, arXiv, arXiv:1610.06100
Koopmans, L., et al., 2015, Advancing Astrophysics with the Square Kilometre Array, PoS(AASKA14)001
Kubota K., Yoshiura S., Shimabukuro H., Takahashi K., 2016, PASJ, 68, 61
Madau P., Meiksin A., Rees M. J., 1997, ApJ, 475, 429
Majumdar S., Pritchard J. R., Mondal R., Watkinson C. A., Bharadwaj S., Mellema G., 2018, MNRAS, 476, 4007
Mellema G., Iliev I. T., Pen U.-L., Shapiro P. R., 2006, MNRAS, 372, 679
Mesinger A., Furlanetto S., Cen R., 2011, MNRAS, 411, 955
Mineo, S., Gilfanov, M., Sunyaev, R., 2012, MNRAS, 419, 2095
Mirabel, I. F., Dijkstra, M., Laurent, P., Loeb, A., Pritchard, J. R., 2011, A&A, 528, 149
Mondal R., Bharadwaj S., Majumdar S., Bera A., Acharyya A., 2015, MNRAS, 449, L41
Pritchard, J. R., Furlanetto, S. R., 2007, MNRAS, 376, 4, 1680
Pritchard, J. R., Loeb, A., 2012, Reports on Progress in Physics, 75, 086901
Shimabukuro H., Yoshiura S., Takahashi K., Yokoyama S., Ichiki K., 2015, MNRAS, 451, 467
Trott C. M., et al., 2019, PASA, 36, e023
Watkinson C. A., Pritchard J. R., 2014, MNRAS, 443, 3090
Watkinson C. A., Pritchard J. R., 2015, MNRAS, 454, 1416
Wouthuysen, S. A. 1952, AJ, 57, 31
Wyithe J. S. B., Morales M. F., 2007, MNRAS, 379, 1647
\[lastpage\]
[^1]: E-mail: alon.banet@gmail.com
---
abstract: 'The OpenFMO framework, an open-source software (OSS) platform for the Fragment Molecular Orbital (FMO) method, is extended to multi-physics simulations (MPS). After reviewing several FMO implementations on distributed computing environments, the subsequent development plan for MPS is presented. We also discuss which form should be selected for scientific software: a lightweight and reconfigurable one or a large and self-contained one.'
author:
- Toshiya Takami
- Jun Maki
- 'Jun’ichi Ooba'
- Yuuichi Inadomi
- Hiroaki Honda
- Ryutaro Susukita
- Koji Inoue
- Taizo Kobayashi
- Rie Nogita
- Mutsumi Aoyagi
bibliography:
- 'BibTeX/MyWorks.bib'
- 'BibTeX/FMO.bib'
title: 'Multi-physics Extension of OpenFMO Framework'
---
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ]{}
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ]{}
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ]{}
[ address=[PSI Project Lab., Kyushu University, 3–8–33–710 Momochihama, Sawara-ku, Fukuoka 814–0001, Japan]{} ]{}
[ address=[PSI Project Lab., Kyushu University, 3–8–33–710 Momochihama, Sawara-ku, Fukuoka 814–0001, Japan]{} ]{}
[ address=[Fukuoka IST Foundation, Acros Fukuoka Nishi Office 9F, 1–1–1 Tenjin, Chuo-ku, Fukuoka 810–0001, Japan]{} ]{}
[ address=[Department of Informatics, Kyushu University, 6–10–1 Hakozaki, Higashi-ku, Fukuoka 812–8581, Japan]{} ,altaddress=[PSI Project Lab., Kyushu University, 3–8–33–710 Momochihama, Sawara-ku, Fukuoka 814–0001, Japan]{} ]{}
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ]{}
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ]{}
[ address=[Research Institute for Information Technology, Kyushu University, Fukuoka 812–8581, Japan]{} ,altaddress=[PSI Project Lab., Kyushu University, 3–8–33–710 Momochihama, Sawara-ku, Fukuoka 814–0001, Japan]{} ]{}
Introduction and Overview
=========================
Multi-physics simulations are widely used in complex scientific studies. Such calculations are often constructed by combining multiple theories with different degrees of approximation and different scales of description. Since ever greater realism and accuracy are required, these simulations have become larger and more complicated year by year. Grids, i.e., distributed computer resources over wide-area networks, are expected to execute such complicated scientific applications, and have been installed all over the world in order to demonstrate large-scale heterogeneous simulations with the help of middleware [@NAREGI05; @NAREGI-Web]. On the other hand, next-generation supercomputers with peta-scale performance are already planned in several countries [@Riken; @PSI-Web]. Thus, high-performance computing environments are developing rapidly and are in constant flux. For scientists, it is important to follow the trends in these computing resources.
In the present contribution, multi-physics calculations based on the Fragment Molecular Orbital (FMO) method [@FMO99] are constructed on distributed computing environments. The OpenFMO framework toward “peta-scale” computing [@OpenFMO; @HPCNano06] is extended to multi-physics simulations. We also discuss what architecture and development policy should be chosen in the fast-moving world of computing.
Grid-enabled Calculations of FMO
================================
Before entering the main subject, we briefly review the grid-enabled FMO implementations developed in the NAREGI project[@NAREGI05]. These are based on the famous MO package, GAMESS[@GAMESS].
Implementation of a Loosely-coupled FMO
---------------------------------------
Although it is usually considered an approximation to [*ab initio*]{} molecular orbital (MO) calculations, the FMO algorithm is a multi-layered problem (see Fig. \[fig:lcFMO\](b)) comprising the MO calculations for each fragment and the electrostatic (ES) interaction between fragments. In the MO layer, the quantum mechanical interactions of all the atoms and electrons within a fragment are included to obtain a fragment energy. On the other hand, only the classical ES interaction is considered across fragment boundaries. Since the MO-layer calculations can be executed independently, we can break the program into loosely-coupled components suitable for large-scale parallel execution in distributed computing environments (Fig. \[fig:lcFMO\](c)).
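
The two-layer structure can be summarized by the schematic sketch below. It is a pseudocode-level illustration of our own (written in Python for brevity), not code taken from GAMESS or OpenFMO; the fragment data structures and the `scf_solver`/`es_potential` callables are placeholders for the corresponding components.

```python
def fmo_monomer_cycle(fragments, scf_solver, es_potential, charges):
    """One self-consistent-charge iteration of the two-layer FMO scheme.

    Each scf_solver call (the MO layer) is independent of the others and, in the
    grid-enabled version, is submitted as a separate workflow job; fragments are
    coupled only through the classical ES embedding built from `charges`.
    """
    energies, new_charges = {}, {}
    for frag in fragments:                                   # embarrassingly parallel
        others = {k: v for k, v in charges.items() if k != frag["id"]}
        embedding = es_potential(frag, others)               # ES layer: classical only
        energies[frag["id"]], new_charges[frag["id"]] = scf_solver(frag, embedding)
    return energies, new_charges
```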
The grid-enabled version called “Loosely-coupled FMO” was developed as a part of NAREGI[@NAREGI05]. The total control flow is constructed by the use of the NAREGI Workflow tool. In Fig. \[fig:lcFMO-result\] (a), the total electron density of the whole molecule[@Inadomi] of a Gramicidin-A is shown as an equi-density surface, and the electron density for one of the fragments in a fatty-acid albumin is shown in Fig. \[fig:lcFMO-result\](b).
3D-RISM/FMO Simulation Connected by a Mediator
----------------------------------------------
As an example of multi-physics simulations, a coupled simulation of FMO and 3D-RISM is presented, where FMO calculations are coupled to statistical mechanics calculations for molecular liquids by the Reference Interaction Site Model (RISM) [@rismBook]. In order to obtain properties of bio-molecules, drugs, enzymes, etc., it is necessary to perform calculations under the influence of a solvent, since these molecules usually work in aqueous solution. However, the full description of the solute-solvent system is generally difficult because of the large number of degrees of freedom. The standard strategy to solve this problem is to combine, in some way, originally different theories or programs, which is the multi-physics approach.
In multi-physics simulations, physical data are exchanged between separate program components, and we must transform not only formats but also their semantics, i.e., the physical meaning of the data. In order to assist such data exchanges with semantic transformations, we used a set of application program interfaces called Mediator (mediator-API) [@mediator; @NAREGI05], which is included in the beta-version release of the NAREGI grid middleware. Fig. \[fig:rism-fmo\](a) shows the total flow of this simulation, where the partial charge distributions of the solute and solvent molecules are exchanged with each other through the mediator-API (Fig. \[fig:rism-fmo\](b)). In order to execute on the NAREGI grid, the flow is incorporated into the NAREGI Workflow tool (Fig. \[fig:rism-fmo\](c)).
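
A highly simplified sketch of this coupled flow is shown below. The callables are placeholders of our own for the separate program components and the Mediator transformations; they do not reproduce the actual mediator-API, which is only described qualitatively here.

```python
def rism_fmo_cycle(fmo_solve, rism_solve, to_rism, to_fmo, solute_charges, n_cycles=5):
    """Alternate FMO (solute electronic structure) and 3D-RISM (solvent distribution),
    exchanging partial-charge distributions through mediator-style transformations."""
    solvent_charges = None
    for _ in range(n_cycles):
        solvent_charges = rism_solve(to_rism(solute_charges))  # solvent around the solute
        solute_charges = fmo_solve(to_fmo(solvent_charges))    # solute in the solvent field
    return solute_charges, solvent_charges
```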
In Fig. \[fig:met-enkephalin\], we show results of this coupled calculation for methionine-enkephalin (75 atoms) and chignolin (138 atoms) in aqueous solution; the partial charge distribution of the surrounding water molecules is also shown around these molecules.
Development of OpenFMO Framework
================================
OpenFMO[@OpenFMO] is an open-licensed software platform for constructing FMO applications on high-performance distributed computing environments. The development is currently at the end of Phase II. In Phase I, we introduced the OpenFMO framework and predicted peta-scale performance on a hypothetical computer architecture[@HPCNano06]. In Phase II, we have tried to implement the skeleton using one-sided communications[@HPCAsia07] under the PSI project[@PSI-Web]. In Phase III, we are going to extend the platform to multi-physics simulations (see Fig. \[fig:OpenFMO\]).
Multi-physics Extension of OpenFMO and its Application
------------------------------------------------------
The main purpose of Phase II in the development schedule of OpenFMO (left of Fig. \[fig:OpenFMO\]) was to support actual executions on next-generation supercomputers with more than 10,000 CPUs; to this end we have tried to reduce redundant memory consumption on each computing node and to improve parallel performance by the use of one-sided communications. The detailed results will be presented elsewhere[@HPCAsia07].
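As a rough illustration of the one-sided idea (not the OpenFMO implementation itself), a process can expose an array in an MPI window and other ranks can fetch pieces of it on demand, so large intermediate data need not be replicated on every process. A minimal sketch, assuming mpi4py and NumPy are available:

```python
# Minimal one-sided (RMA) communication sketch with mpi4py: rank 0 exposes
# an array in an MPI window; other ranks fetch it with Get() without any
# matching send on rank 0.  Illustration only, not OpenFMO code.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 8

# Only rank 0 allocates and exposes the shared data.
shared = np.arange(n, dtype='d') if rank == 0 else np.empty(0, dtype='d')
win = MPI.Win.Create(shared, comm=comm)

local = np.empty(n, dtype='d')
win.Fence()              # open an RMA access epoch
if rank != 0:
    win.Get(local, 0)    # one-sided fetch from rank 0's window
win.Fence()              # close the epoch

if rank != 0:
    print(rank, local)
win.Free()
```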
The OpenFMO framework is extended to support multi-physics applications including scientific simulations (Fig. \[fig:OpenFMO\]). Since the current FMO skeleton is well configured and has proven effective in high-performance computing environments, it is better to develop the other multi-physics components separately. The key point is then the representation and manipulation of the physical data, for which a transparent access to the internal data from outer components should be provided. The semantic transformation of the data can also be executed by the Mediator component, depending on the needs.
By the use of the multi-physics extension, one can construct coupled simulations such as QM/MM or QM/MD calculations[@QM/MM90; @QM/MD03] for docking simulations of proteins and enzymes in aqueous solution, or RISM/SCF simulations[@rismBook] for various protein molecules together with theoretical studies[@TMO07]. One of the properties of the OpenFMO platform is its lightweight and reconfigurable skeleton program, which is useful for adapting applications to fast-changing computational environments.
Discussion
==========
The sizes of recent molecular package programs keep increasing because of their self-contained structure, with many functions corresponding to complicated options and various computer environments. Such development strategies potentially run into a dead end. Moreover, when we use multiple grid resources simultaneously, it is inevitable to prepare a tailored scheduler[@Grid2007] in each application. The most important task of a scientist is to develop effective theories and algorithms that can be implemented on every computer environment, while the actual implementation on a given environment should be done carefully with the help of computer scientists. Our development of the OpenFMO framework is one such attempt to implement the FMO algorithm on future computers.
This work is partly supported by the Ministry of Education, Sports, Culture, Science and Technology (MEXT) through the Science-grid NAREGI Program under the Development and Application of Advanced High-performance Supercomputer Project.
**A Natural Solution to the $\mu$ Problem**
J.A. CASAS${}^{*,**}\;$ and C. MUÑOZ${}^{***}$
${}^{*}$ CERN, CH–1211 Geneva 23, Switzerland
${}^{**}$ Instituto de Estructura de la Materia (CSIC),\
Serrano 123, E-28006 Madrid, Spain
${}^{***}$ Dept. de Física Teórica C-XI,\
Univ. Autónoma de Madrid, E-28049 Madrid, Spain
**Abstract**
We propose a simple mechanism for solving the $\mu$ problem in the context of minimal low–energy supergravity models. This is based on the appearance of non–renormalizable couplings in the superpotential. In particular, if $H_1H_2$ is an operator allowed by all the symmetries of the theory, it is natural to promote the usual renormalizable superpotential $W_o$ to $W_o+\lambda W_o H_1H_2$, yielding an effective $\mu$ parameter whose size is directly related to the gravitino mass once supersymmetry is broken (this result is maintained if $H_1H_2$ couples with different strengths to the various terms present in $W_o$). On the other hand, the $\mu$ term must be absent from $W_o$, otherwise the natural scale for $\mu$ would be $M_P$. Remarkably enough, this is entirely justified in the supergravity theories coming from superstrings, where mass terms for light fields are forbidden in the superpotential. We also analyse the $SU(2)\times U(1)$ breaking, finding that it takes place satisfactorily. Finally, we give a realistic example in which supersymmetry is broken by gaugino condensation, where the mechanism proposed for solving the $\mu$ problem can be gracefully implemented.
[CERN–TH.6764/92]{}\
[IEM–FT–66/92]{}\
[FTUAM 92/45]{}\
[December 1992]{}
Introduction
============
One of the interesting features of low–energy supergravity (SUGRA) models is that the electroweak symmetry breaking can be a direct consequence of supersymmetry (SUSY) breaking \[1\]. In the ordinary SUGRA models, SUSY breaking takes place in a hidden sector of the theory, so that the gravitino mass $m_{3/2}$ becomes of the electroweak scale order. Below the Planck mass, $M_P$, one is left with a global SUSY Lagrangian plus some terms (characterized by the $m_{3/2}$ scale) breaking explicitly, but softly, global SUSY. As we will briefly review below, the breakdown of $SU(2)\times U(1)_Y$ appears as an automatic consequence of the radiative corrections to these terms. The so–called $\mu$ problem \[2\] arises in this context.
Let us consider a SUGRA theory with superpotential $W(\phi_i)$ and canonical kinetic terms for the $\phi_i$ fields[^1]. Then, the scalar potential takes the form \[3\] $$\begin{aligned}
V = e^K \left[ \sum_i \left| \frac{\partial W}{\partial \phi_i}
+ \bar{\phi_i}W \right|^2 - 3|W|^2 \right]\;+\;\mathrm{D}\;\mathrm{terms}
\;\;,
\label{V}\end{aligned}$$ where $K=\sum_i|\phi_i|^2$ is the Kähler potential. It is customary to consider $W$ as a sum of two terms corresponding to the observable sector $W^{obs}(\phi_i^{obs})$ and a hidden sector $W^{hid}(\phi_i^{hid})$ $$\begin{aligned}
W(\phi_i^{obs},\phi_i^{hid})=
W^{obs}(\phi_i^{obs}) + W^{hid}(\phi_i^{hid})
\;\;.
\label{W}\end{aligned}$$ $W^{hid}(\phi_i^{hid})$ is assumed to be responsible for the SUSY breaking, which implies that some of the $\phi_i^{hid}$ fields acquire non–vanishing vacuum expectation values (VEVs) in the process. Then, the form of the effective observable scalar potential obtained from eq.(\[V\]), assuming vanishing cosmological constant, is \[4\] $$\begin{aligned}
V^{obs}_{eff} = \sum_i \left| \frac{\partial \hat{W}^{obs}}
{\partial \phi_i^{obs}}\right|^2 &+& m_{3/2}^2 \sum_i | \phi_i^{obs}|^2
+ \left( Am_{3/2}\hat W^{obs}_t + Bm_{3/2}\hat W^{obs}_b\; +\;
\mathrm{h.c.} \right)
\nonumber \\
\;&+&\;\mathrm{D}\;\mathrm{terms}
\label{Vobs}\end{aligned}$$ with $$\begin{aligned}
m_{3/2}^2 = e^{K^{hid}}|W^{hid}|^2
\label{m32}\end{aligned}$$ $$\begin{aligned}
B = A-1=\sum_i \left( | \phi_i^{hid}|^2 + \frac{\bar{\phi}_i^{hid}}
{\bar{W}^{hid}}\frac{\partial \bar{W}^{hid}}
{\partial \bar{\phi}_i^{hid}}\right)\;-\;1\;\;,
\label{AB}\end{aligned}$$ where $K^{hid}=\sum_i |\phi_i^{hid}|^2$, $\hat W^{obs}$ is the rescaled observable superpotential $\hat W^{obs}=e^{K^{hid}/2}
W^{obs}$, the subscript $t$ ($b$) denotes the trilinear (bilinear) part of the superpotential, and $A$, $B$ are dimensionless numbers of $O(1)$, which depend on the VEVs of the hidden fields. Since we are assuming that SUSY breaking takes place at the right scale, the gravitino mass given by eq.(\[m32\]) is hierarchically smaller than the Planck mass (i.e. of the order of the electroweak scale).
In the minimal supersymmetric standard model (MSSM) the matter content consists of three generations of quark and lepton superfields plus two Higgs doublets, $H_1$ and $H_2$, of opposite hypercharge. Under these conditions the most general effective observable superpotential has the form $$\begin{aligned}
W^{obs}=\sum_{generations}(h_uQ_LH_2u_R +
h_dQ_LH_1d_R + h_eL_LH_1e_R )+ \mu H_1H_2\;\;.
\label{Wobs}\end{aligned}$$ This includes the usual Yukawa couplings (in a self–explanatory notation) plus a possible mass term for the Higgses, where $\mu$ is a free parameter. From eq.(\[Vobs\]) the relevant Higgs scalar potential along the neutral direction for the electroweak breaking is readily obtained $$\begin{aligned}
V(H_1,H_2)=\frac{1}{8}(g^2+g'^2)\left(|H_1|^2-|H_2|^2\right)^2
+ \mu_1^2|H_1|^2 + \mu_2^2|H_2|^2 -\mu_3^2(H_1H_2+\mathrm{h.c.})
\;\;,
\label{Vhiggs}\end{aligned}$$ where $$\begin{aligned}
\mu_{1,2}^2 &=& m_{3/2}^2 + \hat\mu^2
\nonumber \\
\mu_{3}^2 &=& -Bm_{3/2}\hat\mu
\nonumber \\
\hat\mu &\equiv & e^{K^{hid}/2}\mu\;\;.
\label{mus}\end{aligned}$$ This is the SUSY version of the usual Higgs potential in the standard model. In order for the potential to be bounded from below, the condition $$\begin{aligned}
\mu_{1}^2+\mu_2^2-2|\mu_3^2|>0
\label{condmus}\end{aligned}$$ must be imposed all over the energy range $[M_Z,M_P]$. This implies in particular $\langle H_{1,2}\rangle = 0$ at the Planck scale. Below the Planck scale, one has to consider the radiative corrections to the scalar potential. Then the boundary conditions of eq.(\[mus\]) are substantially modified in such a way that the determinant of the Higgs mass–squared matrix becomes negative, triggering $\langle H_{1,2}\rangle \neq 0$ and $SU(2)\times U(1)_Y$ symmetry breaking \[1\].
For this scheme to work, the presence of the last term in eq.(\[Wobs\]) is crucial. If $\mu=0$, then the form of the renormalization group equations (RGEs) implies that such a term is not generated at any $Q$ scale since $\mu(Q)\propto \mu$. The same occurs for $\mu_3$, i.e. $\mu_3(Q)\propto \mu$. Then, the minimum of the potential of eq.(\[Vhiggs\]) occurs for $H_1=0$ and, therefore, $d$–type quarks and $e$–type leptons remain massless. Besides, the superpotential of eq.(\[Wobs\]) with $\mu=0$ possesses a spontaneously broken Peccei–Quinn symmetry \[5\] leading to the appearance of an unacceptable Weinberg–Wilczek axion \[6\].
Once it is accepted that the presence of the $\mu$ term in the superpotential is essential, there arises an immediate question: Is there any dynamical reason why $\mu$ should be small, of the order of the electroweak scale? Note that, in this respect, the $\mu$ term is different from the SUSY soft–breaking terms, which are characterized by the small scale $m_{3/2}$ once we assume correct SUSY breaking. In principle the natural scale of $\mu$ would be $M_P$, but this would reintroduce the hierarchy problem since the Higgs scalars get a contribution $\mu^2$ to their squared mass \[see eq.(\[mus\])\]. Thus, any complete explanation of the electroweak breaking scale must justify the origin of $\mu$. This is the so–called $\mu$ problem \[2\]. This problem has been considered by several authors and different possible solutions have been proposed \[2,7,8\]. In this letter we suggest a scenario in which $\mu$ is generated by non–renormalizable terms and its size is directly related to the gravitino mass. A comparison with the scenarios of refs.\[2,7,8\] is also made.
A natural solution to the $\mu$ problem
=======================================
Let us start with a simple scenario with superpotential $$\begin{aligned}
W=W_o + \lambda W_oH_1H_2
\;\;.
\label{WWo}\end{aligned}$$ where $W_o$ is the usual superpotential (including both observable and hidden sectors) [*without*]{} a $\mu H_1H_2$ term. We have allowed in (\[WWo\]) a non–renormalizable term, characterized by the coupling $\lambda=O(1)$ (in Planck units), which mixes the observable sector with the hidden sector (other higher–order terms of this kind could also be included, but they are not relevant for the present analysis). The $\mu H_1H_2$ term must be absent from $W_o$ since, as was mentioned above, the natural scale for $\mu$ would otherwise be $M_P$. Certainly, this is technically possible in a supersymmetric theory, since the non–renormalization theorems assure that this term cannot be generated radiatively if initially $\mu=0$. One may wonder, however, whether there is a theoretical reason for the absence of the $\mu H_1H_2$ term from $W_o$ in eq.(\[WWo\]), since it is not forbidden by any symmetry of the theory[^2]. It is quite remarkable here that this is provided in the low–energy SUSY theory obtained from superstrings. In this case mass terms (like $\mu H_1H_2$) are forbidden in the superpotential. We will see in section 4 an explicit example in this context. Finally, non-renormalizable terms (like $\lambda W_oH_1H_2$) are in principle allowed in a generic SUGRA theory. Next, we show that the $\lambda W_oH_1H_2$ term yields dynamically a $\mu$ parameter.
Using the general expression of eq.(\[V\]), the scalar potential $V$ generated by $W$ has the form $$\begin{aligned}
V = &e^K& \left\{ \sum_i \left| \frac{\partial [W_o(1+\lambda
H_1H_2)]}{\partial \phi_i}
+ \bar{\phi_i}W_o (1+\lambda H_1H_2)\right|^2 - 3|W_o(1+\lambda
H_1H_2)|^2 \right\}
\nonumber\\
\;&+&\;\mathrm{D}\;\mathrm{terms}
\;\;,
\label{V2}\end{aligned}$$ which can be written as $$\begin{aligned}
V = V^{(1)}|1+\lambda H_1H_2|^2 &+&
e^K \left\{ \left| \frac{\partial [W_o(1+\lambda
H_1H_2)]}{\partial H_1}
+ \bar{H_1}W_o (1+\lambda H_1H_2)\right|^2 +(H_1\leftrightarrow H_2)
\right\}
\nonumber\\
\;&+&\;\mathrm{D}\;\mathrm{terms}
\;\;,
\label{V3}\end{aligned}$$ where $$\begin{aligned}
V^{(1)}\equiv e^K \left( \sum_i \left| \frac{\partial W_o}
{\partial \phi_i}
+ \bar{\phi_i}W_o \right|^2 - 3|W_o|^2 \right)\;
;\;\;\phi_i\neq H_{1,2}\;\;.
\label{V1}\end{aligned}$$ Since $H_{1,2}$ enter in $W_o$ only through the ordinary Yukawa couplings and we are assuming vanishing VEVs for the observable scalar fields, it is clear (recall that $W_o$ does not contain a $\mu H_1H_2$ coupling) that $\left. \frac{\partial W_o}
{\partial H_{1,2}}\right|_{min}=0$. Besides, the vanishing of the cosmological constant implies $V^{(1)}=0$ at the minimum of the potential. So, we can extract from the second term in eq.(\[V3\]) the soft terms associated with $H_{1,2}$: $$\begin{aligned}
V(H_1,H_2)=&\frac{1}{8}&(g^2+g'^2)\left(|H_1|^2-|H_2|^2\right)^2
+ m_{3/2}^2(1+\lambda^2)|H_1|^2 + m_{3/2}^2(1+\lambda^2)|H_2|^2
\nonumber\\
&+&2m_{3/2}^2\lambda(H_1H_2+\mathrm{h.c.})
\;\;.
\label{Vhiggs2}\end{aligned}$$ Comparing eqs.(\[Wobs\]–\[mus\]) with eqs.(\[WWo\],\[Vhiggs2\]) it is clear that $\lambda W_oH_1H_2$ behaves like a $\mu$ term when $W_o$ acquires a non–vanishing VEV dynamically. Defining $\lambda\langle W_o\rangle\equiv\mu$ we can write eq.(\[Vhiggs2\]) as eqs.(\[Vhiggs\],\[mus\]) where now the value of $B$ is $$\begin{aligned}
B=2
\;\;.
\label{B}\end{aligned}$$ The value of $A$ is still given by eq.(\[AB\]), but the relation $B=A-1$ is no longer true. The fact that the new “$\mu$ parameter” is of the electroweak–scale order is a consequence of our assumption of a correct SUSY–breaking scale $m_{3/2}=e^{K/2}W=O(M_Z)$. Finally, note that the usual condition for the potential to be bounded from below (\[condmus\]) is automatically satisfied by (\[Vhiggs2\]) for any value of $\lambda$.
One may wonder how general is the simple scenario of eq.(\[WWo\]). First of all, let us note that the fact that $H_1H_2$ is not forbidden by any symmetry of the theory is a key ingredient for this scenario to work. An obvious generalization of (\[WWo\]) arises when $W_o$ consists of several terms $W_o=W_o^{(1)}+W_o^{(2)}+...$ and $H_1H_2$ couples with a different strength to each term, i.e. $(\lambda_1W_o^{(1)}+
\lambda_2W_o^{(2)}+...)H_1H_2$. However, provided that the hierarchically small value of $\langle W_o\rangle$ is not achieved by a fine–tuning between the VEVs of the various terms $W_o^{(1)},W_o^{(2)},...$, it is clear that the order of magnitude of $\mu$ continues to be $m_{3/2}$. Apart from this, it should be noticed that $\lambda_i=O(1)$ (in Planck units) is only natural if $W_o^{(i)}$ is not an operator with an extremely small coupling constant. However, this would be a naturalness problem by itself. This would happen, for instance, for $W_o^{(i)}=m\Phi^2$ with $m\ll M_P$. (These terms are forbidden in string theories.)
To conclude this section, it is worth noticing that in the context of supergravity theories there is another possible solution to the $\mu$ problem. Since the Kähler potential $K$ is an arbitrary real–analytic function of the scalar fields, we can study for example a theory with the following $K$ $$\begin{aligned}
K=\sum_i|\phi_i|^2 + f(g(\phi_j, \bar \phi_j)H_1H_2\ +\ \mathrm{h.c.})
\;\;,
\label{K}\end{aligned}$$ where $\phi_j\neq H_{1,2}$ and $f$ and $g$ are generic functions ($\langle g(\phi_j, \bar\phi_j)\rangle= O(1)$). Then, although $W_o$ does not contain a $\mu$ term, this is generated in the scalar potential. This is trivial to see for the simplest case (i.e. $f(x)=x$, $g=$ const. $\equiv\lambda$). Then the theory is equivalent to one with Kähler potential $\sum_i|\phi_i|^2$ and superpotential $W_oe^{\lambda H_1H_2}$, since the function ${\cal G}=K+\log|W|^2$ that defines the SUGRA theory is the same for both. Expanding the exponential, the first two terms coincide with eq.(\[WWo\]) and hence we obtain the same $\mu$ term as in eq.(\[Vhiggs2\]). The possibility (\[K\]) was examined in ref.\[7\] for $f(x)=x$ and when $g$ is a non–trivial function of the hidden fields, in particular for the simplest case $g(\phi_j, \bar\phi_j)=\bar\xi$, where $\bar\xi$ is a hidden field. It remains to be explored whether a Kähler potential similar to that of eq.(\[K\]) can arise in the context of superstring theories.
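To spell out the equivalence invoked above for the simplest case (constant real $\lambda$), both sets of data define the same Kähler function, and expanding the exponential reproduces eq.(\[WWo\]) at first order: $${\cal G}=\Big[\sum_i|\phi_i|^2+\lambda(H_1H_2+\mathrm{h.c.})\Big]+\log|W_o|^2
=\sum_i|\phi_i|^2+\log\left|W_o\,e^{\lambda H_1H_2}\right|^2\;\;,\qquad
W_o\,e^{\lambda H_1H_2}=W_o+\lambda W_oH_1H_2+O\big(\lambda^2(H_1H_2)^2\big)\;\;.$$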
Expectation values for the Higgses
==================================
In the above analysed solution to the $\mu$ problem it is assumed that the observable scalar fields have vanishing VEVs at the Planck scale. Since the non–renormalizable term $\lambda W_oH_1H_2$ mixes observable and hidden fields, one may wonder whether that assumption is still true for the Higgses. We will show now that this is in fact the case.
We assume here that the initial superpotential $W_o$ gives a correct SUSY breaking, i.e. small gravitino mass and vanishing cosmological constant. This means that $V_o$, i.e. the scalar potential derived from $W_o$, is vanishing at the minimum $\left. V_{o}\right|_{min}=0$ and thus positive–definite. Using the general expression of eq.(\[V\]), $V_o$ can be decomposed in three pieces $$\begin{aligned}
V_o = V^{(1)}\;+\;e^K \left\{ \left| \frac{\partial W_o}{\partial H_1}
+ \bar{H_1}W_o \right|^2 + (H_1\rightarrow H_2)\right\}\;+\;
\mathrm{D}\;\mathrm{terms}
\;\;,
\label{Vo}\end{aligned}$$ where $V^{(1)}$ is defined in eq.(\[V1\]). Recalling that we are assuming that $W_o$ does not contain a $\mu H_1H_2$ term and that $\left. \frac{\partial W_o}{\partial H_{1,2}}\right|_{min}=0$ (since squarks and sleptons are supposed to have vanishing VEVs), it is clear that $V^{(1)}$ is flat in $H_{1,2}$. So, the minimum of the second piece of (\[Vo\]) is zero and occurs at $H_{1,2}=0$ (for any value of $W_o$). Therefore, necessarily $\left. V^{(1)}\right|_{min}=0$, i.e. $V^{(1)}$ is also positive–definite. All this is very ordinary: it simply means that the hidden sector is entirely responsible for the breaking. (Note that the $H_{1,2}$ F–terms are vanishing, while some of the hidden fields F–terms must be different from zero.) Notice also that from (\[Vo\]) one obtains $e^{K}|W_o|^2(|H_1|^2+|H_2|^2)=
m_{3/2}^2|H_1|^2+m_{3/2}^2|H_2|^2$ but, because of the absence of a $\mu H_1H_2$ term in $W_o$, there is no $Bm_{3/2}\hat\mu H_1H_2$ term in the scalar potential.
Let us now study the impact of doing, according to our approach, $W_o \rightarrow W = W_o + \lambda
W_oH_1H_2$. The corresponding scalar potential, $V$, has already been written in eq.(\[V3\]). Now, since $V^{(1)}$ is positive–definite, so is $V$. In fact, the minimum of $V$ is for $V=0$ and occurs when the three pieces of (\[V3\]) are vanishing. Clearly, the minimum of the first and third pieces of (\[V3\]) coincides with that of eq.(\[Vo\]) above, implying $\left. V^{(1)}\right|_{min}=0$,[^3] and thus the VEV of $W_o$ is the same as when we started with just $W_o$. Finally, recalling that $\left. \frac{\partial W_o}{\partial H_{1,2}}
\right|_{min}=0$, it is clear that the second piece of $V$ in eq.(\[V3\]) has two possible minima $$\begin{aligned}
H_1, H_2=0
\;\;,
\label{Hs1}\end{aligned}$$ $$\begin{aligned}
\lambda H_2+(1+\lambda H_1H_2)\bar H_1&=&0
\nonumber\\
(H_1\leftrightarrow H_2)&=&0
\label{Hs2}\end{aligned}$$ As was explained in section 1, the solution (\[Hs1\]) is the phenomenologically interesting one, whereas the solution (\[Hs2\]) leads to $H_{1,2}\sim M_P$, so it is not phenomenologically viable. We can ignore this solution since if $H_{1,2}$ are initially located at $H_{1,2}=0$ (e.g. by thermal effects) they will remain there as long as (\[Hs1\]) continues to be a minimum solution. Of course, radiative corrections will trigger non–zero VEVs of the correct size for $H_1$, $H_2$.
A realistic example
===================
As we saw in section 2, the assumption of correct SUSY breaking was crucial for obtaining the $\mu$ parameter of the electroweak–scale order. As a matter of fact, gaugino condensation effects in the hidden sector \[9\] are the most satisfactory mechanism so far explored, able to break SUSY at a scale hierarchically smaller than $M_P$ \[10\]. The reason is that the scale of gaugino condensation corresponds to the scale at which the gauge coupling becomes large, and this is governed by the running of the coupling constant. Since the running is only logarithmically dependent on the scale, the gaugino condensation scale is suppressed relative to the initial one by an exponentially small factor $\sim e^{-1/2\beta g^2}$ ($\beta$ is the one–loop coefficient of the beta function of the hidden sector gauge group $G$). This mechanism has been intensively studied in the context of SUGRA theories coming from superstrings \[11,12\], where the gauge coupling is related to the VEV of the dilaton field $S$ (more specifically Re$S=g^{-2}$). Recall that we have argued in section 2 that superstring theories are precisely a natural context where the solution of the $\mu$ problem presented here can be implemented, since mass terms, such as $\mu H_1 H_2$, appearing in the superpotential are automatically forbidden in superstrings. Besides, non–renormalizable terms like $\lambda W_oH_1H_2$ in eq.(\[WWo\]) are in principle allowed and, in fact, they are usually present \[13\].
In the absence of hidden matter, the condensation process is correctly described by a non–perturbative effective superpotential $$\begin{aligned}
W_o\propto e^{-3S/2\beta_o}
\;\;,
\label{Wcond}\end{aligned}$$ with $\beta_o=3C(G)/16\pi^2$, where $C(G)$ is the Casimir operator in the adjoint representation of $G$. It is difficult to imagine, however, how the mechanism expounded in section 2 could be implemented here. More precisely, it is not clear that we could have something like $W=W_o+\lambda W_oH_1H_2$, due to the effective character of (\[Wcond\]).
Fortunately, things are different in the presence of hidden matter, which is precisely the most frequent case in string constructions \[13\]. There is not at present a generally accepted formalism describing the condensation in the presence of massless matter, but the case of massive matter is well understood \[14\]. For example, in the case of $G=SU(N)$ with $M(N+\bar N)$ “quark” representations $Q_\alpha$, $\bar Q_\alpha$, $\alpha=1,...,M$, with a mass term given by $$\begin{aligned}
W_o^{pert}=-\sum_{\alpha,\beta}{\cal M}_{\alpha,\beta}Q_{\alpha}
\bar Q_\beta
\;\;,
\label{Wpert}\end{aligned}$$ the complete condensation superpotential can be written as \[12\] $$\begin{aligned}
W_o\propto [\mathrm{det}{\cal M}]^{\frac{1}{N}} e^{-3S/2\beta_o}
\;\;.
\label{Wcond2}\end{aligned}$$ It should be noticed here that, strictly speaking, there are no mass terms like (\[Wpert\]) in the context of string theories. However the matter fields usually have trilinear couplings which play the role of mass terms with a dynamical mass given by the VEV of another matter field. The simplest case occurs when there is an $SU(N)$ singlet field $A$ giving mass to all the quark representations. Then (\[Wpert\]) takes the form $$\begin{aligned}
W_o^{pert}=-\sum_{\alpha=1}^M AQ_{\alpha}\bar Q_\alpha
\;\;,
\label{Wpert2}\end{aligned}$$ and $\mathrm{det}{\cal M}=A^M$. Now, if $H_1H_2$ is a coupling allowed by all the symmetries of the theory, it is natural to promote $W_o^{pert}$ to[^4] $$\begin{aligned}
W^{pert}=-\sum_{\alpha}A(1+\lambda' H_1H_2)Q_{\alpha}\bar Q_\alpha
\;\;,
\label{Wpert3}\end{aligned}$$ so that $\mathrm{det}{\cal M}=[A(1+\lambda' H_1H_2)]^M$, and (\[Wcond2\]) takes the form $$\begin{aligned}
W_o\rightarrow W\propto
[A(1+\lambda' H_1H_2)]^{\frac{M}{N}} e^{-3S/2\beta_o}
\simeq A^{\frac{M}{N}}(1+\frac{M}{N}\lambda' H_1H_2) e^{-3S/2\beta_o}
\;\;.
\label{Wcond3}\end{aligned}$$ Thus $$\begin{aligned}
W=W_o+\lambda W_oH_1H_2
\;\;,
\label{WWocond2}\end{aligned}$$ where we have defined $\lambda\equiv\frac{M}{N}\lambda'$. This is precisely the kind of superpotential we wanted (see eq.(\[WWo\])) in order to generate the $\mu$ term dynamically.
In ref.\[8\] an interesting solution to the $\mu$ problem was proposed in a similar context with a PQ symmetry, using the presence of a term $H_1H_2Q\bar Q$ in the superpotential and assuming that the scalar components of $Q$ and $\bar Q$ condense at a scale $\Lambda\simeq 10^{11}$ GeV. As mentioned above, the only accepted formalism describing the condensation is in the presence of massive matter. Thus the previous term behaves as a dynamical mass term for the squarks and the complete superpotential (\[Wcond2\]) becomes $W\propto (H_1H_2)^{\frac{1}{N}}
e^{-3S/2\beta_o}$. This is phenomenologically unviable since the Higgses must have vanishing VEVs at $M_P$ for a correct phenomenology, which would imply $\langle W\rangle=0$ and thus no SUSY breaking. We can improve this model by including a mass term for $Q\bar Q$. However, a genuine mass term for $Q\bar Q$ would break the PQ symmetry, so one should consider something similar to (\[Wpert2\]). Then the perturbative superpotential is $$\begin{aligned}
W^{pert}\sim AQ\bar Q + H_1H_2 Q\bar Q
\;\;,
\label{Wpert4}\end{aligned}$$ and the scenario becomes much more similar to that given by eq.(\[Wpert3\]). However, there still is an important difference. In eq.(\[Wpert3\]) $H_1H_2$ couples to $AQ\bar Q$ (which is the natural thing if $H_1H_2$ is invariant under all the symmetries of the theory) instead of $Q\bar Q$; thus there is no PQ symmetry. Moreover, (\[Wpert3\]) leads to (\[WWocond2\]) in which the $\mu$ scale is directly given by the $m_{3/2}$ scale ($\mu=O(m_{3/2})$). However from (\[Wpert4\]) the $\mu$ scale is given by the squark condensation scale \[12\] $\langle Q\bar Q\rangle/M_P\simeq
m_{3/2}M_P/N\langle A\rangle$, so that the value of $\mu$ in this case tends to be a bit too large.
Summary and conclusions
=======================
We have proposed a simple mechanism for solving the $\mu$ problem in the context of minimal low–energy SUGRA models. This is based on the appearance of non–renormalizable couplings in the superpotential. In particular, if $H_1H_2$ is an operator allowed by all the symmetries of the theory, it is natural to promote the usual renormalizable superpotential $W_o$ to $W_o+\lambda W_o H_1H_2$, yielding an effective $\mu$ parameter whose size is directly related to the gravitino mass once SUSY is broken (this result is essentially maintained if $H_1H_2$ couples with different strengths to the various terms present in $W_o$).
On the other hand, the $\mu$ term must be absent in $W_o$, otherwise the natural scale for $\mu$ would be $M_P$. Certainly this is technically possible in a supersymmetric theory since the non–renormalization theorems assure that this term cannot be generated radiatively if initially $\mu=0$. Remarkably enough, however, a theoretical reason for the absence of the $\mu H_1H_2$ term from $W_o$ is provided in the low–energy SUSY theory obtained from superstrings. In this case mass terms (such as $\mu H_1H_2$) are forbidden in the superpotential (however, non–renormalizable terms like $\lambda W_oH_1H_2$ are in principle allowed and, in fact, they are usually present).
We have also addressed other alternative solutions, comparing them with the one proposed here. On the other hand, we have analysed the $SU(2)\times U(1)$ breaking, finding that it takes place satisfactorily.
Finally, we have given a realistic example in which SUSY is broken by gaugino condensation in the presence of hidden matter (which is the usual situation in strings), and where the mechanism proposed for solving the $\mu$ problem can be gracefully implemented.
[**ACKNOWLEDGEMENTS**]{} We gratefully acknowledge J. Louis for extremely useful discussions.
[99]{}
For a recent review, see: L.E. Ibañez and G.G. Ross, CERN–TH.6412/92 (1992), to appear in Perspectives in Higgs Physics, ed. G. Kane, and references therein
J.E. Kim and H.P. Nilles, Phys. Lett. B138 (1984) 150
E. Cremmer, S. Ferrara, L. Girardello and A. Van Proeyen, Nucl. Phys. B212 (1983) 413
R. Barbieri, S. Ferrara and C.A. Savoy, Phys. Lett. 119B (1982) 343; L. Hall, J. Lykken and S. Weinberg, Phys. Rev. D27 (1983) 2359
R. Peccei and H. Quinn, Phys. Rev. Lett. 38 (1977) 1440
S. Weinberg, Phys. Rev. Lett. 40 (1978) 223; F. Wilczeck, Phys. Rev. Lett. 40 (1978) 229
G.F. Giudice and A. Masiero, Phys. Lett. B206 (1988) 480
J.E. Kim and H.P. Nilles, Phys. Lett. B263 (1991) 79; E.J. Chun, J.E. Kim and H.P. Nilles, Nucl. Phys. B370 (1992) 105
H.P. Nilles, Phys. Lett. B115 (1982) 193, Nucl. Phys. B217 (1983) 366; S. Ferrara, L. Girardello and H.P. Nilles, Phys. Lett. B125 (1983) 457
For a review, see: H.P. Nilles, Int. J. Mod. Phys. A5 (1990) 4199
J.P. Derendinger, L.E. Ibáñez and H.P. Nilles, Phys. Lett. B155 (1985) 65; M. Dine, R. Rohm, N. Seiberg and E. Witten, Phys. Lett. B156 (1985) 55; N.V. Krasnikov, Phys. Lett. B193 (1987) 37; L. Dixon, talk presented at the A.P.S. D.P.F. Meeting at Houston (1990); V. Kaplunovsky, talk presented at the “Strings 90” Workshop at College Station (1990); J.A. Casas, Z. Lalak, C. Muñoz and G.G. Ross, Nucl. Phys. B347 (1990) 243; A. Font, L. Ibáñez, D. Lüst and F. Quevedo, Phys. Lett. B245 (1990) 401; M. Cvetic, A. Font, L. Ibáñez, D. Lüst and F. Quevedo, Nucl. Phys. B361 (1991) 194; S. Ferrara, N. Magnoli, T.R. Taylor and G. Veneziano, Phys. Lett. B245 (1990) 409; H.P. Nilles and M. Olechowsky, Phys. Lett. B248 (1990) 268; P. Binétruy and M.K. Gaillard, Phys. Lett. B253 (1991) 119; J. Louis, SLAC–PUB–5645 (1991); B. de Carlos, J.A. Casas and C. Muñoz, CERN–TH.6436/92 (1992), to appear in Nucl. Phys. B
D. Lüst and T.R. Taylor, Phys. Lett. B253 (1991) 335; B. de Carlos, J.A. Casas and C. Muñoz, Phys. Lett. B263 (1991) 248; D. Lüst and C. Muñoz, Phys. Lett. B279 (1992) 272
J.A. Casas, E.K. Katehou and C. Muñoz, Nucl. Phys. B317 (1989) 171; J.A. Casas and C. Muñoz, Phys. Lett. B214 (1988) 63; A. Font, L. Ibáñez, H.P. Nilles and F. Quevedo, Phys. Lett. B210 (1988) 101; I. Antoniadis, J. Ellis, J.S. Hagelin and D.V. Nanopoulos, Phys. Lett. B205 (1988) 459, B213 (1988) 56; A. H. Chamseddine and M. Quirós, Nucl. Phys. B316 (1989) 101
T.R. Taylor, G. Veneziano and S. Yankielowicz, Nucl. Phys. B218 (1983) 493; I. Affleck, M. Dine and N. Seiberg, Nucl. Phys. B241 (1984) 493; D. Amati, K. Konishi, Y. Meurice, G.C. Rossi and G. Veneziano, Phys. Rep. 162 (1988) 169
[^1]: We will consider this case throughout the paper for simplicity. Our general conclusions will not be modified by taking a more general case.
[^2]: The $\mu H_1H_2$ term can be forbidden by invoking a Peccei–Quinn (PQ) symmetry \[2,8\]. This is not possible here since (\[WWo\]) does not possess any PQ symmetry.
[^3]: The only exception occurs if $\lambda H_1H_2=-1$, but then the second piece of (\[V3\]), which is also positive–definite, is different from zero, so this is not a solution for the minimization of the whole potential.
[^4]: We neglect here higher–order non–renormalizable couplings since they do not contribute to the $\mu$ term.
---
abstract: 'In this article, we study the higher order term of the fidelity of the Heisenberg chain with next-nearest-neighbor interaction and analyze its connection with the quantum phase transition of Beresinskii-Kosterlitz-Thouless type occurring in the system. We calculate the fidelity susceptibility of the system and find that, although the phase transition point cannot be well characterized by the fidelity susceptibility, it can be effectively picked out by the higher order term of the ground-state fidelity for finite-size systems.'
author:
- Li Wang
- 'Shi-Jian Gu'
- Shu Chen
title: 'High-order fidelity and quantum phase transition for the Heisenberg chain with next-nearest-neighbor interaction'
---
Introduction
============
Quantum phase transitions (QPTs) of quantum many-body systems have attracted persistent interest from physicists in recent years. Owing to the diversity of quantum phases and QPTs, finding universal methods to characterize QPTs is both meaningful and urgent. From the viewpoint of the Landau-Ginzburg theory, which is widely accepted in condensed matter physics [@sachdev], a QPT is connected with a corresponding order parameter and symmetry breaking. However, there are also QPTs which cannot be well understood within the Landau-Ginzburg paradigm, such as topological phase transitions [@xgwen] and Beresinskii-Kosterlitz-Thouless (BKT) phase transitions [@beresinskii; @kosterlitz]. Recently, an increasing research effort has been focused on the role of ground-state fidelity in characterizing QPTs [@Gu_review; @htquan; @zanardi06; @hqzhou; @YouWL07; @zanardi07PRl; @schen07pre; @schen08pra; @buonsante; @mfyang; @ZhouPRL]. As a basic concept in quantum information science, the fidelity measures the similarity between two states and is simply defined as the modulus of their overlap [@zanardi06]. The fidelity approach provides us with a novel way to understand QPTs from the viewpoint of quantum information theory. So far, QPTs in various quantum many-body systems [@YouWL07; @hqzhou; @mfyang; @qhchen; @mfyang08; @WangXG08; @schen07pre; @schen08pra; @Paunkovic; @Venuti; @buonsante; @AHamma07; @abasto; @YangS; @zanardi07PRl; @WangXG; @Zhou09; @Zhou0803; @ZhouPRL] have been shown to be well characterized by the ground-state fidelity or by the fidelity susceptibility, which is the leading term of the fidelity [@YouWL07; @zanardi07PRl].
Generally, one may expect that the structure of the ground states in different phases is basically different and should reveal itself through some sort of singular behavior of the ground-state fidelity or the fidelity susceptibility at the transition point [@zanardi06; @hqzhou]. Despite its great success in applications to various systems, this intuitive idea turns out to be incomplete [@schen07pre; @mfyang; @schen08pra; @YouWL07; @mfyang08]. Although the fidelity and the fidelity susceptibility can successfully describe first- and second-order QPTs [@schen08pra], as well as topological QPTs [@AHamma07; @abasto; @YangS; @Zhou0803], there are also some ambiguous cases in which both methods do not work very effectively [@schen07pre; @schen08pra; @YouWL07; @mfyang08]. Very recently, the controversial issue of the BKT phase transition and the ground-state fidelity has been studied in Ref. [@Zhou09] from the perspective of matrix product states, which essentially rely on classical simulations of quantum lattice systems [@ZhouPRL].
In cases where the leading term of the fidelity (the fidelity susceptibility) does not work very effectively, the higher order terms of the fidelity may be worth studying. Up to now, there is still a lack of literature concerning this part of the fidelity. Here, in this paper, we investigate the effect of the higher order term of the fidelity on the characterization of the BKT-type phase transition occurring in the Heisenberg chain with next-nearest-neighbor (NNN) interaction [@Haldane]. We will show that, although the fidelity and the fidelity susceptibility cannot effectively characterize the BKT-type phase transition point of the Heisenberg chain with NNN interaction, the higher order term of the fidelity provides a promising way of detecting such a transition.
Our paper is organized as follows. In Sec. II, we present the formalism of the higher order terms of the fidelity. The subsequent section is devoted to the calculation of the higher order term of the fidelity for the Heisenberg chain with NNN interaction and shows its connection to the quantum phase transition of the system. A brief summary is given in Sec. \[sec:sum\].
Higher order of the fidelity {#sec:highorder}
============================
As usual, the ground state fidelity is defined as the modulus of the overlap between $|\Psi_0(\lambda) \rangle$ and $| \Psi_0(\lambda+\delta\lambda)
\rangle$, i.e. $$F(\lambda, \delta\lambda) =\left| f(\lambda, \lambda+\delta\lambda) \right |
= \left | \langle \Psi_0(\lambda)| \Psi_0(\lambda+\delta\lambda) \rangle
\right | , \tag{1} \label{eqF}$$ where $\Psi_0(\lambda)$ is the ground-state wavefunction of Hamiltonian $%
H=H_0 + \lambda H_I$, $\lambda$ is the driving parameter and $\delta \lambda$ is a small deviation in the parameter space of $\lambda$. The fidelity susceptibility denotes only the leading term of the fidelity. Straightforwardly, one can obtain the higher order terms of the fidelity following a similar expansion to the one used in deriving the fidelity susceptibility [@YouWL07]. By using the Taylor expansion, the overlap between two wavefunctions $|\Psi _{0}(\lambda )\rangle $ and $|\Psi _{0}(\lambda +\delta
\lambda )\rangle $ can be expanded to an arbitrary order of $\delta\lambda$, i.e. $$f(\lambda ,\lambda +\delta \lambda )=1+\sum_{n=1}^{\infty }\frac{(\delta
\lambda )^{n}}{n!}\left\langle \Psi _{0}(\lambda )\left\vert \frac{\partial
^{n}}{\partial \lambda ^{n}}\Psi _{0}(\lambda )\right. \right\rangle .
\tag{2} \label{eqf}$$ Therefore, the fidelity becomes $$\begin{aligned}
F^{2}=&1+\sum_{n=1}^{\infty }\frac{(\delta \lambda )^{n}}{n!}\left\langle
\Psi _{0}\left\vert \frac{\partial ^{n}}{\partial \lambda ^{n}}\Psi
_{0}\right. \right\rangle+ \notag \\
&\sum_{n=1}^{\infty }\frac{(\delta \lambda )^{n}}{n!}\left\langle \left.
\frac{\partial ^{n}}{\partial \lambda ^{n}}\Psi _{0}\right\vert \Psi
_{0}\right\rangle + \notag \\
&\sum_{m,n=1}^{\infty }\frac{(\delta \lambda )^{m+n}}{m!n!}\left\langle \Psi
_{0}\left\vert \frac{\partial ^{n}}{\partial \lambda ^{n}}\Psi _{0}\right.
\right\rangle \left\langle \left. \frac{\partial ^{m}}{\partial \lambda ^{m}}%
\Psi _{0}\right\vert \Psi _{0}\right\rangle. \tag{3} \label{eqF2}\end{aligned}$$ We note that $\frac {\partial^{n}} {\partial \lambda^{n}} \langle
\Psi_0(\lambda)| \Psi_0(\lambda) \rangle =0$ and use the relation for a given $n$ $$\sum_{m=0}^{n}\frac{n!}{m!(n-m)!}\left\langle \left. \frac{\partial ^{m}}{%
\partial \lambda ^{m}}\Psi _{0}\right\vert \frac{\partial ^{n-m}}{\partial
\lambda ^{n-m}}\Psi _{0}\right\rangle =0 , \tag{4} \label{relation}$$ then we can simplify the expression of (\[eqF2\]) into $$F^{2}=1-\sum_{l=1}^{\infty }(\delta \lambda )^{l}\chi _{F}^{(l)} \tag{5}
\label{F2simple}$$where $$\chi _{F}^{(l)}=\sum_{l=m+n}\frac{1}{m!n!}\left\langle \left. \frac{\partial
^{m}}{\partial \lambda ^{m}}\Psi _{0}\right\vert \hat {P} \left\vert \frac{%
\partial ^{n}}{\partial \lambda ^{n}}\Psi _{0}\right. \right\rangle ,
\tag{6} \label{eq:higherorderdiff}$$ with the projection operator $\hat {P}$ defined as $\hat {P}=1- |\Psi_0
\rangle \langle \Psi_0 |$. It is easy to check that $\chi _{F}^{(1)}$ is zero and that $\chi _{F}^{(2)}$ is the fidelity susceptibility [@YouWL07].
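Indeed, since the projector annihilates the ground state, $\hat P |\Psi_0\rangle =0$, the $l=1$ contribution in eq.(\[eq:higherorderdiff\]) vanishes identically, $$\chi _{F}^{(1)}=\Big\langle \frac{\partial \Psi _{0}}{\partial \lambda }\Big|\hat P\Big|\Psi _{0}\Big\rangle +\Big\langle \Psi _{0}\Big|\hat P\Big|\frac{\partial \Psi _{0}}{\partial \lambda }\Big\rangle =0\,,$$ so that the expansion of $1-F^{2}$ indeed starts at second order in $\delta \lambda$.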
Next we shall consider the third order fidelity $\chi _{F}^{(3)}$ and apply it to probe the phase transition in the spin chain model with NNN exchanges. Alternatively, one can directly derive the expression for $\chi _{F}^{(3)}$ from the perturbation expansion of the GS wavefunction. According to perturbation theory, the GS wavefunction, up to second order, is $$\begin{aligned}
|\Psi _{0}(\lambda +\delta \lambda )\rangle =& |\Psi _{0}\rangle +\delta
\lambda \sum_{n\neq 0}\frac{H_{I}^{n0}|\Psi _{n}\rangle }{E_{0}-E_{n}} \\
& +\left( \delta \lambda \right) ^{2}\sum_{m,n\neq 0}\frac{%
H_{I}^{nm}H_{I}^{m0}|\Psi _{n}\rangle }{(E_{0}-E_{m})(E_{0}-E_{n})} \\
& -\left( \delta \lambda \right) ^{2}\sum_{n\neq 0}\frac{%
H_{I}^{00}H_{I}^{n0}|\Psi _{n}\rangle }{(E_{0}-E_{n})^{2}} \\
& -\frac{\left( \delta \lambda \right) ^{2}}{2}\sum_{n\neq 0}\frac{%
H_{I}^{0n}H_{I}^{n0}|\Psi _{0}\rangle }{(E_{0}-E_{n})^{2}}.\end{aligned}$$The 3rd order term $\chi _{F}^{(3)}$, which is proportional to the 3rd order derivative of GS fidelity, can be then directly extracted from eq. (\[F2simple\]): $$\chi _{F}^{(3)}=\sum_{m,n\neq 0}\frac{2H_{I}^{0m}H_{I}^{mn}H_{I}^{n0}}{%
(E_{0}-E_{m})(E_{0}-E_{n})^{2}}-\sum_{n\neq 0}\frac{2H_{I}^{00}\left\vert
H_{I}^{n0}\right\vert ^{2}}{(E_{0}-E_{n})^{3}}. \tag{8}
\label{eq:higheroderperturb}$$
Eqs. (\[eq:higherorderdiff\]) and (\[eq:higheroderperturb\]) present the main formalism of the higher order expansion of the fidelity. So far, the explicit physical meaning of the higher order terms of the fidelity is still not clear. The expression for the 3rd order fidelity bears a similarity to the corresponding 3rd derivative of the GS energy, which has the following form $$\frac {\partial^3 E} {\partial \lambda^3} = \sum_{m,n\neq0}\frac{6
H_I^{0n}H_I^{nm}H_I^{m0}}{(E_0-E_m)(E_0-E_n)} -\sum_{n\neq0}\frac{6
H_I^{00}\left|H_I^{n0}\right|^2}{(E_0-E_n)^2}. \tag{9}$$ Obviously, the 3rd order fidelity is more divergent than the 3rd derivative of the GS energy. A similar connection between the fidelity susceptibility and the 2nd derivative of the GS energy has been unveiled [@schen08pra]. Generally, the $n$-th order fidelity is much more divergent than the corresponding $n$-th order derivative of the GS energy; therefore an $n$-th order QPT can certainly be detected by the $n$-th order fidelity. However, this conclusion does not exclude the possibility that the $n$-th order fidelity can detect an even higher order or infinite order QPT. A concrete example has been given in Ref. [@mfyang08], where a QPT of higher than second order was singled out unambiguously by using the fidelity susceptibility despite the corresponding second derivative of the ground-state energy density showing no signal of divergence. So far, no example of a BKT-type QPT unambiguously detected by the fidelity susceptibility has been given. Next we shall attempt to apply the third-order fidelity to study the BKT-type transition in the spin chain model with NNN exchanges.
The model and the calculation of 3rd order fidelity {#sec:model}
===================================================
Now we turn to the one-dimensional Heisenberg chain with the NNN coupling described by the Hamiltonian $$H(\lambda )=\sum_{j=1}^{L}\left( \hat{s}_{j}\hat{s}_{j+1}+\lambda \hat{s}_{j}%
\hat{s}_{j+2}\right) , \tag{10} \label{Ham}$$ where $\hat{s}_{j}$ denotes the spin-1/2 operator at the $j\,$th site and $L$ denotes the total number of sites. The driving parameter $\lambda $ represents the ratio between the NNN coupling and the nearest-neighbor (NN) coupling. The GS properties of the model (\[Ham\]) have been widely studied by both analytical [@Haldane; @Giamarchi] and numerical methods [@Okamoto; @Castilla; @RChitra; @SRWhite96]. The QPT driven by $\lambda $ is well understood. The driving term due to $\lambda $ is irrelevant when $\lambda <\lambda _{c}(\simeq 0.2411)$, and the system flows to a spin fluid or Luttinger liquid with massless spinon excitations. For $\lambda >\lambda _{c}$, the frustration term is relevant and the ground state flows to the dimerized phase, with a spin gap opening [@Haldane; @Giamarchi]. The transition from the spin fluid to the dimerized phase is known to be of BKT type [@Haldane; @Giamarchi], for which the transition point was hard to determine numerically due to the problem of logarithmic corrections [@Affleck]. The critical value $\lambda _{c}=0.2411\pm 0.0001$ has been accurately determined by various numerical methods [@Okamoto; @Castilla; @RChitra; @SRWhite96].
![The GS fidelity susceptibility of the heisenberg chain with next-nearest-neighbor interaction for finite system size from 14 sites to 26 sites. Obviously, there is no expected peaks can be observed.[]{data-label="Figure1"}](Fig1.eps){width="9cm"}
The GS fidelity for the model (\[Ham\]) has been studied in Ref. [@schen07pre], and also in Ref. [@WangXG] in terms of the operator fidelity. No singularities in the GS fidelity or in the operator fidelity around $\lambda_c$ have been detected for systems of different sizes, which implies that the GS fidelity may not be an effective characterization of the BKT-type QPT in this model. The BKT-type QPT is an infinite order phase transition in which the $n$-th order derivatives of the GS energy are continuous.
![The third order term of the GS fidelity of the spin chain with next-nearest-neighbor interaction for the finite system size from 14 sites to 26 sites. Explicit peaks can be observed in this figure. As the system size increases, the position of the peak gets closer to the BKT-transition point.[]{data-label="Figure2"}](Fig2.eps){width="9cm"}
![Finite-size scaling of the extrema of the third term of the GS fidelity. A linear fit is made. According to this fit, when it comes to the point $N\rightarrow \infty$, $\protect\lambda_c=0.238 \pm0.006$. []{data-label="Figure3"}](Fig3.eps){width="9cm"}
In light of the fact that the higher-order fidelity is potentially more sensitive than the corresponding energy derivatives, we study the possibility of detecting the infinite-order BKT-type QPT via the 3rd order fidelity, focusing on the QPT in the spin chain with NNN interactions as a concrete example. We first calculate the GS wave functions by using the numerical exact diagonalization method for finite-size systems, so that the fidelity susceptibility and the 3rd order fidelity can be extracted from the overlap of neighboring GS wave functions. In Fig. 1, we display the fidelity susceptibility for systems of different sizes. We observe that no obvious peak in the fidelity susceptibility is detected over a wide range of the parameter $0<\lambda<0.5$. This result suggests that the transition point of the BKT-type QPT cannot be very effectively characterized by the fidelity susceptibility either, at least for finite-size systems.
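For readers who wish to reproduce such curves, the following is a minimal sketch, not the code actually used for the results shown here and without the symmetries one would exploit in production runs: it builds the Hamiltonian (\[Ham\]) for a small chain with periodic boundary conditions using NumPy/SciPy, obtains ground states at $\lambda$ and $\lambda+\delta\lambda$ by sparse diagonalization, and fits $1-F^{2}$ to a polynomial in $\delta\lambda$ according to eq. (\[F2simple\]).

```python
# Sketch: extract chi_F^{(2)} and chi_F^{(3)} for the J1-J2 Heisenberg chain
# from exact-diagonalization ground states, using 1-F^2 = sum_l dl^l chi_F^(l).
# Small L only; no symmetries are exploited in this illustration.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

sx = csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex) / 2)
sy = csr_matrix(np.array([[0, -1j], [1j, 0]], dtype=complex) / 2)
sz = csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex) / 2)

def bond(op, i, j, L):
    """Place the same single-site operator `op` on sites i and j."""
    mats = [identity(2, format='csr', dtype=complex)] * L
    mats[i] = op
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = kron(out, m, format='csr')
    return out

def hamiltonian(lam, L):
    H = csr_matrix((2**L, 2**L), dtype=complex)
    for dist, coupling in ((1, 1.0), (2, lam)):     # NN and NNN bonds (PBC)
        for i in range(L):
            for op in (sx, sy, sz):
                H = H + coupling * bond(op, i, (i + dist) % L, L)
    return H

def ground_state(lam, L):
    _, vec = eigsh(hamiltonian(lam, L), k=1, which='SA')
    return vec[:, 0]

def fidelity_terms(lam, L=10, dl=2e-2):
    """Return (chi_F^(2), chi_F^(3)) from the expansion of 1 - F^2."""
    psi0 = ground_state(lam, L)
    dls = np.array([-2 * dl, -dl, dl, 2 * dl])
    y = [1.0 - abs(np.vdot(psi0, ground_state(lam + d, L)))**2 for d in dls]
    # chi_F^(1) vanishes, so fit 1-F^2 = chi2*dl^2 + chi3*dl^3 + chi4*dl^4.
    A = np.vstack([dls**2, dls**3, dls**4]).T
    chi2, chi3, _ = np.linalg.lstsq(A, y, rcond=None)[0]
    return chi2, chi3
```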
The BKT-type phase transition is generally an infinite order phase transition, for which the derivatives of the ground-state energy of all orders are continuous. A good example with an exact proof is the BKT-type transition occurring in the antiferromagnetic XXZ spin chain model [@YangCN]. At the BKT-type transition point, it has been proven analytically that all the $n$-th order derivatives of the ground-state energy are continuous [@YangCN]. Since the $n$-th order fidelity is much more divergent than the corresponding derivative of the ground-state energy, one might expect that the $n$-th order fidelity can be divergent even when the $n$-th order energy derivative is continuous. To see whether a higher order fidelity works better than the fidelity susceptibility in detecting the BKT-type QPT occurring in this model, we calculated the 3rd order fidelity versus the driving parameter, as shown in Fig. 2. It is clear that a peak develops in the 3rd order fidelity and that the location of the peak tends to approach the transition point $\lambda _{c}$ as the lattice size increases. To extrapolate $\lambda _{c}$ to the infinite size limit, we analyze the finite size scaling of the peak position in Fig. 3. When the system size goes to infinity, the extrapolated value of the phase transition point is $\lambda _{c}=0.238\pm 0.006$, which, within the fitting error, agrees well with $\lambda _{c}=0.2411\pm 0.0001$ obtained by highly accurate numerical methods [@Okamoto; @Castilla; @RChitra; @SRWhite96].
Summary {#sec:sum}
=======
We have presented the formalism for the higher order terms of the fidelity in detail and applied it to a concrete model, *i.e.*, the one-dimensional Heisenberg chain with NNN interaction. We first calculate the ground-state wavefunction of the system by the exact diagonalization method, and then extract the fidelity susceptibility and the third order term of the GS fidelity. We find that, despite the GS fidelity and the fidelity susceptibility not being very effective detectors, the BKT-type phase transition occurring in this spin chain model might be effectively detected by the 3rd order term of the GS fidelity for finite-size systems. Although the physical meaning of the higher order terms of the GS fidelity has not been deeply understood, we hope that our observation will stimulate further studies on this issue.
This work is supported by NSF of China under Grant No. 10821403, programs of Chinese Academy of Sciences, National Program for Basic Research of MOST, China and the Earmarked Grant Research from the Research Grants Council of HKSAR, China (Project No. CUHK 400807).
[99]{} S. Sachdev, *Quantum Phase Transitions*(Cambridge University Press, Cambridge, England, 1999)
X. G. Wen, *Quantum Field Theory of Many-Body Systems* (Oxford University, New York, 2004)
V. L. Beresiskii, Sov. Phys. JETP 32, 493 (1971).
J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181(1973); J. M. Kosterlitz, *ibid* 7, 1046 (1974).
S. J. Gu, e-print arXiv: 0811.3127.
H. T. Quan, Z. Song, X. F. Liu, P. Zanardi, and C. P. Sun, Phys. Rev. Lett. 96, 140604 (2006).
P. Zanardi and N. Paunkovi$\acute{c}$, Phys. Rev. E 74, 031123 (2006).
H. Q. Zhou and J. P. Barjaktarevic, J. Phys. A: Math. Theor. 41 412001 (2008); H. Q. Zhou, J. H. Zhao, H. L. Wang, B. Li, arXiv: 0711.4651.
P. Buonsante and A. Vezzani, Phys. Rev. Lett. 98, 110601 (2007).
S. Chen, L. Wang, S. J. Gu, and Y. Wang, Phys. Rev. E 76, 061108 (2007).
S. Chen, L. Wang, Y. Hao, and Y. Wang, Phys. Rev. A **77**, 032111 (2008).
M. F. Yang, Phys. Rev. B 76, 180403(R)(2007); Y. -C. Tzeng and M. F. Yang, Phys. Rev. A 77, 012311 (2008); J. O. Fjaerestad, J. Stat. Mech. P07011 (2008).
W. L. You, Y. W. Li, and S. J. Gu, Phys. Rev. E 76, 022101 (2007); S. J. Gu, H. M. Kwok, W. Q. Ning, and H. Q. Lin, Phys. Rev. B **77**, 245109 (2008).
P. Zanardi, P. Giorda, and M. Cozzini, Phys. Rev. Lett. **99**, 100603 (2007).
H. Q. Zhou, R. Orus, and G. Vidal, Phys. Rev. Lett. **100**, 080601 (2008).
Y. C. Tzeng, H. H. Hung, Y. C. Chen, and M. F. Yang, Phys. Rev. A **77**, 062321 (2008).
H. L. Wang, J. H. Zhao, B. Li, and H. Q. Zhou, arXiv:0902.1670.
L. C. Venuti, M. Cozzini, P. Buonsante, F. Massel, N. Bray-Ali, and P. Zanardi, Phys. Rev. B **78**, 115410 (2008).
A. Hamma, W. Zhang, S. Haas, and D. A. Lidar, Phys. Rev. B **77**, 155111 (2008).
D. F. Abasto, A. Hamma, and P. Zanardi, Phys. Rev. A **78**, 010301 (2008).
S. Yang, S. J. Gu, C. P. Sun, and H. Q. Lin, Phys. Rev. A **78**, 012304 (2008).
J. H. Zhao, H. Q. Zhou, arXiv:0803.0814.
X. M. Lu, Z. Sun, X. G. Wang, and P. Zanardi, Phys. Rev. A **78**, 032309 (2008); J. Ma, L. Xu, H. N. Xiong, and X. G. Wang, Phys. Rev. E **78**, 051126 (2008).
N. Paunkovic *et al.*, Phys. Rev. A 77, 052302 (2008).
K. W. Sun, Y. Y. Zhang, Q. H. Chen, Phys. Rev. B **79**, 104429 (2009); T. Liu, Y. Y. Zhang, Q. H. Chen, K. L. Wang, arXiv:0812.0321; L. Gong and P. Q. Tong, Phys. Rev. B **78**, 115114 (2008).
X. G. Wang, Z. Sun and Z. D. Wang, Phys. Rev. A **79** , 012105 (2009).
F. D. M. Haldane, Phys. Rev. B **25**, 4925 (1982).
K. Okamto and K. Nomura, Phys. Lett. A **169**, 433 (1992).
G. Castilla, S. Chakravarty, and V.J. Emery, Phys. Rev. Lett. **75**, 1823 (1995).
T. Giamarchi, *Quantum Physics in One Dimension* (Oxford University Press, Oxford, England, 2004).
R. Chitra, S. Pati, H. R. Krishnamurthy, D. Sen, and S. Ramasesha, Phys. Rev. B **52**, 6581 (1995).
S. R. White, and I. Affleck, Phys. Rev. B **54**, 9862 (1996).
I. Affleck, D. Gepner, H. J. Schulz, and T. Ziman, J. Phys. A 22, 511 (1989).
C. N. Yang and C. P. Yang, Phys. Rev. 150, 321 (1966); J. D. Cloizeaux and M. Gaudin, J. Math. Phys, 7, 1387 (1966).
---
abstract: 'Many investigations of star formation rates (SFRs) in galaxies have explored details of dust obscuration, with a number of recent analyses suggesting that obscuration appears to increase in systems with high rates of star formation. To date these analyses have been primarily based on nearby ($z \le 0.03$) or UV selected samples. Using 1.4GHz imaging and optical spectroscopic data from the [*Phoenix Deep Survey*]{}, the SFR-dependent obscuration is explored. The use of a radio selected sample shows that previous studies exploring SFR-dependent obscurations have been biased against obscured galaxies. The observed relation between obscuration and SFR is found to be unsuitable to be used as an obscuration measure for individual galaxies. Nevertheless, it is shown to be successful as a first order correction for large samples of galaxies where no other measure of obscuration is available, out to intermediate redshifts ($z\approx 0.8$).'
author:
- 'J. Afonso, A. Hopkins, B. Mobasher, and C. Almeida'
title: Dependence of dust obscuration on star formation rates in galaxies
---
INTRODUCTION
============
Dust obscuration is currently recognized as one of the most serious sources of uncertainty in studies of galaxy evolution. With the recent results of far-infrared (FIR) and sub-mm observations revealing an ever increasing number of dusty star forming galaxies [e.g., @Genzel00; @Ivison00; @Smail02], the need to unify measures of star formation rate (SFR) from independent indicators at different wavelengths (e.g., UV, H$\alpha$, FIR, radio continuum) is as pressing as ever.
A relatively simple prescription for dust extinction correction to SFR has been suggested by @Hopkins01 and @Sullivan01, by assuming a luminosity- (or SFR) dependent obscuration. This was shown to provide a good first order correction to optically derived SFRs, while smaller differences still remain between different SFR indicators that are likely to be related to different star formation histories and/or extinction properties [@Sullivan01].
The above studies were based on the comparison of different SFR indicators for samples of relatively low redshift galaxies, selected at optical or UV wavelengths, which are prone to dust induced biases. Furthermore, at intermediate redshifts where the H$\alpha$ line falls out of the optical window and the \[O[ii]{}\]$\lambda$3727 line can instead be used to measure the SFR ($0.3<z<0.8$), a direct comparison is more difficult [see @Cardiel03]. In this paper we explore the validity of the luminosity- (or SFR) dependent obscuration at both low ($z\lesssim 0.3$) and intermediate ($0.3<z<0.8$) redshifts, using a sample of star-forming galaxies selected at radio wavelengths (which are insensitive to dust obscuration) with spectroscopic information.
Throughout this paper we adopt $H_0 = 70\,$km s$^{-1}$ Mpc$^{-1}$, $\Omega_M = 0.3$, and $\Omega_\Lambda = 0.7$.
OBSERVATIONS
============
The [*Phoenix Deep Survey*]{} (PDS) includes a 1.4GHz survey made using the Australia Telescope Compact Array (ATCA) covering a field a little over 4.5 square degrees, selected to lie in a region of low optical obscuration and devoid of bright radio sources [@Hopkins03a and references therein]. Optical imaging at the Anglo-Australian Telescope (AAT) has produced an optical catalogue probing to $R=22.5$ for the whole field, and 2dF multi-object spectroscopy from the AAT provides spectra for many of the optically identified radio sources [@Georgakakis99; @Afonso02]. The spectra were taken using the low resolution gratings 270R, 316R and 300B. Most objects were observed through only one of the red or blue gratings, with a small number being observed through both. The 270R and 316R gratings provide a wavelength coverage of $5000 \lesssim \lambda \lesssim 8500\,$Å, and the 300B gives $4000 \lesssim \lambda \lesssim 7000\,$Å. Redshifts were determined by visual inspection of the spectra, and line parameters were measured through Gaussian fitting using the [splot]{} package in IRAF. There are currently a total of 445 galaxies with measured spectra, of which 138 were securely classified as star forming using optical emission line diagnostic diagrams. A more detailed account of the spectroscopic data reduction and classification can be found elsewhere [@Georgakakis99; @Afonso02].
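As an illustration of how such line parameters can be measured (the measurements used here were made interactively with the IRAF [splot]{} task, which is not reproduced below), one can fit a Gaussian plus a linear continuum to a window around each line; the helper below is a generic stand-in with hypothetical parameter choices.

```python
# Generic Gaussian + linear-continuum fit to an emission line, a stand-in
# for the interactive IRAF splot measurements described in the text.
import numpy as np
from scipy.optimize import curve_fit

def line_model(wave, flux0, centre, sigma, a, b):
    """Gaussian line of integrated flux `flux0` on a linear continuum."""
    gauss = flux0 / (np.sqrt(2.0 * np.pi) * sigma) * \
        np.exp(-0.5 * ((wave - centre) / sigma) ** 2)
    return gauss + a + b * (wave - wave.mean())

def measure_line(wave, flux, guess_centre, window=50.0):
    """Fit a +/- `window` Angstrom region and return (line flux, centre)."""
    sel = np.abs(wave - guess_centre) < window
    w, f = wave[sel], flux[sel]
    p0 = [(f.max() - np.median(f)) * 5.0, guess_centre, 5.0, np.median(f), 0.0]
    popt, _ = curve_fit(line_model, w, f, p0=p0)
    return popt[0], popt[1]
```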
Star formation rates in the faint radio population {#sect:sfr}
==================================================
The origin of 1.4 GHz emission in star-forming galaxies is primarily thought to be synchrotron radiation from relativistic electrons, accelerated by the shocks from supernova ejecta. The insensitivity of radio wavelengths to dust obscuration makes radio emission a particularly attractive way of estimating SFRs in star-forming galaxies. The relation between SFR and 1.4GHz luminosity can be written as
$${\rm SFR}_{\rm 1.4\,GHz} = \frac{L_{1.4\,{\rm GHz}}}{8.4\times10^{20}\,{\rm W~Hz^{-1}}}~{\rm M}_{\sun}\,{\rm yr}^{-1},
\label{eqn:sfr14}$$
for a Salpeter IMF with stellar masses between $0.1-100$M$_{\sun}$ [@Haarsma00]. Using this relation, and calculating $L_{1.4\,{\rm GHz}}$ assuming a radio spectral index $\alpha$ of 0.8 (S$_{\nu}\!\propto\!\nu^{-\alpha}$), characteristic of synchrotron radiation, the SFRs for the star-forming galaxies in the Phoenix spectroscopic sample were calculated and are presented in Figure \[fig:sfr14z\]. The depth and area of the radio survey and the spectroscopic follow-up allow sampling star-formation rates from below one to a few hundred solar masses per year.
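For concreteness, this conversion from an observed flux density to a SFR can be written out as a short Python sketch. The helper name, the use of the `astropy` cosmology routines, and the example flux are our own illustrative choices and are not part of the actual reduction pipeline.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper: H0 = 70 km/s/Mpc, Omega_M = 0.3, Omega_Lambda = 0.7
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def sfr_1p4ghz(flux_jy, z, alpha=0.8):
    """SFR (Msun/yr) from an observed 1.4 GHz flux density (Jy) and redshift.

    The rest-frame luminosity is L = 4 pi d_L^2 S_nu (1+z)^(alpha-1), i.e. a
    k-correction for a power-law synchrotron spectrum S_nu ~ nu^-alpha, and the
    Haarsma et al. (2000) calibration SFR = L_1.4GHz / 8.4e20 W/Hz is applied.
    """
    d_l = cosmo.luminosity_distance(z).to("m").value       # metres
    s_nu = flux_jy * 1e-26                                  # W m^-2 Hz^-1
    l_1p4 = 4.0 * np.pi * d_l**2 * s_nu * (1.0 + z)**(alpha - 1.0)
    return l_1p4 / 8.4e20

# e.g. a 100 microJy source at z = 0.3
print(sfr_1p4ghz(100e-6, 0.3))
```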
Of particular interest is the comparison between the SFRs as given by the radio and the optical line emission. To calculate the SFR from [H$\alpha$]{}, the conversion indicated by @Kennicutt98, for an IMF and mass limits as for equation (\[eqn:sfr14\]), is used
$${\rm SFR}_{\rm H\alpha} = \frac{L_{\rm H\alpha}}{1.27 \times 10^{34}\,{\rm W}}~{\rm M}_{\sun}\,{\rm yr}^{-1}.
\label{eqn:sfrha}$$
The choice of a different IMF would change equations (\[eqn:sfr14\]) and (\[eqn:sfrha\]) by a similar factor (which for the most commonly used IMFs would be $\sim$2-4), since both SFR indicators are sensitive to stars in roughly the same mass range (${\rm M \gtrsim 8\,M_{\sun}}$). Hence, a comparison between the SFRs derived from radio and H$\alpha$ emission will be insensitive to the particular IMF shape chosen.
For galaxies at higher redshift ($z\,\gtrsim\,0.3$), where [H$\alpha$]{} could not be measured, one can use the \[O[ii]{}\]$\lambda$3727 flux. This method relies on equation (\[eqn:sfrha\]) and the observed ratio between \[O[ii]{}\] and [H$\alpha$]{}. A value of $F_{\rm [OII]}=0.45 \times F_{\rm H\alpha}$ [@Kennicutt98] is commonly used. However, using the first data release of the Sloan Digital Sky Survey [@Abazajian03], @Hopkins03b observes a median relation for radio-detected star-forming galaxies of $F_{\rm [OII]}=0.23 \times F_{\rm H\alpha}$ (with a scatter of around 0.1), finding the difference to be due to the incompleteness of previous samples. The range of optical luminosities sampled by @Hopkins03b is similar to the one in the present Phoenix spectroscopic sample ($M_{R}\sim -20$ to $-23.5$) and is not large enough to reveal the optical luminosity dependence of the \[O[ii]{}\]/H$\alpha$ ratio observed by @Jansen01, which produces lower \[O[ii]{}\]/H$\alpha$ ratios for galaxies with higher optical luminosities. Also, the range of extinctions, as given by the Balmer decrement, sampled by @Hopkins03b ([H$\alpha$]{}/H$\beta$ between 3 and 12) is similar to that of the present work (as will be seen below). We thus adopt the determination of @Hopkins03b to convert our measured \[O[ii]{}\] luminosities to [H$\alpha$]{} values, using equation (\[eqn:sfrha\]) to obtain the corresponding SFR estimate.
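The chain from a measured line flux to an SFR is short enough to summarise in code. The sketch below is our own illustration of how equation (\[eqn:sfrha\]) and the adopted \[O[ii]{}\]/H$\alpha$ ratio combine; the function names and the flux-to-luminosity step are our choices.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def sfr_halpha(l_halpha_w):
    """Kennicutt (1998) calibration: SFR = L_Halpha / 1.27e34 W, in Msun/yr."""
    return l_halpha_w / 1.27e34

def sfr_from_oii(flux_oii_w_m2, z, oii_to_ha=0.23):
    """SFR from an observed [OII]3727 flux (W/m^2).

    The flux is converted to a luminosity, then to an 'effective' Halpha
    luminosity with F_[OII] = 0.23 F_Halpha (Hopkins et al. 2003b; a ratio of
    0.45 would recover the Kennicutt 1998 value), and finally to an SFR.
    """
    d_l = cosmo.luminosity_distance(z).to("m").value
    l_oii = 4.0 * np.pi * d_l**2 * flux_oii_w_m2
    return sfr_halpha(l_oii / oii_to_ha)
```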
Figure \[fig:sfr14sfrha\] compares the SFRs derived from the radio luminosity and line emission. Although a correlation exists, as expected, the SFRs derived from 1.4GHz are in general higher than those calculated from nebular lines, especially for higher luminosities (or SFRs). This effect, seen previously in several studies [@Cram98; @Hopkins01; @Sullivan01], is attributed to dust obscuration, which affects the optical line emission. Furthermore, one can conceive of an amount of obscuration which increases with the SFR (a SFR-dependent dust obscuration). This has been explored with considerable success for nearby optical [@Hopkins01] and UV selected galaxies [@Sullivan01].
SFR-dependent dust obscuration
==============================
For a subset of the Phoenix sample described above, an estimate of the extinction can be made from the observed Balmer decrement ([H$\alpha$]{}/H$\beta$). Stellar absorption of the Balmer lines was corrected by assuming an average value of 2Å for the equivalent width (EW) of the H$\beta$ absorption in star-forming galaxies [@Tresse96; @Georgakakis99], with a similar value (2.1Å) being used for the [H$\alpha$]{} line [equation (2) of @Miller02]. Figure \[fig:balmer\] shows the resulting Balmer decrements, corrected for stellar absorption, as a function of SFR, derived from the radio luminosity using equation (\[eqn:sfr14\]). Unlike previous studies [@Hopkins01; @Sullivan01], a tight correlation is not observed here. Rather, a trend for a broader range of obscurations for higher SFR systems is observed. This behaviour seems to be due to different selection criteria and small number statistics, as we now explain.
The sample used by @Hopkins01 to derive the relation between SFR and obscuration includes only nearby ($z\le 0.03$) galaxies with EW([H$\alpha$]{}) larger than 30Å. While no clear trend is seen in the present sample when restricted to this EW value, the higher limit of 60Å (filled circles in Figure \[fig:balmer\]) does suggest a closer match to the observations in @Hopkins01.
On the other hand, @Buat02 also observed a dual behaviour: while nearby star-forming galaxies behave similarly to what is seen in Figure \[fig:balmer\], a sample of IUE galaxies shows a much tighter correlation, similar to that observed for the UV-selected sample of @Sullivan01. This suggests that UV selection results in some kind of bias that is avoided with the present sample. Completeness in Figure \[fig:balmer\] is not easy to quantify, given the several selection criteria (initially the radio flux limit, followed by the optical identification and 2dF spectroscopy, which imposes a practical limit of $R \sim 20$). However, it is possible to estimate the bias expected in a magnitude-limited UV-selected sample, and at the same time to evaluate the improvement offered by the present work, as we now show.
The tight relation between dust-free UV emission and SFR can be used to evaluate which regions of Figure \[fig:balmer\] are not accessible to a magnitude-limited UV study. Assuming a limiting magnitude $m_{\rm UV}=18.5$ [as in @Sullivan01], an intrinsic SFR (i.e., $L_{\rm UV}$ before obscuration) and redshift will define the maximum value of the Balmer decrement that still allows a detection. Figure \[fig:balmerarea\] shows the present sample, separated according to redshift, overlaid with the maximum detectable Balmer decrement at $z=0.05$ (dotted line), 0.1 (dashed line) and 0.2 (dot-dashed line). The conversion between SFR and $L_{\rm UV}$ uses the calibration from @Kennicutt98, while the extinction at 2000Å is derived from the Balmer decrement using the procedures of @Calzetti00. An estimate of the $K$-correction is obtained using an average colour of $m_{\rm UV}-b=-1.5$ [@Milliard92]. It is clear that the present sample represents a significant improvement for $z>0.1$. In particular, many of the galaxies showing high Balmer decrement values in the present study would not be detected in a UV survey limited to $m_{\rm UV}=18.5$. Sample selection thus seems to be a major source of bias when trying to investigate the correlation between dust obscuration and SFRs.
Given the large scatter present in Figure \[fig:balmer\], a SFR-dependent reddening correction is obviously unsuitable for application in galaxies where a direct estimate of obscuration exists. However, a trend for higher average Balmer decrement (and greater distribution width) with increasing SFR seems to exist. This can still be useful as a preliminary dust obscuration estimate for large samples of galaxies where no other measure of obscuration is available. Although in practice the form of the derived relation may be comparable to the ones in @Hopkins01 and @Sullivan01, here we recognize that there is no tight correlation between obscuration and SFR, but an average obscuration may still be defined for any given SFR. As can be seen in Figure \[fig:balmer\] the resulting correction will be affected by large uncertainties for individual galaxies, especially at large SFRs.
The sample was thus split into 7 bins of $\log$(SFR) (as estimated from the radio luminosity), each having between 5 and 16 objects. The median $\log$(SFR) and Balmer decrement in each bin were then found (shown as asterisks in Figure \[fig:balmer\]). A linear fit, taking into account the errors in both quantities, results in
$$\left(\frac{\rm H\alpha}{\rm H\beta}\right)_{median} = 1.29 \log ({\rm SFR}) + 5.06,
\label{balmersfr}$$
with a correlation coefficient of 0.8. Keeping in mind the meaning and limitations of this correlation, as seen in Figure \[fig:balmer\], one can now test its usefulness as a first correction for the effect seen in Figure \[fig:sfr14sfrha\].
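A schematic version of this binning-and-fitting step is given below. The equal-count binning, the per-bin error estimates, and the use of orthogonal distance regression (one way of accounting for errors in both coordinates) are our illustrative choices; the published coefficients are those of equation (\[balmersfr\]).

```python
import numpy as np
from scipy import odr  # orthogonal distance regression: errors in both coordinates

def fit_balmer_vs_logsfr(log_sfr, balmer, nbins=7):
    """Fit a straight line to the median Balmer decrement in bins of log(SFR)."""
    order = np.argsort(log_sfr)
    x_med, y_med, x_err, y_err = [], [], [], []
    for chunk in np.array_split(order, nbins):      # roughly equal-count bins
        x, y = log_sfr[chunk], balmer[chunk]
        x_med.append(np.median(x))
        y_med.append(np.median(y))
        # rough uncertainty on the median: ~1.25 times the standard error of the mean
        x_err.append(1.25 * np.std(x) / np.sqrt(len(x)))
        y_err.append(1.25 * np.std(y) / np.sqrt(len(y)))
    data = odr.RealData(x_med, y_med, sx=x_err, sy=y_err)
    model = odr.Model(lambda p, x: p[0] * x + p[1])
    result = odr.ODR(data, model, beta0=[1.3, 5.0]).run()
    return result.beta   # (slope, intercept); the text quotes 1.29 and 5.06
```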
The departure of the observed Balmer decrement from the Case B value of 2.86 [e.g., @Brocklehurst71], can be related to the color excess for nebular emission lines, $E(B-V)_{gas}$, and extinction, $k(\lambda)$, by
$$\left( \frac{\rm H\alpha}{\rm H\beta} \right)_{\rm Case~B} = \left( \frac{\rm H\alpha}{\rm H\beta} \right)_{\rm obs} 10^{\,0.4 \,E(B-V)_{gas}\,(k_{\alpha}-k_{\beta})}.
\label{balmer}$$
Substituting (\[balmersfr\]) into (\[balmer\]) gives a relation for the color excess as a function of SFR:
$$E(B-V)_{gas} = \frac{2.5}{(k_{\alpha}-k_{\beta})} \log \left( \frac{2.86}{1.29 \log ({\rm SFR}) + 5.06} \right).
\label{ebvsfr}$$
Together with an appropriate extinction curve (the standard Galactic extinction curve of @Cardelli89 with $R_{V}=3.1$, found by @Calzetti01 to describe well the reddening of the ionized gas in star-forming galaxies), this can then be used to correct $L_{\rm H\alpha}$, and consequently, SFR$_{\rm H\alpha}$, for dust obscuration:
$$L_{\rm H\alpha} = L_{\rm H\alpha}^{obs}~10^{\,0.4 \,E(B-V)_{gas}\,k_{\alpha}},
\label{hacorr}$$
where $L_{\rm H\alpha}^{obs}$ can either be the observed H$\alpha$ luminosity or the “effective” H$\alpha$ luminosity derived from an observed \[O[ii]{}\] flux.
Equation (\[ebvsfr\]) gives the relation between extinction and the [*intrinsic*]{} SFR. Assuming this to be the value given by the radio luminosity could be a good approximation, but would create an artificial dependence between the corrected [H$\alpha$]{} SFR and the one from 1.4GHz. Instead, since the form for the SFR-dependent obscuration is monotonically increasing, an iteration over possible values for intrinsic SFR and the corresponding obscuration can be performed until the calculated obscured SFR converges with the observed value [@Hopkins01]. We note that this procedure does not take into account any absorption of ionizing photons by dust inside HII regions. @Charlot02, modeling the observed spectra in non-Seyfert galaxies, estimate that this mechanism is responsible for the loss of $\sim$20% of ionizing photons. Given the large uncertainty associated with this value, however, we do not attempt any correction, noting that its magnitude would not significantly affect our results.
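The iteration itself is a simple fixed-point loop. In the sketch below, the extinction-curve values $k_{\rm H\alpha}\approx2.53$ and $k_{\rm H\beta}\approx3.61$ (appropriate for the @Cardelli89 curve with $R_V=3.1$), the convergence tolerance, and the floor at the Case B ratio are our own assumptions.

```python
import numpy as np

K_HA, K_HB = 2.53, 3.61   # assumed Cardelli et al. (1989) values at Halpha, Hbeta

def ebv_from_sfr(sfr):
    """Colour excess implied by the median Balmer decrement versus SFR relation."""
    balmer = np.maximum(1.29 * np.log10(sfr) + 5.06, 2.86)   # no negative obscuration
    return 2.5 / (K_HA - K_HB) * np.log10(2.86 / balmer)

def intrinsic_sfr(sfr_obs, tol=1e-3, max_iter=100):
    """Iterate over the intrinsic SFR until the SFR-dependent correction converges."""
    sfr = sfr_obs
    for _ in range(max_iter):
        # correct the observed (obscured) SFR using the obscuration at the current guess
        sfr_new = sfr_obs * 10.0**(0.4 * ebv_from_sfr(sfr) * K_HA)
        if abs(sfr_new - sfr) < tol * sfr:
            return sfr_new
        sfr = sfr_new
    return sfr
```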
Figure \[fig:sfr14sfrhacorr\] shows the resulting dust corrected relation for the SFR from line and radio luminosities. It is clear that the SFR-dependent dust absorption, while being a very coarse approximation, can successfully account for the first order offset between the SFRs derived from [H$\alpha$]{} or \[O[ii]{}\] and radio luminosities for galaxies spanning a broad range of redshifts (out to $z \approx 0.8$). This would not be possible if the relations between Balmer decrements and SFR drawn from previous samples [@Hopkins01; @Sullivan01] had been used. The scatter still present has an rms of 0.4 dex about the best fit line, maintained from the scatter in Figure \[fig:sfr14sfrha\]. The lack of an improvement lies in the coarse relationship between SFR and obscuration seen in Figure \[fig:balmer\] – the linear fit to the median values cannot correct for the range of obscurations seen at each SFR.
There will be, of course, additional uncorrelated mechanisms involved in the [H$\alpha$]{} and radio emission which contribute to the scatter seen, but their quantification will only be possible after a precise account of the obscuration for each individual galaxy.
Summary
=======
A radio selected sample of star forming galaxies to $z \approx 0.8$ has been compiled from the [*Phoenix Deep Survey*]{}. The use of radio selection minimises bias in the sample due to dust obscuration effects. The relationship between obscuration and SFR is found to amount only to a broader range of Balmer decrements at higher SFRs, contrary to the tight correlation observed in previous studies. Still, the use of a linear relation, reflecting only the broadest trend, was explored as a first order correction for large samples of galaxies with no direct measurement of obscuration. This successfully accounts for the major discrepancy between optical emission line SFR estimates and 1.4GHz luminosity estimates for all galaxies in the broad redshift range probed. However, a much more detailed correction of the dust obscuration is necessary for the study of the uncorrelated mechanisms (e.g., star formation histories) responsible for the scatter still present.
We thank M. Sullivan for his comments and advice. JA gratefully acknowledges the support from the Science and Technology Foundation (FCT, Portugal) through the fellowship BPD-5535-2001 and the research grant ESO-FNU-43805-2001. AMH acknowledges support provided by the National Aeronautics and Space Administration (NASA) through Hubble Fellowship grant HST-HF-01140.01-A awarded by the Space Telescope Science Institute (STScI). JA dedicates this work to the memory of Gustavo Camejo Rodrigues, who will always be dearly remembered.
Abazajian et al. 2003, (submitted; astro-ph/0305492)
Afonso, J. 2002, PhD Thesis, University of London
Bell, E. F. 2003, , 586, 794
Brocklehurst, M. 1971, , 153, 471
Buat, V., Boselli, A., Gavazzi, G., & Bonfanti, C. 2002, , 383, 801
Calzetti, D., Armus, L., Bohlin, R., Kinney, A., Koornneef, J., & Storchi-Bergmann, T. 2000, , 533, 682
Calzetti, D. 2001, , 113, 1449
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245
Cardiel, N., Elbaz, D., Schiavon, R. P., Willmer, C. N. A., Koo, D. C., Phillips, A. C., & Gallego, J. 2003, , 584, 76
Charlot, S., Kauffmann, G., Longhetti, M., Tresse, L., White, S. D. M., Maddox, S. J., & Fall, S. M. 2002, , 330, 876
Cram, L., Hopkins, A., Mobasher, B., & Rowan-Robinson, M. 1998, , 507, 155
Genzel, R. & Cesarsky, C. J. 2000, , 38, 761
Georgakakis, A., Mobasher, B., Cram, L., Hopkins, A., Lidman, C., & Rowan-Robinson, M. 1999, , 306, 708
Haarsma, D. B., Partridge, R. B., Windhorst, R. A., & Richards, E. A. 2000, , 544, 641
Hopkins, A. M., Mobasher, B., Cram, L., & Rowan-Robinson, M. 1998, , 296, 839
Hopkins, A., Connolly, A., Haarsma, D., & Cram, L. 2001, , 122, 288
Hopkins, A. M., Afonso, J., Chan, B., Cram, L. E., Georgakakis, A., & Mobasher, B. 2003a, , 125, 465
Hopkins, A. M., Miller, C. J., Nichol, R. C., Gómez, P. L., Goto, T., Connolly, A. J., Bernardi, M., & Tremonti, C. A. 2003b, , [*submitted*]{}
Ivison, R. J., Smail, I., Barger, A. J., Kneib, J.-P., Blain, A. W., Owen, F. N., Kerr, T. H., & Cowie, L. L. 2000, , 315, 209
Jansen, R. A., Franx, M., & Fabricant, D. 2001, , 551, 825
Kennicutt, R. C., Jr. 1998, , 36, 189
Miller, N. A. & Owen, F. N. 2002, , 124, 2453
Milliard, B., Donas, J., Laget, M., Armand, C., & Vuillemin, A. 1992, , 257, 24
Smail, I., Ivison, R. J., Blain, A. W., & Kneib, J.-P. 2002, , 331, 495
Sullivan, M., Mobasher, B., Chan, B., Cram, L., Ellis, R., Treyer, M., & Hopkins, A. 2001, , 558, 72
Tresse, L., Rola, C., Hammer, F., Stasinska, G., Le Fevre, O., Lilly, S. J., & Crampton, D. 1996, , 281, 847
---
abstract: 'We have identified a new class of galaxy cluster using data from the Galaxy Zoo project. These clusters are rare, and thus have apparently gone unnoticed before, despite their unusual properties. They appear especially anomalous when the morphological properties of their component galaxies are considered. Their identification therefore depends upon the visual inspection of large numbers of galaxies, a feat which has only recently been made possible by Galaxy Zoo, together with the Sloan Digital Sky Survey. We present the basic properties of our cluster sample, and discuss possible formation scenarios and implications for cosmology.'
author:
- |
Marven F. Pedbost,$^{1}$[^1] Trillean Pomalgu,$^{1}$ and the Galaxy Zoo team\
$^{1}$Institute of Cosmology, University of Brentwood, Brentwood, Essex, HG4 2TG, UK.
---
\[firstpage\]
galaxies: clusters: general — galaxies: structure — galaxies: fundamental parameters
Introduction
============
For nearly as long as it has been recognised that galaxies are stellar systems external to our own, we have known that they are not distributed randomly throughout space, but tend to cluster together [@hubble]. This structure is now well understood as the result of the amplifying influence of gravity on small-scale fluctuations in the early universe. We are able to predict, both through simulations and analytically, the clustering of the collisionless dark matter component that is inferred to exist from a range of observations. It is an important and popular fact that the initially smooth matter distribution collapses to form haloes, roughly spherical in shape, though with some ellipsoidal or triaxial distortions. These high density haloes are joined by lower density filaments, along which smaller haloes move, to be eventually accreted by the larger haloes, which thus grow more massive with time [@bubble].
Although the dark matter component is well understood, the behaviour of baryonic matter is necessarily more complicated. On large scales it is expected to follow the dark matter, and hydrodynamical simulations demonstrate this. However, on small scales the density field evolves non-linearly and the densities are such that gas physics and feedback from collapsed baryonic objects, such as stars and black holes, become important. On the scale of galaxy clusters the interplay between gas and dark matter may cause the density profiles and shapes of haloes to vary from those predicted by models based on dark matter alone. In the regime of galaxies, this becomes even more likely, as here baryons dominate the matter density.
Observationally, clusters are found to host galaxy populations quite different to the Universal average. Their members tend to have red colours and suppressed or entirely absent star-formation. They also mostly possess smooth, early type morphologies, particularly toward the centre of a cluster. This can partly be explained by the preference for clusters to host the most massive galaxies, together with the observation that more massive galaxies are more likely to be red, passive and elliptical in any environment [@toil]. However, there remains a large population of lower-mass galaxies in clusters whose “red and dead” condition is in stark contrast with the properties of their counterparts in the field. At higher redshifts, this dichotomy between cluster and field galaxy populations appears to diminish, with a growing proportion of clusters containing significant starforming components [@trouble]. At $z \ga 1$ it even appears to reverse, with clusters hosting the most actively starforming objects.
An unusual galaxy cluster
=========================
![image](fig1.ps){width="\textwidth"}\
Given the typical properties of galaxy clusters as described above, the existence, at low redshift ($z \sim 0.05$), of the structure displayed in Fig. \[fig1\] is somewhat surprising. Our attention was called to this cluster by the community of Galaxy Zoo participants, who fortuitously recognised its unusual properties whilst classifying its individual galaxies. The overdensity of galaxies clearly identifies this structure as a rich galaxy cluster, however it possesses strikingly different properties compared to typical clusters of this richness.
One of the most distinctive aspects of this cluster is the morphologies and colours of its component galaxies. Many of its members have blue colours and show clear evidence of spiral morphology, even if the spiral arms are often disturbed. These disturbed morphologies are probably the result of a high frequency of close pairs and merging systems. Such a high fraction of merging systems is unexpected for high mass clusters due to the large velocity dispersion, and much more typical of lower mass galaxy groups.
Another unusual aspect is the morphology of the cluster as a whole. The structure is rather linear, and boxy, reminiscent of the filaments seen in N-body simulations. However, the observed galaxy density is far higher than seen in simulations of filaments. There is no obvious central concentration of the number density or luminosity profile, unlike any normal cluster of this richness. Weak lensing and x-ray data may assist in understanding this cluster’s strange appearance, by adding information on the distribution of the cluster’s dark matter and gas content.
Finally, but perhaps most surprising, is that upon detailed inspection, the morphologies of individual galaxies and close systems approximate the familiar geometric shapes of letters of the basic modern Latin alphabet. From East to West and North to South, respectively, these shapes may be represented as “w e a p o l o g i s e f o r t h e i n c o n v e n i e n c e”. Although galaxies displaying morphologies corresponding to Latin characters have been noticed before, ‘S’ and ‘Z’ being particularly common, a localised collection of this size is highly improbable.
A close visual inspection suggests that the galaxy distribution exhibits an element of substructure. The galaxies appear to divide into five distinct groups. These are: Group I: “w e”, Group II: “a p o l o g i s e”, Group III: “f o r”, Group IV: “t h e”, and Group V: “i n c o n v e n i e n c e”. These may be familiar to the reader as common words of the English language.
The appearance of rational English within an astrophysical system is widely regarded as impossible. Furthermore, the event that an arrangement of galaxies should express regret would be considered by many to be ludicrous. The data could be disregarded simply as a statistical anomaly, an unlikely occurrence which just happens to have occurred. Space is, after all, not only big, but really big, and full of really surprising things. The authors, however, maintain that, since it is observed, the cluster requires explanation.
It remains a possibility that previous estimates of the likelihood of such events have been grossly underestimated and no fundamentally new physics is required to explain this observation. Although current cosmological simulations are not known to produce English sentences on cluster scales, there has been little effort to test this, and in particular a lack of visual inspection. It is plausible that with suitably chosen prescriptions, semi-analytic models could reproduce an abundance of clusters similar to those presented in this paper.
On the other hand, many would attribute a much deeper meaning to the appearance of this cluster. Firstly, the occurrence of these phenomena could potentially lend support to some of the more exotic models for Dark Energy or modified gravity, if they are able to predict such structures. More controversially, as most occurrences of English sentences are considered to be the work of intelligent beings, the existence of these messages might indicate intelligent life beyond our own. The scale of the messages would require a lifeform with abilities far beyond those currently possessed by humans, and even beyond those which we could realistically expect to acquire; implying the existence of an intelligent being with extraordinary powers. Indeed, another appearance of exactly the same message has been previously reported in the hotly debated work by @adams, where the text is interpreted as God’s final message to His creation.
Additional examples
===================
![image](fig2.ps){width="\textwidth"}\
The significance of the cluster discussed in the previous section is modified somewhat by the discovery of additional examples of clusters belonging to this unusual class. These share many properties with the prototype, as is clear from Figs. \[fig2\] & \[fig3\]. In particular, both exhibit natural subgroups of galaxies with morphologies that conspire to resemble English words. The cluster in Fig. \[fig2\] exhibits the natural sub-structure groups “c a u t i o n !”, “s t r u c t u r e”, “f o r m a t i o n”, “i n”, “p r o g r e s s”, whereas the cluster shown in Fig. \[fig3\] is apparently another warning, comprising the groups “D e l a y s”, “p o s s i b l e”, “f o r”, “7 Gyr”.
Each of the additional clusters demonstrates new features, compared with Fig. \[fig1\]. The cluster in Fig. \[fig2\] appears to contain punctuation, in the form of an exclamation mark. The cluster in Fig. \[fig3\], on the other hand, includes the first unambiguous appearance of a capital letter, “D”, a numeral, “7”, and an abbreviated unit “Gyr”. In addition, the latter figure demonstrates a notable left-hand justification across multiple lines.
Individually, these two further clusters present the same problems as the first when considered within the context of currently well-regarded cosmologies. In such models, clusters that form sensible English phrases are generally regarded as impossible. The three known instances, presented here, thus appear to constitute an event that would traditionally be viewed as really not very likely at all. Their discovery also suggests the possibility of other messages, not yet identified, and in particular the potential existence of similar clusters utilising other languages and alphabets.
When considered collectively, the various examples presented here of this “unusual” class, seem to suggest a possible common theme, being reminiscent of the familiar local phenomenon of road works [@roadworks]. Making this identification, the message in Fig. \[fig1\] may then be understood as a general acknowledgement of blame for the specific problems conveyed in Figs. \[fig2\] and \[fig3\]. Thus, these vivid messages are apparently not to be understood as, in the paradigm of @adams, a message from God, but rather a notification of the common frustrations that one group of intelligent beings imposes on other intelligent beings in the name of progress, or even, simply, basic maintenance of former progress.
Such a model, however, must invoke the existence of other, so-called, “intelligent beings” beyond our own planet. While regarded by many to be a good long-term bet, current evidence for the existence of extra-terrestrial life is in seriously short supply. Even the predictions of how much intelligent life we might reasonably expect to find are ambiguous at best. Indeed, some of the most rigorous arguments on the subject actually find in favour of a total absence of intelligent life of any kind [@universe]. A suitably advanced civilisation capable of fashioning galactic sized structures into directed notifications, therefore, tends towards the absurd. From this vantage, we cannot exclude the alternative that the appearance of familiar English phrases of unified sense in large scale cluster morphologies is anything more than a chance occurrence, which one might hope to better understand via future insights into probability theory or cosmology.
If we interpret these unusual clusters in this manner, we must necessarily re-evaluate our understanding of their local counterparts [@roadworks]. Observations that hitherto had been taken as certain indications the presence of intelligent life are then reduced to nothing more than the product of pure chance.
![\[fig3\] SDSS colour composite image ($vri$) for another unusual galaxy cluster, at $\rmn{RA}=27^{\rmn{h}}10^{\rmn{m}}99^{\rmn{s}}$, $\rmn{Dec}=-97\degr 71\arcmin 23\arcsec$, identified by Galaxy Zoo participants. Orientation as Fig. \[fig1\].](fig3.ps "fig:"){width="45.00000%"}\
Conclusions
===========
Thanks to the visual inspection of SDSS images afforded by the Galaxy Zoo project, we have identified a new class of galaxy clusters which possess a number of unusual properties. These clusters are unusually elongated, possess young and highly dynamic galaxy populations, and, most unexpectedly, present neatly typeset, left-justified, messages written in the English language. One interpretation for the existence of these galaxy clusters is as conclusive evidence for intelligent life elsewhere in the universe. Conversely, however, they could indicate that many phenomena usually attributed to intelligent life on Earth actually occur spontaneously, without any thought necessarily being involved at all.
Acknowledgements {#acknowledgements .unnumbered}
================
This work has been made possible by the participation of many members of the public in visually classifying SDSS galaxies on the Galaxy Zoo website. Their contributions, many individually acknowledged at http://www.galaxyzoo.org/Volunteers.aspx, have produced a number of published scientific papers, with many more yet to come. This article is particularly indebted to those who have tirelessly sought out odd and unusual objects and brought them to general attention on the Galaxy Zoo Forum.[^2] We thank them for their extraordinary efforts in making this project a success. We are also grateful to various members of the media, both traditional and online, for helping to bring this project to the public’s attention.
Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, and the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Adams D., 1985, “So Long, and Thanks for All the Fish”, London: Pan Books
Aunty B.B.C., 2009, [http://www.bbc.co.uk/cult/hitchhikers/guide/universe.shtml]{}
Boublé B., 1960, Csi, 42, 42
Transport D.F., 2005, [http://www.dft.gov.uk/pgr/roads/tss/workingdrawings/roadworksp7000series/]{}
Hubble E. P., 1932, Sci, 75, 24
Toil T., 1980, Tat., 666, 999
Trobble T., 2000, AjP., 123, 456
\[lastpage\]
[^1]: E-mail: team@galaxyzoo.org
[^2]: We stress that, despite their implausible appearance, the galaxies comprising each character in the figures presented in this paper are taken directly from the SDSS multicolour composite imaging. Note, however, that some degree of translation and rotation has been performed to the individual characters, for presentation purposes.
---
abstract: 'We study the properties of gas inside and around galaxy haloes as a function of radius and halo mass, focussing mostly on $z=2$, but also showing some results for $z=0$. For this purpose, we use a suite of large cosmological, hydrodynamical simulations from the OverWhelmingly Large Simulations project. The properties of cold- and hot-mode gas, which we separate depending on whether the temperature has been higher than $10^{5.5}$ K while it was extragalactic, are clearly distinguishable in the outer parts of massive haloes (virial temperatures $\gg 10^5$ K). The differences between cold- and hot-mode gas resemble those between inflowing and outflowing gas. The cold-mode gas is mostly confined to clumpy filaments that are approximately in pressure equilibrium with the diffuse, hot-mode gas. Besides being colder and denser, cold-mode gas typically has a much lower metallicity and is much more likely to be infalling. However, the spread in the properties of the gas is large, even for a given mode and a fixed radius and halo mass, which makes it impossible to make strong statements about individual gas clouds. Metal-line cooling causes a strong cooling flow near the central galaxy, which makes it hard to distinguish gas accreted through the cold and hot modes in the inner halo. Stronger feedback results in larger outflow velocities and pushes hot-mode gas to larger radii. The gas properties evolve as expected from virial arguments, which can also account for the dependence of many gas properties on halo mass. We argue that cold streams penetrating hot haloes are observable as high-column density H<span style="font-variant:small-caps;">i</span> Lyman-$\alpha$ absorption systems in sightlines near massive foreground galaxies.'
author:
- |
Freeke van de Voort$^{1}$[^1] and Joop Schaye$^1$\
$^{1}$Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands
bibliography:
- 'propertiesRvir.bib'
date: 'Accepted 2012 March 15. Received 2012 February 19; in original form 2011 November 21'
title: Properties of gas in and around galaxy haloes
---
\[firstpage\]
galaxies: evolution – galaxies: formation – galaxies: haloes – intergalactic medium – cosmology: theory
Introduction
============
The gaseous haloes around galaxies grow by accreting gas from their surroundings, the intergalactic medium, which is the main reservoir for baryons. The galaxies themselves grow by accreting gas from their haloes, from which they can form stars. Some of the gas is, however, returned to the circumgalactic medium by galactic winds driven by supernovae (SNe) or active galactic nuclei (AGN) and by dynamical processes such as tidal or ram pressure forces. Such interactions between the different gas phases are essential for galaxy formation and evolution.
The physical state of the gas in and around haloes will determine how fast galaxies grow. Quantifying and understanding the properties of the gas is therefore vital for theories of galaxy formation. It is also crucial for making predictions and for the interpretation of observations as the physical state of the gas determines how much light is absorbed and emitted.
Theoretical and computational studies of the accretion of gas onto galaxies have revealed the existence of two distinct modes. In the first mode the inflowing gas experiences an accretion shock as it collides with the hot, hydrostatic halo near the virial radius. At that point it is shock-heated to temperatures similar to the virial value and typically remains part of the hot halo for longer than a dynamical time. If it reaches a sufficiently high density, it can cool radiatively and settle into a disc [e.g. @Rees1977; @White1978; @Fall1980]. This mode is referred to as ‘hot-mode accretion’ [@Katz2003; @Keres2005]. If, on the other hand, the cooling time of the gas is short compared to the dynamical time, which is the case for haloes of sufficiently low mass, a hot halo is unable to form and the accreting gas will not go through a shock near the virial radius. The accretion rate then depends on the infall rate instead of on the cooling rate [@White1991; @Birnboim2003; @Dekel2006]. Additionally, simulations have shown that much of the gas enters the halo along dense filaments or in clumps, which gives rise to short cooling times, even in the presence of a hot, hydrostatic halo. This denser gas does not go through an accretion shock near the virial radius and will therefore remain cold until it accretes onto the central galaxy or is hit by an outflow [e.g. @Keres2005; @Dekel2009a; @Voort2011a]. We refer to this mode as ‘cold-mode accretion’ [@Katz2003; @Keres2005].
Hot- and cold-mode accretion play very different roles in the formation of galaxies and their gaseous haloes [@Keres2005; @Ocvirk2008; @Keres2009a; @Keres2009b; @Brooks2009; @Dekel2009a; @Crain2010a; @Voort2011a; @Voort2011b; @Powell2011; @Faucher2011]. It has been shown that cold-mode accretion is more important at high redshift, when the density of the Universe is higher. Hot-mode accretion dominates the fuelling of the gaseous haloes of high-mass systems [halo mass $> 10^{12} \, {\rm M_\odot}$; e.g. @Ocvirk2008; @Voort2011a]. The importance of hot-mode accretion is much reduced when considering accretion onto galaxies (as opposed to haloes) [@Keres2009a; @Voort2011a]. At $z\ge1$ all galaxies accrete more than half of their material in the cold mode, although the contribution of hot-mode accretion is not negligible for high-mass haloes. Cold-mode accretion provides most of the fuel for star formation and shapes the cosmic star formation rate density [@Voort2011b].
@Voort2011a [@Voort2011b] investigated the roles of feedback mechanisms on the gas accretion. They found that while the inclusion of metal-line cooling has no effect on the accretion onto haloes, it does increase the accretion rate onto galaxies, because it decreases the cooling time of the hot halo gas. Feedback from SNe and AGN can reduce the accretion rates onto haloes by factors of a few, but accretion onto galaxies is suppressed by up to an order of magnitude. The inclusion of AGN feedback is particularly important for suppressing hot-mode accretion onto galaxies, because it is mainly effective in high-mass haloes and because diffuse gas is more susceptible to outflows.
Hot, hydrostatic halo gas is routinely studied using X-ray observations of galaxy groups and clusters and has perhaps even been detected around individual galaxies [e.g. @Crain2010a; @Crain2010b; @Anderson2011]. As of yet, there is no direct observational evidence for cold-mode accretion, even though there are claims of individual detections in H<span style="font-variant:small-caps;">i</span> absorption based on the low metallicity and proximity to a galaxy of the absorption system [@Ribaudo2011; @Giavalisco2011]. Cosmological simulations can reproduce the observed H<span style="font-variant:small-caps;">i</span> column density distribution [@Altay2011]. They show that cold-mode accretion is responsible for much of the observed high column density H<span style="font-variant:small-caps;">i</span> absorption at $z\sim 3$. In particular, most of the detected Lyman limit and low column density damped Lyman-$\alpha$ absorption may arise in cold accretion streams [@Fumagalli2011a; @Voort2011c].
It has also been claimed that the diffuse Lyman-$\alpha$ emission detected around some high-redshift galaxies is powered by cold accretion [e.g. @Fardal2001; @Dijkstra2009; @Goerdt2010; @Rosdahl2012], but both simulations and observations indicate that the emission is more likely scattered light from central H<span style="font-variant:small-caps;">ii</span> regions [e.g. @Furlanetto2005; @Faucher2010; @Steidel2010; @Hayes2011; @Rauch2011].
The temperature is, however, not the only difference between the two accretion modes. In this paper we use the suite of cosmological hydrodynamical simulations from the OverWhelmingly Large Simulations project [OWLS; @Schaye2010] to investigate other physical properties, such as the gas density, pressure, entropy, metallicity, radial peculiar velocity, and accretion rate of the gas in the two modes. We will study the dependence of gas properties on radius for haloes of total mass $\sim 10^{12}~{\rm M}_\odot$ and the dependence on halo mass of the properties of gas just inside the virial radius. Besides contrasting the hot and cold accretion modes, we will also distinguish between inflowing and outflowing gas. While most of our results will be presented for $z=2$, when both hot- and cold-mode accretion are important for haloes of mass $\sim 10^{12}~{\rm M}_\odot$, we will also present some results for $z=0$, which are therefore directly relevant for observations of gas around the Milky Way. We will make use of the different OWLS runs to investigate how the results vary with the efficiency of the feedback and the cooling.
This paper is organized as follows. The simulations are described in Section \[sec:sim\], including the model variations, the way in which haloes are identified, and our method for distinguishing gas accreting in the hot and cold modes. In Sections \[sec:properties\] and \[sec:mass\] we study the radial profiles and the dependence on halo mass, respectively. In Section \[sec:inout\] we discuss the difference in physical properties between inflowing and outflowing gas. We assess the effect of metal-line cooling and feedback from SNe and AGN on the gas properties in Section \[sec:SNAGN\]. In Section \[sec:z0\] we study the properties of gas around Milky Way-sized galaxies at $z=0$. Finally, we discuss and summarize our conclusions in Section \[sec:concl\].
Simulations {#sec:sim}
===========
To investigate the gas properties in and around haloes, we make use of simulations taken from the OWLS project [@Schaye2010], which consists of a large number of cosmological simulations, with varying (subgrid) physics. Here, we make use of a subset of these simulations. We first summarize the reference simulation, from which we derive our main results. The other simulations are described in Section \[sec:var\]. For a full description of the simulations, we refer the reader to @Schaye2010. Here, we will only summarize their main properties.
We use a modified version of <span style="font-variant:small-caps;">gadget-3</span> [last described in @Springel2005], a smoothed particle hydrodynamics (SPH) code that uses the entropy formulation of SPH [@Springel2002], which conserves both energy and entropy where appropriate.
All the cosmological simulations used in this work assume a $\Lambda$CDM cosmology with parameters derived from the WMAP year 3 data, $\Omega_\mathrm{m} = 1 - \Omega_\Lambda = 0.238$, $\Omega_\mathrm{b} = 0.0418$, $h = 0.73$, $\sigma_8 = 0.74$, $n = 0.951$ [@Spergel2007]. These values are consistent[^2] with the WMAP year 7 data [@Komatsu2011]. The primordial abundances are $X = 0.752$ and $Y = 0.248$, where $X$ and $Y$ are the mass fractions of hydrogen and helium, respectively.
[lrrrrrr]{}\
simulation & $L_\mathrm{box}$ & $N$ & $m_\mathrm{DM}$ & $m_\mathrm{gas}^\mathrm{initial}$ & number of haloes with & number of resolved haloes\
& ($h^{-1}$Mpc) & & (M$_\odot$) & (M$_\odot$) & $10^{11.5}$ M$_\odot<M_\mathrm{halo}<10^{12.5}$ M$_\odot$ & at $z=2$\
\
*L100N512* & 100 & 512$^3$ & $5.56\times 10^8$ & $1.19 \times 10^8$ & 4407 ($z=2$) & 32167\
*L050N512* & [50]{} & 512$^3$ & $6.95\times 10^7$ & $1.48\times 10^7$ & 518 ($z=2$); 1033 ($z=0$) & 32663\
*L025N512* & [25]{} & 512$^3$ & $8.68\times 10^6$ & $1.85\times 10^6$ & 59 ($z=2$)& 25813\
A cubic volume with periodic boundary conditions is defined, within which the mass is distributed over $N^3$ dark matter and as many gas particles. The box size (i.e. the length of a side of the simulation volume) of the simulations used in this work are 25, 50, and 100 $h^{-1}$Mpc, with $N=512$. The (initial) particle masses for baryons and dark matter are $1.5\times10^7(\frac{L_\mathrm{box}}{50\ h^{-1}\mathrm{Mpc}})^3$ M$_\odot$ and $7.0\times10^7(\frac{L_\mathrm{box}}{50\ h^{-1}\mathrm{Mpc}})^3$ M$_\odot$, respectively, and are listed in Table \[tab:res\]. We use the notation *L\*\*\*N\#\#\#*, where *\*\*\** indicates the box size in comoving Mpc$/h$ and *\#\#\#* the number of particles per dimension. The gravitational softening length is initially 3.9 $(\frac{L_\mathrm{box}}{50\ h^{-1}\mathrm{Mpc}})$ $h^{-1}$ comoving kpc, i.e. 1/25 of the mean dark matter particle separation, but we imposed a maximum of 1 $(\frac{L_\mathrm{box}}{50\ h^{-1}\mathrm{Mpc}})$ $h^{-1}$kpc proper. We use simulation *REF\_L050N512* for our main results. The *L025N512* simulations are used for images, for comparisons between simulations with different subgrid physics, and for resolution tests. The *L100N512* run is only used for the convergence tests shown in the appendix.
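Since the particle masses and softening lengths simply scale with the box size, the values in Table \[tab:res\] follow from a few lines of arithmetic. The helper below is our own illustrative encoding of the scalings quoted above (valid for the $N=512$ runs), not part of the simulation code.

```python
def owls_resolution(l_box_mpc_h):
    """Particle masses (Msun) and softening lengths (kpc/h) implied by the
    scalings quoted in the text, relative to the L050N512 run (N = 512)."""
    scale = l_box_mpc_h / 50.0
    m_gas_init = 1.5e7 * scale**3      # initial gas particle mass
    m_dm = 7.0e7 * scale**3            # dark matter particle mass
    eps_comoving = 3.9 * scale         # 1/25 of the mean dark matter separation
    eps_max_proper = 1.0 * scale       # imposed maximum proper softening
    return m_gas_init, m_dm, eps_comoving, eps_max_proper

# e.g. the L025N512 box: ~1.9e6 and ~8.7e6 Msun, 1.95 and 0.5 kpc/h
print(owls_resolution(25.0))
```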
The abundances of eleven elements (hydrogen, helium, carbon, nitrogen, oxygen, neon, magnesium, silicon, sulphur, calcium, and iron) released by massive stars (type II SNe and stellar winds) and intermediate mass stars (type Ia SNe and asymptotic giant branch stars) are followed as described in @Wiersma2009b. We assume the stellar initial mass function (IMF) of @Chabrier2003, ranging from 0.1 to 100 M$_\odot$. As described in @Wiersma2009a, radiative cooling and heating are computed element-by-element in the presence of the cosmic microwave background radiation and the @Haardt2001 model for the UV/X-ray background from galaxies and quasars. The gas is assumed to be optically thin and in (photo)ionization equilibrium.
Star formation is modelled according to the recipe of @Schaye2008. The Jeans mass cannot be resolved in the cold, interstellar medium (ISM), which could lead to artificial fragmentation [e.g. @Bate1997]. Therefore, a polytropic equation of state $P_\mathrm{tot}\propto\rho_\mathrm{gas}^{4/3}$ is implemented for densities exceeding $n_\mathrm{H}=0.1$ cm$^{-3}$, where $P_\mathrm{tot}$ is the total pressure and $\rho_\mathrm{gas}$ the density of the gas. This equation of state makes the Jeans mass, as well as the ratio of the Jeans length and the SPH smoothing kernel, independent of the density. Gas particles whose proper density exceeds $n_\mathrm{H}\ge0.1$ cm$^{-3}$ while they have temperatures $T\le10^5$ K are moved on to this equation of state and can be converted into star particles. The star formation rate per unit mass depends on the gas pressure and is set to reproduce the observed Kennicutt-Schmidt law [@Kennicutt1998].
Feedback from star formation is implemented using the prescription of @Vecchia2008. About 40 per cent of the energy released by type II SNe is injected locally in kinetic form. The rest of the energy is assumed to be lost radiatively. Each gas particle within the SPH smoothing kernel of the newly formed star particle has a probability of being kicked. For the reference model, the mass loading parameter $\eta = 2$, meaning that, on average, the total mass of the particles being kicked is twice the mass of the star particle formed. Because the winds sweep up surrounding material, the effective mass loading can be much higher. The initial wind velocity is 600 kms$^{-1}$ for the reference model. @Schaye2010 showed that these parameter values yield a peak global star formation rate density that agrees with observations.
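A minimal sketch of the stochastic selection of wind particles is given below. This is our paraphrase of the @Vecchia2008 recipe; the isotropic random kick directions and the neighbour bookkeeping are simplified purely for illustration.

```python
import numpy as np

def kick_wind_particles(m_star, m_ngb, v_ngb, eta=2.0, v_wind=600.0, rng=None):
    """Give SPH neighbours of a newly formed star particle a wind kick.

    Each neighbour is selected with probability eta * m_star / sum(m_ngb), so
    that on average a gas mass eta * m_star receives a boost of v_wind (km/s).
    Simplified paraphrase of Dalla Vecchia & Schaye (2008); the kick directions
    are drawn isotropically here as an assumption for this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    prob = eta * m_star / np.sum(m_ngb)
    kicked = rng.random(len(m_ngb)) < prob
    # random unit vectors for the kicked particles
    direc = rng.normal(size=(len(m_ngb), 3))
    direc /= np.linalg.norm(direc, axis=1, keepdims=True)
    v_new = np.array(v_ngb, dtype=float)
    v_new[kicked] += v_wind * direc[kicked]
    return v_new, kicked
```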
Variations {#sec:var}
----------
[lcrcr]{}\
simulation & Z cool & $v_\mathrm{wind}$ & $\eta$ & AGN\
& & (kms$^{-1}$) & &\
\
*REF* & yes & 600 & 2 & no\
*NOSN\_NOZCOOL* & **no** & ** 0** & **0** & no\
*NOZCOOL* & **no** & 600 & 2 & no\
*WDENS* & yes & $\propto c_\mathrm{s}$ & $\propto c_\mathrm{s}^{-2}$ & no\
*AGN* & yes & 600 & 2 & **yes**\
To investigate the effect of feedback and metal-line cooling, we have performed a suite of simulations in which the subgrid prescriptions are varied. These are listed in Table \[tab:owls\].
The importance of metal-line cooling can be demonstrated by comparing the reference simulation (*REF*) to a simulation in which primordial abundances are assumed when calculating the cooling rates (*NOZCOOL*). We also performed a simulation in which both cooling by metals and feedback from SNe were omitted (*NOSN\_NOZCOOL*). To study the effect of SN feedback, this simulation can be compared to *NOZCOOL*.
In massive haloes the pressure of the ISM is too high for winds with velocities of 600 kms$^{-1}$ to blow the gas out of the galaxy [@Vecchia2008]. To make the winds effective at higher halo masses, the velocity can be scaled with the local sound speed, while adjusting the mass loading so as to keep the energy injected per unit stellar mass constant at $\approx 40$ per cent (*WDENS*).
Finally, we have included AGN feedback (*AGN*). Black holes grow via mergers and gas accretion and inject 1.5 per cent of the rest-mass energy of the accreted gas into the surrounding matter in the form of heat. The model is based on the one introduced by @Springeletal2005 and is described and tested in @Booth2009, who also demonstrate that the simulation reproduces the observed mass density in black holes and the observed scaling relations between black hole mass and central stellar velocity dispersion and between black hole mass and stellar mass. @McCarthy2010 have shown that model *AGN* reproduces the observed stellar mass fractions, star formation rates, and stellar age distributions in galaxy groups, as well as the thermodynamic properties of the intragroup medium.
Identifying haloes {#sec:halo}
------------------
The first step towards finding gravitationally bound structures is to identify dark matter haloes. These can be found using a Friends-of-Friends (FoF) algorithm. If the separation between two dark matter particles is less than 20 per cent of the average separation (the linking length $b=0.2$), they are placed in the same group. Baryonic particles are linked to a FoF halo if their nearest dark matter neighbour is in that halo. We then use <span style="font-variant:small-caps;">subfind</span> [@Dolag2009] to find the most bound particle of a FoF halo, which serves as the halo centre. In this work we use a spherical overdensity criterion, considering all the particles in the simulation. We compute the virial radius, $R_\mathrm{vir}$, within which the average density agrees with the prediction of the top-hat spherical collapse model in a $\Lambda$CDM cosmology [@Bryan1998]. At $z=2$ this corresponds to a density of $\rho = 169\langle\rho\rangle$.
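As an illustration of the spherical overdensity step (our own sketch, not the actual halo finder), the virial radius can be located as below; particle positions are assumed to already be centred on the most bound particle.

```python
import numpy as np

def virial_radius(r, m, mean_rho, delta=169.0):
    """Spherical-overdensity radius: largest r at which the mean enclosed
    density (all particle species) still exceeds delta times the mean density
    of the Universe. delta = 169 is the top-hat value quoted for z = 2.

    r : particle distances from the halo centre; m : particle masses.
    Units must be consistent with mean_rho.
    """
    order = np.argsort(r)
    r_sorted = np.maximum(r[order], 1e-10)   # guard against a particle at r = 0
    rho_enc = np.cumsum(m[order]) / (4.0 / 3.0 * np.pi * r_sorted**3)
    above = np.nonzero(rho_enc >= delta * mean_rho)[0]
    return r_sorted[above[-1]] if above.size else np.nan
```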
We include only haloes containing more than 100 dark matter particles in our analysis, corresponding to a minimum dark matter halo mass of $M_\mathrm{halo}=10^{10.7}$, $10^{9.8}$, and $10^{8.9}$ M$_\odot$ in the 100, 50, and 25 $h^{-1}$Mpc box, respectively. For these limits our mass functions agree very well with the @Sheth1999 fit. Table \[tab:res\] lists, for each simulation of the reference model, the number of haloes with mass $10^{11.5}$ M$_\odot<M_\mathrm{halo}<10^{12.5}$ M$_\odot$ and the number of haloes with more than 100 dark matter particles.
Hot- and cold-mode gas
----------------------
During the simulations the maximum past temperature, $T_\mathrm{max}$, was stored in a separate variable. The variable was updated for each SPH particle at every time step for which the temperature was higher than the previous maximum past temperature. The artificial temperature the particles obtain when they are on the equation of state (i.e. when they are part of the unresolved multiphase ISM) was ignored in this process. This may cause us to underestimate the maximum past temperature of gas that experienced an accretion shock at densities $n_{\rm H} > 0.1 ~{\rm cm}^{-3}$. Ignoring such shocks is, however, consistent with our aims, as we are interested in the maximum temperature reached *before* the gas entered the galaxy. Note, however, that the maximum past temperature of some particles may reflect shocks in outflowing rather than accreting gas.
Another reason why $T_\mathrm{max}$ may underestimate the true maximum past temperature, is that in SPH simulations a shock is smeared out over a few smoothing lengths, leading to in-shock cooling [@Hutchings2000]. If the cooling time is of the order of, or smaller than, the time step, then the maximum temperature will be underestimated. @Creasey2011 have shown that a particle mass of $10^6$ M$_\odot$ is sufficient to avoid numerical overcooling of accretion shocks onto haloes, like in our high-resolution simulations (*L025N512*). The Appendix shows that our lower-resolution simulations give very similar results.
Even with infinite resolution, the post-shock temperatures may, however, not be well defined. Because electrons and protons have different masses, they will have different temperatures in the post-shock gas and it may take some time before they equilibrate through collisions or plasma effects. We have ignored this complication. Another effect, which was also not included in our simulation, is that shocks may be preceded by the radiation from the shock, which may affect the temperature evolution. Disregarding these issues, @Voort2011a showed that the distribution of $T_\mathrm{max}$ is bimodal and that a cut at $T_\mathrm{max}=10^{5.5}$ K naturally divides the gas into cold- and hot-mode accretion and that it produces similar results as studies based on adaptive mesh refinement simulations [@Ocvirk2008]. This $T_\mathrm{max}$ threshold was chosen because the cooling function peaks at $10^{5-5.5}$ K [e.g. @Wiersma2009a], which results in a minimum in the temperature distribution. Additionally, the UV background can only heat gas to about $10^5$ K, which is therefore characteristic for cold-mode accretion. In this work we use the same $T_\mathrm{max}=10^{5.5}$ K threshold to separate the cold and hot modes.
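With the stored maximum past temperatures, the split into the two modes (and the “hot fraction” used later) reduces to a simple threshold test. The snippet below is a trivial but explicit statement of the criterion we use; the function names are ours.

```python
import numpy as np

T_MAX_CUT = 10**5.5   # K: threshold between cold- and hot-mode gas

def is_hot_mode(t_max):
    """True for particles accreted in the hot mode (T_max >= 10^5.5 K)."""
    return np.asarray(t_max) >= T_MAX_CUT

def hot_fraction(t_max, mass):
    """Mass fraction of gas that was accreted in the hot mode."""
    hot = is_hot_mode(t_max)
    return np.sum(mass[hot]) / np.sum(mass)
```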
Physical properties: dependence on radius {#sec:properties}
=========================================
![image](figures/zoomsinglehalodenshotcold_z2.eps)
The gas in the Universe is distributed in a cosmic web of sheets, filaments, and haloes. The filaments also affect the structure of the haloes that reside inside them or at their intersections. At high redshift, cold, narrow streams penetrate hot haloes and feed galaxies efficiently [e.g. @Keres2005; @Dekel2006; @Agertz2009; @Ceverino2010; @Voort2011a]. The middle row of Fig. \[fig:dens\] shows the overdensity in several haloes with different masses, ranging from $M_\mathrm{halo}=10^{12.5}$ (left panel) to $10^{10.5}$ M$_\odot$ (right panel), taken from the high-resolution reference simulation (*REF\_L025N512*) at $z=2$. Each image is four virial radii on a side, so the physical scale decreases with decreasing halo mass, as indicated in the middle panels. To illustrate the morphologies of gas that was accreted in the different modes, we show the density of the cold- and hot-mode gas separately in the top and bottom panels, respectively. The spatial distribution is clearly different. Whereas the cold-mode gas shows clear filaments and many clumps, the hot-mode gas is much more spherically symmetric and smooth, particularly for the higher halo masses. The filaments become broader, relative to the size of the halo, for lower mass haloes. In high-mass haloes, the streams look disturbed and some fragment into small, dense clumps, whereas they are broad and smooth in low-mass haloes. Cold-mode accretion is clearly possible in haloes that are massive enough to have well-developed virial shocks if the density of the accreting gas is high, which is the case when the gas accretes along filaments or in clumps.
![image](figures/zoomsinglehaloproperties_z2_M12.eps)
Fig. \[fig:halo\] shows several physical quantities for the gas in a cubic 1 $h^{-1}$ comoving Mpc region, which is about four times the virial radius, centred on the $10^{12}$ M$_\odot$ halo from Fig. \[fig:dens\]. These properties are (from the top-left to the bottom-right): gas overdensity, temperature, maximum past temperature, pressure, entropy, metallicity, radial peculiar velocity, radial mass flux, and finally the “hot fraction” which we define as the mass fraction of the gas that was accreted in the hot mode (i.e. that has $T_\mathrm{max}\ge10^{5.5}$ K). The properties are mass-weighted and projected along the line of sight. The virial radius is 264 $h^{-1}$ comoving kpc and is indicated by the white circles.
![image](figures/radialprop_REF_L050N512_z2p0_mass11p5to12p5_pecvel0p1.eps)
In Fig. \[fig:haloradz2\] we show the same quantities as in Fig. \[fig:halo\] as a function of radius for the haloes with $10^{11.5}$ M$_\odot<M_\mathrm{halo}<10^{12.5}$ M$_\odot$ at $z=2$ in simulation *REF\_L050N512*. The black curves show the median values for all gas, except for the last two panels which show the mean values. The red (blue) curves show the median or mean values for hot- (cold-)mode gas, i.e. gas with maximum past temperatures above (below) $10^{5.5}$ K. The shaded regions show values within the 16th and 84th percentiles. Hot-mode gas at radii larger than $2R_\mathrm{vir}$ is dominated by gas associated with other haloes and/or large-scale filaments.
All the results we show are weighted by mass. We stacked all 518 haloes in the selected mass range, using $R/R_{\rm vir}$ as the radial coordinate. The black curves in Fig. \[fig:haloradz2\] (except for the last two panels) then show the values of the corresponding property (e.g. the gas overdensity in the top-left panel) that divide the total mass in each radial bin in half, i.e. half the mass lies above the curve. We have done the same analysis for volume-weighted quantities by computing, as a function of radius, the values of each property that divide the total volume, i.e. the sum of $m_\mathrm{gas}/\rho$, in half, but we do not show the results. The volume is completely dominated by hot-mode gas out to twice the virial radius, and the hot-mode volume fraction only drops to 50 per cent at $3R_\mathrm{vir}$. Even though the volume-weighted hot fraction is very different, the properties of the gas and the differences between the properties of hot- and cold-mode gas are similar if we weight by volume rather than mass.
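A minimal sketch of this stacking procedure is given below (Python, with hypothetical NumPy arrays `r_over_rvir`, `values` and `weights`); for the mass-weighted profiles the weights are the particle gas masses, while for the volume-weighted version one would instead pass $m_\mathrm{gas}/\rho$.

```python
import numpy as np

def weighted_median(values, weights):
    """Value that splits the total weight in half (50th weighted percentile)."""
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, 0.5 * cum[-1])]

def stacked_profile(r_over_rvir, values, weights, bins):
    """Weighted median of a gas property in bins of R/R_vir for stacked haloes."""
    prof = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        sel = (r_over_rvir >= bins[i]) & (r_over_rvir < bins[i + 1])
        if sel.any():
            prof[i] = weighted_median(values[sel], weights[sel])
    return prof
```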
We find that the median density of cold-mode gas is higher, by up to 1 dex, than that of hot-mode gas and that its current temperature is lower, by up to 2 dex, at least beyond $0.2R_\mathrm{vir}$. The hot-mode maximum past temperature is on average about an order of magnitude higher than the cold-mode maximum past temperature. The median pressure of the hot-mode gas only exceeds that of the cold mode by a factor of a few, but for the entropy the difference reaches 2.5 dex. For $R\ga R_{\rm vir}$ the gas metallicity in the cold mode is lower and has a much larger spread (four times larger at $R_\mathrm{vir}$) than in the hot mode. Cold-mode gas is flowing in at much higher velocities, by up to 150 $\rm km\, s^{-1}$, and dominates the accretion rate at all radii. Below, we will discuss these gas properties individually and in more detail.
Density {#sec:dens}
-------
The halo shown in Fig. \[fig:halo\] is being fed by dense, clumpy filaments as well as by cooling diffuse gas. The filaments are overdense for their radius, both inside and outside the halo. The top-left panel of Fig. \[fig:haloradz2\] shows that the overdensity of both hot- and cold-mode gas increases with decreasing radius, from $\sim 10$ at $10R_{\rm vir}$ to $\sim 10^2$ at $R_{\rm vir}$ and to $10^4$ at $0.1R_{\rm vir}$. The median density of cold-mode gas is higher by up to an order of magnitude than that of hot-mode gas for all radii $0.1R_\mathrm{vir}<R<4R_\mathrm{vir}$. The cold-mode gas densities exhibit a significant scatter of about 2 dex, as opposed to about 0.4 dex for hot-mode gas at $R_\mathrm{vir}$, which implies that the cold-mode gas is much clumpier. Beyond $4R_\mathrm{vir}$ the median hot-mode density becomes higher than for the cold mode, because there the hot-mode gas is associated with different haloes and/or large-scale filaments, which are also responsible for heating the gas.
Temperature
-----------
Hot gas, heated either by accretion shocks or by SN feedback, extends far beyond the virial radius (top-middle of Fig. \[fig:halo\]). Most of the volume is filled with hot gas. The location of cold gas overlaps with that of dense gas, so the temperature and density are anti-correlated. This anti-correlation arises because the cooling time decreases with increasing gas density.
For $R\gtrsim0.2R_\mathrm{vir}$ the temperatures of the hot- and cold-mode gas do not vary strongly with radius (top middle panel of Fig. \[fig:haloradz2\]). Note that this panel shows the *current* temperature and not the maximum *past* temperature. Gas accreted in the hot mode has a temperature $\sim 10^6$ K at $R>0.2R_\mathrm{vir}$, which is similar to the virial temperature. The median temperature of the hot-mode gas increases slightly from $\approx 2R_{\rm vir}$ to $\approx 0.2R_\mathrm{vir}$ because the hot gas is compressed as it falls in. Within $0.5R_{\rm vir}$ the scatter increases and around $0.2R_{\rm vir}$ the median temperature drops sharply to $\sim 10^4$ K. The dramatic decrease in the temperature of the hot-mode gas is a manifestation of the strong cooling flow that results when the gas has become sufficiently dense to radiate away its thermal energy within a dynamical time. The median temperature of cold-mode gas peaks at slightly below $10^5$ K around $2R_\mathrm{vir}$ and decreases to $\sim 10^4$ K at $0.1~R_\mathrm{vir}$. The peak in the temperature of the cold-mode gas is determined by the interplay between photo-heating by the UV background and radiative cooling. The temperature difference between the two accretion modes reaches a maximum of about 2 dex at $0.3R_\mathrm{vir}$ and vanishes around $0.1R_\mathrm{vir}$.
Maximum past temperature
------------------------
The maximum past temperature (top-right panel of Fig. \[fig:halo\]) is by definition at least as high as the current temperature, but its spatial distribution correlates well with that of the current temperature. As shown by Fig. \[fig:haloradz2\], the difference between maximum past temperature and current temperature is small at $R\ga R_\mathrm{vir}$, but increases towards smaller radii and becomes 1 dex for cold-mode gas and 2 dex for hot-mode gas at 0.1$R_\mathrm{vir}$.
While the temperature of the cold-mode gas decreases with decreasing radius, its maximum past temperature stays constant at $T_\mathrm{max}\approx10^5$ K. This value of $T_\mathrm{max}$ is reached around 2$R_\mathrm{vir}$ as a result of heating by the UV background. Both the current and the maximum past temperature of the hot-mode gas increase with decreasing radius for $R>0.3R_\mathrm{vir}$. The fact that $T_\mathrm{max}$ decreases below $0.3R_\mathrm{vir}$ shows that it is, on average, the colder part of the hot-mode gas that can reach these inner radii. If it were a random subset of all the hot-mode gas, then $T_\mathrm{max}$ would have stayed constant.
Pressure
--------
As required by hydrostatic equilibrium, the gas pressure generally increases with decreasing radius (middle-left panels of Figs. \[fig:halo\] and \[fig:haloradz2\]). However, the median pressure profile (Fig. \[fig:haloradz2\]) does show a dip around $0.2-0.3 R_{\rm vir}$ that reflects the sharp drop in the temperature profiles. Here catastrophic cooling leads to a strong cooling flow and thus a breakdown of hydrostatic equilibrium.
Comparing the pressure map with those of the density and temperature, the most striking difference is that the filaments become nearly invisible inside the virial radius, whereas they stood out in the density and temperature maps. However, beyond the virial radius the filaments do have a higher pressure than the diffuse gas. This suggests that pressure equilibrium is quickly established after the gas accretes onto the haloes.
Fig. \[fig:haloradz2\] shows that the difference between the pressures of the hot- and cold-mode gas increases beyond $2R_\mathrm{vir}$. This is because at these large radii the hot-mode gas is associated with other haloes and/or large-scale filaments, while the cold-mode gas is intergalactic, so we do not expect them to be in pressure equilibrium. Moving inwards from the virial radius, the median pressure difference increases until it reaches about an order of magnitude at $0.3R_{\rm vir}$. At smaller radii the pressures become nearly the same because the hot-mode gas cools down to the same temperature as the cold-mode gas.
Although it takes some time to reach pressure equilibrium if the hot gas is suddenly heated, we expect the hot and cold gas to be approximately in equilibrium inside the halo, because a phase with a higher pressure will expand, lowering its pressure, and compressing the phase with the lower pressure, until equilibrium is reached. While the pressure distributions do overlap, there is still a significant difference between the two. This difference decreases somewhat with increasing resolution, because the cold gas reaches higher densities and thus higher pressures, as is shown in the appendix. From the example pressure map (Fig. \[fig:halo\]) we can see that the filaments inside the halo are in fact approximately in pressure equilibrium with the diffuse gas around them. At first sight this seems at odds with the fact that the median pressure profiles are different. However, the pressure map also reveals an asymmetry in the pressure inside the halo, with the gas to the left of the centre having a higher pressure than the gas to the right of the centre. Because there is also more hot-mode gas to the left, this leads to a pressure difference between the two modes when averaged over spherical shells, even though the two phases are locally in equilibrium. The asymmetry arises because the hot-mode gas is a space-filling gas and the flow has to converge towards the centre of the halo, which increases its pressure. The cold-mode gas is not space filling and therefore does not need to compress as much.
Entropy
-------
We define the entropy as $$S\equiv \dfrac{P(\mu m_\mathrm{H})^{5/3}}{k_\mathrm{B}\rho^{5/3}},$$ where $\mu$ is the mean molecular weight, $m_\mathrm{H}$ is the mass of a hydrogen atom, and $k_\mathrm{B}$ is Boltzmann’s constant. Note that the entropy remains invariant for adiabatic processes. In the central panel of Fig. \[fig:halo\] we clearly see that the filaments have much lower entropies than the diffuse gas around them. This is expected for cold, dense gas.
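A small helper evaluating this definition (a sketch assuming cgs inputs; the function and variable names below are illustrative, not taken from the simulation code) would be:

```python
k_B = 1.3807e-16  # erg K^-1 (cgs)
m_H = 1.6726e-24  # g

def entropy(pressure, density, mu=0.59):
    """S = P (mu m_H)^{5/3} / (k_B rho^{5/3}).

    pressure in erg cm^-3 and density in g cm^-3 give S in K cm^2.
    mu = 0.59 corresponds to a fully ionized gas of primordial composition.
    """
    return pressure * (mu * m_H)**(5.0 / 3.0) / (k_B * density**(5.0 / 3.0))
```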
For $R>R_\mathrm{vir}$ the median entropy of hot-mode gas is always higher than that of cold-mode gas (Fig. \[fig:haloradz2\]). While the entropy of the cold-mode gas decreases smoothly and strongly towards the centre of the halo, the entropy of the hot-mode gas decreases only slightly down to $0.2R_\mathrm{vir}$ after which it drops steeply. Cold-mode gas cools gradually, but hot-mode gas cannot cool until it reaches high enough densities, which results in a strong cooling flow.
Metallicity {#sec:metal}
-----------
The middle-right panels of Figs. \[fig:halo\] and \[fig:haloradz2\] show that the cold-mode streams have much lower metallicities than the diffuse, hot-mode gas, at least for $R\ga R_{\rm vir}$. The cold-mode gas also has a much larger spread in metallicity, four orders of magnitude at $R_\mathrm{vir}$, as opposed to only one order of magnitude for hot-mode gas.
The gas in the filaments tends to have a lower metallicity, because most of it has never been close to a star-forming region, nor has it been affected by galactic winds, which tend to avoid the filaments [@Theuns2002]. The radial velocity image in the bottom-left panel of Fig. \[fig:halo\] confirms that the winds take the path of least resistance. The cold mode also includes dense clumps, which show a wide range of metallicities. If the density is high enough for embedded star formation to occur, then this can quickly enrich the entire clump. The enhanced metallicity will increase its cooling rate, making it even more likely to accrete in the cold mode (recall that our definition of the maximum past temperature ignores shocks in the ISM). On the other hand, clumps that have not formed stars remain metal-poor. The metallicity spread is thus caused by a combination of being shielded from winds driven by the central galaxy and exposure to internal star formation.
At $R_\mathrm{vir}$ the median metallicity of the gas is subsolar, $Z\sim 10^{-1}~Z_\odot$ for hot-mode gas and $Z\sim 10^{-2}~Z_\odot$ for cold-mode gas. However, we caution the reader that the median cold-mode metallicity is not converged with numerical resolution (see the Appendix) and could in fact be much lower. The metallicity increases towards the centre of the halo and this increase is steeper for cold-mode gas. The metallicity difference between the two modes disappears at $R\approx 0.5R_\mathrm{vir}$, but we find this radius to move inwards with increasing resolution (see the Appendix). Close to the centre the hot gas cools down and ongoing star formation in the disc enriches all the gas. The scatter in the metallicity decreases, especially for cold-mode gas, to $\sim0.7$ dex.
As discussed in detail by @Wiersma2009b, there is no unique definition of metallicity in SPH. The metallicity that we assign to each particle is the ratio of the metal mass density and the total gas density at the position of the particle. These “SPH-smoothed abundances” were also used during the simulation for the calculation of the cooling rates. Instead of using SPH-smoothed metallicities, we could, however, also have chosen to compute the metallicity as the ratio of the metal mass and the total gas mass of each particle. Using these so-called particle metallicities would sharpen the metallicity gradients at the interfaces of different gas phases. Indeed, we find that using particle metallicities decreases the median metallicity of the metal-poor cold mode. For the hot mode the median particle metallicity is also lower than the median smoothed metallicity, but it increases with resolution, whereas the cold-mode particle metallicities decrease with resolution.
While high-metallicity gas may belong to either mode, gas with metallicity $\la 10^{-3}~Z_\odot$ is highly likely to be part of a cold flow. This conclusion is strengthened when we increase the resolution of the simulation or when we use particle rather than SPH-smoothed metallicities. Thus, a very low metallicity appears to be a robust way of identifying cold-mode gas.
Radial velocity
---------------
The radial peculiar velocity is calculated with respect to the halo centre after subtracting the peculiar velocity of the halo, which we compute as the mass-weighted average velocity of all the gas particles within 10 per cent of the virial radius. Note that the Hubble flow is not included in the radial velocities shown. It is unimportant inside haloes, but is about a factor of two larger than the peculiar velocity at $10R_\mathrm{vir}$.
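The sketch below (Python, with hypothetical arrays `pos`, `vel` and `mass` in proper coordinates) illustrates this procedure: the halo bulk velocity is the mass-weighted mean velocity of the gas within $0.1R_\mathrm{vir}$, and the remaining peculiar velocity is projected onto the radial direction.

```python
import numpy as np

def radial_peculiar_velocity(pos, vel, mass, halo_centre, r_vir):
    """Radial peculiar velocity of each particle with respect to the halo centre.

    The Hubble flow is not included; inflow corresponds to negative values.
    """
    dx = pos - halo_centre                   # proper coordinates
    r = np.linalg.norm(dx, axis=1)
    inner = r < 0.1 * r_vir                  # gas used to define the halo velocity
    v_halo = np.average(vel[inner], axis=0, weights=mass[inner])
    v_pec = vel - v_halo
    # project the peculiar velocity onto the radial unit vector
    return np.einsum('ij,ij->i', v_pec, dx) / np.clip(r, 1e-10, None)
```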
The bottom-left panel of Fig. \[fig:halo\] shows that gas outside the haloes is, in general, moving towards the halo (i.e. it has a negative radial velocity). Within the virial radius, however, more than half of the projected area is covered by outflowing gas. These outflows are not only caused by SN feedback. In fact, simulations without feedback also show significant outflows (see Figs. \[fig:halodiff\] and \[fig:haloradz2diff\]). A comparison with the pressure map shows that the outflows occur in the regions where the pressure is relatively high for its radius (by $0.1-0.4$ dex, see Figs. \[fig:haloradflux\] and \[fig:halomassflux\]). The inflowing gas is associated with the dense, cold streams, but the regions of infall are broader than the cold filaments. Some of the hot-mode gas is also flowing in along with the cold-mode gas. These fast streams can penetrate the halo and feed the central disc. At the same time, some of the high temperature gas will expand, causing mild outflows in high pressure regions and these outflows are strengthened by SN-driven winds.
The hot-mode gas is falling in more slowly than the cold-mode gas or is even outflowing (bottom-left panel of Fig. \[fig:haloradz2\]). This is expected, because the gas converts its kinetic energy into thermal energy when it goes through an accretion shock and because a significant fraction of the hot-mode gas may have been affected by feedback. For hot-mode gas the median radial velocity is closest to zero between $0.3R_\mathrm{vir}<R<1R_\mathrm{vir}$. Most of it is inflowing at smaller radii, where the gas temperature drops dramatically, and also at larger radii.
The cold-mode gas appears to accelerate to $-150$ km s$^{-1}$ towards $R_\mathrm{vir}$ (i.e. radial velocities becoming more negative) and to decelerate to $-30$ km s$^{-1}$ from $R_\mathrm{vir}$ towards the disc. We stress, however, that the behaviour of individual gas elements is likely to differ significantly from the median profiles. Individual, cold gas parcels will likely accelerate until they go through an accretion shock or are hit by an outflow, at which point the radial velocity may suddenly vanish or change sign. If this is more likely to happen at smaller radii, then the median profiles will show a smoothly decelerating inflow. Finally, observe that while there is almost no outflowing cold-mode gas around the virial radius, close to the central galaxy ($R\la 0.3R_\mathrm{vir}$) a significant fraction is outflowing.
Accretion rate
--------------
The appropriate definition of the accretion rate in an expanding Universe depends on the question of interest. Here we are interested in the mass growth of haloes in a comoving frame, where the haloes are defined using a criterion that would keep halo masses constant in time if there were no peculiar velocities. An example of such a halo definition is the spherical overdensity criterion, which we use here, because the virial radius is in that case defined as the radius within which the mean internal density is a fixed multiple of some fixed comoving density.
The net amount of gas mass that is accreted per unit time through a spherical shell $S$ with comoving radius $x = R/a$, where $a$ is the expansion factor, is then given by the surface integral $$\begin{aligned}
\dot{M}_{\rm gas}(x) &=& - \int_S a^3 \rho \dot{x}\frac{dS}{a^2}\\
&=& - \int_S \rho v_{\rm rad}\, dS,\end{aligned}$$ where $a^3\rho$ and $dS/a^2$ are a comoving density and a comoving area, respectively, and the radial peculiar velocity is $v_{\rm rad} \equiv a\dot{x}$. We evaluate this integral as follows, $$\dot{M}_{\rm gas}(R) = - \sum_{R \le r_i < R+dR} \dfrac{m_{{\rm gas},i}\,v_{{\rm rad},i}}{V_\mathrm{shell}}A_\mathrm{shell},
\label{eq:mdotgas}$$ where $$V_\mathrm{shell}=\dfrac{4\pi}{3}\left((R+dR)^3-R^3\right),$$ $$A_\mathrm{shell}=4\pi\left(R+\tfrac{1}{2}dR\right)^2,$$ $r_i$ is the radius of particle $i$, and $dR$ is the bin size. Note that a negative accretion rate corresponds to net outflow.
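A direct translation of equation (\[eq:mdotgas\]) into code might look as follows (a sketch with hypothetical particle arrays in consistent proper units; not the analysis code itself):

```python
import numpy as np

def gas_accretion_rate(r, m_gas, v_rad, R, dR):
    """Net gas accretion rate through the shell [R, R+dR), following eq. (mdotgas).

    r, m_gas, v_rad : particle radii, gas masses and radial peculiar velocities.
    A positive value means net inflow, a negative value net outflow.
    """
    sel = (r >= R) & (r < R + dR)
    V_shell = 4.0 * np.pi / 3.0 * ((R + dR)**3 - R**3)
    A_shell = 4.0 * np.pi * (R + 0.5 * dR)**2
    return -np.sum(m_gas[sel] * v_rad[sel]) / V_shell * A_shell
```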
The mass flux map shown in the bottom-middle panel of Fig. \[fig:halo\] is computed per unit area for each pixel as $\Sigma_i m_{{\rm gas},i}v_{{\rm rad},i}/V_\mathrm{pix}$, where $V_\mathrm{pix}$ is the proper volume of the pixel. The absolute mass flux is highest in the dense filaments and in the other galaxies outside $R_\mathrm{vir}$, because they contain a lot of mass and have high inflow velocities.
The gas accretion rate $\dot{M}_{\rm gas}(R/R_{\rm vir})$ computed using equation (\[eq:mdotgas\]) is shown as the black curve in the bottom-middle panel of Fig. \[fig:haloradz2\]. The accretion rate is averaged over all haloes in the mass bin we are considering here ($10^{11.5}$ M$_\odot<M_\mathrm{halo}<10^{12.5}$ M$_\odot$). Similarly, the red and blue curves are computed by including only hot- and cold-mode particles, respectively. The accretion rate is positive at all radii, indicating net accretion for both modes. The inflow rate is higher for the cold mode even though the hot-mode gas dominates the mass budget around the virial radius (see Section \[sec:hot\]). The hot-mode gas accretion rate is a combination of the density and the radial velocity of the hot-mode gas, as well as the amount of mass in the hot mode. Gas belonging to the cold mode at $R>R_\mathrm{vir}$ may later become part of the hot mode after it has reached $R<R_\mathrm{vir}$.
The extended halo is not in a steady state, because the accretion rate varies with radius. Moving inwards from $10 R_{\rm vir}$ to $R_{\rm vir}$, the net rate of infall drops by about an order of magnitude. This implies that the (extended) halo is growing: the flux of mass that enters a shell from larger radii exceeds the flux of mass that leaves the same shell in the direction of the halo centre. This sharp drop in the accretion rate with decreasing radius is in part due to the fact that some of the gas at $R > R_{\rm vir}$ is falling towards other haloes that trace the same large-scale structure.
Within the virial radius the rate of infall of all gas and of cold-mode gas continues to drop with decreasing radius, but the gradient becomes much less steep ($d\ln \dot{M}/d\ln R \approx 0.4$), indicating that the cold streams are efficient in transporting mass to the central galaxy. For the hot mode the accretion rate only flattens at $R\lesssim 0.4R_\mathrm{vir}$ around the onset of catastrophic cooling. Hence, once the hot-mode gas reaches small enough radii, its density becomes sufficiently high for cooling to become efficient, and the hot-mode accretion becomes efficient too. However, even at 0.1$R_\mathrm{vir}$ its accretion rate is still much lower than that of cold-mode gas.
Hot fraction {#sec:hot}
------------
Even though the average hot fraction of the halo in the image, i.e. the mean fraction of the gas mass that has a maximum past temperature greater than $10^{5.5}$ K, is close to 0.5, few of the pixels actually have this value. For most pixels $f_\mathrm{hot}$ is close to either one or zero (bottom-right panel of Fig. \[fig:halo\]), confirming the bimodal nature of the accretion.
The bottom-right panel of Fig. \[fig:haloradz2\] shows that the hot fraction peaks around the virial radius, where it is about 70 per cent. Although the hot fraction decreases beyond the virial radius, it is still 30 per cent around $10 R_{\rm vir}$. The hot-mode gas at very large radii is associated with other haloes and/or large-scale filaments. Within the halo the hot fraction decreases from 70 per cent at $R_\mathrm{vir}$ to 35 per cent at $0.1R_\mathrm{vir}$. While hot-mode accretion dominates the growth of haloes, most of the hot-mode gas does not reach the centre. Cold-mode accretion thus dominates the growth of galaxies.
Dependence on halo mass {#sec:mass}
=======================
![image](figures/massprop_REF_L050N512_z2p0_pecvel0p1.eps)
In Fig. \[fig:halomassz2\] we plot the same properties as in Fig. \[fig:haloradz2\] as a function of halo mass for gas at radii $0.8R_\mathrm{vir}<R<R_\mathrm{vir}$, where differences between hot- and cold-mode gas are large. Grey, dashed lines show analytic estimates and are discussed below. The dotted, grey line in the top-left panel indicates the star formation threshold, i.e. $n_{\rm H}=0.1~{\rm cm}^{-3}$. The differences between the density and temperature of the hot- and cold-mode gas increase with the mass of the halo. The average temperature, maximum past temperature, pressure, entropy, metallicity, absolute radial peculiar velocity, absolute accretion rate, and the hot fraction all increase with halo mass.
We can compare the gas overdensity at the virial radius to the density that we would expect if baryons were to trace the dark matter, $\rho_\mathrm{vir}$. We assume an NFW profile [@Navarro1996], take the mean internal density relative to the critical density at redshift $z$, $\Delta_c\langle\rho\rangle$, from spherical collapse calculations [@Bryan1998] and the halo mass-concentration relation from @Duffy2008 and calculate the mean overdensity at $R_\mathrm{vir}$. This is plotted as the dashed, grey line in the top-left panel of Fig. \[fig:halomassz2\]. It varies very weakly with halo mass, because the concentration depends on halo mass, but this is invisible on the scale of the plot. For all halo masses the median density is indeed close to this analytic estimate. While the same is true for the hot-mode gas, for high-mass haloes ($M_\mathrm{halo}\gtrsim10^{12}$ M$_\odot$) the median density of cold-mode gas is significantly higher than the estimated density and the difference reaches two orders of magnitude for $M_\mathrm{halo}\sim10^{13}$ M$_\odot$. A significant fraction of the cold-mode gas in these most massive haloes is star forming and hence part of the ISM of satellite galaxies. The fact that cold-mode gas becomes denser and thus clumpier with halo mass could have important consequences for the formation of clumpy galaxies at high redshift [@Dekel2009b; @Agertz2009; @Ceverino2010].
The blue curve in the top-middle panel shows that the median temperature of the cold-mode gas at $R_\mathrm{vir}$ decreases slightly with halo mass, from 40,000 K to 15,000 K. This reflects the increase in the median density of cold-mode gas with halo mass, which results in shorter cooling times. The median temperature of the hot-mode gas increases with halo mass and is approximately equal to the virial temperature for $M_\mathrm{halo}\gtrsim10^{11.5}$ M$_\odot$. The virial temperature is plotted as the dashed, grey line and is given by $$\begin{aligned}
\label{eqn:virialtemperature}
T_\mathrm{vir}&=&\left(\dfrac{G^2H_0^2\Omega_\mathrm{m}\Delta_c}{54}\right)^{1/3}\dfrac{\mu m_\mathrm{H}}{k_B}M_{\rm halo}^{2/3}(1+z),\\
&\approx& 9.1\times10^5~{\rm K} \left (\dfrac{M_{\rm halo}}{10^{12}~{\rm M}_\odot}\right )^{2/3} \left (\dfrac{1+z}{3}\right ),\end{aligned}$$ where $G$ is the gravitational constant, $H_0$ the Hubble constant and $\mu$ is assumed to be equal to 0.59.
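The expression for $T_\mathrm{vir}$ above is straightforward to evaluate; the sketch below assumes, for illustration only, $h\approx0.73$, $\Omega_\mathrm{m}\approx0.24$ and $\Delta_c\approx178$ (values not specified in this section), which reproduce the quoted $\sim9\times10^5$ K for a $10^{12}$ M$_\odot$ halo at $z=2$.

```python
G, k_B, m_H, M_sun = 6.674e-11, 1.381e-23, 1.673e-27, 1.989e30  # SI units

def T_vir(M_halo_Msun, z, H0=2.37e-18, Omega_m=0.24, Delta_c=178.0, mu=0.59):
    """Virial temperature (K); H0 in s^-1 (2.37e-18 s^-1 is ~73 km/s/Mpc)."""
    M = M_halo_Msun * M_sun
    return ((G**2 * H0**2 * Omega_m * Delta_c / 54.0)**(1.0 / 3.0)
            * mu * m_H / k_B * M**(2.0 / 3.0) * (1.0 + z))

print(T_vir(1e12, 2))  # of order 9e5 K
```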
While much of the gas accreted onto low-mass haloes in the hot mode has a temperature smaller than $10^{5.5}$ K and has therefore already cooled down substantially[^3], there is very little overlap in the current temperatures of gas accreted in the two modes for haloes with $T_\mathrm{vir} \ga 10^6~$K. Because the cooling rates decrease with temperature for $T > 10^{5.5}~$K [e.g. @Wiersma2009a], most of the hot-mode gas in haloes with higher temperatures stays hot. For the same reason, the median temperature of all gas rises sharply at $M\approx10^{11.5}$ M$_\odot$ ($T_{\rm vir} \approx 10^{5.5}$ K) and is roughly equal to $T_\mathrm{vir}$ for $M_\mathrm{halo}>10^{12}$ M$_\odot$.
The top-right panel shows that the median maximum past temperature of gas at the virial radius is close to the virial temperature, which is again shown as the grey dashed curve, for the full range of halo masses shown. Some of the gas does, however, have a maximum past temperature that differs strongly from the virial temperature. The largest difference is found for cold-mode gas in high-mass haloes. Because of its high density, its cooling time is short and the gas does not shock to the virial temperature. The maximum past temperature of gas accreted in the cold mode is close to $10^5$ K for all halo masses. For $M_\mathrm{halo} < 10^{10.5}$ M$_\odot$ this temperature is higher than the virial temperature. The gas in low-mass haloes has not been heated to its maximum temperature by a virial shock, but by the UV background radiation or by shocks from galactic winds. Heating by the UV background is the dominant process, because simulations without supernova feedback show the same result (see Fig. \[fig:haloradz2diff\]). The maximum past temperature of hot-mode gas follows the virial temperature closely for high-mass haloes. For $T_\mathrm{vir}<10^{5.5}$ K ($M_\mathrm{halo} \la 10^{11.5}$ M$_\odot$) the maximum past temperature of the hot-mode gas remains approximately constant, at around $10^{5.7}$ K, because of our definition of hot-mode gas ($T_\mathrm{max}\ge10^{5.5}$ K).
The pressure of the gas increases roughly as $M_{\rm halo}^{2/3}$ (middle-left panel). We can estimate the pressure at the virial radius from the virial temperature and the density at the virial radius: $$\frac{P_\mathrm{vir}}{k_{\rm B}} = \dfrac{T_\mathrm{vir}\rho_\mathrm{vir}}{\mu m_\mathrm{H}},$$ where $\mu$ is assumed to be equal to 0.59. This pressure is shown by the dashed, grey line. The actual pressure is very close to this simple estimate. It scales with mass in the same way as the virial temperature, because the density at the virial radius is nearly independent of halo mass. For all halo masses the median pressure of the gas accreted in the hot mode is about a factor of two higher than the median pressure of the cold-mode gas.
The central panel shows that the entropy difference between hot- and cold-mode gas increases with halo mass, because the entropy of hot-mode gas increases with halo mass, whereas the cold-mode entropy decreases. The hot-mode gas follows the slope of the relation expected from virial arguments, $$S_\mathrm{vir}=\dfrac{P_\mathrm{vir}(\mu m_\mathrm{H})^{5/3}}{k_\mathrm{B}\rho_\mathrm{vir}^{5/3}},$$ where $\mu$ is assumed to be equal to 0.59. $S_\mathrm{vir}$ is shown as the dashed, grey line.
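Combining the two estimates above gives $S_\mathrm{vir}=T_\mathrm{vir}\,(\mu m_\mathrm{H}/\rho_\mathrm{vir})^{2/3}$; a short sketch (cgs units, hypothetical helper names) is:

```python
m_H = 1.6726e-24  # g (cgs)

def P_vir_over_kB(T_vir, rho_vir, mu=0.59):
    """P_vir / k_B = T_vir * rho_vir / (mu m_H), in K cm^-3 for rho_vir in g cm^-3."""
    return T_vir * rho_vir / (mu * m_H)

def S_vir(T_vir, rho_vir, mu=0.59):
    """S_vir = P_vir (mu m_H)^{5/3} / (k_B rho_vir^{5/3}) = T_vir (mu m_H / rho_vir)^{2/3}."""
    return T_vir * (mu * m_H / rho_vir)**(2.0 / 3.0)
```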
The middle-right panel shows that the median gas metallicity at the virial radius increases from $\sim 10^{-2}~Z_\odot$ for $M_{\rm halo} \sim 10^{10}~{\rm M}_\odot$ to $\sim
10^{-1}~Z_\odot$ for $10^{13}~{\rm M}_\odot$. This increase reflects the increased fraction of hot-mode gas (see the bottom-right panel) and an increase in the median metallicity of the cold-mode gas, which is probably due to the fact that a greater fraction of the gas resides in the ISM of satellite galaxies for more massive haloes (see the top-left panel). The scatter in the metallicity of the cold-mode gas is always very large. The hot-mode gas has a median metallicity $\sim 10^{-1}~Z_\odot$ for all halo masses, which is similar to the predicted metallicity of the warm-hot intergalactic medium [@Wiersma2011].
The black curve in the bottom-left panel shows that for all halo masses more mass is falling into the halo than is flowing out. The radial velocity distributions are, however, very broad. A substantial fraction of the hot-mode gas, more than half for $M_{\rm halo}< 10^{11.5}~{\rm M}_\odot$, is outflowing at $R_\mathrm{vir}$. Cold-mode gas is predominantly inflowing for all masses, but the fraction of outflowing gas becomes significant for $M_{\rm halo}< 10^{11.5}~{\rm M}_\odot$.
As expected, the gas at the virial radius falls in faster for higher-mass haloes and the absolute velocities are larger for cold-mode gas. We can compare the radial peculiar velocity to the escape velocity, $$\begin{aligned}
\label{eqn:esc}
v_\mathrm{esc} &=& \sqrt{\dfrac{2GM_{\rm halo}}{R_\mathrm{vir}}},\\
&\approx & 275~{\rm km}\,{\rm s}^{-1} \left (\frac{M_{\rm halo}}{10^{12}~{\rm M}_\odot}\right )^{1/3} \left (\frac{1+z}{3}\right )^{1/2}, \end{aligned}$$ where we assumed a matter-dominated Universe and used $$\begin{aligned}
R_\mathrm{vir} &=& \left(\dfrac{2GM}{H_0^2\Omega_\mathrm{m}\Delta_c}\right)^{1/3}\dfrac{1}{1+z},\\
&\approx & 114~{\rm kpc} \left( \frac{M_{\rm halo}}{10^{12}~{\rm M}_\odot}\right )^{1/3} \left (\frac{1+z}{3}\right )^{-1}.\end{aligned}$$ We show $-v_\mathrm{esc}$ by the dashed, grey curve. We only expect the gas to have a velocity close to this estimate if it fell in freely from very large distances and if the Hubble expansion, which damps peculiar velocities, were unimportant. However, we do expect the scaling with mass to be more generally applicable. For the cold mode the trend with halo mass is indeed well reproduced by the escape velocity.
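The virial radius and escape velocity above can be evaluated in the same way (again assuming, for illustration, $h\approx0.73$, $\Omega_\mathrm{m}\approx0.24$ and $\Delta_c\approx178$):

```python
import numpy as np

G, M_sun = 6.674e-11, 1.989e30  # SI units

def R_vir(M_halo_Msun, z, H0=2.37e-18, Omega_m=0.24, Delta_c=178.0):
    """Virial radius (m) for a matter-dominated Universe, as in the text."""
    M = M_halo_Msun * M_sun
    return (2.0 * G * M / (H0**2 * Omega_m * Delta_c))**(1.0 / 3.0) / (1.0 + z)

def v_esc(M_halo_Msun, z, **cosmo):
    """Escape velocity (m/s) at the virial radius: sqrt(2 G M / R_vir)."""
    return np.sqrt(2.0 * G * M_halo_Msun * M_sun / R_vir(M_halo_Msun, z, **cosmo))

# R_vir(1e12, 2) is ~3.5e21 m and v_esc(1e12, 2) ~2.8e5 m/s, close to the
# ~114 kpc and ~275 km/s quoted above.
```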
![image](figures/radialpropflux_REF_L050N512_z2p0_mass11p5to12p5_pecvel0p1.eps)
![image](figures/masspropflux_REF_L050N512_z2p0_pecvel0p1.eps)
On average, for halo masses above $10^{11}$ M$_\odot$, the net accretion rate is positive, which means that more mass is flowing in than is flowing out (bottom-middle panel). Therefore, unsurprisingly, the haloes are growing. For haloes with $10^{10}$ M$_\odot<M_\mathrm{halo}<10^{11}$ M$_\odot$ the mean accretion rate is negative (indicating net outflow), but small ($\sim 0.1$ M$_\odot \,$yr$^{-1}$). The mean accretion rate of cold-mode gas is positive for all halo masses, but for hot-mode gas there is net outflow for $M_\mathrm{halo}<10^{11.5}$ M$_\odot$. Although these haloes are losing gas that is currently hot-mode, their hot-mode gas reservoir may still be increasing if cold-mode gas is converted into hot-mode gas. For higher-mass haloes, the hot-mode accretion rate is also positive and it increases approximately linearly with halo mass. This is the regime where the implemented supernova feedback is not strong enough to blow gas out of the halo. This transition mass is increased by more than an order of magnitude when more effective supernova feedback or AGN feedback is included (not shown). For $M_\mathrm{halo}>10^{12.5}$ M$_\odot$ the hot-mode inflow rate is slightly stronger than the cold-mode inflow rate.
The grey, dashed curve indicates the accretion rate a halo with a baryon fraction $\Omega_{\rm b}/\Omega_{\rm m}$ would need to have to grow to its current baryonic mass in a time equal to the age of the Universe at $z=2$, $$\dot{M}=\dfrac{\Omega_\mathrm{b}M_\mathrm{halo}}{\Omega_\mathrm{m}t_\mathrm{Universe}}.$$ Comparing this analytic estimate with the actual mean accretion rate, we see that they are equal for $M_\mathrm{halo}>10^{11.5}$ M$_\odot$, indicating that these haloes are in a regime of efficient growth. For lower-mass haloes, the infall rates are much lower, indicating that the growth of these haloes has halted, or that their baryon fractions are much smaller than $\Omega_{\rm b}/\Omega_{\rm m}$.
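This estimate is simple to evaluate; the sketch below assumes, for illustration, $\Omega_\mathrm{b}/\Omega_\mathrm{m}\approx0.17$ and an age of the Universe of $\approx3.3$ Gyr at $z=2$ (neither value is specified in this section):

```python
def efficient_growth_rate(M_halo_Msun, t_universe_yr=3.3e9, fb=0.17):
    """Accretion rate (Msun/yr) needed to assemble the halo's baryons in t_universe.

    fb is the universal baryon fraction Omega_b / Omega_m (assumed value).
    """
    return fb * M_halo_Msun / t_universe_yr

print(efficient_growth_rate(1e12))  # ~50 Msun/yr for a 1e12 Msun halo at z=2
```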
The bottom-right panel of Fig. \[fig:halomassz2\] shows that, at the virial radius, hot-mode gas dominates the gas mass for high-mass haloes. The hot fraction at $R_\mathrm{vir}$ increases from 10 per cent in haloes of $\sim 10^{10}$ M$_\odot$ to 90 per cent for $M_\mathrm{halo}\sim 10^{13}$ M$_\odot$. Note that for haloes with $M_\mathrm{halo}<10^{11.3}$ M$_\odot$ the virial temperatures are lower than our adopted threshold for hot-mode gas. The hot fraction would have been much lower without supernova feedback ($f_\mathrm{hot}<5$ per cent for $M_\mathrm{halo}<10^{10.5}$ M$_\odot$).
Inflow and outflow {#sec:inout}
==================
Figs. \[fig:haloradflux\] and \[fig:halomassflux\] show the physical properties of the gas, weighted by the radial mass flux, for all, inflowing, and outflowing gas (black, blue, and red curves, respectively). Except for the last two panels, the curves indicate the medians, i.e. half the mass flux is due to gas above the curves. Similarly, the shaded regions indicate the 16th and 84th percentiles. Note that we do not plot this separately for hot- and cold-mode gas. The differences between the blue and red curves arise purely from the different radial peculiar velocity directions.
We can immediately see that separating gas according to its radial velocity direction yields results similar to those obtained when the gas is separated according to its maximum past temperature (compare to Figs. \[fig:haloradz2\] and \[fig:halomassz2\]). Like cold-mode gas, inflowing gas has, on average, a higher density, a lower temperature, a lower entropy, and a lower metallicity than outflowing gas.
Comparing Figs. \[fig:haloradz2\] and \[fig:haloradflux\], we notice that the differences in density, temperature, pressure, entropy, and metallicity between in- and outflowing gas tend to be slightly smaller than between cold- and hot-mode gas. This is particularly true outside the haloes (at $R>3R_\mathrm{vir}$), where there is a clear upturn in the density and pressure of outflowing gas that is accompanied by a marked decrease in the temperature. These features are due to gas that is flowing towards other haloes and/or large-scale filaments. Although such gas is outflowing from the perspective of the selected halo, it is actually infalling gas and hence more likely to be cold-mode.
Unsurprisingly, the radial peculiar velocities (bottom-left panels) are clearly very different when we separate the gas into in- and outflowing components than when we divide it into cold and hot modes. Low values of the radial velocity are avoided because the plot is weighted by the mass flux. While the radial velocity of cold-mode gas decreases from the virial radius towards the centre, the mass flux-weighted median radial velocity of inflowing gas is roughly constant. Within the haloes, the mass flux-weighted median radial velocity of outflowing gas decreases with radius.
Both the inflow and the outflow mass flux (bottom-middle panel of Fig. \[fig:haloradflux\]) are approximately constant inside $0.7 R_{\rm vir}$, which implies that the fraction of the gas that is outflowing is also constant (last panel of Fig. \[fig:haloradflux\]). A mass flux that is independent of radius implies efficient mass transport, as the same amount of mass passes through each shell per unit of time. The inflowing mass flux decreases from 10$R_\mathrm{vir}$ to $\sim R_\mathrm{vir}$ because the overdensity of the region is increasing and because some of the gas that is infalling at distances $\gg R_{\mathrm{vir}}$ is falling towards neighbouring haloes. At $R\gtrsim R_\mathrm{vir}$ the outflowing mass flux decreases somewhat, indicating that the transportation of outflowing material slows down and that the galactic winds are becoming less efficient. This can also be seen by the drop in outflow fraction around the virial radius in the last panel of Fig. \[fig:haloradflux\]. (The small decrease in outflow fraction below $10^{10.5}$ M$_\odot$ is a resolution effect.) The outflowing mass flux increases again at large radii, because the hot-mode gas is falling towards unrelated haloes.
Comparing Figs. \[fig:halomassz2\] and \[fig:halomassflux\], we see again that, to first order, inflowing and outflowing gas behave similarly as cold-mode and hot-mode gas, respectively. There are, however, some clear differences, although we do need to keep in mind that some are due to the fact that Fig. \[fig:halomassz2\] is mass-weighted while Fig. \[fig:halomassflux\] is mass flux-weighted. The mass flux-weighted median temperature of outflowing gas is always close to the virial temperature. For $M_\mathrm{halo}\la 10^{11}$ M$_\odot$ this is much lower than the median temperature of hot-mode gas, but that merely reflects the fact that for these haloes the virial temperature is lower than the value of $10^{5.5}~$K that we use to separate the cold and hot modes. The mass flux-weighted maximum past temperature is about 0.5 dex higher than $T_\mathrm{vir}$.
Another clear difference is visible at the high-mass end ($M_\mathrm{halo}\ga 10^{12.5}$ M$_\odot$). While the cold-mode density increases rapidly with mass, the overdensity of infalling gas remains $\sim10^2$, and while the cold-mode temperature remains $\sim 10^4~$K, the temperature of infalling gas increases with halo mass. Both of these differences can be explained by noting that, around the virial radius, hot-mode gas accounts for a greater fraction of the infall in higher mass haloes (see the bottom-middle panel of Fig. \[fig:halomassz2\]).
As was the case for the cold-mode gas, the radial peculiar velocity of infalling gas scales like the escape velocity (bottom-middle panel). Interestingly, although the mass flux-weighted median outflowing velocity is almost independent of halo mass, the high-velocity tail is much more prominent for low-mass haloes. Because the potential wells in these haloes are shallow and because the gas pressure is lower, the outflows are not slowed down as much before they reach the virial radius. The flux-weighted outflow velocities are larger than the inflow velocities for $M_\mathrm{halo}<10^{11.5}$ M$_\odot$, whereas the opposite is the case for higher-mass haloes.
Finally, the last panel of Fig. \[fig:halomassflux\] shows that the fraction of the gas that is outflowing around $R_{\rm vir}$ is relatively stable at about 30–40 per cent. Although the accretion rate is negative for $10^{10}$ M$_\odot<M_\mathrm{halo}<10^{11}$ M$_\odot$, which indicates net outflow, less than half of the gas is outflowing.
Effect of metal-line cooling and outflows driven by supernovae and AGN {#sec:SNAGN}
======================================================================
![image](figures/zoomsinglehalopropertiesdiff_z2.eps)
![image](figures/radialpropdiff_L025N512_z2p0_mass11p5to12p5_pecvel0p1.eps)
Fig. \[fig:halodiff\] shows images of the same $10^{12}$ M$_\odot$ halo as Fig. \[fig:halo\] for five different high-resolution (*L025N512*) simulations at $z=2$. Each row shows a different property, in the same order as the panels in the previous figures. Different columns show different simulations, with the strength of galactic winds increasing from left to right (although the winds are somewhat stronger in *NOZCOOL* than in *REF*). The first column shows the simulation without SN feedback and without metal-line cooling. The second column shows the simulation with SN feedback, but without metal-line cooling. The third column shows our reference simulation, which includes both SN feedback and metal-line cooling. The fourth column shows the simulation with density-dependent SN feedback, which is more effective at creating galactic winds for this halo mass. The last column shows the simulation that includes both SN and AGN feedback.
The images show substantial and systematic differences. More efficient feedback results in lower densities and higher temperatures of diffuse gas and in the case of AGN feedback, even some of the cold filaments are partially destroyed. Although the images reveal some striking differences, Fig. \[fig:haloradz2diff\] shows that the trends in the profiles of the gas properties, including the differences between hot and cold modes, are very similar in the different simulations. This is partly because the profiles are mass-weighted, whereas the images are only mass-weighted along the projected dimension. We would have seen larger differences if we had shown volume-weighted profiles, because the low-density, high-temperature regions, which are most affected by the outflows, carry very little mass, but dominate the volume. Although the conclusions of the previous sections are to first order independent of the particular simulation that we use, there are some interesting and clear differences, which we shall discuss below.
We can isolate the effect of turning off SN-driven outflows by comparing models *NOSN\_NOZCOOL* and *NOZCOOL*. Without galactic winds, the cold-mode densities and pressures are about an order of magnitude higher within $0.5 R_{\rm vir}$. On the other hand, turning off winds decreases the density of hot-mode gas by nearly the same factor around $0.1 R_{\rm vir}$. Galactic winds thus limit the build-up of cold-mode gas in the halo centre, which they accomplish in part by converting cold-mode gas into hot-mode gas (i.e. by shock-heating cold-mode gas to temperatures above $10^{5.5}~$K). We can see that this must be the case by noting that the hot-mode accretion rate is negative inside the haloes for model *NOZCOOL*. The absence of SN-driven outflows also has a large effect on the distribution of metals. Without winds, the metallicities of both hot- and cold-mode gas outside the halo are much lower, because there is no mechanism to transport metals to large distances. On the other hand, the metallicity of the cold-mode gas is higher within the halo, because the star formation rates, and thus the rates of metal production, are higher. This suggests that the metals in cold-mode gas are associated with stars formed locally, e.g. in infalling companion galaxies.
When the efficiency of the feedback is increased, as in models *WDENS* and particularly *AGN*, the cold-mode median radial velocity becomes less negative and the accretion rate of cold-mode gas inside haloes decreases. At the same time, the radial velocity and the outflow rate of the hot-mode gas increase. In other words, more efficient feedback reduces the inflow rate of cold-mode gas and boosts the outflow of hot-mode gas. The differences are particularly large outside the haloes. Whereas the moderate feedback implemented in model *REF* predicts net infall of hot-mode gas, the net accretion rate of hot-mode gas is negative out to about $4R_{\rm vir}$ when AGN feedback is included. Beyond the virial radius, stronger winds substantially increase the mass fraction of hot-mode gas. This comes at the expense of the hot-mode gas inside the haloes, which decreases if the feedback is more efficient, at least for $0.2 R_{\rm vir} < R < R_{\rm vir}$.
The effect of metal-line cooling can be isolated by comparing models *NOZCOOL* and *REF*. Without metal-line cooling, the cooling times are much longer. Consequently, the median temperature of the hot-mode gas remains above $10^6$ K (at least for $R> 0.1R_\mathrm{vir}$), whereas it suddenly drops to below $10^5$ K around $0.2 R_{\rm vir}$ when metal-line cooling is included. Thus, the catastrophic cooling flow of the diffuse, hot component in the inner haloes is due to metals. Indeed, while the median hot-mode radial peculiar velocity within $0.2 R_{\rm vir}$ is positive without metal-line cooling, it becomes negative (i.e. infalling) when metal-line cooling is included.
Evolution: Milky Way-sized haloes at $z=0$ {#sec:z0}
==========================================
![image](figures/radialprop_REF_L050N512_z0p0_mass11p5to12p5_pecvel0p1.eps)
Fig. \[fig:haloradz0\] is identical to Fig. \[fig:haloradz2\] except that it shows profiles for $z=0$ rather than $z=2$. For comparison, the dotted curves in Fig. \[fig:haloradz0\] show the corresponding $z=2$ results. As we are again focusing on $10^{11.5}$ M$_\odot<M_\mathrm{halo}<10^{12.5}$ M$_\odot$, the results are directly relevant for the Milky Way galaxy.
Comparing Fig. \[fig:haloradz2\] to Fig. \[fig:haloradz0\] (or comparing solid and dotted curves in Fig. \[fig:haloradz0\]), we see that the picture for $z=0$ looks much the same as it did for $z=2$. There are, however, a few notable differences. The overdensity profiles hardly evolve, although the difference between the cold and hot modes is slightly smaller at lower redshift. However, a constant overdensity implies a strongly evolving proper density ($\rho \propto (1+z)^3$) and thus also a strongly evolving cooling rate. The large decrease in the proper density caused by the expansion of the Universe also results in a large drop in the pressure and a large increase in the entropy.
The lower cooling rate shifts the peak of the cold-mode temperature profile from about 2$R_{\rm vir}$ to about $R_{\rm vir}$. While there is only a small drop in the temperature of the hot-mode gas, consistent with the mild evolution of the virial temperature of a halo of fixed mass ($T_{\rm vir} \propto (1+z)$; eq. \[\[eqn:virialtemperature\]\]), the evolution in the median temperature for all gas is much stronger than for the individual accretion modes. While at $z=2$ the overall median temperature only tracks the median hot-mode temperature around $0.5R_{\rm vir} < R< 2 R_{\rm vir}$, at $z=0$ the two profiles are similar at all radii.
The metallicity profiles do not evolve much, except for a strong increase in the median metallicity of cold-mode gas at $R \gg R_{\rm vir}$. Both the weak evolution of the metallicity of dense gas and the stronger evolution of the metallicity of the cold, low-density intergalactic gas far away from galaxies are consistent with the findings of @Wiersma2011, who found these trends to be robust to changes in the subgrid physics.
The absolute, net radial velocities of both the hot- and cold-mode components are smaller at lower redshift, as expected from the scaling of the characteristic velocity ($v_{\rm esc}\propto (1+z)^{1/2}$; eq. \[\[eqn:esc\]\]).
While at $z=2$ the net accretion rate was higher for the cold mode at all radii, at $z=0$ the hot mode dominates beyond $3R_\mathrm{vir}$. At low redshift the rates are about an order of magnitude lower than at $z=2$. For $R<0.4 R_{\rm vir}$ the net accretion rate is of order 1 M$_\odot$yr$^{-1}$, which is dominated by the cold mode, even though most of the mass is in the hot mode. Since a substantial fraction of both the cold- and hot-mode gas inside this radius is outflowing, the actual accretion rates will be a bit higher.
At $z=0$ the fraction of the mass that has been hotter than $10^{5.5}~$K exceeds 50 per cent at all radii and the profile shows a broad peak of around 80 per cent at $0.3 R_{\rm vir} < R < R_{\rm vir}$. Thus, hot-mode gas is more important at $z=0$ than at $z=2$, where cold-mode gas accounts for most of the mass for $R \la 0.3 R_{\rm vir}$ and $R\ga 2 R_{\rm vir}$. This is consistent with @Voort2011a, who investigated the evolution of the hot fraction in more detail using the same simulations.
Conclusions and discussion {#sec:concl}
==========================
We have used cosmological hydrodynamical simulations from the OWLS project to investigate the physical properties of gas in and around haloes. We paid particular attention to the differences in the properties of gas accreted in the cold and hot modes, where we classified gas that has remained colder (has been hotter) than $10^{5.5}$ K while it was extragalactic as cold-mode (hot-mode) gas. Note that our definition allows hot-mode gas to be cold, but that cold-mode gas cannot be hotter than $10^{5.5}$ K. We focused on haloes of $10^{12}~$M$_\odot$ at $z=2$ drawn from the OWLS reference model, which includes radiative cooling (also from heavy elements), star formation, and galactic winds driven by SNe. However, we also investigated how the properties of gas near the virial radius change with halo mass, we compared $z=2$ to $z=0$, we measured properties separately for inflowing and outflowing gas, and we studied the effects of metal-line cooling and feedback from star formation and AGN. We focused on mass-weighted median gas properties, but noted that volume-weighted properties are similar to the mass-weighted properties of the hot-mode gas, because most of the volume is filled by dilute, hot gas, at least for haloes with $T_{\rm vir} \ga 10^{5.5}$ K (see Figs. \[fig:dens\] and \[fig:halo\]).
Let us first consider the properties of gas just inside the virial radius of haloes drawn from the reference model at $z=2$ (see Fig. \[fig:halomassz2\]). The fraction of the gas accreted in the hot mode increases from 10 per cent for halo masses $M_\mathrm{halo}\sim 10^{10}~$M$_\odot$ to 90 per cent for $M_\mathrm{halo}\sim 10^{13}$ M$_\odot$. Hence, $10^{12}~$M$_\odot$ is a particularly interesting mass scale, as it marks the transition between systems dominated by cold and hot mode gas.
Although the cold streams are in local pressure equilibrium with the surrounding hot gas, cold-mode gas is physically distinct from gas accreted in the hot mode. It is colder ($T < 10^5~$K vs. $T\ga T_{\rm vir}$) and denser, particularly for high-mass haloes. While hot-mode gas at the virial radius has a density $\sim 10^2\left <\rho\right >$ for all halo masses, the median density of cold-mode gas increases steeply with the halo mass.
While the radial peculiar velocity of cold-mode gas is negative (indicating infall) and scales with halo mass like the escape velocity, the median hot-mode velocities are positive (i.e. outflowing) for $M_\mathrm{halo} \la 10^{11.5}$ M$_\odot$ and for larger masses they are much less negative than for cold-mode gas. Except for $M_\mathrm{halo} \sim 10^{10.5}$ M$_\odot$, the net accretion rate is positive. For $10^{12} < M_\mathrm{halo} < 10^{13}$ M$_\odot$ the cold- and hot-mode accretion rates are comparable.
While hot-mode gas has a metallicity $\sim 10^{-1}~Z_\odot$, the metallicity of cold-mode gas is typically significantly smaller and displays a much larger spread. The scatter in the local metallicity of cold-mode gas is large because the cold filaments contain low-mass galaxies that have enriched some of the surrounding cold gas. We emphasized that we may have overestimated the median metallicity of cold-mode gas because we find it to decrease with increasing resolution. It is therefore quite possible that most of the cold-mode gas in the outer halo still has a primordial composition.
The radial profiles of the gas properties of haloes with $10^{11.5} < M_\mathrm{halo} < 10^{12.5}$ M$_\odot$ revealed that the differences between gas accreted in the cold and hot modes vanish around $0.1 R_{\rm vir}$ (see Fig. \[fig:haloradz2\]), although the radius at which this happens decreases slightly with increasing resolution. The convergence of the properties of the two modes at small radii is due to catastrophic cooling of the hot gas at $R \la 0.2 R_{\rm vir}$. Interestingly, in the absence of metal-line cooling the hot-mode gas remains hot down to much smaller radii, which suggests that it is very important to model the small-scale chemical enrichment of the circumgalactic gas. While stronger winds do move large amounts of hot-mode gas beyond the virial radius, even AGN feedback is unable to prevent the dramatic drop in the temperature of hot-mode gas inside $0.2R_{\rm vir}$, at least at $z=2$.
While the density and pressure decrease steeply with radius, the mass-weighted median temperature peaks around $0.5-1.0R_{\rm vir}$. This peak is, however, not due to a change in the temperature of either the cold- or hot-mode gas, but due to the radial dependence of the mass fraction of gas accreted in the hot mode. The hot-mode fraction increases towards the halo, then peaks around $0.5-1.0R_\mathrm{vir}$ and decreases moving further towards the centre. Even outside the halo there is a significant amount of hot-mode gas (e.g. $\sim 30$ per cent at $10R_{\rm vir}$).
Beyond the cooling radius, i.e. the radius where the cooling time equals the Hubble time, the temperature of the hot-mode gas decreases slowly outwards, but the temperature of cold-mode gas only peaks around $2R_{\rm vir}$ at values just below $10^5~$K. The density for which this peak temperature is reached depends on the interplay between cooling (both adiabatic and radiative) and photo-heating. The metallicity decreases with radius and does so more strongly for cold-mode gas. Near $0.1 R_{\rm vir}$ the scatter in the metallicity of cold-mode gas is much reduced, although this could be partly a resolution effect.
The median radial peculiar velocity of cold-mode gas is most negative around $0.5-1.0R_{\rm vir}$. For the hot mode, on the other hand, it is close to zero around that same radius. Hence, the infall velocity of the cold streams peaks where the hot-mode gas is nearly static, or outflowing if the feedback is very efficient. We note, however, that the scatter in the peculiar velocities is large. For $R\la R_{\rm vir}$ much of the hot-mode gas is outflowing and the same is true for cold-mode gas at $R\sim 0.1R_{\rm vir}$.
Inside the halo the cold-mode accretion rate increases only slightly with radius ($d\ln \dot{M}/d\ln R \approx 0.4$), indicating that most of the mass is transported to the central galaxy. For the hot-mode, on the other hand, the accretion rate only flattens at $R\lesssim
0.4R_\mathrm{vir}$. This implies that the hot accretion mode mostly feeds the hot halo. However, hot-mode gas that reaches radii $\sim 0.1R_{\rm vir}$ is efficiently transported to the centre as a result of the strong cooling flow. Nevertheless, cold-mode accretion dominates the accretion rate at all radii.
Dividing the gas into inflowing and outflowing components yields results that are very similar to classifying the gas on the basis of its maximum past temperature. This is because inflowing gas is mostly cold-mode and outflowing gas is mostly hot-mode. The situation is, however, different for high-mass haloes ($M_\mathrm{halo}\ga 10^{12.5}$ M$_\odot$). Because the two accretion modes bring similar amounts of mass into these haloes, the properties of the infalling gas are intermediate between those of the cold and hot accretion modes.
When expressed in units of the mean density of the Universe, the $z=0$ density profiles (with radius expressed in units of the virial radius) are very similar to the ones at $z=2$. The same is true for the metallicities and temperatures, although the peak temperatures shift to slightly smaller radii (again normalized to the virial radius) with time. A fixed overdensity does imply that the proper density evolves as $\rho\propto (1+z)^3$, so the pressure (entropy) is much lower (higher) at low redshift. Infall velocities and accretion rates are also significantly lower, while the fraction of gas accreted in the hot mode is higher. The differences in behaviour between the two accretion modes are, however, very similar at $z=0$ and $z=2$.
Although there are some important differences, the overall properties of the two gas modes are very similar between our different simulations, and are therefore insensitive to the inclusion of metal-line cooling and galactic winds. Without SN feedback, the already dense cold-mode gas reaches even higher densities and the metallicity of hot-mode gas in the outer halo is much lower. Without metal-line cooling, the temperature at radii smaller than $0.2R_{\rm vir}$ is much higher. With strong SN feedback or AGN feedback, a larger fraction of the gas is outflowing, the outflows are faster, and the fraction of the gas that is accreted in the hot mode peaks at a much larger radius (about $3R_{\rm vir}$ instead of $0.5-1.0 R_{\rm vir}$).
@Keres2011 have recently shown that some of the hot gas properties depend on the numerical technique used to solve the hydrodynamics. They did not, however, include metal-line cooling or feedback. They find that the temperatures of hot gas (i.e. $T>10^5$ K) around $10^{12-13}$ M$_\odot$ haloes are the same between the two methods, but that the median density and entropy of $10^{12}$ M$_\odot$ haloes are somewhat different, by less than a factor of two. These differences are comparable to the ones we find when using different feedback models. They also show that the radial velocities are different at small radii. Our results show that including metal-line cooling decreases the radial velocities, whereas including supernova or AGN feedback increases the radial velocities significantly. Even though there are some differences, the main conclusions of this work are unchanged. The uncertainties associated with the subgrid implementation of feedback and the numerical method are therefore unlikely to be important for our main conclusions.
Cold (i.e. $T \ll 10^5$ K) outflows are routinely detected in the form of blueshifted interstellar absorption lines in the rest-frame UV spectra of star-forming galaxies [e.g. @Weiner2009; @Steidel2010; @Rubin2010; @Rakic2011a]. This is not in conflict with our results, because we found that the outflowing gas spans a very wide range of temperatures and because the detectable UV absorption lines are biased towards colder, denser gas. Additionally, the results we showed are mass-weighted, but if we had shown volume-weighted quantities, the outflow fractions would have been larger. The inflowing material has smaller cross-sections and is therefore less likely to be detected [e.g. @FaucherKeres2011; @Stewart2011a].
How can we identify cold-mode accretion observationally? The two modes only differ clearly in haloes with $T_{\rm vir} \gg 10^5$ K ($M_{\rm halo} \gg 10^{10.5} ~{\rm M}_\odot ((1+z)/3)^{-3/2}$), because photo-ionization by the UV background radiation ensures that all accreted gas is heated to temperatures up to $\sim 10^5$ K near the virial radius. Near the central galaxy, $R\la 0.1R_{\rm vir}$, it is also difficult to distinguish the two modes, because gas accreted in the hot mode is able to cool.
In the outer parts of sufficiently massive haloes the properties of the gas accreted in the two modes do differ strongly. The cold-mode gas is confined to clumpy filaments that are approximately in pressure equilibrium with the diffuse, hot-mode gas. Besides being colder and denser, cold-mode gas typically has a much lower metallicity and is much more likely to be infalling. However, the spread in the properties of the gas is large, even for a given mode and a fixed radius and halo mass, which makes it impossible to make strong statements about individual gas clouds. Nevertheless, it is clear that most of the dense ($\rho \gg 10^2 \left <\rho\right >$) gas in high-mass haloes ($T_{\rm vir}\ga 10^6$ K) is infalling, has a very low metallicity and was accreted in the cold-mode.
Cold-mode gas could be observed in UV line emission if we are able to detect it in the outer halo of massive galaxies. Diffuse Lyman-$\alpha$ emission has already been detected [e.g. @Steidel2000; @Matsuda2004], but its interpretation is complicated by radiative transfer effects and the detected emission is more likely scattered light from central H<span style="font-variant:small-caps;">ii</span> regions (e.g. @Furlanetto2005 [@Faucher2010; @Steidel2011; @Hayes2011; @Rauch2011], but see also @Dijkstra2009 [@Rosdahl2012]). Metal-line emission from ions such as C <span style="font-variant:small-caps;">iii</span>, C <span style="font-variant:small-caps;">iv</span>, Si <span style="font-variant:small-caps;">iii</span>, and Si <span style="font-variant:small-caps;">iv</span> could potentially reveal cold streams, but current facilities do not probe down to the expected surface brightnesses [@Bertone2010b; @Bertone2012].
The typical temperatures ($T\sim 10^4$ K) and densities ($\rho \ga 10^2 \left <\rho\right >$) correspond to those of strong quasar absorption line systems. For example, at $z=2$ the typical H<span style="font-variant:small-caps;">i</span> column density is[^4] $N_{\rm H\,\textsc{i}} \sim 10^{16} ~{\rm cm}^{-2}(\rho/[10^2 \left <\rho\right >])^{3/2}$ [@Schaye2001a], with higher column density gas more likely to have been accreted in the cold mode. At low redshift the H<span style="font-variant:small-caps;">i</span> column densities corresponding to a fixed overdensity are about 1–2 orders of magnitude lower [@Schaye2001a].
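As a quick numerical illustration of this scaling (a rough sketch only: the normalisation and slope are taken directly from the relation quoted above, and self-shielding, which becomes important at the highest columns, is ignored):

```python
# Evaluate the quoted z ~ 2 scaling N_HI ~ 1e16 cm^-2 * (rho / 100 <rho>)^(3/2)
# for a few overdensities Delta = rho / <rho>. Purely illustrative.

def n_hi_z2(overdensity):
    """Approximate HI column density in cm^-2 at z ~ 2 for a given overdensity."""
    return 1e16 * (overdensity / 1e2) ** 1.5

for delta in (1e2, 1e3, 1e4):
    print(f"Delta ~ {delta:7.0f}  ->  N_HI ~ {n_hi_z2(delta):.1e} cm^-2")
```

Overdensities of order $10^3$ thus already correspond to Lyman limit columns at $z=2$, consistent with the connection to cold streams discussed below.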
Indeed, simulations show that Lyman limit systems (i.e. $N_{\rm H\,\textsc{i}} > 10^{17.2} ~{\rm cm}^{-2}$) may be used to trace cold flows [@FaucherKeres2011; @Fumagalli2011a; @Voort2011c] and @Voort2011c have demonstrated that cold-mode accretion is required to match the observed rate of incidence of strong absorbers at $z=3$. Many strong QSO absorbers also tend to have low metallicities [e.g. @Ribaudo2011; @Giavalisco2011; @Fumagalli2011b], although it should be noted that metallicity measurements along one dimension may underestimate the mean metallicities of three-dimensional gas clouds due to the expected poor small-scale metal mixing [@Schaye2007]. We also note that most Lyman limit systems are predicted to arise in or around haloes with masses that are much lower than required for the presence of stable accretion shocks near the virial radius, so that they will generally not correspond to cold streams penetrating hot, hydrostatic haloes [@Voort2011c]. To study those, it is therefore more efficient to target sight lines to QSOs close to massive foreground galaxies [e.g. @Rakic2011b; @FaucherKeres2011; @Stewart2011a; @Kimm2011].
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank the referee, Daniel Ceverino, for helpful comments and Robert Crain, Alireza Rahmati and all the members of the OWLS team for valuable discussions. The simulations presented here were run on Stella, the LOFAR BlueGene/L system in Groningen, on the Cosmology Machine at the Institute for Computational Cosmology in Durham as part of the Virgo Consortium research programme, and on Darwin in Cambridge. This work was sponsored by the National Computing Facilities Foundation (NCF) for the use of supercomputer facilities, with financial support from the Netherlands Organization for Scientific Research (NWO), also through a VIDI grant, and from the Marie Curie Initial Training Network CosmoComp (PITN-GA-2009-238356).
Resolution tests
================
![image](figures/radialpropres_N512_z2p0_mass11p5to12p5_pecvel0p1.eps)
![image](figures/masspropres_N512_z2p0_pecvel0p1.eps)
We have checked (but do not show) that the results presented in this work are converged with respect to the size of the simulation volume if we keep the resolution fixed. The only exception is hot-mode gas at $R>5R_\mathrm{vir}$, for which the radial velocities and accretion rate require a box of at least 50$h^{-1}$Mpc on a side (which implies that our fiducial simulation is sufficiently large).
Convergence with resolution is, however, more difficult to achieve. In Figs. \[fig:haloradres\] and \[fig:halomassres\] we show again the radial profiles and mass dependence of the halo properties for the hot- and cold-mode components at $z=2$. Shown are three different simulations of the reference model, which vary by a factor of 64 (8) in mass (spatial) resolution. All trends with radius and halo mass are very similar in all runs, proving that most of our conclusions are robust to changes in the resolution. Below we will discuss the convergence of hot- and cold-mode gas separately and in more detail.
The convergence is generally excellent for hot mode gas. As the resolution is increased, the density of hot-mode gas decreases slightly and the temperature drop close to the halo centre shifts to slightly smaller radii, which also affects the pressure and entropy. There is a small upturn in the density of hot-mode gas at the virial radius as we approach the halo mass corresponding to the imposed minimum of 100 dark matter particles, showing that we may have to choose a minimum halo mass that is a factor of 5 higher for complete convergence. The radial peculiar velocity increases slightly with resolution, causing the hot-mode accretion rate to decrease.
Convergence is more difficult to achieve for cold-mode gas. The density of cold-mode gas inside haloes, and thereby also the difference between the two modes, increases with the resolution. The pressure of the cold-mode gas also increases somewhat with resolution, which leads to a smaller difference with the pressure of hot-mode gas. The cold-mode radial velocity becomes more negative, increasing the difference between the two modes.
The median metallicity of the cold-mode gas decreases strongly with increasing resolution for $R>0.3R_\mathrm{vir}$. The metallicity difference between the two modes therefore increases, although the distributions still overlap (not shown), and the radius at which the metallicities of the two modes converge decreases. In fact, the convergence of the median metallicity of cold-mode gas is so poor that we cannot rule out that it would tend to zero at all radii if we keep increasing the resolution.
If we use particle metallicities rather than SPH smoothed metallicities (see Section \[sec:metal\]), then the median metallicity of both modes is lower and the median cold-mode metallicity plummets to zero around the virial radius (not shown). The decrease in metallicity with increasing resolution is in that case less strong, but still present. The unsmoothed hot-mode metallicities are also not converged and lower than the (converged) smoothed hot-mode metallicities, but they increase with increasing resolution. The difference between the two modes therefore increases with resolution.
With increasing resolution, the radial velocities of cold-mode gas become slightly more negative within the halo. The net accretion rates and hot fractions are converged.
Even though some properties are slightly resolution dependent, or strongly so for the case of the metallicity of cold-mode gas, all our conclusions are robust to increases in the numerical resolution.
[^1]: E-mail:fvdvoort@strw.leidenuniv.nl
[^2]: The most significant discrepancy is in $\sigma_8$, which is 8 per cent, or $2.3\sigma$, lower than the value favoured by the WMAP 7-year data.
[^3]: Note that for haloes with $T_{\rm vir} \la 10^{5.5}$ K the median hot-mode temperature is affected by the requirement $T_{\rm max} > 10^{5.5}$ K (our definition of hot-mode gas).
[^4]: For $N_{\rm H\,\textsc{i}} \ga 10^{18}~{\rm cm}^{-2}$ the relation is modified by self-shielding, see e.g. @Schaye2001b [@Altay2011].
---
author:
- |
Sen Tian[^1] Clifford M. Hurvich Jeffrey S. Simonoff\
\
Department of Technology, Operations, and Statistics,\
Stern School of Business, New York University.
bibliography:
- 'reference.bib'
title: '**On the Use of Information Criteria for Subset Selection in Least Squares Regression**'
---
[^1]: E-mail: stian@stern.nyu.edu
---
author:
- Chiara Panosetti
- 'Simon B. Anniés'
- Cristina Grosu
- Stefan Seidlmayer
- Christoph Scheurer
bibliography:
- 'references.bib'
title: 'DFTB modelling of lithium intercalated graphite with machine-learned repulsive potential'
---
Abstract
========
Lithium ion batteries have been a central part of consumer electronics for decades. More recently, they have also become critical components in the rapidly growing technological fields of electric mobility and intermittent renewable energy storage. However, many fundamental principles and mechanisms are not yet understood to a sufficient extent to fully realize the potential of the incorporated materials. The vast majority of current lithium ion batteries make use of graphite anodes. Their working principle is based on intercalation—the embedding and ordering of (lithium-) ions in the two-dimensional spaces between the graphene sheets. This important process—it sets the upper bound on a battery’s charging speed and plays a decisive role for its longevity—is characterized by multiple phase transitions, ordered and disordered domains, as well as non-equilibrium phenomena, and is therefore quite complex. In this work, we provide a simulation framework for the purpose of better understanding lithium intercalated graphite and its behaviour during use in a battery. In order to address the large system sizes and long time scales required to investigate said effects, we identify the highly efficient, but semi-empirical Density Functional Tight Binding (DFTB) method as a suitable approach and combine particle swarm optimization (PSO) with the machine learning (ML) based Gaussian Process Regression (GPR) to obtain the necessary parameters. Using the resulting parametrization, we are able to reproduce experimental reference structures at a level of accuracy which is in no way inferior to much more costly *ab initio* methods. We finally present structural properties and diffusion barriers for some exemplary system states.
Introduction {#intro}
============
Within the past decade, studies investigating the consequences of man-made climate change [@Sharp2011; @Fisher2012; @Program2018] have become more specific, the predicted time frames shorter and the warnings more urgent. The immediate and radical reduction of carbon dioxide emissions by replacing fossil fuel based energy sources with renewable ones has been found to be the only reasonable approach to at least limit those consequences. [@Anderson2016] While the generation of electric energy from wind and sun is already quite advanced and efficient, its storage and transport are the main factors holding it back compared to coal and oil. Currently, two main approaches are being pursued in order to eliminate these drawbacks. One aims directly at the synthesis of alternative liquid or gas-phase fuels. The other intends to improve upon existing battery technology—especially lithium ion batteries—enough to make it a serious contender in terms of energy sustenance.

In this work, we intend to lay some groundwork for gaining deeper insight into some of the atomistic mechanisms limiting the (dis-)charging speed and lifetime of the most common types of lithium ion batteries, namely those with graphite intercalation anodes. Ever since graphite was ascertained experimentally and theoretically to be an excellent candidate as an anode for Li-ion batteries, numerous attempts have been made at fully describing the working system. [@Hennig1959; @Guerard1975; @Hawrylak1984; @Conard1994; @Nitta2015] Most of the electrochemical properties of the anode material itself are well-known. However, transport processes under strongly driven operating conditions in particular, such as fast charging, are only poorly understood at a microscopic level. These technologically important macroscopic conditions are accompanied [*e.g.*]{} by temperature variations, leading to a capacity fade during ageing, as well as lithium plating. All of them limit the lifetime of the battery. [@Gallagher2016; @Wandt2018; @Yang2018] Against this background, experiments and theory are pushed quite far to gain insight into the real processes occurring during the electrochemical operation.

Depending on the quantities accessible via experiments and theory, two different hypotheses are regularly invoked to explain the findings in the range of 0% (graphite) to 100% (LiC$_6$) state of charge (SOC): the staging and the domain model. The lithium intercalation process shows evidence of multiple phase transitions in the voltage vs. SOC diagram. The corresponding system configurations are termed “stages” I, II and so forth. In the simple staging model, a stage-$n$ configuration is one in which every $n$-th gallery (space between graphene sheets) is filled with lithium while the remaining galleries stay empty (see Figure \[fig:staging\]).
![Sketch of Li-intercalated graphite in stage I to III configurations [@Smith2017]. Violet spheres represent lithium ions, black lines correspond to graphene sheets. Bottom right: illustration of the domain model [@Daumas1969]. The structure has the same nominal stoichiometry as the structure in stage II (top right). \[fig:staging\]](./staging_compact.png "fig:"){width="0.9\linewidth"}\
In the domain model, these motifs are not assumed to range over meso-/macroscopic dimensions but to form regions of finite lateral extent. Consequently, it is quite clear that different SOC with the same nominal stoichiometry LiC$_x$ will not be configurationally homogeneous, making Li-intercalated graphite a profoundly non-trivial system to address.
In order to effectively connect to experimental studies, a theoretical framework for simulating large-scale and long-duration non-equilibrium processes in the graphite anode, based on kinetic Monte Carlo (kMC) [@Andersen2019] simulations is required. The first step towards this goal is gaining the ability to quickly and accurately calculate diffusion barriers on the fly, which is the primary motivation of this work. This requires the ability to reproduce reliably and accurately the layer distances (ideally of all possible configurations, but predominantly of the dilute, low-saturation stages) and the forces affecting the lithium-ions, while the strains within the graphene layers are of lesser importance.
Large-scale atomistic simulations typically pursue force field approaches [@Duin2001] for those systems where energetics and kinetics are well described within the upper end of the SOC range. However, those approaches are limited when it comes to the entire range of different SOC, from extremely diluted stages to fully concentrated ones. Recently, a Gaussian Approximation Potential (GAP) was reported to be able to describe amorphous carbon well. [@Deringer2017] However, when the latter was later extended to model lithium intercalation, [@Fujikake2018] it became apparent that the insertion of lithium into those host structures requires a non-trivial description of the electrostatic interaction. Contrary to most approaches, including the one presented in this work, Fujikake *et al.* did not treat the full Li-C system, but attempted to model the energy and force differences arising from lithium intercalation separately, and then added them to the carbon GAP. More specifically, their machine learning process (ML) is based on fitting the energy and force differences between identical carbon host structures, but with and without an intercalated lithium atom. However due to the fact that the lithium intercalation energies are significantly larger in magnitude than the electrostatic lithium-lithium interaction energies, they were not able to recover the latter from the data to a satisfactory degree and had to manually add an extra correction term (fitted to DFT) in order to account for those contributions. To avoid similar shortcomings, we rather base our approach on Density Functional Tight Binding (DFTB) [@Elstner1998], a semi-empirical—and thus computationally much cheaper—approximation to Density Functional Theory (DFT), [@Kohn1965] which has been the most common technique for high-accuracy electrochemical simulations for many decades [@Koskinen2009]. However, since DFTB’s speedup is achieved by pre-calculating atomic interactions to avoid calculating them at runtime, this comes at the cost—or rather, initial investment—of pairwise parametrization. As of now, no Li-Li and Li-C DFTB parameters are available. In the following, we combine for the first time the recently developed Particle Swarm Optimization [@Shi1998] parametrization approach as first proposed by Chou [*et al.*]{} [@Chou2016] with a more flexible ML repulsive potential [@Engelmann2018], to obtain finely-tuned parameters for this system—taking advantage of its physics, albeit perhaps at the expense of some transferability. Let us however stress that the parametrization procedure employed here remains completely general, as the system specificity lies entirely in the choice of the training set(s).
The electronic part {#electro}
===================
![image](./bands_lithium.png){width="0.24\linewidth"} ![image](./bands_graphene.png){width="0.24\linewidth"} ![image](./bands_diamond.png){width="0.24\linewidth"} ![image](./bands_LiC6.png){width="0.24\linewidth"}
In DFTB jargon, the so-called “electronic part” includes the semi-empirical band structure and the Coulombic contributions to the total energy of the system. [@Koskinen2009] These depend parametrically on the diagonal elements $\varepsilon$ of the non-interacting Hamiltonian, the Hubbard-$U$ and a confinement potential which is used to cut off the diffuse tails of the basis orbitals. For the free atom, the first two quantities are tabulated for most elements or can be calculated with DFT. However, using the free atom values is an approximation, and the decision whether it is justified must be made carefully on a case to case basis. The confinement potential, on the other hand, is always treated as a parameter. Quadratic [@Stohr2018] and general power-law functional forms [@Wahiduzzaman2013] are commonly used, as well as the Woods-Saxon potential [@Chou2016] (also employed here) which assures a smoother transition to zero in the orbital tails. Each of these parameters needs to be determined for every chemical species present in the system of interest, typically in a non-linear optimization process. In the PSO, each particle then represents a set of parameters ($\left\{\varepsilon\right\}$, $\left\{U\right\}$, and the confinement constants), with which the DFTB interaction is constructed, so that the parametrization can be improved by minimizing a cost function. The central task is thus the definition of a meaningful cost function. Frequently, one uses the weighted sum of an arbitrary number of contributions $f(\sigma^{DFT}, \sigma^{DFTB})$, each providing a measure of the deviation between DFT and DFTB for some system property $\sigma$. Hereby, as we are optimizing the electronic parameters only, the chosen target properties must not depend on repulsion. For our system, we compare the band structures of metallic lithium, graphene and diamond. Additional details on the definition of the corresponding cost function are provided in the SI. Figure \[fig:bandstructures\] shows our resulting band structures. Overall, we recognize decent agreement for all band structures, while some deviations are expected given the minimal basis in DFTB. For example, the pronounced mismatch in the conduction band at the $H$ point in the lithium band structure as well as the incorrectly direct band gap of diamond can be ascribed to this over-simplification in the DFTB model. For the two carbon systems, we see very good qualitative agreement for most regions of the band structures, but notice a small degree of overall compression towards the Fermi level. Given the systematic nature of this imperfection, we speculate that for further improvement, it would probably be necessary to include $U$ and $\varepsilon$ in our parameter space, which would simultaneously increase the dimensionality of the optimization problem. As an additional validation criterion, we examine the charge population (which DFTB provides by default) for the lithium ions in LiC$_6$. Our parametrization produces a value of $0.853\,e$, in agreement with the value of $0.86\,e$ calculated by Krishnan [@Krishnan2013] with Bader charge analysis [@Bader1990]. Given this excellent agreement and also considering the fact that the repulsion potential is capable of quite effectively correcting small imperfections in the electronic part, we decide not to optimize the latter any further in this work—a decision justified in retrospect by the excellent results we present. 
However, let us still emphasize the opportunity for improvement here, should it eventually become necessary.
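To make the role of the cost function more concrete, the sketch below outlines a band-structure objective of the kind described above together with a bare-bones PSO minimiser. The helper `compute_dftb_bands` (which would rebuild the Slater-Koster tables for a trial confinement-parameter set and return the band energies) as well as the weights, bounds and swarm settings are placeholders; the cost function actually used is specified in the SI.

```python
import numpy as np

def band_cost(params, reference_bands, weights=None):
    """Weighted mean-squared deviation between DFT and DFTB band energies.

    `reference_bands` maps a system name ('Li_bcc', 'graphene', 'diamond') to an
    array of DFT band energies of shape (n_kpoints, n_bands), aligned to E_F.
    `compute_dftb_bands` is assumed to return an array of the same shape for the
    trial electronic parameters `params`.
    """
    weights = weights or {name: 1.0 for name in reference_bands}
    cost = 0.0
    for name, e_dft in reference_bands.items():
        e_dftb = compute_dftb_bands(name, params)   # placeholder helper
        cost += weights[name] * np.mean((e_dft - e_dftb) ** 2)
    return cost

def particle_swarm(f, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO over box constraints; `bounds` is an array of shape (dim, 2)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()
```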
The repulsion potential {#rep}
=======================
It is common practice to assume some analytical form for the repulsive potential and fit the chosen functional parameters as to minimize a set of DFT-DFTB force differences [@Koskinen2009]—a protocol easily implemented also for the PSO approach. However, the main limitation and bias results from the choice of said parametrized functional form. It needs to be sufficiently flexible to cover a large space of systems and bonding situations. This typically yields a high dimensional non-linear optimization problem, which might still be insufficient to capture unexpected subtle, yet extremely relevant physical features. We rather adopt the method recently developed by A. Engelmann [@Engelmann2018], which employs Gaussian Process Regression (GPR) [@Rasmussen2006] to create a flexible functional form “on the fly”, while adapting to the physics captured by the training data set, instead of forcing us to guess it *a priori*. In the SI, we give a short introduction to the method and explain the character and effect of the related hyperparameters, referring the reader to Rasmussen [@Rasmussen2006] for the underlying stochastic theory and to Engelmann [@Engelmann2018] for the application to DFTB repulsive potentials. For the global damping, correlation distance, and data noise hyperparameters, we verified (see SI) that a sizeable subspace of the overall hyperparameter-space is appropriate, and choosing pretty much any combination of values within that subspace will produce very similar, correct results. The same is not necessarily true for the cutoff radii of the C-C and the Li-C repulsion. Since the electronic energy contribution is entirely based on just a sum of non-interacting atomic contributions, the repulsion potential has to account for different chemical environments affecting the same type of atom. In a GPR setting it is therefore of paramount importance to sample a sufficiently large set of training data which covers all interatomic distance ranges and chemical environments relevant for a faithful representation of the system studied. Ideally, it should also be ascertained that the model quality is stable w.r.t. the explicit choice of hyperparameters such as the cutoff radii.
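Purely to illustrate the flexibility of a GPR pair potential compared to a fixed analytical form, a stripped-down sketch is given below. It regresses a smooth function of the pair distance through scattered scalar targets with an RBF kernel and a noise term (the analogues of the correlation-distance and data-noise hyperparameters); the actual scheme of [@Engelmann2018] additionally fits force residues through the derivative of $V_{\mathrm{rep}}$ and applies a cosine-shaped cutoff, and all numbers below are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

r_cut = 2.6  # C-C repulsion cutoff in Angstrom, as selected later in the text

# Hypothetical scalar training targets (pair distance in Angstrom -> energy in eV):
r_train = np.array([1.2, 1.4, 1.6, 1.9, 2.2, 2.5])[:, None]
y_train = np.array([4.0, 1.8, 0.9, 0.35, 0.10, 0.01])

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(r_train, y_train)

# Smooth, non-parametric pair potential with an uncertainty estimate:
r_grid = np.linspace(1.0, r_cut, 50)[:, None]
v_rep, v_std = gpr.predict(r_grid, return_std=True)
```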
The training data {#training}
-----------------
![Interlayer distances for graphite (grey), LiC$_{12}$ (SOC 50%, grey-purple) and LiC$_6$ (SOC 100%, bright purple) as a function of C-C repulsion cutoff trained. Note that for LiC$_{12}$, there are two different layer distances to consider: one for the empty gallery and one for the full gallery. Here, we plot the average of the two. The dashed lines show the experimental layer distances we aim to reproduce (Sources: Trucano [*et al.*]{} [@Trucano1975] (graphite), Vadlamani [*et al.*]{} [@Vadlamani2014] (LiC$_{12}$ and LiC$_6$). The green coloured area represents the range within which we are satisfied with the performance.[]{data-label="fig:vsCC"}](./all_vs_CC_colors.png){width="0.9\linewidth"}
In terms of DFT functional, our starting point is PBE [@Perdew1996], which has been used by the majority of researchers working on intercalation phenomena and is known to describe LiC$_6$ well. However, it does not reproduce the dispersive interaction between graphene sheets. In order to address this, we finally (see “Set 3” below) combine the reference PBE calculation with a Many Body Dispersion (MBD) treatment and the DFTB model with a computationally cheap Lennard Jones (LJ) [@Zhechkov2005] dispersion correction [@Rappe1992]. The rationale for this choice is that PBE should reproduce galleries containing many lithium atoms correctly and LJ-dispersion should predict empty galleries well, while not interfering too much with the PBE-description of the concentrated ones. However, it is unclear, how this interaction shapes out for intermediate, dilute lithium stoichiometries. During our investigations, we find that this approach works somewhat decently, but needs some controlled adjustments (vide infra) in order to produce truly satisfactory results.
As a first guess, we construct a set of training structures (Set 1) which consists of a balanced mix of Li$_n$C$_{36}$ super-cells ($n \in (0, 1, ..., 6)$), in order to represent the entire range of charging states (exemplary structures are shown in the SI). Additionally, those structures are rattled (each atom randomly displaced), as well as compressed or expanded. This procedure yields a smooth distribution of bond lengths and forces. We then train a GPR repulsion potential by matching DFTB against PBE forces for this structural ensemble, aiming at a first, mostly transferable model. The standard LJ DFTB correction is subsequently applied on top of this parametrized DFTB model. With this approach, we are able to find parametrizations that reproduce all layer distances (of graphite, LiC$_{12}$ and of LiC$_6$) correctly, albeit not for a stable range of all parameters (in particular the Li-C cutoff, see below). As shown in Figure \[fig:vsCC\], the choice of cutoff radius for the C-C repulsion potential does not have a major influence on the layer-distances for quite a large range of values. In fact, the point at which the predictions stop being accurate can be identified as approximately the experimental values for the interlayer distances. Going beyond that with the cutoff radius essentially corresponds to including interlayer interactions in the potential fit, mixing their description with the intralayer covalent bonds. Thus, the restriction of the cutoff radius we find here is physically motivated by the range separation of the interactions that characterize our system: as the 2nd next neighbour distance in a relaxed graphene sheet is around $2.45$ Å and the layer distance is $3.35$ Å, the cutoff range defined by the (smallest) plateau in Figure \[fig:vsCC\] represents a sweet spot where the GPR learns 2nd next neighbour interactions but does not yet (mistakenly) take any interlayer interactions (even in the compressed structures) into account in the repulsion potential. In light of these findings, we select the cutoff value $2.6$ Å for the C-C-repulsion potential. Indeed, we did not encounter any reason to change this selection during the entirety of this work (despite rigorously testing it for each of the training data sets).
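A schematic of how such a training set can be assembled is shown below; the file names, the scaling range and the rattling amplitude are illustrative stand-ins, not the exact Set 1 settings.

```python
import numpy as np
from ase.io import read, write

# Build Set-1-style training structures: Li_nC_36 supercells, isotropically
# compressed or expanded and then "rattled" (random atomic displacements).

base_structures = [read(f"Li{n}C36.xyz") for n in range(7)]  # n = 0..6 Li per cell

training_set = []
for atoms in base_structures:
    for i, scale in enumerate((0.96, 0.98, 1.00, 1.02, 1.05)):
        image = atoms.copy()
        new_cell = np.array(image.get_cell()) * scale
        image.set_cell(new_cell, scale_atoms=True)  # compress/expand the cell
        image.rattle(stdev=0.05, seed=i)            # displacements in Angstrom
        training_set.append(image)

write("set1_training.extxyz", training_set)  # input for the DFT reference calculations
```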
However, with this first training set we do not obtain an equally stable plateau as a function of the Li-C repulsive cutoff (see SI). Furthermore, we discover that the quite strongly distorted graphite planes in these structures lead to large forces compared with those acting on the intercalated lithium-ions hindering the performance in lithium-forces prediction. We tackle the second problem first: while the rattled, scaled structures in Set 1 cover a sufficiently large range of bond lengths, they only account for configurations with the lithium-ions sitting over the centre of a graphite ring, *i.e.* in a local energy minimum. We recognize this as the reason for the comparably small lithium-forces. In order to balance out this structural bias, we calculate a number of transition paths for lithium diffusion processes using a Nudged Elastic Band (NEB) method [@Henkelman2000; @Henkelman2000a]. Exemplary structures can be found in the SI. Now, we are able to extract structures from these trajectories, in which the lithium ions are subject to stronger forces commensurable with the graphite-layers. For our second training set (Set 2), we replace around $45\%$ of the rattled and scaled structures with those transition path geometries. By this measure, we are able to improve the accuracy for predicting forces on Li-ions significantly, without sacrificing the description of the graphite layers. However, while we do observe a plateau for the resulting layer distances with respect to the Li-C cutoff, the interlayer distances are not reproduced equally well as in Figure \[fig:vsCC\] for Set 1 (see Figure \[fig:vsLiC\], yellow area), while only the LiC$_{12}$ interlayer distances assume correct values (see Figure \[fig:vsLiC\], green areas), yet outside the plateau.
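The transition-path sampling for Set 2 can be sketched along the following lines; the endpoint files, the number of images, the convergence threshold and the `make_calculator()` stand-in for the attached reference calculator are all placeholders.

```python
from ase.io import read
from ase.neb import NEB
from ase.optimize import BFGS

# Intra-layer Li hop between two neighbouring hexagon centres: relax a nudged
# elastic band and harvest the intermediate images, in which the Li ion
# experiences in-plane forces commensurable with those on the carbon atoms.

initial = read("LiC36_site_A.xyz")
final = read("LiC36_site_B.xyz")

n_images = 7
images = [initial] + [initial.copy() for _ in range(n_images - 2)] + [final]
for image in images:
    image.calc = make_calculator()   # placeholder for the reference calculator

band = NEB(images, climb=True)
band.interpolate()                   # linear initial guess between the endpoints
BFGS(band).run(fmax=0.05)            # relax the band; forces in eV/Angstrom

set2_candidates = images[1:-1]       # off-minimum geometries added to the training set
```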
![Interlayer distances for LiC$_{12}$ (SOC 50%, grey-purple) and LiC$_{6}$ (SOC 100%, bright purple) as a function of Li-C repulsion cutoff, with a fixed C-C cutoff set to 2.6 Å. The repulsion was trained on a set analogous to Set 1 ([*cf.*]{} text), where 45% of the structures were replaced by geometries randomly extracted from intra-layer Li diffusion paths. For LiC$_{12}$, the plotted interlayer distance is the average between the values for the filled and the empty gallery. The dashed lines show the experimental layer distances. The yellow coloured area represents the range within which the results are stable, however at a wrong value.[]{data-label="fig:vsLiC"}](./all_vs_LiC_set2.png){width="0.9\linewidth"}
![Interlayer distances for LiC$_{12}$ (SOC 50%, grey-purple) and LiC$_{6}$ (SOC 100%, bright purple) as a function of Li-C repulsion cutoff, with a fixed C-C cutoff set to 2.6 Å. The repulsion was trained on a set analogous to Set 2 ([*cf.*]{} text), where 55% of the structures were replaced by geometries with MBD-corrected forces. For LiC$_{12}$, the plotted interlayer distance is the average between the values for the filled and the empty gallery. The dashed lines show the experimental layer distances. []{data-label="fig:vsLiC_set3"}](./all_vs_LiC_set3.png){width="0.9\linewidth"}
![image](./landscapes.png){width="0.9\linewidth"}
This behaviour suggests that our problem here does not lie in the choice of the training set, but rather in the treatment of long-ranged interactions.
Let us consider the underlying predicament: so far, the DFTB-part of the force residues used for the ML process is calculated without LJ dispersion correction. We then construct the repulsion potential with the purpose of making those DFTB calculations match references based on PBE-DFT, which reliably predicts layer distances for LiC$_6$. By then using LJ (required to obtain the correct empty layer distance in graphite) in our actual DFTB calculations (after the parametrization process), we cause the aforementioned offset for highly lithiated compounds. Using LJ already for the force-residue calculations during the ML seems like the obvious solution to this problem. However, this presents a new issue in the lower-saturation range (Li$_x$C$_6$, $x<0.5$). There, we previously fitted the repulsion to PBE-DFT references, which are not correct in that range without dispersion correction. The resulting DFTB forces are then shifted by LJ towards the correct value (as is indicated by the quite decent results for LiC$_{12}$ with Set 2). But after the modification, we would then fit the *final* DFTB forces (that result after applying the LJ) to the (incorrect) PBE-DFT references, thus improving our performance for highly saturated system states, but ruining it for dilute ones, by effectively double counting dispersive contributions. It becomes apparent that in order to make this approach work, we need to utilize dispersion corrected DFT reference forces which are also correct for low saturation states and, at the same time, compatible with the computationally cheap DFTB-LJ correction.
Our ansatz is that we can essentially—to a degree—encode the difference between the LJ dispersion and the “true” dispersion into the repulsion potential. At this point we stress that *ideally*, both the true, non-local exchange correlation functional in DFT and an ideal repulsion energy in DFTB would already encompass all dispersion effects, and it is solely due to approximations in the derivations, [*e.g.*]{} of GGAs, that they do not in these models. Therefore, rather than mixing our repulsion potential with something fundamentally different (which would be physically questionable), what we do here simply corresponds to partially adding a contribution back in, that should have been there in the first place. To our knowledge, the currently best way to calculate dispersion corrected lithium intercalated graphite, with correct layer distances predicted for the entire saturation range, is the MBD correction [@Tkatchenko2012]. This method is computationally rather expensive, but since we only need to run DFT calculations for our training data set, which is very limited in size, this is not vital to us. We do realize that this approach most likely comes with some cost in terms of transferability. In order to retain as much of it as possible, we choose not to replace *all* force residues, but only $\approx50\%$, which proves sufficient to demonstrate the effectiveness of the presented method in a general way. Nonetheless, further investigating the effect this percentage has on the performance is certainly a task that should be tackled in the future. Of course, alternatively to our approach, it is possible to simply apply the MBD correction scheme directly to our DFTB calculations. However, doing so would cost us one to two orders of magnitude in speed, as MBD then becomes the dominating step in terms of computation time. Exactly as we had hoped for, we have succeeded at shifting the predicted interlayer distances (within the stable Li-C cutoff plateau) into the very close proximity of the experimental reference values for both LiC$_6$ and LiC$_{12}$ (Figure \[fig:vsLiC\_set3\]). Especially the excellent results for the stage II compound LiC$_{12}$ show that our parametrization is now able to handle *both* mainly ionic concentrated *and* mainly dispersive dilute layers to a satisfactory degree. In Figure \[2Dlandscape\], we illustrate the effect our modification has on the repulsion potential landscape for a wide range of Li-C cutoff radii. First (and most notably), we have moved and solidified the local minimum related to the next-neighbour lithium-carbon interaction (see bottom right). For the Set 2 and Set 3 potentials, the minima (blue and black dashed lines) are located at atomic distances of $2.41$ Å and $2.35$ Å respectively, which correspond to LiC$_6$ interlayer distances of $3.83$ Å and $3.67$ Å, the exact values which *do*, in fact, result from the relaxation of those structures, using the two repulsion potentials respectively. The 2D maps (top) show that this behaviour is apparent for an entire range of cutoff radii, thus ruling out the possibility that the fit is only accidentally correct (as it happens, [*e.g.*]{}, for Set 2, see Figure \[fig:vsLiC\]). In bottom left, we can also clearly see the upper ($\sim 5.8$Å) and lower ($\sim 4.5$Å) boundaries for the cutoff radius, beyond which the physicality of the model falls apart. 
They define exactly the range within which we find the stable cutoff dependency plateau, which is now at the correct numerical value as shown in Figure \[fig:vsLiC\_set3\]. We may identify the upper boundary at $5.8$ Å (as further discussed in the SI), as the distance between a lithium ion and the second closest graphene sheet, which is an intuitively plausible limitation. It is less obvious, though, to assign a clear physical meaning to the lower bound at $4.5$ Å, as it cannot be directly related to any particular structural feature of LiC$_6$. The most likely cause, we believe, is that the cosine-shaped cutoff function employed in the GPR framework starts cutting off physically relevant details from the repulsion potential below that. A physically motivated lower bound may be identified by evaluating the mean absolute forces acting on lithium as a function of Li-C cutoff, shown in Figure \[Fig:maf\].
![Mean absolute forces acting on Li in the validation set as a function of the Li-C repulsion cutoff, compared to the DFT reference calculations (black dashed line).[]{data-label="Fig:maf"}](./rmsd_forces_set3.png){width="0.9\linewidth"}
Overall, we now observe two separate Li-C cutoff plateaus: between approximately 4.3 Å and 5.7 Å, we obtain accurate layer distances (Figure \[fig:vsLiC\_set3\]), while for radii above roughly 5.0 Å, our predictions for forces and transition energies are correct (Figure \[Fig:maf\]). This duality can very simply be explained by the fact that the first property is mostly a z-direction phenomenon (and interactions with the second closest graphene sheet limit the physicality of our model), while the other takes place almost exclusively in the x-y-plane, where no such limitation applies. Given this difference in fundamental nature, it is very plausible to trust both these plateaus. Thus, their overlap (5.0–5.7 Å) defines the region within which any value of the Li-C cutoff radius produces an almost identical parametrization that performs very well, for all our benchmark criteria, in a stable and trustworthy manner.
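In terms of the quantities entering the GPR fit, the difference between the Set 2 and Set 3 parametrizations can be summarised by the force residues used as training targets. A schematic sketch follows; the force arrays are assumed to be per-structure arrays of shape (n_atoms, 3), and the roughly 50/50 mixture of the two target types is the choice discussed above.

```python
# Force residues serving as GPR targets for the repulsive potential.

def residue_set2(f_pbe, f_dftb_electronic):
    """Original targets: plain PBE forces minus the electronic-only DFTB forces."""
    return f_pbe - f_dftb_electronic

def residue_set3(f_pbe_mbd, f_dftb_electronic, f_lj):
    """Modified targets: MBD-corrected PBE references minus the LJ-corrected DFTB
    forces, so that the fitted repulsion absorbs the difference between the cheap
    LJ correction and the more accurate dispersion treatment."""
    return f_pbe_mbd - (f_dftb_electronic + f_lj)
```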
Results: interlayer distances and diffusion barriers
====================================================
|            | compound  | experiment | DFTB      | DFT in [@Krishnan2013] | filled gallery | empty gallery | barrier   |
|------------|-----------|------------|-----------|------------------------|----------------|---------------|-----------|
| C$_6$      | –         | $3.355$ Å  | $+46$ mÅ  | $+62$ mÅ               | –              | $+46$ mÅ      | –         |
| LiC$_6$    | stage I   | $3.687$ Å  | $-12$ mÅ  | $+56$ mÅ               | $-12$ mÅ       | –             | $351$ meV |
| LiC$_{12}$ | stage II  | $3.511$ Å  | $+16$ mÅ  | $-16$ mÅ               | $+112$\* mÅ    | $-79$\* mÅ    | $401$ meV |
| LiC$_{18}$ | stage III | $3.470$ Å  | $-30$ mÅ  | $+173$ mÅ              | $+196$\* mÅ    | $-50$\* mÅ    | $393$ meV |

: Experimental interlayer distances, deviations of the calculated interlayer distances (our DFTB parametrization and the DFT results of [@Krishnan2013]) from experiment, overall as well as separately for filled and empty galleries, and calculated intra-layer Li diffusion barriers.[]{data-label="table1"}
Table \[table1\] reports the resulting interlayer distances and diffusion barriers based on our new DFTB parametrization, compared with experimentally determined values as well as previous theoretical findings.
Furthermore, we draw qualitative conclusions from these results and summarize their implications on the intercalation mechanism. As a quick reminder, stages I, II and III correspond to every, every other and every third gallery being filled (to any degree) with lithium. Additionally, we describe the concentration of the intercalant in a filled gallery as dilute (low) or concentrated (high), thus allowing for a simple classification of fundamentally different compounds.
Here, we take only concentrated stages into consideration. For all calculations, we chose an Li-C cutoff radius of $5.5$ Å, following the findings discussed above. As Table \[table1\] clearly illustrates, we systematically outperform the method by Krishnan *et al.* [@Krishnan2013]—in terms of accuracy—for every structure they provide comparison for. This is especially remarkable considering the fact that they used full GGA-DFT with dispersion corrections in post-processing, which is the current state-of-the-art approach, as well as significantly more computationally expensive than our method.\
Subsequently, we investigate intra-layer next-neighbour diffusion barriers and compare our results to recent experimental findings from [@Umegaki2017] (based on muon spin relaxation spectroscopy). Our calculations yield purely *microscopic* results within 50 meV from each other for all three relevant compounds, as is shown in Table \[table1\]. The deviations between them can be attributed to slight differences in the filled-layer spacing of the different structures.\
In contrast, the experimentally determined *active* barriers of 270 meV for LiC$_6$ and 170 meV for LiC$_{12}$ show a strong dependency on the systems stage. We believe this difference to be caused by correlation effects. Capturing those using kinetic Monte Carlo simulation is something we intend to do in the near future.
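To indicate how these microscopic barriers would enter the planned kinetic Monte Carlo simulations, a rough sketch follows. The transition-state-theory attempt frequency of $10^{13}\,$s$^{-1}$ is an assumed typical value, not a prefactor computed in this work.

```python
import math
import random

K_B = 8.617333e-5   # Boltzmann constant in eV/K
NU = 1e13           # assumed attempt frequency in 1/s

def hop_rate(barrier_ev, temperature=300.0):
    """Transition-state-theory estimate of a single in-plane Li hop rate."""
    return NU * math.exp(-barrier_ev / (K_B * temperature))

# Barriers from the table above (eV):
for name, barrier in (("LiC6", 0.351), ("LiC12", 0.401), ("LiC18", 0.393)):
    print(f"{name}: ~{hop_rate(barrier):.2e} hops/s at 300 K")

def kmc_step(rates, rng=random.random):
    """One rejection-free kMC step: pick an event with probability proportional to
    its rate and draw an exponentially distributed waiting time."""
    total = sum(rates)
    target, acc = rng() * total, 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if target <= acc:
            break
    return i, -math.log(rng()) / total
```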
Conclusions and outlook
=======================
In this work, we put forward—for the first time combining particle swarm (i.e. PSO) and machine learning [@Engelmann2018] (i.e. GAP) approaches for this task—a well-performing DFTB-parametrization for lithium intercalated graphite which is capable of very accurately reproducing various structural properties and qualitative trends relating to the intercalation mechanism for a wide variety of Li$_x$C$_{36}$ compounds. In the course of this process, we believe to have shown that Density Functional Tight Binding (DFTB) is a superior approach for modelling intercalation compared with force field methods (*e.g.* the GAP by [@Fujikake2018] requires a manual correction term for lithium-lithium interactions which our method does not). Furthermore, we share key details and choices along this process and thus provide guidance for similar endeavours in the future.
---
address: 'University of Bucharest, Faculty of Mathematics, 14 Academiei str., 70109 Bucharest, Romania'
author:
- Liviu Ornea
title: 'Weyl structures on quaternionic manifolds. A state of the art.'
---
[^1]
This is a survey on quaternion Hermitian Weyl (locally conformally quaternion Kähler) and hyperhermitian Weyl (locally conformally hyperkähler) manifolds. These geometries arise by requiring the compatibility of a quaternion Hermitian or hyperhermitian structure with a Weyl structure. The motivation for such a study is two-fold: it comes, on the one hand, from the constantly growing interest in Weyl (and Einstein-Weyl) geometry and, on the other hand, from the necessity of understanding the existing classes of quaternion Hermitian manifolds.
Various geometries are involved in the following discussion. The first sections give the minimal background on Weyl geometry, quaternion Hermitian geometry and $3$-Sasakian geometry. The reader is assumed to be familiar with Hermitian (Kähler and, if possible, locally conformally Kähler) and metric contact (mainly Sasakian) geometry.
All manifolds and geometric objects on them are supposed differentiable of class $\mathcal{C}^\infty$.
Weyl structures
===============
We present here the necessary background concerning Weyl structures on conformal manifolds. We refer to [@F], [@G1], [@H] or to the most recent survey [@CP] for more details and physical interpretation (motivation) for Weyl and Einstein-Weyl geometry.
Let $M$ be an $n$-dimensional, paracompact, smooth manifold, $n\geq 2$. A $\mathrm{CO}(n)\simeq \mathrm{O}(n)\times {\mathbb{R}}_+$ structure on $M$ is equivalent to the choice of a conformal class $c$ of Riemannian metrics. The pair $(M,c)$ is a *conformal manifold*.
For each metric $g\in c$ one can consider the Levi-Civita connection $\nabla^g$, but this will not be compatible with the conformal class. Instead, we shall work with $\mathrm{CO}(n)$-connections. Precisely:
A *Weyl connection* $D$ on a conformal manifold $(M,c)$ is a torsion-free connection which preserves the conformal class $c$. We say that $D$ defines a Weyl structure on $(M,c)$ and $(M,c,D)$ is a Weyl manifold.
Preserving the conformal class means that for any $g\in c$, there exists a $1$-form $\theta_g$ (called the Higgs field) such that $$Dg=\theta_g\otimes g.$$ This formula is conformally invariant in the following sense: $$\label{th}
\text{if}\; h=e^{f}g,\;\; f\in \mathcal{C}^\infty(M),\; \text{then}\;\;
\theta_h=\theta_g-df.$$ Conversely, if one starts with a fixed Riemannian metric $g$ on $M$ and a fixed $1$-form $\theta$ (with $T=\theta^\sharp$), the connection $$D=\nabla^g-\frac{1}{2}\{\theta\otimes Id+Id\otimes\theta-g\otimes T\}$$ is a Weyl connection, preserving the conformal class of $g$. Clearly, $(g,\theta)$ and $(e^{f}g, \theta-df)$ define the same Weyl structure.
On a Weyl manifold $(M,c,D)$, Weyl introduced the *distance curvature function*, a $2$-form defined by $\Theta=d\theta_g$. By the transformation rule above, the definition does not depend on $g\in c$. If $\Theta=0$, the cohomology class $[\theta_g]\in H^1_{dR}(M)$ is independent of $g\in c$. A Weyl structure with $\Theta=0$ is called *closed*.
All these geometric objects can be interpreted as sections in tensor bundles of the bundle of scalars of weight $1$, associated to the bundle of linear frames of $M$ *via* the representation $GL(n,{\mathbb{R}})\ni A\mapsto \mid \det A\mid ^{1/n}$. *E.g.* $c$ is a section of $S^2T^*M\otimes L^2$, $\theta $ is a connection form in $L$ whose curvature form is exactly the distance curvature function etc. This also motivates the terminology. We refer to [@G1] for a systematic treatment of this viewpoint.
A fundamental result on Weyl structures is the following “co-closedness lemma”:
[@G2] Let $(M,c)$ be a compact, oriented, conformal manifold of dimension $>2$. For any Weyl structure $D$ preserving $c$, there exists a unique (up to homothety) $g_0\in c$ such that the associated Higgs field $\theta_{g_0}$ is $g_0$-coclosed.
The metric $g_0$ provided by the theorem is called the *Gauduchon metric* of the Weyl structure.
In Weyl geometry, the good notion of Einstein manifold makes use of the Ricci tensor associated to the Weyl connection: $$Ric^D=\frac{1}{2}\sum_{i=1}^n\{g(R^D(X,e_i)Y,e_i)-g(R^D(X,e_i)e_i,Y)\}$$ where $g\in c$ and $\{e_i\}$ is a local $g$-orthonormal frame. The scalar curvature of $D$ is then defined as the conformal $\mathrm{trace}$ of $Ric^D$. For each choice of a $g\in c$, $Scal^D$ is represented by $Scal^D_g=\mathrm{trace}_gRic^D$. The relations between $Ric^D$ and $Ric^{\nabla^g}$ and, correspondingly, between their scalar curvatures are: $$\begin{gathered}
\label{28}Ric^D=Ric^{\nabla^g}+\delta^g\theta\cdot g-(n-2)\{\nabla^g\theta+
{\Vert \theta\Vert}^2_g\cdot g-\theta\otimes\theta\}.\\
\label{29}Scal^D_g=Scal^{\nabla^g}+2(n-1)\delta^g\theta-(n-1)(n-2){\Vert \theta\Vert}^2_g.\end{gathered}$$
A Weyl structure is *Einstein-Weyl* if the symmetric part of the Ricci tensor $Ric^D$ of the Weyl connection is proportional to one (hence any) metric of $c$.
For an Einstein-Weyl structure, one has $$\label{24}
Ric^D=\frac{1}{n}Scal^D_g\cdot g-\frac{n-2}{2}d\theta$$ for any $g\in c$.
Note that for an Einstein-Weyl structure, the scalar curvature $Scal^D $ need not be constant (this means $D$-parallel w.r.t. to $D$ as a section of $L^{-2}$). But, if the Weyl connection is precisely the Levi-Civita connection of a metric in $c$ (in this case the Weyl structure is called *exact*), then $Scal^D$ is constant.
Observe that for any Einstein-Weyl structure and any $g\in c$ one has the formula $$\label{27}
\begin{split}
\frac{1}{n}dScal^D_g&=\frac{2n}{n-2}d\delta^g\theta-
2(\delta^g\theta)\theta-2\delta^g\nabla^g\theta+\delta^gd\theta-\\
&-2\nabla^g_T\theta-(n-3)d{\Vert \theta\Vert}^2_g.
\end{split}$$ This follows from the Einstein-Weyl condition and the relations between $Ric^D$, $Scal^D_g$ and their Riemannian counterparts above.
If $g$ is the Gauduchon metric, $\delta^g\theta=0$ and the formula above reduces to $$\frac{1}{n}dScal^D_g+
2\delta^g\nabla^g\theta-\delta^gd\theta+
2\nabla^g_T\theta+(n-3)d{\Vert \theta\Vert}^2_g=0.$$ Contracting here with $\theta$ yields $$\label{37}
D\theta=\frac{1}{2}d\theta$$ This, together with the relation between $D$ and $\nabla^g$, proves the first statement of the following extremely important result (the second statement will be proved in a more particular situation):
\[gg\][@G1] Let $D$ be an Einstein-Weyl structure on a compact, oriented manifold $(M,c)$ of dimension $>2$. Let $g$ be the Gauduchon metric in $c$ associated to $D$ and $\theta$ the corresponding Higgs field. If the Weyl structure is closed, but not exact, then
1\) $\theta$ is $\nabla^g$ parallel: $\nabla^g\theta=0$ (in particular, also $g$-harmonic).
2\) $Ric^D=0$.
Odd dimensional spheres and products of spheres $S^1\times S^{2n+1}$ admit Einstein-Weyl structures (note that $S^1\times S^2$ and $S^1\times S^3$ can bear no Einstein metric, cf. [@Hi]). Further examples, with $Ric^D=0$, will be the compact quaternion Hermitian Weyl and hyperhermitian Weyl manifolds.
Quaternionic Hermitian manifolds
================================
This section is devoted to the introduction of quaternion Hermitian geometry. The standard references are [@Sa], [@Be], [@Sw], [@AM2].
Let $(M,g)$ be a $4n$-dimensional Riemannian manifold. Suppose $End (TM)$ has a rank $3$ subbundle $H$ with transition functions in $\mathrm{SO}(3)$, locally generated by orthogonal almost complex structures $I_{\alpha}$, ${\alpha}=1,2,3$ satisfying the quaternionic relations. Precisely: $$\label{unu}
I_{\alpha}^2=-Id, \; I_{\alpha}I_{\beta}=\varepsilon_{{\alpha}{\beta}{\gamma}}I_{\gamma}, \; g(I_{\alpha}\cdot,
I_{\alpha}\cdot)=
g(\cdot,\cdot)\; {\alpha},{\beta},{\gamma}=1,2,3$$ where $\varepsilon_{{\alpha}{\beta}{\gamma}}$ is $1$ (resp. $-1$) when $({\alpha}{\beta}{\gamma})$ is an even (resp. odd) permutation of $(123)$ (such a basis of $H$ is called admissible.) The triple $(M,g,H)$ is called a *quaternionic Hermitian manifold* whose *quaternionic bundle* is $H$.
Any local or global section of $H$ is called *compatible*, but in general, $H$ has no global section. A striking example is ${\mathbb{H}P^{n}}$, the quaternionic projective space. The three canonical almost complex structures of ${\mathbb{H}}^{n+1}$ induced by multiplication with the imaginary quaternionic units descend to only local almost complex structures on ${\mathbb{H}P^{n}}$ generating the bundle $H$. The metric is the one projected by the flat one on ${\mathbb{H}}^{n+1}$, *i.e.* the Fubini-Study metric written in quaternionic coordinates. Note that ${\mathbb{H}P^{1}}$ is diffeomorphic with $S^4$, hence cannot bear any almost complex structure. Consequently, no higher-dimensional ${\mathbb{H}P^{n}}$ can have an almost complex structure either, because it would induce one on any quaternionic projective line ${\mathbb{H}P^{1}}$, a contradiction.
This shows that the case when $H$ is trivial is of a special importance and motivates
A quaternionic Hermitian manifold with trivial quaternionic bundle is called a *hyperhermitian manifold*.
In this terminology, an admissible basis of a quaternion Hermitian manifold is a local almost hyperhermitian structure.
For a hyperhermitian manifold we shall always fix a (global) basis of $H$ satisfying the quaternionic relations, so we shall regard it as a manifold endowed with three Hermitian structures $(g,I_{\alpha})$ related by the identities above. The simplest example is ${\mathbb{H}}^{n}$, but we shall encounter many other examples.
The analogy with Hermitian geometry suggests imposing conditions of [Kähler]{} type. Let $\nabla^g$ be the Levi-Civita connection of the metric $g$.
A quaternionic Hermitian manifold $(M,g,H)$ of dimension at least $8$ is *quaternion [Kähler]{}* if $\nabla^g$ parallelizes $H$, *i.e.* $\nabla^gI_{\alpha}=a_{\alpha}^{\beta}\otimes I_{\beta}$ (with a skew-symmetric matrix of one-forms $(a_{\alpha}^{\beta})$).
A hyperhermitian manifold is called *hyperkähler* if $\nabla^gI_{\alpha}=0$ for ${\alpha}=1,2,3.$
This definition of a quaternion Kähler manifold is redundant in dimension $4$. Since S. Marchiafava proved (see [@Ma2]) that any four-dimensional isometric submanifold of a quaternion [Kähler]{} manifold whose tangent bundle is invariant under each element of $H$ is Einstein and self-dual, one takes this as the definition. We won’t be concerned with dimension $4$ in this report.
Note that, unlike in the complex case, here the parallelism of $H$ does not imply the integrability of the individual almost complex structures.
${\mathbb{H}}^{n}$ with its flat metric is hyperkähler. By a result of A. Beauville, so are the K3 surfaces, see [@Be], Chapter 14. The irreducible, symmetric quaternion [Kähler]{} manifolds were classified by J. Wolf. Apart from ${\mathbb{H}P^{n}}$, the compact ones are: the Grassmannian of oriented $4$-planes in ${\mathbb{R}}^{m}$, the Grassmannian of complex $2$-planes of ${\mathbb{C}}^m$ and five other exceptional spaces (see [@Be], *loc. cit.*).
From the holonomy viewpoint, equivalent definitions are obtained as follows: A Riemannian manifold is quaternion [Kähler]{} (resp. hyperkähler) iff its holonomy is contained in $\mathrm{Sp}(n)\cdot \mathrm{Sp}(1)=\mathrm{Sp}(n)\times \mathrm{Sp}(1)/{\mathbb{Z}}_2$ (resp. $\mathrm{Sp}(n)$).
On a quaternionic Hermitian manifold, the usual [Kähler]{} forms are only local: on any trivializing open set $U$, one has the $2$-forms $\omega_{\alpha}(\cdot,\cdot)=g(I_{\alpha}\cdot,\cdot)$. But the $4$-form $\omega=\sum_{{\alpha}=1}^3\omega_{\alpha}\wedge\omega_{\alpha}$ is global (because the transition functions of $H$ are in $\mathrm{SO}(3)$), nondegenerate and, if the manifold is quaternion [Kähler]{}, parallel. Hence, it gives a nontrivial $4$-cohomology class, precisely $[\omega]=8\pi^2p_1(H)\in H^4(M,{\mathbb{R}})$ ([@Kr]).
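To see that $\omega$ is indeed well defined, note that if two admissible bases are related by $I'_{\alpha}=\sum_{\beta}a_{\alpha\beta}I_{\beta}$ with $(a_{\alpha\beta})\in \mathrm{SO}(3)$, then $\omega'_{\alpha}=\sum_{\beta}a_{\alpha\beta}\omega_{\beta}$ and $$\sum_{\alpha=1}^3\omega'_{\alpha}\wedge\omega'_{\alpha}=\sum_{\beta,\gamma}\Big(\sum_{\alpha}a_{\alpha\beta}a_{\alpha\gamma}\Big)\omega_{\beta}\wedge\omega_{\gamma}=\sum_{\beta=1}^3\omega_{\beta}\wedge\omega_{\beta},$$ by the orthogonality relations $\sum_{\alpha}a_{\alpha\beta}a_{\alpha\gamma}=\delta_{\beta\gamma}$; hence the locally defined $4$-forms glue to a global one.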
To get a converse, let ${\mathcal{H}}$ be the algebraic ideal generated by $H$ in $\Lambda^2T^*M$ (by identifying, as usual, a local almost complex structure with the associated [Kähler]{} $2$-form). It is a differential ideal if for any admissible basis of $H$, one has $d\omega_{\alpha}=\sum_{{\beta}=1}^3\eta_{{\alpha}{\beta}}\wedge\omega_{\beta}$ for some local $1$-forms $\eta_{{\alpha}{\beta}}$. Then we have:
[@Sw] \[swa\] A quaternion Hermitian manifold of dimension at least $12$ with closed $4$-form $\omega$ is quaternion [Kähler]{}.
A quaternion Hermitian manifold of dimension $8$ is quaternion Kähler iff $\omega$ is closed and ${\mathcal{H}}$ is a differential ideal.
Swann’s proof uses representation theory. A more direct one can be found in [@AM1].
\[conf\] It is important to note that the condition of being a differential ideal is conformally invariant and, moreover, invariant to different choices of admissible basis.
For an almost almost quaternionic Hermitian manifold $(M,g,H)$, we define its structure tensor by $$T^H=\frac{1}{12}\sum_{{\alpha}=1}^3[I_{\alpha},I_{\alpha}].$$ Clearly, $T^H$ is zero if one can choose, locally, admissible basis formed by integrable almost complex structures. The *Obata connection* $\nabla^H$ is then the unique connection which preserves $H$ and has torsion equal to $T^H$. It defines the fundamental $1$-form $\eta$ by the relation $$\eta(x)=\frac{1}{(8(n+1)}\mathrm{trace}\{g^{-1}\nabla^H_Xg\}.$$ A direct (but lengthy) computation proves:
\[AM1\] [@AM1] Let $(M,g,H)$ be a quaternion Hermitian manifold such that ${\mathcal{H}}$ is a differential ideal. For any admissible basis of $H$, the following formulae for $T^H$ and $\nabla^H$ hold good:
$$\label{t}
\begin{split}
T^H_XY=&\frac{1}{60}\sum_{{\alpha}=1}^{3}\{[(5{\varphi}_{\alpha}+\rho_{\alpha})(I_{\alpha}X)]I_{\alpha}Y-
[(5{\varphi}_{\alpha}+\rho_{\alpha})(I_{\alpha}Y)]I_{\alpha}X+\\
+&4\omega_{\alpha}(X,Y)g^{-1}(\rho_{\alpha}\circ I_{\alpha})\},
\end{split}$$
$$\label{nab}
\begin{split}
(\nabla^H_Zg)(X,Y)=&\frac{1}{12}\{2{\varepsilon}(Z)g(X,Y)+{\varepsilon}(X)g(Y,Z)+{\varepsilon}(Y)g(X,Z)-\\
-&\sum_{{\alpha}=1}^3{\varepsilon}(I_{\alpha}X)g(Y,I_{\alpha}Z)-
\sum_{{\alpha}=1}^3{\varepsilon}(I_{\alpha}Y)g(X,I_{\alpha}Z)+\\
+&4\eta(Z)g(X,Y)\}
\end{split}$$
where $$\begin{split}
{\varphi}_{\alpha}=&2\eta_{[{\beta}{\gamma}]}\circ I_{\alpha}-\eta_{[{\gamma}{\alpha}]}\circ I_{\beta}-
\eta_{[{\alpha}{\beta}]}\circ I_{\gamma},\\
\rho_{\alpha}=&-6\eta_{{\alpha}{\alpha}}+2\eta-3\eta_{({\alpha}{\beta})}\circ I_{\gamma}+
3\eta_{({\gamma}{\alpha})}\circ I_{\beta}\\
{\varepsilon}=&\sum_{{\alpha}=1}^3\eta_{[{\alpha}{\beta}]}\circ I_{\gamma}\end{split}$$ the subscript $()$ (resp. $[]$) indicating symmetrization (resp. skew-symmetrization).
A most important geometric property of quaternion Kähler manifolds, partly motivating the current interest in their study, is:
[@Ber] A quaternionic [Kähler]{} manifold is Einstein.
We briefly sketch, following [@Be], p. 403, S. Ishihara’s proof. We fix a local admissible basis. Direct computations lead to the formulae: $$[R^g(X,Y),I_{\alpha}]=\sum_{{\beta}=1}^3\eta_{{\alpha}{\beta}}(X,Y)I_{\beta}$$ with a skew-symmetric matrix of $2$-forms $(\eta_{{\alpha}{\beta}})$ which can be expressed in terms of the Ricci tensor as follows: $$\eta_{{\alpha}{\beta}}(X,Y)=\frac{2}{n+2}Ric^g(I_{\gamma}X,Y), \quad \dim M=4n.$$ From these one gets:
$$\begin{split}
&g(R^g(X,I_1X)Z,I_1Z)+g(R^g(X,I_1X)I_2Z,I_3Z)+\\
+&g(R^g(I_2X,I_3X)Z,I_1Z)+g(R^g(I_2X,I_3X)I_2Z,I_3Z)=\\
=&\frac{4}{n+4}Ric^g(X,X){\Vert Z\Vert}^2=\frac{4}{n+4}Ric^g(Z,Z){\Vert X\Vert}^2
\end{split}$$
for any $X$ and $Z$, hence $Ric^g(X,X)={\lambda}g(X,X)$ and $(M,g)$ is Einstein.
On the other hand, hyperkähler manifolds have holonomy included in $\mathrm{Sp}(n)\subset \mathrm{SU}(2n)$, hence they are Ricci-flat, in particular Einstein (see [@Be]).
Although apparently hyperkähler manifolds form a subclass of quaternion [Kähler]{} ones, this is not quite true. Besides the holonomy argument, the following result motivates the dichotomy:
[@Ber] A quaternion Kähler manifold is Ricci-flat iff its reduced holonomy group is contained in $\mathrm{Sp}(n)$. And if it is not Ricci-flat, then it is de Rham irreducible.
From these results it is clear that when discussing quaternion [Kähler]{} manifolds, one is mainly interested in the case of non-zero scalar curvature.
Ricci flat quaternion Kähler manifolds are called *locally hyperkähler*. Similarly, P. Piccinni discussed in [@Pi2], [@Pi] the class of *locally quaternion Kähler manifolds*, having the *reduced* holonomy group contained in $\mathrm{Sp}(n)\cdot \mathrm{Sp}(1)$ and proved:
[@Pi2] Any complete locally quaternion Kähler manifold with positive scalar curvature is compact, locally symmetric and admits a finite covering by a quaternion Kähler Wolf symmetric space.
As the local sections of $H$ are generally non-integrable, one cannot use the methods of complex geometry directly on quaternion [Kähler]{} manifolds. However, one can construct an associated bundle whose total space is Hermitian. Let $p:Z(M)\rightarrow M$ be the unit sphere subbundle of $H$. Its fibre $Z(M)_m$ is the $2$-sphere of almost complex structures on $T_mM$ belonging to $H_m$. This is called the *twistor bundle* of $M$. Using the Levi-Civita connection $\nabla^g$, one splits the tangent bundle of $Z(M)$ into horizontal and vertical parts. Then an almost complex structure ${\mathcal{J}}$ can be defined on $Z(M)$ as follows: each $z\in Z(M)$ represents a complex structure on $T_{p(z)}M$; as the horizontal subspace in $z$ is naturally identified with $T_{p(z)}M$, the action of ${\mathcal{J}}$ on horizontal vectors will be the tautological one, coinciding with the action of $z$. The vertical subspace in $z$ is isomorphic to the tangent space of the fibre $S^2$. Hence we let ${\mathcal{J}}$ act on vertical vectors as the canonical complex structure of $S^2$. Happily, ${\mathcal{J}}$ is integrable. Moreover:
[@Sa] Let $(M,g,H)$ be a quaternion [Kähler]{} manifold with positive scalar curvature. Then $(Z(M),{\mathcal{J}})$ admits a Kähler - Einstein metric with positive Ricci curvature with respect to which $p$ becomes a Riemannian submersion.
Local and global $3$-Sasakian manifolds {#3ss}
=======================================
We now describe the odd dimensional analogue, within the frame of contact geometry, of hyperkähler manifolds, as well as a local version of it. We send the interested reader to the excellent recent survey [@BG2], where also a rather exhaustive list of references is given and to [@OP2] for the local version.
A $(4n+3)$-dimensional Riemannian manifold $(N,h)$ such that the cone metric $dr^2+r^2h$ on ${\mathbb{R}}_+\times N$ is hyperkähler is called a *$3$-Sasakian* manifold.
This is equivalent to the existence of three mutually orthogonal unit Killing vector fields $\xi_1$, $\xi_2$, $\xi_3$, each one defining a Sasakian structure (*i.e.*: ${\varphi}_{\alpha}:=\nabla^h\xi_{\alpha}$ satisfies the differential equation $\nabla^h{\varphi}_{\alpha}=Id\otimes\xi_{\alpha}^\flat-h\otimes\xi_{\alpha}$) and related by: $$[\xi_1,\xi_2]=2\xi_3,
[\xi_2,\xi_3]=2\xi_1,
[\xi_3,\xi_1]=2\xi_2.$$ $3$-Sasakian manifolds are necessarily Einstein ([@Ka]) with positive scalar curvature and their Einstein constant is $4n+2$.
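The basic example is the round sphere $S^{4n+3}\subset{\mathbb{H}}^{n+1}$: its cone metric is just the flat hyperkähler metric of ${\mathbb{H}}^{n+1}-\{0\}$, and the three structure vector fields are the restrictions to the sphere of $$\xi_1(q)=qi,\quad \xi_2(q)=qj,\quad \xi_3(q)=qk,\qquad q\in{\mathbb{H}}^{n+1},$$ which are unit, mutually orthogonal Killing fields satisfying $[\xi_1,\xi_2]=2\xi_3$ and its cyclic permutations.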
Starting with a $3$-Sasakian manifold $N$, one has to consider the foliation generated by the three structure vector fields $\xi_{\alpha}$. It is easy to compute the curvature of the leaves: it is precisely one. Hence, the leaves are spherical space forms. If the foliation is quasi-regular (it is enough to have compact leaves), then the quotient space is a quaternion [Kähler]{} orbifold $M$ of positive sectional curvature (see [@BG3] for a thorough discussion about the geometry and topology of orbifolds and their applications in contact geometry). As all the geometric constructions we are interested in can be carried out in the category of orbifolds, one considers now the twistor space $Z(M)$. The triangle is closed by observing that, fixing one of the contact structures of $N$, one has an $S^1$-bundle $N\rightarrow Z(M)$ whose Chern class is, up to torsion, the one of an induced Hopf bundle (this is a particular case of a Boothby-Wang fibration, cf. [@Bl]). Moreover, all three orbifold fibrations involved in this commutative triangle are Riemannian submersions.
Conversely, given a positive quaternion [Kähler]{} orbifold $(M,g,H)$, one constructs its [Kähler]{}-Einstein twistor space (it will be an orbifold) and an $\mathrm{SO}(3)$-principal bundle over $M$. The total space $N$ will then be a $3$-Sasakian orbifold which, as above, fibers in $S^1$ over $Z(M)$ closing the diagram. One of the deepest results in this theory was the determination of conditions under which $N$ is indeed a manifold (cf. [@BGMR]).
A local version of $3$-Sasakian structure will be also useful in the sequel:
[@OP2]\[3s\] A Riemannian manifold $(N,h)$ is said to be a *locally $3$-Sasakian manifold* if a rank $3$ vector subbundle ${\mathcal{K}}\subset TN$ is given, locally spanned by an orthonormal triple $\xi_1,\xi_2,\xi_3$ of Killing vector fields satisfying:
\(i) $[\xi_\alpha,\xi_\beta]=2\xi_\gamma$ for $(\alpha , \beta ,
\gamma)=(1,2,3)$ and circular permutations.
\(ii) Any two such triples $\xi_1,\xi_2,\xi_3$ and $\xi'_1,\xi'_2,\xi'_3$ are related on the intersections $U \cap U'$ of their definition open sets by matrices of functions with values in $\mathrm{SO}(3)$.
\(iii) If ${\varphi}_\alpha = \nabla^h \xi_\alpha$ , ($\alpha = 1,2,3$), then $ (\nabla^h_Y {\varphi}_\alpha)~Z=\xi_\alpha^\flat (Z)Y-
h(Y,Z)\xi_\alpha,$ for any local vector fields $Y,Z$.
Clearly, if ${\mathcal{K}}$ can be globally trivialized with Killing vector fields as above, $(N,h)$ is $3$-Sasakian. It is easily seen that locally $3$-Sasakian manifolds share the local properties with the (global) $3$-Sasakian spaces: they are Einstein with positive scalar curvature; hence, by Myers’ theorem we have
Complete locally and globally $3$-Sasakian manifolds are compact.
But a specific property of the local case is:
[@OP2]\[flat\] The bundle ${\mathcal{K}}$ of a locally $3$-Sasakian manifold is flat.
Let $(\xi_1,\xi_2,\xi_3)$, $(\xi_1',\xi_2',\xi_3')$ be two local orthonormal triples of Killing fields trivializing ${\mathcal{K}}$ on $U$, $U'$. Then, on $U\cap U'\neq \emptyset$ we have $\xi_{\lambda}'=f^\sigma_{\lambda}\xi_\sigma$. We shall show that $f^\sigma_{\lambda}$ are constant. Compute first the bracket $$2\xi'_\nu=[\xi'_{\lambda},\xi'_\mu]=\{f^\rho_{\lambda}\xi_\rho(f_\mu^\sigma)-
f_\mu^\rho\xi_\rho(f_{\lambda}^\sigma)\}\xi_\sigma+
f^\rho_{\lambda}f_\mu^\sigma[\xi_\rho,\xi_\sigma].$$ From $(f^\mu_{\lambda})\in \mathrm{SO}(3)$ and $[\xi_\rho,\xi_\sigma]=2\xi_\tau$ ($(\rho, \sigma,
\tau) = (1,2,3)$ and cyclic permutations), we can derive: $$f^\rho_{\lambda}f_\mu^\sigma[\xi_\rho,\xi_\sigma]=2\{f^\rho_{\lambda}f_\mu^\sigma-
f_{\lambda}^\sigma f_\mu^\rho\}\xi_\tau=2\xi'_\nu.$$ Hence $$f^\rho_{\lambda}\xi_\rho(f_\mu^\sigma)-f_\mu^\rho\xi_\rho(f_{\lambda}^\sigma)=0.$$ Thus, for any ${\lambda}, \mu,\sigma=1,2,3$: $\xi'_{\lambda}(f^\mu_\sigma)-\xi'_\mu(f_{\lambda}^\sigma)=0$. It follows: $$\label{p}
\xi_{\lambda}(f_\sigma^\mu)-\xi_\mu(f^{\lambda}_\sigma)=0.$$ Now we use the Killing condition applied to $\xi_{\lambda}'=f^\mu_{\lambda}\xi_\mu$: $$Y(f_{\lambda}^\mu)h(\xi_\mu,Z)+Z(f_{\lambda}^\mu)h(\xi_\mu,Y)=0, \quad Y,Z\in \mathcal{X}(N)$$ which yields, on one hand, $Z(f_{\lambda}^\mu)=0$ for any $Z\perp \mathrm{span}\{\xi_1,\xi_2,\xi_3\}$ and, on the other hand, $$\xi_\rho(f_{\lambda}^\sigma)+\xi_\sigma(f_{\lambda}^\rho)=0.$$ This and \eqref{p} imply $\xi_\sigma(f_{\lambda}^\rho)=0$ and the proof is complete.
The vector bundle ${\mathcal{K}}$ generates a $3$-dimensional foliation that, for simplicity, we equally denote ${\mathcal{K}}$. It can be shown that ${\mathcal{K}}$ is Riemannian. As in the global case, if the leaves of ${\mathcal{K}}$ are compact, the leaf space $M=N/{\mathcal{K}}$ is a compact orbifold. The metric $h$ projects to a metric $g$ on $P$ making the natural projection $\pi$ a Riemannian submersion with totally geodesic fibers. The locally defined endomorphisms ${\varphi}_{\lambda}$ can be projected on $M$ producing locally defined almost complex structures: $J_{\alpha}X_{\pi(x)}=\pi_*({\varphi}_{\alpha}(\tilde X_x))$, where $\tilde X$ is the horizontal lift of $X$ w.r.t. the submersion. As ${\varphi}_{\alpha}\circ{\varphi}_{\beta}=-{\varphi}_{\gamma}+\xi_{\alpha}\otimes\xi_{\beta}^\flat$, $P$ can be covered with open sets endowed with local almost hyperhermitian structures $\{J_{\alpha}\}$. As the transition functions of ${\mathcal{K}}$ are in $\mathrm{SO}(3)$, so are the transition functions of the bundle $\mathcal{F}$ locally spanned by the ${\varphi}_{\alpha}$. Hence, two different almost hyperhermitian structures are related on their common domain by transition functions in $\mathrm{SO}(3)$. This means that the bundle $H$ they generate is quaternionic. Using the O’Neill formulae, it is now seen, as in the global case, that $(M,g,H)$ is a quaternion Kähler orbifold. Summing up we can state:
[@OP2]\[fib\] Let $(N,h,{\mathcal{K}})$ be a locally $3$-Sasakian manifold such that ${\mathcal{K}}$ has compact leaves. Then the leaf space $M=N/{\mathcal{K}}$ is a quaternion Kähler orbifold with positive scalar curvature and the natural projection $\pi:N\rightarrow M$ is a Riemannian, totally geodesic submersion which fibers are (generally inhomogeneous) $3$-dimensional spherical space forms.
P. Piccinni proved in [@Pi2] that some global $3$-Sasakian manifolds also project over local quaternion Kähler manifolds with positive scalar curvature.
A further study of the (supposed compact) leaves of ${\mathcal{K}}$ will show a very specific property of locally $3$-Sasakian manifolds. To this end, we recall, following [@Sc], some aspects of the classification of $3$-dimensional spherical space forms $S^3/G$, with $G$ a finite group of isometries of $S^3$, hence a finite subgroup of $\mathrm{SO}(4)$. The finite subgroups of $S^3$ are known: they are cyclic groups of any order or binary dihedral, tetrahedral, octahedral, icosahedral and, of course, the identity. In all these cases, $S^3/G$ is a homogeneous $3$-dimensional space form carrying an induced (global) $3$-Sasakian structure, see [@BGM0]. The other finite subgroups of $\mathrm{SO}(4)$, not contained in but acting freely on $S^3$, are characterized by being conjugated in $\mathrm{SO}(4)$ to a subgroup of $\Gamma_1=\mathrm{U}(1)\cdot \mathrm{Sp}(1)$ or $\Gamma_2=\mathrm{Sp}(1)\cdot \mathrm{U}(1)$. Observe that the right (resp. left) isomorphism between ${\mathbb{H}}$ and ${\mathbb{C}}^2$ induces an isomorphism between $\Gamma_1$ (resp. $\Gamma_2$) and $\mathrm{U}(2)$. Hence, any finite subgroup $\Gamma$ of $\Gamma_1$ or $\Gamma_2$ will preserve two structures of $S^3$: the locally $3$-Sasakian structure induced by the hyperhermitian structure of ${\mathbb{C}}^2$ and a global Sasakian structure induced by some complex Hermitian structure of ${\mathbb{C}}^2$ belonging to the given hyperhermitian one. Moreover, altering $\Gamma$ by conjugation in $\mathrm{SO}(4)$ does not affect the above preserved structures; only the global Sasakian structure will come from a hermitian structure of ${\mathbb{R}}^4$ conjugate with the standard one. Altogether, we obtain:
[@OP2]\[gog\] On any locally $3$-Sasakian manifold, the compact leaves of ${\mathcal{K}}$ are locally $3$-Sasakian $3$-dimensional space-forms carrying a global almost Sasakian structure.
We end with another consequence of Proposition \[gog\]:
[@OP2]\[pul\] Let $\tilde {\mathcal{K}}\rightarrow \tilde N$ be the pull-back of the bundle ${\mathcal{K}}\rightarrow N$ to the universal Riemannian covering space of a locally $3$-Sasakian manifold. Then $\tilde {\mathcal{K}}$ is globally trivialized by a global $3$-Sasakian structure on $\tilde N$.
By Proposition \[flat\], the bundle $\tilde{\mathcal{K}}\rightarrow \tilde N$ is trivial. However, this is not enough to deduce that the trivialization can be realized with Killing fields generating a $\mathrm{su}(2)$ algebra. *E.g.* the inhomogeneous $3$-dimensional spherical space forms are parallelizable but locally, not globally $3$-Sasakian. To overcome this difficulty, start with the induced locally $3$-Sasakian structure of $\tilde N$. Let $X_1$ be the global Sasakian structure of $\tilde N$ provided by Proposition \[gog\] and consider an open set $\tilde U$ on which $\tilde{\mathcal{K}}$ is trivialized by a local $3$-Sasakian structure incuding $X_1$.
The manifold $\tilde N$ is simply connected and Einstein, hence analytic (see [@Be], Theorem 5.26). By a result of Nomizu (cf. [@No]) each local Killing vector field on $\tilde N$ can be extended uniquely to the whole $\tilde N$. We thus extend the above three local Killing fields. Clearly, the extension $Y_1$ of $X_1$ coincides with $X_1$. The extension $Y_2$ of ${X}_2$ is thus orthogonal to $Y_1$ and belongs to $\tilde {\mathcal{K}}$ in every point of $\tilde N$. It follows from Proposition \[flat\] that ${Y}_2$ is a global Sasakian structure. Now $Y_3=\frac{1}{2}[Y_1,Y_2]$ completes the desired global $3$-Sasakian structure.
Quaternion Hermitian Weyl and hyperhermitian Weyl manifolds
===========================================================
We now arrive to the structures giving the title of this survey. We consider $4n$-dimensional quaternion Hermitian manifolds and let the metric vary in its conformal class. In this setting, the natural connection to work with is no more the Levi-Civita connection, but a Weyl connection which has to be compatible with the quaternionic structure too.
Definitions. First properties
-----------------------------
Let $(M^{4n},c,H)$, $n \geq 2$ be a conformal manifold endowed with a quaternionic bundle $H$ such that $(M,g,H)$ is quaternion Hermitian for each $g\in c$.
$(M^{4n},H,c,D)$ is said *quaternion-Hermitian-Weyl* if:
1) $(M,c,D)$ is a Weyl manifold;
2) $(M,g,H)$ is quaternion-Hermitian for any $g\in c$;
3) $DH=0$, *i.e.* $DI_{\alpha}=a^{\beta}_{\alpha}\otimes I_{\beta}$ with a skew-symmetric matrix of one-forms $(a_{\alpha}^{\beta})$ for any admissible basis of $H$.
$(M^{4n},c,H,D)$ is said *hyperhermitian Weyl* if it satisfies condition 1) and:
2’) $(M,g,H)$ is hyperhermitian for any $g\in c$;
3’) $DI=0$ for any section of $H$.
The above definition is clearly inspired by the complex case, where the theory of Hermitian-Weyl (locally conformal [Kähler]{}ian in other terminology) is widely studied (see [@DO] for a recent survey). Indeed, the following equivalent definition is available:
[@PPS] $(M^{4n},c,H,D)$ is quaternion-Hermitian-Weyl\
(resp. hyperhermitian Weyl) if and only if $(M,g,H)$ is locally conformally quaternion [Kähler]{} (resp. locally conformally hyperkähler) (i.e. $g_{\vert_{U_i}} = e^{f_i}g'_i$, where the $g'_i$ are quaternion Kähler (resp. hyperkähler) over open neighbourhoods $\{U_i\}$ covering $M$) for each $g\in c$.
Let $(M^{4n},c,H,D)$ be quaternion-Hermitian Weyl. Fix a metric $g\in c$ and choose an open set $U$ on which $H$ is trivialized by an admissible basis $I_1,I_2,I_3$. Then $Dg=\theta_g\otimes g$ together with condition 2) of the definition imply $D\omega_{\alpha}=
\theta\otimes\omega_{\alpha}+a_{\alpha}^{\beta}\otimes \omega_{\beta}$, hence $$\label{cc}
d\omega_{\alpha}= \theta\wedge\omega_{\alpha}+a_{\alpha}^{\beta}\wedge \omega_{\beta}.$$ This implies that ${\mathcal{H}}$ is a differential ideal and, on the other hand, the derivative of the fundamental four-form is $d\omega=\theta_g\wedge\omega$. Differentiating here we get $0=d^2\omega=d\theta_g\wedge\omega$. As $\omega$ is nondegenerate, this means $d\theta_g=0$. Consequently, locally, on some open sets $U_i$, $\theta_g=df_i$ for some differentiable functions defined on $U_i$. It is now easy to see that for each $g'_i=e^{-f_i}g_{\vert_{U_i}}$, the associated $4$-form is closed, hence, taking into account Proposition \[swa\] and Remark \[conf\], the local metrics $g'_i$ are quaternion [Kähler]{}.
Conversely, starting with the local quaternion [Kähler]{} metrics $ g'_i=e^{-f_i}g_{\vert_{U_i}}$, define $(\theta_g)_{\vert_{U_i}}=df_i$. It can be seen that these local one forms glue together to a global, closed one-form and $d\omega=\theta_g\wedge\omega$. Then construct the Weyl connection associated to $g$ and $\theta_g$: $$\label{cw}
D=\nabla^g-{\frac{1}{2}}\{\theta_g\otimes Id+Id\otimes\theta_g-
g\otimes\theta_g^{\sharp}\}.$$ A straightforward computation shows that $D$ has the requested properties.
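Explicitly, \eqref{cw} reads $D_ZX=\nabla^g_ZX-\frac{1}{2}\{\theta_g(Z)X+\theta_g(X)Z-g(Z,X)\theta_g^{\sharp}\}$; the correction term being symmetric in $Z$ and $X$, the connection $D$ is torsion free, and, since $\nabla^gg=0$, $$(D_Zg)(X,Y)=\frac{1}{2}\{\theta_g(Z)g(X,Y)+\theta_g(X)g(Z,Y)-g(Z,X)\theta_g(Y)\}+
\frac{1}{2}\{\theta_g(Z)g(X,Y)+\theta_g(Y)g(Z,X)-g(Z,Y)\theta_g(X)\}=\theta_g(Z)g(X,Y),$$ that is $Dg=\theta_g\otimes g$.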
The proofs for the global case are completely similar.
\[cor\] A quaternion Hermitian manifold $(M,g,H)$ is quaternion Hermitian Weyl if and only if there exists a $1$-form $\theta$ (necessarily closed) such that the fundamental $4$-form $\omega$ satisfies the integrability condition $d\omega=\theta\wedge\omega$. In particular, $(M,g,H)$ is quaternion Kähler if and only if $\theta=0$.
The form $\theta$ is the Higgs field associated to the Weyl manifold $(M,c,D)$. But in this context, we shall prefer to call it *the Lee form* (see [@DO] for a motivation).
As on a simply connected manifold any closed $1$-form is exact, we derive:
A quaternion Hermitian Weyl (resp. hyperhermitian Weyl) manifold which is not globally conformal quaternion Kähler (resp. hyperkähler) cannot be simply connected.
The universal Riemannian covering space of a quaternion Hermitian Weyl (resp. hyperhermitian Weyl) manifold is globally conformal quaternion Kähler (resp. hyperkähler).
\[ex\] We give here just one example of compact hyperhermitian Weyl manifold and leave the description of other examples for the end of the paper, following the structure of quaternion Hermitian Weyl and hyperhermitian Weyl manifolds.
The standard quaternionic Hopf manifold is $H^n_{\mathbb{H}}=({\mathbb{H}}^n-\{0\})/\Gamma_2$, where $\Gamma_2$ is the cyclic group generated by the quaternionic automorphism $(q_1,...,q_n)\mapsto (2q_1,...,2q_n)$. The hypercomplex structure of ${\mathbb{H}}^n$ is easily seen to descend to $H_{\mathbb{H}}^n$. Moreover, the globally conformal quaternion Kähler metric $(\sum_iq_i{\overline}{q}_i)^{-1}
\sum_idq_i\otimes d{\overline}{q}_i$ on ${\mathbb{H}}^n-\{0\}$ is invariant under the action of $\Gamma_2$, hence induces a locally conformally hyperkähler metric on the Hopf manifold. Note that, as in the complex case, $H^n_{\mathbb{H}}$ is diffeomorphic with a product of spheres $S^1\times S^{4n-1}$. Consequently, its first Betti number is $1$ and it cannot admit any hyperkähler metric.
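The invariance of this metric under $\Gamma_2$ is immediate: if $\gamma(q_1,...,q_n)=(2q_1,...,2q_n)$, then $$\gamma^*\Big\{\big(\sum_iq_i{\overline}{q}_i\big)^{-1}\sum_idq_i\otimes d{\overline}{q}_i\Big\}=\big(4\sum_iq_i{\overline}{q}_i\big)^{-1}\,4\sum_idq_i\otimes d{\overline}{q}_i=\big(\sum_iq_i{\overline}{q}_i\big)^{-1}\sum_idq_i\otimes d{\overline}{q}_i,$$ both factors being rescaled by $4$.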
Before going further, let us note the following result:
[@OP2] A quaternion Hermitian manifold $(M,g,H)$ admits at most one quaternion Hermitian Weyl structure.
We have to prove that there is at most one torsion free connection preserving both $H$ and $[g]$. Indeed, if $D_1$, $D_2$ are such, let $\theta_1$, $\theta_2$ be the associated Lee forms. Then the fundamental $4$-form $\omega$ satisfies $$\label{inj}
d\omega=\theta_1\wedge\omega=\theta_2\wedge\omega.$$ Applying the operator $L:\Lambda^1T^*M
\rightarrow \Lambda^5T^*M$, $L{\alpha}={\alpha}\wedge\omega$, to \eqref{inj} yields $L(\theta_1-\theta_2)=0$. But $L$ is injective, because it is related to its formal adjoint $\Lambda$ by $\Lambda L=(n-1)Id$. Hence $\theta_1=\theta_2$. Finally, formula \eqref{cw} proves that $D_1=D_2$.
[@Pi2] For hyperhermitian Weyl manifolds, this uniqueness property is implied by the characterization of the Obata connection as the unique torsion-free hypercomplex connection. It must then coincide with our Weyl connection $D$. In general, the set of torsion-free quaternionic connections has an affine structure modelled on the space of $1$-forms. However, only one torsion-free connection can preserve a given conformal class of hyperhermitian metrics. This follows from the fact that the exterior multiplication with the fundamental four-form of the metric maps injectively $\Lambda ^1(T^*M)$ into $\Lambda^5(T^*M)$.
Note that the connection $D\vert_{U_i}$ is in fact the Levi Civita connection of the local quaternion Kähler metric $g'_i$. As quaternion-Kähler manifolds are Einstein, we obtain the following fundamental result:
[@PPS] Quaternion Hermitian Weyl manifolds are Einstein Weyl.
Hence, as $d\theta=0$, the Weyl structure $(M,c,D)$ is closed; it is not exact, because $D$ is the Levi-Civita connection only of *local* metrics (the Weyl structure is only locally exact). The quoted Theorem \[gg\] of P. Gauduchon then implies:
\[par\][@PPS] On any compact quaternion Hermitian Weyl (hyperhermitian Weyl) manifold which is not globally conformal quaternion Kähler (hyperkähler) there exists a representative $g\in c$ (the Gauduchon metric) such that the associated Lee form $\theta_g$ is $\nabla^g$-parallel.
In the sequel, the parallel Lee form of the Gauduchon metric will always be supposed of unit length.
\[va\] Let $g$ be the above metric with parallel Lee form on a compact hyperhermitian Weyl manifold and $\{I_{\alpha}\}$ an adapted hyperhermitian structure. Then $(g,I_{\alpha})$ are Vaisman structures on $M$ (cf. [@DO]).
[@OP1] On a compact quaternion Hermitian Weyl manifold which is not globally conformal quaternion Kähler, the local quaternion Kähler metrics $g'_i$ are Ricci-flat.
This result follows directly from Theorem \[gg\], 2), but we prefer to give here a direct proof, adapted to our situation.
On each $U_i$, the relation between the scalar curvatures $\mathrm{Scal}'_i$ and $\mathrm{Scal}$ of $g'_i$ and $g$ is (cf. [@Be], p. 59): $$\mathrm{Scal}'_i=e^{-f_i}\left\{\mathrm{Scal}_{|U_i}-\frac{(4n-1)(2n-1)}{2}\right\}.$$ Hence $\mathrm{Scal}'_i$ is constant. If $\mathrm{Scal}'_i$ is not identically zero, differentiation of the above identity yields: $$\theta_{|U_i}=d\log \left\{\mathrm{Scal}_{|U_i}-\frac{(4n-1)(2n-1)}{2}\right\}.$$ As both $\theta$ and $\mathrm{Scal}$ are global objects on $M$, it follows that $\theta$ is exact, contradiction. But if $\mathrm{Scal}'_i=0$ on some $U_i$, then $\mathrm{Scal}=\mathrm{Scal}_{|U_i}=\frac{(4n-1)(2n-1)}{2}$, constant on $M$. This proves that $\mathrm{Scal}'_i=0$ on each $U_i$.
\[int\]
The above result says that compact quaternion-Hermitian Weyl manifolds are locally conformally locally hyperkähler. In particular, the open subsets $U_i$ can always be taken simply connected and endowed with admissible bases made of integrable, parallel almost complex structures. But this does not mean that $M$ would be a locally conformal Kähler manifold, because a global complex structure might not exist.
Another characterization, using the differential ideal ${\mathcal{H}}$ is the following (recall that the differential ideal condition is conformally invariant, so one can speak about the differential ideal of a conformal manifold):
[@ABM]\[laba\] A quaternionic conformal manifold $(M,c,H)$ of dimension at least $12$ is quaternion-Hermitian Weyl if and only if ${\mathcal{H}}$ is a differential ideal.
The following result is essential in the authors' proof, also motivating the restriction on the dimension:
[@ABM] Let $(M,g,H)$ be an almost hyperhermitian manifold with $\dim M\geq 12$. Suppose $\sum_{{\alpha}=1}^3\phi_{\alpha}\wedge\omega_{\alpha}=0$ for some $2$-forms $\phi_{\alpha}$. Then there exists a skew-symmetric matrix of real functions $f_{{\alpha}\rho}$ such that $\phi_{\alpha}=\sum_{\rho\neq{\alpha}}f_{{\alpha}\rho}\omega_\rho$.
Let $F_{\alpha}$ be the $(1,1)$ tensor fields metrically equivalent to the $2$-forms $\phi_{\alpha}$. The identity in the statement can be rewritten as: $$\label{iu}
\begin{split}
\sum_{\rho=1}^3\{&-\phi_\rho(X,Y)I_\rho Z+\phi_\rho(X,Z)I_\rho Y +
\omega_\rho(Y,Z)F_\rho X-\\
&-\phi_\rho(Y,Z)I_\rho X+ \omega_\rho(Z,X)F_\rho Y+
\omega_\rho(X,Y)F_\rho Z\}=0.
\end{split}$$ Let now $X$ be a fixed unit vector. In the orthogonal complement of ${\mathbb{H}}X=\mathrm{span}\{X, I_1X, I_2X,I_3X\}$ we choose a unit vector $Z$ and let $Y=I_{\alpha}Z$. With these choices, the above identity reads: $$F_{\alpha}(X)= \sum_{\rho=1}^3\{\phi_\rho(X,I_\rho Z)I_\rho Z -
\phi_\rho(X,Z)I_\rho I_{\alpha}Z\}+\sum_{\rho=1}^3 \phi_\rho(I_\rho Z,Z)I_\rho X.$$ Here we use the assumption $n\geq 3$ to obtain: $$F_{\alpha}(X)=\sum_{\rho=1}^3 \phi_\rho(I_\rho Z,Z)I_\rho X,$$ hence $\phi_{\alpha}$ have the form $\phi_{\alpha}=\sum_{\rho\neq{\alpha}}f_{{\alpha}\rho}\omega_\rho$ which, introduced in equation \eqref{iu}, gives: $$\begin{split}
\sum_{{\alpha}=1}^3f_{{\alpha}{\alpha}}\{-\omega_{\alpha}(X,Y)I_{\alpha}Z+\omega_{\alpha}(X,Z)I_{\alpha}Y-
\omega_{\alpha}(Y,Z)I_{\alpha}X\}+\\
+\sum_{\rho\neq{\alpha}}(f_{{\alpha}\rho}+f_{\rho{\alpha}})
\{\omega_{\alpha}(X,Y)I_\rho Z+\omega_{\alpha}(X,Z)I_\rho Y+
\omega_{\alpha}(Y,Z)I_\rho X\}=0.
\end{split}$$ Again using $n\geq 3$, we may choose $Y$ and $Z$ orthogonal to ${\mathbb{H}}X$ and get: $$-f_{{\alpha}{\alpha}}\omega_{\alpha}(Y,Z)-
\sum_{\rho\neq{\alpha}}(f_{{\alpha}\rho}+f_{\rho{\alpha}})\omega_{\rho}(Y,Z)=0.$$ Now it remains to take $Z=I_{\alpha}Y$ and $Z=I_\rho Y$, $\rho\neq{\alpha}$, to derive the skew-symmetry of $(f_{{\alpha}\rho})$.
(of Theorem \[laba\]). Fix $g\in c$ and an admissible basis for $H$. Starting from the condition $d\omega_{\alpha}=\sum_{{\beta}=1}^3\eta_{{\alpha}{\beta}}\wedge\omega_{\beta}$, one can derive the following formula: $$\label{cu}
d\omega_{\alpha}=\eta_{\gamma}\wedge\omega_{\beta}-\eta_{\beta}\wedge\omega_{\gamma}+\frac{1}{3}
\eta\wedge\omega_{\alpha},$$ where $2\eta:=\eta_{{\beta}{\gamma}}-\eta_{{\gamma}{\beta}}$. After differentiating we get: $$\frac{1}{3}d\eta\wedge\omega_{\alpha}+(d\eta_{\gamma}+\eta_{\alpha}\wedge\eta_{\beta})\wedge\omega_{\beta}-
(d\eta_{\beta}+\eta_{\gamma}\wedge\eta_{\alpha})\wedge\omega_{\gamma}=0.$$ The previous Lemma applies and provides: $$\begin{split}
\frac{1}{3}d\eta=&f_{{\alpha}{\beta}}\omega_{\beta}+f_{{\alpha}{\gamma}}\omega_{\gamma}\\
d\eta_{\gamma}+\eta_{\alpha}\wedge\eta_{\beta}=&f_{{\beta}{\alpha}}\omega_{\alpha}+f_{{\beta}{\gamma}}\omega_{\gamma}\\
-(d\eta_{\beta}+\eta_{\gamma}\wedge\eta_{\alpha})=&f_{{\gamma}{\alpha}}\omega_{\alpha}+f_{{\gamma}{\beta}}\omega_{\beta}\end{split}$$ This yields $d\eta=0$ and $d\eta_{\alpha}+\eta_{\beta}\wedge\eta_{\gamma}=f\omega_{\alpha}$ with $f$ not depending on ${\alpha}$. Hence, locally $\eta=d\sigma$ and we have $$d\omega_{\alpha}=\eta_{\gamma}\wedge\omega_{\beta}-\eta_{\beta}\wedge\omega_{\gamma}-
\frac{1}{3}d\sigma\wedge\omega_{\alpha},$$ an equation similar to \eqref{cc}. The rest and the converse are obvious.
It is still unknown if this result is true in dimension $8$ too.
For quaternion Hermitian manifolds, various adapted canonical connections were introduced by V. Oproiu, M. Obata and others. A unified treatment can be found in some recent papers of D. Alekseevsky, E. Bonan, S. Marchiafava (see *e.g.* [@AM2] and the references therein). In particular, in [@Ma3], one finds a characterization of hyperhermitian Weyl manifolds in terms of canonical connections and structure tensors of the subordinated quaternionic Hermitian structure.
We end this section with a characterization of quaternion Kähler manifolds among (non compact) quaternion Hermitian Weyl manifolds by means of submanifolds (compare with [@V2] for the complex case):
[@OP1] A quaternion Hermitian Weyl manifold $(M,g,$ $H)$ of dimension at least $8$ is quaternion Kähler if and only if through each point of it passes a totally geodesic submanifold of real dimension $4h
\geq 8$ which is quaternion Kähler with respect to the structure induced by $(g,H)$.
On a given submanifold of $M$, locally one can induce the metric $g$ and the quaternion Kähler one $g_i'$. Correspondingly, there are two second fundamental forms $b$ and $b_i'$. As $g$ and $g_i'$ are conformally related on $U_i$, the relation between $b$ and $b_i'$ is $$b'_i=b+\frac{1}{2}g\otimes T^\nu,$$ where $T=\theta^\sharp$ is the Lee vector field and $T^\nu$ is its part normal to the submanifold. Now let $x\in M$ and let $j:Q\rightarrow M$ be a quaternion Kähler submanifold through $x$ as stated. We have $j^*d\omega=0$. From $d\omega=\theta\wedge\omega$ we then derive $j^*\theta\wedge j^*\omega=0$. But rank $j^*\omega=4h\geq 8$, hence $j^*\theta=0$ meaning that $T$ is normal to $Q$: $T=T^\nu$. On the other hand, the same relation $j^*\theta=0$ shows that $Q\cap U_i$ is a quaternion Kähler submanifold of the quaternion Kähler manifold $(U_i,H_{|U_i},g'_i)$. As quaternion submanifolds of quaternion Kähler manifolds are totally geodesic, $Q\cap U_i$ is totally geodesic in $U_i$ with respect to $g'_i$. It follows $2b=-g\otimes T$ on $Q\cap U_i$. But $b$ is zero from the assumption ($Q$ is totally geodesic with respect to $g$). This yields $T=0$ on $Q\cap U_i$, in particular $T_x=0$. Since $x$ was arbitrary in $M$, $T=0$ on $M$ proving that $(M,g,H)$ is quaternion Kähler.
For the converse, just take $Q=M$.
We end this general presentation with a recent result which makes quaternion Hermitian Weyl manifolds interesting for physics. We first recall (referring to [@GP] and [@Iv] for details and further references) the notion of *quaternionic Kähler* (resp. *hyperkähler*) *manifold with torsion*, briefly QKT (resp. HKT) manifolds. Let $(M,g,H)$ be a quaternionic Hermitian (resp. hyperhermitian) manifold. It is called QKT (resp. HKT) manifold if it admits a metric quaternionic (resp. hypercomplex) connection $\nabla$ with totally skew symmetric torsion tensor which is, moreover, of type $(1,2)+(2,1)$ w.r.t. each local section $I_{\alpha}$, that is, it satisfies: $$T(X,Y,Z)=T(I_{\alpha}X,I_{\alpha}Y,Z)+T(I_{\alpha}X,Y,I_{\alpha}Z)+T(X,I_{\alpha}Y,I_{\alpha}Z),$$ where $T(X,Y,Z)=g(Tor^\nabla(X,Y),Z)$ and $Tor^\nabla(X,Y)=\nabla_XY-\nabla_YX-[X,Y]$. The holonomy of such a connection is contained in $\mathrm{Sp}(n)\cdot \mathrm{Sp}(1)$. These structures appear naturally on the target space of $(4,0)$ supersymmetric two-dimensional sigma models with Wess-Zumino term and seem to be of growing interest for physicists. Let us introduce the $1$-forms: $$t_{\alpha}(X)=-\frac 12\sum_{i=1}^{4n}T(X,e_i,I_{\alpha}e_i),\quad {\alpha}=1,2,3.$$ Then the $1$-form $t=I_{\alpha}t_{\alpha}$ is independent of the choice of $I_{\alpha}$. We can now state:
[@Iv] Every quaternion Hermitian Weyl (resp. hyperhermitian Weyl) manifold admits a QKT (resp. HKT) structure.
Conversely, a $4n$ dimensional ($n>1$) QKT manifold $(M,g,H,\nabla)$ is quaternion Hermitian Weyl if and only if: $$T=\frac{1}{2n+1}\sum_{\alpha}t_{\alpha}\wedge\omega_{\alpha}\;\; \text{and} \;\; dt=0.$$
The canonical foliations
------------------------
From now on $(M,c,H,D)$ will be compact, non globally conformal quaternion Kähler. According to Proposition \[par\], we let $g\in c$ be the Gauduchon metric whose Lee form $\theta:=\theta_g$ is parallel w.r.t. the Levi-Civita connection $\nabla:=\nabla^g$. Hence we look at the quaternion Hermitian manifold $(M,g,H)$. We also suppose $\theta\neq 0$, meaning that $M$ is not quaternion Kähler, see Corollary \[cor\]. We recall that, $\theta$ being parallel, we can suppose it normalised, *i.e.* $\mid\theta\mid=1$. We denote $T:=\theta^\sharp$ and let $T_{\alpha}=I_{\alpha}T$ and $\theta_{\alpha}=\theta\circ I_{\alpha}$.
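Note that $g(T_{\alpha},T_{\beta})=g(I_{\alpha}T,I_{\beta}T)=\delta_{{\alpha}{\beta}}$ and $\theta(T_{\alpha})=g(T,I_{\alpha}T)=\omega_{\alpha}(T,T)=0$, so $\{T,T_1,T_2,T_3\}$ is a local orthonormal family of vector fields.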
The following proposition gathers the computational formulae we need:
[@OP1] Let $(M,g,H)$ be a compact quaternion Hermitian Weyl manifold and $\{I_1,I_2,
I_3\}$ a local admissible basis of $H$ with $I_{\alpha}$ integrable and parallel (as in remark \[int\]). The following formulae hold good: $$\begin{gathered}
\label{a}{\mathcal{L}}_TI_{\alpha}=0,\quad {\mathcal{L}}_Tg=0,\quad {\mathcal{L}}_T\omega_{\alpha}=0,
\quad {\mathcal{L}}_T\omega=0\\
\label{b}\nabla I_{\alpha}=\frac{1}{2}\{Id\otimes\theta_{\alpha}-I_{\alpha}\otimes\theta-
\omega_{\alpha}\otimes T+g\otimes T_{\alpha}\}\\
\label{c}{\mathcal{L}}_{T_{\alpha}}I_{\alpha}=0, \quad {\mathcal{L}}_{ T_{\alpha}}I_{\beta}=I_{\gamma},
\quad {\mathcal{L}}_{ T_{\alpha}}g=0\\
\label{d}[T, T_{\alpha}]=0, \quad [T_{\alpha}, T_{\beta}]= T_{\gamma}\\
\label{e}\nabla \theta_{\alpha}=\frac{1}{2}\{\theta\otimes\theta_{\alpha}-\theta_{\alpha}\otimes\theta-
\omega_{\alpha}\}\\
\label{f}d\theta_{\alpha}=-\omega_{\alpha}+\theta\wedge\theta_{\alpha}\\
\label{g}{\mathcal{L}}_{T_{\alpha}}\omega_{\alpha}=0, \quad {\mathcal{L}}_{T_{\alpha}}\omega_{\beta}=\omega_{\beta}, \quad
{\mathcal{L}}_{T_{\alpha}}\omega=0\end{gathered}$$ where ${\mathcal{L}}$ is the operator of Lie derivative.
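For instance, \eqref{f} is just the skew-symmetrization of \eqref{e}: with the convention $(\theta\wedge\theta_{\alpha})(X,Y)=\theta(X)\theta_{\alpha}(Y)-\theta(Y)\theta_{\alpha}(X)$, one has $$d\theta_{\alpha}(X,Y)=(\nabla_X\theta_{\alpha})(Y)-(\nabla_Y\theta_{\alpha})(X)=\theta(X)\theta_{\alpha}(Y)-\theta_{\alpha}(X)\theta(Y)-\omega_{\alpha}(X,Y).$$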
The proof is by direct computation and mimics the corresponding one for Vaisman manifolds, see [@DO]. In particular, from \eqref{a} and \eqref{c}, we obtain according to [@P1]:
The vector fields $T$ and $T_{\alpha}$ are infinitesimal automorphisms of the quaternion Hermitian structure.
There are two interesting foliations on any compact quaternion Hermitian Weyl manifold:
- the $(4n-1)$-dimensional ${\mathcal{F}}$, spanned by the kernel of $\theta$ and
- the $4$-dimensional ${\mathcal{D}}$, locally generated by $T, T_1,T_2,T_3$.
Here are their properties:
[@OP2]\[ff\] On a compact quaternion Hermitian Weyl manifold, ${\mathcal{F}}$ is a Riemannian, totally geodesic foliation. Its leaves have an induced locally $3$-Sasakian structure.
The first statement is a consequence of the parallelism of $\theta$. As for the second one, the bundle ${\mathcal{K}}$ is locally generated by the (rescaled to be unitary) local vector fields $T_{\alpha}$. Indeed, they are Killing by the last equation of \eqref{c}; the first condition of definition \[3s\] is given by \eqref{d}; the transition functions of ${\mathcal{K}}$ are in $\mathrm{SO}(3)$ because the transition functions of $H$ are so; finally, condition 3) of the definition follows by a direct computation using \eqref{b}.
[@OP1] On a compact hyperhermitian Weyl manifold, ${\mathcal{F}}$ is a Riemannian, totally geodesic foliation whose leaves have an induced (global) $3$-Sasakian structure.
[@OP1]\[qq\] On a compact quaternion Hermitian Weyl manifold, the foliation ${\mathcal{D}}$ is Riemannian, totally geodesic. Its leaves are conformally flat $4$-manifolds $({\mathbb{H}}-\{0\})/G$, with $G$ a discrete subgroup of $\mathrm{GL}(1,{\mathbb{H}})\cdot \mathrm{Sp}(1)$ inducing an integrable (in the sense of G-structures) quaternionic structure.
Let $X$ be a leaf of ${\mathcal{D}}$ and let the superscript $'$ refer to restrictions of objects from $M$ to $X$. A local orthonormal basis of tangent vectors for $X$ is provided by $\{T',T_1',
T_2',T_3'\}$. As $X$ is totally geodesic, $\nabla'\theta'=0$ and a direct computation of the curvature tensor of the Weyl connection $R^D$ on this basis proves $R^D=0$ on $X$. Hence $X$ is conformally flat and the curvature tensor of the Levi-Civita connection is $$\label{cur}
\begin{split}
R'(U,Y)Z=&\theta'(U)\theta'(Z)Y-\theta'(Y)\theta'(Z)U-\theta'(U)g'(Y,Z)T'+\\
+&\theta'(Y)g'(U,Z)T'+g'(Y,Z)U-g'(U,Z)Y.
\end{split}$$ It follows that the Ricci tensor $Ric'=g'-\theta'\otimes\theta'$ is $g'$-parallel and, on the other hand, the sectional curvature is non-negative and strictly positive on any plane of the form $\{T'_{\alpha},T'_{\beta}\}$. Now recall that the universal Riemannian covering spaces of conformally flat Riemannian manifolds with parallel Ricci tensor were classified in [@L]. By the above discussion and the reducibility of $X$ (due to $\nabla'T'=0$), the only class fitting from Lafontaine’s classification is that with universal cover ${\mathbb{R}}^4-\{0\}$ equipped with the conformally flat metric written in quaternionic coordinate $(h{\overline}{h})^{-1}dh\otimes d{\overline}{h}$. We still have to determine the allowed deck groups.
Happily, Riemannian manifolds with such universal cover were studied in [@G1] and, in arbitrary dimension, in [@RV]. Here it is proved that equation \eqref{cur} forces the deck group of the covering to contain only conformal transformations of the form (in real coordinates) $\tilde{x}^i=\rho a^i_jx^j$ where $\rho>0$ and $(a_j^i)\in \mathrm{SO}(4)$. This leads to the following form of $G$: $$\label{54}
G=\{ht_0^k\; ;\; h\in G_0, k\in {\mathbb{Z}}\}$$ where $t_0$ is a conformal transformation of maximal modulus $0<\rho<1$ and $G_0$ is one of the finite subgroups of $\mathrm{U}(2)$ listed in [@Kat]. Finally, as $\mathrm{CO}^+(4)\simeq \mathrm{GL}(1,{\mathbb{H}})\cdot \mathrm{Sp}(1)$, $X$ has an induced integrable quaternionic structure.
[@OP1] \[qqq\] On a compact hyperhermitian Weyl manifold, the foliation ${\mathcal{D}}$ is Riemannian, totally geodesic. Its leaves, if compact, are complex Hopf surfaces (non-primary, in general) admitting an integrable hypercomplex structure.
Only the second statement has to be proved. It is clear that the leaves inherit a hyperhermitian Weyl, non hyperkähler (because $\theta\neq 0$) structure. The compact hyperhermitian surfaces are classified in [@Bo] and the only class having the stated property is that of Hopf surfaces.
As above, here [*integrable hypercomplex structure*]{} is intended in the sense of $G$-structures, i.e. of the existence of a local quaternionic coordinate such that the differential of the change of coordinate belongs to ${{\mathbb{H}}}^*$. For further use we recall the following:
[*(cf. [@Katt])*]{}\[lista\] A complex Hopf surface $S$ admits an integrable hypercomplex structure if and only if $S = ({{\mathbb{H}}}
-\{0\})/\Gamma$ where the discrete group $\Gamma$ is conjugate in $\mathrm{GL}(2, {{\mathbb{C}}})$ to any of the following subgroups $G \subset {{\mathbb{H}}}^*
\subset \mathrm{GL}(2, {{\mathbb{C}}})$:
$(i)$ $\; G={{\mathbb{Z}}}_m\times \Gamma_c$ with ${{\mathbb{Z}}}_m$ and $\Gamma_c$ both cyclic generated by left multiplication by $a_m=e^{2\pi i/m}$, $m \geq
1$, and $c\in{{\mathbb{C}}}^*$.
$(ii)$ $\; G=L\times \Gamma_c$, where $c\in{{\mathbb{R}}}^*$ and $L$ is one of the following: $D_{4m}$, the dihedral group, $m\geq2$, generated by the quaternion $j$ and $\rho_m = e^{\pi i/m}$; $\; T_{24}$, the tetrahedral group generated by $\zeta^2$ and ${1/ \sqrt 2}(\zeta^3 +\zeta^3j), ~\zeta =
e^{\pi i/4}$; $O_{48}$, the octahedral group generated by $\zeta$ and ${1/ \sqrt 2}(\zeta^3+\zeta^3j)$; $I_{120}$, the icosahedral group generated by $\epsilon^3,~j, ~{1/ \sqrt 5}[\epsilon^4 - \epsilon + (\epsilon^2 -
\epsilon^3)j], ~\epsilon = e^{2\pi i/ 5}.$
$(iii)$ $G$ generated by ${{\mathbb{Z}}}_m$ and $cj, m\geq 3,~ c\in{{\mathbb{R}}}^*$.
$(iv)$ $G$ generated by $D_{4m}$ and $c\rho_{2n}, ~c\in {{\mathbb{R}}}^*$ or by $T_{24}$ and $c\zeta, ~ c\in {{\mathbb{R}}}^*$.
Contrary to ${\mathcal{D}}$, the distribution ${\mathcal{D}}^\perp$ is not integrable. In fact, it plays the part of the contact distribution from contact geometry:
[@OP1]\[non\] On any compact quaternion Hermitian Weyl or hyperhermitian Weyl manifold, the distribution ${\mathcal{D}}^\perp$ is not integrable. Moreover, its integral manifolds are totally real and have maximal dimension $n-1$.
Note that a submanifold $N$ is an integral manifold of ${\mathcal{D}}^\perp$ if and only if $\theta$ and $\theta_{\alpha}$ vanish on $N$. In this case, also $d\theta_{\alpha}$ vanishes on $N$. Then \eqref{f} implies that $I_{\alpha}X$ is normal to $N$ for any $X$ tangent to $N$, *i.e.* $N$ is totally real. The statement about the dimension of $N$ is now obvious.
Examples of hyperhermitian Weyl manifolds having as leaves of ${\mathcal{D}}$ any of the surfaces in the above list can be obtained as follows: start with the standard hypercomplex Hopf manifold $S^1\times S^{4n-1}=({\mathbb{H}}^n-\{0\})/\Gamma_2$ (see example \[ex\]). Consider now the diagonal action of any $G$ in Kato's list on ${\mathbb{H}}^n$. The action is induced on the fibers of the projection $S^1\times S^{4n-1}\rightarrow {\mathbb{H}}P^{n-1}$, hence on the primary standard Hopf surface $S^1\times S^3$, obtaining the desired examples.
Structure theorems
------------------
In this section we use the properties of the foliations described above to clarify the structure of compact quaternion Hermitian Weyl and hyperhermitian Weyl manifolds whose foliations have compact leaves, in relation with the other geometries involved: Kähler, quaternion Kähler, locally and globally $3$-Sasakian.
### The link with (locally) $3$-Sasakian geometry
[@OP2]\[globb\] The class of compact quaternion Hermitian Weyl manifolds $M$ which are not quaternion Kähler and whose Lee field is quasi-regular (i.e. each point of $M$ has a cubic neighbourhood in which the orbit of $T$ enters a finite number of times) coincides with the class of flat principal $S^1$-bundles over compact locally $3$-Sasakian orbifolds $N=M/T$.
Let first $(M,g,H)$ be a compact quaternion Hermitian Weyl manifold as in the statement. The orbits of $T$ are closed, hence, after rescaling, the flow of $T$ defines an action of $S^1$ on $M$, by isometries because $T$ is Killing. The quotient space $N=M/T$ is an orbifold (a manifold if $T$ is regular) and, with respect to the induced metric $h$, the natural projection $\pi$ becomes a Riemannian submersion. Hence, for any leaf $N'$ of ${\mathcal{F}}$, $\pi_{|N'}:N'\rightarrow N$ is a Riemannian covering map. As, according to Proposition \[ff\], the leaves of ${\mathcal{F}}$ have a locally $3$-Sasakian structure, $(N,h)$ is locally $3$-Sasakian.
Conversely, consider a flat principal $S^1$-bundle $\pi:M\rightarrow N$ over a compact locally $3$-Sasakian manifold $(N,h)$ with local Killing fields $\xi_{\alpha}$. Choose a closed $1$-form $\theta$ on $M$ defining the flat connection of the bundle $\pi$ and define the metric $g:=\pi^*h+\theta\otimes\theta$. Also, define an almost quaternionic bundle $H$ on $M$ by its local bases: $$\label{j}
\begin{split}
I_{\alpha}=&-{\varphi}_{\alpha}-\xi_{\alpha}^\flat\otimes T, \quad \text{on horizontal fields}\\
I_{\alpha}T=&\xi_{\alpha}\end{split}$$ where ${\varphi}_{\alpha}=\nabla^h\xi_{\alpha}$ and $T=\theta^\sharp$. It is straightforward to check, as in the complex case (see [@DO], chapter 6) that $(M,g,H)$ is quaternion Hermitian Weyl with Lee form $\theta$.
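As an illustration of this check, each $I_{\alpha}$ defined by \eqref{j} squares to $-Id$: using the standard Sasakian identities ${\varphi}_{\alpha}^2Y=-Y+\xi_{\alpha}^\flat(Y)\xi_{\alpha}$, ${\varphi}_{\alpha}\xi_{\alpha}=0$ and $\xi_{\alpha}^\flat\circ{\varphi}_{\alpha}=0$ (valid for any unit Killing field satisfying condition (iii) of Definition \[3s\]), one gets, for horizontal $X$, $$I_{\alpha}^2X=-I_{\alpha}({\varphi}_{\alpha}X)-\xi_{\alpha}^\flat(X)I_{\alpha}T={\varphi}_{\alpha}^2X+\xi_{\alpha}^\flat({\varphi}_{\alpha}X)T-\xi_{\alpha}^\flat(X)\xi_{\alpha}=-X,\qquad I_{\alpha}^2T=I_{\alpha}\xi_{\alpha}=-T.$$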
[@OP1]\[hom\] The class of compact hyperhermitian Weyl manifolds, not hyperkähler and having a quasi-regular (resp. regular) $T$, coincides with the class of flat principal $S^1$-bundles over compact $3$-Sasakian orbifolds (resp. manifolds).
### The link with quaternion Kähler geometry
We now describe the leaf space of the foliation ${\mathcal{D}}$, when it exists.
[@OP1], [@PPS]\[ts2\] Let $(M,g,H)$ be a compact quaternion Hermitian Weyl (resp. hyperhermitian Weyl) manifold, non quaternion Kähler (resp. non hyperkähler) whose foliation ${\mathcal{D}}$ has compact leaves. Then the leaf space $P=M/{\mathcal{D}}$ is a compact quaternion Kähler orbifold with positive scalar curvature, the projection is a Riemannian, totally geodesic submersion and a fibre bundle map with fibres as described in Proposition \[qq\] (resp. \[qqq\]).
In the local case of a quaternion Hermitian Weyl $M$, we have to explain how to project the structure of $M$ over $P$. The key point is that locally, $H$ has admissible bases formed by $\nabla$-parallel (hence integrable) complex structures. Then formulae \eqref{a} and \eqref{c} show that $H$ is projectable. The foliation being Riemannian, $g$ is also projectable. The compatibility of the projected quaternion bundle with the projected metric is clear. To show that the projected structure is quaternion Kähler, let $\omega_P$ be the $4$-form of the projected structure. As the projection is a totally geodesic Riemannian submersion, $\omega_P$ coincides with the restriction of $\omega$ to basic vector fields on $M$. Hence, it is enough to show that $\nabla\omega=0$ on basic vector fields. But $\nabla\omega=\sum_{\alpha}\nabla\omega_{\alpha}\wedge\omega_{\alpha}+
\omega_{\alpha}\wedge\nabla\omega_{\alpha}$ and the result follows from equation \eqref{b}. The scalar curvature of $(P,g)$ is easily computed using the O'Neill formulae.
The global case of a hyperhermitian Weyl $M$ now follows.
The above fibration can never be trivial, according to Proposition \[non\].
Let now $M$ be hyperhermitian Weyl, ${\mathcal{T}}$ be the foliation generated by the vector field $T$ and ${\mathcal{V}}$ the $2$-dimensional foliation generated by $T$ and $JT$, where $J$ is a fixed compatible global complex structure belonging to $H$. Theorem \[ts2\], together with the structure of $3$-Sasakian manifolds described in section \[3ss\], furnishes the following structure theorem:
[@OP1], [@OP2]\[diag\] Let $(M,g,H)$ be a compact hyperhermitian Weyl manifold, non hyperkähler, such that the foliations ${\mathcal{D}}$, ${\mathcal{V}}$, ${\mathcal{T}}$ and ${\mathcal{K}}$ have compact leaves. There exists the following commutative diagram of fibre bundles and Riemannian submersions in the category of orbifolds:
(Diagram: the Riemannian submersions $M\rightarrow N$ with fibre $S^1$, $M\rightarrow Z$ with fibre $T^1_{\mathbb{C}}$, $M\rightarrow P$, $N\rightarrow Z$ with fibre $S^1$, $N\rightarrow P$ with fibre $S^3/G$, and $Z\rightarrow P$ with fibre $S^2$.)
Here $N$ is globally $3$-Sasakian. The fibres of $M \rightarrow P$ are Kato’s integrable hypercomplex Hopf surfaces $(S^1 \times S^3)/G$, not necessarily primary and not necessarily all homeomorphic if $M$ is hyperhermitian Weyl. The $S^1$-bundle $N\rightarrow Z$ is a Boothby-Wang fibration.
Note that all arrows appearing in the diagram are canonical, except for $M \rightarrow Z$, which depends on the choice of the compatible global complex structure on $M$. However, different choices of this complex structure produce analytically equivalent complex manifolds $Z$.
Diagram \[diag\] also holds if $\dim M = 8$. In this case $P$ is still Einstein by the above discussion. The integrability of the complex structure on its twistor space implies it is also self-dual (cf. [@Be]). Then just recall that a $4$-dimensional manifold is usually defined to be quaternionic Kähler if it is Einstein and self-dual.
For the hyperhermitian Weyl manifold $M=S^1\times S^{4n-1}$, diagram \[diag\] becomes the well-known:
(Diagram: $S^1\times S^{4n-1}\rightarrow S^{4n-1}$ with fibre $S^1$, $S^1\times S^{4n-1}\rightarrow {\mathbb{C}}P^{n-1}$ with fibre $T^1_{\mathbb{C}}$, $S^1\times S^{4n-1}\rightarrow {\mathbb{H}}P^{n-1}$, $S^{4n-1}\rightarrow {\mathbb{C}}P^{n-1}$ with fibre $S^1$, $S^{4n-1}\rightarrow {\mathbb{H}}P^{n-1}$ with fibre $S^3$, and ${\mathbb{C}}P^{n-1}\rightarrow {\mathbb{H}}P^{n-1}$ with fibre $S^2$)
which was the model for the general one. Also, examples of quaternion Hermitian Weyl manifolds will be obtained by considering appropriate quotients of the manifolds in the vertices of this diagram.
It is proved in [@BGM0] that in every dimension $4k-5, k\geq 3$ there are infinitely many distinct homotopy types of complete inhomogeneous 3-Sasakian manifolds. Thus, by simply making the product with $S^1$, we obtain infinitely many non-homotopically equivalent examples of compact hyperhermitian Weyl manifolds.
### Some topological consequences of diagram \[diag\]
A first consequence of the diagram \[diag\] concerns cohomology. Note first that the property $\nabla\theta = 0$ implies the vanishing of the Euler characteristic of $M$. Then, applying twice the Gysin sequence in the upper triangle one finds the relations between the Betti numbers of $M$ and $Z$ : $$\begin{gathered}
b_i(M) = b_i(Z) + b_{i-1}(Z) - b_{i-2}(Z) - b_{i-3}(Z)\hspace{.1 in}
(0 \leq i \leq 2n-1),\\
b_{2n}(M) = 2\left [ b_{2n-1}(Z) - b_{2n-3}(Z)\right ].\end{gathered}$$ On the other hand, since $P$ has positive scalar curvature, both $P$ and its twistor space $Z$ have zero odd Betti numbers, cf. [@Be]. The Gysin sequence of the fibration $Z \rightarrow P$ then yields: $$b_{2p}(Z) = b_{2p}(P) + b_{2p-2}(P).$$ Together with the previously found relations this implies:
[@OP1], [@OP2] Let $M$ be a compact hyperhermitian Weyl manifold satisfying the assumptions of Theorem \[diag\]. Then the following relations hold good: $$\begin{gathered}
b_{2p}(M) = b_{2p+1}(M) = b_{2p}(P) - b_{2p-4}(P)\hspace{.1 in} (0 \leq 2p
\leq 2n-2),\\
b_{2n}(M) =0,\\
\sum_{k=1}^{n-1}k(n-k+1)(n-2k+1)b_{2k}(M)=0.\end{gathered}$$ (Poincaré duality gives the corresponding equalities for $2n+2 \leq 2p \leq
4n$). In particular $b_1(M) = 1$. Moreover, if $n$ is even, $M$ cannot carry any quaternion Kähler metric.
The last identity is obtained by applying S. Salamon's constraints on compact positive quaternion Kähler manifolds to the same diagram (cf. [@GS]).
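As a check on the model case $M=S^1\times S^{4n-1}$, for which $P={\mathbb{H}}P^{n-1}$: since $b_{4k}(P)=1$ for $0\leq k\leq n-1$ and all other Betti numbers of $P$ vanish, the first relation gives $b_0(M)=b_1(M)=1$ and $b_{2p}(M)=b_{2p+1}(M)=0$ for $2\leq 2p\leq 2n-2$, as expected for a product of spheres.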
We obtain in particular $b_{2p-4}(P) \leq b_{2p}(P)$ for $0 \leq 2p
\leq 2n-2.$ Since any compact quaternion Kähler $P$ with positive scalar curvature can be realized as the quaternion Kähler base of a compact quaternion Hermitian Weyl manifold $M$, this implies, in the positive scalar curvature case, the Kraines - Bonan inequalities for Betti numbers of compact quaternion Kähler manifolds (cf. [@Be]).
$b_1(M) = 1$ is a much stronger restriction on the topology of compact quaternion Hermitian Weyl manifolds than the one available in the larger class of compact complex Vaisman (generalized Hopf) manifolds. For the latter, the only restriction is that $b_1$ is odd, and the induced Hopf bundles over compact Riemann surfaces of genus $g$ provide examples of Vaisman (generalized Hopf) manifolds with $b_1 = 2g+1$ for any $g$, cf. [@Va3].
The properties $b_1 = 1$ and $b_{2n} = 0$ have the following consequences:
Let $(M, I_1, I_2, I_3)$ be a compact hypercomplex manifold that admits a locally and non globally conformal hyperKähler metric. Then none of the compatible complex structures $J = a_1I_1 + a_2I_2 + a_3I_3$, $a_1^2 + a_2^2 +
a_3^2 = 1$, can support a Kähler metric. In particular, $(M, I_1, I_2, I_3)$ does not admit any hyperKähler metric.
Let $M$ be a $4n$-dimensional $\mathcal C^\infty$ manifold that admits a locally and non globally conformal hyperKähler structure $(I_1, I_2, I_3,g)$. Then, for $n$ even, $M$ cannot admit any quaternion Kähler structure and, for $n$ odd, any quaternion Kähler structure of positive scalar curvature.
### Homogeneous compact hyperhermitian Weyl manifolds
In the complex case, a complete classification of compact homogeneous Vaisman manifolds is still lacking. By contrast, for compact homogeneous hyperhermitian Weyl manifolds a precise classification may be obtained.
A hyperhermitian Weyl manifold $(M,[g],H,D)$ is homogeneous if there exists a Lie group which acts transitively and effectively on the left on $M$ by hypercomplex isometries.
The homogeneity implies the regularity of the canonical foliations:
[@OP1] On a compact homogeneous hyperhermitian Weyl manifold the foliations $\mathcal D$, $\mathcal V$ and $\mathcal T$ are regular and, in diagram \[diag\], $N$, $Z$, $P$ are homogeneous manifolds, compatible with the respective structures.
Fix a compatible complex structure $J \in H$ on $M$. Then $(M,g,J)$ is a homogeneous Vaisman manifold and by Theorem 3.2 in [@Va4] we have the regularity of both the foliations $\mathcal V_J$ and $\mathcal T$. Therefore $M$ projects on homogeneous manifolds $Z_J$ and $N$. In particular the projections of $I_{\alpha}T$ on $N$ are regular Killing vector fields. Then Lemma 11.2 in [@Tan] assures that the 3-dimensional foliation spanned by the projections of $I_1T, I_2T, I_3T$ is regular. This, in turn, implies that $P$ is a homogeneous manifold, thus $\mathcal D$ is regular on $M$.
On the other hand, compact homogeneous $3$-Sasakian manifolds have been classified in [@BGM0]. We use this classification together with Corollary \[hom\] to derive:
[@OP1] The class of compact homogeneous hyperhermitian Weyl manifolds coincides with that of flat principal $S^1$-bundles over one of the 3-Sasakian homogeneous manifolds: $S^{4n-1}$, ${{\mathbb{R}}}P^{4n-1}$, the flag manifolds $\mathrm{SU}(m)/\mathrm{S}(\mathrm{U}(m-2)\times \mathrm{U}(1)),
m\geq 3$, $\mathrm{SO}(k)/(\mathrm{SO}(k-4)\times \mathrm{Sp}(1)), k\geq 7$, the exceptional spaces $G_2/\mathrm{Sp}(1)$, $F_4/\mathrm{Sp}(3)$, $E_6/\mathrm{SU}(6)$, $E_7/\mathrm{Spin}(12)$, $E_8/E_7$.
The flat principal $S^1$-bundles over such a $3$-Sasakian homogeneous manifold $G/K$ are characterized by having zero or torsion Chern class $c_1 \in H^2(G/K;{\mathbb{Z}})$ and are classified by it. The integral cohomology group $H^2$ of the 3-Sasakian homogeneous manifolds can be computed by looking at the long homotopy exact sequence $$...\rightarrow \pi_2(K) \rightarrow \pi_2(G) \rightarrow \pi_2(G/K)
\rightarrow \pi_1(K)\rightarrow \pi_1(G) \rightarrow ...$$ for the 3-Sasakian homogeneous manifolds $G/K$ listed above. Since $\pi_2(G) = 0$ for any compact Lie group $G$, one obtains the following isomorphisms (cf. [@BGM2]): $$H^2\Big(\frac {\mathrm{SU}(m)}{\mathrm{S}(\mathrm{U}(m-2)\times \mathrm{U}(1))}\Big) \cong {\mathbb{Z}},
\qquad H^2({{\mathbb{R}}}P^{4n-1}) \cong {\mathbb{Z}}_2$$ and $H^2(G/K) = 0$ for all the other 3-Sasakian homogeneous manifolds. Hence:
[@OP1] Let $M$ be a compact homogeneous hyperhermitian Weyl manifold. Then $M$ is one of the following:
$(i)$ A product $(G/K) \times S^1$, where $G/K$ can be any of the 3-Sasakian homogeneous manifolds in the list:\
$S^{4n-1}$, ${\mathbb{R}}P^{4n-1}$, $\mathrm{SU}(m)/\mathrm{S}(\mathrm{U}(m-2)\times \mathrm{U}(1)), m\geq 3$, $\mathrm{SO}(k)/(\mathrm{SO}(k-4)\times \mathrm{Sp}(1)), k\geq 7$, $G_2/\mathrm{Sp}(1)$, $F_4/\mathrm{Sp}(3)$, $E_6/\mathrm{SU}(6)$, $E_7/\mathrm{Spin}(12)$, $E_8/E_7$.
$(ii)$ The Möbius band, i.e. the unique non trivial principal $S^1$-bundle over ${{\mathbb{R}}}P^{4n-1}$.
For example in dimension $8$ one obtains only the following spaces: $S^7\times S^1$, ${{\mathbb{R}}}P^7 \times S^1$, $\{\mathrm{SU}(3)/\mathrm{S}(\mathrm{U}(1)\times \mathrm{U}(1))\}\times S^1$ and the Möbius band over ${{\mathbb{R}}}P^7$. The first exceptional example appears in dimension $12$: the trivial bundle $\{G_2/\mathrm{Sp}(1)\}
\times S^1$ whose $3$-Sasakian base is diffeomorphic to the Stiefel manifold $V_2({{\mathbb{R}}}^7)$ of the orthonormal 2-frames in ${{\mathbb{R}}}^7$.
### A hyperhermitian Weyl finite covering of a quaternion Hermitian Weyl manifold
In general, quaternion Kähler manifolds are not finitely covered by non-simply connected hyperkähler ones. But in the locally conformal quaternion Kähler case we have:
[@OP2] Let $M$ be a compact quaternion Hermitian Weyl manifold which is not quaternion Kähler. If the leaves of ${\mathcal T}$ are compact, then $M$ admits a finite covering space carrying a structure of a hyperhermitian Weyl manifold.
Let first $T$ be a regular vector field. Accordingly, $N=M/T$ is a compact locally $3$-Sasakian *manifold*, Einstein with positive scalar curvature. By Myers' theorem, its Riemannian universal cover $\tilde N$ is compact and $\pi_1(N)$ is finite. Hence, the pull-back $\tilde {\mathcal{K}}\rightarrow \tilde N$ (see Corollary \[pul\]) is trivial and $\tilde N$ is globally $3$-Sasakian. Let now $\tilde M\rightarrow \tilde N$ be the pull-back of the $S^1$-bundle $M\rightarrow N$: being a flat principal circle bundle over a $3$-Sasakian manifold, Corollary \[hom\] provides a hyperhermitian Weyl structure on $\tilde M$. By construction, this structure projects onto the quaternion Hermitian Weyl structure of $M$.
Under the weaker assumption that ${\mathcal{T}}$ has only compact leaves (i.e. it is a quasi-regular foliation), the leaf space $N$ is a compact orbifold with the same Riemannian properties as above. Its universal orbifold covering $\tilde N^{orb}$ is a complete Riemannian orbifold with positive Ricci curvature. According to Corollary 21 in [@Borz], the diameter of $\tilde N^{orb}$ is finite. Hence $\tilde N^{orb}$ is compact and $\pi_1^{orb}(N)$ is finite. Now the pull-back of ${\mathcal{K}}\rightarrow N$ to $\tilde N^{orb}$ is again trivial and, as in the manifold case, one shows that $\tilde N^{orb}$ is a globally $3$-Sasakian orbifold. The proof then continues as above. Note that the total space $\tilde M$ is again a *manifold*.
Examples
--------
Using the structure theorems, we can now describe a large class of examples of quaternion Hermitian Weyl manifolds.
Recall first that a real $4$-dimensional Hopf manifold is an integrable quaternion Hopf manifold, *i.e.* a quotient $({\mathbb{H}}-\{0\})/G=
({\mathbb{R}}^4-\{0\})/G$, where $G$ is a discrete subgroup of $\mathrm{CO}(4)\sim \mathrm{GL}(1,{\mathbb{H}})\cdot \mathrm{Sp}(1)$. The metric $(h{\overline}{h})^{-1}dh\otimes d{\overline}{h}$, globally conformal with the flat one on ${\mathbb{H}}$, is invariant w.r.t. the action of $G$. This proves:
[@OP1] Any real $4$-dimensional Hopf manifold is a compact quaternion Hermitian Weyl manifold.
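A minimal check of the invariance used in this proof (our addition, phrased for the underlying Riemannian metric $|dh|^2/(h{\overline}{h})$, and assuming an element of $G\subset \mathrm{GL}(1,{\mathbb{H}})\cdot \mathrm{Sp}(1)$ acts as $h\mapsto ahq$ with $a\in {\mathbb{H}}^*$ and $q\in \mathrm{Sp}(1)$): since the quaternionic norm is multiplicative and $|q|=1$, $$h'{\overline}{h}' = a\,hq\,{\overline}{q}\,{\overline}{h}\,{\overline}{a} = |a|^2\, h{\overline}{h}, \qquad |dh'|^2 = |a\,dh\,q|^2 = |a|^2\,|dh|^2,$$ so that $(h'{\overline}{h}')^{-1}|dh'|^2 = (h{\overline}{h})^{-1}|dh|^2$, which is the invariance claimed above.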
We generalize this construction to higher dimensions by considering the quaternion Hopf manifold $M=({\mathbb{H}}^n-\{0\})/G$, with $G$ of the form , acting diagonally on the quaternionic coordinates $(h^1,...,h^n)$. The metric on $M$ will now be the projection of $(\sum_ih^i{\overline}{h}^i)^{-1}\sum_idh^i\otimes d{\overline}{h}^i$ and is denoted by $g$. Moreover, we shall assume the resulting $4$-dimensional foliation ${\mathcal{D}}$ to have compact leaves. We may state:
[@OP1] The quaternion Hopf manifold $M=({\mathbb{H}}^n-\{0\})/G$ endowed with the metric $g$ is a compact quaternion Hermitian Weyl manifold. The leaves of the foliation ${\mathcal{D}}$ are integrable quaternion Hopf $4$-manifolds. The leaf space $P=M/{\mathcal{D}}$ is a quaternion Kähler orbifold quotient of ${\mathbb{H}}P^{n-1}$ whose set of singular points is, generally, ${\mathbb{R}}P^{n-1}\subset {\mathbb{H}}P^{n-1}$. Moreover:
If $G$ is one of the groups in Kato’s list (see Theorem \[lista\]), then $M$ is hyperhermitian Weyl, the leaves of ${\mathcal{D}}$ are integrable Hopf surfaces, and $P$ is ${\mathbb{H}}P^{n-1}$.
The result follows from the fact that the group $G$, being a discrete subgroup of $\mathrm{GL}(n,{\mathbb{H}})\cdot \mathrm{Sp}(1)$, preserves the quaternionic structure of the universal covering of $M$. The structure of the leaves was discussed in Proposition \[qq\]. Note that $\mathrm{GL}(n,{\mathbb{H}})$ acts on the left and $\mathrm{Sp}(1)$ acts on the right on the quaternionic coordinates, hence the induced action of $G$ on ${\mathbb{H}}P^{n-1}$ fixes the points which can be represented in real coordinates. If $G$ belongs to Kato’s list, then it is a subgroup of $\mathrm{GL}(n,{\mathbb{H}})$ and preserves the hyperhermitian structure of the covering, inducing the same structure on the leaves.
[@PPS], [@OP1] For $n=2$, let $G$ be the cyclic group generated by $(h^0,h^1)\mapsto
(2e^{2\pi i/3}h^0, 2e^{4\pi i/3}h^1)$ and $M=({\mathbb{H}}^2-\{0\})/G$. Here the leaf space $P=M/{\mathcal{D}}$ is a ${\mathbb{Z}}_3$ quotient of ${\mathbb{H}}P^1$. The leaves of ${\mathcal{D}}$ are standard Hopf surfaces $S^1\times S^3$ over the regular points of the orbifold $P$ and are non-primary Hopf surfaces $(S^1\times S^3)/{\mathbb{Z}}_3$ over the two singular points with homogeneous coordinates $[1:0]$ and $[0:1]$ of $P$.
[99]{}
D.V. Alekseevsky, E. Bonan, S. Marchiafava, *On some structure equations for almost quaternionic Hermitian manifolds*, in: Complex structures and vector fields, 114-134, World Scientific (1996). D.V. Alekseevsky, S. Marchiafava, *Almost quaternionic Hermitian and quasi-Kähler manifolds*, Proceedings of the “International Workshop on Almost Complex Structures”, Sophia, 22-25 August 1992. D.V. Alekseevsky, S. Marchiafava, *Quaternionic structures on a manifold and subordinated structures*, Ann. Mat. Pura e Appl. [**171**]{} (1996), 205-273. A. Besse, Einstein manifolds, Springer-Verlag (1987). M. Berger, *Remarques sur le groupe d’holonomie des variétés riemanniennes*, C. R. Acad. Sci. Paris, [**262**]{}, (1966), 1316-1318. D.E. Blair, Contact manifolds in Riemannian geometry, L.N.M. 509, Springer (1976). J.E. Borzellino, *Orbifolds of maximal diameter*, Indiana Math. J. [**42**]{} (1993), 37-53. Ch.P. Boyer, *A note on hyperhermitian four-manifolds*, Proc. Amer. Math. Soc., [**102**]{} (1988), 157-164. Ch.P. Boyer, K. Galicki, *$3$-Sasakian manifolds*, in “Surveys in differential geometry: Essays on Einstein Manifolds”, M. Wang and C. LeBrun eds., International Press 2000, 123-186. Ch.P. Boyer, K. Galicki, *On Sasakian-Einstein Geometry*, . Internat. J. Math. [**11**]{} (2000), no. 7, 873–909. Ch.P. Boyer, K. Galicki, B. Mann, [*The geometry and topology of 3-Sasakian manifolds*]{}, J. Reine Angew. Math., [**455**]{} (1994), 183-220. Ch.P. Boyer, K. Galicki, B. Mann, *Hypercomplex structures on Stiefel manifolds*, Ann. of Global Anal. Geom. [**14**]{} (1996), 81-105. Ch.P. Boyer, K. Galicki, B. Mann, E. Rees, *Compact $3$-Sasakian $7$-manifolds with arbitrary second Betti number*, Invent. Math. [**131**]{} (1998), 321-344. D. Calderbank, H. Pedersen, *Einstein-Weyl geometry*, in “Surveys in differential geometry: Essays on Einstein Manifolds”, M. Wang and C. LeBrun eds., International Press 2000, 387-423.
S. Dragomir, L. Ornea, [*Locally conformal K[ä]{}hler geometry*]{}, Progress in Math. [**155**]{}, Birkh[ä]{}user (1998). G. B. Folland, *Weyl manifolds*, J. Diff. Geom., [**4**]{} (1970), 143-153. P. Gauduchon, [*La $1$-forme de torsion d’une vari[é]{}t[é]{} hermitienne compacte*]{}, Math. Ann., [**267**]{} (1984), 495-518. P. Gauduchon, *Structures de Weyl-Einstein, espaces de twisteurs et variétés de type $S^1 \times
S^3$*, J. Reine Angew. Math. [**469**]{} (1995), 1-50. K. Galicki, S. Salamon, *Betti numbers of $3$-Sasakian manifolds*, Geom. Dedicata, [**63**]{} (1996), 45-68. G. Grantcharov, Y.S. Poon, *Geometry of hyper-Kähler connections with torsion*, Comm. Math. Physics [**213**]{} (2000), 19-37. T. Higa, *Weyl manifolds and Einstein-Weyl manifolds*, Comm. Math. Sancti Pauli, [**42**]{} (1993), 143-160. N. J. Hitchin, *On compact four-dimensional Einstein manifolds*, J. Diff. Geom., [**9**]{} (1974), 435-442. S. Ivanov, *Geometry of quaternionic Kähler connections with torsion*, arXiv:math.DG/0003214. T. Kashiwada, *A note on Riemannian manifolds with $3$-Sasakian structure*, Nat. Sci. Reps. Ochanomizu Univ., [**22**]{} (1971), 1-2. Ma. Kato, *Topology of Hopf surfaces*, J. Math. Soc. Japan, [**27**]{} (1975), 222-238. Ma. Kato, *Compact differentiable $4$-folds with quaternionic structure*, Math. ann., [**248**]{} (1980), 79-96. V. Kraines, *Topology of quaternionic manifolds*, Trans. Amer. Math. Soc., [**122**]{} (1966), 357-367. J. Lafontaine, *Remarques sur les variétés conformément plates*, Math. Ann. [**259**]{} (1982), 313-319. S. Marchiafava, *Su alcune sottovarietà che ha interesse considerare in una varietà Kähleriana quaternionale*, Rend. mat., [**VII**]{}, (10), (1990), 493-529. S. Marchiafava, *A report on almost quaternionic Hermitian manifolds*, An. Şt. Univ. “Ovidius” Constanţa, [**3**]{} (1995), 55-64. K. Nomizu, *On local and global existence of Killing vector fields*, Ann. of Math. [**72**]{} (1960), 105-120. L. Ornea, P. Piccinni, *Locally conformal [Kähler]{}structures in quaternionic geometry*, Trans. Am. Math. Soc. [**349**]{} (1997), 641-655. L. Ornea, P. Piccinni, *Compact hyperhermitian-Weyl and quaternion Hermitian-Weyl manifolds*, Ann. Global Anal. Geom. [**16**]{} (1998), 383-398. [*Erratum*]{}, same journal, [**18**]{} (2000), 105-106. H. Pedersen, Y. S. Poon, A. Swann, *The Einstein-Weyl equations in complex and quaternionic geometry*, Diff. Geom. Appl. [**3**]{} (1993), 309-321. H. Pedersen, A. Swann, [*Einstein-Weyl Geometry, Bach tensor and conformal scalar curvature*]{}, J. Reine Angew. Math., [**441**]{} (1993), 99-113. P. Piccinni, *On the infinitesimal automorphisms of quaternionic structures*, J. Math. pures Appl. [**72**]{} (1993), 593-605. P. Piccinni, *Manifolds with local quaternion Kähler structures*, Rend. Mat. [**17**]{} (1997), 679-696. P. Piccinni, *The Geometry of positive locally quaternion Kähler manifolds*, Ann. Global Anal. Geom. [**16**]{} (1998), 255-272. M. Pontecorvo, *Complex structures on quaternionic manifolds*, Diff. Geom. Appl. [**4**]{} (1992), 163-177. S. M. Salamon, *Quaternionic Kähler manifolds*, Invent. Math., [**67**]{} (1982), 143-171. P. Scott, [*The Geometry of $3$-manifolds*]{}, Bull. London Math. Soc. [**15**]{},(1983), 401-487. S. Tanno, *Killing vectors on contact Riemannian manifolds and fiberings related to the Hopf fibration*, Tôhoku Math. J. [**23**]{} (1971), 313-333. A. Swann, *Hyper[Kähler]{} and quaternionic [Kähler]{} geometry*, Math. Ann., [**289**]{} (1991), 421-450. I. Vaisman, *A geometric condition for a locally conformally Kähler manifold to be Kähler*, Geom. Dedicata, [**10**]{} (1981), 129-134. I. Vaisman, *Generalized Hopf manifolds*, Geom. Dedicata, [**13**]{} (1982), 231-255. I. Vaisman, *A survey of generalized Hopf manifolds*, Rend. Sem. Mat. Torino, Special issue (1984), 205-221. I. Vaisman, C. Reischer, *Local similarity manifolds*, Ann. Mat. Pura Appl. [**135**]{} (1983), 279-292.
[^1]: The author is a member of EDGE, Research Training Network HPRN-CT-2000-00101, supported by The European Human Potential Programme
---
abstract: 'We have used the Wide Field and Planetary Camera 2 on board the Hubble Space Telescope to obtain $V$ and $I$ images of seven nearby galaxies. For each, we have measured a distance using the tip of the red giant branch (TRGB) method. By comparing the TRGB distances to published Cepheid distances, we investigate the metallicity dependence of the Cepheid period-luminosity relation. Our sample is supplemented by 10 additional galaxies for which both TRGB and Cepheid distances are available in the literature, thus providing a uniform coverage in Cepheid abundances between 1/20 and 2 (O/H)$_\odot$. We find that the difference between Cepheid and TRGB distances decreases monotonically with increasing Cepheid abundance, consistent with a mean metallicity dependence of the Cepheid distance moduli of ${{\delta{(m - M)}}/{\delta[O/H]}} = {-0.24 \pm 0.05}$ mag dex$^{-1}$.'
author:
- Shoko Sakai
- Laura Ferrarese
- 'Robert C. Kennicutt, Jr.'
- Abhijit Saha
title: 'The Effect of Metallicity on Cepheid-Based Distances [^1]'
---
Introduction
============
In the past decade, the uncertainty in the value of the Hubble constant based on the local distance scale ladder has decreased from roughly a factor of two to $\pm$10–15% (e.g., Mould, Kennicutt, & Freedman 2000). This breakthrough was made possible by the determination of HST-based Cepheid distances to 25 nearby galaxies, carefully selected to provide an accurate calibration for a variety of secondary distance indicators (Kennicutt et al. 1995, Saha 1997). The dominant source of systematic errors in the distance scale as a whole and in H$_0$ in particular resides in the calibration of the Cepheid period-luminosity relation, most notably its zero point (which is tied to the distance to the Large Magellanic Cloud), and its possible dependence (both in zero point and slope) on the metallicity of the variable stars. Reducing these extant errors is imperative. For instance, the recent WMAP analysis of fluctuations in the cosmic microwave background has produced a value of the Hubble constant $H_0 = 71 \pm 4$ km s$^{-1}$ Mpc$^{-1}$ (Bennett et al. 2003). While in perfect agreement with the local value derived by the HST Key Project on the Extragalactic Distance Scale (72 $\pm$ 8 km s$^{-1}$ Mpc$^{-1}$; Freedman et al. 2001, hereafter F01), a tighter constraint on the latter would allow a more meaningful comparison of these two estimates.
The uncertainty in the metallicity dependence of the Cepheid period-luminosity (PL) relation is particularly troublesome. The galaxies used by the Key Project span a range of gas-phase metal abundances of roughly a factor of 50 ($-1.5 \le [O/H] \le 0.3$; Ferrarese et al. 2000a), wide enough that a systematic change of 0.5 mag in the Cepheid distance moduli per factor 10 increase in abundance would by itself introduce a systematic error of approximately 10% in the Key Project value of H$_0$ (Kennicutt et al. 1998, hereafter K98; Mould et al. 2000; F01). Not only are there significant metallicity offsets between Cepheids in the Key Project galaxies and those in the Large Magellanic Cloud (LMC), on which the PL calibration itself rests, but such offsets also differ among the samples used to calibrate secondary distance indicators (e.g., SNe Ia, fundamental plane). This can potentially lead to systematic offsets in the values of $H_0$ derived from individual calibrators. A lack of constraints on the metallicity dependence of the Cepheid PL relation also hampers the interpretation of fully external tests of the zero point of the Cepheid distance scale (e.g., Herrnstein et al. 1999).
The magnitude of the metallicity dependence of the Cepheid PL relation is, unfortunately, poorly constrained, both theoretically and observationally. Following K98, we describe this dependence in terms of the parameter $\gamma$:
$$\gamma = {\delta {{(m - M)}_0}} / {\delta{\log Z}},$$
where $\delta{{(m-M)}_0} = (m-M)_{\mbox{0,Z}} - (m-M)_{\mbox{0,LMC}}$, the difference between the distance modulus obtained with and without the metallicity correction, and $\delta{\log Z} = (\log Z)_{\mbox{LMC}} -
(\log Z)_{\mbox{galaxy}}$. Note that $\gamma$ reflects the net effect on [*distance determination*]{}. Metallicity can affect both the luminosity of a Cepheid and the color boundaries of the instability strip. Since a mean period-color relation is used to deduce reddening and extinction, the second of these effects can be the dominating influence. Thus $\gamma$ really depends on the specific passbands chosen. In this paper we are mainly concerned with Cepheid distances based on $V$ and $I$ observations (and hence with the corresponding $\gamma$), which cover essentially all of the HST measurements. Theoretical models of Cepheids predict metallicity dependences ranging from near zero (Saio & Gautschy 1998; Alibert et al. 1999) to significant dependences (in either direction!) of up to $\pm$0.3 mag dex$^{-1}$ (Chiosi, Wood, & Capitanio 1993; Bono et al. 1999; Sandage, Bell, & Tripicco 1999; Caputo et al. 2000; Fiorentino et al. 2002). Recent empirical determinations of $\gamma$ have yielded an even larger range of values. The HST Key Project attempted to constrain $\gamma$ in two ways, by comparing measured PL relations for two Cepheid fields in M101 differing in \[O/H\] by 0.7 dex, and by investigating a possible systematic difference between Cepheid and tip of the red giant branch (TRGB) distances for a sample of 10 galaxies (K98). These two tests yielded marginal (1.5 $\sigma$) detections of a metallicity dependence, with $\gamma = -0.24 \pm 0.16$ and $-0.12 \pm 0.08$ mag dex$^{-1}$ respectively. This led to a provisional correction of $-0.20$ mag dex$^{-1}$ to the final Cepheid distances published by the Key Project team (F01). However, values of $\gamma$ between 0 and $-0.9$ mag dex$^{-1}$ are supported by independent studies. By comparing Cepheid, TRGB, and RR Lyrae distances to the Magellanic Clouds and IC 1613, Udalski et al. (2001) detected no significant metallicity dependence. A null result was also derived by Ciardullo et al. (2002) from a comparison of Cepheid and planetary nebula luminosity function (PNLF) distances to nearby galaxies. On the other hand, a comparison of Cepheid PL relations in the LMC and SMC by Sasselov et al. (1997) produced $\gamma = -0.4{^{+0.1}_{-0.2}}$. A similar analysis, but applied to the Key Project galaxies, yielded $\gamma = -0.4 \pm 0.2$ (Kochanek 1997). An even larger dependence was reported by Gould (1994), based on a re-analysis of the M31 Cepheid observations of Freedman & Madore (1990). The reasons for the discrepancies among these studies include the small number of galaxies used, the limited range of metal abundances spanned, and/or the lack of quantifiable systematic errors.
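To make the sign convention above concrete, the short Python sketch below applies a correction of the form $\delta{(m-M)}_0 = \gamma\,\delta{\log Z}$ to an LMC-calibrated Cepheid modulus; the function name, the default $\gamma = -0.2$ mag dex$^{-1}$, and the example abundances are purely illustrative, not values adopted or derived in this paper.

```python
# Minimal sketch (not the authors' code): apply the metallicity term defined above,
# (m-M)_{0,Z} = (m-M)_{0,LMC} + gamma * [(log Z)_LMC - (log Z)_galaxy].

def metallicity_corrected_modulus(mu_lmc_calibrated, logZ_field, logZ_lmc, gamma=-0.2):
    """mu_lmc_calibrated : VI Cepheid modulus from an LMC-calibrated PL relation [mag]
       logZ_field        : abundance of the Cepheid field, e.g. 12 + log(O/H)
       logZ_lmc          : same quantity for the LMC Cepheids
       gamma             : adopted coefficient in mag per dex (illustrative default)"""
    delta_logZ = logZ_lmc - logZ_field          # sign convention of the text
    return mu_lmc_calibrated + gamma * delta_logZ

# A field 0.5 dex more metal-rich than the LMC, with gamma = -0.2 mag/dex,
# ends up 0.1 mag (about 5% in distance) farther than the uncorrected estimate:
print(metallicity_corrected_modulus(30.00, 9.00, 8.50))   # 30.10
```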
Most recently, Tammann, Sandage & Reindl (2003) examined 321 Cepheid variables in the Galaxy with good $B$, $V$, and $I$ photometry by Berdnikov et al. (2000), and compared them to more than 1000 Cepheids in the LMC and SMC (Udalski et al. 1999b,c). They found that the Cepheid variables followed different period-color relations; LMC Cepheids were bluer than the Galactic ones, for example. They suggested that the observed differences in three galaxies for Cepheids with $\log P > 1.0$ were due to metallicity differences. Kanbur et al. (2003) then studied Cepheids from the HST Key Project, and also from the Sandage-Tammann-Saha sample. They measured the distances to these galaxies using several different PL relations calibrated using Galactic and LMC Cepheids, including the new relation by Tammann et al. (2003). Kanbur et al. (2003) found that the Tammann et al. Galactic calibration yielded the same distances as the Udalski et al. (1999) LMC calibration, if the latter were corrected for a metallicity effect of $\gamma = -0.2$ from Freedman et al. (2001). This suggested that the Galactic and LMC Cepheids did indeed follow different PL relations, and constrained the metallicity dependence to be $\gamma \sim -0.2$ mag dex$^{-1}$.
The goal of this paper is to correct all of these shortcomings and perform a more robust test of the metallicity dependence of the Cepheid PL relation. We will follow the same technique used by K98, which was based on a comparison of Cepheid and TRGB distances for galaxies spanning a wide range in Cepheid metallicity. Compared to K98, our study benefits from an increased sample size and a wider and more uniform range in Cepheid metallicity. To the 10 galaxies analyzed by K98, we add seven new measurements, covering a range in Cepheid abundances of 0.05 $-$ 2 in $Z/Z_\odot$. A comparison of TRGB and Cepheid distances provides an especially powerful test for a metallicity dependence of the Cepheid PL relation. The method is transparent and robust: over the metallicity range spanned by our galaxies, the TRGB magnitude in the $I$-band is insensitive to the metal abundance and age of the stellar population (Da Costa & Armandroff 1990; Lee, Freedman & Madore 1993; Salaris & Cassisi 1997; Sakai 1999). Furthermore, the metallicities of the halo fields targeted by TRGB observations do not correlate with those of the disk Cepheids; therefore, even a small metallicity dependence of the TRGB would not introduce any systematic biases in our results. Finally, we also make the implicit assumption that the oxygen fraction with respect to the total metal content is constant for all galaxies used in the application presented in this paper.
The paper is organized as follows: in §2, we discuss the observations and reduction of the HST/WFPC2 TRGB data obtained as part of this program for six galaxies (plus one downloaded from the public HST archive). §3 deals with the TRGB distances, including those that had been published prior to this paper. Cepheid distances, all of which have been previously published, are discussed in §4. The Cepheid abundances, which are derived from those of nearby [HII]{} regions, are presented in §5. Results and discussion are presented in §6 and §7 respectively.
HST Observations
================
The key improvement of our study over the similar analysis conducted by K98 is the addition of new TRGB measurements for seven galaxies with well-determined Cepheid distances: IC 4182, NGC 300, NGC 3031 (M81), NGC 3351, NGC 3621, NGC 5253, and NGC 5457 (M101). Six of our new TRGB distances (all except NGC 5253) are based on deep F555W ($V$) and F814W ($I$) images obtained with the Wide Field and Planetary Camera 2 (WFPC2) on HST during Cycles 9 and 10 (GO-8584). NGC 5253 was observed as part of an independent project and the data downloaded from the public HST Archive. The TRGB distance to NGC 5253 is based on F814W observations only, which are however sufficient for an accurate determination. Although five of the galaxies had already been observed with HST with the goal of measuring Cepheid distances, the existing data were unsuitable for TRGB observations: the Cepheid fields were placed in the crowded star forming disk regions, while TRGB observations must target less crowded, metal-poor halo regions, to allow for an unambiguous detection of the Population II red giant stars.
We restricted our sample to galaxies with Cepheid distances of $\le$10 Mpc, to assure reliable detection of the TRGB within reasonable integration times. Four of the galaxies (M81, M101, NGC 3351, NGC 3621) contain Cepheid fields with metallicities higher than that of the LMC ($Z > 0.4 Z_\odot$), where the K98 results are particularly poorly constrained. The most distant of our galaxies, NGC 3351, is especially critical since it contains one of the most metal-rich Cepheid fields in the Key Project sample (2.2 $Z_\odot$). Another key target is M101, for which Cepheids were observed in two separate fields with mean abundances of $\sim$0.4$~Z_{\odot}$ and $2~Z_{\odot}$ (K98).
The observations are summarized in Table 1, and the WFPC2 field of view is superimposed on a ground-based image of each galaxy in Figure 1. The exposure times were chosen to reach at least one magnitude below the TRGB in both F555W and F814W.
The reduction and stellar photometry of the images was carried out independently using the DAOPHOT (Stetson 1994) and DoPHOT (Schechter, Mateo, & Saha 1993) families of PSF-fitting procedures. The reduction procedures closely followed the methods described in the HST Key Project series, with the exception that here we only deal with single-epoch observations. We refer the reader to Ferrarese et al. (1996) for a detailed discussion of the reduction procedures, and only briefly summarize the process here.
For both photometric reductions, the WFPC2 images were first calibrated using a standard pipeline maintained by the Space Telescope Science Institute (STScI), which included corrections for analog-to-digital (A/D) conversion errors, detector bias, dark current, and flatfielding. In addition, bad pixels were masked using the data-quality files provided by the HST data processing pipeline, the vignetted edges of the detectors were blocked by applying an image mask, and photometric variations introduced by the geometric distortion of the WFPC2 optics were corrected using pixel area maps. The frames for individual exposures were then co-added to make deeper $F555W$ and $F814W$ images; in the process cosmic-rays were identified and removed using standard IRAF/STSDAS routines[^2]. Finally, each frame was multiplied by 4 and converted to short integers.
The reduced, combined images were processed with the DAOPHOT and ALLSTAR software to extract PSF magnitudes from each image. These were converted to the calibrated Landolt (1992) system as described in detail in Hill et al. (1998). The instrumental PSF magnitudes were first transformed to 0.5 arcsec diameter aperture magnitudes as defined by Holtzman et al. (1995) by selecting $\sim$15 bright, isolated stars on each WFPC2 chip. All other stars were subtracted to provide clean sky measurements, and aperture magnitudes were measured at 12 radii between 0.15 and 0.50 arcsec to define a growth curve. The magnitudes were then transformed to calibrated $V$ and $I$ magnitudes as described in Hill et al. (1998).
PSF fitting magnitudes were independently measured using a version of DoPHOT developed specifically to handle the peculiarities of the HST data and PSFs (see Saha et al. 1994). DoPHOT output magnitudes are simply proportional to the height of the fitted PSF, and must be transformed to 0.5 arcsec aperture magnitudes (as in Holtzman et al. 1995) by applying an aperture correction. In NGC 3351 and NGC 3621, aperture corrections could be calculated reliably (with an rms uncertainty of 0.02 mag or less) from bright, isolated stars in the field. For all other galaxies, not enough isolated stars exist to allow for a determination of the aperture corrections from the fields themselves; in these cases we adopted ‘standard’ aperture corrections calculated from independent observations of uncrowded fields in Leo I (Hill et al. 1998). The differences between the Leo I aperture corrections and those calculated from the NGC 3351 and NGC 3621 data are at most 0.02 mag, and the two sets agree identically for most chips. The magnitudes thus obtained were converted to the ‘ground system’ magnitudes F555W and F814W as defined in Holtzman et al. (1995b) using the zero points derived from observations of $\omega$ Cen (Hill et al. 1998). Finally, F555W and F814W magnitudes were converted to $V$ and $I$ magnitudes following the procedure outlined by Holtzman et al. (1995b).
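Purely as a bookkeeping illustration of the calibration chain just described (the numerical values below are placeholders, not the Hill et al. 1998 or Holtzman et al. 1995 coefficients), the steps can be sketched as follows.

```python
# Hypothetical numbers throughout; this only shows the order of operations:
# PSF magnitude -> 0.5 arcsec aperture magnitude -> zero point -> ground-system V or I.

def calibrate_star(m_psf, aperture_corr, zeropoint, color_vi=0.0, color_term=0.0):
    """m_psf         : instrumental PSF-fit magnitude (DAOPHOT/ALLSTAR or DoPHOT)
       aperture_corr : PSF -> 0.5 arcsec aperture correction from bright isolated stars
       zeropoint     : chip/filter photometric zero point (placeholder value)
       color_vi      : (V-I) color entering the ground-system transformation
       color_term    : first-order color coefficient (placeholder value)"""
    m_aperture = m_psf + aperture_corr
    return m_aperture + zeropoint + color_term * color_vi

# Example with invented values, solely to show how the corrections combine:
I_cal = calibrate_star(m_psf=3.40, aperture_corr=-0.03, zeropoint=21.00,
                       color_vi=1.5, color_term=-0.06)
```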
The photometric results of the DAOPHOT and DoPHOT analyses were compared; for both filters, the magnitudes of individual stars agreed within the photometric errors. In order to simplify the presentation of the results, we only show the DAOPHOT/ALLFRAME photometry in the remainder of this paper.
We stress that all of the photometry presented in this paper is based on the calibration of Hill et al. (1998). Updated calibrations do exist, using improved charge-transfer-efficiency corrections (e.g., Stetson 1998; Dolphin 2000). The Hill et al. calibration was adopted in this analysis to maintain consistency with the large majority of the published Cepheid photometry, against which our TRGB distances will be compared. If the $V$ and $I$ WFPC2 zeropoints of Stetson (1998) were adopted instead, the TRGB distance moduli derived in this paper would decrease by 0.04 mag, and the HST-derived Cepheid distance moduli by 0.07 mag. As discussed in §6, because of the differential nature of our test, this has no effect on the results derived in this paper.
TRGB Distances
==============
A detailed review of the TRGB method can be found in Madore, Freedman, & Sakai (1997) and Sakai (1999). As the stars become brighter and evolve up the red giant branch, they undergo a drastic change at the onset of helium-burning in the core. At most wavelengths, the absolute magnitude of the RGB tip is sensitive to the age and metallicity of the red giant population. However, in the $I$-band the tip luminosity has been shown, both observationally and theoretically, to vary very little, with $M_{I,TRGB} = -4.0 \pm 0.1$ mag for stellar population ages between 2 and 15 Gyr, and metallicities spanned by Galactic globular clusters ($-2.2 \leq$ \[Fe/H\] $\leq -0.7$) (Da Costa & Armandroff 1990; Lee et al. 1993; Salaris & Cassisi 1997; Sakai 1999). The near constancy of the $I$-band tip magnitude produces a sharp edge in the luminosity function (LF), making the TRGB a reliable distance indicator. Furthermore, the TRGB is calibrated independently of the Cepheid distance scale. These properties make the TRGB method an ideal choice for constraining the metallicity dependence of the Cepheid PL relation.
For this purpose, we obtained deep images of halo fields in the $V$ and $I$ bands, and determined the TRGB magnitude using automated techniques, as described below. Although the RGB tip magnitude is determined from the $I$-band LF, the $I, (V-I)$ color-magnitude diagram (CMD) allows us to exclude more easily stellar contaminants (e.g., AGB stars) and restrict the tip determination to stars with colors appropriate to the metallicity range within which the TRGB calibration is reliable.
TRGB Detection Methods
----------------------
Very early applications of the TRGB method relied on a visual estimate of the TRGB magnitude from a CMD. Lee et al. (1993) showed that the application of a Sobel edge-detection filter with kernel \[-1,0,+1\] to the binned LF histogram can provide an efficient and objective determination of the TRGB position. Sakai, Madore, & Freedman (1996) modified this method for application to a smoothed, continuous LF. First, the smoothed $I$-band LF is represented by replacing the discretely distributed stellar magnitudes with the corresponding gaussians:
$$\Phi (m) = \sum_{i=1}^{N} \frac{1}{\sqrt{2\pi} \sigma_i} \exp \left[-\frac{(m_i-m)^2}{2\sigma^2_i}\right],$$
where $m_i$ and $\sigma_i$ are the magnitude and photometric error of the $i$th star, respectively, and $N$ is the total number of stars in the sample. The edge-detection filter is then defined by: $$E(m) = \Phi(m+\sigma_m) - \Phi(m-\sigma_m),$$ where $\sigma_m$ is the mean photometric error for all stars with magnitudes between $m-0.05$ and $m+0.05$ mag. Since its first application in Sakai et al. (1996), the uncertainty in the TRGB magnitudes derived using the edge-detection method has been quoted simply as the FWHM of the peak profile. This is undoubtedly an overestimate; for instance, the peak of the Gaussian can be measured with higher precision than given by its FWHM. In this paper, the errors are estimated using a boot-strap test. The magnitude of each star is varied randomly following a Gaussian distribution with $\sigma$ given by the observational error. A new luminosity function is constructed using these randomly-displaced magnitudes, and the TRGB magnitude is determined using the edge-detection method. This routine is repeated 500 times for each galaxy, and the standard deviation of the distribution of 500 TRGB magnitudes is taken as the uncertainty in the original tip determination.
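For readers who wish to experiment with the edge detector, the following Python sketch (our paraphrase, taking assumed NumPy arrays of magnitudes and photometric errors as inputs rather than the actual reduction products) implements $\Phi(m)$, $E(m)$, and the boot-strap error estimate described above.

```python
import numpy as np

def smoothed_lf(m_grid, mags, errs):
    """Gaussian-smoothed I-band luminosity function Phi(m) of Equation (2)."""
    z = (mags[None, :] - m_grid[:, None]) / errs[None, :]
    return np.sum(np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * errs[None, :]), axis=1)

def edge_response(m_grid, mags, errs):
    """E(m) = Phi(m + sigma_m) - Phi(m - sigma_m), with sigma_m the local mean error."""
    resp = np.empty_like(m_grid)
    for i, m in enumerate(m_grid):
        local = errs[np.abs(mags - m) < 0.05]
        sig = local.mean() if local.size else errs.mean()
        resp[i] = (smoothed_lf(np.array([m + sig]), mags, errs)
                   - smoothed_lf(np.array([m - sig]), mags, errs))[0]
    return resp

def trgb_bootstrap(mags, errs, n_boot=500, seed=0):
    """Boot-strap the tip: perturb each star within its error, redetect, take the scatter."""
    rng = np.random.default_rng(seed)
    grid = np.arange(mags.min(), mags.max(), 0.01)
    tips = []
    for _ in range(n_boot):
        perturbed = mags + rng.normal(0.0, errs)
        tips.append(grid[np.argmax(edge_response(grid, perturbed, errs))])
    return float(np.mean(tips)), float(np.std(tips))
```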
The edge-detection method is very effective for galaxies when the RGB tip is located $>$2 mag above the magnitude limit of the photometry, because in such cases, the CMD is well sampled in this region, and crowding and incompleteness are not significant. However, the method becomes less precise if sampling statistics are poor, or incompleteness strongly affects the LF within 1 mag of the RGB tip location. In such cases, the edge detector response becomes noisy even when the RGB tip is clearly seen in the CMD. Therefore, in addition to the edge-detection method described above, we have also applied a modification of the cross-correlation (“CC”) technique introduced by Méndez et al. (2002). A template $I$-band LF, $y_0(M)$, was constructed using the LFs observed for fiducial galaxies, as described below. The template LF is then compared to the giant-branch LF, $y(m)$, of a given galaxy. The template is shifted in magnitude by increments $a$ and its normalization, $n$, is varied until the best match is found. This corresponds to the minimum of the function $\phi(a) = \sum_{n_{min}}^{n_{max}} \sum_{m_{min}}^{m_{max}} |(y_0(M)-y(m+a))/n|$, where the summation limits $m_{min}$ and $m_{max}$ are chosen to extend from 1 mag brighter than the RGB tip to 1 mag below the tip (or to where incompleteness sets in). These limits are initially set using the best estimate for the RGB tip magnitude as determined from the edge method or visual inspection of the CMD, and then adjusted iteratively until convergence. The limits in the normalization constant are also varied around the value estimated initially as the best “guess”.
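A corresponding sketch of the CC estimator is given below; it is schematic only, assuming a pre-built template LF with its tip placed at magnitude zero and a binned or smoothed observed LF, and adopting one common reading of the normalization (the observed LF is rescaled by $n$ before differencing).

```python
import numpy as np

def cc_tip(template_mag, template_lf, obs_mag, obs_lf, a_grid, n_grid, window=1.0):
    """Slide the template LF by a and rescale the observed LF by n; return the shift a
    minimizing the summed absolute residuals within +/- `window` mag of the template tip
    (which is assumed to sit at template_mag = 0)."""
    sel = np.abs(template_mag) <= window
    best_phi, best_a = np.inf, None
    for a in a_grid:
        # observed LF evaluated at the shifted template magnitudes
        y = np.interp(template_mag[sel] + a, obs_mag, obs_lf)
        for n in n_grid:
            phi = np.sum(np.abs(template_lf[sel] - y / n))
            if phi < best_phi:
                best_phi, best_a = phi, a
    return best_a
```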
The template LF was constructed from the observed CMDs of the halo regions of the nearby irregular galaxies IC 1613 (Freedman 1988), IC 4182 (this paper), and NGC 5253 (this paper). Each galaxy has a well-sampled LF at the RGB tip, and reliable photometry extending over one mag below the tip. The LFs were cross-correlated with each other within $\pm$2 mag of the RGB tips, and the shifted, matched, and normalized LFs were then added to form the final template. The resulting function $y_0(M)$ is shown in Figure 2, shifted in magnitude so that the tip has $I=0$ mag. We also overplotted the shifted, normalized luminosity functions of IC 1613, IC 4182 and NGC 5253.
The uncertainty in the TRGB magnitude determined using the CC technique was estimated using two independent methods. In the first, the TRGB magnitude of each galaxy was calculated using four different template luminosity functions: the one discussed above (the combined luminosity function), and the IC 4182, IC 1613 and NGC 5253 luminosity functions alone. The rms in the mean of the four estimates was taken as the uncertainty. This gives a rough estimate, albeit not a very accurate one, since the combined and individual templates are not independent. The second method uses the same boot-strap test described above, with the CC method applied at each of 500 iterations (a similar method was used by Méndez et al. 2002). The derived uncertainty accounts for errors introduced by photometric uncertainties affecting the stellar magnitudes, while our first estimate of the uncertainty in the tip magnitude is sensitive to systematics hidden in the choice of the template luminosity function. The two uncertainties are summed in quadrature to obtain the formal error in the measured TRGB magnitude. Even so, it needs to be pointed out that this error is likely a lower limit to the true uncertainty, since it does not account for other sources of systematics, most notably crowding. For most of our galaxies, the CC method returns errors between 0.01 and 0.05 mag. This is not altogether surprising, since a 0.05 mag mismatch between the template and the LF under study is generally clearly noticeable from a simple visual inspection.
Calibration of the TRGB
-----------------------
The calibration of the TRGB is discussed extensively by several authors (e.g., DaCosta & Armandroff 1990; Lee et al. 1993; Madore et al. 1997; Salaris & Cassisi 1998; Bellazzini, Ferraro, & Pancino 2001). The zeropoint rests on observations of Galactic globular clusters with $-2.2 <$ \[Fe/H\] $< -0.7$ (DaCosta & Armandroff 1990; Lee et al. 1993). The distances to these clusters rest in turn on an adopted metallicity-$M_V$ calibration for RR Lyrae stars, based on theoretical models of the horizontal branch for $Y_{MS}=0.23$ (Lee, Demarque & Zinn 1990). For this study we have adopted the calibration of Lee et al. (1993), which is expressed as $(m-M)_I = I_{TRGB} - M_{bol} + BC_I$. The bolometric magnitude, $M_{bol}$, and the bolometric correction, $BC_I$, can be related to the metallicity of the RGB stars: $M_{bol} = -0.19$\[Fe/H\]$-3.81$ mag, and $BC_I = 0.881 - 0.243 (V-I)_{TRGB}$. The metallicity, \[Fe/H\], is expressed in terms of the color of the RGB stars 0.5 mag fainter than the TRGB: \[Fe/H\]$=-12.65 + 12.6 (V-I)_{-3.5} - 3.3 (V-I)^2_{-3.5}$. Combining these gives: $$M{_I^{TRGB}} = -2.288 - 2.394 (V-I)_{-3.5} + 0.627 (V-I)^2_{-3.5} + 0.243(V-I)_{TRGB}.$$
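A compact numerical transcription of this calibration chain is given below; it is a sketch following the relations as written above, with the final step $(m-M)_0 = I_{TRGB} - A_I - M{_I^{TRGB}}$ added for convenience.

```python
def trgb_distance_modulus(I_tip, A_I, vi_tip, vi_35):
    """I_tip  : observed TRGB magnitude in I
       A_I    : foreground I-band extinction [mag]
       vi_tip : (V-I) color at the tip
       vi_35  : (V-I) color of the RGB 0.5 mag below the tip"""
    feh   = -12.65 + 12.6 * vi_35 - 3.3 * vi_35**2   # [Fe/H] from the RGB color
    m_bol = -0.19 * feh - 3.81                       # bolometric tip magnitude
    bc_i  = 0.881 - 0.243 * vi_tip                   # I-band bolometric correction
    M_I   = m_bol - bc_i                             # absolute I magnitude of the tip
    return (I_tip - A_I) - M_I                       # true distance modulus (m - M)_0

# Check against the IC 4182 numbers of the next section: returns ~28.25 mag.
print(round(trgb_distance_modulus(24.20, 0.027, 1.49, 1.45), 2))
```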
It should be noted that the above TRGB calibration is semi-empirical, based on an RR Lyrae distance scale, combined with the theoretical models of the horizontal branch stars. Cassisi & Salaris (1997) presented a purely theoretical calibration of the TRGB magnitude by examining their theoretical stellar evolution models (Salaris & Cassisi 1997). They reported that the semi-empirical TRGB calibration is too faint by about 0.1 mag compared to the theoretical model. This, they suggest, is due to the poor sampling of RGB stars in the Galactic globular clusters observed by Frogel et al. (1983), which were used in the empirical TRGB calibration. We emphasize that a precise calibration of the TRGB method is not necessary for the purpose of this paper. Since we are testing a [*differential effect*]{}, as long as the same calibration is applied to all the TRGB distances, our result will not depend on the zero point of the TRGB calibration itself. We are using the Lee et al. (1993) TRGB calibration for now, but in Section 7, we discuss the effects of using another calibration.
TRGB Distances to Individual Galaxies
-------------------------------------
The TRGB distance moduli used in this paper are summarized in Table 2. In this section, we describe in detail how each TRGB distance was derived using the edge-detection and CC methods.
### IC 4182
IC 4182 is an SA(s)m galaxy with a Cepheid distance of 4.5 Mpc (Saha et al. 1994; F01). The placement of the WFPC2 field of view is shown in Figure 1. Our derived CMDs are shown in Figure 3, for all four WFPC2 chips (top) and separately for the WF 2 and WF 3 chips (bottom), which are more representative of the galaxy’s halo population (see Figure 1). The TRGB is seen clearly just below $I \sim 24$ mag, especially in the bottom panel. On the right side of Figure 3, we show the $I$-band LF, constructed from stars on the WF 2 and WF 3 chips with $0.5 \le (V-I) \le 1.9$. The edge detector response functions, also drawn in the Figure, show a firm detection of the TRGB at $I = 24.20 \pm 0.07$ mag. We performed the same fit to the logarithmic LF (bottom right panel of Figure 3) with identical results.
The CC method was also applied. Figure 11 shows the template LF (dotted line) shifted to provide the best match to the LF of the galaxy under study. In the case of IC 4182, the CC method yields a TRGB magnitude of $I = 24.20 \pm 0.05$ mag. The agreement between the edge-detection and CC results is not altogether surprising, since IC 4182 was used to build the cross-correlation template.
The foreground extinction along the line of sight to IC 4182 is $A_B = 0.059$ mag (Schlegel, Finkbeiner, & Davis 1998), corresponding to $A_I = 0.027$ mag (using the reddening law of Cardelli et al. (1989), with $R_V = 3.1$). Adopting the Lee et al. (1993) calibration, the TRGB magnitude is expected at $M_I^{TRGB} = -4.08 \pm 0.05$ mag for $(V-I)_{TRGB} = 1.49$ mag and $(V-I)_{-3.5} = 1.45$ mag.
We have estimated the TRGB magnitude of IC 4182 using three methods. Before determining the distance modulus, we need to find out how the three estimates are related. In order to do this, we used the boot-strap method of estimating uncertainties in the TRGB magnitudes, as described in the previous Section. For each of the 5000 iterations, the TRGB magnitudes were measured using all three methods. The first two methods which use edge-detection filtering are found to be correlated with each other, almost one-to-one. In contrast, the CC method is very stable from one iteration to another, and does not correlate with the first two methods at all. Thus, the calculation of the final average magnitude was done in two steps. First, the average value of the two edge-filtering methods was estimated, by deriving the rms in the mean of the distribution of all TRGB magnitudes found (for both methods) for 5000 iterations. Finally, the average of all three estimates was measured by taking the weighted mean of the result of the CC method and the average value estimated for the two edge-filtering methods, since the CC method is not correlated with the edge-filtering method. We adopt a TRGB distance modulus for IC 4182 of $(m-M)_0 = 28.25 \pm 0.06$ mag, corresponding to a linear distance of $4.5 \pm 0.1$ Mpc. Since the edge detection and CC methods are not fully independent, we have used a conservative estimate of the uncertainty.
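In summary, the adopted modulus follows directly from the numbers quoted above (a worked check using the values in this subsection): $$(m-M)_0 = I_{TRGB} - A_I - M{_I^{TRGB}} = 24.20 - 0.03 - (-4.08) \simeq 28.25 \ {\rm mag}.$$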
### NGC 5253
NGC 5253 is a peculiar Im galaxy in the M83 group, with a Cepheid distance of 3.4 Mpc (F01; also see §4). This galaxy was not included in our HST Cycle 9 observing program, but archival HST observations exist which enabled us to measure its TRGB distance. The WFPC2 field is shown in Figure \[figure:n5253wfpc\]. The CMD is shown in Figure \[figure:n5253tip\] for the entire region covered by the WFPC2 (top left panel), and for stars in the halo region outside the ellipse in Figure \[figure:n5253wfpc\] (lower left panel).
The $I$-band LFs (linear and logarithmic) and corresponding filter responses are shown on the right side of Figure \[figure:n5253tip\]. The $V$-band observations of NGC 5253 used the F547M filter rather than the broader F555W filter. This is not a major concern, since only calibrated $I$-band magnitudes are required for TRGB distance measurements. The TRGB is detected at $I = 24.03 \pm 0.06$ and $I = 23.98 \pm 0.05$ mag in the linear and logarithmic filter responses, respectively. Applying the CC method yielded a TRGB magnitude of $I = 23.95 \pm 0.05$ mag (Figure \[figure:ccresults\]), which agrees well with the result obtained using the logarithmic luminosity function edge-detection method. Both are less susceptible to the small-scale noise that affects the analysis of the linear luminosity function, and might therefore provide a better estimate of the tip magnitude. Nevertheless, we adopt the average of the three determinations as our best estimate of the tip magnitude. The $I$-band foreground extinction in the direction of NGC 5253 is 0.109 mag. Since there are no standard (F555W) $V$-band observations of NGC 5253, we cannot estimate the $(V-I)$ color of the RGB stars, which is necessary to determine the absolute magnitude of the TRGB. Instead, we adopt $-4.0 \pm 0.1$ mag for the TRGB magnitude, as the plausible calibration values fall between $-3.9$ and $-4.1$ mag. Our final distance modulus to NGC 5253 from TRGB is thus $(m-M)_0 = 27.88 \pm 0.11$ mag, corresponding to a linear distance of 3.6 $\pm$ 0.2 Mpc.
### NGC 300
NGC 300 is an SA(s)d galaxy in the Sculptor group with a Cepheid distance of 2.0 Mpc (Freedman et al. 1992, F01). The detection of the TRGB in this galaxy is somewhat more difficult than the previous cases, because the WFPC2 field does not sample a large enough region to include large numbers of halo giants.
We were able to detect the tip in the logarithmic LF only by binning the data so that $\sigma_i$ in Equation (2) is doubled, but we failed to detect any significant edge in the noisier linear luminosity function. Figure \[figure:n300tip\] shows the CMDs, linear and logarithmic luminosity functions, and corresponding edge-detection filter response functions. The LF was constructed using only stars with $0.5 \leq (V-I) \leq 2.0$ mag. The TRGB can be seen in the CMD near $I \sim$ 22.6. In the logarithmic luminosity function, the TRGB was detected at $I=22.62 \pm 0.07$ mag, as seen clearly in the right bottom panel in Figure \[figure:n300tip\].
Because the luminosity function rises very gradually for more than two magnitudes, the CC method breaks down for NGC 300 as there is no unique edge to search for. Thus, we adopt the result of the edge-detection method applied to the logarithmic luminosity function as our estimate of the tip magnitude. Using the Lee et al. (1993) calibration, the absolute magnitude of the TRGB is predicted to be $M_{\mbox{TRGB}} = -4.05 \pm 0.05$ mag for $(V-I)_{-4.0} = 1.5$ mag and $(V-I)_{-3.5} = 1.4$ mag. The foreground extinction to NGC 300 is $A_I = 0.025$ mag. Therefore, the final distance modulus of NGC 300 from TRGB is $(m-M)_0 = 26.65 \pm 0.09$ mag, corresponding to a distance of $2.14 \pm 0.09$ Mpc.
### NGC 3031
NGC 3031 (M81) is an SA(s)b galaxy with a Cepheid distance of approximately 3.6 Mpc (Freedman et al. 1994, F01). The CMD is shown in the left panel of Figure \[figure:n3031tip\]. All stars are included in the top left panel, while only those found on chips WF 3 and WF 4 (the outermost regions) are shown in the bottom left panel. As was the case for IC 4182, the outer fields suffer less contamination from younger disk stars, and were therefore used for the TRGB distance determination.
Stars with colors $(V-I) \le 1.4$ and $(V-I) \ge 2.5$ were excluded in constructing the LF shown on the right side of Figure \[figure:n3031tip\]. The edge-detection method applied to the linear and logarithmic LFs gives TRGB magnitudes of $I = 24.02 \pm 0.09$ and $24.08 \pm 0.04$ mag, respectively, as shown in the bottom right panels of Figure \[figure:n3031tip\]. If stars detected only in the F814W frames (which reach deeper magnitudes than the F555W frames) are included in the analysis (to limit the effect of incompleteness near the TRGB), the tip is detected at $I = 24.03 \pm 0.10$ mag.
The CC method applied to the data yields a considerably fainter tip magnitude, $I = 24.34 \pm 0.15$. This is due to the fact that the luminosity function rises very slowly over $\sim 0.8$ mag; the CC method then triggers on the mid-point of this rising “edge”, which is significantly fainter than the true TRGB magnitude.
As our final determination of the tip magnitude we therefore adopt the average of the three separate TRGB determinations using the edge-detection method, $I_{TRGB} = 24.13 \pm 0.06$ mag. The RGB colors $(V-I)_{\mbox{\small TRGB}} = 2.1$ mag and $(V-I)_{-3.5} = 1.8$ mag imply a TRGB luminosity $M^I_{\mbox{\small TRGB}} = -4.05 \pm 0.10$ mag. The foreground extinction along the line of sight to NGC 3031 is $A_I = 0.155$ mag. The final distance modulus to NGC 3031 from TRGB is thus $(m-M)_0 = 28.03 \pm 0.12$ mag, corresponding to a linear distance of $4.0 \pm 0.2$ Mpc.
### NGC 3351
NGC 3351 is an SB(r)b galaxy in the Leo group, and the most distant galaxy in this study. Its radial velocity is 774 km s$^{-1}$, and its Cepheid distance is 9.3 – 10 Mpc (Graham et al. 1997; F01). The galaxy lies near the WFPC2 limit for detecting the TRGB within reasonable exposure times: 14 of our allocated 28 orbits were spent on this galaxy alone.
Figure \[figure:n3351tip\] shows the CMD of stars detected on the WF2, WF3, and WF4 chips (top left), and of stars located in the outer halo region shown by the triangular outline in Figure \[figure:footprints\]. From the CMD, the TRGB appears around $I \sim 26.5 \pm 0.5$ mag. The linear edge filter, applied only to the stars in the outer region, shows a strong peak at $I = 26.55 \pm 0.13$ mag; however, magnitude incompleteness becomes severe at approximately the same magnitude, potentially biasing the filter response. We have also applied the filters to the stellar sample with blue stars excluded. However, this exercise did not yield any result that is more accurate than using a larger sample, likely due to small number statistics. The TRGB is hardly visible in the logarithmic LF, making for a very uncertain tip determination. The CC method seems more robust: it provides a perfect fit to the rising part of the LF (see Figure 11) and yields a tip detection of $I = 26.53 \pm 0.10$ mag.
Taking the average of the two TRGB magnitude estimates, we have $I = 26.54 \pm 0.08$ mag. The foreground extinction in the direction of NGC 3351 is $A_I = 0.054$ mag. For $(V-I)_{TRGB} = 1.2$ mag and $(V-I)_{-3.5} = 1.1$ mag, the predicted TRGB absolute magnitude is $M{^I_{TRGB}} = -3.9 \pm 0.1$ mag. Thus, the final distance modulus to NGC 3351 from TRGB is $(m-M)_0 = 30.39 \pm 0.13$ mag, corresponding to a linear distance of $12.0 \pm 0.7$ Mpc.
### NGC 3621
NGC 3621 is an SA(s)d galaxy with a Cepheid distance of 6.6 Mpc (Rawson et al. 1997; F01). CMDs are shown in Figure \[figure:n3621tip\]; the halo stellar population is best represented in the WF2 chip (Figure \[figure:footprints\]). On the top right panel of Figure \[figure:n3621tip\], the I-band luminosity function of stars on WF2 is shown, together with the corresponding edge filtering response function. No color cut to the luminosity function was applied as it did not make the TRGB detection any better. The TRGB is detected at $25.42 \pm 0.06$ mag. Application of the edge-filtering to the logarithmic luminosity function yields $I = 25.47 \pm 0.06$ mag, while the CC method produces $I = 25.46 \pm 0.05$ mag, both consistent with the edge-filtering determination made using the linear LF. We adopt the average of the three estimates as our final tip magnitude.
The foreground extinction to NGC 3621 is $A_I = 0.156$ mag, and the observed color of the giant branch implies a TRGB luminosity $M{^I_{TRGB}} = -4.06 \pm 0.10$ mag. The final distance modulus to NGC 3621 from TRGB is thus $(m-M)_0 = 29.36 \pm 0.11$ mag, corresponding to a linear distance of $7.4 \pm 0.4$ Mpc.
### NGC 5457
NGC 5457 (M101) is an SAB(rs)cd galaxy with a Cepheid distance of $\sim$7 Mpc (Kelson et al. 1996; F01). This galaxy is of particular interest for this project, since two independent Cepheid distances have been estimated in two separate fields with different mean metal abundances.
In Figure \[figure:n5457tip\], we show CMDs for the entire WFPC2 field (top left) and the outermost chips WF3 and WF4 (bottom left). Only the latter were used when constructing the $I$-band LFs, as shown on the right side of Figure \[figure:n5457tip\]. The TRGB is clearly detected in the linear and logarithmic filter responses, at $I = 25.41 \pm 0.04$ and $25.40 \pm 0.04$ mag respectively. This field has a strong contamination of brighter red stars, presumably AGB stars and red supergiants; this is not particularly surprising because M101 has a very extended disk, making it difficult to isolate a purely halo-dominated field. Despite this contamination the RGB tip stands out clearly in the LFs and the filter responses. We have also applied the edge-detection method to a sample with the blue stars ($(V-I) < 0.5$ mag) excluded, and obtained exactly the same result. Applying the CC method gives a best-fitting tip magnitude of $I = 25.42 \pm 0.05$ mag, which agrees very well with the edge-filter results.
The foreground extinction to NGC 5457 is $A_I = 0.017$ mag. The TRGB calibration for this galaxy, with $(V-I)_{TRGB}=1.50$ mag and $(V-I)_{-3.5} = 1.36$ mag, is $M{^I_{TRGB}} = -4.02 \pm 0.10$ mag. Taking the average of the three TRGB magnitude estimates (two by the edge-filtering method and one by the CC method), we obtain $I_{TRGB} = 25.40 \pm 0.04$ mag. Thus, the distance modulus of NGC 5457 from TRGB is $29.42 \pm 0.11$ mag, corresponding to a linear distance of $7.7 \pm 0.4$ Mpc.
Other TRGB Distances from the Literature
----------------------------------------
In addition to the galaxies discussed in the previous sections, Table 2 lists published TRGB distance moduli for 10 additional galaxies with well-determined Cepheid distances. We comment briefly on the individual galaxies below. In most cases, we make use of color and extinction data tabulated by Ferrarese et al. (2000a: F00) in converting tip magnitudes to distance moduli.
LMC: The first estimate of the TRGB magnitude, $I_{0,TRGB} = 14.53 \pm 0.05$ mag, was by Reid, Mould, & Thompson (1987). They used data from the Shapley III star forming region, which is heavily contaminated by intermediate-age AGB stars. A second estimate, $I_{0,TRGB} = 14.50 \pm 0.25$ mag, came from the Romaniello et al. (2000) analysis of HST/WFPC2 observations of regions around SN1987A. Unfortunately, the small spatial coverage of the WFPC2 field of view allowed them to detect only $\sim 150$ stars in the brightness range necessary for the TRGB measurement. Cioni et al. (2000) also attempted to measure the TRGB distance based on a very large stellar sample from the DENIS survey. However, they were not able to constrain the internal reddening well enough to estimate an accurate TRGB distance. The most reliable determination is from Sakai, Zaritsky, & Kennicutt (2000), who used data from the Magellanic Clouds Photometric Survey (Zaritsky, Harris, & Thompson 1997). The unique feature of this study is that the reddening was determined along the line of sight of individual stars by fitting spectra and extinction to $UBVI$ photometry (Zaritsky 1999). Therefore, Sakai et al. (2000) were able to select regions of low reddening, and reported $I_{0,TRGB} = 14.54 \pm 0.04$ mag. Using $(V-I)_{TRGB} = 1.7 \pm 0.1$ mag, and $(V-I)_{-3.5} = 1.5 \pm 0.1$ mag, the absolute TRGB magnitude is expected at $M_{I,TRGB} = -4.05 \pm 0.06$ mag. We thus adopt $(m-M)_0 = 18.59 \pm 0.09$ mag as the LMC TRGB distance modulus.
SMC: The TRGB distance to the SMC has been measured using DENIS data by Cioni et al. (2000), who derived a modulus $(m - M)_0 = 18.99 \pm 0.03 \pm 0.08$ mag.
Sextans A: Sakai, Madore & Freedman (1996) detected the TRGB at $I=21.73 \pm 0.09$ mag using single-epoch ground-based data. More recently, Dolphin et al. (2003) used HST/WFPC2 observations to detect the tip at $I = 21.76 \pm 0.05$ mag. Since the HST/WFPC2 sample is very sparse (the $I$-band luminosity function jumps by only a few stars at the TRGB edge), we adopt the TRGB distance modulus from Sakai et al. (1996), $(m-M)_0 = 25.67 \pm 0.13$ mag.

Sextans B: Sakai, Madore, & Freedman (1997) measured the TRGB at $I = 21.60 \pm 0.10$ mag using ground-based imaging data, which yields a distance modulus of $(m-M)_0 = 25.61 \pm 0.14$ mag when adopting $A_I = 0.062$ mag and $M_{I,TRGB} = -4.07$ mag using the colors of RGB stars tabulated in F00. More recently, Méndez et al. (2002) have used HST imaging to derive a distance of $(m-M)_0 = 25.63 \pm 0.04 \pm 0.18$ mag. We adopt the average of these measurements, $(m-M)_0 = 25.63 \pm 0.04$ mag.
NGC 224 (M31): Mould & Kristian (1986) used ground-based imaging to estimate a TRGB magnitude of $I = 20.55 \pm 0.17$ mag for NGC 224. A more recent study by Durrell, Harris, & Pritchet (2001) yields a nearly identical result, $I = 20.52 \pm 0.05$ mag. We adopt the average of these measurements. Using $(V-I)_{TRGB} = 1.97 \pm 0.10$ mag and $(V-I)_{-3.5} = 1.7 \pm 0.1$ mag, as tabulated in F00, we expect $M_{I,TRGB} = -4.07 \pm 0.10$ mag, producing a distance modulus to NGC 224 of $24.44 \pm 0.11$ mag.
NGC 598 (M33): Mould & Kristian (1986), using ground-based data, derived a TRGB magnitude of $I = 20.95 \pm 0.17$ mag. Adopting $E(B-V) = 0.04$ mag and $M_{I,TRGB} = -4.02$ mag yields a distance modulus of $24.89 \pm 0.20$ mag. Recently, Kim et al. (2002) used HST/WFPC2 imaging of 10 fields in M33 to derive $24.81 \pm 0.04 {^{+0.15}_{-0.11}}$ mag. The average of these measurements yields $(m-M)_0 = 24.81 \pm 0.04$ mag.
NGC 3109: F00 tabulated two sets of data for NGC 3109. Lee (1993) used deep ground-based $V$ and $I$ imaging to derive $I = 21.55 \pm 0.10$ mag. Minniti, Zijlstra, & Alonso (1999) derived $I = 21.70 \pm 0.06$ mag, also from ground-based data. Using the colors tabulated in F00, we obtain distance moduli of $(m-M)_0 = 25.43 \pm 0.14$ mag and $25.60 \pm 0.12$ mag respectively. More recently, the HST observations of Méndez et al. (2002) yielded $(m-M)_0 = 25.52 \pm 0.06$ mag. Placing all of these measurements on the Schlegel et al. (1998) reddening scale gives an average value of $(m - M)_0 = 25.52 \pm 0.05$ mag for the distance modulus of NGC 3109.
NGC 6822: Lee et al. (1993) derived $(m - M)_0 = 23.46 \pm 0.10$ mag; updating the reddening yields a slightly smaller value of $23.34 \pm 0.10$ mag. Subsequently, Gallart, Aparicio, & Vilchez (1996) derived $23.4 \pm 0.1$ mag, using reddening values estimated from observations of Cepheid variable stars in the same field. Applying the reddening value from Schlegel et al. (1998) yields the same distance modulus, $(m-M)_0 = 23.4 \pm 0.1$ mag. We have adopted the mean of these measurements, $(m-M)_0 = 23.37 \pm 0.07$ mag.
IC 1613: Freedman (1988) derived $I_{TRGB} = 20.25 \pm 0.15$ mag. Adopting the colors tabulated in F00, the corresponding distance modulus is $(m-M)_0 = 24.29 \pm 0.18$ mag. Subsequently, Cole et al. (1999) and Dolphin et al. (2001) obtained HST/WFPC2 imaging of the center and halo of IC 1613, and derived TRGB distance moduli of $(m - M)_0 = 24.29 \pm 0.12$ mag and 24.32 $\pm$ 0.08 mag, respectively. We adopt the average of these three values, $(m-M)_0 = 24.31 \pm 0.06$ mag.
WLM: Two estimates of the TRGB magnitude exist, both based on ground-based data. Lee et al. (1993) derived $I_{TRGB} = 20.85 \pm 0.10$ mag, while Minniti & Zijlstra (1997) found $I_{TRGB} = 20.80 \pm 0.05$ mag. Averaging these values and applying the RGB colors and reddening from F00 yields a distance modulus of $(m-M)_0 = 24.77 \pm 0.09$ mag.
Cepheid Distances
=================
Published Cepheid distances exist for all galaxies discussed in this paper, and are listed in Column 2 of Table 3, where the appropriate references are also given. The assumptions and procedures used in deriving these distances vary from galaxy to galaxy. For instance, NGC 3031, NGC 3351, NGC 3621, NGC 4258, and the inner field of NGC 5457 were all observed with HST using the same instrument configuration, WFPC2 and the F555W ($\sim$ Johnson $V$) and F814W ($\sim$ Johnson $I$) filters. Data for NGC 5253, IC 4182, and the outer field of NGC 5457 were obtained with the pre-refurbishment HST/WFC. Distances to these galaxies share the same calibration (both zero point and slope) of the Cepheid PL relation, adopted from Madore & Freedman (1991, hereafter MF91). However, the data are not always on a common photometric system, the latter having been revised several times since the installation of WFPC2 on HST. With the exception of the 1988 distance to IC 1613, which preceded MF91, distances from ground-based data adopt the MF91 PL relation slope, but not always the zero point (e.g. NGC 3109). Most are based on $BVRI$ data, although for any given galaxy, not all Cepheids are observed in all four photometric bands. Only $I-$band data exist for WLM, making it necessary to adopt an internal reddening to the galaxy (Lee et al. 1993) to transform the $I-$band distance modulus to a true (de-reddened) one. Finally, in about half of the cases, the Cepheid sample is truncated at the short-period end prior to calculating a distance, to reduce the effect of magnitude incompleteness and avoid contamination from overtone pulsators.
The inhomogeneity in the published Cepheid distances could introduce artificial trends in our analysis. To create a consistent dataset, we have converged on the following criteria: all distances must be based on 1) $VI$ data only, for consistency with the HST sample; 2) for the HST data, a photometric calibration following Hill et al. (1998); 3) a common calibration of the PL relation (to be discussed in detail below); and 4) Cepheids with periods between 8 and 100 days, to exclude overtone pulsators and long-period Cepheids, which might define a different PL relation than their shorter-period counterparts. A more stringent cut at the short-period end might be applied to the samples of Cepheids observed with HST to avoid magnitude incompleteness (see Ferrarese et al. 2000a and F01).
Because of the differential nature of our comparison, a zero point shift, in either the photometric zero point or the Cepheid PL relation (for instance due to a change in the LMC distance), has no effect on the results. The adopted slope of the calibrating Cepheid PL relation, however, can be critical. The MF91 calibration of the Cepheid PL relation was based on a sample of 32 LMC Cepheids, mostly at short periods. A distance modulus of 18.50 mag, a mean and differential reddening of $E(V-I)=0.13$, and $R=A_V/(A_V - A_I) = 2.45$ (Cardelli, Clayton & Mathis 1989) were adopted for the LMC. This calibration has been superseded by the more recent work of the OGLE consortium (Udalski et al. 1999, hereafter U99), who observed almost 650 Cepheids in the LMC. While the MF91 and U99 calibrations give identical slopes for the $V-$band PL relation, the slopes in the $I-$band differ significantly. The impact of this slope change on the Cepheid distances can be significant (up to 5% in some cases) and distance dependent (F01). The measured reddening to each Cepheid is larger when the U99 calibration is used, the more so the longer the Cepheid’s period. Because of observational biases, the period of the shortest observed Cepheids is generally longer in distant galaxies than in nearby ones; it follows that, as a general trend, the U99 calibration leads to increasingly larger reddenings, or increasingly smaller distances, the further away the galaxy under study. Since the more distant galaxies in our sample happen to be the more metal rich, it is quite possible that adopting an incorrect slope for the LMC PL relation might translate into a spurious metallicity dependence when Cepheid and TRGB distances are compared.
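To make the role of the ratio $R$ explicit, the following sketch shows how a reddening-corrected (true) distance modulus is commonly formed from apparent $V$- and $I$-band moduli; the function and the sample numbers are illustrative only, not values taken from the tables of this paper.

```python
def true_modulus(mu_V, mu_I, R=2.45):
    """Reddening-corrected distance modulus from apparent V and I moduli.

    Uses mu_0 = mu_V - R * (mu_V - mu_I), with R = A_V / (A_V - A_I),
    so that the color excess E(V-I) = mu_V - mu_I is removed from the V modulus.
    """
    return mu_V - R * (mu_V - mu_I)

# Illustrative numbers only: apparent moduli differing by E(V-I) = 0.13 mag
print(true_modulus(mu_V=18.8185, mu_I=18.6885))  # ~18.50
```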
To assess the impact of systematics, the analysis in the following sections will be performed twice, with Cepheid distances derived using the MF91 and U99 calibrations. In all cases, we adopt a distance modulus of 18.50 mag, and a mean reddening of $E(V-I)=0.13$ mag for the LMC. These distances are listed in columns 3 and 4, respectively, of Table 3. For NGC 224, and all of the galaxies observed with HST with the exception of the two NGC 5457 fields, the distances were adopted from Table 3 of F01. In the case of distances based on the U99 calibration, F01 adopted the photometric zero points from Stetson et al. (1998); for consistency, we transformed these distances to the Hill et al. (1998) photometric system by adding 0.07 mag. For all other galaxies, we found it necessary to calculate the distances anew. Additional details are given in the Appendix.
For some of the galaxies with ground-based distances, Cepheids are observed in more bands than $V$ and $I$ (for instance $B$ and $R$, see Appendix A). Calculating a distance using multi-wavelength data sometimes leads to improvements over fits which only use $V$ and $I$ data, especially in the case of sparsely sampled PL relations. Distances using all available photometric bands are therefore listed in Table 3, columns 5 and 6; these are identical to the distances tabulated in Columns 3 and 4 when only $V$ and $I$ data are available.
Abundances
==========
Metal abundances for most extragalactic Cepheids cannot be measured directly. The HST Key Project adopted \[O/H\] nebular abundances derived from spectra of HII regions at the same galactocentric distance as the Cepheid fields (Zaritsky, Kennicutt, & Huchra 1994, hereafter ZKH; K98; Ferrarese et al. 2000a). Although these should provide reasonable estimates of the \[Fe/H\] stellar abundances for relatively massive, luminous, and short-lived Cepheids, we note that a one-to-one correspondence is not essential: the inferred metallicity dependence of the Cepheid PL relation should be valid so long as it is applied using a self-consistent nebular abundance scale.
The ZKH “empirical” abundances were derived from the relative strengths of the \[O II\]$\lambda\lambda$3726,3729, \[O III\]$\lambda\lambda$4959,5007, and H$\beta$ emission lines, and calibrated with a combination of observations and theoretical nebular photoionization models (e.g., Edmunds & Pagel 1984; Kewley & Dopita 2002). The adopted abundances are listed in the 2nd column of Table 4. For some metal-poor dwarf galaxies ($Z < 0.3 Z_\odot$), direct HII region oxygen abundances derived from electron temperature ($T_e$) measurements were available and were used instead.
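For reference, the strong-line index underlying such empirical calibrations can be computed as sketched below; converting it to $12+\log({\rm O/H})$ requires the specific ZKH (or other) calibration, whose coefficients are not reproduced here, so a user-supplied calibration function is assumed (function names and the placeholder calibration are ours).

```python
import numpy as np

def r23_index(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """Strong-line index R23 = ([O II]3726,3729 + [O III]4959,5007) / Hbeta."""
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta

def oxygen_abundance(line_fluxes, calibration):
    """12 + log(O/H) from emission-line fluxes and a user-supplied calibration.

    `calibration` maps log10(R23) to 12 + log(O/H); e.g. the ZKH polynomial
    (coefficients not reproduced here) or any other strong-line calibration.
    """
    x = np.log10(r23_index(*line_fluxes))
    return calibration(x)

# Hypothetical usage with a placeholder linear calibration (illustration only)
print(oxygen_abundance((3.0, 1.0, 3.0, 1.0), calibration=lambda x: 9.0 - 0.4 * x))
```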
There have been two significant developments in the abundance scale since the publication of the K98 analysis. First, improved measurements of the CNO abundances in the Sun have resulted in a downward revision of the solar oxygen abundance scale, from $12 + \log~{\rm O/H} = 8.9$ to 8.7 (Allende Prieto et al. 2001; Holweger 2001). All values of \[O/H\] in this paper will be referenced to the new lower solar abundance. Second, recent measurements of $T_e$-based abundances in several galaxies reveal that the strong-line empirical abundances are systematically higher than the direct abundances by 0.3–0.5 dex, for HII regions more metal-rich than $Z \sim Z_{LMC}$ (Kennicutt, Bresolin, & Garnett 2003 and references therein). This difference affects the formally derived slope of the Cepheid metallicity dependence, because adopting direct $T_e$-based abundances lowers the metal-rich end of the abundance scale without changing the abundances adopted for metal-poor HII regions (and Cepheids). Fortunately, this does not significantly change the effects of any Cepheid $Z$-dependence on the distance scale, so long as the metallicity corrections are applied and calibrated using the same nebular abundance scale. The issue is, however, relevant for understanding the physical origins of any Cepheid $Z$-dependence, where the absolute magnitude of the effect is important.
In order to remain consistent with K98 and the published extragalactic Cepheid studies, we will continue to adopt the ZKH abundance scale in our analysis. However, in the Discussion we also assess the impact of adopting a $T_e$-based abundance scale on the absolute scale of the Cepheid metallicity dependence.
Metallicity Dependence of the Cepheid PL Relation
=================================================
The main result of our study is shown in Figure \[figure:metaldep\], which plots the difference between Cepheid and TRGB distance moduli as a function of Cepheid \[O/H\] abundance (referenced to a solar abundance of $12 + \log {\rm O/H} = 8.7$). The figure includes two data points for M101 (which has Cepheid measurements in two fields at different metallicities) and one for each of the other 16 galaxies in Table 2.
In order to test the sensitivity of our results to the Cepheid samples and calibration used, Table 5 lists various fits to the different samples. The first section in Table 5 and Figure \[figure:metaldep\] present the results obtained when the TRGB distances (column 2 of Table 2) are compared to the following: (1) Cepheid distances as published in the original papers (column 2 of Table 3); (2) Cepheid distances derived using only $V$ and $I$ data, calibrated as in Madore & Freedman (1991) (column 3 of Table 3); (3) Cepheid distances derived using only $V$ and $I$ data, calibrated as in Udalski et al. (1999) (column 4 of Table 3); (4) Cepheid distances based on multiwavelength fits (when available), calibrated using MF91 (column 5 of Table 3); and (5) Cepheid distances based on multiwavelength fits (when available), calibrated using Udalski et al. (1999) (column 6 of Table 3). In fits (2) and (3) (based on $V$ and $I$ data only), Sextans B was excluded, since its distance, measured using only two Cepheids, is likely unreliable (a conclusion supported by the fact that the galaxy is a significant outlier in the 2nd and 3rd panels of Figure \[figure:metaldep\]). The least-squares fits account only for the errors in the distance moduli; the uncertainties in the abundances, \[O/H\], are mostly systematic in nature and do not affect the fits. All of the comparisons show a clear trend with metallicity, with the Cepheid distance modulus residuals decreasing with increasing \[O/H\]. This is in the same sense as reported earlier by Freedman & Madore (1990), Gould (1994), Sasselov et al. (1997), Kochanek (1997), and K98, but now the dependence is seen much more clearly. The slope $\gamma$ and its rms uncertainty, derived from a weighted least-squares fit, are shown in the upper right corner of each panel. We note that the main effect of adopting the Udalski et al. (1999) calibration instead of the MF91 scale is to reduce the average Cepheid distance moduli by about 0.08 mag regardless of the metallicity of the sample, so that $\gamma$ is unaffected.
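A minimal sketch of such a weighted least-squares fit, weighting each galaxy only by its distance-modulus error as described above, is given below; the input arrays are placeholders rather than the actual tabulated values.

```python
import numpy as np

def weighted_slope(x, y, sigma_y):
    """Weighted least-squares fit of y = a + gamma * x; returns gamma and its error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    W, Wx, Wy = w.sum(), (w * x).sum(), (w * y).sum()
    Wxx, Wxy = (w * x * x).sum(), (w * x * y).sum()
    delta = W * Wxx - Wx**2
    gamma = (W * Wxy - Wx * Wy) / delta
    gamma_err = np.sqrt(W / delta)
    return gamma, gamma_err

# Placeholder example: [O/H] abundances and Cepheid-TRGB modulus differences
oh = np.array([-1.0, -0.6, -0.3, 0.0, 0.2])
dmu = np.array([0.20, 0.12, 0.05, -0.02, -0.06])
err = np.array([0.10, 0.08, 0.09, 0.07, 0.08])
print(weighted_slope(oh, dmu, err))
```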
Since there is no a priori reason to prefer one dataset of Cepheid distances over another, we take the average of the $\gamma$ values returned by the five fits as our best estimate of the metallicity dependence of the Cepheid PL relation. Strictly speaking, the five fits are correlated since each one is always applied to the same set of galaxies. However, the methods used are different for the published, the $V/I$, and the multi-wavelength data. In that sense, the estimates are independent because they were derived by applying independent methods. Therefore, we assume the following: the two fits to the $V$ and $I$ data alone, based on the MF91 and U99 calibrations, are correlated, as are the two fits using the multiple-wavelength data based on the same two calibrations. Thus, we first estimated the average for each of the two sets of correlated fits. The uncertainty was chosen to encompass the range spanned by the two error bars. Finally, the value of $\gamma$ was derived by taking the weighted average of three independent estimates (from the published Cepheid data, the averaged $V/I$ data sets, and the averaged multiple-wavelength data sets):
$$\gamma = -0.24 \pm 0.05~{\rm mag~dex^{-1}}$$
If we had assumed that all fits are very highly correlated (rather than statistically independent), then the value of $\gamma$ would be $-0.23 \pm 0.11$, which agrees well with the value above. Our measurement of $\gamma$ is similar to $\gamma = -0.24 \pm 0.16$ derived by K98 using two Cepheid fields in M101, and to $\gamma = -0.12 \pm 0.08$ derived from a comparison of Cepheid and TRGB distances for a smaller sample. As reflected in the decreased error bar, the results presented in this paper are more robust: the K98 analysis suffered from a lack of galaxies with $Z > Z_{LMC}$, and used an indirect TRGB and Cepheid comparison (via different galaxies in the same group) for two of the metal-rich fields. Both shortcomings have been corrected in this study.
The Cepheid and TRGB distances used in our analysis come from a wide range of ground-based and HST observations, and it is important to confirm that the trends shown in Figure \[figure:metaldep\] do not arise from biases built into the sample, due, for instance, to crowding effects or photometric scale errors which might affect ground- and space-based determinations differently. As discussed in §3, several of the galaxies in our sample have TRGB distances measured both from the ground and with HST, and the excellent agreement between most of these measurements offers some assurance as to the consistency of the two sets of measurements. In the top panel of Figure \[figure:gbvshst\] the data points are coded according to the source (ground-based or HST) of the Cepheid distances. Although most low-metallicity galaxies were observed from the ground, while most high-metallicity galaxies were observed with HST, the middle ground is covered by both HST and ground-based measurements, and no systematic difference between the two sets is evident. Fitting the ground-based data alone yields a metallicity dependence $\gamma = -0.18 \pm 0.10$, fully consistent with the value of $-0.24 \pm 0.08$ derived for the combined data.
The lower panel of Figure \[figure:gbvshst\] shows the same comparison, but this time with the points coded according to the source of the TRGB distances. Again, a consistent trend is seen across the data set. Excluding the galaxies with only HST TRGB distances yields $\gamma = -0.13 \pm 0.12$; excluding the galaxies with only ground-based TRGB distances yields $\gamma = -0.23 \pm 0.14$. This gives us confidence that our measured metallicity dependence is not an artifact of instrumental effects and/or crowding errors.
Another conceivable source of systematic error in this comparison would be a residual metallicity dependence in the TRGB distances, which might masquerade as an effect on the Cepheid distances. As discussed in §3, there is a weak ($\pm 0.1$ mag) metallicity dependence in the TRGB magnitude, which is calibrated and corrected for using the observed (dereddened) $V - I$ color of the giants. The validity of these corrections is supported by observations of multiple fields in M33 by Kim et al. (2002). For this effect to be significant in our analysis, the halo metal abundances would have to correlate systematically with the Cepheid (disk) abundances. We present such a comparison in Figure \[figure:zcompare\], where we plot the dereddened $V - I$ color of the red giants against the adopted \[O/H\] abundance of the Cepheids; for galaxies with multiple TRGB measurements, we show each data point separately. This comparison, unfortunately, shows a slight correlation between the two sets of abundances; the galaxies with redder RGB stars tend to be more metal-rich on the oxygen abundance scale as well. A least-squares fit to the data yields a slope of $0.16 \pm 0.07$, which is inconsistent with zero. In order to estimate whether this has amplified the metallicity effect we have observed, we repeated the least-squares fits excluding the three galaxies with RGB colors $(V-I)>1.8$ mag. For all five samples (equivalent to the top five rows in Table 5), the slopes are consistent with those estimated using the whole sample. We obtain, e.g., $\gamma = -0.23 \pm 0.07$ and $-0.25 \pm 0.08$ for the MF91 and U99 multi-wavelength samples, respectively. Using all galaxies, we had obtained $\gamma = -0.23 \pm 0.08$ and $-0.24 \pm 0.08$. Thus, the slight correlation between the Pop I and Pop II abundances of the galaxies in our sample does not appear to be the cause of the metallicity dependence of the Cepheid variable stars.
Another important question about the metallicity dependence is whether it is present across the entire range of Cepheid abundances, or is only important in metal-rich objects. For example, if the effect on derived distance moduli were a linear function of the metal fraction, as parametrized by Chiosi et al. (1993), it would have a negligible influence for dwarf galaxies with $Z \ll Z_{LMC}$, but could be a very important effect in luminous metal-rich galaxies. Our data are not of sufficient quantity or quality to constrain unambiguously the functional form of the $Z$-dependence. However, we can examine the plausibility of a continuous metallicity dependence by fitting the data points for $Z \le Z_{LMC}$ and $Z \ge Z_{LMC}$ separately. The resulting dependences are nearly identical: $\gamma = -0.17 \pm 0.13$ for $Z \le Z_{LMC}$ (12 points) and $-0.22 \pm 0.19$ for $Z \ge Z_{LMC}$ (8 points). Our data are consistent with a continuous logarithmic metallicity dependence, but the result has marginal statistical significance.
Finally, we note that the distances of two galaxies, Sextans A and NGC 5457 (inner field), vary significantly among the five estimates listed in Table 3. For example, the distances of the inner field of NGC 5457 vary from $28.93 \pm 0.11$ up to $29.21 \pm 0.09$ mag. In order to assess how sensitive the value of $\gamma$ is to the distances of specific galaxies, we have estimated $\gamma$ for samples including and excluding these galaxies. We find that $\gamma$ is stable; its value, when estimated excluding Sextans A and N5457 (inner), agrees to within 1$\sigma$ with the result quoted above.
Discussion
==========
The results of this analysis provide the strongest evidence to date for a non-negligible dependence of Cepheid distances on metal abundance. Our best estimate of the magnitude of this dependence is $\gamma = -0.24 \pm 0.05$ mag dex$^{-1}$, when referenced to the Zaritsky et al. (1994) HII region metallicity scale (we consider the effects of adopting a different metal abundance scale below). This result is consistent with the dependence measured from a direct comparison of metal-rich and metal-poor Cepheid fields in M101 ($\gamma = -0.24 \pm 0.16$; K98). In the remainder of this section, we explore the consequences of such a $Z$-dependence on the calibration of the distance scale as a whole and on H$_0$.
The consequences of a Cepheid metallicity dependence of roughly this magnitude on the calibration of several extragalactic standard candles were explored in detail in the final series of papers from the HST H$_0$ Key Project (Sakai et al. 2000; Ferrarese et al. 2000b; Gibson et al. 2000; Kelson et al. 2000; Mould et al. 2000; F01). We have summarized these results in Table 6, which shows the net effect of a Cepheid metallicity dependence of 0.20 mag dex$^{-1}$ on the zero-point calibrations of the secondary distance indicators used by the Key Project team. These are expressed in terms of the luminosity zero points and of the mean net change in the derived distances for the Key Project samples.
A Cepheid $Z$-dependence in the direction measured here causes [*all*]{} of the secondary distance scales to be systematically underestimated (thus leading to an overestimate of H$_0$). This is because the PL relation is calibrated with a relatively metal-poor galaxy, the LMC. The magnitude of the effect is slightly different for the different secondary distance indicators, but for $\gamma = -0.20$ mag dex$^{-1}$ it is significant but small, lowering the net value of H$_0$ by 3.5%, or about 2.5 km s$^{-1}$ Mpc$^{-1}$ for H$_0$ = 72 km s$^{-1}$ Mpc$^{-1}$ (F01). This correction has already been incorporated into the value given above.
As mentioned above, the absolute slope of the Cepheid $Z$-dependence is also sensitive to the metallicity scale adopted. As an illustration of this point, Figure \[figure:znew\] shows the same Cepheid vs TRGB comparison as Figure \[figure:metaldep\], but with the metal abundances adjusted to agree with the electron-temperature-based HII region abundances in Kennicutt et al. (2003). As discussed earlier, this has the effect of preferentially reducing the metallicities of the most metal-rich Cepheid fields, and the result is a somewhat ($\sim$25%) steeper $Z$-dependence, with an average $\gamma = -0.31 \pm 0.09$ mag dex$^{-1}$. Note however that adopting this different abundance scale [*would have an identical effect on the distance scale*]{} to that given in Table 6, because the effect of the steeper $Z$-dependence would be canceled by a correspondingly narrower abundance range in the calibrating galaxies; in other words, as long as the metallicity corrections are applied using a consistent abundance scale, the precise calibration of the metallicity scale is not important. Of course, the absolute slope of the dependence is important for understanding the physical origins of the metallicity dependence of the Cepheid period-luminosity relation.
In §3, it was suggested that because the study presented in this paper is a [*differential*]{} test, it would not matter which TRGB calibration is used. We test this assumption by examining the metallicity dependence using two independent calibrations. The first is that of Lee et al. (1993), which is used throughout this paper. The second calibration is that of Salaris & Cassisi (1998), which is based on stellar evolution models. The dominant difference between the two calibrations is that the theoretical model of Salaris & Cassisi predicts a TRGB magnitude $\sim 0.1$ mag brighter than the empirical calibration of Lee et al. The authors suggest that the difference arises from the fact that the globular cluster samples used in the empirical calibration may be missing the brightest RGB stars due to small-number statistics, thus systematically dimming the TRGB magnitude. In Figure \[figure:zpcomp\], we show two correlations, one using the Lee et al. calibration, and the other based on Salaris & Cassisi (1998). As expected, the zero points of the two correlations differ by $\sim$ 0.1 mag. For the MF91 multi-wavelength sample, using the theoretical calibration, we obtain $\gamma = -0.26 \pm 0.08$, which agrees well with the fit using the empirical Lee et al. calibration, $\gamma = -0.23 \pm 0.08$. In summary, we emphasize again that the results shown in this paper are based on [*differential*]{} tests, and as indicated by our simple comparison, the value of $\gamma$ should not be affected by the use of another TRGB calibration.
Finally, our measurement of the metallicity dependence cannot distinguish between a variation in the zero point and one in the slope. As discussed in Section 4, the slope of the Cepheid PL relation is not always well determined; the MF91 and U99 calibrations in fact yield $I$-band slopes that are significantly different from each other. Thus, there is a need to check whether the slope is the cause of the metallicity dependence of the Cepheid variables. A detailed study is beyond the scope of this paper; here a simple exercise is carried out to examine the effect of the slope, by calculating the mean period of the Cepheid sample for each galaxy.
When the mean periods are plotted against the metallicities, we find that there are two groupings: one around $12+\log({\rm O/H}) \sim 8.7$ with mean $\log P \sim 1.4$, and the other at $12+\log({\rm O/H}) \sim 7.7$ with mean $\log P \sim 1.1$. That is, the group at high $Z$ corresponds to the longer mean period, and the group at low $Z$ corresponds to the shorter mean period. The low-$Z$, shorter-period group consists of four galaxies. The mean Cepheid periods were also calculated for all the galaxies used as calibrators for the Tully-Fisher relation. This is especially important for checking whether the slope is responsible for the metallicity dependence of the Cepheids, since that would affect the calibration of the secondary distance indicators and ultimately the value of H$_0$. The Tully-Fisher calibrators all lie in the high-$Z$, long-mean-period group. If the metallicity affects the slope of the Cepheid PL relation, then we might need to exclude those galaxies whose mean period is significantly different from the others. Thus, excluding the four galaxies that have low mean periods, the metallicity dependence $\gamma$ was re-calculated. For the multiple-wavelength, MF91-calibration sample, $\gamma = -0.25 \pm 0.09$, which agrees well with the value estimated using all galaxies ($\gamma=-0.24$). Therefore, to first order, the slope of the Cepheid PL relation does not appear to affect the metallicity dependence.
This project was made possible by an allocation of observing time on HST (program GO-8584). We gratefully acknowledge the assistance of Ray Lucas in carrying out this program, and the financial support of grant HST-GO-08584. This research has made use of NASA’s Astrophysics Data System. We would also like to thank the anonymous referee for suggestions that helped improve this paper.
Appendix A: Comments on galaxies for which new Cepheid distances have been calculated in this paper.
====================================================================================================
Sextans A: a total of 10 Cepheids, six with periods longer than 8 days, are known in this galaxy (Sandage & Carlson 1984; Piotto, Capaccioli & Pellegrini 1994). Sakai, Madore & Freedman (1996) recalibrated the Cepheid photometry by comparison with new CCD $BVRI$ data for non-variable stars in the field, and calculated the distance reported in column 2 of Table 3.
Sextans B: Sandage & Carlson (1984) discovered seven Cepheids in this galaxy. Three of these were confirmed by Piotto, Capaccioli & Pellegrini (1994) based on ground-based $BVRI$ data. The same authors discovered four shorter-period Cepheids, and used the entire sample to calculate a distance modulus of $25.63 \pm 0.21$ mag. Unfortunately, $VI$ magnitudes are measured for only three Cepheids, one of which has a period shorter than eight days. The distances listed in columns 3 and 4 are therefore based on only two Cepheids, for which we adopt the data from Piotto, Capaccioli & Pellegrini (1994).
NGC 300: Eighteen Cepheids were discovered by Graham (1984) based on photographic data; CCD $BVRI$ photometry was obtained for 16 of these by Freedman et al. (1992) and used to calculate the distance reported in Table 3 (col. 2). Recently, Pietrzyński et al. (2002) recovered and refined the magnitudes and periods for all of Graham’s variable stars, using $BV$ ground-based CCD data. This study revealed that three of the Cepheids used by Freedman et al. (1992) were blended. These were excluded for the purpose of this paper; the distance listed in columns 3 and 4 of Table 3 was calculated from the remaining 13 Cepheids using the periods and $V-$band magnitudes from Pietrzyński et al. (2002), and the $I-$band magnitudes from Freedman et al. (1992).

NGC 598: Column 2 of Table 3 lists the distance published by Freedman, Wilson & Madore (1991), based on $BVRI$ CCD photometry of 19 Cepheids originally discovered by Hubble (1926). The distance moduli calculated in this paper are based on all Cepheids given a quality index of [*a, b*]{} or [*c*]{} in Freedman, Wilson & Madore (1991), for which both $V$ and $I$ magnitudes are available.

NGC 3109: $B$ and $V$ photometry was obtained for eight Cepheids by Capaccioli et al. (1992). Subsequently, Musella, Piotto & Capaccioli (1997) extended the photometry to include $R$ and $I$ data, and discovered 16 additional Cepheids, calculating a distance modulus of $25.67 \pm 0.16$ mag. Seven of these Cepheids have periods longer than eight days and $VI$ photometry, and were used in calculating the distances listed in columns 3 and 4 of Table 3.

NGC 5457 inner: an inner field in M101 was studied by Stetson et al. (1998) as part of the HST Key Project on the Extragalactic Distance Scale. In this paper, a distance is calculated using a total of 61 bona fide, high-quality Cepheids for which Table 4 of Stetson et al. (1998) lists a quality index larger than 2 under either ‘Image Quality’ or ‘Light Curve Quality’.
NGC 5457 outer: HST data were obtained as part of the HST Key Project on the Extragalactic Distance Scale (Kelson et al. 1996). The distance calculated in this paper makes use of all 29 Cepheids from the original study.
NGC 6822: Based on photographic data, Kayser (1967) identified 13 Cepheids; CCD data, unfortunately only in the $r$-band, exist for six of these (Schmidt & Spear 1989). Photometric transformations from Kayser’s magnitudes to the $BVRI$ system were computed by Gallart et al. (1996) and applied to eight of Kayser’s Cepheids. Six of these, with well-determined periods, were then used to calculate the distance reported in column 2 of Table 3. We used the same sample of six Cepheids to calculate the distance to NGC 6822 listed in columns 3 and 4 of Table 3.

IC 1613: The first discovery of Cepheids in this galaxy dates back to the work of Baade and Hubble, later published by Sandage (1971) and Carlson & Sandage (1990). New $BVRI$ CCD data were published for 11 of the original 24 Cepheids by Freedman (1988) and used to derive the distance modulus listed in column 2 of Table 3. For the purpose of this paper, we have retained the four Cepheids with periods between eight and 100 days; periods and $VI$ magnitudes are from Freedman (1988).

WLM: A distance to this galaxy was calculated by Lee et al. (1993) based on $I-$band data, adopting an absorption $A(I) = 0.04$ mag. Furthermore, all of the Cepheids have periods less than eight days, preventing us from calculating a new distance for this galaxy.
Alibert, Y., Baraffe, I., Hauschildt, P., & Allard, F. 1999, A&A, 344, 551
Allende Prieto, C., Lambert, D.L., & Asplund, M. 2001, , 556, L63
Bellazzini, M., Ferraro, F.R., & Pancino, E. 2001, ApJ, 556, 635
Bennett, C.L., et al., 2003, , accepted
Berdnikov, L.N., Dambis, A.K., & Voziakova, O.V., 2000, , 143, 211
Bono, G., Caputo, F., Castellani, V., & Marconi, M. 1999, ApJ, 512, 711
Capaccioli, M., Piotto, G., & Bresolin, F. 1992, , 103, 1151
Caputo, F., Marconi, M., Musella, I., & Santolamazza, P. 2000, A&A, 359, 1059
Cardelli, J.A., Clayton, G.C. & Mathis, J.S. 1989, , 345, 245
Carlson, G., & Sandage, A., 1990, , 352, 587
Castellani, V., Deglinnocenti, S., & Luridiana, V. 1993, A&A, 272, 442
Chiosi, C., Wood, P.R., & Capitanio, N. 1993, , 86, 541
Ciardullo, R., Feldmeier, J.J., Jacoby, G.H., Kuzio de Naray, R., Laychak, M.B., & Durell, P.R. 2002, ApJ, 577, 31
Cioni, M.-R. L., van der Marel, R.P., Loup, C., & Habing, H.J., 2000, , 359, 601
Cole, A.A., et al., 1999, , 118, 1657
Da Costa, G.S., & Armandroff, T.E. 1990, , 100, 162
Dolphin, A.E., 2000, , 112, 1397
Dolphin, A. et al. 2001, ApJ, 550, 554
Dolphin, A.E., Saha, A., Skillman, E.D., Dohm-Palmer, R.C., Tolstoy, E., Cole, A.A., Gallagher, J.S., Hoessel, J.G., & Mateo, M. 2003, AJ, in press
Durrell, P.R., Harris, W.E., & Pritchet, C.J., 2001, , 121, 2557
Edmunds, M.G., & Pagel, B.E.J. 1984, , 211, 507
Ferrarese, L. et al. 1996, ApJ, 464, 568
Ferrarese, L. et al. 2000a, ApJS, 128, 431
Ferrarese, L. et al. 2000b, ApJ, 529, 745
Fiorentino, G., Caputo, F., Marconi, M., & Musella, I. 2002, ApJ, 576, 402
Freedman, W.L. 1988, , 326, 691
Freedman, W.L., & Madore, B.F. 1990, , 365, 186 (FM90)
Freedman, W.L., Wilson, C.D., & Madore, B.F. 1991, , 372, 455
Freedman, W.L. et al. 1992, , 396, 80
Freedman, W.L. et al. 1994a, , 427, 628
Freedman, W.L. et al. 1994b, Nature, 371, 757
Freedman, W.L., Madore, B.F., & Sakai, S. 1997, in preparation
Freedman et al. 2001, , 553, 47 (F01)
Frogel, J.A., Cohen, J.G., & Persson, S.E., 1983, , 275, 773
Gallart, C., Aparicio, A., & Vilchez, J.M., 1996, , 112, 1928
Gibson, B.K. et al., 2000, , 529, 723
Gieren, W.P. 1993, , 265, 184
Gieren, W.P., Fouque, P., & Gomez, M. 1998, ApJ, 496, 17
Graham, J.A., 1984, , 89, 1332
Graham, J.A. et al. 1997, , 477, 535
Gould, A. 1994, , 426, 542
Hernstein, J.R. et al. 1999, Nature, 400, 539
Hill, R. et al. 1998, , 496, 648
Holweger, H. 2001, in AIP Conf. Proc. 598, Solar and Galactic Workshop, ed. R. F. Wimmer-Schweingruber (New York: AIP), 23
Holtzman, J.A., et al., 1995, , 107, 156
Hubble, E., 1926, , 63, 236
Kaluzny, J., Stanek, K.Z., Krockenberger, Z.Z., Sasselov, D.D., Tonry, J.L., & Mateo, M. 1997, , in press
Kanbur, S.M., Ngeow, C., Nikolaev, S., Tanvir, N.R., & Hendry, M.A., 2003, , in press
Kayser, S.E., 1967, , 72, 134
Kelson, D. et al. 1996, , 463, 26
Kelson, D. et al. 2000, , 529, 768
Kennicutt, R.C., Freedman, W.L., & Mould, J.R. 1995, , 110,1476
Kennicutt, R.C., & Garnett, D.R. 1996, , 456, 504
Kennicutt, R.C., Bresolin, F. & Garnett, D.R., 2003, , 591, 801
Kennicutt, R.C. et al. 1998 , 498, 181 (K98)
Kewley, L.J., & Dopita, M.A. 2002, , 142, 35
Kim, M., Kim, E., Lee, M.G., Sarajedini, A., & Geisler, D., 2002, , 123, 244
Kochanek, C.S. 1997, , 491, 13
Landolt, A.U., 1992, , 104, 340
Lee, M.G., 1993, , 408, 409
Lee, Y.-W., Demarque, P. & Zinn, R., 1990, , 350, 155
Lee, M.G., Freedman, W.L., & Madore, B.F. 1993, , 417, 553
Madore, B.F., & Freedman, W.L. 1985, , 90, 1104
Madore, B.F., & Freedman, W.L. 1991, , 103, 933
Madore, B.F., & Freedman, W.L. 1995, AJ, 109, 1645
Madore, B.F., Freedman, W.L., & Sakai, S. 1997, in The Extragalactic Distance Scale, ed. M. Livio (Cambridge: CUP), p239
Mendez, B., Davis, M., Moustakas, J., Newman, J., Madore, B.F., & Freedman, W.L., 2002, , 124, 213
Minniti, D., & Zijlstra, A.A., 1997, , 114, 147
Minniti, D., Zijlstra, A.A., & Alonso, M.V., 1999, , 117, 881
Mould, J., & Kristian, J., 1986, , 305, 591
Mould, J. et al. 1991, , 383, 467
Mould, J., Kennicutt, R.C., & Freedman, W.L. 1999, Rev Prog Phys, 63, 763
Mould, J.R. et al. 2000, ApJ, 545, 547
Musella, I., Piotto, G., & Capaccioli, M., 1997, , 114, 976
Pietrzynski, G., Gieren, W., & Udalski, A., 2002, , 114, 298
Piotto, G., Capaccioli, M., & Pellegrini, C., 1994, , 287, 371
Rawson, D.M., et al., 1997, , 490, 517
Reid, N., Mould, J. & Thompson, I., 1987, , 323, 433
Richer, M.G., & McCall, M.L. 1995, , 445, 642
Romaniello, M., Salaris, M., Cassisi, S., & Panagia, N., 2000, , 530, 738
Saha, A. et al. 1994, , 425, 14
Saha, A. et al. 1995, , 438, 8
Saio, H., & Gautschy, A. 1998, ApJ, 498, 360
Sakai, S. 1999, in IAU Symposium 183, Cosmological Parameters and the Evolution of the Universe, ed. K. Saito (Dordrecht: Kluwer), p48
Sakai, S., Madore, B.F., & Freedman, W.L. 1996, , 461, 713
Sakai, S., Madore, B.F., & Freedman, W.L. 1997a, , 480, 589
Sakai, S., Madore, B.F., Freedman, W.L., Lauer, T., Ajhar, E.A., & Baum, W.A. 1997b, , 478, 49
Sakai, S., & Madore, B.F. 1999, ApJ, 526, 599
Sakai, S., Zaritsky, D., & Kennicutt, R.C., 2000, , 119, 1197
Salaris, M., & Cassisi, S. 1997, , 289, 406
Salaris, M., & Cassisi, S. 1998, , 298, 166
Sandage, A., 1971, , 166, 13
Sandage, A., & Carlson, G.A., 1984, BAAS, 16, 880
Sandage, A., Bell, R.A., & Tripicco, M.J. 1999, ApJ, 522, 250
Sasselov, D. et al. 1997, , 324, 471
Schechter, P.L., Mateo, M., & Saha, A. 1993, , 105, 1342
Schlegel, D.J., Finkbeiner, D.P., & Davis, M. 1998, ApJ, 500, 525
Schmidt, E.G., & Spear, G.G., 1989, , 236, 567
Sekiguchi, M., & Fukugita, M. 1998, Observatory, 118, 73
Silbermann, N.A. et al. 1996, , 470, 1
Skillman, E.D., Kennicutt, R.C., & Hodge, P.W. 1989, , 347, 875
Skillman, E.D., Kennicutt, R.C., Shields, G.S., & Zaritsky, D. 1996, , 462, 147
Stanek, K.Z. 1996, , 460, L37
Stetson, P.B. 1994, , 106, 250
Stetson, P.B. 1998, , 110, 1448
Stetson, P.B. et al. 1998, ApJ, 508, 491
Tammann, G.A., Sandage, A., & Reindl, B. 2003, , 404, 423
Turner, D.G., & Burke, J.F. 2002, AJ, 124, 2931
Udalski, A., Szymanski, M., Kubiak, M., Pietrzynski, G., Soszynski, I., Wozniak, P., & Zebrun, K. 1999, Acta Astron, 49, 201 (U99)
Udalski, A., Soszynski, I., Szymanski, M., et al. 1999b, Acta Astron, 49, 223
Udalski, A., Soszynski, I., Szymanski, M., et al. 1999c, Acta Astron, 49, 437
Udalski, A., Wyrzykowski, L., Pietrzynski, G., Szewczyk, O., Szymanski, M., Kubiak, M., Soszynski, I., & Zebrun, K., 2001, AcA, 51, 221
Wheeler, J.C., Sneden, C., & Truran, J.W. 1989, , 27, 279
Zaritsky, D., 1999, , 118, 2824

Zaritsky, D., Kennicutt, R., & Huchra, J. 1994, , 420, 87 (ZKH)
Zaritsky, D., Harris, J., & Thompson, I., 1997, , 114, 1002
[^1]: Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program GO-8584.
[^2]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) of reflected light is related to an object’s surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric rendering errors and polarimetric errors using multi-view RGB and AoP images, where we propose a novel polarimetric cost function that enables us to effectively constrain each estimated surface vertex’s normal while considering four possible ambiguous azimuth angles revealed from the AoP measurement. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific polarized reflection depending on the material.'
author:
- Jinyu Zhao
- Yusuke Monno
- Masatoshi Okutomi
bibliography:
- 'egbib.bib'
title: 'Polarimetric Multi-View Inverse Rendering'
---
Introduction
============
Image-based 3D reconstruction has been studied for years and can be applied to various applications, e.g. model creation [@biehler20143d], localization [@cao2013graph], segmentation [@dai20183dmv], and shape recognition [@su2015multi]. There are two common approaches for 3D reconstruction: geometric reconstruction and photometric reconstruction. The geometric reconstruction is based on feature matching and triangulation using multi-view images. It has been well established as structure from motion (SfM) [@agarwal2009building; @schonberger2016structure; @wu2011high] for sparse point cloud reconstruction, often followed by dense reconstruction with multi-view stereo (MVS) [@furukawa2010towards; @furukawa2009accurate; @galliani2015massively]. On the other hand, the photometric reconstruction exploits shading information for each image pixel to derive dense surface normals. It has been well studied as shape from shading [@barron2014shape; @xiong2014shading; @zhang1999shape] and photometric stereo [@haefner2019variational; @ikehata2014photometric; @wu2010robust].
There also exist other advanced methods combining the advantages of both approaches, e.g. multi-view photometric stereo [@li2020multi; @park2016robust] and multi-view inverse rendering (MVIR) [@kim2016multi]. These methods typically start with SfM and MVS for camera pose estimation and initial model reconstruction, and then refine the initial model, especially for texture-less surfaces, by utilizing shading cues.
Multi-view reconstruction using polarization images [@cui2017polarimetric; @yang2018polarimetric] has also received increasing attention with the development of one-shot polarization cameras using Sony IMX250 monochrome or color polarization sensor [@maruyama20183], e.g. JAI GO-5100MP-PGE [@JAI] and Lucid PHX050S-Q [@Lucid] cameras. The use of polarimetric information has great potential for 3D reconstruction since the angle of polarization (AoP) of reflected light is related to the azimuth angle of the object’s surface normal. One state-of-the-art method is Polarimetric MVS [@cui2017polarimetric], which propagates initial sparse depth from SfM by using AoP images obtained by a polarization camera for creating a dense depth map for each view. Since there are four possible azimuth angles corresponding to one AoP measurement as detailed in Section \[sec:p-ambiguities\], their depth propagation relies on the disambiguation of polarimetric ambiguities using the initial depth estimate by SfM.
In this paper, inspired by the success of MVIR [@kim2016multi] and Polarimetric MVS [@cui2017polarimetric], we propose Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR), which is a fully passive 3D reconstruction method exploiting all geometric, photometric, and polarimetric cues. We first estimate camera poses and an initial surface model based on SfM and MVS. We then refine the initial model by simultaneously using multi-view RGB and AoP images obtained from color polarization images (see Fig. \[fig:overall\]) while estimating surface albedos and illuminations for each image. The key of our method is a novel global cost optimization framework for shape refinement. In addition to a standard photometric rendering term that evaluates RGB intensity errors (as in [@kim2016multi]), we introduce a novel polarimetric term that evaluates the difference between the azimuth angle of each estimated surface vertex’s normal and four possible azimuth angles obtained from the corresponding AoP measurement. Our method takes all four possible ambiguous azimuth angles into account in the global optimization, instead of explicitly trying to solve the ambiguity as in Polarimetric MVS [@cui2017polarimetric], which makes our method more robust to noise and mis-disambiguation. Experimental results using synthetic and real data demonstrate that, compared with existing MVS methods, MVIR, and Polarimetric MVS, Polarimetric MVIR can reconstruct a more detailed 3D model from unconstrained input images without any prerequisites for surface materials. Two main contributions of this work are summarized as below.
- We propose Polarimetric MVIR, which is the first 3D reconstruction method based on multi-view photometric and polarimetric optimization with an inverse rendering framework.
- We propose a novel polarimetric cost function that enables us to effectively constrain the surface normal of each vertex of the estimated surface mesh while considering the azimuth angle ambiguities as an optimization problem.
Related Work
============
In the past literature, a number of methods have been proposed for the geometric 3D reconstruction (e.g. SfM [@agarwal2009building; @schonberger2016structure; @wu2011high] and MVS [@furukawa2010towards; @furukawa2009accurate; @galliani2015massively]) and the photometric 3D reconstruction (e.g. shape from shading [@barron2014shape; @xiong2014shading; @zhang1999shape] and photometric stereo [@haefner2019variational; @ikehata2014photometric; @wu2010robust]). In this section, we briefly introduce the combined methods of geometric and photometric 3D reconstruction, and also polarimetric 3D reconstruction methods, which are closely related to our work.
[**Multi-view geometric-photometric 3D reconstruction:**]{} The geometric approach is relatively robust to estimate camera poses and a sparse or dense point cloud, owing to the development of robust feature detection and matching algorithms [@bay2008speeded; @lowe2004distinctive]. However, it is weak in texture-less surfaces because sufficient feature correspondences cannot be obtained. In contrast, the photometric approach can recover fine details for texture-less surfaces by exploiting pixel-by-pixel shading information. However, it generally assumes a known or calibrated camera and lighting setup. Some advanced methods [@maurer2016combining; @wu2010fusing; @wu2011high], including multi-view photometric stereo [@li2020multi; @park2016robust] and MVIR [@kim2016multi], combine the two approaches to take both advantages. These methods typically estimate camera poses and an initial model based on SfM and MVS, and then refine the initial model, especially in texture-less regions, by using shading cues from multiple viewpoints. Our Polarimetric MVIR is built on MVIR [@kim2016multi], which is an uncalibrated method and jointly estimates a refined shape, surface albedos, and each image’s illumination.
[**Single-view shape from polarization (SfP):**]{} There are many SfP methods which estimate an object’s surface normals [@atkinson2006recovery; @huynh2013shape; @kadambi2015polarized; @miyazaki2003polarization; @morel2005polarization; @smith2018height; @tozza2017linear] based on the physical properties that the AoP and degree of polarization (DoP) of reflected light are related to the azimuth and the zenith angles of the object’s surface normal, respectively. However, existing SfP methods usually assume a specific surface material because of the material-dependent ambiguous relationship between AoP and the azimuth angle, and also the ambiguous relationship between DoP and the zenith angle. For instance, a diffuse polarization model is adopted in [@atkinson2006recovery; @huynh2013shape; @miyazaki2003polarization; @tozza2017linear], a specular polarization model is applied in [@morel2005polarization], and dielectric materials are considered in [@kadambi2015polarized; @smith2018height]. Some methods combine SfP with shape from shading or photometric stereo [@atkinson2017polarisation; @mahmoud2012direct; @ngo2015shape; @miyazaki2003polarization; @smith2018height; @zhu2019depth], where estimated surface normals from shading information are used as cues for resolving the polarimetric ambiguity. However, these methods require a calibrated lighting setup.
[**Multi-view geometric-polarimetric 3D reconstruction:**]{} Some studies have shown that multi-view polarimetric information is valuable for surface normal estimation [@atkinson2007shape; @ghosh2011multiview; @miyazaki2020shape; @miyazaki2016surface; @rahmann2001reconstruction] and also camera pose estimation [@chen2018polarimetric; @cui2019polarimetric]. However, existing multi-view methods typically assume a specific material, e.g. diffuse objects [@atkinson2007shape; @cui2019polarimetric], specular objects [@miyazaki2020shape; @miyazaki2016surface; @rahmann2001reconstruction], and faces [@ghosh2011multiview], to avoid the polarimetric ambiguities. Two recent state-of-the-art methods, Polarimetric MVS [@cui2017polarimetric] and Polarimetric SLAM [@yang2018polarimetric], consider a mixed diffuse and specular reflection model to remove the necessity of known surface materials. These methods first resolve the AoP ambiguity by using initial sparse depth cues from MVS or SLAM. Each viewpoint’s depth map is then densified by propagating the sparse depth, where the disambiguated AoP values are used to find iso-depth contours along which the depth can be propagated. Although dense multi-view depth maps can be generated by the depth propagation, this approach relies on correct disambiguation, which is not easy in general.
[**Advantages of Polarimetric MVIR:**]{} Compared to prior studies, our method has several advantages. First, it advances MVIR [@kim2016multi] by using polarimetric information while inheriting the benefits of MVIR. Second, similar to [@cui2017polarimetric; @yang2018polarimetric], our method is fully passive and does not require calibrated lighting and known surface materials. Third, polarimetric ambiguities are resolved as an optimization problem in shape refinement, instead of explicitly disambiguating them beforehand as in [@cui2017polarimetric; @yang2018polarimetric], which can avoid relying on the assumption that the disambiguation is correct. Finally, a fine shape can be obtained by simultaneously exploiting photometric and polarimetric cues, where multi-view AoP measurements are used for constraining each estimated surface vertex’s normal, which is a more direct and natural way to exploit azimuth-angle-related AoP measurements for shape estimation.
Polarimetric Ambiguities in Surface Normal Prediction {#sec:p-ambiguities}
=====================================================
Polarimetric calculation {#subsec:calculation}
------------------------
Unpolarized light becomes partially polarized after reflection by a certain object’s surface. Consequently, under common unpolarized illumination, the intensity of reflected light observed by a camera equipped with a polarizer satisfies the following equation: $$I(\phi_{pol})=\frac{I_{max}+I_{min}}{2}+\frac{I_{max}-I_{min}}{2}
{\rm cos}2(\phi_{pol}-\phi),$$ where $I_{max}$ and $I_{min}$ are the maximum and minimum intensities, respectively, $\phi_{pol}$ is the polarizer angle, and $\phi$ is the reflected light’s AoP, which indicates reflection’s direction of polarization. A polarization camera commonly observes the intensities of four polarization directions, i.e. $I_0$, $I_{45}$, $I_{90}$, and $I_{135}$. From those measurements, AoP can be calculated using the Stokes vector [@stokes1851composition] as $$\label{eq:AoP}
\phi=\frac{1}{2}{\rm tan}^{-1}\frac{s_2}{s_1},$$ where $\phi$ is the AoP, and $s_1$ and $s_2$ are the components of the Stokes vector $$\label{eq:Stokes}
\textbf{s}=\left[\begin{matrix}
s_0\\s_1\\s_2\\s_3
\end{matrix}
\right]
=\left[\begin{matrix}
I_{max}+I_{min}\\(I_{max}-I_{min}){\rm cos}(2\phi)\\(I_{max}-I_{min}){\rm sin}(2\phi)\\0
\end{matrix}
\right]
=\left[\begin{matrix}
I_0+I_{90}\\I_0-I_{90}\\I_{45}-I_{135}\\0
\end{matrix}
\right],$$ where $s_3=0$ because circularly polarized light is not considered in this work.
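A direct transcription of Eqs. (\[eq:AoP\]) and (\[eq:Stokes\]) for a single pixel might look as follows (function names are ours); the $\tan^{-1}$ is evaluated with a two-argument arctangent so that the recovered AoP falls in $[0, \pi)$.

```python
import numpy as np

def stokes_from_intensities(I0, I45, I90, I135):
    """Linear Stokes components (s0, s1, s2) from the four polarizer angles."""
    return I0 + I90, I0 - I90, I45 - I135

def aop_dop(I0, I45, I90, I135):
    """Angle of polarization in [0, pi) and degree of polarization."""
    s0, s1, s2 = stokes_from_intensities(I0, I45, I90, I135)
    aop = (0.5 * np.arctan2(s2, s1)) % np.pi
    dop = np.sqrt(s1**2 + s2**2) / s0
    return aop, dop

# Example: fully linearly polarized light with AoP = 30 degrees
I0, I45, I90, I135 = 0.75, 0.933, 0.25, 0.067
aop, dop = aop_dop(I0, I45, I90, I135)
print(np.rad2deg(aop), dop)  # ~30 degrees, DoP ~1
```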
Ambiguities {#sec:ambiguities}
-----------
The AoP of reflected light reveals information about the surface normal according to the Fresnel equations, as described by Atkinson and Hancock [@atkinson2006recovery]. There are two linear polarization components of the incident wave: $s$-polarized light and $p$-polarized light, whose directions of polarization are perpendicular and parallel, respectively, to the plane of incidence, which contains the incident ray and the surface normal. For a dielectric, the reflection coefficient of $s$-polarized light is always greater than that of $p$-polarized light, while the transmission coefficient of $p$-polarized light is always greater than that of $s$-polarized light. For a metal, the relationships are reversed. Consequently, the polarization direction of the reflected light is either perpendicular or parallel to the plane of incidence, according to the relationship between the $s$-polarized and $p$-polarized components.
In this work, we consider a mixed polarization reflection model [@baek2018simultaneous; @cui2017polarimetric] which includes unpolarized diffuse reflection, polarized specular reflection ($s$-polarized light is stronger), and polarized diffuse reflection ($p$-polarized light is stronger). In that case, the relationship between the AoP and the azimuth angle, which is the angle between the surface normal’s projection onto the image plane and the $x$-axis of the image coordinates, depends on which polarized reflection component is dominant. In short, as illustrated in Fig. \[fig:ambiguity\], there exist two kinds of ambiguities.
[**$\pi$-ambiguity:**]{} $\pi$-ambiguity exists because the range of AoP is from 0 to $\pi$ while that of the azimuth angle is from 0 to $2\pi$. The AoP corresponds either to the same direction as the surface normal’s projection or to its inverse direction, i.e. the AoP may equal the azimuth angle or differ from it by $\pi$.
[**$\pi/2$-ambiguity:**]{} It is difficult to decide whether polarized specular reflection or polarized diffuse reflection dominates without any prerequisites for surface materials. The AoP differs from the azimuth angle by $\pi/2$ when polarized specular reflection dominates, while it equals the azimuth angle or differs from it by $\pi$ when polarized diffuse reflection dominates. Therefore, there exists a $\pi/2$-ambiguity in addition to the $\pi$-ambiguity when determining the relationship between the AoP and the azimuth angle.
As shown in Fig. \[fig:ambiguity\], for the AoP value ($\phi=120^\circ$) for the pixel marked in red, there are four possible azimuth angles (i.e. $\alpha=30^{\circ},\ 120^{\circ},\ 210^{\circ}$ and $300^{\circ}$) as depicted by the four lines on the image plane. The planes where the surface normal has to lie, which are represented by the four transparent color planes, are determined according to the four possible azimuth angles. The dashed arrows on the object show the examples of possible surface normals, which are constrained on the planes. In our method, the explained relationship between the AoP measurement and the possible azimuth angles is exploited to constrain the estimated surface vertex’s normal.
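As a small illustration of this four-fold ambiguity, the sketch below enumerates the candidate azimuth angles implied by a single AoP measurement (angles in degrees; the function name is ours).

```python
import numpy as np

def candidate_azimuths(aop_deg):
    """Four possible azimuth angles (degrees, in [0, 360)) for one AoP measurement,
    covering both the pi-ambiguity and the pi/2-ambiguity."""
    offsets = np.array([-90.0, 0.0, 90.0, 180.0])
    return np.sort((aop_deg + offsets) % 360.0)

print(candidate_azimuths(120.0))  # [ 30. 120. 210. 300.]
```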
Polarimetric Multi-View Inverse Rendering {#sec:pmvir}
=========================================
Color polarization sensor data processing {#subsec:rawDataProcessing}
-----------------------------------------
To obtain input RGB and AoP images, we use a one-shot color polarization camera with the $4\!\times\!4$ regular pixel pattern [@maruyama20183] as shown in Fig. \[fig:overall\](a), although our method is not limited to this kind of polarization camera. For every pixel, twelve values, i.e. $3\ (R,G,B)\times4\ (I_0,I_{45},I_{90},I_{135})$, are obtained by interpolating the raw mosaic data. As proposed in [@morimatsu2020], pixel values for each direction in every $2\!\times\!2$ block are extracted to obtain Bayer-patterned data for that direction. Then, Bayer color interpolation [@kiku2016beyond] and polarization interpolation [@mihoubi2018survey] are sequentially performed to obtain full-color-polarization data. As for the RGB images used for the subsequent processing, we employ the unpolarized RGB component ${\bf I}_{min}$ obtained as ${\bf I}_{min} = ({\bf I}_{0}+{\bf I}_{90})(1-\rho)/2$, where $\rho$ is the DoP, calculated by using the Stokes vector of Eq. (\[eq:Stokes\]) as $\rho = \sqrt{s_1^2+s_2^2}/{s_0}$. Since using $\textbf{I}_{min}$ can suppress the influence of specular reflection [@atkinson2006recovery], it is beneficial for SfM and our photometric optimization. On the other hand, AoP values are calculated using Eqs. (\[eq:AoP\]) and (\[eq:Stokes\]), where the intensities of the four polarization directions ($I_0$, $I_{45}$, $I_{90}$, $I_{135}$) are obtained by averaging R, G, and B values for each direction.
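Building on the same Stokes components, a per-image sketch of this preprocessing, assuming the four polarization-direction images have already been demosaicked into RGB arrays of shape $H\times W\times 3$, is given below (array and function names are ours; the random arrays only stand in for real captures and are not physically consistent).

```python
import numpy as np

def preprocess(I0, I45, I90, I135):
    """Compute AoP, DoP, and the unpolarized component I_min from RGB images
    of the four polarizer directions (each of shape H x W x 3)."""
    # AoP and DoP are computed from intensities averaged over the R, G, B channels.
    i0, i45, i90, i135 = (x.mean(axis=2) for x in (I0, I45, I90, I135))
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    aop = (0.5 * np.arctan2(s2, s1)) % np.pi
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)
    # Unpolarized RGB component used for SfM and photometric optimization.
    I_min = (I0 + I90) * (1.0 - dop[..., None]) / 2.0
    return aop, dop, I_min

# Example with random arrays standing in for demosaicked captures
rng = np.random.default_rng(0)
imgs = [rng.random((4, 4, 3)) for _ in range(4)]
aop, dop, I_min = preprocess(*imgs)
```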
Initial geometric reconstruction
--------------------------------
Figure \[fig:flowchart\] shows the overall flow of our Polarimetric MVIR using multi-view RGB and AoP images. It starts with initial geometric 3D reconstruction as follows. SfM is first performed using the RGB images to estimate camera poses. Then, MVS and surface reconstruction are applied to obtain an initial surface model, which is represented by a triangular mesh. The visibility of each vertex to each camera is then checked using the algorithm in [@kim2016multi]. Finally, to increase the number of vertices, the initial surface is subdivided by $\sqrt{3}$-subdivision [@kobbelt20003] until the maximum number of pixels covered by each triangular patch projected onto its visible cameras becomes smaller than a threshold.
Photometric and polarimetric optimization {#sec:optimize}
-----------------------------------------
The photometric and polarimetric optimization is then performed to refine the initial model while estimating each vertex’s albedo and each image’s illumination. The cost function is expressed as $$\label{eq:costFunction}
\mathop{\arg\min}\limits_{{\bf X}, {\bf K}, {\bf L}} E_{pho}({\bf X}, {\bf K}, {\bf L}) + \tau_1 E_{pol}({\bf X})
+ \tau_2 E_{gsm}({\bf X}) + \tau_3 E_{psm}({\bf X}, {\bf K}),$$ where $E_{pho}$, $E_{pol}$, $E_{gsm}$, and $E_{psm}$ represent a photometric rendering term, a polarimetric term, a geometric smoothness term, and a photometric smoothness term, respectively. $\tau_1$, $\tau_2$, and $\tau_3$ are weights to balance each term. Similar to MVIR [@kim2016multi], the optimization parameters are defined as below:
- ${\bf X}\in\mathbb{R}^{3\times n}$ is the vertex 3D coordinate, where $n$ is the total number of vertices.
- ${\bf K}\in\mathbb{R}^{3\times n}$ is the vertex albedo, which is expressed in the RGB color space.
- ${\bf L}\in\mathbb{R}^{12\times p}$ is the scene illumination matrix, where $p$ is the total number of images. Each image’s illumination is represented by nine coefficients for the second-order spherical harmonics basis $(L_0,\cdots,L_8)$ [@ramamoorthi2001efficient; @wu2011high] and three RGB color scales $(L_R,L_G,L_B)$.
[**Photometric rendering term:**]{} We adopt the same photometric rendering term as MVIR, which is expressed as $$\label{photometricRenderingTerm}
E_{pho}({\bf X}, {\bf K}, {\bf L})=\sum_i \sum_{c\in \mathcal{V}(i)} \frac{
||{\bf I}_{i,c}({\bf X})-\hat{{\bf I}}_{i,c}({\bf X}, {\bf K}, {\bf L})||^2}{|\mathcal{V}(i)|},$$ which measures the pixel-wise intensity error between observed and rendered values. ${\bf I}_{i,c}\in\mathbb{R}^3$ is the observed RGB values of the pixel in $c$-th image corresponding to $i$-th vertex’s projection and $\hat{{\bf I}}_{i,c}\in \mathbb{R}^3$ is the corresponding rendered RGB values. $\mathcal{V}(i)$ represents the visible camera set for $i$-th vertex. The perspective projection model is used to project each vertex to each camera. Suppose $(K_R,K_G,K_B)$ and $(L_0,\cdots,L_8,L_R,L_G,L_B)$ represent the albedo for $i$-th vertex and the illumination for $c$-th image, where the indexes $i$ and $c$ are omitted for notation simplicity. The rendered RGB values are then calculated as $$\label{eq:rendering}
\hat{{\bf I}}_{i,c}({\bf X}, {\bf K}, {\bf L})=[K_{R}S({\bf N(\bf X),\bf L})L_R,K_GS({\bf N(\bf X),\bf L})L_G,K_BS({\bf N(\bf X),\bf L})L_B]^T,$$ where $S$ is the shading calculated by using the second-order spherical harmonics illumination model [@ramamoorthi2001efficient; @wu2011high] as $$\label{eq:shading}
\begin{aligned}
S(\bf N(\bf X),\bf L)&=L_0+L_1N_y+L_2N_z+L_3N_x+L_4N_xN_y+L_5N_yN_z\\
&+L_6(N_z^2-\frac{1}{3})+L_7N_xN_z+L_8(N_x^2-N_y^2),
\end{aligned}$$ where ${\bf N}({\bf X}) = [N_x,N_y,N_z]^T$ represents the vertex’s normal vector, which is calculated as the average of adjacent triangular patch’s normals. Varying illuminations for each image and spatially varying albedos are considered as in [@kim2016multi].
[**Polarimetric term:**]{} To effectively constrain each estimated surface vertex’s normal, we here propose a novel polarimetric term. Figure \[fig:cost\] shows an example of our polarimetric cost function for the case that the AoP measurement of the pixel corresponding to the vertex’s projection equals 120$^\circ$, i.e. $\phi=120^\circ$. This example corresponds to the situation as shown in Fig. \[fig:ambiguity\]. In both figures, four possible azimuth angles derived from the AoP measurement are shown by blue solid, purple dashed, green dashed, and brown dashed lines on the image plane, respectively. These four possibilities are caused by both the $\pi$-ambiguity and the $\pi/2$-ambiguity introduced in Section \[sec:ambiguities\]. In the ideal case without noise, one of the four possible azimuth angles should be the same as the azimuth angle of (unknown) true surface normal.
Based on this principle, as shown in Fig. \[fig:cost\], our polarimetric term evaluates the difference between the azimuth angle of the estimated surface vertex’s normal $\alpha$ and its closest possible azimuth angle from the AoP measurement (i.e. $\phi-\pi/2,\ \phi,\ \phi+\pi/2$, or $\phi+\pi$). The cost function is mathematically defined as $$\label{eq:pol}
E_{pol}({\bf X})=\sum_i \sum_{c\in \mathcal{V}(i)}
\left(\frac{e^{-k\theta_{i,c}({\bf X})}-e^{-k}}{1-e^{-k}}\right)^2/{|\mathcal{V}(i)|},$$ where $k$ is a parameter that determines the narrowness of the concave region used to assign the cost (see Fig. \[fig:cost\]). $\theta_{i,c}$ is defined as $$\label{eq:theta}
\begin{aligned}
\theta_{i,c}({\bf X})=1-4\eta_{i,c}({\bf X})/\pi,
\end{aligned}$$ where $\eta_{i,c}$ is expressed as $$\label{eq:eta}
\begin{aligned}
\eta_{i,c}({\bf X})=&\mathop{\min}(|\alpha_{i,c}({\bf N(\textbf X)})-\phi_{i,c}({\bf X})-\pi/2|,|\alpha_{i,c}({\bf N(\textbf X)})-\phi_{i,c}({\bf X})|,\\
&|\alpha_{i,c}({\bf N(\textbf X)})-\phi_{i,c}({\bf X})+\pi/2|,|\alpha_{i,c}({\bf N(\textbf X)})-\phi_{i,c}({\bf X})+\pi|).
\end{aligned}$$ Here, $\alpha_{i,c}$ is the azimuth angle obtained by projecting the $i$-th vertex’s normal onto the $c$-th image plane, and $\phi_{i,c}$ is the corresponding AoP measurement.
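As an illustration, the per-measurement cost defined by Eqs. (\[eq:pol\])–(\[eq:eta\]) may be evaluated as in the following sketch (the azimuth angle `alpha` of the projected normal is assumed to be computed beforehand; names are illustrative):

```python
import numpy as np

def polarimetric_cost(alpha, phi, k=0.5):
    """Polarimetric cost for one vertex/view pair, Eqs. (eq:pol)-(eq:eta).

    alpha : azimuth angle of the projected estimated normal [rad]
    phi   : AoP measurement of the corresponding pixel [rad]
    k     : parameter controlling the narrowness of the concave cost
    """
    # eta: distance to the closest of the four candidate azimuth angles
    # phi - pi/2, phi, phi + pi/2, phi + pi (pi- and pi/2-ambiguities)
    eta = min(abs(alpha - phi - np.pi / 2.0),
              abs(alpha - phi),
              abs(alpha - phi + np.pi / 2.0),
              abs(alpha - phi + np.pi))
    theta = 1.0 - 4.0 * eta / np.pi  # Eq. (eq:theta)
    # concave penalty of Eq. (eq:pol), before averaging over visible views
    return ((np.exp(-k * theta) - np.exp(-k)) / (1.0 - np.exp(-k))) ** 2
```

The full term $E_{pol}$ is then obtained by averaging this value over the visible views of each vertex and summing over all vertices.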
Our polarimetric term mainly has two benefits. First, it enables us to constrain the estimated surface vertex’s normal while simultaneously resolving the ambiguities based on the optimization using all vertices and all multi-view AoP measurements. Second, the concave shape of the cost function makes the normal constraint more robust to noise, which is an important property since AoP is susceptible to noise. The balance between the strength of the normal constraint and the robustness to noise can be adjusted by the parameter $k$.
[**Geometric smoothness term:**]{} The geometric smoothness term is applied to regularize the cost and to derive a smooth surface. This term is described as $$\label{eq:gsm}
E_{gsm}({\bf X})=\sum_{m}
\left(\frac{{\rm arccos}\left({\bf N}^\prime_m({\bf X})\cdot {\bf N}^\prime_{m_{avg}}({\bf X})\right)}{\pi}\right)^{q},$$ where ${\bf N}^\prime_m$ represents the normal of the $m$-th triangular patch, ${\bf N}^\prime_{m_{avg}}$ represents the averaged normal of its adjacent patches, and $q$ is a parameter that controls how the cost is assigned. This term becomes small if the curvature of the surface is close to constant.
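A per-patch sketch of Eq. (\[eq:gsm\]) could look as follows (whether the averaged adjacent normal is re-normalized is an implementation detail assumed here):

```python
import numpy as np

def gsm_cost(patch_normal, adjacent_normals, q=2.2):
    """Geometric smoothness cost of one triangular patch, Eq. (eq:gsm).

    patch_normal     : (3,) unit normal of the m-th patch
    adjacent_normals : (n_adj, 3) unit normals of its adjacent patches
    q                : exponent controlling how the cost is assigned
    """
    avg = np.mean(adjacent_normals, axis=0)
    avg = avg / np.linalg.norm(avg)  # assumed re-normalization of the average
    cos = float(np.clip(np.dot(patch_normal, avg), -1.0, 1.0))
    return (np.arccos(cos) / np.pi) ** q
```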
[**Photometric smoothness term:**]{} Changes of pixel values in each image may result from different albedos or shading since spatially varying albedos are allowed in our model. To regularize this uncertainty, the same photometric smoothness term as [@kim2016multi] is applied as $$E_{psm}({\bf X,\bf K})=\sum_{i} \sum_{j\in \mathcal{A}(i)}
w_{i,j}({\bf X})\left|\left|({\bf K}_i-{\bf K}_j)\right|\right|^2,$$ where $\mathcal{A}(i)$ is the set of adjacent vertices of the $i$-th vertex and $w_{i,j}$ is the weight for the pair of the $i$-th and $j$-th vertices. A small weight is assigned, i.e. a change of albedo is allowed, if a large chromaticity or intensity difference is observed between the corresponding pixels in the RGB image (see [@kim2016multi] for details). By this term, a smooth variation in photometric information is attributed to shading, while a sharp variation is attributed to varying albedos.
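Schematically, this term can be accumulated over adjacent vertex pairs as below (the weights $w_{i,j}$ are assumed to be precomputed from the chromaticity and intensity differences as in [@kim2016multi]):

```python
def psm_cost(albedo, adjacency, weights):
    """Photometric smoothness term, summed over adjacent vertex pairs.

    albedo    : (n, 3) NumPy array of per-vertex RGB albedos K_i
    adjacency : dict {i: iterable of adjacent vertex indices j}
    weights   : dict {(i, j): precomputed weight w_ij}
    """
    cost = 0.0
    for i, neighbours in adjacency.items():
        for j in neighbours:
            diff = albedo[i] - albedo[j]
            cost += weights[(i, j)] * float(diff @ diff)
    return cost
```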
Experimental Results
====================
Implementation details
----------------------
We apply COLMAP [@schonberger2016structure] for SfM and OpenMVS [@OpenMVS] for MVS. The initial surface is reconstructed by the built-in surface reconstruction function of OpenMVS. The cost optimization of Eq. (\[eq:costFunction\]) is iterated three times by changing the weights as $(\tau_1, \tau_2, \tau_3)$ = $(0.05, 1.0, 1.0)$, $(0.1, 1.0, 1.0)$, and $(0.3, 1.0, 1.0)$. For each iteration, the parameter $q$ in Eq. (\[eq:gsm\]) is changed as $q$ = $2.2$, $2.8$, and $3.4$, while the parameter $k$ in Eq. (\[eq:pol\]) is set to 0.5 in all three iterations. By the three iterations, the surface normal constraint from AoP is gradually strengthened by allowing small normal variations to derive a fine shape while avoiding a local minimum. The non-linear optimization problem is solved by using Ceres solver [@ceres-solver].
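The coarse-to-fine schedule above can be summarized schematically as follows (the `solve` routine is a placeholder standing for the Ceres-based minimization of Eq. (\[eq:costFunction\]); the sketch only records the parameter schedule):

```python
# Weights (tau1, tau2, tau3) and smoothness exponent q for the three rounds;
# the parameter k of Eq. (eq:pol) is fixed to 0.5 throughout.
SCHEDULE = [
    {"tau": (0.05, 1.0, 1.0), "q": 2.2, "k": 0.5},
    {"tau": (0.10, 1.0, 1.0), "q": 2.8, "k": 0.5},
    {"tau": (0.30, 1.0, 1.0), "q": 3.4, "k": 0.5},
]

def refine(mesh, images, aops, solve):
    """Run the three optimization rounds on an initial mesh."""
    for stage in SCHEDULE:
        mesh = solve(mesh, images, aops, **stage)
    return mesh
```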
Comparison using synthetic data
-------------------------------
Numerical evaluation was performed using four CG models (Armadillo, Stanford bunny, Dragon, and Buddha) available from the Stanford 3D Scanning Repository [@Stanford]. The original 3D models were subdivided to provide a sufficient number of vertices for the ground truth. Since it is very difficult to simulate realistic polarization images and there are no public tools and datasets for polarimetric 3D reconstruction, we synthesized the RGB and the AoP inputs using Blender [@Blender] as follows. Using spherically placed cameras, the RGB images were rendered under a point light source located at infinity and an environmental light uniformly contributing to the surface (see Fig. \[fig:evaluation\]). For the first experiment, the AoP for each pixel was obtained directly from the corresponding azimuth angle, meaning that there is no $\pi/2$-ambiguity. The experiment was also conducted with ambiguities and Gaussian noise randomly added to the azimuth angles.
We compared our Polarimetric MVIR with four representative MVS methods (PMVS [@furukawa2009accurate], CMPMVS [@jancosek2011multi], MVS in COLMAP [@schonberger2016pixelwise], OpenMVS [@OpenMVS]) and MVIR [@kim2016multi] using the same initial model as ours. Ground-truth camera poses are used to avoid the alignment problem among the models reconstructed from different methods. Commonly used metrics [@aanaes2016large; @ley2016syb3r], i.e. accuracy, which is the distance from each estimated 3D point to its nearest ground-truth 3D point, and completeness, which is the distance from each ground-truth 3D point to its nearest estimated 3D point, were used for evaluation. As estimated 3D points, the output point cloud was used for PMVS, COLMAP, and OpenMVS, while the output surface’s vertices were used for CMPMVS, MVIR, and our method.
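Both metrics amount to nearest-neighbour queries in the two directions and can be computed, for instance, as in the following sketch (SciPy; an illustrative outline rather than the exact evaluation script):

```python
from scipy.spatial import cKDTree

def accuracy_completeness(estimated, ground_truth):
    """Mean accuracy and completeness between two (n, 3) point sets.

    accuracy     : mean distance from each estimated point to its nearest
                   ground-truth point
    completeness : mean distance from each ground-truth point to its nearest
                   estimated point
    """
    accuracy = cKDTree(ground_truth).query(estimated)[0].mean()
    completeness = cKDTree(estimated).query(ground_truth)[0].mean()
    return accuracy, completeness
```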
Table \[table:evaluation\] shows the comparison of the average accuracy and the average completeness for each model. The results show that our method achieves the best accuracy and completeness for all four models with significant improvements. Visual comparison for Armadillo is shown in Fig. \[fig:evaluation\], where the surfaces for PMVS and COLMAP were created using Poisson surface reconstruction [@kazhdan2013screened] with our best parameter choice, while the surface for OpenMVS was obtained using its built-in function. We can clearly see that our method can recover more details than the other methods by exploiting AoP information. The visual comparison for the other models can be seen in our supplementary material.
Table \[table:robust\] shows the numerical evaluation for our method when 50% random ambiguities and Gaussian noise with different noise levels were added to the azimuth-angle images. Note that the top three rows are the results without any disturbance and are the same as those in Table \[table:evaluation\], while the bottom five rows are the results of our method with ambiguity and noise added to the AoP images. These results demonstrate that our method is quite robust against the ambiguity and the noise, and outperforms the best-performing existing methods even with 50% ambiguities and a large noise level ($\sigma = 24^\circ$).
Comparison using real data
--------------------------
Figure \[fig:result\] shows the visual comparison of the reconstructed 3D models using real images of a toy car (56 views) and a camera (31 views), captured under normal office lighting conditions with fluorescent lights on the ceiling, and a statue (43 views), captured under cloudy outdoor daylight. We captured the polarization images using a Lucid PHX050S-Q camera [@Lucid]. We compared our method with CMPMVS, OpenMVS, and MVIR, which respectively provide the best accuracy, the best completeness, and the best balanced result among the existing methods shown in Table \[table:evaluation\]. The results of all compared methods and our albedo and illumination results can be seen in the supplementary material.
The results of Fig. \[fig:result\] show that CMPMVS can reconstruct fine details in relatively well-textured regions (e.g. the details of the camera lens), while it fails in texture-less regions (e.g. the front window of the car). OpenMVS can better reconstruct the overall shapes owing to the denser points, although some fine details are lost. MVIR performs well except for dark regions, where the shading information is limited (e.g. the top of the camera and the surface of the statue). On the contrary, our method can recover finer details and clearly improve the reconstructed 3D model quality by exploiting both photometric and polarimetric information, especially in regions such as the front body and the window of the toy car, and the overall surfaces of the camera and the statue.
Refinement for Polarimetric MVS [@cui2017polarimetric]
------------------------------------------------------
Since Polarimetric MVS [@cui2017polarimetric] can be used to obtain our initial model and thus to make better use of polarimetric information, we used the point cloud results of two objects (vase and car) obtained by Polarimetric MVS for the initial surface generation and then refined the initial surface using the provided camera poses and the RGB and AoP images from 36 viewpoints, as shown in Fig. \[fig:pmvs\] (a) and (b). As shown in Fig. \[fig:pmvs\] (c) and (d), Polarimetric MVS can provide dense point clouds, even for texture-less regions, by exploiting polarimetric information. However, there are still some outliers, which may be caused by AoP noise and incorrect disambiguation, and the resulting surfaces are rippled. These artifacts are alleviated in our method (Polarimetric MVIR) by solving the ambiguity problem in our global optimization. Moreover, we can see that finer details are reconstructed using photometric shading information in our cost function.
Conclusions
===========
In this paper, we have proposed Polarimetric MVIR, which can reconstruct a high-quality 3D model by optimizing multi-view photometric rendering errors and polarimetric errors. Polarimetric MVIR resolves the $\pi$- and $\pi/2$-ambiguities as an optimization problem, which makes the method fully passive and applicable to various materials. Experimental results have demonstrated that Polarimetric MVIR is robust to ambiguities and noise, and generates more detailed 3D models compared with existing state-of-the-art multi-view reconstruction methods.
Our Polarimetric MVIR has the limitation that it requires a reasonably good initial shape for its global optimization, which motivates us to develop more robust initial shape estimation in future work.
\
**Acknowledgment** This work was partly supported by JSPS KAKENHI Grant Number 17H00744. The authors would like to thank Dr. Zhaopeng Cui for sharing the data of Polarimetric MVS.
---
abstract: 'In this article we present a geometric discrete-time Pontryagin maximum principle (PMP) on matrix Lie groups that incorporates frequency constraints on the controls in addition to pointwise constraints on the states and control actions directly at the stage of the problem formulation. This PMP gives first order necessary conditions for optimality, and leads to two-point boundary value problems that may be solved by shooting techniques to arrive at optimal trajectories. We validate our theoretical results with a numerical experiment on the attitude control of a spacecraft on the Lie group $\SO(3)$.'
author:
- 'Shruti Kotpalliwar, Pradyumna Paruchuri, Karmvir Singh Phogat, Debasish Chatterjee, Ravi Banavar [^1]'
bibliography:
- 'references.bib'
title: '**A frequency-constrained geometric Pontryagin maximum principle on matrix Lie groups[^2]**'
---
Introduction {#sec:the intro}
============
Problem setup {#sec:the prob}
=============
Main result {#sec:main result}
===========
Proof of the main result {#sec:proof}
========================
Numerical experiments {#sec:numerical simulations}
=====================
[^1]: Emails: `{shruti, pradyumn, karmvir.p, chatterjee, banavar} @sc.iitb.ac.in`
[^2]: The authors are with Systems & Control Engineering, IIT Bombay, Powai, Mumbai 400076, India, and acknowledge the support of the grant 17ISROC001 from the Indian Space Research Organization.
---
abstract: 'Hardy space theory has been studied on manifolds or metric measure spaces equipped with either Gaussian or sub-Gaussian heat kernel behaviour. However, there are natural examples where one finds a mix of both behaviours (locally Gaussian and at infinity sub-Gaussian), in which case the previous theory doesn’t apply. Still, we define molecular and square function Hardy spaces using an appropriate scaling, and we show that they agree with the Lebesgue spaces in some range. Moreover, counterexamples are given in this setting showing that the $H^p$ space corresponding to Gaussian estimates may not coincide with $L^p$. As a motivation for this theory, we show that the Riesz transform maps our Hardy space $H^1$ into $L^1$.'
address: 'Li Chen, Mathematical Sciences Institute, The Australian National University, Canberra ACT 0200, Australia'
author:
- Li Chen
title: 'Hardy spaces on metric measure spaces with generalized sub-Gaussian heat kernel estimates'
---
Introduction
============
The study of Hardy spaces originated in the 1910s and at the very beginning was confined to Fourier series and complex analysis in one variable. Since the 1960s, it has been transferred to real analysis in several variables, or more generally to analysis on metric measure spaces. There are many different equivalent definitions of Hardy spaces, which involve suitable maximal functions, the atomic decomposition, the molecular decomposition, singular integrals, square functions, etc. See, for instance, the classical references [@FS72; @CW77; @CMS85; @St93].
More recently, a lot of work has been devoted to the theory of Hardy spaces associated with operators, see for example, [@AMR08; @HLMMY11; @U11; @AMM13] and the references therein.
In [@AMR08], Auscher, McIntosh and Russ studied Hardy spaces with respect to the Hodge Laplacian on Riemannian manifolds with the doubling volume property by using Davies-Gaffney type estimates. They defined Hardy spaces of differential forms of all degrees via molecules and square functions, on which the Riesz transform is $H^p$ bounded for $1\le p\le \infty$. Compared with the Lebesgue spaces, it holds that $H^p\subset L^p$ for $1\leq p\leq 2$ and $L^p \subset H^p$ for $p>2$. Moreover, under the assumption of a Gaussian heat kernel upper bound, $H^p$ coincides with $L^p$ for $1<p<\infty$.
In [@HLMMY11], Hofmann, Lu, Mitrea, Mitrea and Yan further developed the theory of $H^1$ and $BMO$ spaces adapted to a metric measure space $(M,d,\mu)$ with the volume doubling property endowed with a non-negative self-adjoint operator $L$, which generates an analytic semigroup $\{e^{-tL}\}_{t>0}$ satisfying the so-called Davies-Gaffney estimate: there exist $C,c>0$ such that for any open sets $U_1,U_2\subset M$, and for every $f_i \in L^2(M)$ with $\supp f_i \subset U_i$, $i=1,2$, $$\begin{aligned}
\label{DG-normal}
|<e^{-tL}f_1,f_2>| \leq C \exp\left(-\frac{\dist^2(U_1,U_2)}{ct}\right) \Vert f_1\Vert_{2} \Vert f_2\Vert_2,~\forall t>0,\end{aligned}$$ where $\dist(U_1,U_2):= \inf_{x\in U_1,y\in U_2}d(x,y)$. The authors extended results of [@AMR08] by obtaining an atomic decomposition of the $H^1$ space.
More generally, instead of (\[DG-normal\]), if $M$ satisfies the Davies-Gaffney estimate of order $m$ with $m\geq 2$: for all $x,y \in M$ and for all $t>0$, $$\begin{aligned}
\label{DGm}
{{\left\lVert{\mathbbm 1_{B(x,t^{1/m})} e^{-tL} \mathbbm 1_{B(y,t^{1/m})}}\right\rVert}}_{2\to2}
\leq C\exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{m}{m-1}}}\right)}}.\end{aligned}$$ Kunstmann and Uhl [@U11; @KU15] defined Hardy spaces via square functions and via molecules adapted to (\[DGm\]), and the two $H^1$ spaces are also equivalent. Here and in the sequel, $B(x,r)$ denotes the ball of centre $x\in M$ and radius $r>0$, and $V(x,r)=\mu(B(x,r))$. In addition, if the $L^{p_0}-L^{p'_0}$ off-diagonal estimate of order $m$ holds: for all $x,y \in M$ and for all $t>0$, $$\begin{aligned}
\label{DGp}
{{\left\lVert{\mathbbm 1_{B(x,t^{1/m})} e^{-tL} \mathbbm 1_{B(y,t^{1/m})}}\right\rVert}}_{p_0 \to p'_0}
\leq \frac{C}{V^{\frac{1}{p_0}-\frac{1}{p'_0}}(x,t^{1/m})} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{m}{m-1}}}\right)}}\end{aligned}$$ with $p_0'$ the conjugate of $p_0$, then the Hardy space $H^p$ defined via square functions coincides with $L^p$ for $p\in (p_0,2)$.
However, there are natural examples where one finds a mix of both behaviours (\[DG-normal\]) and (\[DGm\]), in which case the previous Hardy space theory doesn’t apply. For example, on fractal manifolds, the heat kernel behaviour is locally Gaussian and at infinity sub-Gaussian (see Section \[HK estimates\] for more details). We aim to develop a proper Hardy space theory for this setting. An important motivation for our Hardy space theory is to study the Riesz transform on fractal manifolds, where the weak type $(1,1)$ boundedness has recently been proved in a joint work by the author with Coulhon, Feneuil and Russ [@CCFR15].
In this paper, we work on metric measure spaces endowed with a non-negative self-adjoint operator, which satisfy the doubling volume property and an $L^2$ off-diagonal estimate with different local and global decay (see below). The specific description will be found below in Section \[setting\]. We define two classes of Hardy spaces in this setting, via molecules and via conical square functions, see Section \[definitions\]. Both definitions have the scaling adapted to the off-diagonal decay $(DG_{\rho})$.
In Section 3, we identify the two different $H^1$ spaces. The molecular $H^1$ spaces are always convenient spaces to deal with Riesz transform and other sub-linear operators, while the $H^p$, $p \ge1$, spaces defined via conical square functions possess certain good properties like real and complex interpolation. The identification of both spaces gives us a powerful tool to study the Riesz transform, Littlewood-Paley functions, boundary value problems for elliptic operators etc.
In Section 4, we compare the Hardy spaces defined via conical square functions with the Lebesgue spaces. Assuming further an $L^{p_0}-L^{p_0'}$ off-diagonal estimate for some $1\le p_0<2$ with different local and global decay for the heat semigroup, we show the equivalence of our $H^p$ spaces and the Lebesgue spaces $L^p$ for $p_0< p<p_0'$. We also justify that the scaling for the Hardy spaces is the right one, by disproving this equivalence of $H^p$ and $L^p$ for $p$ close to $2$ on some fractal Riemannian manifolds. As far as we know, no previous results are known in this direction.
In Section 5, we shall apply our theory to prove that the Riesz transform is $H^1-L^1$ bounded on fractal manifolds. The proof is inspired by [@CCFR15] (see [@Fe15]for the original proof in the discrete setting), where the integrated estimate for the gradient of the heat kernel plays a crucial role.
In the following, we will introduce our setting, the definitions and the main results more specifically.
[**Notation**]{} Throughout this paper, we denote $u\simeq v$ if $v\lesssim u$ and $u\lesssim v$, where $u\lesssim v$ means that there exists a constant $C$ (independent of the important parameters) such that $u\leq Cv$.
For a ball $B\subset M$ with radius $r>0$ and given $\alpha>0$, we write $\alpha B$ as the ball with the same centre and the radius $\alpha r$. We denote $C_1(B)=4B$, and $C_j(B)=2^{j+1} B\backslash 2^{j}B$ for $j\geq 2$.
The setting {#setting}
-----------
We shall assume that $M$ is a metric measure space satisfying the doubling volume property: for any $x\in M$ and $r>0$,
$$\begin{aligned}
\label{doubling}V(x,2r)\lesssim V(x,r)\tag{$D$}\end{aligned}$$ and the $L^2$ Davies-Gaffney estimate with different local and global decay for the analytic semigroup $\{e^{-tL}\}_{t>0}$ generated by the non-negative self-adjoint operator $L$, that is, $\forall x,y\in M$, $$\begin{aligned}
\label{DG}\tag{$DG_{\rho}$}
{{\left\lVert{\mathbbm 1_{B(x,t)} e^{-\rho(t)L} \mathbbm 1_{B(y,t)}}\right\rVert}}_{2\to 2}
\lesssim \left\{
\begin{aligned}
& \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_1}{\beta_1-1}}}\right)}} & 0<t<1, \\
& \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}}, & t\geq 1,
\end{aligned}\right.\end{aligned}$$ where $1<\beta_1\leq\beta_2$ and $$\begin{aligned}
\label{rho}
\rho(t)=\left\{ \begin{aligned}
&t^{\beta_1},&0<t<1, \\
& t^{\beta_2},&t\geq 1.
\end{aligned}\right.
\end{aligned}$$
Recall a simple consequence of $(D)$: there exists $\nu>0$ such that $$\begin{aligned}
\label{D1}
\frac{V(x,r)}{V(x,s)}\lesssim {{\left({\frac{r}{s}}\right)}}^\nu,\,\,\forall x\in M, \,r\geq s>0.\end{aligned}$$ It follows that $$V(x,r)\lesssim {{\left({1+\frac{d(x,y)}{r}}\right)}}^{\nu} V(y,r),\,\,\forall x\in M, \,r\geq s>0.$$ Therefore, $$\begin{aligned}
\label{D2}
\int_{d(x,y)<r}\frac{1}{V(x,r)}d\mu (x)\simeq 1,\,\,\forall y\in M, \,r>0.\end{aligned}$$ If $M$ is non-compact, we also have a reverse inequality of (\[D1\]) (see for instance [@Gr09 p. 412]). That is, there exists $\nu'>0$ such that $$\begin{aligned}
\label{revdb}
\frac{V(x,r)}{V(x,s)}\gtrsim \left(\frac{r}{s}\right)^{\nu'},
\,\,\forall x\in M, \,r\geq s>0.\end{aligned}$$
Also notice that, if necessary, we may smoothen $\rho(t)$ in (\[rho\]) as $$\rho(t)=\left\{ \begin{aligned}
&t^{\beta_1},&\text{if } 0<t\leq 1/2, \\
&\text{smooth part},&\text{if } 1/2<t<2, \\
&t^{\beta_2},&\text{if } t\geq 2;
\end{aligned}\right.$$ with $\rho'(t)\simeq 1$ for $1/2<t<2$, which we still denote by $\rho(t)$. Since $\frac{\rho'(t)}{\rho(t)} =\frac{\beta_1}{t}$ for $0<t\leq 1/2$ and $\frac{\rho'(t)}{\rho(t)} =\frac{\beta_2}{t}$ for $t\geq 2$, we have in a uniform way $$\begin{aligned}
\label{der}
\frac{\rho'(t)}{\rho(t)} \simeq \frac{1}{t}.\end{aligned}$$
We say that $M$ satisfies an $L^{p_0}-L^{p'_0}$ off-diagonal estimate for some $1<p_0<2$ if $$\begin{aligned}
\label{DG'}\tag{$DG_{\rho}^{p_0}$}
{{\left\lVert{\mathbbm 1_{B(x,t)} e^{-\rho(t)L} \mathbbm 1_{B(y,t)}}\right\rVert}}_{p_0\to p_0'}
\lesssim \left\{
\begin{aligned}
&\frac{1}{V^{\frac{1}{p_0}-\frac{1}{p_0'}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_1}{\beta_1-1}}}\right)}} & 0<t<1, \\
&\frac{1}{V^{\frac{1}{p_0}-\frac{1}{p_0'}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}}, & t\geq 1,
\end{aligned}\right.\end{aligned}$$ and a generalized pointwise sub-Gaussian heat kernel estimate if for all $x,y\in M$, $$\begin{aligned}
\label{ue}\tag{$U\!E_{\rho}$}
p_{\rho(t)}(x,y)
\lesssim \left\{
\begin{aligned}
&\frac{1}{V(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_1}{\beta_1-1}}}\right)}} & 0<t<1, \\
&\frac{1}{V(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}}, & t\geq 1,
\end{aligned}\right.\end{aligned}$$ Fractal manifolds provide examples satisfying $(U\!E_{\rho})$ with $\beta_1=2$ and $\beta_2>2$; see Section 2 below for more information.
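For orientation (this is only a consistency check with the classical setting), note that in the special case $\beta_1=\beta_2=2$ we have $\rho(t)=t^2$, and the substitution $s=t^2$ turns the above estimate into the usual Gaussian upper bound $$p_{s}(x,y) \lesssim \frac{1}{V(x,\sqrt{s})} \exp\left(-c\,\frac{d^2(x,y)}{s}\right),\qquad s>0.$$ The case $\beta_1=2<\beta_2$ thus corresponds to behaviour which is Gaussian at small scales and sub-Gaussian at large scales.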
Definitions
-----------
Recall that
\[mol\] Let $\varepsilon >0$ and let $K$ be an integer such that $K>\frac{\nu}{2\beta_1}$, where $\nu$ is as in (\[D1\]). A function $a\in L^2(M)$ is called a $(1,2,\varepsilon )-$molecule associated to $L$ if there exist a function $b \in \mathcal{D}(L)$ and a ball $B$ with radius $r_B$ such that
1. $a = L^K b$;
2. It holds that for every $k=0,1, \cdots, K$ and $i=0,1,2,\cdots$, we have $$\begin{aligned}
\label{molb}
\Vert(\rho (r_{B})L )^k b\Vert_{L^2(C_i(B))} \leq \rho^K(r_{B})2^{-i\varepsilon } V(2^i B)^{-1/2}.\end{aligned}$$
\[molh\] We say that $f=\sum_{n=0}^{\infty }\lambda _n a_n$ is a molecular $(1,2,\varepsilon )-$representation of $f$ if $(\lambda_n)_{n\in \mathbb{N}}\in l^1$, each $a_n$ is a molecule as above, and the sum converges in the $L^2$ sense. We denote the collection of all the functions with a molecular representation by $\mathbb{H}_{L,\rho,\mol}^1$, where the norm of $f\in \mathbb{H}_{L,\rho,\mol}^1$ is given by $$\Vert f\Vert_{\mathbb{H}_{L,\rho,\mol}^1(M)}=\inf \left \{ \sum_{n=0}^{\infty }|\lambda _n|:
f=\sum_{n=0}^{\infty }\lambda _n a_n \text{ is a molecular } (1,2,\varepsilon )-\text{representation} \right\}.$$ The Hardy space $H_{L,\rho,\mol}^1(M)$ is defined as the completion of $\mathbb{H}_{L,\rho,mol}^1(M)$ with respect to this norm.
Consider the following conical square function $$\begin{aligned}
\label{SFrho}
S_h^{\rho} f(x) ={{\left({\iint_{\Gamma (x)}|\rho(t)L e^{-\rho(t)L }f(y)|^2\frac{d\mu(y)}{V(x,t)}\frac{dt}{t}}\right)}}^{1/2},\end{aligned}$$ where the cone $\Gamma(x)=\{(y,t)\in M\times (0,\infty ): d(y,x)<t\}$.
We define first the $L^2(M)$ adapted Hardy space $H^2(M)$ as the closure of the range of $L $ in $L^2(M)$ norm, i.e., $H^2(M):=
\overline{R(L )}$.
The Hardy space $H_{L,S_h^{\rho}}^p(M)$, $p\geq 1$ is defined as the completion of the set $\{f\in H^2(M): \Vert S_h^{\rho} f\Vert_{L^p}<\infty \}$ with respect to the norm $\Vert S_h^{\rho} f\Vert_{L^p}$. The $H_{L,S_h^{\rho}}^p(M)$ norm is defined by $\Vert f\Vert_{H_{L,S_h^{\rho}}^p(M)}:=\Vert S_h^{\rho} f\Vert_{L^p(M)}$.
For $p=2$, the operator $S_h^{\rho}$ is bounded on $L^2(M)$. Indeed, for every $f\in L^2(M)$, $$\begin{aligned}
\label{L2}
\begin{split}
\Vert S_h^{\rho} f\Vert_{L^2(M)}^2
&= \int_M\iint_{\Gamma (x)} {{\left\lvert{\rho(t)L e^{-\rho(t)L }f(y)}\right\rvert}}^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu (x)
\\ &\simeq \iint_{M\times (0,\infty)} {{\left\lvert{\rho(t)L e^{-\rho(t)L }f(y)}\right\rvert}}^2 d\mu(y)\frac{dt}{t}
\\ & \simeq \iint_{M\times (0,\infty)} {{\left\lvert{\rho(t)L e^{-\rho(t)L }f(y)}\right\rvert}}^2 d\mu(y)\frac{\rho'(t)dt}{\rho(t)}
\\ & = \int_0^{\infty} <(\rho(t) L)^{2} e^{-2\rho(t)L }f,f> \frac{\rho'(t)dt}{\rho(t)}
\simeq \Vert f\Vert_{L^2(M)}^2.
\end{split}\end{aligned}$$ Note that the second step follows from the Fubini theorem and (\[D2\]). The third step is obtained by using the fact (\[der\]): $ \rho'(t)/\rho(t) \simeq 1/t$. The last one is a consequence of spectral theory.
\[rem:SF\] The above definitions are similar to those in [@HLMMY11] (also [@AMR08] for $1$-forms on Riemannian manifolds) and [@KU15; @U11]. The difference is that we replace $t^2$ or $t^m$ by $\rho(t)$ in (\[molb\]) and (\[SFrho\]).
In the case when $\rho(t)=t^2$, we denote $S_h^\rho$ by $S_h$, that is, $$\begin{aligned}
\label{SF-normal}
S_h f(x) :=\left(\iint_{\Gamma (x)}|t^2 L e^{-t^2 L }f(y)|^2\frac{d\mu(y)}{V(x,t)}\frac{dt}{t}\right)^{1/2}, \end{aligned}$$ and denote $H_{L,S_h^\rho}^p$ by $H_{L,S_h}^p$.
Main results
------------
We first obtain the equivalence between $H^1$ spaces defined via molecules and via square functions.
\[H1equiv\] Let $M$ be a metric measure space satisfying the doubling volume property and the $L^2$ off-diagonal heat kernel estimate $(DG_{\rho})$. Then $H_{L,\rho,\mol}^1(M)=H_{L,S_h^{\rho}}^1(M)$, which we denote by $H_{L,\rho}^1(M)$. Moreover, $$\Vert f\Vert_{H_{L,\rho,\mol}^1(M)} \simeq \Vert f\Vert_{H_{L,S_h^{\rho}}^1(M)}.$$
Now compare $H_{L,S_h^{\rho}}^p(M)$ and $L^p$ for $1< p < \infty$.
Recall that on a Riemannian manifold satisfying the doubling volume property and the Gaussian upper bound for the heat kernel of the operator, we have $H_{L,S_h}^p(M)=L^p(M)$, $1< p <\infty$; see for example [@AMR08 Theorem 8.5] for Hardy spaces of $0-$forms on Riemannian manifolds. However, in general, the equivalence is not known. It is also proved in [@KU15; @U11] that if the $L^{p_0}-L^{p'_0}$ off-diagonal estimate of order $m$ holds, then the Hardy space $H_{S_h^m}^p$ (see Remark \[rem:SF\]) coincides with $L^p$ for $p\in (p_0,2)$.
Our result in this direction is the following:
\[equihl\] Let $M$ be a non-compact metric measure space as above. Let $1\leq p_0<2$ and $\rho$ be as above. Suppose that $M$ satisfies $(D)$ and $(DG_{\rho}^{p_0})$. Then $H_{L,S_h^{\rho}}^p(M)=L^p(M)$ for $p_0<p<p_0'$.
If one assumes the pointwise heat kernel estimate, then Theorems 1.5 and 1.6 yield the following.
Let $M$ be a non-compact metric measure space satisfying the doubling volume property and the pointwise heat kernel estimate $(U\!E_{\rho})$. Then $H_{L,\rho,\mol}^1(M)=H_{L,S_h^{\rho}}^1(M)$, and $H_{L,S_h^{\rho}}^p(M)=L^p(M)$ for $1<p<\infty$.
In the following theorem, we show that for $1<p<2$, the equivalence may not hold between $L^p$ and the $H^p$ space defined via the conical square function $S_h$ with scaling $t^2$. The counterexamples we find are certain Riemannian manifolds satisfying the doubling volume property and a two-sided sub-Gaussian heat kernel estimate, namely $(U\!E_{\rho})$ and its reverse, with $\beta_1=2$ and $\beta_2=m>2$. Notice that in this case, $L$ is the non-negative Laplace-Beltrami operator, which we denote by $\Delta$. For simplicity, we denote this upper estimate by $(U\!E_{2,m})$ and the two-sided estimate by $(H\!K_{2,m})$. Also, we denote by $H_{\Delta,m,mol}^1$ the $H^1$ space $H_{L,\rho,mol}^1$ defined via molecules, and by $H_{\Delta,S_h^m}^{p}$ the $H^p$ space $H_{L,S_h^{\rho}}^{p}$ defined via square functions.
\[noequiv\] Let $M$ be a Riemannian manifold with polynomial volume growth $$\begin{aligned}
\label{d}
V(x,r) \simeq r^d,\,\, r\geq 1,\end{aligned}$$ as well as two-sided sub-Gaussian heat kernel estimate ($H\!K_{2,m}$) with $2<m<d/2$, that is, $(U\!E_{2,m})$ and the matching lower estimate. Then $$L^p(M)\subset H_{\Delta,S_h}^p(M)$$ doesn’t hold for $p\in {{\left({\frac{d}{d-m},2}\right)}}$.
As an application of this Hardy space theory, we have
\[thm2\] Let $M$ be a manifold satisfying the doubling volume property and the heat kernel estimate $(U\!E_{2,m})$, $m>2$, that is, the upper bound of $(H\!K_{2,m})$. Then the Riesz transform $\nabla \Delta^{-1/2}$ is $H_{\Delta,m}^1-L^1$ bounded.
Recall that under the same assumptions, it is proved in [@CCFR15] that the Riesz transform is of weak type $(1,1)$ and thus $L^p$ bounded for $1<p<2$.
Preliminaries
=============
More about sub-Gaussian off-diagonal and pointwise heat kernel estimates {#HK estimates}
------------------------------------------------------------------------
Let us first give some examples that satisfy $(U\!E_{\rho})$ with $\beta_1\neq \beta_2$. More examples of this kind are metric measure Dirichlet spaces, for which we refer to [@Ba13; @Stu95; @Stu94; @HSC01] for details.
\[fm\] Fractal manifolds.
Fractal manifolds are built from graphs with a self-similar structure at infinity by replacing the edges of the graph with tubes of length $1$ and then gluing the tubes together smoothly at the vertices. For instance, see [@BCG01] for the construction of Vicsek graphs. For any $D,m\in \R$ such that $D> 1$ and $2< m\leq D+1$, there exist complete connected Riemannian manifolds satisfying $V(x,r) \simeq r^D$ for $r\geq 1$ and $(U\!E_{\rho})$ with $\beta_1=2$ and $\beta_2=m>2$ (see [@Ba04] and [@CCFR15]).
\[cs\] Cable systems (Quantum graphs) (see [@V85], [@BB04 Section 2]).
Given a weighted graph $(G,E,\nu)$, we define the cable system $G_C$ by replacing each edge of $G$ by a copy of $(0,1)$ joined together at the vertices. The measure $\mu$ on $G_C$ is given by $d\mu(t)=\nu_{xy} dt$ for $t$ in the cable connecting $x$ and $y$, and $\mu$ assigns no mass to any vertex. The distance between two points $x$ and $y$ is given as follows: if $x$ and $y$ are on the same cable, the length is just the usual Euclidean distance $|x-y|$. If they are on different cables, then the distance is $\min\{|x-z_x|+d(z_x,z_y)+|z_y-y|\}$ ($d$ is the usual graph distance), where the minimum is taken over all vertices $z_x$ and $z_y$ such that $x$ is on a cable with one end at $z_x$ and $y$ is on a cable with one end at $z_y$. One takes as the core $\mathcal C$ the functions in $C(G_C)$ which have compact support and are $C^1$ on each cable, and sets $$\mathcal E(f,f):=\int_{G_C} {{\left\lvert{f'(t)}\right\rvert}}^2 d\mu(t).$$ Let $L$ be the non-negative self-adjoint operator associated with $\mathcal E$ and $\{e^{-tL}\}_{t>0}$ be the generated semigroup. Then the associated kernel may satisfy $(U\!E_{\rho})$. For example, the cable system associated with the Sierpinski gasket graph (in $\mathbb Z^2$) satisfies $(U\!E_{2,\log 5/\log2})$.
The following are some useful lemmas for the off-diagonal estimates. We first observe that $(U\!E_{\rho})$ $\Rightarrow$ $(DG_{\rho}^{p_0})$ $\Rightarrow$ $(DG_{\rho})$ for $1\leq p_0\leq 2$. Indeed,
\[BK’\] Let $(M,d,\mu)$ be a metric measure space satisfying the doubling volume property. Let $L$ be a non-negative self-adjoint operator on $L^2(M,\mu)$. Assume that $(DG_{\rho}^{p_0})$ holds. Then for all $p_0\leq u \leq v \leq p_0'$, we have $$\begin{aligned}
{{\left\lVert{\mathbbm 1_{B(x,t)} e^{-\rho(t)L} \mathbbm 1_{B(y,t)}}\right\rVert}}_{u\to v}
\lesssim \left\{
\begin{aligned}
&\frac{1}{V^{\frac{1}{u}-\frac{1}{v}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_1}{\beta_1-1}}}\right)}} & 0<t<1, \\
&\frac{1}{V^{\frac{1}{u}-\frac{1}{v}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}}, & t\geq 1.
\end{aligned}\right.\end{aligned}$$
The estimate $(DG_{\rho}^{p_0})$ is equivalent to the $L^{p_0}-L^{2}$ off-diagonal estimate $${{\left\lVert{\mathbbm 1_{B(x,t)} e^{-\rho(t)L} \mathbbm 1_{B(y,t)}}\right\rVert}}_{p_0\to 2}
\lesssim \left\{
\begin{aligned}
&\frac{1}{V^{\frac{1}{p_0}-\frac{1}{2}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_1}{\beta_1-1}}}\right)}} & 0<t<1, \\
&\frac{1}{V^{\frac{1}{p_0}-\frac{1}{2}}(x,t)} \exp{{\left({-c{{\left({\frac{d(x,y)}{t}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}}, & t\geq 1.
\end{aligned}\right.$$ We refer to [@BK05; @CS08] for the proof.
In fact, we also have
\[BK\] Let $(M,d,\mu)$ satisfy $(D)$. Let $L$ be a non-negative self-adjoint operator on $L^2(M,\mu)$. Assume that $(DG_{\rho}^{p_0})$ holds. Then for all $p_0\leq u \leq v \leq p_0'$ and $k \in \mathbb N$, we have
1. For any ball $B\subset M$ with radius $r>0$, and any $i\geq 2$, $$\begin{aligned}
\label{DG2'}
{{\left\lVert{\mathbbm 1_{B}(tL)^k e^{-tL} \mathbbm 1_{C_i(B)}}\right\rVert}}_{u\to v},
{{\left\lVert{\mathbbm 1_{C_i(B)}(tL)^k e^{-tL} \mathbbm 1_{B}}\right\rVert}}_{u\to v} \lesssim
\left\{\begin{aligned}
&\frac{2^{i\nu}}{\mu^{\frac1{u}-\frac1{v}}(B)} e^{-c{{\left({\frac{2^{i\beta_1} r^{\beta_1}}{t}}\right)}}^{1/(\beta_1-1)}} & 0<t<1, \\
&\frac{2^{i\nu}}{\mu^{\frac1{u}-\frac1{v}}(B)} e^{-c{{\left({\frac{2^{i\beta_2} r^{\beta_2}}{t}}\right)}}^{1/(\beta_2-1)}}, & t\geq 1.
\end{aligned}\right.\end{aligned}$$
2. For all $\alpha,\beta\geq 0$ such that $\alpha+\beta=\frac1{u}-\frac1{v}$, $${{\left\lVert{V^{\alpha}(\cdot,t) (\rho(t)L)^k e^{-\rho(t)L} V^{\beta}(\cdot, t)}\right\rVert}}_{u\to v} \leq C.$$
Tent spaces
-----------
We recall definitions and properties related to tent spaces on metric measure spaces with the doubling volume property, following [@CMS85], [@Ru07].
Let $M$ be a metric measure space satisfying . For any $x\in M$ and for any closed subset $F \subset M$, a saw-tooth region is defined as $\mathcal R(F) :=\bigcup _{x\in F}
\Gamma(x)$. If $O$ is an open subset of $M$, then the “tent" over $O$, denoted by $\widehat{O}$, is defined as $$\widehat{O}:=[\mathcal R(O^c)]^c=\{(x,t)\in M \times (0,\infty ): d(x,O^c)\geq t\}.$$
For a measurable function $F$ on $M\times (0,\infty )$, consider $$\mathcal A F(x)
=\left(\iint_{\Gamma (x)}|F(y,t)|^2\frac{d\mu(y)}{V(x,t)}\frac{dt}{t}\right)^{1/2}.$$ Given $0 < p < \infty $, say that a measurable function $F\in T_2^p(M\times (0,\infty ))$ if $$\Vert F\Vert_{T_2^p(M)}:=\Vert \mathcal A F\Vert_{L^p(M)}<\infty.$$ For simplicity, we denote $T_2^p(M\times (0,\infty) )$ by $T_2^p(M)$ from now on.
Therefore, for $f\in H_{L,S_h^{\rho}}^p(M)$ and $0<p<\infty$, writing $ F(y,t)=\rho(t) L e^{-\rho(t)L }f(y)$, we have $$\Vert f\Vert_{H_{L,S_h^{\rho}}^p(M)}=\Vert F\Vert_{T_2^p(M)}.$$
Consider another functional $$\mathcal CF(x)
=\sup_{x\in B}\left(\iint_{\widehat B}|F(y,t)|^2\frac{d\mu(y)dt}{t}\right)^{1/2},$$ we say that a measurable function $F\in T_2^{\infty}(M)$ if $\mathcal C F\in L^{\infty}(M)$.
Suppose $1<p<\infty $, let $p'$ be the conjugate of $p$. Then the pairing $<F,G>\longrightarrow \int_{M\times (0,\infty )}F(x, t)G(x,t)\frac{d\mu (x)dt}{t}$ realizes $T_2^{p'}(M)$ as the dual of $T_2^p(M)$.
Denote by $[\,,]_\theta $ the complex method of interpolation described in [@BL76]. Then we have the following result of interpolation of tent spaces, where the proof can be found in [@Am14].
\[inter\] Suppose $1\leq p_0 < p < p_1\leq \infty $, with $1/p = (1 - \theta )/p_0 +
\theta /p_1$ and $0 < \theta < 1$. Then $$[T_2^{p_0}(M),T_2^{p_1}(M)]_\theta =T_2^{p}(M).$$
Next we review the atomic theory for tent spaces which was originally developed in [@CMS85], and extended to the setting of spaces of homogeneous type in [@Ru07].
A measurable function $A$ on $M \times (0,\infty )$ is said to be a $T_2^1-$atom if there exists a ball $B \in M$ such that $A$ is supported in $\widehat{B}$ and $$\int_{M\times (0,\infty )}|A(x, t)|^2 d\mu(x)\frac{dt}{t} \leq \mu^{-1}(B).$$
\[decomp1\] For every element $F\in T_2^1 (M)$ there exist a sequence of numbers $\{\lambda _j\}_{j=0}^{\infty }\in l^1$ and a sequence of $T_2^1-$atoms $\{A_j\}_{j=0}^{\infty }$ such that $$\begin{aligned}
\label{convg}
F =\sum_{j=0}^{\infty }\lambda _j A_j \text{ in } T_2^1 (M) \text{ and a.e. in } M \times (0,\infty ).\end{aligned}$$ Moreover, $\sum_{j=0}^{\infty }\lambda _j \approx \Vert F\Vert_{T_2^1(M)}$, where the implicit constants depend only on the homogeneous space properties of $M$.
Finally, if $F \in T_2^1(M) \cap T_2^2(M)$, then the decomposition also converges in $T_2^2(M)$.
The molecular decomposition
===========================
In this section, we shall prove Theorem \[H1equiv\]. That is, under the assumptions $(D)$ and $(DG_{\rho})$, the two $H^1$ spaces $H_{L,\rho,\mol}^1(M)$ and $H_{L, S_h^{\rho}}^{1}(M)$ are equivalent. We denote $$H_{L,\rho}^{1}(M):=H_{L, S_h^{\rho}}^{1}(M)= H_{L,\rho,\mol}^1(M).$$
Since $H_{L,\rho,\mol}^1(M)$ and $H_{L,S_h^{\rho}}^1(M)$ are completions of $\mathbb H_{L,\rho,\mol}^1(M)$ and $H_{L,S_h^{\rho}}^1(M)\cap H^2(M)$, it is enough to show $\mathbb H_{L,\rho,\mol}^1(M)= H_{L,S_h^{\rho}}^1(M)\cap H^2(M)$ with equivalent norms. In the following, we will prove the two inclusions separately. Before proceeding to the proof, we first record the lemma below, which is used to prove the $H_{L,\rho,\mol}^1(M)-L^1(M)$ boundedness of an operator and is an analogue of Lemma 4.3 in [@HLMMY11].
\[crit\] Assume that $T$ is a linear operator, or a nonnegative sublinear operator, satisfying the weak-type $(2,2)$ bound $$\begin{aligned}
\mu\left(\left\{x\in M: |Tf(x)| >\eta \right\}\right) \lesssim \eta^{-2} \Vert f\Vert_2^2, ~~\forall \eta>0\end{aligned}$$ and that for every $(1,2,\varepsilon)-$molecule $a$, we have $$\begin{aligned}
\label{ta}
\Vert Ta\Vert_{L^1}\leq C,\end{aligned}$$ with constant $C$ independent of $a$. Then $T$ is bounded from $\mathbb H_{L,\rho,\mol}^1(M)$ to $L^1(M)$ with $$\Vert Tf\Vert_{L^1}\lesssim \Vert f\Vert_{\mathbb H_{L,\,\rho,\mol}^1(M)}.$$ Consequently, by density, $T$ extends to be a bounded operator from $H_{L,\rho,\mol}^1(M)$ to $L^1(M)$.
For the proof, we refer to [@HLMMY11], which is also applicable here.
The inclusion $\mathbb H_{L,\,\rho,\mol}^1(M) \subseteq H_{L,\,S_h^{\rho}}^1(M)\cap H^2(M)$.
--------------------------------------------------------------------------------------------
We have the following theorem:
Let $M$ be a metric measure space satisfying the doubling volume property and the heat kernel estimate $(DG_{\rho})$. Then $\mathbb H_{L,\,\rho,\mol}^1(M) \subseteq H_{L,\,S_h^{\rho}}^1(M)\cap H^2(M)$ and $$\Vert f\Vert_{H_{L,\,S_h^{\rho}}^1(M)} \leq C \Vert f\Vert_{\mathbb H_{L,\,\rho,\mol}^1(M)}.$$
First observe that $\mathbb H_{L,\,\rho,\mol}^1(M) \subseteq H^2(M)$. Indeed, by Definition \[mol\], any $(1,2,\varepsilon)$-molecule belongs to $R(L)$. Thus any finite linear combination of molecules belongs to $R(L)$. Since $f \in \mathbb H_{L,\,\rho,\mol}^1(M)$ is the $L^2(M)$ limit of finite linear combinations of molecules, we get $f\in\overline{R(L)} = H^2(M)$.
It remains to show $\mathbb H_{L,\,\rho,\mol}^1(M) \subseteq H_{L,\,S_h^{\rho}}^1(M)$, that is, $S_h^{\rho}$ is bounded from $\mathbb H_{L,\,\rho,\mol}^1(M)$ to $L^1(M)$. Since $S_h^{\rho}$ is $L^2$ bounded by spectral theory (see (\[L2\])), it follows from Lemma \[crit\] that it suffices to prove that, for any $(1,2,\varepsilon )$-molecule $a$, there exists a constant $C$ such that $\Vert S_h^{\rho} a\Vert_{L^1(M)}\leq C$. In other words, one needs to prove $\Vert A \Vert_{T_2^1(M)}\leq C$, where $$A(y,t)=\rho(t) L e^{-\rho(t)L }a(y).$$
Assume that $a$ is a $(1,2,\varepsilon)$-molecule related to a function $b$ and a ball $B$ with radius $r$, that is, $a=L^K b$ and for every $k=0,1,\cdots, K$ and $i=0,1,2,\cdots$, it holds that $$\Vert(\rho(r)L )^k b\Vert_{L^2(C_i(B))} \leq \rho^K(r)2^{-i\varepsilon} \mu(2^i B)^{-1/2}.$$
Similarly as in [@AMR08], we divide $A$ into four parts: $$\begin{aligned}
A &= \mathbbm 1_{2B\times (0,2r)}A +\sum_{i\geq 1}\mathbbm 1_{C_{i}(B)\times (0,r)}A
+\sum_{i\geq 1}\mathbbm 1_{C_{i}(B)\times (r,2^{i+1}r)}A
+\sum_{i\geq 1}\mathbbm 1_{2^i B\times (2^i r,2^{i+1}r)}A
\\ &=: A_0+A_1+A_2+A_3.\end{aligned}$$ Here $\mathbbm 1 $ denotes the characteristic function and $C_i(B)=2^{i+1}B \backslash 2^i(B)$, $i\geq 1$. It suffices to show that for every $j=0,1,2,3$, we have $\Vert A_j\Vert_{T_2^1}\leq C$.
Firstly consider $A_0$. Observe that $$\mathcal A(A_0)(x)
={{\left({\iint_{\Gamma (x)}{{\left\lvert{\mathbbm 1_{2 B\times (0,2r)}(y,t)A(y,t)}\right\rvert}}^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}}\right)}}^{1/2}$$ is supported on $4B$. Indeed, denote by $x_B$ be the center of $B$, then $d(x,x_B)\leq d(x,y)+d(y,x_B)\leq 4r$. Also, it holds that $$\begin{aligned}
{{\left\lVert{A_0}\right\rVert}}_{T_2^2(M)}^2 &= {{\left\lVert{\mathcal A(A_0)}\right\rVert}}_{2}^2
\leq
\int_{M} \iint_{\Gamma (x)} {{\left\lvert{\rho(t) L e^{-\rho(t)L }a(y)}\right\rvert}}^2 \frac{d\mu (y)}{V(x,t)}\frac{dt}{t}d\mu(x)
\\ &\lesssim
\Vert a \Vert_{L^2(M)}^2
\lesssim \mu^{-1}(B).\end{aligned}$$ Here the second and the third inequalities follow from (\[L2\]) and the definition of molecules, respectively. Now applying the Cauchy-Schwarz inequality, we get $$\Vert A_0 \Vert_{T_2^1(M)}\leq \Vert A \Vert_{T_2^2(M)} \mu(4B)^{1/2}\leq C.$$
Secondly for $A_1$. For each $i\geq 1$, we have $\supp \mathcal A(\mathbbm 1 _{C_{i}(B)\times (0,r)}A)\subset 2^{i+2}B$. In fact, $d(x,x_B)\leq d(x,y)+d(y,x_B) \leq t+2^{i+1}r < 2^{i+2}r$. Then $$\begin{aligned}
{{\left\lVert{\mathbbm 1_{C_{i}(B)\times (0,r)}A}\right\rVert}}_{T_2^2}
&= {{\left\lVert{\mathcal A(\mathbbm 1_{C_{i}(B)\times (0,r)}A)}\right\rVert}}_{2}
\\ &\leq
{{\left({\int_{2^{i+2}B} \iint_{\Gamma(x)} {{\left\lvert{\mathbbm 1_{C_{i}(B)\times (0,r_B)}(y,t) \rho(t)L e^{-\rho(t)L} a(y)}\right\rvert}}^2
\frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu(x)}\right)}}^{1/2}
\\ &\leq
{{\left({\int_0^{r} \int_{C_{i}(B)} {{\left\lvert{\rho(t)L e^{-\rho(t)L }a(y)}\right\rvert}}^2 d\mu(y)\frac{dt}{t}}\right)}}^{1/2}
\\ &\leq
\sum_{l=0}^{\infty } {{\left({\int_0^{r} \int_{C_{i}(B)} {{\left\lvert{\rho(t)L e^{-\rho(t)L } \mathbbm 1_{C_l(B)}a(y)}\right\rvert}}^2 d\mu(y)\frac{dt}{t}}\right)}}^{1/2}
\\ &=:
\sum_{l=0}^{\infty } I_l.\end{aligned}$$
We estimate $I_l$ for $|i-l|>3$ and $|i-l|\leq 3$ separately. Firstly, assume that $|i-l|\leq 3$. Using (\[L2\]) again, we have $$I_l^2 \leq
\int_0^\infty \int_{M}{{\left\lvert{\rho (t)L e^{-\rho(t)L} \mathbbm 1_{C_l(B)} a(y)}\right\rvert}}^2 d\mu (y)\frac{dt}{t}
\lesssim \Vert a \Vert_{L^2(C_l(B))}^2
\lesssim 2^{-2 i\varepsilon} \mu^{-1}(2^i B).$$
Assume now $|i-l|>3$. Note that $\dist(C_l(B),C_{i}(B))\geq c 2^{\max\{l,i\}}r_B \geq c 2^{i}r_B$. Then it follows from Lemma \[BK\] that $$\begin{aligned}
\label{Il}
\begin{split}
I_l^2 &\leq
\int_0^{r} \exp{{\left({-c{{\left({\frac{\rho(2^i r)}{\rho(t)}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}} {{\left\lVert{a}\right\rVert}}_{L^2(C_l(B))}^2 d\mu(y)\frac{dt}{t}
\\ &\lesssim
2^{-2l\varepsilon}\mu^{-1}(2^l B) \int_0^{r}{{\left({\frac{\rho(t)}{\rho(2^i r)}}\right)}}^{c} \frac{dt}{t}
\lesssim
2^{-ci} 2^{-2l\varepsilon}\mu^{-1}(2^i B).
\end{split}\end{aligned}$$ The last inequality follows from (\[D1\]).
It follows from above that $$\begin{aligned}
{{\left\lVert{\mathbbm 1_{C_{i}(B)\times (0,r)}A}\right\rVert}}_{T_2^2}
\lesssim
\sum_{l:|l-i|\leq 3}2^{-i\varepsilon}\mu^{-1/2}(2^i B)+\sum_{l:|l-i|>3}2^{-ic} 2^{-l\varepsilon}\mu^{-1/2}(2^i B)
\lesssim 2^{-ic}\mu^{-1/2}(2^i B),\end{aligned}$$ where $c$ depends on $\varepsilon, M$. Therefore $${{\left\lVert{A_1}\right\rVert}}_{T_2^1}
\leq \sum_{i\geq 1} {{\left\lVert{\mathbbm 1_{C_{i}(B)\times (0,r)}A}\right\rVert}}_{T_2^2}\mu^{1/2}(2^{i+2}B)
\lesssim \sum_{i\geq 1}2^{-ic}\leq C.$$
We estimate $A_2$ in a similar way as before except that we replace $a$ by $L^K b$. Note that for each $i\geq 1$, we have $\supp \mathcal A(\mathbbm 1_{C_{i}(B)\times (r,2^{i+1}r)}A)\subset 2^{i+2}B$. Indeed, $$d(x,x_B)\leq d(x,y)+d(y,x_B) \leq t+2^{i+1}r \leq 2^{i+2}r.$$ Then $$\begin{aligned}
{{\left\lVert{\mathbbm 1_{C_{i}(B)\times (r,2^{i+1}r)}A}\right\rVert}}_{T_2^2}
&= {{\left\lVert{\mathcal A(\mathbbm 1_{C_{i}(B)\times (r,2^{i+1}r)}A)}\right\rVert}}_{2}
\\ &\leq
{{\left({\int_{2^{i+2} B} \iint_{\Gamma (x)}{{\left\lvert{\mathbbm 1_{C_i(B)\times (r,2^i r)}(y,t)A(y,t)}\right\rvert}}^2
\frac{d\mu(y)dt}{V(x,t)t}d\mu(x)}\right)}}^{1/2}
\\ &\leq
{{\left({\int_{r}^{2^{i+1}r} \int_{C_i(B)} {{\left\lvert{(\rho(t)L)^{K+1} e^{-\rho(t)L}b(y)}\right\rvert}}^2 d\mu(y)\frac{dt}{t\rho^{2K}(t)}}\right)}}^{1/2}
\\ &\leq
{{\left({\sum_{l=0}^{\infty}\int_{r}^{2^{i+1}r} \int_{C_i(B)}{{\left\lvert{(\rho(t)L)^{K+1} e^{-\rho(t)L}\mathbbm 1_{C_l(B)}b(y)}\right\rvert}}^2
d\mu(y)\frac{dt}{t\rho^{2K}(t)}}\right)}}^{1/2}
\\ &=:
\sum_{l=0}^{\infty } J_l\end{aligned}$$ When $|i-l|\leq 3$, by the spectral theorem we get $J_l^2 \leq C 2^{-2i\varepsilon }V^{-1}(2^i B)$. And when $|i-l|>3$, it holds that $\dist(C_l(B),C_{i}(B))\geq c 2^{\max\{l,i\}}r \geq c 2^{i}r$. Then we estimate $J_l$ in the same way as for (\[Il\]): $$\begin{aligned}
J_l^2
&\leq
\int_{r}^{2^{i+1}r} \exp{{\left({-c{{\left({\frac{\rho(2^i r)}{\rho(t)}}\right)}}^{\frac{\beta_2}{\beta_2-1}}}\right)}} \Vert b\Vert_{L^2(C_l(B))}^2 d\mu(y) \frac{d t}{t\rho^{2K}(t)}
\\ &\leq
\rho^{2K}(r)2^{-2 l\varepsilon}\mu^{-1}(2^{l+1}B)\int_{r}^{2^{i+1}r} {{\left({\frac{\rho(t)}{\rho(2^i r)}}\right)}}^{c} \frac{dt}{t\rho^{2K}(t)}
\\ &\lesssim
2^{-ic}2^{-l(2\varepsilon+\nu)}\mu^{-1}(2^{i}B).\end{aligned}$$ Here $c$ in the second and the third lines are different. We can carefully choose $c$ in the second line to make sure that $c$ in the third line is positive. Hence $${{\left\lVert{\mathbbm 1_{C_i(B)\times (r,2^i r)}A}\right\rVert}}_{T_2^2}^2 \lesssim 2^{-ic}\mu^{-1}(2^i B),$$ and $${{\left\lVert{A_2}\right\rVert}}_{T_2^1}
\leq \sum_{i\geq 1} {{\left\lVert{\mathbbm 1_{C_i(B)\times(r,2^i r)}A}\right\rVert}}_{T_2^2}\mu^{1/2}(2^{i+2}B)
\lesssim \sum_{i\geq 1}2^{-ic/2}\leq C.$$
It remains to estimate the last term $A_3$. For each $i\geq 1$, we still have $$\supp \mathcal A(\mathbbm 1_{2^i B\times (2^i r,2^{i+1}r)}A)\subset 2^{i+2}B.$$ Then we obtain as before that $$\begin{aligned}
{{\left\lVert{\mathbbm 1_{2^i B\times (2^i r,2^{i+1}r)}A}\right\rVert}}_{T_2^2}
&= {{\left\lVert{\mathcal A(\mathbbm 1_{2^i B\times (2^i r,2^{i+1}r)} A)}\right\rVert}}_{2}
\\ &\leq
{{\left({\int_{2^{i+2}B} \iint_{\Gamma (x)}{{\left\lvert{\mathbbm 1_{2^i B\times (2^i r_B,2^{i+1}r)}(y,t)A(y,t)}\right\rvert}}^2
\frac{d\mu(y) dt}{V(x,t)t}d\mu (x)}\right)}}^{1/2}
\\ &\leq
{{\left({\int_{2^i r}^{2^{i+1}r} \int_{2^i B}{{\left\lvert{(\rho(t) L)^{K+1} e^{-\rho(t)L }b(y)}\right\rvert}}^2\frac{d\mu(y)dt}{t\rho^{2K}(t)}}\right)}}^{1/2}
\\ &\leq
\sum_{l=0}^{\infty} {{\left({\int_{2^i r}^{2^{i+1}r} \int_{2^i B}{{\left\lvert{(\rho(t) L)^2
e^{-\rho(t)L} \mathbbm 1_{C_l(B)}b(y)}\right\rvert}}^2\frac{d\mu(y)dt}{t\rho^{2K}(t)}}\right)}}^{1/2}
\\& =: \sum_{l=0}^{\infty} K_l.\end{aligned}$$ In fact, due to the doubling volume property, as well as the definition of molecules, we get $$\begin{aligned}
K_l^2 &\leq
\int_{2^i r}^{2^{i+1}r} {{\left\lVert{\mathbbm 1_{C_l(B)}b}\right\rVert}}_{L^2}^2 \frac{dt}{t\rho^{2K}(t)}
\lesssim
\rho^{2K}(r) 2^{-2l\varepsilon}\mu^{-1}(2^l B)\int_{2^i r}^{2^{i+1}r}\frac{dt}{t\rho^{2K}(t)}
\\ &\lesssim 2^{-2l\varepsilon}2^{-ic}\mu^{-1}(2^{i}B).\end{aligned}$$
Hence $$\Vert A_3\Vert_{T_2^1}
\leq \sum_{i\geq 1} \Vert \mathbbm 1 _{2^i B\times (2^i r,2^{i+1}r)} A\Vert_{T_2^2}\mu^{1/2}(2^{i+2}B)
\lesssim \sum_{i\geq 1}2^{-2i}\leq C.$$
This finishes the proof.
The inclusion $H_{L,\,S_h^{\rho}}^1(M) \cap H^2(M) \subseteq \mathbb H_{L,\,\rho,\mol}^1(M)$.
---------------------------------------------------------------------------------------------
We closely follow the proof of Theorem 4.13 in [@HLMMY11] and get
\[cnt\] Let $M$ be a metric measure space satisfying $(D)$ and $(DG_{\rho})$. If $f \in H_{L ,S_h^{\rho}}^1(M)\cap H^2(M)$, then there exist a sequence of numbers $\{\lambda _j\}_{j=0}^{\infty }\subset l^1$ and a sequence of $(1,2,\varepsilon )-$molecules $\{a_j\}_{j=0}^{\infty }$ such that $f$ can be represented in the form $f =\sum_{j=0}^{\infty }\lambda _j a_j$, with the sum converging in $L^2(M)$, and $$\Vert f\Vert_{\mathbb H_{L,\,\rho,\mol}^1(M)}\leq C\sum_{j=0}^{\infty }\lambda _j
\leq C \Vert f\Vert_{H_{L,\,S_h^{\rho}}^1(M)},$$ where $C$ is independent of $f$. In particular, $H_{L,\,S_h^{\rho}}^1(M)\cap H^2(M)\subseteq \mathbb H_{L,\,\rho,\mol}^1(M)$.
For $f \in H_{L,\,S_h^{\rho}}^1(M)\cap H^2(M)$, denote $F(x,t)=\rho(t)L e^{-\rho(t)L}f(x)$. Then by the definition of $H_{L ,S_h^{\rho}}^1(M)$, we have $F\in T_2^1(M) \cap
T_2^2(M)$.
From Theorem \[decomp1\], we decompose $F$ as $F =\sum_{j=0}^{\infty }\lambda _j A_j$, where $\{\lambda _j\}_{j=0}^{\infty }\in l^1$, $\{A_j\}_{j=0}^{\infty }$ is a sequence of $T_2^1-$atoms supported in a sequence of sets $\{\widehat B_j\}_{j=0}^{\infty }$, and the sum converges in both $T_2^1(M)$ and $T_2^2(M)$. Also $$\sum_{j=0}^{\infty }\lambda _j \lesssim \Vert F\Vert_{T_2^1(M)}= \Vert f\Vert_{H_{L ,S_h^{\rho}}^1(M)}.$$
For $f\in H^2(M)$, by functional calculus, we have the following “Calderón reproducing formula" $$f = C\int_0^{\infty } (\rho(t)L)^{K+1} e^{-2\rho (t)L}f \frac{\rho'(t)dt}{\rho(t)}
= C\int_0^{\infty } (\rho(t)L)^K e^{-\rho(t)L}F(\cdot ,t) \frac{\rho'(t)dt}{\rho(t)}
=: C \pi _{h,L} (F).$$
Denote $a_j=C\pi _{h,L} (A_j)$; then $f =\sum_{j=0}^{\infty }\lambda _j a_j$. Note that for $F\in T_2^2(M)$, we have $\Vert \pi _{h,L} (F)\Vert_{L^2(M)}\leq C \Vert F\Vert_{T_2^2(M)}$. Thus we learn from Lemma 4.12 in [@HLMMY11] that the sum also converges in $L^2(M)$.
We claim that $a_j, j=0,1,...,$ are $(1,2,\varepsilon )-$molecules up to multiplication to some uniform constant.
Indeed, note that $a_j=L^K b_j$, where $$b_j=C\int_0^{\infty } \rho^K(t) e^{-\rho(t)L}A_j(\cdot ,t) \frac{\rho'(t)dt}{\rho(t)}.$$ Now we estimate the norm $\Vert (\rho(r_{B_j})L )^k b_j\Vert_{L^2(C_i(B))}$, where $r_{B_j}$ is the radius of $B_j$. For simplicity we ignore the index $j$. Consider any function $g\in L^2(C_i(B))$ with $\Vert g\Vert_{L^2(C_i(B))}=1$, then for $k=0,1, \cdots, K$, $$\begin{aligned}
& \left|\int_M (\rho(r_B)L )^k b(x)g(x)d\mu (x)\right|
\\ & \lesssim
{{\left\lvert{\int_{M}{{\left({\int_0^{\infty }(\rho(r_B)L )^k \rho^K (t) e^{-\rho (t)L}(A_j(\cdot ,t))(x)\frac{\rho'(t)dt}{\rho(t)}}\right)}}g(x)d\mu (x)}\right\rvert}}
\\ &=
{{\left\lvert{\int_{\widehat B}{{\left({\frac{\rho(r_B)}{\rho(t)}}\right)}}^k \rho^K(t) A_j(x ,t)(\rho (t)L)^k e^{-\rho(t)L}g(x)d\mu(x) \frac{\rho'(t)dt}{\rho(t)}}\right\rvert}}
\\ &\lesssim
{{\left({\int_{\widehat B}{{\left\lvert{A_j(x,t)}\right\rvert}}^2 d\mu (x)\frac{dt}{t}}\right)}}^{1/2}
{{\left({\int_{\widehat B}{{\left\lvert{{{\left({\frac{\rho(r_B)}{\rho (t)}}\right)}}^k \rho^K(t) (\rho(t)L)^k e^{-\rho (t)L}g(x)}\right\rvert}}^2 d\mu (x) \frac{dt}{t}}\right)}}^{1/2}.\end{aligned}$$ In the last inequality, we apply the Hölder inequality.
We continue the estimate by using the definition of $T_2^1-$atoms and the off-diagonal estimates of the heat kernel.
For $i=0,1$, the above quantity is dominated by $$\mu^{-1/2}(B) \rho^K(r_B) \left(\int_{\widehat B}\left|(\rho (t)L )^k e^{-\rho (t)L}g(x)\right|^2 d\mu (x)\frac{dt}{t}\right)^{1/2}
\lesssim \mu^{-1/2}(B) \rho^K(r_B).$$ Next, for $i\geq 2$, the above estimate is controlled by $$\begin{aligned}
& \mu^{-1/2}(B) {{\left({\int_0^{r_B}{{\left({\frac{\rho(r_B)}{\rho (t)}}\right)}}^{2 k}\rho^{2K}(t)
{{\left\lVert{(\rho(t)L)^k e^{-\rho(t)L}g}\right\rVert}}_{L^2(B)}^2\frac{dt}{t}}\right)}}^{1/2}
\\ & \lesssim
\mu^{-1/2}(B) {{\left({\int_0^{r_B}{{\left({\frac{\rho(r_B)}{\rho(t)}}\right)}}^{2 k} \rho^{2K}(t) \exp{{\left({-c{{\left({\frac{2^i r_B}{t}}\right)}}^\tau}\right)}} \frac{dt}{t}}\right)}}^{1/2}
\\ & \lesssim
\mu^{-1/2}(B)
{{\left({\int_0^{r_B}{{\left({\frac{\rho (r_B)}{\rho(t)}}\right)}}^{2k}\rho^{2K}(t) {{\left({\frac{t}{2^i r_B}}\right)}}^{\varepsilon +\nu}\frac{dt}{t}}\right)}}^{1/2}
\\ & \lesssim
\mu^{-1/2}(2^i B) \rho^K(r_B) 2^{-i \varepsilon }.\end{aligned}$$ In the first inequality, we use Lemma \[BK\]. Since $k=0, 1, \cdots, K$, the last inequality always holds for any $\varepsilon >0$.
Therefore, $$\begin{aligned}
\Vert (\rho (r_{B})L )^k b\Vert_{L^2(C_i(B))}
& = \sup_{\Vert g\Vert_{L^2(C_i(B))}=1} {{\left\lvert{\int_M (\rho(r_B)L )^k b(x)g(x)d\mu (x)}\right\rvert}}
\\ &\lesssim \mu^{-1/2}(2^i B) \rho^K(r_B)2^{-i \varepsilon }.\end{aligned}$$
Comparison of Hardy spaces and Lebesgue spaces
==============================================
In this section, we will study the relations between $L^p(M)$, $H_{L,\,S_h^{\rho}}^p(M)$ and $H_{L,\,S_h}^p(M)$ under the assumptions of and . We first show that $L^p(M)$ and $H_{L,\,S_h^{\rho}}^p(M)$ are equivalent. Next we give some examples such that $L^p(M)$ and $H_{L,\,S_h}^p(M)$ are not equivalent. More precisely, the inclusion $L^p\subset H_{L,\,S_h}^p$ may be false for $1<p<2$.
Equivalence of $L^p(M)$ and $H_{L,\,S_h^{\rho}}^p(M)$ for $p_0<p<p_0'$
----------------------------------------------------------------------
We will prove Theorem \[equihl\]. That is, if $M$ satisfies $(D)$ and $(DG_{\rho}^{p_0})$, then $H_{L,\,S_h^{\rho}}^p(M)=L^p(M)$ for $p_0<p<p_0'$.
Our main tool is the Calderón-Zygmund decomposition (see for example [@CW71 Corollaire 2.3]).
\[C-Z\] Let $(M,d,\mu)$ be a metric measure space satisfying the doubling volume property. Let $1\leq q\leq \infty$ and $f\in L^q$. Let $\lambda>0$. Then there exists a decomposition of $f$, $f=g+b=g+\sum_i b_i $, so that
1. $|g(x)|\leq C\lambda$ for almost all $x\in M$;
2. There exists a sequence of balls $B_i =B(x_i,r_i)$ so that each $b_i$ is supported in $B_i$, $$\int| b_i(x)|^q d\mu(x)\leq C\lambda^q \mu(B_i)$$
3. $\sum_i \mu(B_i)\leq \frac{C}{\lambda^q} \int|f(x)|^q d\mu(x)$;
4. $\Vert b\Vert_q \leq C\Vert f\Vert_q$ and $\Vert g\Vert_q \leq C\Vert f\Vert_q$;
5. There exists $k\in \mathbb{N}^*$ such that each $x\in M$ is contained in at most $k$ balls $B_i$.
Due to the self-adjointness of $L$ in $L^2(M)$, we get $L^2(M)=\overline{R(L )}\bigoplus N(L)$, where the sum is orthogonal. Under the assumptions $(D)$ and $(DG_{\rho}^{p_0})$, we have $N(L)=0$ and thus $H^2(M)=L^2(M)$. Indeed, for any $f\in N(L)$, it holds that $$e^{-\rho(t)L}f-f=\int_0^{\rho(t)} \frac{\partial}{\partial s}e^{-sL}fds=-\int_0^{\rho(t)} L e^{-sL}fds=0.$$ As a consequence of Lemma \[BK’\], we have that for all $x\in M$ and $t\geq 0$, $$\begin{aligned}
{{\left({\int_{B(x,t)} |f|^{p_0'} }\right)}}^{1/p_0'}
={{\left\lVert{e^{-\rho(t)L}f}\right\rVert}}_{L^{p_0'}(B(x,t))} \lesssim V(x,t)^{\frac{1}{p_0'}-\frac{1}{2}}\Vert f\Vert_{L^2(B(x,t))}.\end{aligned}$$ Now letting $t\rightarrow \infty$, we obtain that $f=0$.
It suffices to prove that for any $f\in R(L)\cap L^p(M)$ with $p_0<p<p_0'$, $$\begin{aligned}
\label{equi}
\Vert S_h^{\rho} f\Vert_{L^p}\lesssim \Vert f\Vert_{L^p}.\end{aligned}$$ With this fact at hand, we can obtain by duality that $\Vert f\Vert_{L^p} \leq C \Vert S_h^{\rho} f\Vert_{L^p}$ for $p_0<p< p_0'$.
Indeed, for $f \in R(L)$, write the identity $$f = C\int_0^{\infty }(\rho(t)L )^2 e^{-2\rho(t)L}f\frac{\rho'(t)dt}{\rho(t)},$$ where the integral $C\int_{\varepsilon}^{1/\varepsilon}(\rho(t)L )^2 e^{-2\rho (t)L}f\frac{\rho'(t)dt}{\rho(t)}$ converges to $f$ in $L^2(M)$ as $\varepsilon\rightarrow 0$.
Then for $f\in R(L)\cap L^p(M)$, we have $$\begin{aligned}
\Vert f\Vert_{L^p}
& =\sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1}|<f,g>|
\simeq \sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1}{{\left\lvert{\iint_{M\times (0,\infty )} F(y,t)G(y,t)d\mu(y)\frac{\rho'(t)dt}{\rho(t)}}\right\rvert}}
\\& \simeq
\sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1} {{\left\lvert{\int_M \iint_{\Gamma (x)} F(y,t)G(y,t)\frac{d\mu(y)}{V(x,t)}\frac{\rho'(t)dt}{\rho(t)}d\mu(x)}\right\rvert}}
\\&\lesssim
\sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1} {{\left\lVert{F}\right\rVert}}_{T_2^p} {{\left\lVert{G}\right\rVert}}_{T_2^{p'}}
\simeq \sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1} {{\left\lVert{S_h^{\rho} f}\right\rVert}}_{L^p} {{\left\lVert{S_h^{\rho} g}\right\rVert}}_{L^{p'}}
\\&\lesssim
\sup_{{{\left\lVert{g}\right\rVert}}_{L^{p'}}\leq 1} {{\left\lVert{S_h^{\rho} f}\right\rVert}}_{L^p} {{\left\lVert{g}\right\rVert}}_{L^{p'}}
={{\left\lVert{S_h^{\rho} f}\right\rVert}}_{L^p}.\end{aligned}$$ Here $F(y,t)=\rho(t)L e^{-\rho(t)L}f(y)$ and $G(y,t)=\rho(t)L e^{-\rho(t)L} g(y)$. The equivalence in the second line is due to the doubling volume property.
By an approximation process, the above argument holds for $f\in L^p(M)$.
For $p>2$, the $L^p$ norm of the conical square function is controlled by that of its vertical analogue (for a reference, see [@AHM12], where the proof can be adapted to the homogeneous setting), and the latter is $L^p$ bounded for $p_0<p<p_0'$ by adapting the proofs in [@Bl07] and [@CDMY96] (if $\{e^{-tL}\}_{t>0}$ is a symmetric Markov semigroup, then the vertical square function is $L^p$ bounded for $1<p<\infty$, according to [@St70]). Hence holds.
It remains to show for $p_0<p< 2$.
In the following, we will prove the weak $(p_0,p_0)$ boundedness of $S_h^{\rho}$ by using the Calderón-Zygmund decomposition. Since $S_h^{\rho}$ is also $L^2$ bounded as shown in , by interpolation holds for every $p_0<p<2$. The proof is similar to [@Au07 Proposition 6.8] and [@AHM12 Theorem 3.1], and originally goes back to [@DM99].
We take the Calderón-Zygmund decomposition of $f$ at height $\lambda $, that is, $f=g+\sum b_i$ with $\supp b_i\subset B_i$. Since $S_h^{\rho}$ is a sublinear operator, write $$\begin{aligned}
S_h^{\rho} \left(\sum_i b_i \right)
&= S_h^{\rho} {{\left({\sum_i {{\left({I-{{\left({I-e^{-\rho (r_i)L }}\right)}}^N+{{\left({I-e^{-\rho (r_i)L }}\right)}}^N}\right)}}b_i}\right)}}
\\ &\leq
S_h^{\rho} {{\left({\sum_i {{\left({I-{{\left({I-e^{-\rho (r_i)L }}\right)}}^N}\right)}} b_i}\right)}}+S_h^{\rho} {{\left({\sum_i {{\left({I-e^{-\rho (r_i)L }}\right)}}^N b_i }\right)}}.\end{aligned}$$ Here $N \in \mathbb N$ is chosen to be larger than $2\nu /\beta_1$, where $\nu$ is as in .
Then it is enough to prove that $$\begin{split}
& \mu {{\left({\left\{x\in M: S_h^{\rho}(f)(x)>\lambda \right\}}\right)}}
\leq \mu {{\left({\left\{x\in M: S_h^{\rho}(g)(x)>\frac{\lambda}{3}\right\}}\right)}}
\\ &+\mu {{\left({\left\{x\in M: S_h^{\rho} {{\left({\sum_i {{\left({I-{{\left({I-e^{-\rho (r_i)L }}\right)}}^N}\right)}} b_i}\right)}} (x)>\frac{\lambda}{3}\right\}}\right)}}
\\& + \mu {{\left({\left\{x\in M: S_h^{\rho} {{\left({\sum_i {{\left({I-e^{-\rho (r_i)L }}\right)}}^N b_i}\right)}}(x)>\frac{\lambda}{3} \right\}}\right)}}
\lesssim \frac{1}{\lambda^{p_0}}\int |f(x)|^{p_0}d\mu(x).
\end{split}$$
We treat $g$ in a routine way. Since $S_h^{\rho}$ is $L^2$ bounded as shown in , we get $$\mu {{\left({\left\{x\in M: S_h^{\rho}(g)(x)>\frac{\lambda}{3}\right\}}\right)}}
\lesssim \lambda^{-2}{{\left\lVert{g}\right\rVert}}_2^2 \lesssim \lambda^{-p_0}{{\left\lVert{g}\right\rVert}}_{p_0}^{p_0}
\lesssim \lambda^{-p_0}\Vert f\Vert_{p_0}^{p_0}.$$
We now turn to the second term. Since $I-{{\left({I-e^{-\rho (r_i)L}}\right)}}^N = \sum_{k=1}^{N} (-1)^{k+1} \binom{N}{k} e^{-k\rho (r_i)L}$, it is enough to show that for every $1 \leq k \leq N$, $$\begin{aligned}
\label{k}
\mu {{\left({\left\{x\in M: S_h^{\rho} \left( \sum_i e^{-k\rho (r_i)L} b_i \right)(x)>\frac{\lambda}{3N} \right\}}\right)}}
\lesssim \frac{1}{\lambda^{p_0}} \int |f(x)|^{p_0} d\mu(x).\end{aligned}$$
Note the following slight improvement of : for every $1 \leq k\leq N$ and for every $j\geq 1$, we have $$\begin{aligned}
\label{off}
{{\left\lVert{e^{-k \rho(r_i)L}b_i}\right\rVert}}_{L^2(C_j(B_i))} \lesssim \frac{2^{j\nu}}{\mu^{\frac1{p_0}-\frac12}(B_i)} e^{-c_k 2^{j\tau(k \rho(r_i))}}
{{\left\lVert{b_i}\right\rVert}}_{L^{p_0}(B_i)}.\end{aligned}$$ Here $\tau(r)=\beta_1/(\beta_1-1)$ if $0<r<1$, otherwise $\tau(r)=\beta_2/(\beta_2-1)$. Indeed, this is immediate for $r_i\ge 1$ and for $0<r_i <k^{-\frac1{\beta_1}}$. For $k^{-\frac1{\beta_1}} \le r_i<1$, that is, $k\rho(r_i)\ge 1$, we have ${{\left({\frac{(2^j r_i)^{\beta_2}}{k\rho(r_i)}}\right)}}^{\frac{1}{\beta_2-1}} \simeq 2^{j\frac{\beta_2}{\beta_2-1}}=2^{j\tau(k \rho(r_i))}$.
With the above preparations, we can now show . Write $$\mu {{\left({\left\{x: {{\left\lvert{S_h^{\rho} {{\left({\sum_{i}e^{-k\rho(r_i)L }b_i}\right)}} (x)}\right\rvert}}>\frac{\lambda}{3N} \right\}}\right)}}
\lesssim \frac{1}{\lambda^2}{{\left\lVert{\sum_{i}e^{-k\rho(r_i)L }b_i}\right\rVert}}_2^2.$$
By a duality argument, $$\begin{aligned}
{{\left\lVert{\sum_{i} e^{-k\rho(r_i)L} b_i}\right\rVert}}_2
&= \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1} \int_{M} {{\left\lvert{\sum_{i} e^{-k\rho(r_i)L} b_i}\right\rvert}} |\phi| d\mu
\leq \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1} \sum_{i} \sum_{j=1}^{\infty} \int_{C_j(B_i)} {{\left\lvert{e^{-k\rho(r_i)L} b_i}\right\rvert}} |\phi| d\mu
\\ &=: \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1} \sum_{i} \sum_{j=1}^{\infty} A_{ij}.\end{aligned}$$
Applying the Cauchy-Schwarz inequality, and , we get $$\begin{aligned}
A_{ij} &\leq {{\left\lVert{e^{-k\rho(r_i)L} b_i}\right\rVert}}_{L^2(C_j(B_i))} {{\left\lVert{\phi}\right\rVert}}_{L^2(C_j(B_i))}
\\ &\lesssim
2^{\frac{3j\nu}{2}} e^{-c 2^{j\tau(k \rho(r_i))}}\mu(B_i) {{\left({\frac{1}{\mu(B_i)}\int_{B_i} |b_i|^{p_0} d\mu}\right)}}^{\frac{1}{p_0}} \inf_{y \in B_i}{{\left({\mathcal M{{\left({|\phi|^2}\right)}}(y)}\right)}}^{1/2}
\\ &\lesssim
\lambda\, e^{-c 2^{j\tau(k \rho(r_i))}} \mu(B_i) \inf_{y \in B_i}{{\left({\mathcal M{{\left({|\phi|^2}\right)}}(y)}\right)}}^{1/2}.\end{aligned}$$ In the last step we also used that ${{\left({\frac{1}{\mu(B_i)}\int_{B_i} |b_i|^{p_0} d\mu}\right)}}^{\frac{1}{p_0}}\lesssim \lambda$, which follows from the Calderón-Zygmund decomposition. Here $\mathcal M$ denotes the Hardy-Littlewood maximal operator: $$\mathcal Mf(x)= \sup_{B\ni x} \frac 1{\mu(B)} \int_B |f(y)| d\mu(y),$$ where $B$ ranges over all balls containing $x$.
Then $$\begin{aligned}
{{\left\lVert{\sum_{i} e^{-k\rho(r_i)L} b_i}\right\rVert}}_2
&\lesssim
\lambda \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1} \sum_{i}\sum_{j=1}^{\infty} e^{-c 2^{j\tau(k \rho(r_i))}}
\mu(B_i) \inf_{y \in B_i}{{\left({\mathcal M{{\left({|\phi|^2}\right)}}(y)}\right)}}^{1/2}
\\ &\lesssim
\lambda \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1}\int \sum_{i} \mathbbm 1_{B_i}(y) {{\left({\mathcal M{{\left({|\phi|^2}\right)}}(y)}\right)}}^{1/2} d\mu(y)
\\ &\lesssim
\lambda \sup_{{{\left\lVert{\phi}\right\rVert}}_2=1}\int_{\cup_{i}B_i} {{\left({\mathcal M{{\left({|\phi|^2}\right)}}(y)}\right)}}^{1/2} d\mu(y)
\\ &\lesssim
\lambda \mu^{1/2} {{\left({\cup_{i}B_i}\right)}} \lesssim \lambda^{1-p_0/2} {{\left({\int |f|^{p_0} d\mu}\right)}}^{1/2}.\end{aligned}$$ The third inequality is due to the finite overlap of the Calderón-Zygmund decomposition. In the last line, for the first inequality, we use Kolmogorov’s inequality (see for example [@Gra08 page 91]).
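For completeness, the Kolmogorov step can be written out as follows (a sketch; recall that $\mathcal M$ is of weak type $(1,1)$ and that ${{\left\lVert{\phi}\right\rVert}}_2=1$): $$\int_{\cup_{i}B_i} \big(\mathcal M(|\phi|^2)(y)\big)^{1/2}\, d\mu(y) \lesssim \mu^{1/2}\Big(\bigcup_{i}B_i\Big)\, \big\| |\phi|^2\big\|_{1}^{1/2} = \mu^{1/2}\Big(\bigcup_{i}B_i\Big).$$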
Therefore, we obtain $$\begin{aligned}
\label{S1}
\mu {{\left({\left\{x: {{\left\lvert{S_h^{\rho} {{\left({\sum_{i}e^{-k\rho(r_i)L}b_i}\right)}}(x)}\right\rvert}}>\frac{\lambda}{3N}\right\}}\right)}}
\lesssim \frac1{\lambda^{p_0}} \int |f|^{p_0} d\mu.\end{aligned}$$
For the third term, we have $$\begin{aligned}
& \mu {{\left({\left\{x\in M: S_h^{\rho} {{\left({\sum_i {{\left({I-e^{-\rho (r_i)L }}\right)}}^N b_i}\right)}}(x)>\frac{\lambda}{3} \right\}}\right)}}
\\ \leq&
\mu{{\left({\cup_j 4B_j}\right)}}+\mu {{\left({\left\{x\in M\setminus \cup_j 4B_j: S_h^{\rho} {{\left({\sum_i {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i}\right)}}(x)
>\frac{\lambda}{3} \right\}}\right)}}.\end{aligned}$$
From the Calderón-Zygmund decomposition and the doubling volume property, we get $$\mu\left(\cup_j 4B_j\right) \leq \sum_j \mu(4B_j) \lesssim \sum_j \mu(B_j) \lesssim \frac{1}{\lambda^{p_0}}\Vert f\Vert_{p_0}^{p_0}.$$
It remains to show that $$\begin{aligned}
\Lambda := \mu \left(\left\{x\in M\setminus \cup_j 4B_j: S_h^{\rho} \left(\sum_i \left(I-e^{-\rho(r_i)L } \right)^N b_i \right)(x) >
\frac{\lambda}{3}\right\}\right)
\lesssim \frac{1}{\lambda^{p_0}} \int |f(x)|^{p_0} d\mu(x).\end{aligned}$$
As a consequence of the Chebyshev inequality, $\Lambda$ is dominated by $$\begin{aligned}
& \frac{9}{\lambda^2 }\int_{ M\setminus \cup_j 4 B_j}
\left (S_h^{\rho} \left(\sum_i \left(I-e^{-\rho(r_i)L} \right)^N b_i \right)(x) \right )^2 d\mu (x)
\\ \leq &
\frac{9}{\lambda^2 }\int_{ M\setminus \bigcup_j 4 B_j} \iint_{\Gamma (x)}
\left (\sum_i \rho(t) L e^{-\rho(t)L } \left(I-e^{-\rho(r_i)L} \right)^N b_i(y)\right )^2
\frac{d\mu(y)}{V(x,t)}\frac{dt}{t} d\mu (x)
\\ \leq &
\frac{18}{\lambda^2 }\int_{ M\setminus \cup_j 4 B_j}\iint_{\Gamma (x)}
\left (\sum_i \mathbbm 1 _{2 B_i}(y) \rho(t) L e^{-\rho(t)L} \left(I-e^{-\rho(r_i)L} \right)^N b_i(y)\right)^2
\frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu (x)
\\&+
\frac{18}{\lambda^2 }\int_{ M\setminus \cup_j 4 B_j}\iint_{\Gamma (x)} {{\left({\sum_i \mathbbm 1 _{M\setminus 2 B_i}(y) \rho(t) L e^{-
\rho(t)L } {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu (x)
\\ =:& \frac{18}{\lambda^2 }(\Lambda _{loc}+\Lambda _{glob}).\end{aligned}$$
We first estimate $\Lambda_{loc}$. Due to the bounded overlap of the balls $2B_i$, we can take the sum over $i$ out of the square, up to a multiplicative constant. That is, $$\begin{aligned}
\Lambda _{loc}
&\lesssim \sum_i \int_{ M\setminus \cup_j 4B_j} \int_{0}^{\infty}\int_{B(x,t)}
{{\left({\mathbbm 1_{2 B_i}(y) \rho(t) L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu(x)
\\ &\lesssim
\sum_i \int_{ M\setminus \cup_j 4B_j}\int_{2 r_i}^{\infty} \int_{B(x,t)}
{{\left({\mathbbm 1_{2 B_i}(y) \rho(t)L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu(x)
\\ &\lesssim
\sum_i \int_{2 r_i}^{\infty} \int_{M} {{\left({\int_{B(y,t)} \frac{d\mu(x)}{V(x,t)}}\right)}}
{{\left({\mathbbm 1_{2 B_i}(y) \rho(t) L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 d\mu (y)\frac{dt}{t}
\\ &\lesssim
\sum_i \int_{2 r_i}^{\infty} \int_{2 B_i} {{\left({\rho(t)L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 d\mu(y) \frac{dt}{t}.\end{aligned}$$ For the second inequality, note that for every $i$, $x \in M\setminus \cup_j 4 B_j$ means $x \notin 4B_i $. Then $y \in 2 B_i$ and $d(x,y)<t$ imply that $t\geq 2 r_i$. Thus the integral is zero for every $i$ if $0<t<2r_i$. We obtain the third inequality by using the Fubini theorem and .
Then by using , it follows $$\begin{aligned}
\Lambda_{loc}
&\lesssim
\sum_i \int_{2 r_i}^{\infty} \int_{2 B_i} {{\left({\frac{\mu^{\frac1{p_0}-\frac1{2}}(B_i)}{V^{\frac1{p_0}-\frac1{2}}(y,t)}
\frac{V^{\frac1{p_0}-\frac1{2}}(y,t)}{\mu^{\frac1{p_0}-\frac1{2}}(B_i)} \rho(t)L e^{-\rho(t)L}
{{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 d\mu(y) \frac{dt}{t}
\\ &\lesssim
\sum_i \int_{2 r_i}^{\infty} \int_{2 B_i}
{{\left({\frac{V^{\frac1{p_0}-\frac1{2}}(y,4r_i)}{V^{\frac1{p_0}-\frac1{2}}(y,t)}
\frac{V^{\frac1{p_0}-\frac1{2}}(y,t)}{\mu^{\frac1{p_0}-\frac1{2}}(B_i)}
\rho(t)L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i(y)}\right)}}^2 d\mu(y) \frac{dt}{t}
\\ &\lesssim
\sum_i \mu^{1-\frac{2}{p_0}}(B_i) \int_{2 r_i}^{\infty} {{\left({\frac{4r_i}{t}}\right)}}^{\nu' {{\left({\frac{2}{p_0}-1}\right)}}}
{{\left\lVert{V^{\frac1{p_0}-\frac1{2}}(\cdot,t) \rho(t)L e^{-\rho(t)L} {{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i}\right\rVert}}_2^2 \frac{dt}{t}
\\ &\lesssim
\sum_i \mu^{1-\frac{2}{p_0}}(B_i) {{\left\lVert{{{\left({I-e^{-\rho(r_i)L}}\right)}}^N b_i}\right\rVert}}_{p_0}^2
\\ &\lesssim
\sum_i \mu^{1-\frac{2}{p_0}}(B_i) {{\left\lVert{b_i}\right\rVert}}_{p_0}^2
\lesssim \lambda^2 \sum_i \mu(B_i) \lesssim \lambda^{2-p_0} \int |f|^{p_0} d\mu.\end{aligned}$$ For the second inequality, we use the reverse doubling property . The third inequality follows from the $L^{p_0}-L^2$ boundedness of the operator $V^{\frac1{p_0}-\frac1{2}}(\cdot,t) \rho(t)L e^{-\rho(t)L}$ (see Lemma \[BK\]). Then by using the $L^{p_0}$ boundedness of the heat semigroup, we get the fourth inequality.
We now turn to the global part $\Lambda_{glob}$. We split the integral into annuli, that is, $$\begin{aligned}
\Lambda _{glob} &\leq
\int_{ M}\iint_{\Gamma (x)}\left (\sum_i \mathbbm 1 _{M\setminus 2 B_i}(y) \rho(t) L e^{-\rho(t)L }
(I-e^{-\rho(r_i)L })^N b_i(y)\right )^2 \frac{d\mu(y)}{V(x,t)}\frac{dt}{t}d\mu (x)
\\ &\leq
\int_{0}^{\infty} \int_{ M} \int_{B(y,t)} \left (\sum_i \mathbbm 1_{M\setminus 2 B_i}(y) \rho(t) L e^{-\rho(t)L}
(I-e^{-\rho(r_i)L })^N b_i(y)\right )^2 \frac{d\mu(x)}{V(x,t)}d\mu (y)\frac{dt}{t}
\\ &\leq
\int_{0}^{\infty} \int_{ M}\left (\sum_i \mathbbm 1_{M\setminus 2 B_i}(y) \rho(t) L e^{-\rho(t)L}
(I-e^{-\rho(r_i)L })^N b_i(y)\right )^2 d\mu (y)\frac{dt}{t}.\end{aligned}$$
In order to estimate the above $L^2$ norm, we use a duality argument. Taking the supremum over all functions $h(y,t)\in L^2(M\times (0,\infty), \frac{d\mu dt}{t})$ with norm $1$, we get $$\begin{aligned}
\Lambda_{glob}^{1/2} \leq&
{{\left({\int_{0}^{\infty} \int_{M} {{\left({\sum_i \mathbbm 1_{M\setminus 2 B_i}(y) \rho(t)L e^{-\rho(t)L}(I-e^{-\rho(r_i)L })^N b_i(y)}\right)}}^2 d\mu(y)
\frac{dt}{t}}\right)}}^{1/2}
\\ =&
\sup_{h} \iint_{M\times (0,\infty)} {{\left\lvert{\sum_i \mathbbm 1_{M\setminus 2 B_i}(y) \rho(t)L e^{-\rho(t)L} (I-e^{-\rho(r_i)L })^N b_i(y)}\right\rvert}}
|h(y,t)| \frac{d\mu(y) dt}{t}
\\ \leq&
\sup_{h} \sum_i \sum_{j\geq 2}\int_{0}^{\infty}\int_{ C_j(B_i)} {{\left\lvert{\rho(t)L e^{-\rho(t)L} (I-e^{-\rho(r_i)L })^N b_i(y)}\right\rvert}}
|h(y,t)| \frac{d\mu(y) dt}{t}
\\ \leq&
\sup_{h} \sum_i \sum_{j\geq 2} {{\left({\int_{0}^{\infty}\int_{ C_j(B_i)} {{\left\lvert{\rho(t)L e^{-\rho(t)L} (I-e^{-\rho(r_i)L })^N b_i(y)}\right\rvert}}^2
\frac{d\mu (y) dt}{t}}\right)}}^{1/2}
\\ &\times {{\left({\int_{0}^{\infty}\int_{ C_j(B_i)} |h(y,t)|^2 \frac{d\mu (y) dt}{t}}\right)}}^{1/2}.\end{aligned}$$
Denote $I_{ij}={{\left({\int_{0}^{\infty}\int_{C_j(B_i)} {{\left\lvert{\rho(t)L e^{-\rho(t)L} (I-e^{-\rho(r_i)L })^N b_i(y)}\right\rvert}}^2\frac{d\mu (y)dt}{t}}\right)}}^{1/2}$.
Let $H_{t,r}(\zeta)=\rho(t) \zeta e^{-\rho(t)\zeta}(1-e^{-\rho (r)\zeta})^N$. Then $$\begin{aligned}
\label{I}
I_{ij}={{\left({\int_{0}^{\infty} \Vert H_{t,r_i}(L)b_i \Vert_{L^2(C_j(B_i))}^2 \frac{dt}{t}}\right)}}^{1/2}.\end{aligned}$$
We will estimate $\Vert H_{t,r_i}(L)b_i \Vert_{L^2(C_j(B_i))}$ by functional calculus. The notation is mainly taken from [@Au07 Section 2.2].
For any fixed $t$ and $r$, then $H_{t,r}$ is a holomorphic function satisfying $$|H_{t,r}(\zeta)| \lesssim |\zeta|^{N+1} (1+|\zeta|)^{-2(N+1)},$$ for all $\zeta \in{\Sigma }=\{z\in \mathbb C^{\ast }:|\arg z|<\xi \}$ with any $\xi \in (0,\pi/2)$.
Since $L$ is a nonnegative self-adjoint operator, or equivalently $L$ is a bisectorial operator of type $0$, we can express $H_{t,r}(L )$ by functional calculus. Let $0<\theta <\omega <\xi < \pi /2$, we have $$H_{t,r}(L )=\int_{\Gamma _{+}} e^{-zL }\eta _{+}(z)dz+\int_{\Gamma _{-}} e^{-zL }\eta _{-}(z)dz,$$ where $\Gamma _{\pm }$ is the half-ray $\mathbb {R}^{+}e^{\pm i(\pi /2-\theta )}$ and $$\begin{aligned}
\eta _{\pm }(z) = \int_{\gamma _{\pm }} e^{\zeta z} H_{t,r}(\zeta ) d\zeta,\,\,\forall z\in \Gamma _{\pm },\end{aligned}$$ with $\gamma _{\pm }$ being the half-ray $\mathbb {R}^{\pm}e^{\pm i\omega }$.
Then for any $z\in \Gamma _{\pm }$, $$\begin{aligned}
|\eta _{\pm }(z)| &= {{\left\lvert{\int_{\gamma _{\pm }} e^{\zeta z} \rho(t) \zeta e^{-\rho(t)\zeta} (1-e^{-\rho (r)\zeta})^N d\zeta}\right\rvert}}
\\ &\leq
\int_{\gamma _{\pm}} |e^{\zeta z-\rho(t)\zeta}| \rho(t) |\zeta| |1-e^{-\rho(r)\zeta}|^N |d\zeta|
\\ &\leq
\int_{\gamma _{\pm}} e^{-c|\zeta|(|z|+\rho(t))} \rho(t) |\zeta| |1-e^{-\rho(r)\zeta}|^N |d\zeta|
\\ &\lesssim
\int_{0}^{\infty} e^{-cs(|z|+\rho(t))} \rho(t) \rho^N(r) s^{N+1} ds
\leq \frac{C\rho (t) \rho^N(r)}{(|z|+\rho (t))^{N+2}}.\end{aligned}$$ In the second inequality, the constant $c>0$ depends on $\theta$ and $\omega$. Indeed, $\Re(\zeta z)=|\zeta||z| \Re{e^{\pm i(\pi/2-\theta+\omega)}}$. Since $\theta<\omega$, we have $\pi/2<\pi/2-\theta+\omega<\pi$ and $|e^{\zeta z}|=e^{-c_1 |\zeta| |z|}$ with $c_1=-\cos(\pi/2-\theta+\omega)$. It is also clear that $|e^{-\rho(t)\zeta}|=e^{-c_2 \rho(t) |\zeta|}$. Thus the second inequality follows. For the third inequality, setting $\zeta=s e^{\pm i \omega}$, we have $|d\zeta|=ds$. In addition, we dominate $|1-e^{-\rho(r)\zeta}|^N$ by $(\rho(r)|\zeta|)^N$.
We choose $\theta$ appropriately such that $|z|\sim \Re z$ for $z\in \Gamma _{\pm }$, then for any $j \geq 2$ fixed, $$\begin{aligned}
{{\left\lVert{H_{t,r_i}(L )b_i}\right\rVert}}_{L^2(C_j(B_i))}
&\lesssim
{{\left({\int_{\Gamma _{+}}+\int_{\Gamma _{-}}}\right)}} {{\left\lVert{e^{-\Re z L }b_i}\right\rVert}}_{L^2(C_j(B_i))}
\frac{\rho(t)}{(|z|+\rho (t))^2}\frac{\rho^N (r_i)}{(|z|+\rho (t))^N}|dz|
\\ &\lesssim
\int_{0}^{\infty} {{\left\lVert{e^{-sL }b_i}\right\rVert}}_{L^2(C_j(B_i))} \frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}}ds.\end{aligned}$$
Applying Lemma \[BK\], then $$\begin{aligned}
\label{H}
\begin{split}
\left\Vert H_{t,r_i}(L )b_i\right\Vert_{L^2(C_j(B_i))}
&\lesssim
\frac{2^{j\nu} \Vert b_i\Vert_{p_0}}{\mu^{\frac{1}{2}-\frac{1}{p_0}}(B_i)}
\int_{0}^\infty e^{-c\left(\frac{2^j r_i}{\rho^{-1}(s)}\right)^{\tau(s)}} \frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}} ds
\\ &\lesssim
\frac{ 2^{j\nu} \Vert b_i\Vert_{p_0}}{\mu^{\frac{1}{2}-\frac{1}{p_0}}(B_i)} {{\left({\int_{0}^{\rho (t)}+\int_{\rho (t)}^{\infty }}\right)}}
e^{-c{{\left({\frac{2^j r_i}{\sigma(s)}}\right)}}^{\tau(s)}} \frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}} ds
\\ &=:
\frac{ 2^{j\nu} \Vert b_i\Vert_{p_0}}{\mu^{\frac{1}{2}-\frac{1}{p_0}}(B_i)} (H_1(t,r_i,j)+H_2(t,r_i,j)).
\end{split}\end{aligned}$$ In the second and the third lines, $\tau(s)$ is originally defined in . In fact, it should be $\tau(\rho^{-1}(s))$; but since $\rho^{-1}(s)$ and $s$ are simultaneously larger than one or smaller than one, we always have $\tau(s)=\tau(\rho^{-1}(s))$.
Hence, by Minkowski inequality, we get from and that $$\begin{aligned}
\label{I_ij}
I_{ij} \lesssim
\frac{ 2^{j\nu} \Vert b_i\Vert_{p_0}}{\mu^{\frac{1}{2}-\frac{1}{p_0}}(B_i)}
{{\left({{{\left({\int_{0}^{\infty} H_1^2(t,r_i,j) \frac{dt}{t}}\right)}}^{1/2} + {{\left({\int_{0}^{\infty} H_2^2(t,r_i,j) \frac{dt}{t}}\right)}}^{1/2}}\right)}}.\end{aligned}$$
It remains to estimate the two integrals $\int_{0}^{\infty} H_1^2(t,r_i,j) \frac{dt}{t}$ and $\int_{0}^{\infty} H_2^2(t,r_i,j) \frac{dt}{t}$. We claim that $$\begin{aligned}
\label{H1}
\int_{0}^{\infty} H_1^2(t,r_i,j) \frac{dt}{t},\,\int_{0}^{\infty }H_2^2(t,r_i,j)\frac{dt}{t} \lesssim 2^{-2\beta_1N j}.\end{aligned}$$
Estimate first $\int_{0}^{\infty} H_1^2(t,r_i,j) \frac{dt}{t}$. Since $\frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}}\leq \frac{\rho^N (r_i)}{\rho(t)^{N+1}} $, we obtain $$H_1(t,r_i,j) \leq \int_{0}^{\rho(t)} e^{-c{{\left({\frac{2^{j} r_i}{\sigma(s)}}\right)}}^{\beta_2/(\beta_2-1)}} \frac{\rho^N(r_i)}{\rho^{N+1}(t)}ds
\lesssim
e^{-c{{\left({\frac{2^{j} r_i}{t}}\right)}}^{\beta_2/(\beta_2-1)}} \frac{\rho^N (r_i)}{\rho^N(t)}.$$ It follows that $$\begin{aligned}
\int_{0}^{\infty }H_1^2(t,r_i,j)\frac{dt}{t}
&\lesssim
\int_{0}^{\infty}e^{-2c{{\left({\frac{2^{j} r_i}{t}}\right)}}^{\beta_2/(\beta_2-1)}} \frac{\rho^{2N} (r_i)}{\rho^{2N}(t)} \frac{dt}{t}
\\ &\lesssim
\int_{0}^{2^j r_i } {{\left({\frac{t}{2^{j} r_i}}\right)}}^{c} \frac{\rho^{2N} (r_i)}{\rho^{2N}(t)} \frac{dt}{t}
+ \int_{2^j r_i}^{\infty} \frac{\rho^{2N} (r_i)}{\rho^{2N}(t)} \frac{dt}{t}
\\ &\lesssim \frac{\rho^{2N} (r_i)}{\rho^{2N}(2^j r_i)}
\lesssim 2^{-2\beta_1N j}. \end{aligned}$$ In the first inequality, we dominate the exponential term by a polynomial one in the first integral, where the exponent $c$ in the second line is chosen to be larger than $2\beta_2 N$.
Now estimate $\int_{0}^{\infty }H_2^2(t,r_i,j)\frac{dt}{t}$. Write $\frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}} \leq \frac{\rho (t) \rho^N(r_i)}{s^{N+2}}$. On the one hand, $$\begin{aligned}
\label{h2}
H_2(t,r_i,j) =\int_{\rho (t)}^{\infty }
e^{-c\left(\frac{2^j r_i}{\sigma(s)}\right)^{\tau(s)}} \frac{\rho (t)\rho^N (r_i)}{(s+\rho (t))^{N+2}} ds
\leq \int_{\rho(t)}^{\infty } \frac{\rho(t) \rho^N(r_i)}{s^{N+2}} ds=C\frac{\rho^N (r_i)}{\rho^N (t)}.\end{aligned}$$ On the other hand, we also have $$\begin{aligned}
\label{h2'}
H_2(t,r_i,j) \lesssim 2^{-\beta_1 Nj} \frac{\rho (t)}{\rho (2^jr_i)}.\end{aligned}$$ In fact, $$\begin{aligned}
H_2(t,r_i,j) &\leq
\int_{\rho(t)}^{\infty } e^{-c{{\left({\frac{2^{j} r_i}{\sigma(s)}}\right)}}^{\beta_2/(\beta_2-1)}} \frac{\rho(t)\rho^N(r_i)}{s^{N+1}}\frac{ds}{s}
\\ &\lesssim
2^{-\beta_1 Nj} \frac{\rho (t)}{\rho (2^jr_i)} \int_{\rho(t)}^{\infty} e^{-c{{\left({\frac{2^{j} r_i}{\sigma(s)}}\right)}}^{\beta_2/(\beta_2-1)}} \frac{\rho^{N+1}(2^j r_i)}{s^{N+1}}\frac{ds}{s}
\\ &\lesssim
2^{-\beta_1 Nj} \frac{\rho (t)}{\rho (2^jr_i)}.\end{aligned}$$ Now we split the integral into two parts in the same way and control them by using and separately. Then $$\begin{aligned}
\int_{0}^{\infty } H_2^2(t,r_i,j) \frac{dt}{t} &\lesssim
\int_{0}^{2^j r_i } 2^{-2\beta_1 Nj} \frac{\rho^2(t)}{\rho^2(2^j r_i)}\frac{dt}{t}
+\int_{2^j r_i }^{\infty } \frac{\rho^{2N} (r_i)}{\rho^{2N} (t)} \frac{dt}{t}
\\ &\lesssim
2^{-2\beta_1 Nj}.\end{aligned}$$
Therefore, it follows from and that $$\begin{aligned}
\label{Iij}
I_{ij} \lesssim \frac{ \mu^{1/2}(2^{j}B_i) \Vert b_i\Vert_{p_0}}{\mu^{1/p_0}(B_i)} 2^{-\beta_1 N j}.\end{aligned}$$
Now for the integral $\left(\int_{0}^{\infty}\int_{ C_j(B_i)} |h(y,t)|^2 \frac{d\mu (y) dt}{t}\right)^{1/2}$. Take $\tilde h(y)=\int_{0}^{\infty}|h(y,t)|^2 \frac{dt}{t}$, then $$\begin{aligned}
\label{h}
{{\left({\int_{0}^{\infty}\int_{ C_j(B_i)} |h(y,t)|^2 \frac{d\mu (y) dt}{t}}\right)}}^{1/2}
\leq \mu^{1/2}(2^{j+1}B_i) \inf_{z \in B_i} \mathcal M^{1/2} \tilde h (z),\end{aligned}$$ where $\mathcal M$ is the Hardy-Littlewood maximal function.
Following the route for the proof of , we get from and that $$\begin{aligned}
\Lambda_{glob}^{1/2}
&\lesssim
\sup_{h} \sum_i\sum_{j\geq 2} \frac{ 2^{j\nu} \Vert b_i\Vert_{p_0}}{\mu^{\frac{1}{2}-\frac{1}{p_0}}(B_i)}
2^{-\beta_1 N j} \mu^{1/2}(2^{j+1}B_i)\inf_{z \in B_i} \mathcal M^{1/2}\tilde h (z)
\\ &\lesssim
\lambda \sup_{h} \int_M \sum_i \mathbbm 1_{B_i}(y) \mathcal M^{1/2}\tilde h(y) d\mu(y)
\\ &\lesssim
\lambda \sup_{h} \int_{\cup_i B_i} \mathcal M^{1/2} \tilde h (y) d\mu(y)
\\ &\lesssim
\lambda \mu(\cup_i B_i)^{1/2}
\lesssim \lambda^{1-p_0/2} \int |f|^{p_0} d\mu.\end{aligned}$$ Here the supremum is taken over all the functions $h$ with ${{\left\lVert{h}\right\rVert}}_{L^2{{\left({\frac{d\mu dt}{t}}\right)}}}=1$. Since $N>2\nu/\beta_1$, the sum $\sum_{j\geq 2} 2^{-\beta_1N j+3\nu j/2}$ converges and we get the second inequality. The fourth one is a result of Kolmogorov’s inequality.
Thus we have shown $\Lambda _{glob} \lesssim \lambda^{2-p_0} \int |f|^{p_0} d\mu$.
Counterexamples to $H_{L,\,S_h}^p(M) = L^p(M)$
----------------------------------------------
Before moving forward to the proof of Theorem \[noequiv\], let us recall the following two theorems about the Sobolev inequality and the Green operator.
\[hkSob\] Let $(M,\mu)$ be a $\sigma$-finite measure space. Let $T_t$ be a semigroup on $L^s$, $1\leq s\leq \infty$, with infinitesimal generator $-L$. Assume that $T_t$ is equicontinuous on $L^1$ and $L^{\infty}$. Then the following two conditions are equivalent:
1. There exists $C>0$ such that ${{\left\lVert{T_t}\right\rVert}}_{1\rightarrow \infty} \leq C t^{-D/2}$, $\forall t \geq 1$.
2. $T_1$ is bounded from $L^1$ to $L^{\infty}$ and for every $q>1$, there exists $C>0$ such that $$\begin{aligned}
\label{sob}
{{\left\lVert{f}\right\rVert}}_{p} \leq C {{\left({{{\left\lVert{L^{\alpha/2} f}\right\rVert}}_{q}+{{\left\lVert{L^{\alpha/2} f}\right\rVert}}_{p}}\right)}},\,\,f\in \mathcal D (L^{\alpha/2}) ,\end{aligned}$$ where $0<\alpha q<D$ and $\frac{1}{p}=\frac{1}{q}-\frac{\alpha}{D}$.
\[Gr\] Let $M$ be a complete non-compact manifold. Then there exists a Green’s function $G(x,y)$ which is smooth on $(M\times M)\backslash D$ satisfying $$\begin{aligned}
\Delta_x \int_M G(x,y) f(y) d\mu(y)=f(x),\,\, \forall f\in \mathcal C_0^{\infty}(M).\end{aligned}$$
For a proof, see for example [@Li12].
We also observe that
\[LB\] Let $M$ be a Riemannian manifold satisfying the polynomial volume growth and the two-sided sub-Gaussian heat kernel estimate $(HK_{2,m})$. Let $B$ be an arbitrary ball with radius $r\geq 4$. Then there exists a constant $c>0$ depending on $d$ and $m$ such that for all $t$ with $r^m/2 \leq t \leq r^m$, $$\int_B p_t(x,y) d\mu(y) \geq c,\,\,\forall x \in B.$$
Note that for any $x,y \in B$, we have $t \geq r^m/2 \geq 2r \geq d(x,y)$. Then ($H\!K_{2,m}$) yields $$\begin{aligned}
\int_B p_t(x,y) d\mu(y)
&\geq \int_B \frac{c}{t^{d/m}} \exp{{\left({-C{{\left({\frac{d^m(x,y)}{t}}\right)}}^{1/(m-1)}}\right)}} d\mu(y)
\\ &\geq \frac{c \mu(B)}{t^{d/m}} \exp{{\left({-C{{\left({\frac{r^m}{t}}\right)}}^{1/(m-1)}}\right)}}
\geq c.\end{aligned}$$
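The last inequality can be seen as follows (a sketch, using the two-sided polynomial volume estimate $\mu(B)\simeq r^d$): since $r^m/2\leq t\leq r^m$, $$\frac{\mu(B)}{t^{d/m}} \gtrsim \frac{r^d}{r^d}=1 \qquad \text{and} \qquad \exp{{\left({-C{{\left({\frac{r^m}{t}}\right)}}^{1/(m-1)}}\right)}} \geq e^{-C\, 2^{1/(m-1)}}.$$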
Let $\phi_n \in \mathcal C_0^{\infty}(M)$ be a cut-off function as follows: $0 \leq \phi_n \leq 1$ and for some $x_0 \in M$, $$\phi_n(x)=\left\{ \begin{aligned}
&1,&x\in B(x_0,n),\\
&0,&x\in M\backslash B(x_0,2n).
\end{aligned} \right.$$ For simplicity, we denote $B(x_0,n)$ by $B_n$.
Taking $f_n=G\phi_n$, Theorem \[Gr\] says that $\Delta f_n=\phi_n$.
On the one hand, we apply Theorem \[hkSob\] by choosing $T_t=e^{-t\Delta}$. Indeed, $e^{-t\Delta}$ is Markov hence bounded on $L^p$, equicontinuous on $L^1, L^{\infty}$ and satisfies $${{\left\lVert{e^{-t\Delta}}\right\rVert}}_{1\rightarrow \infty} =\sup_{x,y \in M} p_t(x,y) \leq C t^{-D/2},$$ where $D=2d/m>2$. Then taking $\alpha=2$ and $p>\frac{D}{D-2}$, it follows that $${{\left\lVert{f_n}\right\rVert}}_p \leq C {{\left({{{\left\lVert{\Delta f_n}\right\rVert}}_{q}+{{\left\lVert{\Delta f_n}\right\rVert}}_p}\right)}},$$ where $\frac{1}{p}=\frac{1}{q}-\frac{\alpha}{D}$, that is, $q=\frac{Dp}{D+2p}=\frac{dp}{d+mp}$.
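The exponent arithmetic is elementary and worth recording: with $D=2d/m$ and $\alpha=2$, $$\frac1q=\frac1p+\frac{\alpha}{D}=\frac1p+\frac md \quad\Longrightarrow\quad q=\frac{dp}{d+mp},$$ and the requirement $q>1$ in Theorem \[hkSob\] is precisely $p>\frac{d}{d-m}=\frac{D}{D-2}$, while the constraint $\alpha q<D$ holds automatically.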
Using the fact that $\Delta f_n=\phi_n$ and $\phi_n \leq \mathbbm 1_{B(x_0,2n)}$, we get $$\begin{aligned}
\label{sobeq}
\begin{split}
{{\left\lVert{f_n}\right\rVert}}_p
&\lesssim {{\left({{{\left\lVert{\phi_n}\right\rVert}}_{\frac{dp}{d+mp}}+{{\left\lVert{\phi_n}\right\rVert}}_p}\right)}}
\lesssim{{\left({V^{\frac{d+mp}{dp}}(x_0,2n)+V^{\frac{1}{p}}(x_0,2n)}\right)}}
\\ &\lesssim {{\left({n^{m+d/p}+n^{d/p}}\right)}}
\lesssim n^{m+d/p}.
\end{split}\end{aligned}$$ In particular, ${{\left\lVert{f_n}\right\rVert}}_2 \lesssim n^{m+d/2}$.
On the other hand, $$\begin{aligned}
{{\left\lVert{S_h f_n}\right\rVert}}_p^p
&= \int_M {{\left({\iint_{\Gamma(x)} {{\left\lvert{t^2 \Delta e^{-t^2 \Delta} f_n(y)}\right\rvert}}^2 \frac{d\mu(y)}{V(x,t)} \frac{dt}{t} }\right)}}^{p/2} d\mu(x)
\\ &=
\int_M {{\left({\iint_{\Gamma(x)} {{\left\lvert{t^2 e^{-t^2 \Delta} \phi_n(y)}\right\rvert}}^2 \frac{d\mu(y)}{V(x,t)} \frac{dt}{t} }\right)}}^{p/2} d\mu(x).\end{aligned}$$ Since $\phi_n \geq \mathbbm 1_{B_{n}} \geq 0$, it follows from the Markovian property of the heat semigroup that $${{\left\lVert{S_h f_n}\right\rVert}}_p^p \geq
\int_M {{\left({\iint_{\Gamma(x)} {{\left\lvert{t^2 e^{-t^2 L} \mathbbm 1_{B_{n}} (y)}\right\rvert}}^2 \frac{d\mu(y)}{V(x,t)} \frac{dt}{t} }\right)}}^{p/2} d\mu(x).$$ By using Lemma \[LB\], it holds that $e^{-t^2L} \mathbbm 1_{B_{n/2}} \geq c$ if $\frac{n^{m/2}}{2} \leq t \leq n^{m/2}$. Then we get $${{\left\lVert{S_h f_n}\right\rVert}}_p^p \gtrsim
\int_{B{{\left({x_0, \frac{n^{m/2}}{4}}\right)}}} {{\left({\int_{\frac{n^{m/2}}{2}}^{n^{m/2}} \int_{B(x,t) \cap B_{n/2}} \frac{t^3}{V(x,t)} d
\mu(y) dt }\right)}}^{p/2} d\mu(x).$$ Observe also that, for $t>\frac{n^{m/2}}{2}$ and $x\in B{{\left({x_0, \frac{n^{m/2}}{4}}\right)}}$, we have $B_{n} \subset B(x,t)$ as long as $n$ is large enough. Then the volume growth gives us a lower bound in terms of $n$. That is, $${{\left\lVert{S_h f_n}\right\rVert}}_p^p \gtrsim
\int_{B{{\left({x_0, \frac{n^{m/2}}{4}}\right)}}} {{\left({\int_{\frac{n^{m/2}}{2}}^{n^{m/2}} \frac{ \mu(B_n) t^3}{V(x,n^{m/2})} dt}\right)}}^{p/2}
d\mu(x)
\gtrsim n^{\frac{md}{2}(1-\frac{p}{2})} n^{mp+dp/2}.$$
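For the reader's convenience, here is the power counting behind the last display (a sketch, again using $V(x,r)\simeq r^d$): $$\mu(B_n)\simeq n^{d},\qquad V(x,n^{m/2})\simeq n^{md/2},\qquad \int_{\frac{n^{m/2}}{2}}^{n^{m/2}} t^3\, dt \simeq n^{2m},$$ so the inner integral is of order $n^{d+2m-md/2}$; raising it to the power $p/2$ and integrating over a ball of measure $\simeq n^{md/2}$ gives $n^{\frac{md}{2}+\frac{p}{2}{{\left({d+2m-\frac{md}{2}}\right)}}}=n^{\frac{md}{2}{{\left({1-\frac{p}{2}}\right)}}}\, n^{mp+\frac{dp}{2}}$.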
Comparing the upper bound of ${{\left\lVert{f_n}\right\rVert}}_p$ in for $p>\frac{D}{D-2}$, we obtain $$\begin{aligned}
\label{noeq}
{{\left\lVert{S_h f_n}\right\rVert}}_p \gtrsim n^{\frac{md}{2}(\frac{1}{p}-\frac{1}{2})+m+\frac{d}{2}}
\gtrsim n^{d{{\left({\frac{m}{2}-1}\right)}} {{\left({\frac{1}{p}-\frac{1}{2}}\right)}}} {{\left\lVert{f_n}\right\rVert}}_p,\end{aligned}$$ where $p>\frac{D}{D-2}$.
Assume $D>4$, i.e. $m<d/2$, we have $\frac{D}{D-2}<2$. Then for $\frac{D}{D-2}<p<2$, since $m>2$, $$n^{d{{\left({\frac{m}{2}-1}\right)}} {{\left({\frac{1}{p}-\frac{1}{2}}\right)}}} \rightarrow \infty \text{ as } n\rightarrow \infty.$$ Thus (\[noeq\]) implies that $L^p \subset H_{S_h}^p$ is not true for $p\in {{\left({\frac{D}{D-2},2}\right)}}$, i.e. $p \in {{\left({\frac{d}{d-
m},2}\right)}}$, where $2<m < d/2$.
Our conclusion is: for any fixed $p \in {{\left({\frac{d}{d-m},2}\right)}}$, according to and , there exists a family of functions $\left\{g_n=\frac{f_n}{n^{m+d/p}}\right\}_{n\geq 1}$ such that ${{\left\lVert{g_n}\right\rVert}}_p \leq C$, ${{\left\lVert{g_n}\right\rVert}}_2 \leq n^{\frac{d}{2}-\frac{d}{p}} \to 0$ and ${{\left\lVert{S_h g_n}\right\rVert}}_p \geq n^{d(\frac{m}{2}-1)(\frac{1}{p}-\frac{1}{2})}\to+\infty$ as $n$ goes to infinity. Therefore $S_h$ is not $L^p$ bounded for $p \in {{\left({\frac{d}{d-m},2}\right)}}$ and the inclusion $L^p \subset H_{S_h}^p$ does not hold for $p \in {{\left({\frac{d}{d-m},2}\right)}}$.
More generally, a slight adaptation of Theorem \[noequiv\] combined with Theorem \[equihl\] yields the following result.
\[neq-gen\] Let $M$ be a Riemannian manifold satisfying and ($H\!K_{2,m}$) as above. Let $p\in {{\left({\frac{d}{d-m}, 2}\right)}}$. Then for any $0<m'\leq m$, $L^p(M) =H_{S_h^{m'}}^p(M)$ if and only if $m'=m$.
If $m'=m$, Theorem \[equihl\] says that $L^p \subset H_{S_h^{m}}^p$.
Conversely, by slightly adjusting the above proof, we can show that the inclusion $L^p \subset H_{S_h^{m'}}^p$ is false for $p \in {{\left({\frac{d}{d-m},2}\right)}}$, where $2<m < d/2$ and $m'<m$.
The $H^1-L^1$ boundedness of Riesz transforms on fractal manifolds
==================================================================
This section is devoted to an application of the Hardy space theory we introduced above.
Let $(M,d,\mu)$ be a Riemannian manifold satisfying the doubling volume property ($D$) and the sub-Gaussian estimate $(U\!E_{2,m})$. Note that we could as well consider a metric measure Dirichlet space which admits a “carré du champ” (see, for example, [@BE85; @GSC11]).
Recall that the Riesz transform $\nabla \Delta^{-1/2}$ is of weak type $(1,1)$ on $M$:
Let $M$ be a manifold satisfying the doubling volume property and the heat kernel estimate $(U\!E_{2,m})$, $m>2$. Then, the Riesz transform is weak $(1,1)$ bounded and bounded on $L^p$ for $1<p\leq 2$.
The proof depends on the following integrated estimate for the gradient of the heat kernel.
\[EstimateKernelCor\] Let $M$ be as above. Then for all $y\in M$, all $r,t>0$, $$\begin{aligned}
\label{intenew}
\int_{M\setminus B(y,r)} \left\vert \nabla_x h_t(x,y)\right\vert \,d\mu(x)\lesssim \frac 1{\sqrt{t}} \exp\left(-c\left(\frac{\rho(r)}t\right)^\frac{1}{m-1}\right),\end{aligned}$$ where $\rho$ is defined in .
Our aim here is to prove Theorem \[thm2\]. More specifically, we will show that the Riesz transform is $H_{\Delta,m,mol}^1(M)-L^1(M)$ bounded. Due to Theorem \[H1equiv\], it is then $H_{\Delta,m}^1(M)-L^1(M)$ bounded. The method we use is similar to the one in [@HM09 Theorem 3.2]. Note that the pointwise assumption simplifies the proof below.
Note first the following lemma, which is crucial in our proof.
\[time derivative\] Let $M$ be as above and let $p\in (1,2)$. Then for any $E, F\subset M$ and for any $n\in \mathbb N$, we have $$\label{gradient-time}
{{\left\lVert{\left|\nabla \Delta^n e^{-t\Delta} f\right|}\right\rVert}}_{L^p(F)} \lesssim
\left\{ \begin{aligned}
&\frac{1}{t^{n+1/2}} e^{-c\frac{d^2(E,F)}{t}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&0<t<1, \\
&\frac{1}{t^{n+1/2}} e^{-c{{\left({\frac{d^m(E,F)}{t}}\right)}}^{1/(m-1)}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&t\geq 1;
\end{aligned}\right.$$ where $f\in L^p(M)$ is supported in $E$. Consequently, $$\begin{aligned}
\label{DG2}
{{\left\lVert{\left|\nabla \Delta^n e^{-t\Delta} f\right|}\right\rVert}}_{L^p(F)} \lesssim
\frac{1}{t^{n+1/2}} e^{-c{{\left({\frac{\rho(d(E,F))}{t}}\right)}}^{1/(m-1)}} {{\left\lVert{f}\right\rVert}}_{L^p(E)}.\end{aligned}$$
To prove the lemma, it is enough to show the following two estimates: $${{\left\lVert{\left|\nabla e^{-t\Delta}f\right|}\right\rVert}}_{L^p(F)} \lesssim
\left\{ \begin{aligned}
& e^{-c\frac{d^2(E,F)}{t}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&0<t<1, \\
& e^{-c{{\left({\frac{d^m(E,F)}{t}}\right)}}^{1/(m-1)}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&t\geq 1,
\end{aligned}\right.$$ and $${{\left\lVert{(t\Delta)^n e^{-t\Delta} f}\right\rVert}}_{L^p(F)} \lesssim
\left\{ \begin{aligned}
& e^{-c\frac{d^2(E,F)}{t}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&0<t<1, \\
& e^{-c{{\left({\frac{d^m(E,F)}{t}}\right)}}^{1/(m-1)}} {{\left\lVert{f}\right\rVert}}_{L^p(E)},&t\geq 1.
\end{aligned}\right.$$ Then follows by adapting the proof of [@HM03 Lemma 2.3]. Note that the first estimate can be obtained by using Stein’s approach, similarly as the proof of Lemma \[EstimateKernelCor\]. The second estimate is a direct consequence of and the analyticity of the heat semigroup (see [@Fe15] for its discrete analogue). We omit the details of the proof here.
Note that implies (see [@CCFR15 Corollary 2.4]), which may simplify the calculation in the subsequent proofs.
Set $T:=\nabla \Delta^{-1/2}$. It suffices to show that, for any $(1,2,\varepsilon)$-molecule $a$ associated with a function $b$ and a ball $B$ with radius $r_B$, there exists a constant $C$ such that $\Vert Ta\Vert_{L^1(M)}\leq C$.
Write $$\begin{aligned}
\label{divide}
T a=T e^{-\rho(r_B)\Delta }a+T \left(I-e^{-\rho(r_B)\Delta }\right)a.\end{aligned}$$ Then $${{\left\lVert{Ta}\right\rVert}}_{L^1(M)} \leq
{{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)a}\right\rVert}}_{L^1(M)}+{{\left\lVert{T e^{-\rho(r_B)\Delta }a}\right\rVert}}_{L^1(M)}
=: I+II.$$
We first estimate $I$. It holds that $$\begin{aligned}
I &\leq \sum_{i\geq 1} {{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(M)}
\\ &\leq
\sum_{i\geq 1} {{\left({{{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)}+{{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(2^{i+2}B)}}\right)}}\end{aligned}$$ Using the Cauchy-Schwarz inequality and the $L^2$ boundedness of $T$ and $e^{-\rho(r_B)\Delta }$, it follows that $$\begin{aligned}
\label{I11}
{{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(2^{i+2}B)}
\lesssim V^{1/2}(2^{i+2}B) {{\left\lVert{a}\right\rVert}}_{L^2(C_i(B))} \lesssim 2^{-i\varepsilon}.\end{aligned}$$ Now we claim: $$\begin{aligned}
\label{I12}
{{\left\lVert{T \left(I-e^{-\rho(r_B)\Delta }\right)\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)}
\lesssim 2^{-i\varepsilon}.\end{aligned}$$ Combining and , we obtain that $I$ is bounded.
In order to prove $\eqref{I12}$, we adapt the trick in [@CCFR15]. For the sake of completeness, we write it down. First note that the spectral theorem gives us $\Delta ^{-1/2}f=c\int_0^\infty e^{-s\Delta}f\frac{ds}{\sqrt s}$. Therefore, $$\begin{split}
\Delta^{-1/2} (I-e^{-\rho(r_B)\Delta}) a &=c\int_0^\infty (e^{-s\Delta} - e^{-(s+\rho(r_B))\Delta})a\frac{ds}{\sqrt s} \\
& = c\int_{0}^\infty \left( \frac{1}{\sqrt s} - \frac{\chi_{\{s>\rho(r_B)\}}}{\sqrt{s-\rho(r_B)}}\right) e^{-s\Delta} a\, ds.
\end{split}$$
Set $$k_{\rho(r_B)}(x,y) = \int_0^\infty {{\left\lvert{\frac{1}{\sqrt s} - \frac{\chi_{\{s>\rho(r_B)\}}}{\sqrt{s-\rho(r_B)}}}\right\rvert}} |\nabla_x h_s(x,y)| ds.$$ Then $$\begin{aligned}
{{\left\lVert{T {{\left({I-e^{-\rho(r_B)\Delta }}\right)}}\mathbbm 1_{C_i(B)} a}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)}
&\lesssim
\int_{M\backslash 2^{i+2}B}\int_{C_i(B)}k_{\rho(r_B)}(x,y)|a(y)|d\mu(y)d\mu(x)
\\&\lesssim
\int_{C_i(B)} |a(y)|\int_{d(x,y)\geq 2^i r}k_{\rho(r_B)}(x,y)d\mu(x)d\mu(y).\end{aligned}$$ It remains to show that $\int_{d(x,y)\geq 2^i r}k_{\rho(r_B)}(x,y)d\mu(x)$ is bounded uniformly in $y$. Indeed, Lemma \[EstimateKernelCor\] yields $$\begin{aligned}
\int_{d(x,y)\geq 2^i r}k_{\rho(r_B)}(x,y)d\mu(x)
&=
\int_0^\infty {{\left\lvert{\frac{1}{\sqrt s} - \frac{\chi_{\{s>\rho(r_B)\}}}{\sqrt{s-\rho(r_B)}}}\right\rvert}}\int_{d(x,y)\geq 2^i r} |\nabla_x h_s(x,y)| d\mu(x) ds
\\ &\lesssim
\int_0^\infty {{\left\lvert{\frac{1}{\sqrt s} - \frac{\chi_{\{s>\rho(r_B)\}}}{\sqrt{s-\rho(r_B)}}}\right\rvert}} \frac 1{\sqrt{s}} \exp{{\left({-c{{\left({\frac{\rho(2^ir)}s}\right)}}^\frac{1}{m-1}}\right)}}ds
\\ &\lesssim 1.\end{aligned}$$ Since ${{\left\lVert{a}\right\rVert}}_{L^1(C_i(B))}\lesssim \mu^{1/2}(2^{i+1}B){{\left\lVert{a}\right\rVert}}_{L^2(C_i(B))}\lesssim 2^{-i\varepsilon}$, this proves .
We now turn to the estimate of $II$. We have $$\begin{aligned}
II &= {{\left\lVert{c\int_0^\infty \nabla e^{-(s+\rho(r_B))\Delta}a\frac{ds}{\sqrt s}}\right\rVert}}_{L^1(M)}
\\ &\lesssim
\int_0^{\rho (r_B)} {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} a\right|}\right\rVert}}_{L^1(M)} \frac{ds}{\sqrt s}
+
\int_{\rho (r_B)}^{\infty} {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \Delta^K b\right|}\right\rVert}}_{L^1(M)} \frac{ds}{\sqrt s}
\\ &=:
II_1+II_2.\end{aligned}$$
We estimate $II_1$ as follows: $$II_1 \leq
\sum_{i\geq 1} \int_0^{\rho (r_B)} {{\left({{{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}a\right|}\right\rVert}}_{L^1(2^{i+2}B)}
+ {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}a\right|}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)}}\right)}} \frac{ds}{\sqrt s}.$$ We estimate the first term inside the sum by the Cauchy-Schwarz inequality and the fact that ${{\left\lVert{\left|\nabla e^{-t\Delta}\right|}\right\rVert}}_{2\to 2} \lesssim \frac{1}{\sqrt t}$. Then $$\begin{split}
\int_0^{\rho (r_B)} {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}a\right|}\right\rVert}}_{L^1(2^{i+2}B)} \frac{ds}{\sqrt s}
&\lesssim
\int_0^{\rho (r_B)} V^{1/2}(2^{i+2}B) {{\left\lVert{a}\right\rVert}}_{L^2(C_i(B))} \frac{ds}{\sqrt {s+\rho(r_B)} \sqrt s}
\\ &\lesssim
2^{-i\varepsilon} \int_0^{\rho (r_B)} \frac{ds}{\sqrt{\rho(r_B)} \sqrt s}
\\ &\lesssim
2^{-i\varepsilon}
\end{split}$$ For the second term inside the sum, we use Lemma \[EstimateKernelCor\] again. Then $$\begin{split}
&\int_0^{\rho (r_B)} {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}a\right|}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)} \frac{ds}{\sqrt s}
\\ &\lesssim
\int_0^{\rho (r_B)} \int_{M\backslash 2^{i+2}B} \int_{C_i(B)} {{\left\lvert{\nabla p_{s+\rho(r_B)}(x,y) a(y)}\right\rvert}} d\mu(y)d\mu(x) \frac{ds}{\sqrt s}
\\ &\lesssim
\int_0^{\rho (r_B)} \int_{C_i(B)} \int_{d(x,y)\geq 2^{i+1}r_B} {{\left\lvert{\nabla p_{s+\rho(r_B)}(x,y)}\right\rvert}} d\mu(x) {{\left\lvert{a(y)}\right\rvert}} d\mu(y) \frac{ds}{\sqrt s}
\\ &\lesssim
{{\left\lVert{a}\right\rVert}}_{L^1(C_i(B))} \int_0^{\rho (r_B)} \frac{ds}{\sqrt {s+\rho(r_B)} \sqrt s}
\\ &\lesssim
2^{-i\varepsilon}.
\end{split}$$
It remains to estimate $II_2$. Using the same method as for $II_1$, we get $$II_2 \leq
\sum_{i\geq 1} \int_{\rho (r_B)}^{\infty} {{\left({{{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta}\Delta^K \mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^1(2^{i+2}B)}
+ {{\left\lVert{\left|\nabla e^{-(s+\rho(r_B))\Delta} \Delta^K\mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)}}\right)}} \frac{ds}{\sqrt s}.$$ For the first term inside the sum, we estimate by using the Cauchy-Schwarz inequality and spectral theory. Then $$\begin{split}
\int_{\rho (r_B)}^{\infty} {{\left\lVert{\left|\nabla \Delta^K e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^1(2^{i+2}B)} \frac{ds}{\sqrt s}
&\lesssim
\int_{\rho (r_B)}^{\infty} \mu^{1/2}(2^{i+2}B) {{\left\lVert{\left|\nabla \Delta^K e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^2(M)} \frac{ds}{\sqrt s}
\\ &\lesssim
\int_{\rho (r_B)}^{\infty} \mu^{1/2}(2^{i+2}B) {{\left\lVert{b}\right\rVert}}_{L^2(C_i(B))} \frac{ds}{(s+\rho(r_B))^{K+1/2} \sqrt s}
\\ &\lesssim
2^{-i\varepsilon} \rho^K (r_B) \int_{\rho (r_B)}^{\infty} \frac{ds}{s^{K+1}}
\lesssim
2^{-i\varepsilon}.
\end{split}$$ For the second term inside the sum, we use Lemma \[time derivative\], then $$\begin{aligned}
&\int_{\rho (r_B)}^{\infty} {{\left\lVert{\left|\nabla \Delta^K e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^1(M\backslash 2^{i+2}B)} \frac{ds}{\sqrt s}
\\ &\lesssim
\sum_{l=i+2}^{\infty} \int_{\rho (r_B)}^{\infty} \mu^{1/p'}(2^{l+1}B) {{\left\lVert{\left|\nabla \Delta^K e^{-(s+\rho(r_B))\Delta} \mathbbm 1_{C_i(B)}b\right|}\right\rVert}}_{L^p(C_l(B))} \frac{ds}{\sqrt s}
\\ &\lesssim
\sum_{l=i+2}^{\infty} \int_{\rho (r_B)}^{\infty} \mu^{1/p'}(2^{l+1}B) \exp{{\left({-c{{\left({\frac{\rho(d(C_l(B),C_i(B)))}{s+\rho(r_B)}}\right)}}^{1/(m-1)}}\right)}}\frac{ {{\left\lVert{b}\right\rVert}}_{L^p(C_i(B))} ds}{\sqrt s {{\left({s+\rho(r_B)}\right)}}^{K+1/2}}
\\ &\lesssim
\sum_{l=i+2}^{\infty} 2^{-i\varepsilon} \rho^K(r_B) {{\left({\frac{\mu(2^{l+1}B)}{\mu(2^i B)}}\right)}}^{1/p'}
\int_{\rho (r_B)}^{\infty} \exp{{\left({-c{{\left({\frac{\rho(2^l r_B)}{s+\rho(r_B)}}\right)}}^{1/(m-1)}}\right)}}\frac{ds}{\sqrt s {{\left({s+\rho(r_B)}\right)}}^{K+1/2}}
\\ &\lesssim
\sum_{l=i+2}^{\infty} 2^{-i\varepsilon} \rho^K(r_B) 2^{(l-i)\nu/p'}
\int_{\rho (r_B)}^{\infty} {{\left({\frac{s}{\rho(2^l r_B)}}\right)}}^{c}\frac{ds}{s^{K+1}}
\\ &\lesssim
\sum_{l=i+2}^{\infty} 2^{-i\varepsilon} \rho^K(r_B) 2^{(l-i)\nu/p'} \frac{1}{\rho^c(2^l r_B)\rho^{K-c}(r_B)}
\\ &\lesssim
2^{-i\varepsilon}.\end{aligned}$$ This finishes the proof.
[**Acknowledgements:**]{} This work is part of the author’s PhD thesis in cotutelle between the Laboratoire de Mathématiques, Université Paris-Sud and the Mathematical Science Institute, Australian National University. The author would like to thank Pascal Auscher for suggesting this topic, and Thierry Coulhon for many discussions and suggestions. She is also grateful to Alan McIntosh and Dorothee Frey for helpful discussions. The author was partially supported by the ANR project “Harmonic analysis at its boundaries” ANR-12-BS01-0013 and the Australian Research Council (ARC) grant DP130101302.
[10]{}
Alex Amenta. Tent spaces over metric measure spaces under doubling and related assumptions. In [*Operator theory in harmonic and non-commutative analysis*]{}, volume 240 of [*Oper. Theory Adv. Appl.*]{}, pages 1–29. Birkh[ä]{}user/Springer, Cham, 2014.
P. Auscher. On necessary and sufficient conditions for [$L^p$]{}-estimates of [R]{}iesz transforms associated to elliptic operators on [$\Bbb R^n$]{} and related estimates. , 186(871):xviii+75, 2007.
P. Auscher, S. Hofmann, and J.-M. Martell. Vertical versus conical square functions. , 364(10):5469–5489, 2012.
P. Auscher, A. McIntosh, and A. Morris. Calderón reproducing formulas and applications to [H]{}ardy spaces. , April 2013. arXiv:1304.0168.
P. Auscher, A. McIntosh, and E. Russ. Hardy spaces of differential forms on [R]{}iemannian manifolds. , 18(1):192–248, 2008.
D. Bakry and Michel [É]{}mery. Diffusions hypercontractives. In [*Séminaire de probabilités, [XIX]{}, 1983/84*]{}, volume 1123 of [*Lecture Notes in Math.*]{}, pages 177–206. Springer, Berlin, 1985.
M. T. Barlow. Which values of the volume growth and escape time exponent are possible for a graph? , 20(1):1–31, 2004.
M. T. Barlow. Analysis on the [S]{}ierpinski carpet. In [*Analysis and geometry of metric measure spaces*]{}, volume 56 of [*CRM Proc. Lecture Notes*]{}, pages 27–53. Amer. Math. Soc., Providence, RI, 2013.
M. T. Barlow and R. F. Bass. Stability of parabolic [H]{}arnack inequalities. , 356(4):1501–1533 (electronic), 2004.
M. T. Barlow, T. Coulhon, and A. Grigor’yan. Manifolds and graphs with slow heat kernel decay. , 144(3):609–649, 2001.
J. Bergh and J. Löfström. Interpolation spaces. An introduction. Springer-Verlag, Berlin, 1976. Grundlehren der Mathematischen Wissenschaften, No. 223.
S. Blunck. Generalized [G]{}aussian estimates and [R]{}iesz means of [S]{}chr[ö]{}dinger groups. , 82(2):149–162, 2007.
S. Blunck and P. C. Kunstmann. Generalized [G]{}aussian estimates and the [L]{}egendre transform. , 53(2):351–365, 2005.
L. Chen, T. Coulhon, J. Feneuil, and E. Russ. Riesz transform for $1 \leq p \le 2$ without [G]{}aussian heat kernel bound. , October 2015. arXiv:1510.08275.
R. R. Coifman, Y. Meyer, and E. M. Stein. Some new function spaces and their applications to harmonic analysis. , 62(2):304–335, 1985.
R. R. Coifman and G. Weiss. Analyse harmonique non-commutative sur certains espaces homogènes. Étude de certaines intégrales singulières. Lecture Notes in Mathematics, Vol. 242. Springer-Verlag, Berlin, 1971.
R. R. Coifman and G. Weiss. Extensions of [H]{}ardy spaces and their use in analysis. , 83(4):569–645, 1977.
T. Coulhon. Dimension [à]{} l’infini d’un semi-groupe analytique. , 114(4):485–500, 1990.
T. Coulhon and A. Sikora. Gaussian heat kernel upper bounds via the [P]{}hragm[é]{}n-[L]{}indel[ö]{}f theorem. , 96(2):507–544, 2008.
M. Cowling, I. Doust, A. McIntosh, and A. Yagi. Banach space operators with a bounded [$H^\infty$]{} functional calculus. , 60(1):51–89, 1996.
X. T. Duong and A. McIntosh. Singular integral operators with non-smooth kernels on irregular domains. , 15(2):233–265, 1999.
C. Fefferman and E. M. Stein. $H^p$ spaces of several variables. , 129(3-4):137–193, 1972.
J. [Feneuil]{}. . , May 2015. arXiv:1505.07001.
L. Grafakos. , volume 249 of [*Graduate Texts in Mathematics*]{}. Springer, New York, second edition, 2008.
A. Grigor’yan. , volume 47 of [*AMS/IP Studies in Advanced Mathematics*]{}. American Mathematical Society, Providence, RI, 2009.
P. Gyrya and L. Saloff-Coste. Neumann and [D]{}irichlet heat kernels in inner uniform domains. , (336):viii+144, 2011.
W. Hebisch and L. Saloff-Coste. On the relation between elliptic and parabolic [H]{}arnack inequalities. , 51(5):1437–1481, 2001.
S. Hofmann, G. Lu, D. Mitrea, M. Mitrea, and L. Yan. Hardy spaces associated to non-negative self-adjoint operators satisfying [D]{}avies-[G]{}affney estimates. , 214(1007):vi+78, 2011.
S. Hofmann and S. Mayboroda. Hardy and [BMO]{} spaces associated to divergence form elliptic operators. , 344(1):37–116, 2009.
S. Hofmann and J. M. Martell. $L^p$ bounds for [R]{}iesz transforms and square roots associated to second order elliptic operators. , 47(2):497–515, 2003.
P. C. Kunstmann and M. Uhl. Spectral multiplier theorems of [H]{}[ö]{}rmander type on [H]{}ardy and [L]{}ebesgue spaces. , 73(1):27–69, 2015.
P. Li. , volume 134 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, 2012.
E. Russ. The atomic decomposition for tent spaces on spaces of homogeneous type. In [*C[MA]{}/[AMSI]{} [R]{}esearch [S]{}ymposium “[A]{}symptotic [G]{}eometric [A]{}nalysis, [H]{}armonic [A]{}nalysis, and [R]{}elated [T]{}opics”*]{}, volume 42 of [*Proc. Centre Math. Appl. Austral. Nat. Univ.*]{}, pages 125–135. Austral. Nat. Univ., Canberra, 2007.
E. M. Stein. Annals of Mathematics Studies, No. 63. Princeton University Press, Princeton, N.J., 1970.
E. M. Stein. , volume 43 of [*Princeton Mathematical Series*]{}. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis, III.
K.-T. Sturm. Analysis on local [D]{}irichlet spaces. [I]{}. [R]{}ecurrence, conservativeness and [$L^p$]{}-[L]{}iouville properties. , 456:173–196, 1994.
K.-T. Sturm. Analysis on local [D]{}irichlet spaces. [II]{}. [U]{}pper [G]{}aussian estimates for the fundamental solutions of parabolic equations. , 32(2):275–312, 1995.
M. Uhl. Spectral multiplier theorems of [H]{}örmander type via generalized [G]{}aussian estimates. , 2011.
N. Th. Varopoulos. Long range estimates for [M]{}arkov chains. , 109(3):225–252, 1985.
---
abstract: 'We use finite size scaling to study Ising spin glasses in two spatial dimensions. The issue of universality is addressed by comparing discrete and continuous probability distributions for the quenched random couplings. The sophisticated temperature dependency of the scaling fields is identified as the major obstacle that has impeded a complete analysis. Once temperature is relinquished in favor of the correlation length as the basic variable, we obtain a reliable estimation of the anomalous dimension and of the thermal critical exponent. Universality among binary and Gaussian couplings is confirmed to a high numerical accuracy.'
author:
- 'L. A. Fernandez'
- 'E. Marinari'
- 'V. Martin-Mayor'
- 'G. Parisi'
- 'J. J. Ruiz-Lorenzo'
bibliography:
- '../biblio.bib'
title: Universal critical behavior of the $2d$ Ising spin glass
---
Introduction.
=============
Spin glasses [@edwards:75] are a rich problem [@binder:86; @mezard:87; @fisher:91; @young:98; @mezard:09; @binder:11b]. In particular the Ising spin glass in $D=2$ spatial dimensions poses questions of interest both for theory and for experiments. The system remains paramagnetic for any temperature $T>0$, but the critical limit at $T=0$ has puzzled theorists for many years [@kirkpatrick:77; @morgenstern:80; @blackman:82; @mcmillan:83; @cheung:83; @bhatt:87; @singh:86; @wang:88; @freund:88; @freund:89; @blackman:91; @berg:92b; @saul:93; @saul:94; @rieger:96; @rieger:97; @hartmann:01b; @carter:02; @amoruso:03; @lukic:04; @jorg:06b; @lukic:06; @liers:07; @katzgraber:07b; @parisen:10; @thomas:11; @parisen:11; @jorg:12; @lundow:15a]. On the other hand recent experiments in spin glasses are carried out in samples with a film geometry [@guchhait:14; @guchhait:15a; @guchhait:15b]. The analysis of these experiments will demand a strong theoretical command.
In the limit $T\to 0$ the physics is dictated by the low energy configurations of the system. The nature of the coupling constants $J$ becomes the ruling factor: if the $J$ are discrete and non-vanishing, an energy gap appears. Instead, the gap disappears if the couplings are allowed to approach the value $J=0$ continuously. Several Renormalization Group (RG) fixed points appear at $T=0$, depending on the nature of the couplings distribution [@amoruso:03]. However, most of these fixed points are unstable even for the tiniest positive temperature: the only remaining universality class is that of the continuous coupling constants [@jorg:06b; @parisen:10; @thomas:11; @parisen:11; @jorg:12] (the very same effect is found in the Random Field Ising model [@fytas:13]).
The distinction between universality classes is unambiguous only in the thermodynamic limit. For finite systems of size $L$, samples with discrete couplings display a crossover at scale $T^*_L$ between continuous ($T\gg T^*_L$) and discrete behavior ($T\ll T^*_L$). How $T^*_L$ tends to zero for large $L$ has been clarified only recently [@thomas:11; @parisen:11] (see below).
Perhaps unsurprisingly given these complications, the critical exponents of the model are poorly known. For the thermal exponent $\nu$ ($\xi \propto T^{-\nu}$, where $\xi$ is the correlation length) we only have crude estimates, $\nu\approx 3.5$ [@jorg:06b] (estimates can be given by using indirect methods, see below). Even worse, the anomalous dimension $\eta$ has to date been impossible to estimate [@jorg:06b; @katzgraber:07b; @parisen:11] (correlations decay with distance $r$ as $C(r)\sim 1/r^{D-2+\eta}$ for $r\lesssim \xi$, making $\eta$ crucial for an out of equilibrium analysis [@janus:08b; @janus:09b; @fernandez:15]). Besides, little is known about the corrections-to-scaling exponent $\omega$.
Here, we remedy this state of affairs by means of large scale Monte Carlo simulations. Crucial ingredients are: (i) we consider both continuous and discrete coupling distributions; (ii) multi-spin coding methods (novel for Gaussian couplings) provide very high statistics; (iii) the non-linear scaling fields (whose importance was emphasized in Ref. [@hasenbusch:08]) cause severe problems in the finite size scaling close to $T=0$, which we are able to solve [^1]. We also obtain for the first time a precise numerical bound for the anomalous dimension, $|\eta|<0.02$. This strongly supports the conjecture $\eta=0$. Decisive evidence for universality follows from our computation of $\omega$. For Gaussian couplings we also obtain a precise estimate of $\nu$.
Model and observable quantities.
================================
We consider the Edwards Anderson model on a square lattice of linear size $L$, with periodic boundary conditions, nearest neighbors interactions and Ising spins $\sigma_{\boldsymbol x}=\pm 1$. The coupling constants $J_{\boldsymbol x\boldsymbol y}$ are quenched random variables. A *sample* is a given couplings realization. Thermal averages for a given sample are denoted as $\langle\ldots\rangle$. The statistical average of thermal mean values over the couplings is denoted by an over-line. We consider two different kinds of coupling distributions, $J_{\boldsymbol
x\boldsymbol y}=\pm 1$ with $50\%$ probability, and a Gaussian distribution with zero mean and unit variance. For later use, we note a *temperature symmetry*: in our problem $T$ and $-T$ are equivalent because of the symmetry $J\leftrightarrow -J$ of the couplings distribution.
We consider real replicas: pairs of spin configurations $\{s_{\boldsymbol x}\}$ and $\{\tau_{\boldsymbol x}\}$ evolving with the same couplings, but otherwise statistically independent. Let $q_{\boldsymbol x}=s_{\boldsymbol
x}\tau_{\boldsymbol x}$. The order parameter $q$ and the Binder ratio $U_4$ are $$\label{eq:Binder}
\textstyle q=\sum_{\boldsymbol x}q_{\boldsymbol x}/L^2\,,\qquad U_4 =
\overline{\langle q^4 \rangle}/\overline{\langle q^2 \rangle}^2\,.$$ $G(\boldsymbol r)= \sum_{\boldsymbol{x}} \overline{\langle
q_{\boldsymbol{x}} q_{\boldsymbol{x}+\boldsymbol r}\rangle}/L^2$ is the overlap-overlap correlation function. From its Fourier transform $\hat G(\boldsymbol k)$ we compute the spin glass susceptibility $\hat G(\boldsymbol k =0)= L^2
\overline{\langle q^2\rangle}$ and the second moment correlation length $\xi_L$ [@cooper:82; @palassini:99; @ballesteros:00; @amit:05].
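As an illustration (this is not the code used in our production analysis; the function name, array shapes and the averaging strategy are purely schematic), the second moment correlation length can be extracted from measured overlap fields with the standard estimator $\xi_L=\frac{1}{2\sin(\pi/L)}\big[\hat G(0)/\hat G(\boldsymbol k_\mathrm{min})-1\big]^{1/2}$, $\boldsymbol k_\mathrm{min}=(2\pi/L,0)$, as in the following Python sketch:

```python
import numpy as np

def xi_second_moment(q_fields):
    """Second-moment correlation length xi_L from overlap configurations.

    q_fields: array of shape (n_meas, L, L); each entry is the site overlap
    q_x = s_x * tau_x of two real replicas of one sample (illustrative only;
    thermal and disorder averaging conventions are left to the caller).
    """
    n_meas, L, _ = q_fields.shape
    q_hat = np.fft.fft2(q_fields, axes=(1, 2))
    # G_hat(k) = <|q_hat(k)|^2> / L^2 ;  G_hat(0) = L^2 <q^2> is the SG susceptibility
    G_hat = np.mean(np.abs(q_hat) ** 2, axis=0) / L**2
    G0 = G_hat[0, 0]
    Gk = 0.5 * (G_hat[1, 0] + G_hat[0, 1])   # smallest non-zero momentum, averaged over both axes
    k_min = 2.0 * np.pi / L
    return np.sqrt(max(G0 / Gk - 1.0, 0.0)) / (2.0 * np.sin(k_min / 2.0))
```

In the sketch the average over measurements stands for both the thermal and the disorder average; in practice one first averages $\hat G$ within each sample and then over samples.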
Finite Size Scaling.
====================
Exactly at $T=0$ our two models behave very differently. In the Gaussian case, barring zero measure exceptions, the ground state (GS) is unique with a continuous spectrum of excitations. As a consequence, at $T=0$ and for any size $L$, $\overline{\langle q^2\rangle}=1$. It follows that the anomalous dimension exponent $\eta=0$ and, according to our definition, $\xi_L=\infty$, even for finite $L$.
The $J\!=\!\pm 1$ model is gapped, with a highly degenerate GS. At large distances the correlation function behaves as $G(\boldsymbol r,
T=0)\sim q_{\mathrm{EA}}^2 + A/r^{\theta_S}$, implying $\xi_L\sim
L^{\theta_S/2}$. $\theta_S\approx
1/2$ [@thomas:11; @saul:93; @lukic:06] is the entropy exponent. This $T=0$ behavior extends up to the crossover scale $T^*_L\sim
L^{-\theta_S}$ [@thomas:11]. In fact, Eqs. (\[eq:FSS-xi\],\[eq:FSS\]) below apply for this model only down to $T\sim L^{-1/\nu}\gg L^{-\theta_S}$ [@parisen:11].
The singular part of the disorder averaged free energy scales as $$F_{\mbox{singular}}\left(\beta,h,L\right) \simeq
L^{-D} f\left( u_h L^{y_h}, u_T L^{y_T} \right)\;,$$ plus sub-leading terms. Here $u_h$ and $u_T$ are the scaling fields [@salas:00; @amit:05; @hasenbusch:08] associated respectively with the magnetic field $h$ and with the temperature $T$ (since our $D=2$ system is only critical at $T=0$)[^2]. The scaling fields $u_T$ and $u_h$ are (asymptotically $L$-independent) analytic functions of $h$ and $T$ that will enter our analysis through the numerical determination of observables like $\xi_L/L$, $U_4$, $q^2$, … Recalling the $T\leftrightarrow -T$ symmetry, one can expand, obtaining $u_T(T,h)=\hat u_T(T)+{\cal O}(h^4)$, where $\hat
u_T(T)\simeq u_1 T(1+\ u_3 T^2 + {\cal O}(T^4))$, and $u_h(T,h)=h^2
\hat u_h(T)+{\cal O}(h^4)$ with $\hat u_h(T)=c_0+\ c_2 T^2 + {\cal
O}(T^4)$.
In terms of the scaling fields the correlation length behaves as $$\label{eq:FSS-xi}
\xi_L=L\, F_\xi (L^{1/\nu}\hat u_T)\ +\ {\cal O}(L^{-\omega})\;,$$ where at variance with $\hat u_T$ and $\hat u_h$, the critical exponents $\nu$ and $\omega$ and the scaling function $F_\xi$ are universal [^3]. We follow Refs. [@parisi:80d; @caracciolo:95; @caracciolo:95b] and we factor out the temperature dependency, finding: $$\label{eq:FSS}
\overline{\langle q^2\rangle} = [\hat u_h(T)]^2
F_{q^2}(\xi_L/L)\,,\ U_4= F_{U_4}(\xi_L/L)\,.$$ In Eq. we have neglected again corrections of order $L^{-\omega}$. The scaling functions $F_{q^2}$ and $F_{U_4}$ are universal.
Simulation details.
===================
High statistics was collected using 128-bit multi-spin coding (see [@newman:99] and appendix \[sect:MSC-GAUSS\]). In the Gaussian case, the same bonds in the 128 copies of the system share the same absolute value of the couplings (only the signs are random and independent in different samples). Still, as shown in appendix \[sect:effective-number\], the statistical gain is significant. We have equilibrated [^4] lattices of linear size $L=4,6,8,12,16,24,32,48,64,96$ and $128$ (see Figure 1 and appendix \[sect:parameters\]).
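To illustrate the multi-spin coding strategy for the binary couplings (a schematic sketch only, not our production code; all names, the lattice size and the temperature below are illustrative), one Metropolis sweep can be organized so that the 128 bit-lanes of a word carry 128 independent samples:

```python
import math
import random

L = 16          # linear lattice size (illustrative value)
NLANES = 128    # number of samples simulated in parallel, one per bit-lane
MASK = (1 << NLANES) - 1

def bernoulli_word(p):
    """Word whose NLANES bits are independently 1 with probability p
    (production codes use much faster generators; this is only a sketch)."""
    w = 0
    for lane in range(NLANES):
        if random.random() < p:
            w |= 1 << lane
    return w

def count_bits(u1, u2, u3, u4):
    """Bitwise full adder: lane-wise number of set bits among u1..u4,
    returned as the bit-planes (ones, twos, fours)."""
    s1, c1 = u1 ^ u2, u1 & u2
    s2, c2 = u3 ^ u4, u3 & u4
    ones = s1 ^ s2
    carry = s1 & s2
    twos = c1 ^ c2 ^ carry
    fours = (c1 & c2) | (carry & (c1 ^ c2))
    return ones, twos, fours

def metropolis_sweep(sigma, jx, jy, beta):
    """One Metropolis sweep of the 2d +-J Edwards-Anderson model.
    Spins and couplings are stored as bits (0 -> +1, 1 -> -1); the bit-lanes
    are independent samples that share only the lattice geometry."""
    p4, p8 = math.exp(-4.0 * beta), math.exp(-8.0 * beta)
    for x in range(L):
        for y in range(L):
            s = sigma[x][y]
            # a bond is unsatisfied iff J s s' = -1, i.e. the XOR of the
            # coupling bit with the two spin bits equals 1
            u1 = s ^ sigma[(x + 1) % L][y] ^ jx[x][y]
            u2 = s ^ sigma[(x - 1) % L][y] ^ jx[(x - 1) % L][y]
            u3 = s ^ sigma[x][(y + 1) % L] ^ jy[x][y]
            u4 = s ^ sigma[x][(y - 1) % L] ^ jy[x][(y - 1) % L]
            ones, twos, fours = count_bits(u1, u2, u3, u4)
            # flipping the spin gives Delta E = 8 - 4 n_unsat:
            # n_unsat >= 2 -> always accept; n_unsat == 1 -> exp(-4 beta);
            # n_unsat == 0 -> exp(-8 beta)
            always = twos | fours
            eq1 = ones & ~twos & ~fours & MASK
            eq0 = ~ones & ~twos & ~fours & MASK
            accept = always | (eq1 & bernoulli_word(p4)) | (eq0 & bernoulli_word(p8))
            sigma[x][y] ^= accept

# illustrative initialization: random spins and random +-1 couplings per lane
random.seed(0)
sigma = [[bernoulli_word(0.5) for _ in range(L)] for _ in range(L)]
jx = [[bernoulli_word(0.5) for _ in range(L)] for _ in range(L)]
jy = [[bernoulli_word(0.5) for _ in range(L)] for _ in range(L)]
for sweep in range(10):
    metropolis_sweep(sigma, jx, jy, beta=1.0)
```

The sketch covers only the binary case; the Gaussian variant, which shares $|J|$ across the bit-lanes as explained above, is described in appendix \[sect:MSC-GAUSS\].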
On Universality.
================
![(color online) [**Top:**]{} Binary model correlation length (in units of the system size) versus temperature. $\xi_L/L$ approaches its $T=0$ limit exponentially in $1/T$ (because of the existence of an energy gap). We have an inflection point at $T\!=\!
T_\mathrm{inf}^{(L)}$ (obtained from a cubic spline interpolation of $\xi_L/L$), which we regard as a proxy for the crossover scale $T^*_L$ [@thomas:11]. At low $T$ (discontinuous lines) we use fewer samples, see appendix \[sect:parameters\]. [**Inset:**]{} Size evolution of the inflection points $T_\mathrm{inf}^{(L)}$ (red full squares), compared to $T_{\xi_L/L=0.5}^{(L)}$ (open green circles). Data for the binary model. As expected [@parisen:11], the two temperature scales decouple for large $L$. [**Bottom:**]{} $\xi_L/L$ vs. $T$ for the Gaussian model does not show any crossover.[]{data-label="fig:xi"}](xiL-2w){height="\columnwidth"}
Let us start with $\xi_L$. The Gaussian model, Fig. \[fig:xi\]–bottom, displays the expected divergence upon approaching $T=0$. In fact, the temperature where $\xi_L/L=x$, denoted $T_{(\xi_L/L)=x}^{(L)}$ hereafter, decreases for larger sizes \[Eq. (\[eq:FSS-xi\]) predicts $T_{(\xi_L/L)=x}^{(L)}\sim
L^{-1/\nu}$, see below\]. As for the binary model, see Fig. \[fig:xi\]–top and inset, its $\xi_L/L$ curves reflect the different behaviors above and below the temperature scale $L^{-\theta_S}$ [@thomas:11]. Here we do not investigate further the $T\!\approx\! 0$ region nor this crossover.
Fortunately, universality emerges clearly if we bypass the temperature dependency as done in Eqs. (\[eq:FSS-xi\],\[eq:FSS\]). $U_4$ at $T_{\xi_L/L}^{(L)}$ reaches a $\xi_L/L$-dependent universal limit for large values of $L$, as shown in Fig. \[fig:omega\]. We compute the corrections to scaling exponent $\omega$ from the behavior of $U_4$. One expects corrections to the leading behavior: $$\label{eq:omega-fit}
U_4^{(L)}\big(T_{\xi_L/L}^{(L)}\big)=F_{U_4}({\textstyle\frac{\xi_L}{L}})+a({\textstyle\frac{\xi_L}{L}}) L^{-\omega}+
b({\textstyle\frac{\xi_L}{L}})L^{-(2-\eta)}\ldots\,.$$ The amplitudes $a(\frac{\xi_L}{L}),b(\frac{\xi_L}{L})$ are model and $\xi_L/L$-dependent. If $\eta\!=\!0$ analytic corrections are ${\cal O}(L^{-2})$ [@amit:05].
We fit together binary and Gaussian data to Eq. by standard $\chi^2$ minimization, imposing a common $F_{U_4}(\xi_L/L)$. The goodness-of-fit estimator $\chi^2$ is computed with the full covariance matrix, which limits the number of $\xi_L/L$-values that one may consider simultaneously in the fit.
In our fit to Eq. we include data for $\xi_L/L=0.3, 0.42, 0.54$ and $L\geq L_\mathrm{min}$. We impose two requirements: (i) an acceptable $\chi^2/\mathrm{dof}$; (ii) stability in the fitted parameters upon increasing $L_\mathrm{min}$. We obtain $\omega=0.80(10)$ for $L_\mathrm{min}=16$, with $\chi^2/\mathrm{dof}=23.9/26$. Interestingly, the amplitude $a({\frac{\xi_L}{L}})$ for the Gaussian model is compatible with zero for all values of $\xi_L/L$: the Gaussian model seems free of the leading corrections to scaling [^5].
As a check of systematic errors, we performed a second fit imposing $b(\frac{\xi_L}{L})=0$ and, for the Gaussian data, also $a(\frac{\xi_L}{L})=0$. We obtained $\omega=0.69(5)$ for $L_\mathrm{min}=32$ and $\chi^2/\mathrm{dof}=14.3/23$. Our final estimate is $$\omega=0.75(10)(5)\,,$$ where the first error is statistical and the second systematic.
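A minimal sketch of such a joint fit is shown below. It uses synthetic stand-in data at a single value of $\xi_L/L$, keeps only the leading $L^{-\omega}$ correction with a universal limit $F_{U_4}$ and a common $\omega$, and relies on `scipy.optimize.curve_fit` with independent errors; our actual analysis also includes the sub-leading term and the full covariance matrix.
```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in data at one fixed xi_L/L (not the measured values).
L = np.array([16, 24, 32, 48, 64, 96, 128], dtype=float)
rng = np.random.default_rng(0)
U4_binary = 1.40 + 0.20 * L**-0.75 + 0.002 * rng.normal(size=L.size)
U4_gauss = 1.40 + 0.00 * L**-0.75 + 0.002 * rng.normal(size=L.size)

sizes = np.concatenate([L, L])
is_gauss = np.concatenate([np.zeros(L.size), np.ones(L.size)])

def model(x, F, omega, a_bin, a_gauss):
    """Common universal limit F and exponent omega, model-dependent amplitude."""
    LL, flag = x
    a = np.where(flag > 0.5, a_gauss, a_bin)
    return F + a * LL ** (-omega)

popt, pcov = curve_fit(model, (sizes, is_gauss),
                       np.concatenate([U4_binary, U4_gauss]),
                       p0=[1.4, 0.8, 0.2, 0.0])
print(f"omega = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```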
![(color online) Binder ratio $U_4$, Eq. , at $T$ where $\xi/L=0.3$ ([**top**]{}), 0.42 ([**center**]{}) and 0.54 ([**bottom**]{}) as a function of $L^{-\omega}$ for the two models. The large $L$ limit is model independent. The $\omega$ exponent and the solid lines were obtained from a joint fit to Eq. .[]{data-label="fig:omega"}](Binder_xiLfijo3-5-3w){height="\columnwidth"}
The anomalous dimension.
========================
Previous investigations have never succeeded in computing the anomalous dimension of the $2D$ spin glass. Our key idea is that Eq. implies $\eta=0$, provided that $\hat
u_h(T\!=\!0)\neq 0$ (traditional methods cannot handle the prefactor $[\hat
u_h(T)]^2$, see appendix \[sect:traditional\]).
We focus on the temperature dependence of $\overline{\langle q^2\rangle}$, as computed at fixed $\xi_L/L$. For each $L$ we choose $T=T_{\xi_L/L}^{(L)}$, see the two insets in Fig. \[fig:scaling-gy\]. Eq. tells us that, apart from a constant factor $F_{q^2}(\xi_L/L)$, the curves should be smooth functions of $T^2$.
To compute the universal function $F_{q^2}(\xi_L/L)$ we arbitrarily fix the scale $(\xi_L/L)=0.4$ (since, see Fig. \[fig:scaling-gy\], all our curves for $\overline{\langle q^2\rangle}$ at fixed $\xi_L/L$ have some temperature overlap with the curve for $(\xi_L/L)=0.4$). We fit each curve $\overline{\langle q^2\rangle}$ at fixed $\xi_L/L$ to a quadratic polynomial in $T^2$ over an interval $0< T^2 < T^2_{\mathrm{max}, \xi_L/L}$, see appendix \[sect:T-fits\]. We compute $g(\xi_L/L)\equiv F_{q^2}(0.4)/F_{q^2}(\xi_L/L)$ as the ratio of the two $T^2$-fits, the one for a generic value of $\xi_L/L$ and the fit for $(\xi_L/L)=0.4$, both evaluated at $T^2=T^2_{\mathrm{max}, \xi_L/L}/2$.
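The construction of $g(\xi_L/L)$ can be condensed into a short helper. The sketch below assumes, for each prescribed $\xi_L/L$, arrays of crossing temperatures and of $\overline{\langle q^2\rangle}$ values (one entry per system size); the function names and the data layout are hypothetical.
```python
import numpy as np

def fit_in_T2(T, q2):
    """Quadratic polynomial in T^2 fitted to q2 (errors omitted for brevity)."""
    return np.polynomial.Polynomial.fit(T**2, q2, deg=2)

def g_ratio(T_x, q2_x, T2max_x, T_ref, q2_ref):
    """g(x) = F_{q^2}(0.4) / F_{q^2}(x) from the ratio of the two T^2-fits,
    evaluated at T^2 = T2max_x / 2 (T_ref, q2_ref belong to xi_L/L = 0.4)."""
    p_x = fit_in_T2(T_x, q2_x)
    p_ref = fit_in_T2(T_ref, q2_ref)
    t2 = 0.5 * T2max_x
    # q2 = [u_h(T)]^2 F_{q^2}(x): the T-dependent prefactor cancels in the ratio
    return p_ref(t2) / p_x(t2)
```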
Our computation of the ratio $g(\xi_L/L)$ passes three consistency tests: (i) $g(\xi_L/L)$ turns out to be essentially model independent (Fig. \[fig:scaling-gy\]); (ii) $g(\xi_L/L)\sim (L/\xi_L)^{2}$ for small $\xi_L/L$ (Fig. \[fig:scaling-gy\]); (iii) the product of $\overline{\langle q^2\rangle}$ at fixed $\xi_L/L$ with $g(\xi_L/L)$ produces $\xi_L/L$-independent curves (Fig. \[fig:q2-scaling\]).
Fig. \[fig:q2-scaling\] shows the (modified) scaling field $[\hat
u_h(T)]^2 F_{q^2}(0.4)$. Given the $T^2$ fits it is straightforward to extrapolate $[\hat u_h(T)]^2 F_{q^2}(0.4)$ to $T^2=0$ (dashed lines in Fig. \[fig:q2-scaling\]). For both models the extrapolation is non-vanishing (implying $\eta=0$).
Finally, we obtain $\eta=0.00(2)$ from the scaling $g(x)\sim
x^{\eta-2}$ for small $x=\xi_L/L$ ($L\to\infty$ is taken at fixed $x$, see appendix \[sect:g-computation\]).
![(color online) Order parameter $\overline{\langle
q^2\rangle}$ computed at fixed values of $\xi_L/L$ vs. $\big[T_{\xi_L/L}^{(L)}\big]^2$, for the binary ([**upper inset**]{}) and the Gaussian ([**lower inset**]{}) models. [**Main:**]{} universal scaling function $g(\xi_L/L)=F_{q^2}(0.4)/F_{q^2}(\xi_L/L)$, Eq. , as computed for the Gaussian (empty symbols) and the binary (full symbols) models. The function $g(x=\xi_L/L)$ scales as $1/x^2$ for small $x$ (dashed line).[]{data-label="fig:scaling-gy"}](g_scaling-3w){height="\columnwidth"}
![(color online) Scaling field $[\hat u_h(T)]^2$ (from Eq. ) vs. $[T_{\xi_L/L}^{(L)}]^2$, as computed for the Gaussian ([**top**]{}) and the binary ([**bottom**]{}) models. The data collapses were obtained by multiplying the data in the two insets in Fig. \[fig:scaling-gy\] by the universal function $g(\xi_L/L)$, depicted in the main panel of Fig. \[fig:scaling-gy\]. The dots are for the extrapolation to $T^2=0$. The binary model data show the crossover between the $T=0$ (small $L$) and $T>0$ (large $L$) regimes (see Fig. \[fig:xi\] and Refs. [@thomas:11; @parisen:11]).[]{data-label="fig:q2-scaling"}](q2_xiLfijo-2w){height="\columnwidth"}
The thermal exponent. {#sect:nu}
=====================
The exponent $\nu$ has never been successfully computed for this model [^6]. RG suggests that $1/\nu=-\theta$, where $\theta$ is the stiffness exponent controlling the size scaling of the change in the ground state energy when considering periodic and anti-periodic boundary conditions. Accurate determinations of $\theta$ are available for the Gaussian model: $-\theta=0.281(2)$ [@rieger:96], $0.282(2)$ [@hartmann:01b], $0.282(3)$ [@carter:02] and $0.282(4)$ [@amoruso:03]. A computation for the random anisotropy model yields $\theta=0.275(5)$ [@liers:07]. We shall obtain results of comparable accuracy for $1/\nu$. Due to the strong cross-over effects suffered by the binary model (see Fig. \[fig:xi\]) we estimate $1/\nu$ for the Gaussian model only.
We base our analysis on the determination of $T_{\xi_L/L}^{(L)}$. Even disregarding the leading universal corrections to scaling (see above our computation of $\omega$), Eq. (\[eq:FSS-xi\]) predicts a rather complex behavior, with $\hat u_T(T_{\xi_L/L}^{(L)})= L^{-1/\nu}F_\xi^{-1}(\xi_L/L)$. Inverting this relation, one obtains $T_{\xi_L/L}^{(L)} = d_1^{(\xi_L/L)} L^{-1/\nu} + d_3^{(\xi_L/L)}
L^{-3/\nu}+ d_5^{(\xi_L/L)} L^{-5/\nu}+\ldots$. Since $1/\nu\approx0.28$, we expect annoying corrections to scaling due to the non-linearity of the scaling fields. Were $\hat u_T(T)$ analytically known, we could easily get rid of these corrections. We shall not achieve this, but we shall get close to it.
In order to eliminate the unknown scaling function $F_\xi$, we compare couples of lattices of size $L$ and $2L$: $$\label{eq:nu-fit-0}
Q_T(L)=\frac{T_{\xi_L/L}^{(2L)}}{T_{\xi_L/L}^{(L)}}=2^{-1/\nu}\,\frac{1+u_3 [T_{\xi_L/L}^{(L)}]^2+\ldots}{1+u_3 [T_{\xi_L/L}^{(2L)}]^2+\ldots}\,.$$ In fact, see Fig. \[fig:nu\_eff\]–top, scaling corrections are strong, and strongly dependent on $\xi_L/L$.
We can alleviate the situation by introducing a renormalized quotient $$\label{eq:nu-fit-1}
Q^{\mathrm{R}}_T(L)= \frac{T_{\xi_L/L}^{(2L)}}{T_{\xi_L/L}^{(L)}}\,
\frac{1+\hat u_3 [T_{\xi_L/L}^{(2L)}]^2}{1+\hat u_3 [T_{\xi_L/L}^{(L)}]^2}\,.$$ Setting $\hat u_3=u_3$ we would have $Q^{\mathrm{R}}_T(L)=2^{-1/\nu}+{\cal O}(u_5 L^{-4/\nu})$. We have found that $\hat u_3=-0.32$ produces a negligible slope: the remaining corrections in Fig. \[fig:nu\_eff\]–bottom are certainly of a different origin (either $u_5$ terms, analytic corrections to scaling, or even $L^{-\omega}$ terms).
We fitted $Q^{\mathrm{R}}_T(L)=2^{-1/\nu}+ d^{(\xi_L/L)} L^{-2/\nu}$ (i.e. we did *not* assume $\hat u_3=u_3$), finding $$1/\nu=0.283(6)\,,\quad \chi^2/\mathrm{dof}=4.1/6\,\ (L_\mathrm{min}=64).$$ Variations of $10\%$ in $\hat u_3$ change the $1/\nu$ estimate by one third of the error bar. Furthermore, we can fit $Q_T(L)$ directly, see Fig. \[fig:nu\_eff\]–top. In this case, we need to introduce corrections quadratic in $L^{-2/\nu}$. We find a fair fit for $L_{\mathrm{min}}=16$ with $1/\nu=0.275(9)$.
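A sketch of the renormalized-quotient analysis is given below, with placeholder numbers in place of the measured crossing temperatures; note that, since $T\sim L^{-1/\nu}$, the quotients tend to $2^{-1/\nu}\approx 0.82$.
```python
import numpy as np
from scipy.optimize import curve_fit

def renormalized_quotient(T_L, T_2L, u3_hat=-0.32):
    """Q_T^R(L) of Eq. (nu-fit-1) from the crossing temperatures of L and 2L."""
    return (T_2L / T_L) * (1.0 + u3_hat * T_2L**2) / (1.0 + u3_hat * T_L**2)

# Placeholder values of Q_T^R at one fixed xi_L/L (not the measured data).
L = np.array([16, 32, 64, 128], dtype=float)
QR = np.array([0.816, 0.819, 0.821, 0.822])

def fit_form(L, inv_nu, d):
    return 2.0 ** (-inv_nu) + d * L ** (-2.0 * inv_nu)

popt, pcov = curve_fit(fit_form, L, QR, p0=[0.28, 0.1])
print(f"1/nu = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
```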
![(color online) Computing $\nu$ for the Gaussian model. Bare \[[**top**]{}, see Eq. \] and Renormalized \[[ **bottom**]{}, Eq. with $\hat u_3=-0.32$\] temperature quotients at fixed $\xi_L/L (=0.3,0.42,0.54)$ as a function of $L^{-2/\nu}$. Continuous lines are our fits (see text), dotted lines are guides to eyes.[]{data-label="fig:nu_eff"}](nu-2w){height="\columnwidth"}
Conclusions.
============
We have presented a high-accuracy numerical simulation of the Edwards-Anderson spin glass model in $2D$. We consider systems with binary and Gaussian random couplings. By focusing on renormalized quantities we are able to bypass the peculiar temperature evolution dictated by the binary distribution. The Binder ratios at fixed $\xi_L/L$ are fully compatible, within the precision given by our small statistical errors, with a single universality class. This analysis yields the first computation of the leading corrections to scaling exponent $\omega$. We identify the non-linearity of the scaling fields as the major obstacle that has so far impeded an accurate computation of critical quantities. We are able to give strong numerical evidence that the anomalous dimension $\eta$ vanishes. We consider the temperature evolution for the Gaussian distribution, which is free of cross-over effects, and obtain a reliable direct estimate of $\nu$. Therefore, we are able to provide a stringent test of the generally assumed equivalence $\theta=-1/\nu$.
Acknowledgments
===============
This work was partially supported by the Ministerio de Ciencia y Tecnología (Spain) through Grant Nos. FIS2012-35719-C02 and FIS2013-42840-P, and by the Junta de Extremadura (Spain) through Grant No. GRU10158 (partially funded by FEDER).
Parameters of simulations and fits {#sect:parameters}
==================================
Numerical simulations
---------------------
$L$ $N_\mathrm{samples}$ $N_\mathrm{MCS}$ $N_\mathrm{T}$ $T_\mathrm{min}$ $T_\mathrm{max}$
-------- ---------------------- ------------------ ---------------- ------------------ ------------------
4 25600 320000 14 0.20 1.5
$4^*$ 204800 80000 20 0.72 1.5
6 25600 320000 14 0.20 1.5
$6^*$ 204800 80000 20 0.65 1.5
8 25600 320000 14 0.20 1.5
$8^*$ 204800 80000 22 0.60 1.5
12 25600 320000 14 0.20 1.5
$12^*$ 204800 80000 19 0.53 1.5
16 25600 320000 14 0.20 1.5
$16^*$ 204800 80000 18 0.47 1.5
24 25600 320000 14 0.20 1.5
$24^*$ 204800 80000 16 0.45 1.5
32 25600 1280000 14 0.20 1.5
$32^*$ 204800 80000 18 0.40 1.5
48 25600 1920000 27 0.20 1.5
$48^*$ 204800 160000 27 0.35 1.5
64 25600 640000 26 0.25 1.5
$64^*$ 204800 240000 26 0.35 1.5
96 102400 320000 49 0.30 1.5
128 25600 640000 49 0.30 1.5
: Details of the numerical simulations for the binary model. We show the simulation parameters for each lattice size $L$. $N_\mathrm{samples}$ is the number of simulated samples (in bunches of 128 samples, due to multi-spin coding). $N_\mathrm{T}$ is the number of temperatures that were used in parallel tempering, with maximum and minimum temperatures $T_{\mathrm{max}}$ and $T_{\mathrm{min}}$, respectively. In general, temperatures were evenly spaced. However some system sizes appear twice in the table. In fact, we performed some higher accuracy simulations, marked by a $^*$, aiming to increase the accuracy in the computation of $T_{\xi_L/L}^{(L)}$, the temperature where $\xi_L/L$ reaches a given prescribed value (see Fig. 1) and to improve the computation of $\omega$ (see Fig. 2). For those extended runs, we increased the number of temperatures in the region where $\xi_L/L >0.3$, in order to reduce the error for temperature interpolations. Finally, $N_\mathrm{MCS}$ is the number of Monte Carlo steps (MCS) used in each numerical simulation. Each MCS consisted of 10 Metropolis sweeps at fixed temperature, followed by a cluster update [@houdayer:01] and by a Parallel Tempering step [@hukushima:96; @marinari:98b].[]{data-label="tab:simulations_binary"}
$L$ $N_\mathrm{samples}$ $N_\mathrm{MCS}$ $N_\mathrm{T}$ $T_\mathrm{min}$ $T_\mathrm{max}$
----- ---------------------- ------------------ ---------------- ------------------ ------------------
4 204800 160000 31 0.1 1.5
6 204800 160000 31 0.1 1.5
8 204800 160000 31 0.1 1.5
12 204800 160000 31 0.1 1.5
16 204800 160000 31 0.1 1.5
24 204800 160000 31 0.1 1.5
32 204800 320000 31 0.1 1.5
48 204800 160000 27 0.2 1.5
64 25600 320000 53 0.2 1.5
96 25600 480000 41 0.2 0.7
128 25600 800000 41 0.2 0.7
: Simulation details for the Gaussian model, as in Table \[tab:simulations\_binary\]. Here the number of samples $N_\mathrm{samples}$ is given by the number of random choices of the absolute values of the couplings times 128 independent random choices of the coupling signs for each set of absolute values (see Sect. \[sect:MSC-GAUSS\]).[]{data-label="tab:simulations_gaussian"}
The parameters describing our multi-spin coding simulations are given in Tables \[tab:simulations\_binary\] and \[tab:simulations\_gaussian\]. We treat temperature as a continuous variable, even though our data are obtained only on the temperature grid where our Parallel Tempering simulations take place. We solve this problem by using a standard cubic-spline interpolation. Note that data for neighboring temperatures are statistically correlated (because we use Parallel Tempering), which makes interpolation particularly easy in our case.
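A minimal version of this interpolation step, assuming `scipy` and data that bracket the prescribed value of $\xi_L/L$; the statistical errors (e.g. from a jackknife over samples) are not propagated here.
```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

def crossing_temperature(T_grid, xi_over_L, target):
    """Temperature at which the spline-interpolated xi_L/L reaches `target`.

    Assumes the target value is bracketed by the simulated temperature grid
    (xi_L/L decreases with T, so the root is unique in practice)."""
    order = np.argsort(T_grid)
    spline = CubicSpline(T_grid[order], xi_over_L[order])
    return brentq(lambda T: spline(T) - target,
                  T_grid[order][0], T_grid[order][-1])
```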
Temperature fits {#sect:T-fits}
----------------
The computation of the scaling field $\hat u_h(T)$ and of the scaling function $F_{q^2}$, depicted in Figs. 3 and 4, is based on a temperature fit. For each prescribed value of $\xi_L/L$ and each system size $L$, we considered $\overline{ \langle
q^2\rangle}_{\xi_L/L}$ (namely the squared spin overlap as computed at $T=T_{\xi_L/L}^{(L)}$, the temperature needed to have $\xi_L/L$ equal to its prescribed value in a system of size $L$). For each fixed value of $\xi_L/L$ we fitted $\overline{\langle
q^2\rangle}_{\xi_L/L}$, as computed for all our system sizes, to a second order polynomial in $[T_{\xi_L/L}^{(L)}]^2$. The fits were performed in the range $0<T^2 < T^2_{\mathrm{max},\xi_L/L}$. The values of $T^2_{\mathrm{max},\xi_L/L}$ were obtained with a simple algorithm: 1) For $\xi_L/L=0.1$ we took $T^2_{\mathrm{max},\xi_L/L}=0.8$. 2) We increased $\xi_L/L$ in steps of $0.05$. 3) At each such step, $T^2_{\mathrm{max},\xi_L/L}$ was divided by $1.1$.
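The rule for $T^2_{\mathrm{max},\xi_L/L}$ amounts to a few lines of code (the binary-model overrides at large $\xi_L/L$, discussed next, are not included):
```python
def t2max_schedule(x_min=0.1, x_max=0.7, step=0.05, t2_start=0.8, factor=1.1):
    """T^2_max as a function of xi_L/L, following the simple rule of the text."""
    schedule = {}
    x, t2 = x_min, t2_start
    while x <= x_max + 1e-9:
        schedule[round(x, 2)] = t2
        x += step
        t2 /= factor
    return schedule
```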
The above procedure has general validity. However for the binary case at large $\xi_L/L\geq 0.6$ our data are strongly affected by the crossover from the $T>0$ to the $T=0$ behavior [@thomas:11; @parisen:11], illustrated in Figs. 1 and 3. In order to avoid as much as possible the effects of this crossover in the temperature window used in the fit, we employed $T^2_{\mathrm{max},\xi_L/L}=0.19, 0.13$ and $0.11$ for $\xi_L/L=0.6, 0.65$ and $0.7$, respectively. Also for these three cases, the comparison with $\xi_L/L=0.4$ (needed to compute the scaling function $g$ in Fig. 3) was done at $0.8
T^2_{\mathrm{max},\xi_L/L}$.
Multi-spin coding the Gaussian model {#sect:MSC-GAUSS}
====================================
This section is divided into two parts. We first explain how we define the multi-spin coding algorithm with Gaussian couplings in \[sect:algorithm\]. Next, we assess in \[sect:effective-number\] the statistical effectiveness of our algorithm.
The algorithm {#sect:algorithm}
-------------
It has been known for a long time how to perform the Metropolis update of a single spin using only Boolean operations (AND, XOR, etc.), provided that the couplings are binary $J_{\boldsymbol x\boldsymbol y}=\pm1$, see e.g. [@newman:99]. Besides, modern CPUs perform independent Boolean operations synchronously for all the bits in a computer word.
Multi-spin coding is the fruitful combination of the above two observations: one codes, and simulates in parallel, as many different samples as the number of bits a word contains. Modern CPUs enjoy streaming extensions that allow one to code in a single word 128 (or even more) spins pertaining to the same site but to different samples. The most efficient version of our programs turns out to be the one with 128-bit words.
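As a toy illustration of the idea (in plain Python rather than the 128-bit streaming-extension code we actually use), one bit per sample encodes one spin, and a single bitwise operation then acts on all the samples at once:
```python
import random

WORD = 64  # bit = 0 encodes sigma = +1, bit = 1 encodes sigma = -1
random.seed(0)
spins_x = random.getrandbits(WORD)   # 64 samples of the spin at site x
spins_y = random.getrandbits(WORD)   # 64 samples of the spin at a neighbor y
signs_xy = random.getrandbits(WORD)  # 64 samples of sgn(J_xy), same encoding

# sgn(J_xy) * sigma_x * sigma_y for all samples at once (bit = 1 means -1)
products = spins_x ^ spins_y ^ signs_xy

# Flipping sigma_x in a chosen subset of samples is a single XOR with a mask
flip_mask = 0b1011                   # hypothetical mask: flip samples 0, 1 and 3
spins_x ^= flip_mask
```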
The situation changes, of course, when the couplings $J_{\boldsymbol
x\boldsymbol y}$ are drawn from a continuous distribution, such as a Gaussian. In fact, we are not aware of working multi-spin coding strategies when the coupling distribution is continuous. We now explain how we circumvented this problem [^7].
Before describing our algorithm let us spell out the standard Metropolis algorithm, phrased in a somewhat unusual (but fully orthodox) way; a minimal code sketch of this update follows the three steps below. Imagine we are working at inverse temperature $\beta=1/T$. When updating site ${\boldsymbol x}$ we attempt to flip the spin $\sigma_{\boldsymbol x}\rightarrow -\sigma_{\boldsymbol x}$. Specifically,
1. We extract a random number $R$ uniformly distributed in $[0,1)$.
2. We compute the energy change $\Delta E$ that the system would suffer if the spin $\sigma_x$ was flipped. In our case, $\Delta E= 2
\sum_{{\boldsymbol y} \text{ neighbor of } {\boldsymbol x}}
J_{\boldsymbol x\boldsymbol y} \sigma_{\boldsymbol
x}\sigma_{\boldsymbol y}$.
3. We reject the spin flip only if $\exp(-\beta \Delta E) < R$. Otherwise, we flip the spin.
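A plain (single-sample, not multi-spin coded) version of this update for the $2D$ Edwards-Anderson model might look as follows; the array names and the coupling layout are hypothetical, and no attempt at efficiency is made.
```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 1.0 / 0.9
sigma = rng.choice([-1, 1], size=(L, L))
# Jx[i, j] couples (i, j) to (i, j+1); Jy[i, j] couples (i, j) to (i+1, j)
Jx = rng.normal(size=(L, L))
Jy = rng.normal(size=(L, L))

def metropolis_sweep(sigma, Jx, Jy, beta):
    """One sweep of single-spin Metropolis updates with periodic boundaries."""
    L = sigma.shape[0]
    for i in range(L):
        for j in range(L):
            h = (Jx[i, j] * sigma[i, (j + 1) % L]
                 + Jx[i, (j - 1) % L] * sigma[i, (j - 1) % L]
                 + Jy[i, j] * sigma[(i + 1) % L, j]
                 + Jy[(i - 1) % L, j] * sigma[(i - 1) % L, j])
            dE = 2.0 * sigma[i, j] * h             # energy change of the flip
            if rng.random() < np.exp(-beta * dE):  # always true for dE <= 0
                sigma[i, j] *= -1
    return sigma
```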
So, we shall first get the random number $R$, then check if the actual $\Delta E$ forces us to reject the spin-flip. Let us see how it works.
Let us call $N_{\boldsymbol x}$ the set of the four nearest neighbors of ${\boldsymbol x}$ in the square lattice endowed with periodic boundary conditions. For later use, let us also split the couplings into their absolute values and their signs $J_{\boldsymbol
x\boldsymbol y}= |J_{\boldsymbol x\boldsymbol y}|\;
\mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})$. The crucial observation is that for fixed $ |J_{\boldsymbol x\boldsymbol y}|$ the sum $$\label{eq:Sx}
S_{\boldsymbol x} = \sum_{\boldsymbol y\in N_{\boldsymbol x}}
\left|J_{\boldsymbol x\boldsymbol y}\right|\;
\mathrm{sgn}\left(J_{\boldsymbol x\boldsymbol y}\right)\;
\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\,,$$ can only take $2^4=16$ different values, because each term of the sum in Eq. is a binary variable \[$\mathrm{sgn}(J_{\boldsymbol x\boldsymbol y}) \sigma_{\boldsymbol
x}\sigma_{\boldsymbol y}=\pm 1$\] and there are 4 neighboring sites ${\boldsymbol y}$. Of course, $S_{\boldsymbol x}=\Delta E/2$ (recall the above description of the Metropolis algorithm). Now, let us name the $16$ possible values of $S_{\boldsymbol x}$ as $$s_0<s_1<\ldots<s_7<0<s_8<s_9<\ldots<s_{15}\,.$$ In fact, the symmetry of the problem ensures that $s_7=-s_8$, $s_6=-s_9$, etc. Note also that having $s_i=0$ for some $i$, or $s_i=s_k$ for a pair $i$ and $k$, are zero-measure events.
Let us choose an (arbitrary) ordering for the four neighbors: South, East, North and West. We have $S_{\boldsymbol x}=s_{15}$ when the four signs are $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol y}) \sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{15}=\{+1,+1,+1,+1\}$. Next, let us consider $s_{14}$. If the weakest link (i.e. the smallest $|J_{\boldsymbol x\boldsymbol y}|$) corresponded to (say) the East neighbor, then the array yielding $s_{14}$ would be $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{14}=\{+1,-1,+1,+1\}$. The groups of four signs are ordered in such a way as to produce decreasing values of the 16 $s_i$’s. The eight groups $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol
y})\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{15},\ldots,\{
\mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})\sigma_{\boldsymbol
x}\sigma_{\boldsymbol y}\}_{8}$ deserve special attention: if the current configuration takes one of these values, then the energy will *increase* upon flipping $\sigma_{\boldsymbol x}$. If the energy increases we shall be forced to reject the spin-flip (unless the random number turns out to be small enough).
With these definitions, the algorithm is easy to explain. We draw a random number $0\leq R<1$ with uniform probability. The Metropolis update of site ${\boldsymbol x}$ at inverse temperature $\beta=1/T$ can be cast as follows:
1. If $R < \mathrm{e}^{-2\beta s_{15}}$ we flip the spin $\sigma_{\boldsymbol x}\rightarrow -\sigma_{\boldsymbol x}$.
2. If $\mathrm{e}^{-2\beta s_{15}} < R < \mathrm{e}^{-2\beta
s_{14}}$ and the current configuration of the four signs turns out to be identical to the *forbidden* array $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol
y}) \sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{15}$ we leave $\sigma_{\boldsymbol x}$ unchanged. Otherwise, we reverse the spin.
3. If $\mathrm{e}^{-2\beta s_{14}} < R < \mathrm{e}^{-2\beta
s_{13}}$ we reverse $\sigma_{\boldsymbol x}$ unless the current configuration of the four signs is identical to one of the two configuration in the forbidden set: $\{ \mathrm{sgn}(J_{\boldsymbol
x\boldsymbol y}) \sigma_{\boldsymbol x}\sigma_{\boldsymbol
y}\}_{15}$ or $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})
\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{14}$.
4. If $\mathrm{e}^{-2\beta s_{13}} < R < \mathrm{e}^{-2\beta
s_{12}}$, the forbidden set contains $\{
\mathrm{sgn}(J_{\boldsymbol x\boldsymbol y}) \sigma_{\boldsymbol
x}\sigma_{\boldsymbol y}\}_{15}$, $\{ \mathrm{sgn}(J_{\boldsymbol
x\boldsymbol y}) \sigma_{\boldsymbol x}\sigma_{\boldsymbol
y}\}_{14}$ and $\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})
\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{13}$. We reverse $\sigma_{\boldsymbol x}$ unless the current signs configuration is contained in the forbidden set.
5. The same scheme applies to the other intervals, up to $\mathrm{e}^{-2\beta
s_{8}}< R$. In this extremal case, the forbidden set contains all the energy-increasing configurations of the four signs: $\{ \mathrm{sgn}(J_{\boldsymbol
x\boldsymbol y})\sigma_{\boldsymbol x}\sigma_{\boldsymbol
y}\}_{15},\ldots,\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol
y})\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{8}$.
We can bypass the use of floating-point arithmetic by using a look-up table. For each of the $L^2$ sites of the system we need to keep in our table the eight probability thresholds $$\mathrm{e}^{-2\beta s_{15}}< \mathrm{e}^{-2\beta s_{14}}< \ldots
<\mathrm{e}^{-2\beta s_{8}}\;,$$ and the corresponding eight *sometimes forbidden* four-signs configurations $$\begin{aligned}
\{ \mathrm{sgn}(J_{\boldsymbol x\boldsymbol y})
\sigma_{\boldsymbol x}\sigma_{\boldsymbol y}\}_{15}\,,\ \{
\mathrm{sgn}(J_{\boldsymbol x\boldsymbol y}) \sigma_{\boldsymbol
x}\sigma_{\boldsymbol y}\}_{14}\,,\ \ldots &&\\
\ldots\,,\ \{ \mathrm{sgn}(J_{\boldsymbol
x\boldsymbol y}) \sigma_{\boldsymbol x}\sigma_{\boldsymbol
y}\}_{8}\,.&&\end{aligned}$$ The look-up table is entirely determined by the absolute values of the couplings $|J_{\boldsymbol x\boldsymbol y}|$.
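The following plain-Python sketch (one site, one sample, no bit packing) shows how the eight thresholds and the corresponding *sometimes forbidden* sign patterns are built from the four $|J_{\boldsymbol x\boldsymbol y}|$, and how the accept/reject decision is then taken; it is meant only to clarify the logic of the look-up table, not to reproduce the optimized 128-bit implementation.
```python
import itertools
import numpy as np

def build_lookup(abs_J, beta):
    """Per-site look-up table from the four |J_xy| (ordered S, E, N, W).

    Returns the thresholds exp(-2*beta*s_i) for s_15 > ... > s_8 > 0 (stored
    in increasing order) and the corresponding sign patterns, i.e. the tuples
    of sgn(J_xy)*sigma_x*sigma_y over the four neighbors."""
    patterns = list(itertools.product([+1, -1], repeat=4))
    sums = sorted(((sum(a * e for a, e in zip(abs_J, eps)), eps)
                   for eps in patterns), key=lambda t: t[0], reverse=True)
    positive = [(s, eps) for s, eps in sums if s > 0]     # s_15, ..., s_8
    thresholds = np.exp(-2.0 * beta * np.array([s for s, _ in positive]))
    forbidden = [eps for _, eps in positive]
    return thresholds, forbidden

def metropolis_accept(R, current_pattern, thresholds, forbidden):
    """Accept/reject the flip of sigma_x given a uniform random R in [0, 1)."""
    if R < thresholds[0]:
        return True                         # even the worst case is accepted
    k = int(np.searchsorted(thresholds, R, side="right"))
    # reject only if the current pattern is among the k most energy-increasing ones
    return current_pattern not in forbidden[:k]
```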
At this point, our multi-spin coding solution is straightforward. We chose to code 128 different samples in each computer word. We set randomly and independently the *sign* of each of the $128\times
2\times L^2$ couplings, $\mathrm{sgn}(J_{\boldsymbol x\boldsymbol
y})=\pm 1$ with $50\%$ probability. However, we only extract $2\times L^2$ independent absolute values $ |J_{\boldsymbol
x\boldsymbol y}|$ from the Gaussian distribution. This $|J_{\boldsymbol x\boldsymbol y}|$ is common to all the 128 bits in the computer word that codes the bond between lattice sites ${\boldsymbol x}$ and ${\boldsymbol y}$.
The effective number of samples {#sect:effective-number}
-------------------------------
As far as we know, our multi-spin coding scheme is new and it has never been tested. Therefore, it is useful to investigate its effectiveness.
Let us consider a Monte Carlo simulation long enough to make thermal errors negligible compared to sample-to-sample fluctuations [^8]. Let us now simulate $N_S$ *independent* samples in order to compute the expectation value $\overline{\langle O\rangle}$ of an observable $O$. For instance, $O$ could be the energy density $e=H/L^2$ or the squared spin overlap $q^2$.
Our estimate will suffer from a statistical error $\Delta_O$ of typical (squared) size $$\label{eq:error_independent}
\Delta_O^2 = \frac{\mathrm{Var}(O)}{N_S}\,,$$ where $\mathrm{Var}(O)= \overline{\langle O\rangle^2}-\overline{\langle
O\rangle}^2$ is the variance of $O$.
We want to analyze a situation in which the coupling absolute values $|J_{\boldsymbol x\boldsymbol y}|$ are fixed while we average over many different coupling signs. It will be useful to recall some simple notions about conditional probabilities (the same ideas were heavily used in Refs. [@janus:10; @janus:14c]). Let $\langle
O\rangle_{|J|,\mathrm{sgn}(J)}$ be the thermal expectation of $O$ for a given sample. We split the couplings in their absolute values and their signs $J_{\boldsymbol x\boldsymbol
y}= |J_{\boldsymbol x\boldsymbol y}| \; \mathrm{sgn}(J_{\boldsymbol x\boldsymbol
y})$. The conditional expectation value of $\langle
O\rangle_{|J|,\mathrm{sgn}(J)}$, given the absolute values for the couplings, is $$E(\langle O\rangle |\; |J| ) = \frac{1}{2^{N_{\mathrm{B}}}} \sum_{\{\mathrm{sgn}(J)\}}\langle O\rangle_{|J|,\mathrm{sgn}(J)}\,,$$ where $N_{\mathrm{B}}=2 L^2$ is the number of bonds in the square lattice and the sum extends to the $2^{N_{\mathrm{B}}}$ equally probable sign-assignments for the couplings. The relationship with the standard expectation values is straightforward $$E(O)\equiv\overline{\langle O \rangle}= \int D|J|\, E(\langle O\rangle |\; |J| )\,,$$ where $\int D|J|$ indicates the average taken with respect to the absolute value of the couplings.
The variance can be treated in a similar way. The variance induced by the absolute values is $$\mathrm{Var}_{|J|}(O)=\int D|J|\, \Big(E(\langle O\rangle |\;|J|)\, -\, E(O)\Big)^2\,.$$ Instead, the $|J|$-averaged variance induced by the signs is $$\begin{aligned}
&&\mathrm{Var}_{\mathrm{sgn}(J)}(O)=\\[1mm]
&&\int D|J|\, \frac{1}{2^{N_{\mathrm{B}}}} \sum_{\{\mathrm{sgn}(J)\}} \, \Big(\langle O\rangle_{|J|,\mathrm{sgn}(J)} \,-\,E(\langle O\rangle |\;|J|)\Big)^2\,.\nonumber\end{aligned}$$ It is straightforward to show that $$\label{eq:regla_de_suma}
\mathrm{Var}(O)=\mathrm{Var}_{|J|}(O)+\mathrm{Var}_{\mathrm{sgn}(J)}(O)\,.$$ We are finally ready to discuss our multi-spin coding simulation. Imagine we simulate $N_{|J|}$ choices of the absolute values for the couplings. Our squared statistical error is $$\Delta_{O,\mathrm{MSC}}^2 = \frac{1}{N_{|J|}}\Big[\mathrm{Var}_{|J|}(O)+\frac{\mathrm{Var}_{\mathrm{sgn}(J)}(O)}{128}\Big]\,.$$ However, the comparison with Eq. suggests us to define the effective number of samples in our 128 bits, $N_{\mathrm{eff},O}$, through $$\label{eq:error_MSC}
\Delta_{O,\mathrm{MSC}}^2=\frac{\mathrm{Var}(O)}{N_{|J|}\, N_{\mathrm{eff},O}}\,.$$ The combination of Eqs. and tells us that $$N_{\mathrm{eff},O}=128\,\frac{1+z}{128+z}\quad\text{where}\quad z=\frac{\mathrm{Var}_{\mathrm{sgn}(J)}(O)}{\mathrm{Var}_{|J|}(O)}\,.$$ Therefore, the effective number of samples in our 128-bit computer word is bounded as $$1< N_{\mathrm{eff},O} < 128\,.$$ If the variance ratio $z$ is small, then $N_{\mathrm{eff},O}\approx 1$ and we will gain nothing by multi-spin coding. On the other hand, if the statistical fluctuations induced by the signs dominate, $z$ will be large and we shall approach the optimal efficiency $N_{\mathrm{eff},O} = 128$.
The problem in assessing the effectiveness of our approach beforehand is that estimating the variances $\mathrm{Var}_{|J|}(O)$ or $\mathrm{Var}_{\mathrm{sgn}(J)}(O)$ is not easy. However, we can do it by running two different kinds of numerical simulations. On the one hand, we can perform simulations with $N_S$ independent couplings. On the other hand, we use multi-spin coding in a simulation with $N_{|J|}$ independent choices of the absolute values of the couplings. Numerical estimates of the statistical errors, $\tilde\Delta_O$ and $\tilde\Delta_{O,\mathrm{MSC}}$, can be obtained in a standard way. Then, Eqs. and tell us that $$\label{eq:Neff_empirical}
N_{\mathrm{eff},O}\approx
\frac{ \tilde\Delta_O^2}{\tilde\Delta_{O,\mathrm{MSC}}^2}\frac{N_S}{N_{|J|}}\,.$$
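In code the empirical estimate is a one-liner; the two error estimates entering it come from the two kinds of runs described above.
```python
def effective_samples(delta_indep, N_S, delta_msc, N_absJ):
    """N_eff per 128-bit word, Eq. (Neff_empirical): delta_indep is the error
    from N_S fully independent samples, delta_msc the error from the
    multi-spin-coded run with N_absJ choices of the coupling magnitudes."""
    return (delta_indep**2 / delta_msc**2) * (N_S / N_absJ)
```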
$L$ $\xi_L$ $T$ $N_S$ $N_{|J|}$ $N_{\mathrm{eff},e}$ $N_{\mathrm{eff},q^2}$ $N_{\mathrm{eff},\xi_L}$ $N_{\mathrm{eff},U_4}$
----- ----------- ----- ------- ----------- ---------------------- ------------------------ -------------------------- ------------------------
8 3.031(9) 0.7 200 200 1.1 8.8 11.3 11.2
64 4.599(12) 0.7 200 200 1.4 8.0 7.0 8.1
8 8.581(19) 0.2 200 200 0.9 34.2 42.4 58.6
48 35.86(4) 0.2 200 1600 1.4 89.2 106.4 110.6
: Numerical estimation of the effective number of independent samples in a 128 bits computer word, from Eq. . We give results obtained under different dynamical conditions for the following observables: internal energy $N_{\mathrm{eff},e}$, squared overlap $N_{\mathrm{eff},q^2}$, correlation length $N_{\mathrm{eff},\xi_L}$, and Binder ratio $N_{\mathrm{eff},U_4}$. We somehow abuse notation when applying Eq. to quantities such as the correlation length $\xi_L$ or the Binder ratio $U_4$, which are computed as non-linear functions of mean values of direct observables. The statistical error in the computation of $N_\mathrm{eff}$ is below $10\%$.[]{data-label="Table:MSC"}
Some numerical experiments, described in Table \[Table:MSC\], convinced us that our multi-spin coding is extremely useful when computing long-distance observables, particularly when the correlation length is large $\xi_L\gg 1$ and the system size increases. On the other hand, when computing short distance observables (such as the internal energy), $N_{\mathrm{eff},O}$ turns out to be disappointingly close to one. Fortunately, for long-distance quantities, such as the Binder parameter at $\xi\approx 36$, we have an effective number of samples as large as $N_{\mathrm{eff},U_4}\approx 111$.
Computing the anomalous dimension {#sect:g-computation}
=================================
We have seen that $$\overline{\langle q^2\rangle} = [\hat u_h(T)]^2
F_{q^2}(\xi_L/L)\,,\ g(\xi_L/L)=\frac{F_{q^2}(0.4)}{F_{q^2}(\xi_L/L)}\,.$$ Let us define $x\equiv\xi_L/L$. The universal scaling function $g(x)$ was depicted in Fig. 3. We shall employ it here to obtain a quantitative bound on the anomalous dimension $\eta$.
If we take the $L\to\infty$ limit at fixed $x$, for small $x$ we obtain the scaling law $$\label{eq:g-scaling}
g(x) \propto \frac{1}{x^{2-\eta}}\,.$$
Our procedure is as follows. We first determine $g(x,L_\mathrm{min})$ by computing the scaling function $g(x)$ as explained before, but restricting the analysis to data from system sizes $L\geq
L_\mathrm{min}$. We then consider pairs of arguments $x_1$ and $x_2$ (consecutive points in the $x$ grid where we compute $g(x)$, see Fig. 3) and obtain the effective estimators $$\label{eq:effective-eta}
2-\eta(x^*)= \frac{\log [g(x_1,L_\mathrm{min})/g(x_2,L_\mathrm{min})]}{\log [x_2/x_1]}\,,\ x^*\equiv\sqrt{x_1x_2}\,,$$ that are shown in Fig. \[fig:effective-eta\].
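The effective estimator is straightforward to evaluate once $g$ has been computed on the $x$ grid; a minimal helper:
```python
import numpy as np

def effective_eta(x1, x2, g1, g2):
    """Return (x*, eta_eff) from g evaluated at two nearby arguments x1 < x2,
    using 2 - eta(x*) = log[g(x1)/g(x2)] / log[x2/x1], x* = sqrt(x1*x2)."""
    two_minus_eta = np.log(g1 / g2) / np.log(x2 / x1)
    return np.sqrt(x1 * x2), 2.0 - two_minus_eta
```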
![Effective value of $2-\eta$ as obtained from Eq. versus $x^*$ (which is the geometric mean of the two values of $\xi_L/L$ involved in the computation of $\eta$). We show estimations for several values of the minimal size included in the analysis, $L_\mathrm{min}$. Data for the binary model obtained with the same value of $L_\mathrm{min}$ are connected by dashed lines (continuous lines in the case of Gaussian distributed couplings). [**Inset:**]{} For the smallest argument $x^*$ that we reach in our simulations, we investigate the dependency of $2-\eta$ on $L_\mathrm{min}$. []{data-label="fig:effective-eta"}](eta_error-2w){height="\columnwidth"}
The estimates depicted in Fig. \[fig:effective-eta\] depend on everything they could: on the disorder distribution, on $L_\mathrm{min}$ and on $x^*$. However, for small $x^*$ the dependency on $L_\mathrm{min}$ and on the disorder distribution becomes negligible within our better than $1\%$ accuracy (see Fig. \[fig:effective-eta\]—inset) [^9].
It is obvious from Fig. \[fig:effective-eta\] that effects of different origin compete: statistical errors and systematic errors due to $x^*$ being too large (or to $L_\mathrm{min}$ being too small). However, we have an additional hint: we expect $\eta=0$ for the Gaussian model. But we see identical $1\%$ deviations from $2-\eta=2$ for Gaussian and for binary couplings. Thus we regard the small difference in the inset of Fig. \[fig:effective-eta\] as an estimate of the combined errors (systematic and statistical) that we suffer. We can safely summarize our findings as $$|\eta_{\text{binary}}|<0.02\,.$$
Traditional analysis {#sect:traditional}
====================
![The effective, size dependent critical exponents $\nu$ \[[**Top:**]{} binary model. $Q_T$ is defined in Eq. \[eq:nu\].\] and the anomalous dimension $\eta$ ([**Middle:**]{} binary model. [**Bottom:**]{} Gaussian model). The quotient $Q_{q^2}$ is defined in Eq. and analyzed in Eq. .[]{data-label="fig:mal"}](nu-eta-3w){height="\columnwidth"}
For the sake of completeness, we include here the results of a traditional analysis, based on scaling laws as a function of the system temperature. These results give a flavor of how severe the problems caused by the non-linear scaling fields are.
The difficulties encountered in the computation of the thermal exponent $\nu$ are explained in Sect. \[sect:nu\]. One can compute it from the comparison of temperatures $T_{\xi_L/L}^{(L)}$ for lattices $L$ and $2L$: $$\label{eq:nu}
Q_T(L)=\frac{T_{\xi_L/L}^{(2L)}}{T_{\xi_L/L}^{(L)}}=2^{-1/\nu} (1+\ldots)\,.$$ When computing this ratio for the binary model, see Fig. \[fig:mal\]–top, the scaling corrections come from a number of different sources. We have, of course, the corrections due to the scaling field $\hat u_T$ that were discussed in Sect. \[sect:nu\]. Yet, we also have strong corrections of order ${\cal O}(L^{-\omega})$ \[instead, for the Gaussian model we are fortunate to have tiny, probably negligible, ${\cal O}(L^{-\omega})$ corrections, see Fig. 2\]. We also have to deal with the crossover between $T=0$ and $T>0$ behaviors [@thomas:11; @parisen:11] (for a fixed variation range of $L$, the crossover appears when increasing $\xi_L/L$). In fact, we know that some of these scaling corrections are of similar magnitude: those arising from $\hat u_T$ should be of order $L^{-2/\nu}$ with $1/\nu=0.283(6)$ while $\omega=0.75(10)(5)$. Disentangling the effects of the three sources of corrections to scaling will require strong analytical guidance. Probably, simulating much larger systems, which is possible using special methods [@thomas:13], will be useful.
As for the anomalous dimension, the traditional approach would start from the quotients of $\overline{\langle q^2 \rangle}$ at fixed $\xi_L/L$, as computed for $L$ and $2L$: $$\label{eq:eta}
Q_{q^2}(L)=\frac{\overline{\langle q^2 \rangle}(2L,T_{\xi_L/L}^{(2L)})}
{\overline{\langle q^2 \rangle}(L,T_{\xi_L/L}^{(L)})}\,.$$ Barring scaling corrections, this quotient should behave as $2^{-\eta}$. Therefore, for very large $L$, $Q_{q^2}(L)$ should tend to one. The reason for this unfavorable behavior is that (ignoring all sort of scaling corrections) this ratio actually behaves as $$\label{eq:eta-2}
Q_{q^2}(L)=2^{-\eta}\Bigg(
\frac{\hat u_h(T_{\xi_L/L}^{(2L)})}{\hat u_h(T_{\xi_L/L}^{(L)})}
\Bigg)^2\,.$$ In fact, in the thermodynamic limit the two temperatures $T_{\xi_L/L}^{(2L)}$ and $T_{\xi_L/L}^{(L)}$ tend to $T=0$, making the ratio of scaling fields in Eq. irrelevant. However, our data are far away from this limit, as shown in Fig. 4.
In fact, we know that $T_{\xi_L/L}^{(2L)}< T_{\xi_L/L}^{(L)}$ and that $\hat u_h$ is an increasing function (recall again Fig. 4). It follows that the ratio of scaling fields in Eq. is smaller than one, which mimics a slightly positive *effective* anomalous dimension, see Fig. \[fig:mal\]–middle and bottom.
[^1]: When the critical temperature, $T_\mathrm{c}$, is nonzero the problems caused by the non-linear scaling fields can be bypassed using a standard analysis [@amit:05; @nightingale:76; @ballesteros:96]. In fact in $3D$ spin glasses [@janus:13] one compares data from different system sizes at the *same* temperature, namely $T_\mathrm{c}$, which cures most of the problems.
[^2]: The relationship between $h$ and the “magnetic field” $h_q$ coupled to the spin overlap is $h_q = h^2+{\cal O}(h^4)$.
[^3]: The universality of the scaling functions in $D=3$ spatial dimensions was carefully analyzed in [@jorg:06].
[^4]: The elementary Monte Carlo step consisted of 10 Metropolis sweeps at fixed temperature, followed by a cluster update [@houdayer:01] and by a parallel tempering step [@hukushima:96; @marinari:98b]. We consider two sets of two real replicas for each temperature. The cluster updates are performed only within each set (overlaps are computed by taking a pair of statistically independent configurations, each from one set). We performed a stringent equilibration test that takes into account the statistical correlation when comparing the last logarithmic bins [@fernandez:08b].
[^5]: Data for the Gaussian model can be fit as well with a sub-leading correction term $L^{-2\omega}$, rather than with the $L^{-2}$ term we use in Eq. . With either sub-leading term we found that the leading corrections to the Gaussian data vanish within numerical accuracy.
[^6]: The tentative estimate of ref. [@rieger:96] was later found to be problematic [@rieger:97].
[^7]: Another general solution is to use a discrete approximation to the Gaussian distribution, such as the Gaussian-Hermite quadrature [@abramowitz:72]. For instance, in Refs. [@leuzzi:09; @janus:14b] a Gaussian-distributed magnetic field was simulated in this way.
[^8]: This situation is not desirable [@ballesteros:98], but it is almost automatically enforced by the standard thermalization tests for spin-glasses [@fernandez:08b].
[^9]: For $L_\mathrm{min}=32$, the automated selection of $T^2_{\mathrm{max},\xi_L/L}$ for the fits discussed in section \[sect:T-fits\] does not result in a good data collapse. For instance, for Gaussian couplings, $\xi_L/L=0.1$ and $L_\mathrm{min}\geq 32$ one needs to choose $T^2_{\mathrm{max},\xi_L/L}=0.53$ (rather than 0.8, as we choose for smaller $L_\mathrm{min}$).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The increasing incorporation of Artificial Intelligence in the form of automated systems into decision-making procedures highlights not only the importance of decision theory for automated systems but also the need for these decision procedures to be explainable to the people involved in them. Traditional realist accounts of explanation, wherein explanation is a relation that holds (or does not hold) eternally between an *explanans* and an *explanandum*, are not adequate to account for the notion of explanation required for artificial decision procedures. We offer an alternative account of explanation as used in the context of automated decision-making that makes explanation an *epistemic* phenomenon, and one that is dependent on context. This account of explanation better accounts for the way that we talk about, and use, explanations and derived concepts, such as ‘explanatory power’, and also allows us to differentiate between reasons or causes on the one hand, which do not need to have an epistemic aspect, and explanations on the other, which do have such an aspect. Against this theoretical backdrop we then review existing approaches to explanation in Artificial Intelligence and Machine Learning, and suggest desiderata which truly explainable decision systems should fulfill.'
author:
- |
Tarek R. Besold tarek-r.besold@city.ac.uk\
Department of Computer Science, City, University of London Sara L. Uckelman s.l.uckelman@durham.ac.uk\
Department of Philosophy, Durham University
bibliography:
- 'explanation.bib'
title: 'The What, the Why, and the How of Artificial Explanations in Automated Decision-Making'
---
Explanations and Decisions
==========================
Artificial systems are increasingly used to make decisions in an automatized fashion in various aspects of human life, including medical decision-making and diagnosis, large-scale budgeting, financial transactions, etc. The range of possible or actual applications of such systems is broad and varied, from the assignment of credits and loans, through recommendations for medical treatments or the distribution of donor organs, to more mundane applications in matchmaking on online dating platforms or the support of healthy or active lifestyles. As a result, the important role that theories of decision-making in general, as well as particular implementations via algorithms and decision procedures, play in Artificial Intelligence (AI) cannot be overstated.
But because these artificial decision systems by their nature interact with human users, a robust decision theory or algorithm is not, by itself, going to be adequate. It is not sufficient that we can merely predict what results some system will obtain, reasoning from first principles of classical decision theory, if we do not know why we get those results or are not able to explain how the results are obtained. There is an important post-decision epistemic gap that must also be filled when the decision rendered by the artificial system is communicated to “the human system”: The explanatory gap.
When someone asks for an explanation of why they have been denied a loan by the bank, responding “the algorithm outputted a ‘no’ to your request” may be a *reason* why the loan application was denied, but this is often not, and in many cases, *cannot be* an *explanation* of why the person was denied; it isn’t an explanation any more than “Because I said so” is an explanation to a child of why they cannot have a second piece of candy: “Because I said so” is a *reason* why the child cannot have a second piece of candy, but it is not a reason that gives insight into the mechanisms in play; it is simply an appeal to authority. As Park et al. note, “Explaining decisions is an integral part of human communication, understanding, and learning” [@ParkHASDR16 p. 1]; an answer that does not produce understanding of why the answer is correct or provide an insight into how the answer was obtained is not going to satisfy the relevant role that explanations play in human communication. In a sense, explanation is the flip side of decision: The capacity to make deliberate decisions brings along with it the need to be able to adequately explain how and why those decisions are reached.[^1] Thus, a robust and correct algorithm or decision procedure is never going to be enough for satisfactory AI-human interaction: Given that explaining decisions is integral to human communication, it must also be possible to explain why that algorithm or decision procedure gives the outcome that it does.
The topic of ‘explanation’ is one that is quite frequently discussed in philosophy, particularly in philosophy of science, where, *inter alia*, inference to the best explanation plays a substantial role and the explanatory power of a scientific theory is often cited as a virtue to be promoted when discriminating between theories [@harman; @thagard]. Most of these accounts of explanation are realist in nature, grounding explanation in some factor or feature of the real world. Such accounts of explanation, however, do not seem to hit the mark for the purposes of shedding light on the concept of explanation as it is used in and regarding artificial systems. We address this in §\[explain\], introducing prominent accounts of explanation found in philosophy and explaining (hah!) why they are inadequate for our current purpose. In §\[epistem\] we make explicit the epistemic dimension of explanations which *must* be addressed in order to have a satisfactory account. We switch gears in §\[AI\] to lay out some relevant AI contexts where an account of explanation is necessary, paving the way for us to highlight desiderata for an account of explanation in AI in §\[desiderata\], building upon the account of explanation we gave in §\[epistem\]. We conclude in §\[conc\].
Before we begin, however, we first take a brief look at what it is that we wish to explain, that is, the decisions produced by artificial systems, generally in communication with or relevant to humans, and the ways in which decision theory is manifest in Artificial Intelligence.
Classical Decision Theory (CDT)—“the analysis of the behavior of an individual facing nonstrategic uncertainty” [@gintis p. 2]—is rooted in classical game theory and operates under many simplifying assumptions such as transitivity of preferences, unbounded processing capacity, perfect knowledge, and perfect recall. These assumptions allow decision theorists to construct elegant mathematical models, but these models are poor models of actual human reasoning. Individual experiences are sufficient to show that humans do not have unbounded processing capacities—an observation which is by no means new, but already lies at the heart of, among others, Simon’s work on bounded rationality [@simon1959; @simon1990]—nor do we have perfect knowledge or recall, and experimental evidence also demonstrates the many ways in which humans fail to be perfect [@WC]. This provides fair reason to reject CDT as a plausible (or even “just” reasonable) model of human behavior.
It might be thought that CDT fares better in the artificial domain, for many of the simplifying assumptions that do not apply to humans may apply to artificial systems. However, while CDT might offer a convenient framework guiding the development of decision capacities in AI systems which are operating by themselves, the situation changes once systems are required to interact with human users in a significant way. In these cases, the capacity to not only reason rationally, but rather to emulate human-like reasoning becomes important [@besold2018]. This does not only hold in scenarios where close human-machine collaboration in a bidirectional way is required [@besold2013; @besold2018], but also in the setup serving as backdrop to this article, i.e., when human users are subject to automated decision-making. While in the latter case one can argue that from a purely rational perspective taking decisions based on a CDT model is advisable (given the well-understood mathematical basis of CDT accounts of decision-making, and the corresponding adherence with explicit rationality postulates), once one also takes into account the perspective of the subject of the decision—and their need to not only rationally understand potentially negative decision outcomes, but also to accept them on a personal and subjective level—additional requirements tying more closely into cognitive, psychological, and cultural aspects of decision-making come into play.
Even models that build on the application of classical game theory to the evaluation of actual human behavior and reasoning — such as the *beliefs, preferences and constraints* or *BPC* model of behavioral game theory — weaken many of the assumptions made by classical game theorists, but still adopt problematic axioms. For example, while the BPC model does not require perfect knowledge or unbounded rationality, it still assumes that human preferences are consistent, an assumption that human-based disciplines such as psychology generally reject [@gintis p. 3].
These and other shortcomings of CDT—when thinking about extending its reach beyond a descriptive-explanatory use in understanding economic phenomena, or in supporting normative economics—have by now been widely recognized (see, for instance, [@beach2017; @einhorn1981; @gilboa]). Still, it is an open question how CDT would have to be expanded to be usable, for instance, in applications (possibly even predictively) modeling actual human decision-making on a case-by-case basis. Gilboa notes that
> one may find that refinements of the theory depend on specific applications. For example, a more general theory, involving more parameters, may be beneficial if the theory is used for theoretical applications in economics. It may be an extreme disadvantage if these parameters should be estimated for aiding a patient in making a medical decision. Similarly, axioms might be plausible for individual decision-making but not for group decisions, and a theory may be a reasonable approximation for routine choices but a daunting challenge for deliberative processes [@gilboa p. 2].
Modern decision theory has correspondingly developed into a wide and varied field of partially independent, partially rivaling paradigms and methods with usually fairly well-specified application domains and contexts, also in this sense having departed from many CDT approaches and their corresponding claims to generality.
Examples of this new brand of decision-theoretic frameworks include Gilboa and Schmeidler’s subject-centric accounts [@gilboa_schmeidler_2001], in which the rationality or irrationality of a decision depends on the decision-maker’s individual cognitive capacities and limitations, or their theory of case-based decision-making [@gilboa1995] which leaves aside beliefs or predictions and judges the desirability of an act exclusively based on how well it worked on similar problems in the past. Tversky and Kahneman [@tversky1974judgment] in their heuristics and biases program aimed to explain and conceptually rebrand what had been considered irrational human behavior as simply adhering to certain patterns and shortcuts common to human reasoning. Following in a similar vein, behavioral decision theory [@slovic1977] and, more recently, behavioral economics [@camerer2011behavioral] enhances the repertoire of economics with a wide range of methods and theories from experimental psychology, neuroscience, cognitive science, and the social sciences in providing explanations across different conceptual levels from neurophysiological to cultural for previously unconsidered or dismissed particularities of human decision behavior.
In all its diversity, modern decision theory acknowledges the need to take into account the decision-maker who, often, also turns out to be one of the decision subjects. In this way, modern decision theory shares a basic intuition also driving research into explanations in decision-support and automated decision-making systems. The decision-maker (in the case of decision-support systems) and/or the decision subject require understanding of the why and how a decision has been reached, or a recommendation regarding a decision outcome has been given. This understanding can either be provided if the decision has been reached by modes of reasoning familiar to the user (i.e., the initially mentioned emulation of human reasoning), or by providing explanations of the reasons and the steps which have given rise to the system output in a way understandable to the user. While, all efforts in decision theory (modern or otherwise) notwithstanding, the former option still remains largely out of reach, the latter angle is taken by most current projects in the context of explainable AI systems for decision-making. But despite the enormous popularity the corresponding research direction currently enjoys, some fundamental questions hitherto remain unanswered (and, often, even mostly unaddressed)—such as, for example, what is or makes for an explanation of automated decision-making (and maybe even a good one) in the first place?
What are explanations (thought to be)? {#explain}
======================================
Standard philosophical accounts of explanation are robustly realist, taking ‘explanation’ to be some metaphysical characteristic of the world. A typical example of this approach is taken by Nozick, in a chapter entitled “Why is there something rather than nothing?” [@nozick]. There he notes that “the question about whether everything is *explainable* is a different one” [@nozick p. 116] (emphasis added) from the title question of the chapter. Nozick stipulates the existence of a relation $E$ which is the relation “*correctly explains* or *is the (or a) correct explanation of*” [@nozick p. 116], and states that this relation is irreflexive, asymmetrical, and transitive:
> Nothing explains itself; there is no $X$ and $Y$ such that $X$ explains $Y$ and $Y$ explains $X$; and for all $X$, $Y$, $Z$, if $X$ explains $Y$ and $Y$ explains $Z$, then $X$ explains $Z$ [@nozick pp. 116–117].
As a result, this relation of explanation strictly partially orders all truths.[^2] Further, this ordering is of *all* explanations, not just those that are known to us [@nozick p. 117]. Such an account of explanation is realist in the sense that
> if one hopes to explain the occurrence of one event $e$ by appealing to another event $c$, the explanation is successful only if there is a genuine relation $R$ between the mentioned events. That is, for such an explanation to be *correct* and therefore genuinely explanatory it must actually be the case that $c$ and $e$ stand in relation $R$ [@campbell08 p. 75].
Unfortunately, this explanation of explanation is almost entirely useless (or at least inapplicable) as an analysis of *practical* explanations. By this we mean the ordinary phenomena of explanation that we believe are relevant for use in the context of automated decision-making in particular, or even when looking at the notion(s) of explanation as used by people in their everyday life more generally. We will highlight a number of issues with Nozick’s stipulations and offer an alternative account.
First, note that Nozick gives no motivation or argument for his account of explanation, either that it is a relation or that it is a relation of the type that he specifies. He simply asserts that such a relation exists and that it is what constitutes explanation.[^3]
Second, his definition of explanation is incomplete because it does not specify what the relata of the relation are—facts? States of affairs? Objects in the world? Propositions? Something else? It may be the case that that it doesn’t matter, and his account works whatever the relata are, but we have not yet been given any indication that this is the case. At the very least, Nozick needs to specify what he thinks the relata are, and, even better, to give us reason to believe that he is right.[^4]
Third, Nozick gives no argument for his claim that the relation of explanation—even assuming that one exists and that the relata of the relation are well-specified—is irreflexive, asymmetrical, and transitive. In fact, there are reasons to think that it is none of these.[^5] If there are brute facts (a possibility that Nozick himself entertains [@nozick pp. 117–118]), then these brute facts are their own explanations, hence the relation of explanation would be reflexive for brute facts. In the case of equivalent statements, there is no reason to think that they could not be explanations of each other; for example, the Axiom of Choice and the proof to Zorn’s Lemma can be thought to *explain* Zorn’s Lemma, *and* conversely Zorn’s Lemma and the proof to the Axiom of Choice can be thought of as an explanation of the Axiom of Choice. Which one is taken to explain the other will depend, in part, on which one the person in question was introduced to first. This highlights one important aspect of explanations that we will discuss more in the next section: Explanations are context sensitive. Lastly, as with any transitive relation, it would not be difficult to construct a sorites wherein $X_0$ explains $X_1$, $X_1$ explains $X_2$, and $X_i$ explains $X_{i+1}$ for every $i$ up to some $n$, but by the time one gets to $X_n$, $X_0$ is not an explanation of $X_n$. This again highlights the importance of context sensitivity.
Fourth, this account of explanation does not provide any room to distinguish explanations from reasons. There are many reasons that can be given why a certain thing is the case; which of these turns out to be an explanation will depend on context, we argue below.
What we have seen from the preceding is that Nozick’s account of explanation entirely overlooks what we can call the epistemic dimension of explanation. Now, this is not to deny Nozick’s implicit point that some explanations are known to us and some are not—in fact, quite the opposite. For it is precisely because there are explanations that we do not have but wish to have that we ask the question “Why?” But this question is never asked in isolation, and *that* is the epistemic dimension we are interested in.
The epistemic dimension of explanation {#epistem}
======================================
Suppose someone asks you “Why is that car red?” There are a number of possible answers you might give:
1. Because it reflects light at wavelengths between approximately 625 and 740 nm.
2. Because someone painted it red.
3. Because no one painted it blue.
4. Because its owner’s favorite color is red.
5. Because there were no non-red cars at the dealer when the owner bought their car.
All of these can plausibly count as reasons for why the car is red. But which of these reasons will count as an *explanation* of why the car is red will—as exemplified by the examples above—depend on the circumstances in which the asker is asking the question.[^6] These circumstances include the asking agent’s belief state and knowledge set, as well as her reason for asking that question, as opposed to another question, and what she hopes to do with the answer once she’s obtained it.
These factors are what make up what we call the ‘epistemic dimension’ of explanation. We are not the first to highlight the importance of this dimension; Kim [@kim] argues that many “existing accounts of explanation…neglect the epistemological dimension of explanation by failing to provide an account of understanding” [@campbell p. 213]. And as McLaughlin says, “Only by taking into account the epistemic dimension of explanation can we capture the idea that explanations provide understanding, answer questions, and give reasons for belief” [@mclaughlin p. 227]. While explanations provide reasons, as noted above not every reason counts as an explanation, and thus there must be something more to a reason that makes it an explanation. What that something more is we now attempt to identify.
First, note that when we talk about the “epistemic dimension of explanation”, we do *not* mean the sort of thing that Campbell is talking about here:
> Kim’s own understanding of the epistemological dimension of explanation does not actually concern understanding *per se*, or how it is that an explanation generates or contributes to understanding; instead it is a question about what kinds of facts constitute explanatory knowledge [@campbell p. 214].
The question of how an explanation generates understanding is a question of epistemology, not a question of explanation. Similarly, the notion of “explanatory knowledge” is narrower than the notion of explanation we are interested in; knowledge implies truth but, as we argue below, practical explanations need not be truthful in order to count as explanatory.
We argue that there are three things necessary for a particular reason to count as a practical explanation in a given context: (1) The reason must be relevant to the purpose of the question; (2) the reason must provide the hearer with the power to act in a more informed way; (3) the reason need not be true, though it does need to be at least an approximation of the truth. We treat each of these characteristics in turn.
Ad (1): Irrelevant reasons cannot be explanatory. If you are asking me for an explanation, then you have a particular epistemic need to be filled or epistemic longing to be satisfied. This need circumscribes the possible acceptable answers. Any answer which does not (attempt to) satisfy this need will not be relevant and cannot be an explanation. Further, not only does the epistemic context of the explanation determine (in part) which reasons are actually explanations, but the type of the explanation matters too, whether it is a formal explanation, a mechanistic one, a teleological one, etc. As Vasilyeva et al. point out, “Research increasingly supports the idea that (many) representations and judgments are sensitive to contextual factors, including an individual’s goals and the task at hand…This raises the possibility that judgments concerning the quality of explanations are similarly flexible” [@vasilyeva2017 p. 1]. Further, empirical evidence demonstrates that “the acceptability of teleological explanations relates to conceptual domains, causal beliefs, and general constraints on explanation” [@LC p. 168], and there is little reason to doubt that the same is true for other types of explanation. All of this comes together to demonstrate that an account of explanation that does not take into account the ways in which context determines the relevance of an explanation will fail to be an account of how explanations actually function.
Ad (2): The reason the type of explanation matters for determining relevance is that the type of explanation affects what you can do with the explanation afterwards. This brings us to the second characteristic of explanations, and the question of what makes a reason relevant in a given context. That is to say, what makes some particular reason satisfy an epistemic need, but not another? The answer is rooted in the notion of “explanatory power”—when we say, e.g., that one scientific theory has “more explanatory power” than another, we are saying that it gives us (or, at the very least, the scientists involved in applying the theory) the power to *do more things*. We can explain other phenomena, we can make new predictions, we can understand more than we understood before. Thus, the power of an explanation is rooted in its capacity to allow us to act in a way that we could not have acted otherwise. Irrelevant reasons do not give us such a power. If you tell someone who asks you “Why is that car red?” that it is because it reflects light of a particular wavelength, but the person who asked for the explanation has no concept of wavelengths or reflection, or indeed of light as an abstract concept, this answer will not be explanatory because it does not allow her to act in a more informed way; the answer is, quite simply, irrelevant because it is uninformative, and it is uninformative because it does not fit within the epistemic context of the asker. This is not to say that such an answer *cannot* be explanatory; it can, in another context. From this, it is clear that whether something is an explanation varies according to context.
Ad (3): This is perhaps the biggest way in which our account of explanation differs from realist accounts. On a realist account, the relation of explanation either holds or doesn’t hold at all times and only holds (presumably) when there is in fact a genuine connection between the two events. If the car owner’s favorite color is green, then saying that the car is red because red is his favorite color is not an explanation because it is false.
However, truth is an enormously high bar to put on explanation, and in fact explanations that are later determined not to be true can still have enormous explanatory power (in the sense of explanatory power that we defined above). In our pursuit of scientific progress, “we do not necessarily replace wrong theories with right ones, but rather look for greater explanatory power” [@niaz p. 93]. A classic example of this from the philosophy of science is the series of scientific models of the structure of atoms. Over the course of the 20th century, different models of atomic structure were proposed which “continue to provide increasing explanatory power, such as: Thomson, Rutherford, Bohr, Bohr-Sommerfeld, and wave-mechanical, among others” [@RepNat p. 71]. It simply doesn’t make sense to speak of increasing explanatory power if you think that there is no explanation going on (which one must think if one requires all explanations to be strictly truthful). Here it is worth noting that saying that Theory 2 is adopted to replace Theory 1 because Theory 2 has more explanatory power than Theory 1 does not commit us to saying that Theory 1 retains any explanatory power once Theory 2 has taken its place; in fact, our approach to explanation allows us to strip Theory 1 of all of its explanatory power once it has been superseded (though of course we are not required to: Newtonian mechanics are still explanatory, even if in some contexts relativistic mechanics are *more* explanatory).
We do, however, want to encourage truth-seeking in our quest for explanations, and to prioritize as good explanations those which have a better fit with the set of knowledge claims relevant for the context (and are thus in at least some sense “closer to the truth”). Consider folk-explanations for thunder, whether it be Zeus throwing his thunderbolt, Thor banging his hammer on an anvil, or Leigong hitting his drum with his mallet. In the right context, each of these can be relevant to satisfying the hearer’s epistemic longing; in such contexts they also provide the hearer with the power to act in a more informed way (for example, one can then consider whether to sacrifice a virgin to appease the god and make the thunder stop); however, as an approximation of the truth each of these explanations falls quite short. As a result, it is legitimate for us to say that they are *not very good* explanations. Thus our account of explanation avoids one potential criticism of non-realist accounts, namely that anything whatever can count as an explanation, given the right context. Even if that is the case, we are still able to distinguish good explanations from bad ones. The question of how much truth is required is a question of great importance in its own right, and one that we will set aside for the remainder of this paper.
We’ve noted one consequence of this account of explanation above, namely that one and the same reason can be an explanation in one context and not in another, because of the individual nature of individual epistemology. A further consequence is that one need not give up Nozick’s account of explanation as a metaphysical relation between things entirely, of course. As Campbell notes,
> The pluralist’s emphasis on the epistemology of explanation does not render her position irrealist because the correctness of the explanation of one event in terms of another is in part a function of the metaphysical relations between them [@campbell08 p. 91].
Note, though, that the idea of “the correctness” of an explanation potentially smuggles in some problematic notions—not stemming from the “correctness” but from “the”. Even the pluralist’s approach to explanations requires that there be *the* correct explanation, and that this unique explanation’s correctness is rooted in certain metaphysical facts (see quote from Campbell earlier in the previous section). If, however, we scrap the notion of there being *the* correctness of an explanation, and allow there to be many different ways of grounding what makes an explanation a good explanation in a given epistemic context, then we can allow that the existence of some metaphysical relation between events can be sufficient for possessing an explanation, but it is not necessary (and it is not even *always* sufficient). Thus, we allow for the possibility that we can have multiple possible explanations for a single event or phenomenon, not all of which will be actual explanations in a given context.
In this, our explanatory pluralism differs from Kim’s, on which “it is possible to have more than one explanation for a given event *provided that one has an account of the way the explanations are related*” [@campbell08 p. 86] (emphasis added). We can allow that any of the answers to “Why is that car red?” above are explanations of the car’s being red, without requiring that we have an account of the way in which these various answers are related to each other (and indeed, why would we expect there to be any such account? Particularly of how people’s favorite colors are related to how light at various wavelengths appears to us.)
A final important consequence of this account is that no one can determine whether something is an explanation for someone else, because of the private nature of individual epistemology. This raises interesting issues in the implementation of mechanisms of explanation into decision procedures in artificial systems, for it means that there is no single answer that the decision procedure can give that can be guaranteed to be explanatory for all people.
Our conclusion is that explanation is “an epistemological activity” and explanations are “an epistemological accomplishment” [@kim88 p. 225]—they satisfy a sort of epistemic longing, a desire to know something more than we currently know. Not only do they satisfy this desire to know, they also provide the explanation-seeker a direction of action that they did not previously have.
Explanation in AI {#AI}
=================
We now shift our focus to how our conception of practical explanation plays out in the context of AI—both in terms of how it compares to previous accounts of explanation and how well it can play the role needed in AI. The purpose of this section is primarily historical, outlining what has been said previously as well as the current discussions, before we move on to more normative matters in the next section.
Such a historical discussion is not straightforward: Not only is there no unified or uniform concept of ‘explanation’ that is used in AI contexts, quite often the term is neither defined nor explained. It is outside the scope of this paper to give a complete history; instead, we focus on two important contexts in which explanations play an important role: explanations in what is called “Good Old-Fashioned Artificial Intelligence” or GOFAI (§\[gofai\]) and explanations in machine learning (§\[ml\]), and then specific challenges concerning explanations that arise in the context of automated decision-making (§\[dm\]).
Explanation in GOFAI {#gofai}
--------------------
Discussions concerning (the need for) explanations of the reasoning and behavior of AI systems are not a new phenomenon, but already started during the era of GOFAI [@haugeland1985], and more precisely in the context of expert and decision support systems. Clancey [@clancey1983] questioned whether uniform, weakly-structured sets of if/then associations (i.e., simple inference rules of the form “IF $precondition_1$ and $precondition_2$ and $\ldots$ and $precondition_n$ THEN $consequence$”) as used in the MYCIN medical expert system for the abductive diagnostics of bacterial infections [@shortliffe1974] are suitable, for instance, in a teaching setup, i.e., with the intent to support active learning. This engendered significant interest in explainable expert systems (see, e.g., [@chandrasekaran1989; @wick1992]), with much work targeting the representation formalisms used in the respective systems [@gaines1996]. It also prompted proposals to conceptually split the task that a computational system has to solve into two functional components, a problem-solving one and a communication one (see, e.g., [@vansomeren1995; @askiragelman1998]).
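To make the flavor of such rule-based reasoning concrete, the following minimal Python sketch shows how a chain of fired if/then rules doubles as a rudimentary trace-style explanation; the rules, facts, and names are invented for illustration and are not taken from MYCIN’s actual rule base.

```python
# Forward chaining over IF/THEN rules; the list of fired rules serves as a
# rudimentary "how" explanation. Rules and facts are purely illustrative.
RULES = [
    ("R1", {"gram_negative", "rod_shaped"}, "enterobacteriaceae"),
    ("R2", {"enterobacteriaceae", "hospital_acquired"}, "suspect_infection_x"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for name, preconditions, consequence in RULES:
            if preconditions <= facts and consequence not in facts:
                facts.add(consequence)
                trace.append(f"{name}: IF {sorted(preconditions)} THEN {consequence}")
                changed = True
    return facts, trace

facts, trace = infer({"gram_negative", "rod_shaped", "hospital_acquired"})
print("\n".join(trace))  # read back the fired rules as an explanation
```

Trace read-outs of this kind are precisely what Clancey found wanting for teaching purposes: they report which rules fired, but not the strategic and structural knowledge that explains why the rule base is organized as it is.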
In a more recent effort originating from the cognitive systems lines of research, Forbus emphasized the importance of the human comprehensibility of the behavior and the output of AI systems in the context of his *software social organisms* [@forbus2016]. In his view, participation in human society requires effective and efficient communication. In order to have both effective and efficient communication, AI systems must have adequate explanation capabilities and capacities.
Explanation in Machine Learning {#ml}
-------------------------------
Machine Learning (ML) methods have seen impressive successes over the last few years. ML-based systems have consequently been introduced into more and more complex application domains, with a significant share of efforts targeting decision support and automated decision-making systems. In the wake of these developments, questions of how to interpret or explain the applied methods and systems have become important. Taking stock of the current variety of ways “explanations” and related notions are treated within ML as a field, Lipton points out that “the term interpretability holds no agreed upon meaning, and yet machine learning conferences frequently publish papers which wield the term in a quasi-mathematical way” [@lipton2016 p.7]. He calls for further formulations of problems and their definitions, hoping to provide a more systematic conceptual basis on which to advance research and development of the corresponding types of systems. Doran et al. [@doran2017] responded to that call with an initial proposal for a general typology of explainable ML and AI methods and systems. In their account, there are three general types of AI/ML systems:
- Opaque systems where the mechanisms mapping inputs to outputs are invisible to the user. This basically converts the system into an oracle making predictions over an input, without indicating how and why predictions are made.
- Interpretable systems where a user can not only see, but also study and (given potentially required expertise, resources, or tools) understand how inputs are mathematically mapped to outputs.
- Comprehensible systems which emit symbols along with their outputs, allowing the user to relate properties of the input to the output.[^7]
When comparing the notions of interpretable and comprehensible systems, it is important to note that while interpretable systems are pushing towards becoming “white boxes” (in contrast with the “black box” nature of opaque systems), a comprehensible system can well remain a “black box” concerning its inner workings, but is required to provide the user with symbolic output suitable to serve as a basis for subsequent reasoning and action (possibly resulting in a “communicating black box”). Against the backdrop of these three types of AI/ML systems, Doran et al. require that any definition or characterization of ‘explanation’ must involve the presence of “a line of *reasoning* that explains the decision-making process of a model *using human-understandable features of the input data*” [@doran2017 p.7]. We now see how this plays out in existing ML approaches.
Argument-Based Machine Learning (ABML) [@mozina2007] applies methods from argumentation in combination with a rule-learning approach. Explanations provided by domain experts concerning positive or negative arguments are included in the learning data and serve to enrich selected examples. Still, although ABML adds the corresponding information to the system output compared to most “standard” ML approaches (and, in doing so, likely enhances the general degree of explanation), there is no built-in check or guarantee that users fully comprehend the learned hypotheses.
Explanation-Based Learning (EBL) (e.g., [@ebg:mitchell]) uses background knowledge in a mainly deductive inference mechanism to “explain” how each training example is an instance of the target concept. The deductive proof of an example—in some cases augmented by newly added features not explicit in the training examples [@prolog:ebg]—yields a specialization of the given domain theory leading to the generation of a special-purpose sub-theory described in a user-defined operational language. Even so, the generated syntactic explanations can still be far from human-comprehensible explanations in any relevant semantic sense (causal, mechanistic, etc.).
In the context of artificial neural networks and related statistical approaches, regression models [@schielzeth2010] or generalized additive models [@lou2012] often serve as prime examples for interpretable methods and systems. In these cases, however, interpretability refers almost exclusively to a mathematical property of the models [@rudin2014; @vellido2012], allowing for a certain degree of knowledge extraction from the model and subsequent interpretation by domain experts, but clearly lacking a general explanatory component accessible to the end user. This is not always problematic; for internal purposes, this notion of interpretability may suffice. However, once these systems begin to interact with humans, an explanatory component becomes necessary.
Finally, a somewhat popular explanation strategy is to create a more comprehensible representation of the learned model, which in most cases necessitates a trade-off between fidelity and comprehensibility [@vandemerckt1995]. Here, examples include the simplification of decision trees via pruning [@bohanec1994], or the extraction (“distilling”) of decision trees [@frosst2017], or $M$-of-$N$ rules[^8] [@odense2017] as more explanatory models from neural networks. What is common to all methods and systems following this route to explanation is that the respective approaches are purely intended to illustrate the system’s behavior to the end user while abstracting away from the actual details of the underlying algorithm. This presupposes an application scenario and/or intended user base which afford this restriction on the accuracy of the explanation regarding the actual inner workings of the system, including the potential challenges this might pose, for example, in the context of legal liability and responsibility considerations. Also, while the resulting representations are intended to be more comprehensible than the learned model in its original form, they very often still remain quite technical in appearance and (once again) presuppose a fairly high degree of expertise on the side of the user to actually be understood.
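As a concrete illustration of this last route to explanation, the following sketch distills a shallow decision tree from a trained neural network by fitting the tree to the network’s predictions rather than to the original labels; it uses scikit-learn, and the dataset, architecture, and hyperparameters are arbitrary choices made purely for illustration.

```python
# Illustrative "distillation" of a decision tree from a neural network: the
# surrogate tree is fit to the network's predictions, trading fidelity to the
# original model for a more comprehensible representation.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))  # mimic the network, not the labels

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity of the surrogate to the network: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

The printed rules are far easier to read than the network’s weights, but they describe the surrogate rather than the network itself—which is exactly the fidelity/comprehensibility trade-off discussed above.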
Explanation and automated decision-making {#dm}
-----------------------------------------
Having looked at AI/ML in terms of technical approaches to explainable or interpretable methods and systems, in this section we focus on the specific conceptual role and challenges regarding explainability arising from the use of AI/ML methods in systems for decision-support and, even more importantly, automated decision-making. As noted in the introduction, these systems are used across a wide range of applications. What is common to the vast majority of such systems is their partial or complete reliance on statistical—and, as such, necessarily data-driven—approaches to solving the respective task. This central role of large amounts of data as key input element, processed using complex statistical methods without the explicit generation of interpretable knowledge along the way, gives rise to a certain form of opacity from the user’s perspective. They are opaque “in the sense that if one is a recipient of the output of the algorithm (the classification decision), rarely does one have any concrete sense of how or why a particular classification has been arrived at from inputs. Additionally, the inputs themselves may be entirely unknown or known only partially” [@burrell2016 p. 1]. This opacity is in problematic tension with the goal of having explanations for the outcomes of the decision-making system. This perceived opacity can be traced back to at least one of three roots [@burrell2016]: (1) A system has been designed to be opaque as a consequence of intentional corporate or state secrecy, aiming at self-protection and concealment and, along with it, introducing the possibility for knowing deception. (2) The system is opaque due to technical illiteracy, since reading and writing code—which might be required in an attempt to analyze the system—is a specialist skill. (3) The opacity arises due to the way the respective algorithms operate at application scale, rooted in the mismatch between mathematical optimization in high-dimensionality on the one hand, and the limitations on human-scale reasoning and the corresponding styles of semantic interpretation on the other.
On the one hand, intentional secrecy can be hard to solve; and in some cases, a solution might not even be desired. On the other hand, opacity due to technical illiteracy and opacity due to processing differences between AI/ML algorithms and human reasoning—while very different in the nature and quality of the underlying ailment—can both be addressed by equipping decision systems with explanation capacities. Wachter et al. point out that in a systems context, these explanations can operate on one of two levels: either on the level of the decision mechanism itself (i.e., targeting the system functionality in terms of the logic, significance, envisaged consequences and general functionality of an automated decision-making system), or on an instance level (i.e., targeting individual decisions in terms of the particular rationale, reasons, and individual circumstances of a specific automated decision) [@wachter2017 Sect. 2]. This distinction between mechanism- and instance-based explanation of an automated decision also becomes relevant when thinking about the timing of the explanation relative to the explained decision-making process. If an *ex ante* explanation is required prior to the automated decision-making process, the resulting explanation necessarily can only address the mechanism level.[^9] An *ex post* explanation after the decision process has terminated can take both levels into account, potentially addressing aspects of the system functionality as well as the rationale of the specific decision.
Putting these conceptual considerations into the context of actual AI systems as currently (and likely in the short to midterm) deployed for automated decision-making, in the vast majority of cases a divergence between a system’s level of explainability and its performance levels has to be noted. At the moment, Deep Learning (DL) approaches [@lecun2015] frequently solve tasks which had previously been considered far out of the reach of contemporary AI systems, and raise the bar on most technical benchmarks to which some of the corresponding methods can be applied. Still, this class of ML techniques is almost exclusively made up of methods which even on the level of individual decision instances are hardly interpretable, and which—for reasons tying into the very nature of the corresponding approaches as methods learning their own internal representations[^10]—generally cannot by themselves provide comprehensibility cues. Thus, while not being completely opaque in that an instance-based interpretation of a decision process might in principle be possible, the actual level of insight into the reasons and the process underlying a decision offered in practice is low. Better interpretable (or even explainable) approaches such as, for instance, the ones discussed in §\[ml\] in general fall short of the performance levels exhibited by DL methods, requiring the system’s architect to exchange increased interpretability or explainability for (often significant) losses in terms of the system’s effectiveness and efficiency, possibly up to a level where a task could not satisfactorily be solved anymore. Concerning potential practical implications of this trade-off, in the absence of regulatory requirements or market-relevant incentives to the contrary, providers of AI systems for automated decision-making in a competitive economic environment are likely to prioritize performance over other considerations—producing oracle-like systems, with very high statistical certainty returning “correct” (as compared to some externally defined task- and/or domain-specific criteria) decision outcomes but not providing actually informative insight into why and how the decision was reached.
In the context of current initiatives on the side of the regulatory authorities on different levels (such as, e.g., the EU General Data Protection Regulation 2016/679) and societal discussions regarding a desire for transparency and corresponding accountability of automated decision systems, work on better interpretable or explainable methods and systems in AI and ML is ongoing. It is against this backdrop, combined with our considerations concerning the nature and virtues of practical explanations, that we look, in the following section, at the desirable properties of explanations provided by AI systems. The resulting list of desiderata can then help to guide the further development of methods and systems towards the goal of providing actual explanations to the subjects of automated decision-making.
Desiderata for explanations in AI {#desiderata}
=================================
In the early days of ML, Michie [@michie88] introduced a three-class categorization of learning systems, which has been given new relevance by the current discussion concerning the explainability of AI/ML systems. In Michie’s view, ML systems can be categorized by adherence to the following three (increasingly demanding) criteria:
- *Weak machine learning*: The system’s predictive performance improves with increasing amounts of data.
- *Strong machine learning*: In addition to meeting the weak criterion, the system provides its learned hypotheses in symbolic form.
- *Ultra-strong machine learning*: In addition to meeting the strong criterion, the presentation of the hypotheses has to be communicatively effective in that the user is made to understand the hypotheses and their consequences, subsequently improving the joint performance to a level beyond that of a user studying only the training data.
Whilst most modern ML systems meet the first, weak, criterion, it is the third (i.e., ultra-strong) demand that resonates most strongly with our discussion of explanation in §\[explain\] and §\[epistem\]. From a pragmatic point of view, it seems necessary for an explanation to be communicatively effective. In an applied scenario, an explanation only counts as an explanation if it also fulfills an explanatory function.
A similar intuition, though augmented by a second communicative direction back from the user to the system—converting the previously one-dimensional communicative situation into a real two-way interaction also in terms of information transfer, and not only in joint behavior—underlies the account given by Stumpf:
> First, the system’s explanations of why it has made a prediction must be usable and useful to the user. Second, the user’s explanation of what was wrong (or right) about the system’s reasoning must be usable and useful to the system. Both directions of communication must be viable for production/processing by both the system and the user [@stumpf2009 p.2].
The demand for effective communication in the direction from the user to the system in our account goes beyond the requirements a system would have to meet to be considered explainable. Still, one of the underlying theoretical requirements for such an exchange to be possible in the first place is worth noting: It must be possible for the user to process the explanation provided by the system in such a way as to be able to point out where the system’s reasoning in producing the prediction and the subsequent explanation was right or wrong. This goes beyond the mere requirement of the explanation being usable and useful, and poses a stronger demand in terms of content, structure, and presentation of the explanation.
Several of these aspects also resonate, for instance, in Bohanec and Bratko’s observation that in the context of explainable AI, simple, though possibly not perfectly accurate definitions of concepts (requiring the definition to correspond to the concept sufficiently rather than perfectly) may well be more useful than completely accurate, but complex and very detailed ones [@bohanec1994]. Regarding the technical realization of these and similar ideas in an AI system’s architecture, Van de Merckt and Decaestecker [@vandemerckt1995] suggest conceptualizing systems as two-layered, with a deep knowledge level optimized for the actual task the system is supposed to solve, and a shallow knowledge level optimized for comprehensibility, addressing a description task targeting the deep knowledge level’s output. Both levels are connected by an interpretation function from the deep to the shallow knowledge level, allowing one to build an approximately correct but comprehensible description.
Doran et al. [@doran2017] discuss external demands which often are brought forward regarding properties for explainable AI systems, including confidence, trust, safety, ethicality, and fairness. Still, as pointed out in their paper, requirements such as instilling confidence and trust that a system’s output is accurate are deemed problematic due to their subjective nature, either depending on users’ internal attitudes towards AI systems, previous experiences when using these systems, or on cultural and societal norms and standards. Finally, they also dismiss completeness of explanations as a required trait—and even question the desirability of complete explanations in many scenarios—pointing to the example of a doctor presenting an incomplete explanation to a patient, either taking into account the patient’s limited knowledge of potentially complex biological processes, or sparing her worrisome but ultimately irrelevant details [@doran2017].
Summarizing the current literature, we find at least two main desiderata regarding explanations in the context of AI and automated decision-making. Each of them constitutes a necessary criterion regarding the status of a system’s output as explanation, though none of them is sufficient by itself. First, the explanations a system provides for its reasoning and behavior have to be communicatively effective relative to the system’s user, both in content as well as in presentation (“*communicative effectiveness*”). Users must be capable of understanding both the presented explanation and its ramifications, in such a way as to be empowered to subsequently adapt their interactions with the system in a beneficial way. Second, the explanations a system provides must be sufficiently accurate (as opposed to perfectly accurate) relative to the explanans and to the context of the system and its user (“*accuracy sufficiency*”). It might be worth trading off some accuracy for improved comprehensibility of the resulting explanation for the user, supporting the communicative effectiveness of the explanation. Based on our previous analysis of what makes a practical explanation, we add another two necessary criteria to the list. Third, the explanations a system provides must be sufficiently truthful (as opposed to perfectly truthful) (“*truth sufficiency*”). On the one hand, a trade-off similar to the accuracy vs. comprehensibility consideration might be required or even desirable, while on the other hand considerations akin to the doctor example from the previous paragraph might also warrant opting for a less than perfectly truthful explanation. Fourth, the explanations a system provides must quell the respective user’s subjective epistemic longing (“*epistemic satisfaction*”). It is not fully sufficient to meet a user’s epistemic needs (which are mostly addressed by the conjunction of the first two desiderata); the user also must indeed be under the impression that her search for an explanation has been completed successfully. Taking all four desiderata together, the combination of communicative effectiveness, accuracy sufficiency, truth sufficiency, and epistemic satisfaction of the users provides, in our view, a sufficient characterization of what is needed for an AI system’s output to constitute an effective explanation to its users.
Conclusion {#conc}
==========
Any discussion of the implementation of decision theory into artificial systems or AI research cannot overlook the importance of the role that explanations play in automated decision-making. Due to the “imperfect” nature of human beings when held to the normative standards set by classical models of decision-making, the latter are inadequate for providing decisions which can be explained in real-life contexts. A second factor that contributes to this difficulty is that practical explanation is—contra what many more metaphysically-oriented philosophers have argued—best understood not as an abstract relationship that always holds or never holds between two events or facts, but rather as having an epistemic dimension that means what counts as an explanation varies by context. Many things constitute this epistemic dimension, including the knowledge of the person requesting the explanation, the notion of ‘epistemic longing’, and the need for explanations to provide the receiver with the power to act in a way that she would not have otherwise been able to act. Keeping the importance of this epistemic dimension in mind, and looking at previous approaches to constructing explainable AI systems, it turns out that current methods are not yet sufficient. Many efforts have been and are being undertaken to increase the explainability of automated decision systems, with different techniques focusing on different aspects of what constitutes a practical explanation. Still, what is hitherto lacking are clear criteria for explainable AI systems conceived in such a way that they can serve as guiding beacons for the corresponding developments in AI theory and engineering. With this article we aim to contribute to closing this gap by putting four candidate desiderata up for discussion: communicative effectiveness of the system relative to its users, a sufficient degree of accuracy and a sufficient degree of truthfulness of the provided explanations, and the need to quell a user’s epistemic longing.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Lorijn Zaadnoordijk for her feedback and the many conversations on the topic, and Gwen Uckelman for valuable inspiration.
0.2in
[^1]: Many people are good at constructing post-hoc rationalizations of their decisions (especially, but clearly not exclusively, in the case of intuitive, spontaneous and/or “unconscious” decisions), but generally we do not count these rationalizations as adequate explanations.
[^2]: A strict partial order is an order that is irreflexive, asymmetric, and transitive.
[^3]: Those who desire an account of why explanation is a relation can look to [@woodward].
[^4]: Others have attempted to answer this question. For the “events and relations *in the world*”, rather than “items in our epistemic corpus” [@campbell p. 208] answer to the “between what?” question, see [@kim].
[^5]: It might be objected here that the following examples beg the question against Nozick’s argument. The problem is that *Nozick gives no argument*, so it is impossible for us to beg the question against his (non-existent) argument. He presents these features of the relation of explanation as if they are obvious; the putative counterexamples we raise should at least cast some doubt on this.
[^6]: For the “Why is that car red?” example above, these could, for instance, be:
1. In the context of a physics class in school, watching cars drive by on the street.
2. In the context of a police search for a blue car of a certain type with a given number plate, after finding the targeted car which turns out to be red.
3. In the context of a repainting effort in an autoshop, processing an order to convert red cars into blue ones.
4. In the context of a car dealership, selling a red version of a usually blue production model.
5. In the context of a conversation about the new car of a person who usually prefers other colors than red.
[^7]: The definition of the “comprehensible systems” category echoes the same intuitions underlying Michie’s (much older) notions of [*strong*]{} and [*ultra-strong machine learning*]{} [@michie88], cf. §\[desiderata\].
[^8]: An $M$-of-$N$ rule is a classification rule of the form “IF ($M$ of the following $N$ antecedents are true) THEN $\ldots$ ELSE $\ldots$” [@towell1993]. $M$-of-$N$ rules offer a way to succinctly express, for instance, parity problems like the XOR classification problem: “IF (exactly $1$ of the $2$ inputs is true) THEN odd parity ELSE even parity”.
[^9]: As also noted by Wachter et al. [@wachter2017], a small qualification is in order here: If sufficiently simple, pre-defined models are used and are completely specified and known *a priori*, predictions about the rationale of a specific decision become possible in principle already prior to the actual automated decision-making process.
[^10]: An important factor in the success of DL approaches to ML is the capacity of the artificial neural networks to learn their own internal representations for relevant input features, making use of the numerous degrees of freedom offered by the high number of network layers. Still, at the moment there is no method assuring that the content of these internal representations refers to anything humans would recognize as meaningful in their conceptualization of the world. Claimed correspondences between the representation layers of deep networks and human conceptualizations are either accidental or misleading.
---
abstract: 'The hypercharges of the fermions are not uniquely determined in SO(10) grand unification, but rather depend upon which linear combination of the two U(1) subgroups of SO(10)$\supset\,$SU(3)$\times$SU(2)$\times$U(1)$\times$U(1) remains unbroken. We show that, in general, a given hypercharge assignment can be obtained only with very high-dimensional Higgs representations. The observation that the standard model is obtained with low-dimensional Higgs representations can therefore be regarded as further evidence for SO(10) grand unification. This evidence is independent of the fact that SO(10)$\supset\,$SU(5).'
---
[**Group-Theoretic Evidence for\
SO(10) Grand Unification**]{}\
\
Fermi National Accelerator Laboratory\
P. O. Box 500\
Batavia, IL 60510\
\
Department of Physics\
University of Illinois\
1110 West Green Street\
Urbana, IL 61801\
The standard model of the fundamental interactions is based on the gauge group SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$. Each generation of fermions transforms as a reducible representation of the gauge group, consisting of five irreducible representations.[^1] The hypercharges of the representations are chosen to reproduce the observed electric charges of the fermions.
One of the most compelling pieces of evidence for grand unification is that the fermions of each generation transform as the $\overline 5$ + 10-dimensional representation of SU(5) [@GG]. The five irreducible representations of each generation of fermions are thereby unified into two, and the three gauge groups are unified into one. The hypercharges of the five irreducible representations are uniquely determined by their embedding in the $\overline 5$ + 10 of SU(5).
If a right-handed neutrino exists, the group-theoretic evidence for grand unification is even more compelling: the fermions of each generation transform as the 16-dimensional representation of SO(10) [@G; @FM]. The six irreducible representations are thereby unified into a single irreducible representation, and the three gauge groups are unified into one. We assume the existence of a right-handed neutrino for the remainder of the discussion.
SO(10) has a subgroup SU(3)$\times$SU(2)$\times$U(1)$\times$U(1). When SO(10) is spontaneously broken to SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$, the hypercharge subgroup[^2] is a linear combination of the two U(1) subgroups of SO(10). Thus the hypercharges of the fermions are not uniquely determined in SO(10) grand unification, in contrast to the case of SU(5) grand unification, but rather depend upon which linear combination of the two U(1) subgroups is unbroken.
The SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$ quantum numbers of the left-handed fields which make up the 16-dimensional representation of SO(10) are given in Table 1. The hypercharge is normalized such that the left-handed positron has unit hypercharge. The parameter $a$ depends upon which linear combination of the two U(1) subgroups is unbroken. It is a rational number because the hypercharges are “quantized”, i.e., commensurate, since a U(1) subgroup of a non-Abelian group is necessarily compact [@GG].[^3]
[ccccc]{} & SU(3)$_c$ & SU(2)$_L$ & U(1)$_Y$ & U(1)$_{EM}$\
\
$(u_L,d_L)$ & 3 & 2 & $a$ & $(1-2a, 4a-1)$\
$u_L^c$ & $\overline 3$ & 1 & $2a-1$ & $2a-1$\
$d_L^c$ & $\overline 3$ & 1 & $1-4a$ & $1-4a$\
$(\nu_L,e_L)$ & 1 & 2 & $-3a$ & $(1-6a, -1)$\
$\nu_L^c$ & 1 & 1 & $6a-1$ & $6a-1$\
$e_L^c$ & 1 & 1 & 1 & 1\
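As a consistency check of the anomaly-freedom noted in the footnote above (our verification, computed directly from Table 1), weighting each hypercharge by its color and weak-isospin multiplicities gives, for any value of $a$, $$\begin{aligned}
\text{SU(3)}^2\times\text{U(1)}:&\quad 2a+(2a-1)+(1-4a)=0,\\
\text{SU(2)}^2\times\text{U(1)}:&\quad 3a+(-3a)=0,\\
\text{grav}^2\times\text{U(1)}:&\quad 6a+3(2a-1)+3(1-4a)+2(-3a)+(6a-1)+1=0,\\
\text{U(1)}^3:&\quad 6a^3+3(2a-1)^3+3(1-4a)^3+2(-3a)^3+(6a-1)^3+1=0.\end{aligned}$$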
The value of the parameter $a$ depends upon the Higgs representation employed to break SO(10) to SU(3)$_c\times$SU(2)$_L\times$U(1)$_Y$. The Higgs field may be either fundamental or composite; only its group-theoretic properties are relevant to the considerations of this paper. The candidate values of $a$ for a given irreducible representation correspond to the SU(3)$\times$SU(2)$\times$U(1) singlets contained in that representation [@S]. Usually this representation must be accompanied by at least one additional Higgs irreducible representation in order to break SO(10) down to SU(3)$\times$SU(2)$\times$U(1), because the latter is generally not a maximal little group of the former for a single irreducible representation [@S]. To generate fermion masses, the SU(2)$_L\times$U(1)$_Y$ symmetry must be broken by yet one or more additional Higgs irreducible representations, chosen from the 10-, 120-, and 126-dimensional representations (since $16\times 16 = 10 + 120 + 126$). The SU(2)$_L\times$U(1)$_Y$ symmetry is broken to U(1)$_{EM}$ when any of the color-singlet, SU(2) doublets contained in these representations acquires a vacuum-expectation value, leading to the electric charges listed in the last column of Table 1. The standard model evidently corresponds to $a=1/6$.
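As a quick arithmetic check of this identification (ours, obtained by reading the entries of Table 1 at $a=1/6$), the hypercharges become $$Y(u_L,d_L)=\tfrac{1}{6},\quad Y(u_L^c)=-\tfrac{2}{3},\quad Y(d_L^c)=\tfrac{1}{3},\quad Y(\nu_L,e_L)=-\tfrac{1}{2},\quad Y(\nu_L^c)=0,\quad Y(e_L^c)=1,$$ and the electric charges in the last column reduce to $(\tfrac{2}{3},-\tfrac{1}{3})$ for the quark doublet and $(0,-1)$ for the lepton doublet—precisely the standard-model assignments in the normalization where the left-handed positron has unit hypercharge.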
We have shown by construction that any rational value of $a$ can be obtained by an appropriate choice of the Higgs irreducible representation. However, a given value of $a$ generally requires a very large Higgs irreducible representation. In practice, the smallest Higgs irreducible representations yield only a few values of $a$. We list in Table 2 the possible values of $a$, and the Higgs irreducible representations which can yield that value, for all Higgs representations of dimension 55440 or less.[^4] Note that $a$ and $a/(6a-1)$ are equivalent, upon interchanging $u_L^c$ with $d_L^c$ and $\nu_L^c$ with $e_L^c$. Thus we list values of $a$ in the interval $[0,1/3]$ only. Higgs representations listed as “undetermined” have SU(3)$\times$SU(2)$\times$U(1)$\times$U(1) singlets, which do not determine $a$. Higgs representations listed as “none” have no SU(3)$\times$SU(2)$\times$U(1) singlets. If one or more additional Higgs irreducible representations are needed to break SO(10) to SU(3)$\times$SU(2)$\times$U(1), as is generally the case, they must correspond to the same value of $a$ or an undetermined value of $a$.
It is satisfying that the standard model ($a=1/6$) is obtained with several small Higgs irreducible representations,[^5] the 16-, 126-, and 144-dimensional representations, as is well known [@R]. If we lived in a world in which the ratio of the hypercharges of the quark doublet and the positron were, say, 1/8 rather than 1/6, we could still embed the fermions in the 16-dimensional representation of SO(10), but we would need a 9504-dimensional Higgs representation to obtain the desired symmetry breaking. While there is (perhaps) nothing fundamentally wrong with this, it is less palatable than a model which requires only Higgs fields in low-dimensional irreducible representations. These results are independent of the fact that SO(10)$\supset\,$SU(5); for example the 144-dimensional representation, which contains no SU(5) singlet, produces $a=1/6$, while the 210-dimensional representation, which does contain an SU(5) singlet, produces $a=1/3$.
We believe that the economy of the Higgs representation in SO(10) grand unification, while well known, has not been fully appreciated. We regard it as further evidence for SO(10) grand unification.
[**Acknowledgements**]{}
.3cm
We are grateful for conversations with G. Anderson, D. Berg, R. Leigh, A. Nelson, and P. Ramond. S. W. thanks the Aspen Center for Physics, where part of this work was performed. The research of J. L. was supported by the Fermi National Accelerator Laboratory, which is operated by Universities Research Association, Inc., under contract no. DE-AC02-76CHO3000. S. W. and T. M. were supported in part by Department of Energy grant DE-FG02-91ER40677. T. M. was supported in part by the Lorella M. Jones UIUC Summer Research Fellowship in Physics.
[99]{}
H. Georgi and S. Glashow, Phys. Rev. Lett. [**32**]{}, 438 (1974).
H. Georgi, in [*Particles and Fields*]{} 1974, ed. C. Carlson (AIP, New York, 1975), p. 575.
H. Fritzsch and P. Minkowski, Ann. Phys. [**93**]{}, 193 (1975).
C. Geng and R. Marshak, Phys. Rev. D [**39**]{}, 693 (1989); [*ibid.*]{} [**41**]{}, 717 (1990).
K. Babu and R. Mohapatra, Phys. Rev. Lett. [**63**]{}, 938 (1989); Phys. Rev. D [**41**]{}, 271 (1990).
R. Foot, G. Joshi, H. Lew, and R. Volkas, Mod. Phys. Lett. [**A5**]{}, 95 (1990); X.-G. He, G. Joshi, and B. McKellar, Europhys. Lett. [**10**]{}, 709 (1989); X.-G. He, G. Joshi, and R. Volkas, Phys. Rev. D [**41**]{}, 278 (1990).
J. Minahan, P. Ramond, and R. Warner, Phys. Rev. D [**41**]{}, 715 (1990).
H. Georgi, Nature [**288**]{}, 651 (1980).
R. Slansky, Phys. Rep. [**79**]{}, 1 (1981), Section 9.
W. McKay and J. Patera, [*Tables of dimensions, indices, and branching rules for representations of simple Lie algebras*]{} (Marcel Dekker, New York, 1981).
S. Rajpoot, Phys. Rev. D [**22**]{}, 2244 (1980); S. Barr, Phys. Lett. B [**112**]{}, 219 (1982).
[cl]{} $a$ & [SO(10) Higgs representation]{}\
\
1/6 & 16, 126, 144, 560, 672, 720, 1200, 1440, 1728, 2640, 2772, 2970, 3696,\
& 3696$^\prime$, 4950, 5280, 6930$^\prime$, 7920, 8064, 8800, 9504, 10560, 11088, 15120,\
& 17280, 20592, 20790, 23760, 25200, 26400, 27720, 28160, 28314, 29568,\
& 30800, 34398, 34992, 36750, 38016, 39600, 43680, 46800, 48048, 48048$^\prime$,\
& 48114, 49280, 50050, 50688, 55440\
\
1/3 & 45, 210, 770, 945, 1050, 1386, 4125, 5940, 6930, 7644, 8085, 8910, 12870,\
& 14784, 16380, 17325, 17920, 23040, 50688, 52920\
\
0 & 120, 126, 1728, 2772, 2970, 3696$^\prime$, 4125, 4312, 4950, 6930, 6930$^\prime$, 10560,\
& 20790, 27720, 28160, 28314, 34398, 36750, 42120, 46800, 48114, 50050,\
& 50688\
\
1/4 & 560, 1440, 3696, 5280, 8064, 8800, 11088, 15120, 23760, 25200, 29568,\
& 30800, 34992, 38016, 39600, 43680, 46800, 48048, 49280, 55440\
\
1/12& 672, 1200, 8800, 11088, 17280, 23760, 25200, 26400, 28314, 30800,\
& 34992, 38016, 49280, 55440\
\
1/9 & 2772, 6930, 50688\
\
2/9 & 3696$^\prime$, 6930$^\prime$, 20790, 34398, 36750, 46800, 48114\
\
5/18& 8064, 34992, 39600, 43680\
\
1/8 & 9504, 29568\
\
1/18& 9504, 29568, 30800\
\
5/24& 17280, 26400\
\
2/15& 28314\
\
undetermined & 45, 54, 210, 660, 770, 945, 1050, 1386, 4125, 4290, 5940, 6930, 7644, 8085,\
& 8910, 12870, 14784, 16380, 17325, 17920, 19305, 23040, 50688, 52920\
\
none& 10, 210$^\prime$, 320, 1782, 4410, 4608, 9438, 31680, 37180, 37632, 48510\
\
[^1]: This does not include a right-handed neutrino. We shall introduce this particle into the discussion shortly.
[^2]: The hypercharge subgroup is, by definition, the unbroken U(1) subgroup. In general, this does not correspond to the usual hypercharge subgroup of the standard model.
[^3]: This representation is free of gauge and gravitational anomalies for any value of $a$, not just for rational values [@GM; @BM; @FJLV; @MRW]. It thus serves as an example of a chiral, anomaly-free gauge theory that (for irrational values of $a$) cannot be embedded in a grand-unified theory [@G2].
[^4]: These were derived with the help of the tables of Refs. [@S; @MP] and the [*Liegroup*]{} package developed by George M. Hockney.
[^5]: Generally accompanied by at least one other irreducible representation, such as the 45-dimensional representation [@R].
---
abstract: 'In this study interest centers on regional differences in the response of housing prices to monetary policy shocks in the US. We address this issue by analyzing monthly home price data for metropolitan regions using a factor-augmented vector autoregression (FAVAR) model. Bayesian model estimation is based on Gibbs sampling with Normal-Gamma shrinkage priors for the autoregressive coefficients and factor loadings, while monetary policy shocks are identified using high-frequency surprises around policy announcements as external instruments. The empirical results indicate that monetary policy actions typically have sizeable and significant positive effects on regional housing prices, revealing differences in magnitude and duration. The largest effects are observed in regions located in states on both the East and West Coasts, notably California, Arizona and Florida.'
author:
- |
Manfred M. Fischer, Florian Huber, Michael Pfarrhofer and Petra Staufer-Steinnocher\
Vienna University of Economics and Business
bibliography:
- 'favar.bib'
- 'mpShocks.bib'
- 'additional.bib'
date:
-
-
title: 'The dynamic impact of monetary policy on regional housing prices in the US: Evidence based on factor-augmented vector autoregressions'
---
--------------- ---------------------------------------------------------------------------------------------------
**Keywords:** Regional housing prices, metropolitan regions, Bayesian estimation, high-frequency identification
--------------- ---------------------------------------------------------------------------------------------------
---------------- --------------------
**JEL Codes:** C11, C32, E52, R31
---------------- --------------------
Introduction
============
This paper examines the impact of monetary policy on housing prices in the US.[^1] The literature on this relationship is fairly limited. Previous studies generally rely on two competing approaches. The first uses a structural model to analyze the relationship between monetary policy and housing prices (see, for example, [@MANC:MANC332; @ungerer2015monetary]). Such models impose a priori restrictions on the coefficients. The major strength of this model-based approach is that it provides a theoretically grounded answer to the question of interest. Its potential shortcoming, however, is that the answer is only as good as the model's ability to adequately represent the relationships in the real world.
The second approach – labeled evidence-based – focuses more on the empirical evidence and relies less directly on economic theory. Researchers have commonly used vector autoregressive (VAR) models to measure the impact of monetary policy [see @Baffoe-Bonnie1998; @fratatoni2003monetary; @delnegro2007luftballons; @jarocinski2008house; @vargassilva2008monetary; @beltratti2010international; @ECTJ:ECTJ319]. Such models allow the data rather than the researcher to specify the dynamic structure of the model, and provide a plausible assessment of the response of macroeconomic variables to monetary policy shocks without the need of a complete structural model of the economy.
In the tradition of the latter approach, this paper differs from previous literature both in terms of focus and methodology. With [@fratatoni2003monetary], we share the focus on regional differences in the response of housing prices, using metropolitan-level rather than state-level data.[^2] In terms of methodology, similar to [@vargassilva2008monetary] and in contrast to [@fratatoni2003monetary], we use a factor-augmented vector autoregressive (FAVAR) model to explore regional housing price responses to a national monetary policy shock.[^3] The effects are measured by considering idiosyncratic impulse responses of regions to the shock that is normalized to yield a 25 basis-points decline in the one-year government bond rate.
Differently from [@vargassilva2008monetary] and [@ECTJ:ECTJ319], we employ a full Bayesian approach that is based on shrinkage priors for several parts of the parameter space. In particular, we make use of Markov Chain Monte Carlo (MCMC) methods to estimate the model parameters and the latent factors simultaneously. A full Bayesian approach has the advantage of directly controlling for uncertainty surrounding the latent factors and the model parameters. We follow [@gertler2015monetary] to identify monetary policy shocks by using high-frequency surprises around policy announcements as external instruments.
The paper provides a rich picture of how an expansionary monetary policy shock affects housing prices in 417 US metropolitan regions over a time horizon of 72 months after impact. The findings reveal that regional housing price effects vary substantially over space, with size and modest sign differences among the regions. A few regions in Utah, New Mexico, Kansas, Oklahoma, Mississippi and West Virginia show no significant impact or even slightly negative cumulative responses. In most of the regions, however, the cumulative responses of housing prices are positive, in line with theory. This regional heterogeneity may have different reasons, with heterogeneous regional housing markets playing a major role. The largest positive effects are observed in states on both the East and West Coasts, notably in Miami-Fort Lauderdale in Florida and Riverside-San Bernardino-Ontario in California, but also in Las Vegas in Nevada. In general, housing impulse responses tend to be similar within states and adjacent regions in neighboring states, evidenced by a high degree of spatial autocorrelation.
The remainder of the paper is structured as follows. The next section presents the FAVAR model along with the Bayesian approach for estimation. The subsequent section describes the data and the sample of regions, and outlines the model specification. The empirical results are then discussed, while the final section concludes.
Econometric framework {#sec:framework}
=====================
A factor-augmented vector autoregressive model
----------------------------------------------
The econometric approach employed in this study is a FAVAR model, as introduced in [@bernanke2005measuring]. In our implementation, we let $\bm{H}_t=(H_{1t},...,H_{Rt})'$ denote an $R \times 1$ vector of housing prices at time $t$ ($t = 1, \hdots, T$) across $R=417$ US regions. The model postulates that regional housing prices depend on a number of latent factors, monetary and macroeconomic national aggregates and region-specific shocks. Specifically, the measurement equation can be written as $$\begin{bmatrix} \bm{H}_t \\ \bm{M}_t \end{bmatrix} =
\begin{bmatrix*}[l]
\bm{\Lambda}^F & \bm{\Lambda}^M \\
\bm{0}_{K \times S} & \bm{I}_K
\end{bmatrix*}
\begin{bmatrix}
\bm{F}_t \\
\bm{M}_t
\end{bmatrix} + \begin{bmatrix*}[l]
\bm{\epsilon} _t \\
\bm{0}_{K \times 1}
\end{bmatrix*},
\label{eq:main-struct}$$ where $\bm{F}_t=(F_{1t},...,F_{St})'$ is an $S \times 1$ vector of latent (unobservable) factors which capture co-movement at the regional level ($F_{st}$, $s = 1, \hdots, S$, $t = 1, \hdots, T$). $\bm{M}_t=(M_{1t},...,M_{Kt})'$ is a $K \times 1$ vector of economic and monetary national aggregates that are treated as observable factors, and $\bm{\epsilon}_t$ ($t = 1, \hdots, T$) an $R \times 1$ vector of normally distributed zero mean disturbances with an $R \times R$ variance-covariance matrix $\bm{\Sigma}_{\epsilon}=\text{diag}(\sigma_1^2, \dots, \sigma_R^2)$. These disturbances arise from measurement errors and special features that are specific to individual regional time series. $\bm{\Lambda}^F = (\lambda^{F}_{rs}:$ $r = 1, \hdots, R$; $s = 1, \hdots, S)$ is an $R \times S$ matrix of factor loadings with typical elements $\lambda^{F}_{rs}$, while $\bm{\Lambda}^M = (\lambda^{M}_{rk}:$ $r = 1, \hdots, R$; $k = 1, \hdots, K)$ is a coefficient matrix of dimension $R \times K$ with typical elements $\lambda^{M}_{rk}$. The number of latent factors is much smaller than the number of regions, i.e. $S \ll R$. Note that the diagonal structure of $\bm{\Sigma}_\epsilon$ implies that any co-movement between the elements in $\bm{H}_t$ and $\bm{M}_t$ stems exclusively from the presence of the factors.
The evolution of the factors $\bm{y}_t=(\bm{F}'_t, \bm{M}'_t)'$ is given by the state equation, governed by a VAR process of order $Q$, $$\bm{y}_t = \bm{A} \bm{x}_t + \bm{u}_t \label{eq: stateEQ},$$ with $\bm{x}_t=(\bm{y}'_{t-1}, \dots, \bm{y}'_{t-Q})'$ and the associated $(S+K)\times Q(S+K)$-dimensional coefficient matrix $\bm{A}$. Moreover, $\bm{u}_t$ is an $(S+K)$-dimensional vector of normally distributed shocks, with zero mean and variance covariance matrix $\bm{\Sigma}_u$.
The parameters $\bm{\Lambda}^F$, $\bm{\Lambda}^{M}$ and $\bm{A}$ as well as the latent dynamic factors $\bm{F}_{t}$ are unknown and have to be estimated. To identify the model, we follow @bernanke2005measuring and assume that the upper $(S\times S)$-dimensional submatrix of $\bm{\Lambda}^F$ equals an identity matrix $\bm{I}_S$ while the first $S$ rows of $\bm{\Lambda}^{M}$ are set equal to zero. This identification strategy implies that the first $S$ elements in $\bm{H}_t$ are effectively the factors plus noise.
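To make the measurement equation and the identification scheme concrete, the following minimal sketch simulates data from the model with illustrative dimensions. All variable names, the simulated loadings and factors, and the random seed are our own assumptions, not the authors' code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
R, S, K, T = 417, 1, 7, 183       # regions, latent factors, observables, months (illustrative)

# Factor loadings Lambda_F (R x S) and Lambda_M (R x K)
Lambda_F = rng.normal(size=(R, S))
Lambda_M = rng.normal(size=(R, K))

# Identification: upper S x S block of Lambda_F equals I_S,
# first S rows of Lambda_M are set to zero.
Lambda_F[:S, :S] = np.eye(S)
Lambda_M[:S, :] = 0.0

# Latent factors, observable aggregates and idiosyncratic disturbances
F = rng.normal(size=(T, S))
M = rng.normal(size=(T, K))
sigma2 = rng.uniform(0.1, 1.0, size=R)           # diagonal of Sigma_eps
eps = rng.normal(size=(T, R)) * np.sqrt(sigma2)

# Measurement equation: H_t = Lambda_F F_t + Lambda_M M_t + eps_t
H = F @ Lambda_F.T + M @ Lambda_M.T + eps
print(H.shape)                                    # (T, R)
```

With this identification, the first $S$ columns of the simulated $\bm{H}_t$ are the factors plus noise, mirroring the statement above.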
A Bayesian approach to estimation
---------------------------------
The model described above is highly parameterized, containing more parameters than can be reasonably estimated with the data at hand. In this study, we use a Bayesian estimation approach to incorporate knowledge about parameter values via prior distributions. Before proceeding with the prior setting employed, it is convenient to stack the free elements of the factor loadings in an $L$-dimensional vector $\bm{\lambda}=\text{vec}[\bm{\Lambda}^F, \bm{\Lambda}^M]$ with $L=R (S+K)$, and the VAR coefficients in a $J$-dimensional vector $\bm{a}=\text{vec}(\bm{A})$ with $J=(S+K)^2 Q$.
Prior distributions for the state equation {#prior-distributions-for-the-state-equation .unnumbered}
------------------------------------------
For the VAR coefficients $a_j$ ($j = 1,\hdots,J$) we impose the Normal-Gamma shrinkage prior proposed in @griffin2010inference [@griffin2016hierarchical], and subsequently applied in a VAR framework in @Huber2017, $$a_j| \xi_a, \tau_{a j}^2 \sim \mathcal{N}\left(0, 2\ \xi_a^{-1} \tau^2_{a j}\right),$$ that is controlled by Gamma priors on $\tau^2_{a j}~(j = 1, \hdots, J)$ and $\xi_a$, $$\begin{aligned}
\tau^2_{a j} \sim \mathcal{G}(\vartheta_a, \vartheta_a), \label{eq:tausq}\\
\xi_a \sim \mathcal{G}(d_0, d_1), \label{eq:xi_a}\end{aligned}$$ with hyperparameters $\vartheta_a$ and $d_0$, $d_1$ respectively. $\tau^2_{a j}$ operates as a local scaling and $\xi_a$ as a global shrinkage parameter.
This hierarchical prior shows two convenient features. *First*, $\xi_a$ applies to all $J$ elements in $\bm{a}$. Higher values of $\xi_a$ yield stronger global shrinkage towards the origin whereas smaller values induce only little shrinkage. *Second*, the local scaling parameters $\tau^2_{a j}$ place sufficient prior mass of $a_j$ away from zero in the presence of strong overall shrinkage involved by large values for $\xi_a$.
The hyperparameter $\vartheta_a$ controls the excess kurtosis of the marginal prior, $$p(a_j | \xi_a) = \int p(a_j| \xi_a, \tau_{a j}^2 ) d\tau^2_{a j},$$ obtained after integrating over the local scales. Lower values of $\vartheta_a$ generally place increasing mass on zero, but at the same time lead to heavy tails, allowing for large deviations of $a_j$ from zero, if necessary. The hyperparameters $d_0$ and $d_1$ are usually set to rather small values to induce heavy overall shrinkage [see @griffin2010inference for more details].
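The behavior of this hierarchical prior can be illustrated by sampling from it directly. The sketch below is only illustrative; it assumes a shape-rate convention for $\mathcal{G}(\cdot,\cdot)$ (NumPy's `gamma` takes a scale, hence the reciprocal), and the hyperparameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_normal_gamma_prior(J, theta=0.1, d0=0.01, d1=0.01):
    """Draw a J-dimensional coefficient vector from the hierarchical Normal-Gamma prior."""
    xi = rng.gamma(shape=d0, scale=1.0 / d1)                  # global shrinkage xi_a
    tau2 = rng.gamma(shape=theta, scale=1.0 / theta, size=J)  # local scalings tau2_aj
    a = rng.normal(0.0, np.sqrt(2.0 * tau2 / xi))             # a_j | xi, tau2_j
    return a, tau2, xi

a, tau2, xi = draw_normal_gamma_prior(J=1000)
# Small theta places most draws close to zero while still allowing occasional large values.
print(np.median(np.abs(a)), np.max(np.abs(a)))
```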
For the variance-covariance matrix $\bm{\Sigma}_u$ we use an inverted Wishart prior, $$\bm{\Sigma}_u \sim \mathcal{IW}(v, \bm{\ubar{\Sigma}}),$$ with $v$ denoting prior degrees of freedom, while $\bm{\ubar{\Sigma}}$ is a prior scaling matrix of dimension $(S+K)\times(S+K)$.
Prior distributions for the observation equation {#prior-distributions-for-the-observation-equation .unnumbered}
------------------------------------------------
For the factor loadings $\lambda_{\ell}$ ($\ell = 1, \hdots, L$) we employ a Normal-Gamma prior similar to the one used for the VAR coefficients $a_j$ ($j = 1, \hdots, J$). The set-up follows @kastner with a single global shrinkage parameter $\xi_\lambda$ that applies to all free elements $\lambda_\ell$ in the factor loadings matrix. Specifically, we impose a hierarchical Gaussian prior on $\lambda_\ell$, $$\lambda_\ell|\xi_\lambda, \tau^2_{\lambda \ell} \sim \mathcal{N}\left(0, 2\ \xi_\lambda^{-1} \tau^2_{\lambda \ell}\right)$$ that depends on Gamma priors for $\tau^2_{\lambda \ell}~(\ell = 1, \hdots, L)$ and $\xi_\lambda$, $$\begin{aligned}
\tau^2_{\lambda\ell} \sim \mathcal{G}(\vartheta_\lambda, \vartheta_\lambda),\\
\xi_\lambda \sim \mathcal{G}(c_0, c_1).\end{aligned}$$ The hyperparameters $\vartheta_\lambda, c_0$ and $c_1$ control the tail behavior and overall degree of shrinkage of the prior.
For the measurement error variances $\sigma^2_r~(r = 1,\hdots,R)$ we rely on a sequence of independent inverted Gamma priors, $$\sigma_r^2 \sim \mathcal{G}^{-1}(e_0, e_1),$$ where the hyperparameters $e_0$ and $e_1$ are typically set to small values to reduce prior influence on $\sigma^2_r$.
Estimation of the model parameters and the latent factors is based on the MCMC algorithm described in the appendix. More specifically, we use Gibbs sampling to simulate a chain consisting of 20,000 draws, where we discard the first 10,000 draws as burn-in. It is worth noting that this algorithm shows fast mixing and satisfactory convergence properties.
Data and model implementation {#sec:dataimplementation}
=============================
Regions and Data
----------------
To explore regional differences in the impact of monetary policy on housing prices in the US, we need to define our notion of regions. Throughout the paper, we use $R = 417$ regions, a subsample of the 917 core-based statistical areas.[^4] These 417 regions include 264 metropolitan and 153 micropolitan statistical areas, briefly termed metropolitan regions in this paper. They have been selected based on the availability of the data over time. For the list of regions used, see the appendix.
Our dataset consists of a panel of monthly time series ranging from 1997:04 to 2012:06. The $R \times 1$ vector of housing prices at time $t$ is constructed using the Zillow Home Value Index.[^5] In contrast to the FHFA (Federal Housing Finance Agency) Index and the Standard & Poor’s Case-Shiller Index, the Zillow Home Value Index does not use a repeat sales methodology, but statistical models along with information from sales assessments to generate valuations for all homes (single family residences, condominiums and co-operative homes) in any given region.[^6] These valuations are aggregated to determine the Zillow Home Value Index, measured in US dollars. An estimate for any given property is meant to indicate the fair value of a home sold as a conventional non-foreclosure, arms-length sale [@winkler2013].
We include $K = 7$ variables in the $K \times 1$ vector of observable national aggregates: three economic variables, namely housing investment (measured as the quantity of housing starts), the industrial production index and the consumer price index. The one-year government bond rate serves as policy indicator in line with @gertler2015monetary. In addition, three credit-spreads are included: the ten-year treasury minus the federal funds rate, the prime mortgage spread calculated over the ten-year government bonds and the @gilchrist2012credit excess bond premium.[^7] The economic variables capture housing, price and output movements. The mortgage spread is relevant to the cost of housing finance and the excess bond premium to the cost of long term credit in the business sector, while the term spread measures expectations on short-term interest rates [@gertler2015monetary]. All observable national aggregates are taken from the FRED database [@McCracken2016], with the exception of the excess bond premium and the mortgage spread that we obtained from the dataset provided in @gertler2015monetary. All data series are seasonally adjusted, if applicable, and transformed to be approximately stationary.
Model implementation
--------------------
For implementation of the FAVAR, we have to specify the order $Q$ of the VAR process and the number of latent factors, $S$. As is standard in the literature, we pick $Q = 2$ lags of the endogenous variables. To decide on the number of factors we use the deviance information criterion [@spiegelhalter2002bayesian] where the full data likelihood is obtained by running the Kalman filter and integrating out the latent states. This procedure yields $S=1$, a choice that is also consistent with traditional criteria (Bayesian information criterion or Kaiser criterion) to select the number of factors.
Next and finally, a brief word on hyperparameter selection for the prior set-up. We specify $\vartheta_a = \vartheta_\lambda=0.1$, a choice that yields strong shrinkage but, at the same time, leads to heavy tails in the underlying marginal prior. Recent literature [@Huber2017] integrates out $\vartheta_a, \vartheta_\lambda$ and finds that, for US data, the posterior is centered on values between $0.10$ and $0.15$. The hyperparameters on the global shrinkage parameters are set equal to $c_0=c_1= d_0 =d_1 = 0.01$, a choice that induces heavy shrinkage towards the origin and represents a standard in the literature [@griffin2010inference]. The prior on $\bm{\Sigma}_u$ is specified to be weakly informative, i.e. $v= S+K+1$ and $\bm{\ubar{\Sigma}}= 10^{-2} \bm{I}_{S+K}$. Likewise, for the inverted Gamma prior on $\sigma_r^2~(r = 1, \hdots, R)$ we set $e_0=e_1=0.01$ to render the prior only weakly influential.
Impulse response analysis {#sec:results}
=========================
Structural identification of the model
--------------------------------------
The high-frequency variant of the external instruments identification approach [@kuttner2001monetary; @gurkaynak2005sensitivity] employed in this paper is based on surprises in the three-month-ahead futures rate, which reflect expectations of interest rate movements further into the future, measured within a 30-minute time window surrounding Federal Reserve announcements [@gertler2015monetary]. Note that, in contrast to the Cholesky identification strategy, there is no need to impose zero restrictions.
To implement the approach we follow @paul20017 and use high-frequency surprises as a proxy for the structural monetary policy shock. This is achieved by including the surprises in the state equation as an exogenous variable ${z}_t$, $$\bm{y}_t = \bm{A} \bm{x}_t + \bm{\zeta} {z}_t+ \bm{u}_t.$$ Here, $\bm{\zeta}$ is an $(S+K)$-dimensional vector of regression coefficients that collects the impact effects of the shock. @paul20017 shows that under mild conditions, the contemporaneous relative impulse responses can be estimated consistently.[^8] Note that the contemporaneous response of $\bm{y}_t$ to changes in $z_t$ is given by $\bm{\zeta}$. Higher order responses are defined recursively by exploiting the state space representation of the VAR model.
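The recursive construction of higher-order responses can be sketched by writing the VAR in companion form and propagating the impact vector $\bm{\zeta}$ forward. The function below is a schematic illustration with our own naming; it is not the estimation code of the paper.

```python
import numpy as np

def impulse_responses(A, zeta, horizon=72):
    """Responses of y_{t+h} to a unit shock in z_t for h = 0, ..., horizon.

    A    : (n, n*Q) stacked VAR coefficient matrix, n = S + K
    zeta : (n,) contemporaneous impact vector
    """
    n, nQ = A.shape
    companion = np.zeros((nQ, nQ))
    companion[:n, :] = A
    companion[n:, :-n] = np.eye(nQ - n)

    state = np.zeros(nQ)
    state[:n] = zeta                    # impact response equals zeta
    irf = [state[:n].copy()]
    for _ in range(horizon):
        state = companion @ state
        irf.append(state[:n].copy())
    return np.asarray(irf)

# The normalization to a 25 basis-point drop in the policy indicator (hypothetical position `pol`)
# would rescale the whole array: irf * (-0.25 / irf[0, pol]).
```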
Impulse responses of macroeconomic quantities
---------------------------------------------
We first consider the dynamic responses of the endogenous variables included in the $K \times 1$ vector $\bm{M}_t~(t = 1, \hdots, T)$ to illustrate that the results of the model are consistent with established findings in the literature. An expansionary monetary policy shock is modeled by taking the one-year government bond rate as the relevant policy indicator, rather than the federal funds rate that is commonly used in the literature; this choice is based on arguments presented in @gertler2015monetary.[^9] Normalization is achieved by assuming that the monetary policy shock yields a 25 basis-point decrease in the policy indicator.
[Figure (fig:IRF_macro): Impulse responses of the observable national aggregates (industrial production, housing starts, consumer prices, one-year government bond rate, term spread, mortgage spread, excess bond premium) to the expansionary monetary policy shock.]
\[fig:IRF\_macro\] depicts the impulse response functions of the endogenous variables. All plots include the median response (in blue) for 72 months after impact along with 68 percent posterior coverage intervals reflecting posterior uncertainty. An unanticipated decrease in the government bond rate by 25 basis-points causes a significant increase in real activity, with industrial production, housing investment and consumer prices all increasing over the next months after the impact. From a quantitative standpoint, the effects of the monetary shock on industrial production and consumer price index are considerably larger than the impact on housing investment, although uncertainty surrounding the size of impacts is large, and posterior coverage intervals include zero during the first months after impact. Housing investment shows a reaction similar in shape to real activity measured in terms of the industrial production index, suggesting a positive relationship between expansionary monetary policy and housing investment at the national level.
Turning to the responses of financial market indicators, it should be noted that the one-year government bond rate falls by 25 basis points on impact by construction, then increases significantly before it turns non-significant after about nine months. The term spread reacts adversely on impact, and we find significant deviations from zero that die out after about 16 months. This result points towards an imperfect pass-through of monetary policy to long-term rates, implying that long-term yields display a weaker decline as compared to short-term rates. The prime mortgage spread does not show a significant effect on impact, while responses between ten and 20 months ahead indicate a slightly negative overall reaction to expansionary monetary policy. Consistent with @gilchrist2012credit, one implication of this finding is that movements in key short-term interest rates tend to impact credit markets, with mortgage spreads showing a tendency to decline. The responses of the excess bond premium almost perfectly mirror the reaction of the mortgage spread. The effects, however, are much larger from a quantitative point of view.
To sum up, the results obtained by the impulse response analysis provide empirical support that monetary policy shocks, identified by using high-frequency surprises around policy announcements as external instruments, generate impulse responses of the endogenous variables that are consistent with the findings by @gertler2015monetary.
The dynamic factor and its loadings
-----------------------------------
Before moving to the impulse responses of regional housing markets to a monetary policy shock, we briefly consider the estimated latent factor as well as its loadings, with two aims in mind: first, to provide a rough intuition on how the latent factor captures co-movement in regional house price variations, and second, to give indication of the relative importance of individual regions shaping the evolution of the common factor.
\[fig:dynamic-factor\] shows the evolution of the negative latent factor (in solid red) and provides evidence that the common factor co-moves with the average growth rate of housing prices (in solid blue, calculated using the arithmetic mean of the individual regional housing prices) nearly perfectly. The figure illustrates that during the 2001 recession, housing price declines have been mild, while being substantial during the Great Recession, with large variations across space. It is worth noting that home prices fell the most during the late 2000s in regions with the largest declines in economic activity [@beraja2017].
[Figure (fig:dynamic-factor): Evolution of the (negative) latent housing factor together with the average growth rate of regional housing prices.]
While the figure above provides intuition on the shape of the latent housing factor, the question of how individual regions are linked to it still needs to be addressed. For this purpose, we report the posterior mean of the region-specific factor loadings in the form of a geographic map in which thinner lines denote the boundaries of the regions, while thicker lines signify US state boundaries. Visualization is based on a classification scheme with equal-interval breaks. We see that the great majority of regions exhibit negative loadings, and only 22 regions show positive values. Eighty regions have zero loadings or loadings for which the 16th to 84th percentile credible sets (68 percent posterior coverage) of the respective posterior distributions include zero. The pattern of factor loadings, evidenced by the map, indicates that the latent factor is largely driven by regions located in California, Arizona and Florida. Regions in the rest of the country, with loadings being either small in absolute terms or not significantly different from zero, tend to play only a minor role in shaping national housing prices.
Impulse responses of housing prices
-----------------------------------
\[fig:irf-factor\] displays the impulse response function of the latent factor over 72 months after impact to an expansionary monetary policy shock. The latent factor reacts positively after the shock; however, the posterior coverage interval includes zero for the first few months. This is consistent with economic theory which suggests that as the costs of financing a home purchase decrease, the demand for housing increases and as a result, real housing prices increase.
[Figure (fig:irf-factor): Impulse response of the latent housing factor to the expansionary monetary policy shock.]
Housing price responses, cumulated over the time horizon of six years, are displayed in the form of a geographic map.[^10] The classification scheme generates class breaks in standard deviation units (SD $= 2.98$) above and below the mean of $3.43$. Again, thinner lines denote the boundaries of the metropolitan regions and thicker lines those of US states.
Five points are worth noting here. *First*, cumulative regional housing price effects vary substantially over space, with size and modest sign differences among the regions. A few regions in Utah, New Mexico, Kansas, Oklahoma, Mississippi and West Virginia, but also in Louisiana and North Carolina, show no significant impact or even negative cumulative responses. In more than 97 percent of the regions, however, the cumulative response of housing prices is positive. *Second*, this heterogeneity may be due to varying sensitivity of housing to interest rates across space, and regional differences in housing markets, such as supply and demand elasticities [@fratatoni2003monetary]. For example, supply elasticities are relatively low on the East and West Coasts, but higher in the South and Southwest parts of the US. *Third*, the largest cumulative effects can be observed in states on both the East and West Coasts, notably Riverside, Madera, Merced and Bakersfield in California, Miami-Fort Lauderdale and Key West in Florida, but also Las Vegas and Fernley in Nevada. These regions, especially those in California, seem to play an important role in shaping the movement of the US housing price following a monetary shock.
*Fourth*, the regions in the East North Central states (as defined by the Census Bureau), but also in Georgia and Massachusetts have cumulative home price responses that resemble the mean response of the US regions within a 0.5 standard deviation band from the mean. Prominent examples include Atlanta ($2.97$), Boston ($3.76$) and Chicago ($3.88$). *Fifth* and finally, cumulative responses tend to be similar within states and adjacent regions in neighboring states. Looking at the map, this spatial autocorrelation phenomenon becomes particularly evident in the case of the Californian regions. This is most likely due to the importance of new house construction industries in California, along with the spatial influence the Californian housing market has on regions in neighboring states, especially Nevada and Arizona.
For reasons of space we cannot present the 417 impulse response functions of the individual metropolitan regions, but report those of six regions below. We pick these regions as examples of metropolitan regions with larger positive cumulative responses (Riverside and Miami-Fort Lauderdale) and of regions with negative cumulative responses (Salt Lake City and Hickory). Recall that there are only eleven regions belonging to this latter category. For comparison, we also display the impulse response functions of two regions that closely resemble the mean response of US regions (Chicago and Boston). Again, the solid blue line denotes the median response and the shaded areas (in light blue) the 68 percent posterior coverage intervals.
[Figure (fig:IRF_selected): Housing price impulse responses for six selected regions: Riverside, Salt Lake City and Chicago (top row); Miami-Fort Lauderdale, Hickory and Boston (bottom row).]
\[fig:IRF\_selected\] reveals profound differences in dynamic responses between the three regional categories, especially in shape and duration of effects. The differences within the categories tend to be rather small. In case of the first category, represented by the metropolitan regions Riverside and Miami-Fort Lauderdale, an expansionary monetary policy shock generates a significant increase of housing prices. This level remains stable and significant in the short-run, before fading away after approximately three years. The charts of Salt Lake City and Hickory, examples of the second regional category, show the housing price responses to fall strongly immediately, and these effects remain significantly negative for less than one year after impact. The response pattern of housing prices in Chicago and Boston is different. The effects are small in size, and hardly different from zero, with the exception of weakly significant effects between the third and fourth year after impact.
Closing remarks
===============
This paper has examined the relationship between monetary policy and the US housing market, focusing on monetary policy shocks. The analysis is based on a Bayesian FAVAR model where monetary policy shocks are identified using high-frequency surprises around policy announcements as external instruments. Bayesian model estimation uses Gibbs sampling with Normal-Gamma shrinkage priors for both the autoregressive coefficients and factor loadings, relying on a panel of monthly time series, ranging from 1997:04 to 2012:06, for a set of 417 regions.
The main findings of our analysis can be summarized as follows. The results provide empirical evidence that metropolitan regions react differently to an expansionary monetary policy shock, revealing magnitude and duration differences, and pointing to some modest sign differences in addition. The extent and nature of regional heterogeneity are consistent with @fratatoni2003monetary who report impulse responses for house prices in 27 US metropolitan regions. Since our sample of regions covers the whole US (except Alaska, Maine, South Dakota and Wyoming) rather than only 16 US states, we find considerably greater regional heterogeneity in the results. In line with theory, the great majority of regions exhibit positive home price responses. The largest positive effects, cumulated over the time horizon of six years, can be observed in regions located in states on both the East and West Coasts, notably California, Arizona and Florida, and in Nevada. Impulse responses of regions tend to be similar within states and adjacent regions in neighboring states, evidenced by a high degree of spatial dependence among the impulse responses, as measured in terms of Moran’s *I* statistic.
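For reference, spatial dependence of the cumulative responses can be summarized with Moran's $I$ as in the statement above. The sketch below is a generic implementation; the weight matrix `W_contig` and the response vector `cum_irf` are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for values x (length N) and a spatial weight matrix W (N x N, zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# Example call with hypothetical inputs:
# print(morans_i(cum_irf, W_contig))
```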
Finally, it is worth noting that our analysis is confined to a linear setting, implying that the underlying transmission mechanism is constant over time. This assumption simplifies the analysis, but may be overly simplistic in turbulent economic times such as the collapse of the housing market around the Great Recession. Hence, an extension of the linear setting to allow for non-linearities – in the spirit of @huber2018 – might be a promising avenue for future research.
The MCMC algorithm {#app:mcmc}
==================
We estimate the model by running an MCMC algorithm. The full conditional posterior distributions are available in closed form implying that we can apply Gibbs sampling to obtain draws from the joint posterior distribution. More specifically, our MCMC algorithm involves the following steps:
1. Simulate the VAR coefficients $a_j~(j = 1,\hdots,J)$ conditional on the factors and remaining model parameters from a multivariate Gaussian distribution that takes a standard form [see, for instance, @george2008bayesian for further information].
2. Simulate the latent factors $\bm{F}_t~(t = 1, \hdots, T)$ by using forward filtering backward sampling [@carter1994gibbs; @fruhwirth1994data].
3. The error variance-covariance matrix $\bm{\Sigma}_u$ is simulated from an inverted Wishart posterior distribution with degrees of freedom equal to $\nu = v + T$ and scaling matrix equal to $\bm{P}=\sum_{t=1}^T (\bm{y}_t-\bm{A} \bm{x}_t)(\bm{y}_t-\bm{A} \bm{x}_t)' + \bm{\ubar{\Sigma}}$.
4. Simulate the factor loadings $\lambda_\ell~(\ell = 1, \hdots, L)$ from Gaussian posteriors (conditioned on the remaining parameters and the latent factors) by running a sequence of $(R-S)$ unrelated regression models.
5. The measurement error variances $\sigma_r^2$ for $r = S+1, \dots, R$ are simulated independently from an inverse Gamma distribution $\sigma_r^2| \Xi \sim \mathcal{G}^{-1}(\alpha_r, \beta_r)$ with $\alpha_r = \frac{1}{2} T + e_0$ and $\beta_r = \frac{1}{2} \sum_{t=1}^T (H_{rt} - \bm{\Lambda}^F_{r \bullet} \bm{F}_t - \bm{\Lambda}^M_{r \bullet} \bm{M}_t)^2+e_1$. The notation $\bm{\Lambda}^F_{r \bullet}$ indicates that the $r$th row of the matrix concerned is selected and $\Xi$ stands for conditioning on the remaining parameters and the data.
6. Simulate $\tau^2_{a j}~(j=1,\dots,J)$ from a generalized inverted Gaussian distributed posterior distribution with $$\tau^2_{a j}|\Xi \sim \mathcal{GIG}\left(\vartheta_a - \frac{1}{2}, a_j^2, \vartheta_a \xi_a\right).$$
7. Draw $\xi_a$ from a Gamma distributed posterior given by $$\xi_a|\Xi \sim \mathcal{G}\left(d_0 + \vartheta_a J, d_1 + \frac{1}{2} \vartheta_a \sum_{j=1}^J \tau^2_{a j}\right).$$
8. Simulate the posterior of $\tau^2_{\lambda \ell}~(\ell=1,\dots,L)$ from a generalized inverted Gaussian distribution, $$\tau^2_{\lambda \ell}|\Xi \sim \mathcal{GIG}\left(\vartheta_\lambda-\frac{1}{2}, \lambda_\ell^2, \vartheta_\lambda \xi_\lambda\right).$$
9. Finally, the global shrinkage parameter $\xi_\lambda$ associated with the prior on the factor loadings is simulated from a Gamma distribution, $$\xi_\lambda|\Xi \sim \mathcal{G}\left(c_0 +\vartheta_\lambda L, c_1 + \frac{1}{2}\vartheta_\lambda \sum_{\ell=1}^L \tau^2_{\lambda \ell}\right).$$
Steps described above are iterated for 20,000 cycles, where we discard the first 10,000 draws as burn-in.
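As an illustration of the Normal-Gamma updates in steps 6-9, the fragment below draws the local scalings from a generalized inverse Gaussian and the global shrinkage parameter from a Gamma distribution. It assumes that the GIG$(\lambda,\chi,\psi)$ convention above maps to SciPy's `geninvgauss` via $p=\lambda$, $b=\sqrt{\chi\psi}$ and scale $\sqrt{\chi/\psi}$, and a shape-rate Gamma parameterization; it is a schematic sketch, not the authors' implementation.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(2)

def draw_tau2(a, theta, xi):
    """Step 6: tau2_j | .  ~  GIG(theta - 1/2, a_j^2, theta * xi)."""
    chi = a**2 + 1e-12                 # tiny jitter keeps chi strictly positive
    psi = theta * xi
    return geninvgauss.rvs(theta - 0.5, np.sqrt(chi * psi),
                           scale=np.sqrt(chi / psi), random_state=rng)

def draw_xi(tau2, theta, g0, g1):
    """Steps 7 and 9: xi | . ~ Gamma(g0 + theta * dim, rate = g1 + 0.5 * theta * sum(tau2))."""
    shape = g0 + theta * tau2.size
    rate = g1 + 0.5 * theta * tau2.sum()
    return rng.gamma(shape, 1.0 / rate)

# One conditional update for an illustrative coefficient vector:
a = rng.normal(scale=0.3, size=50)
tau2 = draw_tau2(a, theta=0.1, xi=1.0)
xi = draw_xi(tau2, theta=0.1, g0=0.01, g1=0.01)
```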
Regions used in the study {#app:regions}
=========================
Regions in this study are defined as core-based statistical areas (CBSA) that – by definition of the United States Office of Management and Budget – are based on the concept of a core area of at least 10,000 population, plus adjacent counties having at least 25 percent of employed residents of the county who work in the core area. Core-based statistical areas may be categorized as being either metropolitan or micropolitan. The 917 core-based statistical areas include 381 metropolitan statistical areas which have an urban core population of at least 50,000, and 536 micropolitan statistical areas which have an urban core population of at least 10,000 but less than 50,000. In this study we use 264 metropolitan and 153 micropolitan statistical areas, due to limited availability of data. These 417 regions, briefly termed metropolitan regions in this paper, represent all US states except Alaska, Maine, South Dakota and Wyoming.
**Metropolitan statistical areas**

**State** & **Region**\
Alabama & Birmingham, Daphne, Mobile, Montgomery, Tuscaloosa\
Arizona & Flagstaff, Lake Havasu City, Phoenix, Prescott, Sierra Vista, Tucson, Yuma\
Arkansas & Fayetteville, Fort Smith, Hot Springs, Jonesboro, Little Rock\
California & Bakersfield, Chico, El Centro, Fresno, Hanford, Los Angeles-Long Beach-Anaheim, Madera, Merced, Modesto, Napa, Redding, Riverside, Sacramento, Salinas, San Diego, San Francisco, San Jose, San Luis Obispo, Santa Cruz, Santa Maria-Santa Barbara, Santa Rosa, Stockton, Vallejo, Ventura, Visalia, Yuba City\
Colorado & Boulder, Colorado Springs, Denver, Fort Collins, Grand Junction, Greeley, Pueblo\
Connecticut & Hartford, New Haven, New London, Stamford\
Delaware & Dover\
District of Columbia & Washington\
Florida & Crestview-Fort Walton Beach-Destin, Daytona Beach, Fort Myers, Gainesville, Homosassa Springs, Jacksonville, Lakeland, Melbourne, Miami-Fort Lauderdale, Naples, North Port-Sarasota-Bradenton, Ocala, Orlando, Panama City, Pensacola, Port St. Lucie, Punta Gorda, Sebring, Tallahassee, Tampa, The Villages, Vero Beach\
Georgia & Albany, Athens, Atlanta, Augusta, Columbus, Dalton, Gainesville, Hinesville, Macon, Savannah, Valdosta, Warner Robins\
Hawaii & Kahului, Urban Honolulu\
Idaho & Boise City, Idaho Falls, Lewiston\
Illinois & Bloomington, Chicago, Davenport, Kankakee, Springfield\
Indiana & Bloomington, Elkhart, Evansville, Fort Wayne, Lafayette-West Lafayette, Muncie, South Bend, Terre Haute\
Iowa & Des Moines\
Kansas & Lawrence\
Kentucky & Lexington, Louisville-Jefferson County\
Louisiana & Alexandria, Baton Rouge, Houma, Lafayette, Lake Charles\
Nebraska & Grand Island, Lincoln, Omaha\
Nevada & Las Vegas, Reno\
New Hampshire & Manchester\
New Jersey & Ocean City, Trenton, Vineland\
New Mexico & Albuquerque, Las Cruces, Santa Fe\
New York & Albany, Binghamton, Elmira, Glens Falls, Ithaca, Kingston, New York, Rochester, Syracuse, Watertown\
North Carolina & Asheville, Burlington, Charlotte, Durham, Fayetteville, Greensboro, Hickory, Raleigh, Rocky Mount, Wilmington, Winston-Salem\
North Dakota & Fargo\
Ohio & Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Lima, Springfield, Toledo, Youngstown\
Oklahoma & Oklahoma City, Tulsa\
Oregon & Albany, Bend, Corvallis, Eugene, Grants Pass, Medford, Portland, Salem\
Maryland & Baltimore, California-Lexington Park, Cumberland, Hagerstown, Salisbury\
Massachusetts & Boston, Cape Cod, Pittsfield, Springfield, Worcester\
Michigan & Ann Arbor, Battle Creek, Bay City, Grand Rapids, Jackson, Lansing, Midland, Monroe, Muskegon, Saginaw\
Minnesota & Mankato, Minneapolis-St Paul, Rochester\
Mississippi & Hattiesburg, Jackson\
Missouri & Columbia, Joplin, Springfield, St. Louis\
Pennsylvania & Allentown, Altoona, Erie, Harrisburg, Lancaster, Philadelphia, Pittsburgh, Reading, Scranton, State College, York\
Rhode Island & Providence\
South Carolina & Columbia, Florence, Greenville, Hilton Head Island, Myrtle Beach, Spartanburg\
Tennessee & Chattanooga, Clarksville, Cleveland, Jackson, Johnson City, Kingsport, Knoxville, Nashville\
Texas & Amarillo, Brownsville, College Station, Dallas-Fort Worth, El Paso, Killeen, Laredo, Midland, Texarkana\
Utah & Ogden, Provo, Salt Lake City, St. George\
Virginia & Charlottesville, Harrisonburg, Richmond, Roanoke, Staunton, Virginia Beach, Winchester\
Washington & Bellingham, Kennewick, Longview, Olympia, Seattle, Spokane, Walla Walla, Yakima\
West Virginia & Charleston\
Wisconsin & Appleton, Eau Claire, Fond du Lac, Janesville, La Crosse, Madison, Oshkosh, Racine\
**Micropolitan statistical areas**

**State** & **Region**\
Arizona & Nogales, Payson, Safford\
Arkansas & Batesville, Harrison, Paragould, Russellville, Searcy\
California & Clearlake, Eureka, Red Bluff, Susanville, Truckee\
Colorado & Durango, Glenwood Springs, Montrose, Sterling\
Connecticut & Torrington\
Florida & Clewiston, Key West, Lake City, Okeechobee, Palatka\
Georgia & Bainbridge, Calhoun, Cedartown, Dublin, Jesup, Moultrie, St. Marys, Thomaston, Tifton, Vidalia, Waycross\
Hawaii & Hilo\
Idaho & Burley\
Illinois & Effingham, Jacksonville\
Indiana & Angola, Auburn, Bedford, Connersville, Crawfordsville, Decatur, Frankfort, Greensburg, Huntington, Jasper, Kendallville, Logansport, Madison, Marion, New Castle, North Vernon, Peru, Plymouth, Richmond, Seymour, Vincennes, Wabash, Warsaw, Washington\
Kansas & Garden City\
Kentucky & Danville, Murray\
Louisiana & Opelousas\
Nebraska & North Platte\
Nevada & Elko, Fernley, Gardnerville Ranchos\
New Hampshire & Concord, Keene, Laconia\
New York & Amsterdam, Batavia, Corning, Cortland, Gloversville, Hudson, Olean, Oneonta, Plattsburgh, Seneca Falls\
North Carolina & Albemarle, Morehead City, Sanford, Wilson\
Ohio & Ashtabula, Coshocton, Defiance, Findlay, Jackson, New Philadelphia, Portsmouth, Sandusky, Urbana, Wooster\
Oklahoma & Ardmore, Bartlesville, Duncan, Durant, Enid, McAlester, Tahlequah\
Oregon & Coos Bay, Hermiston-Pendleton, Klamath Falls, Ontario, Roseburg, The Dalles\
Maryland & Cambridge, Easton\
Massachusetts & Greenfield Town, Vineyard Haven\
Michigan & Adrian, Hillsdale, Holland, Ionia, Ludington, Owosso\
Minnesota & Owatonna, Willmar, Winona\
Mississippi & Cleveland, Columbus, Corinth, Grenada, Laurel, Oxford, Picayune, Tupelo, Vicksburg\
Missouri & Mexico\
Pennsylvania & Indiana, Lock Haven, Oil City, Pottsville\
South Carolina & Orangeburg\
Tennessee & Cookeville, Lawrenceburg, Lewisburg, Martin, Paris, Sevierville, Shelbyville, Tullahoma\
Virginia & Danville, Martinsville\
Washington & Oak Harbor, Port Angeles, Shelton\
Wisconsin & Baraboo, Marinette, Whitewater\
Robustness check {#app:robustness}
================
To assess the sensitivity of our results with respect to identification of the monetary policy shock, we use an alternative strategy based on contemporaneous sign restrictions [see @uhlig2005effects; @DEDOLA2007512]. Technical implementation is achieved by using the algorithm proposed in @arias2014inference that collapses to the procedure outlined in @rubio2010structural in the absence of zero restrictions. For each iteration of the MCMC algorithm we draw a rotation matrix and assess whether the following set of sign restrictions is satisfied. Consistent with economic common sense, output (measured in terms of the industrial production index), housing investment (measured in terms of housing starts) and consumer prices (measured in terms of the consumer price index) are bound to increase on impact. Moreover, we assume that the term-spread also widens on impact. Finally, consistent with the normalization adopted when using external instruments, we assume that the one-year yield declines. If this is the case, we keep the rotation matrix and store the associated structural coefficients, while if the sign restrictions are not met, we reject the draw and repeat the procedure.
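A minimal sketch of this accept-reject scheme is given below. It draws a Haar-distributed rotation through a QR decomposition and keeps a candidate impact matrix only if the impact signs match the restrictions; the variable ordering in `restrictions` is hypothetical and only illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_rotation(n):
    """Haar-distributed orthogonal matrix via the QR decomposition of a Gaussian matrix."""
    Q, Rm = np.linalg.qr(rng.normal(size=(n, n)))
    return Q @ np.diag(np.sign(np.diag(Rm)))

def sign_restricted_impact(Sigma_u, restrictions, max_tries=10_000):
    """Return an impact matrix B = chol(Sigma_u) @ Q whose first column satisfies the sign checks."""
    P = np.linalg.cholesky(Sigma_u)
    for _ in range(max_tries):
        B = P @ draw_rotation(Sigma_u.shape[0])
        col = B[:, 0]
        if all(np.sign(col[i]) == s for i, s in restrictions):
            return B
    return None   # no admissible rotation found for this parameter draw

# Hypothetical ordering [IP, housing starts, CPI, 1y rate, term spread, ...]:
# restrictions = [(0, +1), (1, +1), (2, +1), (3, -1), (4, +1)]
```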
The results are displayed in the form of a geographic map with a classification scheme that generates class breaks in standard deviation measures above and below the mean. A comparison with the baseline map confirms the robustness of our results.
[^1]: Housing is defined here to include family residences, condominiums and co-operative homes.
[^2]: Their empirical analysis uses a small set of 27 US regions to analyze the effects of monetary policy, based on quarterly data from 1986 to 1996. Aside from this study, metropolitan-level housing data have not been explored very much.
[^3]: For the definition of our notion of region and the list of regions used, see the appendix.
[^4]: A core-based statistical area is a US geographic area – defined by the Office of Management and Budget – that consists of one or more counties anchored by an urban center of at least 10,000 people plus adjacent counties that are socioeconomically tied to the urban center. The term core-based statistical area refers collectively to both metropolitan and micropolitan statistical areas.
[^5]: The Zillow Home Value Index has the benefit of a broad coverage of the large set of core-based statistical areas. The set of data we use in our study is available for download at <https://www.zillow.com/research/data/>.
[^6]: For more information on the proprietary valuation model used by Zillow to estimate the market value of a home, see @bruce2014zillow.
[^7]: The excess bond premium may roughly be seen as the component of the spread between an index of yields on corporate fixed income securities and a similar maturity government bond rate that is left after removing the component due to default risk [@gertler2015monetary]. @gilchrist2012credit show that this variable provides a convenient summary of additional information not included in the FAVAR that may be relevant to economic activity.
[^8]: Relative impulse responses are obtained by normalizing the absolute impulse responses, i.e. the change in $\bm{y}_{t+h}$ to a change in $z_t$, by the contemporaneous response of some element in $\bm{y}_{t}$.
[^9]: @gertler2015monetary have shown that the one-year bond rate has a much stronger impact on market interest rates than the funds rate does.
[^10]: The quantitative and qualitative nature of the results is robust to an alternative identification scheme, in which sign restrictions have been employed (for the results see \[app:robustness\]).
---
author:
- 'Yukihiro Shimizu$^{1}$, Koji Matsuura$^{1}$ and Hikaru Yahagi$^{1}$'
title: Application of matrix product states to the Hubbard model in one spatial dimension
---
Introduction
============
The variational method using so-called matrix product states (MPS) as a trial wave function has been developed to understand the physics of low-dimensional correlated quantum systems[@mps1; @mps2]. The method has allowed a deeper understanding of the details and limitations of the density-matrix renormalization group method, which is regarded as one of the most powerful tools for correlated quantum systems in one spatial dimension. The MPS method was originally developed to simulate quantum spin systems. There are many investigations using MPS for spin systems; however, there are not many applications to fermionic systems. The negative sign due to the anti-commutation relation of the fermion operators has been treated in the framework of the multi-scale entanglement renormalization ansatz (MERA)[@mera1; @mera2] or the projected entangled pair states (PEPS)[@peps1; @peps2]. In this paper we explain a method in which the sign can be treated exactly within the framework of matrix product operators (MPO). Our method is relatively simple and applicable to models with long-range hopping, suggesting that this treatment of the sign in the MPO may be efficient even for models in two spatial dimensions. We will examine the numerical accuracy of the ground-state energy.
Matrix Product States and Matrix Product Operators
==================================================
Open Boundary Condition
-----------------------
We consider the Hubbard model of $L$ sites in one spatial dimension with open boundary conditions. In order to find the ground state of the model, we employ a variational approach. The trial function, the so-called MPS, is taken to be $$\left|\psi \rangle \right.
= \sum_{\alpha_1, \alpha_2, \cdots, \alpha_L}
A^{\alpha_1}A^{\alpha_2}\cdots A^{\alpha_L}
\left|\alpha_1, \alpha_2, \cdots, \alpha_L \rangle \right.,
\label{mps1}$$ where $\alpha_\ell=0, \uparrow, \downarrow$ or $(\uparrow\downarrow)$ labels the $d$-dimensional local states ($d=4$ for the Hubbard model; $d$ is called the physical dimension) at site $\ell$. The matrices $\{A^{\alpha_\ell}\}$ are the variational parameters, whose dimensions are $(1\times d), (d \times {\rm min}(d^2,D)),
\cdots,
({\rm min}(d^{L/2-1},D) \times {\rm min}(d^{L/2},D))$, $
({\rm min}(d^{L/2},D) \times {\rm min}(d^{L/2-1},D)),
\cdots,
({\rm min}(d^2,D)\times d),(d\times 1)$, going from $\ell=1$ to $L$. The quantity $D$ is the bond dimension, which controls the size of the variational space.
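As a concrete illustration of these dimensions, the short sketch below builds a random open-boundary MPS for the Hubbard chain with physical dimension $d=4$. It only shows how the bond dimensions grow as powers of $d$ and saturate at $D$; the tensors are random and carry no physics.

```python
import numpy as np

def random_mps(L, d=4, D=16, rng=np.random.default_rng(4)):
    """Random open-boundary MPS: a list of tensors A[l] with shape (D_left, d, D_right)."""
    dims = [1] + [min(d**min(l, L - l), D) for l in range(1, L)] + [1]
    return [rng.normal(size=(dims[l], d, dims[l + 1])) for l in range(L)]

mps = random_mps(L=10)
print([A.shape for A in mps])   # bond dimensions grow as min(d**l, D) and then shrink again
```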
In order to optimize $A^{\alpha_\ell}$ locally and sequentially, we decompose the Hamiltonian into the MPO as, $$\begin{aligned}
{\cal H}_{\rm OBC}&=-t\sum_{\ell=1, \sigma}^{L-1}
\left(c_{\ell \sigma}^\dagger c_{\ell+1 \sigma}
+ {\rm h.c.}
\right)
+ U\sum_{\ell=1}^L n_{\ell \uparrow}n_{\ell \downarrow}
=\prod_{\ell=1}^LW^{[\ell]},\\ %%%%
W^{[1]} &=
\begin{pmatrix}
d_1 & tc_{1\uparrow}& tc_{1 \downarrow} & -tc_{1 \uparrow}^\dagger & -tc_{1 \downarrow}^\dagger & I
\end{pmatrix},\\ %%%%
W^{[2 \le \ell \le L-1]} &=
\begin{pmatrix}
I & 0 & 0 & 0 & 0 & 0 \\
c_{\ell \uparrow}^\dagger & 0 & 0 & 0 & 0 & 0 \\
c_{\ell \downarrow}^\dagger & 0 & 0 & 0 & 0 & 0 \\
c_{\ell \uparrow} & 0 & 0 & 0 & 0 & 0 \\
c_{\ell \downarrow} & 0 & 0 & 0 & 0 & 0 \\
d_\ell & tc_{\ell \uparrow} & tc_{\ell \downarrow} & -tc_{\ell \uparrow}^\dagger & -tc_{\ell \downarrow}^\dagger & I
\end{pmatrix}, \qquad %%%%
W^{[L]} =
\begin{pmatrix}
I \\
c_{L \uparrow}^\dagger \\
c_{L \downarrow}^\dagger \\
c_{L \uparrow}\\
c_{L \downarrow}\\
d_L
\end{pmatrix},\end{aligned}$$ where $I$ is a unit matrix of $d \times d$, and $d_\ell=U n_{\ell \uparrow}n_{\ell \downarrow}$. The matrix elements of ${\cal H}_{\rm OBC}$ are given as, $$\begin{aligned}
&\langle \alpha_1, \cdots, \alpha_L | {\cal H}_{\rm OBC} | \alpha_1', \cdots, \alpha_L' \rangle
= \sum_{b_1,\cdots,b_{L-1}}\prod_{\ell =1}^L
W^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell}, \\
& W^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell}
\equiv (-1)^{sign}
\langle 0 |
\left(c_{\ell \downarrow}\right)^{n_{\ell\downarrow}(\alpha_\ell)}
\left(c_{\ell \uparrow}\right)^{n_{\ell\uparrow}(\alpha_\ell)}
\left(
W^{[\ell]}
\right)_{b_{\ell-1} b_\ell}
\left(c_{\ell \uparrow}^\dagger\right)^{n_{\ell\uparrow}(\alpha_\ell')}
\left(c_{\ell \downarrow}^\dagger\right)^{n_{\ell\downarrow}(\alpha_\ell')}
|0\rangle,\end{aligned}$$ where $(W^{[\ell]})_{b_{\ell-1} b_\ell}$ is the $(b_{\ell-1}, b_\ell)$-th matrix element of $W^{[\ell]}$. In order to satisfy the anti-commutation relation of fermion operators, the sign, $(-1)^{sign}$, should be given as, $$\begin{aligned}
(-1)^{sign} &=
\left\{ \begin{array}{ll}
-1 & \text{if $W^{\alpha_1 \alpha_1'}_{1 2}$, $W^{\alpha_1 \alpha_1'}_{1 3}$, $W^{\alpha_1 \alpha_1'}_{1 4}$ and $W^{\alpha_1 \alpha_1'}_{1 5}$
with $\alpha_1' = \uparrow$ or $\downarrow$} \\
+1 & \text{otherwise}
\end{array}
\right.,\end{aligned}$$ for $\ell=1$, and $$\begin{aligned}
(-1)^{sign} &=
\left\{ \begin{array}{ll}
-1 & \text{if $W^{\alpha_\ell \alpha_\ell'}_{6 2}$, $W^{\alpha_\ell \alpha_\ell'}_{6 3}$, $W^{\alpha_\ell \alpha_1'}_{6 4}$ and $W^{\alpha_\ell \alpha_\ell'}_{6 5}$
with $\alpha_\ell' = \uparrow$ or $\downarrow$} \\
+1 & \text{otherwise}
\end{array}
\right.,\end{aligned}$$ for $2 \le \ell \le L-1$, and $(-1)^{sign} =+1$ for $\ell=L$, respectively.
When we optimize $A^{\alpha_\ell}$ sequentially from $\ell=1$ to $L$ and back, and so on ($\ell=1, 2, \cdots, L-1, L, L-1, \cdots, 2, 1, 2, \cdots$), the problem of minimizing $I(\{A^{\alpha_\ell}\})
=\langle \psi | {\cal H} |
\psi\rangle
/ \langle \psi | \psi \rangle
$ is efficiently reduced to the solution of a simple matrix eigenvalue problem at each site. We call this optimization path a [*round trip*]{}. The optimization process for $A^{\alpha_\ell}$ is similar to the optimal algorithm for spin models reviewed in Section 6 of Ref. [@mps2]. There is no numerical instability in the iterative calculation. The details of the optimization will be published elsewhere.
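A schematic version of one local update in such a sweep is sketched below: given left and right environment tensors and the local MPO tensor, the effective Hamiltonian is assembled and its lowest eigenpair is computed. The index conventions and tensor names are our own assumptions and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def local_update(Lenv, W, Renv):
    """One-site optimization step.

    Assumed index conventions:
      Lenv[a, w, a'], W[w, w', s, s'], Renv[b, w', b'],
    so the effective Hamiltonian acts on a local tensor A[a, s, b].
    """
    Dl = Lenv.shape[0]
    d = W.shape[2]
    Dr = Renv.shape[0]
    Heff = np.einsum('awx,wvst,bvy->asbxty', Lenv, W, Renv).reshape(Dl * d * Dr, -1)
    energy, vec = eigsh(Heff, k=1, which='SA')   # lowest eigenpair of the (Hermitian) Heff
    return energy[0], vec[:, 0].reshape(Dl, d, Dr)
```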
Periodic Boundary Condition
---------------------------
We consider two types of MPS for the periodic boundary condition. The first, $|\psi^{\rm (I)} \rangle$, is identical to the MPS used for the open boundary condition, as shown in equation (\[mps1\]). Since a matrix product can be obtained from successive singular value decompositions (SVD) of the eigenvector whenever the Hamiltonian can be diagonalized, it is still natural to assume the form of $|\psi^{\rm (I)} \rangle$ as the trial function for the periodic boundary condition. The other, $|\psi^{\rm (II)} \rangle$, is given as $$|\psi^{({\rm II})} \rangle
= \sum_{\alpha_1, \alpha_2, \cdots, \alpha_L}
{\rm Tr}(A^{\alpha_1}A^{\alpha_2}\cdots A^{\alpha_L})
\left|\alpha_1, \alpha_2, \cdots, \alpha_L \rangle \right.,$$ where all matrices $A^{\alpha_\ell}$ are assumed to have dimension $D\times D$. The site-independent dimension of the variational parameters may be a practical advantage. However, the cost of evaluating the matrix elements $\langle \psi^{({\rm II})} | H_{\rm PBC} |\psi^{({\rm II})} \rangle$ increases as $dD^5$, compared to $dD^3$ when the type-I MPS $|\psi^{({\rm I})} \rangle$ is used.
The MPO for the periodic boundary condition is given as $$\begin{aligned}
{\cal H}_{\rm PBC}&={\cal H}_{\rm OBC}
-t\sum_{\sigma}
\left(c_{L \sigma}^\dagger c_{1 \sigma}
+ {\rm h.c.}
\right)
=\prod_{\ell=1}^LW^{[\ell]} + \prod_{\ell=1}^LW'^{[\ell]},\\ %%%%%
W'^{[1]} &=
\begin{pmatrix}
tc_{1\uparrow}& tc_{1 \downarrow} & -tc_{1 \uparrow}^\dagger & -tc_{1 \downarrow}^\dagger
\end{pmatrix},\\ %%%%%
W'^{[2\le\ell\le L-1]} &=
\begin{pmatrix}
I & 0 & 0 & 0 \\
0 & I & 0 & 0 \\
0 & 0 & I & 0 \\
0 & 0 & 0 & I
\end{pmatrix}, \qquad %%%%%
W'^{[L]} =
\begin{pmatrix}
c_{L \uparrow}^\dagger \\
c_{L \downarrow}^\dagger \\
c_{L \uparrow}\\
c_{L \downarrow}
\end{pmatrix},\end{aligned}$$ where $W'^{[1]}$ and $W'^{[L]}$ denote the transfer term between $\ell=1$ and $L$. The matrices, $W'^{[2\le\ell\le L-1]}$, are introduced to treat the anti-commutation relation of fermion operators. The matrix elements of ${\cal H}_{\rm PBC}$ are given as, $$\begin{aligned}
&\langle \alpha_1, \cdots, \alpha_L | {\cal H}_{\rm PBC} | \alpha_1', \cdots, \alpha_L' \rangle
= \sum_{b_1,\cdots,b_{L-1}}\prod_{\ell =1}^L
W^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell}
+\sum_{b_1,\cdots,b_{L-1}}\prod_{\ell =1}^L
W'^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell},\\
&W'^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell}
\equiv (-1)^{sign}
\langle 0 |
\left(c_{\ell \downarrow}\right)^{n_{\ell\downarrow}(\alpha_\ell)}
\left(c_{\ell \uparrow}\right)^{n_{\ell\uparrow}(\alpha_\ell)}
\left(
W'^{[\ell]}
\right)_{b_{\ell-1} b_\ell}
\left(c_{\ell \uparrow}^\dagger\right)^{n_{\ell\uparrow}(\alpha_\ell')}
\left(c_{\ell \downarrow}^\dagger\right)^{n_{\ell\downarrow}(\alpha_\ell')}
|0\rangle.\end{aligned}$$ In order to satisfy the anti-commutation relation of fermion operators, the sign, $(-1)^{sign}$, should be given as, $$\begin{aligned}
(-1)^{sign} =
\left\{ \begin{array}{ll}
-1 & \text{if ($1 \le \ell \le L-1$) and ($\alpha_\ell' = \uparrow$ or $\downarrow$)} \\
+1 & \text{otherwise}
\end{array}
\right..\end{aligned}$$ We note that the treatment of the sign in $W'^{\alpha_\ell \alpha_\ell'}_{b_{\ell-1} b_\ell}$ is one example of how to deal with long-range hopping in a Hamiltonian. In the case of next-nearest-neighbor or third-nearest-neighbor hopping, extending $W$ itself, rather than using a separate $W'$ term, may be easier.
Naively, one might expect that optimizing $A^{\alpha_\ell}$ along a path that goes [*around*]{} the ring is the most appropriate method for the periodic boundary condition. We will examine numerically whether the accuracy of the ground-state energy depends on the optimization path, i.e. the [*round trip*]{} versus the [*around*]{} path.
Numerical Results
=================
As a first test we calculate the ground-state energy of the non-interacting case, $U=0$, for $L=102$ sites with the open boundary condition. The optimization processes for various bond dimensions are shown in Fig. \[fig1\] (a). The energy almost converges after two round-trip optimizations. The relative error of the ground-state energy in the case of $D=64$ is less than $3 \times 10^{-4}$. In Fig. \[fig1\] (b) the optimization process for the interacting case, $U=1$, is shown. The convergence behavior is very similar between the non-interacting and interacting cases, so we may expect similar numerical accuracy in the interacting model. This is consistent with the fact that the entanglement (measured by the spectrum of singular values of the eigenstate for small systems whose Hamiltonian can be exactly diagonalized) becomes weaker with increasing $U$.
In the case of the periodic boundary condition the convergence of the ground-state energy behaves in the same manner as for the open boundary condition, except that a larger $D$ is required to reach high accuracy. As shown in Fig. \[fig2\] (a), the relative error of the ground-state energy, calculated by employing the type-I trial function, $|\psi^{\rm (I)} \rangle$, and the [*round trip*]{} optimization, is $6 \times 10^{-3}$. If we choose the [*around*]{} optimization instead, the relative error does not change. When we employ the type-II trial function, $|\psi^{\rm (II)} \rangle$, as shown in Fig. \[fig2\] (b), the numerical accuracy of the ground-state energy improves only slightly. The numerical accuracy of the ground-state properties will be discussed in another article. We note that there is no numerical instability in the [*round trip*]{} optimization using the type-I trial function. In the case of the type-II trial function there is a little instability, due to the solution of the generalized eigenproblem, but it is not much of a problem, as shown in Fig. \[fig2\] (b).
[Figure (fig1): Optimization process of the ground-state energy of the Hubbard model with open boundary condition; L = 102, t = 1; U = 0 (left, panel a) and U = 1 (right, panel b).]

[Figure (fig2): Relative error of the ground-state energy of the non-interacting model with periodic boundary condition; L = 102; type-I trial function (left, panel a) and type-II trial function (right, panel b).]
Conclusions
===========
We have developed a variational method using MPS for the Hubbard model with both open and periodic boundary conditions. The negative sign due to the anti-commutation relation of the fermion operators can be treated within the framework of the MPO. Therefore, the variational parameters at each site can be optimized locally. In the case of the open boundary condition the numerical accuracy improves with increasing bond dimension of the matrices $A^{\alpha_\ell}$: if the bond dimension is increased fourfold, the relative error for the non-interacting model is reduced to one-tenth. In the case of the periodic boundary condition the numerical accuracy depends on neither the two types of trial function nor the optimization path. In terms of numerical cost, the best choice is the [*round trip*]{} optimization with the trial function $|\psi^{\rm (I)}\rangle$. The ground-state properties, such as the momentum distribution and the spin-spin correlation function, will be published in another article.
[9]{}
F. Verstraete, V. Murg and J. I. Cirac: Adv. Phys. [**57**]{} (2008) 143.
U. Schollwöck: Ann. Phys. [**326**]{} (2011) 96.
P. Corboz and G. Vidal: Phys. Rev. B [**88**]{} (2009) 165129.
P. Corboz, G. Evenbly, F. Verstraete and G. Vidal: Phys. Rev. A [**81**]{} (2010) 010303(R).
P. Corboz, R. Orús, B. Bauer and G. Vidal: Phys. Rev. B [**81**]{} (2010) 165104.
I. Pižorn and F. Verstraete: Phys. Rev. B [**81**]{} (2010) 245110.
---
abstract: 'Coherent quantum-state manipulation of trapped ions using classical laser fields is a trademark of modern quantum technologies. In this work, we study aspects of work statistics and irreversibility in a single trapped ion due to sudden interaction with the impinging laser. This is clearly an out-of-equilibrium process where work is performed through illumination of an ion by the laser. Starting with the explicit evaluation of the first moments of the work distribution, we proceed to a careful analysis of irreversibility as quantified by the nonequilibrium lag. The treatment employed here is not restricted to the Lamb-Dicke limit, which allows us to investigate the interplay between nonlinearities and irreversibility. We show that in these multiquantum or sideband regimes, variation of the Lamb-Dicke parameter causes a non-monotonic behavior of the irreversibility indicator. Counterintuitively, we find a working point where nonlinearity helps reversibility, making the sudden quench of the Hamiltonian closer to what would have been obtained quasistatically and isothermally.'
author:
- 'A. A. Cifuentes'
- 'F. Nicacio'
- 'M. Paternostro'
- 'F. L. Semião'
title: Nonequilibrium properties of trapped ions under sudden application of a laser
---
Introduction
============
Quantum control is key to quantum technologies [@rabitz2009]. Trapping of neutral or charged particles, assisted by cooling techniques to bring them to ultracold temperatures, form a mature experimental platform to the development of quantum control. In particular, laser-manipulated trapped ions are now one of the most developed settings for experimental investigation of quantum effects and the implementation of basic building blocks needed for quantum storage, communication and processing of information [@leibfried; @blatt; @meekhof1996; @roos]. Along with the development of quantum technologies, where one is usually interested in situations far from thermal equilibrium to fully harness the power of quantum coherences, there has been an increasing interest in nonequilibrium thermodynamics of quantum systems, sometimes referred to as Quantum Thermodynamics (QT). Properly setting the limits in the extraction of useful work and the related problem of entropy production in nonequilibrium processes in quantum systems lie at the core of QT [@plastina; @various1; @campisi2011]. Despite being a relatively new subject, QT is a field which grows steadily and rapidly [@carlisle; @jarzinsky; @crooks; @tasaki; @various2; @various3; @various4; @various5]. QT brings the old realm of classical thermodynamics to a new perspective where quantum correlations and quantum coherence might play an important role. Taking all this into account, it seems crucial to investigate the interaction of laser fields and trapped ions from a statistical nonequilibrium perspective, trying to uncover new aspects and physically relevant information previously untouched by standard approaches to this subject. The scenario is particularly rich given the multitude of different energy structures and transitions involving electronic and vibrational degrees of freedom accessible in this system through laser interaction [@orszag; @leibfried]. One of the motivations for the present work is the possibility of studying thermodynamical implications of nonequivalent physical regimes, some of them driven by strongly nonlinear Hamiltonians [@orszag; @leibfried]. The interplay between the physics of trapped ions and QT has been explored previously, for instance, in the context of ion-based thermo-engines [@abah] and the verification of fluctuation relations [@varios6]. However, our work departs from those in both context and methodology. In particular, we bring the variety of physical regimes available in the trapped ion system to the light of nonequilibrium QT. This includes carrier and sideband regimes, where single- or multiple-quanta transitions and energy dependent couplings manifest according to the laser frequency [@leibfried]. We will be particularly interested in a study that addresses thermodynamic irreversibility as quantified by the irreversible lag [@daffner] produced by a finite-time transformation experienced by a trapped ion. This paper is organized as follows. In Section \[qt\], we briefly review the relevant concepts of nonequilibrium QT needed to the subsequent developments. This includes a discussion of the microscopic view of work and the basic physics upon which the nonequilibrium lag is built. Sec. \[iiwaef\] is dedicated to a brief review of the laser-manipulated trapped ion system. Our results are presented in Section \[tfti\], where we discuss work and irreversibility when the trapped ion is suddenly illuminated by a classical laser field. 
In Section \[conc\], we summarize our findings and, in the Appendix, we present expressions for the eigenvalues and eigenvectors of the system Hamiltonian for an arbitrary sideband.
Work and Irreversibility {#qt}
========================
An open system can exchange energy and/or particles with the environment or an external agent. In this context, work is energy which is transferred to or extracted from the system through the application of arbitrary generalized forces [@balian]. From a microscopic perspective, work is then necessarily accompanied by a modification of the system Hamiltonian (energy levels). This is to be distinguished from heat, which is energy exchanged with the environment through their mutual weak (infinitesimal) interaction. Consequently, the effect of heat is not a significant modification of the energy levels but a redistribution of their populations. This too corresponds to a variation of internal energy, just like work.
In general, a nonisolated system may suffer both processes. However, in what follows, we will be interested in the scenario where work is performed without heat exchange. This can be physically achieved, for instance, when the system is thermally isolated or the work protocol is performed in a time interval which is orders of magnitude shorter than the thermalization time. This is precisely the case of the idealized process of sudden Hamiltonian quench which corresponds to the instantaneous change of the system Hamiltonian from $\hat{\mathcal H}(\lambda_i)$ to $\hat{\mathcal H}(\lambda_f)$. In these expressions, $\lambda$ is a macroscopic variable in the system Hamiltonian usually called [*work parameter*]{} in the context of nonequilibrium thermodynamics. Work $W$ is a random variable encompassing both thermal and quantum fluctuations. In the case of a sudden quench, the statistical moments of the work distribution read [@fusco2014] $$\label{mom}
\left\langle W^{n}\right\rangle =
\sum_{k=0}^{n} \left(-1\right)^{k}
\begin{pmatrix}n \\ k \end{pmatrix}
\text{Tr} \left[ \hat{\mathcal{H}}_{\!f}^{\left(n-k\right)}
\hat{\mathcal{H}}_{i}^{k} \,
\hat{\rho}_{i} \right],$$ with $n$ integer and $\hat{\mathcal{H}}_{\!j} := \hat{\mathcal{H}}(\lambda_j)$ for $j = i,f$. More details about the statistical meaning of work can be found in [@WGF] and references therein. One of the most important results of nonequilibrium statistical mechanics is the Jarzynski equality [@jarzinsky], from which one can directly obtain a fundamental inequality involving average work $\left\langle W \right\rangle$ and Helmholtz free energy $F$ $$\begin{aligned}
\label{ji}
\left\langle W \right\rangle \ge \Delta F,\end{aligned}$$ where $\Delta F \equiv F(\lambda_f, \beta) - F(\lambda_i,\beta) $ is the difference between the free energies of the system. Explicitly, $$\label{hfe}
F(\lambda_j, \beta) = - \frac{1}{\beta} {\rm ln}\, \mathcal Z(\lambda_j),
\,\,\,
{\mathcal Z} (\lambda_j) \equiv
{\rm Tr} \, {\rm e}^{-\beta \hat {\mathcal H}(\lambda_j) },$$ with $j=i,f$. The equality in Eq. (\[ji\]) is only achieved by an isothermal quasistatic process, which is [*reversible*]{} [@WGF].
The indicator of irreversibility used in this work can then be defined considering what has just been exposed. Based on Eq. (\[ji\]), one defines [@crooks; @daffner] $$\label{nl1}
\mathcal L \equiv \beta ( \langle W \rangle - \Delta F),$$ as an indicator of irreversibility in the sense that the work protocol is reversible only when $\mathcal L=0$. What is reversible or irreversible for this indicator is the work protocol realized in an initially equilibrated system. The idea is that a backwards run of the work protocol after the system starts attempting thermal reequilibration will not, in general, bring the system and environment to their initial state. The quantity between the parentheses is known as irreversible work [@crooks], and $\mathcal L$ is usually called “nonequilibrium lag” (NL) as it gives an idea of how the system state, after the work protocol, lags behind an equilibrium thermal state fixed by the final Hamiltonian and inverse temperature $\beta$. Remarkably, it has been shown that the NL is exactly equal to the relative entropy between the thermal state used to evaluate $F(\lambda_f, \beta)$ and the postwork state [@daffner]. It is important to remark that the relative entropy is zero for identical states and it diverges for orthogonal states [@vedral].
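For readers who wish to experiment with these quantities, the sketch below evaluates Eq. (\[mom\]), the free energies of Eq. (\[hfe\]) and the lag of Eq. (\[nl1\]) for a sudden quench specified by arbitrary finite-dimensional Hamiltonian matrices. It is only an illustrative implementation: the two-level Hamiltonians at the end are placeholders rather than the trapped-ion model introduced below, and, as in the text, the initial state is assumed thermal with respect to $\hat{\mathcal H}(\lambda_i)$.

```python
import numpy as np
from math import comb

def thermal_state(H, beta):
    """Gibbs state exp(-beta H)/Z of a Hermitian matrix H."""
    w, v = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))      # shift for numerical stability
    rho = (v * p) @ v.conj().T
    return rho / np.trace(rho).real

def work_moment(Hi, Hf, rho_i, n):
    """<W^n> of Eq. (mom) for a sudden quench Hi -> Hf; assumes [rho_i, Hi] = 0."""
    return sum((-1) ** k * comb(n, k)
               * np.trace(np.linalg.matrix_power(Hf, n - k)
                          @ np.linalg.matrix_power(Hi, k) @ rho_i)
               for k in range(n + 1)).real

def free_energy(H, beta):
    """Helmholtz free energy of Eq. (hfe), for spectra of modest magnitude."""
    w = np.linalg.eigvalsh(H)
    return -np.log(np.sum(np.exp(-beta * w))) / beta

def nonequilibrium_lag(Hi, Hf, beta):
    """L = beta(<W> - dF) of Eq. (nl1) for the sudden quench Hi -> Hf."""
    rho_i = thermal_state(Hi, beta)
    dF = free_energy(Hf, beta) - free_energy(Hi, beta)
    return beta * (work_moment(Hi, Hf, rho_i, 1) - dF)

# toy example: a two-level system whose Hamiltonian is suddenly tilted
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
print(nonequilibrium_lag(0.5 * sz, 0.5 * sz + 0.3 * sx, beta=2.0))
```

For any such quench the printed lag is non-negative and vanishes for a trivial quench, in agreement with Eq. (\[ji\]).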
Short Review on Trapped Ions Interacting with Classical Laser Fields {#iiwaef}
====================================================================
We now present the basic elements needed to work with trapped ions subjected to laser fields. More information can be found in the many reviews available in the literature, e.g., [@leibfried]. Usually, the laser-ion setup is described by a model consisting of a two-level system (electronic degrees of freedom) coupled to a harmonic oscillator (center of mass motion). The latter is the result of electromagnetic confinement achieved by the use of trapping technology, e.g., Paul traps [@ghosh], and the electronic-motion coupling occurs due to momentum exchange with the laser.
By considering the center of mass (CM) degree of freedom as an oscillator with natural frequency $\nu$, and the two levels $\{ | g \rangle$, $ | e \rangle \}$ with an energy separation of $\hbar\omega_0$, the system Hamiltonian reads [@blockley] $$\label{hamtot}
\hat{\mathcal{H}} = \hat{\mathcal{H}}_0 + \hat{\mathcal{H}}_{\rm I},$$ with $$\label{hamfree}
\hat{\mathcal{H}}_0 = \hbar\nu\hat{a}^{\dagger}\hat{a} +
\frac{ \hbar \omega_0 }{2} \hat{\sigma}_{z},$$ and $$\label{Hamint}
\!\!\!\!\hat{\mathcal{H}}_{\rm I} = \frac{\hbar \Omega}{2}
\left[ \hat{\sigma}_{+} \,
\text{e}^{ i \eta ( \hat{a} + \hat{a}^{\dagger} )
- i \omega_L t } +
\hat{\sigma}_{-} \,
\text{e}^{ - i \eta ( \hat{a} + \hat{a}^{\dagger} )
+ i \omega_L t }
\right],$$ where $\omega_L$ is the laser frequency, $\Omega$ the classical Rabi frequency, $\hat a$ the annihilation operator for the CM motion, $\hat \sigma_z = | e \rangle \! \langle e | - | g \rangle \! \langle g |$, $\hat\sigma_+ = \hat\sigma^\dag_- = | e \rangle \! \langle g | $, and $\eta$ the Lamb-Dicke parameter defined as $$\begin{aligned}
\label{etatrue}
\eta=\frac{\omega_L}{c}\sqrt\frac{\hbar}{2M\nu}\cos\phi,\end{aligned}$$ with $M$ being the mass of the trapped ion, $c$ the speed of light, and $\phi$ the angle between the laser wave vector and the trap axis (one dimensional motion).
Depending on the detuning $\omega_0 - \omega_L$, the laser will couple different vibrational levels to the electronic part, each case representing a different quantum-optical process [@wineland] with its own effective Hamiltonian. The procedure to reveal each of those Hamiltonians is very well described in the literature, e.g., [@leibfried; @orszag]. Basically, after setting $\omega_0 - \omega_L=\pm m\nu$, with $m=0,1,2,\ldots$, one applies a rotating wave approximation (RWA) to Hamiltonian (\[hamtot\]) in order to obtain $$\label{hamrwa}
\hat{\mathcal{H}}^{(m)}_{\pm} = \hat{\mathcal{H}}_{0} +
\hbar \left( \text{e}^{-i\omega_{L} t }\hat{\Omega}_{m}^\pm \hat{\sigma}_{+} +
\text{e}^{ i\omega_{L} t }\hat{\Omega}_{m}^\mp \hat{\sigma}_{-} \right),$$ where $$\label{Omegaux1}
\hat{\Omega}_{m}^{+} = \hat{\Omega}_{m}^{- \dag} =
\frac{\Omega}{2} \text{e}^{-{\eta^{2}\!}/{2}}
\sum_{l=0}^{\infty}\left(i\eta\right)^{2l+m}
\frac{\hat{a}^{\dagger l}\hat{a}^{l}}{l!(l+m)!} \hat a^m.$$ For consistency, one must notice that $\eta$ in Eq. (\[etatrue\]), besides being a function of $\phi$ and $\nu$, is also a function of $\omega_0$. This is so because $\omega_L$ is now fixed by the sideband choice (value of $m$).
The Hamiltonian $\hat{\mathcal{H}}^{(m)}_{+} $ is obtained with $\omega_0 - \omega_L=
m \nu $, and it describes a $m$-phonon process for the vibrational part accompanied with transitions in the atomic levels. It can be referred to as a $m$-phonon Jaynes-Cummings (JC) model. On the other hand, Hamiltonian $\hat{\mathcal{H}}^{(m)}_{-} $ is obtained with $\omega_0 - \omega_L= - m \nu $ and it can be referred to as a $m$-phonon anti-Jaynes-Cummings (AJC) model. The case $m=0$ can be studied using either $\hat{\mathcal{H}}^{(m)}_{+} $ or $\hat{\mathcal{H}}^{(m)}_{-} $, $$\label{hamct}
\!\!\hat{\mathcal{H}}^{(0)}=\hat{\mathcal{H}}^{(0)}_{\pm} =
\hat{\mathcal{H}}_{0} +
\frac{\hbar}{2} \left(\! \text{e}^{-i\omega_{L} t }\hat\Omega_0^+ \hat{\sigma}_{+} +
\text{e}^{ i\omega_{L} t } \hat\Omega_0^-\hat{\sigma}_{-}\! \right),$$ and it describes Rabi oscillations between electronic levels, i.e., the carrier transitions [@leibfried; @orszag].
For what comes next, it is useful to present now the matrix elements of $\hat{\Omega}_{m}^\pm$ in the Fock basis of the CM harmonic motion $$\begin{aligned}
\label{Omegme}
\left\langle n\right|\hat{\Omega}_{m}^+\left|n'\right\rangle &=&
\left\langle n'\right|\hat{\Omega}_{m}^{-}\left|n\right\rangle^\ast \\
&=& \frac{\Omega(i\eta)}{2}^{\!\!^m}
\sqrt{ \frac{ n!}{(m+n)!} } \, \text{e}^{-\eta^{2}/2}
L_{n}^{m}\!\left(\eta^{2}\right) \delta_{n'\,n+m}, \nonumber \end{aligned}$$ with the associated Laguerre polynomials [@gradshteyn] $$L_{n}^{m}\left( x \right) =
\sum_{k=0}^{n}\left(-1\right)^{k}
\frac{(n + m)!}{(m + k)!(n-k)!}
\frac{x^{k}}{k!} .$$ As it can be seen from Eq. (\[Omegme\]), the quantum Rabi frequencies, $\left\langle n\right|\hat{\Omega}_{m}^+\left|n'\right\rangle$, have a strong dependence on the Lamb-Dicke parameter $\eta$. For small values of $\eta$, they present a quasilinear dependence on $n$, typical of $m$-photon Jaynes-Cummings models in the context of cavity quantum electrodynamics (cQED) [@multi]. However, for the ionic system, it is possible to induce considerable nonlinearities in $\left\langle n\right|\hat{\Omega}_{m}^+\left|n'\right\rangle$ simply by increasing the Lamb-Dicke parameter. The quantum Rabi frequencies become an oscillating function of $n$ due to the presence of the Laguerre polynomials in Eq. (\[Omegme\]). These oscillations can strongly influence the system dynamics as thoroughly studied in [@gregorio].
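A quick way to see this behavior is to evaluate Eq. (\[Omegme\]) directly. The sketch below is only illustrative (the values of $\eta$ are arbitrary and $\Omega$ is set to one); it uses the associated Laguerre polynomials provided by `scipy` and contrasts the nearly $n$-independent couplings of the Lamb-Dicke regime with the oscillatory behavior at larger $\eta$.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def rabi_matrix_element(n, m, eta, Omega=1.0):
    """<n| Omega_m^+ |n+m> of Eq. (Omegme), in units of Omega."""
    return (Omega / 2) * (1j * eta) ** m \
        * np.sqrt(factorial(n) / factorial(n + m)) \
        * np.exp(-eta ** 2 / 2) * eval_genlaguerre(n, m, eta ** 2)

# Lamb-Dicke regime: the carrier couplings are almost independent of n
print([round(abs(rabi_matrix_element(n, 0, 0.05)), 4) for n in range(5)])

# larger eta: the Laguerre polynomial makes the couplings oscillate with n
print([round(abs(rabi_matrix_element(n, 1, 1.2)), 3) for n in range(8)])
```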
Results {#tfti}
=======
The work protocol we have in mind is now explained. First, the work parameter $\lambda_t$ here has to do with the application of the laser on the ion. More specifically, we take $\lambda_i=0$ and $\lambda_f=\Omega$ in a sudden quench of the system Hamiltonian. This means an abrupt change from $$\begin{aligned}
\label{h0}
\hat{\mathcal H}(\lambda_i)= \hbar\nu\hat{a}^{\dagger}\hat{a} +
\frac{ \hbar \omega_0 }{2} \hat{\sigma}_{z}\end{aligned}$$ to $$\label{hf}
\!\!\!\hat{\mathcal H}(\lambda_f) =
\hat{\mathcal H}(\lambda_i) + \frac{\hbar \Omega}{2}
\left[ \hat{\sigma}_{+} \,
\text{e}^{ i \eta ( \hat{a} + \hat{a}^{\dagger} )
} +
\hat{\sigma}_{-} \,
\text{e}^{ - i \eta ( \hat{a} + \hat{a}^{\dagger} )
}
\right]\!,$$ or, if we want to explore the sidebands, an abrupt change to $$\begin{aligned}
\label{hs}
\hat{\mathcal H}(\lambda_f) = \hat{\mathcal H}(\lambda_i) +
\hbar ( \hat{\Omega}_{m}^\pm \hat{\sigma}_{+} +
\hat{\Omega}_{m}^\mp \hat{\sigma}_{-} ).\end{aligned}$$ The above Hamiltonians, Eq. (\[hf\]) and Eq. (\[hs\]), correspond to the sudden application of the laser field, i.e., the result of taking the limit of $t\rightarrow 0$ in Eq. (\[Hamint\]) and in Eq. (\[hamrwa\]), respectively.
It is well known that the Hamiltonian (\[hf\]) cannot be diagonalized exactly, so that most of the analytical progress takes place with the sideband Hamiltonians in Eq. (\[hs\]). It is important to remark that Eq. (\[hs\]) indeed describes the system quite well when $\omega_0 - \omega_L=\pm m\nu$ and $\Omega$ is moderately weak, conditions which are routinely implemented in the laboratory [@meekhof1996; @roos]. Before the interaction with the laser, the trapped ion is found to be in thermal equilibrium with the environment (at inverse temperature $\beta$). This is described by the Gibbs state associated with the Hamiltonian in Eq. (\[h0\]), i.e., $$\label{initstate}
\hat{\rho}_i = \frac{\text{e}^{-\beta\hbar\nu\hat{n}}}{(\bar n + 1)} \otimes
\frac{\text{e}^{-\frac{\beta\hbar\omega_{0}}{2}\hat{\sigma}_{z}}}
{2\cosh \frac{\beta\hbar\omega_{0}}{2}},$$ where $\hat n = \hat a^\dag \hat a$ is the number operator and $$\label{mocn}
\bar n = {\rm Tr}(\hat n \hat \rho_i) = ({\rm e}^{\beta\hbar\nu} - 1)^{-1}$$ is the thermal occupation number of the CM motion. In spite of the difficulties found in dealing with the full Hamiltonian Eq. (\[hf\]), we were able to find the first moments of the work distribution. This is already valuable information because obtaining the full distribution would require the whole set of eigenvalues and eigenvectors of Eq. (\[hf\]), which can only be obtained numerically and to a restricted precision, given the complexity of the Hamiltonian. We then use Eq. (\[mom\]), appropriate to a sudden change, to calculate the first few moments of the work distribution and gain some insight into it.
The first moment, $n=1$, using Eq. (\[hf\]) and Eq. (\[initstate\]), turns out to be $$\label{1mom}
\langle W \rangle =
{\rm Tr}\left[ \hat{\mathcal{H}}_{\rm I} \, \hat \rho_i \right]
\propto {\rm Tr}[\hat \sigma_{\pm} \, \text{e}^{-\frac{\beta\hbar\omega_{0}}{2}
\hat{\sigma}_{z}}] = 0.$$ As for the second, we now find $$\left\langle W^{2}\right\rangle = \hbar^2\Omega^2/4,$$ which, interestingly enough, depends only on the magnitude of the work parameter $\Omega$ (controlled by the laser power) and is completely independent of the temperature. Since $\langle W \rangle = 0$, the second moment is also the variance of the work distribution. The third moment is given by $$\label{3mom}
\left\langle W^{3}\right\rangle = \frac{\hbar^{3}\Omega^{2}}{4}
\left[\nu\eta^{2} +
\omega_{0}
\tanh \tfrac{\beta\hbar\omega_{0}}{2}
\right],$$ in which the dependence on the temperature appears. From the second and third moments, we can determine the skewness of the work distribution $\left\langle W^{3}\right\rangle/\left\langle W^{2} \right\rangle^{3/2} $. This turns out to be inversely proportional to the magnitude of the work parameter. Consequently, the stronger the laser, the more symmetric the distribution is around the mean value $\left\langle W\right\rangle = 0$. Since $\left\langle W^{3} \right\rangle > 0$, as seen from Eq. (\[3mom\]), the work distribution is biased towards negative values of work. All these facts about the first moments of the work distribution, obtained with the full Hamiltonian Eq. (\[hf\]), tell us that negative work (internal energy decrease) is more likely than the equivalent positive work (internal energy increase) at the very first instant of interaction with the laser field. Note also that the asymmetry around the mean value decreases with the temperature while it increases with $\eta$. Finally, according to Eq. (\[etatrue\]), $\left\langle W^{3}\right\rangle$ and the skewness are actually independent of the trap frequency $\nu$.
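These expressions are easy to check numerically by building Eqs. (\[h0\]), (\[hf\]) and (\[initstate\]) in a truncated Fock basis and applying Eq. (\[mom\]) directly. The sketch below does this with illustrative parameters (in units where $\hbar=1$); the cutoff $N$ is an assumption, and the agreement with the analytic first, second and third moments is limited only by this truncation.

```python
import numpy as np
from math import comb
from scipy.linalg import expm

hbar = 1.0
nu, omega0, Omega, eta, beta = 1.0, 10.0, 0.5, 0.3, 0.7
N = 60                                           # Fock-space cutoff (assumption)

a = np.diag(np.sqrt(np.arange(1, N)), 1)         # annihilation operator
n_op = a.conj().T @ a
sz = np.diag([1.0, -1.0]); sp = np.array([[0.0, 1.0], [0.0, 0.0]]); sm = sp.T
I2, IN = np.eye(2), np.eye(N)

H0 = hbar * nu * np.kron(I2, n_op) + 0.5 * hbar * omega0 * np.kron(sz, IN)
D = expm(1j * eta * (a + a.conj().T))            # e^{i eta (a + a^dagger)}
HI = 0.5 * hbar * Omega * (np.kron(sp, D) + np.kron(sm, D.conj().T))
Hf = H0 + HI                                     # Eq. (hf) at t -> 0

rho = expm(-beta * H0); rho /= np.trace(rho).real    # Eq. (initstate)

def moment(n):                                   # Eq. (mom), sudden quench
    return sum((-1) ** k * comb(n, k)
               * np.trace(np.linalg.matrix_power(Hf, n - k)
                          @ np.linalg.matrix_power(H0, k) @ rho)
               for k in range(n + 1)).real

print(moment(1))                                 # ~ 0
print(moment(2), (hbar * Omega) ** 2 / 4)        # second moment
print(moment(3),                                 # Eq. (3mom), up to truncation
      (hbar ** 3 * Omega ** 2 / 4) * (nu * eta ** 2
                                      + omega0 * np.tanh(beta * hbar * omega0 / 2)))
```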
Now we turn our attention to the sideband Hamiltonians in Eq. (\[hs\]) and to the irreversibility of the work protocol consisting of the sudden quench of system Hamiltonian due to laser interaction. As said before, these effective Hamiltonians are obtained from the full Hamiltonian Eq. (\[hf\]) by setting resonance $\omega_{0} = \omega_{L} \pm m \nu$ and performing a rotating wave approximation. We will see that a thermodynamic analysis is able to reveal the different aspects of the optical processes raised by the selection of distinct sidebands.
We proceed to apply the NL in Eq. (\[nl1\]) to reveal the irreversibility of the work protocol. Just like what happened with the full Hamiltonian (\[hf\]), the first moment of the work distribution or simply the average work is again null, i.e., $\langle W \rangle = 0$. For this reason, the NL in Eq. (\[nl1\]) for the sudden quench of the sideband Hamiltonian in Eq. (\[hs\]) reads $$\label{nlfinal}
\mathcal L = {\rm ln}\, \frac{\mathcal Z(\lambda_f)}{\mathcal Z(\lambda_i)},$$ with $$\label{zi}
\mathcal Z(\lambda_i) = 2(\bar n + 1) \cosh{ \tfrac{\beta \hbar\omega_0}{2}},$$ obtained using Eq. (\[h0\]), and $$\label{zf}
\!\!\mathcal Z_{\pm}(\lambda_f)\! = \!
\sum_{n = 0}^{\infty} \left[ {\rm e}^{-\beta \mu_{\pm}^{ (n,m) } } \!\! + \!
{\rm e}^{-\beta \gamma_{\pm}^{(n,m)}}\right] +
\sum_{n = 0}^{m-1} {\rm e}^{-\beta \zeta_{\pm}^{(n,m)}} \!\!
,$$ obtained with Eq. (\[hs\]). The functions $\mu_{\pm}$, $\gamma_{\pm}$, and $\zeta_{\pm}$ are the eigenvalues of the Hamiltonians in (\[hs\]), and their expressions can be found in Eqs. (\[eigval1+\]), (\[eigval+\]), (\[eigval1-\]), and (\[eigval-\]), which allows one to get $$\begin{aligned}
\label{partf1}
\!\!\!\!\!\!\!\mathcal Z_{\pm}(\lambda_f) &=& 2
\sum_{n=0}^{\infty} \!{\rm e}^{-\beta \hbar \nu( n + \frac{m}{2})}
\! \cosh\!\!\left[ \tfrac{\beta \hbar}{2}
\sqrt{{\omega_L}^{2} \! +\! \Omega^{2}\left|f_n^m\right|^{2}} \right] \nonumber\\
&& + \,
(\bar n + 1 ) (1- {\rm e}^{- \beta \hbar m \nu} )
{\rm e}^{\pm \tfrac{1}{2}\beta\hbar \omega_0 } , \end{aligned}$$ with $\omega_L = \omega_{0} \mp m\nu$, and $$\begin{aligned}
\label{auxf1}
\!\!\!\!\!\!f_n^m\!:=\! \frac{2}{\Omega} \!
\left\langle n\right|\!\hat{\Omega}_{m}^+\!\left|n+m\right\rangle
= \!{(i\eta)}^{\!\!^m}\!\! \sqrt{\!\!\tfrac{ n!}{(m+n)!} } \,
\text{e}^{-\frac{\eta}{2}^{\!2}} L_{n}^{m}\!\left(\eta^{2}\right), \end{aligned}$$ where we used Eq. (\[Omegme\]). With the above two partition functions, we can calculate $\mathcal L$ in Eq. (\[nlfinal\]). Before the presentation of the simulations, we want to make the notation clear emphasizing that $\mathcal Z_{+}(\lambda_f)$ refers the JC-type Hamiltonians with $\omega_L = \omega_{0} - m\nu$, while $\mathcal Z_{-}(\lambda_f)$ refers to the AJC-type Hamiltonians with $\omega_L = \omega_{0} + m\nu$. Now we carry on to the numerical investigation of the NL. For that, it is important to have in mind the reality of the physical parameters to be used in the simulations. First, the initial thermal occupation numbers $\bar{n}$ will be considered relatively small in order to have quantum fluctuations playing some role. The experiments employ sophisticated and very efficient cooling techniques for that aim [@leibfried]. For the typical frequencies and coupling constants, we will be focusing on the experimental implementation of Eq. (\[hamrwa\]) using $\rm Ca^{+}$ ions [@roos]. In these experiments, the electronic level separation is about THz while the trap frequencies are set typically in some MHz, and one order of magnitude smaller or higher by adjusting the trap potentials. For the classical Rabi frequency $\Omega$, a few MHz is also a realistic choice. We would also like to emphasize that our analysis and results are suitable to be applied to other known experimental setups such as those involving $\rm Be^{+}$ [@meekhof1996] or $\rm Yb^{+}$ [@olmschenk]. The partition function in (\[partf1\]) is a sum of an infinity number of terms which cannot be reduced analytically to a closed expression. Thus, a truncation is necessary. The convergence criterion for performing the truncation is explained in the note [@footnote2]. Each plot required a different number of terms kept in the sum, but in all cases the same criterium is used.
The dependence of NL on the Lamb-Dicke parameter $\eta$ is presented in Fig. \[fig1L1\] for a few values of $m$. The variation of $\eta$ in these plots comes from $\phi$ in Eq. (\[etatrue\]), since we are keeping the trap and laser frequencies fixed. For small $\eta$, i.e., in the Lamb-Dicke regime, the Hamiltonians $\hat{\mathcal{H}}^{(m)}_{\pm} $ are basically ordinary Jaynes-Cummings models from cQED, in the sense that Eq. (\[Omegaux1\]) becomes approximately independent of the energy or number operator $\hat{a}^\dag\hat{a}$. In this regime, both the JC and AJC cases present the same ordering with respect to $m$. We see that the higher the sideband, or the number of motional quanta absorbed in the transition driven by the laser, the lesser the lag is. This means that the sudden application of the laser becomes less irreversible and more like a quasistatic change. However, by increasing non-linearity, i.e., the magnitude of $\eta$, we depart from the ordinary cQED models, and Fig. \[fig1L1\] reveals that the JC and AJC present different responses with respect to irreversibility. For the JC case, increasing $\eta$ does not alter the order with respect to $m$ and, the higher the sideband, the lesser the NL. On the other hand, for the AJC such an order is not respected and, interesting enough, it comes to a point in which the higher the sideband, the higher the NL. Such behavior is induced by nonlinearity and it highlights well the different thermodynamic aspects resulting from JC and AJC models using trapped ions.
The behavior of the NL as $\eta$ is varied, with $\nu$, $\omega_0$ kept fixed, is determined by $f_n^m$ defined in Eq. (\[auxf1\]). In order to gain some insight about what was seen numerically in Fig. \[fig1L1\], we now resort to analytical asymptotic limits. For large $\eta$, the function $|f_n^m| \to 0$ because of the exponential in Eq. (\[auxf1\]). In this limit, $\mathcal Z_{\pm}(\lambda_f) \to \mathcal Z(\lambda_i)$ so that $\mathcal L \to 0$. In the Lamb-Dicke regime, $\eta \ll 1$, we expand the exponential and Laguerre in Eq. (\[auxf1\]) up to second order in $\eta$ to find $$\begin{aligned}
\label{fauxap}
\left|{f_n^m}\right|^{2} &\approx& \frac{(n+m)!}{n!m!^2}
\left[1 - \frac{2n+m+1}{m+1} \eta^2 \right]{\eta^{2m}}. \end{aligned}$$ To obtain this expression we used $d L_n^m(x)/d x = - L_{n-1}^{m+1} (x)$ and $L_n^m(0) = (n+m)!/(n!m!)$ [@gradshteyn]. Now, by keeping just terms up to $\eta^2$ in $\left|f_n^m\right|^{2}$, one gets $$\label{fauxap2}
\left|f_n^m\right|^{2} \approx [1 - (2n+1) \eta^2] \delta_{m 0 } +
(n+1) \eta^2 \delta_{m 1}.$$ Terms with $m\ge 2$ appear only in higher powers of $\eta$. Notice that for $m = 1$, $\left|f_n^1\right|^{2} \to 0$ and $\mathcal{Z}_{\pm}(\lambda_f) \to \mathcal{Z}(\lambda_i)$ as $\eta \to 0$, which makes $\mathcal {L} \to 0$. On the other hand, $\left|f_n^0\right|^{2}$ in Eq. (\[fauxap2\]) is a concave function of $\eta$ with $\lim_{\eta\to 0}\left |f_n^0\right|^{2}=1$, $\forall n$. Consequently, $\mathcal{L}\neq 0$ as $\eta\to 0$. All these features can be seen from Fig. \[fig1L1\]. For $m > 1$, only higher order terms in $\eta$ contribute to $|f_n^m|$, forcing $\mathcal Z_{\pm}(\lambda_f) \to \mathcal Z(\lambda_i)$ as $\eta \to 0$, just like what happens when $m=1$. The physical explanation for the distinct behavior found in the carrier transition $m=0$ lies in the system Hamiltonian before and after laser application. From Eq. (\[Omegaux1\]), one can see that $$\label{etalim}
\lim_{\eta \to 0} \hat{\Omega}_{m}^\pm = \frac{\Omega}{2} \delta_{m 0}{{\sf 1 \hspace{-0.3ex} \rule{0.1ex}{1.52ex}\rule[-.01ex]{0.3ex}{0.1ex}}},$$ where ${{\sf 1 \hspace{-0.3ex} \rule{0.1ex}{1.52ex}\rule[-.01ex]{0.3ex}{0.1ex}}}$ is the identity operator for the center-of-mass motion. By taking Eqs. (\[hamct\]) and (\[etalim\]) into account, it follows that, when $m=0$, the laser is able to drive transitions between the two electronic states, even when $\eta = 0$. In other words, the pre- and post-quench Hamiltonians are different in the limit $\eta\to 0$, only when $m=0$. The process becomes then reversible in such a limit, provided $m\neq 0$. Now, we investigate the role of the classical Rabi frequency $\Omega$ on the irreversibility. The result is depicted in Fig. \[fig2L2\], where one can see that the NL increases with $\Omega$. This behavior is expected from the detailed analysis of Eq. (\[partf1\]), and it can be physically understood from the fact that $\Omega$ is the work parameter and quantifies the intensity of the sudden quench.
In order to obtain a better understanding of the problem, it is necessary to go on and investigate the role of temperature. The NL as a function of the mean occupation number of the initial thermal state $\bar n$ in Eq. (\[mocn\]) is presented in Fig. \[fig3L3\]. It is noticeable that the AJC and JC models in the trapped ion system respond very differently to variations of the initial thermal energy of the system. In particular, it can be seen from Fig. \[fig3L3\] that the shown sidebands for the AJC and also for $m=0$ (which can be seen as belonging to either the AJC or JC classes) lead to a divergence in the NL as $\bar n\to 0$ ($\beta \to \infty$). This is not observed for the JC case.
Although the dependence of $\mathcal L$ on the temperature is a bit more intricate, since all factors in Eq. (\[partf1\]) depend on it, we again succeeded in providing an analytical treatment based on asymptotics that helps us to spot the reasons behind such different behavior found in the JC and AJC models. In the high temperature limit $\beta \to 0$ ($\bar n\to \infty$), a successive application of this limit, first to some exponentials and then to the hyperbolic functions in Eq. (\[partf1\]), results in $$\label{limZ-T6}
\!\!\!\frac{\mathcal Z_{\pm}(\lambda_f)}{\mathcal Z(\lambda_i)} \to
\lim_{\beta\to 0} \left[ (\bar{n} + 1)^{-1}
\sum_{n=0}^{\infty} \!{\rm e}^{-\beta \hbar \nu( n + \frac{m}{2})} \right]= 1,$$ which makes $\mathcal L \to 0$. This shows that, in this limit, the dynamics becomes reversible regardless of $m$. For low temperatures, one can write $\cosh \beta x \approx \tfrac{1}{2}{\rm e}^{\beta |x|}$ to find $$\label{limZ-T3}
\!\!\!\frac{\mathcal Z_{\pm}(\lambda_f)}{\mathcal Z(\lambda_i)} \!\to \!
\lim_{\beta\to\infty} \!\!
\left[\!(1\!-\!\delta_{m0}) {\rm e}^{-\tfrac{ \beta \hbar \omega_0 (1 \mp 1)}{2} }
\!\!+\!\! \sum_{n = 0}^{\infty} {\rm e}^{- \frac{\beta \hbar}{2} \Phi_n^m} \!\right]\!,$$ where we have defined $$\label{funcphi}
\Phi_n^m := \nu(2 n + m) + \omega_0 - \sqrt{(\omega_{0} \! \mp \! m\nu)^{2} \! +\!
\Omega^{2}\left|f_n^m\right|^{2}} \, .$$ From this, we can analyze individually the AJC and JC cases. For the AJC and the carrier $m = 0$, it is easy to see that $\Phi_0^m < 0, \, \forall m$. As a result, $$\label{limZ-T1}
\lim_{\beta \to \infty}\frac{\mathcal Z_{-}(\lambda_f)}{\mathcal Z(\lambda_i)} =
\infty, \,\,\, \forall m,$$ showing that for $\hat{\mathcal{H}}^{(m)}_{-}$ in Eq. (\[hamrwa\]) and $\hat{\mathcal{H}}^{(0)}$ in Eq. (\[hamct\]) the NL Eq. (\[nlfinal\]) always diverges when $\beta \to \infty$. For the JC case, we must give a closer look at the function $ \Phi_{n}^{m}$. From Eq. (\[limZ-T3\]), and remembering that the case $m=0$ was already analyzed in Eq. (\[limZ-T1\]), $$\label{limZ-T}
\frac{\mathcal Z_{+}(\lambda_f)}{\mathcal Z(\lambda_i)} \to
1 +
\lim_{\beta\to\infty} \sum_{n = 0}^{\infty} {\rm e}^{- \frac{\beta \hbar}{2} \Phi_n^m}.$$ If, for a given $m$, at least one of the $\Phi_n^m$ appearing in Eq. (\[limZ-T\]) is negative, the above limit diverges and $\mathcal L \to \infty$. On the other hand, provided $\Phi_n^m \ge 0$ for all $n$, then $$\label{limZ-T2}
\frac{\mathcal Z_{+}(\lambda_f)}{\mathcal Z(\lambda_i)} \to k + 1,$$ where $k$ is the number of times $\Phi_n^m$ equals zero. Consequently, $\mathcal L \to {\rm ln}(1 + k)$ for the JC case. For the parameters chosen in Fig. \[fig3L3\], the JC case corresponds to $\Phi_n^m \ge 0$ and the limit in Eq. (\[limZ-T2\]) holds with $k = 0$, [i.e.]{}, no divergence is observed. Divergences of the NL can be understood, in general, as a consequence of the distinguishability between the post-work state and the reference thermal state used to evaluate the final free energy $F(\lambda_f, \beta)$. As previously commented, the NL can be written in terms of the relative entropy between those two states [@daffner]. As so, the smaller the NL, the more indistinguishable the two states are and, for orthogonal states, it diverges. In a quench process, as considered here, the initial state does not change after the work protocol [@fusco2014]. Consequently, the post-work state is a Gibbs state defined with inverse temperature $\beta$ and Hamiltonian (\[h0\]). When $\beta\to \infty$, this state is basically $|0, g \rangle$. In the same limit, the reference thermal state used to evaluate $F(\lambda_f, \beta)$ will be given by the ground state of either the JC Hamiltonian or the AJC Hamiltonian, depending on the chosen $\omega_L$. For the physical parameters used in the simulations, the ground state of the JC Hamiltonian coincides with the post-work state which is $|0, g \rangle$, while the ground state of the AJC Hamiltonian will be a superposition of $|0,e\rangle$ and $|m,g\rangle$. We can then see that NL will be smaller for the JC than for the AJC because the post-work state is more indistinguishable from the ground state of the former than from the ground state of the latter. All eigenstates and eigenvalues for AJC and JC Hamiltonians can be found in the Appendix.
We may wonder under which choice of parameters the JC case can present divergences in the NL. In other words, how can the system parameters be chosen to cause at least one of the $\Phi_n^m$ in (\[limZ-T\]) to be negative? The analysis of Eq. (\[funcphi\]) reveals that this is the case provided $$\label{limi1}
\left|f_n^m\right| > \frac{2}{\Omega} \sqrt{\nu (\omega_0+n\nu) (n+m)},$$ for a fixed $m$ (sideband) and some value of $n$. Now, in order to see this effect, one needs to go a bit beyond the current experimental set of parameters found in the literature. The result is shown in Fig. \[fig4L4\], where the parameters were deliberately chosen so as to imply $\Phi_n^m < 0$ in some of the examples, making Eq. (\[limZ-T2\]) invalid and causing the NL to diverge as $\beta \to \infty$. Although the parameters used to produce Fig. \[fig4L4\] are unrealistic for the trapped ion system, one might think of their realization in an alternative system such as circuit quantum electrodynamics, where ultrastrong couplings can be achieved. In this context, one may try to simulate the physics of trapped ions in the RWA using other controlled systems where such strong Rabi frequencies might be accessible.
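The condition in Eq. (\[limi1\]) is straightforward to scan numerically. The sketch below evaluates $\Phi_n^m$ of Eq. (\[funcphi\]) for the JC branch and reports whether any of them is negative, in which case the zero-temperature lag diverges; both parameter sets are illustrative, the second one deliberately exaggerating the coupling so that the condition is met.

```python
import numpy as np
from math import lgamma
from scipy.special import eval_genlaguerre

def f_nm(n, m, eta):
    """|f_n^m| of Eq. (auxf1)."""
    return eta ** m * np.exp(0.5 * (lgamma(n + 1) - lgamma(n + m + 1))
                             - eta ** 2 / 2) * abs(eval_genlaguerre(n, m, eta ** 2))

def phi_nm(n, m, nu, omega0, Omega, eta):
    """Phi_n^m of Eq. (funcphi), JC branch (omega_L = omega0 - m nu)."""
    return nu * (2 * n + m) + omega0 \
        - np.sqrt((omega0 - m * nu) ** 2 + Omega ** 2 * f_nm(n, m, eta) ** 2)

def jc_lag_diverges(m, nu, omega0, Omega, eta, nmax=200):
    """True if some Phi_n^m < 0, i.e. the beta -> infinity NL diverges."""
    return any(phi_nm(n, m, nu, omega0, Omega, eta) < 0 for n in range(nmax))

print(jc_lag_diverges(1, nu=1.0, omega0=10.0, Omega=0.5, eta=0.2))    # moderate coupling
print(jc_lag_diverges(1, nu=1.0, omega0=10.0, Omega=200.0, eta=0.2))  # exaggerated coupling
```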
We now discuss the dependence of the Lamb-Dicke parameter on the trap frequency $\nu$ in Eq. (\[etatrue\]) and its implication for the irreversibility of the process. For that, we consider as one example the carrier transition $(m=0)$ in Fig. \[fig5L5\] when, for a given trap frequency $\nu$, we vary $\eta$ from zero ($\phi = \pi/2$) to its maximum value ($\phi = 0)$. This is repeated for a broad range of trap frequencies. In general, the effect of varying the frequency of the trap is just to limit the maximum attainable values of $\eta$ obtained by changing the laser propagation direction in relation to the trap axis (angle $\phi$). The NL basically does not change if $\nu$ is varied keeping $\eta$ fixed. Of course, according to Eq. (\[etatrue\]), in order to keep $\eta$ fixed while changing $\nu$, the angle $\phi$ must also be varied. For $\nu \ll \omega_0$ ($\nu \to 0$) one can adjust $\phi$ in order to keep $\eta$ constant. This limit, obtained from Eq. (\[partf1\]), reads $$\label{limZ-nu}
\!\!\!\!\frac{\mathcal Z_{\pm}(\lambda_f)}{\mathcal Z(\lambda_i)} \to
{\rm sech} \tfrac{\hbar\beta\omega_0}{2}
\sum_{n=0}^{\infty} \cosh\!\!\left[ \tfrac{\beta \hbar}{2}
\sqrt{{\omega_0}^{2} \! +\! \Omega^{2}\left|f_n^m\right|^{2}} \right],$$ regardless of whether the JC or AJC case is considered. Given the convergence properties of $|f_n^m|$, discussed in [@footnote2], this limit is finite. This finite behavior is illustrated with the case $m=0$ in Fig. \[fig5L5\]. Other choices of $m$ lead to similar conclusions since the asymptotic behavior of $|f_n^m|$ with $n$ and $\eta$ does not depend on $m$ in any fundamental way [@footnote2].
To finish the analysis of the NL, we explore its behavior for higher sidebands ($m > 2$). In Fig. [\[fig6L6\]]{}, we present a numerical study of such a dependence. One can see that the JC case tends to reversibility as the number of excitations $m$ exchanged between the ion motion and the electronic levels, induced by the laser, increases. For the AJC case, once again a rich behavior is found. For small $\eta$, the NL monotonically decreases with $m$, while for higher values of $\eta$, it comes to a point where the behavior is not monotonic anymore as highlighted in the inset of the bottom panel in Fig. [\[fig6L6\]]{}. From this point, we varied $\eta$ up to $3.5$ (see Fig. \[fig1L1\]) to verify that, in this range, the maximum displaces to higher values of $m$ as $\eta$ increases. The same kind of analysis was performed considering the variation of $m$ for different temperatures and Rabi frequencies, and contrary to results in Fig. \[fig6L6\], there are no remarkable differences between the AJC and JC cases.
As a final remark, it is worthwhile to notice that, except for Fig. \[fig4L4\], which is a theoretical extrapolation of the current experimental parameters, we have always found a higher NL for the AJC than for the JC. This can be once again understood from the relatively small values of $\bar{n}$ used in the simulations, and from the fact that the NL is a relative entropy. This is the same reasoning we employed in the analysis of Fig. \[fig3L3\]. Additionally, the first order expansion of the hyperbolic function (in powers of $m\nu / \omega_0$) in Eq. (\[partf1\]) shows immediately that $\mathcal Z_- > \mathcal Z_+$.
Conclusions {#conc}
===========
From the point of view of nonequilibrium thermodynamics, we studied the problem of the sudden driving of a trapped ion by a classical laser field. This thermodynamical analysis was instrumental in pinpointing fundamental differences between the Jaynes-Cummings and Anti-Jaynes-Cummings-type Hamiltonians that arise in the trapped ion system by careful choice of the laser frequency. The role played by the magnitude of the Lamb-Dicke parameter, related to nonlinearity, and by other physically relevant parameters was carefully studied. This makes our work useful also to the experimentalist who might be interested in the practical investigation of the quantum thermodynamics of laser-manipulated trapped ion systems. In this respect, our work is, to the best of our knowledge, the first one to include, in a thermodynamical approach, the great variety of possible electronic-vibrational interactions available in the trapped ion system.
Taking into account the small values of the NL encountered when using up-to-date experimental parameters, noise in the experimental setup might impair its practical determination. One way to circumvent this is to increase the Rabi frequency (the intensity of the laser), since the NL increases monotonically with this parameter. To be more quantitative, a change of $\Omega$ from $10^6$ to $10^7$ is enough to increase the NL by two orders of magnitude.
An experimental assessment of the findings of this paper might make use of a 2D trap (ion oscillations along $x$ and $y$ directions) and a driving laser coupling the electronic degrees of freedom to the $x$ motion. This can be easily achieved by choosing the right direction of the laser wave vector. The $y$ motion is used then as an ancilla in the interferometric scheme presented in [@various4]. For that, an extra laser is to be used to couple the system (electronic levels plus $x$ motion) to the ancilla in order to arrange for a proper gate entangling them [@various4]. With these, the work distribution can be experimentally determined and, with the help of the Jarzynski equality [@jarzinsky; @crooks; @tasaki], the free energy and consequently the NL can be obtained.
A.A.C. acknowledges support from the “Coordenação de Aperfeiçoamento de Pessoal de Nível Superior” (CAPES). FN, FLS and MP are supported by the CNPq “Ciência sem Fronteiras” programme through the “Pesquisador Visitante Especial” initiative (Grant No. 401265/2012-9). MP acknowledges financial support from the John Templeton Foundation (grant ID 43467) and the EU Collaborative Project TherMiQ (Grant Agreement No. 618074), and also gratefully acknowledges support from the COST Action MP1209 “Thermodynamics in the quantum regime". FLS is a member of the Brazilian National Institute of Science and Technology of Quantum Information (INCT-IQ) and acknowledges partial support from CNPq (Grant No. 307774/2014-7).
Appendix {#appendix .unnumbered}
==========
\[ap\]
In this appendix we analytically perform the diagonalization of the Hamiltonians in Eq. (\[hamrwa\]) for any value of $m$.
Diagonalization of $\hat{\mathcal{H}}^{(m)}_{+}$ {#do+}
-------------------
Let us consider the eigenbasis for the free Hamiltonian: $\{ | n , e \rangle, | n , g \rangle; n = 0,1,...,\infty \} $. It is easy to see that the subspace spanned by the set $ \{ | n , e \rangle, | n + m, g \rangle \}$ is invariant under the action of the JC like Hamiltonian, $\hat{\mathcal{H}}^{(m)}_{+}$, in Eq. (\[hamrwa\]) $\forall m, n$. Furthermore, if $m > n$ then it is true that $$\hat{\mathcal{H}}^{(m)}_{+} | n , g \rangle =
\hat{\mathcal{H}}_{0} | n , g \rangle =
\left( \hbar\nu n - \tfrac{\hbar\omega_0}{2} \right) | n , g \rangle,$$ [i.e.]{}, the eigenstate $| n , g \rangle$ must be included in the invariant subspace, which becomes $\{ | n , g \rangle, | n , e \rangle, | n + m, g \rangle \}$. Any matrix element of $\hat{\mathcal{H}}^{(m)}_{+}$ outside the invariant subspace is null because of (\[Omegme\]).
Taking the matrix elements of the Hamiltonian in the invariant subspaces, and rearranging the basis, it acquires a simple block structure: $$\label{hblocks+0}
\hat{\mathcal{H}}^{(m)}_{+} = \hat{\mathcal{H}}_{+}^{[1]} \oplus
\hat{\mathcal{H}}_{+}^{[2]}$$ with $$\label{hblocks+}
\begin{aligned}
& \hat{\mathcal{H}}^{[1]}_{+} =
\bigoplus_{n = 0}^{m - 1} \langle n , g |\hat{\mathcal{H}}^{(m)}_{+} | n , g \rangle
= \hbar
\bigoplus_{n = 0}^{m - 1} \left( \nu n - \tfrac{\omega_0}{2} \right), \\
& \hat{\mathcal{H}}^{[2]}_{+} \!=\!
\bigoplus_{n = 0}^{\infty} \!\!\begin{pmatrix}
\!\! \langle n , e |\hat{\mathcal{H}}^{(m)}_{+} | n , e \rangle \!\! & \!\!
\!\! \langle n\! + \! m, g |\hat{\mathcal{H}}^{(m)}_{+} | n , e \rangle \!\!\\
\!\! \langle n , e |\hat{\mathcal{H}}^{(m)}_{+} | n\! + \! m , g \rangle \!\! &
\!\! \langle n\! + \! m, g |\hat{\mathcal{H}}^{(m)}_{+} | n\! + \! m , g \rangle \!\!
\end{pmatrix} \\
& \,\,\,\,\, = \hbar
\bigoplus_{n = 0}^{\infty}
\begin{pmatrix}
\nu n + \frac{\omega_{0}}{2} &
\text{e}^{-i\omega_{L}t}\Omega f_n^m \\
\text{e}^{i\omega_{L}t} \Omega f_n^{m\ast} &
\nu (n+m)-\frac{\omega_{0}}{2}
\end{pmatrix},
\end{aligned}$$ with $f_n^m$ defined in Eq. (\[auxf1\]).
The above block structure enables us to diagonalize the Hamiltonian by the diagonalization of each block. The first $m$ blocks in Eq. (\[hblocks+\]) are matrices of only one element having eigenvalues and eigenvectors, respectively, given by $$\label{eigval1+}
\zeta_{+}^{(n,m)} = \hbar\nu n-\frac{\hbar\omega_{0}}{2}, \,\,
\left|\zeta_{+}^{(n,m)}\right\rangle = \left|n,g\right\rangle$$ for each $n = 0,...,m-1$ for a given $m$. The following blocks in the diagonal block structure of (\[hblocks+0\]) are $2 \times 2$ matrices, which can be diagonalized to give for all $n, m$ the eigenvalues $$\label{eigval+}
\begin{aligned}
\mu_{+}^{(n,m)} & =
\hbar\nu\left(n+\frac{m}{2}\right)-
\frac{\hbar}{2}\sqrt{\omega_{L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}}, \\
\gamma_{+}^{(n,m)} & = \hbar\nu\left(n+\frac{m}{2}\right)+
\frac{\hbar}{2}\sqrt{\omega_{L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}},
\end{aligned}$$ respectively, associated to the eigenvectors $$\label{eigevec+}
\begin{aligned}
\left|\mu_{+}^{(n,m)}\right\rangle & =
\tfrac{\text{e}^{-i\omega_{\!L}t}\left[\omega_{\!L}-\sqrt{\omega_{\!L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}}\right]}{\Omega f_n^{m\ast}\,\sqrt{1+\frac{\left|\omega_{\!L}-
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}{\Omega^{2}\left|f_n^m\right|^{2}}}}
\left|n,e\right\rangle \\
& + \tfrac{1}{\sqrt{1+\frac{\left|\omega_{\!L}-\sqrt{\omega_{\!L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n+m,g\right\rangle, \\
\left|\gamma_{+}^{(n,m)}\right\rangle & =\tfrac{\text{e}^{-i\omega_{\!L}t}\left[\omega_{\!L}
+\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right]}
{\Omega f_n^{m\ast}\,\sqrt{1+\frac{\left|\omega_{\!L}+\sqrt{\omega_{\!L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n,e\right\rangle \\
& + \tfrac{1}{\sqrt{1+\frac{\left|\omega_{\!L}+\sqrt{\omega_{\!L}^{2}+\Omega^{2}
\left|f_n^m\right|^{2}}\right|^{2}}{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n+m,g\right\rangle ,
\end{aligned}$$ where in this regime $\omega_{L} = (\omega_0 - m\nu)$.
Diagonalization of $\hat{\mathcal{H}}^{(m)}_{-}$ {#do-}
-------------------
For the AJC like Hamiltonian, $\hat{\mathcal{H}}^{(m)}_{-}$, in Eq. (\[hamrwa\]), the invariant subspace is $\{ | n + m , e \rangle, | n , g \rangle \}$ for all $m,n$, while for $m > n$ it should be replaced by $\{ \left|n,e\right\rangle, | n + m , e \rangle, | n , g \rangle \}$. Taking the matrix elements of the Hamiltonian in these subspaces, and rearranging the basis as before, one finds $$\label{hblocks-0}
\hat{\mathcal{H}}^{(m)}_{-} = \hat{\mathcal{H}}^{[1]}_{-} \oplus
\hat{\mathcal{H}}^{[2]}_{-},$$ where $$\label{hblocks-}
\begin{aligned}
& \hat{\mathcal{H}}^{[1]}_{-} =
\bigoplus_{n = 0}^{m - 1} \langle n , e |\hat{\mathcal{H}}^{(m)}_{-} | n , e \rangle
= \hbar
\bigoplus_{n = 0}^{m - 1} \left( \nu n + \tfrac{\omega_0}{2} \right), \\
& \hat{\mathcal{H}}^{[2]}_{-} \!=\!
\bigoplus_{n = 0}^{\infty} \!\!\begin{pmatrix}
\!\! \langle n , g |\hat{\mathcal{H}}^{(m)}_{-} | n , g \rangle \!\! & \!\!
\!\! \langle n\! + \! m, e |\hat{\mathcal{H}}^{(m)}_{-} | n , g \rangle \!\!\\
\!\! \langle n , g |\hat{\mathcal{H}}^{(m)}_{-} | n\! + \! m , e \rangle \!\! &
\!\! \langle n\! + \! m, e |\hat{\mathcal{H}}^{(m)}_{-} | n\! + \! m , e \rangle \!\!
\end{pmatrix} \\
& \,\,\,\,\, = \hbar
\bigoplus_{n = 0}^{\infty}
\begin{pmatrix}
\nu (n+m) + \frac{\omega_{0}}{2} &
\text{e}^{-i\omega_{L}t} \Omega f_n^m \\
\text{e}^{i\omega_{L}t} \Omega f_n^{m\ast} &
\nu n-\frac{\omega_{0}}{2}
\end{pmatrix},
\end{aligned}$$ and $f_n^m$ is defined in Eq. (\[auxf1\]). Now considering the one dimensional blocks where $m > n$, its eigenvalues and eigenvectors are, respectively, given by $$\label{eigval1-}
\zeta_{-}^{(n,m)} = \hbar\nu n+\frac{\hbar\omega_{0}}{2}, \,\,
\left|\zeta_{-}^{(n,m)}\right\rangle = \left|n,e\right\rangle,$$ for each $n = 0,...,m-1$ for a given $m$. The eigenvalues of each $2 \times 2$ blocks in Eq. (\[hblocks-\]) now becomes $$\label{eigval-}
\begin{aligned}
\mu_{-}^{(n,m)} & =\hbar\nu\left(n+\frac{m}{2}\right)
-\frac{\hbar}{2}\sqrt{\omega_{L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}} \\
\gamma_{-}^{(n,m)} & = \hbar\nu\left(n+\frac{m}{2}\right)+
\frac{\hbar}{2}\sqrt{\omega_{L}^{2}+
\Omega^{2}\left|f_n^m\right|^{2}},
\end{aligned}$$ respectively, associated to the eigenvectors $$\label{eigevec-}
\begin{aligned}
\left|\mu_{-}^{(n,m)}\right\rangle & =
\tfrac{\text{e}^{-i\omega_{L}t}\left[\omega_{\!L}-
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right]}
{\Omega f_n^{m\ast}\,\sqrt{1+\frac{\left|\omega_{\!L}-
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}
{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n,g\right\rangle \\
& +
\tfrac{1}{\sqrt{1+\frac{\left|\omega_{\!L}-
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}
{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n+m,e\right\rangle, \\
\left|\gamma_{-}^{(n,m)}\right\rangle & =
\tfrac{\text{e}^{-i\omega_{L}t}\left[\omega_{\!L}+
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right]}
{\Omega f_n^{m\ast}\,\sqrt{1+\frac{\left|\omega_{\!L}+
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}
{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n,g\right\rangle \\
& +
\tfrac{1}{\sqrt{1+\frac{\left|\omega_{\!L}+
\sqrt{\omega_{\!L}^{2}+\Omega^{2}\left|f_n^m\right|^{2}}\right|^{2}}
{\Omega^{2}\left|f_n^m\right|^{2}}}}\left|n+m,e\right\rangle ,
\end{aligned}$$ for all $n, m$ and in this regime $\omega_{L} = (\omega_0 + m\nu)$. As a final comment, the eigenvalues of the carrier transition Hamiltonian, Eq. (\[hamct\]), can be obtained either from Eq. (\[eigval+\]) or from Eq. (\[eigval-\]), and the corresponding eigenvectors from Eq. (\[eigevec+\]) or Eq. (\[eigevec-\]), simply by setting $m = 0$.
[99]{} H. Rabitz, [*Focus on Quantum Control*]{}, [New J. Phys. [**11**]{}, 105030 (2009)](http://iopscience.iop.org/article/10.1088/1367-2630/11/10/105030/meta). D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, [*Quantum dynamics of single trapped ions*]{}, [Rev. Mod. Phys. [**75**]{}, 281 (2003)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.75.281). R. Blatt and D. Wineland, [*Entangled states of trapped atomic ions*]{}, [Nature [**453**]{}, 1008 (2008)](http://www.nature.com/nature/journal/v453/n7198/abs/nature07125.html). Ch. Roos et al., [*Quantum State Engineering on an Optical Transition and Decoherence in a Paul Trap*]{}, [Phys. Rev. Lett. [**83**]{}, 4713 (1999)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.83.4713); D.M. Meekhof et al., [*Generation of Nonclassical Motional States of a Trapped Atom*]{}, [Phys. Rev. Lett. [**76**]{}, 1796 (1996)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.1796); F. Schmidt-Kaler et al., [*How to realize a universal quantum gate with trapped ions*]{}, [Appl. Phys. B [**77**]{}, 789 (2003)](http://link.springer.com/article/10.1007%2Fs00340-003-1346-9). F. Plastina et al., [*Irreversible Work and Inner Friction in Quantum Thermodynamic Processes*]{}, [Phys. Rev. Lett [**113**]{}, 260601 (2014)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.260601). P. Talkner, E. Lutz, and P. Hänggi, [*Fluctuation theorems: Work is not an observable*]{}, [Phys. Rev. E [**75**]{}, 50 (2007)](http://journals.aps.org/pre/abstract/10.1103/PhysRevE.75.050102); M. Esposito, U. Harbola, and S. Mukamel, [*Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems*]{}, [Rev. Mod. Phys. [**81**]{}, 1665 (2009)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.1665). M. Campisi, P. Hänggi, and P. Talkner, [*Colloquium: Quantum fluctuation relations: Foundations and applications*]{}, [Rev. Mod. Phys. [**83**]{}, 771 (2011)](http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.771). C. Jarzynski, [*Nonequilibrium Equality for Free Energy Differences*]{}, [Phys. Rev. Lett. [**78**]{}, 2690 (1997)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.78.2690). G.E. Crooks, [*Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences*]{}, [Phys. Rev. E [**60**]{}, 2721 (1999)](http://journals.aps.org/pre/abstract/10.1103/PhysRevE.60.2721). H. Tasaki, [*Jarzynski Relations for Quantum Systems and Some Applications*]{}, [arXiv:cond-mat/0009244v2 \[cond-mat.stat-mech\] (2000)](http://arxiv.org/abs/cond-mat/0009244). R. Klages, W. Just, and C. Jarzynski, [*Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond*]{} (Wiley-VCH Verlag & Co. KGaA, Boschstr, 2013); C. Bustamante, J. Liphardt, and F. Ritort, [*The Nonequilibrium Thermodynamics of Small Systems*]{}, [Physics Today [**58**]{}, 43 (2005)](http://scitation.aip.org/content/aip/magazine/physicstoday/article/58/7/10.1063/1.2012462); J. Gemmer, M. Michel, and G. Mahler, [*Quantum Thermodynamics, Emergence of Thermodynamic Behavior Within Composite Quantum Systems*]{} (2$^{\text{nd}}$ Ed., Springer, Berlin, 2010). M. Campisi, [*Fluctuation Relation for Quantum Heat Engines and Refrigerators*]{}, [J. Phys. A: Math. Theor. [**47**]{}, 245001 (2014)](http://iopscience.iop.org/article/10.1088/1751-8113/47/24/245001/pdf); M. Campisi, J. Pekola, and R. 
Fazio, [*Nonequilibrium Fluctuations in Quantum Heat Engines: Theory, Example, and Possible Solid State Experiments*]{}, [New Journal of Physics [**17**]{}, 035012 (2015)](http://stacks.iop.org/1367-2630/17/i=3/a=035012); T.D. Kieu, [*Quantum Heat Engines, The Second Law and Maxwell’s Daemon*]{}, [The European Physical Journal D [**39**]{}, 115 (2006)](http://dx.doi.org/10.1140/epjd/e2006-00075-5); V. Blickle and C. Bechinger, [*Realization of a micrometre-sized stochastic heat engine*]{}, [Nature Physics [**8**]{}, 143 (2011)](http://www.nature.com/nphys/journal/v8/n2/full/nphys2163.html). T.B. Batalhão et al., [*Experimental Reconstruction of Work Distribution and Study of Fluctuation Relations in a Closed Quantum System*]{}, [Phys. Rev. Lett. [**113**]{}, 140601 (2014)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.113.140601); R. Dorner et al., [*Extracting Quantum Work Statistics and Fluctuation Theorems by Single-Qubit Interferometry*]{}, [Phys. Rev. Lett. [**110**]{}, 230601 (2013)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.230601); L. Mazzola, G. De Chiara, and M. Paternostro, [*Measuring the Characteristic Function of the Work Distribution*]{}, [Phys. Rev. Lett. [**110**]{}, 230602 (2013)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.230602). M. Campisi, P. Talkner, and P. Hänggi, [*Fluctuation Theorem for Arbitrary Open Quantum Systems*]{}, [Phys. Rev. Lett. 102, 210 401 (2009)](http://link.aps.org/doi/10.1103/PhysRevLett.102.210401); P. Talkner, M. Campisi, and P. Hänggi, [*Fluctuation Theorems in Driven Open Quantum Systems*]{}, [J. Stat. Mec. [**2009**]{} 02025 (2009)](http://stacks.iop.org/1742-5468/2009/i=02/a=P02025). A. Carlisle et al., [*Out of equilibrium thermodynamics of quantum harmonic chains*]{}, [ArXiv:1403.0629 \[quant-ph\] (2014)](http://arxiv.org/abs/1403.0629). M. Orszag, [*Quantum Optics*]{} (2$^\text{\underline{nd}}$ Ed., Springer-Verlag, Berlin, 2008). O. Abah et al., [*Single-Ion Heat Engine at Maximum Power*]{}, [Phys. Rev. Lett. [**109**]{}, 203006 (2012)](http://link.aps.org/doi/10.1103/PhysRevLett.109.203006); J. Roßnagel1 et al., [*A single-atom heat engine*]{}, [Science [**352**]{}, 325 (2016)](http://science.sciencemag.org/content/352/6283/325). S. An et al., [*Experimental Test of the Quantum Jarzynski Equality with a Trapped-Ion System*]{}, [Nat. Phys. [**11**]{}, 193 (2014)](http://www.nature.com/nphys/journal/v11/n2/full/nphys3197.html); G. Huber, F. Schmidt-Kaler, S. Deffner, and E. Lutz, [*Employing Trapped Cold Ions to Verify the Quantum Jarzynski Equality*]{}, [Phys. Rev. Lett. [**101**]{}, 070403 (2008)](http://link.aps.org/doi/10.1103/PhysRevLett.101.070403). S. Deffner and E. Lutz, [*Generalized Clausius Inequality for Nonequilibrium Quantum Processes*]{}, [Phys. Rev. Lett. [**105**]{}, 170402 (2010)](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.105.170402). R. Balian, [*From Microphysics to Macrophysics: Methods and Applications of Statistical Physics*]{}, (Vol I, Springer, Berlin 2007). L. Fusco et al., [*Assessing the Nonequilibrium Thermodynamics in a Quenched Quantum Many-Body System via Single Projective Measurements*]{}, [Phys. Rev. X [**4**]{}, 031029 (2014)](http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.031029). W.L. Ribeiro, G.T. Landi, and F.L. Semião, [*Non-equilibrium thermodynamics of magnetic resonance using the quantum mechanics*]{}, [ArXiv:1601.01833 \[quant-ph\] (2016)](http://arxiv.org/abs/1601.01833). V. 
Vedral, [*The Role of Relative Entropy in Quantum Information Theory*]{}, [Rev. Mod. Phys. [**74**]{}, 197 (2002)](http://dx.doi.org/10.1103/RevModPhys.74.197). P.K. Ghosh, [*Ion Traps*]{} (Oxford University Press, New York, 1995). C.A. Blockley, D.F. Walls and H. Risken, [*Quantum Collapses and Revivals in a Quantized Trap*]{}, [Europhys. Lett. [**17**]{} (6), 509 (1992)](http://iopscience.iop.org/article/10.1209/0295-5075/17/6/006/meta). D.J. Wineland et al., [*Experimental Issues in Coherent Quantum-State Manipulation of Trapped Atomic Ions*]{}, [J. Res. Natl. Inst. Stand. Technol. [**103**]{}, 259 (1998)](http://nvlpubs.nist.gov/nistpubs/jres/103/3/j33win.pdf). I.S. Gradshteyn and I.M. Ryzhik, [*Table of Integrals, Series, and Products*]{} ($7^{\text{\underline{th}}}$ Ed, Elsevier, Amsterdam, 2007). W. Vogel and D.-G. Welsch, [*k-photon Jaynes-Cummings model with coherent atomic preparation: Squeezing and coherence*]{}, [Phys. Rev. A **40**, 7113 (1989)](http://journals.aps.org/pra/pdf/10.1103/PhysRevA.40.7113). W. Vogel and R. L. de Matos Filho, [*Nonlinear Jaynes-Cummings dynamics of a trapped ion*]{}, [Phys. Rev. A **52**, 4214 (1995)](http://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.4214). S. Olmschenk et al., [*Manipulation and detection of a trapped Yb$^+$ hyperfine qubit*]{}, [Phys. Rev. A [**76**]{}, 052314 (2007)](http://dx.doi.org/10.1103/PhysRevA.76.052314). Numerical investigation of the sum in Eq. (\[partf1\]) shows that it is convergent since $|f^m_n|$, defined by Eq. (\[auxf1\]), is a decreasing and oscillating function of $\eta$ and $n$. Some analytic progress is also possible to be made for large $n$ giving the well known asymptotic limit of Laguerre polynomials [@gradshteyn] appearing in Eq. (\[auxf1\]). It can be shown that $|f^m_n| \sim n^{-1/4} \eta^{-1/2}$ in accordance with the tendency with $\eta$ and $n$ numerically revealed. The sum is truncated as soon as the relative difference between successive terms become about $~10^{-10}$. In this way, the number of terms kept in the sum may vary in different plots as it is clearly depends on the specific set of parameters used to produce the plot. We indicate the truncation number in the caption of each figure.
---
abstract: 'After extending the Clarkson-Kruskal direct similarity reduction ansatz to a more general form, one may obtain various new types of reduction equations. In particular, some lower dimensional turbulent or chaotic systems may be obtained from the general type of similarity reductions of a higher dimensional Lax integrable model. Especially, the Kuramoto-Sivashinsky equation and an arbitrary third order quasi-linear equation, which includes the Korteweg de-Vries Burgers equation and the general Lorenz equation as two special cases, are obtained from the reductions of the (2+1)-dimensional dispersive long wave equation system. Some types of periodic and chaotic solutions of the (2+1)-dimensional dispersive long wave equation system are also discussed.'
author:
- |
Xiao-yan Tang$^{1,3}$, Sen-yue Lou$^{2,1,3}$[^1] and Ying Zhang$^3$\
**$^1$Physics Department of Shanghai Jiao Tong University, Shanghai 200030, P. R. China\
**$^2$CCAST (World Laboratory), P.O. Box 8730, Beijing 100080, P. R. China\
*$^3$ Abdus Salam International Centre for Theoretical Physics, Trieste, Italy*****
title: '**(1+1)-dimensional turbulent and chaotic systems reduced from (2+1)-dimensional Lax integrable dispersive long wave equation**'
---
= 16truecm = 23truecm
= -1truecm = -2truecm
.1in
Introduction
============
To reduce a higher dimensional nonlinear physical model to some lower dimensional ones is one of the most important approaches in the study of nonlinear science. Usually one uses the standard Lie group approach to reduce a higher dimensional partial differential equation (PDE) to lower dimensional ones$\cite{Lie}$. Later, the so-called nonclassical Lie group analysis was established to find lower dimensional similarity reductions$\cite{nonclassical}$. To find some lower dimensional reductions by using the classical and nonclassical Lie group approaches, one has to use some tedious algebraic procedures. In the past decade, to avoid the tedious algebraic calculation in the finding of similarity reductions, a simple and powerful direct method has been developed$\cite{CK,Lou}$. Using the direct method, various new similarity reductions of many physical models have been found, though these reductions can also be obtained from the nonclassical Lie group approach$\cite{CK1,Lou1,CK2}$. In $\cite{LouTang}$, the direct method was extended to find some types of conditional similarity reductions which have not yet been obtained by means of the present classical and nonclassical Lie group approaches. In this paper, we try to extend the direct method in another direction to find lower dimensional reductions that may not be obtained by using the present classical and nonclassical Lie group approaches.
In the next section, we discuss the general aspect on the direct reduction method. In section 3, the (2+1)-dimensional dispersive long wave equation (DLWE) is used as a concrete example to realize new reduction idea and to find some new lower dimensional reductions. In section 4, we use some numerical solutions of the lower dimensional reduction models to discuss some types of exact solutions of the (2+1)-dimensional DLWE. The last section is a short summary and discussion.
General reduction ansatz of direct method
=========================================
To reduce many types of (n+1)-dimensional nonlinear PDEs, $$\begin{aligned}
\Delta(x_i,\ u,\ u_{x_i},\ u_{x_ix_j},\ ...,\ i,j=0,1,...,n)\equiv
\Delta[u]=0, \ (x_0\equiv t),\end{aligned}$$ it is proven that the special ansatz $$\begin{aligned}
&&u=\alpha(x_0,x_1,...,x_n)+\beta(x_0,x_1,...,x_n)w(\xi_0,\xi_1,\ ...,\ \xi_{n-1}),\\
&&\xi_i=\xi_i(x_0,\ x_1,\ ...,\ x_n),\ \nonumber\end{aligned}$$ is sufficient instead of $$\begin{aligned}
u=U(x_0,x_1,...,x_n, w(\xi_0,\xi_1,\ ...,\ \xi_{n-1})),\end{aligned}$$ where $w(\xi_j)\equiv w(\xi_0,\xi_1,\ ...,\ \xi_{n-1})$ satisfies an $n$-dimensional PDE. In (2) and (3), $u$ and $w$ may be some multi-component fields. However, to reduce a higher dimensional PDE to some lower dimensional ones one may use some more general ansätze instead of (3). For instance, the ansatz (3) may be extended as $$\begin{aligned}
u=U(x_i,\ w_{\xi_j},\ w_{\xi_{j_1}\xi_{j_2}},\ ...,\
w_{\xi_{j_1}\xi_{j_2}...\xi_{j_k}})\equiv U[w].\end{aligned}$$ In other words, some types of derivatives of the reduction function may be included in the primary reduction ansatz. However, it is quite difficult to find concrete results by using the general ansatz (4). By using a procedure similar to the simplification from (3) to (2) for many types of significant mathematical physics models, one may simplify (4) to $$\begin{aligned}
u=U_0[w]+U_1[w]w_{\xi_{j_1}\xi_{j_2}...\xi_{j_k}},\end{aligned}$$ where $w_{\xi_{j_1}\xi_{j_2}...\xi_{j_k}}$ is one of the highest derivatives of $w$ included in (4) while $U_0[w]$ and $U_1[w]$ are $w_{\xi_{j_1}\xi_{j_2}...\xi_{j_k}}$ independent.
Special new reductions of the (2+1)-dimensional DLWE
====================================================
To give out some concrete results from above general reduction ansatz, we take the (2+1)-dimensional dispersive long wave equation (2DDLWE) $$\begin{aligned}
&& u_{yt}+\eta_{xx}+u_xu_y+uu_{xy}=0,\\
&& \eta_t+u_x +\eta u_x+u\eta_x+u_{xxy}=0\end{aligned}$$ as a simple example. The equation system (6) and (7) is first obtained by Boiti *et al. $\cite{Boiti}$ as a compatibility condition for a ‘weak’ Lax pair. The infinite dimensional Kac-Moody-Virasoro type symmetry structure of the model is revealed by Paquin and Winternitz$\cite{PW}$. The more general $W_\infty$ symmetry is given in$\cite{Winfty}$. It is proven that$\cite{PP}$ the 2DDLWE system is fails in passing the Painlevé test both at the WTC’s (Weiss-Tabor-Carnevale) $\cite{WTC}$ meaning and at the ARS’s (Ablowitz-Ramani-Segur) meaning$\cite{ARS}$. Using the special ansatz (2), nine types of two dimensional similarity reductions and thirteen types of ODE (ordinary differential equation) reductions has been given by one of the present authors (Lou)$\cite{dlwe}$.*
For further simplicity, we take the reduction ansatz (5) in the specific form $$\begin{aligned}
&& u=F_1(t,y,w)w_x+F_0(w,w_x,w_{xx})+F_2(w,w_x)w_{xxx},\\
&& v\equiv \eta+1 = u_y= F_{1y}(t,y,w)w_x,\qquad w\equiv w(x,t).\end{aligned}$$ The reason why we take the ansatz (8) is that we try to find reduction equations of the following third order autonomous PDE form $$\begin{aligned}
w_t=\alpha w_{xxx}+F_3(w,\ w_x,\ w_{xx})\end{aligned}$$ for some possible functions $F_3$. The ansatz (9) reduces the two equations (6) and (7) to the same equation.
Substituting (8)-(10) into (6) and/or (7) yields $$\begin{aligned}
F_{1y}(t,y,w)(F_2(w,w_x)w_x+\alpha)w_{xxxx}+f(t,y,w,w_x,w_{xx},w_{xxx})=0,\end{aligned}$$ where $f(t,y,w,w_x,w_{xx},w_{xxx})\equiv f $ is a complicated expression of the indicated variables. Because $f$ is independent of $w_{xxxx}$, (11) is valid only for $$\begin{aligned}
F_2(w,\ w_x)=-\alpha w_x^{-1}.\end{aligned}$$ Substituting (12) into (11), we have $$\begin{aligned}
F_{1y}(t,y,w)(F_{3w_{xx}}(w,w_x,w_{xx})+1+w_xF_{0w_{xx}}(w,w_x,w_{xx})
)w_{xxx}+f_1(t,y,w,w_x,w_{xx})=0,\end{aligned}$$ where $f_1(t,y,w,w_x,w_{xx})\equiv f_1 $ is independent of $w_{xxx}$. From eq. (13) we immediately have $$\begin{aligned}
F_3(w,\ w_x,\ w_{xx})=- w_{xx}-w_xF_{0}(w,\ w_x,\ w_{xx})+F_3(w,\
w_x).\end{aligned}$$ By using Eq. (14), (13) is simplified further to $$\begin{aligned}
&&(2w_xF_{1y}(t,y,w)F_1(t,y,w)+F_{1y}(t,y,p)F_{3w_x}(w,w_x)+2w_xF_{1yw}(t,y,p))w_{xx}\nonumber\\
&& \qquad +f_2(t,y,w,w_x)=0,\end{aligned}$$ where $f_2(t,y,w,w_x)\equiv f_2 $ is $w_{xx}$ independent. Integrating (15) once with respect to $y$, we have $$\begin{aligned}
&&\left(w_xF_1(t,y,w)^2+F_1(t,y,w)F_{3w_x}(w,w_x)+2w_xF_{1w}(t,y,w)+f_3(t,w,w_x)\right) w_{xx}\nonumber\\
&& \qquad +f_2(t,y,w,w_x)=0,\end{aligned}$$ with $f_3(t,w,w_x)$ being an integrating function. Because of the $w_x$ independence of $F_1(t,y,w)$, by vanishing the first term of (16), we get $$\begin{aligned}
F_3(w,w_x)=F_{32}(w)w_x^2+F_{30}(w),\qquad
f_4(t,w,w_x)=w_xF_4(t,w),\end{aligned}$$ and $$\begin{aligned}
2F_{1w}(t,y,w)+F_1(t,y,w)^2+2F_{32}(w)F_1(t,y,w)+F_4(t,w)=0.\end{aligned}$$ Using (17) and (18), Eq. (16) finally simplifies to $$\begin{aligned}
F_{1t}(t,y,w)+F_{1}(t,y,w)F_{30w}(w)-\frac12F_{30}(w)(F_1(t,y,w)+2F_{32}(w))F_1(t,y,w)+F_5(t,w)=0.\end{aligned}$$ The compatibility condition of (18) and (19) requires that $$\begin{aligned}
F_{5}(t,w)=-F_{30ww}(w)+F_{32}(w)F_{30w}(w)-\frac12F_4(t,w)F_{30}(w)+F_{32w}(w)F_{30}(w)\end{aligned}$$ and $$\begin{aligned}
&&F_4(t,w)F_{30w}(w)-F_{32}(w)F_{30}(w)F_{32w}(w)-2F_{30w}(w)F_{32w}(w)+\frac12F_{30}(w)F_{4w}(t,w)\nonumber\\
&&-F_{32ww}(w)F_{30}(w)
-F_{32}(w)^2F_{30w}(w)+F_{30www}(w)+\frac12F_{4t}(t,w)=0.\end{aligned}$$ Now the final results show us that the 2DDLWE (6) and (7) possesses the following reduction $$\begin{aligned}
w_t=\alpha w_{xxx}-w_{xx}-w_xF_{0}(w,\ w_x,\
w_{xx})+F_{32}(w)w_x^2+F_{30}(w)\end{aligned}$$ with arbitrary functions $F_0(w,\ w_x,\ w_{xx}),\ F_{32}(w)$ and $
F_{30}(w)$ and $$\begin{aligned}
&&u=-\alpha w_x^{-1}w_{xxx}+F_0(w,\ w_x,\ w_{xx})+ F_1(t,y,w)w_x,\\
&&v=\eta+1=F_{1y}(t,y,w)w_x,\end{aligned}$$ where $F_1(t,y,w)$ is determined by two compatible Riccati equations (18) and (19) while $F_4(t,\ w)$ and $F_5(t,\ w)$ are determined by (20) and (21). The simplest solution of (18)–(21) reads $$\begin{aligned}
&&F_4(t,w)=F_{32}(w)=F_5(t,w)+2A_2=0,\\
&&F_{30}(w)=A_2w^2+A_1w+A_0,\\
&&F_{1}(t,y,w)=\frac2{w+q(y,t)},\\
&&q_t(y,t)=A_1q(y,t)-A_0-A_2q(y,t)^2,\end{aligned}$$ where $A_0,\ A_1,\ A_2$ are arbitrary constants.
From the reduction equation (22), we can see that though the original 2DDLWE system is Lax integrable, possesses infinitely many symmetries and has abundant multi-soliton structures, there still exist various nonintegrable lower dimensional reductions because of the presence of three arbitrary functions $F_0(w,\ w_x,\
w_{xx}),\ F_{32}(w)$ and $F_{30}(w)$. For instance, if we select $F_0(w,\ w_x,\ w_{xx}),\ F_{32}(w)$ and $F_{30}(w)$ simply as $$\begin{aligned}
-w_xF_{0}(w,\ w_x,\ w_{xx})+F_{32}(w)w_x^2+F_{30}(w)= ww_x,\end{aligned}$$ then (22) becomes the well known KdV-Burgers equation $$\begin{aligned}
w_t=\alpha w_{xxx}- w_{xx}+ww_x\end{aligned}$$ which is one of the possible candidates to describe turbulence phenomena in fluid physics and plasma physics$\cite{turbulence,
KdVB}$. If the functions $F_0(w,\ w_x,\ w_{xx}),\ F_{32}(w)$ and $F_{30}(w)$ are fixed to satisfy $$\begin{aligned}
&&-w_xF_{0}(w,\ w_x,\ w_{xx})+F_{32}(w)w_x^2+F_{30}(w)\nonumber\\
&&\qquad =\frac1w[w_{xx}w_x+(c+1)w_x^2]
-w^2w_x-(b+c)w_{xx}-wc(b-ba+w^2)\end{aligned}$$ with $\alpha=-1$ and $a,\ b,\ c$ are arbitrary constants, then (24) becomes a (1+1)-dimensional extension $$\begin{aligned}
w_t=- w_{xxx}-(b+c+1)w_{xx}+\frac1w[w_{xx}w_x+(c+1)w_x^2]
-w^2w_x-wc(b-ba+w^2)\end{aligned}$$ of the famous chaotic system, the Lorenz system$\cite{Lorenz}$ $$\begin{aligned}
w_{s}=-c(w-g),\ g_{s}=(a-h)w-g,\ h_{s}=wg-bh.\end{aligned}$$ Actually, the travelling wave reduction of (32), $w=w(x+b(c+1)t)\equiv w(s)$, is totally equivalent to the Lorenz system (33).
In principle, any order of derivatives of $w$ may be included in the ansatz (5), and some types of more complicated reduction equations can be obtained. For instance, if we insert a fourth order derivative $w_{xxxx}$ term into the reduction ansatz, we may obtain many fourth order (1+1)-dimensional PDE reductions. Here we list only a special example in which the reduction equation has the famous Kuramoto-Sivashinsky (KS) equation form$\cite{KS}$ $$\begin{aligned}
&&u=\pm \frac{2w_x(a_1+a_3Q)}{a_0+a_1w+a_2Q+a_3wQ}-\frac{(\alpha_3\mp1)w_{xx}}{w_x}+\alpha_1w\nonumber\\
&&\qquad +\frac{1}{w_x}(\alpha_5w_{xxxx}+(a_1c_2+a_3c_0)w^2
+(c_2a_0+c_1a_1+a_2c_0+\alpha_2)w+c_1a_0),\\
&&\eta=\frac{Q_yw_x(a_3a_0-a_2a_1)}{(a_0+a_1w+a_2Q+a_3wQ)^2},\\
&&Q_t=a_0c_0+(c_1a_1+a_2c_0-c_2a_0)Q+(c_1a_3-c_2a_2)Q^2,\\
&&w_t+\alpha_1ww_x+\alpha_2w+\alpha_3w_{xx}+\alpha_4w_{xxxx}=0,\end{aligned}$$ where $a_i,\ \alpha_i, \ i=0,1,...,5$ are arbitrary constants and $c_0,\ c_1$ and $c_2$ are arbitrary functions of $t$. Various interesting properties of the chaotic KS equation (37) have been studied by many authors, see, e.g., $\cite{KS}$ and the references therein.
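To give a feeling for the dynamics governed by the reduction (37), the following minimal Python sketch (our own illustration, not used in the original work) integrates (37) with a simple semi-implicit Fourier pseudo-spectral scheme on a periodic domain; the parameter values, grid and initial condition are assumptions chosen purely for demonstration.

```python
import numpy as np

def ks_step(w_hat, k, dt, a1=1.0, a2=0.0, a3=1.0, a4=1.0):
    """One semi-implicit Euler step of w_t + a1 w w_x + a2 w + a3 w_xx + a4 w_xxxx = 0."""
    w = np.fft.ifft(w_hat).real
    w_x = np.fft.ifft(1j * k * w_hat).real
    nonlinear_hat = -a1 * np.fft.fft(w * w_x)   # advection term, treated explicitly
    linear = -a2 + a3 * k**2 - a4 * k**4        # Fourier symbol of the linear terms
    return (w_hat + dt * nonlinear_hat) / (1.0 - dt * linear)

def run_ks(n=256, length=32.0 * np.pi, dt=0.01, steps=10000):
    x = length * np.arange(n) / n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    w_hat = np.fft.fft(np.cos(x / 16.0) * (1.0 + np.sin(x / 16.0)))  # smooth seed
    for _ in range(steps):
        w_hat = ks_step(w_hat, k, dt)
    return x, np.fft.ifft(w_hat).real
```

With the default coefficients the long-wavelength modes are linearly unstable while the short ones are damped, which is what produces the spatio-temporally chaotic behaviour characteristic of the KS equation.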
In the reduction results (22) and (37), the independent variables are simply taken as $x$ and $t$. Actually, extending these independent variables to some more general forms is possible because the model possesses infinitely many symmetries with some arbitrary functions$\cite{PW, Winfty}$. For instance, using the finite transformations given in $\cite{PW}$, all the independent variables of the systems (22) and (37) are changed to some general forms naturally.
Special solutions
=================
Now an interesting question is: which kinds of exact solutions can be obtained from our new reduction equations? In this section, we write down and plot some interesting exact solutions.
Multi-dromion solutions
-----------------------
If we take $F_0(w,\ w_x,\ w_{xx})$ as $$\begin{aligned}
F_0(w,\ w_x,\
w_{xx})=6w-w_x^{-1}(w_{xx}-w_x^2F_{32}(w)-F_{30}(w)),\end{aligned}$$ then we know that the $w$ equation (22) is just the well known KdV equation $$\begin{aligned}
w_t=\alpha w_{xxx}-6ww_x.\end{aligned}$$ Then we can use the N soliton solutions of the (1+1)-dimensional KdV equation to construct the multi-dromion solutions by taking $$\begin{aligned}
q(y,t)=a_0+\sum_{n=1}^Na_n\tanh (l_ny-y_n)\end{aligned}$$ with $a_i,\ i=0,1,...,N$ being arbitrary constants and $A_0=A_1=A_2=0$. Fig. 1 is a plot of the four dromion solution with $w$ being two soliton solution of the KdV equation $$\begin{aligned}
&&w=-2\alpha (\ln \phi)_{xx} ,\\
&&\phi=1+\exp(k_1x+\alpha k_1^3t+x_1)+\exp(k_2x+\alpha
k_2^3t+x_2)\nonumber\\
&&\qquad +\frac{(k_1-k_2)^2}{(k_1+k_2)^2}\exp((k_1+k_2)x+\alpha(
k_1^3+k_2^3)t+x_1+x_2)\end{aligned}$$ and $$\begin{aligned}
q=4+\tanh(l_1y-y_1)+2\tanh(l_2y-y_2)\end{aligned}$$ while the other constants are fixed as $$\begin{aligned}
k_1=1,\ k_2=1.1, \ l_1=l_2=1,\ \alpha=-1,\ x_1=-3,\ x_2=3,\
y_1=-3,\ y_2=3.\end{aligned}$$ Figure 1 and all other figures of this paper are plotted at time $t=0$.
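The field plotted in Fig. 1 can be evaluated directly from (41)-(44): using $F_{1}=2/(w+q)$ from (27), the second field reads $v=F_{1y}w_x=-2q_yw_x/(w+q)^2$. A minimal Python sketch of this evaluation at $t=0$ is given below (our own illustration; the grid sizes are arbitrary assumptions):

```python
import numpy as np

def phi(x, t, k1=1.0, k2=1.1, x1=-3.0, x2=3.0, alpha=-1.0):
    """Two-soliton tau function of Eq. (42)."""
    e1 = np.exp(k1 * x + alpha * k1**3 * t + x1)
    e2 = np.exp(k2 * x + alpha * k2**3 * t + x2)
    a12 = (k1 - k2)**2 / (k1 + k2)**2
    return 1.0 + e1 + e2 + a12 * e1 * e2

def dromion_v(x, y, t=0.0, alpha=-1.0):
    """Evaluate v = -2 q_y w_x / (w + q)^2 from Eqs. (41)-(44) on a grid."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    dx = x[1] - x[0]
    lnphi = np.log(phi(X, t, alpha=alpha))
    w = -2.0 * alpha * np.gradient(np.gradient(lnphi, dx, axis=0), dx, axis=0)  # Eq. (41)
    w_x = np.gradient(w, dx, axis=0)
    q = 4.0 + np.tanh(Y + 3.0) + 2.0 * np.tanh(Y - 3.0)          # Eq. (43) with (44)
    q_y = 1.0 / np.cosh(Y + 3.0)**2 + 2.0 / np.cosh(Y - 3.0)**2
    return -2.0 * q_y * w_x / (w + q)**2

x = np.linspace(-15.0, 15.0, 401)
y = np.linspace(-15.0, 15.0, 401)
v = dromion_v(x, y)   # four-dromion field at t = 0
```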
Periodic and chaotic line soliton solutions
-------------------------------------------
If $q$ is still given by (43) while $w$ is determined by (33), then we may obtain some kinds of periodic or chaotic line soliton solutions. Because no exact explicit solutions of (33) have ever been given, we can only use numerical solutions of the generalized Lorenz system to construct exact solutions of the 2DDLWE. For several types of parameter ranges the solutions of the Lorenz system are periodic, while for other parameter ranges they are chaotic. Fig. 2 is a plot of the periodic two line soliton solution of the 2DDLWE with the parameters of the Lorenz system (33) fixed as $$\begin{aligned}
a=350 ,\ b=\frac83,\ c=10\end{aligned}$$ and $$\begin{aligned}
q=200+\tanh y\end{aligned}$$
From Fig. 2b we can see that the line soliton solution is localized in the $y$ direction and periodic in the $s(=x+b(c+1)t)$ direction when the parameters are selected appropriately as in (45).
Fig. 3 plots the chaotic line soliton solution of the 2DDLWE with the parameters of the Lorenz system (33) given by $$\begin{aligned}
a=60 ,\ b=\frac83,\ c=10\end{aligned}$$ while the $q$ function is still given by (46).
Obviously, Fig. 3 shows us that when the parameters of (33) are located in the chaotic regions, the corresponding solution becomes a chaotic straight line soliton solution which is localized in the $y$ direction and chaotic in the $s$ direction.
Space periodic and chaotic solutions
------------------------------------
From (28) we know that in some cases the function $q$ may be an arbitrary function of $y$, so we may also select it as a solution of the Lorenz system (33) with the replacement of the independent variable $s\rightarrow y$. When the functions $q=q(y)$ and $w=w(s)=w(x+b(c+1)t)$ are both solutions of the Lorenz system, we can obtain many types of solutions which are periodic or chaotic in both directions.
Fig. 4 is a plot of a periodic solution of the 2DDLWE which is periodic in both directions, where $q(y)$ and $w(s)$ are both chosen as solutions of the Lorenz system (33) with (45).
Fig. 5 is a plot of an exact solution of the 2DDLWE which is periodic in the $y$ direction and chaotic in the $x$ direction. The corresponding solutions for $q$ and $w$ are both determined by (33) but with the different parameters (45) and (47) respectively.
Fig. 6 shows a chaotic solution of the 2DDLWE in both directions. The related solutions for $q$ and $w$ are both determined by (33) with the same parameters (47).
Summary and discussions
=======================
In summary, the CK direct similarity reduction ansatz is extended to a much more general form. Using the general reduction ansatz, one may obtain various new lower dimensional reduction equations including many turbulent and chaotic systems. Taking the 2DDLWE as a concrete example, and a slightly special reduction ansatz with third order derivatives of the reduction field, we obtain a general third order quasi-linear equation, which includes the KdV, MKdV, KdV-Burgers and the generalized Lorenz system as special examples, as a special reduction of the 2DDLWE. The known KS system and other types of higher order models may also be obtained from the reductions of the 2DDLWE.
The reductions (30), (32) and (37) are known as typical turbulent and chaotic systems, while the 2DDLWE is known as an IST integrable model. The reason why some lower dimensional turbulent and chaotic systems can be reduced from a higher dimensional integrable (in some particular sense) model is that, for a higher dimensional integrable model, some types of lower dimensional arbitrary functions do enter into its general solution.
Using the solutions of the lower dimensional models we may obtain many kinds of new solutions for the 2DDLWE. Especially, using the numerical solutions of the Lorenz systems, some types of periodic line soliton, chaotic line soliton, periodic-periodic, periodic-chaotic and chaotic-chaotic solutions of the 2DDLWE can be obtained.
In Ref. $\cite{DS}$, using the variable separation approach, we have also pointed out that turbulent and chaotic systems can be obtained from other “integrable" models like the Davey-Stewartson equations and the asymmetric Nizhnik-Novikov-Veselov equation because of the presence of lower dimensional arbitrary functions in the general solutions. Further aspects of the method and the effects of the reduced turbulent systems on the original model(s) are worthy of further study.
.2in The authors are indebted to Professors Q. P. Liu, G-x Huang and C-p Sun for helpful discussions. The work was supported by the National Outstanding Youth Foundation of China, the Research Fund for the Doctoral Program of Higher Education of China and the Natural Science Foundation of Zhejiang Province of China.
.2in
[99]{} P. J. Olver, Application of Lie Group to Differential Equations (Springr, Berlin, 1986). G. W. Bluman and J. D. Cole, (1969) J. Math. Mech. 18, 1025. P. A. Clarkson and M. D. Kruskal, (1989) J. Math. Phys. 30, 2201. S. Y. Lou, Phys. Lett. A 151, 133, 1990). P. A. Clarkson (1989) J. Phys. A: Math. Gen. 22 2355; 22 3821; (1990) European J. Appl. Math. 1 279 ; (1992) Nonlinearity 5 453; (1994) Math. Comp. Mod. 35 255; (1995) Chaos Soliton and Fractal [5]{} 2261; P. A. Clarkson and S. Hoon. (1992) European J. Appl. Math. 3 381; (1993) J. Phys. A: Math. Gen. 26 133; P. A. Clarkson and P. Winternitz (1991) Physica D 49 257; P. A. Clarkson and D. K. Ludlow, (1994) J. Math. Anal. Appl., 186 132; P. A. Clarkson and E. L. Mansfield, (1995) Acta Appl. Math. 39, 245; E. L. Mansfield and P. A. Clarkson, (1997) Math. Comput. Simul. 43, 39. S. Y. Lou, (1990) J. Phys. A: Math. Gen. 23 L649; (1991) Sci. China Ser. A 34 1098; (1992) J. Math. Phys. 33 4300; (1993) Phys. Lett. A 176 96; S. Y. Lou and G. J. Ni, (1991) Commun. Theor. Phys. 15 465 ; S. Y. Lou, H. Y. Ruan, D. F. Chen and W. Z. Chen, (1991) J. Phys. A: Math. Gen. 24 1455; S. Y. Lou and H. Y. Ruan (1993) J. Phys. A: Math. Gen. 26 4679. R. Fujioka and A. Espinosa (1991) J. Phys. Soc. Japan 60 4071; M. C. Nucci Phys. (1992) Lett. A64 49; C. Z. Qu , (1995) Int. J. Theor. Phys. 34 99; (1995) Commun. Theor. Phys. 24 177; J. F. Zhang, (1995) Commun. Theor. Phys. 24 69; P. G. Estevez, (1995) Stud. Appl. Math. 95 73; E. Pucci (1992) J. Phys. A: Math. Gen. 25 2631 ;(1993) J. Phys. A: Math. Gen. 26 681; G. Saccomandi, (1997) J. Phys. A: Math. Gen. 30 2211. S. Y. Lou, X. Y. Tang and J. Lin, (2000) J. Math. Phys. 41 8286; X. Y. Tang, J. Lin and S. Y. Lou, (2001) Commun. Theor. Phys. 35 399. M. Boiti, J. J. P. Leon and F. Pempinelli, 1987, Inverse Problem, 3 371. G. Paquin, and P. Winternitz, 1990, Physica D 46 122. S. Y. Lou, 1994, J. Phys. A: Math. Gen. 27, 3235. S. Y. Lou, 1993, Phys. Lett. A176, 96. J. Weiss, M. Tabor and G, Carnevale, (1983) J. Math. Phys., 24 522. A. Ramani, B. Grammaticos and T. Bountis, (1989) Phys. Rep. 180 159. S-y Lou, (1995) Math. Meth. Appl. Sci., 18 789. S-d Liu and S-k Liu, (1991) Sincia Sinica, A 9 938 (in Chinese). Y. Nakamura, H. Bailung and P. K. Shukla, (1999) Phys. Rev. Lett. 83 1602; E. P. Raposo and D. Bazeia, (1999) Phys. Lett. A253 151; G. Karch, Nonlinear Analysis: Theory Methods & Applications, 35 (1999)199. F. W. Lorenz, J. Atomos. Sci., 20, 130 (1963); M. Clerc, P. Coullet, E. Tirapegui, Phys. Rev. Lett., 83, 3820 (1999). H. Sakaguchi, 2000, Phys. Rev. E, 62, 8817; P. K. Friz and J. C. Robinson, 2001, Physica D 148, 201. S. Y. Lou, X-y Tang and Y. Zhang, Preprint nlin.PS/0107029.
[^1]: Email: sylou@mail.sjtu.edu.cn
---
abstract:
- |
In this article we report a stochastic evaluation of the recently proposed LCC multireference perturbation theory [\[]{}Sharma S., and Alavi A., *J. Chem. Phys.* **143**, 102815, (2015)[\]]{}. In this method both the zeroth order and first order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth order wavefunction follows a set of stochastic processes identical to the one used in the FCIQMC method. To sample the first order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first order wavefunction from the zeroth order wavefunction. The second order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory.
This method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard CASSCF+PT, we find the singlet-triplet gap to be in good agreement with the experimental values.
author:
- Guillaume Jeanmairet
- Sandeep Sharma
- Ali Alavi
bibliography:
- 'biblioLCC.bib'
title: 'Stochastic multi-reference perturbation theory with application to linearized coupled cluster method '
---
Intro\[sec:Intro\]
==================
One of the significant challenges in quantum chemistry is the description of electronic systems that simultaneously display chemically-relevant static and dynamical electron correlation. Static correlation often occurs in open shell systems and gives rise to long-ranged, highly entangled, many-electron wavefunctions involving electronic orbitals close to the Fermi energy. Dynamical correlation, on the other hand, correlates electrons on a short length scale, and its description requires the single-particle spectrum to extend over many energy scales. There are many systems of practical interest that fall into this class: we can mention transition metal clusters that are found in many protein active sites and that play a key role in a wide range of important biological processes such as photosynthesis or respiration[@beinert_iron-sulfur_1997]. Because of the multideterminantal nature of the wavefunction they can prove hard to study with single-reference approaches such as the widely used density functional theory[@neese_critical_2006].
Unfortunately the computational requirements to describe the wavefunction of such systems from an exact perspective are utterly daunting. In an ideal scenario, one would allow for a full correlation treatment in a large basis: in other words all electrons (even those not close to the Fermi energy) are simultaneously correlated over the entire basis. Such a treatment (full CI), which would amount to the exact solution of the Schrodinger equation in the given basis, is generally out of reach for sufficient numbers of electrons, owing to the combinatorial explosion of the Hilbert space of a many-particle system. To overcome this limitation the calculation can be carried out by considering only a meaningful subset of the available configurations; the most popular choice is to use a complete active space (CAS) wave function[@siegbahn_comparison_1980; @roos_complete_1980; @siegbahn_complete_1981]. This approach allows one to tackle bigger systems than FCI; however, it suffers from the same exponential scaling problem. Different methods have been proposed that allow one to treat larger active spaces by imposing some restrictions on the occupation of the active space orbitals such as the restricted active space (RAS)[@olsen_determinant_1988; @malmqvist_restricted_1990], generalized active space (GAS)[@ma_generalized_2011] and SplitGas[@li_manni_splitgas_2013] approaches. With modern techniques such as the Density Matrix Renormalization Group[@white_density_1992; @white_density-matrix_1993; @white_ab_1999; @wouters_chemps2:_2014; @sharma_spin-adapted_2012; @zgid_spin_2008; @moritz_convergence_2005; @legeza_optimizing_2003; @kurashige_high-performance_2009; @olivares-amaya_ab-initio_2015] or the Full CI Quantum Monte Carlo[@booth_fermion_2009; @cleland_taming_2012; @booth_linear-scaling_2013; @booth_towards_2013; @petruzielo_semistochastic_2012] technique one can treat very large Hilbert spaces, corresponding to up to 30-40 electrons in 30-40 orbitals, but even such techniques struggle to handle systems with many hundreds of electrons correlating in hundreds or even thousands of orbitals, which is certainly necessary to be able to treat even intermediate-size molecules.
Approximations in the correlation treatment are therefore necessary, and given the energetic divisions between static and dynamical correlation, multi-reference perturbation theory (MRPT) is a natural starting point, which leads to a variety of active-space methods. Full treatment of the correlation among a predefined set of active orbitals (usually chosen to be around the Fermi energy) with a given number of electrons leads to the reference Hamiltonian, followed by a perturbative treatment of the remaining terms in the Hamiltonian. In this spirit, one can mention the complete active space perturbation theory (CASPT)[@andersson_second-order_1990; @andersson_secondorder_1992], the n-electron valence state perturbation theory (NEVPT)[@angeli_introduction_2001; @angeli_n-electron_2002] and the multireference configuration interaction (MRCI) method[@werner_efficient_1988], as well as the linearised coupled cluster (LCC) developed recently by two of us[@sharma_multireference_2015; @sharma_quasi-degenerate_2016]. With the perspective of dealing with large systems, it is worth noticing that a second-order perturbation theory approach based on the generalized active space self-consistent field has also recently been proposed[@ma_second-order_2016].
All of these multireference perturbation theories involve a deterministic resolution of the perturbation equations, and the cost of the calculation of the perturbation by itself quickly becomes intractable as the number of core and virtual orbitals grows. This can be dealt with by making a further approximation known as internal contraction, which comes at the cost of computing the reduced density matrices (RDMs) of the active space up to fourth order, itself a significant challenge for large active spaces.
A stochastic resolution of the perturbation equations seems to be a promising direction to reduce the cost of the calculation and to avoid the use of internal contraction. Surprisingly, only a few attempts to implement a stochastic resolution of perturbation theories have been proposed, among which we can mention the stochastic evaluation of MP2 energies by Hirata and collaborators, MC-MP2. This approach is based on rewriting the second order perturbation equations as the sum of two 13-dimensional integrals thanks to a Laplace transform. Those integrals are then evaluated through Monte Carlo integration[@willow_stochastic_2012; @willow_convergence_2013; @willow_stochastic_2013]. Another approach to solve the MP2 equations is to express its contributions in terms of graphs that describe sets of connected Slater determinants, and then to stochastically sample those graphs[@thom_stochastic_2007]. However those two approaches involve a single reference zeroth order wavefunction and, to our knowledge, a stochastic resolution of multireference perturbation theories has not been proposed yet.
The purpose of this paper is to show how a stochastic treatment of multireference perturbation theory can be implemented within the FCIQMC methodology, namely a walker-based method to solve for the response wavefunction of perturbation theory. Response theory differs from the eigenvalue problem of diagonalisation in that the former involves the solution of a linear system of equations (the response equations). Since the FCIQMC technique was developed as a ground-state eigenvalue solver, it needs to be modified and generalized in order to handle this new setting. We show how this can be done for the MRLCC method, although it can be similarly implemented for other flavors of multireference perturbation theory. Importantly, the resulting method, which we term LCCQMC, is a fully uncontracted method, and is therefore potentially more accurate than the internally contracted approximations.
In our new method the sampling of the zeroth and first order wavefunctions is done simultaneously. The population on the zeroth order wavefunction follows the standard FCIQMC rules, while the population dynamics of the response wavefunction contains a source term that depends on the population of the zeroth order wavefunction. This is practically done by allowing walkers on one replica to spawn new walkers on another replica. The structure of the rest of this article is the following: in the next part we recall the governing equations of MRPT, with a particular emphasis on the MRLCC method. We also recall some basics of the FCIQMC method before showing how MRLCC can be expressed in the FCIQMC language, i.e. as a stochastic propagation of a signed walker population. In the third part a description of the algorithm of the QMC-LCC method that has been implemented in the NECI code is given, and we also discuss some important technical points. We then illustrate the potential of the method by applying it to some organic molecules. We first test the method by studying the carbon dimer with systematically more refined basis sets, going from cc-pVDZ to cc-pVQZ. Afterwards we turn our attention to the evaluation of singlet-triplet gaps; first in the case of the benzene molecule, which has a singlet ground state, and then in the case of the m-xylylene diradical, which admits a triplet as its ground state.
Theory\[sec:Theory\]
====================
LCC Perturbation theory
-----------------------
The essence of quantum-mechanical perturbation theories is to split the total Hamiltonian $\hat{H}$, into the sum of a simpler Hamiltonian $\hat{H_{0}}$ and a perturbation operator $\hat{V}$[@helhaker_2014], $$\hat{H}=\hat{H_{0}}+\hat{V},$$ with $$\hat{V}=\hat{H}-\hat{H_{0}}.$$ The zero order order energy and wavefunction, $E_{0}$ and $\Ket{\Psi_{0}}$ are solution of the following eigenproblem, $$\hat{H_{0}}\Ket{\Psi_{0}}=E_{0}\Ket{\Psi_{0}},$$ which is assumed to be, and is generally , possible to solve exactly.
In multireference perturbation theories, the zeroth order wavefunction is expressed as a linear combination of Slater determinants $\Ket{D_{i}}$, $$\Ket{\Psi_{0}}=\sum_{i}c_{i}\Ket{D_{i}},$$ where the expansion set of determinants $\left\{ \Ket{D_{i}}\right\} $ is chosen to recover most of the correlation. Usually the set $\left\{ \Ket{D_{i}}\right\} $ is a CASCI space; it then contains all interactions between the active electrons.
However the choice of $\hat{H_{0}}$ is not unique since several operators will admit the CASCI wavefunction as an eigenvector. Among the most popular ones we can mention the Fock operator used in CASPT[@matos_casscfcci_1987; @andersson_second-order_1990], the Dyall[@dyall_choice_1995] Hamiltonian used in NEVPT[@angeli_introduction_2001; @angeli_n-electron_2002] and the excitation conserving Hamiltonian of Fink[@fink_two_2006; @fink_multi-reference_2009] used in the recently proposed MPS-LCC theory[@sharma_multireference_2015; @sharma_quasi-degenerate_2016]. Since MRLCC seems to outperform other methods of similar cost, this is the one that is used in this study.
If we split the orbitals into an active set where the orbital occupancy can be 0, 1 or 2, a core set where the orbitals are doubly occupied and a virtual set of empty orbitals, then the total Hamiltonian, $\hat{H}$, and Fink’s Hamiltonian, $\hat{H}_{0}$, can be expressed in second quantization as
$$\hat{H}=\sum_{ij}t_{ij}a_{i}^{\dagger}a_{j}+\sum_{ijkl}\Braket{ij|kl}a_{i}^{\dagger}a_{j}^{\dagger}a_{l}a_{k}\label{eq:H@ndQ}$$
$$\hat{H_{0}}=\sum_{{ij;\atop \Delta n=(0,0,0)}}t_{ij}a_{i}^{\dagger}a_{j}+\sum_{{ijkl;\atop \Delta n=(0,0,0)}}\Braket{ij|kl}a_{i}^{\dagger}a_{j}^{\dagger}a_{l}a_{k}\label{eq:HFink}$$
where $i,j,k,l$ refer to any orbitals and $\Delta n$ denotes the change in the total number of electrons between the three subsets of orbitals. The only operators belonging to $\hat{H_{0}}$ are the ones that do not transfer electrons between the three subsets.
The successive corrections ($\Ket{\Psi_{m}}$) to the zeroth order wavefunction can be computed by using the following equation, $$\left(\hat{H_{0}}-E_{0}\right)\Ket{\Psi_{m}}=-Q\left(\hat{V}\Ket{\Psi_{m-1}}-\sum_{k=1}^{m-1}E_{k}\Ket{\Psi_{m-k}}\right),\label{eq:bthorderwf}$$ where $Q$ is the projector onto the orthogonal space of the zeroth order wavefunction. These sets of equations can be solved sequentially to compute the $m^{th}$ order correction to the wavefunction, $\Ket{\Psi_{m}}$. Once $\Ket{\Psi_{m}}$ is known the $2m$ and $2m+1$ order energies can be computed thanks to Wigner’s rules: $$E_{2m}=\Braket{\Psi_{m-1}|V|\Psi_{m}}-\sum_{k=1}^{m}\sum_{j=1}^{m-1}E_{2m-k-j}\Braket{\Psi_{k}|\Psi_{j}},$$ $$E_{2m+1}=\Braket{\Psi_{m}|V|\Psi_{m}}-\sum_{k=1}^{m}\sum_{j=1}^{m}E_{2m+1-k-j}\Braket{\Psi_{k}|\Psi_{j}}.$$
Note that with the definition of the zeroth order Hamiltonian given in Eq.\[eq:HFink\], the first order energy $E_{1}=\Braket{\Psi_{0}|V|\Psi_{0}}$ vanishes.
In a recent paper[@sharma_multireference_2015], both the zeroth order wavefunction and the successive corrections were expressed as matrix product states (MPS) and computed deterministically by functional minimization. Here we propose an alternative approach where both the zeroth order and the perturbation wavefunctions are sampled stochastically in the Fock space; the second-order perturbation energy is also evaluated stochastically.
To sample the zeroth order wavefunction and energy we use the standard FCIQMC approach restricted to the CAS space. We recall here the main points of this approach.
Elements of FCIQMC
------------------
FCIQMC is a method which aims at stochastically minimizing the energy of a ground state wavefunction expressed as a CASCI (or Full-CI) expansion. The wavefunction can be expressed as a linear combination of determinants belonging to the CAS-CI space. $$\Ket{\Psi}=\sum_{i}c_{i}\Ket{D_{i}}.\label{eq:expansionWF}$$
Formally, the idea is to find the ground state of the Hamiltonian operator $\hat{H}$, by integrating the imaginary time Schrodinger equation (ITSE), $$\frac{\partial\Ket{\Psi}}{\partial\tau}=-\hat{H}\Ket{\Psi}.\label{eq:imtimeSE}$$ The discretization of Eq.\[eq:imtimeSE\] with a time step $\Delta\tau$ leads to the following evolution equation $$\Ket{\Psi(t+\Delta\tau)}=\left(\mathds{1}-\Delta\tau\left(\hat{H}-S\mathds{1}\right)\right)\Ket{\Psi(t)}.$$
$S$ is a shift parameter used to control the walker population and $\mathds{1}$ is the identity operator. Thus, starting from a guess wavefunction, for instance the Hartree-Fock determinant, the ground state can be reached by repeatedly applying the following projector: $$\hat{P}=\mathds{1}-\Delta\tau\left(\hat{H}-S\mathds{1}\right).\label{eq:projector}$$ To circumvent the prohibitive storage of the full CI vector, in FCIQMC this projection operation is realized stochastically such that the proper projection is recovered on average. To do so the coefficients of the expansion in Eq.\[eq:expansionWF\] are sampled by a population of signed walkers. Each of those carries a signed weight and is located on a Slater determinant. The total signed sum of the walkers residing on the same determinant can be interpreted as an instantaneous measure of its weight $c_{i}$. The walker population evolves through a set of stochastic processes that mimic the projector of Eq.\[eq:projector\]:
1. A cloning/death step, in which the walker population on each determinant is increased/reduced with a probability $\left(H_{ii}-S\right)\Delta\tau$. $S$ is a shift parameter that is used to control the total walker population.
2. A spawning step. For each walker on a determinant $\Ket{D_{i}}$ a singly or doubly connected determinant $\Ket{D_{j}}$ is generated with a probability $p_{gen}^{ij}$. A signed child is actually generated on the determinant $\Ket{D_{j}}$ with a spawning probability $$p_{spawn}^{ij}=\frac{\left|H_{ij}\right|\Delta\tau}{p_{gen}^{ij}}.\label{eq:pspawn}$$ The sign of the newly spawned walker is the same as the sign of the parent if $H_{ij}>0$, it is of opposite sign otherwise.
3. Each pair of negative and positive newly spawned walkers lying on the same determinant is removed during an annihilation step. This avoids the unbounded growth of noise due to the so-called sign problem. A minimal toy sketch of these three steps is given below.
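The following Python sketch of the three population-dynamics steps acts on an explicitly stored Hamiltonian matrix. This is our own illustration and not the NECI implementation: it uses integer walkers, uniform excitation generation and a simple shift-update rule, and it omits the initiator and semi-stochastic refinements discussed later; for a clean demonstration $H$ should be small and, ideally, have non-positive off-diagonal elements so that no sign problem arises.

```python
import numpy as np

rng = np.random.default_rng(1)

def fciqmc_toy(H, n_steps=5000, dt=0.01, target=5000, damp=0.05, update_every=10):
    """Toy integer-walker FCIQMC for the ground state of an explicitly stored matrix H."""
    n = H.shape[0]
    walkers = np.zeros(n, dtype=int)
    walkers[0] = 10                      # start on a reference determinant
    shift = H[0, 0]                      # population-control parameter S
    vary_shift = False
    for step in range(1, n_steps + 1):
        spawned = np.zeros(n, dtype=int)
        for i in np.nonzero(walkers)[0]:
            sign_i = int(np.sign(walkers[i]))
            for _ in range(abs(walkers[i])):
                # step 2: spawning with uniform excitation generation, p_gen = 1/(n-1)
                j = int(rng.integers(n - 1))
                j += j >= i
                p = abs(H[i, j]) * dt * (n - 1)
                n_child = int(p) + (rng.random() < p - int(p))
                spawned[j] -= n_child * sign_i * int(np.sign(H[i, j]))
            # step 1: cloning/death with probability (H_ii - S) * dt per walker
            p = (H[i, i] - shift) * dt
            n_die = int(abs(p) * abs(walkers[i]) + rng.random())
            walkers[i] -= int(np.sign(p)) * sign_i * n_die
        walkers += spawned               # step 3: annihilation via the signed sums
        n_tot = np.abs(walkers).sum()
        if n_tot == 0:
            walkers[0] = 10              # restart if the population dies out
            continue
        if n_tot > target:
            vary_shift = True            # let the population grow before controlling it
        if vary_shift and step % update_every == 0:
            shift -= damp / (update_every * dt) * np.log(n_tot / target)
    return walkers, shift                # shift fluctuates around the ground-state energy
```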
We propose here a modification of the FCIQMC algorithm in order to stochastically sample simultaneously the zeroth order wavefunction and the successive orders of the perturbation of Eq.\[eq:bthorderwf\]. Even if in principle any order of perturbation can be reached by this technique, in this article we will only consider the calculation of the first order correction to the wavefunction, which is given through Eq.\[eq:bthorderwf\] by $$\Ket{\Psi_{1}}=-\left(\hat{H}_{0}-E_{0}\right)^{-1}Q\hat{V}\Ket{\Psi_{0}}\label{eq:ps1equ}$$
In that case the problem is simpler since the Hilbert space on which the zeroth and first order wavefunctions are expanded is limited to the CASSD space. Moreover this space can be expressed as a direct sum of two subspaces ${\cal H}={\cal H}_{0}\oplus{\cal H}_{1}$, where ${\cal H}_{0}$ corresponds to the CAS space and ${\cal H}_{1}$ is its orthogonal complement, which contains all the determinants that are single or double excitations from the ones belonging to ${\cal H}_{0}$. Applying $\hat{V}$ to $\Ket{\Psi_{0}}$ only generates determinants in ${\cal H}_{1}$. There is thus no need to ensure the orthogonality of the two wavefunctions and the $Q$ projector operator in Eq.\[eq:ps1equ\] can be dropped. Of course higher order wavefunctions would contain determinants that are higher order excitations from the CASCI space. Orthogonalization with respect to $\Ket{\Psi_{0}}$ and to the lower order perturbation wavefunctions would also be required.
The zeroth order wavefunction follows an ITSE similar to Eq. \[eq:imtimeSE\], $$\frac{\partial\Ket{\Psi_{0}}}{\partial\tau}=-\hat{H}_{0}\Ket{\Psi_{0}},\label{eq:imtimeSEH0}$$ which can thus be solved by using the standard FCIQMC algorithm, with the exception that newly generated determinants will be restricted to the ones accessible by applying $\hat{H_{0}}$ to the current wavefunction. This effectively restricts the Hilbert space to the CAS space, and corresponds to freezing the core and virtual orbitals. The computation of $\Ket{\Psi_{1}}$ as defined in Eq.\[eq:ps1equ\] is less straightforward. Indeed in FCIQMC we do not have access to a proper description of the zeroth order wavefunction. It is thus not possible to compute $\Ket{\Psi_{1}}$ by using a projection approach. As an alternative we also decide to sample $\Ket{\Psi_{1}}$ stochastically. However, as opposed to $\Ket{\Psi_{0}}$, the perturbation wavefunctions are not solutions of an ITSE. We introduce the following hierarchy of differential equations (DE).
$$\frac{\partial\Ket{\Psi_{m}}}{\partial\tau}=-\left(\hat{H}_{0}-E_{0}\right)\Ket{\Psi_{m}}-QV\left(\Ket{\Psi_{m-1}}-\sum_{k=1}^{m-1}E_{k}\Ket{\Psi_{m-k}}\right).\label{eq:pismDestoch}$$
If the left hand side of Eq.\[eq:pismDestoch\] cancels out, we recover the expression of Eq.\[eq:bthorderwf\] for $\Ket{\Psi_{m}}$; in other words, the successive correction wavefunctions are stationary solutions of these DE. Note that propagating this equation actually reaches a steady state that is equal to $\Ket{\Psi_{m}}$, because the $\left(\hat{H}_{0}-E_{0}\right)$ matrix is positive definite. In particular the DE for the first order perturbation is $$\frac{\partial\Ket{\Psi_{1}}}{\partial\tau}=-\left(\hat{H}_{0}-E_{0}\right)\Ket{\Psi_{1}}-V\Ket{\Psi_{0}}.\label{eq:psi1diffeq}$$
This equation is similar to the one used for $\Ket{\Psi_{0}}$, with the addition of a source term given by the second term of the right hand side. In the next section we show how the simultaneous solving of Eq.\[eq:imtimeSEH0\] and Eq.\[eq:psi1diffeq\] has been implemented in the NECI program.
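The content of Eqs. \[eq:ps1equ\] and \[eq:psi1diffeq\] can be checked deterministically on a small dense model before introducing any stochastics. The sketch below (our own illustration; the block splitting and the matrix itself are arbitrary assumptions) verifies that the steady state of the damped propagation reproduces the directly solved first order wavefunction and the corresponding $E_{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a small symmetric H split into H0 (block-diagonal part) and V = H - H0.
n0, n1 = 4, 12                        # sizes of the "CAS" space H_0 and its complement H_1
n = n0 + n1
H = rng.normal(scale=0.2, size=(n, n))
H = 0.5 * (H + H.T) + np.diag(np.arange(n, dtype=float))
H0 = H.copy(); H0[:n0, n0:] = 0.0; H0[n0:, :n0] = 0.0   # no transfer between the spaces
V = H - H0

# Zeroth order: ground state of H0 within the H_0 block.
e0, c0 = np.linalg.eigh(H0[:n0, :n0])
E0 = e0[0]
psi0 = np.zeros(n); psi0[:n0] = c0[:, 0]

# First order wavefunction, Eq. (ps1equ): psi1 = -(H0 - E0)^{-1} V psi0, lives in H_1.
A = H0[n0:, n0:] - E0 * np.eye(n1)
psi1 = np.zeros(n)
psi1[n0:] = -np.linalg.solve(A, (V @ psi0)[n0:])
E2 = psi0 @ V @ psi1

# Same result from the damped propagation, Eq. (psi1diffeq).
dtau, psi1_prop = 0.01, np.zeros(n)
for _ in range(20000):
    psi1_prop[n0:] -= dtau * (A @ psi1_prop[n0:] + (V @ psi0)[n0:])
print(E2, psi0 @ V @ psi1_prop)       # the two estimates agree at convergence
```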
Implementation\[sec:Implementation\]
====================================
To simultaneously sample the zeroth and first order wavefunctions, we use the multi-replica technique[@overy_unbiased_2014]. A first replica, labeled 0, samples the zeroth order wavefunction by propagating the ITSE of Eq.\[eq:imtimeSEH0\], while another one, labeled 1, samples the first order perturbation. We start with a small number of walkers on a reference determinant, typically the Hartree-Fock one, on replica 0 and no walkers on replica 1.
In replica 0 new walkers are spawned by applying one- and two-electron operators that belong to $\hat{H}_{0}$; thus only determinants that belong to the CAS space are generated. Because the sampling of $\Ket{\Psi_{0}}$ is equivalent to a standard FCIQMC sampling in a CASCI space, we can use all the optimizations and approximations that have been introduced in previous publications such as the initiator approximation[@cleland_communications:_2010; @cleland_taming_2012] or the semi-stochastic approximation[@blunt_semi-stochastic_2015].
Once the population on replica 0 is equilibrated we start to sample $\Ket{\Psi_{1}}$. This equilibration of the zeroth order wavefunction can be monitored by looking at the variational energy for this replica and checking that it corresponds to the CASCI energy. At this point we attempt multiple spawnings from each walker of replica 0. In addition to the excitations that belong to $\hat{H}_{0}$ we also generate excitations belonging to $\hat{V}$. This generates determinants that belong to the external space, ${\cal H}_{1}$. The spawning probability of those $\Ket{\Psi_{0}}$ to $\Ket{\Psi_{1}}$ walkers follows the expression of Eq.\[eq:pspawn\]. However those walkers are spawned on replica 1 instead of replica 0.
Replica 1, which was initially empty, starts getting populated; the walkers coming from replica 0 correspond to the source term in Eq.\[eq:psi1diffeq\].
We emphasize that in replica 0 the population dynamics is not modified by this extra excitation step, and that by construction the first and zeroth order wavefunctions are orthogonal to each other; this obviates the use of orthogonalization techniques that will be required for the higher order perturbations. The walkers on replica 1 are also subjected to a cloning/dying step and a spawning step at each iteration. For the dying step the applied operator is $\left(\hat{H}_{0}-E_{0}\right)$, where $E_{0}$ is the projected energy in replica 0. For the spawning step, the applied excitation operator belongs to $\hat{H}_{0}$. The simultaneous sampling of the two wavefunctions is schematized in Fig.\[fig1:scheme\].
![Schematized description of an iteration update of the two replicas. On the left the 0 order wave function, sampled in replica 0 is updated by applying the $\left(\mathds{1}-\Delta\tau\left(\hat{H_{0}}-S\mathds{1}\right)\right)$ operator. On the right the first order perturbation, in replica 1, is updated by applying the $\left(\mathds{1}-\Delta\tau\left(\hat{H_{0}}-E_{0}\mathds{1}\right)\right)$ operator to $\protect\Ket{\Psi_{1}}$ and adding walkers spawned by applying $\hat{V}$ onto $\protect\Ket{\Psi_{0}}$. \[fig1:scheme\]](scheme-crop){width="60.00000%"}
Note that with this implementation the timesteps used for the $0\rightarrow0$, $0\rightarrow1$ and $1\rightarrow1$ spawning steps, and for the cloning and dying steps on replicas 0 and 1, are identical. It is the one that has been optimized for replica 0. In practice this timestep is chosen such that the spawning probability in replica 0 is not much bigger than 1 for all $\Ket{D_{i}}$. As can be seen from Eq.\[eq:pspawn\] the spawning probability is inversely proportional to the generation probability $p_{gen}$ of attempting a spawning on $\Ket{D_{j}}$ from $\Ket{D_{i}}$. Because in interesting systems the size of the external space is expected to be much bigger than that of the active space, the probability of generating a given $\Ket{D_{i}}$, $\Ket{D_{j}}$ pair is much smaller for $0\rightarrow1$ and $1\rightarrow1$ excitations than for $0\rightarrow0$ excitations. As a consequence the spawning probabilities of those excitations will be much bigger than the $0\rightarrow0$ spawning probability for the same timestep, and a single walker could give birth to multiple walkers; such an event is called blooming. In other words, the time step should be smaller for the excitations involving the response functions. This problem is dealt with as follows: the overall time step of the simulation is set by the $0\rightarrow0$ dynamics. The first time blooming occurs during the $0\rightarrow1$ spawning step we keep track of the biggest bloom and compute the first integer bigger than this, $n_{01}$. During further steps, the time step value is divided by this $n_{01}$ factor for $0\rightarrow1$ spawning, which ensures a spawning probability not much bigger than 1. To keep the overall dynamics at the same timestep we have to make $n_{01}$ spawning attempts onto replica 1 for each walker on replica 0. We use a similar procedure for the $1\rightarrow1$ spawning, defining an $n_{11}$ timestep scaling factor. These $n_{01}$ and $n_{11}$ factors are updated along the run, which prevents any explosion of the replica 1 population.
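Schematically, this bookkeeping can be summarized as in the following sketch (our own illustration; the function and variable names are not those of the NECI code):

```python
import math

def spawn_prob(h_ij, p_gen, dt, n_scale):
    """Spawning probability of Eq. (pspawn) with the time step divided by n_scale."""
    return abs(h_ij) * (dt / n_scale) / p_gen

def updated_scale(n_scale, p_spawn):
    """If a bloom occurred (p_spawn > 1), enlarge the scaling factor n_01 or n_11."""
    return max(n_scale, math.ceil(p_spawn * n_scale)) if p_spawn > 1.0 else n_scale

# For each parent walker, n_scale attempts are then made with the reduced time step,
# so that on average the full time step dt is still sampled.
```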
With this implementation there is no way to control the total number of walkers on $\Ket{\Psi_{1}}$ because there is no analogue of the shift control parameter that is used for replica 0. After a few steps the total number of walkers in replica 1 reaches a plateau that is dependent on the system. As a first attempt to control the value of that plateau, the initiator approximation has been implemented for walkers that belong to replica 1. We allow the initiator threshold to be different in replica 0 and replica 1. As an illustration we studied the carbon dimer molecule with the cc-pVQZ basis set[@jr_gaussian_1989], with ${\cal H}_{0}$ corresponding to a CAS(8,8), which is the valence space of the molecule. We use an initiator threshold of 3 and a targeted number of walkers of 50k for replica 0. In Fig.\[fig1:swalknum\] we present the evolution of the number of walkers on replica 0 and on replica 1; the different calculations have been run with an initiator criterion of 3, 1 and no initiator approximation on replica 1.
![Number of walkers in replica 0 (black), and in replica 1 with an initiator approximation of 3 (green), 1 (red) and no initiator approximation (blue) for the $C_{2}$ molecule in the cc-pVQZ basis set. \[fig1:swalknum\]](fig1){width="70.00000%"}
Looking at Fig.\[fig1:swalknum\] it can be seen that with no initiator approximation on $\Ket{\Psi_{1}}$, the number of walkers grows to more than 60 times the number of walkers in the reference. When using the most moderate initiator criterion of 1, this number is already reduced by more than a factor of 2, and it can be further decreased by increasing the initiator threshold. However going from a threshold of 1 to 3 only reduces the total number of walkers by roughly 30%. This is still not satisfying since there is no way to know *a priori* what the number of walkers on $\Ket{\Psi_{1}}$ is going to be. The cost of the calculation cannot be known before running it; for this reason we describe hereafter a way to control the population on replica 1 independently of the initiator threshold.
In Eq.\[eq:ps1equ\] it can be seen that $\Ket{\Psi_{1}}$ scales linearly with the perturbation $\hat{V}$; we thus scale down the perturbation by a real prefactor $\alpha$, which is typically small. This is done in practice by multiplying the matrix elements $H_{ij}$ by this factor when the spawning probability from a determinant $\Ket{D_{i}}$ in ${\cal H}_{0}$ to a determinant $\Ket{D_{j}}$ in ${\cal H}_{1}$ is computed. This allows us to tune more easily the total number of walkers on replica 1; however, the relation between the value of the plateau and $\alpha$ remains unknown. The number of walkers does not strictly scale linearly with the value of $\alpha$ because of the initiator approximation. To circumvent this problem we implemented a dynamic updating of $\alpha$ in order to reach a target number of walkers on $\Ket{\Psi_{1}}$. The simulation is started with a small $\alpha$, typically $10^{-2}$; after a few thousand steps of equilibration, if the number of walkers on replica 1 is not within a 3% threshold of the target, we update $\alpha$ to a new value $\alpha^{\prime}$, $$\alpha^{\prime}=\alpha\left(\gamma+(1-\gamma)\frac{N_{t}}{N}\right),$$ where $N_{t}$ is the target number of walkers, $N$ is the current number of walkers, and $\gamma$ is a damping parameter to prevent too drastic a change of $\alpha$. We typically use $\gamma=0.5$.
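The update rule can be written compactly as the following sketch (an illustration with hypothetical function and variable names):

```python
def update_alpha(alpha, n_current, n_target, gamma=0.5, tol=0.03):
    """Damped update of the perturbation scaling factor alpha (see the text above)."""
    if abs(n_current - n_target) <= tol * n_target:
        return alpha                       # population already within 3% of the target
    return alpha * (gamma + (1.0 - gamma) * n_target / n_current)
```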
Having implemented this way of controlling the population in replica 1, we will now study the influence of the number of walkers used to sample the response function on the second order energy. The second order correction to the energy is expressed as $$E_{2}=\frac{\Braket{\Psi_{0}|\hat{V}|\Psi_{1}}}{\Braket{\Psi_{0}|\Psi_{0}}}=\frac{\sum_{i\in{\cal H}_{0}}\sum_{j\in{\cal H}_{1}}c_{i}c_{j}\Braket{D_{i}|\hat{V}|D_{j}}}{\sum_{i\in{\cal H}_{0}}c_{i}^{2}}.$$ In the FCIQMC framework this can be rewritten as a function of the walker population on each determinant involved. In the LCC perturbation theory $\Braket{D_{i}|\hat{V}|D_{j}}=\Braket{D_{i}|\hat{H}|D_{j}}=H_{ij}$, so $$E_{2}\approx\left\langle \tilde{E}_{2}\right\rangle =\frac{\Braket{\Psi_{0}|\hat{V}|\Psi_{1}}}{\Braket{\Psi_{0}|\Psi_{0}}}=\frac{\left\langle \sum_{i\in{\cal H}_{0}}\sum_{j\in{\cal H}_{1}}N_{i}N_{j}H_{ij}\right\rangle }{\left\langle \sum_{i\in{\cal H}_{0}}N_{i}^{2}\right\rangle }=\frac{\left\langle \sum_{j\in{\cal H}_{1}}N_{j}\left[\sum_{i\in{\cal H}_{0}}N_{i}H_{ij}\right]\right\rangle }{\left\langle \sum_{i\in{\cal H}_{0}}N_{i}^{2}\right\rangle },\label{eq:E2}$$
The numerator of Eq.\[eq:E2\] is accumulated through the spawning process; this strategy has already been used for the calculation of reduced density matrices[@overy_unbiased_2014]. When a successful spawning from a determinant $\Ket{D_{i}}$ in replica 0 to a determinant $\Ket{D_{j}}$ in replica 1 occurs, the product of the matrix element and of the number of walkers on $\Ket{D_{i}}$, $N_{i}H_{ij}$, is communicated along with the spawned walker to the processor holding the child determinant $\Ket{D_{j}}$. Once all the spawning attempts have been done, the processor that keeps track of the $\Ket{D_{j}}$ walker population will also contain all the $N_{i}H_{ij}$ contributions from all the determinants that spawned onto $\Ket{D_{j}}$ this iteration. This strategy does not cause any noticeable increase of the computational cost. As the contribution of a $\Ket{D_{i}},\Ket{D_{j}}$ pair of determinants to the $E_{2}$ energy is only taken into account when a successful spawning step actually happens, this contribution should be rescaled by the normalized probability of spawning at least one child (of any weight) onto $\Ket{D_{j}}$ from $\Ket{D_{i}}$ during the current iteration. More details on how to compute this probability can be found in [@overy_unbiased_2014]. To avoid double counting of the $c_{i}c_{j}$ contributions, it is necessary to carefully check for the rare but still possible case of multiple spawnings from the same determinant $\Ket{D_{i}}$ to the same determinant $\Ket{D_{j}}$. Thus the $N_{i}H_{ij}$ contribution is only communicated to the processor holding $\Ket{D_{j}}$ on the first occurrence of such an event.
Finally, it is necessary to rescale the sampled $\Ket{\Psi_{1}}$ function by the $\alpha$ factor when computing the $E_{2}$ energy.
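A simplified, single-process sketch of this accumulation is shown below (our own illustration; it ignores the parallel communication, the rescaling by the probability of spawning at least one child, and any initiator bookkeeping):

```python
from collections import defaultdict

def accumulate_e2_numerator(spawn_events, walkers_0):
    """Accumulate sum_i N_i H_ij for each child determinant j over one iteration.

    spawn_events: iterable of successful 0 -> 1 spawns as (i, j, h_ij) tuples.
    walkers_0:    dict {determinant i in H_0: signed walker count N_i}.
    """
    contrib = defaultdict(float)        # keyed by the child determinant j
    seen = set()
    for i, j, h_ij in spawn_events:
        if (i, j) in seen:              # avoid double counting repeated i -> j spawns
            continue
        seen.add((i, j))
        contrib[j] += walkers_0[i] * h_ij
    # These contributions are later contracted with N_j on replica 1 and divided by
    # alpha * sum_i N_i**2 to undo the scaling of V, giving the estimator of Eq. (E2).
    return contrib
```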
With this efficient way of computing the second order energy, we can go back to the example of Fig. \[fig1:swalknum\] and examine how the estimation of $E_{2}$ converges with the initiator approximation.
In Fig.\[fig1:compinit-alpha\] we plot the second order energy for the carbon dimer with the cc-pVQZ basis computed within the QMC-LCC framework. The black line corresponds to the simulations presented in Fig.\[fig1:swalknum\], i.e. 50k walkers and an initiator threshold of 3 in replica 0, and an initiator criterion of 1, 3 and no initiator approximation for the first order response function. It can be seen that without approximation, the computed energy is in agreement with the one computed using MPS-LCC, shown for reference. When the initiator approximation is used, we obtain a second order energy that is slightly higher than the correct one. However the estimation remains quite good since the error in the energy is lower than 1 $mE_{H}$ with the two criteria used here.
![Comparison of the $E_{2}$ energy with respect to the number of walkers. The black curve is obtained by setting the initiator threshold to different values, respectively 3, 1 and no initiator threshold. The blue curve is obtained by setting the initiator threshold to 1 and controlling the number of walkers on $\protect\Ket{\Psi_{1}}$ by using the $\alpha$ control parameter. For comparison purposes, the value obtained using DMRG is in red.\[fig1:compinit-alpha\]](fig3){width="80.00000%"}
In Fig.\[fig1:compinit-alpha\], the blue curve is obtained with an initiator threshold of 1 and different values of $\alpha$ to constrain the number of walkers. The most interesting feature is that, for the same number of walkers on $\Ket{\Psi_{1}}$, it seems better to set a smaller initiator threshold and to use the $\alpha$ trick than to increase the initiator threshold. For instance, for roughly 1M walkers the value obtained with an initiator threshold of 1 and an $\alpha$ value of $\approx0.73$ is 0.16 $mE_{H}$ lower than the one obtained by using an initiator threshold of 3.
In order to improve further the efficiency of our implementation, we notice that the ${\cal H}_{1}$ space can be split as the orthogonal sum of 8 smaller subspaces corresponding to the 8 subclasses of excitation defined by Malrieu and collaborators[@angeli_introduction_2001; @angeli_n-electron_2002]. $${\cal H}_{1}={\cal H}_{\left(-2,2,0\right)}\bigoplus{\cal H}_{\left(0,-2,2\right)}\bigoplus{\cal H}_{\left(-2,1,1\right)}\bigoplus{\cal H}_{\left(-2,0,2\right)}\bigoplus{\cal H}_{\left(-1,1,0\right)}\bigoplus{\cal H}_{\left(-1,0,1\right)}\bigoplus{\cal H}_{\left(-1,-1,2\right)}\bigoplus{\cal H}_{\left(0,-1,1\right)},\label{eq:H1=00003DSum_H}$$
where the subscript describes the change in the number of electrons in the core, active and virtual spaces with respect to the ${\cal H}_{0}$ determinants. To each of these subspaces corresponds a subclass of excitations that we can denote $\hat{V}_{(a,b,c)}$, connecting $\Ket{\Psi_{0}}$ to the ${\cal H}_{(a,b,c)}$ subspace.
When $\hat{H}_{0}$ is applied to a determinant belonging to one of these subspaces, the generated determinants also belong to the same subspace. This means that during the dynamics there are no interactions between walkers belonging to two different subclasses. Instead of running a single calculation applying the full $\hat{V}$ to $\Ket{\Psi_{0}}$, it is therefore possible to run eight independent, simpler calculations, one for each of the eight classes of excitations $\hat{V}_{(a,b,c)}$, and finally sum the eight second-order energies obtained.
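As an illustration of how a spawned determinant can be assigned to one of these eight classes, the short sketch below compares its occupation of the core, active, and virtual orbital sets with that of a reference CAS determinant. This is a hypothetical helper written for clarity, not code from the actual implementation.

```python
# Illustrative sketch (not production code): classify a determinant into one of the
# eight (a, b, c) excitation classes by the change in electron count in the core,
# active and virtual orbital sets relative to a CAS reference determinant.
def excitation_class(det_orbitals, ref_orbitals, core, active, virtual):
    """All arguments are sets of spin-orbital indices; det_orbitals and ref_orbitals
    are the occupied orbitals of the determinant and of the reference."""
    def counts(occ):
        return (len(occ & core), len(occ & active), len(occ & virtual))
    n_det, n_ref = counts(det_orbitals), counts(ref_orbitals)
    return tuple(d - r for d, r in zip(n_det, n_ref))   # e.g. (-2, 0, 2)
```

Grouping walkers by this label is what allows the eight independent simulations whose second-order energies are summed at the end.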
Results\[sec:Results\]
======================
The LCC-QMC method proposed here has been applied to several organic molecules. First we look at the behavior of the method with respect to the size of the inactive space by computing the $E_{2}$ energy for the $C_{2}$ molecule with different basis sets. In each case the active space consists of the valence orbitals of the molecule, which corresponds to an (8,8) CAS. In addition to the CAS, there are 2 core orbitals and 16, 50, and 100 virtual orbitals, respectively, in the cc-pVDZ, cc-pVTZ, and cc-pVQZ basis sets used here. The CASSCF orbitals were generated using the Molpro quantum chemistry package[@werner_molpro:_2012].
We used 50k walkers and an initiator threshold of 3 in replica 0; this threshold is set to 1 in replica 1. The response wavefunction is sampled by applying the full $\hat{V}$ operator to the $\Ket{\Psi_{0}}$ wavefunction.
To investigate how the cost of the response scales with the size of the inactive space, we increase the number of walkers in replica 1 until the computed value of $E_{2}$ agrees within $1\ mE_{H}$ with the same quantity computed deterministically with MPS-LCC using the Block code[@sharma_spin-adapted_2012]. The required numbers of walkers are shown in Fig. \[fig1:enerc2basus\]. From this curve it can be seen that the number of walkers on $\Ket{\Psi_{1}}$ necessary to reach $mE_{H}$ precision scales roughly as the square of the number of inactive orbitals.
![Number of walkers necessary on replica 1 to reach a value of $E_{2}$ within $1\ mE_{H}$ of the energy computed by MPS-LCC, used as reference. In all cases the CAS space is (8,8); the three basis sets used are cc-pVDZ with 20 inactive orbitals, cc-pVTZ with 52 inactive orbitals, and cc-pVQZ with 102 inactive orbitals. The dotted line is a guide for the eye.\[fig1:enerc2basus\]](fig4){width="100.00000%"}
To test the applicability of the method, we now turn our attention to the computation of the benzene singlet-triplet gap.
We used the same geometry as Roos et al.[@matos_casscfcci_1987; @roos_towards_1992], i.e., C-C and C-H bond lengths of 1.395 $\textrm{Å}$ and 1.085 $\textrm{Å}$ and hexagonal symmetry, for both the singlet ground state and the triplet excited state. The ground state of the benzene molecule is the singlet $^{1}A_{1g}$, and we target the lowest excited triplet of symmetry $^{3}E_{1u}$. We used the Dunning cc-pVDZ basis set; the active space contains 6 electrons and comprises the six valence $\pi$ orbitals extended with the six second-shell $\pi$ orbitals. We used CASSCF orbitals generated with Molpro.
For the $\Ket{\Psi_{0}}$ CASCI wavefunction we used an initiator threshold of 3 and 100k walkers. In order to make the calculation more tractable, we ran 8 subcalculations, one for each of the orthogonal classes of excitation $\hat{V}_{(a,b,c)}$. The initiator threshold on $\Ket{\Psi_{1}}$ is set to 1.5. We started with 100k walkers for each first-order function, but to test the applicability of the method, this number was increased until the second-order energy agreed within 1 $mE_{H}$ with the one predicted deterministically by MPS-LCC. The second-order energies obtained for the different classes, and the number of walkers that was necessary on $\Ket{\Psi_{1}}$ to obtain them, are given in Table \[tab:-energies-benz\]. The $\left(-1,1,0\right)$ class does not contribute, since it has no overlap with the $\Ket{\Psi_{0}}$ wavefunction for spatial orbital symmetry reasons.
Looking at this table, we see that the classes of excitations are not equivalent in terms of the number of walkers necessary to converge. The classes involving virtual orbitals require more walkers to reduce the initiator error. The $\left(-2,0,2\right)$ class is particularly difficult; this is understandable, since it is the class containing the largest number of determinants. Concerning the singlet-triplet gap, the CASSCF value equals 4.97 eV, while it is reduced to 4.88 eV when the LCC correction is used; the experimental value determined by electron-impact spectroscopy is 4.76 eV[@doering_lowenergy_1969]. We also computed the singlet-triplet gap with CASPT2 (where internal contraction is used only for the subspaces requiring at most the knowledge of the second-order reduced density matrix, while the other subspaces are left uncontracted) and with strongly contracted NEVPT2 using Molpro, obtaining gaps of 4.65 eV and 5.03 eV, respectively.
  Table \[tab:-energies-benz\]. Second-order energies $E_{2}$ (in Hartree) for each excitation class and the number of walkers used on $\Ket{\Psi_{1}}$, for the singlet and triplet states of benzene.

    Class                     Singlet $E_{2}$   Walkers number     Triplet $E_{2}$   Walkers number
    $\left(-2,2,0\right)$     -0.02025          500 000            -0.01167          500 000
    $\left(0,-2,2\right)$     -0.01716          500 000            -0.01681          500 000
    $\left(-2,1,1\right)$     -0.03330          1 000 000          -0.03558          1 000 000
    $\left(-2,0,2\right)$     -0.46096          10 000 000         -0.46706          10 000 000
    $\left(-1,1,0\right)$      0.00000          N/A                 0.00000          N/A
    $\left(-1,0,1\right)$     -0.21671          1 000 000          -0.20215          1 000 000
    $\left(-1,-1,2\right)$    -0.08402          1 000 000          -0.09871          5 000 000
    $\left(0,-1,1\right)$     -0.00669          500 000            -0.00680          500 000
    CASCI                     -230.8070                            -230.6244
    CASCI+LCC                 -231.6461                            -231.4667
We then turned our attention to the computation of the triplet-singlet gap in the m-xylylene diradical. The study of organic radical species and the prediction of their spin properties are of interest because of potential applications in the development of molecule-based magnetic materials[@miller_organic_1994].
The key parameter for such applications is the triplet-singlet gap; thus, considerable effort has been devoted to tuning this parameter. Alongside the experiments, it is useful to predict the value of this singlet-triplet gap in order to guide the synthesis of promising new molecules.
Among the different organic diradicals, m-xylylene has often been used as a benchmark system, since it is rather stable and quite well characterized experimentally. The molecule belongs to the $C_{2v}$ point group; the ground state has been shown to be a triplet by EPR[@wright_electron_1983], with electronic state $^{3}B_{2}$. It has been shown by NIPES that the lowest-lying excited state is $^{1}A_{1}$[@wenthold_photoelectron_1997]. This system has been studied quite extensively numerically. For instance, Mañeru *et al*[@reta_maneru_tripletsinglet_2014] carried out an extensive study with DFT and several wavefunction methods, exploring the effect of the geometries and of the basis set on the predicted gap. They found that the choice of basis set has little influence on the predicted gap; however, as expected, the value of the gap depends strongly on the choice of functional. On the other hand, all the wavefunction methods they used tend to overestimate the singlet-triplet gap. Even with more sophisticated basis sets, their predicted value of the singlet-triplet gap remains quite close to the value of 4092 $cm^{-1}$ previously predicted by Hrovat et al. using a CASPT2 calculation with a CAS(8,8) and a 6-31g[\*]{} basis set[@hrovat_effects_1998].
As the authors state, “the difficulty of the wave function-based methods in describing the triplet-singlet gap arises quite unequivocally from dynamical correlation”. They proposed extending the CAS space as a way to recover the dynamical correlation that is improperly accounted for; however, as they correctly stated, this makes the calculation computationally challenging.
As an alternative, we decided to examine how a better treatment of dynamical correlation, using the LCC multireference perturbation theory with the same CAS space, would improve the prediction of the singlet-triplet gap. We used the equilibrium geometries optimized at the CASSCF(8,8)/6-311++g[\*]{}[\*]{} level given in the supporting information of Mañeru et al., for both the singlet and the triplet. We used the same basis set to run a CASSCF(8,8) calculation to generate the two- and four-index integral files with the Molpro quantum chemistry package. We computed the $E_{2}$ energies for the 8 classes with internally contracted MPS-LCC to serve as a reference. We started with a procedure similar to the one used for the benzene molecule, setting the initiator criterion to 1.5 for the first-order response functions and progressively increasing the number of walkers sampling them. However, except for the two classes $\left(-2,2,0\right)$ and $\left(0,-1,1\right)$, this procedure failed, since the values obtained were much higher than those predicted by MPS-LCC. We therefore decided to run calculations with an initiator criterion of 1 and no control of the population, i.e., an $\alpha$ factor of 1. The numbers obtained and the number of walkers reached on replica 1 are given in Table \[tab:-energies-xyl\]. With this procedure the values obtained are in agreement with the MPS-LCC results; they are usually slightly more negative, since this is a fully uncontracted technique. However, this approach is not really practical, since the number of walkers reached on $\Ket{\Psi_{1}}$ is generally huge, making the calculation extremely costly. Moreover, in the case of the $\left(-2,0,2\right)$ and $\left(-1,-1,2\right)$ classes it was not possible to perform the calculation.
  Table \[tab:-energies-xyl\]. Second-order energies $E_{2}$ (in Hartree) for each excitation class and the number of walkers reached on $\Ket{\Psi_{1}}$, for the singlet and triplet states of m-xylylene. Starred values are taken from the internally contracted MPS-LCC reference for the classes that could not be converged with QMC-LCC.

    Class                     Singlet $E_{2}$   Walkers number     Triplet $E_{2}$   Walkers number
    $\left(-2,2,0\right)$     -0.00625          500 000            -0.00630          500 000
    $\left(0,-2,2\right)$     -0.0324           36 000 000         -0.0319           650 000 000
    $\left(-2,1,1\right)$     -0.0438           50 000 000         -0.0437           40 000 000
    $\left(-2,0,2\right)$     -0.46096*                            -0.46706*
    $\left(-1,1,0\right)$      0.00000          N/A                 0.00000          N/A
    $\left(-1,0,1\right)$     -0.182            13 000 000         -0.180            88 000 000
    $\left(-1,-1,2\right)$    -0.08402*                            -0.09871*
    $\left(0,-1,1\right)$     -0.0168           500 000            -0.0152           500 000
    CASCI                     -230.8070                            -230.6244
    CASCI+LCC                 -231.6461                            -231.4667
If we retain the values predicted by MPS-LCC for the classes where QMC-LCC could not be converged, we obtain a value of 3440 $cm^{-1}$ for the triplet-singlet gap of m-xylylene, which agrees well with the experimental value of $3358\pm70\ cm^{-1}$. This shows that using the MRLCC perturbation theory greatly improves the description of the dynamical correlation for this system.
Thus, as Mañeru *et al* stated, the problem in the description of m-xylylene was the proper treatment of dynamical correlation. However, this problem is still out of reach for the fully uncontracted LCC-QMC; this clearly indicates that, to make LCC-QMC practical, it will be necessary to implement some internally contracted treatment of the perturbation classes. The fact that the easiest classes in the LCC-QMC approach are precisely the ones that would require the 3- and 4-RDMs to be computed in an internally contracted scheme is nevertheless encouraging.
Conclusions\[sec:Conclusions\]
==============================
In this article we described a way to perform CASCI+MRLCC calculations that is completely stochastic. The zeroth-order wavefunction is computed with the FCIQMC approach, *i.e.*, a population of signed walkers evolving according to a series of stochastic rules samples the CASCI wavefunction and solves the zeroth-order Hamiltonian eigenproblem. Simultaneously, a second population of walkers, subject to a different set of stochastic processes, samples the first-order wavefunction by finding the steady state of a differential equation. The first-order wavefunction being a function of the zeroth-order one, we include a source term in the population dynamics sampling $\Ket{\Psi_{1}}$ that depends on the population in $\Ket{\Psi_{0}}$; this requires spawning from one replica to the other, and this is the main originality of the proposed algorithm. We presented different strategies to make the use of this technique practicable; some are directly adapted from strategies proposed for FCIQMC, such as the initiator approximation and the semi-stochastic approach. We also proposed scaling the source term as a way to control the population in the replica sampling the first-order wavefunction. Because the sizes of the CASCI space and of the perturbation space are quite different, it was also necessary to use different timesteps for the 0-to-0, 0-to-1, and 1-to-1 spawning steps. We illustrated the possibilities of the proposed method on several applications. First, we showed that this approach recovers the results computed by the deterministic MPS-LCC method for the $C_{2}$ molecule. This calculation showed that the cost of the calculation, linked to the number of walkers, scales quadratically with the number of inactive orbitals.
The computation of the singlet-triplet gap of the challenging m-xylylene diradical with the proposed method confirmed the better performance of the MRLCC perturbation theory with respect to the CASPT2 technique. It was possible to obtain a value of this gap in good agreement with the experimental results, while the CASPT2 approach using the same active space overestimates this quantity by 20%. However, it was also demonstrated that the proposed algorithm is still not suited to studying such complicated systems, since the excitation class that promotes two core electrons into the virtual space could not be computed for the m-xylylene molecule. This problem was circumvented by computing those classes with the contracted MPS-LCC technique. This points out the necessity of developing a similar contracted approach within our fully stochastic procedure, which is currently under investigation.
The LCC-QMC approach proposed here makes it possible to treat both large active spaces and large inactive spaces; this, together with the recently demonstrated possibility of using FCIQMC-CASSCF[@li_manni_combining_2016], will allow the study of systems that are currently out of reach.
Acknowledgments {#acknowledgments .unnumbered}
===============
The calculations made use of the facilities of the Max Planck Society's Rechenzentrum Garching. We are grateful to Prof. Illas and Dr. Daniel Reta for providing us with their equilibrium geometries for the m-xylylene molecule.
---
abstract: 'We propose possible high-temperature superconductivity (SC) with singlet $s^\pm$-wave pairing symmetry in the single-orbital Hubbard model on the square-octagon lattice with only nearest-neighbor hopping terms. Three different approaches are employed to treat the interacting model at different coupling strengths, and they yield consistent results for the $s^\pm$ pairing symmetry. We propose octagraphene, i.e., a monolayer of carbon atoms arranged into this lattice, as a possible material realization of this model. Our variational Monte Carlo study for the material with a realistic coupling strength yields a pairing strength comparable with the cuprates, implying a similar superconducting critical temperature between the two families. This study also applies to other materials with similar lattice structure.'
author:
- 'Yao-Tai Kang'
- Chen Lu
- Fan Yang
- 'Dao-Xin Yao'
bibliography:
- 'RPA.bib'
title: 'Single-orbital realization of high-temperature $s^\pm$ superconductivity in the square-octagon lattice '
---
[^1]
[^2]
Introduction
============
The search for superconductivity (SC) with a high critical temperature $T_c$ has been the dream of the condensed-matter community for decades. It is generally believed that the right route to seek high-$T_c$ SC (HTCS) is to acquire strong spin fluctuations via proximity to antiferromagnetically ordered phases, with the cuprates and the iron-based superconductors as two well-known examples [@DJScalapino12]. Along this route, a new research area was generated recently: graphene-based SC. Among the early attempts in this area, the most famous idea might be to generate d+id HTCS [@Chubukov; @Qianghua; @Thomale] in monolayer graphene in proximity to the spin-density-wave (SDW) ordered state [@TaoLi; @Qianghua] at quarter doping. However, such a high doping concentration is hardly accessible experimentally. The newly discovered SC in the magic-angle-twisted bilayer graphene [@Cao1] in close proximity to the “correlated insulator" phase [@Cao2] opened a new era in this area. It has been proposed that the “correlated insulator" in this material is an SDW insulator [@Yang; @Xu], and that the SC is driven by SDW spin fluctuations [@Yang; @Xu; @Ashvin; @Fu]. However, due to the greatly reduced Fermi energy ($\approx10$ meV) in this material, the $T_c\approx 1.7$ K might not be far from its upper limit. Here we propose another graphene-based material, i.e., octagraphene [@Sugang], which has a square-octagon lattice structure with each site accommodating a single $2p_z$ orbital. This system has a large Fermi energy, and we predict that slightly doping this material will induce HTCS, driven by SDW spin fluctuations.
Octagraphene is a two-dimensional (2D) material formed by a monolayer of carbon atoms arranged into a square-octagon lattice, as shown in Fig. \[fig:lattice\]. This lattice is $C_{4v}$-symmetric and each unit cell contains four sites forming a square, enclosed by the dotted lines shown in Fig. \[fig:lattice\]. First-principles calculations indicate that such a planar structure is kinetically stable at low temperature [@Sugang; @Pod] and that its energy is a local minimum [@Sugang], which suggests that the material can potentially be synthesized in laboratories. Actually, this lattice structure has attracted a lot of research interest recently, because it is not only hosted by quite a few real materials [@CaVO; @KFeSe1; @KFeSe2; @YZhang] but also exhibits various intriguing phases that have been revealed by theoretical calculations [@Scalettar; @Troyer; @White; @Sachdev; @Zheng_Weihong; @Bose; @Manuel; @Farnell; @Kwai; @Fiete; @Yamashita; @Yanagi; @Yamada14; @Wu; @Iglovikov; @Long_Zhang; @Gong; @Bao]. Here we notice another remarkable property of this 2D lattice: its band structure can have perfect Fermi-surface (FS) nesting in a wide parameter regime at half filling, which easily leads to antiferromagnetic SDW order. When the system is slightly doped, the SDW order will be suppressed and the remnant SDW fluctuations will mediate HTCS.
In this paper, we study possible pairing states in the single-orbital Hubbard model on the square-octagon lattice with only nearest-neighbor hopping terms. To treat this Hubbard model in different limits of the coupling strength, we adopt three distinct approaches, i.e., the random-phase approximation (RPA), the slave-boson mean field (SBMF), and the variational Monte Carlo (VMC), which are suitable for weak, strong, and intermediate coupling strengths, respectively. All three approaches consistently identify the singlet $s^\pm$-wave pairing as the leading pairing symmetry. We propose octagraphene as a possible material realization of the model. Our VMC calculation, adopting a realistic interaction strength, yields a pairing gap amplitude of about 50 meV, which is comparable with the cuprates, implying a comparable $T_c$ between the two families. Our study also applies to other materials with similar lattice structure.
Material, Model, and Approaches
===============================
From density-functional theory (DFT) calculations [@Sugang], each carbon atom in octagraphene is $\sigma$ bonded with its three surrounding atoms via $sp^2$ hybridization. The low-energy degrees of freedom near the Fermi level are dominated by the $2p_z$ orbitals, which form $\pi$ bonds similar to those in graphene. With each carbon atom contributing one electron in one $2p_z$ orbital, the resulting band structure can be well captured by the following single-orbital tight-binding (TB) model: $$\begin{aligned}
H_{\text{TB}}=-t_1 \sum_{\langle i,j \rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + H.c. \right)
- t_2 \sum_{[i,j],\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + H.c. \right).\label{TB}\end{aligned}$$ Here $c^{\dagger}_{i\sigma} \left(c_{i\sigma}\right)$ creates (annihilates) an electron with spin $\sigma$ at site $i$. The terms with coefficients $t_1$ ($\approx 2.5$ eV) and $t_2$ ($\approx 2.9$ eV) describe the intrasquare nearest-neighbor ($NN$) and intersquare $NN$ hoppings, respectively, as shown in Fig. \[fig:lattice\]. In the following, we set $t_1$ as the energy unit and $t_2/t_1=1.2$.
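For readers who wish to reproduce the band structure numerically, a minimal sketch of $H_{\text{TB}}$ in momentum space is given below. The site labeling within the square and the assignment of the Bloch phases to the intersquare bonds are our own assumptions (conventions related by a gauge transformation give the same bands), and the lattice constant is set to 1.

```python
# Minimal sketch of the four-band Bloch Hamiltonian of the square-octagon lattice.
# Assumed convention: sites 1-4 sit on the corners of the square; intrasquare bonds
# (t1) connect 1-2, 2-3, 3-4, 4-1, and each site has one intersquare bond (t2),
# 1-3 along x and 2-4 along y.  Energies are in units of t1.
import numpy as np

def h_bloch(kx, ky, t1=1.0, t2=1.2):
    h = np.zeros((4, 4), dtype=complex)
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:   # intrasquare NN hoppings
        h[i, j] += -t1
    h[0, 2] += -t2 * np.exp(1j * kx)                # intersquare NN hoppings
    h[1, 3] += -t2 * np.exp(1j * ky)                # carrying the Bloch phases
    return h + h.conj().T

# bands along the high-symmetry path Gamma -> X -> M -> Gamma
path = ([(x, 0.0) for x in np.linspace(0, np.pi, 50)] +
        [(np.pi, y) for y in np.linspace(0, np.pi, 50)] +
        [(s, s) for s in np.linspace(np.pi, 0, 50)])
bands = np.array([np.linalg.eigvalsh(h_bloch(kx, ky)) for kx, ky in path])
# at half filling the second and third bands cross the Fermi level
```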
The band structure of this TB model along the high-symmetry lines in the first Brillouin zone is presented in Fig. \[fig:band\]. At half filling, the bands $\varepsilon_2({\mathbf{k}})$ and $\varepsilon_3({\mathbf{k}})$ cross the Fermi level to form a hole pocket ($\alpha$) centering around the $\Gamma$ point and an electron pocket ($\beta$) centering around the $M$ point, as shown in Fig. \[fig:FSn100\]. The red (green) color indicates that sites 1 and 3 (2 and 4) dominate the band weights. Remarkably, the two pockets are identical, connected by the perfect nesting vector $\mathbf{Q}=(\pi,\pi)$. Such perfect FS nesting is robust at half filling in the parameter regime $0 < \left|\frac{t_2}{t_1}\right| \le 2$, where the FS exists. However, upon doping, the perfect FS nesting is broken, leaving a remnant nesting at a nesting vector shifted from $\mathbf{Q}$, as shown in Fig. \[fig:FSn110\].
Due to the screening effect in the doped compound, the strong Coulomb repulsion between the $2p_z$ electrons in this graphene-based material can be approximated by a Hubbard interaction [@Neto]. We therefore obtain the following well-known (repulsive) Hubbard model: $$H =H_{\text{TB}}+H_{\text{int}}=H_{\text{TB}}+U \sum_i \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}.\label{model}$$ Although there is a rough estimate of $U\approx10$ eV for graphene-based materials, an accurate value of $U$ is hard to obtain [@Neto]. Therefore, in the following, we first employ three different approaches, i.e., the RPA, the SBMF, and the VMC, to treat the model in different limits of $U$ and check the $U$ dependence of the pairing symmetry. As we shall see, they yield consistent results. Then, we fix $U=10$ eV and adopt the VMC approach, suitable for this $U$, to estimate $T_c$.
Theoretical solutions and numerical results
===========================================
Results for the random-phase approximation
------------------------------------------
We adopt the standard multi-orbital RPA approach [@KKubo07; @SGraser09; @QLLuo10; @TAMaier11; @FengLiu13; @TXMa14; @XXWu15; @LDZhang15; @HKontani98; @HKondo01; @KKuroki02] to treat the weak-coupling limit of the model (\[model\]). Strictly speaking, this is an “intra-unit-cell multisite model” without orbital degrees of freedom, which is easier because of the absence of an inter-orbital Coulomb interaction and Hund’s coupling. This approach handles the interactions at the RPA level, from which we determine the properties of the magnetism and SC for interactions above or below the critical interaction strength $U_c$, respectively. Generally, the RPA approach only works well for weak-coupling systems.
Let us define the following bare susceptibility for $U=0$: $$\begin{aligned}
\chi^{(0)l_1l_2}_{l_3l_4} \left({\mathbf{q}},i\omega_n\right) \equiv
\frac{1}{N} \int_0^{\beta} d\tau e^{i\omega_n\tau}
\sum_{{\mathbf{k}}_1{\mathbf{k}}_2} \big\langle T_{\tau} c^{\dagger}_{l_1}({\mathbf{k}}_1,\tau) \nonumber \\
\times c_{l_2}({\mathbf{k}}_1+{\mathbf{q}},\tau)
c^{\dagger}_{l_3}({\mathbf{k}}_2+{\mathbf{q}},0) c_{l_4}({\mathbf{k}}_2,0) \big\rangle_0.\end{aligned}$$ Here $l_i(i=1,...,4)$ denotes the sublattice indices. The largest eigenvalue $\chi({\mathbf{q}})$ of the static susceptibility matrix $\chi^{(0)}_{lm}({\mathbf{q}}) \equiv \chi^{(0)l,l}_{m.m}({\mathbf{q}},i\omega=0)$ for each ${\mathbf{q}}$ represents the eigensusceptibility in the strongest channel, while the corresponding eigenvector $\xi({\mathbf{q}})$ provides information on the fluctuation pattern within the unit cell. The information about the distribution of $\chi({\mathbf{q}})$ over the Brillouin zone, as well as the fluctuation pattern for the peak momentum, is shown in Fig. \[fig:chi\] for different dopings.
Figure \[fig:chi0x00\] illustrates the distribution of $\chi({\mathbf{q}})$ over the Brillouin zone for the undoped case, which is sharply peaked at $\mathbf{Q}=(\pi,\pi)$, reflecting the perfect FS nesting at that wave vector, as shown in Fig. \[fig:FSn100\]. On the other hand, the eigenvector $\xi(\mathbf{Q})=(\frac{1}{2},-\frac{1}{2},\frac{1}{2},-\frac{1}{2})$ reflects the intra-unit-cell fluctuation pattern, which is shown in Fig. \[fig:magnetism\] together with the inter-unit-cell pattern for this momentum; it suggests a Néel pattern. Upon doping, the peak in the distribution of $\chi({\mathbf{q}})$ splits into four and deviates from $\mathbf{Q}=(\pi,\pi)$ to $\mathbf{Q}_{x}=(\pi\pm\delta,\pi\pm\delta)$, as shown in Fig. \[fig:chi0x10\] for $x=10\%$ electron doping as an example. The relation between $\delta$ and $x$ shown in Fig. \[fig:delta\] suggests a linear dependence, revealing an incommensurate inter-unit-cell fluctuation pattern, just like the Yamada relation in the cuprates[@Yamada98]. In the meantime, the eigenvectors $\xi(\mathbf{Q}_{x})$ remain nearly unchanged, and thus the intra-unit-cell fluctuation pattern is still approximately described by Fig. \[fig:magnetism\].
For $U>0$, we obtain the following renormalized spin (s) and charge (c) susceptibilities at the RPA level, $$\begin{aligned}
\chi^{(s/c)}\left({\mathbf{q}},i\omega_n \right) =
\left[I \mp \chi^{(0)}\left({\mathbf{q}},i\omega_n\right)(U)\right]^{-1} \chi^{(0)}\left({\mathbf{q}},i\omega_n\right)
\label{eq:RPA}\end{aligned}$$ Here $\chi^{(s/c)}\left({\mathbf{q}},i\omega_n\right)$, $\chi^{(0)}\left({\mathbf{q}},i\omega_n\right)$, and $(U)$ are treated as $4^2 \times 4^2$ matrices and $I$ is the unit matrix. In our model, $U^{l_1 l_2}_{l_3 l_4} = U \delta_{l_1=l_2=l_3=l_4}$. For $U>0$, the spin fluctuations dominate the charge fluctuations; thus the fluctuation pattern illustrated in Fig. \[fig:magnetism\] actually describes the spin fluctuations. Note that the RPA approach only works for $U<U_c$, with the critical interaction strength $U_c$ determined by $\det\left[I - \chi^{(0)}\left({\mathbf{q}},0\right)U\right]=0$. For $U>U_c$ the spin susceptibility diverges, which suggests that long-range SDW order with the pattern shown in Fig. \[fig:magnetism\] emerges. The doping dependence of $U_c$ is shown in Fig. \[fig:Uc\], where one finds $U_c=0$ for $x=0$ due to the perfect FS nesting, which means that an arbitrarily weak repulsive interaction will cause SDW order. For $x>0$, we have $U_c>0$. In such cases, the SDW order is maintained in the doping regime where $U_c<U$, but with the wave vector shifted to the incommensurate values $\mathbf{Q}_{x}=(\pi\pm\delta,\pi\pm\delta)$.
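As a schematic illustration of Eq. (\[eq:RPA\]) and of the criterion for $U_c$, the matrix inversion and the divergence condition can be written in a few lines of numpy. The bare susceptibility `chi0` and the interaction matrix below are placeholders that would have to be built from the band structure and from $U^{l_1 l_2}_{l_3 l_4}$.

```python
# Schematic numpy sketch of the RPA enhancement and of the U_c criterion.
# chi0: 16 x 16 bare susceptibility at fixed q (and i*omega_n = 0); u_template:
# 16 x 16 matrix encoding U^{l1 l2}_{l3 l4} = delta_{l1=l2=l3=l4} for unit U.
import numpy as np

def chi_rpa(chi0, u_matrix, spin=True):
    """Return [I -/+ chi0 @ U]^{-1} @ chi0 (minus sign for spin, plus for charge)."""
    sign = -1.0 if spin else 1.0
    eye = np.eye(chi0.shape[0])
    return np.linalg.solve(eye + sign * chi0 @ u_matrix, chi0)

def critical_u(chi0, u_template, u_grid):
    """Smallest U on u_grid for which det[I - chi0 @ (U * u_template)] <= 0,
    i.e. where the RPA spin susceptibility diverges."""
    eye = np.eye(chi0.shape[0])
    for u in sorted(u_grid):
        if np.linalg.det(eye - chi0 @ (u * u_template)).real <= 0.0:
            return u
    return None
```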
When the doping concentration $x$ increases further so that $U<U_c$, the long-range SDW order is destroyed. In this parameter regime, the remnant SDW fluctuations mediate an effective pairing potential $V^{\alpha\beta}({\mathbf{k}},{\mathbf{k}}^\prime)$ [@FengLiu13; @XXWu15] between the Cooper pairs. We can then solve the following linearized gap equation to determine the leading pairing symmetry: $$\begin{aligned}
-\frac{1}{(2\pi)^2}\sum_\beta \oint_{FS} d{\mathbf{k}}^\prime_{\parallel} \frac{V^{\alpha\beta}({\mathbf{k}},{\mathbf{k}}^\prime)}{v^\beta_F({\mathbf{k}}^\prime)}
\Delta_\beta({\mathbf{k}}^\prime) = \lambda\Delta_\alpha({\mathbf{k}}).
\label{eq:gap}\end{aligned}$$ Here $v^\beta_F({\mathbf{k}})$ is the Fermi velocity and ${\mathbf{k}}^\prime_{\parallel}$ denotes the component along the FS. The pairing eigenvalue $\lambda$ is related to $T_c$ through $T_c\approx W_{D} e^{-1/\lambda}$, with the “Debye frequency” $W_D$ of the spin fluctuations being about an order of magnitude lower than the bandwidth, and the pairing symmetry is determined by the eigenfunction $\Delta_\alpha({\mathbf{k}})$ corresponding to the largest $\lambda$.
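Numerically, Eq. (\[eq:gap\]) reduces to an eigenvalue problem for a kernel built on a discretized Fermi surface. The sketch below assumes the pairing interaction `V`, the Fermi velocities `v_f`, and the patch arc lengths `dk` have already been tabulated on a set of FS patches; it only shows the structure of the calculation.

```python
# Schematic solver for the linearized gap equation on a discretized Fermi surface.
# V[a, b]: effective pairing interaction between FS patches a and b (precomputed);
# v_f[b]: Fermi velocity on patch b; dk[b]: arc length of patch b.
import numpy as np

def leading_pairing(V, v_f, dk):
    """Return the largest eigenvalue lambda and the corresponding gap function
    Delta(k), from the kernel M[a, b] = -V[a, b] * dk[b] / ((2*pi)**2 * v_f[b])."""
    kernel = -V * dk[np.newaxis, :] / ((2.0 * np.pi) ** 2 * v_f[np.newaxis, :])
    vals, vecs = np.linalg.eig(kernel)
    idx = np.argmax(vals.real)
    return vals[idx].real, vecs[:, idx].real
```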
The $U$ dependence of the largest $\lambda$ for each pairing symmetry is shown in Fig. \[fig:lambda\] for a typical doping $x=10\%$. Obviously, $\lambda$ increases rapidly with the growth of $U$ due to the enhancement of spin fluctuations. The leading pairing symmetry turns out to be the $s$-wave. In Fig. \[fig:lambda\_x\], the doping dependence of the largest $\lambda$ for each pairing symmetry is shown for a typical $U=1.8t_1$. After a prompt drop near the critical doping (about $\pm5\%$), the $\lambda$ for the four pairing symmetries varies smoothly over a wide doping range up to $20\%$, where the $s$-wave SC dominates all the other pairings. Figures \[fig:lambda\] and \[fig:lambda\_x\] illustrate the robustness of the $s$-wave SC against parameter variations. The $C_{4v}$-symmetric distribution of the pairing gap function $\Delta({{\mathbf{k}}})$ of the obtained $s$-wave SC is shown on the FS in Fig. \[fig:pairing1\]. Remarkably, this gap function keeps the same sign within each pocket and changes sign between the two pockets. Therefore, we have established here a one-orbital realization of the standard $s^\pm$ SC, which has previously been realized in the multi-orbital Fe-based superconductor family.
Note that the interaction parameter $U=1.8t_1\approx4.5$ eV adopted here is considerably weaker than the realistic value of $U\approx10$ eV [@Neto]; due to the weak-coupling perturbative character of the RPA, it is unreasonable to adopt a stronger $U$. In the next section, we adopt the SBMF approach to treat the strong-coupling limit.
The slave-boson mean-field results
----------------------------------
We start from the following effective $t$-$J$ model to study the strong-coupling limit of the Hubbard model (\[model\]), $$\begin{aligned}
H=H_{\text{TB}}+J_1\sum _{\left \langle i,j \right \rangle }\bm{\widehat{S}_i}\cdot \bm{\widehat{S} } _{j} +J_2\sum _{\left [ i,j \right ] }\bm{\widehat{S} }_{i}\cdot \bm{\widehat{S} } _{j}.\label{tJ}\end{aligned}$$ Here the intrasquare $NN$ ($J_{1}$) and intersquare $NN$ ($J_{2}$) effective superexchange coupling constants are generated in the strong-coupling limit and roughly satisfy $J_2/J_1\approx(t_2/t_1)^2\approx1.4$. In the following, we adopt $J_1=0.5t_1$ and $J_2=0.7t_1$. This Hamiltonian should be understood as acting on the subspace of empty (doubly occupied) and singly occupied sites for the hole-doped (electron-doped) system.
In the SBMF approach[@Kotliar], we decompose the electron operator $c_{i\sigma}$ as $c_{i\sigma}\to f_{i\sigma}b^{\dagger}_i$, with the bosonic holon (doublon) operator $b^{\dagger}_i$ and the fermionic spinon operator $f_{i\sigma}$ subject to the no-double-occupancy constraint $b^{\dagger}_ib_i+\sum_{\sigma}f^{\dagger}_{i\sigma}f_{i\sigma}=1$. This constraint is treated at the mean-field level in the SBMF, and at zero temperature the condensation of the bosonic $b^{\dagger}_i$ leads to $b^{\dagger}_i\to \sqrt{x}$, leaving only the fermionic $f_{i\sigma}$ degrees of freedom. The quartic term of $f_{i\sigma}$ in $H$ is further mean-field decomposed in the following two order-parameter channels: $$\begin{aligned}
\label{sce}
\kappa _{(i,j)} =&\left \langle f^{\dagger}_{j\uparrow }f_{i\uparrow} \right \rangle=\left \langle f^{\dagger}_{j\downarrow }f_{i\downarrow} \right \rangle \nonumber\\
\Delta _{(i,j)}=&\left \langle f_{j\downarrow }f_{i\uparrow}-f_{j\uparrow }f_{i\downarrow}\right \rangle.\end{aligned}$$ Here we actually have two mean-field $\kappa _{(i,j)}$ ($\Delta _{(i,j)}$) parameters, i.e., $\kappa _{1}$ ($\Delta _{1}$) for intrasquare $NN$ and $\kappa _{2}$ ($\Delta _{2}$) for intersquare $NN$ $(i,j)$, respectively, which are obtained by solving the mean-field equation self-consistently.
Our SBMF results are shown in Fig. \[SBMF\]. Here we have tried two different pairing symmetries, i.e., the $s$ wave and $d$ wave, with their total energy difference $\Delta E\equiv E_s-E_d$ shown in Fig. \[SBMF-a\], where the $s$-wave SC gains more energy and becomes the ground state. The doping dependence of the four order parameters $\kappa _{1,2}$ and $\Delta _{1,2}$ for the $s$-wave pairing is shown in Fig. \[SBMF-b\], where the intersquare order parameters obviously dominate the intrasquare ones. Figure \[SBMF-c\] shows the projection of the gap function onto the FS, where one clearly verifies the standard $s^\pm$-pairing state, which is well consistent with the gap function obtained by RPA shown in Fig. \[fig:pairing1\].
The doping dependence of the superconducting order parameter $\Delta^{(c)} _{(i,j)}=\left \langle c_{j\downarrow }c_{i\uparrow}-c_{j\uparrow }c_{i\downarrow}\right \rangle=x \Delta _{(i,j)}$ is shown in Fig. \[SBMF-d\], which displays a dome shape similar to the cuprates. If we use the BCS relation $2J\Delta^{(c)}/T_c\approx3.53$ to roughly estimate $T_c$, we get the highest $T_c\approx180$ K near $x=10\%$ for our choice of $J_1$ and $J_2$. However, as the effective superexchange parameters $J_1$ and $J_2$ for the real material with intermediate $U$ are hard to estimate, the $T_c$ obtained here might not be accurate. In the following, we adopt the VMC approach to study the problem.
The variational Monte Carlo results
-----------------------------------
The above weak-coupling RPA and strong-coupling SBMF approaches consistently yield the $s^\pm$-wave pairing. However, to obtain a more reasonable estimate of $T_c$, we should adopt a realistic interaction parameter $U$. The realistic $U\approx10$ eV is comparable with the total bandwidth and thus corresponds to an intermediate coupling strength. We therefore adopt the VMC approach here, which is suitable for intermediate coupling strengths.
We adopt the following partially Gutzwiller-projected BCS wave function [@YangVMC] in our VMC study, $$\begin{aligned}
\label{wave}
\left |G \right \rangle=g^{\sum_{i} n_{i\uparrow}n_{i\downarrow}} (\sum_{\bf{k}\alpha}\frac{v^{\alpha }_{\bm{k}}}{u^{\alpha }_{\bm{k}}}
c^{\dagger}_{\bf{k}\alpha\uparrow}c^{\dagger}_{\bf{-k}\alpha \downarrow})^{\frac{N_e}{2}}\left |0 \right \rangle.\end{aligned}$$ Here $g\in(0,1)$ is the penalty factor of the double occupancy, $N_e$ is the total number of electrons, and $$\begin{aligned}
\frac{v^{\alpha }_{\bm{k}}}{u^{\alpha }_{\bm{k}}}=\frac{\Delta^{\alpha}_{\bm{k}}}{\varepsilon_{\alpha }(\bm{k})+\sqrt{\varepsilon^2_{\alpha }(\bm{k})+\left |\Delta ^{\alpha }_{\bm{k}}\right |^{2} }},\end{aligned}$$ where $\Delta ^{\alpha }_{\bm{k}}=\Delta ^{\alpha }f(\bm{k})$ is the superconducting gap function. Here we only consider intra-band pairing on the $\alpha=2,3$ bands crossing the FS, with $\Delta^{2}=\Delta^{3}\equiv\Delta$. The following four different form factors $f(\bm{k})$ are considered in our calculations, $$\begin{aligned}
\label{factor}
f(\bm{k})=\left\{\begin{matrix}
\cos k_{x}+\cos k_{y}\quad &(s^\pm)\\
\cos k_{x}\cos k_{y}\quad &(s^{++})\\
\cos k_{x}-\cos k_{y}\quad &(d_{x^{2} -y^{2}})\\
\sin k_{x}\sin k_{y}\quad &(d_{xy})
\end{matrix}\right.\end{aligned}$$ There are three variational parameters, i.e., $g$, $\mu_c$, and $\Delta$ for each pairing channel in our trial wave function.
We employ the VMC approach to calculate the expectation value $E$ of the Hubbard Hamiltonian (\[model\]) [@YangVMC] and optimize the variational parameters. The $\Delta$ dependence of the energy per unit cell for each form factor is shown in Fig. \[ttu\] for $U=4t_1=10$ eV and a typical doping $x=10\%$, with $g$ and $\mu_c$ optimized for each $\Delta$. Note that the optimized $g=0.5475$ is almost equal to the optimized value without SC, and that $\mu_c$ is almost equal to the value obtained in the mean-field calculation. From Fig. \[ttu\], one finds that the $s^\pm$-wave pairing yields the largest energy gain among the four gap form factors, with the optimized gap amplitude at $\Delta=0.022t_1\approx$ 50 meV, comparable with the cuprates, implying a similar $T_c$ between them. The gap function of the obtained $s^\pm$-wave SC is shown on the FS in Fig. \[ttmu\], and is well consistent with that obtained in the RPA calculation.
Note that we have not included antiferromagnetic order in our trial wave function as we mainly focus on SC here. Generally, such antiferromagnetic order will be favored at low dopings and decay with further doping. In the framework of VMC, the antiferromagnetic order possibly coexists with SC at low dopings. We leave this topic for future studies.
Discussion and Conclusion
=========================
The synthesis of octagraphene is under way. Recently, graphene-like nanoribbons periodically embedded with four- and eight-membered rings have been synthesized [@Zhong]. A scanning tunneling microscopy and atomic force microscopy study revealed that four- and eight-membered rings are formed between adjacent perylene backbones with a planar configuration. This 2D material can be regarded as an intermediate between graphene and the octagraphene studied here. Most probably, octagraphene will be synthesized in the near future, which would provide a material basis for the present study.
In conclusion, we have studied possible pairing states in the single-orbital Hubbard model on the square-octagon lattice with only nearest-neighbor hopping terms. Due to the perfect FS nesting in the undoped system, slight doping would induce HTCS, driven by strong incommensurate SDW fluctuations. Our combined RPA-, SBMF-, and VMC-based calculations, suitable for weak, strong, and intermediate coupling strengths, respectively, consistently yield a standard $s^\pm$-wave SC in this simple one-orbital system. The smoking-gun evidence of this intriguing pairing state would be the pronounced subgap spin resonance mode emerging upon the superconducting transition, which can be detected by inelastic neutron scattering. We propose octagraphene as a possible material realization of the model, and our VMC calculations adopting a realistic interaction parameter for this material yield a pairing gap amplitude of about 50 meV, comparable with that of the cuprates, which implies a comparable $T_c$ between the two systems. Our study also applies to other materials with similar lattice structure. Our results, if confirmed, would start a new stage in the discovery of high-$T_c$ SC.
F.Y. acknowledges the support from NSFC under the Grants No. 11674025, No. 11334012, and No. 11274041. Y.-T.K. and D.-X.Y. are supported by NKRDPC Grants No. 2017YFA0206203, No. 2018YFA0306001, NSFC-11574404, and NSFG-2015A030313176, Special Program for Applied Research on Super Computation of the NSFC-Guangdong Joint Fund, National Supercomputer Center In Guangzhou, and Leading Talent Program of Guangdong Special Projects.
[^1]: These two authors contributed equally to this work.
[^2]: These two authors contributed equally to this work.
---
abstract: 'A complete understanding of real networks requires us to understand the consequences of the uneven interaction strengths between a system’s components. Here we use the minimum spanning tree (MST) to explore the effect of weight assignment and network topology on the organization of complex networks. We find that if the weight distribution is correlated with the network topology, the MSTs are either scale-free or exponential. In contrast, when the correlations between weights and topology are absent, the MST degree distribution is a power-law and independent of the weight distribution. These results offer a systematic way to explore the impact of weak links on the structure and integrity of complex networks.'
author:
- 'P. J. Macdonald'
- 'E. Almaas'
- 'A.-L. Barab[á]{}si'
title: 'Minimum spanning trees on weighted scale-free networks'
---
The study of many complex systems has benefited from representing them as networks [@review], examples including metabolic networks [@jeong00], describing the reactions in a cell’s metabolism; the protein interaction network [@jeong01], capturing the binding interactions between a cell’s proteins; and the World Wide Web and email networks [@albert99; @ebel02] linking web-pages or people together via URLs or emails. For these systems there is extensive empirical evidence indicating that the degree (or connectivity) distribution of the nodes follows a power-law, strongly influencing everything from network robustness [@robust] to disease spreading [@vespignani01]. However, to fully characterize these systems, we need to acknowledge the fact that the links can differ in their strength and importance [@yook01; @goh01; @braunstein03; @barrat04; @toro04]. Indeed, in a social network the strength of the relationship between two long-time friends differs from that between two casual business associates [@granovetter73]; in ecological systems the strength of a particular pair-interaction between species is crucial for population dynamics [@kilpatrick03], ecosystem stability [@berlow99] and development in stressed environments [@callaway02]. Thus in most networks the links are not binary (present or absent), but have a strength that quantifies the importance of the particular node-to-node interaction.
The weakest links can carry particular significance in some weighted networks[@granovetter73]. For instance, the speed of data transmission between two computers is limited by the link with the smallest bandwidth (“bottleneck”), or the activity of a metabolic pathway is determined by the rate of the slowest reaction. Furthermore, weak links can affect the overall network integrity. For example, ecological communities may experience dramatic effects upon the removal of weak interactors [@berlow99]. To systematically uncover the location and the role of weak links in a complex network, we use the minimum spanning tree (MST), which for an $N$ node network represents the loopless subgraph of $(N-1)$ links that reaches all nodes while [*minimizing*]{} the sum of the link weights [@laszlo96; @west97; @banavar99; @banavar00]. By avoiding the strong links and preferentially following the weakest ones, the MST selects the lowest weight backbone of a network.
We start by examining the correlations between weights and network structure for several real systems, allowing us to construct a model system whose weight distribution mimics the statistical features of real networks. We then show that the large-scale structure of the MSTs depends on the way the weights are placed in the network: For systems whose weight distribution is correlated with the network topology, the MSTs are either scale-free or exponential. In contrast, when the correlations between weights and topology are removed, the MST degree distribution is a power-law with a degree exponent close to the degree exponent of the original network, independent of the weight distribution.
[*Topology Correlated Weights.*]{}—To uncover the functional relationship between network topology and link weights, in Fig. \[fig:1\] we display the dependence of the weights on the node degrees for the [*E. coli*]{} metabolic network, where the link weights represent the optimal metabolic fluxes [@almaas04]; the US Airport Network (USAN) where the weights reflect the total number of passengers travelling between two airports between $1992$ and $2002$; and the link betweenness-centrality (BC), representing the number of shortest paths along a link for the Barab[á]{}si-Albert (BA) scale-free model [@BA]. For each of these systems the weight distributions follow a power-law [@goh01] (not shown) and, as Fig \[fig:1\] shows, the average link weight scales with the degrees of the nodes on the two ends of a link as $\langle w_{ij}\rangle \sim
(k_ik_j)^\theta$, similar to the scaling found for the World Airport Network [@barrat04].
![\[fig:1\] The average weight of a link between nodes $i$ and $j$ shown as function of the link end-point degree product ${k_ik_j}$. The symbols represent (i) the USAN with the number of passengers as link weights (filled squares); (ii) the [*E. coli*]{} metabolic network with optimized flux as link weight [@almaas04] (filled triangles); (iii) Barab[á]{}si-Albert scale-free model with betweenness-centrality (BC) as link weights (open circles). The solid line ($w\sim ({k_ik_j})^{0.5}$) and the dashed line ($w\sim ({k_ik_j})^{0.8}$) serve as guides to the eye. [**Inset:**]{} The weights are determined by the end-point degrees $k_i$ and $k_j$: (i) $w_{ij} = {k_ik_j}$, (ii) $w_{ij} = \max(k_i,k_j)$, (iii) $w_{ij} = \min(k_i,k_j)$ or the inverse thereof (iv)-(vi).](figure1.eps){width="8.6cm"}
These empirical observations allow us to assign weights to the links of a network for which we have only the network topology. To systematically study the role of the weight distribution on the structure of the MST we use several weight assignments. (i) First, we choose $w_{ij} = {k_ik_j}$ (see inset Fig. \[fig:1\]). Note that the MST generated by this weight assignment is identical to the MST obtained for weights $w'_{ij} = (w_{ij})^\theta$ with any $\theta >
0$, as it is the rank of the weights and not their absolute value that determines the MST [@dobrin01]. We have also studied the two extreme cases of topology-correlated weights, distributed according to (ii) $w_{ij} \sim {k_{max}}$ and (iii) $w_{ij} \sim {k_{min}}$, where ${k_{min}}=
\min(k_i,k_j)$ and ${k_{max}}= \max(k_i,k_j)$ and with ${k_{min}}^2 \leq {k_ik_j}\leq {k_{max}}^2$. Finally, we investigated the structure of the maximal spanning trees [@kim04] for the above weight choices by determining the MST after transforming cases (i)-(iii) as $w'_{ij} =
1/w_{ij}$, resulting in the link-weight choices (iv) $w_{ij} \sim
1/{k_ik_j}$, (v) $w_{ij} \sim 1/{k_{max}}$ and (vi) $w_{ij} \sim 1/{k_{min}}$.
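The sketch below (using networkx; the `scheme` labels and the network parameters are our own, chosen only for illustration) shows how weight choices (i)-(vi) can be generated directly from the adjacency structure of a BA network.

```python
# Sketch: assign topology-correlated link weights, choices (i)-(vi), to a BA graph.
import networkx as nx

def assign_weights(G, scheme="ki_kj", inverse=False):
    """scheme in {'ki_kj', 'k_max', 'k_min'}; inverse=True gives choices (iv)-(vi)."""
    deg = dict(G.degree())
    for i, j in G.edges():
        ki, kj = deg[i], deg[j]
        base = {"ki_kj": ki * kj, "k_max": max(ki, kj), "k_min": min(ki, kj)}[scheme]
        G[i][j]["weight"] = 1.0 / base if inverse else float(base)
    return G

# choice (i), w_ij = k_i * k_j, on a Barabasi-Albert network (parameters illustrative)
G = assign_weights(nx.barabasi_albert_graph(10**4, m=2), scheme="ki_kj")
```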
[*Weight Distributions.*]{}—To characterize the obtained weighted networks we first study their weight distribution. For this, we grow scale-free networks according to the BA model [@BA], the resulting networks having a degree distribution $P(k) \sim k^{-\gamma}$ with $\gamma = 3$. We then assign a weight to each link according to (i)-(vi). For networks whose degrees at the two ends of a link are uncorrelated we can determine the weight distribution analytically using order statistics [@hogg95], finding $$\begin{aligned}
P_{{k_ik_j}}(w) &=& (\gamma-2)^2~ m^{2(\gamma-2)}~ w^{-\gamma+1}~\ln(w/m^2), \label{eq:1}\\
P_{{k_{max}}}(w) &=& 2(\gamma-2) m^{\gamma-2} w^{-\gamma+1} \left[ 1- \left(\frac{w}{m}
\right)^{-\gamma+2}\right],\label{eq:2}\\
P_{{k_{min}}}(w) &=& 2(\gamma-2) m^{2(\gamma-2)} w^{-2\gamma+3}\label{eq:3}.\end{aligned}$$ Corresponding expressions for the inverse degree correlations are obtained after the variable change $w'=1/w$. For $w_{ij} \sim
({k_ik_j})^\theta$ the relationship between the exponent of the weight distribution ($P(w) \sim w^{-\sigma}, w\gg 1$), the exponent of the degree distribution $\gamma$ and $\theta$ is $$\sigma ~=~ 1 + \frac{\gamma - 2}{\theta},$$ valid for cases (i) and (ii). In Fig. \[fig:2\] we compare the numerically determined weight distributions with the scaling predicted by our analytical expressions, finding that the numerical curves display a $w$-dependency close to that of Eqs. (\[eq:1\])-(\[eq:3\]), unaffected by the degree-degree correlations in the model [@krapivsky01; @corr]. Note that power-law weight distributions like those in Fig. \[fig:2\] have been observed for a wide range of network based dynamical processes [@marcio].
![\[fig:2\] Distribution of link weights on $N=10^5$ node scale-free networks. Link-weight choice (i) (triangles), (ii) (squares) and (iii) (circles) are all heavy tailed. The analytical predictions (Eqs. (\[eq:1\]) - (\[eq:3\])) are indicated as solid lines. Note that the solid curves have been shifted vertically without changing the character of the scaling law. [**Inset:**]{} The inverse weight distributions (iv)-(vi) (triangles, squares and circles respectively) and the analytical predictions shown as continuous lines.](pw_full_winset.eps){height="7cm"}
[*Minimum Spanning Trees.*]{}—The MSTs were generated using Prim’s greedy algorithm [@prim57]: starting from a randomly selected node, at each time step we add the link (and hence a node) with the smallest weight among the links connected to the already accepted nodes. Whenever $m$ links with the same (smallest) weight are encountered, we break the degeneracy by randomly selecting one among them with probability $1/m$.
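A minimal, unoptimized implementation of this procedure, including the uniform random tie-breaking among equal-weight frontier links, could look as follows (network size and weight choice are illustrative):

```python
# Sketch of Prim's algorithm with random tie-breaking, as described in the text.
import random
import networkx as nx

def prim_mst(G, seed=None):
    rng = random.Random(seed)
    start = rng.choice(list(G.nodes()))
    in_tree, mst = {start}, nx.Graph()
    while len(in_tree) < G.number_of_nodes():
        # all links from the current tree to nodes not yet reached
        frontier = [(G[u][v]["weight"], u, v)
                    for u in in_tree for v in G[u] if v not in in_tree]
        w_min = min(w for w, _, _ in frontier)
        w, u, v = rng.choice([e for e in frontier if e[0] == w_min])
        in_tree.add(v)
        mst.add_edge(u, v, weight=w)
    return mst

# usage: weight choice (i), w_ij = k_i * k_j, on a 10^3-node BA network
G = nx.barabasi_albert_graph(10**3, m=2)
for i, j in G.edges():
    G[i][j]["weight"] = G.degree(i) * G.degree(j)
mst = prim_mst(G, seed=0)
degree_histogram = nx.degree_histogram(mst)   # input to the MST degree distribution
```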
The numerical results indicate that the degree distributions of the resulting MSTs fall into two distinct classes [@kertesz03; @kertesz03a]. Weight choices (i) and (ii) give rise to exponential MST degree distributions (Fig. \[fig:3\]a), while choices (iii)-(vi) result in power-law distributed MST degrees (Fig. \[fig:3\]b). We can understand the exponential nature of the (i) and (ii) MSTs through the following argument: since the MST tends to avoid links with large weights, it effectively shuns the hubs for the cases $w_{ij}={k_ik_j}$ and $w_{ij}={k_{max}}$, utilizing instead, whenever possible, links connecting low-degree nodes (Fig. \[fig:3\]a). Consequently, all the hubs are marginalized and the MST degree distribution must have a narrow range. This argument is supported by Fig. \[fig:4\]a and b, where we show examples of MSTs for weight choices (i) and (ii), respectively. The sizes of the nodes in the figure reflect their degree in the original network. It is evident that the majority of the hubs are located on the branches ($k=1$ degree nodes) of the MST (Fig. \[fig:4\]). This reliance on the small nodes and tendency to avoid the hubs forces the MSTs generated by methods (i) and (ii) to be very similar to each other. Indeed, we find that for a given network with weights created by methods (i) and (ii), 87% of the links in the two MSTs are in common. This explains the similar visual appearance of the two MSTs (Fig. \[fig:4\]a and b).
![\[fig:3\] Degree and weight distribution of $N=10^4$ node MSTs. [**(a)**]{} The degree distribution for weights proportional to either ${k_ik_j}$ (i) or ${k_{max}}$ (ii) are dominated by an exponential cut-off, while [**(b)**]{} it is heavy-tailed for weights proportional to ${k_{min}}$ (iii) and inversely proportional to either ${k_ik_j}$ (iv), ${k_{max}}$ (v) and ${k_{min}}$ (vi). [**(c)**]{} The distribution of link weights on the MSTs is a power law for (i), (ii) and (vi) ($w\times 10^3$), while [**(d)**]{} it is dominated by an exponential cut-off for (iii) ($w\times 0.02$), (iv) and (v). For each weight choice we averaged over $10^4$ different MSTs.](figure3.eps){height="7cm"}
The second class of MSTs is well represented by weight choices (iii)-(vi), resulting in power-law MST degree distributions. The similarity between weight schemes (iv)-(vi) is emphasized by the fact that their MST degree distributions follow a power-law with the same exponent $\gamma = 2.4$ [@kim04] (Fig. \[fig:3\]b). Indeed, the links with the lowest weights are now connected to the hubs of the original network, and the MST grows utilizing these hubs extensively. Hence, the hubs of the full network experience only a slight reduction in their degree and are found at the center of the resulting MSTs (Fig. \[fig:4\]c), while the intermediate-degree nodes sustain large losses of neighbors and are found at the surface of the network with one or two neighbors (Fig. \[fig:4\]c).
The distribution of link weights on the MST also displays two distinct behaviors, being either power-law (Fig. \[fig:3\]c) or exponential (Fig. \[fig:3\]d). It is interesting to note that MSTs with exponential degree distribution (Fig. \[fig:3\]a) display power-law weight distributions with exponents $\sigma = 3.1$ (case (i)) and $\sigma = 3.0$ (case (ii)) (Fig. \[fig:3\]c). On the other hand, for weight choices (iii)-(v) the degree distribution of the MST is power law and the MST weight distribution is exponential or stretched exponential (Fig. \[fig:3\]d). For (vi) $w_{ij} = 1/{k_{min}}$ both the degree and the weight distribution of the MST are scale free. Finally, if the link weights are distributed uniformly and randomly the resulting MSTs have a power law degree distribution and an exponentially tempered weight distribution [@kertesz03a].
![\[fig:4\](Color online) Minimum spanning trees of a $N=10^3$ node scale-free network for weight choices [**(a)**]{} ${k_ik_j}$, [**(b)**]{} ${k_{max}}$ and [**(c)**]{} ${k_{min}}$. The size of a node represents its degree in the full network, and the color of a link represents its weight from low (black) to high (green). Note that the MST degree distribution is exponential for [**(a)**]{} and [**(b)**]{} and a power law for [**(c)**]{}.](figure4_color.eps){width="6.9in"}

In order to investigate the effect of the degree correlations on the MSTs for weight choices (i)-(vi), we randomized the weights of the original network by randomly selecting pairs of links and exchanging their weights until all correlations between weights and the local network topology were lost. Invariably, the resulting MSTs were scale-free with a degree exponent similar to that of the original network, $\gamma \approx 3$. The local structure of the MSTs is very different, however, with only $52$% of the links staying the same in a pairwise comparison between MSTs with weight choices (i) and (ii), suggesting that the functional form of the weight distribution is inconsequential for the degree distribution of the MSTs. To understand this we recall that only the [*ranking*]{} of the link weights and not their absolute value matters [@dobrin01]. Therefore, by removing the correlations between the local network structure and weights we effectively map the problem onto that of weights being uniformly random. Indeed, the degree distribution of the MST in this case is also power-law with $\gamma \approx 3$ [@kertesz03a]. However, the MST weight distributions continue to depend on the weight distribution of the original network.
[*Discussion.*]{}—As networks play an increasing role in the exploration of complex systems, there is an imminent need to understand the interplay between network dynamics and topology. While focusing on the MSTs of scale-free networks, our results emphasize the significance of correlations between link-weights and local network structure. We find that if correlations are present, two classes of MSTs exist, following either a power-law or an exponential degree distribution. The removal of correlations renders the MSTs scale-free, independent of the choice of the weight distribution. This result raises interesting questions regarding our ability to quantify the influence of weights.
Our findings could serve as a natural starting point towards the systematic exploration of weighted networks. For example, while we have assumed that the weights are static, incorporating their time-dependence may reveal novel dynamical rules. Second, we model the weights as solely dependent on the topology, potentially overlooking correlations among the weights themselves. Uncovering the role of such correlations remains a challenge for future research.
We thank J. Kert[é]{}sz, P. L. Krapivsky and S. Havlin for discussions. We also thank M. A. de Menezes for sharing the US airport data. This work has been supported by grants from DoE (E.A.) and the R.E.U. program at Notre Dame (P.J.M).
[100]{} S. H. Strogatz, Nature [**410**]{}, 268 (2001); R. Albert and A.-L. Barab[á]{}si, Rev. Mod. Phys. [**74**]{}, 47 (2002); S. N. Dorogovtsev and J. F. F. Mendes, [*Evolution of Networks: From Biological Nets to the Internet and WWW*]{} (Oxford Press, 2003); R. Pastor-Satorras and A. Vespignani, [*Evolution and Structure of the Internet : A Statistical Physics Approach*]{} (Cambridge University Press, 2004); A.-L. Barab[á]{}si and Z. N. Oltvai, Nat. Rev. Genet. [**5**]{}, 101 (2004).
H. Jeong, B. Tombor, R. Albert, Z. N. Oltvai, and A.-L. Barab[á]{}si, Nature [**407**]{}, 651 (2000); D. A. Fell and A. Wagner, Nat. Biotechnol. [**18**]{}, 1121 (2000).
H. Jeong, S. Mason, A.-L. Barab[á]{}si and Z. N. Oltvai, Nature [**411**]{}, 41 (2001).
R. Albert, H. Jeong, and A.-L. Barab[á]{}si, Nature [**401**]{}, 130 (1999).
H. Ebel, L.-I. Mielsch and S. Bornholdt, Phys. Rev. E [**66**]{}, 035103(R) (2002).
R. Albert, H. Jeong and A.-L. Barab[á]{}si, Nature [**406**]{}, 378 (2000); R. Cohen, K. Erez, D. ben-Avraham and S. Havlin, Phys. Rev. Lett. [**85**]{}, 4626 (2000); D. S. Callaway, M. E. J. Newman, S. H. Strogatz and D. J. Watts, Phys. Rev. Lett. [**85**]{}, 5468 (2000); R. Cohen, K. Erez, D. ben-Avraham and S. Havlin, Phys. Rev. Lett. [**86**]{}, 3682 (2001).
R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. [**86**]{}, 3200 (2001).
S. H. Yook, H. Jeong, A.-L. Barab[á]{}si and Y. Tu, Phys. Rev. Lett. [**86**]{}, 5835 (2001).
K.-I. Goh, B. Kahng and D. Kim, Phys. Rev. Lett. [**87**]{}, 278701 (2001).
L. A. Braunstein, S. V. Buldyrev, R. Cohen, S. Havlin, H. E. Stanley, Phys. Rev. Lett. [**91**]{}, 168701 (2003).
A. Barrat, M. Barth[é]{}lemy, R. Pastor-Satorras and A. Vespignani, Proc. Natl. Acad. Sci. USA [**101**]{}, 3747 (2004).
Z. Toroczkai and K. E. Bassler, Nature [**428**]{}, 716 (2004).
M. Granovetter, Am. J. Soc. [**78**]{}, 1360 (1973).
A. M. Kilpatrick and A. R. Ives, Nature [**422**]{}, 65 (2003).
E. L. Berlow, Nature [**398**]{}, 330 (1999).
R. M. Callaway, R. W. Brooker, P. Choler [*et al.*]{}, Nature [**417**]{}, 844 (2002).
A.-L. Barab[á]{}si, Phys. Rev. Lett [**76**]{}, 3750 (1996).
G. B. West, J. H. Brown and B. J. Enquist, Science [**276**]{}, 122 (1997).
J. R. Banavar, A. Maritan and A. Rinaldo, Nature [**399**]{}, 130 (1999).
J. R. Banavar, F. Colaiori, A. Flammini, A. Maritan and A. Rinaldo, Phys. Rev. Lett. [**84**]{}, 4745 (2000).
E. Almaas, B. Kov[á]{}cs, T. Vicsek, Z. N. Oltvai and A.-L. Barab[á]{}si, Nature [**427**]{}, 839 (2004).
A.-L. Barab[á]{}si and R. Albert, Science [**286**]{}, 509 (1999).
R. Dobrin and P. M. Duxbury, Phys. Rev. Lett. [**86**]{}, 5076 (2001).
D.-H. Kim, J. D. Noh and H. Jeong, preprint cond-mat/0403719 (2004).
R. V. Hogg and A. T. Craig, [*Introduction to mathematical statistics*]{}, 5th ed. New York, Macmillan (1995).
P. L. Krapivsky and S. Redner, Phys. Rev. E [**63**]{}, 066123 (2001).
Taking into account the degree-degree correlations of our model [@krapivsky01], we find the same $w \gg 1$ limiting behavior as that of Eqs. (\[eq:1\])-(\[eq:3\]).
A.-L. Barab[á]{}si, M. A. de Menezes, S. Balensiefer and J. Brockman, Eur. Phys. J. B (in press), DOI: 10.1140/epjb/e2004-00022-4.
R. C. Prim, Bell Syst. Tech. J. [**36**]{}, 1389 (1957).
G. Szab[ó]{}, M. Alava and J. Kert[é]{}sz, Physica A [**330**]{}, 31 (2003).
The special case of the weights being uniformly random was recently published by Szab[ó]{} et al. [@kertesz03]. The MST of this network is scale-free with $\gamma \approx 3$, and our numerically determined weight distribution agrees with Ref. [@kertesz03]. The acceptance function $P_{MST}(w)/P(w)$ for the case of $w_{ij} = {k_{max}}$ and $w_{ij} = 1/{k_{max}}$ was obtained independently by Szab[ó]{}, Alava, and Kert[é]{}sz. J. Kert[é]{}sz, [*private communication*]{}.
Homotopy types of strict $3$-groupoids {#homotopy-types-of-strict-3-groupoids .unnumbered}
======================================
Carlos Simpson
CNRS, UMR 5580, Université de Toulouse 3
It has been difficult to see precisely the role played by [*strict*]{} $n$-categories in the nascent theory of $n$-categories, particularly as related to $n$-truncated homotopy types of spaces. We propose to show in a fairly general setting that one cannot obtain all $3$-types by any reasonable realization functor [^1] from strict $3$-groupoids (i.e. groupoids in the sense of [@KV]). More precisely we show that one does not obtain the $3$-type of $S^2$. The basic reason is that the Whitehead bracket is nonzero. This phenomenon is actually well-known, but in order to take into account the possibility of an arbitrary reasonable realization functor we have to write the argument in a particular way.
We start by recalling the notion of strict $n$-category. Then we look at the notion of strict $n$-groupoid as defined by Kapranov and Voevodsky [@KV]. We show that their definition is equivalent to a couple of other natural-looking definitions (one of these equivalences was left as an exercise in [@KV]). At the end of these first sections, we have a picture of strict $3$-groupoids having only one object and one $1$-morphism, as being equivalent to abelian monoidal objects $(G,+)$ in the category of groupoids, such that $(\pi _0(G),+)$ is a group. In the case in question, this group will be $\pi
_2(S^2)={{\bf Z}}$. Then comes the main part of the argument. We show that, up to inverting a few equivalences, such an object has a morphism giving a splitting of the Postnikov tower (Proposition \[diagramme\]). It follows that for any realization functor respecting homotopy groups, the Postnikov tower of the realization (which has two stages corresponding to $\pi _2$ and $\pi _3$) splits. This implies that the $3$-type of $S^2$ cannot occur as a realization.
The fact that strict $n$-groupoids are not appropriate for modelling all homotopy types has in principle been known for some time. There are several papers by R. Brown and coauthors on this subject, see [@RBrown1], [@BrownGilbert], [@BrownHiggins], [@BrownHiggins2]; a recent paper by C. Berger [@Berger]; and also a discussion of this in various places in Grothendieck [@Grothendieck]. Other related examples are given in Gordon-Power-Street [@Gordon-Power-Street]. The novelty of our present treatment is that we have written the argument in such a way that it applies to a wide class of possible realization functors, and in particular it applies to the realization functor of Kapranov-Voevodsky (1991) [@KV].
This problem with strict $n$-groupoids can be summed up by saying in R. Brown’s terminology, that they correspond to [*crossed complexes*]{}. While a nontrivial action of $\pi _1$ on the $\pi _i$ can occur in a crossed complex, the higher Whitehead operations such as $\pi _2\otimes \pi _2\rightarrow
\pi _3$ must vanish. This in turn is due to the fundamental “interchange rule” (or “Godement relation” or “Eckmann-Hilton argument”). This effect occurs when one takes two $2$-morphisms $a$ and $b$ both with source and target a $1$-identity $1_x$. There are various ways of composing $a$ and $b$ in this situation, and comparison of these compositions leads to the conclusion that all of the compositions are commutative. In a weak $n$-category, this commutativity would only hold up to higher homotopy, which leads to the notion of “braiding”; and in fact it is exactly the braiding which leads to the Whitehead operation. However, in a strict $n$-category, the commutativity is exact, so the Whitehead operation is trivial.
One can observe that one of the reasons why this problem occurs is that we have the exact $1$-identity $1_x$. This leads to wondering if one could get a better theory by getting rid of the exact identities. We speculate in this direction at the end of the paper by proposing a notion of [*$n$-snucategory*]{}, which would be an $n$-category with strictly associative composition, but without units; we would only require existence of weak units. The details of the notion of weak unit are not worked out.
A preliminary version of this note was circulated in a limited way in the summer of 1997.
I would like to thank: R. Brown, A. Bruguières, A. Hirschowitz, G. Maltsiniotis, and Z. Tamsamani.
[**1. Strict $n$-categories**]{}
In what follows [*all $n$-categories are meant to be strict $n$-categories*]{}. For this reason we try to put in the adjective “strict” as much as possible when $n>1$; but in any case, the very few times that we speak of weak $n$-categories, this will be explicitly stated. We mostly restrict our attention to $n\leq 3$.
In case that isn’t already clear, it should be stressed that everything we do in this section (as well as most of the next and even the subsequent one) is very well known and classical, so much so that I don’t know what the original references are.
To start with, a [*strict $2$-category*]{} $A$ is a collection of objects $A_0$ plus, for each pair of objects $x,y\in A_0$ a category $Hom _A(x,y)$ together with a morphism $$Hom_A(x,y)\times Hom _A(y,z)\rightarrow Hom _A(x,z)$$ which is strictly associative in the obvious way; and such that a unit exists, that is an element $1_x\in Ob\, Hom _A(x,x)$ with the property that multiplication by $1_x$ acts trivially on objects of $Hom _A(x,y)$ or $Hom_A(y,x)$ and multiplication by $1_{1_x}$ acts trivially on morphisms of these categories.
A [*strict $3$-category*]{} $C$ is the same as above but where $Hom _C(x,y)$ are supposed to be strict $2$-categories. There is an obvious notion of direct product of strict $2$-categories, so the above definition applies [*mutatis mutandis*]{}.
For general $n$, the well-known definition is most easily presented by induction on $n$. We assume known the definition of strict $n-1$-category for $n-1$, and we assume known that the category of strict $n-1$-categories is closed under direct product. A [*strict $n$-category*]{} $C$ is then a category enriched [@Kelly] over the category of strict $n-1$-categories. This means that $C$ is composed of a [*set of objects*]{} $Ob(C)$ together with, for each pair $x,y\in Ob(C)$, a [*morphism-object*]{} $Hom _C(x,y)$ which is a strict $n-1$-category; together with a strictly associative composition law $$Hom _C(x,y)\times Hom _C(y,z) \rightarrow Hom _C(x,z)$$ and a morphism $1_x: \ast \rightarrow Hom _C(x,x)$ (where $\ast$ denotes the final object cf below) acting as the identity for the composition law. The [*category of strict $n$-categories*]{} denoted $nStrCat$ is the category whose objects are as above and whose morphisms are the transformations strictly preserving all of the structures. Note that $nStrCat$ admits a direct product: if $C$ and $C'$ are two strict $n$-categories then $C\times C'$ is the strict $n$-category with $$Ob(C\times C'):= Ob(C) \times Ob(C')$$ and for $(x,x'), \; (y,y') \in Ob(C\times C')$, $$Hom _{C\times C'}((x,x'), (y,y')):= Hom _C(x,y)\times Hom _{C'}(x',y')$$ where the direct product on the right is that of $(n-1)StrCat$. Note that the final object of $nStrCat$ is the strict $n$-category $\ast$ with exactly one object $x$ and with $Hom _{\ast}(x,x)= \ast$ being the final object of $(n-1)StrCat$.
The induction inherent in this definition may be worked out explicitly to give the definition as it is presented in [@KV] for example. In doing this one finds that underlying a strict $n$-category $C$ are the sets $Mor ^i(C)$ of [*$i$-morphisms*]{} or [*$i$-arrows*]{}, for $0\leq i\leq n$. The $0$-morphisms are by definition the objects, and $Mor ^i(C)$ is the disjoint union over all pairs $x,y$ of the $Mor ^{i-1}(Hom
_C(x,y))$. The composition laws at each stage lead to various compositions for $i$-morphisms, denoted in [@KV] by $\ast _j$ for $0\leq j < i$. These are partially defined depending on the [*source*]{} and [*target*]{} maps. For a more detailed explanation, refer to the standard references [@BrownHiggins] [@Street] [@KV] (and I am probably missing many older references which could date back even before [@Benabou] [@GabrielZisman]).
One of the most important of the axioms satisfied by the various compositions in a strict $n$-category is variously known under the name of “Eckmann-Hilton argument”, “Godement relations”, “interchange rules” etc. The following discussion of this axiom owes a lot to discussions I had with Z. Tamsamani during his thesis work. This axiom comes from the fact that the composition law $$Hom _C(x,y)\times Hom _C(y,z)\rightarrow Hom _C(x,z)$$ is a morphism with domain the direct product of the two morphism $n-1$-categories from $x$ to $y$ and from $y$ to $z$. In a direct product, compositions in the two factors by definition are independent (commute). Thus, for $1$-morphisms in $Hom _C(x,y)\times Hom _C(y,z)$ (where the composition $\ast _0$ for these $n-1$-categories is actually the composition $\ast _1$ for $C$ and we adopt the latter notation), we have $$(a,b) \ast _1 (c,d) = (a\ast _1c, b \ast _1d).$$ This leads to the formula $$(a\ast _0b) \ast _1 (c\ast _0d) = (a\ast _1c) \ast _0 (b \ast _1d).$$ This seemingly innocuous formula takes on a special meaning when we start inserting identity maps. Suppose $x=y=z$ and let $1_x$ be the identity of $x$ which may be thought of as an object of $Hom _C(x,x)$. Let $e$ denote the $2$-morphism of $C$, identity of $1_x$; which may be thought of as a $1$-morphism of $Hom _C(x,x)$. It acts as the identity for both compositions $\ast _0$ and $\ast _1$ (the reader may check that this follows from the part of the axioms for an $n$-category saying that the morphism $1_x: \ast
\rightarrow Hom _C(x,x)$ is an identity for the composition).
If $a, b$ are also endomorphisms of $1_x$, then the above rule specializes to: $$a\ast _1b =
(a\ast
_0e) \ast _1 (e\ast _0b) = (a\ast _1e) \ast _0 (e \ast _1b) = a \ast _0b.$$ Thus in this case the compositions $\ast _0$ and $\ast _1$ are the same. A different ordering gives the formula $$a\ast _1b =
(e\ast
_0a) \ast _1 (b\ast _0e) = (e\ast _1b) \ast _0 (a \ast _1e) = b \ast _0a.$$ Therefore we have $$a\ast _1b= b\ast _1a = a \ast _0b = b\ast _0a.$$ This argument says, then, that $Ob (Hom _{Hom _C(x,x)}(1_x, 1_x))$ is a commutative monoid and the two natural multiplications are the same.
The same argument extends to the whole monoid structure on the $n-2$-category $Hom _{Hom _C(x,x)}(1_x, 1_x)$:
\[godement\] The two composition laws on the strict $n-2$-category $Hom _{Hom _C(x,x)}(1_x, 1_x)$ are equal, and this law is commutative. In other words, $Hom _{Hom _C(x,x)}(1_x, 1_x)$ is an abelian monoid-object in the category $(n-2)StrCat$.
[$/$$/$$/$]{}
There is a partial converse to the above observation: if the only object is $x$ and the only $1$-morphism is $1_x$ then nothing else can happen and we get the following equivalence of categories.
\[scholium\] Suppose $G$ is an abelian monoid-object in the category $(n-2)StrCat$. Then there is a unique strict $n$-category $C$ such that $$Ob(C)= \{ x\}\;\;\; \mbox{and}\;\;\; Mor ^1(C)=Ob(Hom _C(x,x))=\{
1_x\}$$ and such that $Hom _{Hom _C(x,x)}(1_x, 1_x)=G$ as an abelian monoid-object. This construction establishes an equivalence between the categories of abelian monoid-objects in $(n-2)StrCat$, and the strict $n$-categories having only one object and one $1$-morphism.
[*Proof:*]{} Define the strict $n-1$-category $U$ with $Ob(U)= \{ u\}$ and $Hom _U(u,u)=G$ with its monoid structure as composition law. The fact that the composition law is commutative allows it to be used to define an associative and commutative multiplication $$U\times U \rightarrow U.$$ Now let $C$ be the strict $n$-category with $Ob(C)=\{ x\}$ and $Hom _C(x,x)=U$ with the above multiplication. It is clear that this construction is inverse to the previous one. [$/$$/$$/$]{}
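As a quick illustration of \[scholium\] (not part of the argument, and with the usual convention that a strict $0$-category is just a set): take $n=2$ and let $G$ be the additive monoid $({{\bf N}},+)$ of natural numbers, viewed as an abelian monoid-object in $0StrCat$. The construction produces the strict $2$-category $C$ with $$Ob(C)=\{ x\} ,\;\;\; Mor ^1(C)=\{ 1_x\} ,\;\;\; Hom _{Hom _C(x,x)}(1_x, 1_x)= {{\bf N}},$$ where both compositions $\ast _0$ and $\ast _1$ of $2$-morphisms are given by addition. By \[scholiumgpd\] below, this $C$ is not a $2$-groupoid, since $({{\bf N}},+)$ is not a group.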
It is clear from the construction (the fact that the multiplication on $U$ is again commutative) that the construction can be iterated any number of times. We obtain the following corollary.
\[iterate\] Suppose $C$ is a strict $n$-category with only one object and only one $1$-morphism. Then there exists a strict $n+1$-category $B$ with only one object $b$ and with $Hom _B(b,b)\cong C$.
[*Proof:*]{} By the previous lemmas, $C$ corresponds to an abelian monoid-object $G$ in $(n-2)StrCat$. Construct $U$ as in the proof of \[scholium\], and note that $U$ is an abelian monoid-object in $(n-1)StrCat$. Now apply the result of \[scholium\] directly to $U$ to obtain $B\in (n+1)StrCat$, which will have the desired property. [$/$$/$$/$]{}
[**2. The groupoid condition**]{}
Recall that a [*groupoid*]{} is a category where all morphisms are invertible. This definition generalizes to strict $n$-categories in the following way [@KV]. We give a theorem stating that three versions of this definition are equivalent.
Note that, following [@KV], we [*do not*]{} require strict invertibility of morphisms, thus the notion of strict $n$-groupoid is more general than the notion employed by Brown and Higgins [@BrownHiggins].
Our discussion is in many ways parallel to the treatment of the groupoid condition for weak $n$-categories in [@Tamsamani] and our treatment in this section comes in large part from discussions with Z. Tamsamani about this.
The statement of the theorem-definition is recursive on $n$.
\[thmdef\] Fix $n<\infty$.
[**I. Groupoids**]{} Suppose $A$ is a strict $n$-category. The following three conditions are equivalent (and in this case we say that $A$ is a [*strict $n$-groupoid*]{}). (1) $A$ is an $n$-groupoid in the sense of Kapranov-Voevodsky [@KV]; (2) for all $x,y\in A$, $Hom _A(x,y)$ is a strict $n-1$-groupoid, and for any $1$-morphism $f:x\rightarrow y$ in $A$, the two morphisms of composition with $f$ $$Hom _A(y,z)\rightarrow Hom _A(x,z),\;\;\;\;
Hom _A(w,x)\rightarrow Hom_A(w,y)$$ are equivalences of strict $n-1$-groupoids (see below); (3) for all $x,y\in A$, $Hom _A(x,y)$ is a strict $n-1$-groupoid, and $\tau _{\leq
1}A$ (defined below) is a $1$-groupoid.
[**II. Truncation**]{} If $A$ is a strict $n$-groupoid, then define $\tau _{\leq k}A$ to be the strict $k$-category whose $i$-morphisms are those of $A$ for $i<k$ and whose $k$-morphisms are the equivalence classes of $k$-morphisms of $A$ under the equivalence relation that two are equivalent if there is a $k+1$-morphism joining them. The fact that this is an equivalence relation is a statement about $n-k$-groupoids. The set $\tau _{\leq 0}A$ will also be denoted $\pi _0A$. The truncation is again a $k$-groupoid, and for $n$-groupoids $A$ the truncation coincides with the operation defined in [@KV].
[**III. Equivalence**]{} A morphism $f:A\rightarrow B$ of strict $n$-groupoids is said to be an [*equivalence*]{} if the following equivalent conditions are satisfied: (a) (this is the definition in [@KV]) $f$ induces an isomorphism $\pi _0A\rightarrow
\pi _0B$, and for every object $a\in A$ $f$ induces an isomorphism $\pi _i(A,a)\stackrel{\cong}{\rightarrow} \pi _i(B, f(a))$ where these homotopy groups are as defined in [@KV]; (b) $f$ induces a surjection $\pi _0A\rightarrow \pi _0B$ and for every pair of objects $x,y\in A$ $f$ induces an equivalence of $n-1$-groupoids $Hom_A(x,y)\rightarrow Hom _B(f(x), f(y))$; (c) if $u,v$ are $i$-morphisms in $A$ sharing the same source and target, and if $r$ is an $i+1$-morphism in $B$ going from $f(u)$ to $f(v)$ then there exists an $i+1$-morphism $t$ in $A$ going from $u$ to $v$ and an $i+2$-morphism in $B$ going from $f(t)$ to $r$ (this includes the limiting cases $i=-1$ where $u$ and $v$ are not specified, and $i=n-1, n$ where “$n+1$-morphisms” mean equalities between $n$-morphisms and “$n+2$-morphisms” are not specified).
[**IV. Sub-lemma**]{} If $f: A\rightarrow B$ and $g: B\rightarrow C$ are morphisms of strict $n$-groupoids and if any two of $f$, $g$ and $gf$ are equivalences, then so is the third.
[**V. Second sub-lemma**]{} If $$A\stackrel{f}{\rightarrow}B\stackrel{g}{\rightarrow}C\stackrel{h}{\rightarrow}D$$ are morphisms of strict $n$-groupoids and if $hg$ and $gf$ are equivalences, then $g$ is an equivalence.
[*Proof:*]{} It is clear for $n=0$, so we assume $n\geq 1$ and proceed by induction on $n$: we assume that the theorem is true (and all definitions are known) for strict $n-1$-categories.
We first discuss the existence of truncation (part II), for $k\geq 1$. Note that in this case $\tau _{\leq k}A$ may be defined as the strict $k$-category with the same objects as $A$ and with $$Hom _{\tau _{\leq k}A}(x,y):= \tau _{\leq k-1}Hom _A(x,y).$$ Thus the fact that the relation in question is an equivalence relation, is a statement about $n-1$-categories and known by induction. Note that the truncation operation clearly preserves any one of the three groupoid conditions (1), (2), (3). Thus we may affirm in a strong sense that $\tau _{\leq k}(A)$ is a $k$-groupoid without knowing the equivalence of the conditions (1)-(3).
Note also that the truncation operation for $n$-groupoids is the same as that defined in [@KV] (they define truncation for general strict $n$-categories but for $n$-categories which are not groupoids, their definition is different from that of [@Tamsamani] and not all that useful).
For $0\leq k\leq k'\leq n$ we have $$\tau _{\leq k}(\tau _{\leq k'}(A)) = \tau _{\leq k}(A).$$ To see this note that the equivalence relation used to define the $k$-arrows of $\tau _{\leq k}(A)$ is the same if taken in $A$ or in $\tau _{\leq
k+1}(A)$—the existence of a $k+1$-arrow going between two $k$-arrows is equivalent to the existence of an equivalence class of $k+1$-arrows going between the two $k$-arrows.
Finally using the above remark we obtain the existence of the truncation $\tau
_{\leq 0}(A)$: the relation is the same as for the truncation $\tau _{\leq
0}(\tau _{\leq 1}(A))$, and $\tau _{\leq 1}(A)$ is a strict $1$-groupoid in the usual sense so the arrows are invertible, which shows that the relation used to define the $0$-arrows (i.e. objects) in $\tau _{\leq 0}(A)$ is in fact an equivalence relation.
We complete our discussion of truncation by noting that there is a natural morphism of strict $n$-categories $A\rightarrow \tau _{\leq k}(A)$, where the right hand side ([*a priori*]{} a strict $k$-category) is considered as a strict $n$-category in the obvious way.
We turn next to the notion of equivalence (part III), and prove that conditions (a) and (b) are equivalent. This notion for $n$-groupoids will not enter into the subsequent treatment of part (I)—what does enter is the notion of equivalence for $n-1$-groupoids, which is known by induction—so we may assume the equivalence of definitions (1)-(3) for our discussion of part (III).
Recall first of all the definition of the homotopy groups. Let $1^i_a$ denote the $i$-fold iterated identity of an object $a$; it is an $i$-morphism, the identity of $1^{i-1}_a$ (starting with $1^0_a=a$). Then $$\pi _i(A,a):= Hom _{\tau _{\leq i}(A)}(1^{i-1}_a, 1^{i-1}_a).$$ This definition is completed by setting $\pi _0(A):= \tau _{\leq 0}(A)$. These definitions are the same as in [@KV]. Note directly from the definition that for $i\leq k$ the truncation morphism induces isomorphisms $$\pi _i(A,a)\stackrel{\cong}{\rightarrow}\pi _i(\tau _{\leq k}(A), a).$$ Also for $i\geq 1$ we have $$\pi _i(A,a)= \pi _{i-1}(Hom _A(a,a), 1_a).$$ One shows that the $\pi _i$ are abelian for $i\geq 2$. This is part of a more general principle, the “interchange rule” or “Godement relations” referred to in §1.
Suppose $f:A\rightarrow B$ is a morphism of strict $n$-groupoids satisfying condition (b). From the immediately preceding formula and the inductive statement for $n-1$-groupoids, we get that $f$ induces isomorphisms on the $\pi _i$ for $i\geq 1$. On the other hand, the truncation $\tau _{\leq 1}(f)$ satisfies condition (b) for a morphism of $1$-groupoids, and this is readily seen to imply that $\pi _0(f)$ is an isomorphism. Thus $f$ satisfies condition (a).
Suppose on the other hand that $f:A\rightarrow B$ is a morphism of strict $n$-groupoids satisfying condition (a). Then of course $\pi _0(f)$ is surjective. Consider two objects $x,y\in A$ and look at the induced morphism $$f^{x,y}: Hom _A(x,y)\rightarrow Hom _B(f(x), f(y)).$$ We claim that $f^{x,y}$ satisfies condition (a) for a morphism of $n-1$-groupoids. For this, consider a $1$-morphism from $x$ to $y$, i.e. an object $r\in Hom _A(x,y)$. By version (2) of the groupoid condition for $A$, multiplication by $r$ induces an equivalence of $n-1$-groupoids $$m(r): Hom _A(x,x)\rightarrow Hom _A(x,y),$$ and furthermore $m(r)(1_x)=r$. The same is true in $B$: multiplication by $f(r)$ induces an equivalence $$m(f(r)): Hom _B(f(x), f(x))\rightarrow Hom _B(f(x), f(y)).$$ The fact that $f$ is a morphism implies that these fit into a commutative square $$\begin{array}{ccc}
Hom _A(x,x)&\rightarrow &Hom _A(x,y)\\
\downarrow && \downarrow \\
Hom _B(f(x), f(x))&\rightarrow &Hom _B(f(x), f(y)).
\end{array}$$ The equivalence condition (a) for $f$ implies that the left vertical morphism induces isomorphisms $$\pi _i(Hom _A(x,x), 1_x)\stackrel{\cong}{\rightarrow}
\pi _i(Hom _B(f(x), f(x)), 1_{f(x)}).$$ Therefore the right vertical morphism (i.e. $f_{x,y}$) induces isomorphisms $$\pi _i(Hom _A(x,y), r)\stackrel{\cong}{\rightarrow}
\pi _i(Hom _B(f(x), f(y)), f(r)),$$ this for all $i\geq 1$. We have now verified these isomorphisms for any base-object $r$. A similar argument implies that $f^{x,y}$ induces an injection on $\pi _0$. On the other hand, the fact that $f$ induces an isomorphism on $\pi _0$ implies that $f^{x,y}$ induces a surjection on $\pi _0$ (note that these last two statements are reduced to statements about $1$-groupoids by applying $\tau _{\leq 1}$ so we don’t give further details). All of these statements taken together imply that $f^{x,y}$ satisfies condition (a), and by the inductive statement of the theorem for $n-1$-groupoids this implies that $f^{x,y}$ is an equivalence. Thus $f$ satisfies condition (b).
We now remark that condition (b) is equivalent to condition (c) for a morphism $f:A\rightarrow B$. Indeed, the part of condition (c) for $i=-1$ is, by the definition of $\pi _0$, identical to the condition that $f$ induces a surjection $\pi _0(A)\rightarrow
\pi _0(B)$. And the remaining conditions for $i=0,\ldots , n+1$ are identical to the conditions of (c) corresponding to $j=i-1=-1,\ldots , (n-1)+1$ for all the morphisms of $n-1$-groupoids $Hom _A(x,y)\rightarrow Hom _B(f(x), f(y))$. (In terms of $u$ and $v$ appearing in the condition in question, take $x$ to be the source of the source of the source …, and take $y$ to be the target of the target of the target …). Thus by induction on $n$ (i.e. by the equivalence $(b)\Leftrightarrow (c)$ for $n-1$-groupoids), the conditions (c) for $f$ for $i=0,\ldots , n+1$, are equivalent to the conditions that $Hom _A(x,y)\rightarrow Hom _B(f(x), f(y))$ be equivalences of $n-1$-groupoids. Thus condition (c) for $f$ is equivalent to condition (b) for $f$, which completes the proof of part (III) of the theorem.
We now proceed with the proof of part (I) of Theorem \[thmdef\]. Note first of all that the implications $(1)\Rightarrow (2)$ and $(2)\Rightarrow (3)$ are easy. We give a short discussion of $(1)\Rightarrow (3)$ anyway, and then we prove $(3)\Rightarrow (2)$ and $(2)\Rightarrow (1)$.
Note also that the equivalence $(1)\Leftrightarrow (2)$ is the content of Proposition 1.6 of [@KV]; we give a proof here because the proof of Proposition 1.6 was “left to the reader” in [@KV].
[**$(1)\Rightarrow (3)$:**]{} Suppose $A$ is a strict $n$-category satisfying condition $(1)$. This condition (from [@KV]) is compatible with truncation, so $\tau _{\leq 1} (A)$ satisfies condition $(1)$ for $1$-categories; which in turn is equivalent to the standard condition of being a $1$-groupoid, so we get that $\tau _{\leq 1}(A)$ is a $1$-groupoid. On the other hand, the conditions $(1)$ from [@KV] for $i$-arrows, $1\leq i \leq n$, include the same conditions for the $i-1$-arrows of $Hom _A(x,y)$ for any $x,y\in Ob(A)$ (the reader has to verify this by looking at the definition in [@KV]). Thus by the inductive statement of the present theorem for strict $n-1$-categories, $Hom _A(x,y)$ is a strict $n-1$-groupoid. This shows that $A$ satisfies condition $(3)$.
[**$(3)\Rightarrow (2)$:**]{} Suppose $A$ is a strict $n$-category satisfying condition $(3)$. It already satisfies the first part of condition $(2)$, by hypothesis. Thus we have to show the second part, for example that for $f:
x\rightarrow y$ in $Ob(Hom _A(x,y))$, composition with $f$ induces an equivalence $$Hom _A(y,z)\rightarrow Hom _A(x,z)$$ (the other part is dual and has the same proof which we won’t repeat here).
In order to prove this, we need to make a digression about the effect of composition with $2$-morphisms. Suppose $f,g\in Ob (Hom _A(x,y))$ and suppose that $u$ is a $2$-morphism from $f$ to $g$—this last supposition may be rewritten $$u\in Ob(Hom _{Hom _A(x,y)}(f,g)).$$ [*Claim:*]{} Suppose $z$ is another object; we claim that if composition with $f$ induces an equivalence $Hom _A(y,z)\rightarrow Hom _A(x,z)$, then composition with $g$ also induces an equivalence $Hom _A(y,z)\rightarrow Hom _A(x,z)$.
To prove the claim, suppose that $h,k$ are two $1$-morphisms from $y$ to $z$. We now obtain a diagram $$\begin{array}{ccc}
Hom _{Hom _A(y,z)}(h,k) & \rightarrow & Hom _{Hom _A(x,z)}(hf, kf)\\
\downarrow && \downarrow \\
Hom _{Hom _A(x,z)}(hg, kg) & \rightarrow & Hom _{Hom _A(x,z)}(hf, kg),
\end{array}$$ where the top arrow is given by composition $\ast _0$ with $1_f$; the left arrow by composition $\ast _0$ with $1_g$; the bottom arrow by composition $\ast _1$ with the $2$-morphism $h\ast _0u$; and the right morphism is given by composition with $k\ast _0u$. This diagram commutes (that is the “Godement rule” or “interchange rule” cf [@KV] p. 32). By the inductive statement of the present theorem (version (2) of the groupoid condition) for the $n-1$-groupoid $Hom _A(x,z)$, the morphisms on the bottom and on the right in the above diagram are equivalences. The hypothesis in the claim that $f$ is an equivalence means that the morphism along the top of the diagram is an equivalence; thus by the sub-lemma (part (IV) of the present theorem) applied to the $n-2$-groupoids in the diagram, we get that the morphism on the left of the diagram is an equivalence. This provides the second half of the criterion (b) of part (III) for showing that the morphism of composition with $g$, $Hom _A(y,z)\rightarrow Hom _A(x,z)$, is an equivalence of $n-1$-groupoids.
To finish the proof of the claim, we now verify the first half of criterion (b) for the morphism of composition with $g$ (in this part we use directly the condition (3) for $A$ and don’t use either $f$ or $u$). Note that $\tau _{\leq
1}(A)$ is a $1$-groupoid, by the condition (3) which we are assuming. Note also that (by definition) $$\pi _0Hom _A(y,z)= Hom _{\tau _{\leq 1}A}(y,z) \;\;\; \mbox{and}\;\;\;
\pi _0Hom _A(x,z)= Hom _{\tau _{\leq 1}A}(x,z),$$ and the morphism in question here is just the morphism of composition by the image of $g$ in $\tau _{\leq 1}(A)$. Invertibility of this morphism in $\tau _{\leq 1}(A)$ implies that the composition morphism $$Hom _{\tau _{\leq 1}A}(y,z)\rightarrow Hom _{\tau _{\leq 1}A}(x,z)$$ is an isomorphism. This completes verification of the first half of criterion (b), so we get that composition with $g$ is an equivalence. This completes the proof of the claim.
We now return to the proof of the composition condition for (2). The fact that $\tau _{\leq 1}(A)$ is a $1$-groupoid implies that given $f$ there is another morphism $h$ from $y$ to $x$ such that the class of $fh$ is equal to the class of $1_y$ in $\pi _0Hom _A(y,y)$, and the class of $hf$ is equal to the class of $1_x$ in $\pi _0Hom _A(x,x)$. This means that there exist $2$-morphisms $u$ from $1_y$ to $fh$, and $v$ from $1_x$ to $hf$. By the above claim (and the fact that the compositions with $1_x$ and $1_y$ act as the identity and in particular are equivalences), we get that composition with $fh$ is an equivalence $$\{ fh\} \times Hom _A(y,z) \rightarrow Hom _A(y,z),$$ and that composition with $hf$ is an equivalence $$\{ hf\} \times Hom _A(x,z)\rightarrow Hom _A(x,z).$$ Let $$\psi _f: Hom _A(y,z)\rightarrow Hom _A(x,z)$$ be the morphism of composition with $f$, and let $$\psi _h: Hom _A(x,z)\rightarrow Hom _A(y,z)$$ be the morphism of composition with $h$. We have seen that $\psi _h\psi _f$ and $\psi _f\psi _h$ are equivalences. By the second sub-lemma (part (V) of the theorem) applied to $n-1$-groupoids, these imply that $\psi _f$ is an equivalence.
The proof for composition in the other direction is the same; thus we have obtained condition (2) for $A$.
[**$(2)\Rightarrow (1)$:**]{} Look at the condition (1) by refering to [@KV]: in question are the conditions $GR'_{i,k}$ and $GR''_{i,k}$ ($i<k\leq
n$) of Definition 1.1, p. 33 of [@KV]. By the inductive version of the present equivalence for $n-1$-groupoids and by the part of condition (2) which says that the $Hom _A(x,y)$ are $n-1$-groupoids, we obtain the conditions $GR'_{i,k}$ and $GR''_{i,k}$ for $i\geq 1$. Thus we may now restrict our attention to the condition $GR'_{0,k}$ and $GR''_{0,k}$. For a $1$-morphism $a$ from $x$ to $y$, the conditions $GR'_{0,k}$ for all $k$ with respect to $a$, are the same as the condition that for all $w$, the morphism of pre-multiplication by $a$ $$Hom _A(w,x)\times \{ a\} \rightarrow Hom _A(w,y)$$ is an equivalence according to the version (c) of the notion of equivalence (cf Part (III) of this theorem). Thus, condition $GR'_{0,k}$ follows from the second part of condition (2) (for pre-multiplication). Similarly condition $GR''_{0,k}$ follows from the second part of condition (2) for post-multiplication by every $1$-morphism $a$. Thus condition (2) implies condition (1). This completes the proof of Part (I) of the theorem.
For the sub-lemma (part (IV) of the theorem), using the fact that isomorphisms of sets satisfy the same “three for two” property, and using the characterization of equivalences in terms of homotopy groups (condition (a)) we immediately get two of the three statements: that if $f$ and $g$ are equivalences then $gf$ is an equivalence; and that if $gf$ and $g$ are equivalences then $f$ is an equivalence. Suppose now that $gf$ and $f$ are equivalences; we would like to show that $g$ is an equivalence. First of all it is clear that if $x\in Ob(A)$ then $g$ induces an isomorphism $\pi _i(B, f(x))\cong \pi _i(C, gf(x))$ (resp. $\pi _0(B)\cong \pi _0(C)$). Suppose now that $y\in Ob(B)$, and choose a $1$-morphism $u$ going from $y$ to $f(x)$ for some $x\in Ob(A)$ (this is possible because $f$ is surjective on $\pi
_0$). By condition (2) for being a groupoid, composition with $u$ induces equivalences along the top row of the diagram $$\begin{array}{ccccc}
Hom _B(y,y) &\rightarrow &Hom _B(y, f(x))&\leftarrow &Hom _B(f(x), f(x))\\
\downarrow && \downarrow && \downarrow \\
Hom _C(g(y),g(y)) &\rightarrow &Hom _C(g(y), gf(x))&\leftarrow &
Hom _C(gf(x), gf(x)).
\end{array}$$ Similarly composition with $g(u)$ induces equivalences along the bottom row. The sub-lemma for $n-1$-groupoids applied to the sequence $$Hom _A(x,x)\rightarrow Hom _B(f(x), f(x))\rightarrow Hom _C(gf(x), gf(x))$$ as well as the hypothesis that $f$ is an equivalence, imply that the rightmost vertical arrow in the above diagram is an equivalence. Again applying the sub-lemma to these $n-1$-groupoids yields that the leftmost vertical arrow is an equivalence. In particular $g$ induces isomorphisms $$\pi _i(B,y) = \pi _{i-1}(Hom _B(y,y), 1_y) \stackrel{\cong}{\rightarrow}
\pi _{i-1}(Hom _C(g(y), g(y)),1_{g(y)}) = \pi _i(C, g(y)).$$ This completes the verification of condition (a) for the morphism $g$, completing the proof of part (IV) of the theorem.
Finally we prove the second sub-lemma, part (V) of the theorem (from which we now adopt the notations $A,B,C,D,f,g,h$). Note first of all that applying $\pi _0$ gives the same situation for maps of sets, so $\pi _0(g)$ is an isomorphism. Next, suppose $x\in Ob(A)$. Then we obtain a sequence $$\pi _i(A,x)\rightarrow \pi _i(B,f(x))
\rightarrow \pi _i(C, gf(x))\rightarrow \pi _i(D, hgf(x)),$$ such that the composition of the first pair and also of the last pair are isomorphisms; thus $g$ induces an isomorphism $\pi _i(B , f(x))\cong \pi _i(C, gf(x))$. Now, by the same argument as for Part (IV) above, (using the hypothesis that $f$ induces a surjection $\pi
_0(A)\rightarrow \pi _0(B)$) we get that for any object $y\in Ob(B)$, $g$ induces an isomorphism $\pi _i(B , y)\cong \pi _i(C, g(y))$. By definition (a) of Part (III) we have now shown that $g$ is an equivalence. This completes the proof of the theorem. [$/$$/$$/$]{}
Let $nStrGpd$ be the category of strict $n$-groupoids.
We close out this section by looking at how the groupoid condition fits in with the discussion of \[scholium\] and \[iterate\]. Let $C$ be a strict $n$-category with only one object $x$. Then $C$ is an $n$-groupoid if and only if $Hom _C(x,x)$ is an $n-1$-groupoid and $\pi _0Hom _C(x,x)$ (which has a structure of monoid) is a group. This is version (3) of the definition of groupoid in \[thmdef\]. Iterating this remark one more time we get the following statement.
\[scholiumgpd\] The construction of \[scholium\] establishes an equivalence of categories between the strict $n$-groupoids having only one object and only one $1$-morphism, and the abelian monoid-objects $G$ in $(n-2)StrGpd$ such that the monoid $\pi _0(G)$ is a group.
[$/$$/$$/$]{}
\[iterategpd\] Suppose $C$ is a strict $n$-category having only one object and only one $1$-morphism, and let $B$ be the strict $n+1$-category of \[iterate\] with one object $b$ and $Hom _B(b,b)= C$. Then $B$ is a strict $n+1$-groupoid if and only if $C$ is a strict $n$-groupoid.
[*Proof:*]{} Keep the notations of the proof of \[iterate\]. If $C$ is a groupoid this means that $G$ satisfies the condition that $\pi
_0(G)$ be a group, which in turn implies that $U$ is a groupoid. Note that $\pi
_0(U)=\ast$ is automatically a group; so applying the observation \[scholiumgpd\] once again, we get that $B$ is a groupoid. In the other direction, if $B$ is a groupoid then $C=Hom _B(b,b)$ is a groupoid by versions (2) and (3) of the definition of groupoid. [$/$$/$$/$]{}
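To illustrate the preceding statements (an example not needed for what follows): let $H$ be an abelian group, regarded as an abelian monoid-object in $0StrGpd$, i.e. a set with a group law. Applying \[scholiumgpd\] gives a strict $2$-groupoid $K_2$ with one object, one $1$-morphism, and $H$ as set of $2$-morphisms; applying \[iterate\] and \[iterategpd\] repeatedly then gives, for every $n\geq 2$, a strict $n$-groupoid $K_n$ with a single $i$-morphism for each $i<n$ and with $H$ as $n$-morphisms. Its homotopy groups are $$\pi _0(K_n)=\ast , \;\;\; \pi _i(K_n,x)=\{ 1\} \;\; (1\leq i <n), \;\;\; \pi _n(K_n,x)= H .$$ The strict $3$-groupoid $D$ used in the proof of Proposition \[diagramme\] below is exactly the case $n=3$ of this construction.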
[**3. Realization functors**]{}
Recall that $nStrGpd$ is the category of strict $n$-groupoids as defined above \[thmdef\]. Let $Top$ be the category of topological spaces. The following definition encodes the minimum of what one would expect for a reasonable realization functor from strict $n$-groupoids to spaces.
\[realizationdef\] A [*realization functor for strict $n$-groupoids*]{} is a functor $$\Re : nStrGpd \rightarrow Top$$ together with the following natural transformations: $$r:Ob (A) \rightarrow \Re (A);$$ $$\zeta _i(A,x): \pi _i (A, x) \rightarrow \pi _i (\Re (A), r(x)),$$ the latter including $\zeta _0(A): \pi _0(A)\rightarrow \pi _0(\Re (A))$; such that the $\zeta _i(A,x)$ and $\zeta _0(A)$ are isomorphisms for $0\leq i \leq n$, and such that the $\pi _i(\Re (A), y)$ vanish for $i>n$.
\[realization\] [([@KV])]{} There exists a realization functor $\Re$ for strict $n$-groupoids.
Kapranov and Voevodsky [@KV] construct such a functor. Their construction proceeds by first defining a notion of “diagrammatic set”; they define a realization functor from $n$-groupoids to diagrammatic sets (denoted $Nerv$), and then define the topological realization of a diagrammatic set (denoted $|
\cdot |$). The composition of these two constructions gives a realization functor $$G \mapsto \Re _{KV}(G):= | Nerv(G)|$$ from strict $n$-groupoids to spaces. Note that this functor $\Re_{KV}$ satisfies the axioms of \[realizationdef\] as a consequence of Propositions 2.7 and 3.5 of [@KV].
One obtains a different construction by considering strict $n$-groupoids as weak $n$-groupoids in the sense of [@Tamsamani] (multisimplicial sets) and then taking the realization of [@Tamsamani]. This construction is actually probably due to someone from the Australian school many years beforehand and we call it the [*standard realization*]{} $\Re _{\rm std}$. The properties of \[realizationdef\] can be extracted from [@Tamsamani] (although again they are probably classical results).
We don’t claim here that any two realization functors must be the same, and in particular the realization $\Re _{KV}$ could [*a priori*]{} be different from the standard one. This is why we shall work, in what follows, with an arbitrary realization functor satisfying the axioms of \[realizationdef\].
Here are some consequences of the axioms for a realization functor. If $C\rightarrow C'$ is a morphism of strict $n$-groupoids inducing isomorphisms on the $\pi _i$ then $\Re
(C)\rightarrow \Re (C')$ is a weak homotopy equivalence. Conversely if $f:C\rightarrow C'$ is a morphism of strict $n$-groupoids which induces a weak equivalence of realizations then $f$ is an equivalence.
[**4. The case of the standard realization**]{}
Before getting to our main result which concerns an arbitrary realization functor satisfying \[realizationdef\], we take note of an easier argument which shows that the standard realization functor cannot give rise to arbitrary homotopy types.
\[compatiblelooping\] A collection of realization functors $\Re ^n$ for $n$-groupoids ($0\leq n <
\infty$) satisfying \[realizationdef\] is said to be [*compatible with looping*]{} if there exist transformations natural in an $n$-groupoid $A$ and an object $x\in Ob(A)$, $$\varphi (A, x): \Re ^{n-1}(Hom _A(x,x))\rightarrow \Omega ^{r(x)}\Re ^n(A)$$ (where $\Omega ^{r(x)}$ means the space of loops based at $r(x)$), such that for $i\geq 1$ the following diagram commutes: $$\begin{array}{ccc}
\pi _i(A, x) & = \pi _{i-1}(Hom _A(x,x), 1_x) \rightarrow &
\pi _{i-1}(\Re ^{n-1}(Hom _A(x,x)), r(1_x))\\
\downarrow &&\downarrow \\
\pi _i(\Re ^n(A), r(x)) & \leftarrow & \pi _{i-1}( \Omega ^{r(x)}\Re ^n(A),
cst(r(x)))
\end{array}$$ where the top arrow is $\zeta _{i-1}(Hom _A(x,x), 1_x)$, the left arrow is $\zeta _{i}(A,x)$, the right arrow is induced by $\varphi (A, x)$, and the bottom arrow is the canonical arrow from topology. (When $i=1$, suppress the basepoints in the $\pi _{i-1}$ in the diagram.)
[*Remark:*]{} The arrows on the top, the bottom and the left are isomorphisms in the above diagram, so the arrow on the right is an isomorphism and we obtain as a corollary of the definition that the $\varphi (A,x)$ are actually weak equivalences.
[*Remark:*]{} The collection of standard realizations $\Re ^n_{\rm std}$ for $n$-groupoids, is compatible with looping. We leave this as an exercise for the reader.
Recall the statements of \[iterate\] and \[iterategpd\]: if $A$ is a strict $n$-category with only one object $x$ and only one $1$-morphism $1_x$, then there exists a strict $n+1$-category $B$ with one object $y$, and with $Hom _B(y,y)=A$; and $A$ is a strict $n$-groupoid if and only if $B$ is a strict $n+1$-groupoid.
\[forstandard\] Suppose $\{ \Re ^n\}$ is a collection of realization functors \[realizationdef\] compatible with looping \[compatiblelooping\]. Then if $A$ is a $1$-connected strict $n$-groupoid (i.e. $\pi _0(A)=\ast$ and $\pi _1(A,x)=\{ 1\}$), the space $\Re ^n(A)$ is weak-equivalent to a loop space.
[*Proof:*]{} Let $A'\subset A$ be the sub-$n$-category having one object $x$ and one $1$-morphism $1_x$. For $i\geq 2$ the inclusion induces isomorphisms $$\pi _i(A', x) \cong \pi _i(A,x),$$ and in view of the $1$-connectedness of $A$ this means (according to the definition of \[thmdef\] III (a)) that the morphism $A'\rightarrow A$ is an equivalence. It follows (by definition \[realizationdef\]) that $\Re
^n(A')\rightarrow \Re ^n(A)$ is a weak equivalence. Now $A'$ satisfies the hypothesis of \[iterate\], \[iterategpd\] as recalled above, so there is an $n+1$-groupoid $B$ having one object $y$ such that $A'=Hom _B(y,y)$. By the definition of “compatible with looping” and the subsequent remark that the morphism $\varphi (B,y)$ is a weak equivalence, we get that $\varphi (B,y)$ induces a weak equivalence $$\Re ^n(A') \rightarrow \Omega ^{r(y)}\Re ^{n+1}(B).$$ Thus $\Re ^n(A)$ is weak-equivalent to the loop-space of $\Re ^{n+1}(B)$. [$/$$/$$/$]{}
The following corollary is a statement which seems to be due to C. Berger [@Berger] (although the statement appears without proof in Grothendieck [@Grothendieck]). See also R. Brown and coauthors [@RBrown1] [@BrownGilbert] [@BrownHiggins] [@BrownHiggins2].
\[berger\] [(C. Berger [@Berger])]{} There is no strict $3$-groupoid $A$ such that the standard realization $\Re _{\rm std} (A)$ is weak-equivalent to the $3$-type of $S^2$.
[*Proof:*]{} The $3$-type of $S^2$ is not a loop-space. By the previous corollary (and the fact that the standard realizations are compatible with looping, which we have above left as an exercise for the reader), it is impossible for $\Re _{\rm std}(A)$ to be the $3$-type of $S^2$. [$/$$/$$/$]{}
[**5. Nonexistence of strict $3$-groupoids giving rise to the $3$-type of $S^2$**]{}
It is not completely clear whether Kapranov and Voevodsky claim that their realization functors are compatible with looping in the sense of \[compatiblelooping\], so Berger’s negative result (Corollary \[berger\] above) might not apply. The main work of the present paper is to extend this negative result to [*any*]{} realization functor satisfying the minimal definition \[realizationdef\], in particular getting a result which applies to the realization functor of [@KV].
\[noS2\] Let $\Re$ be any realization functor satisfying the properties of Definition \[realizationdef\]. Then there does not exist a strict $3$-groupoid $C$ such that $\Re (C)$ is weak-equivalent to the $3$-truncation of the homotopy type of $S^2$.
Let $\Re _{KV}$ be the realization functor of Kapranov and Voevodsky [@KV] cf the discussion above. If we assume that Propositions 2.7 and 3.5 of [@KV] (stating that $\Re _{KV}$ satisfies the axioms \[realizationdef\]) are true, then Corollary 3.8 of [@KV] is not true, i.e. $\Re_{KV}$ does not induce an equivalence between the homotopy categories of strict $3$-groupoids and $3$-truncated topological spaces.
[*Proof:*]{} According to Proposition \[noS2\], for any realization functor satisfying \[realizationdef\], the induced functor on the homotopy categories is not essentially surjective: its essential image doesn’t contain the $3$-type of $S^2$. [$/$$/$$/$]{}
Proposition \[noS2\] is very similar to the result of Brown and Higgins [@BrownHiggins] and also the recent result of C. Berger [@Berger] (cf \[berger\] above). As was noted in [@KV], the result of Brown and Higgins concerns the more restrictive notion of groupoid where one requires that all morphisms have strict inverses (however, see also [@RBrown1], [@BrownHiggins2]). As in [@KV], that restriction is not included in the definition \[thmdef\]. Berger considers strict $n$-groupoids according to the definition \[thmdef\] (i.e. with inverses non-strict) as well, but his negative result applies only to a standard realization functor and as such, doesn’t [*a priori*]{} directly contradict [@KV].
The basic difference in the present approach is that we make no reference to any particular construction of $\Re$ but show that the proposition holds for any realization construction having the properties of Definition \[realizationdef\].
The fact that strict $n$-groupoids don’t model all homotopy types is also mentioned in Grothendieck [@Grothendieck]. The basic idea, in the setting of $3$-categories which are not necessarily groupoids, is contained in some examples which G. Maltsiniotis pointed out to me, in Gordon-Power-Street [@Gordon-Power-Street], where examples are given of weak $3$-categories not equivalent to strict ones. This in turn is related to the difference between braided monoidal categories and symmetric monoidal categories; see for example the nice discussion in Baez-Dolan [@BaezDolan].
In order to prove \[noS2\], we will prove the following statement (which contains the main part of the argument). It basically says that the Postnikov tower of a simply connected strict $3$-groupoid $C$ splits.
\[diagramme\] Suppose $C$ is a strict $3$-groupoid with an object $c$ such that $\pi
_0(C)=\ast$, $\pi _1(C,c)=\{ 1\}$, $\pi _2(C,c)
= {{\bf Z}}$ and $\pi _3(C,c)=H$ for an abelian group $H$. Then there exists a diagram of strict $3$-groupoids $$C \stackrel{g}{\leftarrow} B \stackrel{f}{\leftarrow} A
\stackrel{h}{\rightarrow} D$$ with objects $b\in Ob(B)$, $a\in Ob(A)$, $d\in Ob(D)$ such that $f(a)=b$, $g(b)=c$, $h(a)=d$. The diagram is such that $g$ and $f$ are equivalences of strict $3$-groupoids, and such that $\pi _0(D)=\ast$, $\pi _1(D,d)=\{ 1\}$, $\pi _2(D,d)=\{ 0\}$, and such that $h$ induces an isomorphism $$\pi _3(h): \pi _3(A,a)=H \stackrel{\cong}{\rightarrow} \pi _3(D,d).$$
[*Proof of Proposition \[noS2\] using Proposition \[diagramme\]*]{}
Suppose for the moment that we know Proposition \[diagramme\]; with this we will prove \[noS2\]. Fix a realization functor $\Re$ for strict $3$-groupoids satisfying the axioms \[realizationdef\], and assume that $C$ is a strict $3$-groupoid such that $\Re (C)$ is weak homotopy-equivalent to the $3$-type of $S^2$. We shall derive a contradiction.
In [*résumé*]{} the argument is this: applying the realization functor to the diagram given by \[diagramme\] and inverting the first two maps, which are weak homotopy equivalences, we would get a map $$\tau _{\leq 3}(S^2)= \Re (C) \rightarrow \Re (D) = K(H, 3)$$ (with $H={{\bf Z}}$). This is a class in $H^3(S^2, H)$. The hypothesis that $\Re (h)$ is an isomorphism on $\pi _3$ means that this class is nonzero when applied to $\pi _3(S^2)$ via the Hurewicz homomorphism; but $H^3(S^2, {{\bf Z}})= 0$, a contradiction.
Here is a full description of the argument. Apply Proposition \[diagramme\] to $C$. Choose an object $c\in Ob(C)$. Note that, because of the isomorphisms between homotopy sets or groups \[realizationdef\], we have $\pi _0(C)=\ast$, $\pi _1(C,c)=\{ 1\}$, $\pi _2(C,c)
= {{\bf Z}}$ and $\pi _3(C,c)={{\bf Z}}$, so \[diagramme\] applies with $H={{\bf Z}}$. We obtain a sequence of strict $3$-groupoids $$C \stackrel{g}{\leftarrow} B \stackrel{f}{\leftarrow} A
\stackrel{h}{\rightarrow} D.$$ This gives the diagram of spaces $$\Re (C) \stackrel{\Re (g)}{\leftarrow} \Re (B) \stackrel{\Re
(f)}{\leftarrow} \Re
(A) \stackrel{\Re (h)}{\rightarrow} \Re (D).$$ The axioms \[realizationdef\] for $\Re$ imply that $\Re$ transforms equivalences of strict $3$-groupoids into weak homotopy equivalences of spaces. Thus $\Re (f)$ and $\Re (g)$ are weak homotopy equivalences and we get that $\Re (A)$ is weak homotopy equivalent to the $3$-type of $S^2$.
On the other hand, again by the axioms \[realizationdef\], we have that $\Re (D)$ is $2$-connected, and $\pi
_3(\Re (D), r(d))=H$ (via the isomorphism $\pi _3(D,d)\cong H$ induced by $h$, $f$ and $g$). By the Hurewicz theorem there is a class $\eta \in H^3(\Re (D),
H)$ which induces an isomorphism $${\bf Hur}(\eta ): \pi _3(\Re (D), r(d))\stackrel{\cong}{\rightarrow} H.$$ Here $${\bf Hur} : H^3(X , H)\rightarrow Hom (\pi _3(X,x), H)$$ is the Hurewicz map for any pointed space $(X,x)$; and the cohomology is singular cohomology (in particular it only depends on the weak homotopy type of the space).
Now look at the pullback of this class $$\Re (h)^{\ast}(\eta )\in H^3(\Re (A), H).$$ The hypothesis that $\Re (h)$ induces an isomorphism on $\pi _3$ implies that $${\bf Hur}(\Re (h)^{\ast}(\eta )): \pi _3(\Re (A),
r(a))\stackrel{\cong}{\rightarrow} H.$$ In particular, ${\bf Hur}(\Re (h)^{\ast}(\eta ))$ is nonzero so $\Re (h)^{\ast}(\eta )$ is nonzero in $H^3(\Re (A), H)$. This is a contradiction because $\Re (A)$ is weak homotopy-equivalent to the $3$-type of $S^2$, and $H={{\bf Z}}$, but $H^3(S^2 , {{\bf Z}})=\{ 0 \}$.
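Note in passing that the $3$-type of $S^2$ has the same $H^3$ as $S^2$ itself: the homotopy fiber of the truncation map $S^2\rightarrow \tau _{\leq 3}(S^2)$ is $3$-connected, so this map induces isomorphisms on $H_2$ and $H_3$, hence (by the universal coefficient theorem) on $H^3(-,H)$ for any coefficient group $H$; thus $$H^3(\Re (A), {{\bf Z}})\cong H^3(\tau _{\leq 3}(S^2), {{\bf Z}})\cong H^3(S^2, {{\bf Z}})=\{ 0\} .$$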
This contradiction completes the proof of Proposition \[noS2\], assuming Proposition \[diagramme\]. [$/$$/$$/$]{}
[*Proof of Proposition \[diagramme\]*]{}
This is the main part of the argument. We start with a strict groupoid $C$ and object $c$, satisfying the hypotheses of \[diagramme\].
The first step is to construct $(B,b)$. We let $B\subset C$ be the sub-$3$-category having only one object $b=c$, and only one $1$-morphism $1_b=1_c$. We set $$Hom _{Hom _B(b,b)}(1_b, 1_b):=Hom _{Hom _C(c,c)}(1_c, 1_c) ,$$ with the same composition law. The map $g: B\rightarrow C$ is the inclusion.
Note first of all that $B$ is a strict $3$-groupoid. This is easily seen using version (1) of the definition \[thmdef\] (but one has to look at the conditions in [@KV]). We can also verify it using condition (3). Of course $\tau _{\leq 1}(B)$ is the $1$-category with only one object and only one morphism, so it is a groupoid. We have to verify that $Hom _B(b,b)$ is a strict $2$-groupoid. For this, we again apply condition (3) of \[thmdef\]. Here we note that $$Hom _B(b,b)\subset Hom _C(c,c)$$ is the full sub-$2$-category with only one object $1_b=1_c$. Therefore, in view of the definition of $\tau _{\leq 1}$, we have that $$\tau _{\leq 1}Hom _B(b,b)\subset \tau _{\leq 1}Hom _C(c,c)$$ is a full subcategory. A full subcategory of a $1$-groupoid is again a $1$-groupoid, so $\tau _{\leq 1}Hom _B(b,b)$ is a $1$-groupoid. Finally, $Hom _{Hom _B(b,b)}(1_b, 1_b)$ is a $1$-groupoid since by construction it is the same as $Hom _{Hom _C(c,c)}(1_c, 1_c)$ (which is a groupoid by condition (3) applied to the strict $2$-groupoid $Hom _C(c,c)$). This shows that $Hom _B(b,b)$ is a strict $2$-groupoid and hence that $B$ is a strict $3$-groupoid.
Next, note that $\pi _0(B)=\ast$ and $\pi _1(B,b)=\{ 1\}$. On the other hand, for $i=2,3$ we have $$\pi _i(B,b)= \pi _{i-2}(Hom _{Hom _B(b,b)}(1_b, 1_b), 1^2_b)$$ and similarly $$\pi _i(C,c)= \pi _{i-2}(Hom _{Hom _C(c,c)}(1_c, 1_c), 1^2_c),$$ so the inclusion $g$ induces an equality $\pi _i(B,b) \stackrel{=}{\rightarrow}
\pi _i(C,c)$. Therefore, by definition (a) of equivalence \[thmdef\], $g$ is an equivalence of strict $3$-groupoids. This completes the construction and verification for $B$ and $g$.
Before getting to the construction of $A$ and $f$, we analyze the strict $3$-groupoid $B$ in terms of the discussion of \[scholium\] and \[scholiumgpd\]. Let $$G:= Hom _{Hom _B(b,b)}(1_b, 1_b).$$ It is an abelian monoid-object in the category of $1$-groupoids, with abelian operation denoted by $+: G\times G\rightarrow G$ and unit element denoted $0\in G$, which is the identity $2$-morphism $1^2_b$ of $1_b$. The operation $+$ corresponds to both of the compositions $\ast _0$ and $\ast _1$ in $B$.
The hypotheses on the homotopy groups of $C$ also hold for $B$ (since $g$ was an equivalence). These translate to the statements that $(\pi _0(G), +) = {{\bf Z}}$ and $Hom _G(0,0)=H$.
We now construct $A$ and $f$ via \[scholium\] and \[scholiumgpd\], by constructing a morphism $(G',+)\rightarrow (G,+)$ of abelian monoid-objects in the category of $1$-groupoids. We do this by a type of “base-change” on the monoid of objects, i.e. we will first define a morphism $Ob(G')\rightarrow Ob(G)$ and then define $G'$ to be the groupoid with object set $Ob(G')$ but with morphisms corresponding to those of $G$.
To accomplish the “base-change”, start with the following construction. If $S$ is a set, let ${\bf E}(S)$ denote the groupoid with $S$ as set of objects, and with exactly one morphism between each pair of objects. If $S$ has an abelian monoid structure then ${\bf E}(S)$ is an abelian monoid object in the category of groupoids.
Note that for any groupoid $U$ there is a morphism of groupoids $$U\rightarrow {\bf E}(Ob(U)),$$ and by “base change” we mean the following operation: take a set $S$ with a map $p:S\rightarrow Ob(U)$ and look at $$V:= {\bf E}(S)\times _{{\bf E}(Ob(U))}U.$$ This is a groupoid with $S$ as set of objects, and with $$Hom _V(s,t)= Hom _U(p(s), p(t)).$$ If $U$ is an abelian monoid object in the category of groupoids, if $S$ is an abelian monoid and if $p$ is a map of monoids then $V$ is again an abelian monoid object in the category of groupoids.
Apply this as follows. Starting with $(G,+)$ corresponding to $B$ via \[scholium\] and \[scholiumgpd\] as above, choose objects $a,b \in Ob(G)$ such that the image of $a$ in $\pi
_0(G)\cong {{\bf Z}}$ corresponds to $1\in {{\bf Z}}$, and such that the image of $b$ in $\pi _0(G)$ corresponds to $-1\in {{\bf Z}}$. Let $N$ denote the abelian monoid, product of two copies of the natural numbers, with objects denoted $(m,n)$ for nonnegative integers $m,n$. Define a map of abelian monoids $$p:N \rightarrow Ob(G)$$ by $$p(m,n):= m\cdot a + n\cdot b := a+a+\ldots +a \, + \, b+b+\ldots +b.$$ Note that this induces the surjection $N\rightarrow \pi _0(G)={{\bf Z}}$ given by $(m,n)\mapsto m-n$.
Define $(G',+)$ as the base-change $$G':= {\bf E}(N) \times _{{\bf E}(Ob(G))} G,$$ with its induced abelian monoid operation $+$. We have $$Ob (G')= N,$$ and the second projection $p_2: G'\rightarrow G$ (which induces $p$ on object sets) is fully faithful i.e. $$Hom _{G'}((m,n), (m',n'))= Hom _G(p(m,n), p(m',n')).$$ Note that $\pi _0(G')={{\bf Z}}$ via the map induced by $p$ or equivalently $p_2$. To prove this, say that: (i) $N$ surjects onto ${{\bf Z}}$ so the map induced by $p$ is surjective; and (ii) the fact that $p_2$ is fully faithful implies that the induced map $\pi _0(G')\rightarrow \pi _0(G)={{\bf Z}}$ is injective.
We let $A$ be the strict $3$-groupoid corresponding to $(G',+)$ via \[scholium\], and let $f: A\rightarrow B$ be the map corresponding to $p_2: G'\rightarrow G$ again via \[scholium\]. Let $a$ be the unique object of $A$ (it is mapped by $f$ to the unique object $b\in Ob(B)$).
The fact that $(\pi _0(G'),+)={{\bf Z}}$ is a group implies that $A$ is a strict $3$-groupoid (\[scholiumgpd\]). We have $\pi _0(A)=\ast$ and $\pi
_1(A,a)=\{ 1\}$. Also, $$\pi _2(A,a)= (\pi _0(G'), +) = {{\bf Z}}$$ and $f$ induces an isomorphism from here to $\pi _2(B,b)=(\pi _0(G), +)={{\bf Z}}$. Finally (using the notation $(0,0)$ for the unit object of $(N,+)$ and the notation $0$ for the unit object of $Ob(G)$), $$\pi _3(A,a)= Hom _{G'}((0,0),(0,0)),$$ and similarly $$\pi _3(B,b)=Hom _G(0,0)=H;$$ the map $\pi _3(f): \pi _3(A,a)\rightarrow \pi _3(B,b)$ is an isomorphism because it is the same as the map $$Hom _{G'}((0,0),(0,0))\rightarrow Hom _G(0,0)$$ induced by $p_2: G'\rightarrow G$, and $p_2$ is fully faithful. We have now completed the verification that $f$ induces isomorphisms on the homotopy groups, so by version (a) of the definition of equivalence \[thmdef\], $f$ is an equivalence of strict $3$-groupoids.
We now construct $D$ and define the map $h$ by an explicit calculation in $(G',+)$. First of all, let $[H]$ denote the $1$-groupoid with one object denoted $0$, and with $H$ as group of endomorphisms: $$Hom _{[H]}(0,0):= H.$$ This has a structure of abelian monoid-object in the category of groupoids, denoted $([H], +)$, because $H$ is an abelian group. Let $D$ be the strict $3$-groupoid corresponding to $([H], +)$ via \[scholium\] and \[scholiumgpd\]. We will construct a morphism $h: A\rightarrow D$ via \[scholium\] by constructing a morphism of abelian monoid objects in the category of groupoids, $$h:(G', +)\rightarrow ([H], +).$$ We will construct this morphism so that it induces the identity morphism $$Hom _{G'}((0,0), (0,0))=H \rightarrow Hom _{[H]}(0,0)=H.$$ This will insure that the morphism $h$ has the property required for \[diagramme\].
The object $(1,1)\in N$ goes to $0\in \pi _0(G')\cong {{\bf Z}}$. Thus we may choose an isomorphism $\varphi : (0,0)\cong (1,1)$ in $G'$. For any $k$ let $k\varphi$ denote the isomorphism $\varphi + \ldots +\varphi$ ($k$ times) going from $(0,0)$ to $(k,k)$. On the other hand, $H$ is the automorphism group of $(0,0)$ in $G'$. The operations $+$ and composition coincide on $H$. Finally, for any $(m,n)\in N$ let $1_{m,n}$ denote the identity automorphism of the object $(m,n)$. Then any arrow $\alpha$ in $G'$ may be uniquely written in the form $$\alpha = 1_{m,n} + k\varphi + u$$ with $(m,n)$ the source of $\alpha$, the target being $(m+k, n+k)$, and where $u\in H$.
We have the following formulae for the composition $\circ$ of arrows in $G'$. They all come from the basic rule $$(\alpha \circ \beta ) + (\alpha ' \circ \beta ')=
(\alpha + \alpha ') \circ (\beta + \beta ')$$ which in turn comes simply from the fact that $+$ is a morphism of groupoids $G'\times G'\rightarrow G'$ defined on the cartesian product of two copies of $G'$. Note in a similar vein that $1_{0,0}$ acts as the identity for the operation $+$ on arrows, and also that $$1_{m,n} + 1_{m',n'} = 1_{m+m', n+n'}.$$
Our first equation is $$(1_{l,l} +k\varphi )\circ l\varphi = (k+l)\varphi .$$ To prove this, note that $l\varphi + 1_{0,0}= l\varphi$ and our basic formula says $$(1_{l,l}\circ l\varphi ) + (k\varphi \circ 1_{0,0})
=
(1_{l,l} +k\varphi )\circ (l\varphi + 1_{0,0} ),$$ but the left side is just $l\varphi + k\varphi = (k+l)\varphi$.
Now our basic formula, for a composition starting with $(m,n)$, going first to $(m+l,n+l)$, then going to $(m+l+k, n+l+k)$, gives $$(1_{m+l,n+l} + k\varphi + u)\circ (1_{m,n} + l\varphi + v)$$ $$= (1_{m,n} + 1_{l,l} + k\varphi + u)\circ (1_{m,n} + l\varphi + v)$$ $$= 1_{m,n}\circ 1_{m,n} + (1_{l,l} +k\varphi )\circ l\varphi
+ u\circ v$$ $$= 1_{m,n} + (k+l)\varphi + (u\circ v)$$ where of course $u\circ v=u+v$.
This formula shows that the morphism $h$ from arrows of $G'$ to the group $H$, defined by $$h(1_{m,n} + k\varphi + u):= u$$ is compatible with composition. This implies that it provides a morphism of groupoids $h:G'\rightarrow [H]$ (recall from above that $[H]$ is defined to be the groupoid with one object whose automorphism group is $H$). Furthermore, the morphism $h$ is obviously compatible with the operation $+$ since $$(1_{m,n} + k\varphi + u)+ (1_{m',n'} + k'\varphi + u')=$$ $$(1_{m+m',n+n'} + (k+k')\varphi + (u+u'))$$ and once again $u+u'=u\circ u'$ (the operation $+$ on $[H]$ being given by the commutative operation $\circ$ on $H$).
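As a sanity check on the composition rule and on the compatibility of $h$ with both $\circ$ and $+$, one can model arrows of $G'$ by their normal-form data $((m,n),k,u)$. The following sketch is purely illustrative (it is not part of the original text) and takes $H={\bf Z}/5$ as a stand-in abelian group.

```python
# Arrows of G' in normal form 1_{m,n} + k*phi + u: source (m, n), target (m+k, n+k),
# u an element of H (here H = Z/5, an arbitrary choice for illustration).
from dataclasses import dataclass

MOD = 5  # stand-in for the abelian group H

@dataclass(frozen=True)
class Arrow:
    m: int
    n: int
    k: int
    u: int

    def __add__(self, other):
        # the monoid operation + on arrows
        return Arrow(self.m + other.m, self.n + other.n,
                     self.k + other.k, (self.u + other.u) % MOD)

    def compose(self, other):
        # self o other: other goes (m, n) -> (m+l, n+l) and self starts there
        assert (other.m + other.k, other.n + other.k) == (self.m, self.n)
        return Arrow(other.m, other.n, self.k + other.k, (self.u + other.u) % MOD)

def h(a):
    return a.u   # h(1_{m,n} + k*phi + u) := u

beta = Arrow(1, 0, 2, 3)    # an arrow (1,0) -> (3,2)
alpha = Arrow(3, 2, 1, 4)   # an arrow (3,2) -> (4,3)
assert h(alpha.compose(beta)) == (h(alpha) + h(beta)) % MOD   # compatibility with composition
assert h(alpha + beta) == (h(alpha) + h(beta)) % MOD          # compatibility with +
```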
This completes the construction of a morphism $h: (G', +)\rightarrow ([H], +)$ which induces the identity on $Hom (0,0)$. This corresponds to a morphism of strict $3$-groupoids $h: A\rightarrow D$ as required to complete the proof of Proposition \[diagramme\]. [$/$$/$$/$]{}
[**A remark on strict $\infty$-groupoids**]{}
The nonexistence result of \[noS2\] holds also for strict $\infty$-groupoids as defined in [@KV]. Recall that Kapranov-Voevodsky [@KV] extend the notion of strict $n$-category and strict $n$-groupoid to the case $n=\infty$. The definition is made using condition (1), and the notion of equivalence is defined using (a) in \[thmdef\]. Note that the other characterizations of \[thmdef\] don’t actually make sense in the case $n=\infty$ because they are inductive on $n$.
The only thing we need to know about the case $n=\infty$ is that there are homotopy groups $\pi _i(A,a)$ of a strict $\infty$-groupoid $A$, and there are truncation operations on strict $\infty$-groupoids such that $\tau _{\leq n}(A)$ is a strict $n$-groupoid with a natural morphism $$A\rightarrow \tau _{\leq n}(A)$$ inducing isomorphisms on homotopy groups for $i\leq n$. (Here the $n$-groupoid $\tau _{\leq n}(A)$ is considered as an $\infty$-groupoid in the obvious way.) The homotopy groups and truncation are defined as in [@KV]—again, one has to avoid those versions of the definitions \[thmdef\] which are recursive on $n$.
We can extend the definition of \[realizationdef\] to the case $n=\infty$. It is immediate that for any realization functor $\Re$ satisfying the axioms \[realizationdef\] for $n=\infty$, the morphism $$\Re (A)\rightarrow \Re (\tau _{\leq n}A)$$ is the Postnikov truncation of $\Re (A)$. Applying \[noS2\], we obtain the following result.
\[noInfiniteS2\] For any realization functor $\Re$ satisfying the axioms \[realizationdef\] for $n=\infty$, there does not exist a strict $\infty$-groupoid $A$ (as defined by Kapranov-Voevodsky [@KV]) such that $\Re (A)$ is weak homotopy-equivalent to the $2$-sphere $S^2$.
[*Proof:*]{} Note that if $\Re$ is a realization functor satisfying \[realizationdef\] for $n=\infty$, then composing with the inclusion $i_3^{\infty}$ from the category of strict $3$-groupoids to the category of strict $\infty$-groupoids we obtain a realization functor $\Re i_3^{\infty}$ for strict $3$-groupoids, again satisfying \[realizationdef\]. If $A$ is a strict $\infty$-groupoid then the above truncation morphism, written more precisely, is $$A\rightarrow i_3^{\infty} \tau _{\leq 3}(A).$$ This induces isomorphisms on the $\pi _i$ for $i\leq 3$. Applying $\Re$ we get $$\Re (A) \rightarrow \Re i_3^{\infty} \tau _{\leq 3}(A),$$ inducing an isomorphism on homotopy groups for $i\leq 3$. In particular, if $\Re (A)$ were weak homotopy-equivalent to $S^2$ then this would imply that $\Re i_3^{\infty} \tau _{\leq 3}(A)$ is the $3$-type of $S^2$. In view of the fact that $\Re i_3^{\infty}$ is a realization functor according to \[realizationdef\] for strict $3$-groupoids, this would contradict \[noS2\]. Thus we conclude that there is no strict $\infty$-groupoid $A$ with $\Re (A)$ weak homotopy-equivalent to $S^2$. [$/$$/$$/$]{}
[**Conclusion**]{}
One really needs to look at some type of weak $3$-categories in order to get a hold of $3$-truncated homotopy types. O. Leroy [@Leroy] and, apparently independently, Joyal and Tierney [@JoyalTierney] were the first to do this. See also Gordon, Power, Street [@Gordon-Power-Street] and Berger [@Berger] for weak $3$-categories and $3$-types. Baues [@Baues] showed that $3$-types correspond to [*quadratic modules*]{} (a generalization of the notion of crossed complex). Tamsamani [@Tamsamani] was the first to relate weak $n$-groupoids and homotopy $n$-types. For other notions of weak $n$-category, see [@BaezDolanLetter], [@BaezDolanIII], [@Batanin], [@Batanin2].
From homotopy theory (cf [@Lewis]) the following type of yoga seems to come out: that it suffices to weaken any one of the principal structures involved. Most weak notions of $n$-category involve a weakening of the associativity, or eventually of the Godement (commutativity) conditions.
It seems likely that the arguments of [@KV] would show that one could instead weaken the condition of being [*unary*]{} (i.e. having identities for the operations) and keep associativity and Godement. We give a proposed definition of what this would mean and then state two conjectures.
[*Motivation*]{}
Before giving the definition, we motivate these remarks by looking at the [*Moore loop space*]{} $\Omega ^x_M(X)$ of a space $X$ based at $x\in X$ (the Moore loop space is referred to in [@KV] as a motivation for their construction). Recall that $\Omega ^x_M(X)$ is the space of [*pairs*]{} $(r,
\gamma )$ where $r$ is a real number $r\geq 0$ and $\gamma : [0,r]\rightarrow X$ is a path starting and ending at $x$. This has the advantage of being a strictly associative monoid. On the other side of the coin, the “length” function $$\ell : \Omega ^x_M(X)\rightarrow [0,\infty )\subset {{\bf R}}$$ has a special behavior over $r=0$. Note that over the open half-line $(0,\infty )$ the length function $\ell$ is a fibration (even a fiber-space) with fiber homeomorphic to the usual loop space. However, the fiber over $r=0$ consists of a single point, the constant path $[0,0]\rightarrow X$ based at $x$. This additional point (which is the unit element of the monoid $\Omega ^x_M(X)$) doesn’t affect the topology of $\Omega ^x_M$ (at least if $X$ is locally contractible at $x$) because it is glued in as a limit of paths which are more and more concentrated in a neighborhood of $x$. However, the map $\ell$ is no longer a fibration over a neighborhood of $r=0$. This is a bit of a problem because $\Omega ^x_M$ is not compatible with direct products of the space $X$; in order to obtain a compatibility one has to take the fiber product over ${{\bf R}}$ via the length function: $$\Omega ^{(x,y)}_M(X\times Y)= \Omega ^x_M(X) \times _{{{\bf R}}} \Omega ^y_M(Y),$$ and the fact that $\ell$ is not a fibration could end up causing a problem in an attempt to iteratively apply a construction like the Moore loop-space.
Things seem to get better if we restrict to $$\Omega ^x_{M'}(X):=\ell ^{-1}((0,\infty ))\subset \Omega ^x_M(X) ,$$ but this associative monoid no longer has a strict unit. Even so, the constant path of any positive length gives a weak unit.
A motivation coming from a different direction was an observation made by Z. Tamsamani early in the course of doing his thesis. He was trying to define a strict $3$-category $2Cat$ whose objects would be the strict $2$-categories and whose morphisms would be the weak $2$-functors between $2$-categories (plus notions of weak natural transformations and $2$-natural transformations). At some point he came to the conclusion that one could adequately define $2Cat$ as a strict $3$-category except that he couldn’t get strict identities. Because of this problem we abandoned the idea and looked toward weakly associative $n$-categories. In retrospect it would be interesting to pursue Tamsamani’s construction of a strict $2Cat$ but with only weak identities.
[*Snucategories*]{}
Now we get back to looking at what it could mean to weaken the unit property for strict $n$-categories or strict $n$-groupoids. We will define a notion of [*$n$-snucategory*]{} (the initial ‘s’ stands for strict, ‘nu’ stands for non-unary) by induction on $n$. There will be a notion of direct product of $n$-snucategories. Suppose we know what these mean for $n-1$. Then an $n$-snucategory $C$ consists of a set $C_0$ of objects together with, for every pair of objects $x,y\in C_0$, an $n-1$-snucategory $Hom _C(x,y)$ and composition morphisms $$Hom _C(x,y)\times Hom _C(y,z) \rightarrow Hom _C(x,z)$$ which are strictly associative, such that the [*weak unary condition*]{} is satisfied. We now explain this condition. An element $e\in Hom _C(x,x)$ is called a weak identity if: —composition with $e$ induces equivalences of $n-1$-snucategories $$Hom _C(x,y)\rightarrow Hom_C(x,y) , \;\;\;
Hom _C(y,x)\rightarrow Hom_C(y,x);$$ —and if $e\cdot e$ is equivalent to $e$.
In order to complete the recursive definition we must define the notion of when a morphism of $n$-snucategories is an equivalence, and we must define what it means for two objects to be equivalent. A morphism is said to be an equivalence if the induced morphisms on $Hom$ are equivalences of $n-1$-snucategories and if it is essentially surjective on objects: each object in the target is equivalent to the image of an object. It thus remains just to be seen what equivalence of objects means. For this we introduce the [*truncations*]{} $\tau _{\leq i}C$ of an $n$-snucategory $C$. Again this is done in the same way as usual: $\tau _{\leq i}C$ is the $i$-snucategory with the same objects as $C$ and whose $Hom$’s are the truncations $$Hom_{\tau _{\leq i}C}(x,y):=\tau _{\leq i-1}Hom _C(x,y).$$ This works for $i\geq 1$ by recurrence, and for $i=0$ we define the truncation to be the set of isomorphism classes in $\tau _{\leq 1}C$. Note that truncation is compatible with direct product (direct products are defined in the obvious way) and takes equivalences to equivalences. These statements used recursively allow us to show that the truncations themselves satisfy the weak unary condition. Finally, we say that two objects are equivalent if they map to the same thing in $\tau _{\leq 0}C$.
Proceeding in the same way as in §2 above, we can define the notion of $n$-snugroupoid.
[**Conjecture:**]{} There are functors $\Pi _n$ and $\Re$ between the categories of $n$-snugroupoids and $n$-truncated spaces (going in the usual directions) together with adjunction morphisms inducing an equivalence between the localization of $n$-snugroupoids by equivalences, and $n$-truncated spaces by weak equivalences.
I think that the argument of [@KV] (which is unclear on the question of identity elements) actually serves to prove the above statement. I have called the above statement a “conjecture” because I haven’t checked this.
One might go out on a limb a bit more and make the following
[**Conjecture:**]{} The localization of the category of $n$-snucategories by equivalences is equivalent to the localizations of the categories of weak $n$-categories of Tamsamani and/or Baez-Dolan and/or Batanin by equivalences.
This of course is of a considerably more speculative nature.
[**Caveat**]{}: the above definition of “snucategory” is invented in an [*ad hoc*]{} way, and in particular one naturally wonders whether or not the equivalences $e\cdot e \sim e$ and higher homotopical data going along with that, would need to be specified in order to get a good definition. I have no opinion about this (the above definition being just the easiest thing to say which gives some idea of what needs to be done). Thus it is not completely clear that the above definition of $n$-snucategory is the “right” one to fit into the conjectures.
[**References**]{}
J. Baez, J. Dolan. $n$-Categories, sketch of a definition. Letter to R. Street, 29 Nov. and 3 Dec. 1995, available at [http://math.ucr.edu/home/baez/ncat.def.html]{}
J. Baez, J. Dolan. Higher-dimensional algebra and topological quantum field theory. [*Jour. Math. Phys*]{} [**36**]{} (1995), 6073-6105 (preprint dating from q-alg 95-03).
J. Baez, J. Dolan. Higher dimensional algebra III: $n$-categories and the algebra of opetopes. Preprint q-alg 9702014, to appear [*Adv. Math.*]{}.
M. Batanin. On the definition of weak $\omega$-category. Macquarie mathematics report number 96/207, Macquarie University, NSW Australia.
M. Batanin. Monoidal globular categories as a natural environment for the theory of weak $n$-categories. To appear in [*Adv. Math.*]{}
H. Baues. [*Combinatorial homotopy and $4$-dimensional complexes.*]{} de Gruyter, Berlin (1991).
J. Bénabou. [*Introduction to Bicategories*]{}, Lect. Notes in Math. [**47**]{}, Springer-Verlag (1967).
C. Berger. Double loop spaces, braided monoidal categories and algebraic $3$-type of space. Preprint (Univ. of Nice).
R. Brown. Computing homotopy types using crossed $n$-cubes of groups. [*Adams Memorial Symposium on Algebraic Topology*]{}, Vol 1, eds. N. Ray, G Walker. Cambridge University Press, Cambridge (1992) 187-210.
R. Brown, N.D. Gilbert. Algebraic models of 3-types and automorphism structures for crossed modules. [*Proc. London Math. Soc.*]{} (3) [**59**]{} (1989), 51-73.
R. Brown, P. Higgins. The equivalence of $\infty$-groupoids and crossed complexes. [*Cah. Top. Geom. Diff.*]{} [**22**]{} (1981), 371-386.
R. Brown, P. Higgins. The classifying space of a crossed complex. [*Math. Proc. Camb. Phil. Soc.*]{} [**110**]{} (1991), 95-120.
W. Dwyer, D. Kan. Simplicial localizations of categories. [*J. Pure and Appl. Algebra*]{} [**17**]{} (1980), 267-284.
W. Dwyer, D. Kan. Calculating simplicial localizations. [*J. Pure and Appl. Algebra*]{} [**18**]{} (1980), 17-35.
W. Dwyer, D. Kan. Function complexes in homotopical algebra. [*Topology*]{} [**19**]{} (1980), 427-440.
P. Gabriel, M. Zisman. [*Calculus of fractions and homotopy theory*]{}, Ergebnisse der Math. und ihrer Grenzgebiete [**35**]{}, Springer-Verlag, New York (1967).
R. Gordon, A.J. Power, R. Street. Coherence for tricategories [*Memoirs A.M.S.*]{} [**117**]{} (1995).
A. Grothendieck. [*Pursuing Stacks*]{} available from Université de Montpellier 2 or the University of Bangor.
A. Joyal, M. Tierney. Algebraic homotopy types. Occurs as an entry in the bibliography of [@BaezDolan].
M. Kapranov, V. Voevodsky. $\infty$-groupoid and homotopy types. [*Cah. Top. Geom. Diff.*]{} [**32**]{} (1991), 29-46.
G. Kelly, [*Basic concepts of enriched category theory*]{} London Math. Soc. Lecture Notes [**64**]{}, Cambridge U. Press, Cambridge (1982).
O. Leroy. Sur une notion de $3$-catégorie adaptée à l’homotopie. Preprint Univ. de Montpellier 2 (1994).
L. G. Lewis. Is there a convenient category of spectra? [*Jour. Pure and Appl. Algebra*]{} [**73**]{} (1991), 233-246.
R. Street. The algebra of oriented simplexes. [*Jour. Pure and Appl. Algebra*]{} [**49**]{} (1987), 283-335.
Z. Tamsamani. Sur des notions de $n$-categorie et $n$-groupoide non-stricte via des ensembles multi-simpliciaux. Thesis, Université Paul Sabatier, Toulouse (1996) available on alg-geom (9512006 and 9607010).
[^1]: Our notion of “reasonable realization functor” (Definition \[realizationdef\]) is any functor $\Re$ from the category of strict $n$-groupoids to $Top$, provided with a natural transformation $r$ from the set of objects of $G$ to the points of $\Re (G)$, and natural isomorphisms $\pi
_0(G)\cong \pi _0(\Re (G))$ and $\pi _i(G,x) \cong \pi _i(\Re (G), r(x))$. This axiom is fundamental to the question of whether one can realize homotopy types by strict $n$-groupoids, because one wants to read off the homotopy groups of the space from the strict $n$-groupoid. The standard realization functors satisfy this property, and the somewhat different realization construction of [@KV] is claimed there to have this property.
---
author:
- 'Hirohito <span style="font-variant:small-caps;">Aizawa</span>$^{1}$, Kazuhiko <span style="font-variant:small-caps;">Kuroki</span>$^{1}$, and Yukio <span style="font-variant:small-caps;">Tanaka</span>$^{2}$'
title: ' Pairing competition in a quasi-one-dimensional model of organic superconductors (TMTSF)$_{2}X$ in magnetic field '
---
Introduction\[Introduction\]
============================
The superconducting state of quasi-one-dimensional (Q1D) organic conductors (TMTSF)$_{2}X$ (TMTSF=tetramethyl-tetraselenafulvalene, $X$=PF$_{6}$, ClO$_{4}$ etc.) has been an issue of great interest. Since the discovery of the first organic superconductor (TMTSF)$_{2}$PF$_{6}$, various studies have been performed both experimentally and theoretically. [@Jerome-Mazud-etal-TMTSF; @Ishiguro-Yamaji-Saito; @Lang-Muller; @Coleman-Cohen-etal-TTF-TCNQ; @Parkin-Engler-etal-BEDT-TTF; @Bourbonnais-Jerome-review; @Chem-Rev-104; @JPSJ-75; @Seo-Hotta-Fukuyama; @Jerome-Chem-Rev; @Lee-Brown-etal-JPSJ-Rev; @Kuroki-JPSJ-Rev; @Dupuis-Bourbonnais-etal-review] Previous studies of the NMR relaxation rate $1/T_{1}$ [@Takigawa-Yasuoka-etal-T1; @Hasegawa-Fukuyama-T1] and the impurity effect [@Coulon-Delhaes-etal-impurity; @Tomic-Jerome-etal-impurity; @Choi-Chaikin-etal-impurity; @Bouffard-Ribault-etal-impurity; @Joo-Auban-etal-impurity1; @Joo-Auban-etal-impurity2] have strongly suggested the possibility of anisotropic superconductivity where the nodes of the superconducting gap intersect the Fermi surface, although a thermal conductivity measurement has suggested the absence of nodes on the Fermi surface in (TMTSF)$_{2}$ClO$_{4}$. [@Belin-Behnia-thermal-conductivity]
Further experiments concerning the pairing symmetry have suggested the possibility that the pairing state in (TMTSF)$_{2}X$ may be even more fascinating. The NMR Knight shift measurements for (TMTSF)$_{2}$PF$_{6}$ and (TMTSF)$_{2}$ClO$_{4}$ have shown that the Knight shift is unchanged across the superconducting critical temperature $T_{c}$. [@Lee-Brown-etal-PF6-NMR-a; @Lee-Chow-etal-PF6-NMR-b; @Shinagawa-Wu-ClO4-NMR] The upper critical field $H_{c2}$ for (TMTSF)$_{2}$PF$_{6}$ and (TMTSF)$_{2}$ClO$_{4}$ has been observed to exceed the Pauli paramagnetic limit $H_{{\rm P}}$. [@Lee-Naughton-etal-PF6-Hc2-PRL; @Lee-Chaikin-etal-PF6-Hc2-PRB; @Oh-Naughton-ClO4-Hc2] These experiments suggest the possibility of spin triplet pairing and/or the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state, [@Fulde-Ferrell; @Larkin-Ovchinnikov] in which the Cooper pairs formed as $( \textbf{\textit k}+\textbf{\textit Q}_{c}\uparrow,
-\textbf{\textit k}+\textbf{\textit Q}_{c}\downarrow)$ have a finite center of mass momentum $\textbf{\textit Q}_{c}$.
Very recent experiments show more interesting results. The NMR experiment for (TMTSF)$_{2}$ClO$_{4}$ shows that the Knight shift changes across $T_{c}$ when the magnetic field is small, but it is unchanged when a high magnetic field is applied. [@Shinagawa-Kurosaki-et-al] The $H_{c2}$ measurements for (TMTSF)$_{2}$ClO$_{4}$ show the possibility of two or three different pairing states. In the intermediate field regime (between $H_{{\rm P}}$ and about 4 T), superconductivity is easily destroyed by tilting the magnetic field away from the conductive $a$-$b$ plane, while in the high field regime, superconductivity is sensitive to the broadening of the scattering potential of the nonmagnetic impurities; namely, if the broadening of the impurity potential is large, the upturn of the critical temperature curve vanishes. [@Yonezawa-Kusaba-et-al-PRL; @Yonezawa-Kusaba-et-al-JPSJ] These experiments suggest that spin singlet pairing occurs in the field regime lower than the Pauli limit, but spin triplet pairing and/or the FFLO state occurs in the higher field regime.
Theoretically, various studies on the pairing state in (TMTSF)$_{2}X$ have elucidated not only the unconventional pairing state with a nodal gap function [@Hasegawa-Fukuyama-T1; @Kino-Kontani-TMTCF-FLEX; @Nomura-Yamada-TMTCF-TOP; @Kuroki-Aoki-TMTCF-QMC; @Kuroki-Tanaka-etal-TMTCF-QMC; @Takigawa-Ichioka-et-al] but also the possibility of the spin triplet pairing [@Kuroki-Arita-Aoki; @Tanaka-Kuroki; @Kuroki-Tanaka; @Fuseya-Suzumura; @Nickel-Duprat; @Lebed-FIDC; @Lebed-Yamaji; @Lebed-triplet; @Lebed-Machida-Ozaki; @Shimahara-FIST; @Vaccarella-Melo; @Fuseya-Onishi-Kohno-et-al; @Belmechri-Abramovici-et-al-zeeman; @Belmechri-Abramovici-et-al-orbital; @Aizawa-Kuroki-Tanaka; @Aizawa-et-al-PRL] and/or the FFLO state. [@Suzumura-Ishino; @Machida-Nakanishi; @Dupuis-Montambaux-Melo; @Dupuis-Montambaux; @Miyazaki-Kishigi-Hasegawa; @Aizawa-et-al-PRL] In particular, we have previously shown that the spin triplet “$f$-wave” pairing can compete with the spin singlet “$d$-wave” pairing in Q1D systems [@comment-d-f-wave] when $2k_{F}$ spin fluctuations coexist with $2k_{F}$ charge fluctuations since the Fermi surface is disconnected in the $b$-direction. [@Kuroki-Arita-Aoki; @Tanaka-Kuroki; @Kuroki-Tanaka] In fact, the coexistence of $2k_{F}$ charge density wave and $2k_{F}$ spin density wave in the insulating phase has been observed by the diffuse X-ray scattering experiments in (TMTSF)$_{2}$PF$_{6}$. [@Pouget-Ravy; @Kagoshima-Saso-et-al] A similar conclusion concerning the pairing state competition has been reached using the renormalization group technique. [@Fuseya-Suzumura; @Nickel-Duprat] As a method for identifying spin-triplet $f$-wave pairing, tunneling spectroscopy [@Tanuma-Kuroki-PRB-66-094507; @Tanuma-Tanaka-PRB-66-174502] via the mid gap Andreev resonant state [@Tanaka-Kashiwaya-PRL-74-3451; @Kashiwaya-Tanaka-RPP-63-1641] and Josephson effect [@Asano-Tanaka-JPSJ-73-1922] have been proposed. In particular, the experiment of the proximity effect in the junctions with a diffusive normal metal is promising since an anomalous proximity effect with a zero-energy peak in the density of states, specific to spin-triplet superconductor junctions, has been predicted. [@Tanaka-Kashiwaya-PRB-70-012507; @Tanaka-Kashiwaya-PRB-71-094513; @Tanaka-Asano-PRB-72-140503; @Asano-Tanaka-PRL-96-097007; @Tanaka-Nazarov-PRL-90-167003; @Tanaka-Nazarov-PRB-69-144519]
There have also been various studies for the pairing state in the magnetic field. The possibility of the FFLO state in finite magnetic field in a Q1D model for (TMTSF)$_{2}X$ has been suggested in several studies. [@Suzumura-Ishino; @Machida-Nakanishi; @Dupuis-Montambaux-Melo; @Dupuis-Montambaux; @Miyazaki-Kishigi-Hasegawa] The possibility of field-induced spin triplet pairing has also been discussed by a phenomenological theory and a renormalization technique. [@Lebed-FIDC; @Lebed-Yamaji; @Lebed-triplet; @Lebed-Machida-Ozaki; @Shimahara-FIST; @Vaccarella-Melo; @Fuseya-Onishi-Kohno-et-al; @Belmechri-Abramovici-et-al-zeeman; @Belmechri-Abramovici-et-al-orbital] Recently, we have microscopically studied the magnetic field effect on the pairing state in Q1D systems. We found that the $S_{z}=1$ triplet pairing mediated by $2k_{F}$ spin+$2k_{F}$ charge fluctuations is strongly enhanced by the magnetic field and showed the temperature-magnetic field phase diagram indicating the competition between the singlet and triplet pairings. [@Aizawa-Kuroki-Tanaka] We further found that the spin singlet, triplet, and FFLO states are closely competing, and the $S_{z}=0$ triplet component is strongly mixed with the singlet component in the FFLO state. There, the pairing state competition has been studied by comparing the eigenvalue of the linearized gap equation in the space of $V_y$ (strength of the charge fluctuation) and $h_z$ (magnetic field). [@Aizawa-et-al-PRL]
The FFLO state has recently been studied actively not only in Q1D but also in general systems. [@Casalbuoni-Nardulli; @Matsuda-Shimahara] Previous theoretical studies have revealed various properties of the FFLO superconductivity from the viewpoint of (i) the orbital effect, [@Gruenberg-Gunther; @Maki-Won; @Shimahara-Rainer; @Tachiki-Takahashi-et-al; @Houzet-Buzdin; @Ikeda1; @Ikeda2; @Maniv-Zhuravlev; @Mizushima-Machida-Ichioka; @Ichioka-Adachi-et-al; @Klemm-Luther-Beasley; @Samokhin] (ii) the impurity effect, [@Takada; @Agterberg-Yang; @Adachi-Ikeda; @Houzet-Mineev; @Yanase-disorder] and (iii) the anisotropy of the system. [@Burkhardt-Rainer; @Shimahara-FFLO_direction-Q2D; @Shimahara-FFLO_direction-kappa-ET; @Buzdin-Kachkachi; @Vorontsov-Sauls-Graf; @Suginishi-Shimahara-BETS; @Vorontsov-Graf; @Shimahara-Moriwake; @Kyker-Pickett] One of the interesting aspects of the FFLO state is parity mixing, i.e., even and odd parity pairings can be mixed to stabilize the FFLO state, which has been shown in phenomenological theories. [@Matsuo-Shimahara-Nagai; @Shimahara2] Recent microscopic studies have also shown that the $S_{z}=0$ triplet pairing is mixed with singlet pairing in the FFLO state of the Hubbard model on the two-leg ladder-type lattice, [@Roux-White-Capponi-Poilblanc] the square lattice, [@Yanase-JPSJ-77-063705; @Yokoyama-Onari-Tanaka-FFLO] and the Q1D extended Hubbard model. [@Aizawa-et-al-PRL] Yanase has pointed out that the parity mixing stabilizes the FFLO state, even in the vicinity of the quantum critical point, where the quasi-particle lifetime decreases owing to the scattering caused by spin fluctuations. [@Yanase-JPSJ-77-063705] In addition to these works, superconducting properties of the FFLO state have been studied theoretically. [@Vorontsov-Sauls-PRB-72-184501; @Cui-Hu-PRB-73-214514; @Tanaka-Asano-PRL-98-077001]
Recent experiments strongly show the possibility of the FFLO state with an anisotropic gap function in CeCoIn$_{5}$. [@Radovan-Fortune-et-al; @Bianchi-Movshovich-Capan-et-al; @Watanabe-Kasahara-et-al; @Watanabe-Izawa-et-al; @Cpan-Bianchi-et-al; @Correa-Murphy-et-al; @Kakuyanagi-Sitoh-et-al; @Kumagai-Saitoh-et-al; @Miclea-Nicklas-et-al; @Gratens-Ferreira-et-al; @Mitrovic'-Horvatic'-et-al; @Movshovich-Jaime-et-al; @Hall-Palm-et-al; @Bianchi-Movshovich-Oeschler-et-al; @Izawa-Yamaguchi-et-al; @Aoki-Sakakibara-et-al; @Vorontsov-Vekhter; @Martin-Agosta-et-al; @Settai-Shishido-et-al; @McCollam-Julian-et-al; @Young-Urbano-et-al] Other candidate materials exhibiting the FFLO state are quasi-two-dimensional (Q2D) organic materials, such as $\lambda$-(BETS)$_{2}X$ (BETS=bisethylenedithio-tetraselenafulvalene, $X$=GaCl$_{4}$, [@Tanatar-Ishiguro-et-al] and FeCl$_{4}$ [@Uji-Shinagawa-et-al; @Balicas-Brooks-et-al; @Uji-Terashima-et-al]) and $\kappa$-(BEDT-TTF)$_{2}$Cu(NCS)$_{2}$ (BEDT-TTF=bisethylenedithio-tetrathiafulvalene), [@Manalo-Klein; @Singleton-Symington-et-al; @Lortz-Wang-et-al] and also a Q1D one (TMTSF)$_{2}$ClO$_{4}$ [@Shinagawa-Kurosaki-et-al; @Yonezawa-Kusaba-et-al-PRL; @Yonezawa-Kusaba-et-al-JPSJ]. These materials have stimulated extensive studies in this field. The FFLO state attracts us not only in the field of superconductivity or superfluidity in condensed matter but also in the quantum chromodynamics [@Casalbuoni-Nardulli] and the ultracold fermionic atom gas. [@Zwierlein-Schirotzek-et-al; @Partridge-Li-et-al]
Given the above background, in this study, we investigate the pairing competition between the spin singlet and spin triplet pairings, and the FFLO state of the superconductivity mediated by spin and charge fluctuations in a Q1D extended Hubbard model for (TMTSF)$_{2}X$ by random phase approximation (RPA). While the competition was studied only by comparison with the eigenvalue of the gap equation at a fixed temperature as indicated in ref. , here we calculate the superconducting temperature $T_{c}$ for each pairing state. This enables us to obtain a phase diagram in the $T$(temperature)- $h_z$(field)-$V_y$(strength of the charge fluctuation) space, where we find that (i) consecutive transitions from singlet pairing to the FFLO state and further to $S_z=1$ triplet pairing can occur upon increasing the magnetic field in the vicinity of the SDW+CDW phase, and (ii) the enhancement of the charge fluctuations leads to a significant increase in parity mixing in the FFLO state, where the $S_{z}=0$ triplet/singlet component ratio in the gap function can be close to unity.
Formulation\[Formulation\]
==========================
The extended Hubbard model for (TMTSF)$_{2}X$ \[Fig. \[model\](a)\] that takes into account the Zeeman effect is given as $$\begin{aligned}
H &=&
\sum_{i,j,\sigma} t_{ij\sigma} c_{i\sigma}^\dagger c_{j\sigma}
+\sum_{i} U n_{i\uparrow} n_{i\downarrow}
\nonumber \\ & &
+\sum_{i,j,\sigma,\sigma'} V_{ij} n_{i\sigma} n_{j\sigma'}.
\label{hamiltonian}\end{aligned}$$ Here, $t_{ij\sigma}=t_{ij}+h_{z}{\rm sgn}(\sigma)\delta_{ij}$, where the hopping parameters $t_{ij}$ considered are the intrachain ($a$-axis direction in (TMTSF)$_{2}X$) nearest-neighbor $t_x$ and the interchain ($b$-axis direction) nearest-neighbor $t_y$, $t_{x}=1.0$ being taken as the energy unit. $U$ is the on-site interaction, and $V_{ij}$ are the off-site interactions: $V_x$, $V_{x2}$, and $V_{x3}$ are the nearest-, next-nearest, and 3rd-nearest-neighbor interactions within the chains, and $V_y$ is the interchain interaction. Note that we ignore the orbital effect, assuming that the magnetic field is applied parallel to the conductive $x$-$y$ plane, assuming a sufficiently large Maki parameter. (Since we neglect the orbital effect, the direction of the magnetic field within the $x$-$y$ plane is irrelevant within our approach.)
![(Color online) (a) Model adopted in this study. (b) Schematic figure of the gap for $d$-wave (left) and $f$-wave (right), where blue dashed lines indicate the nodes of the gap, and red solid curves indicate the disconnected Fermi surface. []{data-label="model"}](63957Fig1.eps){width="7.0cm"}
The bare susceptibilities, consisting of bubble-type and ladder-type diagrams, are written as $$\begin{aligned}
\chi_{0}^{\sigma \sigma}(k)
&=&\frac{-1}{N}\sum_{q}
\frac{f(\xi_{\sigma}(k+q))-f(\xi_{\sigma}(q))}
{\xi_{\sigma}(k+q)-\xi_{\sigma}(q)},
\label{a-chi0-para}
\\
\chi_{0}^{+-}(k)
&=&\frac{-1}{N}\sum_{q}
\frac{f(\xi_{\sigma}(k+q))-f(\xi_{\bar{\sigma}}(q))}
{\xi_{\sigma}(k+q)-\xi_{\bar{\sigma}}(q)},
\label{a-chi0-pm}\end{aligned}$$ where $\xi_{\sigma}(k)$ is the band dispersion that takes into account the Zeeman effect measured from the chemical potential $\mu$ and $f(\xi)$ is the Fermi distribution function.
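For concreteness, the bare susceptibilities can be evaluated numerically as in the following minimal sketch (this is not the authors' code). It assumes the tight-binding dispersion $\varepsilon(k)=2t_{x}\cos k_{x}+2t_{y}\cos k_{y}$ implied by the hopping terms of eq. (\[hamiltonian\]), uses a deliberately coarse mesh, and leaves the chemical potential as a placeholder instead of fixing it to the 3/4-filled value.

```python
# Minimal sketch of the Lindhard functions chi_0^{sigma sigma}(q) and chi_0^{+-}(q).
import numpy as np

t_x, t_y, h_z, T = 1.0, 0.2, 0.03, 0.012
mu = 0.0                      # placeholder; the paper fixes the filling to n = 1.5 instead
Nx, Ny = 64, 8                # coarse mesh (the paper uses 1024 x 128 k points)
kx = 2.0*np.pi*np.arange(Nx)/Nx
ky = 2.0*np.pi*np.arange(Ny)/Ny
KX, KY = np.meshgrid(kx, ky, indexing="ij")

def xi(s):
    """Zeeman-split band measured from mu; s = +1 (up) or -1 (down)."""
    return 2.0*t_x*np.cos(KX) + 2.0*t_y*np.cos(KY) + s*h_z - mu

def fermi(e):
    return 1.0/(np.exp(np.clip(e/T, -60.0, 60.0)) + 1.0)

def chi0(s_a, s_b, iqx, iqy):
    """chi_0(q) = -(1/N) sum_k [f(xi_a(k+q)) - f(xi_b(k))] / [xi_a(k+q) - xi_b(k)]."""
    xa = np.roll(np.roll(xi(s_a), -iqx, axis=0), -iqy, axis=1)   # xi_a(k + q)
    xb = xi(s_b)
    num, den = fermi(xa) - fermi(xb), xa - xb
    safe = np.abs(den) > 1e-10
    ratio = np.where(safe, num/np.where(safe, den, 1.0),
                     -fermi(xb)*(1.0 - fermi(xb))/T)             # limit df/dxi when xa = xb
    return -ratio.mean()

iq = (Nx//4, 0)                                  # an arbitrary q point for illustration
print(chi0(+1, +1, *iq), chi0(+1, -1, *iq))      # same-spin and transverse (+-) bubbles
```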
Within RPA that takes into account the magnetic field parallel to the spin quantization axis $\hat{z}$, [@Aizawa-Kuroki-Tanaka; @Aizawa-et-al-PRL] the longitudinal spin and charge susceptibilities are given by $$\begin{aligned}
\chi_{\rm sp}^{zz}=\frac{1}{2}
(\chi^{\uparrow \uparrow}+\chi^{\downarrow \downarrow}
-\chi^{\uparrow \downarrow}-\chi^{\downarrow \uparrow}),
\label{chispzz}
\\
%
\chi_{\rm ch}=\frac{1}{2}
(\chi^{\uparrow \uparrow}+\chi^{\downarrow \downarrow}
+\chi^{\uparrow \downarrow}+\chi^{\downarrow \uparrow}),
\label{chich}\end{aligned}$$ where $$\begin{aligned}
\chi^{\sigma \sigma}(k) &=&
\left[ 1+\chi_{0}^{{\bar \sigma} {\bar \sigma}}(k) V(k)\right]
\chi_{0}^{\sigma \sigma}(k)/A(k),
\label{a-chi-para}
\\
%
\chi^{\sigma {\bar \sigma}}(k) &=&
-\chi_{0}^{\sigma \sigma}(k)
\left[U+V(k)\right]\chi_{0}^{{\bar \sigma} {\bar \sigma}}(k)/A(k),
\label{a-chi-anti-para}
\\
%
A(k) &=& \left[ 1+\chi_{0}^{\sigma \sigma}(k) V(k) \right]
\left[ 1+\chi_{0}^{{\bar \sigma} {\bar \sigma}}(k) V(k)\right]
\nonumber \\ & &
-\left[U+V(k)\right]^{2}
\chi_{0}^{\sigma \sigma}(k)
\chi_{0}^{{\bar \sigma} {\bar \sigma}}(k). \end{aligned}$$ The transverse spin susceptibility is given by $$\begin{aligned}
\chi_{\rm sp}^{+-}(k)=\frac{\chi_{0}^{+-}(k)}{1-U\chi_{0}^{+-}(k)},
\label{chisppm}\end{aligned}$$ where we ignore the off-site repulsions because it is difficult to treat the effect of the off-site repulsions on the ladder diagrams.
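To make the bookkeeping of the RPA expressions above explicit, the following point-wise sketch (again not the authors' code) combines the bare susceptibilities with $U$ and $V(q)$; the Fourier form assumed for the off-site interaction, $V(q)=2V_{x}\cos q_{x}+2V_{x2}\cos 2q_{x}+2V_{x3}\cos 3q_{x}+2V_{y}\cos q_{y}$, is our reading of the couplings listed below eq. (\[hamiltonian\]).

```python
# Sketch of the RPA susceptibilities at a single momentum q; chi0_uu, chi0_dd and
# chi0_pm can be taken from the previous sketch, evaluated at the same q point.
import numpy as np

def V_of_q(qx, qy, Vx=0.9, Vx2=0.45, Vx3=0.1, Vy=0.35):
    # assumed Fourier transform of the off-site repulsions V_x, V_x2, V_x3, V_y
    return 2*Vx*np.cos(qx) + 2*Vx2*np.cos(2*qx) + 2*Vx3*np.cos(3*qx) + 2*Vy*np.cos(qy)

def rpa(chi0_uu, chi0_dd, chi0_pm, Vq, U=1.7):
    A = (1 + chi0_uu*Vq)*(1 + chi0_dd*Vq) - (U + Vq)**2*chi0_uu*chi0_dd
    chi_uu = (1 + chi0_dd*Vq)*chi0_uu/A
    chi_dd = (1 + chi0_uu*Vq)*chi0_dd/A
    chi_ud = -(U + Vq)*chi0_uu*chi0_dd/A          # equals chi_du
    chi_sp_zz = 0.5*(chi_uu + chi_dd - 2*chi_ud)  # longitudinal spin
    chi_ch    = 0.5*(chi_uu + chi_dd + 2*chi_ud)  # charge
    chi_sp_pm = chi0_pm/(1 - U*chi0_pm)           # transverse spin
    return chi_sp_zz, chi_ch, chi_sp_pm, chi_uu, chi_dd, chi_ud
```

Within RPA, a vanishing denominator $A(k)$ or $1-U\chi_{0}^{+-}(k)$ in such a sketch signals the density-wave instabilities whose proximity is discussed later in the text.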
The pairing interactions from the bubble and ladder diagrams are given by $$\begin{aligned}
V^{\sigma \bar{\sigma}}_{\rm bub}(k) &=&
U+V(k)+\frac{U^{2}}{2}\chi_{\rm sp}^{zz}(k)
\nonumber \\ & &
-\frac{\left[ U+2V(k) \right]^{2}}{2}\chi_{\rm ch}(k),
\label{a-Vs-bub}
\\
%
V^{\sigma \bar{\sigma}}_{\rm lad}(k) &=& U^{2}\chi_{\rm sp}^{+-}(k),
\label{a-Vs-lad}
\\
%
V^{\sigma \sigma}_{\rm bub}(k) &=&
V(k)-2\left[ U+V(k) \right]V(k)\chi^{{\sigma} \bar{\sigma}}(k)
\nonumber \\ & &
-V(k)^{2}\chi^{{\sigma} {\sigma}}(k)
\nonumber \\ & &
-\left[ U+V(k) \right]^{2}\chi^{\bar{\sigma} \bar{\sigma}}(k),
\label{a-Vt-para-bub}\\
%
V^{\sigma \sigma}_{\rm lad}(k) &=& 0.
\label{a-Vt-para-lad}\end{aligned}$$ The linearized gap equation for Cooper pairs with the total momentum $2Q_{c}$ ($Q_{c}$ represents the center of mass momentum) is given by $$\begin{aligned}
\lambda^{\sigma \sigma'}_{Q_{c}} \varphi^{\sigma \sigma'}(k)
= \frac{1}{N}\sum_{q}
[V^{\sigma \sigma'}_{\rm bub}(k-q)+V^{\sigma \sigma'}_{\rm lad}(k+q)]
\nonumber \\
\times
\frac{ f(\xi_{\sigma}(q_{+}))
-f(-\xi_{\sigma'}(-q_{-}))}
{\xi_{\sigma}(q_{+})+\xi_{\sigma'}(-q_{-})}
\varphi^{\sigma \sigma'}(q),
\label{gap-eq}\end{aligned}$$ where $q_{\pm}=q \pm Q_{c}$, $\varphi^{\sigma \sigma'}(k)$ is the gap function, and $\lambda^{\sigma \sigma'}_{Q_{c}}$ is the eigenvalue of this linearized gap equation. The center of mass momentum $\textbf{\textit{Q}}_{c}$, which gives the maximum value of $\lambda^{\sigma \bar{\sigma}}_{Q_{c}}$, lies in the $x$-direction, [@Shimahara-FFLO_direction-Q2D; @Shimahara-FFLO_direction-kappa-ET; @Yokoyama-Onari-Tanaka-FFLO] while $\lambda^{\sigma \sigma}_{Q_{c}}$ takes its maximum at $\textbf{\textit{Q}}_{c}=(0,0)$ because the electrons with the same spin can be paired as $(k\sigma, -k\sigma)$ for all $k$.
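To make these last two steps concrete, the sketch below (schematic, not the authors' actual procedure) first assembles the pairing interactions of the preceding paragraphs from the RPA susceptibilities, and then builds and diagonalizes the kernel of the linearized gap equation on a coarse mesh for a given $Q_{c}$; it exploits $\xi_{\sigma}(-k)=\xi_{\sigma}(k)$ for the cosine dispersion assumed earlier, and all function and variable names are ours.

```python
# Sketch: pairing interactions and the largest eigenvalue of the linearized gap
# equation for a given total momentum 2*Qc (opposite-spin channel shown).
import numpy as np

def pairing_interactions(chi_sp_zz, chi_ch, chi_sp_pm, chi_uu, chi_dd, chi_ud, Vq, U=1.7):
    """Pairing interactions at momentum k; chi_* are RPA susceptibilities at the same k."""
    V_opp_bub = U + Vq + 0.5*U**2*chi_sp_zz - 0.5*(U + 2*Vq)**2*chi_ch   # opposite spins, bubble
    V_opp_lad = U**2*chi_sp_pm                                           # opposite spins, ladder
    V_par_bub = (Vq - 2*(U + Vq)*Vq*chi_ud
                 - Vq**2*chi_uu - (U + Vq)**2*chi_dd)                    # parallel (up-up) spins
    return V_opp_bub, V_opp_lad, V_par_bub, 0.0                          # parallel ladder term vanishes

def gap_eigenvalue(V_bub, V_lad, xi_a, xi_b, Qc, T):
    """V_bub, V_lad: interaction arrays on the (Nx, Ny) mesh; xi_a, xi_b: dispersions
    of the two paired spins; Qc = (iqx, iqy) in mesh units."""
    Nx, Ny = xi_a.shape
    N = Nx*Ny
    f = lambda e: 1.0/(np.exp(np.clip(e/T, -60.0, 60.0)) + 1.0)
    iqx, iqy = Qc
    xp = np.roll(np.roll(xi_a, -iqx, axis=0), -iqy, axis=1)   # xi_a(q + Qc)
    xm = np.roll(np.roll(xi_b,  iqx, axis=0),  iqy, axis=1)   # xi_b(q - Qc) = xi_b(-q + Qc)
    num, den = f(xp) - f(-xm), xp + xm
    safe = np.abs(den) > 1e-10
    F = np.where(safe, num/np.where(safe, den, 1.0), -f(xp)*(1.0 - f(xp))/T)
    kxi, kyi, qxi, qyi = np.ix_(np.arange(Nx), np.arange(Ny), np.arange(Nx), np.arange(Ny))
    V = (V_bub[(kxi - qxi) % Nx, (kyi - qyi) % Ny]            # V_bub(k - q)
         + V_lad[(kxi + qxi) % Nx, (kyi + qyi) % Ny])         # V_lad(k + q)
    K = (V*F[qxi, qyi]).reshape(N, N)/N
    return np.linalg.eigvals(K).real.max()
```

Scanning $Q_{c}=(Q_{cx},0)$ with such a routine and comparing the resulting eigenvalues is, in spirit, how the competition between the pairing states classified below is resolved, although the actual results in this paper are obtained on the much finer 1024$\times$128 mesh.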
We define the singlet and $S_{z}=0$ triplet components of the gap function in the opposite spin pairing channel as $$\begin{aligned}
\varphi_{{\rm SS}}(k)&=&
\frac{ \varphi^{\uparrow \downarrow}(k)
-\varphi^{\downarrow \uparrow}(k) }{2},
\nonumber \\
\varphi_{{\rm ST}^{0}}(k)&=&
\frac{ \varphi^{\uparrow \downarrow}(k)
+\varphi^{\downarrow \uparrow}(k) }{2}.
\label{eq-phis-phit}\end{aligned}$$ In our calculation, the spin singlet and triplet components of the gap function in the FFLO state are essentially the $d$-wave and $f$-wave, respectively, as schematically shown in Fig. \[model\](b); thus, we write the singlet ($S_{z}=0$ triplet) component of the FFLO gap $\varphi_{{\rm SS}}$ ($\varphi_{{\rm ST}^{0}}$) in eq. (\[eq-phis-phit\]) as $\varphi_{{\rm SS}d}$ ($\varphi_{{\rm ST}f^{0}}$), where SS$d$ (ST$f^{0}$) stands for spin singlet $d$-wave (spin triplet $f$-wave with $S_{z}=0$) pairing. The eigenvalue of each pairing state is determined as follows. $\lambda^{\sigma \bar{\sigma}}_{Q_{c}}$ with $\textbf{\textit{Q}}_{c}=(0,0)$ gives the eigenvalue of the singlet $d$-wave pairing $\lambda_{{\rm SS}d}$ ($S_z=0$ triplet $f$-wave pairing $\lambda_{{\rm ST}f^{0}}$) when we set $\varphi_{{\rm ST}f^{0}}=0$ ($\varphi_{{\rm SS}d}=0$), while $\lambda^{\sigma \bar{\sigma}}_{Q_{c}}$ with $\textbf{\textit{Q}}_{c} \ne (0,0)$ gives $\lambda_{{\rm FFLO}}$. $\lambda^{\sigma \sigma}_{Q_{c}}$ with $\textbf{\textit{Q}}_{c}=(0,0)$ gives the eigenvalue for the spin triplet $f$-wave pairing with $S_{z}=+1$ ($S_{z}=-1$), $\lambda_{{\rm ST}f^{+1}}$ ($\lambda_{{\rm ST}f^{-1}}$). The above-mentioned results of the determination of the eigenvalues are listed in Table \[pairing-sort-table\].
  Center of mass momentum and paired spins for $\lambda^{\sigma \sigma'}_{\textbf{\textit{Q}}_{c}}$   Pairing symmetry or SC state
  --------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------
  $\textbf{\textit{Q}}_{c}=0$, $\sigma \ne \sigma'$                                                     singlet $d$-wave ($\lambda_{{\rm SS}d}$) for $\varphi_{{\rm ST}f^{0}}\left( k \right)=0$
  $\textbf{\textit{Q}}_{c}=0$, $\sigma \ne \sigma'$                                                     $S_{z}=0$ triplet $f$-wave ($\lambda_{{\rm ST}f^{0}}$) for $\varphi_{{\rm SS}d}\left( k \right)=0$
  $\textbf{\textit{Q}}_{c}=0$, $\sigma = \sigma'$                                                       $S_{z}=\pm 1$ triplet $f$-wave ($\lambda_{{\rm ST}f^{\pm 1}}$)
  $\textbf{\textit{Q}}_{c} \ne 0$, $\sigma \ne \sigma'$                                                 FFLO state ($\lambda_{\rm FFLO}$)
  $\textbf{\textit{Q}}_{c} \ne 0$, $\sigma = \sigma'$                                                   not dominant state
: Results of the determination of the eigenvalue of the linearized gap equation $\lambda^{\sigma \sigma'}_{\textbf{\textit{Q}}_{c}}$. []{data-label="pairing-sort-table"}
Although RPA is quantitatively insufficient for discussing the absolute value of $T_{c}$, we expect this approach to be valid for studying the competition between different pairing symmetries. In this paper, we fix the hopping parameters as $t_{x}=1.0$ and $t_{y}=0.2$, and the electron-electron interactions as $U=1.7$, $V_{x}=0.9$, $V_{x2}=0.45$, and $V_{x3}=0.1$, and vary $V_{y}$. Since the dimerization of TMTSF molecules is very small in (TMTSF)$_{2}X$ compounds, we ignore the dimerization and fix the band filling as $n=1.5$ (3/4 filling), where $n=$ number of electrons/number of sites. 1024$\times$128 $k$-point meshes are taken, where we take a large number of $k_x$ meshes since the center of mass momentum $\textbf{\textit{Q}}_{c}$, which gives the maximum value of the FFLO state, lies in the $x$-direction.
Results\[Results\]
==================
Center of mass momentum and the gap function
---------------------------------------------
In this section, we study the nature of the FFLO state in our model. Let us first study the center of mass momentum at which the FFLO state is most stabilized. The optimum $\textbf{\textit{Q}}_{c}$ that most stabilizes the FFLO state can be determined as the $\textbf{\textit{Q}}_{c}$ at which the eigenvalue of the gap equation is maximized. In the following results, we set the interchain off-site interaction as $V_{y}=0.35$, and compute $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$ for $\textbf{\textit{Q}}_{c}=\left( \textit{Q}_{cx}, \textit{Q}_{cy} \right)$, where $\textit{Q}_{cx}$ is given in units of $\pi/512$ and $\textit{Q}_{cy}$ in units of $\pi/64$. Figure \[fig2\] shows the eigenvalue of the linearized gap equation in the opposite-spin pairing channel $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$ as a function of the $x$-component of the center of mass momentum $\textit{Q}_{cx}$ for various $\textit{Q}_{cy}$.
![(Color online) ${\textit{Q}}_{cx}$-dependence of the eigenvalue in the opposite-spin pairing channel, $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$, for (a) $h_{z}=0.01$, (b) $h_{z}=0.03$, and (c) $h_{z}=0.06$ at $V_{y}=0.35$. []{data-label="fig2"}](63957Fig2.eps){width="8.0cm"}
When the magnetic field is small ($h_{z}=0.01$), the pairing state with $\textbf{\textit{Q}}_{c}=\left( 0, 0 \right)$ dominates over other finite momentum states, as seen in Fig. \[fig2\](a). For a larger magnetic field ($h_{z}=0.03$), a finite momentum pairing state with $\textit{Q}_{cx}=3$ and $\textit{Q}_{cy}=0$ dominates over other states in the opposite-spin pairing channel, as shown in Fig. \[fig2\](b). When the magnetic field is increased up to $h_{z}=0.06$, a finite momentum pairing state with $\textit{Q}_{cx}=7$ and $\textit{Q}_{cy}=0$ is the most dominant, but the eigenvalue $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$ itself decreases, as shown in Fig. \[fig2\](c). Further studying other $h_z$ cases, we find that the most dominant center of mass momentum lies in the $x$-direction, [@Shimahara-FFLO_direction-Q2D; @Shimahara-FFLO_direction-kappa-ET; @Yokoyama-Onari-Tanaka-FFLO] and the magnitude of the center of mass momentum increases with increasing magnetic field. [@Yokoyama-Onari-Tanaka-FFLO]
The direction of the center of mass momentum vector $\textbf{\textit{Q}}_{c}$ can be understood from the Fermi surface split by the Zeeman effect shown in Fig. \[fig3\]. The electron pair part, i.e., the particle-particle susceptibility, in the linearized gap equation is rewritten as $$\begin{aligned}
& & \frac{ f(\xi_{\sigma}(q+Q_{c}))
-f(-\xi_{\sigma'}(-q+Q_{c}))}
{\xi_{\sigma}(q+Q_{c})+\xi_{\sigma'}(-q+Q_{c})}
\nonumber \\
&=& \frac{1}{\beta}\sum_{\varepsilon_{n}}
G_{\sigma }\left( q+Q_{c}, i \varepsilon_{n} \right)
G_{\sigma'}\left( -q+Q_{c},-i \varepsilon_{n} \right),
\,\,\,\,\,\,\,\,\,\,
\label{GG}\end{aligned}$$ where $\xi_{\sigma}(-k+Q_{c})$ is the same as $\xi_{\sigma}(k-Q_{c})$ since $\xi_{\sigma}(k)=\xi_{\sigma}(-k)$ is satisfied. If the $\sigma$ spin electron energy at the wave vector $q+Q_c$ and the $\sigma'$ spin electron energy at the wave vector $-q+Q_c$ are close to the Fermi energy, the denominator is small and eq. (\[GG\]) can take a large value. For a quasi-one-dimensional system, the number of wave vectors $q$ that satisfy such a condition becomes the largest when the vector $Q_c$ is in the $k_x$ direction.
![(Color online) The small purple arrow in the $k_{x}$-direction denoting $\textbf{\textit{Q}}_{\rm FFLO}$ schematically shows the direction of the center of mass momentum vector $\textbf{\textit{Q}}_{c}$ in the FFLO state. The black thin solid curves schematically represent the Fermi surface in zero field. The red thick solid (green thick dashed) curves schematically represent the Fermi surface split by the Zeeman effect. The red and green filled circles are particles on each Fermi surface in the presence of the field, and the gray filled squares are particles in zero field. []{data-label="fig3"}](63957Fig3.eps){width="7.0cm"}
Next, we study the gap functions normalized by the maximum value of the singlet component gap function in the FFLO state. We set the parameters as $h_{z}=0.03$, $V_{y}=0.35$, and $T=0.012$, where the FFLO state with finite center of mass momentum $(Q_{cx}, Q_{cy})=(3, 0)$ is the most dominant, as described later. Note that the $S_{z} = \pm 1$ triplet pairings always have the maximum value of the eigenvalue $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \sigma}$ at $\textbf{\textit{Q}}_{c}=\left( 0, 0 \right)$, as mentioned previously. As shown in Figs. \[fig4\](a) and \[fig4\](b), the singlet component of the gap function in the FFLO state is the $d$-wave and the $S_{z}=0$ triplet component is the $f$-wave.
![(Color online) Gap function for (a) singlet component, (b) $S_{z} = 0$ triplet component in the FFLO state with $Q_{cx}=3$ and $Q_{cy}=0$, (c) $S_{z} = +1$ triplet state, and (d) $S_{z} = -1$ triplet state, where black solid curves represent the Fermi surface, and the green dashed curves represent the nodes of the gap. The parameters are $h_{z}=0.03$, $V_{y}=0.35$, and $T=0.012$. []{data-label="fig4"}](63957Fig4.eps){width="8.5cm"}
The maximum value of the $S_{z}=0$ triplet gap component in the FFLO state almost reaches unity. Thus, the singlet $d$-wave component and the $S_{z}=0$ triplet $f$-wave component strongly mix in this FFLO state. The gap function in the $S_{z}=\pm 1$ triplet pairings has the $f$-wave form shown in Figs. \[fig4\](c) and \[fig4\](d).
The appearance of the $d$-wave gap in the singlet component and the $f$-wave gap in the $S_{z}=0$ triplet component in the FFLO state is understood as follows. In zero field, the singlet $d$-wave pairing mediated by the $2k_{F}$ spin fluctuations is favored in the Q1D Hubbard model, namely, the large pairing interaction due to the $2k_{F}$ spin fluctuations stabilizes the spin singlet $d$-wave pairing. [@Kuroki-Arita-Aoki; @Kino-Kontani-TMTCF-FLEX; @Nomura-Yamada-TMTCF-TOP; @Kuroki-Aoki-TMTCF-QMC; @Kuroki-Tanaka-etal-TMTCF-QMC] Moreover, the coexistence of $2k_{F}$ charge fluctuations, which is induced by the second-nearest-neighboring repulsive interaction, favors the triplet $f$-wave pairing in the Q1D extended Hubbard model at quarter filling. [@Kuroki-Arita-Aoki; @Tanaka-Kuroki; @Kuroki-Tanaka; @Fuseya-Suzumura; @Nickel-Duprat] The reason why the spin triplet $f$-wave pairing can compete with the spin singlet $d$-wave pairing in the Q1D extended Hubbard model is (i) the contribution of the $2k_{F}$ charge fluctuations in the pairing interaction enhances the spin triplet $f$-wave pairing and suppresses the spin singlet $d$-wave pairing, and (ii) $f$ and $d$-wave pairings have the same number of gap nodes intersecting the Fermi surface due to the disconnectivity of the Fermi surface (quasi-one-dimensionality). The above mechanism is valid even in the presence of the magnetic field, but more importantly, the spin triplet $f$-wave pairing mediated by the $2k_{F}$ spin + $2k_{F}$ charge fluctuations can be enhanced by applying the magnetic field since the bubble-type diagram enhanced by the field contributes to the pairing interaction without being paired with the bubble-type diagram, which is suppressed by the field. [@Aizawa-Kuroki-Tanaka] Actually, our previous work shows a clear correlation between the $S_{z}=0$ triplet ratio in the FFLO state and the ratio of the eigenvalue between the $S_{z}=0$ triplet and singlet pairings obtained by the formulation of separating the singlet and $S_{z}=0$ triplet channels. [@Aizawa-et-al-PRL] From the above, we can understand not only the appearance of the $d$-wave ($f$-wave) gap in the singlet ($S_{z}=0$ triplet) component of the opposite-spin pairing channel and the $f$-wave gap in the parallel-spin pairing channel, but also the large parity mixing of the singlet and $S_{z}=0$ triplet components in the FFLO state.
Figure \[fig5\] shows the parity mixing $\varphi_{{\rm ST}f^{0}}/\varphi_{{\rm SS}d}$ in the opposite-spin pairing channel as a function of the $x$-component of the center of mass momentum $Q_{cx}$ for several $Q_{cy}$. Note that we need to bear in mind the $\textbf{\textit{Q}}_{c}$ dependence of the eigenvalue $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$, shown in Fig. \[fig2\], when examining the $\textbf{\textit{Q}}_{c}$ dependence of the parity mixing rate, because the most dominant state in the opposite-spin pairing channel is determined by the value of $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \bar{\sigma}}$. For instance, we have seen in Fig. \[fig2\](a) that the singlet $d$-wave pairing, i.e., the opposite-spin pairing state with $( Q_{cx}, Q_{cy} )=( 0, 0 )$, is the most dominant in the small magnetic field regime. In Fig. \[fig5\](a), the parity mixing rate $\varphi_{{\rm ST}f^{0}}/\varphi_{{\rm SS}d}$ for $h_{z}=0.01$ is zero at $( Q_{cx}, Q_{cy} )=( 0, 0 )$. Therefore, no $S_{z}=0$ triplet $f$-wave component is present in this pairing state, and the opposite-spin pairing channel is purely spin singlet $d$-wave. For $h_{z}=0.03$, we have seen in Fig. \[fig2\](b) that the FFLO state with $Q_{cx}=3$ and $Q_{cy}=0$ is dominant. As shown in Fig. \[fig5\](b), the parity mixing rate for $Q_{cx}=3$ and $Q_{cy}=0$ takes a large value $\varphi_{{\rm ST}f^{0}}/\varphi_{{\rm SS}d} \simeq 0.8$. For $h_{z}=0.06$, where the FFLO state with $Q_{cx}=7$ and $Q_{cy}=0$ is dominant (Fig. \[fig2\](c)), the parity mixing rate increases, i.e., $\varphi_{{\rm ST}f^{0}}/\varphi_{{\rm SS}d} \simeq 1.0$, as shown in Fig. \[fig5\](c), which means that the singlet $d$-wave component and the $S_{z}=0$ triplet $f$-wave component are strongly mixed in this FFLO state (provided this state is actually realized).
![(Color online) ${\textit{Q}}_{cx}$-dependence of the parity mixing in the opposite-spin pairing channel, $\varphi_{{\rm ST}f^{0}}/\varphi_{{\rm SS}d}$, for (a) $h_{z}=0.01$, (b) $h_{z}=0.03$, and (c) $h_{z}=0.06$ at $V_{y}=0.35$. []{data-label="fig5"}](63957Fig5.eps){width="8.0cm"}
The strong parity mixing in the FFLO state can be understood as a consequence of the breaking of the spatial inversion symmetry in the superconducting state. Previous theoretical studies have shown that the parity mixing of the singlet and triplet pairings stabilizes the FFLO state more than when only the singlet component is considered.
Temperature dependence
----------------------
Next, we investigate the temperature dependence of the eigenvalue, $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \sigma'}$, in both the opposite- and parallel-spin pairing states. We have confirmed that the center of mass momentum that most stabilizes the FFLO state is unchanged upon lowering the temperature for a fixed magnetic field. For $h_{z}=0.01$, the eigenvalue $\lambda_{{\rm SS}d} =
\lambda_{\textbf{\textit{Q}}_{c}={\textbf 0}}^{\sigma \bar{\sigma}}$ of the spin singlet $d$-wave pairing reaches unity, as shown in Fig. \[fig6\](a).
![(Color online) Eigenvalue of the linearized gap equation, $\lambda_{\textbf{\textit{Q}}_{c}}^{\sigma \sigma'}$, plotted as a function of the temperature $T$ for (a) $h_{z}=0.01$, (b) $h_{z}=0.03$, and (c) $h_{z}=0.06$ with $V_{y}=0.35$. Note that SS$d$ and ST$f^{\pm 1}$ have $\textbf{\textit{Q}}_{c}=\textbf{0}$, and the FFLO state has a finite $\textbf{\textit{Q}}_{c}$ that maximizes the eigenvalue of the opposite-spin channel in Fig. \[fig2\].[]{data-label="fig6"}](63957Fig6.eps){width="8.0cm"}
In this small magnetic field regime, the FFLO state is absent, as shown in Fig. \[fig2\](a). For $h_{z}=0.03$, the singlet $d$-wave pairing is suppressed and the eigenvalue $\lambda_{{\rm FFLO}} =
\lambda_{\textbf{\textit{Q}}_{c} \ne {\textbf 0}}^{\sigma \bar{\sigma}}$ of the FFLO state with $Q_{cx}=3$ and $Q_{cy}=0$ reaches unity as seen in Fig. \[fig6\](b). For $h_{z}=0.06$, the FFLO state with $Q_{cx}=7$ and $Q_{cy}=0$ does not develop much upon lowering the temperature, while the eigenvalue $\lambda_{{\rm ST}f^{+1}} =
\lambda_{\textbf{\textit{Q}}_{c}={\textbf 0}}^{\sigma \sigma}$ for the $S_{z}=1$ triplet $f$-wave state reaches unity, as shown in Fig. \[fig6\](c). The eigenvalue of the singlet $d$-wave and $S_{z}=-1$ triplet $f$-wave pairings remains small even in the low temperature regime.
Calculated phase diagram
------------------------
We now obtain a phase diagram in the temperature $T$ versus the magnetic field $h_{z}$ space for several values of interchain off-site interaction $V_{y}$ (which controls the strength of the charge fluctuations). Figure \[fig7\](a) shows a plot of the critical temperature $T_{c}$ against the magnetic field $h_{z}$ for $V_{y}=0.35$, where the $2k_{F}$ charge fluctuations are slightly weaker than the $2k_{F}$ spin fluctuations.
![(Color online) (a) Calculated phase diagram in $h_{z}$-$T$ space for $V_{y}=0.35$, where the green dashed line indicates the $T_{c}$ for the spin singlet $d$-wave, the red solid line represents that for the FFLO state, and the blue dotted line indicates that for the $S_{z}=1$ spin triplet $f$-wave. The spin singlet $d$-wave is abbreviated as ${\rm SS}d$ and the $S_{z}=1$ spin triplet $f$-wave as ${\rm ST}f^{+1}$. The same notation is used in Figs. \[fig8\] and \[fig9\]. (b) Schematic figure of the orbital pair breaking effect on the superconducting phase diagram in $T$-$h_{z}$ space, where black solid arrows schematically represent the orbital pair breaking effect. []{data-label="fig7"}](63957Fig7.eps){width="8.0cm"}
The critical temperature in zero field is $T_{c} \simeq 0.012$ and the estimated Pauli paramagnetic limiting field is $h_{z}^{\rm P} \simeq 0.03$. We see that a consecutive transition from singlet pairing to the FFLO state and further to $S_{z}=1$ triplet pairing occurs upon increasing the magnetic field.
This consecutive pairing transition can be understood as follows. It is known that the FFLO state can be stabilized by the quasi-one-dimensionality, namely, the nesting of the Fermi surface. [@Shimahara-FFLO_direction-Q2D; @Shimahara-FFLO_direction-kappa-ET; @Yokoyama-Onari-Tanaka-FFLO] Thus, the quasi-one-dimensionality of the present model is one of the origins of the transition from the $d$-wave to the FFLO state. The origin of the pairing transition from the FFLO state to the $S_{z}=1$ triplet pairing is understood by our previous study, where we have shown that the triplet pairing due to the coexisting $2k_{F}$ spin and $2k_{F}$ charge fluctuations is strongly enhanced by the direct contribution of the unpaired bubble diagram enhanced by the field. [@Aizawa-Kuroki-Tanaka]
Here, we emphasize that we ignore the orbital pair breaking effect in this study because our aim in this work is to study the competition between the singlet, FFLO, and triplet pairings in the case when the magnetic field is applied in the conductive plane, i.e., the $a$-$b$ plane of (TMTSF)$_{2}X$. For discussing the above pairing competition, the Zeeman splitting effect is essential for the FFLO state; thus, we ignore the orbital pair breaking effect at the beginning. Although this effect is small in applying the magnetic field parallel to the conductive plane, the orbital pair breaking effect is present in actual materials. Furthermore, previous studies have shown that the orbital pair breaking effect is important in discussing the FFLO superconductivity. [@Gruenberg-Gunther; @Maki-Won; @Shimahara-Rainer; @Tachiki-Takahashi-et-al; @Houzet-Buzdin; @Ikeda1; @Ikeda2; @Maniv-Zhuravlev; @Mizushima-Machida-Ichioka; @Ichioka-Adachi-et-al; @Klemm-Luther-Beasley; @Samokhin]
As shown in Fig. \[fig7\](a), there seems to be a reentrance from the superconducting state to another superconducting state with the normal state intervening. However, it is much more reasonable to consider that this reentrance does not actually occur owing to the presence of the orbital pair breaking effect. If this effect is taken into account, not only the FFLO state but also the singlet and triplet pairing states should be strongly suppressed upon increasing the magnetic field. Figure \[fig7\](b) shows a schematic figure of the effect of the orbital pair breaking, where the $T_c$ obtained (without the orbital effect) in Fig. \[fig7\](a) (thin curve) is suppressed down to the thick curve. The thick curve in Fig. \[fig7\](b) is reminiscent of the experimental $T$-$H$ phase diagram [@Lee-Naughton-etal-PF6-Hc2-PRL; @Lee-Chaikin-etal-PF6-Hc2-PRB; @Oh-Naughton-ClO4-Hc2; @Yonezawa-Kusaba-et-al-PRL; @Yonezawa-Kusaba-et-al-JPSJ] in that the $T_{c}$ curve makes an upturn from nearly above the Pauli limit.
Next, we study the effect of the interchain interaction $V_{y}$ on the phase diagram in the temperature $T$ versus the magnetic field $h_{z}$ space. Figure \[fig8\](a) shows the critical temperature $T_{c}$ at each magnetic field $h_{z}$ for $V_{y}=0.38$. The magnetic field at which the transition from the FFLO state to the $S_{z}=1$ triplet $f$-wave pairing occurs is smaller than that in the $V_{y}=0.35$ case.
![(Color online) Calculated phase diagram in $h_{z}$-$T$ space for (a) $V_{y}=0.38$ and (b) $V_{y}=0.32$, where the notation is the same as that in Fig. \[fig7\](a). []{data-label="fig8"}](63957Fig8.eps){width="8.0cm"}
The critical temperature $T_{c}$ for $V_{y}=0.32$ is shown in Fig. \[fig8\](b), which shows that the magnetic field at which the FFLO state gives way to the $S_{z}=1$ triplet $f$-wave pairing is larger than those in the previous phase diagrams shown in Figs. \[fig7\](a) and \[fig8\](a). The difference between the two phase diagrams is due to the fact that the $2k_F$ charge fluctuations enhance the triplet $f$-wave pairing; thus, the FFLO state appears only in a small parameter regime in between the $d$- and $f$-wave pairings.
Summarizing the above-mentioned features, we show the phase diagram in $T$-$V_y$-$h_z$ space in Fig. \[fig9\].
![(Color online) The critical temperature is shown in the $V_{y}$-$h_{z}$ plane. The value of $T_{c}$ is plotted in the vertical axis and represented by contours. Green dashed lines represent the critical temperature for the singlet $d$-wave pairing, red solid lines are the $T_{c}$ for the FFLO state, and the blue dotted lines are the $T_{c}$ for the $S_{z}=1$ spin triplet $f$-wave pairing, respectively. []{data-label="fig9"}](63957Fig9.eps){width="8.0cm"}
When $V_{y}$ is small and thus the $2k_{F}$ spin fluctuations are dominant over the $2k_{F}$ charge fluctuations, $T_{c}$ decreases and the transition from the spin singlet $d$-wave to the FFLO state occurs upon increasing $h_{z}$. In this FFLO state, strong parity mixing between the spin singlet $d$-wave component and the $S_{z}=0$ spin triplet $f$-wave component occurs. In the large $V_{y}$ regime, the $2k_{F}$ charge fluctuations compete with the $2k_{F}$ spin fluctuations, and the consecutive pairing state transition from the spin singlet $d$-wave to the FFLO state and further to the $S_{z}=1$ spin triplet $f$-wave occurs at the critical temperature $T_{c}$ upon increasing $h_{z}$. The $T_{c}$ enhancement of the $S_{z}=1$ spin triplet $f$-wave pairing in the large $h_{z}$ regime can be understood from our previous work. [@Aizawa-Kuroki-Tanaka]
Conclusion\[Conclusion\]
========================
We have studied the competition between spin singlet, triplet, and FFLO superconductivities in a model for (TMTSF)$_{2}X$ by applying the RPA method and solving the linearized gap equation within the weak coupling theory. We find the following:
\(i) consecutive pairing transitions from singlet pairing to the FFLO state and further to $S_{z}=1$ triplet pairing can occur upon increasing the magnetic field in the vicinity of the SDW+CDW coexisting phase.
\(ii) in the FFLO state, the $S_{z}=0$ spin triplet pairing component is mixed with the spin singlet pairing component, thus resulting in a large parity mixing.
Recent experiments for (TMTSF)$_{2}$ClO$_4$ suggest differences in superconducting properties in the low and high field regimes. The Knight shift study shows the presence of low field and high field pairing states, where the former is the spin singlet pairing and the latter is the FFLO state or the spin triplet pairing. [@Shinagawa-Kurosaki-et-al] The upper critical field studies have shown that only clean samples, or more strictly, samples in which the broadening due to nonmagnetic impurities is small, exhibit an upturn of the critical temperature curve above 4 T in the high field regime with the field parallel to the $a$ axis; thus, the high field superconducting state is sensitive to the impurity content or the anisotropy of the impurity scattering potential. [@Yonezawa-Kusaba-et-al-JPSJ] Between 4 T and the Pauli limit around 2.5 T, there seems to be a different high field pairing state, in which superconductivity is stable against the impurities, but it is very sensitive to the tilt of the magnetic field out of the $a$-$b$ plane. The bottom line of these experiments is that there may be three kinds of pairing states, i.e., one low field state and two high field states. The correspondence between these experimental observations and the present study is not clear at the present stage, but the appearance of the three kinds of pairing states is indeed intriguing. It would be interesting to further investigate experimentally the possibility and nature of the two kinds of high field pairing states.
One point that should be mentioned for (TMTSF)$_{2}$ClO$_{4}$ in particular is the presence of the anion ordering with the modulation wave vector $\textbf{\textit Q}_{\rm AO} = \left( 0, \pi/b \right)$, which takes place near $T_{\rm AO} \simeq$ 24K when slowly cooled. Recent studies show that the anion ordering potential ($V_{\rm AO}$) is around $0.02 t_{x}$. [@Yoshino-Shodai-SM-133-55; @Lebed-Ha-PRB-71-132504] The anion ordering leads to a folding of the Brillouin zone in the $k_{y}$-direction ($b$-direction), and in that case, the $d$-wave (and also $f$-wave in the same sense) gap can become nodeless because the folded Fermi surface becomes disconnected near the nodes of the gap, as has been suggested by Shimahara. [@Shimahara-PRB-61-R14938] This effect is neglected in our present study, and its effect on the pairing symmetry competition is an interesting future problem.
Another point to be mentioned is that in the present study, we do not take the retardation effect into account. By taking this effect into account, i.e., the frequency dependence of the gap function, we can discuss the odd-frequency pairing state. [@Berezinskii-JETP-Lett-20-287; @Balatsky-Abrahams-PRB-45-13125; @Bergeret-Volkov-RMP-77-1321] It has been shown that odd-frequency pairing can be realized in a certain quasi-one-dimensional lattice. [@Shigeta-Onari] In particular, in the presence of non-uniformity, the odd-frequency pairing amplitude is ubiquitously generated. [@Tanaka-Golubov-PRL-98-037003; @Tanaka-Golubov-PRL-99-037005; @Tanaka-Tanuma-PRB-76-054533; @Tanaka-Asano-PRB-77-220504R; @Yokoyama-Tanaka-PRB-78-012508; @Tanuma-Hayashi-PRL-102-11703] It is an interesting future problem to study the possible existence of odd-frequency pairing in quasi-one-dimensional organic superconductors.
Acknowledgment {#acknowledgment .unnumbered}
==============
We acknowledge S. Yonezawa for valuable discussions. This work is supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and from the Japan Society for the Promotion of Science. Part of the calculation has been performed at the facilities of the Supercomputer Center, ISSP, University of Tokyo.
[999]{}
D. J${\rm \acute{e}}$rome, A. Mazaud, M. Ribault, and K. Bechgaard: J. Phys. Lett. (Paris) **41** (1980) L95.
T. Ishiguro, K. Yamaji, and G. Saito: *Organic Superconductors* (Springer-Verlag, Heidelberg, 1998) 2nd ed.
M. Lang and J. M${\rm \ddot{u}}$ller: arXiv:cond-mat/0302157 published in *The Physics of Superconductors - Vol.2* (Springer-Verlag, Heidelberg, 2003).
L. B. Coleman, M. J. Cohen, D. J. Sandman, F. G. Yamagishi, A. F. Garito, and A. J. Heeger: Solid State Commun. **12** (1973) 1125.
S. S. P. Parkin, E. M. Engler, R. R. Schumaker, R. Lagier, V. Y. Lee, J. C. Scott, and R. L. Greene: Phys. Rev. Lett. **50** (1983) 270.
C. Bourbonnais and D. J${\rm \acute{e}}$rome: arXiv:cond-mat/9903101 published in *Advances in Synthetic Metals, Twenty years of Progress in Science and Technology* (Elsevier, New York, 1999).
For a recent review, Chem. Rev. (2004) **104**, special issue on Molecular Conductors.
For a recent review, J. Phys. Soc. Jpn. (2006) **75**, special topics on Organic Conductors.
H. Seo, C. Hotta, and H. Fukuyama: Chem. Rev. **104** (2004) 5005.
D. J${\rm \acute{e}}$rome: Chem. Rev. **104** (2004) 5565.
I. J. Lee, S. E. Brown, and M. J. Naughton: J. Phys. Soc. Jpn. **75** (2006) 051011.
K. Kuroki: J. Phys. Soc. Jpn. **75** (2006) 051013.
N. Dupuis, C. Bourbonnais, and J. C. Nickel: Low. Temp. Phys. **32** (2006) 380.
M. Takigawa, H. Yasuoka, and G. Saito: J. Phys. Soc. Jpn. **56** (1987) 873.
Y. Hasegawa and H. Fukuyama: J. Phys. Soc. Jpn. **56** (1987) 877.
S. Bouffard, M. Ribault, R. Brusetti, D. J${\rm \acute{e}}$rome, and K. Bechgaard: J. Phys. C **15** (1982) 2951.
C. Coulon, P. Delha${\rm \acute{e}}$s, J. Amiell, J. P. Manceau, J. M. Fabre, and L. Giral: J. Phys. (Paris) **43** (1982) 1721.
M. Y. Choi, P. M. Chaikin, S. Z. Huang, P. Haen, E. M. Engler, and R. L. Greene: Phys. Rev. B **25** (1982) 6208.
S. Tomic, D. J${\rm \acute{e}}$rome, D. Mailly, M. Ribault, and K. Bechgaard: J. Phys. (Paris) **44** (1983) C3-1075.
N. Joo, P. Auban-Senzier, C. R. Pasquier, P. Monod, D. J${\rm \acute{e}}$rome, and K. Bechgaard: Eur. Phys. J. B **40** (2004) 43.
N. Joo, P. Auban-Senzier, C. R. Pasquier, D. J${\rm \acute{e}}$rome, and K. Bechgaard: Europhys. Lett. **72** (2005) 645.
S. Belin and K. Behnia: Phys. Rev. Lett. **79** (1997) 2125.
I. J. Lee, S. E. Brown, W. G. Clark, M. J. Strouse, M. J. Naughton, W. Kang, and P. M. Chaikin: Phys. Rev. Lett. **88** (2002) 017004.
I. J. Lee, D. S. Chow, W. G. Clark, M. J. Strouse, M. J. Naughton, P. M. Chaikin, and S. E. Brown: Phys. Rev. B **68** (2003) 092510.
J. Shinagawa, W. Wu, P. M. Chaikin, W. Kang, W. Yu, F. Zhang, Y. Kurosaki, C. Parker, and S. E. Brown: J. Low Temp. Phys. **142** (2007) 227.
I. J. Lee, M. J. Naughton, G. M. Danner, and P. M. Chaikin: Phys. Rev. Lett. **78** (1997) 3555.
I. J. Lee, P. M. Chaikin, and M. J. Naughton: Phys. Rev. B **62** (2000) R14669.
J. I. Oh and M. J. Naughton: Phys. Rev. Lett. **92** (2004) 067001.
P. Fulde and R. A. Ferrell: Phys. Rev. **135** (1964) A550.
A. I. Larkin and Yu. N. Ovchinnikov: Zh. Eksp. Teor. Fiz. **47** (1964) 1136 \[Sov. Phys. JETP **20** (1965) 762\].
J. Shinagawa, Y. Kurosaki, F. Zhang, C. Parker, S. E. Brown, D. J${\rm \acute{e}}$rome, J. B. Christensen, and K. Bechgaard: Phys. Rev. Lett. **98** (2007) 147002.
S. Yonezawa, S. Kusaba, Y. Maeno, P. Auban-Senzier, C. Pasquier, K. Bechgaard, and D. J${\rm \acute{e}}$rome: Phys. Rev. Lett. **100** (2008) 117002.
S. Yonezawa, S. Kusaba, Y. Maeno, P. Auban-Senzier, C. Pasquier, and D. J${\rm \acute{e}}$rome: J. Phys. Soc. Jpn. **77** (2008) 054712.
H. Kino and H. Kontani: J. Low. Temp. Phys. **177** (1999) 317.
K. Kuroki and H. Aoki: Phys. Rev. B **60** (1999) 3060.
T. Nomura and K. Yamada: J. Phys. Soc. Jpn. **70** (2001) 2694.
K. Kuroki, Y. Tanaka, T. Kimura, and R. Arita: Phys. Rev. B **69** (2004) 214511.
M. Takigawa, M. Ichioka, K. Kuroki, Y. Asano, and Y. Tanaka: Phys. Rev. Lett. [**97**]{} (2006) 187002.
K. Kuroki, R. Arita, and H. Aoki: Phys. Rev. B **63** (2001) 094509.
Y. Tanaka and K. Kuroki: Phys. Rev. B **70** (2004) 060502.
K. Kuroki and Y. Tanaka: J. Phys. Soc. Jpn. **74** (2005) 1694.
Y. Fuseya and Y. Suzumura: J. Phys. Soc. Jpn. [**74**]{} (2005) 1263.
J. C. Nickel, R. Duprat, C. Bourbonnais, and N. Dupuis: Phys. Rev. Lett. [**95**]{} (2005) 247001.
A. G. Lebed: JETP Lett. **44** (1986) 114.
A. G. Lebed and K. Yamaji: Phys. Rev. Lett. **80** (1998) 2697.
A. G. Lebed: Phys. Rev. B **59** (1999) R721.
A. G. Lebed, K. Machida, and M. Ozaki: Phys. Rev. B **62** (2000) R795.
H. Shimahara: J. Phys. Soc. Jpn. **69** (2000) 1966.
C. D. Vaccarella and C. A. R. S${\rm \acute{a}}$ de Melo: Physica C **341-348** (2000) 293.
Y. Fuseya, Y. Onishi, H. Kohno, and K. Miyake: J. Phys.: Condens. Matter **14** (2002) L655.
N. Belmechri, G. Abramovici, M. H${\rm \acute{e}}$ritier, S. Haddad, and S. Charfi-Kaddour: Europhys. Lett. **80** (2007) 37004.
N. Belmechri, G. Abramovici, and M. H${\rm \acute{e}}$ritier: Europhys. Lett. **82** (2008) 47009.
H. Aizawa, K. Kuroki, and Y. Tanaka: Phys. Rev. B **77** (2008) 144513.
H. Aizawa, K. Kuroki, T. Yokoyama, and Y. Tanaka: Phys. Rev. Lett. **102** (2009) 016403.
Y. Suzumura and K. Ishino: Prog. Theor. Phys. **70** (1983) 654.
K. Machida and H. Nakanishi: Phys. Rev. B **30** (1984) 122.
N. Dupuis, G. Montambaux, and C. A. R. S${\rm \acute{a}}$ de Melo: Phys. Rev. Lett. **70** (1993) 2613.
N. Dupuis and G. Montambaux: Phys. Rev. B **49** (1994) 8993.
M. Miyazaki, K. Kishigi, and Y. Hasegawa: J. Phys. Soc. Jpn. **68** (1999) 3794.
Strictly speaking, the gap function in the left (right) figure of Fig. \[model\](b) is not the $d$-wave ($f$-wave) in the sense that it does not have an angular momentum equal to two (three). In this paper, however, we call these states the “$d$-wave” and “$f$-wave” in a broad sense (as in the previous studies) in that the gap changes sign as $+ - + -$ (“$d$”) or $+ - + - + -$ (“$f$”) along the Fermi surface.
J. P. Pouget and S. Ravy: J. Phys. I (Paris) **6** (1996) 1501.
S. Kagoshima, Y. Saso, M. Maesato, R. Kondo, and T. Hasegawa: Solid State Commun. **110** (1999) 479.
Y. Tanuma, K. Kuroki, Y. Tanaka, R. Arita, S. Kashiwaya, and H. Aoki: Phys. Rev. B **66** (2002) 094507.
Y. Tanuma, Y. Tanaka, K. Kuroki, and S. Kashiwaya: Phys. Rev. B **66** (2002) 174502.
Y. Tanaka and S. Kashiwaya: Phys. Rev. Lett. **74** (1995) 3451.
S. Kashiwaya and Y. Tanaka: Rep. Prog. Phys. **63** (2000) 1641.
Y. Asano, Y. Tanaka, Y. Tanuma, K. Kuroki, and H. Tsuchiura: J. Phys. Soc. Jpn. **73** (2004) 1922.
Y. Tanaka, Y. V. Nazarov, and S. Kashiwaya: Phys. Rev. Lett. **90** (2003) 167003.
Y. Tanaka, Y. V. Nazarov, A. A. Golubov, and S. Kashiwaya: Phys. Rev. B **69** (2004) 144519.
Y. Tanaka and S. Kashiwaya: Phys. Rev. B **70** (2004) 012507.
Y. Tanaka, S. Kashiwaya, and T. Yokoyama: Phys. Rev. B **71** (2005) 094513.
Y. Tanaka, Y. Asano, A. A. Golubov, and S. Kashiwaya: Phys. Rev. B **72** (2005) 140503(R).
Y. Asano, Y. Tanaka, and S. Kashiwaya: Phys. Rev. Lett. **96** (2006) 097007.
For a review, see R. Casalbuoni and G. Nardulli: Rev. Mod. Phys. **76** (2004) 263.
For a review, see Y. Matsuda and H. Shimahara: J. Phys. Soc. Jpn. **76** (2007) 051005.
L. W. Gruenberg and L. Gunther: Phys. Rev. Lett. **16** (1966) 996.
K. Maki and H. Won: Czech. J. Phys. **46** (1996) Suppl. S2, 1035.
H. Shimahara and D. Rainer: J. Phys. Soc. Jpn. **66** (1997) 3591.
M. Tachiki, S. Takahashi, P. Gegenwart, M. Weiden, M. Lang, C. Geibel, F. Steglich, R. Modler, C. Paulsen, and Y. ${\rm \bar{O}}$nuki: Z. Phys. B **100** (1996) 369.
M. Houzet and A. Buzdin: Phys. Rev. B **63** (2001) 184521.
R. Ikeda: Phys. Rev. B **76** (2007) 134504.
R. Ikeda: Phys. Rev. B **76** (2007) 054517.
T. Maniv and V. Zhuravlev: Phys. Rev. B **77** (2008) 134511.
T. Mizushima, K. Machida, and M. Ichioka: Phys. Rev. Lett. **95** (2005) 117003.
M. Ichioka, H. Adachi, T. Mizushima, and K. Machida: Phys. Rev. B **76** (2007) 014503.
R. A. Klemm, A. Luther, and M. R. Beasley: Phys. Rev. B **12** (1975) 877.
K. V. Samokhin: Phys. Rev. B **70** (2004) 104521.
S. Takada: Prog. Theor. Phys. **43** (1970) 27.
D. F. Agterberg and K. Yang: J. Phys.: Condens. Matter **13** (2001) 9259.
H. Adachi and R. Ikeda: Phys. Rev. B **68** (2003) 184510.
M. Houzet and V. P. Mineev: Phys. Rev. B **74** (2006) 144522.
Y. Yanase: New J. Phys. **11** (2009) 055056.
H. Burkhardt and D. Rainer: Ann. Phys. (Leipzig) **3** (1994) 181.
H. Shimahara: Phys. Rev. B **50** (1994) 12760.
H. Shimahara: J. Phys. Soc. Jpn. **66** (1997) 541.
A. I. Buzdin and H. Kachkachi: Phys. Lett. A **225** (1997) 341.
A. B. Vorontsov, J. A. Sauls, and M. J. Graf: Phys. Rev. B **72** (2005) 184501.
Y. Suginishi and H. Shimahara: Phys. Rev. B **74** (2006) 024518.
A. B. Vorontsov and M. J. Graf: Phys. Rev. B **74** (2006) 172504.
H. Shimahara and K. Moriwake: J. Phys. Soc. Jpn. **71** (2002) 1234.
A. B. Kyker, W. E. Pickett, and F. Gygi: Phys. Rev. B **71** (2005) 224517.
S. Matsuo, H. Shimahara, and K. Nagai: J. Phys. Soc. Jpn. **63** (1994) 2499.
H. Shimahara: Phys. Rev. B [**62**]{} (2000) 3524.
G. Roux, S. R. White, S. Capponi, and D. Poilblanc: Phys. Rev. Lett. **97** (2006) 087207.
Y. Yanase: J. Phys. Soc. Jpn. **77** (2008) 063705.
T. Yokoyama, S. Onari, and Y. Tanaka: J. Phys. Soc. Jpn. **77** (2008) 064711.
A. B. Vorontsov, J. A. Sauls, and M. J. Graf: Phys. Rev. B **72** (2005) 184501.
Q. Cui, C. R. Hu, J. Y. T. Wei, and K. Yang: Phys. Rev. B **73** (2006) 214514.
Y. Tanaka, Y. Asano, M. Ichioka, and S. Kashiwaya: Phys. Rev. Lett. **98** (2007) 077001.
H. A. Radovan, N. A. Fortune, T. P. Murphy, S. T. Hannahs, E. C. Palm, S. W. Tozer, and D. Hall: Nature **425** (2003) 51.
A. Bianchi, R. Movshovich, C. Capan, P. G. Pagliuso, and J. L. Sarrao: Phys. Rev. Lett. **91** (2003) 187004.
T. Watanabe, Y. Kasahara, K. Izawa, T. Sakakibara, and Y. Matsuda: Phys. Rev. B **70** (2004) 020506.
T. Watanabe, K. Izawa, Y. Kasahara, Y. Haga, Y. Onuki, P. Thalmeier, K. Maki, and Y. Matsuda: Phys. Rev. B **70** (2004) 184502.
C. Capan, A. Bianchi, R. Movshovich, A. D. Christianson, A. Malinowski, M. F. Hundley, A. Lacerda, P. G. Pagliuso, and J. L. Sarrao: Phys. Rev. B **70** (2004) 134513.
V. F. Correa, T. P. Murphy, C. Martin, K. M. Purcell, E. C. Palm, G. M. Schmiedeshoff, J. C. Cooley, and S. W. Tozer: Phys. Rev. Lett. **98** (2007) 087001.
K. Kakuyanagi, M. Saitoh, K. Kumagai, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda: Phys. Rev. Lett. **94** (2005) 047602.
K. Kumagai, M. Saitoh, T. Oyaizu, Y. Furukawa, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda: Phys. Rev. Lett. **97** (2006) 227002.
C. F. Miclea, M. Nicklas, D. Parker, K. Maki, J. L. Sarrao, J. D. Thompson, G. Sparn, and F. Steglich: Phys. Rev. Lett. **96** (2006) 117001.
X. Gratens, L. M. Ferreira, Y. Kopelevich, N. F. Oliveira Jr., P. G. Pagliuso, R. Movshovich, R. R. Urbano, J. L. Sarrao, and J. D. Thompson: cond-mat/0608722.
V. F. Mitrovi${\rm \acute {c}}$, M. Horvati${\rm \acute {c}}$, C. Berthier, G. Knebel, G. Lapertot, and J. Flouquet: Phys. Rev. Lett. **97** (2006) 117002.
R. Movshovich, M. Jaime, J. D. Thompson, C. Petrovic, Z. Fisk, P. G. Pagliuso, and J. L. Sarrao: Phys. Rev. Lett. **86** (2001) 5152.
D. Hall, E. C. Palm, T. P. Murphy, S. W. Tozer, Z. Fisk, U. Alver, R. G. Goodrich, J. L. Sarrao, P. G. Pagliuso, and T. Ebihara: Phys. Rev. B **64** (2001) 212508.
A. Bianchi, R. Movshovich, N. Oeschler, P. Gegenwart, F. Steglich, J. D. Thompson, P. G. Pagliuso, and J. L. Sarrao: Phys. Rev. Lett. **89** (2002) 137002.
K. Izawa, H. Yamaguchi, Y. Matsuda, H. Shishido, R. Settai, and Y. Onuki: Phys. Rev. Lett. **87** (2001) 057002.
H. Aoki, T. Sakakibara, H. Shishido, R. Settai, Y. ${\rm \bar{O}}$nuki, P. Miranovi${\rm \acute {c}}$, and K. Machida: J. Phys.: Condens. Matter **16** (2004) L13.
A. Vorontsov and I. Vekhter: Phys. Rev. Lett. **96** (2006) 237001.
C. Martin, C. C. Agosta, S. W. Tozer, H. A. Radovan, E. C. Palm, T. P. Murphy, and J. L. Sarrao: Phys. Rev. B **71** (2005) 020503.
R. Settai, H. Shishido, S. Ikeda, Y. Murakawa, M. Nakashima, D. Aoki, Y. Haga, H. Harima, and Y. ${\rm \bar{O}}$nuki: J. Phys.: Condens. Matter **13** (2001) L627.
A. McCollam, S. R. Julian, P. M. C. Rourke, D. Aoki, and J. Flouquet: Phys. Rev. Lett. **94** (2005) 186401.
B.-L. Young, R. R. Urbano, N. J. Curro, J. D. Thompson, J. L. Sarrao, A. B. Vorontsov, and M. J. Graf: Phys. Rev. Lett. **98** (2007) 036402.
M. A. Tanatar, T. Ishiguro, H. Tanaka, and H. Kobayashi: Phys. Rev. B **66** (2002) 134503.
S. Uji, H. Shinagawa, T. Terashima, T. Yakabe, Y. Terai, M. Tokumoto, A. Kobayashi, H. Tanaka, and H. Kobayashi: Nature **410** (2001) 908.
L. Balicas, J. S. Brooks, K. Storr, S. Uji, M. Tokumoto, H. Tanaka, H. Kobayashi, A. Kobayashi, V. Barzykin, and L. P. Gor’kov: Phys. Rev. Lett. **87** (2001) 067002.
S. Uji, T. Terashima, M. Nishimura, Y. Takahide, T. Konoike, K. Enomoto, H. Cui, H. Kobayashi, A. Kobayashi, H. Tanaka, M. Tokumoto, E. S. Choi, T. Tokumoto, D. Graf, and J. S. Brooks: Phys. Rev. Lett. **97** (2006) 157001.
S. Manalo and U. Klein: J. Phys.: Condens. Matter **12** (2000) L471.
J. Singleton, J. A. Symington, M.-S. Nam, A. Ardavan, M. Kurmoo, and P. Day: J. Phys.: Condens. Matter **12** (2000) L641.
R. Lortz, Y. Wang, A. Demuer, P. H. M. B${\rm \ddot{o}}$ttger, B. Bergk, G. Zwicknagl, Y. Nakazawa, and J. Wosnitza: Phys. Rev. Lett. **99** (2007) 187002.
M. W. Zwierlein, A. Schirotzek, C. H. Schunck, and W. Ketterle: Science **311** (2006) 492.
G. B. Partridge, W. Li, R. I. Kamar, Y. Liao, and R. G. Hulet: Science **311** (2006) 503.
P. W. Anderson and W. F. Brinkman: Phys. Rev. Lett. **30** (1973) 1108.
S. Nakajima: Prog. Theor. Phys. **50** (1973) 1101.
K. Miyake, S. Schmitt-Rink, and C. M. Varma: Phys. Rev. B **34** (1986) 6554.
D. J. Scalapino, E. Loh, Jr., and J. E. Hirsch: Phys. Rev. B **34** (1986) 8190.
H. Shimahara and S. Takada: J. Phys. Soc. Jpn. **57** (1988) 1044.
H. Yoshino, S. Shodai, and K. Murata: Synth. Met. **133** (2003) 55.
A. G. Lebed, Heon-Ick Ha, and M. J. Naughton: Phys. Rev. B **71** (2005) 132504.
H. Shimahara: Phys. Rev. B **61** (2000) R14938.
V. L. Berezinskii: JETP Lett. **20** (1974) 287.
A. Balatsky and E. Abrahams: Phys. Rev. B **45** (1992) 13125.
F. S. Bergeret, A. F. Volkov, and K. B. Efetov: Rev. Mod. Phys. **77** (2005) 1321.
K. Shigeta, S. Onari, K. Yada, and Y. Tanaka: Phys. Rev. B **79** (2009) 174507.
Y. Tanaka and A. A. Golubov: Phys. Rev. Lett. **98** (2007) 037003.
Y. Tanaka, A. A. Golubov, S. Kashiwaya, and M. Ueda: Phys. Rev. Lett. **99** (2007) 037005.
Y. Tanaka, Y. Tanuma, and A. A. Golubov: Phys. Rev. B **76** (2007) 054522.
Y. Tanaka, Y. Asano, and A. A. Golubov: Phys. Rev. B **77** (2008) 220504(R).
T. Yokoyama, Y. Tanaka, and A. A. Golubov: Phys. Rev. B **78** (2008) 012508.
Y. Tanuma, N. Hayashi, Y. Tanaka, and A. A. Golubov: Phys. Rev. Lett. **102** (2009) 117003.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'The mixed membership stochastic blockmodel (MMSB) is a popular framework for community detection and network generation. It learns a low-rank mixed membership representation for each node across communities by exploiting the underlying graph structure. MMSB assumes that the membership distributions of the nodes are independently drawn from a Dirichlet distribution, which limits its capability to model highly correlated graph structures that exist in real-world networks. In this paper, we present a flexible richly structured MMSB model, *Struct-MMSB*, that uses a recently developed statistical relational learning model, hinge-loss Markov random fields (HL-MRFs), as a structured prior to model complex dependencies among node attributes, multi-relational links, and their relationship with mixed-membership distributions. Our model is specified using a probabilistic programming templating language that uses weighted first-order logic rules, which enhances the model’s interpretability. Further, our model is capable of learning latent characteristics in real-world networks via meaningful latent variables encoded as a complex combination of observed features and membership distributions. We present an expectation-maximization based inference algorithm that learns latent variables and parameters iteratively, a scalable stochastic variation of the inference algorithm, and a method to learn the weights of HL-MRF structured priors. We evaluate our model on six datasets across three different types of networks and corresponding modeling scenarios and demonstrate that our models are able to achieve an improvement of 15% on average in test log-likelihood and faster convergence when compared to state-of-the-art network models.'
author:
- Yue Zhang
- Arti Ramesh
title: 'Struct-MMSB: Mixed Membership Stochastic Blockmodels with Interpretable Structured Priors'
---
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
In this study, Lagrangian and Hamiltonian systems, which are mathematical models of mechanical systems, were introduced on the horizontal and the vertical distributions of tangent and cotangent bundles. Finally, some geometrical and physical results related to Lagrangian and Hamiltonian dynamical systems were deduced.\
[**Keywords:**]{} Tangent-Cotangent Bundles, Lagrangian-Hamiltonian Systems.\
[**PACS:**]{} 02.04; 03.40.
author:
- |
Mehmet Tekkoyun [^1]\
[Department of Mathematics, Pamukkale University,]{}\
[20070 Denizli, Turkey]{}
title: '**Complex Dynamics Effect on Distributions**'
---
Introduction
============
In modern differential geometry, the tangent-cotangent bundles of any differentiable manifold $M$ are seen to be phase-spaces of velocity and momentum of a given configuration space. Therefore Lagrangian and Hamiltonian systems in Classical Mechanics are explained by means of basic structures on the bundles, so that these structures can be given by the Liouville vector field $C$, the Liouville form $\lambda $, the tangent structure $J$, the complex structure $F$, the vertical distribution $V$, the horizontal distribution $H$ and the semispray $X$. The following symbolic equation expresses the dynamical equations for both Lagrangian and Hamiltonian systems: $$i_{X}\Phi =\digamma \label{1.1}$$ If one studies Lagrangian systems, then equation (\[1.1\]) is the intrinsic form of the Euler-Lagrange equations, where $\Phi =\Phi
_{L}=-dd_{F}L $ and $\digamma =dE_{L}$ such that $L$, defined by $L:TM\rightarrow \mathbf{R} $, is the Lagrangian function and $E_{L}$, given by $E_{L}=C(L)-L$, is the energy function associated with $L.$
If one studies the Hamiltonian theory then equation (\[1.1\]) is the intrinsic form of the Hamiltonian equations, where $\Phi =\phi _{\mathbf{H}}=-d\lambda $, $\digamma =d\mathbf{H}$ and $\mathbf{H}$ is a $C^{\infty }-$ function such that $\mathbf{H}:T^{*}M\rightarrow \mathbf{R.}$ Mathematical expressions of mechanical systems are generally given by the Hamiltonian and Lagrangian systems. These expressions, in particular geometric expressions in mechanics and dynamics, are given in some studies [@mcrampin; @nutku; @deleon; @deleon1]. It is known that Lagrangian distributions on symplectic manifolds are used in geometric quantization and that a connection on a symplectic manifold is an important structure for obtaining a deformation quantization [@etayo]. Paracomplex geometry and the framework of para-Kählerian manifolds, which are a geometric model of generalized Lagrange spaces, were introduced in [@cruceanu]. Para-complex analogues of the Lagrangians and Hamiltonians were obtained in the framework of para-Kählerian manifolds, and geometric conclusions on para-complex dynamical systems were drawn [@tekkoyun2005]. In the references cited above and therein, real, complex, and paracomplex geometry and the associated mechanical-dynamical systems were analyzed successfully, but these studies have not dealt with complex dynamical systems on the horizontal and vertical distributions of the tangent and cotangent bundles of a manifold $M$.
Therefore, here, Euler-Lagrange and Hamiltonian equations related to complex dynamical systems on the distributions used in obtaining geometric quantization have been obtained.
Preliminaries
=============
In this study all mappings are considered to be of the class $C^{\infty }$, expressed by the words ”differentiable” or ”smooth”. The indices $i$, $j$,... run over the set $\{1,..,n\}$ and the Einstein summation convention is adopted throughout this paper. $\mathbf{R}$, $\mathcal{F}(TM)$, $\chi (TM)$ and $\chi (T^{*}M)$ denote the set of real numbers, the set of real functions on $TM$, the set of vector fields on $TM$ and the set of 1-forms on $T^{*}M,$ respectively.
Basic Structures
----------------
In this subsection, some definitions are taken from [@radu]. Let $TM$ be the tangent bundle of a real $n$-dimensional differentiable manifold $M$. A point of $M$ will be denoted by $x$ and its local coordinate system by $(U,\varphi )$ such that $\varphi (x)=(x^{i}).$ Under the projection $\pi :TM\rightarrow M$, $\pi (u)=x,$ a point $u\in TM$ will be denoted by $(x,y)$, its local coordinates being $(x^{i},y^{i})$. There are the natural basis $(\frac{\partial }{\partial x^{i}},\frac{\partial }{\partial y^{i}})$ and dual basis $(dx^{i},dy^{i})$ of the tangent space $T_{u}TM$ and the cotangent space $T_{u}^{*}(TM)$ at the point $u\in TM$, respectively. Consider the $\mathcal{F}(TM)-$ and $\mathcal{F}(T^{*}M)-$ linear mappings $J:\chi (TM)\rightarrow \chi (TM)$ and $J^{*}:\chi (T^{*}M)\rightarrow \chi (T^{*}M)$ given by $$\begin{array}{l}
J(\frac{\partial }{\partial x^{i}})=\frac{\partial }{\partial y^{i}},\,J(\frac{\partial }{\partial y^{i}})=0,\,
\end{array}$$ and $$\begin{array}{l}
J^{*}(dx^{i})=dy^{i},J^{*}(dy^{i})=0.
\end{array}$$ The tangent space $V_{u}$ to the fibre $\pi ^{-1}(x)$ in the point $u\in TM$ is locally spanned by $\{\frac{\partial }{\partial y^{1}},..,\frac{\partial }{\partial y^{n}}\}$. Therefore, the mapping $V$: $u\in
TM\rightarrow V_{u}\subset T_{u}TM$ provides a regular distribution generated by the adapted basis $\{\frac{\partial }{\partial y^{i}}\}.$Consequently, $V$ is an integrable distribution on $TM$. $V$ is called the vertical distribution on $TM$. Let $N$ be a nonlinear connection on $TM$. $N $ is characterized by $v$, $h$ vertical and horizontal projectors. We consider the vertical projector $v:\chi (TM)\rightarrow \chi (TM)$ defined by $v(X)=X,\,\forall \,X\in \chi (VTM);$ $v(X)=0,\forall \,X\in \chi (HTM).$ Similarly, the mapping $H$: $u\in TM\rightarrow H_{u}\subset T_{u}TM$provides a regular distribution determined by the adapted basis $\{\frac{\delta }{\delta x^{i}}\}.$ Consequently, $H$ is an integrable distribution on $TM$. $H$ is called the horizontal distribution on $TM$. There is a $\mathcal{F}(TM)-$linear mapping $h:\chi (TM)\rightarrow \chi (TM),$ for which $h^{2}=h,$ $Ker$ $h=\chi (VTM).$ Any vector field $X\in \chi (TM)$ can be uniquely written as follows $X=hX+vX=X^{H}+X^{V}$. Therefore $X^{H}$ and $X^{V}$ are horizontal and vertical components of vector field $X$. Therefore, any vector field $X$ can be uniquely written in the form $$\begin{array}{l}
X=X^{H}+X^{V}
\end{array}$$ such that $$\begin{array}{l}
X^{H}=X^{i}(\frac{\partial }{\partial x^{i}}-N_{j}^{i}(x,y)\frac{\partial }{\partial y^{j}}),\,\,\,X^{V}=X^{i}N_{j}^{i}(x,y)\frac{\partial }{\partial
y^{j}}
\end{array}$$ where $N_{j}^{i}$ is local coefficient of nonlinear connection $N$ on $TM.$
$(\frac{\delta }{\delta x^{i}},\frac{\partial }{\partial y^{i}})$ is a local basis adapted to the horizontal distribution $HTM$ and the vertical distribution $VTM$. Then $(dx^{i},\delta y^{i})$ is dual basis of $(\frac{\delta }{\delta x^{i}},\frac{\partial }{\partial y^{i}})$ basis. We have $$\begin{array}{l}
\frac{\delta }{\delta x^{i}}=\frac{\partial }{\partial x^{i}}-N_{j}^{i}(x,y)\frac{\partial }{\partial y^{j}}.
\end{array}$$ and $$\begin{array}{l}
\delta y^{i}=dy^{i}+N_{j}^{i}(x,y)dx^{j}.
\end{array}$$ $F$ is an almost complex structure on $TM.$ $F^{*}$ is the dual structure of $F.$ For the operators $h,v,F,F^{*}$ we get $$\begin{array}{l}
h+v=I;\,\,F^{2}=-I;\,F^{*2}=-I \\
h(\frac{\delta }{\delta x^{i}})=\frac{\delta }{\delta x^{i}};\,h(\frac{\partial }{\partial y^{i}})=0;\,v(\frac{\delta }{\delta x^{i}})=0;v(\frac{\partial }{\partial y^{i}})=\frac{\partial }{\partial y^{i}},\, \\
F(\frac{\delta }{\delta x^{i}})=-\frac{\partial }{\partial y^{i}};\,\,F(\frac{\partial }{\partial y^{i}})=\frac{\delta }{\delta x^{i}}, \\
F^{*}(dx^{i})=-\delta y^{i};\,F^{*}(\delta y^{i})=dx^{i}.
\end{array}$$

Lagrangian Dynamical Systems
============================
In this section, the Euler-Lagrange equations for Classical Mechanics are obtained by means of the almost complex structure $F$, using the basis $\{\frac{\delta }{\delta x^{i}},\,\frac{\partial }{\partial y^{i}}\}$ on the distributions $HTM$ and $VTM$ of the tangent bundle $TM$ of the manifold $M.$ Let $(x^{i},y^{i})$ be its local coordinates. Let the semispray be the vector field $X$ given by $$\begin{array}{l}
X=X^{i}\frac{\delta }{\delta x^{i}}+\stackrel{.}{X}^{i}\frac{\partial }{\partial y^{i}},\stackrel{.}{\,\,X}^{i}=X^{i}N_{j}^{i}
\end{array}
\label{3.1}$$ where the dot indicates the derivative with respect to time $t$. The vector field denoted by $C=F(X)$ and expressed by $$\begin{array}{l}
C=-X^{i}\frac{\partial }{\partial y^{i}}\stackrel{.}{+X}^{i}\frac{\delta }{\delta x^{i}}
\end{array}
\label{3.2}$$ is called the *Liouville vector field* on the bundle $TM$. The maps $\mathbf{T,P}:TM\rightarrow \mathbf{R}$ given by $\mathbf{T}=\frac{1}{2}m_{i}(x^{i})^{2},\,\mathbf{P}=m_{i}gh$ are called *the kinetic energy* and *the potential energy* of the mechanical system, respectively. Here $m_{i}$, $g$ and $h$ stand for the mass of a mechanical system having $m$ particles, the gravitational acceleration and the distance to the origin of the mechanical system on the tangent bundle $TM$, respectively. Then $L:TM\rightarrow \mathbf{R}$ is a map that satisfies the conditions: i) $L=\mathbf{T-P}$ is a *Lagrangian function*; ii) the function given by $E_{L}=C(L)-L$ is *a Lagrangian energy*. The operator $i_{F}$ induced by $F$ and given by $$\begin{array}{l}
i_{F}\omega (X_{1},X_{2},...,X_{r})=\sum_{i=1}^{r}\omega
(X_{1},...,F(X_{i}),...,X_{r})
\end{array}
\label{3.3}$$ is said to be *vertical derivation,* where $\omega \in \wedge
^{r}{}TM,$ $X_{i}\in \chi (TM).$ The *vertical differentiation* $d_{F}
$ is defined by $$\begin{array}{l}
d_{F}=[i_{F},d]=i_{F}d-di_{F}
\end{array}
\label{3.4}$$ where $d$ is the usual exterior derivation. For an almost complex structure $F$, the closed fundamental form is the closed 2-form given by $\Phi
_{L}=-dd_{F}L$ such that $$\begin{array}{l}
d_{F}:\mathcal{F}(TM)\rightarrow {}T^{*}M
\end{array}
\label{3.5}$$ Then we have $$\begin{array}{l}
\Phi _{L}=-(\frac{\delta }{\delta x^{j}}dx^{j}+\frac{\partial }{\partial
y^{j}}\delta y^{j})(-\frac{\partial L}{\partial y^{i}}dx^{i}+\frac{\delta L}{\delta x^{i}}\delta y^{i}) \\
\,\,\,\,\,\,\,\,\,\,=\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}\wedge dx^{i}-\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}dx^{j}\wedge \delta y^{i}+\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}\delta y^{j}\wedge dx^{i}-\frac{\partial (\delta L)}{\partial y^{j}\delta
x^{i}}\delta y^{j}\wedge \delta y^{i}.
\end{array}
\label{3.6}$$ Let $X$ be the second order differential equation (semispray) determined by **Eq.** (\[1.1\]). Then we get $$\begin{array}{l}
i_{X}\Phi _{L}=\Phi _{L}(X)=X^{i}\frac{\delta (\partial L)}{\delta
x^{j}\partial y^{i}}\delta _{i}^{j}dx^{i}-X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}-X^{i}\frac{\delta ^{2}L}{\delta
x^{j}\delta x^{i}}\delta _{i}^{j}\delta y^{i}+\stackrel{.}{X}^{i}\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}dx^{j} \\
+\stackrel{.}{X}^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}\delta _{i}^{j}dx^{i}-X^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}\delta y^{j}-\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial
y^{j}\delta x^{i}}\delta _{i}^{j}\delta y^{i}+\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial y^{j}\delta x^{i}}\delta y^{j}.
\end{array}
\label{3.7}$$ Since the closed 2-form $\Phi _{L}$ on $TM$ is the symplectic structure, it is found $$\begin{array}{l}
E_{L}=C(L)-L=-X^{i}\frac{\partial L}{\partial y^{i}}+\stackrel{.}{X}^{i}\frac{\delta L}{\delta x^{i}}-L
\end{array}
\label{3.8}$$ and hence $$\begin{array}{l}
dE_{L}=-X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}+\stackrel{.}{X}^{i}\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}dx^{j}-\frac{\delta L}{\delta x^{j}}dx^{j}-X^{i}\frac{\partial ^{2}L}{\partial
y^{j}\partial y^{i}}\delta y^{j}+\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial y^{j}\delta x^{i}}\delta y^{j}-\frac{\partial L}{\partial y^{j}}\delta y^{j}
\end{array}
\label{3.9}$$ With the use of **Eq.** (\[1.1\]), considering **Eqs.**(\[3.7\]) and (\[3.9\]) we get $$\begin{array}{l}
X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}\delta
_{i}^{j}dx^{i}-X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}-X^{i}\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}\delta
_{i}^{j}\delta y^{i}+\stackrel{.}{X}^{i}\frac{\delta ^{2}L}{\delta
x^{j}\delta x^{i}}dx^{j} \\
+\stackrel{.}{X}^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}\delta _{i}^{j}dx^{i}-X^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}\delta y^{j}-\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial
y^{j}\delta x^{i}}\delta _{i}^{j}\delta y^{i}+\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial y^{j}\delta x^{i}}\delta y^{j} \\
=-X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}+\stackrel{.}{X}^{i}\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}dx^{j}-\frac{\delta L}{\delta x^{j}}dx^{j}-X^{i}\frac{\partial ^{2}L}{\partial
y^{j}\partial y^{i}}\delta y^{j}+\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial y^{j}\delta x^{i}}\delta y^{j}-\frac{\partial L}{\partial y^{j}}\delta y^{j}
\end{array}
\label{3.10}$$ or $$\begin{array}{l}
X^{i}\frac{\delta (\partial L)}{\delta x^{j}\partial y^{i}}dx^{j}-X^{i}\frac{\delta ^{2}L}{\delta x^{j}\delta x^{i}}\delta y^{j}+\stackrel{.}{X}^{i}\frac{\partial ^{2}L}{\partial y^{j}\partial y^{i}}dx^{j}-\stackrel{.}{X}^{i}\frac{\partial (\delta L)}{\partial y^{j}\delta x^{i}}\delta y^{j}+\frac{\delta L}{\delta x^{j}}dx^{j}+\frac{\partial L}{\partial y^{j}}\delta y^{j}=0
\end{array}
\label{3.11}$$ If a curve denoted by $\alpha :\mathbf{R}\rightarrow TM$ is considered to be an integral curve of $X,$ i.e$.$ $X(\alpha (t))=\frac{d\alpha (t)}{dt}$ then we have the equations $$\begin{array}{l}
\frac{d}{dt}(\frac{\partial L}{\partial y^{i}})+\frac{\delta L}{\delta x^{i}}=0,\,\,\frac{d}{dt}(\frac{\delta L}{\delta x^{i}})-\frac{\partial L}{\partial y^{i}}=0,
\end{array}
\label{3.12}$$ which are named the *Euler-Lagrange equations* deduced by means of the almost complex structure $F$ on the horizontal distribution $HTM$ and the vertical distribution $VTM$.
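As an additional observation, not used in the sequel and assuming only that $L$ is smooth along the integral curve, differentiating the second equation in (\[3.12\]) with respect to $t$ and substituting the first one gives $$\frac{d^{2}}{dt^{2}}\left( \frac{\delta L}{\delta x^{i}}\right) +\frac{\delta L}{\delta x^{i}}=0,\,\,\frac{d^{2}}{dt^{2}}\left( \frac{\partial L}{\partial y^{i}}\right) +\frac{\partial L}{\partial y^{i}}=0,$$ so along an integral curve of $X$ the quantities $\frac{\delta L}{\delta x^{i}}$ and $\frac{\partial L}{\partial y^{i}}$ evolve harmonically in time, which reflects the rotational character imposed on the dynamics by the complex structure $F$.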
Thus the triple $(TM,\Phi _{L},X)$ is a *mechanical system* structured by means of an almost complex structure $F$, taking into account the basis $\{\frac{\delta }{\delta x^{i}},\frac{\partial }{\partial y^{i}}\}$ on the distributions $HTM$ and $VTM$.
Hamiltonian Dynamical Systems
=============================
In this section, Hamiltonian equations for Classical Mechanics are obtained on the distributions $HT^{*}M$ and $VT^{*}M$ of $T^{*}M.$ Suppose that an almost complex structure, a Liouville form and a 1-form on $T^{*}M$ are denoted by $P^{*}$, $\lambda $ and $\omega $, respectively. Then we have $$\begin{array}{l}
\omega =\frac{1}{2}(x^{i}dx^{i}+y^{i}\delta y^{i})
\end{array}
\label{4.1}$$ and $$\begin{array}{l}
\lambda =P^{*}(\omega )=\frac{1}{2}(-x^{i}\delta y^{i}+y^{i}dx^{i}).
\end{array}$$ It is known that if $\phi $ is a closed 2-form on $T^{*}M,$ then $\phi _{\mathbf{H}}$ is also a symplectic structure on $T^{*}M$. If the Hamiltonian vector field $X_{\mathbf{H}}$ associated with the Hamiltonian energy $\mathbf{H}$ is given by $$\begin{array}{l}
X_{\mathbf{H}}=X^{i}\frac{\delta }{\delta x^{i}}+Y^{i}\frac{\partial }{\partial y^{i}},
\end{array}
\label{4.3}$$ then we deduce $$\begin{array}{l}
\phi _{\mathbf{H}}=-d\lambda =\delta y^{i}\wedge dx^{i}
\end{array}$$ and $$\begin{array}{l}
i_{X_{\mathbf{H}}}\phi =Y^{i}dx^{i}-X^{i}\delta y^{i}.
\end{array}
\label{4.5}$$ Moreover, the differential of Hamiltonian energy is written as follows: $$\begin{array}{l}
d\mathbf{H}=\frac{\delta \mathbf{H}}{\delta x^{i}}dx^{i}+\frac{\partial
\mathbf{H}}{\partial y^{i}}\delta y^{i}.
\end{array}
\label{4.6}$$ By means of **Eq.**(\[1.1\]), using **Eqs.** (\[4.5\]) and (\[4.6\]), the Hamiltonian vector field is calculated to be $$\begin{array}{l}
X_{\mathbf{H}}=-\frac{\partial \mathbf{H}}{\partial y^{i}}\frac{\delta }{\delta x^{i}}+\frac{\delta \mathbf{H}}{\delta x^{i}}\frac{\partial }{\partial y^{i}}.
\end{array}
\label{4.7}$$ Suppose that a curve $$\begin{array}{l}
\alpha {:\,}I\subset \mathbf{R}\rightarrow T^{*}M
\end{array}
\label{4.8}$$ is an integral curve of the Hamiltonian vector field $X_{\mathbf{H}}$, i.e., $$\begin{array}{l}
X_{\mathbf{H}}(\alpha (t))=\frac{d\alpha (t)}{dt},\,\,t\in I.
\end{array}
\label{4.9}$$ In the local coordinates, if it is considered to be $$\begin{array}{l}
\alpha (t)=(x^{i}(t),y^{i}(t)),
\end{array}
\label{4.10}$$ we obtain $$\begin{array}{l}
\frac{d\alpha (t)}{dt}=\frac{dx^{i}}{dt}\frac{\delta }{\delta x^{i}}+\frac{dy^{i}}{dt}\frac{\partial }{\partial y^{i}}.
\end{array}
\label{4.11}$$ Taking into consideration **Eqs.** (\[4.8\]), (\[4.6\]) and (\[4.10\]), we have the equations $$\begin{array}{l}
\frac{dx^{i}}{dt}=-\frac{\partial \mathbf{H}}{\partial y^{i}},\frac{dy^{i}}{dt}=\frac{\delta \mathbf{H}}{\delta x^{i}},
\end{array}
\label{4.12}$$ which are named the *Hamiltonian equations* deduced by means of an almost complex structure $F^{*}$ on the horizontal distribution $HT^{*}M$ and the vertical distribution $VT^{*}M.$
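As a simple consistency check of (\[4.12\]), note that the Hamiltonian energy is conserved along the integral curves of $X_{\mathbf{H}}$, since $$\frac{d\mathbf{H}}{dt}=\frac{\delta \mathbf{H}}{\delta x^{i}}\frac{dx^{i}}{dt}+\frac{\partial \mathbf{H}}{\partial y^{i}}\frac{dy^{i}}{dt}=-\frac{\delta \mathbf{H}}{\delta x^{i}}\frac{\partial \mathbf{H}}{\partial y^{i}}+\frac{\partial \mathbf{H}}{\partial y^{i}}\frac{\delta \mathbf{H}}{\delta x^{i}}=0.$$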
Hence the triple $(T^{*}M,\phi _{\mathbf{H}},X_{\mathbf{H}})$ is shown to be a *Hamiltonian mechanical system*, deduced by means of an almost complex structure $F^{*}$ and the basis $\{\frac{\delta }{\delta x^{i}},\frac{\partial }{\partial y^{i}}\}$ on the distributions $HT^{*}M$ and $VT^{*}M$.
Conclusions
===========
This paper provides physical evidence for both the mathematical decomposition $TM=HTM\oplus VTM$ and its dual decomposition. Lagrangian and Hamiltonian dynamics have been described intrinsically by means of the almost complex structures $F$ and $F^{*}$ acting on the distributions of the tangent and cotangent bundles $TM$ and $T^{*}M$ of the manifold $M$. Also, we deduce that the Euler-Lagrange equations given by (\[3.12\]) turn into the Hamiltonian equations defined by (\[4.12\]) by taking into account the equalities $x^{i}=\frac{\delta L}{\delta x^{i}},$ $y^{i}=\frac{\partial L}{\partial y^{i}}$ and $H=-L,$ and vice versa.
Discussions
===========
As is well known, the geometry of the Lagrangian and Hamiltonian formalisms gives a model for Relativity, Gauge Theory and Electromagnetism in a very natural blending of the geometrical structures of the space with the characteristic properties of these physical fields. Therefore we consider that equations (\[3.12\]) and (\[4.12\]) in particular can be used in the physical fields mentioned above.
[9]{}
M. Crampin, J. Phys. A: Math. Gen. **14** (1981) 2567.
N. Nutku, J. Math. Phys. **25** (1984) 2007.
M. De Leon, P.R. Rodrigues, Methods of Differential Geometry in Analytical Mechanics, North-Holland Mathematics Studies, vol.152, Elsevier, Amsterdam, 1989.
M. De Leon, P.R. Rodrigues, Diff. Geom. and its Appl., Proceedings of the Conference, August 24-30 (1986) 179.
F. Etayo, R. Santamaría, U. R. Trías, Diff. Geom. and its Appl., 24 (2006) 33.
V. Cruceanu, P.M. Gadea, J. M. Masqué, Para-Hermitian and Para- Kähler Manifolds, Supported by the commission of the European Communities’ Action for Cooperation in Sciences and Technology with Central Eastern European Countries n. ERB3510PL920841.
M. Tekkoyun, Phys. Lett. A, 340 (2005) 7.
R. Miron, D. Hrimiuc, H. Shimada, S. V. Sabau, The Geometry of Hamilton and Lagrange Spaces, Kluwer Academic Publishers, 2001.
[^1]: E-mail address: tekkoyun@pau.edu.tr; Tel: +902582953616; Fax: +902582953593
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We have developed a hybrid model of the solar dynamo on the lines of the Babcock–Leighton idea that the poloidal field is generated at the surface of the Sun from the decay of active regions. In this model magnetic buoyancy is handled with a realistic recipe - wherein toroidal flux is made to erupt from the overshoot layer wherever it exceeds a specified critical field ($10^5$ G). The erupted toroidal field is then acted upon by the $\alpha$ - effect near the surface to give rise to the poloidal field. In the first half of this paper we present a parameter space study of this model, to bring out similarities and differences between it and other well studied models of the past. In the second half of this paper we show that the mechanism of buoyant eruptions and the subsequent depletion of the toroidal field inside the overshoot layer, is capable of constraining the magnitude of the dynamo generated magnetic field there, although a global quenching mechanism is still required to ensure that the magnetic fields do not blow up. We also believe that a critical study of this mechanism may give us new information regarding the solar interior and end with an example, where we propose a method for estimating an upper limit of the diffusivity within the overshoot layer.'
author:
- Dibyendu Nandy
title: Characteristics Of A Magnetic Buoyancy Driven Solar Dynamo Model
---
\
Dibyendu Nandy,\
Department of Physics,\
Indian Institute of Science,\
Bangalore 560012, India.\
email: dandy@physics.iisc.ernet.in
All models are wrong but some are useful.
Introduction
============
Though there has not been a phenomenal change in our understanding of solar dynamo theory following the early seminal work of Parker, Steenbeck, Krause, and Rädler (Parker 1955; Steenbeck, Krause, and Rädler 1966, hereby the PSKR approach) and Babcock and Leighton (Babcock 1961; Leighton 1969, hereby the BL approach), each new study has, in its own way, contributed a little more to our understanding of the origin and evolution of the solar magnetic fields. Continuing in that line, we present here a study that attempts to understand and quantify the effects of magnetic buoyancy on the solar dynamo.
Work on the solar dynamo is largely divided between the two approaches (PSKR and BL) quoted above. The main difference between them being in the way the poloidal field generation is handled. Ones that follow the PSKR idea, invokes cyclonic turbulence in the interior of the solar convection zone (SCZ) to twist the toroidal field to generate the poloidal field (historically the $\alpha$-effect), while those following the BL idea, assumes that the poloidal field is generated from the decay of tilted active regions (resulting from the buoyant eruption of the toroidal field) on the solar surface.
The toroidal field production process however, is the same in both these approaches and it is supposed to be generated due to the stretching of the poloidal field lines by differential rotation. We know that magnetic buoyancy is particularly destabilising in the SCZ (Parker 1975; Moreno-Insertis 1983) and therefore, the dynamo may not have enough time to amplify it to the very high values that the toroidal field seems to have. This led to the speculation that the dynamo generation of the toroidal field takes place in the overshoot layer beneath the SCZ (Spiegel and Weiss 1980; van Ballegooijen 1982; DeLuca and Gilman 1986; Choudhuri 1990) and with the helioseismic discovery of a strong radial shear layer (this region is also referred to as the tachocline and the overshoot layer is believed to be situated within this rotationally defined tachocline) in the differential rotation at the bottom of the SCZ, there remains little doubt that the toroidal field is indeed produced here within the overshoot layer.
Now, while the PSKR approach developed on strong foundations of mean field electrodynamics (Steenbeck, Krause, and Rädler 1966; Moffatt 1978, Chap. 7; Parker 1979, §18.3; Choudhuri 1998, §16.5) and detailed models were worked out on the basis of this theory, the BL idea had to wait for quite sometime before detailed re-examination and [*[recasting]{}*]{} of the original ideas were attempted in the light of new results from flux tube rise simulations and helioseismology (Choudhuri, Schüssler and Dikpati 1995; Durney 1995, 1996, 1997; Dikpati and Charbonneau 1999).
In models based on the PSKR approach, magnetic buoyancy functions as a loss term in some treatments (DeLuca and Gilman 1986; Schmitt and Schüssler 1989), while in others, a general upward flow due to magnetic buoyancy is included (Moss, Tuominen, and Brandenburg 1990a, 1990b). In BL models however, magnetic buoyancy not only removes flux from the overshoot layer but also contributes directly to the poloidal flux production procedure by transporting the strong toroidal field to the surface, thus playing a more significant role.
Amongst the recent BL models, with the exception of Durney’s, the others worked with an $\alpha$-effect concentrated in a very thin layer near the solar surface - to capture the idea of the generation of poloidal field from the decay of active regions on the surface of the Sun. While Choudhuri, Schüssler and Dikpati (1995) did not incorporate buoyancy, Dikpati and Charbonneau (1999) approximated magnetic buoyancy by making the source term for the generation of the poloidal field - proportional to the toroidal field at the bottom of the SCZ. Durney unlike the others, did away with the $\alpha$-effect altogether and generated the poloidal field by putting a double ring (analogous to a bipolar sunspot pair) on the surface at the same latitude - where he found the underlying toroidal field to be maximum, at specific intervals of time (for more details, see Durney 1997).
So here we were at a crossroads with the BL idea and the question naturally arose whether the [*[$\alpha$-effect concentrated near the surface approximation]{}*]{} and [*[the double ring approximation]{}*]{} are really very different from each other and, if so, which one of these two methods is a more suitable expression of the BL idea? Nandy and Choudhuri (2001) \[hereby Paper I\] showed that these two methods give qualitatively similar results. Paper I also attempted to bridge the gap between the detailed mean field models and the more heuristic models based on BL ideas.
A new way of handling magnetic buoyancy within a dynamo framework was introduced in Paper I. With an algorithm wherein a certain fraction of the toroidal field in the overshoot layer was made to erupt to the top, at specific time intervals, wherever it exceeded a specified critical field $B_c$. The buoyant eruption was followed by a simultaneous depletion of the toroidal field within the overshoot layer. Although the two models presented in Durney (1997) and in Paper I differ in the way they generate the poloidal field, perhaps Durney’s method comes closest to the algorithm introduced in Paper I in terms of a realistic recipe for handling buoyancy. Durney (1997) however, did not deplete the toroidal field in the overshoot layer (which he referred to as the GL) subsequent to eruptions, and depletion, as our present study shows, may have a profound influence on the magnitude and distribution of the dynamo generated fields at the bottom of the SCZ.
We, having demonstrated the viability of such a model - where a realistic algorithm for magnetic buoyancy works in tandem with a concentrated $\alpha$-effect near the top of the solar surface in Paper I, now present a detailed analysis of such a model. In the present study we do not make any attempts at matching solar observations, rather, the emphasis is on understanding the effects of incorporating magnetic buoyancy. These hybrid models with mechanisms for handling buoyancy are, as of now, in their infancy and one has to do a critical study of such models to understand the physics underlying them. Section 2 details our model, in Section 3 we present our results. We end in Section 4 with a discussion highlighting the contribution of this model to our knowledge of the solar interior.
Note that in these introductory passages we have concentrated only on earlier work which is of direct relevance to our present study. Please refer to Dikpati and Charbonneau (1999) and Paper I of this series (and references therein) for a more comprehensive review of the history of solar dynamo theory in general, and the BL approach in particular.
Buoyancy driven flux transport models
=====================================
The model of Nandy and Choudhuri 2001 (Paper I)
-----------------------------------------------
Proceeding along the lines of the solar dynamo model with concentrated $\alpha$-effect presented in Paper I, the evolution of the magnetic field can be expressed in terms of the vector potential $A$ \[from which the poloidal field can be defined as ${{\bf B}}_p = \nabla \times (A {{\bf e}_\phi})$\] and the toroidal field $B$, by the following two equations for the usual $\alpha \Omega$ dynamo: $$\frac{{\partial}A}{{\partial}t} + \frac{1}{s}({{\bf v}}_p.\nabla)(s A)
= \eta \left( \nabla^2 - \frac{1}{s^2} \right) A + Q,$$ $$\begin{aligned}
\frac{{\partial}B}{{\partial}t}
+ \frac{1}{r} \left[ \frac{{\partial}}{{\partial}r}
(r v_r B) + \frac{{\partial}}{{\partial}\theta}(v_{\theta} B) \right]
= \eta \left( \nabla^2 - \frac{1}{s^2} \right) B
+ s({{\bf B}}_p.\nabla)\Omega,\end{aligned}$$ where $\eta$ is the coefficient of turbulent diffusion, $\Omega$ is the angular velocity, ${{\bf v}}_p = v_r {\bf e}_r + v_{\theta} {\bf e}_{\theta}
$ is the meridional circulation and $s = r \sin \theta$.
We use a constant value of the turbulent diffusivity $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$ for most of our calculations, unless otherwise stated. For the angular velocity $\Omega$, we use the same latitude-independent profile as in Paper I. This expression for $\Omega$ is such that there is a strong radial shear concentrated in the tachocline below the SCZ, with a positive vertical gradient in the differential rotation - which corresponds to the helioseismologically determined profile at mid to low latitudes. For the meridional circulation also, we use the same profile as given in Paper I, but with a higher value of ${v_0}$ = 10 m s$^{-1}$, for the maximum flow speed at mid-latitudes near the surface. This single cell flow (per meridional quadrant) is such that the flow is directed poleward near the surface and has an equatorward return flow near the bottom half of the SCZ.
The term $Q$ on the right-hand side of Equation (1) is the source term for the generation of the poloidal field. Normally, the $\alpha \Omega$ dynamos are characterised by Equations (1) and (2) with $Q$ given by: $$Q = \alpha B.$$
We use the following profile for the $\alpha$-coefficient: $$\begin{aligned}
\alpha =\frac{\alpha_0}{1 + {(B/B_0)}^2} \cos \theta \frac{1}{4}
\left[ 1 + {\mbox{erf}}\left(\frac{r - r_1}{d_1} \right) \right]
\times
\left[ 1 - {\mbox{erf}}\left(\frac{r - r_2}{d_2} \right) \right].\end{aligned}$$ The parameters ($r_1$, $r_2$, $d_1$ and $d_2$) are adjusted to make sure that the $\alpha$ effect is concentrated near the surface of the Sun within $0.95 {R_{\odot}}\leq r \leq {R_{\odot}}$. We take $\alpha_0$ = 10 m s$^{-1}$ in most of the calculations in the present paper, which ensures that the dynamo is always supercritical and the solutions do not decay.
The $\alpha$-quenching term $[1 + {(B/B_0)}^2]$ in the denominator of the source term in Equation (4) ensures that the poloidal field generation process gets suppressed when the erupted toroidal field has values close to, or higher than $B_0$. This quenching term is the only source of nonlinearity in models of such type and the dynamo generated magnetic fields scale according to the specified value of $B_0$. Results from flux tube simulations suggest that toroidal flux tubes, with magnitudes greater than $1.6 \times {10^5}$ G, emerge without any tilt on the solar surface (D’Silva and Choudhuri 1993; Fan, Fisher, and DeLuca 1993; Caligari, Moreno-Insertis, and Schüssler 1995) and hence do not contribute to the generation of the poloidal field (resulting from the decay of tilted active regions). Following their work, we set ${B_0} = {10^5}$ G in Equation (4).
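To make Equation (4) concrete, a minimal sketch of the $\alpha$-profile with its quenching factor is given below. This is not the code used for the calculations in this paper; in particular, the numerical values chosen for $r_1$, $r_2$, $d_1$ and $d_2$ are placeholder assumptions, since the text only states that these parameters are tuned so that the $\alpha$-effect is confined to $0.95 {R_{\odot}}\leq r \leq {R_{\odot}}$.

```python
from math import erf, cos

R_sun   = 6.96e8   # m
alpha_0 = 10.0     # m/s, amplitude used in most of the calculations
B_0     = 1.0e5    # G, field strength entering the quenching factor

# Placeholder values: the text only states that r1, r2, d1, d2 are tuned so that
# alpha is confined to 0.95 R_sun <= r <= R_sun.
r1, d1 = 0.95 * R_sun, 0.01 * R_sun
r2, d2 = 1.00 * R_sun, 0.01 * R_sun

def alpha(r, theta, B):
    """alpha-coefficient of Equation (4), including the alpha-quenching factor."""
    envelope  = 0.25 * (1.0 + erf((r - r1) / d1)) * (1.0 - erf((r - r2) / d2))
    quenching = 1.0 / (1.0 + (B / B_0) ** 2)
    return alpha_0 * quenching * cos(theta) * envelope
```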
Having specified the form of the $\alpha$-coefficient we now have to define an algorithm for buoyancy. Again drawing from our knowledge of flux tube simulations we know that toroidal flux tubes with values $\leq 0.6 \times {10^5}$ G rise parallel to the rotation axis, emerging at high latitudes with tilts that do not match Joy’s Law (D’Silva and Choudhuri 1993; Fan, Fisher, and DeLuca 1993; Caligari, Moreno-Insertis, and Schüssler 1995). Moreno-Insertis, Schüssler, and Ferriz-Mas (1992) also showed that it is possible to store flux rings of strength $\leq {10^5}$ G within the overshoot region, while flux tubes greater in strength escape out.
All this suggests that there should be a critical field beyond which flux tubes become buoyant and emerge radially to give rise to the sunspots and that the value of this critical field (which we shall denote as $B_c$) should be around ${10^5}$ G. Keeping these ideas in mind we have formulated a [*[recipe]{}*]{} for incorporating buoyancy within a dynamo framework in Paper I. We summarise the salient features of this [*[recipe]{}*]{} below.
At intervals of time $\tau$ we check if the toroidal field $B$ has exceeded the specified critical value $B_c$ (${10^5}$ G), anywhere within the overshoot layer. Wherever the toroidal field exceeds $B_c$, a certain fraction $f_b$ of it is made to erupt radially, to the top layer near the surface - where the $\alpha$-effect is concentrated. The erupted toroidal flux $f'B$ (where $f' = {f_b}[R_i/R_f]$; $R_i$ is the radius at the bottom from where the eruption takes place and $R_f$ is the radius near the surface where the flux is deposited)[^1] is added to the previously existing toroidal field near the surface (at the same latitude from where the eruption occurred), while the amount ${f_b}B$, is subtracted from the point inside the overshoot layer - which was the source of the eruption. Thus we make sure that flux is conserved in this procedure. We fix the time interval between successive eruption $\tau$ at $8.8 \times 10^5$ s, which allows for an order of a thousand eruptions in a complete dynamo period ($T_d$). Also, for most of the calculations presented in the results section, we use the value $f_b = 0.05$ for the control parameter (we call the fraction of the erupted field as the control parameter following Paper I, because, this parameter controls the strength of magnetic buoyancy). This value corresponds to the [*[buoyancy saturated regime]{}*]{} of the dynamo (where buoyancy is quite strong and buoyant flux transport plays the main role in transporting flux from the bottom of the SCZ to the top).
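A minimal sketch of one such eruption event is given below, assuming a simple two-dimensional grid for the toroidal field; the array layout and index names are our own and are meant only to illustrate the recipe described above, not to reproduce the actual code.

```python
import numpy as np

def buoyant_eruption(B, r, i_os, i_surf, B_c=1.0e5, f_b=0.05):
    """One eruption event, applied at intervals tau = 8.8e5 s of model time.

    B      : 2-D array, toroidal field B[i, j] at radius r[i] and colatitude theta[j]
    i_os   : radial index of the overshoot layer
    i_surf : radial index of the near-surface layer where the alpha-effect acts
    """
    f_prime = f_b * (r[i_os] / r[i_surf])   # f' = f_b (R_i / R_f), chosen so that flux is conserved

    for j in range(B.shape[1]):             # loop over latitudes
        if abs(B[i_os, j]) > B_c:
            erupted = B[i_os, j]
            B[i_surf, j] += f_prime * erupted   # deposit at the same latitude near the surface
            B[i_os, j]   -= f_b * erupted       # simultaneous depletion in the overshoot layer
    return B
```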
With this algorithm for buoyancy we solve Equations (1) and (2) in the northern quadrant of the convection zone (i.e. within $R_b = 0.7 {R_{\odot}}\leq r \leq {R_{\odot}}$, $0 \leq \theta \leq \pi/2$). For a description of the boundary conditions, refer to Paper I.
A buoyancy algorithm where the poloidal source term is proportional to the toroidal field inside the overshoot layer
--------------------------------------------------------------------------------------------------------------------
For reasons that will become clear as we go on, we felt it may be prudent to compare the results of the model defined by the buoyancy $\it{recipe}$ in Section 2.1 (hereafter Model 1) with those of another model, the details of which follow.
This model (hereafter Model 2) is similar in all respects to Model 1, except that instead of using the previous buoyancy algorithm (outlined in Section 2.1), here we use a source term for the generation of the poloidal field which is proportional to the toroidal field at the bottom ($B_{\rm bot}$), inside the overshoot layer. This kind of source term to model the decay of active regions was introduced by Choudhuri and Dikpati (1999) and was followed later by Dikpati and Charbonneau (1999), who used it within a dynamo framework. For the motivation behind its formulation see the above-cited papers. Thus we replace Equation (4) and the buoyancy algorithm and work with Equations (1) and (2) along with a form of Equation (3) given by:
$$\begin{aligned}
Q(r,\theta) =\frac{{\alpha_0}[B(r, \theta) + f_r B_{\rm bot} (\theta)]}
{1 + [\{B(r, \theta) + f_r B_{\rm bot} (\theta)\}/B_0]^2}
\cos \theta \frac{1}{4}
\left[ 1 + {\mbox{erf}}\left(\frac{r - r_1}{d_1} \right) \right]
\nonumber
\\
\times
\left[ 1 - {\mbox{erf}}\left(\frac{r - r_2}{d_2} \right) \right].\end{aligned}$$
A notable difference between the source term used in Dikpati and Charbonneau (1999) and the one above is the inclusion of a term $f_r$ here, a parameter which controls how effective buoyancy is. Also, the quenching expression here is slightly different: instead of accounting only for the buoyantly risen field, it also incorporates the local field that is already present near the surface.
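For comparison with the sketch given for Model 1, here is a minimal version of the Model 2 source term displayed above; radii are expressed in units of the solar radius and the profile parameters are again illustrative placeholders, while `B_bot` stands for the toroidal field at the bottom of the overshoot layer at the same latitude.

```python
import numpy as np
from scipy.special import erf

def source_model2(r, theta, B_surf, B_bot, f_r=0.06, alpha_0=10.0, B_0=1.0e5,
                  r1=0.95, r2=1.0, d1=0.01, d2=0.01):
    """Poloidal source proportional to the bottom toroidal field, quenched by
    the total field (local surface field plus the lifted bottom contribution)."""
    B_eff = B_surf + f_r * B_bot
    radial = 0.25 * (1.0 + erf((r - r1) / d1)) * (1.0 - erf((r - r2) / d2))
    return alpha_0 * B_eff / (1.0 + (B_eff / B_0) ** 2) * np.cos(theta) * radial
```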
Equations (1) and (2), along with (5) and with similar forms of the meridional flow and the rotation profile as described in Section 2.1, constitute our Model 2. We keep the values of $\alpha_0$ and $B_0$ the same as those of Model 1 to facilitate comparison.
Results
=======
We have divided this section into two parts. The first part presents a parameter space study of Model 1 ending with a comparison with Model 2. The second part studies the effect of magnetic buoyancy on the dynamo generated magnetic fields and also discusses a novel way of calculating an upper limit of the diffusivity within the overshoot layer (this second part of the study is limited to Model 1).
Variation of basic parameters
-----------------------------
There are quite a few parameters which are used as inputs in our model. Most of them have already been specified in Section 2. Notable amongst these, and ones which have also featured prominently in the past literature on other solar dynamo models, are: the amplitude of the meridional flow speed $v_0$, the amplitude of the source term for the generation of the poloidal field $\alpha_0$ and the diffusivity $\eta$. Parameters unique to the buoyancy driven model we are studying are: the critical field $B_c$, the time between eruptions $\tau$ and the fraction of the erupted field $f_b$ (Model 1) or $f_r$ (Model 2). The emphasis is on studying the influence of these parameters on the dynamo period $T_d$, a quantity which typifies any dynamo model with a particular set of parameters (and also measures the efficiency of any cycle). Wherever appropriate, we also comment on the effect of varying these parameters on the dynamo generated magnetic fields. While varying any one parameter, we keep the other parameters constant at the values already mentioned in Section 2.
![Variation of the dynamo period $T_d$ with the amplitude of the meridional flow speed $v_0$. $T_d$ is in years and $v_0$ is in m s$^{-1}$. The solid line connects the data points denoted by circles, while the dashed line depicts a C/x (C = a constant) behavior for comparison. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G, $f_b = 0.05$.](figure1.ps){height="6.5cm" width="10cm"}
We start by presenting a plot of the variation of $T_d$ with $v_0$ for Model 1 in Figure 1, other parameters being the same as mentioned in Section 2 and with $f_b = 0.05$. We see that $T_d$ decreases with increasing $v_0$ and the dependence is almost $v_0^{-1}$ (as is evident from a comparison of the solid line connecting the data points and the dashed $v_0^{-1}$ line). In most PSKR models (many of which were worked out when the existence and role of meridional circulation was still not clear), turbulent diffusivity acted as the bridge between the source regions of the toroidal and the poloidal field - which often overlapped in these models. Contrary to that, in BL models, meridional circulation plays an important role in transporting flux between the two source regions of toroidal and poloidal field production (from the bottom of the SCZ to the top near the equator and vice-versa near the pole). Therefore an increase in $v_0$ would mean faster flux transport - a more efficient cycle and hence an inverse dependence of $T_d$ on $v_0$.
In our Model 1 however, magnetic buoyancy is also involved in the flux transport process - over a much wider extent in latitude and also at a much faster rate. One would have expected then that the dependence of $T_d$ on $v_0$ would be much less pronounced in this case. Nonetheless, we still find a drastic dependence of $T_d$ on $v_0$. It seems that although buoyancy may be more efficient in transporting flux to the surface from the overshoot layer, the crucial factor in completing the chain turns out to be the transport of the poloidal field from the surface (near the poles) to the bottom of the SCZ for the regeneration of the toroidal field - a process which can only be carried out by the meridional down-flow near the poles. Hence, it turns out that the dynamo period is critically dependent on the meridional flow even for buoyancy driven models. If we make $v_0 < 5.0$ m s$^{-1}$, the dynamo wave at the bottom of the SCZ starts propagating poleward, that is the dynamo is no longer advection-dominated.
![Variation of $T_d$ (in years) with the amplitude of the source coefficient $\alpha_0$ (in m s$^{-1}$). The solid line connects the data points and the dashed line shows a y = constant behavior, for comparison. Other parameters are; $v_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G, $f_b = 0.05$.](figure2.ps){height="6.5cm" width="10cm"}
Figure 2 shows the variation of the dynamo period on changing the amplitude of the $\alpha$-coefficient for Model 1. We see that a varying $\alpha_0$ does not have much influence on $T_d$. This is fortunate for us because $\alpha_0$ essentially is the strength of a phenomenological source term, which represents the decay of tilted active regions to produce the poloidal field. Within a buoyancy driven dynamo framework then, one would like $T_d$ to be influenced more by the buoyancy mechanism (parameters which control the buoyant flux transport), rather than the amplitude of the phenomenological source coefficient. Moreover a reliable estimate of $\alpha_0$ is a formidable task, specially at the non-linear quenched regime, whether it be the BL approach motivated $\alpha$ or the PSKR approach $\alpha$ (Pouquet, Frisch, and Leorat 1976; Brandenburg and Schmitt 1998). Dynamo periods of most models based on the BL approach are actually found to be rather independent of the source coefficient. In contrast to this, dynamo periods of models based on the PSKR approach are significantly dependent on the strength of the $\alpha$-effect.
If we make $\alpha_0 < 10.0$ m s$^{-1}$, the dynamo becomes sub-critical and we get decaying solutions. Also, the poloidal field near the pole increases rapidly if $\alpha_0$ is increased. A generic problem of this kind of BL models anyway, is the existence of high fields near the pole. Therefore it may be a good idea to work with a low value of $\alpha_0$, keeping the dynamo just about super-critical.
The Dikpati and Charbonneau (1999) model is sufficiently different from the model that we are working with at present (for example they also incorporate the latitudinal dependence in the differential rotation and a depth-dependent diffusivity in their model). Given that they had reported a drastically different dynamo period dependence on diffusivity, we wanted to explore how a poloidal field source term similar to their buoyancy [*[recipe]{}*]{} behaves, within the framework of our model and hence, we constructed Model 2. We now present some results to facilitate comparison between our Model 1 and Model 2.
![$T_d$ (in years) versus $\eta$ (in $10^{8}$ m$^2$ s$^{-1}$) for Model 1 ([**[Top Panel]{}**]{}) and Model 2 ([**[Bottom Panel]{}**]{}). The solid line connects the data points while the dashed line shows a C/x behavior. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $v_0$ = 10 m s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G, $f_b = 0.05$ for Model 1 and $f_r = 0.06$ for Model 2.](figure3t.ps){height="6.5cm" width="10cm"}
![$T_d$ (in years) versus $\eta$ (in $10^{8}$ m$^2$ s$^{-1}$) for Model 1 ([**[Top Panel]{}**]{}) and Model 2 ([**[Bottom Panel]{}**]{}). The solid line connects the data points while the dashed line shows a C/x behavior. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $v_0$ = 10 m s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G, $f_b = 0.05$ for Model 1 and $f_r = 0.06$ for Model 2.](figure3b.ps){height="6.5cm" width="10cm"}
We show the variation of $T_d$ with the diffusivity $\eta$ for Model 1 (Top Panel) and Model 2 (Bottom Panel) in Figure 3. Again on comparison of the solid line connecting the data points with the dashed $\eta^{-1}$ line we find that the dynamo period is almost inversely proportional to the diffusivity within the SCZ for Model 1. The Bottom Panel presents the $T_d$ versus $\eta$ plot for Model 2, for $f_r = 0.06$ (corresponding to the [*[buoyancy saturated regime]{}*]{}). We find that in this case the dependence of $T_d$ on $\eta$ (the solid line) is far from a $\eta^{-1}$ dependence (the dashed line). In fact on re-doing the calculations for a higher value of $f_r$ for Model 2, we see that the dynamo period dependence on the diffusivity becomes less and less pronounced.
In PSKR models of the past with no meridional circulation and in interface dynamo models as well (Parker 1993; Markiel and Thomas 1999), an inverse dependence of $T_d$ on $\eta$ is expected and also seen. In simple linear models too, the period is expected to vary as $\eta^{-1}$. It is not a priori obvious that a non-linear model with $\alpha$ quenching and magnetic buoyancy - like Model 1, will have an $\eta^{-1}$ dependence of the dynamo period. Moreover, Dikpati and Charbonneau (1999), working with a BL type flux transport model with a [*[recipe]{}*]{} for buoyancy similar to our Model 2, reported a $T_d \propto \eta^{0.22}$ dependence. So the question naturally arises: why does our Model 1 give such a strong $\eta^{-1}$ dependence, at variance with other BL models?
In earlier studies we have presented some results of the variation in the latitude of eruptions with time (see for example Nandy and Choudhuri 2000 and Paper I) for Model 1. We find that the strongest toroidal fields are usually found at high latitudes and their strength decreases progressively (due to eruptions) as they propagate towards the equator. Here, we find that with decreasing $\eta$ the region of eruptions starts extending towards lower and lower latitudes. This is presumably because, with a low diffusivity, the strong toroidal fields can be stored for a longer time in the overshoot layer and get amplified by the strong radial shear (thus maintaining a value $>$ $B_c$) while they are being carried equatorward by the meridional flow. This increase in the region of magnetic activity increases the dynamo period with decreasing $\eta$ - simply because now the cycle has to extend and thus transport flux over a wider range in latitude. We shall see in Section 3.2 that indeed with decreasing $\eta$, stronger fields are found at lower latitudes within the overshoot layer.
Another appealing reason may lie in the way the source term for the generation of the poloidal field is formulated. Note that due to the presence of the quenching term the poloidal field generation gets quenched when the erupted toroidal field approaches values close to or greater than $B_0$ ($10^5$ G). For a low value of $\eta$ and over a period of many successive eruptions, we find that the toroidal field near the top (where the $\alpha$-effect is concentrated) approaches very high values close to $B_0$. This is unacceptably large and quenches the poloidal field generation completely thus making the dynamo process inefficient. However when we increase $\eta$, the erupted field near the top diffuses and spreads out faster, thus not reaching very high values. This lets the poloidal field production go on uninterrupted making the dynamo process more effective. We believe that this scenario may also play a role in the reduction of the dynamo period with increasing $\eta$.
As is clear from the above discussion, in Model 1, the poloidal field production is a two-step process: the toroidal flux is first transported to the top and the $\alpha$-effect then acts on it, with diffusion having the intermediate role of spreading out the erupted field. Contrary to that, in Model 2, the poloidal field production is a direct one-step process, the efficiency of which is determined by $f_r$ and where the role of diffusivity is somewhat subdued. Therefore it is not surprising that with increasing $f_r$, the role of diffusivity in spreading out flux in Model 2 becomes more and more redundant and $T_d$ becomes less dependent on $\eta$.
However, we never get a direct dependence of the period on diffusivity as reported by Dikpati and Charbonneau (1999), and this may be due to the differences that exist between our general model and theirs (for example they worked with a variable diffusivity profile which is such that variation of $\eta$ in the bulk of the SCZ does not affect the diffusivity within the overshoot layer much).
![$T_d$ (in years) versus the fraction of the erupted field - $f_b$ for Model 1 (dashed line) and $f_r$ for Model 2 (solid line). Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $v_0$ = 10.0 m s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G.](figure4.ps){height="6.5cm" width="10cm"}
Figure 4 shows the variation of the dynamo period with the control parameters $f_b$ (Model 1) and $f_r$ (Model 2), respectively. We had presented a similar plot for Model 1 in Paper I (albeit with a lower value of $v_0$); we redo the calculation here for the sake of completeness of this paper and so as to easily compare it with Model 2. We see that $T_d$ for both models decreases with increasing control parameter and saturates to a certain value - which is different for the two models. As discussed in Paper I, such a $T_d$ versus control parameter dependence characterises buoyancy driven flux transport models and portrays the fact that a higher control parameter means more efficient flux transport due to buoyancy and hence a lower dynamo period. In that sense Model 2 (and the poloidal source formulation of Dikpati and Charbonneau 1999) does seem to capture the nature of buoyancy within a dynamo framework. We refer to the regime where the dynamo period has reached the saturation value as the [*[buoyancy saturated regime]{}*]{}. For Model 1 this is found to occur at around $f_b = 0.04$ and for Model 2 this occurs at around $f_r =0.06$.
Notice though that the period for Model 2 saturates at a much higher value. This essentially means that Model 2 is a less efficient manifestation of the buoyancy process. The depletion of the toroidal flux due to buoyancy in Model 1 plays a crucial role in limiting the latitudinal extent of the dynamo action (for a more detailed discussion on why this is so please refer to Paper I). This in tandem with the efficient recycling of flux for a high $f_b$ decreases the dynamo period drastically. However, Model 2 is formulated in such a way that it is not possible to deplete toroidal flux self-consistently. Thus the dynamo cycle takes place over the whole of the convection zone and there is hardly any effect on the toroidal field inside the overshoot layer by increasing $f_r$. Therefore it is not surprising that $T_d$ for Model 2 does not decrease as significantly as that of Model 1, with increasing control parameter.
$B_c$ is constrained by results from simulations of flux tube rise and flux storage within the overshoot layer and is expected to be around $10^5$ G. Around a thousand eruptions (in the form of sunspots) are seen on the solar surface in a complete solar cycle and we have fixed the value of $\tau = 8.8 \times 10^5$ s to reflect that. However, we did some runs with half and double the values of $B_c$ and $\tau$ and the dynamo period $T_d$ remained close to the original values. In any case, in the [*[buoyancy saturated regime]{}*]{} for $f_b = 0.05$, $T_d$ is not expected to vary much with the parameters for buoyancy.
The effect of buoyancy on the dynamo generated magnetic fields within the overshoot layer
-----------------------------------------------------------------------------------------
We have already seen from the results presented in the previous section that the strength of magnetic buoyancy ($f_b$) has a strong influence on the dynamo period (and thus on the efficiency) of the solar cycle. We carry this study of Model 1 further to see whether the mechanism of buoyant eruption has any effect on the magnitude of the magnetic fields inside the overshoot layer.
The $\alpha$-quenching term can also constrain the magnitude of the dynamo generated magnetic fields. So it is necessary first to understand how this mechanism limits the magnetic field before we go on to the role played by buoyancy. As discussed in Section 2.1, the quenching term works in such a manner that poloidal field production stops rapidly once the erupted toroidal field approaches values close to $B_0 = 10^{5}$ G. This in turn has an effect on the magnitude of the toroidal field produced in the next cycle and over many cycles this mechanism ensures that the solutions converge to a stable oscillation with a non-growing amplitude of the magnetic fields. The results that we have presented are for such stable oscillations with a non-growing amplitude. Therefore the magnetic fields everywhere within the SCZ are expected to scale linearly with the value of $B_0$, if $\alpha$-quenching is the only magnitude limiting mechanism.
![[**[Top Panel]{}**]{}: Variation of the maximum toroidal field $B_{max}$ (solid line) and the low-latitude toroidal field $B_{eq}$ (dashed line) within the overshoot layer with the quenching field $B_0$, $B_c$ is fixed at $10^5$ G. [**[Bottom Panel]{}**]{}: $B_{max}$ and $B_{eq}$ versus the critical field for eruption $B_c$, with $B_0 =10^5$ G. All fields are in units of $10^5$ G. The dash-dotted line in the Bottom Panel shows a y = x behavior for comparison. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $v_0$ = 10.0 m s$^{-1}$ and $f_b =0.05$.](figure5t.ps){height="6.5cm" width="10.8cm"}
![[**[Top Panel]{}**]{}: Variation of the maximum toroidal field $B_{max}$ (solid line) and the low-latitude toroidal field $B_{eq}$ (dashed line) within the overshoot layer with the quenching field $B_0$, $B_c$ is fixed at $10^5$ G. [**[Bottom Panel]{}**]{}: $B_{max}$ and $B_{eq}$ versus the critical field for eruption $B_c$, with $B_0 =10^5$ G. All fields are in units of $10^5$ G. The dash-dotted line in the Bottom Panel shows a y = x behavior for comparison. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $v_0$ = 10.0 m s$^{-1}$ and $f_b =0.05$.](figure5b.ps){height="6.5cm" width="10cm"}
In Figure 5 (Top Panel) we present a plot of the variation of the toroidal fields within the overshoot layer with varying $B_0$, for $f_b = 0.05$. The solid line corresponds to the maximum toroidal field within the overshoot layer $B_{max}$ (which is found to be at high latitudes) and the dashed line corresponds to the toroidal field near the equator $B_{eq}$ at a latitude of around $10^{\circ}$. $B_{max}$ seems to be relatively unaffected by the adopted value for $B_c$ and scales with $B_0$, as expected for a model without buoyancy. In contrast, $B_{eq}$ does not change much with $B_{0}$, and the maximum value it attains within the range that we have studied is $0.95 \times 10^{5}$ G. This is very significant, especially since the maximum value attained by $B_{eq}$ is [*[slightly less than]{}*]{} $B_c$.
After the meridional down-flow drags the poloidal field down to the overshoot layer, the strong radial shear in the differential rotation starts working on it to create the toroidal field. By the time the toroidal field belt is advected down a little by the meridional circulation it reaches a high value and exceeds $B_c$ by about an order of magnitude. Eruptions start occurring immediately and as this toroidal field belt moves equatorward eruptions continue. Due to the accompanying depletion in field strength after eruptions, the toroidal field keeps on decreasing in strength until it falls below $B_c$ (obviously here the rate of flux production is less than the rate of flux depletion due to buoyancy). This therefore explains why the value of the toroidal field is constrained to $\leq$ $B_c$ at low latitudes. Just for a feeling for what would happen [*[in the absence of buoyancy]{}*]{}, consider the following; if for $B_0 = 10^{5}$ G $B_{eq}$ is found to be $10 \times 10^{5}$ G, on making $B_0 = 5 \times 10^{5}$ G, $B_{eq}$ attains a value of $ 50 \times 10^{5}$ G.
Figure 5 (Bottom Panel), where we present the variation of the toroidal field within the overshoot layer with the critical field $B_c$, lends further credence to the above inferences. We see that indeed $B_{max}$ remains unaffected by $B_c$ within the studied range, whereas $B_{eq}$ always stays below $B_c$, as is apparent on comparison with the dash-dotted $B = B_c$ line.
![Variations in $B_{max}$ (solid line) and $B_{eq}$ (dashed line) with the fraction of the erupted field $f_b$. All fields are in units of $10^5$ G. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $\eta = 0.12 \times 10^8$ m$^2$ s$^{-1}$, $v_0$ = 10.0 m s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G.](figure6.ps){height="6.5cm" width="10cm"}
We had done all the above calculations with a low value of $f_b = 0.05$ (though this value is in the [*[buoyancy saturated regime]{}*]{} of the dynamo). Even when such a low fraction of the toroidal flux is made to erupt we see that buoyancy manages to constrain the magnitude of the toroidal field. Naturally one wonders what would happen if we make the fraction of the erupted field (the control parameter) much larger.
So we fix $B_c$ at $10^5$ G and study the variation of the toroidal field within the overshoot layer with $f_b$ in Figure 6. We see that with increasing $f_b$, both $B_{max}$ and $B_{eq}$ decrease and ultimately reach an asymptotic limit. Interestingly, $B_{max}$ and $B_{eq}$ reach their asymptotic limits at about the same value of $f_b$ for which the dynamo period reaches its asymptotic limit (see Figure 4). While $B_{max}$ drops to within an order of magnitude of $B_c$ (at $9.8 \times 10^5$ G), $B_{eq}$ drops down well below $B_c$ (at $0.44 \times 10^5$ G), thereby strengthening our conjecture that $B_{eq}$ is more affected by buoyant eruptions than $B_{max}$. We may point out here that even with a higher $f_b$ (held constant), the previous result, that $B_{max}$ is not affected much by variation in $B_c$, holds true.
We end this section by discussing a procedure, which may help us to fix an upper limit on the diffusivity within the solar overshoot layer. This technique is motivated from an understanding of the results presented above. Briefly summarising the results relevant to this analysis, we have learned that when a strong toroidal field belt inside the overshoot layer is advected equatorward by the meridional circulation, it decreases in strength due to buoyant eruptions. Careful study of the variation in the eruption latitude shows us that with decreasing diffusivity the region of eruption extends to lower and lower latitude. This led us to hypothesise (in Section 3.1) that with a lower value of diffusivity within the overshoot layer, it may be possible to store and amplify the toroidal field belt so that it is maintained above the critical field $B_c$ even at low latitudes, hence allowing for eruptions there, as seen in reality.
![Variation of the low-latitude toroidal field $B_{eq}$ within the overshoot layer, with the diffusivity $\eta$. $B_{eq}$ is in $10^5$ G and $\eta$ is in $10^8$ m$^2$ s$^{-1}$. The dashed line shows this variation while the solid line is the intercept corresponding to $B_{eq} = B_c$. Other parameters are; $\alpha_0$ = 10 m s$^{-1}$, $v_0$ = 10 m s$^{-1}$, $B_0 = 10^5$ G, $B_c = 10^5$ G, $f_b = 0.05$.](figure7.ps){height="6.5cm" width="10cm"}
In Figure 7 we plot the variation of the toroidal field within the overshoot layer near the equator (at $10^{\circ}$ latitude) with respect to the diffusivity $\eta$. The dashed line shows this variation, and the result that the strength of the low-latitude toroidal field inside the overshoot layer falls with increasing $\eta$ lays a more solid foundation for our starting hypothesis. We find that, on making $\eta > 0.075 \times 10^{8}$ m$^2$ s$^{-1}$, the strength of the toroidal field within the overshoot layer falls below $B_c = 10^5$ G. This suggests that the upper limit of the diffusivity within the overshoot layer (which we may call $\eta^{max}_{overshoot}$) should be $0.075 \times 10^{8}$ m$^2$ s$^{-1}$, to ensure eruptions at low latitudes as seen in reality. Note that the value of the diffusivity also depends on the adopted thickness of the overshoot layer.
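Operationally, the estimate of $\eta^{max}_{overshoot}$ amounts to locating the diffusivity at which the $B_{eq}(\eta)$ curve crosses $B_c$. A minimal interpolation sketch is given below; the arrays `eta_grid` and `B_eq_grid` are placeholders for values read off from runs such as those shown in Figure 7.

```python
import numpy as np

def eta_upper_limit(eta_grid, B_eq_grid, B_c=1.0e5):
    """Diffusivity at which B_eq(eta) drops to the critical field B_c.

    Assumes B_eq decreases monotonically with eta, as in Figure 7; the
    arrays are reversed because np.interp expects increasing x-values.
    """
    return np.interp(B_c, B_eq_grid[::-1], eta_grid[::-1])
```

For the parameters of Figure 7, this crossing is what yields the quoted value of about $0.075 \times 10^{8}$ m$^2$ s$^{-1}$.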
However, on the basis of this model alone, we cannot make a claim to the authenticity of the value of $\eta^{max}_{overshoot}$ as found above. Rather we have shown that based on some physical arguments, [*[it is possible]{}*]{} to make such an indirect estimate (without taking into account the effects that a strong magnetic field may have on $\eta$ inside the overshoot layer). The exact value of $\eta$ within the overshoot layer remains to be verified by other independent analysis, preferably from a more fine tuned solar dynamo model.
Concluding remarks
==================
The basic foundation of this model was laid in Paper I, where we showed that a model with such a [*[recipe]{}*]{} for buoyancy and with a concentrated $\alpha$-effect near the surface, is a valid representation of the BL approach. In this paper, we have carried this study further, to show how buoyancy affects the dynamo generated magnetic fields and the working of the solar dynamo in general.
The strength of magnetic buoyancy is seen to affect the dynamo period drastically. With an increasing fraction of the field made to erupt $(f_b)$, the dynamo period decreases and reaches an asymptotic limit - the [*[buoyancy saturated regime]{}*]{} of the dynamo. One important question is of course whether the solar dynamo is actually working in this [*[regime]{}*]{} or whether it is working in a [*[non - buoyancy saturated regime]{}*]{}. In the former case, variation in the parameters controlling buoyancy will not have much effect on the dynamo period (and the dynamo generated magnetic fields) and it will be the meridional flow speed which will primarily control the period, whereas in the latter case, variations in the control parameters for buoyancy will strongly influence the dynamo period (and also the amplitude of the cycle).
Since we find that for a very low value of $f_b = 0.05$ the dynamo period reaches its asymptotic limit, chances are fairly high that the solar dynamo is indeed working in the [*[buoyancy saturated regime]{}*]{}, and that the meridional flow speed and its fluctuations have the final say in determining the cycle period, thus acting as a [*[solar clock]{}*]{}. Some authors have studied the effects of stochastic fluctuations in BL models of the solar dynamo and their possible effects on the cycle period and amplitude; see, for example, Charbonneau and Dikpati (2000) and Charbonneau (2001, in press). A strong influence of the meridional circulation on the dynamo cycle period and amplitude is portrayed in their results. In Section 3.1, following Figure 1, we surmised about the crucial role played by the meridional down-flow near the poles in completing the dynamo chain; the meridional flow, being the slowest process in this chain, in all likelihood [*[is]{}*]{} the main determinant of the solar cycle period.
In retrospect, it is surprising that the dynamo saturates and reaches the [*[buoyancy saturated regime]{}*]{} at such a low fraction of the erupted field. Note however that when flux tubes become buoyant and start to rise, gravity would stretch the field lines (due to the rapid rise of the upper lighter part of the tube) and the field reconnects. It is not inconceivable then that the lower part of the reconnected tube (with the larger fraction of the flux), which is rooted to the overshoot layer, sinks back into the overshoot layer. Some studies also suggest that a large fraction of the erupted flux may actually be retracted back into the deeper layers of the SCZ (Rabin, Moore, and Hagyard 1984; Parker 1984, 1987; Howard 1992; D’Silva 1995), thus not contributing to the poloidal field regeneration. These considerations lead us to conclude then that maybe, in reality, it is indeed a small fraction of the deep toroidal field which contributes to the flux recycling process.
Within the framework of such a buoyancy driven flux transport model, we find that the diffusivity $\eta$ and its relative magnitude in the lower and upper parts of the SCZ is of vital importance, even though its role as a flux-transporter between the two source regions is greatly undermined by that of the meridional flow and magnetic buoyancy. While a low value of the diffusivity is required within the overshoot layer, to enable toroidal fields exceeding the critical field limit $B_c$ to be present at low latitudes (thus resulting in eruptions there), a higher value of diffusivity may make the poloidal flux generation near the surface more efficient by spreading out the erupted toroidal field (so that the $\alpha$-effect is not quenched). This latter scenario remains to be explored more quantitatively (with a depth-dependent diffusivity) and a study of the same will be undertaken in the near future.
Most solar theorists seem to agree on the value of $10^8$ m$^2$ s$^{-1}$ as an upper limit for $\eta$ for the convection zone proper and surface observational estimates also point to a similar figure (Wang, Nash, and Sheeley 1989a,b; Dikpati and Choudhuri 1995; Schrijver and Martin 1990). With respect to the above figure, some theoretical arguments can be made to make an order of magnitude estimate of the value of $\eta$ within the overshoot layer (Parker 1993), where the magnetic field is an order of magnitude greater than the magnetic field within the SCZ. Following these arguments it turns out that $\eta$ inside the overshoot layer should be two orders of magnitude less than $\eta$ in the main body of the SCZ. That is, $\eta$ should be around $0.01 \times 10^8$ m$^2$ s$^{-1}$ inside the overshoot layer. We have proposed a mechanism for estimating an upper limit of the diffusivity within the overshoot layer and have come up with a value, $\eta^{max}_{overshoot} = 0.075 \times 10^{8}$ m$^2$ s$^{-1}$. Though this result has been arrived at with a rather simple dynamo model (with only a radial shear in the rotation), it is nice to see that it does not contradict the earlier speculative value.
We have shown that magnetic buoyancy can limit the magnetic field within the overshoot layer and the adopted value for the critical field $B_c$ strongly constrains the toroidal field at low latitudes. One question naturally arises here - is magnetic buoyancy capable of quenching the growth of the dynamo within the framework of such models? We did some runs with infinite $B_0$ (that is no $\alpha$ quenching) and with $B_c = 10^5$ G to test this. In this case, we found that the amplitude of the generated fields kept on blowing up without saturating to a finite-amplitude oscillation (note that with an infinite $B_0$, the equations become linear once the toroidal field exceeds $B_c$ and hence the result that the generated fields blow up, is a necessary outcome). Thus the answer to the above question is - no. At least, within the framework of such kinematic dynamos, where there is an [*[infinite]{}*]{} energy source to tap from (the prescribed meridional motions and the differential rotation), magnetic buoyancy alone, is not capable of quenching the growth of the dynamo generated fields. Thus magnetic buoyancy seems to limit the dynamo generated fields [*[within]{}*]{} a larger quenching framework.
Our results also show that the peak deep toroidal field attains a very high value within a short span of the onset of a new half-cycle and after that the toroidal field strength continuously decreases till the end of that half-cycle, the pattern repeating itself. Assuming that the sunspot activity is directly related to the toroidal field at the bottom of the SCZ (we may point out here that it is still not clear how strong flux tubes form out of a diffuse field and whether the strength of sunspots reflects the strength of the toroidal flux tubes in the overshoot layer), this would translate to seeing the strongest active regions within a couple of years of the beginning of a new cycle and relatively weaker and weaker active regions as the sunspot cycle progresses (at lower latitudes). At the end of 11 years this cycle would repeat itself. There exists observational evidence which shows that sunspot strength and size are maximum during the solar maximum (around 5.5 years after the start of a new cycle) and decrease progressively till the minimum (Tang, Howard, and Adkins 1984). This is reflected to an extent in the presented results, albeit with a higher latitude belt of activity and with an offset of a couple of years. It remains to be seen whether a more sophisticated model incorporating the latitudinal dependence of the differential rotation can reproduce the observations exactly.
We end with a few critical comments on the rather simple model that we have used. Our model incorporates a concentrated radial shear at the base of the SCZ matching the helioseismologically determined profile from mid-latitudes to low latitudes near the equator and is deficient in the sense that we have not considered the latitudinal variation in the rotation. We believe that the inclusion of a latitudinal shear will not change the results of the parameter space study qualitatively. Nor is it going to change the result that buoyant eruptions limit the toroidal field inside the overshoot layer, which is a fundamental outcome of the eruption and subsequent flux depletion procedure. What it may change, though, is the nature and appearance of the magnetic butterfly diagrams. Due to this we have refrained from going for any detailed comparison with observations in this paper.
A latitude-dependent solar-like rotation profile might also influence at which latitude we find the maximum toroidal fields. However, Durney (1997) and Küker, Rüdiger, and Schultz (2001), working with solar-like rotation profiles, also find the maximum toroidal field at high latitudes near the pole. While these results support our case, we cannot help but acknowledge that the finding of the maximum toroidal fields at high latitudes is a definite problem which needs to be addressed at some stage. That, however, is beyond the scope of this model and we leave it to future studies to address this issue.
I would like to thank Paul Charbonneau and Bernard Durney, lively exchanges with whom inspired some of the studies and discussion presented in this paper. I am also grateful to Arnab Rai Choudhuri, initial work with whom laid the foundations of the model explored in this study.
Babcock, H. W.: 1961, [*[Astrophys. J.]{}*]{}, [**[133]{}**]{}, 572
Brandenburg, A., and Schmitt, D.: 1998, [*[Astr. Astrophys.]{}*]{}, [**[338]{}**]{}, L55
Caligari, P., Moreno-Insertis, F., and Schüssler, M.: 1995, [*[Astrophys. J.]{}*]{}, [**[441]{}**]{}, 886
Charbonneau, P.: 2001, [*[Solar Phys.]{}*]{}, in press
Charbonneau, P., and Dikpati, M.: 2000, [*[Astrophys. J.]{}*]{}, [**[543]{}**]{}, 1027
Choudhuri, A. R.: 1990, [*[Astrophys. J.]{}*]{}, [**[355]{}**]{}, 733
Choudhuri, A. R.: 1998, [*[The Physics of Fluids and Plasmas: An Introduction for Astrophysicists]{}*]{}, (Cambridge: Cambridge University Press)
Choudhuri, A. R., and Dikpati, M.: 1999, [*[Solar Phys.]{}*]{}, [**[184]{}**]{}, 61
Choudhuri, A. R., Schüssler M., and Dikpati M.: 1995, [*[Astr. Astrophys.]{}*]{}, [**[303]{}**]{}, L29
DeLuca, E. E., and Gilman, P. A.: 1986, [*[Geophys. Astrophys. Fluid Dyn.]{}*]{}, [**[37]{}**]{}, 85
D’Silva, S., and Choudhuri, A. R.: 1993, [*[Astr. Astrophys.]{}*]{}, [**[272]{}**]{}, 621
D’Silva, S.: 1995, [*[Astrophys. J.]{}*]{}, [**[448]{}**]{}, 459
Dikpati, M., and Charbonneau, P.: 1999, [*[Astrophys. J.]{}*]{}, [**[518]{}**]{}, 508
Dikpati, M., and Choudhuri, A. R.: 1995, [*[Solar Phys.]{}*]{}, [**[161]{}**]{}, 9
Durney, B. R.: 1995, [*[Solar Phys.]{}*]{}, [**[160]{}**]{}, 213
Durney, B. R.: 1996, [*[Solar Phys.]{}*]{}, [**[166]{}**]{}, 231
Durney, B. R.: 1997, [*[Astrophys. J.]{}*]{}, [**[486]{}**]{}, 1065
Fan, Y., Fisher, G. H., and DeLuca, E. E.: 1993, [*[Astrophys. J.]{}*]{}, [**[405]{}**]{}, 390
Howard, R. F.: 1992, [*[Solar Phys.]{}*]{}, [**[142]{}**]{}, 47
Küker, M., Rüdiger, G., and Schultz, M.: 2001, [*[Astr. Astrophys.]{}*]{}, in press
Leighton, R. B.: 1969, [*[Astrophys. J.]{}*]{}, [**[156]{}**]{}, 1
Markiel, J. A., and Thomas, J. H.: 1999, [*[Astrophys. J.]{}*]{}, [**[523]{}**]{}, 827
Moffatt, H. K.: 1978, [*[Magnetic Field Generation in Electrically Conducting Fluids]{}*]{}, (Cambridge: Cambridge University Press)
Moreno-Insertis, F.: 1983, [*[Astr. Astrophys.]{}*]{}, [**[122]{}**]{}, 241
Moreno-Insertis, F., Schüssler, M., and Ferriz Mas, A.: 1992, [*[Astr. Astrophys.]{}*]{}, [**[264]{}**]{}, 686
Moss, D., Tuominen, I., and Brandenburg, A.: 1990a, [*[Astr. Astrophys.]{}*]{}, [**[228]{}**]{}, 284
Moss, D., Tuominen, I., and Brandenburg, A.: 1990b, [*[Astr. Astrophys.]{}*]{}, [**[240]{}**]{}, 142
Nandy, D., and Choudhuri, A. R.: 2000, [*[JAA]{}*]{}, [**[21]{}**]{}, 381
Nandy, D., and Choudhuri, A. R.: 2001, [*[Astrophys. J.]{}*]{}, [**[551]{}**]{}, 576
Parker, E. N.: 1955, [*[Astrophys. J.]{}*]{}, [**[122]{}**]{}, 293
Parker, E. N.: 1975, [*[Astrophys. J.]{}*]{}, [**[198]{}**]{}, 205
Parker, E. N.: 1979, [*[Cosmical Magnetic Fields]{}*]{}, (Oxford: Clarendon Press)
Parker, E. N.: 1984, [*[Astrophys. J.]{}*]{}, [**[281]{}**]{}, 389
Parker, E. N.: 1987, [*[Astrophys. J.]{}*]{}, [**[312]{}**]{}, 868
Parker, E. N.: 1993, [*[Astrophys. J.]{}*]{}, [**[408]{}**]{}, 707
Pouquet, A., Frisch, U., and Leorat, J.: 1976, [*[J. Fluid Mech.]{}*]{}, [**[77]{}**]{}, 321
Rabin, D., Moore, R., and Hagyard, M. J.: 1984, [*[Astrophys. J.]{}*]{}, [**[287]{}**]{}, 404
Schmitt, D., and Schüssler, M.: 1989, [*[Astr. Astrophys.]{}*]{}, [**[223]{}**]{}, 343
Schrijver, C. J., and Martin, S. F.: 1990, [*[Solar Phys.]{}*]{}, [**[129]{}**]{}, 95
Spiegel, E. A., and Weiss, N. O.: 1980, [*[Nature]{}*]{}, [**[287]{}**]{}, 616
Steenbeck, M., Krause, F., and Rädler, K. -H.: 1966, [*[Z. Naturforsch.]{}*]{}, [**[21a]{}**]{}, 1285
Tang, F., Howard, R., and Adkins, J. M.: 1984, [*[Solar Phys.]{}*]{}, [**[91]{}**]{}, 75
van Ballegooijen, A. A.: 1982, [*[Astr. Astrophys.]{}*]{}, [**[113]{}**]{}, 99
Wang, Y. -M., Nash, A. G., and Sheeley, N. R.: 1989a, [*[Astrophys. J.]{}*]{}, [**[347]{}**]{}, 529
Wang, Y. -M., Nash, A. G., and Sheeley, N. R.: 1989b, [*[Science]{}*]{}, [**[245]{}**]{}, 712
[^1]: This takes into account the greater latitudinal extent of a grid size near the surface as compared to that at the bottom.
---
abstract: 'We present a matrix factorization algorithm that scales to input matrices that are large in both dimensions (i.e., that contain more than 1TB of data). The algorithm streams the matrix columns while subsampling them, resulting in a low complexity per iteration and a reasonable memory footprint. In contrast to previous online matrix factorization methods, our approach relies on low-dimensional statistics from past iterates to control the extra variance introduced by subsampling. We present a convergence analysis that guarantees that we reach a stationary point of the problem. Large speed-ups can be obtained compared to previous online algorithms that do not perform subsampling, thanks to the feature redundancy that often exists in high-dimensional settings.'
author:
- |
Arthur Mensch\
Inria Parietal\
Saclay, France\
Julien Mairal\
Inria Thoth\
Grenoble, France\
Gaël Varoquaux\
Inria Parietal\
Saclay, France\
Bertrand Thirion\
Inria Parietal\
Saclay, France
bibliography:
- 'opt\_bib.bib'
title: |
Subsampled online matrix factorization\
with convergence guarantees
---
`firstname.lastname@inria.fr`
#### Setup.
The goal of matrix factorization is to decompose a matrix $\X \in \RR^{p \times n}$ – typically $n$ signals of dimension $p$ – as a product of two smaller matrices: $$\X \approx \D \A
\quad \text{with}\quad\D \in \RR^{p \times k}, \;\A \in \RR^{k \times n},$$ with potential sparsity or structure requirements on $\D$ and $\A$. We consider a sample stream $(\x_t)_{t \geq 0}$ that cycles into the columns ${\{\x^{(i)}\}}_i$ of $\X$. Matrix factorization can be formulated as a non-convex optimization problem, where the factor $\D$ (the *dictionary*) minimizes the following empirical risk: $$\label{eq:empirical-risk}
\D = \argmin_{\D \in \mathcal{C}}\;
\bar f \equaldef \frac{1}{n} \sum_{i=1}^n
f^{(i)}(\D),\quad\text{where}\quad
f^{(i)}(\D) =
\min_{\balpha \in \RR^k} \frac{1}{2}
\bigl\|
\x^{(i)}
- \D \balpha
\bigr\|_2^2 + \lambda \, \Omega(\balpha).$$ $\mathcal{C}$ is a column-wise separable convex set of $\RR^{p \times
k}$, and $\Omega : \RR^p \rightarrow \RR$ is a penalty over the code. The problem of *dictionary learning* [@olshausen_sparse_1997; @agarwal_learning_2014] sets $\mathcal{C} = \mathcal{B}_2^k$ and $\Omega = \Vert \cdot \Vert_1$. Due to the sparsifying effect of $\ell_1$ penalty [@tibshirani_regression_1996], the dictionary forms a basis in which the data admit a *sparse* representation. Setting $\mathcal{C} = \mathcal{B}_ 1^k$ and $\Omega = \Vert \cdot \Vert_2^2$ yields a data-adapted *sparse basis*, akin to sparse PCA[@zou_sparse_2006]. The algorithm presented here accommodates *elastic-net* penalties $\Omega(\balpha) \triangleq (1- \nu) \Vert \balpha \Vert_2^2 + \nu \Vert \balpha \Vert_1$, and elastic-net ball-constraints $\mathcal{C} \triangleq \{ \D \in \RR^{p \times k}/\:
\Vert \d^{(j)} \Vert \triangleq \Vert \d^{(j)} \Vert_1 + (1 - \mu) \Vert \d^{(j)} \Vert_2^2 \leq 1 \}$.
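As a quick reference, the sketch below transcribes the elastic-net penalty $\Omega$ and the column-wise constraint check for $\mathcal{C}$, written exactly as defined above (with $\nu$ and $\mu$ the hyper-parameters introduced in the text).

```python
import numpy as np

def omega(alpha, nu):
    """Elastic-net penalty on a code vector alpha."""
    return (1.0 - nu) * np.sum(alpha ** 2) + nu * np.sum(np.abs(alpha))

def in_constraint_set(D, mu):
    """Column-wise elastic-net ball constraint on the dictionary D of shape (p, k)."""
    norms = np.abs(D).sum(axis=0) + (1.0 - mu) * (D ** 2).sum(axis=0)
    return bool(np.all(norms <= 1.0))
```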
#### Problem.
For many applications of matrix factorization, datasets are growing in both sample number and sample dimension. The online matrix factorization algorithm [@mairal_online_2010] can handle large numbers of samples but was designed to work in relatively small dimension. Recent work [@mensch_dictionary_2016] has adapted this algorithm to handle very high-dimensional datasets. Though it demonstrates good empirical performance, the proposed algorithm yields sequences of iterates with non-vanishing variance and is not asymptotically convergent.
#### Contribution.
We address this issue and correct some aspects of the algorithm in [@mensch_dictionary_2016] to establish convergence and correctness. We thus introduce a new method for efficiently factorizing numerous high-dimensional data (large $p$, large $n$) with theoretical guarantees.
As in [@mensch_dictionary_2016], we perform *subsampling* at each iteration of the online matrix factorization algorithm. The sample stream is downsampled in a *stochastic* manner: we observe different subsets of successive columns. We thus perform each step of the online algorithm in a space of reduced dimension $q < p$. Unlike [@mensch_dictionary_2016], we control the variance introduced by the subsampling to obtain convergence. For this, we rely on low-dimensional statistics kept from the past, as do many recent stochastic algorithms [@schmidt_minimizing_2013; @johnson_accelerating_2013; @defazio_saga:_2014].
Algorithm
=========
#### Original online algorithm.
The problem can be solved online, following [@mairal_online_2010]. At each iteration $t$, a sample $\x_t$ is drawn from one of the columns $\{ \x^{(i)} \}_{1 \leq i \leq n}$ of $\X$. Its code $\balpha_t$ is computed from the previous dictionary $\D_{t-1}$: $\balpha_s \triangleq \argmin_{\balpha \in \RR^p} \frac{1}{2}
\bigl\|
\x_s
- \D_{s-1} \balpha
\bigr\|_2^2 + \lambda \, \Omega(\balpha)$. Then, $\D_t$ is updated as $$\label{eq:minimization}
% \begin{split}
\D_t \in \argmin_{\D \in \mathcal{C}} \bar g_t(\D)
\triangleq \Big(
\frac{1}{t}\sum_{s=1}^t \frac{1}{2}
\bigl\|
\x_s
- \D \balpha_s
\bigr\|_2^2 + \lambda \Omega(\balpha_s) \Big),
% \balpha_s &\triangleq \argmin_{\balpha \in \RR^p} \frac{1}{2}
% \bigl\|
% \x_s
% - \D_{s-1} \balpha
% \bigr\|_2^2 + \lambda \, \Omega(\balpha).
% \end{split}$$ In other words, $\D_t$ is chosen to be the best dictionary that relates past codes $(\balpha_s)_{s \leq t}$ to past samples $(\x_s)_{s \leq t}$. Codes are not recomputed from the current iterate $\D$, which would be necessary to compute the true past loss function $\bar f_t(\D)$, of which $\bar g_t$ is a strongly-convex upper-bound: $$\begin{aligned}
\label{eq:bar_ft}
\bar f_t(\D) &\triangleq \frac{1}{t} \sum_{s=1}^t \min_{\balpha \in \RR^p} \frac{1}{2}
\bigl\|
\x_s
- \D \balpha
\bigr\|_2^2\!+ \lambda \Omega(\balpha) \leq \bar g_t(\D)\end{aligned}$$ It can be shown [see @mairal_stochastic_2013 for theoretical grounding] that minimizing $(\bar g_t)_t$ yields a sequence of iterates that is asymptotically a critical point of $\bar f$ defined in . $\bar g_t$ can be minimized efficiently by projected block coordinate descent, which makes it useful in practice. Indeed, minimizing $\bar g_t$ is equivalent to minimizing the quadratic function $$\label{eq:full_quadratic}
\D \to\frac{1}{2} \trace (\D^\top \D \bar \C_t)
- \trace (\D^\top \B_t),\:\text{where}\quad
\bar \B_t = \frac{1}{t} \sum_{s=1}^t \x_s \balpha_s^\top, \quad
\bar \C_t = \frac{1}{t} \sum_{s=1}^t \balpha_s \balpha_s^\top.$$ Its gradient $\nabla \bar g_t: \D \to \D \bar \C_t - \bar \B_t$ can be tracked online by updating $\bar \C_t$ and $\bar \B_t$ at each iteration: $$\label{eq:parameter-aggregation}
\bar \C_t = (1 - \frac{1}{t}) \bar \C_{t-1}
+ \frac{1}{t} \balpha_t \balpha_t^\top \qquad
\bar \B_t = (1 - \frac{1}{t}) \bar \B_{t-1}
+ \frac{1}{t} \x_t \balpha_t^\top.$$ Those two statistics are thus sufficient to yield the sequence $(\D_t)_t$. The weight $\frac{1}{t}$ used above can be replaced by a more general $w_t$. In addition, the online algorithm has a minibatch extension.
#### Subsampled algorithm.
We adapt the algorithm from [@mairal_online_2010] to handle large sample dimension $p$. The complexity of this algorithm linearly depends on the dimension $p$ in three aspects:
- $\x_t \in \RR^p$ is used to compute the code $\balpha_t$,
- it is used to update the surrogate parameters $\bar \C_t \in \RR^{p\times k}$,
- $\D_t \in \RR^{p\times k}$ is fully updated at each iteration.
Our new *subsampling online matrix factorization* algorithm () reduces the dimensionality of each of these steps, so that the single-iteration complexity in $p$ depends on $q = \frac{p}{r}$ rather than $p$. $r > 1$ is a *reduction factor* that is close to the computational speed-up per iteration in the large dimensional regime $p \gg k$. Formally, we randomly draw, at iteration $t$, a mask $\M_t$ that “selects” a random subset of $\x_t$. $\M_t$ is a $\RR^{p\times p}$ random diagonal matrix, such that each coefficient is a Bernoulli variable with parameter $\frac{1}{r}$, normalized to be $1$ in expectation. With this definition at hand, $\M_t \x_t$ constitutes an unbiased, low-dimensional estimator of $\x_t$: $
\EE[\Vert \M_t \x_t \Vert_0] = \frac{p}{r} = q$, and $
\EE[\M_t \x_t] = \x_t
$, with $\Vert \cdot \Vert_0$ counting the number of non-zero coefficients. Thus, $\frac{1}{r}$ is the average proportion of observed features at each iteration. We further define the pair of orthogonal projectors $\P_t \in \RR^{q \times p}$ and $\P_t^\perp \in \RR^{(p - q)\times p}$ that project $\RR^p$ onto $\mathrm{Im}(\M_t)$ and $\mathrm{Ker}(\M_t)$, which we will use for the dictionary update step.
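A minimal sketch of the mask drawing follows; the diagonal is kept as a dense vector for clarity, whereas an implementation would only store the index set of observed coordinates.

```python
import numpy as np

def draw_mask_diagonal(p, r, rng):
    """Diagonal of M_t: Bernoulli(1/r) entries rescaled by r, so that
    E[M_t x] = x while only about p/r coordinates are observed."""
    return r * rng.binomial(1, 1.0 / r, size=p).astype(float)

rng = np.random.RandomState(0)
m = draw_mask_diagonal(p=1000, r=4, rng=rng)
x = rng.randn(1000)
masked_x = m * x      # unbiased, sparse estimator of x
```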
In brief, , defined in Alg. \[alg:somf\], follows the outer loop of online matrix factorization, with the following major modifications at iteration $t$:
- it uses $\M_t \x_t$ and low-size statistics instead of $\x_t$ to estimate the code $\balpha_t$ and the surrogate $g_t$,
- it updates a subset of the dictionary $\P_t \D_{t-1}$ to reduce the surrogate value $\bar g_t(\D)$. Relevant parameters of $\bar g_t$ are computed using $\P_t \x_t$ and $\balpha_t$ only.
We describe in detail the new code computation and dictionary update steps. We then state convergence guarantees for . These are nontrivial to obtain, as the algorithm is not an exact (stochastic) majorization-minimization algorithm.
Input: initial iterate $\D_0$, weight sequences ${(w_t)}_{t>0}$, ${(\gamma_c)}_{c>0}$, sample set ${\{\x^{(i)}\}}_{i> 0}$, \# iterations $T$. For $t = 1, \dots, T$:

- Draw $\x_t = \x^{(i)}$ at random and $\M_t$ (see text).
- Update the regression parameters for sample $i$: $c^{(i)} \gets c^{(i)} + 1$, $\gamma \gets \gamma_{c^{(i)}}$ $$(\bbeta_t^{(i)}, \G_t^{(i)}) \gets (1 - \gamma) (\bbeta_{t-1}^{(i)}, \G_{t-1}^{(i)})
  + \gamma (\D_{t-1}^\top \M_t \x^{(i)}, \D_{t-1}^\top \M_t \D_{t-1}),\,
  ( \bbeta_t, \G_t) \gets (\bar \bbeta_t^{(i)}, \bar \G_t^{(i)})$$
- Compute the approximate code for $\x_t$: $\balpha_t \gets \argmin_{\balpha \in \RR^k}
  \frac{1}{2} \balpha^\top \G_t \balpha -
  \balpha^\top \bbeta_t + \lambda \, \Omega(\balpha).$
- Update the parameters of the aggregated surrogate $\bar g_t$: $$\label{eq:somf_partial}
  \bar \C_t \gets (1 - w_t) \bar \C_t + w_t \balpha_t \balpha_t^\top. \qquad
  \P_t \bar \B_t \gets (1 - w_t ) \P_t \bar \B_t + w_t \P_t \x_t \balpha_t^\top.$$
- Compute simultaneously (using [@mensch_dictionary_2016 Alg 2] for the first expression): $$\label{eq:somf_minimization}
  \P_t \D_t \gets \argmin_{\D^r \in \mathcal{C}^r}
  \frac{1}{2} \trace ({\D^r}^\top (\D^r \bar \C_t - \bar \B_t)),\:
  \P_t^\perp \bar \B_t \gets (1 - w_t ) \P_t^\perp \bar \B_{t-1} + w_t \P_t^\perp \x_t \balpha_t^\top.$$

Output: final iterate $\D_T$.
#### Code computation.
In the online algorithm, $\balpha_t$ is obtained solving the linear regression problem $$\label{eq:regression}
\balpha_t = \argmin_{\balpha \in \RR^k} \frac{1}{2} \balpha^\top \G_t^\star \balpha - \balpha^\top
\bbeta_t^\star + \lambda \Omega(\balpha),\quad\text{where}\quad
\G_t^\star = \D_{t-1}^\top \D_{t-1}\:\text{and}\:\bbeta_t^\star = \D_{t-1}^\top \x_t$$ For large $p$, computing $\G_t^\star$ and $\bbeta_t^\star$ dominates the complexity of the code computation step. To reduce this complexity, we introduce *estimators* for $\G_t$ and $\bbeta_t$, computable at a cost proportional to $q$, whose use does not break convergence. Recall that the sample $\x_t$ is drawn from a finite set of samples ${\{\x^{(i)}\}}_i$. We estimate $\G_t^\star$ and $\bbeta_t^\star$ from $\M_t \x_t$ and data from previous iterations $s < t$ for which $\x^{(i)}$ was drawn. Namely, we keep in memory $2n$ estimators, written ${( \G_t^{(i)}, \bbeta^{(i)}_t)}_{1\leq i \leq
n}$, observe the sample $i = i_t$ at iteration $t$ and use it to update the $i$-th estimators $\G_t^{(i)}$, $\bbeta^{(i)}_t$ following $$\bbeta_t^{(i)} = (1 - \gamma) \bbeta_{t-1}^{(i)} + \gamma \D_{t-1}^\top \M_t \x^{(i)},\qquad
\G_t^{(i)} = (1 - \gamma) \G_{t-1}^{(i)} + \gamma \D_{t-1}^\top \M_t \D_{t-1},$$ where $\gamma$ is a weight factor determined by the number of times sample $i$ has been previously observed up to time $t$. Precisely, given ${(\gamma_c)}_c$ a decreasing sequence of weights, $\gamma \triangleq \gamma_{c^{(i)}_t}$, where $
c^{(i)}_t =
| \lbrace s \leq t, \x_s = \x^{(i)} \rbrace |
$. All other estimators $\{\G^{(j)}_t, \bbeta^{(j)}_t\}_{j \neq i}$ are left unchanged from iteration $t-1$. The set ${\{ \G_t^{(i)}, \bbeta^{(i)}_t\}}_{1\leq i\leq n}$ is used to define the *averaged* estimators at iteration $t$, related to sample $i$: $$\label{eq:agg-estimates}
\G_t \triangleq \G_t^{(i)} = \sum_{s \leq t, \x_s = \x^{(i)}} \gamma_{s,t}^{(i)} \D_{s-1}^\top \M_s \D_{s-1},\quad
\bbeta_t \triangleq \bbeta_t^{(i)} = \sum_{\substack{s \leq t, \x_s = \x^{(i)}}} \gamma_{s,t}^{(i)} \D_{s-1}^\top \M_s \x^{(i)},$$ where $\gamma_{s,t}^{(i)} = \gamma_{c^{(i)}_t} \prod_{s < t, \x_s = \x^{(i)}} (1 -
\gamma_{c^{(i)}_s})$. Replacing $(\G_t^\star, \bbeta_t^\star)$ by $(\G_t, \bbeta_t)$ in , $\balpha_t$ minimizes the masked loss averaged over the previous iterations where sample $i$ appeared: $$\label{eq:approx-regression}
\min_{\balpha \in \RR^k} \sum_{s\leq t\\
\x_s = \x^{(i)}} \frac{\gamma_{s,t}^{(i)}}{2}
\Vert \M_s(\x^{(i)} - \D_{s-1}^\top \balpha) \Vert_2^2
+ \lambda \Omega( \balpha ).$$ The sequences ${(\G_t)}_t$ and ${(\bbeta_t)}_t$ are *consistent* estimations of ${(\G_t^\star)}_t$ and ${(\bbeta_t^\star)}_t$: consistency arises from the fact that a single sample $\x^{(i)}$ is observed with different masks along iterations. This was not the case in the algorithm proposed in [@mensch_dictionary_2016], which uses estimators that do not involve averaging over past data. Solving  is made closer and closer to solving , in a manner that ensures the correctness of the algorithm. Yet, computing the estimators is close to $r$ times cheaper than computing $\G_t^\star$ and $\bbeta_t^\star$ from , which speeds up the code computation step by a factor close to $r$. The weight sequences $(w_t)_t$ and $(\gamma_c)_c$ are selected appropriately to ensure convergence. For instance, we can set $w_t = \frac{1}{t^v}, \gamma_c = \frac{1}{c^{2.5 - 2v}}$, with $v \in (\frac{3}{4}, 1)$.
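The sketch below spells out the estimator update for the sample drawn at iteration $t$, using only the rows selected by the mask; `solve_code_from_gram` is a hypothetical routine that solves the penalized regression directly from the pair $(\G_t, \bbeta_t)$, and `r` is the mask normalization factor.

```python
import numpy as np

def update_estimators(G_i, beta_i, D, x_i, obs, gamma, r):
    """Running averages of D^T M x^(i) and D^T M D for the observed sample i.

    obs   : indices of the coordinates observed by the mask M_t
    gamma : weight gamma_{c^(i)}, decreasing with the number of observations
    r     : reduction factor (the non-zero mask entries equal r)
    """
    D_obs = D[obs, :]
    G_i = (1 - gamma) * G_i + gamma * r * (D_obs.T @ D_obs)
    beta_i = (1 - gamma) * beta_i + gamma * r * (D_obs.T @ x_i[obs])
    return G_i, beta_i

# Code step, solved from the low-dimensional statistics only:
# alpha_t = solve_code_from_gram(G_i, beta_i, lambd)
```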
#### Dictionary update.
In the original online algorithm, the whole dictionary $\D_{t-1}$ is updated at iteration $t$. To reduce the time complexity of this step, we add a “freezing” constraint to the minimization of the quadratic function . Every row $r$ of $\D$ that corresponds to an *unseen* row at iteration $t$ (such that $\M_t[r, r] = 0$) remains unchanged. $\D_t$ is thus obtained by solving $$\label{eq:dict-update-cons}
\D_t \in \argmin_{\substack{\D \in \mathcal{C}\\
\P_t^\perp \D = \P_t^\perp \D_{t-1}}}
\frac{1}{2} \trace (\D^\top \D \bar \C_t)
- \trace (\D^\top \bar \B_t),\,\text{with } \P_t\text{ orth. projector on }\mathrm{Im}(\M_t)$$ With elastic-net ball constraints, solving reduces to performing the partial dictionary update (Alg. \[alg:somf\]), with $\mathcal{C}^r = \{\D^r \in \RR^{k \times q}, \Vert {(\d^r)}^{(j)} \Vert \leq 1 -
\Vert \d^{(j)}_{t-1} \Vert + \Vert \P_t \d_{t-1}^{(j)} \Vert\}$. We perform this update using a single pass of projected block coordinate descent with blocks in the reduced space $\RR^{q}$. The dictionary update step is thus performed $r$ times faster than the original algorithm.
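The following sketch illustrates the partial dictionary update: a single pass of block coordinate descent over the atoms that touches only the rows selected by the mask and leaves the remaining rows frozen. For readability the constraint is the plain $\ell_2$ ball rather than the adjusted elastic-net ball $\mathcal{C}^r$ defined above.

```python
import numpy as np

def partial_dictionary_update(D, C, B, obs):
    """One pass of projected block coordinate descent on the observed rows.

    D   : (p, k) dictionary; rows outside `obs` stay frozen
    C   : (k, k) surrogate statistic  bar C_t
    B   : (p, k) surrogate statistic  bar B_t (only rows `obs` are needed)
    """
    for j in range(D.shape[1]):
        if C[j, j] <= 1e-12:
            continue
        frozen_sq = np.sum(D[:, j] ** 2) - np.sum(D[obs, j] ** 2)
        D[obs, j] -= (D[obs, :] @ C[:, j] - B[obs, j]) / C[j, j]
        # project the observed block so the full atom stays in the unit ball
        radius = np.sqrt(max(0.0, 1.0 - frozen_sq))
        norm = np.linalg.norm(D[obs, j])
        if norm > radius:
            D[obs, j] *= radius / max(norm, 1e-12)
    return D
```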
#### Surrogate computation
The gradient we use to solve  requires knowing only $\bar \C_t$ and $\P_t \bar \B_t$. We thus parallelize the partial update of the dictionary and the update of $\P_t^\perp \bar \B_t$, using a second thread. The update of $\P_t \bar \B_t$ is performed in the main thread at a cost proportional to $q$. As the parallel computation is dominated by the dictionary update, this is enough to effectively reduce the time spent computing $\bar \B_t$.
#### Convergence guarantees
All in all, the three steps whose complexity depends on $p$ in the original algorithm now depend on $q$, which speeds up a single iteration by a factor close to $r$. Yet the algorithm retains convergence guarantees. We assume that there exists $\nu$ such that for all $t > 0$, $\D_t^\top \D_t \succ \nu \I$ (met in practice or by adding a small ridge regularization to ), and make a technical data-independent hypothesis on the decay of $(w_t)_t$ and $(\gamma_c)_c$. Then the iterates converge in the same sense as in the original algorithm [@mairal_online_2010]: $\D_t \to \D_\infty \in \RR^{p\times k}$ and $\nabla \bar f(\D_\infty, \D - \D_\infty) \geq 0$ for any $\D \in \mathcal{C}$, where $\bar f$ is the empirical risk defined in .
We use the aggregated nature of $\bar g_t$ and the fact that we can obtain a *geometric* rate of convergence for a single pass of projected block coordinate descent [*e.g.*, see @richtarik_iteration_2014] to control the terms $\D_t - \D_{t-1}$ and $\bar g_t(\D_t) - \bar g_t(\D_t^*)$, where $\D_t^* =
\argmin_{\D \in \mathcal{C}} \bar g_t$. We obtain $\bar g_t(\D_t) - \bar
g_t(\D_{t-1}) = \mathcal{O}(w_t)$, a crucial result on estimate stability. Simultaneously, we show that the partial minimization yields the same result as the full minimization for $t \to \infty$, as $\theta_t - \theta_t^\star \to 0$. Using this result, with appropriate selection of $(w_t)_t$ and $(\gamma_c)_c$, the noise induced by the use of estimators in can be bounded in the derivations of [@mairal_online_2010]. We then write the difference - as the sum of a *lag* term and an empirical mean over ${\{\M_s\}}_{\x_s =\x_t}$. Both can be bounded with appropriate selection of weights. Formally, ${(\bar g_t)}_t$ are no longer upper bounds of ${(\bar f_t)}_t$, but become so for $t \to \infty$, at a sufficient rate to guarantee convergence.
Experiments
===========
#### Hyperspectral images.
We benchmark our algorithm by performing dictionary learning on a large hyperspectral image. Dictionary learning is indeed used on patches of hyperspectral images, as in [@maggioni_nonlocal_2013]. Extracting $16{\times}16$ patches from a 1 GB hyperspectral image from the AVIRIS project with $224$ channels yields samples of dimension $p = 57,000$. Figure 1 demonstrates that the newly proposed algorithm is faster than the original, non-subsampled algorithm from [@mairal_online_2010] by a factor close to $r = 4$. This speed-up is helped by the redundancy in the different channels of the hyperspectral patches. In the first epochs, the proposed method also outperforms the recent subsampled algorithm from [@mensch_dictionary_2016], thanks to the introduction of consistent estimators in the code computation step. All algorithms are implemented in Cython and benchmarked on two cores. They cycle over $100,\!000$ normalized samples with minibatches of size 50. The code for reproduction is available at [github.com/arthurmensch/modl](github.com/arthurmensch/modl).
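For concreteness, patch extraction of the kind described above can be done with a few lines of NumPy; the function below is an illustrative stand-in (non-overlapping patches, channels flattened together), not the exact extraction pipeline used for the benchmark.

```python
import numpy as np

def extract_patches(cube, size=16):
    """Flatten all non-overlapping size x size patches of a hyperspectral
    cube of shape (channels, H, W) into samples of dimension
    channels * size * size (for 224 channels this gives p = 57,344,
    the order of magnitude quoted above)."""
    c, H, W = cube.shape
    patches = [cube[:, i:i + size, j:j + size].ravel()
               for i in range(0, H - size + 1, size)
               for j in range(0, W - size + 1, size)]
    return np.asarray(patches)        # (n_samples, p)
```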
#### Figure 1
Performing dictionary learning on hyperspectral data (224 channels, $16\times16$ patches) is faster with stochastic subsampling, and even faster with the newly proposed variance control. \[fig:opt\]
![image](opt_bench.pdf){width="\textwidth"}
#### Conclusion
The new algorithm can efficiently factorize large and tall matrices. It preserves the speed gains of [@mensch_dictionary_2016] but has convergence guarantees that are beneficial in practice.
| {
"pile_set_name": "ArXiv"
} |
---
author:
- 'Michael Pfender[^1]'
date: |
April 2014\
last revised
title: 'A positive solution to Hilbert’s 10th problem'
---
Introduction {#introduction .unnumbered}
============
Within the theory $\ZFC^+ = \ZFC+\Con_{\ZFC}$ of Zermelo-Fraenkel **set** theory with the axiom of choice $\AC,$ strengthened by the formula $\Con_{\ZFC}$ which expresses $\ZFC$’s *internal,* *gödelised* consistency, we solve Hilbert’s 10th problem positively: we organise the decision of diophantine polynomial codes—decision on overall *non-nullity*—as an enumerative $\mu$-recursive race for a (first) zero (*counterexample*), against the race for a first internal $\ZFC$-proof of *non-nullity* for a given such polynomial code, given as the (nested) list of its coefficients. Comparison with Matiyasevich’s negative solution of Hilbert’s 10th problem gives inconsistency of the theory $\ZFC+\Con_{\ZFC},$ whence self-inconsistency $\ZFC \derives \neg \Con_{\ZFC}.$ In a final section we plug our positive solution of the problem into the constructive framework of *p.r. non-infinite descent theory* $\piR=\PR+(\pi)$ out of *Arithmetical Foundations* in the References.
This is to give a decision algorithm for each single diophantine equation (in a uniform way), as asked for in the original formulation of Hilbert’s 10th problem.
Hilbert’s 10th Problem
======================
We attempt a positive solution to Hilbert’s 10th problem. In its original form it reads:
10\. DETERMINATION OF THE SOLVABILITY OF A DIOPHANTINE EQUATION Given a diophantine equation with any number of unknown quantities and with rational integer numerical coefficients: *To devise a process according to which it can be determined by a finite number of operations whether the equation is solvable in rational integers.*
\[translation quoted from 1993.\]
Formally, this text allows for a separate decision algorithm (“process”) for each diophantine polynomial. But it is clear that a decision-*family* must be *uniform* in a suitable sense.
*Correctness* of our alleged $\mu$-recursive decision algorithm $\nabla_{\ZFC}: \mr{DIO} \parto \two=\set{0,1}$ builds, within $\ZFC^+,$ on diophantine soundness inferred from $\Con_{\ZFC}$ over $\ZFC.$ Termination follows from (countable) Choice, already within $\ZFC.$ Together this gives the wanted decision $\nabla=\nabla_{\ZFC},$ within $\ZFC^+,$ of all polynomial codes in $\mr{DIO}\subset\N.$
Comparison with Matiyasevich’s negative theorem *unsolving* Hilbert’s 10th problem, a theorem in particular of the (classically quantified arithmetical) theory $\ZFC^+,$ gives a contradiction within $\ZFC^+,$ hence [self-inconsistency]{} of $\ZFC,$ and from that in particular $\omega$-inconsistency.
In a final section we show correctness and irrefutable termination of *localised* decision $\nabla[D]$—for each single diophantine polynomial $D=D(\vec{x})$—within the constructive framework of p.r. *finite-descent-theory* $\piR=\piR+\Con_{\piR}$ out of op.cit.
Polynome coding and code evaluation
===================================
Diophantine polynomials $D = D(\vec{\xi}): \Z^* \to \Z$ (“in $\DIO$”) are LaTeX/**ASCII** coded into $$\mr{DIO} \defeq \Z^{\an{*}} \iso \union_{m\geq1} \Z[\xi_1,\ldots,\xi_m]
= \union_{m \geq 1} \Z[\xi_1][\xi_2] \ldots [\xi_m]$$ as nested coefficient *lists* $\Z^{\an{*}} \subset \N.$
$[\,$The symbols $\xi_i$ are the *indeterminates.*$\,]$
### Example: {#example .unnumbered}
$$\begin{aligned}
D = D(\xi_1,\xi_2)
& = (2\cdot\xi_1^{\,0}+3\cdot \xi_1^{\,1}- 4\cdot\xi_1^{\,3})\cdot\xi_2^{\,0} \\
& \quad
+(0\cdot\xi_1^{\,0}+3\cdot\xi_1^{\,1}-7\cdot\xi_1^{\,2})\cdot\xi_2^{\,1}
+ (1-4\cdot\xi_1)\cdot\xi_2^{\,2}\end{aligned}$$
is coded 1-1 as (nested) [coefficient list]{} $$\begin{aligned}
\ccode{D}
& = \an{\an{2;3;0;4};\an{0;3;-7};\an{0};\an{1;-4}}: \\
& \one \to \mr{DIO} \bydefeq \Z^{\an{*}} \subset \N: \\
& \text{\emph{defined element, point} of $\mr{DIO}$}\end{aligned}$$
### PR evaluation of $\mbf{DIO}$ codes: {#pr-evaluation-of-mbfdio-codes .unnumbered}
Evaluation $\mr{ev} = \mr{ev}(d,\vec{x}): \mr{DIO} \times \Z^* \to \Z$ is PR [defined]{} $$\begin{aligned}
& \mr{ev}(d,\an{\vec{x};x_{m+1}\,}) = \mr{ev}(d,\an{x_1;\ldots;x_m;x_{m+1}\,})\\
& \defeq \mr{ev}(\horner(d,x_{m+1}),\an{\,\vec{x}\,}): \\
& \mr{DIO} \times \Z^* \supset \Z[\vec\xi,\xi_{m+1}] \times \Z^{m+1}
\ovs{\iso} (\Z[\vec\xi] [\xi_{m+1}] \times (\Z^m \times \Z) \\
& \xto{\iso} (\Z[\vec\xi][\xi_{m+1}] \times \Z) \times \Z^m
\ovs{\horner \times \id} \Z[\vec\xi] \times \Z^m \xto{\ev} \Z,\end{aligned}$$ recursively by iterative application of Horner’s schema to the hitherto trailing argument, until all of the arguments (constants or variables) are substituted into their corresponding indeterminates $\xi_j.$
Result then is the integer $\mr{ev}(d,\vec{x}),$ constant or integer variable.
For the **example** above, $D = D(\xi_1,\xi_2),$ with argument string $\an{x_1;x_2} :\,= \an{23;64} \in \Z^*,$ we get $$\begin{aligned}
& \mr{ev}(d,\an{x_1;x_2})
= \mr{ev}(\an{\an{2;3;0;4};\an{0;3;-7};\an{0};\an{1;-4}},\an{23;64}) \\
& = \horner(\,
((((-4\cdot64+1)\cdot\xi_1+0))
\cdot\xi_1 +(-7\cdot64+3)\cdot64)\cdot\xi_1 \\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad
+ ((4\cdot64+0)\cdot64+3)\cdot64+2\,,23\,) \\
& = ((((-4\cdot64+1)\cdot23+0))\cdot23 +(-7\cdot64+3)\cdot64)\cdot23 \\
& \qquad\qquad\qquad\qquad\qquad\qquad
+ ((4\cdot64+0)\cdot64+3)\cdot64+2\end{aligned}$$ **First step:** apply Horner’s schema to the coefficient list $d \in \mr{DIO}$ and the (trailing) argument $x_2:$ the [indeterminate]{} $\xi_1$ is [coded]{} by [list nesting]{} and is seen as a *constant*, as an element of the intermediate ring $\Z[\xi_1]:$ $$\Z[\xi_1,\xi_2] \bydefeq \Z[\xi_1][\xi_2] \bydefeq (\Z[\xi_1])\,[\xi_2].$$ **Last—here second—step:** evaluation of the $\Z[\xi_1]$-polynomial in the remaining indeterminate $\xi_1$ on the remaining argument $x_1,$ by a last application of Horner’s schema.
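The nested-list evaluation can be made concrete by a small sketch. The function below (its name and the plain Python-list encoding are our illustrative choices, not the exact $\an{\ldots}$ coding above) evaluates the coefficients of the remaining indeterminates first and then applies Horner’s schema to the trailing argument, which is equivalent to the passage through the intermediate rings $\Z[\xi_1]$ described above.

```python
def ev(d, xs):
    """Evaluate a nested coefficient list d at the integer tuple xs.

    The outermost list level holds the coefficients of the powers of the
    trailing indeterminate; each coefficient is either an integer or a
    nested list in the remaining indeterminates.
    """
    if not isinstance(d, list):            # an integer constant
        return d
    *rest, x_last = xs
    # evaluate every coefficient in the remaining indeterminates ...
    vals = [ev(c, tuple(rest)) for c in d]
    # ... then apply Horner's schema to the trailing argument
    acc = 0
    for v in reversed(vals):
        acc = acc * x_last + v
    return acc

# e.g. D(x1, x2) = 1 + 2*x1 + 3*x1*x2 encoded as [[1, 2], [0, 3]]:
# ev([[1, 2], [0, 3]], (5, 7)) == 1 + 2*5 + 3*5*7 == 116
```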
Arithmetical frame theories
===========================
We consider here as frame theories—for our decision algorithm – **on one hand** classically quantified arithmetical theories $\T = \bfQ+\AC$ with (countable) axiom of choice, as in particular Zermelo-Fraenkel set theory $\T = \ZFC = \ZF+\AC.$ Frame then is the strengthening $$\T^+ = \T+\Con_\T = \ZFC+\Con_{\ZFC}$$ of $\T$ by its own consistency-*formula* $$\begin{aligned}
\Con_\T &=& \neg\,(\exists\,k \in \N)\,\Pro_\T(k,\code{\false}) \\
&=& (\forall k)\,\neg\,\Pro_\T(k,\code{\false})\ (\text{G\"odel}),\end{aligned}$$ see 1977 and op.cit. Strengthening by this consistency formula will provide for *correctness* of our *decision process* (Hilbert).
**On the other hand** we take as frame the Free-Variables (categorical) theory $\T = \PR = \PRa$ of *Primitive Recursion with predicate abstraction into subsets* $$(\chi = \chi(a): A \to \two)
\boldsymbol\mapsto \set{A:\chi}=\set{a\in A:\chi(a)}$$ out of op.cit., $\T = \bfS$ in Smorynski’s notation, as well as *descent theory* $\piR = \piR^+ = \piR+\Con_{\piR}:$ that theory is self-consistent, $\piR \derives \Con_{\piR},$ the main result of op.cit.
A $\mu$-recursive race for decision
===================================
We **define** an enumerative *race*—for $d \in \mr{DIO}$ thought *passive, fixed,* and $k \in \N$ *running*—for satisfaction of $$\begin{aligned}
& \ph_0(d,k) = [\,\ev(d,\ct_*k)=0\,]\ \text{against} \\
& \ph_1(d,k) = \Pro_{\T}(k,\code{(\vec{x})\ev(d,\vec{x})\neq 0}):
\mr{DIO} \times \N \to \two = \set{0,1}, \\
& \ct_* = \ct_*\,k: \N \ovs{\iso} \Zlist
\ \text{Cantor-type \emph{count},}\
\vec{x} \in \Z^*\ \text{free under code.}\end{aligned}$$
This race towards *termination* is defined as a—formally partial—$\mu$-recursive mapping as follows within the theory $\hatT$ of *partial PR maps,* i.e. of (partially defined) *$\mu$-recursive maps,* cf. again op.cit.: $$t = t(d) = \mu\set{k\,|\,\ph_0(d,k)\,\lor\,\ph_1(d,k)}:
\mr{DIO} \parto \N. \quad (*)$$
**Decision candidate** then is $$\begin{aligned}
\nabla d =
& \begin{cases}
0\ \myif\ \ph_0(d,t(d)) \\
1\ \myif\ \ph_1(d,t(d)) \\
\end{cases} \\
& \quad \\
=
& \begin{cases}
0\ \myif\ \ev(d,\ct_*(t(d)))=0 \\
\qquad (\emph{zero found}) \\
1\ \myif\ \Pro_{\T}(t(d),\code{\ev(d,\vec{x})\neq 0}) \\
\qquad (\text{\emph{internal proof} found
for \emph{global non nullity}})
\end{cases} \\
& : \mr{DIO} \overset{(\id,t)} {\parto} \mr{DIO} \times \N \to \two.\end{aligned}$$
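Purely to fix the shape of this definition (and not to prejudge termination, which is exactly what the following sections argue about), the race can be transcribed as a partial function. All names below are ours; the proof predicate $\ph_1$ is passed in as an abstract callable, since nothing here implements the internal proof relation $\Pro_\T$, and the enumeration of $\Z^*$ is one concrete choice standing in for the Cantor-type count $\ct_*$ (it repeats tuples, which is harmless for the search).

```python
from itertools import product

def tuples_of_integers():
    """Yield finite integer tuples so that every element of Z^* appears
    eventually; a concrete stand-in for the count ct_*."""
    n = 0
    while True:
        n += 1
        for length in range(1, n + 1):
            for xs in product(range(-n, n + 1), repeat=length):
                yield xs

def race(d, has_zero_at, proves_nonnullity):
    """The race t(d) together with the decision candidate nabla(d):
    returns (0, k) if a zero is found at stage k, (1, k) if an internal
    non-nullity proof is found at stage k, and never returns otherwise
    (the formally partial case).

    has_zero_at(d, xs)      -- stands for phi_0: ev(d, ct_* k) == 0
    proves_nonnullity(d, k) -- stands for phi_1, the goedelised proof
                               predicate; left abstract here
    """
    for k, xs in enumerate(tuples_of_integers()):
        if has_zero_at(d, xs):
            return 0, k
        if proves_nonnullity(d, k):
            return 1, k
```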
**Question:** Is $\nabla$ *well-defined* as a partial map? In which frame?
### Well-definedness of the decision within $\T^+ = \ZFC^+=\ZFC+\Con_{\ZFC} = \T+\Con_\T:$ {#well-definedness-of-the-decision-within-t-zfczfccon_zfc-tcon_t .unnumbered}
$$\begin{aligned}
\T^+\ \derives\
& \ph_0(d,k)\,\land\,\ph_1(d,k') \\
& \qquad (\text{\emph{cases-overlap}
{$\mathit{Assumption}$}}) \\
& \implies \ev(d,\ct_*k)=0 \\
& \quad\,\land\,\Pro_\T(k',\code{(\vec{x})\,\ev(d,\vec{x})\neq 0}) \\
& \implies \Pro_\T(j(k,k'),\code{\false}) \\
& \implies \neg\,\Con_\T \implies \false, \\
& j=j(k,k'): \N^2\to \N\ \text{suitable.}\end{aligned}$$
**Consequence:** $$\T^+\ \derives\ \neg\,[\ph_0(d,k)\,\land\,\ph_1(d,k')\,]:
\mr{DIO} \times \N^2 \to \two,$$ $\nabla = \nabla_{\T}(d): \mr{DIO} \parto \two$ is *well-defined* as a (*formally partial*) $\mu$-recursive map, within $\T^+ = \T+\Con_\T.$
### Well-definedness of decision within *descent* theory $\piR:$ {#well-definedness-of-decision-within-descent-theory-pir .unnumbered}
We consider now *descent theory* $\piR$ out of op.cit. strengthening $\PR$ by axiom $(\pi)$ of *non-infinite endo driven descending complexity with complexity values in polynomial semiring $\N[\omega],$* and its logical properties, in particular *soundness* giving $\piR\derives \Con_{\piR}.$
Decision $\nabla=\nabla_{\piR}(d):\mr{DIO}\parto \two$ is in fact well-defined as a partial PR map, within theory $\piR,$ since—in parallel to the above case $\T = \ZFC:$ $$\begin{aligned}
\piR\ \derives\
& \ph_0(d,k)\,\land\,\ph_1(d,k') \\
& \qquad (\text{\emph{cases-overlap}
{$\mathit{Assumption}$}}) \\
& \implies \ev(d,\ct_*k)=0 \\
& \quad\,\land\,\Pro_{\piR}(k',\code{(\vec{x})\,\ev(d,\vec{x})\neq 0}) \\
& \implies \Pro_{\piR}(j(k,k'),\code{\false}) \\
& \implies \text{``$\neg\,\Con_{\piR}$''} \implies \false, \\
& j=j(k,k'): \N^2\to \N\ \text{suitable.}\end{aligned}$$ The latter since $\piR\derives \Con_{\piR}.$
### Well-definedness of DIO-decision within $\PR$ itself {#well-definedness-of-dio-decision-within-pr-itself .unnumbered}
Decision $\nabla=\nabla_{\PR}(d):\mr{DIO}\parto \two$ is well-defined as a partial PR map, within theory $\hatPRa$ of partial PR maps since $$\begin{aligned}
\hatPRa\ \derives\
& \ph_0(d,k)\,\land\,\ph_1^{\DIO}(d,k') \\
& \qquad (\text{\emph{cases-overlap}
{$\mathit{Assumption}$}}) \\
& \Iff \ev(d,\ct_*k)=0 \\
& \quad \land\,\Pro_{\DIO}(k',\code{(\vec{x})\,\ev(d,\vec{x})\neq 0}) \\
& \implies \Pro_{\DIO}(j(k,k'),\code{\false}) \\
& \implies \false, \\
& j=j(k,k'): \N^2\to \N\ \text{suitable.}\end{aligned}$$ The latter by *diophantine soundness* of $\T=\PR,$ see 1977, <span style="font-variant:small-caps;">Theorem</span> **4.1.4**.
Decision Correctness
====================
**Decision Correctness, result-0-case:** $$\begin{aligned}
\T \derives\
& [\,\ph_0(d,t(d))
\implies \mr{ev}(d,\ct_*\,\circ\,t(d))=0\,] \\
& \subseteq\,\true_{\mr{DIO}}:
\mr{DIO} \overset{(\id,t)} {\parto} \mr{DIO} \times \N \to \two:\end{aligned}$$ **If** race-for-decision $\nabla$ *terminates* on DIO-code $d,$ with **result** $0,$ **then** (evaluation of) $d$ has (at least) one zero, namely $$\ct_*\,\circ\,t(d) \in \N.$$
### Correctness, result-1-case: {#correctness-result-1-case .unnumbered}
$$\begin{aligned}
\T \derives\
& \ph_1(d,k) \implies \Pro_{\DIO}(k,\code{\ev(d,\vec{x})\neq 0}) \\
& \implies \ev(d,\vec{x})\neq 0:
(\mr{DIO} \times \N) \times \Z^* \to \two, \\
& (d \in \mr{DIO},\ k \in \N,\ \vec{x} \in \Z^*\ \text{all free}), \\
& \quad
\text{or, with quantifier decoration:} \\
\T \derives\
& (\forall\,d \in \mr{DIO})(\forall\,k \in \N)
(\forall\, \vec{x} \in \Z^*) \\
& [\,\ph_1^{\T} (d,k) \implies \Pro_{\DIO}(k,\code{\ev(d,\vec{x})\neq 0}) \\
& \implies \ev(d,\vec{x})\neq 0\,].\end{aligned}$$
**If** race-for-decision $\nabla$ *terminates* on DIO-code $d,$ with **result** $1,$ **then** (evaluation of) $d$ has no zeroes.
This because of *Diophantine Soundness* of $\T,$ see 1977, <span style="font-variant:small-caps;">Theorem</span> **4.1.4** again.
### Correctness in result-1-case, under termination condition: {#correctness-in-result-1-case-under-termination-condition .unnumbered}
Substitution of $t(d)$ for $k$ in the above gives $$\begin{aligned}
\T^+,\piR,\PR\ \derives\
& [\,\ph_1^{\DIO} (d,t) \implies \ev(d,\vec{x})\neq 0\,]
\subseteq\,\true_{\mr{DIO} \times \Z^*},\\
& d \in \mr{DIO},\ \vec{x} \in \Z^*\ \text{both free}: \end{aligned}$$ *Correctness* of $\nabla(d)$ where defined, in *both* defined cases: in case of reaching **result** 0, as well as in case of reaching **result** 1.
Termination
===========
We show first
**Pointwise non-derivability of non-termination:**
For no diophantine *point* $d_0: \one \to \mr{DIO}$ does $\T$ derive non-termination of $t$ at $d_0.$
**Proof:** by contradiction: an appropriate $\ulj$ is available from $(\bullet)$ via derivation-to-$\Proof$-internalisation (*gödelisation*).
$[\,$For the time being we consider $\T$ as frame, not (yet) $\T^+=\T+\Con_\T.\,]$
For $\T = \bfQ$ quantified, with (countable) *axiom of choice* $\ACC,$ in particular $\bfQ = \PA+\ACC$ Peano Arithmetic with choice, we define the *undecided part* of $\mr{DIO}$ as $$\begin{aligned}
\Psi &=& \Psi^{\bfQ} \\
&=& \set{d \in \mr{DIO}:\forall\,k\ \mr{ev}(d,\ct_*\,k) \neq 0 \\
&& \,\land\,\forall\,k
\ \neg\,\Pro_{\bfQ}(k,\code{(\vec{x})\,\mr{ev}(d,\vec{x}) \neq 0)})} \\
&\subset& \mr{DIO} = \Zlist \subset \N. \end{aligned}$$ With this definition we get $$\begin{aligned}
\bfQ \derives\
& \Psi \neq \emptyset
\implies \choice_{\Psi}: \one \to \Psi \subset \N\ \emph{total} \\
& (\text{choice available by $\ACC:$ non-empty sets have
\emph{defined points}}) \\
& \implies \mu\set{d:t(d)\ \text{non-terminating}}:\one \to \Psi\ \emph{total.} \end{aligned}$$ This means: the [assumption]{} of (formal) *existence* of a $d \in \mr{DIO}$ for which decision race $t: \mr{DIO} \parto \N$ does *not* terminate, leads to a (*defined*) point $$d_0:\one \to \mr{DIO}$$ for which $t$ derivably does not terminate.
But this is **excluded** by pointwise non-derivability above of non-termination, within frame $\bfQ$ assumed consistent.
So we have shown $$\begin{aligned}
& \bfQ,\PA+\ACC \derives\ \Psi = \emptyset,\ \text{i.\,e.} \\
& \bfQ \derives\ (\forall d\in \mr{DIO})[\exists k\,\ev(d,\mr{ct}_*k)=0 \\
& \qquad\qquad
\lor \exists k\,\Pro_{\DIO}(k,\code{(\vec{x})\,\ev(d,\vec{x}}) \neq 0)],\end{aligned}$$ whence
**Termination Theorem:** $\bfQ,\ZFC,\PA+\ACC$ derive that the race $t$ terminates on all diophantine codes $d,$ i.e. on all $d \in \mr{DIO} = \Zlist.$
Correct termination of decision $\nabla$
========================================
**In particular** ($\bfQ^+ = \bfQ+\ACC$ stronger than $\bfQ$): $$\begin{aligned}
& \bfQ^+\ \textbf{derives} \\
& \quad
\text{overall termination of $\mu$-recursive} \\
& \quad
\text{termination race}\
t=t^{\bfQ}(d):\mr{DIO} \to \N: \\
& \bfQ^+\ \derives\
[\,(\forall\,d \in \mr{DIO})\ t(d) \in \N
\ \text{{\emph{defined}}}\,]\end{aligned}$$
**Hence,** by Decision Correctness within $\bfQ^+:$
$\bfQ^+$ [$\mathbf{derives}$]{}\
[overall *correct* termination]{} of $\mu$-recursive *decision*\
$\nabla: \mr{DIO} \to \two,$ **main result** here: $$\begin{aligned}
& \nabla(d) \\
& = \begin{cases}
0\ \myif\ \mr{ev}(d,\ct_*(t(d))) = 0 \\
\qquad
\ [ \ \implies d\ \emph{has}
\ \text{a zero}\ \vec{z} \in \Z^*\ ] \\
1\ \myif\ \Pro_{\DIO}
(t,\code{(\forall\,\vec{x})
\ \mr{ev}(d,\vec{x}) \neq 0}) \\
\qquad
\ [ \ \implies d\ \text{has}\ \emph{no}\ \text{zero}\ ]
\end{cases}: \mr{DIO} \to \two.\end{aligned}$$
Comparison with Matiyasevich’s\
negative result
===============================
The *main result* above says, in terms of the theory $\mathbf{TM}$ of TURING machines and by the established part of CHURCH’s thesis:
*For concrete diophantine polynomials* $D = D(\vec{x}): \Z^m \to \Z:$
For quantified arithmetical choice theories $\bfQ+\ACC$ like $\ZFC$ and already $\PA+\ACC,$
$\bfQ^+ = \bfQ+\Con_{\bfQ}$ [$\mathbf{derives}$]{}:
*The TURING machine ${{\mrTM}}_{\nabla_{\bfQ}}$ corresponding—CHURCH—to the totally defined $\mu$-recursive decision map $$\nabla_{\bfQ}: \mr{DIO} \to \set{0,1},$$ when the *coefficient list* $\ccode{D}$ of a diophantine polynomial $D$ is written on its (initial) TAPE, eventually **reaches** the *HALT state,* leaves result $0$ (as its *final* TAPE) $\mathbf{iff}$ $D$ *has* a zero\
$\vec{z}:$ $D(\vec{z}) = 0,$ and **result** $1$ $\mathbf{iff}$ $D$ is overall *non-null*:\
$(\forall\,\vec{x} \in \Z^*)\,[\,D(\vec{x}) \neq 0\,].$*
This contradicts **Matiyasevich’s** THEOREM *unsolving* Hilbert’s 10th problem, within theory $\bfQ^+$ which strengthens his framework of Peano Arithmetic $\PA+\ACC$ with countable axiom of choice. Whence
### Conclusion: {#conclusion .unnumbered}
- $\ZFC^+ = \ZFC+\ConZFC$ is contradictory, so
- $\ZFC\derives\neg\,\ConZFC:$ $\ZFC$ *is internally inconsistent,*
- same for theory $\PA+\ACC:$
*Peano-Arithmetic with axiom of countable choice is internally inconsistent*
- **Question:** is Peano Arithmetic $\PA$ by itself already internally inconsistent? It would be if the axiom $\ACC$ of countable choice were derivable within $\PA$ or independent of $\PA,$ as the axiom of choice $\AC$ is of **set** theory. This would mean that formal existential quantification is incompatible with free-variables Primitive Recursive Arithmetic $\PR.$
### Discussion {#discussion .unnumbered}
- After his talk at Humboldt University Berlin, I mailed Matiyasevich the question whether his *unsolving* of Hilbert’s 10th problem is really constructive: it depends heavily on formal existential quantification. No reply; maybe he will consider this question when the present paper is brought to his attention.
- I submitted the 200? version of the present work, claiming self-inconsistency $\PA\derives \neg\,\Con_{\PA},$ to the *Journal of Symbolic Logic.* The (anonymous) referee:
*... this is certainly false. ...* Robert ’Rob’ ed.: *under these circumstances etc.*
**What is such editorial policy good for?**
Hilbert 10 constructively
=========================
In this section we show that the *local* version $\nabla[D]: 1 \parto 2$ of the $\mu$-recursive *decision algorithm* $\nabla = \nabla_{\DIO}(d): \mathit{DIO} \parto \two$ *irrefutably* *decides* *each (single)* diophantine equation—*correctly*—when placed in p.r. *non-infinite-descent theory* $\piR=\PR+(\pi)$ of op.cit. in the References.
This will give a positive solution to Hilbert’s 10th problem in that constructive framework, at least when stated in its original form quoted in first section above.
Formally, this **problem** allows for solution by a separate decision algorithm (“process”) for each diophantine polynomial. By *localisation* at a given polynomial, we extract such a decision-*family* from the foregoing sections, and formalise it within $\piR.$
We index that family (externally) by the *diophantine constants* $\delta:\one \to \mr{DIO} \subset \N,$ among which the diophantine polynomials $$D = D(\vec{x}) = D(x_1, \ldots, x_{\bs{m}}): \Z^{\bs{m}} \to \Z$$ are represented by their coefficient list codes $\ccode{D}: \one \to \mr{DIO}.$
**Definition:** For PR predicates $\ph_0, \ph_1: A \times \N \to \two$ we define the *race winner predicate* $$\mu_{\lor}[\ph_0,\ph_1]: A \to \two$$ between $\ph_0$ and $\ph_1$ slightly asymmetrically by $$\begin{aligned}
&& \mu_{\lor} [\ph_0,\ph_1] = \mu_{\lor} [\ph_0,\ph_1](a)\\
&& \defeq
(\mathit{dc} \circ (\ph_0,\ph_1))
\parcirc (A \times \mu[\ph_0 \,\lor\, \ph_1]) \parcirc \Delta_A: \\
&& A \to A \times A \parto A \times \N \to \two \times \two
\overset{\mathit{dc}} {\longrightarrow} \two, \ \text{with} \\
&& \mathit{dc} = \mathit{dc}(u,v): \two \times \two \parto \two
\ \text{defined by} \\
&& \mathit{dc}(u,v) \defeq
\begin{cases}
0 \ \text{if}\ u = 1, \\
1 \ \text{if}\ u = 0\ \land\ v = 1, \\
\text{\emph{definably undefined} if}\ u = v = 0.
\end{cases}\end{aligned}$$ This (partial) race winner predicate $\mu_{\lor}[\ph_0,\ph_1](a): A \parto \two$ is characterised—within $\bs{S}=\PR$ as well as in $\bs{S}=\piR$—by $$\begin{aligned}
\bs{S} \derives\,
& [\,\ph_0 (a,n)
\,\land\, \underset{i<n} {\land}\,\neg\,\ph_1(a,i)
\implies \mu_{\lor}[\,\ph_0,\ph_1\,](a)=0\,] \\
& \land\, [\,\ph_1(a,n)
\,\land\,\underset{i \leq n}{\land}\,\neg\,\ph_0(a,i)
\implies \mu_{\lor}[\,\ph_0,\ph_1\,](a)=1\,]. \end{aligned}$$ We allow ourselves to write this intuitively—in classical terms of a (partial) case-distinction: $$\mu_{\lor}[\,\ph_0,\ph_1\,](a) =
\begin{cases}
0 \ \text{if}\ \mu\ph_0(a) < \infty
\,\land\,\mu\ph_0(a) \leq \mu\ph_1(a), \\
1 \ \text{if}\ \mu\ph_1(a) < \infty
\,\land\,\mu\ph_1(a) < \mu\ph_0(a).
\end{cases}$$ Our decision family $$\nabla[\delta]: 1 \parto \two,\ \delta: \one \to \mr{DIO} \subset \N$$ now is defined in the present $\mu$-recursive frame as this type of race winning, of PR search for a zero (in the evaluation) of $\delta$ against PR search for a (first) internal non-nullity *proof* for (the evaluation) of $\delta,$ namely by $$\begin{aligned}
\nabla[\delta]
& \defeq \mu_{\lor} [\ph_0[\delta],\ph_1[\delta]]: 1 \parto \two,
\ \text{with} \\
\ph_0[\delta](k)
& \defeq [\,\ev(\delta,\ct_*(k)) = 0\,]: \N \to \two, \\
\ph_1[\delta](k)
& \defeq \Pro_{\bs{S}}(k,\code{(\vec{x})\ev(\delta,\vec{x}) \neq 0}).\end{aligned}$$ Here $$\ev = \ev(d,x): \N \times \N \supset \mr{DIO} \times \Z^* \to \Z$$ is evaluation with the characteristic **evaluation property** $$\begin{aligned}
& \ev(\ccode{D},(x_1,\ldots,x_{\bs{m}}))
= D(x_1, \ldots, x_{\bs{m}}): \Z^{\bs{m}} \to \Z,\end{aligned}$$ realised by (iterated) application of Horner’s schema (each application reduces the number of remaining variables by 1), or by “brute force” evaluation of monomials.
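The asymmetric case distinction $\mathit{dc}$ above (a simultaneous hit is decided in favour of $\ph_0$) can be mirrored in a few lines; as before this is only an illustration of the definition, with both predicates supplied as abstract callables under names of our choosing.

```python
def mu_or(phi0, phi1, a):
    """Race winner predicate mu_or[phi0, phi1](a): 0 if phi0 fires first
    (ties included), 1 if phi1 fires strictly first, and no return value,
    i.e. non-termination, if neither predicate ever fires."""
    k = 0
    while True:
        if phi0(a, k):        # dc(1, v) = 0: phi0 wins, also on a tie
            return 0
        if phi1(a, k):        # dc(0, 1) = 1: phi1 wins strictly first
            return 1
        k += 1
```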
Decision Correctness
--------------------
**Soundness Recall:** Main result of op.cit. in the References is (logical) *soundness* of theory $\piR:$
- For a (p.r.) predicate $\chi = \chi(a): A \to \two$ we have $$\piR\,\derives\ \Pro_{\piR}(k,\code{\chi})
\implies \chi(a): \N \times A \to \two,$$ $a \in A$ free, meaning here *for all* $a\in A,$ and $k \in \N$ free, meaning here *exists* $k\in\N.$ This entails
- *$\PR$ soundness* of $\piR:$ For a p.r. predicate $\chi = \chi(a): A \to \two,$ $$\piR\,\derives\ \Pro_{\PR}(k,\code{\chi})
\implies \chi(a): \N \times A \to \two,$$ as well as in particular
- *Diophantine soundness* of $\piR:$ for a diophantine polynomial $D=D(\vec{x}):\Z^*\to \Z$ $$\piR\,\derives\ \Pro_{\piR}(k,\code{(\vec{x})D(\vec{x})\neq 0})
\implies D(\vec{x})\neq 0,$$ $k\in \N,\ \vec{x}\in \Z^*$ free.
- Already $\PR^+ = \PR+\Con_{\PR}$ is diophantine sound. This needs an extra Proof.
We consider here frame $\bs{S}=\piR,$ $$\piR^+ = \piR+\Con_{\piR} = \piR,$$ the latter by op.cit. equivalent to soundness of theory $\piR.$
Namely from PR Soundness we get the
**Local Correctness-Lemma** for $\nabla[\delta]$ in $\piR:$ The partial $\PR$-map $\nabla[\delta]:\one \parto \two$ has the following correctness properties:
$\piR \derives\ :$
- $\delta$ does not fall in *both* of the two defined-cases stated for $\nabla[\delta],$
- $\nabla[\delta] = 0
\implies \ev(\delta,\ct_{*}\circ \mu\ph_0 [\delta])=0:$ $\delta$ is implied to have available a zero in its *evaluation,*
- $\nabla[\delta] = 1 \implies \ev(\delta,\vec{x}) \neq_Z 0,$ $\vec{x}$ free in $\Z^*$: $\delta$ is implied to be evaluated globally non-null, in particular:
- By diophantine evaluation for $D = D(x_1, \dots x_{\bs{m}}): \Z^* \to \Z$ diophantine:
- $\nabla[D] := \nabla[\ccode{D}] = 0
\implies D(\ct_*(\mu\ph_0 [\ccode{D}]))=0:$
$D$ is implied to have a zero, as well as
- $\nabla[D]=1 \implies [\,D(\vec{x}) \neq 0\,],$ here again $\vec{x}$ free over $\Z^*:$
$D$ is implied to be globally non-null.
Decision Termination
--------------------
The final question to treat for this—canonical—family $$\nabla = \nabla_{\mbf{DIO}}[\delta]:\one \parto \two,
\ \delta: \one \to \mr{DIO} \subset \N$$ of *local*—$\mu$-recursive—decision algorithms, is *termination,* for each $\delta,$ in particular for $\delta = \ccode{D},$ $D = D(\vec{x})$ diophantine.
**Assume** $\nabla[d_0]$ *not* to terminate for a particular *constant* $d_0: \one \to \mr{DIO},$ in particular $d_0$ of form $D_0 = D_0(\vec{x}).$
Since we argue here purely *syntactically*—within the *theory* $\widehat{\bs{S}} \bs{\supset} \bs{S} \bs{=} \PR+(\mr{abstr})$ of *partial* p.r. maps—no modelling in mind except some primitive recursive *Meta*mathematics (these in turn gödelised within $\bs{S}$)—we discuss the stronger assumption
$\nabla[d_0]$ $\T$-*derivably* does *not* terminate for a given diophantine constant $d_0: \one \to \mathit{DIO},$ $\T$ an extension of $\bs{S}.$
This **assumption** reads: $$\T \derives\,(k)\psi[d_0](k):$$ here $k$ is free over $\N,$ and the PR predicate $\psi[d_0](k): \N \to \two$ is defined by $$\begin{aligned}
& \psi[d_0](k) = \psi_0[d_0](k) \land \psi_1[d_0](k)\ \text{with} \\
& \psi_0[d_0](k) = [\,\ev(d_0,\ct_*(k)) \neq 0 \,], \text{and} \\
& \psi_1[d_0](k) = \neg\,\Pro_{\T}(k,\code{\ev(d_0,\vec{x})\neq 0}).\end{aligned}$$ So the assumption (“of the contrary”) reads: $$\begin{aligned}
\T \derives\,
& [\,\ev(d_0,\ct_{*}(k) ) \neq 0 \,] \\
& \land \neg\,\Pro_{\T}(k,\code{(\vec{x})\ev(d_0,\vec{x}) \neq 0}).\end{aligned}$$ Here $k\in\N$ is the only free variable in the *accessible* level, $\vec{x}$ is free over $\Z^*,$ but *encapsulated* within gödelisation, *not visible* on the object language level.
The derivably-non-termination assumption $$\T \derives\, \psi[d_0](k),\ k\ \free,$$ would entail in particular (first conjunct $\psi_0[d_0]$): $$\T \derives\, \ev(d_0,\ct_*(k)) \neq 0:\N\to\two.$$
*Internalising* (*formalising*) this metamathematical statement, we (would) get by Proof-Internalisation— 1977—a *constant* $p_0: \one \to \mathit{Proof}_\T \subset \N$ *guilty* for this last statement: $$\T \derives\ \Pro_\T(p_0,\code{\ev(d_0,\vec{x}) \neq 0});$$ this would give, by definition of $\nabla[d_0]:$ $$\T \derives \nabla[d_0] = 1,$$ a contradiction to our assumption that $d_0$ be derivably *not decided* by $\nabla_{\DIO},$ to $\T \derives \psi[d_0].$
**Conclusion:**
- $\piR=\piR+\Con_{\piR}$ derives the alleged decision algorithm (family) $\nabla = \nabla_{\DIO}[D]:\one \parto \two$ to be *correct* for each diophantine polynomial (if defined).
- no diophantine polynomial $D=D(\vec{x})$ can come with a $\T$-proof (i.p. a $\piR$-proof) showing $\nabla[D]$ to be *undefined,* *not* to terminate, in other words:
- *correct termination* of the $\mu$-recursive *decision family* $\nabla = \nabla_{\DIO}[D]$ at each diophantine polynomial is $\piR$-*irrefutable,* in the sense that **otherwise**—refutation— $$\begin{aligned}
& \piR \derives \Pro_{\piR}(q,\code{\mr{false}}),
\ q: \one \to \N\ \text{a suitable PR point,} \end{aligned}$$ inconsistency of (self-consistent) theory $\piR$ would be the consequence.
Outlook {#outlook .unnumbered}
-------
Irrefutable correct termination of *uniform* decision algorithm $$\nabla_{\DIO}=\nabla_{\DIO}(d):\mr{DIO}\parto\two,\ d\in\mr{DIO}\ \free$$ is treated within the general framework of
*Arithmetical Decision* to come.
[99]{}
J. Barwise (ed.) 1977: *Handbook of Mathematical Logic.* North Holland.
K. Gödel 1931: Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. *Monatsh. der Mathematik und Physik* **38,** 173-198.
D. Hilbert 1970: Mathematische Probleme. Vortrag Paris 1900. *Gesammelte Abhandlungen.* Springer.
Y. Matiyasevich 1993: *Hilbert’s Tenth Problem*. The MIT Press.
M. Pfender 2014a: *Consistency Decision,* arXiv 2014.
M. Pfender 2014b: *Arithmetical Foundations,* $\gamma$ version, www3.tu-berlin.de/preprint/mathematik/Preprint-8-2014
C. Smoryński 1977: The Incompleteness Theorems. Part D.1 in Barwise (ed.) 1977.
[^1]: michael.pfender@alumni.tu-berlin.de
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Using a wave packet propagation approach, we find that the resonant charge transfer process of H$^-$ near a Cu(111) surface is strongly influenced by transient hybrid states. These states originate from an ion-induced confinement parallel to the surface together with the surface-localization character of the metal potential along the surface normal. The lowest members of these states have lifetimes of the order of interaction times in typical particle-surface scattering experiments. The propagation of the electron probability density provides clear evidence for this effect in visualizing the evolution and the decay of these transient states.'
author:
- 'Himadri S. Chakraborty'
- Thomas Niederhausen
- Uwe Thumm
title: Evidence for parallel confinement in resonant charge transfer of H$^-$ near metal surfaces
---
The investigation of electron transfer and orbital hybridization processes during the interaction of a projectile atom or ion with a metal surface is of both fundamental and practical importance. The ensuing knowledge finds valuable use in various applied fields of physics, such as, development of ion sources, control of ion-wall interactions in fusion plasma, surface chemistry and analysis, secondary ion mass spectroscopy, and reactive ion etching[@gauy96; @shao94]. Of basic interest is the detailed understanding of single-electron transfer leading to either ionization or neutralization of a surface-scattered projectile. This process of resonant charge transfer (RCT) has been addressed by employing different non-perturbative theoretical methods, including single-center basis-set-expansion[@bahr99], complex coordinates rotation[@nord92], two-center expansion[@thumm02-98], multi-center expansion techniques[@mart96], and the direct numerical integration of the effective single-electron Schrödinger equation by Crank-Nicholson wave packet propagation (CNP)[@bori98; @bori99; @press93; @thumm02].
Of all these methods, CNP is most flexible in the sense that it can readily be applied to any parametrized effective potential that may be used to represent the electronic structure of substrate and projectile. In contrast to expansion methods that usually simplify the target to a free-electron (jellium) metal, CNP allows for a significantly more detailed representation of the substrate electronic structure, including the effect of band gaps[@bori98; @guil99], surface states[@bori99], and image states on the RCT dynamics.
The Cu(111) surface is of particular interest since (a) the affinity level of H$^-$ lies within the $L$-band gap of the surface and (b) it serves as a prototype of a metal surface that can localize a surface state within its band gap. We show that for the H$^-$/Cu(111) system charge transfer is to a large extent channeled through transient hybrid states that are confined parallel to the surface by the combined influence of surface and projectile potentials. H$^-$ is described by an effective potential that models the interaction of the active electron with a polarizable core[@cohe86]. A one-dimensional \[in the coordinate $z$ along the surface normal\] single-electron effective potential, constructed from pseudopotential local density calculations, is employed to model the surface[@chul99]. This potential reproduces the observed and/or [*ab initio*]{} $L$-band gap position, surface state, and image states for zero electron momentum component parallel to the surface ($x$ direction). Note that the first image state lies in the band gap while higher ones are degenerate with the conduction band[@chul99]. We employ the CNP[@press93; @thumm02] of the initial free wave function over a two-dimensional numerical grid in which the metal continuum is approximated by free electronic motion in the $x$ direction. Our grid includes 100 layers in the bulk and extends to $z$ = 200 a.u. on the vacuum side. The topmost layer of lattice points defines $z$ = 0. The grid covers 200 a.u. in $x$. The grid spacings $\Delta z$ = $\Delta x$ = 0.2 a.u. yield good convergence.
For fixed ion-surface distances $D$, the numerical propagation over time $t$ yields $\Phi(t)$ and the ionic survival amplitude $A(t)=\langle{\mbox{$\Phi(t)$}}|{\mbox{$\phi_{\mbox{\scriptsize ion}}$}}\rangle$. The real part of the Fourier transform (FT) of this amplitude yields the projected density of states (PDOS) that exhibits resonance structures. The position, width, and amplitude of these resonances provide, respectively, the energy, lifetime, and population of the states. Contrary to the parametric fitting adopted in Ref., a direct FT of $A(t)$ is performed by propagating, in time-steps $\Delta t$ = 0.1 a.u., over a period long enough for acceptable convergence. Figure 1 depicts the PDOS (thick solid curve) for three typical values of $D$. Results neglecting the electronic motion parallel to the surface are obtained by propagating over a one-dimensional grid along $z$ and are shown (short-dashed curve) for comparison. Note that, although for this 1-D propagation, due to the absence of any decay continuum, $A(t)$ never fully converges, we still carry out the FT of $A(t)$ calculated over a finite time, since we are interested only in identifying resonances in the 1-D PDOS.
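For readers unfamiliar with the procedure, the following one-dimensional sketch shows the two numerical ingredients just described: Crank-Nicolson propagation of a wave packet and extraction of a PDOS from the Fourier transform of the survival amplitude. The Gaussian model potential, the initial state, and all grid parameters are placeholders chosen for brevity; they are not the Chulkov surface potential, the ionic orbital, or the two-dimensional grid actually used here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# grid and model parameters (atomic units); placeholder values only
N, dz, dt, nsteps = 600, 0.2, 0.1, 3000
z = (np.arange(N) - N // 2) * dz
V = -0.3 * np.exp(-(z / 15.0) ** 2)          # schematic attractive potential

# tridiagonal Hamiltonian  H = -(1/2) d^2/dz^2 + V(z)  (finite differences)
H = (np.diag(1.0 / dz**2 + V)
     + np.diag(-0.5 / dz**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dz**2 * np.ones(N - 1), -1))

# Crank-Nicolson step:  (1 + i H dt/2) psi_new = (1 - i H dt/2) psi_old
A_minus = np.eye(N) - 0.5j * dt * H
lu = lu_factor(np.eye(N) + 0.5j * dt * H)

# initial state: a normalized Gaussian standing in for the ionic orbital
psi = np.exp(-((z - 10.0) / 3.0) ** 2).astype(complex)
psi /= np.linalg.norm(psi)
phi_ion = psi.copy()

surv = np.empty(nsteps, dtype=complex)       # survival amplitude A(t)
for n in range(nsteps):
    surv[n] = np.vdot(phi_ion, psi)
    psi = lu_solve(lu, A_minus @ psi)

# PDOS: real part of the Fourier transform of A(t) over an energy window
t = dt * np.arange(nsteps)
E = np.linspace(-0.4, 0.1, 300)
pdos = np.array([np.real(np.sum(surv * np.exp(1j * e * t))) * dt for e in E])
```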
At $D$ = 11 a.u. \[Fig1(a)\], our results with or without the parallel motion included show discretized structures corresponding to the valence and conduction band. The affinity level resonance (at $-$1.56 eV) and the surface state resonance (at $-$5.31 eV) are also present in both calculations, although the affinity level is shifted downward from the unperturbed asymptotic affinity of $-$0.76 eV. Strikingly, two small peaks appear just above the surface state resonance for the results that include the parallel motion. For $D$ = 5 a.u. \[Fig.1(b)\], the affinity level and the surface state resonance are present in both results, with and without parallel motion, [*but*]{} the structures in between for the calculation that incorporates electronic parallel motion increase in number and strength. Clearly, these new resonances appear only when the electronic parallel degree of freedom is switched on. For a jellium Cu surface the PDOS \[Fig.1(b), long-dashed curve\] only shows a wide affinity level peak, as expected, thus indicating that the resonances are due to details in the surface band structure that are not accounted for in the simplistic jellium model. For a more complete portrayal of the origin of these features we also show the PDOS \[Fig.1(b), thin solid curve\], including the parallel motion, for Cu(100), which has a similar band gap in the direction normal to the surface but, contrary to Cu(111), [*no*]{} surface state inside the gap[@chul99]. This shows valence band structures up to $-$3.1 eV and the affinity level at $-$1 eV [*but*]{} no significant feature in between. Evidently, the extra features in the Cu(111) PDOS between the surface state and the affinity level resonances [*must*]{} be originating both from the parallel degree of freedom of the electron and the special localizing property of the Cu(111) potential along the surface normal that binds a surface state inside the band gap. For very close ion-surface separation, $D$ = 1 a.u., these features almost disappear, while two similar resonances superimposed on the conduction band spearhead, at $-$0.67 and $-$0.35 eV \[Fig.1(c)\], above the affinity level. These resonances, in analogy with the ones below the affinity level, also originate from the electron parallel motion and the weak binding of the long-range tail of the surface potential.
The origin and evolution of this effect with decreasing $D$ can be understood as follows. As moves towards the surface, the ion potential gradually deepens following, at large $D$, the classical image interaction. Consequently, the spherical symmetry of the ion potential gets broken by the surface potential “slope”, which becomes steepest in the vicinity of metal-vacuum interface. Sufficiently close to the surface, the parallelly-stretched asymmetric top of the ion potential confines a new state. Although this state is bound in the parallel direction, whether or not it will live long will depend on how much binding it experiences in the direction normal to the surface. Indeed, for Cu(111) the surface potential has enough reflectivity to enable the formation of a localized surface state within the band gap. As a consequence, the new state, confined parallelly by the incoming ion, is also (temporarily) bound in the normal direction by the Cu(111) potential. This state is relatively long-lived and appears as a fairly narrow peak in the PDOS spectrum \[Fig1(a)\]. As the ion moves closer to the surface, the number of states confined parallelly increases and additional peaks emerge in the PDOS \[Fig.1(b)\]. For the /Cu(100) system, in contrast, since the surface potential lacks sufficient surface-localizing reflectivity (it fails to support a surface state within the band gap), the states, confined parallelly by the ion, decay rapidly into the metal valence band, as indicated by the broad bump in the valence band of Cu(100) \[Fig.1(b)\]. Therefore, these new resonances near Cu(111) are parallelly confined hybrids.
The surface state is not bound in the parallel direction. A resonance state that energetically lies above the surface state while being confined parallelly has to be less bound in the normal direction than the surface state. A reduction of the normal binding can be achieved by sliding up the potential at the bulk-vacuum interface, that is, by moving the mean position of the wave function in normal direction towards the ion. This increases the overlap between the ionic wave function and that of the confined state. Consequently, for a given ion-surface separation, we expect the lowest parallelly confined state to be populated first because of its strongest wave function overlap with the ion. This is seen in Fig.2, which depicts the propagated wave packet probability density $|{\mbox{$\Phi(t)$}}|^2$ at $t$ = 30, 70, 120 and 165 a.u. for fixed $D$ = 5 a.u. Figure 2(a) ($t$ = 30 a.u.) shows an approximately nodeless structure outside the surface plane implying that over a short propagation time the ion populates predominantly the lowest parallelly confined state. However, at later times and increasing population of higher parallelly confined states, the wave packet spreads along the parallel co-ordinate forming additional nodal structures. Notably, since the parallel component of the wave packet outside the surface is a time-dependent linear combination of parallelly confined stationary states with [*different*]{} nodal structures, the position of a node moves with time. The ripples seen along the normal direction inside the bulk are due to the periodic bulk potential and a faint blob beyond $z \approx$ 20 a.u. on the vacuum side \[Fig.2(b-d)\] represents the evolution of weakly populated image states.
Fig.3 shows the energy and the width of various resonances as a function of $D$. Our results for affinity level and surface state resonances are qualitatively similar to previous calculations[@bori99]. At large distances, the energy \[Fig.3(a)\] of the affinity level resonance (filled circles) is solely governed by image interactions leading to a good agreement with corresponding jellium results (opaque circles). In Fig.3(b) on the other hand, affinity level resonance widths for Cu(111) are smaller than the jellium predictions at large distances, since in the jellium case no band gap exists and electrons can decay in the normal direction. The strong interaction, seen in the distance-dependent widths and energies between the affinity level resonance and the surface state resonance (filled squares) for small $D$, is the consequence of an indirect coupling between the corresponding discrete quasi-stationary states through the surface state continuum[@bori99]. Interestingly, both energy and width of the parallelly confined resonances depend only weakly on $D$ (Fig.3). We explain this near-stabilization by couplings of a given parallelly confined state with both affinity and surface state that have comparable strength. These couplings result in opposite level shifts and comparable rates (widths) for transitions between the affinity and parallelly confined state and between the parallelly confined state and the surface state. The same argument explains the stabilization of resonances just above the ionic resonance for very close $D$ where the states are interacting with the ion and the conduction band. Furthermore, as discussed before, the state with maximum binding in the parallel direction has the strongest overlap with the affinity level and the weakest with the surface state. As a result, while it is “fed” by the ion the most, it decays through the surface state continuum the least acquiring a narrow width \[Fig.3(b)\]. The counter-argument explains the large widths for minimally confined states in the parallel direction.
During the approach to the surface the projectile gradually decelerates in the normal direction along its incoming trajectory, owing to the repulsive interaction between its neutral core and surface atoms, until its normal velocity becomes zero at the point of closest approach. For specular reflection, it re-gains its original normal velocity. For a given initial kinetic energy and angle of incidence, we simulate the classical ion-trajectory by modeling the core-surface interaction via a plane-averaged interatomic potential[@bier82]. This defines a distance of closest approach as a function of the initial normal velocity. Since the ion moves slowly near the surface, the adiabatic (fixed-ion) results (Figs.1-3) provide a good guideline to understand the calculations for a moving ion. In Fig.4, we present four wave packet probability densities, a pair each from the incoming and the outgoing part of the trajectory of H$^-$ ions with 50 eV asymptotic energy at 60$^o$ incidence with respect to the surface. In Fig.4(a), $D$ = 10.5 a.u., the ion predominantly populates the state confined most strongly in the parallel direction. Reaching $D$ = 2.76 a.u., Fig.4(b), the wave packet spreads over all available parallelly confined states, and clear nodal structures emerge symmetrically along the parallel direction outside the surface with each “bead” emanating a jet into the bulk. Electrons in the central jet have small parallel velocity indicating their ejection from the most tightly confined state. A steady increase of the parallel velocity is evidenced going symmetrically away from the center in parallel direction since the more distant jets originate from less strongly confined states. In Fig.4(c), the ion arrives roughly at the distance of closest approach, 0.5 a.u., where the adiabatic energy position of the ionic resonance moves very close to the conduction band \[Fig.3(a)\] and induces new resonances above the ionic level. Again, the node formation and resulting jets are seen, although the shape of the wave packet density is now dominated by a strong decay into the conduction band as well as by the subsequent population of image states, degenerate with the conduction band, on the vacuum side of the projectile. A remarkable signature of the parallel confinement is finally seen on the outward excursion of the ion in Fig.4(d) at $D$ = 6.59 a.u.: the entrapment of the electron back in parallelly confined states results in decay-jets into the bulk and subsequent re-ionization of the projectile (note the strong trapping at the ion position). As an observable consequence of the strong participation of parallelly confined states in the decay near Cu(111), we find about 6% ion survival after the scattering of H$^-$ from this surface as opposed to about 2% from Cu(100), which is free from this effect. A detailed comparative study will be published elsewhere[@chak03].
In conclusion, we demonstrate significant parallel confinement effects in the resonant neutralization of H$^-$ near Cu(111) by directly analyzing the evolution of the active electron’s wave packet probability density. A surface-induced breakdown of the ionic spherical symmetry together with the significant reflectivity of the surface potential is responsible for this confinement. Finally, there is nothing special about Cu(111): any surface supporting a surface state in the $L$-band gap, e.g. Ag(111), Au(111), or Pd(111), is expected to show similar parallel confinement phenomena during the RCT process.
This work is supported by the NSF (grant PHY-0071035) and the Division of Chemical Sciences, Office of Basic Energy Sciences, Office of Energy Research, US DoE.
[99]{} J.P Gauyacq et al., in [*Formation/Destruction of Negative Ions in Heavy Particle-Surface Collisions*]{}, edited by V. Esaulov (Negative Ions, Cambridge University Press, 1996). H. Shao et al., in [*Low Energy Ion-Surface Interactions*]{}, edited by J.W. Rabalais (Wiley, New York, 1994) p. 118; J.J.C. Geerlings and J. Los, Phys. Rep. [**190**]{}, 133 (1990). B. Bahrim et al., Surf. Sci. [**431**]{} 193 (1999). P. Nordlander, Phys. Rev. B [**46**]{}, 2584 (1992). B. Bahrim and U. Thumm, Surf. Sci. [**521**]{} 84 (2002); P. Kürpick and U. Thumm, Phys. Rev. A [**58**]{}, 2174 (1998). F. Martin and M.F. Politis, Surf. Sci. [**356**]{}, 247 (1996). A.G. Borisov et al., Phys. Rev. Lett. [**80**]{}, 1996 (1998). A.G. Borisov et al., Phys. Rev. B [**59**]{}, 10935 (1999). W.H. Press et al., [*Numerical Recipes in FORTRAN*]{} (Cambridge University Press, Cambridge, 1993). U. Thumm, in [*Book of Invited Papers, XXII International Conference on Photonic, Electronic, and Atomic Collisions, Santa Fe, NM*]{}, edited by S. Datz et al. (Rinton Press, 2002) p. 592. L. Guillemot and V.A. Esaulov, Phys. Rev. Lett. [**82**]{}, 4552 (1999). J.S. Cohen and G. Fiorentini, Phys. Rev. A [**33**]{}, 1590 (1986). E.V. Chulkov et al., Surf. Sci. [**437**]{}, 330 (1999). J.P. Biersack and J.F. Ziegler, Nucl. Instrum. Meth. [**194**]{}, 93 (1982); J. Ducrée et al., Phys. Rev. A [**60**]{}, 3029 (1999). H.S. Chakraborty, T. Niederhausen, and U. Thumm, [*to be published*]{}.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
Theories in physics usually do not address “the present” or “the now”. However, they usually have a precise notion of an “instant” (or state). I review how this notion appears in relational point mechanics and how it suffices to determine durations - a fact that is often ignored in modern presentations of analytical dynamics. An analogous discussion is attempted for General Relativity. Finally we critically remark on the difference between relationalism in point mechanics and field theory and the problematic foundational dependencies between fields and spacetime.
This contribution is based on a talk delivered at the workshop *The Forgotten Present. A Quest for a Richer Concept of Time*, held at the Parmenides Foundation at Munich-Pullach from April 29th - May 2nd 2010. This written up version will be published in the Volume *The Forgotten Present*, edited by Thomas Filk and Albrecht von Müller, to be published by Springer Verlag 2013.
author:
- |
Domenico Giulini\
Institute for Theoretical Physics\
Riemann Center for Geometry and Physics\
Leibniz University Hannover\
Appelstrasse 2, D-30167 Hannover, Germany\
and\
Center for Applied Space Technology and Microgravity\
University of Bremen\
Am Fallturm, D-28359 Bremen, Germany
bibliography:
- 'RELATIVITY.bib'
- 'HIST-PHIL-SCI.bib'
- 'MATH.bib'
- 'QM.bib'
- 'COSMOLOGY.bib'
title: |
Instants in physics\
– point mechanics and general relativity –
---
Introduction
============
All known fundamental physical laws are of *dynamical* type. Without exception, they are all required to provide answers for *initial-value problems*. This means the following: If we specify the state of a physical system the laws allow us to deduce further states that are usually interpreted as lying to the future, or past, or both, of the initially given one. Except for General Relativity, this is formally achieved by labelling the states by an external parameter $t$ that - without further justification - is interpreted as “time” (whatever this means). In this contribution I wish to point out that this parameter may be eliminated and that measures of duration can be read off the sequence of states obtained from the dynamical laws.
In the traditional formulation, an initial-value problem is said to be *well posed* if and only if the determination of the future (and possibly past) states is unique, and continuously dependent on the initial state. The last condition means that if we sufficiently restrict the variation of the initial state we can let the evolution vary less than any given bound. These conditions are not only satisfied in Newtonian mechanics, which serves as a paradigmatic example in this respect, but also in the mathematically and conceptually most complicated theories, like Einstein’s theory of General Relativity. Albert Einstein, as well as David Hilbert, wrote down the field equation of General Relativity in November 1915. But only in the late 1950s did mathematicians succeed in proving that it indeed allowed for *well posed initial-value problems*. Had this turned out to be false, it would possibly have led physicists to abandon General Relativity, despite all its other convincing features. To allow for a well posed initial-value problem is presumably the single most important sanity check for any candidate fundamental dynamical law in physics.
This is not restricted to classical laws and classical determinism. The fundamental dynamical law in Quantum Mechanics, Schrödinger’s equation, also allows for well posed initial-value problems. The quantum-mechanical state evolves according to this equation just as deterministically and continuously as the state in Newtonian mechanics does according to Newton’s or Hamilton’s equations. The typical quantum-mechanical indeterminacy that distinguishes it so drastically from classical mechanics does not concern the evolution of states; it concerns the relation of states to observable features of the system under consideration. But this shall not be the issue we address here. Therefore we will restrict attention to classical (i.e. non-quantum) laws. Our concern is the problem of how to characterise, in a physically meaningful way, data that suffice to determine the evolution and how to find a measure of duration merely from that data.
Newtonian Mechanics
===================
Newton’s famous second law is written in standard modern text-book language as $$\label{eq:NewtonThirdLaw}
m\ddot{\vec x}=\vec F\bigl(t,{\vec x}(t),\dot{\vec x}(t)\bigr)\,.$$ In this form it is meant to apply to an idealised object called *mass point*. This may be thought of as an extensionless object (“point”) of position $\vec x$ and mass value $m$. A single overdot denotes the derivative with respect to the parameter (“time”) $t$ (i.e. the rate of change of the dotted quantity) and a double overdot the second “time” derivative. Finally, the right-hand side denotes the force, $\vec F$, which in the case of just one particle is supposed to be externally specified and possibly dependent on $t$, the instantaneous position $\vec x(t)$ of the particle and its instantaneous velocity $\dot{\vec x}(t)$. Given the function $\vec F$, Newton’s equation has a unique solution once the initial position and initial velocity of the particle are specified. The solution is the function $t\rightarrow\vec x(t)$ that assigns a unique position $\vec x$ in space to each value $t$. That is the standard text-book presentation, except that $t$ is from the start always referred to as time (Newtonian time).
Equation (\[eq:NewtonThirdLaw\]) tells us that an initial datum that suffices to predict the future is the position and velocity *at the initial reading of time*. The initial reading of time is a particular value of the parameter $t$ that *represents* time, namely that value that represents the initial moment. This is achieved via a *clock*. A clock is another physical system that also obeys an equation of the form (\[eq:NewtonThirdLaw\]) for the pointer variable $p$ as a function of the parameter $t$. Whereas $t$ is not directly observable, $p$ is. Given $p(t)$ we may invert this relation and express $t$ as a function of $p$. This is possible if $p$ is strictly monotonic in $t$. Systems for which this is not the case would not count as clocks. We then eliminate $t$ in $\vec x(t)$ in favour of $p$ and obtain a function $\vec x(p)$. This function expresses a relation between the clock’s pointer position $p$ and the particle’s position $\vec x$. That relation is observable because $p$ *as well as* $\vec x$ are observable. This is in contrast to $\vec x(t)$ where $t$ is not observable. The elusive “initial time” is then that reading of $p$ at which we release the particle. This, in essence, is the idea of *ephemeris time* [@Clemence:1948].
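The elimination of the unobservable parameter $t$ in favour of an observable pointer reading can be spelled out in a few lines; the particular choices below (a particle falling from rest as the system, a smoothly advancing pointer as the clock) are of course arbitrary illustrative stand-ins.

```python
import numpy as np

# the unobservable evolution parameter t, sampled densely
t = np.linspace(0.0, 2.0, 2001)

# "system": a particle falling from rest, x(t) = x0 - g t^2 / 2
x = 10.0 - 0.5 * 9.81 * t**2

# "clock": a pointer variable p(t), strictly monotonic in t, hence invertible
p = 0.3 * t + 0.05 * np.sin(t)        # dp/dt = 0.3 + 0.05 cos(t) > 0

# eliminate t: tabulate the observable relation x(p) by inverting p(t)
p_grid = np.linspace(p[0], p[-1], 200)
x_of_p = np.interp(p_grid, p, x)      # np.interp requires p increasing
```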
But what happens if there is no obvious way to single out a system as a “clock”? For example, imagine we are given $n+1$ (we say $n+1$ rather than $n$ for later notational convenience) mass points moving about under the action of their own pairwise gravitational attraction. No “clocks” or background reference systems against which the motions of the particles could be measured are given to us. The only things we can measure are the $\tfrac{1}{2}n(n+1)$ instantaneous relative distances between pairs of points. Could we still ascertain the validity of Newton’s laws of mechanics? This is a relevant question since the situation depicted is basically just the one astronomers have to face. And yet it took almost 200 years from the writing of Newton’s Principia until physicists and mathematicians first answered this question (of which Newton was fully aware) with sufficient clarity.
The basic question that needs to be answered is how we can construct Newton’s absolute space and time from observations of relational quantities alone, for it is only with respect to special spatial reference frames and special measures of time that Newton’s equations are valid. Following Ludwig Lange (1863-1936) [@Lange:1885], these spatial reference frames are called *inertial systems* and the special measures of time *inertial timescales*. In that work Lange showed how to characterise the inertial system and timescale by continuously monitoring the motion of three force-free particles. We shall not discuss Lange’s argument here, which has been reviewed elsewhere [@Giulini:2002b]. Rather, we focus on an alternative approach initiated a year earlier by James Thomson (1822-1892), the elder brother of William Thomson (1824-1907) \[better known as Lord Kelvin\], who in 1884 wrote the following [@Thomson:1884]:
> “The point of space that was occupied by the centre of the ball at any specified past moment is utterly lost to us as soon as that moment is past, or as soon as the centre has moved out of that point, having left no trace recognisable by us of its past place in the universe of space. There is then an essential difficulty as to our forming a distinct conception either of rest or of rectilinear motion through unmarked space. \[...\] We have besides no preliminary knowledge of any principle of chronometry, and for this additional reason we are under an essential preliminary difficulty as to attaching any clear meaning to the words *uniform rectilinear motion* as commonly employed, the uniformity being that of equality of spaces passed over in equal times.”
This was immediately rephrased into a mathematical problem by Peter Guthrie Tait (1831-1901) [@Tait:1884]:
> “A set of points move, Galilei wise, with reference to a system of co-ordinate axes; which may, itself, have any motion whatever. From observation of the positions of the points, merely, to find such co-ordinate axes.”
This is precisely the problem we set above in the simpler case of *free* point particles. So suppose we are given some number of point particles that move about freely, i.e. there is no mutual attraction or repulsion due to any force, and suppose this motion does obey Newton’s laws with reference to some unknown inertial reference system and inertial timescale. How can we reconstruct these by merely observing the relative distances of the points? How many points and how many snapshots do we need to accomplish that?
Reconstructing Absolute Space and Time {#reconstructing-absolute-space-and-time .unnumbered}
--------------------------------------
Tait’s answer to the above question, given in the same paper [@Tait:1884], is as follows: We wish to reconstruct the inertial system and timescale from an unordered *finite* number of snapshots (“instances”) of instantaneous relative spatial configurations. For this we consider $n+1$ mass-points $P_i$ ($0\leq i\leq n$) moving inertially, i.e. without internal and external forces, in flat space. Their trajectories are represented by $n+1$ functions $t\mapsto\vec x_i(t)$ with respect to some, yet unspecified, spatial reference frame and timescale. The only directly measurable quantities at this point are the $n(n+1)/2$ instantaneous mutual separations of the particles. We now proceed in the following nine elementary steps:
1. The instantaneous mutual separations are given by $n(n+1)/2$ positive real numbers per label $t$. This is equivalent to giving their squares: $$\label{eq:TaitsSol1}
R_{ij}:=\Vert\vec x_i-\vec x_j\Vert^2\qquad
\mathrm{for}\quad 0\leq i<j\leq n\,.$$
2. The knowledge of the $n(n+1)/2$ squared distances, $R_{ij}$, is, in turn, equivalent to the $n(n+1)/2$ inner products $$\label{eq:TaitsSol2}
Q_{ij}:=(\vec x_i-\vec x_0)\cdot(\vec x_j-\vec x_0)\qquad
\mathrm{for}\quad 1\leq i\leq j\leq n\,,$$ as one sees by expressing one set in terms of the other by the simple linear relations (no summation over repeated indices here):
\[eq:TaitsSol3\] $$\begin{aligned}
{4}
\label{eq:TaitsSol3a}
& R_{ij}\,&&=\,Q_{ii}+Q_{jj}-2Q_{ij}\qquad
&&\mathrm{for}\quad && 1\leq i<j\leq n\,,\\
\label{eq:TaitsSol3b}
&R_{i0}\,&&=\,Q_{ii}\qquad
&&\mathrm{for}\quad && 1\leq i\leq n\,,\\
\label{eq:TaitsSol3c}
&Q_{ij}\,&&=\,\tfrac{1}{2}\bigl(R_{i0}+R_{j0}-R_{ij}\bigr)\qquad
&&\mathrm{for}\quad && 1\leq i\leq j\leq n\,.\end{aligned}$$
3. We now seek an inertial system and an inertial timescale, with respect to which all particles move uniformly on straight lines. Correspondingly, we assume $$\label{eq:TaitsSol4}
\vec x_i(t)=\vec a_i+\vec v_i t\qquad
\mathrm{for}\quad 0\leq i\leq n$$ hold for some *time-independent* vectors $\vec a_i$ and $\vec v_i$.
4. The 11-parameter redundancy by which such inertial systems and timescales are defined is given by
- spatial translations: $\vec x\mapsto\vec x+\vec a$, $\vec a\in{\mathbb{R}}^3$, accounting for three parameters,
- spatial boosts: $\vec x\mapsto\vec x+\vec vt$, $\vec v\in{\mathbb{R}}^3$, accounting for three parameters,
- spatial rotations: $\vec x\mapsto\mathbf{R}\cdot\vec x$, $\mathbf{R}\in\mathrm{O}(3)$ (group of spatial rotations, including reflections), accounting for three parameters,
- time translations: $t\mapsto t+b$, $b\in{\mathbb{R}}$, accounting for one parameter, and
- time dilations: $t\mapsto at$, $a\in{\mathbb{R}}-\{0\}$, accounting for one parameter.
The redundancies a) and b) are now eliminated by assuming $P_0$ to rest at the origin of our spatial reference frame. We then have, assuming (\[eq:TaitsSol4\]), $$\label{eq:TaitsSol5}
Q_{ij}(t)=\vec x_i(t)\cdot\vec x_j(t)=
\vec a_i\cdot\vec a_j+
t\,(\vec a_i\cdot\vec v_j+\vec a_j\cdot\vec v_i)
+t^2\,\vec v_i\cdot\vec v_j\,.$$
5. Measuring the mutual distances, i.e. the $Q_{ij}$, at $k$ different values $t_a$ ($1\leq a\leq k$) of $t$ we obtain the $kn(n+1)/2$ numbers $Q_{ij}(t_a)$. From these we wish to determine the following unknowns, which we order in four groups:
- the $k$ times $t_a$,
- the $n(n+1)/2$ products $\vec a_i\cdot\vec a_j$,
- the $n(n+1)/2$ products $\vec v_i\cdot\vec v_j$, and
- the $n(n+1)/2$ symmetric products $\vec a_i\cdot\vec v_j+\vec a_j\cdot\vec v_i$.
6. The arbitrariness in choosing the origin and scale of the time parameter $t$, which correspond to the points d) and e) above, can, e.g., be eliminated by choosing $t_1=0$ and $t_2=1$. Hence the first group leaves the $k-2$ unknowns $t_3,\dots,t_k$. The last remaining redundancy, corresponding to the spatial rotations in point c), is *almost* eliminated by choosing $P_1$ on the $z$ axis and $P_2$ in the $xz$ plane. This suffices as long as $P_0,P_1,P_2$ are not collinear. Otherwise we choose three other mass points for which this is true. Here we exclude the exceptional case where all mass points are collinear. We said that this ‘almost’ eliminates the remaining redundancy, since a spatial reflection at the origin is still possible.
7. Tait’s strategy is now as follows: for each instant in time $t_a$ consider the $n(n+1)/2$ equations (\[eq:TaitsSol5\]). There are $k-2$ unknowns from the first and $n(n+1)/2$ unknowns each from groups 2), 3), and 4). This gives a total of $kn(n+1)/2$ equations for the $k-2+3n(n+1)/2$ unknowns. The number of equations minus the number of unknowns is $$\label{eq:TaitsSol6}
(k-3)\tfrac{n(n+1)}{2}+2-k\,.$$ This is positive if and only if $n\geq 2$ and $k\geq 4$. Hence the minimal procedure is to take four snapshots ($k=4$) of three particles ($n=2$), which results in 12 equations for 11 unknowns.
8. Recall that we assumed the validity of Newtonian dynamics and that the given trajectories correspond to force-free particles. This implies the existence of inertial systems and hence also the existence of solutions to the equations above. For positive (\[eq:TaitsSol6\]) the equations determine the $3n(n+1)/2$ unknowns in groups 2) - 4) which, in turn, determine the $6n-3$ free components of $\vec a_i$ and $\vec v_i$ up to an overall sign, since $3n(n+1)/2\geq 6n-3$ if and only if $n\geq 2$. Note that we have $6n-3$ rather than $6n$ free components for $\vec a_i$ and $\vec v_i$, since we already agreed to put $P_1$ on the $z$ axis, which fixes two components of $\vec a_1$ and $\vec v_1$ each, and $P_2$ in the $xz$ plane, which fixes one component of $\vec a_2$ and $\vec v_2$ each. Note also that we cannot do better than determining the $\vec a_i$ and $\vec v_i$ up to sign, since the $Q_{ij}$ are homogeneous functions of *second* degree in these variables.
9. Once the $2n$ vectors $\vec a_i$ and $\vec v_i$ are obtained, so is clearly the inertial system (up to orientation) and the inertial timescale. This is as far as Tait’s solution to Thomson’s problem goes. A numerical sketch of the reconstruction is given below.
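The following numerical sketch illustrates the reconstruction for the minimal case $n=2$, $k=4$. All particle data and snapshot parameters are invented for the illustration; the observables are the inner products of (\[eq:TaitsSol2\]), which in practice would be obtained from the measured mutual distances via (\[eq:TaitsSol3\]). The recovered values of $t_3$ and $t_4$ agree with the hidden snapshot labels up to the affine gauge fixing $t_1=0$, $t_2=1$.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Invented data: n+1 = 3 force-free particles with hidden inertial data.
n = 2
a = rng.normal(size=(n + 1, 3))              # hidden initial positions
v = rng.normal(size=(n + 1, 3))              # hidden velocities
tau = np.array([0.3, 1.1, 2.4, 3.7])         # hidden snapshot labels (k = 4)

def Q_of(t):
    """Observable inner products Q_11, Q_12, Q_22 relative to P_0."""
    r = (a[1:] + v[1:] * t) - (a[0] + v[0] * t)
    return np.array([r[i] @ r[j] for i in range(n) for j in range(i, n)])

Q_obs = np.array([Q_of(t) for t in tau])     # k x 3 measured numbers

# Unknowns in the gauge t_1 = 0, t_2 = 1: the two remaining times t_3, t_4
# and the three groups of products a_i.a_j, (a_i.v_j + a_j.v_i), v_i.v_j.
def residuals(u):
    t = np.array([0.0, 1.0, u[0], u[1]])
    A, B, V = u[2:5], u[5:8], u[8:11]
    model = A + np.outer(t, B) + np.outer(t**2, V)
    return (model - Q_obs).ravel()           # 12 equations, 11 unknowns

# Crude initial guess; this is a hedged sketch, not a robust solver.
sol = least_squares(residuals, x0=np.r_[2.0, 4.0, np.ones(9)])
print("recovered t_3, t_4      :", np.round(sol.x[:2], 4))
print("hidden labels, rescaled :", (tau[2:] - tau[0]) / (tau[1] - tau[0]))
```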
One remarkable thing about Tait’s solution is that the spatial inertial system and the inertial timescale are determined together. However, this is really not surprising: The mathematical problem of calculating the $k$ labels $t_a$ representing “instants” cannot be separated from the characterisation of the instants themselves. In this sense it might be said – following Julian Barbour [@Barbour:1994a] – that instants are not to be located in time, but that time is rather to be found in instants. Thus it seems that the philosophical discussion concerning the reality of time (see e.g. [@SEoP-Time] for an up-to-date account) is then really a discussion concerning the reality of instants. But in point mechanics instants are relational configurations the reality of which cannot be doubted without mocking the theory.
Mechanics without parameter-time {#mechanics-without-parameter-time .unnumbered}
--------------------------------
If time can be read off instants, as claimed above, we should, at least in principle, be able to altogether eliminate the parameter $t$ from the laws. What does the $t$-less version of Newtonian mechanics look like? One answer has been well known for a long time, albeit in a somewhat hybrid form in which the absolute positions in space still feature. It goes under the name of Jacobi’s principle, after Carl Gustav Jacobi (1804-1851). It takes the form of a geodesic principle in configuration space. That means, it determines the physically realised paths in configuration space between any pair $({\mathbf{q}}_i,{\mathbf{q}}_f)$ of given points to be those of shortest length. Here “length” is measured in some appropriate metric that encodes the essential dynamical information.
Note that the parameter $t$ plays no rôle: its value at the initial and final point need not be specified. Rather, the measure of inertial time elapsed between the initial and final configuration can be calculated *after* the dynamical trajectory has been determined through the geodesic principle. Let there be $n$ mass points whose positions are $(\vec q_1,\cdots,\vec q_n)=:{\mathbf{q}}$, moving under the influence of a potential $V({\mathbf{q}})$. The configuration space is ${\mathbb{R}}^{3n}$ and its Riemannian metric, with respect to which the physically realised trajectories of constant energy $E$ are geodesics, is given by $g=(E-V)T$, where $T$ is the positive-definite bilinear form that appears in the expression for the kinetic energy (“kinetic-energy metric”). The inertial time that has elapsed along the length-minimising trajectory between ${\mathbf{q}}_i$ and ${\mathbf{q}}_f$ is then given by $$\label{eq:JacobiTimeSpan-1}
\Delta t({\mathbf{q}}_i,{\mathbf{q}}_f)=
\int_{{\mathbf{q}}_i}^{{\mathbf{q}}_f}\sqrt{\frac{T\bigl(d{\mathbf{q}}/d\lambda,d{\mathbf{q}}/d\lambda\bigr)}{E-V({\mathbf{q}})}}\,d\lambda\,.$$ This may be understood as saying that time has to be chosen in such a fashion as to lead to the standard form of energy conservation. Indeed, from (\[eq:JacobiTimeSpan-1\]) we get $$\label{eq:JacobiEnergyLaw}
E=T\bigl(d{\mathbf{q}}/dt\,,\,d{\mathbf{q}}/dt\bigr)+V({\mathbf{q}})\,.$$ Note that (\[eq:JacobiTimeSpan-1\]) only depends on the pair $({\mathbf{q}}_i,{\mathbf{q}}_f)$ and not on the way we parametrise the path. Hence the choice of the parameter $\lambda$ is arbitrary. Therefore we have a well defined map $$\label{eq:JacobiTimeSpan-2}
\Delta t: {\mathbb{R}}^{3n}\times{\mathbb{R}}^{3n}\rightarrow{\mathbb{R}}_+$$ which, for given energy $E$, assigns to each pair of points in the configuration space the inertial-time duration of the physical journey connecting them. As we will discuss next, there is a certain analog of Jacobi’s principle in General Relativity, with some additional issues arising due to the fact that the fundamental mathematical entities are fields rather than point particles. Finally we point out that there is a generalisation of Jacobi’s principle in models of point mechanics without absolute space. In these models only the instantaneous relative distances enter the laws and the time lapse can again be calculated from the dynamical trajectories. First attempts were Reissner’s (1874-1967) [@Reissner:1914] and Schrödinger’s [@Schroedinger:1925], with the full “relativisation” of time being achieved only much later in [@Barbour.Bertotti:1982]. See also [@Barbour.Pfister:MachsPrinciple] for more on the modern context and translations of the papers by Reissner, Schrödinger etc.
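As a minimal illustration of (\[eq:JacobiTimeSpan-1\]), consider a single particle of unit mass in the potential $V(q)=q^2/2$ with energy $E=1$ (all values are assumptions chosen for simplicity). In one dimension the configuration-space path between $q_i$ and $q_f$ is just the segment joining them, so the integral can be evaluated directly and compared with the Newtonian result $\pi/4$.

```python
import numpy as np
from scipy.integrate import quad

m, E = 1.0, 1.0                               # unit mass, energy E = 1
V = lambda q: 0.5 * q**2                      # harmonic potential (assumption)
T = lambda dq: 0.5 * m * dq**2                # kinetic-energy metric

q_i, q_f = 0.0, 1.0                           # end points of the path
q = lambda lam: q_i + (q_f - q_i) * lam       # arbitrary parametrisation
dq_dlam = q_f - q_i                           # constant for this choice

# Jacobi-type duration: Delta t = int sqrt( T(dq/dlam) / (E - V(q)) ) dlam
integrand = lambda lam: np.sqrt(T(dq_dlam) / (E - V(q(lam))))
delta_t, _ = quad(integrand, 0.0, 1.0)

print(f"time span from the path : {delta_t:.6f}")
print(f"Newtonian value (pi/4)  : {np.pi / 4:.6f}")
```

Re-parametrising the segment, e.g. by $q(\lambda)=q_i+(q_f-q_i)\lambda^2$, leaves $\Delta t$ unchanged, in line with the parameter independence noted above.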
General Relativity
==================
Einstein’s equations are equations for entire spacetimes, that is, pairs $(M,g)$ where $M$ is a four-dimensional differentiable manifold endowed with a certain geometric structure called *Lorentzian metric*, which is here represented by $g$. Given such a pair $(M,g)$ and a specification of certain aspects of physical matter, it makes unambiguous sense to say that $(M,g)$ does, or does not, satisfy Einstein’s equations. No external notion of time enters the picture at this stage. This, clearly, is for good reasons: Spacetimes do not evolve (in “time” external to them); they simply are! In addition, no conditions concerning structures internal to $(M,g)$ need to be imposed, such as sequential ordering of substructures (to be interpreted as “instants”), absence of closed timelike curves (i.e. journeys into one’s own past), or causal evolution of geometry. On the other hand, Einstein’s equations are *compatible* with the *additional imposition* of such structures. It required the hard work of mathematicians over many years to show that a reasonable set of such additional conditions exists which ensures that Einstein’s equations allow for well posed initial value problems in the sense explained above.
![\[fig:EmbeddingOfSpaces\]Spacetime, $M$, is foliated by a one-parameter family of embeddings $\mathcal{E}_t$ of the 3-manifold $\Sigma$ into $M$. Here $t$ is a formal label without direct physical significance. $\Sigma_t$ is the image in $M$ of $\Sigma$ under $\mathcal{E}_t$. Each such $\Sigma_t$ is an **instant**.](FigGiulini1 "fig:"){width="0.7\linewidth"}
In particular, these conditions ensure that the spacetime can be thought of as the history of space. In a loose mathematical sense this means that spacetime is a stacking of spaces, each one being an instant. More precisely, spacetime is foliated by a one-parameter family of embeddings of space into spacetime. This is schematically represented in Figure \[fig:EmbeddingOfSpaces\]. For that to make mathematical sense we must be sure that a single space, $\Sigma$, suffices to foliate spacetime. Its geometry may change from leaf to leaf, but not its essential properties as differentiable manifold, for otherwise we could not speak of *its* evolution. In particular this means that its topological properties are preserved during evolution, like its connectedness and its higher topological invariants; see Figure \[fig:TwoDifferentSpacetimes\].
![\[fig:TwoDifferentSpacetimes\]Schematic rendering of spacetimes. The one on the left may be viewed as time evolution of space. Time runs upwards and space corresponds to the horizontal sections, here depicted by a 3-holed surface. In the spacetime on the right an initial connected space at the bottom, represented by a single 6-holed surface, evolves into two 3-holed pieces. This spacetime cannot be viewed as time evolution of a single space and shall be excluded from the discussion.](FigGiulini2){width=".68\linewidth"}
![](FigGiulini3){width="0.9\linewidth"}
One of the fundamental difficulties with the notion of spacetime as history of space is its inherent redundancy: There are many ways to describe one and the same spacetime as the evolution of space. This is explained in Figure \[fig:LapseShift\]. This means that if we cast Einstein’s equations into the form of evolution equations for “space”, we cannot expect unique solutions, contrary to what is usually required for well posed initial-value problems. The point here is that the non-uniqueness is not arbitrary. It is precisely of the amount that accounts for the different ways to move space through a *fixed* spacetime, no more and no less. This is closely related to the infamous “Hole Argument” [@SEoP-HoleArgument].
![\[fig:LapseShift\]There is a large ambiguity in moving from an initial space-slice $\Sigma_t$ “forward in time”. For $q\in\Sigma$ the image points $p=\mathcal{E}_t(q)$ and $p'=\mathcal{E}_{t+dt}(q)$ are connected by the vector $\partial/\partial t\vert_p$ whose components tangential and normal to $\Sigma_t$ are $\beta$ (three functions) and $\alpha n$ (one function) respectively. Hence there is a four-function worth of ambiguity to move $\Sigma_t$ in a given ambient spacetime.](FigGiulini4 "fig:"){width="0.54\linewidth"}
That relaxation of the uniqueness requirement is familiar from so-called “gauge theories” and does not imply any renunciation of the determinism of fundamental laws, at least as long as the degree of arbitrariness in the analytical expression of the evolution is under complete mathematical control. Physical configurations are then taken to be the equivalence classes under the relation that identifies any two apparently different evolutions that give rise to the same spacetimes (more precisely: diffeomorphism-class of spacetimes).
The Chronos principle {#the-chronos-principle .unnumbered}
---------------------
Modulo the difficulties just mentioned, we may ask whether we can extract a notion of time merely from the information contained in instants. An instant here is a spatial configuration, that is a pair $(\Sigma,h)$, where $\Sigma$ is a 3-dimensional manifold and $h$ is a Riemannian (i.e. positive definite) metric. One obvious question concerning Einstein’s equations is this: given two instants $(\Sigma_1,h_1)$ and $(\Sigma_2,h_2)$, can we associate a measure of the time by which they are apart if we assume that both 3-geometries occur in a spacetime that satisfies Einstein’s equations? This is known as the “sandwich conjecture” in General Relativity and is known to fail in many examples which are, however, of special symmetry that renders the problem singular. For example, it is obvious that specifying any two flat 3-slices in Minkowski space does not give us any information on their separation. Similarly, it has been shown that in the spherically symmetric case a similar underdetermination prevails [@Murchadha.Roszkowski:2006]. On the other hand, it has been an old hope that a suitable analog of Jacobi’s principle, and in particular formula (\[eq:JacobiTimeSpan-1\]), is also valid in General Relativity. This was first proposed in the classic and well known paper [@Baierlein.Sharp.Wheeler:1962] of 1962. An apparently less well known contribution appeared 12 years later, in which the “Chronos Principle” in General Relativity was proposed, according to which time is a measure for the distance between instantaneous configurations (instants) [@Christodoulou:1975]. Moreover, it was asked in [@Christodoulou:1975] whether such measures exist for which one would not have to know the entire spatial configuration in order to determine the time span.
> “This postulate contains the statement that it is not necessary to look at the change in configuration of the entire universe to measure time. It is sufficient to measure the change in configuration of only a localized region of the universe, and one is assured that the local time thus obtained will be equal to that of any other region, and indeed equal to the global time.” ([@Christodoulou:1975], p.76)
It is this localisation property that renders this reading of time from instants physically viable. Let us therefore see how it can be satisfied. The answer, quite surprisingly, leads more or less directly to General Relativity. We shall give the argument in a slightly simplified form.
As already stated, Einstein’s equations can be cast into evolutionary form. In that form one may identify a kinetic-energy metric, just like in point mechanics. It reads: $$\label{eq:WDW-Distance}
ds^2=\int_\Sigma d^3x\ G^{ab\,nm}[h(x)] dh_{ab}(x)dh_{nm}(x)$$ where $G^{ab\,nm}[h(x)]$ is a certain expression that depends on the metric tensor $h$ of space but not on its derivatives (ultralocal dependence). It is sometimes called the Wheeler-DeWitt metric. The measure of time will be obtained by a rescaling of the kinetic-energy metric, just like in (\[eq:JacobiTimeSpan-1\]). Hence one writes $$\label{eq:ChronosTime}
d\tau^2=\frac{ds^2}{\int_\Sigma d^3x\ R(x)}\,.$$ Here $R$ must be a scalar function of the spatial metric $h$. The simplest non-constant such function is the scalar curvature, which depends on $h$ and its derivatives up to order 2. The condition that the measure of time be compatible with arbitrarily fine localisation $\Sigma\rightarrow U\subset\Sigma$, i.e. that $d\tau$ computed from any subregion $U$ agrees with that computed from all of $\Sigma$, requires the integrands in the numerator and denominator of (\[eq:ChronosTime\]) to be proportional. Without loss of generality we can take this constant of proportionality (which cannot be zero) to be $1$ (this just fixes the overall scale of physical time) and obtain $$\label{eq:ChronosTimeLocProp}
G^{ab\,nm}[h(x)] \frac{dh_{ab}(x)}{d\tau}\frac{dh_{nm}(x)}{d\tau}-R[h](x)=0\,.$$ This is a well known formula (the so-called Hamiltonian constraint) in General Relativity. Hence General Relativity just satisfies the localisation property with the simplest conceivable local rescaling function $R$. Finally, physical time is now given in terms of 3-dimensional geometric quantities by a Jacobi-like formula, which is just the analog of (\[eq:JacobiTimeSpan-1\]) in the case $E=0$: $$\label{eq:TimeFormula}
\Delta\tau({\mathbf{g}}_i,{\mathbf{g}}_f)=\int_{{\mathbf{g}}_i}^{{\mathbf{g}}_f}
\sqrt{\frac{
G\bigl(d{\mathbf{g}}/d\lambda,d{\mathbf{g}}/d\lambda\bigr)}
{-R\bigl[{\mathbf{g}}(\lambda)\bigr]}}\ d\lambda$$
Conclusions and open issues
===========================
Following [@Barbour:1994a] we tried to argue that the notion of “time from instants” is inherent in classical point mechanics as well as in General Relativity. We also saw that in General Relativity that notion of time is not as hopelessly global as one might have feared. In fact, one can argue that General Relativity just realises the simplest *localisable* notion of that sort of time.
But there are also points that remain open (to me):
1. Solutions to dynamical equations of motion in the form of (generalised) geodesic principles are subsets of (dynamically realised) configurations in the space of (kinematically possible) ones. These subsets are delivered to us in the form of unparametrised curves. So, even though the parameter does not matter, the structure of a one-dimensional sub-continuum remains. In particular one (or two) preferred orderings are selected. What is the significance of that? What makes us experience these solution configurations according to this order?
2. Can we, on the space of 3-geometries, characterise a function that structures it according to some definition of geometric entropy? How would its gradient flow be related to the dynamics of General Relativity?
3. Suppose the spacetime we live in did not allow for any symmetries and were sufficiently generic, so as to not allow for two *different* isometric embeddings of any of its possible 3-geometries. (Such spacetimes exist and are, intuitively speaking, the generic case, though their degree of generality or naturalness is not easy to characterise mathematically.) This means that each instant would have its unique place in spacetime. Would this count as a perfect representation of the “Now” in a physical theory (here General Relativity), or could/should we ask for more?
Finally I wish to comment on the transition from point mechanics to field theory. In point mechanics, the requirement to only employ purely relational quantities is met by eliminating all explicit reference to absolute space and time. This has been gradually achieved in the papers of Reissner, Schrödinger, and Barbour & Bertotti. But what is the precise analog of that requirement in field theory? A standard answer to this is that the theory should be *background independent*. The intended meaning of that phrase is that the theory should not employ structures which are not dynamically active. Closer inspection shows that it is quite hard to translate this intended meaning into a clear mathematical condition [@Giulini:2007b]. The problem is that whatever the mathematical formulation is, it seems quite easy to turn it into an equivalent one by some formal rewriting that renders it (formally) background independent. It is often taken for granted that the requirement of *diffeomorphism invariance* (also known as “general covariance”) is sufficient, because that would deprive spacetime points of their independent individuality. This is true to some extent, but it seems not to go as far as one might have hoped for. Modern (quantum-)field theory does not get rid of space and time.
Markus Fierz was deeply concerned about the problematic relation between spacetime and fields. In a remarkable letter of October 9-10th 1951 to Wolfgang Pauli[^1] he wrote ([@Pauli:SC], Vol.IV, PartI, Doc.1287, p.379)
> “There exist \[in classical physics–DG\] solutions \[to field equations–DG\] with empty domains, that is, emptied from all fields. Hence one needs a theory of space which is independent of what fills space. There is the geometry of space and the laws of things in space. \[...\]
>
> Space is still absolute in Relativity Theory insofar as one may characterise it without referring to its ‘content’, and because it may even exist without any content. \[...\]
>
> In a \[hypothetical–DG\] full Theory of Quantum Fields, in which the act of observation and the possibility to localise are described correctly, it should not be necessary to introduce space separately. Opposite to what Einstein hoped, the laws of space should follow from the laws of Nature (not the laws of Nature from geometry). But this can only be hoped for if there is no such thing as empty space, that is, if you cannot clear \[ausräumen\] space. Fields are not in space, they span space. Space is not a geometric idea \[Gedankending\], it is a certain aspect of the world.”
>
> In this sense, space in Relativity Theory is absolute and this is why Einstein suggested to call it aether. In a proper field theory the theory of localisation should deliver a theory of space. Space should somehow be ‘created’ by test bodies and hence be a function of the observer in a much deeper sense than in Relativity Theory.”
Pauli replied on October 13 in a way that would also be typical for many modern relativists ([@Pauli:SC], Vol.IV, PartI, Doc.1289, pp.385-386):
> “Your wording does not do justice to Relativity Theory, which is just an attempt to connect geometry and laws of nature concerning things \[Dinge\] in the spacetime world. \[...\] All people happily proclaim just the opposite to what you said in your letter: namely ‘from now on only the connection of spacetime and things is absolute. \[...\]
>
> I am quite indignant about this part of your letter, since it shows to me that the, compared to me, slightly younger generation of physicists (not to speak of the still younger ones!) have completely repressed \[verdrängen\] General Relativity - and because I know how important Einstein considered this point to be. \[...\]
>
> After this urgent correction (diagnosis: ‘repression’ \[Verdrängung\]!) one can ask whether the dependence of space (i.e. spacetime) from the things \[Dingen\] according to General Relativity is sufficient. To pose the question already means to negate it. \[...\]
>
> I agree that the impossibility to accommodate Einstein’s postulate (i.e. Mach’s original point of view) within General Relativity is a deep and significant sign for the inadequacy of classical field physics.”
So we see that after his usual grumble Pauli finally agrees at least on the existence of a fundamental difficulty, which was, after all, well addressed by Fierz’ original complaint. Even today all candidate theories of quantum gravity make use of non-dynamical structures that represent some sort of space or spacetime (of various dimensions). Hence I believe Fierz’ complaint is as relevant today as it was 60 years ago.
Everyone knows the opening words of Hermann Minkowski’s (1864-1909) famous address “Raum und Zeit”, delivered in Cologne on September 21st 1908 [@Minkowski:1909]:
> “Gentlemen! The views of space and time which I wish to lay before you have sprung from the soil of experimental physics. Therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”
But it seems not to be so well known that Minkowski felt the enormous abstraction and possible physical over-idealisation of the concept of spacetime *as such*, as he clearly indicated in his introduction, before going into the description of what we now call “Minkowski space” (space meaning spacetime). He wanted his readers to understand the points of spacetime as individuated entities:
> “In order to not leave a yawning void \[gähnende Leere\], we wish to imagine that at every place and at every time something perceivable exists. In order to avoid saying ‘matter’ or ‘electricity’ for that something, I will use the word ‘substance’ for it. We focus attention on the substantive \[substantiellen\] world point at $x,y,z,t$ and imagine to be capable to recognise this substantive point at any other time”.
That substantivalist’s view of Minkowski spacetime is still inherent in its mathematical representation in modern field theory. One sign of this is the interpretation of its automorphism group (the Poincaré group) as proper physical symmetries rather than gauge transformations. Recall that a proper physical symmetry transforms solutions to dynamical equations into solutions, but the transformed solution is considered physically different (distinguishable) from the original one. In contrast, gauge transformations just connect redundant descriptions of the same physical situation.
Individuating spacetime points is natural if we think of spacetime as a geometrically structured *set*. Such a structure consists first of all of a set, which may then carry certain geometric structures of various complexities. But recall what according to Cantor’s definition it already takes to be a set [@CantorMengenlehre1:1895]:
> “By a *set* we understand any gathering together $M$ of determined well-distinguished objects $m$ of our intuition or of our thinking (which are called elements of $M$) into a whole.”
Minkowski’s “substance” may serve to distinguish events. But is that substance not eventually just another physical system obeying its own dynamical laws? If so, what kind of “dynamical law” can that be if there is no non-dynamical substance left with respect to which we can define change? Surprisingly – or perhaps not – this is just the same difficulty that stood at the very beginning of modern theories of dynamics. In “de gravitatione”, written well before the Principia, presumably between 1664 and 1673 (the dating is still controversial), Newton said [@Newton:UeberDieGravitation]:
> “It is accordingly necessary that the determination of places and thus of local motions is represented in some unmoved being of which sort space or extension alone is that which is seen as distinct from bodies. \[...\]
>
> About extension, then, it is probably expected that it is being defined either as substance or accidents or nothing at all. But by no means nothing, surely, therefore it has some mode of existence proper to itself, by virtue of which it fits neither to substance nor to accident.”
[**“Das noch Ältere ist immer das Neue”**]{} (“That which is even older is always the new”)\
Wolfgang Pauli
Acknowledgements {#acknowledgements .unnumbered}
----------------
I sincerely thank Albrecht von Müller and Thomas Filk for several invitations to workshops of the Parmenides Foundation, during which I was given the opportunity to present and discuss the material contained in this contribution.
[^1]: There exist two versions of this letter, one from October 9th and one from October 10th. Here we quote from the first only.
---
abstract: 'Virtual Reality (VR) is expected to be one of the killer-applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to facilitate its wide adoption. In particular, VR requirements in terms of high-throughput, low-latency and reliable communication call for innovative solutions and fundamental research cutting across several disciplines. In view of this, this article discusses the challenges and enablers for ultra-reliable and low-latency VR. Furthermore, in an interactive VR gaming arcade case study, we show that a smart network design that leverages the use of mmWave communication, edge computing and proactive caching can achieve the future vision of VR over wireless.'
author:
-
title: 'Towards Low-Latency and Ultra-Reliable Virtual Reality'
---
Introduction {#introduction .unnumbered}
============
The last two years have witnessed an unprecedented interest both from academia and industry towards mobile/wireless virtual reality (VR), mixed reality (MR), and augmented reality (AR). The ability of VR to immerse the user creates the next generation of entertainment experiences, while MR and AR promise enhanced user experiences and will allow end-users to raise their heads from smartphone screens. 5G encompasses three service categories: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low-latency communication (URLLC). Mobile VR, MR and AR applications are very much use case specific and sit at the crossroads between eMBB and URLLC, seeking multiple Gbps of data uniformly delivered to end-users subject to latency constraints. It is well known that low latency and high reliability are conflicting requirements [@Mogensen_5Gtradeoffs_2014]. Ultra-reliability implies allocating more resources to users to satisfy high transmission success rate requirements, which might increase latency for other users. Smart network designs are required to realize the vision of interconnected VR/AR, characterized by smooth and reliable service, minimal latency, and seamless support of different network deployments and application requirements.
Wireless and Mobile VR, MR and AR {#wireless-and-mobile-vr-mr-and-ar .unnumbered}
---------------------------------
In essence VR, MR and AR differ in the proportion in which digital content is mixed with reality. Both AR and MR incorporate some aspects of the real environment around the user: while real elements are the main focus for AR, virtual elements play a leading role in MR. To accomplish their goal, AR or MR glasses and wearables need not block out the world around; instead, they overlay digital layers onto the user’s current view. The human eye is very sensitive to incorrect information. In order to “feel real”, the AR or MR system needs to build a 3D model of the environment to place virtual objects in the right place and handle occlusions. In addition, the lighting of the object needs to be adjusted to the scene. Conversely, VR refers to a 100% virtual, simulated experience. VR headsets or head mounted displays (HMD) cover the user’s field of view (FOV) and respond to eye tracking and head movements to shift what the screen displays accordingly. That is, in VR the only links to the outside real world are the various inputs arriving from the VR system to the senses of the user that are instrumental in adding credibility to the illusion of living inside the virtually replicated location.
The ultimate VR system implies breaking the barrier that separates both worlds by being unable to distinguish between a real and synthetic fictional world [@ejder_VR_2017]. An important step in this direction is to increase the resolution of the VR system to the resolution of the human eye and to free the user from any cable connection that limits mobility and that, when in touch with the body, disrupts the experience.
Up until now the use of untethered VR HMDs has been relegated to simple VR applications and discrete, low-quality video streaming delivered through smartphone headsets such as the Samsung Gear VR, or cost-efficient ones such as the Google Cardboard. Meanwhile, HDMI connection through a 19-wire cable has been favored for PC-based premium VR headsets such as the Oculus Rift, HTC Vive or PlayStation VR. The reason can be found in the latency sensitivity (a rendered-image latency of more than 15 ms can cause motion sickness) and the communication- and computing-intensive nature of VR systems. In addition, even premium VR headsets still have only a limited resolution of 10 pixels per degree, compared to the 60 pixels per degree of clear (20/20) visual acuity of the human eye. Hence, HD wireless/mobile VR is doubly constrained. First, it is computing constrained, as GPU power in HMDs is limited by the heat generated in powering these devices and by the bulkiness and weight of the headset itself. Second, it is constrained by the bandwidth limitations of current wireless technologies, operating below 6 GHz, and the resulting inability to stream high resolution video (8K and higher) at high frame rates (over 90 frames per second (fps)). The success of wireless VR hinges on bringing enough computing power to the HMD via dedicated ASICs, or to the cloud or fog within a latency budget. Yet, recent developments from the VR hardware industry could deliver the first commercial-level standalone VR headsets to the market in 2018, even if still with limited resolution.
A manifold of technological challenges stemming from a variety of disciplines need to be addressed to achieve an interconnected VR experience. An interconnected VR service needs to handle the resource distribution, quality of experience (QoE) requirements, and the interaction between multiple users engaging in interactive VR services. It should also be able to handle different applications and traffic scenarios, for example, the aggregate traffic of an enterprise floor where participants share an MR workplace or an interactive gaming arcade, where each player is experiencing her own VR content.
Therefore, in this paper we envision that the next steps towards the future interconnected VR will come from a flexible use of computing, caching and communication resources, a.k.a. the so called C$^{3}$ paradigm. To realize this vision, many trade-offs need to be studied. These range from the optimization of local versus remote computation, to single or multi-connectivity transmission while taking into account bandwidth, latency and reliability constraints.
Requirements and Big Challenges in wireless VR {#sec:Requirements .unnumbered}
==============================================
From a wireless communication point of view, the extremely high data rate demands coupled with ultra-low latency and reliability are the main hurdles before bringing untethered VR into our everyday lives. In what follows, we will briefly introduce the bandwidth/capacity, latency and reliability requirements associated to several VR use cases.
Capacity {#capacity .unnumbered}
--------
Current 5G or new radio (NR) system design efforts aim at supporting the upcoming exponential growth in data rate requirements from resource-hungry applications. It is largely anticipated that a 1000-fold improvement in system capacity, defined in terms of bits per second per square kilometer (b/s/km$^{2}$), will be needed. This will be facilitated through increased bandwidth, higher densification, and improved spectral efficiency. Focusing on VR technology, a back-of-the-envelope calculation reveals that with each of the human eyes being able to see up to 64 million pixels (150$^{\circ}$ horizontal and 120$^{\circ}$ vertical FOV, 60 pixels per degree) at a certain moment [@ejder_VR_2017], and with a 120 fps requirement to generate a real-like view, up to 15.5 billion pixels per second are needed. By storing each colored pixel in 36 bits, and with the maximum 1:600 video compression rate typically found in H.265 HEVC encoding, a bit rate of up to 1 Gbps is needed to guarantee such quality.
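The back-of-the-envelope numbers above can be reproduced with a few lines of Python; the figures are the rough planning values quoted in the text, not measurements.

```python
# Back-of-the-envelope VR bit-rate estimate using the planning values quoted
# in the text (rough figures, not measurements).
pixels_per_eye = (150 * 60) * (120 * 60)   # 150 x 120 degree FOV, 60 px/degree
eyes           = 2
fps            = 120
bits_per_pixel = 36
compression    = 600                        # ~1:600 (H.265/HEVC, best case)

pixel_rate   = pixels_per_eye * eyes * fps
required_bps = pixel_rate * bits_per_pixel / compression

print(f"pixel rate        : {pixel_rate / 1e9:.1f} billion pixels per second")
print(f"required bit rate : {required_bps / 1e9:.2f} Gbps")
```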
The values above are clearly unrealizable in 4G. Actually, even early stage and entry-level VR, whose minimum data rate requirements are estimated to reach 100 Mbps[^1], will not be supported for multiple users in many deployments. Adding the required real time response for dynamic and interactive collaborative VR applications, it is not surprising that a significant ongoing research effort is geared towards reducing bandwidth needs in mobile/wireless VR, thereby shrinking the amount of data processed and transmitted. For example, in the context of 360$^{\circ}$ immersive VR video streaming, head movement prediction is used in [@qian_optimCell_2016] to spatially segment raw frames and deliver in HD only their visible portion. A similar approach is considered in [@ju_ultraWideVRStream_2017], splitting the video into separated grid streams and serving grid streams corresponding to the FOV. Alternatively, eye gaze tracking is applied in [@doppler_EUCNC_2017] to deliver high resolution content only at the center of the human vision and to reduce the resolution and color depth in the peripheral field of view. Such a foveated 360$^{\circ}$ transmission has the potential to reduce the data rates to about 100 Mbps for a VR system with less than 10 ms round trip time including the rendering in the cloud. Yet, even if we allow only 5 ms latency for generating a foveated 360$^{\circ}$ transmission, existing networks cannot serve 100 Mbps to multiple users with reliable round trip times of less than 5 ms. Secondly, in today’s networks computing resources are not available this close to the users. Therefore, there exists a gap between what the current state of the art can do and what will be required as VR seeps into consumer space and pushes the envelope in terms of network requirements. In view of this, we anticipate that millimeter wave (mmWave) communications will bridge the gap by facilitating the necessary capacity increase.
Latency {#latency .unnumbered}
-------
In VR environments, stringent latency requirements are of utmost importance for providing a pleasant immersive VR experience. The human eye needs to perceive accurate and smooth movements with low motion-to-photon (MTP) latency, which is the lapse between a movement (e.g. a head rotation) and the moment the frame’s pixels corresponding to the new FOV are shown to the eyes. High MTP values send conflicting signals to the vestibulo-ocular reflex (VOR), a dissonance that might lead to motion sickness. There is broad consensus in setting the upper bound for MTP to less than 15-20 ms. Meanwhile, the loopback latency of 4G under ideal operation conditions is 25 ms.
The challenge for bringing end-to-end latency down to acceptable levels starts by first understanding the various types of delays involved in such systems to calculate the joint computing and communication latency budget. Delay contributions to the end-to-end wireless/mobile VR latency include sensor sampling delay, image processing or frame rendering computing delay, network delay (queuing delay and over-the-air delay) and display refresh delay. Sensor delay’s contribution ($<$1 ms) is considered imperceptible by users, and display delay ($\approx$10-15 ms) is expected to drop to 5 ms [@mangiante_VREdge_2017], which leaves 14 ms for computing and communication.
Both computing and communication delays serve as bottlenecks in VR systems. Heavy image processing requires high computational power that is often not available in the local HMD GPUs. Offloading computing tasks to remote cloud servers significantly relieves the computing burden from the users’ HMDs at the expense of incurring additional communication delay in both directions. Unlike MR and AR where uploading video streams to the cloud may be required, uplink communication delay due to offloading the computing task to the server is typically very small in VR, owing to the small amount of data needed, e.g., user tracking data and the interactive control decisions. However, the downlink delivery of the processed video frames in full resolution can significantly contribute to the overall delay. Current online VR computing can take as much as 100 ms, and the communication delay (network edge to server) can reach 40 ms. Therefore, relying on remote cloud servers is a more suitable approach for low-resolution non-interactive VR applications, where the whole 360$^{\circ}$ content can be streamed and the constraints on real-time computing are relaxed. Interactive VR applications require real-time computing to ensure responsiveness. Therefore, it is necessary to shrink the distance between the end users and the computing servers to guarantee minimal latency. Fog computing, also known as mobile edge computing (MEC), where the computation resources are pushed to the network edge close to the end users, serves as an efficient and scalable approach to provide low latency computing to VR systems. MEC is expected to reduce the communication delay to less than 1 ms in metropolitan areas. Another interesting scenario for the use of MEC, for AR, is provided in [@osvaldo_AR_MEC_2017] where, besides latency reduction, energy-efficiency is considered. The MEC resource allocation exploits inherent collaborative properties of AR: a single user offloads shared information on an AR scene to the edge servers which transmit the resulting processed data to all users at once via a shared downlink.
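The trade-off can be made concrete with a simple motion-to-photon budget check; the delay values below are illustrative assumptions reflecting the orders of magnitude discussed above, not measurements of any particular deployment.

```python
# Motion-to-photon (MTP) budget check for three rendering options.  All delay
# values are illustrative assumptions, not measurements.
MTP_BUDGET_MS = 20.0   # upper bound on motion-to-photon latency
SENSOR_MS     = 1.0    # sensor sampling delay
DISPLAY_MS    = 5.0    # expected display refresh delay

options = {
    # name: (frame rendering delay, one-way network delay) in ms
    "local HMD GPU":       (16.0, 0.0),
    "edge / MEC server":   (10.0, 1.0),
    "remote cloud server": (10.0, 40.0),
}

budget = MTP_BUDGET_MS - SENSOR_MS - DISPLAY_MS
print(f"budget left for computing + communication: {budget:.0f} ms")
for name, (comp, net) in options.items():
    total = comp + 2.0 * net          # pose uplink + frame downlink (pessimistic)
    verdict = "OK" if total <= budget else "exceeds budget"
    print(f"{name:20s}: {total:5.1f} ms -> {verdict}")
```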
Reliability {#subsec:reliable .unnumbered}
-----------
VR/AR applications need to consistently meet the stringent latency and reliability constraints. Lag spikes and dropouts need to be kept to a minimum, or else users will feel detached. Immersive VR demands a perceptible image-quality degradation-free uniform experience. This mandates error-robustness guarantees in different layers, spanning from the video compression techniques in the coding level, to the video delivery schemes in the network level. In wireless environments where temporary outages are common due to impairments in signal to interference plus noise ratio (SINR), VR’s non-elastic traffic behavior poses yet an additional difficulty. In this regard, an ultra-reliable VR service refers to the delivery of video frames on time with high success rate. Multi-connectivity (MC) has been developed for enhancing data rates and enabling a reliable transmission. MC bestows diversity to reduce the number of failed handovers, dropped connections, and radio-link failures (RLF). MC can either operate using the same or separate carrier frequencies. In intra-frequency MC, such as in single frequency networks (SFN), multiple sources using the same carrier frequency jointly transmit signals to a user. Contrarily, inter-frequency MC, which includes carrier aggregation (CA), dual connectivity (DC) and the use of different wireless standards, leverages either single or various sources that employ multiple carrier frequencies simultaneously for the same purpose. Enhancing reliability always comes at the price of using more resources and may result in additional delays; for example, at the PHY layer the use of parity, redundancy, and re-transmission will increase the latency. Also, allocating multiple sources for a single user could potentially impact the experienced latency of the remaining users. Another important reliability aspect in 5G is the ultra-high success rate of critical low-throughput packets. In particular, a maximum packet error rate (PER) of 10$^{-5}$ is specified in the 3GPP standard. This correlates with the VR/AR tracking message signaling that has to be delivered with ultra-high reliability to ensure smooth VR service.
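As a rough illustration of the reliability gain, the sketch below assumes independent link outages, which ignores correlated blockage as well as the extra resource cost just mentioned; the per-link outage probability is an arbitrary assumption.

```python
# Idealised multi-connectivity reliability gain: with k independent mmWave
# links, a video frame is lost only if every link is simultaneously in outage.
# The per-link outage probability is an arbitrary assumption; real links may
# experience correlated blockage, so this is an optimistic bound.
p_outage = 0.05                      # assumed per-link outage probability

for k in (1, 2, 3):
    p_fail = p_outage ** k
    print(f"{k} link(s): frame-loss probability = {p_fail:.2e}  "
          f"(reliability = {1.0 - p_fail:.5f})")
```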
![image](VR_use_cases_chart3-FINAL){width=".8\linewidth"}
C$^{3}$: Enablers for URLLC in VR {#c3-enablers-for-urllc-in-vr .unnumbered}
=================================
As outlined above, there is a substantial amount of work to be done to achieve a truly immersive VR user experience. The VR QoE is highly dependent on stringent latency and reliability conditions. High MTP delays of 20 ms or more, as well as distortions due to low data rates and the resulting quality of the projected images, lead to motion sickness and affect the user’s visual experience. Hence, end-to-end delay and reliability guarantees are key elements of an immersive VR experience. Smart network designs that blend together and orchestrate communication, computing, and caching resources are sorely lacking. Figure \[fig:use\_cases\] captures the foreseen requirements and the main technological enablers for both single and multiple user VR use cases. Next, we shed light on the envisioned roles of mmWave communications and MEC as two major thrusts of the future interconnected VR.
Millimeter Wave Communications {#millimeter-wave-communications .unnumbered}
------------------------------
mmWave communications is an umbrella term technically referring to any communication happening above 30 GHz. The possibilities offered by the abundance of available spectrum at these frequencies, with channel bandwidths ranging from 0.85 GHz at the 28 GHz band up to 5 GHz at 73 GHz, are their main allure. At mmWave frequencies, directional communication needs to be used to provide a sufficient link budget [@mmwave_38]. mmWave propagation suffers from blockage as mmWaves do not propagate well through obstacles, including the human body, which inflicts around 20-35 dB of attenuation loss; in addition, there is almost no diffraction. Best communication conditions are therefore met when there is a line-of-sight (LOS) path between the transmitter and the receiver with the mainlobes of their antenna beams facing each other. However, partially blocked single reflection paths might still be usable at a reduced rate. Directionality and isolation from blockage significantly reduce the footprint of interference and make mmWave well-suited for dense deployments[^2].
To find the transmitter and receiver beam combination or directional channel that maximizes the SINR, digital, hybrid or analog beamforming and beam-tracking techniques need to be applied. The beam training is able to track moving users in slowly time-variant environments and to circumvent blocked line-of-sight paths by finding strong reflectors. Especially in multiuser VR scenarios, the most likely source of a sudden signal drop arises either from temporary blockages caused by the user’s own limbs (e.g. a raised hand) and the bodies of surrounding players, or from transmitter-receiver beam misalignment. In such cases, if the SINR drops below a certain threshold, an alternative directional channel discovery process needs to be triggered. However, beam-tracking through beam training for large antenna arrays involving big codebooks with narrow beams can incur large delays. For that reason, developing efficient beam training and beam-tracking techniques is an active area of research, especially for fast-changing environments. For example, machine learning methods can be used to identify the most likely beam candidates to keep the disruption at a minimum.
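A toy overhead count (codebook sizes, coarse-to-fine reduction factors and slot duration are all assumptions) indicates why exhaustive sweeps over narrow-beam codebooks become costly and why hierarchical or learning-aided search is attractive.

```python
# Toy comparison of beam-training overhead: exhaustive sweep over all narrow
# transmit/receive beam pairs versus a two-stage (coarse-then-fine) search.
# Codebook sizes, reduction factors and slot duration are assumptions.
n_tx, n_rx = 64, 16          # narrow-beam codebook sizes at mmAP and mmHMD
slot_us    = 10.0            # assumed duration of one beam measurement

exhaustive = n_tx * n_rx     # measure every TX/RX narrow-beam pair

# Two-stage search: sweep 8 coarse TX sectors x 4 coarse RX sectors first,
# then refine with the 8 x 4 narrow beams inside the best coarse sector pair.
hierarchical = (8 * 4) + (n_tx // 8) * (n_rx // 4)

for name, meas in (("exhaustive", exhaustive), ("hierarchical", hierarchical)):
    print(f"{name:12s}: {meas:4d} measurements  (~{meas * slot_us / 1000:.2f} ms)")
```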
In this paper we advocate the use of MC to counteract the blockages and temporary disruptions of the mmWave channel. Specifically, non-coherent multisourced VR frame transmission will be showcased as a way to improve the SINR and increase the reliability of those links experiencing worse channel conditions. This approach is in line with the idea of overbooking radio and computing resources as a means to protect against mmWave channel vulnerability [@barbarossa_overbookResources_2017]. The literature on the specific application of mmWave technologies for VR is scarce, with the exception of [@abari_cutCord_2016], in which, for a single-user local VR scenario, the use of a configurable mmWave reflector is proposed to overcome self-body blockage and avoid the need to deploy multiple transmitters.
MEC Computing and Caching {#mec-computing-and-caching .unnumbered}
-------------------------
Rendering and processing VR HD video frames requires extensive computation resources. However, the need for compact and lightweight HMDs places a limit on their computational capabilities. Computation offloading is seen as a key enabler in providing the required computing and graphics rendering in VR environments. Users upload their tracking information, as well as any related data such as gaming actions or video streaming preferences to MEC servers with high computation capabilities. These servers perform the offloaded computing tasks and return the corresponding video frame in the downlink direction.
Cloud computing servers are capable of handling CPU and GPU-hungry computing tasks due to their high computational capabilities. The distance to computing resources for real-time VR services is limited by the distance light travels during the maximum tolerable latency. The concept of edge computing strikes a balance between communication latency and computing latency by providing high computational resources close to the users. We envision edge computing as a key enabler for latency-critical VR computing services. However, to ensure efficient latency-aware computing services with minimal costs, server placement, server selection, computing resource allocation and task offloading decisions are needed.
Indeed, providing stringent reliability and latency guarantees in real-time applications of VR is a daunting task. Dynamic applications, such as interactive gaming where real-time actions arrive at random, require massive computational resources close to the users to be served on time. Therefore, the burden on real-time servers has to be decreased by facilitating proactive prefetching tasks and computing of the corresponding users’ video frames. Recent studies have shown that VR gaming users’ head movement can be predicted with high accuracy for the upcoming hundreds of milliseconds [@qian_optimCell_2016]. Such prediction information can significantly help in relieving the servers from the burden of real-time computing that follows the users’ tracking data. Based on the estimated future poses of users, video frames can be proactively computed in remote cloud servers and cached at the network edge or in the users’ HMDs, freeing more edge servers for real-time tasks.
In addition to predicting user’s movement, application-specific actions and corresponding decisions can be also proactively predicted. Since humans’ actions are correlated, studying the popularity of different actions and their impact on the VR environment can facilitate in predicting the upcoming actions. Accordingly, subject to the available computing and storage resources, video frames that correspond to the speculated actions can be rendered and cached [@lee_outatime_2015], ensuring reliable and real-time service.
![image](vr-Arcade_FINAL){width=".7\linewidth"}
Use Case: An Interactive VR Gaming Arcade {#use-case-an-interactive-vr-gaming-arcade .unnumbered}
=========================================
Scenario Description {#subsec:scenario .unnumbered}
--------------------
In this section, we investigate the use of C$^{3}$ to assess the URLLC performance of a multiplayer immersive VR gaming scenario. Such an experience requires very low latency in order to synchronize the positions and interactions (input actions) of a group of players.
We consider an indoor VR gaming arcade where virtual reality players (VRPs) equipped with wireless mmWave head-mounted VR displays (mmHMD) are served by multiple mmWave band access points (mmAP) operating in the 60-GHz indoor band[^3]. VRPs move freely within the limits of individual VR pods, in which their movement in the physical space is tracked and mapped into the virtual space. Moreover, players’ *impulse actions* during the interactive gaming arrive at random, each of which impacts the game play, and correspondingly the video frame content, of a subset of the VRPs.
mmAPs are connected to an edge computing network, consisting of multiple edge computing servers and a cache storage unit as illustrated in Figure \[fig:gaming\_arcade\], where real-time tasks of generating users’ HD frames can be offloaded based on the players’ tracking data, consisting of their 6D pose and gaming impulse actions. In addition to real-time computing, we assume that the MEC network is able to predict users’ poses within a prediction window [@qian_optimCell_2016] to proactively compute and cache their upcoming video frames. A player can receive and display a proactively computed frame as long as no impulse action that impacts her arrives. The arrival intensity of impulse actions is assumed to follow a Zipf popularity distribution with parameter $z$ [@ejder_VR_2017]. Accordingly, the arrival rate for the $i^{\textrm{th}}$ most popular action is proportional to $1/i^{z}$. The arrival of impulse action $i$ impacts the game play of a subset of players $\mathcal{U}_{i}$. The impact of the impulse actions on the VRPs’ game play, namely, the *impact matrix,* is defined as $\Theta=[\theta_{ui}]$, where $\theta_{ui}=1$ if $u\in\mathcal{U}_{i}$, and $\theta_{ui}=0$ otherwise[^4]. A set of default parameters[^5] is used for simulation purposes unless stated otherwise.
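For concreteness, a minimal sketch of how such Zipf-distributed action arrivals and an impact matrix might be generated in a simulation is given below; the total arrival rate, the random impact pattern and the seed are illustrative assumptions and not the default parameters of footnote 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_rates(n_actions, z, total_rate):
    """Per-action arrival rates proportional to 1/i^z, i being the popularity rank."""
    w = 1.0 / np.arange(1, n_actions + 1) ** z
    return total_rate * w / w.sum()

n_players, n_actions, z = 16, 100, 0.8              # as in the default parameter set
rates = zipf_rates(n_actions, z, total_rate=16.0)   # assumed 1 action/player/s overall

# example impact matrix Theta: theta_ui = 1 if impulse action i affects player u
# (random 20% impact pattern, assumed here only for illustration)
Theta = (rng.random((n_players, n_actions)) < 0.2).astype(int)

# expected rate at which each player's current frame is invalidated by actions
invalidation_rate = Theta @ rates
print(invalidation_rate.round(2))
```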
When the game play starts, the MEC server keeps track of the arriving impulse actions and builds a popularity distribution of the action set. To keep up with the game dynamics, video frames that correspond to the most popular upcoming actions are computed and cached, subject to computing and storage constraints.
Proposed Solution {#proposed-solution .unnumbered}
-----------------
After the HD frames are rendered, the mmAPs schedule wireless DL resources to deliver the resulting video frames. As the delay of UL transmission to send the tracking data is typically small, we focus on the effect of computation delay in the edge servers and the DL communication delay. Scheduling is carried out such that the stringent latency and reliability constraints are met. In particular, the following probabilistic constraint on the frame delivery delay is imposed:
$$\Pr(D_{\textrm{comm}}(t)+D_{\textrm{comp}}(t)\geq D_{\textrm{th}})\leq\epsilon,\label{eq:prob_const}$$
which indicates that the probability that the summation of communication and computing delay at time instant $t$ exceeds a delay threshold value $D_{\textrm{th}}$ should be kept within a low predefined rate $\epsilon$.
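As an illustration only, constraint (1) can be checked empirically over logged delay samples as in the following sketch; the exponential samples and their means are synthetic stand-ins, not measurements from the considered system.

```python
import numpy as np

def meets_latency_constraint(d_comm_ms, d_comp_ms, d_th_ms=20.0, eps=0.01):
    """Empirical check of Pr(D_comm + D_comp >= D_th) <= eps over logged samples."""
    total = np.asarray(d_comm_ms) + np.asarray(d_comp_ms)
    violation_prob = float(np.mean(total >= d_th_ms))
    return violation_prob, violation_prob <= eps

rng = np.random.default_rng(1)
d_comm = rng.exponential(4.0, size=10_000)   # synthetic communication delays (ms)
d_comp = rng.exponential(6.0, size=10_000)   # synthetic computing delays (ms)
print(meets_latency_constraint(d_comm, d_comp))
```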
![image](Fig3)
![image](Fig4R)
To maintain a smooth game play in case of unsuccessful HD frame delivery, users perform local computing to generate a low-resolution version of the required frame. In this regard, we propose an optimization framework to maximize successful HD frame delivery subject to reliability and latency constraints. First, a joint proactive computing and caching scheme is developed to render users’ HD frames in the network edge. HD frames that correspond to users’ upcoming movement and head rotation and to the estimated popular actions are proactively computed and cached. The proposed scheme schedules computing tasks following different priority levels, in which real-time computing is prioritized in order to process current frames that are affected by randomly arriving game actions. Subsequently, subject to computing and storage resource constraints, the future HD frames are computed and cached.
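A minimal sketch of such a two-level priority rule is shown below; the task representation, the per-slot computing budget and the example numbers are our own illustrative assumptions, not the actual scheduler.

```python
import heapq

def schedule_tasks(tasks, compute_budget):
    """Serve real-time frame tasks first (priority 0), then proactive ones
    (priority 1), earliest deadline first, within one slot's computing budget.
    Tasks that do not fit in the remaining budget are simply skipped."""
    heap = [(t["priority"], t["deadline_ms"], name) for name, t in tasks.items()]
    heapq.heapify(heap)
    served = []
    while heap and compute_budget > 0:
        _, _, name = heapq.heappop(heap)
        if tasks[name]["load"] <= compute_budget:
            compute_budget -= tasks[name]["load"]
            served.append(name)
    return served

tasks = {
    "frame_u1_now":  {"priority": 0, "deadline_ms": 10, "load": 2},
    "frame_u2_now":  {"priority": 0, "deadline_ms": 8,  "load": 2},
    "frame_u1_next": {"priority": 1, "deadline_ms": 30, "load": 1},
    "frame_u2_next": {"priority": 1, "deadline_ms": 30, "load": 1},
}
print(schedule_tasks(tasks, compute_budget=5))
```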
Following the computation of HD frames, a matching algorithm based on Deferred Acceptance (DA) matching [@gale_shapley] is considered to allocate mmWave transmission resources to users. Matching preferences are selected such that reliability and latency constraints are met. The mmAPs’ preferences over user requests aim to achieve the latency constraint in (1) by prioritizing requests of users with tight latency deadlines. User preferences over different mmAPs aim to maximize the user data rate, whereas dual connectivity is considered by allowing users with an average rate below the rate threshold to be matched to a pair of mmAPs.
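For readers unfamiliar with DA, the following sketch implements the textbook user-proposing deferred acceptance with per-mmAP quotas; the toy preference lists are illustrative, and the latency- and rate-based preference construction and the dual-connectivity extension described above are not reproduced here.

```python
def deferred_acceptance(user_prefs, ap_rank, ap_quota):
    """One-to-many deferred acceptance: users propose in preference order,
    each mmAP tentatively keeps its best proposals up to its quota."""
    next_choice = {u: 0 for u in user_prefs}
    matched = {ap: [] for ap in ap_rank}
    free = list(user_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                                  # u has exhausted its list
        ap = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[ap].append(u)
        matched[ap].sort(key=lambda v: ap_rank[ap].index(v))
        if len(matched[ap]) > ap_quota[ap]:
            free.append(matched[ap].pop())            # worst proposal bounces back
    return matched

user_prefs = {"u1": ["a1", "a2"], "u2": ["a1", "a2"], "u3": ["a1", "a2"]}
ap_rank = {"a1": ["u2", "u1", "u3"], "a2": ["u1", "u3", "u2"]}
print(deferred_acceptance(user_prefs, ap_rank, {"a1": 1, "a2": 2}))
```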
![image](Fig5ab)
Next, we show and analyze the results of the proposed approach obtained from extensive system-level simulations. For the sake of comparison, we also plot two baseline schemes: Baseline 1 with reactive computing (in which all computing is carried out in real-time) and Baseline 2 with proactive computing; neither Baseline 1 nor Baseline 2 has MC capability. The results therein have been averaged over 50 random game play instance topologies. Moreover, to give an idea of the size of the confidence intervals, 99% confidence level margins of error (ME) have been computed, and the lowest and highest ME from all possible configurations are provided.
Latency Performance {#latency-performance .unnumbered}
-------------------
First, we show the delay performance of the proposed approach with different numbers of players, each of which has a rate requirement of $2$ Gbps. By setting the parameters in (1) to $D_{\textrm{th}}=20$ ms and $\epsilon=0.01$ to reflect the motion sickness limit, we plot the average total delay as well as the $99^{\textrm{th}}$ percentile delay performance of the proposed approach against the baseline schemes. From Figure \[fig:fig3\], we can see that the proposed approach significantly minimizes the service delay in different network conditions. Moreover, by looking into the $99^{\textrm{th}}$ percentile communication delay, we find that the proposed scheme outperforms the proactive Baseline 2 scheme by leveraging MC to minimize the latency of wireless frame delivery.
Reliability, Latency and Rate Tradeoffs {#reliability-latency-and-rate-tradeoffs .unnumbered}
---------------------------------------
Next, we show the tradeoffs of reliability, latency and service rate performance of the proposed scheme. Different results are obtained by varying the latency threshold in (1), while setting $\epsilon=0.01$ and the number of players to $16$. Reliability is measured by the probability of experiencing a communication delay below a threshold of $10$ ms. In Figure \[fig:fig4\], we can see that there exists a tradeoff between the user data rate and the reliability and communication latency. Imposing a stringent latency constraint guarantees high reliability by serving requests with tight delay bounds. This comes at the expense of a lower service rate and, hence, lower frame quality.
Average Delay Performance {#average-delay-performance .unnumbered}
-------------------------
Figure \[fig:delay\_results\] compares the total delay performance of the proposed scheme against the reactive and proactive baseline schemes in different network conditions. In Figure \[fig:delay\_results\]-a, it is shown that as the cache size increases, the average computing delay is significantly reduced. This reduction is due to caching more HD frames following popular game actions, which minimizes the computation delay as compared to the reactive baseline scheme. The effect of both proactivity and MC on the delay performance is also evident in Figure \[fig:delay\_results\]-b, where the total VR service delay is plotted against the game dynamics, defined as the impulse action arrival intensity (actions per player per second). For all schemes, higher delay values are experienced as the game dynamics increase, due to having to process more frames in real-time. The proposed scheme is shown to leverage both proactivity and MC to minimize the service delay in different gaming traffic conditions.
Conclusion {#conclusion .unnumbered}
==========
In this article, we have discussed the main requirements for an interconnected wireless VR, MR and AR. We have highlighted the limitations of today’s VR applications and presented the key enablers to achieve the vision of future ultra-reliable and low latency VR. Among these enablers, the use of mmWave communication, mobile edge computing and proactive caching are instrumental in enabling this vision. In this respect, our case study demonstrated the performance gains and the underlying tradeoffs inherent to wireless VR networks.
Acknowledgments {#sec:ack .unnumbered}
===============
This research was partially supported by the Academy of Finland project CARMA, the NOKIA donation project FOGGY, the Thule Institute strategic project SAFARI and by the Spanish Ministerio de Economia y Competitividad (MINECO) under grant TEC2016-80090-C2-2-R (5RANVIR).
[10]{}
B. Soret *et al.*, “Fundamental tradeoffs among reliability, latency and throughput in cellular networks,” in *Proc. IEEE Global Telecommun. Conf. (GLOBECOM) Workshops*, 2014, pp. 1391–1396.
E. Bastug *et al.*, “[Toward Interconnected Virtual Reality: Opportunities, Challenges, and Enablers]{},” *[IEEE]{} Commun. Mag.*, vol. 55, no. 6, pp. 110–117, June 2017.
F. Qian *et al.*, “Optimizing 360$^\circ$ video delivery over cellular networks,” in *Proc. 5th Workshop on All Things Cellular: Operations, Applications and Challenges*, ser. ATC ’16, New York, NY, USA, 2016, pp. 1–6.
R. Ju *et al.*, “[Ultra Wide View Based Panoramic VR Streaming]{},” in *Proc. ACM SIGCOMM. Workshop on Virtual Reality and Augmented Reality Network (VR/AR Network)*, 2017.
K. Doppler *et al.*, “[On Wireless Networks for the Era of Mixed Reality]{},” in *Proc. Eur. Conf. on Networks and Commun. (EuCNC)*, June 2017, pp. 1–6.
S. Mangiante *et al.*, “[VR is on the Edge: How to Deliver 360$^\circ$ Videos in Mobile Networks]{},” in *Proc. ACM SIGCOMM. Workshop on Virtual Reality and Augmented Reality Network (VR/AR Network)*, 2017.
A. Al-Shuwaili *et al.*, “Energy-efficient resource allocation for mobile edge computing-based augmented reality applications,” *[IEEE]{} Wireless Commun. Lett.*, vol. 6, no. 3, pp. 398–401, June 2017.
I. Rodriguez *et al.*, “Analysis of 38 [GHz]{} [mmWave]{} propagation characteristics of urban scenarios,” in *Proc. 21th European Wireless Conference*, May 2015, pp. 1–8.
T. S. Rappaport *et al.*, “[Millimeter Wave Mobile Communications for 5G Cellular: It Will Work!]{}” *IEEE Access*, vol. 1, pp. 335–349, 2013.
S. Rangan *et al.*, “[Millimeter-Wave Cellular Wireless Networks: Potentials and Challenges]{},” *Proc. IEEE*, vol. 102, no. 3, pp. 366–385, Mar 2015.
R. Ford *et al.*, “[Achieving Ultra-Low Latency in [5G]{} Millimeter Wave Cellular Networks]{},” *[IEEE]{} Commun. Mag.*, vol. 55, no. 3, pp. 196–203, Mar 2017.
S. Barbarossa *et al.*, “[Overbooking Radio and Computation Resources in mmW-Mobile Edge Computing to Reduce Vulnerability to Channel Intermittency]{},” in *Proc. Eur. Conf. on Networks and Commun. (EuCNC)*, 2017, pp. 1–5.
O. Abari *et al.*, “[Cutting the Cord in Virtual Reality]{},” in *Proc. 15th ACM Workshop on Hot Topics in Networks (HotNets)*, New York, NY, USA, 2016, pp. 162–168.
K. Lee *et al.*, “[Outatime: Using Speculation to Enable Low-Latency Continuous Interaction for Mobile Cloud Gaming]{},” in *Proc. Annu. Int. Conf. on Mobile Syst., Appl. and Serv. (MobiSys)*, 2015, pp. 151–165.
A. Roth *et al.*, “Two-sided matching: A study in game-theoretic modeling and analysis,” *Cambridge University Press, Cambridge*, 1992.
Biographies {#biographies .unnumbered}
===========
[MOHAMMED S. ELBAMBY]{} (mohammed.elbamby@oulu.fi) received the B.Sc. degree (Hons.) in Electronics and Communications Engineering from the Institute of Aviation Engineering and Technology, Egypt, in 2010, and the M.Sc. degree in Communications Engineering from Cairo University, Egypt, in 2013. He is currently pursuing the Dr.Tech. degree with the University of Oulu. After receiving the M.Sc. degree, he joined the Centre for Wireless Communications, University of Oulu. His research interests include resource optimization, uplink and downlink configuration, fog networking, and caching in wireless cellular networks. He received the Best Student Paper Award from the European Conference on Networks and Communications in 2017.
[CRISTINA PERFECTO]{} (cristina.perfecto@ehu.eus) is a Ph.D. student at the University of the Basque Country (UPV/EHU), Bilbao, Spain. She received her B.Sc. and M.Sc. in Telecommunication Engineering from UPV/EHU in 2000 where she is currently a college associate professor at the Department of Communications Engineering. Her research interests lie on millimeter wave communications and in the application of machine learning in 5G networks. She is currently working towards her Ph.D. focused on the application of multidisciplinary computational intelligence techniques in radio resource management for 5G.
[MEHDI BENNIS]{} \[S’07-AM’08-SM’15\] (mehdi.bennis@oulu.fi) received his M.Sc. degree in Electrical Engineering jointly from the EPFL, Switzerland and the Eurecom Institute, France in 2002.
From 2002 to 2004, he worked as a research engineer at IMRA-EUROPE investigating adaptive equalization algorithms for mobile digital TV. In 2004, he joined the Centre for Wireless Communications (CWC) at the University of Oulu, Finland as a research scientist. In 2008, he was a visiting researcher at the Alcatel-Lucent chair on flexible radio, SUPELEC. He obtained his Ph.D. in December 2009 on spectrum sharing for future mobile cellular systems. Currently Dr. Bennis is an Associate Professor at the University of Oulu and Academy of Finland research fellow. His main research interests are in radio resource management, heterogeneous networks, game theory and machine learning in 5G networks and beyond. He has co-authored one book and published more than 100 research papers in international conferences, journals and book chapters. He was the recipient of the prestigious 2015 Fred W. Ellersick Prize from the IEEE Communications Society, the 2016 Best Tutorial Prize from the IEEE Communications Society and the 2017 EURASIP Best paper Award for the Journal of Wireless Communications and Networks. Dr. Bennis serves as an editor for the IEEE Transactions on Wireless Communication.
[KLAUS DOPPLER]{} (klaus.doppler@nokia-bell-labs.com) is heading the Connectivity Lab in Nokia Bell Labs and his research focus is on indoor networks. In the past, he has been responsible for the wireless research and standardization in Nokia Technologies, incubated a new business line and pioneered research on Device-to-Device Communications underlaying LTE networks. He received his PhD. from Aalto University School of Science and Technology, Helsinki, Finland in 2010 and his MSc. from Graz University of Technology, Austria in 2003.
[^1]: Corresponding to 1K and 2K VR resolution or equivalent 240 pixel lines and SD TV resolution respectively
[^2]: Due to the broadness of the subject, we refer readers interested in mmWave communications to the seminal work on mmWave for 5G [@rappaport_mmWWillWork_2013], to [@mmW_challenges_Rangan2015] on potentials and challenges of mmWave communications, and to [@mmW_URLLC_Ford2017] on challenges for achieving URLLC in 5G mmWave cellular networks.
[^3]: We remark that the cellular indoor 60 GHz scenario is one use case among many others. Our proposed approach to jointly combine edge computing with caching and mmWave communications leveraging multi-connectivity holds also for outdoor use and for any other mmWave band, e.g. for the 73 GHz licensed band, if wireless propagation particularities are appropriately addressed.
[^4]: An example of an impulse action is a player firing a gun in a shooting game. As the game play of a subset of players is affected by this action, a video frame that has been already computed for any of them needs to be rendered again.
[^5]: We consider $4$ mmAPs, $4$ servers, $16$ players, $100$ impulse actions with popularity parameter $z=0.8$, and $10$ dBm mmAP transmit power.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
We propose a method that is able to analyze chaotic time series, gained from experimental data. The method allows one to identify scalar time-delay systems. If the dynamics of the system under investigation is governed by a scalar time-delay differential equation of the form $\frac{dy(t)}{dt} = h(y(t),y(t-\tau_0))$, the delay time $\tau_0$ and the function $h$ can be recovered. There are no restrictions on the dimensionality of the chaotic attractor. The method turns out to be insensitive to noise. We successfully apply the method to various time series taken from a computer experiment and two different electronic oscillators.
P.A.C.S.: 05.45.+b
author:
- |
M. J. Bünner, M. Popp, Th. Meyer, A. Kittel, J. Parisi [^1]\
[*Physical Institute, University of Bayreuth, D-95440 Bayreuth, Germany*]{}
date: 'April, 24th, 1996'
title: 'A Tool to Recover Scalar Time-Delay Systems from Experimental Time Series'
---
Time series analysis of chaotic systems has gained much interest in recent years. In particular, embedding of time series in a reconstructed phase space with the help of time-delayed coordinates has been widely used to estimate fractal dimensions of chaotic attractors [@takens; @grassberger] and Lyapunov exponents [@wolf]. The advantage of embedding techniques is that the time series of only one variable has to be analyzed, even if the investigated system is multi-dimensional. Furthermore, they can be applied, in principle, to any dynamical system. Unfortunately, embedding techniques only yield information if the dimensionality of the chaotic attractor under investigation is low. Another drawback is that they do not give any information about the structure of the dynamical system, in the sense of identifying the underlying instabilities. In the following, we propose a method which is tailor-made to identify scalar systems with a time-delay induced instability. We will show that the differential equation can be recovered from the time series if the investigated dynamics obeys a scalar time-delay differential equation. There are no restrictions on the dimensionality of the chaotic attractor. Additionally, the method has the advantage of being insensitive to noise.
We consider the time evolution of scalar time-delay differential equations $$\label{tdde}
\dot{y}(t) = h(y(t),y(t-\tau_0)),$$ with the initial condition $$y(t)=y_0(t), \hspace{1.0cm} -\tau_0<t<0. \nonumber$$ The dynamics is supposed to be bounded in the counter-domain $\cal{D}$, $y(t) \in \cal{D}, \forall$ $t$. In equation (\[tdde\]), the time derivative of $y(t)$ does not only depend on the state of system at the time $t$, but there also exist nonlocal correlations in time, because the function $h$ additionally depends on the time-delayed value $y(t-\tau_0)$. These nonlocal correlations in time enable scalar time-delay systems to exhibit a complex time evolution. The number of positive Lyapunov exponents increases with the delay time $\tau_0$ [@farmer]. Scalar time-delay systems, therefore, constitute a major class of dynamical systems which exhibit hyperchaos [@roessler]. In general, though, the nonlocal correlations in time are not at all obvious from the time series. A state of the system (\[tdde\]) is uniquely defined by a function on an interval of length $\tau_0$. Therefore, the phase space of scalar time-delay systems must be considered as infinite dimensional. The trajectory in the infinite dimensional phase space $\vec{y}(t)=\{y(t'),
t-\tau_0<t'<t\}$ is easily obtained from the time series. The scalar time series $y(t)$, therefore, encompasses the complete information about the trajectory $\vec{y}(t)$ in the infinite dimensional phase space.
The main idea of our analysis method is the following. We project the trajectory $\vec{y}(t)$ from the infinite dimensional phase space to a three-dimensional space which is spanned by the coordinates $(y_{\tau_0}=y(t-\tau_0),y=y(t),\dot{y}=\dot{y}(t))$. In the $(y_{\tau_0},y,\dot{y})$-space the differential equation (\[tdde\]) determines a two-dimensional surface $h$. The projected trajectory $\vec{y}_{\tau_0}(t)=(y(t-\tau_0),y(t),\dot{y}(t))$, therefore, is confined to the surface $h$ and is not able to explore other directions of the $(y_{\tau_0},y,\dot{y})$-space. From this we conjecture that the fractal dimension of the projected attractor has to be between one and two. Furthermore, it follows that any intersection of the chaotic attractor with a surface $k(y_{\tau_0},y,\dot{y})=0$ yields a curve. More precisely, if one transforms the projected trajectory $\vec{y}_{\tau_0}(t)$ to a series of points $\vec{y}_{\tau_0}^i=(y^i_{\tau_0},y^i,\dot{y}^i)$ that fulfill the condition $k(y^i_{\tau_0},y^i,\dot{y}^i)=0$, the series of points $(y^i_{\tau_0},y^i,\dot{y}^i)$ contracts to a curve and its dimension has to be less than or equal to one. In general, it cannot be expected that one is able to project a chaotic attractor of arbitrary dimension to a three-dimensional space in such a way that its projection is embedded in a two-dimensional surface. We nevertheless demonstrate that this is always possible for chaotic attractors of scalar time-delay systems (\[tdde\]).
In the following, we will show that this finding can be used to reveal nonlocal correlations in time from the time series. If the dynamics is of the scalar time-delay type (\[tdde\]), the appropriate delay time $\tau_0$ and the function $h(y,y_{\tau_0})$ can be recovered. The trajectory in the infinite dimensional phase space $\vec{y}(t)$ is projected to several three-dimensional $(y_{\tau},y,\dot{y})$-spaces upon variation of $\tau$. The appropriate value $\tau=\tau_0$ is just the one for which the projected trajectory $\vec{y}_{\tau}$ lies on a surface, representing a fingerprint of the time-delay induced instability. Projecting the trajectory $\vec{y}$ to the $(y_{\tau_0},y,\dot{y})$-space, the projected trajectory yields the surface $h(y,y_{\tau_0})$ in the counter-domain $\cal{D} \times \cal{D}$. With a fit procedure the yet unknown function $h(y,y_{\tau_0})$ can be determined in $\cal{D} \times
\cal{D}$. Therefore, the complete scalar time-delay differential equation has been recovered from the time series. In some cases, it is more convenient to intersect the trajectory $\vec{y}_{\tau}$ in the $(y_{\tau},y,\dot{y})$-space with a surface $k(y_{\tau},y,\dot{y})=0$ which yields a series of points $\vec{y}_{\tau}^i=(y^i_{\tau},y^i,\dot{y}^i)$. For $\tau=\tau_0$, the points come to lie on a curve and the fractal dimension of the point set has to be less than or equal to one.
The analysis method is also applicable if noise is added to the time series. The only effect of additional noise is that the projected time series in the $(y_{\tau_0},y,\dot{y})$-space is not perfectly enclosed in a two-dimensional surface, but the surface is somewhat blurred. If the analysis is done with an intersected trajectory, the alignment of the noisy data is not perfect. The arguments presented above do not require the dynamics to be settled on its chaotic attractor. Therefore, it is also possible to analyze transient chaotic dynamics. Recently, the coexistence of attractors of time-delay systems has been pointed out [@losson]. The only requirement for the analysis method is that the trajectory obeys the time-evolution equation (\[tdde\]) which holds for all coexisting attractors in a scalar time-delay system. Therefore, the method is applicable no matter which attractor the dynamics has settled into. The analysis requires only short time series, which makes it well suited for experimental situations. We successfully apply the method to time series gained from a computer experiment and from two different electronic oscillators. We show the robustness of the method to additional noise by analyzing noisy time series.
We numerically calculated the time series of the scalar time-delay differential equation $$\begin{aligned}
\label{tdde2}
\dot{y}(t) & = & f(y_{\tau_0})- g(y),\\
f(y_{\tau_0}) & = & \frac{2.7y_{\tau_0}}{1+y_{\tau_0}^{10}} +c_0 \nonumber \\
g(y) & = & -0.567y + 18.17y^2 -38.35y^3+28.56y^4-6.8y^5 -c_0 \nonumber\end{aligned}$$ with the initial condition $$y(t) = y_0(t),\hspace{1.0cm} -\tau_0<t<0,\nonumber$$ which is of the form (\[tdde\]) with $h(y_{\tau_0},y)=f(y_{\tau_0})-g(y)$. The function $g$ has been chosen to be non-invertible in the counter-domain $\cal{D}$. The definition of the functions $f$ and $g$ is ambiguous in the sense that adding a constant $c_0$ to $f$ can always be cancelled by subtracting $c_0$ from $g$ without changing $h$ and, therefore, leaving the dynamics of equation (\[tdde2\]) unchanged. The control parameter is the delay time $\tau_0$. Equation (\[tdde2\]) is somewhat similar to the Mackey-Glass equation [@mkg], except for the function $g$, which is linear in the Mackey-Glass system. Part of the time series is shown in Fig. 1. We used $500,000$ data points with a time step of $0.01$ for the analysis. The dimension of the chaotic attractor was estimated with the help of the Grassberger-Procaccia algorithm [@grassberger] to be clearly larger than $5$. To recover the delay time $\tau_0$ and the functions $f$ and $g$ from the time series, we applied the analysis method outlined above. We projected the trajectory $\vec{y}(t)$ from the infinite dimensional phase space to several $(y_{\tau},y,\dot{y})$-spaces under variation of $\tau$ and intersected the projected trajectory $\vec{y}_{\tau}$ with the $(y=1.1)$-plane, which is repeatedly traversed by the trajectory, as can be seen in Fig. 1. The results are the times $t^i$ where the trajectory traverses the $(y=1.1)$-plane and the intersection points $\vec{y}_{\tau}^i=(y^i_{\tau},1.1,\dot{y}^i)$. For $\tau$ being the appropriate value $\tau_0$, the point set $\vec{y}_{\tau_0}^i$ is correlated via equation (\[tdde2\]) $$\label{f}
\dot{y}^i = f(y^i_{\tau_0})- g(1.1)$$ and, therefore, must have a fractal dimension less than or equal to one. Then, we ordered the $(y^i_{\tau},\dot{y}^ i)$-points with respect to the values of $y^i_{\tau}$. A simple measure for the alignment of the points is the length $L$ of a polygon line connecting all ordered points $(y^i_{\tau},\dot{y}^ i)$. The length $L$ as a function of $\tau$ is shown in Fig. 2. For $\tau=0$, $L(\tau)$ is minimal, because the points $(y^i_\tau,\dot{y}^ i)$ are ordered along the diagonal in the $(y^i_\tau,\dot{y}^ i)$-plane. $L(\tau)$ increases with $\tau$ and eventually reaches a plateau, where the points $(y^i_\tau,\dot{y}^ i)$ are maximally uncorrelated. This is due to short-time correlations of the signal. Eventually, $L(\tau)$ decreases again and shows a dip for $\tau$ reaching the appropriate value $\tau_0$. A further decrease of $L(\tau)$ is observed for $\tau=2\tau_0$. In Fig. 3(a)-(c), we show the projections $\vec{y}_{\tau}(t)$ of the trajectory $\vec{y}(t)$ from the infinite dimensional phase space to different $(y_{\tau},y,\dot{y})$-spaces under variation of $\tau$. Clearly, for $\tau$ approaching the appropriate value $\tau_0$, the appearance of the projected trajectory changes. In Fig. 3(c), the projected trajectory is embedded in a surface which is determined by the function $h$. In Fig. 3(d)-(f) we show the point set $(y^i_\tau,\dot{y}^ i)$ resulting from the intersection of the projected trajectory $\vec{y}_{\tau}$ with the $(y=1.1)$-plane. The point set is projected to the $(y_\tau,\dot{y})$-plane. According to equation (\[f\]), the points are aligned along the function $f$ for $\tau=\tau_0$. With the appropriate value $\tau_0$, we are in the position to recover the functions $f$ and $g$ from the time series. The functions $f$ and $g$ are ambiguous with respect to the addition of a constant term $c_0$, as has been outlined above. Therefore, one is free to remove the ambiguity by invoking an additional condition which we choose to be $$\label{norm}
g(1.1)=0.$$ Then, equation (\[f\]) reads $$\label{f_r}
\dot{y}^i = f(y^i_{\tau_0}).$$ Therefore, function $f$ is recovered by analyzing the intersection points $\vec{y}_{\tau_0}^i$ in the $(\dot{y},y_{\tau_0})$-plane. To recover the function $g$, we intersected the time series with the $(y_{\tau_0}=1.1)$-plane. The resulting point set $\vec{y}_{\tau_0}^j=(\dot{y}^j,y^j)$ is correlated via $$\label{g_r}
\dot{y}^j = f(1.1)-g(y^j).$$ The value $f(1.1)$ has been taken from the time series using equation (\[f\_r\]). In Fig. 4(a)-(b), we compare the functions $f$ and $g$, as they have been defined in equation (\[tdde2\]) with the recovery of the functions $f$ and $g$ from the time series. We emphasize that no fit parameter is involved.
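For illustration, a minimal numerical sketch of this recovery procedure is given below: it integrates equation (\[tdde2\]) (with $c_0=0$) by a crude explicit Euler scheme, detects upward crossings of the $(y=1.1)$-plane, and scans the (unnormalized) polygon length $L(\tau)$, which is expected to exhibit a dip near $\tau_0$. The step size, the synthetic initial history, the integration length and the search window are illustrative choices and do not reproduce the computation reported above.

```python
import numpy as np

def f(y):
    return 2.7 * y / (1.0 + y**10)

def g(y):
    return -0.567*y + 18.17*y**2 - 38.35*y**3 + 28.56*y**4 - 6.8*y**5

# crude explicit Euler integration of dy/dt = f(y(t - tau0)) - g(y(t))
dt, tau0 = 0.01, 40.0
lag = int(round(tau0 / dt))
n = 300_000
y = np.empty(n + lag)
y[:lag] = 0.9 + 0.01 * np.random.default_rng(2).standard_normal(lag)  # history y_0
for i in range(lag, n + lag - 1):
    y[i + 1] = y[i] + dt * (f(y[i - lag]) - g(y[i]))
ydot = np.gradient(y, dt)

def polygon_length(tau):
    """Intersect with the (y = 1.1)-plane, order the points (y_tau, ydot) by
    y_tau and return the length of the connecting polygon line."""
    m = int(round(tau / dt))
    up = (y[lag + m:-1] < 1.1) & (y[lag + m + 1:] >= 1.1)   # upward crossings
    idx = np.where(up)[0] + lag + m
    pts = np.column_stack((y[idx - m], ydot[idx]))
    pts = pts[np.argsort(pts[:, 0])]
    return float(np.sum(np.hypot(*np.diff(pts, axis=0).T)))

taus = np.arange(10.0, 60.0, 0.5)
L = np.array([polygon_length(t) for t in taus])
mask = taus > 20.0                  # skip the short-time-correlation regime
print("estimated delay:", taus[mask][np.argmin(L[mask])])
```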
We checked the robustness of the method to additional noise by analyzing noisy time series, which had been produced by adding Gaussian noise to the time series of equation (\[tdde2\]). We analyzed two noisy time series with a signal-to-noise ratio (SNR) of $10$ and $100$. In both cases, the additional noise was partially removed with a nearest-neighbor filter (for SNR = 100, average over six neighbors; for SNR = 10, average over twenty neighbors). After that, the noisy time series were analyzed in the same way as has been described above. The inset of Fig. 2 shows the result of the analysis. The length $L$ of the polygon line exhibits a local minimum for $\tau=\tau_0$. In the case of the time series with an SNR of $10$, the local minimum is again sharp, but somewhat less pronounced. We conjecture that the method is robust with respect to additional noise and, therefore, well suited for the analysis of experimental data.
Finally, we successfully applied the method to experimental time series gained from two different types of electronic oscillators. The first one is the Shinriki oscillator [@shinriki; @reisner]. The dynamics of the second oscillator [@pyragas] is time-delay induced and mimics the dynamics of the Mackey-Glass equation. In both cases, we intersected the trajectory with the $(\dot{y}=0)$-plane. The resulting point set was represented in a $(y_\tau,y)$-space with different values of $\tau$. Then, we ordered the points with respect to $y_\tau$ and the length $L$ of a polygon line connecting all ordered points $(y^i_\tau,y^i)$ was measured. The results are presented in Fig. 5 (a) and Fig. 5 (b). In both cases, $L(\tau)$ has a local minimum for small values of $\tau$ as a result of short-range correlations in time. $L(\tau)$ increases with $\tau$ and reaches a plateau. For the Shinriki oscillator, no further decrease of $L(\tau)$ is observed for increasing $\tau$ (Fig. 5(a)). This finding clearly shows that the dynamics of the Shinriki oscillator is not time-delay induced. Analyzing the Mackey-Glass oscillator (Fig. 5(b)), one finds sharp dips in $L(\tau)$ for $\tau=\tau_0$ and $\tau=2\tau_0$. This is direct evidence for correlations in time, which are induced by the time delay (for details see [@physletta96]). Obviously, the method is able to identify nonlocal correlations in time from the time series. Finally, the nonlinear characteristic of the electronic oscillator is compared to its recovery from the time series (Fig. 5 (c)).
In conclusion, we have presented a method capable of revealing nonlocal correlations in time of scalar systems by analyzing the time series. If the dynamics of the investigated system is governed by a scalar time-delay differential equation, we are able to recover the scalar time-delay differential equation. There are no constraints on the dimensionality of the attractor. Since scalar time-delay systems are able to exhibit high-dimensional chaos, our method might pave the way for inspecting high-dimensional chaotic systems, where conventional time-series analysis techniques already fail. Furthermore, the motion is not required to be settled on its attractor. The method is insensitive to additional noise. We have successfully applied the method to time series gained from a computer experiment and to experimental data gained from two different types of electronic oscillators.
While, in general, the verification of dynamical models is a highly complicated task, we have shown that the identification of scalar time-delay systems can be accomplished easily and, thus, allows a detailed comparison of the model equation with experimental time series. In several disciplines, e.g., hydrodynamics [@villermaux95] , chemistry [@khrustova95], laser physics [@ikeda87], and physiology [@mkg; @longtin90], time-delay effects have been proposed to induce dynamical instabilities. With the help of our method, there is a good chance to verify these models by analyzing the experimental time series. If the dynamics is indeed governed by a time delay, the delay time and the time-evolution equation can be determined. Current and future research activities of the authors concentrate on extending the time-series analysis method to non-scalar time-delay systems as well as to time-delay systems with multiple delay times.
We thankfully acknowledge valuable discussions with J. Peinke and K. Pyragas and financial support of the Deutsche Forschungsgemeinschaft.
[XXX]{} F. Takens, Lect. Notes Math. [**898**]{} (1981) 366. P. Grassberger, I. Procaccia, Physica D [ **9**]{} (1983) 189. A. Wolf, J. B. Swift, H. L. Swinney, J. Vastano, Physica D [**16**]{} (1985) 285. J. D. Farmer, Physica D [**4**]{} (1982) 366. O. E. Rössler, Z. Naturforsch. [**38a**]{} (1983) 788. J. Losson, M. C. Mackey, A. Longtin, Chaos (AIP), [**3**]{} (1993), 167. M. C. Mackey, L. Glass, Science [**197**]{} (1977) 287. M. Shinriki, M. Yamamoto, S. Movi, Proc. IEEE [**69**]{} (1981) 394. B. Reisner, A. Kittel, S. Lück, J. Peinke, J. Parisi, Z. Naturforsch.[ **50a**]{} (1995) 105. A. Namajunas, K. Pyragas, A. Tamasevicius, Phys. Lett. A [**201**]{} (1995) 42. M. J. Bünner, M. Popp, Th. Meyer, A. Kittel, U. Rau, J. Parisi, Phys. Lett. A [**211**]{} (1996) 345. E. Villermaux, Phys. Rev. Lett. [**75**]{} (1995) 4618. N. Khrustova, G. Veser, A. Mikhailov, Phys. Rev. Lett. [**75**]{} (1995) 3564. K. Ikeda, K. Matsumoto, Physica D [**29**]{} (1987) 223. A. Longtin, J. G. Milton, J. E. Bos, M. C. Mackey, Phys. Rev. A [**41**]{} (1990) 6992.
Figure captions {#figure-captions .unnumbered}
===============
- Time series of the scalar time-delay system (\[tdde2\]) obtained from a computer experiment ($\tau_0=40.00$).
- Length $L$ of the polygon line connecting all ordered points of the projected point set $(y^i_\tau,\dot{y}^i)$ versus $\tau$. $L$ has been normalized so that a maximally uncorrelated point set has the value $L=1.0$. The inset shows a close-up of the $\tau$-axis around the local minimum at $\tau=\tau_0=40.00$. Additionally, $L(\tau)$-curves gained from the analysis of noisy time series are shown (no additional noise – solid line, signal-to-noise ratio of $100$ – open circles, and signal-to-noise ratio of $10$ – squares).
- (a)-(c): Trajectory $\vec{y}_{\tau}(t)$ which has been projected from the infinite dimensional phase space to the $(y_{\tau},y,\dot{y})$-space under variation of $\tau$. (a) $\tau=20.00$. (b) $\tau=39.60$. (c) $\tau=\tau_0=40.00$. (d)-(f): Projected point set $\vec{y}_{\tau}^i=(y^i_\tau,\dot{y}^i)$ resulting from the intersection of the projected trajectory $\vec{y}_{\tau}(t)$ with the $(y=1.1)$-plane under variation of $\tau$. (d) $\tau=20.00$. (e) $\tau=39.60$. (f) $\tau=\tau_0=40.00$.
- \(a) Comparison of the function $f$ (line) of equation (\[tdde2\]) with its recovery from the time series (points). (b) Comparison of the function $g$ (line) of equation (\[tdde2\]) with its recovery from the time series (points).
- Length $L$ of the polygon line connecting all ordered points of the projected point set $\vec{y}_{\tau}^i=(y^i_\tau,y^ i)$ versus $\tau$ for (a) the Shinriki and (b) the Mackey-Glass oscillator. $L(\tau)$ has been normalized so that it has the value $L=1$ for an uncorrelated point set. (c) Comparison of the nonlinear characteristics of the Mackey-Glass oscillator, which is the function $f(y_{\tau_0})$ of an ansatz of the form $h(y,y_{\tau_0}) = f(y_{\tau_0}) + g(y)$, measured directly on the oscillator (line) with its recovery from the time series (dots).
[^1]: published in Phys. Rev. E [**54**]{} (1996) R3082.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We define abstract Sobolev type spaces on $\mathsf{L}^p$-scales, $p\in [1,\infty)$, on Hermitian vector bundles over possibly noncompact manifolds, which are induced by smooth measures and families $\mathfrak{P}$ of linear partial differential operators, and we prove the density of the corresponding smooth Sobolev sections in these spaces under a generalized ellipticity condition on the underlying family. In particular, this implies a covariant version of the Meyers-Serrin theorem on the whole $\mathsf{L}^p$-scale, for arbitrary Riemannian manifolds. Furthermore, we prove a new local elliptic regularity result in $\mathsf{L}^1$ on the Besov scale, which shows that the above generalized ellipticity condition is satisfied on the whole $\mathsf{L}^p$-scale, if some differential operator from $\mathfrak{P}$ that has a sufficiently high (but not necessarily the highest) order is elliptic.'
address: 'Davide Guidetti, [davide.guidetti@unibo.it]{} Batu Güneysu, [gueneysu@math.hu-berlin.de]{} Diego Pallara, [diego.pallara@unisalento.it]{} '
author:
- 'Davide Guidetti, Batu Güneysu [and]{} Diego Pallara'
title: '$\mathsf{L}^1$-elliptic regularity and $H=W$ on the whole $\mathsf{L}^p$-scale on arbitrary manifolds'
---
Introduction
============
Let us recall that a classical result of Meyers and Serrin [@meyers] states that for any open subset $U$ of the Euclidean ${\mathbb{R}}^m$ and any $k\in{\mathbb{N}}_{\geq 0}$, $ p\in [1,\infty)$, one has $\mathsf{W}^{k,p}(U)=\mathsf{H}^{k,p}(U)$, where $\mathsf{W}^{k,p}(U)$ is given as the complex Banach space of all $f\in\mathsf{L}^1_{\mathrm{loc}}(U)$ such that $$\begin{aligned}
\label{ms}
\left\|f\right\|_{k,p}:=
\Big(\int_{U}|f(x)|^p{{\rm d}}x\Big)^{1/p}+
\sum_{|\alpha|\leq k}\Big(\int_{U}|\partial^{\alpha}f(x)|^p{{\rm d}}x\Big)^{1/p}<\infty,\end{aligned}$$ and where $\mathsf{H}^{k,p}(U)$ is defined as the closure of $\mathsf{W}^{k,p}(U)\cap\mathsf{C}^{\infty}(U)$ with respect to the norm $\left\|\bullet\right\|_{k,p}$.\
On the other hand, thinking for example of Riemannian geometry on noncompact manifolds, it becomes very natural to ask under what minimal assumptions one can replace the partial derivatives in (\[ms\]) by more general partial differential operators that are nonelliptic and typically vector-valued. In fact, in order to deal with all possible geometric situations simultaneously, we introduce an abstract notion of a $\mathfrak{P}$-Sobolev space $\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)$ of $\mathsf{L}^p_{\mu}$-sections in a Hermitian vector bundle $E\to X$ (cf. Definition \[asa\]). Here, $X$ is a possibly noncompact manifold, $\mu$ is a smooth measure on $X$ (which may, but need not, come from a Riemannian metric in general), $F_1,\dots,F_s\to X$ are Hermitian vector bundles, and the datum $\mathfrak{P}=\{P_1,\dots,P_s\}$ is a finite collection such that each $P_j$ is a linear partial differential operator of order $\leq k_j$ from $E$ to $F_j$. With $\left\|\bullet\right\|_{\mathfrak{P},p,\mu}$ the canonical norm on $\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)$, the question we address here is:\
*Under which assumptions on $\mathfrak{P}$ is the space of smooth Sobolev sections* $$\begin{aligned}
\label{frage}
\Gamma_{\mathsf{C}^{\infty}}(X,E)\cap \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)\>
\text{\em dense in } \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)\>
\text{\em w.r.t. $\left\|\bullet\right\|_{\mathfrak{P},p,\mu}$?}\end{aligned}$$ To this end, the highest differential order $k:=\max\{k_1,\dots,k_s\}$ of the system $\mathfrak{P}$, plays an essential role: Namely, it turns out that even on an entirely local level (cf. Lemma \[molli\]), the machinery of Friedrichs mollifiers precisely applies $$\begin{aligned}
\text{ either if $k<2$, or if } \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)
\subset \Gamma_{\mathsf{W}^{k-1,p}_{\mathrm{loc}}}(X,E).\label{deds}\end{aligned}$$ With this observation, our basic abstract result *Theorem \[main\] precisely states that the local regularity (\[deds\]) implies (\[frage\]), and that furthermore any compactly supported element of $\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)$ can even be approximated by a sequence from $\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(X,E)$.*\
This result turns out to be optimal in the following sense (cf. Example \[bep\]): There are differential operators $P$ such that for any $q>1$ one has $$\begin{aligned}
&\mathsf{W}^{P,q}\subset \mathsf{W}^{\mathrm{ord}(P)-2,q}_{\mathrm{loc}},
\>\>\> \mathsf{W}^{P,q}\not\subset \mathsf{W}^{\mathrm{ord}(P)-1,q}_{\mathrm{loc}},\\
&\mathsf{C}^{\infty} \cap \mathsf{W}^{P,q} \text{ is not dense in } \mathsf{W}^{P,q}.\end{aligned}$$ Thus it remains to examine the regularity assumption (\[deds\]) in applications, where of course we can assume $k\geq 2$.\
To this end, it is clear from classical local elliptic estimates that for $p>1$, (\[deds\]) is satisfied whenever there is some elliptic $P_j$ with $k_j\geq k-1$. However, the $\mathsf{L}^1$-case $p=1$ is much more subtle, since the usual local elliptic regularity is well-known to fail here (cf. Remark \[counte\]). Nevertheless, *in Theorem \[regu\] we prove a new modified local elliptic regularity result on the scale of Besov spaces, which implies that in the $\mathsf{L}^1$-situation, one loses exactly one differential order of regularity when compared with the usual local elliptic $\mathsf{L}^p$, $p>1$, estimates.* This in turn shows that for $p=1$, (\[deds\]) is satisfied whenever there is some elliptic $P_j$ with $k_j= k$. These observations are collected in Corollary \[ell\]. The proof of Theorem \[regu\] relies on *a new existence and uniqueness result (cf. Proposition \[pr2\] in Section \[beweis2\]) for certain systems of linear elliptic PDEs on the Besov scale,* which is certainly also of independent interest.\
Finally, we would like to point out that the regularity (\[deds\]) does not require the ellipticity of any $P_j$ at all. Indeed, *in Corollary \[meyers\] we prove that if $(M,g)$ is a possibly noncompact Riemannian manifold and $E\to M$ a Hermitian vector bundle with a (not necessarily Hermitian) covariant derivative $\nabla$, then for any $s\in{\mathbb{N}}$ and $p\in (1,\infty)$, the Sobolev space* $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E):=
\Gamma_{\{\nabla^{1}_g,\dots,\nabla^{s}_g\},\mathsf{L}^p_{\mathrm{vol}_g}}(M,E).$$ *satisfies* $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)\subset\Gamma_{\mathsf{W}^{s,p}_{\mathrm{loc}}}(M,E),$$ which means that we do not even have to use the full strenght of Theorem \[main\] here. To the best of our knowledge, the resulting density of $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)\cap \Gamma_{\mathsf{C}^{\infty}}(M,E)
\text{ in } \Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)$$ is entirely new in this generality.
Main results
============
Throughout, *let $X$ be a smooth $m$-manifold (without boundary)* which is allowed to be noncompact. For subsets $Y_1,Y_2\subset X$ we write $$Y_1\Subset Y_2, \text{ if and only if $Y_1$ is open,
$\overline{Y_1}\subset Y_2$, and $\overline{Y_1}$ is compact.}$$ For any $k\in{\mathbb{N}}_{\geq 0}$, we denote by ${\mathbb{N}}^m_{k}$ the set of multi-indices $\alpha\in ({\mathbb{N}}_{\geq 0})^m$ with $|\alpha|:=\sum^{m}_{j=1}\alpha_j \leq k$. Note that $(0,\dots,0)\in {\mathbb{N}}^m_k$ by definition, for any $k$.\
In order to be able to deal with Banach structures that are not necessarily induced by Riemannian structures [@Br], *we fix a smooth measure $\mu$ on $X$,* that is, $\mu$ is a Borel measure on $X$ such that for any chart $U$ for $X$ there is a (necessarily unique) $0<\mu_U\in\mathsf{C}^{\infty}(U)$ with the property that $$\mu(A)=\int_A \mu_U(x^1,\cdots, x^m){{\rm d}}x^1\cdots{{\rm d}}x^m\>\text{ for all Borel sets $A\subset U$},$$ where ${{\rm d}}x={{\rm d}}x^1\cdots{{\rm d}}x^m$ stands for Lebesgue integration.\
We always understand our linear spaces to be complex-valued, and an index $\mathrm{c}$ in spaces of sections or functions stands for compact support, where in the context of equivalence classes (with respect to some/all $\mu$ as above) of Borel measurable sections, compact support of course means compact essential support.\
Assume for the moment that we are given smooth complex vector bundles $E\to X$, $F\to X$, with $\mathrm{rank}(E)=\ell_0$ and $\mathrm{rank}(F)=\ell_1$. The linear space of smooth sections in $E\to X $ is denoted by $\Gamma_{\mathsf{C}^{\infty}}(X,E)$, and the linear space of equivalence classes of Borel sections in $E\to X$ is simply written as $\Gamma(X,E)$.\
We continue by listing some conventions and some notation concerning linear differential operators and distributions on manifolds. We start by adding the following two classical definitions on linear differential operators for the convenience of the reader, who can find these and the corresponding basics in [@nico; @wald; @chaz; @lawson]. We also refer the reader to [@batu] (and the references therein) for the jet bundle aspects of (possibly nonlinear) partial differential operators.
\[ops\] A morphism of linear sheaves $$P: \Gamma_{\mathsf{C}^{\infty}}(X,E)\longrightarrow \Gamma_{\mathsf{C}^{\infty}}(X,F)$$ is called a *smooth linear partial differential operator of order at most $k$*, if for any chart $$x=(x^1,\dots,x^m):U\longrightarrow {\mathbb{R}}^m$$ for $X$ which admits local frames $e_1,\dots,e_{\ell_0}\in\Gamma_{\mathsf{C}^{\infty}}(U,E)$, $f_1,\dots,f_{\ell_1}\in\Gamma_{\mathsf{C}^{\infty}}(U,F)$, and any $\alpha\in{\mathbb{N}}^m_k$, there are (necessarily uniquely determined) smooth functions $$P_{\alpha}:U\longrightarrow \mathrm{Mat}({\mathbb{C}};\ell_0\times \ell_1)$$ such that for all $(\phi^1,\dots,\phi^{\ell_0})\in\mathsf{C}^{\infty}(U,{\mathbb{C}}^{\ell_0})$ one has $$P\sum^{\ell_0}_{i=1}\phi^i e_i=\sum^{\ell_1}_{j=1}\sum^{\ell_0}_{i=1}\sum_{\alpha\in{\mathbb{N}}^m_k}
P_{\alpha ij}\frac{\partial^{|\alpha|}\phi^{i}}{\partial x^{\alpha}}f_j\>\>\text{ in $U$}.$$
The linear space of smooth at most $k$-th order linear partial differential operators is denoted by $\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;E,F)$.
Let $P\in \mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;E,F)$.\
*a)* The (linear principal) symbol of $P$ is the unique morphism of smooth complex vector bundles over $X$, $$\begin{aligned}
\sigma_P: (\mathrm{T}^*X)^{\odot k}\longrightarrow \mathrm{Hom}(E,F),\end{aligned}$$ where $\odot$ stands for the symmetric tensor product, such that for all $x:U\to {\mathbb{R}}^m$, $e_1,\dots,e_{\ell_0}$, $f_1,\dots,f_{\ell_1}$, $\alpha$ as in Definition \[ops\] one has $$\sigma_{P}\left({{\rm d}}x^{ \odot\alpha} \right)e_i= \sum^{\ell_1}_{j=1}P_{\alpha i j}f_j\>\>\text{ in $U$}.$$ *b)* $P$ is called *elliptic*, if for all $x\in X$, $v\in\mathrm{T}^*_x X\setminus \{0\}$, the linear map $$\sigma_{P,x}(v ):=\sigma_{P,x}(v^{\otimes k}): E_x\longrightarrow F_x
\> \text{ is in $\mathrm{GL}(E_x,F_x)$.}$$
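For orientation, let us note a standard special case, included here only as an illustration: for the Euclidean Laplacian $\Delta=\sum^m_{j=1}\partial^2/\partial (x^j)^2$, acting on functions on an open subset of ${\mathbb{R}}^m$, the above conventions give $$\sigma_{\Delta,x}(v)=\sum^m_{j=1}v_j^2\neq 0\>\>\text{ for all $v=\sum^m_{j=1}v_j{{\rm d}}x^j\in\mathrm{T}^*_x{\mathbb{R}}^m\setminus\{0\}$,}$$ so that $\Delta$ is elliptic, whereas the first order operator $\partial/\partial x^1$ on an open subset of ${\mathbb{R}}^2$ has symbol $\sigma_x(v)=v_1$, which vanishes for $v={{\rm d}}x^2$, so that this operator is not elliptic.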
We recall that the linear space $\Gamma_{\mathsf{W}^{k,p}_{\mathrm{loc}}}(X,E)$ of *local $\mathsf{L}^{p}$-Sobolev sections in $E\to X$ with differential order $k$* is defined to be the space of $f\in\Gamma(X,E)$ such that for all charts $U\subset X$ which admit a local frame $e_1,\dots,e_{\ell_0}\in\Gamma_{\mathsf{C}^{\infty}}(U,E)$, one has $$(f^1,\dots,f^{{\ell_0}}) \in \mathsf{W}^{k,p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0}),\>\>
\text { if $f=\sum^{\ell_0}_{j=1}f^je_j$ in $U$.}$$ In particular, we have the space of locally $p$-integrable sections $$\Gamma_{\mathsf{L}^p_{\mathrm{loc}}}(X,E):=\Gamma_{\mathsf{W}^{0,p}_{\mathrm{loc}}}(X,E).$$ The linear space of *distributional sections* in $E\to X$ is defined by $$\begin{aligned}
&\Gamma_{\mathsf{D}\rq{}}(X,E):=\text{ topological dual of $\Gamma_{\mathsf{D}}(X,E)$, where}\\
&\Gamma_{\mathsf{D}}(X,E):=\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(X,E^*\otimes |X|),\end{aligned}$$ and where $|X|\to X$ denotes the bundle of $1$-densities, which is a smooth complex line bundle. We have the canonical embedding $$\Gamma_{\mathsf{L}^1_{\mathrm{loc}}}(X,E){\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}}\Gamma_{\mathsf{D}\rq{}}(X,E),$$ given by identifying $f\in\Gamma_{\mathsf{L}^1_{\mathrm{loc}}}(X,E)$ with the distribution $$\left\langle f,\Psi\right\rangle := \int_X \Psi[f],\>\>\>\Psi\in\Gamma_{\mathsf{D}}(X,E).$$ We continue with (cf. Proposition 1.2.12 in [@wald], or [@chaz]):
For any $P\in\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;E,F)$, there is a unique differential operator $$P^t\in\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;F^*\otimes |X|,E^*\otimes |X|),$$ *the transpose of $P$*, which satisfies $$\begin{aligned}
&\int_X P^t\Psi[\phi]=\int_X \Psi[P\phi] \>\>
\text{ for all $\Psi \in\Gamma_{\mathsf{C}^{\infty}}(X,F^*\otimes |X|)$,}
\\ \nonumber
& \text{ and all $\phi\in \Gamma_{\mathsf{C}^{\infty}}(X,E)$, with either $\phi$ or $\Psi$
compactly supported.} \end{aligned}$$
Using the transpose, one extends any $P\in \mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;E,F)$ canonically to a linear map $$P: \Gamma_{\mathsf{D}\rq{}}(X,E)\longrightarrow \Gamma_{\mathsf{D}\rq{}}(X,F),$$ by requiring $$\left\langle Ph,\phi\right\rangle = \left\langle h,P^t\phi\right\rangle\>
\text{ for all $h\in \Gamma_{\mathsf{D}\rq{}}(X,E)$, $\phi\in \Gamma_{\mathsf{D}}(X,F)$}.$$
\[adjo\]1. Assume that $E\to X$ and $F\to X$ come equipped with smooth Hermitian structures $h_E(\bullet,\bullet)$ and $h_F(\bullet,\bullet)$, respectively. We define $P^{\mu}\in\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;F^*,E^*)$ by $$(P^{\mu}\psi )\otimes \mu := P^{t}(\psi\otimes \mu),\>\> \psi \in\Gamma_{\mathsf{C}^{\infty}}(X,F^*),$$ and $P^{\mu, h_E,h_F}\in\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;F,E)$ by the diagram $$\begin{xy}
\xymatrix{
\Gamma_{\mathsf{C}^{\infty}}(X,F^*)\>\>\ar[rr]^{P^{\mu}} & & \>\>\Gamma_{\mathsf{C}^{\infty}}(X,E^*)\ar[dd]^{\tilde{h}_E^{-1}} \\ \\
\Gamma_{\mathsf{C}^{\infty}}(X,F)\ar[uu]^{\tilde{h}_F}\ar@{.>}_{P^{\mu, h_E,h_F}}[rr] & & \Gamma_{\mathsf{C}^{\infty}}(X,E)
}
\end{xy}$$ where $\tilde{h_E}$ and $\tilde{h_F}$ stand for the isomorphisms of $\mathsf{C}^{\infty}(X)$-modules which are induced by $h_E$ and $h_F$, respectively. Then $P^{\mu, h_E,h_F}$ is the uniquely determined element of $\mathscr{D}^{(k)}_{\mathsf{C}^{\infty}}(X;F,E)$ which satisfies $$\begin{aligned}
&\int_X h_E\left(P^{\mu, h_E,h_F} \psi,\phi\right){{\rm d}}\mu = \int_X h_F\left(\psi , P\phi\right) {{\rm d}}\mu \end{aligned}$$ for all $\psi \in\Gamma_{\mathsf{C}^{\infty}}(X,F)$, $\phi\in \Gamma_{\mathsf{C}^{\infty}}(X,E)$ with either $\phi$ or $\psi$ compactly supported.\
2. Given $f_1\in \Gamma_{\mathsf{L}^{1}_{\mathrm{loc}}}(X,E)$, $f_2\in \Gamma_{\mathsf{L}^{1}_{\mathrm{loc}}}(X,F)$ one has $Pf_1=f_2$, if and only if for *some* triple $(\mu,h_E,h_F)$ as above it holds that $$\begin{aligned}
&\int_X h_E\left(P^{\mu, h_E,h_F} \psi,f_1\right){{\rm d}}\mu = \int_X h_F\left(\psi , f_2\right) {{\rm d}}\mu\>\text{ for all $\psi \in\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(X,F)$ },\label{sdf}\end{aligned}$$ and then (\[sdf\]) automatically holds for *all* such triples $(\mu,h_E,h_F)$.
From now on, given a smooth *Hermitian* vector bundle $E\to X$ and $p\in [1,\infty]$, abusing the notation as usual, $(\bullet,\bullet)_x$ denotes the inner product on the fiber $E_x$, with $\left|\bullet\right|_x$ the corresponding norm, and we get a Banach space $$\Gamma_{\mathsf{L}^p_{\mu}}(X,E):=
\left\{f\left|f\in \Gamma(X,E),\left\| f\right\|_{p,\mu}<\infty\right\}\right.,$$ where $$\left\| f\right\|_{p,\mu}:=\begin{cases}&\Big(\int_X \big|f(x)\big|^p_x \mu({{\rm d}}x)\Big)^{1/p}
,\text{ if $p<\infty$} \\
&\inf\{C|C\geq 0, |f|\leq C\text{ $\mu$-a.e.}\},\text{ if $p=\infty$.}\end{cases}$$ Of course, $\Gamma_{\mathsf{L}^2_{\mu}}(X,E)$ becomes a Hilbert space with its canonical inner product.\
The following definition is in the center of this paper:
\[asa\] Let $p\in [1,\infty]$, $s\in{\mathbb{N}}$, $k_1\dots,k_s\in{\mathbb{N}}_{\geq 0}$, and for each $i\in\{1,\dots,s\}$ let $E\to X$, $F_i\to X$ be smooth Hermitian vector bundles and let $\mathfrak{P}:=\{P_1,\dots,P_s\}$ with $P_{i}\in\allowbreak \mathscr{D}_{\mathsf{C}^{\infty}}^{(k_i)}(X;E,F_i)$. Then the Banach space $$\begin{aligned}
&\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)\\
&:=\left.\Big\{f\right|f\in\Gamma_{\mathsf{L}^{p}_{\mu}}(X,E),
P_{i} f\in\Gamma_{ \mathsf{L}^{p}_{\mu} }(X,F_i)\text{ for all $i\in\{1,\dots,s\}$}\Big\}\\
&\>\>\>\> \>\>\>\>\subset \Gamma_{\mathsf{L}^{p}_{\mu}}(X,E),\>
\text{ with norm $\left\| f\right\|_{\mathfrak{P},p,\mu}
:=\left(\left\|f\right\|^p_{p,\mu}+\sum^s_{i=1}\left\|P_{i} f\right\|^p_{p,\mu}\right)^{1/p}$},\end{aligned}$$ is called the *$\mathfrak{P}$-Sobolev space of $\mathsf{L}^{p}_{\mu}$-sections* in $E\to X$.
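For instance, only to fix ideas: taking $X=U\subset{\mathbb{R}}^m$ open, $\mu$ the Lebesgue measure, $E$ and all $F_i$ the trivial line bundle $U\times{\mathbb{C}}$ with their standard Hermitian structures, and $\mathfrak{P}=\{\partial^{\alpha}:\alpha\in{\mathbb{N}}^m_k\}$, one recovers the classical space $\mathsf{W}^{k,p}(U)$ from the introduction, with an equivalent norm.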
Note that in the above situation, $\Gamma_{\mathsf{W}^{\mathfrak{P},2}_{\mu}}(X,E)$ is a Hilbert space with the obvious inner product, and we have the linear space $$\begin{aligned}
&\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mathrm{loc}}}(X,E)\\
&:=
\left\{f\left|f\in\Gamma_{\mathsf{L}^p_{\mathrm{loc}}}(X,E),P_{i}
f\in\Gamma_{ \mathsf{L}^{p}_{\mathrm{loc}} }(X,F_i)\text{ for all $i\in\{1,\dots,s\}$}
\right\}\right. \end{aligned}$$ of *locally $p$-integrable sections in $E\to X$ with differential structure $\mathfrak{P}$,* which of course does not depend on any Hermitian structures.\
In this context, let us record the following local elliptic regularity result, whose $\mathsf{L}^{p}_{\mathrm{loc}}$-case, $p\in (1,\infty)$, is classical (see for example Theorem 10.3.6 in [@nico]), while the $\mathsf{L}^{1}_{\mathrm{loc}}$-case seems to be entirely new, and can be considered as our first main result:
\[regu\] Let $U\subset {\mathbb{R}}^m$ be open, let $k\in{\mathbb{N}}_{\geq 0}$, $\ell\in{\mathbb{N}}$, and let $P\in{\mathscr{D}}^{(k)}_{\mathsf{C}^{\infty}}(U;{\mathbb{C}}^{\ell},{\mathbb{C}}^{\ell})$, $$P= \sum_{\alpha\in {\mathbb{N}}^m_{k} } P_{\alpha}\partial^{\alpha},
\>\text{ with $P_{\alpha}:U\longrightarrow\mathrm{Mat}({\mathbb{C}};\ell\times \ell)$
in $\mathsf{C}^{\infty}$}$$ be elliptic. Then the following results hold true:\
*a)* If $p\in (1,\infty)$, then for any $f\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ with $Pf\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ one has $f\in\mathsf{W}^{k,p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$.\
*b)* For any $f\in\mathsf{L}^{1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ with $Pf\in \mathsf{L}^{1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ it holds that $f\in\mathsf{W}^{k-1,1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$.
Before we come to the proof, a few remarks are in order:
\[counte\] In fact, we are going to prove the following much stronger statement in part b): Under the assumptions of Theorem \[regu\] b), for any $f\in\mathsf{L}^{1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ with $Pf\in \mathsf{L}^{1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$, one has that for any $\psi\in\mathsf{C}^{\infty}_{\mathrm{c}}(U)$, the distribution $\psi f$ is in the Besov space $$\mathsf{B}^{k}_{1,\infty}({\mathbb{R}}^m,{\mathbb{C}}^{\ell})\subset\mathsf{W}^{k-1,1}({\mathbb{R}}^m,{\mathbb{C}}^{\ell}).$$ This in turn is proved using a new existence and uniqueness result (cf. Proposition \[pr2\] in Section \[beweis2\]) for certain systems of linear elliptic PDEs on the Besov scale. We refer the reader to Section \[beweis2\] for the definition and essential properties of the Besov spaces $\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)\subset\mathsf{S}\rq{}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ (with $\mathsf{S}\rq{}({\mathbb{R}}^m)$ the Schwartz distributions), where $\beta\in{\mathbb{R}}$, $p,q\in [1,\infty]$. Note that in the situation of Theorem \[regu\] b), the assumptions $f,Pf\in\mathsf{L}^{1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$ do not imply $f\in\mathsf{W}^{k,1}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell})$: An explicit counterexample has been given in [@rother] for the Euclidean Laplace operator. In fact, it follows from results of [@davide2] that for any strongly elliptic differential operator $P$ in ${\mathbb{R}}^m$ with constant coefficients and order $2k$, there is an $f$ with $f,Pf\in\mathsf{L}^{1}_{\mathrm{loc}}({\mathbb{R}}^m)$, and $f\notin\mathsf{W}^{2k,1}_{\mathrm{loc}}({\mathbb{R}}^m)$. In this sense, the above $k$-th order Besov regularity can be considered to be optimal.
In this proof, we denote with $(\bullet,\bullet)$ the standard inner product in each ${\mathbb{C}}^n$, and with $\left|\bullet\right|$ the corresponding norm and operator norm, and $\mathrm{B}_r(x)$ stands for the corresponding open ball of radius $r$ around $x$. Let us consider the formally self-adjoint elliptic partial differential operator $$T:=P^{\dagger}P=
\sum_{\alpha \in {\mathbb{N}}_{2k}^m}T_\alpha \partial^\alpha\in \mathscr{D}_{\mathsf{C}^\infty}^{(2k)}
(U; {\mathbb{C}}^\ell, {\mathbb{C}}^\ell).$$ Here, $P^{\dagger}\in{\mathscr{D}}^{(k)}_{\mathsf{C}^{\infty}}(U;{\mathbb{C}}^{\ell},{\mathbb{C}}^{\ell})$ denotes the usual formal adjoint of $P$, which is well-defined by $$\int_{U} (P^{\dagger}\varphi_1,\varphi_2) {{\rm d}}x=\int_U ( \varphi_1,P\varphi_2){{\rm d}}x,$$ for all $\varphi_1,\varphi_2\in \mathsf{C}^{\infty}(U,{\mathbb{C}}^{\ell})$ one of which has compact support; in other words, $P^{\dagger}$ is nothing but the operator $P^{\mu,h_E,h_F}$ from Remark \[adjo\].1, with respect to the Lebesgue measure and the canonical Hermitian structures on the trivial bundles. By a standard partition of unity argument, it suffices to prove that if $\psi\in\mathsf{C}^{\infty}_{\mathrm{c}}(U)$ with $$\begin{aligned}
\label{incl}
\mathrm{supp}(\psi)\subset \mathrm{B}_{t_0}(x_0)\subset U \end{aligned}$$ for some $x_0\in U,\ t_0>0$ we have $\psi f\in\mathsf{B}^{k}_{1,\infty}({\mathbb{R}}^m,{\mathbb{C}}^{\ell})$. The proof consists of two steps: We first construct a differential operator $Q^{\psi}$ which satisfies the assumptions of Proposition \[pr2\], and which coincides with $T$ near $\mathrm{supp}(\psi)$, and then we apply Proposition \[pr2\] together with a maximality argument to $Q^{\psi}$ to deduce the thesis.\
We also take some $\phi\in\mathsf{C}^{\infty}_{\mathrm{c}}(U)$ with $\phi=1$ on $\mathrm{B}_{t_0}(x_0)$, and for any $0<t<t_0$ we set $$C_{t}:=\max_{y\in \overline{\mathrm{B}_t(x_0)},\ \alpha\in {\mathbb{N}}_{2k}^m,\ i,j}
|T_{\alpha i j}(y)-T_{\alpha i j}(x_0)|,$$ and we pick a $\chi_t\in\mathsf{C}^{\infty}_{\mathrm{c}}({\mathbb{R}}^2,{\mathbb{R}}^2)$ with $\chi_t(z)=z$ for all $z$ with $|z|\leq C_t$, and $|\chi_t(z)|\leq 2C_t$ for all $z$. We define a differential operator $$\begin{aligned}
&Q^{(t)}=\sum_{\alpha \in {\mathbb{N}}_{2k}^m}Q^{(t)}_{\alpha}
\partial^\alpha\in \mathscr{D}_{\mathsf{C}^\infty}^{(2k)}({\mathbb{R}}^m; {\mathbb{C}}^\ell, {\mathbb{C}}^\ell),
\\
&Q^{(t)}_{\alpha i j}(x):= T_{\alpha i j}(x_0)+
\chi_t\big(\phi(x)(T_{\alpha i j}(x)-T_{\alpha i j}(x_0))\big)
\\
&\>\>\>\>\>\>=:T_{\alpha i j}(x_0)+A^{(t)}_{\alpha i j}(x)\end{aligned}$$ (with the usual extension of $\phi (T_{\alpha i j}-T_{\alpha i j}(x_0))$ to zero away from $U$ being understood, so in particular we have $Q^{(t)}_{\alpha i j}(x)= T_{\alpha i j}(x_0)$, if $x\in {\mathbb{R}}^m\setminus U$). Let $\zeta\in{\mathbb{R}}^m\setminus\{0\}$, $\eta\in{\mathbb{C}}^\ell$ be arbitrary. Then using $\sigma_{T,x_0}=\sigma_{P,x_0}^{\dagger}\sigma_{P,x_0}$, and that $${\mathbb{R}}^m\setminus\{0\}\ni \zeta\rq{}\longmapsto \sigma_{P,x_0}(\mathrm{i}\zeta\rq{})=
\sum_{\alpha\in{\mathbb{N}}^m_{k}, |\alpha|=k}P_{\alpha}(x_0)(\mathrm{i}\zeta\rq{})^{\alpha}\in
\mathrm{GL}({\mathbb{C}};\ell\times \ell)$$ is well-defined and positively homogeneous of degree $k$, one finds $$\Re (\sigma_{T,x_0}(\mathrm{i}\zeta)\eta,\eta)= (\sigma_{T,x_0}(\mathrm{i}\zeta)\eta,\eta)\geq
D_1|\zeta|^{2k} |\eta|^2,$$ where $$D_1:=\min_{\zeta\rq{}\in{\mathbb{R}}^m,\eta\rq{}\in{\mathbb{C}}^\ell, |\zeta\rq{}|=1=
|\eta\rq{}|} |\sigma_{P,x_0}(\mathrm{i}\zeta\rq{})\eta\rq{}|^2>0.$$ Furthermore, for $x\in U$ one easily gets $$\Re (\sigma_{A^{(t)},x}(\mathrm{i}\zeta)\eta,\eta)\geq
-D(k,m)\max_{\alpha\in{\mathbb{N}}^{m}_{2k}}|A^{(t)}_{\alpha}(x)||\zeta|^{2k} |\eta|^2,$$ for some $D(k,m)>0$. From now on we fix some small $t$ such that $$\sup_{x\in U}\max_{\alpha\in{\mathbb{N}}^{m}_{2k}}|A^{(t)}_{\alpha}(x)|\leq D_1/(2D(k,m)).$$ Then we get the estimate $$\Re (\sigma_{Q^{(t)},x}(\mathrm{i}\zeta)\eta,\eta)\geq{\frac}{D_1}{2} |\zeta|^{2k} |\eta|^2
\text{ for all $x\in{\mathbb{R}}^m$ },$$ thus there is a constant $C_*>0$, which only depends on $k$, $m$ and $D_1$, such that $$\left|\big(r^{2k} + \sigma_{Q^{(t)},x}(\mathrm{i}\xi)\big)^{-1}\right| \leq
C_*(r + |\xi|)^{-2k},$$ which is valid for all $$(x,\xi, r) \in {\mathbb{R}}^m \times \big(({\mathbb{R}}^m \times [0, \infty)) \setminus \{(0, 0)\}\big).$$ In other words, $Q^{\psi}:=Q^{(t)}$ satisfies the assumptions of Proposition \[pr2\] with $\theta_0=\pi$, and by construction one has $$\begin{aligned}
\label{coin}
Q^{\psi}_{\alpha}=T_{\alpha}\text{ for all $\alpha\in{\mathbb{N}}^{m}_{2k}$, in a open neighbourhood of
$\mathrm{supp}(\psi)$}.\end{aligned}$$ Since $ \mathsf{L}^1({\mathbb{R}}^m, {\mathbb{C}}^\ell) \hookrightarrow \mathsf{B}^0_{1,\infty}({\mathbb{R}}^n, {\mathbb{C}}^\ell)$, the assumption $f \in \mathsf{L}^1_{\mathrm{loc}} (U, {\mathbb{C}}^\ell)$ implies $$\beta_0:= \left.\sup \big\{\beta\right|\>\beta\in{\mathbb{R}}, \tilde{\psi} f \in
\mathsf{B}^\beta_{1,\infty }({\mathbb{R}}^m, {\mathbb{C}}^\ell)
\text{ for all $\tilde{\psi}\in\mathsf{C}^{\infty}_{\mathrm{c}}(U)$}\big\}\geq 0.$$ We also know that $Pf \in \mathsf{L}^1_{\mathrm{loc}} (U, {\mathbb{C}}^\ell)$. Then $P(\psi f) = \psi Pf + P_1 f$, where the commutator $P_1:=[P,\psi] \in \mathscr{D}_{\mathsf{C}^\infty}^{(k-1)}(U; {\mathbb{C}}^\ell, {\mathbb{C}}^\ell)$ has coefficients with compact support in $U$, and using (\[coin\]) we get $$Q^{\psi}(\psi f) = T(\psi f)= P^{\dagger} P(\psi f) = P^{\dagger} (\psi Pf) + P^{\dagger} P_1 f,$$ all equalities understood in the sense of distributions with compact support in $U$. We fix $R \geq 0$ so large that the conclusions of Proposition \[pr2\] hold for $Q=Q^{\psi}$, $\theta_0: = \pi$, $r = R$, $$\beta \in \big\{-2k, \min\big\{\beta_0 +{\frac}{1}{2}- 2k, -k\big\}\big\}.$$ So $\psi f$ coincides with the unique solution $w$ in $\mathsf{B}^0_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ of $$\label{eq2A}
R^{2k} w + Q^{\psi}w = R^{2k} \psi f + P^{\dagger} (\psi Pf) + P^{\dagger} P_1 f.$$ On the other hand, as $\tilde{\psi} f \in \mathsf{B}^{\beta_0 - \frac{1}{2}}_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ for all $\tilde{\psi}\in\mathsf{C}^{\infty}_{\mathrm{c}}(U)$ (by the very definition of $\beta_0$), we get $$R^{2k} \psi f + P^{\dagger} (\psi Pf) + P^{\dagger} P_1 f \in
\mathsf{B}^{\min\{-k,\beta_0+\frac{1}{2} -2k\}}_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell).$$ So (\[eq2A\]) has a unique solution $\tilde{w}$ in $\mathsf{B}^{ \min\{\beta_0 + \frac{1}{2}, k\}}_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, evidently coinciding with $\psi f$, by the uniqueness of the solutions of (\[eq2A\]) in the class $\mathsf{B}^0_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. We deduce that $\psi f \in \mathsf{B}^{ \min\{\beta_0 + \frac{1}{2}, k\}}_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, so that, $\psi$ being arbitrary, $\min\big\{\beta_0 + \frac{1}{2}, k\big\} \leq \beta_0$, implying $k \leq \beta_0$ and $\min\{\beta_0 + \frac{1}{2}, k\} = k$. We have thus shown that $\psi f \in \mathsf{B}^k_{1,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$.
Keeping Remark \[adjo\].2 in mind, we immediately get the following characterization of local Sobolev spaces:
Let $E\to X$ be a smooth complex vector bundle, and let $k\in {\mathbb{N}}_{\geq 0}$.\
*a)* If $p\in (1,\infty)$, then for any elliptic operator $Q\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(k)}(X;E,E)$ one has $$\Gamma_{\mathsf{W}^{k,p}_{\mathrm{loc}}}(X,E)=\Gamma_{\mathsf{W}^{Q,p}_{\mathrm{loc}}}(X,E).$$ *b)* For any elliptic $Q\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(k+1)}(X;E,E)$ one has $$\Gamma_{\mathsf{W}^{Q,1}_{\mathrm{loc}}}(X,E)\subset\Gamma_{\mathsf{W}^{k,1}_{\mathrm{loc}}}(X,E).$$
Our second main result is the following abstract Meyers-Serrin type theorem:
\[main\] Let $p\in [1,\infty)$, $s\in{\mathbb{N}}$, $k_1,\dots,k_s\in{\mathbb{N}}_{\geq 0}$, and let $E\to X$, $F_i\to X$, for each $i\in\{1,\dots,s\}$, be smooth Hermitian vector bundles, and let $\mathfrak{P}:=\{P_1,\dots,P_s\}$ with $P_{i}\in\mathscr{D}_{\mathsf{C}^{\infty}}^{(k_i)}(X;E,F_i)$ be such that in case $k:=\max\{k_1,\dots,k_s\}\geq 2$ one has $\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)\subset \Gamma_{\mathsf{W}^{k-1,p}_{\mathrm{loc}}}(X,E)$. Then for any $f\in \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E) $ there is a sequence $$(f_n)\subset \Gamma_{\mathsf{C}^{\infty}}(X,E)\cap
\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E),$$ which can be chosen in $\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(X,E)$ if $f$ is compactly supported, such that $\left\| f_n-f\right\|_{\mathfrak{P},p,\mu}\to 0$ as $n\to\infty$.
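We note in passing (an added observation; it is immediate from the statement and not needed later) that Theorem \[main\] contains the classical Meyers-Serrin theorem: for $X=\Omega\subset{\mathbb{R}}^m$ open, $\mu$ the Lebesgue measure, all bundles trivial and $\mathfrak{P}=\{\partial^{\alpha}:\alpha\in{\mathbb{N}}^m_k\}$, the assumption of the theorem is automatically satisfied (all derivatives of order $\leq k-1$ belong to $\mathfrak{P}$), one has $\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(\Omega,\Omega\times{\mathbb{C}})=\mathsf{W}^{k,p}(\Omega)$ with equivalent norms, and the conclusion becomes the density of $\mathsf{C}^{\infty}(\Omega)\cap\mathsf{W}^{k,p}(\Omega)$ in $\mathsf{W}^{k,p}(\Omega)$ for all $p\in[1,\infty)$.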
The following vector-valued and higher order result on Friedrichs mollifiers is the main tool for the proof of Theorem \[main\], and should in fact be of an independent interest.
\[molli\] Let $0\leq h\in\mathsf{C}^{\infty}_{\mathrm{c}}({\mathbb{R}}^m)$ be such that $h(x)=0$ for all $x$ with $|x|\geq 1$, $\int_{{\mathbb{R}}^m} h(x){{\rm d}}x =1$. For any $\epsilon>0$ define $0\leq h_{\epsilon}\in \mathsf{C}^{\infty}_{\mathrm{c}}({\mathbb{R}}^m)$ by $h_{\epsilon}(x):=\epsilon^{-m}h(\epsilon^{-1}x)$. Furthermore, let $U\subset {\mathbb{R}}^m$ be open, let $k\in {\mathbb{N}}_{\geq 0}$, $\ell_0,\ell_1\in{\mathbb{N}}$, $p\in [1,\infty)$, and let $P\in{\mathscr{D}}^{(k)}_{\mathsf{C}^{\infty}}(U;{\mathbb{C}}^{\ell_0},{\mathbb{C}}^{\ell_1})$, $$P= \sum_{\alpha\in {\mathbb{N}}^m_{k} } P_{\alpha}\partial^{\alpha},
\>\text{ with $P_{\alpha}:U\longrightarrow\mathrm{Mat}({\mathbb{C}};\ell_0\times \ell_1)$ in
$\mathsf{C}^{\infty}$.}$$ *a)* Assume that $f\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0})$, $Pf\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$, and that either $k<2$ or $f\in\mathsf{W}^{k-1,p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0})$. Then one has $Pf_{\epsilon}\to Pf$ as $\epsilon\to 0+$ in $\mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$, where for sufficiently small $\epsilon>0$ we have set $$f_{\epsilon}:=\int_{{\mathbb{R}}^m} h_{\epsilon}(\bullet-y)f(y){{\rm d}}y
\in\mathsf{C}^{\infty}(U,{\mathbb{C}}^{\ell_0}).$$ *b)* If $f\in\mathsf{C}^k(U,{\mathbb{C}}^{\ell_0})$, then $Pf_{\epsilon}\to P f$ as $\epsilon\to 0+$, uniformly over each $V\Subset U$.
a\) We prove the statement by an induction argument on the order of the operator similar to that in [@Br Appendix A]. The case $k=0$ is an elementary property of convolution, the case $k=1$ is the classical Friedrichs theorem, see [@friedrichs]. Therefore, let $k\geq 2$ and assume that the result is true for operators of order at most $k-1$, and also that at least for some $\alpha\in{\mathbb{N}}^m_k$ with $|\alpha|=k$ we have $P_\alpha\neq 0$. For $j\in\{1,\ldots,m\}$, let $e_j\in{\mathbb{N}}^m_1$ be the $j$-th element of the canonical basis of ${\mathbb{R}}^m$, set $$J_j=\left.\big\{\alpha\right| \alpha\in{\mathbb{N}}^m_k,\ |\alpha|=k,\ \alpha_j\geq 1\big\},$$ and for $\alpha\in J_j$, set $\hat{\alpha}_j=\alpha-e_j$. Let $f\in\mathsf{W}^{k-1,p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0})$ be such that $Pf\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$. For each $\alpha\in{\mathbb{N}}^m_k$ with $|\alpha|=k$ we pick the smallest $j(\alpha)\in\{1,\ldots,m\}$ with $\alpha\in J_{j(\alpha)}$ (such a $j(\alpha)$ exists as $|\alpha|=k\geq 1$) and set $g_{\alpha}:=\partial^{\hat{\alpha}_{j(\alpha)}}f$; then we may write $$Pf = \sum_{\alpha\in{\mathbb{N}}^m_k,\, |\alpha|=k} \partial_{j(\alpha)}(P_\alpha g_{\alpha}) + Q f,
\> \text{ where $Q\in{\mathscr{D}}^{(k-1)}_{\mathsf{C}^{\infty}}(U;{\mathbb{C}}^{\ell_0},{\mathbb{C}}^{\ell_1})$.}$$ By the induction hypothesis, $Qf_\epsilon \to Qf$ in $\mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$ as $\epsilon\to 0+$. Moreover, by assumption $g_{\alpha}\in \mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0})$ for every such $\alpha$, hence $(g_{\alpha})_\epsilon \to g_{\alpha}$ in $\mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_0})$ and as a consequence $(P_\alpha g_{\alpha})_\epsilon\to P_\alpha g_{\alpha}$ in $\mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$, so that $Pf_\epsilon \to Pf$ in $\mathsf{L}^{p}_{\mathrm{loc}}(U,{\mathbb{C}}^{\ell_1})$ by Friedrichs’ theorem, and the proof is complete.\
b) This follows from the following two well-known facts: Firstly, if $f\in\mathsf{C}^{k}(U,{\mathbb{C}}^{\ell_0})$, then $\partial^{\alpha}(f_{\epsilon})=(\partial^{\alpha}f)_{\epsilon}$ for all $\alpha\in{\mathbb{N}}^m_k$ and all sufficiently small $\epsilon>0$. Secondly, if $g\in\mathsf{C}(U,{\mathbb{C}}^{\ell_0})$, then for every $V\Subset U$ $$\sup_{x\in V}|g_{\epsilon}(x)- g(x)|\to 0 \>\text{ as $\epsilon\to 0+$.}$$ This completes the proof.
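The following numerical sketch is an added illustration of Proposition \[molli\] a) in the simplest situation and is not part of the original text; it assumes $U=(0,1)$, $\ell_0=\ell_1=1$, $p=2$ and the first order operator $P=a(x)\partial$ with a smooth coefficient $a$, and all concrete parameter values are ad hoc.

```python
import numpy as np

# Minimal numerical illustration (added; not from the paper):
# U = (0,1), ell_0 = ell_1 = 1, p = 2, and the first order operator
# P = a(x) d/dx with a smooth coefficient a.  We mollify f(x) = |x - 1/2|,
# which lies in W^{1,2}_loc(U) but is not C^1, and check that P f_eps -> P f
# in L^2 on the compact subset V = [0.2, 0.8].

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
a = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)      # smooth coefficient of P
f = np.abs(x - 0.5)
Pf = a * np.sign(x - 0.5)                    # a * f' (weak derivative)

def mollify(g, eps):
    """Convolution with h_eps, where h is the standard bump supported in (-1,1)."""
    s = np.arange(-eps, eps + dx, dx)
    h = np.zeros_like(s)
    inside = np.abs(s) < eps
    h[inside] = np.exp(-1.0 / (1.0 - (s[inside] / eps) ** 2))
    h /= h.sum() * dx                        # normalize so that h_eps integrates to 1
    return np.convolve(g, h, mode="same") * dx

V = (x > 0.2) & (x < 0.8)
for eps in (0.1, 0.05, 0.02, 0.01):
    Pf_eps = a * np.gradient(mollify(f, eps), dx)
    err = np.sqrt(dx * np.sum((Pf_eps - Pf)[V] ** 2))
    print(eps, err)                          # the L^2(V) errors shrink with eps
```

(The observed $\mathsf{L}^2$-errors on $V$ decay as $\epsilon\to 0+$, in accordance with the classical Friedrichs theorem, which is the case $k=1$ of the proposition.)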
Let $$\ell_0:=\mathrm{rank}(E), \>\ell_j:=\mathrm{rank}(F_j),\>\> \text{ for any
$j\in\{1,\dots,s\}$.}$$ We take a relatively compact, locally finite atlas $\bigcup_{n\in{\mathbb{N}}}U_n=X$ such that each $U_n$ admits smooth orthonormal frames for $$E\longrightarrow X, F_1\longrightarrow X,\dots,F_s\longrightarrow X.$$ Let $(\varphi_n)$ be a partition of unity which is subordinate to $(U_n)$, that is, $$0\leq \varphi_n\in\mathsf{C}^{\infty}_{\mathrm{c}}(U_n), \>\sum_n\varphi_n (x)=1
\>\text{ for all $x\in X$,}$$ where the latter is a locally finite sum. Now let $f\in \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E)$, and $f_n:=\varphi_n f$. Let us first show that $f_n\in \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu,\mathrm{c}}}(U_n,E)$. Indeed, let $j\in\{1,\dots,s\}$. Then as elements of $\Gamma_{\mathsf{D}'}(U_n,E)$ one has $$P_{j} f_n= \varphi_n P_{j}f+[P_j,\varphi_n] f,\>\text{ but }
\>[P_j,\varphi_n]\in \mathscr{D}^{(k_j-1)}_{\mathsf{C}^{\infty}}(U_n;E,F_j),$$ and as we have $f\in \Gamma_{\mathsf{W}^{k-1,p}_{\mathrm{loc}}}(X,E)$, it follows that $$\left(\partial^{\alpha}f^1,\dots,\partial^{\alpha}f^{\ell_0}\right)
\in\mathsf{L}^p_{\mathrm{loc}}(U_n,{\mathbb{C}}^{\ell_0})
\>\text{ for all $\alpha\in{\mathbb{N}}^m_{k-1}$,}$$ where the $f^j$s are the components of $f$ with respect to the smooth orthonormal frame on $U_n$ for $E$. Thus $$[P_j,\varphi_n] f\in \Gamma_{\mathsf{L}^{p}_{\mathrm{loc}}}(U_n,F_j)$$ has a compact support in $U_n$, as the coefficients of $[P_j,\varphi_n]$ have a compact support in $U_n$, and since $0<\mu_{U_n}\in\mathsf{C}^{\infty}(U_n)$, the proof of $f_n\in \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu,\mathrm{c}}}(U_n,E)$ is complete.
\sum^{\infty}_{n=1}\left\|f_{n,\epsilon}-f_n\right\|_{\mathfrak{P},p,\mu}<\epsilon,$$ which proves the first assertion. If $f$ is compactly supported, then picking a *finite* covering of the support of $f$ with $U_n\rq{}s$ as above, the above proof also shows the second assertion.
We close this section with the following example which shows that the assumptions of Theorem \[main\] are optimal in a certain sense:
\[bep\] Consider the third order differential operator $$A:=-x\partial^3+(x-1)\partial^2=(1-\partial)\circ x\circ \partial^2\in{\mathscr{D}}^{(3)}_{\mathsf{C}^{\infty}}({\mathbb{R}})$$ on ${\mathbb{R}}$ (with its Lebesgue measure). *Then for any $p\in (1,\infty)$ one has $$\mathsf{W}^{A,p}({\mathbb{R}})\subset \mathsf{W}^{1,p}_{\mathrm{loc}}({\mathbb{R}}), \>\mathsf{W}^{A,p}({\mathbb{R}})
\not\subset \mathsf{W}^{2,p}_{\mathrm{loc}}({\mathbb{R}})$$ and $\mathsf{W}^{A,p}({\mathbb{R}})\cap \mathsf{C}^{\infty}({\mathbb{R}})$ is not dense in $\mathsf{W}^{A,p}({\mathbb{R}})$:* Indeed, we first observe that $$\mathsf{W}^{A,p}({\mathbb{R}}) = \{u | u\in\mathsf{L}^p({\mathbb{R}}) , x \partial^2 u \in \mathsf{W}^{1,p}({\mathbb{R}})\}.$$ To see this, let $u\in\mathsf{W}^{A,p}({\mathbb{R}})$ and set $f := Au$, $v := x \partial^2u \in \mathsf{S}'({\mathbb{R}})$; then $(1 - \partial) v = f$, so that $(1 - \mathrm{i}\xi) \hat v = \hat f$, and thus $v = \mathcal{F}^{-1} [(1 - \mathrm{i}\xi)^{-1} \hat f] \in \mathsf{W}^{1,p}({\mathbb{R}})$; conversely, if $u\in\mathsf{L}^p({\mathbb{R}})$ with $x\partial^2u\in\mathsf{W}^{1,p}({\mathbb{R}})$, then $Au=(1-\partial)(x\partial^2u)\in\mathsf{L}^p({\mathbb{R}})$. Here, $\mathcal{F}$ is the Fourier transformation and $\hat \Psi:=\mathcal{F}\Psi$.\
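The factorization of $A$ used above can also be checked symbolically; the following snippet is an added consistency check (not from the original text) on a generic smooth function.

```python
import sympy as sp

# Symbolic check (added; not from the paper) of the factorization
# A = (1 - d/dx) o x o d^2/dx^2 = -x d^3/dx^3 + (x - 1) d^2/dx^2
# on a generic smooth function u.
x = sp.symbols('x')
u = sp.Function('u')(x)

w = x * sp.diff(u, x, 2)                          # x u''
lhs = w - sp.diff(w, x)                           # (1 - d/dx)(x u'')
rhs = -x * sp.diff(u, x, 3) + (x - 1) * sp.diff(u, x, 2)

print(sp.simplify(lhs - rhs))                     # prints 0
```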
Next we show $\mathsf{W}^{A,p}({\mathbb{R}})\subset\mathsf{W}^{1,p}_{\mathrm{loc}}({\mathbb{R}})$. In fact, let $u \in \mathsf{W}^{A,p}({\mathbb{R}})$ and set $x \partial^2 u = g \in \mathsf{W}^{1,p}({\mathbb{R}})$. We write $g$ in the form $g = g(0) + \int_0^x \partial g(y){{\rm d}}y$. Then $$\partial^2u(x) = \frac{g(0)}{x} + h(x), \quad x \in {\mathbb{R}}\setminus \{0\},$$ with $h(x) = \frac{1}{x} \int_0^x \partial g(y) {{\rm d}}y$. As $p > 1$, it is a well known consequence of Hardy’s inequality that $h \in \mathsf{L}^p({\mathbb{R}})$. So $$\partial^2u = g(0) p. v. \left(\frac{1}{x}\right) + h + k,$$ with $k \in \mathsf{D}'({\mathbb{R}})$, $\mathrm{supp}(k) \subseteq \{0\}$. We deduce that $$g(x) = g(0) + x h(x) + x k(x),$$ implying $x k(x) = 0$. From $k(x) = \sum_{j=0}^m a_j \delta^{(j)}$ it follows that $x k(x) = - \sum_{j=1}^m j a_j \delta^{(j-1)} = 0$ if and only if $k(x) = a_0 \delta$, whence $$\partial^2u = g(0) p. v. \left(\frac{1}{x}\right) + h + a_0 \delta,$$ so that $$\partial u(x) = g(0) \ln(|x|) + \int_0^x \partial g(y){{\rm d}}y + a_0 H(x) + C
\in \mathsf{L}^p_{\mathrm{loc}}({\mathbb{R}}),$$ where $H$ is the Heaviside function, and we have proved that $\mathsf{W}^{A,p}({\mathbb{R}}) \subset \mathsf{W}^{1,p}_{\mathrm{loc}}({\mathbb{R}})$.\
In order to see $\mathsf{W}^{A,p}({\mathbb{R}})\not\subset\mathsf{W}^{2,p}_{\mathrm{loc}}({\mathbb{R}})$, consider the function $u(x) = \phi(x) \ln(|x|)$, with $\phi \in \mathsf{C}_{\mathrm{c}}^{\infty}({\mathbb{R}})$, $\phi(x) = x$ in some neighbourhood of $0$. Then $x \partial^2 u \in \mathsf{W}^{1,p}({\mathbb{R}})$, but $u \not \in \mathsf{W}^{2,p}_{\mathrm{loc}}({\mathbb{R}})$, since one has $$\partial^2 u(x) = p. v. \left(\frac{1}{x}\right)$$ in a neighbourhood of $0$. So Theorem \[main\] is not applicable.\
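For the reader's convenience, here is the elementary computation behind these claims (added here; it only uses $\phi(x)=x$ near $0$): for $x\neq 0$ close to $0$ one has $$u(x)=x\ln(|x|),\qquad \partial u(x)=\ln(|x|)+1,\qquad \partial^{2}u(x)=\frac{1}{x},\qquad x\,\partial^{2}u(x)=1,$$ so that $x\partial^{2}u$ extends to a function in $\mathsf{C}^{\infty}_{\mathrm{c}}({\mathbb{R}})$, while $\partial^{2}u$ fails to be locally $p$-integrable near $0$.\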
To see that $\mathsf{W}^{A,p}({\mathbb{R}}) \cap \mathsf{C}^{\infty}({\mathbb{R}})$ is not dense in $\mathsf{W}^{A,p}({\mathbb{R}})$, let again $u(x):= \phi(x) \ln(|x|)$ with $\phi$ as above. Assume (by contradiction) that there exists $(u_n)_{n \in {\mathbb{N}}}$ with $u_n \in \mathsf{W}^{A,p}({\mathbb{R}}) \cap \mathsf{C}^\infty({\mathbb{R}})$, such that $$\|u_n - u\|_{\mathsf{L}^p({\mathbb{R}})} + \|Au_n - Au\|_{\mathsf{L}^p({\mathbb{R}})} \to 0 \text{ as $n \to \infty$}.$$ We set $v = x \partial^2 u$, $v_n = x \partial^2 u_n$. Then $$\|v_n - v\|_{\mathsf{L}^p({\mathbb{R}})}+\|\partial v_n - \partial v\|_{\mathsf{L}^p({\mathbb{R}})} \to 0
\text{ as $n \to \infty$},$$ so that (considering the continuous representative of any $\mathsf{W}^{1,p}({\mathbb{R}})$ equivalence class) $v_n (0) \to v(0)$. However, one has $v_n(0) = 0$ for all $n\in {\mathbb{N}}$, while $v(0) = 1$, a contradiction.
Applications of Theorem \[main\]
================================
The elliptic case
-----------------
Theorem \[regu\] in combination with Remark \[adjo\].2 for formal adjoints immediately implies:
\[ell\] Let $s\in{\mathbb{N}}$, $k_1,\dots,k_s\in{\mathbb{N}}_{\geq 0}$, let $E\to X$, $F_i\to X$, $i\in\{1,\dots,s\}$, be smooth Hermitian vector bundles, and let $\mathfrak{P}:=\{P_1,\dots,P_s\}$ with $P_{i}\in\mathscr{D}_{\mathsf{C}^{\infty}}^{(k_i)}(X;E,F_i)$, and let $k:=\max\{k_1,\dots,k_s\}$.
*a)* Let $p\in (1,\infty)$. If either $k<2$, or there exists some $j\in\{1,\dots,s\}$ with $P_j$ elliptic and $k_j\geq k-1$, then the assumptions from Theorem \[main\] are satisfied by $\mathfrak{P}$, in particular for any $f\in \Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E) $ there is a sequence $$(f_n)\subset \Gamma_{\mathsf{C}^{\infty}}(X,E)\cap
\Gamma_{\mathsf{W}^{\mathfrak{P},p}_{\mu}}(X,E),$$ which can be chosen in $\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c} }}(X,E)$ if $f$ is compactly supported, such that $\left\| f_n-f\right\|_{\mathfrak{P},p,\mu}\to 0$ as $n\to\infty$.\
*b)* If either $k<2$, or there exists some $j\in\{1,\dots,s\}$ with $P_j$ elliptic and $k_j=k$, then the assumptions from Theorem \[main\] are satisfied by $\mathfrak{P}$, in particular for any $f\in \Gamma_{\mathsf{W}^{\mathfrak{P},1}_{\mu}}(X,E) $ there is a sequence $$(f_n)\subset \Gamma_{\mathsf{C}^{\infty}}(X,E)\cap
\Gamma_{\mathsf{W}^{\mathfrak{P},1}_{\mu}}(X,E),$$ which can be chosen in $\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(X,E)$ if $f$ is compactly supported, such that $\left\| f_n-f\right\|_{\mathfrak{P},1,\mu}\to 0$ as $n\to\infty$.
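A typical instance of parts a) and b) (added as an illustration; the particular form of the operator is an assumption made only for this example): if $H\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(2)}(X;E,E)$ is elliptic of second order, for example a Laplace type operator such as a Bochner Laplacian plus a smooth potential (if a Riemannian metric on $X$ is given), then $\mathfrak{P}:=\{H\}$ satisfies $k=k_1=2$, so that for every $p\in[1,\infty)$ the space $\Gamma_{\mathsf{C}^{\infty}}(X,E)\cap\Gamma_{\mathsf{W}^{\{H\},p}_{\mu}}(X,E)$ is dense in $\Gamma_{\mathsf{W}^{\{H\},p}_{\mu}}(X,E)$; density statements of this type appear, for instance, in the study of essential self-adjointness of Schrödinger type operators (cf. [@Br]).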
A covariant Meyers-Serrin Theorem on arbitrary Riemannian manifolds {#meyser}
-------------------------------------------------------------------
The aim of this section is to apply Theorem \[main\] in the context of covariant Sobolev spaces on Riemannian manifolds, which have been considered in this full generality, for example in [@salomonsen], and in the scalar case, in [@aubin; @hebey]. The point we want to make here is that Theorem \[main\] can be applied in many situations, even if none of the underlying $P_j$s is elliptic.\
Let us start by recalling (cf. Section 3.3.1 in [@nico]) that if $E_j\to X$ is a smooth vector bundle and $$\nabla_j\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(1)}\left(X;E_j,\mathrm{T}^* X \otimes E_j\right)$$ a covariant derivative on $E_j\to X$ for $j=1,2$, then one defines the *tensor covariant derivative of $\nabla_1$ and $\nabla_2$* as the uniquely determined covariant derivative $$\nabla_1\tilde{\otimes}\nabla_2\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(1)}
\left(X;E_1\otimes E_2,\mathrm{T}^* X \otimes E_1\otimes E_2\right)$$ on $E_1\otimes E_2 \to X$ which satisfies $$\begin{aligned}
\nabla_1\tilde{\otimes}\nabla_2(f_1\otimes f_2)=\nabla_1(f_1)\otimes f_2+
f_1\otimes\nabla_2(f_2) \label{product}\end{aligned}$$ for all $f_1\in\Gamma_{\mathsf{C}^{\infty}}(X,E_1)$, $f_2\in\Gamma_{\mathsf{C}^{\infty}}(X,E_2)$ (the canonical isomorphism of $\mathsf{C}^{\infty}(X)$-modules $$\Gamma_{\mathsf{C}^{\infty}}\left(X,\mathrm{T}^* X \otimes E_1\otimes E_2\right)
\longrightarrow \Gamma_{\mathsf{C}^{\infty}}\left(X,\mathrm{T}^* X \otimes E_2\otimes E_1\right)$$ being understood).\
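In local terms (an added remark; the frames below are auxiliary and not fixed in the text): if $(e_a)$ and $(\varepsilon_b)$ are local frames of $E_1\to X$ and $E_2\to X$ with connection $1$-form matrices $A$ and $B$, i.e. $\nabla_1 e_a=\sum_c A_{ca}\otimes e_c$ and $\nabla_2\varepsilon_b=\sum_d B_{db}\otimes \varepsilon_d$, then the product rule (\[product\]) forces $$\nabla_1\tilde{\otimes}\nabla_2(e_a\otimes \varepsilon_b)=\sum_c A_{ca}\otimes e_c\otimes\varepsilon_b+\sum_d B_{db}\otimes e_a\otimes\varepsilon_d,$$ so that the connection $1$-form matrix of $\nabla_1\tilde{\otimes}\nabla_2$ with respect to the frame $(e_a\otimes\varepsilon_b)$ is $A\otimes 1+1\otimes B$; in particular, this determines $\nabla_1\tilde{\otimes}\nabla_2$ locally and shows the asserted uniqueness.\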
Now let $(M,g)$ be a possibly noncompact smooth Riemannian manifold without boundary and let $\mu({{\rm d}}x)=\mathrm{vol}_g({{\rm d}}x)$ be the Riemannian volume measure. We also give ourselves a smooth Hermitian vector bundle $E\to M$ and let $\nabla$ be a Hermitian covariant derivative defined on the latter bundle. We denote the Levi-Civita connection on $\mathrm{T}^*M$ with $\nabla_{g}$. Then for any $j\in{\mathbb{N}}$, the operator $$\nabla^{(j)}_g\in \mathscr{D}_{\mathsf{C}^{\infty}}^{(1)}
\big(M; \left( \mathrm{T}^*M\right)^{\otimes j-1} \otimes E,
\left( \mathrm{T}^*M\right)^{\otimes j} \otimes E\big)$$ is defined recursively by $\nabla^{(1)}_g:=\nabla$, $\nabla^{(j+1)}_g:=\nabla^{(j)}_g\tilde{\otimes} \nabla_{g}$, and we can further set $$\nabla^{j}_g:=\nabla^{(j)}_g\cdots \nabla^{(1)}_g\in
\mathscr{D}_{\mathsf{C}^{\infty}}^{(j)}\big(M;E , \left(\mathrm{T}^*M\right)^{\otimes j} \otimes E\big)$$ Note that if $\dim(M)>1$, then each $\nabla^{j}_g$ is nonelliptic. The following result makes Theorem \[main\] accessible to covariant Riemannian Sobolev spaces:
\[formel2\] Let $E\rq{}\to X$ be a smooth complex vector bundle with a covariant derivative $\nabla\rq{}$ defined on it. Then for any $p\in [1,\infty)$ one has $\Gamma_{\mathsf{W}^{\nabla\rq{},p}_{\mathrm{loc}}}(X,E\rq{})=
\Gamma_{\mathsf{W}^{1,p}_{\mathrm{loc}}}(X,E\rq{})$.
Let $\ell:=\mathrm{rank}(E\rq{})$, and pick Hermitian structures on $E\rq{}$ and $\mathrm{T}^* X$. Given $f\in \Gamma_{\mathsf{W}^{\nabla\rq{},p}_{\mathrm{loc}}}(X,E\rq{})$, we have to prove $f\in\Gamma_{\mathsf{W}^{1,p}_{\mathrm{loc}}}(X,E\rq{})$. To this end, it is sufficient to prove that if $V\Subset W\Subset X$ are such that there is a chart $$x=(x^1,\dots,x^m):W\longrightarrow {\mathbb{R}}^m$$ for $X$ in which $E\rq{}\to X$ admits an orthonormal frame $e_1,\dots,e_\ell\in\Gamma_{\mathsf{C}^{\infty}}(W,E\rq{})$, then with the components $f^j:=(f,e_j)$ of $f$ one has $$\begin{aligned}
\sum_{k,j}\int_V |\partial_k f^j(x)|^p {{\rm d}}x<\infty.\label{zuz}\end{aligned}$$ To this end, note that there is a unique matrix of $1$-forms $$A\in \mathrm{Mat}\big(\Gamma_{\mathsf{C}^{\infty}}(W,\mathrm{T}^* X);\ell\times \ell\big)$$ such that with respect to the frame $(e_j)$ one has $\nabla={{\rm d}}+A$, in the sense that for all $(\Psi^1,\dots,\Psi^\ell)\in\mathsf{C}^{\infty}(W,{\mathbb{C}}^\ell)$ one has $$\begin{aligned}
\nabla \sum_{j}\Psi^j e_j =\sum_{j}({{\rm d}}\Psi^j )\otimes e_j+ \sum_{j}\sum_{i}\Psi^jA_{ij} \otimes e_i.\end{aligned}$$ It follows that in $W$ one has $$\sum_{j}{{\rm d}}f^j\otimes e_j={{\rm d}}f = \nabla f -A f,$$ so using $|A_{ij}| \leq C$ in $V$ and that $(e_j)$ is orthonormal we arrive at $$\begin{aligned}
\sum_j\int_V |{{\rm d}}f^j(x)|_x^p {{\rm d}}x\leq \tilde{C}\int_V \big(|\nabla f(x)|^p_x+|f(x)|^p_x\big){{\rm d}}x<\infty.\label{diffy}\end{aligned}$$ But it is well-known that the integrability (\[diffy\]) implies (\[zuz\]) (see for example Exercise 4.11 b) in [@buch]). The reverse inclusion $\Gamma_{\mathsf{W}^{1,p}_{\mathrm{loc}}}(X,E\rq{})\subset\Gamma_{\mathsf{W}^{\nabla\rq{},p}_{\mathrm{loc}}}(X,E\rq{})$ follows analogously from $\nabla f={{\rm d}}f+Af$ and the local boundedness of $A$.
With these preparations, we can state the following covariant Meyers-Serrin theorem for Riemannian manifolds (which, in the case of scalar functions, that is, if $E=M\times{\mathbb{C}}$ with $\nabla={{\rm d}}$, has also been observed in [@mueller Lemma 3.1]):
\[meyers\] Let $p\in
[1,\infty)$, $s\in{\mathbb{N}}$, and define a global Sobolev space by $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E):=
\Gamma_{\mathsf{W}^{\{\nabla^1_g,\dots,\nabla^s_g\},p}_{\mathrm{vol}_g}}(M,E).$$ Then one has $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)\subset\Gamma_{\mathsf{W}^{s,p}_{\mathrm{loc}}}(M,E),$$ in particular, for any $f\in\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)$ there is a sequence $$(f_n)\subset \Gamma_{\mathsf{C}^{\infty}}(M,E)\cap \Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E),$$ which can be chosen in $\Gamma_{\mathsf{C}^{\infty}_{\mathrm{c}}}(M,E)$ if $f$ is compactly supported, such that $$\left\|f_n-f\right\|_{\nabla,g,p}:=
\left\|f_n-f\right\|_{\{\nabla^{1}_g,\dots,\nabla^{s}_g\},p,\mathrm{vol}_g}\to 0 \text{ as $n\to\infty$.}$$
Applying Lemma \[formel2\] inductively shows $$\Gamma_{\mathsf{W}^{s,p}_{\nabla,g}}(M,E)\subset\Gamma_{\mathsf{W}^{s,p}_{\mathrm{loc}}}(M,E),$$ so that the other statements are implied by Theorem \[main\].
A substitute result for the $p=\infty$ case
===========================================
As ${\mathsf{C}^{\infty}}$ is not dense in ${\mathsf{L}^{\infty}}$, it is clear that Theorem \[main\] cannot be true for $p=\infty$. In this case, one can nevertheless smoothly approximate generalized $\mathsf{C}^k$-type spaces given by families $\mathfrak{P}$, without any further assumptions on $\mathfrak{P}$, an elementary fact which we record for the sake of completeness:
Let $s\in{\mathbb{N}}$, $k_1,\dots,k_s\in{\mathbb{N}}_{\geq 0}$, and let $E\to X$, $F_i\to X$, for each $i\in\{1,\dots,s\}$, be smooth Hermitian vector bundles, and let $\mathfrak{P}:=\{P_1,\dots,P_s\}$ with $P_{i}\in\mathscr{D}_{\mathsf{C}^{\infty}}^{(k_i)}(X;E,F_i)$. Then with $k:=\max\{k_1,\dots,k_s\}$, define the Banach space $\Gamma_{\mathfrak{P},\infty}(X,E)$ by $$\begin{aligned}
&\Gamma_{\mathfrak{P},\infty}(X,E)
\\
&:=\left.\Big\{f\right|f\in\Gamma_{\mathsf{C}\cap \mathsf{L}^{\infty}}(X,E),
P_if\in\Gamma_{\mathsf{C}\cap \mathsf{L}^{\infty} }(X,F_i)\text{ \emph{for all} $i\in\{1,\dots,s\}$}\Big\}
\\
&\text{\emph{with norm} $\left\| f\right\|_{\mathfrak{P},\infty}:=
\left\|f\right\|_{\infty}+\sum^s_{i=1}\left\|P_{i} f\right\|_{\infty}$}.\end{aligned}$$ Assume that $\Gamma_{\mathfrak{P},\infty}(X,E) \subset \Gamma_{\mathsf{C}^{k-1}}(X,E)$. Then $\Gamma_{\mathsf{C}^{\infty}}(X,E)\cap\Gamma_{\mathfrak{P},\infty} (X,E)$ is dense in $\Gamma_{\mathfrak{P},\infty}(X,E)$.
Using Proposition \[molli\] b), this result follows from the same localization argument as in the proof of Theorem \[main\].
An existence and uniqueness result for systems of linear elliptic PDEs on the Besov scale {#beweis2}
=========================================================================================
Throughout this section, let $\ell\in{\mathbb{N}}$ be arbitrary. We again use the notation $(\bullet,\bullet)$, $\left|\bullet\right|$, and $\mathrm{B}_r(x)$ for the standard Euclidean data in each ${\mathbb{C}}^n$. We start by recalling the definition of Besov spaces with a positive differential order:
For any $\alpha\in (0,1], p\in [1,\infty], q\in [1,\infty)$, one defines $\mathsf{B}^{\alpha}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ to be the space of $u\in\mathsf{L}^p({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ such that $$\begin{aligned}
\int_{{\mathbb{R}}^m} \left\|u(\bullet+x)-2u+u(\bullet-x)\right\|^q_{\mathsf{L}^p({\mathbb{R}}^m,{\mathbb{C}}^\ell)}
|x|^{-m-\alpha q}{{\rm d}}x <\infty,\end{aligned}$$ and $\mathsf{B}^{\alpha}_{p,\infty}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ to be the space of $u\in\mathsf{L}^p({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ such that $$\begin{aligned}
\sup_{x\in{\mathbb{R}}^m\setminus\{0\}}|x|^{-\alpha }
\left\|u(\bullet+x)-2u+u(\bullet-x)\right\|_{\mathsf{L}^p({\mathbb{R}}^m,{\mathbb{C}}^\ell)}
<\infty.\end{aligned}$$ For $\alpha\in (1,\infty)$, $p\in [1,\infty]$, $q\in [1,\infty]$, one defines $\mathsf{B}^{\alpha}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ to be the space[^1] of $u\in\mathsf{W}^{[\alpha],p}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ such that for all $\beta\in ({\mathbb{N}}_{\geq 0})^m$ with $|\beta|=[\alpha]$ one has $\partial^{\beta}u \in\mathsf{B}^{\alpha-[\alpha]}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$. These are Banach spaces with respect to their canonical norms.
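As a purely numerical illustration of the second difference characterization just given (added; not part of the original text, and restricted to $m=1$, $\ell=1$, $p=q=\infty$, $\alpha=1$), the following snippet approximates $\sup_{x\neq 0}|x|^{-1}\left\|u(\bullet+x)-2u+u(\bullet-x)\right\|_{\mathsf{L}^{\infty}({\mathbb{R}})}$ for the hat function $u(x)=\max(0,1-|x|)$, which is Lipschitz but not $\mathsf{C}^{1}$:

```python
import numpy as np

# Numerical sketch (added; not from the paper): approximate the
# B^1_{infty,infty} (Zygmund) seminorm of the hat function via the second
# difference characterization, on a uniform grid.  The hat function is
# Lipschitz but not C^1; the quotients stay bounded (by 2).

x = np.linspace(-4.0, 4.0, 8001)
dx = x[1] - x[0]
u = np.maximum(0.0, 1.0 - np.abs(x))

quotients = []
for k in range(1, 2000):                     # shifts t = k * dx
    t = k * dx
    second_diff = u[2 * k:] - 2.0 * u[k:-k] + u[:-2 * k]
    quotients.append(np.max(np.abs(second_diff)) / t)

print(max(quotients))                        # approximately 2
```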
For negative differential orders, the definition is more subtle:
Let $t(\zeta):=|\zeta|$, $\zeta\in{\mathbb{R}}^m$, and for any $\gamma\in{\mathbb{R}}$ let $$J_{\gamma}:=\mathcal{F}^{-1}(1+t^2)^{-\gamma/2}$$ denote the Bessel potential of order $\gamma$. Let $\alpha\in (-\infty,0]$, $p\in [1,\infty]$, $q\in [1,\infty]$, and pick some $\beta\in (0,\infty)$. Then one defines $\mathsf{B}^{\alpha}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ to be the space of $u\in\mathsf{S}\rq{}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$ such that $u= J_{\alpha-\beta}*f$ for some $f\in \mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)$. This definition does not depend on the particular choice of $\beta$, and one defines $$\left\|u\right\|_{\mathsf{B}^{\alpha}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)}:=
\left\|J_{1-\alpha}*u\right\|_{\mathsf{B}^{1}_{p,q}({\mathbb{R}}^m,{\mathbb{C}}^\ell)},$$ which again produces a Banach space.
We are going to prove:
\[pr2\] Let $n\in{\mathbb{N}}_{\geq 0}$, $Q \in \mathscr{D}_{\mathsf{C}^\infty}^{(n)}({\mathbb{R}}^m; {\mathbb{C}}^\ell, {\mathbb{C}}^\ell)$, $$\begin{array}{ccccc}
Q = \sum_{\alpha \in {\mathbb{N}}_n^m}Q_\alpha \partial^\alpha, & {\it with }& Q_\alpha : {\mathbb{R}}^m
\longrightarrow
\mathrm{Mat}({\mathbb{C}}; \ell \times \ell) & {\it in } & \mathsf{W}^{\infty,\infty},
\end{array}$$ that is, $Q_\alpha$ and all its derivatives are bounded. Suppose also that for some $\theta_0 \in (-\pi, \pi]$ and all $$(x,\xi, r) \in {\mathbb{R}}^m \times ({\mathbb{R}}^m \times [0, \infty)) \setminus \{(0, 0)\}),$$ the complex $\ell\times \ell$ matrix $r^n \mathrm{e}^{\mathrm{i}\theta_0}-\sigma_{Q,x}(\mathrm{i}\xi)$ is invertible, and that there are is $C>0$ such that for all $(x,\xi, r)$ as above one has $$\begin{aligned}
\label{ellp}
\left|\big(r^n \mathrm{e}^{\mathrm{i}\theta_0} - \sigma_{Q,x}(\mathrm{i}\xi)\big)^{-1}\right|
\leq C(r + |\xi|)^{-n}. \end{aligned}$$ We consider the system of linear PDEs given by $$\label{eq1}
r^n \mathrm{e}^{\mathrm{i}\theta_0} u(x) - Q u(x) = g(x), \quad x \in {\mathbb{R}}^m, r\geq 0.$$ Then for any $\beta \in {\mathbb{R}}$, $ p, q\in [1,\infty]$, there is a $R =R(\beta,p,q,Q)\geq 0$ with the following property: if $r \geq R$ and $g \in \mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, then (\[eq1\]) has a unique solution $u\in\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$.
Note that given some $Q \in \mathscr{D}_{\mathsf{C}^\infty}^{(n)}({\mathbb{R}}^m; {\mathbb{C}}^\ell, {\mathbb{C}}^\ell)$ which is strongly elliptic in the usual sense $$\Re(\sigma_{Q,x}(\mathrm{i}\xi)\eta,\eta)\geq \tilde{C} |\eta|^2\> \text{ for all $x\in {\mathbb{R}}^m$,
$\eta\in{\mathbb{C}}^\ell$, $\xi\in {\mathbb{R}}^m$ with $|\xi|=1$}$$ with some $\tilde{C}>0$ which is uniform in $x$, $\eta$, $\xi$, it is straightforward to see that the condition (\[ellp\]) is satisfied with $\theta_0=\pi$ and some $C>0$ which only depends on $\tilde{C}$ and $n$ (see also the proof of Theorem \[regu\] b)).\
Before we come to the proof of Proposition \[pr2\], we first collect some well known facts concerning Besov spaces. Unless otherwise stated, the reader may find these results in [@davide] and the references therein.
\(i) For every $p \in [1, \infty]$ one has $\mathsf{B}_{p,1}^0({\mathbb{R}}^m) \hookrightarrow \mathsf{L}^p({\mathbb{R}}^m) \hookrightarrow \mathsf{B}_{p,\infty}^0({\mathbb{R}}^m)$.
\(ii) Let $ p, q \in [1, \infty]$, $ \beta \in {\mathbb{R}}$. Then $$\mathsf{B}^{\beta +1}_{p,q} ({\mathbb{R}}^m) = \{f| f \in \mathsf{B}^{\beta}_{p,q} ({\mathbb{R}}^m),
\partial_j f \in \mathsf{B}^{\beta}_{p,q} ({\mathbb{R}}^m) \text{ for all } j \in \{1, \dots, m\}\}.$$ So for all $ k \in {\mathbb{N}}$ one has $\mathsf{B}_{p,1}^k({\mathbb{R}}^m) \hookrightarrow \mathsf{W}^{k,p}({\mathbb{R}}^m)
\hookrightarrow \mathsf{B}_{p,\infty}^k({\mathbb{R}}^m)$.
\(iii) As a consequence of (ii), we have the following particular case of the Sobolev embedding theorem: if $\beta \in {\mathbb{R}}$ and $1 \leq p, q \leq \infty$, then $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m) \hookrightarrow \mathsf{B}^{\beta - m/p}_{\infty,\infty}({\mathbb{R}}^m)$.
\(iv) Let us indicate with $(\cdot,\cdot)_{\theta,q}$ ($0 < \theta < 1$, $1 \leq q \leq \infty$) the real interpolation functor. Then, if $-\infty < \alpha_0 < \alpha_1 < \infty$, $1 \leq p, q_0, q_1 \leq \infty$, the real interpolation space $(\mathsf{B}^{\alpha_0}_{p,q_0}({\mathbb{R}}^m), \mathsf{B}^{\alpha_1}_{p,q_1}({\mathbb{R}}^m))_{\theta,q}$ coincides with $\mathsf{B}^{(1-\theta)\alpha_0 + \theta \alpha_1}_{p,q}({\mathbb{R}}^m)$, with equivalent norms.
\(v) If $1 \leq p, q < \infty$ and $\beta \in {\mathbb{R}}$, the antidual space of $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$ can be identified with $\mathsf{B}^{-\beta}_{p',q'}({\mathbb{R}}^m)$ in the following sense: if $g \in \mathsf{B}^{-\beta}_{p',q'}({\mathbb{R}}^m)$, then the (antilinear) distribution $\left\langle\bullet, \overline g\right\rangle$ can be uniquely extended to a bounded antilinear functional in $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$ (we recall here also that, whenever $\max\{p,q\}< \infty$, then $\mathsf{C}_{\mathrm{c}}^\infty({\mathbb{R}}^m)$ is dense in each $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$). Moreover, all bounded antilinear functionals on $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$ can be obtained in this way.
\(vi) Suppose that $a \in \mathsf{C}^\infty({\mathbb{R}}^m)$, and that for some $n \in {\mathbb{R}}$ and all $\xi\in{\mathbb{R}}^m$, $\alpha\in{\mathbb{N}}^{m}_{m+1}$, one has $$|\partial^\alpha a(\xi)|\leq C (1 + |\xi|)^{n - |\alpha|}.$$ Then for all $$(\beta,p,q) \in {\mathbb{R}}\times [1, \infty] \times [1, \infty],$$ the Fourier multiplication operator $f \mapsto {\mathcal F}^{-1}(a {\mathcal F} f)$ maps $\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$ into $\mathsf{B}^{\beta-n}_{p,q}({\mathbb{R}}^m)$, and the norm of the latter operator can be estimated by $$C \sup_{\alpha \in {\mathbb{N}}^m_{m+1},\xi\in{\mathbb{R}}^m} \left|(1 + |\xi|)^{|\alpha|-n} \partial^\alpha a(\xi)\right|,$$ for some $C >0$ independent of $a$ (cf. [@amann]).
\(vii) If $a \in \mathsf{W}^{\infty,\infty}({\mathbb{R}}^m)$ and $f \in \mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$, then one has $af \in \mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)$. More precisely, there exist $C>0$, $N \in {\mathbb{N}}$, independent of $a$ and $f$, such that $$\|af\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)} \leq C\left(\|a\|_{\mathsf{L}^\infty({\mathbb{R}}^m)}
\|f\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m)} + \|a\|_{\mathsf{W}^{N,\infty}({\mathbb{R}}^m)}
\|f\|_{\mathsf{B}^{\beta-1}_{p,q}({\mathbb{R}}^m)}\right).$$
\(viii) Let $0\leq \chi_0 \in C_{\mathrm{c}}^\infty({\mathbb{R}}^m)$ be such that for some $\delta >0$ one has $$\mathrm{supp}(\chi_0)\subset[-\delta, \delta]^m, \>\chi_0 = 1 \text{ in } [-\delta/2, \delta/2]^m.$$ For any $j \in {\mathbb{Z}}^m$ set $$\chi_j(x):= \chi_0(x - \delta j/2 ), \chi(x):=
\sum_{j \in {\mathbb{Z}}^m} \chi_j(x), \psi_j(x) := \frac{\chi_j(x)}{\chi(x)}.$$ Then for all $ \beta \in {\mathbb{R}}$, $ p \in [1, \infty]$, there exist $C_1,C_2>0$ such that for all $f \in \mathsf{B}^\beta_{p,p}({\mathbb{R}}^m)$ it holds that $$C_1 \|f\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m)} \leq \| (\|\psi_j f\|_{\mathsf{B}^\beta_{p,p}
({\mathbb{R}}^m)})_{j \in {\mathbb{Z}}^m}\|_{\ell ^p({\mathbb{Z}}^m)} \leq C_2 \|f\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m)}.$$ With these preparations, we can now give the proof of Proposition \[pr2\]:
We prove the result in several steps.
[*Step 1 (constant coefficients): Let $$Q = \sum_{\alpha \in {\mathbb{N}}_n^m}Q_\alpha \partial^\alpha,
\text{ with }Q_\alpha \in \mathrm{Mat}({\mathbb{C}}; \ell \times \ell),$$ and suppose that for some $\theta_0 \in (-\pi, \pi]$ and all $$(\xi, r) \in ({\mathbb{R}}^m \times [0, \infty)) \setminus \{(0, 0)\},$$ the $\ell \times \ell$ matrix $r^n \mathrm{e}^{\mathrm{i}\theta_0} - \sigma_{Q}(\mathrm{i}\xi)$ is invertible, and that there exists $C >0$ such that for all $(\xi, r)$ as above one has $$\label{eq4}
|(r^n \mathrm{e}^{\mathrm{i}\theta_0} - \sigma_{Q}(\mathrm{i}\xi))^{-1}| \leq C(r + |\xi|)^{-n}.$$ Then for any $\beta \in {\mathbb{R}}$, $1 \leq p, q \leq \infty$, there exists $R \geq 0$ such that, if $r \geq R$ and $g \in \mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, the system (\[eq1\]) has a unique solution $u\in\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. Moreover, there exists a constant $C_0 >0$, which only depends on $\beta$, $p, q$, the constant $C$ in (\[eq4\]) and on ${\displaystyle \max_{\alpha \in {\mathbb{N}}^m_n}}$ $|Q_\alpha|$, such that for all $r \geq R$ one has $$r^n \|u\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} + \|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\leq C_0 \|g\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$ By interpolation, we obtain also, for every $\theta \in [0, 1]$ and $r \geq R$, $$\label{eq5A}
\|u\|_{\mathsf{B}^{\beta+\theta n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} \leq
C_0 r^{(\theta - 1)n}\|g\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$* ]{}
In order to prove the statement from Step 1, we start by assuming that $Q$ coincides with its principal part $Q_n: = \sum_{|\alpha| = n} Q_\alpha \partial^\alpha$. Then, employing the Fourier transform, it is easily seen that for any $r> 0$, $g\in \mathsf{S}'({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, the only possible solution $u\in\mathsf{S}'({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ of (\[eq1\]) is $$u = {\mathcal F}^{-1}\left((r^n \mathrm{e}^{\mathrm{i}\theta_0} -
\sigma_{Q}(\mathrm{i}\xi))^{-1} {\mathcal F} g\right).$$ Observe that $(r^n \mathrm{e}^{\mathrm{i}\theta_0} - \sigma_{Q}(\mathrm{i}\xi))^{-1}$ is positively homogeneous of degree $-n$ in the variables $$(r, \xi) \in ([0, \infty) \times {\mathbb{R}}^m) \setminus \{(0,0)\}.$$ So for all $ \alpha \in {\mathbb{N}}_n^m$, the matrix $\partial_\xi^\alpha (r^n \mathrm{e}^{\mathrm{i}\theta_0}- \sigma_{Q}(\mathrm{i}\xi))^{-1}$ is positively homogeneous of degree $-n - |\alpha|$ in these variables, implying $$\left|\partial_\xi^\alpha(r^n \mathrm{e}^{\mathrm{i}\theta_0} - \sigma_{Q}(\mathrm{i}\xi))^{-1}\right|
\leq C(\alpha) (r + |\xi|)^{-n-|\alpha|}.$$ It is easily seen that $C(\alpha)$ can be estimated in terms of the constant $C$ in (\[eq4\]) and of ${\displaystyle \max_{\alpha \in {\mathbb{N}}^m_n}}$ $|Q_\alpha|$. We deduce from (vi) that, for all $r > 0$ and all $g \in \mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, the problem $$\label{eq5}
r^n \mathrm{e}^{\mathrm{i}\theta_0}u(x) - Q_n u(x) = g(x), \quad x \in {\mathbb{R}}^m$$ has a unique solution $u$ in $\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, and also that for all $r_0>0$ there is $C(r_0)>0$ such that for all $r \geq r_0 $ one has $$\|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} \leq
C(r_0) \|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$ The latter inequality together with (\[eq5\]) also gives $$\begin{array}{ll}
\|u\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} & \leq
r^{-n} (\|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|Q_n u\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)})
\\ \\
& \leq C_1(r_0) r^{-n} (\|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|u\|_{\mathsf{B}^{n+\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)})
\\ \\
&\leq C_2(r_0) r^{-n}\|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)},
\end{array}$$ and now the estimate (\[eq5A\]) follows directly by interpolation (see (iv)). Now we extend the previous facts from $Q_n$ to $Q$, taking $r$ sufficiently large. In fact, we write (\[eq1\]) in the form $$r^n \mathrm{e}^{\mathrm{i}\theta_0}u(x) - Q_n u(x) = (Q - Q_n)u(x) + g(x).$$ Taking $h:= r^n \mathrm{e}^{\mathrm{i}\theta_0}u - Q_n u$ as new unknown, we obtain $$\label{eq7}
h - (Q - Q_n) (r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q_n)^{-1} h = g.$$ We have $$\begin{array}{ll}
&\|(Q - Q_n) (r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q_n)^{-1}h\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\\ \\
&\leq C_0 \|(r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q_n)^{-1}h\|_{\mathsf{B}^{\beta+n-1}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\leq C_1 r^{-1} \|h\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.
\end{array}$$ So, if $C_1 r^{-1} < 1$, then (\[eq7\]) has a unique solution $h\in\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ and, in case $C_1 r^{-1} \leq \frac{1}{2}$ such solution can be estimated in the form $$\|h\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} \leq 2 \|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$ So the previous estimates and results can be extended from $Q_n$ to $Q$.
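A minimal numerical counterpart of the constant coefficient case treated in Step 1 (an added sketch, not from the original text): we take $m=\ell=1$, $n=2$, $Q=-\partial^{2}$, $\theta_0=\pi$, and replace the whole-line Fourier multiplier by a discrete FFT on a large periodic box, which is only an approximation of the actual setting.

```python
import numpy as np

# Numerical sketch (added; not from the paper) of the constant coefficient
# case: m = ell = 1, n = 2, Q = -d^2/dx^2, theta_0 = pi, so that the equation
# r^2 e^{i pi} u - Q u = -r^2 u + u'' = g is solved by the Fourier multiplier
# u = F^{-1}[ (-(r^2 + xi^2))^{-1} F g ].  The whole-line transform is replaced
# by a discrete FFT on a large periodic box, which is only an approximation.

L, N, r = 40.0, 2 ** 12, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

g = np.exp(-x ** 2)                                   # a rapidly decaying datum
u = np.fft.ifft(np.fft.fft(g) / (-(r ** 2 + xi ** 2))).real

u_xx = np.fft.ifft((1j * xi) ** 2 * np.fft.fft(u)).real
residual = np.max(np.abs(-r ** 2 * u + u_xx - g))
print(residual)                                       # tiny (spectral accuracy)
```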
[*Step 2 (a priori estimate for solutions in $\mathsf{B}^{\beta+n}_{p,q}$ with small support): Let $\beta \in {\mathbb{R}}$, $1 \leq p, q \leq \infty$. Then there exist $r_0 ,\delta, C >0$ with the following property: if $r\geq r_0$ and $u \in \mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ satisfies $$r^n \mathrm{e}^{\mathrm{i}\theta_0} u - Q u = g,\> \mathrm{supp}(u)\subset
\prod_{j=1}^m [x^0_j - \delta, x^0_j + \delta]\text{ for some $x^0 \in {\mathbb{R}}^m$},$$ then one has $$\label{eq8}
r^n \|u\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} + \|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\leq C \|g\|_{\mathsf{B}^\beta_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$* ]{}
In order to prove this, we define the constant coefficient operator $Q(x^0,\partial):=\sum_{\alpha\in{\mathbb{N}}^m_n}Q_{\alpha}(x^0)\partial^{\alpha}$ and observe that $$r^n \mathrm{e}^{\mathrm{i}\theta_0} u(x) - Q(x^0,\partial)u(x) = (Q - Q(x^0,\partial))u(x) + g(x).$$ Let $\epsilon >0$. For any $\phi \in \mathsf{C}_{\mathrm{c}}^\infty ({\mathbb{R}}^m)$ which satisfies $$\begin{aligned}
&\mathrm{supp}(\phi)\subset \prod_{j=1}^m [x^0_j - 2\delta, x^0_j + 2\delta],
\\
&\phi = 1 \text{ in } \prod_{j=1}^m [x^0_j - \delta, x^0_j + \delta], \>\|\phi\|_{\mathsf{L}^\infty({\mathbb{R}}^m)}
= 1,\end{aligned}$$ we have $$(Q - Q(x^0,\partial))u = \phi (Q - Q(x^0,\partial))u.$$ So, taking $\delta$ sufficiently small, from (iv) and (vii) we obtain $$\|(Q - Q(x^0,\partial))u\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} \leq
\epsilon \|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
C(\epsilon) \|u\|_{\mathsf{B}^{\beta+n-1}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$ Observe that $\delta$ can be chosen independent of $x^0$. So, from Step 1 with $\theta=(n-1)/n$ in (\[eq5A\]), taking $r$ sufficiently large (uniformly in $x^0$) we obtain $$\begin{array}{c}
r^n \|u\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} + r\|u\|_{\mathsf{B}^{\beta+n-1}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\\ \\
\leq C_0\left(\epsilon \|u\|_{\mathsf{B}^{\beta+n}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
C(\epsilon) \|u\|_{\mathsf{B}^{\beta+n-1}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|g\|_{\mathsf{B}^{\beta}_{p,q}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\right).
\end{array}$$ Taking $\epsilon$ so small that $C_0 \epsilon \leq \frac{1}{2}$ and $r$ so large that $C_0 C(\epsilon) \leq r$, we deduce (\[eq8\]).
[*Step 3 (a priori estimate for arbitrary solutions in $\mathsf{B}^{\beta+n}_{p,p}$): For any $\beta \in {\mathbb{R}}$, $p \in [1, \infty)$, there exist $C_0, r_0>0$ such that if $r \geq r_0$ and $u \in \mathsf{B}^{\beta +n}_{p,p}({\mathbb{R}}^m; {\mathbb{C}}^\ell)$ is a solution to (\[eq1\]), then $$r^n \|u\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} + \|u\|_{\mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\leq C_0 \|g\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}.$$* ]{}
To see this, we take $\delta,r_0>0$ so that the conclusion in Step 2 holds. We consider a family of functions $(\psi_j)_{j \in {\mathbb{Z}}^m}$ as in (viii). Let $u \in \mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ solve (\[eq1\]), with $r \geq r_0$. For each $j \in {\mathbb{Z}}^m$ we have $$r^n \mathrm{e}^{\mathrm{i}\theta_0} \psi_j u - Q(\psi_j u) = \psi_j g - Q_j u,$$ with the commutator $$Q_j:=[Q,\psi_j] = \sum_{1 \leq |\alpha| \leq n} Q_\alpha \sum_{\gamma < \alpha}
{\alpha \choose \gamma} \partial^{\alpha - \gamma} \psi_j \partial^\gamma.$$ We set $${\mathbb{Z}}_j:= \{i| \>i \in {\mathbb{Z}}^m, \> \mathrm{supp}(\psi_i) \cap \mathrm{supp}(\psi_j) \neq \emptyset\}.$$ Then $Q_j u = \sum_{i \in {\mathbb{Z}}_j} Q_j(\psi_i u)$, so that $$\|Q_j u\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} \leq C_1 \sum_{i \in {\mathbb{Z}}_j}
\|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)},$$ with $C_1$ independent of $j$. So, from Step 2, we have, for each $j \in {\mathbb{Z}}^m$, $$\label{eq12}
\begin{array}{c}
r^n \|\psi_j u\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
r \|\psi_j u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|\psi_j u\|_{\mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\\ \\
\leq C_2\left( \|\psi_j g\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\sum_{i \in {\mathbb{Z}}_j} \|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\right).
\end{array}$$ We observe that ${\mathbb{Z}}_j$ has at most $7^m$ elements. So we have, in case $p < \infty$, $$\Big(\sum_{i \in {\mathbb{Z}}_j} \|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\Big)^p \leq
7^{m(p-1)} \sum_{i \in {\mathbb{Z}}_j} \|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}^p$$ and $$\begin{aligned}
&\sum_{j \in {\mathbb{Z}}^m}\Big(\sum_{i \in {\mathbb{Z}}_j} \|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\Big)^p
\\
&\leq 7^{m(p-1)} \sum_{j \in {\mathbb{Z}}^m}\sum_{i \in {\mathbb{Z}}_j} \|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}^p
\\
&= 7^{m(p-1)} \sum_{i \in {\mathbb{Z}}^m} \Big(\sum_{j \in {\mathbb{Z}}_i} 1\Big)
\|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}^p
\\
&\leq 7^{mp}
\left\|(\|\psi_i u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)})_{i \in {\mathbb{Z}}^m}\right\|_{\ell^p({\mathbb{Z}}^m)}^p. \end{aligned}$$ So, from (\[eq12\]) and (viii), we deduce $$\begin{aligned}
&r^n \|u\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
r \|u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} +
\|u\|_{\mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\\
&\leq C_3 \left(r^n \left\|\big(\|\psi_j u\|_{\mathsf{B}^\beta_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\big)_{j \in {\mathbb{Z}}^m} \right\|_{\ell^p({\mathbb{Z}}^m)}\right.
\\
&\>\>\>\>\>\>\>\>\>\>\>\> +
r \left\|\big(\|\psi_j u\|_{\mathsf{B}^{\beta+n-1}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)}
\big)_{j \in {\mathbb{Z}}^m} \right\|_{\ell^p({\mathbb{Z}}^m)}
\\
&\>\>\>\>\>\>\>\>\>\>\>\>
\left.+ \left\|\big(\|\psi_j u\|_{\mathsf{B}^{\beta+n}_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\big)_{j \in {\mathbb{Z}}^m} \right\|_{\ell^p({\mathbb{Z}}^m)}\right)
\\
&\leq C_4 \left(\left\|\big( \|\psi_j g\|_{\mathsf{B}^\beta_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\big)_{j \in {\mathbb{Z}}^m} \right\|_{\ell^p({\mathbb{Z}}^m)} \right.
\\
&\>\>\>\>\>\>\>\>\>\>\>\>+ \left. \left\|\big(\|\psi_j u\|_{\mathsf{B}^{\beta+n-1}_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\big)_{j \in {\mathbb{Z}}^m} \right\|_{\ell^p({\mathbb{Z}}^m)}\right)
\\
&\leq C_5 \left(\|g\|_{\mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)} + \|u\|_{\mathsf{B}^{\beta+n-1}_{p,p}
({\mathbb{R}}^m, {\mathbb{C}}^\ell)}\right). \end{aligned}$$ Taking $r \geq C_5$, we get the conclusion.
[*Step 4: For any $\beta \in {\mathbb{R}}$, $p \in [1, \infty)$, there exists $r_0 \geq 0$ such that if $r \geq r_0$, $g \in \mathsf{B}^\beta_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, then (\[eq1\]) has a unique solution $u\in\mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$.* ]{}
The uniqueness follows from Step 3. We show the existence by a duality argument. We think of $r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q$ as an operator from $\mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ to $\mathsf{B}^{\beta}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. By Step 3, if $r$ is sufficiently large, its range is a closed subspace of $\mathsf{B}^{\beta}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. Assume that it does not coincide with the whole space. Then, applying a well-known consequence of the theorem of Hahn-Banach and (v), there exists $h \in \mathsf{B}^{-\beta}_{p',p'}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, $h \neq 0$, such that $$\left\langle(r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q)u, \overline h\right\rangle = 0
\>\text{ for all } u \in \mathsf{B}^{\beta+n}_{p,p}({\mathbb{R}}^m, {\mathbb{C}}^\ell).$$ This implies that $$\label{eq13}
(r^n \mathrm{e}^{-\mathrm{i}\theta_0} - Q^\ast) h = 0.$$ Now, it is easily seen that $Q^\ast$ satisfies the assumptions of Proposition \[pr2\] if we replace $\theta_0$ with $-\theta_0$. We deduce from Step 3 that, if $r$ is sufficiently large, (\[eq13\]) implies $h = 0$, a contradiction.
[*Step 5: For any $\beta \in {\mathbb{R}}$ there exists $r_0 \geq 0$ such that if $r \geq r_0$, $g \in \mathsf{B}^\beta_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, then (\[eq1\]) has a unique solution $u\in\mathsf{B}^{\beta+n}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$.* ]{}
In the proof of Lemma 2.4 from [@davide] it is shown that for any $g \in \mathsf{B}^\beta_{\infty,\infty}({\mathbb{R}}^m)$, there is a sequence $(g_k)_{k \in {\mathbb{N}}}$ in $\mathsf{S}({\mathbb{R}}^m)$ converging to $g$ in $\mathsf{S}'({\mathbb{R}}^m)$ and bounded in $\mathsf{B}^\beta_{\infty,\infty}({\mathbb{R}}^m)$. So we take a sequence $(g_k)_{k \in {\mathbb{N}}}$ in $\mathsf{S}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ converging to $g$ in $\mathsf{S}'({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ and bounded in $\mathsf{B}^\beta_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. We fix $\gamma$ larger than $\beta + \frac{m}{2}$ and think of $g_k$ as an element of $\mathsf{B}^\gamma_{2,2}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. Then, by Step 4, if $r$ is sufficiently large, the equation $$r^n \mathrm{e}^{\mathrm{i}\theta_0} u_k - Q u_k = g_k$$ has a unique solution $u_k$ in $\mathsf{B}^{\gamma+n}_{2,2}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. By (iii), $u_k \in \mathsf{B}^{\beta+n}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ and, by Step 3, if $r$ is sufficiently large, the sequence $(u_k)_{k \in {\mathbb{N}}}$ is bounded in $\mathsf{B}^{\beta+n}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$, because $(g_k)_{k \in {\mathbb{N}}}$ is bounded in $\mathsf{B}^{\beta}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. Then, by (v) and the theorem of Alaoglu, we may assume, possibly passing to a subsequence, that there exists $u\in\mathsf{B}^{\beta+n}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell)$ such that $$\lim_{k \to \infty} u_k = u\>\text{ in the weak topology }\>\
w(\mathsf{B}^{\beta+n}_{\infty,\infty}({\mathbb{R}}^m, {\mathbb{C}}^\ell),
\mathsf{B}^{-\beta-n}_{1,1}({\mathbb{R}}^m, {\mathbb{C}}^\ell)).$$ Such convergence implies convergence in $\mathsf{S}'({\mathbb{R}}^m, {\mathbb{C}}^\ell)$. So $$(r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q) u_k \to (r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q) u
\text{ as $k\to\infty$ in $\mathsf{S}'({\mathbb{R}}^m, {\mathbb{C}}^\ell)$}.$$ We deduce that $(r^n \mathrm{e}^{\mathrm{i}\theta_0} - Q) u = g$.
[*Step 6: Full statement.*]{}
This is a simple consequence of Step 4, Step 5 and the interpolation property (iv).
Acknowledgements {#acknowledgements .unnumbered}
================
B.G. has been financially supported by the SFB 647 “Space-Time-Matter”. D.P. is member of Italian CNR-GNAMPA and has been partially supported by PRIN 2010 M.I.U.R. “Problemi differenziali di evoluzione: approcci deterministici e stocastici e loro interazioni".
[99]{}
Amann, H.: *Operator-valued Fourier multipliers, vector-valued Besov spaces, and applications.* Math. Nachr. 186 (1997), 5–56.
Aubin, T.: *Nonlinear analysis on manifolds. Monge-Ampère equations.* Grundlehren der Mathematischen Wissenschaften, 252. Springer-Verlag, New York, 1982.
Braverman, M. & Milatovich, O. & Shubin, M.: *Essential self-adjointness of Schrödinger-type operators on manifolds.* Russian Math. Surveys 57 (2002), no. 4, 641–692.
Friedrichs, K.O.: *The identity of weak and strong extensions of differential operators,* Trans. Amer. Math. Soc. 55 (1944), 132–151.
Grigor’yan, A.: [*Heat kernel and analysis on manifolds.*]{} AMS/IP Studies in Advanced Mathematics, 47. American Mathematical Society, Providence, RI; International Press, Boston, MA, 2009.
Chazarain, J. & Piriou, A.: *Introduction to the theory of linear partial differential equations.* Studies in Mathematics and its Applications, 14. North-Holland Publishing Co., Amsterdam-New York, 1982.
Guidetti, D.: *On Elliptic Problems in Besov Spaces.* Math. Nachr. 152 (1991), 247–275.
Guidetti, D.: *On elliptic systems in $L^1$*. Osaka J. Math. 30 (1993), no. 3, 397–429.
Güneysu, B. & Pflaum, M.: *The profinite dimensional manifold structure of formal solution spaces of formally integrable PDE’s*. Preprint, arXiv:1308.1005.
Hebey, E.: *Sobolev spaces on Riemannian manifolds.* Lecture Notes in Mathematics, 1635. Springer-Verlag, Berlin, 1996.
Lawson, H.B. & Michelsohn, M.-L.: *Spin geometry.* Princeton Mathematical Series, 38. Princeton University Press, Princeton, NJ, 1989.
Meyers, N.G. & Serrin, J.: *H=W*. Proc. Nat. Acad. Sci. U.S.A. 51 (1964), 1055–1056.
Müller, W. & Salomonsen, G.: *Scattering theory for the Laplacian on manifolds with bounded curvature.* J. Funct. Anal. 253 (2007), no. 1, 158–206.
Nicolaescu, L.I.: *Lectures on the geometry of manifolds.* World Scientific Publishing Co., Inc., River Edge, NJ, 1996.
Rother, W.: *The construction of a function $f$ satisfying $\Delta f\in L^1$ and $\partial_{ij}f\notin L^1$.* Arch. Math. (Basel) 48 (1987), no. 1, 88–91.
Salomonsen, G.: *Equivalence of Sobolev spaces.* Result. Math. 39 (2001), 115–130.
Waldmann, S.: *Geometric wave equations.* arXiv:1208.4706v1.
[^1]: Here, $[\alpha]:=\max\{j|j\in{\mathbb{N}}, j<\alpha\}$
---
author:
- |
Liang-Jian Zou\
[*CCAST (World Laboratory) P.O.Box 8730, Beijing, 100080* ]{}\
[*Institute of Solid State Physics, Academia Sinica, P.O.Box 1129, Hefei 230031, China$^{*}$*]{}\
date:
title: |
**Hall Resistivity in Ferromagnetic Manganese-Oxide Compounds\
**
---
[**Abstract**]{}\
The temperature dependence and magnetic field dependence of the Hall effect and of the magnetic properties in manganese-oxide thin films are studied. The spontaneous magnetization and the Hall resistivity are obtained for various magnetic fields over the whole temperature range. It is shown that the Hall resistivity in a small magnetic field exhibits a maximum near the Curie point, while a strong magnetic field shifts the position of the Hall resistivity peak to much higher temperature and suppresses the peak value. The change of the Hall resistance in a strong magnetic field may be larger than that of the diagonal one. The abnormal Hall resistivity in ferromagnetic manganese-oxide thin films is attributed to spin-correlation fluctuation scattering.
[*PACS No.*]{}: 73.50.Jt, 73.50.Bk
[*Keywords:*]{} Hall Resistivity, Magnetic Thin Film, Manganese-Oxide
[**I. INTRODUCTION**]{}
Recently, a colossal negative magnetoresistance (MR) effect has been found in ferromagnetic perovskite-like La$_{1-x}$R$_{x}$MnO$_{3}$ and Nd$_{1-x}$R$_{x}$MnO$_{3}$ (R = Ba, Sr, Ca, Pb, etc.) \[ 1 - 4 \] thin films. In epitaxial La$_{1-x}$Ca$_{x}$MnO$_{3}$ and Nd$_{1-x}$Sr$_{x}$MnO$_{3}$ thin films, the resistivity decreases by several orders of magnitude under a strong applied magnetic field, a change much larger than that found in ferromagnetic/nonmagnetic metallic multilayers. Such a huge MR change has potential applications in magnetic sensors, magnetic recording and many other areas, and it points to a possible MR mechanism different from that in magnetic multilayers; it has therefore attracted extensive attention and interest in the past two years \[ 5 - 12 \].
Undoped La$MnO_{3}$ is an antiferromagnetic insulator \[ 13 - 16 \]; its magnetic structure and magnetic properties were studied in the early 1950s. After the substitution of a part of the trivalent La or Nd ions by Ca, Sr, Pb, Ba or other divalent elements, Mn$^{+3}$ and Mn$^{+4}$ ions coexist; the valence fluctuation between them is thus assumed to be important and may contribute to the hopping conductivity and other transport properties. Upon the substitution of La or Nd ions, the La$_{1-x}$R$_{x}$MnO$_{3}$ and Nd$_{1-x}$R$_{x}$MnO$_{3}$ systems may undergo a metal-insulator transition, and the electronic states of the doped systems at the edge of the metal-insulator transition can be strongly affected by the long-range magnetic order and the external magnetic field. Several mechanisms have been proposed to explain the colossal MR effect in these systems \[ 8 - 11 \]. The magnetic polaron mechanism \[ 1 - 3 \], the spin disorder scattering \[ 8, 9 \] and the field-induced metal-insulator transition \[ 11 \] seem unable to explain the colossal MR behavior in strong magnetic fields over the whole temperature range satisfactorily.
In a recent study \[ 12 \], another possible mechanism for these ferromagnetic La$_{1-x}$R$_{x}$MnO$_{3}$ and Nd$_{1-x}$R$_{x}$MnO$_{3}$ thin films was proposed to explain the colossal MR. It is suggested that the abnormally large MR in these compounds is induced by spin-spin correlation fluctuation scattering; the resulting magnetic properties and diagonal resistivity in different magnetic fields agree with the experimental results very well. The Hall effect is one of the important properties that may distinguish the present mechanism from the metal-insulator transition mechanism \[11\], so it is useful to predict the Hall resistivity behavior. In this letter, the Hall resistivity and the magnetic properties as functions of temperature T and magnetic field B are obtained. The rest of this letter is arranged as follows: in Sec. II the formalism is described; the results and discussion are given in Sec. III; and the conclusion is drawn in Sec. IV.
[**II. FORMALISM**]{}
In La$_{1-x}$R$_{x}$MnO$_{3}$ and Nd$_{1-x}$R$_{x}$MnO$_{3}$ compounds, the crystalline field splits the 3d energy level of the Mn ion into a low-energy triplet, t$_{2g}$, and a high-energy doublet, e$_{g}$. The three d-electrons of the Mn$^{+3}$ and Mn$^{+4}$ ions therefore first fill the low t$_{2g}$ band, and the extra d-electron of Mn$^{+3}$ has to fill the high e$_{g}$ band; these two bands are separated by about 1.5 $eV$ \[ 17 \]. The three d-electrons in the filled t$_{2g}$ band form a localized core spin through the strong Hund’s coupling \[ 13 - 16 \]; the core spins tend to align parallel through the double exchange interaction via Mn$^{+3}$-O$^{-2}$-Mn$^{+4}$ and form a ferromagnetic background. The electrons in the e$_{g}$ band are mobile and responsible for the electric conduction in these systems. In this framework, the model Hamiltonian for the mobile electrons moving in the ferromagnetic background reads: $$H=H_{0}+V$$ $$H_{0}=\sum_{k\sigma} (\epsilon_{k}-\sigma \mu_{B}B) c^{\dag}_{k \sigma}
c_{k \sigma} -\sum_{<ij>} {\it A} {\bf S}_{i} \cdot {\bf S}_{j}-
\sum_{i}g\mu_{B}BS^{z}_{i}$$ $$V = -\frac{J}{N} \sum_{ikq} \sum_{\mu \nu} e^{i{\bf q} {\bf R}_{i}}
{\bf S}_{i} \cdot c^{\dag}_{k+q \mu} {\bf \sigma}_{\mu \nu} c_{k\nu}$$ where H$_{0}$ describes the bare energies of the mobile d-electrons and of the ferromagnetic background, and V is the coupling between the mobile electrons and the core spins. In Eq.(2), $\epsilon_{k}$ represents the energy spectrum of the mobile (or conduction) electrons with respect to the Fermi energy E$_{F}$; [*A*]{} denotes the effective ferromagnetic exchange constant between manganese ions, and only the nearest-neighbor interaction is considered; -g$\mu_{B}$B is the Zeeman energy in the magnetic field [**B**]{}. In Eq.(3), the conduction electron is scattered from state k$\nu$ to state k+q$\mu$ by the localized spin ${\bf S_{i}}$; J denotes the coupling between the conduction electrons and the core spins. Note that the external magnetic field and the internal molecular field of the ferromagnetically ordered state split the conduction band; this splitting shifts the position of the conduction band with respect to the Fermi surface, so the mean-field spectrum of a conduction electron in state ${\bf k\sigma}$ is $\epsilon_{k\sigma}=\epsilon_{k}-\sigma (\mu_{B}B+ J<S^{z}>)$.
By the scattering theory \[12\], the lifetime of the conduction electrons between two scatterings, $\tau$, is given by: $$\tau^{-1} = \frac{\pi}{h} \frac{J^{2}D(0)}{4} \sum_{kq\sigma}
f_{k\sigma}(1-f_{k\sigma})f_{k+q\bar{\sigma}}(1-f_{k+q\bar{\sigma}})\left[<S^{-}_{q}S^{+}_{-q}>+<S^{+}_{q}S^{-}_{-q}>+8 <S^{z}_{q}S^{z}_{-q}>\right]$$ where D(0) is the density of states of the conduction electrons near the Fermi surface and $f_{k}=1/[e^{\beta (\epsilon_{k}-\epsilon_{F})}+1]$ is the Fermi-Dirac distribution function. One can then obtain the diagonal resistivity through the Drude formula, $\rho_{xx}$=m/(ne$^{2}$$\tau$), where n is the carrier concentration and m the effective mass. With the diagonal resistivity, one can easily derive the Hall conductivity in the steady state, $$\sigma_{H}=\frac{\sigma_{xx}^{2}}{ne} B_{eff}$$ or the Hall resistivity: $$\rho_{H}=\frac{ne}{B_{eff}} \rho_{xx}^{2} \qquad (5')$$ where B$_{eff}$ is the effective field, B$_{eff}$= $|{\bf B}+zA<{\bf S}>/\mu_{B}|$.
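For concreteness, the relations above can be evaluated numerically once $\tau$ is known; the following Python sketch implements the Drude formula and Eqs. (5) and (5') exactly as written, with the lifetime $\tau$ treated as a given input (in the full theory it follows from Eq. (4)) and with purely hypothetical parameter values that are not fitted to any La-Ca-Mn-O data.

```python
import numpy as np

e = 1.602e-19      # elementary charge [C]
mu_B = 9.274e-24   # Bohr magneton [J/T]
k_B = 1.381e-23    # Boltzmann constant [J/K]

def drude_rho_xx(tau, n, m_eff):
    """Diagonal resistivity rho_xx = m/(n e^2 tau) from the Drude formula."""
    return m_eff / (n * e**2 * tau)

def effective_field(B, z, A_kelvin, S_z):
    """Effective field B_eff = |B + z A <S^z>/mu_B|, with A given in kelvin."""
    return abs(B + z * A_kelvin * k_B * S_z / mu_B)

def hall_resistivity(rho_xx, n, B_eff):
    """Hall resistivity as written in Eq. (5'): rho_H = n e rho_xx^2 / B_eff."""
    return n * e * rho_xx**2 / B_eff

# hypothetical illustrative numbers only (not fitted parameters)
rho_xx = drude_rho_xx(tau=1.0e-14, n=1.0e27, m_eff=9.11e-31)
B_eff = effective_field(B=15.0, z=4, A_kelvin=145.5, S_z=1.5)
print(rho_xx, hall_resistivity(rho_xx, n=1.0e27, B_eff=B_eff))
```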
From Eqs.(4) and (5), one can qualitatively understand the temperature dependence of the Hall resistivity. In the zero-temperature limit (T $\rightarrow$ 0 K), $\rho_{xx}$ approaches zero because of the Pauli exclusion principle, so $\rho_{H}$(T $\rightarrow$ 0) $ \approx 0$. In the high-temperature limit (T$\ >> T_{c}, E_{F}$), $\rho_{xx}$ in different magnetic fields approaches the same value and the Hall resistivity mainly depends on the effective magnetic field; in this situation the diagonal and the Hall resistivities may exhibit somewhat different behaviors. In the intermediate temperature range (0$<$k$_{B}$T $<<$ E$_{F}$), especially near the Curie point, the diagonal resistivity exhibits a maximum, and therefore the Hall resistivity also exhibits a maximum; however, because of the effect of the magnetic field, the maximum of the Hall resistivity appears at a lower temperature than that of the diagonal resistivity.\
[**III. RESULTS AND DISCUSSIONS** ]{}
In quasi-two-dimensional systems, the lattice constant is a=3.89 $\AA$ and the coordination number is z=4. The theoretical parameters are taken from the La-Ca-Mn-O system, while the results can be applied to other similar systems. In the calculation, several parameters need to be determined from experiments and electronic-structure calculations; for simplicity, the reduced resistivity is adopted here.
The temperature dependence of the spontaneous magnetization and of the Hall resistivity is shown in Fig.1; for comparison, the diagonal resistivity is also shown. The ferromagnetic-paramagnetic transition occurs within a wide temperature range: because of the critical fluctuations and the low-dimensional character, the temperature dependence of the spontaneous magnetization has a long tail, and the transition is broadened significantly. Electrons moving in the transverse direction are also scattered by the spin-spin correlation fluctuations near the critical point, so the Hall resistivity also exhibits a maximum. In the meantime, the magnetic field [**B**]{} affects the cyclotron motion of the conduction electrons and the Hall resistivity decreases with [**B**]{}, so the maximum of the Hall resistivity does not appear at the same position as that of the diagonal resistivity; the position of the maximum moves to lower temperature. This can be seen clearly in Fig.1.
The field dependence of the Hall resistivity at different temperatures is shown in Fig.2. One notes that the Hall resistivity decreases monotonically as the external magnetic field is increased. The decrease of the resistivity with increasing magnetic field is due to the suppression of the spin-spin correlation fluctuation scattering. Below T$_{c}$, the spin-spin correlation is long ranged and the applied magnetic field has a strong effect on the scattering, so the change of the Hall resistivity is large (see Curves (1) and (2)); above T$_{c}$, the short-range spin-spin correlation fluctuations dominate the scattering of the conduction electrons and the external magnetic field has a weak effect, so the Hall resistivity exhibits a weak dependence on the magnetic field.
Since the transverse motion of the conduction electrons is affected both by the magnetic field and by the spin-correlation scattering, the change of the Hall resistivity with the magnetic field may be larger than that of the diagonal resistivity. In the present mechanism, as seen from curve (1) in Fig.2, the Hall resistivity may decrease by two to three orders of magnitude; for comparison, the diagonal resistivity changes by only one to two orders of magnitude.
[**IV. CONCLUSION**]{}
In conclusion, we have studied the Hall resistivity and the magnetic properties of the ferromagnetic manganese-oxide compounds. The Hall resistivity exhibits a maximum below the peak position of the diagonal resistivity, and the change of the Hall resistivity in a magnetic field can be larger than that of the diagonal resistivity. The spin-spin correlation scattering between the conduction electrons and the localized spins is the intrinsic mechanism of the abnormal Hall resistivity in the La-R-Mn-O and Nd-R-Mn-O compounds near the transition point. Further experiments are desired to verify our predictions.\
Acknowledgement: This work is financially supported by the National Natural Science Funds of China under Grant No. 19577101.
REFERENCES
\* mailing address
1. R. M. Kusters, J. Singleton, D. A. Keen, R. McGreevy and W. Hayes, [*Physica*]{} [**B155**]{} 362 (1989).
2. R. Von Helmolt, J. Wecker, B. Holzapfel, L. Schultz and K. Samwer, [*Phys. Rev. Lett.*]{} [**71**]{}, 2331 (1993); R. Von Helmolt, J. Wecker, and K. Samwer, [*J. Appl. Phys.*]{} [**76**]{}, 6925 (1994).
3. S. Jin, T. H. Tiefel, M. McCormack, R. A. Fastnacht, R. Ramesh, and L. H. Chen, [*Science*]{} [**264**]{}, 413 (1994); [*J. Appl. Phys.*]{} [**76**]{}, 6929 (1994).
4. S. Jin, H. M. O’Bryan, T. H. Tiefel, M. McCormack, and W. W. Rhodes, [*Appl. Phys. Lett.*]{} [**66**]{}, 382 (1995).
5. G. C. Xiong, Q. Li, H. L. Ju, S. N. Mao, L. Senapati, X. X. Xi, R. L. Greene, and T. Venkatesan, [*Appl. Phys. Lett.*]{} [**66**]{}, 1427 (1995).
6. H. L. Ju, C. Kwon, Qi Li, R. L. Greene and T. Venkatesan, [*Appl. Phys. Lett.*]{} [**65**]{}, 2106 (1994).
7. P. Schiffer, A. P. Ramirez, W. Bao and S-W. Cheong, [*Phys. Rev.Lett.*]{} [**75**]{}, 3336 (1995).
8. N. Furukawa, [*J. Phys. Soc Jpn*]{}, [**63**]{}, 3214 (1994).
9. J. Inoue and S. Maekawa, [*Phys. Rev. Lett*]{} [**74**]{}, 3407 (1995).
10. A. J. Millis , P. B. Littlewood, and B. I. Shraiman, [*Phys. Rev. Lett*]{} [**74**]{}, 3407 (1995).
11. A. Urushibara, Y. Moritomo, T. Arima, A. Asamitsu, G. Kido and Y. Tokura, [*Phys. Rev. B*]{} [**51**]{}, 14103 (1995); Y. Moritomo, A. Asamitsu, and Y. Tokura, [*Phys. Rev. B*]{} [**51**]{}, 16491 (1995).
12. Liang-Jian Zou, X. G. Gong, Q. Q. Zheng and C. Y. Pan, [*J. Appl. Phys.*]{}, [**78**]{}, No.4 (1996); [*Phys. Rev. B*]{}, submitted.
13. C. Zener, [*Phys. Rev.*]{}, [**81**]{}, 440 (1951); [**82**]{}, 403 (1951).
14. E. O. Wollan and W. C. Koehler, [*Phys. Rev.*]{}, [**100**]{}, 545 (1955).
15. J. B. Goodenough, [*Phys. Rev.*]{}, [**100**]{}, 564 (1955).
16. P. G. De Gennes, [*Phys. Rev.*]{}, [**118**]{}, 141 (1960).
17. J. M. D. Coey, M. Viret and L. Ranno, [*Phys. Rev. Lett.*]{}, [**75**]{}, 3910 (1995).
Figure Captions
Fig. 1. Temperature dependence of the spontaneous magnetization, the Hall resistivity and the diagonal resistivity in a magnetic field. Theoretical parameters for the calculation: [*A*]{}=145.5 K, [*J*]{}=350 K, B=15 T. (1) Spontaneous magnetization, (2) Hall resistivity, and (3) diagonal resistivity.
Fig. 2. Dependence of the Hall resistivity on the magnetic field at different temperatures. Theoretical parameters: [*A*]{}=145.5 K, [*J*]{}=350 K. (1) T=50 K, (2) T=100 K, (3) T=80 K.
---
abstract: |
This work is concerned with the rigorous analysis of the Generalized Multiscale Finite Element Methods (GMsFEMs) for elliptic problems with high-contrast heterogeneous coefficients. GMsFEMs are popular numerical methods for solving flow problems with heterogeneous high-contrast coefficients, and they have demonstrated extremely promising numerical results for a wide range of applications. However, the mathematical justification of the efficiency of the method is still largely missing.
In this work, we analyze two types of multiscale basis functions, i.e., local spectral basis functions and basis functions of local harmonic extension type, within the GMsFEM framework. These constructions have found many applications in the past few years. We establish their optimal convergence in the energy norm under the very mild assumption that the source term belongs to some weighted $L^2$ space, and without the help of any oversampling technique. Furthermore, we analyze the model order reduction of the local harmonic extension basis and prove its convergence in the energy norm. These theoretical findings shed light on the mechanism behind the efficiency of the GMsFEMs.
[**Keywords:**]{} multiscale methods, heterogeneous coefficient, high-contrast, elliptic problems, spectral basis function, harmonic extension basis functions, GMsFEM, proper orthogonal decomposition
author:
- 'Guanglian Li[^1]'
bibliography:
- 'reference.bib'
title: On the Convergence Rates of GMsFEMs for Heterogeneous Elliptic Problems without Oversampling Techniques
---
Introduction
============
The accurate mathematical modeling of many important applications, e.g., composite materials, porous media and reservoir simulation, calls for elliptic problems with heterogeneous coefficients. In order to adequately describe the intrinsic complex properties in practical scenarios, the heterogeneous coefficients can have both multiple inseparable scales and high contrast. Due to the disparity of scales, the classical numerical treatment becomes prohibitively expensive and even intractable for many multiscale applications. Nonetheless, motivated by the broad spectrum of practical applications, a large number of multiscale model reduction techniques, e.g., multiscale finite element methods (MsFEMs), heterogeneous multiscale methods (HMMs), variational multiscale methods, the flux norm approach, generalized multiscale finite element methods (GMsFEMs) and localized orthogonal decomposition (LOD), have been proposed in the literature [@MR1455261; @MR1979846; @MR1660141; @MR2721592; @egh12; @MR3246801; @li2017error] over the last few decades. They have achieved great success in the efficient and accurate simulation of heterogeneous problems. Amongst these numerical methods, the GMsFEM [@egh12] has demonstrated extremely promising numerical results for a wide variety of problems, and thus it is becoming increasingly popular. However, the mathematical understanding of the method remains largely missing, despite abundant successful empirical evidence. The goal of this work is to provide a mathematical justification, by rigorously establishing the optimal convergence of the GMsFEMs in the energy norm without any restrictive assumptions or oversampling techniques.
We first formulate the heterogeneous elliptic problem. Let $D\subset
\mathbb{R}^d$ ($d=1,2,3$) be an open bounded Lipschitz domain [with a boundary $\partial D$]{}. Then we seek a function $u\in V:=H^{1}_{0}(D)$ such that $$\label{eqn:pde}
\begin{aligned}
\mathcal{L}u:=-\nabla\cdot(\kappa\nabla u)&=f &&\quad\text{ in }D,\\
u&=0 &&\quad\text{ on } \partial D,
\end{aligned}$$ where the force term $f\in L^2(D)$ and the permeability coefficient $\kappa\in L^{\infty}(D)$ with $\alpha\leq\kappa(x)
\leq\beta$ almost everywhere for some lower bound $\alpha>0$ and upper bound $\beta>\alpha$. We denote by $\Lambda:=
\frac{\beta}{\alpha}$ the ratio of these bounds, [which reflects the contrast of the coefficient $\kappa$]{}. Note that the existence of multiple scales in the coefficient $\kappa$ renders directly solving Problem challenging, since resolving the problem to the finest scale would incur a huge computational cost.
The goal of the GMsFEM is to efficiently capture the large-scale behavior of the solution $u$ locally without resolving all the microscale features within. To realize this desirable property, we first discretize the computational domain $D$ into a coarse mesh $\mathcal{T}^H$. Over $\mathcal{T}^H$, we define the classical multiscale basis functions $\{\chi_i\}_{i=1}^{N}$, with $N$ being the total number of coarse nodes. Let $\omega_i:=\text{supp}
(\chi_i)$ be the support of $\chi_i$, which is often called a local coarse neighborhood below. To accurately approximate the local solution $u|_{\omega_i}$ (restricted to $\omega_i$), we construct a local approximation space. In practice, two types of local multiscale spaces are frequently employed: local spectral space ($V_{\text{off}}^{{\mathrm{S}_i}, \ell_i^{{\mathrm{I}}}}$, of dimension $\ell_i^{{\mathrm{I}}}$) and local harmonic space $V_{\text{snap}}^{{\mathrm{H}_i}}$. The dimensionality of the local harmonic space $V_{\text{snap}}^{{\mathrm{H}_i}}$ is problem-dependent, and it can be extremely large when the microscale within the coefficient $\kappa$ tends to zero. Hence, a further local model reduction based on proper orthogonal decomposition (POD) in $V_{\text{snap}}^{{\mathrm{H}_i}}$ is often employed. We denote the corresponding local POD space of rank $\ell_i$ by $V_{\text{off}}^{{\mathrm{H}_i}, \ell_i}$. In sum, in practice, we can have three types of local multiscale spaces at our disposal: $V_{\text{off}}^{{\mathrm{S}_i}, \ell_i}$, $V_{\text{snap}}^{{\mathrm{H}_i}}$ and $V_{\text{off}}^{{\mathrm{H}_i}, \ell_i}$ on $\omega_i$. These basis functions are then used in the standard finite element framework, e.g., continuous Galerkin formulation, for constructing a global approximate solution.
One crucial part in the local spectral basis construction is to include local spectral basis functions ($V_{\text{off}}^{{\mathrm{T}_i}, \ell_i^{{\mathrm{II}}}}$, of dimension $\ell_i^{{\mathrm{II}}}$) governed by Steklov eigenvalue problems [@MR2770439], which was first applied to the context of the GMsFEMs in [@MR3277208], to the best of our knowledge. This was motivated by the decomposition of the local solution $u|_{\omega_i}$ into the sum of three components, cf. , where the first two components can be approximated efficiently by the local spectral space $V_{\text{off}}^{{\mathrm{S}_i}, \ell_i^{{\mathrm{I}}}}$ and $V_{\text{off}}^{{\mathrm{T}_i}, \ell_i^{{\mathrm{II}}}}$, respectively, and the third component is of rank one and can be obtained by solving one local problem.
The good approximation property of these local multiscale spaces to the solution $u|_{\omega_i}$ of problem is critical to ensure the accuracy and efficiency of the GMsFEM. We shall present relevant approximation error results for the preceding three types of multiscale basis functions in Proposition \[prop:projection\], Lemma \[lemma:u2\], Lemma \[lem:energyHA\] and Lemma \[lem:5.2\]. It is worth pointing out that the proof of Proposition \[prop:projection\] relies crucially on the expansion of the source term $f$ in terms of the local spectral basis functions in Lemma \[lem:assF\]. Thus the argument differs substantially from the typical approach to such analysis, which employs an oversampling argument together with a Caccioppoli type inequality [@babuska2011optimal; @eglp13], and it is of independent interest.
The proof of Lemma \[lemma:u2\] is critical. It relies essentially on the transposition method [@MR0350177], which bounds the weighted $L^2$ error in the domain by the error on the boundary, since the latter can be obtained straightforwardly. Most importantly, the involved constant is independent of the contrast in the coefficient $\kappa$. This result is presented in Theorem \[lem:very-weak\].
To establish Lemmas \[lem:energyHA\] and \[lem:5.2\], we make one mild assumption on the geometry of the coefficient, cf. Assumption \[ass:coeff\], which enables the use of the weighted Friedrichs inequality in the proof. In addition, since the local multiscale basis functions in $V_{\text{off}}^{{\mathrm{H}_i}, \ell_i}$ are $\kappa$-harmonic and since the weighted $L^2(\omega_i)$ error estimate can be obtained directly from the POD, cf. Lemma \[lem:5.1\], we employ a Caccioppoli type inequality [@MR717034] to prove Lemma \[lem:5.2\]. Note that our analysis does not exploit the oversampling strategy, which has played a crucial role in proving energy error estimates in all existing works [@babuska2011optimal; @eglp13; @MR3246801; @chung2017constraint].
Together with the conforming Galerkin formulation and the partition of unity functions $\{\chi_i\}_{i=1}^N$ on the local domains $\{\omega_i\}_{i=1}^{N}$, we obtain three types of multiscale methods to solve problem , cf. –. Their energy error estimates are presented in Propositions \[prop:Finalspectral\], \[prop:FinalSnap\] and \[prop:Finalpod\], respectively. Specifically, their convergence rates are precisely characterized by the eigenvalues $\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}$, $\lambda_{\ell_i^{{\mathrm{II}}}}^{{\mathrm{T}_i}}$, $\lambda_{\ell_i}^{{\mathrm{H}_i}}$ and the coarse mesh size $H$ (see Section \[sec:error\] for the definitions of the eigenvalue problems). Thus, the decay/growth behavior of these eigenvalues plays an extremely important role in determining the convergence rates, which, however, is beyond the scope of the present work. We refer readers to the works [@babuska2011optimal; @li2017low] for results along this line.
Lastly, we put our contributions into context. The local spectral estimates in the energy norm in Proposition \[prop:projection\] and Lemma \[lemma:u2\] represent the state-of-the-art results in the sense that no restrictive assumption on the problem data is made. Furthermore, we prove the convergence without the help of the oversampling strategy in the analysis, which has played a crucial role in all existing studies [@babuska2011optimal; @EFENDIEV2011937; @eglp13; @chung2017constraint]. In practice, avoiding the oversampling strategy saves computational cost, and this also agrees well with empirical observations [@EFENDIEV2011937]. Due to the local estimates in Proposition \[prop:projection\] and Lemma \[lemma:u2\], we are able to derive a global estimate in Proposition \[prop:Finalspectral\], which is the much-needed result for analyzing many multiscale methods [@MR1660141; @MR2721592; @MR3246801; @li2017error], cf. Remark \[rem:spectral\]. Recently, Chung et al. [@chung2017constraint] proved some convergence estimates in a similar spirit to Proposition \[prop:projection\] by adapting the LOD technique [@MR3246801]. Our result greatly simplifies the analysis and improves their result [@chung2017constraint] by avoiding the oversampling. To the best of our knowledge, there is no known convergence estimate for either the local harmonic space or the local POD space, and the results presented in Propositions \[prop:FinalSnap\] and \[prop:Finalpod\] are the first such results.
The remainder of this paper is organized as follows: in Section \[s:recall\] we review some general facts and introduce the notation used throughout the text. The remainder of this paper is organized as follows. We formulate the heterogeneous problem in Section \[sec:pre\] and describe the main idea of the GMsFEM. We present in Section \[cgdgmsfem\] the construction of the local multiscale spaces, the harmonic extension space and the discrete POD. Based upon them, we present three types of global multiscale spaces. Together with the canonical conforming Galerkin formulation, we obtain three types of numerical methods to approximate Problem in to . The error estimates of these multiscale methods are presented in Section \[sec:error\], which represent the main contributions of this paper. Finally, we conclude with some remarks in Section \[sec:conclusion\]. We establish the regularity result of the elliptic problem with very rough boundary data in an appendix.
Preliminary {#sec:pre}
===========
Now we present basic facts related to Problem and briefly describe the GMsFEM (and also fix the notation). Let the space $V:=H^{1}_{0}(D)$ be equipped with the (weighted) inner product $$\begin{aligned}
{\langle {v_1},{v_2}\rangle}_{D}=:a(v_1,v_2):=\int_{D}\kappa\nabla v_1\cdot\nabla v_2\;{\mathrm{d}x}\quad \text{ for all } v_1, v_2\in V,\end{aligned}$$ and the associated energy norm $$\begin{aligned}
{|v|_{H^{1}_{\kappa}{\left( D \right)}}}^2:={\langle {v},{v}\rangle}_{D}\quad \text{ for all } v\in V.\end{aligned}$$ We denote by $W:=L^2(D)$ equipped with the usual norm ${{\left\|\cdot\right\|}_{L^2{\left( D \right)}}}$ [and inner product $(\cdot,\cdot)_{D}$]{}.
The weak formulation for problem is to find $u\in V$ such that $$\begin{aligned}
\label{eqn:weakform}
a(u,v)=(f,v)_{D} \quad \text{for all
} v\in V.\end{aligned}$$ The Lax-Milgram theorem implies the well-posedness of problem .
To discretize problem , we first introduce fine and coarse grids. Let $\mathcal{T}^H$ be a regular partition of the domain $D$ into finite elements (triangles, quadrilaterals, tetrahedra, etc.) with a mesh size $H$. We refer to this partition as the coarse grid, and accordingly to its elements as coarse elements. Then each coarse element is further partitioned into a union of connected fine-grid blocks. The fine-grid partition is denoted by $\mathcal{T}^h$ with $h$ being its mesh size. Over $\mathcal{T}^h$, let $V_h$ be the conforming piecewise linear finite element space: $$V_h:=\{v\in \mathcal{C}: v|_{T}\in \mathcal{P}_{1} \text{ for all } T\in \mathcal{T}^h\},$$ where $\mathcal{P}_1$ denotes the space of linear polynomials. Then the fine-scale solution $u_h\in V_h$ satisfies $$\begin{aligned}
\label{eqn:weakform_h}
a(u_h,v_h)=(f,v_h)_{D} \quad \text{ for all } v_h\in V_h.\end{aligned}$$ The Galerkin orthogonality implies the following optimal estimate in the energy norm: $$\begin{aligned}
\label{eq:fineApriori}
{|u-u_h|_{H^{1}_{\kappa}{\left( D \right)}}}\leq \min\limits_{v_h\in V_h}{|u-v_h|_{H^{1}_{\kappa}{\left( D \right)}}}.\end{aligned}$$ The fine-scale solution $u_h$ will serve as a reference solution in multiscale methods. Note that due to the presence of multiple scales in the coefficient $\kappa$, the fine-scale mesh size $h$ should be commensurate with the smallest scale and thus it can be very small in order to obtain an accurate solution. This necessarily involves huge computational complexity, and more efficient methods are in great demand.
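To illustrate why the fine-scale computation is expensive, the following is a minimal one-dimensional Python sketch of the reference $\mathcal{P}_1$ computation with a midpoint-rule quadrature; the two-scale coefficient and the source in the example are hypothetical, and the point is only that the number of cells must resolve the finest scale in $\kappa$.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fine_scale_p1_1d(kappa, f, n_cells):
    """P1 reference solution of -(kappa u')' = f on (0,1) with u(0)=u(1)=0.
    kappa and f are callables evaluated at the cell midpoints (midpoint rule),
    so n_cells must be large enough to resolve the finest scale in kappa."""
    h = 1.0 / n_cells
    x_mid = (np.arange(n_cells) + 0.5) * h
    k = kappa(x_mid)                                  # cell-wise coefficient
    main = np.zeros(n_cells + 1)
    main[:-1] += k / h
    main[1:] += k / h
    A = sp.diags([-k / h, main, -k / h], [-1, 0, 1], format="csr")
    b = np.zeros(n_cells + 1)
    b[:-1] += 0.5 * h * f(x_mid)
    b[1:] += 0.5 * h * f(x_mid)
    idx = np.arange(1, n_cells)                       # interior nodes
    u = np.zeros(n_cells + 1)                         # homogeneous Dirichlet BC
    u[idx] = spla.spsolve(A[idx, :][:, idx].tocsc(), b[idx])
    return u

# hypothetical two-scale coefficient (fine scale 0.01) and a constant source
u_h = fine_scale_p1_1d(lambda x: 2.0 + np.sin(2.0 * np.pi * x / 0.01),
                       lambda x: np.ones_like(x), n_cells=10000)
```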
In this work, we are concerned with flow problems with high-contrast heterogeneous coefficients, which involve multiscale permeability fields, e.g., permeability fields with vugs and faults, and which, furthermore, can be parameter-dependent, e.g., through the viscosity. In such scenarios, the computation of the fine-scale solution $u_h$ incurs a high computational complexity, and one has to resort to multiscale methods. The GMsFEM has been extremely successful for solving multiscale flow problems, and we briefly recap it below. The GMsFEM aims at solving Problem on the coarse mesh $\mathcal{T}^{H}$ cheaply, while maintaining a certain accuracy compared to the fine-scale solution $u_h$. To describe the GMsFEM, we need some notation. The vertices of $\mathcal{T}^H$ are denoted by $\{O_i\}_{i=1}^{N}$, with $N$ being the total number of coarse nodes. The coarse neighborhood associated with the node $O_i$ is denoted by $$\label{neighborhood}
\omega_i:=\bigcup\{ K_j\in\mathcal{T}^H: ~~~ O_i\in \overline{K}_j\}.$$ The overlap constant ${C_{\mathrm{ov}}}$ is defined by $$\begin{aligned}
\label{eq:overlap}
{C_{\mathrm{ov}}}:=\max\limits_{K\in \mathcal{T}^{H}}\#\{O_i: K\subset\omega_i \text{ for } i=1,2,\cdots,N\}.\end{aligned}$$ We refer to Figure \[schematic\] for an illustration of neighborhoods and elements subordinated to the coarse discretization $\mathcal{T}^H$. Throughout, we use $\omega_i$ to denote a coarse neighborhood.
![Illustration of a coarse neighborhood and coarse element with an overlapping constant ${C_{\mathrm{ov}}}=4$.[]{data-label="schematic"}](gridschematic){width="65.00000%"}
Next, we outline the GMsFEM with a continuous Galerkin (CG) formulation; see Section \[cgdgmsfem\] for details. We denote by $\omega_i$ the support of the multiscale basis functions. These basis functions are denoted by $\psi_k^{\omega_i}$ for $k=1,\cdots,\ell_i$ for some $\ell_i\in \mathbb{N}_{+}$, which is the number of local basis functions associated with $\omega_i$. Throughout, the superscript $i$ denotes the $i$-th coarse node or coarse neighborhood $\omega_i$. Generally, the GMsFEM utilizes multiple basis functions per coarse neighborhood $\omega_i$, and the index $k$ represents the numbering of these basis functions. In turn, the CG multiscale solution $u_{\text{ms}}$ is sought as $u_{\text{ms}}(x)=\sum_{i,k} c_{k}^i \psi_{k}^{\omega_i}(x)$. Once the basis functions $\psi_k^{\omega_i}$ are identified, the CG global coupling is given through the variational form $$\label{eq:globalG} a(u_{\text{ms}},v)=(f,v), \quad \text{for all} \, \, v\in
V_{\text{off}},$$ where $V_{\text{off}}$ denotes the finite element space spanned by these basis functions.
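Algebraically, the coupling amounts to a Galerkin projection of a fine-scale discretization onto the span of the multiscale basis functions. The Python sketch below (assuming the basis vectors are already assembled as columns on the fine grid) illustrates this step; it is an illustration only, not the implementation behind the numerical results reported in the cited works.

```python
import numpy as np

def cg_gmsfem_solve(A_fine, b_fine, basis):
    """Coarse CG coupling: the columns of `basis` are fine-grid vectors of the
    multiscale functions psi_k^{omega_i}, assumed already assembled; A_fine and
    b_fine are the fine-grid stiffness matrix and load vector."""
    R = np.asarray(basis)              # fine dofs x coarse dofs
    A_c = R.T @ (A_fine @ R)           # coarse stiffness  a(psi_l, psi_m)
    b_c = R.T @ b_fine                 # coarse load       (f, psi_m)
    c = np.linalg.solve(A_c, b_c)      # coarse coefficients c_k^i
    return R @ c                       # u_ms represented on the fine grid
```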
We conclude the section with the following assumption on $D$ and $\kappa$.
\[ass:coeff\] Let $D$ be a domain with a $C^{1,\alpha}$ $(0<\alpha<1)$ boundary $\partial D$, and $\{D_i\}_{i=1}^m\subset D$ be $m$ pairwise disjoint strictly convex open subsets, [each with a $C^{1,\alpha}$ boundary $\Gamma_i:=\partial D_i$,]{} and denote $D_0=D\backslash \overline{\cup_{i=1}^{m} D_i}$. Let the permeability coefficient $\kappa$ be a piecewise regular function defined by $$\kappa=\left\{
\begin{aligned}
&\eta_{i}(x) &\text{ in } D_{i},\\
&1 &\text{ in }D_0.
\end{aligned}
\right.$$ Here $\eta_i\in C^{\mu}(\bar{D_i})$ with $\mu\in (0,1)$ for $i=1,\cdots,m$. Denote ${\eta_{\text{min}}}:=\min_{i}\{\eta_i\}\geq 1$ and ${\eta_{\text{max}}}:=\max_{i}\{\eta_i\}$.
Under Assumption \[ass:coeff\], the coefficient $\kappa$ is $\Gamma$-[*quasi-monotone*]{} on each coarse neighborhood $\omega_i$ and the global domain $D$ (see [@pechstein2012weighted Definition 2.6] for the precise definition) with either $\Gamma:=\partial \omega_i$ or $\Gamma:=\partial D$. Then the following weighted Friedrichs inequality [@pechstein2012weighted Theorem 2.7] holds.
\[thm:friedrichs\] Let $\text{diam}(D)$ be the diameter of the bounded domain $D$ and $\omega_i\subset D$. Define $$\begin{aligned}
{{\rm C}_{\mathrm{poin}}(\omega_i)}&:=H^{-2}\max\limits_{w\in H^1_0(\omega_i)}\frac{\int_{\omega_i}{\kappa}w^2{\mathrm{d}x}}{\int_{\omega_i}\kappa|\nabla w|^2{\mathrm{d}x}},\label{eq:poinConstant}\\
{{\rm C}_{\mathrm{poin}}(D)}&:=\text{diam}(D)^{-2}\max\limits_{w\in H^1_0(D)}\frac{\int_{D}{\kappa}w^2{\mathrm{d}x}}{\int_{D}\kappa|\nabla w|^2{\mathrm{d}x}}.\label{eq:poinConstantG}\end{aligned}$$ Then the positive constants ${{\rm C}_{\mathrm{poin}}(\omega_i)}$ and ${{\rm C}_{\mathrm{poin}}(D)}$ are independent of the contrast of $\kappa$.
Below we only require that the constants ${{\rm C}_{\mathrm{poin}}(\omega_i)}$ and ${{\rm C}_{\mathrm{poin}}(D)}$ be independent of the contrast in $\kappa$. Assumption \[ass:coeff\] is one sufficient condition to ensure this, and it can be relaxed [@pechstein2012weighted].
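On a fine grid, the constant ${{\rm C}_{\mathrm{poin}}(\omega_i)}$ can be approximated by a generalized eigenvalue computation; a brief sketch, assuming the $\kappa$-weighted stiffness and mass matrices restricted to the interior fine-grid nodes of $\omega_i$ are available as dense arrays, reads as follows.

```python
from scipy.linalg import eigh

def weighted_poincare_constant(A_kappa, M_kappa, H):
    """Discrete analogue of C_poin(omega_i): the maximum of the Rayleigh
    quotient (w^T M_kappa w)/(w^T A_kappa w) over interior fine-grid functions
    equals the largest eigenvalue of M_kappa w = mu A_kappa w."""
    mu = eigh(M_kappa, A_kappa, eigvals_only=True)    # ascending order
    return mu[-1] / H**2
```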
CG-based GMsFEM for high-contrast flow problems {#cgdgmsfem}
===============================================
In this section, we present the local spectral basis functions, local harmonic extension basis functions and POD, and the global weak formulation based on these local multiscale basis functions.
Local multiscale basis functions {#locbasis}
--------------------------------
First we present two principled approaches for constructing local multiscale functions: local spectral bases and local harmonic extension bases, which represent the two main approaches within the GMsFEM framework. The constructions are carried out on each coarse neighborhood $\omega_i$ with $i=1,2,\cdots,N$, and can be carried out in parallel, if desired. Since the dimensionality of the local harmonic extension bases is problem-dependent and inversely proportional to the smallest scale in $\kappa$, in practice, we often perform an “optimal” local model order reduction based on POD to further reduce the complexity at the online stage.
Before presenting the constructions, we first introduce some useful function spaces, which will play an important role in the analysis below. Let $L^2_{\widetilde{\kappa}}(\omega_i)$ and $H^1_{\kappa}(\omega_i)$ be Hilbert spaces with their inner products and norms defined respectively by $$\begin{aligned}
{3}
(w_1,w_2)_{i}&:=\int_{\omega_i}\widetilde{\kappa}w_1\cdot w_2\;{\mathrm{d}x}&&\|{w_1}\|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2:=(w_1,w_1)_{i}&\ \ \text{ for }w_1, w_2\in L^2_{\widetilde{\kappa}}(\omega_i),\\
{\langle {v_1},{v_2}\rangle}_{i}&:=\int_{\omega_i}{\kappa}\nabla v_1\cdot \nabla v_2\;{\mathrm{d}x}\quad&&{{\left\|v_1\right\|}_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2:=(v_1,v_1)_i+{\langle {v_1},{v_1}\rangle}_{i}&\text{ for } v_1,v_2\in H^1_{\kappa}(\omega_i).\end{aligned}$$ Next we define two subspaces $W_i\subset L^2_{ \widetilde{\kappa}}(\omega_i)$ and $V_i\subset H^1_{\kappa}(\omega_i)$ of codimension one by $$W_i:=\{v\in L^2_{ \widetilde{\kappa}}(\omega_i):\int_{\omega_i}\widetilde{\kappa}v\;{\mathrm{d}x}=0\}
\quad \mbox{and}\quad
V_i:=\{v\in H^1_{\kappa}(\omega_i):\int_{\omega_i}\widetilde{\kappa}v\;{\mathrm{d}x}=0\}.$$ Furthermore, we introduce the following weighted Sobolev spaces: $$\begin{aligned}
L_{{\kappa}^{-1}}^{2}(\omega_i):=&\Big\{w:\|w\|_{L^2_{\kappa^{-1}(\omega_i)}}^2:=\int_{\omega_i}{\kappa}^{-1} w^2{\mathrm{d}x}<\infty \Big\},\\
H_{\kappa,0}^{1}(\omega_i):=&\Big\{w: w|_{\partial{\omega_i}}=0\text{ s.t. }{|w|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2:=\int_{\omega_i}\kappa |\nabla w|^2{\mathrm{d}x}<\infty \Big\}.\end{aligned}$$ Similarly, we define the following weighted Sobolev spaces with their associated norms: $(L_{\widetilde{\kappa}^{-1}}^{2}(\omega_i),\|\cdot\|_{L^2_{\widetilde{\kappa}^{-1}(\omega_i)}})$, $(L_{{\kappa}^{-1}}^{2}(D),\|\cdot\|_{L^2_{\kappa^{-1}(D)}})$ and $(L_{\widetilde{\kappa}^{-1}}^{2}(D),\|\cdot\|_{L^2_{\widetilde{\kappa}^{-1}(D)}})$. The nonnegative weights $\widetilde{\kappa}$ and $\widetilde{\kappa}^{-1}$ will be defined in and below, respectively.
Throughout, the superscripts ${\mathrm{S}_i}$, ${\mathrm{T}_i}$ and ${\mathrm{H}_i}$ are associated with the two local spectral spaces and the local harmonic space on $\omega_i$, respectively. Below we describe the construction of the local multiscale basis functions on $\omega_i$.
### Local spectral bases I {#local-spectral-bases-i .unnumbered}
To define the local spectral bases on $\omega_i$, we first introduce a local elliptic operator $\mathcal{L}_i$ on $\omega_i$ by $$\begin{aligned}
\label{eq:Li}
\left\{ \begin{aligned}
\mathcal{L}_i v&:=-\nabla\cdot(\kappa\nabla v)\quad \mbox{in }\omega_i,\\
\kappa\frac{\partial v}{\partial n}&=0\quad \mbox{on }\partial\omega_i.
\end{aligned}\right.\end{aligned}$$ The Lax-Milgram theorem implies the well-posedness of the operator $\mathcal{L}_i:V_i\to V_i^*$, the dual space $V_i^{*}$ of $V_i$. Then the spectral problem can be formulated in terms of $\mathcal{L}_i$, i.e., to seek $(\lambda_{j}^{{\mathrm{S}_i}}, v_{j}^{{\mathrm{S}_i}})\in \mathbb{R}\times V_i$ such that $$\begin{aligned}
{2}\label{eq:spectral}
\mathcal{L}_i v_{j}^{{\mathrm{S}_i}} &= \widetilde{\kappa}\lambda_{j}^{{\mathrm{S}_i}} v_{j}^{{\mathrm{S}_i}}
\quad &&\text{in} \, \, \, \omega_i,\\
\kappa\frac{\partial}{\partial n}v_{j}^{{\mathrm{S}_i}}&=0&&\text{ on } \partial \omega_i,\nonumber\end{aligned}$$ where the parameter $\widetilde\kappa$ is defined by $$\label{defn:tildeKappa}
\widetilde{\kappa} =H^2 \kappa \sum_{i=1}^{N} | \nabla \chi_i |^2,$$ with the multiscale function $\chi_i$ to be defined in below. Note that the use of $\widetilde{\kappa}$ in the local spectral problem instead of $\kappa$ is due to numerical consideration [@EFENDIEV2011937]. Furthermore, let $\widetilde{\kappa}^{-1}$ be defined by $$\label{eq:inv-tildeKappa}
\widetilde{\kappa}^{-1}(x)=
\left\{
\begin{aligned}
&\widetilde{\kappa}^{-1}, \quad &&\text{ when } \widetilde{\kappa}(x)\ne 0\\
&1, \quad &&\text{ otherwise }.
\end{aligned}
\right.$$
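In an implementation, $\widetilde{\kappa}$ and $\widetilde{\kappa}^{-1}$ are typically assembled cell-wise from the partition-of-unity gradients; the following minimal sketch does this, where the array shapes are assumptions made only for illustration.

```python
import numpy as np

def weighted_coefficients(kappa, grad_chi, H):
    """Cell-wise assembly of tilde_kappa = H^2 kappa sum_i |grad chi_i|^2 and of
    its pseudo-inverse.  kappa: (n_cells,); grad_chi: (N, n_cells, d) array of
    partition-of-unity gradients, set to zero outside supp(chi_i)."""
    sum_sq = (grad_chi ** 2).sum(axis=(0, 2))   # sum_i |grad chi_i|^2 per cell
    tk = H**2 * kappa * sum_sq
    inv_tk = np.ones_like(tk)                   # value 1 where tilde_kappa = 0
    nz = tk != 0.0
    inv_tk[nz] = 1.0 / tk[nz]
    return tk, inv_tk
```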
Generally, one cannot preclude the existence of critical points of the multiscale basis functions $\chi_i$ [@MR1289138; @alberti2017critical]. In the two-dimensional case, it has been proved that there are at most a finite number of isolated critical points. To simplify the presentation, we will assume $|D\cap\{\widetilde{\kappa}=0\}|=0$.
The next result gives the eigenvalue behavior of the local spectral problem .
\[lem:eigenvalue-blowup\] Let $\{(\lambda_j^{{\mathrm{S}_i}},v_j^{{\mathrm{S}_i}})\}_{j=1}^{\infty}$ be the eigenvalues and the corresponding normalized eigenfunctions in $W_i$ of the spectral problem , listed according to their algebraic multiplicities and with the eigenvalues ordered nondecreasingly. There holds $$\begin{aligned}
\label{eq:spectral_eigenvalue}
\lambda_j^{{\mathrm{S}_i}}\to \infty\quad \text{ as } j\to \infty.\end{aligned}$$
To prove Theorem \[lem:eigenvalue-blowup\], we need some notation. Let $\mathcal{S}_i:=\mathcal{L}_i^{-1}: V^{*}_i\to V_i$ be the inverse of the elliptic operator $\mathcal{L}_i$. Let $T:W_i\to L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ be the multiplication operator defined by $$\begin{aligned}
\label{eq:T}
Tv:=\widetilde{\kappa}v \quad\text{ for all }\quad v\in W_i.\end{aligned}$$ One can show by definition directly that $T$ is a bounded operator with unit norm. Moreover, there holds $$\int_{\omega_i}Tv\;{\mathrm{d}x}=0 \quad\text{ for all }v\in W_i.$$ Thus the range of $T$, $\mathcal{R}(T)$, is a subspace in $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ with codimension one, and we have $$\begin{aligned}
\label{eq:R(T)}
\mathcal{R}(T)\hookrightarrow V_i^{*}.\end{aligned}$$
For the proof of Theorem \[lem:eigenvalue-blowup\], we need the following compact embedding result.
\[lem:embedding\] $V_i$ is compactly embedded into $W_i$, i.e., $V_i \hookrightarrow\hookrightarrow W_i.$
By Remark \[rem:chi\], the uniform boundedness of $\kappa$, the definition of $\widetilde{\kappa}$ and the overlapping condition , we obtain the boundedness of $\tilde{\kappa}$, i.e., $$\begin{aligned}
\label{eq:upper_tilde}
\|\widetilde{\kappa}\|_{L^{\infty}(D)}\leq C_{\text{ov}}(HC_{0})^2\|\kappa\|_{L^{\infty}(D)}\leq C_{\text{ov}}(HC_{0})^2\beta.\end{aligned}
$$ Hence, the following embeddings hold: $$L^2_{\widetilde{\kappa}^{-1}}(\omega_i)\hookrightarrow L^2(\omega_i)\hookrightarrow L^2_{\widetilde{\kappa}}(\omega_i).$$ This, the classical Sobolev embedding [@adams2003sobolev] and the boundedness of $\kappa$ imply the compactness of the embedding $V_i\hookrightarrow\hookrightarrow L^2(\omega_i)$, and thus we finally arrive at $V_i \hookrightarrow\hookrightarrow W_i$. This completes the proof.
By , the multiplication operator $T: W_i\to V_i^*$ is bounded. Similarly, the operator $\mathcal{S}_i:V_i^*\to W_i$ is compact, in view of Lemma \[lem:embedding\]. Let $\widetilde{\mathcal{S}}_i:=\mathcal{S}_i T$. Then the operator $\widetilde{\mathcal{S}}_i:W_i\to W_i$ is nonnegative and [compact]{}. Now we claim that $\widetilde{\mathcal{S}}_i$ is self-adjoint on $W_i$. Indeed, for all $v,w\in W_i$, we have $$\begin{aligned}
(\widetilde{\mathcal{S}}_i v, w)_i&=(\mathcal{S}_i Tv, w)_i=\int_{\omega_i}\widetilde{\kappa}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) w\;{\mathrm{d}x}\\
&=\int_{\omega_i}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) (\widetilde{\kappa}w)\;{\mathrm{d}x}\\
&=(v,(\mathcal{S}_i T)w)_i=(v,\widetilde{\mathcal{S}}_iw)_i,\end{aligned}$$ where we have used the weak formulation for to deduce $
\int_{\omega_i}\mathcal{L}_i^{-1}(\widetilde{\kappa}v) (\widetilde{\kappa}w){\mathrm{d}x}=\int_{\omega_i}(\widetilde{\kappa}v)\mathcal{L}_i^{-1} (\widetilde{\kappa}w){\mathrm{d}x}.
$ By the standard spectral theory for compact operators [@yosida78], it has at most countably many discrete eigenvalues, with zero being the only accumulation point, and each nonzero eigenvalue has only finite multiplicity. Noting that $\big\{\big((\lambda_j^{{\mathrm{S}_i}})^{-1},
v_j^{{\mathrm{S}_i}}\big)\big\}_{j=1}^{\infty}$ are the eigenpairs of $\widetilde{\mathcal{S}}_i$ completes the proof.
Furthermore, by the construction, the eigenfunctions $\{ v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}$ form a complete orthonormal basis (CONB) in $W_i$, and $\{(\lambda_j^{{\mathrm{S}_i}}+1)^{-1/2}{v_j^{{\mathrm{S}_i}}}\}_{j=1}^{\infty}$ form a CONB in $V_i$. Further, we have $L^2_{\widetilde{\kappa}}(\omega_i)=W_i\oplus \{1\}$. Hence, $\{ v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{1\}$ is a complete orthogonal basis in $L^2_{\widetilde{\kappa}}(\omega_i)$ \[Chapters 4 and 5\][@laugesen][^2].
\[lem:L2Inv\] The series $\{ \widetilde{\kappa} v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ forms a complete orthogonal basis in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$.
First, we show that $\{ \widetilde{\kappa} v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ are orthogonal in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$. Indeed, by definition, we deduce that for all $j\in \mathbb{N}_{+}$ $$\begin{aligned}
\int_{\omega_i}\widetilde{\kappa}^{-1}\widetilde{\kappa}\cdot \widetilde{\kappa} v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=\int_{\omega_i}\widetilde{\kappa} v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=(v_j^{{\mathrm{S}_i}},1)_i=0.\end{aligned}$$ Meanwhile, for all $j,k\in \mathbb{N}_{+}$, there holds $$\begin{aligned}
\int_{\omega_i}\widetilde{\kappa}^{-1}\widetilde{\kappa}v_k^{{\mathrm{S}_i}}\cdot \widetilde{\kappa} v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=\int_{\omega_i}\widetilde{\kappa} v_j^{{\mathrm{S}_i}}\cdot v_k^{{\mathrm{S}_i}}{\mathrm{d}x}=(v_j^{{\mathrm{S}_i}},v_k^{{\mathrm{S}_i}})_i=\delta_{j,k}.\end{aligned}$$ Next we show that $\{ \widetilde{\kappa} v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{\widetilde{\kappa}\}$ are complete in $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$. Actually, for any $v\in L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$ such that $$\label{eq:9}
\begin{aligned}
\int_{\omega_i}\widetilde{\kappa}^{-1}v\cdot \widetilde{\kappa}{\mathrm{d}x}=0\quad
\text{and }\quad \forall j\in \mathbb{N}_{+}:
\int_{\omega_i}\widetilde{\kappa}^{-1}v\cdot \widetilde{\kappa} v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=0,
\end{aligned}$$ we deduce directly from definition that $$\begin{aligned}
\int_{\omega_i}\widetilde{\kappa}(\widetilde{\kappa}^{-1}v)^2{\mathrm{d}x}=\int_{\omega_i\cap\{\widetilde{\kappa}\ne 0\}}\widetilde{\kappa}^{-1}v^2{\mathrm{d}x}<\infty.\end{aligned}$$ This implies that $\widetilde{\kappa}^{-1}v\in L^2_{\widetilde{\kappa}}(\omega_i)$. Furthermore, indicates that $\widetilde{\kappa}^{-1}v$ is orthogonal to a set of complete orthogonal basis functions $\{ v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{1\}$ in $L^2_{\widetilde{\kappa}}(\omega_i)$. Therefore, $v=0$, which completes the proof.
\[rem:dual\] Since $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ is a Hilbert space, we can identify its dual with itself, and there exists an isometry between $L^2_{\widetilde{\kappa}}(\omega_i)$ and $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$, e.g., the operator $T$ in . We identify $L^2_{\widetilde{\kappa}}(\omega_i)$ as the dual of $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$.
Now we define the local spectral basis functions on $\omega_i$ for all $i=1,\cdots, N$. Let $\ell_i^{{\mathrm{I}}}\in \mathbb{N}_{+}$ be a prespecified number, denoting the number of local basis functions associated with $\omega_i$. We take the eigenfunctions corresponding to the first $(\ell_i^{{\mathrm{I}}}-1)$ smallest eigenvalues for problem in addition to the kernel of the elliptic operator $\mathcal{L}_i$, namely, $\{1\}$, to construct the local spectral offline space: $$V_{\text{off}}^{\text{S}_i,\ell_i^{{\mathrm{I}}}}= \text{span}\{ v_{j}^{{\mathrm{S}_i}}:~~ 1\leq j <\ell_i^{{\mathrm{I}}}\}\oplus \{1\}.$$ Then $\dim(V_{\text{off}}^{\text{S}_i,\ell_i^{{\mathrm{I}}}})=\ell_i^{{\mathrm{I}}}$. The choice of the truncation number $\ell_i^{{\mathrm{I}}}\in \mathbb{N}_{+}$ has to be determined by the eigenvalue decay rate or the presence of spectral gap. The space $V_{\text{off}}^{\text{S}_i,\ell_i^{{\mathrm{I}}}}$ allows defining a finite-rank projection operator $\mathcal{P}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}: L^2_{\widetilde{\kappa}}(\omega_i)\to V_{\text{off}}^{\text{S}_i,
\ell_i^{{\mathrm{I}}}}$ by (with the constant $c_0=\big(\int_{\omega_i}\widetilde{\kappa} {\mathrm{d}x}\big)^{-1}$): $$\begin{aligned}
\label{eq:FR_spec}
\mathcal{P}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}v=c_0(v,1)_i+\sum\limits_{j=1}^{\ell_i^{{\mathrm{I}}}-1}(v,v_j^{{\mathrm{S}_i}})_i v_j^{{\mathrm{S}_i}}\ \ \text{ for all } v\in L_{\tilde \kappa}^2(\omega_i).\end{aligned}$$ The operator $\mathcal{P}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}$ will play a role in the convergence analysis.
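On the fine grid of $\omega_i$, the spectral problem above becomes a generalized symmetric eigenvalue problem between the local Neumann stiffness matrix and the $\widetilde{\kappa}$-weighted mass matrix. The following sketch (dense matrices and a positive definite mass matrix are assumed) constructs the space $V_{\text{off}}^{\text{S}_i,\ell_i^{{\mathrm{I}}}}$ and the projection $\mathcal{P}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}$ in matrix form.

```python
import numpy as np
from scipy.linalg import eigh

def local_spectral_space(A_i, M_tilde_i, ell):
    """A_i: local Neumann stiffness (int kappa grad.grad); M_tilde_i: the
    tilde_kappa-weighted mass matrix on omega_i.  eigh returns M-orthonormal
    eigenvectors with ascending eigenvalues; the first (~0) eigenvalue carries
    the constant kernel of L_i, so the leading ell columns span the space."""
    lam, V = eigh(A_i, M_tilde_i)
    return lam[:ell], V[:, :ell]

def project_spectral(V_off, M_tilde_i, v):
    """Finite-rank projection in matrix form, using (v, w)_i = v^T M_tilde_i w."""
    return V_off @ (V_off.T @ (M_tilde_i @ v))
```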
### Local Steklov eigenvalue problem II {#local-steklov-eigenvalue-problem-ii .unnumbered}
The local Steklov eigenvalue problem is formulated as seeking $(\lambda_{j}^{{\mathrm{T}_i}}, v_{j}^{{\mathrm{T}_i}})\in \mathbb{R}\times H^1_{\kappa}(\omega_i)$ such that $$\begin{aligned}
{2}\label{eq:steklov}
-\nabla\cdot(\kappa\nabla v_{j}^{{\mathrm{T}_i}}) &= 0 &&\quad\text{in} \, \, \, \omega_i,\\
\kappa\frac{\partial}{\partial n}v_{j}^{{\mathrm{T}_i}}&=\lambda_{j}^{{\mathrm{T}_i}} v_{j}^{{\mathrm{T}_i}}&&\quad\text{ on } \partial \omega_i.\nonumber\end{aligned}$$ It is well known that the eigenvalues of the Steklov eigenvalue problem blow up [@MR2770439]:
\[lem:steklov-blowup\] Let $\{(\lambda_j^{{\mathrm{T}_i}},v_j^{{\mathrm{T}_i}})\}_{j=1}^{\infty}$ be the eigenvalues and the corresponding normalized eigenfunctions in $L^2(\partial\omega_i)$ of the spectral problem , listed according to their algebraic multiplicities and with the eigenvalues ordered nondecreasingly. There holds $$\begin{aligned}
\lambda_j^{{\mathrm{T}_i}}\to \infty\quad \text{ as } j\to \infty.\end{aligned}$$
Note that $\lambda_1^{{\mathrm{T}_i}}=0$ and $v_1^{{\mathrm{T}_i}}$ is a constant. Furthermore, the series $\big\{v_j^{{\mathrm{T}_i}}\big\}_{j=1}^{\infty}$ forms a complete orthonormal basis in $L^2(\partial\omega_i)$. Below we use the notation $(\cdot,\cdot)_{\partial\omega_i}$ to denote the inner product on $L^2(\partial\omega_i)$. Similarly, we define a local spectral space of dimension $\ell_i^{{\mathrm{II}}}$ and the associated $\ell_i^{{\mathrm{II}}}$-rank projection operator: $$\begin{aligned}
V_{\text{off}}^{\text{T}_i,\ell_i^{{\mathrm{II}}}}&= \text{span}\{ v_{j}^{{\mathrm{T}_i}}:~~ 1\leq j \leq\ell_i^{{\mathrm{II}}}\},\nonumber\\
\mathcal{P}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}v
&=\sum\limits_{j=1}^{\ell_i^{{\mathrm{II}}}}( v,v_j^{{\mathrm{T}_i}})_{\partial\omega_i} v_j^{{\mathrm{T}_i}}\ \ \text{ for all } v\in L^2(\partial\omega_i).\label{eq:steklov_spec}\end{aligned}$$ In addition to these local spectral basis functions defined in Problems and , we need one more local basis function defined by the following local problem: $$\label{eq:1-basis}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla v^{i})&=\frac{\widetilde{\kappa}}{\int_{\omega_i}\widetilde{\kappa}{\mathrm{d}x}} \quad&&\text{ in } \omega_i,\\
-\kappa\frac{\partial v^{i}}{\partial n}&=|\partial\omega_i|^{-1}\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.$$ Note that the approximation property of $V_{\text{off}}^{\text{S}_i,\ell_i^{{\mathrm{I}}}}$, $V_{\text{off}}^{\text{T}_i,
\ell_i^{{\mathrm{II}}}}$ to the local solution $u|_{\omega_i}$ is of great importance to the analysis of multiscale methods [@melenk1996partition; @EFENDIEV2011937]. We present relevant results in Section \[subsec:spectral\] below.
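Discretely, the Steklov eigenvalue problem can be realized through the Schur complement of the local stiffness matrix with respect to the boundary nodes, i.e., the discrete Dirichlet-to-Neumann map. The sketch below is one possible realization under this assumption (dense matrices for brevity), not necessarily the construction used in the cited references.

```python
import numpy as np
from scipy.linalg import eigh

def local_steklov_space(A_i, M_b, interior, boundary, ell):
    """A_i: local stiffness on omega_i (dense); M_b: boundary mass matrix on
    partial omega_i; interior/boundary: index arrays of fine-grid nodes."""
    A_II = A_i[np.ix_(interior, interior)]
    A_IB = A_i[np.ix_(interior, boundary)]
    A_BB = A_i[np.ix_(boundary, boundary)]
    S = A_BB - A_IB.T @ np.linalg.solve(A_II, A_IB)   # discrete DtN map
    lam, G = eigh(S, M_b)                             # Steklov eigenpairs
    V = np.zeros((A_i.shape[0], ell))                 # extend into omega_i
    V[boundary, :] = G[:, :ell]
    V[interior, :] = -np.linalg.solve(A_II, A_IB @ G[:, :ell])
    return lam[:ell], V
```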
### Local harmonic extension bases {#local-harmonic-extension-bases .unnumbered}
This type of local multiscale bases is defined by local solvers over $\omega_i$. The number of such local solvers is problem-dependent. It can be the space of all fine-scale finite element basis functions or the solutions of some local problems with suitable choices of boundary conditions. In this work, we consider the following $\kappa$-harmonic extensions to form the local multiscale space, which has been extensively used in the literature. Specifically, given a fine-scale piecewise linear function $\delta_j^h(x)$ defined on the boundary $\partial\omega_i$, let $\phi_{j}^{{\mathrm{H}_i}}$ be the solution to the following Dirichlet boundary value problem: $$\begin{aligned}
{2} \label{harmonic_ex}
-\nabla\cdot(\kappa(x) \nabla \phi_{j}^{{\mathrm{H}_i}} ) &= 0
\quad &&\text{in} \, \, \, \omega_i,\\
\phi_{j}^{{\mathrm{H}_i}}&=\delta_j^h &&\text{ on }\partial\omega_i,\nonumber\end{aligned}$$ where $\delta_j^h(x_k):=\delta_{j,k}$ for all $j,k\in \textsl{J}_{h}(\omega_i)$, with $\delta_{j,k}$ denoting the Kronecker delta symbol, $\{x_k\}_{k\in\textsl{J}_{h}(\omega_i)}$ the fine-grid nodes on $\partial\omega_i$, and $\textsl{J}_{h}(\omega_i)$ the index set of all fine-grid boundary nodes on $\partial\omega_i$. Let $L_i$ be the number of the local multiscale functions on $\omega_i$. Then the local multiscale space $V^{{\mathrm{H}_i}}_{\text{snap}}$ on $\omega_i$ is defined by $$\begin{aligned}
\label{eq:Vharmonic}
V^{{\mathrm{H}_i}}_{\text{snap}}:=\text{span}\{\phi_j^{{\mathrm{H}_i}}: \quad 1\leq j\leq L_i\}.\end{aligned}$$ Its approximation property will be discussed in Section \[subsec:harmonic\].
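In computations, the snapshots $\phi_j^{{\mathrm{H}_i}}$ are obtained by eliminating the interior fine-grid unknowns of $\omega_i$ against the prescribed discrete delta boundary data. A sketch, assuming the local fine-scale stiffness matrix and the interior/boundary index sets are given, is shown below.

```python
import numpy as np
import scipy.sparse.linalg as spla

def harmonic_snapshots(A_i, interior, boundary):
    """A_i: local fine-scale stiffness matrix on omega_i (sparse CSR);
    interior/boundary: index arrays of fine-grid nodes.  Each column is the
    discrete kappa-harmonic extension of one boundary delta function."""
    solve = spla.factorized(A_i[interior, :][:, interior].tocsc())
    A_IB = A_i[interior, :][:, boundary]
    n, n_b = A_i.shape[0], len(boundary)
    snapshots = np.zeros((n, n_b))
    for col in range(n_b):
        g = np.zeros(n_b)
        g[col] = 1.0                           # delta_j^h on partial omega_i
        phi = np.zeros(n)
        phi[boundary] = g
        phi[interior] = solve(-(A_IB @ g))     # interior values of phi_j^{H_i}
        snapshots[:, col] = phi
    return snapshots                           # columns span V_snap^{H_i}
```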
Discrete POD {#discrete-pod .unnumbered}
------------
One challenge associated with the local multiscale space $V^{{\mathrm{H}_i}}_{\text{snap}}$ lies in the fact that its dimensionality can be very large, i.e., $L_i\gg1$, when the problem becomes increasingly complicated in the sense that there are more multiple scales in the coefficient $\kappa$. Thus, the discrete POD is often employed on $\omega_i$ to reduce the dimensionality of $V^{{\mathrm{H}_i}}_{\text{snap}}$, while maintaining a certain accuracy.
The discrete POD proceeds as follows. [After obtaining]{} a large number of local multiscale functions $\{\phi_{j}^{{\mathrm{H}_i}}\}_{j=1}^{L_i}$, with $L_i\gg 1$, by solving the local problem , we generate a [problem adapted subset of much smaller size]{} from these basis functions by means of the singular value decomposition, taking only the left singular vectors corresponding to the largest singular values. The resulting low-dimensional linear subspace spanned by $\ell_i$ singular vectors is termed the offline space of rank $\ell_i$.
The auxiliary spectral problem in the construction is to find $( \lambda_j^{{\mathrm{H}_i}}, v_j)\in \mathbb{R}\times \mathbb{R}^{L_i}$ for $1\leq j\leq L_i$ with the eigenvalues $\{\lambda_j^{{\mathrm{H}_i}}\}_{j=1}^{L_i}$ in a nondecreasing order (with multiplicity counted) such that $$\begin{aligned}
\label{offeig}
A^{\text{off}} v_j& = \lambda_j^{{\mathrm{H}_i}} S^{\text{off}} v_j,\\
(S^{\text{off}} v_j,v_j)_{\ell^2}&=1\nonumber.\end{aligned}$$ The matrices $A^{\text{off}}, S^{\text{off}}\in \mathbb{R}^{L_i\times L_i}$ are respectively defined by $$\displaystyle A^{\text{off}} = [a_{mn}^{\text{off}}] = \int_{\omega_i} \kappa\nabla \phi_m^{{\mathrm{H}_i}} \cdot \nabla \phi_n^{{\mathrm{H}_i}}{\mathrm{d}x}\quad\text{ and }\quad
\displaystyle S^{\text{off}} = [s_{mn}^{\text{off}}] = \int_{\omega_i} \widetilde{\kappa} \phi_m^{{\mathrm{H}_i}} \cdot\phi_n^{{\mathrm{H}_i}}{\mathrm{d}x}.$$ Let $\mathbb{N}_{+}\ni \ell_i\leq L_i$ be a truncation number. Then we define the discrete POD-basis of rank $\ell_i$ by $$\begin{aligned}
\label{eq:pod-basis}
v_j^{{\mathrm{H}_i}}:=\sum\limits_{k=1}^{L_i}(v_j)_{k}\phi_{k}^{{\mathrm{H}_i}}\;\quad
\text{ for }j=1,\cdots,\ell_i,\end{aligned}$$ with $(v_j)_{k}$ being the $k^{\text{th}}$ component of the eigenvector $v_j\in\mathbb{R}^{L_i}$. By the definition of the discrete eigenvalue problem , we have $$\begin{aligned}
\label{eq:podNorm}
(v_j^{{\mathrm{H}_i}}, v_k^{{\mathrm{H}_i}})_i =\delta_{jk} \quad \text{ and } \quad \int_{\omega_i}\kappa \nabla v_j^{{\mathrm{H}_i}}\cdot\nabla v_k^{{\mathrm{H}_i}}{\mathrm{d}x}=\lambda_j^{{\mathrm{H}_i}}\delta_{jk} \qquad\text{ for all } 1\leq j,k\leq \ell_i.\end{aligned}$$ The local offline space $V^{\text{H}_i,\ell_i}_{\text{off}}$ of rank $\ell_i$ is spanned by the first $\ell_i$ eigenvectors corresponding to the smallest eigenvalues for problem : $$\begin{aligned}
V^{\text{H}_i,\ell_i}_{\text{off}} := \text{span}\left\{v_j^{{\mathrm{H}_i}}: \quad 1\leq j\leq \ell_i \right\}.\end{aligned}$$ Analogously, we can define a rank $\ell_i$ projection operator $\mathcal{P}^{{\mathrm{H}_i},\ell_i}: V_{\text{snap}}^{{\mathrm{H}_i}}\to
V_{\text{off}}^{{\mathrm{H}_i},\ell_i}$ for all $\mathbb{N}_{+}\ni \ell_i\leq L_i$ by $$\label{eqn:proj-pod}
\mathcal{P}^{{\mathrm{H}_i},\ell_i}v=\sum\limits_{j=1}^{\ell_i}(v,v_j^{{\mathrm{H}_i}})_i v_j^{{\mathrm{H}_i}}\ \ \text{ for all } v\in V_{\text{snap}}^{{\mathrm{H}_i}}.$$ This projection is crucial to derive the error estimate for the discrete POD basis. Its approximation property will be discussed in Section \[sec:discretePOD\].
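A compact sketch of the discrete POD construction and of the resulting offline basis is given below; it assumes that the snapshot matrix and the local fine-scale matrices are available and that $S^{\text{off}}$ is positive definite, i.e., the snapshots are linearly independent in $L^2_{\widetilde{\kappa}}(\omega_i)$.

```python
import numpy as np
from scipy.linalg import eigh

def pod_offline_space(snapshots, A_i, M_tilde_i, ell):
    """snapshots: columns phi_m^{H_i} on the fine grid of omega_i; A_i and
    M_tilde_i: local stiffness and tilde_kappa-weighted mass matrices."""
    Phi = snapshots
    A_off = Phi.T @ (A_i @ Phi)             # energy inner products of snapshots
    S_off = Phi.T @ (M_tilde_i @ Phi)       # weighted L^2 inner products
    lam, V = eigh(A_off, S_off)             # ascending eigenvalues
    return lam[:ell], Phi @ V[:, :ell]      # offline basis v_j^{H_i}
```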
Galerkin approximation {#globcoupling}
----------------------
Next we define three types of global multiscale basis functions, based on the local multiscale basis functions introduced in Section \[locbasis\] and on partition of unity functions subordinated to the set of coarse neighborhoods $\{\omega_i\}_{i=1}^N$. This gives rise to three multiscale methods for solving Problem that can reasonably approximate the exact solution $u$ (or the fine-scale solution $u_h$).
We begin with an initial coarse space $V^{\text{init}}_0 = \text{span}\{ \chi_i \}_{i=1}^{N}$. The functions $\chi_i$ are the standard multiscale basis functions on each coarse element $K\in \mathcal{T}^{H}$ defined by $$\begin{aligned}
{2} \label{pou}
-\nabla\cdot(\kappa(x)\nabla\chi_i) &= 0 &&\quad\text{ in }\;\;K, \\
\chi_i &= g_i &&\quad\text{ on }\partial K, \nonumber\end{aligned}$$ where $g_i$ is affine over $\partial K$ with $g_i(O_j)=\delta_{ij}$ for all $i,j=1,\cdots, N$. Recall that $\{O_j\}_{j=1}^{N}$ are the set of coarse nodes on $\mathcal{T}^{H}$.
\[rem:chi\] The definition implies that $\text{supp}(\chi_i)=\omega_i$. Thus, we have $$\begin{aligned}
\label{eq:chi_supp}
\chi_i=0\quad \text{ on }\partial \omega_i.\end{aligned}$$ Furthermore, the maximum principle implies $0\leq \chi_i\leq 1.$ Note that under Assumption \[ass:coeff\], the gradients of the multiscale basis functions $\{\chi_i\}$ are uniformly bounded [@li2000gradient Corollary 1.3] $$\begin{aligned}
\label{eq:gradientChi}
\|\nabla\chi_i\|_{L^{\infty}(\omega_i)}\leq C_0,\end{aligned}$$ where the constant $C_0$ depends on $D$, the size and shape of $D_j$ for $j=1,\cdots,m$, the space dimension $d$ and the coefficient $\kappa$, but it is independent of the distances between the inclusions $D_k$ and $D_j$ for $k,j=1,\cdots, m$. It is worth noting that the precise dependence of the constant $C_0$ on $\kappa$ is still unknown. However, when the contrast $\Lambda=\infty$, it is known that the constant $C_0$ will blow up as two inclusions approach each other, for which the problem reduces to the perfect or insulated conductivity problem [@bao2010gradient]. Such extreme cases are beyond the scope of the present work. The constant $C_0$ also depends on coarse grid size $H$ with a possible scaling $H^{-1}$.
Since the functions $\{\chi_i\}_{i=1}^{N}$ form a partition of unity subordinated to $\{\omega_i\}_{i=1}^{N}$, we can construct global multiscale basis functions from the local multiscale basis functions discussed in Section \[locbasis\] [@melenk1996partition; @EFENDIEV2011937]. Specifically, the global multiscale spaces $V_{\text{off}}^{\text{S}}$, $V_{\text{snap}}$ and $V_{\text{off}} ^{\text{H}}$ are respectively defined by $$\label{eq:globalBasis}
\begin{aligned}
V_{\text{off}}^{\text{S}} &:= \text{span} \{ \chi_i v_j^{{\mathrm{S}_i}},\chi_i v_{k}^{{\mathrm{T}_i}},\chi_i v^{i}: \, \, 1 \leq i \leq N,\,\,\, 1 \leq j \leq \ell_i^{{\mathrm{I}}} \text{ and } 1 \leq k \leq \ell_i^{{\mathrm{II}}} \text{ with }\ell_i^{{\mathrm{I}}}+\ell_i^{{\mathrm{II}}}=\ell_i-1\},
\\
V_{\text{snap}} &:= \text{span}\{ \chi_i\phi_{j}^{ {\mathrm{H}_i}}:~~~ 1\leq i\leq N \text{ and }1\leq j \leq {L_i} \},\\
V_{\text{off}} ^{\text{H}} &:= \text{span} \{ \chi_i v_j^{{\mathrm{H}_i}}: \, \, 1 \leq i \leq N \, \, \, \text{and} \, \, \, 1 \leq j \leq \ell_i \}.
\end{aligned}$$ Accordingly, the Galerkin approximations to Problem read respectively: seeking $u_{\text{off}}^{\text{S}}\in V_{\text{off}}^{\text{S}}$, $u_{\text{snap}}\in V_{\text{snap}}$ and $u_{\text{off}}^{\text{H}}\in V_{\text{off}}^{\text{H}}$, satisfying $$\begin{aligned}
a(u_{\text{off}}^{\text{S}}, v) &= (f, v)_{D} \quad\text{for all} \,\,\, v \in V_{\text{off}}^{\text{S}},\label{cgvarform_spectral}\\
a(u_{\text{snap}}, v) &= (f, v)_{D} \quad\text{for all} \,\,\, v \in V_{\text{snap}},\label{cgvarform_snap}\\
a(u_{\text{off}}^{\text{H}}, v) &= (f, v)_{D} \quad \text{for all} \,\,\, v \in V_{\text{off}}^{\text{H}}.\label{cgvarform_pod}\end{aligned}$$ Note that, by construction, we have the inclusion relation $V_{\text{off}}^{\text{H}}\subset V_{\text{snap}}$ for all $1\leq \ell_i\leq L_i$ with $i=1,2,\cdots, N$. Hence, the Galerkin orthogonality property [@MR2373954 Corollary 2.5.10] implies $${|u-u_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2= {|u-u_{\text{snap}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2+{|u_{\text{snap}}-u_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2.$$ Furthermore, we will prove in Section \[sec:discretePOD\] that $u_{\text{off}}^{\text{H}}\to u_{\text{snap}}$ in $ H^1_0(D),$ and the convergence rate is determined by $\max_{i=1,\cdots,N}\big\{(H^2\lambda_{\ell_i+1}^{{\mathrm{H}_i}})^{-1/2}\big\}$.
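All three Galerkin problems share the same algebraic structure. As a rough illustration (not the implementation used here): if the columns of a matrix `R` collect the fine-grid coefficient vectors of the products $\chi_i v_j$ spanning one of the multiscale spaces, the coarse problem is the Galerkin projection of a hypothetical fine-grid stiffness matrix `A_fine` and load vector `f_fine`.

```python
import numpy as np

def galerkin_multiscale(A_fine, f_fine, R):
    """Galerkin solve in the multiscale space spanned by the columns of R
    (fine-grid coefficient vectors of the functions chi_i * v_j), followed
    by prolongation of the coarse solution back to the fine grid."""
    A_c = R.T @ A_fine @ R      # coarse stiffness; size = total number of local basis functions
    f_c = R.T @ f_fine          # coarse load vector
    return R @ np.linalg.solve(A_c, f_c)

# shape-only demo with random stand-ins for the fine-scale data
rng = np.random.default_rng(0)
n_fine, n_coarse = 200, 12
B = rng.standard_normal((n_fine, n_fine))
A_fine = B @ B.T + n_fine * np.eye(n_fine)   # SPD stand-in for the fine-grid stiffness matrix
R = rng.standard_normal((n_fine, n_coarse))  # stand-in for the multiscale basis
u_ms = galerkin_multiscale(A_fine, rng.standard_normal(n_fine), R)
```

The coarse system has dimension $\sum_{i=1}^{N}\ell_i$ (or $\sum_i L_i$ for the snapshot space), which is the source of the computational savings at the online stage.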
The main goal of this work is to derive bounds on the errors ${|u-u_{\text{off}}^{\text{S}}|_{H^{1}_{\kappa}{\left( D \right)}}}$, ${|u-u_{\text{snap}}|_{H^{1}_{\kappa}{\left( D \right)}}}$ and ${|u-u_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}$. This will be carried out in Section \[sec:error\] below.
Error estimates {#sec:error}
===============
This section is devoted to the energy error estimates for the multiscale approximations. The general strategy is as follows. First, we derive approximation properties to the local solution $u|_{\omega_i}$, for the local multiscale spaces $V_{\text{off}}^{\text{S}_i,
\ell_i^{{\mathrm{I}}}}$, $V_{\text{off}}^{\text{T}_i,\ell_i^{{\mathrm{II}}}}$, $V_{\text{snap}}^{\text{H}_i}$ and $V_{\text{off}}^{\text{H}_i,\ell_i}$. Then we combine these local estimates together with partition of unity functions to establish the desired global energy error estimates.
Spectral bases approximation error {#subsec:spectral}
--------------------------------
Note that the solution $u$ satisfies the following equation $$\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u)&=f \quad&&\text{ in } \omega_i,\\
-\kappa\frac{\partial u}{\partial n}&=-\kappa\frac{\partial u}{\partial n}\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.$$ which can be split into three parts, namely $$\begin{aligned}
\label{eq:decomp}
u|_{\omega_i}=u^{i,{\mathrm{I}}}+u^{i,{\mathrm{II}}}+u^{i,{\mathrm{III}}}.\end{aligned}$$ Here, the three components $u^{i,{\mathrm{I}}}$, $u^{i,{\mathrm{II}}}$, and $u^{i,{\mathrm{III}}}$ are respectively given by $$\label{eq:u-roma1}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u^{i,{\mathrm{I}}})&=f-\bar{f}_i \quad&&\text{ in } \omega_i\\
-\kappa\frac{\partial u^{i,{\mathrm{I}}}}{\partial n}&=0\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.$$ where $\bar{f}_i:=\int_{\omega_i}f{\mathrm{d}x}\times\frac{\widetilde{\kappa}}{\int_{\omega_i}\widetilde{\kappa}{\mathrm{d}x}}$, $$\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla u^{i,{\mathrm{II}}})&=0 \quad&&\text{ in } \omega_i\\
-\kappa\frac{\partial u^{i,{\mathrm{II}}}}{\partial n}&=\kappa\frac{\partial u}{\partial n}-\dashint_{\partial\omega_i}\kappa\frac{\partial u}{\partial n}\quad&&\text{ on }\partial \omega_i,
\end{aligned}
\right.$$ and $$u^{i,{\mathrm{III}}}=v^{i}\int_{\omega_i}f{\mathrm{d}x}$$ with $v^i$ being defined in . Clearly, $u^{i,{\mathrm{III}}}$ involves only one local solver. We begin with an [*a priori*]{} estimate on $u^{i,{\mathrm{II}}}$.
The following a priori estimate holds: $$\begin{aligned}
\label{eq:u2-bound}
{|u^{i,{\mathrm{II}}}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}\leq {|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}+H{{\rm C}_{\mathrm{poin}}(\omega_i)}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}.\end{aligned}$$
Let $\widetilde{u}:=u^{i,{\mathrm{I}}}+u^{i,{\mathrm{III}}}$. Then it satisfies $$\label{eq:u-roma}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla \widetilde{u})&=f \quad&&\text{ in } \omega_i,\\
\kappa\frac{\partial \widetilde{u}}{\partial n}&=\frac{1}{{|\partial\omega_i|}}\int_{\omega_i}f\;{\mathrm{d}x}\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.$$ To make the solution unique, we require $\int_{\partial\omega_i}\widetilde{u}\;{\rm d}s=0$. Testing the first equation with $\widetilde{u}$ gives $$\begin{aligned}
{|\widetilde{u}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2=\int_{\omega_i}f\widetilde{u}\;{\mathrm{d}x}.\end{aligned}$$ Now Poincaré inequality and Hölder’s inequality lead to $$\begin{aligned}
{|\widetilde{u}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2\leq \|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\|\widetilde{u}\|_{L^2_{\kappa}(\omega_i)}
\leq H{{\rm C}_{\mathrm{poin}}(\omega_i)}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}{|\widetilde{u}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}.\end{aligned}$$ Therefore, we obtain $$\begin{aligned}
{|\widetilde{u}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}\leq H{{\rm C}_{\mathrm{poin}}(\omega_i)}^{1/2}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}.\end{aligned}$$ Finally, the desired result follows from the triangle inequality.
Since $u^{i,{\mathrm{I}}}\in L^{2}_{\widetilde{\kappa}}(\omega_i)$ and $u^{i,{\mathrm{II}}}\in L^{2}(\partial\omega_i)$, and the systems $\{v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus \{1\}$ and $\{v_j^{{\mathrm{T}_i}}\}_{j=1}^{\infty}$ form complete orthogonal bases in $L^{2}_{\widetilde{\kappa}}(\omega_i)$ and $L^{2}(\partial\omega_i)$, respectively, $u^{i,{\mathrm{I}}}$ and $u^{i,{\mathrm{II}}}$ admit the following decompositions: $$\begin{aligned}
u^{i,{\mathrm{I}}}&=c_0(u^{i,{\mathrm{I}}},1)_i+\sum\limits_{j=1}^{\infty}(u^{i,{\mathrm{I}}},v_j^{{\mathrm{S}_i}})_i v_j^{{\mathrm{S}_i}},\label{eq:spectralU}\\
u^{i,{\mathrm{II}}}&=\sum\limits_{j=1}^{\infty}(u^{i,\rm II},v_j^{{\mathrm{T}_i}})_{\partial\omega_i} v_j^{{\mathrm{T}_i}}.
\label{eq:spectralu2}\end{aligned}$$ For any $n\in \mathbb{N}_{+}$, we employ the $n$-term truncation $u^{i,{\mathrm{I}}}_n$ and $u^{i,{\mathrm{II}}}_n$ to approximate $u^{i,{\mathrm{I}}}$ and $u^{i,{\mathrm{II}}}$, respectively, on $\omega_i$: $$\begin{aligned}
u^{i,{\mathrm{I}}}_n:=\mathcal{P}^{{\mathrm{S}_i},n}u^{i,{\mathrm{I}}}\in V_{\text{off}}^{\text{S}_i,n}
\quad \mbox{and}\quad
u^{i,{\mathrm{II}}}_n:=\mathcal{P}^{{\mathrm{T}_i},n}u^{i,{\mathrm{II}}}\in V_{\text{off}}^{\text{T}_i,n}.$$
\[lem:assF\] Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$. Then there holds $$\begin{aligned}
\label{eq:f_norm}
\|f-\bar{f}_i\|_{L^{2}_{\widetilde{\kappa}^{-1}}(\omega_i)}^2
=\sum\limits_{j=1}^{\infty}\Big(\lambda_j^{\text{S}_i}\Big)^2 \Big|(u^{i,{\mathrm{I}}},v_{j}^{{\mathrm{S}_i}})_i\Big|^2<\infty.\end{aligned}$$
Since $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$, by Lemma \[lem:L2Inv\], $f-\bar f_i$ admits the following spectral decomposition: $$\begin{aligned}
\label{eq:99}
f-\bar{f}_i=\Big(\int_{\omega_i}\widetilde{\kappa}{\mathrm{d}x}\Big)^{-1}
\Big(\int_{\omega_i}
(f-\bar{f}_i)\;{\mathrm{d}x}\Big)\widetilde{\kappa}
+\sum\limits_{j=1}^{\infty}
\Big(\int_{\omega_i}(f-\bar{f}_i)
v_j^{{\mathrm{S}_i}}{\mathrm{d}x}\Big)\widetilde{\kappa}v_j^{{\mathrm{S}_i}}.\end{aligned}$$ By the definition of $\bar f_i$, the first term vanishes. Thus, it suffices to compute the $j^{\text{th}}$ expansion coefficient $\int_{\omega_i}(f-\bar{f}_i)v_j^{{\mathrm{S}_i}}{\mathrm{d}x}$ for $j=1,2,\cdots$, which follows from . Indeed, testing with $v_j^{{\mathrm{S}_i}}$ yields $$\begin{aligned}
\int_{\omega_i}\Big(f-\bar{f}_i\Big)v_j^{{\mathrm{S}_i}}{\mathrm{d}x}&=\int_{\omega_i}\kappa\nabla u^{i,{\mathrm{I}}}\cdot\nabla v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=\lambda_{j}^{{\mathrm{S}_i}}\int_{\omega_i}\widetilde{\kappa}u^{i,{\mathrm{I}}} v_j^{{\mathrm{S}_i}}{\mathrm{d}x}=\lambda_{j}^{{\mathrm{S}_i}}(u^{i,{\mathrm{I}}}, v_j^{{\mathrm{S}_i}})_i.\end{aligned}$$
Now we state an important approximation property of the operator ${\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}$ of rank $\ell_i^{{\mathrm{I}}}$ defined in .
\[prop:projection\] Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$ and $\ell_i^{{\mathrm{I}}}\in \mathbb{N}_+$. Let $u^{i,{\mathrm{I}}}$ be the first component in . Then the projection ${\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}: L^2_{\widetilde{\kappa}}(\omega_i)\to V_{{\rm off}}^{\mathrm{S}_i,\ell_i^{{\mathrm{I}}}}$ of rank $\ell_i^{{\mathrm{I}}}$ defined in has the following approximation properties: $$\begin{aligned}
{{\left\|u^{i,{\mathrm{I}}}-{\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}}\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}&
\leq (\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-1}{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}},\label{eq:3333}\\
{|u^{i,{\mathrm{I}}}-{\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}&\leq ({\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}})^{-\frac12}{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}. \label{eq:4444}\end{aligned}$$
The definitions and , and the orthonormality of $\{v_j^{{\mathrm{S}_i}}\}_{j=1}^{\infty}\oplus\{1\}$ in $L^2_{\widetilde{\kappa}}(\omega_i)$ directly yield $$\begin{aligned}
{{\left\|u^{i,{\mathrm{I}}}-{\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}}\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}^2
&=\sum\limits_{j=\ell_i^{{\mathrm{I}}}}^{\infty}(u^{i,{\mathrm{I}}},v_j^{{\mathrm{S}_i}})_i^2
=\sum\limits_{j=\ell_i^{{\mathrm{I}}}}^{\infty}(\lambda_j^{{\mathrm{S}_i}})^{-2}
(\lambda_j^{{\mathrm{S}_i}})^{2}(u^{i,{\mathrm{I}}},v_j^{{\mathrm{S}_i}})_i^2\\
&\leq (\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}\sum\limits_{j=\ell_i^{{\mathrm{I}}}}^{\infty}
(\lambda_j^{{\mathrm{S}_i}})^{2}(u^{i,{\mathrm{I}}},v_j^{{\mathrm{S}_i}})_i^2\\
&\leq (\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}{{\left\|f-\bar{f}_i\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}^2,\end{aligned}$$ where in the last step we have used . Next, since the first term in the expansion vanishes, we deduce that $f-\bar{f}_i$ is the $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$-orthogonal projection of $f$ onto the orthogonal complement of ${\rm span}\{\widetilde{\kappa}\}$ in $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$. Thus, $$\begin{aligned}
{{\left\|f-\bar{f}_i\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}\leq {{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}.\end{aligned}$$ Plugging this inequality into the preceding estimate, we arrive at $$\begin{aligned}
{{\left\|u^{i,{\mathrm{I}}}-{\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}}\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}^2
\leq(\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}^2,\end{aligned}$$ Taking the square root yields the first estimate. The second estimate can be derived in a similar manner.
Next we give the approximation property of the finite rank operator ${\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}$ to the second component of the local solution $u^{i,{\mathrm{II}}}$, which relies on the regularity of the very weak solution in the appendix.
\[lemma:u2\] Let $\ell_i^{{\mathrm{II}}}\in \mathbb{N}_+$ and let $u^{i,{\mathrm{II}}}$ be the second component in . Then the projection ${\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}: L^2(\partial\omega_i)\to V_{\rm off}^{{\rm T}_i,\ell_i^{{\mathrm{II}}}}$ of rank $\ell_i^{{\mathrm{II}}}$ defined in has the following approximation properties: $$\begin{aligned}
\|{u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}}\|_{L^2(\partial\omega_i)}
&\leq (\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-\frac12}
\Big({|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}+ H\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\Big),
\label{eq:5555}\\
{{\left\|u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}
&\leq {{\rm C}_{\mathrm{weak}}}(\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-\frac12}
\Big({|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}+
H\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}\Big), \label{eq:56789}\\
\int_{\omega_i}\chi_i^2\kappa |\nabla (u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}) |^2{\mathrm{d}x}&\leq 8H^{-2}{{\rm C}_{\mathrm{weak}}}^2(\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-1}
\Big({|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2+ H^2{{\rm C}_{\mathrm{poin}}(\omega_i)}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).
\label{eq:777}\end{aligned}$$
The inequality follows from the expansion , and , and the fact that $u^{i,{\mathrm{II}}}\in H^1_{\kappa}(\omega_i)$. Indeed, we obtain from and the orthonormality of $\{v_j^{{\mathrm{T}_i}}\}_{j=1}^{\infty}$ in $L^2(\partial\omega_i)$ that $$\begin{aligned}
\|{u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}}\|_{L^2(\partial\omega_i)}^2
&=\sum\limits_{j>\ell_i^{{\mathrm{II}}}}|(u^{i,{\mathrm{II}}},v_j^{{\mathrm{T}_i}})_{\partial\omega_i}|^2
=\sum\limits_{j>\ell_i^{{\mathrm{II}}}}(\lambda_j^{{\mathrm{T}_i}})^{-1}\lambda_j^{{\mathrm{T}_i}}|(u^{i,{\mathrm{II}}},v_j^{{\mathrm{T}_i}})_{\partial\omega_i}|^2\\
&\leq (\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-1}\sum\limits_{j>\ell_i^{{\mathrm{II}}}}\lambda_j^{{\mathrm{T}_i}}|(u^{i,{\mathrm{II}}},v_j^{{\mathrm{T}_i}})_{\partial\omega_i}|^2.\end{aligned}$$ Then the estimate follows from and the identity $
\langle u^{i,{\mathrm{II}}},u^{i,{\mathrm{II}}} \rangle_i=\sum_{j=1}^{\infty}\lambda_j^{{\mathrm{T}_i}}|(u^{i,{\mathrm{II}}},v_j^{{\mathrm{T}_i}})_{\partial\omega_i}|^2.
$ To prove , we first write the local error equation for $e:=u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}$ by $$\label{eq:222}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla e)&=0 \quad&&\text{ in } \omega_i,\\
e&=u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}\quad&&\text{ on }\partial \omega_i.
\end{aligned}
\right.$$ Now Theorem \[lem:very-weak\] yields $$\begin{aligned}
{{\left\|u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}\leq \text{C}_{\text{weak}}\|{u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}}\|_{L^2(\partial\omega_i)}\end{aligned}$$ for some constant $\text{C}_{\text{weak}}$ independent of the coefficient $\kappa$. This, together with , proves .
To derive the energy error estimate from the $L^2_{\widetilde{\kappa}}(\omega_i)$ error estimate, we employ a Caccioppoli-type inequality. Note that $\chi_i=0$ on the boundary $\partial \omega_i$, cf . Multiplying the first equation in with $\chi_i^2 e$, integrating over $\omega_i$ and integrating by parts lead to $$\begin{aligned}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2{\mathrm{d}x}=-2\int_{\omega_i}\kappa \nabla e\cdot \nabla \chi_i\chi_i e\;{\mathrm{d}x}.\end{aligned}$$ Together with Hölder’s inequality and Young’s inequality, we arrive at $$\begin{aligned}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2{\mathrm{d}x}\leq 4\int_{\omega_i}\kappa|\nabla\chi_i|^2 e^2\,{\mathrm{d}x}.\end{aligned}$$ Further, the definition of $\widetilde{\kappa}$ in yields $$\begin{aligned}
\int_{\omega_i}\chi_i^2\kappa|\nabla e|^2{\mathrm{d}x}\leq 4H^{-2}\int_{\omega_i}\widetilde{\kappa} e^2\,{\mathrm{d}x}.\end{aligned}$$ Now and Young’s inequality yield . This completes the proof of the lemma.
It is worth emphasizing that the local energy estimates and are derived under essentially no restrictive assumptions beyond the mild condition $f\in L^2_{\widetilde{\kappa}^{-1}}(D)$. To the best of our knowledge, these estimates are new. The authors of [@EFENDIEV2011937] utilized the Caccioppoli inequality to derive similar estimates, which, however, incurs some (implicit) assumptions on the problem. Hence, the estimates and are important for justifying the local spectral approach.
Finally, we present the rank-$\ell_i$ approximation to $u|_{\omega_i}$, where $\ell_i:=\ell_i^{{\mathrm{I}}}+\ell_i^{{\mathrm{II}}}+1$ with $\ell_i^{{\mathrm{I}}}, \ell_i^{{\mathrm{II}}}\in \mathbb{N}$ for all $i=1,2,\cdots, N$: $$\begin{aligned}
\label{eq:spectral-finiteRank}
\widetilde{u}_i:={\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}}+{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}}+u^{i,{\mathrm{III}}}.\end{aligned}$$
Now, we present an error estimate for the Galerkin approximation $u_{\text{off}}^{\text{S}}$ based on the local spectral basis, cf. . Our proof is inspired by the partition of unity finite element method (FEM) [@melenk1996partition Theorem 2.1].
\[lem:spectralApprox\] Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)\cap L^2_{{\kappa}^{-1}}(D)$ and $\ell_i^{{\mathrm{I}}}, \ell_i^{{\mathrm{II}}}\in \mathbb{N}$ for all $i=1,2,\cdots, N$. Let $u$ be the solution to Problem . Denote $V_{{\rm off}}^{{\rm S}}\ni w_{{\rm off}}^{{\rm S}}:=\sum\limits_{i=1}^{N}\chi_i \widetilde{u}_i$. Then there holds $$\begin{aligned}
{|u-w_{{\rm off}}^{{\rm S}}|_{H^{1}_{\kappa}{\left( D \right)}}}
&\leq {2}C_{{\rm ov}}\max_{i=1,\cdots,N}\big\{({H\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}})^{-1}+
({\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}})^{-\frac12}
\big\}{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( D \right)}}}\\
&+7C_{{\rm ov}}{{\rm C}_{\mathrm{weak}}}{\rm C}_{{\rm poin}}\max_{i=1,\cdots,N}
\big\{(H^2\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-\frac12}\big\}
\|f\|_{L^2_{\kappa^{-1}}(D)},
\end{aligned}$$ where ${\rm C}_{\text{poin}}:={\rm diam}(D){{\rm C}_{\mathrm{poin}}(D)}^{1/2}+H\max_{i=1,\cdots,N}\{{{\rm C}_{\mathrm{poin}}(\omega_i)}^{1/2}\}$.
Let ${{e}^{}_{\mathrm{}}}:=u-w_{\text{off}}^{\text{S}}$. Then the property of the partition of unity of $\{\chi_i\}_{i=1}^{N}$ leads to $${{e}^{}_{\mathrm{}}}=\sum\limits_{i=1}^{N}\chi_i{{e}^{i}_{\mathrm{}}} \qquad\text{ with }
\qquad{{e}^{i}_{\mathrm{}}}:=(u^{i,{\mathrm{I}}}-{\mathcal{P}}^{{\mathrm{S}_i},\ell_i^{{\mathrm{I}}}}u^{i,{\mathrm{I}}})+(u^{i,{\mathrm{II}}}-{\mathcal{P}}^{{\mathrm{T}_i},\ell_i^{{\mathrm{II}}}}u^{i,{\mathrm{II}}})
:={{e}^{i}_{\mathrm{{\mathrm{I}}}}}+{{e}^{i}_{\mathrm{{\mathrm{II}}}}}.$$ Taking its squared energy norm and using the overlap condition , we arrive at $$\begin{aligned}
\int_{D}\kappa|\nabla {{e}^{}_{\mathrm{}}}|^2{\mathrm{d}x}&=\int_{D}\kappa|\sum\limits_{i=1}^{N}
\nabla(\chi_i{{e}^{i}_{\mathrm{}}})|^2{\mathrm{d}x}\leq{C_{\mathrm{ov}}}\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{}}})|^2{\mathrm{d}x}.\end{aligned}$$ This and Young’s inequality together imply $$\begin{aligned}
\label{eq:1111}
\int_{D}\kappa|\nabla {{e}^{}_{\mathrm{}}}|^2{\mathrm{d}x}\leq 2{C_{\mathrm{ov}}}\sum\limits_{i=1}^{N}
\Big(\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{{\mathrm{I}}}}})|^2{\mathrm{d}x}+\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{{\mathrm{II}}}}})|^2{\mathrm{d}x}\Big).\end{aligned}$$ It remains to estimate the two integral terms in the bracket. By the Cauchy-Schwarz inequality and the definition of $\widetilde{\kappa}$, we obtain $$\begin{aligned}
\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{{\mathrm{I}}}}})|^2{\mathrm{d}x}&\leq 2\Big( \int_{\omega_i}\big(\kappa\sum\limits_{j=1}^{N}
|\nabla\chi_j|^2\big)|{{e}^{i}_{\mathrm{{\mathrm{I}}}}}|^2{\mathrm{d}x}+\int_{\omega_i}\kappa\chi_i^2
|\nabla{{e}^{i}_{\mathrm{{\mathrm{I}}}}}|^2{\mathrm{d}x}\Big)\nonumber\\
&\leq 2\Big( H^{-2}\int_{\omega_i}\widetilde{\kappa}|{{e}^{i}_{\mathrm{{\mathrm{I}}}}}|^2{\mathrm{d}x}+\int_{\omega_i}\chi_i^2\kappa
|\nabla{{e}^{i}_{\mathrm{{\mathrm{I}}}}}|^2{\mathrm{d}x}\Big).\label{eq:444}\end{aligned}$$ Then Proposition \[prop:projection\] yields $$\begin{aligned}
\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{{\mathrm{I}}}}})|^2{\mathrm{d}x}\leq 2\Big((H\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}
+(\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-1}\Big){{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}^2.\end{aligned}$$ Analogously, we can derive the following upper bound for the second term: $$\begin{aligned}
\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{{\mathrm{II}}}}})|^2{\mathrm{d}x}\leq 20{{\rm C}_{\mathrm{weak}}}^2(H^2\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}})^{-1}
\Big({|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2+H^2{{\rm C}_{\mathrm{poin}}(\omega_i)}\|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).\end{aligned}$$ Inserting these two estimates into gives $$\begin{aligned}
\int_{D}\kappa|\nabla {{e}^{}_{\mathrm{}}}|^2{\mathrm{d}x}&\leq 4C_{\text{ov}}\sum\limits_{i=1}^{N}\Big((H\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}
+(\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-1}\Big)
{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}^2\\
&+40C_{\text{ov}}\sum\limits_{i=1}^{N}{{\rm C}_{\mathrm{weak}}}^2(H^2\lambda_{\ell^{{\mathrm{II}}}_{i}+1}^{{\mathrm{T}_i}})^{-1}
\Big({|u|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2+{{\rm C}_{\mathrm{poin}}(\omega_i)}H^2 \|f\|_{L^2_{\kappa^{-1}}(\omega_i)}^2\Big).\end{aligned}$$ Finally, the overlap condition leads to $$\label{eq:999}
\begin{aligned}
\int_{D}\kappa|\nabla {{e}^{}_{\mathrm{}}}|^2{\mathrm{d}x}&\leq 4C_{\text{ov}}^2\max_{i=1,\cdots,N}\Big\{(H\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-2}
+(\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}})^{-1}\Big\}
{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( D \right)}}}^2\\
&+40C_{\text{ov}}^2{{\rm C}_{\mathrm{weak}}}^2\max_{i=1,\cdots,N}\{(H^2
{\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}}})^{-1}
\}\\
&\times\Big({|u|_{H^{1}_{\kappa}{\left( D \right)}}}^2+ H^2\max_{i=1,\cdots,N}\{{{{\rm C}_{\mathrm{poin}}(\omega_i)}}
\}\|f\|_{L^2_{\kappa^{-1}}(D)}^2\Big).
\end{aligned}$$ Furthermore, since $f\in L^2_{\kappa^{-1}}(D)$, we obtain from Poincaré’s inequality the [*a priori*]{} estimate $$\begin{aligned}
\label{eq:888}
{|u|_{H^{1}_{\kappa}{\left( D \right)}}}\leq \text{diam}(D){{\rm C}_{\mathrm{poin}}(D)}^{1/2}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}.\end{aligned}$$ Indeed, we can get by that $$\begin{aligned}
\int_D \kappa u^{2}{\mathrm{d}x}\leq \text{diam}(D)^2{{{\rm C}_{\mathrm{poin}}(D)}}\int_{D}\kappa|\nabla u|^2{\mathrm{d}x}.\end{aligned}$$ Testing with $u\in V$, by Hölder’s inequality, leads to $$\begin{aligned}
\int_{D}\kappa|\nabla u|^2{\mathrm{d}x}&=\int_D fu\; {\mathrm{d}x}\leq \|f\|_{L^2_{\kappa^{-1}}(D)}\|u\|_{L^2_{\kappa}(D)}.\end{aligned}$$ These two inequalities together imply . Inserting into shows the desired assertion.
An immediate corollary of Lemma \[lem:spectralApprox\], after appealing to the Galerkin orthogonality property [@MR2373954 Corollary 2.5.10], is the following energy error between $u$ and $u_{\text{off}}^{\text{S}}$:
\[prop:Finalspectral\] Assume that $f\in L^2_{\widetilde{\kappa}^{-1}}(D)\cap L^2_{{\kappa}^{-1}}(D)$ and let $\ell_i^{{\mathrm{I}}}, \ell_i^{{\mathrm{II}}}\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u\in V$ and $u_{{\rm off}}^{{\rm S}}\in V_{{\rm off}}^{{\rm S}}$ be the solutions to Problems and , respectively. There holds $$\label{eq:snapErr}
\begin{aligned}
{|u-u_{{\rm off}}^{{\rm S}}|_{H^{1}_{\kappa}{\left( D \right)}}}&=\min\limits_{w\in V_{{\rm off}}^{{\rm S}}}{|u-w|_{H^{1}_{\kappa}{\left( D \right)}}}\\
&\leq \sqrt{2}C_{{\rm ov}}\max_{i=1,\cdots,N}\big\{({H\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}})^{-1}+
({\lambda_{\ell_i^{{\mathrm{I}}}}^{{\mathrm{S}_i}}})^{-\frac12}
\big\}{{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( D \right)}}}\\
&+7C_{{\rm ov}}{{\rm C}_{\mathrm{weak}}}{\rm C}_{{\rm poin}}\max_{i=1,\cdots,N}
\big\{({H^2\lambda_{\ell_i^{{\mathrm{II}}}+1}^{{\mathrm{T}_i}}})^{-\frac12}\big\}
\|f\|_{L^2_{\kappa^{-1}}(D)}.
\end{aligned}$$
\[rem:spectral\] According to Proposition \[prop:Finalspectral\], the convergence rate is essentially determined by two factors: the smallest eigenvalue $\lambda_{\ell_i}^{{\mathrm{S}_i}}$ that is not included in the local spectral basis and the coarse mesh size $H$. A proper balance between them is necessary for the convergence. For any fixed $H>0$, in view of the eigenvalue problems and , a simple scaling argument implies $$\begin{aligned}
H^2\lambda^{{\mathrm{S}_i}}_{\ell_i^{{\mathrm{I}}}}\to \infty \quad\text{ and }\quad H\lambda^{{\mathrm{T}_i}}_{\ell_i^{{\mathrm{II}}}}\to \infty,\quad \text{ as }\quad \ell_i^{{\mathrm{I}}},\ell_i^{{\mathrm{II}}} \to \infty.\end{aligned}$$ Hence, assuming that $\ell_i^{{\mathrm{I}}}$ and $\ell_i^{{\mathrm{II}}}$ are sufficiently large such that $H^2\lambda^{{\mathrm{S}_i}}_{\ell_i^{{\mathrm{I}}}}\geq 1$ and $H\lambda^{{\mathrm{T}_i}}_{\ell_i^{{\mathrm{II}}}}\geq H^{-3}$, from Proposition \[prop:Finalspectral\], we obtain $$\begin{aligned}
\label{eq:aaaa}
{|u-u_{{\rm off}}^{{\rm S}}|_{H^{1}_{\kappa}{\left( D \right)}}}\lesssim H \Big({{\left\|f\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( D \right)}}}+{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}\Big).\end{aligned}$$ Note that an estimate of the type is the main goal of the convergence analysis for many multiscale methods [@MR1660141; @MR2721592; @li2017error]. In practice, the numbers $\ell_i^{{\mathrm{I}}}$ and $\ell_i^{{\mathrm{II}}}$ of local multiscale functions fully determine the computational complexity of the multiscale solver for Problem at the offline stage. However, their optimal choice rests on the decay rate of the nonincreasing sequences $\big\{(\lambda_{n}^{{\mathrm{S}_i}})^{-1}\big\}_{n=1}^{\infty}$ and $\big\{(\lambda_{n}^{{\mathrm{T}_i}})^{-1}\big\}_{n=1}^{\infty}$. A precise characterization of the eigenvalue decay for heterogeneous problems seems to be lacking at present, and the topic is beyond the scope of the present work.
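As a rough, hypothetical illustration of how such a choice could be automated for a single neighborhood, the sketch below selects the smallest ranks fulfilling the scaling conditions stated above, given ascending arrays of computed eigenvalues (array indices are 0-based, ranks are 1-based as in the text):

```python
import numpy as np

def select_ranks(lam_S, lam_T, H):
    """Smallest ranks (1-based) such that H**2 * lambda^S_{ell_I} >= 1 and
    H * lambda^T_{ell_II} >= H**(-3), where lambda^S_k = lam_S[k-1], etc.
    Eigenvalue arrays are assumed sorted in ascending order; if a threshold
    is never reached, the returned rank exceeds the array length, signalling
    that more eigenpairs should be computed."""
    ell_I = int(np.searchsorted(H**2 * np.asarray(lam_S), 1.0)) + 1
    ell_II = int(np.searchsorted(H * np.asarray(lam_T), H**(-3))) + 1
    return ell_I, ell_II

# hypothetical spectra with polynomial growth in the index
lam_S = [0.5 * k**2 for k in range(1, 200)]
lam_T = [5.0 * k**3 for k in range(1, 200)]
print(select_ranks(lam_S, lam_T, H=0.1))
```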
Harmonic extension bases approximation error {#subsec:harmonic}
--------------------------------------------
By the definition of the local harmonic extension snapshot space ${V^{\mathrm{H}_{i}}_{\text{snap}}}$ in and , there exists $u^{i}_{\text{snap}}\in {V^{\mathrm{H}_{i}}_{\text{snap}}}$ satisfying $$\begin{aligned}
\label{eq:usnap}
u^{i}_{\text{snap}}:=u_{h} \qquad\text{ on }\quad \partial \omega_i.\end{aligned}$$
In the error analysis below, the weighted Friedrichs (or Poincaré) inequalities play an important role. These inequalities require certain conditions on the coefficient $\kappa$ and domain $D$ that in general are not fully understood. Assumption \[ass:coeff\] is one sufficient condition for the weighted Friedrichs inequality [@galvis2010domain; @pechstein2012weighted].
Now we can derive the following local energy error estimate.
\[lem:energyHA\] Let ${{e}^{i}_{\mathrm{snap}}}=u_h-u^{i}_{\text{snap}}$. Then there holds $$\begin{aligned}
\label{eq:energyHA}
{|{{e}^{i}_{\mathrm{snap}}}|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}\leq {H}\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}.\end{aligned}$$
Indeed, by definition, the following error equation holds: $$\label{eq:locErr}
\left\{
\begin{aligned}
-\nabla\cdot(\kappa\nabla {{e}^{i}_{\mathrm{snap}}})&=f \quad&&\text{ in } \omega_i,\\
{{e}^{i}_{\mathrm{snap}}}&=0 \quad&&\text{ on }\partial \omega_i.
\end{aligned}\right.$$ Then and Hölder’s inequality give the assertion.
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$ and $\ell_i\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u_h\in V_h$ be the unique solution to Problem . Denote $V_{{\rm snap}}\ni w_{{\rm snap}}:=\sum_{i=1}^{N}\chi_i u^{i}_{{\rm snap}}$. Then there holds $$\begin{aligned}
{|u_h-w_{{\rm snap}}|_{H^{1}_{\kappa}{\left( D \right)}}}\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H{{\rm C}_{\mathrm{poin}}(\omega_i)}+\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\Big\}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}.\end{aligned}$$
Let ${{e}^{}_{\mathrm{snap}}}:=u_h-w_{\text{snap}}$. Since $\{\chi_i\}_{i=1}^{N}$ forms a set of partition of unity functions subordinated to the set $\{\omega_i\}_{i=1}^{N}$, we deduce $${{e}^{}_{\mathrm{snap}}}=\sum\limits_{i=1}^{N}\chi_i{{e}^{i}_{\mathrm{snap}}},$$ where ${{e}^{i}_{\mathrm{snap}}}:=u_h-u^{i}_{\text{snap}}$ is the local error on $\omega_i$. Taking its squared energy norm and using the overlap condition , we arrive at $$\begin{aligned}
\label{eq:0000}
\int_{D}\kappa|\nabla {{e}^{}_{\mathrm{snap}}}|^2{\mathrm{d}x}&=\int_{D}\kappa|\sum\limits_{i=1}^{N}
\nabla(\chi_i{{e}^{i}_{\mathrm{snap}}})|^2{\mathrm{d}x}\leq{C_{\mathrm{ov}}}\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{snap}}})|^2{\mathrm{d}x}.\end{aligned}$$ It remains to estimate the integral term. Young’s inequality gives $$\begin{aligned}
\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{snap}}})|^2{\mathrm{d}x}&\leq 2\Big( \int_{\omega_i}\big(\kappa
|\nabla\chi_i|^2\big)|{{e}^{i}_{\mathrm{snap}}}|^2{\mathrm{d}x}+\int_{\omega_i}\kappa
|\nabla{{e}^{i}_{\mathrm{snap}}}|^2{\mathrm{d}x}\Big).\end{aligned}$$ Taking and into account, we get $$\begin{aligned}
\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{snap}}})|^2{\mathrm{d}x}&\leq 2\sum\limits_{i=1}^{N}\Big( C_0^2H^2{{\rm C}_{\mathrm{poin}}(\omega_i)}+1\Big)\int_{\omega_i}\kappa
|\nabla{{e}^{i}_{\mathrm{snap}}}|^2{\mathrm{d}x}.\end{aligned}$$ This and yield $$\begin{aligned}
\sum\limits_{i=1}^{N}\int_{\omega_i}\kappa
|\nabla(\chi_i{{e}^{i}_{\mathrm{snap}}})|^2{\mathrm{d}x}\leq 2\sum\limits_{i=1}^{N}\Big(C_0^2H^2{{\rm C}_{\mathrm{poin}}(\omega_i)}+1\Big)\times H^2{{\rm C}_{\mathrm{poin}}(\omega_i)}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}^2.\end{aligned}$$ Finally, the overlap condition and inequality show the desired assertion.
Finally, we derive an energy error estimate for the conforming Galerkin approximation to Problem based on the multiscale space $V_{\text{snap}}$.
\[prop:FinalSnap\] Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u\in V$ and $u_{{\rm snap}}\in V_{{\rm snap}}$ be the solutions to Problems and , respectively. Then there holds $$\begin{aligned}
{|u-u_{{\rm snap}}|_{H^{1}_{\kappa}{\left( D \right)}}}\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H{{\rm C}_{\mathrm{poin}}(\omega_i)}&+\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\Big\}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}+\min\limits_{v_h\in V_h}{|u-v_h|_{H^{1}_{\kappa}{\left( D \right)}}}.\end{aligned}$$
This assertion follows directly from the Galerkin orthogonality property [@MR2373954 Corollary 2.5.10], the triangle inequality and the fine-scale [*a priori*]{} estimate .
Discrete POD approximation error {#sec:discretePOD}
--------------------------------
Now we turn to the discrete POD approximation. First, we present an [*a priori*]{} estimate for Problem . It will be used to derive the energy estimate for $u_{\text{snap}}^i$ defined in .
Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u_h\in V_h$ be the solution to Problem . Then there holds $$\begin{aligned}
{|u_h|_{H^{1}_{\kappa}{\left( D \right)}}}&\leq 2{\rm diam}(D)\sqrt{{{\rm C}_{\mathrm{poin}}(D)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}. \label{eq:uApriori}
$$
In analogy to , we obtain $$\begin{aligned}
{|u|_{H^{1}_{\kappa}{\left( D \right)}}}&\leq \text{diam}(D)\sqrt{{{\rm C}_{\mathrm{poin}}(D)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}},\\
{|u_h|_{H^{1}_{\kappa}{\left( D \right)}}}&\leq \text{diam}(D)\sqrt{{{\rm C}_{\mathrm{poin}}(D)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}.\end{aligned}$$ This and the triangle inequality lead to the desired assertion.
Let $u^{i}_{\text{snap}}\in {V^{\mathrm{H}_{i}}_{\text{snap}}}$ be defined in . Then we deduce from and the triangle inequality that $$\begin{aligned}
\label{usnap:apriori}
{|u_{\text{snap}}^i|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}\leq {H}{{\rm C}_{\mathrm{poin}}(\omega_i)}^{1/2}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}+{|u_h|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}.\end{aligned}$$ Note that $\{v_j^{{\mathrm{H}_i}}\}_{j=1}^{L_i}$ forms an orthogonal basis of $ {V^{\mathrm{H}_{i}}_{\text{snap}}}$, cf. . Therefore, the function $u^{i}_{\text{snap}}\in {V^{\mathrm{H}_{i}}_{\text{snap}}}$ admits the following expansion $$\begin{aligned}
\label{eq:usnap_expand}
u^{i}_{\text{snap}}=\sum\limits_{j=1}^{L_i}(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i v_j^{{\mathrm{H}_i}}.\end{aligned}$$ To approximate $u^{i}_{\text{snap}}$ in the space $V_{\text{off}}^{{\mathrm{H}_i},n}$ of dimension $n$ for some $\mathbb{N}_{+}\ni n\leq L_i$, we take its first $n$-term truncation: $$\begin{aligned}
\label{eq_uin}
u^i_n:=\mathcal{P}^{{\mathrm{H}_i},n}u^{i}_{\text{snap}}=\sum\limits_{j=1}^{n}(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i v_j^{{\mathrm{H}_i}},\end{aligned}$$ where the projection operator $\mathcal{P}^{{\mathrm{H}_i},n}$ is defined in .
The next result provides the approximation property of $u^i_n$ to $u^{i}_{\text{snap}}$ in the $L^2_{\widetilde{\kappa}}(\omega_i)$ norm:
\[lem:5.1\] Assume that $f\in L^2_{{\kappa}^{-1}}(D)$. Let $u^{i}_{{\rm snap}}\in {V^{\mathrm{H}_{i}}_{\text{\rm snap}}}$ and $u^i_n\in V_{{\rm off}}^{{\mathrm{H}_i},n}$ be defined in and for $\mathbb{N}_{+}\ni n\leq L_i$, respectively. Then there holds $$\begin{aligned}
\| u^{i}_{{\rm snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}\leq \sqrt{2}(\lambda_{n+1}^{{\mathrm{H}_i}})^{-1/2}\Big( {H}\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}+{|u_h|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}\Big).\end{aligned}$$
It follows from the expansion and that $$\begin{aligned}
\int_{\omega_i }\kappa|\nabla u^{i}_{\text{snap}}|^2{\mathrm{d}x}=\sum\limits_{j=1}^{L_i}|(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i|^2 \lambda_j^{{\mathrm{H}_i}}.\end{aligned}$$ Together with , we get $$\begin{aligned}
\label{eq:333}
\sum\limits_{j=1}^{L_i}|(u^i_{\rm snap},v_j^{{\mathrm{H}_i}})_i|^2 \lambda_j^{{\mathrm{H}_i}}\leq 2\Big( {H}^2{{{\rm C}_{\mathrm{poin}}(\omega_i)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}^2+{|u_h|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2\Big).\end{aligned}$$ Meanwhile, the combination of , and leads to $$\begin{aligned}
\| u^{i}_{\text{snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2&=\sum\limits_{j=n+1}^{L_i}|(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i|^2
=\sum\limits_{j=n+1}^{L_i}(\lambda_j^{{\mathrm{H}_i}})^{-1}\lambda_j^{{\mathrm{H}_i}}|(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i|^2\\
&\leq (\lambda_{n+1}^{{\mathrm{H}_i}})^{-1}\sum\limits_{j=n+1}^{L_i}\lambda_j^{{\mathrm{H}_i}}|(u^{i}_{\text{snap}},v_j^{{\mathrm{H}_i}})_i|^2.\end{aligned}$$ Further, an application of implies $$\begin{aligned}
\| u^{i}_{\text{snap}}-u^i_n \|_{L^2_{\widetilde{\kappa}}(\omega_i)}^2
\leq (\lambda_{n+1}^{{\mathrm{H}_i}})^{-1}\times 2\Big( {H}^2{{{\rm C}_{\mathrm{poin}}(\omega_i)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}^2+{|u_h|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2\Big).\end{aligned}$$ Finally, taking the square root on both sides shows the desired result.
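The linear-algebra core of this bound — the weighted-$L^2$ norm of a spectral tail is controlled by the energy norm divided by the first discarded eigenvalue — can be sanity-checked on small random matrices. The sketch below verifies only this algebraic inequality for hypothetical stand-ins `A` and `M`; it is not a test of the PDE estimate itself.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
d, n = 30, 5
B = rng.standard_normal((d, d))
A = B @ B.T + d * np.eye(d)            # stands in for the energy (stiffness) matrix
M = np.diag(rng.uniform(1.0, 3.0, d))  # stands in for the weighted mass matrix
lam, V = eigh(A, M)                    # M-orthonormal eigenvectors, eigenvalues ascending

v = rng.standard_normal(d)
coeff = V.T @ (M @ v)                  # expansion coefficients (v, v_j)_M
tail = v - V[:, :n] @ coeff[:n]        # v minus its rank-n projection

lhs = tail @ (M @ tail)                # squared weighted L2 norm of the tail
rhs = (v @ (A @ v)) / lam[n]           # squared energy norm over lambda_{n+1}
assert lhs <= rhs + 1e-9
```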
Note that for all $\mathbb{N}_{+}\ni n\leq L_i$, both approximations $u^{i}_{\text{snap}}$ and $u^i_n$ are $\kappa$-harmonic functions. Thus, we can apply the argument in the proof of to get the following local energy error estimate.
\[lem:5.2\] Let $u^{i}_{{\rm snap}}\in {V^{\mathrm{H}_{i}}_{\text{\rm snap}}}$ and $u^i_n\in V_{{\rm off}}^{{\mathrm{H}_i},n}$ be defined in and for all $\mathbb{N}_{+}\ni n\leq L_i$. Then there holds $$\begin{aligned}
\int_{\omega_i}\chi_i^2\kappa |\nabla (u^{i}_{{\rm snap}}-u^i_n) |^2{\mathrm{d}x}&\leq 4H^{-2}\int_{\omega_i}\widetilde{\kappa} (u^{i}_{{\rm snap}}-u^i_n)^2\,{\mathrm{d}x}.\end{aligned}$$
The proof is analogous to that for , and thus omitted.
With the help of local estimates presented in Lemmas \[lem:5.1\] and \[lem:5.2\], we can now bound the energy error for the POD method by means of the partition of unity FEM [@melenk1996partition Theorem 2.1].
\[lem:4.6\] Assume that $f\in L^2_{\kappa^{-1}}(D)$. For all $\mathbb{N}_{+}\ni \ell_i\leq L_i$, denote $V_{{\rm snap}}\ni w_{{\rm snap}}:
=\sum_{i=1}^{N}\chi_i u^{i}_{{\rm snap}}$ and $V_{{\rm off}}^{\text{H}}\ni w_{{\rm off}}^{{\rm H}}
:=\sum_{i=1}^{N}\chi_i u^{i}_{\ell_i}$. Then there holds $$\begin{aligned}
{|w_{{\rm snap}}-w_{{\rm off}}^{{\rm H}}|_{H^{1}_{\kappa}{\left( D \right)}}}\leq \sqrt{20C_{{\rm ov}}}&\max_{i=1,\cdots,N}
\Big\{{(H^{2}\lambda_{\ell_i+1}^{{\mathrm{H}_i}})^{-1/2}}\Big\}
C_1{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}},\end{aligned}$$ where the constant $C_1$ is given by $C_1:=H\max_{i=1,\cdots,N}\big\{\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\big\}+2{\rm diam}(D)\sqrt{{{\rm C}_{\mathrm{poin}}(D)}}.$
An argument similar to leads to $$\begin{aligned}
{|w_{\text{snap}}-w_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2\leq 2\sum\limits_{i=1}^{N}\Big( H^{-2}\int_{\omega_i}\widetilde{\kappa}|u^{i}_{\text{snap}}-u^i_{\ell_i}|^2{\mathrm{d}x}+\int_{\omega_i}\chi_i^2\kappa
|\nabla (u^{i}_{\text{snap}}-u^i_{\ell_i})|^2{\mathrm{d}x}\Big).\end{aligned}$$ Together with Lemma \[lem:5.2\], we obtain $$\begin{aligned}
{|w_{\text{snap}}-w_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2\leq 10H^{-2}\sum\limits_{i=1}^{N} \int_{\omega_i}\widetilde{\kappa}|u^{i}_{\text{snap}}-u^i_{\ell_i}|^2{\mathrm{d}x}.\end{aligned}$$ Then from Lemma \[lem:5.1\], we deduce $$\begin{aligned}
{|w_{\text{snap}}-w_{\text{off}}^{\text{H}}|_{H^{1}_{\kappa}{\left( D \right)}}}^2\leq 20 \max_{i=1,\cdots,N}\{{(H^{2}\lambda_{\ell_i+1}^{{\mathrm{H}_i}})^{-1}}\}\sum\limits_{i=1}^{N}\Big( {H}^2{{{\rm C}_{\mathrm{poin}}(\omega_i)}}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( \omega_i \right)}}}^2+{|u_h|_{H^{1}_{\kappa}{\left( \omega_i \right)}}}^2\Big).\end{aligned}$$ Finally, the overlap condition together with shows the desired assertion.
Finally, we derive an error estimate for the CG approximation to Problem based on the discrete POD multiscale space $V_{\text{off}}^{\text{H}}$.
\[prop:Finalpod\] Assume that $f\in L^2_{{\kappa}^{-1}}(D)$ and $\ell_i\in \mathbb{N}_{+}$ for all $i=1,2,\cdots, N$. Let $u\in V$ and $u_{{\rm off}}^{{\rm H}}\in V_{{\rm off}}^{{\rm H}}$ be the solutions to Problems and , respectively. Then there holds $$\begin{aligned}
\label{eq:podErr}
{|u-u_{{\rm off}}^{{\rm H}}|_{H^{1}_{\kappa}{\left( D \right)}}}&\leq \sqrt{2C_{{\rm ov}}}H\max_{i=1,\cdots,N}\Big\{ C_0H{{\rm C}_{\mathrm{poin}}(\omega_i)}+\sqrt{{{\rm C}_{\mathrm{poin}}(\omega_i)}}\Big\}{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}\\
&+\sqrt{20C_{{\rm ov}}}\max_{i=1,\cdots,N}\Big\{{(H^{2}\lambda_{\ell_i+1}^{{\mathrm{H}_i}})^{-\frac12}}\Big\} C_1{{\left\|f\right\|}_{L^2_{{\kappa}^{-1}}{\left( D \right)}}}
+\min\limits_{v_h\in V_h}{|u-v_h|_{H^{1}_{\kappa}{\left( D \right)}}}. \nonumber\end{aligned}$$
This assertion follows from the Galerkin orthogonality property [@MR2373954 Corollary 2.5.10], the triangle inequality and the fine-scale [*a priori*]{} estimate , Proposition \[prop:FinalSnap\] and Lemma \[lem:4.6\].
Since the discrete eigenvalue problem is generated from the continuous eigenvalue problem with finite ensembles $\{\phi_j^{{\mathrm{H}_i}}\}_{j=1}^{L_i}$, a scaling argument shows $$\begin{aligned}
H^{2}\lambda_{n}^{{\mathrm{H}_i}}\to \infty \quad\text{ as } n\to \infty \text{ and } h\to 0.\end{aligned}$$ This and imply the convergence of the POD solution $u_{\rm off}^{\rm H}$ in the energy norm.
Concluding remarks {#sec:conclusion}
==================
In this paper, we have analyzed three types of multiscale methods in the framework of the generalized multiscale finite element methods (GMsFEMs) for elliptic problems with heterogeneous high-contrast coefficients. Their convergence rates in the energy norm are derived under a very mild assumption on the source term, and are given in terms of the eigenvalues and coarse grid mesh size. It is worth pointing out that the analysis does not rely on any oversampling technique that is typically adopted in existing studies. The analysis indicates that the eigenvalue decay behavior of eigenvalue problems with high-contrast heterogeneous coefficients is crucial for the convergence behavior of the multiscale methods, including the GMsFEM. This motivates further investigations on such eigenvalue problems in order to gain a better mathematical understanding of these methods. Some partial findings along this line have been presented in the work [@li2017low], however, much more work remains to be done.
Acknowledgements {#acknowledgements .unnumbered}
================
The work was partially supported by the Hausdorff Center for Mathematics, University of Bonn, Germany. The author acknowledges the support from the Royal Society through a Newton international fellowship, and thanks Eric Chung (Chinese University of Hong Kong), Juan Galvis (Universidad Nacional de Colombia, Colombia), Michael Griebel (University of Bonn, Germany) and Daniel Peterseim (University of Augsburg, Germany) for fruitful discussions on the topic of the paper.
Very-weak solutions to boundary-value problems with high-contrast heterogeneous coefficients
============================================================================================
In this appendix, we derive a weighted $L^2$ estimate for boundary value problems with high-contrast heterogeneous coefficients, which plays a crucial role in the error analysis. Let Assumption \[ass:coeff\] hold and let $\omega_i$ be a coarse neighborhood for any $i=1,\cdots,N$. For any $g\in L^2(\partial\omega_i)$, we define the following elliptic problem $$\label{eq:pde-very}
\left\{\begin{aligned}
-\nabla\cdot(\kappa\nabla v)&=0 && \text{ in } \omega_i,\\
v&=g &&\text{ on }\partial \omega_i.
\end{aligned}\right.$$ Our goal is to derive an weighted $L^2$ estimate of the solution $v$, which is independent of the high-contrast in the coefficient $\kappa$. To this end, we employ a nonstandard variational form in the spirit of the transposition method [@MR0350177], and seek $v\in L^2(\omega_i)$ such that $$\begin{aligned}
\label{eq:nonstd-variational}
-\int_{\omega_i}v\nabla\cdot(\kappa\nabla z){\mathrm{d}x}=-\int_{\partial \omega_i}g\kappa\frac{\partial z}{\partial n}\mathrm{d}s \quad\text{ for all }z\in X(\omega_i).\end{aligned}$$ Here, $X(\omega_i)$ denotes the test space to be defined below. The main difficulty for our setting of piecewise high-contrast coefficient is that the solution has only piecewise $H^2$ regularity, and thus, we cannot directly apply the nonstandard variational form described above. The difficulty is overcome in Theorem \[thm:pw-Regularity\].
\[lem:very-weak\] Assume that $\{\eta_j\}_{j=1}^{m}$ are of comparable magnitude and that ${\eta_{\text{min}}}$ is sufficiently large. Let $g\in L^2(\partial\omega_i)$ and let $v$ be the solution to . Then there exists a constant ${\rm C}_{{\rm weak}}$ independent of the coefficient $\kappa$ such that $${{\left\|v\right\|}_{L^2_{\widetilde{\kappa}}{\left( \omega_i \right)}}}\leq {\rm C}_{{\rm weak}}\|g\|_{L^2(\partial \omega_i)}.$$
To prove it, we need a regularity result based on [@chu2010new; @li2017low].
\[thm:pw-Regularity\] Assume that $\{\eta_j\}_{j=1}^{m}$ are of comparable magnitude and that ${\eta_{\text{min}}}$ is sufficiently large. Let $w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ and let $z\in H^1_{0}(\omega_i)$ be the unique solution to the following weak formulation $$\begin{aligned}
\label{eq:aux-z}
\forall q\in H^1_{0}(\omega_i): \int_{\omega_i}\kappa\nabla z\cdot\nabla q\;{\mathrm{d}x}=\int_{\omega_i}wq\;{\mathrm{d}x}.\end{aligned}$$ Then for some constant ${\rm C}_{{\rm weak}} $ independent of the contrast, there holds $$\begin{aligned}
\|\eta_j\frac{\partial z}{\partial n}\|_{L^2(\partial\omega_i\cap D_j)}
&\leq{{\rm C}_{\mathrm{weak}}}{{\left\|w\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}\quad\text{ for all } j=0,1,\cdots,m.
$$
The triangle inequality, Poincaré inequality, and [@li2017low Eqn. (6.2) and Proposition 6.7] imply $$\label{eq:H1-estimate}
\begin{aligned}
{{\left\|z\right\|}_{H^{1}{\left( \omega_i\cap D_0 \right)}}}&\lesssim {{\rm C}_{\mathrm{poin}}(\omega_i\cap D_0)}\|w\|_{L^2(\omega_i)},\\
{{\left\|z\right\|}_{H^{1}{\left( \omega_i\cap D_j \right)}}}&\lesssim {\eta_{\text{min}}}^{-1}{{\rm C}_{\mathrm{poin}}(\omega_i\cap D_0)}\|w\|_{L^2(\omega_i)},\quad\text{ for } j=1,2,\cdots,m.
\end{aligned}$$ Note that the $H^2$ seminorm regularity result in [@chu2010new Theorem B.1] does not depend on the distance between $\partial \omega_i$ and $D_j$ for any $j=1,\cdots,m$. Therefore, it can be extended to our situation directly: $$\begin{aligned}
|{z}|_{H^2(\omega_i\cap D_j)}&\lesssim {\eta_{\text{min}}}^{-1}\|w\|_{L^2(\omega_i)} \quad\text{ for } j=0,1,\cdots,m .\end{aligned}$$ Combining the preceding two estimates and applying interpolation between $H^1(\omega_i)$ and $H^2(\omega_i)$ yield the $H^{3/2}(\omega_i)$ regularity estimate $$\begin{aligned}
\label{eq:h3/2}
{{\left\|z\right\|}_{H^{3/2}{\left( \omega_i\cap D_j \right)}}}\lesssim {\eta_{\text{min}}}^{-1}\|w\|_{L^2(\omega_i)}.\end{aligned}$$ Furthermore, since $w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i)\subset L^2(\omega_i)$, by definition, we can obtain $$\begin{aligned}
{{\left\|w\right\|}_{L^2{\left( \omega_i \right)}}}^2&=\int_{\omega_i}w^2{\mathrm{d}x}=\sum\limits_{j=0}^{m}\int_{\omega_i\cap D_j}
w^2{\mathrm{d}x}\nonumber\\
&\leq\sum\limits_{j=0}^{m}\eta_{j}{{\left\|w\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i\cap D_j \right)}}}^2
\lesssim {\eta_{\text{min}}}{{\left\|w\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}^2.\end{aligned}$$ This, together with , proves $$\begin{aligned}
\label{eq:h3/2ii}
{{\left\|z\right\|}_{H^{3/2}{\left( \omega_i\cap D_j \right)}}}\lesssim {\eta_{\text{min}}}^{-1/2}{{\left\|w\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}.\end{aligned}$$ Since differentiation is continuous from $H^{3/2}(\omega_i)$ to $H^{1/2}(\omega_i)$, by the trace theorem, we have $$\begin{aligned}
\|\frac{\partial z}{\partial n}\|_{L^2(\partial\omega_i\cap D_j)}&\lesssim \|\frac{\partial z}{\partial n}\|_{H^{1/2}(\omega_i\cap D_j)}\lesssim {{\left\|z\right\|}_{H^{3/2}{\left( \omega_i\cap D_j \right)}}},\end{aligned}$$ which, together with , proves the desired assertion.
Next we define a Lions-type variational formulation for Problem when ${\eta_{\text{min}}}$ is large [@MR0350177 Section 6, Chapter 2]. To this end, let the test space $X(\omega_i)\subset H^1_{\kappa,0}(\omega_i)$ be defined by $$\begin{aligned}
\label{eq:test-space}
X(\omega_i):=\{z:-\nabla\cdot(\kappa\nabla z)\in L^2(\omega_i)\text{ and } z\in H^1_{\kappa,0}(\omega_i)\}.\end{aligned}$$ This test space $X(\omega_i)$ is endowed with the norm $\|\cdot\|_{X(\omega_i)}$: $$\forall z\in X(\omega_i):\|z\|_{X(\omega_i)}^2=\int_{\omega_i}\kappa|\nabla z|^2{\mathrm{d}x}+\|\nabla\cdot(\kappa\nabla z)\|_{L^2(\omega_i)}^2.$$ Below, we denote by $n_{i}(x)$ the unit outward normal (relative to $D_i$) to the interface $\Gamma_i$ at the point $x\in \Gamma_i$. For a function $w$ defined on $\mathbb{R}^2\backslash
\Gamma_i$ for $i=1,2,\cdots,m$, we define for any $x\in \Gamma_i$, $$w(x)|_{\pm}:=\lim_{t\to 0^{+}} w(x\pm tn_{i}(x))\quad \text{ and }\quad
\frac{\partial}{\partial n_{i}^{\pm}}w(x):=\lim_{t\to 0^{+}}(\nabla w(x\pm tn_{i}(x))\cdot n_{i}(x))$$ if the limit on the right hand side exists.
Let $v$ be the solution to problem and let the test space $X(\omega_i)$ be defined in . Then the nonstandard variational form is well posed.
For all $z\in X(\omega_i)$, let $w:=-\nabla\cdot(\kappa\nabla z)$, then by definition, $w\in L^2(\omega_i)$. Recall the continuity of the flux implied by the definition, i.e., $$\begin{aligned}
\label{eq:flux}
\forall z\in X(\omega_i):\quad\eta_j\frac{\partial z}{\partial n_j^{-}}=\frac{\partial z}{\partial n_j^{+}}\quad\text{ for all } j=1,\cdots,m.\end{aligned}$$ For all $z\in X(\omega_i)$, we obtain $$\begin{aligned}
\int_{\omega_i}-\nabla\cdot(\kappa\nabla v)z\;{\mathrm{d}x}&=\sum_{j=0}^{m}\int_{\omega_i\cap D_j}-\nabla\cdot(\kappa\nabla v)z\;{\mathrm{d}x}=\sum_{j=0}^{m}\int_{\omega_i\cap D_j}\Big(-\nabla\cdot(\kappa\nabla v z)+\kappa\nabla z\cdot\nabla v\Big)\;{\mathrm{d}x}\\
&=\Big(\int_{\partial D_0\backslash\partial\omega_i}\kappa\frac{\partial v}{\partial n_{j}^{+}} z\mathrm{d}s-\sum_{j=1}^{m}\int_{\partial D_j\backslash\partial\omega_i}\kappa\frac{\partial v}{\partial n_{j}^{-}} z\mathrm{d}s\Big)+\sum_{j=0}^{m}\int_{\omega_i\cap D_j}\kappa\nabla z\cdot\nabla v\;{\mathrm{d}x}.\end{aligned}$$ The continuity of the flux for $v$ shows that the sum of the first two terms vanishes. We apply the divergence theorem again, together with the continuity of flux for $z$, and derive $$\begin{aligned}
\int_{\omega_i}-\nabla\cdot(\kappa\nabla v)z\;{\mathrm{d}x}&=
\sum_{j=0}^{m}\int_{\omega_i\cap D_j}\kappa\nabla z\cdot\nabla v\;{\mathrm{d}x}=\sum_{j=0}^{m}\int_{\omega_i\cap D_j}\nabla\cdot(\kappa\nabla z v)-\nabla\cdot(\kappa\nabla z)v\;{\mathrm{d}x}\\
&=\Big(-\int_{\partial D_0\backslash\partial\omega_i}\kappa\frac{\partial z}{\partial n_{j}^{+}} v\mathrm{d}s+\sum_{j=1}^{m}\int_{\partial D_j\backslash\partial\omega_i}\kappa\frac{\partial z}{\partial n_{j}^{-}} v\mathrm{d}s\Big)
+\int_{\partial\omega_i}\kappa\frac{\partial z}{\partial n} g\mathrm{d}s\\
&-\int_{\omega_i}\nabla\cdot(\kappa\nabla z)v\;{\mathrm{d}x}.\end{aligned}$$ The continuity of flux indicates that the first term vanishes, and this proves .
To prove the well-posedness of the nonstandard variational form , we introduce a bilinear form $c(\cdot,\cdot)$ on $L^2_{\widetilde{\kappa}}(\omega_i)\times L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$ and a linear form $b(\cdot)$ on $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$, defined by $$\begin{aligned}
c(w_1,w_2)&:=\int_{\omega_i}w_1 w_2\;{\mathrm{d}x}\quad\text{ for all } w_1\in L^2_{\widetilde{\kappa}}(\omega_i)\text{ and }w_2\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i),\\
b(w)&:=\int_{\partial\omega_i}\kappa\frac{\partial z}{\partial n} g\;\mathrm{d}s\quad
\text{ for all }w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i),\end{aligned}$$ with $z$ being the unique solution to . It follows from Theorem \[thm:pw-Regularity\] that $$\begin{aligned}
\label{eq:b}
\|b\|:=\sup_{w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i)}\frac{b(w)}{\|w\|_{L^2_{\widetilde{\kappa}^{-1}}(\omega_i)}}\leq \text{C}_{\text{weak}}\|g\|_{L^2(\partial\omega_i)}.\end{aligned}$$ This implies that $b$ lies in the dual space of $L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$. Since the dual space of $L_{\widetilde{\kappa}^{-1}}^2(\omega_i)$ is $L^2_{\widetilde{\kappa}}(\omega_i)$, cf. Remark \[rem:dual\], this yields well-posedness of the following variational problem: find $v\in L^2_{\widetilde{\kappa}}(\omega_i)$ such that $$\begin{aligned}
\label{eq:variation2}
c(v,w)=b(w)\quad\text{ for all }w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i).\end{aligned}$$ The equivalence of problems and implies the desired well-posedness of .
Finally, we are ready to prove Theorem \[lem:very-weak\].
For all $w\in L^2_{\widetilde{\kappa}^{-1}}(\omega_i)$, we obtain from and $$\begin{aligned}
\int_{\omega_i} v w\;{\mathrm{d}x}=c(v,w)=b(w)\leq \text{C}_{\text{weak}}{{\left\|w\right\|}_{L^2_{\widetilde{\kappa}^{-1}}{\left( \omega_i \right)}}}\|g\|_{L^2(\partial\omega_i)}.\end{aligned}$$ Since $(L_{\widetilde{\kappa}^{-1}}^2(\omega_i))^*=L^2_{\widetilde{\kappa}}(\omega_i)$, cf. Remark \[rem:dual\], we get the desired assertion. This completes the proof.
[^1]: Department of Mathematics, Imperial College London, London SW7 2AZ, UK. The work was partially carried out when the author was affiliated with Institut für Numerische Simulation and Hausdorff Center for Mathematics, Universität Bonn, Wegelerstra[ß]{}e 6, D-53115 Bonn, Germany. (`lotusli0707@gmail.com`, `guanglian.li@imperial.ac.uk`).
[^2]: We thank Richard S. Laugesen (University of Illinois, Urbana-Champaign) for clarifying the convergence in $H^1_{\kappa}(\omega_i)$.
---
abstract: 'We study the dynamics of overdamped Brownian particles diffusing in conservative force fields and undergoing stochastic resetting to a given location with a generic space-dependent rate of resetting. We present a systematic approach involving path integrals and elements of renewal theory that allows to derive analytical expressions for a variety of statistics of the dynamics such as (i) the propagator prior to first reset; (ii) the distribution of the first-reset time, and (iii) the spatial distribution of the particle at long times. We apply our approach to several representative and hitherto unexplored examples of resetting dynamics. A particularly interesting example for which we find analytical expressions for the statistics of resetting is that of a Brownian particle trapped in a harmonic potential with a rate of resetting that depends on the instantaneous energy of the particle. We find that using energy-dependent resetting processes is more effective in achieving spatial confinement of Brownian particles on a faster timescale than by performing quenches of parameters of the harmonic potential.'
author:
- 'Édgar Roldán[^1]'
- Shamik Gupta
title: |
Path-integral formalism for stochastic resetting:\
Exactly solved examples and shortcuts to confinement
---
Introduction
============
Changes are inevitable in nature, and those that are most dramatic, with often drastic consequences, are the ones that occur [*all of a sudden*]{}. A particular class of such changes comprises those in which the system during its temporal evolution makes a sudden jump (a “reset”) to a fixed state or configuration. Many nonequilibrium processes are encountered across disciplines, e.g., in physics, biology, and information processing, which involve sudden transitions between different states or configurations. The erasure of a bit of information [@Landauer:1961; @Bennett:1973] by mesoscopic machines may be thought of as a physical process in which a memory device that is strongly affected by thermal fluctuations resets its state (0 or 1) to a prescribed erasure state [@Berut:2012; @Mandal:2012; @Roldan:2014; @Koski:2014; @Fuchs:2016]. In biology, resetting plays an important role inter alia in sensing of extracellular ligands by single cells [@Mora:2015], and in transcription of genetic information by macromolecular enzymes called RNA polymerases [@Roldan:2016]. During RNA transcription, the recovery of RNA polymerases from inactive transcriptional pauses is a result of a kinetic competition between diffusion and resetting of the polymerase to an active state via RNA cleavage [@Roldan:2016], as has been recently tested in high-resolution single-molecule experiments [@Lisica:2016]. Also, there are ample examples of biochemical processes that initiate (i.e., reset) at random so-called [*stopping*]{} times [@Gillespie:2014; @Hanggi:1990; @Neri:2017], with the initiation at each instance occurring in different regions of space [@Julicher:1997]. In addition, interactions play a key role in determining when and where a chemical reaction occurs [@Gillespie:2014], a fact that affects the statistics of the resetting process. For instance, in the above mentioned example of recovery of RNA polymerase by the process of resetting, the interaction of the hybrid DNA-RNA may alter the time that a polymerase takes to recover from its inactive state [@Zamft:2012]. It is therefore quite pertinent and timely to study resetting of mesoscopic systems that evolve under the influence of external or conservative force fields.
Simple diffusion subject to resetting to a given location at random times has emerged in recent years as a convenient theoretical framework to discuss the phenomenon of stochastic resetting [@Evans:2011-1; @Evans:2011-2; @Evans:2014; @Christou:2015; @Eule:2016; @Nagar:2016]. The framework has later been generalized to consider different choices of the resetting position [@Boyer:2014; @Majumdar:2015-2], resetting of continuous-time random walks [@Montero:2013; @Mendez:2016], Lévy [@Kusmierz:2014] and exponential constant-speed flights [@Campos:2015], time-dependent resetting of a Brownian particle [@Pal:2016], and in discussing memory effects [@Boyer:2017] and phase transitions in reset processes [@Harris:2017]. Stochastic resetting has also been invoked in the context of many-body dynamics, e.g., in reaction-diffusion models [@Durang:2014], fluctuating interfaces [@Gupta:2014; @Gupta:2016], interacting Brownian motion [@Falcao:2017], and in discussing optimal search times in a crowded environment [@Kusmierz:2015; @Reuveni:2016; @Bhat:2016; @Pal:2017]. However, little is known about the statistics of stochastic resetting of Brownian particles that diffuse under the influence of force fields [@Pal:2015], and that too in presence of a rate of resetting that varies in space.
![image](fig1.pdf){width="90.00000%"}
ł[fig:fig1]{}
In this paper, we study the dynamics of overdamped Brownian particles immersed in a thermal environment, which diffuse under the influence of a force field, and whose position may be stochastically reset to a given spatial location with a rate of resetting that has an essential dependence on space. We use an approach that allows to obtain exact expressions for the transition probability prior to the first reset, the first reset-time distribution, and, most importantly, the stationary spatial distribution of the particle. The approach is based on a combination of the theory of renewals [@Cox:1962] and the Feynman-Kac path-integral formalism of treating stochastic processes [@Feynman:2010; @Schulman:1981; @Kac:1949; @Kac:1951], and consists in a mapping of the dynamics of the Brownian resetting problem to a suitable quantum mechanical evolution in imaginary time. We note that the Feynman-Kac formalism has been applied extensively in the past to discuss dynamical processes involving diffusion [@satya], and has to the best of our knowledge not been applied to discuss stochastic resetting. To demonstrate the utility of the approach, we consider several different stochastic resetting problems, see Fig. \[fig:fig1\]: i) Free Brownian particles subject to a space-independent rate of resetting (Fig. \[fig:fig1\]a)); ii) Free Brownian particles subject to resetting with a rate that depends quadratically on the distance to the origin (Fig. \[fig:fig1\]b)); and iii) Brownian particles trapped in a harmonic potential and undergoing reset events with a rate that depends on the energy of the particle (Fig. \[fig:fig1\]c)). In this paper, we consider for purposes of illustration the corresponding scenarios in one dimension, although our general approach may be extended to higher dimensions. Remarkably, we obtain exact analytical expressions in all cases, and, notably, in cases ii) and iii), where a standard treatment of analytic solution by using the Fokker-Planck approach may appear daunting, and whose relevance in physics may be explored in the context of, e.g., optically-trapped colloidal particles and hopping processes in glasses and gels. We further explore the dynamical properties of case iii), and compare the relaxation properties of dynamics corresponding to potential energy quenches and due to sudden activation of space-dependent stochastic resetting.
General formalism
=================
ł[sec:general-formalism]{}
Model of study: resetting of Brownian particles diffusing in force fields
-------------------------------------------------------------------------
ł[sec:model]{}
Consider an overdamped Brownian particle diffusing in one dimension $x$ in presence of a time-independent force field $F(x)=-\partial_x V(x)$, with $V(x)$ denoting a potential energy landscape. The dynamics of the particle is described by a Langevin equation of the form $$\frac{{\rm d}x}{{\rm d}t}=\mu F(x)+\eta(t), \label{eq:eom}$$ where $\mu$ is the mobility of the particle, defined as the velocity per unit force. In Eq. (\[eq:eom\]), $\eta(t)$ is a Gaussian white noise, with the properties $$\langle \eta(t)\rangle=0, \qquad \langle \eta(t)\eta(t')\rangle=2D\delta(t-t'),$$ where $\langle\, \cdot\, \rangle$ denotes average over noise realizations, and $D\! >\! 0$ is the diffusion coefficient of the particle, with the dimension of length-squared over time. We assume that the Einstein relation holds: $D=k_{\rm B}T \mu$, with $T$ being the temperature of the environment, and with $k_{\rm B}$ being the Boltzmann constant. In addition to the dynamics (\[eq:eom\]), the particle is subject to a stochastic resetting dynamics with a space-dependent resetting rate $r(x)$, whereby, while at position $x$ at time $t$, the particle in the ensuing infinitesimal time interval ${\rm d}t$ either follows the dynamics (\[eq:eom\]) with probability $1-r(x){\rm d}t$, or resets to a given reset destination $x^{(\rm r)}$ with probability $r(x){\rm d}t$. Our analysis holds for any arbitrary reset function $r(x)$, with the only obvious constraint $r(x) \ge 0~\forall\,x$; moreover, the formalism may be generalized to higher dimensions. In the following, we consider the reset location to be the same as the initial location $x_0$ of the particle, that is, $x^{(\rm r)}=x_0$.
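For readers who wish to explore the dynamics numerically, the resetting Langevin dynamics defined above is straightforward to simulate. The following minimal Python sketch (not part of the analysis of this paper; the chosen force, resetting rate, and parameter values are merely illustrative assumptions) implements an Euler discretization of Eq. (\[eq:eom\]) supplemented by the resetting move:

```python
import numpy as np

def simulate(F, r, x0=0.0, D=1.0, mu=1.0, dt=1e-3, t_max=10.0, rng=None):
    """Trajectory of the overdamped dynamics with a space-dependent resetting rate r(x)."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(t_max / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * D * dt)              # std of the Gaussian increment
    for i in range(n_steps):
        if rng.random() < r(x[i]) * dt:            # reset with probability r(x) dt
            x[i + 1] = x0                          # reset destination x^(r) = x0
        else:                                      # otherwise dx = mu F dt + noise
            x[i + 1] = x[i] + mu * F(x[i]) * dt + noise_amp * rng.standard_normal()
    return x

# hypothetical example: harmonic force with a resetting rate quadratic in position
kappa = 1.0
traj = simulate(F=lambda x: -kappa * x, r=lambda x: 0.75 * kappa**2 * x**2)
print(traj.mean(), traj.var())
```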
A quantity of obvious interest and relevance is the spatial distribution of the particle: What is the probability $P(x,t|x_0,0)$ that the particle is at position $x$ at time $t$, given that its initial location is $x_0$? From the dynamics given in the preceding paragraph, it is straightforward to write the time evolution equation of $P(x,t|x_0,0)$: $$\begin{aligned}
\frac{\partial P(x,t|x_0,0)}{\partial t}&=&-\mu\frac{\partial [F(x)P(x,t|x_0,0)]}{\partial x}+D\frac{\partial^2 P(x,t|x_0,0)}{\partial x^2}\nonumber \\
&&-r(x)P(x,t|x_0,0)+\int {\rm d}y~ r(y)P(y,t|x_0,0)\,\delta(x-x_0), \label{eq:timevolution}\end{aligned}$$ where the first two terms on the right hand side account for the contribution from the diffusion of the particle in the force field $F(x)$, while the last two terms stand for the contribution owing to the resetting of the particle: the third term represents the loss in probability arising from the resetting of the particle to $x_0$, while the fourth term denotes the gain in probability at the location $x_0$ owing to resetting from all locations $x \ne x_0$. When it exists, the stationary distribution $P_{\rm st}(x|x_0)$ satisfies $$\begin{aligned}
0&=&-\mu\frac{\partial [F(x)P_{\rm st}(x|x_0)]}{\partial x}+D\frac{\partial^2 P_{\rm st}(x|x_0)}{\partial x^2}\nonumber \\
&&-r(x)P_{\rm st}(x|x_0)+\int {\rm d}y~ r(y)P_{\rm st}(y|x_0)\,\delta(x-x_0). \label{eq:stationary}\end{aligned}$$ It is evident that solving for either the time-dependent distribution $P(x,t|x_0,0)$ or the stationary distribution $P_{\rm st}(x|x_0)$ from Eqs. (\[eq:timevolution\]) and (\[eq:stationary\]), respectively, is a formidable task even with $F=0$, unless the function $r(x)$ has simple forms. For example, in Ref. [@Evans:2011-2], the authors considered a solvable example with $F(x)=0$, where the function $r(x)$ is zero in a window around $x_0$ and is constant outside the window.
In this work, we employ a different approach to solve for the stationary spatial distribution, by invoking the path integral formalism of quantum mechanics and by using elements of the theory of renewals. In this approach, we compute $P_{\rm st}(x|x_0)$, the stationary distribution [*in presence of reset events*]{}, in terms of suitably-defined functions that take into account the occurrence of trajectories that evolve [*without undergoing any reset events*]{} in a given time, see Eq. (\[eq:Pxstat-final\]) below. This approach provides a viable alternative to obtaining the stationary spatial distribution by solving the Fokker-Planck equation (\[eq:stationary\]) that explicitly takes into account the occurrence of trajectories that evolve [*while undergoing reset events*]{} in a given time. As we will demonstrate below, the method allows to obtain exact expressions even in cases with nontrivial forms of $F(x)$ and $r(x)$.
Path-integral approach to stochastic resetting
----------------------------------------------
ł[sec:quantities-of-interest]{}
Here, we invoke the well-established path-integral approach based on the Feynman-Kac formalism to discuss stochastic resetting. To proceed, let us first consider a representation of the dynamics in discrete times $t_i=i\Delta t$, with $i=0,1,2,\ldots$, and $\Delta t>0$ being a small time step. The dynamics in discrete times involves the particle at position $x_i$ at time $t_i$ to either reset and be at $x^{(\rm r)}$ at the next time step $t_{i+1}$ with probability $r(x_i)\Delta t$ or follow the dynamics given by Eq. (\[eq:eom\]) with probability $1-r(x_i)\Delta t$. The position of the particle at time $t_i$ is thus given by $$x_i = \begin{cases}
x_{i-1}+\Delta t\left(\mu\bar{F}(x_i)+\eta_i\right) & {\rm with~prob.~}1-r(x_{i-1})\Delta t,\\
x^{({\rm r})} & {\rm with~prob.~} r(x_{i-1})\Delta t,
\end{cases} \label{eq:dynamics}$$ where we have defined $\bar{F}(x_i)\equiv (F(x_{i-1})+F(x_i))/2$, and have used the Stratonovich rule in discretizing the dynamics (\[eq:eom\]), and where the time-discretized Gaussian, white noise $\eta_i$ satisfies $\langle\eta_i\eta_j\rangle=\sigma^2\delta_{ij}$, with $\sigma^2$ a positive constant with the dimension of length-squared over time-squared. In particular, the joint probability distribution of occurrence of a given realization $\{\eta_i\}_{1\le i\le N}$ of the noise, with $N$ being a positive integer, is given by $$P[\{\eta_i\}]=\Big(\frac{1}{2\pi\sigma^2}\Big)^{N/2}\exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\eta_i^{2}\Big). \label{eq:joint-distribution}$$ In the absence of any resetting and forces, the displacement of the particle at time $t\equiv N\Delta t$ from the initial location is given by $\Delta x \equiv x_N-x_0=\Delta t\sum_{i=1}^{N}\eta_i$, so that the mean-squared displacement is $\langle(\Delta x)^{2}\rangle=\sigma^2 N (\Delta t)^2$. In the continuous-time limit, $N\to\infty,\Delta t\to0$, keeping the product $N\Delta t$ fixed and finite and equal to $t$, the mean-squared displacement becomes $\langle(\Delta x)^{2}\rangle=2Dt$, with $D\equiv \lim_{\sigma \to \infty,\Delta t \to 0}\sigma^2 \Delta t/2$.
### The propagator prior to first reset.
ł[sec:propagator]{} What is the probability of occurrence of particle trajectories that start at position $x_0$ and end at a given location $x$ at time $t=N\Delta t$ without having undergone any reset event? From the discrete-time dynamics given by Eq. (\[eq:dynamics\]) and the joint distribution (\[eq:joint-distribution\]), the probability of occurrence of a given particle trajectory $\{x_i\}_{0 \le i \le N}\equiv\{x_0,x_1,x_2,\ldots,x_{N-1},x_N=x\}$ is given by $$\begin{aligned}
&&P_{\rm no\;res}[\{x_i\}]={\rm det}({\cal J})\Big(\frac{1}{2\pi\sigma^2}\Big)^{N/2}\nonumber \\
&&\times\prod_{i=1}^{N}\exp\Big(-\frac{\big(\frac{x_i-x_{i-1}}{\Delta t}-\mu\bar{F}(x_i)\big)^2}{2\sigma^2}\Big)\prod_{i=0}^{N-1}\left(1-r(x_i)\Delta t\right).\end{aligned}$$
Here, the factor $\prod_{i=0}^{N-1}\left(1-r(x_i)\Delta t\right)$ enforces the condition that the particle has not reset at any of the instants $t_i,~i=0,1,2,\ldots,N-1$, while ${\cal J}$ is the Jacobian matrix for the transformation $\{\eta_i\}\rightarrow\{x_i\}$, which is obtained from Eq. (\[eq:dynamics\]) as ${\cal J}_{1\leq i,j\leq N}\equiv\left(\frac{\partial\eta_i}{\partial x_j}\right) $ or equivalently $${\cal J}=\begin{pmatrix}
\frac{1}{\Delta t}-\frac{\mu F'(x_1)}{2} & 0 & 0 & \ldots\\
-\frac{1}{\Delta t}-\frac{\mu F'(x_1)}{2} & \frac{1}{\Delta t}-\frac{\mu F'(x_2)}{2} & 0 & \ldots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}_{N\times N},$$ with primes denoting derivative with respect to $x$. One thus has $$\begin{aligned}
&&{\rm det}({\cal J})=\Big(\frac{1}{\Delta t}\Big)^N \prod_{i=1}^{N}\Big(1-\frac{\mu\Delta t F'(x_i)}{2}\Big)\nonumber \\
&&\approx\Big(\frac{1}{\Delta t}\Big)^{N}\exp\Big(-\frac{\mu\Delta t}{2}\sum_{i=1}^{N}F'(x_i)\Big),\end{aligned}$$ where in obtaining the last step, we have used the smallness of $\Delta t$. Thus, for small $\Delta t$, we get $$\begin{aligned}
&&P_{\rm no\;res}[\{x_i\}]= \Big(\frac{1}{4\pi D\Delta t}\Big)^{N/2}\nonumber \\
&&\times\prod_{i=1}^{N}\exp\Big(-\frac{\Delta t\big(\frac{x_i-x_{i-1}}{\Delta t}-\mu\bar{F}(x_i)\big)^2}{4D}-\frac{\mu\Delta t F'(x_i)}{2}\Big)\nonumber \\
&&\times\prod_{i=0}^{N-1}\exp\left(-r(x_i)\Delta t\right)\nonumber \\
&&=\Big(\frac{1}{4\pi D\Delta t}\Big)^{N/2}\nonumber \\
&&\times\exp\Big(-\Delta t\sum_{i=1}^{N}\Big[\frac{\big(\frac{x_i-x_{i-1}}{\Delta t}-\mu\bar{F}(x_i)\big)^2}{4D}+\frac{\mu F'(x_i)}{2}+r(x_{i-1})\Big]\Big).
\label{eq:path-0}\end{aligned}$$
From Eq. (\[eq:path-0\]), it follows by considering all possible trajectories that the probability density that the particle while starting at position $x_0$ ends at a given location $x$ at time $t=N\Delta t$ without having undergone any reset event is given by $$\begin{aligned}
&&P_{\rm no\;res}(x,t|x_0,0)=\Big(\frac{1}{4\pi D\Delta t}\Big)^{N/2}\prod_{i=1}^{N-1}\int_{-\infty}^{\infty}{\rm d}x_i\nonumber \\
&&\times\exp\Big(-\Delta t\sum_{i=1}^{N}\Big[\frac{\big(\frac{x_i-x_{i-1}}{\Delta t}-\mu\bar{F}(x_i)\big)^2}{4D}+\frac{\mu F'(x_i)}{2}+r(x_{i-1})\Big]\Big).\end{aligned}$$ In the limit of continuous time, defining ${\cal D}x(t)\equiv\lim_{N\to\infty}\Big(\frac{1}{4\pi D\Delta t}\Big)^{N/2}\prod_{i=1}^{N-1}\int_{-\infty}^{\infty}{\rm
d}x_i,$ one gets the exact expression for the corresponding probability density as the following path integral: $$P_{\rm no\;res}(x,t|x_0,0)= \int_{x(0)=x_0}^{x(t)=x}{\cal D}x(t)\,\exp\left(-S_{\rm res}[\{x(t)\}]\right), \label{eq:P-no-reset-0}$$
where on the right hand side of Eq. (\[eq:P-no-reset-0\]), we have introduced the [*resetting action*]{} as $$S_{\rm res}[\{x(t)\}]=\int_0^{t}{\rm d}t'\Big[\frac{\left(\dot{x}-\mu F(x)\right)^2}{4D}+\frac{\mu F'(x)}{2}+r(x)\Big].$$ Invoking the Feynman-Kac formalism, we identify the path integral on the right hand side of Eq. (\[eq:P-no-reset-0\]) with the propagator of a quantum mechanical evolution in (negative) imaginary time due to a quantum Hamiltonian $H_{\rm q}$ (see Appendix), to get $$P_{\rm no\;res}(x,t|x_0,0)=\exp\Big(\frac{\mu}{2D} \int_{x_0}^x F(x')\, {\rm d}x'\Big)G_{\rm q}(x,-it|x_0,0), \label{eq:pnoresq}$$ with $$G_{\rm q}(x,-it|x_0,0)\equiv\langle x|\exp(-H_{\rm q}t)|x_0\rangle, \label{eq:qma}$$ where the quantum Hamiltonian is $$H_{\rm q}\equiv-\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2}+V_{\rm q}(x), \label{eq:quantum-hamiltonian}$$ the mass in the equivalent quantum problem is $$m_{\rm q}\equiv\frac{1}{2D},$$ and the quantum potential is given by $$V_{\rm q}(x)\equiv\frac{\mu^2F^2(x)}{4D}+\frac{\mu F'(x)}{2}+r(x). \label{eq:quantum-potential}$$ Note that in the quantum propagator in Eq. (\[eq:qma\]), the Planck’s constant has been set to unity, $\hbar=1$, while the time $\tau$ of propagation is imaginary: $\tau=-it$ [@Wick]. Since the Hamiltonian contains no explicit time dependence, the propagator $G_{\rm q}(x,-it|x_0,0)$ is effectively a function of the time $t$ to propagate from the initial location $x_0$ to the final location $x$, and not individually of the initial and final times. Let us note that on using $D=k_{\rm B}T \mu$, the prefactor equals $\exp\left(-Q(t)/2k_{\rm B}T\right)$, where $Q(t) \equiv \int_{x_0}^{x}
\partial_x V(x)\,{\rm d}x$ is the heat absorbed by the particle from the environment along the trajectory $\{x(t)\}$ [@Sekimoto:1998; @Sekimoto:2000].
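The mapping of Eqs. (\[eq:pnoresq\])-(\[eq:quantum-potential\]) can also be exploited numerically: discretizing $H_{\rm q}$ on a grid and exponentiating it yields the reset-free propagator for essentially arbitrary $F(x)$ and $r(x)$. The sketch below is an assumed finite-difference implementation (grid extent, resolution, and parameter values are arbitrary choices made here for illustration):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import cumulative_trapezoid, trapezoid

def p_no_res(F, r, x0, t, D=1.0, mu=1.0, L=8.0, n=401):
    """P_nores(x,t|x0,0) from the imaginary-time propagator of H_q on a grid."""
    x, dx = np.linspace(-L, L, n, retstep=True)
    Fx = F(x)
    Fp = np.gradient(Fx, dx)                                  # F'(x)
    Vq = mu**2 * Fx**2 / (4 * D) + mu * Fp / 2 + r(x)         # quantum potential
    lap = (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
           - 2 * np.eye(n)) / dx**2                           # finite-difference Laplacian
    Hq = -D * lap + np.diag(Vq)                               # H_q = -D d^2/dx^2 + V_q(x)
    G = expm(-Hq * t)                                         # kernel <x|exp(-H_q t)|x'> times dx
    i0 = np.argmin(np.abs(x - x0))
    intF = cumulative_trapezoid(Fx, x, initial=0.0)
    prefactor = np.exp(mu / (2 * D) * (intF - intF[i0]))      # exp((mu/2D) int_{x0}^x F dx')
    return x, prefactor * G[:, i0] / dx

# hypothetical example: harmonic force and quadratic resetting rate (units mu = D = 1)
x, p = p_no_res(F=lambda x: -x, r=lambda x: 0.75 * x**2, x0=0.0, t=0.5)
print(trapezoid(p, x))    # survival probability up to t = 0.5 (smaller than 1)
```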
### Distribution of the first-reset time
ł[sec:prob-first-reset]{}
Let us now ask for the probability of occurrence of trajectories that start at position $x_0$ and reset for the first time at time $t$. In terms of $P_{\rm no\;res}(x,t|x_0,0)$, one gets this probability density as $$P_{\rm res}(t|x_0)=\int_{-\infty}^{\infty}{\rm d}y~ r(y)P_{\rm no\;res}(y,t|x_0,0), \label{eq:Pt-final}$$ since by the very definition of $P_{\rm res}(t|x_0)$, a reset has to happen only at the final time $t$ when the particle has reached the location $y$, where $y$ may in principle take any value in the interval $[-\infty,\infty]$. The probability density $P_{\rm res}(t|x_0)$ is normalized as $\int_0^\infty {\rm d}t~P_{\rm res}(t|x_0)=1$.
### Spatial time-dependent probability distribution
ł[sec:prob-distr]{} Using renewal theory, we now show that knowing $P_{\rm no\;res}(x,t|x_0,0)$ and $P_{\rm res}(t|x_0)$ is sufficient to obtain the spatial distribution of the particle at any time $t$. The probability density that the particle is at $x$ at time $t$ while starting from $x_0$ is given by $$\begin{aligned}
&&P(x,t|x_0,0)=P_{\rm no\;res}(x,t|x_0,0)\nonumber \\
&&+\int_0^t {\rm d}\tau\int_{-\infty}^{\infty}{\rm d}y~ r(y)P(y,t-\tau|x_0,0)P_{\rm no\;res}(x,t|x_0,t-\tau)\nonumber \\
&&=P_{\rm no\;res}(x,t|x_0,0)\nonumber \\
&& +\int_0^t {\rm d}\tau~ R(t-\tau|x_0)P_{\rm no\;res}(x,t|x_0,t-\tau), \label{eq:Pxt-final}\end{aligned}$$ where we have defined the probability density to reset at time $t$ as $$R(t|x_0)\equiv\int_{-\infty}^{\infty}{\rm d}y~ r(y)P(y,t|x_0,0). \label{eq:ft-definition}$$ One may easily understand Eq. (\[eq:Pxt-final\]) by invoking the theory of renewals [@Cox:1962] and realizing that the dynamics is renewed each time the particle resets to $x_0$. This may be seen as follows. The particle while starting from $x_0$ may reach $x$ at time $t$ without experiencing a single reset; the corresponding contribution to the spatial distribution is given by the first term on the right hand side of Eq. (\[eq:Pxt-final\]). The particle may also reach $x$ at time $t$ by experiencing the last reset event (i.e., the last renewal) at time instant $t-\tau$, with $\tau \in [0,t]$, and then propagating from the reset location $x^{(\rm r)}=x_0$ to $x$ without experiencing any further reset, where the last reset may take place with rate $r(y)$ from any location $y \in [-\infty,\infty]$ where the particle happened to be at time $t-\tau$; such contributions are represented by the second term on the right hand side of Eq. (\[eq:Pxt-final\]). The spatial distribution is normalized as $\int_{-\infty}^\infty {\rm d}x~P(x,t|x_0,0)=1$ for all possible values of $x_0$ and $t$.
Multiplying both sides of Eq. (\[eq:Pxt-final\]) by $r(x)$, and then integrating over $x$, we get $$\begin{aligned}
&&R(t|x_0)=\int_{-\infty}^{\infty}{\rm d}x~ r(x)P_{\rm no\;res}(x,t|x_0,0)\nonumber \\
&&+\int_0^t {\rm d}\tau~ R(t-\tau|x_0)\Big[\int_{-\infty}^{\infty}{\rm d}x~ r(x)P_{\rm no\;res}(x,t|x_0,t-\tau)\Big].\end{aligned}$$
The square-bracketed quantity on the right hand side is nothing but $P_{\rm res}(\tau|x_0)$, so that we get $$R(t|x_0)=P_{\rm res}(t|x_0)+\int_0^t {\rm d}\tau~ P_{\rm res}(\tau|x_0)R(t-\tau|x_0). \label{eq:Rt-definition}$$ Taking the Laplace transform on both sides of Eq. (\[eq:Rt-definition\]), we get $$\widetilde{R}(s|x_0)=\widetilde{P}_{\rm res}(s|x_0)+\widetilde{P}_{\rm res}(s|x_0)\widetilde{R}(s|x_0), \label{eq:Rs-equation}$$ where $\widetilde{R}(s|x_0)$ and $\widetilde{P}_{\rm res}(s|x_0)$ are respectively the Laplace transforms of $R(t|x_0)$ and $P_{\rm res}(t|x_0)$. Solving for $\widetilde{R}(s|x_0)$ from Eq. (\[eq:Rs-equation\]) yields $$\widetilde{R}(s|x_0)=\frac{\widetilde{P}_{\rm res}(s|x_0)}{1-\widetilde{P}_{\rm res}(s|x_0)}. \label{eq:Rs}$$ Next, taking the Laplace transform with respect to time on both sides of Eq. (\[eq:Pxt-final\]), we obtain $$\begin{aligned}
\widetilde{P}(x,s|x_0,0)&=&\left(1+\widetilde{R}(s|x_0)\right)\widetilde{P}_{\rm no\;res}(x,s|x_0)\nonumber \\
&=&\frac{\widetilde{P}_{\rm no\;res}(x,s|x_0)}{1-\widetilde{P}_{\rm res}(s|x_0)}, \label{eq:Pxs}\end{aligned}$$ where we have used Eq. (\[eq:Rs\]) to obtain the last equality. An inverse Laplace transform of Eq. (\[eq:Pxs\]) yields the time-dependent spatial distribution $P(x,t|x_0,0)$.
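Alternatively, the renewal structure can be used directly in the time domain: Eq. (\[eq:Rt-definition\]) is a Volterra equation of the second kind that may be solved by forward substitution on a time grid, after which Eq. (\[eq:Pxt-final\]) yields $P(x,t|x_0,0)$ without performing a Laplace inversion. A minimal sketch of this route (the discretization scheme and the parameter values are assumptions made for illustration) reads:

```python
import numpy as np

def renewal_density(p_res, t_grid):
    """Solve R = P_res + P_res * R (Volterra equation) by forward substitution."""
    dt = t_grid[1] - t_grid[0]
    R = np.zeros_like(t_grid)
    for i in range(len(t_grid)):
        conv = np.dot(p_res(t_grid[:i]), R[i - 1::-1]) * dt if i > 0 else 0.0
        R[i] = p_res(t_grid[i]) + conv
    return R

def spatial_density(p_no_res, R, x, t_grid):
    """P(x,t|x0,0) at t = t_grid[-1], from the last-renewal decomposition."""
    dt = t_grid[1] - t_grid[0]
    P = p_no_res(x, t_grid[-1])
    for i in range(1, len(t_grid)):            # tau = t_grid[i]: final reset-free stretch
        P = P + R[len(t_grid) - 1 - i] * p_no_res(x, t_grid[i]) * dt
    return P

# hypothetical example: free diffusion with a constant rate r, where R(t|x0) = r exactly
D, r, x0 = 1.0, 1.0, 0.0
p_res = lambda t: r * np.exp(-r * t)
p_no_res = lambda x, t: np.exp(-r * t - (x - x0)**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
t_grid = np.linspace(1e-3, 4.0, 1500)
R = renewal_density(p_res, t_grid)
xs = np.linspace(-6, 6, 241)
P = spatial_density(p_no_res, R, xs, t_grid)
print(R[-1], (P * (xs[1] - xs[0])).sum())      # close to r = 1 and to 1 (normalization)
```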
### Stationary spatial distribution
ł[sec:stat-distr]{} On applying the final value theorem, one may obtain the stationary spatial distribution as $$P_{\rm st}(x|x_0)=\lim_{s\to 0}s\,\widetilde{P}(x,s|x_0,0)=\lim_{s\to 0}s\,\frac{\widetilde{P}_{\rm no\;res}(x,s|x_0)}{1-\widetilde{P}_{\rm res}(s|x_0)}, \label{eq:Pst-0}$$ provided the stationary distribution (i.e., $\lim_{t \to \infty}P(x,t|x_0,0)$) exists. Now, since $P_{\rm res}(t|x_0)$ is normalized to unity, $\int_0^\infty {\rm d}t~P_{\rm res}(t|x_0)=1$, we may expand its Laplace transform to leading orders in $s$ as $\widetilde{P}_{\rm res}(s|x_0)\equiv \int_0^\infty {\rm d}t~\exp(-st)P_{\rm res}(t|x_0)=1-s\langle t \rangle_{\rm res}+O(s^2)$, provided that the mean first-reset time $\langle t \rangle_{\rm res}$, defined as $$\langle t \rangle_{\rm res}\equiv \int_0^{\infty}{\rm d}t~ t\,P_{\rm res}(t|x_0),$$ is finite. Similarly, we may expand $\widetilde{P}_{\rm no\;res}(x,s|x_0,0)$ to leading orders in $s$ as $\widetilde{P}_{\rm no\;res}(x,s|x_0,0)=\int_0^\infty {\rm d}t~P_{\rm no\;res}(x,t|x_0,0)-s\int_0^\infty {\rm d}t~tP_{\rm no\;res}(x,t|x_0,0)+O(s^2)$, provided that $\int_0^\infty {\rm d}t~tP_{\rm no\;res}(x,t|x_0,0)$ is finite. From Eq. (\[eq:Pst-0\]), we thus find the stationary spatial distribution to be given by the integral over all times of the propagator prior to first reset divided by the mean first-reset time: $$P_{\rm st}(x|x_0)=\frac{1}{\langle t \rangle_{\rm res}}\int_0^{\infty}{\rm d}t~ P_{\rm no\;res}(x,t|x_0,0). \label{eq:Pxstat-final}$$
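Equation (\[eq:Pxstat-final\]) also suggests a simple Monte Carlo estimator of the stationary state: launch reset-free excursions from $x_0$, histogram the time spent at each position until the first reset, and divide by the accumulated excursion time. The following sketch (an assumed scheme with arbitrary parameter values, not the simulation code used for the figures of this paper) illustrates the idea:

```python
import numpy as np

def stationary_mc(F, r, x0=0.0, D=1.0, mu=1.0, dt=1e-3, n_excursions=1000,
                  bins=np.linspace(-4, 4, 81), rng=None):
    """Monte Carlo estimate of P_st(x|x0) based on the reset-free occupation time."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(len(bins) - 1)
    total_time = 0.0                                # accumulates n_excursions * <t>_res
    amp = np.sqrt(2 * D * dt)
    for _ in range(n_excursions):
        x = x0
        while rng.random() >= r(x) * dt:            # propagate until the first reset
            i = np.searchsorted(bins, x) - 1
            if 0 <= i < len(counts):
                counts[i] += 1                      # time spent near x before first reset
            total_time += dt
            x += mu * F(x) * dt + amp * rng.standard_normal()
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, counts * dt / (total_time * np.diff(bins))

# hypothetical check: free particle, constant rate r = 1, D = 1
centers, p_est = stationary_mc(F=lambda x: 0.0, r=lambda x: 1.0)
print(p_est[len(p_est) // 2])   # the exact steady-state value at x0 is sqrt(r/D)/2 = 0.5
```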
Exactly solved examples
=======================
ł[sec:applications]{}
Free particle with space-independent resetting
----------------------------------------------
ł[subsec:constant-resetting]{}
Let us first consider the simplest case of free diffusion with a space-independent rate of resetting $r(x)=r$, with $r$ a positive constant having the dimension of inverse time. Here, on using Eq. (\[eq:pnoresq\]) with $F(x)=0$, we have $$\begin{aligned}
P_{\rm no\;res}(x,t|x_0,0)&=&G_{\rm q}(x,-it|x_0,0)\nonumber \\
&=&\langle x|\exp(-H_{\rm q}t)|x_0\rangle, \label{eq:Pnoreset-free-parabola}\end{aligned}$$ where the quantum Hamiltonian is in this case, following Eqs. (\[eq:quantum-hamiltonian\]-\[eq:quantum-potential\]), given by $$H_{\rm q}=-\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2} + r; \quad m_{\rm q}=\frac{1}{2D}, \quad \hbar=1.$$ Since in the present situation, the effective quantum potential $V_{\rm q}(x)=r$ is space independent, we may rewrite Eq. (\[eq:Pnoreset-free-parabola\]) as: $$P_{\rm no\;res}(x,t|x_0,0)=\exp(-rt)\,G_{\rm q}(x,-it|x_0,0), \label{eq:Pnoreset-constant-resetting-0}$$ with $$G_{\rm q}(x,-it|x_0,0)\equiv\langle x|\exp(-H_{\rm q}t)|x_0\rangle,$$ where the quantum Hamiltonian is now that of a free particle: $$H_{\rm q}\equiv-\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2}; \quad m_{\rm q}=\frac{1}{2D}, \quad \hbar=1. \label{eq:Hq-free}$$ Therefore, the statistics of resetting of a free particle under a space-independent rate of resetting may be found from the quantum propagator of a free particle, which is given by [@Schulman:1981] $$G_{\rm q}(x,\tau|x_0,0)= \sqrt{\frac{m_{\rm q}}{2\pi i\tau}}\,\exp\Big(\frac{i m_{\rm q}(x-x_0)^2}{2\tau}\Big). \label{eq:G-free-parabola}$$ Plugging in Eq. (\[eq:G-free-parabola\]) the parameters in Eq. (\[eq:Hq-free\]) together with $\tau=-it$, we have $$G_{\rm q}(x,-it|x_0,0)=\frac{1}{\sqrt{4\pi Dt}}\exp\Big(-\frac{(x-x_0)^2}{4Dt}\Big). \label{eq:G-free-parabola-final}$$ Using Eq. (\[eq:G-free-parabola-final\]) in Eq. (\[eq:Pnoreset-constant-resetting-0\]), we thus obtain $$P_{\rm no\;res}(x,t|x_0,0)=\frac{\exp(-rt)}{\sqrt{4\pi Dt}}\exp\Big(-\frac{(x-x_0)^2}{4Dt}\Big), \label{eq:Pnoreset-constant-resetting}$$ and hence, the distribution of the first-reset time may be found on using Eq. (\[eq:Pt-final\]): $$\begin{aligned}
P_{\rm res}(t|x_0)&=& r \exp(-rt)\int_{-\infty}^{\infty}\frac{{\rm d}x}{\sqrt{4\pi Dt}} \exp\Big(-\frac{(x-x_0)^2}{4Dt}\Big)\nonumber \\
&=& r\exp(-rt), \label{eq:Pt-constant-resetting}\end{aligned}$$ which is normalized to unity: $\int_0^{\infty}{\rm d}t~P_{\rm res}(t|x_0)=1$, as expected.
Using Eq. (\[eq:Pt-constant-resetting\]), we get $\widetilde{P}_{\rm res}(s|x_0)=r/(s+r)$, so that Eq. (\[eq:Rs\]) yields $\widetilde{R}(s|x_0)=r/s$. An inverse Laplace transform yields $R(t|x_0)=r$, as also follows from Eq. (\[eq:ft-definition\]) by substituting $r(y)=r$ and noting that $P(y,t|x_0,0)$ is normalized with respect to $y$.
Next, the probability density that the particle is at $x$ at time $t$, while starting from $x_0$, is obtained on using Eq. (\[eq:Pxt-final\]) as $$\begin{aligned}
P(x,t|x_0,0)&=&\frac{\exp(-rt)}{\sqrt{4\pi Dt}}\exp\left(-(x-x_0)^2/(4Dt)\right)\nonumber \\
&+&r \int_0^t {\rm d}\tau~\frac{\exp(-r\tau)}{\sqrt{4\pi D\tau}} \exp\left(-(x-x_0)^2/(4D\tau)\right).
\label{eq:Pxt-constant-resetting}\end{aligned}$$ Taking the limit $t \to \infty$, we obtain the stationary spatial distribution as $$P_{\rm st}(x|x_0)= r \int_0^{\infty}{\rm d}\tau~\frac{\exp(-r\tau)}{\sqrt{4\pi D\tau}} \exp\left(-(x-x_0)^2/(4D\tau)\right),
\label{eq:Pxstat-constant-resetting}$$ which may also be obtained by using Eqs. (\[eq:Pxstat-final\]) and (\[eq:Pnoreset-constant-resetting\]), and also Eq. (\[eq:Pt-constant-resetting\]) that implies that $\langle t \rangle_{\rm res}=1/r$. From Eq. (\[eq:Pxt-constant-resetting\]), we obtain an exact expression for the time-dependent spatial distribution as $$\begin{aligned}
&&P(x,t|x_0,0)=\frac{\exp(-rt)}{\sqrt{4\pi Dt}}\exp\Big(-\frac{(x-x_0)^2}{4Dt}\Big)\nonumber \\
&&+\frac{1}{4}\sqrt{\frac{r}{D}}\,\exp\Big(-\sqrt{\frac{r}{D}}\,|x-x_0|\Big)\,{\rm erfc}\Big(\frac{|x-x_0|}{2\sqrt{Dt}}-\sqrt{rt}\Big)\nonumber \\
&&+\frac{1}{4}\sqrt{\frac{r}{D}}\,\exp\Big(\sqrt{\frac{r}{D}}\,|x-x_0|\Big)\,{\rm erfc}\Big(\frac{|x-x_0|}{2\sqrt{Dt}}+\sqrt{rt}\Big),\end{aligned}$$ while Eq. (\[eq:Pxstat-constant-resetting\]) yields the exact stationary distribution as $$P_{\rm st}(x|x_0)=\frac{1}{2}\sqrt{\frac{r}{D}}\,\exp\Big(-\sqrt{\frac{r}{D}}\,|x-x_0|\Big), \label{eq:Pxstat-constant-resetting-final}$$ where ${\rm erfc}(x)\equiv (2/\sqrt{\pi})\int_x^\infty {\rm d}t~\exp(-t^2)$ is the complementary error function. The stationary distribution (\[eq:Pxstat-constant-resetting-final\]) may be put in the scaling form $$P_{\rm st}(x|x_0)=\frac{1}{2}\sqrt{\frac{r}{D}}\,{\cal R}\Big(\sqrt{\frac{r}{D}}\,|x-x_0|\Big), \label{eq:Pxstat-final-rconstant}$$ where the scaling function is given by ${\cal R}(y)\equiv\exp(-y)$. For the particular case $x_0=0$, Eq. (\[eq:Pxstat-constant-resetting-final\]) matches with the result derived in Ref. [@Evans:2011-1]. Note that the steady state distribution (\[eq:Pxstat-final-rconstant\]) exhibits a cusp at the resetting location $x_0$. Since the resetting location is taken to be the same as the initial location, the particle visits repeatedly in time the initial location, thereby keeping a memory of the latter that makes an explicit appearance even in the long-time stationary state.
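As a quick numerical illustration of Eq. (\[eq:Pxstat-constant-resetting-final\]), one may compare a long simulated trajectory of the constant-rate resetting dynamics with the exponential steady state. The sketch below uses assumed parameter values and a plain Euler discretization:

```python
import numpy as np

D, r, x0, dt, n_steps = 1.0, 1.0, 0.0, 1e-3, 500_000
rng = np.random.default_rng(1)
amp = np.sqrt(2 * D * dt)
x = x0
samples = np.empty(n_steps)
for i in range(n_steps):
    x = x0 if rng.random() < r * dt else x + amp * rng.standard_normal()
    samples[i] = x

# discard an initial transient, then compare the histogram with the exponential profile
hist, edges = np.histogram(samples[n_steps // 10:], bins=np.linspace(-5, 5, 101), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
exact = 0.5 * np.sqrt(r / D) * np.exp(-np.sqrt(r / D) * np.abs(centers - x0))
print(np.abs(hist - exact).max())     # small, up to statistical and discretization error
```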
Free particle with “parabolic” resetting
----------------------------------------
ł[subsec:parabolic-resetting]{}
We now study the dynamics of a free Brownian particle whose position is reset to the initial position $x_0$ with a rate of resetting that is proportional to the square of the current position of the particle. In this case, we have $r(x)=\alpha x^2$, with $\alpha>0$ having the dimension of $1/(({\rm Length})^{2}{\rm Time})$. From Eqs. (\[eq:pnoresq\]) and (\[eq:qma\]), and given that in this case $F(x)=0$, we get $$P_{\rm no\;res}(x,t|x_0,0)=G_{\rm q}(x,-it|x_0,0) = \langle x|\exp(-H_{\rm q}t)|x_0\rangle,
\label{eq:parabolic-resetting-Pxt-0-x0}$$ with the Hamiltonian obtained from Eq. (\[eq:quantum-hamiltonian\]) by setting $V_{\rm q}(x)=\alpha x^2$: $$H_{\rm q}= -\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2} + \alpha x^2; \quad m_{\rm q}=\frac{1}{2D}, \quad \hbar=1. \label{eq:Hq-free-parabola}$$ We thus see that the statistics of resetting of a free particle subject to a “parabolic" rate of resetting may be found from the propagator of a quantum harmonic oscillator. Following Schulman [@Schulman:1981], a quantum harmonic oscillator with the Hamiltonian given by $$H_{\rm q}=-\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m_{\rm q} \omega_{\rm q}^2 x^2,$$ with $m_{\rm q}$ and $\omega_{\rm q}$ being the mass and the frequency of the oscillator, has the quantum propagator $$\begin{aligned}
&&G_{\rm q}(x,\tau|x_0,0)=\sqrt{\frac{m_{\rm q}\omega_{\rm q}}{2\pi i \sin(\omega_{\rm q}\tau)}}\nonumber \\
&&\times\exp\Big( \frac{i m_{\rm q}\omega_{\rm q}}{2\sin(\omega_{\rm q}\tau)}\big[ (x^2 + x_0^2)\cos(\omega_{\rm q}\tau) - 2 x x_0\big] \Big).
\label{eq:Gq-free-parabola}\end{aligned}$$ Using the parameters given in Eq. (\[eq:Hq-free-parabola\]), and substituting $\tau=-it$ and $\omega_{\rm q}=\sqrt{4D\alpha}$ in Eq. (\[eq:Gq-free-parabola\]), we have $$\begin{aligned}
&&G_{\rm q}(x,-it|x_0,0)=\sqrt{\frac{\sqrt{\alpha/D}}{2\pi \sinh(\sqrt{4D\alpha}\,t)}}\nonumber \\
&&\times\exp\Big(-\frac{\sqrt{\alpha/D}}{2\sinh(\sqrt{4D\alpha}\,t)}\big[(x_0^{2}+x^{2})\cosh(\sqrt{4D\alpha}\,t)-2x x_0\big]\Big).
\label{eq:Gq-free-parabola-1}\end{aligned}$$

We may now derive the statistics of resetting by using the propagator (\[eq:Gq-free-parabola-1\]). Equation (\[eq:parabolic-resetting-Pxt-0-x0\]) together with Eq. (\[eq:Gq-free-parabola-1\]) imply $$\begin{aligned}
&&P_{\rm no\;res}(x,t|x_0,0)=\sqrt{\frac{\sqrt{\alpha/D}}{2\pi \sinh(\sqrt{4D\alpha}\,t)}}\nonumber \\
&&\times\exp\Big(-\frac{\sqrt{\alpha/D}}{2\sinh(\sqrt{4D\alpha}\,t)}\big[(x_0^{2}+x^{2})\cosh(\sqrt{4D\alpha}\,t)-2x_0x\big]\Big).
\label{eq:parabolic-resetting-Pxt-x0}\end{aligned}$$ Integrating Eq. (\[eq:parabolic-resetting-Pxt-x0\]) over $x$, we get the distribution of the first-reset time as $$\begin{aligned}
&&P_{\rm res}(t|x_0) =\int_{-\infty}^{\infty}{\rm d}y~ r(y)P_{\rm no\;res}(y,t|x_0,0)\nonumber \\
&&=\exp\Big(-\frac{1}{2}\sqrt{\frac{\alpha}{D}}\,x_0^2\tanh(\sqrt{4D\alpha}\,t)\Big)\nonumber \\
&&\times\Big[\sqrt{\alpha D}\,\frac{\tanh(\sqrt{4D\alpha}\,t)}{\sqrt{\cosh(\sqrt{4D\alpha}\,t)}}+\frac{\alpha x_0^2}{\cosh^{5/2}(\sqrt{4D\alpha}\,t)}\Big].
\label{eq:parabolic-resetting-Pt-x0}\end{aligned}$$

For the case $x_0=0$, Eqs. (\[eq:parabolic-resetting-Pxt-x0\]) and (\[eq:parabolic-resetting-Pt-x0\]) reduce to simpler expressions: $$\begin{aligned}
&&P_{\rm no\;res}(x,t|x_0=0,0)=\sqrt{\frac{\sqrt{\alpha/D}}{2\pi \sinh(\sqrt{4D\alpha}\,t)}}\nonumber \\
&&\times\exp\Big(-\frac{1}{2}\sqrt{\frac{\alpha}{D}}\coth(\sqrt{4D\alpha}\,t)\,x^2\Big),
\label{eq:parabolic-resetting-Pxt-x00}\end{aligned}$$ and $$P_{\rm res}(t|x_0=0) =\sqrt{\alpha D}\,\frac{\tanh^{3/2}(\sqrt{4D\alpha}\,t)}{\sqrt{\sinh(\sqrt{4D\alpha}\,t)}}. \label{eq:parabolic-resetting-Pt-x00}$$ Equation (\[eq:parabolic-resetting-Pt-x00\]) may be put in the scaling form $$P_{\rm res}(t|x_0=0)=\sqrt{\alpha D}~{\cal G}(\sqrt{4D\alpha}\,t),$$ with ${\cal G}(y) = \tanh(y)^{3/2}/\sqrt{\sinh(y)}$. Equation (\[eq:parabolic-resetting-Pt-x00\]) yields the mean first-reset time $\langle t \rangle_{\rm res}$ for $x_0=0$ to be given by $$\langle t \rangle_{\rm res}=\frac{\sqrt{\pi}\,\Gamma(1/4)}{4\,\Gamma(3/4)\,\sqrt{D\alpha}}, \label{eq:tav-x00}$$ where $\Gamma$ is the Gamma function. Equations (\[eq:parabolic-resetting-Pt-x00\]) and (\[eq:tav-x00\]) yield the stationary spatial distribution on using Eq. (\[eq:Pxstat-final\]): $$\begin{aligned}
&&P_{\rm st}(x|x_0=0)=\frac{1}{\langle t \rangle_{\rm res}}\nonumber \\
&&\times\int_0^{\infty}{\rm d}t~\sqrt{\frac{\sqrt{\alpha/D}}{2\pi \sinh(\sqrt{4D\alpha}\,t)}}\exp\Big(-\frac{1}{2}\sqrt{\frac{\alpha}{D}}\coth(\sqrt{4D\alpha}\,t)\,x^2\Big)\nonumber \\
&&=\frac{2^{3/4}}{\sqrt{\pi}\,\Gamma(1/4)}\Big(\frac{\alpha}{D}\Big)^{1/4}\Big(\frac{1}{2}\sqrt{\frac{\alpha}{D}}\,x^2\Big)^{1/4}K_{1/4}\Big(\frac{1}{2}\sqrt{\frac{\alpha}{D}}\,x^2\Big), \label{eq:stationary-px-parabolic-resetting}\end{aligned}$$ where $K_n(x)$ is the $n-$th order modified Bessel function of the second kind. Equation (\[eq:stationary-px-parabolic-resetting\]) implies that the stationary distribution is symmetric around $x=0$, which is expected since the resetting rate is symmetric around $x_0=0$. The stationary distribution (\[eq:stationary-px-parabolic-resetting\]) may be put in the scaling form $$P_{\rm st}(x|x_0=0)=\frac{2^{3/4}}{\sqrt{\pi}\,\Gamma(1/4)}\Big(\frac{\alpha}{D}\Big)^{1/4}{\cal R}\Big(\Big(\frac{\alpha}{D}\Big)^{1/4}x\Big),$$ where the scaling function is given by ${\cal R}(y)=(y^2/2)^{1/4}K_{1/4}(y^2/2)$.
![**Theory versus simulation results for the stationary spatial distribution of a free Brownian particle undergoing parabolic resetting.** The points denote simulation results, while the line stands for the exact analytical expression given in Eq. (\[eq:stationary-px-parabolic-resetting\]). The numerical results were obtained from $10^4$ independent simulations of the Langevin dynamics described in Section \[sec:model\]. The error bars associated with the data points are smaller than the symbol size. The parameter values are $D=1.0,\alpha=0.5$.[]{data-label="fig:free-parabola"}](fig2.pdf){width="40.00000%"}
The result (\[eq:stationary-px-parabolic-resetting\]) is checked in simulations in Fig. \[fig:free-parabola\]. The simulations involved numerically integrating the dynamics described in Section \[sec:model\], with integration timestep equal to $0.01$. Using $\int_0^\infty {\rm
d}t~t^{\mu-1}K_\nu(t)=2^{\mu-2}\Gamma\Big(\frac{\mu}{2}-\frac{\nu}{2}\Big)\Gamma\Big(\frac{\mu}{2}+\frac{\nu}{2}\Big)$ for $|{\rm Re}~\nu|<|{\rm Re}~\mu|$ [@Olver:2016], we find that $P_{\rm st}(x|x_0=0)$ given by Eq. (\[eq:stationary-px-parabolic-resetting\]) is correctly normalized to unity. Moreover, using the results that as $x\to 0$, we have $K_\nu(x)=\frac{\Gamma(\nu)}{2}\Big(\frac{x}{2}\Big)^{-\nu}$ for ${\rm
Re}~\nu>0$ and that as $x\to \infty$, we have $K_\nu(x)=\Big(\frac{\pi}{2x}\Big)^{1/2}\exp(-x)$ for real $x$ [@Olver:2016], we get $$P_{\rm st}(x|x_0=0) \sim \begin{cases}
\dfrac{1}{\sqrt{\pi}}\Big(\dfrac{\alpha}{D}\Big)^{1/4} & {\rm for~} x\to 0,\\[2ex]
\dfrac{\Gamma(3/4)}{\pi}\Big(\dfrac{\alpha}{D}\Big)^{1/8}\dfrac{1}{\sqrt{|x|}}\,\exp\Big(-\dfrac{1}{2}\sqrt{\dfrac{\alpha}{D}}\,x^2\Big) & {\rm for~} |x| \to\infty.
\end{cases} \label{eq:stationary-px-parabolic-resetting-limiting-forms}$$
Using Eq. (\[eq:stationary-px-parabolic-resetting\]) and the result ${\rm d}K_{1/4}(x)/{\rm
d}x=-(1/2)\left(K_{3/4}(x)+K_{5/4}(x)\right)$, it may be easily shown that as $x \to 0^{\pm}$, one has ${\rm d}P_{\rm st}(x|x_0=0))/{\rm
d}x=\mp \sqrt{\alpha/D}\Gamma(3/4)/(\sqrt{\pi}\Gamma(1/4))$, implying thereby that the first derivative of $P_{\rm st}(x|x_0=0)$ is discontinuous at $x=0$. We thus conclude that the spatial distribution $P_{\rm st}(x|x_0=0)$ exhibits a cusp singularity at $x=0$. This feature of cusp singularity at the resetting location $x_0=0$ is also seen in the stationary distribution (\[eq:Pxstat-constant-resetting-final\]), and is a signature of the steady state being a nonequilibrium one [@Evans:2011-1; @Gupta:2014; @Nagar:2016; @Gupta:2016]. Note the existence of faster-than-exponential tails suggested by Eq. (\[eq:stationary-px-parabolic-resetting-limiting-forms\]) in comparison to the exponential tails observed in the case of resetting at a constant rate, see Eq. (\[eq:Pxstat-constant-resetting-final\]). This is consistent with the fact that with respect to the case of resetting at a space-independent rate, a parabolic rate of resetting implies that the further the particle is from $x_0=0$, the more enhanced is the probability that a resetting event takes place, and, hence, a smaller probability of finding the particle far away from the resetting location.
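The closed form (\[eq:stationary-px-parabolic-resetting\]) is easy to evaluate with standard special-function routines. The short sketch below (an illustrative check, using the parameter values of Fig. \[fig:free-parabola\]) employs the modified Bessel function available in scipy to verify the normalization and the small-$x$ value quoted above:

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

D, alpha = 1.0, 0.5                      # the parameter values of Fig. 2

def p_st(x):
    z = 0.5 * np.sqrt(alpha / D) * x**2
    pref = 2**0.75 * (alpha / D)**0.25 / (np.sqrt(np.pi) * gamma(0.25))
    return pref * z**0.25 * kv(0.25, z)   # K_{1/4} is kv(0.25, .)

print(2 * quad(p_st, 1e-9, np.inf)[0])                   # ~1 (normalization)
print(p_st(1e-6), (alpha / D)**0.25 / np.sqrt(np.pi))    # value at the resetting point x0 = 0
```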
Let us consider the case of an overdamped Brownian particle that is trapped in a harmonic potential $V(x) = (1/2)\kappa x^2$, with $\kappa>0$, and is undergoing the Langevin dynamics (\[eq:eom\]). At equilibrium, the distribution of the position of the particle is given by the Boltzmann-Gibbs distribution $$P_{\rm eq} (x) =\exp\left(-\kappa x^2/2k_{\rm B}T\right)/Z, \label{eq:stationary-px-parabolic-resetting-limiting-forms-equivalence}$$ with $Z= \sqrt{2\pi k_{\rm B}T/\kappa}$ being the partition function. Comparing Eqs. (\[eq:stationary-px-parabolic-resetting-limiting-forms\]) and (\[eq:stationary-px-parabolic-resetting-limiting-forms-equivalence\]), we see that using a harmonic potential with a suitable $\kappa$, the stationary distribution of a free Brownian particle undergoing parabolic resetting may be made to match in the tails with the stationary distribution of a Brownian particle trapped in the harmonic potential and evolving in the absence of any resetting. On the other hand, the cusp singularity in the former cannot be achieved with the Langevin dynamics in any harmonic potential without the inclusion of resetting events.
Let us note that the stationary states (\[eq:Pxstat-constant-resetting-final\]) and (\[eq:stationary-px-parabolic-resetting\]) are entirely induced by the dynamics of resetting. Indeed, in the absence of any resetting, the dynamics of a free diffusing particle does not allow for a long-time stationary state, since in the absence of a force, there is no way in which the motion of the particle can be bounded in space. On the other hand, in presence of resetting, the dynamics of repeated relocation to a given position in space can effectively compete with the inherent tendency of the particle to spread out in space, leading to a bounded motion, and, hence, a relaxation to a stationary spatial distribution at long times. In the next section, we consider the situation where the particle even in the absence of any resetting has a localized stationary spatial distribution, and investigate the change in the nature of the spatial distribution of the particle owing to the inclusion of resetting events.
Particle trapped in a harmonic potential with energy-dependent resetting
------------------------------------------------------------------------
ł[subsec:parabolic-resetting-force]{}
![**Illustration of the energy-dependent resetting of a Brownian particle moving in a harmonic potential.** A Brownian particle (grey circle) immersed in a thermal bath at temperature $T$ moves with diffusion coefficient $D$, with its motion being confined by a harmonic potential $V(x)=\kappa
x^2/2$ (green), with $\kappa$ being the stiffness constant. Here, $x$ is the position of the particle with respect to the center of the potential. The particle, initially located in the trap center ($t'=0$, left panel), diffuses at subsequent times in the energy landscape ($t'<t$, middle panel), until a resetting event occurs at time $t'=t$ (right panel). The black curve represents the history of the particle from $t'=0$ up to the time corresponding to each snapshot. The rate of resetting (right colorbar) is proportional to the instantaneous energy of the particle, and, therefore, a reset is more likely to take place as the particle climbs up the potential.](fig3.pdf){width="45.00000%"}
ł[fig:energy-resetting]{}
We now introduce a resetting problem that is relevant in physics: an overdamped Brownian particle immersed in a thermal bath at temperature $T$ and trapped with a harmonic potential centered at the origin: $V(x)=(1/2)\kappa x^2$, where $\kappa>0$ is the stiffness constant of the harmonic potential. The particle, initially located at $x_0=0$, may be reset at any time $t$ to the origin with a probability that depends on the energy of the particle at time $t$. The dynamics is shown schematically in Fig. \[fig:energy-resetting\]. For purposes of illustration of the nontrivial effects of resetting, we consider the following space-dependent resetting rate: $$r(x)= \frac{3}{2\tau_c}\frac{V(x)}{k_{\rm B}T} = \frac{3\mu^2\kappa^2}{4D}\, x^2, \label{eq:re1}$$ where we use $D=k_BT \mu$ in obtaining the second equality. Note that the resetting rate is proportional to the energy of the particle (in units of $k_{\rm B}T$) divided by the timescale $\tau_c \equiv 1/\mu\kappa$ that characterizes the relaxation of the particle in the harmonic potential in the absence of any resetting. In this way, it is ensured that the rate of resetting (\[eq:re1\]) has units of inverse time. Note also that in the absence of any resetting, the particle relaxes to an equilibrium stationary state with a spatial distribution given by the usual Boltzmann-Gibbs form: $$P_{\rm st}^{r(x)=0}(x)=\sqrt{\frac{\kappa}{2\pi k_{\rm B}T}}\,\exp\Big(-\frac{\kappa x^2}{2k_{\rm B}T}\Big). \label{eq:BGform-parabola-parabola}$$
Using $F(x) = -\partial_x V(x)=-\kappa x$ and the expression (\[eq:re1\]) for the resetting rate in Eq. (\[eq:quantum-potential\]), we find that the potential of the corresponding quantum mechanical problem is given by $$V_{\rm q}(x)=\frac{\mu^2F^2(x)}{4D}+\frac{\mu F'(x)}{2}+r(x) = \frac{\mu^2\kappa^2}{D}\,x^2-\frac{\mu\kappa}{2}, \label{eq:Vq-parabola-parabola}$$ where we have used $F'(x)=-\kappa$. From Eqs. (\[eq:pnoresq\]) and (\[eq:qma\]), we obtain $$\begin{aligned}
&&P_{\rm no\;res}(x,t|x_0=0,0)=\exp\Big(\frac{\mu}{2D} \int_{x_0}^x F(x')\, {\rm d}x'\Big)\nonumber \\
&&\times\exp\Big(\frac{t}{2\tau_c}\Big)\langle x|\exp(-H_{\rm q}t)|x_0=0\rangle\nonumber \\
&&=\exp\Big(-\frac{x^2}{4D\tau_c}\Big)\exp\Big(\frac{t}{2\tau_c}\Big)\langle x|\exp(-H_{\rm q}t)|x_0=0\rangle,
\label{eq:Pnoreset-parabola-parabola-0}\end{aligned}$$ where the quantum Hamiltonian is given by $$H_{\rm q}= -\frac{1}{2m_{\rm q}}\frac{\partial^2}{\partial x^2} + \frac{\mu^2\kappa^2}{D}\,x^2; \quad m_{\rm q}=\frac{1}{2D}, \quad \hbar=1. \label{eq:Hq-parabola-parabola}$$ We thus find that the propagator $\langle x|\exp(-H_{\rm q}t)|x_0=0\rangle$ is given by the propagator of a quantum harmonic oscillator, which has been calculated in Sec. \[subsec:parabolic-resetting\]. In fact, the Hamiltonian given by Eq. (\[eq:Hq-parabola-parabola\]) is identical to that in Eq. (\[eq:Hq-free-parabola\]) with the identification $\alpha = \mu^2\kappa^2/D=1/D\tau_c^2$, so that by substituting $x_0=0$ and $\alpha=1/(D\tau_c^2)$ in Eq. (\[eq:parabolic-resetting-Pxt-x0\]), we obtain $$\begin{aligned}
&&\langle x|\exp(-H_{\rm q}t)|x_0=0\rangle=\frac{1}{\sqrt{2\pi D\tau_c \sinh(2t/\tau_c)}}\nonumber \\
&&\times\exp\Big(-\frac{x^2}{2D\tau_c}\coth(2t/\tau_c)\Big). \label{eq:Pnoreset-parabola-parabola-1}\end{aligned}$$
From Eqs. (\[eq:Pnoreset-parabola-parabola-0\]) and (\[eq:Pnoreset-parabola-parabola-1\]), we obtain $$\begin{aligned}
&& P_{\rm no\;res}(x,t|x_0=0,0)=\frac{\exp(t/2\tau_c)}{\sqrt{2\pi D\tau_c \sinh(2t/\tau_c)}}\nonumber \\
&&\times\exp\Big(-\frac{x^2}{4D\tau_c}\big[1+2\coth(2t/\tau_c)\big]\Big).
\label{eq:Pnoreset-parabola-parabola-final}\end{aligned}$$
Following Eq. (\[eq:Pt-final\]), we may now calculate the probability of the first-reset time by using Eq. (\[eq:Pnoreset-parabola-parabola-final\]) to get $$\begin{aligned}
&& P_{\rm res}(t|x_0=0)=\frac{3}{4D\tau_c^2}\frac{\exp(t/2\tau_c)}{\sqrt{2\pi D\tau_c \sinh(2t/\tau_c)}}\nonumber \\
&&\times\int_{-\infty}^{\infty}{\rm d}x~ x^2\exp\Big(-\frac{x^2}{4D\tau_c}\big[1+2\coth(2t/\tau_c)\big]\Big)\nonumber \\
&&=\frac{3}{4\tau_c}\,\frac{\exp(t/2\tau_c)}{\sqrt{\sinh(2t/\tau_c)}\,\big[1/2+\coth(2t/\tau_c)\big]^{3/2}}, \label{eq:Preset-parabola-parabola-final}\end{aligned}$$
which may be checked to be normalized: $\int_0^{\infty} {\rm d}t~P_{\rm res}(t|x_0=0) = 1$. The first-reset time distribution (\[eq:Preset-parabola-parabola-final\]) may be written in the scaling form $$P_{\rm res}(t|x_0=0)=\frac{3}{4\tau_c}\,{\cal G}\Big( \frac{2t}{\tau_c}\Big), \label{eq:Preset-parabola-parabola-final2}$$ with the scaling function given by ${\cal G}(y)=\exp(y/4)\sinh(y)^{-1/2}(1/2+\coth(y))^{-3/2}$.
![**Theory versus simulation results for the stationary spatial distribution of a Brownian particle trapped in a harmonic potential and undergoing energy-dependent resetting.** The points denote simulation results, while the line stands for the exact analytical expression given in Eq. (\[eq:stationary-px-parabola-parabolic-resetting\]). The numerical results were obtained from $10^4$ independent simulations of the Langevin dynamics described in Section \[sec:model\]. The error bars associated with the data points are smaller than the symbol size. The parameter values are $D=1.0,\tau_c=0.5$.[]{data-label="fig:parabola-parabola"}](fig4.pdf){width="40.00000%"}
The mean first-reset time, given by $\langle t\rangle_{\rm res}\equiv\int_0^{\infty} {\rm d}t~ t\, P_{\rm res}(t|x_0=0)$, equals $$\langle t \rangle_{\rm res} = \frac{4\tau_c}{\sqrt{3}}\;{}_2F_1\Big(\frac{1}{2},\frac{1}{8};\frac{9}{8};-\frac{1}{3}\Big), \label{eq:treset-parabola-parabola}$$ where $_pF_q(a_1,a_2,\ldots,a_p;b_1,b_2,\ldots,b_q;x)$ is the generalized hypergeometric function. Introducing the variable $z\equiv 2t/\tau_c$, and using Eq. (\[eq:Pnoreset-parabola-parabola-final\]), we get $$\begin{aligned}
&&P_{\rm st}(x|x_0=0)=\frac{\tau_c}{2\langle t \rangle_{\rm res}\sqrt{2\pi D\tau_c}}\exp\Big(-\frac{x^2}{4D\tau_c}\Big)\nonumber \\
&&\times\int_0^{\infty}{\rm d}z~ \frac{\exp(z/4)}{\sqrt{\sinh z}}\exp\Big(-\frac{x^2}{2D\tau_c}\coth z\Big)\nonumber \\
&&=\frac{\tau_c}{2\langle t \rangle_{\rm res}\sqrt{2\pi D\tau_c}}\exp\Big(-\frac{x^2}{4D\tau_c}\Big)\nonumber \\
&&\times 2^{-3/4}\Big( \frac{x^2}{2D\tau_c}\Big)^{-1/4}\Gamma(1/8)\,W_{1/8,1/4}\Big( \frac{x^2}{D\tau_c}\Big), \label{eq:Pst-parabola-parabola}\end{aligned}$$ where $W_{\mu,\nu}$ is Whittaker’s W function.
Using Eq. (\[eq:treset-parabola-parabola\]) in Eq. (\[eq:Pst-parabola-parabola\]), we obtain $$\begin{aligned}
&&P_{\rm st}(x|x_0=0)=\frac{\sqrt{3}\,\Gamma(1/8)}{8\cdot 2^{3/4}\,{}_2F_1\big(\frac{1}{2},\frac{1}{8};\frac{9}{8};-\frac{1}{3}\big)\sqrt{2\pi D\tau_c}}\nonumber \\
&&\times\exp\Big(-\frac{x^2}{4D\tau_c}\Big)\Big( \frac{2D\tau_c}{x^2}\Big)^{1/4}W_{1/8,1/4}\Big( \frac{x^2}{D\tau_c}\Big),
\label{eq:stationary-px-parabola-parabolic-resetting}\end{aligned}$$ which may be checked to be normalized to unity. We may write the stationary distribution in terms of a scaled position variable as $$P_{\rm st}(x|x_0=0)= \frac{\sqrt{3}\,\Gamma(1/8)}{16\,{}_2F_1\big(\frac{1}{2},\frac{1}{8};\frac{9}{8};-\frac{1}{3}\big)\sqrt{2\pi D\tau_c}}\;\mathcal{R}\Big( \frac{x}{\sqrt{2D\tau_c}}\Big), \label{eq:statparresparpot}$$ with $\mathcal{R}(y) \equiv \exp(-y^2/2)(2/y^2)^{1/4}\,W_{1/8,1/4}(2y^2)$ being the scaling function. The expression (\[eq:stationary-px-parabola-parabolic-resetting\]) is checked in simulations in Fig. \[fig:parabola-parabola\]. The simulations involved numerically integrating the dynamics described in Section \[sec:model\], with integration timestep equal to $0.01$. Using the results that as $x\to 0$, we have $W_{k,\mu}(x)=\frac{\Gamma(2\mu)}{\Gamma(1/2+\mu-k)}x^{1/2-\mu}$ for $0\le {\rm Re}~\mu<1/2,~\mu \ne 0$ and that as $x\to \infty$, we have $W_{k,\mu}(x)\sim e^{-x/2}x^{k}$ for real $x$ [@Olver:2016], we get $$P_{\rm st}(x|x_0=0) \sim \begin{cases}
\dfrac{\sqrt{3}\,\Gamma(1/8)}{16\,\Gamma(5/8)\,{}_2F_1\big(\frac{1}{2},\frac{1}{8};\frac{9}{8};-\frac{1}{3}\big)\sqrt{D\tau_c}} & {\rm for~} x\to 0,\\[2ex]
\dfrac{\sqrt{3}\,\Gamma(1/8)}{8\sqrt{2}\,{}_2F_1\big(\frac{1}{2},\frac{1}{8};\frac{9}{8};-\frac{1}{3}\big)\sqrt{2\pi D\tau_c}}\Big(\dfrac{D\tau_c}{x^{2}}\Big)^{1/8}\exp\Big(-\dfrac{3x^2}{4D\tau_c}\Big) & {\rm for~} |x| \to\infty.
\end{cases} \label{eq:stationary-px-parabola-parabola-resetting-limiting-forms}$$ We see again the existence of a cusp at the resetting location $x_0=0$, similar to all other cases we studied in this paper.
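The Whittaker function in Eq. (\[eq:statparresparpot\]) can be evaluated with standard libraries through the identity $W_{1/8,1/4}(z)=e^{-z/2}z^{3/4}\,U(5/8,3/2,z)$, with $U$ the Tricomi confluent hypergeometric function. The sketch below (an assumed numerical check, using the parameter values of Fig. \[fig:parabola-parabola\]) computes the stationary variance from the scaling function; it should reproduce the value $\approx 0.59\,k_{\rm B}T/\kappa$ invoked in the next section:

```python
import numpy as np
from scipy.special import hyperu
from scipy.integrate import quad

def R_scaling(y):
    """Scaling function of the energy-dependent resetting steady state."""
    z = 2.0 * y**2
    W = np.exp(-z / 2) * z**0.75 * hyperu(5 / 8, 3 / 2, z)   # Whittaker W_{1/8,1/4}(z)
    return np.exp(-y**2 / 2) * (2 / y**2)**0.25 * W

norm = 2 * quad(R_scaling, 1e-8, np.inf)[0]                   # even function of y
y2 = 2 * quad(lambda y: y**2 * R_scaling(y), 1e-8, np.inf)[0] / norm

D, tau_c = 1.0, 0.5                                           # the values used in Fig. 4
print(2 * D * tau_c * y2)   # stationary variance <x^2>, about 0.59 * k_B T / kappa = 0.59 * D * tau_c
```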
The results of this subsection could inspire future experimental studies using optical tweezers in which the resetting protocol could be effectively implemented by using feedback control [@Berut:2012; @Roldan:2014; @Koski:2014]. Interestingly, colloidal and molecular gel-like and glassy systems show hopping motion of their constituent particles between potential traps or “cages," the latter originating from the interaction of the particles with their neighbors [@Sandalo:2017]. Such a phenomenon is also exhibited by out-of-equilibrium glasses and gels during the process of aging [@Ludovic:2011]. Our results in this section could provide valuable insights into the aforementioned dynamics, since the emergent potential cages may be well approximated by harmonic traps and the hopping process as a resetting event.
Shortcuts to confinement
========================
A hallmark of the examples solved exactly in Sec. \[sec:applications\] by using our path-integral formalism is the existence of stationary distributions with prominent [*cusp singularities*]{} (see Figs. \[fig:free-parabola\] and \[fig:parabola-parabola\]). These examples demonstrate that the particle can be confined around a prescribed location by using appropriate space-dependent rates of resetting.
In physics and nanotechnology, the issue of achieving an accurate control of fluctuations of small-sized particles is nowadays attracting considerable attention [@Martinez:2013; @Berut:2014; @Dieterich:2015]. For instance, using optical tweezers and noisy electrostatic fields, it is now possible to control accurately the amplitude of fluctuations of the position of a Brownian particle [@Martinez:2017; @Gavrilov:2017; @Ciliberto:2017]. Such fluctuations may be characterized by an effective temperature. Experiments have reported effective temperatures of a colloidal particle in water up to 3000K [@Martinez:2013], and have recently been used to design colloidal heat engines at the mesoscopic scale [@Martinez:2016; @Martinez:2017]. Effective confinement of small systems is of paramount importance for success of quantum-based computations with, e.g., cold atoms [@Cirac:1995; @Bloch:2005].
Does stochastic resetting provide an efficient way to reduce the amplitude of fluctuations of a Brownian particle, thereby providing a technique to reduce the associated effective temperature? We now provide some insights into this question.
Consider the following example of a nonequilibrium protocol: i) a Brownian particle is initially confined in a harmonic trap with a potential $V(x)=(1/2)\kappa x^2$ for a sufficiently long time such that it is in an equilibrium state with spatial distribution $P_{\rm
eq}(x)=\exp(-\kappa x^2/2k_{\rm B}T)/Z$, with $Z= \sqrt{2\pi k_{\rm
B}T/\kappa}$; ii) a space-dependent (parabolic) resetting rate $r(x)=(3/2\tau_c)V(x)/k_{\rm B}T$, with $\tau_c=(1/\mu \kappa)$, is suddenly switched on by an external agent. In other words, the rate of resetting is instantaneously quenched from $r(x)=0$ to $r(x)=(3/4)(\mu^2\kappa^2/D)x^2$; iii) the particle is allowed to relax to a new stationary state in the presence of the trapping potential and parabolic resetting. At the end of the protocol, the particle relaxes to the stationary distribution $P_{\rm st}(x)$ given by Eq. (\[eq:statparresparpot\]).
![**Shortcut to particle confinement.** Confinement of a Brownian particle using harmonic potentials and space-dependent rates of resetting. The symbols represent simulation results for relaxation processes of $10^5$ non-interacting Brownian particles that are initially in equilibrium in a harmonic potential $V(x)=(1/2)\kappa
x^2$. The blue circles represent the mean-squared displacement of the particle in units of $\langle x^2(0)\rangle$ after an instantaneous quench to a harmonic potential with stiffness $\kappa'=1.7\kappa$ (the blue arrow in the inset). The yellow circles on the other hand represent the mean-squared displacement of the particle in units of $\langle
x^2(0)\rangle$ after an instantaneous quench of the rate of resetting from $r(x)=0$ to $r(x)=(3/4)(\mu^2\kappa^2/D)x^2$ (the orange arrow in the inset). For the case in which the stiffness of the potential is quenched from $\kappa$ to $\kappa'$, it is easily seen that $\langle x^2(t)\rangle=\langle x^2 (0)\rangle
\exp(-2\mu k' t)+(D/(\mu k'))[1-\exp(-2\mu k't)]$, thereby implying a relaxation timescale $\tau_{\rm quench}=1/(2\mu k')$ and yielding the corresponding curve in the figure. Note that the time in the $x$-axis is measured in units of $\tau_c= 1/\mu \kappa$. The other curve depicting the process of relaxation in presence of resetting may be fitted to a good approximation to $A+B e^{-t/\tau_{\rm reset}}$. One observes that $\tau_{\rm reset} \approx \tau_{\rm quench}/3$. The parameter values are $D=1$, $\kappa=1$ and $\mu=10$.[]{data-label="fig:quenchreset"}](fig5.pdf){width="40.00000%"}
We first note that before the sudden switching on of the resetting dynamics, which we assume to happen at a reference time instant $t=0$, the mean-squared displacement of the particle is given by $$\langle x^2(0)\rangle= \frac{k_{\rm B}T}{\kappa},$$ which follows from the equilibrium distribution before the resetting is switched on, and is in agreement with the equipartition theorem $\kappa \langle x^2(0)\rangle/2 =k_{\rm B}T/2$. After the sudden switching on of the space-dependent rate of resetting, the variance of the position of the particle relaxes at long times to the stationary value $$\langle x^2(\infty)\rangle=0.59\, \frac{k_{\rm B}T}{\kappa},$$ as follows from Eq. (\[eq:statparresparpot\]). The resetting dynamics induces in this case a reduction by about $40\%$ of the variance of the position of the particle with respect to its initial value. We note that such a reduction of the amplitude of fluctuations of the particle could also have been achieved by performing a sudden quench of the stiffness of the harmonic potential by increasing its value from $\kappa$ to $\kappa'\simeq (1/0.59) \kappa \simeq 1.7 \kappa$, without the need for switching on of resetting events. To understand the difference between the two scenarios, it is instructive to compare the time evolution of the mean-squared displacement $\langle x^2(t)\rangle$ towards the stationary value in the two cases, see inset in Fig. \[fig:quenchreset\]. We observe that resetting leads to the same degree of confinement in a shorter time. For the case in which the stiffness of the potential is quenched from $\kappa$ to $\kappa'$, it is easily seen that $\langle x^2(t)\rangle=\langle x^2 (0)\rangle \exp(-2\mu \kappa' t)+(D/(\mu \kappa'))[1-\exp(-2\mu \kappa't)]$, thereby implying a relaxation timescale $\tau_{\rm quench}=1/(2\mu \kappa')$ and yielding the corresponding curve in Fig. \[fig:quenchreset\]. The other curve depicting the process of relaxation in presence of resetting may be fitted to a good approximation to $A+B e^{-t/\tau_{\rm reset}}$. One observes that $\tau_{\rm reset} \approx \tau_{\rm quench}/3$. Thus, for the example at hand, we may conclude that a sudden quench of resetting profiles provides a [*shortcut to confinement*]{} of the position of the particle to a desired degree with respect to a potential quench. Similar conclusions were arrived at for mean first-passage times of resetting processes and equivalent equilibrium dynamics [@Evans:2013].
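The comparison between the two protocols is also easy to reproduce numerically. The sketch below is not the code used to generate Fig. \[fig:quenchreset\]; it simply evolves an ensemble of independent particles under the two protocols with the parameter values quoted in the caption and prints the final reduced variances:

```python
import numpy as np

D, mu, kappa, dt, n_steps, n_part = 1.0, 10.0, 1.0, 1e-4, 4000, 20000   # values of Fig. 5
rng = np.random.default_rng(0)
kBT = D / mu
x_q = rng.normal(0.0, np.sqrt(kBT / kappa), n_part)     # equilibrated initial condition
x_r = x_q.copy()
amp = np.sqrt(2 * D * dt)
kappa_p = 1.7 * kappa                                   # quenched stiffness
alpha = 0.75 * mu**2 * kappa**2 / D                     # parabolic resetting rate r(x) = alpha x^2

for _ in range(n_steps):
    # protocol (a): stiffness quench kappa -> kappa'
    x_q += -mu * kappa_p * x_q * dt + amp * rng.standard_normal(n_part)
    # protocol (b): original trap plus energy-dependent resetting to the trap centre
    resets = rng.random(n_part) < alpha * x_r**2 * dt
    x_r += -mu * kappa * x_r * dt + amp * rng.standard_normal(n_part)
    x_r[resets] = 0.0

print(np.mean(x_q**2) / (kBT / kappa), np.mean(x_r**2) / (kBT / kappa))   # both ~0.59
```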
It may be noted that the confinement protocol by a sudden quench of resetting profiles introduced above is amenable to experimental realization. Using microscopic particles trapped with optical tweezers [@Martinez:2017; @Ciliberto:2017] or feedback traps [@Jun:2012; @Gavrilov:2017; @Gavrilov:2014], it is now possible to measure and control the position of a Brownian particle with subnanometric precision. Recent experimental setups allow to exert random forces to trapped particles, with a user-defined statistics for the random force [@Martinez:2013; @Berut:2014; @Martinez:2014; @Martinez:2017]. The shortcut protocol using resetting could be explored in the laboratory by designing a feedback-controlled experiment with optical tweezers and by employing random-force generators according to, e.g., the protocol sketched in Fig. \[fig:energy-resetting\].
Conclusions and outlook
=======================
ł[sec:conclusion]{} In this paper, we addressed the fundamental question of what happens when a continuously evolving stochastic process is repeatedly interrupted at random times by a sudden reset of the state of the process to a given fixed state. To this end, we studied the dynamics of an overdamped Brownian particle diffusing in force fields and resetting to a given spatial location with a rate that has an essential dependence on space, namely, the probability with which the particle resets is a function of the current location of the particle.
To address stochastic resetting in the aforementioned scenario, we employed a path-integral approach, discussed in detail in Eqs. (\[eq:pnoresq\])-(\[eq:quantum-potential\]) in Sec. \[sec:propagator\]. Invoking the Feynman-Kac formalism, we obtained an equality that relates the probability of transition between different spatial locations of the particle before it encounters any reset to the quantum propagator of a suitable quantum mechanical problem (see Sec. \[sec:propagator\]). Using this formalism and elements from renewal theory, we obtained closed-form analytical expressions for a number of statistics of the dynamics, e.g., the probability distribution of the first-reset time (Sec. \[sec:prob-first-reset\]), the time-dependent spatial distribution (Sec. \[sec:prob-distr\]), and the stationary spatial distribution (Sec. \[sec:stat-distr\]).
We applied the method to a number of representative examples, including in particular those involving nontrivial spatial dependence of the rate of resetting. Remarkably, we obtained the exact distributions of the aforementioned dynamical quantifiers for two non-trivial problems: the resetting of a free Brownian particle under “parabolic” resetting (Sec. \[subsec:parabolic-resetting\]) and the resetting of a Brownian particle moving in a harmonic potential with a resetting rate that depends on the energy of the particle (Sec. \[subsec:parabolic-resetting-force\]). For the latter case, we showed that using instantaneous quenching of resetting profiles allows to restrict the mean-squared displacement of a Brownian particle to a desired value on a faster timescale than by using instantaneous potential quenches. We expect that such a [*shortcut to confinement*]{} would provide novel insights in ongoing research on, e.g., engineered-swift-equilibration protocols [@ESE; @Granger:2016] and shortcuts to adiabaticity [@Deffner:2013; @Deng:2013; @Tu:2014].
Our work may also be extended to treat systems of interacting particles, with the advantage that the corresponding quantum mechanical system can be treated effectively by using tools of quantum physics and many-body quantum theory. Our approach also provides a viable method to calculate path probabilities of complex stochastic processes. Such calculations are of particular interest in many contexts, e.g., in stochastic thermodynamics [@Jarzynski:2011; @Seifert:2012; @Celani:2012; @Bo:2017], and in the study of several biological systems such as molecular motors [@Julicher:1997; @Guerin:2011], active gels [@Basu:2008], genetic switches [@Perez-Carrasco:2016; @Schultz:2008], etc. As a specific application in this direction, our approach allows to explore the physics of [*Brownian tunnelling*]{} [@Roldan], an interesting stochastic resetting version of the well-known phenomenon of quantum tunneling, which serves to unveil the subtle effects resulting from stochastic resetting in, e.g., transport through nanopores [@Trepagnier:2007].
Acknowledgements
================
ER thanks Ana Lisica and Stephan Grill for initial discussions, Ken Sekimoto and Luca Peliti for discussions on path integrals, and Domingo Sánchez and Juan M. Torres for discussions on quantum mechanics.
[99]{}
Landauer R 1961 Irreversibility and heat generation in the computing process [*IBM J. Res. Develop.*]{} [**5**]{} 183
Bennett C H 1973 Logical reversibility of computation [*IBM J. Res. Develop.*]{} [**17**]{} 525
Bérut A, Arakelyan A, Petrosyan A, Ciliberto S, Dillenschneider R, and Lutz E 2012 Experimental verification of Landauer’s principle linking information and thermodynamics [*Nature*]{} [**483**]{} 187
Mandal D and Jarzynski C 2012 Work and information processing in a solvable model of Maxwell’s demon [*Proc. Natl. Acad. Sci.*]{} [**109**]{} 11641
Roldán É, Martínez I A, Parrondo J M R, and Petrov D 2014 Universal features in the energetics of symmetry breaking [*Nature Phys.*]{} [**10**]{} 457
Koski J V, Maisi V F, Pekola J P, and Averin D V 2014 Experimental realization of a Szilard engine with a single electron [*Proc. Natl. Acad. Sci.*]{} [**111**]{} 13786
Fuchs J, Goldt G, and Seifert U 2016 Stochastic thermodynamics of resetting [*Europhys. Lett.*]{} [**113**]{} 60009
Mora T 2015 Physical Limit to Concentration Sensing Amid Spurious Ligands [*Phys. Rev. Lett.*]{} [**115**]{} 038102
Roldán É, Lisica A, Sánchez-Taltavull D, Grill S W 2016 Stochastic resetting in backtrack recovery by RNA polymerases [*Phys. Rev. E*]{} [**93**]{} 062411
Lisica A, Engel C, Jahnel M, Roldán É, Galburt E A, Cramer P, and Grill S W 2016 Mechanisms of backtrack recovery by RNA polymerases I and II [*Proc. Natl. Acad. Sci. USA*]{} [**113**]{} 2946
Gillespie D T, Seitaridou E, and Gillespie C A 2014 The small-voxel tracking algorithm for simulating chemical reactions among diffusing molecules [*J. Chem. Phys.*]{} [**141**]{} 12649
Hanggi P, Talkner P, and Borkovec M 1990 Reaction-rate theory: fifty years after Kramers [*Rev. Mod. Phys.*]{} [**62**]{} 251
Neri I, Roldán É, and Jülicher F 2017 Statistics of Infima and Stopping Times of Entropy production and Applications to Active Molecular Processes [*Phys. Rev. X*]{} [**7**]{} 011019
Jülicher F, Ajdari A, and Prost J 1997 Modeling molecular motors [*Rev. Mod. Phys.*]{} [**69**]{} 1269
Zamft B, Bintu L, Ishibashi T, and Bustamante C J 2012 Nascent RNA structure modulates the transcriptional dynamics of RNA polymerases [*Proc. Natl. Acad. Sci.*]{} [**109**]{} 8948
Evans M R and Majumdar S N 2011 Diffusion with stochastic resetting [*Phys. Rev. Lett.*]{} [**106**]{} 160601
Evans M R and Majumdar S N 2011 Diffusion with optimal resetting [*J. Phys. A: Math. Theor.*]{} [**44**]{} 435001
Evans M R and Majumdar S N 2014 Diffusion with resetting in arbitrary spatial dimension [*J. Phys. A: Math. Theor.*]{} [**47**]{} 285001
Christou C and Schadschneider A 2015 Diffusion with resetting in bounded domains [*J. Phys. A: Math. Theor.*]{} [**48**]{} 285003
Eule S and Metzger J J 2016 Non-equilibrium steady states of stochastic processes with intermittent resetting [*New J. Phys.*]{} [**18**]{} 033006
Nagar A and Gupta S 2016 Diffusion with stochastic resetting at power-law times [*Phys. Rev. E*]{} [**93**]{} 060102(R)
Boyer D and Solis-Salas C 2014 Random walks with preferential relocations to places visited in the past and their application to biology [*Phys. Rev. Lett.*]{} [**112**]{} 240601
Majumdar S N, Sabhapandit S, and Schehr G 2015 Random walk with random resetting to the maximum position [*Phys. Rev. E*]{} [**92**]{} 052126
Montero M and Villarroel J 2013 Monotonic continuous-time random walks with drift and stochastic reset events Miquel Montero [*Phys. Rev. E*]{} [**87**]{} 012116
Méndez V and Campos D 2016 Characterization of stationary states in random walks with stochastic resetting [*Phys. Rev. E*]{} [**93**]{} 022106
Kuśmierz L, Majumdar S N, Sabhapandit S, and Schehr G 2014 First order transition for the optimal search time of Lévy flights with resetting [*Phys. Rev. Lett.*]{} [**113**]{} 220602
Campos D and Méndez V 2015 Phase transitions in optimal search times: How random walkers should combine resetting and flight scales [*Phys. Rev. E*]{} [**92**]{} 062115
Pal A, Kundu A, and Evans M R 2016 Diffusion under time-dependent resetting [*J. Phys. A: Math. Theor.*]{} [**49**]{} 225001
Boyer D, Evans M R, and Majumdar S N 2017 Long time scaling behaviour for diffusion with resetting and memory [*J. Stat. Mech.: Theory Exp.*]{} 023208
Harris R J and Touchette H 2017 Phase transitions in large deviations of reset processes [*J. Phys. A: Math. Theor.*]{} [**50**]{} 10LT01
Durang X, Henkel M, and Park H 2014 The statistical mechanics of the coagulation–diffusion process with a stochastic reset [*J. Phys. A: Math. Theor.*]{} [**47**]{} 045002
Gupta S, Majumdar S N, and Schehr G 2014 Fluctuating interfaces subject to stochastic resetting [*Phys. Rev. Lett.*]{} [**112**]{} 220601
Gupta S and Nagar A 2016 Resetting of fluctuating interfaces at power-law times [*J. Phys. A: Math. Theor.*]{} [**49**]{} 445001
Falcao R and Evans M R 2017 Interacting Brownian motion with resetting [*J. Stat. Mech.: Theory Exp.*]{} 023204
Kuśmierz L and Gudowska-Nowak E 2015 Optimal first-arrival times in Lévy flights with resetting [*Phys. Rev. E*]{} [**92**]{} 052127
Reuveni S 2016 Optimal stochastic restart renders fluctuations in first passage times universal [*Phys. Rev. Lett.*]{} [**116**]{} 170601
Bhat U, De Bacco C, and Redner S 2016 Stochastic search with Poisson and deterministic resetting [*J. Stat. Mech.: Theory Exp.*]{} 083401
Pal A and Reuveni S 2017 First passage under restart [*Phys. Rev. Lett.*]{} [**118**]{} 030603
Pal A 2015 Diffusion in a potential landscape with stochastic resetting [*Phys. Rev. E*]{} [**91**]{} 012113
Cox D 1962 [*Renewal Theory*]{} (London: Methuen)
Feynman R P and Hibbs A R 2010 [*Quantum Mechanics and Path Integrals*]{} (New York: McGraw-Hill Companies, Inc.)
Schulman L S 1981 [*Techniques and Applications of Path Integration*]{} (UK: John Wiley & Sons)
Kac M 1949 On distribution of certain Wiener functionals [*Trans. Am. Math. Soc.*]{} [**65**]{} 1
Kac M 1951 On some connections between probability theory and differential and integral equations, in [*Proc. Second Berkeley Symp. on Math. Statist. and Prob.*]{} (Berkeley: University of California Press)
Majumdar S N 2005 [*Brownian Functionals in Physics and Computer Science*]{} Curr. Sci. [**89**]{} 2076
Sekimoto K 1998 Langevin equation and thermodynamics [*Prog. Theor. Phys. Suppl.*]{} [**130**]{} 17
Sekimoto K 2000 [*Stochastic Energetics*]{} (Berlin: Springer)
Olver F W J, Daalhuis A B O, Lozier D W, Schneider B I, Boisvert R F, Clark C W, Miller B R, and Saunders B V eds. [*NIST Digital Library of Mathematical Functions, Release 1.0.14 of 2016-12-21*]{}
Roldán-Vargas S, Rovigatti L, and Sciortino F 2017 Connectivity, dynamics, and structure in a tetrahedral network liquid [*Soft Matter*]{} [**13**]{} 514
Berthier L, and Biroli G 2011 Theoretical perspective on the glass transition and amorphous materials [*Rev. Mod. Phys.*]{} [**83**]{} 587
Martínez I A, Roldán É, Parrondo J M R, and Petrov D 2013 Effective heating to several thousand kelvins of an optically trapped sphere in a liquid [*Phys. Rev. E*]{} [**87**]{} 032159
Bérut A, Petrosyan A, and Ciliberto S 2015 Energy flow between two hydrodynamically coupled particles kept at different effective temperatures [*EPL (Europhys. Lett.)*]{} [**107**]{} (6) 60004.
Dieterich E., Camunas-Soler J., Ribezzi-Crivellari M., Seifert U, and Ritort, F 2015 Single-molecule measurement of the effective temperature in non-equilibrium steady states [*Nature Phys.*]{} [**11**]{} (11), 97
Martínez I A, Roldán É, Dinis L and Rica R A 2017 Colloidal heat engines: a review [*Soft matter*]{} [**13**]{} (1) 22.
Cirac J I, and Zoller P 1995 Quantum computations with cold trapped ions [*Phys. Rev. Lett.*]{} [**74**]{} (20) 4091.
Bloch I 2005 Ultracold quantum gases in optical lattices [*Nature Phys.*]{} [**1**]{} (1) 23.
Gavrilov M, Chétrite R and Bechhoefer J 2017 Direct measurement of nonequilibrium system entropy is consistent with Gibbs-Shannon form arXiv preprint arXiv:1703.07601.
Ciliberto S 2017 Experiments in Stochastic Thermodynamics: Short History and Perspectives [Phys. Rev. X]{} [**7**]{} 021051.
Martínez I A, Roldán É, Dinis L, Petrov D, Parrondo J M R, and Rica R A 2016 Brownian carnot engine [*Nature Phys.*]{} [**12**]{} (1) 67.
Evans M R, Majumdar S N and Mallick K 2013 Optimal diffusive search: nonequilibrium resetting versus equilibrium dynamics [*J. Phys. A*]{} [**46**]{} (18) 185001.
Gavrilov M, Jun Y and Bechhoefer J, 2014 Real-time calibration of a feedback trap [*Rev. Sci. Instr.*]{} [**85**]{} (9) 095102.
Jun Y and Bechhoefer J 2012.Virtual potentials for feedback traps [*Phys. Rev. E*]{} [*86*]{} (6) 061106.
Martínez I A, Roldán É, Dinis L, Petrov D and Rica R A 2015 Adiabatic processes realized with a trapped Brownian particle [*Phys. Rev. Lett.*]{} [**114**]{} (12) 120601.
Martínez I A, Petrosyan A, Guéry-Odelin D, Trizac E and Ciliberto S 2016 Engineered swift equilibration of a Brownian particle [*Nature Phys.*]{} [**12**]{} (9) 843.
Granger L, Dinis L, Horowitz J M and Parrondo J M R 2016. Reversible feedback confinement [*EPL (Europhys. Lett.)*]{} [**115**]{} (5) 50007.
Deffner S, Jarzynski C and del Campo A 2014 Classical and quantum shortcuts to adiabaticity for scale-invariant driving [*Phys. Rev. X*]{} [**4**]{} (2) 021013.
Deng J, Wang Q H, Liu Z, Hänggi P and Gong J 2013. Boosting work characteristics and overall heat-engine performance via shortcuts to adiabaticity: Quantum and classical systems [*Phys. Rev. E*]{} [**88**]{} (6) 062122.
Tu Z C 2014 Stochastic heat engine with the consideration of inertial effects and shortcuts to adiabaticity [*Phys. Rev. E*]{} [**89**]{} (5) 052148.
Jarzynski C 2011 Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale [*Annu. Rev. Condens. Matt. Phys.*]{} [**2**]{} 329
Seifert U 2012 Stochastic thermodynamics, fluctuation theorems and molecular machines [*Rep. Prog. Phys.*]{} [**75**]{} 12
Celani A, Bo S, Eichhorn R and Aurell E 2012 Anomalous thermodynamics at the microscale [*Phys. Rev. Lett.*]{} [**109**]{} (26) 260603.
Bo S and Celani A 2017 Stochastic processes on multiple scales: averaging, decimation and beyond [Bull. Am. Phys. Soc.]{} [**62**]{} 1.
Guérin T, Prost J, and Joanny J-F 2011 Motion reversal of molecular motor assemblies due to weak noise [*Phys. Rev. Lett.*]{} [**106**]{} 068101
Basu A, Joanny J-F, Jülicher F, and Prost J 2008 Thermal and non-thermal fluctuations in active polar gels [*Eur. Phys. J. E*]{} [**27**]{} 149
Perez-Carrasco R, Guerrero P, Briscoe J, and Page KM 2016 Intrinsic noise profoundly alters the dynamics and steady state of morphogen-controlled bistable genetic switches [*PLoS Comput. Biol.*]{} [**12**]{} e1005154
Schultz D, Walczak A M, Onuchic J N, Wolynes P G 2008 Extinction and resurrection in gene networks [*Proc. Natl. Acad. Sci.*]{} [**105**]{} 19165
Roldán E and Gupta S (in preparation)
Trepagnier E H, Radenovic A, Sivak D, Geissler P, and Liphardt L 2007 Controlling DNA capture and propagation through artificial nanopores [*Nano Lett.*]{} [**7**]{} 2824
---
author:
- Mario Hamuy
title: The Standard Candle Method for Type II Supernovae and the Hubble Constant
---
The “standard candle method” for Type II plateau supernovae produces a Hubble diagram with a dispersion of 0.3 mag, which implies that this technique can produce distances with a precision of 15%. Using four nearby supernovae with Cepheid distances I find $H_0(V)$=75$\pm$7, and $H_0(I)$=65$\pm$12.
Introduction {#intro}
============
Type II supernovae are exploding stars characterized by strong hydrogen spectral lines and their proximity to star forming regions, presumably resulting from the gravitational collapse of the cores of massive stars ($M_{ZAMS}$$>$8 $M_\odot$). These objects display great variations in their spectra and lightcurves depending on the properties of their progenitors at the time of core collapse and the density of the medium in which they explode [@hamuy03a]. The plateau subclass (SNe IIP) constitutes a well-defined family which can be distinguished by 1) a characteristic “plateau” lightcurve [@barbon79], 2) Balmer lines exhibiting broad P-Cygni profiles, and 3) low radio emission [@weiler02]. These SNe are thought to have red supergiant progenitors that do not experience significant mass loss and are able to retain most of their H-rich envelopes before explosion.
Although SNe IIP display a wide range in luminosity, rendering their use as standard candles difficult, Hamuy & Pinto (2002) [@hamuy02] (HP02) used a sample of 17 SNe II to show that the relative luminosities of these objects can be standardized from a spectroscopic measurement of the SN ejecta velocity. Recently, I confirmed the luminosity-velocity relation [@hamuy03b] (H03) from a sample of 24 SNe IIP. This study showed that the “standard candle method” (SCM) yields a Hubble diagram with a dispersion of 0.3 mag, which implies that SNe IIP can be used to derive extragalactic distances with a precision of 15%. Since the work of H03, Cepheid distances to two SNe IIP have been published, bringing to four the total number of SNe IIP with Cepheid distances. In this paper I use these four objects to improve the calibration of the Hubble diagram, and solve for the value of the Hubble constant.
The Luminosity-Velocity Relation {#lv_sec}
================================
The SCM is based on the luminosity-velocity relation, which permits one to standardize the relative luminosities of SNe IIP. Figure \[L\_v.fig\] shows the latest version, based on 24 genuine SNe IIP. This plot reveals the well-known fact that SNe IIP encompass a wide range ($\sim$5 mag) in luminosities. It also shows that luminosity and expansion velocity are correlated: as the explosion energy increases, so do the kinetic and internal energies, and hence both the ejecta velocity and the plateau luminosity. Also plotted in this figure with open circles are the explosion models computed by [@litvinova83] and [@litvinova85] for stars with $M_{ZAMS}$ $\geq$ 8 $M_\odot$, which reveal a reasonable agreement with the observations.
![Envelope velocity versus absolute plateau $V$ magnitude for 24 SNe IIP, both measured in the middle of the plateau (day 50) (filled circles). The expansion velocities were obtained from the minimum of the Fe II $\lambda$5169 lines. The absolute magnitudes were derived from redshift-based distances and observed magnitudes corrected for dust extinction. Open circles correspond to explosion models computed by [@litvinova83] and [@litvinova85] for stars with $M_{ZAMS}$ $\geq$ 8 $M_\odot$. []{data-label="L_v.fig"}](L_v.ps){height="75mm"}
The Hubble Diagram
==================
In a uniform and isotropic Universe we expect locally a linear relation between distance and redshift. A perfect standard candle should describe a straight line in the magnitude-log($z$) Hubble diagram, so the observed scatter is a measure of how standard the candle is. Next I assess the performance of the SCM based on the Hubble diagram constructed with the magnitudes and redshifts given in Table \[SN.tab\] for 24 SNe.
| SN     | $v_{CMB}$ (km s$^{-1}$, $\pm$187) | $A_{GAL}(V)$ ($\pm$0.06) | $A_{host}(V)$ ($\pm$0.3) | $V_{50}$  | $I_{50}$  | $v_{50}$ (km s$^{-1}$) |
|--------|-----------------------------------|--------------------------|--------------------------|-----------|-----------|------------------------|
| 1968L  | 321   | 0.219 | 0.00 | 12.03(08) | ...       | 4020(300)  |
| 1969L  | 784   | 0.205 | 0.00 | 13.35(06) | ...       | 4841(300)  |
| 1970G  | 580   | 0.028 | 0.00 | 12.10(15) | ...       | 5041(300)  |
| 1973R  | 808   | 0.107 | 1.40 | 14.56(05) | ...       | 5092(300)  |
| 1986I  | 1333  | 0.129 | 0.20 | 14.55(20) | 14.05(09) | 3623(300)  |
| 1986L  | 1466  | 0.099 | 0.30 | 14.57(05) | ...       | 4150(300)  |
| 1988A  | 1332  | 0.136 | 0.00 | 15.00(05) | ...       | 4613(300)  |
| 1989L  | 1332  | 0.123 | 0.15 | 15.47(05) | 14.54(05) | 3529(300)  |
| 1990E  | 1426  | 0.082 | 1.45 | 15.90(20) | 14.56(20) | 5324(300)  |
| 1990K  | 1818  | 0.047 | 0.20 | 14.50(20) | 13.90(05) | 6142(2000) |
| 1991al | 4484  | 0.168 | 0.00 | 16.62(05) | 16.16(05) | 7330(2000) |
| 1991G  | 1152  | 0.065 | 0.00 | 15.53(07) | 15.05(09) | 3347(500)  |
| 1992H  | 2305  | 0.054 | 0.00 | 14.99(04) | ...       | 5463(300)  |
| 1992af | 5438  | 0.171 | 0.00 | 17.06(20) | 16.56(20) | 5322(2000) |
| 1992am | 14009 | 0.164 | 0.28 | 18.44(05) | 17.99(05) | 7868(300)  |
| 1992ba | 1192  | 0.193 | 0.00 | 15.43(05) | 14.76(05) | 3523(300)  |
| 1993A  | 8933  | 0.572 | 0.05 | 19.64(05) | 18.89(05) | 4290(300)  |
| 1993S  | 9649  | 0.054 | 0.70 | 18.96(05) | 18.25(05) | 4569(300)  |
| 1999br | 848   | 0.078 | 0.65 | 17.58(05) | 16.71(05) | 1545(300)  |
| 1999ca | 3105  | 0.361 | 0.68 | 16.65(05) | 15.77(05) | 5353(2000) |
| 1999cr | 6376  | 0.324 | 0.00 | 18.33(05) | 17.63(05) | 4389(300)  |
| 1999eg | 6494  | 0.388 | 0.00 | 18.65(05) | 17.94(05) | 4012(300)  |
| 1999em | 838   | 0.130 | 0.18 | 13.98(05) | 13.35(05) | 3757(300)  |
| 1999gi | 706   | 0.055 | 0.68 | 14.91(05) | 13.98(05) | 3617(300)  |
: Redshifts, Extinction, Magnitudes, and Ejecta Velocities of the 24 Type II Supernovae.
\[SN.tab\]
The CMB redshifts of the SN host galaxies were derived from the observed heliocentric redshifts. For the 16 SNe with $cz$$<$3000 km s$^{-1}$ I corrected the redshifts for the peculiar motion of the SN hosts using the parametric model for peculiar flows of [@tonry00] (see H03 for details). In all cases I assigned an uncertainty of $\pm$187 km s$^{-1}$, which corresponds to the cosmic thermal velocity yielded by the parametric model.
A convenient choice for SNe IIP is to use magnitudes in the middle of the plateau, so I interpolated the observed $V$ and $I$ fluxes to the time corresponding to 50 days after explosion. In order to use SNe IIP as standardized candles it proves necessary to correct the observed fluxes for dust absorption. The determination of Galactic extinction is under good control thanks to the IR dust maps of [@schlegel98], which permit one to estimate $A_{GAL}(V)$ to $\pm$0.06 mag. The determination of absorption in the host galaxy, on the other hand, is difficult. In H03 I described a method which assumes that SNe IIP should all reach the same color toward the end of the plateau phase. The underlying assumption is that the opacity in SNe IIP is dominated by e$^-$ scattering, so they should all reach the temperature of hydrogen recombination as they evolve [@eastman96]. The method is not fully satisfactory since some discrepancies were obtained from $B-V$ and $V-I$ (probably caused by metallicity variations from SN to SN). An uncertainty of $\pm$0.3 mag can be assigned to this technique based on the reddening difference yielded by both colors.
The ejecta velocities come from the minimum of the Fe II $\lambda$5169 lines interpolated to day 50, which is good to $\pm$300 km s$^{-1}$ [@hamuy01]. In the four cases where I had to extrapolate velocities I adopted an uncertainty of $\pm$2000 km s$^{-1}$.
![(bottom) Raw Hubble diagram from SNe II plateau $V$ magnitudes. (top) Hubble diagram from $V$ magnitudes corrected for envelope expansion velocities. []{data-label="hd3.fig"}](hd3.ps){height="75mm" width="75mm"}
The bottom panel of Fig. \[hd3.fig\] shows the Hubble diagram in the $V$ band, after correcting the apparent magnitudes for the reddening values, while the top panel shows the same magnitudes after correction for expansion velocities. A least-squares fit to the data in the top panel yields the following solution,
$$V_{50} - A_{V} + 6.564(\pm0.88)~log (v_{50}/5000) = 5~log(cz) - 1.478(\pm0.11).
\label{veqn_1}$$
The scatter drops from 0.91 mag to 0.38 mag, thus demonstrating that the correction for ejecta velocities standardizes the luminosities of SNe IIP significantly. It is interesting to note that part of the spread comes from the nearby SNe which are potentially more affected by peculiar motions of their host galaxies. When the sample is restricted to the eight objects with $cz$$>$3,000 km s$^{-1}$, the scatter drops to only 0.33 mag. The corresponding fit for the restricted sample is,
$$V_{50} - A_{V} + 6.249(\pm1.35)~log (v_{50}/5000) = 5~log(cz) - 1.464(\pm0.15).
\label{veqn_2}$$
![(bottom) Raw Hubble diagram from SNe II plateau $I$ magnitudes. (top) Hubble diagram from $I$ magnitudes corrected for envelope expansion velocities. []{data-label="hd4.fig"}](hd4.ps){height="75mm" width="75mm"}
Figure \[hd4.fig\] shows the same analysis but in the $I$ band. In this case the scatter in the raw Hubble diagram is 0.83 mag, which drops to 0.32 mag after correction for ejecta velocities. This is even smaller than the 0.38 mag spread in the $V$ band, possibly due to the fact that the effects of dust extinction are smaller at these wavelengths. The least-squares fit yields the following solution,
$$I_{50} - A_{I} + 5.869(\pm0.68)~log (v_{50}/5000) = 5~log(cz) - 1.926(\pm0.09).
\label{ieqn_1}$$
When the eight most distant objects are employed the spread is 0.29 mag, similar to that obtained from the $V$ magnitudes and the same sample, and the solution is,
$$I_{50} - A_{I} + 5.445(\pm0.91)~log (v_{50}/5000) = 5~log(cz) - 1.923(\pm0.11).
\label{ieqn_2}$$
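As an aside (a numerical note of ours, not part of the original analysis), the translation of these magnitude dispersions into relative distance errors is a one-line computation:

```python
# A scatter of sigma_m magnitudes in the distance modulus corresponds to a
# multiplicative distance error of 10**(0.2*sigma_m).
for sigma_m in (0.29, 0.32, 0.33, 0.38):
    frac = 10**(0.2 * sigma_m) - 1.0
    print("scatter = %.2f mag  ->  distance precision ~ %.0f%%" % (sigma_m, 100 * frac))
# A dispersion of about 0.3 mag corresponds to roughly 15% in distance, as quoted in the text.
```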
The Value of the Hubble Constant
================================
The SCM can be used to solve for the Hubble constant, provided the distance to a nearby SN is known. If the distance $D$ of the calibrator is known, and the distant sample is adopted, the Hubble constant is given by
$$H_0(V) = \frac {10^{\,[V_{50} - A_V + 6.249~log (v_{50}/5000) + 1.464]/5}}{D},
\label{h0_v}$$
$$H_0(I) = \frac {10^{\,[I_{50} - A_I + 5.445~log (v_{50}/5000) + 1.923]/5}}{D}.
\label{h0_i}$$
Among the objects of our sample SN 1968L, SN 1970G, SN 1973R, and SN 1999em have precise Cepheid distances. The distances and the corresponding $H_0$ values are summarized in Table \[H0.tab\]. SN 1999em is the only object that provides independent values from the $V$ and $I$ bands, and the results agree remarkably well. Within the uncertainties the values derived from the $V$-band magnitudes are in good agreement for all four objects, and the average proves to be $H_0(V)$=75$\pm$7 km s$^{-1}$ Mpc$^{-1}$.
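As a numerical cross-check (our own, not part of the original text), the SN 1999em entry of Table \[H0.tab\] follows from the $H_0(V)$ expression above together with the values listed in Table \[SN.tab\] and the Cepheid distance modulus of 30.34 mag; a minimal Python sketch:

```python
import numpy as np

# SN 1999em values from Table 1 and its Cepheid distance modulus from Table 2.
V50 = 13.98            # plateau V magnitude at day 50
A_V = 0.130 + 0.18     # Galactic + host extinction in V (mag)
v50 = 3757.0           # Fe II 5169 velocity at day 50 (km/s)
mu  = 30.34            # Cepheid distance modulus (mag)

D = 10**((mu - 25.0) / 5.0)    # distance in Mpc
# Hubble-flow velocity implied by the calibrated V-band relation
cz = 10**((V50 - A_V + 6.249 * np.log10(v50 / 5000.0) + 1.464) / 5.0)
H0 = cz / D                    # km/s/Mpc
print("D = %.1f Mpc, implied cz = %.0f km/s, H0(V) = %.0f km/s/Mpc" % (D, cz, H0))
# -> H0(V) of about 64 km/s/Mpc, matching the SN 1999em entry of Table 2.
```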
Currently, the most precise extragalactic distance indicators are the peak luminosities of SNe Ia. While the HST Key Project yielded a value of $H_0$=71$\pm$2 [@freedman01], Sandage and collaborators derived $H_0$=59$\pm$6 [@parodi00]. The difference is mostly due to systematic uncertainties in the Cepheid distances of the calibrating SNe. Since SCM is mainly calibrated with Cepheid distances of the HST Key Project, I conclude that both SNe Ia and SNe IIP give consistent results, which lends further credibility to the SCM.
| SN      | Distance Modulus | Reference     | $H_0(V)$ (km s$^{-1}$ Mpc$^{-1}$) | $H_0(I)$ (km s$^{-1}$ Mpc$^{-1}$) |
|---------|------------------|---------------|-----------------------------------|-----------------------------------|
| 1968L   | 28.25(15)        | [@thim03]     | 77$\pm$15                         | ...                               |
| 1970G   | 29.13(11)        | [@freedman01] | 77$\pm$13                         | ...                               |
| 1973R   | 29.86(08)        | [@freedman01] | 87$\pm$15                         | ...                               |
| 1999em  | 30.34(19)        | [@leonard03]  | 64$\pm$13                         | 65$\pm$12                         |
| Average |                  |               | 75$\pm$7                          | 65$\pm$12                         |
: The Hubble Constant.
\[H0.tab\]
HP02 found a value of $H_0$=55$\pm$12 based on one calibrator (SN 1987A), which proves significantly lower than the current 65-75 range. The main reason for this difference is that SN 1987A is not a plateau event and should not have been included in the HP02 sample since the physics of its lightcurve is different than that of SNe IIP.
Conclusions and Discussion
==========================
This sample of 24 SNe IIP shows that the luminosity-velocity relation can be used to standardize the luminosities of these objects. The resulting Hubble diagram has a dispersion of 0.3 mag, which implies that SNe IIP can produce distances with a precision of 15%. Using four nearby SNe with Cepheid distances I find $H_0(V)$=75$\pm$7 and $H_0(I)$=65$\pm$12. These values compare with $H_0$=71$\pm$2 derived from SNe Ia [@freedman01], which lends further credibility to the SCM.
This study confirms that SNe IIP offer great potential as distance indicators. The recently launched Carnegie Supernova Program at Las Campanas Observatory has already targeted $\sim$20 such SNe and in the next years it will produce an unprecedented database of spectroscopy and photometry for $\sim$100 nearby SNe, which will be ideally suited for cosmological studies.
Although the precision of the SCM is only half as good as that produced by SNe Ia, with the 8-m class telescopes currently in operation it should be possible to get spectroscopy of SNe IIP down to $V$$\sim$23 and start populating the Hubble diagram up to $z$$\sim$0.3. A handful of SNe IIP will allow us to get an independent check on the distances to SNe Ia.
Support for this work was provided by NASA through Hubble Fellowship grant HST-HF-01139.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.
---
abstract: 'We propose the cascade attribute learning network (CALNet), which can learn attributes in a control task separately and assemble them together. Our contribution is twofold: first we propose attribute learning in reinforcement learning (RL). Attributes used to be modeled using constraint functions or terms in the objective function, making it hard to transfer. Attribute learning, on the other hand, models these task properties as modules in the policy network. We also propose using novel cascading compensative networks in the CALNet to learn and assemble attributes. Using the CALNet, one can zero shoot an unseen task by separately learning all its attributes, and assembling the attribute modules. We have validated the capacity of our model on a wide variety of control problems with attributes in time, position, velocity and acceleration phases.'
author:
- 'Zhuo Xu\*$^{1}$, Haonan Chang\*$^{2}$, and Masayoshi Tomizuka$^{1}, \emph{Fellow, IEEE}$[^1][^2][^3]'
title: '**Cascade Attribute Learning Network** '
---
INTRODUCTION
============
Reinforcement learning (RL) [@sutton_book] has been successful in solving many control problems rooted in fixed Markov Decision Process (MDP) environments [@nn_policy][@gae][@visuomotor]. However, the extremely close interaction between the RL algorithm and the MDP makes it difficult to reuse the knowledge learned from one task in new tasks. This difficulty further impedes RL policies from being adept at solving high dimensional, complicated tasks. For example, it is easy to train an autonomous vehicle to travel from an origin position to a target position. However, if one takes into consideration a number of vehicle and pedestrian obstacles, the difficulty of the problem can grow overwhelming for shallow policy models. In order to avoid all the obstacles, one would have to train a deep policy network with very sparse reward input. Therefore, the training process usually requires an unbearably large amount of computation. What makes it worse is that such policies are hardly reusable in other scenarios, even if the new task is very similar to the previous one. Suppose a speed limit requirement is added to the autonomous driving task. Although the input of the policy network is already tediously high dimensional, there is no entry point for the speed limit information, so the pretrained policy cannot accomplish the new task, no matter how the policy network is tuned. Therefore, RL frameworks with fixed policy models can hardly address such high dimensional and complicated tasks in environments of great variance.
We propose to address this problem from a new perspective: modularizing complicated and high dimensional problems using a series of attributes. The attributes refer especially to global characteristics or requirements that take effect throughout the task. An example of attribute learning is shown in Fig. 1. Concretely, to solve the complicated driving problem, one first decomposes the requirements of the task into a target reaching attribute, an obstacle avoidance attribute and a speed limit attribute, then trains the modular network for each of the attributes, and finally assembles the attribute networks together to produce the overall policy. Modularizing a task using a series of attributes has three main intriguing advantages:
![Autonomous driving as an example of modularizing a complicated task into multiple attributes using the cascade attribute learning network (CALNet).[]{data-label="figurelabel"}](figure1.jpg)
1. Decomposing a high dimensional complicated task into low dimensional attributes makes the training process much easier and faster.
2. Trained attribute modules can be reused in new tasks, making it possible to build up versatile policies that can adjust to changes in tasks by assembling attribute modules.
3. In attribute learning, specific state information is provided only to its corresponding attribute modules. This decoupling formulation makes it possible to dynamically manage state space in high dimensional environments.
In order to modularize the attributes, we propose a simple but efficient RL framework called the cascade attribute learning network (CALNet). The brief idea of the CALNet is shown in Fig. 1. In the CALNet, the attribute modules are connected in cascade series. Each attribute module receives both the output of its preceding module and its corresponding states, and returns the action that satisfies all the attributes ahead of it. The details of the CALNet architecture and the training methods are described in Section III. Using the CALNet, one can zero shoot an unseen task by separately learning all the attributes in the task and assembling the attribute modules in series. The remainder of this paper is organized as follows: the related works and the background of RL are introduced in Section II. In Section III, the architecture of the CALNet and the implementation details are described. In Section IV, we show simulation results to validate the proposed model using a variety of robots and attributes and give discussions on the experiments. The conclusions are given in Section V.
Related Work and Background
===========================
Related Work
------------
There have been many attempts to create versatile intelligence that can not only solve complicated tasks, but also adjust to changes in those tasks. Transfer learning [@reinforcement_transfer][@transfer] is a key tool that makes use of previously learned knowledge for better or faster learning of new knowledge. Rusu et al [@progressive1][@progressive2] designed a multi-column (network) framework, referred to as the progressive network, in which newly added columns are laterally connected to previously learned columns for knowledge transfer. Daftry et al [@transferable_policy] and Braylan et al [@reuse_module] also designed interesting network architectures for knowledge transfer in MAV control and video game playing. For combinations of transfer learning and imitation learning, Ammar et al [@unsupervised_transfer] use unsupervised learning to map states for transfer, assuming the existence of a distance function between different state spaces. Gupta et al [@invariant_feature] learn an invariant feature space between states of different dimensions and use demonstrations to increase the density of the rewards. Our work differs from those works mainly in that we put emphasis on the modularization of attributes, which are concrete and meaningful modules that can be conveniently assembled into various combinations.
There are other methods seeking to learn a globally general policy: Meta learning [@meta] attempts to build self-adaptive learners that improve their bias through accumulating experience. One shot imitation learning [@one_shot], for example, is a meta learning framework which is trained using a number of different tasks so that new skills can be learned from a single expert demonstration. Curriculum learning (CL) [@curriculum] trains a model on a sequence of cognate tasks that gradually get more and more challenging, so as to solve hard tasks that could not be learned from scratch. Florensa et al [@reverse_curriculum] applied reverse curriculum generation (RCL) in RL. In the early stage of the training process, RCL initializes the agent state to be very close to the target state, making the policy very easy to train. They then gradually increase the random level of the initial state as the RL model performs better and better. Our policy training strategy is inspired by the idea of CL and achieves satisfying robustness for the policies. There is also research on training modular neural networks: [@modular_robot_task] investigates the combinations of multiple robots and tasks, while [@modular_subtask] investigates the combinations of multiple sequential subtasks. Our work, different from those works, looks into modularization in a different dimension: we investigate the modularization of attributes, the characteristics or requirements that take effect throughout the whole task.
Deep Reinforcement Learning Background
--------------------------------------
The objective of RL is to maximize the expected sum of the discounted rewards $R_t = \mathbb{E} \sum_{k=0}^\infty \gamma^{k} \cdot r_{t+k}$ in an agent-environment-interacting MDP. The agent observes state $s_t$ at time $t$, and selects an action $a_t$ according to its policy $\pi_\theta$ parameterized by $\theta$. The environment receives $s_t$ and $a_t$, and returns the next state $s_{t+1}$ and the reward in this step, $r_t$. The $\gamma$ in the objective function is a discounting coefficient. The main approaches for reinforcement learning include deep Q-learning (DQN) [@dqn], asynchronous advantage actor critic (A3C) [@a3c], trust region policy optimization (TRPO) [@trpo], and proximal policy optimization (PPO) [@ppo]. Approaches used in continuous control are mostly policy gradient methods, i.e. A3C, TRPO, and PPO. The vanilla policy gradient method updates the parameters $\theta$ by ascending the log probability of action $a_t$ with higher advantage $\hat{A_t}$. The surrogate objective function is $$L(\theta) = \hat{\mathbb{E}}_t \left[ \log \pi_{\theta}(a_t \mid s_t) \cdot \hat{A_t} \right] \eqno{(1)}$$ Although A3C uses the unbiased estimator of the policy gradient, large updates can prevent the policy from converging. TRPO introduces a constraint to restrict the updated policy from being too far in Kullback-Leibler (KL) distance [@kl] from the old policy. Usually, TRPO solves an unconstrained optimization with a penalty punishing the KL distance between $\pi_{\theta}$ and $\pi_{\theta_{old}}$, specifically, $$L(\theta) = \hat{\mathbb{E}}_t \left[ \frac{ \pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)} \cdot \hat{A_t} -\beta \cdot \textrm{KL} \left( \pi_{\theta} , \pi_{\theta_{old}} \right) \right] \eqno{(2)}$$ However, the choice of the penalty coefficient $\beta$ has been a problem [@ppo]. Therefore, PPO modifies TRPO by using a simple clip function parameterized by $\epsilon$ to limit the policy update. Specifically, $$L(\theta) = \hat{\mathbb{E}}_t \left[ \min \left( \frac{ \pi_{\theta}}{\pi_{\theta_{old}}}, \textrm{clip} \left(\frac{ \pi_{\theta}}{\pi_{\theta_{old}}}, 1-\epsilon, 1+\epsilon \right) \right)\hat{A_t} \right] \eqno{(3)}$$ This simple objective turns out to perform well while enjoying better sample complexity, so we use PPO as the default RL algorithm in our policy training. We are also inspired by [@dppo] to build a distributed framework with multiple threads to speed up the training process.
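For concreteness, a minimal sketch of the clipped surrogate objective of Eq. (3) is given below (our own illustration; the array names are hypothetical, and a full PPO implementation would add value-function and entropy terms):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of Eq. (3), averaged over a batch of timesteps."""
    ratio = np.exp(logp_new - logp_old)              # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))   # quantity to be maximized

# toy usage with made-up numbers
logp_old = np.array([-1.2, -0.7, -2.0])
logp_new = np.array([-1.0, -0.9, -1.5])
adv = np.array([0.5, -0.3, 1.2])
print(ppo_clip_objective(logp_new, logp_old, adv))
```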
The advantage function $\hat{A_t}$ describes how better a policy is compared to a baseline. Traditionally the difference between the estimated Q value and value functions is applied as the advantage [@a3c]. Recently Schulman et al [@gae] proposed using generalized advantage estimation (GAE) to leverage the bias and variance of the advantage estimator.
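A compact sketch of the GAE recursion used to form $\hat{A_t}$ is given below (again our own illustration, following the standard estimator with discount $\gamma$ and parameter $\lambda$; it assumes `values` also contains the value of the final state):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one episode.
    `rewards` holds r_0..r_{T-1}; `values` holds V(s_0)..V(s_T)."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        running = delta + gamma * lam * running
        adv[t] = running
    return adv
```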
The CALNet
==========
Problem Formulation
-------------------
We consider an agent performing a complicated task with multiple attributes. Since the agent is fixed, its action space is a fixed space, which we call $A$. We decompose the task into a series of attributes, denoted $\{ 0,1,2,\ldots \}$. We refer to the $0^{th}$ attribute as the base attribute, which usually corresponds to the most fundamental goal of the task, such as the target reaching attribute in the autonomous driving task. We define the state space of each attribute to be the minimum state space that fully characterizes the attribute, denoted $S=\{S_0,S_1,S_2,S_3\ldots \}$. For example, let the base attribute be the target reaching attribute, and the $1^{st}$ attribute be the obstacle avoidance attribute. Then $S_0$ consists of the states of the agent and the target, while $S_1$ consists of the states of the agent and the obstacle, and yet does not include the states of the target.
Each attribute has an unique reward function as well, denoted $R=\{R_0,R_1,R_2,R_3\ldots\}$. Each $R_i$ is a function mapping a state action pair to a real number reward, i.e. $R_i: S_i \times A \rightarrow \mathbb{R}$. Similarly, there is a specific transition probability distribution for each attribute, denoted: $P=\{P_0, P_1, P_2, P_3 \ldots \}$. And for each attribute, its transition function takes in the state action pairs and outputs the states for the next timestep, that is, $P_i : S_i \times A \rightarrow S_i$.
A key characteristic in our problem formulation is that the state spaces for different attributes can be different. This formulation enables the attribute learning network to dynamically manage the state space of the task. Specifically, the states of the $i^{th}$ attribute, $s_i$, is fed to the module of the $i^{th}$ attribute in the network.
Network architecture
--------------------
The architecture of the CALNet is shown in Fig. 2 and Fig. 3. Both the training phase (Fig. 2) and the testing phase (Fig. 3) of the CALNet are implemented in a cascade order. In the training phase, first an RL policy $\pi_0$ is trained to accomplish the goal of the base attribute. The base attribute network takes in $s_0\in S_0$ and outputs $a_0 \in A$; the reward and transition functions of the MDP are given by $R_0$ and $P_0$. This process is a standard RL training process.
Then the $1^{st}$ attribute module is trained in series with the base attribute module. The $1^{st}$ attribute module consists of a compensate network and a weighted sum operator. The compensate network is fed with the state $s_1 \in S_1$ and the action $a_0$ chosen by $\pi_0$. The output of the compensate network is the compensate action $a^{c}_1$, which is used to compensate $a_0$ to produce the overall action $a_1$. The reward for the MDP is given by $R_0+R_1$ so that the requirements for both attributes are satisfied. The new transition function may not be directly calculable from $P_0$ and $P_1$, but it can be easily obtained from the environment. Since the parameters of the base attribute network are pretrained, the cascading attribute network can extract the features of the attribute by exploring the new MDP under the guidance of the base policy.
It is noted that in the weighted sum operator, the weight of the compensative action $a^{c}_1$ is initialized to be small and increased over the training time. That is, at the early stage of the training process, mainly $a_0$ takes effect, while $a^{c}_1$ gradually gets to influence the overall $a_1$ as the training goes on. For the other attributes, the training method is the same as that of the $1^{st}$ attribute.
In the testing phase, the designated attribute modules are connected in series following the base attribute, as shown in Fig. 3. In the CALNet, the $i^{th}$ attribute module takes in $s_i$ and $a_{i-1}$, and outputs $a_i$ that satisfies all the attributes before the $i^{th}$ module. The final output $a_j$ is the overall output that satisfies all the attributes in the attribute array.
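The cascade described above can be summarized by the following sketch of the testing-phase forward pass (our paraphrase of Figs. 2 and 3; the class and variable names are made up, and we read the weighted sum as $a_i = a_{i-1} + w \cdot a^{c}_i$, with $w$ the weight that is annealed up from a small value during training):

```python
import numpy as np

class AttributeModule:
    """One cascading attribute module: a compensate network plus a weighted sum."""
    def __init__(self, compensate_net, weight):
        self.net = compensate_net   # maps (s_i, a_prev) -> compensative action a_i^c
        self.w = weight             # small at the start of training, increased later

    def __call__(self, s_i, a_prev):
        a_comp = self.net(np.concatenate([s_i, a_prev]))
        return a_prev + self.w * a_comp          # weighted sum producing a_i

def cascade_policy(base_policy, modules, states):
    """states[0] feeds the base module, states[i] feeds the i-th attribute module."""
    action = base_policy(states[0])
    for module, s_i in zip(modules, states[1:]):
        action = module(s_i, action)
    return action                                # satisfies all attributes in the cascade
```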
![The training procedure of an attribute module in the CALNet: first train the base attribute module, then train the added module based on the pretrained base module](figure2.jpg "fig:") \[figurelabel\]
![Usage of the CALNet in assembling attributes onto the base attribute, the output action of the last attribute module satisfies the requirements of all the attributes](figure3.jpg "fig:") \[figurelabel\]
Training Method
---------------
To guarantee the capacity of the CALNet, the policies need to meet two requirements:
1.  The attribute policies should be robust over the state space, rather than being effective only at states that are close to the optimal trajectory. This requirement guarantees that the attribute policies remain instructive when more compensate actions are added on top of them.
2. The compensate action for a certain attribute should be close to zero if the agent is in a state where this attribute is not active. This property increases the capability of multi-attribute structures.
For the sake of the robustness of the attribute policies, we apply CL to learn a general policy that can accomplish the task starting from any initial state. The CL algorithm first trains a policy with fixed initial state. As the training goes on, the random level of the initial state is smoothly increased, until the initial state is randomly sampled from the whole state space. The random level is increased only if the policy is capable enough for the current random level.
For example, consider the task of moving a ball to reach a target point in a 2 dimensional space. In each episode, the initial position of the ball is randomly sampled in a circular area. The random level in this case is the radius of the circle. In the early training stage, the radius is set to be very small, and the initial position is almost fixed. As the policy gains more and more generality, the reward in each episode increases. Once the reward reaches a threshold, the random level is increased, and the initial position of the ball is sampled from a larger area. The terminal random level corresponds to the circumstance where the circular sampling area fully covers the working zone. If the policy performs well under the terminal random level, the policy is considered successfully trained. The pseudocode for this process is shown in Algorithm 1.
    RandomLevel ← Initial Random Level
    λ ← 1 + Random Level Increase Rate
    N ← Batch Number
    LongTermR ← Queue()
    while RandomLevel < Terminal Random Level:
        Update the policy using PPO
        Rewards ← RunEpisode(N)
        LongTermR.append(Rewards)
        if Average(LongTermR) > Reward Threshold:
            RandomLevel ← RandomLevel × λ
            Clear(LongTermR)
To guarantee the second requirement, an extra penalty term that punishes the magnitude of the compensative action, $l^{c}_i \propto -\|a^{c}_i\|^2$, is added to the reward function so as to reduce $\|a^{c}_i\|$ when attribute $i$ is not active.
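A rough Python rendering of this training procedure is given below (our reconstruction of Algorithm 1 combined with the compensative-action penalty; the thresholds, rates, and helper names such as `run_batch` and `ppo_update` are placeholders):

```python
from collections import deque

def train_attribute_module(env, run_batch, ppo_update,
                           init_level=0.1, terminal_level=1.0, increase_rate=0.05,
                           reward_threshold=100.0, penalty_coeff=0.01, window=10):
    """Curriculum loop sketch: widen the initial-state distribution only when the
    recent shaped reward clears a threshold (cf. Algorithm 1 and the penalty above)."""
    random_level = init_level
    recent = deque(maxlen=window)
    while random_level < terminal_level:
        batch = run_batch(env, random_level)       # episodes at the current random level
        # reward shaping: penalize the squared norm of the compensative action
        shaped = batch.rewards - penalty_coeff * batch.comp_action_sq_norms
        ppo_update(batch, shaped)                  # one PPO update on the shaped rewards
        recent.append(shaped.mean())
        if len(recent) == window and sum(recent) / window > reward_threshold:
            random_level *= (1.0 + increase_rate)  # raise the curriculum difficulty
            recent.clear()
    return random_level
```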
Experiments
===========
Our experiments aim to validate the capability and advantage of the CALNet. In this Section, first we introduce the experiment setup, we then show the capability of the CALNet to modularize and assemble attributes in multi-attribute tasks. In the last part of this section, we compare the CALNet with the baseline RL algorithm, and show that the CALNet can adjust to complicated tasks more easily.
Setup
-----
The experiments are powered by the MuJoCo physics simulator [@mujoco]. The policies in our experiments are Gaussian distributions whose parameters are produced by fully connected neural networks built using TensorFlow. The baseline RL algorithm we use is the PPO [@ppo] method with GAE [@gae] as the advantage estimator.
We design three robots as agents in our experiments. They are a robot arm in 2 dimensional space, a moving ball in 2 dimensional space, and a robot arm in 3 dimensional space. For all three robots we have enabled both position control and force control modes.
For each agent we have designed 5 attributes:
![The top images show the robot agents performing the base task of target reaching. The bottom images show the four attributes to be modularize.](setup.jpg "fig:") \[figurelabel\]
### reaching (base attribute)
The reaching task is a natural choice for the base attribute. For the ball agent, the goal is to collide with the target object. For the robot arm agents, the goal is to touch the target object.
### obstacle (position phase)
The obstacle attribute adds a rigid obstacle ball to the space. Negative rewards are given if the robot collides with the obstacle. Therefore, in baseline RL training, the agent can be dissuaded from exploring in the right direction.
### automated door (time phase)
The automated door attribute is purely time controlled. The door blocking the target is opened only at certain times. This attribute is harder than an obstacle, since it punishes the agent even if it moves in the right direction at the wrong time.
![image](one_attribute.jpg) \[figurelabel\]
![image](two_attributes.jpg) \[figurelabel\]
### speed limit (velocity phase)
The speed limit attribute adds a time-variant speed limit on the agent. The agent gets punished if it surpasses the speed limit. But if the robot’s speed is too slow, it may not be able to finish the task in one episode.
### force disturbance (acceleration phase)
The force disturbance attribute adds a time-variant force disturbance to the agent (or each joint for the arm).
CALNet Performance
------------------
The first set of experiments tests the capability of the CALNet to learn attributes and assemble learned attributes. We first train the base attribute module using the baseline RL algorithm with CL, and then use the cascading modules to modularize the different attributes based on the pretrained base module. The results show that all the attributes can be successfully added to the base attribute using the CALNet. Fig. 5 shows some examples of the agent performing different attribute combinations.
We also test the transferability of the cascading modules and the capability of the CALNet to model tasks with multiple attributes. Concretely, we first train two attribute modules in parallel based on the pretrained base module. Then we connect the two attribute modules in series following the base attribute module. The CALNet structure is the same as the one shown in Fig. 3. The policy derived using the assembled network can zero shoot most of the tasks satisfying the requirements of both attributes.
Fig. 6 shows two examples of the CALNet zero shooting a task where the moving ball reaches the target while avoiding two obstacles simultaneously. It is emphasized that this task has never been trained before. We achieve zero shooting simply by connecting two pretrained obstacle attribute modules in series following the base module. Undeniably, as the attributes grow more complicated and the number of attributes gets larger, a certain amount of finetuning would be required. However, the advantage of modularizing and assembling attributes is remarkable, since the finetuning process is much easier and faster compared to training a new policy from scratch (as discussed in Section IV-C).
Comparison with Baseline RL Methods
-----------------------------------
We compare the capability of the CALNet and the baseline RL by comparing their training processes on the same task. We consider the MDP in which the ball agent gets to the target while avoiding an obstacle. The CALNet is trained with CL. For the baseline RL trained with CL, in many cases it is too hard for the agent to reach the target. Therefore, we have also implemented RCL, which lets the initial state be very close to the target in the early stage of the training phase. Using RCL, the RL can gain positive reward very fast. The challenge is whether the RL algorithm can maintain a high reward level as the random level increases.
For the CALNet, the base attribute has been trained, and we train the obstacle avoidance attribute module based on the base module. For the baseline RL, the task is trained from scratch. The focus of the comparison is placed on the resulting reward and random level in CL versus the number of training iterations.
The reward and the random level curves are shown in Fig. 7, with the horizontal axis representing the training iterations. It is shown that the baseline RL using CL barely learns anything. This is because the reward is too sparse and the agent consistently receives punishment from the obstacle, and falls into a local minimum. For the baseline RL using RCL, in the early stage, the average discounted reward in an episode is high as expected. But as the random level rises, the performance of the baseline RL with RCL drops. Therefore, the random level increases slowly as the training goes on.
The CALNet, on the other hand, is able to overcome the misleading punishments from the obstacle, thanks to the guidance of the instructive base attribute policy. As a result, the random level of the CALNet rises rapidly, and the CALNet achieves terminal random level more than 10 times faster than the baseline. These results indicate that the attribute module learns substantial knowledge of the attribute as the CL based training goes on.
![Comparison between the performance of the CALNet and the baseline RL (PPO) in the training phase.](compare.jpg "fig:") \[comparision\]
Conclusions
===========
In this paper, we propose attribute learning and present the advantages of using this novel method to modularize complicated tasks. The RL framework we propose, the CALNet, uses cascading attribute modules to model the characteristics of the attributes. The attribute modules are trained with the guidance of the pretrained base attribute module. We validated the effectiveness of the CALNet at modularizing and assembling attributes, and showed the advantages of the CALNet in solving complicated tasks compared to the baseline RL. Our future work includes transferring attributes between different base attributes and even different agents. Another potential direction is to investigate attribute learning models that can assemble a large number of attributes. We believe that attribute learning can help humans build versatile controllers more easily.
Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. Vol. 1. No. 1. Cambridge: MIT Press, 1998.
Levine, Sergey, and Pieter Abbeel. “Learning neural network policies with guided policy search under unknown dynamics.” Advances in Neural Information Processing Systems. 2014.
Schulman, John, et al. “High-dimensional continuous control using generalized advantage estimation.” arXiv preprint arXiv:1506.02438 (2015).
Levine, Sergey, et al. “End-to-end training of deep visuomotor policies.” Journal of Machine Learning Research 17.39 (2016): 1-40.
Taylor, Matthew E., and Peter Stone. “Transfer learning for reinforcement learning domains: A survey.” Journal of Machine Learning Research 10.Jul (2009): 1633-1685.
Pan, Sinno Jialin, and Qiang Yang. “A survey on transfer learning.” IEEE Transactions on Knowledge and Data Engineering 22.10 (2010): 1345-1359.
Rusu, Andrei A., et al. “Progressive neural networks.” arXiv preprint arXiv:1606.04671 (2016).
Rusu, Andrei A., et al. “Sim-to-real robot learning from pixels with progressive nets.” arXiv preprint arXiv:1610.04286 (2016).
Daftry, Shreyansh, J. Andrew Bagnell, and Martial Hebert. “Learning transferable policies for monocular reactive MAV control.” International Symposium on Experimental Robotics. Springer, Cham, 2016.
Braylan, Alexander, Mark Hollenbeck, Elliot Meyerson, and Risto Miikkulainen. “Reuse of neural modules for general video game playing.” (2016).
Ammar, Haitham Bou, et al. “Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment.” Proc. of AAAI. 2015.
Gupta, Abhishek, et al. “Learning invariant feature spaces to transfer skills with reinforcement learning.” arXiv preprint arXiv:1703.02949 (2017).
Vilalta, Ricardo, and Youssef Drissi. “A perspective view and survey of meta-learning.” Artificial Intelligence Review 18.2 (2002): 77-95.
Duan, Yan, et al. “One-Shot Imitation Learning.” arXiv preprint arXiv:1703.07326 (2017).
Bengio, Yoshua, et al. “Curriculum learning.” Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009.
Florensa, Carlos, et al. “Reverse Curriculum Generation for Reinforcement Learning.” arXiv preprint arXiv:1707.05300 (2017).
Devin, Coline, et al. “Learning modular neural network policies for multi-task and multi-robot transfer.” Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.
Andreas, Jacob, Dan Klein, and Sergey Levine. “Modular multitask reinforcement learning with policy sketches.” arXiv preprint arXiv:1611.01796 (2016).
Mnih, Volodymyr, et al. “Human-level control through deep reinforcement learning.” Nature 518.7540 (2015): 529-533.
Mnih, Volodymyr, et al. “Asynchronous methods for deep reinforcement learning.” International Conference on Machine Learning. 2016.
Schulman, John, et al. “Trust region policy optimization.” Proceedings of the 32nd International Conference on Machine Learning (ICML-15). 2015.
Schulman, John, et al. “Proximal Policy Optimization Algorithms.” arXiv preprint arXiv:1707.06347 (2017).
Kullback, Solomon, and Richard A. Leibler. “On information and sufficiency.” The Annals of Mathematical Statistics 22.1 (1951): 79-86.
Heess, Nicolas, et al. “Emergence of Locomotion Behaviours in Rich Environments.” arXiv preprint arXiv:1707.02286 (2017).
Todorov, Emanuel, Tom Erez, and Yuval Tassa. “MuJoCo: A physics engine for model-based control.” Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012.
[^1]: \* Both authors contributed equally to this work
[^2]: $^{1}$Zhuo Xu and Masayoshi Tomizuka are with the Dept. of Mechanical Engineering, University of California, Berkeley CA 94720, USA
[^3]: $^{2}$Haonan Chang is with the Dept. of Mechanical Engineering, Tsinghua University, Beijing 100084, China
---
abstract: 'Recently, Neubert has suggested that a certain class of nonperturbative corrections dominates the shape of the electron spectrum in the endpoint region of semileptonic $B$ decay. Perturbative QCD corrections are important in the endpoint region. We study the effects of these corrections on Neubert’s proposal. The connection between the endpoint of the electron spectrum in semileptonic $B$ decay and the photon spectrum in $b\rightarrow s\gamma$ is outlined.'
address:
- |
Department of Physics\
University of California, San Diego\
La Jolla, California 92093
- |
California Institute of Technology\
Pasadena, California 91125
author:
- 'Adam F. Falk[^1], Elizabeth Jenkins and Aneesh V. Manohar'
- 'Mark B. Wise'
date: 'December 18, 1993'
title: |
QCD Corrections and the Endpoint of the Lepton Spectrum\
in Semileptonic $B$ Decays
---
Introduction
============
The electron energy spectrum near its endpoint in semileptonic $B$ meson decay arises from $b\rightarrow u$ transitions and provides one method for the extraction of the Kobayashi-Maskawa mixing angle $V_{ub}$ from experiment. The spectrum must be known accurately within a few hundred MeV of its endpoint, since it is only in this region that the large background due to the dominant $b\rightarrow c$ weak transition is kinematically forbidden. Thus, the separation of the rare $b \rightarrow u$ decay from the inclusive spectrum relies upon a theoretical understanding of the shape of the spectrum in this small region. Unfortunately, it is precisely this region which is the least well understood theoretically.
The endpoint region of inclusive semileptonic $B$ decay has been studied extensively. The first approaches relied on QCD models. Grinstein et al. [@ISGW] used a constituent quark model to sum over exclusive charmless final states in this region, assuming that the spectrum is dominated by a few low-lying resonances. Altarelli et al. [@ACCMM] computed the spectrum in the free $b$ quark decay model, augmented by the inclusion of a model of the Fermi motion of the $b$ quark in the $B$ meson. More recently, a QCD-based approach has been formulated in the context of the heavy quark effective theory (HQET). Using an operator product expansion (OPE) and the HQET, Chay, Georgi and Grinstein [@CGG] have shown that the free $b$ quark decay model describes inclusive semileptonic $B$ decay to leading and first subleading order in a systematic expansion in $1/m_b$, where $m_b$ is the $b$ quark mass of the HQET. The first non-vanishing corrections to the free quark decay result are of order $1/m_b^2$, and have now been computed [@MW; @Bigietc]. These corrections arise from higher order terms in the OPE whose matrix elements contain information about the state of the $b$ quark inside the hadron.
At leading order, the electron spectrum is governed by quark kinematics with an endpoint at $E_e=m_b/2$, rather than at the physical endpoint $M_B/2$ which is determined by the $B$ meson mass $M_B$. The higher order terms in the $1/m_b$ expansion produce corrections to the free quark decay spectrum, causing it to “leak” beyond the free quark endpoint. Understanding this process is crucial for extracting $V_{ub}$, since the difference $(M_B-m_b)/2$ is expected to be several hundred MeV, and is comparable to the 330 MeV energy difference between the $b\rightarrow u$ and $b\rightarrow c$ endpoints. Recently, Neubert has shown that the most singular terms in the $1/m_b$ expansion can be used to define a “shape function” of the spectrum, which is determined by a certain set of nonperturbative matrix elements and is model-independent [@Neubert]. This shape function describes the electron energy spectrum beyond the kinematic endpoint of the free quark decay (neglecting QCD radiative corrections). In this paper we examine the influence of perturbative QCD corrections on the endpoint region. These corrections are particularly important due to the presence of a Sudakov double-logarithmic suppression of the free quark decay rate at the endpoint.
This paper is organized as follows. In Section 2, we review the operator product expansion analysis of the differential decay width for the endpoint region, neglecting perturbative QCD radiative corrections. The summation of the leading nonperturbative singularities to the shape of the endpoint spectrum is presented. We show that this summation can be obtained from the free $b$ quark decay result by suitably averaging the free quark decay result over the residual momentum of the $b$ quark inside the $B$ meson. Since the leading nonperturbative corrections can be generated by this procedure, radiative corrections can be included by computing radiative corrections to the free quark decay result and then averaging over the residual momentum of the $b$ quark. In Section 3, we consider the radiative corrections to free quark decay and show how they modify the shape of the endpoint of the electron spectrum. Numerical results and conclusions are presented in Section 4.
Leading Nonperturbative Singularities
=====================================
The inclusive differential decay distribution for $B\rightarrow
X_{u,c}\,e\,\overline\nu$ is determined by the imaginary part of the time-ordered product of two weak currents, $$\label{Tmunu}
T^{\mu\nu}\equiv-i\int d^4x\,e^{-iq\cdot x}\langle B | \, T \{ J^{ \mu
\dagger} (x),J^\nu(0)\}\,|B\rangle\,,$$ where $J^\mu=\overline q\gamma^\mu(1-\gamma^5)b$ and $q=u,c$. The time-ordered product may be expanded in inverse powers of the $b$ quark mass using an operator product expansion [@CGG], and in powers of $\alpha_s(m_b)$. In this section we will concentrate on the $1/m_b$ expansion. From the operator product expansion of the hadronic tensor, one obtains an expression for the inclusive electron energy spectrum, $d\Gamma/dy$, where $y$ is the rescaled electron energy, $y=2E_e/m_b$. The leading term in the $1/m_b$ expansion produces the result of the free quark decay model, in which the inclusive semileptonic decay rate is given by the decay of a free, on-shell $b$ quark. The endpoint of the electron spectrum is at $y=1$. The subleading terms represent corrections to free quark decay, in which certain features of the motion of the $b$ quark inside the $B$ meson are taken into account. The expansion is in powers of $$\epsilon=\Lambda/m_b\,,$$ where $\Lambda$ is a scale typical of the strong interactions of QCD, perhaps 300 to 500 MeV.
Neglecting perturbative $\alpha_s(m_b)$ corrections, the electron energy spectrum for $B\rightarrow X_u\,e\,\overline\nu$ decay is given by [@MW; @Bigietc] $$\begin{aligned}
\label{leading}
{1\over\Gamma_0}{d\Gamma\over dy}&=&\left\{2(3-2y)y^2+4(3-y)y^2E_b
-{4y^2(9+2y)\over3}K_b-{4y^2(15+2y)\over3}G_b\right\}\theta(1-y)
\nonumber \\
&&+\left\{2E_b-{4\over3}K_b+{16\over3}G_b\right\}\delta(1-y)
+{2\over3}K_b\delta'(1-y)\,,\end{aligned}$$ up to corrections of order $\epsilon^3$, where $\Gamma_0$ is the free quark decay width $$\label{Gammazero}
\Gamma_0=\left|V_{ub}\right|^2\,{G_F^2 m_b^5\over 192\pi^3}\,,$$ and $\theta(x)$ is 1 if $x>0$ and zero otherwise.[^2] $E_b$, $K_b$ and $G_b$ are hadronic matrix elements of order $\epsilon^2$, defined by $$\begin{aligned}
E_b &=& G_b+K_b\,,\cr
K_b &=& \left\langle B(v)\right| \bar b_v\, {D^2\over 2 m_b^2}\, b_v
\left|B(v)\right\rangle\,,\cr
G_b &=& \left\langle B(v)\right| \bar b_v\,
g{\sigma_{\alpha\beta}G^{\alpha\beta}\over 4 m_b^2}
b_v\,\left|B(v)\right\rangle\,,\end{aligned}$$ where $b_v$ is the $b$ quark field in the HQET. The factor of $\theta(1-y)$ in the first term is required because the tree level decay distribution does not vanish at the boundary of the Dalitz plot. The $\delta(1-y)$ and $\delta'(1-y)$ singularities arise because some higher order terms in the $1/m_b$ expansion have the form of derivatives with respect to $y$ of lower order terms. Since the free quark decay distribution does not vanish at the endpoint, this generates singular terms in the decay spectrum. These singularities imply that the $1/m_b$ expansion breaks down at $y=1$.
Eq. (\[leading\]) is the decay spectrum including all corrections of order $1/m_b^2$. To all orders in $1/m_b$, the decay spectrum $d\Gamma/dy$ obtained from the OPE at zeroth order in $\alpha_s$ has the structure $$\begin{aligned}
\label{generaldeltas}
{1\over\Gamma_0}{d\Gamma\over dy} &=&
\theta(1-y)\left(\epsilon^0+0\,\epsilon+\epsilon^2+\cdots\right)
+\delta(1-y)\left(0\,\epsilon+\epsilon^2+\cdots\right)
+\delta'(1-y)\left(\epsilon^2+\epsilon^3+\cdots\right)\nonumber\\
&&+\cdots+\delta^{(n)}(1-y)\left(\epsilon^{n+1}+\epsilon^{n+2}
+\cdots\right)+\cdots\,,\end{aligned}$$ where $\epsilon^n$ denotes a term of that order, which may include a smooth function of $y$. It is a nontrivial prediction of the heavy quark effective theory that the terms proportional to $\epsilon$ in this expansion vanish [@CGG], as is evident in eq. (\[leading\]). Although the theoretical expression for $d\Gamma/dy$ is singular at the endpoint $y=1$, the total semileptonic width is not. The contribution to the total rate of a term $\epsilon^m\delta^{(n)}(1-y)$ is of order $\epsilon^m$, so the semileptonic width has a well-behaved expansion in powers of $1/m_b$, $$\Gamma = \Gamma_0\left(1+0 \epsilon+ \epsilon^2 +\epsilon^3+\ldots\right),$$ where the term proportional to $\epsilon$ vanishes.
The semileptonic decay width for $b\rightarrow u$ is difficult to measure because of background contamination from the dominant $b\rightarrow c$ semileptonic decays. It is therefore important to be able to compute the semileptonic decay rate for $b\rightarrow u$ transitions near the endpoint $y=1$, since the kinematic endpoint of the $b\rightarrow c$ spectrum is below the $b\rightarrow u$ endpoint. One way to calculate the endpoint spectrum is to weight the differential distribution $d\Gamma/dy$ by a normalized function of width $\sigma$ around $y=1$. We will refer to this procedure as “smearing.” Most of the details of the smearing procedure are unimportant; the only quantity of relevance is the width $\sigma$ of the smearing region. A physically meaningful result can be obtained by smearing over a large enough region in $y$ such that the singular corrections to $d\Gamma/dy$ are small. In ref. [@MW], it was shown that the singular corrections are small if the smearing width is chosen so that $\sigma \gg \epsilon$. We will now show that by summing the leading singularities, one can choose $\sigma$ of order $\epsilon$.
The singular distribution $\epsilon^m\delta^{(n)}(1-y)$ (where $m>n$) smeared over a region of width $\sigma$ gives a contribution of order $\epsilon^m/
\sigma^{n+1}$ to $d\Gamma/dy$. If the width $\sigma$ of the smearing region is of order $\epsilon^p$, the generic term $\epsilon^m\delta^{(n)}(1-y)$ yields a contribution of order $\epsilon^{m-(n+1)p}$. Since $m>n$, this shows that the $1/m_b$ expansion for the spectrum breaks down unless $p\le 1$, i.e. the smearing region cannot be made narrower than of order $\epsilon$. If $p>1$, the $1/m_b$ expansion breaks down because it is dominated by an infinite number of terms at large values of $n$. This divergence is not associated with the failure of the OPE due to the presence of resonances with masses of order the QCD scale [@Isgur]. The region in which such resonances dominate the final state is of width $\epsilon^2$, while the expansion breaks down upon smearing over [*any*]{} region of size $\epsilon^{1+\delta}$, where $\delta>0$.
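One way to make this power counting explicit is to smear $d\Gamma/dy$ against a smooth normalized weight $w_\sigma(y)$ concentrated in a region of width $\sigma$ about $y=1$, so that $w_\sigma\sim1/\sigma$ and each $y$-derivative of $w_\sigma$ costs an additional factor of $1/\sigma$. Then $$\int dy\;\epsilon^m\,\delta^{(n)}(1-y)\,w_\sigma(y)=\epsilon^m\,{d^n w_\sigma\over dy^n}\bigg|_{y=1}\sim{\epsilon^m\over\sigma^{n+1}}\,,$$ and with $\sigma\sim\epsilon^p$ this is of order $\epsilon^{m-(n+1)p}$, as used above.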
If the smearing region is chosen to be of order $\epsilon$, the form of the expansion (\[generaldeltas\]) shows that the leading terms of the form $\theta(1-y)$ and $\epsilon^{n+1}\delta^{(n)}(1-y)$ all contribute at order unity to $d\Gamma/dy$, all terms of the form $\epsilon^{n+2}\delta^{(n)}(1-y)$ contribute at order $\epsilon$, etc. Thus one can, in principle, obtain the decay spectrum smeared over a width of order $\epsilon$ if one can sum the leading singularities in eq. (\[generaldeltas\]). The sum of the leading singularities produces a distribution $d\Gamma/dy$ of width $\epsilon$, and with a height of the same magnitude as the free quark decay distribution for $d\Gamma/dy$, i.e. with a height of order one. The subleading singularities produce a distribution which is also of width $\epsilon$, but has a height of order $\epsilon$ times the distribution obtained by summing the leading singularities. The decay distribution $d\Gamma/dy$ cannot be obtained with a resolution finer than $\epsilon$ without summing all the subleading singularities.
Neubert has shown that the series of leading singularities $$\label{leadingseries}
{1\over\Gamma_0}{d\Gamma\over dy}=A_0
\,\theta(1-y)+0\,\epsilon\,\delta(1-y)
+A_2\,\epsilon^2\,\delta'(1-y)+\cdots$$ may be resummed into a “shape function”, which describes the behavior of the theoretical spectrum in the region beyond the free quark decay endpoint at $y=1$ [@Neubert]. These terms arise in a particularly simple way in the OPE, because they come only from the expansion of the quark propagator which connects the two currents. The shape function has a width of order $\epsilon$ and height of order one.
The series of leading singularities (\[leadingseries\]) can be obtained by averaging the free quark decay result over the residual momentum of the $b$ quark in the $B$ meson [@MW]. This simple procedure is important since it will also enable us to obtain the leading nonperturbative singularities for the radiative corrections by only calculating radiative corrections to free quark decay.
The differential decay distribution is obtained from the tensor $T^{\mu \nu}$ defined in eq. (\[Tmunu\]). This tensor is a function of the momentum transfer to the leptons, $q$, and the velocity of the $B$ meson, $v$. The differential decay distribution is proportional to the hadronic tensor contracted with the lepton tensor $L_{\mu\nu}$, which depends on the electron and neutrino momenta, $k_e$ and $k_\nu$: $$\label{dgform}
{d\Gamma\over dx\, dy\, d\hat q^2} \propto W^{\mu\nu}L_{\mu\nu}\,,$$ where $W^{\mu\nu}$ is the discontinuity of $T^{\mu\nu}$ across the physical cut, $W^{\mu\nu}=-{\rm Im}\,T^{\mu\nu}/\pi$. The constant of proportionality in eq. (\[dgform\]) involves $G_F^2$ and the mixing angle $\left|V_{ub}\right|^2$. The dimensionless variables $x$, $y$ and $\hat q^2$ are defined by $$\label{xyq}
x = {2 k_\nu\cdot v\over m_b}\,,\qquad y = {2 k_e\cdot v\over
m_b}\,,\qquad \hat q^2 = {q^2\over m_b^2}\,.$$ The lowest order (in $1/m_b$) decay distribution $d \Gamma_{\rm free}/ dx\,
dy\, d\hat q^2$ is the decay distribution for a free on-shell $b$ quark with mass $m_b$ and the same velocity $v$ as the $B$ meson. However, the $b$ quark in the $B$ meson is off-shell with a distribution of residual momentum $k$. The off-shell $b$ quark, with momentum $m_bv+k$, may be viewed as an on-shell quark with mass $m_b'$ and velocity $v'$, where $m_b'v'=m_b v+k$. The decay rate for such a quark is obtained by evaluating the lowest order expression for $d\Gamma_{\rm free}/dy$ in the rest frame of the moving quark, and then boosting back to the rest frame of the $B$ meson, $$\label{Gprime}
d\Gamma = {1\over v\cdot v'} \, d\Gamma_{\rm free}(x',y',\hat
q^{\prime 2},m_b')\,.$$ Note that all scaled quantities depend implicitly on $m_b$, and hence must be primed. We now replace $m_b'v'\rightarrow m_bv+k$ and average over the residual momentum $k^\mu$. Expanding in $k^\mu/m_b$, we obtain a series of the form [@MW] $$\label{derivatives}
\left\langle d\Gamma\right\rangle =
\left\langle {\left[1 + 2v\cdot k/m_b + k^2/m_b^2\right]^{1/2}\over
1 + v\cdot k/m_b}\left[1 + {k^{\mu_1}}
{\partial\over\partial m_bv^{\mu_1}}
+{1\over2} {k^{\mu_1}k^{\mu_2}}{\partial\over\partial m_b
v^{\mu_1}} {\partial\over\partial m_b
v^{\mu_2}}+...\right]d\Gamma_{\rm free}
\right\rangle\,,$$ where $\langle\cdot\rangle$ denotes an average with respect to the distribution of the momentum $k$ of the $b$ quark in the $B$ meson. The derivatives with respect to $m_b v^{\mu}$ can be rewritten as derivatives with respect to $x$, $y$ and $\hat q^2$ using the chain rule. Terms with $n$ derivatives with respect to $m_b v^\mu$ in eq. (\[derivatives\]) turn into terms with $n_x$, $n_y$ and $n_q$ derivatives with respect to $x$, $y$ and $\hat q^2$ respectively, where $n_x+n_y+n_q\le n$. The expansion of $d\Gamma/dy$ is then obtained by integrating the expansion of $d\Gamma/dx\,dy\,d\hat q^2$ with respect to $x$ and $\hat q^2$. The explicit computations to order $1/m_b^2$ are given in ref. [@MW].
In this paper, we are interested in summing the most singular terms in $d\Gamma/dy$ near $y=1$ to all orders in $1/m_b$. These terms are found by retaining the terms in eq. (\[derivatives\]) with the maximum number of $y$-derivatives at each order in $1/m_b$. This corresponds to only retaining the $\partial^n/\partial y^n$ term in $\partial^n/\partial m_bv^{\mu_1}
\ldots \partial m_bv^{\mu_n}$ in eq. (\[derivatives\]) and ignoring the prefactor $\left[1 + 2v\cdot k/m_b + k^2/m_b^2\right]^{1/2}/ \left[ 1 + v\cdot
k/m_b\right]$. Terms with derivatives with respect to $x$ or $\hat q^2$ do not generate derivatives with respect to $y$ on integration over $x$ and $\hat
q^2$, and are less singular than the terms we have retained. The most singular terms are thus obtained using $$\label{chainrule}
\left({\partial \over\partial m_b v^\mu}\right)^n\rightarrow
\left({\partial y\over\partial m_b v^\mu}{\partial \over\partial
y}\right)^n\rightarrow\left( {2\over m_b}(\hat k_{e\mu}-yv_\mu){\partial
\over \partial y}\right)^n
\mathrel{\mathop{\longrightarrow}^{y=1}}\left({2\over m_b}(\hat k_e
- v)_\mu
{\partial \over \partial y}\right)^n\,,$$ which gives the leading singularities, $$\begin{aligned}
{d\Gamma\over dy} &=& {d\Gamma_{\rm free}\over dy}
+\langle k^{\mu_1}\rangle \left({2\over m_b}\right)(\hat k_e - v)
_{\mu_1}
{\partial\over \partial y} \left({d\Gamma_{\rm free}\over dy}
\right) \nonumber\\
&&+\cdots+{1\over n!}\langle k^{\mu_1}\cdots k^{\mu_n}\rangle
\left({2\over m_b}\right)^n(\hat k_e - v)_{\mu_1}\cdots(\hat k_e -
v)_{\mu_n}
{\partial^n\over \partial y^n}\left( {d\Gamma_{\rm free}\over dy}
\right)+\cdots\nonumber\\
\label{derivseries2}
&=&\sum_{n=0}^\infty {2^n\over m_b^{\,n}\ n!}(\hat k_e - v)_{\mu_1}
\cdots(\hat k_e - v)_{\mu_n} \langle k^{\mu_1}\cdots
k^{\mu_n}\rangle {\partial^n\over \partial y^n}
\left({d\Gamma_{\rm free}\over dy}\right)\,,\end{aligned}$$ where $\hat k_e = k_e/m_b$. Eq. (\[derivseries2\]) sums the leading nonperturbative corrections in the endpoint region, provided one interprets the residual momentum $k$ as the operator $iD$ and the average as the expectation value of the resulting operator in the $B$-meson state. There is no operator ordering ambiguity for the leading singularity in this identification, because $D^{\mu_1} \cdots D^{\mu_n}$ is contracted with the completely symmetric tensor $(\hat k_e - v)^{\mu_1}\cdots (\hat k_e -
v)^{\mu_n}$, and so the commutator $\left[D^\mu, D^\nu\right]$ does not contribute. Only the part of the matrix element $\langle B
{(v)}|iD^{\mu_1}\cdots iD^{\mu_n} |B {(v)}\rangle$ proportional to the tensor structure $v^{\mu_1}\cdots v^{\mu_n}$ contributes to the most singular terms, since $(\hat k_e - v)^2$ vanishes at $y = 1$ [@Neubert]. Neglecting perturbative $\alpha_s (m_b)$ radiative corrections, the most singular terms in eq. (\[derivseries2\]) are $\delta$-functions and their derivatives, which arise from differentiating the factor of $\theta (1-y)$ in $d\Gamma_{\rm free}/dy$. Dropping the $n=0$ term in eq. (\[derivseries2\]) and allowing the derivatives to act only on the $\theta$-function gives Neubert’s shape function $$\label{shapefunction}
S(y)=\sum_{n=1}^\infty {2^n\over m_b^{\,n}\ n!}(\hat k_e - v)_{\mu_1}
\cdots(\hat k_e - v)_{\mu_n}\ \langle B(v)|\, iD^{\mu_1}\cdots
iD^{\mu_n}\,|B(v)\rangle\ {\partial^n\over \partial y^n}\,\theta(1-y)\,.$$
This procedure for averaging over residual momentum produces the same result for the leading singularities as the operator product expansion. As discussed in ref. [@MW], one can use reparameterization invariance [@Luke] to show that averaging over residual momentum gives the same answer as the OPE, provided one neglects the commutator $\left[D^\mu, D^\nu\right]$ and higher dimension operators involving light quark fields. The commutator and higher dimension operators do not contribute to the most singular terms, and so averaging over residual momentum will be adequate for this discussion.
It is simple to understand how this averaging procedure generates a shape function which extends beyond the free quark decay endpoint. If the energy of the $b$ quark is allowed to fluctuate from its on-shell value, occasionally it will have an energy larger than its free value $m_b$. This fluctuation corresponds to a situation in which the quark has temporarily absorbed some energy from the light degrees of freedom in the $B$ meson; if it decays weakly at this moment, then an energy $E_e>m_b/2$ may be given to the electron.
Radiative Corrections
=====================
The advantage of the averaging procedure for obtaining the leading nonperturbative singularities as $y\rightarrow1$ is that it generalizes straightforwardly to the case when radiative corrections are included. The averaging procedure applied to the free quark decay distribution including radiative corrections yields the leading nonperturbative singularities including radiative corrections.
The one-loop QCD contribution to the free quark decay process, including both virtual gluons and real gluon emission, has been computed [@Alietc]. The corrected electron spectrum takes the form $$\label{radiative}
{d\Gamma_{\rm free}\over dy}={d\Gamma_0\over dy}\left[
1-{2\alpha_s\over3\pi}G(y,\hat m_q)
+O(\alpha_s^2)\right]\,,$$ where $\Gamma_0$ is the tree level free quark decay rate. Perturbative QCD corrections do not extend the electron spectrum beyond the free quark decay endpoint $y=1$. This can only occur because of the nonperturbative $1/m_b$ corrections discussed in the preceding section. In the interesting case $\hat m_q=0$ relevant to the transition $B\rightarrow
X_u\,e\,\overline\nu$, $G(y,0)$ is given by [@Alietc; @JK][^3] $$\label{Gdef}
G(y,0)=G(y)=\ln^2(1-y)+{31\over6}\ln(1-y)+\pi^2+{5\over4}+
(\text{vanishing as } y\rightarrow 1)\,.$$ The leading singularity at each order in perturbation theory is proportional to $\alpha_s^n\ln^{2n}(1-y)$. These singularities lead to a breakdown of the perturbative QCD expansion near the endpoint $y=1$, unless they can be summed. The double logarithms have been shown to exponentiate [@Sudakov], yielding an expression which formally has the structure $$\label{exponentiated}
{d\Gamma_{\rm free}\over dy}=R(y){d\Gamma_0\over dy}\,,$$ where $$\label{Rdef}
R(y)=\exp\left\{-{2\alpha_s\over3\pi}\ln^2(1-y)
\right\}\,.$$ This is the form for the decay spectrum used by Altarelli et al. [@ACCMM]. The Sudakov form factor $R(y)$ causes the electron spectrum to vanish at the free quark endpoint $y=1$.
The contribution to the endpoint shape of the electron energy spectrum coming from the exponentiated double-logarithm in $R(y)$ is a calculable effect. One might hope that once this leading radiative correction has been accounted for, it would be consistent to include the leading higher dimension operators using eq. (\[derivseries2\]) and neglect all subleading radiative corrections. However, we find that for very large $m_b$ this is not the case; the perturbative expansion is so poorly behaved at large orders in $\alpha_s(m_b)$ that it is necessary to sum an infinite number of infinite series before including nonperturbative effects with eq. (\[derivseries2\]). Nevertheless, for the case of interest $m_b\approx 4.5$ GeV, neglecting the subleading radiative corrections may provide a reasonable approximation for the endpoint of the electron energy spectrum.
Before analyzing the general structure of the radiative corrections, it is instructive to consider a simple example which illustrates the importance of subleading radiative corrections. Consider the order $\alpha_s$ correction given in eq. (\[radiative\]). This correction has $\ln^2(1-y)$ and $\ln(1-y)$ singularities as $y\rightarrow 1$. The $\ln^2(1-y)$ singularity is summed into the Sudakov form factor $R(y)$, leaving the subleading $\ln(1-y)$ singularity. This subleading logarithmic singularity must also be understood in order to determine the effect of radiative corrections on the endpoint energy spectrum [@Politzer]. To see this, note that it is possible to write two different expressions for the decay spectrum which contain the same Sudakov leading singularity, but which have very different behaviors as $y \rightarrow 1$. The first expression is the conventional definition [@ACCMM] $$\label{IIIi}
{d\Gamma_{\rm free}\over dy}=R(y){d\Gamma_0\over
dy}\left[1-{2\alpha_s\over 3\pi}\widetilde G(y)\right]\,,$$ where $$\label{IIIii}
\widetilde G(y) = G(y)- \ln^2(1-y)\,.$$ However, one can also rewrite the decay spectrum as $$\label{IIIiii}
{d\Gamma_{\rm free}\over dy}={d\Gamma_0\over
dy}\left[R(y)-{2\alpha_s\over 3\pi}\widetilde G(y)\right]\,,$$ which is equally valid to order $\alpha_s$. The two expressions (\[IIIi\]) and (\[IIIiii\]) have the same $\ln^2(1-y)$ singularity as $y\rightarrow 1$, but differ in the subleading terms. The first expression (\[IIIi\]) vanishes as $y\rightarrow 1$, whereas eq. (\[IIIiii\]) diverges as $y\rightarrow 1$. Thus, the exact form of the subleading singularity is required in order to determine the shape of the spectrum very near the endpoint.
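To see explicitly that the two forms agree to the order computed, write $g(y)\equiv{2\alpha_s\over3\pi}\widetilde G(y)$; the difference between eqs. (\[IIIi\]) and (\[IIIiii\]) is $$R(y)\left[1-g(y)\right]-\left[R(y)-g(y)\right]=g(y)\left[1-R(y)\right]=O(\alpha_s^2)$$ at fixed $y<1$, since $1-R(y)=O(\alpha_s)$. Near the endpoint, however, $R(y)\rightarrow0$ while $\left|g(y)\right|$ grows logarithmically, so the two expressions behave very differently as $y\rightarrow1$.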
We will now demonstrate that in the limit $m_b\rightarrow\infty$, summing the most singular $1/m_b$ corrections with eq. (\[derivseries2\]) cannot be used to improve the behavior of the electron spectrum near $y=1$ without first summing an infinite number of subleading perturbative QCD singularities. In a schematic notation in which we include only the powers of $\alpha_s$ and $\ln(1-y)$, the radiative corrections near $y=1$ have the structure $$\begin{aligned}
\label{Rstructure}
&& 1\nonumber\\
&+& \alpha_s\ln^2(1-y) + \alpha_s\ln(1-y) + \alpha_s\nonumber\\
&+& \alpha_s^2\ln^4(1-y) + \alpha_s^2\ln^3(1-y) + \alpha_s^2\ln^2(1-y)
+ \alpha_s^2\ln(1-y) +\alpha_s^2\nonumber\\
&+& \alpha_s^3\ln^6(1-y) +\alpha_s^3\ln^5(1-y) + \alpha_s^3\ln^4(1-y)
+ \alpha_s^3\ln^3(1-y) + \cdots\nonumber\\
&+& \cdots\,.\end{aligned}$$ The first column, containing terms of the form $\alpha_s^n\ln^{2n}(1-y)$, exponentiates into the Sudakov factor $R(y)$, after which the most singular terms remaining are of order $\alpha_s^n\ln^{2n-1}(1-y)$. We may write the $m$th column of the expansion (\[Rstructure\]) as an infinite series of the form $$\label{columnseries}
C_m(y)=\sum_{n=[m/2]}^\infty b_{mn}\alpha_s^n\ln^{2n-m+1}(1-y)\,.$$ The series of leading singularities corresponds to $m=1$; for this case, and [*only*]{} this case, the coefficients $$\label{bno}
b_{1n}={1\over n!}\left(-{2\over3\pi}\right)^n$$ have been computed for all $n$, and the sum $C_1(y)$ is $R(y)$. The series $C_m(y)$ for $m> 1$ represent an infinite set of infinite series, for which the behavior of the coefficients $b_{mn}$ for large $n$ is not known.
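As a consistency check, summing the first column with these coefficients reproduces the Sudakov factor of eq. (\[Rdef\]), $$C_1(y)=\sum_{n=0}^\infty{1\over n!}\left(-{2\over3\pi}\right)^n\alpha_s^n\ln^{2n}(1-y)=\exp\left\{-{2\alpha_s\over3\pi}\ln^2(1-y)\right\}=R(y)\,.$$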
The unknown subleading series $C_m(y)$, $m>1$, limit the accuracy with which one can determine the electron energy spectrum. For perturbation theory to be valid, one has to remain in a region in which all the subleading terms are small, since their structure is not known, i.e. all the terms beyond the first column of eq. (\[Rstructure\]) must be small. This condition requires that $\alpha_s^n\ln^{2n-m}(1-y) \ll 1$ for all $n$ and all $m> 1$, or that $\alpha_s\ll1$ and $$\label{condition}
\alpha_s \ln^2(1-y) < 1\,,$$ which is the condition required for $n\rightarrow \infty$ with $m$ fixed. If eq. (\[condition\]) is satisfied, the first column sums to $R(y)$, the second column is of order $\sqrt{\alpha_s}$ times the first column, the third column is of order $\sqrt{\alpha_s}$ times the second column, and so on. The condition (\[condition\]) has converted the QCD perturbation series in eq. (\[Rstructure\]) into an expansion in $\sqrt{\alpha_s}$. Summing all the leading singularities $\alpha_s^n\ln^{2n}(1-y)$, or summing any finite number of columns of eq. (\[Rstructure\]), does not increase the region of validity of the perturbation expansion, since the condition that the next column be small is still eq. (\[condition\]). To increase the region of validity of the perturbative expansion, one must sum all the terms of the form $\alpha_s^n \ln^{2n-m}(1-y)$ for $0\le m\le \lambda n$ and $\lambda>0$, in which case one needs only the restriction $\alpha_s\ln^{2-\lambda}(1-y) < 1$. That is, one must sum all the terms in (\[Rstructure\]) below a line which makes an angle $\tan^{-1} \lambda$ with the vertical, which implies that one must sum a large number of subleading logarithms at high orders in perturbation theory.
At present, only the sum $C_1(y)$ of the first column is known, so the condition for the subleading QCD radiative corrections to be small is that given by eq. (\[condition\]). To determine the restriction on $y$, we use eq. (\[condition\]) in the form $${\alpha_s(m_b)\over \pi} \ln^2(1-y) < 1\,,$$ where we have noted that the perturbation series is really in $\alpha_s/\pi$ rather than $\alpha_s$. The condition for reliability of the QCD radiative corrections is $$\label{valid}
1-y > e^{-\sqrt{\pi/\alpha_s}}\,.$$ For very heavy quarks, this corresponds to a region that is much larger than the smearing width $\epsilon$ of the $1/m_b$ corrections. To see this, take the limit $m_b\rightarrow\infty$ with $\alpha_s$ at high energies held fixed, i.e. $\alpha_s(m_b)=6\pi/[(33-2n_f)\ln m_b/\Lambda_{\rm QCD}]$ ($n_f=5$). Define the parameter $t$ by $\ln m_b/\Lambda_{\rm QCD}=t^2$. Then eq. (\[valid\]) becomes $$\label{validb}
1-y > e^{-t \sqrt{23 / 6}}\,,$$ whereas $\epsilon \sim e^{-t^2}$. For large quark masses (large $t$), $\epsilon$ is much smaller than the restriction (\[validb\]) on $1-y$.
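For completeness, the step from eq. (\[valid\]) to eq. (\[validb\]) is simply $$\sqrt{\pi\over\alpha_s(m_b)}=\sqrt{{33-2n_f\over6}\,\ln{m_b\over\Lambda_{\rm QCD}}}=t\,\sqrt{23\over6}\qquad(n_f=5)\,,$$ while $\epsilon\sim\Lambda_{\rm QCD}/m_b=e^{-t^2}$.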
As we have already noted, the residual momentum averaging procedure discussed in Sect. 3 can be applied to the QCD corrected free quark decay spectrum. This procedure yields the leading $1/m_b$ singularities to all orders in $\alpha_s$.[^4] When radiative corrections are neglected, we have shown that the leading $1/m_b$ singularities smear the decay spectrum by a width of order $\epsilon$. Thus, a reliable determination of the decay spectrum near the endpoint requires knowing the lowest order (in $1/m_b$) spectrum at least within a distance $\epsilon$ of the endpoint. Eq. (\[valid\]) implies that in the $m_b\rightarrow\infty$ limit, the lowest order spectrum with perturbative QCD corrections is not known in a region near the endpoint which is much larger than $\epsilon$.
Numerical Results and Conclusions
=================================
The radiative corrections become large in a region given by eq. (\[valid\]) which is much larger than $\epsilon$ in the limit $m_b\rightarrow\infty$. However, for a large but finite quark mass it is possible that one is in a regime where $e^{-\sqrt{\pi/\alpha_s}}$ is not much larger than $\epsilon$. For example, for $m_b=4.5$ GeV, we find $\alpha_s(m_b)\sim 0.2$ and $e^{-\sqrt{\pi/\alpha_s}}\sim 0.02$. This value of $1-y$ corresponds to a smearing width of approximately 50 MeV, which is smaller than $\epsilon$. Whether this crude estimate is valid depends critically on the size of the coefficients of the subleading terms in the second and higher columns of eq. (\[Rstructure\]). For example, the subleading $\ln(1-y)$ and constant terms in $G(y)$ (see eq. (\[Gdef\])) give $$\label{subest}
\left({2\alpha_s\over 3 \pi}\right)\left[\left({31\over 6}\right)\ln(1-y)
+\pi^2+{5\over4}\right] \approx -0.1\,,$$ when the electron energy is 200 MeV away from the free quark endpoint $y=1$.
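As a rough numerical check of the first estimate: for $\alpha_s(m_b)\simeq0.2$ one has $\sqrt{\pi/\alpha_s}\simeq4.0$, so $e^{-\sqrt{\pi/\alpha_s}}\simeq0.02$, and the corresponding interval in electron energy is $\Delta E_e={m_b\over2}(1-y)\simeq45$ MeV for $m_b=4.5$ GeV, consistent with the value quoted above.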
Eq. (\[subest\]) indicates that it may be a good approximation to include the effects of perturbative QCD corrections on the shape of the endpoint region using for $d\Gamma_{\rm free}/dy$ the free b-quark decay rate including the leading QCD double logarithms with eq. (\[exponentiated\]). The smallness of eq. (\[subest\]) arises from a cancellation between the $\ln(1-y)$ and constant terms in $G(y)$. Each of these separately is not particularly small. Thus we are not completely confident that higher order perturbative corrections are negligible. It may be possible to sum the second column of eq. (\[Rstructure\]) using the methods developed in [@comps]. Such a summation would provide useful information on the importance of higher order QCD corrections.
The shape of the endpoint region of the electron spectrum depends on the matrix elements $\langle B(v)|\, iD^{\mu_1}\ldots iD^{\mu_n}\,|B(v)\rangle $. Neubert estimates these matrix elements using a quark model for the $B$ meson [@Neubert]. Eventually, these matrix elements can be determined directly from experiment. For example, the same matrix elements occur in the $1/m_b$ corrections to semileptonic $b\rightarrow c$ decay and in the decay $b\rightarrow s\gamma$ [@self]. Thus a precise measurement of the electron spectrum in $b\rightarrow c$ semileptonic decay can be used to obtain the endpoint electron spectrum for $b\rightarrow u$ semileptonic decay and the photon energy spectrum in $b\rightarrow s \gamma$. The order $\alpha_s$ radiative corrections have also been computed for $b\rightarrow s\gamma$ [@AliGreub]. Let $x_\gamma = 2E_ \gamma/m_b$, and define $$F(y)=\int_{y}^1 {d\Gamma\over dx_\gamma} dx_\gamma\,,$$ where $d\Gamma/dx_\gamma$ is the inclusive photon energy spectrum in $b\rightarrow s\gamma$, neglecting the strange quark mass. $F(y)$ for $b\rightarrow s\gamma$ and $d\Gamma/dy$ for $b\rightarrow u$ semileptonic decays have the same $\ln^2(1-y)$ but different $\ln(1-y)$ singularities as $y\rightarrow 1$. However, $F(y)$ for $b\rightarrow s\gamma$ and $d\Gamma/dy$ for $c\rightarrow d$ semileptonic decays do have the same $\ln^2(1-y)$ and $\ln(1-y)$ singularities as $y\rightarrow 1$ in the order $\alpha_s$ radiative corrections.
The methods in this paper and ref. [@Neubert] for describing the endpoint region of the electron spectrum apply when the endpoint is dominated by many states with masses of order $\sqrt{m_b\Lambda_{\rm
QCD}}$. However, in the non-relativistic constituent quark model estimate of ref. [@ISGW], the region beyond the $B\rightarrow X_c e
\bar\nu_e$ endpoint is dominated by the single decay mode $B\rightarrow \rho e \bar\nu_e$. If $\rho$ dominance is found to hold experimentally, then the sum of the leading singularities is not a valid description of the endpoint in a region which is as small as the difference between the $B\rightarrow X_c e\bar \nu_e$ and $B\rightarrow
X_u e \bar\nu_e$ endpoints.
It is a pleasure to thank M. Luke and H.D. Politzer for helpful conversations. This work was supported by the Department of Energy under Grants Nos. DOE-FG03-90ER40546 and DEAC-03-81ER40050, and by the Presidential Young Investigator Program under Grant No. PHY-8958081.
B. Grinstein, N. Isgur and M.B. Wise, Phys. Rev. Lett. [**56**]{}, 258 (1986); N. Isgur, D. Scora, B. Grinstein and M.B. Wise, Phys. Rev. [**D39**]{}, 799 (1989).
G. Altarelli, N. Cabibbo, G. Corbò, L. Maiani and G. Martinelli, Nucl. Phys. [**B208**]{}, 365 (1982).
J. Chay, H. Georgi and B. Grinstein, Phys. Lett. [**B247**]{}, 399 (1990).
A. Manohar and M.B. Wise, University of California, San Diego Report No. UCSD/PTH 93–14, Phys. Rev. [**D**]{}, to appear.
N. Isgur, Phys. Rev. [**D47**]{}, 2782 (1993).
I.I. Bigi, M. Shifman, N.G. Uraltsev and A.I. Vainshtein, Phys. Rev. Lett. [**71**]{}, 496 (1993); B. Blok, L. Koyrakh, M. Shifman and A.I. Vainshtein, ITP Report No. NSF–ITP–93–68 (1993); T. Mannel, Darmstadt Report No. IKDA–93/26 (1993).
M. Neubert, CERN Report No. CERN–TH.7087/93 (1993).
L. Koyrakh, Minnesota Report No. TPI–MINN–93/47–T; S. Balk, J.G. Körner, D. Pirjol and K. Schilcher, Mainz Report No. MZ–TH/93–32; A.F. Falk, Z. Ligeti, M. Neubert and Y. Nir, University of California, San Diego Report No. UCSD/PTH 93–43.
M.E. Luke and A.V. Manohar, Phys. Lett. [**B286**]{}, 348 (1992).
A. Ali and E. Pietarinen, Nucl. Phys. [**B154**]{}, 519 (1979); N. Cabibbo, G. Corbò and L. Maiani, Nucl. Phys. [**B155**]{}, 83 (1979); G. Corbò, Nucl. Phys. [**B212**]{}, 99 (1983).
M. Jeżabek and J.H. Kühn, Nucl. Phys. [**B320**]{}, 20 (1989).
V. Sudakov, JETP (Sov. Phys.) [**3**]{}, 65 (1956); G. Altarelli, Phys. Rep., 1 (1982).
H.D. Politzer, Proceedings of the Eighth Hawaii Tropical Conference in Particle Physics (1979), edited by V.Z. Peterson and S. Pakvasa, The University Press of Hawaii (1980).
J.C. Collins, Sudakov Form Factors, in Perturbative Quantum Chromodynamics, ed. by A.H. Mueller, World Scientific (Singapore, 1989); J.C. Collins, D.E. Soper, and G. Sterman, Nucl. Phys. [**B308**]{}, 833 (1988); S. Catani, L. Trentadue, G. Turnock, and B.R. Webber, Nucl. Phys. [**B407**]{}, 3 (1993).
I.I. Bigi, N.G. Uraltsev and A.I. Vainshtein, Phys. Lett. [**B293**]{}, 430 (1992); I.I. Bigi, B. Blok, M. Shifman, N.G. Uraltsev and A.I. Vainshtein, Minnesota Report No. TPI–MINN–92/67–T (1992); A.F. Falk, M. Luke and M.J. Savage, University of California, San Diego Report No. UCSD/PTH 93–23, Phys. Rev. [**D**]{}, to appear.
A. Ali and C. Greub, Phys. Lett. [**B287**]{}, 191 (1992), Zeit. Phys. [**C49**]{}, 431 (1991).
[^1]: On leave from The Johns Hopkins University, Baltimore, Maryland
[^2]: Eq. (\[leading\]) holds for massless leptons. Lepton mass effects may be included [@tau], but they do not change the behavior of the endpoint spectrum in an important way.
[^3]: The expression for $G(y)$ is taken from ref. [@JK].
[^4]: The leading singularities arise when the derivatives in eq. (\[derivseries2\]) act on the Sudakov suppression factor $R(y)$, not on the $\theta$-functions as in eq. (\[shapefunction\]).
---
abstract: |
In this paper, we prove Perelman type $\mathcal{W}$-entropy formulae and global differential Harnack estimates for positive solutions to porous medium equation on closed Riemannian manifolds with Ricci curvature bounded below. As applications, we derive Harnack inequalities and Laplacian estimates.
**Mathematics Subject Classification (2010)**. Primary 58J35, 35K92; Secondary 35B40,35K55
**Keywords**. Porous medium equation, Perelman type entropy formula, differential Harnack estimates, Bakry-Émery Ricci curvature.
address: 'School of Mathematical Sciences, Shanxi University, Taiyuan, 030006, Shanxi, China'
author:
- 'Yu-Zhao Wang'
title: '$\mathcal{W}$-Entropy formulae and differential Harnack estimates for porous medium equations on Riemannian manifolds'
---
\[section\] \[theorem\][Corollary]{} \[theorem\][Lemma]{} \[theorem\][Definition]{} \[theorem\][Proposition]{} \[theorem\][Remark]{} \[theorem\][Example]{}
[^1]
Introduction and main results
=============================
Monotonicity formulae and differential Harnack inequalities are two important tools in geometric analysis. One of the most spectacular examples is the entropy monotonicity formula discovered by G. Perelman [@P] and the related differential Harnack inequality for the conjugate heat equation under the Ricci flow. More precisely, let $(M,g(t))$ be a closed $n$-dimensional Riemannian manifold evolving along the Ricci flow and let $(g(t), f(t), \tau(t))$ be a solution to the conjugate heat equation coupled with the Ricci flow $$\label{adjoint}
\partial_tg=-2{\rm Ric},\quad\partial_t f=-\Delta f+|\nabla f|^2-R+\frac n{2\tau},\quad\partial_t\tau=-1.$$ Perelman [@P] introduced the following $\mathcal{W}$-entropy $$\mathcal{W}(g,f,\tau):=\int_M\Big(\tau(R+|\nabla f|^2)+f-n\Big)\frac{e^{-f}}{(4\pi\tau)^{\frac n2}}\,dV$$ and proved its monotonicity $$\label{Pentropy}
\frac{d}{dt}\mathcal{W}(g,f,\tau)=2\tau\int_M\Big|R_{ij}+\nabla_i\nabla_jf-\frac1{2\tau}g_{ij}\Big|^2\frac{e^{-f}}{(4\pi\tau)^{\frac n2}}\,dV\ge0,$$ where $\tau>0$ and $\int_M{(4\pi\tau)^{-\frac n2}}e^{-f}dV=1$. A critical point of the $\mathcal{W}$-entropy is a gradient shrinking Ricci soliton $$\label{soliton}
R_{ij}+\nabla_i\nabla_jf=\frac1{2\tau}g_{ij},$$ which can be viewed as a singularity model when studying the singularity formation of solutions of the Ricci flow. Moreover, when $H(x,t)={(4\pi\tau)^{-\frac n2}}e^{-f}$ is a fundamental solution to the conjugate heat equation (\[adjoint\]), Perelman’s differential Harnack inequality holds [@P; @NiLYH] $$\label{PLYH}
v_H:=\Big(\tau(R+2\Delta f-|\nabla f|^2)+f-n\Big)H\le0.$$
A feature of Perelman’s entropy monotonicity formula is that it holds in any dimension and without any curvature assumption. After Perelman’s work, an interesting direction is that of finding entropy monotonicity formulae for other geometric evolution equations. There have been several developments in this direction; L. Ni [@Nientropy] derived the entropy monotonicity formula for the linear heat equation on Riemannian manifolds with nonnegative Ricci curvature, $$\label{Nientropy}
\frac{d}{dt}\mathcal{W}(f,\tau)=-2\tau\int_M\left(\Big|\nabla_i\nabla_jf-\frac1{2\tau}g_{ij}\Big|^2+R_{ij}f_if_j\right)u\,dV,$$ where $u=(4\pi\tau)^{-\frac n2}e^{-f}$ is a positive solution to the heat equation $\partial_tu=\Delta u$ with $\int_Mu\,dV=1$, $\frac{d\tau}{dt}=1$ and $\mathcal{W}(f,\tau)$ is defined by $$\mathcal{W}(f,\tau):=\int_M\Big(\tau|\nabla f|^2+f-n\Big)u\,dV.$$
When the Ricci curvature of $M$ is bounded below, Li-Xu [@LX] established some Perelman-Ni type entropy formulae for the linear heat equation and proved their monotonicity.
A natural question is how to establish such entropy formulae for nonlinear equations on Riemannian manifolds. Kotschwar-Ni [@KoNi] and Lu-Ni-Vazquez-Villani [@LNVV] obtained the entropy monotonicity formula for the $p$-Laplacian heat equation and the porous medium equation on compact Riemannian manifolds with nonnegative Ricci curvature respectively. In [@WC2], the author proved the entropy monotonicity formula for positive solutions to the doubly nonlinear diffusion equation on closed Riemannian manifolds with nonnegative Ricci curvature.
The first step in the study of the $\mathcal{W}$-entropy on Riemannian manifolds with a negative lower bound on the Ricci curvature is to find a suitable quantity with which to define the $\mathcal{W}$-entropy. In [@LiLi2; @LiLi3], S. Li and X.-D. Li introduced the appropriate quantity for defining the $\mathcal{W}$-entropy for the heat equation of the Witten Laplacian on Riemannian manifolds whose infinite-dimensional Bakry-Émery Ricci curvature is bounded below. Motivated by their work, we obtain the Perelman type $\mathcal{W}$-entropy monotonicity formula for the porous medium equation on closed Riemannian manifolds with Ricci curvature (or Bakry-Émery Ricci curvature) bounded below.
\[KPMEentropy\] Let $(M,g)$ be a closed Riemannian manifold with Ricci curvature bounded below by $-K$ $(K\ge0)$. Let $u$ be a smooth positive solution to the porous medium equation $$\label{PME}
\partial_tu=\Delta u^{\gamma}$$ and let $v=\frac{\gamma}{\gamma-1}u^{\gamma-1}$ be the pressure function. For any $\gamma>1$, define the Perelman-type $\mathcal{W}$-entropy $$\label{WKentropy}
\mathcal{W}_K(v,t):=\sigma_K\beta_K\int_M\left[\gamma\frac{|\nabla v|^2}{v}-\left(\frac{1}{\beta_K}+\frac{\dot{\sigma}_K}{\sigma_K}\right)\right]vu\,dV.$$ Then we have $$\begin{aligned}
\label{WKDentropy}
\frac{d}{dt}\mathcal{W}_K(v,t)
\le&-2\sigma_K\beta_K\int_M (\gamma-1) \left|\nabla_i\nabla_jv+\frac{\eta_K}{n(\gamma-1)}g_{ij}\right|^2vu\,dV\\
&-2\sigma_K\beta_K\int_M\left[(\gamma-1)({\rm Ric}+Kg)(\nabla v,\nabla v )+((\gamma-1)\Delta v+\eta_K)^2\right]vu\,dV,\notag\end{aligned}$$ where $a=\frac{n(\gamma-1)}{n(\gamma-1)+2}$, $\kappa=K\sup\limits_{M\times(0,T]}u^{\gamma-1}$, $\sigma_K=\left(\frac{e^{2{\kappa}t}-1}{2\kappa}\right)^{a}$, $\beta_K=\frac{\sinh(2\kappa t)}{2\kappa}$ and $\eta_K=\frac{2a\kappa}{1-e^{-2\kappa t}}$. Moreover, if ${\rm Ric}\ge-Kg$ for $K\ge0$, then $\mathcal{W}_{K}(v,t)$ is monotone decreasing along the porous medium equation (\[PME\]).
When $K=0$, $\sigma_0=t^{a}$, $\beta_0=t$ and $\eta_0=\frac{a}t$, the entropy formula in Theorem \[KPMEentropy\] reduces to the result of Lu-Ni-Vazquez-Villani in [@LNVV], $$\begin{aligned}
\label{PMEentropy2}
\frac{d}{dt}\mathcal{W}_0(v,t)=&-2(\gamma-1)t^{a+1}\int_M\left[\Big|\nabla_i\nabla_jv
+\frac{a}{n(\gamma-1)t}g_{ij}\Big|^2+ {\rm Ric}(\nabla v,\nabla v)\right]vu\,dV\notag\\
&-2t^{a+1}\int_M\left[(\gamma-1)\Delta v+\frac {a}t\right]^2 vu\,dV,\end{aligned}$$ where $$\label{PMEentropy1}
\mathcal{W}_0(v,t)= t^{a+1}\int_M\left(\gamma\frac{|\nabla v|^2}{v}-\frac{a+1}t\right)vu\,dV.$$
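For the reader’s convenience, the limits used in this reduction are elementary: as $\kappa\rightarrow0^+$, $$\sigma_K=\Big({e^{2\kappa t}-1\over2\kappa}\Big)^{a}\rightarrow t^{a},\qquad \beta_K={\sinh(2\kappa t)\over2\kappa}\rightarrow t,\qquad \eta_K={2a\kappa\over1-e^{-2\kappa t}}\rightarrow{a\over t}\,,$$ so that $\sigma_K\beta_K\rightarrow t^{a+1}$ and $\frac{1}{\beta_K}+\frac{\dot{\sigma}_K}{\sigma_K}\rightarrow\frac{a+1}{t}$, recovering (\[PMEentropy1\]) and (\[PMEentropy2\]).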
A weighted Riemannian manifold is a Riemannian manifold $(M,g)$ with a smooth measure $d\mu:= e^{-f}\,{dV}$, denoted by $(M,g, d\mu)$, where $f$ is a smooth function on $M$. The weighted Riemannian manifold carries a natural analog of the Ricci curvature, namely the $m$-Bakry-Émery Ricci curvature, which is defined as $${\rm Ric}_f^m:={\rm Ric}+\nabla\nabla f-\frac{\nabla f\otimes \nabla f}{m-n},\quad (n\leq m\leq \infty).$$ In particular, when $m=\infty$, ${\rm Ric}^{\infty}_f={\rm Ric}_f:={\rm Ric}+\nabla\nabla f$ is the classical Bakry–Émery Ricci curvature, which was introduced in the study of diffusion processes and functional inequalities including the Poincaré and the logarithmic Sobolev inequalities (see [@BGL] for a comprehensive introduction), and which has since been extensively investigated in the theory of the Ricci flow (for example, the gradient shrinking Ricci soliton equation is precisely ${\rm Ric}_f =\frac{1}{2\tau} g$); the case $m=n$ is allowed only when $f$ is a constant function. There is also a natural analog of the Laplacian, namely, the so-called weighted Laplacian, denoted by $\Delta_f=\Delta-\nabla f\cdot\nabla$, which is a self-adjoint operator in $L^2(M, d\mu)$.
A natural question is which results can be generalized to weighted manifolds, and what advantages and applications the weighted setting offers. There has been a lot of progress in this direction, for instance, gradient estimates and Liouville theorems for symmetric diffusion operators $\Delta_f$ [@LiXD1], and comparison geometry for the Bakry-Émery Ricci tensor [@WW]. In view of the importance of the above-mentioned Perelman $\mathcal{W}$-entropy formulae, it is natural to seek entropy formulae in the weighted case; the first such result was established by X.-D. Li for the weighted heat equation $\partial_t u=\Delta_fu$ in [@LiXD2; @LiXD3], $$\begin{aligned}
\label{Lientropy}
\frac{d}{dt}\mathcal{W}_f(v,\tau)=&-2t\int_M\left(\Big|\nabla_i\nabla_jv-\frac1{2t}g_{ij}\Big|^2+{\rm Ric}^m_f(\nabla v,\nabla v)\right)u\,d\mu\notag\\
&-\frac{2t}{m-n}\int_M\Big(\nabla f\cdot\nabla v+\frac{m-n}{2t}\Big)^2u\,d\mu,\end{aligned}$$ where the $\mathcal{W}$-entropy is defined by $$\mathcal{W}_f(v,t):=\int_M\Big(t|\nabla v|^2+v-m\Big)u\,d\mu,\quad u = \frac{e^{-v}}{(4\pi t)^{m/2}}.$$ In particular, if the $m$-Bakry-Émery Ricci curvature is nonnegative, then $\mathcal{W}_f(v,\tau)$ is monotone decreasing along the weighted heat equation. When $m=n$ and $f$ is constant, (\[Lientropy\]) reduces to (\[Nientropy\]).
In [@LiLi], when $n\le m\in \mathbb{N}$, S. Li and X.-D. Li gave a new proof of the $\mathcal{W}$-entropy formula by using the warped product approach and gave a natural geometric interpretation of the third term in (\[Lientropy\]). Moreover, they extended the $\mathcal{W}$-entropy formula to the weighted heat equation on weighted compact Riemannian manifolds with time-dependent metrics and potentials satisfying the curvature-dimension condition $CD(K,m)$ for some negative constant $K$. In [@LiLi2], S. Li and X.-D. Li obtained the $\mathcal{W}$-entropy formula on compact Riemannian manifolds with $(K,m)$-super Perelman Ricci flow, where $K\in \mathbb{R}$ and $m\in [n,\infty]$ are two constants. In their paper [@LiLi4], S. Li and X.-D. Li pointed out that there is an essential and deep connection between the definition of the $\mathcal{W}$-entropy and the Hamilton type Harnack inequality on complete Riemannian manifolds with the $CD(K,m)$-condition. Recently, they [@LiLi3] introduced Perelman’s $\mathcal{W}$-entropy along the geodesic flow on the Wasserstein space over Riemannian manifolds and proved a rigidity theorem. For further related study, see [@LiLi3; @LiLi5; @LiLi6].
For the nonlinear case, combining the analogous methods in [@KoNi], [@LNVV] and [@LiXD2], Wang-Yang-Chen [@WYC] and Huang-Li [@HL] obtained the entropy monotonicity formulae for the weighted $p$-Laplacian heat equation and the weighted porous medium equation with nonnegative $m$-Bakry-Émery Ricci curvature respectively. In [@WYZ], the author extended the entropy formulae of Kotschwar-Ni [@KoNi] and Wang-Yang-Chen [@WYC] to the case where the $m$-dimensional Bakry-Émery Ricci curvature is bounded from below.
Inspired by the above works, we obtain an entropy monotonicity formula for positive solutions to the weighted porous medium equation with $m$-Bakry-Émery Ricci curvature bounded below, which is a natural generalization of Theorem \[KPMEentropy\].
\[WKPMEentropy\] Let $(M,g,d\mu)$ be a closed weighted Riemannian manifold with $m$-Bakry-Émery Ricci curvature bounded below, i.e. ${\rm Ric}_f^m\ge-Kg$, $K\ge0$. Let $u$ be a smooth positive solution to the equation $$\label{WPME}
\partial_tu=\Delta_f u^{\gamma}$$ and let $v=\frac{\gamma}{\gamma-1}u^{\gamma-1}$ be the pressure function. For any $\gamma>1$, the Perelman-type $\mathcal{W}$-entropy is defined by $$\label{WKentropy}
\mathcal{W}_K(v,t):=\bar{\sigma}_K\bar{\beta}_K\int_M\left[\gamma\frac{|\nabla v|^2}{v}-\left(\frac{1}{\bar{\beta}_K}+\frac{\dot{\bar{\sigma}}_K}{\bar{\sigma}_K}\right)\right]vu\,d\mu.$$ Then we have
$$\begin{aligned}
\label{WKPMEent}
&\frac{d}{dt}\mathcal{W}_K(v,t)
\le-2\bar{\sigma}_K\bar{\beta}_K\int_M (\gamma-1) \left[\left|\nabla_i\nabla_jv+\frac{\bar{\eta}_K}{n(\gamma-1)}g_{ij}\right|^2+({\rm Ric}^m_f+Kg)(\nabla v,\nabla v )\right]vu\,d\mu\notag\\
&-2\bar{\sigma}_K\bar{\beta}_K\int_M\left[\Big((\gamma-1)\Delta_f v+\bar{\eta}_K\Big)^2+\frac{\gamma-1}{m-n}\left(\langle\nabla v,\nabla f\rangle-(m-n)\frac{\bar{\eta}_K}{m(\gamma-1)}\right)^2\right]vu\,d\mu.\end{aligned}$$
where $\bar{a}=\frac{m(\gamma-1)}{m(\gamma-1)+2}$, ${\kappa}=K\sup\limits_{M\times(0,T]}u^{\gamma-1}$, $\bar{\sigma}_K=\left(\frac{e^{2{\kappa}t}-1}{2\kappa}\right)^{\bar{a}}$, $\bar{\beta}_K=\frac{\sinh(2{\kappa}t)}{2{\kappa}}$, $\bar{\eta}_K=\frac{2\bar{a}{\kappa}}{1-e^{-2{\kappa}t}}$. Moreover, if ${\rm Ric}_f^m\ge-Kg$ for $K\ge0$, then the entropy $\mathcal{W}_{K}(v,t)$ is monotone decreasing along the weighted porous medium equation (\[WPME\]).
In the second part of this paper, we study the differential Harnack inequality for the porous medium equation. Such an inequality was first proved by Li and Yau [@LY] for solutions to the heat equation on Riemannian manifolds: if $u$ is a positive solution to $\partial_tu=\Delta u$ on $M$ with nonnegative Ricci curvature, then $$\label{LY}
\frac{|\nabla u|^2}{u^2}-\frac{u_t}{u}\le\frac{n}{2t}.$$ Later on, differential Harnack estimates have been extensively investigated for other geometric evolution equations, such as Hamilton’s estimates for the Ricci flow and the mean curvature flow, the corresponding results for the Kähler-Ricci flow and the Gauss curvature flow proved by H. Cao and B. Chow, and Perelman’s differential Harnack inequality for the conjugate heat equation under the Ricci flow. For more progress in this direction, see the survey [@Ni] and references therein.
It is a long-standing question to prove a sharp Li-Yau Harnack inequality for positive solutions of the heat equation on Riemannian manifolds with a negative lower bound on the Ricci curvature (see p. 393 in [@CLN]). There are many works in this direction. Assume ${\rm Ric}\ge-K$ for some $K>0$; the original result of Li-Yau [@LY] is $$\frac{|\nabla u|^2}{u^2}-\alpha\frac{u_t}{u}\le\frac{\alpha^2}{2(\alpha-1)}nK+\alpha^2\frac{n}{2t},$$ where $\alpha>1$ is a constant. In [@Ham93], Hamilton proved $$\frac{|\nabla u|^2}{u^2}-e^{2Kt}\frac{u_t}{u}\le e^{4Kt}\frac{n}{2t}.$$ Recently, Li-Xu [@LX] obtained some new Li-Yau type estimates, $$\begin{aligned}
\label{LiXu}
\frac{|\nabla u|^2}{u^2}-\left(1+\frac{2}3Kt\right)\frac{u_t}{u}\le&\frac{n}{2t}+\frac{nK}{2}\Big(1+\frac13Kt\Big),\\
\frac{|\nabla u|^2}{u^2}-\left(1+\frac{\sinh (Kt)\cosh (Kt)-Kt}{\sinh^2( Kt)}\right)\frac{u_t}{u}\le&\frac{nK}{2}\Big(1+\coth(Kt)\Big).\notag\end{aligned}$$ In [@LiLi; @LiLi3], S. Li and X.-D. Li proved an analog of the Harnack inequality for the weighted heat equation on weighted complete Riemannian manifolds with ${\rm Ric}^m_f\ge-K$. B. Qian [@QianB] gave a further generalization under suitable assumptions on $\alpha(t)$ and $\varphi(t)$, $$\label{QLY}
\frac{|\nabla u|^2}{u^2}-\alpha(t)\frac{u_t}{u}\le\varphi(t).$$ There are some results on Li-Yau, Hamilton and Li-Xu type differential Harnack estimates for the porous medium equation [@LNVV; @HHL; @WC1], the $p$-heat equation [@KoNi; @WYC; @WYZ] and the doubly nonlinear diffusion equation [@WC2] on Riemannian manifolds with Ricci curvature bounded below.
In the second part of this paper, we obtain a Qian type differential Harnack estimate for the porous medium equation on closed Riemannian manifolds with Ricci curvature bounded below, which can also be generalized to weighted Riemannian manifolds with $m$-Bakry-Émery Ricci curvature bounded below.
Now we first give two assumptions: let $\sigma(t)$ be a $C^1$ function of $t$ satisfying the following (see [@QianB]):
(A1)
: For all $t>0$, $\sigma(t)>0$ and $\sigma'(t)>0$; moreover, $\lim\limits_{t\to0}\sigma(t)=0$ and $\lim\limits_{t\to0}\frac{\sigma(t)}{\sigma'(t)}=0$;
(A2)
: For any $T>0$, $\frac{(\sigma')^2}{\sigma}$ is continuous and integrable on the interval $[0,T)$.
\[pmeGK\] Let $(M^n,g)$ be a closed Riemannian manifold with ${\rm Ric}\ge-Kg$ for $K\ge0$. Let $u(x,t)$ be a positive solution to equation (\[PME\]) and $v$ the pressure function. For any $\gamma>1$, we have $$\label{PMELYH1}
\frac{|\nabla v|^2}{v}-\alpha(t)\frac{v_t}{v}\le\varphi(t).$$ Here $$\begin{aligned}
\label{PMEalphavarphi}
\alpha(t)=1+\frac{2{\kappa}}{\sigma}\int^t_0\sigma(s)ds,\quad
\varphi(t)={\kappa}a
+\frac{{\kappa}^2a}{\sigma}\int_0^t\sigma(s)ds+\frac{a}{4\sigma}\int^t_0\frac{(\sigma'(s))^2}{\sigma(s)}ds,\end{aligned}$$ and $\sigma(t)$ satisfies the assumptions $\mathbf{(A1)}$ and $\mathbf{(A2)}$, $a=\frac{n(\gamma-1)}{n(\gamma-1)+2}$ and ${\kappa}=K\sup\limits_{M\times(0,T]}u^{\gamma-1}$.
1. When $K=0$, $\alpha(t)=1$, $\sigma(t)=t^2$ and $\varphi(t)=\frac at$, the estimate in Theorem \[pmeGK\] reduces to the Aronson-Bénilan type estimate in [@LNVV], i.e. $$\frac{|\nabla v|^2}{v}-\frac{v_t}{v}\le\frac at.$$
2. When $K>0$, $\sigma(t)=t^2$, $\alpha(t)=1+\frac{2\kappa t}3$ and $\varphi(t)=\frac at+a\kappa\left(1+\frac{\kappa t}3\right)$, the estimate in Theorem \[pmeGK\] reduces to the Li-Xu type estimate in [@HHL] (see the computation after this list) $$\frac{|\nabla v|^2}{v}-\left(1+\frac{2\kappa t}3\right)\frac{v_t}{v}\le\frac at+a\kappa\left(1+\frac{\kappa t}3\right).$$
3. When $K>0$, $\sigma(t)=\sinh^2(\kappa t)$, $\alpha(t)=1+\frac{\sinh(\kappa t)\cosh(\kappa t)-\kappa t}{\sinh^2(\kappa t)}$ and $\varphi(t)=a\kappa(1+\coth(\kappa t))$, the estimate in Theorem \[pmeGK\] reduces to another Li-Xu type estimate in [@HHL] $$\frac{|\nabla v|^2}{v}-\left(1+\frac{\sinh(\kappa t)\cosh(\kappa t)-\kappa t}{\sinh^2(\kappa t)}\right)\frac{v_t}{v}\le a\kappa(1+\coth(\kappa t)).$$
4. All of the above gradient estimates remain valid in the weighted case by a similar method.
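For instance, the expressions in item (2) follow from (\[PMEalphavarphi\]) by a direct computation: taking $\sigma(t)=t^{2}$ gives $\int_0^t\sigma(s)\,ds=\frac{t^{3}}{3}$ and $\int_0^t\frac{(\sigma'(s))^{2}}{\sigma(s)}\,ds=4t$, hence $$\alpha(t)=1+\frac{2\kappa}{t^{2}}\cdot\frac{t^{3}}{3}=1+\frac{2\kappa t}{3},\qquad \varphi(t)=\kappa a+\frac{\kappa^{2}a}{t^{2}}\cdot\frac{t^{3}}{3}+\frac{a}{4t^{2}}\cdot4t=\frac{a}{t}+a\kappa\Big(1+\frac{\kappa t}{3}\Big)\,,$$ and setting $\kappa=0$ recovers item (1).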
Differential Harnack estimates have several applications, including Harnack inequalities and Laplacian estimates. Integrating the estimate in Theorem \[pmeGK\] along a minimizing path between two points, we get Harnack inequalities for positive solutions to the porous medium equation (\[PME\]).
\[Harnack\] For any $(x_1,t_1)$ and $(x_2,t_2)$ with $0<t_1\le t_2<T$, we have $$v(x_1,t_1)-v(x_2,t_2)\le v_{max}\int^{t_2}_{t_1}\frac{\varphi(t)}{\alpha(t)}dt
+\frac{1}{4}\frac{d(x_2,x_1)^{2}}{(t_2-t_1)^{2}}\int^{t_2}_{t_1}\alpha(t)dt$$ and $$\frac{v(x_1,t_1)}{v(x_2,t_2)}\le\exp\left(\int^{t_2}_{t_1}\frac{\varphi(t)}{\alpha(t)}dt
+\frac{1}{4}\frac1{v_{max}}\frac{d(x_2,x_1)^{2}}{(t_2-t_1)^{2}}\int^{t_2}_{t_1}\alpha^{\frac1{p-1}}(t)dt\right),$$ where $v_{max}=\sup_{M\times[0,T)}v$.
\[pfLaEst\] Under the same assumptions as in Theorem \[pmeGK\], define $\beta(t)$ by $\frac{\alpha(t)-1}{\alpha(t)}=(\gamma-1)(\beta(t)-1)$ for $\alpha(t)>1$ and such that $1<\beta(t)<\frac{\gamma}{\gamma-1}$. Assume that ${\rm Ric}\ge-K$ for some $K\ge0$, then $$\label{pfLaE}
\Delta(v^{\beta})\ge-\frac{\beta}{\alpha(\gamma-1) } v_{max}^{\beta-1}\varphi(t),$$ where $v_{max}:=\sup_{M\times[0,T]}v$.
This paper is organized as follows. In Section 2, we establish some evolution equations and then prove the entropy monotonicity formulae, i.e. Theorem \[KPMEentropy\] and Theorem \[WKPMEentropy\]. In Section 3, we obtain the Qian type differential Harnack estimate. Finally, Harnack inequalities and Laplacian estimates are derived as applications.
Entropy monotonicity formulae
=============================
Let $(M,g)$ be a closed Riemannian manifold. Let $u$ be a smooth solution to (\[PME\]) and $v=\frac{\gamma}{\gamma-1}u^{\gamma-1}$ the pressure function, and define the parabolic operator $$\square:=\partial_t-(\gamma-1)v\Delta.$$ Then $v$ satisfies the equation $$\label{pressure}\square v =|\nabla v|^2.$$
\[pmeBochner\] Let $\alpha,\beta$ be two constants, $w=|\nabla v|^2$ and $F_{\alpha}=\alpha\frac{v_t}{v}-\frac{|\nabla v|^2}{v}$. Then we have the following evolution equations (see [@LNVV]): $$\begin{aligned}
\label{pmeBochner0}\square v_t=&(\gamma-1) v_t\Delta v+2\langle\nabla v,\nabla v_t\rangle,\\
\label{pmeBochner2}\square v^{\beta}=&\beta\big(\beta+\gamma-\beta\gamma\big)v^{\beta-1}w,\\
\label{pmeBochner3}\square w=&2\langle\nabla v,\nabla w\rangle
+2(\gamma-1)w\Delta v
-2(\gamma-1)v\Big(|\nabla\nabla v|^2+{\rm Ric}(\nabla v,\nabla v)\Big)\\
\label{pmeBochner4}\square F_{\alpha}=&2\gamma \left\langle \nabla v ,\nabla F_{\alpha}\right\rangle+2(\gamma-1)\Big(|\nabla\nabla v|^2+{\rm Ric}(\nabla v,\nabla v)\Big)
+\Big(F_1^2+(\alpha-1)\Big(\frac{v_t}v\Big)^2\Big).\end{aligned}$$
By means of the Bochner-type formulae in Lemma \[pmeBochner\], we have the following integral formulae, which are useful for the proof of the $\mathcal{W}$-entropy formulae (see [@LNVV]).
\[pmeint\] Let $u$ be a positive solution to (\[PME\]). Then we have $$\begin{aligned}
\label{pmeint1}
\frac{d}{dt}\int_Mvu\,dV=&(\gamma-1)\int_M(\Delta v)vu\,dV=-\gamma\int_M|\nabla v|^2u\,dV.\\
\label{pmeint2}
\frac{d^2}{dt^2}\int_Mvu\,dV=&2(\gamma-1)\int_M\left(|\nabla\nabla v|^2+{\rm Ric}(\nabla v,\nabla v)+(\gamma-1)(\Delta v)^2\right)vu\,dV.\end{aligned}$$
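For completeness, here is a short derivation of (\[pmeint1\]), using (\[pressure\]) and the identity $u\nabla v=\nabla u^{\gamma}$ (which follows from $v=\frac{\gamma}{\gamma-1}u^{\gamma-1}$): $$\frac{d}{dt}\int_Mvu\,dV=\int_M\big(v_tu+vu_t\big)\,dV=\int_M\big[(\gamma-1)v\Delta v+|\nabla v|^{2}\big]u\,dV+\int_Mv\,\Delta u^{\gamma}\,dV\,,$$ and integrating the last term by parts gives $\int_Mv\,\Delta u^{\gamma}\,dV=-\int_M\nabla v\cdot\nabla u^{\gamma}\,dV=-\int_M|\nabla v|^{2}u\,dV$, so the derivative equals $(\gamma-1)\int_M(\Delta v)vu\,dV$; one more integration by parts, using $vu=\frac{\gamma}{\gamma-1}u^{\gamma}$, gives $-\gamma\int_M|\nabla v|^{2}u\,dV$.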
Applying the integral formulae in Lemma \[pmeint\], we can deduce the $\mathcal{W}$-entropy formula for the porous medium equation on closed Riemannian manifolds with Ricci curvature bounded below.
Firstly, the Boltzmann-Nash type entropy is defined by $$\mathcal{N}_K(t):=-\sigma_K(t)\int_Mvu\, dV,$$ where $\sigma_K(t)$ is a function of $t$; then (\[pmeint1\]) implies that $$\begin{aligned}
\label{Kpment1}
\frac{d}{dt}\mathcal{N}_K(t)=&-\dot{\sigma}_K\int_Mv u\,d\mu-\sigma_K\frac{d}{dt}\int_Mvu\,dV\notag\\
=&-\sigma_K\int_M\left((\gamma-1)\Delta v+(\log\sigma_K)'\right)vu\,dV\end{aligned}$$
By Lemma \[pmeint\] and the assumption ${\kappa}=K\sup_{M\times[0,T)}u^{\gamma-1}=\frac{\gamma-1}{\gamma}K\sup_{M\times[0,T)}v$, one has
$$\begin{aligned}
\label{Kpment3}
\frac{d^2}{dt^2}\mathcal{N}_K(t)
=&-\sigma_K\frac{d^2}{dt^2}\int_Mvu\,dV
-2\dot{\sigma}_K\frac{d}{dt}\int_Mvu\,dV-\ddot{\sigma}_K\int_Mv u\,dV\notag\\
=&-2(\gamma-1)\sigma_K\int_M\left[|\nabla\nabla v|^2+{\rm Ric}(\nabla v,\nabla v)+(\gamma-1)(\Delta v)^2\right]vu\,dV\notag\\
&+2\frac{\dot{\sigma}_K}{\sigma_K}\frac{d}{dt}\mathcal{N}_K
+\left(\frac{\ddot{\sigma}_K}{\sigma_K}
-\frac{2\dot{\sigma}_K^2}{\sigma_K^2}\right)\mathcal{N}_K\notag\\
=&-2(\gamma-1)\sigma_K\int_M\left[|\nabla\nabla v|^2+({\rm Ric}+Kg)(\nabla v,\nabla v)+(\gamma-1)(\Delta v)^2\right]vu\,dV\notag\\
&+2(\gamma-1)\sigma_K\int_MK|\nabla v|^2vu\,dV+2\frac{\dot{\sigma}_K}{\sigma_K}\frac{d}{dt}\mathcal{N}_K
+\left(\frac{\ddot{\sigma}_K}{\sigma_K}
-\frac{2\dot{\sigma}_K^2}{\sigma_K^2}\right)\mathcal{N}_K\notag\\
\le&-2(\gamma-1)\sigma_K\int_M \left[|\nabla\nabla v|^2+({\rm Ric}+Kg)(\nabla v,\nabla v )+(\gamma-1)(\Delta v)^2\right]vu\,dV\notag\\
&+2\left(\frac{\dot{\sigma}_K}{\sigma_K}+{\kappa}\right)\frac{d}{dt}\mathcal{N}_K
+\left(\frac{\ddot{\sigma}_K}{\sigma_K}
-\frac{2\dot{\sigma}_K^2}{\sigma_K^2}-2{\kappa}\frac{\dot{\sigma}_K}{\sigma_K}\right)\mathcal{N}_K.\end{aligned}$$
Inspired by S. Li and X.-D. Li [@LiLi2] (see also [@LiLi3] for a survey), we define the Perelman type $\mathcal{W}$-entropy by
$$\begin{aligned}
\label{KWentropy}
\mathcal{W}_K(t):=&\frac1{\dot{\alpha}_K(t)}\frac d{dt}(\alpha_K(t)\mathcal{N}_K(t))
=\mathcal{N}_K+\beta_K(t)\frac{d}{dt}\mathcal{N}_K\notag\\
=&-\sigma_K\int_M\Big[\beta_K(\gamma-1)\Delta v+\big(1+(\log\sigma_K)'\beta_K\big)\Big]vu\,dV,\notag\\
=&\sigma_K\beta_K\int_M\left[\gamma\frac{|\nabla v|^2}{v}-\left(\frac{1}{\beta_K}+\frac{\dot{\sigma}_K}{\sigma_K}\right)\right]vu\,dV,\end{aligned}$$
where $\beta_K(t)=\frac{\alpha_K}{\dot{\alpha}_K}$, then $$\frac{d}{dt}\mathcal{W}_K(t)=\beta_K\left(\frac{d^2}{dt^2}\mathcal{N}_K
+\frac{1+\dot{\beta}_K}{\beta_K}\frac{d}{dt}\mathcal{N}_K\right).$$ Combining (\[Kpment1\]) and (\[Kpment3\]), we have
$$\begin{aligned}
\label{wkpment1}
\frac{d}{dt}\mathcal{W}_K(t)\le&
-2(\gamma-1)\sigma_K\beta_K\int_M \left[|\nabla\nabla v|^2+({\rm Ric}+Kg)(\nabla v,\nabla v )+(\gamma-1)(\Delta v)^2\right]vu\,dV\notag\\
&+2\beta_K\left(\frac{\dot{\sigma}_K}{\sigma_K}+\frac{1+\dot{\beta}_K}{2\beta_K}
+{\kappa}\right)\frac{d}{dt}\mathcal{N}_K
+\beta_K\left(\frac{\ddot{\sigma}_K}{\sigma_K}
-\frac{2\dot{\sigma}_K^2}{\sigma_K^2}
-2{\kappa}\frac{\dot{\sigma}_K}{\sigma_K}\right)\mathcal{N}_K.\end{aligned}$$
On the other hand, $$\begin{aligned}
\label{wkpment2}
(\gamma-1)\left|\nabla_i\nabla_jv+\frac{\eta_K(t)}{n(\gamma-1)}g_{ij}\right|^2
=&(\gamma-1)|\nabla\nabla v|^2+\frac{2\eta_K}{n}\Delta v+\frac{\eta_K^2}{n(\gamma-1)}.\end{aligned}$$ Putting (\[wkpment2\]) into (\[wkpment1\]), we get
$$\begin{aligned}
\label{wkpment3}
&\frac{d}{dt}\mathcal{W}_K(t)\notag\\
\le&
-\sigma_K\beta_K\int_M 2 (\gamma-1)\left[\left|\nabla_i\nabla_jv+\frac{\eta_K}{n(\gamma-1)}g_{ij}\right|^2+({\rm Ric}+Kg)(\nabla v,\nabla v )\right]vu\,dV\notag\\
&-\sigma_K\beta_K\int_M\left[2(\gamma-1)^2(\Delta v)^2+2(\gamma-1)\left((\log\sigma_K)'+\frac{1+\dot{\beta}_K}{2\beta_K}+{\kappa}
-\frac{2\eta_K}{n(\gamma-1)}\right)\Delta v\right]vu\,dV\notag\\
&-\sigma_K\beta_K\int_M\left[(\log\sigma_K)''+ ((\log\sigma_K)')^2+\frac{1+\dot{\beta}_K}{\beta_K}(\log\sigma_K)'-\frac{2\eta_K^2}{n(\gamma-1)}\right]vu\,dV.\end{aligned}$$
Set $\lambda=(\log\sigma_K)'$ and choose a proper function $\eta_{K}(t)$ such that $$\label{etabetak}
\left\{
\begin{array}{l}
2\eta_K=\lambda+\frac{1+\dot{\beta}_K}{2\beta_K}+{\kappa}
-\frac{2}{n(\gamma-1)}\eta_K \\
2\eta_K^2=\lambda'+ \lambda^2+\frac{1+\dot{\beta}_K}{\beta_K}\lambda-\frac{2}{n(\gamma-1)}\eta_K^2,
\end{array}
\right.$$ which is equivalent to $$\begin{aligned}
\label{etak}
0=&\eta^2_K-2\lambda\eta_K+\frac{a}{a+1}
\left(\lambda^2-\lambda'+2{\kappa}\lambda\right)\notag\\
=&(\eta_K-\lambda)^2-\frac{1}{a+1}\left(\lambda^2
+a\left(\lambda'-2{\kappa}\lambda\right)\right),\end{aligned}$$ where $a=\frac{n(\gamma-1)}{n(\gamma-1)+2}$. Solving the equation (\[etak\]), we get a special solution $$\label{lambdaetak}
\lambda=\eta_K=\frac{2a{\kappa}}{1-e^{-2{\kappa}t}}.$$ Putting (\[lambdaetak\]) back into the system (\[etabetak\]), we have $$\frac{1+\dot{\beta}_K}{\beta_K}=2\kappa\coth({\kappa}t),\quad \beta_K=\frac{\sinh(2{\kappa}t)}{2{\kappa}}$$ and $$\alpha_K={\kappa}\tanh({\kappa}t),\quad
\sigma_K=\left(e^{{\kappa}t}\frac{\sinh({\kappa}t)}{\kappa}\right)^{a}=\left(\frac{e^{2{\kappa}t}-1}{2\kappa}\right)^{a}.$$
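A direct check of these choices may be helpful: with $\sigma_K=\left(\frac{e^{2{\kappa}t}-1}{2\kappa}\right)^{a}$ one has $$(\log\sigma_K)'=a\,\frac{2\kappa e^{2\kappa t}}{e^{2\kappa t}-1}=\frac{2a\kappa}{1-e^{-2\kappa t}}=\lambda\,,$$ while with $\beta_K=\frac{\sinh(2{\kappa}t)}{2{\kappa}}$, $$\frac{1+\dot{\beta}_K}{\beta_K}=\frac{2\kappa\big(1+\cosh(2\kappa t)\big)}{\sinh(2\kappa t)}=\frac{2\kappa\cdot2\cosh^{2}(\kappa t)}{2\sinh(\kappa t)\cosh(\kappa t)}=2\kappa\coth(\kappa t)\,,$$ consistent with the formulas above.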
Therefore, substituting these choices into (\[wkpment3\]), we obtain the entropy monotonicity formula,
$$\begin{aligned}
\label{wkpment4}
\frac{d}{dt}\mathcal{W}_K(t)\le&
-\sigma_K\beta_K\int_M 2 (\gamma-1)\left[\left| \nabla_i\nabla_jv+\frac{\eta_K}{n(\gamma-1)}g_{ij}\right|^2+({\rm Ric}+Kg)(\nabla v,\nabla v )\right]vu\,dV\notag\\
&-\sigma_K\beta_K\int_M2\left[(\gamma-1)\Delta v+\eta_K\right]^2vu\,dV.\end{aligned}$$
Thus, when ${\rm Ric}\ge-Kg$ with $K\ge0$, since $\sigma_K>0$, $\beta_K>0$ and $\eta_K>0$, the Perelman type entropy $\mathcal{W}_K(t)$ is monotone decreasing along the porous medium equation (\[PME\]).
In [@LiLi; @LiLi2], when $n\le m\in \mathbb{N}$, S. Li and X.-D. Li gave another proof of the entropy formula for the Witten Laplacian and explained its geometric meaning via a warped product construction, so we can also obtain the entropy formula for the weighted porous medium equation by an analogous method. In fact, once the entropy formula is available on a Riemannian manifold, the weighted version follows directly from this warped product approach whenever $m>n$ is a positive integer.
Let $\overline{M}=M\times N$ be a warped product manifold equipped with the metric $$g_{\overline{M}}=g_M+e^{-\frac{2f}q}g_N,\quad q={m-n},$$ where $m,n,q$ are the dimensions of $\overline{M}, M, N$ respectively; then the volume measure satisfies $$\label{volmeasure}
\,\,dV_{\overline{M}}=e^{-f}\,\,dV_M\otimes \,\,dV_N=\,d\mu\otimes \,\,dV_N,$$ where $(N^q,g_N)$ is a compact Riemannian manifold. Let $\pi:M\times N\to M$ be the natural projection map, and let $\overline{X}$ and $X$ be vector fields on $\overline{M}$ and $M$ respectively; then [@Besse] $${\rm Ric}_{\overline{M}}(\overline{X},\overline{X})=\pi^*\left({\rm Ric}_{M}(X,X)-qe^{\frac fq}\nabla\nabla e^{-\frac fq}(X,X)\right),$$ that is $$\label{mBER}
{\rm Ric}_f^m(X,X)=\pi_*\left({\rm Ric}_{\overline{M}}(\overline{X},\overline{X})\right).$$ Assume $V_N(N)=1$ and let $\overline{\nabla}$ denote the Levi-Civita connection on $(\overline{M},g_{\overline{M}})$. For any $v\in C^2(M)$, by S. Li and X.-D. Li [@LiLi], we know $$\label{warped}
\overline{\nabla}_i\overline{\nabla}_jv=\nabla_i\nabla_jv,\quad
\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}v=-\frac 1q g_{\alpha\beta}g^{kl}\partial_kv\partial_lf,\quad
\overline{\nabla}_i\overline{\nabla}_{\alpha}v=0.$$ where $i,j=1,2,\cdots,n$, and $\alpha,\beta=n+1,n+2,\cdots,m$, moreover, $$\label{warpedLaplacian}
\Delta_{\overline{M}}=\Delta_f+e^{-\frac{2f}{q}}\Delta_N.$$ Thus, if $u:M\to[0,\infty)$ is a positive solution to (\[WPME\]), then $u$, viewed as a function on $\overline{M}$, satisfies the equation $$\frac{\partial u}{\partial t}=\bar{\Delta}(u^{\gamma}).$$ Moreover, in terms of (\[volmeasure\]), the weighted entropy functional is exactly the corresponding functional on $(\overline{M},g_{\overline{M}})$: $$\overline{\mathcal{W}}_K(v,t)=\sigma_K\beta_K\int_{\overline{M}}\left[\gamma\frac{|\nabla v|^2}{v}-\left(\frac{1}{\beta_K}+\frac{\dot{\sigma}_K}{\sigma_K}\right)\right]vu\,dV_{\overline{M}}.$$ Applying the entropy formula on $(\overline{M},g_{\overline{M}})$ in Theorem \[KPMEentropy\], we have
$$\begin{aligned}
\label{KPMEnt}
&\frac{d}{dt}\overline{\mathcal{W}}_K(t)\notag\\
\le&
-2(\gamma-1)\bar{\sigma}_K\bar{\beta}_K\int_{\overline{M}}\left(\left| \bar{\nabla}_i\bar{\nabla}_jv+\frac{\bar{\eta}_K}{m(\gamma-1)}\bar{g}_{ij}\right|_{\bar{g}}^2
+(\overline{{\rm Ric}}+K\bar{g})(\bar{\nabla} v,\bar{\nabla} v )\right)vu\,dV_M dV_N\notag\\
&-2\bar{\sigma}_K\bar{\beta}_K\int_{\overline{M}}\left((\gamma-1)\bar{\Delta} v+\bar{\eta}_K\right)^2vu\,dV_M dV_N.\end{aligned}$$
where $$\bar{\eta}_K=\frac{2\bar{a}{\kappa}}{1-e^{-2{\kappa}t}},\quad\bar{\beta}_K=\frac{\sinh(2{\kappa}t)}{2{\kappa}},\quad
\bar{\sigma}_K=\left(\frac{e^{2{\kappa}t}-1}{2\kappa}\right)^{{\bar{a}}},\quad \bar{a}=\frac{m(\gamma-1)}{m(\gamma-1)+2}.$$ An analogous calculation in [@LiLi] gives,
$$\begin{aligned}
\label{WHessian}
\Big|\overline{\nabla}_i\overline{\nabla}_jv+\frac{\bar{\eta}_K}{m(\gamma-1)}\bar{g}_{ij}\Big|^2
=&\Big|\nabla_i\nabla_jv+\frac{\bar{\eta}_K}{m(\gamma-1)}g_{ij}\Big|^2+\Big|\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}v+\frac{\bar{\eta}_K}{m(\gamma-1)}g_{\alpha\beta}\Big|^2\\
=&\Big|\nabla_i\nabla_jv+\frac{\bar{\eta}_K}{m(\gamma-1)}g_{ij}\Big|^2+\frac{1}{m-n}\Big|\nabla v\cdot\nabla f-\frac{\bar{\eta}_K(m-n)}{m(\gamma-1)}\Big|^2\end{aligned}$$
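For completeness, here is a sketch of the fiber-block computation behind the second equality; it uses only the warped product Hessian relations displayed above (the mixed terms vanish since $\overline{\nabla}_i\overline{\nabla}_{\alpha}v=0$) together with $|g_{\alpha\beta}|^2=q$ on the $q$-dimensional fiber, where $q=m-n$: $$\begin{aligned}
\Big|\overline{\nabla}_{\alpha}\overline{\nabla}_{\beta}v+\frac{\bar{\eta}_K}{m(\gamma-1)}g_{\alpha\beta}\Big|^2
=&\Big|\Big(-\frac 1q\nabla v\cdot\nabla f+\frac{\bar{\eta}_K}{m(\gamma-1)}\Big)g_{\alpha\beta}\Big|^2\\
=&\,q\Big(-\frac 1q\nabla v\cdot\nabla f+\frac{\bar{\eta}_K}{m(\gamma-1)}\Big)^2
=\frac{1}{m-n}\Big|\nabla v\cdot\nabla f-\frac{\bar{\eta}_K(m-n)}{m(\gamma-1)}\Big|^2.\end{aligned}$$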
Combining with , one has $$\label{BERic}
{\rm \overline{Ric}}(\overline{\nabla} v,\overline{\nabla} v)={\rm Ric}^m_f(\nabla v,\nabla v),$$ and $$\label{WLaplacian}
\bar{\Delta}v=\Delta_{f}v.$$ Substituting , and into , we obtain the desired result .
Differential Harnack estimates and applications
===============================================
In this section, we prove a Qian type differential Harnack estimate [@QianB] for the porous medium equation on compact Riemannian manifolds with Ricci curvature bounded from below, which generalizes the Lu-Ni-Vazquez-Villani estimate [@LNVV] and the Huang-Huang-Li estimate [@HHL]. Harnack inequalities and a Laplacian estimate are derived as applications.
Let $\sigma(t)$ be a function of $t$ satisfying the assumptions (A1) and (A2), and let $\alpha(t), \varphi(t)$ be defined in . Suppose $u$ is a smooth solution to and $v=\frac{\gamma }{\gamma-1}u^{\gamma-1}$ is the pressure function. Define $$F_{\alpha}:=\alpha(t)\frac{v_t}{v}-\frac{|\nabla v|^2}{v}+\varphi(t).$$ Set ${\kappa}=K\sup\limits_{M\times(0,T]}(u^{\gamma-1})$; then for any $\gamma>1$, we have
$$\begin{aligned}
\label{GWDBochner1}
\square F_{\alpha}\ge&2\gamma \left\langle \nabla v ,\nabla F_{\alpha}\right\rangle+\frac1a\left((\gamma-1)\Delta v+\frac{a\sigma'}{2\sigma}+{a\gamma\kappa}\right)^2-\frac{\sigma'}{\sigma}F_{\alpha}
+(\alpha-1)\Big(\frac{v_t}v\Big)^2,\end{aligned}$$
$$\begin{aligned}
\label{GWDBochner2}
\square(\sigma F_{\alpha})\ge&2\gamma\sigma \left\langle \nabla v ,\nabla F_{\alpha}\right\rangle+\frac{\sigma}a\left((\gamma-1)\Delta v+\frac{a\sigma'}{2\sigma}+{a\gamma\kappa}\right)^2
+(\alpha-1)\sigma\Big(\frac{v_t}v\Big)^2.\end{aligned}$$
Using , we have $$\begin{aligned}
\label{WDBochner4}
\square F_{\alpha}=&2\gamma\left\langle \nabla v ,\nabla F_{\alpha}\right\rangle+2(\gamma-1)\Big(|\nabla\nabla v|^2+{\rm Ric}(\nabla v,\nabla v)\Big)\notag\\
&+\left(((\gamma-1)\Delta v)^2+(\alpha-1)\Big(\frac{v_t}v\Big)^2\right)
+\alpha'\Big(\frac{v_t}{v}\Big)+\varphi'.\end{aligned}$$ Applying the Cauchy-Schwarz inequality and , when ${\rm Ric}\ge-Kg$, we get
$$\begin{aligned}
\label{pmegradientest}
&\square F_{\alpha}-2\gamma \left\langle \nabla v ,\nabla F_{\alpha}\right\rangle\notag\\
\ge&\frac{1}{a}((\gamma-1)\Delta v)^2-2(\gamma-1)K|\nabla v|^2+(\alpha-1)\Big(\frac{v_t}v\Big)^2
+\alpha'\Big(\frac{v_t}{v}\Big)+\varphi'\notag\\
=&\frac{(\gamma-1)^2}{a}(\Delta v+\eta)^2-\frac{2 (\gamma-1)^2}{a}\eta\Delta v-\frac{(\gamma-1)^2}{a}\eta^2-2(\gamma-1)K|\nabla v|^2\notag\\
&+(\alpha-1)\Big(\frac{v_t}v\Big)^2
+\alpha'\Big(\frac{v_t}{v}\Big)+\varphi'\notag\\
\ge&\frac{(\gamma-1)^2}{a}(\Delta v+\eta)^2+\Big(\alpha'-\frac{2 (\gamma-1)}{a}\eta\Big)\frac{v_t}{v}-\Big(2\gamma{\kappa}
-\frac{2 (\gamma-1)}{a}\eta\Big)\frac{|\nabla v|^2}{v}\notag\\
&-\frac{(\gamma-1)^2}{a}\eta^2+(\alpha-1)\Big(\frac{v_t}v\Big)^2
+\varphi'\notag\\
=&\frac{(\gamma-1)^2}{a}(\Delta v+\eta)^2+\Big(2\gamma{\kappa}-\frac{2 (\gamma-1)}{a}\eta\Big)\left(\frac{\alpha'-\frac{2(\gamma-1)}{a}\eta}{2\gamma{\kappa}-\frac{2(\gamma-1)}{a}\eta}\frac{v_t}v- \frac{|\nabla v|^2}{v}+\varphi\right)\notag\\
&+(\alpha-1)\Big(\frac{v_t}v\Big)^2+\varphi'-\Big(2\gamma{\kappa}-\frac{2 (\gamma-1)}{a}\eta\Big)\varphi-\frac{(\gamma-1)^2}{a}\eta^2.\end{aligned}$$
Choosing the proper functions $\sigma(t)$ and $\eta(t)$ such that $\alpha(t)$ and $\varphi(t)$ satisfy the following system $$\label{sigeta}
\left\{
\begin{array}{rl}
\frac{\sigma'}{\sigma}&=\frac{2(\gamma-1)}{a}\eta-2\gamma{\kappa} \\
\alpha&=\frac{\alpha'-\frac{2(\gamma-1)}{a}\eta}{2\gamma{\kappa} -\frac{2 (\gamma-1)}{a}\eta} \\
\eta^2&=\frac{a}{(\gamma-1)^2}\left(\varphi'-\Big(2\gamma{\kappa}-\frac{2 (\gamma-1)}{a}\eta\Big)\varphi\right).
\end{array}
\right.$$ Plugging into , we get $$\square F_{\alpha}\ge2\gamma\left\langle \nabla v ,\nabla F_{\alpha}\right\rangle
+\frac{(\gamma-1)^2}{a}(\Delta v+\eta)^2-\frac{\sigma'}{\sigma}F_{\alpha}
+(\alpha-1)\Big(\frac{v_t}v\Big)^2,$$ that is . Inequality is a direct result of and $$\square G=\square (\sigma F_{\alpha})=\sigma\square F_{\alpha}+\sigma' F_{\alpha}.$$ In fact, the first equation in is equivalent to $
\frac{2(\gamma-1)}{a}\eta(t)=\frac{\sigma'}{\sigma}+2\gamma{\kappa}.
$ Substituting this into the last two equations in , we have $$\left\{
\begin{array}{rl}
(\sigma\alpha)'=&\sigma'+2\gamma{\kappa}\sigma\\
(\sigma\varphi)'=&\frac{a\sigma}{4}\Big(\frac{\sigma'}{\sigma}+2\gamma{\kappa}\Big)^2,
\end{array}
\right.$$ Integrating the above identities on $[0,t]$, we obtain the explicit expressions of $\alpha(t)$ and $\varphi(t)$ in .
Since $\gamma>1$, $a>0$ and $M$ is closed, the standard parabolic maximum principle in implies $F_{\alpha}\ge0$, that is in Theorem \[pmeGK\].
Let $\varsigma(t)$ be a constant speed geodesic with $\varsigma(t_1)=x_1$ and $\varsigma(t_2)=x_2$ such that $|\dot{\varsigma}(t)|=\frac{d(x_2,x_1)}{t_2-t_1}$. Using the differential Harnack estimate and Young's inequality, we have
$$\begin{aligned}
v(x_2,t_2)-v(x_1,t_1)=&\int^{t_2}_{t_1}v_t+\langle\nabla v,\dot{\varsigma}(t)\rangle dt\\
\ge&\int^{t_2}_{t_1}\left(\frac1{\alpha(t)}|\nabla v|^2-\frac{\varphi(t)}{\alpha(t)}v-\frac1{\alpha(t)}|\nabla v|^2-\frac{1}{4}\alpha(t)|\dot{\varsigma}(t)|^{2}\right)dt\\
\ge&-v_{max}\int^{t_2}_{t_1}\frac{\varphi(t)}{\alpha(t)}dt
-\frac{1}{4}\frac{d(x_2,x_1)^{2}}{(t_2-t_1)^{2}}\int^{t_2}_{t_1}\alpha(t)dt\end{aligned}$$
and
$$\begin{aligned}
\log\frac{v(x_2,t_2)}{v(x_1,t_1)}
=&\int^{t_2}_{t_1}\left(\frac{d}{dt}\log v(x,t)+\nabla\log v\cdot\dot{\varsigma}(t)\right)dt\\
\ge&\int^{t_2}_{t_1}\left(\frac{1}{\alpha(t)}\Big(|\nabla v|^2-\varphi(t)\Big)
-\frac1{\alpha(t)}|\nabla v|^2-\frac{1}{4}\frac{|\dot{\varsigma}(t)|^{2}}{v_{max}}\alpha(t)\right)dt\\
\ge&-\int^{t_2}_{t_1}\frac{\varphi(t)}{\alpha(t)}dt
-\frac{1}{4}\frac1{v_{max}}\frac{d(x_2,x_1)^{2}}{(t_2-t_1)^{2}}\int^{t_2}_{t_1}\alpha(t)dt.\end{aligned}$$
This finishes the proof of Corollary \[Harnack\].
Since $\alpha(t)>1$, a direct calculation implies that
$$\begin{aligned}
\Delta(v^{\beta})=&\beta v^{\beta-1}\left(\Delta v+(\beta-1)\frac{|\nabla v|^2}{v}\right)\\
=&\frac{1}{\alpha (\gamma-1) }\beta v^{\beta-1}\left(\alpha (\gamma-1)\Delta v+\alpha(\gamma-1)(\beta-1)\frac{|\nabla v|^2}{v}\right)\\
=&\frac{1}{\alpha (\gamma-1) }\beta v^{\beta-1}\left(\alpha \frac{v_t}{v}-\frac{|\nabla v|^2}{v}\right).\end{aligned}$$
The estimate follows from .
Acknowledgments {#acknowledgments .unnumbered}
===============
The author would like to thank Professor Xiang-Dong Li and Dr. Songzi Li for their interest and illuminating discussions. The author is also thankful to the anonymous reviewers for their constructive comments and suggestions on an earlier version of this paper.
[99]{}
D. Bakry, I. Gentil and M. Ledoux. *Analysis and geometry of Markov diffusion operators*, Springer, 2014.
A. Besse. *Einstein Manifolds*, Springer, Berlin, 1987.
B. Chow, P. Lu and L. Ni. *Hamilton’s Ricci flow*, Science press, 2006.
R. Hamilton. A matrix Harnack estimate for the heat equation, *Comm. Anal. Geom.* 1 (1993), no. 1, 113-126.
G. Y. Huang, Z. J. Huang and H. Z. Li. Gradient estimates for the porous medium equations on Riemannian manifolds, *J. Geom. Anal.* 23 (2013), no. 4, 1851-1875.
G. Y. Huang and H. Z. Li. Gradient estimates and entropy formulae of porous medium and fast diffusion equations for the Witten Laplacian, *Pacific J. Math.* 268 (2014), no. 1, 47-78.
B. Kotschwar and L. Ni. Gradient estimate for $p$-harmonic functions, $1/H$ flow and an entropy formula, *Ann. Sci. éc. Norm. Supér.* 42(4),(2009), no. 1, 1-36.
J. F. Li and X. Xu. Differential Harnack inequalities on Riemannian manifolds I: linear heat equation, *Adv. Math.* 226 (5) (2011), 4456-4491.
P. Li and S. T. Yau. On the parabolic kernel of the Schrödinger operator, *Acta Math.* 156 (1986), 153-201.
S. Li and X. -D. Li. $W$-entropy formula for the Witten Laplacian on manifolds with time dependent metrics and potentials, *Pacific J. Math.* 278 (2015), No. 1, 173-199.
S. Li and X. -D. Li. Harnack inequalities and $W$-entropy formula for Witten Laplacian on manifolds with the $K$-super Perelman Ricci flow, arXiv:1412.7034v1.
S. Li and X. -D. Li. $W$-entropy formulas on super Ricci flow and Langevin deformation on Wasserstein spaces over Riemannian manifolds, accepted by Science China Mathematics.
S. Li and X. -D. Li, Hamilton differential Harnack inequality and $W$-entropy for Witten Laplacian on Riemannian manifolds, *J. Funct. Anal.* (2017), https://doi.org/10.1016/j.jfa.2017.09.017
S. Li and X.-D. Li. On Harnack inequalities for Witten Laplacian on Riemannian manifolds with super Ricci flows, *Asian J. Math.* (2017), in press, Special Issue, in honor of Prof. N. Mok's 60th birthday, arXiv:1706.05304.
S. Li and X.-D. Li, $W$-entropy, super Perelman Ricci flows and $(K, m)$-Ricci solitons, arXiv:1706.07040.
X. -D. Li. Liouville theorems for symmetric diffusion operators on complete Riemannian manifolds, *J. Math. Pures Appl.* 84 (2005), 1295-1361.
X. -D. Li. Perelman’s entropy formula for the Witten Laplacian on Riemannian manifolds via Bakry-Emery Ricci curvature, *Math. Ann.* 353 (2012), no. 2, 403-437.
X. -D. Li. Hamilton’s Harnack inequality and the W-entropy formula on complete Riemannian manifolds, *Stochastic Process. Appl.* 126 (2016), no. 4, 1264-1283.
P. Lu, L. Ni, J. L. Vazquez and C. Villani. Local Aronson-Benilan estimates and entropy formulae for porous medium and fast diffusion equations on manifolds, *J. Math. Pures Appl.* 91 (2009), 1-19.
L. Ni. Monotonicity and Li-Yau-Hamilton Inequalities, *Surv. Differ. Geom.*, 12, Geometric flows, (2008), 251-301.
L. Ni. The entropy formula for linear equation, *J. Geom. Anal.* 14 (1), (2004), 87-100.
L. Ni. A note on Perelman’s LYH inequality, *Comm. Anal. Geom.* 14 (2006), no. 5, 883-905.
G. Perelman. The entropy formula for the Ricci flow and its geometric applications, preprint, arXiv:math/0211159.
B. Qian. Remarks on differential Harnack inequalities, *J. Math. Anal. Appl.* 409 (2014), 556-566.
G. F. Wei and W. Wylie. Comparison geometry for the Bakry-Emery Ricci tensor, *J. Diff. Geom.* 83 (2009), 377-405.
Y. -Z. Wang and W. Y. Chen. Gradient estimates for weighted diffusion equations on smooth metric measure spaces, *Journal of Mathematics (PRC)*, 33 (2013), no. 2, 248-258.
Y. -Z. Wang and W. Y. Chen. Gradient estimates and entropy formula for doubly nonlinear diffusion equations on Riemannian manifolds. *Math. Meth. Appl. Sci.* 37 (2014), 2772-2781.
Y. -Z. Wang, J. Yang and W. Y. Chen. Gradient estimates and entropy formulae for weighted $p$-heat equations on smooth metric measure spaces, *Acta Math. Sci. Ser. B Engl. Ed.* 33 (2013), no. 4, 963-974.
Y. -Z. Wang. Differential Harnack estimates and entropy formulae for weighted $p$-heat equations. *Results Math.* 71 (2017), no. 3-4, 1499-1520.
[^1]: The author is supported by the National Science Foundation of China(NSFC, 11701347).
---
abstract: 'Smart buildings have great potential for shaping an energy-efficient, sustainable, and more economic future for our planet as buildings account for approximately 40% of the global energy consumption. A key challenge for large-scale plug and play deployment of the smart building technology is the ability to learn a good control policy in a short period of time, i.e. having a low sample complexity for the learning control agent. Motivated by this problem and to remedy the issue of high sample complexity in the general context of cyber-physical systems, we propose an event-triggered paradigm for learning and control with variable-time intervals, as opposed to the traditional constant-time sampling. The *events* occur when the system state crosses the a priori-parameterized *switching manifolds*; this crossing triggers the learning as well as the control processes. Policy gradient and temporal difference methods are employed to learn the optimal switching manifolds which define the optimal control policy. We propose two event-triggered learning algorithms for stochastic and deterministic control policies. We show the efficacy of our proposed approach via designing a smart learning thermostat for autonomous micro-climate control in buildings. The event-triggered algorithms are implemented on a single-zone building to decrease buildings’ energy consumption as well as to increase occupants’ comfort. Simulation results confirm the efficacy and improved sample efficiency of the proposed event-triggered approach for online learning and control.'
address:
- 'Laboratory for Information and Decision Systems, MIT, USA'
- 'D. E. Shaw Group, New York, NY 10036 USA'
- 'Center for Energy Science and Technology, Skolkovo Institute of Science and Technology, 3 Nobel Street, Skolkovo, Moscow Region 121205, Russia'
author:
- Ashkan Haji Hosseinloo
- Alexander Ryzhov
- Aldo Bischi
- Henni Ouerdane
- Konstantin Turitsyn
- 'Munther A. Dahleh'
bibliography:
- 'sample.bib'
title: 'Data-driven control of micro-climate in buildings; an event-triggered reinforcement learning approach'
---
Event-triggered learning, smart buildings, reinforcement learning, data-driven control, energy efficiency, cyber-physical systems
Introduction {#intro}
============
Buildings account for approximately 40% of global energy consumption, about half of which is used by heating, ventilation, and air conditioning (HVAC) systems [@nejat2015global; @wei2017deep], the primary means to control micro-climate in buildings. Furthermore, buildings are responsible for one-third of global energy-related greenhouse gas emissions [@nejat2015global]. Hence, even an incremental improvement in the energy efficiency of buildings and HVAC systems goes a long way towards building a sustainable, more economic, and energy-efficient future. In addition to their economic and environmental impacts, HVAC systems can also affect productivity and decision-making performance of occupants in buildings through controlling indoor thermal and air quality [@satish2012co2; @wargocki2017ten]. For all these reasons micro-climate control in buildings is an important issue for its large-scale economic, environmental, health-related, and societal effects.\
The main goal of the micro-climate control in buildings is to minimize the building’s (mainly HVAC’s) energy consumption while improving or respecting some notion of occupants’ comfort. Despite its immense importance, micro-climate control in buildings is often very energy-inefficient. HVAC systems are traditionally controlled by rule-based strategies and heuristics where an expert uses best practices to create a set of rules that control different HVAC components such as rule-based ON/OFF and conventional PID controllers [@levermore2013building; @dounis2009advanced]. These control methods are often far from optimal as they do not take into account the system dynamics model of the building i.e. the building thermodynamics and stochastic disturbances e.g. weather conditions or occupancy status. To overcome some of these shortcomings, more advanced model-based approaches have been proposed. In this category Model Predictive Control (MPC) is perhaps the most promising and extensively-studied method in the context of buildings climate control [@oldewurtel2012use; @ryzhov2019model; @afram2014theory; @smarra2018data].\
Despite its potential benefits, performance and reliability of MPC and other model-based control methods depend highly on the accuracy of the building thermodynamics model and prediction of the stochastic disturbances. However, developing an accurate model for a building is extremely time-consuming and resource-intensive, and hence, not practical in most cases. Moreover, a once accurate developed model of a building could become fairly inaccurate over time due to, for instance, renovation or wear and tear of the building. Furthermore, at large scales, MPC like many other advanced model-based techniques may require formidable computational power if a real-time (or near real-time) solution is required [@marantos2019rapid]. Last but not least, traditional and model-based techniques are inherently building-specific and not easily transferable to other buildings.\
To remedy the above-mentioned issues of model-based climate control in buildings and towards building autonomous *smart* homes, data-driven approaches for HVAC control have attracted the interest of many researchers in the recent years. The concept of *smart* homes where household devices (e.g. appliances, thermostats, and lights) can operate efficiently in an autonomous, coordinated, and adaptive fashion, has been around for a couple of decades [@mozer1998neural]. However, with recent advances in Internet of Things (IoT) technology (cheap sensors, efficient data storage, etc.) on the one hand [@minoli2017iot], and immense progress in data science and machine learning tools on the other hand, the idea of smart homes with data-driven HVAC control systems looks ever more realistic.\
Among different data-driven control approaches, reinforcement learning (RL) has found more attention in the recent years due to enormous recent algorithmic advances in this field as well as its ability to learn efficient control policies solely from experiential data via trial and error. This study focuses on an RL approach and hence, we next discuss some of the related studies using reinforcement learning for energy-efficient controls in buildings followed by our contribution.\
The remainder of this article is organized as follows. Section \[related\] reviews the related work and highlights our contributions in this study. The problem is stated and mathematically formulated in section \[MPDframework\], after which the idea of switching manifolds for event-triggered control is introduced in section \[manifolds\]. Combining the average-reward set-up and event-triggered control paradigm in sections \[MPDframework\] and \[manifolds\], we present our event-triggered reinforcement learning algorithms in section \[RL\]. Finally, the implementation and simulation results are discussed in section \[results\] before the article is concluded in section \[conclusion\].
Related work and contribution {#related}
=============================
Tabular RL
----------
The Neural Network House project [@mozer1998neural] is perhaps the first application of reinforcement learning in building energy management system. In this seminal work, the author explains how tabular Q-learning, one of the early versions of the popular Q-learning approach in RL, was employed to control lighting in a residential house so as to minimize energy consumption subject to occupants’ comfort constraint [@mozer1997parsing]. Tabular Q-learning was later used in a few other studies for controlling passive and active thermal storage inventory in commercial buildings [@liu2006experimental1; @liu2006experimental2], heating system[@barrett2015autonomous], air-conditioning and natural ventilation through windows [@chen2018optimal], photovoltaic arrays and geothermal heat pumps [@yang2015reinforcement], and lighting and blinds [@cheng2016satisfaction].\
Given a fully observable state and infinite exploration, tabular Q-learning is guaranteed to converge on an optimal policy. However, the tabular version of Q-learning is limited to systems with discrete states and actions, and becomes very data-intensive, hence very slow at learning, when the system has a large number of state-action combinations. For instance, the simulated RL training in [@liu2006experimental2] for a fairly simple building required up to 6000 days (roughly 17 years) of data collection. To remedy some of these issues, other versions of Q-learning such as Neural Fitted Q-iteration (NFQ) and Deep RL (DRL) were employed, where function approximation techniques are used to learn an approximation of the state-action (Q) function.
RL with action-value function approximation
-------------------------------------------
Dalamagkidis et al. [@dalamagkidis2007reinforcement] used a linear function approximation technique to approximate the Q-function in their Q-learning RL to control a heat pump and an air ventilation subsystem using sensory data on indoor and outdoor air temperature, relative humidity, and $\mathrm{CO_2}$ concentration. Fitted Q Iteration (FQI) developed by Ernst et al. [@ernst2005tree] is a batch RL method that iteratively estimates the Q-function given a fixed batch of past interactions. An online version that uses a neural network, neural fitted Q-iteration, has been proposed by [@riedmiller2005neural]. In a series of studies [@ruelens2015learning; @ruelens2016residential; @ruelens2016reinforcement], Ruelens et al. studied the application of FQI batch RL to schedule thermostatically controlled HVAC systems such as heat pumps and electric water heaters in different demand-response set-ups. Marantos et al. [@marantos2018towards] applied NFQ batch RL to control the thermostat set-point of a single-zone building where input state was four-dimensional (outdoor and indoor temperatures, solar radiance, and indoor humidity) and action was one-dimensional with three discrete values.\
Tremendous algorithmic and computational advancements in deep neural networks in the recent years have given rise to the field of deep reinforcement learning (DRL) where deep neural networks are combined with different RL approaches. This has resulted in numerous DRL algorithms (DQN, DDQN, RBW, A3C, DDPG, etc.) in the past few years, some of which have been employed for data-driven micro-climate control in buildings. Wei et al. [@wei2017deep] claim to be the first to apply DRL to HVAC control problem. They used Deep Q-Network (DQN) algorithm [@mnih2015human] to approximate the Q-function with discrete number of actions. To remedy some of the issues of the DQN algorithm such as overestimation of action values, improvements to this algorithm have been made resulting in a bunch of other algorithms like Double DQN (DDQN) [@van2016deep] and Rainbow (RWB) [@hessel2018rainbow]. Avendano et al. [@avendano2018data] applied DDQN and RWB algorithms to optimize energy efficiency and comfort in a 2-zone apartment; they considered temperature and $\mathrm{CO_2}$ concentration for comfort and used heating and ventilation costs for energy efficiency.
RL with policy function approximation
-------------------------------------
All the above-mentioned RL-based studies rely on learning the optimal state-value or action-value (Q) functions based on which the optimal policy is derived. Parallel to this value-based approach there is a policy-based approach where the RL agent tries to directly learn the optimal policy (control law). Policy gradient algorithms are perhaps the most popular class of RL algorithms in this approach. The basic idea behind these algorithms is to adjust the parameters of the policy in the direction of a performance gradient [@sutton2000policy; @silver2014deterministic]. A distinctive advantage of policy gradient algorithms is their ability to handle continuous actions as well as stochastic policies. Wang et al. [@wang2017long] employed Monte Carlo actor-critic policy gradient RL with LSTM actor and critic networks to control HVAC system of a single-zone office. Deep Deterministic Policy Gradient (DDPG) algorithm [@lillicrap2015continuous] is another powerful algorithm in this class that handles deterministic policies. DDPG was used in [@gao2019energy] and [@li2019transforming] to control energy consumption in a single-zone laboratory and 2-zone data center buildings, respectively.
Sample efficiency
-----------------
Despite the sea-change advances in RL, sample efficiency is still the bottleneck for many real-world applications with slow dynamics. Building micro-climate control is one such application since thermodynamics in buildings is relatively slow; it can take a few minutes to an hour to collect an informative data point. The time-intensive process of data collection makes the online training of the RL algorithms so long that it practically becomes impossible to have a plug & play RL-based controller for HVAC systems. For instance, training the DQN RL algorithm in [@wei2017deep] for a single-zone building required about 100 months of sensory data. The required data collection period for training the DDQN and RWB algorithms in [@avendano2018data] were reported as 120 and 90 months, respectively. A few different techniques have been proposed to alleviate the RL’s training sample complexity when it comes to real-world applications, in particular buildings, which are discussed next.\
The presence of multiple time scales in some real-world applications is one reason for the sample inefficiency of many RL algorithms. For instance, for precise control of a set-point temperature it is more efficient to design a controller that works on a coarse time scale in the beginning, when the temperature is far from the set-point temperature, and on a finer time scale otherwise. To address this issue, double- and multiple-time-scale reinforcement learning methods are proposed in [@riedmiller1998high; @li2015multi]. Reducing the system's dimension, if possible, is another way to shorten the online training period. Different dimensionality reduction techniques such as auto-encoders [@ruelens2015learning] and convolutional neural networks (CNN) [@claessens2016convolutional] were used in RL-based building energy management control where the system states are high dimensional.\
Another approach to reduce the training period is based on developing a data-driven model first, and then using it for offline RL training or direct planning. This approach is similar to the Dyna architecture [@sutton1991dyna; @sutton2018reinforcement]. Costanzo et al. [@costanzo2016experimental] used neural networks to learn temperature dynamics of a building heating system to feed training of their FQI RL algorithm, while Naug et al. [@naug2019online] used support vector regression to develop a consumption energy model of a commercial building for training of their DDPG algorithm. In [@nagy2018deep] and [@kazmi2018gigawatt] data-driven models of thermal systems are developed in the form of neural networks and a partially observable MDP transition matrix, respectively, which are then used for finite horizon planning. As another example, Kazmi et al. [@kazmi2019multi] used multi-agent RL to learn an MDP model of identical thermostatically controlled loads which was then used for deriving the optimal policy by Monte Carlo techniques.
Contributions
-------------
Despite all the recent efforts, none of the proposed methods can be used for a plug & play deployment of smart HVAC systems without pre-training due to their large sample complexity. In addition, all the reinforcement learning studies in building energy management systems have formulated the problem based on *episodic* tasks, as opposed to *continuing* tasks. Micro-climate control in buildings is indeed a continuing task problem and should be formulated as such. Furthermore, the algorithms in these studies are all based on periodic sampling with fixed time intervals. This is not very sample-efficient in many cases and is certainly not desirable in resource-constrained wireless embedded control systems [@heemels2012introduction]. To remedy these issues we make the following major contributions:\
- We develop a general framework called *switching manifolds* for data-efficient control of HVAC systems;
- Based on the idea of switching manifolds, we propose an event-triggered paradigm for learning and control with an application to the HVAC systems;
- We develop and formulate the event-triggered control problem with variable-duration sampling as an undiscounted continuing task reinforcement learning problem with average reward set-up;
- We demonstrate the effectiveness of our proposed approach on a small-scale building via simulation in EnergyPlus software.
Problem statement and MDP framework {#MPDframework}
===================================
The aim of this study is to provide a plug & play control algorithm that can efficiently learn to optimize HVAC energy consumption and occupants’ comfort in buildings. To this end we first formulate the sequential decision-making control problem as a Markov decision process (MDP) in this section.\
The MDP is defined by a state space $\mathcal{S}$, an action space $\mathcal{A}$, a stationary transition dynamics distribution with conditional density $p(s_{k+1}|s_k,a_k)$ where $s_k$ and $a_k$ are the state and action at the time indexed by $k$, i.e. when the $k^{\mathrm{th}}$ *event* occurs, and a reward function $r: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$. States and actions are in general continuous (e.g. temperature state or temperature threshold action). Events are occasions when control actions are taken and the learning takes place; hence, they define the transition times. These events occur when certain conditions are met and are explained in detail in section \[manifolds\]. Actions are taken at these events based on a stochastic ($\pi_{\theta}:\mathcal{S}\rightarrow\mathcal{P(A)}$) or deterministic ($\mu_{\theta}:\mathcal{S}\rightarrow\mathcal{A}$) policy, where $\mathcal{P(A)}$ is the set of probability measures on $\mathcal{A}$ and $\theta \in \mathbb{R}^n$ is a vector of $n$ parameters.\
Taking action $a_{k-1}$ at state $s_{k-1}$ moves the system to a new state $s_{k}$ and results in a reward of $r_{k}$. Let us assume this transition takes $\Delta t_{k}$ unit time ($\Delta t_{k}=t_{k}-t_{k-1}$). Following the policy, dynamics of the MDP evolves and results in a trajectory of states, actions, and rewards; $s_0, a_0, r_1, ..., s_{k-1}, a_{k-1}, r_k, ...$. We define the performance measure that we want to maximize as the *average rate of reward per unit time* or simply *average reward rate*: $$J(\theta) \doteq r(\pi) \doteq \lim_{h \to \infty} \mathbb{E} \left[\left. \frac{\sum_{k=1}^{h}r_k}{\sum_{k=1}^{h}\Delta t_k} \right|s_0,a_{0:k-1}\sim \pi \right].
\label{Eq:metric}$$
This is different from and not proportional to the average rate of reward per time step if transition time periods $\Delta t_k$ are not equal, which will be the case in this study. We also define the *differential* return $G_k$ as: $$G_k \doteq r_{k+1}-r(\pi)\Delta t_{k+1}+ r_{k+2}-r(\pi)\Delta t_{k+2}+ ... \:.
\label{Eq:return}$$ In this definition of return the average reward is subtracted from the actual sample reward in each step to measure the accumulated reward relative to the average reward. Similarly, we can define the state-value $V_{\pi}(s)$ and action-value functions $Q_{\pi}(s,a)$ as: $$\begin{aligned}
V_{\pi}(s)=\sum_a \pi(a|s)\sum_{s'} p(s'|s,a)\left(r-r(\pi)\Delta t+V_{\pi}(s')\right) \nonumber \\
Q_{\pi}(s,a)=\sum_{s'} p(s'|s,a)\left(r-r(\pi)\Delta t+\sum_{a'}\pi(a'|s')Q_{\pi}(s',a')\right),
\label{Eq:VnQ}\end{aligned}$$ where, $\pi(a|s)$ is the conditional probability density at $a$ associated with the policy. Although the average reward set-up is formulated here for stochastic policies, it is applicable to deterministic policies as well with minor modification to the equations above. In the next section, we introduce the idea of switching manifolds and learning and controlling when needed.
Switching manifolds and event-triggered control {#manifolds}
===============================================
Many HVAC control devices work based on a discrete set of control actions, e.g. ON/OFF switches or discrete-scale knobs. In many practical applications the optimal control is not highly discontinuous or non-smooth over the system's state space, or at least there often exists such a control policy that is not far from optimal. In this case optimal (or near-optimal) actions are separated by some boundaries in the state space. We call these boundaries *switching manifolds* since it is only across these boundaries that the controller needs to switch actions. Figure \[Fig:1\] illustrates the concept of switching manifolds for two simple systems with two-dimensional state vectors and 2 or 4 actions.\
Switching manifolds fully define a corresponding policy, hence, it is more sample-efficient to learn these manifolds or a parameterized version of them rather than a full tabular policy. Let us consider one such manifold parameterized by a parameter vector $\theta_g$ as $g^{\theta_g}(s)=0$. A different action is taken when the system dynamics cross this manifold, or in other words when $g^{\theta_g}(s)=0$ holds true. To make it more intuitive we rewrite this manifold equation in terms of one particular state (e.g. temperature in the HVAC example) as $s_2=f^{\theta_f}\left(s-\left\{s_2\right\}\right)$. Given the other states of the system, we can now think of $s_2$ as a threshold $s_2^{\mathrm{th}}$, i.e. if state $s_2$ of the system reaches this threshold value of $s_2^{\mathrm{th}}$ we need to switch to the new action based on the switching manifolds mapping (Fig.\[Fig:1\](a) and Fig.\[Fig:1\](b) schematically illustrate two such mappings). Also, instead of the parameters or the actual physical actions we can think of these thresholds as the actions that the learning agent needs to take.\
So far we introduced the switching manifolds or the threshold policies as a family of policies among which we would like to search for an optimal policy via e.g. reinforcement learning. The manifold/threshold learning does not need to happen at constant time intervals. In fact, here we propose controlling and learning with variable-time intervals when actions and updates take place when specific events occur. By definition, these events occur when system dynamics reach the switching manifolds or equivalently when thresholds are reached.\
Here we further illustrate these concepts with a simple example. Let us consider a 1-zone building equipped with a heating system described by its state vector $s=[T, T_o, h_s]$, where $T$ and $T_o$ are indoor and outdoor temperatures and $h_s$ is the heater status ($h_s=1$ means heater is on and $h_s=0$ means it is off). Possible physical actions we can take are: turning the heater ON, turning the heater OFF, or doing nothing. Corresponding to this set of actions, we employ linear manifolds as an example and describe the parameterized temperature thresholds as: $T^{\mathrm{th}}_{\mathrm{OFF}}=\theta_1+\theta_2 T_o$ and $T^{\mathrm{th}}_{\mathrm{ON}}=\theta_3+\theta_4 T_o$. This is illustrated schematically in Fig. \[Fig:2\]. For a given parameter vector $\theta = [\theta_i|_{i=1,...,4}]^\top$ and outdoor temperature $T_o$, when the indoor temperature $T$ reaches the switch-off threshold ($T^{\mathrm{th}}_{\mathrm{OFF}}$) the heater is turned off and when it reaches the switch-on threshold ($T^{\mathrm{th}}_{\mathrm{ON}}$) the heater is turned on; otherwise, no action is taken. The deterministic action policy for the underlying MDP of this system could be written as $a=\mu_{\theta}(s)=[T^{\mathrm{th}}_{\mathrm{OFF}}, T^{\mathrm{th}}_{\mathrm{ON}}]$. Since at every event we need to decide for only one threshold (which will affect the next event), we can reduce the action dimension to only one by writing it as $a=\mu_{\theta}(s)=[1-h_s, h_s][T^{\mathrm{th}}_{\mathrm{ON}}, T^{\mathrm{th}}_{\mathrm{OFF}}]^\top$. This idea is applied to the stochastic policy in a similar way to decide for only one threshold temperature when an event occurs. In the next section, we propose actor-critic event-triggered RL algorithms with both stochastic and deterministic policies based on the average-reward MDP set-up presented in section \[MPDframework\] and the concept of switching manifolds introduced in this section.
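To make the event-triggered mechanism concrete, the following is a minimal Python sketch of the linear switching manifolds and the event detection for this example; the function and variable names are illustrative assumptions, not the implementation used for the experiments reported later.

``` python
# Minimal sketch (not the authors' code) of the linear switching manifolds
# and event detection for the 1-zone heater example.
def thresholds(theta, T_o):
    """theta = [theta1, theta2, theta3, theta4]; returns (T_off, T_on)."""
    T_off = theta[0] + theta[1] * T_o   # T^th_OFF = theta1 + theta2 * T_o
    T_on = theta[2] + theta[3] * T_o    # T^th_ON  = theta3 + theta4 * T_o
    return T_off, T_on

def event_and_action(T, T_o, h_s, theta):
    """Return (event_occurred, new_heater_status) for state s = [T, T_o, h_s]."""
    T_off, T_on = thresholds(theta, T_o)
    if h_s == 1 and T >= T_off:         # dynamics hit the switch-off manifold
        return True, 0
    if h_s == 0 and T <= T_on:          # dynamics hit the switch-on manifold
        return True, 1
    return False, h_s                   # no event: do nothing
```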
Reinforcement learning algorithm and implementation {#RL}
===================================================
Most, if not all, of the popular RL algorithms (both stochastic and deterministic) are based on episodic-task MDPs. Furthermore, transition time periods do not play any role in these algorithms; this is not an issue for applications where either the transition time intervals are irrelevant to the optimization problem e.g. in a game play, or these intervals are assumed to have fixed duration. None of these hold for the problem of micro-climate control in buildings where we want to optimize energy and occupants’ comfort in a continuing fashion with event-triggered sampling and control which result in variable-time intervals.\
Here we consider both stochastic and deterministic policy gradient reinforcement learning for event-triggered control. Our algorithms are based on stochastic and deterministic policy gradient theorems [@sutton2000policy; @silver2014deterministic] with modifications to cater for average-reward set-up and variable-time transition intervals. These theorems are as follows: $$\begin{aligned}
\nabla_{\theta}J(\pi_{\theta})=\mathbb{E}_{s\sim \rho^{\pi},a\sim \pi_{\theta}}\left[ \nabla_{\theta}\log \pi_{\theta}(a|s)Q^{\pi}(s,a) \right] \nonumber \\
\nabla_{\theta}J(\mu_{\theta})=\mathbb{E}_{s\sim \rho^{\mu}}\left[\nabla_{\theta}\mu_{\theta}(s)\nabla_a Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)} \right],
\label{Eq:theorems}\end{aligned}$$ where, $\rho^{\pi}$ and $\rho^{\mu}$ are stationary state distributions under stochastic and deterministic policies, $\pi_{\theta}$ and $\mu_{\theta}$. The actor components of our proposed algorithms employ Eq.(\[Eq:theorems\]) to adjust and improve the parameterized policies. To this end we use approximated action-value $Q^w(s,a)$ and state-value $V^v(s)$ functions by parameterizing their true functions with parameter vectors $w$ and $v$, respectively. We employ temporal difference (TD) Q-learning for the critic to estimate the state-value or action-value functions. In this set-up we also replace the true average reward rate $r(\pi)$ (or $r(\mu)$) by an approximation $\bar{r}$, which we learn via the same temporal difference error. We use the following TD errors ($\delta_k$) for the stochastic and deterministic policies, respectively: $$\begin{aligned}
\delta_k & \doteq& r_{k+1}-\bar{r}_k \Delta t_{k+1} + V^{v_k}(s_{k+1})- V^{v_k}(s_{k}) \\
\delta_k & \doteq& r_{k+1}-\bar{r}_k \Delta t_{k+1} + Q^{w_k}(s_{k+1},a_{k+1})- Q^{w_k}(s_{k},a_k),
\label{Eq:TD}\end{aligned}$$ where, $\bar{r}_k$, $v_k$, and $w_k$ are the average reward and parameters at time $t_k$. With this definition of TD errors we update the average reward as follows: $$\bar{r}_{k+1} = \bar{r}_{k} + \alpha_{\bar{r}} \frac{\delta_k}{\Delta t_{k+1}},
\label{Eq:update}$$ where, $\alpha_{\bar{r}}$ is the learning rate for the average reward update. Having explained the average-reward set-up and the event-triggered control and learning, we can now present the pseudocodes for actor-critic algorithms for continuing tasks with both deterministic and stochastic policies.\
Algorithm \[Algo:stochastic\] shows the pseudocode for stochastic policies with eligibility traces while algorithm \[Algo:deterministic\] shows its deterministic counterpart. Algorithm \[Algo:deterministic\] is an event-triggered compatible off-policy deterministic actor-critic algorithm with a simple Q-learning critic (ET-COPDAC-Q). For this algorithm we use compatible function approximator for the $Q^w(s,a)$ in the form of $(a-\mu_{\theta}(s))^{\mathrm{T}}{\nabla_{\theta}\mu_{\theta}(s)}^{\mathrm{T}}w + V^v(s)$. Here $V^v(s)$ is any differentiable baseline function independent of $a$, such as a state-value function. We parameterize the baseline function linearly in its feature vector as $V^v(s)=v^{\mathrm{T}}\phi_v(s)$, where, $\phi_v(s)$ is a feature vector. In the next section, we implement these algorithms on a simple building model and assess their efficacy.\
Algorithm \[Algo:stochastic\] (event-triggered actor-critic with eligibility traces, stochastic policy):

- Input: a differentiable stochastic policy parameterization $\pi_{\theta}(a|s)$ and a differentiable state-value function parameterization $V^{v}(s)$; algorithm parameters: $\lambda_v \in [0,1]$, $\lambda_{\theta} \in [0,1]$, $\alpha_v>0$, $\alpha_{\theta}>0$, $\alpha_{\bar{r}}>0$.
- Initialize $\bar{r} \in \mathbb{R}$ (e.g. to 0); initialize state-value and policy parameters $v \in \mathbb{R}^{d_v}$ and $\theta \in \mathbb{R}^{d_{\theta}}$ (e.g. to [0]{}); initialize the state vector $s \in \mathcal{S}$; set the eligibility trace vectors $z_v \leftarrow 0$ ($d_v$ components) and $z_{\theta} \leftarrow 0$ ($d_{\theta}$ components).
- Loop forever (once per event):
    - $a \sim \pi_{\theta}(\cdot|s)$
    - Execute action $a$ and wait until the next event occurs; then observe $s'$, $r$, $\Delta t$
    - $\delta \leftarrow r-\bar{r} \Delta t + V^{v}(s')- V^{v}(s)$
    - $\bar{r} \leftarrow \bar{r}+\alpha_{\bar{r}}\frac{\delta}{\Delta t}$
    - $z_v \leftarrow \lambda_v z_v + \nabla_v V^v(s)$
    - $z_{\theta} \leftarrow \lambda_{\theta} z_{\theta} + \nabla_{\theta}\log\pi_{\theta}(a|s)$
    - $v \leftarrow v+\alpha_v \delta z_v$
    - $\theta \leftarrow \theta+\alpha_{\theta} \delta z_{\theta}$
    - $s \leftarrow s'$
Algorithm \[Algo:deterministic\] (ET-COPDAC-Q, deterministic policy):

- Input: a differentiable deterministic policy parameterization $\mu_{\theta}(s)$, a differentiable state-value function parameterization $V^{v}(s)$, and a differentiable action-value function parameterization $Q^{w}(s,a)$; algorithm parameters: $\alpha_v>0$, $\alpha_w>0$, $\alpha_{\theta}>0$, $\alpha_{\bar{r}}>0$.
- Initialize $\bar{r} \in \mathbb{R}$ (e.g. to 0); initialize state-value, action-value, and policy parameters $v \in \mathbb{R}^{d_v}$, $w \in \mathbb{R}^{d_w}$ and $\theta \in \mathbb{R}^{d_{\theta}}$ (e.g. to [0]{}); initialize the state vector $s \in \mathcal{S}$; initialize a random process $F_k$ for action exploration.
- Loop forever (once per event):
    - $a = \mu_{\theta}(s)+F$
    - Execute action $a$ and wait until the next event occurs; then observe $s'$, $r$, $\Delta t$
    - $\delta \leftarrow r-\bar{r} \Delta t + Q^{w}(s',\mu_{\theta}(s'))- Q^{w}(s,a)$
    - $\bar{r} \leftarrow \bar{r}+\alpha_{\bar{r}}\frac{\delta}{\Delta t}$
    - $v \leftarrow v+ \alpha_v \delta \nabla_v V^v(s)=v+ \alpha_v \delta \phi_v(s)$
    - $w \leftarrow w+\alpha_w \delta \nabla_w Q^w(s,a)=w+\alpha_w \delta \left(a-\mu_{\theta}(s)\right)^\mathrm{T}{\nabla_{\theta}\mu_{\theta}(s)}^{\mathrm{T}}$
    - $\theta \leftarrow \theta+\alpha_{\theta} \nabla_{\theta}\mu_{\theta}(s)\nabla_a Q^w(s,a)|_{a=\mu_{\theta}(s)}=\theta+\alpha_{\theta}\nabla_{\theta}\mu_{\theta}(s)\left({\nabla_{\theta}\mu_{\theta}(s)}^{\mathrm{T}} w\right)$
    - $s \leftarrow s'$
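To make the update rules of Algorithm \[Algo:deterministic\] concrete, the following is a minimal Python sketch of one ET-COPDAC-Q agent for the linear-in-features threshold policy used later in section \[results\]; the class and variable names, the learning rates, and the Gaussian exploration noise scale are illustrative assumptions rather than the implementation behind the reported experiments.

``` python
import numpy as np

# Deterministic threshold policy mu_theta(s) = theta^T phi(s) with
# phi(s) = [1 - h_s, h_s]^T, and the compatible Q^w used in the text.
def phi(h_s):
    return np.array([1.0 - h_s, float(h_s)])

class ETCOPDACQ:
    def __init__(self, a_v=0.01, a_w=0.01, a_th=0.005, a_rbar=0.001,
                 theta0=(11.0, 19.0), sigma_explore=0.5):
        self.theta = np.array(theta0, float)   # [theta_ON, theta_OFF]
        self.w = np.zeros(2)                   # compatible Q weights
        self.v = np.zeros(2)                   # baseline V weights
        self.rbar = 0.0                        # average reward rate estimate
        self.a_v, self.a_w, self.a_th, self.a_rbar = a_v, a_w, a_th, a_rbar
        self.sigma = sigma_explore             # exploration noise scale (assumed)

    def mu(self, h_s):
        return float(self.theta @ phi(h_s))

    def act(self, h_s, explore=True):
        noise = np.random.randn() * self.sigma if explore else 0.0
        return self.mu(h_s) + noise            # a = mu_theta(s) + F

    def Q(self, h_s, a):
        f = phi(h_s)                           # grad_theta mu = phi(s)
        return (a - self.theta @ f) * (f @ self.w) + self.v @ f

    def update(self, h_s, a, r, dt, h_s_next):
        # TD error with variable transition time dt (average-reward set-up)
        delta = (r - self.rbar * dt
                 + self.Q(h_s_next, self.mu(h_s_next)) - self.Q(h_s, a))
        self.rbar += self.a_rbar * delta / dt
        f = phi(h_s)
        self.v += self.a_v * delta * f
        self.w += self.a_w * delta * (a - self.theta @ f) * f
        self.theta += self.a_th * f * (f @ self.w)   # deterministic policy gradient step
```

At each event, a supervisory loop would call `act` to pick the next switching threshold and, at the following event, call `update` with the observed reward and the elapsed time between the two events.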
Simulations and results {#results}
=======================
In this section we implement our proposed algorithms to control the heating system of a one-zone building in order to minimize energy consumption without jeopardizing the occupants’ comfort. To this end we first describe the building models that we use for simulation, followed up by designing the rewards to use for our learning control algorithms. Then we explain the policy parameterization used in the simulations before we present the simulation results.
Building models {#BuildingModels}
---------------
We use two one-zone building models: a simplified linear model characterized by a first-order ordinary differential equation, and a more realistic building modeled in the EnergyPlus software. The linear model for the one-zone building with the heating system is as follows: $$C \frac{dT}{dt}+K(T-T_o)=h_s(t) \dot{Q}_h,
\label{Eq:linear}$$ where, $C=2000 \, kJK^{-1}$ is the building’s heat capacity, $K=325 WK^{-1}$ is the building’s thermal conductance, and $\dot{Q}_h=13 \, kW$ is the heater’s power. As defined earlier, $h_s(t) \in \{0,1\}$ is the heater status, and $T_o=-10 \, \degree C$ is the outdoor temperature.\
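For illustration only, a minimal forward-Euler integration of this linear model between two consecutive events could look as follows; the step size, unit conversions, and safeguard horizon are assumptions made for this sketch, not the settings of the actual simulations.

``` python
# Illustrative constants from the linear model above (SI units).
C = 2000e3        # heat capacity in J/K (2000 kJ/K)
K = 325.0         # thermal conductance in W/K
Q_h = 13e3        # heater power in W
T_o = -10.0       # outdoor temperature in degC

def simulate_between_events(T, h_s, T_on, T_off, dt=1.0, t_max=24 * 3600.0):
    """Integrate dT/dt = (h_s*Q_h - K*(T - T_o)) / C until a switching
    manifold is crossed; return (indoor temperature, elapsed time in s)."""
    t = 0.0
    while t < t_max:
        T += dt * (h_s * Q_h - K * (T - T_o)) / C
        t += dt
        if h_s == 1 and T >= T_off:   # event: switch-off threshold reached
            return T, t
        if h_s == 0 and T <= T_on:    # event: switch-on threshold reached
            return T, t
    return T, t                        # safeguard: no event within t_max
```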
In addition to the simplified linear building model, a more realistic building modeled in EnergyPlus is also used for implementation of our proposed learning control algorithms. The building modeled in EnergyPlus is a single-floor rectangular building with dimensions of $15.240 \times 15.240 \times 4.572 \, m^3$ ($50 \times 50 \times 15 \, ft^3$). The walls and the roof are modeled massless with thermal resistance of $1.291\, m^2\,K/W$ and $2.456\, m^2\,K/W$, respectively. All the walls as well as the roof are exposed to the Sun and wind, and have thermal and solar absorptance of 0.90 and 0.75, respectively. The floor is made up of a 4-inch h.w. concrete block with conductivity of $1.730\, W/m\,K$, density of $2242.585\, kg/m^3$, specific heat capacity of $836.800\, J/kg\,K$, and thermal and solar absorptance of 0.90 and 0.65, respectively. The building is oriented 30 degrees east of north. EnergyPlus Chicago Weather data (Chicago-OHare Intl AP 725300) is used for the simulation. An electric heater with nominal heating rate of $10\, kW$ is used for space heating.
Rewards
-------
Comfort and energy consumption are controlled by rewards or penalties. Rewards in RL play the role of cost function in controls theory and therefore proper design of the rewards is of paramount importance in the problem formulation. Here we formulate the reward with three components; one discrete and two continuous components: $$r_{k+1}=r_{sw}+\int_{t_k^+}^{t_{k+1}^+}r_e h_s(t)+r_c \left(T-T_d\right)^2 dt,
\label{Eq:reward}$$ where, $-r_{sw}=0.8 \, unit$ is the discrete penalty for switching the heater on/off, introduced to avoid frequent switching. Frequent on/off switching can shorten the system's life-cycle or result in unpleasant noisy operation of the heater. Here, *unit* is an arbitrary scale for quantifying different rewards. Having the heater on is penalized continuously in time with the rate of $-r_e=1.2/3600 \, {unit}\,s^{-1}$. This penalty is responsible for limiting the power consumption; hence, for a more intuitive meaning, $r_e$ could be chosen such that the reward unit (*unit*) equals the monetary cost unit of the power consumption, e.g. the dollar currency. Here we define the occupants' discomfort rate as proportional to the square of the deviation from their desired temperature $T_d$, and the coefficient of proportionality is $r_c=-1.2/3600 \, unit\,K^{-2}\,s^{-1}$.
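As a sketch of how the reward accrued between two consecutive events could be evaluated numerically from a sampled indoor-temperature trajectory, one may accumulate the integral in Eq.(\[Eq:reward\]) with a simple Euler sum; the `switched` flag, the sampling step, and the value of $T_d$ below are illustrative assumptions.

``` python
# Illustrative reward accumulation between events k and k+1, following Eq. (reward).
r_sw = -0.8            # switching penalty (unit)
r_e  = -1.2 / 3600.0   # heater-on penalty rate (unit per second)
r_c  = -1.2 / 3600.0   # discomfort penalty rate (unit per K^2 per second)
T_d  = 15.0            # desired temperature in degC (assumed value)

def step_reward(T_trajectory, h_s, dt, switched):
    """T_trajectory: indoor temperatures sampled every dt seconds on (t_k, t_{k+1}]."""
    r = r_sw if switched else 0.0
    for T in T_trajectory:
        r += (r_e * h_s + r_c * (T - T_d) ** 2) * dt
    return r
```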
stochastic and deterministic policy parameterization
----------------------------------------------------
As discussed in section \[manifolds\], although we can define actions as both of the thresholds at each event, we only need one of the thresholds at each event. For instance, when the system has just hit the switch-off manifold ($T_{\mathrm{OFF}}^{\mathrm{th}}$), we only need to decide for the next switch-on manifold ($T_{\mathrm{ON}}^{\mathrm{th}}$). This helps to reduce the action dimension to one. Next, we present the parameterization for the stochastic policy approach followed up by the deterministic policy approach. In the stochastic policy method, we constrain the policy distributions to the Gaussian distributions of the form: $$\pi_{\theta}(T^{\mathrm{th}}|s) \doteq \frac{1}{\sigma_{\theta^{\sigma}}(s)\sqrt{2\pi}} \exp\left(-\frac{\left(T^{\mathrm{th}}-m_{\theta^{m}}(s)\right)^2}{2{\sigma_{\theta^{\sigma}}(s)}^2}\right),
\label{Eq:Gaussian}$$ where, $m_{\theta^{m}}(s)$, and $\sigma_{\theta^{\sigma}}(s)$ are mean and standard deviation of the action that are parameterized by parameter vectors $\theta^{m}$ and $\theta^{\sigma}$, respectively ($\theta=[\theta^m, \theta^{\sigma}]^{\mathrm{T}}$). Here, we consider constant switch-on and switch-off thresholds and parameterize the mean and standard deviation as follows: $$\begin{aligned}
m_{\theta^{m}}(s) &\doteq& {\theta^{m}}^\top \phi(s) \nonumber \\
\sigma_{\theta^{\sigma}}(s) &\doteq& \exp({\theta^{\sigma}}^\top \phi(s)),
\label{Eq:meanstandard}\end{aligned}$$ where, $\theta^{m}=[\theta^m_{\mathrm{ON}}, \theta^m_{\mathrm{OFF}}]^\top$ and $\theta^{\sigma}=[\theta^{\sigma}_{\mathrm{ON}}, \theta^{\sigma}_{\mathrm{OFF}}]^\top$. For simplicity we later assume $\theta^{\sigma}_{\mathrm{ON}} = \theta^{\sigma}_{\mathrm{OFF}}=\theta_{\sigma}$. $\phi(s)=[1-h_s, h_s]^\top$ is the state feature vector. We also approximate the state-value function as $V_v(s)=[v_1, v_2]^\top \phi(s)$. It should be noted that with this simple parameterization the switching temperature thresholds do not depend on the outdoor temperature. This is a reasonable assumption because we know that if the outdoor temperature is fixed, the optimal thresholds should indeed be constant.\
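For Algorithm \[Algo:stochastic\], the score function $\nabla_{\theta}\log\pi_{\theta}(a|s)$ of this Gaussian threshold policy has a simple closed form; the following minimal sketch (function and variable names are illustrative assumptions) samples a threshold and returns the gradients with respect to $\theta^{m}$ and $\theta^{\sigma}$.

``` python
import numpy as np

def gaussian_threshold_policy(theta_m, theta_s, h_s, rng=np.random.default_rng()):
    """Sample a switching threshold and return grad log pi w.r.t. (theta_m, theta_s).
    theta_m, theta_s: 2-vectors; phi(s) = [1 - h_s, h_s]."""
    f = np.array([1.0 - h_s, float(h_s)])
    m = theta_m @ f                      # mean threshold
    sigma = np.exp(theta_s @ f)          # standard deviation
    a = rng.normal(m, sigma)             # sampled threshold temperature
    grad_m = ((a - m) / sigma**2) * f                # d log pi / d theta_m
    grad_s = (((a - m) / sigma)**2 - 1.0) * f        # d log pi / d theta_sigma
    return a, grad_m, grad_s
```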
In a similar fashion, we simplify the parameterization of the deterministic policy in the form of: $$T^{\mathrm{th}}_{\theta}(s) = \mu_{\theta}(s) \doteq \theta^\top \phi(s),
\label{Eq:deterpolicy}$$ where, $\theta=[\theta_{\mathrm{ON}}, \theta_{\mathrm{OFF}}]^\top$ is the policy parameter vector. We approximate the action-value function by a compatible function approximator as $Q^w(s,a)=(a-\mu_{\theta}(s))^\top{\nabla_{\theta}\mu_{\theta}(s)}^\top w + V^v(s)$ with $w=[w_1, w_2]^\top$. The state feature vector $\phi(s)$ and the state-value function $V_v(s)$ are defined the same as in the stochastic policy approach.
Results
-------
Having set up the simulation environment and parameterized the control policies and the related function approximators, we can now implement the learning algorithms \[Algo:stochastic\] and \[Algo:deterministic\]. In order to assess the efficacy of our learning control methods, it is helpful to have the ground truth optimal switching thresholds to which the results of our learning algorithms should converge. It should be noted that even with a simple and known model of the building with no disturbances, the optimal control of energy cost minimization while improving the occupants' comfort does not fall into any of the classical optimal control problems such as LQG or LQR. This is mainly because of the complex form of the reward or the cost function defined in Eq.(\[Eq:reward\]). With that said, since we know that the optimal thresholds are constant (for a fixed outdoor temperature), it is not computationally demanding to find the ground truth thresholds by brute-force simulations and policy search in this set-up.\
To this end, we run numerous simulations where the system dynamics are described by either Eq.(\[Eq:linear\]) or the EnergyPlus model, and the control policy by Eq.(\[Eq:deterpolicy\]) with a constant parameter vector $\theta$[^1]. Each such simulation is run for a long time with a fixed pair of switching temperature thresholds, at the end of which the average reward rate is calculated by dividing the total reward by the total time. For the case where the system dynamics are described by Eq.(\[Eq:linear\]), results are illustrated in Fig. \[Fig:3\], based on which the optimal average reward rate is $r(\mu)=-3.70 \, unit\,{hr}^{-1}$ corresponding to optimal thresholds of $T_{\mathrm{ON}}^{\mathrm{th}}=12.5 \, \degree C$ and $T_{\mathrm{OFF}}^{\mathrm{th}}=17.5 \, \degree C$. Knowing the optimal policy for the simplified linear model of the building, we next implement our proposed stochastic and deterministic learning algorithms on this building model.\
Figure \[Fig:4\] depicts the on-policy learning of the stochastic policy parameters during a training period of 10 days. Initial values of the mean of the threshold temperatures $[\theta^m_\mathrm{ON}, \theta^m_\mathrm{OFF}]$ are set to $[11.0, 19.0]\, \degree C$ and the initial standard deviation of these threshold temperatures is set to $1.0 \, \degree C$. Figure \[Fig:5\] illustrates probability distributions of the stochastic policies for switching temperature thresholds before and after the 10-day training by Algorithm \[Algo:stochastic\]. As seen in these two figures, the mean temperature thresholds have reached $12.3 \, \degree C$ and $ 17.5\, \degree C$, very close to the true optimal values. The standard deviation has decreased to $0.17 \, \degree C$ by the end of the training. According to Fig. \[Fig:6\] the average reward rate is learnt and converges to a value of $-3.73 \, unit\,{hr}^{-1}$. This learnt policy is then implemented from the beginning in a separate 10-day simulation and the average reward rate is calculated as $-3.74 \, unit\,{hr}^{-1}$. Both of these values are very close to the optimal value of $-3.70 \, unit\,{hr}^{-1}$, confirming the efficacy of the proposed event-triggered stochastic learning algorithm.\
Next, we implement our deterministic event-triggered learning algorithm (Algorithm \[Algo:deterministic\]) on the same building model. The learnt on/off switching temperatures at the end of a 10-day training are found to be $12.4 \, \degree C$ and $ 17.3\, \degree C$, again very close to the true optimal values. The implemented ET-COPDAC-Q is an off-policy algorithm; hence, to assess its efficacy we need to implement the resulted learnt policy on a new simulation where the average reward is calculated based on the learnt policy applied from the beginning. The average reward rate corresponding to the learnt thresholds is then calculated to be $-3.73 \, unit\,{hr}^{-1}$ that is very close to the optimal value of $-3.70 \, unit\,{hr}^{-1}$.\
It was explained in detail in sections \[manifolds\] and \[RL\] that the proposed event-triggered learning and control with variable time intervals should improve learning and control performance in terms of sample efficiency and variance. To back this up via simulations, we run two 10-day simulations on the same building model; one with variable intervals, i.e. event-triggered learning (Algorithm \[Algo:deterministic\]), and one with constant 5-minute intervals. This time the event-triggered deterministic algorithm learns the exact optimal thresholds, i.e. $12.5 \, \degree C$ and $ 17.5\, \degree C$ corresponding to an average reward rate of $-3.70 \, unit\,{hr}^{-1}$, whereas the same algorithm with constant time intervals learns the thresholds to be $11.2 \, \degree C$ and $ 19.3\, \degree C$. Now if the latter threshold policy is implemented with constant time intervals for control (i.e. both learning and control have constant time intervals) it results in an average reward rate of $-6.19 \, unit\,{hr}^{-1}$; however, this value improves to an average reward rate of $-5.22 \, unit\,{hr}^{-1}$ if the learnt policy is implemented via event-triggered control (i.e. constant time intervals for learning but variable time intervals for control). These numbers corroborate the advantage of event-triggered learning and control over the classic learning and control with fixed time intervals. To highlight this advantage even more, Fig. \[Fig:7\] shows the learnt average reward rate during a 10-day training by Algorithm \[Algo:deterministic\] with both variable and constant time intervals. It is clear that learning with constant time intervals results in a considerably larger variance.\
Last but not least we implement our learning algorithms on the more realistic building modeled in EnergyPlus software as detailed in section \[BuildingModels\]. Here the outdoor temperature is no longer kept constant and varies as shown in Fig. \[Fig:8\]. Although the optimal thresholds should in general be functions of outdoor temperature, here we constrain the learning problem to the family of threshold policies that are *not* functions of outdoor temperature. This is because (i) finding the ground truth optimal policy via brute-force simulations within this constrained family of policies is much easier than the unconstrained family of threshold policies, and (ii) based on our simulation results the optimal policy has a weak dependence on the outdoor temperature in this set-up.\
Similar to the case of the simplified building model, we first find the optimal threshold policy and the corresponding optimal average reward rate by brute-force simulations. The optimal thresholds are found to be $T_{\mathrm{ON}}^{\mathrm{th}}=12.5 \, \degree C$ and $T_{\mathrm{OFF}}^{\mathrm{th}}=17.5 \, \degree C$ resulting in an optimal average reward rate of $r(\mu)=-3.31\, unit\,{hr}^{-1}$. Here we employ our deterministic event-triggered COPDAC-Q algorithm to learn the optimal threshold policy. Starting from initial thresholds of $11.0 \, \degree C$ and $ 19.0\, \degree C$, the algorithm learns the threshold temperatures to be $12.9 \, \degree C$ and $ 17.5\, \degree C$ at the end of 10 days of training. This learnt policy results in an average reward rate of -$3.37\, unit\,{hr}^{-1}$. Time history of the building’s indoor temperature controlled via an exploratory deterministic behaviour policy during the 10-day training period is illustrated in Fig.\[Fig:8\]. The learning time history of the deterministic policy parameters, i.e. the switching temperature thresholds during the 10-day training is shown in Fig.\[Fig:9\].
Conclusion
==========
This study focuses on event-triggered learning and control in the context of cyber-physical systems with an application to buildings' micro-climate control. Often learning and control systems are designed based on sampling with *fixed* time intervals. A shorter time interval usually leads to more accurate learning and a more precise control system; however, it inherently increases the sample complexity and variance of the learning algorithms and requires more computational resources. To remedy these issues we proposed an event-triggered paradigm for learning and control with variable time intervals and showed its efficacy in designing a smart learning thermostat for autonomous micro-climate control in buildings.\
We formulated the buildings’ climate control problem as a continuing-task MDP with event-triggered control policies. The events occur when the system state crosses the a priori-parameterized *switching manifolds*; this crossing triggers the learning as well as the control processes. Policy gradient and temporal difference methods are employed to learn the optimal switching manifolds, which define the optimal control policy. Two event-triggered learning algorithms are proposed for stochastic and deterministic control policies. These algorithms are implemented on a single-zone building to concurrently decrease the building’s energy consumption and increase the occupants’ comfort. Two different building models were used: (i) a simplified model where the building’s thermodynamics are characterized by a first-order ordinary differential equation, and (ii) a more realistic building modeled in the EnergyPlus software. Simulation results show that the proposed algorithms learn the optimal policy in a reasonable time. The results also confirm that, in terms of sample efficiency and variance, our proposed event-triggered algorithms outperform their classic reinforcement learning counterparts where learning and control happen with constant time intervals.
Acknowledgements {#acknowledgements .unnumbered}
================
This work is supported by the Skoltech NGP Program (joint Skoltech-MIT project).
[^1]: We know the optimal policy should be a deterministic policy with constant switching temperature thresholds.
---
author:
- 'I. N. Shnurnikov'
title: 'An example of a line without real points lying in the intersection of three complex quadrics'
---
#### Problem statement.
In the six-dimensional Euclidean space ${\mathbb{R}}^6$ with coordinates $x=(x_1, \dots, x_6)$, the set $Q^3$ is defined as a joint level surface, $$Q^3=\{f_1(x)=d_1, f_2(x)=d_2, f_3(x)=d_3\} \subset {\mathbb{R}}^6$$ for the functions $$\begin{gathered}
f_1(x)=x^2_1+x^2_2+x^2_3+x^2_4+x^2_5+x^2_6\\
f_2(x)=x_1x_4+x_2x_5+x_3x_6\\
f_3(x)=c_1x^2_1+c_2x^2_2+c_3x^2_3+c_4x^2_4+c_5x^2_5+c_6x^2_6,\end{gathered}$$ where $c_i$ and $d_j$ are some real numbers. It is required to
- Find all values of the parameters $c_i$ and $d_j$ for which the set $Q^3$ is a smooth manifold (a submanifold of ${\mathbb{R}}^6$).
- Determine the manifold $Q^3$ up to homeomorphism.
#### Outline of the solution.
A. B. Zheglov suggested using Kollár's theorems [@Kollar] in order to produce a certain list of manifolds to which $Q^3$ must belong. Namely:
(a) The manifold $Q^3$ is the set of real points of the intersection of three quadrics in the complex projective space $$Q^3_c=\bar{Q}_1\cap \bar{Q}_2\cap \bar{Q}_3 \subset {\mathbb{CP}}^6,$$ where the quadrics $\bar{Q}_i$ are given by homogeneous polynomials in seven complex variables corresponding to the functions $f_i$. For this it suffices to check that all real points of the manifold $Q^3_c$ are affine.
(b) One finds conditions on the parameters $c_i,\quad d_j$ under which the intersection of the complex quadrics is an algebraic submanifold of ${\mathbb{CP}}^6$ (a [*complete intersection*]{}).
(c) From the well-known formula for the dimension of the set of one-dimensional linear subspaces (see [@Chodg ch. 13, §. 6, th. 1]) it follows that $Q^3_c$ contains a one-parameter family of complex lines. We define a projection $$p: Q^3_c \to {\mathbb{CP}}^2$$ by fixing a line $m$ lying in $Q^3_c$, regarding ${\mathbb{CP}}^2$ as the set of quadrics containing $Q^3_c$, and assigning to a point $x$ the quadric that contains the plane $(x,m)$.
(d) The map $p$ is not defined on the line $m$ and on the lines meeting it. However, $p$ can be extended to a map $\tilde p$ of the blow-up $\tilde Q^3_c$ of the manifold $Q^3_c$ along $m$ and the lines meeting it.
(e) Kollár's theorem for the map $$\tilde p: \tilde Q^3_c \to {\mathbb{CP}}^2$$ asserts that the set of real points of the manifold $\tilde Q^3_c$ is one of the following manifolds: ${\mathbb{RP}}^3,S^3,S^2\times S^1$, their connected sums, Seifert manifolds with at most 6 singular fibers, or lens spaces with $p,q \leq 6$.
(f) If the line of projection $m$ and all lines meeting it have no real points, then the blow-up does not touch the real points, and therefore the sets of real points (points with real homogeneous coordinates) of the manifolds $\tilde Q^3_c$ and $Q^3_c$ coincide.
Summing up, if one proves the existence of a line $m$ on $Q^3_c$ such that neither it nor the lines meeting it and lying on $Q^3_c$ contain real points, then the manifold $Q^3$ is homeomorphic to one of the manifolds listed in item (e).
#### Results of this work and further prospects.
In this work we find conditions on the parameters of the quadrics under which the set $Q^3$ is a submanifold of ${\mathbb{R}}^6$ and of ${\mathbb{C}}^6$. We exhibit a line $l \subset Q^3_c$ and several conditions under which the line $l$ contains no real points. To complete the solution of the problem it is necessary to
- determine for which values of the parameters of the quadrics the stated conditions are satisfied,
- verify that the lines meeting $l$ and lying on $Q^3_c$ contain no real points.
#### Origin and relevance of the problem.
The problem arose in the theory of dynamical systems, in describing a system analogous to the Euler system for the motion of a three-dimensional rigid body with a fixed point, which is therefore called the “motion of a four-dimensional rigid body”, see [@Bolsinov_99]. One considers a Hamiltonian system on the four-dimensional symplectic manifold $M^4$, the level surface of the functions $f_1(x)=d_1, f_2(x)=d_2$, with the Hamiltonian $f_3(x)$. It is known [@Bolsinov_99] that if the parameters satisfy the relation $$c_1c_4(c_2+c_5-c_3-c_6)+c_2c_5(c_3+c_6-c_1-c_4)+c_3c_6(c_1+c_4-c_2-c_5)=0,$$ then the Hamiltonian system is Liouville integrable, and hence the [*isoenergy surface*]{} $Q^3$ is foliated into tori, with the foliation described by means of molecules and their marks; see the surveys of the work of A. T. Fomenko et al. in [@Bolsinov_99]. In the examples of integrable systems known so far the topology of the isoenergy surfaces is relatively simple, so it would be interesting to find what the manifold $Q^3$ is diffeomorphic to in the non-integrable case.
#### Non-degeneracy in ${\mathbb{R}}^6$ and in ${\mathbb{C}}^6$.
The level set of the three real functions $$Q^3_R=\{ f_1=d_1,\quad f_2=d_2,\quad f_3=d_3 \}\subset{\mathbb{R}}^6$$ is a three-dimensional real manifold if and only if
(a) $d_1>2|d_2|$ and
(b) there is no real number $b$ that satisfies all three equations $$\begin{split}
& 1) \quad (c_1-a)(c_4-a)=b^2,\\
& 2) \quad (c_2-a)(c_5-a)=b^2,\\
& 3) \quad (c_3-a)(c_6-a)=b^2, \\
\end{split}$$ and at least one of the inequalities $$\begin{split}
& 1) \quad\frac {c_1-a}{bd_2}\geq \frac{1+(\frac {c_1-a}{b})^2}{d_1}, \\
& 2) \quad\frac {c_2-a}{bd_2}\geq \frac{1+(\frac {c_2-a}{b})^2}{d_1}, \\
& 3) \quad\frac {c_3-a}{bd_2}\geq \frac{1+(\frac {c_3-a}{b})^2}{d_1}, \\
\end{split}$$ where $a=\frac {d_3-2bd_2}{d_1}.$
These conditions arise as the conditions under which the $3\times 6$ Jacobian matrix of the derivatives of the functions $f_1, \quad f_2, \quad f_3$ with respect to the variables $x_1,\dots, x_6$ has maximal rank at all points of the set $Q^3_R.$
Conversely, if condition (a) fails, then the first two rows of the Jacobian matrix are linearly dependent or the set $Q^3_R$ is empty.
If condition (b) fails, then the rank of the Jacobian matrix equals 2 at the point with coordinates $x_1,\dots,x_6,$ where
$$x_4=(\frac {c_1-a}{b})x_1,\quad x_5=(\frac {c_2-a}{b})x_2,\quad x_6=(\frac {c_3-a}{b})x_3,$$
and the numbers $x_1, x_2, x_3$ are a solution of the system $$\begin{split}
& d_1=x_1^2\left(1+\frac {(c_1-a)^2}{b^2}\right)+x_2^2\left(1+\frac {(c_2-a)^2}{b^2}\right)+x_3^2\left(1+\frac {(c_3-a)^2}{b^2}\right),\\
& d_2=x_1^2\left(\frac {(c_1-a)}{b}\right)+x_2^2\left(\frac {(c_2-a)}{b}\right)+x_3^2\left(\frac {(c_3-a)}{b}\right), \\
& d_3=x_1^2\left(c_1+c_4\frac {(c_1-a)^2}{b^2}\right)+x_2^2\left(c_2+c_5\frac {(c_2-a)^2}{b^2}\right)+x_3^2\left(c_3+c_6\frac {(c_3-a)^2}{b^2}\right). \\
\end{split}$$ [$\hspace{\fill} \square$]{}
The level set of the three complex functions $$Q^3_C=\{ f_1=d_1,\quad f_2=d_2,\quad f_3=d_3 \}\subset{\mathbb{C}}^6$$ is a three-dimensional complex manifold if and only if
(a) $d_1\neq 2|d_2|$ and
(b) there is no complex number $b$ that satisfies all three equations $$\begin{split}
& 1) \quad (c_1-a)(c_4-a)=b^2,\\
& 2) \quad (c_2-a)(c_5-a)=b^2,\\
& 3) \quad (c_3-a)(c_6-a)=b^2, \\
\end{split}$$
Non-degeneracy in ${\mathbb{CP}}^6$ is obtained by considering the 7 affine charts; in each of them the non-degeneracy conditions are analogous to the non-degeneracy conditions in ${\mathbb{C}}^6.$
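As a small computational aid, the following sympy sketch checks condition (b) of the complex criterion above for given numerical parameters by intersecting the root sets of the three equations in $b$ (for the real criterion one would additionally test the inequalities); the parameter values in the example are arbitrary assumptions, not values from this work.

```python
import sympy as sp

def common_b_roots(c, d, domain=sp.S.Complexes):
    """Roots b shared by the three equations (c_i - a)(c_{i+3} - a) = b^2,
    with a = (d_3 - 2*b*d_2)/d_1; an empty result means condition (b) holds."""
    b = sp.symbols('b')
    a = (d[2] - 2*b*d[1]) / d[0]
    sol = sp.solveset(sp.expand((c[0] - a)*(c[3] - a) - b**2), b, domain)
    for i in (1, 2):
        sol = sol.intersect(
            sp.solveset(sp.expand((c[i] - a)*(c[i + 3] - a) - b**2), b, domain))
    return sol

# Example with arbitrary (assumed) parameter values c_1..c_6 and d_1..d_3:
print(common_b_roots([1, 2, 3, 4, 5, 6], [10, 1, 7]))
```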
#### A line without real points.
On the manifold $Q^3_c$ we look for a line of the form $$x_i=a_i+tb_i\quad \text{for}\quad i=1,2,\dots 6,$$ where the coefficients $a_i, b_i$ are complex numbers and $t$ is a complex variable. We impose the following conditions on the coefficients of the line: $$a_4=\lambda a_1,\quad a_5=\lambda a_2,\quad a_6=\lambda a_3\quad\text{and}\quad b_4=\mu b_1,\quad b_5=\mu b_2,\quad b_6=\mu b_3.$$ After substituting the parametric form of the line into the three equations defining the manifold $Q^3_c,$ we obtain nine equations (for $t^2$, $t$, and the constant term in each of the three equations $f_j=d_j$); however, because of the additional symmetry arising from the conditions imposed on the coefficients of the line, only seven of these equations are independent, while there are eight variables: $a_1,a_2,a_3,b_1,b_2,b_3,\lambda,\mu$. We therefore set $\lambda$ equal to a root of the equation $x+\frac 1x=\frac{d_1}{d_2},$ and $\mu$ equal to a root of the equation
$$\begin{gathered}
\sqrt{c_2-c_3+\mu^2(c_5-c_6)}\sqrt{c_3-c_1+\mu^2(c_6-c_4)}(d_3-\frac{d_2}{\lambda}(c_1+\lambda^2c_4))(c_2-c_3+\lambda^2(c_5-c_6))+\\
+ (c_1-c_3+\lambda^2(c_4-c_6))\left(\frac{d_2}{\lambda}(c_2+\lambda^2c_5)-d_3\right)(c_1-c_2+\lambda \mu(c_4-c_5))+ \\
+ \sqrt{c_2-c_3+\mu^2(c_5-c_6)}\sqrt{c_1-c_2+\mu^2(c_4-c_5)}\left(\frac{d_2}{\lambda}(c_2+\lambda^2c_5)-d_3\right)(c_1-c_3+\lambda \mu(c_4-c_6))+\\
+\sqrt{c_3-c_1+\mu^2(c_6-c_4)}\sqrt{c_1-c_2+\mu^2(c_4-c_5)}
(d_3-\frac{d_2}{\lambda}(c_1+\lambda^2c_4))(c_2-c_3+\lambda \mu(c_5-c_6))=\\
= 0.\end{gathered}$$
Now one finds the numbers $$\begin{split}
&b_1=\sqrt{c_2-c_3+\mu^2(c_5-c_6)},\\
&b_2=\sqrt{c_3-c_1+\mu^2(c_6-c_4)},\\
&b_3=\sqrt{c_1-c_2+\mu^2(c_4-c_5)},
\end{split}$$ and then the numbers $a_1,\quad a_2,\quad a_3$ are found.
Suppose that
- $\frac{d_2}{\lambda}(c_2+\lambda^2c_5)-d_3 \neq 0,$
- the conditions $c_1=c_2$ and $c_4=c_5$ do not hold simultaneously,
- the conditions $c_1=c_3$ and $c_4=c_6$ do not hold simultaneously,
- the equation for $\mu$ has a root different from $\lambda$ and from $-\lambda,$
- not all of the numbers $a_1,a_2,a_3$ are real.
Then for every complex number $t$ the six numbers $a_j+tb_j$ cannot all be real simultaneously.
Assume the contrary, that is, that the numbers $a_i+tb_i$ and $\lambda a_i+t\mu b_i$ are simultaneously real; then the three numbers $\lambda (a_i+tb_i)$ are real. Subtracting the second numbers from the third (for all $i=1,2,3$), we find that the numbers $t(\lambda-\mu)b_i$ are real. Dividing these numbers by one another (they are nonzero by the assumptions made), we find that the numbers $\frac{b_2}{b_1}$ and $\frac{b_3}{b_1}$ are real. The numbers $b_i$ satisfy the equation $b_1^2+b_2^2+b_3^2=0.$ The reality of the numbers $\frac{b_2}{b_1}$ and $\frac{b_3}{b_1}$ then implies that $b_1=b_2=b_3=0.$ However, substituting $b_1=b_2=b_3=0$ into the equation for $\mu$ yields the equation
$$(c_1-c_3+\lambda^2(c_4-c_6))\left(\frac{d_2}{\lambda}(c_2+\lambda^2c_5)-d_3\right)(c_1-c_2+\lambda \mu(c_4-c_5))=0.$$ None of the three factors can vanish, by the assumptions made. [$\hspace{\fill} \square$]{}
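The coefficient count used in this construction can be reproduced symbolically; the sketch below substitutes the parametric line with the imposed symmetry into $f_1, f_2, f_3$ and collects the coefficients of $t^2$, $t$, and the constant term, which gives the nine equations mentioned above (it only does the bookkeeping and does not solve for $a_i, b_i, \lambda, \mu$).

```python
import sympy as sp

t, lam, mu = sp.symbols('t lambda mu')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
c = sp.symbols('c1:7')

a = [a1, a2, a3, lam*a1, lam*a2, lam*a3]          # a_4 = lambda*a_1, ...
b = [b1, b2, b3, mu*b1, mu*b2, mu*b3]             # b_4 = mu*b_1, ...
x = [a[i] + t*b[i] for i in range(6)]

f1 = sum(xi**2 for xi in x)
f2 = sum(x[i]*x[i + 3] for i in range(3))
f3 = sum(c[i]*x[i]**2 for i in range(6))

coeffs = []
for f in (f1, f2, f3):
    p = sp.Poly(sp.expand(f), t)
    coeffs += [p.coeff_monomial(t**2), p.coeff_monomial(t), p.coeff_monomial(1)]

# The line lies on Q^3_c iff the t^2 and t coefficients vanish and the
# constant terms equal d_1, d_2, d_3: nine conditions in total.
print(len(coeffs))
```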
Note that for almost all values of the parameters there exists on $Q^3_c$ a real one-dimensional set of real points through which lines $m\subset Q^3_c$ pass. The set of the lines themselves is real two-dimensional, but the lines may intersect densely, so applying Kollár's theorem is nontrivial; it will be carried out with the help of the explicitly constructed line.
This work was supported by a grant of the Government of the Russian Federation under resolution No. 220, contract No. 11.G34.31.0053.
[99]{}
Kollar J., [*Real Algebraic Threefolds 3. Conic bundles*]{}. arxiv AG/9802053 v1
Hodge W., Pedoe D., Methods of Algebraic Geometry. Moscow: Izdatel'stvo Inostrannoi Literatury, 1954 (vols. 1 and 2), 1955 (vol. 3) (Russian translation).
Bolsinov A. V., Fomenko A. T., Integrable Hamiltonian Systems. Geometry, Topology, Classification. Vols. 1 and 2. Izhevsk: Udmurt University Publishing House, 1999.
Yaroslavl State University, Delone Laboratory of Discrete and Computational Geometry.\
E-mail: shnurnikov@yandex.ru
---
abstract: 'Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.'
author:
- |
Patrick Fernandes, Miltiadis Allamanis & Marc Brockschmidt\
Microsoft Research\
Cambridge, United Kingdom\
`{t-pafern,miallama,mabrocks}@microsoft.com`
bibliography:
- 'bibliography.bib'
title: Structured Neural Summarization
---
Introduction
============
Structured Summarization Tasks
==============================
Model
=====
Evaluation
==========
Related Work
============
Discussion & Conclusions
========================
Code Summarization Samples {#app:codesamples}
==========================
Natural Language Summarization Samples {#app:nlsamples}
======================================
Code Datasets Information {#app:datasets}
=========================
---
abstract: 'A theoretical light curve is constructed for the quiescent phase of the recurrent nova U Scorpii in order to resolve the existing distance discrepancy between the outbursts ($d \sim 6$ kpc) and the quiescences ($d \sim 14$ kpc). Our U Sco model consists of a very massive white dwarf (WD), an accretion disk (ACDK) with a flaring-up rim, and a lobe-filling, slightly evolved, main-sequence star (MS). The model properly includes an accretion luminosity of the WD, a viscous luminosity of the ACDK, a reflection effect of the MS and the ACDK irradiated by the WD photosphere. The $B$ light curve is well reproduced by a model of 1.37 $M_\odot$ WD $+$ 1.5 $M_\odot$ MS (0.8—2.0 $M_\odot$ MS is acceptable) with an ACDK having a flaring-up rim, and the inclination angle of the orbit $i \sim 80 \arcdeg$. The calculated color is rather blue ($B-V \sim 0.0$) for a suggested mass accretion rate of $2.5 \times 10^{-7} M_\odot$ yr$^{-1}$, thus indicating a large color excess of $E(B-V) \sim 0.56$ with the observational color of $B-V = 0.56$ in quiescence. Such a large color excess corresponds to an absorption of $A_V \sim 1.8$ and $A_B \sim 2.3$, which reduces the distance to 6—8 kpc. This is in good agreement with the distance estimation of 4—6 kpc for the latest outburst. Such a large intrinsic absorption is very consistent with the recently detected period change of U Sco, which is indicating a mass outflow of $\sim 3 \times 10^{-7} M_\odot$ yr$^{-1}$ through the outer Lagrangian points in quiescence.'
author:
- Izumi Hachisu
- Mariko Kato
- Taichi Kato and Katsura Matsumoto
- 'Ken’ichi Nomoto'
title: A MODEL FOR THE QUIESCENT PHASE OF THE RECURRENT NOVA U SCORPII
---
INTRODUCTION
============
U Scorpii is one of the best observed recurrent novae, the outbursts of which were recorded in 1863, 1906, 1936, 1979, 1987, and the latest in 1999. Especially, the 1999 outburst was well observed from the rising phase to the cooling phase by many observers (e.g., [@mun99]; [@kah99]; [@lep99]) including eclipses (Matsumoto, Kato, & Hachisu 2000). Based on Matsumoto et al.’s (2000) observation, Hachisu et al. (2000) have constructed a theoretical light-curve model for the 1999 outburst of U Sco and obtained various physical parameters of the recurrent nova. Their main results are summarized as follows: (1) A direct light-curve fitting of the 1999 outburst indicates a very massive white dwarf (WD) of $M_{\rm WD}= 1.37 \pm 0.01 M_\odot$. (2) The envelope mass at the optical maximum is estimated to be $\Delta M \sim 3 \times 10^{-6} M_\odot$. (3) Therefore, the mass accretion rate of the WD is $\dot M_{\rm acc} \sim 2.5 \times 10^{-7} M_\odot$ yr$^{-1}$ during the quiescent phase between 1987 and 1999. (4) An optically thick wind blows from the WD and plays a key role in determining the nova duration because it reduces the envelope mass ([@kat94]). About 60% of the envelope mass is carried away in the wind, which forms an expanding shell as observed in T Pyx (e.g., [@shr89]). The residual 40% ($1.2 \times 10^{-6} M_\odot$) is added to the helium layer of the WD. (5) As a result, the WD can grow in mass at an average rate of $\sim 1 \times 10^{-7} M_\odot$ yr$^{-1}$.
The above physical picture is exactly the same as that proposed by Hachisu et al. (1999b) for a progenitor system of Type Ia supernovae (SNe Ia). However, the distance to U Sco is still controversial because the direct light-curve fitting results in a relatively short distance of $\sim 6$ kpc ([@hac2000]), which is incompatible with the distance of $\sim 14$ kpc derived for the quiescent phase (e.g., [@web87]; [@war95]; [@kah99], for a summary). If the distance of $\sim 14$ kpc were the case, it would hardly be consistent with the results (1) to (5) mentioned above.
Our purpose in this Letter is to construct a light-curve model for the quiescent phase and to rectify the distance to U Sco. Our numerical method to obtain light curves has been described both in Hachisu & Kato (1999) to explain the second peak of T CrB outbursts and in Hachisu et al. (2000) to reproduce the light curve for the 1999 outburst of U Sco. Therefore, we mention only new parts of our numerical method in §2. In §3, by directly fitting our theoretical light curve to the observations, we derive the distance to U Sco. Discussions follow in §4, especially in relation to the recently detected orbital-period change of U Sco and a systemic mass loss through the outer Lagrangian points. We also discuss the relation to a progenitor system of SNe Ia.
THEORETICAL LIGHT CURVES
========================
Our U Sco model is graphically shown in Figure \[uscofig\_q35\]. Schaefer (1990) and Schaefer & Ringwald (1995) observed eclipses of U Sco in the quiescent phase and determined the orbital period ($P= 1.23056$ days) and the ephemeris (HJD 2,451,235.777 $+$ 1.23056$E$) at the epoch of mid-eclipse. Thus, the companion is a main-sequence star (MS) which expands to fill its Roche lobe after most of the central hydrogen is consumed. We call such a star “a slightly evolved” MS. The inclination angle of the orbit ($i \sim 80\arcdeg$) is a parameter for fitting.
We have assumed that (1) $M_{\rm WD}= 1.37 M_\odot$, (2) the WD luminosity of $$L_{\rm WD} = {1 \over 2} {{G M_{\rm WD} \dot M_{\rm acc}}
\over {R_{\rm WD}}} + L_{\rm WD,0},
\label{accretion-luminosity}$$ where the first term is the accretion luminosity (e.g., Starrfield, Sparks, & Shaviv 1988), the second term $L_{\rm WD,0}$ is the intrinsic luminosity of the WD, and $R_{\rm WD}= 0.0032 R_\odot$ is the radius of the $1.37 M_\odot$ WD, and (3) a black-body photosphere of the WD. The accretion luminosity is $\sim 1700 L_\odot$ for a suggested mass accretion rate of $\dot M_{\rm acc} \sim 2.5 \times 10^{-7} M_\odot$ yr$^{-1}$. Here, we assume $L_{\rm WD,0}=0$ because the nuclear luminosity is smaller than the accretion luminosity for this accretion rate, but we have examined two other cases of $L_{\rm WD,0}=2000$ and 4000 $L_\odot$ and found no significant differences in the distance as shown below. We do not consider the limb-darkening effect for simplicity.
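For a quick check of the quoted value, the short script below evaluates the accretion term of this luminosity for $M_{\rm WD}=1.37\,M_\odot$, $\dot M_{\rm acc}=2.5\times10^{-7}\,M_\odot$ yr$^{-1}$, and $R_{\rm WD}=0.0032\,R_\odot$; the CGS constants are standard values inserted here only for illustration.

```python
# Accretion luminosity L_acc = G*M_WD*Mdot_acc/(2*R_WD), evaluated in CGS units.
G, M_sun, R_sun, L_sun, yr = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7

M_wd = 1.37 * M_sun
R_wd = 0.0032 * R_sun
M_dot = 2.5e-7 * M_sun / yr            # g s^-1

L_acc = 0.5 * G * M_wd * M_dot / R_wd  # erg s^-1
print(L_acc / L_sun)                   # roughly 1.7e3, i.e. ~1700 L_sun
```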
It is assumed that the companion star is synchronously rotating on a circular orbit and its surface fills the inner critical Roche lobe as shown in Figure \[uscofig\_q35\]. We neglect both the limb-darkening effect and the gravity-darkening effect of the companion star for simplicity. Here, we assume a 50% irradiation efficiency of the companion star ($\eta_{\rm ir,MS}=0.5$). We have examined the dependence of the distance on the irradiation efficiency (i.e., $\eta_{\rm ir,MS}=0.25$ and 1.0) but found no significant differences in the distance as shown below. The non-irradiated photospheric temperature $T_{\rm ph, MS}$ of the companion star is a parameter for fitting. The mass of the secondary is assumed to be $M_{\rm MS}= 1.5 M_\odot$.
The size of the accretion disk is a parameter for fitting and defined as $$R_{\rm disk} = \alpha R_1^*,
\label{accretion-disk-size}$$ where $\alpha$ is a numerical factor indicating the size of the accretion disk, and $R_1^*$ the effective radius of the inner critical Roche lobe for the WD component (e.g., Eggleton 1983). We also assume that the accretion disk is axisymmetric and has a thickness given by $$h = \beta R_{\rm disk} \left({{\varpi}
\over {R_{\rm disk}}} \right)^{\nu},
\label{flaring-up-disk}$$ where $h$ is the height of the surface from the equatorial plane, $\varpi$ the distance on the equatorial plane from the center of the WD, $\nu$ the power of the surface shape, and $\beta$ a numerical factor showing the degree of thickness and also a parameter for fitting. We adopt a $\varpi$-squared law ($\nu=2$) simply to mimic the flaring-up effect of the accretion disk rim (e.g., Schandl, Meyer-Hofmeister, & Meyer 1997), and have examined the dependence of the distance on the power ($\nu=1.25$ and 3.0) without finding any significant differences as shown below.
The surface of the accretion disk also absorbs photons from the WD photosphere and reemits with a black-body spectrum at a local temperature. We assume a 50% irradiation efficiency of the accretion disk, i.e., $\eta_{\rm ir,DK}=0.5$ (e.g., [@sch97]). We have examined two other cases of $\eta_{\rm ir,DK}=0.25$ and 1.0, and found no significant differences in the distance as shown below. The non-irradiated temperature of the disk surface is assumed to be determined by the viscous heating of the standard accretion disk model. Then, the disk surface temperature is given by $$\sigma T_{\rm ph, disk}^4 = {{3 G M_{\rm WD} \dot M_{\rm acc}}
\over {8 \pi \varpi^3}} + \eta_{\rm ir,DK}
{{L_{\rm WD}} \over {4 \pi r^2}} \cos\theta,$$ where $r$ is the distance from the WD center and $\cos\theta$ is the incident angle of the surface (e.g., [@sch97]). The temperature of the disk rim is assumed to be 3000 K.
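A minimal sketch of the viscous part of this temperature profile (the first term only, ignoring the irradiation term) is given below; the constants and sample radii are illustrative assumptions, not the exact grid used in our light-curve code.

```python
import math

G, sigma_sb, M_sun, R_sun, yr = 6.674e-8, 5.670e-5, 1.989e33, 6.957e10, 3.156e7  # cgs
M_wd = 1.37 * M_sun
M_dot = 2.5e-7 * M_sun / yr

def t_viscous(varpi_cm):
    """Viscous disk surface temperature: sigma*T^4 = 3*G*M_WD*Mdot/(8*pi*varpi^3)."""
    return (3.0 * G * M_wd * M_dot / (8.0 * math.pi * sigma_sb * varpi_cm**3)) ** 0.25

for r in (0.1, 0.5, 1.0):              # sample radii in units of R_sun
    print(r, t_viscous(r * R_sun))
```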
RESULTS
=======
Figure \[bmag\_bv\_color\_paper\] shows the observational points (open circles) by Schaefer (1990) together with our calculated $B$ light curve (thick solid line) for $\dot M_{\rm acc}= 2.5 \times 10^{-7} M_\odot$ yr$^{-1}$. To fit our theoretical light curves to the observational points, we calculate $B$ light curves by varying the parameters over $\alpha=0.5$—1.0 in steps of 0.1, $\beta=0.05$—0.50 in steps of 0.05, $T_{\rm ph, MS}= 3500$—8000 K in steps of 100 K, and $i=75$—$85\arcdeg$ in steps of $1\arcdeg$, and search for the best-fit model. The best-fit parameters obtained are shown in Figure \[bmag\_bv\_color\_paper\] (see also Table \[tbl-1\]).
There are five different contributions to the $B$-light ($L_B$) in the system: the white dwarf ($L_{B1}$), the non-irradiated portions of the accretion disk ($L_{B2}$) and the donor star ($L_{B3}$), and the irradiated portions of the accretion disk ($L_{B4}$) and the donor star ($L_{B5}$). In order to show each contribution, we have added two light curves in Figure \[bmag\_bv\_color\_paper\], that is, a non-irradiation case of the ACDK ($\eta_{\rm ir, DK}=0$, dash-dotted), and a non-irradiation case of the MS ($\eta_{\rm ir, MS}=0$, dashed). The light from the WD is completely blocked by the accretion disk rim, thus having no contribution, $L_{B1}=0$. The depth of the primary eclipse, 1.5 mag, means $L_{B3}= 0.25 L_{B}$ because the ACDK is completely occulted by the MS. The difference of 1 mag between the thick solid and dash-dotted lines indicates $L_{B4}=0.60 L_B$. The difference of 0.1 mag between the thick solid and dashed lines indicates $L_{B5} = 0.10 L_B$. Thus, we obtain each contribution: $L_{B1} = 0$, $L_{B2} = 0.05 L_B$, $L_{B3} = 0.25 L_B$, $L_{B4} = 0.60 L_B$, and $L_{B5}= 0.10 L_B$.
Then we calculate the theoretical color index $(B-V)_c$ for these best-fit models. Here, we explain only the case of $\dot M_{\rm acc}= 2.5 \times 10^{-7} M_\odot$ yr$^{-1}$. By fitting, we obtain the apparent distance modulus of $m_{B, 0}= 16.71$, which corresponds to the distance of $d= 22$ kpc without absorption ($A_B=0$). On the other hand, we obtained a rather blue color index of $(B-V)_c= 0.0$ outside eclipses. Together with the observed color of $(B-V)_o= 0.56$ outside eclipses ([@sch90]; [@sch95]), we derive a color excess of $E(B-V)= (B-V)_o - (B-V)_c= 0.56$. Here, suffixes $c$ and $o$ represent the theoretically calculated and the observational values, respectively. Then, we expect an absorption of $A_V= 3.1 ~E(B-V)= 1.8$ and $A_B= A_V + E(B-V) = 2.3$. Thus, we are forced to adopt a rather short distance to U Sco of 7.5 kpc.
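The arithmetic behind these numbers can be reproduced with a few lines (the input values are copied from the text; the helper function is our own illustration of the standard distance-modulus relation).

```python
def distance_kpc(mu_apparent, extinction=0.0):
    """Distance from a distance modulus: d = 10**(1 + (mu - A)/5) pc, returned in kpc."""
    return 10 ** (1.0 + (mu_apparent - extinction) / 5.0) / 1.0e3

e_bv = 0.56                      # E(B-V) = (B-V)_o - (B-V)_c
a_v = 3.1 * e_bv                 # A_V = 3.1 E(B-V)
a_b = a_v + e_bv                 # A_B = A_V + E(B-V)
print(distance_kpc(16.71))       # ~22 kpc with no absorption
print(distance_kpc(16.71, a_b))  # ~7.5 kpc with A_B ~ 2.3
```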
In our case of $\alpha=0.7$ and $\beta=0.30$, the accretion disk is completely occulted at mid-eclipse. The color index of $(B-V)_c= 0.53$ at mid-eclipse indicates a spectral type of F8 for the cool component MS, which is in good agreement with the spectral type of F8$\pm$2 suggested by Johnston & Kulkarni (1992). Hanes (1985) also suggested that a spectral type nearer F7 is preferred.
For other mass accretion rates of $\dot M_{\rm acc}=$ (0.1—5.0)$\times 10^{-7} M_\odot$ yr$^{-1}$, we obtain similar short distances to U Sco, as summarized in Table \[tbl-1\]. It should be noted that, although the luminosity of the model depends on our various assumptions of the irradiation efficiencies, the $\varpi$-powered law of the disk, and the intrinsic luminosity of the WD, the derived distance to U Sco itself is almost independent of these assumptions, as seen from Table \[tbl-2\]. Therefore, the relatively short distance to U Sco ($\sim$ 6—8 kpc) is a rather robust conclusion, at least, from the theoretical point of view.
DISCUSSION
==========
Matsumoto et al. (2000) observed a few eclipses during the 1999 outburst and, for the first time, detected a significant period-change of $\dot P / P = (-1.7 \pm 0.7) \times 10^{-6}$ yr$^{-1}$. If we assume the conservative mass transfer, this period change requires a mass transfer rate of $\gtrsim 10^{-6} M_\odot$ yr$^{-1}$ in quiescence. Such a mass transfer for 12 years is too high to be compatible with the envelope mass on the white dwarf, thus implying a non-conservative mass transfer in U Sco.
We have estimated the mass transfer rate for a non-conservative case by assuming that matter is escaping from the outer Lagrangian points and thus the specific angular momentum of the escaping matter is $1.7 a^2 \Omega_{\rm orb}$ ([@saw84]; [@hac99a]), where $a$ is the separation and $\Omega_{\rm orb} \equiv 2 \pi /P$. Then the mass transfer rate from the companion is $\dot M_{\rm MS}= (-5.5 \pm 1.5) \times 10^{-7} M_\odot$ yr$^{-1}$ for $M_{\rm MS}= 0.8$—2.0 $M_\odot$ under the assumption that the WD receives matter at a rate of $\dot M_{\rm acc} = 2.5 \times 10^{-7} M_\odot$ yr$^{-1}$. The residual ($\sim 3 \times 10^{-7} M_\odot$ yr$^{-1}$), which is escaping from the system, forms an excretion disk outside the orbit of the binary. Such an extended excretion disk/torus may cause a large color excess of $E(B-V)= 0.56$.
Kahabka et al. (1999) reported the hydrogen column density of (3.1—4.8)$\times 10^{21}$ cm$^{-2}$, which is much larger than the Galactic absorption in the direction of U Sco (1.4$\times 10^{21}$ cm$^{-2}$, [@dic90]), indicating a substantial intrinsic absorption. It should also be noted here that Barlow et al. (1981) estimated the absorption toward U Sco by three ways: (1) the Galactic absorption in the direction of U Sco, $E(B-V) \sim 0.24$ and $A_V \sim 0.7$, (2) the line ratio of He II during the 1979 outburst ($t \sim$ 12 days after maximum), $E(B-V) \sim 0.2$ and $A_V \sim 0.6$, and (3) the Balmer line ratio during the 1979 outburst ($t \sim$ 33—34 days after maximum), $E(B-V) \sim 0.35$ and $A_V \sim 1.1$. The last one is significantly larger than the other two estimates. They suggested the breakdown of their case B approximation in high density regions. However, we may point out another possibility that the systemic mass outflow from the binary system has already begun at $t \sim$ 33 days and, as a result, an intrinsic absorption is gradually increasing.
The mass of the companion star can be constrained from the mass transfer rate. Such a high transfer rate as $\dot M_{\rm MS} \sim 5.5
\times 10^{-7} M_\odot$ yr$^{-1}$ strongly indicates a thermally unstable mass transfer (e.g., [@heu92]), which is realized when the mass ratio is larger than 1.0—1.1, i.e., $q= M_{\rm MS}/ M_{\rm WD} >$ 1.0—1.1 for zero-age main-sequence stars ([@web85]). This may pose a requirement $M_{\rm MS} \gtrsim 1.4 M_\odot$. We estimate the most likely companion mass of 1.4—1.6 $M_\odot$ from equation (11) in Hachisu et al. (1999b).
If the distance to U Sco is $\sim$ 6.0—8.0 kpc, it is located $\sim$ 2.3—3.0 kpc above the Galactic plane ($b=22\arcdeg$). The zero-age masses of the progenitor system of U Sco are rather large (e.g., $8.0 ~M_\odot + 2.5 ~M_\odot$ from Hachisu et al. 1999b) and it is unlikely that such massive stars were born in the halo. Some normal B-type main-sequence stars have been found in the halo (e.g., PG0009+036 is located $\sim$ 5 kpc below the Galactic disk, [@smt96]); they were ejected from the Galactic disk because of their relatively high space velocities of $\sim$100—200 km s$^{-1}$. The radial velocity of U Sco is not known, but it is suggested from the absorption line velocities that the $\gamma$-velocity is $\sim$50—100 km s$^{-1}$ ([@joh92]; [@sch95]). If so, it seems likely that U Sco was ejected from the Galactic disk with a vertical velocity faster than $\sim 20$ km s$^{-1}$ and has reached its present place within the main-sequence lifetime of a $\sim 3.0 M_\odot$ star ($\sim 3.5\times 10^8$ yr).
Now, we can understand the current evolutionary status and a further evolution of U Sco system. The white dwarf has a mass $1.37 \pm 0.01 M_\odot$. It is very likely that the WD has reached such a large mass by mass accretion. In fact the WD is currently increasing the mass of the helium layer at a rate of $\dot M_{\rm He} \sim 1.0 \times 10^{-7} M_\odot$ yr$^{-1}$ ([@hac2000]). We then predict that the WD will evolve as follows. When the mass of the helium layer reaches a critical mass after many cycles of recurrent nova outbursts, a helium shell flash will occur. Its strength is as weak as those of AGB stars because of the high mass accretion rate ([@nom82]). A part of the helium layer will be blown off in the wind, but virtually all of the helium layer will be burnt into carbon-oxygen and accumulates in the white dwarf ([@kat99h]). Therefore, the WD mass can grow until an SN Ia explosion is triggered ([@nom84]).
We thank the anonymous referee for many critical comments to improve the manuscript. This research has been supported in part by the Grant-in-Aid for Scientific Research (07CE200, 08640321, 09640325, 11640226, 20283591) of the Japanese Ministry of Education, Science, Culture, and Sports. KM has been financially supported as a Research Fellow for Young Scientists by the Japan Society for the Promotion of Science.
Barlow, M. J., et al. 1981, , 195, 61
Dickey, J. M., & Lockman, F. J. 1990, , 28, 215
Eggleton, P. P. 1983, , 268, 368
Hachisu, I., & Kato, M. 1999, , 517, L47
Hachisu, I., Kato, M., Kato, T., & Matsumoto, K. 2000, , 528, L97
Hachisu, I., Kato, M., Nomoto, K. 1999a, , 522, 487
Hachisu, I., Kato, M., Nomoto, K., & Umeda, H. 1999b, , 519, 314
Hanes, D. A. 1985, , 213, 443
Johnston, H. M., & Kulkarni, S. R. 1992, , 396, 267
Kahabka, P., Hartmann, H. W., Parmar, A. N., & Negueruela, I. 1999, , 374, L43
Kato, M., & Hachisu, I., 1994, , 437, 802
Kato, M., Hachisu, I., 1999, , 513, L41
Lepine, S., Shara, M. M., Livio, M., & Zurek, D. 1999, , 522, L121
Matsumoto, K., Kato, T., & Hachisu, I. 2000, , submitted
Munari, U., Zwitter, T., Tomov, T., Bonifacio, P., Molaro, P., Selvelli, P., Tomasella, L., Niedzielski, A., & Pearce, A. 1999, , 374, L39
Nomoto, K. 1982, , 253, 798
Nomoto, K., Thielemann, F., & Yokoi, K. 1984, , 286, 644
Sawada, K., Hachisu, I., & Matsuda, T. 1984, , 206, 673
Schaefer, B. 1990, , 355, L39
Schaefer, B., & Ringwald, F. A. 1995, , 447, L45
Schandl, S., Meyer-Hofmeister, E., & Meyer, F. 1997, , 318, 73
Schmidt, J. H. K., de Boer, K. S., Heber, U., & Moehler, S. 1996, , 306, L33
Shara, M. M., Moffat, A. F. J., Williams, R. E., Cohen, J. G.1989, , 337, 720
Starrfield, S., Sparks, W. M., & Shaviv, G. 1988, , 325, L35
van den Heuvel, E. P. J., Bhattacharya, D., Nomoto, K., & Rappaport, S. 1992, , 262, 97
Warner, B. 1995, Cataclysmic Variable Stars, (Cambridge: Cambridge University Press), chaps. 4 and 5
Webbink, R. F. 1985, Interacting binary stars, eds. J. E. Pringle & R. A. Wade (Cambridge: Cambridge Univ. Press), p.39
Webbink, R. F., Livio, M., Truran, J. W., & Orio, M. 1987, , 314, 653
| $\dot M_{\rm acc}$ ($M_\odot$ yr$^{-1}$) | $\alpha$ | $\beta$ | $T_{\rm ph,MS}$ (K) | $m_{B,0}$ | $(B-V)_c$ (outside eclipse) | $(B-V)_c$ (mid-eclipse) | $E(B-V)$ | $A_V$ | $A_B$ | $d$ (kpc) |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.0$\times 10^{-7}$ | 0.7 | 0.30 | 5900 | 17.18 | $-0.08$ | 0.45 (F5) | 0.64 | 2.01 | 2.65 | 8.0 |
| 2.5$\times 10^{-7}$ | 0.7 | 0.30 | 5500 | 16.71 | $+0.00$ | 0.53 (F8) | 0.56 | 1.76 | 2.32 | 7.5 |
| 1.0$\times 10^{-7}$ | 0.7 | 0.25 | 5000 | 16.02 | $+0.12$ | 0.66 (G4) | 0.44 | 1.38 | 1.82 | 6.9 |
| 5.0$\times 10^{-8}$ | 0.7 | 0.25 | 4600 | 15.45 | $+0.24$ | 0.78 (G9) | 0.32 | 1.01 | 1.33 | 6.7 |
| 2.5$\times 10^{-8}$ | 0.7 | 0.25 | 4200 | 14.72 | $+0.37$ | 0.91 (K2) | 0.19 | 0.60 | 0.79 | 6.1 |
| 1.0$\times 10^{-8}$ | 0.7 | 0.25 | 3700 | 13.58 | $+0.58$ | 1.13 (K5) | | | | 5.2 |
| Model assumption | $T_{\rm ph,MS}$ (K) | $m_{B,0}$ | $(B-V)_c$ (outside eclipse) | $(B-V)_c$ (mid-eclipse) | $E(B-V)$ | $A_V$ | $A_B$ | $d$ (kpc) |
|---|---|---|---|---|---|---|---|---|
| $L_{\rm WD,0}=2000 L_\odot$ | 6100 | 17.31 | $-0.09$ | 0.41 (F3) | 0.65 | 2.04 | 2.60 | 8.4 |
| $L_{\rm WD,0}=4000 L_\odot$ | 6400 | 17.59 | $-0.13$ | 0.36 (F1) | 0.69 | 2.17 | 2.87 | 8.8 |
| $\eta_{\rm ir,MS}=1.0$ | 5500 | 16.76 | $+0.00$ | 0.53 (F8) | 0.56 | 1.76 | 2.32 | 7.7 |
| $\eta_{\rm ir,MS}=0.25$ | 5500 | 16.66 | $+0.00$ | 0.53 (F8) | 0.56 | 1.76 | 2.32 | 7.4 |
| $\eta_{\rm ir,DK}=1.0$ | 6000 | 17.16 | $-0.06$ | 0.43 (F4) | 0.62 | 1.95 | 2.57 | 8.3 |
| $\eta_{\rm ir,DK}=0.25$ | 5300 | 16.39 | $+0.08$ | 0.58 (F9) | 0.48 | 1.51 | 1.99 | 7.6 |
| $\nu=3.0$ | 5600 | 16.90 | $-0.04$ | 0.51 (F7) | 0.60 | 1.88 | 2.48 | 7.6 |
| $\nu=1.25$ | 5300 | 16.39 | $+0.09$ | 0.58 (F9) | 0.47 | 1.47 | 1.95 | 7.7 |
---
author:
- Li Xi
- 'Michael D. Graham'
title: Active and hibernating turbulence in minimal channel flow of Newtonian and polymeric fluids
---
---
abstract: 'The maximality property was introduced in [@T:WS95] in orthomodular posets as a common generalization of orthomodular lattices and orthocomplete orthomodular posets. We show that various conditions used in the theory of effect algebras are stronger than the maximality property, clear up the connections between them and show some consequences of these conditions. In particular, we prove that a Jauch–Piron effect algebra with a countable unital set of states is an orthomodular lattice and that a unital set of Jauch–Piron states on an effect algebra with the maximality property is strongly order determining.'
author:
- |
Josef Tkadlec\
Department of Mathematics, Faculty of Electrical Engineering,\
Czech Technical University, 16627 Praha, Czech Republic,\
tkadlec@fel.cvut.cz
title: Effect algebras with the maximality property
---
Basic notions
=============
Effect algebras as generalizations of orthomodular posets (quantum logics) are studied in the axiomatics of quantum systems—see, e.g., [@DP:NewTrends; @FB:Effect].
An *effect algebra* is an algebraic structure $(E,\oplus,\0,\1)$ such that $E$ is a set, $\0$ and $\1$ are different elements of $E$ and $\oplus$ is a partial binary operation on $E$ such that for every $a,b,c \in E$ the following conditions hold:
$a \oplus b = b \oplus a$ if $a \oplus b$ exists,
$(a \oplus b) \oplus c = a \oplus (b \oplus c)$ if $(a \oplus b)
\oplus c$ exists,
there is a unique $a'\in E$ such that $a \oplus a' = \1$ (*orthosupplement*),
$a=\0$ whenever $a \oplus \1$ is defined.
For simplicity, we use the notation $E$ for an effect algebra. A partial ordering on an effect algebra $E$ is defined by $a \le b$ iff there is a $c
\in E$ such that $b = a \oplus c$. Such an element $c$ is unique (if it exists) and is denoted by $b \ominus a$. $\0$ ($\1$, resp.) is the least (the greatest, resp.) element of $E$ with respect to this partial ordering. For every $a,b \in E$, $a''=a$ and $b' \le a'$ whenever $a \le b$. It can be shown that $a \oplus \0 = a$ for every $a \in E$ and that a *cancellation law* is valid: for every $a,b,c \in E$ with $a \oplus b \le a
\oplus c$ we have $b \le c$. An *orthogonality* relation on $E$ is defined by $a \perp b$ iff $a \oplus b$ exists (iff $a \le b'$). (See, e.g., [@DP:NewTrends; @FB:Effect].)
For $a \le b$ we denote $[a,b] = \{c \in E \st a \le c \le b \}$. A *chain* in $E$ is a nonempty linearly (totally) ordered subset of $E$.
Obviously, if $a \perp b$ and $a \lor b$ exist in an effect algebra, then $a \lor b \le a \oplus b$. The reverse inequality need not be true (it holds in orthomodular posets).
Let $E$ be an effect algebra. An element $a \in E$ is *principal* if $b \oplus c \le a$ for every $b,c \in E$ such that $b,c \le a$ and $b \perp
c$.
An *orthoalgebra* is an effect algebra $E$ in which, for every $a \in
E$, $a=\0$ whenever $a \oplus a$ is defined.
An *orthomodular poset* is an effect algebra in which every element is principal.
An *orthomodular lattice* is an orthomodular poset that is a lattice.
Every orthomodular poset is an orthoalgebra. Indeed, if $a \oplus a$ is defined then $a \oplus a \le a = a \oplus \0$ and, according to the cancellation law, $a \le \0$ and therefore $a=\0$.
Orthoalgebras are characterized by the following conditions: the orthosupplementation is an orthocomplementation (i.e., $a \lor a' = \1$ for every $a$) or $a \oplus b$ is a minimal upper bound of $a,b$ for every $a,b$. Orthomodular posets are characterized as effect algebras such that $a
\oplus b = a \lor b$ for every orthogonal pair $a,b$. (See [@FB:Effect; @FGR:Filters].) Let us remark that an orthomodular poset is usually defined as a bounded partially ordered set with an orthocomplementation in which the orthomodular law is valid.
Let us present a special class of orthomodular posets that we will use in some examples.
\[T:concreteOMP\] Let $X \neq \emptyset$, $E \subset \exp X$ be nonempty such that the following conditions are fulfilled:
$X \setminus A \in E$ whenever $A \in E$,
$A \cup B \in E$ whenever $A,B \in E$ are disjoint.
Then $(E,\oplus,\emptyset,X)$ with $A \oplus B = A \cup B$ for disjoint $A,B \in E$ is an orthomodular poset such that the orthosupplement is the set-theoretic complement and the partial ordering is the inclusion.
Since $E$ is nonempty, there is an element $A \in E$. According to the condition (1), $X \setminus A \in E$. According to the condition (2), $X = A
\cup (X \setminus A) \in E$ and, according to the condition (1), $\emptyset
= X \setminus X \in E$. It is easy to see that the axioms of an effect algebra are fulfilled, that the orthosupplement is the set-theoretic complement, that the partial ordering is the inclusion and that every element of $E$ is principal.
An orthomodular poset of the form of \[T:concreteOMP\] is called *concrete*.
Let us present two important notions we will use in the sequel.
A system $(a_i)_{i \in I}$ of (not necessarilly distinct) elements of an effect algebra $E$ is *orthogonal* if $\bigoplus_{i \in F} a_i$ is defined for every finite set $F \subset I$.
An effect algebra $E$ is *orthocomplete* if for every orthogonal system $(a_i)_{i \in I}$ of elements of $E$ the supremum $\bigvee
\{\bigoplus_{i\in F} a_i \st F \subset I \ \text {is finite}\}$ exists.
An effect algebra $E$ has the *maximality property* if $\ab$ has a maximal element for every $a,b \in E$.
Obviously, every finite effect algebra has the maximality property and every lattice effect algebra has the maximality property—$a \land b$ is a maximal (even the greatest) element of $\ab$ for every $a,b$.
States
======
Let $E$ be an effect algebra. A *state* $s$ on $E$ is a mapping $s
\st E \to [0,1]$ such that:
$s(\1)=1$,
$s(a \oplus b) = s(a) + s(b)$ whenever $a \oplus b$ is defined.
A set $S$ of states on $E$ is *unital*, if for every $a \in E
\setminus \{\0\}$ there is a state $s \in S$ such that $s(a) = 1$.
A set $S$ of states on $E$ is *strongly order determining*, if for every $a,b \in E$ with $a \not\le b$ there is a state $s \in S$ such that $s(a) = 1 > s(b)$.
Obviously, for every state $s$ we have $s(\0)=0$, $s(a')=1-s(a)$ for every $a \in E$, $s(a) \le s(b)$ for every $a,b \in E$ with $a \le b$.
There are special two-valued states on concrete orthomodular posets (it is easy to verify that they are indeed states):
Let $E \subset \exp X$ be a a concrete orthomodular poset, $x \in X$. The state $s_x$ on $E$ defined by $$s_x(A) = \begin {cases}
0 \,,& x \notin A \,,\\
1 \,,& x \in A \,,
\end {cases}
\qquad A \in E \,,$$ is called *carried by the point $x$*.
It is easy to see that for concrete orthomodular posets the set of states carried by points is strongly order determining.
\[L:StronglyFull-Unital\] Every strongly order determining set of states on an effect algebra is unital.
Let $S$ be a strongly order determining set of states on an effect algebra $E$, $a$ be a nonzero element of $E$. Then $a \not\le \0$ and therefore there is a state $s \in S$ such that $s(a) = 1 > s(\0)$.
Let us present two observations describing the impact of a sufficiently large state spaces to the properties of the algebraic structure.
\[T:unital-OA\] Every effect algebra with a unital set of states is an orthoalgebra.
Let $E$ be an effect algebra with a unital set $S$ of states. Let $a \in E$ be such that $a \oplus a$ is defined. Then $1 \ge s (a \oplus a) = 2 \,
s(a)$ and therefore $s(a) \le \frac 12 < 1$ for every state $s \in S$. Since $S$ is unital, we obtain that $a=\0$.
\[T:SOD->OMP\] Every effect algebra with a strongly order determining set of states is an orthomodular poset.
Let $E$ be an effect algebra with a strongly order determining set $S$ of states. Let us prove that every element of $E$ is principal. Let $a,b,c \in
E$ such that $b,c \le a$ and $b \perp c$. Then for every state $s \in S$ with $s(a')=1$ we consecutively obtain: $0 = s(a) = s(b) = s(c) = s(b\oplus
c)$, $s\bigl((b \oplus c)'\bigr) = 1$. Since the set $S$ is strongly order determining, we obtain that $a' \le (b \oplus c)'$ and therefore $b \oplus c
\le a$.
Jauch–Pironness
===============
Let $E$ be an effect algebra. A state $s$ on $E$ is *Jauch–Piron* if for every $a,b \in E$ with $s(a)=s(b)=1$ there is a $c \in E$ such that $c
\le a,b$ and $s(c)=1$.
An effect algebra is *Jauch–Piron* if every state on it is Jauch–Piron.
The following statement was proved in [@T:CEOEA Proposition 2.6], we will generalize it later (\[T:JPCU->OML\]).
\[T:JPCU->M\] Every Jauch–Piron effect algebra with a countable unital set of states has the maximality property.
Let $E$ be a Jauch–Piron effect algebra with a countable unital set $S$ of states. Let $a,b \in E$. If $\ab = \{\0\}$ then $\0$ is a maximal element of $\ab$. Let us suppose that $\ab \neq \{\0\}$. Then there is an element $c
\in \ab \setminus \{\0\}$ and, since the set $S$ is unital, there is a state $s \in S$ such that $s(c)=1$. Hence $s(a)=s(b)=1$ and the set $S_{a,b} = \{
s \in S \st s(a)=s(b)=1 \}$ is nonempty and countable. Let $s_0$ be a $\sigma$-convex combination (with nonzero coefficients) of all states from $S_{a,b}$. Then $s_0(a) = s_0(b) = 1$. Since the state $s_0$ is Jauch–Piron, there is an element $c \in \ab$ such that $s_0(c)=1$. It remains to prove that $c$ is a maximal element of $\ab$. Indeed, if $d \in
\ab$ with $d \ge c$ then $e = d \ominus c \in \ab$ and $e \perp c$. Hence $s_0(e) = 0$ and therefore there is no state $s\in S$ such that $s(e)=1$. Due to the unitality of $S$, $e=\0$ and therefore $d=c$.
\[T:UJP->OMP\] Every effect algebra with the maximality property and with a unital set of Jauch–Piron states is an orthomodular poset.
Let $E$ be an effect algebra with the maximality property and with a unital set $S$ of Jauch–Piron states. Let us suppose that $E$ is not an orthomodular poset and seek a contradiction. There are elements $a,b,c \in E$ such that $b, c \le a$, $b \perp c$ and $b \oplus c \not\le
a$. Let us denote $d = b \oplus c$. Since $E$ has the maximality property, there is a maximal element $e$ in $[\0,a'] \cap [\0,d']$. Since $d
\not\le a$, we obtain that $a' \not\le d'$ and therefore $e < a'$ and $a' \ominus e \neq \0$. Since the set $S$ is unital, there is a state $s
\in S$ such that $s(a' \ominus e) = 1$. Hence $s(a')=1$, $0 = s(e) = s(a) =
s(b) = s(c) = s(d)$, $s(d') = 1$, $s(d'\ominus e) = 1$. Since the state $s$ is Jauch–Piron, there is an element $f \in E$ such that $f \le (a' \ominus e), (d' \ominus
e)$ and $s(f) = 1$. Hence $f \neq \0$ and $e < e \oplus f \le a',d'$—this contradicts to the maximality of $e$.
Let us remark that there are effect algebras with the maximality property that are not orthoalgebras—e.g., the 3-chain $C_3 = \{\0,a,\1\}$ with $a
\oplus a = \1$ and $x \oplus \0 = x$ for every $x \in C_3$. It seems to be an open question whether the assumption of the maximality property in \[T:UJP->OMP\] might be omitted (it is not a consequence of the existence of a countable unital set of Jauch–Piron states—see \[E:OMPnotM\].) \[T:UJP->OMP\] cannot be improved to orthomodular lattices—see \[E:UnotSOD\] ($\{\frac12\,(s_x+s_y)\st x,y \in X, \ x \neq y\}$ is a unital set of Jauch–Piron states).
It is well-known and easy to see that every state on a Boolean algebra is Jauch–Piron and that a unital set of states on a Boolean algebra is strongly order determining. Let us generalize the latter statement.
\[T:Unital-StronglyFull\] A set of Jauch–Piron states on an effect algebra with the maximality property is unital if and only if it is strongly order determining.
$\Leftarrow$: See \[L:StronglyFull-Unital\].
$\Rightarrow$: Let $E$ be an effect algebra with the maximality property and with a unital set $S$ of Jauch–Piron states. Let $a,b \in E$ such that $a
\not\le b$. Let $c \in E$ be a maximal element of $\ab$. Then $c<a$ and therefore $a \ominus c \neq \0$. Since the set $S$ is unital, there is a state $s \in S$ such that $s(a \ominus c) = 1$ and therefore $s(a)=1$. Let us suppose that $s(b)=1$ and seek a contradiction. Since $s$ Jauch–Piron, there is an element $d \in E$ such that $d \le a \ominus c$, $d \le b$ and $s(d)=1$. Hence $d \neq \0$ and $c < c \oplus d \le a$. According to \[T:UJP->OMP\], $b$ is principal and therefore $c \oplus d \le b$—this contradicts to the maximality of $c$.
Let us remark that \[T:UJP->OMP\] is a consequence of \[T:Unital-StronglyFull\] and \[T:SOD->OMP\]. Let us present examples that the assumptions in \[T:Unital-StronglyFull\] cannot be omitted.
\[E:UnotSOD\] Let $X=\{a,b,c,d\}$, $E$ be the family of even-element subsets of $X$ with the $\oplus$ operation defined as the union of disjoint sets. Then $(E,\oplus,\emptyset,X)$ is a finite (hence with the maximality property) concrete orthomodular poset and the set $S = \{s_a, s_b, s_c\}$ of states carried by points $a,b,c$ is a unital set of (two-valued) states on $E$ that is not strongly order determining: $\{a,d\} \not\le \{a,b\}$ but there is no state $s \in S$ such that $s(\{a,d\}) = 1 > s(\{a,b\})$. (States in $S$ are not Jauch–Piron.)
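For a quick sanity check of this example, the following Python sketch enumerates the even-element subsets of a four-element set and tests the two claims by brute force (the point names are arbitrary strings; this is only an illustration of the definitions of unital and strongly order determining sets of states).

```python
from itertools import combinations

X = ['a', 'b', 'c', 'd']
# E = even-element subsets of X, ordered by inclusion.
E = [frozenset(s) for k in (0, 2, 4) for s in combinations(X, k)]
S = ['a', 'b', 'c']                       # points carrying the states

def state(x, A):
    """Two-valued state carried by the point x."""
    return 1 if x in A else 0

unital = all(any(state(x, A) == 1 for x in S) for A in E if A)
strongly_od = all(any(state(x, A) == 1 and state(x, B) < 1 for x in S)
                  for A in E for B in E if not A <= B)
print(unital, strongly_od)                # True False: unital, but not strongly order determining
```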
\[E:OMP-UnotSOD\] Let $X_1,X_2,X_3,X_4$ be nonempty mutually disjoint sets, $X_1, X_3$ be infinite, $X = \bigcup_{i=1}^4 X_i$, $$\begin{aligned}
E_0 &= \{\emptyset, X_1 \cup X_2, X_2 \cup X_3, X_3 \cup X_4, X_4 \cup X_1, X\}\,,\\
E &= \{(A \setminus F) \cup (F \setminus A) \st F \subset X_1 \cup X_3 \
\text {is finite},\ A \in E_0 \}\,,
\end{aligned}$$ $A \oplus B = A \cup B$ for disjoint $A,B \in E$. Then $(E,\oplus,\emptyset,X)$ is a concrete orthomodular poset and the set $S=\{s_x \st x \in X_1 \cup X_3\}$ of states carried by points from $X_1
\cup X_3$ is a unital set of (two-valued) Jauch–Piron states on $E$. The set $S$ is not strongly order determining because $X_1 \cup X_4 \not\le X_1 \cup
X_2$ and for every $s \in S$ with $s(X_1 \cup X_4) = 1$ there is an $x \in
X_1$ such that $s = s_x$ and therefore $s(X_1 \cup X_2) = 1$. ($E$ does not have the maximality property.)
\[T:JPCU->OML\] Every Jauch–Piron effect algebra with a countable unital set of states is an orthomodular lattice.
Let $E$ be a Jauch–Piron effect algebra with a countable unital set $S$ of states. According to \[T:JPCU->M\], $E$ has the maximality property. According to \[T:Unital-StronglyFull\], the set $S$ is strongly order determining. According to \[T:SOD->OMP\], $E$ is an orthomodular poset. Let us show that $a \land b$ exists for every $a,b \in E$. (Then also $a
\lor b = (a' \land b')'$ exists for every $a,b \in E$.) If $\ab=\{\0\}$ then $\0 = a \land b$. Let us suppose that there is a nonzero element $c \in E$ such that $c \le a,b$. Then there is a state $s \in S$ such that $s(c)=1$. Hence $s(a)=s(b)=1$ and the set $S_{a,b} = \{ s \in S \st s(a)=s(b)=1 \}$ is nonempty and countable. Let $s_0$ be a $\sigma$-convex combination (with nonzero coefficients) of all states from $S_{a,b}$. Then $s_0(a)=s_0(b)=1$. Since the state $s_0$ is Jauch–Piron, there is an element $c_0 \in E$ such that $c_0 \le a,b$ and $s_0(c_0)=1$. Hence $s(c_0)=1$ for every $s \in
S_{a,b}$. For every $c \in \ab$ and every $s \in S$ with $s(c)=1$ we have $s
\in S_{a,b}$ and therefore $s(c_0)=1$. Since $S$ is strongly order determining, $c \le c_0$ for every $c \in \ab$. Hence $c_0 = a \land b$.
Let us present examples that the conditions in \[T:JPCU->OML\] cannot be omitted. There is a concrete (hence with a strongly order determining set of two-valued states) Jauch–Piron orthomodular poset that is not a lattice—see [@Muller] (every unital set of states on it is uncountable). As the following example shows, there is an orthomodular poset with a countable strongly order determining set of (two-valued) Jauch–Piron states that does not have the maximality property and therefore is not a lattice (there are non-Jauch–Piron states).
\[E:OMPnotM\] Let $X_1,X_2,X_3,X_4$ be mutually disjoint countable infinite sets, $X = \bigcup_{i=1}^4 X_i$, $$\begin{aligned}
E_0 &= \{\emptyset, X_1 \cup X_2, X_2 \cup X_3, X_3 \cup X_4, X_4 \cup X_1, X\}\,,\\
E &= \{(A \setminus F) \cup (F \setminus A) \st F \subset X \ \text {is
finite},\ A \in E_0 \}\,,
\end{aligned}$$ $A \oplus B = A \cup B$ for disjoint $A,B \in E$. Then $(E, \oplus,
\emptyset, X)$ is a concrete orthomodular poset and the set $S=\{s_x \st x
\in X\}$ of states carried by points is a countable strongly order determining set of two-valued Jauch–Piron states on $E$. The set $[\emptyset, X_1 \cup X_2] \cap [\emptyset, X_4 \cup X_1]$ consists of finite subsets of $X_1$, hence $E$ does not have the maximality property. As an example of a non-Jauch–Piron state we can take a $\sigma$-convex combination (with nonzero coefficients) of states from $\{s_x \st x \in
X_1\}$.
Relationship of various conditions
==================================
\[T:MaximalityProperties\] Let $E$ be an effect algebra. Consider the following properties:
(F) $E$ is finite.
(CF) $E$ is chain finite.
(OC) $E$ is orthocomplete.
(JPCU) $E$ is Jauch–Piron with a countable unital set of states.
(L) $E$ is a lattice.
(CU) For every $a,b \in E$, every chain in $\ab$ has an upper bound in $\ab$.
(M) $E$ has the maximality property.
Then the following implications hold: (F) $\Rightarrow$ (CF) $\Rightarrow$ (OC) $\Rightarrow$ (CU) $\Rightarrow$ (M) and (JPCU) $\Rightarrow$ (L) $\Rightarrow$ (CU).
(F) $\Rightarrow$ (CF): Obvious.
(CF) $\Rightarrow$ (OC): Every orthogonal system in a chain finite effect algebra is finite. Hence $E$ is orthocomplete.
(OC) $\Rightarrow$ (CU): Let $C$ be a chain in $\ab$. According to [@JP:Orthocomplete Theorem 3.2], every chain in an orthocomplete effect algebra has a supremum. This supremum obviously belongs to $\ab$.
(CU) $\Rightarrow$ (M): Let $a,b \in E$. Since $\ab \supset \{\0\}$, the family of chains in $\ab$ is nonempty. According to Zorn’s lemma, there is a maximal chain $C$ in $\ab$. According to the assumption, there is an upper bound $c \in \ab$ of $C$. Since the chain $C$ is maximal, $c \in C$ is a maximal element of $\ab$.
(JPCU) $\Rightarrow$ (L): See \[T:JPCU->OML\].
(L) $\Rightarrow$ (CU): Let $a,b \in E$. The element $a \land b$ is an upper bound for every chain in $\ab$.
Let us present examples that the scheme of implications in the previous theorem cannot be improved.
Let $X$ be an infinite set, $y \notin X$, $E = \{\emptyset\} \cup \bigl\{
\{x,y\} \st x \in X \bigr\} \cup \bigl\{ X \setminus \{x\} \st x \in X\}
\cup \bigl\{ X \cup \{y\} \bigr\}$, $A \oplus B = A \cup B$ for disjoint $A,B \in E$. Then $(E,\oplus,\emptyset,X\cup\{y\})$ is an infinite chain finite concrete orthomodular lattice.
Let $X$ be an uncountable set, $E=\exp X$ with $A \oplus B = A \cup B$ for disjoint $A,B \in E$. Then $(E,\oplus,\emptyset,X)$ is an orthocomplete concrete orthomodular lattice (it forms a Boolean algebra) such that there is an uncountable set of mutually orthogonal elements. Hence it is not chain finite and every unital set of states on $E$ is uncountable.
Let $X$ be a countable infinite set. Let $E$ be a family of finite and cofinite subsets of $X$ with the $\oplus$ operation defined as the union of disjoint sets. Then $(E,\oplus,\emptyset,X)$ is a concrete orthomodular lattice (it forms a Boolean algebra) fulfilling the condition (JPCU) (every state on a Boolean algebra is Jauch–Piron, there is a countable unital set of states carried by points) that is not orthocomplete.
Let $X$ be a 6-element set. Let $E$ be the family of even-element subsets of $X$ with the $\oplus$ operation defined as the union of disjoint sets from $E$. Then $(E,\oplus,\emptyset,X)$ is a finite concrete orthomodular poset that is not a lattice.
Let $X, Y$ be disjoint infinite countable sets, $$\begin{aligned}
E_0 &= \{A \subset (X \cup Y) \st \card (A \cap X) = \card (A \cap Y) \
\text {is finite}\}\,,\\
E &= E_0 \cup \{(X \cup Y) \setminus A \st A \in E_0 \}\,,
\end{aligned}$$ $A \oplus B = A \cup B$ for disjoint $A,B \in E$. Then $(E, \oplus,
\emptyset, X \cup Y)$ is a concrete orthomodular poset with the maximality property. Let $X = \{x_n \st n \in \N\}$, $y_0 \in Y$, $f \st X \to Y
\setminus \{y_0\}$ be a bijection, $A = (X \cup Y) \setminus \{x_1,
f(x_1)\}$, $B = (X \cup Y) \setminus \{x_1, y_0\}$. Then the chain $\bigl\{
\{ x_2, \dots, x_n, f(x_2), \dots, f(x_n) \} \st n \in \N \setminus \{1\}
\bigr\}$ in $[\emptyset, A] \cap [\emptyset, B]$ does not have an upper bound in $[\emptyset, A] \cap [\emptyset, B]$, hence the condition (CU) from \[T:MaximalityProperties\] is not fulfilled.
Let us remark that not all effect algebras have the maximality property (see \[E:OMPnotM\]).
Acknowledgements {#acknowledgements .unnumbered}
================
The work was supported by the grant of the Grant Agency of the Czech Republic no. 201/07/1051 and by the research plan of the Ministry of Education of the Czech Republic no. 6840770010.
[9]{}
Dvurečenskij, A., Pulmannová, S.: *New Trends in Quantum Structures*. Kluwer Academic Publishers, Bratislava, 2000.
Foulis, D. J., Bennett, M. K.: *Effect algebras and unsharp quantum logics*, Found. Phys. (1994) [**24**]{}, 1331–1352.
Foulis, D., Greechie, R., Rüttimann, G.: *Filters and supports in orthoalgebras*. Internat. J. Theoret. Phys. [**31**]{} (1992), 789–807.
Jenča, G., Pulmannová, S.: *Orthocomplete effect algebras*. Proc. Amer. Math. Soc. [**131**]{} (2003), 2663–2671.
Müller, V.: *Jauch–Piron states on concrete quantum logics*. Internat. J. Theoret. Phys. [**32**]{} (1993), 433–442.
Tkadlec, J.: *Central elements of effect algebras*. Internat. J. Theoret. Phys. [**43**]{} (2004), 1363–1369.
Tkadlec, J.: *Conditions that force an orthomodular poset to be a Boolean algebra*. Tatra Mt. Math. Publ. [**10**]{} (1997), 55–62.
---
abstract: 'The difficulty of getting medical treatment is one of the major livelihood issues in China. Since patients lack prior knowledge about the spatial distribution and the capacity of hospitals, some hospitals have abnormally high or sporadic population densities. This paper presents a new model for estimating the spatiotemporal population density in each hospital based on location-based service (LBS) big data, which would be beneficial for guiding and dispersing outpatients. To improve the estimation accuracy, several approaches are proposed to denoise the LBS data and classify people by detecting their various behaviors. In addition, a long short-term memory (LSTM) based deep learning model is presented to predict the trend of population density. Using Baidu''s large-scale LBS log database, we apply the proposed model to 113 hospitals in Beijing, P. R. China, and construct an online hospital recommendation system which can provide users with a ranked list of hospitals based on the real-time population density information and the hospitals'' basic information such as their levels and distances. We also mine several interesting patterns from these LBS logs using our proposed system.'
author:
-
bibliography:
- 'ref.bib'
title: |
Population Density-based Hospital Recommendation\
with Mobile LBS Big Data\
[^1]
---
Data mining, population density, hospital recommendation, location-based service.
Introduction {#sec:intro}
============
According to the statistics of the National Health and Family Planning Commission ([https://goo.gl/i2Kh6p](http://www.nhfpc.gov.cn/mohwsbwstjxxzx/s7967/201702/0a644a51bfc347ccab43fb1766aa5089.shtml)), there were a total of 991,632 medical institutions in China as of Nov. 2016, 4,735 more than in Nov. 2015. Nevertheless, the difficulty of getting medical care remains one of China’s major livelihood issues. A survey on Peking University First Hospital [@yu2008survey], which is one of the most famous hospitals in Beijing, indicates that more than 45% of outpatients have to wait for over two hours after registration, whereas 85% of them get less than 10 minutes for the doctor’s inquiry. This phenomenon is actually common in China’s 776 Top-Class hospitals[^2]. The reason is that many hospitals lack publicity and are not familiar to most patients. Having no way of knowing whether there is a good enough and less crowded hospital nearby, people have no choice but to go to those congested famous hospitals regardless of the severity of the disease. In fact, most outpatients with a mild disease may expect a quick treatment yet have a low requirement for the hospital’s treatment ability. It is imperative to find a simple and convenient way to access the crowd status and basic information of the neighboring hospitals. There are several standard ways for crowd counting, such as approaches based on video or beacons. However, these methods rely on surveillance data or wireless network data, and any company or non-governmental organization can hardly gather such data for all hospitals even in one city, not to mention in any larger range.
Fortunately, location-based service (LBS) big data offers a potential solution to this dilemma. LBS data have two unique properties: **a)** LBS data naturally belong to the service providers, so using them for population density estimation does not involve asking hospitals for any help. **b)** LBS data are sufficient for population density estimation thanks to their copiousness and vast area coverage. As smartphones are popular, several major LBS providers in China have hundreds of millions of users and store billions of LBS request logs every day. For instance, Baidu Map has over 300 million active users and receives 23 billion LBS request logs in China each day on average. Taking advantage of these two properties, we present a novel (nearly) real-time model for counting and predicting the crowds of all hospitals in a city based on LBS big data. What’s more, the distribution of people’s residence time can also be figured out with the LBS data. Relying on the location logs of Baidu LBS and Baidu Points of Interest (POI) data, we have designed an APP[^3] to provide users with **a)** each hospital’s real-time and predicted outpatient density, **b)** its official level, and **c)** the distance from the user to each hospital and the corresponding path planning. To the best of our knowledge, it is the first hospital recommendation system based on population density analysis with LBS big data. Note that Baidu LBS data incorporate GPS, WiFi, and cellular network information, so they are a relatively high-quality data source. Since the LBS data we use are location logs of mobile APPs, we will use *data* and *logs* interchangeably to denote the LBS data.
Our contributions can be summarized into three innovations, each addressing one of the three main challenges of this work. *The first innovation* is the model for detecting the types of people around hospitals, which addresses the highly noisy LBS data. Although our data source is relatively high-quality, it is still temporally unstable and spatially inaccurate for a variety of reasons. Passersby, inpatients, and hospital staff, as well as people working around hospitals, can all cause noise in our outpatient density estimation task. By identifying their different movement behaviors, the population analysis model quickly clusters the users into three classes, i.e., **a)** people who pass by, **b)** outpatients, and **c)** inpatients or staff. Since the goal is to provide references to outpatients with mild diseases, only the outpatients should be counted. This work will be specified in Section \[sec:PDA\]. *The second innovation* is a neural network for predicting the population density trend, which improves the practicality of our application. Accurate estimations of how long patients need to wait for treatment are what patients actually care about. Based on the history statistics of population density obtained by the analysis of the LBS data, a dual network is designed to represent both high-frequency and low-frequency trends and to generate accurate predictions. We will introduce it in Section \[sec:PDP\]. *The third innovation* is a highly efficient parallel architecture on Hadoop. Providing real-time population density analysis and prediction requires the system to process dozens of gigabytes of data per hour. To address this issue, we fully utilize the features of Hadoop by separating the whole task into two MapReduce processes. Consequently, our application can process 20GB of original data within 10 minutes by using 2000 slave nodes. Details of the architecture are given in Section \[sec:parall\].
![The flowchart of our hospital recommendation system. Three major modules calculate the real-time population density of each hospital and predict its trend. The web server merges the population density information and hospital basic information for users.[]{data-label="fig:whole_structure"}](figs/whole-structure){width="0.8\linewidth"}
Figure \[fig:whole\_structure\] schematically shows how the innovations integrate into an application. There are three major modules, two databases, and a web server. The *address resolution module* (AR) resolves the address of each LBS log according to its longitude and latitude. The *population analysis module* (PA) analyzes population density after removing the noisy data in the LBS logs. These two modules calculate the statistics of population density. The *population density prediction module* (PDP) estimates the population density trend based on the history statistics. With these three modules cooperating with each other, the real-time population density data and its trend over the next several hours can be obtained. Thus, users can get the combination of population density information and hospitals’ basic information from the web server.
Related Work {#sec:RW}
============
In Chinese industry, there have been three crowd-counting related trials based on LBS big data. One of them is Baidu’s big data project on China’s ghost cities [@chi2015ghost]. The project discovered 50 possible ghost cities by analyzing the LBS data on a long-term scale. Since the address resolution strategy in that work is well designed, we adopted it in our work. But unlike this project [@chi2015ghost], we investigate the LBS data on a short-term scale and can output the results in nearly real time. The other two trials are the heat maps of Baidu Map and Wechat, each of which encodes the population density with different colors. Both of these works studied population density based on the whole map, which prevents them from mining human behavior features specific to a bounded area or a certain kind of public facility. By examining these behavior features, we developed several optimization strategies to improve the reliability of the results in our work. Functionally, our application presents the density level and density-time curves of each hospital, while Baidu Map only provides the current heat map without density-time curves, and Wechat City Heat Map gives a density-time curve of a selected point on the map instead of a geographical region.
Other related work on the application of big data to public healthcare has primarily focused on mining social media data for bio-surveillance systems [@dredze2012social; @velasco2014social; @denecke2012making], Electronic Health Records (EHR) [@zhang2015clinical; @jensen2012mining], and locating health resources [@pacheco2008heuristic; @kim2012heuristics]. It is worth pointing out that most of the previous work aimed at healthcare facilities, whereas we focus on the complementary beneficiaries: our work is patient-oriented.
Moreover, the majority of the research on Point-of-Interest (POI) recommendation and LBS or Location-Based Social Networks (LBSNs) pays attention to four aspects, i.e., temporal patterns [@yao2016poi; @yin2015joint], geographical influence [@lian2014geomf; @yin2015joint], social correlations [@hu2014social], and textual [@vu2016geosocialbound] or visual [@lim2015recommending; @wang2017your] content indications. Instead of improving the recommendation approach, our research focuses on the estimation and prediction of population density based on massive LBS data, which is a totally different database from LBSN data, and we use a simple yet effective recommendation strategy to show the distinctive value of this work.
Population Density Analysis {#sec:PDA}
===========================
In this section, we will show how to recognize different classes of people to obtain denoised LBS data and how to analyze the population density based on them. Two modules are introduced, the *address resolution module* (AR) and the *population analysis module* (PA). The PA module takes most of the analysis work, yet it is time-consuming. Hence, we present an efficient AR module to preprocess the original data and screen out the enormous amount of irrelevant data.
Address Resolution (AR) {#subsec:AR}
-----------------------
Hospitals only occupy a small proportion of the whole city area. If we simply assume the LBS data are located uniformly in the city, the natural inference is that the majority of the original data can be filtered out once their locations are decoded. Thus, several strategies are proposed for address resolution as follows. First of all, we map the continuous longitude and latitude to a discrete set of squares of length $a$ [@chi2015ghost]. In other words, the map is cut into grids, each of which has semantic information indicating [**a)**]{} whether this grid belongs to a hospital and [**b)**]{} if it does, which hospital it belongs to. As a compromise between precision and efficiency, the grid length $a$ is set to 2 meters. Next, each grid is allocated a unique ID calculated from its center latitude and longitude. For fast queries, the mapping between grid IDs and their semantic information is stored. In this manner, we can obtain the semantic location information of a specific log just by calculating the ID of its corresponding grid. After filtering out the data irrelevant to hospitals, the number of logs is significantly reduced from the order of $10^8$ per day to the order of $10^6$ per day.
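To make the grid scheme concrete, a minimal sketch of such an address resolution step is given below. The metre-per-degree conversion, the dictionary-based lookup, and the toy hospital entry are our own illustrative assumptions; they are not Baidu’s internal implementation.

```python
import math

GRID_M = 2.0                      # grid edge length a = 2 meters, as in the text
M_PER_DEG_LAT = 111_320.0         # rough meters per degree of latitude

def grid_id(lat, lng, ref_lat=39.9):
    """Map a (lat, lng) pair to a discrete grid-cell ID (illustrative scheme)."""
    m_per_deg_lng = M_PER_DEG_LAT * math.cos(math.radians(ref_lat))
    return (int(lat * M_PER_DEG_LAT // GRID_M), int(lng * m_per_deg_lng // GRID_M))

# Precomputed map {grid_id: hospital_id}, containing only grids inside hospitals.
hospital_grids = {grid_id(39.982, 116.357): "hospital_042"}   # toy entry

def resolve(log):
    """Return the hospital a log falls into, or None (the log is then discarded)."""
    return hospital_grids.get(grid_id(log["lat"], log["lng"]))
```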
Population Analysis (PA)
------------------------
To get an accurate population count, we propose a population classification strategy. In a typical public service area, different types of persons have remarkably diverse behaviors. By detecting and identifying these behaviors, we can accurately screen outpatients and denoise the original data. But before analyzing behaviors, another step is needed. Although temporal instability and spatial inaccuracy are intrinsic properties of LBS data that we cannot eliminate, we introduce an auxiliary variable $c$ to describe the uncertainty of each location log.
Each request log contains an observed latitude $lat_{o}$ and longitude $lng_{o}$ describing the observed position, and an observation accuracy $r$. The formal definition of the accuracy $r$ is that the true position is guaranteed to be within the circle centered at $(lat_{o}, lng_{o})$ with radius $r$. Therefore, let $\bm{x}$ be a random variable representing the true position of this log. We assume $\bm{x}$ follows an isotropic normal distribution $\mathcal{N}$ with the 2-dimensional mean vector $\bm{\mu}=(lat_{o}, lng_{o})$ and the $2\times2$ covariance matrix $\bm{\Sigma}=\sigma\bm{I}$, where the parameter $\sigma=r/3$ [^4] and $\bm{I}$ is a $2\times2$ identity matrix. Then, we describe the certainty that a request log is truly in the hospital by a confidence $c$: $$c=\iint\limits_{\bm{R}}f(\bm{x})d\bm{x}\approx
\frac{|\bm{X'}\cap \bm{R}|}{|\bm{X'}|}$$ where $\bm{R}$ is the region of the hospital this log belongs to and $f(\bm{x})$ is the probability density function of $\mathcal{N}(\bm{\mu}, \bm{\Sigma})$. To calculate $c$ efficiently, we sample $k$ points from the normal distribution $\mathcal{N}$, forming a point set $\bm{X'}$, and use the fraction of samples located in the hospital to approximate $c$. By definition, $c\in[0,1]$, and $c=1$ indicates that this log is surely located in this hospital.
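A minimal sketch of this Monte-Carlo estimate of $c$ is given below (the sample size $k$ and the membership test are placeholders; in practice the coordinates and the accuracy $r$ must first be expressed in the same metric units).

```python
import numpy as np

def confidence(lat_o, lng_o, r, in_hospital, k=200, rng=np.random.default_rng(0)):
    """Estimate c as the fraction of k samples drawn from the isotropic normal
    distribution with mean (lat_o, lng_o) and sigma = r/3 that fall inside the
    hospital region; `in_hospital(lat, lng)` is the region-membership test."""
    sigma = r / 3.0
    pts = rng.normal(loc=[lat_o, lng_o], scale=sigma, size=(k, 2))
    hits = sum(in_hospital(lat, lng) for lat, lng in pts)
    return hits / k
```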
![Part A represents passersby who momentarily appeared. Part B represents outpatients. Part C characterizes people who stay for a very long time or appear regularly, which indicates that they might be inpatients or staff.[]{data-label="fig:fig1"}](figs/fig1){width="1\linewidth"}
With the LBS logs and their confidences $c$, we can apply our population classification. Based on our observations, the differences among behaviors are mainly reflected in residence time and occurrence frequency, as shown in Figure \[fig:fig1\]. Hence, we screen out people who stay for a reasonable time and do not appear frequently, using some empirical thresholds and rules. Regarding the residence time, on the one hand, a patient could hardly finish the whole diagnostic procedure within 20 minutes, even without any queuing. On the other hand, a 20-minute walk covers far more than 300 to 500 meters, which is roughly the maximal extent of a hospital in a city. Based on this property, we regard people staying less than 20 minutes as passersby on nearby roads and filter these data out. Further, outpatients will not stay at a hospital for an extremely long time, for example over 15 hours: outpatient doctors are on duty from 8 a.m. to 6 p.m., a total of 10 hours, and with an additional 5 hours to cover patients who start queuing before the working time or are still in treatment (such as intravenous infusion) after closing, 15 hours is a very loose upper bound for an outpatient’s duration. A person staying longer is likely an inpatient. Regarding the occurrence frequency, hospital staff and nearby residents can be characterized more clearly from this perspective. In the long term, an outpatient is unlikely to go to the hospital every day, whereas staff and residents are quite the opposite. Precisely, people who are observed more than 3 times in 7 consecutive days are considered staff or residents. Overall, if a person’s residence time $t$ satisfies $20 min\le t\le15 h$ and, meanwhile, the person is not observed frequently, then this person is regarded as an outpatient.
In particular, two structures, the *counting list* and the *blacklist*, are designed to analyze and count people. Each hospital has its own *counting list* and *blacklist*. The *counting list* is used for counting people and removing those whose stay is too short. The *blacklist* records those who stay too long or appear frequently. We now explain each of these two structures.
**Counting list.** It stores the information of each person in this hospital at present. It is a hash list with each *item* representing a person distinguished by UID. Each *item* includes $3$ dimensions:
a) *res\_time*, recording the person’s residence time.
b) *confidence* $\hat{c}$ of this person.
c) *is\_patient* indicating whether this person has stayed for enough time, initialized as $False$.
Let $n^c$ be the number of *item*s in one *counting list*. Let $i$ be the index of each *item*, $i\in\{1,2,..,n^c\}$. The number of people $N$ in this hospital at any time is estimated by the formula $N=\sum_{k\in T}\hat{c}_k$, where $T$ is the set of all $i$ with $is\_patient_i=$ TRUE in this *counting list*.
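For concreteness, a counting-list item and the population estimate can be sketched as follows (the field names are ours; they mirror the three dimensions listed above).

```python
# uid -> {"res_time": minutes observed so far, "confidence": c_hat, "is_patient": bool}
counting_list = {
    "u001": {"res_time": 35, "confidence": 0.9, "is_patient": True},
    "u002": {"res_time": 10, "confidence": 0.7, "is_patient": False},
}

def population(counting_list):
    """N = sum of c_hat over the items currently flagged as patients."""
    return sum(it["confidence"] for it in counting_list.values() if it["is_patient"])
```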
The residence time analysis strategy involves two variations, $is\_patient$ and $confidence\ c$: $$\begin{aligned}
&is\_patient_i=\mbox{TRUE}, &
\begin{split}
&\mbox{if }res\_time_i>20min
\end{split} \\
&\hat{c}_i:
\begin{cases}
=(\hat{c}+c) \mod 1,\\
=\hat{c}_i/2, \\
\mbox{delete } item_i,
\end{cases} &
\begin{split}
&i\mbox{'s new data received}\\
&\mbox{every 15 min}\\
&\hat{c}_i<2^{-16}
\end{split}\end{aligned}$$ where the three conditions on the right indicate, respectively, that new data of person $i$ has been received, that 15 minutes have elapsed, and that $\hat{c}_i$ has dropped below $2^{-16}$. Since we cannot receive any special signal when a patient leaves the hospital, the exponential decay strategy of $\hat{c}$ improves the accuracy of population counting. The deletion condition simply means that if we have not received a person’s LBS log for 4 hours, we believe this person has left the hospital.
**Blacklist.** People who are classified as inpatients, staff, or nearby residents are kept in the *blacklist*. If an LBS log belongs to a person in the *blacklist*, it is ignored. The *blacklist* also has a deletion rule: if a person has not shown up in this hospital for 10 consecutive days, he or she is removed.
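The update and deletion rules above can be summarized in a short sketch. This is a simplified illustration using the thresholds quoted in the text; the 7-day frequency rule for staff and residents and the 10-day blacklist expiry would be handled analogously and are omitted here.

```python
def on_new_log(item, c, now, blacklist):
    """Update a counting-list item when a new log with confidence c arrives at time `now` (minutes)."""
    item["confidence"] = (item["confidence"] + c) % 1      # update rule as specified above
    res_time = now - item["first_seen"]
    if res_time > 20:                                      # stayed long enough: an outpatient
        item["is_patient"] = True
    if res_time > 15 * 60:                                 # longer than 15 h: inpatient or staff
        blacklist.add(item["uid"])

def every_15_minutes(counting_list):
    for uid, item in list(counting_list.items()):
        item["confidence"] /= 2                            # exponential decay
        if item["confidence"] < 2 ** -16:                  # ~4 h without new logs
            del counting_list[uid]                         # assume the person has left
```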
Population Density Prediction {#sec:PDP}
=============================
In Chinese hospitals, the schedule of each doctor is almost fixed every week. Therefore, it is natural to assume that the hospital population density varies periodically with a one-week cycle. Considering that such a variation is a time series, we predict the population density trend from the historical data by utilizing a neural network based on long short-term memory (LSTM) [@hochreiter1997long]. Note that because the population densities of different hospitals have their own trends, we train the network for each hospital separately.
Generally, we want the prediction model to learn the pattern between the input historical data and the population density of the next few hours. Since the hospital population density varies periodically with a one-week cycle, the density-time curves of different weeks look similar to each other. Hence, we slice one week into several equal-length periods. Let $n^p$ be the total number of periods in a week. Each period is identified by a vector $\bm{p}=(w,t)$, where $w$ is the week index and $t\in\{1,2,..,n^p\}$ is the period index within the week. Given a period $\bm{p}_{w,t}$, the population density data set in this period is denoted by $\bm{D}_{\bm{p}}$ or $\bm{D}_{w,t}$. The periodicity mentioned above can be described as follows: given $w$ and $t$, for all $w_1,w_2 \in \varepsilon$, $\bm{D}_{w_1,t}$ is similar to $\bm{D}_{w_2,t}$, where $\varepsilon$ is a neighborhood of $w$. Therefore, we expect the network to learn the function: $$\begin{aligned}
\label{func:lstm}
\begin{cases}
F(\bm{D}_{w-1,t+1},\bm{D}_{w,t})=\bm{D}_{w,t+1},\\
F(\bm{D}_{w,1},\bm{D}_{w,t})=\bm{D}_{w+1,1},
\end{cases} &
\begin{split}
t<n^p\\
t=n^p
\end{split}\end{aligned}$$
According to the definition above, we can use a sliding window to slice periods from one week of data. The length of each period depends on the window length $l$ (hours). The distance between each pair of successive periods is determined by the step size $s$ (hours). How to select $l$ and $s$ is tricky, because, besides referring to the historical data $\bm{D}_{w-1,t+1}$, the network needs to learn the pattern $f_{t}$ between two successive periods, $\bm{D}_{w,t}$ and $\bm{D}_{w,t+1}$. We naively present two hypotheses. **a)** For $\forall t_1,t_2 \in \{1,2,..,n^p\}$ and $t_1 \ne t_2$, $f_{t_1} \ne f_{t_2}$. **b)** If the cardinality of $D$ is smaller, the network tends to learn a more high-frequency pattern and has a greater possibility of overfitting. Conversely, the network tends to learn a more low-frequency pattern and has a greater possibility of underfitting. We now detail the effects of $l$ and $s$.
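A sliding-window slicing of one week of hourly densities can be sketched as follows; the wrap-around indexing is our assumption, chosen so that the number of periods equals $24*7/s$ as stated in the next paragraph.

```python
import numpy as np

def slice_periods(week_series, l, s):
    """Slice one week of hourly densities (168 values) into periods of length l
    hours, taking one period every s hours and wrapping around the week."""
    n = len(week_series)                                   # 168 hours in a week
    return [np.take(week_series, range(t, t + l), mode="wrap") for t in range(0, n, s)]

week = np.arange(168.0)                                    # toy hourly densities
print(len(slice_periods(week, l=4, s=2)))                  # 84 periods for s = 2
```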
![The architecture of our dual network.[]{data-label="fig:lstm-m"}](figs/lstm_model){width="1\linewidth"}
**Step size $s$.** As one week has 7 days and one day has 24 hours, $n^p=24*7/s$. Hence, a smaller $s$ generates more periods. According to hypothesis **a)**, there will then be more different $f$’s for the network to learn, which might cause underfitting. However, a large $s$ brings a long time span between two successive predictions, which decreases the practicality of the application. What’s more, to cover the entire time range, $s$ should not be larger than $l$, i.e., $s \le l$.
**Window length $l$.** Since the statistical data set for every hour has the same cardinality, the cardinality of $D$ is directly determined by $l$. Together with hypothesis **b)**, the impact of $l$ is clear.
Instead of struggling between long and short window lengths and step sizes, we introduce a dual network to seek a balance. The structure is shown in Figure \[fig:lstm-m\]. Two period-slicing strategies are chosen for two independent LSTM [@hochreiter1997long] nets. One strategy chooses $l^1=4, s^1=2$; the other chooses $l^2=12, s^2=6$. Meanwhile, Formula \[func:lstm\] is improved. As there are two period-slicing strategies, the definition of $t$ could be confusing. Let $\tau$ denote a time range in a week. Then, $\bm{D}_{w,\tau}$ denotes the population density data set in week $w$ and time range $\tau$. Let $\tau^1, \tau^2$ be the time ranges of the latest periods of the two strategies and $\tau^3$ be the time range we need to predict. Formula \[func:lstm\] is improved as: $$\label{func:dual}
F_{fc}(F_{LSTM}^1(\bm{D}_{w,\tau^1}),F_{LSTM}^2(\bm{D}_{w,\tau^2}),\bm{D}_{w-1,\tau^3})=\bm{D}_{w,\tau^3}$$
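As an illustration of Formula \[func:dual\], a minimal PyTorch sketch of the dual network is given below. The framework, hidden sizes, and the simple concatenation-plus-linear fusion are our own choices for exposition, not the exact deployed architecture.

```python
import torch
import torch.nn as nn

class DualLSTM(nn.Module):
    """Two LSTMs read the short (l=4 h) and long (l=12 h) recent periods; a fully
    connected layer fuses their final hidden states with last week's densities
    for the target range to produce the prediction."""
    def __init__(self, hidden=32, n_out=24):
        super().__init__()
        self.lstm_short = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.lstm_long = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(2 * hidden + n_out, n_out)

    def forward(self, d_short, d_long, d_lastweek):
        # d_short: (batch, 4, 1); d_long: (batch, 12, 1); d_lastweek: (batch, n_out)
        _, (h1, _) = self.lstm_short(d_short)
        _, (h2, _) = self.lstm_long(d_long)
        fused = torch.cat([h1[-1], h2[-1], d_lastweek], dim=1)
        return self.fc(fused)

model = DualLSTM()
pred = model(torch.randn(8, 4, 1), torch.randn(8, 12, 1), torch.randn(8, 24))
```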
Parallel Architecture {#sec:parall}
=====================
![The parallel architecture contains two MapReduce jobs. First MapReduce uses *address resolution module* as map and *population analysis module* as reduce. After the shuffle, the logs are grouped by the hospital they belong to and sorted by their time tags. Second MapReduce uses the shuffle to pair the model parameter and the statistics for the *population prediction module*.[]{data-label="fig:parallel-structure"}](figs/parallel-structure){width="0.9\linewidth"}
To obtain the (nearly) real-time population density of all hospitals in Beijing, our application has to process 182GB of data each day. Considering that the number of active mobile users in the daytime is far larger than that in the nighttime, the application requires the capacity to process 20GB of data in one hour. To this end, we designed a parallel architecture based on *Hadoop*. The architecture is composed of two *MapReduce* jobs, as shown in Figure \[fig:parallel-structure\].
Since the population analysis of each hospital is independent, it can be implemented in parallel. We utilize the AR module as *Map* and the PA module as *Reduce*. The only condition is that all data of one hospital have to be processed by the same slave node in the *MapReduce* job, since an individual’s behavior is characterized by all of his logs related to the hospital. In addition, crowd counting also requires all data of the hospital. However, the data obtained from the Baidu LBS log database are sorted by time tag. To resolve this contradiction, we customize *Hadoop*’s *Partition* and *Comparison* functions in the *Shuffle* phase and ensure that all the data belonging to the same hospital are distributed to the same *Reducer* (i.e., slave node) and sorted by time tag.
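The key of this shuffle design is a composite key: partition on the hospital only, sort on (hospital, time tag). A hedged Hadoop-streaming-style sketch in Python is shown below; `resolve_to_hospital` stands for the AR lookup and is an assumed helper, and the actual job uses customized Partition/Comparison classes rather than this simplified code.

```python
import sys

def mapper():
    """AR step: emit `hospital \t time_tag \t payload` so that the shuffle can
    partition on the hospital alone and sort records of one hospital by time."""
    for line in sys.stdin:
        uid, lat, lng, acc, t = line.rstrip("\n").split("\t")
        hospital = resolve_to_hospital(float(lat), float(lng))   # assumed AR helper
        if hospital is not None:
            print(f"{hospital}\t{t}\t{uid}\t{lat}\t{lng}\t{acc}")

def partition(hospital_id, num_reducers):
    """Custom partition: depends only on the hospital, never on the time tag."""
    return hash(hospital_id) % num_reducers
```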
The second *MapReduce* job is for the PDP module. This work is also independent for different hospitals, so we implement it as *Reduce*. As the PDP module for each hospital entails different parameters, the slave nodes have to load the corresponding parameters and history data. Using a similar strategy, the *Map* job simply reads and exports all parameters and history data, while *Hadoop*’s *Partition* and *Comparison* functions in the *Shuffle* phase deliver these data correctly to each *Reducer*.
The first *MapReduce* job runs once per hour and the second *MapReduce* job runs once every two hours, since the smallest step size in Section \[sec:PDP\] is $2$. Note that the *counting lists* and *blacklists* are saved to disk each time the PA job finishes and are reloaded by the next hour’s job. Thus, the PA module can be seen as working continuously.
Implementation {#sec:implementation}
==============
In this section, the implementation details of our application are introduced.
Dataset {#subsec:im-DP}
-------
In this study, we used three crucial datasets, i.e., Baidu LBS request logs, Baidu Points of Interest (POI), and hospitals’ basic information.
**Baidu LBS request logs.** The attributes of Baidu LBS request logs cover *anonymous user ID* (UID), *latitude*, *longitude*, *positioning accuracy*, and *time tag*. The quantity of LBS logs in Beijing every day is on the order of $10^8$. Although this dataset cannot represent the full demography of each hospital (e.g., very young and very old people, or those who do not use smartphones), massive data like this is sufficient to approximate the population density of hospitals, which is the focus of our study.
**Baidu POI.** This dataset includes POI names and boundary coordinates. We gather geographic information such as the boundaries and areas of hospitals from this dataset.
**Hospitals’ basic information.** This dataset contains hospitals’ names, official classes, and the numbers of doctors. We established this dataset ourselves by merging the information collected from three reliable websites[^5].
Model {#subsec:im-model}
-----
In Section \[subsec:AR\] we mentioned that, for fast queries, the mapping between grid IDs and their semantic information is stored. In the application, we use a hash table to store this mapping, and we only store the grids located in a hospital to save space. In this case, as the grids not located in any hospital are far more numerous than those located in one, there might be serious hash collisions, which would make the AR module misclassify logs not located in any hospital. An additional hash function is applied to generate a fingerprint to compensate for this flaw. Although only hospital grids are stored, their number is still enormous. To balance lookup and space efficiency, we utilize the Cuckoo Filter [@fan2014cuckoo], a hash structure with $O(1)$ lookup complexity and more than $95\%$ memory efficiency.
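The fingerprint idea can be sketched as follows. This is a simplified stand-in for the Cuckoo Filter (whose insertion and eviction logic is omitted), written only to illustrate how a short secondary hash rejects colliding non-hospital grids.

```python
import hashlib

def fingerprint(grid_id, bits=16):
    """Short secondary hash stored with each bucket."""
    digest = hashlib.blake2b(repr(grid_id).encode(), digest_size=4).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def lookup(table, grid_id, num_buckets):
    """A lookup succeeds only if both the bucket and the stored fingerprint match,
    so a non-hospital grid colliding into an occupied bucket is almost always rejected."""
    entry = table.get(hash(grid_id) % num_buckets)        # entry = (fingerprint, hospital_id)
    if entry is not None and entry[0] == fingerprint(grid_id):
        return entry[1]
    return None
```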
To obtain a more reliable prediction, we improved $\bm{D}_{w-1,\tau^3}$ in Formula \[func:dual\]: it is replaced by $\bm{\overline{D}}_{\tau^3}$, the average over three months.
Application
-----------
In the application, instead of directly showing the crowd count, we combine the population density data with the hospitals’ basic information, such as hospital area and number of doctors, to provide a more practical and user-friendly population density level, shown as the colored bars in Figure \[fig:app1-1\] and Figure \[fig:app1-2\]. The combination strategy is formulated empirically.
Figure \[fig:app\] shows screenshots of our application. The recommendation list, sorted by the merged information, includes each hospital’s name and level, its outpatient density level, and the distance between the user and the hospital, as Fig. \[fig:app1-1\] and Fig. \[fig:app1-2\] show. Here, the merging strategy is also empirical. In addition, we also provide daily and weekly density-time curves. On the curve, users can check the detailed ratios of outpatients with different residence times at any point. This function is shown in Fig. \[fig:app2-1\] and Fig. \[fig:app2-2\]. The curve in the blue frame denotes the future trend predicted by the PDP module. From these two figures it can be seen that, although the instability of the LBS data makes the curve of long-duration patients noisy, it still provides some suggested guidance. Moreover, by clicking the distance in the rank list, users can also check the path plan and the public transit route to the hospital, as shown in Fig. \[fig:app3-1\] and Fig. \[fig:app3-2\].
Experiment
==========
In this section, we evaluate the quality of the estimation and the prediction of population density separately. Since the true number of outpatients in every hospital in Beijing is hard to count, we qualitatively evaluate the performance of our population analysis approach by comparing the density-time curves with those of Wechat City Heat Map. For the accuracy of population density prediction, since the input of the PDP module is the statistics estimated by the PA module, we use the latter as ground truth.
Population Density Analysis {#population-density-analysis}
---------------------------
Figure \[fig:wechat\] shows the density-time curves provided by Wechat City Heat Map. The result of our method for the same hospital at the same time can be seen in Figure \[fig:app2-1\]. Wechat City Heat Map can only show the population density at one point, and the results for two nearby points in the same hospital are quite different, while our result is naturally generated over the whole area of the hospital. Furthermore, our result provides a finer-grained density of outpatients instead of that of the entire population. Therefore, our approach can present more instructive information in practice.
Population Density Prediction {#population-density-prediction}
-----------------------------
Since our aim is to recommend proper hospitals to potential patients, we are also concerned with the rank of the predicted values. We therefore estimate the quality from two perspectives: the *Spearman Rank Correlation Coefficient* and the *Relative Error* of the prediction. The *Spearman Correlation Coefficient* is defined as the Pearson correlation coefficient between the ranked variables [@myers2010research]. The definition of *Relative Error* is: $$\delta_{y,g}=\frac{|y-g|}{g}$$ where $y$ is the prediction and $g$ is the statistical ground truth. Table \[tab:ex\] shows the results.
       mean                 variance
------ -------------------- ----------
SRCC   0.857$^\uparrow$     0.071
RE     0.224$_\downarrow$   0.098
: SRCC indicates the *Spearman Rank Correlation Coefficient* and RE indicates the *Relative Error*. The results are averaged across 460 predictions over 113 hospitals. Higher SRCC and lower RE are preferred, as indicated by the arrows in the table.
\[tab:ex\]
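For reference, both metrics can be computed per prediction as in the following sketch (using SciPy's `spearmanr`; non-zero ground truth is assumed for the relative error).

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(pred, truth):
    """Spearman rank correlation and mean relative error of one prediction."""
    srcc, _ = spearmanr(pred, truth)
    rel_err = np.mean(np.abs(pred - truth) / truth)   # assumes truth > 0
    return srcc, rel_err
```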
Considering that one prediction includes several hours of the population density trend, we also evaluate its internal stability. In Formula \[func:dual\], suppose that $\tau^3$ includes $n^h$ hours. Let $h$ be the index of an hour in the period $\tau^3$, $h\in\{1,2,..,n^h\}$, and $y_h$ be the predicted population density of hour $h$. A prediction is denoted by $\bm{Y}=\{y_1,y_2,..,y_{n^h}\}$. The corresponding ground truth is $\bm{G}=\{g_1,g_2,..,g_{n^h}\}$. Let the current time be $\dot{t}$ and the time of $y_h$ be $\dot{t}'$. Normally, $\tau^3$ starts from $\dot{t}+1$. Thus, it is obvious that $h=\dot{t}'-\dot{t}$. As $h$ increases, we evaluate whether $\delta_h$ increases, in other words, whether the precision declines. Here $\delta_h$ is the abbreviation of $\delta_{y_h,g_h}$.
![The results of the experiment on the PDP module. The $x$-axis represents $h$. The relative error $\delta_h$ is sliced into 17 ranges represented by 17 different colors from cyan to red.[]{data-label="fig:lstm-ex"}](figs/lstm-ex){width="1\linewidth"}
In the experiment, we set $n^h=24$. Figure \[fig:lstm-ex\] shows the result. For better illustration, we use different colors to represent different ranges of $\delta_h$, and the area of each color indicates the ratio of results within the corresponding precision range. From the figure, it can be seen that about $50\%$ of the data has relative error $\delta \le 0.1$. Furthermore, as $h$ increases, our model shows only a slight decrease in performance.
Case Study {#sec:cs}
==========
To find out the variation patterns of population density, we analyze two representative Top-Class hospitals in Beijing. Figure \[fig:density-time-d\] illustrates Tuesday’s outpatient density-time curve at Peking University Third Hospital, and Figure \[fig:density-time-w\] is the weekly outpatient density-time curve at Peking University People’s Hospital.
As shown in Figure \[fig:density-time\], the population density has an obvious daily cycle with a clear valley between 11 a.m. and 1 p.m. We speculate that this is related to the hospitals’ working hours. Most of the hospitals have a noon break from 12 p.m. to 1:30 p.m. Generally, hospitals stop taking appointments for the morning at 11:30 a.m., which means that the number of people waiting for treatment in the morning begins to decrease at around 11 a.m. Then people in the hospital start leaving for lunch. After the morning working time has finished, the population density bottoms out. Starting from 1 p.m., patients return to the hospital waiting for doctors or registration, leading to the gradual increase of population density. Another drop of population density starts from 3 p.m., as shown in Figure \[fig:density-time-d\]. The reasons are that [**[a)]{}**]{} most of the hospitals stop taking appointments at half past 4 p.m. Since registration does not have a noon break and the number of appointments for one day is limited, appointments for many popular departments are sold out at about 3 or 4 p.m. [**[b)]{}**]{} Hospital outpatient clinics are usually closed at half past 5 p.m. Patients may not have enough time for some medical examinations if they arrive after 3 o’clock.
In the long term, it can be seen from Figure \[fig:density-time-w\] that the number of patients decreases from Tuesday to Friday. Although this pattern is not strictly followed by all the hospitals, Tuesday and Friday are still the peak and the valley of the crowd for the majority of the hospitals. Several potential underlying causes are that [**[a)]{}**]{} many medical examinations are not provided over the weekend. Thus, appointments for these examinations made on the weekend or on Friday are often postponed to Monday or Tuesday. [**[b)]{}**]{} Meanwhile, there are many non-local patients seeking outstanding doctors in Beijing. They often prefer to go to the hospital in the first half of the week because they might have to make appointments for some medical examinations; coming early makes it more likely that a comprehensive examination can be completed within the same week, which saves accommodation costs. A direct implication of these two factors is that hospitals should be the most congested on Monday. Therefore, many people steer clear of going to the hospital on Monday, so that Tuesday becomes the most crowded day. Overall, for the Top-Class hospitals, the population densities are more reasonable in the latter part of the week.
Conclusion {#sec:conclu}
==========
We presented a novel approach for hospital population density estimation based on LBS big data. By extracting the LBS request logs around the hospitals, it can effectively estimate the current population density and predict its trend in each hospital based on big data analytics. We expect our application of the approach to guide outpatients in choosing appropriate hospitals based on different criteria. Since our approach circumvents the data collection issue, it can be freely applied on a large spatial scale such as a city, a province, or even the entire country.
The proposed approach can directly benefit from more advanced recommendation strategies and more accurate and stable LBS data. In addition, we use Hadoop for parallel computing due to the limitations of the experimental environment; better speed-up performance might be achieved by using Spark in the future. More broadly, the use of empirical thresholds and rules limits the approach. It is worthwhile to develop more flexible strategies to expand the capacity of the approach and to explore its potential for balancing human distributions in many other similar scenarios, such as resorts, gyms, supermarkets and other public areas. It is also imperative to develop an effective evaluation approach for such an extensive crowd counting problem.
[Hanqing Chao]{} Biography text here.
[Yuan Cao]{} Biography text here.
[Junping Zhang]{} Biography text here.
[Fen xia]{} Biography text here.
[Ye Zhou]{} Biography text here.
[Hongming Shan]{} Biography text here.
[^1]: This work is supported in part by National Natural Science Foundation of China (NSFC) (Grant No. 61673118) and in part by Shanghai Pujiang Program (Grant No. 16PJD009).
[^2]: The Chinese government classifies all hospitals in China into three levels and each level is divided into three subclasses, i.e., 9 classes in total.
[^3]: In our application, we use logs located in Beijing, P.R. China, which is in the order of $10^8$ per day. The exact amount of these logs is sensitive information for Baidu co. which cannot be revealed.
[^4]: An isotropic bivariate normal distribution, $\bm{x}\sim\mathcal{N}(\bm{\mu}, \sigma\bm{I})$, has the property that $\mathbf{Pr}(-3\sigma \le \bm{x}-\bm{\mu} \le 3\sigma)>0.99$ componentwise. So setting $\sigma$ as $r/3$ properly reflects the definition of $r$.
[^5]: http://www.xywy.com/, http://www.haodf.com/, and http://www.39.net/
---
abstract: 'Recently a novel type of epithelial cell has been discovered and dubbed the “scutoid”. It is induced by curvature of the bounding surfaces. We show by simulations and experiments that such cells are to be found in a dry foam subjected to this boundary condition.'
author:
-
title: 'Demonstration and interpretation of “scutoid” cells in a quasi-2D soap froth'
---
quasi-2D; foams; scutoid; epithelial cells
Introduction
============
Recently G[ó]{}mez-G[á]{}lvez [*et al.*]{} [@gomez2018scutoids] have described epithelial cells of a previously unreported form which they have called *scutoid*; they appear when the bounding surfaces are *curved*. The distinguishing feature of such a cell is a triangular face attached to one of the bounding surfaces. Here we offer a simple illustration of this phenomenon, which is derived from the physics of foams [@weaire2001physics], consisting of a computer simulation together with preliminary experimental observations.
In an ideal dry foam, bubbles enclose gas (which is treated as incompressible) and the energy is proportional to their total surface area. Alternatively, the soap films may be considered to be in equilibrium under a constant surface tension and the gas pressure of the neighbouring cells. Plateau’s rules [@plateau1873statique], more than a century old, place restrictions on the topology of a *dry* foam (one of negligible liquid content), which is the only case considered here.
From the earliest intrusion of physics into biology, this elementary soap froth model has attracted attention to account for the shape and development of cells [@thompson1942growth; @dormer1980fundamental]. More sophisticated attempts to adopt it to that purpose persist today [@merks2005cell; @bi2014energy; @graner2017forms]. In the present context we show that the model largely accounts for the appearance of scutoids, in very simple and semi-quantitative terms, broadly consistent with the description in the original paper [@gomez2018scutoids].
Topology of dry foams
=====================
The relevance of foams to biology is apparent from the pioneering work of the botanist Edwin Matzke [@matzke1946three]. Inspired by the resemblance in shape between bubbles in foam and cells in tissues, Matzke sought to understand the forces that may be common to both. His approach was to painstakingly and exhaustively catalogue bubble shapes observed in a dry monodisperse foam, confined within a cylindrical jar. Matzke distinguished between peripheral bubbles (i.e. bubbles in contact with the walls of the cylindrical jar) and central bubbles (i.e. bubbles inside the bulk foam). Amongst the peripheral bubbles are listed two scutoids: the $(1,3,3,1)$ (see Figure 9-8 of [@matzke1946three]) and $(1,4,2,1,1)$ polyhedrons (as identified in Matzke’s notation). Not a single triangular face was found amongst the central (i.e. bulk) bubbles.
Quasi-2D foam sandwich
======================
Cyril Stanley Smith [@Smith52] first introduced the experimental quasi-2D foam that is formed between two glass plates. The plates are close enough together that all bubble cells span both boundaries, so that there are no internal bubbles and the internal soap films meet the glass plates at right angles (see Figure \[quasi\]). The quasi-2D foam between flat parallel plates is often taken as the experimental counterpart of the ideal 2D foam - which consists of polygonal 2D cells, with (in general curved) edges meeting three at a time (only) at $120^\circ$. Such a finite foam sandwich presents *two* such patterns on its two boundaries, and indeed on any plane taken parallel to them. However, if the plate separation is increased, this structure is overtaken by an instability, described and analysed by Cox [*et al.*]{} [@cox2002transition], in which individual cells cease to span the two plates. This instability is not directly relevant to scutoid formation but places limitations on experiment and theory.
![A quasi-2D foam showing a single layer of bubbles confined between two flat parallel glass plates (plate separation 8mm, average bubble diameter 2-3cm). Internal films meet the plates at right angles. The polygonal cells on both glass plates are identical.[]{data-label="quasi"}](15.png){width="0.75\columnwidth"}
The novel element that is brought into consideration by the work of G[ó]{}mez-G[á]{}lvez [*et al.*]{} is the introduction of *curved* boundaries which may be represented by two concentric cylinders or a portion thereof. While there has been some work on the effects of curvature of one or both plates [@roth2012coarsening; @mughal2017curvature], it did not address the case considered by G[ó]{}mez-G[á]{}lvez [*et al.*]{}, which consists of two concentric boundaries. As the separation between the two cylinders is increased, the 2D patterns on the inner and outer surfaces become distorted, the inner one being compressed in the circumferential direction with respect to the outer one. Eventually, this should lead to the vanishing of a 2D cell edge, and hence to a topological change, as in Figure \[T1\]. This is the so-called T1 process [@weaire1984soap]. It necessarily entails the creation of a *scutoid* feature within the bulk of the foam (as illustrated in sections \[s:simulations\] and \[s:expts\]). However, its appearance may be only transitory, as it may provoke a similar effect on the other surface, in a double-$T1$ process that restores the original columnar structure. The geometry required by Plateau’s rules makes it obvious that this must be the case if the gap between the cylinders is very small. Increasing the gap is expected to allow stable scutoids to persist, provided we do not encounter the other type of instability mentioned above.
![A schematic of a T1 transition in an ideal 2D foam [@weaire1984soap]. The edge shared between bubbles A and B gradually shrinks and vanishes, the resulting fourfold vertex is in violation of Plateau’s laws and the system transitions to a new arrangement. As a result, bubbles A and B are no longer neighbours, while C and D (which were previously unconnected) now share a boundary.[]{data-label="T1"}](t1process.png){width="0.8\columnwidth"}
These arguments leave room for doubt as to whether such scutoid features can really be found in the foam sandwich. Both simulations and experiments, described in the following sections, have yielded positive results.
Simulations {#s:simulations}
===========
![Cells in a Surface Evolver simulation of a polydisperse foam confined between two concentric cylinders. (a) In the initial state the 2D pattern on both boundaries is purely hexagonal (only the pattern on the substrate is shown). Red and blue bubbles are not in contact while the green bubble is in contact with a fourth neighbouring bubble (not shown for clarity). (b) The foam after a T1 transition on the substrate, resulting in four stable scutoid cells. The pattern on the substrate contains two five-sided and two seven-sided regions, while the pattern on the superstrate remains purely hexagonal. The cells are shown slightly separated for clarity. (c) The two types of scutoids cells (pentagonal and heptagonal) are shown separately. (d) A combined view showing the scutoids and the surrounding foam cells. []{data-label="SEscutoid"}](scutoid_four_images.jpg){width="0.95\columnwidth"}
As in the simulations of [@gomez2018scutoids], we start from a Voronoi partition of the gap between two concentric cylinders, to give a collection of hexagonal prismatic cells. This structure is imported into the Surface Evolver software [@brakke1992surface], which permits the minimization of surface energy (here equivalent to surface area, as in the ideal foam model) subject to fixed cell volumes. We employ a periodic boundary condition in the direction of the axis of the cylinders to reduce the effect of the finite size of the simulation. Cell volumes are assigned fixed values within a restricted range so that the initial structure is polydisperse but still hexagonal. In the example shown in Fig \[SEscutoid\]a, the cylinder has axis length 5.2 units, the cylinder radii are 2.8 and 4.3 units and there are 144 cells. To allow the cell walls to develop realistic curvature, we tessellate each face with small triangles and perform a standard Surface Evolver minimization of the surface area.
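As a hedged illustration of the initial condition (using the cylinder dimensions quoted above), the seed points for such a Voronoi partition could be generated as below; the Voronoi construction itself and the import into the Surface Evolver are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, length, r_in, r_out = 144, 5.2, 2.8, 4.3      # values quoted in the text

# Random seed points in the gap between the two concentric cylinders,
# area-uniform in radius and periodic along the cylinder axis (z).
z = rng.uniform(0.0, length, n_cells)
phi = rng.uniform(0.0, 2 * np.pi, n_cells)
r = np.sqrt(rng.uniform(r_in**2, r_out**2, n_cells))
seeds = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])
```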
In this preliminary exploration topological changes were triggered using the Surface Evolver software. A number of stable scutoids were identified of which one example is shown in Figure \[SEscutoid\]. In the continuation of this work we expect to map out the parameter space in which such stable scutoids are to be found.
Experiments with soap bubbles {#s:expts}
=============================
We performed preliminary experiments with soap bubbles between curved surfaces, using a glass cylinder of diameter $21$mm as a substrate and a hollow half cylinder (made from perspex) with inner diameter $39$mm as a superstrate. The bubbles (approximate equivalent sphere diameter 8 mm) were produced using a simple aquarium pump with flow control and commercial dish-washing solution. Rather than placing the two cylinders upright into the vessel containing the solution, we placed them on their long axis, creating an approximately 7mm wide gap between them, which was initially about half-way filled with liquid. We then used a syringe needle attached to the pump to blow gas into this gap, leading to the formation of a quasi-2D foam sandwich. By reducing the water level we arrive at bubbles which are in contact with both cylinder surfaces, some of them forming scutoids, see Figure \[scutoid-photo\]. The present process involves a measure of trial and error: repeatedly raising and lowering the water level allows for repeated bubble rearrangements, which increases the chance of finding scutoids.
![Photograph of scutoids in a quasi-2d foam sandwich. The bubble on the left features a hexagon in contact with the outer cylinder and a pentagon in contact with the inner cylinder while the bubble on the right shows a heptagon on the outer and a hexagon on the inner cylinder. Also visible is the small triangular face separating these two bubbles. (diameter of inner cylinder $21$mm, internal diameter of hollow outer cylinder $39$mm, spacing about $7$mm, approximate equivalent sphere diameter of the bubbles 8 mm.) []{data-label="scutoid-photo"}](scutoid-photo.jpg){width="0.8\columnwidth"}
Conclusion
==========
Both simulation and experiment have confirmed that stable scutoid configurations are to be found in a dry foam sandwich between cylindrically curved faces. It remains for future work to identify the conditions for this in terms of geometrical parameters.
The foam model is well established in the description of biological cells and the processes by which they change their arrangements, but is at best a rough first approximation. In the present case we have noted that epithelial cells may be relatively elongated. If greater realism is called for, further energy terms may be added, stiffening the cell walls.
Acknowledgements
================
This research was supported in part by a research grant from Science Foundation Ireland (SFI) under grant number 13/IA/1926. A. Mughal acknowledges the Trinity College Dublin Visiting Professorships and Fellowships Benefaction Fund. We thank B. Haffner for providing the photograph of Figure \[quasi\].
[10]{}
P. G[ó]{}mez-G[á]{}lvez, P. Vicente-Munuera, A. Tagua, C. Forja, A.M. Castro, M. Letr[á]{}n, A. Valencia-Exp[ó]{}sito, C. Grima, M. Berm[ú]{}dez-Gallardo, [Ó]{}. Serrano-P[é]{}rez-Higueras, F. Cavodeassi, S. Sotillos, M.D. Mart[í]{}n-Bermudo, A. M[á]{}rquez, J. Buceta, and L.M. Escudero, Nat. Commun. 9 (2018) p. 2960.
D. Weaire and S. Hutzler, *The physics of foams*, Clarendon Press, Oxford, 1999.
J.A.F. Plateau, *Statique Expérimentale et Théorique des Liquides soumis aux seules Forces Moléculaires*, Gauthier-Villars, Paris, 1873.
D.W. Thompson, *On growth and form*, Cambridge University Press, 1917.
K. Dormer, *Fundamental tissue geometry for biologists*, Cambridge University Press, Cambridge, 1980.
R.M. Merks and J.A. Glazier, Phys. A 352 (2005) p. 113.
D. Bi, J.H. Lopez, J. Schwarz, and M.L. Manning, Soft Matter 10 (2014) p. 1885.
F. Graner and D. Riveline, Development 144 (2017) p. 4226.
E.B. Matzke, Am. J. Bot. 33 (1946) p. 58.
C. Smith, Metal Interfaces (ASM Cleveland) (1952) p. 65.
S. Cox, D. Weaire, and M.F. Vaz, Eur. Phys. J. E 7 (2002) p. 311.
A. Roth, C. Jones, and D.J. Durian, Phy. Rev. E 86 (2012) p. 021402.
A. Mughal, S. Cox, and G. Schr[ö]{}der-Turk, Interface focus 7 (2017) p. 20160106.
D. Weaire and N. Rivier, Contemp. Phys. 25 (1984) p. 59.
K.A. Brakke, Exp. Math. 1 (1992) p. 141.
---
abstract: 'The propagation of waves through transmission eigenchannels in complex media is emerging as a new frontier of condensed matter and wave physics. A crucial step towards constructing a complete theory of eigenchannels is to demonstrate their spatial structure in any dimension and their wave-coherence nature. Here, we show a surprising result in this direction. Specifically, we find that as the width of diffusive samples increases transforming from quasi one-dimensional ($1$D) to two-dimensional ($2$D) geometry, notwithstanding the dramatic changes in the transverse (with respect to the direction of propagation) intensity distribution of waves propagating in such channels, the dependence of intensity on the longitudinal coordinate does not change and is given by the same analytical expression as that for quasi-$1$D. Furthermore, with a minimal modification, the expression describes also the spatial structures of localized resonances in strictly $1$D random systems. It is thus suggested that the underlying physics of eigenchannels might include super-universal key ingredients: they are not only universal with respect to the disorder ensemble and the dimension, but also of $1$D nature and closely related to the resonances. Our findings open up a way to tailor the spatial energy density distribution in opaque materials.'
author:
- Ping Fang
- Chushun Tian
- Liyi Zhao
- 'Yury P. Bliokh'
- Valentin Freilikher
- Franco Nori
title: 'Super-universality of eigenchannel structures and possible optical applications'
---
Introduction {#sec:introduction}
============
An unprecedented degree of control reached in experiments on classical waves is turning the dream of understanding and controlling wave propagation in complex media into reality [@Rotter17]. Central to many ongoing research activities is the concept of transmission eigenchannel [@Mosk08; @Choi12; @Choi11; @Mosk12; @Genack12; @Tian15; @Cao15; @Lagendijk16; @Cao16; @Yamilov16; @Yilmaz16; @Cao17; @Lagendijk18; @Cao18; @Tian18] (abbreviated as eigenchannel hereafter). Loosely speaking, the eigenchannel refers to a specific wave field, which is excited by the input waveform corresponding to the right-singular vector [@Dorokhov82; @Dorokhov84; @Mello88] of the transmission matrix (TM) $\boldsymbol{t}$. When a wave is launched into a complex medium it is decomposed into a number of “partial waves”, each of which propagates along an eigenchannel and whose superposition gives the field distribution excited by the incoming wave. Thus, in contrast to the TM, which treats media as a black box and has been well studied [@Beenakker97], eigenchannels are much less explored, in spite of the fact that they provide rich information about the properties of wave propagation in the interior of the media. The understanding of the spatial structures of these channels can provide a basis for both fundamentals and applications of wave physics in complex media.
So far, the emphasis has been placed on the structures of eigenchannels in quasi $1$D disordered media [@Genack12; @Tian15; @Cao15; @Lagendijk16; @Cao16; @Yamilov16]. Yet, measurements of the high-dimensional spatial resolution of eigenchannels have come within experimental reach only very recently [@Lagendijk18; @Cao18]. An intriguing localization structure of eigenchannels in the transverse direction has been observed in both real and numerical experiments for a very wide $2$D diffusive slab [@Cao18; @Tian18]. In addition, numerical results [@Tian18] have suggested that in this kind of special high-dimensional media, even in a single disorder configuration, the eigenchannel structure can carry some universalities that embrace quasi $1$D eigenchannels as well. Here we study the evolution of the eigenchannels in the crossover from low to higher dimension, which so far has not been explored. This not only provides a new angle for the fundamentals of wave propagation in disordered media, but is also practically important for both experiments and applications of the eigenchannel structure in higher dimension.
![Simulations show that as the width $W$ increases, so that a quasi-$1$D ($N=5$) waveguide turns into a $2$D slab ($N=800$), the eigenchannel structure $|E_\protect\tau(x,y)|^2$ in a single disordered medium undergoes dramatic changes in the transverse $y$ direction. For $N=5$, the transverse structure is always delocalized (a). For $N=800$, the transverse structure exhibits very rich localization behaviors for a fixed disorder configuration: for example, the structure exhibits one, two and three localization peaks in (b), (c) and (d), respectively. Moreover, the transverse structures of eigenchannels are qualitatively the same as the structures of $|v_n(y)|^2$. For all panels, the eigenvalue $\protect\tau\approx 0.6$ and $L=50$.[]{data-label="fig:2"}](localization_d.pdf){width="8.7cm"}
Another motivation of the present work comes from a recent surprising finding [@Bliokh15] regarding a seemingly unrelated object, the resonance in layered disordered samples which, from the mathematical point of view, are strictly $1$D systems. The resonance refers to a local maximum in the transmittance spectrum [@Freilikher04], which has a natural connection to Anderson localization in $1$D [@Anderson58; @Gershenshtein59] and resonators in various systems, ranging from plasmonics to metamaterials [@Freilikher03]. Despite the conceptual difference between the resonance and the eigenchannel, it was found [@Bliokh15] that the distribution of resonant transmissions in the (i) $1$D *Anderson localized* regime and (ii) the transmission eigenvalues in quasi-$1$D *diffusive* regime are exactly the same, namely, the bimodal distribution [@Dorokhov82; @Mello88]. However, the mechanism underlying this similarity remains unclear. It is of fundamental interest to understand whether this similarity is restricted only to transmissions, or can be extended to spatial structures.
In this work we show that in a diffusive medium, as the width of a sample (and consequently the number of channels) increases so that the sample crosses over from quasi $1$D to higher dimension, eigenchannels exhibit transverse structures much richer than those found previously [@Cao18; @Tian18]. In particular, given a disorder configuration, we can see not only the previously found [@Cao18; @Tian18] localization structure, but also a necklace-like structure composed of several localization peaks. Most surprisingly, notwithstanding the appearance of such diverse transverse structures in the dimension crossover, the longitudinal structure of eigenchannels, namely, the depth profile of the energy density (integrated over the cross section), remains unaffected, and is given by precisely the same expression as that for quasi-$1$D found in Ref. . We also study the spatial structures of resonances in strictly $1$D systems. We find that they have a universal analytic expression, which is similar to that for eigenchannel structures, with only a minimal modification. Our findings may serve as a proof of the conjecture [@Choi11] of the eigenchannel structure–Fabry-Perot cavity analogy.
The remainder of this paper is organized as follows. In Sec. \[sec:structure\], we introduce some basic concepts of eigenchannels and resonances. In Sec. \[sec:eigenchannel\_structure\], we study in detail how the eigenchannel structure in a diffusive medium evolves as the medium crosses over from quasi-$1$D to a higher-dimensional slab geometry. To be specific, throughout this work we focus on $2$D samples. In Sec. \[sec:resonance\_structure\], we study in detail the spatial structure of $1$D resonances. In Sec. \[sec:applications\], possible optical applications are discussed. In Sec. \[sec:discussions\], we conclude and discuss the results.
![Example of a transmittance spectrum $\mathcal{T}(\Omega)$ (top) and the resonance structure ${\tilde I}_{\mathcal{T}_n}(x)$ corresponding to the resonant frequency $\Omega_n$ (bottom).[]{data-label="fig:1"}](resonance.pdf){width="8.7cm"}
Eigenchannel and resonance: basic concepts {#sec:structure}
==========================================
To introduce the concept of eigenchannels [@Rotter17; @Choi11; @Tian15], we consider the transmission of a monochromatic wave (with circular frequency $\Omega$) through a rectangular ($0\leq x\leq L,0\leq y\leq W$) diffusive dielectric medium bounded in the transverse ($y$) direction by reflecting walls at $y=0$ and $y=W$. For $W\gtrsim L$ ($W\ll L$) the medium geometry is $2$D (quasi $1$D). The wave field $E(x,y)$ satisfies the Helmholtz equation (the velocity of waves in the background is set to unity), $$\begin{aligned}
\label{eq:6}
\left\{\partial_x^2 + \partial_y^2+\Omega^2\left[1 + \delta \epsilon (x,y)\right]\right\}E(x,y)=0,\end{aligned}$$ where $\delta \epsilon (x,y)$ is a random function, which represents the fluctuations of the dielectric constant inside the sample, and equals zero at $x<0$ and $x >L$. To study the evolution of the eigenchannel structure in the crossover from quasi-$1$D to $2$D, we increase $W$ and keep $L$, $\Omega$ and the strength of disorder fixed.
The incoming and transmitted current amplitudes are related to each other by the transmission matrix $\boldsymbol{t}\equiv \{t_{ab}\}$, where $a,b$ label the ideal \[i.e., $\delta \epsilon (x,y)=0$\] waveguide modes $\varphi_{a}(y)$. The matrix elements are $$\label{eq:1}
t_{ab}=-i\sqrt{\tilde v_a\tilde v_b}\; \langle x=L,a|G|x^{\prime }=0,b\rangle,$$ where $G$ is the retarded Green’s function associated with Eq. (\[eq:6\]), and $\tilde v_a$ is the group velocity of mode $a$.
Since the matrix $\boldsymbol{t}$ is non-Hermitian, we perform its singular value decomposition, i.e., $\boldsymbol{t}=\sum_{n=1}^N \boldsymbol{u}_n \sqrt{\tau_n} \boldsymbol{v}_n^\dagger$ to find the singular value $\sqrt{\tau_n}$ and the corresponding right (left)-singular vector $\boldsymbol{v}_n$ ($\boldsymbol{u}_n$) normalized to unity. The input waveform $\boldsymbol{v}_n$ uniquely determines the $n$th eigenchannel, over which radiation propagates in a random medium [@Rotter17; @Choi11; @Tian15], and $\tau_n$ gives the transmission coefficient of the $n$th eigenchannel and is also called the transmission eigenvalue. The total transmittance is given by $\sum_n\tau_n$. Moreover, many statistical properties of transport through random media, such as the fluctuations and correlations of conductance and transmission, may be described in terms of the statistics of $\tau_n$ [@Tian15].
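As a concrete illustration of this step (not part of the original analysis), the decomposition can be carried out with standard linear-algebra routines; in the Python sketch below the matrix `t` is only a random placeholder and all variable names are ours:

```python
import numpy as np

# Placeholder transmission matrix t_{ab}; in practice it comes from a wave
# simulation or from a measurement of the sample.
N = 100
rng = np.random.default_rng(0)
t = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

# Singular value decomposition  t = sum_n u_n sqrt(tau_n) v_n^dagger.
u, s, vh = np.linalg.svd(t)      # columns of u: u_n;  rows of vh: v_n^dagger
tau = s**2                       # transmission eigenvalues tau_n
v = vh.conj().T                  # columns: right-singular vectors v_n

total_transmittance = tau.sum()  # sum_n tau_n
v_1 = v[:, 0]                    # input waveform exciting the first eigenchannel
```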
To find the spatial structure of eigenchannels, we replace $x=L$ in Eq. (\[eq:1\]) by an arbitrary $x\in [0,L)$, i.e., $$\begin{aligned}
\label{eq:2}
t_{ab}\rightarrow&t_{ab}(x)\equiv -i\sqrt{\tilde v_a\tilde v_b}\; \langle
x,a|G|x^{\prime }=0,b\rangle.\end{aligned}$$ This gives the field distribution inside the medium, $$\label{eq:4}
\boldsymbol{E}_{\tau_n}(x)\equiv\{E_{na}(x)\}=\boldsymbol{t}(x)\boldsymbol{v}_n,$$ excited by the input field $\boldsymbol{v}_n$. Changing from the ideal waveguide mode ($\varphi_a$) representation to the coordinate $(x,y)$ representation gives a specific $2$D spatial structure, namely, the energy density profile: $$\label{eq:3}
|\boldsymbol{E}_{\tau_n}(x,y)|^2=\left|\sum_{a=1}^N
E_{na}(x)\varphi_a^*(y)\right|^2,$$ which defines the $2$D eigenchannel structure associated with the transmission eigenvalue $\tau_n$. Examples of this $2$D structure are given in Fig. \[fig:2\]. Integrating Eq. (\[eq:3\]) over the transverse coordinate $y$ we obtain the depth profile of the energy density of the $n$th eigenchannel, $$\label{eq:5}
w_{\tau_n}(x)\equiv \int dy|\boldsymbol{E}_{\tau_n}(x,y)|^2,$$ a key quantity to be addressed below. Note that, in the definitions of (\[eq:3\]) and (\[eq:5\]), the frequency $\Omega$ is fixed.
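For completeness, the construction of the $2$D eigenchannel structure and of its depth profile from $\boldsymbol{t}(x)$ and $\boldsymbol{v}_n$ can be sketched as follows (an illustrative Python fragment under our own conventions; the stack of matrices $\boldsymbol{t}(x)$ and the waveguide modes $\varphi_a(y)$ are assumed to be available from the simulation):

```python
import numpy as np

def eigenchannel_structure(t_x, v_n, phi, dy):
    """2D eigenchannel profile |E_{tau_n}(x,y)|^2 and its depth profile w_{tau_n}(x).

    t_x : (Nx, N, N) array, the matrices t(x) of Eq. (eq:2) at each depth x.
    v_n : (N,) right-singular vector of the full transmission matrix.
    phi : (N, Ny) array, ideal waveguide modes phi_a(y) on the transverse grid.
    dy  : transverse grid spacing used for the integral over y.
    """
    E_na = np.einsum('xab,b->xa', t_x, v_n)   # mode amplitudes E_{na}(x), Eq. (eq:4)
    E_xy = E_na @ phi.conj()                  # field in the (x, y) representation
    intensity = np.abs(E_xy) ** 2             # |E_{tau_n}(x, y)|^2, Eq. (eq:3)
    w_x = intensity.sum(axis=1) * dy          # depth profile w_{tau_n}(x), Eq. (eq:5)
    return intensity, w_x
```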
![Simulation results of the $x$-dependent participation ratio $\protect\xi_n(x)$ for different values of $N$.[]{data-label="fig:9"}](IPR_eigenchannel.pdf){width="8.7cm"}
![image](localization_probe_a.pdf){width="16.40cm"}
![image](localization_probe_b.pdf){width="16.40cm"}
To proceed, we present a brief review of resonances in media with $1$D disorder (cf. Fig. \[fig:1\]). For a detailed introduction we refer to Ref. . In the strictly $1$D case, the wave field $E_{\Omega}(x)$ satisfies $$\begin{aligned}
\label{eq:7}
\left\{\partial_x^2 + \Omega^2\left[1 + \delta \epsilon (x)\right]\right\}E_{\Omega}(x)=0,\end{aligned}$$ where $\delta \epsilon (x)$ represents the fluctuation of the dielectric constant, and, as in Eq. (\[eq:6\]), the wave velocity in the background is set to unity. For each solution of Eq. (\[eq:7\]) with a given $\Omega$ there is a specific transmittance $\mathcal{T}(\Omega)$. We define a resonance as a local maximum $\mathcal{T}(\Omega_n)\equiv \mathcal{T}_n$ of the transmittance spectrum $\{\mathcal{T}(\Omega)\}$, where $\Omega_n$ is the resonant frequency. The energy density of the field at the resonant frequency is defined as the resonance structure: $$\label{eq:8}
{\tilde I}_{\mathcal{T}_n}(x)\equiv |E_{\Omega_n}(x)|^2,$$ another key quantity to be addressed below. Importantly, in contrast to the eigenchannel structure, Eq. (\[eq:3\]), for which $\Omega$ is fixed, the resonance structures are obtained by sampling $\Omega$ so that resonances can appear.
Below we will show that although the eigenchannel in $2$D media and the resonance in $1$D media are quite different physical entities, their energy density spatial distributions manifest a rather surprising similarity.
Eigenchannel structure in dimension crossover {#sec:eigenchannel_structure}
=============================================
In this section, we study numerically the evolution of eigenchannel structures in the crossover from a quasi-$1$D ($L\gg W$) diffusive medium to a wide ($W\gtrsim L$) $2$D diffusive slab. We will study the energy density profiles both in $2$D \[Eqs. (\[eq:4\]) and (\[eq:3\])\] and in $1$D \[Eq. (\[eq:8\])\].
Structure of right-singular vectors of the TM {#sec:localization_eigenmodes_TM}
---------------------------------------------
To study the eigenchannel structure given by Eq. (\[eq:4\]), we first perform a numerical analysis of the transmission eigenvalue spectrum $%
\{\tau_n\}$ and the right-singular vectors $\{\boldsymbol{v}_n\}$ of the TM. We use Eq. (\[eq:6\]) to simulate the wave propagation. In simulations, the disordered medium is discretized on a square grid, with the grid spacing being the inverse wave number in the background. The squared refractive index at each site fluctuates independently around the background value of unity, taking values randomly from the interval $[0.03,1.97]$. The standard recursive Green’s function method [Baranger91,MacKinnon85,Bruno05]{} is adopted. Specifically, we computed the Green’s function between grid points $(x^{\prime }=0,y^{\prime })$ and $%
(x=L,y^{\prime })$. From this we obtained the TM $\boldsymbol{t}$, and then numerically performed the singular-value decomposition to obtain $\{\tau_n, \boldsymbol{v}_n, \boldsymbol{u}_n\}$.
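The assembly of the TM from the computed Green's function, Eq. (\[eq:1\]), can be sketched as follows. This is an illustrative Python fragment, not the code used in the study: the Green's function block between the two cross sections is assumed to have been obtained elsewhere (e.g., by the recursive Green's function method), and the reflecting walls are modeled here, as an assumption on our part, by sine-shaped transverse modes.

```python
import numpy as np

def transmission_matrix(G_block, Omega, W, Ny):
    """Assemble t_{ab} of Eq. (eq:1) from a real-space Green's function block.

    G_block : (Ny, Ny) complex array, <x=L, y | G | x'=0, y'> on the y grid.
    Omega   : circular frequency (background wave velocity set to unity).
    W       : transverse width of the sample.
    """
    y = (np.arange(Ny) + 0.5) * W / Ny
    dy = W / Ny
    N = int(np.floor(Omega * W / np.pi))                 # propagating modes only
    a = np.arange(1, N + 1)
    phi = np.sqrt(2.0 / W) * np.sin(np.outer(a, np.pi * y / W))   # phi_a(y_j)
    k_a = np.sqrt(Omega**2 - (a * np.pi / W)**2)         # longitudinal wave numbers
    v_a = k_a / Omega                                    # group velocities
    G_ab = phi.conj() @ G_block @ phi.T * dy**2          # <x=L, a | G | x'=0, b>
    return -1j * np.sqrt(np.outer(v_a, v_a)) * G_ab      # Eq. (eq:1)
```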
First of all, we found that regardless of $W$ \[throughout this work $W,L\gg \ell$ (the mean free path)\], the eigenvalue density averaged over a large ensemble of disorder configurations follows a bimodal distribution, which was found originally for quasi-$1$D samples [@Dorokhov82; @Dorokhov84; @Mello88] and shown later to hold for arbitrary diffusive samples [@Nazarov94].
However, as shown in Fig. \[fig:2\], we found that, at a given transmission eigenvalue, the spatial structure of the right-singular vector changes drastically with $W$: for small $W$, namely a quasi-$1$D sample, the structure is extended (a), whereas for large $W$, namely a $2$D slab, the structure is localized in a small area of the cross section, and the localization structures are very rich. Indeed, as shown in (b-d), given a disorder configuration, $|v_n(y)|^2$ can have one localization peak or several localization peaks well separated in the $y$ direction, even though these distinct structures correspond to the same eigenvalue: the former has been found before [@Cao18], while for the latter we are not aware of any previous reports.
![Simulations of the resonance structures in $1$D show that the $h^*$-function is universal with respect to both the resonant transmission $\mathcal{T}$ and the disorder strength $s$. This property is similar to the universality of the $h$-function that determines the eigenchannel structures in $2$D and quasi-$1$D.[]{data-label="fig:3"}](h-function.pdf){width="8.7cm"}
![Simulations show that for $2$D slabs with different values of $N$, the ensemble averaged depth profiles $W_{\protect\tau}(x)$ (symbols) are well described by the analytic expression given by Eqs. (\[eq:9\])-(\[eq:11\]) for quasi-$1$D waveguides (black solid lines). Note that at a given $\protect\tau$ and $x$, all symbols for distinct $N$ overlap. $L$ is fixed to be $50$ and five ratios of $W/L$ are considered, namely $3$, $3.6$, $4.8$, $7.2$ and $12$, corresponding to $N=50$, $60$, $80$, $120$ and $200$, respectively.[]{data-label="fig:4"}](eigenchannel_structure_a.pdf){width="8.7cm"}
Transverse structures of eigenchannels {#sec:localization}
--------------------------------------
We computed the Green’s function between grid points $(x^{\prime }=0,y^{\prime })$ and $(x,y^{\prime })$, where $0\leq x\leq L$. By using Eq. (\[eq:2\]) we obtained the matrix $\boldsymbol{t}(x)$. Substituting the simulation results of $\{\boldsymbol{v}_n\}$ obtained before and $\boldsymbol{t}(x)$ into Eqs. (\[eq:4\]) and (\[eq:3\]) we found the profile $|\boldsymbol{E}_{\tau_n}(x,y)|^2$. We repeated the same procedures for many disorder configurations, and also for different widths.
Figure \[fig:2\] reveals an even more surprising phenomenon occurring for very large $W$ (corresponding to $N=800$ in the simulations), regardless of the transmission eigenvalues. Basically, we see that the structure of $v_n(y)$ serves as a “skeleton” of the eigenchannel structure: each localization peak in $|v_n(y)|^2$ triggers a localization peak in the profile $|\boldsymbol{E}_{\tau_n}(x,y)|^2$ at arbitrary depth $x$. Thus, at the cross section of arbitrary depth $x$, the transverse structure of eigenchannels is qualitatively the same as the localization structure of $|v_n(y)|^2$, i.e., if the latter has a single localization peak or exhibits a necklace structure, then so does the former, with the same number of localization peaks. Note that Fig. \[fig:2\](b-d) correspond to the same disorder configuration and approximately the same transmission eigenvalue.
Furthermore, we introduce the $x$-dependent inverse participation ratio: $$\label{eq:19}
\frac{1}{\xi_n(x)}\equiv\left\langle\frac{\int dy |\boldsymbol{E}_{\tau_n}(x,y)|^4}{(\int dy |\boldsymbol{E}_{\tau_n}(x,y)|^2)^2}\right\rangle$$ associated with the field distribution $\boldsymbol{E}_{\tau_n}(x,y)$ of the $n$th eigenchannel, where $\xi_n(x)$ characterizes the extension of the field distribution in the $y$ direction at the penetration depth $x$, and the average is over a number of $\boldsymbol{E}_{\tau_n}(x,y)$ corresponding to the same singular value $\tau_n$. Figure \[fig:9\] presents typical numerical results for $\xi_n(x)$ for different values of $N$. From this it is easy to see that the field distribution has essentially the same extension in the $y$ direction for every $x$, which is much smaller than $W$. This result provides further evidence that the localization structure of $|v_n(y)|^2$ is maintained throughout the sample.
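The participation ratio itself is straightforward to evaluate from the simulated profiles; a minimal sketch (our own code, assuming a set of profiles that share the same $\tau_n$) reads:

```python
import numpy as np

def participation_ratio(profiles, dy):
    """x-dependent participation ratio xi_n(x) from Eq. (eq:19).

    profiles : list of (Nx, Ny) arrays |E_{tau_n}(x, y)|^2, all corresponding to
               (approximately) the same transmission eigenvalue tau_n.
    dy       : transverse grid spacing.
    """
    inv_xi = np.zeros(profiles[0].shape[0])
    for I2 in profiles:
        num = (I2**2).sum(axis=1) * dy        # int dy |E|^4
        den = (I2.sum(axis=1) * dy) ** 2      # (int dy |E|^2)^2
        inv_xi += num / den
    inv_xi /= len(profiles)                   # average over the ensemble
    return 1.0 / inv_xi
```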
To understand the origin of the localization structures of eigenchannels, we first consider the case with a single localization peak. We modify the input field $\boldsymbol{v}_{n}\equiv\{v_n(y)\}$ (panels a and b in Fig. \[fig:8\]) to be $\boldsymbol{v}^{\prime }_{n}\equiv\{v_n^{\prime }(y)\}$ (panel c) in the following way. We twist by $\pi$ the phase of $v_{n}(y)\equiv |v_{n}(y)|e^{i\varphi(y)}$ in a certain region of $y$, $$\begin{aligned}
&&v_{n}(y)\rightarrow v^{\prime }_{n}(y)\equiv |v_{n}(y)|\exp\{i\varphi^{\prime }(y)\}, \label{eq:16}\\
&&\varphi^{\prime }(y)=\varphi(y)+\pi\chi(y),
\label{eq:13}\end{aligned}$$ where $\chi(y)$ takes the value of unity in that region and zero otherwise. Then we let this modified input field propagate in the medium, $\boldsymbol{v}^{\prime }_n\rightarrow\boldsymbol{t}(x)\boldsymbol{v}^{\prime }_n$, and compare the ensuing $2$D energy density profile with the reference eigenchannel structure (panel d). We find that when the $\pi$-phase twist region is away from the localization center of $v_n(y)$ (panel c, dotted-dashed line), the resulting $2$D energy density profile is indistinguishable from the reference eigenchannel structure (panel e). That is, the eigenchannel structure is insensitive to such modifications. By contrast, for changes made in the localization center (panel c, dashed line), the ensuing energy density profile is totally different from the reference eigenchannel structure (panel f). This shows that the localization structures of $|\boldsymbol{E}_{\tau_n}(x,y)|^2$ are of a wave-coherence nature.
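The $\pi$-phase twist itself is a one-line manipulation of the real-space input field; a hedged sketch (our notation; the twisted field would then be projected back onto the waveguide modes and propagated with $\boldsymbol{t}(x)$ exactly like the original input) is:

```python
import numpy as np

def pi_phase_twist(v_y, y, y_lo, y_hi):
    """Twist the phase of the real-space input field v_n(y) by pi in [y_lo, y_hi].

    Implements v'_n(y) = |v_n(y)| exp{i [phi(y) + pi chi(y)]} with chi(y) = 1
    inside the chosen window and 0 otherwise, Eqs. (eq:16)-(eq:13).
    """
    chi = (y >= y_lo) & (y <= y_hi)
    return np.abs(v_y) * np.exp(1j * (np.angle(v_y) + np.pi * chi))
```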
Next, we consider the case with two localization peaks. We modify the input field $\boldsymbol{v}_{n}\equiv\{v_n(y)\}$, which has two localization peaks \[Fig. \[fig:10\] (panel a)\], in the same way as described by Eqs. (\[eq:16\]) and (\[eq:13\]), and let the modified input field propagate in the medium. We then compare the resulting $2$D energy density profile with the reference eigenchannel structure (panel b). Interestingly, if we perform the $\pi$-phase shift in one localization region of $|v_n(y)|^2$, then, for the ensuing $2$D energy density profile, only the peak adjacent to this localization peak of $|v_n(y)|^2$ is modified significantly, whereas the other is indistinguishable from the corresponding reference eigenchannel structure. This implies that when the transverse structure of eigenchannels is of the necklace-like shape, the different localization peaks forming this necklace structure are incoherent. In addition, it provides firm support that each localization peak in $|v_n(y)|^2$ triggers, independently, the formation of a single localization peak in the transverse structure of eigenchannels.
Universality of eigenchannel structures in slabs {#sec:universality}
------------------------------------------------
Having analyzed the transverse structure of eigenchannels, we proceed to explore the longitudinal structure and to analyze its connection to the eigenchannel structure in a quasi $1$D diffusive waveguide.
For a quasi-$1$D diffusive waveguide the ensemble average of $w_{\tau}(x)$, denoted as $W_\tau(x)$, is given by [@Tian15]: $$\begin{aligned}
W_\tau(x) = S_\tau(x)W_{\tau=1}(x), \label{eq:9}\end{aligned}$$ where $W_{\tau=1}(x)$ is the profile corresponding to the transparent ($\tau=1$) eigenchannel, $$\begin{aligned}
W_{\tau=1}(x) = 1+\frac{\pi L x^{\prime }(1-x^{\prime })}{2\ell},\quad x^{\prime }=x/L,
\label{eq:10}\\
S_\tau(x) = 2\frac{\cosh^2(h(x^{\prime })(1-x^{\prime })\phi)}{\cosh^2(h(x^{\prime })\phi)}-\tau,\quad \tau=\frac{1}{\cosh^2\phi},
\label{eq:11}\end{aligned}$$ with $\phi\geq 0$. Note that $h(x^{\prime })$ increases monotonically from $h(1)=1$ as $x^{\prime }$ decreases from $1$. Its explicit form, independent of $N,\tau$, is given in Fig. \[fig:3\] (black solid curve).
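For reference, the analytic quasi-$1$D profile is easy to tabulate once $h(x^{\prime })$ is known; the sketch below (our code) treats $h$ as a user-supplied function, since its explicit form is only available numerically (Fig. \[fig:3\]):

```python
import numpy as np

def depth_profile(x_prime, tau, h, L_over_ell):
    """Quasi-1D eigenchannel depth profile W_tau(x), Eqs. (eq:9)-(eq:11).

    x_prime    : array of x/L values in [0, 1].
    tau        : transmission eigenvalue, tau = 1/cosh^2(phi).
    h          : callable h(x'), the universal function shown in Fig. 3.
    L_over_ell : ratio of sample length L to mean free path ell.
    """
    phi = np.arccosh(1.0 / np.sqrt(tau))
    W1 = 1.0 + np.pi * L_over_ell * x_prime * (1.0 - x_prime) / 2.0
    hx = h(x_prime)
    S = 2.0 * np.cosh(hx * (1.0 - x_prime) * phi)**2 \
        / np.cosh(hx * phi)**2 - tau
    return S * W1
```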
![The ensemble-averaged resonance structure $I_{\mathcal{T}}(x)$ for two different disorder strengths: $s=0.1$ (top) and $s=0.3$ (bottom), whose corresponding localization-to-sample length ratios are $12$ and $3$, respectively.[]{data-label="fig:5"}](resonance_structure.pdf){width="8.7cm"}
![image](vhl.pdf){width="16.0cm"}
Now we compare the eigenchannel structure in a slab with that in a quasi-$1$D waveguide described by Eqs. (\[eq:9\]), (\[eq:10\]) and (\[eq:11\]). To this end we average $2000$ profiles of $w_{\tau}(x)$ with the same or close eigenvalues $\tau$. (Some of these eigenchannels may correspond to the same disorder configuration.) As a result, we obtain $W_\tau(x)$ for different values of $\tau$, shown in Fig. \[fig:4\]. We see, strikingly, that for slabs with different values of $N$ (i.e., width $W$) the simulated profiles $W_\tau(x)$ are in excellent agreement with the quasi-$1$D expression described by Eqs. (\[eq:9\]), (\[eq:10\]) and (\[eq:11\]).
Resonance structure {#sec:resonance_structure}
===================
In the previous section, we have seen that in both $2$D slabs and quasi-$1$D waveguides the ensemble averaged eigenchannel structure $W_\tau(x)$ is described by the universal formula Eqs. (\[eq:9\]), (\[eq:10\]) and (\[eq:11\]). It is natural to ask whether this universality can be extended to strictly $1$D systems, in which the transmission eigenchannel does not exist. Noting that the resonant transmissions have the same bimodal statistics as the transmission eigenvalues of eigenchannels [@Bliokh15], in this section we study numerically the resonance structure ${\tilde I}_{\mathcal{T}_n}(x)$. It is well known [@Berezinskii73] that in strictly $1$D there is no diffusive regime, because the localization length is $\sim\ell$. Instead, there are only ballistic and localized regimes. We consider the former below.
In the simulations, the sample consists of $51$ scatterers separated by $50$ layers, whose thicknesses (rescaled by the inverse wave number in the background) are randomly distributed in the interval $d_0 \pm \delta$, with $d_0 = 10.0$ and $\delta = 9.0$. Thus $L=50 d_0$. The scatterers are characterized by the reflection coefficients $r_i$ between the neighbouring layers ($i$ labels the scatterers), which are chosen randomly and independently from the interval $(-s,s)$, with $s\in (0,1)$ governing the disorder strength. We change the frequency $\Omega$ in a narrow band centered at $\Omega_0$ and of half-width $5\%\times \Omega_0$, and calculate the transmittance spectrum $\mathcal{T}(\Omega)$ by using the standard transfer matrix approach. We also change disorder configurations, so that for each resonant transmission $\mathcal{T}_n$, $5\times 10^5$ profiles of ${\tilde I}_{\mathcal{T}_n}(x)$ are obtained. We then calculate the average of these profiles, denoted by $I_{\tau_n}(x)$. Finally, we repeat the numerical experiments for different values of $s$.
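An illustrative transfer-matrix implementation of this numerical experiment is sketched below. This is our own code and our own (common, but not unique) convention for a lossless symmetric scatterer; the parameters mirror those quoted above.

```python
import numpy as np

def transmittance(Omega, r, d):
    """Transmittance of a 1D stack of scatterers (reflection coefficients r_i)
    separated by layers of thicknesses d_i; background wave velocity = 1.

    Convention: lossless symmetric scatterer -> M = [[1, r], [r, 1]] / t with
    t = sqrt(1 - r^2); free propagation -> diag(e^{i Omega d}, e^{-i Omega d}).
    With det(M_total) = 1 the transmittance is T = 1/|M_total[1, 1]|^2.
    """
    M = np.eye(2, dtype=complex)
    for i, ri in enumerate(r):
        M = np.array([[1.0, ri], [ri, 1.0]], dtype=complex) / np.sqrt(1 - ri**2) @ M
        if i < len(d):
            M = np.diag([np.exp(1j * Omega * d[i]), np.exp(-1j * Omega * d[i])]) @ M
    return 1.0 / abs(M[1, 1])**2

rng = np.random.default_rng(1)
d0, delta, s = 10.0, 9.0, 0.1
d = d0 + delta * (2 * rng.random(50) - 1)   # 50 random layer thicknesses
r = s * (2 * rng.random(51) - 1)            # 51 random reflection coefficients
Omega0 = 1.0                                # chosen so that Omega0 * d0 = 10
Omegas = Omega0 * np.linspace(0.95, 1.05, 2001)
T = np.array([transmittance(Om, r, d) for Om in Omegas])
# Resonances = local maxima of the transmittance spectrum T(Omega).
res = np.where((T[1:-1] > T[:-2]) & (T[1:-1] > T[2:]))[0] + 1
```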
Figure \[fig:5\] shows the simulation results of $I_\tau(x)$ for two different values of $s$. These profiles look similar to those presented in Fig. \[fig:4\]. For a quantitative comparison, we compute the quantity $S_\tau^*(x)\equiv I_\tau(x)/I_{\tau=1}(x)$. Then, we present the function $S_\tau^*(x)$ in the form: $$\begin{aligned}
S_\tau^*(x) = 2\frac{\cosh^2(h^*(x^{\prime })(1-x^{\prime })\phi)}{\cosh^2(h^*(x^{\prime })\phi)}-\tau, \label{eq:12}\end{aligned}$$ and find $h^*(x^{\prime })$ for different values of $\tau$ and $s$ from $S_\tau^*(x)$ calculated numerically (Fig. \[fig:3\]). The results are surprising: as shown in Fig. \[fig:3\], $h^*(x^{\prime })$ is a universal function, independent of $\tau$ and $s$, which is the key feature of $h(x^{\prime })$ for eigenchannel structures. However, Fig. \[fig:3\] also shows that the two universal functions, i.e., $h^*(x^{\prime })$ and $h(x^{\prime })$, are different. Therefore, up to this minimal modification, the expression described by Eqs. (\[eq:9\]), (\[eq:10\]) and (\[eq:11\]) is super-universal: it applies to both the resonance structure and the eigenchannel structure.
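The extraction of $h^*(x^{\prime })$ amounts to inverting Eq. (\[eq:12\]) at each depth; a sketch of this inversion (our code, exploiting the fact that the right-hand side decreases monotonically with $h^*$ for $0<x^{\prime }<1$) is:

```python
import numpy as np
from scipy.optimize import brentq

def extract_h_star(x_prime, S_star, tau, h_max=50.0):
    """Invert Eq. (eq:12): find h*(x') such that
       2 cosh^2(h (1 - x') phi) / cosh^2(h phi) - tau = S*_tau(x')."""
    phi = np.arccosh(1.0 / np.sqrt(tau))
    h_star = np.full(len(x_prime), np.nan)
    for i, (xp, S) in enumerate(zip(x_prime, S_star)):
        if not 0.0 < xp < 1.0:
            continue
        f = lambda h: (2.0 * np.cosh(h * (1.0 - xp) * phi)**2
                       / np.cosh(h * phi)**2 - tau - S)
        if f(1e-8) > 0.0 > f(h_max):          # monotonic => root is bracketed
            h_star[i] = brentq(f, 1e-8, h_max)
    return h_star
```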
Possible applications {#sec:applications}
=====================
The universal properties of the transmission eigenchannels presented above open up outstanding possibilities to tailor the energy distribution inside higher-dimensional opaque materials, in particular, to concentrate energy in different parts of a diffusive sample. In the example shown in Fig. \[fig:7\], two eigenchannels with low ($\tau_l=0.1$) and high ($\tau_h=1$) transmissions were excited, so that the input field had the form $\boldsymbol{\psi}_{in}=\frac{1}{\sqrt{2}}(\boldsymbol{v}_{h}+\boldsymbol{v}_{l})$. Here the right-singular vector $\boldsymbol{v}_{h(l)}\equiv\{v_{h(l)}(y)\}$ of the TM corresponds to a high (low) transmission eigenchannel. It is easy to see that the energy density profile generated by this input inside the medium is composed of two phase-coherent, but spatially separated parts. This is because the initial transverse localization of $\boldsymbol{v}_{h(l)}$ persists along the sample, and the integrated energy density profiles given by Eqs. (\[eq:9\]), (\[eq:10\]) and (\[eq:11\]) have maxima at different points $x$ (i.e., the higher the transmission eigenvalue, the larger the radiation penetration depth).
Simulations further show (Fig. \[fig:7\]) that it is possible, without changing the topology of the profile, to vary the relative intensity deposited in the two separated regions by simply modulating the phase field of $\boldsymbol{\psi}_{in}$. For example, we twist the phase $\varphi_l(y)$ of $v_{l}(y)$ by $\pi$ at points $y$ near the localization center \[of $v_{l}(y)$\], $$\begin{aligned}
\label{eq:17}
&&v_{l}(y)\rightarrow v^{\prime }_{l}(y)=|v_{l}(y)|\exp\{i\varphi^{\prime }_l(y)\}, \\
&&\varphi^{\prime }_l(y)=\varphi_l(y)+\pi \chi(y),\end{aligned}$$ where $\chi(y)$ takes the value of unity in a region near the localization center and is zero otherwise. For the ensuing input field $\boldsymbol{\psi}_{in}=\frac{1}{\sqrt{2}}(\boldsymbol{v}_{h}+\boldsymbol{v}^{\prime }_{l})$, where $\boldsymbol{v}^{\prime }_{l}\equiv\{v^{\prime }_{l}(y)\}$, we find that the energy density deposited in the region corresponding to the low transmission eigenchannel is suppressed. Similarly, we can modify the input field to suppress the energy density deposited in the region corresponding to the high transmission eigenchannel.
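A minimal sketch of this wavefront construction (our code; `v_h` and `v_l` denote the real-space input fields of the two eigenchannels) is:

```python
import numpy as np

def steering_input(v_h, v_l, y, twist_window=None):
    """Input wavefront psi_in = (v_h + v_l') / sqrt(2) of the energy-steering
    example; if twist_window = (y_lo, y_hi) is given, the phase of v_l is
    twisted by pi inside that window, Eq. (eq:17)."""
    if twist_window is not None:
        y_lo, y_hi = twist_window
        chi = (y >= y_lo) & (y <= y_hi)
        v_l = np.abs(v_l) * np.exp(1j * (np.angle(v_l) + np.pi * chi))
    return (v_h + v_l) / np.sqrt(2.0)
```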
Conclusions and outlook {#sec:discussions}
=======================
Summarizing, we have shown that in a diffusive medium, as the medium geometry crosses over from quasi-$1$D to higher dimension, although the transverse structures of eigenchannels (corresponding to the same transmission eigenvalues) undergo dramatic changes, i.e., from extended to Anderson-like localized or necklace-like distributions, their longitudinal structure stays the same, i.e., the depth profile $W_\tau (x)$ of the energy density of an eigenchannel with transmission $\tau$ is always described by Eqs. (\[eq:9\])-(\[eq:11\]), regardless of medium geometry. The details of the system, such as the thickness and the disorder ensemble, only enter through the ratio $L/\ell$ in Eq. (\[eq:9\]). This expression is super-universal, i.e., it encompasses not only the energy distributions in diffusive eigenchannels in any dimension, but (with a minimal modification) the shape of the transmission resonances in strictly 1D random systems as well. These findings suggest that eigenchannels, which are the underpinnings of diverse diffusive wave phenomena in any dimension, might have a common origin, namely, $1$D resonances. Although the similarity between the eigenchannel structure and the Fabry-Perot cavity has been noticed already in the pioneering study of eigenchannel structures [@Choi11], a comprehensive study of this phenomenon has not been carried out. The results presented above may already be helpful in further advancing the methods of focusing coherent light through scattering media by wavefront shaping. Another area of potential applications is random lasing [@Cao99; @Wiersma08] in diffusive media. Moreover, based on previous studies [@Tian13; @Bliokh15; @Lagendijk16a; @Tian17], we expect that by controlling the reflectivities of the edges of a sample one can tune the intensity distributions in eigenchannels, not only in quasi-1D media, but in samples of higher dimensions as well. In the future, it is desirable to explore the super-universality of eigenchannel structures in high-dimensional media, where wave interference is strong, so that Anderson localization or an Anderson localization transition occurs.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to A. Z. Genack for many useful discussions, and to H. Cao for informing us of the preprint . C. T. is supported by the National Natural Science Foundation of China (Grants No. 11535011 and No. 11747601). F. N. is supported in part by the MURI Center for Dynamic Magneto-Optics via the Air Force Office of Scientific Research (AFOSR) (FA9550-14-1-0040), Army Research Office (ARO) (Grant No. W911NF-18-1-0358), Asian Office of Aerospace Research and Development (AOARD) (Grant No. FA2386-18-1-4045), Japan Science and Technology Agency (JST) (Q-LEAP program, ImPACT program, and CREST Grant No. JPMJCR1676), Japan Society for the Promotion of Science (JSPS) (JSPS-RFBR Grant No. 17-52-50023, and JSPS-FWO Grant No. VS.059.18N), RIKEN-AIST Challenge Research Fund, and the John Templeton Foundation.
[99]{} S. Rotter and S. Gigan, Rev. Mod. Phys. **89**, 015005 (2017).
I. M. Vellekoop and A. P. Mosk, Phys. Rev. Lett. **101**, 120601 (2008).
W. Choi, A. P. Mosk, Q. H. Park, and W. Choi, Phys. Rev. B **83**, 134207 (2011).
M. Kim, Y. Choi, C. Yoon, W. Choi, J. Kim, Q-H. Park, and W. Choi, Nat. Photon. **6**, 581 (2012).
A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Nat. Photon. **6**, 283 (2012).
M. Davy, Z. Shi, and A. Z. Genack, Phys. Rev. B **85**, 035105 (2012).
M. Davy, Z. Shi, J. Park, C. Tian, and A. Z. Genack, Nat. Commun. **6**, 6893 (2015).
S. F. Liew and H. Cao, Opt. Express **23**, 11043 (2015).
O. S. Ojambati, A. P. Mosk, I. M. Vellekoop, A. Lagendijk, and W. L. Vos, Opt. Express **24**, 18525 (2016).
R. Sarma, A. G. Yamilov, S. Petrenko, Y. Bromberg, and H. Cao, Phys. Rev. Lett. **117**, 086803 (2016).
M. Koirala, R. Sarma, H. Cao, and A. Yamilov, Phys. Rev. B **96**, 054209 (2017).
C. W. Hsu, S. F. Liew, A. Goetschy, H. Cao, and A. D. Stone, Nat. Phys. **13**, 497 (2017).
O. S. Ojambati, H. Yilmaz, A. Lagendijk, A. P. Mosk, and W. L. Vos, New J. Phys. **18**, 043032 (2016).
P. L. Hong, O. S. Ojambati, A. Lagendijk, A. P. Mosk, and W. L. Vos, Optica **5**, 844 (2018).
H. Yilmaz, C. W. Hsu, A. Yamilov, and H. Cao, arXiv: 1806.01917.
see Section S2 in Supplemental Materials of: P. Fang, L. Y. Zhao, and C. Tian, Phys. Rev. Lett. **121**, 140603 (2018).
O. N. Dorokhov, Pis’ma Zh. Eksp. Teor. Fiz. **36**, 259 (1982); \[JETP Lett. **36**, 318 (1982)\].
O. N. Dorokhov, Solid State Commun. **51**, 381 (1984).
P. A. Mello, P. Pereyra, and N. Kumar, Ann. Phys. (N.Y.) **181**, 290 (1988).
C. W. J. Beenakker, Rev. Mod. Phys. **69**, 731 (1997).
L. Y. Zhao, C. Tian, Y. P. Bliokh, and V. Freilikher, Phys. Rev. B **92**, 094203 (2015).
K. Y. Bliokh, Y. P. Bliokh, and V. D. Freilikher, J. Opt. Soc. Am. B **21**, 113 (2004).
P. W. Anderson, Phys. Rev. **109**, 1492 (1958).
M. E. Gertsenshtein and V. B. Vasil’ev, Teor. Veroyatn. Primen. **4**, 424 (1959) \[Theor. Probab. Appl. **4**, 391 (1959)\].
K. Yu. Bliokh, Yu. P. Bliokh, V. Freilikher, S. Savel’ev, and F. Nori, Rev. Mod. Phys. **80**, 1201 (2008).
H. U. Baranger, D. P. DiVincenzo, R. A. Jalabert, and A. D. Stone, Phys. Rev. B **44**, 10637 (1991).
A. MacKinnon, Z. Phys. B **59**, 385 (1985).
G. Metalidis and P. Bruno, Phys. Rev. B **72**, 235304 (2005).
Yu. V. Nazarov, Phys. Rev. Lett. **73**, 134 (1994).
V. L. Berezinskii, Zh. Eksp. Teor. Fiz. **65**, 1251 (1973) \[Sov. Phys. JETP **38**, 620 (1974)\].
H. Cao, Y. G. Zhao, S. T. Ho, E. W. Seelig, Q. H. Wang, and R. P. H. Chang, Phys. Rev. Lett. **82**, 2278 (1999).
D. S. Wiersma, Nature Phys. **4**, 359 (2008).
X. J. Cheng, C. S. Tian, and A. Z. Genack, Phys. Rev. B **88**, 094202 (2013).
D. Akbulut, T. Strudley, J. Bertolotti, E. P. A. M. Bakkers, A. Lagendijk, O. L. Muskens, W. L. Vos, and A. P. Mosk, Phys. Rev. A **94**, 043817 (2016).
X. Cheng, C. Tian, Z. Lowell, L. Y. Zhao, and A. Z. Genack, Eur. Phys. J. Special Topics **226**, 1539 (2017).
---
abstract: 'We investigate the phase diagram of dipolar fermions with aligned dipole moments in a two-dimensional (2D) bilayer. Using a version of the Singwi-Tosi-Land-Sjölander scheme recently adapted to dipolar fermions in a single layer \[M. M. Parish and F. M. Marchetti, Phys. Rev. Lett. **108**, 145304 (2012)\], we determine the density-wave instabilities of the bilayer system within linear response theory. We find that the bilayer geometry can stabilize the collapse of the 2D dipolar Fermi gas with intralayer attraction to form a new density wave phase that has an orientation perpendicular to the density wave expected for strong intralayer repulsion. We thus obtain a quantum phase transition between stripe phases that is driven by the interplay between strong correlations and the architecture of the low dimensional system.'
author:
- 'F. M. Marchetti'
- 'M. M. Parish'
bibliography:
- 'dipoleRefs.bib'
title: 'Density-wave phases of dipolar fermions in a bilayer'
---
Density-wave phases such as stripes are apparently ubiquitous in nature. They are typically found in quasi-two-dimensional or layered materials [@CDW_review; @Howald2003; @graphene_stripe], where they manifest as periodic modulations of the electron density within the two-dimensional (2D) layers. Moreover, such stripes have been linked with high temperature superconductivity [@kivelson1998; @kivelson2003]. However, despite their ubiquity and potential importance, their origins and behavior are still under debate. Indeed, a central question is whether stripes are driven by electron-electron repulsion or simply by the architecture of the underlying crystal structure [@Mazin2008].
One route to gaining insight into the problem is to study cleaner, more tunable analogues of these electron systems. Quantum degenerate Fermi gases with long-range dipolar interactions [@Baranov2008; @Carr2009] provide just such a system in which to investigate density-wave phases. Such dipolar Fermi gases have recently been realized experimentally with both magnetic atoms [@mingwu2012] and polar diatomic molecules [@Ni2008; @heo2012; @Wu2012]. In particular, ultracold polar molecules of $^{40}$K $^{87}$Rb have been confined to 2D layers using an optical lattice [@miranda2011], thus paving the way for exploring long-range interactions in low dimensional systems.
For a 2D gas of polar molecules, the dipole-dipole interactions can be controlled by aligning the dipole moments with an external electric field. For small dipole tilt angles $\theta$ with respect to the plane normal, the dipolar interactions are purely repulsive, while for $\theta \gtrsim \pi/4$, the interactions acquire a significant attractive component such that the dipolar Fermi system is unstable towards collapse for sufficiently strong interactions [@bruun2008; @yamaguchi2010; @parish2012; @sieberer2011]. Away from collapse, in the repulsive regime, previous theoretical work has predicted the existence of a stripe phase [@yamaguchi2010; @parish2012; @sieberer2011; @babadi2011], even for the case where the dipolar interactions are *isotropic* ($\theta=0$) and the system must spontaneously break rotational symmetry [@parish2012]. Here we investigate the effect of the low dimensional architecture on density instabilities by considering dipolar fermions in a 2D bilayer geometry.
We determine the phase diagram of the bilayer system within linear response theory, using a version of the Singwi-Tosi-Land-Sjölander (STLS) scheme [@STLSpaper] recently developed in Ref. [@parish2012]. Based on this analysis, we show that the bilayer geometry can actually stabilize the collapse of the 2D Fermi gas to form a new density wave (Fig. \[fig:phase\_diag\]). However, in contrast to the stripes in the repulsive regime, this new stripe phase has density modulations along the direction of the dipole tilt (Fig. \[fig:schematic\]) and can also be well described by a simplified STLS theory that involves exchange correlations only. Our work thus reveals a new quantum phase transition between two different stripe modulations, where one phase is driven by strong repulsive correlations and the other is driven by the bilayer architecture.
In the following, we consider the bilayer geometry shown in the insets of Fig. \[fig:schematic\]. Here, the dipole moments (of strength $D$) are aligned by an external electric field ${{\mathbf E}}$ lying in the $x$-$z$ plane and at angle $\theta$ with respect to the $z$ direction. We parameterize the $x$-$y$ in-plane momentum by polar coordinates ${{\mathbf q}}=(q,\phi)$, with $\phi = 0$ corresponding to the direction $x$ of the dipole tilt. The remaining system parameters are the bilayer distance $d$, and the Fermi wave vector $k_F = \sqrt{4\pi
n}$ ($n$ is the density in each layer). For dipoles confined in a layer of width $W$, in the limit $q W\ll 1$, the effective 2D intralayer interaction can be written as [@fischer_06]: $$v_{11}({{\mathbf q}}) = V_0 - 2\pi D^2 q \xi(\theta,\phi)\; ,
\label{eq:intra}$$ where $\xi(\theta,\phi) = \cos^2\theta - \sin^2\theta \cos^2\phi$, and $V_0$ is the $W$-dependent short-ranged contact interaction. The confinement width $W$ provides a natural cut-off for the quasi-2D system: $\Lambda \sim 1/W \gg k_F$.
Likewise, in the limit $W\ll d$, we can write the interlayer interaction as [@Li_Hwang_DasSarma10]: $$v_{12}({{\mathbf q}}) =-2\pi D^2 q e^{-qd} \left[\xi(\theta,\phi) +i
\sin2\theta \cos\phi \right]\; .
\label{eq:inter}$$ Note that for $\theta \ne 0$, this interaction is complex and satisfies $v_{21} ({{\mathbf q}}) = v_{12}^*({{\mathbf q}}) = v_{12}(-{{\mathbf q}})$. This arises from the fact that the interlayer interaction in real space is not invariant under the transformation ${{\mathbf r}} \mapsto -{{\mathbf r}}$.
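For later reference, these two momentum-space potentials are straightforward to evaluate numerically; an illustrative sketch of Eqs. (\[eq:intra\]) and (\[eq:inter\]) (our code; the short-range term $V_0$ is kept as an explicit parameter) is:

```python
import numpy as np

def xi(theta, phi):
    """Angular factor xi(theta, phi) entering both potentials."""
    return np.cos(theta)**2 - np.sin(theta)**2 * np.cos(phi)**2

def v_intra(q, phi, theta, D, V0=0.0):
    """Intralayer interaction v_11(q), Eq. (eq:intra)."""
    return V0 - 2.0 * np.pi * D**2 * q * xi(theta, phi)

def v_inter(q, phi, theta, D, d):
    """Interlayer interaction v_12(q), Eq. (eq:inter); complex for theta != 0."""
    return -2.0 * np.pi * D**2 * q * np.exp(-q * d) * (
        xi(theta, phi) + 1j * np.sin(2.0 * theta) * np.cos(phi))
```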
Assuming identical layers, one can parameterize the bilayer system using only three dimensionless quantities: the tilt angle $\theta$, the bilayer distance $k_F d$, and the interaction strength $U = mD^2 k_F/\hbar^2$, with $m$ being the fermion mass. The cut-off $\Lambda$ and the contact interaction $V_0$ should not be relevant since these do not affect the low-energy behavior of dipolar fermions, and indeed the procedure we employ preserves this.
We now turn to the linear response theory used to analyze the inhomogeneous phases of the dipolar system. In the bilayer (and multilayers generally), the linear density response $\delta n_{i}$ to an external perturbing field $V_{i}^{ext}$ defines the density-density correlation function matrix $\chi_{ij}$, $$\delta n_{i} ({{\mathbf q}}, \omega)= \sum_j \chi_{ij} ({{\mathbf q}},\omega)
V_{j}^{ext} ({{\mathbf q}},\omega)\; ,$$ where $i$, $j$ are the layer indices. For a non-interacting gas, we clearly have $\chi_{ij} = \delta_{ij} \Pi$, where the non-interacting intralayer response function $\Pi(q,\omega)$ can be evaluated analytically [@stern67]. Typically, one includes interactions via the Random Phase Approximation (RPA), where one uses a perturbing field that contains an effective potential due to the perturbed density: $V_{i}^{ext} \mapsto V_{i}^{ext} + \sum_j v_{ij}\delta n_{j}$, with intralayer potential $v_{22}({{\mathbf q}}) = v_{11}({{\mathbf q}})$. However, as has been argued recently for the single-layer case, RPA is never accurate for dipolar interactions, since it neglects exchange correlations [@babadi2011; @sieberer2011], which are important even in the long-wavelength limit [@parish2012].
A straightforward and physically motivated way of incorporating correlations beyond RPA is by means of local field factors $G_{ij} ({{\mathbf q}})$ (for an introduction to this method see, e.g., Ref. [@vignale_book]). Here, the (inverse) response function now reads: $${\chi^{-1}}_{ij} ({{\mathbf q}},\omega) = \frac{\delta_{ij}}{\Pi
(q,\omega)} - v_{ij}({{\mathbf q}}) \left[1 - G_{ij}({{\mathbf q}}) \right]
\; .
\label{eq:response}$$ Note that we clearly recover both RPA and the non-interacting case if we take, respectively, $G_{ij}=0$ or $G_{ij}=1$. This response function can be related to the “layer-resolved” static structure factor $S_{ij}({{\mathbf q}})$ by the fluctuation-dissipation theorem: $$S_{ij}({{\mathbf q}}) = -\frac{\hbar}{\pi n} \int_0^{\infty} d\omega
\chi_{ij}({{\mathbf q}},i\omega)\; .
\label{eq:struc}$$ In turn, we can approximate the local field factors using the STLS scheme [@STLSpaper]: $$G_{ij}({{\mathbf q}}) = \frac{1}{n} \int \frac{d{{\mathbf k}}}{(2\pi)^2}
\frac{{{\mathbf q}} \cdot {{\mathbf k}}}{q^2} \frac{v_{ij} ({{\mathbf k}})}{v_{ij}
({{\mathbf q}})} \left[ \delta_{ij} - S_{ij} ({{\mathbf q}}-{{\mathbf k}}) \right] \; .
\label{eq:local}$$ The response function $\chi_{ij}$ (and associated structure factor $S_{ij}$) can now be determined by solving Eqs. - self-consistently. The STLS scheme has been heavily utilized for Coulomb interactions and it has proven to be very successful for describing the dielectric function of several strongly-correlated electron systems (see [@vignale_book] and references therein). Following Ref. [@parish2012], we consider an improved version of the STLS scheme that has been adapted to the dipolar system. In essence, it ensures that our results are insensitive to $\Lambda$ and $V_0$, by requiring that the intralayer correlations be dominated by Pauli exclusion at large wave vectors $q \gg 2k_F$.
For identical layers, we can assume that $S_{22}=S_{11}$, $S_{21}=S_{12}^*$ (and similarly for the local field factors $G_{ij}$). Note that the complex form of the interlayer potential means that the interlayer factors $S_{12}
({{\mathbf q}})$ and $G_{12}({{\mathbf q}})$ are also complex. However, the symmetry $v_{12}(-{{\mathbf q}})=v_{12}^*({{\mathbf q}})$ is also preserved for both factors at each iteration step of our self-consistent scheme. This guarantees that physical quantities such as the “layer-resolved” pair correlation functions, $g_{ij}({{\mathbf r}}) = \frac{1}{n^2} \langle
\psi_i^\dag({{\mathbf r}})\psi_j^\dag(0) \psi_j(0) \psi_i({{\mathbf r}})\rangle$, where $$g_{ij}({{\mathbf r}}) = 1+ \frac{1}{n} \int \frac{d{{\mathbf q}}}{(2\pi)^2}
e^{i{{\mathbf q}}.{{\mathbf r}}} \left[S_{ij}({{\mathbf q}}) - \delta_{ij} \right]\; ,
\label{eq:pair}$$ are always real, even when $i\neq j$.
![(Color online) Phase diagram for a dipolar Fermi gas in a bilayer at fixed interlayer distance, $k_Fd = 2$, as a function of $\theta$ (see Fig. \[fig:schematic\]) and interaction $U=mD^2k_F/\hbar^2$. The liquid phase is superfluid (SF). The (green) open triangles \[circles\] set the boundary of the stripe phase oriented along $\phi = 0$ \[$\phi=\pi/2$\], derived from a self-consistent STLS calculation. The filled (green) square at $\theta_c \simeq 0.75$ and $U\simeq 15.65$ is a quantum critical point beyond which there is a phase transition between the two stripe phases. The (blue) open diamonds for the $\phi=0$ stripe phase are instead determined including exchange correlations only (see text). These boundaries can be compared to the $\phi=\pi/2$ stripe transition (dashed line) and the collapse instability (dashed-dotted line) for the single-layer case [@parish2012]. The shaded “bosonic” region is where the system can be described in terms of interlayer bosonic dimers. The (red) filled diamond and thick (red) line at $\theta=\pi/2$ indicate collapse in the bilayer.[]{data-label="fig:phase_diag"}](glow_compare_phase-diagr.pdf){width="\linewidth"}
We determine the density instabilities of the bilayer system by analyzing the divergences of the static response function matrix $\chi_{ij} ({{\mathbf q}},0)$. Specifically, we search for zeros of the largest inverse eigenvalue, $$\chi_+^{-1} = \frac{1}{\Pi} - v_{11} [1-G_{11} ] + |v_{12} [1-G_{12}
] | \; .
\label{eq:eigen}$$ A zero of $\chi_+^{-1} ({{\mathbf q}},0)$ at a critical wave vector ${{\mathbf q}}_c$ signals an instability towards the formation of a density wave with period set by ${{\mathbf q_c}}$. If the instability occurs for a specific direction $\phi$, then the density-wave phase corresponds to a one-dimensional modulation (or stripe phase) of period $2\pi/q_c$ oriented along $\phi$. In this way, we obtain the phase diagram plotted in Fig. \[fig:phase\_diag\] for $k_F d=2$.
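As an illustration of this criterion, the search for a zero of $\chi_+^{-1}({{\mathbf q}},0)$ along a given direction $\phi$ can be sketched as follows (our code; the static non-interacting response $\Pi(q,0)$ and the local field factors, e.g. from the converged STLS iteration, are assumed to be supplied as functions):

```python
import numpy as np

def chi_plus_inv(q, phi, Pi0, v11, v12, G11, G12):
    """Largest inverse eigenvalue of the static response matrix, Eq. (eq:eigen)."""
    return (1.0 / Pi0(q)
            - v11(q, phi) * (1.0 - G11(q, phi))
            + np.abs(v12(q, phi) * (1.0 - G12(q, phi))))

def find_instability(phi, Pi0, v11, v12, G11, G12, q_max=6.0, nq=600):
    """Scan q along the direction phi for a sign change of chi_+^{-1}(q, 0);
    returns an estimate of the critical wave vector q_c, or None if stable."""
    q = np.linspace(1e-3, q_max, nq)
    vals = np.array([chi_plus_inv(qi, phi, Pi0, v11, v12, G11, G12) for qi in q])
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    return 0.5 * (q[idx[0]] + q[idx[0] + 1]) if idx.size else None
```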
For tilt angles $\theta<\theta_c \simeq 0.75$, we find a stripe phase along $\phi=\pi/2$ that is of a similar nature to the one found in a single layer (dashed line of Fig. \[fig:phase\_diag\]). In particular, it is driven by strong intralayer correlations induced by the repulsive part of $v_{11}$, as evidenced by the relative insensitivity of $q_c$ to the bilayer geometry and $\theta$ (see Fig. \[fig:schematic\]). However, the presence of the second layer can decrease the value of the critical interaction strength $U_c$ for stripe formation, as one might expect from the form of Eq. . The attractive part of $v_{12}({{\mathbf q}})$ also ensures that the density waves along $\phi=\pi/2$ in each layer are in phase. Similar results were found using the conserving Hartree-Fock (HF) approximation [@babadi2011; @block2012], but for much smaller values of $U_c$, like in the single-layer case. The shift of $U_c$ due to the other layer is relatively small for distance $k_F d=2$ (see Fig. \[fig:phase\_diag\] at small values of $\theta$), but it can become substantial for smaller $k_F d$ since Eq. depends exponentially on the bilayer distance. However, for smaller distances, we then encounter phases involving strong interlayer pairing [@pikovski2010; @zinner_10; @baranov2011] and the system would instead be better described in terms of interlayer bosonic dimers, as we discuss later.
![(Color online) Critical wave vector $q_c/k_F$ for the $\phi=\pi/2$ stripe phase ($\theta < \theta_c$) and the $\phi = 0$ one ($\theta > \theta_c$) — same parameters and symbol scheme as in Fig. \[fig:phase\_diag\]. The insets depict the alignment of the dipoles with the electric field ${{\mathbf E}}$ and the features of the two different stripe phases. For the $\phi = 0$ stripe phase, the density modulations in the two layers have a phase shift $\eta
\simeq 2\theta$, while the wave vector $q_c$ decreases with increasing tilt angle $\theta$ down to $q_c=0$ for $\theta=\pi/2$ (filled \[red\] diamond), where the gas collapses. For density modulations along $\phi = \pi/2$, $q_c$ appears to be fixed by the density.[]{data-label="fig:schematic"}](schematic_criticalq_d2.pdf){width="\linewidth"}
In the isotropic case ($\theta=0$), we find that the system spontaneously breaks rotational symmetry to form a stripe phase at $U
\simeq 5.74$, similarly to the single-layer case [@parish2012]. One can only observe this symmetry breaking at $\theta=0$ by starting the STLS iteration with a solution for small but finite $\theta$. This effectively corresponds to taking the limit $\theta \to 0$, which is somewhat akin to classical ferromagnetism, where one must consider the limit in which the magnetic field goes to zero. This stripe phase precedes Wigner crystallization which, according to quantum Monte Carlo (QMC) calculations, occurs at $U\simeq 25$ for perpendicular fermionic dipoles in a single layer [@matveeva2012].
For $\theta>\arcsin(1/\sqrt{3})$, the intralayer interaction develops an attractive sliver in the plane that can eventually lead to collapse in the single layer [@bruun2008; @yamaguchi2010; @sieberer2011; @parish2012]. Here, for large enough $U$ and $\theta$, the attraction overcomes Pauli exclusion and the inverse compressibility of the gas goes to zero ($\chi_+^{-1} ({{\mathbf q}}\to 0,0)=0$). However, we find that the bilayer geometry can actually stabilize the collapse to form a new density-wave phase that is oriented along the $\phi=0$ direction (Fig. \[fig:phase\_diag\]). Referring to Fig. \[fig:schematic\], we see that this stripe phase has a longer wavelength than the $\phi=\pi/2$ one and is dependent on geometry. Indeed, we find that $q_c$ smoothly decreases with increasing $\theta$, reaching $q_c=0$ at $\theta=\pi/2$, where the intralayer attraction always appears to cause collapse at a fixed $U_c$. Away from $\theta = \pi/2$, we find that the $\phi=0$ stripe phase has $q_c \sim 1/d$ in the limit $d \to \infty$, which is reminiscent of the behavior of charge density waves in electron-hole bilayers. The $\phi=0$ stripe also features a nontrivial phase shift $\eta$ between the density waves in each layer. At the stripe transition, it can be shown that $$e^{i\eta} = - \frac{v_{12} ({{\mathbf q}}) [1-G_{12}({{\mathbf q}})]}{|v_{12}
({{\mathbf q}}) [1-G_{12}({{\mathbf q}})]|}\; .$$ When $v_{12}$ and $G_{12}$ are real, like for the $\phi=\pi/2$ stripe phase, then $e^{i\eta} = 1$ and the density waves in each layer are in phase, as mentioned previously. However, $v_{12}$ is complex for the $\phi=0$ stripe phase and thus the density waves are generally shifted with respect to one another. Indeed, as shown below, the interlayer correlations are small in this phase, i.e. $|G_{12}|\ll 1$, therefore the phase shift corresponds to $\eta \simeq 2\theta$ (see insets of Fig. \[fig:schematic\]) and is essentially independent of $k_F d$.
The existence of two stripe phases leads to a new quantum phase transition where the stripes change their orientation. In Fig. \[fig:phase\_diag\], this occurs beyond the critical point $\theta_c \simeq 0.75$ and $U_c \simeq 15.65$ where the two stripe phase boundaries meet. Here, when $k_F d$ is fixed, the transition can be accessed by changing the tilt angle $\theta$. Alternatively, one can fix $\theta
\lesssim \pi/4$, which is below the onset of collapse in the single layer, and vary $k_F d$, since we expect the critical angle $\theta_c$ to decrease with decreasing $k_F d$. Eventually, at $k_F d \simeq 1$, one enters the regime where the physics of bosonic interlayer dimers dominates.
![(Color online) Intra- and interlayer pair correlation functions $g_{ij} ({{\mathbf r}})$ for increasing values of the interaction strength $U$ towards the $\phi = \pi/2$ stripe phase ($\theta=0$ top panel) and the $\phi = 0$ phase ($\theta=1.1 \simeq 0.35\pi$ bottom panel).[]{data-label="fig:correlation"}](correlation_functions.pdf){width="0.9\linewidth"}
Further insight into the stripe phases can be gained by examining the intra- and interlayer pair correlation functions $g_{ij}({{\mathbf r}})$ on the liquid side of the transition. For the $\phi=0$ stripe phase (bottom panel of Fig. \[fig:correlation\]), we find that neither pair correlation function changes significantly as we approach the transition. In particular, $g_{11}({{\mathbf r}})$ only deviates slightly from the non-interacting case ($U=0$), while $g_{12}({{\mathbf r}})$ slowly oscillates close to one, indicating that interlayer correlations are small, i.e. $|G_{12}|\ll 1$. This suggests that we can accurately model the $\phi=0$ stripe phase using exchange correlations only. To this end, we construct a simplified STLS theory where we take $G_{12}({{\mathbf q}}) = 0$ and then determine the intralayer local field factor $G_{11} ({{\mathbf q}})$ by feeding the non-interacting intralayer structure factor $S_{0}(q)
= -\frac{\hbar}{\pi n} \int_0^{\infty} d\omega \Pi(q,\omega)$ into Eq. . We then evaluate the phase boundary for the $\phi=0$ stripe within this simplified HF theory. Referring to Figs. \[fig:phase\_diag\] and \[fig:schematic\], we see that we obtain very good agreement with the full STLS calculation, particularly when $U$ and $\theta$ are not too large so that the intralayer $p$-wave pairing correlations are expected to be weakest [@bruun2008; @sieberer2011]. In addition, the collapse instability at $\theta = \pi/2$ is unaffected by the other layer since the interlayer Hartree term is zero for ${{\mathbf q}}=0$. We expect one can obtain quantitatively similar results for the $\phi=0$ stripe phase using the conserving HF approximation [^1]. By contrast, for the $\phi = \pi/2$ stripe phase (top panel of Fig. \[fig:correlation\]), we see that correlations beyond exchange become substantial, resulting in a pronounced “correlation hole” for $g_{11} ({{\mathbf r}})$ with increasing interaction strength, like in the single-layer case [@parish2012] — note that the STLS procedure does not guarantee that $g_{11}$ is always positive [@vignale_book], and thus we sometimes obtain unphysical negative values. The intralayer correlations also develop a substantial $\phi$ anisotropy as we near the stripe transition. At the same time, the interlayer pair correlation function $g_{12}({{\mathbf r}})$ increases at ${{\mathbf r}}=0$, a feature that has been ascribed to an imminent bound-state instability [@Liu1998].
Indeed, the attractive part of $v_{12}({{\mathbf q}})$ always yields a two-body bound state composed of one fermion from each layer [@klawunn2010; @volosniev2011]. Hence, any liquid phase in the phase diagram contains pairing correlations and must therefore be superfluid (Fig. \[fig:phase\_diag\]). When the size of these interlayer dimers $l_B$ is smaller than the interparticle spacing, i.e. $l_B\ll 1/k_F$, then the system is better described in terms of bosonic dimers and our approach of analyzing density instabilities of the Fermi liquid phase is unlikely to be accurate. To estimate this region of phase space where bosonic behavior dominates, we solve the two-body problem, $E \psi_{{{\mathbf k}}} = \frac{\hbar^2 {{\mathbf k}}^2}{m} \psi_{{{\mathbf k}}} + \int \frac{d{{\mathbf k'}}}{(2\pi)^2} v_{12} ({{\mathbf k}}-{{\mathbf k'}}) \psi_{{{\mathbf k'}}}$, where $\psi_{{{\mathbf k}}}$ is the two-body wave function in terms of relative coordinates and $E$ is the dimer binding energy. We estimate the dimer size as $l_B \sim \hbar/\sqrt{m|E|}$ and then determine the “critical” line $k_F l_B = 1$ for the bosonic regime, as plotted in Fig. \[fig:phase\_diag\] (shaded region). We see that this region is well separated from the stripe phase boundaries and thus we expect our results to be reasonable for $k_F d =2$. However, the presence of bosonic dimers hastens the onset of Wigner crystallization: QMC calculations [@astrakharchik_07; @buchler2007] predict that perpendicularly-aligned bosons will crystallize at $U\simeq 8$. For increasing $\theta$, the interlayer dimer becomes more weakly bound until eventually the fermions preferentially form pairs within the same layer instead. With decreasing $k_F d$, however, the regime of interlayer bosons expands so that it encroaches on our predicted stripe transitions for $k_Fd \simeq 1$ and takes us beyond the scope of this letter. Our predicted stripe phases should be accessible experimentally with cold dipolar gases. In particular, the bilayer distance $k_F d=2$ can be achieved for a typical 2D density $n \sim 1.3 \times 10^8$ cm$^{-2}$ and layer spacing $d=500$ nm. Polar molecules such as LiCs [@Carr2009] have dipole moments $D\sim 0.35-1.3$ Debye (corresponding to $U\sim 1-14$), which allows one to explore both $\phi=0$ and $\phi=\pi/2$ stripe phases. Furthermore, the newly explored NaK molecules [@Wu2012] allow one to reach even larger values of the interaction strength ($D\sim 2.7$ Debye and $U\sim 28$).
We are grateful to J. Levinsen, P. Littlewood, and N. Zinner for useful discussions. MMP acknowledges support from the EPSRC under Grant No. EP/H00369X/1. FMM acknowledges financial support from the programs Ramón y Cajal and Intelbiomat (ESF). We also acknowledge TCM group (Cambridge) for hospitality.
[^1]: Note that the $\phi=0$ stripe phase was not observed in Ref. [@block2012] since they focused on the $\phi=\pi/2$ instability and ignored the imaginary part of $v_{12}({{\mathbf q}})$.
---
abstract: 'We present a Bayesian Voronoi image reconstruction technique (VIR) for interferometric data. Bayesian analysis applied to the inverse problem allows us to derive the *a-posteriori* probability of a novel parameterization of interferometric images. We use a variable Voronoi diagram as our model in place of the usual fixed pixel grid. A quantization of the intensity field allows us to calculate the likelihood function and *a-priori* probabilities. The Voronoi image is optimized including the number of polygons as free parameters. We apply our algorithm to deconvolve simulated interferometric data. Residuals, restored images and $\chi^2$ values are used to compare our reconstructions with fixed grid models. VIR has the advantage of modeling the image with few parameters, obtaining a better image from a Bayesian point of view.'
author:
- 'G. F. Cabrera, S. Casassus'
- 'N. Hitschfeld'
title: |
Bayesian Image Reconstruction\
Based on Voronoi Diagrams
---
Introduction
============
Astronomical interferometric data result from the addition of instrumental noise to the convolution of the sky image and the instrumental response. Because of incomplete sampling in the $(u, v)$ plane, obtaining sky images from interferometric data is an instance of the inverse problem, and involves reconstruction algorithms.
The CLEAN method consists of modeling the side-lobe disturbances and subtracting them iteratively from the dirty map [@CLEAN]. The CLEAN method works well for low-noise data and simple sources, but if the source has many complex features, or if the data are too noisy, CLEAN will perform only a few iterations, returning a noisy image [@CLEAN]. Another shortcoming is that CLEAN involves some ad-hoc parameters (the loop gain, stopping criteria, clean beam) that bias the final reconstruction, in the sense that CLEAN can give many different reconstructions for the same dataset. The maximum entropy method (MEM) finds the image that simultaneously best fits the data, within the noise level, and maximizes the entropy $S$. This is done by minimizing $$L_\mathrm{MEM} = \chi^2 - \lambda S,\label{eq:LMEM}$$ where, for the case of interferometric data, $\chi^2$ can be calculated as $$\chi^2 = \sum_{k=1}^{N_\mathrm{Vis}}\frac{||V_k^\mathrm{obs} -
V_k^\mathrm{mod}||^2}{\sigma_k^2},$$ where the sum runs over all the $N_\mathrm{Vis}$ visibilities, the symbol $||z||$ stands for the modulus of the complex number $z$ and $\sigma_k$ is the root mean square (rms) noise of the corresponding visibility. $\lambda$ is a control parameter and the entropy $S$ varies for different implementations . The entropy is used as a regularizing term in a degenerate inverse problem, when there are more free parameters than data. Different formulations for $S$ appear in the literature. Some examples are $\sum_i\ln(I_i)$, $\sum_iI_i\ln(I_i)$, $\sum_i\ln(p_i)$, $\sum_ip_i\ln(p_i)$, where $I_i$ is the specific intensity value at pixel $i$ and $p_i = I_i/\sum_iI_i$ .
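As an aside, both quantities are simple to evaluate once the model visibilities have been computed from the image through the instrumental response (that step is not shown); a minimal sketch of Eq. (\[eq:LMEM\]) with the blank-prior entropy $S=-\sum_i I_i\log(I_i/M)$ reads:

```python
import numpy as np

def chi2(V_obs, V_mod, sigma):
    """Chi-squared between observed and model visibilities (complex arrays)."""
    return np.sum(np.abs(V_obs - V_mod)**2 / sigma**2)

def L_mem(I, V_obs, V_mod, sigma, lam, M):
    """MEM merit function L_MEM = chi^2 - lambda * S, Eq. (eq:LMEM), with the
    blank-prior entropy S = -sum_i I_i ln(I_i / M); assumes I_i > 0."""
    S = -np.sum(I * np.log(I / M))
    return chi2(V_obs, V_mod, sigma) - lam * S
```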
Cornwell and Evans used MEM in the AIPS VM task. Their method makes some approximations that diagonalize the Hessian matrix required to optimize their merit function. They used an entropy of the form $S =
-\sum_iI_i\log{(I_i/ m_i)}$, where the sum extends over all the pixels $i$, $\{I_i\}_{i=1}^n$ is the model image and $\{m_i\}_{i=1}^n$ is a prior image. However, the neglect of the side-lobe contribution to the Hessian may lead the optimization to local minima that still bear instrumental artifacts. [@Casassus] implemented a MEM algorithm based on the conjugate gradient method, without the use of the Cornwell and Evans approximation. They used an entropy of the form $S = -\sum_iI_i\log{(I_i/ M)}$, where $\{I_i\}_{i=1}^N$ is the model image and $M$ is a small intensity value, i.e. they start with a blank image prior, with $M$ an intensity value much smaller than the noise. Bayesian analysis is a powerful tool for image reconstruction techniques. In this application, our goal is to find the most probable image by maximizing its *a-posteriori* probability. For Bayesian methods, the *a-priori* and likelihood distributions are needed. To derive the *a-priori* probability, the definition of an intensity quantum is needed. This quantum represents the minimum measurable intensity unit. The intensity in each pixel can be interpreted as a number of quanta $I_i = \sigma_\mathrm{q}N_i$, where $I_i$ is the intensity in pixel $i$, $\sigma_\mathrm{q}$ is the quantum size and $N_i$ the number of quanta in pixel $i$.
The authors of the Pixon algorithm used Bayesian analysis in their reconstruction method. They use a variable model and maximize $P(I,M|D)$, that is, the probability of the image $I$ and model $M$ given the data $D$. In their approach the model used to parameterize the image is a set of Gaussians which are used to average a pseudo-image. The pseudo-image starts as a maximum residual likelihood reconstruction and a local Gaussian pixon is assigned to each of its pixels. The number of pixons, and hence the number of free parameters, is reduced at each iteration.
Bayesian analysis has also been applied to interferometric data using a fixed pixel grid to parameterize the model image, with Gibbs sampling used to determine the posterior density distribution.
The most typical model used in astronomy to represent the sky brightness distribution consists of a pixel grid. A big disadvantage of this grid is that the number of pixels remains fixed as well as their size. Often, uniform pixel grids involve more free parameters than really needed to fit the data.
The purpose of this paper is to explore Bayesian reconstruction with image models based on Voronoi tessellations in place of the usual pixelated image. We call this new deconvolution method “Voronoi image reconstruction” (VIR, hereafter). The advantage of using Voronoi models is that it is possible to use a smaller number of free parameters, as required by Bayesian theory. Our purpose is not optimal CPU efficiency; we search for the optimal image and model from a Bayesian point of view.
We used the Cosmic Background Imager [CBI, @pad02] to illustrate our method. The CBI is a planar interferometer array with 13 antennas, each 0.9 m in diameter, mounted on a 6 m tracking platform. An example of CBI baselines is shown in Figure \[fig:baselines\]. The radius of the hole at the center of the $(u,
v)$ plane is the reciprocal of the minimum distance between two antennas, measured in wavelengths. The side-lobes of the CBI are caused mainly by this central hole in the $(u, v)$ baselines.
We briefly summarize the elements of Bayesian theory that determine the probability distributions concerning our problem (Section \[sec:Bayes\]). The new model based on Voronoi tessellations is described (Section \[sec:Voronoi\]), as well as optimization issues involved in our problem (Section \[sec:Optimization\]). We discuss implementation details such as the optimal quantum size and number of Voronoi polygons (Section \[sec:Implementation\]), compare reconstructions made with MEM and VIR (Section \[sec:Results\]) and finally summarize our results (Section \[sec:Conclusions\]).
Bayesian Theory {#sec:Bayes}
===============
An image model is required to parameterize the sky brightness distribution. The most typical model used in astronomy is a rectangular grid of uniform pixels. That configuration of pixels is the model $M$, and the distribution of brightness in the model is called an image $I$. We search for the image that represents as accurately as possible the visibility data $D$. The Bayesian image reconstruction approach, using a fixed model, tries to find the image that maximizes the probability $P(I | D, M)$, i.e. find the most probable image given the data and the model.
Using the Bayes theorem, we obtain $$P(I | D, M) = \frac {P (D| I, M)P(I | M)}{P (D | M)}.$$ Since the data is fixed, $P(D | M)$ is a constant in the problem when the model is not considered as a variable. Thus, the fixed image model optimization problem reduces to $$\max_I P(I | D, M) = \max_I P (D| I, M)P(I | M). \label{eq:P(I|D|M)}$$ The first term, $P (D|I, M)$ is called the likelihood, and measures how well our data represents our image. The second term, $P (I | M)$ is called the image prior, and gives the *a-priori* probability of the image given the model, i.e. how probable is the image given only the model.
In the case of having a variable model, what we would like to find is the image and model that maximize $P(I, M| D)$, i.e. find the most probable image and model given the data. In this case we find $$\begin{aligned}
P(I, M| D) & = & P(I|D,M)P(M|D)\nonumber\\
& = & \frac {P (D| I, M)P(I| M)P(M|D)}{P (D | M)}\nonumber\\
& = & \frac {P (D| I, M)P(I| M)P(M)}{P (D)}.
\end{aligned}$$ Since the data is fixed, $P(D)$ is constant in our problem. As we cannot privilege one model over another in the absence of image and data, $P(M)$ is the same for all models, so it is not important for our analysis. This way, our optimization problem reduces to $$\max_{I, M} P(I, M| D) = \max_{I, M} P (D| I, M)P(I | M).$$
Probability Distributions
-------------------------
Our data is a set of $N_\mathrm{Vis}$ observed visibilities $\{V_1^\mathrm{obs}, V_2^\mathrm{obs}, \cdots,
V_{N_\mathrm{Vis}}^\mathrm{obs}\}$. If we have a certain model $M$ and image $I$, we obtain model visibilities $\{V_k^\mathrm{mod}\}$ by simulating the interferometric observations over our image: $$V^\mathrm{mod}_k = V^\mathrm{mod}(u_k,v_k) = \int_{-\infty}^{+\infty} A(x,y)
I(x,y)\exp\left[2\pi i (u_kx+v_ky)\right]
\frac{dx\,dy}{\sqrt{1-x^2-y^2}} ~, \label{eq:vmodel}$$ where $\{u_k, v_k\}$ are the coordinates of baseline $k$ in the $(u,
v)$ plane and $A$ is the primary beam. We thus have a set of $N_\mathrm{Vis}$ model visibilities. Assuming that each visibility is independent from the others and Gaussian noise, the likelihood is $$\begin{aligned}
P (D|I, M) & = &
P(\{V_k^\mathrm{obs}\}_{k=1}^{N_\mathrm{Vis}}|\{V_k^\mathrm{mod}(I, M)\}_{k=1}^{N_\mathrm{Vis}})
= \prod_{k=1}^{N_\mathrm{Vis}} P (V_k^\mathrm{obs}|V_k^\mathrm{mod}) \nonumber\\
& = &
\prod_{k=1}^{N_\mathrm{Vis}}\frac{1}{2\pi\sigma_k^2}e^{-||V_k^\mathrm{obs}
- V_k^\mathrm{mod}||^2/2\sigma_k^2}.\end{aligned}$$ To obtain the image prior, $P (I | M)$, we calculate the statistical weight of a given distribution of counts . Consider a model consisting of $n$ cells. In the case of a traditional image, each pixel would be a cell. There is a number of $N$ quanta falling into these cells. These are intensity quanta of some size $\sigma_\mathrm{q}$. In the case of a pixelated image, the intensity in each pixel $i$ would be $I_i = \sigma_\mathrm{q} N_i$, where $I_i$ is the intensity in cell $i$. Each quantum could fall into any of the $n$ cells, so the total number of possible configuration for the $N$ quanta will be $n^N$. The probability of the image given the model is the probability of a certain state $\{N_1, N_2, \cdots, N_n\}$ that represents that image, where $N_i$ is the number of quanta in cell $i$. Consider a given image configuration defined by a particular distribution $\{N_i\}$. The image distribution is not changed in the $N!$ possible redistributions of counts between cells, provided each $N_i$ is constant. The $\prod_i N_i!$ swaps of counts within each cell keep the same image configuration. The model $M$ consists of the Voronoi diagram and the total number of quanta (i.e. n, the position of the generators and N), thus the *a-priori* probability is $$P (I | M) = P (\{N_i\}|n, N) = \frac{N!}{n^N\prod_i N_i!}.
\label{eq:apriori}$$ As explained above, $\sigma_\mathrm{q}$ is an intensity quantum. It is also possible to describe the number of quanta per cell using a flux quantum $\sigma_i^\mathrm{F}$, where $i$ is the index of the cell to which we associate the quantum. This flux quantum can be expressed in terms of the intensity quantum as $\sigma_i^\mathrm{F} =
\sigma_\mathrm{q}A_i$, where $A_i$ is the area of cell $i$. In this case, the number of quanta per cell is $N_i =
F_i/\sigma_i^\mathrm{F}$, where $F_i = I_i A_i$ is the flux of cell $i$. This leads to $N_i = I_i/\sigma_\mathrm{q}$, which is the same expression for $N_i$ obtained using the intensity quantum $\sigma_\mathrm{q}$. Using these cell-dependent flux quanta, the probability of a quantum falling into each cell will be $\frac{1}{n}$ for every cell, leaving the *a-priori* probability the same as Eq. \[eq:apriori\].
MEM and Natural Entropy
-----------------------
In Bayesian theory, for a fixed model, the image $I$ can be found by optimizing the *a-posteriori* probability: $$\begin{aligned}
\max_I P (I | D, M) & = & \min_I(-\ln{P (D| I, M)P(I | M)})\nonumber\\
& = & \min_I\sum_{k=1}^{N_\mathrm{Vis}}\frac{||V_k^\mathrm{obs} -
V_k^\mathrm{mod}||^2}{2\sigma_k^2} - \ln\bigg(\frac{N!}{n^N\prod_i
N_i!}\bigg)\nonumber\\
& = & \min_I\frac{1}{2}\chi^2 - S,\label{eq:funcL}\end{aligned}$$ where we have defined the natural entropy $S =
\ln\bigg(\frac{N!}{n^N\prod_i N_i!}\bigg)$. call the term $\ln{(N!/\prod_i N_i!)}$ the multiplicity prior. In the limit of large $N_i$, $$\begin{aligned}
S
& \simeq & N\ln\frac{N}{n} - \sum_i N_i\ln{N_i},\label{eq:SStirling}\end{aligned}$$ and it can be seen that the Bayesian method is very similar to MEM in the sense that we are adjusting the image to the data while maximizing an entropy of the form of Eq. \[eq:SStirling\]. VIR uses the natural entropy as a regularizing term.
A New Image Model based on Voronoi Diagrams {#sec:Voronoi}
===========================================
A Voronoi diagram is a division of the Euclidian plane into $n$ regions $\mathcal{V}_i$ defined by $n$ points $\vec{x_i}$ (called sites or generators) such that every coordinate $\vec{x}$ in the space belongs to $\mathcal{V}_i$ if and only if $||\vec{x} - \vec{x_i}|| <
||\vec{x} - \vec{x_j}||\ \forall\ j\neq i$. The result of the above definition is a set of polygons defined by the generators. Figure \[fig:Voronoi\] shows an example of a Voronoi diagram. For further details on Voronoi diagrams see [@Voronoi].
We propose a 2D Voronoi diagram in place of the usual pixelated, uniform grid, image as our model. We associate an intensity $I_i$ to each of these polygons. The advantage of using a Voronoi diagram is that we can use just as many cells (i.e. free parameters) as the data requires. Our optimization parameters will be the position of each generator $\vec{x_i} = (x_i, y_i)$, and the intensity at each cell, $I_i$.
With our new model $M$ consisting of $n$ generators ($3\times n$ parameters, $x_i, y_i$ and $I_i$ for each generator), we can vary the number of free parameters as required by the optimization problem. We can see in equation (\[eq:funcL\]) that the entropy $S$ increases as the number of cell $n$ decreases.
Optimization {#sec:Optimization}
============
The optimization problem can be seen as a maximization of the *a-posteriori* probability $\max_{I, M} P (I, M | D)$, or equivalently as a minimization of the more convenient merit function $L = \frac{1}{2}\chi^2 - S$. The conjugate gradient method (CG) is often used for minimization problems where derivatives can be easily calculated. Though it is usually fast in convergence, CG has the problem of converging on local minima depending on the initial condition. The use of other optimization algorithms is postponed to future work.
The CG method searches parameters space using the gradient of the function to be minimized. The derivatives of this function are $$\begin{aligned}
\frac{\partial L}{\partial x} & = & \frac{1}{2}\frac{\partial \chi^2
}{\partial x} - \frac{\partial S}{\partial x},\\
\frac{\partial \chi^2}{\partial x} & = & 2\sum_{k=1}^{N_\mathrm{Vis}}
\frac{1}{\sigma_k^2} \mathrm{Re}\left((V_k^{\mathrm{mod}}
- V_k^{\mathrm{obs}})^* \frac{\partial
V_k^{\mathrm{mod}}}{\partial x}\right) \label{dLdx},\end{aligned}$$ where $x$ is any of the optimization parameters ($x_i$, $y_i$ or $I_i$). The derivatives of the visibilities with respect to the position $\vec{x}_i = (x_i, y_i)$ of the $i$ generator are $$\begin{aligned}
\frac{\partial V_k^{\mathrm{mod}}}{\partial x_i} & = &
\sum_{j\in J_i} \bigg[(I_i - I_j)
\sum_{l|\mathrm{pixel~}l\epsilon a_{ij}}A_l\Delta t_l
(M_xt_l + b_x)e^{(t_lc_2 + s_0c_1)}\bigg],\\
\frac{\partial V_k^{\mathrm{mod}}}{\partial y_i} & = &
\sum_{j\in J_i} \bigg[(I_i - I_j)
\sum_{l|\mathrm{pixel~} l\epsilon a_{ij}}A_l\Delta t_l
(M_yt_l + b_y)e^{(t_lc_2 + s_0c_1)}\bigg],\end{aligned}$$ where $I_i$ is the intensity in cell $i$, $J_i$ is a set of the indices of the polygons adjacent to $\mathcal{V}_i$, $a_{ij}$ is the edge which divides polygons $\mathcal{V}_i$ and $\mathcal{V}_j$, $l$ sums over the pixels which intersect $a_{ij}$, $A$ is the CBI primary beam. For further details see Sec. \[ap:derivatives\].
The derivative of the visibilities with respect to the intensity of each cell $I_i$ is $$\frac{\partial V_k^{\mathrm{mod}}}{\partial I_i} =
\frac{\sin{(\pi u_k\Delta x)}\sin{(\pi v_k\Delta y)}}{\pi^2u_kv_k}
\sum_{\textrm{\scriptsize{pixels }}l\epsilon \mathcal{V}_i}A_l
e^{2\pi i(u_kx_l+v_ky_l)} \label{eq:dVdI},$$ where $\vec{k}_k = (u_k, v_k)$ is the baseline corresponding to the pair of antennas $k$, $\Delta x$ and $\Delta y$ are the pixel width and height, and the sum extends over all the pixels inside $\mathcal{V}_i$.
The entropy only depends of the intensities $I_i$, so $\frac{\partial
S}{\partial x_i} = \frac{\partial S}{\partial y_i} = 0$, then (see Sec. \[ap:derivativesdS\]) $$\frac{\partial S}{\partial I_i} = \frac{1}{\sigma_\mathrm{q}}
(\sum_{k=1}^{N}\frac{1}{k} - \ln{n} - \sum_{k=1}^{N_i}\frac{1}{k}).$$
VIR Design and Implementation {#sec:Implementation}
=============================
We have designed, and implemented in c++, VIR with 6 modules which include algorithms for:
- the generation of the Voronoi diagram
- calculation of model visibilities
- calculation of the merit function $L$ to be optimized as well as its derivatives
- fitting a Voronoi diagram to an image
- the CG method
- the optimization of the number of polygons
VIR uses the CG method from [@NumericalRecipes] and searches for the position and intensities of the Voronoi polygons, ${x_i, y_i,
I_i}$, that minimize our merit function $L$. The CG method modifies the intensities and also moves the positions of the Voronoi generators. This causes the shape of the Voronoi polygons to change as well. A general problem with CG is that it usually converges on local minima. For VIR in particular, though Voronoi polygons intensities adjust quite fine, the positions of the generators are difficult to modify substantially. The VIR parameter space is smooth enough in intensity space to converge to a good solution. But the parameter space in cell generator positions is very structured, and CG is quickly stuck on local minima.
Due to the fact that CG easily falls into local minima, we needed a good approximation for the initial Voronoi diagram. For this purpose we used a pixelated version of the Bayesian algorithm, where the model was a uniform grid. We decided to do a pure $\chi^2$ (maximum likelihood, ML) reconstruction and use the fifth CG iteration as our starting image. We chose this particular iteration because on inspection the modeled images were still smooth. Pure $\chi^2$ reaches convergence with noisy images, where the true image is unrecognizable. We then fitted a Voronoi diagram to the image (see Sec. \[ap:fitting\]) and ran CG using the positions and intensities of the generators as our free parameters, which led to our final reconstruction. Truncation to a level of $10^{-5}$ quanta was used to enforce positivity.
An important issue to consider is the size of the quantum $\sigma_\mathrm{q}$. treat $\sigma_\mathrm{q}$ as a free parameter. But, as we now explain, $\sigma_\mathrm{q}$ was held constant in this implementation of VIR. We treat the number of quanta per cell as a continuous variable in order to use the CG method. Entropy is maximized at $\sigma_\mathrm{q} = \infty$, where, for a given configuration of intensities $\{I_i\}$, $N = 0$ and $S = 0$. For every other value of $N$, the entropy will be negative. This means that even for large $\sigma_\mathrm{q}$, the intensities $I_i =
\sigma_\mathrm{q}N_i$ can have reasonable values (using small $N_i$). Figure \[fig:SvsN\] shows $S$ as a function of $N$ for $51$ Voronoi generators and 3 different intensity distributions using the model tessellation of Figure \[fig:reconstruccion\]c. We considered: 1- the VIR intensities of Figure \[fig:reconstruccion\]c, 2- a uniform intensity distribution image ($N_i = \frac{N}{n}$ $\forall$ $i$), 3- a spike where all $N$ are only in one cell ($N_i = N$, $N_j
=0$ $\forall$ $j\neq i$). The curves of Figure \[fig:SvsN\] are obtained by keeping the intensities fixed and modifying $\sigma_\mathrm{q}$ in order to obtain different $N$. It can be seen on Figure \[fig:SvsN\] that the entropy is maximized at $N = 0$, independently of the intensities $\{I_i\}$ of the model, where the optimal value of $\sigma_\mathrm{q} = \infty$ is achieved if the number of quanta per cell is treated as a continuous variable. If the number of quanta per cell were discrete variables, as in , the choice of a big $\sigma_\mathrm{q}$ would admit only zero values for every cell. Otherwise, if one or more quanta fell in a given cell, the intensity of that cell would diverge as $\sigma_\mathrm{q}$ for arbitrarily large $\sigma_\mathrm{q}$, causing a big $\chi^2$ value. Therefore, in our continuous optimization the intensity quantum must be determined a-priori.
In the Bayesian description of the entropy we count events that fall in each cell. It seems reasonable to take the noise level as the minimum value of intensity we can distinguish. So, $\sigma_\mathrm{q}$ should approximate the estimated thermal noise in the naturally weighted dirty map. The definition of the weighted dirty map [e.g. @Briggs] is $$\begin{aligned}
I^\mathrm{D} (x, y) \equiv
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}W(u, v)V(u, v)e^{-2\pi
i(ux + vy)}dudv,\\
W(u, v) = \frac{1}{\sum_kw_k} \sum_kw_k\delta(u-u_k, v-v_k),\end{aligned}$$ where the sums extend over all visibilities, $w_k$ are the weights given to visibility $k$ and $\delta$ is the two-dimensional Dirac delta function. Propagating the thermal noise, we get for the standard deviation of the dirty map $$\sigma_\mathrm{rms}^\mathrm{D} =
\sqrt{\frac{\sum_kw_k^2\sigma_k^2}{(\sum_kw_k)^2}}, \label{eq:sigmaD}$$ where $\sigma_k$ are the visibilities standard deviations. To take into account model pixels correlated by the interferometer beam, we should multiply the previous expression by $\sqrt{N_\mathrm{beam}}$, where $N_\mathrm{beam}$ is the number of pixels inside a beam pattern. This leads to $$\sigma_\mathrm{rms} = \sqrt{\frac{\sum_kw_k^2\sigma_k^2}
{(\sum_kw_k)^2}}\sqrt{N_\mathrm{beam}}.$$ For natural weights, $\sigma_k^2 = \frac{1}{w_k}$, $$\sigma_\mathrm{rms} = \sqrt{\frac{N_\mathrm{beam}}{\sum_kw_k}} =
\sqrt{\frac{N_\mathrm{beam}}{\sum_k\frac{1}{\sigma_k^2}}}.$$ We calculated the noise with natural weighting, $w_k =
\frac{1}{\sigma_k^2}$, because this is the weight we give to each individual visibility data in the optimization of the merit function.
Once we have the value of $\sigma_\mathrm{q}$ we search for the optimal number of cells $n$. In Figure \[fig:Lvsn\] we plot the optimal merit function for different $n$ and $\sigma_\mathrm{q}$. These reconstructions were made over a simulation of CBI observations on a mock sky image (Figure \[fig:reconstruccion\]a). We averaged over 100 reconstructions with different realizations of Gaussian noise. The average curves shown in Figure \[fig:Lvsn\], start with $n = 10$ and end with $n = 100$ for even $n$. One single reconstruction for all $n$ took about two hours using an AMD Athlon64 XP3000 processor with 1GB of DDR RAM at 333 MHz, so the 300 reconstructions took about $25$ CPU days, but we distributed the work in six computers, so it took about $5$ real days in total. It can be seen that for a signal to noise ratio (SNR) of $\sim 52$, on average, the optimal number of polygons $n$ is between 50 and 55. When $\sigma_\mathrm{q}$ is diminished to $\frac{1}{10}\sigma_\mathrm{q}$, on average, the optimal merit function is found at $n$ close to $30$. For $\sigma_\mathrm{q} = 10\sigma_\mathrm{q}$, the optimal $n$ is found between $80$ and $90$. It can be seen that as we increase the value of $\sigma_\mathrm{q}$ we reach lower values for our function, as discussed above. Furthermore, the optimal number of polygons increases.
Example Reconstruction {#sec:Results}
======================
Mock Dataset
------------
The mock sky image we used for simulations is a $256\times 256$ image consisting of three Gaussians and a rectangle. Figure \[fig:reconstruccion\]a shows this image on a $128\times128$ pixel field. Pixels are $0.75'\times0.75'$, while the CBI’s primary beam is of $45'$ FWHM (60 pixels), so most of the emission lies under the beam. We simulated a CBI observation of 3620 visibilities over this image and added Gaussian noise to the visibilities in order to reach a SNR of $\sim52$. This SNR was calculated by taking the maximum intensity from the dirty map using natural weights, and using the noise $\sigma_\mathrm{rms}^\mathrm{D}$ (see Eq. \[eq:sigmaD\]). Simulation of the CBI observations is performed with the MockCBI program (Pearson 2000, private communication), which calculates the visibilities $V(u,v)$ on the input images $I(x,y)$ with the same $uv$ sampling as a reference visibility dataset (Eq. \[eq:vmodel\]). Thus MockCBI creates the visibility dataset that would have been obtained had the sky emission followed the true image. Figure \[fig:reconstruccion\]b shows the dirty map calculated over these simulated visibilities using the DIFMAP package [@she97]. The CBI’s primary beam is drawn as a dashed circle. The secondary side-lobes due to the central discontinuity in $u$-$v$ coverage can be distinguished in Figure \[fig:reconstruccion\]b at a level comparable to the true emission.
MEM Reconstruction
------------------
The VIR method was compared with the MEM algorithm described in [@Casassus]. To fit the model image to the observed visibilities, MEM calculates the model visibilities required by its merit function $L_\mathrm{MEM}$. The model visibilities are those obtained by a simulation of CBI observations had the sky followed the model image . The free-parameters of our MEM model are the pixels in the model $64\times64$ image. The model functional we minimize is $L_\mathrm{MEM} = \chi^2 - \lambda S$, with the entropy $S= - \sum_i
I_i \log I_i/M$, where $M$ is a default pixel value well below the noise level, and $\{I_i\}_{i=1}^{N}$ is the model image. We started with the fifth iteration of a pure $\chi^2$ reconstruction ($\lambda =
0$) as initial condition for the CG minimization. This is the same ML initial condition used in our VIR method. Figure \[fig:reconstruccion\]g shows the reconstructed image using $\lambda
= \frac{100}{\sigma_\mathrm{rms}}$ and $M =
10^{-2}\sigma_\mathrm{rms}$ inset on a larger $128\times128$ image [^1].
VIR Reconstruction
------------------
The MEM algorithm described above requires the prior assignment of the $\lambda$ and $M$ parameters as well as the entropy formula. In contrast, our VIR algorithm is free from such arbitrary parameters (provided the optimal $\sigma_\mathrm{q}$ is indeed equal to $\sigma_\mathrm{rms}$). For our VIR method, we only need to find the number of polygons to be used. In order to find the optimal number of polygons we reconstructed with different numbers of generators in a range covering each natural number from $n = 6$ to $n = 100$. We found a minimum at $n = 51$. Figure \[fig:Lvsn\] summarizes this search. The whole search for a particular realization of noise took about 10 hours on the AMD Athlon64 XP3000 processor with 1GB of DDR RAM at 333 MHz. The VIR reconstruction using 51 polygons is shown in Figure \[fig:reconstruccion\]c, where the Voronoi cells have also been drawn. Figure \[fig:reconstruccion\]d shows the same model but without drawing the Voronoi mesh.
Results
-------
The quality of each reconstruction can be assessed by visual inspection, comparing the VIR and MEM model images with the true image. The MEM model looks similar to the true image but is noisy. The density of Voronoi generators in the VIR model is greater where there is more emission in the true image, approximating the true image with only a few polygons. We calculated $\chi^2_\mathrm{im} =
\sum_i(I_i^\mathrm{mod} - I_i^\mathrm{true})^2$, where $I_i^\mathrm{mod}$ is the intensity at pixel $i$ of the model image (MEM or VIR), $I_i^\mathrm{true}$ is the intensity at pixel $i$ of the true image, and the sum extends over all pixels in the images. $\chi^2_\mathrm{im}$ gives a measure of how well the model fits the true image. It can be seen in Table \[table:funciones\] that the VIR reconstruction has a better $\chi^2_\mathrm{im}$ than MEM, showing that the VIR model is closer to the true image than the MEM model.
[crrrrr]{} MEM & 7354.85 & 1.016 & 12192.6 & 0.001608\
VIR & 7221.04 & 0.997 & 3753.28 & 0.001396\
Figures \[fig:reconstruccion\]e and \[fig:reconstruccion\]h show the VIR and MEM models residuals. Residual images are the dirty map of the residuals of the visibilities, calculated over the optimal model visibilities. It can be noted on Figure \[fig:reconstruccion\]e that the VIR residuals are very good, showing only noise. On the other hand, in the MEM residuals (Figure \[fig:reconstruccion\]h) the object shape can clearly be distinguished as well as the CBI’s side-lobes. The object seems to be more compact in the model than in its MEM residuals; as expected these residuals are convolved with the synthetic beam.
Restored images are shown in Figures \[fig:reconstruccion\]f and \[fig:reconstruccion\]i. These images are obtained by convolving the models with a Gaussian point spread function (PSF) given by DIFMAP and adding the dirty map of the residuals visibilities. On Figures \[fig:reconstruccion\]f and \[fig:reconstruccion\]i it can be assessed that VIR produces improved restored images relative to MEM. The VIR restored image is similar to that expected given the instrumental noise: it approximates the true image convolved with a Gaussian PSF plus a uniform noise level. In the MEM restored image, on the other hand, the CBI side-lobes can still be distinguished.
The number of optimization parameters in MEM are $64\times64 = 4096$, while the VIR method has only $51$ triplets (cell’s $(x, y)$ position and intensity) i.e. $153$ free parameters. This smaller number of parameters causes the Bayesian entropy to be greater than the pixelated version, obtaining a smaller value for our merit function $L$ to be minimized.
Table \[table:funciones\] also shows $\frac{\chi^2}{n_\mathrm{data}}$ values, where $n_\mathrm{data}$ is the number of data points ($3620\times 2$ in our case). A good reconstruction should have a $\frac{\chi^2}{n_\mathrm{data}}$ value close to $1$. It can be seen that the VIR model gives a value of $\frac{\chi^2}{n_\mathrm{data}}$ closer to $1$ than the MEM reconstruction.
Conclusions {#sec:Conclusions}
===========
We have introduced a Bayesian Voronoi image reconstruction (VIR) technique for interferometric data where the image is represented by a Voronoi tessellation in place of the usual pixelated image. The advantage of Voronoi models is that we can use a smaller number of free parameters, as required by the Bayesian analysis of a discretized intensity field. Our purpose is not optimal CPU efficiency; we search for the optimal image and model from a Bayesian point of view. The free parameters of our model are the Voronoi generators positions $(x_i, y_i)$ and intensities $I_i$. The following points summarize our work:
- We discretized the intensity field in order to calculate *a priori* probabilities. We defined a quantum intensity value $\sigma_\mathrm{q}$ such that $I_i = \sigma_\mathrm{q} N_i$, where $I_i$ is the intensity at cell $i$ and $N_i$ the number of quanta in cell $i$.
- We calculated the analytical derivatives required by the conjugate gradient and cross checked them by finite differences. Because the parameter space in cell generators positions is very structured, the positions of the Voronoi generators are difficult to change. As initial condition we took a Voronoi tessellation adjusted to an interrupted maximum likelihood reconstruction.
- We simulated a CBI observation over a true image and reconstructed sky images from this mock visibility dataset using MEM and VIR.
- We defined the value of $\sigma_\mathrm{q}$ as the estimated noise of the dirty map and searched for the optimal number of Voronoi polygons for our example dataset.
- We finally compared the MEM and VIR models, residuals and restored images. The VIR model is closer to our true image than the MEM model. Residuals and restored images are also better in VIR than in MEM. We found that VIR model visibilities give a better fit to the data than MEM, in the sense that $\chi^2$ is closer to its expected value.
We are grateful to Tim Pearson for advice on FFTs and the use of MOCKCBI. G.F.C. and S.C. acknowledge support from FONDECYT grant 1060827, and from the Chilean Center for Astrophysics FONDAP 15010003.
Derivatives {#ap:derivatives}
===========
Our merit function for minimization is $$\begin{aligned}
L & = &
\frac{1}{2}\sum_{j=1}^{N_\mathrm{Vis}}\frac{||V_j^{\mathrm{mod}}
- V_j^{\mathrm{obs}}||^2}{\sigma_j^2}
-\ln\left(\frac{N!}{n^N \prod_{i=1}^{n}N_{i}!}\right)\\ & = &
\frac{1}{2}\chi^2 - S.\end{aligned}$$ So, the derivative of $L$ with respect to any variable $x$ is $$\frac{\partial L}{\partial x} = \frac{1}{2}\frac{\partial
\chi^2}{\partial x} - \frac{\partial S}{\partial x}$$
Calculation of the Derivatives of $\chi^2$
------------------------------------------
$\chi^2$ derivatives with respect to any variable $x$ can be obtain as follows $$\begin{aligned}
\frac{\partial}{\partial x} \frac{1}{2}\chi^2
& = & \frac{\partial}{\partial x}
\left(\frac{1}{2}\sum_{k=1}^{N_\mathrm{Vis}}\frac{||V_k^{\mathrm{mod}} -
V_k^{\mathrm{obs}}||^2}{\sigma_k^2}\right)\nonumber\\
& = & \sum_{k=1}^{N_\mathrm{Vis}} \frac{1}{\sigma_k^2}
\left(\mathrm{Re}(V_k^{\mathrm{mod}} - V_k^{\mathrm{obs}})
\mathrm{Re}\left(\frac{\partial V_k^{\mathrm{mod}}}{\partial x}\right)
+ \mathrm{Im}(V_k^{\mathrm{mod}} - V_k^{\mathrm{obs}})
\mathrm{Im}\left(\frac{\partial V_k^{\mathrm{mod}}}{\partial x}\right)
\right),\nonumber\\
&& \label{eq:dLdI}\end{aligned}$$ where its necessary to calculate the model visibilities derivatives with respect to $x$.
### Calculation of $\frac{\partial V_k^{\mathrm{mod}}}{\partial I_i}$
In our Voronoi tessellation representation of the sky image $$\label{eq:VisDiscreta}
V(\vec{k}) = \sum_i^{N_\mathrm{V}} I_i\int_{\mathcal{V}_i} A(\vec{x}) e^{2\pi
i\vec{k}\vec{x}}d\vec{x},$$ where $N_\mathrm{V}$ is the number of polygons, $\mathcal{V}_i$ is polygon $i$ and $I_i$ its intensity. We neglected the $\sqrt{1 - x^2 -
y^2}$ term which is close to $1$, but it can easily be included in $A(\vec{x})$. After derivation and defining $f_k(\vec{x})\equiv
A(\vec{x}) e^{2\pi i\vec{k_k}\vec{x}}$ we obtain $$\begin{aligned}
\frac{\partial V_k^{\mathrm{mod}}}{\partial I_i} & = &
\int\int_{\mathcal{V}_i} f_k(\vec{x})d^2x, \\
& = & \frac{\sin{(\pi u_k\Delta x)}\sin{(\pi v_k\Delta y)}}{\pi^2u_kv_k}
\sum_{\textrm{\scriptsize{pixels }}l\epsilon \mathcal{V}_i}A_l
e^{2\pi i(u_kx_l+v_ky_l)}, \\
& \simeq & \Delta x\Delta y
\sum_{\textrm{\scriptsize{pixels }}l\epsilon \mathcal{V}_i}A_l
e^{2\pi i(u_kx_l+v_ky_l)}\end{aligned}$$ for small $\Delta x$ and $\Delta y$.
### Calculation of $\frac{\partial V_k^{\mathrm{mod}}}{\partial x_i}$ and $\frac{\partial V_k^{\mathrm{mod}}}{\partial y_i}$
To evaluate $\frac{\partial V_k}{\partial x_i}$ we move the generator $\vec{x}_i$ an infinitesimal quantity $\delta_x$ parallel to the $\hat{x}$ axis as in Figure \[fig:VoronoiDelta\]. We will calculate $$\frac{\partial V_k}{\partial x_i} =
\lim_{\delta_x\to 0}\frac{\Delta V}{\delta_x},\label{eq:DerivadaLimite}$$ where $\Delta V_k = V_k(\vec{x}_1, \cdots,
\vec{x}_i + \vec{\delta_x}, \cdots, \vec{x}_{N_V}) -
V_k(\vec{x}_1, \cdots, \vec{x}_i, \cdots, \vec{x}_{N_V})$.
It can be seen in Figure \[fig:VoronoiDelta\] that when moving the generator $\vec{x}_i$, the only polygons modified are $\mathcal{V}_i$ and its neighbors. Using this, Eq. \[eq:VisDiscreta\] leads to $$\begin{aligned}
\Delta V_k & = & I_i'\int_{\mathcal{V}_i'}f_k(\vec{x})d\vec{x} -
I_i\int_{\mathcal{V}_i}f_k(\vec{x})d\vec{x}\nonumber\\
& & + \sum_{j\in J_i}\bigg(I_j'\int_{\mathcal{V}_j'}f_k(\vec{x})d\vec{x} -
I_j\int_{\mathcal{V}_j}f_k(\vec{x})d\vec{x}\bigg),\label{eq:deltaVis}\end{aligned}$$ where $\mathcal{V}_i$ is the polygon generated by $\vec{x}_i$ before moving, $\mathcal{V}_i'$ is the same polygon after moving $\vec{x}_i$, $J_i$ is the set of indices of the polygons that are neighbors to $\mathcal{V}_i$ and $J_i'$ is the set of indices of the polygons that are neighbors to $\mathcal{V}_i'$.
It can be seen in Figure \[fig:VoronoiDelta\] that $$\begin{aligned}
\mathcal{V}_i = (\mathcal{V}_i\cap \mathcal{V}_i')\cup(\mathcal{V}_i\setminus \mathcal{V}_i\cap \mathcal{V}_i'), & \mathcal{V}_i' =
(\mathcal{V}_i\cap \mathcal{V}_i')\cup(\mathcal{V}_i'\setminus \mathcal{V}_i\cap \mathcal{V}_i'),\\
\mathcal{V}_j = (\mathcal{V}_j\cap \mathcal{V}_j')\cup(\mathcal{V}_i'\cap \mathcal{V}_j), & \mathcal{V}_j' = (\mathcal{V}_j\cap
\mathcal{V}_j')\cup(\mathcal{V}_i\cap \mathcal{V}_j'),\end{aligned}$$ so, Eq. \[eq:deltaVis\] is $$\begin{aligned}
\Delta V_k & = & (I_i'-I_i)\int_{\mathcal{V}_i'\cap
\mathcal{V}_i}f_k(\vec{x})d\vec{x}\nonumber\\
&& + \sum_{j\in J_i}\bigg[(I_i' - I_j)\int_{\mathcal{V}_i'\cap
\mathcal{V}_j}f_k(\vec{x})d\vec{x} + (I_j' - I_i)\int_{\mathcal{V}_i\cap
\mathcal{V}_j'}f_k(\vec{x})d\vec{x}\bigg].\end{aligned}$$ In our case the cells’ intensities don’t depend of the position of the generators, so we obtain $$\label{eq:DeltaVis}
\Delta V_k = \sum_{j\in J_i}\bigg[(I_i-I_j)\bigg(\int_{\mathcal{V}_i'\cap
\mathcal{V}_j}f_k(\vec{x})d\vec{x} - \int_{\mathcal{V}_i\cap
\mathcal{V}_j'}f_k(\vec{x})d\vec{x}\bigg)\bigg].$$
It can be seen in Figure \[fig:VoronoiDelta\] that to obtain $\Delta
V_k$ we must integrate only over the shaded regions. For this purpose, for each region between $\vec{x}_i$ and $\vec{x}_j$ we will define a coordinate system $$\begin{aligned}
\hat{s} = -\cos{\alpha_j}\hat{x} + \sin{\alpha_j}\hat{y}. & \hat{t}
= \sin{\alpha_j}\hat{x} + \cos{\alpha_j}\hat{y},\label{eq:sistema_coordenadas}\end{aligned}$$ where $\alpha_j$ is the angle formed by the $-\hat{x}$ axis and the edge $a_{ij}$ between $\vec{x}_i$ and $\vec{x}_j$ (see Figure \[fig:VoronoiCoordenadas\]). Using this change of coordinates, the integral over the region of interest is $$\int_{\mathcal{V}_i'\cap \mathcal{V}_j}f_k(x, y)dxdy = \int_{\mathcal{V}_i'\cap
\mathcal{V}_j}f_k(s,t)dsdt.\label{eq:int}$$
Let $\vec{x_i} = (x_i, y_i)$ be the position of the $i$ cell’s generator, $\vec{x_j} = (x_j, y_j)$ one of its neighbor, and $\vec{x_i}' = (x_i + \delta_x, y_i)$ the site’s position after moving it a quantity $\delta_x$. We define $\vec{x_0} \equiv (x_0, y_0)$ as the point in the intersection of the segment formed by $\vec{x_i}$ and $\vec{x_j}$ and its respective edge $a_{ij}$. The same way, we define $\vec{x_0}' = (x_0', y_0')$ as the point in the intersection of the segment formed by $\vec{x_i}'$ and $\vec{x_j}$ and its respective edge $a_{ij}'$. It can be seen on Figure \[fig:VoronoiCoordenadas\] that $x_0 = \frac{x_i + x_j}{2}$ , $x_0' = x_0 + \frac{\delta}{2}$ and $y_0'
= y_0 = \frac{y_i + y_j}{2}$.
The edge $a_{ij}$ is defined in the new coordinate system by $$s = s_0 = -x_0\cos{\alpha_j} + y_0\sin{\alpha_j}.$$ In the same way, the edge $a_{ij}'$ is defined in the original coordinate system by $$y = m(x - x_0') + y_0,$$ where $$m \equiv \frac{x_i + \delta_x - x_j}{y_j - y_i}.$$ We can define the same line in our new coordinate system as $$s = m't+b',$$ where $$\begin{aligned}
m' & \equiv & -\frac{\cos{\alpha_j} + m\sin{\alpha_j}}{\sin{\alpha_j}
- m\cos{\alpha_j}},\\
b' & \equiv & \frac{-mx_0' +
y_0}{\sin{\alpha_j} - m\cos{\alpha_j}}.\end{aligned}$$ This can be approximated to first order in $\delta_x$ as $$\begin{aligned}
m' & \simeq & \delta_xM_x,\\
b' & \simeq &s_0 + \delta_xB_x,\end{aligned}$$ where $$\begin{aligned}
M_x & \equiv &\frac{\sin^2\alpha_j}{y_j - y_i} =
\frac{\sin{\alpha_j}\cos{\alpha_j}}{x_i - x_j},\\
B_x & \equiv &
\frac{\sin{\alpha_j}}{y_j-y_i}(s_0\cos{\alpha_j} + x_i) =
\frac{\cos{\alpha_j}}{x_i-x_j}(s_0\cos{\alpha_j} + x_i).\end{aligned}$$
The integral in Eq. \[eq:DeltaVis\] using our new coordinate system will be $$\begin{aligned}
\mathcal{I} & = & \int_{\mathcal{V}_i'\cap \mathcal{V}_j}f_k(\vec{x})d\vec{x} - \int_{\mathcal{V}_i\cap
\mathcal{V}_j'}f_k(\vec{x})d\vec{x}\\
& = & \int\int_{a_{ij}}^{a_{ij}'}A(\vec{x})e^{2\pi i(ux + vy)}dxdy.\label{eq:Ixy}\end{aligned}$$ If we use $A(\vec{x})$ in the $(s, t)$ coordinate system as a pixelated image, Eq. \[eq:Ixy\] will be $$\mathcal{I} = \sum_{l\ \epsilon \mathrm{\ pixeles\ de\ }
a_i}A_l\int_{t_{ijl}^1}^{t_{ijl}^2}\int_{s_0}^{m't+b'} e^{2\pi i(ux(s,
t) + vy(s, t))}dsdt,$$ where $t_{ijl}^1$ and $t_{ijl}^2$ are the $t$ coordinate of the beginning and end of the portion of the edge $a_{ij}$ that intersects pixel $l$. Developing the previous expression, $$\begin{aligned}
\mathcal{I} & = & \sum_{l}A_l\int_{t_{ijl}^1}^{t_{ijl}^2}\int_{s_0}^{m't+b'}
e^{2\pi i(u(-s\cos{\alpha_j} + t \sin{\alpha_j}) + v(s\sin{\alpha_j} +
t \cos{\alpha_j}))}dsdt,\label{eq:integralPixel}\\
& \simeq &
\sum_{l}\frac{A_l}{\pi c_2}e^{2\pi i(s_0c_1 + \bar{t}_{ijl}c_2)}\kappa_{ijl}\delta_x,
\label{eq:IExacto}\end{aligned}$$ where we defined $$\begin{aligned}
c_1 & \equiv & -u\cos{\alpha_j} + v\sin{\alpha_j},\\
c_2 & \equiv & u\sin{\alpha_j} + v\cos{\alpha_j},\\
\kappa_{ijl} & \equiv &
(M_x\bar{t}_{ijl} + B_x)\sin{(\pi c_2\Delta t_{ijl})} \\\nonumber
&&+ i\frac{M_x}{2} \bigg(\frac{\sin(\pi c_2\Delta t_{ijl})}{\pi c_2} -
\Delta t_{ijl}\cos{(\pi c_2\Delta t_{ijl})}\bigg),\\
\bar{t}_{ijl} & \equiv & \frac{t_{ijl}^1 + t_{ijl}^2}{2},\\
\Delta t_{ijl} & \equiv & \frac{t_{ijl}^2 - t_{ijl}^1}{2}.\end{aligned}$$
In the calculation above we integrated over the fraction of the edge that falls inside pixel $l$ and then summed these integrals over the whole edge $a_i$. It is also possible to approximate the integral of Eq. \[eq:integralPixel\] as $\int_{t_{ijl}^1}^{t_{ijl}^2}g(t) dt =
g(\bar{t}_{ijl})\Delta t_{ijl}$, which is equivalent to taking the limit over the integral $\mathcal{I}$ of Eq. \[eq:IExacto\], $\lim_{\Delta t_{ijl}\rightarrow 0} \mathcal{I}$, obtaining $$\mathcal{I} = \sum_{l} A_l\Delta t_{ijl} (M_x\bar{t}_{ijl} + B_x)e^{2\pi
i(\bar{t}_{ijl}c_2 + s_0c_1)}\delta_x.\label{eq:IAprox}$$
We found by direct evaluation that the difference between Eq. \[eq:IAprox\] and Eq. \[eq:IExacto\] is negligible, so, for simplicity, we will use Eq. \[eq:IAprox\]. Introducing Eq. \[eq:IAprox\] in Eq. \[eq:DeltaVis\], we obtain $$\Delta V_k = \delta_x\sum_{j\in
J_i}\bigg[(I_i-I_j)\sum_{l} A_l\Delta t_{ijl} (M_x\bar{t}_{ijl} +
B_x)e^{2\pi i(\bar{t}_{ijl}c_2 + s_0c_1)}\bigg],$$ so, according to Eq. \[eq:DerivadaLimite\], the derivative of the $k$ visibility with respect to the position $x$ of polygon $i$ is $$\begin{aligned}
\frac{\partial V_k}{\partial x_i} & = &
\lim_{\delta_x\to 0}\frac{\Delta V}{\delta_x},\\
& = & \sum_{j\in
J_i}\bigg[(I_i-I_j)\sum_{l} A_l\Delta t_{ijl} (M_x\bar{t}_{ijl} +
B_x)e^{2\pi i(\bar{t}_{ijl}c_2 + s_0c_1)}\bigg].\end{aligned}$$ Similarly, for the derivative with respect to the position $y$ of the $i$ polygon we obtain $$\frac{\partial V_k}{\partial y_i} = \sum_{j\in
J_i}\bigg[(I_i-I_j)\sum_{l} A_l\Delta t_{ijl} (M_y\bar{t}_{ijl} +
B_y)e^{2\pi i(\bar{t}_{ijl}c_2 + s_0c_1)}\bigg],$$ where $$\begin{aligned}
M_y & \equiv &\frac{\cos^2\alpha_j}{x_i - x_j} =
\frac{\sin{\alpha_j}\cos{\alpha_j}}{y_j - y_i},\\
B_y & \equiv & \frac{\sin{\alpha_j}}{y_j-y_i}(s_0\sin{\alpha_j} - y_i)
= \frac{\cos{\alpha_j}}{x_i-x_j}(s_0\sin{\alpha_j} - y_i).\end{aligned}$$
Calculation of the Derivatives of $S$ {#ap:derivativesdS}
-------------------------------------
We defined our entropy as $$\begin{aligned}
S & = & \ln\left(\frac{N!}{n^N
\prod_{i=1}^{n}N_{i}!}\right) \\
& = & \ln(N!) - N\ln(n) - \sum_{i=1}^{n}\ln(N_i!)\\
& = & \ln\Big(\Gamma(N + 1)\Big) - N\ln(n) - \sum_{i=1}^{n}\ln\Big(\Gamma(N_i +
1)\Big),\end{aligned}$$ where $N_i = \frac{I_i}{\sigma_\mathrm{q}}$ is the number of quanta in cell $i$, $N = \sum_iN_i$ and $\Gamma$ is the Gamma function. It can be seen that this function does not depend on the position of the Voronoi generators, so $$\frac{\partial S}{\partial x_i} = \frac{\partial S}{\partial y_i} = 0.$$
Using Weierstrass’ definition of the Gamma function $$\Gamma(z) = z^{-1}e^{-\gamma z}\prod_{n=1}^\infty
\left[\left(1 + \frac{z}{n}\right)^{-1} e^{z/n}\right],$$ where $\gamma$ is Euler’s constant, we can obtain $$\frac{\partial \ln \Big(\Gamma(z + 1)\Big)}{\partial z} = -\gamma +
\sum_{n=1}^z\frac{1}{n}$$ so, the derivative of $S$ with respect to $I_i$ is $$\frac{\partial S}{\partial I_i} = \frac{1}{\sigma_\mathrm{q}}
(\sum_{k=1}^{N}\frac{1}{k} - \ln{n} - \sum_{k=1}^{N_i}\frac{1}{k}).$$
Finite Difference Cross Check on the Derivatives
------------------------------------------------
Numerical calculation of the derivatives by finite differences is not very accurate, in particular for the position of the generators. Finite difference derivatives are calculated as $\frac{\partial L}{\partial x} = \frac{ L(x + \delta) -
L(x)}{\delta}$, where $\delta$ is a small displacement of $x$. In the case of the positions of the generators, if $\delta$ is too small, the pixelization of the Voronoi diagram (needed to obtain the model visibilities) will not change after the displacement $\delta$. On the other hand, if $\delta$ is too big, the generator displacement may cause the function to change abruptly, as explained below. That is why we calculated the analytical expression for the derivatives.
To verify that our derivatives are correctly calculated and programmed, we compared our analytical result with a numerical calculation. We created a Voronoi tessellation of $50$ polygons with random positions and intensities and calculated the analytical and numerical derivatives using these parameters $\{x_i, y_i, I_i\}$. For the case of $\frac{\partial L}{\partial x_i}$ and $\frac{\partial
L}{\partial y_i}$ this numerical cross check consists of moving each Voronoi generator a quantity $\delta$ from -0.1 to 0.1 with an interval of $10^{-3}$ in units of the total size of the square image. We evaluate the merit function $L$ at each position intervals, thus obtaining two sequences $\{L_i\}_{i=1}^{2\times 10^2}$. We then fitted a polynomial of order four to the curve defined by each sequence $\{L_i\}$ and calculated the derivative of the polynomial at $\delta = 0$. For the case of $\frac{\partial L}{\partial I_i}$ we varied the intensity of cell $i$ from $-\sigma_\mathrm{q}$ to $\sigma_\mathrm{q}$ and did the same approximation to a polynomial of order four and calculated its derivative. Figure \[fig:dLdx\] shows this cross check for $\frac{\partial L}{\partial x_i}$ and $\frac{\partial L}{\partial I_i}$. Although the derivatives are similar, they are not exactly the same for $\frac{\partial L}{\partial
x_i}$. This is caused by the polynomial coarseness fit, as explained below.
Figure \[fig:ajuste\] shows the curve fit for $\frac{\partial
L}{\partial x_i}$ for three different generators (generator number 37, 36 and 18 respectively). It can be seen in Figure \[fig:ajuste\] that the polynomial fit adjusts quite well to the function values for polygon number 37, so on Figure \[fig:dLdx\] both derivatives are the same. On the contrary, for polygons number 36 and 18, the fitted polynomial does not resemble the function $L$ at $\delta = 0$, causing a slight difference in their derivatives on Figure \[fig:dLdx\]. For polygon number 18 the polynomial does not fit the curve at all. This is the main problem of using a numerical approximation for the derivatives of $\{\vec{x_i}\}$: when two polygons are closer than $\delta$, the generator displacement causes the function $L$ to change abruptly (see Figure \[fig:VorProblem\]).
It can be seen that the analytical and numerical derivatives on Figure \[fig:dLdx\] are almost the same. As explained above, differences are produced because there are cases where the polynomials do not fit well to the variations of the merit function $L$ (for example, when two generators are too close). In an accurate calculation it is necessary to use the analytical derivatives.
Fitting a Voronoi Tessellation to an Image {#ap:fitting}
==========================================
Once we have a reasonable reconstruction for a pixelated image, we would like to fit a Voronoi tessellation to it in order to have a good initial starting point for the CG. This is done in an incremental way.
We start with a mesh consisting in only one polygon. We calculate the error per polygon as $$e_i^2 = \sum_l
(I_i - I^\mathrm{im}_l)^2,$$ where the sum runs over all the pixels that fall inside polygon $i$, $I_i$ is the intensity of that polygon and $I^\mathrm{im}_l$ is the intensity of pixel $l$ in the image to be fitted. In each iteration we add a new polygon inside the one with the greatest error. The new generator is inserted in the position of the pixel that has the most different intensity value with respect to the mesh intensity.
Briggs, D. S., Schwab, F. R. & Sramek R. A. 1999, ASP Conf. Ser., 180, 127 Casassus, S., Cabrera, G. F., Förster, F, Pearson, T. J., Readhead, A. C. S., Dickinson, C. 2006, , 639, 951 Cornwell, T. J. & Evans, K. F. 1985, A&A, 143, 77 Högbom, J. A. 1974, A&AS, 15, 417 Narayan, Ramesh & Nityananda, Rajaram 1986, , 24, 127 Okabe, A., Boots, B. & Sugihara, K. 1992, Spacial Tessellations Concepts and Applications of Voronoi Diagrams, John Wiley & Sons Padin, S., et al, 2002, , 114, 83 Piña, R. K. & Puetter, R. C. 1993, , 105, 630 Press, W. H., Flannery, B. P., Teukilsky, S. A., Vettering, W. Y. 1992, Numerical Recipes in C, C. Cambridge University Press Shepherd, M.C., 1997, in ASP Conf. Ser. 125, Astronomical Data Analysis Software and Systems VI, ed. G. Hunt & H.E. Payne (San Francisco: ASP), 77 Sutton, E. C. & Wandelt, B. D. 2006, , 162, 401
[^1]: We choose to display the sky images in a larger field than the domain of free parameters; larger fields are required to highlight secondary side-lobes
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We consider coordinate descent (CD) methods with exact line search on convex quadratic problems. Our main focus is to study the performance of the CD method that use random permutations in each epoch and compare it to the performance of the CD methods that use deterministic orders and random sampling with replacement. We focus on a class of convex quadratic problems with a diagonally dominant Hessian matrix, for which we show that using random permutations instead of random with-replacement sampling improves the performance of the CD method in the worst-case. Furthermore, we prove that as the Hessian matrix becomes more diagonally dominant, the performance improvement attained by using random permutations increases. We also show that for this problem class, using any fixed deterministic order yields a superior performance than using random permutations. We present detailed theoretical analyses with respect to three different convergence criteria that are used in the literature and support our theoretical results with numerical experiments.'
author:
- |
Mert Gürbüzbalaban[^1], Asuman Ozdaglar[^2],\
Nuri Denizcan Vanli[^3] and Stephen J. Wright[^4]
bibliography:
- 'rpcd.bib'
- 'nips\_ref.bib'
title: Randomness and Permutations in Coordinate Descent Methods
---
Introduction {#sec:main}
============
We consider coordinate descent (CD) methods for solving unconstrained optimization problems of the form $$\label{eq:f}
\min_{x \in \R^n} \, f(x),$$ where $f:\R^n \to \R$ is smooth and convex. CD methods have a long history in optimization [@luo1992convergence; @bertsekas1989parallel; @ortega2000iterative] and have been used in many applications [@PassCode15; @NesterovCD17; @richtarik2016parallel; @qin2013efficient; @scutari14]. They have seen a resurgence of recent interest because of their scalability and desirable empirical performance in machine learning and large-scale data analysis [@Bertsekas15Book; @Shi16Survey; @Wright2015].
CD methods are iterative algorithms that perform (approximate) global minimizations with respect to a single coordinate (or several coordinates in the case of block CD) at each iteration. Specifically, at iteration $k$, an index $i_k \in \{1,2,\dots,n\}$ is chosen and the decision variable is updated to approximately minimize the objective function in the $i_k$-th coordinate direction (or at least to produce a significant decrease in the objective) [@Bertsekas99nonlinear; @Bertsekas15Book]. The steps of this method are summarized in Algorithm \[alg:cd\], where $e_i = [0,\dots,0,1,0,\dots,0]^T$ is the $i$-th standard basis vector (with the $i$-th entry equal to one). At each iteration $k$, $i_k$-th coordinate of $x$ is selected and a step is taken along the negative gradient direction in this coordinate. The counter $k=\ell n+j$ keeps track of the total number of iterations consisting of outer iterations indexed by $\ell$ and inner iterations indexed by the counter $j$. Each outer iteration is called a “cycle" or an “epoch” of the algorithm.
Choose initial point $x^0 \in \R^n$ Set $k=\ell n+j$ Choose index $i_k=i(\ell,j) \in \{1,2,\dotsc,n\}$ Choose stepsize $\alpha_k>0$ $x^{k+1} \leftarrow x^k - \alpha_k [\nabla f(x^k)]_{i_k} e_{i_k}$, where $[\nabla f(x^k)]_{i_k} = e_{i_k}^T \nabla f(x^k)$
CD methods use various schemes, both deterministic and stochastic, for choosing the coordinate $i_k$ to be updated at iteration $k$. Prominent schemes include the following.
- Cyclic CD (CCD): The index $i(\ell,j)$ is chosen in a cyclic fashion over the elements in the set $\{1,2,\dots,n\}$ satisfying $i(\ell,j)=j+1$.
- Cyclic CD with a given order $\pi$ (CCD-$\pi$): A permutation $\pi$ of the set $\{1,2,\dots,n\}$ is selected. Then, the index $i(\ell,j)$ is chosen as the $(j+1)$-th element of $\pi$ for every epoch $\ell$. (CCD corresponds to the special case of $\pi=(1,2,\dots,n)$.)
- Randomized CD (RCD): The index $i(\ell,j)$ is chosen randomly with replacement from the set $\{1,2,\dotsc,n\}$ with uniform probabilities (each index has the same probability of being chosen). This method is also known as the *stochastic CD* method.
- Random Permutations Cyclic CD (RPCD): At the beginning of each epoch $\ell$, a permutation of $\{1,2,\dotsc,n\}$ is chosen, denoted by $\pi_{\ell}$, uniformly at random over all permutations. Then, the index $i(\ell,j)$ is chosen as the $(j+1)$-th element of $\pi_{\ell}$. Each permutation $\pi_{\ell}$ is independent of the permutations used at all previous and later epochs. This approach amounts to sampling indices from the set $\{1,2,\dotsc,n\}$ without replacement for each epoch.
While our focus in this paper will be on CD methods with the aforementioned selection rules, we note that several other variants of CD methods have been studied in the literature, including the Gauss-Southwell rule [@nutini2015coordinate], in which $i_k$ is selected in a greedy fashion to maximize $[\nabla f(x^k)]_i$, and versions of RCD [@nesterov2012efficiency], in which $i_k$ is selected from a non-uniform distribution that may depend on the component-wise Lipschitz constants of $f$.
We are interested in the relative convergence behavior of these different variants of CD. While there have been some recent works that study and compare performances of CCD and RCD (for example, [@nesterov2012efficiency; @WangLinCCD; @TewariSIAM; @beck2013convergence; @sun2015improved; @sun2016worst; @ccd_vs_rcd]); with the exception of a few recent papers (which focus on special quadratic problems, see [@wrightRPCD15; @wrightRPCD17]), there is limited understanding of the effects of random permutations in CD methods.
In this paper, we study convergence rate properties of RPCD for a special class of quadratic optimization problems with a diagonally dominant Hessian matrix, and compare its performance to that of RCD and CCD. Interest in RPCD is motivated by both empirical observations and practical implementation: In many machine learning applications, RPCD is observed numerically to outperform its with-replacement sampling counterpart RCD [@Needell2014a; @Recht2012]. Moreover, without-replacement sampling-based algorithms (such as RPCD and random reshuffling [@mert_RandomReshuffling; @Bertsekas15]) are often easier to implement efficiently than their with-replacement counterparts (such as RCD and stochastic gradient descent) [@Recht2012; @wrightRPCD15] as it requires sequential data access, in contrast to the random data access required by with-replacement sampling (see e.g. [@shamir2016without; @Bottou2012]).
We start by surveying briefly the existing results on the effects of random permutations for CD methods [@sun2016worst; @wrightRPCD15; @wrightRPCD17; @oswald2017random]. Among these, Oswald and Zhou [@oswald2017random] studies the effects of random permutations on the convergence rate of the successive over-relaxation (SOR) method (that is used to solve linear systems) and presents a convergence rate on the expected function value of the iterates generated by the SOR method. The CD method, when applied to quadratic minimization problems, is equivalent to the SOR method (applied to the linear system that represents the first-order optimality condition of the quadratic problem) when the relaxation parameter is chosen as $\omega=1$. Therefore, the convergence rate results in [@oswald2017random] readily extend for RPCD, when applied to quadratic problems. Sun and Ye [@sun2016worst] construct a quadratic problem, for which CCD requires $\bigO(n^2)$ times more iterations compared to RCD in order to achieve an $\epsilon$-optimal solution (that is, a point $x^k$ that satisfies $\E f(x^k) - f(x^*) \leq \epsilon$). For this problem, they also show that the distance of the iterates (to the optimal solution) for CCD decays $\bigO(n^2)$ times slower than the distance of the expected iterates for RPCD and RCD. Lee and Wright [@wrightRPCD15] consider the same problem and present that the expected function values of RPCD and RCD decay with similar rates, while the asymptotic convergence rate of RPCD is shown to be slightly better than for RCD. In a following paper [@wrightRPCD17], the results in [@wrightRPCD15] are generalized to a larger class of quadratic problems through a more elaborate analysis.
Our main results provide convergence rate comparisons with respect to various criteria between RPCD, RCD, and CCD for a class of strongly convex quadratic optimization problems with a diagonally dominant Hessian matrix. In particular, we first provide an exact worst-case convergence rate comparison between RPCD, RCD, and CCD in terms of the distance of the expected iterates to the optimal solution, as a function of a parameter that represents the extent of diagonal dominance of the Hessian matrix. Our results show that, on this problem, CCD is always faster than RPCD, which in turn is always faster than RCD. Furthermore, we show that the relative convergence rate of RPCD to RCD goes to infinity as the Hessian matrix becomes more diagonally dominant. On the other extreme, as the Hessian matrix becomes less diagonally dominant, the ratio of convergence rates converges to a value in $[3/2, \, e-1)$, with the upper bound $e-1$ achieved in the limit as $n \to \infty$. Our second set of results compares the convergence rates of RPCD and RCD with respect to two other criteria that are widely used in the literature: the expected distance of the iterates to the solution and the expected function values of the iterates. For these criteria, we show that RPCD is faster than RCD in terms of the tightest upper bounds we obtain, and the amount of improvement increases as the matrices become more diagonally dominant.
The organization of the paper is as follows. In Section \[sec:prelim\], we discuss the CCD, RCD, and RPCD algorithms in more detail and describe the three criteria that are used for analyzing convergence throughout the paper. In Section \[sec:prior\], we survey known results on the convergence rate of RPCD. We analyze the convergence rates of CCD, RCD, and RPCD with respect to the first convergence criterion in Section \[ssec:lyapunov1\] and the behavior of RCD and RPCD with respect to the second and third convergence criteria in Section \[ssec:lyapunov2and3\]. We validate our theoretical results via numerical experiments in Section \[sec:experiments\] and present conclusions in Section \[sec:conclusion\].
Preliminaries {#sec:prelim}
=============
To study performance of different CD methods, we focus on the special case of problem when $f$ is a strongly convex quadratic function:[^5] $$f(x) = \frac12 x^TAx, \label{eq:obj1}$$ where $A$ is a positive definite matrix. We denote its extreme eigenvalues by \[def-muL\] := \_(A) >0, L := \_(A), and note that $\mu$ is the modulus of convexity for $f$, while $L$ is the Lipschitz constant for $\nabla f$. The problem has a unique solution $x^*= 0$ with optimal value $f(\xs)=0$.
In the remainder of this section, we derive explicit formulas for the iterates of different variants of CD applied to (in terms of matrix operators representing each epoch) and then introduce different convergence criteria for these variants. We show how asymptotic convergence rates can be characterized in terms of the spectral properties of $A$ and the matrix operators for each epoch.
CD Methods
----------
In this section, we describe the variants of the CD method (in particular, CCD, CCD-$\pi$, RCD, and RPCD) when applied to the quadratic problem in . The CD method (cf. Algorithm \[alg:cd\]) with exact line search has the following update rule at each iteration $$\label{eq:cd-exact}
x^{k+1} = x^k - \frac{1}{A_{i_k i_k}} (Ax^k)_{i_k} e_{i_k},$$ where the update coordinate $i_k$ is determined according to one of the schemes mentioned above.
For the CCD algorithm, each coordinate is processed in a round-robin fashion using the standard cyclic order $(1,2,\dots,n)$. Denoting by $D$ the diagonal part of $A$ and by $-N$ the strictly lower triangular part of $A$, that is, $$A = D - N - N^T, \nonumber$$ the evolution of the iterates over an epoch (of $n$ consecutive iterations) can be written as $$\xc^{(\ell+1)n} = \ccd \, \xc^{\ell n}, \quad \mbox{with} \quad \ccd = (D-N)^{-1}N^T, \label{eq:ccd}$$ where $\ell$ denotes the epoch counter. Note that the update rule in is equivalent to one iteration of the Gauss-Seidel method applied to the first-order optimality condition of , which is the linear system $Ax=0$ [@Wright2015].
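To make the epoch recursion concrete, here is a small numerical sketch (our illustration, not code from the original references; all names are ours) that runs one CCD epoch with exact line search and checks that it matches the epoch operator $(D-N)^{-1}N^{T}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # a positive definite Hessian

x = rng.standard_normal(n)

# One CCD epoch: coordinates 1,...,n in the standard cyclic order,
# each step doing exact line search along the chosen coordinate.
x_ccd = x.copy()
for i in range(n):
    x_ccd[i] -= (A @ x_ccd)[i] / A[i, i]

# Epoch operator (D - N)^{-1} N^T, where A = D - N - N^T.
D = np.diag(np.diag(A))
N = -np.tril(A, k=-1)                  # strictly lower part of A equals -N
C_ccd = np.linalg.solve(D - N, -np.triu(A, k=1))   # (D - N)^{-1} N^T

assert np.allclose(x_ccd, C_ccd @ x)   # one epoch = multiplication by the epoch operator
```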
For the CCD-$\pi$ algorithm, we let $P_\pi$ denote the permutation matrix corresponding to order $\pi$ and split the permuted Hessian matrix as follows: $$\label{eq:split}
A_\pi = P_\pi^T A P_\pi = D_\pi - N_\pi - N_\pi^T,$$ where $-N_\pi$ is a strictly lower triangular matrix and $D_\pi$ is a diagonal matrix. Then, similar to , we have $$\xcpi^{(\ell+1)n} = \ccdpi \, \xcpi^{\ell n}, \quad \mbox{with} \quad \ccdpi = (D_\pi-N_\pi)^{-1}N_\pi^T. \label{eq:ccd-pi}$$ Note that $\ccd$ and $\ccdpi$ are not symmetric matrices, as the first column of each matrix is zero, whereas the first row contains nonzero entries.
For the RCD algorithm, the indices $i_k$ are chosen independently at random at each iteration $k$. Denoting by $\xr^k$ the $k$-th iterate generated by RCD, the update rule for RCD over a single iteration can be written as $$\xr^{k+1} = \rcd \, \xr^k, \quad \mbox{with} \quad \rcd = I - \frac{1}{A_{i_k i_k}} e_{i_k} e_{i_k}^T A. \label{eq:rcd-definition}$$ The expectation of $\rcd$ with respect to the random variable $i_k$ is denoted as follows: $$\label{eq:Brcd}
\rcdE = \E_k \rcd,$$ where we note that $\rcdE$ is a symmetric matrix, by symmetry of $A$ and uniform distribution of $i_k$.
For the RPCD algorithm, each coordinate is processed exactly once in each epoch according to a uniformly and independently chosen order. Recalling that $\pi_\ell$ denotes the permutation of coordinates used in epoch $\ell$ and using the iteration matrix corresponding to CCD-$\pi_\ell$ (see ), epoch $\ell$ of RPCD can be written as $$\xp^{(\ell+1)n} = \rpcd \, \xp^{\ell n}, \quad \mbox{with} \quad \rpcd = P_{\pi_\ell} \ccdpiell P_{\pi_\ell}^T. \label{eq:rpcd}$$ We introduce the following notation for the expected value of $\rpcd$ with respect to permutation $\pi_\ell$: $$\label{eq:Brpcd}
\rpcdE = \E_{\ell} \rpcd,$$ where we note that $\rpcdE$ is a symmetric matrix since $\pi_\ell$ is chosen uniformly at random over all permutations (see Lemma \[lem:wright\_lemma\]).
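The elementary updates above are easy to implement; the following minimal sketch (ours, not the authors' implementation) codes one RCD step and one RPCD epoch for a generic positive definite quadratic and confirms that a single RPCD epoch does not increase the objective:

```python
import numpy as np

def rcd_step(A, x, i):
    """One CD iteration with exact line search along coordinate i."""
    x = x.copy()
    x[i] -= (A @ x)[i] / A[i, i]
    return x

def rpcd_epoch(A, x, rng):
    """One RPCD epoch: visit every coordinate once, in a fresh uniform order."""
    for i in rng.permutation(len(x)):
        x = rcd_step(A, x, i)
    return x

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
f = lambda z: 0.5 * z @ A @ z

x0 = rng.standard_normal(n)
print(f(x0), f(rpcd_epoch(A, x0, rng)))   # the objective decreases after one epoch
```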
Convergence Rate Criteria {#ssec:rate_criteria}
-------------------------
We next discuss how to measure and compare the convergence rates of different variants of CD. Three different improvement sequences have been used to measure the performance of CD methods in the literature: $$\begin{aligned}
& (i) & \Ly_1(x_\text{CD}^k) & = \norm{\E x_\text{CD}^k-\xs}, & \text{(Distance of expected iterates)} \\
& (ii) & \Ly_2(x_\text{CD}^k) & = \E\norm{x_\text{CD}^k-\xs}^2, & \text{(Expected distance of iterates)} \\
& (iii) & \Ly_3(x_\text{CD}^k) & = \E f(x_\text{CD}^k) - f(\xs). & \text{(Expected function value)} \end{aligned}$$ (see e.g. [@sun2015improved; @sun2016worst; @ccd_vs_rcd; @richtarik2016parallel; @Wright2015; @nesterov2012efficiency; @beck2013convergence]). While these three measures can be related to each other (Jensen’s inequality yields $\Ly_1^2 \leq \Ly_2$ and strong convexity enables lower and upper bounding $\Ly_3$ between constant positive multiples of $\Ly_2$), we will provide different analyses for each of the measures to obtain the tightest estimates.
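For concreteness, the three improvement sequences can be estimated by Monte Carlo simulation; the sketch below (our own illustration, with hypothetical helper names) does this for RPCD, using the fact that $x^*=0$ here:

```python
import numpy as np

def rpcd_run(A, x0, epochs, rng):
    """Run RPCD for a given number of epochs and return the final iterate."""
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(x)):
            x[i] -= (A @ x)[i] / A[i, i]
    return x

rng = np.random.default_rng(2)
n, alpha = 10, 0.05
A = (1 + alpha) * np.eye(n) - alpha * np.ones((n, n))    # a diagonally dominant example
x0 = np.ones(n)
runs = np.array([rpcd_run(A, x0, epochs=5, rng=rng) for _ in range(2000)])

L1 = np.linalg.norm(runs.mean(axis=0))                        # ||E x - x*||
L2 = np.mean(np.sum(runs ** 2, axis=1))                       # E ||x - x*||^2
L3 = np.mean(0.5 * np.einsum('ki,ij,kj->k', runs, A, runs))   # E f(x) - f(x*)
print(L1, L2, L3, L1 ** 2 <= L2)                              # Jensen: L1^2 <= L2
```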
In the above definitions, the expectations can be dropped for deterministic algorithms such as CCD.
To study the convergence rate of CCD, RCD, and RPCD with respect to improvement sequence $\Ly_1$, we use the operators derived in the previous section that represent one iterate or one epoch. For CCD and RPCD, we have from and together with that $$\E_\ell x_\text{CD}^{(\ell+1) n} = \cd \, x_\text{CD}^{\ell n},$$ where $\E_\ell$ denotes the expectation with respect to the random variables in epoch $\ell$ given $x_\text{CD}^{\ell n}$. (We have $\cd=\ccd$ for CCD and $\cd=\rpcdE$ for RPCD.) Note that the random variables in each epoch are independent and identically distributed across different epochs for RCD and RPCD. Therefore, by using the law of iterated expectations, we obtain $$\E x_\text{CD}^{\ell n} = \cd^\ell \, x^0,$$ where $\E$ here denotes the expectation with respect to [*all*]{} random variables arising in the algorithm. Hence, the [*worst-case convergence rate*]{} with respect to $\Ly_1$ can be expressed as $$\sup_{x^0\in\R^n} \left( \frac{\norm{\E x_\text{CD}^{\ell n}}}{\norm{x^0}} \right)^{1/\ell} = \sup_{x^0\in\R^n} \left( \frac{\norm{\cd^\ell \, x^0}}{\norm{x^0}} \right)^{1/\ell} = \norm{\cd^\ell}^{1/\ell}. \label{eq:rmk1}$$ When $\cd$ is a symmetric matrix (as in RPCD), we have $\norm{\cd^\ell}^{1/\ell} = \rho(\cd)$. Hence, yields a [*per-epoch*]{} worst-case convergence rate of $\rho(\rpcdE)$ for RPCD. When $\cd$ is asymmetric (which is the case for CCD), we have by Gelfand’s formula $\lim_{\ell\to\infty}\norm{\cd^\ell}^{1/\ell} = \rho(\cd)$. Thus, $\rho(\ccd)$ represents an [*asymptotic*]{} worst-case convergence rate measure for CCD.
For RCD, a similar derivation involving a single iteration (rather than one epoch) yields from and that $$\E_k x_\text{RCD}^{k+1} = \rcdE \, x_\text{RCD}^k.$$ Similar reasoning to the above yields a [*per-iteration*]{} worst-case convergence rate of $\rho(\rcdE)$, or equivalently a per-epoch rate of $\rho(\rcdE)^n$, for RCD. (Note that, because $\rcdE$ is symmetric, we have $\rho(\rcdE) = \norm{\rcdE}$.)
In our analysis of convergence rate of RCD with respect to improvement sequence $\Ly_2$, it follows from that $$\begin{aligned}
\E \norm{x_\text{RCD}^{k+1}}^2 & = (x_\text{RCD}^k)^T \E \left[ (\rcd)^T \rcd \right] x_\text{RCD}^k \\
& \leq \norm{\E \left[ (\rcd)^T \rcd \right]} \norm{x_\text{RCD}^k}^2.\end{aligned}$$ For RPCD, we have similarly from that $$\begin{aligned}
\E \norm{x_\text{RPCD}^{(\ell+1)n}}^2 & = (x_\text{RPCD}^{\ell n})^T \E \left[ (\rpcd)^T \rpcd \right] x_\text{RPCD}^{\ell n} \\
& \leq \norm{\E \left[ (\rpcd)^T \rpcd \right]} \norm{x_\text{RPCD}^{\ell n}}^2.\end{aligned}$$ The matrices $\E \left[ (\rcd)^T \rcd \right]$ and $\E \left[ (\rpcd)^T \rpcd \right]$ are both symmetric. Convergence rates can be obtained from $\rho \left( \E \left[ (\rcd)^T \rcd \right] \right)$ and $\rho \left( \E \left[ (\rpcd)^T \rpcd \right] \right)$ (or equivalently from the norms of these matrices), the first being a per-iteration convergence rate for RCD under criterion $\Ly_2$, and the second being a per-epoch rate for RPCD under the same criterion. Results along these lines appear in Section \[ssec:lyapunov2and3\].
Finally, in our analysis of convergence rate of RCD with respect to $\Ly_3$, iteration yields $$\begin{aligned}
\E_k f(x_\text{RCD}^{k+1}) & = \frac12 (x_\text{RCD}^k)^T \E_k \left[ (\rcd)^T A \rcd \right] x_\text{RCD}^k \\
& = \frac12 (A^{1/2} x_\text{RCD}^k)^T \E_k \left[ A^{-1/2} (\rcd)^T A \rcd A^{-1/2} \right] A^{1/2} x_\text{RCD}^k \\
& \leq \frac12 \norm{\E_k \left[ A^{-1/2} (\rcd)^T A \rcd A^{-1/2} \right]} \norm{A^{1/2} x_\text{RCD}^k}^2 = \norm{\E_k \left[ A^{-1/2} (\rcd)^T A \rcd A^{-1/2} \right]} f(x_\text{RCD}^k).\end{aligned}$$ A similar analysis applied to the RPCD update formula yields $$\E_\ell f(x_\text{RPCD}^{(\ell+1)n}) \leq \norm{\E_{\ell} \left[ A^{-1/2} (\rpcd)^T A \rpcd A^{-1/2} \right]} f(x_\text{RPCD}^{\ell n}).$$ We will show that the matrices in these two bounds are symmetric. Thus, our convergence rate characterizations for RCD and RPCD with respect to $\Ly_3$ (see Section \[ssec:lyapunov2and3\]) will involve the norms (equivalently, the spectral radii) of these two matrices.
Note that for improvement sequence $\Ly_1$, the asymptotic worst-case convergence rate of the algorithm can be simply computed as the spectral radius of the expected iteration matrix. Furthermore, this bound is tight in the sense that there can be no smaller contraction rate $c_1$, for which an inequality of the type $\Ly_1(x_\text{CD}^{\ell n}) \leq c_1^\ell \, \Ly_1(x^0)$ asymptotically holds for all $x^0\in\R^n$. Therefore, in Section \[ssec:lyapunov1\], we compare the worst-case convergence rates of CCD, RCD and RPCD with respect to $\Ly_1$ through a tight analysis (in Proposition \[thm:ccd\]). We analyze the ratio of the convergence rates of RCD and RPCD in Proposition \[thm-monotonic-rpcd-speedup\]. On the other hand, for improvement sequences $\Ly_2$ and $\Ly_3$, we consider per-iteration and per-epoch upper bounds that are not necessarily asymptotically tight. Using a similar argument to , we can formulate the worst-case contraction factors for $\Ly_2$ and $\Ly_3$, but they would involve computation of powers of matrices (e.g., $\E \left[ (\cdk^\ell)^T \cdk^\ell \right]$ and $\E \left[ A^{-1/2} (\cdk^\ell)^T A \cdk^\ell A^{-1/2} \right]$), which does not admit a closed form characterization. Hence, in Section \[ssec:lyapunov2and3\], we compare the convergence rates of RCD and RPCD based on per-iteration and per-epoch improvement rates, as has been done previously in the literature [@sun2016worst; @wrightRPCD15; @wrightRPCD17].
Prior work on CD methods with random permutations {#sec:prior}
=================================================
In this section, we survey the known results on the performance of RPCD. There are several recent works that study the effects of random permutations in the convergence behavior of CD methods [@oswald2017random; @wrightRPCD15; @wrightRPCD17; @sun2016worst]. To unify the randomization parameters (in RCD and RPCD) and the component-wise Lipschitz constants in different papers, we (without loss of generality) make the following assumption throughout the rest of the paper $$\label{eq:unit-diag}
A_{ii} = 1, \quad \mbox{for all} \quad i\in\{1,2,\dots,n\}.$$ This can always be satisfied by scaling the optimization variable, i.e., by setting $x = D^{-1/2}\tilde{x}$ in and minimizing over $\tilde{x} \in \R^n$ (see e.g. [@wrightRPCD17; @ccd_vs_rcd]).
Recently, Oswald and Zhou [@oswald2017random] analyzed the effects of random permutations for the successive over-relaxation (SOR) method, which is equivalent to the CD method with exact line search for a particular choice of algorithm parameter. They consider quadratic problems whose Hessian matrix is positive semidefinite and present convergence guarantees for SOR iterations with random permutations, which implies the following guarantee on the performance of RPCD.
\[theo-oswald\][@oswald2017random Theorem 4] Let $f$ be a quadratic function of the form , where the Hessian matrix $A$ has unit diagonals. Then, for any solution $x^*$, the RPCD algorithm enjoys the following guarantee $$\label{eq:oswald_rpcd}
\E f(\xp^{\ell n}) - f(x^*) \leq \left (1 - \frac{\mu}{(1+L)^2}\right)^{\ell} \left(f(x^0) - f(x^*)\right).$$
Theorem \[theo-oswald\] provides a convergence rate guarantee on the performance of RPCD for general quadratic functions. Under the same assumptions as in Theorem \[theo-oswald\], the best known upper bound on the performance of RCD is given by [@nesterov2012efficiency Theorem 5]: $$\E \left[ \frac{1}{2} \norm{\xr^k-\xs}^2 + f(\xr^k) - f(x^*) \right] \leq\left( 1- \frac{2 \mu}{n(1+\mu)} \right)^k \left( \frac{1}{2} \norm{x^0-\xs}^2 + f(x^0) - f(x^*) \right). \label{eq:rcd.rate.R}$$ This shows that the upper bound on the per-epoch performance of RCD is approximately $\left( 1-\frac{2\mu}{n(1+\mu)} \right)^n \approx 1-\frac{2\mu}{1+\mu}$, whereas it follows from that the upper bound on the performance of RPCD can be as large as $1-\frac{\mu}{(1+n)^2}$ since $L \leq \tr{A} = n$. These bounds suggest that RPCD may require $\bigO(n^2)$ times more iterations than RCD to guarantee an $\epsilon$-optimal solution. However, empirical results show that RPCD often outperforms RCD in machine learning applications [@Recht2012; @Bottou2009]. Furthermore, it has been conjectured that the expected performance of RPCD should be no worse than the expected performance of RCD [@Recht2012] (see also [@Ward16AMGM; @Zhang14AMGM] for related work on this conjecture). This motivates deriving tight bounds on the convergence rate of RPCD and comparing them with the known bounds on the convergence rate of RCD.
A similar phenomenon has been observed for CCD in comparison to RCD. In particular, the tightest known convergence rate results on the performance of CCD (see [@beck2013convergence; @sun2016worst; @sun2015improved]) suggest that CCD may require $\bigOt(n^2)$ times more iterations than RCD to guarantee an $\epsilon$-optimal solution. To understand this gap in the convergence rate bounds, Sun and Ye [@sun2016worst] focused on the quadratic problem in with the following permutation invariant[^6] Hessian matrix $$\label{eq:Ainvariant}
A = \ddd I + (1-\ddd) \bfone \bfone^T, \quad \mbox{where} \quad \delta \in (0,n/(n-1)).$$ In particular, the authors considered a worst-case initialization and the case when $\delta$ is close to $0$, for which $L=\bigO(n)$.[^7] For this problem, they showed that CCD with the worst-case initialization indeed requires $\bigO(n^2)$ times more iterations than RCD to return an $\epsilon$-optimal solution. They also provided rate comparisons between RPCD and CCD (but not between RPCD and RCD); their result is presented in the following theorem.
\[prop: slower, iterates\] [@sun2016worst Proposition 3.4] Let $K_{\mathrm{CCD}}(\epsilon)$, $K_{\mathrm{RCD}}(\epsilon) $ and $K_{\mathrm{RPCD}}(\epsilon)$ be the minimum number of epochs for CCD, RCD and RPCD (respectively) to achieve (expected) relative error $$\frac{ \| \E(x_{\text{CD}}^k) - x^* \| }{ \|x^0 - x^* \| } \leq \epsilon,$$ for initial point $x^0\in\R^n$ (for CCD, the expectation operator can be ignored). There exists a quadratic problem, whose Hessian matrix $A$ satisfies for some $\delta$ around zero, such that
\[ite, compare CD with others\] $$\begin{aligned}
% \frac{ K_{\mathrm{CCD} }(\epsilon ) }{ K_{\mathrm{GD} }(\epsilon ) } & \geq \frac{n}{2 \pi^2 } \approx \frac{n}{20}, \label{compare GD with CD, coro} \\
\frac{ K_{\mathrm{CCD} }(\epsilon ) }{ K_{\mathrm{RCD} }(\epsilon ) } & \geq \frac{n^2 }{ 2 \pi^2 } \approx \frac{n^2}{20}, \label{compare CD with RCD, coro} \\
\frac{ K_{\mathrm{CCD} }(\epsilon ) }{ K_{\mathrm{RPCD} }(\epsilon ) } & \geq \frac{n (n+1) }{ 2 \pi^2 } \approx \frac{n(n+1)}{20}.
\end{aligned}$$
Theorem \[prop: slower, iterates\] shows that, in terms of improvement sequence $\Ly_1$, RPCD and RCD are $\bigO(n^2)$ times faster than CCD in the worst case. In a follow-up work, Lee and Wright [@wrightRPCD15] considered the same problem as [@sun2016worst] (see ) for the small $\delta$ case and provided asymptotic and non-asymptotic analyses of RPCD with respect to improvement sequence $\Ly_3$, which are presented in the following theorem.
\[thm-rpcd-ima\][@wrightRPCD15 Theorem 3.3] Consider the quadratic problem with the Hessian matrix $A$ given by , where $\delta\in(0, 0.4)$ and $n \geq 10$. For any $x^0\in\R^n$, RPCD has the following non-asymptotic convergence guarantee $$\E f(\xp^{\ell n}) - f(\xs) \leq (1-2\delta+4\delta^2)^\ell R_0,$$ where $R_0$ is a constant depending on $x_0$ and $\delta$. Furthermore, RPCD iterates enjoy an asymptotic convergence rate of $$\lim_{\ell\to\infty} \left( \E f(\xp^{\ell n}) - f(\xs) \right)^{1/\ell} = 1-2\ddd - \frac{2\ddd}{n} + 2\ddd^2 + \bigO\left(\frac{\ddd^2}{n}\right) + \bigO(\ddd^3).$$
Theorem \[thm-rpcd-ima\] shows that for the particular class of quadratic problems whose Hessian matrix satisfies , the convergence rate (in improvement sequence $\Ly_3$) of RPCD is faster than that of RCD in terms of the best known upper bounds. This is the first theoretical evidence supporting the empirical results showing that RPCD often outperforms RCD [@Recht2012]. In a follow-up work [@wrightRPCD17], Lee and Wright generalize the results of Theorem \[thm-rpcd-ima\] to quadratic problems whose Hessian matrix satisfies $$\label{eq:Ainvariant-generalized}
A = \ddd I + (1-\ddd) u u^T, \quad \mbox{where} \quad \ddd \in (0,n/(n-1)),$$ where $u\in\R^n$ is a vector with elements of size $\bigO(1)$ (this generalizes , which corresponds to $u=\bfone$). The conclusions are similar to [@wrightRPCD15], but the analysis is different because $A$ is no longer a permutation-invariant matrix.
Performance of RPCD vs RCD on a class of diagonally dominant matrices {#sec:4}
=====================================================================
As described in the previous section, the existing works [@sun2016worst; @wrightRPCD15] analyze the performance of RPCD for quadratic problems whose Hessian satisfies for small $\delta$. Here, we consider the other extreme, i.e., the $\delta>1$ case, and provide tight convergence rate comparisons between RPCD, RCD and CCD with respect to all three improvement sequences defined in Section \[ssec:rate\_criteria\]. In deriving convergence rate guarantees, we do not resort to the tools that are used in the earlier works on RPCD [@sun2016worst; @wrightRPCD15; @wrightRPCD17]. Instead, we present a novel analysis based on Perron-Frobenius theory that enables us to compute convergence rate bounds for all three criteria. For notational simplicity, we introduce the reformulation $\alpha=\delta-1$, which yields $$\label{def-An}
A = (1+\alpha)I - \alpha \ones \ones^T, \quad \mbox{where} \quad \alpha \in (0,1/(n-1)).$$ It is simple to check that $A$ has one eigenvalue at $1-(n-1)\alpha$ with the corresponding eigenvector $\ones$ and the other $n-1$ eigenvalues equal to $1+\alpha$. In particular, as $\alpha$ goes to zero, the condition number of $A$ gets smaller and in the limit $A$ is the identity matrix. On the other hand, as $\alpha\to \frac{1}{n-1}$, the matrix gets ill-conditioned. Therefore, the parameter $$\label{eq:t-definition}
t := \sum_{j\neq i} |A_{ij}| = (n-1)\alpha \in (0,1)$$ is a measure of diagonal dominance. In the remainder of this section, we analyze the performance of RPCD, RCD and CCD in improvement sequence $\Ly_1$ and the performance of RPCD and RCD in improvement sequences $\Ly_2$ and $\Ly_3$ with respect to this diagonal dominance measure.
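These spectral facts are easy to check numerically; the short sketch below (ours) verifies the eigenvalues of $A$ and the diagonal dominance measure $t$:

```python
import numpy as np

n, alpha = 6, 0.1                      # any alpha in (0, 1/(n-1))
A = (1 + alpha) * np.eye(n) - alpha * np.ones((n, n))

eigs = np.sort(np.linalg.eigvalsh(A))
print(np.isclose(eigs[0], 1 - (n - 1) * alpha))   # smallest eigenvalue, eigenvector = ones
print(np.allclose(eigs[1:], 1 + alpha))           # remaining n-1 eigenvalues

t = (n - 1) * alpha
print(np.isclose(np.abs(A[0, 1:]).sum(), t))      # off-diagonal row sum equals t < 1 = A_ii
```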
Convergence rates of RPCD, RCD and CCD in improvement sequence $\Ly_1$ {#ssec:lyapunov1}
----------------------------------------------------------------------
In this section, we compare convergence rates of RPCD, RCD and CCD, where improvement sequence $\Ly_1(x^k) = \norm{\E x^k -\xs}$ is chosen as the convergence criterion (as in Theorem \[prop: slower, iterates\]). As we highlighted in Section \[ssec:rate\_criteria\], we first compute the expected iteration matrices of the RPCD and RCD algorithms, and show that they are symmetric. Then, we compute their spectral radii to conclude the per-epoch worst-case convergence rate of RPCD and RCD, and analyze their ratio in Proposition \[thm-monotonic-rpcd-speedup\]. We also show that the asymptotic worst-case convergence rate of CCD is faster than that of RPCD and RCD in Proposition \[thm:ccd\].
We begin our discussion by writing the expected RPCD iterates (see and ) as follows $$\label{eq:CCDEqualsCCDpi}
\E_\ell \xp^{(\ell+1)n} = \rpcdE \, \xp^{\ell n}.$$ Note that since the Hessian matrix $A$ is permutation invariant, the iteration matrix of the CCD-$\pi$ algorithm for any cyclic order $\pi$ is equal to the iteration matrix of the standart CCD algorithm, i.e., $\ccd=\ccdpi$ for all orders $\pi$. Therefore, we have $\rpcdE = \E_\pi [P_\pi \ccd P_\pi^T] = \E_P [P \ccd P^T]$, where we drop the subscript $\pi$ from the matrices for notational simplicity. In order to obtain a formula for $\rpcdE$, we first reformulate the CCD iteration matrix in as follows $$\ccd = (I-N)^{-1}N^T = I - (I-N)^{-1}(I-N-N^T) = I - \Gamma^{-1} A,$$ where $\Gamma = I-N$. Using this reformulation, the expected iteration matrix of RPCD can computed as follows $$\rpcdE = \E_P \left[ P \ccd P^T \right] = \E_P \left[ P(I - \Gamma^{-1} A)P^T \right] = I - \E_P \left[ P\Gamma^{-1}P^T \right] A,$$ where we used the fact that $PP^T = I$ and $AP^T = P^TA$. For the case the Hessian matrix $A$ satisfies , $\Gamma^{-1}$ can be explicitly computed as $$\begin{aligned}
\label{eqn-gamma-inv}
\Gamma^{-1} = \mbox{toeplitz} (c,r),\end{aligned}$$ where $\mbox{toeplitz} (c,r)$ denotes the Toeplitz matrix with the first column $c$ and the first row $r$, which are given by $$\begin{aligned}
c = \begin{bmatrix} 1, & \alpha, & \alpha(1+\alpha), & \alpha(1+\alpha)^2, & \dots, & \alpha(1+\alpha)^{n-2} \end{bmatrix}^T, \quad
r = [1, 0 , 0, \dots, 0].\end{aligned}$$ In order to compute $\E_P \left[ P\Gamma^{-1}P^T \right]$, we use the following lemma, which states that expectation over all permutations separately averages the diagonal and off-diagonal entries of the permuted matrix.
[@wrightRPCD15 Lemma 3.1]\[lem:wright\_lemma\] Given any matrix $Q \in \R^{n\times n}$ and permutation matrix $P$ selected uniformly at random from the set of all permutations, we have $$\E_P[P Q P^T] = \tau_1 I + \tau_2 \bfone\bfone^T,$$ where $$\label{eq:wright-lemma}
\tau_2 = \frac{\bfone^T Q \bfone - \trace(Q)}{n(n-1)} \quad \mbox{and} \quad \tau_1 = \frac{\trace (Q)}{n} - \tau_2.$$
Letting $Q=\Gamma^{-1}$ in Lemma \[lem:wright\_lemma\], we observe that the matrix $\E_P [P\Gamma^{-1}P^T]$ has diagonals equal to one and all the off-diagonal entries equal to each other: $$\label{eq:expected-gamma-inverse-matrix}
\E_P [P\Gamma^{-1}P^T] = (1-\gamma) I + \gamma \bfone\bfone^T,$$ where $\gamma$ can be found as the average of the off-diagonal entries of $\Gamma^{-1}$. The following lemma (whose proof is given in Appendix \[app:lemm-gamma-proof\]) provides an explicit expression for $\gamma$.
\[lemm-gamma\] For any $\alpha \in (0,1/(n-1))$, we have $$\gamma = \frac{(1+\alpha)^n - \alpha n - 1}{\alpha n(n-1)},$$ where $\gamma$ denotes the off-diagonal entries of $\E_P [P\Gamma^{-1}P^T]$ in .
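The Toeplitz representation of $\Gamma^{-1}$ and the closed form for $\gamma$ in Lemma \[lemm-gamma\] can be checked numerically; the sketch below is our own illustration (it uses SciPy only for the `toeplitz` helper):

```python
import numpy as np
from scipy.linalg import toeplitz

n, alpha = 7, 0.05
A = (1 + alpha) * np.eye(n) - alpha * np.ones((n, n))
Gamma = np.eye(n) + np.tril(A, k=-1)          # Gamma = I - N, with N = alpha below the diagonal
Gamma_inv = np.linalg.inv(Gamma)

# Toeplitz form of Gamma^{-1}
c = np.concatenate(([1.0], alpha * (1 + alpha) ** np.arange(n - 1)))
r = np.zeros(n); r[0] = 1.0
print(np.allclose(Gamma_inv, toeplitz(c, r)))

# gamma = average off-diagonal entry of Gamma^{-1}, compared with the closed form
gamma_num = (Gamma_inv.sum() - np.trace(Gamma_inv)) / (n * (n - 1))
gamma_formula = ((1 + alpha) ** n - alpha * n - 1) / (alpha * n * (n - 1))
print(np.isclose(gamma_num, gamma_formula))
```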
Using Lemma \[lemm-gamma\], it follows from the definition of $A$ in and equation that $$\rpcdE = I - \E_P [P\Gamma^{-1}P^T] A = ((n-1)\alpha\gamma-\beta) I + \beta \bfone\bfone^T,$$ where $\beta = \alpha - \gamma + \alpha\gamma(n-2)$. Since $\rpcdE$ is a symmetric matrix, by , it suffices to compute the spectral radius of $\rpcdE$ to obtain the worst-case performance of RPCD with respect to improvement sequence $\Ly_1$. To this end, we note that for any $\alpha \in (0,1/(n-1))$, we have $\rpcdE >0$, since $\rpcdE = \E_P [P \ccd P^T]$ and $\ccd\geq0$ with at least one strictly positive entry in both the diagonal and off-diagonal parts (see also for an explicit formula of $\ccd$). Then, by the Perron-Frobenius Theorem [@varga2009matrix Lemma 2.8], we have $$\begin{aligned}
\rho(\rpcdE) & = \sum_{j=1}^n [\rpcdE]_{ij}, \quad \mbox{for all $i\in[n]$} \\
& = (n-1) (\gamma \alpha + \beta) \\
& = (n-1) (\alpha - \gamma + \alpha \gamma (n-1)) \\
& = 1 - \left[\left(1-\alpha(n-1)\right) \left(1+\gamma(n-1)\right)\right].\end{aligned}$$ Substituting the formula for $\gamma$ from Lemma \[lemm-gamma\] above, we obtain the spectral radius of the RPCD iteration matrix as follows $$\label{eq-rho-RPCD}
\rho(\rpcdE) = 1 - \left(1-\alpha(n-1)\right) \frac{(1+\alpha)^n - 1}{\alpha n} = 1 - \frac{1-t}{n} \left( \frac{\left(1+\frac{t}{n-1}\right)^n - 1}{\frac{t}{n-1}} \right),$$ where $t=\alpha(n-1)$ denotes the diagonal dominance factor (as defined in ).
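As a sanity check (our sketch, not part of the original analysis), one can build $\rpcdE = I - \E_P[P\Gamma^{-1}P^T]A$ explicitly and compare its spectral radius with the closed form just derived:

```python
import numpy as np

n, alpha = 7, 0.05
t = alpha * (n - 1)
A = (1 + alpha) * np.eye(n) - alpha * np.ones((n, n))

gamma = ((1 + alpha) ** n - alpha * n - 1) / (alpha * n * (n - 1))
E_PGiP = (1 - gamma) * np.eye(n) + gamma * np.ones((n, n))   # E_P[P Gamma^{-1} P^T]
B_rpcd = np.eye(n) - E_PGiP @ A                              # expected RPCD epoch matrix

rho_numeric = np.max(np.abs(np.linalg.eigvals(B_rpcd)))
rho_formula = 1 - (1 - t) / n * (((1 + t / (n - 1)) ** n - 1) / (t / (n - 1)))
print(np.isclose(rho_numeric, rho_formula))
```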
For the RCD algorithm, on the other hand, we have (by and ) the following expected iterates $$\E_k \xr^{k+1} = \rcdE \, \xr^k, \quad \mbox{where} \quad \rcdE = I-\frac{1}{n}A.$$ Since $A$ is a symmetric matrix, by , the per-epoch worst-case rate of RCD with respect to improvement sequence $\Ly_1$ can be found as $$\rho(\rcdE)^n = \left(1-\frac{1}{n}\lambda_{\min}(A)\right)^n = \left( 1-\frac{1-t}{n} \right)^n.$$ In Proposition \[thm-monotonic-rpcd-speedup\], we compare the performance of RPCD and RCD with respect to improvement sequence $\Ly_1$. To this end, we define $$s(t,n) = \frac{-\log \rho (\rpcdE)}{-\log \rho(\rcdE)^n}, \label{eq:s-definition}$$ (where $\log$ denotes the natural logarithm), which is equal to the ratio between the numbers of epochs required by RCD and by RPCD to guarantee $\norm{\E x^{\ell n}-\xs}\leq\epsilon$. In particular, $s(t,n)>1$ implies that RPCD has a faster worst-case convergence rate than RCD. In the following proposition, we show that RPCD is faster than RCD for any $t\in(0,1)$ and $n\geq2$, and quantify the amount of improvement.
\[thm-monotonic-rpcd-speedup\] The following statements are true:
- The function $s(t,n)$ is strictly decreasing in $t$ over $(0,1)$.
- $\lim_{t\to 0}s(t,n) = \infty.$
- Let $g(n):= \lim_{t\to 1} s(t,n)$. We have $g(n) \in [3/2,e-1)$, for any $n\geq 2$. Furthermore, $g(n)$ is strictly increasing in $n\geq2$ satisfying $$g(2)=3/2 \quad \mbox{and} \quad \lim_{n\to\infty} g(n) = e-1.$$
![Plot of $s(t,n)$ and $\tilde{s}(t,n)$ versus $t\in(0,1)$ for different values of $n$.[]{data-label="fig:s"}](s.eps){width="\textwidth"}
![Plot of $s(t,n)$ and $\tilde{s}(t,n)$ versus $t\in(0,1)$ for different values of $n$.[]{data-label="fig:s"}](s_tilde.eps){width="\textwidth"}
A consequence of Proposition \[thm-monotonic-rpcd-speedup\] is that RPCD is faster than RCD in the worst case by a factor of $s(t,n) > 1$ for every $t \in (0,1)$. Furthermore, the amount of acceleration $s(t,n)$ goes to infinity as $\alpha\to0$ for any fixed $n$; that is, as the matrix $A$ becomes better conditioned, the speed-up of RPCD over RCD grows without bound. This is consistent with the observation that cyclic orders work well for nearly diagonal, well-conditioned matrices (see e.g. [@varga2009matrix]). Proposition \[thm-monotonic-rpcd-speedup\] is illustrated in Figure \[fig:s\] (left panel), where we plot the parameter $s(t,n)$ as a function of $t$ for different values of $n$.
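For readers who wish to reproduce the left panel of Figure \[fig:s\], a minimal sketch (ours) that evaluates $s(t,n)$ from the closed-form rates is given below; the printed values illustrate the limiting behavior described in Proposition \[thm-monotonic-rpcd-speedup\]:

```python
import numpy as np

def s(t, n):
    """Ratio of worst-case epoch counts, RCD over RPCD, for criterion L1."""
    alpha = t / (n - 1)
    rho_rpcd = 1 - (1 - t) / n * (((1 + alpha) ** n - 1) / alpha)
    rho_rcd_epoch = (1 - (1 - t) / n) ** n
    return np.log(rho_rpcd) / np.log(rho_rcd_epoch)

for n in (2, 10, 100, 1000):
    # s grows without bound as t -> 0, and tends to g(n) in [3/2, e-1) as t -> 1
    print(n, s(1e-6, n), s(1 - 1e-6, n))
print(np.e - 1)                        # limit of g(n) as n -> infinity
```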
We next compare the convergence rate of CCD with those of RPCD and RCD. To this end, as we discuss in Section \[ssec:rate\_criteria\] (cf. ), we use $\rho(\ccd)$ as the asymptotic per-epoch worst-case convergence rate of CCD, whereas for comparison to RCD, we use a per-epoch rate of $\rho(\rcdE)^n$. Note that, as discussed in , $\ccd=\ccdpi$ for all $\pi$, and hence $\rho(\ccd)=\rho(\ccdpi)$ for all $\pi$. Although explicit calculation of $\rho(\ccd)$ appears to be challenging, we prove that the known upper bound [@ccd_vs_rcd Theorem 4.12] on $\rho(\ccd)$ is smaller than $\rho(\rpcdE)$, which together with Proposition \[thm-monotonic-rpcd-speedup\] implies the following result.
\[thm:ccd\] Let $f$ be a quadratic function of the form , whose Hessian matrix is given by . Then, the iteration matrix of CCD and the expected iteration matrices of RPCD and RCD satisfy $$\rho(\ccd) < \rho(\rpcdE) < \rho(\rcdE)^n,$$ for any $\alpha\in(0,1/(n-1))$ and $n\geq2$.
Convergence rates of RPCD and RCD in improvement sequences $\Ly_2$ & $\Ly_3$ {#ssec:lyapunov2and3}
----------------------------------------------------------------------------
In this section, we compare the rate of RPCD and RCD with respect to improvement sequences $\Ly_2$ and $\Ly_3$. When the Hessian matrix $A$ satisfies , the smallest eigenvalue of $A$ can be found as follows $$\label{eq:mu-definition}
\mu = 1-t = 1-\alpha (n-1).$$ Plugging this value into the convergence guarantee of RCD in , we can obtain convergence guarantees for both improvement sequences $\Ly_2$ and $\Ly_3$, since the left-hand side of upper bounds both $\frac{1}{2}\Ly_2$ and $\Ly_3$. However, for the particular problem class we consider in this paper, we derive a tighter convergence rate guarantee for RCD in the next proposition, whose proof is deferred to Appendix \[app:rcd-bound\].
\[lemma-rcd-upper bound\] Let $f$ be a quadratic function of the form , whose Hessian matrix is given by . Then, RCD iterations satisfy $$\E \|\xr^{k} - x^*\|^2 \leq \left(1-\frac{2\mu}{n} + \frac{\mu^2}{n} \right)^{k} \|x^0 - x^*\|^2, \label{eq:upper bound-rcd norm square}$$ and $$\E \left( f(\xr^{k}) - f(x^*) \right) \leq \left(1-\frac{\mu}{n} \right)^{k} \left( f(x^0) - f(x^*) \right). \label{eq:upper bound-rcd subopt}$$
![Tightness of the bounds in Proposition \[lemma-rcd-upper bound\] when $n=1000$ and $\alpha=\frac{0.9}{n-1}$: Left figure for and right figure for .](lem2_norm_tightness.eps){width="\textwidth"}
![Tightness of the bounds in Proposition \[lemma-rcd-upper bound\] when $n=1000$ and $\alpha=\frac{0.9}{n-1}$: Left figure for and right figure for .](lem2_f_tightness.eps){width="\textwidth"}
We observe that the upper bound in is smaller (tighter) than the upper bound in for any $\alpha \in (0,1/(n-1))$ because $$1-\frac{2\mu}{n} + \frac{\mu^2}{n} = 1-\frac{\mu(2-\mu)(1+\mu)}{n(1+\mu)} = 1-\frac{\mu(2+\mu-\mu^2)}{n(1+\mu)} < 1 - \frac{2\mu}{n(1+\mu)},$$ where the inequality follows from $\mu-\mu^2>0$, which holds since $\mu = 1-\alpha (n-1) \in (0,1)$.
We next analyze the performance of RPCD in the following proposition and show that the convergence rate guarantee of RPCD is tighter than the convergence rate guarantee of RCD in Proposition \[lemma-rcd-upper bound\]. The proof of Proposition \[theo-subopt-rpcd\] is given in Appendix \[app:pot6\].
\[theo-subopt-rpcd\] Let $f$ be a quadratic function of the form , whose Hessian matrix is given by . Then, RPCD iterations satisfy $$\label{eq:upper bound-rpcd norm square}
\E \|\xp^{\ell n} - x^*\|^2 \leq \left( 1 - \frac{2\mu}{n} \left( \frac{(1+\alpha)^n - 1}{\alpha} \right) + \frac{\mu^2}{n} \left( \frac{(1+\alpha)^{2n} - 1}{\alpha(\alpha+2)}\right) \right)^\ell \, \|x^0 - x^*\|^2,$$ and $$\label{eq:rpcd-f-upper-bound}
\E f(\xp^{\ell n}) - f(x^*) \leq \left( 1- \frac{\mu}{n} \left( \frac{(1+\alpha)^{2n}-1}{\alpha(\alpha+2)} \right) \right)^\ell \left( f(x^0) - f(x^*) \right).$$
![Tightness of the bounds in Proposition \[theo-subopt-rpcd\] when $n=1000$ and $\alpha=\frac{0.9}{n-1}$: Left figure for and right figure for .](thm5_norm_tightness.eps){width="\textwidth"}
![Tightness of the bounds in Proposition \[theo-subopt-rpcd\] when $n=1000$ and $\alpha=\frac{0.9}{n-1}$: Left figure for and right figure for .](thm5_f_tightness.eps){width="\textwidth"}
We next compare the convergence rates we derive for the RCD and RPCD algorithms. In particular, we consider the convergence rate of both algorithms in improvement sequence $\Ly_2$ since we obtain tighter upper bounds for it. Comparing the convergence rate bounds for RCD and RPCD in and , respectively, we can observe that RPCD is faster (in terms of the best known rate guarantees) than RCD by a factor of $$\tilde{s}(t,n) := \frac{-\log\left( 1 - \frac{2\mu}{n} \left( \frac{(1+\alpha)^n - 1}{\alpha} \right) + \frac{\mu^2}{n} \left( \frac{(1+\alpha)^{2n} - 1}{\alpha(\alpha+2)}\right) \right)}{-n\log\left( 1- \frac{2\mu}{n} + \frac{\mu^2}{n} \right)},$$ which is plotted in Figure \[fig:s\] (right panel) in the interval $t\in(0,1)$ for different values of $n$. We observe from this figure that the convergence rate bound for RPCD is better than the one for RCD for all $t\in(0,1)$ and $n\geq2$. Furthermore, the difference in convergence rate bounds increases as $t$ gets smaller, i.e., as the Hessian matrix becomes more diagonally dominant. We can also show that $\tilde{s}(t,n)$ behaves similarly to $s(t,n)$ as $t\to 1$, where the limiting values can be found in Proposition \[thm-monotonic-rpcd-speedup\].
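A corresponding sketch (ours) that evaluates $\tilde{s}(t,n)$ directly from the two per-epoch rate bounds above is the following:

```python
import numpy as np

def s_tilde(t, n):
    """Ratio of the per-epoch rate bounds (criterion L2), RCD over RPCD."""
    alpha, mu = t / (n - 1), 1 - t
    rcd_rate = 1 - 2 * mu / n + mu ** 2 / n              # per-iteration bound for RCD
    rpcd_rate = (1 - 2 * mu / n * (((1 + alpha) ** n - 1) / alpha)
                 + mu ** 2 / n * (((1 + alpha) ** (2 * n) - 1) / (alpha * (alpha + 2))))
    return np.log(rpcd_rate) / (n * np.log(rcd_rate))

for t in (0.1, 0.5, 0.9):
    print(t, s_tilde(t, 10), s_tilde(t, 100))   # > 1 everywhere; larger for smaller t
```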
Numerical Experiments {#sec:experiments}
=====================
Here we compare the performance of CCD, RPCD, and RCD for the quadratic problem with Hessian matrix . In Figure \[fig:worst\], we use a worst-case initialization $x^0=\bfone$, for $n\in\{1000,10000\}$ and $\alpha \in\left\{ \frac{0.01}{n-1},\frac{0.50}{n-1},\frac{0.99}{n-1} \right\}$. We observe that CCD is faster than RPCD, which is faster than RCD. This behavior is in accordance with the theoretical results in Propositions \[thm:ccd\]-\[theo-subopt-rpcd\]. Furthermore, as $\alpha$ decreases, we can see that the ratio between the convergence rates of RPCD and RCD increases, consistent with Proposition \[thm-monotonic-rpcd-speedup\] (see also Figure \[fig:s\]). We can also observe from the right column in Figure \[fig:worst\] that when $\alpha$ is close to $1/(n-1)$, the ratio between the convergence rates of RPCD and RCD is close to the theoretical limits obtained in Proposition \[thm-monotonic-rpcd-speedup\] (see part [*(iii)*]{}, which shows that the ratio is in the interval $[3/2,\,e-1)$). Figure \[fig:random\] plots similar results to Figure \[fig:worst\], but for a random initialization rather than the worst-case initialization. Convergence rates depicted in Figure \[fig:random\] are similar to those of Figure \[fig:worst\], due to the fact that $x^{\ell n}$ becomes collinear with the vector of ones as $\ell$ increases (as $\bfone$ is the leading eigenvector of the expected iteration matrix), so that the worst-case convergence rate dictates the performance of the algorithms.
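A minimal script (ours; it uses a smaller $n$ than in the paper so that the sketch runs quickly) that should reproduce the qualitative ordering observed in these experiments is given below:

```python
import numpy as np

def epoch(A, x, order):
    """Exact-line-search CD steps over the given sequence of coordinates."""
    for i in order:
        x[i] -= (A @ x)[i] / A[i, i]
    return x

rng = np.random.default_rng(0)
n, n_epochs = 300, 30
alpha = 0.5 / (n - 1)                                # i.e. t = 0.5
A = (1 + alpha) * np.eye(n) - alpha * np.ones((n, n))
f = lambda z: 0.5 * z @ A @ z

x = {"CCD": np.ones(n), "RPCD": np.ones(n), "RCD": np.ones(n)}   # worst-case initialization
for _ in range(n_epochs):
    x["CCD"] = epoch(A, x["CCD"], range(n))                      # fixed cyclic order
    x["RPCD"] = epoch(A, x["RPCD"], rng.permutation(n))          # fresh order every epoch
    x["RCD"] = epoch(A, x["RCD"], rng.integers(n, size=n))       # n i.i.d. draws = one "epoch"

for name, xk in x.items():
    print(name, f(xk))                 # expected ordering: f_CCD < f_RPCD < f_RCD
```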
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n1k-a01.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n1k-a50.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n1k-a99.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n10k-a01.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n10k-a50.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with worst-case initialization for $n=1000$ (top row) and $n=10000$ (bottom row): $\alpha=\frac{0.01}{n-1}$ in the left column, $\alpha=\frac{0.50}{n-1}$ in the middle column, and $\alpha=\frac{0.99}{n-1}$ in the right column.[]{data-label="fig:worst"}](worst-n10k-a99.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with random initialization for $n=1000$: $\alpha=\frac{0.01}{n-1}$ (left figure), $\alpha=\frac{0.50}{n-1}$ (middle figure), and $\alpha=\frac{0.99}{n-1}$ (right figure).[]{data-label="fig:random"}](random-n1k-a01.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with random initialization for $n=1000$: $\alpha=\frac{0.01}{n-1}$ (left figure), $\alpha=\frac{0.50}{n-1}$ (middle figure), and $\alpha=\frac{0.99}{n-1}$ (right figure).[]{data-label="fig:random"}](random-n1k-a50.eps){width="\textwidth"}
![CCD vs RPCD vs RCD with random initialization for $n=1000$: $\alpha=\frac{0.01}{n-1}$ (left figure), $\alpha=\frac{0.50}{n-1}$ (middle figure), and $\alpha=\frac{0.99}{n-1}$ (right figure).[]{data-label="fig:random"}](random-n1k-a99.eps){width="\textwidth"}
Conclusion {#sec:conclusion}
==========
In this paper, we surveyed the known results on the performance of RPCD for special cases of strongly convex quadratic objectives and added to these results by analyzing a class of convex quadratic problems with diagonally dominant Hessians. Using the distance of the expected iterates to the optimal solution as the convergence criterion, we characterized the ratio between the convergence rates of RPCD and RCD as a function of a parameter that represents the extent of diagonal dominance. We showed that this ratio goes to infinity as the Hessian matrix becomes more diagonally dominant, whereas it approaches a constant in the interval $[3/2, \, e-1)$ as the Hessian becomes less diagonally dominant. We also showed that CCD outperforms both RPCD and RCD for this class of problems. When the expected distance of the iterates or the expected function value of the iterates is used as the convergence criterion, we showed that the worst-case convergence rate bounds derived for RPCD are tighter than the ones for RCD. This is in accordance with our first set of results, i.e., the case when the distance of the expected iterates is used as the convergence criterion. Computational experiments validate our theoretical results, which fill a gap between the theoretical guarantees for RPCD and its empirical performance.
[^1]: Department of Management Science and Information Systems, Rutgers University, 100 Rockafellar Road, Piscataway, NJ 08854. `mg1366@rutgers.edu`.
[^2]: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139. `asuman@mit.edu`.
[^3]: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139. `denizcan@mit.edu`.
[^4]: Department of Computer Sciences and Wisconsin Institute for Discovery, University of Wisconsin - Madison, 1210 West Dayton Street, Madison, WI 53706. `swright@cs.wisc.edu`.
[^5]: The results can be generalized for quadratic functions of the form $f(x) = \frac12 x^TAx - b^Tx$; however, for simplicity and compatibility with the earlier results in the literature, we consider the case $b=0$.
[^6]: $A$ is a permutation invariant matrix if $PAP^T=A$, for any permutation matrix $P$.
[^7]: Since $A$ has two eigenvalues: $\delta+n(1-\delta)$ with multiplicity $1$ and $\delta$ with multiplicity $n-1$, the Lipschitz constant becomes $L=\delta+n(1-\delta)$, for $\delta\leq1$; and as $\delta\to0$, $L \to n$.
---
abstract: 'This paper is devoted to establishing the global existence and uniqueness of a mild solution of the modified Navier-Stokes equations with a small initial data in the critical Besov-Q space.'
address:
- 'College of Mathematics, Qingdao University, Qingdao, Shandong 515063, China'
- 'Department of Mathematics and Statistics, Memorial University, St. John’s, NL A1C, 5S7, Canada '
- 'School of Mathematics and Statics, Wuhan University, Wuhan, 430072, China.'
author:
- Pengtao Li
- Jie Xiao
- Qixiang Yang
title: 'Global Mild Solutions of Modified Navier-Stokes Equations with Small Initial Data in Critical Besov-Q Spaces'
---
[^1]
Statement of the main theorem {#intro}
=============================
For $\beta>1/2$, the Cauchy problem of the modified Navier-Stokes equations on the half-space $\mathbb{R}^{1+n}_{+}=
(0,\infty) \times \mathbb{R}^{n}, n\geq 2,$ is to decide the existence of a solution $u$ to: $$\label{eqn:ns}
\left\{\begin{array}{ll} \frac{\partial u} {\partial t}
+(-\Delta)^{\beta} u + u \cdot \nabla u -\nabla p=0,
& \mbox{ in } \mathbb{R}^{1+n}_{+}; \\
\nabla \cdot u=0,
& \mbox{ in } \mathbb{R}^{1+n}_{+}; \\
u|_{t=0}= a, & \mbox{ in } \mathbb{R}^{n},
\end{array}
\right.$$ where $(-\Delta)^{\beta}$ represents the $\beta$-order Laplace operator defined by the Fourier transform in the space variable: $$\widehat{(-\Delta)^{\beta}u}(\cdot,\xi)= |\xi|^{2\beta} \hat{u}(\cdot,\xi).$$
Here, it is appropriate to point out that (\[eqn:ns\]) is a generalization of the classical Navier-Stokes system and two-dimensional quasi-geostrophic equation which have continued to attract attention extensively, and that the dissipation $(-\Delta)^{\beta}u$ still retains the physical meaning of the nonlinearity $u\cdot\nabla u+\nabla p$ and the divergence-free condition $\nabla\cdot u=0$.
Upon letting $R_{j}, j=1,2,\cdots n$, be the Riesz transforms, writing $$\nonumber
\begin{cases}
\mathbb{P}= \{\delta_{l,l'}+ R_{l}R_{l'}\}, l,l'=1,\cdots,n;\\
\mathbb{P}\nabla (u\otimes u)= \sum\limits_{l}
\frac{\partial}{\partial x_{l}} (u_{l}u) -
\sum\limits_{l} \sum\limits_{l'} R_{l}R_{l'} \nabla (u_{l} u_{l'});\\
\widehat{e^{-t(-\Delta)^{\beta}}f}(\xi) =
e^{-t|\xi|^{2\beta}}\hat{f}(\xi),
\end{cases}$$ and using $\nabla \cdot u=0$, we can see that a solution of the above Cauchy problem is then obtained via the integral equation: $$\label{eqn:mildsolution}
\begin{cases}
u(t,x)= e^{-t (-\Delta)^{\beta}} a(x) - B(u,u)(t,x);\\
B(u,u)(t,x)\equiv\int^{t}_{0} e^{-(t-s)(-\Delta)^{\beta}}
\mathbb{P}\nabla (u\otimes u) ds,
\end{cases}$$ which can be solved by a fixed-point method whenever the convergence is suitably defined in a function space. Solutions of (\[eqn:mildsolution\]) are called mild solutions of (\[eqn:ns\]). The notion of such a mild solution was pioneered by Kato-Fujita [@KF] in the 1960s. Over the last few decades, many important results about the mild solutions to (\[eqn:ns\]) have been established; see, for example, Cannone [@C1; @C2], Germin-Pavlovic-Staffilani [@GPS], Giga-Miyakawa [@GM], Kato [@Kat], Koch-Tataru [@KT], Wu [@W1; @W2; @W3; @W4], and their references including Kato-Ponce [@KP] and Taylor [@Ta].
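Since $e^{-t(-\Delta)^{\beta}}$ is defined through a Fourier multiplier, it can be applied numerically on a periodic grid via the FFT. The following one-dimensional sketch is our illustration only (it plays no role in the analysis of this paper) and simply makes the definition concrete:

```python
import numpy as np

def fractional_heat(a, t, beta, length=2 * np.pi):
    """Apply e^{-t(-Delta)^beta} to samples of a on a uniform periodic grid."""
    n = len(a)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # discrete frequencies
    return np.fft.ifft(np.exp(-t * np.abs(xi) ** (2 * beta)) * np.fft.fft(a)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
a = np.sign(np.sin(x))                                 # a rough initial datum
print(fractional_heat(a, t=0.1, beta=0.75)[:5])        # smoothed values
```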
The main purpose of this paper is to establish the following global existence and uniqueness of a mild solution to (\[eqn:ns\]) with a small initial data in the critical Besov-Q space.
\[mthmain\] Given $$\begin{cases}
\beta>\frac{1}{2};\\
1< p, q<\infty;\\
\gamma_{1}=\gamma_{2}-2\beta+1;\\
m>\max \{p,\frac{n}{2\beta}\};\\
0<m'<\min\{1,\frac{p}{2\beta}\}.
\end{cases}$$ If the index $(\beta, p,\gamma_2)$ obeys $$\nonumber1< p\leq2\ \ \&\ \ \frac{2\beta-2}{p}<\gamma_{2}\leq\frac{n}{p}$$ or $$\nonumber 2<p<\infty\ \ \&\ \ \beta-1<\gamma_{2}\leq\frac{n}{p},$$ then (\[eqn:ns\]) has a unique global mild solution in $(\B^{\gamma_{1}, \gamma_{2}}_{p, q, m, m'})^{n}$ for any initial data $a$ with $\|a\|_{(\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q})^{n}}$ being small. Here the symbols $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$, and $\B^{\gamma_{1},
\gamma_{2}}_{p, q, m, m'}$ stand for the so-called Besov-Q spaces and their induced tent spaces, and will be determined properly in Sections \[sec3\] and \[sec4\].
Needless to say, our current work grows out of already-known results. In [@Lio], Lions proved the global existence of classical solutions of (\[eqn:ns\]) when $\beta\geq\frac{5}{4}$ and $n=3$. This existence result was extended to $\beta\geq \frac{1}{2}+\frac{n}{4}$ by Wu [@W1]. Moreover, for the important case $\beta<\frac{1}{2}+\frac{n}{4}$, Wu [@W2; @W3] established the global existence for (\[eqn:ns\]) in the Besov spaces $\dot{B}^{1+\frac{n}{p}-2\beta,q}_{p}(\mathbb{R}^{n})$ for $1\leq q\leq \infty$ and for either $\frac{1}{2}<\beta$ and $p=2$ or $\frac{1}{2}<\beta\leq1$ and $2<p<\infty$, and in $\dot{B}^{r,\infty}_{2}(\mathbb{R}^{n})$ with $r>\max\{1,1+\frac{n}{p}-2\beta\}$; see also [@W4] concerning the corresponding regularity. Importantly, Koch-Tataru [@KT] studied the global existence and uniqueness of (\[eqn:ns\]) with $\beta=1$ via introducing $BMO^{-1}(\mathbb{R}^{n})$. Extending Koch-Tataru’s work [@KT], Xiao [@X; @X1] introduced the Q-spaces $Q^{-1}_{0<\alpha<1}(\mathbb{R}^{n})$ to investigate the global existence and uniqueness of the classical Navier-Stokes system. The ideas of [@X] were developed by Li-Zhai [@LZ] to study the global existence and uniqueness of (\[eqn:ns\]) with small data in a class of Q-type spaces $Q_{\alpha}^{\beta,-1}(\mathbb{R}^{n})$ under $\beta\in(\frac{1}{2}, 1)$. Recently, Lin-Yang [@LY] obtained the global existence and uniqueness of (\[eqn:ns\]) with small initial data in a diagonal Besov-Q space for $\beta\in(\frac{1}{2}, 1)$.
In fact, the above historical citations lead us to make a decisive two-fold observation. On the one hand, thanks to the fact that (\[eqn:ns\]) is invariant under the scaling $$\nonumber\begin{cases}
u_{\lambda}(t,x) = \lambda^{2\beta-1} u(\lambda^{2\beta}t, \lambda
x);\\
p_{\lambda}(t,x) = \lambda^{4\beta-2} p(\lambda^{2\beta}t, \lambda
x),
\end{cases}$$ the initial data space $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ is critical for (\[eqn:ns\]) in the sense that the space is invariant under the scaling $$\label{eq2}
f_{\lambda}(x)=\lambda^{2\beta-1}f(\lambda x).$$ A simple computation, along with letting $\beta=1$ in (\[eq2\]), indicates that the function spaces: $$\begin{cases}
\dot{L}^{2}_{\frac{n}{2}-1}(\mathbb{R}^{n})=\dot{B}^{-1+\frac{n}{2},2}_{2}(\mathbb{R}^{n});\\
L^{n}(\mathbb{R}^{n});\\
\dot{B}^{-1+\frac{n}{p},q}_{p}(\mathbb{R}^{n});\\
BMO^{-1}(\mathbb{R}^{n}),
\end{cases}$$ are critical for (\[eqn:ns\]) with $\beta=1$. Moreover, (\[eq2\]) under $\beta>1/2$ is valid for functions in the homogeneous Besov spaces $\dot{B}^{1+\frac{n}{2}-2\beta,1}_{2}(\mathbb{R}^{n})$ and $\dot{B}^{1+\frac{n}{2}-2\beta,\infty}_{2}(\mathbb{R}^{n})$ attached to (\[eqn:ns\]). On the other hand, it is suitable to mention the following relations: $$\begin{cases}
\dot{B}^{\gamma_{1},
\frac{n}{p}}_{p,q}=\dot{B}^{\gamma_{1},q}_{p}(\mathbb{R}^{n})\ \hbox{for}\ 1\leq p, q<\infty\ \&\ -\infty<\gamma_{1}<\infty;\\
\dot{B}^{1+\frac{n}{p}-2\beta,\frac{n}{p}}_{p_{0},q_{0}}\supseteq
\dot{B}^{1+\frac{n}{p}-2\beta,q}_{p}(\mathbb{R}^{n})\ \hbox{for}\ 1<p\leq p_{0}\ \&\ 1<q\leq q_{0}<\infty\ \&\ \beta>0;\\
\dot{B}^{\alpha-\beta+1,\alpha+\beta-1}_{2,2}=Q^{\beta}_{\alpha}(\mathbb{R}^{n})
\ \hbox{for}\ \alpha\in(0,1)\ \&\ \beta\in(1/2,1)\ \& \
\alpha+\beta-1\geq0.
\end{cases}$$
In order to briefly describe the argument for Theorem \[mthmain\], we should point out that the function spaces used in [@KT; @X; @LZ] have a common trait in the structure, i.e., these spaces can be seen as the Q-spaces with $L^{2}$ norm, and the advantage of such spaces is that Fourier transform plays an important role in estimating the bilinear term on the corresponding solution spaces. Nevertheless, for the global existence and uniqueness of a mild solution to (\[eqn:ns\]) with a small initial data in $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$, we have to seek a new approach. Generally speaking, a mild solution of (\[eqn:ns\]) is obtained by using the following method. Assume that the initial data belongs to $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}(\mathbb{R}^{n})$. Via the iteration process: $$\begin{cases}
u^{(0)}(t,x)=e^{-t(-\Delta)^{\beta}} a(x);\\
u^{(j+1)}(t,x)=u^{(0)}(t,x)-B(u^{(j)},u^{(j)})(t,x)\quad\hbox{for}\quad
j=0,1,2,...,
\end{cases}$$ we construct a contraction mapping on a space in $\mathbb{R}^{1+n}_{+}$, denoted by $X(\mathbb{R}^{1+n}_{+})$. With the initial data being small, the fixed point theorem implies that there exists a unique mild solution of (\[eqn:ns\]) in $X(\mathbb{R}^{1+n}_{+})$. In this paper, we choose $X(\mathbb{R}^{1+n}_{+})=\B^{\gamma_{1},\gamma_{2}}_{p, q, m, m'}$ associated with $\dot{B}^{\gamma_{1},\gamma_{2}}_{p, q}$. Owing to Theorem \[th1\], we know that if $f\in \dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ then $
e^{-t(-\Delta)^{\beta}}f(x)\in X(\mathbb{R}^{1+n}_{+})$. Hence the construction of contraction mapping comes down to prove the following assertion: the bilinear operator $$\begin{array}{rl}
B(u,v)=&\int^{t}_{0} e^{-(t-s)(-\Delta)^{\beta}} \mathbb{P}\nabla
(u\otimes v) ds\end{array}$$ is bounded from $(X(\mathbb{R}^{1+n}_{+}))^n \times
(X(\mathbb{R}^{1+n}_{+}))^n $ to $(X(\mathbb{R}^{1+n}_{+}))^n $.
For this purpose, using multi-resolution analysis, we decompose $B_{l}(u,v)$ into several parts based on the relation between $t$ and $2^{-2j\beta}$, and expand each part in terms of $\{\Phi^{\varepsilon}_{j,k}\}$. More importantly, Lemmas \[lem53\] & \[lem54\] enable us to obtain an estimate from $(\B^{\gamma_{1},\gamma_{2}}_{p,q, m,m'})^n\times (\B^{\gamma_{1},\gamma_{2}}_{p,q, m,m'})^n$ to $(\B^{\gamma_{1},\gamma_{2}}_{p,q, m,m'})^n$.
[(i)]{} Our initial spaces in Theorem \[mthmain\] include both $\dot{B}^{1+\frac{n}{p}-2\beta,q}_{p}(\mathbb{R}^{n})$ in Wu [@W1; @W2; @W3; @W4] and $Q_{\alpha}^{\beta,-1}(\mathbb{R}^{n})$ in Xiao [@X] and Li-Zhai [@LZ]. Moreover, in [@LZ; @LY], the scope of $\beta$ is $(\frac{1}{2}, 1)$, while our method is valid for all $\beta>\frac{1}{2}$.
[(ii)]{} We point out that the spaces $\dot{B}^{\gamma_{1}, \gamma_{2}}_{p,q}$ provide many new critical initial spaces for which the well-posedness of the equations (\[eqn:ns\]) holds. By Lemma \[lem:2.9\], for $\beta=1$, Theorem \[mthmain\] holds for the initial spaces $\dot{B}^{\gamma_{1}, \gamma_{2}}_{p,q}$ satisfying $$\dot{B}^{-1+w, w}_{2,q'}\subset Q^{-1}_{\alpha}(\mathbb{R}^{n}),\ q'<2, w>0$$ or $$Q^{-1}_{\alpha}(\mathbb{R}^{n})\subset \dot{B}^{-1+w, w}_{2,q''}\subset BMO^{-1}(\mathbb{R}^{n}),\ 2<q''<\infty, w>0.$$ In some sense, $\dot{B}^{\gamma_{1}, \gamma_{2}}_{p,q}$ fills the gap between the critical spaces $Q^{-1}_{\alpha}(\mathbb{R}^{n})$ and $BMO^{-1}(\mathbb{R}^{n})$. See also Lemma \[lem:2.9\], Corollary \[co:1\] and Remark \[remark1\].
[(iii)]{} For an initial datum $a\in\dot{B}^{\gamma_{1}, \gamma_{2}}_{p,q}$, the index $\gamma_{1}$ represents the regularity of $a$. In Theorem \[mthmain\], taking $\dot{B}^{\gamma_{1}, \gamma_{2}}_{p,q}=\dot{B}^{-1+w, w}_{p,q}$ with $w>0$, $p>2$ and $q>2$ yields $\dot{B}^{-1+w, w}_{p,q}\subset BMO^{-1}(\mathbb{R}^{n})$. Compared with the ones in $BMO^{-1}(\mathbb{R}^{n})$, the elements of $\dot{B}^{-1+w, w}_{p,q}$ have higher regularity. Furthermore, Theorem \[mthmain\] implies that the regularity of our solutions becomes higher as $\gamma_{1}$ grows.
[(iv)]{} Interestingly, Federbush [@Feder] employed the divergence-free wavelets to study the classical Navier-Stokes equations, while the wavelets used in this paper are classical Meyer wavelets. In addition, when constructing a contraction mapping, Federbush’s method was based on the estimates of “long wavelength residues". Nevertheless, our wavelet approach based on Lemmas \[le6\]-\[le7\] and the Cauchy-Schwarz inequality is to convert the bilinear estimate of $B(u,v)$ into various efficient computations involved in the wavelet coefficients of $u$ and $v$.
The remainder of this paper is organized as follows. In Section \[sec2\], we list some preliminary knowledge on wavelets and give the wavelet characterization of the Besov-Q spaces. In Sections \[sec3\]-\[sec4\] we define the initial data spaces and the corresponding solution spaces. Section \[sec5\] carries out a necessary analysis of some non-linear terms and a priori estimates. In Section \[sec6\], we verify Theorem \[mthmain\] via Lemmas \[le6\]-\[le7\], which will be proved in Sections \[sec7\]-\[sec8\], respectively.
[*Notation*]{}: ${\mathsf U}\approx{\mathsf V}$ represents that there is a constant $c>0$ such that $c^{-1}{\mathsf V}\le{\mathsf
U}\le c{\mathsf V}$ whose right inequality is also written as ${\mathsf U}\lesssim{\mathsf V}$. Similarly, one writes ${\mathsf V}\gtrsim{\mathsf U}$ for ${\mathsf V}\ge c{\mathsf U}$.
Some preliminaries {#sec2}
==================
First of all, we would like to say that we will always utilize tensor-product real-valued orthogonal wavelets, which may be regular Daubechies wavelets (used only for characterizing Besov and Besov-Q spaces) or classical Meyer wavelets, and also to recall that the regular Daubechies wavelets are Daubechies wavelets that are smooth enough and have sufficiently many vanishing moments for the spaces under consideration; see Lemma \[le9\] and the part before Lemma \[lem:c\].
Next, we present some preliminaries on Meyer wavelets $\Phi^{\epsilon}(x)$ in detail and refer the reader to [@Me], [@Woj] and [@Yang1] for further information. Let $\Psi^{0}$ be an even function in $ C^{\infty}_{0}
([-\frac{4\pi}{3}, \frac{4\pi}{3}])$ with $$\left\{ \begin{aligned}
&0\leq\Psi^{0}(\xi)\leq 1; \nonumber\\
&\Psi^{0}(\xi)=1\text{ for }|\xi|\leq \frac{2\pi}{3}.\nonumber
\end{aligned} \right.$$ If $$\begin{array}{rl}
\Omega(\xi)= \sqrt{(\Psi^{0}(\frac{\xi}{2}))^{2}-(\Psi^{0}(\xi))^{2}},
\end{array}$$ then $\Omega$ is an even function in $ C^{\infty}_{0}([-\frac{8\pi}{3}, \frac{8\pi}{3}])$. Clearly, $$\left\{ \begin{aligned}
&\Omega(\xi)=0\text{ for }|\xi|\leq \frac{2\pi}{3};\nonumber\\
&\Omega^{2}(\xi)+\Omega^{2}(2\xi)=1=\Omega^{2}(\xi)+\Omega^{2}(2\pi-\xi)\text{
for }\xi\in [\frac{2\pi}{3},\frac{4\pi}{3}].\nonumber
\end{aligned} \right.$$
Let $\Psi^{1}(\xi)=
\Omega(\xi) e^{-\frac{i\xi}{2}}$. For any $\epsilon=
(\epsilon_{1},\cdots, \epsilon_{n}) \in \{0,1\}^{n}$, define $\Phi^{\epsilon}(x)$ via the Fourier transform $\hat{\Phi}^{\epsilon}(\xi)=
\prod\limits^{n}_{i=1} \Psi^{\epsilon_{i}}(\xi_{i})$. For $j\in
\mathbb{Z}$ and $k\in\mathbb{Z}^{n}$, set $\Phi^{\epsilon}_{j,k}(x)=
2^{\frac{nj}{2}} \Phi^{\epsilon} (2^{j}x-k)$. Furthermore, we put $$\nonumber
\left\{ \begin{aligned}
E_{n}&=\{0,1\}^{n}\backslash\{0\}; \\
F_{n}&=\{(\epsilon,k):\epsilon\in E_{n}, k\in\mathbb{Z}^{n}\};\\
\Lambda_{n}&=\{(\epsilon,j,k), \epsilon\in E_{n}, j\in\mathbb{Z},
k\in \mathbb{Z}^{n}\},
\end{aligned} \right.$$ and for any $\epsilon\in \{0,1\}^{n}, k\in \mathbb{Z}^{n}$ and a function $f$ on $\mathbb R^n$, we write $f^{\epsilon}_{j,k}= \langle f,
\Phi^{\epsilon}_{j,k}\rangle .$
The following result is well-known.
\[le1\] The Meyer wavelets $\{\Phi^{\epsilon}_{j,k}\}_{(\epsilon,j,k)\in
\Lambda_{n}}$ form an orthogonal basis in $L^{2}(\mathbb{R}^{n})$. Consequently, for any $f\in L^{2}(\mathbb{R}^{n})$, the following wavelet decomposition holds in the $L^2$ convergence sense: $$\begin{array}{c}
f=\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}f^{\epsilon}_{j,k}\Phi^{\epsilon}_{j,k}.
\end{array}$$
Moreover, for $j\in\mathbb{Z}$, let $$\begin{array}{rl}
&P_{j}f= \sum\limits_{k\in \mathbb{Z}^{n}} f^{0}_{j,k}
\Phi^{0}_{j,k}\ \ \text{ and }\ \ Q_{j}f= \sum\limits_{(\epsilon,k)\in
F_{n}} f^{\epsilon}_{j,k} \Phi^{\epsilon}_{j,k}.
\end{array}$$ For the above Meyer wavelets, by Lemma \[le1\], the product of any two functions $u$ and $v$ can be decomposed as $$\label{eq:decompose}
\begin{array}{rl}
uv= & \sum\limits_{j\in \mathbb{Z}} P_{j-3}u Q_{j}v +
\sum\limits_{j\in \mathbb{Z}} Q_{j}u Q_{j}v
+ \sum\limits_{0<j-j'\leq 3} Q_{j}u Q_{j'}v \\
&+ \sum\limits_{0<j'-j\leq 3} Q_{j}u Q_{j'}v + \sum\limits_{j\in
\mathbb{Z}} Q_{j}u P_{j-3}v.
\end{array}$$
Suppose that $\varphi$ is a function on $\mathbb{R}^{n}$ satisfying $$\nonumber
\left\{ \begin{aligned}
&\text{supp}\hat{\varphi}\subset\{\xi\in\mathbb{R}^{n}:
|\xi|\leq1\}; \\
&\hat{\varphi}(\xi)=1\text{ for }
\{\xi\in\mathbb{R}^{n}: |\xi|\leq\frac{1}{2}\},\\
\end{aligned} \right.$$ and that $$\varphi_{v}(x)=2^{n(v+1)}\varphi(2^{v+1}x)-2^{nv}\varphi(2^{v}x)\ \ \forall\ \ v\in \mathbb{Z},$$ are the Littlewood-Paley functions; see [@P].
Given $-\infty<\alpha<\infty$, $0<p,q<\infty$. A function $f\in \mathbb{S}'(\mathbb{R}^{n})/\mathcal{P}(\mathbb{R}^{n})$ belongs to $\dot{B}^{\alpha,q}_{p}(\mathbb{R}^{n})$ if $$\begin{array}{rl}
\|f\|_{\dot{B}^{\alpha,q}_{p}}&=\Big[\sum\limits_{v\in\mathbb{Z}}2^{qv\alpha}\|\varphi_{v}\ast
f\|_{p}^{q}\Big]^{\frac{1}{q}}<\infty.
\end{array}$$
The following lemma is essentially known.
\[le9\][(Meyer [@Me])]{} Let $\{\Phi^{\epsilon,1}_{j,k}\}_{(\epsilon,j,k)\in\Lambda_{n}}$ and $\{\Phi^{\epsilon,2}_{j,k}\}_{(\epsilon,j,k)\in\Lambda_{n}}$ be two different wavelet bases which are sufficiently regular. If $$a^{\epsilon,\epsilon'}_{j,k,j',k'}=\langle \Phi^{\epsilon,1}_{j,k},\ \Phi^{\epsilon',2}_{j',k'} \rangle,$$ then for any natural number $N$ there exists a positive constant $C_N$ such that for $j,j'\in\mathbb{Z}$ and $k,k'\in\mathbb{Z}^{n}$, $$\label{eq8}
|a^{\epsilon,\epsilon'}_{j,k,j',k'}|\leq
C_{N}2^{-|j-j'|(\frac{n}{2}+N)}\Big(\frac{2^{-j}+2^{-j'}}{2^{-j}+2^{-j'}+|2^{-j}k-2^{-j'}k'|}\Big)^{n+N}.$$
According to Lemma \[le9\] and Peetre’s paper [@P], we see that this definition of $\dot{B}^{\alpha,q}_{p}(\mathbb{R}^{n})$ is independent of the choice of $\{\varphi_{v}\}_{v\in\mathbb{Z}}$, whence we reach the following description of $\dot{B}^{\alpha,q}_{p}(\mathbb{R}^{n})$.
\[th2\] Given $s\in\mathbb{R}$ and $0<p,q<\infty$. A function $f$ belongs to $\dot{B}^{s,q}_{p}(\mathbb{R}^{n})$ if and only if $$\begin{array}{rl}
&\Big[\sum\limits_{j\in\mathbb{Z}}2^{qj(s+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{\epsilon,k}|f^{\epsilon}_{j,k}|^{p}\Big)^{\frac{q}{p}}\Big]^{\frac{1}{q}}<\infty.
\end{array}$$
Besov-Q spaces via wavelets {#sec3}
===========================
Definition and its wavelet formulation
--------------------------------------
The forthcoming Besov-Q spaces cover many important function spaces, for example, Besov spaces, Morrey spaces and Q-spaces. Such spaces were first introduced via wavelets in Yang [@Yang1] and have since been studied by several authors. For a related overview, we refer to Yuan-Sickel-Yang [@YSY].
Let $\varphi\in C^{\infty}_{0} (B(0,n))$ with $\varphi(x)=1$ for $x\in B(0, \sqrt{n})$. Let $Q(x_{0},r)$ be a cube with sides parallel to the coordinate axes, centered at $x_{0}$ and with side length $r$. For simplicity we sometimes denote by $Q=Q(r)$ the cube $Q(x_{0},r)$, and we let $\varphi_{Q}(x)= \varphi(\frac{x-x_{Q}}{r})$. For $1<p,q<
\infty$ and $\gamma_{1}, \gamma_{2}\in \mathbb{R}$, let $m_{0}=m^{\gamma_{1},\gamma_{2}}_{p,q}$ be a sufficiently large positive integer. For an arbitrary function $f$, let $S^{\gamma_{1},\gamma_{2}}_{p,q,f}$ be the class of polynomials $P_{Q,f}$ such that $$\begin{array}{rl}
\int x^{\alpha} \varphi_{Q}(x) (f(x)
-P_{Q,f}(x)) dx =0\ \ \forall\ \ |\alpha|\leq m_{0}.
\end{array}$$
Given $1<p,q<\infty$ and $\gamma_{1}, \gamma_{2}\in \mathbb{R}$. We say that $f$ belongs to the Besov-Q space $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}:=\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}(\mathbb{R}^{n})$ provided
$$\label{eq:b}
\sup\limits_{Q}|Q|^{\frac{\gamma_{2}}{n}-\frac{1}{p}}
\inf\limits_{P_{Q, f}\in S^{\gamma_{1},\gamma_{2}}_{p,q,f}}
\|\varphi_{Q} (f-P_{Q,f})\|_{\dot{B}^{\gamma_{1}, q}_{p}} <\infty,$$
where the supremum is taken over all cubes $Q$ with center $x_{Q}$ and side length $r$.
As noted above, these spaces generalize the Morrey spaces. Moreover, our Lemma \[lem:c\] below and Yang-Yuan’s [@Ya-Yu Theorem 3.1] show that the Besov-Q spaces defined here coincide with their Besov type spaces; see also Liang-Sawano-Ullrich-Yang-Yuan [@LSUYY] and Yuan-Sickel-Yang [@YSY] for more information on the so-called Yang-Yuan spaces.
Given $1<p, q<\infty$ and $\gamma_{1}, \gamma_{2}\in \mathbb{R}$. Let $m_{0}=m^{\gamma_{1},\gamma_{2}}_{p,q}$ be a sufficiently big integer. For the regular Daubechies wavelets $\Phi^{\epsilon}(x)$, there exist two integers $m\geq m_{0}=m^{\gamma_{1},\gamma_{2}}_{p,q}$ and $M$ such that for $\epsilon\in E_n$, $\Phi^{\epsilon}(x)\in
C^m_0([-2^M, 2^M]^n)$ and $\int x^{\alpha} \Phi^{\epsilon}(x)
dx=0\ \forall\ |\alpha|\leq m$. By applying the regular Daubechies wavelets, we have the following wavelet characterization for $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$.
\[lem:c\]
[(i)]{} $f= \sum\limits_{\epsilon,j,k} a^{\epsilon}_{j,k}
\Phi^{\epsilon}_{j,k}\in
\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ if and only if $$\label{eq:c}
\begin{array}{rl}
&\sup\limits_{Q}|Q|^{\frac{\gamma_{2}}{n}-\frac{1}{p}}
\Big[\sum\limits_{nj\geq -\log_{2}|Q|}
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{(\epsilon,k):Q_{j,k}\subset Q}
|a^{\epsilon}_{j,k}|^{p}\Big) ^{ \frac{q}{p} }\Big]^{\frac{1}{q}} <+
\infty,\end{array}$$ where the supremum is taken over all dyadic cubes in $\mathbb{R}^{n}$.
[(ii)]{} The wavelet characterization in (i) is also true for the Meyer wavelets.
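For the reader's orientation we also record a toy evaluation of the discrete functional (\[eq:c\]) in dimension $n=1$, with dyadic subintervals of $[0,1)$ in place of general dyadic cubes; the synthetic coefficient array and all parameters below are our own and carry no analytical meaning.

``` python
# Toy evaluation of the wavelet functional (eq:c) in dimension n = 1,
# restricted to dyadic subintervals of [0,1).  Coefficients are synthetic.
import numpy as np

n, J = 1, 8
rng = np.random.default_rng(0)
# a[j][k] plays the role of a_{j,k} for Q_{j,k} = [k 2^{-j}, (k+1) 2^{-j})
a = [rng.normal(size=2 ** j) * 2.0 ** (-1.5 * j) for j in range(J + 1)]

def besov_q(a, g1, g2, p, q):
    best = 0.0
    for j0 in range(J + 1):                    # dyadic cubes Q = Q_{j0,k0}
        for k0 in range(2 ** j0):
            sizeQ = 2.0 ** (-j0)               # |Q|
            s = 0.0
            for j in range(j0, J + 1):         # the constraint n j >= -log_2 |Q|
                ks = np.arange(k0 * 2 ** (j - j0), (k0 + 1) * 2 ** (j - j0))
                inner = np.sum(np.abs(a[j][ks]) ** p)
                s += 2.0 ** (j * q * (g1 + n / 2 - n / p)) * inner ** (q / p)
            best = max(best, sizeQ ** (g2 / n - 1 / p) * s ** (1.0 / q))
    return best

print(besov_q(a, g1=0.5, g2=1.0, p=2.0, q=2.0))
```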
A direct application of Lemma \[lem:c\] gives the following assertion.
\[cor:BMpropty\] Given $1<p,q<\infty, \gamma_{1},
\gamma_{2}\in \mathbb{R}$.
[(i)]{} Each $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ is a Banach space.
[(ii)]{} The definition of $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ is independent of the choice of $\varphi$.
Now we recall some preliminaries on Calderón-Zygmund operators (cf. [@Me] and [@MY]). For $x\neq y$, let $K(x,y)$ be a smooth function for which there exists a sufficiently large $N_{0}\leq m$ such that $$\label{eq6}
\begin{array}{rl}
&|\partial ^{\alpha}_{x}\partial ^{\beta}_{y} K(x,y)| \lesssim
|x-y|^{-(n+|\alpha|+|\beta|)}\ \ \forall\ \ |\alpha|+ |\beta|\leq N_{0}.
\end{array}$$
A linear operator $$\begin{array}{rl}
&Tf(x)=\int K(x,y) f(y) dy\end{array}$$ is said to be a Calderón-Zygmund operator if it is continuous from $C^{1}(\mathbb{R}^{n})$ to $(C^{1}(\mathbb{R}^{n}))'$, where the kernel $K(\cdot,\cdot)$ satisfies (\[eq6\]) and $$Tx^{\alpha}=T^{*}x^{\alpha}=0\ \ \forall\ \ \alpha \in \mathbb{N}^{n}\ \ \hbox{with}\ \
|\alpha|\leq N_{0}.$$ For such an operator, we denote $T\in
CZO(N_{0})$.
The kernel $K(\cdot,\cdot)$ may be highly singular on the diagonal $x=y$, so, according to the Schwartz kernel theorem, it is in general only a distribution in $S'(\mathbb{R}^{2n})$. For $ (\epsilon,j,k), (\epsilon',j',k')\in \Lambda_{n}$, let $$a^{\epsilon,\epsilon'}_{j,k,j',k'}= \langle K(x,y),
\Phi^{\epsilon}_{j,k}(x) \Phi^{\epsilon'}_{j',k'}(y)\rangle.$$ If $T$ is a Calderón-Zygmund operator, then its kernel $K(\cdot,\cdot)$ and the related coefficients satisfy the following relations (cf. [@Me], [@MY] and [@Yang1]):
\[le8\]
[(i)]{} If $T\in CZO(N_{0})$, then the coefficients $a^{\epsilon,\epsilon'}_{j,k,j',k'}$ satisfy $$\label{eq7}
|a^{\epsilon,\epsilon'}_{j,k,j',k'}| \lesssim
\frac{
\Big(\frac{2^{-j}+2^{-j'}}{2^{-j}+2^{-j'}
+|k2^{-j}-k'2^{-j'}|}\Big)^{n+N_{0}}}{2^{|j-j'|(\frac{n}{2}+N_{0})}
}\ \forall\ (\epsilon,j,k), (\epsilon',j',k')\in \Lambda_{n}.$$
[(ii)]{} If $a^{\epsilon,\epsilon'}_{j,k,j',k'}$ satisfy (\[eq7\]), then $K(\cdot,\cdot)$, the kernel of the operator $T$, can be written as $$\begin{array}{rl}
&K(x,y)=\sum\limits_{ (\epsilon,j,k),(\epsilon',j',k')\in
\Lambda_{n}} a^{\epsilon,\epsilon'}_{j,k,j',k'}\Phi^{\epsilon}_{j,k}(x)
\Phi^{\epsilon'}_{j',k'}(y)
\end{array}$$ in the distribution sense. Moreover, $T$ belongs to $CZO(N_{0}-\delta)$ for any small positive number $\delta$.
By Lemma \[le8\], we can prove that
\[cor:dilation\] For any $\frac{1}{2}\leq \lambda \leq 2$ we have $\|f(\lambda
\cdot)\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}} \approx
\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}$.
Critical spaces and their inclusions
------------------------------------
An initial data space is called critical for (\[eqn:ns\]) if it is invariant under the scaling $f_{\lambda}(x) = \lambda^{2\beta-1}
f(\lambda x)$.
Note that, if $u(t,x)$ is a solution of (\[eqn:ns\]) and we replace $u(t,x), p(t,x), a(x)$ by $$u_{\lambda}(t,x) = \lambda^{2\beta-1}
u(\lambda^{2\beta}t, \lambda x),\ p_{\lambda}(t,x) =
\lambda^{4\beta-2} p(\lambda^{2\beta}t, \lambda x)$$ and $a_{\lambda}(x) = \lambda^{2\beta-1} a(\lambda x),$ respectively, then $u_{\lambda}(t,x)$ is also a solution of (\[eqn:ns\]). So, the critical spaces occupy a significant place for (\[eqn:ns\]). For $\beta=1$, $$\begin{cases}
\dot{L}^{2}_{\frac{n}{2}-1}(\mathbb{R}^{n})=
\dot{B}^{-1+\frac{n}{2},2}_{2}(\mathbb{R}^{n});\\
L^{n}(\mathbb{R}^{n});\\
\dot{B}^{-1+\frac{n}{p},
\infty}_{p}(\mathbb{R}^{n}), p<\infty;\\
BMO^{-1}(\mathbb{R}^{n});\\
\dot{B}^{\alpha-1,\alpha}_{2,2}(\mathbb{R}^{n}),
\end{cases}$$ are critical spaces. For the general $\beta$, $$\begin{cases}
\dot{B}^{1+\frac{n}{p}-2\beta,
\infty}_{p}(\mathbb{R}^{n}), p<\infty;\\
\dot{B}^{\alpha-\beta+1,\alpha+\beta}_{2,2}(\mathbb{R}^{n}),
\end{cases}$$ are critical spaces.
By Corollary \[cor:dilation\], it is easy to see that each $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ enjoys the following dilation invariance. For $\beta>\frac{1}{2}$ and $\gamma_{1}-\gamma_{2} = 1-2\beta$, each $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ is a critical space, i.e., $$\|\lambda^{\gamma_{2}-\gamma_{1}} f(\lambda
\cdot)\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}} \approx
\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}\quad\forall\quad \lambda>0.$$
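Indeed, combining this dilation invariance with the scaling $a_{\lambda}(x)=\lambda^{2\beta-1}a(\lambda x)$ of the initial data recorded above, one finds $$\|a_{\lambda}\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}
=\lambda^{2\beta-1}\|a(\lambda\cdot)\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}
\approx\lambda^{2\beta-1+\gamma_{1}-\gamma_{2}}\|a\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}
=\|a\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}$$ precisely when $\gamma_{1}-\gamma_{2}=1-2\beta$, which is the criticality condition just stated.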
To better understand why the Besov-Q spaces are larger than many spaces cited in the introduction, we should observe the basic fact below.
\[lem:2.9\] Given $1<p, q<\infty$ and $\gamma_{1},\gamma_{2}\in\mathbb{R}$.
[(i)]{} If $q_{1}\leq q_{2}$, then $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q_{1}}\subset
\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q_{2}}$.
[(ii)]{} $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}\subset
\dot{B}^{\gamma_{1}-\gamma_{2},\infty}_{\infty}(\mathbb R^n)$.
[(iii)]{} Given $p_1\geq 1$. For $w=0,q_1=1$ or $w>0,
1\leq q_1\leq \infty$, one has $\dot{B}^{\gamma_{1},
\gamma_{2}+w}_{p,q} \subset
\dot{B}^{\gamma_{1}-w,\gamma_{2}} _{\frac{p}{p_1},
\frac{q}{q_1}}$.
For $0\leq \alpha-\beta+1$ and $\alpha+\beta-1\leq \frac{n}{2}$, we say that $f$ belongs to the Q-type space $Q_{\alpha}^{\beta}(\mathbb{R}^{n})$ provided $$\begin{array}{rl}
\sup\limits_{Q} r^{2(\alpha+\beta-1)-n} \int_{Q}\int_{Q}
\frac{|f(x)-f(y)|^{2}}{|x-y|^{n+2(\alpha-\beta+1)}} dxdy<\infty,
\end{array}$$ where the supremum is taken over all cubes with sidelength $r$. This definition was used in [@LZ] to extend the results in [@X] which initiated a PDE-analysis of the original Q-spaces introduced in [@EJPX] (cf. [@DX; @DX1], [@PY], [@WX], and [@Yang1] for more information). The following is a direct consequence of Lemmas \[lem:c\] and \[lem:2.9\].
\[co:1\]
[(i)]{} If $0\leq \alpha-\beta+1< 1, \alpha+\beta-1\leq \frac{n}{2}$, then $Q_{\alpha}^{\beta}(\mathbb{R}^{n})= \dot{B}^{\alpha-\beta+1,
\alpha+\beta-1}_{2,2}$.
[(ii)]{} If $p=\frac{n}{\gamma_{2}}$, then $\dot{B}^{\gamma_{1},
\gamma_{2}}_{p,q}=\dot{B}^{\gamma_{1},q}_{p}(\mathbb{R}^{n})$.
[(iii)]{} Given $u\geq 1$ and either $w=0,v=1$ or $w>0, 1\leq v\leq \infty$. If $p=\frac{n}{\gamma_{2}+w}$, then $$\begin{array}{l}
\dot{B}^{\gamma_{1}, q}_{p}(\mathbb{R}^{n}) \subset
\dot{B}^{\gamma_{1}-w,\gamma_{2}} _{\frac{n}{u(w+\gamma_{2})},
\frac{q}{v}}.
\end{array}$$
\[remark1\] In [@W2], J. Wu obtained the well-posedness of (\[eqn:ns\]) with initial data in the critical Besov space $\dot{B}^{1+\frac{n}{p}-2\beta,q}_{p}(\mathbb{R}^{n})$. Given $1<p_{0},q_{0}<\infty$. By Lemma \[lem:2.9\], we can see that if $1<p\leq p_{0}$, $1<q\leq q_{0}$ and $\beta>0$, then $$\dot{B}^{1+\frac{n}{p}-2\beta,q}_{p}(\mathbb{R}^{n})\subset
\dot{B}^{1+\frac{n}{p}-2\beta,\frac{n}{p}}_{p_{0},q_{0}}.$$
Besov-Q spaces via semigroups {#sec4}
=============================
To establish a semigroup characterization of the Besov-Q spaces, recall the following semigroup characterization of $Q^{\beta}_{\alpha}(\mathbb{R}^{n})$, see [@LZ]:
[*Given $\max\{\alpha,1/2\}<\beta<1$ and $\alpha+\beta-1\geq0$. $f\in Q^{\beta}_{\alpha}(\mathbb{R}^{n})$ if and only if $$\begin{array}{rl}
\sup\limits_{x\in\mathbb R^n\ \&\ r\in (0,\infty)}
r^{2\alpha-n+2\beta-2}\int^{r^{2\beta}}_{0} \int_{|y-x|<r} |\nabla
e^{-t(-\Delta)^{\beta}} f(y)|^{2} t^{-\frac{\alpha}{\beta}}
dydt<\infty.
\end{array}$$*]{}
This characterization was used to derive the global existence and uniqueness of a mild solution to (\[eqn:ns\]) with small initial data in $\nabla\cdot(Q^{\beta}_{\alpha}(\mathbb{R}^{n}))^{n}$. Notice that $$Q^{\beta}_{\alpha}(\mathbb{R}^{n})=\dot{B}^{\alpha-\beta+1,\alpha+\beta-1}_{2,2}.$$ So, in order to get the corresponding result for (\[eqn:ns\]) with small initial data in $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ under $|p-2|+
|q-2|\neq 0$, we need a finer relation among time, frequency and locality. For this purpose, by means of the Meyer wavelets and the fractional heat semigroup, we introduce some new tent spaces associated with $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$, and then establish some connections between these tent spaces and $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$.
Wavelets and semigroups
-----------------------
For $\beta>0$, let $\hat{K}^{\beta}_{t} (\xi) =
e^{-t|\xi|^{2\beta}}$. We have $$f(t,x)= e^{-t(-\Delta)^{\beta}} f(x) = K_{t}^{\beta}*f(x).$$
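Since the analysis below is phrased entirely through the Fourier multiplier $e^{-t|\xi|^{2\beta}}$, we include a minimal numerical sketch of the semigroup action, computed spectrally on a periodic one-dimensional grid; the grid, the datum and the parameters are our own toy choices.

``` python
# The fractional heat semigroup e^{-t(-Delta)^beta} as a Fourier multiplier,
# on a periodic 1-D grid (purely illustrative).
import numpy as np

N = 2 ** 10
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)

def frac_heat(f, t, beta):
    """K_t^beta * f, i.e. multiplication by exp(-t |xi|^{2 beta}) in frequency."""
    return np.fft.ifft(np.exp(-t * np.abs(k) ** (2 * beta)) * np.fft.fft(f)).real

f = np.sign(np.sin(x))                   # a rough initial datum
u1 = frac_heat(f, t=0.01, beta=0.75)     # fractional dissipation
u2 = frac_heat(f, t=0.01, beta=1.0)      # ordinary heat flow, for comparison
print(np.max(np.abs(u1 - f)), np.max(np.abs(u2 - f)))   # the jumps are smoothed out
```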
For the Meyer wavelets $\{\Phi^{\epsilon}_{j,k}\}_{(\epsilon,j,k)\in\Lambda_{n}}$, let $a^{\epsilon}_{j,k}(t) = \langle f(t,\cdot),
\Phi^{\epsilon}_{j,k}\rangle$ and $a^{\epsilon}_{j,k} =
\langle f, \Phi^{\epsilon}_{j,k}\rangle$. By Lemma \[le1\] we get $$\begin{array}{rl}
&f(x)= \sum\limits_{\epsilon,j,k} a^{\epsilon}_{j,k}
\Phi^{\epsilon}_{j,k}(x)\ \text{ and }\ f(t,x)=
\sum\limits_{\epsilon, j, k} a^{\epsilon}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x).
\end{array}$$
If $f(t,x)=K^{\beta}_{t}*f(x)$, then
$$\label{eq3}
\begin{array}{rl}
a^{\epsilon}_{j,k}(t) &= \sum\limits_{\epsilon',|j-j'|\leq 1, k'}
a^{\epsilon'}_{j',k'} \langle K^{\beta}_{t}\ast
\Phi^{\epsilon'}_{j',k'},
\Phi^{\epsilon}_{j,k}\rangle\nonumber\\
&= \sum\limits_{\epsilon',|j-j'|\leq 1, k'} a^{\epsilon'}_{j',k'}
\int e^{-t 2^{2j\beta} |\xi|^{2\beta}} \hat{\Phi}^{\epsilon'}
(2^{j-j'}\xi) \hat{\Phi}^{\epsilon} (\xi) e^{-i(2^{j-j'}k'-k)\xi}
d\xi.\nonumber
\end{array}$$
\[le4\] Let $\{\Phi^{\epsilon}_{j,k}\}_{(\epsilon,j,k)\in\Lambda_{n}}$ be Meyer wavelets. For $\beta>0$ there exist a large constant $N_{\beta}>0$ and a small constant $\tilde c>0$ such that if $N>N_{\beta}$ then $$\label{eq3.5}
\begin{array}{l}
|a^{\epsilon}_{j,k}(t)|\lesssim e^{-\tilde c t 2^{2j\beta}}
\sum\limits_{\epsilon',|j-j'|\leq 1, k'} |a^{\epsilon'}_{j',k'}|
(1+|2^{j-j'}k'-k|)^{-N}\quad\forall\quad t 2^{2\beta j} \geq 1
\end{array}$$ and $$\label{eq3.6}
\begin{array}{rl}|a^{\epsilon}_{j,k}(t)|\lesssim \sum\limits_{|j-j'|\leq 1}
\sum\limits_{\epsilon',k'} |a^{\epsilon'}_{j',k'}|
(1+|2^{j-j'}k'-k|)^{-N}\quad\forall\quad 0< t 2^{2\beta j} \leq 1.
\end{array}$$
Formally, we can write $$\begin{array}{rl}
a^{\epsilon}_{j,k}(t) =&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}a^{\epsilon'}_{j',k'}\Big\langle
K^{\beta}_{t}\ast\Phi^{\epsilon'}_{j',k'},
\Phi^{\epsilon}_{j,k}\Big\rangle\\
=&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}a^{\epsilon'}_{j',k'}\Big\langle
e^{-t(-\Delta)^{\beta}}\Phi^{\epsilon'}_{j',k'},
\Phi^{\epsilon}_{j,k}\Big\rangle\\
=&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}a^{\epsilon'}_{j',k'}\int e^{-t2^{2j\beta}|\xi|^{2\beta}}\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi)\widehat{\Phi^{\epsilon}}(\xi)e^{-i(2^{j-j'}k'-k)\xi}d\xi.
\end{array}$$ We divide the rest of the argument into two cases.
[*Case 1:*]{} $|2^{j-j'}k'-k|\le 2$. Notice that $\widehat{\Phi^{\epsilon}}$ is supported on a ring. By a direct computation, we can get $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}|a^{\epsilon'}_{j',k'}|e^{-t2^{2j\beta}}\\
\lesssim&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}e^{-t2^{2j\beta}}.
\end{array}$$
[*Case 2:*]{} $|2^{j-j'}k'-k|\geq 2$. Denote by $l_{i_{0}}$ the largest component of $2^{j-j'}k'-k$. Then $(1+|l_{i_{0}}|)^{N}\sim (1+|2^{j-j'}k'-k|)^{N}$. We have $$\begin{array}{rl}
a^{\epsilon}_{j,k}(t)=&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}\frac{a^{\epsilon'}_{j',k'}}{(l_{i_{0}})^{N}}\int e^{-t2^{2j\beta}|\xi|^{2\beta}}\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi)\widehat{\Phi^{\epsilon}}(\xi)\\
&\times[(\frac{-1}{i}\partial_{\xi_{i_{0}}})^{N}e^{-i(2^{j-j'}k'-k)\xi}]d\xi.
\end{array}$$ By an integration-by-parts, we can obtain that if $C^l_N$ is the binomial coefficient indexed by $N$ and $l$ then $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|=&\Big|\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}(-1)^{N}\frac{a^{\epsilon'}_{j',k'}}{(l_{i_{0}})^{N}}\int \sum\limits^{N}_{l=0}C^{l}_{N}\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}})\\
&\times\partial_{\xi_{i_{0}}}^{N-l}(\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi)\widehat{\Phi^{\epsilon}}(\xi))e^{-i(2^{j-j'}k'-k)\xi}d\xi\Big|\\
\lesssim&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}\Big|\int \sum\limits^{N}_{l=0}C^{l}_{N}(-t2^{2j\beta}|\xi|^{2\beta})^{l}|\xi|^{2\beta-2}\xi_{i_{0}}e^{-t2^{2j\beta}|\xi|^{2\beta}}\\
&\times\partial_{\xi_{i_{0}}}^{N-l}(\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi)\widehat{\Phi^{\epsilon}}(\xi))e^{-i(2^{j-j'}k'-k)\xi}d\xi\Big|.
\end{array}$$ If $t2^{2j\beta}\ge1$, there exists a constant $c$ such that $(t2^{2j\beta})^{l}e^{-t2^{2j\beta}}\lesssim e^{-ct2^{2j\beta}}.$ Since $\widehat{\Phi^{\epsilon'}}$ is supported in a ring, we can get $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}(t2^{2j\beta})^{l}e^{-t2^{2j\beta}}\\
\lesssim&\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}e^{-ct2^{2j\beta}}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}.
\end{array}$$ If $0<t2^{2j\beta}\le1$, then we can directly deduce $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim\sum\limits_{\epsilon',|j-j'|\leq 1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}.
\end{array}$$
Tent spaces generated by Besov-Q spaces
---------------------------------------
Over $\mathbb{R}^{1+n}_{+}$ we introduce a new tent type space $\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$ associated with $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$, and then establish a relation between $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$ and $\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$ via the fractional heat semigroup $e^{-t(-\Delta)^{\beta}}$.
Let $\gamma_{1}, \gamma_{2},m\in \mathbb{R}$, $m' >0$, $1<p,
q<\infty$ and $$a(t,x)= \sum\limits_{(\epsilon,j,k)\in \Lambda_{n}}
a^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x).$$ We say that:
[(i)]{} $f\in \B^{\gamma_{1},\gamma_{2},I}_{p,q ,m }$ if $\sup\limits_{t\geq 0} \sup\limits_{ x_{0},r}I^{\gamma_{1},\gamma_{2}}_{p, q,Q_{r},m}(t) <\infty,$ where $$\begin{array}{rl}
I^{\gamma_{1},\gamma_{2}}_{p, q,Q_{r},m}(t)=:|Q_{r}
|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}} \sum\limits_{j\geq \max\{
-\log_{2}r, \frac{-\log_{2}t}{2\beta}\} }
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\sum\limits_{(\epsilon,k):Q_{j,k}\subset Q_{r}}
|a^{\epsilon}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}
\Big]^{\frac{q}{p}};
\end{array}$$
[(ii)]{} $f\in \B^{\gamma_{1},\gamma_{2},II}_{p, q}$ if $\sup\limits_{t\geq 0} \sup\limits_{x_{0},r}
II^{\gamma_{1},\gamma_{2}}_{p, q, Q_{r}}(t)<\infty$, where $$\begin{array}{rl}
II^{\gamma_{1},\gamma_{2}}_{p, q, Q_{r}}(t)=:|Q_{r}
|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}} \sum\limits_{-\log_{2}r\leq
j<\frac{-\log_{2}t}{2\beta} }
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\sum\limits_{(\epsilon,k):Q_{j,k}\subset Q_{r}}
|a^{\epsilon}_{j,k}(t)|^{p}\Big]^{\frac{q}{p}};
\end{array}$$
[(iii)]{} $f\in \B^{\gamma_{1},\gamma_{2},III}_{p, q, m }$ if $\sup\limits_{x_{0},r} III^{\gamma_{1},\gamma_{2}}_{p, q,
Q_{r},m}<\infty$, where $$\begin{array}{rl}
III^{\gamma_{1},\gamma_{2}}_{p, q, Q_{r},m} =|Q_{r}
|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}} \sum\limits_{j\geq -\log_{2}r
} 2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\left(\int^{r^{2\beta}}_{2^{-2j\beta}} \sum\limits_{(\epsilon,k):Q_{j,k}\subset Q_{r}}
|a^{\epsilon}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}
\frac{dt}{t}\right)^{\frac{q}{p}};
\end{array}$$
[(iv)]{} $f\in \B^{\gamma_{1},\gamma_{2},IV}_{p,
q, m' }$ if $\sup\limits_{x_{0},r} IV^{\gamma_{1},\gamma_{2}}_{p, q,
Q_{r},m'} <\infty$, where $$\begin{array}{rl}
IV^{\gamma_{1},\gamma_{2}}_{p, q,
Q_{r},m'}=:|Q_{r}
|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}} \sum\limits_{j\geq -\log_{2}r
} 2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\left(\int^{2^{-2j\beta}}_{0} \sum\limits_{(\epsilon,k):Q_{j,k}\subset Q_{r}}
|a^{\epsilon}_{j,k}(t)|^{p}(t2^{2j\beta})^{m'}
\frac{dt}{t}\right)^{\frac{q}{p}}.
\end{array}$$ Moreover, the associated tent type spaces are defined as
$$\begin{array}{rl}\B^{\gamma_{1},\gamma_{2}}_{p,q,
m,m' }= \B^{\gamma_{1},\gamma_{2},I}_{p, q, m }\bigcap
\B^{\gamma_{1},\gamma_{2},II}_{p,q }\bigcap
\B^{\gamma_{1},\gamma_{2},III}_{p, q, m }\bigcap
\B^{\gamma_{1},\gamma_{2},IV}_{p, q, m' }.
\end{array}$$
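To illustrate the bookkeeping in part (i) of this definition (which levels $j$ contribute, and where the weight $(t2^{2j\beta})^{m}$ enters), we record a toy evaluation of $I^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m}(t)$ in dimension $n=1$ for a single dyadic cube and a single time; the synthetic coefficients and every numerical parameter below are our own assumptions.

``` python
# Toy evaluation of I^{gamma1,gamma2}_{p,q,Q_r,m}(t) from part (i) above,
# in dimension n = 1, for Q_r = [0, r) and one fixed time t.
import numpy as np

n, J, beta = 1, 8, 0.75
rng = np.random.default_rng(1)
a_t = [rng.normal(size=2 ** j) * 2.0 ** (-1.5 * j) for j in range(J + 1)]  # a_{j,k}(t)

def I_functional(a_t, t, r, g1, g2, p, q, m):
    j_min = max(int(np.ceil(-np.log2(r))), int(np.ceil(-np.log2(t) / (2 * beta))))
    total = 0.0
    for j in range(max(j_min, 0), J + 1):
        ks = np.arange(0, int(r * 2 ** j))      # dyadic Q_{j,k} contained in [0, r)
        inner = np.sum(np.abs(a_t[j][ks]) ** p) * (t * 2.0 ** (2 * j * beta)) ** m
        total += 2.0 ** (j * q * (g1 + n / 2 - n / p)) * inner ** (q / p)
    return r ** (q * g2 / n - q / p) * total

print(I_functional(a_t, t=0.05, r=0.25, g1=0.5, g2=1.0, p=2.0, q=2.0, m=3.0))
```

Taking the supremum of such quantities over all dyadic cubes and all $t$ is what membership in $\B^{\gamma_{1},\gamma_{2},I}_{p,q,m}$ asks for.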
To continue our discussion, we need to introduce two more function spaces $\B^{\gamma}_{\tau,\infty}$ and $\B^{\gamma}_{0,\infty}$.
\[Besov-infinity\] For $(\epsilon,j,k)\in \Lambda_{n}$, write $a^{\epsilon}_{j,k}(t)=\langle a(t,\cdot),
\Phi^{\epsilon}_{j,k}(\cdot)\rangle$. Given $\tau>0$ and $\gamma\in\mathbb{R}$. We say that $$\begin{cases}
a(\cdot,\cdot)\in \B^{\gamma}_{\tau,\infty}\ \ \hbox{if}\ \
\sup\limits_{t2^{2j\beta}\geq 1}
(t2^{2j\beta})^{\tau} 2^{\frac{nj}{2}}
2^{j\gamma}|a^{\epsilon}_{j,k}(t)| +\sup\limits_{0<t2^{2j\beta}<1}
2^{\frac{nj}{2}} 2^{j\gamma}|a^{\epsilon}_{j,k}(t)|<\infty;\\
a(\cdot,\cdot)\in\B^{\gamma}_{0,\infty}\ \ \hbox{if}\ \
\sup\limits_{t>0,\ j\in\mathbb{Z},\ k\in\mathbb{Z}^{n}}
t^{\frac{-\gamma}{2\beta}} 2^{\frac{nj}{2}}|\langle a(t,\cdot),
\Phi^{0}_{j,k}\rangle|<\infty.
\end{cases}$$
It is easy to verify the following inclusions.
\[le:tau\] Given $1<p, q<\infty$, $\gamma_{1},\gamma_{2}\in \mathbb{R}$, $m>p$ and $m',\tau>0$.
[(i)]{} If $m>0$, then $\B^{\gamma_{1},\gamma_{2}}_{p, q, m,m'}
\subset\B^{\gamma_{1}-\gamma_{2}}_{\frac{m}{p},\infty}$.
[(ii)]{} If $-2\beta\tau<\gamma<0<\beta$, then $\B^{\gamma}_{\tau,\infty}\subset
\B^{\gamma}_{0,\infty}.$
For the sake of convenience, for any dyadic cube $Q_{j_{0}, k_{0}}$, we always use $\widetilde{Q}_{j_{0},k_{0}}$ to denote the dyadic cube containing $Q_{j_{0}, k_{0}}$ with side length $2^{8-j_{0}}$. Given $(\epsilon,j,k)\in \Lambda_{n}$. If $\epsilon\in E_{n}$ and $Q_{j,k}\subset Q_{j_{0}, k_{0}}$, we write $(\epsilon,k)\in
S^{j}_{j_{0},k_{0}}$. For any $w\in \mathbb{Z}^{n}$, denote $\widetilde{Q}^w_{j_{0},k_{0}}= 2^{8-j_{0}}w + \widetilde{Q}_{j_{0},
k_{0}}$. Denote $(\epsilon,k)\in S^{w,j}_{j_{0},k_{0}}$ whenever $Q_{j,k}\subset \widetilde{Q}^w_{j_{0},k_{0}}$. Furthermore, we frequently utilize the so-called $\alpha$-triangle inequality below: $$(a+b)^{\alpha}\leq a^{\alpha}+b^{\alpha}\quad\forall\quad (\alpha,a,b)\in (0,1]\times(0,\infty)\times(0,\infty).$$
Now we characterize the Besov-Q spaces by using a semigroup operator.
\[th1\] Given $1<p<m<\infty$, $m'>0$, $1<q<\infty$, $\gamma_{1}-\gamma_{2}<0 <\beta$. If $f\in \dot{B}^{\gamma_{1},\gamma_{2}}_{p,q
}$, then $f*K^{\beta}_{t}\in
\B^{\gamma_{1},\gamma_{2}}_{p, q, m,m'}$.
We will prove $$\begin{array}{rl}
&f= \sum\limits_{(\epsilon,j,k)\in
\Lambda_{n}} a^{\epsilon}_{j,k} \Phi^{\epsilon}_{j,k}\in
\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}\Longrightarrow f*K^{\beta}_{t}=
\sum\limits_{(\epsilon,j,k)\in \Lambda_{n}} a^{\epsilon}_{j,k}(t)
\Phi^{\epsilon}_{j,k} \in \B^{\gamma_{1},\gamma_{2}}_{p,q, m,m'}\end{array}$$ by handling the following four situations.
[**Situation 1**]{}: $K^{\beta}_{t}\ast f\in \B^{\gamma_{1},\gamma_{2}, I}_{p,q,m}$. For $t2^{2j\beta}>1, m>0$, by (\[eq3.5\]), there exists a constant $N$ large enough such that $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim&e^{-\tilde c t 2^{2j\beta}}
\sum\limits_{\epsilon',|j-j'|\leq 1, k'}\frac{
|a^{\epsilon'}_{j',k'}|}{ (1+|2^{j-j'}k'-k|)^{N}} \lesssim
2^{\frac{-nj}{2}} 2^{j(\gamma_{2}-\gamma_{1})}e^{-\tilde c t
2^{2j\beta}}.
\end{array}$$ Choosing a sufficiently large $N'$ (depending on $N$) in the last estimate we have $$\begin{array}{rl}
I^{\gamma_{1},\gamma_{2}}_{p,q, Q_{r},
m}(t)
&\lesssim \quad
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
& \quad\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}e^{-\tilde{c}t2^{2j\beta}}(\sum\limits_{\epsilon',|j-j'|\leq1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}})^{p}(t2^{2j\beta})^{m}\Big]^{\frac{q}{p}},
\end{array}$$ where $p>1$ has been used. In the sequel, we divide the proof into two cases.
Case 1.1: $q\le p$. Because $|j-j'|\leq1$ and $j>-\log_{2}r$, one gets $2^{-(j'+1)n}< r^{n}$. This implies that $(2^{nj'}|Q|)^{-N'}\lesssim1$. Hence $$\begin{array}{rl}
I^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r}, m}(t)
&\lesssim \!\!\!
\sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{{Nq}/{p}}}\sum\limits_{j'\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
[\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}]^{\frac{q}{p}}\\
&\lesssim\quad \|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$
Case 1.2: $q>p$. Applying Hölder’s inequality to $w\in\mathbb{Z}^{n}$, we similarly have $$\begin{array}{rl}
I^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},
m}(t)
&\lesssim\ \sum\limits_{w\in\mathbb{Z}^{n}}\Big\{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}
\sum\limits_{j'\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
[\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}]^{\frac{q}{p}}\Big\}\\
&\lesssim\ \|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$
[**Situation 2**]{}: $K^{\beta}_{t}\ast f\in \B^{\gamma_{1},\gamma_{2}, II}_{p,q}$. For $t2^{2\beta j}\leq1$ and $m'>0$, by (\[eq3.6\]), there exists a natural number $N$ large enough such that $N>2n$ and $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim \sum\limits_{\epsilon', |j-j'|\leq1,
k'}|a^{\epsilon'}_{j',k'}|(1+|2^{j-j'}k'-k|)^{-N}\end{array}.$$ Consequently, we have $$\begin{array}{rl}
II^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r}}(t)\lesssim&|Q|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j\leq-\frac{\log_{2}t}{2\beta}}2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}\Big(\sum\limits_{\epsilon',|j'-j|\leq1,
k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}\Big)^{p}\Big]^{\frac{q}{p}}.
\end{array}$$ Case 2.1: $q\le p$. Notice that $(2^{nj'}|Q|)^{-N}\lesssim1$. We have $$\begin{array}{rl}
II^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r}}(t)\lesssim&\!\!\!\sum\limits_{|w|\in\mathbb{Z}^{n}}
\frac{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{qN/p}}
\sum\limits_{-\log_{2}r-1\leq
j'\leq-\frac{\log_{2}t}{2\beta}-1}2^{j'q(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{(\epsilon', k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}\Big)^{\frac{q}{p}}\\
\lesssim&\!\!\!\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$ Case 2.2: $q>p$. By Hölder’s inequality and arguing in a similar manner, we can obtain $$\begin{array}{rl}
II^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r}}(t)\lesssim&\!\!\!
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r-1\leq
j'\leq-\frac{\log_{2}t}{2\beta}-1}2^{j'q(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
& \Big[\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{Q_{r}}}|a^{\epsilon'}_{j',k'}|^{p}(1+|2^{j-j'}k'-k|)^{-N'}\Big]^{\frac{q}{p}}\\
\lesssim&\!\!\!\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$
[**Situation 3**]{}: $K^{\beta}_{t}\ast f\in \B^{\gamma_{1},\gamma_{2},
III}_{p,q,m}$. For this case we have $2^{-2j\beta}<t<r^{2\beta}$, and thus, for any fixed $\tau>0$ (using that $e^{-ct2^{2j\beta}}\lesssim (t2^{2j\beta})^{-\tau}$ when $t2^{2j\beta}\geq1$), $$\begin{array}{rl}
|a^{\epsilon}_{j,k}(t)|\lesssim&e^{-ct2^{2j\beta}}\sum\limits_{\epsilon',
|j-j'|\leq1,k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N'}}
\lesssim2^{-\frac{nj}{2}}2^{j(\gamma_{2}-\gamma_{1})}(t2^{2j\beta})^{-\tau}.
\end{array}$$ This yields $$\begin{array}{rl}
III^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m}
&\lesssim\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j\beta})^{m}\\
&\ \sum\limits_{(\epsilon,k)\in S^{j}_{r}}e^{-cpt2^{2j\beta}}\Big(\sum\limits_{\epsilon',|j-j'|\leq 3,
k'}|a^{\epsilon'}_{j',k'}|(1+|2^{j-j'}k'-k|)^{-N}\Big)^{p}\frac{dt}{t}\Big]^{\frac{q}{p}}\\
\end{array}$$ Notice that $j\sim j'$ and the number of $\epsilon'$ is finite. Applying Hölder’s inequality on $k'$ we obtain $$\begin{array}{rl}
\Big[\sum\limits_{\epsilon',|j-j'|\leq3, k'}\frac{|a^{\epsilon'}_{j',k'}|}{(1+|2^{j-j'}k'-k|)^{N}}\Big]^{p}
&\lesssim \sum\limits_{|j-j'|\leq3}\sum\limits_{\epsilon', k'}\frac{|a^{\epsilon'}_{j',k'}|^{p}}{(1+|2^{j-j'}k'-k|)^{N}}.
\end{array}$$ Let $Q_{j,k}$ and $Q_{j',k'}$ be two dyadic cubes. Denote by $\widetilde{Q}_{j,k}$ the dyadic cube containing $Q_{j,k}$ with side length $2^{8-j}$. For $w\in\mathbb{Z}^{n}$, denote by $Q^{w}_{j,k}$ the cube $\widetilde{Q}_{j,k}+2^{8-j}w$. It is easy to see that if $Q_{j',k'}\subset Q^{w}_{j,k}$, then $$(1+|2^{j-j'}k'-k|)^{-N}\lesssim (1+|w|)^{-N}$$ (see also (\[eqn:est1\])). We obtain that $$\begin{array}{rl}
III^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m}
&\lesssim\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j\beta})^{m}\sum\limits_{(\epsilon,k)\in S^{j}_{r}}e^{-cpt2^{2j\beta}}\\
&\ \sum\limits_{|j-j'|\leq3}\sum\limits_{w\in\mathbb{Z}^{n}}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|a^{\epsilon'}_{j',k'}|^{p}(1+|2^{j-j'}k'-k|)^{-N'}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ The number of $Q_{j',k'}$ which are contained in the dyadic cube $ Q_{j,k}^{w}=
2^{8-j}w+\widetilde{Q}_{j,k}$ equals $2^{n(8+j'-j)}$. On the other hand, for any dyadic cube $Q_{r}$ with radius $r$, the number of $Q_{j,k}\subset Q_{r}$ equals $(2^{j}r)^{n}$. Then the number of $Q_{j',k'}$ which are contained in the dyadic cube $Q^{w}_{r}$ equals $(2^{8+j'}r)^{n}$. Denote by $S^{w,j'}_{r}$ the set of $(\epsilon',k')$ such that $Q_{j',k'}\subset Q^{w}_{r}$. Finally we have $$\begin{array}{rl}
III^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m}&\lesssim\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j'\geq-\log_{2}r-3}2^{j'q(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big\{\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j'\beta})^{m}\\
&\quad [\sum\limits_{|w|\leq
2^{n}}+\sum\limits_{|w|>
2^{n}}](1+|w|)^{-N} \sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}e^{-ct2^{2j'\beta}}|a^{\epsilon'}_{j',k'}|^{p}\frac{dt}{t}\Big\}^{\frac{q}{p}}\\
&=:\ M_{1}+M_{2}.
\end{array}$$ By the definition of $\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}$, it is easy to see that $M_{1}\lesssim \|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}$. For the term $M_{2}$, we divide the estimate into two cases.
Case 3.1: $q\le p$. For this case, $j'\geq-\log_{2}r-3$ implies $(2^{nj'}r^{n})^{-N'}\lesssim 1$ and $$\begin{array}{rl}
M_{2}\lesssim&\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j'\geq-\log_{2}r-3}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN'}{p}}\\
&\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j'\beta})^{m}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}e^{-cpt2^{2j'\beta}}|a^{\epsilon}_{j',k'}|^{p}(2^{nj'}|Q|)^{-N'}\frac{dt}{t}\Big]^{\frac{q}{p}}\\
\lesssim&\ \|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$
Case 3.2: $q>p$. For this case, by Hölder’s inequality and $j\sim j'$ we obtain $$\begin{array}{rl}
&\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j\beta})^{m}\sum\limits_{|w|>2^{n}}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}e^{-cpt2^{2j\beta}}(1+|w|)^{-N}|a^{\epsilon'}_{j',k'}|^{p}(2^{nj'}|Q|)^{-N'}\frac{dt}{t}\Big]^{\frac{q}{p}}\\
&\lesssim\sum\limits_{|w|>2^{n}}(1+|w|)^{-\frac{qN'}{p}}\Big(\int^{r^{2\beta}}_{2^{-2j\beta}}(t2^{2j'\beta})^{m}
\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}e^{-cpt2^{2j'\beta}}|a^{\epsilon'}_{j',k'}|^{p}\frac{dt}{t}\Big)^{\frac{q}{p}}.
\end{array}$$ The rest of the argument is similar to that of Case 3.1, and so omitted.
[**Situation 4**]{}: $K^{\beta}_{t}\ast f\in \B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}$. Because $|j-j'|\leq 1$ and $0<t<2^{-2j\beta}$, we can obtain $$\begin{array}{rl}
IV^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},
m'}
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})} \Big[\sum\limits_{|w|\leq
2^{n}}\frac{1}{(1+|w|)^{N}}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}\Big]^{\frac{q}{p}}\\
+&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}
2^{jq(\gamma_{1}+\frac{n}{2}-\frac{n}{p})} \Big[\sum\limits_{|w|>
2^{n}}\frac{1}{(1+|w|)^{N}}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}\frac{|a^{\epsilon'}_{j',k'}|^{p}}{(2^{nj'}|Q|)^{N'}}\Big]^{\frac{q}{p}}.
\end{array}$$
Case 4.1: $q\le p$. For this case, by the $\alpha$-triangle inequality we have $$\begin{array}{rl}
IV^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m'}\lesssim&\sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{\frac{qN}{p}}}
\sum\limits_{j'\geq-\log_{2}r-1}
2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}\Big)^{\frac{q}{p}}\\
\lesssim&\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$
Case 4.2: $q>p$. Using Hölder’s inequality we have $$\begin{array}{rl}
IV^{\gamma_{1},\gamma_{2}}_{p,q,Q_{r},m'}\lesssim&\sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{N}}
\sum\limits_{j'\geq-\log_{2}r-1}
2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|a^{\epsilon'}_{j',k'}|^{p}\Big)^{\frac{q}{p}}\\
\lesssim&\|f\|_{\dot{B}^{\gamma_{1},\gamma_{2}}_{p,q}}.
\end{array}$$ This completes the proof of Theorem \[th1\].
We close this section by showing the following continuity of the Riesz transforms acting on the tent type spaces $\B^{\gamma_{1},\gamma_{2}}_{p, q, m,m'}$; see also [@Al] and [@Yang1] for some related results.
\[th4\] For $1<p, q<\infty, \gamma_{1},\gamma_{2}\in \mathbb{R}, m>p$, and $m'>0$, the Riesz transforms $R_1,R_2,...,R_n$ are continuous on $\B^{\gamma_{1},\gamma_{2}}_{p, q, m,m'}$.
For the sake of convenience, we choose the classical Meyer wavelet basis $\{\Phi^{\epsilon}_{j,k}\}_{(\epsilon,j,k)\in\Lambda_{n}}$. For any $g(\cdot,\cdot)\in\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$ and $l=1,2,...,n$, we need to prove $(R_{l}g)(\cdot,\cdot)\in
\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$. Write $g(t,x)=\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}g^{\epsilon}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)$. Then $$\begin{array}{rl}
&R_{l}g(t,x)=\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}g^{\epsilon}_{j,k}(t)R_{l}\Phi^{\epsilon}_{j,k}(x)
=:\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x),
\end{array}$$ where $b^{\epsilon}_{j,k}(t)$ is defined by $$\begin{array}{rl}
b^{\epsilon}_{j,k}(t)
&=\ \sum\limits_{(\epsilon',j',k')\in\Lambda_{n}}g^{\epsilon'}_{j',k'}(t)\Big\langle
R_{l}\Phi^{\epsilon'}_{j',k'},\
\Phi^{\epsilon}_{j,k}\Big\rangle=:\ \sum\limits_{|j-j'|\leq1}\sum\limits_{\epsilon',k'}a^{\epsilon,\epsilon'}_{j,k,j',k'}g^{\epsilon'}_{j',k'}(t).
\end{array}$$ Because $R_{l}$ is a Calderón-Zygmund operator, by (\[eq7\]) we get $$\begin{array}{rl}
|a^{\epsilon,\epsilon'}_{j,k,j',k'}|\lesssim2^{-|j-j'|(\frac{n}{2}+N_{0})}
\Big(\frac{2^{-j}+2^{-j'}}{2^{-j}+2^{-j'}+|2^{-j}k-2^{-j'}k'|}\Big)^{n+N_{0}}.
\end{array}$$ The rest of the proof is similar to that of Theorem \[th1\]; we omit the details.
Nonlinear terms and their a priori estimates {#sec5}
===========================================
Decompositions of non-linear terms
----------------------------------
From now on, let $$\begin{cases}
u(t,x)=\sum\limits_{(\epsilon,j,k)\in
\Lambda_n} u^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x);\\
v(t,x)=\sum\limits_{(\epsilon,j,k)\in \Lambda_n}
v^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x).
\end{cases}$$ For $l=1,\cdots, n$, we will derive some inequalities about $$\begin{array}{rl}
B_{l}(u,v)(t,x)=&\int^{t}_{0} e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}(uv) ds.\end{array}$$ Here, it is worth pointing out that (\[eq:decompose\]) gives $$\begin{array}{rl}
e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial
x_{l}}(uv)(s,t,x)=\sum\limits_{j'\in\mathbb{Z}}\sum\limits_{i=1}^{4}I^{i,l}_{j'}(s,t,x),
\end{array}$$ where $$\begin{array}{rcl}
I^{1,l}_{j'}(u,v)(s,t,x)&=& \sum\limits_{\epsilon',k'}
\sum\limits_{k''}
u^{\epsilon'}_{j',k'}(s) v^{0}_{j'-3,k''}(s)\\
&&\quad\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{0}_{j'-3,k''}(x)), \\
I^{2,l}_{j'}(u,v)(s,t,x)&=&\sum\limits_{\epsilon',k'}
\sum\limits_{\epsilon'',k''} u^{\epsilon'}_{j',k'}(s)
v^{\epsilon''}_{j',k''}(s)\\
&&\quad\quad \times\ \ e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j',k''}(x)),\\
I^{3,l}_{j'}(u,v)(s,t,x)&=& \sum\limits_{0<|j'- j''|\leq 3}
\sum\limits_{\epsilon',k'} \sum\limits_{\epsilon'',k''}
u^{\epsilon'}_{j',k'}(s) v^{\epsilon''}_{j'',k''}(s)\\
&&\quad\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j'',k''}(x)),\\
I^{4,l}_{j'}(u,v)(s,t,x)&=& \sum\limits_{\epsilon',k'}
\sum\limits_{k''} v^{\epsilon'}_{j',k'}(s) u^{0}_{j'-3,k''}(s)\\
&&\quad\quad\times e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{0}_{j'-3,k''}(x)).
\end{array}$$ Hence $$\begin{array}{rl}
B_{l}(u,v)(t,x)
&=:\ \int^{t}_{0}\sum\limits_{j'\in\mathbb{Z}}\sum\limits_{i=1}^{4}I^{i,l}_{j'}(s,t,x)ds
=:\ \sum\limits_{i=1}^{4}\int^{t}_{0}I^{i}_{l}(s,t,x)ds.\\
\end{array}$$ Therefore, we can write $$\label{eq:de}
\begin{array}{rl}
B_{l}(u,v)(t,x) :=& \sum\limits^{4}_{i=1} I^{i}_{l}(u,v)(t,x),
\end{array}$$ where $$\begin{array}{rl}
I^{i}_{l}(u,v)(t,x)=\int^{t}_{0}I^{i}_{l}(s,t,x)ds.
\end{array}$$ In order to estimate the bilinear term $B(u,v)$ in some suitable function spaces on $\mathbb{R}^{n}$, we are required to decompose the terms $I^{i}_{l}(u,v)(t,x)$, $i=1,2,\cdots, 4$, respectively.
[**Decomposition of $I^{1}_{l}(u,v)(t,x)$.**]{} The term $I^{1}_{l}(u,v)(t,x)$ is decomposed according to two cases.
Case $[I^{1}_{l}]_1$: $t\geq 2^{-2j\beta}$. For this case, we write $I^1_l(u,v)(t,x)$ as the sum of the following three terms: $$\begin{array}{rl}
I^{1}_l(u,v)(t,x) =&\!\!\!
\sum\limits_{\epsilon',j',k'} \sum\limits_{k''}
\Big(\int^{2^{-1-2j'\beta}}_{0}+\int^{\frac{t}{2}}_{2^{-1-2j'\beta}}+\int_{\frac{t}{2}}^{t}\Big) \Big\{u^{\epsilon'}_{j',k'}(s)
v^{0}_{j'-3,k''}(s)\\
&\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{0}_{j'-3,k''}(x))\Big\}ds\\
=:& I^{1,1}_l(u,v)(t,x)+I^{1,2}_l(u,v)(t,x)+I^{1,3}_l(u,v)(t,x).
\end{array}$$
For $i=1,2,3$, denote $$\begin{array}{rl}
I^{1,i}_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,i}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x). \end{array}$$ Case $[I^{1}_{l}]_2$: $t<2^{-2j\beta}$. For this case, we denote $a^{\epsilon,4}_{j,k}(t)=a^{\epsilon}_{j,k}(t)$ and then have $$\begin{array}{rl}I^1_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,4}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x).\end{array}$$
[**Decomposition of $I^{2}_{l}(u,v)(t,x)$.**]{} The decomposition of $I^{2}_{l}(u,v)(t,x)$ is made according to two cases.
Case $[I^{2}_{l}]_1$: $t\geq 2^{-2j\beta}$. Naturally, $I^2_l(u,v)(t,x)$ can be divided into the following three terms: $$\begin{array}{rl}
I^{2}_l(u,v)(t,x) =&\!\!\! \sum\limits_{j'}
\sum\limits_{\epsilon',k'} \sum\limits_{\epsilon'',k''}
\Big(\int^{2^{-1-2j'\beta}}_{0}+\int^{\frac{t}{2}}_{2^{-1-2j'\beta}}+\int_{\frac{t}{2}}^{t}\Big)
\Big\{u^{\epsilon'}_{j',k'}(s) v^{\epsilon''}_{j',k''}(s)\\
&\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial
x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j',k''}(x))\Big\}ds\\
=:&\!\!\! I^{2,1}_l(u,v)(t,x)+I^{2,2}_l(u,v)(t,x)+I^{2,3}_l(u,v)(t,x).
\end{array}$$
Case $[I^{2}_{l}]_2$: $t\leq 2^{-2j\beta}$. This $I^{2}_{l}(u,v)(t,x)$ can be decomposed into the sum of $I^{2,4}_{l}(u,v)(t,x)$ and $I^{2,5}_{l}(u,v)(t,x)$, where $$\begin{array}{rl}
I^{2}_l(u,v)(t,x) =&\!\!\! \sum\limits_{j'}
\sum\limits_{\epsilon',k'} \sum\limits_{\epsilon'',k''}
\Big(\int^{2^{-2j'\beta}}_{0}+\int^{t}_{2^{-2j'\beta}}\Big)
\Big\{u^{\epsilon'}_{j',k'}(s) v^{\epsilon''}_{j',k''}(s)\\
&\quad\times e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial
x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j',k''}(x))\Big\}ds\\
=:&\!\!\!I^{2,4}_l(u,v)(t,x)+I^{2,5}_l(u,v)(t,x).
\end{array}$$ For $i=1,2,3,4,5$, set $$\begin{array}{rl}I^{2,i}_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n} b^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x).\end{array}$$
[**Decompositions of $I^{3}_{l}(u,v)(t,x)$.**]{} Similarly, we have the following two cases:
Case $[I^{3}_{l}]_1$: $t\geq 2^{-2j\beta}$. This $I^3_l(u,v)(t,x)$ can be divided into the following three terms: $$\begin{array}{rl}
I^{3}_l(u,v)(t,x) =&\!\!\! \sum\limits_{0<|j'- j''|\leq 3}
\sum\limits_{\epsilon',k'}
\sum\limits_{\epsilon'',k''}\Big(\int^{2^{-1-2j'\beta}}_{0}+\int^{\frac{t}{2}}_{2^{-1-2j'\beta}}+\int_{\frac{t}{2}}^{t}\Big)
\Big\{u^{\epsilon'}_{j',k'}(s) v^{\epsilon''}_{j'',k''}(s)\\
&\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial
x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j'',k''}(x))\Big\}ds\\
=:&\!\!\!I^{3,1}_l(u,v)(t,x)+I^{3,2}_l(u,v)(t,x) +I^{3,3}_l(u,v)(t,x).
\end{array}$$
Case $[I^{3}_{l}]_2$: $t\leq 2^{-2j\beta}$. This $I^{3}_{l}(u,v)(t,x)$ can be decomposed into the sum of $I^{3,4}_{l}(u,v)(t,x)$ and $I^{3,5}_{l}(u,v)(t,x)$, where $$\begin{array}{rl}
I^{3}_l(u,v)(t,x) =&\!\!\! \sum\limits_{0<|j'- j''|\leq 3}
\sum\limits_{\epsilon',k'} \sum\limits_{\epsilon'',k''}
\Big(\int^{2^{-2j'\beta}}_{0}+\int^{t}_{2^{-2j'\beta}}\Big)
\Big\{u^{\epsilon'}_{j',k'}(s) v^{\epsilon''}_{j'',k''}(s)\\
&\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial
x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{\epsilon''}_{j'',k''}(x))\Big\}ds\\
=:&\!\!\!I^{3,4}_l(u,v)(t,x)+I^{3,5}_l(u,v)(t,x).
\end{array}$$ For $i=1,2,3,4,5$, denote $$\begin{array}{rl}I^{3,i}_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n} b^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x).\end{array}$$
[**Decomposition of $I^{4,l}_{j}(u,v)(t,x)$.**]{} It is easy to see that the terms $I^{1, l}_{j}(u,v)(t,x)$ and $I^{4,l}_{j}(u,v)(t,x)$ are symmetric in $u(t,x)$ and $v(t,x)$. Hence for $I^{4}_{l}(u,v)$ we have a similar decomposition.
Case $[I^{4}_{l}]_1$: $t\geq 2^{-2j\beta}$. For this case, we write $I^{4}_{l}(u,v)(t,x)$ as the sum of the following three terms: $$\begin{array}{rl}
I^{4}_l(u,v)(t,x) =&\!\!\!
\sum\limits_{\epsilon',j',k'} \sum\limits_{k''}
\Big(\int^{2^{-1-2j'\beta}}_{0}+\int^{\frac{t}{2}}_{2^{-1-2j'\beta}}+\int_{\frac{t}{2}}^{t}\Big)\Big\{ v^{\epsilon'}_{j',k'}(s)
u^{0}_{j'-3,k''}(s)\\
&\quad\quad\times\ \ e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}
(\Phi^{\epsilon'}_{j',k'}(x)\Phi^{0}_{j'-3,k''}(x))\Big\}ds\\
=:&\!\!\!I^{4,1}_l(u,v)(t,x)+I^{4,2}_l(u,v)(t,x)+I^{4,3}_l(u,v)(t,x).
\end{array}$$ For $i=1,2,3$, denote $$\begin{array}{rl}
I^{4,i}_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,i}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x). \end{array}$$ Case $[I^{4}_{l}]_2$: $t<2^{-2j\beta}$. For this case, we denote $a^{\epsilon,4}_{j,k}(t)=a^{\epsilon}_{j,k}(t)$ and then have $$\begin{array}{rl}I^4_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,4}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x).\end{array}$$
Induced a priori estimates
-------------------------
In the sequel we dominate the above-defined coefficients $a^{\epsilon,i}_{j,k}$ and $b^{\epsilon,i}_{j,k}$ in terms of $u^{\epsilon'}_{j',k'}$ and $v^{\epsilon''}_{j',k''}$.
\[le6\] There is a constant $\tilde{c}>0$ such that:
[(i)]{} For $i=1, 2$, $$\begin{array}{rl}
|a^{\epsilon,i}_{j,k}(t)|\lesssim &\!\!\!
2^{\frac{nj}{2}+j} \sum\limits_{|j-j'|\leq 2}
\sum\limits_{\epsilon',k',k''}
\int_{I_{i}}\frac{|u^{\epsilon'}_{j',k'}(s)|}{(1+ |2^{j-j'}k'-k|)^{N}}\frac{| v^{0}_{j'-3,k''}(s)|}{(1+|2^{j-j'+3}k''-k'|)^{N}}
e^{-\tilde c t2^{2j\beta}}ds,
\end{array}$$ where $I_{1}=[0, 2^{-1-2j'\beta}]$ and $I_{2}=[2^{-1-2j'\beta}, \frac{t}{2}]$.
[(ii)]{} For $i=3, 4$, $$\begin{array}{rl}
|a^{\epsilon,i}_{j,k}(t)|\lesssim &\!\!\!
2^{\frac{nj}{2}+j} \sum\limits_{|j-j'|\leq 2}
\sum\limits_{\epsilon',k',k''}
\int_{I_{i}}\frac{|u^{\epsilon'}_{j',k'}(s)|}{(1+ |2^{j-j'}k'-k|)^{N}}\frac{| v^{0}_{j'-3,k''}(s)|}{(1+|2^{j-j'+3}k''-k'|)^{N}}
e^{-\tilde c (t-s)2^{2j\beta}}ds,
\end{array}$$ where $I_{3}=[\frac{t}{2}, t]$ and $I_{4}=[0, t]$.
$$\begin{array}{rl}
a^{\epsilon,1}_{j,k}(t)=&\!\!\!\Big\langle I^{1,1}_{l}(u,v),
\Phi^{\epsilon}_{j,k}\Big\rangle\\
=&\!\!\!\sum\limits_{\epsilon',k',k''}~\sum\limits_{|j-j'|\leq2}\int^{2^{-1-2j'\beta}}_{0}\Big\{u^{\epsilon'}_{j',k'}(s)v^{0}_{j'-3,k''}(s)\\
&\quad\quad\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle \Big\}ds.
\end{array}$$ The Fourier transform gives $$\begin{array}{rl}
&\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\\
&=\int e^{-(t-s)|\xi|^{2\beta}}\xi_{l}\widehat{(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''})}(\xi)
2^{-jn/2}e^{-i2^{-j}k\xi}\widehat{\Phi^{\epsilon}}(2^{-j}\xi)d\xi\\
&=\int e^{-(t-s)|\xi|^{2\beta}}\xi_{l}e^{-i2^{-j'}k'\xi}\Big[\int e^{ik'\eta}\widehat{\Phi^{\epsilon'}}(2^{-j'}\xi-\eta)e^{-8ik''\eta}\widehat{\Phi^{0}}(8\eta)d\eta\Big]\\
&\quad\times 2^{-jn/2}e^{-i2^{-j}k\xi}\widehat{\Phi^{\epsilon}}(2^{-j}\xi)d\xi.
\end{array}$$ Because $0<s<2^{-1-2j'\beta}$, we can see $(t-s)\sim t$. Hence $$\begin{array}{rl}
&\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\\
&=2^{jn/2+j}\int e^{-(t-s)2^{2j\beta}|\xi|^{2\beta}}\xi_{l}e^{-i(k-2^{j-j'}k')\xi}\Big[\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta)\\
&\quad\times e^{-i(k-8k'')\eta}d\eta\Big] \widehat{\Phi^{\epsilon}}(\xi)d\xi\\
&=2^{jn/2+j}\int e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l}e^{-i(k-2^{j-j'}k')\xi}\Big[\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta)\\
&\quad\times e^{-i(k-8k'')\eta}d\eta\Big] \widehat{\Phi^{\epsilon}}(\xi)d\xi.
\end{array}$$
[**Situation I:** ]{} We first consider the case: $|2^{j-j'}k'-k|\leq 2$. We can see $$(1+|2^{j-j'}k'-k|)^{-N}\gtrsim 1.$$ Under this situation, we divide the argument into two cases.
[*Case 1:*]{} $|k'-8k''|\leq 2$. For any positive integer $N$, $$(1+|2^{j-j'+3}k'-k''|)^{-N}\gtrsim 1.$$ On the other hand, the support of $\widehat{\Phi^{\epsilon}}(\xi)$ is a ring. A direct computation derives $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim e^{-\tilde c t2^{2j\beta}}2^{jn/2+j} (1+ |2^{j-j'}k'-k|)^{-N}
(1+|2^{j-j'+3}k''-k'|)^{-N}.
\end{array}$$
[*Case 2:*]{} $|k'-8k''|\geq 2$. Denote by $l_{i_{0}}$ the largest component of $k'-8k''$. We have $$\begin{array}{rl}
&\Big|\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta) e^{-i(k-8k'')\eta}d\eta\Big|\\
&\lesssim\frac{1}{(1+|k'-8k''|)^{N}}\Big|\int\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta) (\frac{1}{i}\partial_{\eta_{i_{0}}})^{N}(e^{-i(k-8k'')\eta})d\eta\Big|\\
&\lesssim\frac{1}{(1+|k'-8k''|)^{N}}\Big|\int\sum\limits_{l=0}^{N}C^{l}_{N}\partial_{\eta_{i_{0}}}^{l}(\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta))
\partial_{\eta_{i_{0}}}^{N-l}(\widehat{\Phi^{0}}(8\eta)) e^{-i(k-8k'')\eta}d\eta\Big|\\
&\lesssim\frac{1}{(1+|k'-8k''|)^{N}},
\end{array}$$ which gives $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim 2^{jn/2+j}e^{-\tilde c t2^{2j\beta}} (1+ |2^{j-j'}k'-k|)^{-N}
(1+|2^{j-j'+3}k''-k'|)^{-N}.
\end{array}$$
[**Situation II:** ]{} We then consider the case: $|2^{j-j'}k'-k|\geq 2$. We still divide the discussion into the following two cases.
[*Case 3:* ]{} $|k'-8k''|\leq 2$. Denote by $k_{i_{0}}$ the largest component of $2^{j-j'}k'-k$. Then $$(1+|k_{i_{0}}|)^{N}\sim (1+|2^{j-j'}k'-k|)^{N}.$$ This fact implies $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim\frac{2^{jn/2+j}}{(1+|2^{j-j'}k'-k|)^{N}}\Big|\int (\frac{1}{i}\partial_{\xi_{i_{0}}})^{N}(e^{-i(k-2^{j-j'}k')\xi})e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l}\\
&\quad\times\Big[\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta)
e^{-i(k-8k'')\eta}d\eta\Big] \widehat{\Phi^{\epsilon}}(\xi)d\xi\Big|\\
&\lesssim\frac{2^{jn/2+j}}{(1+|2^{j-j'}k'-k|)^{N}}\Big|\int e^{-i(k-2^{j-j'}k')\xi}\sum\limits^{N}_{l=0}C^{l}_{N}\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l})\\
&\quad\times\partial_{\xi_{i_{0}}}^{N-l}\Big(\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta)
e^{-i(k-8k'')\eta}d\eta\Big) \widehat{\Phi^{\epsilon}}(\xi)d\xi\Big|.\\
\end{array}$$ In the above and below, $C^l_N$ stands for the binomial coefficient indexed by $N$ and $l$. Because the support of $\widehat{\Phi^{\epsilon}}(\xi)$ is a ring, there exists a small constant $c>0$ such that $$|\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l})|\lesssim e^{-ct2^{2j\beta}}.$$ Consequently, we have a constant $\tilde{c}>0$ such that $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim\frac{2^{jn/2+j}}{(1+|2^{j-j'}k'-k|)^{N}}\Big|\int e^{-i(k-2^{j-j'}k')\xi}\sum\limits^{N}_{l=0}C^{l}_{N}\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l})\\
&\quad\times\partial_{\xi_{i_{0}}}^{N-l}\Big(\int \widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\widehat{\Phi^{0}}(8\eta)
e^{-i(k-8k'')\eta}d\eta\Big) \widehat{\Phi^{\epsilon}}(\xi)d\xi\Big|\\
&\lesssim e^{-\tilde c t2^{2j\beta}} 2^{jn/2+j} (1+ |2^{j-j'}k'-k|)^{-N}
(1+|2^{j-j'+3}k''-k'|)^{-N}.
\end{array}$$
[*Case 4:*]{} $|k'-8k''|\geq 2$. In a manner similar to Case 3, we denote by $k_{i_{0}}$ the largest component of $2^{j-j'}k'-k$, and then obtain $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim\frac{2^{jn/2+j}}{(1+|2^{j-j'}k'-k|)^{N}}\int \sum\limits^{N}_{l=0}C^{l}_{N}\Big|\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l})\Big|\\
&\quad\times\Big|\int \partial_{\xi_{i_{0}}}^{N-l}\Big(\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\Big)\widehat{\Phi^{0}}(8\eta)
e^{-i(k-8k'')\eta}d\eta\Big| |\widehat{\Phi^{\epsilon}}(\xi)|d\xi.
\end{array}$$ As in Case 2, upon choosing $l_{i_{0}}$ as the largest component of $k'-8k''$, applying an integration-by-parts, and utilizing the fact that $\widehat{\Phi^{\epsilon}}$ is supported on a ring, we can get a constant $\tilde{c}>0$ such that $$\begin{array}{rl}
&\Big|\Big\langle
e^{-(t-s)(-\Delta)^{\beta}}\frac{\partial}{\partial
x_{l}}(\Phi^{\epsilon'}_{j',k'}\Phi^{0}_{j'-3,k''}),\
\Phi^{\epsilon}_{j,k}\Big\rangle\Big|\\
&\lesssim\frac{2^{jn/2+j}}{(1+|2^{j-j'}k'-k|)^{N}(1+|2^{j-j'+3}k''-k'|)^{N}}\int \sum\limits^{N}_{l=0}C^{l}_{N}\Big|\partial_{\xi_{i_{0}}}^{l}(e^{-t2^{2j\beta}|\xi|^{2\beta}}\xi_{l})\Big|\\
&\quad\times\Big|\int \partial_{\xi_{i_{0}}}^{N-l}\Big(\widehat{\Phi^{\epsilon'}}(2^{j-j'}\xi-\eta)\Big)\widehat{\Phi^{0}}(8\eta)
(\frac{1}{i}\partial_{\eta_{i_{0}}})^{N}\Big(e^{-i(k'-8k'')\eta}\Big)d\eta\Big| |\widehat{\Phi^{\epsilon}}(\xi)|d\xi\\
&\lesssim e^{-\tilde c t2^{2j\beta}} 2^{jn/2+j} (1+ |2^{j-j'}k'-k|)^{-N}
(1+|2^{j-j'+3}k''-k'|)^{-N}.
\end{array}$$ This completes the estimate of $a^{\epsilon,1}_{j,k}(t)$. The estimates of $a^{\epsilon,i}_{j,k}(t)$ for $i=2,3,4$ can be obtained similarly.
Using the same method, we can obtain the following estimates for $b^{\epsilon,i}_{j,k}(t), i=1,2,3,4,5$.
\[le7\]There is a constant $\tilde{c}>0$ such that:
[(i)]{} For $i=1, 2$, $$\begin{array}{rl}
|b^{\epsilon, i}_{j,k}(t)|\lesssim &\!\!\!
2^{\frac{nj}{2}+j} \sum\limits_{j\leq j'+ 2}
\sum\limits_{\epsilon',k',\epsilon'',k''}
\int_{I_{i}}\frac{|u^{\epsilon'}_{j',k'}(s)|}{(1+ |2^{j-j'}k'-k|)^{N}}\frac{| v^{\epsilon''}_{j',k''}(s)|}{(1+|2^{j-j'}k''-k'|)^{N}}
e^{-\tilde c t2^{2j\beta}}ds,
\end{array}$$ where $I_{1}=[0, 2^{-1-2j'\beta}]$ and $I_{2}=[2^{-1-2j'\beta}, \frac{t}{2}]$.
[(ii)]{} For $i=3, 4, 5$, $$\begin{array}{rl}
|b^{\epsilon,i}_{j,k}(t)|\lesssim &\!\!\!
2^{\frac{nj}{2}+j} \sum\limits_{j\leq j'+ 2}
\sum\limits_{\epsilon',k',\epsilon'',k''}
\int_{I_{i}}\frac{|u^{\epsilon'}_{j',k'}(s)|}{(1+ |2^{j-j'}k'-k|)^{N}}\frac{| v^{\epsilon''}_{j',k''}(s)|}{(1+|2^{j-j'}k''-k'|)^{N}}
e^{-\tilde c (t-s)2^{2j\beta}}ds,\\
\end{array}$$ where $I_{3}=[\frac{t}{2}, t]$, $I_{4}=[0, 2^{-2j'\beta}]$ and $I_{5}=[2^{-2j'\beta}, t]$.
Let $Q_{j,k}$ and $Q_{j',k'}$ be two dyadic cubes, and for $w\in\mathbb{Z}^{n}$ denote by $Q^{w}_{j,k}$ the dyadic cube $\widetilde{Q}_{j,k}+2^{8-j}w$, where $\widetilde{Q}_{j,k}$ denotes the dyadic cube containing $Q_{j,k}$ with side length $2^{8-j}$. The forthcoming lemmas can be deduced from the Cauchy-Schwarz inequality.
\[inequality1\]
[(i)]{} For $j, j'\in \mathbb{Z}$ and $w,k,k'\in \mathbb{Z}^{n}$, if $Q_{j',k'}\subset Q_{j,k}^{w}$, then $$\label{eqn:est1} (1+ |2^{j-j'}k'-k|)^{-N} \lesssim(1+|w|)^{-N}.$$
[(ii)]{} Let $0<j'-j''\leq 3$, $j\leq j'+5$ and $|w-w'|> 2^{n}$. If $Q_{j',k'}\subset Q_{j,k}^{w}$ and $Q_{j'',k''}\subset
Q_{j,k}^{w'}$, then $$\label{eqn:est2}
(1+|2^{j'-j''}k''-k'|)^{-N}\lesssim 2^{N(j-j')} (1+|w-w'|)^{-N}.$$
\[inequality3\] Let $Q_{j,k}$ be a dyadic cube with radius $2^{-j}$. For $w\in\mathbb{Z}^{n}$, let $Q^{w}_{j,k}$ be the dyadic cube $2^{8-j}w+\widetilde{Q}_{j,k}$. Then $$\label{eqn:est3}
\begin{array}{rl}
&\sum\limits_{\epsilon',k'}\sum\limits_{\epsilon'',k''}|u^{\epsilon'}_{j',k'}(s)||v^{\epsilon''}_{j',k''}(s)|^{p-1}
(1+|2^{j-j'}k'-k|)^{-8N}(1+|k'-k''|)^{-8N}\\
&\lesssim \sum\limits_{w\in\mathbb{Z}^{n}}\sum\limits_{w'\in\mathbb{Z}^{n}}(1+|w|)^{-N}(1+|w'|)^{-N}\\
&\ \times\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}\Big)^{\frac{1}{p}}\Big(\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p} \Big)^{\frac{1}{p'}}.
\end{array}$$
\[inequality4\] Let $Q_{j,k}$ be a dyadic cube with radius $2^{-j}$. For $w\in\mathbb{Z}^{n}$, denote by $Q^{w}_{j,k}$ the dyadic cube $2^{8-j}w+\widetilde{Q}_{j,k}$. If $\delta>0$ is small enough, then $$\label{eqn:est5}
\begin{array}{rl}
&\sum\limits_{(\epsilon,k)\in S^{j}_{r}} \Big\{ \sum\limits_{j\leq j'+5}
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}
\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}
|a^{\epsilon}_{j',k'}|^{p}\Big)^{\frac{1}{p}}\Big\} ^{p}\\
&\lesssim\ \sum\limits_{j\leq j'+5} 2^{\delta (j'-j)}
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}
\sum\limits_{(\epsilon',k')\in S^{w,j}_{r}} |a^{\epsilon}_{j',k'}|^{p}.
\end{array}$$
Here, it is worth mentioning that the proof of Lemma \[inequality4\] also needs the following fact: for fixed $j$, the number of $Q_{j',k'}$ which are contained in the dyadic cube $ Q_{j,k}^{w}=
2^{8-j}w+\widetilde{Q}_{j,k}$ equals $2^{n(8+j'-j)}$. On the other hand, for any dyadic cube $Q_{r}$ with radius $r$, the number of $Q_{j,k}\subset Q_{r}$ equals $(2^{j}r)^{n}$. Then the number of $Q_{j',k'}$ which are contained in the dyadic cube $Q^{w}_{r}$ equals $(2^{8+j'}r)^{n}$. In the proof of the main lemmas in Sections \[sec7\] & \[sec8\], we will use this fact again.
Let $Q_{j,k}$ be a dyadic cube with radius $2^{-j}$. For $w\in\mathbb{Z}^{n}$, denote by $Q^{w}_{j,k}$ the dyadic cube $2^{8-j}w+\widetilde{Q}_{j,k}$. If $j<j'+2$, then $$\label{eqn:est6}
\begin{array}{rl}
&\sum\limits_{\epsilon',k'}\sum\limits_{\epsilon'',k''}|u^{\epsilon'}_{j',k'}(s)||v^{\epsilon''}_{j',k''}(s)|
(1+|2^{j'-j}k'-k|)^{-N}(1+|k'-k''|)^{-N}\\
&\lesssim\ \sum\limits_{w,w'\in\mathbb{Z}^{n}}(1+|w|)^{-N}(1+|w-w'|)^{-N}2^{n(j'-j)(1-\frac{2}{p})}\\
&\quad\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}\Big)^{\frac{1}{p}}
\Big(\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}\Big)^{\frac{1}{p}}.
\end{array}$$
Proof of the main theorem {#sec6}
=========================
By Picard’s contraction principle and Theorems \[th1\] & \[th4\], it is enough to verify that the bilinear operator $$\nonumber\begin{array}{rl}
&B(u,v)= \int^{t}_{0} e^{-(t-s)(-\Delta)^{\beta}}
\mathbb{P}\nabla\cdot (u\otimes v) ds\end{array}$$ is bounded from $(\B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m'
})^{n}\times (\B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' })^{n}$ to $(\B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' })^{n}$. To do so, let $$\begin{array}{rl}
&B_{l}(u,v)= \int^{t}_{0} e^{-(t-s)(-\Delta)^{\beta}}
\frac{\partial}{\partial x_{l}}(uv) ds \end{array}$$ and $$\begin{array}{rl}
&B_{l,l',l''}(u,v)= R_{l}R_{l'} \int^{t}_{0}
e^{-(t-s)(-\Delta)^{\beta}} \frac{\partial}{\partial x_{l''}}(uv)
ds.\end{array}$$ We need to prove that all $B_{l}(u,v)$, $B_{l,l',l''}(u,v)$ are bounded from $\B^{\gamma_{1},
\gamma_{2}}_{p, q, m,m' }\times \B^{\gamma_{1}, \gamma_{2}}_{p, q,
m, m' }$ to $\B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' }$. Because $R_{l'},\ l'=1,\cdots,n$ are bounded on $\B^{\gamma_{1},
\gamma_{2}}_{p, q, m, m' }$, we only consider the boundedness of $B_{l}(u,v)$. By (\[eq:de\]), if $$\begin{array}{rl}
&u(t,x)=\sum\limits_{(\epsilon,j,k)\in \Lambda_n}
u^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)\ \ \text{ and }
\ \ v(t,x)=\sum\limits_{(\epsilon,j,k)\in \Lambda_n}
v^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x),
\end{array}$$ then $$\begin{array}{rl}
B_{l}(u,v)(t,x) = \sum\limits^{4}_{i=1} I^{i}_{l}(u,v)(t,x),
\end{array}$$ where the terms $I^{i}_{l}(u,v)(t,x),\ i=1,2,\cdots,4$ are defined in Subsection 4.1.
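Before estimating these terms, we recall that the Picard scheme behind the contraction argument produces the mild solution as the limit of the iterates $u^{(0)}=e^{-t(-\Delta)^{\beta}}a$, $u^{(k+1)}=u^{(0)}-B(u^{(k)},u^{(k)})$ (up to the sign convention for $B$). Purely as an illustration of this scheme, and not of the function space estimates that justify it, the following sketch runs the iteration for a one-dimensional periodic scalar toy model $u_{t}+(-\Delta)^{\beta}u+\partial_{x}(u^{2}/2)=0$ with the same fractional dissipation; the model, the quadrature and every parameter are our own choices.

``` python
# Picard iteration for the mild formulation of a 1-D fractional Burgers toy model
# (a scalar stand-in for (eqn:ns)); purely illustrative.
import numpy as np

N, M, T, beta = 256, 100, 0.5, 0.75
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)
ts = np.linspace(0.0, T, M + 1)
dt = T / M

def semigroup(f, t):
    """e^{-t(-Delta)^beta} f as a Fourier multiplier."""
    return np.fft.ifft(np.exp(-t * np.abs(k) ** (2 * beta)) * np.fft.fft(f)).real

def dx(f):
    """Spectral derivative d/dx on the periodic grid."""
    return np.fft.ifft(1j * k * np.fft.fft(f)).real

a = 0.1 * np.sin(x)                                # small initial datum
free = np.array([semigroup(a, t) for t in ts])     # the linear part e^{-t(-Delta)^beta} a

u = free.copy()
for it in range(6):                                # Picard iterates u^{(k)}
    new = free.copy()
    for mi in range(1, M + 1):                     # Duhamel integral, left-endpoint rule
        duh = np.zeros(N)
        for si in range(mi):
            duh += dt * semigroup(dx(0.5 * u[si] ** 2), ts[mi] - ts[si])
        new[mi] = free[mi] - duh
    print(it, np.max(np.abs(new - u)))             # successive differences shrink
    u = new
```

For a small datum the successive differences printed by the loop decrease rapidly, which is the numerical shadow of the contraction property exploited here.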
It is not hard to see that the argument for $I^4_l(u,v)(t,x)$ is similar to that for $I^1_l(u,v)(t,x)$. Also, the treatment of $I^3_l(u,v)(t,x)$ is similar to that of $I^2_l(u,v)(t,x)$. So, we are only required to show that the following functions $$\begin{array}{rl}
&(t,x)\mapsto I^1_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)
\end{array}$$ and $$\begin{array}{rl}
&(t,x)\mapsto I^2_l(u,v)(t,x) = \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
b^{\epsilon}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)
\end{array}$$ belong to $\B^{\gamma_{1},\gamma_{2}}_{p, q, m,m' }$. By the decompositions of non-linear terms obtained in Subsection 4.1, it amounts to verifying that the following functions $$\begin{array}{rl}
(t,x)\mapsto
\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,i}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x), \ i=1,2,3,4\end{array}$$ and $$\begin{array}{rl}(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,i}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x),\ i=1,2,3,4,5\end{array}$$ are members of $\B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' }$. The demonstration will be concluded by proving the following two lemmas:
\[lem53\] If $(\beta, p,q, \gamma_1, \gamma_2, m,m')$ satisfies the conditions of Theorem \[mthmain\] and $u,v\in \B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' }$, then
[(i)]{} For $ i=1, 2, 3$, the function $(t,x)\mapsto \sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,i}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},I}_{p, q, m }$;
[(ii)]{} For $ i=1, 2,
3,$ the function $(t,x)\mapsto \sum\limits_{(\epsilon,j,k)\in \Lambda_n} a^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},III}_{p, q, m
};$
[(iii)]{} The function $(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in \Lambda_n}
a^{\epsilon,4}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},II}_{p,q } \bigcap\B^{\gamma_{1},\gamma_{2},IV}_{p, q, m' }.$
\[lem54\] If $(\beta, p,q, \gamma_1, \gamma_2, m,m')$ satisfies the conditions of Theorem \[mthmain\] and $u,v\in \B^{\gamma_{1}, \gamma_{2}}_{p, q, m,m' }$, then
[(i)]{} For $i=1, 2, 3$, the function $(t,x)\mapsto \sum\limits_{(\epsilon,j,k)\in
\Lambda_n} b^{\epsilon,i}_{j,k}(t) \Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},I}_{p, q, m}$;
[(ii)]{} For $i=1, 2, 3$, the function $(t,x)\mapsto \sum\limits_{(\epsilon,j,k)\in \Lambda_n} b^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},III}_{p, q, m }$;
[(iii)]{} For $i=4, 5$, the function $(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in \Lambda_n} b^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},II}_{p, q}$;
[(iv)]{} For $i=4, 5$, the function $(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in \Lambda_n} b^{\epsilon,i}_{j,k}(t)
\Phi^{\epsilon}_{j,k}(x)$ is in $\B^{\gamma_{1},\gamma_{2},IV}_{p, q, m' }$.
Proof of Lemma \[lem53\] {#sec7}
========================
The setting (i)
---------------
For $i=1,2,3$, define $$\begin{array}{rl}
I^{m,i}_{a, Q_{r}}(t)
&=\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,
-\frac{\log_{2}t}{2\beta}\}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}|a^{\epsilon,i}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}\Big]^{\frac{q}{p}}.
\end{array}$$ According to the relation between $2^{-2j\beta}$ and $t$, the proof is divided into three cases. The proofs for $\sum\limits_{(\epsilon,j,k)}a^{\epsilon,i}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)$, $i=1,2,3$, are similar, so for simplicity we only prove $$\begin{array}{rl}
&\text{Case 7.1: }\ (t,x)\mapsto\sum\limits_{(\epsilon,j,k)}a^{\epsilon,1}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)
\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},
I}_{p,q,m}.\end{array}$$ Without loss of generality, we may assume $\|u\|_{\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}}=\|v\|_{\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}}=1$. Because $v\in\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$, one has $v\in
\B^{\gamma_{1}-\gamma_{2}}_{\frac{m}{p},\infty}\subset\B^{\gamma_{1}-\gamma_{2}}_{0,\infty}$. Hence $$|v^{0}_{j'-3,k''}(s)|\lesssim
s^{-\frac{\gamma_{2}-\gamma_{1}}{2\beta}}2^{-\frac{nj'}{2}},$$ and consequently, by (i) of Lemma \[le6\], $$\begin{array}{rl}
|a^{\epsilon,1}_{j,k}(t)|
\lesssim&\!\!\!2^{j}\sum\limits_{|j-j'|\leq2}\sum\limits_{\epsilon',k'}\int^{2^{-1-{2j'\beta}}}_{0}
|u^{\epsilon'}_{j',k'}(s)|(1+|2^{j-j'}k'-k|)^{-N}e^{-\tilde{c}t2^{2j\beta}}s^{\frac{1}{2\beta}-1}ds.
\end{array}$$ Notice that $|j-j'|\leq2$. So, applying Hölder’s inequality to $k'$ we get $$\begin{array}{rl}
&I^{m,1}_{a, Q_{r}}(t)\\
&\quad\lesssim\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,
-\frac{\log_{2}t}{2\beta}\}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}\sum\limits_{|j-j'|\leq2}\sum\limits_{\epsilon',k'}2^{pj}e^{-\tilde{c}pt2^{2j\beta}}\\
&\quad(1+|2^{j-j'}k'-k|)^{-N}\Big(\int^{2^{-1-{2j'\beta}}}_{0}
|u^{\epsilon'}_{j',k'}(s)| s^{\frac{1}{2\beta}-1}ds\Big)^{p}
(t2^{2j\beta})^{m}\Big]^{\frac{q}{p}}.
\end{array}$$ By $p>2m'\beta$, $j\sim j'$ and Hölder’s inequality, we apply (\[eqn:est1\]) to get $$\begin{array}{rl}
I^{m,1}_{a, Q_{r}}(t)
&\lesssim\
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,
-\frac{\log_{2}t}{2\beta}\}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}e^{-cpt2^{2j\beta}}\\
&\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}
\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{j,k}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}(t2^{2j\beta})^{m}\Big]^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, by the $\alpha$-triangle inequality we obtain $$\begin{array}{rl}
I^{m,1}_{a, Q_{r}}(t)\lesssim&
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,
-\frac{\log_{2}t}{2\beta}\}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}\\
&\Big[\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\\
\lesssim&\|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}\lesssim1.
\end{array}$$
If $q>p$, Hölder’s inequality implies that $$\begin{array}{rl}
I^{m,1}_{a, Q_{r}}(t)\lesssim&\!\!\!
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,
-\frac{\log_{2}t}{2\beta}\}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\\
&\Big[\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\\
\lesssim&\!\!\!\|u\|_{\B^{\gamma_{1},\gamma_{2},
IV}_{p,q,m'}}\lesssim1.
\end{array}$$
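Let us record, for the reader's convenience, the two elementary facts behind the case distinction $q\leq p$ versus $q>p$ used above and repeatedly below (both are standard). For $q\leq p$, the $\alpha$-triangle inequality refers to $$\Big(\sum_{k}|c_{k}|\Big)^{\theta}\leq\sum_{k}|c_{k}|^{\theta},\qquad 0<\theta=\frac{q}{p}\leq1,$$ which moves the power $\frac{q}{p}$ inside the sums over $w$ and $j'$. For $q>p$, Hölder’s (equivalently, Jensen’s) inequality with respect to the finite measure $(1+|w|)^{-N}$, $N>n$, gives $$\Big(\sum_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}c_{w}\Big)^{\frac{q}{p}}\lesssim\sum_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}c_{w}^{\frac{q}{p}},\qquad c_{w}\geq0.$$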
In a similar manner, we can obtain the following two assertions. $$\nonumber\begin{cases}
\text{ Case 7.2: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)}a^{\epsilon,2}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},I}_{p,q,m};\\
\text{ Case 7.3: } (t,x)\mapsto
\sum\limits_{(\epsilon,j,k)}a^{\epsilon,3}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},I}_{p,q,m}.
\end{cases}$$
The setting (ii)
----------------
For $i=1, 2, 3$, define $$\begin{array}{rl}
III^{m,i}_{a, Q_{r}}=&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}|a^{\epsilon,i}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ We consider the following three cases: $$\nonumber\begin{cases}
\text{ Case 7.4: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,1}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},
III}_{p,q,m};\\
\text{ Case 7.5: } (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,2}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},
III}_{p,q,m};\\
\text{Case 7.6: }(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,3}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},III}_{p,q,m}.
\end{cases}$$ It is enough to check Case 7.4 since Cases 7.5 and 7.6 can be dealt with similarly. In fact, for $v\in\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$ we have, by (i) of Lemma \[le6\], $$\begin{array}{rl}
|a^{\epsilon,1}_{j,k}(t)|\lesssim&\!\!\!2^{j}\sum\limits_{|j-j'|\leq2}\sum\limits_{\epsilon',k'}\int^{2^{-1-2j'\beta}}_{0}
|u^{\epsilon'}_{j',k'}(s)|
e^{-\tilde{c}t2^{2j\beta}}(1+|2^{j-j'}k'-k|)^{-N}s^{\frac{1}{2\beta}}\frac{ds}{s},
\end{array}$$ whence we obtain $$\begin{array}{rl}
III^{m,1}_{a, Q_{r}}\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}2^{pj}e^{-\tilde{c}pt2^{2j\beta}}
\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}\\
&\Big(\sum\limits_{|j-j'|\leq2}\sum\limits_{\epsilon',k'}\int^{2^{-1-2j'\beta}}_{0}
|u^{\epsilon'}_{j',k'}(s)|
(1+|2^{j-j'}k'-k|)^{-N}s^{\frac{1}{2\beta}}\frac{ds}{s}\Big)^{p}(t2^{2j\beta})^{m}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ Applying Hölder’s inequality to $k'$ and $s$ respectively, along with (\[eqn:est1\]) and $|j-j'|\leq2$, we find $$\begin{array}{rl}
III^{m,1}_{a, Q_{r}}\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{q}{p})}
\Big[\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\int^{r^{2\beta}}_{2^{-2j\beta}}2^{jp}e^{-\tilde{c}pt2^{2j\beta}}\\
&\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}2^{-jp}\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
j,k}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j\beta})^{m'}\frac{ds}{s}\Big)(t2^{2j\beta})^{m}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, by changing variables we get $$\begin{array}{rl}
III^{m,1}_{a, Q_{r}}\lesssim&
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{q}{p})}\\
&\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}e^{-\tilde{c}t2^{2j\beta}}(t2^{2j\beta})^{m}\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
r}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j\beta})^{m'}\frac{ds}{s}\Big)\frac{dt}{t}\Big]^{\frac{q}{p}}\\
\lesssim&\|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}.
\end{array}$$
If $q>p$, via applying Hölder’s inequality for $w$ and $\frac{q}{p}>1$, we similarly have $$\begin{array}{rl}
III^{m,1}_{a, Q_{r}}\lesssim&\!\!\!\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}
|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{q}{p})}\\
&\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}e^{-\tilde{c}t2^{2j\beta}}(t2^{2j\beta})^{m}\Big(\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
r}}\int^{2^{-1-2j'\beta}}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j\beta})^{m'}\frac{ds}{s}\Big)\frac{dt}{t}\Big]^{\frac{q}{p}}\\
\lesssim&\!\!\!\|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}.
\end{array}$$
The setting (iii)
-----------------
The argument showing that the function $$\begin{array}{rl}
&(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,4}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},II}_{p,q}
\bigcap \B^{\gamma_{1},\gamma_{2},IV}_{p,q, m'}\end{array}$$ is divided into two cases below.
$$\begin{array}{rl}
&\text{Case 7.7: }\
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,4}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2},II}_{p,q}.
\end{array}$$
Let $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)
&=\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}|a^{\epsilon,4}_{j,k}(t)|^{p}\Big]^{\frac{q}{p}}.
\end{array}$$ Then, by (ii) of Lemma \[le6\] we have
$$\begin{array}{rl}
|a^{\epsilon,4}_{j,k}(t)|
\lesssim&\!\!\!2^{j}\sum\limits_{|j-j'|\leq2}\sum\limits_{\epsilon',k'}\int^{t}_{0}
|u^{\epsilon'}_{j',k'}(s)| e^{-\tilde{c}(t-s)2^{2j\beta}}
(1+|2^{j-j'}k'-k|)^{-N}s^{\frac{1}{2\beta}-1}ds,
\end{array}$$ whence, via Hölder’s inequality and (\[eqn:est1\]), we get $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[2^{pj}\sum\limits_{|j-j'|\leq2}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\\
& \sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}t^{p-1-\mu
p}\int^{t}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}s^{(\frac{1}{2\beta}-1+\mu)p}ds\Big]^{\frac{q}{p}}
\end{array}$$
If $q>p$, by Hölder’s inequality we have $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)\ \lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\\
&2^{qj}\sum\limits_{|j-j'|\leq2}t^{\frac{(p-1-p\mu)q}{p}}\Big[\int^{t}_{0}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}s^{(\frac{1}{2\beta}-1+\mu)p}ds\Big]^{\frac{q}{p}}.
\end{array}$$ Because $2^{2j\beta}t\leq1$, one has $2^{qj}\leq t^{-\frac{q}{2\beta}}$. This in turn gives $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)\ \lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}2^{qj}\\
&\sum\limits_{|j-j'|\leq2}t^{\frac{q}{2\beta}}t^{-p(\frac{1}{2\beta}-1+\mu)-1}
\Big[\int^{t}_{0}(\sum\limits_{\epsilon',k'}|u^{\epsilon'}_{j',k'}(s)|^{p})^{\frac{q}{p}}
s^{(\frac{1}{2\beta}-1+\mu)p}ds\Big]\\
\lesssim&\|u\|_{\B^{\gamma_{1},\gamma_{2},II}_{p,q}}.
\end{array}$$
If $q\leq p$, we have $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)\ \lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r\leq
j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\Big[\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{|j-j'|\leq2}2^{jp}
\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}\Big(\int^{t}_{0}|u^{\epsilon'}_{j',k'}(s)|s^{\frac{1}{2\beta}-1}ds\Big)^{p}\Big]^{\frac{q}{p}}.
\end{array}$$ Because $t\leq2^{-2j\beta}$ and $0<m'<\min\{1, \frac{p}{2\beta}\}$, Hölder’s inequality implies $$\begin{array}{rl}
II^{4}_{a, Q_{r}}(t)
&\lesssim|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}\\
&\quad \sum\limits_{|j-j'|\leq2}\Big[2^{pj}\int^{2^{-2j\beta}}_{0}
\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}s^{m'}2^{-2j'\beta(\frac{p}{2\beta}-m')}\frac{ds}{s}\Big]^{\frac{q}{p}}\\
&\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
$$\begin{array}{rl}
&\text{Case 7.8:}\ \ (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}a^{\epsilon,4}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2},IV}_{p,q, m'}.
\end{array}$$
Similarly, let $$\begin{array}{rl}
IV^{4,m'}_{a,Q_{r}}=&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{2^{-2j\beta}}_{0}\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}|a^{\epsilon,4}_{j,k}(t)|^{p}(t2^{2j\beta})^{m'}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ Choosing a constant $\mu$ such that $m'+p-1-\frac{p}{2\beta}\leq
p\mu<p-1$, we use (\[eqn:est1\]) and Hölder’s inequality to get $$\begin{array}{rl}
IV^{4,m'}_{a,Q_{r}}
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{2^{-2j\beta}}_{0}2^{pj}\sum\limits_{|j-j'|\leq2}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\\
&\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}t^{p-1-\mu
p}\Big(\int^{t}_{0}|u^{\epsilon'}_{j',k'}(s)|^{p}s^{(\frac{1}{2\beta}-1+\mu)p}ds\Big)(t2^{2j\beta})^{m'}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, by the $\alpha$-triangle inequality we have $$\begin{array}{rl}
IV^{4,m'}_{a,Q_{r}}
\lesssim&\ \sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}
\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\sum\limits_{|j-j'|\leq2}2^{qj} \\
&\Big[\int^{2^{-2j\beta}}_{0}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}s^{\frac{p}{2\beta}-p+p\mu}\Big(\int^{2^{-2j\beta}}_{s}2^{2mj'\beta}
t^{m'+p-\mu p}\frac{dt}{t^{2}}\Big)ds\Big]^{\frac{q}{p}}.
\end{array}$$
Because $s2^{2j'\beta}\leq1$ and $m'+p-1-\frac{p}{2\beta}\leq
p\mu$, we obtain $$\begin{array}{rl}
IV^{4,m'}_{a,Q_{r}}\lesssim&\!\!\!
\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}
\sum\limits_{j'\geq-\log_{2}r_{w}}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\Big[\int^{2^{-2(j'-2)\beta}}_{0}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{\frac{p}{2\beta}+1-p+p\mu}\frac{ds}{s}\Big]^{\frac{q}{p}}\\
\lesssim&\!\!\!\|u\|_{\B^{\gamma_{1},\gamma_{2},III}_{p,q,m}}+\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
If $q>p$, by Hölder’s inequality, we have $$\begin{array}{rl}
IV^{4,m'}_{a,Q_{r}}\lesssim&\!\!\!
\sum\limits_{w\in\mathbb{Z}^{n}}|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}(1+|w|)^{-N}
\sum\limits_{j\geq-\log_{2}r_{w}}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\Big[\int^{2^{-2(j'-2)\beta}}_{0}\sum\limits_{(\epsilon',k')\in
S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{\frac{p}{2\beta}+1-p+p\mu}\frac{ds}{s}\Big]^{\frac{q}{p}}\\
\lesssim&\!\!\!\|u\|_{\B^{\gamma_{1},\gamma_{2},III}_{p,q,m}}+\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
Proof of Lemma \[lem54\] {#sec8}
========================
The setting (i)
---------------
For $i=1,2,3$, define $$\begin{array}{rl}
I^{m,i}_{b, Q_{r}}(t)=&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\sum\limits_{(\epsilon,k)\in S^{j}_{r}}|b^{\epsilon,i}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}\Big]^{\frac{q}{p}}.
\end{array}$$ We divide the proof into three cases: $$\nonumber\begin{cases}
\text{ Case 8.1: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in
\Lambda_{n}}b^{\epsilon,1}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},I}_{p,q,m};\\
\text{ Case 8.2: } (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in
\Lambda_{n}}b^{\epsilon,2}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},I}_{p,q,m};\\
\text{Case 8.3: }(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in
\Lambda_{n}}b^{\epsilon,3}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \ \B^{\gamma_{1},\gamma_{2},I}_{p,q,m}.
\end{cases}$$ We only demonstrate Case 8.1 and omit the proofs of Cases 8.2 and 8.3 due to their similarity.
Assume first $1<p\le 2$. If $s2^{2j'\beta}\leq 1$, then $$v\in\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}\subset
\B^{\gamma_{1}-\gamma_{2}}_{\frac{m}{p},\infty}\Rightarrow |v^{\epsilon''}_{j',k''}(s)|\lesssim
2^{-(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}.$$ Because $v\in\B^{\gamma_{1},\gamma_{2},II}_{p,q}$, we have $$\begin{array}{rl}
\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{
j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}\lesssim
2^{-nj+p\gamma_{2}j}2^{-pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}.
\end{array}$$ By Lemma \[le7\] (i), (\[eqn:est3\]) and Hölder’s inequality we get $$\begin{array}{rl}
&|b^{\epsilon,1}_{j,k}(t)|\\
&\quad\lesssim\ 2^{\frac{nj}{2}+j}e^{-ct2^{2j\beta}}2^{j(-n+p\gamma_{2})(1-\frac{1}{p})}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{j\leq
j'+2}2^{-(2-p)(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}\\
&\quad 2^{-j'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})(p-1)}2^{-2j'\beta(1-\frac{1}{p})}
\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}.
\end{array}$$ This in turn yields $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)\lesssim&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\frac{n}{2}-\frac{q}{p})}2^{\frac{qnj}{2}+qj}2^{qj(-n+p\gamma_{2})(1-\frac{1}{p})}\\
&\Big\{\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}\Big[
\sum\limits_{j<j'+2}2^{-(2-p)(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}
2^{-j'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})(p-1)}\\
&2^{-2j'\beta(1-\frac{1}{p})}\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
j,k}}
|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}\Big]^{p}\Big\}^{\frac{q}{p}}.
\end{array}$$ Since $0<s<2^{-1-2j'\beta}$ and $m'<1$, one has $(s2^{2j'\beta})^{1-m'}\lesssim1$. This implies $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)
&\lesssim|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\frac{n}{2}-\frac{q}{p})}2^{\frac{qnj}{2}+qj}2^{qj(-n+p\gamma_{2})(1-\frac{1}{p})}\\
&\quad\Big\{\sum\limits_{w\in\mathbb{Z}^{n}}\frac{1}{(1+|w|)^{N}}\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}\sum\limits_{j<j'+2}2^{\delta(j'-j)}2^{-p(2-p)(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}
2^{-pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})(p-1)}\\
&\quad\quad2^{-2pj'\beta}\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big)\Big\}^{\frac{q}{p}}.\\
\end{array}$$
If $q\leq p$, for $p\gamma_{2}+2-2\beta>0$ take $0<\delta<p(p\gamma_{2}+2-2\beta)$. By the $\alpha$-triangle inequality, we get $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)\lesssim&\!\!\!|Q|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}
\sum\limits_{j\geq\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}\sum\limits_{j\leq
j'+2}2^{q(j'-j)[\frac{\delta}{p}-(p\gamma_{2}+2-2\beta)]}\\
&2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big)^{\frac{q}{p}}
\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}.
\end{array}$$
If $q>p$, by Hölder’s inequality we get $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)
&\lesssim\sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{N}}
\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}\Big[\sum\limits_{j\leq
j'+2}2^{(j'-j)[\delta-(p\gamma_{2}+2-2\beta)]}\Big]^{\frac{q-p}{p}}\\
&\quad \times\Big\{\sum\limits_{j\leq
j'+2}2^{(j'-j)[\delta-(p\gamma_{2}+2-2\beta)]}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\quad\quad\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{
r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big)^{\frac{q}{p}}\Big\}
\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}.
\end{array}$$
Assume then $2<p<\infty$. Because $0<s<2^{-1-2j'\beta}$, $v\in\B^{\gamma_{1},\gamma_{2},II}_{p,q}$ implies $$\begin{array}{rl}
&\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}ds\Big)^{\frac{1}{p}}\lesssim
2^{\gamma_{2}j-\frac{nj}{p}-j'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}2^{-\frac{2j'\beta}{p}}.
\end{array}$$ In view of Lemma \[le7\] (i), Hölder’s inequality and (\[eqn:est6\]) we achieve $$\begin{array}{rl}
|b^{\epsilon,1}_{j,k}(t)|
\lesssim&2^{\frac{nj}{2}+j}e^{-ct2^{2j\beta}}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{j<j'+2}
2^{n(j'-j)(1-\frac{2}{p})}2^{-2j'\beta(1-\frac{2}{p})}\\
&2^{\gamma_{2}j-\frac{nj}{p}}2^{-j'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}2^{-\frac{2j'\beta}{p}}
\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}.
\end{array}$$ Using the above estimate we obtain $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)
\lesssim&\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}2^{\frac{qnj}{2}+qj}2^{qj(\gamma_{2}-\frac{n}{p})}\\
&\Big\{\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}e^{-cpt2^{2j\beta}}(t2^{2j\beta})^{m}\sum\limits_{w\in\mathbb{Z}^{n}}\frac{1}{(1+|w|)^{N}}\Big[
\sum\limits_{j<j'+2}
2^{n(j'-j)(1-\frac{2}{p})}2^{-j'\beta(2-\frac{4}{p})}\\
&2^{-j'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})} 2^{-\frac{2j'\beta}{p}}
\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}\Big]^{p}\Big\}^{\frac{q}{p}}.
\end{array}$$ Notice that $0<s<2^{-1-2j'\beta}$ and $m'<1$. For any $0<\delta<(\gamma_{1}+\gamma_{2}+1)$ we have $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
2^{qj(\gamma_{1}+\gamma_{2}+1+n-\frac{2n}{p})}\\
&\Big\{\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{j<j'+2}2^{(\delta+pn-2n)(j'-j)}
2^{-2j'\beta(p-2)}2^{-pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&2^{-4j'\beta}\Big[\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]\Big\}^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, the $\alpha$-triangle inequality implies $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)\lesssim&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}
\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}\sum\limits_{j<j'+2}2^{q(\frac{\delta}{p}-\gamma_{1}-\gamma_{2}-1)(j'-j)}\\
&2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\Big[\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}
\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
If $q>p$, by Hölder’s inequality we obtain $$\begin{array}{rl}
I^{m,1}_{b, Q_{r}}(t)
&\lesssim\ \sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{N}}
\sum\limits_{j\geq\max\{-\log_{2}r,-\frac{\log_{2}t}{2\beta}\}}
\Big(\sum\limits_{j<j'+2}2^{p(j'-j)(\frac{\delta}{p}-\gamma_{1}-\gamma_{2}-1)}\Big)^{\frac{q-p}{p}}\\
&\quad\Big\{\sum\limits_{j<j'+2}2^{(j'-j)[\delta-p(\gamma_{1}+\gamma_{2}+1)]}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\quad\quad\Big[\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\Big\}\lesssim
\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
The setting (ii) {#sec9}
----------------
For $i=1,2,3$, define $$\begin{array}{rl}
III^{m,i}_{b,Q_{r}}=&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{r^{2\beta}}_{2^{-2j\beta}}\sum\limits_{(\epsilon,k)\in
S^{j}_{r}}|b^{\epsilon,i}_{j,k}(t)|^{p}(t2^{2j\beta})^{m}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ In a way similar to the setting (i) of Subsection 8.1, we only need to handle the case $1<p\leq2$. In this situation the argument is split into three cases. $$\nonumber\begin{cases}
\text{ Case 8.4: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,1}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, III}_{p,q,m};\\
\text{ Case 8.5: } (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,2}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, III}_{p,q,m};\\
\text{Case 8.6: }(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,3}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, III}_{p,q,m}.
\end{cases}$$ It suffices to treat Case 8.4 since Cases 8.5 and 8.6 can be verified similarly.
Because $u$ and $v$ both belong to $\B^{\gamma_{1}-\gamma_{2}}_{\frac{m}{p},\infty}$, for $s2^{2j'\beta}\leq1$ we have $|v^{\epsilon''}_{j',k''}(s)|\lesssim2^{-(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}.$ Owing to $v\in\B^{\gamma_{1},\gamma_{2},II}_{p,q}$ we also have $$\begin{array}{rl}
&\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}\lesssim2^{p\gamma_{2}j-nj}2^{-pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\ \ \hbox{for\ a\ fixed}\ \ j'.\end{array}$$ This, along with Hölder’s inequality, (\[eqn:est3\]) and Lemma \[le7\] (i), yields $$\begin{array}{rl}
|b^{\epsilon,1}_{j,k}(t)|
&\lesssim\ 2^{[\frac{n}{p}-\frac{n}{2}+(p-1)\gamma_{2}+1]j}e^{-ct2^{2j\beta}}\sum\limits_{j\leq
j'+2}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\\
&\quad 2^{[\frac{n}{2}-\frac{n}{p}+\frac{2\beta}{p}-(p-1)\gamma_{2}-1]j'}\Big(\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}.
\end{array}$$ The last estimate for $|b^{\epsilon,1}_{j,k}(t)|$ and (\[eqn:est5\]) are used to derive $$\begin{array}{rl}
III^{m,1}_{b,Q_{r}}
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(p\gamma_{2}+2-2\beta)}\Big\{\sum\limits_{j\leq
j'+2}2^{\delta(j'-j)}2^{-pj'(p\gamma_{2}+2-2\beta)}\\
&2^{pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big\}^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, by the $\alpha$-triangle inequality we have $$\begin{array}{rl}
III^{m,1}_{b,Q_{r}}\lesssim&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}\sum\limits_{j\leq
j'+2}2^{q(j-j')(p\gamma_{2}+2-2\beta-\frac{\delta}{p})}2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}\\
&\Big[\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}.
\end{array}$$ Changing the order of $j$ and $j'$, we find $III^{m,1}_{b,Q_{r}}\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2},
IV}_{p,q,m'}}$.
If $q>p$, by Hölder’s inequality we get $$\begin{array}{rl}
III^{m,1}_{b,Q_{r}}
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}
\Big(\sum\limits_{j\leq
j'+2}2^{p(j-j')(p\gamma_{2}+2-2\beta-\delta)}\Big)^{\frac{q-p}{p}}\\
&\Big\{\sum\limits_{j\leq
j'+2}2^{q(j-j')(p\gamma_{2}+2-2\beta-\delta)}\Big[\int^{2^{-1-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\Big\}.
\end{array}$$ Upon taking $0<\delta<p\gamma_{2}+2-2\beta$ and changing the order of $j$ and $j'$, we reach $III^{m,1}_{b,Q_{r}}\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2},
IV}_{p,q,m'}}$.
The setting (iii)
-----------------
As in Subsection 8.2, it is sufficient to deal with the case $1<p\leq2$ below. For $i=4,5$, define $$\begin{array}{rl}
II^{i}_{b,Q_{r}}(t)=&\!\!\!|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{-\log_{2}r<j<-\frac{\log_{2}t}{2\beta}}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big(\sum\limits_{(\epsilon,k)\in S^{j}_{r}}|b^{\epsilon,i}_{j,k}(t)|^{p}\Big)^{\frac{q}{p}}.
\end{array}$$ We divide the argument into two cases. $$\nonumber\begin{cases}
\text{ Case 8.7: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,4}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, II}_{p,q};\\
\text{ Case 8.8: } (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,5}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, II}_{p,q}.\\
\end{cases}$$ In view of the settings (i) & (ii) above, we only give a proof of Case 8.7 since the proof of Case 8.8 is similar. Since $v\in\B^{\gamma_{1},\gamma_{2}}_{p,q,m,m'}$, one gets $|v^{\epsilon''}_{j',k''}(s)|\lesssim2^{-(\frac{n}{2}+\gamma_{1}-\gamma_{2})j'}.$ For $0<s<2^{-2j'\beta}$, one has $$v\in\B^{\gamma_{1},\gamma_{2},II}_{p,q}\Rightarrow
\begin{array}{rl}
&\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}ds\lesssim2^{-nj+p\gamma_{2}j}2^{-p(\gamma_{1}+\frac{n}{2}-\frac{n}{p})j'}2^{-2j'\beta}.
\end{array}$$ By (\[eqn:est3\]), Hölder’s inequality and Lemma \[le7\] (ii) we get $$\begin{array}{rl}
|b^{\epsilon,4}_{j,k}(t)|
&\lesssim 2^{-\frac{nj}{2}+j+\frac{nj}{p}+(p-1)\gamma_{2}j}\sum\limits_{j<j'+2}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}
2^{-(\frac{n}{2}+\gamma_{1})j'}2^{\frac{(p-1)nj'}{p}}\\
&\quad 2^{(2-p)\gamma_{2}j'}2^{-\frac{2\beta(p-1)j'}{p}}
\Big(\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}.
\end{array}$$ By (\[eqn:est5\]), $s2^{2j'\beta}\leq 1$ and $m'<1$, we similarly obtain that for $\delta>0$, $$\begin{array}{rl}
II^{4}_{b,Q_{r}}(t)
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}
\sum\limits_{-\log_{2}r<j<-\frac{\log_{2}t}{2\beta}}2^{qj(2-2\beta+p\gamma_{2})}
\Big[\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\sum\limits_{j<j'+2}2^{\delta(j'-j)}\\
&2^{-pj'(2-2\beta+p\gamma_{2})}
2^{pj'[\frac{n}{2}+\gamma_{1}-\frac{n}{p}]}
\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, we apply the $\alpha$-triangle inequality to obtain $$\begin{array}{rl}
II^{4}_{b, Q_{r}}(t)\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-\frac{qN}{p}}
\sum\limits_{-\log_{2}r<j<-\frac{\log_{2}t}{2\beta}}
\sum\limits_{j<j'+2}2^{q(j-j')(2-2\beta+p\gamma_{2}-\frac{\delta}{p})}\\
&2^{qj'[\frac{n}{2}+\gamma_{1}-\frac{n}{p}]}
\Big[\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}.
\end{array}$$ If $0<\frac{\delta}{p}<2(1-\beta)+p\gamma_{2}$, then $II^{4}_{b,Q_{r}}(t)\lesssim\|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}$ follows from changing the order of $j$ and $j'$.
If $q>p$, by Hölder’s inequality we get $$\begin{array}{rl}
II^{4}_{b, Q_{r}}(t)
\lesssim&\ \sum\limits_{w\in\mathbb{Z}^{n}}\frac{|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}}{(1+|w|)^{N}}
\sum\limits_{j\geq-\log_{2}r}\Big(\sum\limits_{j<j'+2}2^{(j-j')[2p(1-\beta)+p^{2}\gamma_{2}-\delta]}\Big)^{\frac{q-p}{p}}\\
&\times\ \Big\{\sum\limits_{j<j'+2}2^{(j-j')[2p(1-\beta)+p^{2}\gamma_{2}-\delta]}2^{qj'(\frac{n}{2}+\gamma_{1}-\frac{n}{p})}\times\\
&\quad \Big[\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\Big\}
\lesssim \|u\|_{\B^{\gamma_{1},\gamma_{2},IV}_{p,q,m'}}.
\end{array}$$
The setting (iv) {#sec11}
----------------
Again, we only consider the situation $1<p\leq2$ in the sequel. For $i=4,5$, define $$\begin{array}{rl}
IV^{m',i}_{b,Q_{r}}=&\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{2^{-2j\beta}}_{0}\sum\limits_{(\epsilon,k)\in S^{j}_{r}}|b^{\epsilon,i}_{j,k}(t)|^{p}(t2^{2j\beta})^{m'}\frac{dt}{t}\Big]^{\frac{q}{p}}.
\end{array}$$ Due to their similarity, we only check the first one of the following two cases: $$\nonumber\begin{cases}
\text{ Case 8.9: }
(t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,4}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'};\\
\text{ Case 8.10: } (t,x)\mapsto\sum\limits_{(\epsilon,j,k)\in\Lambda_{n}}b^{\epsilon,5}_{j,k}(t)\Phi^{\epsilon}_{j,k}(x)\ \ \hbox{is\ in}\ \
\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}.\\
\end{cases}$$
Because $0<s<2^{-2j'\beta}$, one gets $|v^{\epsilon''}_{j',k''}(s)|\lesssim2^{-\frac{nj'}{2}}2^{-j'(\gamma_{1}-\gamma_{2})}$ and $$\begin{array}{rl}
\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon'',k'')\in S^{w',j'}_{j,k}}|v^{\epsilon''}_{j',k''}(s)|^{p}ds\lesssim2^{-nj+p\gamma_{2}j}2^{-p(\frac{n}{2}+\gamma_{1}-\frac{n}{p})j'}2^{-2j'\beta}.
\end{array}$$ By using (\[eqn:est3\]), Lemma \[le7\] (ii) and Hölder’s inequality we get $$\begin{array}{rl}
|b^{\epsilon,4}_{j,k}(t)|
&\lesssim2^{-\frac{nj}{2}+j+\frac{nj}{p}+(p-1)\gamma_{2}j}\sum\limits_{w\in\mathbb{Z}^{n}}
(1+|w|)^{-N}\sum\limits_{j<j'+2}2^{-(\frac{n}{2}+\gamma_{1})j'}\\
&2^{\frac{nj'(p-1)}{p}+(2-p)\gamma_{2}j'-\frac{2j'\beta(p-1)}{p}}\Big(\int^{2^{-2j'\beta}}_{0}
\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}.
\end{array}$$ For $t2^{2j\beta}<1$, we then get $$\begin{array}{rl}
IV^{m',4}_{b,Q_{r}}
\lesssim&\ |Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big\{\int^{2^{-2j\beta}}_{0}\sum\limits_{(\epsilon,k)\in S^{j}_{r}}2^{-\frac{pnj}{2}+pj+nj+p(p-1)\gamma_{2}j}\\
&\quad\Big[\sum\limits_{w\in\mathbb{Z}^{n}}
(1+|w|)^{-N}\sum\limits_{j<j'+2}2^{-(\frac{n}{2}+\gamma_{1})j'}
2^{\frac{nj'(p-1)}{p}+(2-p)\gamma_{2}j'}\\
&\quad2^{-\frac{2j'\beta(p-1)}{p}}\Big(\int^{2^{-2j'\beta}}_{0}
\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)^{\frac{1}{p}}\Big]^{p}(t2^{2j\beta})^{m'}\frac{dt}{t}\Big\}^{\frac{q}{p}}.
\end{array}$$ Furthermore, we first apply Hölder’s inequality to $w$, and then employ (\[eqn:est5\]) to get that for $\delta>0$, $$\begin{array}{rl}
IV^{m',4}_{b,Q_{r}}
\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+1+(p-1)\gamma_{2})}2^{-\frac{q\delta
j}{p}}\\
&\Big\{\int^{2^{-2j\beta}}_{0}\sum\limits_{(\epsilon,k)\in S^{j}_{r}}
\sum\limits_{j<j'+2}2^{\delta j'}
2^{-p(\frac{n}{2}+\gamma_{1})j'}2^{(p-1)nj'+p(2-p)\gamma_{2}j'-2\beta(p-1)j'}\\
&\sum\limits_{w\in\mathbb{Z}^{n}}\frac{1}{(1+|w|)^{N}}\Big(\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{j,k}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)(t2^{2j'\beta})^{m'}\frac{dt}{t}\Big\}^{\frac{q}{p}}.
\end{array}$$ Because $\int^{2^{-2j\beta}}_{0}(t2^{2j\beta})^{m}\frac{dt}{t}\lesssim1$, we obtain $$\begin{array}{rl}
IV^{m',4}_{b,Q_{r}}\lesssim&|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}\sum\limits_{j\geq-\log_{2}r}2^{qj(\gamma_{1}+1+(p-1)\gamma_{2})}2^{-\frac{q\delta
j}{p}}\\
&\quad\Big[\sum\limits_{j<j'+2}2^{\delta j'}
2^{pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})} 2^{2\beta
j'}2^{-pj'(2-2\beta+p\gamma_{2})}\\
&\quad\sum\limits_{w\in\mathbb{Z}^{n}}(1+|w|)^{-N}\Big(\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)\Big]^{\frac{q}{p}}.
\end{array}$$
If $q\leq p$, noticing $p\gamma_{2}+2-2\beta>0$ and taking $0<\delta<p(\gamma_{1}+1+(p-1)\gamma_{2})$ we obtain $$\begin{array}{rl}
IV^{m',4}_{b,Q_{r}}
\lesssim&\!\!\!\sum\limits_{w\in\mathbb{Z}^{n}}|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}(1+|w|)^{-\frac{qN}{p}}
\sum\limits_{j\geq-\log_{2}r}\sum\limits_{j<j'+2}2^{q(\frac{\delta}{p}-p\gamma_{2}-2+2\beta)(j'-j)}\\
&\quad\quad 2^{qj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}
\Big[\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}(s2^{2j'\beta})^{m'}\frac{ds}{s}\Big]^{\frac{q}{p}}\lesssim \|u\|_{\B^{\gamma_{1},\gamma_{2},
IV}_{p,q,m'}},
\end{array}$$ where we have actually used the inequality $$(s2^{2j'\beta})^{1-m'}\leq1\ \ \hbox{for}\ \
s\leq2^{-2j'\beta}\ \ \&\ \ 1-m'>0,$$ and then changed the order of $j$ and $j'$.
If $q>p$, by Hölder’s inequality we have $$\begin{array}{rl}
IV^{m',4}_{b,Q_{r}}
&\lesssim\ \sum\limits_{w\in\mathbb{Z}^{n}}|Q_{r}|^{\frac{q\gamma_{2}}{n}-\frac{q}{p}}(1+|w|)^{-N}
\sum\limits_{j\geq-\log_{2}r}
\Big[\sum\limits_{j<j'+2}2^{p(2-2\beta+p\gamma_{2}-\frac{\delta}{p})(j-j')}\\
&\quad2^{pj'(\gamma_{1}+\frac{n}{2}-\frac{n}{p})}2^{2\beta j'}
\Big(\int^{2^{-2j'\beta}}_{0}\sum\limits_{(\epsilon',k')\in S^{w,j'}_{r}}|u^{\epsilon'}_{j',k'}(s)|^{p}ds\Big)\Big]^{\frac{q}{p}}
\lesssim \|u\|_{\B^{\gamma_{1},\gamma_{2}, IV}_{p,q,m'}}.
\end{array}$$
[^1]: PTL’s research was supported by: NSFC No. 11171203 and No.11201280; New Teacher’s Fund for Doctor Stations, Ministry of Education No.20114402120003; Guangdong Natural Science Foundation S2011040004131; Foundation for Distinguished Young Talents in Higher Education of Guangdong, China, LYM11063. JX was supported by NSERC of Canada and URP of Memorial University, Canada
---
abstract: 'A preliminary group classification of the class of 2D nonlinear heat equations $u_t=f(x,y,u,u_x,u_y)(u_{xx}+u_{yy})$, where $f$ is an arbitrary smooth function of the variables $x,y,u,u_x$ and $u_y$, is given using the Lie method. The paper is one of the few applications of an algebraic approach to the problem of group classification: the method of preliminary group classification.'
address: 'School of Mathematics, Iran University of Science and Technology, Narmak, Tehran 1684613114, Iran.'
author:
- 'M. Nadjafikhah'
- 'R. Bakhshandeh-Chamazkoti'
title: |
Preliminarily group classification of a class of\
2D nonlinear heat equations
---
$2$D Nonlinear heat equation, Optimal system, Preliminary group classification.
Introduction
============
It is well known that the symmetry group method plays an important role in the analysis of differential equations. The history of group classification methods goes back to Sophus Lie. The first paper on this subject is [@[1]], where Lie proves that a linear two-dimensional second-order PDE may admit at most a three-parameter invariance group (apart from the trivial infinite parameter symmetry group, which is due to linearity). He computed the maximal invariance group of the one-dimensional heat conductivity equation and utilized this symmetry to construct its explicit solutions. In modern terms, he performed symmetry reduction of the heat equation. Nowadays symmetry reduction is one of the most powerful tools for solving nonlinear partial differential equations (PDEs). Recently, there have been several generalizations of the classical Lie group method for symmetry reductions. Ovsiannikov [@[2]] developed the method of partially invariant solutions. His approach is based on the concept of an equivalence group, which is a Lie transformation group acting in the extended space of independent variables, functions and their derivatives, and preserving the class of partial differential equations under study. In an attempt to study nonlinear effects, Saied and Hussain [@[3]] gave some new similarity solutions of the (1+1)-nonlinear heat equation. Later Clarkson and Mansfield [@[4]] studied classical and nonclassical symmetries of the (1+1)-heat equation and gave new reductions for the linear heat equation and a catalogue of closed-form solutions for a special choice of the function $f(x,y,u,u_x,u_y)$ that appears in their model. In higher dimensions, Servo [@[5]] gave some conditional symmetries for a nonlinear heat equation while Goard et al. [@[6]] studied the nonlinear heat equation in the degenerate case. Nonlinear heat equations in one or higher dimensions are also studied in the literature using both symmetry and other methods [@[7]; @[8]]. There are a number of papers studying (1+1)-nonlinear heat equations from the point of view of the Lie symmetry method. The (2+1)-dimensional nonlinear heat equations $$\begin{aligned}
u_t=f(u)(u_{xx}+u_{yy}),\label{eq:1}\end{aligned}$$ are investigated in [@[9]], and in the present paper we study $$\begin{aligned}
u_t=f(x,y,u,u_x,u_y)(u_{xx}+u_{yy}),\label{eq:2}\end{aligned}$$ where $f$ is an arbitrary smooth function of the variables $x,y,u,u_x$ and $u_y$. Similarity techniques are applied in [@[10]; @[11]; @[12]; @[13]] for (2+1)-dimensional wave equations.
Symmetry Methods
================
Let a partial differential equation contain $p$ independent variables and $q$ dependent variables. Consider the one-parameter Lie group of transformations $$\begin{aligned}
x_i\longmapsto x_i+\epsilon\xi^i(x,u)+O(\epsilon^2);\hspace{1cm}
u_{\alpha}\longmapsto
u_{\alpha}+\epsilon\varphi^{\alpha}(x,u)+O(\epsilon^2),\label{eq:3}\end{aligned}$$ where $i=1,\ldots,p$ and $\alpha=1,\ldots,q$. The action of the Lie group can be recovered from that of its associated infinitesimal generators. We consider the general vector field $$\begin{aligned}
X=\sum_{i=1}^p\xi^i(x,u)\frac{{\rm \partial}}{{{\rm \partial}}x_i}+
\sum_{\alpha=1}^q\varphi^{\alpha}(x,u)\frac{{\rm \partial}}{{{\rm \partial}}u^{\alpha}}.\label{eq:4}\end{aligned}$$ on the space of independent and dependent variables. For Eq. (\[eq:2\]), the symmetry generator of the form (\[eq:4\]) is given by $$\begin{aligned}
X=\xi^1(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}x}+\xi^2(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}y}+
\xi^3(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}t}+\varphi(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}u}.\label{eq:6}\end{aligned}$$ The second prolongation of $X$ is the vector field $$\begin{aligned}
\nonumber
X^{(2)}=X+\varphi^x\frac{{\rm \partial}}{{{\rm \partial}}u_x}+\varphi^y\frac{{\rm \partial}}{{{\rm \partial}}u_y}+\varphi^t\frac{{\rm \partial}}{{{\rm \partial}}u_t}+
\varphi^{xx}\frac{{\rm \partial}}{{{\rm \partial}}u_{xx}}+\varphi^{xy}\frac{{\rm \partial}}{{{\rm \partial}}u_{xy}}
+\varphi^{xt}\frac{{\rm \partial}}{{{\rm \partial}}u_{xt}}
+\varphi^{yy}\frac{{\rm \partial}}{{{\rm \partial}}u_{yy}}
+\varphi^{yt}\frac{{\rm \partial}}{{{\rm \partial}}u_{yt}}+\varphi^{tt}\frac{{\rm \partial}}{{{\rm \partial}}u_{tt}},\\\label{eq:7}\end{aligned}$$ whose coefficients are obtained from the following formulas $$\begin{aligned}
\label{eq:8}
&&\varphi^x={D}_x\varphi-u_x{D}_x\xi^1-u_y{D}_x\xi^2-u_t{D}_x\xi^3,\hspace{2cm}
\varphi^y={D}_y\varphi-u_x{D}_y\xi^1-u_y{D}_y\xi^2-u_t{D}_y\xi^3,\\\nonumber
&&\varphi^t={D}_t\varphi-u_x{D}_t\xi^1-u_y{D}_t\xi^2-u_t{\rm
D}_t\xi^3,\hspace{2.1cm}
\varphi^{xx}={D}_x\varphi^x-u_{xx}{D}_x\xi^1-u_{xy}{D}_x\xi^2-u_{xt}{D}_x\xi^3\\\nonumber
&&\hspace{-2mm}\varphi^{yy}={D}_y\varphi^y-u_{xy}{D}_y\xi^1-u_{yy}{D}_y\xi^2-u_{yt}{D}_y\xi^3\hspace{1.58cm}
\varphi^{tt}={D}_t\varphi^t-u_{xt}{D}_t\xi^1-u_{yt}{D}_t\xi^2-u_{tt}{D}_t\xi^3\\\nonumber
&&\hspace{-2mm}\varphi^{yt}={D}_y\varphi^t-u_{xy}{D}_y\xi^1-u_{yy}{D}_y\xi^2-u_{yt}{D}_y\xi^3\hspace{1.58cm}
\varphi^{xt}={D}_t\varphi^t-u_{xt}{D}_t\xi^1-u_{yt}{D}_t\xi^2-u_{tt}{D}_t\xi^3\end{aligned}$$ where the operators $D_x$, $D_y$ and $D_t$ denote the total derivatives with respect to $x,y$ and $t$: $$\begin{aligned}
\nonumber
D_x&=&\frac{{\rm \partial}}{{{\rm \partial}}x}+u_x\frac{{\rm \partial}}{{{\rm \partial}}u}+u_{xx}\frac{{\rm \partial}}{{{\rm \partial}}u_x}+u_{xy}\frac{{\rm \partial}}{{{\rm \partial}}u_y}+
u_{xt}\frac{{\rm \partial}}{{{\rm \partial}}u_t}+\ldots\\
D_y&=&\frac{{\rm \partial}}{{{\rm \partial}}y}+u_y\frac{{\rm \partial}}{{{\rm \partial}}u}+u_{yy}\frac{{\rm \partial}}{{{\rm \partial}}u_y}+u_{yx}\frac{{\rm \partial}}{{{\rm \partial}}u_x}+
u_{yt}\frac{{\rm \partial}}{{{\rm \partial}}u_t}+\ldots\\\label{eq:9}
D_t&=&\frac{{\rm \partial}}{{{\rm \partial}}t}+u_t\frac{{\rm \partial}}{{{\rm \partial}}u}+u_{tt}\frac{{\rm \partial}}{{{\rm \partial}}u_t}+u_{tx}\frac{{\rm \partial}}{{{\rm \partial}}u_x}+
u_{ty}\frac{{\rm \partial}}{{{\rm \partial}}u_y}+\ldots\nonumber\end{aligned}$$ By Theorem 6.5 in [@[14]], $X^{(2)}[u_t-f(x,y,u,u_x,u_y)(u_{xx}+u_{yy})]|_{(6)}=0$ whenever $$\begin{aligned}
u_t-f(x,y,u,u_x,u_y)(u_{xx}+u_{yy})=0.\label{eq:10}\end{aligned}$$ Since $$X^{(2)}[u_t-f(x,y,u,u_x,u_y)(u_{xx}+u_{yy})]=\varphi^t-
(f_x\xi^1+f_y\xi^2+f_u\varphi+f_{u_x}\varphi^x+f_{u_y}\varphi^y)
(u_{xx}+u_{yy})-f(x,y,u,u_x,u_y)(\varphi^{xx}+\varphi^{yy}),$$ we obtain the following determining equation: $$\begin{aligned}
\varphi^t-(f_x\xi^1+f_y\xi^2+f_u\varphi+f_{u_x}\varphi^x+f_{u_y}\varphi^y)
(u_{xx}+u_{yy})
-f(x,y,u,u_x,u_y)(\varphi^{xx}+\varphi^{yy})=0.\label{eq:11}\end{aligned}$$ In the case of arbitrary $f$ it follows that $$\begin{aligned}
\xi^1=\xi^2=\varphi=0,\label{eq:12}\end{aligned}$$ or $$\begin{aligned}
\xi^1=\xi^2=\varphi=0,\;\;\;\;\;\xi^3=C.\label{eq:13}\end{aligned}$$ Therefore, for arbitrary $f(x,y,u,u_x,u_y)$ Eq. (\[eq:1\]) admits the one-dimensional Lie algebra ${{\goth g}}_1$, with the basis $$\begin{aligned}
X_1=\frac{{\rm \partial}}{{{\rm \partial}}t}.\label{eq:14}\end{aligned}$$ ${{\goth g}}_1$ is called the principal Lie algebra for Eq. (\[eq:1\]). So, the remaining part of the group classification is to specify the coefficient $f$ such that Eq. (\[eq:1\]) admits an extension of the principal algebra ${{\goth g}}_1$. Usually, the group classification is obtained by inspecting the determining equation. In our case, however, solving the determining equation (\[eq:11\]) completely would be a wasteful venture. Therefore, we do not solve the determining equation; instead, we obtain a partial group classification of Eq. (\[eq:1\]) via the so-called method of preliminary group classification. This method was suggested in [@[10]] and is applied when an equivalence group is generated by a finite-dimensional Lie algebra ${{\goth g}}_{{\mathscr E}}$. The essential part of the method is the classification of all nonsimilar subalgebras of ${{\goth g}}_{{\mathscr E}}$. Actually, the application of the method is simple and effective when the classification is based on a finite-dimensional equivalence algebra ${{\goth g}}_{{\mathscr E}}$.
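The prolongation formulas (\[eq:8\])–(\[eq:9\]) and the determining equation (\[eq:11\]) lend themselves to symbolic computation. The following short SymPy sketch (ours, purely illustrative and not part of the classification) computes the coefficient $\varphi^{x}$ of (\[eq:8\]); since $u$ is declared as a function of $(x,y,t)$, ordinary differentiation with respect to $x$ already acts as the total derivative $D_x$.

```python
# Illustrative SymPy sketch (not from the paper): compute the prolongation
# coefficient phi^x = D_x(phi) - u_x D_x(xi1) - u_y D_x(xi2) - u_t D_x(xi3).
import sympy as sp

x, y, t = sp.symbols('x y t')
u = sp.Function('u')(x, y, t)
xi1, xi2, xi3, phi = [sp.Function(name)(x, y, t, u) for name in ('xi1', 'xi2', 'xi3', 'phi')]

ux, uy, ut = sp.diff(u, x), sp.diff(u, y), sp.diff(u, t)

# Since xi1, xi2, xi3, phi are evaluated at (x, y, t, u(x, y, t)), sp.diff(., x)
# differentiates through u by the chain rule and therefore coincides with D_x.
phi_x = sp.diff(phi, x) - ux*sp.diff(xi1, x) - uy*sp.diff(xi2, x) - ut*sp.diff(xi3, x)
print(phi_x)
```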
Equivalence transformations
===========================
An equivalence transformation is a nondegenerate change of the variables $t,x,y,u$ taking any equation of the form (\[eq:1\]) into an equation of the same form, generally speaking, with different $f(x,y,u,u_x,u_y)$. The set of all equivalence transformations forms an equivalence group ${{\mathscr E}}$. We shall find a continuous subgroup ${{\mathscr E}}_C$ of it making use of the infinitesimal method.
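To fix ideas, a simple example of such a transformation (added here for orientation; it corresponds to exponentiating the operator $Y_1$ obtained below) is the simultaneous scaling $$\bar{x}=\lambda x,\quad \bar{y}=\lambda y,\quad \bar{t}=\lambda t,\quad \bar{u}=\lambda u,\quad \bar{f}=\lambda f,\qquad \lambda>0,$$ which maps any equation of the form (\[eq:2\]) into an equation of the same form: indeed $\bar{u}_{\bar{t}}=u_t$, $\bar{u}_{\bar{x}}=u_x$, $\bar{u}_{\bar{y}}=u_y$, while $\bar{u}_{\bar{x}\bar{x}}+\bar{u}_{\bar{y}\bar{y}}=\lambda^{-1}(u_{xx}+u_{yy})$, so the scaled function $\bar{f}=\lambda f$ restores the original form.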
We consider an operator of the group ${{\mathscr E}}_C$ in the form $$\begin{aligned}
Y=\xi^1(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}x}+\xi^2(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}y}+
\xi^3(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}t}+\varphi(x,y,t,u)\frac{{\rm \partial}}{{{\rm \partial}}u}
+\mu(x,y,t,u,u_x,u_y,u_t,f)\frac{{\rm \partial}}{{{\rm \partial}}f},\label{eq:15}\end{aligned}$$ which is determined from the invariance conditions of Eq. (\[eq:1\]) written as the system: $$\begin{aligned}
\label{eq:3-16}
u_t&-&f(x,y,u,u_x,u_y)(u_{xx}+u_{yy})=0,\\\nonumber
f_t&=&f_{u_t}=0,\end{aligned}$$ where $u$ and $f$ are considered as differential variables: $u$ on the space $(x,y,t)$ and $f$ on the extended space $(x,y,t,u,u_x,u_y)$.
The invariance conditions of the system (\[eq:3-16\]) are $$\begin{aligned}
\label{eq:17}
Y^{(2)}(u_t&-&f(x,y,u,u_x,u_y)(u_{xx}+u_{yy}))=0,\\\nonumber
Y^{(2)}(f_t)&=&Y^{(2)}(f_{u_t})=0,\end{aligned}$$ where $Y^{(2)}$ is the prolongation of the operator (\[eq:15\]): $$\begin{aligned}
\label{eq:3-18}
Y^{(2)}=Y+\varphi^x\frac{{\rm \partial}}{{{\rm \partial}}u_x}+\varphi^y\frac{{\rm \partial}}{{{\rm \partial}}u_y}+\varphi^t\frac{{\rm \partial}}{{{\rm \partial}}u_t}+
\varphi^{xx}\frac{{\rm \partial}}{{{\rm \partial}}u_{xx}}+\varphi^{xy}\frac{{\rm \partial}}{{{\rm \partial}}u_{xy}}
+\varphi^{xt}\frac{{\rm \partial}}{{{\rm \partial}}u_{xt}}&+&\varphi^{yy}\frac{{\rm \partial}}{{{\rm \partial}}u_{yy}}\\\nonumber
&+&\varphi^{yt}\frac{{\rm \partial}}{{{\rm \partial}}u_{yt}}+\varphi^{tt}\frac{{\rm \partial}}{{{\rm \partial}}u_{tt}}+
\mu^t\frac{{\rm \partial}}{{{\rm \partial}}f_{t}}+\mu^{u_t}\frac{{\rm \partial}}{{{\rm \partial}}f_{u_t}}.\end{aligned}$$ The coefficients $\varphi^x, \varphi^y, \varphi^t, \varphi^{xx},
\varphi^{xy}, \varphi^{xt}, \varphi^{yy}, \varphi^{yt},
\varphi^{tt}$ are given in (\[eq:8\]), and the other coefficients of (\[eq:3-18\]) are obtained by applying the prolongation procedure to the differential variable $f$ with independent variables $(x,y,t,u,u_x,u_y,u_t)$. We have $$\begin{aligned}
\mu^t&=&\widetilde{D}_t(\mu)-f_x\widetilde{D}_t(\xi^1)-f_y\widetilde{D}_t(\xi^2)
-f_u\widetilde{D}_t(\varphi)-f_{u_x}\widetilde{D}_t(\varphi^x)-f_{u_y}\widetilde{D}_t(\varphi^y),\label{eq:19}\\
\mu^{u_t}&=&\widetilde{D}_{u_t}(\mu)-f_x\widetilde{D}_{u_t}(\xi^1)-f_y\widetilde{D}_{u_t}(\xi^2)
-f_u\widetilde{D}_{u_t}(\varphi)-f_{u_x}\widetilde{D}_{u_t}(\varphi^x)-f_{u_y}\widetilde{D}_{u_t}(\varphi^y),\label{eq:20}\end{aligned}$$ where $$\begin{aligned}
\widetilde{D}_t=\frac{{\rm \partial}}{{{\rm \partial}}t},\hspace{1cm}\widetilde{D}_{u_t}=\frac{{\rm \partial}}{{{\rm \partial}}u_t}.\label{eq:21}\end{aligned}$$ So, we have the following prolongation formulas: $$\begin{aligned}
\label{eq:22}
\mu^t&=&\mu_t-f_x\xi_t^1-f_y\xi_t^2-f_u\varphi_t-f_{u_x}(\varphi^x)_t-f_{u_y}(\varphi^y)_t,\\\nonumber
\mu^{u_t}&=&\mu_{u_t}-f_{u_x}(\varphi^x)_{u_t}-f_{u_y}(\varphi^y)_{u_t}.\end{aligned}$$ The invariance conditions (\[eq:17\]) give rise to $$\begin{aligned}
\mu^t=\mu^{u_t}=0,\label{eq:23}\end{aligned}$$ which must hold for every $f$. Substituting (\[eq:23\]) into (\[eq:22\]), we obtain $$\begin{aligned}
\begin{array}{ll}
\mu_t=\mu_{u_t}=0\\
\xi^1_x=\xi^2_t=\varphi_t=0\\
(\varphi^x)_t=(\varphi^x)_{u_t}=(\varphi^y)_t=(\varphi^y)_{u_t}=0\label{eq:24}
\end{array}\end{aligned}$$ Moreover, substituting (\[eq:3-18\]) into (\[eq:17\]), we obtain $$\begin{aligned}
\varphi^t-f(x,y,u,u_x,u_y)(\varphi^{xx}+\varphi^{yy})-\mu(u_{xx}+u_{yy})=0.\label{eq:25}\end{aligned}$$ We are left with a polynomial equation involving the various derivatives of $u(x,y,t)$ whose coefficients are certain derivatives of $\xi^1,\xi^2,\xi^3$ and $\varphi$. Since $\xi^1,\xi^2,\xi^3,\varphi$ only depend on $x,y,t,u$ we can equate the individual coefficients to zero, leading to the complete set of determining equations: $$\begin{aligned}
\xi^1&=&\xi^1(x,y)\label{eq:26}\\
\xi^2&=&\xi^2(y)\label{eq:27}\\
\xi^3&=&\xi^3(t)\label{eq:28}\\
\varphi_{uu}&=&0\label{eq:29}\\
2\varphi_{xu}&=&\xi^1_{xx}+\xi^1_{yy}\label{eq:30}\\
\varphi_{yu}&=&\xi^2_{xx}+\xi^2_{yy}\label{eq:31}\\
\varphi_u&=&\xi_x^1=\xi_y^2\label{eq:32}\\
\mu&=&(\xi^1_x-\xi_t^3)f\label{eq:33}\\
\varphi_{tt}&=&f(\varphi_{xx}+\varphi_{yy})\label{eq:34}\end{aligned}$$ Solving these, we find that $$\begin{aligned}
\nonumber
&&\xi^1(x,y)=c_1x+c_2y+c_3,\hspace{1cm}\xi^2(y)=c_1y+c_4,\hspace{1cm}\xi^3(t)=a(t),\\
&&\hspace{1cm}\varphi(x,y,u)=c_1u+\beta(x,y),\hspace{1cm}\mu=(c_1-a'(t))f,\label{eq:35}\end{aligned}$$ with constants $c_1, c_2, c_3$ and $c_4$; moreover, $\beta_{xx}=-\beta_{yy}$, i.e., $\beta$ is a harmonic function.
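The general solution (\[eq:35\]) is easily checked against the determining equations (\[eq:26\])–(\[eq:34\]). The following small sketch (an added illustration, assuming the SymPy library is available) performs that verification symbolically; the functions $a(t)$ and $\beta(x,y)$ are left unspecified.

```python
# Symbolic check (assuming SymPy) that the general solution (eq. 35)
# satisfies the determining equations (29)-(34); equations (26)-(28) are
# statements about the arguments of xi^1, xi^2, xi^3 and hold by
# construction.  a(t) and beta(x,y) are kept as unspecified functions.
import sympy as sp

x, y, t, u, f = sp.symbols('x y t u f')
c1, c2, c3, c4 = sp.symbols('c1:5')
a = sp.Function('a')(t)
beta = sp.Function('beta')(x, y)

xi1 = c1*x + c2*y + c3
xi2 = c1*y + c4
xi3 = a
phi = c1*u + beta
mu  = (c1 - sp.diff(a, t))*f

checks = [
    sp.diff(phi, u, 2),                                              # (29)
    2*sp.diff(phi, x, u) - sp.diff(xi1, x, 2) - sp.diff(xi1, y, 2),  # (30)
    sp.diff(phi, y, u) - sp.diff(xi2, x, 2) - sp.diff(xi2, y, 2),    # (31)
    sp.diff(phi, u) - sp.diff(xi1, x),                               # (32)
    sp.diff(xi1, x) - sp.diff(xi2, y),                               # (32)
    mu - (sp.diff(xi1, x) - sp.diff(xi3, t))*f,                      # (33)
]
print([sp.simplify(e) for e in checks])        # -> all zero

# (34) reduces to f*(beta_xx + beta_yy) = 0, i.e. beta must be harmonic:
print(sp.simplify(sp.diff(phi, t, 2)
                  - f*(sp.diff(phi, x, 2) + sp.diff(phi, y, 2))))
# prints -f*(beta_xx + beta_yy), which vanishes exactly when
# beta_xx = -beta_yy, as stated above.
```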
$\;\;\;\;$We summarize: The class of Eq. (\[eq:2\]) has an infinite continuous group of equivalence transformations generated by the following infinitesimal operators: $$\begin{aligned}
Y=(c_1x+c_2y+c_3)\frac{{\rm \partial}}{{{\rm \partial}}x}+ (c_1y+c_4)\frac{{\rm \partial}}{{{\rm \partial}}y}+
a(t)\frac{{\rm \partial}}{{{\rm \partial}}t}+(c_1u+\beta(x,y))\frac{{\rm \partial}}{{{\rm \partial}}u}
+(c_1-a'(t))f\frac{{\rm \partial}}{{{\rm \partial}}f}.\label{eq:36}\end{aligned}$$ Therefore the equivalence algebra ${{\goth g}}_{{\mathscr E}}$ of Eq. (\[eq:2\]) is spanned by the vector fields $$\begin{aligned}
&Y_1=x\frac{{\rm \partial}}{{{\rm \partial}}x}+y\frac{{\rm \partial}}{{{\rm \partial}}y}+t\frac{{\rm \partial}}{{{\rm \partial}}t}+u\frac{{\rm \partial}}{{{\rm \partial}}u}+f\frac{{\rm \partial}}{{{\rm \partial}}f},
\hspace{1cm} Y_2=y\frac{{\rm \partial}}{{{\rm \partial}}x},\hspace{1cm}
Y_3=\frac{{\rm \partial}}{{{\rm \partial}}x},\hspace{1cm}Y_4=\frac{{\rm \partial}}{{{\rm \partial}}y}&\\\label{eq:37}
&Y_5=a(t)\frac{{\rm \partial}}{{{\rm \partial}}t}-a'(t)f\frac{{\rm \partial}}{{{\rm \partial}}f},\hspace{1cm}
Y_{\beta}=\beta(x,y)\frac{{\rm \partial}}{{{\rm \partial}}u}.&\nonumber\end{aligned}$$
Moreover, in the group of equivalence transformations there are included also discrete transformations, i.e., reflections $$\begin{aligned}
t\longrightarrow-t,\hspace{1.5cm}x\longrightarrow-x,\hspace{1.5cm}u\longrightarrow-u,\hspace{1.5cm}
f\longrightarrow-f.\label{eq:38}\end{aligned}$$
Table 1. Commutator table of the algebra spanned by $Y_1,\dots,Y_6$; the entry in row $Y_i$ and column $Y_j$ is $[Y_i,Y_j]$.

$$\begin{aligned}
\begin{array}{l|cccccc}
\hline
[\,,\,] & Y_1 & Y_2 & Y_3 & Y_4 & Y_5 & Y_6 \\ \hline
Y_1 & 0 & 0 & -Y_3 & -Y_4 & 0 & -Y_6 \\
Y_2 & 0 & 0 & 0 & -Y_3 & 0 & 0 \\
Y_3 & Y_3 & 0 & 0 & 0 & 0 & 0 \\
Y_4 & Y_4 & Y_3 & 0 & 0 & 0 & 0 \\
Y_5 & 0 & 0 & 0 & 0 & 0 & 0 \\
Y_6 & Y_6 & 0 & 0 & 0 & 0 & 0 \\
\hline
\end{array}\end{aligned}$$
Table 2. Adjoint representation: the entry in row $Y_i$ and column $Y_j$ is ${\rm Ad}(\exp(s\,Y_i))Y_j$.

$$\begin{aligned}
\begin{array}{l|cccccc}
\hline
{\rm Ad} & Y_1 & Y_2 & Y_3 & Y_4 & Y_5 & Y_6 \\ \hline
Y_1 & Y_1 & Y_2 & e^sY_3 & e^sY_4 & Y_5 & e^sY_6 \\
Y_2 & Y_1 & Y_2 & Y_3 & Y_4+sY_3 & Y_5 & Y_6 \\
Y_3 & Y_1-sY_3 & Y_2 & Y_3 & Y_4 & Y_5 & Y_6 \\
Y_4 & Y_1-sY_4 & Y_2-sY_3 & Y_3 & Y_4 & Y_5 & Y_6 \\
Y_5 & Y_1 & Y_2 & Y_3 & Y_4 & Y_5 & Y_6 \\
Y_6 & Y_1-sY_6 & Y_2 & Y_3 & Y_4 & Y_5 & Y_6 \\
\hline
\end{array}\end{aligned}$$
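Table 2 can be reproduced mechanically from Table 1: with the sign convention of the Lie series (\[eq:40\]) below, ${\rm Ad}(\exp(s\,Y_i))$ acts on the algebra as the matrix exponential $e^{-s\,{\rm ad}_{Y_i}}$ in the basis $Y_1,\dots,Y_6$. The following sketch (an added illustration, assuming SymPy is available) recomputes the table from the structure constants alone.

```python
# Reconstruct Table 2 from the structure constants in Table 1, using
# Ad(exp(s Y_i)) = exp(-s ad_{Y_i}) in the basis Y_1,...,Y_6 (the sign
# convention of the Lie series used in the text).  Assumes SymPy.
import sympy as sp

s = sp.symbols('s')
n = 6
# Nonzero brackets from Table 1: (i, j) -> (coefficient, k) means
# [Y_i, Y_j] = coefficient * Y_k.
bracket = {(1, 3): (-1, 3), (1, 4): (-1, 4), (1, 6): (-1, 6),
           (2, 4): (-1, 3),
           (3, 1): (+1, 3),
           (4, 1): (+1, 4), (4, 2): (+1, 3),
           (6, 1): (+1, 6)}

def ad(i):
    """Matrix of ad_{Y_i}: column j holds the coefficients of [Y_i, Y_j]."""
    M = sp.zeros(n, n)
    for (a, b), (c, k) in bracket.items():
        if a == i:
            M[k - 1, b - 1] = c
    return M

for i in range(1, n + 1):
    Ad = (-s * ad(i)).exp()              # matrix exponential
    for j in range(1, n + 1):
        col = [sp.simplify(Ad[k, j - 1]) for k in range(n)]
        terms = ' + '.join(f'({c})*Y{k+1}' for k, c in enumerate(col) if c != 0)
        print(f'Ad(exp(s*Y{i})) Y{j} =', terms)
```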
Preliminary group classification
================================
One can observe in many applications of group analysis that most extensions of the principal Lie algebra admitted by the equation under consideration are taken from the equivalence algebra ${\goth g}_{{\mathscr E}}$. We call these extensions ${\mathscr E}$-extensions of the principal Lie algebra. The classification of all nonequivalent equations (with respect to a given equivalence group $G_{{\mathscr E}}$) admitting ${\mathscr E}$-extensions of the principal Lie algebra is called a preliminary group classification. Here, $G_{{\mathscr E}}$ is not necessarily the largest equivalence group; it can be any subgroup of the group of all equivalence transformations. So, we can take any finite-dimensional subalgebra (desirably as large as possible) of the infinite-dimensional algebra with basis (\[eq:37\]) and use it for a preliminary group classification. We select the subalgebra ${\goth g}_6$ spanned by the following operators: $$\begin{aligned}
&Y_1=x\frac{{\rm \partial}}{{{\rm \partial}}x}+y\frac{{\rm \partial}}{{{\rm \partial}}y}+t\frac{{\rm \partial}}{{{\rm \partial}}t}+u\frac{{\rm \partial}}{{{\rm \partial}}u}+f\frac{{\rm \partial}}{{{\rm \partial}}f},\hspace{1cm}
Y_2=y\frac{{\rm \partial}}{{{\rm \partial}}x},\hspace{1cm}
Y_3=\frac{{\rm \partial}}{{{\rm \partial}}x},\hspace{1cm}
Y_4=\frac{{\rm \partial}}{{{\rm \partial}}y},&\nonumber\\
&Y_5=\frac{{\rm \partial}}{{{\rm \partial}}t}-f\frac{{\rm \partial}}{{{\rm \partial}}f},\hspace{1cm}Y_6=\frac{{\rm \partial}}{{{\rm \partial}}u}.&\label{eq:39}\end{aligned}$$ The commutation relations between these vector fields are given in Table 1. To each $s$-parameter subgroup there corresponds a family of group-invariant solutions. So, in general, it is quite impossible to determine all possible group-invariant solutions of a PDE. In order to minimize this search, it is useful to construct an optimal system of solutions. It is well known that the problem of constructing the optimal system of solutions is equivalent to that of constructing the optimal system of subalgebras [@[2]; @[12]]. Here, we will deal with the construction of the optimal system of subalgebras of ${\goth g}_6$. Let $G$ be a Lie group, with ${\goth g}$ its Lie algebra. Each element $T\in G$ yields an inner automorphism $T_a\longmapsto TT_aT^{-1}$ of the group $G$. Every automorphism of the group $G$ induces an automorphism of ${\goth g}$. The set of all these automorphisms is a Lie group called [*the adjoint group $G^A$*]{}. The Lie algebra of $G^A$ is the adjoint algebra of ${\goth g}$, defined as follows. Let $X,Y\in{\goth g}$ be two infinitesimal generators. The linear mapping ${\rm ad}\,X:Y\longmapsto[X,Y]$ is a derivation of ${\goth g}$, called [*an inner derivation of ${\goth g}$*]{}. The set of all inner derivations ${\rm ad}\,X$ $(X\in{\goth g})$, together with the Lie bracket $[{\rm ad}\,X,{\rm ad}\,Y]={\rm ad}\,[X,Y]$, is a Lie algebra ${\goth g}^A$ called the [*adjoint algebra of ${\goth g}$*]{}. Clearly ${\goth g}^A$ is the Lie algebra of $G^A$. Two subalgebras in ${\goth g}$ are [*conjugate*]{} (or [*similar*]{}) if there is a transformation of $G^A$ which takes one subalgebra into the other. The collection of pairwise non-conjugate $s$-dimensional subalgebras is the optimal system of subalgebras of order $s$. The construction of the one-dimensional optimal system of subalgebras can be carried out by using a global matrix of the adjoint transformations, as suggested by Ovsiannikov [@[2]]. The problem amounts to determining a list (called an [*optimal system*]{}) of conjugacy inequivalent subalgebras with the property that any other subalgebra is equivalent to a unique member of the list under some element of the adjoint representation, i.e., $\overline{{\goth h}}={\rm Ad}(g)\,{\goth h}$ for some element $g$ of the Lie group under consideration. Thus we will deal with the construction of the optimal system of subalgebras of ${\goth g}_6$. The adjoint action is given by the Lie series $$\begin{aligned}
{\rm Ad}(\exp(s\,Y_i))Y_j
=Y_j-s\,[Y_i,Y_j]+\frac{s^2}{2}\,[Y_i,[Y_i,Y_j]]-\cdots,\label{eq:40}\end{aligned}$$ where $s$ is a parameter and $i,j=1,\cdots,6$. The adjoint representation of ${\goth g}_6$ is listed in Table 2; it consists of the separate adjoint actions of each element of ${\goth g}_6$ on all the other elements. [**Theorem 4.1.**]{} [*An optimal system of one-dimensional Lie subalgebras of the general Burgers’ equation (\[eq:2\]) is provided by those generated by*]{} $$\begin{aligned}
&1)&Y^1=Y_1=x{{\rm \partial}}_x+y{{\rm \partial}}_y+t{{\rm \partial}}_t+u{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{2.2cm}2)~Y^2=Y_2=y{{\rm \partial}}_x,\\
&3)&Y^3=-Y_4=-{{\rm \partial}}_y,\hspace{5.7cm}4)~Y^4=Y_1+Y_5=x{{\rm \partial}}_x+y{{\rm \partial}}_y+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u,\\
&5)&Y^5=Y_1-Y_2=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+t{{\rm \partial}}_t+u{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{0.5cm}6)~Y^6=Y_2-Y_4=y{{\rm \partial}}_x-{{\rm \partial}}_y,\\
&7)&Y^7=-Y_4+Y_6=-{{\rm \partial}}_y+{{\rm \partial}}_u,\hspace{4.1cm}8)~Y^{8}=-Y_4-Y_6=-{{\rm \partial}}_y-{{\rm \partial}}_u,\\
&9)&Y^9=Y_2+Y_5=y{{\rm \partial}}_x+{{\rm \partial}}_t-f{{\rm \partial}}_f,\hspace{3.3cm}10)~Y^{10}=Y_2-Y_5=y{{\rm \partial}}_x-{{\rm \partial}}_t+f{{\rm \partial}}_f,\\
&11)&Y^{11}=Y_2+Y_6=y{{\rm \partial}}_x+{{\rm \partial}}_u,\hspace{4.1cm}12)~Y^{12}=Y_2-Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_u,\\
&13)&Y^{13}=Y_1+Y_2=(x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+t{{\rm \partial}}_t+u{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{1mm}14)~Y^{14}=-Y_4+Y_5+Y_6=-{{\rm \partial}}_y+{{\rm \partial}}_t+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
&15)&Y^{15}=Y_2-Y_4-Y_5+Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_t+{{\rm \partial}}_u+f{{\rm \partial}}_f,16)~Y^{16}=Y_2-Y_4+Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_u,\\
&17)&Y^{17}=Y_2-Y_4+Y_5-Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_t-{{\rm \partial}}_u-f{{\rm \partial}}_f,18)~Y^{18}=Y_2-Y_4-Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u,\\
&19)&Y^{19}=Y_1+Y_2+Y_5=(x+y){{\rm \partial}}_x+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u,\hspace{0.5cm}20)~Y^{20}=Y_2+Y_5+Y_6=y{{\rm \partial}}_x+{{\rm \partial}}_t+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
&21)&Y^{21}=Y_2+Y_5-Y_6=y{{\rm \partial}}_x+{{\rm \partial}}_t-{{\rm \partial}}_u-f{{\rm \partial}}_f,\hspace{1.6cm}22)~Y^{22}=Y_2-Y_5-Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_t-{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
&23)&Y^{23}=Y_2-Y_5+Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_t+{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{1.6cm}24)~Y^{24}=-Y_4-Y_5-Y_6=-{{\rm \partial}}_y-{{\rm \partial}}_t-{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
&25)&Y^{25}=-Y_4-Y_5+Y_6=-{{\rm \partial}}_y-{{\rm \partial}}_t+{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{1.2cm}26)~Y^{26}=-Y_4+Y_5-Y_6=-{{\rm \partial}}_y+{{\rm \partial}}_t-{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
&27)&Y^{27}=Y_2-Y_4+Y_5+Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_t+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
&28)&Y^{28}=Y_1+Y_2-Y_5=(x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u+2f{{\rm \partial}}_f,\\
&29)&Y^{29}=Y_1-Y_2-Y_5=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u+2f{{\rm \partial}}_f,\\
&30)&Y^{30}=Y_1-Y_2+Y_5=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u,\\
&31)&Y^{31}=Y_1-Y_5=x{{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u+2f{{\rm \partial}}_f\\
&32)&Y^{32}=Y_2-Y_4-Y_5-Y_6=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_t-{{\rm \partial}}_u+f{{\rm \partial}}_f.\end{aligned}$$ [**Proof.**]{} Let ${\goth g}_6$ be the symmetry algebra of Eq. (\[eq:2\]), with adjoint representation given in Table 2, and let $$\begin{aligned}
Y=a_1Y_1+a_2Y_2+a_3Y_3+a_4Y_4+a_5Y_5+a_6Y_6,\end{aligned}$$ be a nonzero vector field of ${\goth g}_6$. We will simplify as many of the coefficients $a_i$, $i=1,\ldots,6$, as possible by applying suitable adjoint maps to $Y$. We proceed through the following cases. [*Case 1:*]{} First, assume that $a_1\neq 0$. Rescaling $Y$ if necessary, we may assume that $a_1=1$, so that $$\begin{aligned}
Y=Y_1+a_2Y_2+a_3Y_3+a_4Y_4+a_5Y_5+a_6Y_6.\end{aligned}$$ Using the adjoint table (Table 2), if we act on $Y$ with ${\rm Ad}(\exp(a_3Y_3))$, the coefficient of $Y_3$ can be made to vanish: $$\begin{aligned}
Y'=Y_1+a_2Y_2+a_4Y_4+a_5Y_5+a_6Y_6.\end{aligned}$$ Then we apply ${\rm Ad}(\exp(a_4Y_4))$ on $Y'$ to cancel the coefficient of $Y_4$: $$\begin{aligned}
Y''=Y_1+a_2Y_2+a_5Y_5+a_6Y_6.\end{aligned}$$ At last, we apply ${\rm Ad}(\exp(a_6Y_6))$ on $Y''$ to cancel the coefficient of $Y_6$: $$\begin{aligned}
Y'''=Y_1+a_2Y_2+a_5Y_5.\end{aligned}$$ [*Case 1a:*]{}\
If $a_2,a_5\neq 0$ then we can make the coefficients of $Y_2$ and $Y_5$ either $+1$ or $-1$. Thus any one-dimensional subalgebra generated by $Y$ with $a_2,a_5\neq 0$ is equivalent to one generated by $Y_1\pm Y_2\pm Y_5$, which gives parts 19), 28), 29) and 30) of the theorem. [*Case 1b:*]{} For $a_2=0, a_5\neq0$, each one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_1\pm Y_5$, which gives parts 4) and 31) of the theorem. [*Case 1c:*]{} For $a_2\neq0, a_5=0$, each one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_1\pm Y_2$, which gives parts 5) and 13) of the theorem. [*Case 1d:*]{} For $a_2=0, a_5=0$, each one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_1$, which gives part 1) of the theorem. [*Case 2:*]{} The remaining one-dimensional subalgebras are spanned by vector fields of the form $Y$ with $a_1=0$. [*Case 2a:*]{} If $a_4\neq 0$ then, by scaling $Y$, we can assume that $a_4=-1$. Now, acting on $Y$ with ${\rm Ad}(\exp(a_3Y_2))$, we can cancel the coefficient of $Y_3$: $$\begin{aligned}
\overline{Y}=a_2Y_2-Y_4+a_5Y_5+a_6Y_6.\end{aligned}$$ If $a_2\neq0$ then, by scaling $Y$, we can assume that $a_2=1$, and we have $$\begin{aligned}
\overline{Y}'=Y_2-Y_4+a_5Y_5+a_6Y_6.\end{aligned}$$ [*Case 2a-1:*]{} Suppose $a_5=a_6=0$; then the one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_2-Y_4$, which gives part 6). [*Case 2a-2:*]{} Suppose $a_5=0, a_6\neq0$; then every one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_2-Y_4\pm Y_6$, which gives parts 16) and 18). [*Case 2a-3:*]{} Suppose $a_5\neq0, a_6\neq0$; then every one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $Y_2-Y_4\pm Y_5\pm Y_6$, which gives parts 15), 17), 27) and 32). Now if $a_2=0$, we have $$\begin{aligned}
\overline{Y}''=-Y_4+a_5Y_5+a_6Y_6.\end{aligned}$$ [*Case 2a-4:*]{} Suppose $a_5=a_6=0$; then the one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $-Y_4$, which gives part 3). [*Case 2a-5:*]{} Suppose $a_5=0, a_6\neq0$; then every one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $-Y_4\pm Y_6$, which gives parts 7) and 8). [*Case 2a-6:*]{} Suppose $a_5\neq0, a_6\neq0$; then every one-dimensional subalgebra generated by $Y$ is equivalent to one generated by $-Y_4\pm Y_5\pm Y_6$, which gives parts 14), 24), 25) and 26).
[*Case 2b:*]{} $~~~~$ Let $a_4=0$ then $Y$ is in the form $$\begin{aligned}
\widehat{Y}=a_2Y_2+a_5Y_5+a_6Y_6.\end{aligned}$$ Suppose that $a_2\neq 0$; then, rescaling if necessary, we may take $a_2=1$ and obtain $$\begin{aligned}
\widehat{Y}'=Y_2+a_5Y_5+a_6Y_6.\end{aligned}$$ [*Case 2b-1:*]{} Let $a_5=a_6=0$; then $Y_2$ remains, which gives part 2) of the theorem. [*Case 2b-2:*]{} If $a_5\neq0, a_6\neq0$, then $\widehat{Y}'$ is equivalent to $Y_2\pm Y_5\pm Y_6$; this gives parts 20), 21), 22) and 23). [*Case 2b-3:*]{} If $a_5\neq0, a_6=0$, then $\widehat{Y}'=Y_2\pm Y_5$, which gives parts 9) and 10). [*Case 2b-4:*]{} If $a_5=0, a_6\neq0$, then $Y_2\pm Y_6$ is obtained, which gives parts 11) and 12). There are no further cases to consider, and the proof is complete. $\Box$ The coefficient $f$ of Eq. (\[eq:2\]) depends on the variables $x,y,u,u_x,u_y$. Therefore, we take the projections of the optimal system onto the space $(x,y,u,u_x,u_y,f)$. We have $$\begin{aligned}
\hspace{-0.7cm}
\begin{array}{rlrl}
1)&Z^1=Y^1=x{{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+f{{\rm \partial}}_f, \hspace{1cm}&17)&Z^{17}=Y^{17}=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+2f{{\rm \partial}}_f,\\
2)&Z^2=Y^2=y{{\rm \partial}}_x,\hspace{1cm}&18)&Z^{18}=Y^{18}=(x+y){{\rm \partial}}_x+u{{\rm \partial}}_u,\\
3)&Z^3=Y^3=-{{\rm \partial}}_y,\hspace{1cm}&19)&Z^{19}=Y^{19}=y{{\rm \partial}}_x+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
4)&Z^4=Y^4=(x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{1cm}&20)&Z^{20}=Y^{20}=y{{\rm \partial}}_x-{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
5)&Z^5=Y^5=x{{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u,\hspace{1cm}&21)&Z^{21}=Y^{21}=y{{\rm \partial}}_x-{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
6)&Z^6=Y^6=x{{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+2f{{\rm \partial}}_f,\hspace{1cm} &22)&Z^{22}=Y^{22}=y{{\rm \partial}}_x+{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
7)&Z^7=Y^7=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+f{{\rm \partial}}_f,\hspace{1cm} &23)&Z^{23}=Y^{23}=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_u,\\
8)&Z^8=Y^8=y{{\rm \partial}}_x-{{\rm \partial}}_y,\hspace{1cm}&24)&Z^{24}=Y^{24}=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u,\\
9)&Z^9=Y^9=-{{\rm \partial}}_y+{{\rm \partial}}_u,\hspace{1cm}&25)&Z^{25}=Y^{25}=-{{\rm \partial}}_y+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
10)&Z^{10}=Y^{10}=-{{\rm \partial}}_y-{{\rm \partial}}_u,\hspace{1cm}&26)&Z^{26}=Y^{26}=-{{\rm \partial}}_y-{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
11)&Z^{11}=Y^{11}=y{{\rm \partial}}_x-f{{\rm \partial}}_f,\hspace{1cm}&27)&Z^{27}=Y^{27}=-{{\rm \partial}}_y+{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
12)&Z^{12}=Y^{12}=y{{\rm \partial}}_x+f{{\rm \partial}}_f,\hspace{1cm}&28)&Z^{28}=Y^{28}=-{{\rm \partial}}_y-{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
\end{array}\end{aligned}$$ $$\begin{aligned}
\hspace{-0.7cm}
\begin{array}{rlrl}
13)&Z^{13}=Y^{13}=y{{\rm \partial}}_x+{{\rm \partial}}_u,\hspace{1cm}&29)&Z^{29}=Y^{29}=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_u-f{{\rm \partial}}_f,\\
14)&Z^{14}=Y^{14}=y{{\rm \partial}}_x-{{\rm \partial}}_u,\hspace{1cm}&30)&Z^{30}=Y^{30}=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
15)&Z^{15}=Y^{15}=(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u,\hspace{1cm}&31)&Z^{31}=Y^{31}=y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_u+f{{\rm \partial}}_f,\\
16)&Z^{16}=Y^{16}=(x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u+2f{{\rm \partial}}_f,\hspace{1cm}&32)&Z^{32}=Y^{32}=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u-f{{\rm \partial}}_f,
\end{array}\end{aligned}$$ [**Proposition 4.2.**]{} [*Let ${\goth g}_m:=\langle Y_1, \ldots,
Y_m\rangle$, be an $m$-dimensional algebra. Denote by $Y^i (i=1,
\ldots, r, 0<r\leq m, r\in{\Bbb N})$ an optimal system of one-dimensional subalgebras of ${\goth g}_m$ and by $Z^i\, (i =
1,\cdots, t, 0<t\leq r, t\in{\Bbb N})$ the projections of $Y^i$, i.e., $Z^i = {\rm pr}(Y^i)$. If equations $$\begin{aligned}
f = \Phi(x,y,u,u_x,u_y),\label{eq:4-18}\end{aligned}$$ are invariant with respect to the optimal system $Z^i$, then the equation $$\begin{aligned}
u_t = \Phi(x,y,u,u_x,u_y)(u_{xx}+u_{yy}),\label{eq:4-19}\end{aligned}$$ admits the operators $X^i=$ projection of $Y^i$ on $(t,x,y,u,u_x,u_y)$.*]{} [**Proposition 4.3.**]{} [*Let Eq. (\[eq:4-19\]) and the equation $$\begin{aligned}
u_t = \Phi'(x,y,u,u_x,u_y)(u_{xx}+u_{yy}),\label{eq:4-20}\end{aligned}$$ be constructed according to Proposition 4.2 via optimal systems $Z^i$ and ${Z^i}'$, respectively. If the subalgebras spanned by the optimal systems $Z^i$ and ${Z^i}'$ are similar in ${\goth g}_m$, then Eqs. (\[eq:4-19\]) and (\[eq:4-20\]) are equivalent with respect to the equivalence group $G_m$ generated by ${\goth g}_m$.* ]{} Now we apply Propositions 4.2 and 4.3 to the optimal system $Z^i$ constructed above and obtain all nonequivalent equations of the form (\[eq:2\]) admitting ${\mathscr E}$-extensions of the principal Lie algebra ${\goth g}_1$ by one dimension, i.e., equations of the form (\[eq:2\]) that admit, together with the basic operator (\[eq:14\]) of ${\goth g}_1$, a second operator $X^{(2)}$. For each case where this extension occurs, we indicate the corresponding coefficient $f$ and the additional operator $X^{(2)}$.
We illustrate the algorithm of passing from the operators $Z^i\,(i=1,\cdots,32)$ to $f$ and $X^{(2)}$ with the following example. Let us consider the vector field $$\begin{aligned}
Z^{32}=y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u-f{{\rm \partial}}_f,\label{eq:4-21}\end{aligned}$$ then the characteristic system corresponding to $Z^{32}$ is $$\begin{aligned}
{dx\over y}={dy\over-1}=\frac{du}{-1}=\frac{df}{-f},\end{aligned}$$ and can be taken in the form $$\begin{aligned}
I_1=u+{x\over y},\hspace{5mm}I_2=e^{x\over y}f.\end{aligned}$$ From the invariance equations we can write $$\begin{aligned}
I_2=\Phi(I_1),\end{aligned}$$ it follows that $$\begin{aligned}
f=e^{-{x\over y}}\Phi(\lambda),\end{aligned}$$ where $\lambda=I_1$.
From Proposition 4.2 applied to the operator $Z^{32}$ we obtain the additional operator $X^{(2)}$ $$\begin{aligned}
y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_t-{{\rm \partial}}_u.\end{aligned}$$ After similar calculations applied to all the operators $Z^i$ we obtain the following result (Table 3) for the preliminary group classification of Eq. (\[eq:2\]): the equations admitting a one-dimensional ${\mathscr E}$-extension of the principal Lie algebra ${\goth g}_1$.
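The invariance computations behind Table 3 are easy to check symbolically. The following added sketch (assuming SymPy is available) verifies one entry: for the projected operator $Z^{11}=y\partial_x-f\partial_f$, the classified coefficient $f=e^{-x/y}\Phi(u)$ of row 11 of Table 3 indeed defines a $Z^{11}$-invariant surface.

```python
# Check that the surface f = exp(-x/y) * Phi(u) (row 11 of Table 3) is
# invariant under the projected operator Z^{11} = y d_x - f d_f.
# A small added illustration; assumes SymPy.
import sympy as sp

x, y, u, f = sp.symbols('x y u f')
Phi = sp.Function('Phi')

F = sp.exp(-x/y) * Phi(u)      # the classified coefficient
G = f - F                      # the surface G = 0, i.e. f = F

# Apply Z^{11} = y d_x - f d_f to G and restrict to the surface:
ZG = y*sp.diff(G, x) - f*sp.diff(G, f)
print(sp.simplify(ZG.subs(f, F)))   # -> 0, hence the surface is invariant
```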
Conclusion
==========
In this paper, following the classical Lie method, we have obtained a preliminary group classification for the class of heat equations (\[eq:2\]) and investigated the algebraic structure of the symmetry groups of this equation. The classification is obtained by constructing an optimal system of one-dimensional subalgebras with the aid of Propositions 4.2 and 4.3. The results are summarized in Table 3. It is of course also possible to obtain the corresponding reduced equations for all the cases in the classification reported in Table 3.
Table 3. Preliminary group classification of Eq. (\[eq:2\]).

\[table:3\] $$\begin{aligned}
\hspace{-0.75cm}\begin{array}{l l l l l} \hline
N &\hspace{1cm} Z &\hspace{1.1cm} \mbox{Invariant} &\hspace{1cm} \mbox{Equation}
&\hspace{1cm} \mbox{Additional operator}\,X^{(2)} \\ \hline
1 &\hspace{1cm} Z^1 &\hspace{1.1cm} {u\over x} &\hspace{1cm}u_t=x\Phi(u_{xx}+u_{yy})
&\hspace{1cm} x{{\rm \partial}}_x+y{{\rm \partial}}_y+t{{\rm \partial}}_t+u{{\rm \partial}}_u \\
2 &\hspace{1cm} Z^2 &\hspace{1.1cm} u &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x \\
3 &\hspace{1cm} Z^3 &\hspace{1.1cm} u &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y\\
4 &\hspace{1cm} Z^4 &\hspace{1.1cm} {u\over x+y} &\hspace{1cm}u_t=y\Phi(u_{xx}+u_{yy})
&\hspace{1cm} (x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+u{{\rm \partial}}_u \\
5 &\hspace{1cm} Z^5 &\hspace{1.1cm} {u\over x}&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} x{{\rm \partial}}_x+y{{\rm \partial}}_y+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
6 &\hspace{1cm} Z^6 &\hspace{1.1cm} {u\over x}&\hspace{1cm}u_t=x^2\Phi(u_{xx}+u_{yy})
&\hspace{1cm} x{{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
7 &\hspace{1cm} Z^7 &\hspace{1.1cm} {u\over x-y}&\hspace{1cm}u_t=(x-y)\Phi(u_{xx}+u_{yy})
&\hspace{1cm}(x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+t{{\rm \partial}}_t+u{{\rm \partial}}_u \\
8 &\hspace{1cm} Z^8 &\hspace{1.1cm} u &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y \\
9 &\hspace{1cm} Z^9 &\hspace{1.1cm} u&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y+{{\rm \partial}}_u \\
10 &\hspace{1cm} Z^{10} &\hspace{1.1cm} x &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y-{{\rm \partial}}_u \\
11 &\hspace{1cm} Z^{11} &\hspace{1.1cm} u &\hspace{1cm}u_t=e^{-{x\over y}}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x+{{\rm \partial}}_t \\
12 &\hspace{1cm} Z^{12} &\hspace{1.1cm} u &\hspace{1cm}u_t=e^{x\over y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_t \\
13 &\hspace{1cm} Z^{13} &\hspace{1.1cm} u-{x\over y} &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x+{{\rm \partial}}_u \\
14 &\hspace{1cm} Z^{14} &\hspace{1.1cm} u+{x\over y} &\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_u \\
15 &\hspace{1cm} Z^{15} &\hspace{1.1cm} {u\over x-y}&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} (x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
16 &\hspace{1cm} Z^{16} &\hspace{1.1cm} {u\over x+y}&\hspace{1cm}u_t=(x+y)^2\Phi(u_{xx}+u_{yy})
&\hspace{1cm} (x+y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
17 &\hspace{1cm} Z^{17} &\hspace{1.1cm} {u\over x-y}&\hspace{1cm}u_t=(x-y)^2\Phi(u_{xx}+u_{yy})
&\hspace{1cm} (x-y){{\rm \partial}}_x+y{{\rm \partial}}_y+(t-1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
18 &\hspace{1cm} Z^{18} &\hspace{1.1cm} {u\over x+y}&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} (x+y){{\rm \partial}}_x+(t+1){{\rm \partial}}_t+u{{\rm \partial}}_u \\
19 &\hspace{1cm} Z^{19} &\hspace{1.1cm}u-{x\over y}&\hspace{1cm}u_t=e^{-{x\over y}}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x+{{\rm \partial}}_t+{{\rm \partial}}_u\\
20 &\hspace{1cm} Z^{20} &\hspace{1.1cm} u+{x\over y}&\hspace{1cm}u_t=e^{-{x\over y}}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x+{{\rm \partial}}_t-{{\rm \partial}}_u \\
21 &\hspace{1cm} Z^{21} &\hspace{1.1cm} u+{x\over y}&\hspace{1cm}u_t=e^{x\over y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_t-{{\rm \partial}}_u\\
22 &\hspace{1cm} Z^{22} &\hspace{1.1cm} u-{x\over y}&\hspace{1cm}u_t=e^{x\over y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_t+{{\rm \partial}}_u\\
23 &\hspace{1cm} Z^{23} &\hspace{1.1cm} u-{x\over y}&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_u\\
24 &\hspace{1cm} Z^{24} &\hspace{1.1cm}u+{x\over y}&\hspace{1cm}u_t=e^y\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_u \\
25 &\hspace{1cm} Z^{25} &\hspace{1.1cm} u+y&\hspace{1cm}u_t=\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y+{{\rm \partial}}_t+{{\rm \partial}}_u \\
26 &\hspace{1cm} Z^{26} &\hspace{1.1cm} u-y&\hspace{1cm}u_t=e^{-y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y-{{\rm \partial}}_t-{{\rm \partial}}_u \\
27 &\hspace{1cm} Z^{27} &\hspace{1.1cm} u+y&\hspace{1cm}u_t=e^{-y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y-{{\rm \partial}}_t+{{\rm \partial}}_u \\
28 &\hspace{1cm} Z^{28} &\hspace{1.1cm} u-y &\hspace{1cm}u_t=e^y\Phi(u_{xx}+u_{yy})
&\hspace{1cm} -{{\rm \partial}}_y+{{\rm \partial}}_t-{{\rm \partial}}_u \\
29 &\hspace{1cm} Z^{29} &\hspace{1.1cm} u+{x\over y}&\hspace{1cm}u_t=e^{-{x\over y}}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_t+{{\rm \partial}}_u \\
30 &\hspace{1cm} Z^{30} &\hspace{1.1cm} u+{x\over y}&\hspace{1cm}u_t=e^{x\over y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_t-{{\rm \partial}}_u \\
31 &\hspace{1cm} Z^{31} &\hspace{1.1cm} u-{x\over y}&\hspace{1cm}u_t=e^{x\over y}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y-{{\rm \partial}}_t+{{\rm \partial}}_u \\
32 &\hspace{1cm} Z^{32} &\hspace{1.1cm} u+{x\over y}&\hspace{1cm}u_t=e^{-{x\over y}}\Phi(u_{xx}+u_{yy})
&\hspace{1cm} y{{\rm \partial}}_x-{{\rm \partial}}_y+{{\rm \partial}}_t-{{\rm \partial}}_u \\
\hline
\end{array}\end{aligned}$$
S. Lie, Arch. for Math. 6, 328 (1881).

L. V. Ovsiannikov, Group Analysis of Differential Equations, Academic Press, New York, 1982.

E. A. Saied, M. M. Hussain, Similarity solutions for a nonlinear model of the heat equation, J. Nonlinear Math. Phys. 3 (1–2) (1996) 219–225.

P. A. Clarkson, E. L. Mansfield, Symmetry reductions and exact solutions of a class of nonlinear heat equations, Phys. D 70 (1993) 250–288.

M. I. Servo, Conditional and nonlocal symmetry of nonlinear heat equation, J. Nonlinear Math. Phys. 3 (1–2) (1996) 63–67.

J. M. Goard, P. Broadbridge, D. J. Arrigo, The integrable nonlinear degenerate diffusion equations, Z. Angew. Math. Phys. 47 (6) (1996) 926–942.

P. G. Estevez, C. Qu, S. L. Zhang, Separation of variables of a generalized porous medium equation with nonlinear source, J. Math. Anal. Appl. 275 (2002) 44–59.

P. W. Doyle, P. J. Vassiliou, Separation of variables in the 1-dimensional non-linear diffusion equation, Internat. J. Non-Linear Mech. 33 (2) (2002) 315–326.

A. Ahmad, A. H. Bokhari, A. H. Kara, F. D. Zaman, Symmetry classifications and reductions of some classes of $(2+1)$-nonlinear heat equation, J. Math. Anal. Appl. 339 (2008) 175–181.

M. Nadjafikhah, R. Bakhshandeh-Chamazkoti, A. Mahdipour-Shirayeh, A symmetry classification for a class of $(2+1)$-nonlinear wave equation, Nonlinear Analysis (2009), doi:10.1016/j.na.2009.03.087.

R. Cimpoiasu, R. Constantinescu, Lie symmetries and invariants for 2D nonlinear heat equation, Nonlinear Analysis 68 (2008) 2261–2268.

N. H. Ibragimov, M. Torrisi, A. Valenti, Preliminary group classification of equations $u_{tt}=f(x,u_x)u_{xx}+g(x,u_x)$, J. Math. Phys. 32 (11) (1991) 2988–2995.

Lina Song, Hongqing Zhang, Preliminary group classification for the nonlinear wave equation $u_{tt}=f(x,u)u_{xx}+g(x,u)$, Nonlinear Analysis, article in press.

P. J. Olver, Equivalence, Invariants, and Symmetry, Cambridge University Press, Cambridge, 1995.
---
abstract: 'We give thirty-two diverse proofs of a small mathematical gem—the fundamental Euler sum identity $$\zeta(2,1)=\zeta(3)=8\,\zeta(\overline2,1).$$ We also discuss various generalizations for multiple harmonic (Euler) sums and some of their many connections, thereby illustrating both the wide variety of techniques fruitfully used to study such sums and the attraction of their study.'
address:
- |
Faculty of Computer Science\
Dalhousie University\
Halifax, Nova Scotia B3H 1W5\
Canada
- |
Department of Mathematics & Statistics\
University of Maine\
5752 Neville Hall Orono, Maine 04469-5752\
U.S.A.
author:
- 'Jonathan M. Borwein'
- 'David M. Bradley'
title: 'Thirty-Two Goldbach Variations'
---
[^1]
=500
Introduction {#sect:Intro}
============
There are several ways to introduce and make attractive a new or unfamiliar subject. We choose to do so by emulating Glenn Gould’s passion for Bach’s *Goldberg variations*. We shall illustrate most of the techniques used to study Euler sums by focusing almost entirely on the identities $$\sum_{n=1}^\infty \frac{1}{n^2}\sum_{m=1}^{n-1}\frac1{m} =
\sum_{n=1}^\infty \frac{1}{n^3} = 8\sum_{n=1}^\infty
\frac{(-1)^n}{n^2}\sum_{m=1}^{n-1}\frac1{m}$$ and some of their many generalizations.
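Before turning to the proofs, here is a minimal numerical sanity check of the displayed identities (an added sketch, using nothing beyond truncated partial sums).

```python
# Truncated partial sums of zeta(3), zeta(2,1) and 8*zeta(2bar,1).
# The plain double sum converges like (log N)/N, so only a handful of
# digits are expected to agree at this N.
N = 10**6

zeta3 = sum(1.0/n**3 for n in range(1, N + 1))

H = 0.0          # running harmonic number H_{n-1}
z21 = 0.0        # sum_n H_{n-1}/n^2
z2bar1 = 0.0     # sum_n (-1)^n H_{n-1}/n^2
for n in range(1, N + 1):
    z21 += H / n**2
    z2bar1 += (-1)**n * H / n**2
    H += 1.0 / n

print(zeta3, z21, 8*z2bar1)   # all approximately 1.2020...
```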
Euler, Goldbach and the birth of ${\boldsymbol\zeta}$.
------------------------------------------------------
What follows is a transcription of correspondence between Euler and Goldbach [@eg1742] that led to the origin of the zeta-function and multi-zeta values, see also [@exp1; @exp2; @dunham].
> [**59. Goldbach an Euler, Moskau, 24. Dez. 1742.**]{}[^2] \[…\]*Als ich neulich die vermeinten summas der beiden letzteren serierum in meinem vorigen Schreiben wieder betrachtet, habe ich alsofort wahrgenommen, daß selbige aus einem bloßem Schreibfehler entstanden, von welchem es aber in der Tat heißet: Si non errasset, fecerat ille minus*.[^3]
This is the letter in which Goldbach precisely formulates the series which sparked Euler’s further investigations into what would become the zeta-function. These investigations were apparently due to a serendipitous mistake. The above translates as follows:
> *When I recently considered further the indicated sums of the last two series in my previous letter, I realized immediately that the same series arose due to a mere writing error, from which indeed the saying goes, “Had one not erred, one would have achieved less."[^4]*
Goldbach continues
> *Ich halte dafür, daß es ein problema problematum ist, die summam huius:* $$\begin{aligned}
> 1+ \frac{1}{2^n}\left(1+ \frac{1}{2^m}\right) +
> \frac{1}{3^n}\left(1+ \frac{1}{2^m} +\frac{1}{3^m}\right)
> +\frac{1}{4^n}\left(1+ \frac{1}{2^m}
> +\frac{1}{3^m}+\frac{1}{4^m}\right)+ etc.\end{aligned}$$ *in den casibus zu finden, wo $m$ et $n$ nicht numeri integri pares et sibi aequales sind, doch gibt es casus, da die summa angegeben werden kann, exempli gr\[atia\], si $m=1$, $n=3$, denn es ist* $$\begin{aligned}
> 1+ \frac{1}{2^3}\left(1+ \frac{1}{2}\right) + \frac{1}{3^3}\left(1+
> \frac{1}{2} +\frac{1}{3}\right) + \frac{1}{4^3}\left(1+
> \frac{1}{2} +\frac{1}{3}+\frac{1}{4}\right)+ etc. =
> \frac{\pi^4}{72}.\end{aligned}$$
The Modern Language of Euler Sums
---------------------------------
For positive integers $s_1,\dots,s_m$ and signs $\sigma_j = \pm 1$, consider [@BBB] the $m$-fold Euler sum $$\zeta(s_1,\dots,s_m;\sigma_1,\dots,\sigma_m)
:= \sum_{k_1>\cdots>k_m>0}\;\prod_{j=1}^m
\frac{\sigma_j^{k_j}}{k_j^{s_j}}.$$ As is now customary, we combine strings of exponents and signs by replacing $s_j$ by $\overline s_j$ in the argument list if and only if $\sigma_j=-1$, and denote $n$ repetitions of a substring $S$ by $\{S\}^n$. Thus, for example, $\zeta(\overline1)=-\log 2$, $\zeta(\{2\}^3)=\zeta(2,2,2)=\pi^6/7!$ and $$\zeta(s_1,\dots,s_m) = \sum_{k_1>\cdots>k_m>0}\;\prod_{j=1}^m
k_j^{-s_j}.
\label{mzvdef}$$ The identity $$\label{z21}
\zeta(2,1) = \zeta(3)$$ goes back to Euler [@LE1] [@LE2 p. 228] and has since been repeatedly rediscovered (see, e.g., [@Briggs; @Bruckman; @Farnum; @Klamkin]). In this language Goldbach had found $$\zeta(3,1)+\zeta(4) =\frac{\pi^4}{72}.$$
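Goldbach’s evaluation is easy to confirm numerically; the following is a small added sketch, truncating the double sum at $N$ terms.

```python
# zeta(3,1) + zeta(4) versus pi^4/72, via truncated sums.
import math

N = 100000
H = 0.0        # H_{k-1}
z31 = 0.0      # zeta(3,1) = sum_{k>m>0} 1/(k^3 m)
z4 = 0.0       # zeta(4)
for k in range(1, N + 1):
    z31 += H / k**3
    z4 += 1.0 / k**4
    H += 1.0 / k

print(z31 + z4, math.pi**4 / 72)   # both approximately 1.352904...
```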
The more general formula $$\label{EulerReduction}
2\zeta(m,1)= m\zeta(m+1)-\sum_{j=1}^{m-2}\zeta(j+1)\zeta(m-j),
\qquad 2\le m\in{{\mathbf Z}}$$ is also due to Euler [@LE1] [@LE2 p. 266]. Nielsen [@Niels1 p. 229] [@Niels2 p. 198] [@Niels3 pp.47–49] developed a method for obtaining (\[EulerReduction\]) and related results based on partial fractions. Formula (\[EulerReduction\]) has also been rediscovered many times [@Williams; @State; @SitSarm; @GP; @Bracken; @Vowe]. Crandall and Buhler [@CranBuh] deduced (\[EulerReduction\]) from their general infinite series formula which expresses $\zeta(s,t)$ for real $s>1$ and $t\ge 1$ in terms of Riemann zeta values.
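For instance, the case $m=4$ of (\[EulerReduction\]) reads $2\zeta(4,1)=4\zeta(5)-2\zeta(2)\zeta(3)$, which the following added sketch checks numerically.

```python
# Spot-check of Euler's reduction formula for m = 4.
N = 100000
H, z41 = 0.0, 0.0
for k in range(1, N + 1):
    z41 += H / k**4          # zeta(4,1) = sum_{k>m>0} 1/(k^4 m)
    H += 1.0 / k

zeta = lambda s: sum(1.0/n**s for n in range(1, N + 1))
print(2*z41, 4*zeta(5) - 2*zeta(2)*zeta(3))
# both approximately 0.1931 (agreement limited by the zeta(2) truncation)
```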
Study of the multiple zeta function led to the discovery of a new generalization of (\[z21\]), involving nested sums of arbitrary depth: $$\label{z21^n}
\zeta(\{2,1\}^n ) = \zeta(\{3\}^n), \qquad n\in{{\mathbf Z}}^{+}.$$ Although numerous proofs of (\[z21\]) and (\[2bar1\]) are known (we give many in the sequel), the only proof of (\[z21\^n\]) of which we are aware involves making a simple change of variable in a multiple iterated integral (see [@BBB; @BBBLa; @BowBradSurvey] and below).
An alternating version of (\[z21\]) is $$\label{2bar1}
8\zeta(\overline{2}, 1) = \zeta(3),$$ which has also resurfaced from time to time [@Niels3 p. 50] [@Sub85 (2.12)] [@Butzer p. 267] and hints at the generalization $$\label{2bar1^n}
8^n\zeta(\{\overline 2, 1\}^n) \stackrel{?}{=} \zeta(\{3\}^n),\qquad
n\in{{\mathbf Z}}^{+},$$ originally conjectured in [@BBB], and which still remains open—despite abundant, even overwhelming, evidence [@DJD].
Hilbert and Hardy Inequalities {#sect:hardy}
------------------------------
Much of the early 20th century history—and philosophy—of the *“‘bright’ and amusing”* subject of inequalities is charmingly discussed in G.H. Hardy’s retirement lecture as London Mathematical Society Secretary [@ghh]. He comments [@ghh p. 474] that *Harald Bohr is reported to have remarked “Most analysts spend half their time hunting through the literature for inequalities they want to use, but cannot prove."*
Central to Hardy’s essay are:
[(**Hilbert)**]{} For non-negative sequences $(a_n)$ and $(b_n)$, not both zero, and for\
$1 < p, q < \infty$ with $1/p+1/q=1$ one has $$\begin{aligned}
\label{hilbert-p}\sum_{n=1}^\infty\sum_{m=1}^\infty \frac{a_n\,b_m}{n+m}
< \pi\,{\rm csc}\left(\frac{\pi}p\right) \|a_n\|_p\,\|b_n\|_q.\end{aligned}$$
[(**Hardy)**]{} For a non-negative sequence $(a_n)$ and for $p>1$ $$\begin{aligned}
\label{h-ineq}\sum_{n=1}^\infty\left(\frac{a_1+a_2+\cdots
+a_n}n\right)^p \le \left(\frac{p}{p-1}\right)^p \,\sum_{n=1}^\infty
a_n^p.\end{aligned}$$
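As a concrete illustration (added here as a sketch), one can test (\[h-ineq\]) numerically for $p=2$ and the sequence $a_n=1/n$.

```python
# Hardy's inequality for p = 2 and a_n = 1/n (truncated at N terms).
p = 2.0
N = 10000
a = [1.0/n for n in range(1, N + 1)]

csum, lhs = 0.0, 0.0
for n, an in enumerate(a, start=1):
    csum += an                       # a_1 + ... + a_n
    lhs += (csum / n)**p
rhs = (p/(p - 1))**p * sum(an**p for an in a)
print(lhs, rhs, lhs <= rhs)          # lhs < rhs, as the inequality asserts
```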
We return to these inequalities in Section \[sect:witten\].
Hardy [@ghh p. 485] remarks that his [*“own theorem was discovered as a by-product of my own attempt to find a really simple and elementary proof of Hilbert’s."*]{} He reproduces Elliott’s proof of (\[h-ineq\]), writing “*it can hardly be possible to find a proof more concise or elegant*" and also “I *have given nine \[proofs\] in a lecture in Oxford, and more have been found since then.*" (See [@ghh p. 488].)
Our Motivation and Intentions
-----------------------------
We wish to emulate Hardy and to present proofs that are either elementary, bright and amusing, concise or elegant— ideally all at the same time! In doing so we note that:
1. $\zeta(3)$, while provably irrational, is still quite mysterious, see [@bt; @agm] and [@exp2]. Hence, exposing more relationships and approaches can only help. We certainly hope one of them will lead to a proof of conjecture (\[2bar1\^n\]).
2. Identities for $\zeta(3)$ are abundant and diverse. We give three, each of which is the entry-point to a fascinating set (a quick numerical look at the first and third appears in the sketch following this list):
- Our first favourite is a *binomial sum* [@ag] that played a role in Apéry’s 1976 proof, see [@agm; @bt] and [@exp2 Chapter 3], of the irrationality of $\zeta(3)$: $$\begin{aligned}
\label{z3}
\zeta(3) &=& \frac5 2\,\sum_{k=1}^{\infty} \frac{(-1)^{k+1}} {k^3\,
{2k \choose k}}.\end{aligned}$$
- Our second is Broadhurst’s binary *BBP formula* [@broad]: $$\zeta(3)=\frac{48}7\,\mathcal{S}_{1}(1,-7,-1,10,-1,-7,1,0 )+\frac{32}7\,\mathcal{S}_{3}(1,1,-1,-2,-1,1,1,0),$$ where $\mathcal{S}_{p}(a_1,a_2,\ldots, a_8):=\sum_{k=1}^\infty
{a_k}2^{-\lfloor{p(k+1)/2\rfloor}}k^{-3},$ and the coefficients $a_k$ repeat modulo 8. We refer to [@exp1 Chapter 3] for the digit properties of such formulae. Explicitly, $$\begin{aligned}
\zeta(3) &=& \frac{1}{672} \sum_{k=0}^\infty \frac{1}{2^{12 k}}
\left[\frac{2048}{(24 k + 1)^3} - \frac{11264}{(24 k + 2)^3} -
\frac{1024}{(24 k + 3)^3} + \frac{11776}{(24 k + 4)^3} \right. \\
&& \left. - \frac{512}{(24 k + 5)^3} + \frac{4096}{(24 k + 6)^3} +
\frac{256}{(24 k + 7)^3} + \frac{3456}{(24 k + 8)^3} +
\frac{128}{(24 k + 9)^3} \right. \\
&& \left. - \frac{704}{(24 k + 10)^3} - \frac{64}{(24 k + 11)^3} -
\frac{128}{(24 k + 12)^3} - \frac{32}{(24 k + 13)^3} -
\frac{176}{(24 k + 14)^3} \right. \\
&& \left. + \frac{16}{(24 k + 15)^3} + \frac{216}{(24 k + 16)^3} +
\frac{8}{(24 k + 17)^3} + \frac{64}{(24 k + 18)^3} - \frac{4}{(24 k
+ 19)^3} \right. \\
&& \left. + \frac{46}{(24 k + 20)^3} - \frac{2}{(24 k + 21)^3} -
\frac{11}{(24 k + 22)^3} + \frac{1}{(24 k + 23)^3} \right].\end{aligned}$$ It was this discovery that led Bailey and Crandall to their striking recent work on normality of BBP constants [@exp1 Chapter 4].
- Our third favourite due to Ramanujan [@exp2 p. 138] is the *hyperbolic series* approximation $$\zeta \left( 3 \right) ={\frac {7\,{\pi }^{3}}{180}}-2\,\sum _{k=1}^{\infty }{\frac {1}{{k}^{3}
\left( {e^{2\,\pi \,k}}-1 \right) }},$$ in which the ‘error’ is $\zeta(3)-7\,{\pi }^{3}/180 \approx
-0.003742745$, and which to our knowledge is the ‘closest’ one gets to writing $\zeta(3)$ as a rational multiple of $\pi^3$.
3. Often results about $\zeta(3)$ are more precisely results about $\zeta(2,1)$ or $\zeta(\overline{2},1)$, as we shall exhibit.
4. Double and multiple sums are still under-studied and under-appreciated. We should like to partially redress that.
5. One can now prove these seemingly analytic facts in an entirely finitary manner via words over alphabets, dispensing with notions of infinity and convergence;
6. Many subjects are touched upon—from computer algebra, integer relation methods, generating functions and techniques of integration to polylogarithms, hypergeometric and special functions, non-commutative rings, combinatorial algebras and Stirling numbers—so that most readers will find a proof worth showing in an undergraduate class;
7. For example, there has been an explosive recent interest in q-analogues, see §\[sect:zud\], and in quantum field theory, algebraic K-theory and knot theory, see [@exp1; @zagier].
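As promised above, here is a short added sketch that evaluates the Apéry-type binomial sum and Ramanujan’s hyperbolic series numerically (the BBP formula can be checked in the same way).

```python
# Numerical evaluation of two of the zeta(3) series above.
import math

# Apery-type central binomial sum: (5/2) sum_{k>=1} (-1)^{k+1} / (k^3 C(2k,k)).
apery = 2.5 * sum((-1)**(k + 1) / (k**3 * math.comb(2*k, k))
                  for k in range(1, 40))

# Ramanujan: zeta(3) = 7 pi^3/180 - 2 sum_{k>=1} 1/(k^3 (e^{2 pi k} - 1)).
ramanujan = 7*math.pi**3/180 - 2*sum(1.0/(k**3*(math.exp(2*math.pi*k) - 1))
                                     for k in range(1, 10))

zeta3 = sum(1.0/n**3 for n in range(1, 200001))
print(apery, ramanujan, zeta3)   # all approximately 1.2020569...
```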
For some of the broader issues relating to Euler sums, we refer the reader to the survey articles [@exp1; @BowBradSurvey; @Cartier; @Wald; @Wald2; @zagier; @Zud]. Computational issues are discussed in [@exp2; @Cran] and to an extent in [@BBBLa].
Further Notation
----------------
For positive integer $N$, denote the $N$th partial sum of the harmonic series by $H_N := \sum_{n=1}^N 1/n$. We also use $\psi = \Gamma'/\Gamma$ to denote the logarithmic derivative of the Euler gamma-function (also referred to as the *digamma* function), and recall the identity $\psi(N+1)+\gamma=H_N$, where $\gamma=0.5772156649\ldots$ is *Euler’s constant.* Where convenient, we employ the Pochhammer symbol $(a)_n=
a(a+1)\cdots(a+n-1)$ for complex $a$ and non-negative integer $n$. As usual, the *Kronecker* $\delta_{m,n}$ is 1 if $m=n$ and $0$ otherwise.
We organize our proofs by technique, although clearly this is somewhat arbitrary as many proofs fit well within more than one category. Broadly their sophistication increases as we move through the paper. In some of the later sections the proofs become more schematic. We invite readers to send additional selections for our collection, a collection which for us has all the beauty of Blake’s grain of sand[^5]:
“*To see a world in a grain of sand\
And a heaven in a wild flower,\
Hold infinity in the palm of your hand\
And eternity in an hour.*"
Telescoping and Partial Fractions {#sect:telescope}
=================================
For a quick proof of , consider $$S:= \sum_{n,k>0} \frac{1}{nk(n+k)}
= \sum_{n,k>0}\frac1{n^2} \left(\frac1k-\frac1{n+k}\right)
= \sum_{n=1}^\infty \frac1{n^2} \sum_{k=1}^n \frac1k
= \zeta(3)+\zeta(2,1).$$ On the other hand, $$S = \sum_{n,k>0} \left(\frac1n+\frac1k\right)\frac{1}{(n+k)^2}
= \sum_{n,k>0}\frac1{n(n+k)^2}+\sum_{n,k>0}\frac{1}{k(n+k)^2}
= 2\zeta(2,1),$$ by symmetry.
The above argument goes back at least to Steinberg [@Klamkin]. See also [@Klamkin3].
For , first consider $$\begin{aligned}
\zeta(\overline 2,\overline 1)+\zeta(3)
&= \sum_{n=1}^\infty \frac{(-1)^n}{n^2}\sum_{k=1}^n
\frac{(-1)^k}{k}
= \sum_{n=1}^\infty \frac{(-1)^n}{n^2}\sum_{k=1}^\infty
\bigg(\frac{(-1)^k}{k}-\frac{(-1)^{n+k}}{n+k}\bigg)\nonumber\\
&=\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\sum_{k=1}^\infty
(-1)^k\bigg(\frac{n+k-(-1)^nk}{k(n+k)}\bigg)\nonumber\\
&=\sum_{n,k>0}\frac{(-1)^{n+k}}{nk(n+k)}+\sum_{n,k>0}\frac{(-1)^{n+k}}{n^2(n+k)}
-\sum_{n,k>0}\frac{(-1)^k}{n^2(n+k)}\nonumber\\
&=\sum_{n,k>0}\bigg(\frac1n+\frac1k\bigg)\frac{(-1)^{n+k}}{(n+k)^2}
+\zeta(\overline
1,2)-\sum_{n,k>0}\frac{(-1)^n(-1)^{n+k}}{n^2(n+k)}\nonumber\\
&=\sum_{n,k>0}\frac{(-1)^{n+k}}{n(n+k)^2}+\sum_{n,k>0}\frac{(-1)^{n+k}}{k(n+k)^2}
+\zeta(\overline1,2)-\zeta(\overline1,\overline2)\nonumber\\
&=2\zeta(\overline2,1)+\zeta(\overline1,2)-\zeta(\overline1,\overline2).
\label{alt3}\end{aligned}$$
Similarly, $$\begin{aligned}
\zeta(2,\overline1)+\zeta(\overline3)
&=\sum_{n=1}^\infty\frac{1}{n^2}\sum_{k=1}^n\frac{(-1)^k}{k}
=\sum_{n=1}^\infty\frac1{n^2}\sum_{k=1}^\infty\bigg(\frac{(-1)^k}{k}-\frac{(-1)^{n+k}}{n+k}\bigg)\nonumber\\
&=\sum_{n=1}^\infty\frac1{n^2}\sum_{k=1}^\infty(-1)^k\bigg(\frac{n+k-(-1)^nk}{k(n+k)}\bigg)\nonumber\\
&=\sum_{n,k>0}\frac{(-1)^k}{nk(n+k)}+\sum_{n,k>0}\frac{(-1)^k}{n^2(n+k)}-\sum_{n,k>0}\frac{(-1)^{n+k}}{n^2(n+k)}\nonumber\\
&=\sum_{n,k>0}\bigg(\frac1n+\frac1k\bigg)\frac{(-1)^k}{(n+k)^2}+\sum_{n,k>0}\frac{(-1)^n(-1)^{n+k}}{n^2(n+k)}
-\zeta(\overline1,2)\nonumber\\
&=\sum_{n,k>0}\frac{(-1)^n(-1)^{n+k}}{n(n+k)^2}+\sum_{n,k>0}\frac{(-1)^k}{k(n+k)^2}+\zeta(\overline1,\overline2)
-\zeta(\overline1,2)\nonumber\\
&=\zeta(\overline2,\overline1)+\zeta(2,\overline1)+\zeta(\overline1,\overline2)-\zeta(\overline1,2).
\label{alt3bar}\end{aligned}$$
Adding equations (\[alt3\]) and (\[alt3bar\]) now gives $$\label{dejavu}
2\zeta(\overline2,1)=\zeta(3)+\zeta(\overline3),$$ i.e. $$8\zeta(\overline2,1)=4\sum_{n=1}^\infty\frac{1+(-1)^n}{n^3}=4\sum_{m=1}^\infty
\frac{2}{(2m)^3} = \zeta(3),$$ which is (\[2bar1\]).
Finite Series Transformations {#sect:finite}
=============================
For any positive integer $N$, we have $$\label{z21finite}
\sum_{n=1}^N \frac1{n^3}
-\sum_{n=1}^N \frac1{n^2}\sum_{k=1}^{n-1}\frac1k
=\sum_{n=1}^N \frac1{n^2}\sum_{k=1}^n\frac{1}{N-k+1}$$ by induction. Alternatively, consider $$T := \sum_{\substack{n,k=1\\k\ne n}}^N \frac{1}{nk(k-n)}
= \sum_{\substack{n,k=1\\k\ne n}}^N \bigg(\frac1n-\frac1k\bigg)
\frac{1}{(k-n)^2}
=0.$$ On the other hand, $$\begin{aligned}
T &= \sum_{\substack{n,k=1\\k\ne n}}^N \frac{1}{n^2}
\bigg(\frac{1}{k-n}-\frac1k\bigg)\\
&= \sum_{n=1}^N\frac{1}{n^2}\bigg(\sum_{k=1}^{n-1}\frac{1}{k-n}
+\sum_{k=n+1}^N\frac1{k-n}-\sum_{k=1}^N
\frac1k+\frac1n\bigg)\\
&= \sum_{n=1}^N\frac1{n^3}
-\sum_{n=1}^N\frac1{n^2}\sum_{k=1}^{n-1}\frac{1}{n-k}
+\sum_{n=1}^N\frac1{n^2}
\bigg(\sum_{k=n+1}^N\frac1{k-n}-\sum_{k=1}^N\frac1k\bigg).\end{aligned}$$ Since $T=0$, this implies that $$\sum_{n=1}^N\frac1{n^3}-\sum_{n=1}^N\frac1{n^2}\sum_{k=1}^{n-1}\frac1k
=\sum_{n=1}^N\frac1{n^2}\bigg(\sum_{k=1}^N
\frac1k-\sum_{k=1}^{N-n}\frac1k\bigg)
= \sum_{n=1}^N\frac1{n^2}\sum_{k=1}^n \frac{1}{N-k+1},$$ which is . But the right hand side satisfies $$\begin{aligned}
\frac{H_N}{N}
= \sum_{n=1}^N \frac1{n^2}\cdot\frac{n}{N}
& \le \sum_{n=1}^N \frac1{n^2} \sum_{k=1}^n \frac1{N-k+1}\\
& \le \sum_{n=1}^N \frac{1}{n^2}\cdot\frac{n}{N-n+1}
=\frac1{N+1}\sum_{n=1}^N\bigg(\frac1{n}+\frac1{N-n+1}\bigg)
=\frac{2H_N}{N+1}.\end{aligned}$$ Letting $N$ grow without bound now gives (\[z21\]), since ${\displaystyle}\lim_{N\to\infty}\frac{H_N}{N}=0$.
[$\square$]{}
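Since (\[z21finite\]) is an identity between finite rational sums, it can also be confirmed exactly for small $N$; the following added sketch does so with exact rational arithmetic.

```python
# Exact verification of the finite identity (z21finite) for N = 1,...,25.
from fractions import Fraction as F

def check(N):
    lhs = sum(F(1, n**3) for n in range(1, N + 1)) \
        - sum(F(1, n**2) * sum(F(1, k) for k in range(1, n))
              for n in range(1, N + 1))
    rhs = sum(F(1, n**2) * sum(F(1, N - k + 1) for k in range(1, n + 1))
              for n in range(1, N + 1))
    return lhs == rhs

print(all(check(N) for N in range(1, 26)))   # True
```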
Geometric Series
================
Convolution of Geometric Series {#sect:Williams}
-------------------------------
The following argument is suggested in [@Williams]. A closely related derivation, in which our explicit consideration of the error term is suppressed by taking $N$ infinite, appears in [@Bracken]. Let $2\le m\in{{\mathbf Z}}$, and consider $$\begin{aligned}
\sum_{j=1}^{m-2}\zeta(j+1)\zeta(m-j)
&= \lim_{N\to\infty} \sum_{n=1}^N\sum_{k=1}^N \sum_{j=1}^{m-2}
\frac{1}{n^{j+1}}\frac{1}{k^{m-j}}\\
&= \lim_{N\to\infty} \bigg\{ \sum_{\substack{n,k=1\\k\ne n}}^N
\bigg(\frac{1}{n^{m-1}(k-n)k}-\frac{1}{n(k-n)k^{m-1}}\bigg)
+\sum_{n=1}^N\frac{m-2}{n^{m+1}}\bigg\}\\
&=(m-2)\zeta(m+1)+2\lim_{N\to\infty}\sum_{\substack{n,k=1\\k\ne
n}}^N\frac{1}{n^{m-1}k(k-n)}.\end{aligned}$$ Thus, we find that $$\begin{aligned}
& (m-2)\zeta(m+1)-\sum_{j=1}^{m-2}\zeta(j+1)\zeta(m-j)\\
&= 2\lim_{N\to\infty}\sum_{n=1}^N\frac{1}{n^m}\sum_{\substack{k=1\\
k\ne n}}^N \bigg(\frac1k-\frac1{k-n}\bigg)\\
&=
2\lim_{N\to\infty}\sum_{n=1}^N\frac1{n^m}\bigg\{\sum_{k=1}^{n-1}
\frac1k-\frac1n+\sum_{k=1}^n\frac1{N-k+1}\bigg\}\\
&= 2\zeta(m,1)-2\zeta(m+1)+2\lim_{N\to\infty}\sum_{n=1}^N
\frac1{n^m}\sum_{k=1}^n\frac1{N-k+1},\end{aligned}$$ and hence $$2\zeta(m,1)= m\zeta(m+1)-\sum_{j=1}^{m-2}\zeta(j+1)\zeta(m-j)
-2\lim_{N\to\infty}\sum_{n=1}^N\frac1{n^m}\sum_{k=1}^n\frac1{N-k+1}.$$ But, in light of $$\sum_{n=1}^N\frac1{n^m}\sum_{k=1}^n\frac1{N-k+1}
\le \sum_{n=1}^N \frac1{n^m}\cdot\frac{n}{N-n+1}
\le\frac1{N+1}\sum_{n=1}^N\bigg(\frac1{N-n+1}+\frac1n\bigg)
=\frac{2H_N}{N+1},$$ the identity (\[EulerReduction\]) now follows.
A Sum Formula {#sect:SumFormula}
-------------
Equation (\[z21\]) is the case $n=3$ of the following result. See [@Briggs0].
\[briggs\] If $3\le n\in{{\mathbf Z}}$ then $$\zeta(n) = \sum_{j=1}^{n-2}\zeta(n-j,j).
\label{sumdepth2}$$
We discuss a generalization of the sum formula to arbitrary depth in §\[sect:SumGF\].
[**Proof.**]{} Summing the geometric series on the right hand side gives $$\begin{aligned}
\sum_{j=1}^{n-2}\sum_{h=1}^\infty\sum_{m=1}^\infty
\frac1{h^j(h+m)^{n-j}}
&= \sum_{h,m=1}^\infty
\bigg[\frac{1}{h^{n-2}m(h+m)}-\frac{1}{m(h+m)^{n-1}}\bigg]\\
&=\sum_{h=1}^\infty \frac1{h^{n-1}}\sum_{m=1}^\infty \bigg(
\frac1m-\frac1{h+m}\bigg)-\zeta(n-1,1)\\
&= \sum_{h=1}^\infty \frac1{h^{n-1}}\sum_{k=1}^{h}\frac1k
-\zeta(n-1,1)\\
&= \sum_{h=1}^\infty \frac1{h^n} +\sum_{h=1}^\infty
\frac1{h^{n-1}}\sum_{k=1}^{h-1}\frac1k-\zeta(n-1,1)\\
&= \zeta(n).\end{aligned}$$
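For $n=4$ the theorem asserts $\zeta(4)=\zeta(3,1)+\zeta(2,2)$; a quick added numerical sketch:

```python
# Numerical check of the sum formula for n = 4: zeta(4) = zeta(3,1) + zeta(2,2).
N = 10**6
z31 = z22 = 0.0
H1 = H2 = 0.0        # H1 = sum_{m<k} 1/m,  H2 = sum_{m<k} 1/m^2
for k in range(1, N + 1):
    z31 += H1 / k**3
    z22 += H2 / k**2
    H1 += 1.0 / k
    H2 += 1.0 / k**2

zeta4 = sum(1.0/k**4 for k in range(1, N + 1))
print(zeta4, z31 + z22)
# both approximately 1.08232; the exact value is pi^4/90 = 1.0823232...
```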
A $q$-Analogue {#sect:zud}
--------------
The following argument is based on an idea of Zudilin [@Zud]. We begin with the finite geometric series identity $$\frac{uv}{(1-u)(1-uv)^s}+\frac{uv^2}{(1-v)(1-uv)^s}
= \frac{uv}{(1-u)(1-v)^s} -
\sum_{j=1}^{s-1}\frac{uv^2}{(1-v)^{j+1}(1-uv)^{s-j}},$$ valid for all positive integers $s$ and real $u$, $v$ with $u\ne
1$, $uv\ne 1$. We now assume $s>1$, $q$ is real and $0<q<1$. Put $u=q^m$, $v=q^n$ and sum over all positive integers $m$ and $n$. Thus, $$\begin{aligned}
&\sum_{m,n>0} \frac{q^{m+n}}{(1-q^m)(1-q^{m+n})^s}
+\sum_{m,n>0}\frac{q^{m+2n}}{(1-q^n)(1-q^{m+n})^s}\\
&= \sum_{m,n>0}\frac{q^{m+n}}{(1-q^m)(1-q^n)^s}
-\sum_{m,n>0}\frac{q^{m+2n}}{(1-q^n)^s(1-q^{m+n})}\\
&\qquad -\sum_{j=1}^{s-2}\sum_{m,n>0}
\frac{q^{m+2n}}{(1-q^n)^{j+1}(1-q^{m+n})^{s-j}}\\
&= \sum_{m,n>0}\frac{q^n}{(1-q^n)^s}\bigg[\frac{q^m}{1-q^m}-
\frac{q^{m+n}}{1-q^{m+n}}\bigg]-\sum_{j=1}^{s-2}\sum_{m,n>0}
\frac{q^{m+2n}}{(1-q^n)^{j+1}(1-q^{m+n})^{s-j}}\\
&= \sum_{n>0}\frac{q^n}{(1-q^n)^s}\sum_{m=1}^n\frac{q^m}{1-q^m}
-\sum_{j=1}^{s-2}\sum_{m,n>0}
\frac{q^{m+2n}}{(1-q^n)^{j+1}(1-q^{m+n})^{s-j}}\\
&=\sum_{n>0}\frac{q^{2n}}{(1-q^n)^{s+1}}+\sum_{n>m>0}
\frac{q^{n+m}}{(1-q^n)^s(1-q^m)}-\sum_{j=1}^{s-2}\sum_{m,n>0}
\frac{q^{m+2n}}{(1-q^n)^{j+1}(1-q^{m+n})^{s-j}}\\\end{aligned}$$ Cancelling the second double sum on the left with the corresponding double sum on the right and replacing $m+n$ by $k$ in the remaining sums now yields $$\sum_{k>m>0}\frac{q^{k}}{(1-q^{k})^s(1-q^m)}
= \sum_{n>0}\frac{q^{2n}}{(1-q^n)^{s+1}}-
\sum_{j=1}^{s-2}\sum_{k>m>0}
\frac{q^{k+m}}{(1-q^m)^{j+1}(1-q^{k})^{s-j}},$$ or equivalently, that $$\sum_{k>0}\frac{q^{2k}}{(1-q^k)^{s+1}}
= \sum_{k>m>0}\frac{q^k}{(1-q^k)^s(1-q^m)}
+\sum_{j=1}^{s-2}\sum_{k>m>0}\frac{q^{k+m}}{(1-q^k)^{s-j}(1-q^m)^{j+1}}.
\label{general}$$ Multiplying through by $(1-q)^{s+1}$ and letting $q\to 1$ gives $$\zeta(s+1)=\zeta(s,1)+\sum_{j=1}^{s-2} \zeta(s-j,j+1),$$ which is just a restatement of (\[sumdepth2\]). Taking $s=2$ gives (\[z21\]) again.
As in [@DBq], define the $q$-analog of a non-negative integer $n$ by $$[n]_q := \sum_{k=0}^{n-1} q^k = \frac{1-q^n}{1-q},$$ and the multiple $q$-zeta function $$\zeta[s_1,\dots,s_m] := \sum_{k_1>\cdots >k_m>0}
\; \prod_{j=1}^m \frac{q^{(s_j-1)k_j}}{[k_j]_q^{s_j}},
\label{qMZVdef}$$ where $s_1,s_2,\dots,s_m$ are real numbers with $s_1>1$ and $s_j\ge 1$ for $2\le j\le m$. Then multiplying (\[general\]) by $(1-q)^{s+1}$ and setting $s=2$ gives $\zeta[2,1]=\zeta[3]$, which is a $q$-analog of (\[z21\]). That is, the latter may be obtained from the former by letting $q\to 1-$. On the other hand, $s=3$ in (\[general\]) gives $$\zeta[4] +(1-q)\zeta[3]=\zeta[3,1]+(1-q)\zeta[2,1]+\zeta[2,2],$$ which, in light of $\zeta[2,1]=\zeta[3]$, implies $\zeta[3,1] =
\zeta[4]-\zeta[2,2]$. By Theorem 1 of [@DBq], we know that $\zeta[2,2]$ reduces to depth 1 multiple $q$-zeta values. Indeed, by the $q$-stuffle multiplication rule [@DBq], $
\zeta[2]\zeta[2] = 2\zeta[2,2] + \zeta[4] +(1-q)\zeta[3].
$ Thus, $$\zeta[3,1] = \zeta[4]-\zeta[2,2]=
\tfrac32\zeta[4]-\tfrac12\left(\zeta[2]\right)^2+\tfrac12(1-q)\zeta[3],$$ which is a $q$-analog of the evaluation [@BBB; @BBBLa; @BBBLc; @BowBrad1; @BowBradSurvey; @BowBrad3; @BowBradRyoo] $$\zeta(3,1) = \frac{\pi^4}{360}.$$ Additional material concerning $q$-analogs of multiple harmonic sums and multiple zeta values can be found in [@DBq; @DBqKarl; @DBqSum; @DBqDecomp].
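The $q$-identity $\zeta[2,1]=\zeta[3]$ can also be tested directly from the definition (\[qMZVdef\]); the following added sketch does so at $q=1/2$.

```python
# Numerical check of zeta[2,1] = zeta[3] (definition (qMZVdef)) at q = 1/2.
q = 0.5
K = 60                                   # terms decay like q^k, so K = 60 is ample
qint = lambda n: (1 - q**n) / (1 - q)    # the q-integer [n]_q

z3q = sum(q**(2*k) / qint(k)**3 for k in range(1, K + 1))
z21q = sum(q**k / qint(k)**2 * sum(1.0 / qint(m) for m in range(1, k))
           for k in range(2, K + 1))
print(z3q, z21q)   # both approximately 0.27220
```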
Integral Representations {#sect:integrals}
========================
Single Integrals I {#sect:single1}
------------------
We use the fact that $$\label{naive}
\int_0^1 u^{k-1}(-\log u)\,du = \frac1{k^2},\qquad k>0.$$ Thus $$\begin{aligned}
\label{logs}
\sum_{k>n>0}\frac{1}{k^2n}
&= \sum_{n=1}^\infty \frac1n\sum_{k>n}\int_0^1 u^{k-1}(-\log u)\,du
\nonumber\\
&= \sum_{n=1}^\infty\frac1n \int_0^1 (-\log u)\sum_{k>n} u^{k-1}\,du
\nonumber\\
&= \sum_{n=1}^\infty\frac1n \int_0^1 (-\log u)
\frac{u^n}{1-u}\,du \nonumber\\
&= -\int_0^1 \frac{\log u}{1-u}\sum_{n=1}^\infty \frac{u^n}{n}\,du
\nonumber\\
&= \int_0^1 (-\log u)(1-u)^{-1}\log(1-u)^{-1}\,du.\end{aligned}$$ The interchanges of summation and integration are in each case justified by Lebesgue’s monotone convergence theorem. After making the change of variable $t=1-u$, we obtain $$\label{morelogs}
\sum_{k>n>0}\frac{1}{k^2n}
= \int_0^1 \log(1-t)^{-1} (-\log t)\,\frac{dt}{t}
= \int_0^1 (-\log t) \sum_{n=1}^\infty \frac{t^{n-1}}{n}\,dt.$$ Again, since all terms of the series are positive, Lebesgue’s monotone convergence theorem permits us to interchange the order of summation and integration. Thus, invoking (\[naive\]) again, we obtain $$\sum_{k>n>0}\frac{1}{k^2n}
= \sum_{n=1}^\infty \frac1n\int_0^1 (-\log t)\, t^{n-1}\,dt
= \sum_{n=1}^\infty \frac1{n^3},$$ which is (\[z21\]).
Single Integrals II {#sect:single2}
-------------------
The Laplace transform $$\int_0^1 x^{r-1} (-\log x)^\sigma\,dx
= \int_0^{\infty} e^{-ru}\, u^\sigma\,du
= \frac{\Gamma(\sigma+1)}{r^{\sigma+1}},\qquad r>0,\quad \sigma>-1,
\label{Laplace}$$ generalizes and yields the representation $$\zeta(m+1) =
\frac{1}{m!}\sum_{r=1}^\infty\frac{\Gamma(m+1)}{r^{m+1}}
= \frac{1}{m!}\sum_{r=1}^\infty \int_0^1 x^{r-1}(-\log
x)^m \,dx
= \frac{(-1)^m}{m!}\int_0^1 \frac{\log^m x}{1-x}\,dx.$$ The interchange of summation and integration is valid if $m>0$. The change of variable $x\mapsto 1-x$ now yields $$\zeta(m+1) = \frac{(-1)^m}{m!}\int_0^1
\log^m(1-x)\,\frac{dx}{x},
\qquad 1\le m\in{{\mathbf Z}}.
\label{ZetaStirling}$$
In [@Farnum], equation (\[ZetaStirling\]), in conjunction with clever use of change of variable and integration by parts, is used to prove the identity $$k!\zeta(k+2) = \sum_{n_1=1}^\infty\, \sum_{n_2=1}^\infty \cdots
\sum_{n_k=1}^\infty \frac{1}{n_1n_2\cdots n_k}\;
\sum_{p=1+n_1+n_2+\cdots+n_k}\frac1{p^2},
\qquad 0\le k\in{{\mathbf Z}}.
\label{Tissier}$$ The case $k=1$ of (\[Tissier\]) is precisely (\[z21\]). We give here a slightly simpler proof of (\[Tissier\]), dispensing with the integration by parts.
From (\[Laplace\]), $$\begin{aligned}
k!\zeta(k+2) &= \sum_{r=1}^\infty
\frac1{r}\cdot\frac{\Gamma(k+1)}{r^{k+1}}
= \sum_{r=1}^\infty \frac1r\int_0^1 x^{r-1}(-\log x)^k\,dx\\
&= \int_0^1 (-\log x)^k \log(1-x)^{-1}\,\frac{dx}{x}\\
&= \int_0^1 \log^k(1-x)^{-1} (-\log x)\frac{dx}{1-x}\\
&=\sum_{n_1=1}^\infty\, \sum_{n_2=1}^\infty \cdots\sum_{n_k=1}^\infty \frac{1}{n_1n_2\cdots n_k}\;
\int_0^1 \frac{x^{n_1+n_2+\cdots+n_k}}{1-x}(-\log x)\,dx\\
&= \sum_{n_1=1}^\infty\, \sum_{n_2=1}^\infty \cdots\sum_{n_k=1}^\infty \frac{1}{n_1n_2\cdots n_k}\;
\sum_{p>n_1+n_2+\cdots+n_k} \int_0^1 x^{p-1}(-\log x)\,dx\\
&= \sum_{n_1=1}^\infty\, \sum_{n_2=1}^\infty \cdots\sum_{n_k=1}^\infty \frac{1}{n_1n_2\cdots n_k}\;
\sum_{p>n_1+n_2+\cdots+n_k}\frac1{p^2}.\end{aligned}$$
Double Integrals I {#sect:double1}
------------------
Write $$\begin{aligned}
\zeta(2,1) =\sum_{k,m>0}\frac{1}{k(m+k)^2}
&= \int_0^1\int_0^1 \sum_{k>0}\frac{(xy)^k}{k}\sum_{m>0}
(xy)^{m-1}\,dx\,dy\\
&= -\int_0^1\int_0^1 \frac{\log(1-xy)}{1-xy}\,dx\,dy.\end{aligned}$$ Now make the change of variable $u=xy$, $v=x/y$ with Jacobian $1/(2v)$, obtaining $$\begin{aligned}
\zeta(2,1) = -\frac12\int_0^1
\frac{\log(1-u)}{1-u}\int_u^{1/u}\frac{dv}{v}\,du
= \int_0^1 \frac{(\log u)\log(1-u)}{1-u}\,du,\end{aligned}$$ which is . Now continue as in §\[sect:single1\].
Double Integrals II {#sect:double2}
-------------------
The following is reconstructed from a phone conversation with Krishna Alladi. See also [@Beukers]. Let ${\varepsilon}>0$. By expanding the integrand as a geometric series, one sees that $$\sum_{n=1}^\infty \frac1{(n+{\varepsilon})^2} = \int_0^1\int_0^1
\frac{(xy)^{{\varepsilon}}}{1-xy}\,dx\,dy.$$ Differentiating with respect to ${\varepsilon}$ and then letting ${\varepsilon}=0$ gives $$\zeta(3) = -\frac12\int_0^1\int_0^1 \frac{\log(xy)}{1-xy}\,dx\,dy
= -\frac12\int_0^1\int_0^1 \frac{\log x+\log y}{1-xy}\,dx\,dy
= -\int_0^1\int_0^1\frac{\log x}{1-xy}\,dx\,dy$$ by symmetry. Now integrate with respect to $y$ to get $$\label{parts}
\zeta(3) = \int_0^1 (\log x)\log(1-x)\frac{dx}{x}.$$ Comparing with completes the proof of .
Integration by Parts {#sect:parts}
--------------------
Start with and integrate by parts, obtaining $$2\zeta(3) = \int_0^1\frac{\log^2 x}{1-x}\,dx
= \int_0^1 \log^2(1-x)\frac{dx}{x}
= \sum_{n,k>0} \int_0^1 \frac{x^{n+k-1}}{nk}\,dx
= \sum_{n,k>0}\frac{1}{nk(n+k)}.$$ Now see §\[sect:telescope\].
Triple Integrals I {#sect:iterint1}
------------------
This time, instead of we use the elementary identity $$\frac{1}{k^2n} = \int_0^1 y_1^{-1}\int_0^{y_1}y_2^{k-n-1}
\int_0^{y_2} y_3^{n-1}\,dy_3\,dy_2\,dy_1,\qquad k>n>0.$$ This yields $$\label{irep}
\sum_{k>n>0}\frac{1}{k^2n}
= \int_0^1 y_1^{-1}\int_0^{y_1}(1-y_2)^{-1}\int_0^{y_2}(1-y_3)^{-1}
\,dy_3\,dy_2\,dy_1.$$ Now make the change of variable $y_i=1-x_i$ for $i=1,2,3$ to obtain $$\begin{aligned}
\sum_{k>n>0}\frac{1}{k^2n}
&= \int_0^1 (1-x_1)^{-1}\int_{x_1}^1 x_2^{-1}\int_{x_2}^1 x_3^{-1}
\,dx_3\,dx_2\,dx_1\\
&= \int_0^1 x_3^{-1}\int_0^{x_3}x_2^{-1}\int_0^{x_2}(1-x_1)^{-1}
\,dx_1\,dx_2\,dx_3.\end{aligned}$$ After expanding $(1-x_1)^{-1}$ into a geometric series and interchanging the order of summation and integration, one arrives at $$\sum_{k>n>0}\frac{1}{k^2n}
= \sum_{n=1}^\infty \int_0^1 x_3^{-1}\int_0^{x_3}x_2^{-1}\int_0^{x_2}
x_1^{n-1}\,dx_1\,dx_2\,dx_3
= \sum_{n=1}^\infty \frac1{n^3},$$ as required.
More generally [@BBB; @BBBLa; @BBBLc; @BowBradSurvey; @BowBrad3; @CK], $$\zeta(s_1,\dots,s_k)
= \sum_{n_1>\cdots>n_k>0}\;\prod_{j=1}^k n_j^{-s_j}
=\int \prod_{j=1}^k
\bigg(\prod_{r=1}^{s_j-1} \frac{dt_r^{(j)}}{t_r^{(j)}}\bigg)
\frac{dt_{s_j}^{(j)}}{1-t_{s_j}^{(j)}},
\label{iterint1}$$ where the integral is over the simplex $$1>t_1^{(1)}>\cdots>t_{s_1}^{(1)}>\cdots>t_1^{(k)}>\cdots>t_{s_k}^{(k)}>0,$$ and is abbreviated by $$\int_0^1 \prod_{j=1}^k a^{s_j-1}b,
\qquad a=\frac{dt}{t},\qquad b=\frac{dt}{1-t}.
\label{shortiterint1}$$ The change of variable $t\mapsto 1-t$ at each level of integration switches the differential forms $a$ and $b$, thus yielding the duality formula [@BBB] [@CK p. 483] (conjectured in [@Hoff92]) $$\label{duality}
\zeta(s_1+2,{\{1\}}^{r_1},\dots,s_n+2,{\{1\}}^{r_n})
= \zeta(r_n+2,{\{1\}}^{s_n},\dots,r_1+2,{\{1\}}^{s_1}),$$ which is valid for all nonnegative integers $s_1,r_1,\ldots,s_n,r_n$. The case $s_1=0$, $r_1=1$ of is . More generally, can be restated as $$\int_0^1 (ab^2)^n = \int_0^1 (a^2b)^n$$ and thus is recovered by taking each $s_j=0$ and each $r_j=1$ in . For further generalizations and extensions of duality, see [@BBBLa; @DBq; @DBqKarl].
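As an illustration (ours, not the text's), the instance $\zeta(2,1,1)=\zeta(4)$ of the duality formula can be checked numerically by rewriting the triple sum as $\sum_n(\zeta(2)-H_n^{(2)})H_{n-1}/n$; the truncation $N$ is arbitrary and the neglected tail is of order $\log N/N$.

```python
# Check the duality instance zeta(2,1,1) = zeta(4).
from math import fsum, pi

N = 200_000
zeta2, zeta4 = pi**2 / 6, pi**4 / 90

terms, H, H2 = [], 0.0, 0.0          # H = H_{n-1}, H2 = H_n^{(2)}
for n in range(1, N + 1):
    H2 += 1.0 / (n * n)
    terms.append((zeta2 - H2) * H / n)   # (sum_{m>n} m^-2) * H_{n-1} / n
    H += 1.0 / n

print(fsum(terms), zeta4)
```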
For alternations, we require in addition the differential form $c:=-dt/(1+t)$ with which we may form the generating function $$\sum_{n=1}^\infty z^{3n} \zeta(\{\overline2,1\}^n)
= \sum_{n=0}^\infty \bigg\{ z^{6n+3}\int_0^1
(ac^2ab^2)^n ac^2 + z^{6n+6} \int_0^1
(ac^2ab^2)^{n+1}\bigg\}.$$ A lengthy calculation verifies that the only changes of variable that preserve the unit interval and send the non-commutative polynomial ring ${{\mathbf Q}}\langle a,b\rangle$ into ${{\mathbf Q}}\langle
a,b,c\rangle$ are $$\begin{aligned}
{2}
S(a,b) &= S(a,b),\qquad\qquad & t &\mapsto t,\label{identity}\\
S(a,b) &= R(b,a),\qquad\qquad & t &\mapsto 1-t,\label{tau}\\
S(a,b) &= S(2a,b+c),\qquad\qquad & t &\mapsto t^2,\label{sumsigns}\\
S(a,b) &= S(a+c,b-c),\qquad\qquad & t &\mapsto
\frac{2t}{1+t},\label{Landen}\\
S(a,b) &= S(a+2c,2b-2c),\qquad\qquad & t &\mapsto
\frac{4t}{(1+t)^2},\label{quadLanden}\end{aligned}$$ and compositions thereof, such as $t\mapsto
1-2t/(1+t)=(1-t)/(1+t)$, etc. In –, $S(a,b)$ denotes a non-commutative word on the alphabet $\{a,b\}$ and $R(b,a)$ denotes the word formed by switching $a$ and $b$ and then reversing the order of the letters.
Now view $a$, $b$ and $c$ as indeterminates. In light of the polynomial *identity* $$ab^2-8ac^2 = 2[ab^2-2a(b+c)^2] + 8[ab^2-(a+c)(b-c)^2]
+ [(a+2c)(2b-2c)^2-ab^2]$$ in the non-commutative ring ${{\mathbf Z}}\langle a,b,c\rangle$ and the transformations , and above, each bracketed term vanishes when we make the identifications $a=dt/t$, $b=dt/(1-t)$, $c=-dt/(1+t)$ and perform the requisite iterated integrations. Thus, $$\zeta(2,1)-8\zeta(\overline2,1)=\int_0^1 ab^2-8\int_0^1 ac^2 =0,$$ which in light of proves .
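This conclusion is also easy to confirm numerically. With the sign convention under which $\zeta(\overline2,1)=\sum_{m>n>0}(-1)^m/(m^2n)$ (so that the claimed value is $\zeta(3)/8$), the truncated sum below, an illustrative check and not part of the proof, shows $8\,\zeta(\overline2,1)$ agreeing with $\zeta(3)$; the cutoff $N$ is arbitrary.

```python
# Check 8 * zeta(bar2,1) = zeta(3), with zeta(bar2,1) = sum_{m>n>0} (-1)^m / (m^2 n).
from math import fsum

N = 100_000
zeta3 = 1.2020569031595943

terms, H, sign = [], 0.0, -1      # H = H_{m-1}, sign = (-1)^m
for m in range(1, N + 1):
    terms.append(sign * H / (m * m))
    H += 1.0 / m
    sign = -sign

print(8 * fsum(terms), zeta3)
```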
Triple Integrals II {#sect:iterint2}
-------------------
First, note that by expanding the integrands in geometric series and integrating term by term, $$\zeta(2,1) = 8\int_0^1 \frac{dx}{x}\int_0^x
\frac{y\,dy}{1-y^2}\int_0^y\frac{z\,dz}{1-z^2}.$$ Now make the change of variable $$\frac{x\,dx}{1-x^2}=\frac{du}{1+u},
\qquad
\frac{y\,dy}{1-y^2}=\frac{dv}{1+v},
\qquad
\frac{z\,dz}{1-z^2}=\frac{dw}{1+w}$$ to obtain the equivalent integral $$\zeta(2,1) = 8\int_0^\infty
\bigg(\frac{du}{2u}+\frac{du}{2(2+u)}
-\frac{du}{1+u}\bigg)\int_0^u\frac{dv}{1+v}
\int_0^v\frac{dw}{1+w}.$$ The two inner integrals can be directly performed, leading to $$\zeta(2,1) = 4\int_0^\infty\frac{\log^2(u+1)}{u(u+1)(u+2)}\,{du}.$$ Finally, make the substitution $u+1=1/\sqrt{1-x}$ to obtain $$\zeta(2,1) = \frac12\int_0^1 \frac{\log^2(1-x)}{x}\,dx
= \zeta(3),$$ by .
Complex Line Integrals I {#sect:Perron}
------------------------
Here we apply the Mellin inversion formula [@Apostol p.243], [@Tenenbaum pp. 130–132 ] $$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} y^z\,\frac{dz}{z} =
\ \begin{cases} 1,\quad y>1 \\ 0, \quad y<1\\ \tfrac12,\quad
y=1\end{cases}$$ which is valid for fixed $c>0$. It follows that if $c>0$ and $s-1>c>1-t$ then the Perron-type formula $$\begin{aligned}
\zeta(s,t) +\frac12\zeta(s+t)
&= \sum_{n=1}^\infty n^{-s} \sum_{k=1}^\infty k^{-t}
\frac1{2\pi i} \int_{c-i\infty}^{c+i\infty}
\bigg(\frac{n}{k}\bigg)^z\frac{dz}{z}\nonumber\\
&=\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}
\zeta(s-z)\zeta(t+z)\,\frac{dz}{z}
\label{Perron}\end{aligned}$$ is valid. (Interchanging the order of summation and integration is permissible by absolute convergence.) Although we have not yet found a way to exploit in proving identities such as , we note that by integrating around the rectangular contour with corners $(\pm c\pm iM)$ and then letting $M\to+\infty$, one can readily establish the stuffle [@BBBLa; @BowBradSurvey; @DBPrtn; @DBq] formula in the form $$\zeta(s,t)+\frac12\zeta(s+t)+\zeta(t,s)+\frac12\zeta(t+s)
= \zeta(s)\zeta(t), \qquad s,t>1+c.$$ The right hand side arises as the residue contribution of the integrand at $z=0$. One can also use to establish $$\sum_{s=2}^\infty\left[\zeta(s,1)+\tfrac12\zeta(s+1)\right]x^{s-1}
= \sum_{n>m>0}\frac{x}{mn(n-x)} +\frac12\sum_{n=1}^\infty
\frac{x}{n^2(n-x)},$$ but this is easy to prove directly.
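The stuffle relation obtained from the contour argument is easy to test numerically; for instance, with the (arbitrary) choice $s=3$, $t=2$ one expects $\zeta(3,2)+\zeta(2,3)+\zeta(5)=\zeta(3)\zeta(2)$. The helper below sums the inner tail in closed form; the truncation $N$ is arbitrary.

```python
# Check zeta(3,2) + zeta(2,3) + zeta(5) = zeta(3) * zeta(2).
from math import fsum, pi

zeta2 = pi**2 / 6
zeta3 = 1.2020569031595943
zeta5 = 1.0369277551433699

def double_zeta(a, b, zeta_a, N=20_000):
    """zeta(a,b) = sum_{m>n>0} m^-a n^-b = sum_n n^-b (zeta(a) - sum_{m<=n} m^-a)."""
    partial, terms = 0.0, []
    for n in range(1, N + 1):
        partial += 1.0 / n**a
        terms.append((zeta_a - partial) / n**b)
    return fsum(terms)

lhs = double_zeta(3, 2, zeta3) + double_zeta(2, 3, zeta2) + zeta5
print(lhs, zeta3 * zeta2)
```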
Complex Line Integrals II {#sect:Dirichlet}
-------------------------
We let $\lambda(s) :=\sum_{n>0} \lambda_n\, n^{-s}$ represent a formal Dirichlet series, with real coefficients $\lambda_n$ and we set $s:=\sigma+i\,\tau$ with $\sigma=\Re(s)>0$, and consider the following integral: $$\begin{aligned}
\label{int1}
\iota_\lambda(\sigma):=\int_{0}^\infty
\left|\frac{\lambda(s)}s\right|^2\,d\tau = \frac 12\,
\int_{-\infty}^\infty \left|\frac{\lambda(s)}s\right|^2\,d\tau,\end{aligned}$$ as a function of $\lambda$. We begin with a useful variant of the Mellin inversion formula, namely $$\begin{aligned}
\label{intc}\int _{-\infty}^{\infty }\!{\frac
{\cos \left( at \right) }{{t}^{2}+{ u}^{2}}}{dt}={\frac {\pi
}{u}}\,e^{-au},\end{aligned}$$ for $u, a>0$, as follows by contour integration, from a computer algebra system, or otherwise. This leads to
\[dir-thm\] [(Theorem 1 of [@JB]).]{} [*For $\lambda(s)=\sum_{n=1}^\infty \,\lambda_n\,n^{-s}$ and $s=\sigma+i\,\tau$ with fixed $\sigma=\Re(s)>0$ such that the Dirichlet series is absolutely convergent it is true that $$\begin{aligned}
\label{ans1}
\iota_\lambda(\sigma) = \int_{0}^\infty
\left|\frac{\lambda(s)}s\right|^2\,d\tau
=\frac{\pi}{2\sigma}\,\sum_{n=1}^\infty\frac{\Lambda_n^2-
\Lambda_{n-1}^2}{n^{2\sigma}},\end{aligned}$$ where $\Lambda_n:=\sum_{k=1}^{n}\lambda_k$ and $\Lambda_0:=0$.*]{}
*More generally, for given absolutely convergent Dirichlet series $\alpha(s):=\sum_{n=1}^\infty\,{\alpha_n}\,{n^{-s}}$ and $\beta(s):=\sum_{n=1}^\infty\,{\beta_n}\,{n^{-s}}$ $$\begin{aligned}
\label{ans-gen} \frac12\int_{-\infty}^\infty
\frac{\alpha(s)\,\overline{\beta}(s)}{\sigma^2+\tau^2}\,d\tau =
\frac{\pi}{2\sigma}\,\sum_{n=1}^\infty
\frac{A_n\,\overline{B_n}-A_{n-1}\,\overline{B_{n-1}}}{n^{2\sigma}},\label{ans3}\end{aligned}$$ in which $A_n=\sum_{k=1}^{n}\alpha_k$ and $B_n=\sum_{k=1}^{n}\beta_k.$*
Note that the righthand side of is always a generalized Euler sum.
For the Riemann zeta function, and for $\sigma>1$, Theorem \[dir-thm\] applies and yields $$\frac{\sigma}{\pi}\,\iota_\zeta(\sigma)=
\zeta(2\sigma-1)-\frac12\,\zeta(2\sigma),$$ as $\lambda_n=1$ and $\Lambda_n=n$, so that $\Lambda_n^2-\Lambda_{n-1}^2=2n-1$. By contrast it is known that on the critical line $$\frac{1/2}{\pi}\,\iota_\zeta\left(\frac 12\right)=\log(\sqrt{2\,\pi})-\frac 12\,\gamma.$$ There are similar formulae for $s \mapsto \zeta(s-k)$ with $k$ integral. For instance, applying the result in (\[ans1\]) with $\zeta_1:=t \mapsto \zeta(t+1)$ yields $$\frac{1}{\pi}\,\int_{0}^\infty
\frac{|\zeta(3/2+i\tau)|^2}{1/4+\tau^2}\,d\tau =
\frac{1}{\pi}\,\iota_{\zeta_1}\left(\frac 1 2\right)=
2\, \zeta(2,1)+\zeta(3)=3\,\zeta(3),$$ on using . For the *alternating zeta function*, $\alpha:=s\mapsto(1-2^{1-s})\zeta(s)$, the same approach via (\[ans-gen\]) produces $$\frac{1}{\pi}\,\int_{0}^\infty
\frac{\alpha(3/2+i\tau)\,\overline{\alpha(3/2+i\tau)}}{1/4+\tau^2}\,d\tau
=
2\,\zeta(\overline{2},\overline{1})+\zeta(3)=3\, \zeta(2)\,\log(2)-\frac 94 \zeta(3),$$ and $$\frac{1}{2\pi}\,\int_{-\infty}^\infty
\frac{\alpha(3/2+i\tau)\,\overline{\zeta(3/2+i\tau)}}{1/4+\tau^2}\,d\tau
=
\zeta(\overline{2},1)+\zeta(2,\overline{1})+\alpha(3)=\frac 98\, \zeta(2)\,\log(2)-\frac {3}4 \zeta(3),$$ since as we have seen repeatedly $\zeta(\overline{2},1)
=\zeta(3)/8$; while $\zeta(2,\overline{1})=\zeta(3)-3/2\,\zeta(2)\log(2)$ and $\zeta(\overline{2},\overline{1})=
3/2\,\zeta(2)\log(2)-13/8\,\zeta(3),$ (e.g., [@BZB]).
As in the previous subsection we have not been able to directly obtain or even , but we have connected them to quite difficult line integrals.
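The first evaluation above can at least be checked numerically. The sketch below uses an arbitrary cutoff $T$ and a crude tail estimate based on the mean value $\frac1T\int_0^T|\zeta(3/2+i\tau)|^2\,d\tau\to\zeta(3)$; it reproduces $3\zeta(3)$ to a few digits only, and it is slow, since mpmath must evaluate $\zeta$ at many complex points.

```python
# Check (1/pi) * int_0^inf |zeta(3/2 + i*tau)|^2 / (1/4 + tau^2) d tau = 3*zeta(3).
from mpmath import mp, mpf, zeta, quad, pi

mp.dps = 20
f = lambda t: abs(zeta(mpf('1.5') + 1j * t))**2 / (mpf('0.25') + t * t)
T = 200
head = quad(f, [0, 5, 20, 80, T])   # split the range to help the quadrature
tail = zeta(3) / T                  # |zeta(3/2+it)|^2 averages to zeta(3) for large t
print((head + tail) / pi, 3 * zeta(3))
```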
Contour Integrals and Residues {#sect:residue}
------------------------------
Following [@State], let $\mathscr{C}_n$ $(n\in{{\mathbf Z}}^{+})$ be the square contour with vertices $(\pm 1\pm i)(n+1/2)$. Using the asymptotic expansion $$\psi(z) \sim \log z -
\frac1{2z}-\sum_{r=1}^\infty\frac{B_{2r}}{2rz^{2r}},
\qquad |\arg z|<\pi$$ in terms of the Bernoulli numbers $$\frac{t}{1-e^{-t}}=1+\frac{t}{2}+\sum_{r=1}^\infty
\frac{B_{2r}}{(2r)!}t^{2r},\qquad |t|<2\pi$$ and the identity $$\psi(z)=\psi(-z) - \frac1{z}-\pi\cot \pi z,$$ we can show that for each integer $k\ge 2$, $$\lim_{n\to\infty} \int_{\mathscr{C}_n} z^{-k}\,\psi^2(-z)\,dz =
0.$$ Then by the residue theorem, we obtain
\[thm:State\] For every integer $k\ge
2$, $$2\sum_{n=1}^\infty n^{-k}\,\psi(n) = k\zeta(k+1)-2\gamma
\zeta(k)-\sum_{j=2}^{k-1}\zeta(j)\zeta(k-j+1),$$ where $\gamma=0.577215664\dots$ is Euler’s constant.
In light of the identity $$\psi(n)+\gamma= H_{n-1}=\sum_{k=1}^{n-1}\frac1k,\qquad
n\in{{\mathbf Z}}^{+},$$ Theorem \[thm:State\] is equivalent to . The case $k=2$ thus gives .
Flajolet and Salvy [@Flaj] developed the residue approach more systematically, and applied it to a number of other Euler sum identities in addition to .
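Theorem \[thm:State\] is also easily checked numerically. The case $k=3$ reads $2\sum_{n\ge1}\psi(n)/n^3=3\zeta(4)-2\gamma\zeta(3)-\zeta(2)^2$, and using $\psi(n)=H_{n-1}-\gamma$ the check needs only a running harmonic number; the truncation $N$ below is arbitrary and the neglected tail is of order $\log N/N^2$.

```python
# Check 2 * sum_n psi(n)/n^3 = 3*zeta(4) - 2*gamma*zeta(3) - zeta(2)^2  (k = 3).
from math import fsum, pi

gamma = 0.5772156649015329
zeta2, zeta3, zeta4 = pi**2 / 6, 1.2020569031595943, pi**4 / 90

N, H, terms = 100_000, 0.0, []        # H = H_{n-1}
for n in range(1, N + 1):
    terms.append((H - gamma) / n**3)  # psi(n) = H_{n-1} - gamma
    H += 1.0 / n

print(2 * fsum(terms), 3 * zeta4 - 2 * gamma * zeta3 - zeta2**2)
```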
Witten Zeta-functions {#sect:witten}
=====================
We recall that for $r,s>1/2$: $$\mathcal{W}(r,s,t):= \sum_{n=1}^\infty\sum_{m=1}^\infty \frac{1}{n^r\,m^s\,(n+m)^t}$$ is a *Witten $\zeta$-function*, [@zagier; @moll; @CranBuh]. We refer to [@zagier] for a description of the uses of more general Witten $\zeta$-functions. Ours are also called *Tornheim double sums*, [@moll]. There is a simple algebraic relation $$\begin{aligned}
\label{w-alg}\mathcal{W}(r,s,t)=\mathcal{W}(r-1,s,t+1)+\mathcal{W}(r,s-1,t+1).\end{aligned}$$ This is based on writing $$\frac{m+n}{(m+n)^{t+1}}=\frac{m}{(m+n)^{t+1}}+\frac{n}{(m+n)^{t+1}}.$$ Also $$\begin{aligned}
\label{w-alg1}\mathcal{W}(r,s,t) =
\mathcal{W}(s,r,t),\end{aligned}$$ and $$\begin{aligned}
\label{w-alg2}\mathcal{W}(r,s,0)=\zeta(r)\,\zeta(s)\quad
\mbox{while} \quad \mathcal{W}(r,0,t) =\zeta(t,r).\end{aligned}$$
Hence, $\mathcal{W}(s,s,t)=2\,\mathcal{W}(s,s-1,t+1)$ and so $$\mathcal{W}(1,1,1)=2\,\mathcal{W}(1,0,2)=2\,\zeta(2,1)=2\,\zeta(3).$$ Note the analogue to (\[w-alg\]), viz. $\zeta(s,t)+\zeta(t,s)=\zeta(s)\,\zeta(t)-\zeta(s+t)$, shows $\mathcal{W}(s,0,s)=\zeta(s,s)=\tfrac12\left[\zeta^2(s)-\zeta(2s)\right]$. Thus, $\mathcal{W}(2,0,2)=\zeta(2,2)=\tfrac12\left(\pi^4/36-\pi^4/90\right)=\pi^4/120$.
More generally, recursive use of (\[w-alg\]) and (\[w-alg1\]), along with initial conditions (\[w-alg2\]) shows that *all integer $\mathcal{W}(s,r,t)$ values are expressible in terms of double (and single) Euler sums.* If we start with $\Gamma(t)/(m+n)^{t} = \int _0^1\! (-\log
\sigma)^{t-1}\,\sigma^{m+n-1}\,d \sigma $ we obtain $$\begin{aligned}
\label{gamma}\mathcal{W}(r,s,t)=
\frac{1}{\Gamma(t)}\,\int _0^1\!
{\rm Li}_r(\sigma)\,{\rm Li}_s(\sigma)\,\frac{\left(-\log \sigma\right)^{t-1}}{\sigma}\,d \sigma.\end{aligned}$$ For example, we recover an analytic proof of $$\begin{aligned}
\label{z21-w}2\,\zeta(2,1)=\mathcal{W}(1,1,1)=
\int _0^1\!\frac{\ln^2(1-\sigma)}{\sigma}\,d\sigma =2\,\zeta(3).\end{aligned}$$ Indeed $S$ in the proof of §\[sect:telescope\] is precisely $\mathcal{W}(1,1,1)$.
We may now discover analytic as opposed to algebraic relations. Integration by parts yields $$\begin{aligned}
\label{w-parts}\mathcal{W}(r,s+1,1) +\mathcal{W}(r+1,s,1)={\rm Li}_{r+1}(1)\,{\rm
Li}_{s+1}(1)=\zeta(r+1)\,\zeta(s+1).\end{aligned}$$ So, in particular, $\mathcal{W}(s+1,s,1)=\zeta^2(s+1)/2$.
Symbolically, *Maple* immediately evaluates $\mathcal{W}(2,1,1)=\pi^4/72,$ and while it fails directly with $\mathcal{W}(1,1,2)$, we know it must be a multiple of $\pi^4$ or equivalently $\zeta(4)$; and numerically obtain $\mathcal{W}(1,1,2)/\zeta(4)=.49999999999999999998\ldots$.
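Both values can be corroborated with a few lines of Python, using reductions of our own rather than the one above: $\mathcal{W}(2,1,1)=\sum_n H_n/n^3$ by partial fractions, and $\mathcal{W}(1,1,2)=2\,\mathcal{W}(1,0,3)=2\,\zeta(3,1)$ by (\[w-alg\]) and (\[w-alg2\]). The truncation $N$ is arbitrary.

```python
# Check W(2,1,1) = pi^4/72 and W(1,1,2) = zeta(4)/2 numerically.
from math import fsum, pi

N = 100_000
zeta3, zeta4 = 1.2020569031595943, pi**4 / 90

H, H3 = 0.0, 0.0
w211, z31 = [], []
for n in range(1, N + 1):
    H += 1.0 / n
    H3 += 1.0 / n**3
    w211.append(H / n**3)            # W(2,1,1) = sum_n H_n / n^3
    z31.append((zeta3 - H3) / n)     # zeta(3,1) = sum_n (zeta(3) - H_n^{(3)}) / n

print(fsum(w211), pi**4 / 72)        # W(2,1,1)
print(2 * fsum(z31), zeta4 / 2)      # W(1,1,2) = 2 * zeta(3,1)
```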
The Hilbert Matrix
------------------
Letting $a_n:=1/n^r$ and $b_n:=1/n^s$, inequality (\[hilbert-p\]) of Section \[sect:hardy\] yields $$\begin{aligned}
\label{wittenp}\mathcal{W}(r,s,1) \le
\pi\,{\rm csc}\left(\frac{\pi}p\right)\,\sqrt[p]{\zeta(pr)}\,\sqrt[q]{\zeta(qs)}.\end{aligned}$$ Indeed, the constant in (\[hilbert-p\]) is best possible [@ghh; @steele]. We consider $$\mathcal{R}_p(s):=\frac{\mathcal{W}((p-1)s,s,1)}{\pi\,\zeta(ps)},$$ and observe that with $\sigma_n^p(s):=\sum_{m=1}^\infty
(m/n)^{-(p-1)s}/(n+m) \to \pi\,{\rm csc}\left(\frac{\pi}q \right),$ we have $$\begin{aligned}
\mathcal{L}_p:&=&\lim_{s\to
1/p}(ps-1)\,\sum_{n=1}^\infty\sum_{m=1}^\infty
\frac{n^{-s}\,m^{-(p-1)s}}{n+m}= \lim_{s\to
1/p}(ps-1)\,\sum_{n=1}^\infty\frac{1}{n^{ps}}\,\sigma_n^p(s)\\&=&\lim_{s\to
1/p}\,(ps-1) \,\sum_{n=1}^\infty
\,\frac{\left\{\sigma_n^p(s)-\pi\,{\rm csc}\left(\pi/q
\right)\right\}}{n^{ps}}+\lim_{s\to 1/p}\,(ps-1)\zeta(ps)\,\pi\,{\rm
csc}\left(\frac{\pi}q \right)\\&=&0+\pi\,{\rm csc}\left(\frac{\pi}q
\right).\end{aligned}$$ Setting $r:=(p-1)s,s \to 1/p^+$ we check that $\zeta(ps)^{1/p}\,\zeta(qr)^{1/q}=\zeta(ps)$ and hence the best constant in (\[wittenp\]) is the one given.
To recapitulate in terms of the celebrated infinite *Hilbert matrix,* $\mathcal{H}_0:=\left\{1/(m+n)\right\}_{m,n=1}^\infty$, [@exp2 pp. 250–252], we have actually proven:
\[hmat\] Let $1<p,q < \infty$ be given with $1/p+1/q=1$. The Hilbert matrix $\mathcal{H}_0$ determines a bounded linear mapping from the sequence space $\ell^p$ to itself such that $$\|\mathcal{H}_0\|_{p,p}=\lim_{s \to
1/p}\frac{\mathcal{W}(s,(p-1)s,1)}{\zeta(ps)}=\pi\,{\rm
csc}\left(\frac{\pi}p \right).$$
[**Proof.**]{} Appealing to the isometry between $(\ell^p)^*$ and $\ell^q$, and given the evaluation $\mathcal{L}_p$ above, we directly compute the operator norm of $\mathcal{H}_0$ as $$\begin{aligned}
\|\mathcal{H}_0\|_{p,p} = \sup_{\|x\|_p=1} \|\mathcal{H}_0
x\|_p=\sup_{\|y\|_q=1}\sup_{\|x\|_p=1} \langle \mathcal{H}_0 x,
y\rangle=\pi\,{\rm csc}\left(\frac{\pi}p \right).\end{aligned}$$
A delightful operator-theoretic introduction to the Hilbert matrix $\mathcal{H}_0$ is given by Choi in his Chauvenet prize winning article [@choi].
One may also study the corresponding behaviour of Hardy’s inequality (\[h-ineq\]). For example, setting $a_n:=1/n$ in (\[h-ineq\]) and denoting $H_n:=\sum_{k=1}^n 1/k$ yields $$\sum_{n=1}^\infty \left(\frac{H_n}{n}\right)^p \le
\left(\frac{p}{p-1}\right)^p\,\zeta(p).$$ Application of the integral test and the evaluation $$\int_1^\infty\,\left(\frac{\log
x}{x}\right)^p\,dx = \frac{\Gamma \left( 1+p \right)}{ \left( p-1
\right) ^{p+1}},$$ for $p>1$ easily shows the constant is again best possible.
A Stirling Number Generating Function {#sect:Stirling}
=====================================
Following [@Butzer], we begin with the integral representation of §\[sect:single2\]. In light of the expansion $$\frac{(-1)^m}{m!}\log^m(1-x) = \sum_{n=0}^\infty u(n,m)
\frac{x^n}{n!},\qquad 0\le m\in{{\mathbf Z}},$$ in terms of the unsigned Stirling numbers of the first kind (also referred to as the Stirling cycle numbers in [@GKP]), we have $$\zeta(m+1) = \int_0^1 \bigg\{\sum_{n=1}^\infty u(n,m)\frac{x^n}{n!}\bigg\}\frac{dx}{x}
= \sum_{n=1}^\infty \frac{u(n,m)}{n!\, n},\qquad 1\le m\in{{\mathbf Z}}.$$ Telescoping the known recurrence $$\label{StirlingRecur}
u(n,m) = u(n-1,m-1)+(n-1)u(n-1,m),\qquad 1\le m\le n,$$ yields $$\label{StirlingSumRecur}
u(n,m) = (n-1)! \left\{ \delta_{m,1}+ \sum_{j=1}^{n-1}
\frac{u(j,m-1)}{j!}\right\}.$$ Iterating this gives the representation $$\zeta(m+1) = \zeta(2,\{1\}^{m-1}), \qquad 1\le m\in{{\mathbf Z}},$$ the $m=2$ case of which is . See also $n=0$ in below.
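The representation $\zeta(m+1)=\sum_n u(n,m)/(n!\,n)$ invites a direct numerical test. Working with the ratios $r(n,m):=u(n,m)/n!$, the recurrence becomes $r(n,m)=r(n-1,m-1)/n+(n-1)r(n-1,m)/n$, which avoids huge integers; the truncation $N$ below is arbitrary and the neglected tail is roughly of order $\log^m N/N$.

```python
# Check zeta(m+1) = sum_n u(n,m)/(n! * n) for m = 2, 3 via r(n,m) = u(n,m)/n!.
from math import fsum, pi

N, M = 200_000, 3
targets = {2: 1.2020569031595943, 3: pi**4 / 90}   # zeta(3), zeta(4)

r = [1.0] + [0.0] * M                 # r(0, m) for m = 0..M
sums = {m: [] for m in (2, 3)}
for n in range(1, N + 1):
    r = [0.0] + [r[m - 1] / n + (n - 1) * r[m] / n for m in range(1, M + 1)]
    for m in sums:
        sums[m].append(r[m] / n)      # u(n,m)/(n! * n)

for m in sums:
    print(m, fsum(sums[m]), targets[m])
```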
For the alternating case, we begin by writing the recurrence in the form $$u(n+1,k) +(j-n)u(n,k) = u(n,k-1) + j\, u(n,k).$$ Following [@Butzer], multiply both sides by $(-1)^{n+k+1}j^{k-m-1}/(j-n)_n$, where $1\le n\le j-1$ and $k,
m\in{{\mathbf Z}}^{+}$, yielding $$\begin{gathered}
(-1)^k \left\{\frac{(-1)^{n+1}\,u(n+1,k)}{(j-n)_n} - \frac{(-1)^n
\,u(n,k)}{(j-n+1)_{n-1}}\right\} j^{k-m-1}\\
= \frac{(-1)^n}{(j-n)_n}\left\{(-1)^{k-1}\,u(n,k-1)j^{k-m-1}
- (-1)^k \,u(n,k) j^{k-m}\right\}.\end{gathered}$$ Now sum on $1\le k\le m$ and $1\le n\le j-1$, obtaining $$\sum_{k=1}^m \frac{(-1)^{k+j} \,u(j,k)}{j!\,j^{m-k}} -\frac1{j^m}
= \frac{(-1)^{m+1}}{(j-1)!}\sum_{n=m}^{j-1}(-1)^n
(j-n-1)!\,u(n,m).$$ Finally, sum on $j\in{{\mathbf Z}}^{+}$ to obtain $$\zeta(m) = \sum_{k=1}^m \sum_{j=k}^\infty
\frac{(-1)^{k+j}\,u(j,k)}{j!\,j^{m-k}}
+ \sum_{n=m}^\infty (-1)^{n+m}\,u(n,m)\sum_{j=n+1}^\infty
\frac{(j-1-n)!}{(j-1)!}.$$ Noting that $$\sum_{j=n+1}^\infty \frac{(j-1-n)!}{(j-1)!}
= \sum_{k=0}^\infty \frac{k!}{(k+n)!}
= \frac1{n!} \;{}_2F_1(1,1;n+1;1)
= \frac{1}{(n-1)!\,(n-1)},$$ we find that $$\zeta(m) = \sum_{k=1}^m \sum_{j=k}^\infty
\frac{(-1)^{j+k}\,u(j,k)}{j!\,j^{m-k}} + \sum_{n=m}^\infty
\frac{(-1)^{n+m}\,u(n,m)}{(n-1)!\,(n-1)}.$$ Now employ the recurrence again to get $$\begin{aligned}
\zeta(m) &= \sum_{k=1}^{m-2}\sum_{j=k}^\infty
\frac{(-1)^{j+k}\,u(j,k)}{j!\,j^{m-k}}+\sum_{j=m-1}^\infty
\frac{(-1)^{j+m-1}\,u(j,m-1)}{j!\,j}+\sum_{j=m}^\infty
\frac{(-1)^{j+m}\,u(j,m)}{j!}\nonumber\\
&\qquad + \sum_{n=m}^\infty \frac{(-1)^{n+m}\,u(n-1,m)}{(n-1)!}
+ \sum_{n=m}^\infty
\frac{(-1)^{n+m}\,u(n-1,m-1)}{(n-1)!\,(n-1)}\nonumber\\
&=\sum_{k=1}^{m-2}\sum_{j=k}^\infty
\frac{(-1)^{j+k}\,u(j,k)}{j!\,j^{m-k}}+2\sum_{j=m-1}^\infty
\frac{(-1)^{j+m-1}\,u(j,m-1)}{j!\,j}.
\label{Butzer2bar}\end{aligned}$$ Using again, we find that the case $m=3$ of gives $$\begin{aligned}
\zeta(3) &= \sum_{j=1}^\infty
\frac{(-1)^j\,u(j,1)}{j!\,j^2}+2\sum_{j=2}^\infty \frac{(-1)^j
\,u(j,2)}{j!\,j}\\
&= \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j^3}+2\sum_{j=2}^\infty
\frac{(-1)^j}{j!\,j}(j-1)!\sum_{k=1}^{j-1}\frac{u(k,1)}{k!}\\
&= \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j^3}+2\sum_{j=2}^\infty
\frac{(-1)^j}{j^2}\sum_{k=1}^{j-1}\frac{1}{k}\\
&= 2\zeta(\overline2,1)-\zeta(\overline3),\end{aligned}$$ which easily rearranges to give , shown in §\[sect:telescope\] to be trivially equivalent to .
Polylogarithm Identities {#sect:dilog}
========================
Dilogarithm and Trilogarithm {#sect:dilog3}
----------------------------
Consider the power series $$J(x):= \zeta_x(2,1)=\sum_{n>k>0} \frac{x^n}{n^2 k},
\qquad 0\le x\le 1.$$ In light of , we have $$J(x) = \int_0^x \frac{dt}{t}\int_0^t
\frac{du}{1-u}\int_0^u\frac{dv}{1-v}
= \int_0^x\frac{\log^2(1-t)}{2t}\,dt.$$ The computer algebra package [Maple]{} readily evaluates $$\label{maple}
\int_0^x\frac{\log^2(1-t)}{2t}\,dt
=\zeta(3)+\frac{1}{2} \log^2(1-x) \log(x)
+ \log (1-x){\rm Li}_2(1-x)-{\rm Li}_3(1-x)$$ where $${\rm Li}_s(x) := \sum_{n=1}^\infty \frac{x^n}{n^s}$$ is the classical polylogarithm [@Lewin1; @Lewin2]. (One can also readily verify the identity by differentiating both sides by hand, and then checking trivially holds as $x\to0+$. See also [@Berndt1 p. 251, Entry 9].) Thus, $$J(x) =\zeta(3)+\frac{1}{2} \log^2(1-x) \log(x)
+ \log (1-x){\rm Li}_2(1-x)-{\rm Li}_3(1-x).$$ Letting $x\to1-$ gives again.
In [@Berndt1 p. 251, Entry 9], we also find that $$J(-z)+J(-1/z) = -\tfrac16\log^3 z-\mathrm{Li}_2(-z)\log
z+\mathrm{Li}_3(-z)+\zeta(3)\label{Jinversion}$$ and $$J(1-z) = \tfrac12\log^2 z\log(z-1)-\tfrac13\log^3
z-\mathrm{Li}_2(1/z)\log
z-\mathrm{Li}_3(1/z)+\zeta(3)\label{Jreflection}.$$ Putting $z=1$ in and employing the well-known dilogarithm evaluation [@Lewin1 p. 4] $$\mathrm{Li}_{2}(-1) = \sum_{n=1}^\infty \frac{(-1)^n}{n^2} =
-\frac{\pi^2}{12}$$ gives . Putting $z=2$ in and employing the dilogarithm evaluation [@Lewin1 p. 6] $$\mathrm{Li}_{2}\bigg(\frac12\bigg) = \sum_{n=1}^\infty \frac1{n^2\, 2^n} =
\frac{\pi^2}{12}-\frac12\log^2 2$$ and the trilogarithm evaluation [@Lewin1 p. 155] $$\mathrm{Li}_{3}\bigg(\frac12\bigg) = \sum_{n=1}^\infty \frac{1}{n^3\, 2^n} =
\frac78\zeta(3)-\frac{\pi^2}{12}\log 2+\frac16\log^3 2$$ gives again.
Finally, as in [@BBBLa Lemma 10.1], differentiation shows that $$\label{feq-J}
J(-x) = -J(x)+\frac14J(x^2)+J\!\left(\frac{2x}{x+1}\right)
-\frac18J\!\left(\frac{4x}{(x+1)^2}\right).$$ Putting [@BBBLa Theorem 10.3] $x=1$ gives $8J(-1)=J(1)$ immediately, i.e. .
In [@BBBLa], it is noted that once the component functions in are known, the coefficients can be deduced by computing each term to high precision with a common transcendental value of $x$ and then employing a linear relations finding algorithm. We note here a somewhat more satisfactory method for arriving at .
First, as in §\[sect:iterint1\] one must determine the fundamental transformations –. While this is not especially difficult, the calculations are somewhat lengthy, so we do not include them here. By performing these transformations on the function $J(x)$, one finds that $$\begin{aligned}
{2}
J(x) & = \int_0^x ab^2,\qquad\qquad
& J\bigg(\frac{2x}{1+x}\bigg) &= \int_0^x (a+c)(b-c)^2,\\
J(-x) & = \int_0^x ac^2, \qquad\qquad &{} &{}\\
J(x^2) &= \int_0^x 2a(b+c)^2, \qquad\qquad
& J\bigg(\frac{4x}{(1+x)^2}\bigg) &= \int_0^x (a+2c)4(b-c)^2.\end{aligned}$$ It now stands to reason that we should seek rational numbers $r_1$, $r_2$, $r_3$ and $r_4$ such that $$ac^2 = r_1 ab^2 + 2r_2 \, a(b+c)^2 + r_3(a+c)(b-c)^2
+r_4(a+2c)4(b-c)^2$$ is an identity in the non-commutative polynomial ring ${{\mathbf Q}}\langle
a,b,c\rangle$. The problem of finding such rational numbers reduces to solving a finite set of linear equations. For example, comparing coefficients of the monomial $ab^2$ tells us that $r_1+2r_2+r_3+4r_4=0$. Coefficients of other monomials give us additional equations, and we readily find that $r_1=-1$, $r_2=1/4$, $r_3=1$ and $r_4=-1/8$, thus proving as expected.
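The resulting identity in ${{\mathbf Q}}\langle a,b,c\rangle$ can also be verified symbolically; the snippet below is an independent check (ours), using sympy's non-commutative symbols, that the stated values of $r_1,\dots,r_4$ do the job.

```python
# Verify a*c^2 = -a*b^2 + (1/2)*a*(b+c)^2 + (a+c)*(b-c)^2 - (1/2)*(a+2c)*(b-c)^2
# in the non-commutative polynomial ring, i.e. r1 = -1, r2 = 1/4, r3 = 1, r4 = -1/8.
from sympy import symbols, Rational, expand

a, b, c = symbols('a b c', commutative=False)
r1, r2, r3, r4 = -1, Rational(1, 4), 1, Rational(-1, 8)

rhs = (r1 * a * b**2
       + 2 * r2 * a * (b + c)**2
       + r3 * (a + c) * (b - c)**2
       + 4 * r4 * (a + 2 * c) * (b - c)**2)
print(expand(a * c**2 - rhs))   # prints 0
```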
Convolution of Polylogarithms {#sect:dilogp}
-----------------------------
Motivated by [@Boy2001; @Boy2002], for real $0<x<1$ and integers $s$ and $t$, consider $$\begin{aligned}
T_{s,t}(x) &:= \sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}}{n^s\, m^t(m-n)}
=\sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}(m-n+n)}{n^s\, m^{t+1}(m-n)}\\
&=\sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}}{n^s \, m^{t+1}}+\sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}}{n^{s-1}\, m^{t+1}(m-n)}\\
&=\sum_{n=1}^\infty \frac{x^n}{n^s}\sum_{m=1}^\infty
\bigg(\frac{x^m}{m^{t+1}}-\frac{x^n}{n^{t+1}}\bigg)+T_{s-1,t+1}(x)\\
&=\mathrm{Li}_s(x)\mathrm{Li}_{t+1}(x)-\mathrm{Li}_{s+t+1}(x^2)
+T_{s-1,t+1}(x).\end{aligned}$$ Telescoping this gives $$\begin{aligned}
T_{s,t}(x) &= T_{0,s+t}(x)-s\,\mathrm{Li}_{s+t+1}(x^2)
+\sum_{j=1}^s
\mathrm{Li}_j(x)\mathrm{Li}_{s+t+1-j}(x),
\qquad 0\le s\in{{\mathbf Z}}.\nonumber\\
\intertext{With $t=0$, this becomes}
T_{s,0}(x) &= T_{0,s}(x) - s\,\mathrm{Li}_{s+1}(x^2)
+\sum_{j=1}^{s}\mathrm{Li}_j(x)\mathrm{Li}_{s+1-j}(x),
\qquad 0\le s\in{{\mathbf Z}}.\nonumber\\
\intertext{But for any integers $s$ and $t$, there holds}
T_{t,s}(x) &= \sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}}{n^t m^s(m-n)}
= -\sum_{\substack{m,n=1\\ m\ne n}}^\infty
\frac{x^{n+m}}{m^s n^t(n-m)}
= - T_{s,t}(x).\nonumber\\
\intertext{Therefore,}
T_{s,0}(x) &= \frac12 \sum_{j=1}^s \mathrm{Li}_j(x)\mathrm{Li}_{s+1-j}(x)
- \frac{s}{2}\,\mathrm{Li}_{s+1}(x^2),
\qquad 0\le s\in{{\mathbf Z}}.\label{Tp0}\\
\intertext{On the other hand,}
T_{s,0}(x) &= \sum_{n=1}^\infty\frac{x^n}{n^s}
\sum_{\substack{m=1\\m\ne n}}^\infty\frac{x^m}{m-n}
= \sum_{n=1}^\infty\frac{x^{2n}}{n^s}\sum_{m=n+1}^\infty
\frac{x^{m-n}}{m-n} - \sum_{n=1}^\infty
\frac{x^n}{n^s}\sum_{m=1}^{n-1}\frac{x^m}{n-m}\nonumber\\
&= \mathrm{Li}_s(x^2)\mathrm{Li}_1(x)-\sum_{n=1}^\infty
\frac{x^n}{n^s}\sum_{j=1}^{n-1}\frac{x^{n-j}}{j}.\nonumber\\
\intertext{Comparing this with~\eqref{Tp0} gives}
\sum_{n=1}^\infty\frac{x^n}{n^s}\sum_{j=1}^{n-1}\frac{x^{n-j}}{j}
& =\frac{s}2\,\mathrm{Li}_{s+1}(x^2)
-\left[\mathrm{Li}_s(x)-\mathrm{Li}_s(x^2)\right]\mathrm{Li}_1(x)
-\frac12\sum_{j=2}^{s-1}\mathrm{Li}_j(x)\mathrm{Li}_{s+1-j}(x),\label{TakeLimit}\end{aligned}$$ where in and what follows, we now require $2\le
s\in{{\mathbf Z}}$ because the terms $j=1$ and $j=s$ in the sum were separated, and assumed to be distinct.
Next, note that if $n$ is a positive integer and $0<x<1$, then $$1-x^n = (1-x)\sum_{j=0}^{n-1} x^j < (1-x)n.$$ Thus, if $2\le s\in{{\mathbf Z}}$ and $0<x<1$, then $$\begin{aligned}
0<\left[\mathrm{Li}_s(x)-\mathrm{Li}_s(x^2)\right]\mathrm{Li}_1(x)
&= \mathrm{Li}_1(x)\sum_{n=1}^\infty \frac{x^n(1-x^n)}{n^s}
< (1-x)\mathrm{Li}_1(x)\sum_{n=1}^\infty \frac{x^n}{n^{s-1}}\\
&< (1-x)\log^2(1-x).\end{aligned}$$ Since the latter expression tends to zero in the limit as $x\to
1-$, taking the limit in gives $$\zeta(s,1) = \frac12\, s\,\zeta(s+1)-\frac12\sum_{j=1}^{s-2}
\zeta(j+1)\zeta(s-j),
\qquad 2\le s\in{{\mathbf Z}},$$ which is .
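As a spot check of this reduction (our choice of $s=4$): $\zeta(4,1)=2\zeta(5)-\tfrac12\left(\zeta(2)\zeta(3)+\zeta(3)\zeta(2)\right)=2\zeta(5)-\zeta(2)\zeta(3)$. The truncation $N$ below is arbitrary and the neglected tail is of order $1/N^3$.

```python
# Check zeta(4,1) = 2*zeta(5) - zeta(2)*zeta(3)  (Euler's reduction with s = 4).
from math import fsum, pi

N = 20_000
zeta2, zeta3 = pi**2 / 6, 1.2020569031595943
zeta4, zeta5 = pi**4 / 90, 1.0369277551433699

partial, terms = 0.0, []              # partial = H_n^{(4)}
for n in range(1, N + 1):
    partial += 1.0 / n**4
    terms.append((zeta4 - partial) / n)   # zeta(4,1) = sum_n (zeta(4) - H_n^{(4)}) / n

print(fsum(terms), 2 * zeta5 - zeta2 * zeta3)
```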
Fourier Series {#sect:fourier}
==============
The Fourier expansions $$\sum_{n=1}^{\infty}\frac{{\displaystyle}\sin(nt)}{n}=\frac{{\displaystyle}\pi-t}{2}\qquad\text{and}\qquad
\sum_{n=1}^{\infty} \frac{{\displaystyle}\cos(nt)}{n}= -\log|2\sin(t/2)|$$ are both valid in the open interval $0<t<2\pi$. Multiplying these together, simplifying, and doing a partial fraction decomposition gives $$\begin{aligned}
\sum_{n=1}^{\infty}\frac{\sin(nt)}{n} \sum_{k=1}^{n-1} \frac 1 k
& = \frac12\sum_{n=1}^\infty\frac{\sin(nt)}{n}
\sum_{k=1}^{n-1}\bigg(\frac1k+\frac1{n-k}\bigg)
= \frac12\sum_{n>k>0} \frac{\sin(nt)}{k(n-k)}\nonumber\\
&= \frac12\sum_{m,n=1}^\infty \frac{\sin(m+n)t}{mn}= \sum_{m,n=1}^\infty \frac{\sin(mt)\cos(nt)}{mn}\nonumber\\
&= -\frac{\pi-t}2 \log \left |2\sin(t/2) \right|, \label{sin-ser}
\intertext{again for $0<t<2\pi$. Integrating (\ref{sin-ser}) term
by term yields}
\sum_{n=1}^{\infty}\frac{\cos(n \theta)}{n^2} \sum_{k=1}^{n-1}\frac1k
&= \zeta(2,1)+\frac12\int_0^\theta (\pi -t)\log \left
|2\sin(t/2)\right|\,dt, \label{z21f}\\
\intertext{valid for $0\le \theta\le 2\pi$. Likewise for $0\le
\theta\le 2\pi$,}
\sum_{n=1}^{\infty}\frac{ \cos(n \theta)}{n^3}
&= \zeta(3)+\int_0^\theta (\theta-t)\log \left |2\sin(t/2)
\right|\,dt.\label{z3f}\end{aligned}$$ Setting $\theta=\pi$ in and produces $$\zeta(2,1)-\zeta(\overline{2},1) = -\frac{1}{2}\,\int_0^{\pi}(\pi-t)
\log \left |2\sin(t/2) \right| \,dt =
\frac{\zeta(3)-\zeta(\overline{3})}2.$$ In light of , this implies $$\zeta(\overline2,1)=\frac{\zeta(3)+\zeta(\overline3)}{2}=
\frac12\sum_{n=1}^\infty\frac{1+(-1)^n}{n^3}=\sum_{m=1}^\infty\frac{1}{(2m)^3}
=\frac18\zeta(3),$$ which is .
Applying Parseval’s equation to gives (via [@BB; @BBG; @Flaj]) the integral evaluation $$\frac{1}{4\pi} \int_{0}^{2\pi} (\pi-t)^2 \log^2(2 \sin(t/2))\, dt =
\sum_{n=1}^{\infty} \frac{H_n^2}{(n+1)^2} = \frac{11}4\, \zeta(4).$$ A reason for valuing such integral representations is that they are frequently easier to use numerically.
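Indeed, the Parseval evaluation is straightforward to confirm numerically; the logarithmic singularities at the endpoints are integrable and are handled by mpmath's quadrature (splitting the range at $\pi$ is a convenience, not a necessity).

```python
# Check (1/(4*pi)) * int_0^{2*pi} (pi - t)^2 * log(2*sin(t/2))^2 dt = (11/4)*zeta(4).
from mpmath import mp, quad, log, sin, pi, zeta

mp.dps = 20
f = lambda t: (pi - t)**2 * log(2 * sin(t / 2))**2
print(quad(f, [0, pi, 2 * pi]) / (4 * pi), 11 * zeta(4) / 4)
```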
Further Generating Functions {#sect:gfs}
============================
Hypergeometric Functions {#sect:hyper}
------------------------
Note that in the notation of , $\zeta(2,1)$ is the coefficient of $xy^2$ in $$\label{xyy}
G(x,y) :=
\sum_{m=0}^\infty\sum_{n=0}^\infty x^{m+1}y^{n+1}\zeta(m+2,{\{1\}}^n)
= y\sum_{m=0}^\infty x^{m+1}\sum_{k=1}^\infty
\frac1{k^{m+2}}\prod_{j=1}^{k-1} \left(1+\frac{y}j\right).$$ Now recall the notation $(y)_k := y(y+1)\cdots(y+k-1)$ for the rising factorial with $k$ factors. Thus, $$\frac{y}{k}\prod_{j=1}^{k-1} \left(1+\frac{y}j\right)
= \frac{(y)_k}{k!}.$$ Substituting this into , interchanging order of summation, and summing the resulting geometric series yields the hypergeometric series $$\begin{aligned}
G(x,y)
&= \sum_{k=1}^\infty\frac{(y)_k}{k! }\left(\frac{x}{k-x}\right)
= -\sum_{k=1}^\infty\frac{(y)_k(-x)_k}{k! (1-x)_k}
= 1-{}_2F_1\left(\begin{array}{cc} y, -x\\ 1-x\end{array}\bigg|1\right).\end{aligned}$$ But, Gauss’s summation theorem for the hypergeometric function [@AS p. 557] [@Bailey p. 2] and the power series expansion for the logarithmic derivative of the gamma function [@AS p. 259] imply that $${}_2F_1\left(\begin{array}{cc} y, -x\\ 1-x\end{array}\bigg|1\right)
=\frac{{\Gamma}(1-x){\Gamma}(1-y)}{{\Gamma}(1-x-y)}
=\exp\bigg\{\sum_{k=2}^\infty\left(x^k+y^k-(x+y)^k\right)
\frac{\zeta(k)}{k}\bigg\}.$$ Thus, we have derived the generating function equality [@BBB] (see [@DBq] for a $q$-analog) $$\label{DrinGF}
\sum_{m=0}^\infty\sum_{n=0}^\infty x^{m+1}y^{n+1}\zeta(m+2,{\{1\}}^n)
= 1-\exp\bigg\{\sum_{k=2}^\infty
\left(x^k+y^k-(x+y)^k\right)\frac{\zeta(k)}{k}\bigg\}.$$ Extracting coefficients of $xy^2$ from both sides of yields .
The generalization can be similarly derived: extract the coefficient of $x^{m-1}y^2$ from both sides of . In fact, it is easy to see that provides a formula for $\zeta(m+2,{\{1\}}^n)$ for all nonnegative integers $m$ and $n$ in terms of sums of products of values of the Riemann zeta function at the positive integers. In particular, Markett’s formula [@Mark] (cf. also [@BBG]) for $\zeta(m,1,1)$ for positive integers $m>1$ is most easily obtained in this way. Noting symmetry between $x$ and $y$ in gives Drinfeld’s duality formula [@Drin] $$\label{DrinDuality}
\zeta(m+2,{\{1\}}^n) = \zeta(n+2,{\{1\}}^m)$$ for non-negative integers $m$ and $n$, a special case of the more general duality formula . Note that is just the case $m=n=0$.
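The coefficient extraction described above is easy to automate. The short sympy computation below (ours) truncates the exponential, noting that only two terms are needed since the exponent has no terms of total degree below two, and recovers $\zeta(3)$ from the $xy^2$ coefficient and $\zeta(3,1)=\pi^4/360$ from the $x^2y^2$ coefficient.

```python
# Extract low-order coefficients of 1 - exp(sum_k (x^k + y^k - (x+y)^k) * zeta(k)/k).
from sympy import symbols, zeta, expand

x, y = symbols('x y')
E = sum((x**k + y**k - (x + y)**k) * zeta(k) / k for k in range(2, 6))
series = expand(-E - E**2 / 2)            # 1 - exp(E), correct to total degree 5
print(series.coeff(x, 1).coeff(y, 2))     # -> zeta(3)
print(series.coeff(x, 2).coeff(y, 2))     # -> pi**4/360, i.e. zeta(3,1)
```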
Similarly [@Chu 2.1b] equating coefficients of $xy^2$ in Kummer’s summation theorem [@Kummer p. 53] [@Bailey p.9] $${}_2F_1\left(\begin{array}{cc}x,y\\1+x-y\end{array}\bigg|-1\right)
= \frac{\Gamma(1+x/2)\Gamma(1+x-y)}{\Gamma(1+x)\Gamma(1+x/2-y)}$$ yields .
A Generating Function for Sums {#sect:SumGF}
------------------------------
The identity can also be recovered by setting $x=0$ in the following result:
\[thm:SumGF\] If $x$ is any complex number not equal to a positive integer, then $$\sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1{m-x}
=\sum_{n=1}^\infty\frac1{n^2(n-x)}.$$
[**Proof.**]{} Fix $x\in{{\mathbf C}}\setminus{{\mathbf Z}}^{+}$. Let $S$ denote the left hand side. By partial fractions, $$\begin{aligned}
S & =\sum_{n=1}^\infty \sum_{m=1}^{n-1}\bigg(\frac{1}{n(n-m)(m-x)}
-\frac{1}{n(n-m)(n-x)}\bigg)\\
&=\sum_{m=1}^\infty\frac1{m-x}\sum_{n=m+1}^\infty\frac1{n(n-m)}
-\sum_{n=1}^\infty \frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1{n-m}\\
&=\sum_{m=1}^\infty\frac1{m(m-x)}\sum_{n=m+1}^\infty\bigg(\frac1{n-m}
-\frac1n\bigg)-\sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1m.\end{aligned}$$ Now for fixed $m\in{{\mathbf Z}}^{+}$, $$\begin{aligned}
\sum_{n=m+1}^\infty \bigg(\frac1{n-m}-\frac1n\bigg)
&= \lim_{N\to\infty}\sum_{n=m+1}^N \bigg(\frac1{n-m}-\frac1n\bigg)
= \sum_{n=1}^m\frac1n-\lim_{N\to\infty}\sum_{n=1}^m\frac1{N-n+1}\\
&= \sum_{n=1}^m \frac1n,\end{aligned}$$ since $m$ is fixed. Therefore, we have $$\begin{aligned}
S &=\sum_{m=1}^\infty\frac1{m(m-x)}\sum_{n=1}^m\frac1n
- \sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1m
=\sum_{n=1}^\infty\frac1{n(n-x)}\bigg(\sum_{m=1}^n\frac1m-
\sum_{m=1}^{n-1}\frac1m\bigg)\\
&=\sum_{n=1}^\infty\frac1{n^2(n-x)}.\end{aligned}$$ [$\square$]{}
Theorem \[thm:SumGF\] is in fact equivalent to the sum formula [@Gran; @Ohno] $$\label{sum}
\sum_{\substack{\sum a_i = s\\a_i\ge 0}}
\zeta(a_1+2,a_2+1,\dots,a_r+1) = \zeta(r+s+1),$$ valid for all integers $s\ge 0$, $r\ge 1$, and which generalizes Theorem \[briggs\] to arbitrary depth. The identity is simply the case $r=2$, $s=0$. A $q$-analog of the sum formula is derived as a special case of more general results in [@DBq]. See also [@DBqSum].
An Alternating Generating Function {#sect:SumAGF}
----------------------------------
An alternating counterpart to Theorem \[thm:SumGF\] is given below.
[(Theorem 3 of [@DJD]).]{} For all non-integer $x$ $$\begin{aligned}
\sum_{n=1}^\infty \frac{(-1)^n}{n^2-x^2}
\bigg\{H_n +\sum_{m=1}^\infty \frac{x^2}{m(m^2-x^2)}\bigg\}
&= \sum_{n=1}^\infty \frac{(-1)^n}{n^2-x^2}\bigg\{\psi(n)-\psi(x)
- \frac{\pi}{2}\cot(\pi x) -\frac1{2x}\bigg\}\\
&= \sum _{o>0\,\mbox{{\small odd}}}^{\infty }{\frac {1}{
o \left( o^{2 }-x^2 \right) }}+\sum _{n=1}^{\infty }{\frac {
\left( -1 \right) ^{n}\,n}{ \left( {n}^{2}-{x}^{2} \right)
^{2}}}\\
&=\sum _{e>0\,\mbox{{\small even}}}^{\infty }{\frac {e}{
\left( {x}^{2}-e^{2} \right) ^{2}}}-{x}^{2}\sum
_{o>0\,\mbox{{\small odd}}}^{\infty }{\frac {1}{ o
\left( {x}^{2}- o^{2} \right) ^{2}}}.
\end{aligned}$$
Setting $x=0$ reproduces in the form $\zeta(\overline{2},1)=\sum _{n>0}^{\infty }(2n)^{-3}$. We record that $$\sum _{n=1}^{\infty}\frac{(-1)^n}{n^2-x^2}
=\frac1{2 x^2}-{\frac {\pi }{2 x\sin(\pi x)}},$$ while $$\begin{gathered}
\sum_{n=1}^\infty \frac{(-1)^n}{n^2-x^2}
\bigg\{\psi(n)-\psi(x)-\frac\pi2\cot(\pi
x)-\frac1{2x}\bigg\}
=\sum_{n=1}^\infty \frac{(-1)^n}{n^2-x^2}\bigg\{H_n
+\sum_{m=1}^\infty \frac{x^2}{m(m^2-x^2)} \bigg\}\\
= \sum_{n=1}^\infty\frac{1}{(2n-1)((2n-1)^2-x^2)}
+\sum_{n=1}^\infty \frac{n (-1)^n}{(n^2-x^2)^2}.\end{gathered}$$
The Digamma Function {#susbsect:DPsiGF}
--------------------
Define an auxiliary function $\Lambda$ by $$\begin{aligned}
x \Lambda(x):= \tfrac12\psi'(1-x)
-\tfrac12\left(\psi(1-x)+\gamma\right)^2
-\tfrac12\zeta(2).\end{aligned}$$ We note, but do not use, that $$x \Lambda(x)=\frac12\int_0^\infty
\frac{t \left(e^{-t}+e^{-t (1-x)}\right)}{1-e^{-t}}\,dt
-\frac12\left(\int_0^\infty
\frac{e^{-t}-e^{-t ( 1-x)}}{1-{e^{-t}}}\,dt\right)^2-\zeta(2).$$ It is easy to verify that $$\begin{aligned}
\label{2ids}
\psi(1-x)+\gamma &=
\sum_{n=1}^\infty\frac{x}{n(x-n)},\nonumber\\
\psi'(1-x)-\zeta(2)
& =\sum_{n=1}^\infty\bigg(\frac{1}{(x-n)^2}-\frac1{n^2}\bigg)
=\sum_{n=1}^\infty \frac{2nx-x^2}{n^2(n-x)^2},\end{aligned}$$ and $$\sum_{n=0}^\infty\zeta(n+2,1)x^n
=\sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1m.$$ Hence, $$\Lambda(x)
=\sum_{n=1}^\infty\frac{1}{n^2(n-x)}
-x\sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac{1}{m(m-x)}.$$ Now, $$\sum_{n=1}^\infty\frac{1}{n^2(n-x)} -
x\sum_{n=1}^\infty\frac{1}{n(n-x)}\sum_{m=1}^{n-1}\frac{1}{m(m-x)}
=\sum_{n=1}^\infty\frac1{n(n-x)}\sum_{m=1}^{n-1}\frac1m$$ is directly equivalent to Theorem \[thm:SumGF\] of §\[sect:SumGF\]—see [@DJD Section 3]—and we have proven $$\Lambda(x)= \sum_{n=0}^\infty \zeta(n+2,1)\,x^n,$$ so that comparing coefficients yields yet another proof of Euler’s reduction . In particular, setting $x=0$ again produces .
The Beta Function
-----------------
Recall that the beta function is defined for positive real $x$ and $y$ by $$B(x,y) := \int_0^1 t^{x-1}(1-t)^{y-1}\,dt
= \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.$$ We begin with the following easily obtained generating function: $$\sum_{n=1}^\infty t^n H_n = -\frac{\log(1-t)}{1-t}.$$ For $m\ge 2$ the Laplace integral now gives $$\begin{aligned}
\label{zm1=b1}
\zeta(m,1) & = \frac{(-1)^m}{(m-1)!}
\int_0^1\frac{\log^{m-1}(t)\log(1-t)}{1-t}\, dt\nonumber\\
& = \frac{(-1)^m}{2(m-1)!}
\int_0^1 (m-1) \log^{m-2}(t)\log^2(1-t)\,\frac{dt}{t}\nonumber\\
& = \frac{(-1)^m}{2(m-2)!}\, b_1^{(m-2)}(0),\end{aligned}$$ where $$b_1(x) := \left. \frac{\partial^{2}}{\partial y^2} B(x,y)
\right|_{y=1}
= 2\Lambda(-x)$$ (cf. §\[susbsect:DPsiGF\]). Since $$\frac{\partial^2}{\partial y^2} B(x,y)
= B(x,y) \left[(\psi(y)-\psi(x+y))^2 + \psi'(y) - \psi'(x+y)\right],$$ we derive $$b_1(x) = \frac{(\psi(1) - \psi(x+1))^2 + \psi'(1) - \psi'(x+1)}{x}.$$ Now observe that from , $$\begin{aligned}
\zeta(2,1) &= \frac12 \, b_1(0)
= \lim_{x \downarrow0} \frac{(-\gamma - \psi(x+1))^2 }{2x}
-\lim_{x \downarrow0} \frac{\psi'(x+1)-\psi'(1)}{2x}
=-\frac12\,\psi''(1)\\
&=\zeta(3).\end{aligned}$$
Continuing, from the following two identities, cognate to , $$\begin{aligned}
(-\gamma-\psi(x+1))^2
& = \bigg(\sum_{m=1}^\infty (-1)^m\zeta(m+1)\, x^m \bigg)^2\\
& = \sum_{m=1}^\infty (-1)^m
\sum_{k=1}^{m-1}\zeta(k+1)\zeta(m-k+1)\, x^m,\\
\zeta(2) -\psi'(x+1)
& = \sum_{m=1}^\infty (-1)^{m+1} (m+1) \zeta(m+2)\, x^m,\end{aligned}$$ we get $$2\sum_{m=2}^\infty (-1)^m \zeta(m,1) \,x^{m-2}
= \sum_{m=2}^\infty \frac{b_1^{(m-2)}(0)}{(m-2)!}\, x^{m-2}\,
= b_1(x)$$ $$= \sum_{m=1}^\infty (-1)^{m-1} \left( (m+1) \zeta(m+2) -
\sum_{k=1}^{m-1} \zeta(k+1) \zeta(m-k+1)\right) x^{m-1},$$ from which Euler’s reduction follows—indeed this is close to Euler’s original path.
Observe that is especially suited to symbolic computation. We also note the pleasing identity $$\begin{aligned}
\psi'(x)=\frac{\Gamma''(x)}{\Gamma(x)}-\psi^2(x)\label{DPsiGF}
.\end{aligned}$$ In some informal sense (\[DPsiGF\]) generates , but we have been unable to make this sense precise.
A Decomposition Formula of Euler {#sect:parfracs}
================================
For positive integers $s$ and $t$ and distinct non-zero real numbers $\alpha$ and $x$, the partial fraction expansion $$\label{parfrac}
\frac{1}{x^s(x-\alpha)^t}
=
(-1)^t\sum_{r=0}^{s-1}\binom{t+r-1}{t-1}\frac{1}{x^{s-r}\alpha^{t+r}}
+\sum_{r=0}^{t-1}\binom{s+r-1}{s-1}
\frac{(-1)^r}{\alpha^{s+r}(x-\alpha)^{t-r}}$$ implies [@Niels3 p. 48] [@Mark] Euler’s decomposition formula $$\begin{gathered}
\label{niels}
\zeta(s,t) =
(-1)^t\sum_{r=0}^{s-2}\binom{t+r-1}{t-1}\zeta(s-r,t+r)
+\sum_{r=0}^{t-2}(-1)^r\binom{s+r-1}{s-1}\zeta(t-r)\zeta(s+r)\\
-(-1)^t\binom{s+t-2}{s-1}\big\{\zeta(s+t)+\zeta(s+t-1,1)\big\}.\end{gathered}$$ The depth-2 sum formula is obtained by setting $t=1$ in . If we also set $s=2$, the identity results. To derive from we follow [@Mark], separating the last term of each sum on the right hand side of , obtaining $$\begin{aligned}
\frac{1}{x^s(x-\alpha)^t} &=
(-1)^t\sum_{r=0}^{s-2}\binom{t+r-1}{t-1}\frac{1}{x^{s-r}\alpha^{t+r}}
+\sum_{r=0}^{t-2}\binom{s+r-1}{s-1}
\frac{(-1)^r}{\alpha^{s+r}(x-\alpha)^{t-r}}\\
&-(-1)^{t}\binom{s+t-2}{s-1}
\frac{1}{\alpha^{s+t-1}}\bigg(\frac{1}{x-\alpha}
-\frac1x\bigg).\end{aligned}$$ Now sum over all integers $0<\alpha <x<\infty$.
Nielsen states without proof [@Niels3 p. 48, eq.(9)]. Markett proves by induction [@Mark Lemma 3.1], which is the proof technique suggested for the $\alpha=1$ case of in . However, it is easy to prove directly by expanding the left hand side into partial fractions with the aid of the residue calculus. Alternatively, as in [@DBqDecomp] note that is an immediate consequence of applying the partial derivative operator $$\frac{1}{(s-1)!}\bigg(-\frac{\partial}{\partial x}\bigg)^{s-1}
\frac{1}{(t-1)!}\bigg(-\frac{\partial}{\partial y}\bigg)^{t-1}$$ to the identity $$\frac{1}{xy} = \frac{1}{(x+y)x} + \frac{1}{(x+y)y},$$ and then setting $y=\alpha-x$. This latter observation is extended in [@DBqDecomp] to establish a $q$-analog of another of Euler’s decomposition formulas for $\zeta(s,t)$.
Equating Shuffles and Stuffles {#sect:shtuff}
==============================
We begin with an informal argument. By the *stuffle multiplication* rule [@BBBLa; @BowBradSurvey; @DBPrtn; @DBq] $$\label{divergentstuff}
\zeta(2)\zeta(1)= \zeta(2,1)+\zeta(1,2)+\zeta(3).$$ On the other hand, the shuffle multiplication rule [@BBBLa; @BBBLc; @BowBradSurvey; @BowBrad3; @BowBradRyoo] gives $ab{
\setlength{\unitlength}{.4pt}
\begin{picture}(40,20)
\put(10,2){\line(1,0){20}} \put(10,2){\line(0,1){10}}
\put(20,2){\line(0,1){10}} \put(30,2){\line(0,1){10}}
\end{picture}}b = 2abb+bab$, whence $$\label{divergentshuff}
\zeta(2)\zeta(1) = 2\zeta(2,1)+\zeta(1,2).$$ The identity now follows immediately on subtracting from .
Of course, this argument needs justification, because it involves cancelling divergent series. To make the argument rigorous, we introduce the multiple polylogarithm [@BBBLa; @BowBrad1; @BowBradSurvey]. For real $0\le
x\le 1$ and positive integers $s_1,\dots,s_k$ with $x=s_1=1$ excluded for convergence, define $$\zeta_x(s_1,\dots,s_k)
:= \sum_{n_1>\cdots>n_k>0}\; x^{n_1}\prod_{j=1}^k n_j^{-s_j}
=\int \prod_{j=1}^k
\bigg(\prod_{r=1}^{s_j-1} \frac{dt_r^{(j)}}{t_r^{(j)}}\bigg)
\frac{dt_{s_j}^{(j)}}{1-t_{s_j}^{(j)}},
\label{iterint}$$ where the integral is over the simplex $$x>t_1^{(1)}>\cdots>t_{s_1}^{(1)}>\cdots>t_1^{(k)}>\cdots>t_{s_k}^{(k
)}>0,$$ and is abbreviated by $$\int_0^x \prod_{j=1}^k a^{s_j-1}b,
\qquad a=\frac{dt}{t},\qquad b=\frac{dt}{1-t}.
\label{shortiterint}$$ Then $$\zeta(2)\zeta_x(1) =
\sum_{n>0}\frac1{n^2}\sum_{k>0}\frac{x^k}{k}
=
\sum_{n>k>0}\frac{x^k}{n^2k}+\sum_{k>n>0}\frac{x^k}{kn^2}
+\sum_{k>0}\frac{x^k}{k^3},$$ and $$\zeta_x(2)\zeta_x(1) = \int_0^x ab \int_0^x b = \int_0^x
\left(2abb+bab\right) = 2\zeta_x(2,1)+\zeta_x(1,2).$$ Subtracting the two equations gives $$\big[\zeta(2)-\zeta_x(2)\big]\zeta_x(1)
= \zeta_x(3)-\zeta_x(2,1)+\sum_{n>k>0}\frac{x^k-x^n}{n^2k}.$$ We now take the limit as $x\to 1-.$ Uniform convergence implies the right hand side tends to $\zeta(3)-\zeta(2,1)$. That the left hand side tends to zero follows immediately from the inequalities $$\begin{aligned}
0\le x \big[\zeta(2)-\zeta_x(2)\big]\zeta_x(1)
&= x\int_x^1\log(1-t)\log(1-x)\frac{dt}{t}\\
& \le \int_x^1 \log^2(1-t)\,dt\\
& =(1-x)\left\{1+(1-\log(1-x))^2\right\}.\end{aligned}$$
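Before letting $x\to1-$, the finite-$x$ shuffle identity $\zeta_x(2)\zeta_x(1)=2\zeta_x(2,1)+\zeta_x(1,2)$ can itself be checked numerically at any fixed $0<x<1$; the choice $x=0.7$ and the truncation below are arbitrary, and the series converge geometrically.

```python
# Check zeta_x(2)*zeta_x(1) = 2*zeta_x(2,1) + zeta_x(1,2) at x = 0.7.
from math import fsum, log

x, N = 0.7, 500
li2, z21, z12 = [], [], []
H, H2 = 0.0, 0.0                      # H_{n-1} and H_{n-1}^{(2)}
for n in range(1, N + 1):
    xn = x**n
    li2.append(xn / n**2)             # zeta_x(2)
    z21.append(xn * H / n**2)         # zeta_x(2,1) = sum_n x^n H_{n-1} / n^2
    z12.append(xn * H2 / n)           # zeta_x(1,2) = sum_n x^n H_{n-1}^{(2)} / n
    H += 1.0 / n
    H2 += 1.0 / n**2

print(fsum(li2) * (-log(1 - x)), 2 * fsum(z21) + fsum(z12))
```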
The alternating case is actually easier using this approach, since the role of the divergent sum $\zeta(1)$ is taken over by the conditionally convergent sum $\zeta(\overline 1)=-\log
2$. By the stuffle multiplication rule, $$\begin{aligned}
\zeta(\overline 2)\zeta(\overline 1) &= \zeta(\overline 2,
\overline 1)+\zeta(\overline 1, \overline 2) + \zeta(3)
\label{convergentstuff1},\\
\zeta(2)\zeta(\overline 1) &= \zeta(2, \overline
1)+\zeta(\overline 1, 2) + \zeta(\overline
3)\label{convergentstuff2}.\end{aligned}$$ On the other hand, the shuffle multiplication rule gives $ac{
\setlength{\unitlength}{.4pt}
\begin{picture}(40,20)
\put(10,2){\line(1,0){20}} \put(10,2){\line(0,1){10}}
\put(20,2){\line(0,1){10}} \put(30,2){\line(0,1){10}}
\end{picture}}c=2ac^2+cac$ and $ab{
\setlength{\unitlength}{.4pt}
\begin{picture}(40,20)
\put(10,2){\line(1,0){20}} \put(10,2){\line(0,1){10}}
\put(20,2){\line(0,1){10}} \put(30,2){\line(0,1){10}}
\end{picture}}c=abc+acb+cab$, whence $$\begin{aligned}
\zeta(\overline 2)\zeta(\overline 1) &= 2\zeta(\overline 2, 1)
+ \zeta(\overline 1, 2)\label{convergentshuff1},\\
\zeta(2)\zeta(\overline 1) &= \zeta(2, \overline 1) +
\zeta(\overline 2, \overline 1) + \zeta(\overline 1, \overline
2)\label{convergentshuff2}.\end{aligned}$$ Comparing with and with yields the two equations $$\begin{aligned}
\zeta(\overline 2, \overline 1)
&= \zeta(\overline 1, 2) +2\zeta(\overline 2, 1)
-\zeta(\overline 1, \overline 2)
-\zeta(3),\\
\zeta(\overline 2, \overline 1) &= \zeta(\overline 1, 2)-\zeta(
\overline 1, \overline 2) +\zeta(\overline 3).\end{aligned}$$ Subtracting the latter two equations yields $2\zeta(\overline 2,
1) = \zeta(3)+\zeta(\overline 3)$, i.e. , which was shown to be trivially equivalent to in §\[sect:telescope\].
Conclusion
==========
There are doubtless other roads to Rome, and as indicated in the introduction we should like to learn of them. We finish with the three open questions we are most desirous of answers to.
- A truly combinatorial proof, perhaps of the form considered in [@BBBLc].
- A direct proof that the appropriate line integrals in sections \[sect:Perron\] and \[sect:Dirichlet\] evaluate to the appropriate multiple of $\zeta(3)$.
- A proof of , or at least some additional cases of it.
[xx]{}
M. Abramowitz & I. Stegun, *Handbook of Mathematical Functions*, Dover, New York, 1972.
G. Almkvist and A. Granville, Borwein and Bradley’s Apéry-like formulae for $\zeta(4n+3)$, *Experiment. Math.*, **8** (1999), 197–203.
T. M. Apostol, *Introduction to Analytic Number Theory*, Springer-Verlag, New York, 1986.
W. N. Bailey, *Generalized Hypergeometric Series*, Cambridge University Press, London, 1935.
B. Berndt, *Ramanujan’s Notebooks Part I*, Springer, New York, 1985.
F. Beukers, A note on the irrationality of $\zeta (2)$ and $\zeta
(3)$, *Bull. London Math. Soc.*, **11** (1979), no. 3, 268–272.
J. Borwein, A class of Dirichlet series integrals, *MAA Monthly*, in press, 2005.
J. M. Borwein and D. H. Bailey. *Mathematics by Experiment: Plausible Reasoning in the 21st Century,* A. K. Peters Ltd., 2004.
J. M. Borwein, D. H. Bailey and R. Girgensohn, *Experimentation in Mathematics: Computational Paths to Discovery*, A. K. Peters Ltd., 2004.
D. Borwein and J. M. Borwein, On some intriguing sums involving $\zeta$(4), *Proc. Amer. Math. Soc.*, **123** (1995), 111-118.
D. Borwein, J. M. Borwein, and D. M. Bradley, Parametric Euler sum identities, *J. Math. Anal. Appl.*, in press, 2005.
D. Borwein, J. M. Borwein, and R. Girgensohn, Explicit evaluation of Euler sums, *Proc. Edinburgh Math.Soc.* **38** (1995), 277–294.
J. M. Borwein and P. B. Borwein, *Pi and the AGM: A Study in Analytic Number Theory and Computational Complexity*, John Wiley, New York, 1987, paperback 1998.
J. M. Borwein, D. J. Broadhurst, and D. M. Bradley, Evaluations of $k$-fold Euler/Zagier sums: a compendium of results for arbitrary $k$, *Electronic J. Combinatorics*, **4** (1997), no. 2, \#R5. Wilf Festschrift.
J. M. Borwein, D. J. Broadhurst, D. M. Bradley, and P. Lisoněk, Special values of multiple polylogarithms, *Trans. Amer. Math. Soc.*, **353** (2001), no. 3, 907–941. Preprint lodged at http://arXiv.org/abs/math.CA/9910045
, [Combinatorial aspects of multiple zeta values]{}, *Electronic J. Combinatorics*, **5** (1998), no. 1, \#R38. Preprint lodged at http://arXiv.org/abs/math.NT/9812020
J. Borwein, J. Zucker and J. Boersma, Evaluation of character Euler double sums, preprint 2004. \[CoLab Preprint \#260\].
D. Bowman and D. M. Bradley, Resolution of some open problems concerning multiple zeta evaluations of arbitrary depth, *Compositio Mathematica*, **139** (2003), no. 1, 85–100. Preprint lodged at http://arXiv.org/abs/math.CA/0310061
, Multiple polylogarithms: A brief survey, *Proceedings of a Conference on $q$-Series with Applications to Combinatorics, Number Theory and Physics*, (B. C. Berndt and K. Ono eds.) Amer. Math. Soc., Contemporary Math., **291** (2001), 71–92. http://arXiv.org/abs/math.CA/0310062
, The algebra and combinatorics of shuffles and multiple zeta values, *J.Combinatorial Theory, Ser. A*, **97** (2002), no. 1, 43–61. http://arXiv.org/abs/math.CO/0310082
D. Bowman, D. M. Bradley, and J. Ryoo, [Some multi-set inclusions associated with shuffle convolutions and multiple zeta values]{}, *European J. Combinatorics*, **24** (2003), 121–127.
P. Bracken, Problem 10754, solution by B. S. Burdick, *Amer. Math. Monthly*, **108** (2001), 771–772.
D. M. Bradley, [Partition identities for the multiple zeta function]{}, pp. 19–29 in *Zeta Functions, Topology, and Quantum Physics*, Springer Series: Developments in Mathematics, Vol. 14, T. Aoki, S.Kanemitsu, M. Nakahara, Y. Ohno (eds.) 2005, XVI, 219 p. 13 illus., Hardcover ISBN: 0-387-24972-9. Preprint lodged at http://arXiv.org/abs/math.CO/0402091
, Multiple $q$-zeta values, *J. Algebra*, **283** (2005), no. 2, 752–798.\
Published version available online at http://dx.doi.org/10.1016/j.jalgebra.2004.09.017\
Preprint lodged at http://arXiv.org/abs/math/QA/040293
, [Duality for finite multiple harmonic $q$-series]{}, *Discrete Math.*, **300** (2005), 44–56.\
Published version available online at http://dx.doi.org/10.1016/j.disc.2005.06.008\
Preprint lodged at http://arXiv.org/abs/math.CO/0402092 v2
, [On the sum formula for multiple $q$-zeta values]{}, *Rocky Mountain J. Math.*, to appear. Preprint lodged at http://arXiv.org/abs/math.QA/0411274
, [A $q$-analog of Euler’s decomposition formula for the double zeta function]{}, January 31, 2005. *International Journal of Mathematics and Mathematical Sciences*, to appear. Preprint lodged at http://arXiv.org/abs/math.NT/0502002
W. E. Briggs, Problem 1302, solution by N. Franceschine, *Math. Mag.*, **62** (1989), no. 4, 275–276.
W. E. Briggs, S. Chowla, A. J. Kempner, and W. E. Mientka, On some infinite series, *Scripta Math.*, **21** (1955), 28–30.
D. J. Broadhurst, Polylogarithmic Ladders, Hypergeometric Series and the Ten Millionth Digits of $\zeta(3)$ and $\zeta(5)$, (1998). Available at [ http://lanl.arxiv.org/abs/math/9803067]{}
P. S. Bruckman, Problem H-320, *Fibonacci Quart.*, **20** (1982), 186–187.
K. Boyadzhiev, Evaluation of Euler-Zagier sums, *Internat. J. Math. Math. Sci.*, **27** (2001), no. 7, 407–412.
, Consecutive evaluation of Euler sums, *Internat. J. Math. Math. Sci.*, **29** (2002), no. 9, 555–561.
Edward B. Burger and Robert Tubbs, *Making Transcendence Transparent,* Springer-Verlag, 2004.
P. L. Butzer, C. Markett, and M. Schmidt, Stirling numbers, central factorial numbers, and representations of the Riemann zeta function, *Results in Mathematics*, **19** (1991), 257–274.
P. Cartier, Fonctions polylogarithmes, nombres polyzêtas et groupes pro-unipotents, Séminaire Bourbaki, 53’eme année, 2000-2001, N$^{o}$ 885, 1–35.
Man-Duen Choi, Tricks or treats with the Hilbert matrix, *American Mathematical Monthly,* **90** (1983), 301–312.
W. Chu, Hypergeometric series and the Riemann zeta function, *Acta Arith.* **LXXXII.2** (1997), 103–118.
R. E. Crandall, Fast evaluation of multiple zeta sums, *Math. Comp.*, **67** (1998), no. 223, 1163–1172.
R. E. Crandall and J. P. Buhler, On the evaluation of Euler sums, *Experimental Math.*, **3** (1994), no. 4, 275–285.
V. G. Drinfel’d, On quasitriangular quasi-Hopf algebras and a group closely connected with $\mathrm{Gal}(\overline{\mathbf
Q}/{\mathbf Q})$, *Leningrad Math. J.*, **2** (1991), no. 4, 829–860.
William Dunham, *Euler: The Master of Us All*, Mathematical Association of America (Dolciani Mathematical Expositions), Washington, 1999.
O. Espinosa and V. Moll, The evaluation of Tornheim double sums. Part 1, *Journal of Number Theory*, in press, 2005.\
Available at `http://math.tulane.edu/~vhm/pap22.html`.
L. Euler, Meditationes circa singulare serierum genus, *Novi. Comm. Acad. Sci. Petropolitanae*, **20** (1775), 140–186.
, *Opera Omnia*, ser. 1, vol. 15, B. G. Teubner, Berlin, 1927.
N. R. Farnum, Problem 10635, solution by A. Tissier, *Amer. Math. Monthly*, **106** (1999), 965–966.
P. Flajolet and B. Salvy, Euler sums and contour integral representations, *Experiment. Math.*, **7** (1998), no. 1, 15–35.
C. Georghiou and A. N. Philippou, Harmonic sums and the zeta-function, *Fibonacci Quart.*, **21** (1983), 29–36.
A. P. Juskevic, E. Winter and P. Hoffmann, Leonhard Euler; Christian Goldbach: Letters 1729-1764, *Notices of the German Academy of Sciences at Berlin*, 1965.
R. L. Graham, D. E. Knuth and O. Patashnik, *Concrete Mathematics* (2nd ed.) Addison-Wesley, Boston, 1994.
A. Granville, A decomposition of Riemann’s zeta-function, in *Analytic Number Theory* (Y. Motohashi, ed.), London Mathematical Society Lecture Notes Series 247, Cambridge University Press, 1997, pp. 95–101.
G. H. Hardy, Prolegomena to a Chapter on Inequalities, vol. 2, pp. 471–489, in *Collected Papers*, Oxford University Press, 1967.
M. E. Hoffman, Multiple harmonic series, *Pacific J. Math.*, **152** (1992), no. 2, 275–290.
J. Havil, *Gamma: Exploring Euler’s Constant*, Princeton University Press, 2003.
C. Kassel, *Quantum Groups*, Springer, New York, 1995.
E. E. Kummer, Ueber die hypergeometrische Reihe, *J. für Math.* **15** (1836), 39–83.
M. S. Klamkin, Problem 4431, solution by R. Steinberg, *Amer. Math. Monthly*, **59** (1952), 471–472.
, Advanced problem 4564, solution by J. V. Whittaker, *Amer. Math. Monthly*, **62** (1955), 129–130.
L. Lewin, *Polylogarithms and Associated Functions*, Elsevier North Holland, New York-Amsterdam, 1981. MR **83b:**33019.
L. Lewin (ed.), *Structural Properties of Polylogarithms*, Amer. Math. Soc. Mathematical Surveys and Monographs **37** (1991), Providence, RI. MR **93b:**11158.
C. Markett, Triple sums and the Riemann zeta function, *J. Number Theory*, **48** (1994), 113–132.
N. Nielsen, Recherches sur des généralisations d’une fonction de Legendre et d’Abel, *Annali Math.*, **9** (1904), 219–235.
, Der Eulersche Dilogarithmus und seine Verallgemeinerungen, *Nova Acta, Abh. der Kaiserl.Leopoldinisch-Carolinischen Deutschen Akad. der Naturforscher*, **90** (1909), 121–212.
, *Die Gammafunktion*, Chelsea, New York, 1965.
Y. Ohno, A generalization of the duality and sum formulas on the multiple zeta values, *J. Number Theory*, **74** (1999), 39–43.
R. Sitaramachandra Rao, *J. Number Theory*, **25** (1987), no. 1, 1–19.
R. Sitaramachandrarao and A. Siva Rama Sarma, Some identities involving the Riemann zeta-function, *Indian J. Pure Appl. Math.*, **10** (1979), 602–607.
R. Sita Ramachandra Rao and M. V. Subbarao, Transformation formulae for multiple series, *Pacific J.Math.*, **113** (1984), no. 2, 471–479.
E. Y. State, A one-sided summatory function, *Proc. Amer. Math. Soc.*, **60** (1976), 134–138.
J. Michael Steele, *The Cauchy-Schwarz Master Class*, MAA Problem Books Series, MAA, 2004.
M. V. Subbarao and R. Sitaramachandrarao, On some infinite series of L. J. Mordell and their analogues, *Pacific J. Math.*, **119** (1985), 245–255.
G. Tenenbaum, *Introduction to Analytic and Probabilistic Number Theory*, Cambridge University Press, Cambridge, 1995.
M. Vowe, Aufgabe 1138, *Elemente der Mathematik*, **53** (1998), 177; **54** (1999), 176.
M. Waldschmidt, Multiple polylogarithms: an introduction, in *Number Theory and Discrete Mathematics*, Hindustan Book Agency and Birkhäuser Verlag, 2002, 1–12.
, Valeurs zêta multiples: une introduction, *Journal de Theorie des Nombres de Bordeaux*, **12** (2000), no. 2, 581–595.
G. T. Williams, A new method of evaluating $\zeta(2n)$, *Amer. Math. Monthly*, **60** (1953), 19–25.
D. Zagier, Values of Zeta Function and Their Applications, *Proceedings of the First European Congress of Mathematics*, **2**, (1994), 497–512.
V. V. Zudilin, Algebraic relations for multiple zeta values (Russian), *Uspekhi Mat. Nauk* **58** (2003), no. 1, 3–32; translation in *Russian Math.Surveys*, **58** (2003), vol. 1, 1–29 .
---
abstract: |
The distribution of higher order level spacings, i.e. the distribution of $\{s_{i}^{(n)}=E_{i+n}-E_{i}\}$ with $n\geq 1$, is derived analytically using a Wigner-like surmise for the three Gaussian ensembles of random matrices as well as the Poisson ensemble. It is found that $s^{(n)}$ in the Gaussian ensembles follows a generalized Wigner-Dyson distribution with rescaled parameter $\alpha=\nu C_{n+1}^2+n-1$, while that in the Poisson ensemble follows a generalized semi-Poisson distribution with index $n$. Notably, the distribution of $s^{(2n)}$ in GOE coincides with that of $s^{(n)}$ in GSE. Numerical evidence is provided through simulations of random spin systems as well as the non-trivial zeros of the Riemann zeta function. Higher order generalizations of the gap ratio are also discussed.
author:
- 'Wen-Jia Rao$^1$'
title: Wigner Surmise for Higher Order Level Spacings in Random Matrix Theory
---
Introduction {#intro}
============
Random matrix theory (RMT) was introduced half a century ago when dealing with complex nuclei[@Porter], and has since found various applications in fields ranging from quantum chaos to isolated many-body systems[@RMP; @PR]. This is rooted in the fact that RMT describes universal properties of a random matrix that depend only on its symmetry and are independent of microscopic details. Specifically, a system with time reversal invariance is represented by a matrix belonging to the Gaussian orthogonal ensemble (GOE); a system with spin rotational invariance but broken time reversal symmetry belongs to the Gaussian unitary ensemble (GUE); while the Gaussian symplectic ensemble (GSE) represents systems with time reversal symmetry but broken spin rotational symmetry.
Among various statistical quantities, the most widely used one is the distribution of nearest level spacings $s$, i.e. the gaps between adjacent energy levels, which measures the strength of level repulsion. The exact expression for $P\left( s\right)$ can be derived analytically for random matrices of large dimension, but it is cumbersome[@Mehta; @Haake2001]. Instead, for most practical purposes it is sufficient to employ the so-called Wigner surmise[@Wigner] that deals with a $2\times 2$ matrix (this will be reviewed in Sec. \[nearest\]); the resulting expression for $P(s)$ is neat, containing a polynomial part accounting for level repulsion and a Gaussian decaying part (see Eq. (\[equ:nearest\])).
Different models may and usually do have different densities of states (DOS), hence to compare the universal behavior of level spacings, an unfolding procedure is required to erase the model dependent information of the DOS. This unfolding procedure is, however, not unique and suffers from a subtle ambiguity raised by the concrete unfolding strategy[@Gomez2002].
To overcome this obstacle, Oganesyan and Huse[@Oganesyan] proposed a new quantity to study the level statistics, i.e. the ratio between adjacent gaps $r_{n}=\frac{s_{n}}{s_{n-1}}$, whose distribution $P\left( r\right) $ was later analytically derived by Atas *et al.*[@Atas]. The gap ratio is independent of the local DOS and requires no unfolding procedure, hence it has found various applications, especially in the context of many-body localization (MBL)[@Huse1; @Huse2; @Huse3; @Sarma; @Luitz]. The gap ratio has later been generalized to higher order to describe level correlations on longer ranges[@Tekur1; @Tekur; @Atas2; @Chavda], although a general analytical result is still lacking.
In contrast, the higher order level spacing itself is much less studied. Motivated by a recent work that encountered the next-nearest level spacings[@Rubah], we proceed to pursue the general distribution of higher order level spacings in this work. By using a Wigner-like surmise, we succeed in obtaining an analytical expression for the distribution of the higher order spacing $s_{n}=E_{i+n}-E_{i}$ in all three Gaussian ensembles of RMT, as well as in the Poisson ensemble. The results show that the distribution of $s_{n}$ in the former class follows a generalized Wigner-Dyson distribution with a rescaled parameter, while $s_{n}$ in the Poisson ensemble follows a generalized semi-Poisson distribution with index $n$.
This paper is organized as follows. In Sec. \[nearest\] we review the Wigner surmise for deriving the distribution of nearest level spacings, and present numerical data to validate this surmise. In Sec. \[analytical\] we present the analytical derivation for higher order level spacings using a Wigner-like surmise, and numerical fittings are given in Sec. \[numerics\]. In Sec. \[ratio\] we discuss the generalization of gap ratios to higher order. Conclusion and discussion come in Sec. \[conclusion\].
Nearest Level Spacings {#nearest}
======================
We begin with the discussion of nearest level spacings. Our starting point is the probability distribution of energy levels $P\left( \left\{ E_{i}\right\} \right) $ in the three Gaussian ensembles, whose expression can be found in any textbook on RMT (e.g. Ref. \[\]),
$$P\left( \left\{ E_{i}\right\} \right) \propto \prod_{i<j}\left\vert E_{i}-E_{j}\right\vert ^{\nu }e^{-A\sum_{i}E_{i}^{2}} \label{equ:Dist}$$
where $\nu =1,2,4$ for GOE, GUE, GSE respectively. The distribution of the nearest level spacing can then be written as
$$P\left( s\right) =\int \prod_{i=1}^{N}dE_{i}\,P\left( \left\{ E_{i}\right\} \right) \delta \left( s-\left\vert E_{1}-E_{2}\right\vert \right) \text{,}$$
whose result is quite complicated. Instead, Wigner proposed the surmise that one can focus on the $N=2$ case, for which the distribution reduces to
$$P\left( s\right) \propto \int_{-\infty }^{\infty }\left\vert E_{1}-E_{2}\right\vert ^{\nu }\delta \left( s-\left\vert E_{1}-E_{2}\right\vert \right) e^{-A\sum_{i}E_{i}^{2}}dE_{1}dE_{2}\text{.}$$
By introducing $x_{1}=E_{1}-E_{2}$, $x_{2}=E_{1}+E_{2}$, we have
$$\begin{aligned}
P\left( s\right) &\propto &2\int_{-\infty }^{\infty }\left\vert x_{1}\right\vert ^{\nu }\delta \left( s-\left\vert x_{1}\right\vert \right) e^{-\frac{A}{2}\sum_{i}x_{i}^{2}}dx_{1}dx_{2} \notag \\
&=&Cs^{\nu }e^{-As^{2}/2}\text{.}\end{aligned}$$
The constants $A,C$ can be determined by working out the integral over $x_{2}$, but it is more convenient to obtain them by imposing the normalization conditions
$$\int_{0}^{\infty }P\left( s\right) ds=1\text{, }\int_{0}^{\infty }sP\left( s\right) ds=1\text{.} \label{equ:normalization}$$
From these we arrive at the famous Wigner-Dyson distribution
$$P(s)=\left\{
\begin{array}{ll}
\frac{\pi }{2}s\exp \big(-\frac{\pi }{4}s^{2}\big) & \nu =1\quad \text{GOE} \\[1mm]
\frac{32}{\pi ^{2}}s^{2}\exp \big(-\frac{4}{\pi }s^{2}\big) & \nu =2\quad \text{GUE} \\[1mm]
\frac{2^{18}}{3^{6}\pi ^{3}}s^{4}\exp \big(-\frac{64}{9\pi }s^{2}\big) & \nu =4\quad \text{GSE}
\end{array}
\right. \label{equ:nearest}$$
On the other hand, the levels are independent in the Poisson ensemble, which means the occurrence of the next level is independent of the previous one; the nearest level spacing then follows a Poisson distribution $P\left( s\right) =\exp \left( -s\right) $.
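The surmise can be checked directly by diagonalizing an ensemble of $2\times 2$ real symmetric matrices; a minimal Python sketch along these lines (sample size, random seed and binning are arbitrary illustrative choices) is the following.

```python
import numpy as np

def goe_spacings(num_matrices=20000, rng=np.random.default_rng(0)):
    """Nearest spacings of 2x2 GOE matrices, normalized to unit mean."""
    spacings = np.empty(num_matrices)
    for m in range(num_matrices):
        a = rng.normal(size=(2, 2))
        h = (a + a.T) / 2          # real symmetric (GOE) matrix
        e = np.linalg.eigvalsh(h)
        spacings[m] = e[1] - e[0]
    return spacings / spacings.mean()

def wigner_goe(s):
    """Wigner-Dyson surmise for GOE, i.e. Eq. (equ:nearest) with nu = 1."""
    return np.pi / 2 * s * np.exp(-np.pi / 4 * s ** 2)

s = goe_spacings()
hist, edges = np.histogram(s, bins=40, range=(0, 4), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print("max deviation from the surmise:", np.abs(hist - wigner_goe(centers)).max())
```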
Although the Wigner surmise is derived for a $2\times 2$ matrix, it works fairly well when the matrix dimension is large. To demonstrate this, we present numerical evidence from a quantum many-body system – the spin-$1/2$ Heisenberg model with a random external field, which is the canonical model in the study of many-body localization (MBL),
$$H=\sum_{i=1}^{L}\mathbf{S}_{i}\cdot \mathbf{S}_{i+1}+\sum_{i=1}^{L}\sum_{\alpha =x,y,z}h^{\alpha }\varepsilon _{i}^{\alpha }S_{i}^{\alpha }, \label{equ:H}$$
where we set the coupling strength to $1$ and assume periodic boundary conditions in the Heisenberg term. The $\varepsilon _{i}^{\alpha }$'s are random numbers within the range $\left[ -1,1\right] $, and $h^{\alpha }$ is referred to as the randomness strength. We focus on two choices of $h^{\alpha }$: (i) $h^{x}=h^{z}=h\neq 0$ and $h^{y}=0$, for which the Hamiltonian matrix is orthogonal; (ii) $h^{x}=h^{y}=h^{z}=h\neq 0$, for which the model is unitary. This model undergoes a thermal-MBL transition at roughly $h_{c}\simeq3$ ($2.5$) in the orthogonal (unitary) model, where the level spacing distribution evolves from GOE (GUE) to Poisson[@Regnault16].
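For readers who wish to reproduce these spectra, a minimal sketch of building the Hamiltonian of Eq. (\[equ:H\]) by exact diagonalization for a small chain is shown below; the system size, randomness strength and random seed are arbitrary illustrative choices, and the construction is not optimized (no symmetry sectors are used).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(L=8, h=1.0, unitary=False, seed=0):
    """Random-field Heisenberg chain of Eq. (equ:H), periodic boundaries."""
    rng = np.random.default_rng(seed)
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L):                       # Heisenberg coupling S_i . S_{i+1}
        j = (i + 1) % L
        for s in (sx, sy, sz):
            H += site_op(s, i, L) @ site_op(s, j, L)
    fields = {"x": h, "y": h if unitary else 0.0, "z": h}
    for i in range(L):                       # random on-site fields
        for s, a in zip((sx, sy, sz), "xyz"):
            H += fields[a] * rng.uniform(-1, 1) * site_op(s, i, L)
    return H

E = np.linalg.eigvalsh(hamiltonian())
print("ground state energy:", E[0])
```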
We choose an $L=12$ system for the numerical simulation, and prepare $500$ samples at $h=1$ and $h=5$ for both the orthogonal and the unitary model. In Fig. \[fig:NN\_spacing\](Left) we plot the density of states (DOS) for the $h=1$ case in the orthogonal model. We can see the DOS is much more uniform in the middle part of the spectrum, which is also the case for $h=5$ and for the unitary model. Therefore we choose the middle half of the energy levels to do the spacing counting, and the results are shown in Fig. \[fig:NN\_spacing\](Right). We observe a clear GOE/GUE distribution for $h=1$ in the orthogonal/unitary model and a Poisson distribution for $h=5$ in the orthogonal model, as expected; the fitting result for $h=5$ in the unitary model is not shown since it almost coincides with that in the orthogonal model. It is noted that the fitting to the Poisson distribution has minor deviations around the region $s\sim 0$; this is due to a finite size effect, since there always remain exponentially-decaying but finite correlations between levels in a finite system. As we will demonstrate in the subsequent section, the fitting for higher order level spacings is better, since the overlap between levels decays exponentially with their distance in the MBL phase.
A technical issue is that, when counting the level spacings, we choose to take the middle half of the spectrum; alternatively, one can employ an unfolding procedure using a spline interpolation that incorporates all energy levels[@Avishai2002], and the fitting results are almost the same[@Regnault162; @Rao182].
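The spacing counting described here reduces to a few array operations; a possible Python sketch is given below, where keeping the middle half and normalizing the mean spacing to one follow the conventions stated above, and the toy spectrum in the usage example is only a placeholder.

```python
import numpy as np

def middle_half_spacings(levels, order=1):
    """Order-n spacings E_{i+n} - E_i from the middle half of a spectrum,
    normalized so that the mean spacing equals one."""
    E = np.sort(np.asarray(levels))
    lo, hi = len(E) // 4, 3 * len(E) // 4     # keep the middle half of the levels
    E = E[lo:hi]
    s = E[order:] - E[:-order]
    return s / s.mean()

# usage: pool spacings over disorder realizations, then histogram them
rng = np.random.default_rng(1)
toy_spectrum = np.cumsum(rng.exponential(size=2000))   # Poisson-like placeholder levels
s = middle_half_spacings(toy_spectrum)
print("mean spacing (should be 1):", s.mean())
```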
![(Left) The density of states (DOS) $\protect\rho (E)$ of random field Heisenberg model at $L=10$ and $h=1$ in orthogonal case, the DOS is more uniform in the middle part, we therefore choose the middle half levels to do level statistics. (Right) Distribution of nearest level spacings $P(E_{i+1}-E_{i})$, we see a GOE/GUE distribution for $h=1$ in the orthogonal/unitary model, while a Poisson distribution is found for $h=5$ in orthogonal model, the result for $h=5$ in unitary model is not displayed since it coincides with that in the orthogonal model.[]{data-label="fig:NN_spacing"}](Nearest_Spacing.pdf){width="8.7cm"}
Higher Order Level Spacings
===========================
Now we proceed to consider the distribution of higher order level spacings $\left\{ s_{i}^{\left( n\right) }=E_{i+n}-E_{i}\right\} $, using a Wigner-like surmise. We first give the analytical derivation, then provide numerical evidence from simulations of the spin model in Eq. (\[equ:H\]) as well as the non-trivial zeros of the Riemann zeta function.
Analytical Derivation {#analytical}
---------------------
Introduce $P_{n}\left( s\right) =P\left( \left\vert E_{i+n}-E_{i}\right\vert =s\right)$. To apply the Wigner surmise, we now consider $\left( n+1\right) \times \left( n+1\right) $ matrices, for which the distribution $P_{n}\left( s\right) $ reads
$$\begin{aligned}
P_{n}\left( s\right) &\propto &\int_{-\infty }^{\infty }\prod_{i<j}\left\vert E_{i}-E_{j}\right\vert ^{\nu }\delta \left( s-\left\vert E_{1}-E_{n+1}\right\vert \right) \notag \\
&&\times e^{-A\sum_{i=1}^{n+1}E_{i}^{2}}\prod_{i=1}^{n+1}dE_{i}\end{aligned}$$
We first change the variables to
$$x_{i}=E_{i}-E_{i+1}\text{, }i=1,2,...,n\text{; }\quad x_{n+1}=\sum_{i=1}^{n+1}E_{i}\text{,}$$
so that $P_{n}\left( s\right) $ becomes
$$P_{n}\left( s\right) \propto \int_{-\infty }^{\infty }\frac{\partial \left( E_{1},E_{2},...,E_{n+1}\right) }{\partial \left( x_{1},x_{2},...,x_{n+1}\right) }\left( \prod_{i=1}^{n}\prod_{j=i}^{n}\left\vert \sum_{k=i}^{j}x_{k}\right\vert ^{\nu }\right) \delta \left( s-\left\vert \sum_{i=1}^{n}x_{i}\right\vert \right) e^{-\frac{A}{n}\left[ \sum_{i=1}^{n}\sum_{j=i}^{n}\left( \sum_{k=i}^{j}x_{k}\right) ^{2}+x_{n+1}^{2}\right] }\prod_{i=1}^{n+1}dx_{i}.$$
In this expression, the Jacobian $\frac{\partial \left( E_{1},E_{2},...,E_{n+1}\right) }{\partial \left( x_{1},x_{2},...,x_{n+1}\right) }$ and the integral over $x_{n+1}$ are constants that can be absorbed into the normalization factor, hence we can simplify $P_{n}\left( s\right) $ to
$$\begin{aligned}
P_{n}\left( s\right) &\propto &\int_{-\infty }^{\infty }\left( \prod_{i=1}^{n}\prod_{j=i}^{n}\left\vert \sum_{k=i}^{j}x_{k}\right\vert ^{\nu }\right) \delta \left( s-\left\vert \sum_{i=1}^{n}x_{i}\right\vert \right) \notag \\
&&\times e^{-\frac{A}{n}\sum_{i=1}^{n}\sum_{j=i}^{n}\left( \sum_{k=i}^{j}x_{k}\right) ^{2}}\prod_{i=1}^{n}dx_{i}.\end{aligned}$$
Next, we introduce the $n$-dimensional spherical coordinates
$$\begin{aligned}
x_{1} &=&r\cos \theta _{1}\text{; }\quad x_{n}=r\prod_{i=1}^{n-1}\sin \theta _{i}\text{;} \notag \\
x_{i} &=&r\left( \prod_{j=1}^{i-1}\sin \theta _{j}\right) \cos \theta _{i}\text{, \thinspace }i=2,3,...,n-1\text{;} \\
0 &\leq &\theta _{i}\leq \pi \text{, }i=1,2,...,n-2\text{;}\quad 0\leq \theta _{n-1}\leq 2\pi \text{,} \notag\end{aligned}$$
whose Jacobian is
$$\frac{\partial \left( x_{1},x_{2},...,x_{n}\right) }{\partial \left( r,\theta _{1},\theta _{2},...,\theta _{n-1}\right) }=r^{n-1}\prod_{i=1}^{n-2}\sin ^{n-1-i}\theta _{i} \label{equ:Jac}$$
which reduces to the usual spherical coordinates when $n=3$. The resulting expression for $P_{n}\left( s\right) $ is complicated, but since we are mostly interested in the scaling behavior in $s$, we can write it as
$$\begin{aligned}
P_{n}\left( s\right) &\propto &\int_{0}^{\infty }r^{n-1}\int r^{\nu C_{n+1}^{2}}\delta \left( s-r\left\vert G\left( \boldsymbol{\theta }\right) \right\vert \right) \notag \\
&&\times H\left( \boldsymbol{\theta }\right) e^{-\frac{A}{n}r^{2}J\left( \boldsymbol{\theta }\right) }dr\,d\boldsymbol{\theta }\end{aligned}$$
where $C_{n+1}^{2}=n\left( n+1\right) /2$ and $d\boldsymbol{\theta }=\prod_{i=1}^{n-1}d\theta _{i}$. The explanation goes as follows: (i) the first factor $r^{n-1}$ comes from the radial part of the Jacobian in Eq. (\[equ:Jac\]); (ii) the second factor $r^{\nu C_{n+1}^{2}}$ comes from the number of terms in $\prod_{i=1}^{n}\prod_{j=i}^{n}\left\vert \sum_{k=i}^{j}x_{k}\right\vert ^{\nu }$, where each term contributes a factor $r^{\nu }$; (iii) the auxiliary function $G\left( \boldsymbol{\theta }\right) =\sum_{i=1}^{n}x_{i}/r$; (iv) the second auxiliary function $H\left( \boldsymbol{\theta }\right) $ comprises the angular part of the Jacobian and the angular part of $\prod_{i=1}^{n}\prod_{j=i}^{n}\left\vert \sum_{k=i}^{j}x_{k}\right\vert ^{\nu }$; (v) $J\left( \boldsymbol{\theta }\right) $ is the angular part of $\sum_{i=1}^{n}\sum_{j=i}^{n}\left( \sum_{k=i}^{j}x_{k}\right) ^{2}$. The key observation is that $G\left( \boldsymbol{\theta }\right) ,H\left( \boldsymbol{\theta }\right) ,J\left( \boldsymbol{\theta }\right) $ all depend only on $\boldsymbol{\theta }$ and are independent of $r$. Since we are only interested in the scaling behavior in $s$, we can work out the delta function and get
$$P_{n}\left( s\right) \propto s^{\nu C_{n+1}^{2}+n-1}\int H\left( \boldsymbol{\theta }\right) e^{-\frac{AJ\left( \boldsymbol{\theta }\right) }{n\left\vert G\left( \boldsymbol{\theta }\right) \right\vert ^{2}}s^{2}}d\boldsymbol{\theta }$$
Although the integral over $\boldsymbol{\theta }$ is tedious and difficult to handle, it only affects the Gaussian factor and does not influence the scaling behavior in $s$. Therefore we can write $P_{n}\left( s\right) $ as a generalized Wigner-Dyson distribution
$$\begin{aligned}
P_{n}\left( s\right) &=&C\left( \alpha \right) s^{\alpha }e^{-A\left( \alpha \right) s^{2}}\text{, } \label{equ:GWD} \\
\alpha &=&\frac{n\left( n+1\right) }{2}\nu +n-1\text{.} \label{equ:rescale}\end{aligned}$$
The normalization factors $C\left( \alpha \right) $ and $A\left( \alpha \right) $ can be determined by the normalization conditions in Eq. (\[equ:normalization\]), which give
$$A\left( \alpha \right) =\left( \frac{\Gamma \left( \alpha /2+1\right) }{\Gamma \left( \alpha /2+1/2\right) }\right) ^{2}\text{, }C\left( \alpha \right) =\frac{2\Gamma ^{\alpha +1}\left( \alpha /2+1\right) }{\Gamma ^{\alpha +2}\left( \alpha /2+1/2\right) }\text{,}$$
where $\Gamma \left( z\right) =\int_{0}^{\infty }t^{z-1}e^{-t}dt$ is the Gamma function. When $n=1$, $P_{n}\left( s\right) $ reduces to the conventional Wigner-Dyson distribution in Eq. (\[equ:nearest\]).
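The distribution of Eq. (\[equ:GWD\]) with the normalization factors above is easy to evaluate; the following Python sketch is one way to do it, and it also verifies the two normalization conditions numerically for one choice of $(n,\nu)$.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def alpha_index(n, nu):
    """Rescaled exponent of Eq. (equ:rescale): alpha = nu*n(n+1)/2 + n - 1."""
    return nu * n * (n + 1) // 2 + n - 1

def generalized_wigner_dyson(s, n, nu):
    """P_n(s) of Eq. (equ:GWD) for the n-th order spacing in ensemble nu.
    For very large alpha the Gamma powers overflow; work in log space then."""
    a = alpha_index(n, nu)
    A = (gamma(a / 2 + 1) / gamma(a / 2 + 0.5)) ** 2
    C = 2 * gamma(a / 2 + 1) ** (a + 1) / gamma(a / 2 + 0.5) ** (a + 2)
    return C * s ** a * np.exp(-A * s ** 2)

# check normalization and unit mean for, e.g., n = 2 in GUE (nu = 2)
norm, _ = quad(lambda s: generalized_wigner_dyson(s, 2, 2), 0, np.inf)
mean, _ = quad(lambda s: s * generalized_wigner_dyson(s, 2, 2), 0, np.inf)
print(norm, mean)   # both should be close to 1
```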
Interestingly, there exist coincidences between distributions in different ensembles. For example, as can easily be checked, $P_{k}\left( s\right) $ in GSE coincides with $P_{2k}\left( s\right) $ in GOE for arbitrary integer $k$, where the special case $k=1$ has been well-known for circular ensembles[@GSE]; $P_{7}\left( s\right) $ in GOE coincides with $P_{5}\left( s\right) $ in GUE, and so on. We also note that similar results have been proposed for $n\geq 2$ using a phenomenological argument based on several assumptions[@Magd], while our derivation is rigorous and assumption-free.
For the uncorrelated energy levels of the Poisson class, the distribution of higher order spacings can also be obtained. Let us start with $n=2$: we can write $s^{\prime}=E_{i+2}-E_{i}=\left( E_{i+2}-E_{i+1}\right) +\left( E_{i+1}-E_{i}\right) =s_{i+1}+s_{i}$, where $s_{i+1}$ and $s_{i}$ can be treated as independent variables that both follow a Poisson distribution; therefore the distribution $P_{2}\left( s^{\prime }\right) $ of the unnormalized $s^{\prime }$ is
$$P_{2}\left( s^{\prime }\right) \propto \int_{0}^{s^{\prime }}P_{1}\left( s^{\prime }-s_{1}\right) P_{1}\left( s_{1}\right) ds_{1}=s^{\prime }e^{-s^{\prime }}\text{.} \label{equ:recur}$$
Then by requiring the normalization conditions we arrive at $P_{2}\left( s\right) =4se^{-2s}$, which is nothing but the semi-Poisson distribution. Repeating this procedure $n-1$ times, we arrive at
$$P_{n}\left( s\right) =\frac{n^{n}}{\Gamma \left( n\right) }s^{n-1}e^{-ns}\text{,} \label{equ:Pn}$$
which is a generalized semi-Poisson distribution with index $n$. Compared to the Poisson distribution for nearest level spacings, it is crucial to note that $P_{n}\left( 0\right) =0$ for $n\geq 2$; this is not a result of level repulsion as in the Gaussian ensembles, rather, it simply states that $n+1\left( n\geq 2\right) $ consecutive levels do not coincide.
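Equation (\[equ:Pn\]) can be checked by direct sampling, since the $n$-th order spacing of a Poisson spectrum is a sum of $n$ independent exponential spacings; a brief Python sketch (sample size and binning being arbitrary choices) follows.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(2)
n = 3
s = rng.exponential(size=(200000, n)).sum(axis=1)   # n-th order Poisson spacing
s /= s.mean()                                        # normalize to unit mean

def semi_poisson(s, n):
    """Generalized semi-Poisson distribution of Eq. (equ:Pn)."""
    return n ** n / gamma(n) * s ** (n - 1) * np.exp(-n * s)

hist, edges = np.histogram(s, bins=50, range=(0, 4), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print("max deviation:", np.abs(hist - semi_poisson(centers, n)).max())
```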
We note that every $P_{n}\left( s\right) $ in the Gaussian and Poisson ensembles tends to the Dirac delta function $\delta \left( s-1\right) $ in the limit $n\rightarrow \infty $, which is easily understood since in that limit only one spacing remains in the spectrum. Finally, we want to emphasize that the levels are well-correlated in the three Gaussian ensembles, hence the derivation of $P_{n}\left( s\right) $ for the Poisson ensemble in Eq. (\[equ:recur\]) does not hold there; otherwise the result would deviate dramatically[@Rubah].
For convenience, we list the order of the polynomial part of $P_{n}\left( s\right) $ for the three Gaussian ensembles as well as the Poisson ensemble up to $n=8$ in Table \[tab:1\]; note that the exponential part is of Gaussian type in the former class and is an exponential decay in the Poisson ensemble.
$n$ $1$ $2$ $3$ $4$ $5$ $6$ $7$ $8$
--------- ----- ------ ------ ------ ------ ------ ------- -------
GOE $1$ $4$ $8$ $13$ $19$ $26$ $34$ $43$
GUE $2$ $7$ $14$ $23$ $34$ $47$ $62$ $79$
GSE $4$ $13$ $26$ $43$ $64$ $89$ $118$ $151$
Poisson $0$ $1$ $2$ $3$ $4$ $5$ $6$ $7$
: The order of the polynomial term in $P_{n}(s)$ for the three Gaussian ensembles as well as Poisson ensemble, the decaying term is Gaussian type for the former class and exponential decay for the latter.[]{data-label="tab:1"}
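Table \[tab:1\] follows directly from the rescaling relation Eq. (\[equ:rescale\]) (with $\alpha=n-1$ in the Poisson case); a few lines of Python reproduce it.

```python
# Reproduce Table 1: polynomial order alpha = nu*n(n+1)/2 + n - 1 (Poisson: n - 1)
ensembles = {"GOE": 1, "GUE": 2, "GSE": 4}
for name, nu in ensembles.items():
    row = [nu * n * (n + 1) // 2 + n - 1 for n in range(1, 9)]
    print(name, row)
print("Poisson", [n - 1 for n in range(1, 9)])
```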
Numerical Simulation {#numerics}
--------------------
To show how well the generalized Wigner-Dyson distribution in Eq. (\[equ:GWD\]) works for matrices of large dimension, we now perform numerical simulations of the random spin model in Eq. (\[equ:H\]), where we again pick the middle half of the levels to do the statistics. We have tested the formula up to $n=5$, and in Fig. \[fig:higher\_spacing\] we display the fitting results for $n=2$ and $n=3$.
![Distribution of next-nearest level spacings $P(E_{i+2}-E_{i})$ (Left) and next-next-nearest level spacings $P(E_{i+3}-E_{i})$ (Right), where $\alpha$ and $n$ are the indices in Eq. (\[equ:GWD\]) and Eq. (\[equ:Pn\]) respectively.[]{data-label="fig:higher_spacing"}](higher_spacing.pdf){width="8.7cm"}
As expected, the fittings are quite accurate for GOE and GUE as well as for the Poisson ensemble. In fact, the fittings for higher order spacings in the Poisson ensemble are better than that for the nearest spacing in Fig. \[fig:NN\_spacing\](Right). This is because in the MBL phase the overlap between levels decays exponentially with their distance, hence the fitting for higher order level spacings is less affected by the finite size effect.
For another example we consider the non-trivial zeros of the Riemann zeta function
$$\zeta \left( z\right) =\sum_{n=1}^{\infty }\frac{1}{n^{z}}\text{.}$$
It is well established that the statistical properties of the non-trivial Riemann zeros $\left\{ \gamma _{i}\right\} $ are well described by the GUE distribution[@Zeta]. Therefore, we expect the gaps $\left\{s^{(n)}_i=\gamma _{i+n}-\gamma _{i}\right\} $ to follow the same distributions as those in GUE. The numerical results for $n=1,2,3$ are presented in Fig. \[fig:zeta\]; as can be seen, the fittings are perfect.
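The analysis of the zeros proceeds exactly as for the spin spectra. The sketch below assumes a plain-text file with one zero per line (the file name is a placeholder for one of the tables of Ref. \[\]) and computes the normalized $n$-th order gaps to be compared with Eq. (\[equ:GWD\]) at $\nu=2$.

```python
import numpy as np

# "zeros.txt" is a placeholder: one zero gamma_i per line, consecutive and sorted
gammas = np.loadtxt("zeros.txt")

def nth_order_spacings(levels, n):
    """Normalized n-th order gaps gamma_{i+n} - gamma_i."""
    s = levels[n:] - levels[:-n]
    return s / s.mean()

for n in (1, 2, 3):
    s = nth_order_spacings(gammas, n)
    print(n, s.mean(), s.var())   # histogram s and compare with Eq. (equ:GWD), nu = 2
```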
![The distribution of $n$-th order spacings of the non-trivial zeros $\{\gamma_i\}$ of the Riemann zeta function, where $\alpha$ is the index in the generalized Wigner-Dyson distribution in Eq. (\[equ:GWD\]). The data comes from $10^4$ levels starting from the $10^{22}$th zero, taken from Ref. \[\].[]{data-label="fig:zeta"}](ZetaFit.pdf){width="8cm"}
Higher Order Spacing Ratios {#ratio}
===========================
As mentioned in Sec. \[intro\], besides the level spacings, another quantity is also widely used in the study of random matrices, namely the ratio between adjacent gaps $\left\{ r_{i}=\frac{s_{i}}{s_{i-1}}\right\} $, which is independent of local DOS. The distribution of nearest gap ratios $P\left( \nu ,r\right) $ is given in Ref. \[\], whose result is
$$P\left( \nu ,r\right) =\frac{1}{Z_{\nu }}\frac{\left( r+r^{2}\right) ^{\nu }}{\left( 1+r+r^{2}\right) ^{1+3\nu /2}}$$
where $\nu =1,2,4$ for GOE, GUE, GSE, and $Z_{\nu }$ is the normalization factor determined by requiring $\int_{0}^{\infty }P\left( \nu ,r\right) dr=1$.
This gap ratio can also be generalized to higher order, but in different ways, i.e. the overlapping [@Atas; @Atas2] and the non-overlapping [@Tekur; @Chavda] way. In the former case we are dealing with
$$\widetilde{r}_{i}^{\left( n\right) }=\frac{E_{i+n}-E_{i}}{E_{i+n-1}-E_{i-1}}=\frac{s_{i+n}+s_{i+n-1}+...+s_{i+1}}{s_{i+n-1}+s_{i+n-2}+...+s_{i}}\text{,}$$
which is named the overlapping ratio since there are shared spacings between the numerator and the denominator. The non-overlapping ratio is instead defined as
$$r_{i}^{\left( n\right) }=\frac{E_{i+2n}-E_{i+n}}{E_{i+n}-E_{i}}=\frac{s_{i+2n}+s_{i+2n-1}+...+s_{i+n+1}}{s_{i+n}+s_{i+n-1}+...+s_{i}}\text{.}$$
These two generalizations are quite different when we study their distributions using a Wigner-like surmise: for the overlapping ratio $\widetilde{r}_{i}^{\left( n\right) }$, the smallest matrix dimension is $\left( n+2\right) \times \left( n+2\right) $, while it is $\left( 1+2n\right) \times \left( 1+2n\right) $ for the non-overlapping ratio; only for $n=1$ do the two coincide. Naively, we can expect the distribution of $\widetilde{r}^{\left( n\right) }$ to be more involved due to the overlapping spacings. Indeed, the $n=2$ case of $P\left( \widetilde{r}^{\left( n\right) }\right) $ has been worked out in Ref. \[\] and the result is very complicated. The non-overlapping ratio is less studied; Ref. \[\] provides compelling numerical evidence that its distribution is
$$\begin{aligned}
P\left( \nu ,r^{\left( n\right) }\right) &=&P\left( \nu ^{\prime },r\right) \text{, } \label{equ:rn} \\
\nu ^{\prime } &=&\frac{n\left( n+1\right) }{2}\nu +n-1\text{.} \label{equ:rescale2}\end{aligned}$$
Surprisingly, the rescaling relation Eq. (\[equ:rescale2\]) coincides with that for higher order level spacings in Eq. (\[equ:rescale\]). We have also confirmed this formula by numerical simulations of our spin model Eq. (\[equ:H\]); the results for $n=2$ in the GOE ($\nu =1$) case are presented in Fig. \[fig:ratiocom\], where we also draw the distribution of the overlapping ratio $\widetilde{r}^{\left( 2\right) }$ for comparison. As can be seen, they differ dramatically, and the fitting for the non-overlapping ratio is quite accurate. This result strongly suggests that the non-overlapping ratio is more universal than the overlapping one, and that its distribution $P\left( r^{\left( n\right) }\right) $ is related in a homogeneous way to that of the $n$-th order level spacing, for which we provide a heuristic explanation as follows.
For a given energy spectrum $\left\{ E_{i}\right\} $ from a Gaussian ensemble with index $\nu $, we can make up a new spectrum $\left\{ E_{i}^{^{\prime }}\right\} $ by picking one level out of every $n$ levels of $\left\{ E_{i}\right\} $; then the $n$-th order level spacing $s^{\left( n\right) }$ in $\left\{ E_{i}\right\} $ becomes the nearest level spacing in $\left\{ E_{i}^{^{\prime }}\right\} $, and the $n$-th order non-overlapping ratio in $\left\{ E_{i}\right\} $ becomes the nearest gap ratio in $\left\{ E_{i}^{^{\prime }}\right\} $. Since we have analytically proven the rescaling relation in Eq. (\[equ:rescale\]), we conjecture that the probability density of $\left\{ E_{i}^{^{\prime }}\right\} $ (to leading order) bears the same form as that of $\left\{ E_{i}\right\} $ in Eq. (\[equ:Dist\]) with the rescaled parameter $\alpha $ in Eq. (\[equ:rescale\]). Therefore, the higher order non-overlapping gap ratios also follow the same rescaling, as expressed in Eq. (\[equ:rn\]) and Eq. (\[equ:rescale2\]).
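In practice both generalizations are computed directly from the sorted spectrum; a minimal Python sketch of the two definitions, together with a small GOE usage example, is given below (the matrix size is an arbitrary illustrative choice).

```python
import numpy as np

def overlapping_ratios(E, n):
    """tilde r_i^(n) = (E_{i+n} - E_i) / (E_{i+n-1} - E_{i-1})."""
    E = np.sort(E)
    return (E[n + 1:] - E[1:-n]) / (E[n:-1] - E[:-n - 1])

def non_overlapping_ratios(E, n):
    """r_i^(n) = (E_{i+2n} - E_{i+n}) / (E_{i+n} - E_i)."""
    E = np.sort(E)
    return (E[2 * n:] - E[n:-n]) / (E[n:-n] - E[:-2 * n])

# usage on a single GOE spectrum
rng = np.random.default_rng(3)
A = rng.normal(size=(400, 400))
E = np.linalg.eigvalsh((A + A.T) / 2)
print(np.mean(non_overlapping_ratios(E, 2)), np.mean(overlapping_ratios(E, 2)))
```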
![The distribution of second-order gap ratio in the orthogonal model, where red and blue dots correspond to overlapping and non-overlapping ratios respectively, the latter fits perfectly with the formula in Eq. (\[equ:rn\]) with $\nu^{\prime}=4$. Note the data is taken from the whole energy spectrum without unfolding.[]{data-label="fig:ratiocom"}](ratioComparison.pdf){width="8cm"}
Conclusion and Discussion {#conclusion}
=========================
We have analytically studied the distribution of higher order level spacings $\left\{ s_{i}^{\left( n\right) }=E_{i+n}-E_{i}\right\} $, which describes level correlations on longer ranges. It is shown that $s^{\left( n\right) }$ in the Gaussian ensemble with index $\nu $ follows a generalized Wigner-Dyson distribution with index $\alpha =\nu C_{n+1}^{2}+n-1$, where $\nu =1,2,4$ for GOE, GUE, GSE respectively. This results in the coincidence of the distribution of $s^{\left( 2k\right) }$ in GOE with that of $s^{\left( k\right) }$ in GSE, while $s^{\left( n\right) }$ in the Poisson ensemble follows a generalized semi-Poisson distribution with index $n$. Our derivation is rigorous, based on a Wigner-like surmise, and the results have been confirmed by numerical simulations of a random spin system and of the non-trivial zeros of the Riemann zeta function.
We also discussed the higher order generalization of gap ratios, which comes in two different ways – the overlapping and the non-overlapping way – and pointed out their difference when studying their distributions using a Wigner-like surmise. Notably, the distribution of the non-overlapping gap ratio has been studied numerically in Ref. \[\], in which the authors find a scaling relation Eq. (\[equ:rescale2\]) that is identical to the one we find analytically for higher order level spacings. This strongly indicates that the distributions of the higher order spacing and of the non-overlapping gap ratio are correlated in a homogeneous way, for which we provided a heuristic explanation.
Our derivations are rigorous, based only on universal properties of random matrices and independent of the concrete physical Hamiltonian; hence they can be applied to a variety of models in related areas.
It is interesting to note that the distribution of the next-nearest level spacing in the Poisson class is the semi-Poisson $P_{2}\left( s\right) \propto s\exp \left( -2s\right) $, which is suggested to be the distribution of the nearest level spacing at the thermal-MBL transition point in the orthogonal model [@Serbyn]. This is either a mathematical coincidence or an indication that the universality property of this transition point is more affected by the MBL phase than by the thermal phase. Besides, in this paper the distribution of the higher order level spacing is derived only for an $\left( n+1\right) \times \left( n+1\right) $ matrix; its exact form for large matrices, as well as the difference between the two, can in principle be estimated using the method in Ref. \[\], which is left for a future study.
Acknowledgements {#acknowledgements .unnumbered}
================
The author acknowledges the helpful discussions with Xin Wan and Rubah Kausar. This work is supported by the National Natural Science Foundation of China through Grant No.11904069 and No.11847005.
[99]{} C. E. Porter, Statistical Theories of Spectra: Fluctuations (Academic Press, New York), 1965.
T. A. Brody et al., Rev. Mod. Phys. **53**, 385 (1981).
T. Guhr, A. Muller-Groeling, H. A. Weidenmuller, Phys. Rep. **299**, 189 (1998).
M. L. Mehta, Random Matrix Theory, Springer, New York (1990).
F. Haake, Quantum Signatures of Chaos (Springer 2001).
E. P. Wigner, in Conference on Neutron Physics by Timeof-Flight (Oak Ridge National Laboratory Report No. 2309, 1957) p. 59.
J. M. G. Gomez, R. A. Molina, A. Relano, and J. Retamosa, Phys. Rev. E **66**, 036209 (2002).
V. Oganesyan and D. A. Huse, Phys. Rev. B **75**, 155111 (2007).
Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Phys. Rev. Lett. **110**, 084101 (2013).
V. Oganesyan, A. Pal, D. A. Huse, Phys. Rev. B **80**, 115104 (2009).
A. Pal, D. A. Huse, Phys. Rev. B **82**, 174411 (2010).
S. Iyer, V. Oganesyan, G. Refael, D. A. Huse, Phys. Rev. B **87**, 134202 (2013).
X. Li, S. Ganeshan, J. H. Pixley, and S. Das Sarma, Phy. Rev. Lett. **115**, 186601 (2015).
David J. Luitz, Nicolas Laflorencie, and Fabien Alet, Phys. Rev. B **91**, 081103(R) (2015).
Y. Y. Atas, E. Bogomolny, O. Giraud, P. Vivo, and E. Vivo, J. Phys. A: Math. Theor. **46**, 355204 (2013).
S. H. Tekur, S. Kumar and M. S. Santhanam, Phys. Rev. E **97**, 062212 (2018).
S. H. Tekur, U. T. Bhosale, and M. S. Santhanam, Phys. Rev. B **98**, 104305 (2018).
P. Rao, M. Vyas, and N. D. Chavda, arXiv:1912.05664v1.
R. Kausar, W.-J. Rao, and X. Wan, arXiv:2005.00721.
N. Regnault and R. Nandkishore, Phys. Rev. B **93**, 104203 (2016).
Y. Avishai, J. Richert, and R. Berkovits, Phys. Rev. B **66**, 052416 (2002).
S. D. Geraedts, R. Nandkishore, and N. Regnault, Phys. Rev. B **93**, 174202 (2016).
W.-J. Rao, J. Phys.:Condens. Matter **30**, 395902 (2018).
M. L. Mehta and F. J. Dyson, Journal of Mathematical Physics, **4** (1963).
A. Y. Abul-Magd and M. H. Simbel, Phys. Rev. E **60**, 5371 (1999).
H. L. Montgomery, Proc. Symp. Pure Math. **24**, 181 (1973); E. B. Bogomolny and J. P. Keating, Nonlinearity **8**, 1115 (1995); ibid Nonlinearity , 911 (1995); Z. Rudnick and P. Sarnak, Duke Math. J. **81**, 269 (1996); J. P. Keating and N. C. Snaith, Comm. Math. Phys. **214**, 57 (2000).
A. Odlyzko, www.dtc.umn.edu/$\sim$odlyzko/zeta\_tables/index.html.
M. Serbyn and J. E. Moore, Phys. Rev. B **93**, 041424(R) (2016).
---
author:
- 'Mohit Vohra, Ravi Prakash, and Laxmidhar Behera, *Senior Member, IEEE* [^1]'
bibliography:
- 'citations.bib'
title: 'Real-time Grasp Pose Estimation for Novel Objects in Densely Cluttered Environment'
---
[^1]: All authors are with the Department of Electrical Engineering, Indian Institute of Technology Kanpur, Uttar Pradesh 208016, India. Email Ids: (mvohra, ravipr, lbehera)@iitk.ac.in.
---
abstract: |
Sequential Monte Carlo has become a standard tool for Bayesian inference of complex models. This approach can be computationally demanding, especially when initialized from the prior distribution. On the other hand, deterministic approximations of the posterior distribution are often available, but with no theoretical guarantees. We propose a bridge sampling scheme starting from such a deterministic approximation of the posterior distribution and targeting the true one. The resulting Shortened Bridge Sampler (SBS) relies on a sequence of distributions that is determined in an adaptive way.
We illustrate the robustness and the efficiency of the methodology on a large simulation study. When applied to network datasets, SBS inference leads to statistical conclusions that differ from those supplied by the standard variational Bayes approximation.
author:
- Sophie Donnet
- Stéphane Robin
bibliography:
- 'biblio.bib'
date: 'Received: date / Accepted: date'
subtitle: Using deterministic approximations to accelerate SMC for posterior sampling
title: 'Shortened Bridge Sampler: '
---
Introduction \[sec:Intro\]
==========================
In Bayesian statistics, except for a restricted number of conjugate models, the posterior distribution does not have a closed form and requires the use of approximation methods. The 1990’s witnessed the development of powerful stochastic methods and simulation-based algorithms able to perform Bayesian statistical inference on complex statistical models.
Among them, Monte Carlo Markov Chains (MCMC) –whose principle is to generate a Markov Chain such that its ergodic distribution is the posterior distribution (see for instance [@Robert:2005; @RoC09] for an introduction)– have been successfully applied to many problems, as attested by the countless publications in that domain. However, MCMCs suffer from several drawbacks. First of all, it is difficult to assess whether the chain has reached its ergodic distribution or not. Secondly, if the distribution of interest is highly multi-modal, MCMC algorithms can be trapped in local modes. More generally, when the space of parameters to be explored is of high dimension, MCMC algorithms will have difficulties in reaching their equilibrium distribution within a reasonable computational time.
Recently, population based Monte Carlo methods have proved their efficiency and robustness when facing high dimensional and multimodal spaces. In a few words, population based Monte Carlo algorithms generate a large sample of parameters from a tractable distribution and update the importance sampling weights at each iteration, in order to finally match the distribution of interest. Among population based Monte Carlo methods, Sequential Monte Carlo (SMC) combines parameter sampling and resampling. More precisely, a sequence of distributions of interest is designed, such that the first one is simple (i.e. easy to sample from) and the last one is the posterior distribution. This sequence of distributions defines the iterations of the algorithm. At the first iteration, a sample of parameters is simulated from the first distribution. In the following iterations, the parameters are stochastically moved, weighted and resampled to follow the current distribution. The true posterior distribution is reached at the last iteration. The sequence of distributions can be dynamically designed. Primarily developed in the context of filtering problems (see [@Doucet2001]), SMC methods have been extended to the general problem of posterior sampling by [@DelMoral2006]. In comparison with MCMC methods, SMC does not require any burn-in period or convergence diagnostic. In addition, whereas computing the marginal likelihood (for model comparison) has always been a challenging issue when using MCMC, SMC supplies an unbiased estimator of this quantity as a by-product of the algorithm. For all these reasons, SMC has proved its superiority over MCMC for complex models.
In recent years, particular fields (such as genomics or network analysis, to name but a few) have brought new statistical problems involving an increasing amount of data or statistical models with a large number of parameters. In such cases, not only MCMC but also population Monte Carlo algorithms have reached their limitations, requiring unreasonable computational time to explore the posterior distribution. To deal with such difficulties, deterministic approximations of the posterior distribution based on mathematical optimization tools – such as variational approximation ([@WaJ08; @Blei2016]), Expectation-Propagation ([@Minka:2001]) or Integrated nested Laplace approximation ([@Rue09]) for instance – have been proposed. These methods have the great advantage of being computationally light and can handle large data. However, their theoretical properties and accuracy are still under study. In particular, we do know that variational approximations can supply underestimated posterior variances (see for instance [@CoM07] for a large illustration of this phenomenon on the Probit model).
On the one hand, SMC supplies a sample from the exact posterior distribution but can require unacceptable computational time. On the other hand, deterministic approximations (optimal in a sense to be determined) of the posterior distribution are fast but non-exact. One may therefore be tempted to take advantage of the two approaches in a combined strategy. The idea of combining variational Bayes inference with SMC is actually not new. [@RAJ15] split the data into blocks and compute the posterior distribution of $\theta$ given each block. They use a variational argument to propose the product of these partial posteriors as a proxy for the true posterior. Focusing on Gaussian mixtures, [@McGPTAK16] consider online inference and propose a sequential sampling scheme where, for each new batch of data, the variational approximation is iteratively updated and used as a proposal. [@Naesseth2017] use an SMC approach to get an improved, but still biased, variational approximation. Our approach is different from all these. Our main idea is to design a bridge sampling from the approximate posterior distribution to the true posterior distribution, the transfer from the approximate to the exact distribution being performed with an SMC algorithm ([@DelMoral2006]). The sampling method we propose can be considered from two points of view: either SMC is seen as a tool to correct the approximate distribution, or the approximate posterior distribution is seen as a means to drastically accelerate the SMC procedure.
Adopting the latter perspective places the problem within a larger topic. Indeed, in general, for any challenging statistical model at stake, there exists a frequentist solution supplying a point estimate of the parameter of interest. In Bayesian practice, this point estimate is standardly set as an initial value in the MCMC algorithm, in the hope of decreasing the computational time. In an SMC strategy starting with a sample from the prior distribution, such an initial point value is meaningless. We claim that the posterior distribution can be reached in a reduced computational time if the bridge sampling scheme starts from an approximate posterior distribution based on that point estimate. We therefore refer to the proposed sampling method as the Shortened Bridge Sampler ([SBS]{}).
The paper is organized as follows. Section \[sec:Algo\] is dedicated to the description of the methodology. We first recall the principle of importance sampling, then introduce the sampling path and expose the algorithm in Subsection \[subsec:algo\]. Its robustness and efficiency are illustrated on several simulated experiments in Section \[sec:Simul\]. In Subsection \[sec:log\], the logit regression serves as a toy example to illustrate the computational time reduction and to test the robustness of the method with respect to the quality of the deterministic approximation of the posterior distribution. The Latent Class Analysis model (Subsection \[subsec:LCA\]) is exploited to illustrate the relevance of our methodology on a mixture model; in particular, we propose a new strategy to tackle the label switching issue. On the Stochastic Block Model (SBM) with covariates (Subsection \[subsec:SBMreg\]), we compare our strategy with the Variational Bayesian one in terms of model selection and model averaging. Finally, real datasets of social networks with covariates are presented in Section \[sec:Illust\]: we stress the new insights brought by a “correction” of the Variational posterior approximation by the SMC strategy in terms of both model averaging and significance of the parameters.
From the approximate posterior distribution to the true posterior distribution {#sec:Algo}
==============================================================================
Let us first introduce some notations. ${\boldsymbol{Y}}$ denotes the observations, $\ell({\boldsymbol{Y}}|{\theta})$ is the likelihood function with ${\theta}\in \Theta$ the unknown parameters and $\pi({\theta})$ is the prior distribution on ${\theta}$. The Bayesian inference is based on the posterior distribution: $$\label{eq:Bayes}
p({\theta}| {\boldsymbol{Y}}) = \frac{\ell({\boldsymbol{Y}}| {\theta}) \pi({\theta})}{p({\boldsymbol{Y}})}.$$ where $p({\boldsymbol{Y}})$ is the marginal likelihood defined as: $$\label{eq:marg}
p({\boldsymbol{Y}}) = \int \ell({\boldsymbol{Y}}| {\theta}) \pi({\theta}) d{\theta}$$ and is required in the Bayesian model selection procedure.
In what follows, ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is an approximate posterior distribution on ${\theta}$. *We assume that ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ can easily be simulated intensively and that the density function of ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ has an explicit expression.* The aim of this paper is to propose a way to use such an approximate posterior to actually sample from the true posterior.
Note that, in general, complex statistical models are written as hierarchical models and involve latent variables ${\boldsymbol{Z}}$ (see Sections \[subsec:LCA\] and \[subsec:SBMreg\]). In such cases, the distributions of interest are the joint distribution $p({\boldsymbol{Z}},{\theta}| {\boldsymbol{Y}})$ or the marginal one $p({\theta}| {\boldsymbol{Y}})$. For the sake of simplicity, we chose to present the method without latent variables but obviously, all the following results and algorithms can be extended to this situation, replacing ${\theta}$ by $({\boldsymbol{Z}},{\theta})$. A substantial part of the simulations presented in Section \[sec:Simul\] is devoted to such models.
A first approach: Importance Sampling {#sec:is}
-------------------------------------
A first naive approach to using ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ consists in resorting to a simple importance sampling (IS) strategy, that is to say sampling $({\theta}^m)_{m = 1\dots M}$ from ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ and weighting the sample by $W^m \propto {\ell({\boldsymbol{Y}}| {\theta}^m) \pi({\theta}^m)}/{{{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta}^m)}$. However, this strategy is obviously naive for several reasons. First of all, there is no guarantee that the support of the approximate distribution includes the support of the true distribution; the contrary is even observed in practice for the variational approximation, for instance [see @CoM07; @WaT05]. As a consequence, the posterior sample obtained through such an importance sampling strategy would be restricted to the support of ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$, which can be strictly included in the support of $p(\cdot | {\boldsymbol{Y}})$. Secondly, if ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ and ${p(\cdot | {\boldsymbol{Y}})}$ are very different, the sample will degenerate, meaning that very few particles will have a non-negligible weight. This results in a small Effective Sample Size ($ESS$), which the algorithm we propose aims at keeping high along the iterations. In such situations, there is no hope of efficiently sampling using a ’one-step’ IS, but the principle can be used iteratively to progressively shift from the initial proposal to the true posterior distribution.
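To fix ideas, a minimal Python sketch of this one-step IS correction for a generic model is given below; the functions `log_lik`, `log_prior`, `approx_sample` and `approx_logpdf` are placeholders to be supplied for the model at hand, and monitoring the $ESS$ returned here reveals the degeneracy discussed above.

```python
import numpy as np

def one_step_is(log_lik, log_prior, approx_sample, approx_logpdf, M=1000):
    """Naive importance sampling from the approximate posterior p_tilde."""
    theta = approx_sample(M)                               # theta^m ~ p_tilde
    log_w = (np.array([log_lik(t) + log_prior(t) for t in theta])
             - np.array([approx_logpdf(t) for t in theta]))
    log_w -= log_w.max()                                   # stabilize exponentiation
    W = np.exp(log_w)
    W /= W.sum()                                           # normalized weights W^m
    ess = 1.0 / np.sum(W ** 2)                             # effective sample size
    return theta, W, ess
```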
A path sampling between the approximate and the true posterior distributions
----------------------------------------------------------------------------
The main idea of this paper is to take advantage of the deterministic approximation ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ of the posterior distribution to accelerate SMC procedure, or, inversely to transform sequentially a sample from the deterministic approximated posterior distribution into a sample from the true posterior distribution.
Sequential Monte Carlo samplers generate samples from a sequence of intermediate distributions $({p_h})_{h = 0\dots H}$, where the intermediate distributions are smooth transitions from a simple distribution $p_0$ to the distribution of interest $p_H = p(\cdot | {\boldsymbol{Y}})$. A classical choice for $({p_h})_{h = 0\dots H}$ [@Neal2001] is to consider: $$\begin{aligned}
\label{eq:pih2}
{p_h}({\theta}) & \propto & \pi({\theta}) \ell({\boldsymbol{Y}}| {\theta}) ^{\rho_h}\end{aligned}$$ where $\rho_0 = 0$, $\rho_H = 1$, thus slowly moving from the prior distribution to the posterior by progressively integrating the data ${\boldsymbol{Y}}$ through the likelihood function. In this paper, we propose an alternative scheme moving smoothly from the approximate posterior distribution ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ to the true ${p(\cdot | {\boldsymbol{Y}})}$. The path is thus defined by: $$\begin{aligned}
\label{eq:pih}
{p_h}({\theta}) & \propto & {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}(p( {\theta}| {\boldsymbol{Y}})) ^{\rho_h}\nonumber\\
& \propto & {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}( \pi({\theta}) \ell({\boldsymbol{Y}}| {\theta})) ^{\rho_h}.\end{aligned}$$ where, $\rho_0 = 0$, $\rho_H = 1$. In a few words, we start from the easy-to-sample distribution $ {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})$ and progressively replace it with the true posterior distribution, this strategy being known as annealed importance sampling procedure [@Neal2001]. We claim that this scheme significantly reduces the computational time and is robust with respect to $ {{\widetilde{p}}_{{\boldsymbol{Y}}}}$.
Note that if ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is chosen to be the prior distribution $\pi(\cdot)$, then schemes and are identical.
An alternative strategy would consist in using ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ as an importance sampler in the first iteration of the standard annealing scheme defined in . However, in such a strategy, the approximate distribution ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is under-exploited, since at the first iteration the particles are reweighted with $W_0^m \propto {\pi({\theta}^m)}/{{{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta}^m |{\boldsymbol{Y}})}$, thus going back, in practice, to a (possibly truncated) version of the prior distribution. This phenomenon will be illustrated in Section \[sec:log\].
To sample from the sequence of distributions $(p_h)_{h = 1, \dots, H}$, we adopt the sequential sampler proposed by [@DelMoral2006] where the annealing coefficients $\rho_h$ will be adjusted dynamically. We describe the algorithm in the following subsection.
Shortened Bridge Sampling Algorithm {#subsec:algo}
-----------------------------------
We now need to design an algorithm that sequentially samples from $ {p_h}({\theta}) \propto {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}( \pi({\theta}) \ell({\boldsymbol{Y}}| {\theta})) ^{\rho_h}$. Recent years have witnessed a proliferation of scientific papers dealing with SMC methods and their applications [see @Doucet2001 for an overview]. In our work, we resort to the algorithm proposed by @DelMoral2006.
Let us introduce the following notations: $$\begin{aligned}
\label{eq:gammah_Zh}
\gamma_h(\theta) & = & {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-\rho_h} \left[\ell({\boldsymbol{Y}}| {\theta}) \pi({\theta})\right]^{\rho_h},\end{aligned}$$ where $ Z_h = \int \gamma_h(\theta) {\text{ d}}\theta \nonumber $, so that ${p_h}(\theta) = \gamma_h(\theta) / Z_h$ is a probability density. The main idea of [@DelMoral2006] is to plunge the problem of sampling a sequence of distributions defined on a single set $\Theta$ into the standard SMC filtering framework. To that purpose, the sequence $({p_h})_{h = 0\dots H}$ is replaced by a sequence of extended distributions: $$\label{eq:pibarh}
{\overline{p}}_h({{\theta}_{0:h}}) = \frac{{\overline{\gamma}}_h({{\theta}_{0:h}})}{Z_h}$$ with $$\label{eq:pibarh2}
{\overline{\gamma}}_h( {{\theta}_{0:h}}) = \gamma_h({{\theta}_h}) \prod_{k = 1}^{h} L_{k}\left({{\theta}_{k-1}}| {{\theta}_{k}}\right)$$ where ${{\theta}_{0:h}}= ({{\theta}_{0}},\dots, {{\theta}_h}) \in \Theta \times \dots\times \Theta = \Theta^{h+1}$ and $(L_k)_{k = 0,\dots H-1}$ is a sequence of backward kernels satisfying: $$\label{eq:L}
\int L_k\left({{\theta}_{k-1}}| {{\theta}_{k}}\right)d{\theta}_{k-1} = 1, \quad \forall k = 0\dots H-1.$$ Due to Property , the marginal version of ${\overline{p}}_h$ (i.e. when integrating out ${{\theta}_{0}}$ , $\dots$, ${\theta}_{h-1}$) is the distribution of interest ${p_h}$. Once defined the sequence $({\overline{p}}_h)_{h = 0\dots H}$, one may use the original SMC algorithm designed by @Doucet2001 for filtering. At iteration $h$, the SMC sampler involves three steps:
- *Moving the particles* from ${\theta}_{h-1}$ to ${\theta}_{h}$ using a transition kernel $ K_h({{\theta}_h}| {\theta}_{h-1})$. As a consequence, let $ \eta_{h-1}({\theta}_{0:h-1})$ denote the sampling kernel for $ {\theta}_{0:h-1}$ until iteration $h-1$, $\eta_h$’s expression is: $$\label{eq:eta}
\eta_h({{\theta}_{0:h}}) = \eta_{h-1}({\theta}_{0:h-1}) K_h({{\theta}_h}| {\theta}_{h-1})$$
- *Reweighting the particles* in order to correct the discrepancy between the sampling distribution $\eta_h$ and the distribution of interest at iteration $h$, ${\overline{p}}_h$.
- *Selecting the particles* in order to reduce the variability of the importance sampling weights and avoid degeneracy. In practice the particles will be resampled when the $ESS$ decreases below a pre-specified rate.
#### About the importance weights.
At iteration $h$, the importance sampling weights for $({{\theta}_{0:h}}^m)_{m = 1\dots M}$ are: $\forall m = 1\dots M$, $$\label{eq:w1}
w_h^m = {w_h}({{\theta}_{0:h}}^m) = \frac{{\overline{\gamma}}_h({{\theta}_{0:h}}^m)}{\eta_h({{\theta}_{0:h}}^m)}$$ in their unnormalized version. $(W^m_h)_{m = 1\dots M }$ denotes the normalized weights, i.e. $$\label{eq:wW}
{W_h}^m = \frac{{w_h^m}}{\sum_{m' = 1}^M w^{m'}_h}, \quad \forall m = 1\dots M$$ Equations (\[eq:pibarh\]-\[eq:pibarh2\]-\[eq:eta\]-\[eq:w1\]) imply a recurrence formula for the weight of any particle ${{\theta}_{0:h}}$: $$\label{eq:w}
{w_h}({{\theta}_{0:h}}) = w_{h-1}({\theta}_{0:h-1}) {\widetilde{w}}_{h-1:h}({\theta}_{h-1},{{\theta}_h})$$ where the incremental weight $ {\widetilde{w}}_{h-1:h}({\theta}_{h-1},{{\theta}_h})$ is equal to: $$\label{eq:wtilde}
{\widetilde{w}}_{h-1:h}({\theta}_{h-1},{{\theta}_h}) = \frac{L_h({\theta}_{h-1} | {\theta}_h)}{ K_h({{\theta}_h}| {\theta}_{h-1})}\frac{\gamma_h({{\theta}_h})}{\gamma_{h-1}({\theta}_{h-1})}$$
#### About the transition kernel $K_h$.
As, at this step, the target distribution is ${p_h}$, it seems natural to choose $K_h({\theta}_h | {\theta}_{h-1})$ as a Monte Carlo Markov Chain (MCMC) kernel with ${p_h}({\theta}) \propto {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}( \ell({\boldsymbol{Y}}| {\theta}) \pi({\theta}) ) ^{\rho_h}$ as stationary distribution. Following [@DelMoral2006], we choose the backward kernel: $$\label{eq:L2}
L_h({\theta}_{h-1} | {\theta}_h) = \frac{ K_h({\theta}_h | {\theta}_{h-1}) p_{h}({\theta}_{h-1}) }{{p_h}({\theta}_h)}$$ which satisfies Property and enables us to rewrite the weight increment $ {\widetilde{w}}_{h-1:h}({\theta}_{h-1},{{\theta}_h})$ appearing in and defined in as $$\label{eq:wtilde2}
{\widetilde{w}}_{h-1:h}({\theta}_{h-1},{\theta}_h) = \frac{\gamma_{h}({\theta}_{h-1}) }{\gamma_{h-1}({\theta}_{h-1})} = \left[\alpha(\theta_{h-1})\right]^{\rho_h-\rho_{h-1}}$$ where $$\label{eq:alpha}
\alpha(\theta) = \frac{\ell({\boldsymbol{Y}}| {\theta}) \pi({\theta})}{{{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta}| {\boldsymbol{Y}})}.$$ In what follows, we denote $\alpha_h = \alpha({\theta}_h)$.
Using this particular backward kernel has two major consequences. First, an explicit expression for the transition kernel $K_h({\theta}_h | {\theta}_{h-1})$ is not required, which is quite welcome for MCMC kernels. Secondly, examining equations and , one may notice that the weight of a particle ${{\theta}_{0:h}}$ does not depend on $\theta_h$ but only on ${\theta}_{0:h-1}$. As a consequence, the weights of the particles ${{\theta}_{0:h}}$ can be computed before they are simulated, and for any new ${p_h}$.
#### Adaptive design of $(\rho_h)_{h = 0\dots H}$.
As a consequence of this last remark, we are able to design an adaptive strategy for $({\rho_h})_{h = 0, \dots H}$ [as in @Schafer2013; @Jasra2011]. Indeed, being able to compute the weights of the up-coming particles for any new $\rho_h$, we can increase $\rho_h$ until the quality of the sample (measured through an indicator computed from the weights) decreases for the next distribution. In practice, following [@Zhou2016], we use the conditional Effective Sampling Size ($cESS$) to measure the quality of $p_ {h-1}$ as an importance sampler when estimating an expectation against ${p_h}$. It is defined as: $$\begin{aligned}
\label{eq:cESS}
cESS & = & \left[\sum_{m = 1}^M M W_{h-1}^m \left(\frac{ {\widetilde{w}}_{h-1:h}^m}{\sum_{m = 1}^M MW_{h-1}^m {\widetilde{w}}_{h-1:h}^m} \right)^2 \right]^{-1} \\
& = & \frac{M \left(\sum_{m = 1}^M W_{h-1}^m {\widetilde{w}}_{h-1:h}^m\right)^2}{\sum_{m = 1}^M W_{h-1}^m ( {\widetilde{w}}_{h-1:h}^m)^2}, \end{aligned}$$ becoming $$\begin{aligned}
\label{eq:cESS2}
&&cESS\left(\rho_h;\rho_{h-1}, (W_{h-1}^m,\alpha_{h-1}^m )_{m \leq M}\right) =cESS_{h-1}(\rho)\nonumber\\
&&
= \frac{M \left(\sum_{m = 1}^M W_{h-1}^m (\alpha_{h-1}^m)^{\rho_h -\rho_{h-1}}\right)^2}{\sum_{m = 1}^M W_{h-1}^m (\alpha_{h-1}^m)^{2(\rho_h -\rho_{h-1})}}. \end{aligned}$$ where $\alpha_{h-1}^m = \alpha({\theta}_{h-1}^m)$ has been defined in equation . If $\rho_h = \rho_{h-1}$ , $cESS$ si maximal (equal to $M$, the number of particles). As $\rho_h$ increases, the discrepancy between $p_ {h-1}$ and ${p_h}$ increases and so the quality of $p_ {h-1}$ as an importance sampling distribution when estimating an expectation against ${p_h}$ decreases and so does $cESS$. As a consequence, our strategy to find the next $\rho_h$ is to set: $$\rho_ h = 1 \wedge \sup_{\rho}\left\{\rho > \rho_{h-1}, cESS_{h-1}(\rho) \geq \tau_1 M \right\}$$
#### Selection of the particles.
In order to prevent a degeneracy of the particle approximation, we use a standard resampling of the particles: if the variance of the weights $(W_{h}^m)_{m = 1\dots M}$ is too high (in other words, if the $ESS$ is too small), we resample the particles using a multinomial distribution, thus discarding the particles with low weights and duplicating the particles with high weights.
#### Sampling algorithm.
Finally, we propose the following Shorten Bridge Sampling [SBS]{}algorithm adapted to the sequence (\[eq:pih\]).
------------------------------------------------------------------------
**[SBS]{}algorithm**
------------------------------------------------------------------------
1. Set $(\tau_1,\tau_2) \in [0,1]^2$, $\rho_0 = 0$.
2. *At iteration $0$* , sample $(\theta^m_{0})_{m = 1\dots M}$ from the approximate distribution ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$. $ \forall m = 1\dots M$, set: $$w_{0}^m = 1, \quad W_{0}^m = \frac{1}{M}, \quad \alpha_{0}^m = \frac{\ell({\boldsymbol{Y}}| {\theta}^m_{0}) \pi({\theta}^m_{0})}{{{\widetilde{p}}_{{\boldsymbol{Y}}}}(\theta^m_{0})}$$
3. *At iteration $h$*: starting from $({\theta}_{h-1}^m, W_{h-1}^m,\alpha_{h-1}^m)_{m = 1\dots M}$
1. Find $\rho_h$ such that: $$\rho_ h = 1 \wedge \sup_{\rho}\left\{\rho > \rho_{h-1},cESS_{h-1}(\rho) \geq \tau_1 M \right\},$$
2. $\forall m = 1\dots M $, compute ${w_h^m}= w^m_{h-1}\left(\alpha_{h-1}^m\right)^{\rho_h - \rho_{h-1}}$ and ${W_h^m}= {{w_h^m}}\left/{\sum_{m' = 1}^M w_h^{m'}}\right.$
3. Compute $$ESS_{h} = \frac{\left(\sum_{m = 1}^M{W_h^m}\right)^2 }{\sum_{m = 1}^M ({W_h^m})^2} \in [1, M].$$ If $ESS_{h} < \tau_2\, M$, resample the particles: $$\begin{array}{ccl}
({\theta}_{h-1}^m)' & \sim_{i.i.d.} & \sum_{m = 1}^M {W_h^m}\,\delta_{\{ {\theta}^m_{h-1}\}}\\
{{\theta}_{h-1}^m}& \leftarrow & ({\theta}_{h-1}^m)'\\
w_{h}^m & \leftarrow & 1 \\
W_{h}^m & \leftarrow & 1/M
\end{array}
\quad \forall m = 1\dots M$$
4. $\forall m = 1\dots M$: propagate the particle ${{\theta}_h^m}\sim K_h( \cdot | {\theta}_{h-1}^m) $ where $K_h$ is an MCMC kernel with ${p_h}({\theta}) \propto {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}( \ell({\boldsymbol{Y}}| {\theta}) \pi({\theta}) ) ^{\rho_h}$ as invariant distribution, and compute: $$\alpha^m_{h} = \frac{\ell({\boldsymbol{Y}}| {\theta}^m_{h}) \pi({\theta}^m_{h})}{{{\widetilde{p}}_{{\boldsymbol{Y}}}}(\theta^m_{h})}$$
4. If $\rho_h = 1$, stop. If $\rho_h < 1$ return to $1$.
------------------------------------------------------------------------
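To make the algorithm above more concrete, the following sketch assembles the whole loop in Python, reusing the `next_rho`, `ess` and `resample_multinomial` helpers sketched earlier in this section. The functions `sample_ptilde`, `log_ptilde`, `log_post` and `mcmc_kernel` are assumed to be supplied by the user (they are not part of any existing package), and the marginal-likelihood update follows the usual SMC convention of combining the incremental weights with the weights of the previous population.

```python
import numpy as np
from scipy.special import logsumexp

def sbs(sample_ptilde, log_ptilde, log_post, mcmc_kernel,
        M=1000, tau1=0.9, tau2=0.8, rng=None):
    """Schematic SBS loop: tempering from p~_Y (rho = 0) to the posterior (rho = 1)."""
    rng = rng or np.random.default_rng()
    theta = sample_ptilde(M, rng)                    # iteration 0: draw from p~_Y
    log_alpha = log_post(theta) - log_ptilde(theta)  # log alpha(theta_0^m)
    W = np.full(M, 1.0 / M)
    rho, log_Z = 0.0, 0.0                            # log_Z accumulates log(Z_H / Z_0)
    while rho < 1.0:
        rho_new = next_rho(rho, W, log_alpha, tau1)  # adaptive temperature
        log_inc = (rho_new - rho) * log_alpha        # log incremental weights
        log_Z += logsumexp(np.log(W) + log_inc)      # ratio estimate of Z_h / Z_{h-1}
        logW = np.log(W) + log_inc
        W = np.exp(logW - logsumexp(logW))           # normalised weights W_h
        if ess(W) < tau2 * M:                        # degeneracy check
            theta, W = resample_multinomial(theta, W, rng)
        theta = mcmc_kernel(theta, rho_new, rng)     # p_h-invariant MCMC move
        log_alpha = log_post(theta) - log_ptilde(theta)
        rho = rho_new
    return theta, W, log_Z
```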
Let $\phi$ be a function defined on $\Theta$. The study of the statistical properties of $\sum_{m=1}^ M W^H_m \phi(\theta^{H}_m)$ as an estimator of $\mathbf{E}[\phi(\theta)|{\boldsymbol{Y}}]$ is a difficult task due to the sampling and resampling steps of the algorithm. However, many results can be found in the literature [see @Doucet2009 and references therein]. First of all, $\sum_{m=1}^ M W^H_m \phi(\theta^{H}_m)$ is known to be strongly convergent. Moreover, following [@DelMoral2006], a Central Limit Theorem can be obtained. Besides these asymptotic properties, it is possible to control the mean-square error of the estimator for a given number of particles $M$, provided additional assumptions on $\phi$ hold. Results of convergence were also provided by [@delmoral2012_conv] for adaptive sequential Monte Carlo algorithms.
Estimation of the marginal likelihood
--------------------------------------
Compared to MCMC strategies, Annealed Importance Sampling and SMC have the great advantage of supplying good estimators of the marginal likelihood. Indeed, as proved by [@DelMoral2006], an unbiased estimator of the marginal likelihood derives as a by-product of SMC. Moreover, the path sampling identity also provides an estimate of the marginal likelihood, as detailed hereafter.
Let us recall that $Z_ h = \int_{\Theta} \gamma_h({\theta}) d\theta$. Following [@DelMoral2006] and using the notations introduced above, the ratio of normalizing constants ${Z_{h}} / {Z_{h-1}}$ is estimated by: $$\widehat{\frac{Z_{h}}{Z_{h-1}}} = \sum_{m = 1}^M {W_h^m}{\widetilde{w}}_{h-1:h}^m,$$ and $$\label{eq:pZhat}
\widehat{\frac{Z_{H}}{Z_{0}}} = \prod_{h = 1}^H \widehat{\frac{Z_{h}}{Z_{h-1}}} = \prod_{h = 1}^H \sum_{m = 1}^M {W_h^m}{\widetilde{w}}_{h-1:h}^m$$ is an unbiased estimator of $Z_H/Z_0$. Here, $Z_H = p({\boldsymbol{Y}})$ and $Z_0 = 1 $.
Another estimate is given by the path sampling identity. Indeed, under non-restrictive regularity assumptions, the following equality holds: $$\label{eq:Int}
\log p(Y) - \log Z_0 = \int_{0}^{1} \mathbb{E}_{p_{\rho}}\left[\frac{d \log \gamma_{\rho}(\cdot)}{d \rho} \right]d\rho$$ where $\gamma_\rho(\theta) = {{\widetilde{p}}_{{\boldsymbol{Y}}}}(\theta)^{1-\rho}(\ell({\boldsymbol{Y}}| \theta) \pi(\theta))^{\rho}$, $p_\rho(\cdot)$ is the associated probability density function and, in our geometric path sampling: $$\frac{d \log \gamma_{\rho}(\cdot)}{d \rho} = \log \frac{\ell({\boldsymbol{Y}}| {\theta}) \pi({\theta})}{{{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})} = \log \alpha(\theta).$$ An elementary trapezoidal scheme and Monte Carlo approximations of the expectations involved in lead to the following approximation of the log marginal likelihood: $$\label{eq:pZhat2}
\widehat{\widehat{\log p({\boldsymbol{Y}})}} = \sum_{h = 1}^H \frac{\rho_h - \rho_{h-1}}{2} (U^M_h + U^M_{h-1})$$ where $U^M_h = \widehat{ \mathbb{E}}_{p_{\rho_h}}\left[ \log \alpha(\theta)\right] = \sum_{m = 1}^M W_{h}^m \log \alpha^m_{h}
$.
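Assuming the temperatures, normalised weights and $\log\alpha$ values are stored at each iteration, both estimators can be computed after the run as in the following sketch; the product-of-sums estimate combines the incremental weights with the weights of the previous population, which is the usual SMC convention.

```python
import numpy as np

def log_evidence_estimates(rhos, weights, log_alphas):
    """Two estimates of log p(Y) from the SBS output.

    rhos:       temperatures rho_0 = 0, ..., rho_H = 1
    weights:    normalised weight vectors W_h, one per temperature
    log_alphas: vectors of log alpha(theta_h^m), one per temperature
    """
    # product-of-sums estimator of log(Z_H / Z_0)
    log_Z = 0.0
    for h in range(1, len(rhos)):
        inc = np.exp((rhos[h] - rhos[h - 1]) * log_alphas[h - 1])  # incremental weights
        log_Z += np.log(np.sum(weights[h - 1] * inc))
    # trapezoidal path-sampling estimator
    U = [np.sum(W * la) for W, la in zip(weights, log_alphas)]
    log_Z_path = sum(0.5 * (rhos[h] - rhos[h - 1]) * (U[h] + U[h - 1])
                     for h in range(1, len(rhos)))
    return log_Z, log_Z_path
```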
Note that, as suggested in [@Zhou2016], we noticed in our simulation studies that the two estimators behave similarly in our examples. A precise comparison of the two estimators is beyond the scope of this paper.
Simulation study \[sec:Simul\]
==============================
We now present a large simulation study. Its goal is to assess that the proposed [SBS]{} algorithm –which combines an optimization-based approximation of the posterior distribution with an SMC sampler– drastically decreases the computational time with respect to a classical annealing scheme or, equivalently, that the approximated posterior distribution can be corrected into the true posterior distribution at a low computational cost.
Logistic regression {#sec:log}
-------------------
This first model is used as a toy example to illustrate the efficiency and the robustness of our methodology. Let $(Y_1, \dots, Y_n)$ be a set of $n$ independent observations with values in $\{0,1\}$. Each individual observation $i$ is described by a vector ${\boldsymbol{x}}_i \in \mathbb{R}^p$ of $p$ covariates and we consider the logistic regression model: $$P(Y_i = 1) = 1-P(Y_i = 0) = \frac{e^{{\boldsymbol{x}}_i^t \theta}}{1+e^{{\boldsymbol{x}}_i^t \theta} }.$$
We generate a simulated dataset with a randomly chosen regression matrix $X$ and the following parameters: $$n = 200, \quad p = 4, \quad \theta = (0.5, -0.6, 0, -1).$$
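For reproducibility, a dataset of this kind can be generated as follows; the paper does not specify how the design matrix was drawn, so a standard normal design is assumed here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 4
theta_true = np.array([0.5, -0.6, 0.0, -1.0])
X = rng.normal(size=(n, p))                    # randomly chosen design matrix (assumption)
probs = 1.0 / (1.0 + np.exp(-X @ theta_true))  # logistic link
Y = rng.binomial(1, probs)                     # binary responses
```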
Setting a Gaussian prior distribution on $\theta \in \mathbb{R}^p$, $\theta \sim \mathcal{N}(0,\sigma^2\boldsymbol{I}_p),$ with $\sigma^2 = 100$, it is natural to propose a Gaussian approximation of the posterior distribution ${{\widetilde{p}}_{{\boldsymbol{Y}}}}: = \mathcal{N}(\widehat{\mu},\widehat{\Sigma})$. We first computed the Gaussian variational Bayes estimator and we obtained: $$\widehat{\mu}^{VB} = (0.398, -0.643, -0.280, -0.847)$$ and $$\widehat{\Sigma}^{VB} = 10^{-3}\left(
\begin{array}{rrrr}
23.3 & -1.2 & 0.1 & 1.1\\
-1.2 & 20.6 & -0.1 & 2.0\\
0.1 & -0.1 & 24.0 & 1.4\\
1.1 & 2.0 & 1.4 & 25.5
\end{array}
\right) . $$ We also considered an approximate Gaussian posterior distribution based on the maximum likelihood estimator (ML) $\widehat{\theta}^{ML}$. The ML estimator and its asymptotic variance $\widehat{\Sigma}^{ML}$ are obtained by standard maximum likelihood fitting. To test the robustness of our bridge sampling with respect to ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$, we also consider an artificially increased (respectively decreased) variance. In addition, we consider a distribution centered on an aberrant value with a small variance, thus leading to five different ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$: $$\begin{aligned}
\widetilde{p}_{{\boldsymbol{Y}},1} & = & \mathcal{N}(\widehat{\mu}^{VB}, \widehat{\Sigma}^{VB})\\
\widetilde{p}_{{\boldsymbol{Y}},2} & = & \mathcal{N}(\widehat{\theta}^{ML}, \widehat{\Sigma}^{ML})\\
\widetilde{p}_{{\boldsymbol{Y}},3} & = & \mathcal{N}(\widehat{\mu}^{VB}, {\text{diag}}(\widehat{\Sigma}^{VB} )/5)\\
\widetilde{p}_{{\boldsymbol{Y}},4} & = & \mathcal{N}(\widehat{\mu}^{VB}, {\text{diag}}(\widehat{\Sigma}^{VB} )\times10)\\
\widetilde{p}_{{\boldsymbol{Y}},5} & = & \mathcal{N}(\widehat{\mu}^{VB} + 0.5, {\text{diag}}(\widehat{\Sigma}^{VB} )/5)
\end{aligned}$$ $\widetilde{p}_{{\boldsymbol{Y}},1}$, $\widetilde{p}_{{\boldsymbol{Y}},2}$, $\widetilde{p}_{{\boldsymbol{Y}},3}$, $\widetilde{p}_{{\boldsymbol{Y}},4}$ and $\widetilde{p}_{{\boldsymbol{Y}},5}$ are plotted in Figure \[fig:logreg:approx\] (for $\theta_2$).
For each $(\widetilde{p}_{{\boldsymbol{Y}},k})_{k = 1\dots 5}$, we sample the posterior distribution using three methods.
- [CBS]{} refers to a Classical Bridge Sampling scheme based on the sequence $\pi(\theta)\ell({\boldsymbol{Y}}| \theta)^ {\rho_h}$, sequentially sampled with an SMC algorithm. This strategy serves as a reference to be compared with the other ones.
- [CBS$+$IS]{}refers to the same annealing scheme as [CBS]{}but the first sample $(\theta_0^m)_{m=1,\dots,M}$ is generated with $\widetilde{p}_{{\boldsymbol{Y}}}$ and the adequate weights are computed.
- Finally, we use [SBS]{}, described in section \[subsec:algo\] corresponding to the annealing scheme: $ {{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}( \pi({\theta}) \ell({\boldsymbol{Y}}| {\theta})) ^{\rho_h}$.
#### Tunings.
In each case, the SMC is performed using $M = 10000$ particles, $\tau_1 = 0.9$ and $\tau_2 = 0.8$. The kernel $K_h$ is composed of $B = 5$ iterations of a standard Metropolis-Hastings kernel, proposing a new parameter as: $ \theta^c \sim \frac{1}{3} \sum_{i = 1}^3 \mathcal{N}(\theta^{\ell-1}, \rho_i \times \widehat{\Sigma}^{ML})$ where $\widehat{\Sigma}^{ML}$ is the asymptotic variance of the maximum likelihood estimator and $(\rho_1,\rho_2,\rho_3) = (1,0.1,10)$.
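A possible implementation of this kernel is sketched below, assuming `log_ptilde` and `log_post` are vectorized over an $(M, d)$ array of particles; since each Gaussian random-walk component is symmetric, the acceptance ratio only involves the tempered target.

```python
import numpy as np

def mh_kernel(theta, rho, log_ptilde, log_post, Sigma_ML, B=5, rng=None):
    """B sweeps of a random-walk Metropolis-Hastings kernel leaving
    p_rho propto ptilde^(1 - rho) * (likelihood * prior)^rho invariant.
    The proposal mixes Gaussian random walks with covariances
    1, 0.1 and 10 times the ML asymptotic covariance Sigma_ML."""
    rng = rng or np.random.default_rng()
    scales = [1.0, 0.1, 10.0]
    chols = [np.linalg.cholesky(s * Sigma_ML) for s in scales]
    M, d = theta.shape

    def log_target(t):
        return (1.0 - rho) * log_ptilde(t) + rho * log_post(t)

    cur = log_target(theta)
    for _ in range(B):
        comp = rng.integers(0, len(scales), size=M)       # pick a proposal scale per particle
        noise = rng.normal(size=(M, d))
        step = np.stack([chols[c] @ noise[m] for m, c in enumerate(comp)])
        prop = theta + step
        new = log_target(prop)
        accept = np.log(rng.uniform(size=M)) < new - cur  # symmetric proposal
        theta = np.where(accept[:, None], prop, theta)
        cur = np.where(accept, new, cur)
    return theta
```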
\[fig: log reg post\]
#### Results.
The results are plotted in Figures \[fig: log reg post\] and \[fig:logreg:approx\] (right). For the five $(\widetilde{p}_{{\boldsymbol{Y}},k})_{k = 1 \dots 5}$, the posterior distributions given by our algorithm [SBS]{} (Figure \[fig: log reg post\], right) are confounded and stick to the one obtained from the reference algorithm [CBS]{}, thus illustrating the practical robustness of our new bridge sampler. More precisely, even if the approximated posterior distribution has an underestimated variance (which is known to be the case for the Variational Bayes estimator here), our methodology still supplies a sample from the true posterior distribution. Moreover, the algorithm is also robust when ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is peaked around an aberrant value (see the results for $\widetilde{p}_{{\boldsymbol{Y}},5}$).
On the contrary, using algorithm [CBS$+$IS]{} can be a bad idea. Indeed, if the approximation $\widetilde{p}_{{\boldsymbol{Y}}}$ has a wider support than the true posterior, the algorithm performs well and provides a sample from the right posterior distribution: as can be seen in Figure \[fig: log reg post\], left frame, the black and the blue curves are indistinguishable. However, if $\widetilde{p}_{{\boldsymbol{Y}}}$ has an underestimated variance ($\widetilde{p}_{{\boldsymbol{Y}},1}$, $\widetilde{p}_{{\boldsymbol{Y}},2}$, $\widetilde{p}_{{\boldsymbol{Y}},4}$ and $\widetilde{p}_{{\boldsymbol{Y}},5}$), the standard sampling strategy leads to a wrong posterior distribution (see the orange, red, purple and green curves in the left frame of Figure \[fig: log reg post\]). Note that there is no algorithmic indicator detecting such a bad behavior.
To compare the computational times of the various strategies, we can look at the number of iterations required for the sequences $(\rho_h)_{h \geq 0}$ to reach $1$. These sequences $(\rho_h)_{h \geq 0}$ are plotted in Figure \[fig:logreg:approx\] (right). We only plot the curves for the combinations “algorithm/approximated posterior $\widetilde{p}_{{\boldsymbol{Y}}}$” leading to the true posterior distribution. As expected, [CBS]{} is the most time consuming (see the black dotted curve). As an indicative basis, on this example, the $30$ iterations of [CBS]{} require roughly $5$ minutes using six cores. With $\widetilde{p}_{{\boldsymbol{Y}},3}$ (increased variance), [SBS]{} and [CBS$+$IS]{} finish in quite comparable computational times, with slightly better results for our methodology. [SBS]{} requires the same computational time with $\widetilde{p}_{{\boldsymbol{Y}},4}$ (small variance), and clearly outperforms [CBS]{}. Finally, considering an extreme case where the approximation distribution $\widetilde{p}_{{\boldsymbol{Y}},5}$ is concentrated around an aberrant value (which is unlikely to be the case in practice), [SBS]{} and [CBS]{} have comparable computation times.
As a conclusion, [CBS$+$IS]{} cannot be used in general cases since it can supply a wrong approximation of the posterior distribution. This is due to the fact that, at the first iteration of [CBS$+$IS]{}, the particles should form a sample from the prior distribution. Using IS, i.e. simulating from $\widetilde{p}_{{\boldsymbol{Y}},i}$ and computing the corresponding importance weights, can lead to a sample from the truncated prior distribution $\pi({\theta})\mathbbm{1}_{\widetilde{p}_{{\boldsymbol{Y}}}({\theta})>0}$. This simulation from the wrong distribution at the first step of the sequential importance sampler is not corrected in the following iterations. Besides, we are not able to detect such a phenomenon.
On the contrary, our new bridge sampler behaves well, whatever ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is. An under-evaluated variance in ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ is not an obstacle to the use of our scheme. The gain in computational time obviously depends on the distance between ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ and $p(\cdot | {\boldsymbol{Y}})$, but it is expected to be drastic when ${{\widetilde{p}}_{{\boldsymbol{Y}}}}$ comes from a deterministic approximation of the posterior (Variational Bayes, Expectation Propagation, etc.).
Latent Class Analysis model {#subsec:LCA}
---------------------------
In this section we consider the latent class analysis (LCA) model. For this model, we focus on the label switching issue and show that the strategy we propose can tackle this difficulty.
### Model and prior distribution
LCA is a mixture model for multivariate binary observations, such as the correct or incorrect answers submitted during an exam [@Barthol11], the symptoms presented by persons with major depressive disorder [@Garrett00] or a disability index recorded in a long-term survey [@erosheva2007]. Let $({\boldsymbol{Y}}_i)_{i = 1,\dots, n} = (Y_{i1},\dots, Y_{iq})_{i = 1,\dots, n}$ be $n$ i.i.d. observations where, $\forall (i,j)$, $Y_{ij} \in \{0,1\}$; $i$ and $j$ are respectively the individual and the response indices. Each ${\boldsymbol{Y}}_i$ is assumed to arise from the following mixture model: $$\mathbb{P}({\boldsymbol{Y}}_i = (y_{i1},\dots, y_{iq})) = \sum_{k = 1}^g \pi_k \prod_{j = 1}^q \gamma_{kj}^{y_{ij}} (1-\gamma_{kj})^{1-y_{ij}}$$ where $\pi_k$ represents the proportion of the $k$-th component ($\sum_{k = 1}^g \pi_k = 1$) and $\gamma_{kj}$ is the success probability for the $j$-th response in the $k$-th group. The model is equivalently written as $$\begin{array}{cclr}
Y_{ij} | Z_i & \sim & \mathcal{B}(\gamma_{Z_i \, j}), & \quad \quad \forall i = 1,\dots n, j = 1,\dots, q\\
\mathbb{P}(Z_{i} = k) & = & \pi_k, & \quad \quad \forall i = 1,\dots n, k = 1,\dots, g
\end{array}$$ where ${\boldsymbol{Z}}= (Z_1,\dots, Z_n)$ is a latent random vector.
We set the following standard exchangeable prior distributions on ${\boldsymbol{\pi}}= (\pi_1,\dots, \pi_g)$ and ${\boldsymbol{\gamma}}= (\gamma_{kj})_{k = 1,\dots,g,\, j = 1,\dots,q}$: $$\label{eq:LCAprior}
\begin{array}{cccl}
(\pi_1,\dots, \pi_g) & \sim & & \mathcal{D}ir(d, \dots d)\\
(\gamma_{kj})_{k = 1,\dots,g, j = 1,\dots,q} & \sim & _{i.i.d} & \mathcal{B}eta(a,b)
\end{array}$$
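For illustration, the pairs $({\theta}^\star, {\boldsymbol{Y}}^\star)$ used later in the checking procedure can be simulated from this model and prior as in the following sketch (function and parameter names are ours).

```python
import numpy as np

def simulate_lca(n=100, q=10, g=2, d=2.0, a=2.0, b=2.0, rng=None):
    """Draw (pi, gamma) from the prior and a dataset Y from the LCA model."""
    rng = rng or np.random.default_rng()
    pi = rng.dirichlet(np.full(g, d))        # group proportions
    gamma = rng.beta(a, b, size=(g, q))      # success probabilities per group/response
    Z = rng.choice(g, size=n, p=pi)          # latent group labels
    Y = rng.binomial(1, gamma[Z, :])         # n x q binary responses
    return Y, Z, pi, gamma
```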
### Posterior distribution and label switching
As for any mixture model, the posterior distribution should reproduce the likelihood invariance under permutation of the mixture indices. In other words, the posterior distribution is multi-modal, each mode corresponding to a permutation of the indices of the mixture components. In such cases, it is well documented that MCMC algorithms often fail to explore the various modes of the posterior distribution. Note that the label switching issue arises not only when sampling the posterior distribution but also for evidence approximation in model selection [see for instance the introduction of @Lee2016 and references therein]. In this section, we illustrate the fact that a simple solution –based on the Variational Bayes posterior approximation and our sampling algorithm [SBS]{}– can be proposed to handle the label switching problem.
For this model, it is easy to derive a mean-field variational approximation of the posterior distribution, resulting in a posterior approximation of the form: $$\begin{aligned}
\label{eq:LCApostVB}
&&{{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB}}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}) =f_{ \mathcal{D}ir(\tilde \delta_1, \dots \tilde \delta_g)}({\boldsymbol{\pi}})\nonumber \\
&& \times \prod_{k = 1,j=1}^{g,q} f_{ \mathcal{B}eta(\tilde \alpha_{kj},\tilde \beta_{kj})} (\gamma_{kj}) \prod_{i = 1}^n \prod_{k = 1}^ g (\tau_{ik})^{\mathbbm{1}_{Z_{i} = k}} \end{aligned}$$ Details can be found in [@White13] and the algorithm is implemented in the corresponding R-package BayesLCA. However, contrary to the true posterior distribution $p(\cdot | {\boldsymbol{Y}})$, ${{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB}}$ is not exchangeable. Moreover, this posterior approximation is known to be excessively concentrated around one mode. As a consequence, we can presume (and we illustrate it in the following numerical experiments) that our sampling algorithm [SBS]{} starting from $ {{\widetilde{p}}_{{\boldsymbol{Y}}}}$ and using a standard Gibbs transition kernel will not be able to propagate particles to the other modes of the posterior distribution.
When talking about a *“standard Gibbs transition kernel”*, we refer to the most naive Gibbs algorithm, sequentially sampling $[{\boldsymbol{Z}}| {\boldsymbol{Y}}, {\boldsymbol{\gamma}}, {\boldsymbol{\pi}}]$, $[{\boldsymbol{\gamma}}| {\boldsymbol{Y}}, {\boldsymbol{Z}}, {\boldsymbol{\pi}}]$ and $[{\boldsymbol{\pi}}| {\boldsymbol{Z}}]$. Using the expressions for $ {{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB}}$ and $p_{\rho_h}({\boldsymbol{Z}},{\boldsymbol{\gamma}},{\boldsymbol{\pi}})$, these three conditional distributions are conjugate. We stick to this MCMC kernel, and do not introduce any modification to *force or prevent* the label switching phenomenon at the propagation step.
$ {{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB}}$ being unsatisfactory from the label switching perspective, we introduce its so-called *symmetrized version*, forcing the invariance by permutation. More precisely, let $\mathcal{S}_g$ be the set of all the permutations of $\{1,\dots,g\}$; we define $$\label{eq:LCApostVBsym0}
{{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB.Sym}}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}) = \sum_{\sigma \in \mathcal{S}_g} {{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB.Sym}}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}, \sigma)$$ where $$\begin{aligned}
\label{eq:LCApostVBsym}
&&{{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB.Sym}}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}, \sigma)
= \frac{1}{g !} f_{ \mathcal{D}ir(\tilde \delta_{\sigma(1)}, \dots \tilde \delta_{\sigma(g)})}({\boldsymbol{\pi}})
\nonumber\\
& & \times \prod_{k = 1}^g \prod_{j = 1}^ q f_{ \mathcal{B}eta(\tilde \alpha_{\sigma(k)j},\tilde \beta_{\sigma(k)j})} (\gamma_{kj})
\prod_{i = 1}^n \prod_{k = 1}^ g (\tau_{i\sigma(k)})^{\mathbbm{1}_{Z_{i} = k}}.\nonumber\\ \end{aligned}$$
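A direct way to evaluate this symmetrized approximation is to sum over the $g!$ relabellings, as in the sketch below; the storage format of the variational parameters ($\tilde\delta$, $\tilde\alpha$, $\tilde\beta$ and $\tau$ as NumPy arrays) is an assumption made for illustration.

```python
import itertools, math
import numpy as np
from scipy.stats import dirichlet, beta

def log_ptilde_vb_sym(pi, gamma, Z, delta, a_vb, b_vb, tau):
    """log of the symmetrized VB approximation (sum over the g! relabellings).

    delta: length-g Dirichlet parameters; a_vb, b_vb: g x q Beta parameters;
    tau: n x g variational class probabilities; Z: length-n labels in 0..g-1."""
    g, n = len(delta), len(Z)
    terms = []
    for perm in itertools.permutations(range(g)):
        s = np.array(perm)
        lp = dirichlet.logpdf(pi, delta[s])                      # Dirichlet factor
        lp += beta.logpdf(gamma, a_vb[s, :], b_vb[s, :]).sum()   # Beta factors
        lp += np.log(tau[np.arange(n), s[Z]]).sum()              # latent-class factors
        terms.append(lp)
    return np.logaddexp.reduce(terms) - math.log(math.factorial(g))
```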
In order to keep the conjugacy properties of the conditional distributions, we use our algorithm [SBS]{} to sequentially sample: $$\begin{aligned}
\label{eq:LCApostVBsym2}
p_{\rho_h}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}, \sigma ) \propto [{{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB.Sym}}({\boldsymbol{Z}},{\boldsymbol{\gamma}}, {\boldsymbol{\pi}}, \sigma)]^ {1-\rho_h} && \nonumber \\
\times [ \ell({\boldsymbol{Y}}| {\boldsymbol{Z}}, {\boldsymbol{\gamma}},{\boldsymbol{\pi}})p({\boldsymbol{Z}}| {\boldsymbol{\pi}}) \pi({\boldsymbol{\pi}},{\boldsymbol{\gamma}})]^ {\rho_h}&&
\end{aligned}$$ Once $\sigma$ has been integrated out, our sequential sampling scheme starts at ${{\widetilde{p}}_{{\boldsymbol{Y}}}}^{\mathit{VB.Sym}}$ given in equation and terminates at the true posterior distribution.
### Simulation design and test {#subsubsec:criteria}
This example aims at proving that our strategy supplies a sample from the true posterior distribution, even in the difficult framework of mixture models. To assess the validity of our method, we propose using the following testing procedure.
#### Testing procedure.
We introduce the following property from which we derive a negative criterion, in the sense that if the obtained distribution is the true posterior distribution, it must satisfy the criterion.
\[lem:criterion\] Let $\Phi: \Theta \mapsto \mathbb{R}$ be such that there exists $\Psi$ for which $ H: {\theta}\mapsto \left(\Phi({\theta}), \Psi({\theta})\right)$ is injective and continuously differentiable. Assume that $$({\theta}^\star, {\boldsymbol{Y}}^{\star}) \sim \pi({\theta}^\star)\ell({\boldsymbol{Y}}^\star | {\theta}^\star).$$ Given ${\boldsymbol{Y}}^\star$, let $p_{{\boldsymbol{Y}}^\star}(\Phi(\theta))$ be any probability distribution on $\Phi(\Theta)$. Let $U({\theta}^\star, {\boldsymbol{Y}}^\star,\Phi,p_{{\boldsymbol{Y}}^\star})$ be the following statistic: $$\label{eq:crit-theo}
U({\theta}^\star, {\boldsymbol{Y}}^\star,\Phi,p_{{\boldsymbol{Y}}^\star}) = \mathbb{E}_{p_{{\boldsymbol{Y}}^\star}} \left[ \mathbbm{1}_{\Phi({\theta}) < \Phi({\theta}^{\star})}\right].$$ Then, if $
p_{{\boldsymbol{Y}}^\star}(\Phi(\theta)) = p(\Phi({\theta})|{\boldsymbol{Y}}^\star)$ for all ${\theta}\in \Theta$, $U({\theta}^\star, {\boldsymbol{Y}}^\star,\Phi,p_{{\boldsymbol{Y}}^\star}) \sim \mathcal{U}_{[0,1]}$.
Note that when $U({\theta}^\star, {\boldsymbol{Y}}^\star,\Phi, p_{{\boldsymbol{Y}}^\star})$ has no explicit expression and when we have access to a sample from $p_{{\boldsymbol{Y}}^\star}$, we can replace $U({\theta}^{\star}, {\boldsymbol{Y}}^{\star },\Phi, p_{{\boldsymbol{Y}}^{\star}})$ by its unbiased and convergent estimator: $$\label{eq:crit-emp}
U_M({\theta}^{\star},{\boldsymbol{Y}}^{\star},\Phi, ({\theta}_m)_{m = 1\dots M}) = \frac{1}{M}\sum_{m = 1}^M \mathbbm{1}_{\Phi(\theta_m) <\Phi({\theta}^{\star}) }$$ where ${\theta}_m \sim_{i.i.d.} p_{{\boldsymbol{Y}}^\star}$ and moreover if $p_{{\boldsymbol{Y}}^\star}(\Phi(\theta)) = p(\Phi({\theta})|{\boldsymbol{Y}}^\star)$ for all ${\theta}\in \Theta$, then $$U_M({\theta}^\star,{\boldsymbol{Y}}^\star,\Phi, ({\theta}_m)_{m = 1\dots M}) \sim \mathcal{U}_{\left\{0,\frac{1}{M},
\frac{2}{M},\dots, 1\right\}}.$$
This property enables the elaboration of our checking procedure in $4$ steps.
------------------------------------------------------------------------
**Checking procedure for posterior approximation**
------------------------------------------------------------------------
Let ${\mathcal{M}}$ be a given approximation method of the posterior distribution; case (a) below corresponds to deterministic approximations and case (b) to stochastic approximations.
1. Generate $S$ parameters and datasets $({\theta}^{\star s}, {\boldsymbol{Y}}^{\star s})_{s = 1,\dots,S}$ according to the Bayesian model $\pi({\theta}^{\star s})\ell({\boldsymbol{Y}}^{\star s} | {\theta}^{\star s})$.
2. For each $s = 1\dots S$, from dataset ${\boldsymbol{Y}}^{\star s}$,
1. derive the deterministic approximation of the posterior $p_{{\boldsymbol{Y}}^{\star s}}$ using ${\mathcal{M}}$,
2. get a sample $({\theta}^{s}_m)_{m = 1\dots M}$ from ${\mathcal{M}}$.
3. Choose a real-valued function of the parameter $\Phi({\theta})$ and compute
1. $ U({\theta}^{\star s}, {\boldsymbol{Y}}^{\star s},\Phi,p_{{\boldsymbol{Y}}^{\star s}})$ using ,
2. $U_M({\theta}^{\star s},{\boldsymbol{Y}}^{\star s},\Phi, ({\theta}^{s}_m)_{m = 1\dots M})$ using .
4. Compare the empirical distribution
1. of $\left(U({\theta}^{\star s}, {\boldsymbol{Y}}^{\star s},\Phi,p_{{\boldsymbol{Y}}^{\star s}})\right)_{s=1\dots S}$ to the uniform distribution on $[0, 1]$,
2. of $\left(U_M({\theta}^{\star s},{\boldsymbol{Y}}^{\star s},\Phi, ({\theta}^{s}_m)_{m = 1\dots M})\right)_{s=1\dots S}$ to the uniform distribution on $\left\{0, \frac{1}{M}, \frac{2}{M}, \dots, 1\right\}$.
------------------------------------------------------------------------
The comparison can be performed through graphical tools – for instance the empirical Cumulative Distribution Function (cdf) – or through a statistical test such as a discrete goodness-of-fit test, i.e. a discrete version of the Kolmogorov-Smirnov (KS) test. If we reject the hypothesis $$H_0 = \left\{U_M({\theta}^{\star},{\boldsymbol{Y}}^{\star },\Phi, ({\theta}_m)_{m = 1\dots M}) \sim \mathcal{U}_{\left\{0,\frac{1}{M}, \frac{2}{M},\dots, 1\right\}} \right\},$$ we conclude that the obtained sample is not distributed from the true posterior distribution.
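In practice, steps 3 and 4 of the procedure amount to a few lines of code; the sketch below uses SciPy's continuous KS test as a stand-in for the discrete goodness-of-fit test mentioned above, and allows weighted samples such as the SBS output.

```python
import numpy as np
from scipy.stats import kstest

def u_statistic(phi_star, phi_sample, weights=None):
    """Estimate of U: (weighted) proportion of sampled Phi values below Phi(theta*)."""
    below = (np.asarray(phi_sample) < phi_star).astype(float)
    return np.average(below, weights=weights)

def check_uniformity(u_values):
    """Compare the S replicated U statistics with the uniform distribution on [0, 1]."""
    return kstest(u_values, "uniform")
```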
In our case, several methods ${\mathcal{M}}$ will be considered: the variational Bayes approximation (VB), the [SBS]{}starting from VB and [SBS]{}starting from a symmetrized version of VB.
#### Simulation design.
We set $n = 100$ individuals, $q = 10$ responses per individual and $g = 2$ groups in the mixture. Referring to equation , the hyper-parameters are set to $d = a = b = 2$. $S = 500$ parameters ${\theta}^{\star s}$ and datasets ${\boldsymbol{Y}}^{\star s}$ are simulated. For each simulated dataset, we run the VB algorithm implemented in the BayesLCA package [@White13]. The sampling algorithms [SBS]{} starting respectively from ${\widetilde{p}}^\mathit{VB}_{{\boldsymbol{Y}}^{\star s} }$ and ${\widetilde{p}}_{{\boldsymbol{Y}}^{\star s} }^\mathit{VB.Sym}$ are run with $M = 5000$ particles, $\tau_1 = 0.9$ and $\tau_2 = 0.9$. The kernel $K_h$ is a standard Gibbs algorithm of length $B = 5$.
### Results
In Figure \[Fig:post\_pi1\] (left panel), we plot the estimated posterior density (for an arbitrarily chosen dataset) obtained from the variational approximation, its symmetrized version, [SBS]{} starting from VB, and [SBS]{} starting from the symmetrized VB. As expected, the variational approximation underestimates the posterior variance. [SBS]{} succeeds in inflating this posterior variance. This phenomenon can be observed on $ {\Phi_1(\theta) =}|\pi_1-\pi_2|$ (left) and $ \Phi_2(\theta) = \pi_1$. Concerning $\pi_1$, we observe that using only the classical variational Bayes estimator –highly concentrated on the MAP– we are unable to explore the other modes. However, when forcing the exploration of the different modes through the symmetrized version, we are able to recover the full posterior distribution, charging all the modes.
On Figure \[Fig:post\_pi1\] (right panel), we plot the empirical cdf’s (ecdf) for $U_M$. The results for $\Phi_1$ ($\Phi_2$ respectively) are on the left (right respectively). The ecdf for $\left(U({\theta}^{\star s}, {\boldsymbol{Y}}^{\star s},\Phi_l, {\widetilde{p}}^{\mathit{VB}}_{{\boldsymbol{Y}}^{\star s}})\right)_{s = 1\dots S}$ is in blue and the ecdf for $\left(U({\theta}^{\star s}, {\boldsymbol{Y}}^{\star s},\Phi_l, {\widetilde{p}}^{\mathit{VB Sym}}_{{\boldsymbol{Y}}^{\star s}})\right)_{s = 1\dots S}$ is in purple. The ecdf obtained from $({\theta}^{s}_m)_{m = 1\dots M}$ and $({\theta}^{s, \mathit{Sym}}_m)_{m = 1\dots M}$ sampled by algorithm [SBS]{}starting from ${\widetilde{p}}^\mathit{VB}_{{\boldsymbol{Y}}^{\star s} }$ ( respectively ${\widetilde{p}}_{{\boldsymbol{Y}}^{\star s} }^\mathit{VB.Sym}$) are plotted in red (green respectively).
The phenomena observed above on a single dataset are confirmed here on the $500$ datasets. On $|\pi_1-\pi_2|$, which is insensitive to label switching, the non-symmetrized and symmetrized versions of the algorithms give equivalent results (the red/green and blue/purple curves cannot be distinguished), which is not the case on $\pi_1$ (which is actually non-identifiable because of label switching). On $|\pi_1-\pi_2|$, ${\widetilde{p}}^\mathit{VB}_{{\boldsymbol{Y}}^{\star s} }$ and ${\widetilde{p}}_{{\boldsymbol{Y}}^{\star s} }^\mathit{VB.Sym}$ differ from the true posterior distribution, but our two algorithms starting respectively from ${\widetilde{p}}^\mathit{VB}_{{\boldsymbol{Y}}^{\star s} }$ and ${\widetilde{p}}_{{\boldsymbol{Y}}^{\star s} }^\mathit{VB.Sym}$ both supply a sample from the true posterior distribution.
On $\pi_1$, we observe that starting [SBS]{}from ${\widetilde{p}}^\mathit{VB}_{{\boldsymbol{Y}}^{\star s} }$ clearly leads to the wrong posterior. The equality of the true posterior distribution with the one obtained via [SBS]{}starting from VB.Sym is not rejected.
  ----------------------------- ----------------- ----------------
                                 $|\pi_1-\pi_2|$   $\pi_1$
  VB                             $4.497e^{-06}$    $<2.2e^{-16}$
  Symmetrized VB                 $1.563e^{-05}$    $1.431e^{-09}$
  [SBS]{} with VB                $0.596$           $<2.2e^{-16}$
  [SBS]{} with Symmetrized VB    $0.567$           $0.903$
  ----------------------------- ----------------- ----------------
Stochastic block models with covariates {#subsec:SBMreg}
---------------------------------------
#### Model.
As a last example, we consider the combination of the stochastic block-model [@NoS01 SBM] and logistic regression (shortened as ’[SBM-reg]{}’ in the sequel) considered in [@LRO15]. This model aims at deciphering some residual structure in an observed network once the effect of some edge covariates has been accounted for. The model is as follows. Consider a set of $n$ nodes; for each pair ($1 \leq i < j \leq n$) of nodes, we observe a $p$-dimensional covariate vector ${\boldsymbol{x}}_{ij}$. As in SBM, we further assume that each node belongs to one among $g$ groups and we denote by $Z_i$ the (unobserved) group to which node $i$ is assigned; ${\boldsymbol{\pi}}= (\pi_k)_k$ denotes the vector of group proportions. The model states that the edges of the observed binary undirected network ${\boldsymbol{Y}}= (Y_{ij})$ are drawn independently conditionally on the set of latent variables ${\boldsymbol{Z}}= (Z_i)$ as Bernoulli variables: $$(Y_{ij} | Z_i, Z_j, {\boldsymbol{\alpha}}, {\boldsymbol{\beta}}) \sim {\mathcal{B}}(p_{ij}),
\quad
{\text{logit}}(p_{ij}) = {\boldsymbol{x}}_{ij}^\intercal {\boldsymbol{\beta}}+ \alpha_{Z_i, Z_j}$$ where ${\boldsymbol{\alpha}}= (\alpha_{kl})$ stands for the matrix of between-group effects (analogous to the between-group connection probabilities from SBM, in logit scale) and ${\boldsymbol{\beta}}= (\beta_\ell)_{\ell =1,\dots, p}$ for the vector of regression coefficients. As for the priors, $\pi$ has a Dirichlet distribution, both $\alpha$ and $\beta$ are Gaussian. When considering model selection or averaging, the number of groups $g$ is supposed to be uniformly distributed among $\{1, \dots, g_{\max}\}$.
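A dataset from this model can be simulated as in the following sketch; the $(n, n, p)$ storage of the edge covariates is an assumption made for illustration.

```python
import numpy as np

def simulate_sbm_reg(beta, alpha, pi, X, rng=None):
    """Simulate an undirected binary network from the SBM-reg model.

    X: (n, n, p) array of edge covariates (only i < j is used);
    alpha: g x g symmetric matrix of between-group effects; pi: group proportions."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    g = len(pi)
    Z = rng.choice(g, size=n, p=pi)                    # latent group labels
    Y = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            eta = X[i, j] @ beta + alpha[Z[i], Z[j]]   # logit of the edge probability
            Y[i, j] = Y[j, i] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return Y, Z
```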
#### Bayesian model averaging (BMA).
BMA [@HMR99] is a general principle, which consists in combining the results obtained with several models, rather than choosing the ’best’ one. Among other advantages, it allows one to account for model uncertainty. We apply this principle to a regression parameter $\beta_\ell$. While model selection consists in choosing $g$ as $\widehat{g} = \arg\max_g p(g | {\boldsymbol{Y}})$ and considering the posterior $p(\beta_\ell | {\boldsymbol{Y}}, g = \widehat{g})$, BMA directly considers the unconditional posterior $$p(\beta_\ell | {\boldsymbol{Y}}) = \sum_g p(g | {\boldsymbol{Y}}) p(\beta_\ell | {\boldsymbol{Y}}, g).$$ In terms of moments, it results in ${\mathbb E}(\beta_\ell | {\boldsymbol{Y}}) = \sum_g p(g | {\boldsymbol{Y}}) {\mathbb E}(\beta_\ell | {\boldsymbol{Y}}, g)$ and ${\mathbb V}(\beta_\ell | {\boldsymbol{Y}}) = {\mathbb V}_{\text{within}}(\beta_\ell | {\boldsymbol{Y}}) + {\mathbb V}_{\text{between}}(\beta_\ell | {\boldsymbol{Y}})$ where ${\mathbb V}_{\text{within}}$ measures the mean variance of the parameter conditionally on $g$ and ${\mathbb V}_{\text{between}}$ is the variance of the parameter due to model uncertainty: $$\begin{aligned}
{\mathbb V}_{\text{within}}(\beta_\ell | {\boldsymbol{Y}}) & = & \sum_g p(g | {\boldsymbol{Y}}) {\mathbb V}(\beta_\ell | {\boldsymbol{Y}}, g), \\
{\mathbb V}_{\text{between}}(\beta_\ell | {\boldsymbol{Y}}) & = & \sum_g p(g | {\boldsymbol{Y}}) \left({\mathbb E}(\beta_\ell | {\boldsymbol{Y}}, g) - {\mathbb E}(\beta_\ell | {\boldsymbol{Y}})\right)^2.\end{aligned}$$
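Given the posterior model probabilities and the conditional moments, the model-averaged moments are obtained directly, e.g. as in this short sketch.

```python
import numpy as np

def bma_moments(p_g, cond_means, cond_vars):
    """Model-averaged posterior mean and variance of a scalar parameter.

    p_g:        posterior model probabilities p(g | Y)
    cond_means: conditional posterior means E(beta | Y, g)
    cond_vars:  conditional posterior variances V(beta | Y, g)
    """
    p_g, cond_means, cond_vars = map(np.asarray, (p_g, cond_means, cond_vars))
    mean = np.sum(p_g * cond_means)
    v_within = np.sum(p_g * cond_vars)
    v_between = np.sum(p_g * (cond_means - mean) ** 2)
    return mean, v_within + v_between
```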
#### VB approximation.
As illustrated in Subsection \[sec:log\], the VB approximate posterior is quite accurate for logistic regression, and [@GDR12] also proved its empirical accuracy for SBM. A first goal of this simulation study is to check whether this accuracy still holds when the two models are combined into the [SBM-reg]{} model. To this aim, we focus on the posterior distribution of the regression parameters. Secondly, we want to check the accuracy of the VB posterior distribution of the number of groups, which can be used either to assess goodness-of-fit or for model averaging [@LRO15].
#### Simulation design.
We simulate networks with $n \in \{20, 50\}$ nodes according to an [SBM-reg]{} model with ${g_*}\in \{1, 2\}$ groups and $p = 3$ covariates. To apply Property \[lem:criterion\], the parameters are sampled from the prior distribution. $S = 100$ replicates are simulated for each configuration and, for each of them, the [SBM-reg]{} models with $g \in \{1, \dots, g_ {\max} = 5\}$ are fitted with the VB algorithm described in [@LRO15]. The [SBS]{} algorithm is then run on each dataset.
#### Results for parameter estimation.
We first consider the posterior distribution of $\beta$ when the number of groups $g$ is known. In Figure \[Fig:SimulSMBreg-beta\], we plot on the left the boxplots of the posterior means $(\widehat{\mathbb{E}}^{VB}(\beta_\ell | {\boldsymbol{Y}}^s))_{s=1\dots 100}$ and $(\widehat{\mathbb{E}}^{{SBS\xspace}}(\beta_\ell | {\boldsymbol{Y}}^s))_{s=1\dots 100}$. The boxplots (over the 100 simulated datasets) of the posterior standard deviations $(\widehat{\sigma}^{VB}(\beta_\ell | {\boldsymbol{Y}}^s))_{s=1\dots 100}$ and $(\widehat{\sigma}^{{SBS\xspace}}(\beta_\ell | {\boldsymbol{Y}}^s))_{s=1\dots 100}$ are on the top-right. We clearly observe that the posterior means provided by VB and [SBS]{} are both accurate and similar, but that the VB posterior standard deviations (sd) are smaller than the [SBS]{} ones.
To further assess the quality of the posterior distributions provided by VB and [SBS]{}, we checked Property \[lem:criterion\] for the regression coefficients, which are not subject to label switching. We observe that the ecdf of the [SBS]{} sample is close to uniform, whereas there is a departure for VB. The p-values resulting from the KS test of Property \[lem:criterion\] (see Table \[Tab:pvalSBMreg\], upper table) lead to the same conclusions. All these observations concur to show that, although the VB approximate posterior distribution is accurate for logistic regression and SBM separately, it is biased for the [SBM-reg]{} model, and that the proposed [SBS]{} is a way to correct it. As a consequence of this phenomenon, the empirical level of the VB credibility intervals is equal to 84.75%, which is below the nominal level of 95%, whereas the [SBS]{} credibility intervals almost reach the targeted level (93.75%).
--------- --------- --------- --------- ---------
$g = 1$ $g = 2$ $g = 1$ $g = 2$
VB 0.027 0.004 0.002 0.077
[SBS]{} 0.785 0.121 0.839 0.238
$\;$ $\;$ $\;$ $\;$
$\;$ $\;$ $\;$ $\;$
$g = 1$ $g = 2$ $g = 1$ $g = 2$
VB 0.017 0.003 0.002 0.079
[SBS]{} 0.740 0.277 0.778 0.312
--------- --------- --------- --------- ---------
: Simulation results for the [SBM-reg]{}model: p-values for the KS test of Property \[lem:criterion\]. \[Tab:pvalSBMreg\]
![Simulation results for the [SBM-reg]{}model: VB (white) and [SBS]{}(red) posterior of the regression coefficients ${\boldsymbol{\beta}}= (\beta_\ell)$. Top: posterior mean (left), posterior standard deviation (right); $x$-axis label: ${g_*}. n$ (e.g. ’1.20’ means ${g_*}= 1$, $n= 20$). Bottom: graphical check of Property \[lem:criterion\] for VB (dashed blue) and for [SBS]{}(solid red). Left: $g$ = ${g_*}$, right: with model averaging. []{data-label="Fig:SimulSMBreg-beta"}](Fig7.eps "fig:"){width=".4\columnwidth" height=".4\textwidth"} ![Simulation results for the [SBM-reg]{}model: VB (white) and [SBS]{}(red) posterior of the regression coefficients ${\boldsymbol{\beta}}= (\beta_\ell)$. Top: posterior mean (left), posterior standard deviation (right); $x$-axis label: ${g_*}. n$ (e.g. ’1.20’ means ${g_*}= 1$, $n= 20$). Bottom: graphical check of Property \[lem:criterion\] for VB (dashed blue) and for [SBS]{}(solid red). Left: $g$ = ${g_*}$, right: with model averaging. []{data-label="Fig:SimulSMBreg-beta"}](Fig8.eps "fig:"){width=".4\columnwidth" height=".4\textwidth"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Simulation results for the [SBM-reg]{}model: VB (white) and [SBS]{}(red) posterior of the regression coefficients ${\boldsymbol{\beta}}= (\beta_\ell)$. Top: posterior mean (left), posterior standard deviation (right); $x$-axis label: ${g_*}. n$ (e.g. ’1.20’ means ${g_*}= 1$, $n= 20$). Bottom: graphical check of Property \[lem:criterion\] for VB (dashed blue) and for [SBS]{}(solid red). Left: $g$ = ${g_*}$, right: with model averaging. []{data-label="Fig:SimulSMBreg-beta"}](Fig9.eps "fig:"){width=".4\columnwidth" height=".4\textwidth"} ![Simulation results for the [SBM-reg]{}model: VB (white) and [SBS]{}(red) posterior of the regression coefficients ${\boldsymbol{\beta}}= (\beta_\ell)$. Top: posterior mean (left), posterior standard deviation (right); $x$-axis label: ${g_*}. n$ (e.g. ’1.20’ means ${g_*}= 1$, $n= 20$). Bottom: graphical check of Property \[lem:criterion\] for VB (dashed blue) and for [SBS]{}(solid red). Left: $g$ = ${g_*}$, right: with model averaging. []{data-label="Fig:SimulSMBreg-beta"}](Fig10.eps "fig:"){width=".4\columnwidth" height=".4\textwidth"}
#### Results for model selection.
We now consider the posterior distribution of the number of groups $p(g|{\boldsymbol{Y}})$ and its use for model selection. Figure \[Fig:SimulSMBreg-modsel\] provides a comparison of the posteriors provided by VB and [SBS]{}. We observe that the VB approximation always results in a more concentrated distribution than [SBS]{}. This behavior can be compared to the under-estimation of the posterior variance of the parameters that we already discussed. To compare the results in terms of model selection, we computed the frequency at which the right model is selected (i.e. when $\widehat{g} = {g_*}$) and the mean posterior probability of ${g_*}$ (see Table \[Tab:SimulSMBreg-Pgstar\]). We observe that VB performs better than [SBS]{} for both criteria. This parallels [@Min05], who shows that the minimization of the Kullback-Leibler (KL) divergence leads to an accurate estimate of the mode, which is convenient for model selection.
![Simulation results for the [SBM-reg]{}model: box-plots for the posterior probability $p(g|{\boldsymbol{Y}})$ as a function of $g$. Top $n=20$, bottom: $n=50$. Left: ${g_*}= 1$, right: ${g_*}= 2$. \[Fig:SimulSMBreg-modsel\]](Fig11.eps "fig:"){width=".4\columnwidth" height=".3\textwidth"} ![Simulation results for the [SBM-reg]{}model: box-plots for the posterior probability $p(g|{\boldsymbol{Y}})$ as a function of $g$. Top $n=20$, bottom: $n=50$. Left: ${g_*}= 1$, right: ${g_*}= 2$. \[Fig:SimulSMBreg-modsel\]](Fig12.eps "fig:"){width=".4\columnwidth" height=".3\textwidth"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Simulation results for the [SBM-reg]{}model: box-plots for the posterior probability $p(g|{\boldsymbol{Y}})$ as a function of $g$. Top $n=20$, bottom: $n=50$. Left: ${g_*}= 1$, right: ${g_*}= 2$. \[Fig:SimulSMBreg-modsel\]](Fig13.eps "fig:"){width=".4\columnwidth" height=".3\textwidth"} ![Simulation results for the [SBM-reg]{}model: box-plots for the posterior probability $p(g|{\boldsymbol{Y}})$ as a function of $g$. Top $n=20$, bottom: $n=50$. Left: ${g_*}= 1$, right: ${g_*}= 2$. \[Fig:SimulSMBreg-modsel\]](Fig14.eps "fig:"){width=".4\columnwidth" height=".3\textwidth"}
--------- ----------- ----------- ----------- -----------
${g_*}=1$ ${g_*}=2$ ${g_*}=1$ ${g_*}=2$
VB 100 10 100 42
[SBS]{} 46 23 60 36
$\;$ $\;$ $\;$ $\;$
$\;$ $\;$ $\;$ $\;$
${g_*}=1$ ${g_*}=2$ ${g_*}=1$ ${g_*}=2$
VB 0.947 0.138 0.982 0.410
[SBS]{} 0.435 0.257 0.562 0.387
--------- ----------- ----------- ----------- -----------
: Simulation results for the [SBM-reg]{}model: model selection \[Tab:SimulSMBreg-Pgstar\]
Although it does not seem to hamper model selection, the biased estimation of the posterior $p(g|{\boldsymbol{Y}})$ may have undesired consequences when used for model averaging. To illustrate this point, we simply computed the empirical coverage of the credibility intervals for each $\beta_\ell$ after model averaging. The mean coverage across simulation conditions and covariate indices $\ell$ for VB (85.8%) is still below the nominal level, whereas that of [SBS]{} (93.25%) is close to 95%. Figure \[Fig:SimulSMBreg-beta\] (bottom right) also shows that the distribution of the ecdf after model averaging is almost confounded with the uniform for [SBS]{}, whereas it still displays a significant bias for VB. The p-values for the KS test of Property \[lem:criterion\] (see Table \[Tab:pvalSBMreg\], bottom) lead to the same conclusion.
Illustrations on network datasets {#sec:Illust}
=================================
#### Network analysis with [SBM-reg]{}.
To illustrate the use of the proposed sampling algorithm, we studied a series of examples analyzed by [@LRO15] with an [SBM-reg]{} model. The two main purposes of such an analysis are ($i$) to estimate the effect ${\boldsymbol{\beta}}$ of the covariates and ($ii$) to assess the goodness-of-fit of the model based on the covariates. Task ($ii$) is achieved by computing the posterior probability for the SBM part of the model to involve only $g=1$ class, that is $p(g = 1 | {\boldsymbol{Y}})$. A low value of this probability is an indication of a residual structure in the network.
We refer to [@LRO15] for the presentation of the data. We considered the datasets (networks) referred to as Florentine (business), Florentine (marriage), Karate, Tree and Blog. Their respective sizes range from a few tens to a few hundred nodes and their densities from 1% to 50%. Note that the numerical results presented here for the VB inference slightly differ from those of [@LRO15], as we kept all nodes from each graph whereas [@LRO15] removed all isolated nodes.
For each of these datasets, we fitted an [SBM-reg]{} model with $g = 1 \dots g_{\max}$ groups with a VBEM algorithm to obtain the Gaussian VB approximate distribution ${{\widetilde{p}}_{{\boldsymbol{Y}}}}^{VB}({\boldsymbol{\beta}})$. We also ran the proposed [SBS]{} with $M = 1000$ particles and obtained a weighted sample from $\widehat{p}_{{\boldsymbol{Y}}}^{{SBS\xspace}}({\boldsymbol{\beta}})$. To compare the posterior distributions of ${\boldsymbol{\beta}}$, we adopted the Bayesian model averaging principle described in Section \[subsec:SBMreg\]. For each dataset and each covariate $j$, we first computed the ratio between the VB and [SBS]{} posterior standard deviations (sd) $\sqrt{{\mathbb V}^{VB}(\beta_j) / {\mathbb V}^{{SBS\xspace}}(\beta_j)}$. For each configuration, we also computed the ratios $\widetilde{{\boldsymbol{\beta}}}^{VB}_j / \sqrt{\widetilde{\Sigma}^{VB}_{jj}}$ and $\widetilde{{\boldsymbol{\beta}}}^{{SBS\xspace}}_j / \sqrt{\widetilde{\Sigma}^{{SBS\xspace}}_{jj}}$, which are typically used to evaluate the effect of the covariates. Figure \[Fig:IllustPostBeta\], left, shows that the VB approximation tends to under-estimate the posterior variance of the parameters. As expected, Figure \[Fig:IllustPostBeta\], right, shows that this results in over-estimating the significance of the effect of the covariates.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Posterior moments of the regression coefficients for the datasets from [@LRO15]. Left: ratio between the VB and [SBS]{}sd as a function of the network size (color = network). Right: ratio between the posterior mean and the posterior sd ($x$: VB, $y$: [SBS]{}, signed-log scale, color: network, horizontal and vertical lines: 2.5% and 97.5% $\mathcal{N}(0, 1)$ quantiles) \[Fig:IllustPostBeta\]](Fig15.eps "fig:"){width=".4\columnwidth"} ![Posterior moments of the regression coefficients for the datasets from [@LRO15]. Left: ratio between the VB and [SBS]{}sd as a function of the network size (color = network). Right: ratio between the posterior mean and the posterior sd ($x$: VB, $y$: [SBS]{}, signed-log scale, color: network, horizontal and vertical lines: 2.5% and 97.5% $\mathcal{N}(0, 1)$ quantiles) \[Fig:IllustPostBeta\]](Fig16.eps "fig:"){width=".4\columnwidth"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
To further illustrate these differences, we studied the posterior distribution of the three regression coefficients used in the ’Tree’ example. The regression coefficients are associated with the genetic, geographic and taxonomic distances between the tree species. The results given in Table \[Tab:IllustPostBeta\] indicate again that the posterior sd provided by [SBS]{} are all larger than those resulting from VB. We observe that, because of the high concentration of the posterior distribution of $g$, VB strongly under-estimates the variance due to model uncertainty. As a result, the [SBS]{} posterior standard deviations are almost twice as large as the VB posterior standard deviations. As a consequence, the significance ratio is reduced, and the influence of both the genetic and the geographic distance ($\beta_1$ and $\beta_2$) turns out to be more questionable according to [SBS]{} than to VB.
-------------------- ----------- ----------- -----------
$\beta_1$ $\beta_2$ $\beta_3$
post. mean 4.62e-05 0.23 -0.9
post. within var. 2.24e-10 0.0432 0.00175
post. between var. 5.55e-17 1.18e-06 2.42e-07
posterior sd 1.5e-05 0.208 0.0418
ratio 3.09 1.11 -21.5
$\;$ $\;$ $\;$ $\;$
$\;$ $\;$ $\;$ $\;$
$\beta_1$ $\beta_2$ $\beta_3$
post. mean 4.13e-05 0.355 -0.906
post. within var. 1.09e-09 0.219 0.00889
post. between var. 3.99e-12 0.0019 0.00281
posterior sd 3.31e-05 0.47 0.108
ratio 1.25 0.755 -8.38
-------------------- ----------- ----------- -----------
: Posterior moments of the regression coefficients. \[Tab:IllustPostBeta\]
For the goodness-of-fit study, we compared the values of $\widetilde{p}_{{\boldsymbol{Y}}}^{VB}(1)$ and $\widehat{p}_{{\boldsymbol{Y}}}^{{SBS\xspace}}(1)$ (Table \[Tab:IllustPostPrM1\]). Except in the most uncertain case (Karate), the posterior probabilities are similar and lead to the same conclusion about the existence of a residual structure in the network.
$\widetilde{p}_{{\boldsymbol{Y}}}^{VB}(1)$ $\widehat{p}_{{\boldsymbol{Y}}}^{{SBS\xspace}}(1)$
---------- -------------------------------------------- ----------------------------------------------------
Marriage $9.54 \; 10^{-1}$ $1.00$
Business $7.04 \; 10^{-1}$ $1.00$
Karate $2.56 \; 10^{-1}$ $7.07 \; 10^{-3}$
Tree $4.83 \; 10^{-153}$ $1.06 \; 10^{-161}$
Blog $8.63 \; 10^{-174}$ $4.04 \; 10^{-290}$
: Posterior probability for the [SBM-reg]{}model with only one class. \[Tab:IllustPostPrM1\]
Discussion and perspectives {#sec:Discuss}
===========================
In this paper, we presented a simple strategy to combine the strength of deterministic approximations of the posterior distribution with sequential Monte Carlo samplers. We illustrated the efficiency of our approach and its robustness with respect to the deterministic approximation in a large simulation study. Its application to network datasets stresses the fact that the well-known underestimation of the posterior variance by the variational approximation can be easily corrected, sometimes leading to different statistical conclusions. Besides, if dependencies between parameters have been neglected in the deterministic posterior approximation, they will be recovered by the sequential sampling.
Our approach is not restricted to the case where a standard deterministic posterior approximation can be derived (such as a Variational Bayes, Laplace or Expectation Propagation estimate). Any point estimate can be used to design a rough posterior approximation (using a Gaussian or a log-Gaussian seems to be the simplest solution) that serves as an accelerator of the sampling sequence. This strategy is different from an empirical Bayes strategy, the point estimate being only used to explore the posterior distribution more efficiently and not to elicit a prior distribution. The method is not as sensitive as standard Importance Sampling to a possible under-evaluation of the approximate posterior variance: even with a too narrow approximation of the posterior distribution, the algorithm is able to get back to the true posterior variance.
SMC directly supplies a final population of particles arising from the true posterior distribution, as opposed to MCMC strategies, whose convergence is difficult to assess. The proposed SBS algorithm is adaptive in the sense that the sequence ${{\widetilde{p}}_{{\boldsymbol{Y}}}}({\theta})^ {1-{\rho_h}}(p( {\theta}| {\boldsymbol{Y}})) ^{\rho_h}$ is determined on the fly in an automatic way. Furthermore, the algorithm path (summarized by the sequence $\rho_h$) is an indicator of the quality of the deterministic posterior distribution used to initiate the bridge sampling.
A natural extension of the present work is its adaptation to the Approximate Bayesian Computation (ABC) context for models with no explicit likelihood, following [@DelMoral2012]. The difficulty will arise from the specification of the sequence of distributions.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors are thankful to Nicolas Chopin for fruitful discussions.
[**Pseudoscalar meson photoproduction on nucleon target**]{}
[**G.H. Arakelyan$^1$, C. Merino$^2$ and Yu.M. Shabelski$^{3}$**]{}\
$^1$A.I.Alikhanyan Scientific Laboratory\
Yerevan Physics Institute\
Yerevan 0036, Armenia\
e-mail: argev@mail.yerphi.am\
$^2$Departamento de Física de Partículas, Facultade de Física\
and Instituto Galego de Física de Altas Enerxías (IGFAE)\
Universidade de Santiago de Compostela\
15782 Santiago de Compostela\
Galiza, Spain\
e-mail: merino@fpaxp1.usc.es\
$^{3}$Petersburg Nuclear Physics Institute\
NCR Kurchatov Institute\
Gatchina, St.Petersburg 188350, Russia\
e-mail: shabelsk@thd.pnpi.spb.ru
[**Abstract**]{}
We consider the photoproduction of secondary mesons in the framework of the Quark-Gluon String model. At relatively low energies, not only cylindrical, but also planar diagrams have to be accounted for. To estimate the significant contribution of planar diagrams in $\gamma p$ collisions at rather low energies, we have used the expression obtained from the corresponding phenomenological expression for $\pi p$ collisions. The results obtained by the model are compared to the existing SLAC experimental data. The model predictions for light meson production at HERMES energies are also presented.
Introduction
============
The physical photon state is approximately represented by the superposition of a bare photon and of virtual hadronic states having the same quantum numbers as the photon. The bare photon may interact in direct processes. This direct contribution is determined by the lowest order in perturbative QED. The interactions of the hadronic components of the photon (resolved photons) at high enough energies are described within the framework of theoretical models based on Reggeon and Pomeron exchanges, like the Dual-Parton model (DPM), see [@capella] for a review, and the Quark-Gluon String model (QGSM) [@K20; @KTM].
The QGSM is based on the Dual Topological Unitarization (DTU) and on the large 1/N expansion of non-perturbative QCD, and it has been very successful in describing many features of the inclusive hadroproduction of secondaries, both at high [@KTM]-[@Sh], and at comparatively low [@volk; @kaidlow; @amelin] energies.
In the case of photoproduction processes, the QGSM was used in [@lugovoi; @aryer] at energies higher than 100 GeV. At these high energies, accounting only for Pomeron (cylindrical) diagrams leads to a reasonable agreement. However, at low energies one cannot neglect the contribution of planar diagrams [@volk; @kaidlow; @amelin]. The inclusion of planar diagrams into the analysis should result in a better description of a wider range of experimental data.
Most interesting is the comparison of secondary production by photon and hadron beams in their fragmentation regions. The photoproduction ($Q^2 = 0$) points are the boundary ($Q^2\rightarrow 0$) of the electroproduction processes [@badelek].
In the present paper we apply the QGSM to the low energy photoproduction of pions and kaons by taking into account, contrary to [@lugovoi; @aryer], both cylindrical and planar diagrams. We describe the SLAC experimental data [@pi0100; @abe20] on inclusive $x_F$ spectra integrated over $p_T$. Predictions for forthcoming quasireal photoproduction data at HERMES and JLAB energies are also presented.
Inclusive spectra of secondary hadrons in photoproduction processes
===================================================================
Let us now consider the photoproduction processes in the QGSM. The resolved photon can be considered as $$\label{vdm2}
|\gamma> = \frac{1}{6}(4|\overline{u}u> + |\overline{d}d> + |\overline{s}s>) \; ,$$ in agreement with the standard Vector Dominance Model (VDM) expansion [@bauer], so the quarks in Eq. \[vdm2\] can be considered as the valence quarks of a meson. This resolved photon state also contains sea quarks and gluons, as usual hadrons do.
We can calculate the photoproduction cross section as for a meson-nucleon process[^1] by summing, according to Eq. \[vdm2\], the contributions of the $q\overline{q}$ pair collisions with the target nucleon, taking into account the corresponding coefficients.
At low energies, the inclusive cross section in QGSM consists of two terms, described by two different types of diagrams: cylindrical and planar.
\[onep\]
![(a) Cylindrical diagram corresponding to the one-Pomeron exchange contribution to elastic $q\overline{q}$ scattering, and (b) the cut of this diagram which determines the contribution to the inelastic $q\overline{q}$ cross section. Quarks are shown by solid curves, while SJ is shown by dashed curves.](cil2.pdf "fig:"){width="1.25\hsize"}
$$\label{spectrt}
\frac{dn}{dy}\ = \frac{1}{\sigma_{inel}}\frac{d\sigma}{dy}\ = \frac{dn^{cyl}}{dy} + \frac{dn^{pl}}{dy}$$
In QGSM high energy interactions are considered as takin place via the exchange of one or several Pomerons, and all elastic and inelastic processes result from cutting through or between pomerons [@AGK] (see Fig.1). Each Pomeron corresponds to a cylindrical diagram (see Fig. 1a), and thus, when cutting a Pomeron, two showers of secondaries are produced, as it is shown in Fig. 1b.
For $\gamma p$ interaction, that following Eq. \[vdm2\] is similar to $\pi p$ collisions, the inclusive spectrum of a secondary hadron $h$ produced from a $q \overline{q}$ pair has the form [@KaPi; @Sh]: $$\label{spectr}
\frac{dn^{cyl}}{dy}\ =\frac{x_E}{\sigma_{inel}} \frac{d\sigma^{cyl}}{dx_F}\ =\ \sum_{n=1}^\infty
w_n\phi_n^h (x)\ ,$$ where $x_{F}=2p_{\|}/\sqrt{s}$ is the Feynman variable, $x_{E}=2E/\sqrt{s}$, the functions $\phi_{n}^{h}(x)$ determine the contribution of diagrams with $n$ cut Pomerons, and $w_n$ is the relative weight of these diagrams. Thus, $$\begin{aligned}
\label{spectr1}
\phi_{q\overline{q}p}^h(x) &=& f_{\overline{q}}^{h}(x_+,n)\cdot f_q^h(x_-,n) +
f_q^h(x_+,n)\cdot f_{qq}^h(x_-,n)
\nonumber\\
&+&2(n-1)f_{sea}^h(x_+,n)\cdot f_{sea}^h(x_-,n)\ ,
\\
x_{\pm} &=& \frac12\left[\sqrt{4m_T^2/s+x^2}\ \pm x\right] ,\end{aligned}$$ where $f_{qq}$, $f_q$, and $f_{sea}$ correspond to the contributions of diquarks, valence quarks and sea quarks, respectively. These functions $f_{qq}$, $f_q$, and $f_{sea}$ are determined by the convolution of the diquark and quark distribution functions, $u(x,n)$, with the fragmentation functions, $G^h(z)$, to hadron $h$, e.g. $$f_i^h(x_+,n)\ =\ \int\limits_{x_+}^1u_i(x_1,n)G_i^h(x_+/x_1) dx_1\; ,$$ where $i=q, \overline{q}, qq$-diquarks, and sea quarks.
The fragmentatiion functions $G_i^{\pi}(z)$ have the same value, $a^\pi$ at $z \rightarrow 0$ for all $i$. Similarly, $G_i^K(z)$ have also the same value, $a^K$ at $z \rightarrow 0$ for all $i$. The numerical values of these parameters are [@ampsh1]: $$\centering
a^\pi = 0.68,\ a^K=0.26\; .$$ The diquark and quark distribution functions, which are normalized to unity, as well as the fragmentation functions, are determined from Regge intercepts [@kaidff]. The analytical expressions of these functions for proton are presented in [@KaPi; @Sh]. The distribution functions for quarks and antiquarks in a photon were obtained by using the simplest interpolation of Regge limits at $u_i(x\rightarrow 0)$ and $u_i(x\rightarrow 1)$, following [@KaPi; @Sh]. In the sum of all cylindrical diagrams we have used the weights given in Eq. 1.
At low energies the contribution of planar diagrams becomes significant. In particular, planar diagrams lead to the difference from $\sigma^{\pi^- p}_{tot}$ (Fig. 2a) to $\sigma^{\pi^+ p}_{tot}$(Fig. 2b) total cross sections.
Since the proton contains two $u$ quarks and one $d$ quark, there are two planar diagrams contributing to $\sigma_{tot}^{\pi^- p}$ (Fig. 2a), and only one contributing to $\sigma_{tot}^{\pi^+ p}$ (Fig. 2b). By neglecting the difference in $u$$\overline{u}$ and $d$$\overline{d}$ annihilation, we can consider as equal the contribution by every planar diagram. If we denote this contribution by each planar digram as $\sigma^{\pi p}_{pl}$, the contribution of diagrams in Fig. 2a to the $\sigma_{tot}^{\pi^- p}$ is 2$\sigma^{\pi p}_{pl}$, while the contribution of Fig. 2b to the total $\sigma_{tot}^{\pi^+ p}$ cross section is $\sigma^{\pi p}_{pl}$. Thus, $$\Delta \sigma(\pi^\mp p) = \sigma_{tot}(\pi^- p) - \sigma_{tot}(\pi^+ p) = \sigma_{pl}^{\pi p} \; ,
$$ the cylindrical contributions cancelling each other off into the difference.
\[plsigt\] -17.cm ![Planar diagrams for the (a) $\pi^- p$ and (b) $\pi^+ p$ elastic scattering amplitudes.](planarsigel.pdf "fig:"){width="1.5\hsize"} -4.cm
\[planar\]
-17.cm ![Planar diagrams describing secondary meson $M$ production (a) by $u$ and (b) by $d$ valence quarks from photon.](planaruudd.pdf "fig:"){width="1.5\hsize"}
One can find $\overline{u}$ quark in the photon with probability $\frac{4}{6}$ (see Eq. 1), and so, simply from comparison of the number of diagrams of figs. 2a and 3a at the same energies, there is a contribution to planar photoproduction cross section from the diagram Fig. 3a (strange quarks do not contribute to planar photoproduction) equal to $$\frac{\sigma^{\gamma p(3a)}_{pl}}{\sigma_{inel}^{\gamma p}} = \frac{4}{3}
\frac{\sigma_{pl}^{\pi p}}{\sigma_{inel}^{\pi p}} \; ,$$ and, in a similar way, the contribution to planar photoproduction cross section from the diagram Fig. 3b is $$\frac{\sigma^{\gamma p(3b)}_{pl}}{\sigma_{inel}^{\gamma p}} = \frac{1}{6}
\frac{\sigma_{pl}^{\pi p}}{\sigma_{inel}^{\pi p}}\; ,$$ with $\sigma^{\gamma p}_{tot} \cong \sigma_{inel}^{\gamma p}$ in both cases. Thus, the resulting contribution from planar photoprouction coming from diagrams in figs. 3a and 3b to the inclusive cross section is determined by using similar formula to those in [@K20; @kaidlow]: $$\begin{aligned}
\label{plan}
\centering
\frac{dn^{\gamma p}_{pl}}{dy}\ &=& \frac{\sigma^{\gamma p}_{pl}}{\sigma_{inel}^{\gamma p}}
[\frac{4}{3}G_u^h(x_+)G_{ud}^h(x_-)+ \frac{1}{6}G_d^h(x_+)G_{uu}^h(x_-)] \\
&=& \frac{\Delta \sigma(\pi^\mp p)}{\sigma_{inel}^{\pi p}}[\frac{4}{3}G_u^h(x_+)G_{ud}^h(x_-)+
\frac{1}{6}G_d^h(x_+)G_{uu}^h(x_-)]\end{aligned}$$ The parametrisation of experimental data on $\Delta \sigma({\pi^\mp p})$ at $\sqrt{s} \geq$ 5GeV exists [@pdg]: $$\Delta \sigma({\pi^\mp p}) = 2.161(\frac{s}{s_M})^{-0.544}{\rm mb}\; ,$$ where $s_M = (\mu + m_p + 2.177)^2$, $\mu$ is the pion mass, and $m_p$ is the proton mass. At energies $\sqrt{s}\approx 5 GeV$, where the contribution of planar diagrams is not negligible, $\sigma_{inel}^{\pi p} \approx$ 26 mb. For $K$-mesons photoproduction, the planar diagrams exist only for leading $K^+$ production, since only the u-quark in the photon can create the planar diagram in the collision with the target proton. On the other hand, the $K^-$-meson can be produced in a planar diagram as a slower nonleading particle.
Results of calculations
=======================
In this section, we present the results of the QGSM calculation for pseudoscalar meson photoproduction on a proton target. Thus, in Fig. 4 we compare the QGSM results with the experimental data on $\pi^0$ photoproduction obtained by OMEGA collaboration [@pi0100]. In this experiment the cross sections were measured at photon energies of 110$-$135 GeV, 85$-$110 GeV, and 50$-$85 GeV. The theoretical curves in Fig. 4 have been calculated at photon energies 120 GeV (dashed line), 100 GeV (full line), and 70GeV (dashed-dotted line). As we can see, the theoretical curves obtained at these three energies practically coincide, in correspondence to the scaling shown by the experimental data. At these energies the contribution of planar diagrams is small, and the present calculations are close to the results by [@aryer], where the only contribution from cylindrical diagrams was taken into account.
In Fig. 5 we present the comparison of QGSM calculations for the $x_F$ dependence of the $K^0_S$ inclusive cross section $F(x_F)=(1/\sigma_{tot})d\sigma/dx_F$, integrated over $p_T^2$, to the experimental data by [@abe20] at $E_{\gamma}$= 20 GeV. The dashed line corresponds to the contribution from only cylindrical diagrams, while the full curve represents the sum of contributions from both cylindrical and planar diagrams. As we can see, though the contribution of the planar diagrams is not large, its inclusion leads to a better agreement with the experimental data. However, one has to note that the theoretical result is proportional to $(a^K)^2$ (Eq. 7), and the value of this parameter is mainly known from high energy pp collisions, so its accuracy is estimated not to be better then 10% [@ampsh1].
\[pi0\] -10.cm ![QGSM calculation of the $x_F$ dependence of the $\pi^0$ photoproduction cross section integrated over $p_T^2$, compared to the experimental data [@pi0100]. The full line corresponds to calculations at a photon energy of 100 GeV.](gpi0100.pdf "fig:"){width="1.25\hsize"} -1.cm
\[k0\] -6.5cm ![ QGSM predictions for the $x_F$ dependence of $K^0_S$ photoproduction cross section integrated over $p_T^2$, and compared to the experimental data [@abe20]. The full line corresponds to the calculations for a photon energy of 20 GeV, and the dashed line to the corresponding contribution from only cylindrical diagrams.](k0xfflast.pdf "fig:"){width="1.25\hsize"} -5.5cm
In Fig. 6 we show the model predictions for the inclusive density of $\pi^\pm$ and $K^\pm$ mesons at the energy of HERMES Collaboration $E_{\gamma}$= 17 GeV. The full curves correspond to the inclusive spectra of $\pi^+$ and $K^+$, while dashed lines represent $\pi^-$ and $K^-$ meson photoproduction. We show the summed contribution of both cylindrical and planar diagrams.
\[pi20\] -6.5cm ![The QGSM prediction for the $x_F$ dependence of the invariant cross section $F(x_F)=1/\sigma_{tot}d\sigma/dx_F$ integrated over $p_T^2$ spectra of $\pi^+$ (full) and $\pi^-$ (dashed), upper lines, and of $K^+$ (full) and $K^-$ (dashed), lower lines, photoproduction at $E_{\gamma}$= 17 GeV.](pikxphi13e17last.pdf "fig:"){width="1.25\hsize"} -5.5cm
\[rpi\] -10.5cm ![The QGSM prediction for the $x_F$ dependence of the ratio of yields of charged $\pi$ mesons at $E_{\gamma}$= 17 GeV (full line). The dashed line shows the contribution from only the cylindical diagram.](gprpiplmine17l.pdf "fig:"){width="1.25\hsize"} -1.5cm
The prediction for the ratio of yields of $\pi^+/\pi^-$-mesons at $E_{\gamma}$= 17 GeV is shown in Fig. 7 by full line. The cylindrical contribution is shown by dashed line. One can see that the planar diagram contribution changes the ratio by 25% in the forward hemisphere at large $x_F$. In Fig. 8 the ratio of yields of $K^+/K^-$-mesons at $E_{\gamma}$= 17 GeV is shown. Only valence $u$-quark contributes for leading $K^+$-meson production in planar diagram.
\[rpi\] -10.5cm ![The QGSM prediction for the $x_F$ dependence of the ratio of yields of charged $K$-mesons photoproduction at $E_{\gamma}$= 17 GeV.](gprkplmin13e17ak026last.pdf "fig:"){width="1.25\hsize"} -1.cm
Conclusion
==========
We consider a modified QGSM approach for the description of pseudoscalar ($\pi$, $K$) mesons photoproduction on nucleons at relatively low energies. This approach gives reasonable agreement to the experimental data on the $x_F$ dependence for $\pi ^0$ cross sections at $E_{\gamma}$=100 GeV, and for $K^0_S$ at $E_{\gamma}$=20 GeV, by taking into account the planar diagrams contribution that becomes significant at low energies. We also present the model predictions for charged pions and kaons cross sections and for the yields ratios at $E_{\gamma}$=17 Gev (Hermes Collaboration energies).
The comparison of the model results with experimental data allows the estimation of the contribution of the planar diagrams to the particle photoproduction processes. Detailed comparison of the theoretical predictions to experimental data at low energies makes possible to refine the values of the parameters of the model, and, in this way, it improves the reliability of the calculated yields of secondary particles to describe possible future HERMES data of quasireal photoproduction processes.
[**Acknowledgements**]{}
We are grateful to N.Z. Akopov, G.M. Elbakyan, and P.E. Volkovitskii for useful discussions. This paper was supported by the State Committee of Science of Republic of Armenia, Grant-13-1C015, and by Ministerio de Econom[í]{}a y Competitividad of Spain (FPA2011$-$22776), the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), and Xunta de Galicia, Spain (2011/PC043).
[\*\*]{}
A. Capella, U. Sukhatme, C.I. Tan, and J. Tran Thanh Van, Phys. Rep. [**236**]{}, 225 (1994).
A.B. Kaidalov, Phys. Atom. Nucl., [**66**]{}, (2003), 781.
A.B. Kaidalov and K.A. Ter-Martirosyan, Yad. Fiz. [**39**]{}, 1545 (1984); Yad. Fiz. [**40**]{}, 211 (1984).
G.H. Arakelyan, A. Capella, A.B. Kaidalov, and Yu.M. Shabelski, Eur. Phys. J. [**C26**]{}, 81 (2002).
A.B. Kaidalov, O.I. Piskunova, Yad. Fiz. [**41**]{}, 1278 (1985).
Yu.M. Shabelski, Yad. Fiz. [**44**]{}, 186 (1986).
P.E. Volkovitski, Yad. Fiz. [**44**]{}, 729 (1989).
A.B. Kaidalov [*et al.*]{}, Yad. Fiz. [**49**]{}, 781 (1989); Yad. Fiz. [**40**]{}, 211 (1984).
N.S. Amelin [*et al.*]{}, Yad. Fiz. [**51**]{}, 941 (1990); Yad. Fiz. [**40**]{}, 211 (1984).
V.V. Luigovoi, S.Yu. Sivoklokov, Yu.M. Shabelskii. Phys. Atom. Nucl. [**58**]{}, 67 (1995); Yad. Fiz. [**58**]{} (1995) 72.
G.G. Arakelyan and Sh.S. Eremian, Phys. Atom. Nucl. [**58**]{}, 1241 (1995); Yad. Fiz. [**58**]{}, 132 (1995).
B. Badelek and J. Kwiecinski, Rev. Mod. Phys. [**68**]{} 445 (1996).
R.J. Apsimon [*et al.*]{}, Z. Phys. [**C52**]{}, 397 (1991).
K. Abe [*et al.*]{}, Phys. Rev. [**D32**]{}, 2869 (1985).
T.H. Bauer [*et al.*]{}, Rev. Mod. Phys. [**50**]{}, 261 (1978).
G.H. Arakelyan, C. Merino, C. Pajares, and Yu.M. Shabelski, Eur. Phys. J. [**C54**]{}, 577 (2008); arXiv:0805.2248 \[hep-ph\].
Particle Data Group, Chin. Phys. [**C38**]{}, 090001 (2014).
V.A. Abramovski, V.N. Gribov, and O.V. Kancheli, Yad. Fiz. [**18**]{}, 595 (1973).
A.B. Kaidalov. Yad. Fiz. [**45**]{}, 1432 (1987).
J. Gandsman [*et al.*]{}, Phys. Rev. [**D10**]{}, (1974) 1562.
[^1]: The absolute values of secondary photoproducton cross section are $\alpha_{em}$ times those of meson-nucleon cross section, while the corresponding multiplicities of secondaries $\frac{dn}{dy}$ are of the same order.
| {
"pile_set_name": "ArXiv"
} |
---
address: 'Max Planck Institute for the Physics of Complex Systems, D-01187, Dresden, Germany'
author:
- 'M. V. Fistul and J. B. Page[@byline]'
title: |
Penetration of dynamic localized states in DC-driven\
Josephson junction ladders by discrete jumps
---
Introduction
============
The subject of large-amplitude anharmonic dynamics in lattices has received widespread attention over the past decade. In particular, intense theoretical focus has centered on so-called intrinsic localized modes (ILMs), also known as discrete breathers, with the result that many of their properties are now well understood[@ILMs]. These excitations result from the interplay between nonlinearity and discreteness, and they can be highly localized in perfect lattices, with or without external driving. They can occur in a variety of different lattices: recent experiments have reported vibrational ILMs in a quasi-1D charge-density wave system[@Bishop], spin-wave ILMs in a quasi-1D antiferromagnetic system[@Sievers], and discrete breathers in Josephson junction (JJ) ladders[@Ustinov1; @Orlando].
The latter systems are noteworthy, in that arrays of coupled JJs have served for many years as reliable laboratory systems for studying diverse nonlinear phenomena[@StrogLikh]. The nonlinear dynamics are particularly rich. A single “small” JJ subject to an applied constant DC bias current can be mapped onto the problem of a damped pendulum driven by a constant torque, with the dynamical degree of freedom being the Josephson phase difference[@BandC]. There are thus two qualitatively different states, namely a static (superconducting) state and a dynamic (whirling or resistive) state, with the latter producing a readily measured voltage $V \propto \dot \varphi$ across the whirling junction. When several junctions are assembled to form a regular array, such as the ladder shown in Fig. \[Fig1\], they become inductively coupled. In the coupled system, junctions in the superconducting state can also exhibit steady-state librations, when JJs in the whirling state are present. In view of the mapping onto the pendulum problem, JJ ladders share features with lattices of nonlinearly coupled electric dipole rotors, driven by an external monochromatic AC electric field[@BonPage], but with the important simplification that they can be driven with purely DC bias currents.
Figure \[Fig1\] sketches an anisotropic JJ ladder consisting of small JJs of two types, “horizontal” and “vertical,” which are, respectively, perpendicular and parallel to the applied bias current (arrows). The anisotropy arises from the different areas of the horizontal and vertical junctions and is characterized by the parameter $\eta=A_h/A_v=I_c^h/I_c^v$, where $I_c^h$ and $I_c^v$ are the the critical currents for each type of junction.
References [@Ustinov1; @Orlando; @Ustinov2] report experimental observations of various discrete breathers in ladders driven by a [*homogeneously*]{} applied DC bias current, represented by the dashed arrows in Fig. \[Fig1\]. For these states, the localized voltage patterns have a simple structure involving only two nonzero steady-state voltages (rotational frequencies). The breathers were found to be stable in the limit of small coupling ($\eta \alt 1$) and for bias currents $I_{\rm ext} \alt I_c^v$. For the case of large 2-D JJ arrays subject to a homogeneously applied DC bias, more complicated inhomogeneous states, with [*meandering*]{} voltage patterns, have also been reported[@Misha1].
Here, we study the dynamics of a JJ ladder with an external DC bias current applied at only one edge (solid arrows, Fig. \[Fig1\]). For increasing bias ($I_{\rm ext} \agt I_c^v$) and over a wide range of anisotropies, we find by direct numerical simulations that the dynamic state expands into the ladder one cell at a time, by a sequence of abrupt jumps. This behavior is in marked contrast to the well-known cases of long JJs and JJ parallel arrays ($\eta~=~\infty$), where the entire system abruptly switches to the resistive state at a particular value of the DC bias. It is also different than the breather case, since all of the junctions within the boundary of this localized dynamic state whirl, and at each expansion the number of different frequencies (voltages) grows. The sequence of $I$-$V$ characteristics and threshold currents can be modeled analytically, yielding very good agreement with the numerical results.
Numerical Simulations
=====================
We consider a ladder with a large but finite number of cells $N$. The ladder’s state is specified by the time-dependent Josephson phases $\{\varphi_i\}$, $\{\psi_i\}$, and $\{\tilde \psi_i\}$ for the vertical, upper horizontal and lower horizontal junctions, respectively, where $i$ denotes the cell. We have found in our simulations that the symmetry condition ${\tilde \psi_i}=-\psi_i$ holds for the phenomena to be discussed here. The ladder dynamics are then determined by the coupled nonlinear equations of motion obtained in Refs. and : $$\begin{aligned}
\label{GenEq}
\hat L(\varphi_i)&=&\gamma_i+\frac{1}{\beta_L}[\varphi_{i-1}-2\varphi_i+
\varphi_{i+1}+2(\psi_i-\psi_{i-1})], \nonumber \\
&&i=2,\ldots,N-1, \\
\hat L(\psi_i)&=&\frac{1}{\eta \beta_L}(\varphi_i-\varphi_{i+1}-2\psi_i), \qquad
\quad i=1,\ldots,N, \nonumber\end{aligned}$$ where the operator $\hat L(\varphi) \equiv \ddot \varphi + \alpha \dot\varphi
+\sin(\varphi)$. The equations for the vertical junctions at $i=1$ and $N$ are $$\begin{aligned}
\label{BC1}
\hat L(\varphi_1)&=&\gamma_1+\frac{1}{\beta_L}(\varphi_2-\varphi_1+2\psi_1), \\
\hat L(\varphi_N)&=&\gamma_N+
\frac{1}{\beta_L}(\varphi_{N-1}-\varphi_N-2\psi_{N-1}). \nonumber\end{aligned}$$
Equations (\[GenEq\]) and (\[BC1\]) describe each junction within the resistively and capacitively shunted junction (RCSJ) model[@BandC], and the unit of time is the inverse of the plasma frequency $\omega_J \equiv \sqrt{2eI_c/C\hbar}$. Since each junction’s critical current and capacitance scale with the area, $\omega_J$ is independent of the anisotropy parameter $\eta$, as is the effective damping constant $\alpha \equiv 1/(\omega_J RC)$. The normalized bias current $\gamma_i$ is defined as $I_{i,{\rm ext}}/I_c^v$. The inductive coupling between junctions is determined by the parameter $\beta_L \equiv 2\pi L I_c^v/\Phi_0$, where $L$ is the self-inductance of a single cell and $\Phi_0 = hc/2e$ is the elementary flux quantum. Coupling beyond that described by $\beta_L$ is not included.
We performed direct numerical integration of the equations of motion for ladders with $N=20$ cells, using a fifth-order Gear predictor-corrector algorithm[@AandT], for a range of anisotropies: $\eta =$ 0.5, 1.0, 2.0, 3.0, and 5.0. The arrays were underdamped, with $\alpha=0.1$, and we used a moderate value of the coupling parameter $\beta_L=0.5$. The external DC bias was applied at one edge, i.e. $\gamma_1=\gamma$ and all other $\gamma_i=0$. To simulate the $I$-$V$ curves, we started with all phases at zero and gradually increased the external bias $\gamma$ from zero to 50, in increments of 0.005. When junctions were present in the whirling state, the MD time scale was set by the time-average period of the fastest rotating phase. For a given value of gamma, we waited for at least 100 of these reference periods before averaging, in order to avoid transients, following which we computed the time-averages $\langle \dot \varphi_i \rangle$ and $\langle \dot \psi_i\rangle$ over at least 100 additional reference periods. These averages are proportional to the average voltages across the junctions. The current was then incremented, with the initial phase configuration being that from the preceding MD time step. In all runs, the time step was 1/200 of the reference period.
Our simulated $I$-$V_i$ curves for an anisotropy of $\eta =2$ are shown in Fig. \[Fig2\]. The most striking finding is the occurrence of extremely sharp voltage jumps. At each of the corresponding threshold currents $\gamma^{thr}_n$, a new cell is added to the ladder’s dynamic state. With the applied current below the first threshold $\gamma^{thr}_1$, all junctions are in a static (zero voltage) state. When $\gamma$ exceeds $\gamma^{thr}_1$, the first vertical junction and its adjacent top and bottom horizontal junctions abruptly switch into the dynamic state, with all other junctions remaining in the zero voltage state—the rotating phases are confined to the first cell. As the bias is increased further, all three average voltages for this 1-cell dynamic state increase linearly until the next threshold current $\gamma^{thr}_2$ is reached, at which point the dynamic state suddenly expands into the second cell, accompanied by sharp changes of the voltages in the first cell. This process continues, yielding successive transitions from $n$-cell dynamic localized states to $(n+1)$-cell dynamic states. The distribution of threshold currents and voltage ratios depends on the ladder’s anisotropy. Over the range $0<\gamma <50$, the ladder reached a 3-cell state for $\eta=0.5$ and 1, a 4-cell state for $\eta=2$, and a 5-cell state for $\eta=3$ and 5. In the following, these states will be termed $n$-cell edge states.
An $n$-cell edge state is in striking contrast to an n-cell breather. The breather occurs away from the ladder’s edge and is homogeneously driven by same DC bias current ($I_{\rm ext} \alt I_c$) applied at every cell, whereas the edge states are driven by a DC bias ($I_{\rm ext} \agt I_c$) applied at just one edge. The edge states have a richer internal structure—[*all*]{} of the junctions within an edge state are in a nonzero voltage state (see Fig. \[Fig3\]), whereas in a breather state, all of the horizontal junctions are in the zero voltage state except for those on the breather’s boundary[@Ustinov1; @Orlando; @Ustinov2]. Moreover, all of a breather’s vertical junction phases rotate at the same average frequency, whereas the $n$-cell edge state exhibits a peculiar distribution of average frequencies. This frequency (voltage) distribution depends on both $n$ and the ladder’s anisotropy. For example, our simulations for the $\eta=2$, 3-cell edge state of Fig. \[Fig3\](d) yield the ratios given in second column of Table \[Table1\]. Comparison with the third column shows that they are in very good agreement with analytic predictions derived below.
------------------------- ------- ---------------------------------------
Ratio MD Predicted
$\omega_1^v/\omega_2^v$ 2.667 $(3\eta^2+8\eta+4)/2\eta(1+\eta)=8/3$
$\omega_1^v/\omega_3^v$ 8.006 $(3\eta^2+8\eta+4)/\eta^2=8$
$\omega_1^h/\omega_2^h$ 2.499 $(\eta^2+6\eta+4)/\eta(2+\eta)=5/2$
$\omega_1^h/\omega_3^h$ 5.017 $(\eta^2+6\eta+4)/\eta^2=5$
$\omega_3^v/\omega_3^h$ 2.005 2
------------------------- ------- ---------------------------------------
: Average frequency (voltage) ratios for 3-cell edge states. The MD ratios are for the $\eta=2$, 3-cell edge state of Fig. \[Fig3\](d), and the predicted ratios were calculated from Eqs. (\[Vert\]) and (\[Horiz\]).
\[Table1\]
The superconducting state forming ahead of the $n$-cell edge state is also unusual. Fig. \[Fig3\] gives snapshot images of the Josephson phase distribution for several values of the applied DC current bias, for the anisotropy $\eta=2$. In panel (a), the current is just below the first threshold, and one sees a single [*Josephson vortex*]{} in the superconducting part of the ladder. The remaining panels (b)–(e) show the phases just after a new cell is added to the dynamic state. At the threshold currents $\gamma^{thr}_n$, the superconducting state becomes unstable, and the vortex jumps into the next cell as the edge state expands. Our simulations show that in general the superconducting state is sensitive to the anisotropy. Thus for rather small values of $\eta \lesssim 1$, there are no vortices trapped in the superconducting part of the ladder over the range of currents studied. For these cases, the Josephson phases of the vertical junctions in the superconducting portion of the ladder simply decrease with distance from the boundary of the resistive portion, corresponding to the [*Meissner state*]{} of the superconductor. With increasing anisotropy, single Josephson vortices appear in the superconducting portion, as in Fig. \[Fig3\]. For large anisotropies ($\eta \sim 5$) more complex [*vortex trains*]{} are observed, and we also find that the penetration of the edge state changes the nature of the superconducting vortex state, rather than simply pushing it ahead as for $\eta=2$. A detailed discussion of the superconducting state will be given elsewhere (Ref. [@FP]).
Theoretical Analysis
====================
The unusual voltage distributions in the $n$-cell edge states can be explained analytically by making use of Kirchhoff’s laws, applied to the time-average currents (normalized to $I^v_c$) and corresponding dimensionless voltages in each cell. The key assumption is the coexistence of the resistive and superconducting states in different portions of the ladder. For a cell $i$ within an $n$-cell edge state, current conservation gives $I^v_i+I^h_i=I^h_{i-1}$, while the voltage condition is $I^v_i-\frac{2}{\eta}I^h_i- I^v_{i+1}=0$. Combining these yields an equation for just the horizontal currents: $$\label{DifEq}
I^h_{i+1}+I^h_{i-1}-2\left(1+\frac{1}{\eta}\right)I^h_i=0.$$ This equation and the two from which it was derived apply to all cells $i$ in $1 \le i \le n$, provided we define $I^h_0 \equiv \gamma,\;
I^v_{n+1} \equiv 0$, and $I^h_{n+1} \equiv I^h_n$, in order to take proper account of the $n$-cell edge state’s boundaries.
Equation (\[DifEq\]) is readily solved by substituting $I^h_i = \lambda^i$, which yields two roots, namely $\lambda \equiv
1+\frac{1}{\eta} + \sqrt{(1+\frac{1}{\eta})^2-1}$ and $1/\lambda$. Hence the general solution of Eq. (\[DifEq\]) is $I^h_i = c_1 \lambda^i+ c_2 \lambda^{-i}$, where the constants $c_1$ and $c_2$ are obtained from the above definitions of $I^h_0$ and $I^h_{n+1}$. With the horizontal currents thus determined, the vertical currents may be computed from $I^v_i=I^h_{i-1} - I^h_i$. The currents are then converted into the average voltages via $V^v_i = I^v_i/\alpha$ and $V^h_i=I^h_i/(\alpha \eta)$. The resulting voltage distribution within an $n$-cell edge state is ($1\le i\le n$): $$\label{Vert}
V^v_i=\frac{\gamma(1-\lambda)(\lambda^{i-1} -\lambda^{2n+1-i})}
{\alpha(\lambda^{2n+1}+1)},$$ $$\label{Horiz}
V^h_i=\frac{\gamma (\lambda^{i} +\lambda^{2n+1-i})}
{\alpha \eta (\lambda^{2n+1}+1)}.$$
Equations (\[Vert\]) and (\[Horiz\]) give the predicted voltage ratios in Table \[Table1\], which for $\eta=2$ are seen to be in very good agreement with our MD simulations. Indeed, we find that for all of the values of $\eta$ studied, the predicted $I$-$V$ curves are in very good agreement with the MD curves, such as those of Fig. \[Fig2\]. Only the values of the current thresholds for the jumps are left undetermined by these equations.
We can also predict the distribution $\{\gamma^{thr}_n\}$ of threshold currents for each $\eta$ by assuming that the superconducting state associated with the $(n-1)$-cell edge state becomes unstable and converts to the $n$-cell edge state when the current $I^h_n$ reaches a depinning current $I_{dp}$, which we take to be independent of $n$. This yields an expression for the threshold currents $$\label{Ithr}
\gamma^{thr}_n =
I_{dp}\frac{\cosh{[(n-\frac{1}{2}) \ln \lambda]}}
{\cosh{(\frac{1}{2} \ln \lambda)}}.$$ The ratios $\gamma^{thr}_n/I_{dp}$ predicted by Eq. (\[Ithr\]) for $n=$ 2, 3, 4 and 5 are plotted versus $\eta$ in Fig. \[Fig4\]. To compare with the MD results, we fit $I_{dp}$ to the first observed MD threshold current for each $\eta$, namely $\gamma^{thr}_1 =$ 1.295, 1.510, 2.040, 2.525, and 4.010, for $\eta =$ 0.5, 1.0, 2.0, 3.0, and 5.0, respectively. The resulting MD ratios are shown by the circles in Fig. \[Fig4\] and agree well with the predictions.
Hysteresis
==========
Despite their rich structure of frequency ratios, the $n$-cell edge states are found to be highly stable in our simulations, for the case of increasing current. However, we also find notable hysteresis effects when the simulations are started with an $n$-cell edge state for large $n$ and the applied DC current is gradually [*decreased*]{} to zero. Figure \[Fig5\] is representative of the hysteretic behavior encountered. The threshold currents and stability properties for the sequence of down-conversions $\{n \rightarrow n-1\}$ are very different than for the increasing-current case. In particular, we observe resonant steps, switching processes, and nonlinear regions in the $I$-$V$ curves. We believe that all of these features arise from the resonant interaction between the n-cell edge states and other excitations, both localized and delocalized, as will be discussed elsewhere[@FP].
Summary
=======
In summary, our numerical simulations have revealed unusual localized dynamic states in anisotropic JJ ladders subject to a DC bias current at one edge. Increasing the bias causes these states to expand by adding single cells in a sequence of sudden jumps, giving rise to a diverse set of voltage distributions and sharp changes in the $I$-$V$ curves. This behavior occurs for a wide range of parameters and should be observable through the $I$-$V$ characteristics or by direct visualization using low temperature scanning laser microscopy techniques[@Ustinov1; @Ustinov2; @Misha1].
We thank A. V. Ustinov and S. Flach for useful discussions. J. B. Page gratefully acknowledges the Max Planck Institute for the Physics of Complex Systems, Dresden, for their support and hospitality.
Permanent address: Department of Physics and Astronomy, Arizona State University, Tempe, AZ 85287-1504.
See for instance, S. Flach and C. R. Willis, Phys. Rep. [**295**]{}, 181 (1998); A. J. Sievers and J. B. Page, in [*Dynamical Properties of Solids VII Phonon Physics*]{}, edited by G. K. Horton and A. A. Maradudin (Elsevier, Amsterdam, 1995); and references therein.
B. I. Swanson, J. A. Brozik, S. P. Love, G. F. Strouse, A. P. Shreve, A. R. Bishop, W.-Z. Wang, and M. I. Salkola, Phys. Rev. Lett. [**82**]{}, 3288 (1999).
U. T. Schwarz, L. Q. English, and A. J. Sievers, Phys. Rev. Lett. [**83**]{}, 223 (1999).
P. Binder, D. Abraimov, A. V. Ustinov, S. Flach, and Y. Zolotaryuk, Phys. Rev. Lett. [**84**]{}, 745 (2000).
E. Trias, J. J. Mazo, and T. P. Orlando, Phys. Rev. Lett. [**84**]{}, 741 (2000).
S. H. Strogatz, [*Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering*]{} (Addison-Wesley, Reading, MA, 1994); K .K. Likharev, Rev. Mod. Phys. [**51**]{}, 101 (1979).
A. Barone and G. Paternò, [*Physics and Applications of the Josephson Effect*]{} (Wiley, New York, 1982).
D. Bonart and J. B. Page, Phys. Rev. E [**60**]{}, R1134 (1999).
P. Binder, D. Abraimov, and A. V. Ustinov, Phys. Rev. E [**62**]{}, 2858 (2000).
D. Abraimov, P. Caputo, G. Filatrella, M. V. Fistul, G. Yu. Logvenov, and A. V. Ustinov, Phys. Rev. Lett. [**83**]{}, 5354 (1999).
P. Caputo, M. V. Fistul, A. V. Ustinov, B. A. Malomed, and S. Flach, Phys. Rev. B [**59**]{}, 14050 (1999).
G. Grimaldi, G. Filatrella, S. Pace, and U. Gambardella, Phys. Lett. A [**223**]{}, 463 (1996).
See, for instance, M. P. Allen and D. J. Tildesly, [ *Computer Simulations of Liquids*]{} (Clarendon, Oxford, 1987).
M. Fistul and J. B. Page, unpublished.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: |
In the present paper, we consider a family of continuous time symmetric random walks indexed by $k\in \mathbb{N}$, $\{X_k(t),\,t\geq 0\}$. For each $k\in \mathbb{N}$ the matching random walk take values in the finite set of states $\Gamma_k=\frac{1}{k}(\mathbb{Z}/k\mathbb{Z})$; notice that $\Gamma_k$ is a subset of $\mathbb{S}^1$, where $\mathbb{S}^1$ is the unitary circle. The infinitesimal generator of such chain is denoted by $L_k$. The stationary probability for such process converges to the uniform distribution on the circle, when $k\to \infty$. Here we want to study other natural measures, obtained via a limit on $k\to \infty$, that are concentrated on some points of $\mathbb{S}^1$. We will disturb this process by a potential and study for each $k$ the perturbed stationary measures of this new process when $k\to \infty$.
We disturb the system considering a fixed $C^2$ potential $V: \mathbb{S}^1 \to \mathbb{R}$ and we will denote by $V_k$ the restriction of $V$ to $\Gamma_k$. Then, we define a non-stochastic semigroup generated by the matrix $k\,\, L_k + k\,\, V_k$, where $k\,\, L_k $ is the infinifesimal generator of $\{X_k(t),\,t\geq 0\}$. From the continuous time Perron’s Theorem one can normalized such semigroup, and, then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on $\Gamma_k$. This new chain is called the continuous time Gibbs state associated to the potential $k\,V_k$, see [@LNT]. The stationary probability vector for such Markov Chain is denoted by $\pi_{k,V}$. We assume that the maximum of $V$ is attained in a unique point $x_0$ of $\mathbb{S}^1$, and from this will follow that $\pi_{k,V}\to \delta_{x_0}$. Thus, here, our main goal is to analyze the large deviation principle for the family $\pi_{k,V}$, when $k \to\infty$. The deviation function $I^V$, which is defined on $ \mathbb{S}^1$, will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use the Varadhan’s Lemma for the process $\{X_k(t),\,t\geq0\}$. For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such family of Markov Chains, when $k\to \infty$. Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov Chains we analyze.
address:
- 'UFRGS, Instituto de Matemática, Av. Bento Gonçalves, 9500. CEP 91509-900, Porto Alegre, Brasil'
- 'UFRGS, Instituto de Matemática, Av. Bento Gonçalves, 9500. CEP 91509-900, Porto Alegre, Brasil'
author:
- 'Artur O. Lopes'
- Adriana Neumann
title: 'Large Deviations for stationary probabilities of a family of continuous time Markov chains via Aubry-Mather theory'
---
[^1]
[^2]
Introduction {#sec1}
============
We will study a family of continuous time Markov Chains indexed by $k\in \mathbb{N}$, for each $k\in \mathbb{N}$ the corresponding Markov Chain take values in the finite set of states $\Gamma_k=\frac{1}{k}(\mathbb{Z}/k\mathbb{Z})$. Let $\mathbb{S}^1$ be the unitary circle which can be identified with the interval $[0,1)$. In this way we identify $\Gamma_k$ with $\{0,\,1/k,\, 2/k,...,\,(k-1)/k\}$ in order to simplify the notation. We will analyse below a limit procedure on $k\to \infty $ and this is the reason why we will consider that the values of the states of the chain are in the unitary circle. The continuous time Markov Chain with index $k$ has the following behaviour: if the particle is at $j/k$ it waits an exponential time of parameter $2$ and then jumps either to $(j-1)/k$ or to $(j+1)/k$ with probability $1/2$. In order to simplify the notation, we omit the indication that the the sum $j+1$ is mod $k$ and the same for the subtraction $j-1$; we will do this without other comments in the rest of the text. The skeleton of this continuous time Markov Chain has matrix of transitions $\mathcal{P}_k=(p_{i,j})_{i,j}$ such that the element $p_{j,j+1}$ describes the probability of transition of $i/k$ to $j/k$, which is $p_{i,i+1}=p_{i,i-1}=1/2$ and $p_{i,j}=0$, for all $j\neq i$. The infinitesimal generator is the matrix $L_k=2(\mathcal{P}_k-I_k)$, where $I_k$ is the identity matrix, in words $L_k$ is a matrix that is equal to $-2$ in the diagonal $L_{i,j}=1$ above and below the diagonal, and the rest is zero. Notice that $L_k$ is symmetric matrix. For instance, take $k=4$, $$L_4= \left(
\begin{array}{cccc}
-2 & 1 & 0 & 1 \\
1 & -2 & 1 & 0 \\
0 & 1 & -2 & 1 \\
1 & 0 & 1 & -2 \\
\end{array}\right).$$
We can write this infinitesimal generator as an operator acting on functions $f: \Gamma_k\to\mathbb{R}$ as $$\label{ger}
\begin{split}
(\mathcal{L}_kf)({\genfrac{}{}{}{1}{j}{k}})=\big[f({\genfrac{}{}{}{1}{j+1}{k}})-f({\genfrac{}{}{}{1}{j}{k}})\big]+\big[f({\genfrac{}{}{}{1}{j-1}{k}})-f({\genfrac{}{}{}{1}{j}{k}})\big].
\end{split}$$ Notice that this expression describes the infinitesimal generator of continuous time random walk. For each $k\in \mathbb{N}$, we denote $P_k(t)=e^{t\, L_k}$ the semigroup associated to this infinitesimal generator. We also denote by $\pi_k$ the uniform probability on $\Gamma_k$. This is the invariant probability for the above defined continuous Markov Chain. The probability $\pi_k$ converges to the Lebesgue measure on $ \mathbb{S}^1$, as $k \to \infty$.
Fix $T>0$ and $x_0\in\mathbb{S}^1$, let $\mathbb{P}_k$ be probability on the Skorohod space $D[0,T]$, the space of *càdlàg* trajectories taking values on $\mathbb{S}^1$, which are induced by the infinitesimal generator $k\mathcal L_k$ and the initial probability $\delta_{x_k(x_0)}$, which is the Delta of Dirac at $x_k(x_0):=\lfloor k x_0\rfloor/k\in \Gamma_k$, where $x_k(x_0)$ is the closest point to $x_0$ on the left of $x_0$ in the set $\Gamma_k$. Denote by $\mathbb{E}_k$ the expectation with respect to $\mathbb{P}_k$ and by $\{X_k(t)\}_{t\in [0,T]}$ the continuous time Markov chain with the infinitesimal generator $k\mathcal L_k$. One of our goals is described in the Section \[sec2\] which is to establish a Large Deviation Principle for $\{\mathbb{P}_k\}_k$ in $D[0,T]$. This will be used later on the Subsection \[subsec3.1\] to define the Lax-Oleinik semigroup. One can ask: why we use this time scale? Since the continuous time symmetric random walk converges just when the time is rescaled with speed $k^2$, then taking speed $k$ the symmetric random walk converges to a constant trajectory. Here the setting follows similar ideas as the ones in the papers [@A1] and [@A2], where N. Anantharaman used the Shilder’s Theorem. The Shilder’s Theorem says that for $\{B_t\}_t$ (the standard Brownian Motion) the sequence $\{\sqrt{\varepsilon}B_t\}_t$, which converges to a trajectory constant equal to zero, when $\varepsilon\to 0$, has rate of convergence equal to $I(\gamma)=\int_0^T\frac{(\gamma'(s))^2}{2}\,ds$, if $\gamma:[0,T]\to\mathbb{R}$ is absolutely continuous, and $I(\gamma)=\infty$, otherwise.
We proved that the sequence of measures $\{\mathbb{P}_k\}_k$ satisfy the large deviation principle with rate function $I_T: D[0,T]\to \mathbb{R}$ such that $$\begin{split}
&I_{T}(\gamma)=\int_0^T
\Big\{\gamma'(s)\log\Big(\frac{\gamma'(s)+\sqrt{(\gamma'(s))^2+4}}{2}\Big)
-\sqrt{(\gamma'(s))^2+4}+2\Big\}\,ds,
\end{split}$$ if $\gamma \in \mathcal{AC}[0,T]$ and $I_{T}(\gamma)=\infty$, otherwise.
Finally, in Section \[sec3\], we consider this system disturbed by a $C^2$ potential $V:\mathbb S^1 \to \mathbb{R}.$ The restriction of $V$ to $ \Gamma_k$ is denoted by $V_k$. From the continuous time Perron’s Theorem we get an eigenvalue and an eigenfunction for the operator $k\,L_k + k \, V_k$. Then, normalizing the semigroup associated to $k\,L_k + k \, V_k$ via the eigenvalue and eigenfunction of this operator, we obtain a new continuous time Markov Chain, which is called the Gibbs Markov Chain associated to $k\, V_k$ (see [@BEL] and [@LNT]). Denote by $\pi_{k,V}$ the initial stationary vector of this family of continuous time Markov Chains indexed by $k$ and which takes values on $ \Gamma_k\subset \mathbb S^1$. We investigate the large deviation properties of this family of stationary vectors which are probabilities on $\mathbb S^1$, when $k\to \infty$. More explicitly, roughly speaking, the deviation function $I^V$ should satisfy the property: given an interval $[a,b]$ $$\lim_{k\to \infty} {\genfrac{}{}{}{1}{1}{k}} \,\log \,\pi_{k,V}\,[a,b]\,=\,-\inf_{x \in [a,b]}{I^V(x)}.$$
If $V:\mathbb S^1 \to \mathbb{R}$ attains the maximal value in just one point $x_0$, then, $\pi_{k,V}$ weakly converge, as $k\to \infty$, to the delta Dirac in $x_0.$ We will use results of Aubry-Mather theory (see [@BG], [@CI], [@Fath] or [@Fat]) in order to exhibit the deviation function $I^V$, when $k \to \infty$.
It will be natural to consider the Lagrangian defined on $S^1$ given by $$L(x,v)= - V(x) + v \log( (v + \sqrt{v^2 + 4} )/2 ) - \sqrt{v^2
+ 4} + 2,$$ which is convex and superlinear. It is easy to get the explicit expression of the associated Hamiltonian,
As we will see the deviation function is obtained from certain weak KAM solutions of the associated Hamilton-Jacobi equation (see Section 4 and 7 in [@Fat]). In the one-dimensional case $\mathbb S^1$ the weak KAM solution can be in some cases explicitly obtained (for instance when $V$ as a unique point of maximum). From the conservation of energy (see [@Car]), in this case, one can get a solution (periodic) with just one point of lack of differentiability.
It follows from the continuous time Perron’s Theorem that the probability vector $\pi_{k,V}$ depends for each $k$ on a left eigenvalue and on a right eigenvalue. In this way, in the limit procedure, this will require in our reasoning the use of the positive time and negative time Lax-Oleinik operators (see [@Fat]).
From a theoretical perspective, following our reasoning, one can think that we are looking for the maximum of a function $V:\mathbb S^1 \to \mathbb{R}$ via an stochastic procedure based on continuous time Markov Chains taking values on the finite lattice $\Gamma_k$, $k \in \mathbb{N}$, which is a discretization of the circle $\mathbb S^1$. Maybe this can be explored as an alternative approach to Metropolis algorithm, which is base in frozen arguments. In our setting the deviation function $I^V$ gives bounds for the decay of the probability that the stochastic procedure corresponding to a certain $k$ does not localize the maximal value.
Moreover, in the Section \[sec4\] we compute explicitly the entropy of the Gibbs state on the Skhorod space associated to the potential $k\,V_k$. In this moment we need to generalize a result which was obtained in [@LNT]. After that, we take the limit on $k\to \infty$, and we obtain the entropy for the limit process which in this case is shown to be zero.
Large Deviations on the Skorohod space\
for the unperturbed system {#sec2}
=======================================
The goal of this section is to prove the Large Deviation Principle for the sequence of measures $\{\mathbb{P}_k\}_k$ on $D[0,T]$, defined in Section \[sec1\]. We recall that $\mathbb{P}_k$ is induced by the continuous time random walk, which has infinitesimal generator $k\mathcal L_k$, see , and the initial measure $\delta_{x_k(x_0)}$, which is the Delta of Dirac at $x_k(x_0)=\lfloor k x_0\rfloor/k\in \Gamma_k$.
\[teo1\] The sequence of probabilities $\{\mathbb{P}_k\}_k$ satisfies:
- Upper Bound: For all $\mathcal{C}\subset D[0,T]$ closet set, $$\begin{split}
&\varlimsup_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal C\Big]\leq -\inf_{\gamma\in \mathcal O}I_{T}(\gamma) .
\end{split}$$
- Lower Bound: For all $\mathcal{O}\subset D[0,T]$ open set,$$\begin{split}
&\varliminf_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\geq -\inf_{\gamma\in \mathcal O}I_{T}(\gamma) .
\end{split}$$
The rate function $I_{T}: D[0,T]\to \mathbb{R}$ is $$\label{funcional}
\begin{split}
&I_{T}(\gamma)=\int_0^T
\Big\{\gamma'(s)\log\Big(\frac{\gamma'(s)+\sqrt{(\gamma'(s))^2+4}}{2}\Big)
-\sqrt{(\gamma'(s))^2+4}+2\Big\}\,ds,
\end{split}$$ if $\gamma \in \mathcal{AC}[0,T]$ and $I_{T}(\gamma)=\infty$, otherwise.
The set $\mathcal{AC}[0,T]$ is the set of all absolutely continuous functions $\gamma:[0,T] \to \mathbb{S}^1$. Saying that a function $\gamma:[0,T] \to \mathbb{S}^1$ is absolutely continuous means that for all $\varepsilon>0$ there is $\delta>0$, such that, for all family of intervals $\{(s_i,t_i)\}_{i=1}^{n}$ on $[0,T]$, with $\sum_{i=1}^{n} t_i-s_i<\delta$, we have $\sum_{i=1}^{n} \gamma(t_i)-\gamma(s_i)<\varepsilon$.
This proof is divided in two parts: upper bound and lower bound. The proof of the upper bound is on Subsections \[subsec2.2\] and \[subsec2.3\]. And, the proof of the lower bound is Subsection \[subsec2.4\]. In the Subsection \[subsec2.1\], we prove some useful tools for this proof, like the one related to the perturbation of the system and also the computation of the Lengendre transform.
Useful tools {#subsec2.1}
------------
In this subsection we will prove some important results for the upper bound and for the lower bound. More specifically, we will study a typical pertubation of the original system and also the Radon-Nikodym derivative of this process. Moreover, we will compute the Fenchel-Legendre transform for a function $H$ that appears in a natural way in the Radon-Nikodym derivative.
For a time partition $0=t_0<t_1<t_2<\dots<t_n=T$ and for $\lambda_i:[t_{i-1},t_i]\to\mathbb{R}$ a linear function with linear coefficient $\lambda_i$, for $i\in\{1,\dots,n\}$, consider a polygonal function $\lambda:[0,T]\to\mathbb R$ as $\lambda(s)=\lambda_i(s)$ in $[t_{i-1},t_i]$, for all $i\in\{1,\dots,n\}$.
For each $k\in \mathbb N$ and for the polygonal function $\lambda:[0,T]\to \mathbb R$, defined above, consider the martingale $$\label{martingale1}
M^k_t=\exp\Big\{k\,\big[\lambda(t)X_k(t)-\lambda(0)X_k(0)-\frac{1}{k}\int_0^t e^{-k\lambda(s) X_k(s)}(\partial_s+k{\mathcal L}_k) e^{k\lambda(s) X_k(s)}ds\big]\Big\},$$ notice that $M^k_t$ is positive and $\mathbb{E}_k[M^k_t]=1$, for all $t\geq 0$, see Appendix 1.7 of [@KL]. Making a simple calculation, the part of the expression inside the integral can rewritten as $$\begin{split}
e^{-k\lambda(s)X_k(s)}k{\mathcal L}_k e^{k\lambda(s)X_k(s)}=&
\,e^{-k\lambda(s)X_k(s)}k\Big\{e^{k\lambda(s)( X_k(s)+1/k)}-e^{k\lambda(s) X_k(s)}\\&\qquad\qquad\qquad+e^{k\lambda(s)( X_k(s)-1/k)}-e^{k\lambda(s) X_k(s)}\\
=&\,e^{-k\lambda(s)X_k(s)}k \,e^{k\lambda(s) X_k(s)}\Big\{e^{\lambda(s)}-1+e^{-\lambda(s)}-1\Big\}\\
=&k \,\Big\{e^{\lambda(s)}+e^{-\lambda(s)}-2\Big\}\\
=&k \,H(\lambda(s)),\\
\end{split}$$ where $H(\lambda):=e^\lambda+e^{-\lambda}-2$. Since $\lambda$ is a polygonal function, the other part of the expression inside the integral is equal to $$\begin{split}
e^{-k\lambda(s)X_k(s)}\partial_s\,e^{k\lambda(s)X_k(s)}=&
\,e^{-k\lambda(s)X_k(s)}\,e^{k\lambda(s) X_k(s)}k\lambda'(s)X_k(s)\\
=&
\,k\lambda'(s)\,X_k(s)=\,k\sum_{i=0}^{n-1}\lambda_{i+1}\textbf{1}_{[t_{i},t_{i+1}]}(s)\,X_k(s).
\end{split}$$ Using telescopic sum, we have $$\begin{split}\lambda(T)X_k(T)-\lambda(0)X_k(0)&=\sum_{i=0}^{n-1}\big[\lambda_{i+1}(t_{i+1})X_k(t_{i+1})-\lambda_{i}(t_{i})X_k(t_{i})\big]\\&=\sum_{i=0}^{n-1}\big[\lambda_{i+1}(t_{i+1})X_k(t_{i+1})-\lambda_{i+1}(t_{i})X_k(t_{i})\big].
\end{split}$$ The last equality follows from the fact that $\lambda$ is a polygonal function $(\lambda_{i}(t_{i})=\lambda_{i+1}(t_{i}))$. Thus, the martingale $M^k_T$ becomes $$\label{martingale2}
\begin{split}
M^k_T=\exp\Bigg\{k\,\sum_{i=0}^{n-1}\Big[&\,\lambda_{i+1}(t_{i+1})X_k(t_{i+1})-\lambda_{i+1}(t_{i})X_k(t_{i})\\&-\int_{t_{i}}^{t_{i+1}} \!\!\![\,\lambda_{i+1}\,X_k(s)+ H(\lambda_{i+1}(s))\,]\,ds \,\Big]\Bigg\}.
\end{split}$$
\[lambda\_dif\] If $\lambda:[0,T]\to\mathbb R$ is an absolutely continuous function, the expression for the martingale $M^k_T$ can be rewritten as $$\begin{split}
M^k_T=\exp\Bigg\{k\,\Big[\lambda(T)X_k(T)-\lambda(0)X_k(0)-\int_{0}^{T} \![\,\lambda'(s)\,X_k(s)+ H(\lambda(s))\,]\,ds \,\Big]\Bigg\}.
\end{split}$$
Define a measure on $D[0,T]$ as $$\mathbb{P}_k^\lambda[A]=\mathbb{E}_k[\mathbf{1}_A(X_k)\,M^k_T],$$ for all set $A$ in $D[0,T]$. For us $\mathbf{1}_A$ is the indicator function of the set $A$, it means that $\mathbf{1}_A(x)=1$ if $x\in A$ or $\mathbf{1}_A(x)=0$ if $x\notin A$.
One can observe that this measure is associated to a non-homogeneous in time process, which have infinitesimal generator acting on functions $f: \Gamma_k\to\mathbb{R}$ as $$\begin{split}
(\mathcal{L}_k^{\lambda(t)} f)({\genfrac{}{}{}{1}{j}{k}})=e^{ \lambda(t) }\big[f({\genfrac{}{}{}{1}{j+1}{k}})-f({\genfrac{}{}{}{1}{j}{k}})\big]+e^{-\lambda(t) }\big[f({\genfrac{}{}{}{1}{j-1}{k}})-f({\genfrac{}{}{}{1}{j}{k}})\big].
\end{split}$$ By Proposition 7.3 on Appendix 1.7 of [@KL], $M^k_T$ is a Radon-Nikodym derivative $\frac{d\mathbb{P}_k^\lambda}{d\mathbb{P}_k}$.
To finish this section, we will analyse the properties of the function $H$, which appeared in the definition of the martingale $M_T^k$.
\[Legendre\] Consider the function $$\begin{split}
H(\lambda)=e^\lambda+e^{-\lambda}-2
\end{split}$$ the Fenchel-Legendre transform of $H$ is $$\label{legendre1}
\begin{split}
L(v)=\sup_{\lambda} \big\{\lambda v -H(\lambda)\big\}=v\log\Big({\genfrac{}{}{}{1}{1}{2}}\Big(v+\sqrt{(v)^2+4}\Big)\Big)-\sqrt{(v)^2+4}+2\,.
\end{split}$$ Moreover, the supremum above is attain on $\lambda_v=\log\Big({\genfrac{}{}{}{1}{1}{2}}\Big(v+\sqrt{(v)^2+4}\Big)\Big)$.
Maximizing $\lambda v -(e^\lambda+e^{-\lambda}-2)$ on $\lambda$, we obtain the expression on .
Then, we can rewrite the rate functional $I_{T}: D[0,T]\to \mathbb{R}$, defined in , as $$\begin{aligned}
\label{func_melhor}
I_{T}(\gamma)=\left\{\begin{array}{ll}\int_0^T L(\gamma' (s))\,ds,& if\,\, \gamma \in \mathcal{AC}[0,T],\\
\infty,&otherwise.
\end{array}\right.\end{aligned}$$
Upper bound for compact sets {#subsec2.2}
----------------------------
Let $\mathcal C$ be an open set of $D[0,T]$. For all $\lambda:[0,T]\to\mathbb R$ polygonal function as in Subsection \[subsec2.1\], we have $$\begin{split}
&\mathbb{P}_k\Big[X_k\in \mathcal C\Big]
=\mathbb{E}_k^\lambda\Big[\mathbf{1}_{\mathcal C}(X_k^\lambda)\frac{d\mathbb{P}_k}{d\mathbb{P}_k^\lambda}\Big]=
\mathbb{E}_k^\lambda\Big[\mathbf{1}_{\mathcal C}(X_k^\lambda)(M^k_T)^{-1}\Big]\\
&=\mathbb{E}_k^\lambda\Bigg[\mathbf{1}_{\mathcal C}(X_k^\lambda)\,\exp\Big\{-k\,\sum_{i=1}^{n}\Big(\,\lambda_{i+1}(t_{i+1})X_k(t_{i+1})-\lambda_{i+1}(t_{i})X_k(t_{i})\\&\qquad\qquad\qquad\qquad\qquad-\int_{t_{i}}^{t_{i+1}} \!\!\![\,\lambda_{i+1}\,X_k(s)+ H(\lambda_{i+1}(s))\,]\,ds \,\Big)\Big\}\Bigg]\\
&\leq\sup_{\gamma\in \mathcal C}\, \exp\Bigg\{-k\,\sum_{i=1}^{n}\Big(\,\lambda_{i+1}(t_{i+1})\gamma(t_{i+1})-\lambda_{i+1}(t_{i})\gamma(t_{i})\\&\qquad\qquad\qquad\qquad\qquad-\int_{t_{i}}^{t_{i+1}} \!\!\![\,\lambda_{i+1}\,\gamma(s)+ H(\lambda_{i+1}(s))\,]\,ds \,\Big)\Bigg\}\\
&=\exp\Big\{-k\,\inf_{\gamma\in \mathcal C}\, \sum_{i=0}^{n-1}J^{i+1}_{\lambda_{i+1}}(\gamma)\Big\},\\
\end{split}$$ for all $\lambda_{i+1}:[t_i,t_{i+1}]\to\mathbb{R}$ linear function, where $J^{i+1}_{\lambda_{i+1}}(\gamma)$ is equal to $$\begin{split}
& \lambda_{i+1}(t_{i+1})\gamma(t_{i+1})-\lambda_{i+1}(t_{i})\gamma(t_{i})-\int_{t_{i}}^{t_{i+1}} \!\!\![\,\lambda_{i+1}'(s)\,\gamma(s)+ H(\lambda_{i+1}(s))\,]\,ds .
\end{split}$$ Then, for all $\mathcal C$ open set on $D[0,T]$, minimizing over the time-partition and over functions $\lambda_1,\dots,\lambda_n$, we have $$\begin{split}
&\varlimsup_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal C\Big]
\leq-\sup_{\{t_i\}_i}\sup_{\lambda_1}\cdots\sup_{\lambda_n}\,\inf_{\gamma\in \mathcal C}\, \sum_{i=0}^{n-1}J^{i+1}_{\lambda_{i+1}}(\gamma).
\end{split}$$ Since $J^{i+1}_{\lambda_{i+1}}(\gamma)$ is continuous on $\gamma$, using Lemma 3.3 (Minimax Lemma) in Appendix 2 of [@KL], we can interchanged the supremum and infimum above. And, then, we obtain, for all $\mathcal K$ compact set $$\label{ineq1}
\begin{split}
&\varlimsup_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal K\Big]
\leq-\inf_{\gamma\in \mathcal K} \,\sup_{\{t_i\}_i}\,I_{\{t_i\}}(\gamma),
\end{split}$$ where $I_{\{t_i\}}(\gamma)=\sup_{\lambda_1}\cdots\sup_{\lambda_n}\,\sum_{i=0}^{n-1}J^{i+1}_{\lambda_{i+1}}(\gamma).$ Define $I(\gamma)=\sup_{\{t_i\}_i}\,I_{\{t_i\}}(\gamma)$. Notice that $$\begin{split}
\sup_{\lambda_1}\cdots\sup_{\lambda_n}\,\sum_{i=0}^{n-1}J^{i+1}_{\lambda_{i+1}}(\gamma)
=&\,\sup_{\lambda_1}J^{1}_{\lambda_{1}}(\gamma)+\cdots+\sup_{\lambda_n}J^{n}_{\lambda_{n}}(\gamma)\\
\geq&\,\sup_{\lambda\in \mathbb{R}}J^{1}_{\lambda}(\gamma)+\cdots+\sup_{\lambda \in \mathbb{R}}J^{n}_{\lambda}(\gamma)\,=\,\sum_{i=0}^{n-1} \sup_{\lambda\in \mathbb{R}}J^{i}_{\lambda}(\gamma).
\end{split}$$
If $\gamma\in\mathcal{AC}[0,T]$, then $$\begin{split}
J^{i}_{\lambda}(\gamma)
&=\,(t_{i+1}-t_{i}) \,\Big\{\,\lambda\,\frac{1}{t_{i+1}-t_{i}}\int_{t_{i}}^{t_{i+1}} \!\!\!\!\gamma'(s)\,ds\,- \,\,H(\lambda) \Big\}.
\end{split}$$
Thus, $$\begin{split}
I_{\{t_i\}_i}(\gamma)\geq&\sum_{i=0}^{n-1}\,(t_{i+1}-t_{i}) \,\sup_{\lambda\in \mathbb{R}}\,\Big\{\lambda\,\frac{1}{t_{i+1}-t_{i}}\int_{t_{i}}^{t_{i+1}} \!\!\!\!\gamma'(s)\,ds\,- \,\,H(\lambda)\Big\}\\
&=\sum_{i=0}^{n-1}\,(t_{i+1}-t_{i}) \,L\Big(\frac{1}{t_{i+1}-t_{i}}\int_{t_{i}}^{t_{i+1}} \!\!\!\!\gamma'(s)\,ds\Big).\\
\end{split}$$ The last equality is true, because $L(v)=\sup_{\lambda\in\mathbb{R}}\{v\lambda-H(\lambda)\}$, see . Putting it on the definition of $I(\gamma)$, we have $$\label{(7)}
\begin{split}
I(\gamma)&=\sup_{\{t_i\}_i}\,I_{\{t_i\}_i}(\gamma)\\
&\geq \sup_{\{t_i\}_i}\,\, \sum_{i=0}^{n-1}\,(t_{i+1}-t_{i}) \,L\Big(\frac{1}{t_{i+1}-t_{i}}\int_{t_{i}}^{t_{i+1}} \!\!\!\!\gamma'(s)\,ds\Big)\\
&\geq\int_0^TL(\gamma'(s))\,ds=I_T(\gamma),
\end{split}$$ as on or on .
Now, consider the case where $\gamma\notin\mathcal{AC}[0,T]$, then there is $\varepsilon>0$ such that for all $\delta>0$ there is a family of intervals $\{(s_i,t_i)\}_{i=1}^{n}$ on $[0,T]$, with $\sum_{i=1}^{n} t_i-s_i<\delta$, but $\sum_{i=1}^{n} \gamma(t_i)-\gamma(s_i)>\varepsilon $. Thus, taking the time-partition of $[0,T]$ as $t'_0=0<t'_1<\dots<t'_{2n}<t'_{2n+1}=T$, over the points $s_i, t_i$, we get $$\begin{split}
\sum_{j=1}^{2n} J_{\lambda}^j(\gamma)
&=\lambda \sum_{j=1}^{2n} \gamma(t'_j)-\gamma(t'_{j-1})\,-\, H(\lambda)\sum_{j=1}^{2n} t'_j-t'_{j-1}\\
&=\lambda \sum_{i=1}^{n} \gamma(t_i)-\gamma(s_i)\,-\, H(\lambda)\sum_{i=1}^{n} t_i-s_i\\
& \geq \lambda \varepsilon \,-\,H(\lambda)\delta.
\end{split}$$ Then, $$I(\gamma)\geq \lambda \varepsilon \,-\,H(\lambda)\delta,$$ for all $\delta>0$ and for all $\lambda\in \mathbb{R}$. Thus, $I(\gamma)\geq \lambda \varepsilon $, for all $\lambda\in \mathbb{R}$. Remember that $\varepsilon $ is fixed and we take $\lambda\to \infty$. Therefore, $I(\gamma)=\infty$, for $\gamma\notin\mathcal{AC}[0,T]$. Then, $I(\gamma)=I_{T}(\gamma)$ as on or on .
In conclusion, we have obtained, by inequalities , and definition of $I(\gamma)$, that $$\begin{split}
&\varlimsup_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal K\Big]
\leq-\inf_{\gamma\in \mathcal K} \,I_{T}(\gamma),
\end{split}$$ where $I_T$ was defined on or on .
Upper bound for closed sets {#subsec2.3}
---------------------------
To extend the upper bound for closed sets we need to use a standard argument, which is to prove that the sequence of measures $\{\mathbb{P}_k\}_k$ is exponentially tight, see Proposition 4.3.2 on [@A] or on Section 1.2 of [@OV]. By exponentially tight we understood that there is a sequence of compact sets $\{\mathcal{K}_j\}_j$ in $D[0,T]$ such that $$\begin{split}
&\varlimsup_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal K_j\Big]
\leq-j,
\end{split}$$ for all $j\in \mathbb N$.
Then this section is concerned about exponential tightness. First of all, as in Section 4.3 on [@A] or in Section 10.4 on [@KL], we also claim that the exponential tightness is just a consequence of the lemma below,
\[l1\] For every $\varepsilon >0$, $$\varlimsup_{\delta\downarrow 0}\varlimsup_{k\to\infty}\frac{1}{k}\log \mathbb P_{k}
\Big[\sup_{|t-s|\leq \delta}|X_k(t)-X_k(s)|>\varepsilon \Big]\,=\,-\infty\,.$$
Firstly, notice that $$\begin{split}
&\Big\{\sup_{|t-s|\leq \delta}|\gamma(t)-\gamma(s)|>\varepsilon \Big\}\\
& \subset\bigcup_{\ell=0}^{\lfloor T\delta^{-1}\rfloor}\Big\{\sup_{\ell\delta\leq t< (\ell+1)\delta}
|\gamma(t)-\gamma(\ell\delta)|>\frac{\varepsilon }{4}\Big\}\,.\\
\end{split}$$ We use ${\genfrac{}{}{}{1}{\varepsilon }{4}}$ here instead of ${\genfrac{}{}{}{1}{\varepsilon }{3}}$ due to the presence of jumps. Using the elementary fact that, for any sequences of positive real numbers $a_N,b_N$, we have $$\label{limsup}
\varlimsup_{N\to\infty}{\genfrac{}{}{}{1}{1}{N}}\log(a_N+b_N)=
\max \Big\{\varlimsup_{N\to\infty}{\genfrac{}{}{}{1}{1}{N}}\log(a_N),\varlimsup_{N\to\infty}{\genfrac{}{}{}{1}{1}{N}}\log(b_N)\Big\}\,,$$ in order to prove this lemma, it is enough to show that $$\label{l2}
\varlimsup_{\delta\downarrow 0}\varlimsup_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
|X_k(t)-X_k(t_0)|>\varepsilon \Big]\,=\,-\infty\,,$$ for every $\varepsilon >0$ and for all $t_0\geq 0$. Let $ M^{k}_t$ be the martingale defined before, now with a constant function $\lambda$. Using the expression for $ M^{k}_t$ and the fact that $\lambda$ is constant, we have that $$\begin{split}
M^{k}_t\,=\, \exp{\Big\{k\big[c\lambda\,(X_k(t)-X_k(0))\,-\,t\,H(c\lambda)\big]\Big\}}
\end{split}$$ is a positive martingale equal to $1$ at time $0$. The constant $c>0$ above will be chosen *a posteriori* sufficiently large. In order to obtain \eqref{l2} it is sufficient to prove the limits $$\label{l3}
\varlimsup_{\delta\downarrow 0}\varlimsup_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}\Big|
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) \Big|>c\lambda\,\varepsilon \Big]\,=\,-\infty$$ and $$\label{l4}
\varlimsup_{\delta\downarrow 0}\varlimsup_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}\Big|
(t-t_0)\, H(c\lambda) \Big|>c\lambda\varepsilon \Big]=-\infty\,.$$ The event in the second probability is deterministic and, by boundedness, we conclude that for $\delta$ small enough the probability in \eqref{l4} vanishes.
On the other hand, to prove \eqref{l3}, we observe that we can drop the absolute value, since $$\label{mod}
\begin{split}
&\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}\Big|
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) \Big|>c\lambda\,\varepsilon \Big]\\
& \leq\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) >c\lambda\,\varepsilon \Big]+
\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) <-c\lambda\,\varepsilon \Big]
\end{split}$$ and then use \eqref{limsup} again. Because $\{M^{k}_t/M^{k}_{t_0};\,t\geq t_0\}$ is a mean one positive martingale, we can apply Doob’s inequality, which yields $$\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) >c\lambda\,\varepsilon \Big]
\,=\,
\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
\Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) >e^{c\lambda\,\varepsilon\, k }\Big]
\,\leq\,\frac{1}{e^{c\lambda\varepsilon k}}\,.$$
Taking logarithms and dividing by $k$, we get $$\label{bound111}
\varlimsup_{\delta\downarrow 0}\varlimsup_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log
\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) >c\lambda\,\varepsilon \Big]\leq -c\lambda\,\varepsilon ,$$ for all $c>0$. To treat the second term in \eqref{mod}, we just need to observe that $\{M^{k}_{t_0}/M^{k}_{t};\,t\geq t_0\}$ is a positive submartingale, so Doob’s inequality still applies, and we rewrite $$\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_t}{M^{k}_{t_0}}}\Big) <-c\lambda\,\varepsilon \Big]$$ as $$\mathbb P_{k}\Big[\sup_{t_0\leq t\leq t_0+\delta}
{\genfrac{}{}{}{1}{1}{k}}\log \Big({\genfrac{}{}{}{1}{M^{k}_{t_0}}{M^{k}_{t}}}\Big) >c\lambda\,\varepsilon \Big].$$ Then, we get the same bound for this probability as in , it finishes the proof.
Lower bound {#subsec2.4}
-----------
Let $\gamma:[0,T]\to\mathbb{S}^{1}$ be a function such that $\gamma(0)=x_0$ and, for $\delta>0$, define $$B_\infty(\gamma,\delta)=\Big\{f:[0,T]\to\mathbb{S}^{1}:\, \sup_{0\leq t\leq T}|f(t)-\gamma(t)|<\delta\Big\}.$$
Let $\mathcal O$ be an open set of $D[0,T]$. For all $\gamma \in\mathcal O$, our goal is to prove that $$\label{**}
\varliminf_{k\to\infty}\frac{1}{k}\log\mathbb P_k[X_k\in\mathcal O]\geq -I_T(\gamma).$$ For that, we can suppose $\gamma\in\mathcal{AC}[0,T]$, because if $\gamma\notin\mathcal{AC}[0,T]$, then $I_T(\gamma)=\infty$ and \eqref{**} is trivial. Since $\gamma \in\mathcal O$, there is a $\delta>0$ such that $B_\infty(\gamma,\delta)\subset\mathcal O$ and hence $$\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\geq \mathbb{P}_k\Big[X_k\in B_\infty(\gamma,\delta)\Big].$$ We consider the measure $\mathbb{P}_k^\lambda$ with $\lambda:[0,T]\to\mathbb R$ given by $\lambda(s)=\lambda_\gamma(s)=\log\Big({\genfrac{}{}{}{1}{1}{2}}\Big(\gamma'(s)+\sqrt{(\gamma'(s))^2+4}\Big)\Big)$, which, by Lemma \[Legendre\], attains the supremum $\sup_\lambda[ \lambda\, \gamma'(s)-H(\lambda)] $ for each $s$. Thus, $$\begin{split}
&\mathbb{P}_k\Big[X_k\in B_\infty(\gamma,\delta)\Big]
=\mathbb{E}_k^{\lambda}
\Big[\mathbf{1}_{B_\infty(\gamma,\delta)}(X_k^ {\lambda})\frac{d\mathbb{P}_k}{d\mathbb{P}_k^{\lambda}}\Big]=\mathbb{E}_k^{\lambda}\Big[
\mathbf{1}_{B_\infty(\gamma,\delta)}(X_k^{\lambda})(M^k_T)^{-1}\Big]\\
&=\mathbb{E}_k^{\lambda}
\Bigg[\mathbf{1}_{B_\infty(\gamma,\delta)}(X_k^{\lambda})\,\,\exp\Big\{k\,\Big[\lambda(T)X_k(T)-\lambda(0)X_k(0)\\&\qquad\qquad\qquad\qquad\qquad\qquad
-\int_{0}^{T} \![\,\lambda'(s)\,X_k(s)+ H(\lambda(s))\,]\,ds \,\Big]\Big\}\Bigg].\\
\end{split}$$ The last equality follows from Remark \[lambda\_dif\]. Define the measure $\mathbb{P}_{k,\delta}^{\lambda,\gamma}$ as $$\label{measure}
\begin{split}
&\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[f(X_k^{\lambda})\Big]=
\frac{\mathbb{E}_k^{\lambda}
\Big[\mathbf{1}_{B_\infty(\gamma,\delta)}(X_k^{\lambda})f(X_k^{\lambda})\Big]}{\mathbb{P}_k^{\lambda}[X_k^{\lambda}\in B_\infty(\gamma,\delta)]},
\end{split}$$ for every bounded function $f:D[0,T]\to\mathbb R$. Then, $$\begin{split}
&\mathbb{P}_k\Big[X_k\in B_\infty(\gamma,\delta)\Big]\\
&=\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[\exp\Big\{-k\,\big[\,\lambda(T)\,X_k^{\lambda}(T)-\lambda(0)\,X_k^{\lambda}(0)\,-\int_{0}^{T} \!\lambda'(s)\,X_k(s)\,ds \big]\Big\}\Big] \\
&\qquad\qquad\cdot\,e^{ k\int_{0}^{T} \! H(\lambda(s))\,ds }\,\,
\mathbb{P}_k^{\lambda}\Big[X_k^{\lambda}\in B_\infty(\gamma,\delta)\Big].\\
\end{split}$$ Then, using Jensen’s inequality $$\begin{split}
&\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\\
&\geq - \,\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[\lambda(T)\,X_k^{\lambda}(T)-\lambda(0)\,X_k^{\lambda}(0)\,-\int_{0}^{T} \!\lambda'(s)\,X_k(s)\,ds \Big]\\&\qquad\qquad\qquad+\,\int_{0}^{T} \! H(\lambda(s))\,ds+\frac{1}{k}\log\mathbb{P}_k^{\lambda}\Big[X_k^{\lambda}\in B_\infty(\gamma,\delta)\Big]\\
&\geq- \,C(\lambda)\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[|X_k^{\lambda}(T)-\gamma(T)|+|X_k^{\lambda}(0)-\gamma(0)|+\!\int_{0}^{T} \!\!|X_k(s)-\gamma(s)|\,ds \Big]\\&
\qquad\qquad\qquad-\Big(\lambda(T)\,\gamma(T)-\lambda(0)\,\gamma(0)\,-\int_{0}^{T} \![\lambda'(s)\,\gamma(s)+H(\lambda(s))]\,ds\Big)\\&\qquad\qquad\qquad+\frac{1}{k}\log\mathbb{P}_k^{\lambda}\Big[X_k^{\lambda}\in B_\infty(\gamma,\delta)\Big].\\
\end{split}$$ Since $\gamma:[0,T]\to\mathbb{R}$ is an absolutely continuous function, we can write $$\begin{split}
&\lambda(T)\,\gamma(T)-\lambda(0)\,\gamma(0)\,-\int_{0}^{T} \![\lambda'(s)\,\gamma(s)+H(\lambda(s))]\,ds\\
&=
\int_{0}^{T} \![\lambda(s)\,\gamma'(s)+H(\lambda(s))]\,ds.
\end{split}$$ Since $\lambda(s)=\lambda_\gamma(s)=\log\Big({\genfrac{}{}{}{1}{1}{2}}\Big(\gamma'(s)+\sqrt{(\gamma'(s))^2+4}\Big)\Big)$, by Lemma \[Legendre\], we obtain $$\begin{split}
&\int_{0}^{T} \![\lambda(s)\,\gamma'(s)+H(\lambda(s))]\,ds=\int_0^T\sup_\lambda[ \lambda\, \gamma'(s)-H(\lambda)]\,ds=\int_0^T L(\gamma'(s))\,ds,
\end{split}$$ and, by the definition of $I_T$, the last expression is equal to $I_{T}(\gamma)$. Thus, $$\label{eqq00}
\begin{split}
&\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\geq -I_{T}(\,\gamma)+\frac{1}{k}\log\frac{3}{4} -C(\lambda)\delta.
\end{split}$$ The last inequality follows from the computation above together with Lemma \[mart2\] and Lemma \[bola\] below.
\[mart2\] With respect to the measure defined in \eqref{measure}, there exists a constant $C>0$ such that $$-\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[|X_k^{\lambda}(T)-\gamma(T)|+|X_k^{\lambda}(0)-\gamma(0)|+\!\int_{0}^{T} \!\!|X_k(s)-\gamma(s)|\,ds \Big]\geq -C\delta.$$
\[bola\] There is a $k_0=k_0(\gamma,\delta)$ such that $\mathbb{P}_k^{\lambda}[X_k^{\lambda}\in B_\infty(\gamma,\delta)]>\frac{3}{4}$, for all $k\geq k_0$.
The proofs of Lemma \[mart2\] and Lemma \[bola\] are given at the end of this subsection.\
Continuing with the analysis of \eqref{eqq00}, we mention that, since, for all $\gamma\in\mathcal O$, there exists $\delta=\delta(\gamma)$ such that $B_\infty(\gamma,\delta)\subset\mathcal O$, then for all $\varepsilon<\delta$ we have $$\begin{split}
&\varliminf_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\geq -I_{T}(\gamma) -C(\lambda)\varepsilon.
\end{split}$$ Letting $\varepsilon\downarrow 0$, we obtain \eqref{**} for all $\gamma\in\mathcal O$. Therefore, $$\begin{split}
&\varliminf_{k\to\infty}\frac{1}{k}\log\mathbb{P}_k\Big[X_k\in \mathcal O\Big]\geq -\inf_{\gamma\in \mathcal O}I_{T}(\gamma) .
\end{split}$$
We now present the proofs of Lemmas \[mart2\] and \[bola\].
Recalling the definition of the probability measure $\mathbb{P}_{k,\delta}^{\lambda,\gamma}$, we can write $$\begin{split}
&-\mathbb{E}_{k,\delta}^{\lambda,\gamma}
\Big[|X_k^{\lambda}(T)-\gamma(T)|+|X_k^{\lambda}(0)-\gamma(0)|+\!\int_{0}^{T} \!\!|X_k(s)-\gamma(s)|\,ds \Big]\\&
=-\frac{\mathbb{E}_k^{\lambda}
\Big[\mathbf{1}_{B_\infty(\gamma,\delta)}(X_k^{\lambda})\Big(|X_k^{\lambda}(T)-\gamma(T)|+|X_k^{\lambda}(0)-\gamma(0)|+\!\int_{0}^{T} \!\!|X_k(s)-\gamma(s)|\,ds \Big)\Big]}{\mathbb{P}_k^{\lambda}[X_k^{\lambda}\in B_\infty(\gamma,\delta)]}\\
&\geq-(2+T)\,\delta\,\frac{\mathbb{P}_k^{\lambda}
\Big[X_k^{\lambda}\in B_\infty(\gamma,\delta)\Big]}{\mathbb{P}_k^{\lambda}[X_k^{\lambda}\in B_\infty(\gamma,\delta)]}=\,-\,(2+T)\,\delta.\\
\end{split}$$
Consider the martingale $$\begin{split}
\mathcal{M}^k_t&=X_k^{\lambda}(t)-X_k^{\lambda}(0)-\int_0^t\!\! k\mathcal{L}_k^{\lambda} X_k^{\lambda}(s)\, ds\\
&=X_k^{\lambda}(t)-{\genfrac{}{}{}{1}{\lfloor kx_0\rfloor}{k}}-\int_0^t\!\!\!\big(e^{\lambda(s)}-e^{-\lambda(s)}\big)\,ds,\\
\end{split}$$ recall that $\mathbb{P}_k$ has initial measure $\delta_{x_k(x_0)}$, where $x_k(x_0)=\frac{\lfloor kx_0\rfloor}{k}$. Notice that, by the choice of $ \lambda(s)$ as $\log\Big({\genfrac{}{}{}{1}{1}{2}}\Big(\gamma'(s)+\sqrt{(\gamma'(s))^2+4}\Big)\Big)$ and the hypothesis on $\gamma$, we have that $$\begin{split}
&\int_0^t\!\!\!\big(e^{\lambda(s)}-e^{-\lambda(s)}\big)\,ds=\int_0^t
\gamma'(s)\,ds=\gamma(t)-\gamma(0)=\gamma(t)-x_0.\\
\end{split}$$ Then, $X_k^{\lambda}(t)-\gamma(t)=\mathcal{M}^k_t+r_k$, where $r_k=\frac{\lfloor kx_0\rfloor}{k}-x_0$. Using Doob’s martingale inequality, $$\label{doob}
\begin{split}
\mathbb{P}_k^{\lambda}\Bigg[\sup_{0\leq t\leq T}|X_k^{\lambda}(t)-\gamma(t)|>\delta \Bigg]&\leq
\mathbb{P}_k^{\lambda}\Bigg[\sup_{0\leq t\leq T}|\mathcal{M}^k_t|>\delta/2\Bigg]+\mathbb{P}_k^{\lambda}\Bigg[|r_k|>\delta/2 \Bigg]
\\&\leq\frac{4}{\delta^2}\, \mathbb{E}_k^{\lambda}\Big[\big(\mathcal{M}^k_T\big)^2\Big]+\frac{1}{8},
\end{split}$$ for $k$ large enough. Using the fact that $$\begin{split}\mathbb{E}_k^{\lambda}\Big[\big(\mathcal{M}^k_T\big)^2\Big]
=&\mathbb{E}_k^{\lambda}\Big[\int_0^T[\,k\mathcal{L}_k^{\lambda} (X_k^{\lambda}(s))^2-2X_k^{\lambda}(s)k\mathcal{L}_k^{\lambda} (X_k^{\lambda}(s))\,]\, ds\Big],
\end{split}$$ and making some more calculations, we get that the expectation above is bounded from above by $$\begin{split}
&\mathbb{E}_k^{\lambda}\Bigg[k\int_0^Te^{\lambda(s)}\big((X_k^{\lambda}(s)+{\genfrac{}{}{}{1}{1}{k}})-X_k^{\lambda}(s))\big)^2\,ds\Bigg]\\
&+\mathbb{E}_k^{\lambda}\Bigg[k\int_0^T
e^{-\lambda(s)}\big( (X_k^{\lambda}(s)-{\genfrac{}{}{}{1}{1}{k}})-(X_k^{\lambda}(s))\big)^2\, ds\Bigg]\\
&=\int_0^T\frac{e^{\lambda(s)}+e^{-\lambda(s)}}{k}\,ds\leq C(\lambda,T)\frac{1}{k}.
\end{split}$$ Then there is $k_0$ such that $\mathbb{P}_k^{\lambda}[\sup_{0\leq t\leq T}|X_k^{\lambda}(t)-\gamma(t)|>\delta ]<1/4$, for all $k>k_0$.
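The estimate above can be illustrated numerically. The sketch below simulates the tilted walk $X_k^{\lambda}$ for a constant $\lambda$ (jump rates $k e^{\lambda}$ up and $k e^{-\lambda}$ down, which is consistent with the drift and quadratic variation computations used in this proof), records it on the real line, and checks that it stays uniformly close to the straight path $\gamma(t)=x_0+(e^{\lambda}-e^{-\lambda})t$; the specific values of $k$, $\lambda$ and $T$ are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tilted_walk(k, lam, T, x0=0.0):
    """Tilted walk with constant lambda: up-rate k*e^{lam}, down-rate k*e^{-lam}.
    The trajectory is recorded unwrapped (on the real line) for easy comparison with gamma."""
    up, down = k * np.exp(lam), k * np.exp(-lam)
    t, x = 0.0, np.floor(k * x0) / k
    times, values = [0.0], [x]
    while True:
        t += rng.exponential(1.0 / (up + down))
        if t >= T:
            break
        x += (1.0 / k) if rng.random() < up / (up + down) else (-1.0 / k)
        times.append(t); values.append(x)
    return np.array(times), np.array(values)

k, lam, T = 2000, 0.5, 1.0
v = np.exp(lam) - np.exp(-lam)                 # drift of the tilted walk
ts, xs = simulate_tilted_walk(k, lam, T)
print("sup |X_k^lambda(t) - gamma(t)| =", np.max(np.abs(xs - v * ts)))  # small for large k
```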
*This ends the first part of the paper, in which we investigated the deviation function on the Skorohod space, as $k\to \infty$, for the trajectories of the unperturbed system.*
Disturbing the system by a potential $V$. {#sec3}
=========================================
Now, we introduce a fixed $C^2$ function $V: \mathbb{S}^1 \to \mathbb{R}.$ We want to analyse large deviation properties associated to the system disturbed by the potential $V$. Several of the properties we consider only require $V$ to be Lipschitz, but we need some more regularity for Aubry-Mather theory. Given $V: \mathbb{S}^1 \to \mathbb{R}$ we denote by $V_k$ the restriction of $V$ to $\Gamma_k$. It is known that if $kL_k$ is a $k\times k$ matrix with zero row sums, strictly negative diagonal elements and non-negative off-diagonal elements, then, for any $t>0$, the matrix $e^{t\,kL_k}$ is stochastic. The infinitesimal generator $kL_k$ generates a continuous time Markov chain with values on $\Gamma_k=\{0,1/k, 2/k,...,\frac{k-1}{k}\}\subset \mathbb S^1$. We are going to disturb this stochastic semigroup by a potential $k\,V_k:\Gamma_k\to \mathbb{R}$ and we will derive another continuous time Markov chain (see [@BEL] and [@LNT]) with values on $\Gamma_k$. This will be described below. We will identify the function $k\,V_k$ with the $k\times k$ diagonal matrix, also denoted by $k\,V_k$, with diagonal elements $k\,V_k(j/k)$, $j=0,1,2,...,k-1$.
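As a concrete sketch of this setup, the snippet below builds the $k\times k$ matrix $k\,L_k$ described above (zero row sums, rate $k$ to each of the two nearest neighbours on the discretized circle, as in the rates used later in the paper) together with the diagonal matrix $k\,V_k$, and checks that $e^{t\,kL_k}$ is a stochastic matrix. The potential and the value of $t$ are arbitrary choices, and `scipy` is used only for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def generator_kLk(k):
    # k*L_k = k*(A - 2I), where A is the adjacency matrix of the cycle on Gamma_k
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] = 1.0
        A[i, (i - 1) % k] = 1.0
    return k * (A - 2.0 * np.eye(k))

def diagonal_kVk(k, V):
    # k*V_k, with V_k the restriction of V to Gamma_k = {0, 1/k, ..., (k-1)/k}
    x = np.arange(k) / k
    return k * np.diag(V(x))

k = 8
kLk = generator_kLk(k)
kVk = diagonal_kVk(k, lambda x: np.cos(2 * np.pi * x))   # a hypothetical potential
P = expm(1.3 * kLk)                                      # e^{t k L_k} with t = 1.3
print(np.allclose(P.sum(axis=1), 1.0))                   # rows sum to one: stochastic
print(np.all(P >= -1e-12))                               # entries are non-negative
```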
The continuous time Perron’s Theorem (see [@S], page 111) claims the following: given the matrix $ k\,L_k$ as above and the diagonal matrix $k\,V_k$, there exist
- a unique positive function $u_{V_k}=u_k : \{0,1/k,2/k,..,(k-1)/k\}\to \mathbb{R}$,
- a unique probability vector $\mu_{V_k}=\mu_k$ over the set $ \{0,1/k,2/k,..,(k-1)/k\}$, such that $$\sum_{j=1}^k
u_k^j \,\mu_k^j = 1 ,$$ where $u_k=(u_k^1,...,u_k^k)$, $\mu_k=(\mu_k^1,...,\mu_k^k)$
- a real value $\lambda (V_k)=\lambda_k$,
such that
- for any $v \in \mathbb{R}^k$, if we denote $ P^t_{k,V} =e^{t\,(k\,L_k + k\,V_k)}$, then $$\lim_{t\to \infty} e^{-t \lambda (k)} P^t_{k,V} (v) = \,\Big(\sum_{j=1}^k
    v_j \,\mu_k^j\Big)\, u_k\,,$$
- for any positive $s$ $$e^{-s \lambda (k)}P^s_{k,V}(u_k)= u_k.$$
From the last item it follows that $$(k\,L_k + k\,V_k) (u_k) = \lambda (k)\, u_k.$$
The semigroup $e^{t\, (k\, L_k + k\,V_k - \lambda(k))}$ defines a continuous time Markov chain with values on $ \Gamma_k$, for which the vector $\pi_{k,V}=(\pi_{k,V}^1,...,\pi_{k,V}^k)$, with entries $\pi_{k,V}^j=\,u_k^j\, \mu_k^j$, $j=1,2,..,k$, is stationary. Notice that $\pi_k=\pi_{k,V}$ when $V=0$. Recall that $V_k$ was obtained by discretization of the initial $V:\mathbb S^1\to \mathbb{R}.$
When $k=4$ and $V_4$ is defined by the values $V_4^j$, $j=1,2,3,4$, we first have to find the left eigenvector $u_{V_4}$ associated with the eigenvalue $\lambda(V_4)$, that is, to solve the equation
$$u_{V_4}\, (4L_4 +4V_4)= u_{V_4} 4
\left(
\begin{array}{cccc}
-2 + V_4^1 & 1 & 0 & 1 \\
1 & -2 + V_4^2 & 1 & 0 \\
0 & 1 & -2+ V_4^3 & 1 \\
1 & 0 & 1 & -2 + V_4^4\\
\end{array}\right)=
\lambda(V_4)\, u_{V_4}.$$
Suppose $\mu_{V_4}$ is the normalized right eigenvector. In this way, by the theorem above, we obtain a stationary vector $\pi_{4,V}$ for the stationary Gibbs probability associated to the potential $V_4$. We point out that by numerical methods one can get good approximations of the solution of the above problem.
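A minimal numerical sketch of this eigenvalue problem is given below (written for general $k$, so the $k=4$ case above is recovered with `k = 4`). Since $k\,L_k + k\,V_k$ is symmetric here, left and right eigenvectors coincide up to normalization; the potential $V(x)=\cos 2\pi x$ is a hypothetical choice, and the normalizations $\sum_j \mu_k^j=1$ and $\sum_j u_k^j\mu_k^j=1$ are enforced explicitly.

```python
import numpy as np

def perron_data(k, V):
    """Return (lambda_k, u_k, mu_k) for k*L_k + k*V_k on the discretized circle Gamma_k."""
    x = np.arange(k) / k
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] = A[i, (i - 1) % k] = 1.0
    M = k * (A - 2.0 * np.eye(k)) + k * np.diag(V(x))
    w, vecs = np.linalg.eigh(M)          # M is symmetric, so eigh applies
    lam_k = w[-1]                        # Perron (largest) eigenvalue
    u = np.abs(vecs[:, -1])              # Perron eigenvector, taken positive
    mu = u / u.sum()                     # probability vector proportional to u (symmetric case)
    u = u / np.dot(u, mu)                # enforce sum_j u^j mu^j = 1
    return lam_k, u, mu

V = lambda x: np.cos(2 * np.pi * x)      # hypothetical potential on S^1
lam4, u4, mu4 = perron_data(4, V)
pi4 = u4 * mu4                           # stationary vector pi_{4,V}
print(lam4, pi4, pi4.sum())              # pi_{4,V} sums to one
```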
From the end of Section 5 in [@S], we have that
$$\lambda_k = \sup_{ \psi \in \mathbb{ L}^2, \, ||\psi||_2=1}\Big\{
\int_{\Gamma_k} \psi (x)\, [(k L_k + k V_k) ( \psi )\,
] (x) \, d \pi_{k} (x)\Big\},$$ where $\psi:\Gamma_k \to
\mathbb{R}$, $$||\psi||_2= \sqrt{\frac{1}{k}\sum_{j=0}^{k-1}
\psi({\genfrac{}{}{}{1}{j}{k}})^2},$$ and $\pi_{k}$ is uniform in $\Gamma_k$. Notice that for any $\psi$, we have $$\int_{\Gamma_k} \psi (x)\, (k L_k ) ( \psi )
(x) \, d \pi_k (x)=-\sum_{j=0}^{k-1}(\psi({\genfrac{}{}{}{1}{j+1}{k}})-\psi({\genfrac{}{}{}{1}{j}{k}}))^2.$$ Moreover, $$\int_{\Gamma_k} \psi (x)\, [(k L_k + k\, V_k) ( \psi )\,
] (x) \, d \pi_k (x)=\sum_{j=0}^{k-1}[-(\psi({\genfrac{}{}{}{1}{j+1}{k}})-\psi({\genfrac{}{}{}{1}{j}{k}}))^2+\psi({\genfrac{}{}{}{1}{j}{k}})^2V_k({\genfrac{}{}{}{1}{j}{k}})].$$ In this way $${\genfrac{}{}{}{1}{1}{k}}\lambda_k = \sup_{ \psi \in \mathbb{ L}^2, \, ||\psi||_2=1}\Big\{
\frac{1}{k}\int_{\Gamma_k} \psi (x)\, [(k L_k + k V_k) ( \psi )\,
] (x) \, d \pi_k (x)\Big\}$$ $$= \sup_{ \psi \in \mathbb{ L}^2, \, ||\psi||_2=1} \Big\{-\frac{1}{k} \sum_{j=0}^{k-1}(\psi({\genfrac{}{}{}{1}{j+1}{k}})-\psi({\genfrac{}{}{}{1}{j}{k}}))^2+\frac{1}{k} \sum_{j=0}^{k-1}\psi({\genfrac{}{}{}{1}{j}{k}})^2V_k({\genfrac{}{}{}{1}{j}{k}})\Big\}.$$ Observe that for any $\psi\in \mathbb{ L}^2$, with $||\psi||_2=1$, the expression inside the braces is bounded from above by $$\frac{1}{k} \sum_{j=0}^{k-1}\psi({\genfrac{}{}{}{1}{j}{k}})^2V_k({\genfrac{}{}{}{1}{j}{k}})\leq\sup_{x\in\mathbb{S}^1}V(x).$$ Notice that for each $k$ fixed, the vector $\psi^k=\psi$ that attains the maximal value $\lambda_k$ is such that $\psi_k^i= \sqrt{u_{k,V}^i}$, with $i\in
\{0,...,(k-1)\}$, $$\sup_{ \psi \in \mathbb{ L}^2, \, ||\psi||_2=1}\Big\{
\frac{1}{k}\int_{\Gamma_k} \psi (x)\, [(k L_k + k V_k) ( \psi )\,
] (x) \, d \pi_k (x)\Big\}$$$$={\genfrac{}{}{}{1}{1}{k}}\int_{\Gamma_k} \psi_k (x)\, [(k L_k + k\, V_k) ( \psi_k )\,
] (x) \, d \pi_k (x) ={\genfrac{}{}{}{1}{1}{k}}\lambda_k.$$ When $k$ is large, the above $\psi_k$ tends to become more and more sharply peaked close to the maximum of $V_k$. Then, we have that $$\sup_{ \psi \in \mathbb{ L}^2, \, ||\psi||_2=1}\Big\{
{\genfrac{}{}{}{1}{1}{k}}\int_{\Gamma_k} \psi (x)\, [(k L_k + k V_k) ( \psi )\,
] (x) \, d \pi_k (x)\Big\}$$ converges to $$\sup_{ \psi \in \mathbb{ L}^2(dx), \, ||\psi||_2=1}\Big\{ \int_{\mathbb{S}^1}\, \psi
(x)\, V(x) \, \psi (x) \, d x\, \Big\}=\sup \{V(x)\,|\, x \in
\mathbb{S}^1\, \} ,$$ when $k$ increases to $\infty$.
Summarizing, we get the proposition below:
$$\lim_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\, \lambda_k =
\sup_{ \psi \in \mathbb{ L}^2(d x), \, ||\psi||_2=1}\Big\{ \int_{\mathbb{S}^1}\, \psi
(x)\, V(x) \, \psi (x) \, d x\, \Big\}$$ $$=\sup \{V(x)\,|\, x \in
\mathbb{S}^1\, \} = - \inf_{\mu} \Big\{\int\!\! L(x,v)\, d \mu (x,v)\Big\},$$ where the last infimum is taken over all measures $\mu$ such that $\mu$ is invariant probability for the Euler-Lagrange flow of $ L( x,v)$.
The last equality follows from Aubry-Mather theory (see [@CI] and [@Fath]). Notice that this Lagrangian is convex and superlinear.
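The convergence $\frac{1}{k}\lambda_k \to \sup V = c(L)$ stated in the proposition can also be observed numerically. The sketch below repeats the eigenvalue computation of the previous sketch for growing $k$, with the same hypothetical potential $V(x)=\cos 2\pi x$ (so $\sup V=1$); the approach to the limit is rather slow.

```python
import numpy as np

def lambda_k(k, V):
    # Largest eigenvalue of the symmetric matrix k*L_k + k*V_k on Gamma_k
    x = np.arange(k) / k
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] = A[i, (i - 1) % k] = 1.0
    M = k * (A - 2.0 * np.eye(k)) + k * np.diag(V(x))
    return np.linalg.eigvalsh(M)[-1]

V = lambda x: np.cos(2 * np.pi * x)      # hypothetical potential, sup V = 1 at x = 0
for k in [10, 50, 200, 1000]:
    print(k, lambda_k(k, V) / k)          # (1/k)*lambda_k slowly approaches sup V = 1
```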
Lax-Oleinik semigroup {#subsec3.1}
---------------------
By the Feynman-Kac formula, see [@KL], the semigroup associated to the infinitesimal generator $k\,\mathcal L_k+kV_k$ has the following expression $$P^t_{k,V}(f)(x) =\mathbb{E}_k\big[e^{\int_0^t kV_k(X_k(s))\,ds}f(X_k(t))\big],$$ for every bounded measurable function $f:\mathbb S^1\to \mathbb R$ and all $t\geq 0$.
Now, consider $$P^{T}_{k,V}(e^{ku})(x) =\mathbb{E}_k\big[e^{k\,[\int_0^{T} \!V_k(X_k(s))\,ds\,+\,u(X_k(T))\,]}\big],$$ for a fixed Lipschitz function $u:\mathbb S^1\to \mathbb R$. We want to use the results of Section \[sec2\] together with Varadhan’s Lemma, which reads as follows.
Let $\mathcal E$ be a regular topological space; let $(Z_\varepsilon)_{\varepsilon>0}$ be a family of random variables taking values in $\mathcal E$; let $\mu_\varepsilon$ be the law (probability measure) of $Z_\varepsilon$. Suppose that $\{\mu_\varepsilon\}_{\varepsilon>0}$ satisfies the large deviation principle with good rate function $I : \mathcal E\to [0, +\infty]$. Let $\phi : \mathcal E \to \mathbb R$ be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition $$\lim_{M \to \infty} \varlimsup_{\varepsilon \to 0} \varepsilon \log \mathbb{E} \big[ \exp \big( \phi(Z_{\varepsilon}) / \varepsilon \big) \mathbf{1} \big( \phi(Z_{\varepsilon}) \geq M \big) \big] = - \infty,$$ where $\mathbf 1(A)$ denotes the indicator function of the event $A$; or, for some $\gamma > 1$, the moment condition $$\varlimsup_{\varepsilon \to 0} \varepsilon \log \mathbb{E} \big[ \exp \big( \gamma \phi(Z_{\varepsilon}) / \varepsilon \big) \big] < + \infty.$$ Then, $$\lim_{\varepsilon \to 0} \varepsilon \log \mathbb{E} \big[ \exp \big( \phi(Z_{\varepsilon}) /\varepsilon \big) \big] = \sup_{x \in \mathcal E} \big( \phi(x) - I(x) \big).$$
We will consider here the above $\varepsilon$ as $\frac{1}{k}.$ By Theorem \[teo1\] and Varadhan’s Lemma, for each Lipschitz function $u:\mathbb S^1\to \mathbb R$, we have $$\begin{split}
\lim_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \,P^{T}_{k,V}(e^{ku})(x) &=
\lim_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log\mathbb{E}_k\big[e^{k\,[\int_0^{T} \!V_k(X_k(s))\,ds\,+\,u(X_k(T))\,]}\big]\\
&=\sup_{\gamma\in D[0,T]}
\Big\{\int_0^{T} V(\gamma(s))\,ds+u(\gamma(T))-I_T(\gamma)\Big\}
\end{split}$$ When $\gamma \notin AC[0,T]$, $I_T(\gamma)=\infty$ and if $\gamma \in AC[0,T]$, $I_T(\gamma)=\int_0^TL(\gamma'(s))\,ds$. Thus, $$\lim_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \,P^{T}_{k,V}(e^{ku})(x) =\sup_{\gamma\in AC[0,T]}
\Big\{\,u(\gamma(T))\,-\,\int_0^T\!\!\big[L(\gamma'(s))-V(\gamma(s))\big]\,ds\Big\}.$$
For a fixed $T>0$, define the operator $\mathcal{T}_T $ acting on Lipschitz functions $u:\mathbb S^1\to \mathbb R$ by the expression $\mathcal{T}_T(u)(x)=\lim_{k\to\infty}{\genfrac{}{}{}{1}{1}{k}}\log \,P^{T}_{k,V}(e^{ku})(x)$; we have just shown that $$\mathcal{T}_T(u)(x)\,\,=\sup_{\gamma\in AC[0,T]}
\Big\{\,u(\gamma(T))\,-\,\int_0^T\!\!\big[L(\gamma'(s))-V(\gamma(s))\big]\,ds\Big\}.$$ This family of operators, parametrized by $T>0$ and acting on functions $u:\mathbb S^1 \to \mathbb{R}$, is called the Lax-Oleinik semigroup.
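For illustration, this operator can be approximated by a crude dynamic-programming discretization: one small time step of size $h$ replaces the path by a single straight segment, so $(\mathcal{T}_h u)(x)\approx\max_y\{u(y)-h[L((y-x)/h)-V(x)]\}$, and composing many steps approximates $\mathcal{T}_T$. This is only a rough first-order sketch, not the scheme analysed in the paper; the grid size, time step, number of iterations and the hypothetical potential (for which $c(L)=\max V=1$) are arbitrary choices.

```python
import numpy as np

def lagrangian(v):
    # L(v) = v*log((v + sqrt(v^2+4))/2) - sqrt(v^2+4) + 2
    return v * np.log((v + np.sqrt(v**2 + 4.0)) / 2.0) - np.sqrt(v**2 + 4.0) + 2.0

def lax_oleinik_step(u, V, h, x):
    # One step of size h: (T_h u)(x) ~ max_y { u(y) - h*[L((y-x)/h) - V(x)] }
    d = x[None, :] - x[:, None]
    d = (d + 0.5) % 1.0 - 0.5                       # signed distance on the circle
    val = u[None, :] - h * lagrangian(d / h) + h * V(x)[:, None]
    return val.max(axis=1)

n, h = 200, 0.01
x = np.arange(n) / n
V = lambda y: np.cos(2 * np.pi * y)                 # hypothetical potential, c(L) = 1
u = np.zeros(n)
for _ in range(3000):
    u = lax_oleinik_step(u, V, h, x) - h * 1.0      # subtract c(L)*h so the iteration can settle
    u -= u.max()                                    # fix the free additive constant
print("approximate fixed point u_+ at the first grid points:", u[:5])
```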
The Aubry-Mather theory
-----------------------
We will now use Aubry-Mather theory (see [@CI] and [@Fath]) to obtain a fixed point $u$ of this operator. This will be needed later, in the next section. We will elaborate on that. Consider Mather measures, see [@Fath] and [@CI], on the circle $\mathbb{S}^1$ for the Lagrangian $$\label{L}
L^V(x,v)= - V(x) + v \log( (v + \sqrt{v^2 + 4} )/2 ) - \sqrt{v^2
+ 4} + 2,$$ $x\in \mathbb S^1, v \in T_x \mathbb S^1$, where $V: \mathbb S^1\to \mathbb{R}$ is a $C^2$ function. Such a Mather measure will be a Dirac delta at any of the points of $\mathbb S^1$ where $V$ attains its maximum (or a convex combination of them). In order to avoid technical problems we will assume that the point $x_0$ where the maximum is attained is unique. This is generic among $C^2$ potentials $V$.
This Lagrangian appeared in a natural way when we analysed, as $k\to \infty$, the asymptotic deviations of the discrete state space continuous time Markov chains $\{X_k(t),t\geq 0\}$, indexed by $k$, described above in Section \[sec2\]. We denote by $H(x,p)$ the associated Hamiltonian obtained via Legendre transform.
Suppose $u_+$ is a fixed point for the positive Lax-Oleinik semigroup and $u_{-}$ is a fixed point for the negative Lax-Oleinik semigroup (see the next section for precise definitions). We will show that the function $I^V= u_+ + u_{-}$ defined on $ \mathbb{S}^1$ is the deviation function for $\pi_{k,V} $, as $k\to\infty.$
*Fixed functions $u$ for the Lax-Oleinik operator are weak KAM solutions of the Hamilton-Jacobi equation for the corresponding Hamiltonian $H$ (see Sections 4 and 7 in [@Fat]).*
The so-called critical value in Aubry-Mather theory is $$c(L)=- \inf_{\mu} \, \int L^V(x,v) d \mu(x,v)=\sup\{V(x)\,|\,x\in \mathbb{S}^1\},$$ where the infimum above is taken over all probability measures $\mu$ which are invariant for the Euler-Lagrange flow of $L^V$. Notice that $$\label{*}
\lim_{k\to\infty}\frac{1}{k}\, \lambda_k =c(L).$$ This will play an important role in what follows. A Mather measure is any $\mu$ which attains the above infimum. This minimizing probability is defined on the tangent bundle of $\mathbb{S}^1$ but, as its support is a graph (see [@CI]), it can be seen as a probability on $\mathbb{S}^1$. This will be our point of view.
In the case that the potential $V$ has a unique point $x_0$ of maximum on $ \mathbb{S}^1$, we have that $c(L)=V(x_0)$. The Mather measure in this case is a Delta Dirac on the point $x_0$.
Suppose there exist two points $x_1$ and $x_2$ in $ \mathbb{S}^1$ where the supremum of the potential $V$ is attained. For the above defined Lagrangian $L$ the static points are $(x_1,0)$ and $(x_2,0)$ (see [@CI] and [@Fat] for definitions and general references on Mather Theory). This case requires a more complex analysis, because some hypothesis is needed in order to know which of the points $x_1$ or $x_2$ the larger part of the mass of $\pi_{k,V}$ will select. We will not analyse such a problem here. In this case the critical value is $c(L)=-\, L^V(x_1,0)= V(x_1)= -\, L^V(x_2,0)=V(x_2).$
In the appendix of [@A1] and also in [@A2], N. Anantharaman shows, for $t$ fixed, an interesting result relating the time re-scaling of the Brownian motion $B(\varepsilon t)$, $\varepsilon\to 0,$ and Large Deviations. The large deviation principle is obtained via Aubry-Mather theory. The convex part of the Mechanical Lagrangian in this case is $\frac{1}{2}\, |v|^2$. When there are two points $x_1$ and $x_2$ of maximum for $V$ the same problem as we mentioned before happens in this other setting: when $\varepsilon\to 0$, which is the selected Mather measure? In this setting partial answers to this problem are obtained in [@AIP].
In the present paper we want to obtain similar results for $t$ fixed, but for the re-scaled semigroup $P_{k}(ks)=e^{skL_k}$, $s \geq 0 $, obtained by speeding up by $k$ the time of the continuous time symmetric random walk (with the compactness assumption) as described above.
In other words, we are considering that the unit circle (the interval $[0,1)$) is being approximated by a discretization with $k$ equally spaced points, namely, $\Gamma_k=
\{0,1/k,2/k,...,(k-1)/k\}$.
Let $ \mathbb{ X}_{t,x}$ be the set of absolutely continuous paths $\gamma:[0,t)\to [0,1]$, such that $\gamma(0)=x$.
Consider the positive Lax-Oleinik operator acting on continuous functions $u$ on the circle: for all $t>0$ $$(\mathcal{T}^+_t (u))\, (x)=$$ $$\sup_{\gamma \in \mathbb{ X}_{t,x}} \!\Big\{ u(\gamma(t)) - \int_0^t \!\!\Big[
\dot{\gamma}(s) \log \Big(\frac{ \dot{\gamma}(s) +
\sqrt{\dot{\gamma}^2 (s) + 4} }{2} \Big) - \sqrt{\dot{\gamma}^2(s) +
4} + 2 - V (\gamma(s))\Big] \,d s \Big\}.$$ It is well known (see [@CI] and [@Fath]) that there exists a Lipschitz function $u_+$ and a constant $c=c(L)$ such that for all $t>0$ $$\mathcal{T}^+_t (u_+) = u_+ + c \, t.$$ We say that $u_+$ is a $(+)$-solution of the Lax-Oleinik equation. This function $u_+$ is not always unique. If we add a constant to $u_+$ we get another fixed point. To say that the fixed point $u_+$ is unique means that it is unique up to an additive constant. If there exists just one Mather probability then $u_+$ is unique (in this sense). In the case when there exist two points $x_1$ and $x_2$ in $ \mathbb{S}^1$ where the supremum of the potential $V$ is attained the fixed point $u_+$ may not be unique.
Now we define the negative Lax-Oleinik operator: for all $t>0$ and for every continuous function $u$ on the circle, we set $$(\mathcal{T}^-_t (u))\, (x)=$$ $$\sup_{\gamma \in \mathbb{ X}_{t,x}} \!\Big\{ u(\gamma(0)) + \int_0^t \!\!\Big[
\dot{\gamma}(s) \log \Big(\frac{ \dot{\gamma}(s) +
\sqrt{\dot{\gamma}^2 (s) + 4} }{2} \Big) - \sqrt{\dot{\gamma}^2(s) +
4} + 2 - V (\gamma(s))\Big] \,d s \Big\}.$$ Note, in this new definition, the difference between $+$ and $-$. The space of curves we consider now is also different. It is also known that there exists a Lipschitz function $u_-$ such that, for the same constant $c$ as above, we have for all $t>0$ $$\mathcal{T}^-_t (u_-) = u_- - c \, t .$$ We say that $u_-$ is a $(-)$-solution of the Lax-Oleinik equation.
The $u_{+}$ solution will help to estimate the asymptotics of the left eigenvector and the $u_{-}$ solution will help to estimate the asymptotics of the right eigenvector of $k\,L_k+ k V_k$.
We point out that for $t$ fixed the above operator is a weak contraction. Via the discounted method it is possible to approximate the scheme used to obtain $u$ by a procedure which takes advantage of another transformation, one which is a contraction in a complete metric space (see [@G1]). This is more practical for numerical applications of the theory. Another approximation scheme is given by the entropy penalized method (see [@GV] and [@GLM]).
For $k\in \mathbb{N}$ fixed the operator $ k \, L_k$ is symmetric when acting on $\mathcal{ L}^2 $ functions defined on the set $\Gamma_k\subset \mathbb S^1$. The stationary probability of the associated Markov Chain is the uniform measure $\pi_k$ (each point has mass $1/k)$. When $k$ goes to infinity $\pi_k$ converges to the Lebesgue measure on $ \mathbb{S}^1$. When the system is disturbed by $k\, V_k$ we get new stationary probabilities $\pi_{k,V}$ with support on $\Gamma_k$ and we want to use results of Aubry-Mather theory to estimate the large deviation properties of this family of probabilities on $\mathbb S^1$, when $k\to \infty.$
As we saw before, any weak limit of a subsequence of the probabilities $\pi_{k,V}$ on $ \mathbb{S}^1=[0,1)$ is supported on the points which attain the maximal value of $V:[0,1)\to \mathbb{R}$. Notice that the supremum $$\sup_{ \psi \in \mathbb{ L}^2(d\, x), \, ||\psi||_2=1}\{ \int\, V(x)
\, (\psi (x))^2 \, d \,x\, \}=\sup \{V(x)\,|\, x \in \mathbb{S}^1\, \},$$ is not attained on $\mathbb{ L}^2(d\, x)$. Considering a more general problem on the set $ \mathbb{ M} ( \mathbb{S}^1)$, the set of probabilities on $ \mathbb{S}^1$, we have $$\sup_{ \nu \in \mathbb{ M} ( \mathbb{S}^1)}\{ \int\, V(x) \, d \nu(x)\,
\}=\sup \{V(x)\,|\, x \in \mathbb{S}^1\, \},$$ and the supremum is attained, for example, at a Dirac delta at a point $x_0$ where the supremum of $V$ is attained. Any measure $\nu$ which realizes the supremum on $\mathbb{ M} ( \mathbb{S}^1)$ has support in the set of points which attain the maximal value of $V$. In this way the Lagrangian $L$ described before appears in a natural way.
Large deviations for the stationary measures $\pi_{k,V}$.
---------------------------------------------------------
We start this subsection with some definitions. For each $k$ and $x\in\mathbb S^1$ we denote by $x_k(x)$ the closest element of $ \Gamma_k$ to the left of $x$; in fact, $x_k(x)=\frac{\lfloor kx\rfloor}{k}$. Given $k$ and a function $\varphi_k$ defined on $ \Gamma_k$, we consider the extension $g_k$ of $\varphi_k$ to $\mathbb S^1$. This is the piecewise constant function which on the interval $[j/k,(j+1)/k)$ is equal to $\varphi_k(j/k).$ Finally, we call $h_k$ the continuous function obtained from $g_k$ in the following way: $h_k$ is equal to $g_k$ outside the intervals of the form $[\frac{j}{k} - \frac{1}{k^2} , \frac{j}{k}]$, $j=1,2,...,k$, and interpolates $g_k$ linearly on these small intervals.
When we apply the above to $\varphi_k=u_k$ the resulting $h_k$ is denoted by $z_k=z_k^V$, and when we do the same for $\varphi_k=\mu_k$, the resulting $h_k$ is called $p_{\mu_k}^V$. In order to control the asymptotics in $k$ of $\pi_{k,V}= u_k\, \mu_k$ we have to control the asymptotics of $z_k^V$. We claim that $ (1/k) \, \log z_k $ is an equicontinuous family of functions, where $z_k$ is the continuous extension to $[0,1]$ described above. We now consider limits of convergent subsequences of $z_k=z_k^V$.
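The extension just described is easy to implement; the sketch below evaluates $h_k$ at arbitrary points of $[0,1)$, interpolating linearly on the short windows of length $1/k^2$ just before each grid point and staying piecewise constant elsewhere. The test function $\varphi_k$ used here is an arbitrary placeholder for $u_k$ or $\mu_k$.

```python
import numpy as np

def continuous_extension(phi_k, k, x):
    """Evaluate h_k at points x in [0,1): piecewise constant copy of phi_k on [j/k,(j+1)/k),
    with linear interpolation on the intervals [(j+1)/k - 1/k^2, (j+1)/k)."""
    j = np.floor(k * x).astype(int) % k
    g = phi_k[j]                                   # the piecewise constant extension g_k
    nxt = phi_k[(j + 1) % k]                       # value on the next interval
    s = (x - (j + 1) / k + 1.0 / k**2) * k**2      # in (0,1) only on the interpolation window
    return np.where(s > 0, (1 - s) * g + s * nxt, g)

k = 10
phi = np.cos(2 * np.pi * np.arange(k) / k)          # placeholder for u_k or mu_k
xs = np.linspace(0.0, 1.0, 1001, endpoint=False)
print(continuous_extension(phi, k, xs)[:5])
```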
Suppose that $u$ is a limit point of a convergent subsequence $(1/k_j) \, \log
z_{k_j} $, $j \to \infty$, of $(1/k) \, \log
z_{k} $. Then, $u$ is a $(+)$-solution of the Lax-Oleinik equation.
We assume that $z_{k_j} \sim e^{ u \,k_j}.$ In more precise terms, for any $x$, we have $z_{k}(x_k(x)) \sim e^{ u (x)\,k}.$ Therefore, for $t$ positive and $x$ fixed, from \eqref{*}, we have $$\begin{split}
c(L) \, t \, + \, u(x) = \lim_{j \to \infty} \frac{1}{k_j}
\log ( e^{ \lambda(k_j) \, t}\, z_{k_j} (x) ) .
\end{split}$$ By the definitions at the beginning of this subsection, the expression above becomes $$\lim_ {j \to \infty} \frac{1}{k_j}
\log\,\big[ \,( P^t_{k_j,V} z_{k_j})(x_{k_j}
(x))\,\big] .$$ Using again that $z_{k}(x_k(x)) \sim e^{ u (x)\,k}$, we have $$\lim_ {j \to \infty} \frac{1}{k_j} \log\, \big[\, (P^t_{k_j ,V} e^{ k_j \, u }) (x_{k_j} (x)) \big] =
(\mathcal{T}^+_t (u) ) \, (x).$$ Therefore, $u$ is a $(+)$-solution of the Lax-Oleinik equation above.
We point out that from the classical Aubry-Mather theory, it follows that the fixed point $u$ for the Lax-Oleinik Operator is unique up to an additive constant in the case the point of maximum for $V$ is unique. It follows in this case that any convergent subsequence $(1/k_j) \, \log
z_{k_j}^V\,\, $, $j \to \infty$, will converge to a unique $u_{+}$. We point out that the normalization we assume for $\mu_k$ and $u_k$ (which determine $z_k$) will produce a $u_{+}$ without the ambiguity of an additive constant.
In the general case (more than one point of maximum for the potential $V$) the problem of convergence of $(1/k) \, \log
z_{k}^V $, $k \to \infty$, is complex and is related to what is called selection of subaction. This kind of problem in other settings is analysed in [@AIP] and [@BLL].
One can show in a similar way that:
Suppose that $u^*$ is a limit point of a convergent subsequence $(1/k_j) \, \log
p_{k_j}^V $, $j \to \infty$, of $(1/k) \, \log
p_{k}^V $. Then, $u^*$ is a $(-)$-solution of the Lax-Oleinik equation.
In the case the point of maximum for $V$ is unique one can show that any convergent subsequence $(1/k_j) \, \log
p_{k_j}^V $, $j \to \infty$, will converge to a unique $u^*$.
Now, we will show that $(1/k) \, \log
z_{k}^V\,\, $, $k \in \mathbb{N}$, is an equicontinuous family.
Consider now any points $x_0,x_1\in [0,1)$, a fixed positive $t\in
\mathbb{R}$, then define $\mathbb{ X}_{t,x_0,x_1}= \{\gamma(s)\in
\mathcal{AC} [0,t]\, | \, \gamma(0) = x_0, \gamma(t)=x_1\}$.
For any $x_0,x_1\in [0,1)$ and a fixed positive $t\in \mathbb{R}$ consider the continuous functional $\phi_{t,x_0,x_1,V} : \mathbb{
X}_{t,x_0,x_1} \to {\mathbb R}$, given by $$\phi_{t,x_0,x_1,V}
(\gamma)= \int_0^t \, (V (\gamma(s))- c(L))\, ds=\int_0^t \, V
(\gamma(s))\, ds - c(L) \, t .$$
For a fixed $k$, when we write $\phi_{t,x_k(x_0),x_k(x_1),V} (\gamma)$ we mean $$\phi_{t,x_k(x_0),x_k(x_1),V} (\gamma)=
\int_0^t \, (V (x_k(\gamma(s)))- c(L))\, ds,$$ recall that $x_k(a)=\frac{\lfloor ak\rfloor}{k}$, for $a\in[0,1]$. Denote by $\Phi_t (x_0,x_1)=\inf \{\int_0^t\, L(\gamma(s),\gamma '(s))\, ds
+ c(L)\, t\, | \, \gamma \in\mathbb{
X}_{t,x_0,x_1}\}.$ From section 3-4 in [@CI] it is known that $\Phi_t (x_0,x_1)$ is Lipschitz in $ \mathbb{S}^1\times \mathbb{S}^1$.
Given $x$ and $k$, we denote by $i(x,k)$ the natural number such that $x_k (x) = \frac{i(x,k)}{k}.$ An important piece of information in our reasoning is $$\lim_{k \to \infty} {\genfrac{}{}{}{1}{1}{k}} \log
(e^{t\,(\, k \, \,L_k + k\, \,V_k \,-\,\lambda(k))})_{i(x_0,k)\,
i(x_1,k)}$$ $$=\lim_{k\to\infty} \frac{1}{k} \log
\mathbb{E}_{X_k(0)=\frac{i(x_0,k)}{k},X_k(t)=\frac{i(x_1,k)}{k}}^k
[e^{k\,
\phi_{t,x_k(x_0),x_k(x_1),V}\, (.) } ]$$$$=
\sup_{\gamma \in \mathbb{ X}_{t,x_0,x_1}} \{ \phi_{t,x_0,x_1,V} (\gamma) -
I_t(\gamma)\}.$$ The last equality follows from Varadhan’s Integral Lemma. Using the definition of $\phi_{t,x_0,x_1,V}$ and of $I_t$, we get $$\begin{split}
&\sup_{\gamma \in \mathbb{ X}_{t,x_0,x_1}} \{ \phi_{t,x_0,x_1,V} (\gamma) -
I_t(\gamma)\}\\
&=\sup_{\gamma \in \mathbb{X}_{t,x_0,x_1}} \Big\{\int_0^t V (\gamma(s)) ds - c(L) \, t\\
&\qquad\qquad- \int_0^t \big[ \dot{\gamma}(s) \log \Big(\frac{ \dot{\gamma}(s) +
\sqrt{\dot{\gamma}^2 (s) + 4} }{2} \Big) - \sqrt{\dot{\gamma}^2(s) +
4} + 2 \big]\, d s \Big\} \\
&=\sup_{\gamma \in \mathbb{
X}_{t,x_0,x_1}} \Big\{-\, \int_0^t L^V (\gamma(s), \gamma' (s))\, ds
\,- c(L) \, t \Big\}\\
&=- \inf_{\gamma \in \mathbb{ X}_{t,x_0,x_1}}
\Big\{\, \int_0^t L^V (\gamma(s), \gamma' (s))\, ds \, + \, c(L) \, t
\Big\}= - \Phi_t (x_0,x_1).
\end{split}$$ The convergence, as $k\to\infty$, is uniform in $x_0,x_1$. The definition of $L^V$ is given in \eqref{L}.
The family ${\genfrac{}{}{}{1}{1}{k}} \log
z_k^V$ is equicontinuous in $k\in \mathbb{N}$. Therefore, there exists a subsequence of ${\genfrac{}{}{}{1}{1}{k}} \log
z_k^V$ converging to a certain Lipschitz function $u$. In the case the maximum of $V$ is attained in a unique point, then $u$ is unique up to an additive constant.
Given $x$ and $y$, and a positive fixed $t$ we have $${\genfrac{}{}{}{1}{1}{k}} \log z_k ( x_k (x))-{\genfrac{}{}{}{1}{1}{k}} \log z_k( x_k (y))=$$ $${\genfrac{}{}{}{1}{1}{k}} \log \frac{\sum_{j=0}^{k-1} \,(e^{t\,(\, k \, \,L_k + k V_k)})_{i(x,k)\, j} z_j }{
\sum_{j=0}^{k-1} \,(e^{t\, (\, k\,L_k + k V_k)})_{i(y,k)\,
j}z_j }\leq$$ $${\genfrac{}{}{}{1}{1}{k}} \log \, \Big(\, \sup_{j=\{0,1,2,..k-1\}} \,\Big\{\,\, \frac{
\,(e^{t\,(\, k \, \,L_k + k V_k)})_{i(x,k)\, j}}{ \,(e^{t\,
(\, k\,L_k + k V_k)})_{i(y,k)\, j} }\,\, \Big\}\,\Big ).$$
For each $k$ the above supremum is attained at a certain $j_k$. Consider a subsequence along which $\frac{j_k}{k}$ converges to a certain $z$, as $k\to \infty$. That is, there exists $z$ such that $i(z,k)=j_k$ along this subsequence.
Therefore, for each $k$ and $t$ fixed $${\genfrac{}{}{}{1}{1}{k}} \log z_k ( x_k (x))-{\genfrac{}{}{}{1}{1}{k}} \log z_k( x_k
(y))\leq {\genfrac{}{}{}{1}{1}{k}} \log \,\, \frac{ \,(e^{t\,(\, k\,
\,L_k + k V_k)})_{i(x,k)\, j_k}}{ \,(e^{t\, (\, k\,L_k + k
V_k)})_{i(y,k)\, j_k} }$$ $$={\genfrac{}{}{}{1}{1}{k}} \log \,\, \frac{ \,(e^{t\,(\, k \, \,L_k + k
V_k)})_{i(x,k)\, i(z,k)}}{ \,(e^{t\, (\, k\,L_k + k
V_k)})_{i(y,k)\, i(z,k)} } .$$ Taking $k$ large, we have, for $t$ fixed that $${\genfrac{}{}{}{1}{1}{k}} \log z_k ( x)-{\genfrac{}{}{}{1}{1}{k}} \log z_k(
y)\leq \Phi_t (y,z) - \Phi_t (x,z).$$ The Peierls barrier is defined as $$h(y,x)= \varliminf_{t\to \infty} \Phi_t(y,x) .$$ Taking a subsequence $t_r\to \infty$ such that $h(y,z)=\lim_{r\to
\infty} \Phi_{t_r} (y,z)$, one can easily show that for large $k$ $${\genfrac{}{}{}{1}{1}{k}} \log z_k ( x)-{\genfrac{}{}{}{1}{1}{k}} \log z_k(
y)\leq h (y,z) - h (x,z).$$ The Peierls barrier satisfies $ h(y,z)-h(x,z)\leq \Phi (y,x)\leq A
\, |x-y|$, where $A$ is a constant and $\Phi$ is the Mañé potential (see 3-7.1 item 1 in [@CI]). Therefore, the family is equicontinuous. For each fixed $k$ there is always a point where $z_k$ takes a value above $1$ and one where it takes a value below $1$.
The conclusion is that there exists a subsequence of $\frac{1}{k}
\log z_k $ converging to a certain $u$. The uniqueness of the limit follows from the uniqueness of $u$ discussed above.
A similar result is true for the family $\frac{1}{k} \, \log p_{\mu_k}^V$; recall that $p_{\mu_k}^V$ is obtained from $\mu_k$. Taking a convergent subsequence, we denote by $u^*$ the limit. This subsequence can be taken as a further subsequence of the one along which $\frac{1}{k}\, \log z_k^V$ converges. In this way we obtain a $u: \mathbb{S}^1 \to \mathbb{R}$ and a $u^*: \mathbb{S}^1 \to \mathbb{R}$, which are the limits of the corresponding subsequences.
Now we want to analyse large deviations of the measure $\pi_{k,V}$.
A large deviation principle for the sequence of measures $\{\pi_{k,V}\}_k$ holds and the deviation rate function $I^V$ is $I^V(x)= u (x) + u^{*} (x)$. In other words, given an interval $F=[c,d]$, $$\lim_{k\to \infty} \frac{1}{k} \,\log \pi_{k,V}\,[\,F\,]\,=
-\, \inf \{ I^V(x) \, | \, x \in F\}.$$
Suppose the maximum of $V$ is unique. Then, we get $z_{k}(x_k(x)) \sim e^{ u_{+} (x)\,k}$ and $p_{\mu_k}^V(x_k(x)) \sim e^{ u_{-} (x)\,k}$. What is the explicit expression for $I^V$? Remember that $u_{+}$ satisfies $\mathcal{T}^+_t (u_+) = u_+ + c \, t $ and $u_{-}$ satisfies $\mathcal{T}^-_t (u_{-}) = u_{-} - c \, t $. Here, $u$ is one of the $u_{+}$ and $u^*$ is one of the $u_{-}$. As we said before, they were determined by the normalization. The functions $u_{+}$ and $u_{-}$ are weak KAM solutions.
We denote $I^V(x)= u (x) + u^{*} (x).$ The function $I^V$ is continuous (not necessarily differentiable in all $\mathbb{S}^1$) and well defined. Notice that $\pi_{k,V}(j/k) = (z_k^V)_j \,
(p_{\mu_k}^V)_j .$ We have to estimate $$\pi_{k,V}\,[\,F\,]\, = \sum_{j/k \in F} p_{\mu_k}^V(j/k)\, z_k(j/k)\sim \sum_{j/k \in F}e^{k (u_{-} ( x_k(j/k))+ u_{+} ( x_k(j/k)))} .$$ Then, from the Laplace method it follows that $I^V(x)$ is the deviation function.
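This Laplace-method statement can be checked numerically: for the symmetric generator used here, $\pi_{k,V}$ is proportional to the square of the Perron eigenvector of $k\,L_k+k\,V_k$, so $-\frac{1}{k}\log\pi_{k,V}$ can be evaluated directly and should approach a nonnegative function vanishing at the maximum of $V$. The potential is again the hypothetical $V(x)=\cos 2\pi x$, with unique maximum at $x_0=0$.

```python
import numpy as np

def stationary_pi(k, V):
    # pi_{k,V}^j = u_k^j * mu_k^j; for this symmetric matrix it is proportional to psi_j^2
    x = np.arange(k) / k
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] = A[i, (i - 1) % k] = 1.0
    M = k * (A - 2.0 * np.eye(k)) + k * np.diag(V(x))
    w, vecs = np.linalg.eigh(M)
    psi = np.abs(vecs[:, -1])                       # Perron eigenvector
    return psi**2 / np.sum(psi**2), x

V = lambda y: np.cos(2 * np.pi * y)                 # hypothetical potential, maximum at x0 = 0
for k in [100, 400, 1600]:
    pi, x = stationary_pi(k, V)
    rate = -np.log(pi) / k                          # candidate for the deviation function I^V
    print(k, rate[0], rate[k // 4], rate[k // 2])   # ~0 at x0 = 0, strictly positive away from it
```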
Entropy of $V$. {#sec4}
===============
Review of the basic properties of the entropy for continuous time Gibbs states
------------------------------------------------------------------------------
In [@LNT] the Thermodynamic Formalism for continuous time Markov chains taking values in the Bernoulli space is considered. The authors consider a certain a priori potential $$A:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$$ and an associated discrete Ruelle operator ${\mathcal L}_A$.
Via the infinitesimal generator $L = {\mathcal L}_A-I$ an a priori probability over the Skorohod space is defined.
In [@LNT] a potential $V:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$ is considered, together with the continuous time Gibbs state associated to $V$. This generalizes what is known for the discrete time setting of Thermodynamic Formalism (see [@PP]). In this formalism the properties of the Ruelle operator ${\mathcal L}_A$ are used to assure the existence of eigenfunctions, eigenprobabilities, etc. The eigenfunction is used to normalize the continuous time semigroup operator in order to get a stochastic semigroup (and a new continuous time Markov chain which is called the Gibbs state for $V$). The main technical difficulties arise from the fact that the state space of this continuous time Markov chain is not finite (not even countable). [@Ki1] is a nice reference for the general setting of Large Deviations in continuous time.
On the other hand, in [@BEL] the authors considered continuous time Gibbs states in a much simpler situation, where the state space is finite. They consider an infinitesimal generator which is a $k\times k$ matrix $L$ and a potential $V$ of the form $V:\{1,2,...,k\}\to \mathbb{R}$. This is closer to the setting we consider here, with $k$ fixed.
In the present setting, and according to the notation of the last section, the semigroup $e^{t\, (k\, L_k + k\,V_k - \lambda(k))}, t>0,$ defines what we call the continuous time Markov chain associated to $k\, V_k$. The vector $\pi_{k,V}=(\pi_{k,V}^1,...,\pi_{k,V}^k)$, with $\pi_{k,V}^j=\,u_k^j\, \mu_k^j$, $j=1,2,..,k$, is stationary for this Markov chain.
Notice that the semigroup $e^{t\, (k\, L_k + k\,V_k)}, t>0,$ is not stochastic, and the procedure of getting a stochastic semigroup from it requires a normalization via the eigenfunction and eigenvalue.
If one considers a potential $A:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$ which depends on the first two coordinates and a potential $V:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$ which depends on the first coordinate, one can see that “basically” the results of [@LNT] are an extension of the ones in [@BEL].
In Section 4 of [@LNT] a potential $V:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$ is considered and, for the associated Gibbs continuous time Markov chain, the concept of entropy $H_T$ is introduced for each $T>0$. Finally, one can take the limit in $T$ in order to obtain an entropy $H$ for the continuous time Gibbs state associated to such $V$. We would like here to compute, for each $k$, the expression of the entropy $H(k)$ of the Gibbs state for $k V_k$. Later we want to estimate the limit of $H(k)$, as $k\to \infty$.
Notice that for fixed $k$ our setting here is a particular (much simpler) case of the one where the continuous time Markov chain has the state space $\{1,2,...,k\}^\mathbb{N}$. However, the matrix $L_k$ we consider here has some zero entries, and this case was not explicitly considered in [@LNT]. This is not a serious problem, because the discrete time Ruelle operator was used in [@LNT] mainly to show the existence of eigenfunctions and eigenvalues. Here the existence of eigenfunctions and eigenvalues follows from elementary arguments, due to the fact that the operators are defined on finite dimensional vector spaces.
A different approach to entropy in the continuous time Gibbs setting (not using the Ruelle operator) is presented in [@Leav]. We point out that [@BEL] does not consider the concept of entropy. We will show below that, for the purpose of computing the entropy in the present setting, the reasoning of [@LNT] can be described in more general terms, without mentioning the Ruelle operator ${\mathcal L}_A$.
Now we briefly describe for the reader the computation of entropy in [@LNT]. Given a certain a priori Lipschitz potential $$A_k:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$$ consider the associated discrete Ruelle operator ${\mathcal L}_{A_k}$.
Via the infinitesimal generator $\tilde{L}_k = {\mathcal L}_{A_k}-I$, for each $k$, we define an a priori Markov chain. Consider now a potential $\tilde{V}_k:\{1,2,...,k\}^\mathbb{N}\to \mathbb{R}$ and the associated Gibbs continuous time Markov chain. We denote by $\mu^{k}$ the stationary vector of this chain. We denote by $P_{\mu^{k}}$ the probability over the Skorohod space $D$ obtained from the initial probability $\mu^{k}$ and the a priori Markov chain (which defines a Markov process which is not stationary). We also consider $\tilde{P}^{\tilde{V}_k}_{\mu^k}$, the probability on $D$ induced by the continuous time Gibbs state associated to $\tilde{V}_k$ and the initial measure $\mu^k$.
According to Section 4 in [@LNT], for a fixed $T\geq 0$, the relative entropy is $$\label{entropy}
H_T(\tilde{P}^{\tilde{V}_k}_{\mu^k}\vert P_{\mu^{k}})\,=-\,\int_{ D}
\log\Bigg(\frac{\mbox{d}\tilde{ P}^{\tilde{V}_k}_{\mu^k}}{\mbox{d} P_{\mu^{k}}}\Big|_{ \mathcal{F}_T}\Bigg)(\omega)\,
\mbox{d}\tilde{ P}^{\tilde{V}_k}_{\mu^k} (\omega)\,.$$
In the above, $\mu^k$ is a fixed probability on the state space and $\mathcal{F}_T$ is the usual sigma-algebra up to time $T$. Moreover, $D$ is the Skorohod space.
The entropy of the stationary Gibbs state $\tilde{ P}^{\tilde{V}_k}_{\mu^k}$ is
$$H(\tilde{P}^{\tilde{V}_k}_{\mu^k}\vert P_{\mu^{k}})\,=\,\lim_{T\to \infty} \frac{1}{T} H_T(\tilde{P}^{\tilde{V}_k}_{\mu^k}\vert P_{\mu^{k}}).$$
The main issue here is to apply the above to $k\, V_k$ and not to $\tilde{V_k}.$ In order to compute the entropy in our setting we have to show that the expression above can be generalized and described without mentioning the a priori potential $A$. This will be explained in the next subsection.
Gibbs state in a general setting
--------------------------------
The goal of this subsection is to improve the results of Sections 3 and 4 of the paper [@LNT]. In order to do this we will consider a continuous time Markov chain $\{X_t, t\geq 0\}$ with state space $E$ and with infinitesimal generator given by $$\begin{split}
L(f)(x)=\sum_{y\in E}p(x,y)\big[f(y)-f(x)\big],\\
\end{split}$$ where $p(x,y)$ is the jump rate from $x$ to $y$. Notice that possibly $\sum_{y\in E}p(x,y)\neq 1$. For example, if the state space $E$ is $\{1,...,k\}^{\mathbb{N}}$ and $L=\mathcal L_A-I$, as in [@LNT], we have that $p(x,y)=\mathbf{1}_{\sigma(y)=x}e^{A(y)}$, while if $L=L^V$, also as in [@LNT], $p(x,y)$ is equal to $\gamma_V(x)\mathbf{1}_{\sigma(y)=x}e^{B_{V}(y)}$.
As we will see, by considering this general $p$ one can get more general results.
Suppose $L$ is an infinitesimal generator as above and $V:E\to \mathbb{R}$ is a function such that there exists an associated eigenfunction $F_V:E\to (0,\infty)$ and eigenvalue $\lambda_V$ for $L+V$. That is, we have that $(L+V)F_V=\lambda_V\, F_V$. Then, by a normalization procedure, we can get a new continuous time Markov chain, [**called the continuous time Gibbs state for $V,$**]{} which is the process $\{Y^V_T,\,T\geq 0\}$, having the infinitesimal generator acting on bounded measurable functions $f:E\to\mathbb R$ given by $$\label{LV}
L^{V}(f)(x)= \sum_{y\in E}\frac{p(x,y)F_V(y)}{F_V(x)}\big[f(y)-f(x)\big]\,.$$
To obtain this infinitesimal generator we can follow without any change from the beginning of the proof of the Proposition 7 in Section 3 of [@LNT] until we get the equality (11). After the equation (11) we use the fact that $p(x,y)$ is equal to $\mathbf{1}_{\sigma(y)=x}e^{A(y)}$. Then, in the present setting we just have to start from the equation (11). Notice that the infinitesimal generator $L^V(f)(x)$ can be written as $$\begin{split}
&\frac{L(F_Vf)(x)}{F_V(x)}+ (V(x)-\lambda_V)f(x)\\&=\sum_{y\in E}\frac{p(x,y)}{F_V(x)}\big[F_V(y)f(y)-F_V(x)f(x)\big]+ (V(x)-\lambda_V)f(x)\\
&=\sum_{y\in E}\frac{p(x,y)F_V(y)}{F_V(x)}f(y)+ \Big(V(x)-\lambda_V-\sum_{y\in E}p(x,y)\Big)f(x)\,.\\
\end{split}$$ Using the fact that $F_V$ and $\lambda_V$ are, respectively, the eigenfunction and the eigenvalue, we get that the expression above defines an infinitesimal generator of a continuous time Markov chain.
Now, rewriting \eqref{LV} as $$L^{V}(f)(x)= \sum_{y\in E}p(x,y)\,e^{\log F_V(y)-\log F_V(x)}\big[f(y)-f(x)\big]\,,$$ we can see that the process $\{Y_T^V, T\geq 0\}$ is a perturbation of the original process $\{X_t, t\geq 0\}$. This perturbation is given by the function $\log F_V$, where $F_V$ is the eigenfunction of $L+V$, in the sense of Appendix 1.7 of [@KL], page 337.
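On a finite state space this normalization is a short computation; the sketch below takes an arbitrary rate matrix $p(x,y)$ and a potential $V$, extracts the Perron eigenpair $(\lambda_V,F_V)$ of $L+V$, and returns the tilted generator with rates $p(x,y)F_V(y)/F_V(x)$, checking that its rows sum to zero. The cycle example and the potential used in the check are arbitrary placeholders.

```python
import numpy as np

def gibbs_generator(P_rates, V):
    """Given off-diagonal jump rates p(x,y) and a potential V on a finite state space,
    return the generator L^V of the continuous time Gibbs state: rates p(x,y) F_V(y)/F_V(x),
    where (L + V) F_V = lambda_V F_V (a sketch for finite E)."""
    n = P_rates.shape[0]
    L = P_rates - np.diag(P_rates.sum(axis=1))        # generator L built from the rates p(x,y)
    w, vecs = np.linalg.eig(L + np.diag(V))
    i = np.argmax(w.real)
    lam_V, F = w[i].real, np.abs(vecs[:, i].real)     # Perron eigenvalue and positive eigenfunction
    tilted = P_rates * F[None, :] / F[:, None]        # new rates p(x,y) F(y)/F(x)
    LV = tilted - np.diag(tilted.sum(axis=1))
    return LV, lam_V, F

# Small sanity check on a 4-state cycle: rows of L^V sum to zero (it is a generator)
k = 4
P = np.zeros((k, k))
for i in range(k):
    P[i, (i + 1) % k] = P[i, (i - 1) % k] = float(k)
V = k * np.cos(2 * np.pi * np.arange(k) / k)
LV, lam, F = gibbs_generator(P, V)
print(np.allclose(LV.sum(axis=1), 0.0), lam)
```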
Now we will introduce a natural concept of entropy for this more general setting, described by the general function $p$.
Denote by $\mathbb P_\mu$ the probability on the Skorohod space $D:=D([0, T], E)$ induced by $\{X_t, t\geq 0\}$ and the initial measure $\mu$. And, denote by $\mathbb P^V_\mu$ the probability on $D$ induced by $\{Y_T^V, T\geq 0\}$ and the initial measure $\mu$. By [@KL], page 336, the Radon-Nikodym derivative $\frac{d\mathbb P^V_\mu}{d \mathbb P_\mu}$ is $$\begin{split}
&\exp\Big\{\log F_V(X_T)-\log F_V(X_0)-\int_0^T\frac{L(F_V)(X_s)}{F_V(X_s)}\,ds\Big\}\\
=&\exp\Big\{\log \frac{F_V(X_T)}{F_V(X_0)}+\int_0^T(V(X_s)-\lambda_V)\,ds\Big\}\\
=& \frac{F_V(X_T)}{F_V(X_0)}\exp\Big\{\int_0^T(V(X_s)-\lambda_V)\,ds\Big\}.\\
\end{split}$$
Thus, we obtain the expression: $$\begin{split}
&\log\Big(\frac{d\mathbb P^V_\mu}{d \mathbb P_\mu}\Big)=\int_0^T(V(X_s)-\lambda_V)\,ds+\log F_V(X_T)-\log F_V(X_0).\\
\end{split}$$ which is sharper than the expression (17) on page 13 of [@LNT]. To compare them, take $\tilde{\gamma}=1-V+\lambda_V$ in (17); then we obtain the first term. To obtain the second one, we need to observe that the second term in (17) of [@LNT] can be written as a telescopic sum.
Now for a fixed $k$ we will explain how to get the value of the entropy of the corresponding Gibbs state for $k\, V_k: \Gamma_k \to\mathbb{R}$.
In the general setting described above, consider $E = \Gamma_k=\{0,1/k,2/k,..,(k-1)/k\}$, and, for $i/k,j/k\in \Gamma_k$, take
a\) $p(i/k,j/k)= k$, if $j=i+1$ or $j=i-1$ (mod $k$),
b\) $p(i/k,j/k)=0,$ in the other cases.
The existence of the eigenfunction $F_k$ and the eigenvalue $\lambda_k$ for $k L_k + k V_k$ follows from the continuous time Perron’s Theorem described before. The associated continuous time Gibbs Markov chain has an initial stationary vector, which, as before, is denoted by $\pi_{k,V}$.
Now we have to integrate concerning $\mathbb P_{\pi_{k,V}}^{kV_k}$ for $T$ fixed the function $$\int_0^T(k\,V_k(X_s)-\lambda_k)\,ds+\log F_k(X_T)-\log F_k(X_0).$$
As the probability that we consider on the Skorohod space is stationary and ergodic, this integration results in $T\,\big(\int k V_k \,d \pi_{k,V} - \lambda_k\big)$. Thus, by \eqref{entropy} and the definition of $H$, the entropy is $H(\mathbb P_{\pi_{k,V} }^{kV_k}\vert \mathbb P_{\pi_{k,V} })=\lambda_k-\int k V_k\, d \pi_{k,V}$. We point out that for a fixed $k$ this number is computable from the linear problem associated to the continuous time Perron operator. Now, in order to find the limit entropy associated to $V$, we need to take the limit in $k$ of the above expression.
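For a fixed $k$ this quantity is indeed computable from the linear problem; the sketch below evaluates $\lambda_k$ and $\int kV_k\,d\pi_{k,V}$ for the hypothetical potential $V(x)=\cos 2\pi x$ and prints $\frac{1}{k}\big(\lambda_k-\int kV_k\,d\pi_{k,V}\big)$, which tends to $c(L)-V(x_0)=0$ as $k$ grows (up to the sign convention adopted above, this is the entropy per $k$).

```python
import numpy as np

def entropy_data(k, V):
    """Return lambda_k and the integral of k*V_k against pi_{k,V} for the Gibbs state of k*V_k."""
    x = np.arange(k) / k
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] = A[i, (i - 1) % k] = 1.0
    M = k * (A - 2.0 * np.eye(k)) + k * np.diag(V(x))
    w, vecs = np.linalg.eigh(M)
    lam_k = w[-1]
    psi = np.abs(vecs[:, -1])
    pi = psi**2 / np.sum(psi**2)                  # pi_{k,V} for this symmetric generator
    return lam_k, np.dot(k * V(x), pi)            # lambda_k and the integral of k*V_k d pi_{k,V}

V = lambda y: np.cos(2 * np.pi * y)               # hypothetical potential with unique maximum
for k in [50, 200, 800]:
    lam_k, kV_mean = entropy_data(k, V)
    print(k, (lam_k - kV_mean) / k)               # (1/k)*entropy; it tends to 0 as k grows
```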
Here, we assume that the Mather measure is a Dirac delta probability at $x_0.$ Recall that $\lim_{k\to\infty}\frac{1}{k}\lambda_k =c(L)= V(x_0).$ Moreover, $\pi_{k,V} \to \delta_{x_0}$, as $k\to \infty$. Therefore, $$H(V)=\lim_{k\to\infty}\frac{1}{k}H(\mathbb P_{\pi_{k,V} }^{kV_k}\vert \mathbb P_{\pi_{k,V} })=
\lim_{k\to\infty}\frac{1}{k}\lambda_k - \lim_{k\to\infty}\int V_k \,d \pi_{k,V}=c(L)-V(x_0)=0.$$ The limit entropy in this case is zero.
[99]{}
N. Anantharaman, Counting geodesics which are optimal in homology, Ergodic Theory and Dynamical Systems, Vol 23 Issue 2 (2003).
N. Anantharaman, On the zero-temperature or vanishing viscosity limit for Markov processes arising from Lagrangian dynamics, J. Eur. Math. Soc. 6 no. 2, 207–276 (2004).
N. Anantharaman, R. Iturriaga, P. Padilla and H. Sanchez-Morgado, Physical solutions of the Hamilton-Jacobi equation, Discrete Contin. Dyn. Syst. Ser. B 5, no. 3, 513-528 (2005).
A. Baraviera, R. Exel and A. Lopes, A Ruelle Operator for continuous time Markov chains, São Paulo Journal of Mathematical Sciences. vol 4 n. 1, pp 1-16 (2010).
A. Baraviera, R. Leplaideur and A. O. Lopes, Selection of ground states in the zero temperature limit for a one-parameter family of potentials, *SIAM Journal on Applied Dynamical Systems*, Vol. 11, n 1, 243-260 (2012).
A. Biryuk and D. A. Gomes, An introduction to Aubry-Mather theory. Sao Paulo Journal of Mathematical Sciences, 4 (1), 17–63, 2010
M. J. Carneiro, On minimizing measures of the action of autonomous Lagrangians, Nonlinearity 8 (1995), no. 6, 1077–-1085.
G. Contreras and R. Iturriaga, Global minimizers of autonomous lagrangians, CIMAT, (2000) (see homepage of G. Contreras in CIMAT).
A. Dembo and O. Zeitouni, Large Deviations techniques, Springer Verlag.
R. Ellis, Entropy, Large Deviations, and Statistical Mechanics, Springer Verlag
S. Ethier and T. Kurtz, Markov Processes, John Wiley, (1986).
A. Fathi, Théorème KAM faible et théorie de Mather sur les systèmes lagrangiens, Comptes Rendus de l’Académie des Sciences, Série I, Mathématique Vol 324 1043-1046, 1997.
A. Fathi, Weak KAM theorem in Lagrangian Dynamics, Lecture Notes, Pisa (2005)
M. I. Freidlin , A. D. Wentzel, Random Perturbations of Dynamical Systems, Springer, (1991).
D. A. Gomes, Viscosity solution methods and discrete Aubry–Mather problem, Discrete Contin. Dyn. Syst. 13(1) (2005) 103–-116.
D. A. Gomes and E. Valdinoci, Entropy penalization methods for Hamilton–Jacobi equations, Adv. Math. 215(1) (2007) 94–-152.
M. Kac, Integration in Function spaces and some of its applications, Acad Naz dei Lincei Scuola Superiore Normale Superiore, Piza, Italy (1980).
Y. Kifer, Large Deviations in Dynamical Systems and Stochastic processes, TAMS, Vol 321, N.2, 505–524 (1990)
C. Kipnis and C. Landim, Scaling limits of interacting particle systems, Grundlehren der Mathematischen Wissenschaften, 320, Springer-Verlag, Berlin (1999).
V. Lecomte, C. Appert-Rolland and F. van Wijland, Thermodynamic formalism for systems with Markov dynamics. J. Stat. Phys. 127 (2007), no. 1, 51-106
D. Gomes, A. Lopes and J. Mohr, The Mather measure and a Large Deviation Principle for the Entropy Penalized Method, Communications in Contemporary Mathematics, Vol 13, issue 2, 235–268 (2011)
W. Parry and M. Pollicott, Zeta functions and the periodic orbit structure of hyperbolic dynamics, *Astérisque* Vol [187-188]{} 1990
A. O. Lopes, A. Neumann and Ph. Thieullen A thermodynamic formalism for continuous time Markov chains with values on the Bernoulli Space: entropy, pressure and large deviations, Journ. of Statist. Phys. Volume 152, Issue 5, Page 894-933 (2013).
A. Neumann: Large Deviations Principle for the Exclusion Process with Slow Bonds, PhD Thesis at IMPA (2011).
J. B. Norris, Markov Chains, Cambridge Press
E. Olivieri, M.E. Vares: Large deviations and Metastability. Cambridge University Press, Cambridge (1998).
D. W. Stroock, An introduction to Large Deviations, Springer, (1984).
A. Skorokhod, Studies in the theory of Random Processes, Dover.
A. D. Wentzell, Limit Theorems on Large Deviations for Markov Stochastic Processes, Kluwer, (1990).
---
abstract: 'dc and ac magnetic properties of two thin-walled superconducting Nb cylinders with a rectangular cross-section are reported. Magnetization curves and the ac response were studied on as-prepared and patterned samples in magnetic fields parallel to the cylinder axis. A row of micron-sized antidots (holes) was made in the film along the cylinder axis. Avalanche-like jumps of the magnetization are observed for both samples at low temperatures for magnetic fields not only above $H_{c1}$, but in fields lower than $H_{c1}$ in the vortex-free region. The positions of the jumps are not reproducible and they change from one experiment to another, resembling vortex lattice instabilities usually observed for magnetic fields larger than $H_{c1}$. At temperatures above $0.66T_c$ and $0.78T_c$ the magnetization curves become smooth for the patterned and the as-prepared samples, respectively. The magnetization curve of a reference planar Nb film in the parallel field geometry does not exhibit jumps in the entire range of accessible temperatures. The ac response was measured in constant and swept dc magnetic field modes. Experiment shows that ac losses at low magnetic fields in a swept field mode are smaller for the patterned sample. For both samples the shapes of the field dependences of losses and the amplitude of the third harmonic are the same in constant and swept field near $H_{c3}$. This similarity does not exist at low fields in a swept mode.'
author:
- 'M.I. Tsindlekht$^1$, V.M. Genkin$^1$, I. Felner$^1$, F. Zeides$^1$, N. Katz$^1$, $\check{\text{S}}$. Gazi$^2$, $\check{\text{S}}$. Chromik$^2$, O.V. Dobrovolskiy$^{3,4}$, R. Sachser$^3$, and M. Huth$^3$'
title: 'dc and ac magnetic properties of thin-walled Nb cylinders with and without a row of antidots'
---
Introduction
============
Penetration of magnetic flux into hollow superconducting cylinders is a long standing field of interest. The Little-Parks effect and the quantization of trapped flux were intensively studied during the last fifty years [@LITTLE; @DOU; @VEKHT]. Recent advances in nanotechnology have made it possible to study experimentally the superconducting properties of thin films with different arrays of antidots, see for example, [@Motta1] and references therein. In particular, for the observation of the aforementioned effects, cylinders or antidots of small diameter are required. At the same time, hollow thin-walled cylinders of macroscopic size in magnetic fields parallel to their axes have been studied much less. It was expected that quantum phenomena cannot be observed in such samples because one flux quantum for cylinders with a cross-section area of $\approx 1$ cm$^2$ corresponds to a magnetic field of about $10^{-7}$ Oe. In this case the magnetization should be a smooth function of the magnetic field. However, experimental results obtained recently for thin-walled macroscopic cylinders do not agree with this expectation. Namely, in such Nb cylinders we succeeded in monitoring the magnetic moment of the current circulating in the walls and observed dc magnetic moment jumps even in fields much lower than $H_{c1}$ of the film itself [@Katz1]. So far it is not clear what mechanism is responsible for such flux jumps. Under an axial magnetic field the cylinder walls screen weak external fields, provided that $L\equiv Rd/\lambda^{2} \gg 1$, where $R$ is the cylinder radius, $d$ is the wall thickness, and $\lambda$ is the London penetration depth [@DOU; @PG; @KITTEL]. Therefore, it is expected that a dc magnetic field, $H_0$, will penetrate into the cylinder as soon as the current in the wall exceeds the critical current, and no field penetration should be observed at lower fields. Only above $H_{c1}$ can vortices created at the outer cylinder surface move into the cylinder. For a magnetic field oriented perpendicular to the Nb film surface such vortex motion leads to flux jumps [@NOWAK; @STAM]. These flux jumps were interpreted as a thermomagnetic instability of the critical state. It was demonstrated that in a sample with an array of antidots a flux jump propagates along the antidot row [@MOTTA2].
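To make the order-of-magnitude argument above concrete, the short sketch below (plain Python; the penetration depth $\lambda$ and the effective radius are assumed typical values rather than measured ones) evaluates the field corresponding to one flux quantum through a $\sim 1$ cm$^2$ cross-section and the screening parameter $L = Rd/\lambda^{2}$ for a 100 nm thick wall.

```python
PHI_0 = 2.07e-7                 # flux quantum in CGS units, G*cm^2

# Field corresponding to one flux quantum through ~1 cm^2 cross-section
area = 1.0                      # cm^2
print(f"one flux quantum over 1 cm^2 -> {PHI_0 / area:.1e} Oe")

# Screening parameter L = R*d/lambda^2 for a thin-walled cylinder.
# R ~ 1 mm (assumed effective radius) and lambda ~ 100 nm (assumed typical
# London penetration depth of sputtered Nb); d = 100 nm from the sample data.
R = 0.1                         # cm
d = 100e-7                      # cm
lam = 100e-7                    # cm
L = R * d / lam**2
print(f"screening parameter L = {L:.0f} (>> 1, weak fields are screened)")
```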
Nucleation of the superconducting phase in a thin surface sheath in decreasing magnetic fields parallel to the sample surface was predicted by Saint-James and de Gennes [@DSJ]. They showed that nucleation occurs in a magnetic field $H_0\leq H_{c3}\approx 1.695 H_{c2}$. Experimental confirmations of this prediction were obtained soon after their work appeared. The experimental methods for this confirmation were dc resistivity and ac susceptibility measurements [@ROLL]. It was found that low frequency losses in superconductors in surface superconducting states (SSS) can exceed losses in the normal state [@BURGER; @ROLL].
A swept dc magnetic field qualitatively changes the character of the ac response. Specifically, the penetration of the ac magnetic field into the sample takes place not only for $H_{c2}<H_0<H_{c3}$ but also for $H_{c1}<H_0<H_{c2}$, in sharp contrast to the case of constant dc fields [@STR2; @MAX; @GENKIN22]. The effect of a swept dc field can more suitably be investigated by using hollow thin-walled superconducting cylinders, rather than by bulk samples, because one can control the field transmission through their walls. Previously, we have shown [@Genkin1] that in a thin-walled cylinder in the mixed state, the effect of sweeping a dc field on the ac response is due to an enhancement of the vortex motion through the wall. Above $H_{c2}$, however, this picture is no longer appropriate and the experimental data were explained within the framework of a simple relaxation model [@Katz1].
The goal of this paper is to study how antidots affect the penetration of dc and ac magnetic fields into thin-walled superconducting Nb cylinders of macroscopic sizes, with a rectangular cross section. We show that at low enough temperatures, for both the as-prepared and the patterned samples, even in the *vortex-free regime* at $H< H_{c1}$, the dc magnetic field penetrates through the cylinder walls in an “*avalanche*”-like fashion. Jumps of the dc magnetic moment also become apparent at fields above $H_{c1}$ at low temperatures. For both samples, the field values at which jumps occur vary from one measurement to another, indicating that one deals with transitions between metastable states. At temperatures above $0.66T_c$ and $0.78T_c$ the magnetization curves become smooth for the patterned and the as-prepared sample, respectively.
The ac response of both cylinders was studied in the point-by-point and swept field modes. In both modes, the signals of the first, second and third harmonics were measured concurrently. The ac response of the as-prepared and patterned samples is qualitatively different in the swept field mode.
Experimental
============
The cylindrical samples were prepared by dc magnetron sputtering at room temperature on a rotated sapphire substrate. The sizes of the substrate with rounded corners (radius 0.2 mm) are $1.5\times3\times7.5$ mm$^3$. We fabricated, therefore, a thin-walled hollow superconducting cylinder with a rectangular cross section. The nominal film thickness of both samples was $d=100$ nm. A sketch of the sample geometry is presented in Fig. \[f1\].
The reference sample $A$ was kept as-grown, while the second one, sample $B$, was patterned with a row of antidots at the middle of the larger surface over the entire length of the sample. The row of antidots was milled by focused ion beam (FIB) in a scanning electron microscope (FEI, Nova Nanolab 600). The beam parameters were 30 kV/0.5 nA, while the defocus and blur were 560 $\mu$m and 3 $\mu$m, respectively. The pitch was equal to the antidot center-to-center distance of 1.8 $\mu$m, and the number of beam passes needed to mill 150 nm-deep antidots was 2000. The antidot row, with a length of 7.5 mm, was milled by iteratively stitching processing windows with a long side of $400\,\mu$m. SEM images of the patterned surface of sample $B$ are shown in Fig. \[f2\]. The antidots have an average diameter of 1.5 $\mu$m and an average edge-to-edge distance of 300 nm.
The dc magnetic properties were measured using a commercial superconducting quantum interference device (SQUID) magnetometer, Quantum Design MPMS5. The ac response was measured by the pick-up coil method. The sample was inserted into one coil of a balanced pair of coils, and the unbalanced signal was measured by means of a lock-in amplifier. The ac magnetic susceptibilities were measured in absolute units, see [@LEV2]. A “home-made” measurement cell of the experimental setup was adapted to the SQUID magnetometer. A block diagram of the experimental setup can be found elsewhere [@LEV2].
Measurements of the ac response as a function of the dc field were carried out in two modes: (i) the point-by-point (PBP) mode, where the dc field was kept constant during the measurement, and (ii) the swept field (SF) mode, where the dc field was ramped at a rate of 20 Oe/s. Both the external ac and dc fields were directed parallel to the cylinder axis and hence to the film surface.
![Sketch of the $B$ sample. Here $\text{L}_s = 7.5$ mm, $\text{W}_s=3$ mm, and $2\text{D}=1.4$ mm are the substrate length, width and thickness, respectively. Both dc and ac fields were parallel to $Z$-axis. Dimensions are not to scale.[]{data-label="f1"}](Fig1){width="0.6\linewidth"}
![SEM images of the surface of sample $B$. The antidots have an average diameter of 1.5 $\mu$m and an average edge-to-edge distance of 300 nm. An overview SEM image is presented in the bottom panel where the row of FIB-milled antidots is clearly seen. []{data-label="f2"}](Fig2a.eps "fig:"){width="0.94\linewidth"} ![SEM images of the surface of sample $B$. The antidots have an average diameter of 1.5 $\mu$m and an average edge-to-edge distance of 300 nm. An overview SEM image is presented in the bottom panel where the row of FIB-milled antidots is clearly seen. []{data-label="f2"}](Fig2b.eps "fig:"){width="0.94\linewidth"}
Results
=======
dc magnetization
----------------
The upper and lower panels of Fig. \[f3\] show the temperature dependences of the magnetic moments, $M_0$, in a magnetic field of $20\pm 2$ Oe, of samples $A$ and $B$, respectively. The critical temperatures, $T_c$, of both samples are almost the same, 8.3 K, while the transition width is 1.3 K for sample $A$ and 2.7 K for sample $B$. Sample $B$ demonstrates a two-stage transition, see the inset to the lower panel of Fig. \[f3\]. At low temperatures, the magnetic moment of sample $A$ is a factor of two larger than that of sample $B$. Temperature and field dependences of the magnetic moment were measured after cooling the sample down to the desired temperatures in zero field (ZFC).
![(Color online). Temperature dependences of the magnetic moment of samples $A$ and $B$, upper and lower panels, respectively. Inset to lower panel shows temperature dependence of $M_0$ of $B$ sample near $T_c$. []{data-label="f3"}](Fig3a.eps "fig:"){width="0.98\linewidth"} ![(Color online). Temperature dependences of the magnetic moment of samples $A$ and $B$, upper and lower panels, respectively. Inset to lower panel shows temperature dependence of $M_0$ of $B$ sample near $T_c$. []{data-label="f3"}](Fig3b.eps "fig:"){width="0.98\linewidth"}
![(Color online). $M_0(H_0)$ of samples $A$ and $B$ after ZFC, upper panel. Expanded view of the magnetization curves in low magnetic fields for samples $A$ and $B$, lower panel. []{data-label="f5"}](Fig4a.eps "fig:"){width="0.98\linewidth"} ![(Color online). $M_0(H_0)$ of samples $A$ and $B$ after ZFC, upper panel. Expanded view of the magnetization curves in low magnetic fields for samples $A$ and $B$, lower panel. []{data-label="f5"}](Fig4b.eps "fig:"){width="0.98\linewidth"}
The $M_0(H_0)$ dependences for samples $A$ and $B$ at 4.5 K are shown in the upper panel of Fig. \[f5\]. The magnetization curves in the ascending branch were measured in the hysteresis mode with a 5 Oe step at low fields. Fig. \[f5\] shows that the $H_{c2}$ values are different. Determination of $H_{c2}$ for sample $B$ is less accurate than that of sample $A$, due to the magnetic moment relaxation, which at high fields is larger for sample $B$ [@MIT]. An expanded view of the magnetization curves at low fields is shown in the lower panel of Fig. \[f5\]. The fields of the first jumps, $H^*$, are around 20 Oe and 10 Oe, while the number of jumps in magnetic fields up to 100 Oe is 5 and 7 for samples $A$ and $B$, respectively. Jumps of the magnetic moment were observed in a wide range of magnetic fields, including fields below $H_{c1}$ for both samples. This behavior is reminiscent of magnetic flux jumps in Nb thin films for $H_0$ perpendicular to the film surface [@NOWAK; @STAM]. The jumps observed in these papers were interpreted as a thermomagnetic instability of the Abrikosov vortex lattice [@NOWAK; @STAM]. However, the existence of jumps in fields below $H_{c1}$ and parallel to the surface has been reported only in our recent work [@Katz1]. $H_{c1}$ is $\approx 350$ Oe at 4.5 K in our samples. Direct determination of $H_{c1}$ for thin-walled cylindrical samples is impossible due to magnetic moment jumps at low fields. However, $H_{c1}$ can be estimated using the magnetization curves of the planar film, as shown in the inset to Fig. \[f13\].
ac response
-----------
The effective ac magnetic susceptibility of the sample in the external field $H(t)=H_0(t)+h_{ac}\sin (\omega t)$ is given by $$\label{Eq2}
M(t)=Vh_{ac}\sum_n\{\chi_n^{'} \sin (n\omega t)-\chi_n^{''}\cos (n\omega t)\},$$ where $M(t)$ is the magnetic moment of the sample and $V$ is its volume. The susceptibility reflects the penetration of the ac field into the sample, i.e. $\chi_1^{'}\neq -1/4\pi$, the ac losses, $\chi_1^{''}>0$, and the harmonics of the fundamental frequency, $\chi_n$. In what follows we consider the results of the ac measurements in both PBP and SF modes.
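To illustrate how the components $\chi_n^{'}$ and $\chi_n^{''}$ in Eq. (\[Eq2\]) are obtained from a measured waveform, the sketch below projects a synthetic magnetization signal onto $\sin(n\omega t)$ and $\cos(n\omega t)$, which is essentially what the lock-in detection of the first, second and third harmonics does. The synthetic coefficients are arbitrary illustrative numbers, not a model of the samples.

```python
import numpy as np

f0, h_ac, V = 293.0, 0.04, 1.0                  # Hz, Oe, sample volume (arb.)
w = 2.0 * np.pi * f0
t = np.linspace(0.0, 1.0 / f0, 4096, endpoint=False)   # one full period

# Synthetic response with chi_1' = -0.08, chi_1'' = 0.05, chi_3'' = 0.01
M = V * h_ac * (-0.08 * np.sin(w * t) - 0.05 * np.cos(w * t)
                - 0.01 * np.cos(3.0 * w * t))

def harmonic(M, t, w, n):
    """Recover (chi_n', chi_n'') by Fourier projection over one period."""
    chi_p = 2.0 * np.mean(M * np.sin(n * w * t)) / (V * h_ac)
    chi_pp = -2.0 * np.mean(M * np.cos(n * w * t)) / (V * h_ac)
    return chi_p, chi_pp

for n in (1, 2, 3):
    print(n, np.round(harmonic(M, t, w, n), 4))
```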
![(Color online). Field dependences of $\chi_1(H_0)$ of samples $A$ and $B$ in the PBP mode at 1465 and 293 Hz, upper and lower panels, respectively. Measurements were done at 4.5 K. Arrows on the lower panel show $H_{c3}$ for both samples.[]{data-label="f5aa"}](Fig5a.eps "fig:"){width="0.98\linewidth"} ![(Color online). Field dependences of $\chi_1(H_0)$ of samples $A$ and $B$ in the PBP mode at 1465 and 293 Hz, upper and lower panels, respectively. Measurements were done at 4.5 K. Arrows on the lower panel show $H_{c3}$ for both samples.[]{data-label="f5aa"}](Fig5b.eps "fig:"){width="0.98\linewidth"}
The real and imaginary components of the ac susceptibility at 4.5 K for both samples, measured in the PBP mode as a function of $H_0$ at two frequencies, are shown in Fig. \[f5aa\]. Almost complete screening of the ac field by the superconducting walls is observed for both samples up to 12.5 kOe. This value is higher than $H_{c2} =11\pm 0.5$ kOe of sample $A$ (Fig. \[f5\], upper panel). Complete screening of ac fields by a type II superconductor at low frequencies ($\omega \ll \omega_p$, where $\omega_p$ is the depinning frequency) and low excitation amplitudes (ac current much lower than the depinning current) in dc fields below $H_{c2}$ was observed years ago [@STR2]. The frequency dispersion of $\chi_1$ is weak for both samples. The third critical magnetic field was determined from the ac data as follows. At low excitation amplitudes a loss peak is located between $H_{c2}$ and $H_{c3}$. The losses disappear at $H_0>H_{c3}$ because in the normal state $\delta \gg d$, where $\delta$ is the normal-state skin depth. Such a determination of $H_{c3}$ was proposed years ago by Rollins and Silcox [@ROLL]. The lower panel of Fig. \[f5aa\] shows an example of this determination of the third critical magnetic field. It was found that $H_{c3}\approx 17.5\pm 0.5$ and $16\pm 0.5$ kOe at 4.5 K for samples $A$ and $B$, respectively. $H_{c3}/H_{c2} \approx 1.6$ for sample $A$. An accurate determination of $H_{c2}$ for sample $B$ is difficult, due to magnetic relaxation, as discussed above. The absorption line, $\chi_1^{''}(H_0)$, near $H_{c3}$ is different for samples $A$ and $B$: this line is nonuniform for sample $A$, whereas it is uniform but broadened for sample $B$. The ac response of superconductors, even at very low excitation amplitudes, e.g., less than 1 Oe, is strongly nonlinear in the SSS [@ROLL; @Genkin1]. The second harmonic signal is absent in the PBP mode in the entire range of magnetic fields. At the same time, the third-harmonic signal exists in the vicinity of $H_{c3}$ only. The absence of the second harmonic in the PBP mode is a common feature of bulk samples as well [@CAMP]. Fig. \[f6a\] shows the field dependences of $\chi_3$, $\chi_{2,3}\equiv \sqrt{(\chi_{2,3}^{'})^{2}+(\chi_{2,3}^{''})^{2}}$, in the PBP mode for samples $A$ and $B$, in the upper and lower panels, respectively. Perturbation theory with respect to the excitation amplitude is not applicable for interpreting these experimental data. For example, according to perturbation theory, $\chi_3$ should be proportional to $h_{ac}^{2}$, and this is not the case in our findings, Fig. \[f6a\]. It is known that perturbation theory cannot explain the experimental data for bulk samples either [@ROLL; @GENKIN22]. We also note that there is a difference in the third harmonic signal between samples $A$ and $B$ in the PBP mode.
![(Color online). Field dependences of $\chi_3$ of samples $A$ and $B$ (upper and lower panels, respectively) in the PBP mode at 4.5 K.[]{data-label="f6a"}](Fig6a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of $\chi_3$ of samples $A$ and $B$ (upper and lower panels, respectively) in the PBP mode at 4.5 K.[]{data-label="f6a"}](Fig6b.eps "fig:"){width="0.9\linewidth"}
A swept field affects the ac response more strongly at low frequencies and/or low excitation amplitudes for a given sweep rate. This was confirmed in experiments with bulk samples [@MAX; @GENKIN22] and thin-walled cylinders [@Genkin1; @Katz1]. Fig. \[f7c\] shows the field dependences of $\chi_1$ for both samples $A$ and $B$ in the PBP and SF modes at 293 Hz and an amplitude of 0.04 Oe. The difference between the PBP and SF modes can easily be seen for both samples. The ac response at low magnetic fields in the SF mode fluctuates due to magnetic flux jumps, Fig. \[f5\]. Near $H_{c3}$ the curves of $\chi_1$ coincide well in the PBP and SF modes for both samples, Fig. \[f7c\]. The difference between the two samples in the SF mode is very pronounced in fields above 5 kOe. In particular, $\chi_1^{''}$ is a smooth function of the dc field for sample $A$, but for sample $B$ it shows step-like features in fields near 7 and 10 kOe.
![(Color online). Field dependences of $\chi_1$ for samples $A$ and $B$ (upper and lower panels, respectively) in the PBP and SF modes at 293 Hz and an excitation amplitude of 0.04 Oe. Measurements were done at 4.5 K.[]{data-label="f7c"}](Fig7a.eps "fig:"){width="0.98\linewidth"} ![(Color online). Field dependences of $\chi_1$ for samples $A$ and $B$ (upper and lower panels, respectively) in the PBP and SF modes at 293 Hz and an excitation amplitude of 0.04 Oe. Measurements were done at 4.5 K.[]{data-label="f7c"}](Fig7b.eps "fig:"){width="0.98\linewidth"}
A nonlinearity can clearly be seen not only in the second and third harmonics, but also in the first harmonic. Fig. \[f11a\] shows the field dependences of $\chi_1$ of samples $A$ and $B$ at $h_{ac}=0.04$ and 0.2 Oe and T = 4.5 and 5.5 K in the SF mode. Panels $a$ and $b$ demonstrate (i) that at low magnetic fields the losses in sample $A$ are significantly larger than in sample $B$, and (ii) that an increase of the excitation amplitude leads to a decrease of $\chi_1^{''}$ for both samples. At $h_{ac}= 0.2$ Oe and $H_0> 5$ kOe there is a plateau, and $\chi_1^{''}$ of the two samples coincide with high precision. The plateau in the SF mode at high excitation amplitudes was observed at T = 4.5 K, Fig. \[f11a\]$c$, and also at 5.5 K, Fig. \[f11a\]$b$. It appears that in this range of magnetic fields and at high enough amplitudes, the first harmonic signal of the two samples is almost identical. However, a qualitative difference remains for the signals of the second and third harmonics, see Figs. \[f9a\] and \[f10a\].
![(Color online). Field dependences of $\chi_1$ for samples $A$ and $B$ in the SF mode at 5.5 K (panels $a$ and $b$, respectively) and 4.5 K (panel $c$).[]{data-label="f11a"}](Fig8a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of $\chi_1$ for samples $A$ and $B$ in the SF mode at 5.5 K (panels $a$ and $b$, respectively) and 4.5 K (panel $c$).[]{data-label="f11a"}](Fig8b.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of $\chi_1$ for samples $A$ and $B$ in the SF mode at 5.5 K (panels $a$ and $b$, respectively) and 4.5 K (panel $c$).[]{data-label="f11a"}](Fig8c.eps "fig:"){width="0.9\linewidth"}
As for the second harmonic signal, it is absent for both samples in the whole range of magnetic fields in the PBP mode, but becomes visible in the SF mode. Fig. \[f10a\] shows the field dependences of $\chi_2$ in the SF mode. Perturbation theory cannot explain the data for $\chi_2$ in the SF mode and for $\chi_3$ in both modes. According to this theory one would expect that $\chi_3\propto h_{ac}^2$ and $\chi_2\propto h_{ac}$. However, this is not the case in our experiment at any magnetic field. In our experiment, an increase of the excitation amplitude leads to a suppression of $\chi_2$. In the SF mode $\chi_2$ is larger than $\chi_3$ under the conditions of the experiment, see Figs. \[f9a\] and \[f10a\]. We note that the data for $\chi_1$, $\chi_2$ and $\chi_3$ fluctuate strongly at fields lower than 4 kOe at 4.5 K for sample $A$ due to magnetic flux jumps.
![(Color online). Field dependences of $\chi_3$ of $A$ and $B$ samples (upper and lower panels, respectively) in the SF mode at 4.5 K.[]{data-label="f9a"}](Fig9a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of $\chi_3$ of $A$ and $B$ samples (upper and lower panels, respectively) in the SF mode at 4.5 K.[]{data-label="f9a"}](Fig9b.eps "fig:"){width="0.9\linewidth"}
![(Color online). Field dependences of $\chi_2$ of $A$ and $B$ samples, upper and lower panels, respectively, in the SF mode at 4.5 K.[]{data-label="f10a"}](Fig10a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of $\chi_2$ of $A$ and $B$ samples, upper and lower panels, respectively, in the SF mode at 4.5 K.[]{data-label="f10a"}](Fig10b.eps "fig:"){width="0.9\linewidth"}
It is interesting to note the following concerning the relation between the field dependences of $\chi_1^{''}$ and $\chi_3$. Figs. \[f8c\] and \[f8b\] show the field dependences of the normalized $\chi_1^{''}$ and $\chi_3$ for samples $A$ and $B$. The upper panels in both figures correspond to the PBP mode and the lower panels to the SF mode. At low magnetic fields $\chi_1^{''}$ and $\chi_3$ are very small in the PBP mode for both samples. Both signals become measurable near $H_{c3}$, and the shapes of these signals are identical with high precision. In the SF mode the shapes of $\chi_1^{''}$ and $\chi_3$ are again the same in the vicinity of $H_{c3}$. However, at low magnetic fields this similarity vanishes in the SF mode. Such a similarity in the PBP mode can be proved in the framework of perturbation theory [@PAV], but it has not yet been proven in the general case that we face in our experiment.
![(Color online). Field dependences of normalized $\chi_1^{''}$ and $\chi_3$ of sample $A$ in point-by-point and swept field modes (upper and lower panels, respectively). The shapes of $\chi_1^{"}$ and $\chi_3$ are with high accuracy identical in PBP and SF modes near $H_{c3}$. This similarity breaks in a SF mode at low magnetic fields.[]{data-label="f8c"}](Fig11a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of normalized $\chi_1^{''}$ and $\chi_3$ of sample $A$ in point-by-point and swept field modes (upper and lower panels, respectively). The shapes of $\chi_1^{"}$ and $\chi_3$ are with high accuracy identical in PBP and SF modes near $H_{c3}$. This similarity breaks in a SF mode at low magnetic fields.[]{data-label="f8c"}](Fig11b.eps "fig:"){width="0.9\linewidth"}
![(Color online). Field dependences of normalized of $\chi_1^{"}$ and $\chi_3$ of sample $B$ in point-by-point and swept field modes (upper and lower panels, respectively). The shapes of $\chi_1^{"}$ and $\chi_3$ are with high accuracy identical in PBP mode and in large fields in SF mode.[]{data-label="f8b"}](Fig12a.eps "fig:"){width="0.9\linewidth"} ![(Color online). Field dependences of normalized of $\chi_1^{"}$ and $\chi_3$ of sample $B$ in point-by-point and swept field modes (upper and lower panels, respectively). The shapes of $\chi_1^{"}$ and $\chi_3$ are with high accuracy identical in PBP mode and in large fields in SF mode.[]{data-label="f8b"}](Fig12b.eps "fig:"){width="0.9\linewidth"}
Discussion
==========
dc magnetization curves
-----------------------
The physical reasons for the observed flux jumps at small magnetic fields are not clear. One can suggest that the alignment of the magnetic field with respect to the sample surface is not perfect. Indeed, the latter cannot be ruled out completely, and a small field component perpendicular to the surface, $H_{\bot}$, should create vortices which might be responsible for the flux jumps at small magnetic fields. Hence, one may expect that flux jumps could be present at small magnetic fields in a reference planar film as well. This assumption has been examined in an additional control experiment with a reference planar film. Figure \[f13\] displays the ascending branches of the magnetization curves of a planar Nb film of 240 nm thickness sputtered onto a silicon substrate, for the magnetic field inclination angles $\varphi =0^{\circ}$, $10^{\circ}$, and $45^{\circ}$. For $\varphi = 10^{\circ}$ and $45^{\circ}$ the component $H_{\bot}\approx0.17H_0$ and $H_{\bot}\approx0.71H_0$, respectively. Vortices created by this field component exist at small magnetic fields. This experiment demonstrates that in small fields the magnetic moment is a linear function of the magnetic field and vortices created by $H_{\bot}$ *do not induce any flux jumps* at small fields. The magnetic moment at small fields remains a linear function of the magnetic field for planar films of different thicknesses. Magnetic moment jumps first appear in the magnetization curve at inclination angles larger than $10^\circ$. Such a field inclination angle is at least a factor of 3 larger than the possible misalignment of the sample with respect to the field direction in our experiment. Therefore, the results obtained for planar films suggest that the vortices created by the small field component perpendicular to the surface are not the cause of the magnetic moment jumps at small magnetic fields in the cylindrical samples.
![Ascending branches of magnetization curves of planar film in parallel and tilted magnetic fields. Inset shows determination of $H_{c1}$ of the planar film.[]{data-label="f13"}](Fig13.eps){width="0.9\linewidth"}
The experimental data demonstrate the existence of magnetic instabilities in fields lower than $H_{c1}$. At 4.5 K, the flux starts to penetrate into cylinders $A$ and $B$ at $H_0 = 20$ and 10 Oe, respectively, Fig. \[f5\] (lower panel). The field of the first jump, $H^*$, is defined by some critical current (not to be confused with the depairing current). If we assume that the critical current density in the isthmus between two antidots is the same as in the film, then the ratio $H_B^*/ H_A^*$ should be $\approx 0.16$. However, the experiment shows that this ratio is about 0.5, see Fig. \[f5\]. This means that the critical current density in the isthmuses is higher than in the as-prepared film. We note that the ratio of the magnetic moments after ZFC in a field of 20 Oe for samples $B$ and $A$ is 0.5, see Fig. \[f3\]. According to the thermodynamic criterion [@Katz1], $H^* \propto\sqrt{d}$. A comparison of $H^*$ for sample $A$ with the samples from [@Katz1] shows that this thermodynamic criterion cannot describe the magnetization jumps in samples without antidots.
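The geometric estimate quoted above can be reproduced directly from the antidot dimensions given in the Experimental section, assuming (as stated) the same critical current density in the film and in the isthmuses:

```python
# Narrowest current-carrying width along the patterned cylinder relative
# to the plain film: the isthmus width divided by the antidot pitch.
pitch = 1.8e-6   # m, antidot center-to-center distance
gap   = 0.3e-6   # m, edge-to-edge distance between antidots

expected_ratio = gap / pitch      # ~0.17, i.e. H_B*/H_A* ~ 0.16
observed_ratio = 10.0 / 20.0      # first-jump fields: 10 Oe (B) vs 20 Oe (A)

print(f"expected H_B*/H_A* from geometry: {expected_ratio:.2f}")
print(f"observed H_B*/H_A*:              {observed_ratio:.2f}")
```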
It was demonstrated that at low temperatures and at magnetic fields higher than some critical value, $H_{th}$, the magnetization curve becomes smooth, and that $H_{th}$ is significantly larger in a sample with an array of antidots [@Motta1]. The latter experiments were carried out with the field perpendicular to the film surface. In our case we deal with a row of antidots and a magnetic field parallel to the surface. We believe that this is the main reason why $H_{th}$ is lower for the sample with antidots, see the upper panel of Fig. \[f5\]. We have to mention that the difference between the perpendicular and parallel geometries is crucial. For example, the vortex velocity in the perpendicular geometry is a few orders of magnitude larger than in the parallel one, see Ref. [@Genkin1].
ac response
-----------
The field dependences of $\chi_1(H_0)$ in the PBP mode are different for samples $A$ and $B$, Fig. \[f5aa\]. Losses appear and the screening decreases in magnetic fields above $H_{c2}$. Near $H_{c3}$ there is a loss peak, and the shape of this peak is different for samples $A$ and $B$: the loss peak of sample $A$ is nonuniform, while that of sample $B$ is broadened. The third critical field of sample $A$ is larger than that of sample $B$, Fig. \[f5aa\], lower panel. However, the determination of $H_{c3}$ for sample $B$ is questionable.
The difference in the ac response of samples $A$ and $B$ becomes qualitative in the SF mode, Figs. \[f7c\], \[f9a\] and \[f10a\]. Whereas the field dependences of $\chi_1$, $\chi_2$ and $\chi_3$ are smooth for sample $A$, they have peculiarities at 7 and 10 kOe for sample $B$. As we have mentioned above, the data for $\chi_1$, $\chi_2$ and $\chi_3$ are noisy and fluctuating at fields lower than 4 kOe at 4.5 K and 2 kOe at 5.5 K due to magnetic flux jumps. The behavior of the ac response in the SF mode has some similar features for both samples. Thus, an increase of the excitation amplitude and frequency leads to a decrease of $\chi_1^{''}$ in fields down to $H_{c2}$ and of $\chi_2$ in the whole field range. The reason for this behavior is the following. The main physical parameter defining the difference between the PBP and SF modes is $Q=\frac{\dot{H}_0}{\omega h_{ac}}$ [@MAX; @FINK2]. The PBP mode corresponds to $Q=0$. The parameter $Q$ tends to zero as the excitation amplitude and/or frequency increases. This is why $\chi_1^{''}$ and $\chi_2$ decrease with $h_{ac}$ and $\omega$, and as a consequence perturbation theory is not applicable in the SF mode. In the limiting case of high frequencies, for example in the GHz range, a swept field with a sweep rate of a few tens or hundreds of oersted per second does not affect the ac response [@VAT].
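For the parameters used in this work the value of $Q$ is easily evaluated; the sketch below does so for the sweep rate of 20 Oe/s and the frequencies and amplitudes quoted above (taking $\omega$ as the angular frequency $2\pi f$ is an assumption about the convention used in the definition of $Q$).

```python
import numpy as np

H_dot = 20.0                    # Oe/s, dc field sweep rate
for f in (293.0, 1465.0):       # Hz
    for h_ac in (0.04, 0.2):    # Oe
        Q = H_dot / (2.0 * np.pi * f * h_ac)
        print(f"f = {f:6.0f} Hz, h_ac = {h_ac:4.2f} Oe -> Q = {Q:.3f}")
```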
The ac response of sample $A$ in the SF mode is similar to that reported in our previous papers [@Genkin1; @Katz1]. In this sample we observe smooth field dependences of $\chi_1^{''}$, $\chi_2$ and $\chi_3$. The models proposed in [@Genkin1; @Katz1] can explain the experimental data for sample $A$ in magnetic fields lower and higher than $H_{c2}$. The case of sample $B$ is more complicated. It turned out that $\chi_1^{''}$ at magnetic fields of 4 kOe ($H_0< H_{c2}$) is lower for sample $B$ than for sample $A$, see Fig. \[f11a\]$c$. The following may be the reason for this. Vortex pinning and the current induced by the ac and swept fields play an important role in the ac response in a swept magnetic field [@Genkin1]. The area under the row of antidots is much smaller than the total film area. This is why vortex pinning by this row of antidots cannot explain the loss reduction. At the same time, the total induced current is lower in sample $B$ than in sample $A$, Fig. \[f3\]. This reduces the forces dragging vortices into the substrate and leads to the loss reduction [@Genkin1]. The jump at $H_0\approx 5$ kOe takes place only for sample $B$, see Figs. \[f9a\] and \[f10a\]. At fields higher than the jump field, the losses for both samples at $h_{ac} =0.2$ Oe are equal, panels $b$ and $c$ of Fig. \[f11a\]. The weakening of pinning in high magnetic fields could be a cause of such behavior.
The nature of the jump of $\chi_{2,3}$ in magnetic fields of 10 kOe (see panels *b* of Figs. \[f9a\] and \[f10a\]) for sample $B$ in the SF mode is not clear. The ac amplitude does not smear this jump completely, in contrast with $\chi_1^{''}$, see panels *a* and *c* of Fig. \[f11a\]. This jump takes place in magnetic fields near $H_{c2}$ of sample $B$. A decrease of the ac losses and a jump of the harmonics near $H_{c2}$ in a swept field were observed in single-crystal Nb [@GENKIN22; @MT]. However, single-crystal Nb has a well-defined vortex structure and $H_{c2}$, which is not the case for our sample.
conclusion
==========
We have studied the dc and ac magnetic properties of thin-walled cylinders of superconducting Nb with and without a row of antidots. The experiment showed that the critical current density is higher in the isthmuses between antidots than in the film itself. The dc magnetization curves demonstrate an “avalanche”-like penetration of the magnetic flux into the cylinder for both samples. The effect was observed at a temperature of 4.5 K and completely disappeared at 7 and 5.5 K for samples $A$ and $B$, respectively. Such a behavior resembles a thermomagnetic instability of vortices, but it was observed in fields below $H_{c1}$ of the films, i.e. in a vortex-free state. The effect of the end faces, where the magnetic field lines bend near the sample ends, could be another reason for the flux jumps. The influence of the sample end faces on the flux jumps in such samples has to be studied using a local probe technique.
The ac response of thin-walled cylinders with and without antidots is strongly nonlinear and perturbation theory cannot explain the experimental data. The ac response of $A$ and $B$ samples is similar in the point-by-point mode. However, in the swept field mode there is a qualitative difference between losses for samples $A$ and $B$. Thus, at low magnetic fields, losses in sample $B$ are lower than in sample $A$. There are jumps in $\chi_1$, $\chi_2$ and $\chi_3$ in high magnetic fields for sample $B$, but these quantities are smooth functions of the magnetic field in sample $A$.
We have demonstrated that the field dependences of $\chi_1^{''}$ and $\chi_3$ have the same shapes in the point-by-point mode with high accuracy. In the swept field mode the shapes of $\chi_1^{''}$ and $\chi_3$ are the same in the vicinity of $H_{c3}$. This similarity has not yet been proved in the case of the strongly nonlinear response that we encounter in our experiment.
The models developed in [@Genkin1; @Katz1] could describe the ac response of the as-prepared sample. However, these models are not applicable to the sample with a row of antidots. New models for samples with antidots have to be elaborated, and further experimental studies of samples with different lengths, wall thicknesses, and different sizes and geometries of the antidot row or array have to be carried out.
acknowledgments
===============
We thank J. Kolacek, P. Lipavsky and V.A. Tulin for fruitful discussions. This work was done within the framework of the NanoSC-COST Action MP1201. Financial support of the grant agency VEGA in projects nos. 2/0173/13 and 2/0120/14 is kindly appreciated.
Little W A and Parks R D 1962 Phys. Rev. Lett. [**9**]{} 9
Douglass D H, Jr. 1963 Phys. Rev. [**132**]{} 513
Aoyama K, Beaird R, Sheehy D E and Vekhter I 2013 Phys. Rev. Lett. [**110**]{} 177004
Motta M, Colauto F, Ortiz W A, Fritzsche J, Cuppens J, Gillijns J, Moshchalkov V V, Johansen T H, Sanchez A and Silhanek A V 2013 Appl. Phys. Lett. [**102**]{} 212601
Tsindlekht M I, Genkin V M, Felner I, Zeides F, Katz N, Gazi S and Chromik S 2014 Phys. Rev. B [**90**]{} 014514
de Gennes P G 1966 [*Superconductivity of Metals and Alloys*]{} (W A Benjamin Inc, New York) p 197
Kittel C, Fahy S and Louie S G 1988 Phys. Rev. B [**37**]{} 642
Nowak E R, Taylor O W, Liu Li, Jager H M and Selinder T I 1997 Phys. Rev. B [**55**]{} 11702
Esquinazi P, Setzer A, Fuchs D, Kopelevich Y, Zeldov E and Assmann C 1999 Phys. Rev. B [**60**]{} 12454; Stamopoulos D, Speliotis A and Niarchos D 2004 Supercond. Sci. Technol. [**17**]{} 1261
Motta M, Colauto F, Zadorosny R, Johansen T H, Dinner R B, Blamire M G, Ataklti G W, Moshchalkov V V, Silhanek A V and Ortiz W A 2011 Phys. Rev. B [**84**]{} 214529
Saint-James D and de Gennes P G 1963 Phys. Lett. [**7**]{} 306
Rollins R W and Silcox J 1967 Phys. Rev. [**155**]{} 404
Burger P, Deutscher G, Gueon E and Martinet A 1965 Phys. Rev. [**137A**]{} 853
Strongin M, Schweitzer D G, Paskin A and Craig P P 1964 Phys. Rev. [**136**]{} A926
Maxwell E and Robbins W P 1966 Phys. Lett. [**19**]{} 629
Tsindlekht M I, Genkin V M, Leviev G I, Schlussel Y, Tulin V A and Berezin V A 2012 Physica C [**473**]{} 6
Tsindlekht M I, Genkin V M, Gazi S and Chromik S 2013 J. Phys.: Condens. Matter [**25**]{} 085701
Leviev G I, Genkin V M, Tsindlekht M I, Felner I, Paderno Yu B and Filippov V B 2005 Phys. Rev. B [**71**]{} 064506
Yeshurun Y, Malozemoff A P and Shaulov A 1996 Rev. Mod. Phys. [**68**]{} 911
Campbell S A, Ketterson J B and Crabtree G W 1983 Rev. Sci. Instrum. [**54**]{} 1191
Fink H 1967 Phys. Rev. [**161**]{} 417
Lipavsky P 2014 private communication
Tulin V A 2015 private communication
Tsindlekht M I, *et al.* 2016 unpublished
---
abstract: 'Ca II H emission is a well-known indicator of magnetic activity in the Sun and other stars. It is also viewed as an important signature of chromospheric heating. However, the Ca II H line has not been used as a diagnostic of magnetic flux emergence from the solar interior. Here we report on Hinode observations of chromospheric Ca II H brightenings associated with a repeated, small-scale flux emergence event. We describe this process and investigate the evolution of the magnetic flux, G-band brightness, and Ca II H intensity in the emerging region. Our results suggest that energy is released in the chromosphere as a consequence of interactions between the emerging flux and the pre-existing magnetic field, in agreement with recent 3D numerical simulations.'
author:
- 'S. L. Guglielmino and F. Zuccarello'
- 'P. Romano'
- 'L. R. Bellot Rubio'
title: |
HINODE Observations of Chromospheric Brightenings in the\
Ca II H Line during Small-Scale Flux Emergence Events
---
Introduction
============
Numerical simulations predict that magnetic flux emerging into the solar atmosphere interacts and reconnects with the pre-existing chromospheric and coronal field. This suggests that flux emergence is a relevant source of energy for the chromosphere [@Archontis:04; @Archontis:05]. The efficiency of the interaction and the consequent heating seem to depend on the geometry of the two flux systems [@Galsgaard:07]. While at large scales these results have been confirmed by high-resolution observations [@Moreno:08; @Zuccarello:08], the role of small-scale emergence events in the heating of the upper atmospheric layers, as described by @Isobe:08, still lacks observational confirmation.
In the absence of other diagnostics, heating events in the chromosphere can be detected through the intensity profiles of the Ca II H and K lines. The correlation between flux emergence and chromospheric emission was analyzed in detail by @Balasu:01, who observed anomalous profiles in the Ca II K line in an emerging active region. However, very few examples of small-scale transient brightenings have been reported in the literature, and most of them are related to flux cancellation events [e.g. @Bellot:05].
The *Hinode* satellite, with its unprecedented spatial resolution, offers for the first time the possibility to investigate the processes that occur in the chromosphere during the emergence of magnetic flux at small spatial scales. In this Letter we analyze simultaneous chromospheric and photospheric observations of an emerging flux region taken with the Solar Optical Telescope aboard *Hinode*. During the emergence event, strong brightenings were detected in the Ca II H line core without significant counterparts in the G-band intensity, which suggests that chromospheric heating did occur at the site of flux emergence.
Observations and data reduction
===============================
On 2007 September 30, as part of the *Hinode* Operation Plan 14 (*Hinode*/Canary Islands Campaign), the active region NOAA 10971 was observed by the Solar Optical Telescope [SOT; @Tsuneta:08] onboard *Hinode* [@Kosugi:07]. The field of view (FoV) was centered at solar coordinates ($174\arcsec$, $-79\arcsec$), i.e., $11^{\circ}$ away from disk center.
The SOT spectro-polarimeter [SP; @Tsuneta:08] performed six raster scans of the active region from 08:00 to 14:00 UT, acquiring the Stokes I, Q, U, and V profiles of the photospheric lines at 630.15 nm and 630.25 nm. The FoV covered by the SP observations is $164\arcsec \times 164\arcsec$, with an effective pixel size of $0.32\arcsec$ (Fast Map mode). Simultaneously, the SOT Broadband Filter Imager (BFI) acquired filtergrams in the core of the Ca II H line ($396.85 \pm 0.3 \;\textrm{nm}$) and in the G band ($430.5 \pm 0.8 \;\textrm{nm}$), while the Narrowband Filter Imager (NFI) obtained shuttered Stokes I and V filtergrams in the wings of the Na I D1 line at $589.6 \;\textrm{nm}$. The BFI images have a spatial sampling of $0.05\arcsec$/pixel (G band) and $0.1\arcsec$/pixel (Ca II H), while that of the NFI filtergrams is $0.16\arcsec$/pixel. The BFI and NFI time series have a cadence of one minute and extend from 07:00 to 17:00 UT, with a small gap between 10:05 and 10:20 UT.
We have corrected the SOT/SP and SOT/FG images for dark current, flat field, and cosmic rays with standard SolarSoft routines. Besides obtaining photospheric and chromospheric information through corrected G-band and Ca II H filtergrams, we have constructed magnetograms from the Na I D1 Stokes I and V images acquired $\pm 156$ mÅ off the line center. From the ratio $$\frac{V}{I}=\frac{1}{2}\left( \frac{V_{\rm blue}}{I_{\rm blue}} + \frac{V_{\rm red}}{I_{\rm red}} \right)$$ we calculate the magnetic flux density $\Phi_{\rm d}$ using the weak field approximation [@Stix:02] as $$\Phi_{\rm d}= 8 \times 10^3 \: \frac{V}{I} \;\;\; [\textrm{Mx cm}^{-2}]$$ [see @Guglielmino:08]. To first order, the magnetograms computed in this way are not affected by Doppler shifts. We remind the reader that, at disk center, the Na I D1 line samples the upper photospheric layers and not the chromosphere.
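A minimal sketch of how Eqs. (1) and (2) translate into a flux-density map is given below (NumPy; the array shapes and the synthetic filtergrams are placeholders, and real data would of course come from the calibrated SOT/NFI images):

```python
import numpy as np

def flux_density(I_blue, V_blue, I_red, V_red):
    """Weak-field flux density (Mx cm^-2) from Stokes I/V filtergrams taken
    in the blue and red wings of the Na I D1 line (+/-156 mA off line center)."""
    ratio = 0.5 * (V_blue / I_blue + V_red / I_red)   # Doppler-insensitive V/I
    return 8.0e3 * ratio

# Placeholder filtergrams with the shape of a small sub-field
rng = np.random.default_rng(0)
shape = (256, 256)
I_b = 1.0 + 0.01 * rng.standard_normal(shape)
I_r = 1.0 + 0.01 * rng.standard_normal(shape)
V_b = 1.0e-3 * rng.standard_normal(shape)
V_r = 1.0e-3 * rng.standard_normal(shape)

phi_d = flux_density(I_b, V_b, I_r, V_r)          # Mx cm^-2 per pixel
print(phi_d.shape, f"{phi_d.std():.2f} Mx/cm^2 rms")
```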
For each raster scan of the SP, the profile with the minimum total polarization degree, $P=\left[\left(Q^{2}+U^{2}+V^{2}\right)/I^{2}\right]^{1/2}$, was selected as a reference profile. All the spectra in the scan were normalized to the continuum of this profile and corrected for limb darkening. Also, a stray light profile was computed by averaging the reference profiles of the six scans.
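The selection of profiles for the inversion and for the quiet-Sun reference can be summarized as follows (a sketch with placeholder Stokes spectra; taking the maximum of $P$ over wavelength as the per-pixel criterion is our assumption about how the thresholds are applied):

```python
import numpy as np

def polarization_degree(I, Q, U, V):
    """Total polarization degree P for each pixel and wavelength sample."""
    return np.sqrt((Q**2 + U**2 + V**2) / I**2)

# Placeholder Stokes spectra with shape (n_pixels, n_wavelengths)
rng = np.random.default_rng(1)
I = 1.0 + 0.05 * rng.standard_normal((1000, 112))
Q = 5.0e-3 * rng.standard_normal((1000, 112))
U = 5.0e-3 * rng.standard_normal((1000, 112))
V = 5.0e-3 * rng.standard_normal((1000, 112))

P_max = polarization_degree(I, Q, U, V).max(axis=1)
invert_mask = P_max > 0.02      # pixels passed to the SIR inversion
quiet_mask = P_max < 0.005      # pixels averaged for the quiet-Sun profile
print(invert_mask.sum(), "pixels inverted,", quiet_mask.sum(), "quiet pixels")
```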
Adopting a grid paradigm, we have inverted the spectra with $P > 2\%$ using the SIR code [@RuizIniesta:92]. The inversion yields the temperature stratification in the range $-4.0 < \log \, \tau_{5} < 0$ ($\tau_{5}$ is the optical depth of the continuum at $500\;\textrm{nm}$), together with the magnetic field strength, inclination and azimuth angles in the line-of-sight (*los*) reference frame, the *los* velocity, and the magnetic filling factor, assuming these quantities to be constant with height. Azimuth and inclination angles have been transformed to the local solar frame, whereas the *los* velocity has been calibrated using the mean quiet-Sun intensity profile computed from pixels with $P < 0.5\%$, following the procedure of @Marti:97.
Finally, all the SOT/FG and SOT/SP images have been aligned through cross-correlation algorithms.
Results
=======
NOAA 10971 has a classical bipolar $\beta$ configuration, as can be seen in the Na I D1 magnetogram of Fig. \[fig1\]. The various SOT instruments recorded the emergence of a small bipolar region, which appeared at the internal edge of the main negative polarity. Figure \[fig2\] displays a temporal sequence of Na I D1 magnetograms with a cadence of about 20 minutes for the $8 \times 8$ Mm$^2$ area marked in Fig. \[fig1\]. The first magnetogram of the sequence, acquired at 07:50 UT, shows the presence of a positive-polarity knot. In the following magnetograms we clearly recognize an increase in its area, as well as the appearance of a negative-polarity patch. The subsequent evolution is characterized by the separation of the opposite magnetic polarities, as indicated by the arrows. The corresponding temporal sequence of Ca II H filtergrams is also displayed in Fig. \[fig2\] and shows transient brightness enhancements at the location of the positive footpoint. This emergence event led to the appearance of bright points in the G band (Fig. \[fig3\], left panel) and to intensity enhancements in the Ca II H line core (Fig. \[fig3\], right panel).
Maps of the physical parameters derived from the SP raster scans are displayed in Fig. \[fig4\]. They demonstrate the rapid evolution of the small bipolar region: the changes in, e.g., the magnetic field distribution (second and third rows) indicate a very dynamic phase. The small bipolar region shows an emergence zone, i.e., a region between the two main polarities with horizontal fields [@Lites:98] in which upflows of $\sim 1 \;\textrm{km s}^{-1}$ can be seen at 9:18 UT and 12:23 UT. The footpoints of the emerging region exhibit vertical fields and downflows of $1.5 - 2 \;\textrm{km s}^{-1}$. The initial photospheric total flux content of the emerging region is $1.4 \times 10^{19} \;\textrm{Mx}$, which classifies it as a small ephemeral region. The bipole axis was inclined about $45\degr$ to the north-south direction in the first raster scan, but this angle varied with time. The negative-polarity footpoint soon merged with the dominant negative flux of the active region, disappearing as an individual feature.
In this area we have detected chromospheric Ca II H brightness enhancements with two main peaks during the observations, each one preceded by a minor peak. As can be seen in Fig. \[fig2\], the brightenings are associated with the positive-polarity footpoint. The duration of each peak is about half an hour, with an enhancement of $\sim 80\%$ with respect to the “quiet” level. The presence of these peaks points to interactions between the newly emerging and the old pre-existing flux systems. We have computed the average intensity of the four most luminous pixels within the $8 \times 8 \;\textrm{Mm}^{2}$ FoV for the Ca II H and G-band filtergrams, respectively. Figure \[fig5\] shows the trend of the brightness in Ca II H and the G band in normalized units. The different behaviour indicates that they are not correlated, as the chromospheric brightness enhancements are much more intense than the increase observed in the G band at the same times. Thus, the observed Ca II H enhancements are genuinely due to photons coming from the chromosphere, and not to the significant photospheric contribution included in the passband of the SOT Ca II H filter [@Carlsson:07].
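The light curves in Fig. \[fig5\] follow from a simple reduction of the filtergram time series; a sketch with a placeholder data cube is given below (normalizing each curve to its own minimum is our choice for illustration, not necessarily the normalization used in the figure):

```python
import numpy as np

def bright_curve(cube, n_pix=4):
    """Average of the n_pix most luminous pixels in each frame of a
    (time, y, x) cube, normalized to the minimum of the resulting curve."""
    flat = cube.reshape(cube.shape[0], -1)
    top = np.sort(flat, axis=1)[:, -n_pix:]     # brightest pixels per frame
    curve = top.mean(axis=1)
    return curve / curve.min()

rng = np.random.default_rng(2)
ca_cube = 1.0 + 0.05 * rng.standard_normal((600, 80, 80))   # placeholder series
print(bright_curve(ca_cube)[:5])
```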
We have calculated the positive flux in the $8 \times 8$ Mm$^2$ area using the D1 magnetograms, which have a noise level of approximately $6.75 \times 10^{14} \;\textrm{Mx/pixel}$. The main contribution to the positive flux comes from the positive polarities of the emerging region. In Fig. \[fig6\] we show the flux evolution with time: the chromospheric brightness enhancements clearly correspond to an increase of positive magnetic flux in the upper photosphere. Interestingly, in both cases the maximum Ca II H intensities are reached some 30 minutes after the positive flux starts to increase.
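The positive-flux curve of Fig. \[fig6\] amounts to summing the flux of the positive pixels in the sub-field; a minimal version is sketched below (the pixel area follows from the $0.16\arcsec$ sampling with $1\arcsec \approx 725$ km, and using the quoted noise level as a threshold is our assumption):

```python
import numpy as np

PIXEL_SIZE_CM = 0.16 * 7.25e7          # 0.16 arcsec * 725 km/arcsec, in cm
PIXEL_AREA_CM2 = PIXEL_SIZE_CM**2      # ~1.3e14 cm^2
NOISE_MX = 6.75e14                     # per-pixel flux noise quoted in the text

def positive_flux(flux_density_map):
    """Total positive flux (Mx) in a flux-density map given in Mx cm^-2."""
    flux = flux_density_map * PIXEL_AREA_CM2     # Mx per pixel
    return flux[flux > NOISE_MX].sum()

rng = np.random.default_rng(3)
mag = 20.0 * rng.standard_normal((50, 50))       # placeholder 8x8 Mm^2 sub-field
print(f"positive flux: {positive_flux(mag):.2e} Mx")
```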
Taking into account the temporal coincidence between the chromospheric brightenings and the positive flux increase, as well as the spatial coincidence between the location and morphology of the Ca II H brightenings and the emerging bipolar region (Fig. \[fig2\]; compare also the second, third, and fifth rows of Fig. \[fig4\]), we conclude that the localized chromospheric heating is a consequence of the emergence and subsequent interaction of the positive flux of the new bipole, which cancels with the negative ambient magnetic field.
Conclusions
===========
Using *Hinode* filtergrams and spectropolarimetric measurements, we have studied a small-scale flux emergence event. Two peaks of chromospheric origin have been detected in the Ca II H line-core intensity almost simultaneously with a magnetic flux increase in the upper photosphere.
We suggest that the chromospheric brightness enhancements may be an indication that two different flux systems undergo magnetic reconnection: the old flux system belonging to the active region and the emerging magnetic field. The energy released in the process heats the chromosphere. The observed Ca II H brightenings are associated with a relatively modest amount of emerged magnetic flux (only $\sim 4 \times 10^{18} \;\textrm{Mx}$ compared with the total negative flux in the region of $\sim 2.5 \times 10^{19} \;\textrm{Mx}$), which points to a highly efficient heating mechanism. We conjecture that, while the positive flux increases, part of it cancels with the pre-existing negative flux, very likely in a process of magnetic reconnection.
Our result suggests that Joule dissipation may be a significant source of chromospheric heating during the reconnection of an emerging flux system with a pre-existing magnetic field. This would confirm the predictions of recent numerical simulations [e.g. @Galsgaard:05], also at small scales. Moreover, our work suggests that Ca II H brightness enhancements can be used as a valuable diagnostic of flux emergence. Further investigations should put this result on a firm observational and theoretical basis.
Financial support by the European Commission through the SOLAIRE Network (MTRN-CT-2006-035484) is gratefully acknowledged. This work has been partly funded by the Spanish Ministerio de Educación y Ciencia through projects ESP2006-13030-C06-02, PCI2006-A7-0624, and Programa de Acceso a Infraestructuras Científicas y Tecnológicas Singulares. *Hinode* is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).
Archontis, V., Moreno-Insertis, F., Galsgaard, K., Hood, A., & O’Shea, E. 2004, , 426, 1047
Archontis, V., Moreno-Insertis, F., Galsgaard, K., & Hood, A. W. 2005, , 635, 1299
Balasubramaniam, K. S. 2001, , 557, 366
Carlsson, M., et al. 2007, , 59, 663
Bellot Rubio, L. R., & Beck, C. 2005, , 626, L125
Galsgaard, K., Moreno-Insertis, F., Archontis, V., & Hood, A. 2005, , 618, L153
Galsgaard, K., Archontis, V., Moreno-Insertis, F., & Hood, A. W. 2007, , 666, 516
Guglielmino, S. L. 2008, Ph.D. Thesis
Isobe, H., Proctor, M. R. E., & Weiss, N. O. 2008, , 679, L57
Kosugi, T., et al. 2007, , 243, 3
Lites, B. W., Skumanich, A., & Martínez Pillet, V. 1998, , 333, 1053
Martínez Pillet, V., Lites, B. W., & Skumanich, A. 1997, , 474, 810
Moreno-Insertis, F., Galsgaard, K., & Ugarte-Urra, I. 2008, , 673, L211
Ruiz Cobo, B., & del Toro Iniesta, J. C. 1992, , 398, 375
Stix, M. 2002, The Sun: an Introduction (Berlin: Springer)
Tsuneta, S., et al. 2008, , 249, 167
Zuccarello, F., Battiato, V., Contarino, L., Guglielmino, S. L., Romano, P., & Spadaro, D. 2008, , 488, 1117
---
abstract: 'We introduce a robust sensor design framework to provide defense against attackers that can bypass/hijack the existing defense mechanisms. For effective control, such attackers would still need to have access to the state of the system because of the presence of plant noise. We design “affine" sensor outputs to control their perception of the system so that their adversarial intentions are not fulfilled, or their actions even inadvertently end up having a positive impact. The specific model we adopt is a Gauss-Markov process driven by a controller with a “private" malicious/benign quadratic control objective. We seek to defend against the worst possible distribution over the controllers’ objectives in a robust way. Under the solution concept of game-theoretic hierarchical equilibrium, we obtain a semi-definite programming problem equivalent to the problem faced by the sensor against a controller with an arbitrary, but known, control objective, even when the sensor has noisy measurements. Based on this equivalence relationship, we provide an algorithm to compute the optimal affine sensor outputs. Finally, we analyze the ensuing performance numerically for various scenarios.'
author:
- 'Muhammed O. Sayin and Tamer Başar, [^1]'
bibliography:
- 'ref.bib'
title: Robust Sensor Design Against Multiple Attackers with Misaligned Control Objectives
---
Stackelberg games, Stochastic control, Cyber-physical systems, Security, Advanced persistent threats, Sensor placement, Semi-definite programming.
Introduction
============
Cyber connectedness of physical systems makes them vulnerable to cyber attacks, which could have undesirable physical outcomes, e.g., damage [@ref:Giraldo17; @ref:Humayed17]. Different from the vulnerability of computer systems in a cyber network, cyber connectedness of physical systems brings in new and distinct security challenges due to the inherent physical dynamics. Cyber attacks can be very strategic while disturbing the system to achieve a certain malicious goal, and therefore they differ from external disturbances that can be modeled, e.g., statistically or within certain bounds, for robust control system design. Cyber attacks can be advanced and very target specific, learning the system’s dynamics and tuning their attack to the underlying system and the existing defensive measures for success and stealthiness. To cite a recent occurrence of such an event, in 2014, the Dragonfly malware interfered with the operation of many cyber-physical systems, e.g., process control systems, in the energy and pharmaceutical industries across the world over a long period of time without being detected [@ref:Nelson16]. Therefore, it is crucial that we develop novel security mechanisms against such attackers for the security of cyber-physical systems.
Prior Literature
----------------
In a physical system, advanced attackers can seek to evade detection mechanisms by manipulating the physical signals used by the detectors. For example, in [@ref:Liu09], the authors have introduced false data injection attacks, where the attackers can inject data into the sensor outputs, in the context of state estimation, and characterized undetectable attacks. The ensuing studies mainly focused on characterizing the vulnerabilities of control systems against such evasive attacks and designing countermeasures to be able to detect them. In [@ref:Mo09], the authors have introduced replay attacks, where the attacker records and replays the sensor outputs when the system is at steady state, since they are expected to be similar. As a countermeasure, an independent signal can be injected into the control input to detect such attacks at the expense of degradation in control performance [@ref:Mo09; @ref:Mo14]. In [@ref:Mo12; @ref:Mo16], the authors have characterized the reachable set to which an evasive attacker can drive the system by injecting data into the sensor outputs and control inputs jointly.
Within deterministic control scenarios, in [@ref:Teixeira12], the authors have analyzed open-loop stealthy attacks, and proposed to add new measurements as a counter measure, as in [@ref:Mo09]. In [@ref:Pasqualetti13], the authors have formulated the limitations of monitoring-based detection mechanisms against false data injection and replay attacks. In [@ref:Fawzi14], the authors have proposed decoding schemes to estimate the state based on sensor outputs while a subset of them could have been under attack. The attackers can also have adversarial control objectives. In [@ref:Chen16a; @ref:Chen16b], the authors have analyzed such attacks, where the attacker seeks to drive the state of the system according to his adversarial goal evasively by manipulating sensor outputs and control inputs jointly. In [@ref:Zhang17], the authors have analyzed optimal attack strategies to maximize the quadratic cost of a system with linear Gaussian dynamics without being detected. In [@ref:Miao17], the authors have proposed linear encoding schemes for sensor outputs of an LQG system in order to enhance detectability of false data injection attacks while the encoding matrix is assumed oblivious to the attackers.
Motivation
----------
Specifically, we address, in this paper, primarily the following two questions: “If we have already designed the sensor outputs, in a non-Bayesian setting, to what extent would we have secured the system against multiple types of advanced and evasive attackers who can bypass/hijack the defensive measures to fulfill a certain malicious control objective?" And “what would be the best affine sensor outputs that can deceive such attackers about the underlying state of the system so that their actions/attacks would not lead to any degradation?" We consider attackers who have malicious control objectives misaligned with the normal operation of the system, but not completely opposite to it, as in the framework of a zero-sum game. This implies that there is a part of the malicious objective that is benign. Correspondingly, the attacker would be acting in line with the normal operation of the system with respect to the aligned part of his objective. Our motivation is to restrain the attacker’s abilities so that he will not act along the misaligned part of the objectives while taking actions in line with the aligned part. To this end, we propose to design the information available to an attacker strategically, since the attacker would be making decisions to fulfill his malicious objective based on the information available to him. By designing the sensor outputs strategically, our goal is to control the attacker’s perception about the underlying system, and correspondingly to persuade the attacker (without any explicit enforcement) to fulfill the aligned part of the objectives as much as possible without fulfilling the misaligned part.
We have partially addressed this challenge in [@ref:Sayin17b; @ref:Sayin17a; @ref:Sayin18a], in non-cooperative communication and control settings. For a discrete-time Gauss Markov process, and when the sender and the receiver in a non-cooperative communication setting have misaligned quadratic objectives, in [@ref:Sayin17b], we have shown the optimality of linear signaling rules within the general class of measurable policies and provided an algorithm to compute the optimal policies numerically. Also in [@ref:Sayin17b], we have formulated the optimal linear signaling rule in a non-cooperative LQG setting when the sensor and the controller have known misaligned control objectives. In [@ref:Sayin17a; @ref:Sayin18a], we have introduced a secure sensor design framework, where we addressed the optimal linear signaling rule again in a non-cooperative LQG setting when the sensor and a private-type controller have misaligned control objectives in a Bayesian setting, i.e., the distribution over the private type of the controller is known. This paper differs from these earlier studies by addressing optimal affine signaling in a non-Bayesian setting, where the distribution over the private type of the controller is not known, in a robust way. Furthermore, here we provide a comprehensive formulation by considering also the cases where the sensor could have partial or noisy information on the signal of interest and relevance. Further details on this will be given next as part of our description of the main contributions of this work, as well as throughout the paper.
Contributions
-------------
To obtain explicit results, we specifically consider systems with linear Gaussian dynamics and quadratic control objectives, which have various industrial applications [@ref:Zhang17] from manufacturing processes to aerospace control. We consider the possibility of adversarial intervention by multiple advanced and evasive attackers across control networks. The attackers have different long-term control objectives. Due to the stochastic nature of the problem, i.e., due to the presence of state noise, no open-loop control strategy of an attacker could drive the system along his desired path effectively. Therefore, regardless of whether the controller has an adversarial objective or not, it has to generate a closed-loop control input using the designed sensor outputs. We also consider the scenarios where the advanced attackers could learn the relationship between the sensor output and the state, i.e., the designed signaling rule[^2], in order to rule out any obscurity-based defense, which can be bypassed once the advanced attacker learns the information kept in obscurity. This implies that the interaction between the sensor and the attackers could be modeled as a hierarchical dynamic game [@ref:Basar99], where the sensor leads the game by announcing its strategy in advance. Therefore, while designing the sensor outputs, we should consider the possibility of malicious or benign control inputs and defend against the worst possible distribution over them.
Specifically, we seek to determine optimal [*affine*]{} sensor strategies for controlled Gauss-Markov processes. The sensor can have partial or noisy measurements, whereas in [@ref:Sayin17b; @ref:Sayin17a; @ref:Sayin18a] the sensor is assumed to have perfect access to the underlying state. We only consider affine signaling rules, since under such rules our setting entails a classical information model, whereas without such a structural restriction on the signaling rules, the underlying model features a non-classical information structure due to the asymmetry of information between the players and the dynamic interaction through closed-loop feedback signals. Different from our previous works [@ref:Sayin17b; @ref:Sayin17a; @ref:Sayin18a], here, the follower, i.e., the attacker, has a private type while the distribution over the types is not known by the leader, i.e., the sensor. Our goal is to defend against the worst possible distribution over these types. To this end, we provide an equivalent problem faced by the sensor in terms of the covariance of the posterior estimate of the (control-free) state by formulating the necessary and sufficient conditions on that covariance matrix. This new equivalent problem is linear in the optimization argument with a compact and convex constraint set.
We emphasize that what we have is an exact equivalence relation, and not merely an equivalence in optimality as shown in [@ref:Sayin17b]. Based on this exact equivalence relation, we can provide an offline algorithm to compute the optimal affine sensor strategies. In particular, in order to determine the best signaling rule against multiple types of attackers, we introduce additional constraints on the equivalent problem, which implies that the equivalence in optimality is not a sufficient condition in that respect. We had noted in [@ref:Sayin17a; @ref:Sayin18a] that, for multiple attack types with a [*known*]{} distribution over them, the optimum is attained at an extreme point of the constraint set since the equivalent problem is linear in the optimization argument and the constraint set is compact and convex [@ref:Boyd04]. Further, the corresponding covariance matrices could be attained through certain [*linear*]{} signaling rules, where the sensor does not introduce any additional independent noise. However, when we defend against multiple types of attackers with the worst possible distribution over them as here, the new problem imposes additional linear constraints on the equivalent problem. With these new constraints, even though the optimum will be attained at the extreme points of this modified constraint set, there can be cases where the optimum may be attained only at non-extreme points of the original constraint set before the modification. Such covariance matrices could be attained through certain [*affine*]{} signaling rules, where the sensor can introduce additional independent noise.
We now list the main contributions of this paper as follows:
- We introduce a robust sensor design agent that can craft the measurements sent to the noiseless communication network in order to defend against [*multiple*]{} advanced and evasive attackers with long term control objectives that are [*misaligned*]{} with the normal operation of the system.
- We model the interaction between the sensor and the attackers, with private types, as a [*multi-stage Stackelberg game*]{}, where the sensor is the leader. Particularly, we suppose that the advanced attackers could be aware of the sensor’s strategies in order to avoid the vulnerability of obscurity based defenses.
- We provide another problem [*equivalent*]{} to the problem faced by the sensor against any known type of attacker for compact computation of robust sensor outputs against the worst possible distribution over the attacker types.
- We show the optimality of [*memoryless*]{} signaling rules within the general class of signaling rules with complete/bounded memory when the sensor has access to the underlying state of the system.
- We show that certain affine signaling rules can lead to [*any*]{} covariance of the posterior estimate of the (control-free) state in between the extremes of disclosing nothing and disclosing the measurements fully according to a certain ordering of the covariance matrices.
- We show that the optimal signaling rule may dictate that the sensor introduce additive independent noise into the sensor outputs if there are multiple types of attackers in a non-Bayesian setting.
- We provide a numerical algorithm to compute the [*optimal*]{} affine robust sensor design strategies.
- We extend the results to the cases where the sensor has partial or [*noisy*]{} information on the signal of interest and relevance.
- We examine the performance of the proposed framework for various scenarios, displaying the significant impact of strategic design of sensor outputs on the system’s performance.
The paper is organized as follows: In Section \[sec:prob\], we formulate the robust sensor design game. In Section \[sec:robust\], we analyze the equilibrium of the robust sensor design game under perfect measurements. In Section \[sec:noisy\], we extend the results to the cases where there are partial or noisy measurements. In Section \[sec:examples\], we examine numerically the performance of the proposed scheme for various scenarios. We conclude the paper in Section \[sec:conclusion\] with several remarks and possible research directions. An appendix provides two technical results in support of some analyses in the main body of the paper.
[*Notation:*]{} For an ordered set of parameters, e.g., $x_1,\ldots,x_{\kappa}$, we define $x_{k:l} := x_k,\ldots,x_l$, where $1\leq k \leq l \leq \kappa$. $\N(0,.)$ denotes the multivariate Gaussian distribution with zero mean and designated covariance. We denote random variables by bold lower case letters, e.g., $\rx$. For a random vector, e.g., $\rx$, $\cov\{\rx\}$ denotes the corresponding covariance matrix. For a vector $x$ and a matrix $A$, $x'$ and $A'$ denote their transposes, and $\|x\|$ denotes the Euclidean ($L^2$) norm of the vector $x$. For a matrix $A$, $\trace\{A\}$ denotes its trace. We denote the identity and zero matrices with the associated dimensions by $I$ and $O$, respectively. For positive semi-definite matrices $A$ and $B$, $A\succeq B$ means that $A-B$ is also a positive semi-definite matrix. $\AS^m$ (or $\AS^m_+$) denotes the set of symmetric (or positive definite) matrices of dimensions $m$-by-$m$. $A\otimes B$ denotes the Kronecker product of the matrices $A$ and $B$.
[Figure \[fig:model\]: system model with state dynamics $\rx_{k+1} = A\rx_k + B\ru_k + \rw_k$, measurements $\ry_{1:k}$, the designed signal $\rs_k = \eta_k(\ry_{1:k})$, and the control input $\ru_k= \gamma_k(\rs_{1:k})$.]
Problem Formulation {#sec:prob}
===================
Consider a cyber-physical control system, seen in Fig. \[fig:model\], whose underlying state dynamics and sensor measurements are described, respectively, by: $$\begin{aligned}
&\rx_{k+1} = A\rx_k + B\ru_k + \rw_k, \label{eq:state}\\
&\ry_k = C\rx_k + \rv_k, \label{eq:measurement}\end{aligned}$$ for $k=1,\ldots,\kappa$, where[^3] $A\in\R^{m\times m}, B\in\R^{m\times r}$, and $C\in\R^{m\times m}$, and the initial state $\rx_1\sim\N(0,\Sigma_1)$. The additive state and measurement noise sequences $\{\rw_k\}$ and $\{\rv_k\}$, respectively, are white Gaussian vector processes, i.e., $\rw_k \sim \N(0,\Sigma_w)$ and $\rv_k\sim\N(0,\Sigma_v)$; and are independent of the initial state $\rx_1$ and of each other. As seen in Fig. \[fig:model\], the signal $\rs_k\in\R^{m}$, which can be different from the measurement $\ry_k\in\R^m$, is given by the affine signaling rule: $$\begin{aligned}
\label{eq:signal}
\rs_k &= \eta_k(\ry_{1:k})\\
&= L_{k,k}'\ry_k + \ldots + L_{k,1}'\ry_{1} + \rn_k,\end{aligned}$$ where $L_{k,j}\in\R^{m\times m}$, $j=1,\ldots,k$, can be any deterministic matrix, and $\rn_k \sim \N(0,\Theta_k)$ is independent of every other parameter. Let ${\Upsilon}_k$ denote the set of such affine signaling rules from $\R^{mk}$ to $\R^m$, i.e., $\eta_k\in{\Upsilon}_k$. Furthermore, the closed-loop control input $\ru_k\in\R^r$, which is constructed by the controller located in the cyber-part of the system, is given by $$\label{eq:control}
\ru_k = \gamma_k(\rs_{1:k}),$$ almost everywhere over $\R^r$, where $\gamma_k(\cdot)$ can be any Borel measurable function from $\R^{mk}$ to $\R^r$. Let $\Gamma_k$ denote the set of all Borel measurable functions from $\R^{mk}$ to $\R^r$, i.e., $\gamma_k\in\Gamma_k$.
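For concreteness, the following minimal sketch simulates one realization of the closed loop \[eq:state\]-\[eq:control\] under a memoryless affine signaling rule and a linear control law; all matrices, covariances, and gains below are arbitrary placeholders for illustration and are not the optimal choices derived later in the paper.

```python
import numpy as np

# A minimal simulation of the closed loop (eq:state)-(eq:control) under a memoryless
# affine signaling rule and a linear control law; every matrix here is a placeholder.
rng = np.random.default_rng(0)
m, r, kappa = 2, 1, 10

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(m)
Sigma_1, Sigma_w, Sigma_v = np.eye(m), 0.1 * np.eye(m), 0.05 * np.eye(m)

L = 0.7 * np.eye(m)          # signaling gain:  s_k = L' y_k + n_k
Theta = 0.2 * np.eye(m)      # covariance of the added independent noise n_k
K = np.array([[0.3, 0.5]])   # placeholder control gain:  u_k = -K s_k (measurable in s_{1:k})

x = rng.multivariate_normal(np.zeros(m), Sigma_1)
for k in range(kappa):
    y = C @ x + rng.multivariate_normal(np.zeros(m), Sigma_v)   # measurement (eq:measurement)
    s = L.T @ y + rng.multivariate_normal(np.zeros(m), Theta)   # designed signal (eq:signal)
    u = -K @ s                                                   # closed-loop input (eq:control)
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(m), Sigma_w)  # state update (eq:state)
```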
Particularly, the dynamic system, measurement system, and actuators are located in the physical part while the controller is located in the cyber part through a connection over a noiseless communication channel. However, the connectivity over the channel is vulnerable to cyber attacks. We consider the scenarios where advanced and evasive attackers can intervene through the channel by injecting malicious control inputs to drive the underlying state according to a malicious long-term control objective and can bypass or hijack the detection-based defensive measures. Our goal is to defend against such advanced evasive attacks by [*crafting*]{} the attacker’s perceptions about the underlying state of the system strategically so that their actions/attacks would not be harmful (to the extent possible) and would be along the desired objectives. To this end, we introduce a robust-secure-sensor-design component, denoted by $\playerS$, in the physical plant that gets the sensor measurement $\ry_k$ as an input, constructs the signal $\rs_k$, given by \[eq:signal\], and feeds this signal to the noiseless communication channel. Even if there is an attacker intervening over the channel, the attacker can only access the signal $\rs_k$ related to the underlying state of the system.
Note that if the signal $\rs_k$ is memoryless, then $\rs_k\in\R^{m}$ can be written as $$\begin{aligned}
\rs_k &= \underbrace{L_{k,k}'C}_{=:\tC_k}\rx_k + \underbrace{L_{k,k}'\rv_k + \rn_k}_{=:\rtv_k}\\
&= \tC_k\rx_k + \rtv_k ,\end{aligned}$$ where $\rtv_k\sim\N(0,L_{k,k}'\Sigma_vL_{k,k} + \Theta_k)$, such that $\tC_k\in\R^{m\times m}$ is the gain matrix in the measurement while $\rtv_k$ is the white Gaussian measurement noise. Therefore, the optimal signaling rules $\eta_{1:\kappa}^*$ can provide a guideline to the engineer (or designer) for the placement of the physical sensors to monitor the underlying state of the system securely, or to assess the resiliency against attacks with long-term control objectives. $\triangle$
The normal operation of the system, i.e., when there is no adversarial intervention by the attackers, is a stochastic control setting, where the controller, denoted by $\playerC$, constructs the control inputs $\ru_k = \gamma_k(\rs_{1:k})$, with $\gamma_k\in\Gamma_k$, to minimize a finite horizon quadratic cost function given by $$\label{eq:Cobj}
\E\left\{\sum_{k=1}^{\kappa} \|\rx_{k+1}\|_Q^2 + \|\ru_k\|_R^2\right\},$$ where[^4] $Q\in\AS^m$ is positive semi-definite and $R\in\AS_+^r$ is positive definite. $\playerS$ selects the signaling rule $\eta_{1:\kappa}$ to minimize the same cost function as $\playerC$, i.e., \[eq:Cobj\], as a team. If there were no possibility of adversarial intervention, $\playerS$ could be as informative as disclosing the measurement directly to $\playerC$, since there is no cost for the disclosed information over the noiseless communication channel. However, as seen in Fig. \[fig:model\], there can be advanced and evasive adversarial interventions in the noiseless communication channel between the dynamic system and $\playerC$. We particularly consider multiple (finitely many) types of attackers who can inject malicious closed-loop control inputs with long-term control objectives. Let type-$\alpha$ attacker’s cost function be given by $$\label{eq:AAobj}
\E\left\{\sum_{k=1}^{\kappa} \|\rx_{k+1}\|_{Q_{\alpha}}^2 + \|\ru_k\|_{R_{\alpha}}^2\right\},$$ where $Q_{\alpha}\in\AS^m$ is positive semi-definite and $R_{\alpha}\in\AS_+^r$ is positive definite.
$$\begin{aligned}
&U_{\playerC}^{\omega}(\eta_{1:\kappa},\gamma_{1:\kappa}^{\omega}) := \E\Bigg\{\sum_{k=1}^{\kappa}\|\rx_{k+1}^{\omega}\|_{Q_{\omega}}^2 + \|\gamma_k^{\omega}\big(\eta_k(\ry_{1:k}^{\omega}),\ldots,\eta_1(\ry_1^{\omega})\big)\|_{R_{\omega}}^2\Bigg\}\label{eq:typeCobj}\\
&U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega}\}_{\omega\in\Omega},p) := \sum_{\omega\in\Omega} p_{\omega} \E\Bigg\{\sum_{k=1}^{\kappa}\|\rx_{k+1}^{\omega}\|_{Q}^2 + \|\gamma_k^{\omega}\big(\eta_k(\ry_{1:k}^{\omega}),\ldots,\eta_1(\ry_1^{\omega})\big)\|_{R}^2\Bigg\}\label{eq:Sobj}\\
&U_{\playerA}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega}\}_{\omega\in\Omega},p) := -\sum_{\omega\in\Omega} p_{\omega} \E\Bigg\{\sum_{k=1}^{\kappa}\|\rx_{k+1}^{\omega}\|_{Q}^2 + \|\gamma_k^{\omega}\big(\eta_k(\ry_{1:k}^{\omega}),\ldots,\eta_1(\ry_1^{\omega})\big)\|_{R}^2\Bigg\}\label{eq:Aobj}\end{aligned}$$
The attacker objective also covers the special cases where the attacker seeks to regularize the underlying state $\{\rx_k\}$ around an external process, e.g., $\{\rz^{\alpha}_k\}$, rather than the zero vector. To this end, we can consider the augmented state vector $\begin{psmallmatrix} \rx_k' & (\rz^{\alpha}_k)' \end{psmallmatrix}'$ and set the associated weight matrix in the regularization, i.e., $Q_{\alpha}$ in \[eq:AAobj\], accordingly. $\triangle$
Since our aim is to defend against advanced and evasive attackers, we consider the scenarios where there exists a hierarchy between the defender, i.e., $\playerS$, and the attackers such that each type of attacker is aware of $\playerS$’s signaling rules $\eta_{1:\kappa}$ by testing and learning the system’s dynamics once they are deployed publicly. Different from [@ref:Sayin17b; @ref:Sayin17a; @ref:Sayin18a], in this paper, $\playerS$ does not know the underlying distribution governing the attackers’ types and seeks to defend against the worst possible distribution. Particularly, $\playerS$ designs the secure sensor outputs such that $\playerS$’s cost is minimized in expectation with respect to the worst possible [*true*]{} distribution of types within a robust setting.
This can be viewed as a game, where $\playerS$ designs the signaling rule $\eta_{1:\kappa}$ within a hierarchical setting, where the controllers, with different benign/malicious types, and the adversary ($\playerA$), determining the distribution of types, are aware of the signaling rules. Therefore, $\playerS$ anticipates reactions of different types of controllers and the worst possible distribution over those types while selecting the signaling rule $\eta_{1:\kappa}$ to minimize \[eq:Sobj\]. $\triangle$
Game Model
----------
We consider a game with three players: $\playerS$, $\playerC$, and $\playerA$. $\playerC$ can have different private types. Let $\Omega$ denote the finite set of all (benign/malicious) controller types. Correspondingly, depending on the type $\omega\in\Omega$, $\playerC$ selects the control rule $\gamma_{k}^{\omega}\in\Gamma_k$. $\playerS$ designs the signaling rule $\eta_{1:\kappa}$ to minimize the expected cost, where the expectation is taken over all the randomness (due to the initial state, and state and measurement noises), and the distribution of types, determined by $\playerA$. Let $p:=\{p_{\omega}\}_{\omega\in\Omega}$ denote the probabilities of types $\omega\in\Omega$. Then, type-$\omega$ $\playerC$’s, $\playerS$’s, and $\playerA$’s cost functions are given by \[eq:typeCobj\], \[eq:Sobj\], and \[eq:Aobj\], respectively, where we represent the dependence of the state on the controller’s type, $\omega$, due to the control input in the state recursion explicitly through $\rx_k^{\omega}$ (and the signal $\ry_k^{\omega}$) instead of $\rx_k$ (and $\ry_k$). Then, the robust sensor design game is defined as follows:
The robust sensor design game $$\calG := \left({\Upsilon}_{1:\kappa},\Gamma_{1:\kappa},\Delta^{|\Omega|},\rx_1,\rw_{1:\kappa},\rv_{1:\kappa}\right)$$ is a multi-stage Stackelberg game [@ref:Basar99] between $\playerS$, $\playerC$, and $\playerA$, with the following parameters:
- $\kappa\in\Z$: denotes the number of stages, i.e., length of the horizon,
- $\rx_1\sim\N(0,\Sigma_1)$: denotes the initial state,
- $\{\rw_k\sim\N(0,\Sigma_w)\}$: denotes the white state noise process independent of other parameters,
- $\{\rv_k\sim\N(0,\Sigma_v)\}$: denotes the white measurement noise process independent of other parameters.
In this hierarchical setting, $\playerS$ is the leader, who announces (and commits to) his strategies beforehand, while $\playerC$ and $\playerA$ are the followers, reacting to the leader’s announced strategies. $\playerC$’s type is drawn according to $\playerA$’s action $p\in\Delta^{|\Omega|}$ and his strategy space is $\Gamma_k$ at stage $k$. $\playerS$’s strategy space is ${\Upsilon}_k$ at each stage $k$ and $\playerA$’s action space is the simplex $\Delta^{|\Omega|}$. Objectives of $\playerC$, $\playerS$, and $\playerA$ are given by \[eq:typeCobj\], \[eq:Sobj\], and \[eq:Aobj\], respectively. The tuple of strategies $(\eta_{1:\kappa}^*,\{\gamma_{1:\kappa}^{\omega*}\}_{\omega\in\Omega},p^*)$ attains the Stackelberg equilibrium provided that
\[eq:equilibrium\] $$\begin{aligned}
&\eta_{1:\kappa}^* = \argmin\limits_{\substack{\eta_k\in{\Upsilon}_k\\k=1,\ldots,\kappa}} U_{\playerS}\big(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p^*(\eta_{1:\kappa})\big)\\
&\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa}) = \argmin\limits_{\substack{\gamma_k^{\omega}\in\Gamma_k\\ k =1,\ldots,\kappa}} U_{\playerC}^{\omega}\big(\eta_{1:\kappa},\gamma_{1:\kappa}^{\omega}(\eta_{1:\kappa})\big),\label{eq:gammaStar}\\
&p^*(\eta_{1:\kappa}) = \argmin_{p\in\Delta^{|\Omega|}} U_{\playerA}\big(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p(\eta_{1:\kappa})\big)\end{aligned}$$
where, with abuse of notation, we denote type-$\omega$ ’s strategy $\gamma_k^{\omega}$ by $\gamma_{k}^{\omega}(\eta_{1:k})$ to show the dependence of type-$\omega$ ’s strategies on ’s signaling rules due to the hierarchy, explicitly, and we define $\gamma_{1:\kappa}^{\omega}(\eta_{1:\kappa}) := \{\gamma_1^{\omega}(\eta_1),\ldots,\gamma_{\kappa}^{\omega}(\eta_{1:\kappa})\}$.
The reaction set of type-$\omega$ $\playerC$ is an equivalence class such that all $\gamma_{1:\kappa}^{\omega*}$ in the reaction set lead to the same control input $\ru_k^{\omega*}$ almost surely under certain convexity assumptions, e.g., $R_{\omega}\succ O$ is positive definite, which will be shown in detail later. Furthermore, the reaction set of $\playerA$ is also an equivalence class due to the zero-sum relation between the cost functions $U_{\playerS}$ and $U_{\playerA}$. $\triangle$
Given $\playerS$’s signaling rule, $\playerC$’s objective $U_{\playerC}^{\omega}$, for $\omega\in\Omega$, is [*decoupled*]{} from the other follower $\playerA$’s action, while $\playerA$’s objective $U_{\playerA}$ depends on the strategies of the different types of $\playerC$. Therefore, this can be viewed as a sequential optimization between the two followers, where $\playerC$ first selects his strategy optimizing his objective \[eq:typeCobj\] given the leader’s strategy, and then $\playerA$ takes an action corresponding to the leader’s strategy and $\playerC$’s optimal reaction to optimize his objective \[eq:Aobj\]. $\triangle$
Robust Sensor Design Framework {#sec:robust}
==============================
We first assume, in this section, that $\playerS$ has access to perfect measurements, i.e., $\ry_k = \rx_k$ for $k=1,\ldots,\kappa$; the general noisy/partial measurements case will be addressed later in Section \[sec:noisy\]. Then, given $p\in\Delta^{|\Omega|}$, $\playerS$ faces the following optimization problem: $$\min\limits_{\substack{\eta_k\in{\Upsilon}_k\\k=1,\ldots,\kappa}} U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p),$$ where $\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})$ is given by \[eq:gammaStar\]. This is a highly nonlinear and non-convex problem. However, the following theorem provides an equivalent semi-definite programming (SDP) problem, which is not limited to equivalence in optimality as in [@ref:Sayin17b], so that we can address the equilibrium for the robust sensor design game $\calG$ in a compact way.
\[theorem:equivalent\] Let the convex and compact set $\Psi$ be defined as $$\label{eq:Psi}
\Psi := \left\{(S_k\in\AS^m)_{k=1}^{\kappa}\,|\, \Sigma_k^o \succeq S_k \succeq AS_{k-1}A', k=1,\ldots,\kappa, S_0 = O\right\},$$ where $\Sigma_k^o\in\AS^m$ is the covariance matrix of the control-free process: $$\rx_{k+1}^o = A\rx_k^o + \rw_k\mbox{ and }\rx_1^o = \rx_1,\label{eq:free}$$ i.e., $\rx_k^o\sim\N(0,\Sigma_k^o)$ and[^5] $\Sigma_k^o := \E\{\rx_k^o(\rx_k^o)'\}$. Then, given $p\in\Delta^{|\Omega|}$, for any signaling rule $\eta_{k}\in{\Upsilon}_k$, for $k=1,\ldots,\kappa$, there exists $S_{1:\kappa}\in\Psi$ such that
\[eq:toLeft\] $$\begin{aligned}
&U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p)\label{eq:left} \\
&\hspace{.4in}= \sum_{k=1}^{\kappa} \trace\left\{S_k\left(\sum_{\omega\in\Omega}p_{\omega}V_k(\omega)\right)\right\} + v_o, \label{eq:right}\end{aligned}$$
where $V_{1:\kappa}(\omega)$ and $v_o$ are deterministic parameters that do not depend on $S_{1:\kappa}$ and are derived in Appendix \[app:V\]. Furthermore, for any $S_{1:\kappa}\in\Psi$, there exists a signaling rule $\eta_{1:\kappa}$ such that $$\begin{aligned}
\label{eq:toRight}
\sum_{k=1}^{\kappa} \trace\left\{S_k\left(\sum_{\omega\in\Omega}p_{\omega}V_k(\omega)\right)\right\} + v_o= U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p). \end{aligned}$$
The proof proceeds as follows: $i)$ we first show that the objective function can be written as a linear function of the covariance of the posterior control-free state, i.e., $$H_k := \cov\{\E\{\rx_k^o|\rs_{1:k}\}\},$$ for $k=1,\ldots,\kappa$; $ii)$ we then identify the necessary condition on $H_{1:\kappa}$, which is indeed the constraint set $\Psi$, i.e., $H_{1:\kappa}\in\Psi$; and finally $iii)$ we show that the constraint set $\Psi$ is also a sufficient condition for $H_{1:\kappa}$ since any point in $\Psi$ can be attained through a certain affine signaling rule. We now provide details of each of these steps.
[*Step $i)$*]{} Our first goal is to isolate the underlying state from the control input by completing the squares in the cost functions and through change of variables, which yields the result that given positive semi-definite $Q_{\omega}\in\AS^{m}$ and positive definite $R_{\omega}\in\AS^{r}$ corresponding to the type-$\omega$ controller, we have $$\E\left\{\sum_{k=1}^{\kappa}\|\rx_{k+1}\|_{Q_{\omega}}^2 + \|\ru_k\|_{R_{\omega}}^2\right\} = \sum_{k=1}^{\kappa} \E\left\{\|\ru_k^o + K_k^{\omega} \rx_k^o\|_{\Delta_k^{\omega}}^2\right\} + \Delta_0^{\omega},\label{eq:control-free}$$ where $K_k^{\omega}\in\R^{r\times m}, \Delta_k^{\omega}\in\AS_{+}^m$, for $k=1,\ldots,\kappa$, and $\Delta_0^{\omega}\in\R$ are derived in Appendix \[app:computationA\]; and $\{\rx_k^o\}$ is the control-free process defined in and $$\begin{aligned}
\ru_k^o = \ru_k + K_k^{\omega}B\ru_{k-1} + \ldots + K_k^{\omega}A^{k-2}B\ru_1.\label{eq:transformed}\end{aligned}$$
Different from the cooperative settings, \[eq:control-free\] does [*not*]{} imply that the optimal transformed control input is $\ru_k^o = -K_k^{\omega}\E\{\rx_k^o|\rs_{1:k}\}$ since $\rs_k$ depends on previous control inputs $\ru_{1:k-1}$, as also pointed out in [@ref:Sayin17b]. Therefore, we cannot claim optimality of linear signaling rules within the general class of all measurable policies. $\triangle$
However, for affine signaling rules, e.g., $\rs_k = L_{k,k}'\rx_k + \ldots + L_{k,1}'\rx_{1}+ \rn_k$, we have $$\begin{aligned}
\E\{\rx_k^o | L_{1,1}'\rx_1 + \rn_1,\cdots,L_{k,k}'\rx_k + \ldots + L_{k,1}'\rx_{1}+\rn_k\} = \E\{\rx_k^o|L_{1,1}'\rx_1^o+\rn_1,\cdots,L_{k,k}'\rx^o_k + \ldots + L_{k,1}'\rx^o_{1}+\rn_k\},\label{eq:condexp}\end{aligned}$$ which holds since the signal $\rs_l$ for $l=1,\ldots,k$ can be written as $$\begin{aligned}
\label{eq:measurable}
L_{l,l}'\rx_l + \ldots + L_{l,1}'\rx_{1} + \rn_l = L_{l,l}'\rx_l^o + \ldots + L_{l,1}'\rx_{1}^o + \rn_l + \Big\{L_{l,l}'B\ru_{l-1}+\ldots+(L_{l,l}'A^{l-2} + \ldots + L_{l,2}')B\ru_{1}\Big\},\end{aligned}$$ where the term in-between $\{\cdot\}$ is $\sigma$-$\rs_{1:l-1}$ measurable since the control input is given by \[eq:control\]. This yields that the optimal transformed control input is given by $\ru_k^o = - K_k^{\omega}\E\{\rx_k^o|\rs_{1:k}\}$ since $\E\{\rx_k^o|\rs_{1:k}\}$ does not depend on $\ru_{1:k}$. Then, the corresponding optimal (original) control input $\ru_k^*$ can be computed based on \[eq:transformed\], and $\ru_k^*$ is linear in $\E\{\rx_k^o|\rs_{1:k}\}$ while $\E\{\rx_k^o|\rs_{1:k}\}$ does not depend on the type of the controller.
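As a small sketch of this step, the helper below (a hypothetical interface, written only for illustration) inverts \[eq:transformed\] to recover the original control inputs from the transformed ones; the gains $K_k^{\omega}$, derived in Appendix \[app:computationA\], are treated here as given inputs.

```python
import numpy as np

def original_controls(u_o, K, A, B):
    """Invert (eq:transformed): u_k = u_k^o - K_k (B u_{k-1} + A B u_{k-2} + ... + A^{k-2} B u_1).
    u_o : list of transformed inputs u_1^o, ..., u_kappa^o (each of shape (r,))
    K   : list of gains K_1^w, ..., K_kappa^w (from Appendix app:computationA, assumed given)"""
    u = []
    for k, uo_k in enumerate(u_o):                 # index k corresponds to stage k + 1
        if k == 0:
            u.append(uo_k)
            continue
        corr = sum(np.linalg.matrix_power(A, k - 1 - j) @ B @ u[j] for j in range(k))
        u.append(uo_k - K[k] @ corr)
    return u
```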
The result in Step $i)$ would also hold if the controllers have objectives (other than \[eq:AAobj\]), e.g., with certain additional soft constraints, leading to optimal control inputs that are linear functions of $\E\{\rx_k^o|\rs_{1:k}\}$. $\triangle$
Due to the linearity of the controllers’ reactions in $\E\{\rx_k^o|\rs_{1:k}\}$, the quadratic objective can be written as $$\label{eq:short}
\sum_{k=1}^{\kappa} \trace\left\{H_k \left(\sum_{\omega\in\Omega}p_{\omega}V_k(\omega)\right)\right\} + v_o$$ where $V_{1:\kappa}(\omega)\in\AS^m$, for $\omega\in\Omega$, and $v_o\in\R$ are derived in Appendix \[app:V\].
[*Step $ii)$*]{} The covariance of the posterior control-free state $H_k\in\AS^m$ can be in-between two extremes: $\Sigma_k^o$ corresponding to full disclosure of the state, i.e., $\rs_k=\rx_k$; and $\E\{\E\{\rx_k^{o}|\rs_{1:k-1}\}\E\{\rx_k^{o}|\rs_{1:k-1}\}'\} = AH_{k-1}A'$ corresponding to sharing nothing, i.e., $\rs_k = 0$ [@ref:Sayin17b]. The inequality $$\Sigma_k^o\succeq H_k \succeq AH_{k-1}A'\label{eq:sub}$$ follows from the following covariance matrices: $$\begin{aligned}
&\cov\{\rx_k^o - \E\{\rx_k^o|\rs_{1:k}\}\} = \Sigma_k^o - H_k\succeq O,\\
&\cov\{\E\{\rx_k^o|\rs_{1:k}\} - \E\{\rx_k^o|\rs_{1:k-1}\}\} = H_k - AH_{k-1}A'\succeq O,\end{aligned}$$ since for arbitrary random variables $\ra$ and $\rb$, $$\begin{aligned}
\E\{\ra\E\{\ra|\rb\}\} = \E\{\E\{\ra|\rb\}\E\{\ra|\rb\}\}.\end{aligned}$$ We then arrive at \[eq:sub\] based on these relations.
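The ordering above can also be checked numerically. The sketch below does so under a memoryless linear rule $\rs_k = L'\rx_k + \rn_k$, using the recursion for $H_k$ that appears in Step $iii)$ below; the matrices are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Numerical check of Sigma_k^o >= H_k >= A H_{k-1} A' under a memoryless linear rule
# s_k = L' x_k + n_k; all matrices below are illustrative placeholders.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
Sigma_1, Sigma_w = np.eye(2), 0.1 * np.eye(2)
L, Theta = np.diag([1.0, 0.3]), 0.2 * np.eye(2)
kappa = 8

Sigma_o, H_prev = Sigma_1, np.zeros((2, 2))
for k in range(kappa):
    P = Sigma_o - A @ H_prev @ A.T                                   # Sigma_k^o - A H_{k-1} A'
    H = A @ H_prev @ A.T + P @ L @ np.linalg.pinv(L.T @ P @ L + Theta) @ L.T @ P
    assert np.all(np.linalg.eigvalsh(Sigma_o - H) > -1e-9)           # Sigma_k^o >= H_k
    assert np.all(np.linalg.eigvalsh(H - A @ H_prev @ A.T) > -1e-9)  # H_k >= A H_{k-1} A'
    Sigma_o, H_prev = A @ Sigma_o @ A.T + Sigma_w, H
```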
[*Step $iii)$*]{} In order to show \[eq:toRight\], we will be using the following lemma from [@ref:Sayin18e] to address the cases when $\Sigma_k^o - A\Sigma_{k-1}^oA' = \Sigma_w\succeq O$ can be singular.
\[lem:outsider0\] If we can partition a positive semi-definite matrix into blocks such that a block at the diagonal is a zero matrix, then we have $$\begin{bmatrix} A & B \\ B' & O \end{bmatrix} \succeq O \Leftrightarrow A\succeq O \mbox{ and } B = O.$$
Based on Lemma \[lem:outsider0\], the following lemma shows that any point in $\Psi$ can be attained by a certain affine signaling rule.
\[lem:affine\] Consider any $S_{1:\kappa}\in\Psi$, and let $$\Sigma_k^o -AS_{k-1}A' = \bU_k\begin{bmatrix} \bLambda_k & O \\ O & O\end{bmatrix}\bU_k'$$ be the eigen-decomposition such that $\bLambda_k \succ O$. Let $$\begin{aligned}
T_k:= \begin{bmatrix} \bLambda_k^{-1/2}& O \end{bmatrix} \bU_k' (S_k-AS_{k-1}A')\bU_k \begin{bmatrix} \bLambda_k^{-1/2} \\ O \end{bmatrix}\end{aligned}$$ have the eigen-decomposition $T_k = U_k\Lambda_kU_k'$ with eigenvalues[^6] $\lambda_{k,1},\ldots,\lambda_{k,t_k} \in [0,1]$, where $t_k = \rank\{\Sigma_k^o -AS_{k-1}A'\}$. Then, there exists a memoryless affine signaling rule $$\label{eq:affineSolution}
\rs_k = L_k' \rx_k + \rn_k,\;\mbox{for}\; k=1,\ldots,\kappa,$$ where $\rn_k\sim\N(0,\Theta_k)$ and $\Theta_k = \mathrm{diag}\{\theta_{k,1}^2,\ldots,\theta_{k,t_k}^2,0,\ldots,0\}$, and $L_k$ is given by $$L_k = \bU_k\begin{bmatrix} \bLambda_k^{-1/2} U_k\Lambda_k^o & O \\ O & O \end{bmatrix}$$ where $\Lambda_k^o := \mathrm{diag}\{\lambda_{k,1}^o,\ldots,\lambda_{k,t_k}^o\}$ and $$\frac{(\lambda_{k,i}^o)^2}{(\lambda_{k,i}^o)^2 + \theta_{k,i}^2} = \lambda_{k,i} \in [0,1], \forall\, i=1,\ldots,t_k,$$ which leads to $S_{1:\kappa} = H_{1:\kappa}$.
The proof follows by induction. If $S_{1:\kappa}\in\Psi$, then $S_1\in\AS^m$ satisfies $$\label{eq:subConst1}
\Sigma_1^o \succeq S_1 \succeq O,$$ where $\Sigma_1^o\succeq O$ can be singular. Let $\Sigma_1^o = \bU_1 \begin{bmatrix} \bLambda_{1} & O \\ O & O\end{bmatrix}\bU_1'$ be the eigen-decomposition such that $\bLambda_{1} \succ O$. Then, we have $$\begin{bmatrix} \bLambda_{1} & O \\ O & O\end{bmatrix} \succeq \bU_1' S_1 \bU_1 \succeq O,$$ which implies that $$\begin{bmatrix} \bLambda_{1} & O \\ O & O\end{bmatrix} - \begin{bmatrix} M_{1,1} & M_{1,2} \\ M_{2,1} & M_{2,2} \end{bmatrix} \succeq O,\label{eq:this}$$ where we let $\bU_1' S_1 \bU_1 = \begin{bmatrix} M_{1,1} & M_{1,2} \\ M_{2,1} & M_{2,2} \end{bmatrix}$ be the corresponding partitioning. Note that since $\bU_1' S_1 \bU_1 \succeq O$, we have $M_{2,2}\succeq O$ [@ref:Horn85]. However, the bottom-right block of the positive semi-definite matrix (the whole term) on the left-hand-side of the inequality \[eq:this\], i.e., $-M_{2,2}$, must also be a positive semi-definite matrix, which implies $O\succeq M_{2,2}$. Therefore we have $M_{2,2} = O$ and Lemma \[lem:outsider0\] yields that there exists a symmetric matrix $T_1\in\AS^{t_1}$, where $t_1:= \rank\{\Sigma_1^o\}$, such that $$\label{eq:ST}
S_1 = \bU_1 \begin{bmatrix} \bLambda_{1}^{1/2} T_1 \bLambda_{1}^{1/2} & O \\ O & O \end{bmatrix}\bU_1'.$$ Note that there exists a bijective relation between $S_1\in\AS^m$ and $T_1\in\AS^{t_1}$. Furthermore, \[eq:subConst1\] and \[eq:ST\] imply that $$I \succeq T_1 \succeq O,$$ and $T_1\in\AS^{t_1}$ has eigenvalues in the closed interval $[0,1]$ since the eigenvalues of $I$, i.e., the vector $\mathbf{1}\in\R^{t_1}$, weakly majorize the eigenvalues of $T_1$ from below [@ref:Horn85]. Let $T_1 = U_1\Lambda_1U_1'$ be the eigen-decomposition and $\lambda_{1,1},\ldots,\lambda_{1,t_1}\in[0,1]$ be the associated eigenvalues.
Furthermore, consider the affine signaling rule $\rs_1 = L_1'\rx_1 + \rn_1$, where $\rn_1\sim\N(0,\Theta_1)$ is independent of all the other parameters. Then, the covariance of the posterior control-free state is given by $$H_1 = \Sigma_1^oL_1(L_1'\Sigma_1^oL_1 + \Theta_1)^{\dagger}L_1'\Sigma_1^o.$$ If we set $L_1 = \bU_1 \begin{bmatrix} \bLambda^{-1/2}_1U_1\Lambda_1^o & O \\ O & O \end{bmatrix}$ and $\Theta_1 \succeq O$ such that $$\Lambda_1^o := \begin{bmatrix} \lambda_{1,1}^o & & \\ & \ddots& \\ & & \lambda_{1,t_1}^o\end{bmatrix}, \Theta_1 := \begin{bmatrix} \theta_{1,1}^2 & & & \\ &\ddots & & \\ & & \theta_{1,t_1}^2 & \\ & & & \ddots \end{bmatrix}$$ and $$\frac{(\lambda_{1,i}^o)^2}{(\lambda_{1,i}^o)^2 + \theta_{1,i}^2} = \lambda_{1,i} \in [0,1], \mbox{ for } i=1,\ldots,t_1,$$ then, we would obtain $H_1 = S_1$ exactly.
Suppose that $H_j = S_j$ for $j<k$. Then, $S_k\in\AS^m$ satisfies $$\Sigma^o_k \succeq S_k \succeq AS_{k-1}A',$$ which is equivalent to $$\label{eq:asd}
\Sigma^o_k \succeq S_k \succeq AH_{k-1}A'.$$ Correspondingly, $\Sigma_k^o - AH_{k-1}A'\succeq O$ can be singular. Let $\Sigma_k^o - AH_{k-1}A' = \bU_k \begin{bmatrix} \bLambda_k & O \\ O & O\end{bmatrix}\bU_k'$ be the eigen-decomposition such that $\bLambda_k \succ O$. Then, we have $$\begin{bmatrix} \bLambda_k & O \\ O & O \end{bmatrix} \succeq \bU_k'(S_k-AH_{k-1}A')\bU_k \succeq O$$ and correspondingly Lemma \[lem:outsider0\] yields that there exists a symmetric matrix $T_k\in\AS^{t_k}$, where $t_k := \rank\{\Sigma_k^o - AH_{k-1}A'\}$, such that $$\label{eq:asd2}
S_k = AH_{k-1}A' + \bU_k \begin{bmatrix} \bLambda_k^{1/2} T_k \bLambda_k^{1/2} & O \\ O & O \end{bmatrix}\bU_k'.$$ Furthermore, \[eq:asd\] and \[eq:asd2\] yield that $$I\succeq T_k \succeq O,$$ which implies that $T_k\in\AS^{t_k}$ has eigenvalues in the closed interval $[0,1]$. Let $T_k = U_k\Lambda_kU_k'$ be the eigen-decomposition and $\lambda_{k,1},\ldots,\lambda_{k,t_k}\in[0,1]$ be the associated eigenvalues.
Furthermore, for the affine signaling rule $\rs_k = L_k'\rx_k + \rn_k$, where $\rn_k\sim\N(0,\Theta_k)$ is independent of all the other parameters, the covariance of the posterior control-free state is given by $$\begin{aligned}
H_k = AH_{k-1}A' + (\Sigma_k^o - AH_{k-1}A')L_k(L_k'(\Sigma_k^o - AH_{k-1}A')L_k + \Theta_k)^{\dagger}L_k'(\Sigma_k^o - AH_{k-1}A'),\end{aligned}$$ which follows since $$\begin{aligned}
\cov\{\E\{\rx_k^o|\rs_{1:k}\}\} = \cov\{\E\{\rx_k^o|\rs_{1:k-1}\}\} + \cov\{\E\{\rx_k^o | \rs_k - \E\{\rs_k|\rs_{1:k-1}\}\}\},\end{aligned}$$ due to the independence of the jointly Gaussian $\rs_{1:k-1}$ and $\rs_k-\E\{\rs_k|\rs_{1:k-1}\}$. If we set $L_k = \bU_k\begin{bmatrix} \bLambda_k^{-1/2}U_k\Lambda_k^o & O \\ O & O \end{bmatrix}$ and $\Theta_k\succeq O$ such that $$\frac{(\lambda_{k,i}^o)^2}{(\lambda_{k,i}^o)^2 + \theta_{k,i}^2} = \lambda_{k,i} \in [0,1], \mbox{ for } i=1,\ldots,t_k,$$ then, we would obtain $H_k = S_k$ exactly. Therefore, by induction, we conclude that for any $S_{1:\kappa}\in\Psi$, there exists a certain affine signaling rule such that $H_k = S_k$ for $k=1,\ldots,\kappa$.
When $\playerS$ has perfect measurements, i.e., $\ry_k = \rx_k$, the optimal signaling rules can be memoryless affine policies within the general class of affine policies with complete/bounded memory. $\triangle$
Lemma \[lem:affine\] implies the equality at \[eq:toRight\], which completes the proof of Theorem \[theorem:equivalent\].
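A minimal numerical sketch of the single-stage construction used in this proof is given below: given $P = \Sigma_k^o - AH_{k-1}A'$ and a target increment $D = S_k - AH_{k-1}A'$ with $P\succeq D\succeq O$, it returns a pair $(L_k,\Theta_k)$ realizing that increment. The particular choice $\lambda_{k,i}^o = \lambda_{k,i}$ and $\theta_{k,i}^2 = \lambda_{k,i}(1-\lambda_{k,i})$ is one admissible way to satisfy the ratio condition in Lemma \[lem:affine\], and the random test instance is purely illustrative.

```python
import numpy as np

def one_step_signal(P, D, tol=1e-10):
    """Construction of Lemma lem:affine for one stage: P = Sigma_k^o - A H_{k-1} A',
    D = S_k - A H_{k-1} A' with P >= D >= 0. Returns (L, Theta) of an affine rule
    s_k = L' x_k + n_k, n_k ~ N(0, Theta), realizing the target increment D."""
    m = P.shape[0]
    lam_bar, U_bar = np.linalg.eigh(P)
    keep = lam_bar > tol                                   # rank-t support of P
    Lb, Ub = lam_bar[keep], U_bar[:, keep]
    T = np.diag(Lb**-0.5) @ Ub.T @ D @ Ub @ np.diag(Lb**-0.5)
    lam, U = np.linalg.eigh(T)                             # eigenvalues lie in [0, 1]
    lam = np.clip(lam, 0.0, 1.0)
    lam_o = lam                                            # choose lambda_i^o = lambda_i ...
    theta2 = lam * (1.0 - lam)                             # ... so (lam_o)^2/((lam_o)^2+theta^2) = lam
    t = Lb.size
    L, Theta = np.zeros((m, m)), np.zeros((m, m))
    L[:, :t] = Ub @ np.diag(Lb**-0.5) @ U @ np.diag(lam_o)
    Theta[:t, :t] = np.diag(theta2)
    return L, Theta

# Sanity check on a random instance (illustrative only).
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)); P = M @ M.T                   # P > 0
Rt = np.linalg.cholesky(P)
D = Rt @ np.diag(rng.uniform(0, 1, size=3)) @ Rt.T         # P >= D >= 0 by construction
L, Theta = one_step_signal(P, D)
H_inc = P @ L @ np.linalg.pinv(L.T @ P @ L + Theta) @ L.T @ P
assert np.allclose(H_inc, D, atol=1e-6)
```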
Henceforth, we will be working with the equivalent form \[eq:right\] instead of \[eq:left\] while analyzing the equilibrium of the game $\calG$.
[*New notation for compact presentation:*]{} Let $$S := \begin{bmatrix} S_{\kappa} & & \\ & \ddots & \\ & & S_1 \end{bmatrix},\; V(\omega) := \begin{bmatrix} V_{\kappa}(\omega) & & \\ & \ddots & \\ & & V_1(\omega) \end{bmatrix},$$ and let $\bPsi \subset \AS^{m\kappa}$ be the set corresponding to the constraint set $\Psi$ in this new high dimensional space, i.e., $\R^{m\kappa \times m\kappa}$. Furthermore, let $V_i = V(\omega_i)$ and $p_i := p_{\omega_i}$, where $i\in\calI$ and $\calI$ is a certain index set of the type set $\Omega$.
Based on Theorem \[theorem:equivalent\], at the Stackelberg equilibrium, where $\playerS$ is the leader, $\playerS$ faces the following problem: $$\begin{aligned}
\label{eq:newProb}
\min_{S\in\bPsi}\max_{p\in\Delta^{|\Omega|}}\;\trace\left\{S\sum_{i\in\calI}p_iV_i\right\} + v_o,\end{aligned}$$ since $\playerC$ reacts to the committed signaling rule $\eta_{1:\kappa}$ and correspondingly $\playerA$ reacts to $S\in\bPsi$. \[eq:newProb\] can also be written as $$\label{eq:maxmin}
\min_{S\in\bPsi}\max_{p\in\Delta^{|\Omega|}}\; \sum_{i\in\calI}p_i\trace\left\{SV_i\right\} + v_o.$$
The following proposition addresses the existence of an equilibrium for $\calG$.
\[prop:existence\] There exists at least one tuple of pure actions $(\eta_{1:\kappa}^*,\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega}, p^*(\eta_{1:\kappa}))$ attaining the equilibrium of the Stackelberg game $\calG$, i.e., satisfying \[eq:equilibrium\].
The proof follows from the equivalence between \[eq:left\] and \[eq:right\], which yields that $\playerS$ faces \[eq:newProb\]. Since the objective function in \[eq:newProb\] is continuous in the optimization arguments and the constraint sets are decoupled and compact, the extreme value theorem and the maximum theorem (showing the continuity of parametric maximization under certain conditions [@ref:Ok07]) yield the existence of a solution to \[eq:newProb\], which completes the proof.
The objective function in \[eq:newProb\] is linear in the optimization argument $S\in\bPsi$, and the constraint set $\bPsi$ is compact and convex. Therefore, given $p\in\Delta^{|\Omega|}$, the solution could be attained at a certain extreme point of $\bPsi$. However, the following function: $$\max_{p\in\Delta^{|\Omega|}}\sum_{i\in\calI} p_i \trace\{SV_i\}$$ is convex in $S\in\bPsi$ since the maximum of any family of linear functions is a convex function [@ref:Boyd04]. Particularly, for $\mu\in[0,1]$, we have $$\begin{aligned}
\mu\max_{p\in\Delta^{|\Omega|}}\trace\left\{S\sum_{i}p_iV_i\right\}+ (1-\mu)\max_{p\in\Delta^{|\Omega|}}\trace\left\{\bS\sum_{i}p_{i}V_i\right\} \geq \max_{p\in\Delta^{|\Omega|}}\trace\left\{(\mu S+(1-\mu)\bS)\sum_{i}p_iV_i\right\}.\end{aligned}$$ Therefore, the solution can be a non-extreme point of the constraint set $\bPsi$. Correspondingly, Lemma \[lem:affine\] implies that the optimal signals would be affine in the underlying state rather than linear, i.e., there can be an additional independent noise term $\rn_k\sim\N(0,\Theta_k)$ with $\Theta_k\neq O$.$\triangle$
Next, we seek to compute the equilibrium of $\calG$. To this end, we examine the equilibrium conditions further. In particular, according to \[eq:maxmin\], given $S\in\bPsi$, the best action for $\playerA$ is given by $$\begin{aligned}
\label{eq:postar}
p^{*} \in \Big\{p\in\Delta^{|\Omega|}\,|\,p_j = 0 \mbox{ if } \trace\{V_jS\}<\max_{i}\trace\{V_iS\}\Big\}\end{aligned}$$ since \[eq:maxmin\] is linear in $p\in\Delta^{|\Omega|}$. Then, based on the observation \[eq:postar\], the following theorem provides an algorithm to compute the robust sensor outputs.
\[theorem:compute\] The value of the Stackelberg equilibrium is given by $\vartheta = \min_{j\in\calI}\{\vartheta_j\}$, where $$\begin{aligned}
\label{eq:vj}
\vartheta_j := \min_{S\in\bPsi}&\; \trace\big\{V_jS\big\} + v_o \\
\mathrm{s.t. }& \;\trace\big\{(V_j-V_i)S\big\} \geq 0\; \forall i\in\calI.\nn\end{aligned}$$
Furthermore, let $\vartheta_{j^*} = \vartheta$ and $$\begin{aligned}
S^*\in \argmin_{S\in\bPsi}&\; \trace\big\{V_{j^*}S\big\} + v_o \label{eq:Sstarj} \\
\mathrm{s.t. }& \;\trace\big\{(V_{j^*}-V_i)S\big\} \geq 0\; \forall i\in\calI.\nn\end{aligned}$$ Then, given $S^*\in\bPsi$, the optimal signaling rule $\eta_{1:\kappa}$ can be computed according to \[eq:affineSolution\] from Lemma \[lem:affine\].
Based on the existence result in Proposition \[prop:existence\], suppose that $(S^*,p^*)$ attains the Stackelberg equilibrium, i.e., solves \[eq:maxmin\]. Since $p^*\in\Delta^{|\Omega|}$, there must be at least one type with positive weight. As an example, suppose positive weight for the type $\omega_j\in\Omega$, i.e., $p_j>0$. This implies that $$\trace\{V_jS^*\} \geq \trace\{V_iS^*\}\;\forall i$$ since $\trace\{V_jS^*\} = \max_{i\in\calI} \trace\{V_iS^*\}$ by \[eq:postar\]. Furthermore, this also implies that $$\trace\{V_jS^*\} = \sum_{i\in\calI}p_{i}^{*}\,\trace\{V_iS^*\}$$ since if $p_{i}^*>0$, then we have $$\trace\{V_jS^*\} = \trace\{V_iS^*\}.$$ These necessary conditions yield that $$\begin{aligned}
\min_{S\in\bPsi}\max_{p\in\Delta^{|\Omega|}}\;\sum_{i\in\calI} \trace\{V_iS\}p_i = \min_{S\in\bPsi} &\; \trace\{V_jS\}\nn\\
\mathrm{s.t. }&\; \trace\{(V_j-V_i)S\} \geq 0 \;\forall i \end{aligned}$$ while the right-hand-side is an SDP problem isolated from $\playerA$’s action. Therefore, by searching over the index set $\calI$, we can compute the left-hand-side, which is the minimum over $\calI$. Once the minimum value is computed, $S^*$ can be computed according to the corresponding index, i.e., via \[eq:Sstarj\].
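The search in Theorem \[theorem:compute\] can be carried out with an off-the-shelf SDP solver. The sketch below uses CVXPY; the system matrices are arbitrary, the matrices `V[j][k]` are random symmetric placeholders standing in for the $V_k(\omega_j)$ derived in Appendix \[app:V\], and the constant $v_o$ is omitted since it only shifts the value.

```python
import numpy as np
import cvxpy as cp

# Placeholder data: A, Sigma_1, Sigma_w arbitrary; V[j][k] random symmetric stand-ins.
rng = np.random.default_rng(2)
m, kappa, n_types = 2, 4, 3
A = np.array([[0.9, 0.2], [0.0, 0.8]])
Sigma_1, Sigma_w = np.eye(m), 0.1 * np.eye(m)
Sigma_o = [Sigma_1]
for _ in range(kappa - 1):
    Sigma_o.append(A @ Sigma_o[-1] @ A.T + Sigma_w)
V = []
for _ in range(n_types):
    raw = [rng.normal(size=(m, m)) for _ in range(kappa)]
    V.append([(M + M.T) / 2 for M in raw])

def theta_j(j):
    """Solve the SDP (eq:vj) for candidate worst-case type j (value without v_o)."""
    S = [cp.Variable((m, m), symmetric=True) for _ in range(kappa)]
    cons = [Sigma_o[0] >> S[0], S[0] >> 0]                        # S_0 = O
    for k in range(1, kappa):
        cons += [Sigma_o[k] >> S[k], S[k] >> A @ S[k - 1] @ A.T]
    cost_j = sum(cp.trace(V[j][k] @ S[k]) for k in range(kappa))
    for i in range(n_types):                                      # trace{(V_j - V_i) S} >= 0
        cons.append(cost_j - sum(cp.trace(V[i][k] @ S[k]) for k in range(kappa)) >= 0)
    prob = cp.Problem(cp.Minimize(cost_j), cons)
    prob.solve()
    return prob.value, [Sk.value for Sk in S]

results = [theta_j(j) for j in range(n_types)]
j_star = int(np.argmin([val for val, _ in results]))
theta_star, S_star = results[j_star]                              # theta = min_j theta_j
```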
In Theorem \[theorem:compute\], we search over the index set $\calI$ linearly; however, certain pruning operations can be conducted to speed up the computation. As an example, we can search over the extreme points of the convex hull of $V_j$, $j\in\calI$. $\triangle$
There might be multiple solutions for \[eq:Sstarj\]. $\playerS$ can be selective among those solutions. In particular, \[eq:maxmin\] implies that $\playerS$ minimizes the cost given that $\playerA$ maximizes it. Therefore, if the true underlying distribution is not the worst possible distribution, then $\playerS$ would not get a cost more than the anticipated one. Any deviation from the worst distribution benefits $\playerS$. Furthermore, in the worst case, $\playerA$ assigns positive probabilities to the types leading to the maximum as in \[eq:postar\]. Correspondingly, if $\playerS$ selects the solution $S^*\in\bPsi$ for \[eq:Sstarj\] such that the cardinality of $\argmax_{j}\trace\{V_jS^*\}$ is the smallest, then any positive probability on other types of attacks out of that set would lead to lower cost and would be desirable. $\triangle$
Robust Sensor Design with Noisy or Partial Measurements {#sec:noisy}
=======================================================
In this section, we obtain the optimal signaling rule when there are noisy or partial measurements of the type \[eq:measurement\], by turning the problem into the same structure as the case of perfect measurements based on a recent result from [@ref:Sayin18d] and then invoking the results from the previous section. There are several challenges in robust sensor design with noisy or partial measurements. As an example, the sufficiency result on the necessary conditions for the covariance of the posterior control-free state, i.e., $H_k$, does not hold in that case. Therefore, our focus will be on the necessary and sufficient conditions for the covariance of the posterior control-free [*measurements*]{}, i.e., $\cov\{\E\{\ry_k^o|\rs_{1:k}\}\}$, where $\ry_k^o := C\rx_k^o + \rv_k$. Similar to \[eq:condexp\], we can show that $$\E\{\ry_k^o | \rs_{1:k}\} = \E\{\ry_k^o | \rs_{1:k}^o\},$$ where $\rs_k^o := L_{k,k}'\ry_k^o + \ldots + L_{k,1}'\ry_1^o + \rn_k$, since $\ru_{1:k-1}$ is $\sigma$-$\ry_{1:k-1}$ measurable. However, $\{\ry_k^o\}$ is not necessarily a Markov process. Therefore, we consider $$\begin{aligned}
\begin{bmatrix} \ry_k^o \\ \hdashline[2pt/2pt] \ry_{k-1}^o \\ \vdots \\ \ry_1^o\end{bmatrix} = \overbrace{\begin{bmatrix} \E\{\ry_k^o(\ry_{1:k-1}^o)'\}\E\{\ry_{1:k-1}^o(\ry_{1:k-1}^o)'\}^{\dagger} \\ \hdashline[2pt/2pt] \\ I \\ \end{bmatrix}}^{=: A_k} \begin{bmatrix} \ry_{k-1}^o \\ \vdots \\ \ry_{1}^o\end{bmatrix} + \underbrace{\begin{bmatrix} \ry_k^o - \E\{\ry_k^o|\ry_{1:k-1}^o\} \\ \hdashline[2pt/2pt] \\ O \\ \end{bmatrix}}_{=:\re_k},\end{aligned}$$ which can also be written in a compact form as $$\ry_{1:k}^o = A_k \ry_{1:k-1}^o + \re_k,$$ where we denote the vector $\begin{bmatrix}(\ry_k^o)'& \cdots & (\ry_1^o)' \end{bmatrix}'$ by $\ry_{1:k}^o$ with some abuse of notation.
Furthermore, we note that $\rx_k^o$, $\ry_{1:k}^o$, and $\rs_{1:k}^o$ form a Markov chain in the order $\rx_k^o \rightarrow \ry_{1:k}^o \rightarrow \rs_{1:k}^o$. In that respect, the following lemma from [@ref:Sayin18d] shows that there exists a linear relation between the posterior estimates irrespective of the signal if they are jointly Gaussian and form a Markov chain in a certain order.
\[lem:outsider\] Given zero-mean jointly Gaussian random vectors forming a Markov chain, e.g., $\rx\rightarrow \ry \rightarrow \rs$ in this order, the posterior estimates of $\rx$ and $\ry$ given $\rs$ satisfy the following linear relation: $$\label{eq:outsider}
\E\{\rx|\rs\} = \E\{\rx\ry'\} \E\{\ry\ry'\}^{\dagger} \E\{\ry|\rs\},$$ which implies $\rs\rightarrow\E\{\ry|\rs\}\rightarrow\E\{\rx|\rs\}$ in this order.
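A quick numerical check of \[eq:outsider\] for a jointly Gaussian Markov chain built as $\ry = F\rx + \re$ and $\rs = G\ry + \rn$ (illustrative dimensions and matrices) is given below.

```python
import numpy as np

# Check E{x|s} = E{xy'} E{yy'}^+ E{y|s} for a Markov chain x -> y -> s with
# y = F x + e and s = G y + n; all matrices are illustrative placeholders.
rng = np.random.default_rng(3)
nx, ny, ns = 3, 4, 2
Sx = np.eye(nx)
F, G = rng.normal(size=(ny, nx)), rng.normal(size=(ns, ny))
Se, Sn = 0.5 * np.eye(ny), 0.3 * np.eye(ns)

Sy = F @ Sx @ F.T + Se                     # cov(y)
Sxy = Sx @ F.T                             # E{x y'}
Sys = Sy @ G.T                             # E{y s'}
Sxs = Sxy @ G.T                            # E{x s'}
Ss = G @ Sy @ G.T + Sn                     # cov(s)

lhs = Sxs @ np.linalg.pinv(Ss)             # E{x|s} = lhs @ s
rhs = Sxy @ np.linalg.pinv(Sy) @ (Sys @ np.linalg.pinv(Ss))   # E{xy'} E{yy'}^+ E{y|s}
assert np.allclose(lhs, rhs, atol=1e-8)
```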
Based on Lemma \[lem:outsider\], we have the following relation between $\E\{\rx_k^o|\rs_{1:k}^o\}$ and $\E\{\ry_{1:k}^o|\rs_{1:k}^o\}$: $$\E\{\rx_k^o|\rs_{1:k}^o\} = \underbrace{\E\{\rx_k^o(\ry_{1:k}^o)'\}\E\{\ry_{1:k}^o (\ry_{1:k}^o)'\}^{\dagger}}_{=:D_k} \E\{\ry_{1:k}^o|\rs_{1:k}^o\},\label{eq:condexp2}$$ where $D_k\in\R^{m\times mk}$ does not depend on the signaling rule $\eta_{1:k}(\cdot)$. We define $$Y_k := \cov\{\E\{\ry_{1:k}^o | \rs_{1:k}^o\}\}.$$ Then, \[eq:condexp2\] yields that $$H_k = D_kY_kD_k'.$$ Correspondingly, \[eq:short\], i.e., the problem faced by $\playerS$, can be written as $$\min\limits_{\substack{\eta_k\in {\Upsilon}_k\\k=1,\ldots,\kappa}} \sum_{k=1}^{\kappa} \trace\left\{Y_k \left(\sum_{\omega\in\Omega}p_{\omega}W_k(\omega)\right)\right\} + v_o,$$ where $W_k(\omega) := D_k'V_k(\omega)D_k$ is also a symmetric matrix. Furthermore, consider the following compact and convex set: $$\Phi := \{(S_k \in \AS^{mk})_{k=1}^{\kappa}| \Sigma_k^y \succeq S_k \succeq A_kS_{k-1}A_k', k=1,\ldots,\kappa,S_0=O\},$$ where $\Sigma_k^y := \E\{\ry_{1:k}^o(\ry_{1:k}^o)'\}$ (which depends on $C\in\R^{m\times m}$ and $\Sigma_v \in \AS^{m}$).
Note that $\Sigma_k^y\in\AS^{mk}$, $A_k \in \R^{km\times(k-1)m}$, and $D_k\in\R^{m\times mk}$ can be written as $$\begin{aligned}
&\Sigma_k^y = \begin{bmatrix} O & I_k \otimes C\end{bmatrix}\Sigma^o \begin{bmatrix} O \\ I_k \otimes C' \end{bmatrix} + I_{k} \otimes \Sigma_v,\\
&A_k = \begin{bmatrix} \begin{bmatrix} O_{m\times (\kappa-k)m} & C & O_{m\times(k-1)m} \end{bmatrix} \Sigma^o \begin{bmatrix} O \\ I_{k-1}\otimes C'\end{bmatrix} (\Sigma_{k-1}^y)^{\dagger} \\ I_{(k-1)m} \end{bmatrix},\\
&D_k = \begin{bmatrix} O_{m\times (\kappa-k)m} & I_m & O_{m\times(k-1)m} \end{bmatrix} \Sigma^o \begin{bmatrix} O \\ I_k\otimes C'\end{bmatrix} (\Sigma_k^y)^{\dagger}\end{aligned}$$ in terms of $\Sigma^o\in\AS^{m\kappa}$, defined in Appendix \[app:V\]. $\triangle$
Without loss of generality, suppose that $\rs_k \in \R^{mk}$ instead of $\rs_k\in\R^m$ such that $\playerS$ can [*disclose*]{} $\teta_k(\ry_{1:k}) = \ry_{1:k}$ with the affine signaling rule $\teta_k(\cdot)$ from $\R^{mk}$ to $\R^{mk}$. Particularly, in practice, we can always set the signaling rule $\eta_k(\cdot)$ from $\R^{mk}$ to $\R^{m}$ as $$\label{eq:wlg}
\eta_k(\ry_{1:k}) = \E\{\rx_k^o|\teta_1(\ry_1),\ldots,\teta_k(\ry_{1:k})\}.$$ For such a signaling rule $\teta_k(\cdot)$, by following similar lines in Step $ii)$ in the proof of Theorem \[theorem:equivalent\], we can show that a necessary condition on $Y_{1:\kappa}$ is that $Y_{1:\kappa}\in\Phi$. Furthermore, based on Lemma \[lem:affine\], a sufficient condition on $Y_{1:\kappa}$ is that for any $S_{1:\kappa}\in\Phi$, there exists a certain signaling rule such that $Y_{1:\kappa} = S_{1:\kappa}$. $\triangle$
The following corollary to Theorem \[theorem:equivalent\] provides an equivalent SDP problem for the problem faced by $\playerS$ when there are noisy or partial measurements.
\[corollary:lem\] Given $p\in\Delta^{|\Omega|}$, for any signaling rule $\eta_{1:\kappa}$, there exists $S_{1:\kappa}\in\Phi$ such that $$\begin{aligned}
&U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p)\\
&\hspace{.4in}=\sum_{k=1}^{\kappa} \trace\left\{S_k \left(\sum_{\omega\in\Omega}p_{\omega}W_k(\omega)\right)\right\} + v_o.\end{aligned}$$ Furthermore, for any $S_{1:\kappa}\in\Phi$, there exists a signaling rule $\eta_{1:\kappa}$ such that $$\begin{aligned}
\label{eq:noisyToRight}
&\sum_{k=1}^{\kappa} \trace\left\{S_k \left(\sum_{\omega\in\Omega}p_{\omega}W_k(\omega)\right)\right\} + v_o\\
&\hspace{.4in}=U_{\playerS}(\eta_{1:\kappa},\{\gamma_{1:\kappa}^{\omega*}(\eta_{1:\kappa})\}_{\omega\in\Omega},p).\end{aligned}$$
Based on Corollary \[corollary:lem\], the following corollary to Theorem \[theorem:compute\] provides an algorithm to compute the robust sensor outputs for the cases with noisy or partial measurements.
\[corollary:compute\] The value of the Stackelberg equilibrium $$\min_{S\in\bPhi}\max_{p\in\Delta^{|\Omega|}} \sum_{i\in\calI}p_i\trace\{W_iS\} + v_o,$$ where $W_i$ and $\bPhi$ are defined accordingly, is given by $\vartheta = \min_{j\in\calI}\{\vartheta_j\}$, where $$\begin{aligned}
\nn
\vartheta_j := \min_{S\in\bPhi}&\; \trace\big\{W_jS\big\} + v_o \\
\mathrm{s.t. }& \;\trace\big\{(W_j-W_i)S\big\} \geq 0\; \forall i\in\calI.\nn\end{aligned}$$
Furthermore, let $\vartheta_{j^*} = \vartheta$ and $$\begin{aligned}
S^*\in \argmin_{S\in\bPhi}&\; \trace\big\{W_{j^*}S\big\} + v_o \nn \\
\mathrm{s.t. }& \;\trace\big\{(W_{j^*}-W_i)S\big\} \geq 0\; \forall i\in\calI.\nn\end{aligned}$$ Then, given $S^*\in\bPhi$, the optimal signaling rule $\teta_{1:\kappa}$ can be computed according to \[eq:affineSolution\] from Lemma \[lem:affine\] with the corresponding $\Sigma_k^y$ and $A_k$ instead of $\Sigma_k^o$ and $A$, for $k=1,\ldots,\kappa$, and then we can compute the actual signaling rules $\eta_{1:\kappa}$ via \[eq:wlg\].
Illustrative Examples {#sec:examples}
=====================
As numerical illustrations, we compare the performance of the proposed secure sensor design framework with classical sensors that disclose the measurement to the controller directly. The controller can have three different types: type-$\omega_o$ corresponding to benign controller, and type-$\alpha$ and type-$\beta$ corresponding to malicious controllers. As an illustrative example, we set the time horizon $\kappa=10$, the state’s dimension $m=4$, and the control input’s dimension $r=2$. We consider that the state can be partitioned into the separate processes $\{\rt_k\in\R^2\}$ and $\{\rz_k\in\R^2\}$, i.e., $\rx_k' = \begin{bmatrix} \rt_k' & \rz_k' \end{bmatrix}$, and the state recursion is given by $$\begin{bmatrix} \rt_{k+1}\\ \rz_{k+1} \end{bmatrix} = \begin{bmatrix} A_t & O \\O & A_z\end{bmatrix} \begin{bmatrix} \rt_{k} \\ \rz_{k} \end{bmatrix} + \begin{bmatrix} B_t \\ O \end{bmatrix}\ru_k + \begin{bmatrix} \rw_k^t \\ \rw_k^z \end{bmatrix},$$ where $$A_t := \begin{bmatrix} 1/\sqrt{2} & 0\\0 & 1/2 \end{bmatrix},\, A_z := \begin{bmatrix} 1/3 & 1/10 \\ 1/10 & 1/\sqrt{2} \end{bmatrix},\, B_t := \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.\nn$$ Furthermore, we let the initial state $\rx_1\sim\N(0,\Sigma_1)$ and the state noise $\rw_k \sim \N(0,\Sigma_w)$ have the covariance matrices: $$\begin{aligned}
\Sigma_1 := \begin{bmatrix} 1&0&0&0\\0&1&0&0\\ 0&0&1.5&0\\0&0&0&2\end{bmatrix},\,\Sigma_w:=\begin{bmatrix} 1&0&0&0\\ 0&2&0&0 \\ 0&0&0.98 & -0.228\\ 0&0&-0.228 & 0.985\end{bmatrix},\end{aligned}$$ which implies $\{\rz_k\}$ is a stationary exogenous process. The benign controller objective is given by $$\label{eq:simSobj}
\sum_{k=1}^{\kappa}\left\|\begin{bmatrix} \rt_{k+1} - \rz_{k+1} \\ \rt_{k+1} \end{bmatrix}\right\|^2 + \|\ru_k\|^2,$$ which implies that the type-$\omega_o$ controller and $\playerS$ seek to regularize the controlled process $\{\rt_k\}$ around the zero vector and the exogenous process $\{\rz_k\}$. On the other hand, the other malicious type controllers’ objectives are misaligned with \[eq:simSobj\] rather than being its complete opposite. Let $\rt_k = \begin{bmatrix} \rt_k^{(1)} & \rt_k^{(2)} \end{bmatrix}'$. Then, type-$\alpha$ seeks to regularize $\{\rt_k^{(1)}\in\R\}$ around zero and thus his control objective is given by $$\sum_{k=1}^{\kappa}\left\|\rt_{k+1}^{(1)}\right\|^2 + \|\ru_k\|^2.$$ Type-$\beta$ seeks to regularize the other component of $\rt_k$, $\{\rt_k^{(2)}\in\R\}$, again around zero and thus his control objective is given by $$\sum_{k=1}^{\kappa}\left\|\rt_{k+1}^{(2)}\right\|^2 + \|\ru_k\|^2.$$
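For reproducibility, the snippet below collects the system and objective matrices of this example; the weight matrices are obtained by reading the objectives above in the form $\|\rx_{k+1}\|_{Q}^2 + \|\ru_k\|_{R}^2$.

```python
import numpy as np

# System and objective matrices of the illustrative example (as specified in the text).
kappa, m, r = 10, 4, 2
A_t = np.array([[1 / np.sqrt(2), 0], [0, 1 / 2]])
A_z = np.array([[1 / 3, 1 / 10], [1 / 10, 1 / np.sqrt(2)]])
B_t = np.array([[1, 1], [0, 1]])
A = np.block([[A_t, np.zeros((2, 2))], [np.zeros((2, 2)), A_z]])
B = np.vstack([B_t, np.zeros((2, 2))])

Sigma_1 = np.diag([1, 1, 1.5, 2])
Sigma_w = np.array([[1, 0, 0, 0],
                    [0, 2, 0, 0],
                    [0, 0, 0.98, -0.228],
                    [0, 0, -0.228, 0.985]])

M = np.block([[np.eye(2), -np.eye(2)], [np.eye(2), np.zeros((2, 2))]])
Q_benign = M.T @ M                      # ||t - z||^2 + ||t||^2, i.e., (eq:simSobj)
Q_alpha = np.diag([1.0, 0, 0, 0])       # type-alpha: regulate t^(1) around zero
Q_beta = np.diag([0, 1.0, 0, 0])        # type-beta:  regulate t^(2) around zero
R = R_alpha = R_beta = np.eye(r)        # identity control weights in all objectives
```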
We consider four different scenarios in terms of the measurements:
- Scenario-$1$: Perfect Measurements, i.e., $\ry_k = \rx_k$.
- Scenario-$2$: Noisy Measurements, i.e., $\ry_k = \rx_k + \rv_k$.
- Scenario-$3$: Partial Measurements, i.e., $\ry_k = C\rx_k$.
- Scenario-$4$: Partial Noisy Measurements, i.e., $$\ry_k = C\rx_k + \rv_k.$$
We let $\rv_k\sim\N(0,I_4)$ and $$C := \begin{bmatrix} 1&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 0&0&0&0 \end{bmatrix},$$ which is a singular matrix. In Tables \[tab:scenario1\]-\[tab:scenario4\], we compare the performance of the secure sensor design framework with the classical sensors in terms of the following performance metric: $$\label{eq:measure}
\max_{p\in\Delta^{|\Omega|}}\sum_{k=1}^{\kappa}\trace\left\{H_k \sum_{\omega\in\Omega}p_{\omega}V_k(\omega)\right\},$$ i.e., in terms of the impact of the sensor feedback and for the worst possible distribution over the controllers’ types, based on Theorem \[theorem:equivalent\] and Corollary \[corollary:lem\] while $V_{1:\kappa}(\omega)$, for $\omega\in\Omega$, is derived in Appendix \[app:V\].
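Since \[eq:measure\] is linear in $p$ over the simplex, its worst case is attained at a vertex, i.e., at a single type. The helper below evaluates the metric accordingly; the matrices $V_k(\omega)$ from Appendix \[app:V\] are treated as given inputs, and the interface is hypothetical.

```python
import numpy as np

def worst_case_metric(H, V):
    """Evaluate (eq:measure). Since the objective is linear in p on the simplex,
    the worst-case distribution puts all mass on a single type.
    H : list of kappa covariance matrices H_k
    V : list (one entry per type) of lists of the V_k(omega) matrices (Appendix app:V)"""
    per_type = [sum(np.trace(Hk @ Vk) for Hk, Vk in zip(H, V_omega)) for V_omega in V]
    return max(per_type), int(np.argmax(per_type))   # worst-case value and worst type index
```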
Note that the performance metric \[eq:measure\] excludes $v_o\in\R$ from the original cost function, both in the perfect measurements case and in the partial or noisy measurements case, since $v_o\in\R$ does not depend on how the measurements have been shared with the controller, and therefore is fixed for all the scenarios. Correspondingly, the performance metric could be [*negative*]{} while the original cost function is always non-negative by definition. $\triangle$
Across Tables \[tab:scenario1\]-\[tab:scenario4\], i.e., across all Scenarios $1$-$4$, we have the following observations in common: $i)$ the proposed framework outperforms the classical sensors that disclose the (perfect or noisy) measurements directly without any crafting; $ii)$ the costs for the proposed framework and the classical full-disclosure strategy are the same when there is only the benign type almost surely; $iii)$ the benign type is dominated by both type-$\alpha$ and type-$\beta$ in the worst distribution, i.e., the benign type has zero probability almost surely in the worst distribution; $iv)$ type-$\beta$ is a stronger attacker than type-$\alpha$, as it leads to a higher cost.
[Table \[tab:scenario1\]: costs of the proposed secure sensor design (“Secure”) versus full disclosure of the measurements (“Full”) in Scenario-$1$, for different sets of controller types.]
As seen in Table \[tab:scenario1\], in Scenario-$1$, the cost of the proposed framework for all the cases, i.e., for different sets of types, is the smallest compared to the other scenarios, where there can be partial or noisy measurements. Particularly, the perfect measurements give the utmost [*freedom*]{} to $\playerS$ to select the signaling rule. Therefore, if there were any other case with partial or noisy measurements where $\playerS$ achieves a lower cost, then $\playerS$ could have selected that corresponding composed signaling rule in the case with perfect measurements. Partial or noisy measurements limit $\playerS$’s ability to deceive the attackers. Furthermore, we observe that when all the types can exist with a positive probability, the cost is higher than in the cases when only one attacker exists. This shows that the worst distribution to defend against is not dominated by the strongest attacker, who is type-$\beta$ in this scenario. In other words, the possibility of a mixture over the stronger and weaker attackers can be more powerful.
[Table \[tab:scenario2\]: costs of the proposed secure sensor design (“Secure”) versus full disclosure of the measurements (“Full”) in Scenario-$2$, for different sets of controller types.]
[Table \[tab:scenario3\]: costs of the proposed secure sensor design (“Secure”) versus full disclosure of the measurements (“Full”) in Scenario-$3$, for different sets of controller types.]
In Scenarios $2$ and $3$, as seen in Tables \[tab:scenario2\] and \[tab:scenario3\], the performance of the proposed framework degrades compared to Scenario $1$. However, such a performance degradation is not the case for full information disclosure in general. As an example, when there is only type-$\beta$ almost surely, the cost is higher in Scenario $3$ than the one in Scenario $1$ for full disclosure of the measurement. This is mainly because the objectives of the malicious type controllers are not the complete opposite of the benign type controller’s. Therefore, although not observed in the cases illustrated in this section, there can even be examples where full disclosure of the measurement leads to a [*positive*]{} cost, which would imply that disclosing no information to the controller would lead to a lower cost, i.e., $0$, since no information disclosure yields that the covariance of the posterior control-free state is $H_k = O$. Furthermore, we observe that the possibility of a mixture over the stronger and weaker attackers can also be more powerful in Scenarios $2$ and $3$.
[Table: Scenario-$4$ costs under the proposed secure signaling (“Secure”) and full disclosure (“Full”) for the different sets of controller types; numerical entries omitted.]

\[tab:scenario4\]
In Scenario-$4$, as seen in Table \[tab:scenario4\], the cost for the case with only the benign type is the highest. However, while the costs for the cases with only the type-$\alpha$ attacker are higher than the corresponding costs in Scenario-$2$, the costs for the cases with only the type-$\beta$ attacker are [*lower*]{} than the corresponding costs in Scenario-$2$. Note that the type-$\beta$ attacker is still stronger than the type-$\alpha$ attacker in the sense that it leads to a higher cost. Furthermore, the costs for the case where all the types can exist with positive probability show a similar reversal, with a higher cost observed in Scenario-$2$. We also note that the type-$\beta$ attacker dominates the worst distribution over the types, since the cost for the case with only type-$\beta$ coincides with the cost for the case with all types. Therefore, we can conclude that, depending on the types of the controllers and on how informative the measurements are, the costs can vary in a complicated way, while the proposed framework provides both the optimal way to compute the robust sensor outputs and a performance-assessment tool for, e.g., various sensor placement techniques.
Conclusion {#sec:conclusion}
==========
In this paper, we have proposed and addressed the robust sensor design problem for cyber-physical systems with linear Gaussian dynamics against multiple advanced and evasive attackers with quadratic control objectives. By designing sensor outputs cautiously in advance, we have sought to deceive the attackers about the underlying state of the system so that they would act on, or attack, the system in line with its normal operation. Our goal has been to exploit the aligned part between the attackers’ and the system’s objectives by crafting the information available to the attackers, so that they end up fulfilling only the aligned part. To this end, we have modeled the problem formally in a game-theoretical hierarchical setting, where the advanced attackers can be aware of the designed signaling rules.
We have formulated a problem equivalent to the one faced by the sensor against any attacker with a known objective; this equivalent problem is a semi-definite program (SDP). We have then introduced additional linear constraints on the equivalent problem and provided an SDP-based algorithm to compute the optimal robust sensor design strategies against multiple types of attackers. We have also extended the results to scenarios where the sensor has access only to partial or noisy measurements of the underlying state. Finally, we have examined the performance of the proposed framework across various scenarios and compared it with the classical sensor outputs that disclose the measurements directly without any crafting.
Some future directions of research on this topic include the formulation of secure sensor design strategies for robust control of systems and the consideration of scenarios where the attackers have side information about the underlying state, which would limit the sensor’s ability to deceive them. Another interesting research direction would be the application of the framework to sensor placement or sensor selection. Furthermore, even though we have motivated the framework through security, it could also address strategic information disclosure over multi-agent control networks with misaligned control objectives.
Computation of $K_{1:\kappa}^{\omega},\Delta_{1:\kappa}^{\omega}$, and $\Delta_0^{\omega}$ {#app:computationA}
------------------------------------------------------------------------------------------
A routine completion of squares leads to [@ref:Bansal89; @ref:Kumar86] $$\E\left\{\sum_{k=1}^{\kappa}\|\rx_{k+1}\|_{Q_{\omega}}^2 + \|\ru_k\|_{R_{\omega}}^2\right\} = \sum_{k=1}^{\kappa}\E\left\{\|\ru_k+K_{k}^{\omega}\rx_k\|_{\Delta_{k}^{\omega}}^2\right\} + \Delta_{0}^{\omega},\label{eq:square}$$ where $$\begin{aligned}
&K_{k}^{\omega}=(\Delta_{k}^{\omega})^{-1}B'\tQ_{k+1}^{\omega}A\\
&\Delta_{k}^{\omega} = B'\tQ_{k+1}^{\omega}B + R_{\omega}\\
&\Delta_{0}^{\omega} = \trace\{Q_{\omega}\Sigma_1\} + \sum_{k=1}^{\kappa}\trace\{\tQ_{k+1}^{\omega}\Sigma_w\}\end{aligned}$$ and $\{\tQ_{k}^{\omega}\}$ is given by the discrete-time dynamic Riccati equation $$\tQ_{k}^{\omega} = Q_{\omega} + A'(\tQ_{k+1}^{\omega} - \tQ_{k+1}^{\omega}B(\Delta_{k}^{\omega})^{-1}B'\tQ_{k+1}^{\omega})A$$ and $\tQ_{\kappa+1}^{\omega} = Q_{\omega}$.
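For concreteness, the backward recursion above can be implemented in a few lines. The following Python sketch is a minimal illustration only: the function name and the way the matrices $A$, $B$, $Q_{\omega}$, $R_{\omega}$, $\Sigma_1$, and $\Sigma_w$ are passed in are our own choices and not part of the formulation. It computes $K_{1:\kappa}^{\omega}$, $\Delta_{1:\kappa}^{\omega}$, and $\Delta_0^{\omega}$ for a fixed type $\omega$.

```python
import numpy as np

def riccati_gains(A, B, Q, R, Sigma1, Sigma_w, kappa):
    """Backward recursion for a fixed controller type omega:
    returns K_1..K_kappa, Delta_1..Delta_kappa, and the constant Delta_0."""
    Qt = Q.copy()                                  # \tilde{Q}_{kappa+1} = Q_omega
    K = [None] * (kappa + 1)                       # 1-based indexing; K[0] unused
    Delta = [None] * (kappa + 1)
    Delta0 = np.trace(Q @ Sigma1)                  # trace{Q_omega Sigma_1}
    for k in range(kappa, 0, -1):                  # k = kappa, ..., 1
        Delta[k] = B.T @ Qt @ B + R                # Delta_k = B' Qt_{k+1} B + R
        K[k] = np.linalg.solve(Delta[k], B.T @ Qt @ A)   # K_k = Delta_k^{-1} B' Qt_{k+1} A
        Delta0 += np.trace(Qt @ Sigma_w)           # adds trace{Qt_{k+1} Sigma_w}
        # Riccati update: Qt_k = Q + A'(Qt_{k+1} - Qt_{k+1} B Delta_k^{-1} B' Qt_{k+1}) A
        Qt = Q + A.T @ (Qt - Qt @ B @ np.linalg.solve(Delta[k], B.T @ Qt)) @ A
    return K[1:], Delta[1:], Delta0                # lists ordered K_1, ..., K_kappa
```

The sketch uses `np.linalg.solve` instead of forming $(\Delta_{k}^{\omega})^{-1}$ explicitly, which is numerically preferable; when $R_{\omega}$ is positive definite, $\Delta_{k}^{\omega}\succ 0$ and the solve is well posed.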
Computation of $V_{1:\kappa}(\omega)$ and $v_o$ {#app:V}
-----------------------------------------------
Let $\omega_o\in\Omega$ denote the type of the benign controller, whose objective coincides with that of the sensor. We define $$\begin{aligned}
&\Phi(\omega) := \begin{bmatrix}
I & K_{\kappa}^{\omega}B & K_{\kappa}^{\omega} AB & \cdots & K_{\kappa}^{\omega}A^{\kappa-2}B \\
& I & K_{\kappa-1}^{\omega}B & \cdots & K_{\kappa-1}^{\omega}A^{\kappa-3}B\\
& & I & \cdots & K_{\kappa -2}^{\omega}A^{\kappa-4}B\\
& & & \ddots & \vdots\\
& & & & I
\end{bmatrix}\\
&K(\omega) := \begin{bmatrix} K_{\kappa}^{\omega} & & \\ & \ddots & \\ & & K_1^{\omega}\end{bmatrix},\;\Delta(\omega) := \begin{bmatrix} \Delta_{\kappa}^{\omega} & & \\ & \ddots & \\ & & \Delta_1^{\omega}\end{bmatrix}.\end{aligned}$$ Then, the completed-square cost \[eq:square\] can be written as $$\|\Phi(\omega)\ru + K(\omega)\rx^o\|_{\Delta(\omega)}^2 + \Delta^{\omega}_0$$ in terms of the augmented vectors $\ru,\rx^o\in\R^{m\kappa}$. Correspondingly, the optimal attack is $\ru^* = -\Phi(\omega)^{-1}K(\omega)\rhx^o$, where $\rhx^o := \begin{bmatrix} \E\{\rx_{\kappa}^o | \rs_{1:\kappa}\}' & \cdots & \E\{\rx_1^o|\rs_1\}' \end{bmatrix}'$, for the type-$\omega$ controller. Therefore, the sensor, whose objective corresponds to type-$\omega_o$, faces the following problem: $$\sum_{\omega\in\Omega}p_{\omega} \E\left\{\|K(\omega_o)\rx^o - T(\omega)\rhx^o\|_{\Delta(\omega_o)}^2\right\} + \Delta_{0}^{\omega_o},\label{eq:whole}$$ where $T(\omega) := \Phi(\omega_o)\Phi(\omega)^{-1}K(\omega)$. We introduce $$\begin{aligned}
&\Xi(\omega) := T(\omega)'\Delta(\omega_o) T(\omega) - T(\omega)'\Delta(\omega_o) K(\omega_o) - K(\omega_o)' \Delta(\omega_o) T(\omega)\\
&v_o := \trace\{\Sigma^o K(\omega_o)'\Delta(\omega_o)K(\omega_o)\} + \Delta_0^{\omega_o},\end{aligned}$$ where $\Sigma^o := \E\{\rx^o(\rx^o)'\}$ is given by $$\label{eq:Sigmao}
\Sigma^o := \begin{bmatrix} \Sigma_{\kappa}^o & A\Sigma_{\kappa-1}^o & \cdots & A^{\kappa-1}\Sigma_1^o \\ \Sigma_{\kappa-1}^oA' & \Sigma_{\kappa-1}^o & &A^{\kappa-2}\Sigma_1^o \\
\vdots & & \ddots & \vdots \\ \Sigma_1^o(A^{\kappa-1})' & \Sigma_1^o (A^{\kappa-2})' & \cdots & \Sigma_1^o \end{bmatrix}.$$ Note that $v_o\in\R$ does not depend on the types of the controllers. Then, \[eq:whole\] can be written as $$\sum_{\omega\in\Omega} p_{\omega}\trace\{\E\{\rhx^o(\rhx^o)'\}\Xi(\omega)\} + v_o,$$ where we have $$\nn
\E\{\rhx^o(\rhx^o)'\} := \begin{bmatrix} H_{\kappa} & AH_{\kappa-1} & \cdots & A^{\kappa-1}H_1 \\ H_{\kappa-1}A' & H_{\kappa-1} & &A^{\kappa-2}H_1 \\
\vdots & & \ddots & \vdots \\ H_1(A^{\kappa-1})' & H_1 (A^{\kappa-2})' & \cdots & H_1 \end{bmatrix}.$$ Therefore, the corresponding $V_k(\omega)\in\AS^m$ is given by $$V_k(\omega) = \Xi_{k,k}(\omega) + \sum_{l = k+1}^{\kappa} \left(\Xi_{k,l}(\omega) A^{l-k} + (A^{l-k})' \Xi_{l,k}(\omega)\right),$$ where $\Xi_{k,l}(\omega)\in\R^{m\times m}$ is an $m\times m$ block of $\Xi(\omega)$, with the blocks indexed from the bottom-right corner to the top-left corner.
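To make the block-matrix bookkeeping above concrete, the following Python sketch is illustrative only: all function and variable names are ours, we write `n` for the state dimension and `m` for the input dimension, and the gains are assumed to come from a recursion such as `riccati_gains` above. It assembles $\Phi(\omega)$, $K(\omega)$, and $\Delta(\omega)$, forms $T(\omega)$, $\Xi(\omega)$, and $v_o$, and recovers $V_k(\omega)$ from the blocks of $\Xi(\omega)$ using the bottom-right-to-top-left block indexing described above.

```python
import numpy as np

def build_augmented(A, B, K_gains, Delta_wts):
    """Assemble Phi(omega), K(omega), Delta(omega) from K_1..K_kappa, Delta_1..Delta_kappa.
    Row/column block i (counted from the top-left) corresponds to stage kappa - i."""
    kappa = len(K_gains)
    m, n = K_gains[0].shape
    Phi_rows, K_rows, D_rows = [], [], []
    for i in range(kappa):
        Ki, Di = K_gains[kappa - 1 - i], Delta_wts[kappa - 1 - i]  # K_{kappa-i}, Delta_{kappa-i}
        prow, krow, drow = [], [], []
        for j in range(kappa):
            if j < i:
                prow.append(np.zeros((m, m)))
            elif j == i:
                prow.append(np.eye(m))
            else:  # block (i, j), j > i: K_{kappa-i} A^{j-i-1} B
                prow.append(Ki @ np.linalg.matrix_power(A, j - i - 1) @ B)
            krow.append(Ki if j == i else np.zeros((m, n)))
            drow.append(Di if j == i else np.zeros((m, m)))
        Phi_rows.append(prow); K_rows.append(krow); D_rows.append(drow)
    return np.block(Phi_rows), np.block(K_rows), np.block(D_rows)

def xi_and_vo(Phi_o, K_o, D_o, Phi_w, K_w, Sigma_o, Delta0_o):
    """Xi(omega) and the type-independent constant v_o."""
    T = Phi_o @ np.linalg.solve(Phi_w, K_w)    # T(omega) = Phi(omega_o) Phi(omega)^{-1} K(omega)
    Xi = T.T @ D_o @ T - T.T @ D_o @ K_o - K_o.T @ D_o @ T
    v_o = np.trace(Sigma_o @ K_o.T @ D_o @ K_o) + Delta0_o
    return Xi, v_o

def v_weights(A, Xi, kappa):
    """Recover V_1(omega), ..., V_kappa(omega) from Xi(omega); block index k counts
    from the bottom-right corner, i.e., block k starts at offset (kappa - k) * n."""
    n = Xi.shape[0] // kappa
    blk = lambda k, l: Xi[(kappa - k) * n:(kappa - k + 1) * n,
                          (kappa - l) * n:(kappa - l + 1) * n]
    V = []
    for k in range(1, kappa + 1):
        Vk = blk(k, k).copy()
        for l in range(k + 1, kappa + 1):
            Al = np.linalg.matrix_power(A, l - k)
            Vk += blk(k, l) @ Al + Al.T @ blk(l, k)
        V.append(Vk)
    return V
```

As a quick consistency check on such a sketch, taking $\omega=\omega_o$ gives $T(\omega_o)=K(\omega_o)$ and hence $\Xi(\omega_o)=-K(\omega_o)'\Delta(\omega_o)K(\omega_o)$, which follows directly from the definitions above.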
[^1]: This research was supported by the U.S. Office of Naval Research (ONR) MURI grant N00014-16-1-2710. The authors are with the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA. E-mail: {sayin2,basar1}@illinois.edu
[^2]: In this paper, we use the terms “signaling rule" and “strategy" interchangeably.
[^3]: Even though we consider time-invariant matrices $A$, $B$, and $C$ for notational simplicity, the provided results could be extended to the time-variant case rather routinely. Furthermore, we consider all the random parameters to have zero mean; however, the derivations can be extended to the non-zero-mean case in a straightforward way.
[^4]: For notational simplicity, we consider time-invariant $Q$ and $R$. However, the results provided could be extended to the general time-variant case rather routinely.
[^5]: Note that $\Sigma_{k}^{o} = A\Sigma_{k-1}^oA' + \Sigma_w$.
[^6]: We do not assume that its eigenvalues necessarily lie in $[0,1]$; however, they turn out to be in $[0,1]$.